We present a model and a hardware architecture for computing bottom-up, inherent visual attention on FPGA. The bottom-up attention map is generated from local energy, local orientation maps, and red-green and blue-yellow color opponencies. We describe the simplifications required to parallelize and embed the model without significant loss of accuracy. We also include feedback loops that adapt the feature weights to the target application.
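To make the combination step concrete, the following is a minimal software sketch of fusing the four feature maps (local energy, local orientation, red-green and blue-yellow opponencies) into a single saliency map through an application-dependent weight vector. The function names, the normalization scheme, and the example weights are illustrative assumptions, not the paper's hardware implementation; on the FPGA the weights would be the values adapted by the feedback loops.

```python
import numpy as np

def normalize_map(m):
    """Scale a feature map to [0, 1]; returns zeros for a constant map."""
    m = m.astype(np.float32)
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def combine_saliency(energy, orientation, rg, by, weights=(0.25, 0.25, 0.25, 0.25)):
    """Weighted sum of normalized feature maps into one saliency map.

    `weights` stands in for the application-dependent gains that the
    feedback loops would adapt; equal weights are only a default here.
    """
    maps = [normalize_map(m) for m in (energy, orientation, rg, by)]
    w = np.asarray(weights, dtype=np.float32)
    w = w / w.sum()  # keep the combination normalized
    saliency = sum(wi * mi for wi, mi in zip(w, maps))
    return normalize_map(saliency)

# Illustrative usage with random stand-in feature maps
if __name__ == "__main__":
    h, w = 120, 160
    rng = np.random.default_rng(0)
    feats = [rng.random((h, w)) for _ in range(4)]
    s = combine_saliency(*feats, weights=(0.4, 0.3, 0.2, 0.1))
    print(s.shape, float(s.min()), float(s.max()))
```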