A traditional camera consists of a 2D array of light-sensitive pixels. In contrast, focal-plane sensor-processors (FPSPs) integrate a processor within each pixel on the same chip. FPSPs are also referred to as processor-per-pixel arrays (PPAs) or cellular-processor arrays (CPAs).
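To make the processor-per-pixel model concrete, the sketch below simulates it in NumPy: one instruction sequence is broadcast to the whole array and every pixel element executes it on its own local data, here producing a simple gradient edge map without the image ever leaving the (simulated) sensor. This is a minimal, illustrative simulation under assumed primitives (the shift_east and shift_south helpers are hypothetical), not SCAMP5's actual instruction set.

    # Conceptual NumPy simulation of a pixel-processor array (illustrative only,
    # not the SCAMP5 instruction set). Every "instruction" below acts on all
    # pixels at once, mimicking the SIMD, neighbour-only communication model.
    import numpy as np

    def shift_east(img):
        # Each pixel reads its eastern neighbour's value; border pixels read 0.
        out = np.zeros_like(img)
        out[:, :-1] = img[:, 1:]
        return out

    def shift_south(img):
        # Each pixel reads its southern neighbour's value; border pixels read 0.
        out = np.zeros_like(img)
        out[:-1, :] = img[1:, :]
        return out

    # "Captured" light intensities, one value per pixel-processor.
    frame = np.random.rand(256, 256).astype(np.float32)

    # Broadcast instruction sequence: horizontal and vertical differences are
    # computed simultaneously in all 256 x 256 elements, giving a gradient-
    # magnitude edge map on the focal plane itself.
    gx = frame - shift_east(frame)
    gy = frame - shift_south(frame)
    edges = np.abs(gx) + np.abs(gy)

On a real FPSP the structure is the same, but the arithmetic runs in per-pixel registers and only the final result (or a sparse set of features) needs to be read off the array.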
FPSPs offer unique advantages over traditional image sensors and vision systems by embedding computation directly into the image-sensor array. At the same time, they present distinct challenges that researchers and engineers must overcome. FPSPs are also often compared with event cameras: while both aim to overcome the limitations of traditional frame-based vision sensors, they differ significantly in operation and purpose.
One notable example is SCAMP5, an FPSP designed and developed by Dr. Piotr Dudek and his team at the University of Manchester. Below are some key applications and advantages of FPSPs, demonstrated using the SCAMP5 device.
Visual-inertial odometry running at 300 FPS using the SCAMP5 FPSP
[1] Visual Inertial Odometry using Focal Plane Binary Features (BIT-VIO)
Matthew Lisondra, Junseo Kim, Riku Murai, Kourosh Zareinia, Sajad Saeedi
IEEE International Conference on Robotics and Automation (ICRA)
Yokohama, Japan, May 13-17, 2024
Fast homography and visual odometry running at 300 FPS
[2] High-frame-rate Homography and Visual Odometry by Tracking Binary Features from the Focal Plane
Riku Murai, Sajad Saeedi, Paul H. J. Kelly
Springer, Autonomous Robots, vol. 47, pp. 1579–1592, 2023
High-speed robot navigation with the Cain compiler
[3] Compiling CNNs with Cain: focal-plane processing for robot navigation
Edward Stow, Abrar Ahsan, Yingying Li, Ali Babaei, Riku Murai, Sajad Saeedi, Paul H. J. Kelly
Springer, Autonomous Robots, vol. 46, pp. 893–910, 2022
Another compiler that generates code for running CNN kernels on FPSPs; a conceptual sketch of this shift-and-add style of on-sensor computation follows this entry
[4] Cain: Automatic Code Generation for Simultaneous Convolutional Kernels on Focal-plane Sensor-processors
Edward Stow, Riku Murai, Sajad Saeedi, and Paul H. J. Kelly
Languages and Compilers for Parallel Computing (LCPC)
Stony Brook, NY, USA, Oct 14-16, 2020
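As a rough illustration of the code-generation problem that Cain and AUKE address, the sketch below expresses a small convolution purely in the kind of primitives a focal-plane array gives each pixel: read a neighbour's value, add, and halve. The shift and half helpers and the specific separable blur kernel are assumptions chosen for illustration; the compilers' actual output targets SCAMP5's register-level instruction set rather than NumPy.

    # Conceptual NumPy sketch of convolution on a focal-plane array (assumed
    # primitives, not real compiler output): each pixel can only read its
    # neighbours, so a kernel becomes a sequence of shift, add and halve steps.
    import numpy as np

    def shift(img, dy, dx):
        # Every pixel reads the neighbour at offset (dy, dx); toroidal edges
        # keep the simulation simple.
        return np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)

    def half(img):
        # Divide-by-two, a cheap primitive on analogue register arrays.
        return img * 0.5

    frame = np.random.rand(64, 64).astype(np.float32)

    # A separable 3x3 blur, [1 2 1]/4 in each direction, written only with the
    # primitives above.
    # Horizontal pass: (west + 2*centre + east) / 4
    h = half(half(shift(frame, 0, -1) + shift(frame, 0, 1)) + frame)
    # Vertical pass:   (north + 2*centre + south) / 4
    blurred = half(half(shift(h, -1, 0) + shift(h, 1, 0)) + h)

The part these compilers automate is finding short instruction sequences of this form for arbitrary kernels, and for several kernels at once in the case of Cain.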
High-speed 6-DoF visual odometry using an FPSP
[5] BIT-VO: Visual Odometry at 300 FPS using Binary Features from the Focal Plane
Riku Murai, Sajad Saeedi, and Paul H. J. Kelly
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Las Vegas, NV, USA, Oct 25-29, 2020
High-speed inference on the focal plane
[6] AnalogNet: Convolutional Neural Network Inference on Analog Focal Plane Sensor Processors
Matthew Z. Wong, Benoit Guillard, Riku Murai, Sajad Saeedi, and Paul H. J. Kelly
arXiv preprint, arXiv:2006.01765, 2020
High-speed face recognition with the AUKE compiler
[7] AUKE: Automatic Kernel Code Generation for an Analogue SIMD Focal-Plane Sensor-Processor Array
Thomas Debrunner, Sajad Saeedi, Paul H. J. Kelly
ACM Transactions on Architecture and Code Optimization, vol. 15(4), pp. 1–26, 2019
High-speed, low-power 4-DoF visual odometry using an FPSP
[8] Camera Tracking on Focal-Plane Sensor-Processor Arrays
Thomas Debrunner, Sajad Saeedi, Laurie Bose, Andrew J. Davison, Paul H. J. Kelly
High Performance and Embedded Architecture and Compilation (HiPEAC), Workshop on Programmability and Architectures for Heterogeneous Multicores (MULTIPROG)
Valencia, Spain, January 21-23, 2019
A compiler that generates code for running CNN kernels on FPSPs
[9] AUKE: Automatic Kernel Code Generation for an Analogue SIMD Focal-Plane Sensor-Processor Array
Thomas Debrunner, Sajad Saeedi, Paul H. J. Kelly
High Performance and Embedded Architecture and Compilation (HiPEAC)
Valencia, Spain, January 21-23, 2019
What do we need to run high-speed, low-power SLAM algorithms?
[10] Navigating the Landscape for Real-time Localisation and Mapping for Robotics and Virtual and Augmented Reality
S. Saeedi, B. Bodin, H. Wagstaff, A. Nisbet, L. Nardi, J. Mawer, N. Melot, O. Palomar, E. Vespa, T. Spink, C. Gorgovan, A. Webb, J. Clarkson, E. Tomusk, T. Debrunner, K. Kaszyk, P. Gonzalez-de-Aledo, A. Rodchenko, G. Riley, C. Kotselidis, B. Franke, M. F. P. O’Boyle, A. J. Davison, P. H. J. Kelly, M. Lujan, and S. Furber
Proceedings of the IEEE, vol. 106(11), pp. 2020–2039, 2018