Focal-plane Sensor-processor Arrays (FPSP)

What is a Focal-plane Sensor-processor Array (FPSP)?

A traditional camera consists of a 2D array of light-sensitive pixels. In contrast, FPSPs integrate a processor within each pixel on the same chip. FPSPs are also referred to as processor-per-pixel arrays (PPA) or cellular-processor arrays (CPA).

Why use FPSPs instead of traditional image sensors?

FPSPs offer unique advantages over traditional image sensors and vision systems by embedding computation directly into the image sensor array. Here are the main benefits:

  • Low Latency: Processing happens directly at the pixel level, enabling ultra-fast response times.
  • Low Power Consumption: By eliminating the need to transfer raw data to a central processor, FPSPs significantly reduce power usage.
  • High Parallelism: Each pixel includes its own processor, allowing massively parallel computation ideal for tasks like edge detection, optical flow, and motion tracking.
  • Reduced Bandwidth Requirements: Since processing is done locally, only the results—not the full image—need to be transmitted, minimizing data load.
  • Compact System Design: Combining sensing and processing on a single chip reduces the need for additional hardware and simplifies the overall system architecture.
  • Real-Time Performance: Ideal for time-sensitive applications such as robotics, drones, and mobile devices where low-latency processing is critical.
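The parallelism point is easiest to see with a small sketch. The NumPy code below is purely illustrative (it is not code for any real FPSP): it simulates the kind of per-pixel edge-detection kernel that every pixel's processor would evaluate simultaneously on the focal plane.

```python
import numpy as np

def focal_plane_edge_detect(image):
    """Simulate the per-pixel computation an FPSP performs in parallel:
    every pixel combines its four neighbours at once (a Laplacian-style
    edge kernel). On real hardware each pixel computes its own output in
    a single parallel step; NumPy's vectorised shifts stand in for that."""
    up    = np.roll(image, -1, axis=0)   # each pixel reads the neighbour below
    down  = np.roll(image,  1, axis=0)   # ...the neighbour above
    left  = np.roll(image, -1, axis=1)
    right = np.roll(image,  1, axis=1)
    return 4 * image - (up + down + left + right)

# A flat image produces no response; an isolated bright spot produces a
# strong positive response at the spot and negative responses around it.
img = np.zeros((8, 8), dtype=np.int32)
img[4, 4] = 10
edges = focal_plane_edge_detect(img)
```

Because only the edge map (or an even sparser result derived from it) needs to leave the chip, this is also where the bandwidth savings come from.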

What are the challenges of working with FPSPs?

While FPSPs offer powerful advantages, they also present unique challenges that researchers and engineers must overcome:

  • Limited On-Chip Memory: Each pixel has minimal storage, which restricts the complexity of algorithms that can be executed locally.
  • Programming Complexity: Developing code for massively parallel pixel arrays requires specialized knowledge and tools not common in traditional image processing workflows.
  • Hardware Constraints: Most FPSPs operate with low-resolution grayscale output and limited dynamic range compared to conventional image sensors.
  • Lack of Standardization: Few commercial platforms are available, and tools are often custom or experimental, making development and deployment harder.
  • Data Extraction Bottlenecks: Although local processing reduces bandwidth, extracting intermediate data from the chip can still be challenging and slow.
  • Debugging and Visualization: The parallel nature of computation makes it difficult to monitor or debug pixel-level operations in real-time.
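To make the memory constraint concrete, here is a toy Python model of an FPSP pixel array. The register names and counts are illustrative, not those of any real device: the point is that with only a few registers per pixel, an algorithm must be decomposed into simple register-to-register steps that all pixels execute in lockstep.

```python
import numpy as np

class PixelArray:
    """Toy model of an FPSP: every pixel holds only a few registers
    (here A, B, C), so algorithms are expressed as sequences of
    register-to-register operations executed by all pixels in lockstep.
    Register names/counts are hypothetical, for illustration only."""
    def __init__(self, image):
        z = np.zeros_like(image, dtype=float)
        self.reg = {"A": image.astype(float), "B": z.copy(), "C": z.copy()}

    def add(self, dst, a, b):
        # every pixel sums two of its own registers
        self.reg[dst] = self.reg[a] + self.reg[b]

    def shift_north(self, dst, src):
        # every pixel reads the register of its southern neighbour
        self.reg[dst] = np.roll(self.reg[src], -1, axis=0)

# A two-tap vertical sum, decomposed into register steps all pixels run together:
img = np.arange(16, dtype=float).reshape(4, 4)
p = PixelArray(img)
p.shift_north("B", "A")   # B <- value of the pixel below
p.add("C", "A", "B")      # C <- A + B
```

Anything that will not fit in those few registers has to be staged through the slow off-chip readout path, which is exactly the data-extraction bottleneck noted above.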

How are FPSPs different from event cameras?

While both FPSPs and event cameras aim to overcome limitations of traditional frame-based vision sensors, they differ significantly in operation and purpose:

  • Data Representation:
    • FPSPs: Process full image frames directly on the sensor by performing computations at each pixel.
    • Event Cameras: Only output asynchronous "events" when changes in brightness occur at individual pixels.
  • Output Type:
    • FPSPs: Can output processed results (e.g. edge maps, motion vectors) instead of raw frames.
    • Event Cameras: Produce a continuous stream of timestamped events rather than full frames.
  • Latency and Speed:
    • FPSPs: Achieve low latency by processing data in parallel across the pixel array.
    • Event Cameras: Have extremely low latency, reacting to changes in microseconds due to their asynchronous nature.
  • Suitability:
    • FPSPs: Well-suited for low-power, frame-based processing with programmable in-sensor computing.
    • Event Cameras: Best for high-speed motion detection, low-light conditions, and sparse data processing.
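The data-representation difference can be sketched in a few lines of Python. This toy model (thresholds and the edge kernel are illustrative assumptions, not any device's actual behaviour) contrasts an event camera's sparse change events with an FPSP's dense, in-sensor processed frame:

```python
import numpy as np

def event_camera_output(prev_frame, cur_frame, threshold=2):
    """Event cameras emit sparse (x, y, polarity) events only where the
    brightness change exceeds a threshold; unchanged pixels stay silent."""
    diff = cur_frame.astype(int) - prev_frame.astype(int)
    ys, xs = np.nonzero(np.abs(diff) >= threshold)
    return [(x, y, 1 if diff[y, x] > 0 else -1) for y, x in zip(ys, xs)]

def fpsp_output(cur_frame):
    """An FPSP, by contrast, processes the full frame in-sensor and can
    return a dense processed result -- here a simple thresholded edge map."""
    grad = np.abs(np.diff(cur_frame.astype(int), axis=1, prepend=0))
    return (grad > 2).astype(np.uint8)

prev = np.zeros((4, 4), dtype=np.uint8)
cur = prev.copy()
cur[1, 2] = 5                             # a single pixel brightens
events = event_camera_output(prev, cur)   # sparse: a handful of events
edge_map = fpsp_output(cur)               # dense: a full processed frame
```

In short, event cameras change *what* the sensor reports, while FPSPs change *where* the computation happens.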

Are there any existing hardware implementations of FPSPs?

One notable example is SCAMP5, an FPSP designed and developed by Dr. Piotr Dudek and his team at the University of Manchester. Below are some key applications of FPSPs, demonstrated using the SCAMP5 device.

Visual-inertial odometry running at 300 FPS using the SCAMP5 FPSP

[1] Visual Inertial Odometry using Focal Plane Binary Features (BIT-VIO)
Matthew Lisondra, Junseo Kim, Riku Murai, Kourosh Zareinia, and Sajad Saeedi
IEEE International Conference on Robotics and Automation (ICRA)
Yokohama, Japan, May 13-17, 2024

Fast homography and visual odometry running at 300 FPS

[2] High-frame-rate Homography and Visual Odometry by Tracking Binary Features from the Focal Plane
Riku Murai, Sajad Saeedi, Paul H.J. Kelly
Springer, Autonomous Robots, vol. 47, pp. 1579–1592, 2023


High-speed robot navigation with the Cain compiler

[3] Compiling CNNs with Cain: focal-plane processing for robot navigation
Edward Stow, Abrar Ahsan, Yingying Li, Ali Babaei, Riku Murai, Sajad Saeedi, Paul H.J. Kelly
Springer, Autonomous Robots, vol. 46, pp. 893–910, 2022


Automatic code generation for convolutional kernels on FPSPs with the Cain compiler

[4] Cain: Automatic Code Generation for Simultaneous Convolutional Kernels on Focal-plane Sensor-processors
Edward Stow, Riku Murai, Sajad Saeedi, and Paul H.J. Kelly
Languages and Compilers for Parallel Computing (LCPC)
Stony Brook, NY, USA, Oct 14-16, 2020

High-speed 6-DOF visual odometry using an FPSP

[5] BIT-VO: Visual Odometry at 300 FPS using Binary Features from the Focal Plane
Riku Murai, Sajad Saeedi, and Paul H.J. Kelly
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Las Vegas, NV, USA, Oct 25-29, 2020

High-speed inference on the focal plane

[6] AnalogNet: Convolutional Neural Network Inference on Analog Focal Plane Sensor Processors
Matthew Z. Wong, Benoit Guillard, Riku Murai, Sajad Saeedi, and Paul H.J. Kelly
arXiv:2006.01765

High-speed face recognition with the AUKE compiler

[7] AUKE: Automatic Kernel Code Generation for an Analogue SIMD Focal-Plane Sensor-Processor Array
Thomas Debrunner, Sajad Saeedi, and Paul H.J. Kelly
ACM Transactions on Architecture and Code Optimization, vol. 15(4), pp. 1–26, 2019


High-speed, low-power 4-DOF visual odometry using an FPSP

[8] Camera Tracking on Focal-Plane Sensor-Processor Arrays
Thomas Debrunner, Sajad Saeedi, Laurie Bose, Andrew J. Davison, and Paul H.J. Kelly
High Performance and Embedded Architecture and Compilation (HiPEAC), Workshop on Programmability and Architectures for Heterogeneous Multicores (MULTIPROG)
Valencia, Spain, January 21-23, 2019

A compiler that automatically generates kernel code for FPSPs

[9] AUKE: Automatic Kernel Code Generation for an Analogue SIMD Focal-Plane Sensor-Processor Array
Thomas Debrunner, Sajad Saeedi, and Paul H.J. Kelly
High Performance and Embedded Architecture and Compilation (HiPEAC)
Valencia, Spain, January 21-23, 2019

What do we need to run high-speed, low-power SLAM algorithms?

[10] Navigating the Landscape for Real-time Localisation and Mapping for Robotics and Virtual and Augmented Reality
S. Saeedi, B. Bodin, H. Wagstaff, A. Nisbet, L. Nardi, J. Mawer, N. Melot, O. Palomar, E. Vespa, T. Spink, C. Gorgovan, A. Webb, J. Clarkson, E. Tomusk, T. Debrunner, K. Kaszyk, P. Gonzalez-de-Aledo, A. Rodchenko, G. Riley, C. Kotselidis, B. Franke, M. F. P. O’Boyle, A. J. Davison, P. H. J. Kelly, M. Lujan, and S. Furber
Proceedings of the IEEE, vol. 106(11), pp. 2020–2039, 2018