Dynamic Vision Sensors

Dynamic vision sensors are a fundamentally different approach to machine vision. Conventional video cameras see the world as a series of frames. Successive frames contain enormously redundant information, wasting energy, computational power and time. In addition, each frame imposes the same exposure time on every pixel, making it difficult to process scenes containing both very dark and very bright regions.

The DVS (Dynamic Vision Sensor) solves these problems by using patented technology that works like your own retina. Instead of wastefully sending entire images at fixed frame rates, only the local pixel-level changes caused by movement in a scene are transmitted, at exactly the time they occur. The result is a stream of events at microsecond time resolution, equivalent to or better than conventional high-speed vision sensors running at thousands of frames per second. Power, data storage and computational requirements are also drastically reduced, and sensor dynamic range is increased by orders of magnitude due to the local processing.
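The event-generation principle can be sketched in a few lines of code. The simulation below is illustrative only (a real DVS has no frames at all, and the event format here is a hypothetical (x, y, polarity, timestamp) tuple): each pixel remembers the log-intensity at its last event and fires an ON (+1) or OFF (-1) event whenever the log-intensity changes by more than a threshold.

```python
import math

def dvs_events(frames, timestamps_us, threshold=0.2):
    """Emit DVS-style change events from a sequence of intensity frames.

    Minimal sketch of the DVS principle: each pixel fires an ON (+1) or
    OFF (-1) event whenever its log-intensity moves by `threshold` from
    the level stored at its last event. Frame input is only a stand-in
    for continuous photocurrent; event format is hypothetical.
    """
    events = []
    memory = None  # per-pixel log-intensity at last event
    for frame, t in zip(frames, timestamps_us):
        log_frame = [[math.log(p + 1e-6) for p in row] for row in frame]
        if memory is None:
            memory = [row[:] for row in log_frame]
            continue
        for y, row in enumerate(log_frame):
            for x, v in enumerate(row):
                diff = v - memory[y][x]
                # a large change produces a burst of events, one per
                # threshold crossing, all timestamped at time t
                while abs(diff) >= threshold:
                    polarity = 1 if diff > 0 else -1
                    events.append((x, y, polarity, t))
                    memory[y][x] += polarity * threshold
                    diff = v - memory[y][x]
    return events
```

Because only changing pixels emit events, a static scene produces no output at all, which is where the power and bandwidth savings come from.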

The DVS128 is available with a USB 2.0 interface, or an address-event representation (AER) output (DVS-PAER), or embedded with a microcontroller (eDVS).

DAVIS - 2nd generation Dynamic Vision Sensors

Dynamic and active-pixel vision sensors extend DVS technology by allowing simultaneous capture of standard absolute intensity frames from the same pixels.
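Downstream software typically consumes the two DAVIS outputs as a single time-ordered stream. A minimal sketch of that merge step, assuming hypothetical (timestamp, payload) records rather than any real driver's packet format (tools such as jAER define their own):

```python
import heapq

def merge_davis_streams(events, frames):
    """Merge DVS events and intensity frames into one time-ordered stream.

    Records are hypothetical (timestamp_us, kind, payload) tuples; real
    DAVIS software stacks define their own formats. `heapq.merge` keeps
    the combined stream sorted without buffering either input fully.
    """
    ev = ((t, "event", p) for t, p in events)
    fr = ((t, "frame", f) for t, f in frames)
    return list(heapq.merge(ev, fr, key=lambda r: r[0]))
```

The design point this illustrates: events carry microsecond timing between frames, while the frames periodically supply absolute intensity for the same pixels.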

DAS - Dynamic Audio Sensors

Dynamic audio sensors model the cochlea in sending asynchronous spike-encoded representation of auditory activity. The DAS1 has stereo channels.


PushBot

The PushBot is a robotic platform with an embedded DVS, for research into event-based sensory-motor systems.

The Physiologist's Friend

The Physiologist's Friend Chip electronically emulates cells in the visual system and responds realistically to visual stimuli.

If you are a physiologist working on the properties of the early visual system, you could use this chip as a simple substitute animal. You could use it to train students to run your rig. Or you could use it as a set of realistic and known receptive fields with spiking or analog responses for developing new spike-triggered averaging techniques. Or when debugging a new setup, you could use it simply to make sure everything runs as expected. The idea is that experimenters can test nearly the complete loop, which generally involves stimulus generation, recording, and analysis, without using actual animals. Bugs in large portions of the setup can be found before any animals are used.
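Since the chip's receptive fields are known, they make good ground truth for the analysis side of the loop. As a reminder of what that analysis computes, here is a minimal spike-triggered average for a 1-D stimulus (the function name and window length are illustrative, not part of any chip API):

```python
def spike_triggered_average(stimulus, spike_indices, window=3):
    """Estimate a linear receptive field by spike-triggered averaging.

    For each spike, take the `window` stimulus samples that preceded it
    and average them element-wise across all spikes. With the chip's
    known receptive fields, the result can be compared to ground truth.
    """
    # collect the stimulus window preceding each spike (skip spikes
    # that occur before a full window of stimulus exists)
    windows = [stimulus[i - window:i] for i in spike_indices if i >= window]
    if not windows:
        return []
    n = len(windows)
    return [sum(w[k] for w in windows) / n for k in range(window)]
```

Running this on responses recorded from the chip and comparing the recovered filter with the known receptive field exercises stimulus generation, recording, and analysis in one pass.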

If you teach about how the brain works, the chip is a captivating way to involve the students in demonstrations of a number of basic properties of the visual system: adaptation, complementary (ON/OFF) coding, spikes, complementary push-pull input to cortical cells. Using an overhead projector, you can conduct experiments on this model visual system.