REFERENCE DESIGNS


Veronica

Designed to fit into Virtual Reality Head-Mounted Displays (VR HMDs) and Smart Glasses, Veronica is a complete reference design hosting Inuitive’s NU3000 3D Imaging and Vision processor. Veronica is designed to enable a shorter time to production with respect to manufacturing calibration and alignment procedures.


Twiggy


Optimized for Google Tango-based smartphones and tablets, Twiggy is a small-footprint reference design module that provides Inuitive’s best-of-breed combination of depth sensing and vision technologies.


M3.2H

Tiny depth sensor for short-range gesture recognition.

PRODUCTS

NU3000 3D IMAGING & VISION PROCESSOR

A multi-core signal processor chip that supports 3D Image Processing (3D depth) and Computer Vision (CV) processing.

  • Multi-core CV and depth processor with a dedicated depth engine
  • 40 nm process geometry
  • Connects to 3 cameras
  • Size: 12 x 10 mm
  • Command, control and interfacing of all external elements on the module, including:
    • Two RGB/IR CMOS sensors, aligned in a stereoscopic setup
    • An RGB CMOS sensor, configured to function as a standard webcam
    • Multiple illumination interfaces
    • An external LPDDR memory unit
  • Depth map generation, based on the information generated by the RGB or IR sensors (a disparity-to-depth sketch follows this list)
  • 6DoF pose estimation and feature tracking for SLAM (Simultaneous Localization and Mapping)
  • Real-time processing capable of synchronizing, time-stamping and processing inputs from multiple sensors to serve as a smart sensor hub
  • Main interfaces
    • Host: MIPI CSI-2 Master TX, 2 lanes, or USB 2/3
    • Memory: high-performance LPDDR2
    • External: 3x configurable bidirectional MIPI CSI-2 (Master TX or Slave RX, 2 lanes) for sensor connections, 3x UART, 4x I2C, SPI for flash/other, I2S, GPIO and timers
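
The depth engine produces a per-pixel disparity map from the stereo pair, and disparity relates to metric depth through the standard pinhole-stereo relation Z = f x B / d. The Python sketch below illustrates only that conversion, using assumed focal-length and baseline values; it is not NU3000 firmware or API code.

    import numpy as np

    # Illustrative stereo parameters (assumptions for this sketch, not NU3000 values)
    FOCAL_LENGTH_PX = 700.0   # focal length of the stereo sensors, in pixels
    BASELINE_M = 0.05         # distance between the two sensors, in metres

    def disparity_to_depth(disparity_px):
        """Convert a disparity map (pixels) to metric depth using Z = f * B / d."""
        depth = np.full(disparity_px.shape, np.inf)   # zero disparity -> no match / "infinite" depth
        valid = disparity_px > 0
        depth[valid] = FOCAL_LENGTH_PX * BASELINE_M / disparity_px[valid]
        return depth

    # Tiny synthetic disparity map: larger disparity means a closer object
    disparity = np.array([[ 0.0, 10.0],
                          [20.0, 35.0]])
    print(disparity_to_depth(disparity))   # 10 px -> 3.5 m, 20 px -> 1.75 m, 35 px -> 1.0 m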

POWER EFFICIENT

The unique multi-core architecture implemented in the NU3000, together with the dedicated hardware-accelerated depth-from-stereo core, makes the NU3000 the most power-efficient solution for depth sensing.

3D IMAGE PROCESSING

A single module covering all real-time requirements of mobile, VR and AR systems: depth generation, mapping and localization, finger tracking and gaze tracking.


NU3000 High-Level Block Diagram

NU4000 NEXT GENERATION 3D IMAGING AND VISION PROCESSING WITH DEEP LEARNING

NU4000 is a superior multi-core vision processor that supports 3D Imaging, Deep Learning and Computer Vision processing for Augmented Reality and Virtual Reality, Drones, Robots, and many other applications. This new-generation processor enables high-quality depth sensing, “on-chip SLAM”, Computer Vision and Deep Learning (CNN) capabilities, all in an affordable form factor with minimized power consumption, leading the way to smarter user experiences.

NU4000 brings to the market unmatched imaging, vision and AI computing power that together exceeds a total of 8 Tera OPS (operations per second). It introduces an optimized embedded-vision architecture that effectively combines a set of computing blocks, clearly positioning it as the most powerful vision processor available to date. Among these blocks are:

  • 3 vector cores that provide 500 Giga OPS
  • A dedicated CNN processor that provides more than 2 Tera OPS, enabling large deep neural networks such as VGG16 to run at 40 frames (ROIs) per second at roughly one tenth the power of equivalent GPU, DSP or FPGA implementations (a throughput check follows this list)
  • 3 powerful CPU cores that provide more than 13,000 CoreMark
  • Depth processing engine that enables a throughput of 120 Mp/s and supports multiple simultaneous streams of both stereo and structured light
  • SLAM engine that enables highly accurate key-point extraction at 120 fps from two cameras simultaneously
  • Advanced Time-Warp HW engine that reduces the motion-to-photon latency toward 1 msec for very demanding VR and MR use cases such as two 2K x 2K displays @ 90 fps (a pixel-rate sketch follows this list)
  • More than 3MB of on-chip SRAM for servicing the vision cores
  • High throughput LPDDR4 interface for reducing the external memory access bottlenecks
  • Connectivity to 6 cameras and 2 displays
  • Advanced low power 12nm silicon process
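
As a rough, illustrative sanity check of the VGG16 rate quoted above: using the commonly cited figure of roughly 15.5 GMACs per 224 x 224 VGG16 inference (an assumption, not a vendor number), 40 inferences per second fits comfortably within a budget of more than 2 Tera OPS.

    # Back-of-the-envelope throughput check (assumed, commonly cited figures; not vendor data)
    VGG16_MACS_PER_FRAME = 15.5e9   # ~15.5 GMACs for one 224x224 VGG16 inference
    OPS_PER_MAC = 2                 # one multiply + one accumulate
    TARGET_FPS = 40

    required_tops = VGG16_MACS_PER_FRAME * OPS_PER_MAC * TARGET_FPS / 1e12
    print(f"required: {required_tops:.2f} TOPS")   # ~1.24 TOPS, within a >2 Tera OPS budget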
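
To put the 120 Mp/s depth throughput and the dual 2K x 2K @ 90 fps display case into perspective, the short calculation below converts resolutions and frame rates into pixel rates; the VGA-at-30-fps stream and the 2048-pixel reading of “2K” are assumptions chosen for illustration.

    # Illustrative pixel-rate budgets (assumed resolutions and frame rates, not vendor data)
    def pixel_rate_mps(width, height, fps):
        """Pixel throughput in megapixels per second."""
        return width * height * fps / 1e6

    vga_stream = pixel_rate_mps(640, 480, 30)          # ~9.2 Mp/s per VGA stream at 30 fps
    print(120 / vga_stream)                            # ~13 such streams fit in a 120 Mp/s depth budget

    display_load = 2 * pixel_rate_mps(2048, 2048, 90)  # two 2K x 2K panels at 90 fps
    print(display_load)                                # ~755 Mp/s that the time-warp engine must feed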

HIGH PERFORMANCE

Powerful processor cores, backed by hardware accelerators for depth, SLAM and Computer Vision processing, reduce latency down to 1 msec, leading to an enhanced VR/AR experience without nausea.

DEEP LEARNING ENGINE

Convolutional Neural Network Engine for Deep Learning enables object detection, classification and localization, scene recognition and more.