Inuitive designs chips optimized for computer vision in mobile devices. The flagship product of this technology is the NU3000, a dedicated signal processor architected for depth sensing, 3D image processing and computer vision: a powerful, multipurpose 3D image signal processor SoC.

Inuitive enables devices to imitate the real world, mimic human capabilities and interact naturally.

VIRTUAL REALITY AND AUGMENTED REALITY

ENABLING NEXT-GENERATION VIRTUAL AND AUGMENTED REALITY APPLICATIONS.

Indoor navigation, positioning, mapping and reconstruction using:
• 3D Depth sensing
• Computer vision
• Deep learning
 
DESIGNED TO CREATE UNMATCHED USER EXPERIENCES USING DEPTH SENSING, COMPUTER VISION AND DEEP LEARNING
 
Virtual Reality (VR) and Augmented Reality (AR) will reshape the way we play, learn and shop.
AR and VR provide out-of-this-world environments in which we work and socialize.
Future smart devices such as AR/VR glasses, smartphones, robots and drones require new technologies to support the next generation of virtual applications. INUITIVE’s mission is to advance these possibilities by integrating best-in-class computer vision and 3D sensing technologies with deep learning capabilities.
 


MOBILE

BRING AUGMENTED REALITY TO YOUR SMARTPHONES.

Indoor navigation, positioning, mapping and reconstruction using:
• 3D Depth sensing
• Computer vision
• Deep learning
 
INNOVATIVE VISUAL EXPERIENCES ON YOUR MOBILE DEVICES
 
INUITIVE’s depth sensing and computer vision technology tracks the 3D motion of an object while simultaneously creating a map of its environment. By integrating external vision and orientation sensors to collect real-time data, developers can quickly create ingenious applications for real-world 3D gaming, indoor navigation, and both virtual and augmented reality experiences. Using INUITIVE’s technology, people will be able to interact with their environments, positioning themselves in the scene with enhanced object, body, facial and gesture recognition capabilities.
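As a minimal illustration of the mapping step, the sketch below back-projects a depth frame into a 3D point cloud using the standard pinhole camera model. The intrinsics (fx, fy, cx, cy) are illustrative values, not NU3000 specifications.

    import numpy as np

    def depth_to_point_cloud(depth, fx, fy, cx, cy):
        """Convert a depth image (meters) into an N x 3 array of 3D points."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))  # per-pixel coordinates
        x = (u - cx) * depth / fx   # pinhole model: X = (u - cx) * Z / fx
        y = (v - cy) * depth / fy
        points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
        return points[points[:, 2] > 0]  # drop pixels with no depth reading

    # Example: a synthetic 480 x 640 frame, every pixel 2 m away
    cloud = depth_to_point_cloud(np.full((480, 640), 2.0), 525.0, 525.0, 320.0, 240.0)

Each such cloud, placed in a common coordinate frame using the tracked pose, contributes one piece of the environment map.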
 
KEY ELEMENTS: POWER CONSUMPTION, SIZE AND COST

The primary challenge all mobile devices face is extending battery life. As the number of energy-sapping applications running on a device grows, developers look to reduce power consumption wherever they can. The key parameters for depth sensor performance are resolution, sensitivity, frame rate, field of view, power consumption and physical size.
The following INUITIVE capabilities influence the quality of the depth that can be generated while reducing power consumption:
• Mechanical design optimized for smartphones, phablets and tablets
• Advanced active stereo techniques
• A powerful imaging processor that enables real-time image enhancement and photography effects
• Robust ambient sensitivity that optimizes power usage indoors versus outdoors and across lighting conditions (sketched below)
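As an illustration of that ambient-aware power management, the hypothetical sketch below scales the IR projector duty cycle and frame rate with measured ambient light. The thresholds and levels are invented for illustration; they are not NU3000 parameters.

    def sensing_profile(ambient_lux: float) -> dict:
        """Pick a power profile from measured ambient light (lux)."""
        if ambient_lux < 100:        # dim indoor scene: little IR interference
            return {"projector_duty": 0.3, "fps": 15}
        elif ambient_lux < 10_000:   # bright indoor / outdoor shade
            return {"projector_duty": 0.6, "fps": 30}
        else:                        # direct sunlight: strong IR background
            return {"projector_duty": 1.0, "fps": 30}

    print(sensing_profile(50))  # -> {'projector_duty': 0.3, 'fps': 15}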


ROBOTICS AND DRONES

ENABLING ROBOTS AND DRONES TO SEE IN 3D AND NAVIGATE AUTONOMOUSLY.

To avoid physical obstacles, both robots and drones need depth sensing capabilities: a basic “sense” of what is around the machine. These robots and drones will be seamlessly integrated into our everyday lives. Historically, robots have been limited to industrial settings, but thanks to the same technology powering our smartphones, they are poised to evolve into intelligent machines capable of perceiving their environments, understanding our needs and helping us in unprecedented ways. Sensor fusion, computer vision and machine learning will enable smarter robots and drones to “see” in 3D, sense their surroundings, avoid collisions, and autonomously navigate their environments.
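A minimal sketch of the collision-avoidance idea, assuming a depth frame in meters with zeros for missing readings: flag an obstacle when enough pixels in the central region of the frame fall inside a safety distance. The thresholds are illustrative assumptions.

    import numpy as np

    def obstacle_ahead(depth, safety_m=0.5, min_pixels=500):
        """Return True if the central region of the depth frame is blocked."""
        h, w = depth.shape
        center = depth[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]
        valid = center[center > 0]              # ignore pixels with no reading
        return np.count_nonzero(valid < safety_m) >= min_pixels

A real navigation stack would run this check per frame and feed the result to the path planner rather than simply stopping.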

Robotics: To navigate autonomously, a robot has to accurately estimate its position and orientation while moving through an unknown environment.
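The sketch below shows the simplest form of this estimate, 2D dead reckoning: integrating per-frame motion increments into a global pose. Real systems fuse visual odometry with inertial data and correct drift against the map; this is only an illustration of the pose update itself.

    import math

    def integrate(pose, d_forward, d_theta):
        """pose = (x, y, theta); apply a body-frame motion increment."""
        x, y, theta = pose
        x += d_forward * math.cos(theta)
        y += d_forward * math.sin(theta)
        return (x, y, theta + d_theta)

    pose = (0.0, 0.0, 0.0)
    for step in [(1.0, 0.0), (1.0, math.pi / 2), (1.0, 0.0)]:
        pose = integrate(pose, *step)
    print(pose)  # position and heading after three motion increments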

Drones: Combining VR and drones makes many things possible. For example, educators can let students hover over the Great Wall of China as if in a floating bubble and look in any direction they want. Photographers can pilot a drone over the Grand Canyon at sunset to capture the perfect magazine cover shot. Drone racing enthusiasts can sit in the winning pilot’s seat. And oil refinery inspectors can check every nook and cranny of a fractionating tower without leaving their office desks.


Vertical Applications

Automotive
In automotive, depth sensors can and will be used to monitor the driver for drowsiness. A depth sensor, combined with an image processor, can collect and track NUI information about the driver (head position, eye tracking, blinking), and from that information the driver’s alertness can be deduced. The vision for automotive is a sensor placed in the cabin, facing the driver (and possibly the other passengers), whose primary role is to assess the driver’s state of alertness and detect lapses of focus.
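One widely used drowsiness cue such a system could compute is PERCLOS, the fraction of time the eyes are mostly closed over a sliding window. The sketch below assumes an eye-openness signal from an upstream eye tracker; the window size and thresholds are illustrative.

    from collections import deque

    class PerclosMonitor:
        def __init__(self, window=900, closed_at=0.2, alert_level=0.15):
            self.samples = deque(maxlen=window)  # e.g. 30 s at 30 fps
            self.closed_at = closed_at           # openness <= 20% counts as closed
            self.alert_level = alert_level

        def update(self, eye_openness):
            """Feed one openness sample in [0, 1]; return True if drowsy."""
            self.samples.append(eye_openness <= self.closed_at)
            perclos = sum(self.samples) / len(self.samples)
            return perclos >= self.alert_level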

3D Scanning

A 3D scanner is a device capable of scanning an object or a scene and reconstructing it as a 3D model file. In many ways, 3D scanners are complementary to the rising trend of 3D printers. Using a 3D scanner, one can record the geometry of an object and then have it re-created on a 3D printer.

3D scanners are all about accuracy. The output of a 3D scanner is a 3D model file. To produce an accurate model file, the scanner needs high-quality depth images as raw input. Consequently, 3D scanners require very high-precision depth sensors as their basic building blocks.
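The sketch below illustrates the registration step at the heart of scanning: transforming depth captures from several viewpoints into one common frame using known camera poses (rotation R and translation t per view). Real scanners estimate these poses with alignment algorithms such as ICP; here they are given for illustration.

    import numpy as np

    def merge_views(views):
        """views: list of (points Nx3, R 3x3, t 3-vector) -> merged cloud."""
        return np.vstack([pts @ R.T + t for pts, R, t in views])

    # Two captures of the same points, the second taken 0.1 m to the right
    pts = np.random.rand(100, 3)
    model = merge_views([(pts, np.eye(3), np.zeros(3)),
                         (pts, np.eye(3), np.array([0.1, 0.0, 0.0]))])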
3D scanners (together with 3D printers) will soon be consumer products, much like cameras and printers.


Home Monitoring and Elderly Care

Depth sensors can be used to track people moving around the house. By equipping a living environment with several sensors, one can quickly and economically create a smart grid that maintains information on the whereabouts of the residents. Moreover, by adding a skeletal extraction layer and further image processing algorithms, emergency situations can be detected and alerts raised. This technology can be applied to the autonomous care of elderly people living alone.
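As one illustration of an emergency cue from the skeletal layer, the sketch below raises an alert when the tracked head joint drops quickly and stays near the floor. Joint heights are assumed to come from an upstream skeleton extractor; all thresholds are invented for illustration.

    def fall_detected(head_heights_m, fps=30, drop_m=0.8, window_s=1.0, floor_m=0.4):
        """head_heights_m: recent head heights in meters, newest last."""
        n = int(fps * window_s)
        if len(head_heights_m) < n:
            return False
        recent = head_heights_m[-n:]
        dropped = recent[0] - recent[-1] >= drop_m  # fast downward motion
        on_floor = recent[-1] <= floor_m            # ended near the floor
        return dropped and on_floor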