OFILM co-developed the Mi-Sense depth vision module for CyberOne "Tie Da", Xiaomi's first full-size humanoid robot

At its autumn new product launch event, Xiaomi officially unveiled its first full-size humanoid bionic robot, CyberOne. According to OFILM, the Mi-Sense depth vision module carried by CyberOne was designed by Xiaomi and jointly developed with OFILM. Combined with AI interaction algorithms, it not only provides complete three-dimensional spatial perception, but also supports person identification, gesture recognition, and facial expression recognition, so CyberOne can not only see but also understand.

Xiaomi's full-size humanoid bionic robot CyberOne is 177 cm tall, weighs 52 kg, and is nicknamed "Tie Da". According to Xiaomi, "Tie Da" combines a variety of technologies: it can perceive 45 categories of human emotional semantics and distinguish 85 types of environmental semantics; it is equipped with Xiaomi's self-developed whole-body control algorithm, which coordinates the movement of 21 joints; its Mi-Sense visual space system can reconstruct the real world in 3D; and its body uses five kinds of joint drives with a peak torque of 300 Nm. Xiaomi said that CyberOne's intelligence and mechanical capabilities were developed entirely by Xiaomi Robotics Lab, backed by extensive software, hardware, and algorithm development work.


Recently, OFILM released its self-developed machine vision depth camera module, which mainly consists of an iToF module, an RGB module, and an optional IMU module. The product achieves accuracy of up to 1% within its measurement range and covers a wide range of application scenarios. It has passed third-party laboratory IEC 60825-1 certification, meeting the Class 1 laser safety standard. iToF is one of the mainstream 3D visual perception technologies, and OFILM's 3D intelligent depth camera is built on iToF+RGB depth measurement technology.
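
As background on how iToF ranging works (the article does not disclose the module's modulation parameters), here is a minimal sketch of the indirect time-of-flight relation between measured phase shift and distance, using an assumed 30 MHz modulation frequency purely for illustration:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_distance(phase_shift_rad: float, f_mod_hz: float) -> float:
    """Convert the measured phase shift of the reflected IR signal to distance.

    Indirect ToF measures the phase delay between the emitted and received
    modulated light: distance = c * delta_phi / (4 * pi * f_mod).
    """
    return C * phase_shift_rad / (4 * math.pi * f_mod_hz)

def unambiguous_range(f_mod_hz: float) -> float:
    """Maximum distance before the phase wraps around (range aliasing)."""
    return C / (2 * f_mod_hz)

# Assumed 30 MHz modulation frequency (illustrative, not a published spec):
f_mod = 30e6
print(unambiguous_range(f_mod))           # ~5.0 m, in line with a 0.1-5 m indoor range
print(itof_distance(math.pi / 2, f_mod))  # ~1.25 m for a quarter-cycle phase shift
```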

As the eyes of machines, machine vision is one of the best ways for smart devices to understand the world and perceive their environment. A depth camera measures the three-dimensional coordinates of each point in its field of view, allowing a computer to obtain 3D data of the space, reconstruct the complete three-dimensional scene, and perform various kinds of intelligent 3D positioning.
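
To illustrate what "obtaining 3D data of the space" means in practice, the sketch below back-projects a depth image into a point cloud using a standard pinhole camera model; the intrinsic parameters are assumed values for illustration, not published specifications of the module:

```python
import numpy as np

def depth_to_point_cloud(depth_m: np.ndarray, fx: float, fy: float,
                         cx: float, cy: float) -> np.ndarray:
    """Back-project a depth image (in meters) into an Nx3 point cloud.

    Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no valid depth

# Hypothetical 640x480 frame with assumed intrinsics (not published values):
depth = np.full((480, 640), 2.0)     # a flat wall 2 m away
cloud = depth_to_point_cloud(depth, fx=410.0, fy=408.0, cx=320.0, cy=240.0)
print(cloud.shape)                   # (307200, 3)
```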

The 3D smart depth camera released by OFILM is mainly composed of three parts: an iToF module, an RGB module, and an optional IMU module.

In terms of external interfaces, the smart depth camera provides a USB 3.0 Type-C port for transmitting depth, IR, RGB, and IMU data; the bandwidth of USB 3.0 can in theory also support point cloud transmission, giving the interface strong functional scalability. In addition, to allow synchronization between the depth camera and a user's other sensors, the camera retains a synchronization signal interface as well as some GPIO and power supply interfaces, making multi-sensor integration easier.


OFILM introduced that the intelligent depth camera uses a powerful SoC platform integrating an ISP, ARM CPU, embedded GPU, NPU, and other IP blocks. In the future, OFILM may open up this computing power to users so that they can make better use of the depth camera's on-board processing. At present, depth decoding occupies part of the GPU and CPU computing power, while the NPU's computing power could be fully released later.

The iToF-RGBD depth resolution supports up to VGA (640x480) at a frame rate of 30 fps; a higher frame rate mode supporting binning will be opened up later.
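
The article does not detail the binning mode; as a rough illustration of the idea, the sketch below performs 2x2 software binning of a depth frame, trading resolution for lower per-pixel noise (on the actual sensor this would typically be done on-chip):

```python
import numpy as np

def bin_2x2(depth: np.ndarray) -> np.ndarray:
    """Average the valid (non-zero) depth values in each 2x2 block.

    Halves the resolution (e.g. 640x480 -> 320x240) while averaging out
    per-pixel noise; zeros (invalid depth) are excluded from the average.
    """
    h, w = depth.shape
    blocks = (depth.reshape(h // 2, 2, w // 2, 2)
                   .transpose(0, 2, 1, 3)
                   .reshape(h // 2, w // 2, 4))
    valid = blocks > 0
    sums = (blocks * valid).sum(axis=-1)
    counts = valid.sum(axis=-1)
    out = np.zeros_like(sums)
    np.divide(sums, counts, out=out, where=counts > 0)
    return out

binned = bin_2x2(np.random.uniform(0.1, 5.0, (480, 640)))
print(binned.shape)  # (240, 320)
```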

Meanwhile, the depth FoV is 76° horizontally and 61° vertically, and the laser wavelength is 940 nm. The RGB camera uses a 1.3-megapixel global-shutter sensor with a FoV of 93° horizontally and 82° vertically.
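
From the published depth FoV and the VGA depth resolution, one can estimate the equivalent pinhole focal length in pixels; the numbers below are derived estimates, not OFILM specifications:

```python
import math

def focal_px(fov_deg: float, pixels: int) -> float:
    """Pinhole focal length in pixels from a full field of view and image width/height."""
    return (pixels / 2) / math.tan(math.radians(fov_deg) / 2)

fx = focal_px(76.0, 640)  # horizontal: ~410 px
fy = focal_px(61.0, 480)  # vertical:   ~407 px
print(round(fx, 1), round(fy, 1))
```

The near-equal horizontal and vertical estimates (about 410 px and 407 px) suggest roughly square pixels on the depth sensor.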

Regarding depth measurement performance, the camera currently supports an indoor measurement range of 0.1 to 5 meters (test conditions: indoor illumination not exceeding 10,000 lux, 90%-reflectivity white wall) and an outdoor measurement range of 0.1 to 3 meters (test conditions: outdoor illumination not exceeding 50,000 lux, 90%-reflectivity target), with accuracy of up to 1% within the measurement range.
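
As a quick sanity check of what that figure implies, assuming the 1% accuracy is relative to the measured distance (the usual convention for ToF cameras):

```python
# Worst-case absolute error at several distances, assuming 1% of distance:
for d in (0.1, 1.0, 3.0, 5.0):
    print(f"{d:.1f} m -> about +/- {d * 0.01 * 100:.1f} cm")
# 0.1 m -> 0.1 cm, 1.0 m -> 1.0 cm, 3.0 m -> 3.0 cm, 5.0 m -> 5.0 cm
```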

In the field of face recognition, it is mainly used for face-swiping payment, identity authentication, access control, and attendance; in robotics, it is mainly used for obstacle avoidance, SLAM, and AI recognition, and can also be used in scenarios such as human body modeling; in 3D scanning, it is mainly used for indoor scanning and modeling; in industry, it is mainly used for parts scanning, inspection, and sorting in industrial automation. It is also relevant to fields such as VR/AR, access control, and intelligent transportation.
