Baidu details its new-generation self-developed processor, Kunlun Core 2

Baidu Apollo and Kunlun Core Technology held the Apollo Day technology open day today, introducing a new generation of self-developed general-purpose cloud AI chip: the second-generation Kunlun Core.

According to reports, the chip is built on a 7nm process and equipped with GDDR6 high-speed memory offering up to 512 GB/s of bandwidth. It adopts the new-generation Kunlun Core XPU-R architecture, which significantly improves versatility and performance, and delivers 256 TOPS@INT8 and 128 TFLOPS@FP16 of compute.
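Taken together, the published bandwidth and compute figures determine the chip's roofline "balance point": how many operations a kernel must perform per byte fetched from memory before it becomes compute-bound rather than memory-bound. The following sketch works through that arithmetic using only the numbers quoted above; the variable names are ours, not Baidu's.

```python
# Figures quoted in the article for Kunlun Core 2 (illustrative arithmetic only).
BANDWIDTH_BPS = 512e9   # GDDR6 memory bandwidth, bytes/second
INT8_OPS      = 256e12  # peak INT8 throughput, ops/second
FP16_FLOPS    = 128e12  # peak FP16 throughput, FLOPs/second

# Roofline balance point: operations per byte moved from memory at which
# a kernel transitions from memory-bound to compute-bound.
int8_balance = INT8_OPS / BANDWIDTH_BPS    # 500.0 INT8 ops per byte
fp16_balance = FP16_FLOPS / BANDWIDTH_BPS  # 250.0 FP16 FLOPs per byte

print(f"INT8 balance point: {int8_balance:.0f} ops/byte")
print(f"FP16 balance point: {fp16_balance:.0f} FLOPs/byte")
```

Kernels with lower arithmetic intensity than these thresholds (e.g. memory-heavy perception post-processing) would be limited by the 512 GB/s of bandwidth rather than the peak TOPS figure.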

Kunlun Core Technology CEO Ouyang Jian revealed that Baidu's self-developed AI chip, Kunlun Core 2, has completed end-to-end adaptation for autonomous-driving scenarios.

The chip also supports mainstream deep learning frameworks such as TensorFlow, PyTorch, and PaddlePaddle. In performance tests on typical perception models, Kunlun Core 2 delivered roughly twice the performance of mainstream industry solutions.

Key features: high performance, strong programmability, good portability, and stability.

Baidu Apollo also announced its roadmap. The company believes 2025 will be a critical year for intelligent driving: by then, L2+ products will cross the chasm into mainstream adoption, and the L4 business model will have been initially proven out.

Baidu Apollo also announced its Wenxin (ERNIE) large-model "Perception 2.0" architecture, which supports multi-modal pre-fusion and blind-spot compensation using fisheye cameras. According to officials, large models have become the core driving force behind improvements in autonomous-driving capability.

According to reports, Apollo Lite is one of only three visual-perception systems of its kind in the world, and the only perception system in China that supports "pure vision" autonomous driving on urban roads.

The iterative self-training scheme makes full use of unlabeled data, helping smaller models train more efficiently.
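The article does not describe Baidu's scheme in detail, but iterative self-training generally means pseudo-labeling: a model trained on labeled data labels the unlabeled pool, confident predictions are absorbed into the training set, and the model is retrained. Below is a minimal sketch of that loop, with a toy nearest-centroid classifier standing in for the real perception network; the function names and the confidence threshold are illustrative assumptions.

```python
def fit_centroids(points, labels):
    # Toy "model": one centroid (mean feature value) per class.
    # A stand-in for training a real perception network.
    sums, counts = {}, {}
    for x, y in zip(points, labels):
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {c: sums[c] / counts[c] for c in sums}

def predict(centroids, x):
    # Returns (label, confidence); confidence is the margin between
    # the two nearest centroids.
    dists = sorted((abs(x - c), label) for label, c in centroids.items())
    margin = dists[1][0] - dists[0][0] if len(dists) > 1 else float("inf")
    return dists[0][1], margin

def self_train(labeled, unlabeled, rounds=3, thresh=1.0):
    # labeled: list of (x, label); unlabeled: list of x.
    labeled = list(labeled)
    pool = list(unlabeled)
    for _ in range(rounds):
        model = fit_centroids([x for x, _ in labeled],
                              [y for _, y in labeled])
        confident, rest = [], []
        for x in pool:
            label, margin = predict(model, x)
            (confident if margin >= thresh else rest).append((x, label))
        if not confident:
            break
        labeled += confident            # absorb confident pseudo-labels
        pool = [x for x, _ in rest]     # keep uncertain samples for later rounds
    return fit_centroids([x for x, _ in labeled],
                         [y for _, y in labeled])

# Usage: two labeled clusters plus an unlabeled pool.
model = self_train([(0.0, "near"), (0.5, "near"), (10.0, "far"), (10.5, "far")],
                   [0.2, 9.8, 10.2, 0.1])
print(predict(model, 0.3)[0])  # classified using pseudo-label-augmented centroids
```

The confidence gate (`thresh`) is what keeps the loop from amplifying its own mistakes; in practice it corresponds to whatever uncertainty measure the production model exposes.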

According to reports, the new model will be applied in the following areas:

  • Long-range visual 3D perception
  • Multi-modal end-to-end perception
  • Improved point-cloud perception
  • Long-tail data mining based on an image-text weakly supervised pre-training model
