ABI Research predicts more than 2 billion devices with dedicated chipsets for ambient audio or natural language processing

Natural language processing and ambient sound processing have until now been treated as largely cloud-based technologies, limiting their adoption in markets where security, privacy, and service continuity are critical requirements for deployment. However, advances in deep learning model compression and edge AI chipsets are already allowing these workloads to run on end devices. Research firm ABI Research estimates that more than 2 billion end devices with dedicated chipsets for ambient sound or natural language processing will ship by 2026.

"Natural language and ambient sound processing will follow the same evolutionary path from the cloud to the edge as machine vision. Thanks to efficient hardware and model compression techniques, this technology is now less resource-intensive and can be fully integrated into end devices," said Lian Jye Su, Principal Artificial Intelligence and Machine Learning Analyst at ABI Research. "At the moment, most implementations focus on simple tasks such as wake word detection, scene recognition, and voice biometrics. Going forward, however, AI-enabled devices will gain more sophisticated audio and voice processing applications."
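To make the "model compression" mentioned above a little more concrete, the sketch below shows one common step, post-training quantization with TensorFlow Lite, applied to a hypothetical keyword-spotting (wake word) model. The model file and output paths are placeholder assumptions for illustration; neither ABI Research nor the vendors named here prescribe this particular toolchain.

```python
import tensorflow as tf

# Placeholder: any trained Keras audio model would do here; the file name
# "keyword_spotting.h5" is a hypothetical example, not a specific vendor model.
model = tf.keras.models.load_model("keyword_spotting.h5")

# Post-training dynamic-range quantization: weights are stored as 8-bit
# integers, typically shrinking the model roughly 4x with little accuracy loss.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# The resulting flatbuffer can be bundled with a mobile or embedded app
# and executed entirely on-device, with no network connection required.
with open("keyword_spotting.tflite", "wb") as f:
    f.write(tflite_model)
```

On a phone or embedded board, the quantized file is then loaded by a local runtime such as tf.lite.Interpreter, which is what allows a wake word to be detected without any round trip to the cloud.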

The popularity of Alexa, Google Assistant, Siri, and various chatbots in the corporate sector has led to a boom in voice user interfaces. In June 2021, Apple announced that Siri would handle certain requests and actions offline. This implementation frees Siri from requiring a constant internet connection, a significant improvement for iPhone users. ABI Research expects Apple's competitors, especially Google, to follow suit and offer similar support for the Android operating system, which currently powers billions of consumer devices.
