Gartner Blog Network


Qualcomm’s Connectivity and Processing Solutions Accelerate Design of AI-Enabled IoT Devices

by Werner Goertz  |  August 3, 2018

As AI moves to the edge and into devices, product OEMs have more and more component and architecture options. Last week, we learned about Google's strategy to own end-to-end AI architectures(1). This week, I experienced Qualcomm's view of edge AI computing:

Qualcomm brings two competitive strengths and value propositions to IoT endpoints:
a) connectivity
b) computing
This presents opportunities for device vendors and ecosystems to accelerate their time to market and to build efficient, interoperable AI products from Qualcomm's chipsets, modules, and reference designs. In this context, it is important to understand that, unlike other chip providers, Qualcomm now serves a fragmented customer base: its customer portfolio has grown to over 9,000 customers, across segments including home automation, home entertainment, retail/POS, signage, street lighting, robotics/drones, connected cameras, smart speakers, and others. For IoT device vendors, this means Qualcomm has the infrastructure to sell to and support smaller OEMs.

Unlike Google's strategy of providing edge TPUs as accelerators, Qualcomm delivers an integrated chip/chipset and developer platform that lets the OEM optimize across various compute resources: if you want to run an object recognition workload (just as an example) at the edge, you can use Qualcomm's "SNPE" (pronounced "snappy") SDK to test which functions should run on the CPU, DSP, GPU, etc., and balance system performance against power, performance, and area (PPA) tradeoffs. Qualcomm's "kitchen sink" approach to edge AI devices (CPUs, DSPs, and GPUs on a single chip) allows developers to optimize workloads in a custom manner. Although Qualcomm's solution raises the question of BOM affordability, I expect them to come up with stripped-down, affordable solutions targeted at commodity endpoints over the next year.
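The per-runtime profiling loop this enables can be sketched generically in Python. This is a minimal sketch, not the SNPE SDK's actual API: the stub functions below are hypothetical stand-ins for running one inference pass on each compute unit, where the real SDK would load a compiled model and select a target runtime.

```python
import time

# Hypothetical stand-ins for one inference pass per compute unit; with
# the real SNPE SDK you would load a converted model and pick a runtime
# (CPU, GPU, DSP) rather than call plain Python functions.
def run_on_cpu(frame):
    return sum(x * 2 for x in frame)

def run_on_gpu(frame):
    return sum(x * 2 for x in frame)

def run_on_dsp(frame):
    return sum(x * 2 for x in frame)

RUNTIMES = {"CPU": run_on_cpu, "GPU": run_on_gpu, "DSP": run_on_dsp}

def profile_runtimes(frame, iterations=100):
    """Time the same workload on each runtime and report mean latency."""
    results = {}
    for name, run in RUNTIMES.items():
        start = time.perf_counter()
        for _ in range(iterations):
            run(frame)
        results[name] = (time.perf_counter() - start) / iterations
    return results

frame = list(range(1024))  # dummy input tensor
latencies = profile_runtimes(frame)
best = min(latencies, key=latencies.get)
print(f"fastest runtime for this workload: {best}")
```

In practice a developer would weigh the measured latency against power draw and silicon area per runtime, which is exactly the PPA tradeoff described above.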

Similar to AWS' DeepLens, Qualcomm has developed (together with Microsoft) a video capture device that runs local ML models and inferencing engines. It's called the Vision AI Developer Kit(2). So, if you're developing a computer vision(3) device or service (a facial recognition device, for example), you can develop and train your ML models, deploy them to the edge, and optimize inference across the available compute resources.
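The capture-infer-act loop such a device runs can be sketched generically. This is a hedged illustration only: the stub "model" and synthetic frames below are hypothetical, standing in for a trained vision model running on the kit's local inference engine.

```python
import random

random.seed(0)  # deterministic synthetic input for the sketch

def capture_frame():
    # Hypothetical stand-in for grabbing a camera frame; returns a
    # flat list of pixel intensities in [0, 1).
    return [random.random() for _ in range(64)]

def classify(frame):
    # Stub "model": on a real device this would be a trained vision
    # model (e.g. a face detector) running on the local engine.
    score = sum(frame) / len(frame)
    return ("person", score) if score > 0.5 else ("background", score)

detections = []
for _ in range(10):          # process a short burst of frames
    label, score = classify(capture_frame())
    if label == "person":    # act locally, with no cloud round-trip
        detections.append(score)

print(f"{len(detections)} person detections in 10 frames")
```

The point of the loop is that inference happens on-device, so the decision ("act on a detection") does not depend on network latency or connectivity.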

(1) https://blogs.gartner.com/werner-goertz/2018/07/25/google-poised-global-iot-endpoints/

(2) https://www.visionaidevkit.com/

(3) See: Market Insight: Computer Vision in Devices Enhances User Experience – ID G00355741


Werner Goertz
Research Director
1 year at Gartner
21 years IT Industry

Werner Goertz is a Research Director within Gartner's Personal Technologies team, where he covers personal devices (smartphones, PCs/Ultrabooks, tablets/ultramobiles and wearables) and IoT. A special emphasis of his research lies in the Human Machine Interface (HMI) and multimodal I/O technologies: voice/speech processing and recognition, facial recognition and eye tracking, biometrics and motion/gesture control.





Comments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management. Readers may copy and redistribute blog postings on other blogs, or otherwise for private, non-commercial or journalistic purposes, with attribution to Gartner. This content may not be used for any other purposes in any other formats or media. The content on this blog is provided on an "as-is" basis. Gartner shall not be liable for any damages whatsoever arising out of the content or use of this blog.