Gartner Blog Network


Is Google poised to own global IoT endpoints?

by Werner Goertz  |  July 25, 2018

I have argued[1] that today's compute platforms (x86, ARM) were designed for the requirements of historical workload categories (PC computing and mobile device processing, respectively). Neither the legacy ISAs (Instruction Set Architectures) nor the corresponding microarchitectures (Intel Atom, ARM Cortex, etc.) were developed with AI and ML workloads in mind. AI processing today, whether based on gate arrays, GPUs, DSPs, or even legacy CPU pipeline architectures (Harvard, von Neumann), remains suboptimal because none of these were optimized for AI/ML workloads, especially for inferencing at the edge or in the endpoint device. Disruptive ISAs, most notably RISC-V, are nascent and unproven at scale.

Google took a revolutionary approach with its Cloud TPU (Tensor Processing Unit) to disrupt cloud server processing for AI workloads. That made perfect sense as long as AI and ML processing were deemed a cloud-only architecture. But we now understand that AI scalability, especially in consumer and IoT applications, must integrate edge processing as a key architectural element. AI frameworks have already been extended to the edge (TensorFlow to TensorFlow Lite; MXNet to AWS Greengrass). The question is:

What compute/storage architectures exist in the edge device to accommodate the need for local AI?

Answer: all we have today are CPUs that were designed for legacy PC workloads and application processors that were designed for power efficiency!
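To make the framework point above concrete: moving a model from TensorFlow to TensorFlow Lite is already a routine conversion step; the missing piece has been silicon tuned to run the result. A minimal sketch of that conversion, assuming the current tf.lite converter API and a hypothetical SavedModel path:

import tensorflow as tf

# Convert a trained model (SavedModel format) into a compact .tflite
# flatbuffer that the lightweight TensorFlow Lite runtime can execute
# on an edge or endpoint device.
converter = tf.lite.TFLiteConverter.from_saved_model("my_saved_model")  # hypothetical path
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)

# On the device itself, inference runs through the TFLite interpreter
# rather than the full TensorFlow runtime.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()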

This is about to change: today, Google announced TPUs for edge devices. Unveiled at Google Next 2018, the Edge TPU comes as a discrete, packaged chip. Google also announced a collaboration with NXP which (surprisingly, given my rant about ISAs above) pairs the Edge TPU with four instances of an ARM-based pipeline. My guess is that this design will eventually be licensed and integrated by other silicon partners. One thing is clear: if successful, Google will own edge AI processing at both the silicon and the device level. With that, it will own edge inferencing and, ultimately, the customer!
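Accelerators of this class typically execute fully 8-bit quantized TensorFlow Lite models rather than floating-point graphs. As a rough illustration of what preparing a model for such silicon involves (this uses the generic tf.lite post-training quantization flow, not any Edge-TPU-specific toolchain; the model path and input shape are placeholders):

import numpy as np
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("my_saved_model")  # hypothetical path
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# Representative samples calibrate the int8 quantization ranges.
def representative_data_gen():
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]  # stand-in for real input data

converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)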

The incumbent leader in cloud AI, AWS, with its cloud-centric (cloud-only?) architecture, owns AI endpoints only to the extent of Echo devices and Alexa Voice Service implementers. Google Assistant will own not only the much larger Android installed base but, more importantly, every endpoint that implements an Edge TPU! Owning this end-to-end architecture will give Google an edge not only in frameworks (TensorFlow Lite is the only one supported so far) but also in silicon and hardware.

It has been argued (and challenged by me) that Google's knowledge graph will be key to Google's dominance in AI. With today's announcement, the predicted outcome could be the same (Google will own AI), but the road to success will be paved with chips, not knowledge.

Watch this blog space or talk to me offline about updates as Google rolls out the Edge TPU later this year. Always happy to take Inquiry on this topic.

[1] See “Market Trends: AI in Edge Devices Creates Opportunities for Device Manufacturers” – G00333973

 


Werner Goertz
Research Director
1 year at Gartner
21 years IT Industry

Werner Goertz is a Research Director within Gartner's Personal Technologies team, where he covers personal devices (smartphones, PCs/Ultrabooks, tablets/ultramobiles and wearables) and IoT. A special emphasis of his research lies in the Human Machine Interface (HMI) and multimodal I/O technologies: voice/speech processing and recognition, facial recognition and eye tracking, biometrics and motion/gesture control.


