Tom Austin

A member of the Gartner Blog Network

Tom Austin
VP & Gartner Fellow
20 years at Gartner
41 years IT industry

Tom Austin, VP, has been a Gartner Fellow since 1997. He drives Gartner's research content incubator (the Maverick Program) and is leading a new research community creating research on the emerging era of smart machines.

Get smart: Why is it smart to avoid talking about intelligence (machine or human)?

by Tom Austin  |  December 16, 2014

I avoid using the words “intelligence” and “intelligent” in my research and instead use the word “smart.” Why?

“Smart” is a far less pretentious term than “intelligent.”

You can be a smart-alec (a ‘wise guy’), a smart dresser, a smarty-pants or part of a smart mob. Indeed, there are at least 127 different SMART acronyms, but there are over 500 intelligence-related terms, which says there’s a lack of specificity there too; not so smart for a term that allegedly carries so much precise meaning. By the way, there are 937 different pages in Wikipedia that start with “Smart”, and they don’t have an entry for “Smart Machines” yet either….

The word intelligent has all sorts of controversy and confusion around it, everything from the James Watson controversy and Herrnstein and Murray’s similar claims in The Bell Curve through the pseudoscience of intelligent design.

In common use, the word intelligence is far too laden with all sorts of problems, not the least of which is the absence of a viable way to either define or measure it in a rigorous way.

We can’t measure smartness precisely either, but the term doesn’t carry as many misleading implications as the term intelligence.

Get smart.




What else don’t we know about brain function?

by Tom Austin  |  December 10, 2014

In “AI: If you start with a false premise, anything’s possible,” I argued that we don’t know enough about brain function to emulate the brain of a human. In retrospect, I probably could have said goldfish or maybe even earthworm, but let’s stay on point. A complete listing of what we don’t know isn’t possible; we don’t know what we don’t know. But I will use this space to periodically cite findings related to the notion that we are eons away from understanding everything about the brain, from electrochemistry through metabolism through epigenetics through … human thought.

Science Daily recently reported on research on the role of astrocytes in brain function. Astrocytes are a form of glia, cells that scientists have traditionally treated as “supporting” tissue. They’re not neurons. There are something like ten times as many glial cells in the brain as there are neurons.

Astrocytes, according to the research, regulate “excitatory synapse formation through cells…and also play a role in forming inhibitory synapses.” What that says is that we’re only beginning to understand how, at the cellular level, cells in the central nervous system interact with each other in fundamental brain processes.

What else don’t we know?

16 December 2014: Adding another piece of research

Here’s another piece of research on a different form of glia, oligodendroglia precursor cells, showing these non-neurons mediate neuronal processing, receive information from neurons and transmit or mediate communication to neurons.

What’s the point? We need to continue to do research on multiple fronts, including computational neuroscience, neurobiology, epigenetics, biopsychology, social sciences, cognitive processes and so forth. But to assume we are anywhere near being able to build a machine with the general purpose intelligence of a human is a belief centered on faith, not on sound science and engineering principles.




AI: If you start with a false premise, anything’s possible

by Tom Austin  |  December 1, 2014

If you start with the premise that someone will soon develop a smart machine that is as intelligent as humans (or more so), then it’s easy to come up with all sorts of fantastic potential outcomes to scare the dickens out of the reader (or delight the soul).

Ray Kurzweil, in his book The Singularity Is Near (2005), suggests that reverse-engineering the human brain so we can simulate it using computers may be only a decade away (and may only require a million lines of code).

Nick Bostrom’s 2014 best-selling book, Superintelligence: Paths, Dangers, Strategies, benefits from the hubbub created around the Singularity. (Bostrom distances himself from the Singularity in his preface, but the book’s best-seller status benefits from Singularity afterglow.)

“In this book, I try to understand the challenge presented by the prospect of superintelligence, and how we might best respond. This is quite possibly the most important and most daunting challenge humanity has ever faced. And — whether we succeed or fail — it is probably the last challenge we will ever face.”

From this, we get Elon Musk’s promo blurb (repeated elsewhere): “Worth reading…. We need to be super careful with AI. Potentially more dangerous than nukes.”

Others have weighed in as well; the Economist says “The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking.” For the record, I agree with the Economist but urge caution in the conclusions. (There are other, more breathless pronouncements I won’t repeat.)

At issue is how realistic the underlying premise of the Singularity is. It’s not at all realistic. I suspect that 20 or 30 years from now some zealot will predict that “reverse-engineering the human brain so we can simulate it may be only a decade (or two) away.”


Isn’t the brain just a collection of neurons whose chemistry we understand and whose methods of communication are well understood? Hasn’t functional magnetic resonance imaging mapped out what connects where? Well, no. We do not fully understand all the processes involved in intracellular and intercellular operations and … here’s a key part … even if we did, how the heck do we get from that set of unjustified presumptions to a working, valuable, functional model of how humans behave and think?

Do you think we know how we think?

Ask a psychologist and they’ll give you a set of organizing theories (which vary from one school of thought to another). If they’re psychiatrists, they’ll talk about the gross effects of various drugs on certain chemical pathways (which is shorthand for saying they sort of understand how gross collections of cells seem to communicate). But they’ll readily admit that they don’t have a handle on how to go from gross human behavior to the neuroanatomy of synaptic vesicles and back again, taking into account the role of microglia and so forth.

We know so much more now about how the brain behaves than ever before — and it’s led to positive interventional therapies — but we also know more about what we don’t know.

The idea that we already know enough about the brain that, with appropriately scalable hardware, we could effectively emulate human thinking and intelligence is absolutely mind-boggling.

Why are humans racist? Why do some rape, plunder and murder millions in their search for power or to assert the correctness of their gods? Why do others dedicate their lives to the betterment of others? What makes some commit genocide?

Taking off on the notion of a Turing Test, should the machine echo the racist remarks of a human interrogator? Or demonstrate repulsion or horror? Under which condition should it be deemed intelligent?

We know that doctors tend to overprescribe antibiotics when tired and judges hand down stiffer penalties as they get hungrier. How do the gut in particular and the body in general affect human decision making and intelligence? Do we build this into our emulation of approximately 100 billion neurons (10^11) with 10^14 synapses (all of which ignores the role of glia and other involved cells)?
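To make the scale of those numbers concrete, here is a rough back-of-envelope sketch (my own illustrative assumptions, not figures from the post): even if each synapse could be reduced to a single 4-byte weight, a heroic simplification that ignores glia, neuromodulation and all dynamics, the static storage alone is enormous.

```python
# Back-of-envelope storage estimate for a naive brain emulation.
# Assumption (illustrative only): each synapse reduces to one 4-byte weight.
neurons = 10**11            # ~100 billion neurons, per the figure above
synapses = 10**14           # ~10^14 synapses, per the figure above
bytes_per_synapse = 4       # one 32-bit value per synapse -- wildly optimistic
total_bytes = synapses * bytes_per_synapse

print(f"synapses per neuron: {synapses // neurons}")           # 1000
print(f"static weights alone: {total_bytes / 10**12:.0f} TB")  # 400 TB
```

And that is only passive storage; it says nothing about simulating the electrochemistry, metabolism or gut-brain effects mentioned above.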

Singularity advocates like to point to the exponential growth in performance of processors (and other technologies). What we lack is a sufficiently detailed understanding of human behavior and intelligence.
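The hardware half of that argument is easy to sketch (the fixed two-year doubling period here is my illustrative assumption, not a claim from any vendor roadmap); it shows why the extrapolation is so seductive, even though no amount of compounding supplies the missing understanding.

```python
# Compound growth under a steady doubling period (assumed: 2 years).
def performance_multiple(years: float, doubling_period: float = 2.0) -> float:
    """How many times faster hardware gets after `years` of steady doubling."""
    return 2.0 ** (years / doubling_period)

print(performance_multiple(10))  # 32x in a decade
print(performance_multiple(30))  # 32768x in thirty years
```

Raw performance compounds geometrically; scientific understanding of behavior and intelligence does not.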

To me, the Singularity is a religion, a faith rooted in hope. That’s great. But let’s not let faith substitute for facts. We do not know enough about thinking, consciousness, behavior, attitudes and human capabilities to emulate them. Not now. So let’s not get too worked up about the threats of Strong AI.




How Smart Are We Really? (hint — not very but help is on the way…)

by Tom Austin  |  November 20, 2014

Another proposal for a new alternative to the ’Turing test’ has emerged. It’s not interesting if you’re trying to determine whether technology has finally equalled or surpassed human intelligence (whatever that is). It is interesting because it provides a different approach to quantifying relative differences between machines, but it’s too biased toward “humanness” measures instead of “effectiveness” measures. Suitability to task and ever-increasing richness of capabilities are more important measures, along with other properties of smart machines which we’ve defined (e.g., autonomous behavior, active and passive learning, and the ability to abstract).

I’m not impressed by attempts to duplicate human intelligence. I’m anticipating more human-machine cooperation — where one’s strengths counterbalance the other’s weaknesses, in both directions. That’s likely to be the most fertile ground over the next decade. We’re going to come to appreciate how unintelligent humans are. And how stupid machines can be. But how the two together can be far better than either alone — within limits.

A shout out to Larry Dignan’s rant on ZDNet. Man after my own heart. Look at his comments about what he wants to see, not what the vendor he’s writing about is already willing to talk about. Nice piece, Larry!






Smart Machines Are Going to Revolutionize How We Work — Starting with Email

by Tom Austin  |  November 19, 2014

Microsoft, Google and IBM are all over this point, investing heavily to reinvent email — eventually replacing the inbox (and folders and other machinery of office drudgery) with a virtual personal assistant (à la the Apple Knowledge Navigator video from 1987). Watch the video.

Are you ready for it?


Smart Machines: Tying Natural Language Processing and Image Processing — Image Scene Analysis Progress

by Tom Austin  |  November 18, 2014

The New York Times pointed out a body of image processing research published by Stanford and Google. The NYT piece gets into the basics and potential implications while the Stanford abstract shows some brilliant examples of results obtained.

All of this feels like major progress for the continued evolution of smart machines. We are in the age of smart machines and it’s going to be a hugely transformative period.

Check out the following:

Stanford Research (figure 1 is mind boggling)

Google Research (abstract) (research paper)

New York Times article 


(errata: Stanford was misspelled originally.)


Fridge Fantasies Redux

by Tom Austin  |  May 23, 2014

Read yet another piece where people were talking about IoT and how internet-connected fridges would tell our smart phones to remind us, when we came close to a grocery store, that we only had 50 ml of milk left, so please pick some up. Oh, and by the way, buy brand Zed because it’s best (or so says one of 472,995 sponsors of this reminder service).

Never mind the fantasies of the 1990s, where people were predicting the same thing!

Think about all of the structural problems in making this come true. Even if people changed their refrigerators once every year (not very likely; there are better ways of spending money), how long will it take to modify all products you stick in the refrigerator to communicate their state to said box? What are the economics there? How do you instrument a poblano pepper to notify the crisper in the fridge that it’s getting lonely in there and it would like some more poblanos to keep it company? How much does this add to the cost of food? What’s the product life cycle for various food offerings, a factor which gates how quickly zero cost technology could be added? 

Then there’s the privacy issue. If the trash container and fridge get together and take stock of my consumption of red onions, broccoli and garlic, what kind of story could they fabricate? Maybe that my food wastage rate is 8% higher than that of other apartment owners in my complex. Or 17% less. And what if that news got out? Perish the thought (or cook it quickly so it doesn’t perish.)

I’ll confess. I don’t always keep tomatoes (fresh or canned) in the refrigerator. So how will my smart phone know to remind me about that inventory level?

A prediction: before we have smart kitchen cabinets and smart cauliflower too, we’ll have useful (but not very smart) robots that can do a visual scan for you when you’re in the grocery aisle of your favorite food emporium and panic because you can’t remember if you’re over- or understocked on jalapenos. Instead, you tell your video-enabled, relatively dumb remote assistant to go over to the refrigerator, open the door and point its camera at the crisper drawer so you can figure it out for yourself.

And we’ll still be talking about intelligent fridge fantasies ten years from then…





Smart Machines: How Will They Disrupt Your Career?

by Tom Austin  |  February 10, 2014

How susceptible is your job to computerization? According to Frey and Osborne, 47% of all current U.S. jobs are at risk over the next two decades because they consist primarily of tasks that can be automated in that time period.  (Left unanalyzed are the beneficial impacts — such as new capabilities people will pick up by collaborating with smart machines.)  

I recently wrote that smart machines will continue to enhance and threaten the abilities of employees to do their jobs. By 2020, Gartner predicts, a majority of knowledge worker career paths will be disrupted by smart machines in both positive and negative ways. Read the full report, Gartner Top Predictions 2014: Plan for a Disruptive, but Constructive Future.

Smart machines — a broad and powerful range of new systems — are emerging this decade. They do what we thought only people could do and what we didn’t think technology could do. Smart machines and smart advisors exploit machine learning and algorithms; they learn from results and work faster than humans; they make smart people smarter. Virtual personal assistants — focused on user behavior (habits, activities, needs) — make smart people more effective.

The smart machine market is small but growing, threatening to upend knowledge workers’ careers by 2020 (see chart). Ignore it at your own peril.


How should you factor smart machines and machine-assisted tasks into your IT planning? View Gartner Top Predictions 2014: Plan for a Disruptive, but Constructive Future to see key findings, market implications and recommendations. Discover the competitive advantages that await early adopters.

For Gartner clients seeking more on the smart machines, view Predicts 2014: The Emerging Smart Machine Era and The IT Role in Helping High Impact Performers Thrive.


When is a dumb machine smart?

by Tom Austin  |  January 3, 2014

I’ve been working on research on “smart machines” for most of 2013 (and tracking same for years before then) but I recently came across a wonderful invention that illustrates the point that not all machines need to have all the attributes we think of regarding smart machines.

Behold the Spoon Full of Sensors To Help Parkinson’s Patients Feed Themselves! The designers recognized how the same accelerometers and actuators used these days for image stabilization in advanced cameras could be applied to help these patients lead more normal lives.

Here, the designers and engineers were smart and the specific product they created is a dumb machine but the end result is freedom and joy for the afflicted. What a smart dumb machine!


What should technologies like IBM’s Watson and Google’s Knowledge Graph mean to you?

by Tom Austin  |  April 24, 2013

I’ve seen the future … and now it’s within grasp. It’s going to impact your life and your work before this decade is out.

We just published a note entitled Exploit the Intersect of IBM’s Social Business and Solution Selling Strategies.

A part of that note, probably one-quarter, dives into what has been fascinating me for many, many years. There are only so many features you can stick into an email program or a content-editing tool, particularly if you’re text-centric. Where do we go after the 177th version of a personal productivity tool suite? The 34th iteration of instant messaging? The 500th document database? So much of what we’re doing now is reinventing and refining what we were already doing in the pre-client-server era. Back in the late ’80s at Digital Equipment, we had a vision and architecture for compound documents in GUI environments … how many more iterations of that do we really need?

There’s more coming, very different. It’s not a new kerning tool. Or the next great slide transition mechanism. It’s about the rise of smart assistants. Natural language processing. Semantic analysis. Massive parallelization. Rule-based systems with machine learning. Pattern recognition and matching. Marry that to the scale of what Google can do and what IBM, with Watson and co-development partners can do.

Start with Google. Witness, for example: 

Then look at IBM. Witness, again, for example:

  • Watson — as in Ken Jennings’s declaration on Jeopardy, “I, for one, welcome our new computer overlords”
  • Consider the Watson “Oncology Treatment Advisor.” IBM co-developed it with WellPoint, which is now selling it. It’s narrowly focused today on lung and breast cancer cases. It digests hundreds of millions of pages of published research and other reference data, considers the patient’s data (such as diagnostic test data, prior treatments and broader history) and suggests to the clinician a list of alternative treatments to consider. The list is ordered — based on a calculated likelihood of success — and provides access to all the relevant information the system has considered in constructing each recommendation.
  • IBM is also working with Memorial Sloan-Kettering Cancer Center and others on additional, very narrow but high-value use cases in various medical fields. Other co-development projects are under way in other industries.
  • This isn’t just about game shows and sleights of programmer hands!

Google Now represents analysis of a longitudinal array of information about what you do, where you go, what you say, who you pay attention to and whom you interact with across time, so that Google (and, for that matter, Siri, its cross-valley competitor) can predict what you will need in your current context — before you even know it. There’s a staggering amount of personal information it can mine.

This isn’t just the Apple Knowledge Navigator reborn.

And then there’s Watson and the techniques IBM is using to evolve future generations of its capabilities…

I see radical change coming — glorious and depressing, liberating and enslaving, enriching all and only a few. This isn’t necessarily the optimistic world of Brynjolfsson and McAfee’s Race Against the Machine.

How is this going to affect your organization (IT)? Your enterprise? industry? economy? society? What do you counsel your children to pursue as a career? as their passion?

