Tom Austin

A member of the Gartner Blog Network

Tom Austin
VP & Gartner Fellow
20 years at Gartner
41 years IT industry

Tom Austin, VP, has been a Gartner Fellow since 1997. He drives Gartner's research content incubator (the Maverick Program) and is leading a new research community creating research on the emerging era of smart machines.

Kudos to Andrew Ng (Stanford, Google, Baidu)

by Tom Austin  |  January 23, 2015  |  1 Comment

Andrew Ng is chief scientist at Baidu's research lab in California, a computer science faculty member at Stanford and former director of AI research at Google. In a recent interview with the Wall Street Journal, he said:

WSJ: Who’s at the forefront of deep learning?

Ng: There are a lot of deep-learning startups. Unfortunately, deep learning is so hot today that there are startups that call themselves deep learning using a somewhat generous interpretation. It’s creating tons of value for users and for companies, but there’s also a lot of hype. We tend to say deep learning is loosely a simulation of the brain. That sound bite is so easy for all of us to use that it sometimes causes people to over-extrapolate to what deep learning is. The reality is it’s really very different than the brain. We barely (even) know what the human brain does.

Right on the money!

1 Comment »

Category: Uncategorized     Tags:

Bravo to The Future of Life Institute’s Research Initiatives and Musk’s Investment

by Tom Austin  |  January 19, 2015  |  Submit a Comment

Today, the FLI (Future of Life Institute) portal opens to proposals for funding research that aligns with the priorities they lay out in this paper.

FLI’s focus is on making AI more capable, more socially acceptable and maximizing its social benefits while minimizing the negative consequences.

While I’ve taken issue with much of the fear-mongering by some about how AI could lead to the end of human life as we know it (and other hyperbole) — largely because I believe some AI proponents are greatly exaggerating what AI will be able to accomplish in the next few decades — I find FLI’s academic research priorities valuable.

We aren’t building artificial versions of human intelligence (whatever that is) any more than airplane manufacturers build artificial versions of birds. And even if we name a plane a “Blackbird,” that doesn’t make it a bird (leaking jet fuel from its titanium skin before takeoff).

We are building machines that are smarter than ever before, machines that complement human capabilities in a symbiotic relationship (machines are more capable than people in some areas and less capable in others; similarly, people are more capable than machines in some areas and less capable in others).

As described by FLI, research into legal and ethical issues, autonomous systems (including weapons), and privacy and trust is very important, both short and long term. There are also more complex and decidedly non-trivial issues to deal with related to the evolution of economies and social systems, income disparity, and alternative reward and support systems. (What feels like symbiosis to some will feel more like a slow-motion catastrophe to others.)

Bravo too for Elon Musk’s contribution of 10 million US dollars to the FLI to support its research!



What can we learn from bird brains?

by Tom Austin  |  January 6, 2015  |  Comments Off

Have you seen the chatter on computational neuroscience, computational neurobiology, the existential risks AI poses for all of humanity and other dire forecasts for the future impact of all-knowing, sentient artificial intelligence?

All of it makes great intuitive leaps from models of how neurons behave to methods for constructing general-purpose machine intelligence on a par with or superior to humans, potentially ending all carbon-based life on the planet.

Never mind the gaps in knowledge. (We know that having a pile of 300 million transistors doesn’t create a CPU, but I’ll spare you more of that metaphor.)

Wired has breathlessly carried a story predicting “Reverse-Engineering of Human Brain Likely by 2020,” and the Financial Times (FT) has carried stories about risks to civilization on a par with nuclear accidents and the invasion of an alien species. To be fair, the FT closes its article with a note saying there is no evidence these catastrophes will occur, but the buildup about disaster leaves the reader with the wrong conclusions. (I wrote about Elon Musk’s and Stephen Hawking’s doomsday predictions earlier in this blog.)

With my background in the intersection between biology and behavior, I was attracted to a recent study published in the journal Current Biology (and summarized by Science Daily). In this empirical study, the authors conclude that crows demonstrate “higher-order abstract reasoning” (i.e., crows can learn to judge sameness and differentness in abstract images). Coauthor Wasserman is quoted as saying, “We have always sold animals short. Human arrogance still permeates contemporary cognitive science.”

We should learn from this crow study.

How well can we document how humans think? Crows? Rats? Cows? 

Throughout my life, I’ve heard all sorts of explanations for how humans are superior to other species. And time after time, the logical arguments fall to the side in the face of smart research testing (and debunking) the common wisdom.

Do we know enough about mouse behavior to emulate and surpass it with artificial intelligence? If not now, when? 

So, what can we learn from “bird brains?” That we don’t really know enough about how brains work. And just because we think we understand how neurons behave doesn’t mean we can construct something as “smart” as a person (or a pig).

For the record, we don’t really know enough about how neurons work and how they interact with glial cells. So we haven’t even gotten to first base. The search for “artificial intelligence” still feels too much like the search for the philosopher’s stone.


Tomorrow’s Article Today — The Big Miss of IQ tests

by Tom Austin  |  December 31, 2014  |  Comments Off

There’s a thought-provoking piece on the Scientific American site entitled “Rational and Irrational Thought: The Thinking That IQ Tests Miss,” subtitled “Why smart people sometimes do dumb things.”

There’s a cute set of tests the authors provide to evaluate “dysrationalia” and its causes that everyone ought to take.

As before, I find the authors’ arguments strong justification for banning the use of the term intelligence but, of course, let your own biases drive your own conclusions here.

Happy New Year!


Get smart: Why is it smart to avoid talking about intelligence (machine or human)?

by Tom Austin  |  December 16, 2014  |  Comments Off

I avoid using the words “intelligence” and “intelligent” in my research and instead use the word “smart.” Why?

Smart is a far less pretentious term than intelligent.

You can be a smart-alec (a ‘wise guy’), a smart dresser, a smarty-pants or part of a smart mob. Indeed, there are at least 127 different SMART acronyms, but there are over 500 intelligence terms, which says there’s a lack of specificity there too; that’s not so smart for a term that allegedly carries so much precise meaning. By the way, there are 937 different pages in Wikipedia that start with “Smart,” and there’s no entry for “Smart Machines” yet either….

The word intelligent has all sorts of controversies and confusion around it, everything from the James Watson controversy and Herrnstein and Murray’s similar claim in The Bell Curve through the pseudoscience of intelligent design.

In common use, the word intelligence is far too laden with all sorts of problems, not the least of which is the absence of a viable way to either define or measure it in a rigorous way.

We can’t measure smartness precisely either, but the term doesn’t carry as many mistaken implications as the term intelligence.

Get smart.




What else don’t we know about brain function?

by Tom Austin  |  December 10, 2014  |  Comments Off

In “AI: If you start with a false premise, anything’s possible,” I argued that we don’t know enough about brain function to emulate the brain of a human. In retrospect, I probably could have said goldfish or maybe even earthworm, but let’s stay on point. A complete listing of what we don’t know isn’t possible. We don’t know what we don’t know. But I will use this space to periodically cite findings related to the notion that we are eons away from understanding everything about the brain, from electrochemistry through metabolism through epigenetics through … human thought.

Science Daily recently reported on research on the role of astrocytes in brain function. Astrocytes are a form of glia, cells that scientists have traditionally treated as “supporting” tissue. They’re not neurons. There are something like ten times as many glial cells in the brain as there are neurons.

Astrocytes, according to the research, regulate “excitatory synapse formation through cells…and also play a role in forming inhibitory synapses.” What that says is we’re only beginning to understand how, at the cellular level, cells in the central nervous system interact with each other in fundamental brain processes.

What else don’t we know?

16 December 2014: Adding another piece of research

Here’s another piece of research on a different form of glia, oligodendroglia precursor cells, showing that these non-neurons mediate neuronal processing: they receive information from neurons and transmit or mediate communication back to neurons.

What’s the point? We need to continue to do research on multiple fronts, including computational neuroscience, neurobiology, epigenetics, biopsychology, social sciences, cognitive processes and so forth. But to assume we are anywhere near being able to build a machine with the general purpose intelligence of a human is a belief centered on faith, not on sound science and engineering principles.




AI: If you start with a false premise, anything’s possible

by Tom Austin  |  December 1, 2014  |  Comments Off

If you start with the premise that someone will soon develop a smart machine that is as intelligent as humans (or more so), then it’s easy to come up with all sorts of fantastic potential outcomes to scare the dickens out of the reader (or delight the soul).

Ray Kurzweil, in his book The Singularity Is Near (2005), suggests that reverse-engineering the human brain so we can simulate it using computers may be only a decade away (and may require only a million lines of code).

Nick Bostrom’s 2014 best-selling book, Superintelligence: Paths, Dangers, Strategies, benefits from the hubbub created around the Singularity. (Bostrom distances himself from the Singularity in his preface, but the book’s best-seller status benefits from the Singularity afterglow.)

“In this book, I try to understand the challenge presented by the prospect of superintelligence, and how we might best respond. This is quite possibly the most important and most daunting challenge humanity has ever faced. And— whether we succeed or fail— it is probably the last challenge we will ever face.”

From this, we get Elon Musk’s promo blurb (repeated elsewhere): “Worth reading…. We need to be super careful with AI. Potentially more dangerous than nukes.”

Others weigh in too; The Economist says, “The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking.” For the record, I agree with The Economist but urge caution in the conclusions. (There are other, more breathless pronouncements I won’t repeat.)

At issue is how realistic the underlying premise of the Singularity is. It’s not at all realistic. I suspect that 20 or 30 years from now, some zealot will predict that “reverse-engineering the human brain so we can simulate it may be only a decade (or two) away.”


Isn’t the brain just a collection of neurons whose chemistry we understand and whose methods of communication are well understood? Hasn’t functional magnetic resonance imaging mapped out what connects where? Well, no. We do not fully understand all the processes involved in intracellular and intercellular operations and … here’s a key part … even if we did, how do we get from that set of unjustified presumptions to a working, valuable, functional model of how humans behave and think?

Do you think we know how we think?

Ask psychologists and they’ll give you a set of organizing theories (which vary from one school of thought to another). If they’re psychiatrists, they’ll talk about the gross effects of various drugs on certain chemical pathways (which is shorthand for saying they sort of understand how gross collections of cells seem to communicate). But they’ll readily admit they don’t have a handle on how to go from gross human behavior to the neuroanatomy of synaptic vesicles and back again, taking into account the role of microglia and so forth.

We know so much more now about how the brain behaves than ever before — and it’s led to positive interventional therapies — but we also know more about what we don’t know.

The idea that we already know enough to emulate the brain, to the level where, with appropriately scalable hardware, we could effectively reproduce human thinking and intelligence, is absolutely mind-boggling.

Why are humans racists? Why do some rape, plunder and murder millions in their search for power or to assert the correctness of their gods? Why do others dedicate their lives to the betterment of others? What makes some commit genocide?

Taking off on the notion of a Turing test, should the machine echo the racist remarks of a human interrogator? Or demonstrate repulsion or horror? Under which condition should it be deemed intelligent?

We know that doctors tend to overprescribe antibiotics when tired and judges hand down stiffer penalties as they get hungrier. How do the gut in particular and the body in general affect human decision making and intelligence? Do we build this into our emulation of approximately 100 billion neurons (10^11) with 10^14 synapses (all of which ignores the role of glia and other involved cells)?
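The scale of those figures is worth a quick back-of-envelope calculation. This sketch uses only the numbers cited above (10^11 neurons, 10^14 synapses); the 4 bytes per synaptic weight is my own illustrative assumption, and even this naive tally deliberately ignores glia, body chemistry and everything else just discussed:

```python
# Back-of-envelope scale of a naive whole-brain emulation,
# using the figures cited in the text above.
NEURONS = 10**11    # ~100 billion neurons
SYNAPSES = 10**14   # ~100 trillion synapses
BYTES_PER_WEIGHT = 4  # assumption: one 32-bit float per synaptic weight

weight_storage_bytes = SYNAPSES * BYTES_PER_WEIGHT
print(f"synapses per neuron (avg): {SYNAPSES // NEURONS}")        # 1000
print(f"weight storage: {weight_storage_bytes / 10**12:.0f} TB")  # 400 TB
```

And that 400 terabytes buys only a static table of connection strengths, with no model of the cellular dynamics, glia or gut-brain effects that the paragraph above argues we do not yet understand.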

Singularity advocates like to point to the exponential growth in performance of processors (and other technologies). What we lack is a sufficiently detailed understanding of human behavior and intelligence.

To me, singularity is a religion, a faith rooted in hope. That’s great. But let’s not let faith substitute for facts. We do not know enough about thinking, consciousness, behavior, attitudes and human capabilities to emulate that. Not now. So let’s not get too worked up about the threats of Strong AI.




How Smart Are We Really? (hint — not very but help is on the way…)

by Tom Austin  |  November 20, 2014  |  Comments Off

Another proposal for a new alternative to the ‘Turing test’ has emerged. It’s not interesting if you’re trying to determine whether technology has finally equalled or surpassed human intelligence (whatever that is). It is interesting because it provides a different approach to quantifying relative differences between machines, but it’s too biased toward “humanness” measures instead of “effectiveness” measures. Suitability to task and ever-increasing richness of capabilities are more important measures, along with other properties of smart machines which we’ve defined (e.g., autonomous behavior, active and passive learning, and the ability to abstract).

I’m not impressed by attempts to duplicate human intelligence. I’m anticipating more human-machine cooperation, where one’s strengths counterbalance the other’s weaknesses, in both directions. That’s likely to be the most fertile ground over the next decade. We’re going to come to appreciate how unintelligent humans are. And how stupid machines can be. But how the two together can be far better than either alone — within limits.

A shout out to Larry Dignan’s rant on ZDNet. Man after my own heart. Look at his comments about what he wants to see, not what the vendor he’s writing about is already willing to talk about. Nice piece, Larry!






Smart Machines Are Going to Revolutionize How We Work — Starting with Email

by Tom Austin  |  November 19, 2014  |  Comments Off

Microsoft, Google and IBM are all over this point, investing heavily to reinvent email — eventually replacing the inbox (and folders and other machinery of office drudgery) with a virtual personal assistant (à la the Apple Knowledge Navigator video from 1987). Watch the video.

Are you ready for it?


Smart Machines: Tying Natural Language Processing and Image Processing — Image Scene Analysis Progress

by Tom Austin  |  November 18, 2014  |  1 Comment

The New York Times pointed out a body of image processing research published by Stanford and Google. The NYT piece gets into the basics and potential implications while the Stanford abstract shows some brilliant examples of results obtained.

All of this feels like major progress for the continued evolution of smart machines. We are in the age of smart machines and it’s going to be a hugely transformative period.

Check out the following:

Stanford Research (figure 1 is mind boggling)

Google Research (abstract) (research paper)

New York Times article 


(errata: Stanford was misspelled originally.)
