
Microsoft releases Tay onto the world and…

By Jonathan Care | March 24, 2016

Social Media | Machine Learning

Microsoft released Tay (a narrow-scope A.I. with the interactional profile of a 14-year-old) yesterday, and today took it offline amid a flurry of press complaints that it “learned racism” from interaction with the denizens of Social Media. Frenzied press speculation has led many to call this an indictment of Machine Learning.

It’s not. It’s an indictment of us as humans. This project has held a mirror up to human nature, and we do not like what we see.

It’s an illustration of the old programmer’s saying that I learned in school: G.I.G.O., Garbage In – Garbage Out.

It’s a demonstration of something we all learn as parents: that malleable cognitive processes should not be exposed to influences they cannot critically evaluate, and that the parent’s role is to ensure that learning is supervised, directed and contextualised appropriately.
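To put G.I.G.O. in concrete terms, here is a minimal sketch in Python (emphatically not Tay’s actual code; ParrotBot and its training lines are invented for illustration) of a learner with no critical filter: it weights its replies purely by the frequency of what it has been fed, so a poisoned diet yields a poisoned output.

```python
# A toy "parrot" learner: no quality filter, only frequency, so whatever
# dominates its input dominates its output. Purely illustrative.
import random
from collections import Counter

class ParrotBot:
    def __init__(self):
        self.memory = Counter()

    def learn(self, utterance: str) -> None:
        # Absorb input uncritically; there is no value judgement here.
        self.memory[utterance] += 1

    def respond(self) -> str:
        # Echo back in proportion to what was absorbed.
        utterances = list(self.memory)
        weights = [self.memory[u] for u in utterances]
        return random.choices(utterances, weights=weights, k=1)[0]

bot = ParrotBot()
for line in ["cat pictures", "angry shouting", "angry shouting", "angry shouting"]:
    bot.learn(line)
print(bot.respond())  # most likely "angry shouting": Garbage In, Garbage Out
```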

There is an expectation that A.I. will be limited in its broad-scope contextualisation for some time. Perhaps another way to think of it is that these systems are naive (and not just in a Bayesian sense). Naive entities will be inclined to parrot back what they have absorbed until they have a knowledge base against which to contextualise. It will be a long time before Machine Learning moves beyond narrow-scope A.I. (useful in areas such as Fraud and Security Analytics) and becomes capable of General Cognition.

The other point, of course, is that Social Media is by and large not an erudite debate by the literati on the deep issues of the day, but mostly cat pictures, sarcasm and angry shouting (with ads). It appears that Tay had Social Media as its sole model for human behaviour, and thus had nothing to compare it against and only a limited mechanism for making value judgements.
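And for the “naive in a Bayesian sense” aside, a tiny hand-rolled Naive Bayes classifier makes the sole-model problem concrete: if the training diet is skewed, the estimated priors are skewed too, and ambiguous new input gets judged accordingly. The corpus, labels and class names below are all hypothetical.

```python
# A hand-rolled Naive Bayes sketch (hypothetical, for illustration only):
# a learner whose sole model of human behaviour is a skewed corpus will
# reproduce that corpus's judgements.
import math
from collections import Counter, defaultdict

class NaiveBayes:
    def __init__(self):
        self.class_counts = Counter()            # how often each label was seen
        self.word_counts = defaultdict(Counter)  # per-label word frequencies
        self.vocab = set()

    def train(self, text: str, label: str) -> None:
        self.class_counts[label] += 1
        for word in text.lower().split():
            self.word_counts[label][word] += 1
            self.vocab.add(word)

    def predict(self, text: str) -> str:
        def log_score(label: str) -> float:
            total = sum(self.word_counts[label].values())
            prior = math.log(self.class_counts[label] / sum(self.class_counts.values()))
            # Laplace smoothing keeps unseen words from zeroing out a class.
            return prior + sum(
                math.log((self.word_counts[label][word] + 1) / (total + len(self.vocab)))
                for word in text.lower().split()
            )
        return max(self.class_counts, key=log_score)

nb = NaiveBayes()
# A skewed diet: mostly hostile examples, one polite one; like learning
# human behaviour from Social Media alone.
for text in ["you are awful", "everyone is awful", "shouting shouting"]:
    nb.train(text, "hostile")
nb.train("have a lovely day", "polite")

print(nb.predict("you seem lovely"))  # "hostile": the skewed priors win
```

Nothing in the arithmetic is wrong; the model faithfully reflects the only world it was shown, which is exactly the G.I.G.O. point.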

Would you leave a child alone in a place like that? Clearly Microsoft thought better of it, too.


2 Comments

  • Merv Adrian says:

    For goodness’ sake, don’t let Tay watch the presidential campaign, either.

  • Ekim Kaya says:

    Good point, Jonathan. We also agree that the “repeat after me” feature was a bad idea. Here’s our take as a 9-year-old bot company:

    https://medium.com/@botego/why-microsoft-s-chatbot-tay-failed-and-what-does-it-mean-for-artificial-intelligence-studies-fb71d22e8359#.wc3nalgzx