A decade ago, the ability to issue voice instructions to your cellphone and have it actually respond to your commands was the stuff of science fiction. Today, it’s a standard smartphone feature. But what will the artificial intelligence (AI) in our pocket be capable of in future? And how do the companies vying for dominance make their AI not just smartish, but smartest?

Online advertising used to be the be-all and end-all for digital companies. It’s still the primary source of revenue for Google, Facebook, Twitter and company, but there’s a new land grab on the go. Hardware and software companies alike are racing to create an AI so accurate, astute and responsive it could either make our lives unprecedentedly easy and leisure-filled, or see us enslaved by the resultant robotic overlords, depending on which pundits, naysayers or film franchises you choose to believe.

Luminaries like Stephen Hawking and Elon Musk fear AI could mean the end of humanity as we know it: to them, an artificial intelligence greater than any human one seems likely to do the only smart thing, namely dispensing with our destructive and fickle species immediately. Optimists, meanwhile, imagine an idyllic future where humans are spared mundane and tedious tasks thanks to ubiquitous and cheap machines smart enough to do them for us – but not smart enough to get fed up with their lot, presumably.

There’s a good chance both sides overstate their case. Like most technology, AI isn’t the sort of thing that goes from conception to reality overnight. Instead, it’s emerging gradually from other, building-block developments like the digital assistants in our smartphones and computers, and the sensors that help us park our cars, land jumbo jets and put probes on celestial bodies beyond the one we inhabit. But recognising a concrete pillar as an obstacle is one thing; recognising it as a load-bearing architectural element is quite another.

AI still struggles with things like context, ambiguity and recognising images. The solution? Systems that can teach themselves. Microsoft’s CaptionBot lets you feed it images, which it then tries to identify. Following in the footsteps of IBM’s Deep Blue, which beat chess world champion Garry Kasparov, Google’s AlphaGo recently beat a human Go champion. But CaptionBot isn’t particularly adept yet, and AlphaGo isn’t any good at backgammon.

Microsoft’s Tay chat bot, meanwhile, had to be taken offline mere days after launch because its responses to the human users it was meant to learn from rapidly descended into racist drivel. The task of shifting AI from a useful, but at times clumsy, addition to our digital lives to a self-aware entity remains, perhaps blessedly, Herculean. But there are plenty of fine minds trying to whittle it down to more manageable, and potentially lucrative, chunks.

Image: Shaun Hill


Apple’s Siri (Speech Interpretation and Recognition Interface), first introduced with the iPhone 4S, is the poster child for contemporary consumer AI. Early versions of the software could respond to only a limited number of commands, were prone to frequent and sometimes hilarious errors when taking dictation, and struggled with accents – particularly ones, like the Scottish, that even native English speakers can be forgiven for battling to understand.

But Siri, like its peers (and arguably unlike much of humanity), learns from its mistakes. Which is why the Siri found on today’s iPhone 6s, recent iPads, the Apple Watch, Apple’s CarPlay service and even the latest Apple TV seems more like a precocious and capable adolescent than the bumbling, gurgling toddler it was at launch in 2011. If, heaven forbid, you’ve ever tried using Siri on an aeroplane to cue up a song or add an appointment to your calendar, you’ll have noticed it doesn’t work.

That’s because it requires an internet connection. Why? Because every word you say to Siri is sent off to Apple’s servers to be picked apart, processed and stored. It’s this cumulative approach that’s enabled the software to improve as rapidly as it has. Therein lies the challenge for other would-be AI solutions: to make them better, you need people to use them regularly. But to achieve that, you have to make them sufficiently usable that people don’t give up in disgust and revert to typing every query into a search bar.


Microsoft’s equivalent, Cortana – named for an AI character in the software giant’s Halo video game franchise – now comes baked into the company’s smartphones, tablets, laptops and desktops running the Windows 10 operating system. Cortana offers a similar feature set to Siri and, like its archrival, depends heavily on people using it if it’s to improve and stay abreast of the competition – which explains Microsoft’s eagerness to make it as accessible to consumers as possible.

It should come as no surprise that Google’s thrown its hat in the AI ring, too. If you use an Android smartphone, the microphone on the Google search bar lets you dictate queries, to which the search engine provides alarmingly accurate responses. Alternatively, if you allow your handset to “listen” continuously, you can initiate the process by simply saying “OK, Google” in earshot of the device. 

Google’s obvious advantage is that while Apple and Microsoft each make only a handful of handsets, its Android mobile operating system is used on hundreds of devices from an ever-growing list of manufacturers. Plus, it’s made the most progress of any company in giving its AI contextual smarts, with services like Google Now trawling users’ e-mails, calendars and location history, along with data like Google’s live traffic information, to offer timeous feedback – like how long your commute is likely to take on your usual route.

Rounding out the consumer AI roster is the Pringles-tube-like Echo, a piece of hardware from Amazon, the online retailer that also brought us the Kindle e-reader. The Echo is both a wireless speaker and an always-listening device. Using a piece of software called the Alexa Voice Service in conjunction with your home internet connection, the Echo will play you music on command, look up sports results, help you settle braai-side debates by searching the web and, in the US, even let you order an Uber or a Domino’s pizza.

But, more importantly, the Echo can control smarthome devices like connected light bulbs, wall switches, smart locks and thermostats – something all of the aforementioned companies are eager to add to their respective AI’s arsenal.


The biggest obstacle to making existing AI systems truly smart is getting them to play well with others, so to speak. Siri, Cortana and Google Now need to follow Amazon’s lead by being able to communicate with, and issue instructions to, other pieces of software and hardware. And all of them need greater contextual awareness of the sort Google Now offers.

Because while Siri can look up a restaurant’s contact details or travel websites, it still struggles to assimilate data from your calendar with a request to book a table for the night of your mother’s birthday. Each of the readily available AI solutions is limited to the functionality its engineers have built into it. And while each is getting better at recognising variations in pronunciation and other linguistic anomalies, a greater ability to learn and adapt is needed if these digital assistants are to become as capable as their human equivalents.


If a company called Viv Labs gets its way, the digital assistant of the future will come from it, not from any of the big names in tech currently duking it out. Founded by three of the people behind Siri, which they sold to Apple – Adam Cheyer, Chris Brigham and Dag Kittlaus – Viv Labs is working on an AI called Viv that aims to overcome the obstacles current solutions face.

Viv is able to teach itself more than the difference between a Welsh accent and an Irish one – it’s able to make sense of complex sentences. So instead of asking Viv to find you the cheapest flight to London, you can ask it to find you the cheapest flight on a Star Alliance member airline on a particular day, book a window seat on the right-hand side of the plane and use the credit card that earns you the best rewards for such a purchase. 

Rather than selling Viv to a single company, its founders hope to sell it to every major technology company that wants AI integration. In this way, it should make telling your washing machine to do a second spin cycle via your Amazon Echo as easy as ordering an Uber by talking to your handset or finding out – from cross-referenced and reliable sources – whether or not Bill Gates ever actually said that 640K of memory ought to be enough for anyone (by most accounts, he didn’t). 

We’ll know soon enough whether or not Viv can surpass and supersede the likes of Siri. Until then, as Alan Turing wrote in his famous 1950 paper on AI, “Computing Machinery and Intelligence”: “We can only see a short distance ahead, but we can see plenty there that needs to be done.”


Devised in 1950 by British cryptographer and mathematician Alan Turing, who helped crack the Enigma code used for German military communications during the Second World War, the Turing Test is a test of machine intelligence. A machine – or, for modern purposes, a piece of software – is said to have passed the test if a human communicating with it and with another, unseen human cannot reliably tell which is which. That is, the machine can respond in a manner so human-like that an actual human can’t tell it isn’t human, too. The Turing Test continues to feature prominently in discussions about, and demonstrations of, AI.

© Wanted 2022