The Role of Artificial Intelligence in Future Technology
“That’s me!” is what Lee Sedol from South Korea, one of the world’s leading professional Go players, might have proudly replied when asked “Who is the best player at the Chinese game of Go?” – until he was beaten in a five-game match in March 2016. His opponent was a computer program called AlphaGo (Silver et al., 2016). For the first time, a computer had exceeded human-level performance at playing Go. This had previously been thought impossible because the number of valid sequences of moves is outrageously large: roughly 250^150, compared to roughly 35^80 for chess. The artificial player could not search this tree exhaustively; it had to mimic a human in that it assessed a given situation to make intelligent decisions – decisions more intelligent than the ones made by the human sitting across the table. Today, the best Go player is a computer. The machine, while drawing on its classic strengths such as raw processing power, also imitates human judgment – and thanks to this combination it is now better than any human in this particular field. That can be considered a fundamental change.
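To get a feel for these magnitudes, here is a quick back-of-the-envelope computation (a minimal sketch; the branching factors of ~250 and ~35 moves per turn and game lengths of ~150 and ~80 turns are the commonly cited approximations):

```python
import math

# Approximate game-tree complexity: branching_factor ** game_length,
# computed in log10 to avoid astronomically large integers
go_log10 = 150 * math.log10(250)    # Go: ~250 legal moves per turn, ~150 turns
chess_log10 = 80 * math.log10(35)   # Chess: ~35 legal moves per turn, ~80 turns

print(f"Go:    ~10^{go_log10:.0f} possible move sequences")
print(f"Chess: ~10^{chess_log10:.0f} possible move sequences")
# Go:    ~10^360 possible move sequences
# Chess: ~10^124 possible move sequences
```

Exhaustively enumerating ~10^360 sequences is hopeless for any conceivable hardware, which is why AlphaGo had to evaluate positions rather than brute-force the tree.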
AlphaGo is an example of an algorithm that both the public and experts would describe as behaving intelligently. It is an example of artificial intelligence, AI for short. While the word artificial is easy to define in this context – namely as non-human / non-natural – intelligence is undoubtedly harder. Turing (1950) gave it a shot by coming up with “The Imitation Game”, also known as the Turing test: if a machine was able to fool a human interrogator into thinking they were chatting with an actual person, the machine would pass the test. According to Turing, this test can serve as a replacement for the key question: “Can machines think?” Since then, many more definitions of intelligence and AI have been proposed, ranging from “Intelligence is whatever machines haven’t done yet” (Tesler’s Theorem, ~1970) to “[AI is] a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation” (Kaplan and Haenlein, 2018). Note that both machine learning and deep learning are subsets of AI (Goodfellow et al., 2016).
In order to predict the future role of AI, it seems appropriate to take a closer look at recent developments. AI has become part of our everyday lives: we talk to our smartphones and they answer our questions, which involves voice recognition and natural language understanding. When we open our email inbox, text classification algorithms have already filtered out most of the spam, so we do not have to bother spotting and deleting it manually. We search the web and expect results in order of relevance, with the ones that answer our query best at the top. Algorithms that can listen to music and transcribe it to sheet music make life easier for musicians. Beyond consumer applications, enterprises are employing AI as well. To name just three examples (there are many more): SAP parses invoices with deep learning methods (Katti et al., 2018), Airbus encourages research on detecting anomalies in the operation of helicopters and airplanes based on sensor data, and Tesla’s Autopilot is pushing the frontier of autonomous driving.
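To make the spam-filtering example a bit more tangible, here is a toy sketch of a text classifier using scikit-learn (a minimal illustration only; the training emails are made up, and production filters are far more sophisticated):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny, made-up training set: 1 = spam, 0 = legitimate
emails = [
    "WIN a FREE iPhone now, click here",
    "Meeting moved to 3pm, see agenda attached",
    "Cheap meds, limited offer, buy now",
    "Can you review my draft before Friday?",
]
labels = [1, 0, 1, 0]

# Bag-of-words features fed into a naive Bayes classifier
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["Click here for your free offer"]))     # likely [1] (spam)
print(model.predict(["Agenda for Friday's review meeting"]))  # likely [0]
```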
Lifting these considerations beyond the examples to a higher level of abstraction, the current role of AI can be described as supportive and very present. Today’s technology makes heavy use of AI: it enhances the user experience, improves human-computer interaction, solves hard problems in domains like natural language processing and computer vision, and relieves humans of repetitive tasks. Where desired, humans still have the last word and can validate the AI’s output. New products that were previously not technically viable are being built with AI in mind, or even with AI as their critical basis. AI is the technology behind many of today’s smart assistants.
I expect the recent tendencies of AI usage to continue, with a gradual transition from supportive use-cases to mission-critical ones. Accelerated by a very active and open-source-friendly research community, companies of all sizes will try to improve their technology with AI. McKinsey estimates that “AI techniques have the potential to create between $3.5T and $5.8T in value annually across nine business functions in 19 industries [across the world]” (source: Forbes). As AI gains traction, more and more fields of technology will open up in which AI is not only nice to have but an integral part. Large players such as Alteryx, Microsoft, Google, SAP, Databricks, SAS, and Rapid Insight are working towards becoming the platforms for the intelligent algorithms of the future: they want the intelligent algorithms of all businesses to run on their machine learning clouds. These efforts suggest an upcoming expansion of the AI market.
Brundage et al. (2018) state that “less attention has historically been paid to the ways in which artificial intelligence can be used maliciously”. The authors describe a variety of malicious use-cases of AI, for instance the “automation of social engineering attacks”, “state use of automated surveillance platforms to suppress dissent”, and “distributed networks of autonomous robotic systems […] execute rapid, coordinated attacks”. I expect these use-cases to become a serious threat. This is not the kind of “Transformers will kill us all” talk, but rather the assumption that the availability of novel tools with unprecedented capabilities will also call villains to the scene. In severe cases, the villains may even be state-level institutions: recently, information leaked about a censored search engine requested by the Chinese government. This is especially problematic because there is no higher instance capable of intervening.
Suppose that, at a future point in time, AI were able to fully understand the content of a human conversation. We would be able to build tools that visualize the branches of a discussion, annotate individual sentences with tags like example, proof, or opinion, and generate an abstract automatically (a sketch of how such an annotated discussion might be represented follows below). In a further step, the computer could engage in the discussion and contribute its own ideas, drawing on a knowledge base of seemingly infinite size. As another example, imagine AI could, given all the information accessible, make predictions about the future: global leaders could make decisions with the best outcome for all parties, the effects of draft laws could be predicted, stock prices could be forecast, …
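As a purely hypothetical sketch of the first idea, an annotated discussion could be stored as a tree of tagged utterances (all names and the tag set are illustrative; no such tool exists in this form, and assigning the tags is exactly the hard, unsolved part):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Utterance:
    speaker: str
    text: str
    tag: str  # e.g. "opinion", "example", "proof" - assigned by a future AI
    replies: List["Utterance"] = field(default_factory=list)

def outline(node: Utterance, depth: int = 0) -> None:
    """Walk the discussion tree and print an indented, tagged outline."""
    print("  " * depth + f"[{node.tag}] {node.speaker}: {node.text}")
    for reply in node.replies:
        outline(reply, depth + 1)

root = Utterance("Alice", "AI will transform medicine.", "opinion")
root.replies.append(Utterance("Bob", "Imaging models already rival experts.", "example"))
outline(root)
```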
The past years have been exciting, for AI researchers and consumers alike. We have seen wide adoption of new, AI-based technologies, and I do not expect this trend to slow down anytime soon. Despite the risk of malicious use, we can look forward to the future: AI will continue to support consumers, professionals, and businesses. There are, and will continue to be, plenty of exciting open research questions for scientists to solve.
I wrote this essay in February 2019, as part of the application process for Master’s studies at the Technical University of Munich. Please feel encouraged to add your thoughts on the topic in the comments section of this post. The featured image shows a sunset captured in Los Angeles in March 2018. Thanks to ML-powered post-processing methods, the photo quality of smartphone cameras has become remarkably good in recent years.