[Image: a young girl holding the hand of a robot.]

In 2018, it's hard to go a week without seeing an AI innovation in the news headlines. Just last week, NEX Team (a mobile intelligence company) released HomeCourt – an iOS app that combines your smartphone camera with artificial intelligence to count, track, and chart basketball shots in real time. The app allows players to analyse and improve their own performance, and has the potential to transform the way athletes train. While HomeCourt represents a niche application of AI, engineers' continued development of artificial intelligence across various industries could revolutionise everything from aerospace technology and healthcare through to civil construction and lifestyle activities. As such, we aim to explore where AI stands in 2018, where its development is heading, and the implications of getting there.

What is artificial intelligence?

Depending on who you talk to, the definition of artificial intelligence varies: it's either an abstract concept, a discipline, or the end goal of that discipline. For the purposes of this article, we will tend towards the latter two – discussing the development (and hypothetical end goal) of computer systems able to perform various tasks as well as, or better than, a human could.

Though intelligent machines have been described in various forms across many societies since antiquity, AI received its current name in 1956 at the Dartmouth Summer Research Project on Artificial Intelligence workshop. Since then, the study and development of AI have proliferated into an industry with a projected worth of $100 billion USD by 2030, and $500 billion by 2050.

Automation vs AI

Before we dive too deeply into the future of AI, it's important to first clarify the distinction between artificial intelligence and automation, as the two terms are often – and incorrectly – treated as interchangeable. Automation typically refers to the use of hardware or software to automatically perform repetitive tasks that humans would otherwise have to do. It follows explicit, manually programmed rules (often in the 'if this, then that' (IFTTT) pattern) that produce a predictable result each time. Automation updates your bank statement when triggered by new purchases, sends a mailing campaign to your entire subscriber list at once, and quickly converts a selection of images into a photo grid (using Apple's new Shortcuts app, for example). Automation is a highly functional and versatile tool found in the majority of industries today, but it isn't necessarily an 'intelligent' process: it receives an input, then generates a predictable output.
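
To make the distinction concrete, here is a minimal sketch in Python of an 'if this, then that' rule of the kind automation relies on. The trigger and action names (on_new_purchase, update_bank_statement) are hypothetical, not a real banking API; the point is simply that the rule is fixed in advance and the same input always produces the same output:

```python
# A minimal, hypothetical 'if this, then that' automation rule.
# The trigger and action below are illustrative, not a real banking API.

def update_bank_statement(purchase):
    """Action: record the purchase (here we just print it)."""
    print(f"Statement updated: -{purchase['amount']:.2f} at {purchase['merchant']}")

def on_new_purchase(purchase):
    """Trigger: called whenever a new purchase event arrives."""
    if purchase["amount"] > 0:           # 'if this': the fixed, explicit rule
        update_bank_statement(purchase)  # 'then that': the predictable action

# The same input always produces the same output:
on_new_purchase({"amount": 4.50, "merchant": "Coffee Shop"})
```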

Artificial intelligence, by comparison, produces less predictable outcomes. The ultimate aim of AI research is to produce a machine that, drawing on large volumes of data and highly complex algorithms, can mimic or improve on what a human can do, say, or think. AIs still produce responses to a given input, but the difference from automation is that they use machine learning to analyse patterns in information, learn over time which outputs are wrong, and ultimately produce relevant, self-selected responses – much like you or me.
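
To show the contrast, the toy sketch below (again Python, and purely illustrative rather than any real system) captures that learning loop in miniature. No one writes the rule 'flag values above 1.0'; the program sees only labelled examples and nudges its two parameters whenever it answers wrongly:

```python
# A toy learning loop: no hand-written rule, only labelled examples.
# Illustrative task: learn to flag values above roughly 1.0 as 'large' (label 1).

examples = [(0.2, 0), (2.5, 1), (0.8, 0), (4.0, 1), (1.2, 1), (0.05, 0)]

weight, bias, lr = 0.0, 0.0, 0.1   # the parameters start knowing nothing

for epoch in range(20):
    for x, label in examples:
        prediction = 1 if weight * x + bias > 0 else 0
        error = label - prediction   # wrong answers drive the update
        weight += lr * error * x
        bias += lr * error

# The learned threshold generalises to values it has never seen:
print(1 if weight * 3.0 + bias > 0 else 0)   # -> 1 ('large')
print(1 if weight * 0.5 + bias > 0 else 0)   # -> 0 ('small')
```

Real systems involve millions of parameters and far richer data, but the principle is the same: the behaviour is learned from errors rather than spelled out in advance.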

How intelligent is AI at the moment?

While Hollywood certainly favours advanced, hypothetical depictions of AI – Marvel's Jarvis, Prometheus' David, the Terminator, the Matrix, and so on – the reality is that we're quite far away from that. In fact, all current AI falls into the lowest of the three commonly cited intelligence categories: artificial narrow intelligence (ANI), which is limited to performing a specific (albeit complex) task.

Given the air of futurism that we've attributed to AI, it's understandable that people don't always recognise the current forms of AI in modern research and commercial settings. This isn't to trivialise their usefulness, however, as AIs improve as we continually refine their underlying algorithms, feed more data into them, and provide them with more processing power. AI systems have already beaten the world chess and Go champions, produced a new category of computational photography, taught themselves various subjects, and even spotted potential alien signals three billion light years away. Combining the right data sets and algorithms with sufficient processing power has produced a plethora of AIs that are changing our lives on individual, social, economic, and political levels.

That said, while the average AI today is orders of magnitude above automated machines in its complexity and performance, it has neither general intelligence nor self-awareness, and is limited to performing a single task. Whether that task is predictive texting, defending IT systems against cybercriminals, piloting a vehicle, or analysing medical images for rapid diagnoses, an AI can't suddenly transfer skills developed in one area to something else.

Artificial General Intelligence

Transfer learning, a subset of machine learning research focused on 'transferring' the stored knowledge gained from one problem to a different (but related) problem, is therefore of significant interest to researchers. By repeatedly solving problems, retaining the data, and attempting to apply it to new scenarios, an AI would gradually increase its understanding of the world and 'know' how to solve a variety of tasks. This is a key step towards achieving an Artificial General Intelligence (AGI) that matches human intelligence across the board, but there is debate as to whether we will ever 'technically' achieve true AGI. Because it's difficult to quantify human intelligence into a universally accepted metric to compare against, and because human intelligence involves various processes that are not relevant to AI's development, some have questioned whether we should instead focus on developing AIs that use non-human approaches to problem-solving that complement our own thinking.
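
Continuing the toy learner from earlier (with the caveat that real transfer learning reuses the layers of large neural networks, not two scalar parameters), the sketch below illustrates the idea: a learner trained on one task starts a related task from its learned parameters rather than from scratch, so far less new data is needed:

```python
# A minimal sketch of transfer learning, reusing the toy learner above.
# Illustrative only: real transfer learning reuses layers of a deep network.

def train(examples, weight=0.0, bias=0.0, lr=0.1, epochs=20):
    """Perceptron-style training, optionally starting from prior knowledge."""
    for _ in range(epochs):
        for x, label in examples:
            prediction = 1 if weight * x + bias > 0 else 0
            error = label - prediction
            weight += lr * error * x
            bias += lr * error
    return weight, bias

# Task A: flag values above roughly 1.0.
task_a = [(0.2, 0), (2.5, 1), (0.8, 0), (4.0, 1), (1.2, 1), (0.05, 0)]
w, b = train(task_a)

# Task B is related (a nearby threshold) but offers only two examples.
# Starting from task A's parameters transfers what was already learned.
task_b = [(0.9, 0), (1.6, 1)]
w2, b2 = train(task_b, weight=w, bias=b, epochs=5)

print(1 if w2 * 2.0 + b2 > 0 else 0)   # -> 1: knowledge carried over
```

Here the parameters carried over from task A already handle task B's two examples; a learner starting from zero would need considerably more data or training passes.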

What's more, while the arrival of deep learning and neural networks (check back for a forthcoming QEPrize article) is producing a significant performance jump for ANI, it's been suggested that AGI won't actually surface until a third wave of cognitive architectures delivers highly integrated systems permitting the seamless interaction of various functions – long- and short-term memory, context, metacognition (self-awareness), and reasoning, for example. Rather than following a linear progression, some have suggested that general intelligence will arrive in a sudden spike once these separate key factors come together. Nonetheless, the transition to commercial AGI will be an important moment in human history, with AGIs producing waves of technological development that ripple through all levels of society. Perhaps more significantly, from this point on they will rapidly start to develop their own capabilities.

Artificial Superintelligence

At this point we are speaking purely hypothetically, but while the transition from ANI to AGI may span decades, the transition from self-programming AGI to artificial superintelligence (ASI) could be comparatively instantaneous. Once the first low-level general intelligence is developed and can understand the world around it – even at a level comparable to, say, a four-year-old's – it could develop over a period of days (or even hours) to the point where it doesn't just surpass human capabilities, it dwarfs them. At that point, with human-equivalent processing a milestone of the past and a new, self-optimised way of thinking, the AI could tackle the many mysteries of the universe that currently evade us.

Dystopian alternatives aside, imagine the bridge between string theory and quantum mechanics uncovered, or a new wave of medicine that outpaces the rate of ageing: human life spans could increase dramatically, or we could set ourselves on a path towards higher planes of multi-dimensional existence.

The road to superintelligence

Now, the above paragraphs are, as we discussed, a hypothetical look at the future trajectory of AI and the implications of its further development, and we're certainly a long way from achieving general intelligence. Even so, we're in a great position.

Compared with the Logic Theorist program developed in the 1950s, its 2018 counterparts have access to incomparable reservoirs of information and processing power. Jarvis may indeed be a distant dream, but ANI is real, it's pervasive in society, and it's already opening up a realm of possibilities deemed inconceivable a decade ago.

Engineers and computer scientists are working ceaselessly to bring us the next wave of artificial intelligence, but this month, we’d like to take a moment to appreciate what they’ve achieved already.


Photo by Andy Kelly on Unsplash
