The path to superintelligence – applications for AI

Categories: Technology

Photo by Andy Kelly on Unsplash


22 October 2018

In 2018 it’s hard to go a week without an AI innovation making headlines. Just last week, mobile intelligence company NEX Team released HomeCourt, an iOS app that combines your smartphone camera with artificial intelligence to count, track, and chart basketball shots in real time. The app allows players to analyse and improve their own performance, and has the potential to transform the way athletes train.

HomeCourt is admittedly a niche application, but it highlights the far more widespread impact AI is poised to have on society. As engineers continue to develop artificial intelligence across industries, its applications could revolutionise everything from aerospace technology and healthcare through to civil construction and lifestyle. With that in mind, this article explores where AI stands in 2018, where its development is heading, and what the implications will be when we get there.


Define: artificial intelligence

Depending on who you talk to, the definition of artificial intelligence will vary. It’s either an abstract concept, a discipline, or the end-goal of said discipline. For the purpose of this article, we will tend towards the latter two – discussing the development (and hypothetical end goal) of computer systems able to perform various tasks as well as, or better than, a human.

Though intelligent machines have been described in various forms across many societies since antiquity, AI received its current name in 1956, at the Dartmouth Summer Research Project on Artificial Intelligence workshop. Since then, the study and development of AI have proliferated into an industry with a projected worth of $100 billion USD by 2030, and $500 billion by 2050.


Automation vs AI

Before we dive too deeply into the future of AI, it’s important to first clarify the distinction between artificial intelligence and automation, as the two terms are often – and incorrectly – treated as interchangeable. Automation typically refers to the use of hardware or software to automatically perform repetitive tasks that humans would otherwise have to do themselves. It follows explicit, manually programmed rules (often in the ‘if this, then that’ (IFTTT) pattern) and outputs a predictable result each time.

Automation updates your bank statement when triggered by new purchases, sends a mailing campaign to your entire subscriber list at once, and quickly converts a selection of images into a photo grid (using Apple’s new Shortcuts app, for example). Automation is a highly functional and versatile tool found in the majority of industries today, but it isn’t necessarily an ‘intelligent’ process: it receives an input, then generates a predictable output.
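To make that concrete, here is a minimal sketch of rule-based automation in Python – a hypothetical bank-statement categoriser, with the keywords and categories invented purely for illustration:

```python
# Rule-based automation: explicit 'if this, then that' logic.
# (A hypothetical bank-statement categoriser; keywords invented.)

def categorise_purchase(description: str) -> str:
    """Assign a spending category using fixed, hand-written rules."""
    rules = {
        "uber": "Transport",
        "tesco": "Groceries",
        "netflix": "Entertainment",
    }
    for keyword, category in rules.items():
        if keyword in description.lower():  # if this...
            return category                 # ...then that
    return "Uncategorised"

print(categorise_purchase("NETFLIX.COM subscription"))  # -> Entertainment
```

Every rule here was written by a person, and the same input will always produce the same output.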

Artificial intelligence, by comparison, produces less predictable outcomes. The ultimate aim of AI research is to produce a machine, based on large volumes of data and highly complex algorithms, that can mimic or improve on what a human can do, say, or think. AIs can still output responses to a given input, but the difference from automation is that AIs use machine learning to analyse patterns in information, learn which outputs are wrong over time, and ultimately produce relevant, self-selected responses – much like you or me.
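By contrast, a learning system writes its own ‘rules’. The toy Python sketch below – a single perceptron trained on made-up data, purely for illustration – captures the loop described above: make a prediction, check which outputs are wrong, and adjust:

```python
# A minimal learning loop: instead of hand-written rules, the program
# adjusts its own parameters whenever its output is wrong.
# (Toy data and a single perceptron, purely for illustration.)

# Each example: (hours of daylight, temperature in degrees C) -> 1 if "summer"
examples = [
    ((16.0, 25.0), 1), ((15.0, 22.0), 1),
    ((8.0, 3.0), 0),   ((9.0, 5.0), 0),
]

weights = [0.0, 0.0]
bias = 0.0

for _ in range(10):                      # repeat over the data
    for (x1, x2), label in examples:
        prediction = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        error = label - prediction       # learn which outputs are wrong...
        weights[0] += error * x1         # ...and nudge the parameters
        weights[1] += error * x2
        bias += error

print(weights, bias)  # the learned "rule" was never written by hand
```

The final weights encode a decision rule that no programmer specified – a crude stand-in for what commercial machine learning does with millions of parameters.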


How intelligent is AI at the moment?

If you base your idea of AI on Hollywood, then you likely envisage the advanced, hypothetical depictions of AI – Marvel’s Jarvis, Prometheus’ David, the Terminator, the Matrix, and so on. In reality, those representations remain a distant, hypothetical future. Today’s AI is universally categorised as artificial narrow intelligence (ANI): it can only perform a single (albeit complex) task, and even if it outperforms humans at that task, it can’t use what it has learned to do anything else. It pales in comparison to our ability to ‘infer’.

Given the air of futurism that surrounds AI, it’s understandable that people overlook today’s forms of narrow intelligence in research and commercial settings. However, it’s important to remember that, even considering the disparity with science fiction, narrow intelligence is in no way trivial.

AI improves as we continually refine the underlying algorithms, feed in more data, and increase the available processing power. Narrow intelligence has already beaten the world chess and Go champions, produced a new category of computational photography, taught itself a range of subjects, and even spotted potential alien signals three billion light-years away.

Combining the right data sets and algorithms with sufficient processing power has produced a plethora of applications that are changing our lives on individual, social, economic, and political levels.

AI today is orders of magnitude beyond automated machines in complexity and performance but, again, it still has neither general intelligence nor self-awareness, and it remains limited to its one task. Whether that task is predictive texting, defending IT systems against cybercriminals, piloting a vehicle, or analysing medical images for rapid diagnoses, AI can’t suddenly apply the skill to something else.


Artificial General Intelligence

Transfer learning, a subset of machine learning research focused on ‘transferring’ the knowledge gained from one problem to a different (but related) problem, is therefore of significant interest to researchers.
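To make the idea concrete, here is a minimal sketch of how transfer learning is commonly done in practice, using the PyTorch and torchvision libraries; the five-class task and training details are assumptions for illustration, not a method described in this article:

```python
# A sketch of transfer learning with PyTorch/torchvision: reuse a
# network pre-trained on ImageNet and retrain only its final layer
# for a new, hypothetical five-class image task.

import torch
import torch.nn as nn
from torchvision import models

# Load knowledge 'transferred' from the ImageNet problem.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the transferred layers so their stored knowledge is retained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classification layer for the new problem.
model.fc = nn.Linear(model.fc.in_features, 5)

# Only the new layer's parameters are trained on the new task's data.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```

Most of what the network ‘knows’ carries over unchanged; only a small, task-specific piece is learned afresh.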

By repeatedly solving problems, retaining the data, and attempting to apply it to new scenarios, AI gradually increases its understanding of the world and gains the ability to solve a variety of tasks. This is a key step towards achieving an Artificial General Intelligence equal to human intelligence in every capacity. That said, there is much debate as to whether we will ever artificially achieve true general intelligence.

Because it’s difficult to quantify human intelligence with a universally accepted metric, and because human intelligence involves various processes irrelevant to AI’s development, some have questioned whether we should shift focus and instead develop AI that uses non-human approaches to problem-solving that complement our own thinking.


A way off

While the arrival of deep learning and neural networks produced a significant performance jump for modern AI, it’s been suggested that general intelligence won’t surface until a third wave of cognitive architectures allows a more seamless interaction of various key functions – long- and short-term memory, context, metacognition (self-awareness), and reasoning, for example.

Nonetheless, the transition to commercial AGI(s) will be an important moment in human history, with AI rapidly producing waves of technological development that ripple through all levels of society. And, perhaps more significantly, from that point on such systems will rapidly start to develop their own capabilities.


Artificial Superintelligence

While the journey from narrow to general intelligence has already spanned decades, the transition from the first self-programming general intelligence to superintelligence should be relatively short. Once the first low-level general intelligence is developed and can understand the world around it – even at a level comparable to, say, a four-year-old’s – it will rapidly develop to the point where it doesn’t just surpass human capabilities, it dwarfs them.

At this point, with human-equivalent processing a milestone of the past and a new, self-optimised way of thinking, the AI could begin to solve the many mysteries of the universe that currently evade us. Were it to uncover the bridge between string theory and quantum mechanics, or a new wave of medicine that outpaces the rate of ageing, human life spans could increase dramatically – or we could set ourselves on a path toward higher planes of multi-dimensional existence.


