Future of AI – less artificial, more intelligent (part two)


Categories: Technology

Images created by Beyond Limits – Jessica Simm, Creative Producer.


25 March 2019

Author: Mark James



Artificial intelligence, robotics, and the pursuit of autonomous systems that we can trust. In part one, Beyond Limits CTO Mark James sets the scene for new developments at the intersection of AI and robotics. In part two, he describes how cognitive intelligence moves to the extreme edge, and provides cautionary guidance for humans to remain in control of artificial intelligence as it grows in power and capability.


Connectivity and Intelligent Analysis Remain Stumbling Blocks

Today we live in a digital world – virtually anything you can think of can be connected to virtually anything else; when you connect them, you generate data. Of course, moderation is important here, but the fact is that data is now a fundamental resource in global society and we need to capitalize on it.

Instead of simply sensing our environment, we can transform it into something that is safer, more profitable, and insightful. The key to this is ‘actionable intelligence’ – data and information that can be immediately acted upon without further processing by man or machine.

As commercial and industrial IoT devices proliferate, connecting them and getting them to behave intelligently are among the biggest challenges to realizing the full potential of automation.

An industrial facility might have anywhere from 20,000 to 30,000 sensors monitoring the status of thousands of machines and processes, but they often reside in silos that don’t communicate. Some AI solutions are dependent on a cloud service architecture – essentially a mainframe approach with centralized computing – but in many industrial locations it’s hard to rely on sufficient bandwidth (or even connectivity in the first place).

Data needs to be collected, correlated with historical performance data, and analyzed to provide actionable information and make decisions in real time. There’s no time to reach out to the mother ship for answers.
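
To make that concrete, here is a minimal sketch of an edge-side decision loop. The action name, window size, and threshold are illustrative assumptions, not a real deployment:

```python
# Hedged sketch: the device keeps its own rolling history and decides
# locally, with no round trip to a cloud service. Thresholds are hypothetical.
from collections import deque
from statistics import mean, stdev

history = deque(maxlen=1000)  # recent readings kept on the device itself

def on_reading(value: float) -> str:
    """Correlate a new reading with local history and act in real time."""
    if len(history) >= 30:
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(value - mu) > 3 * sigma:
            history.append(value)
            return "shut_down_pump"  # decide at the sensor, not the mother ship
    history.append(value)
    return "ok"
```

The entire decision happens on the device, so response time is bounded by local compute rather than by network latency or availability.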


Intelligence Where It Counts

One important strategy for obtaining timely actionable intelligence is to embed intelligence at the source. This enables decisions to be made at the sensor, rather than ‘phoning home’ to headquarters or a cloud service for what to do next.

Since many automation applications involve operational control, making decisions quickly is essential. Unfortunately, the inherent latency of ‘crunching the numbers’ far from the edge is too great for many applications. In some cases, edge devices must be controlled within milliseconds.

With military aircraft, for example, sensor data needs to be acted upon constantly, on the spot. If the thousands of airfoil sensors were wired to a central computer onboard the aircraft, the wiring and computer could weigh more than the aircraft’s wings. Communicating to the cloud at Mach speed is also out of the question, so we need to use edge computing architectures and equip smart devices with artificial intelligence.


AI Moving to the Edge

Cognitive intelligence is destined to be distributed to the edge of the network, typically implanted in chips. With cognitive intelligence and situational awareness embedded at the extreme edge, we can read sensor data and analyze it in the context of historical data, human expertise, and overall system performance goals.

With this, we can solve problems on the spot, in real time, which has profoundly positive implications across a breadth of applications and industries – it could bring human expertise to every node in a network, no matter how geographically dispersed. For autonomous operations to succeed on Earth, as they have in space, the next big milestone in AI is intelligent hardware.


When Tools Become Extensions of Ourselves

We’ve come a long way since humans first used stone tools to conquer our environment. From the smartphones that keep us in contact with the world to the myriad appliances we use every day, tools have become an innate part of human life. However, we’re at an interesting transition point.

As our tools become more advanced, they are evolving from passive extensions of ourselves into active partners working alongside us. An axe or a hammer is a passive extension of the hand, but a drone forms a distributed intelligence in partnership with its operator. Such tools can interact with us in ways never before possible. As in the working relationship between a human and a horse or dog, there is a shared mission or purpose, along with semi-autonomous action.

Our tools are now becoming actors unto themselves, and their future is in our hands. Think about the evolution of the car: from horse and carriage to the Model T, from cruise control to adaptive cruise control, and now to driverless cars. Engineers are even programming cars with subtle ethics models that help them determine how to proceed in situations where an accident is unavoidable.

These split-second decisions are not the province of simple sensor data or rules-based decision trees. Situational awareness and cognition are essential for informed judgment by autonomous systems.

Neural networks connected to CCTV cameras now easily outperform human beings in facial recognition in both speed and accuracy. Soon, the technology will be in place to make it possible to track everyone, everywhere, all the time. Naturally, this raises ethical questions.

Machine intelligence has made major advancements in the last five years, but still has a long way to go. It is probably impossible to limit how far AI will evolve, but we have time to embed safeguards to limit how these systems can affect us.


Limits to Machine Power

Many people believe that artificial intelligence is the same thing as machine learning. After all, they’ve heard about machine learning systems that can win a board game or video game, or one that can identify pictures of cats. But conventional machine learning solutions aren’t cognitive; they are trained from data but lack the ability to leap beyond missing or broken data and build a hypothesis about potential actions. Machine learning can be effective in detecting something anticipated, but it fails when confronted by the unexpected.
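
As a toy illustration of that gap, the sketch below uses synthetic data and a simple distance test to show why a conventional model needs an explicit novelty check before its output can be trusted; everything here is an assumption for illustration:

```python
# Hedged sketch: a trained model will score anything you give it, so we add
# an explicit 'have I seen anything like this?' check. Data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
train_data = rng.normal(0.0, 1.0, size=(5000, 8))   # stand-in training set
center = train_data.mean(axis=0)
radius = np.linalg.norm(train_data - center, axis=1).max()

def predict_or_abstain(x: np.ndarray) -> str:
    if np.linalg.norm(x - center) > radius:          # unlike anything in training
        return "unexpected input – abstain rather than guess"
    return "anticipated input – run the trained model"

print(predict_or_abstain(np.full(8, 10.0)))          # far outside anything seen
```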

Cognitive solutions, meanwhile, are partially based on prior observation, but rely more on the deductive and inductive aspects of cognition. Beyond Limits technology, for instance, is a cognitive leap beyond conventional AI. It employs higher-order symbolic reasoning, providing a human-like ability to perceive, understand, correlate, learn, teach, reason, and solve problems faster than existing AI solutions.
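
To illustrate what symbolic reasoning means at its most basic, here is the textbook forward-chaining technique – not Beyond Limits’ engine, and with deliberately simple, hypothetical facts and rules:

```python
# Toy forward-chaining inference: apply IF-THEN rules to known facts until
# nothing new can be concluded. Every conclusion is traceable to a rule.
rules = [
    ({"pressure_high", "valve_closed"}, "risk_of_rupture"),
    ({"risk_of_rupture"}, "open_relief_valve"),
]

def infer(facts: set) -> set:
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"pressure_high", "valve_closed"}))
# the engine deduces 'risk_of_rupture', and from that, 'open_relief_valve'
```

Unlike a statistical model, every step in the chain can be shown to a person and checked against the rules that produced it.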


Powering Machines That Think

Core to Beyond Limits’ cognitive AI systems is the ‘symbolic reasoning engine’, which implements cognitive intelligence. It is a cognitive engine that takes the outputs from sensors and neural networks, applies its education to understand what it sees (as it sees it), and explains its answers so that a person can understand.

This is a new approach to reasoning: it’s like having your own personal Sherlock Holmes working for you 24/7, looking for subtle clues to catch the thief while he is committing the crime, rather than after.
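
As a rough sketch of that pipeline shape – with a stand-in scorer and made-up knowledge, not the actual engine – the neural layer produces a number and the knowledge layer must interpret it and explain itself in plain words:

```python
# Hedged sketch of the hybrid shape: a numeric score from a detector is
# interpreted by a knowledge layer that explains its answer readably.
def detector_score(frame: list) -> float:
    # stand-in for a neural network's anomaly score in [0, 1]
    return min(1.0, sum(abs(x) for x in frame) / len(frame))

def interpret(score: float) -> str:
    # stand-in for 'education': encoded knowledge plus a human-readable rationale
    if score > 0.8:
        return (f"Likely anomaly (score {score:.2f}): consistent with a "
                f"known surge signature; recommend throttling intake.")
    return f"Nominal (score {score:.2f}): no action required."

print(interpret(detector_score([0.9, 1.2, 0.7, 1.1])))
```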

The AI autonomously sifts through corridors of information, discovering plausible facts and scenarios from diverse data. It does this, while avoiding the problems of typical machine learning systems, through a technique called ‘autonomic monitoring’.

Autonomic monitoring is based on the philosophy that the brain is composed of distinct but interacting modules. These modules, using both local learning (training) and innate knowledge (education), will self-organize to solve problems. Because the system is trainable and learns autonomously on the fly, autonomic monitoring allows for serendipitous discovery during data analysis.
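
One way to picture that modular philosophy in code – a deliberate oversimplification with hypothetical modules, not Beyond Limits’ implementation – is several independent monitors, some trained and some educated, pooling their findings on the same observation:

```python
# Hedged sketch: independent modules examine the same observation and pool
# findings, so unexpected combinations can surface during analysis.
def learned_monitor(obs: dict) -> list:
    # stands in for a module trained on historical data
    return ["temperature drift"] if obs.get("temp", 0) > 90 else []

def knowledge_monitor(obs: dict) -> list:
    # stands in for a module educated with engineering know-how
    return ["cavitation risk"] if obs.get("flow", 1.0) < 0.2 else []

def autonomic_monitoring(obs: dict, modules: list) -> list:
    return [finding for m in modules for finding in m(obs)]

print(autonomic_monitoring({"temp": 95, "flow": 0.1},
                           [learned_monitor, knowledge_monitor]))
# -> ['temperature drift', 'cavitation risk']
```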


Shaping Our Future

It’s inevitable that AI systems will eventually evolve to become our intellectual superiors, with or without our permission. The question, therefore, is how we can shape this metamorphosis so that AI systems develop into something that works to our advantage. This may sound like an ultimatum, but we have a choice: to shape AI so it becomes our trusted companion (not servant), or our enemy.

At the moment, we’re safe. Conventional artificial intelligence today is known as narrow AI (or weak AI). These systems are good at performing a single specific task like playing chess, solving equations, or driving a car, but they can’t do anything else. Cognitive AI is the middle layer of sophistication in the pyramid of artificial intelligence – more capable than conventional AI, but not at the level of fictional movie robots.

However, the long-term goal of AI scientists is to create strong AI. While narrow AI may outperform humans at its specific task, strong AI would outperform humans at nearly every cognitive task. It would, as you often hear in movies, “evolve beyond its original programming”. And strong AI would never need to sleep, take vacations, or participate in any of the distractions that make our lives more pleasant but admittedly less productive.


The concern about advanced AI isn’t really about malevolence versus benevolence. It’s about competence.


Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions like love or hate. And they agree that there is no reason to expect AI to become either intentionally malevolent or benevolent. Instead, when considering how AI might become a risk, two things come to mind:

  • First, the AI might be intentionally programmed to do something devastating.
  • Second, the AI might be programmed to do something beneficial, but autonomously develops a destructive method for achieving its goal.

The concern about advanced AI isn’t really about malevolence vs. benevolence. It’s about competence, which is key to building trust. A super-intelligent AI will be extremely good at accomplishing its goals. If those goals aren’t aligned with ours, we have a problem.


Improving the Odds for Humans

There are two ways we can improve our odds of existing harmoniously with these intelligent entities. The first is to borrow an idea from Isaac Asimov: his Three Laws of Robotics. These three laws define crucial rules that can be hardwired into all intelligent systems, preventing them from causing direct or indirect harm to humans or to themselves.


The Three Laws of Robotics

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

- Isaac Asimov


To make this a reality rather than just a 1940s sci-fi story, we at Beyond Limits are developing a technology we call ‘Trusted Autonomy’, the first step in ensuring machines will remain our trusted helpers and not our destroyers. Our systems are designed to explain their reasoning and present evidence to people, ensuring that decision-makers are aware of the rewards, risks, and reasons for the solutions that our AI recommends. People make the final decisions. Over time, as the system gets smarter and produces valuable results, humans can choose to trust the AI’s thinking.
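
In rough pseudocode terms, that human-in-the-loop pattern looks like the sketch below; the field names are illustrative, not the actual Trusted Autonomy interface:

```python
# Hedged sketch: the AI must package action, rationale, evidence, risk, and
# reward, and nothing executes without explicit human approval.
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    action: str
    rationale: str                       # the reasoning the system must surface
    evidence: list = field(default_factory=list)
    risk: str = "unknown"
    reward: str = "unknown"

def execute_if_approved(rec: Recommendation, human_approves: bool) -> str:
    # people make the final decision, always
    return f"executing: {rec.action}" if human_approves else "held for review"
```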

The second idea is more extreme, but we are already seeing it slowly become reality: for humans to become more like AI. Younger generations already exhibit some traits of a ‘connected’ hive mind society, and we are also seeing various implants that augment humans, such as AI-based hearing aids and prosthetic limbs. There are also more exotic sensory systems that monitor nerve impulses and translate human thoughts into actions.


From the Atomic Age to the Cognitive Era

AI and automation, like many tools that humans have invented, can be engineered to help us live our lives. It’s up to us. Like the atom, AI is not evil in and of itself. What people choose to do with these tools is another matter, often driven by political considerations. What the AI industry can do (and what Beyond Limits does) is build AI systems that enable a safer, smarter world and ensure that well-trained users apply industry standards of safety to their AI-enabled decisions.

We are a privileged generation to live in this era full of technological advancements. Gone are the days when we did almost everything manually. Now we live in a time where a great deal of difficult work has been taken over by machines, software, and other automatic processes.

Scientists who predict that machine intelligence will surpass human intelligence are counting on AI’s continued rapid advance. Many believe that once an AI system starts working at its full capacity, it will reinvent the world that we know today. By inventing its own revolutionary new technologies, an AI superintelligence might help us eradicate war, disease, and poverty.

In this regard, the creation of strong AI might be the biggest event in human history. Imagine a world where menial tasks are taken care of by AI applications – a world in which much meaningless labor disappears. As this happens, humans can focus their strengths on higher levels of work, taking technology to new heights beyond the currently accepted norms of human potential.

With the explosive growth in technology and AI development, we can expect to see many exciting new AI features and uses in the near future. Artificial intelligence has a critically important role to play in the development of business and industrial processes. It also has incredible potential to take humans to the next level.

With the right guidance and a value system that keeps it on our side, AI is a tool that can help us become our better selves. The choice is ours.

More on the author, Mark James

CTO of Beyond Limits

