Alexa, can you hear me?
The fourth episode of the Create the Future podcast focuses on artificial intelligence, a topic often found at the centre of modern ethical discourse, and one that frequents both the cinema screens of Hollywood and the pages of science fiction.
We are joined by experts Dame Wendy Hall, professor of computer science at the University of Southampton, and Azeem Azhar, technology entrepreneur and producer of the Exponential View newsletter and podcast, to talk about the benefits of AI, as well as its ethical issues, its future, and why we should proceed with caution in its development.
- We can fake anything now. I mean video, text, sounds, music – you really can fake anything. I think we have a serious issue in the future: what, who, which sources do you trust? And how do we get around the problem that anybody can fake anything? In all sorts of walks of life, that's going to cause really difficult problems.
- First of all, ethics is something that is quite an abstract concept. No one taught me about ethics when I took my driving test, but we expect a car to have ethical principles in terms of the algorithms that are used for the car to operate on our streets, deciding which people it will kill in a certain circumstance.
- Even if we haven’t got a lot of diversity in the teams doing the programming of artificial intelligence, we need diversity in the teams doing the design, the testing, and the evaluating of people’s behaviour when they use the system – we need that to be diverse. Not just in terms of gender; it needs to be diverse in terms of culture, race, ethnicity, religion, age, disability – so many different factors – and we’ve got to make an inclusive industry.
- I am worried about the misperception, and I’m worried that the media benefits from the misperception because it creates more alluring stories. AI is a tool. It’s a powerful tool, but it’s a tool nonetheless; it’s a hammer, it’s an iron, it’s a knitting needle, it’s a blender – that’s what it is. The risks of artificial intelligence come from the ways in which companies in particular, but also governments, choose to implement it and what they do with the consequences of that implementation.
- We decide the values – humans – and so we have to have the debate about the values that matter. While it’s true that AI can absolutely reinforce the structures of the past, it can do something else that I think is really interesting: it can lay bare, lay transparent, the framework or the architecture of the past.
- Testing takes time, and time costs money, so if you’re an AI developer there’s always a trade-off over how much testing you actually do before you push something out into the wild. So the more we care about the outcome, and the more it deals with situations that are not very resilient or people who are a little bit vulnerable, the more we have to ask: “has the appropriate level of testing been done to make this system perform in an appropriate manner?”