Baroness Joanna Shields

Thinking Deep on the Future of AI


Baroness Shields’ Opening Keynote Address to CognitionX


Thank you for the vibrant welcome. I think we can safely conclude from the nearly 7,000 people who have registered and will join us today and tomorrow that this is one of the hottest tickets and one of the most important topics in the world right now.

CogX is a celebration of the extraordinary potential of AI, blockchain, and other emerging technologies to transform our world for the better, but it’s also about tackling some of the biggest challenges that lie ahead.

Tabitha, Charlie, and their amazing team have brought together some of the world’s top thinkers, scientists, innovators, academics, coders, and creators, but what they have also done (rather splendidly, if I might add), is create a global community that I think we all feel very special to be a part of.

Over the next two days, the insights you will gain and the wisdom you will share have the potential to make this a defining event, which is as it should be, given that we are here to discuss technologies that are about to change our lives completely.

For more than a quarter century, we have been traveling at warp speed, and just as we emerge from the digital revolution we enter a new one—and this one represents a quantum leap in terms of impact.

I have witnessed my share of tech revolutions since arriving in Silicon Valley in the late 1980s with everything I owned fitting in my car. Now, that doesn’t necessarily make me a revolutionary, but it does give me an interesting perspective on why revolutions happen and what they teach us. A perspective about how technology shapes us and our society, even though most of us would prefer to think that it works the other way around.

When we think about revolutions we think about progress. About new ideas, new paradigms. Interestingly enough, the word revolution comes from the Latin word "revolvere," which means "to roll back." And it’s only recently that I began to think about what that means in the literal sense.

As a self-confessed tech utopian, I spent my life focusing on the positive impact of technology. How it can help us solve problems. How it empowers and enlightens us. How it has the potential to bring us all closer together. But technological revolutions don’t always turn out as planned.

Looking to history for examples, the Benedictine monks who invented the mechanical clock set out to become better devotees of God and ended up giving industry and capitalism its heartbeat. Gutenberg set out to spread the word of the church with the printing press which ended up breaking its monopoly on religion. Sir Tim Berners-Lee wanted to create an online space for the free exchange of ideas. Instead the web became a "handful of platforms that control which ideas and opinions are seen and shared." None of these inventions ended up quite like their inventors intended.

The late media critic Neil Postman described these unintended consequences as "technology’s Faustian bargain." He said, "What a new technology will do is no more important than what a new technology will undo."

When the digital revolution started 30 years ago, the mantra was that the internet would create this utopian global village, this wonderful place that would bring diverse people and their diverse ideas together in a celebration of knowledge. But somewhere around the Arab Spring of 2011, I began to notice patterns that I hadn't seen before and things that concerned me deeply. I started to see the negative side-effects the "global village" was having on our society and on real people, not the theoretical construct of "users."

What started as a beautiful ideal of bringing the world closer began unravelling; or, if we are to be true to the definition of revolution, "rolling back." And as time went by, cracks began to show in the fine veneer of the digital narrative.

Today, of course, you would have to be blind not to see those cracks, for they are deep enough to swallow us—but back then any questions raised about the negative impact of tech on society would be enough to brand you as a heretic. And when problems continued to surface, we assured ourselves that everything was fine. That any unpleasant side effect was the price to pay for progress, but that it was worth it. And whilst acknowledging that darkness too inhabited our brave new world, we were blinded by the light.

And the light shone brightly. Digital technologies gave us more of absolutely everything. Our platformed, algorithmically optimised world gave us more choice, more friendships, more information. And more, in our society, is equated with good, right? We certainly thought so.

I mention this only as a cautionary tale. Because I truly believe this is the most exciting time to be alive and that we have the potential to do good at a scale unlike any time in human history.

But as we develop the next wave of technologies and apply AI to literally everything that we do, use, and experience, we have to be careful what we optimise for and who benefits from this optimisation. We need to think about how we ensure beneficial outcomes for all, and how this time we keep power in check.

Marshall McLuhan, who many refer to as the prophet of the information age, said, "When man is overwhelmed by information, he resorts to myth. Myth is inclusive, time-saving, and fast." For our industry, that myth is the algorithm.

I remember a quote from a teenager back in 2008 who, when asked where he goes to read the news, answered that he doesn’t go anywhere. "If the news is that important, it will find me."

An entire generation of products—indeed, an entire generation of entrepreneurs—was built on the ideal of an algorithm finding what you need or want, when you need it or want it, without requiring you to do any hard work or make any hard choices.

Well, we got exactly what we wished for but with a whole bunch of unintended consequences.

The remarkably prescient sociologist Eric Hoffer wrote 65 years ago that "revolutions are not set in motion to realise radical changes, but actually, it is drastic change which sets the stage for revolution in the first place." Hoffer argued that "when a population undergoing drastic change is without abundant opportunities for individual action and self-advancement, it develops a hunger for faith, pride, and unity. It becomes receptive to all manner of proselytizing. In other words, drastic change, under certain conditions, creates a proclivity for fanatical attitudes, united action, and spectacular manifestations of defiance; it creates an atmosphere of revolution."

This all sounds uncomfortably familiar in the context of the world today. When you combine the angst resulting from dramatic changes in society with algorithms that amplify passions and connect people with others who might share the same biased views, you have a toxic cocktail—an "asymmetry of passion" which manipulates people into believing that their views represent the majority when in reality they are a minority. This in turn makes them receptive to misinformation and normalises extremes on a mass scale. And this is the most divisive unintended consequence of all.

Needless to say, if drastic change is what sets a revolution in motion, one can only begin to imagine what the next revolution is going to look like. Years from now, when history writes the chapter entitled "The Age of Artificial Intelligence," will it celebrate the immense benefits that technology has delivered and the great human progress that followed? Or, will it be a requiem of regret for what we as humans have lost?

That is the most important question. And if the AI revolution is, as Google CEO Sundar Pichai said, "probably the most important thing humanity has ever worked on...more profound than electricity or fire," then the human race is at a turning point.

I say this without a doubt because, compared to AI, the digital revolution, with its huge issues around data, privacy, and security, will seem like a dress rehearsal. We are about to unlock an incredibly powerful force. If the consensus is that AI is bigger than anything we’ve ever seen, it follows that its benefits and the risks it carries will be magnified too.

I nearly said "at least we don’t have to worry about the good things that AI will do," but then realised how foolish that is. Good depends entirely on the perspective of the person creating it. Who decides what "good" is, and who benefits from it? How will "good" outcomes from AI be distributed? How do we limit "bad" outcomes? Who will be the arbiter?

Lest you think these questions are trivial, we only have to remind ourselves what happened last time, when we forgot to ask and answer them, when we absolved ourselves of thinking about the implications of our innovations and of taking responsibility for them.

We need to think hard about the questions for which artificial intelligence is the answer and we need to ask the right questions because some questions are more important than others. With no disrespect, how to serve the right ad at the right time to the right person is not an important question. But what’s important to note is that even frivolous questions can start from a meaningful place—like how to best connect people and share information.

But when the hunger for monetisation demands that inventions become ubiquitous and for one technology or platform to dominate in order to achieve that, the focal point of the question changes. It no longer serves its original master. It chooses a different one.

And so, in the next few minutes I’d like to share with you three questions I hope that you will all consider over the course of the next two days. These are not the only questions or necessarily even the right ones but they are questions worth considering if we truly want to get this revolution right.

The first question references what is known as Moravec's Paradox, which says that "while it is easy to make computers exhibit adult-like performance on intelligence tests, it is difficult to give them the skills of a one-year-old when it comes to perception and mobility." Or, in other words, that machines are much better at things that humans find difficult and really bad at things we find easy.

Paradoxes and principles like this are comforting to us in times of uncertainty. That is, until they are proven wrong. So, what happens when machines find a way to easily do everything we do? Have we already given machines more power than we would like to admit? Perhaps more power than we are even aware of?

In his bestseller Homo Deus, Yuval Noah Harari talks about a post-human world where technology enhances human capabilities beyond natural limits to create a new form of "human." Until recently, that sounded like science fiction, but today we already have wearable devices, virtual and augmented reality, biomedical implants, robots, and, someday soon perhaps, a brain-computer interface. Machines are already making many choices for us and about us, and yes, in many ways these developments are working out just fine. Who doesn’t want an early warning before suffering a heart attack? Or an augmented reality experience that helps you deal with PTSD? Or a robotic limb if you have been injured in a car accident?

But when it comes to abdicating responsibility for our decisions to machines, how far do we go? What boundaries do we set? And who gets to set them? Before the Nobel Prize-winning physicist Richard Feynman died in 1988, he left the following quote on his blackboard at Caltech: "What I cannot create, I do not understand." Well, until recently, for any given machine, you could probably find a few people in the world who really understood how it worked. Early AI and expert systems enabled us to trace each step of the algorithm all the way to the result. But in the past few years, we’ve seen AI and machine learning evolve in ways that, under Feynman’s definition, we no longer understand, but more importantly, that we no longer feel the need to.

So, "what happens when our relationship with technology stops being that of creator and creation? What happens when AI begins creating itself?" I know there will be a lot of conversations in the coming days about super-intelligent AI. And to be honest, I have no idea how long it will take for AI to develop into sentient beings or if that’s even a real possibility. But I do know that delaying conversations about the impact of technology is never wise. Technology is not neutral. That’s because we humans are not neutral, even if we’d like to think so. Unconscious bias is already a huge problem in society, and we must take care that machines are not making decisions about who has access and who is excluded.

So what happens when bias is baked into the algorithm? Well, I read recently that a start-up in Spain has an innovative answer. It’s offering AI developers an ethics module, available as an SD card, to ensure your next line of code behaves well. It’s a nice idea but it raises a glaringly obvious question: If you were to integrate an ethics module into your code, whose ethics would you choose to embed?

Late last week, Google announced a set of AI principles. It’s great to see the company taking the initiative and clarifying where it stands and what boundaries it will not cross. But these issues are bigger than any one company, organisation, or country, for that matter. Cognitive scientist Joscha Bach has said that "the motives of our artificial minds are going to be those of the organisations, corporations, groups, and individuals that make use of their intelligence." He goes on to say that if "the motivation for AI stems from the existing building blocks of our society," then it follows that "every society will get the AI it deserves."

And so my final question to you is, "How do we make sure we get the AI we deserve? How do we get the AI that ensures we, as a society, can fix our biggest problems and provide well-being for all?"

Idealism is the starting point of every revolution. We’re all here today because we believe that it is possible to get this right.

I know it is, because I see it in my work at BenevolentAI. Every day, we push the boundaries of artificial intelligence and machine learning to unlock the power of data from decades of scientific publications, open data sets, and research in order to identify and understand the underlying causes of disease and develop new treatments for patients. I see how scientists augment and refine AI to produce breakthrough results, and how unconventional thinking combined with purposeful technology can deliver real promise for drug discovery. And I couldn’t be more proud to be part of the Benevolent team that is doing just that.

At a time of dystopian prophecies of machines ending humanity, I believe that AI applied in the right way will advance and improve lives and meet some of our world’s greatest challenges, from health to clean energy, climate change, food security, and poverty. Shouldn’t we focus on those problems first? Shouldn’t we motivate our best and brightest to do just that? And shouldn’t we ensure that the application of AI will be an expression of the very highest ethical standards known to humankind?

In an article titled "How the Enlightenment Ends" in this month’s issue of The Atlantic, Henry Kissinger, who is now 95, says that "we must expect AI to make mistakes faster—and of greater magnitude—than humans do."

Maybe Kissinger is right. Yes, we should expect AI to make mistakes faster, but we should also expect AI to make progress and positive impact faster and of greater magnitude.

And that is, after all, why we’re all here today. To learn from past mistakes and to ensure that the choices we make, as we cross the threshold into this new era, will deliver a better future for all.

Thank you.