Today, artificial intelligence impacts nearly every aspect of society, changing the way we work, live and spend our leisure time.
For these reasons, the impact of AI will be a major topic for discussion at this year’s Fundamental Rights Forum.
AI creates efficiencies, taking over mundane, repetitive or even dangerous tasks while freeing people to concentrate on roles involving empathy and creativity. It’s the new Industrial Revolution.
But for many, AI is a black box. Its workings and the algorithms that drive it are a complete mystery. And its ubiquity raises ethical questions with direct implications for your human rights.
How do we legislate for trustworthy AI? How can we look behind the curtain to understand its workings? The decisions we make today will be felt for decades to come.
The CEO and Chairwoman of Mozilla, Mitchell Baker, is at the forefront of a movement to ensure the internet remains a global public resource that is open and accessible to us all.
She says: “We are at the very beginning of a smart revolution. We've seen phenomenal breakthroughs in the last five years, but we may not even be at the steam engine level yet—we just do not know. Today's machine learning is still early, clunky stuff.
“What will liberal democracy look like in a setting of machine learning, sensors, and a flow of information that's just unimaginable today? How will we think about human agency and our relationship to ourselves?
“Dramatic change is coming. And our ability to bend it in good directions is limited by where we are in space and in time.”
The Mozilla Foundation and the EU High-Level Expert Group on Artificial Intelligence are working toward the development of ‘trustworthy AI’.
Mitchell explains: “Making AI ethical is a process that goes on over time. In the next 25 years, we will see massive change.
“But we can put structures in place that allow the beginnings of that, to make AI trustworthy. Our role is to do that.”
She continues: “The EU has seven principles and three overarching requirements. But I want to focus on a smaller set of practical steps that I think are important…Firstly, we have to know what it does.
“That seems obvious, but in fact, that's not the case and can be sometimes hard. We need transparency of design: That is data and algorithms. And we really do not have those yet.
“We also need transparency of execution. Increasingly we hear machine learning specialists talk about machines learning in unanticipated ways. Thirdly, we need transparency of results. What's actually happening?”
Mitchell concludes: “For AI to be trustworthy, we also have to know that what it is doing is balanced by benefits to the creator, benefits to society, and benefits to the individual and individual self-determination…Right now, 99.5% of the control and benefit is to the creator.
“EU work on legislation is pioneering. But I see this legislation as the beginning of the discussion and not the end.”
The Fundamental Rights Forum 2021 will explore in more detail new ways to ensure that AI does not negatively affect human rights.
Registration will open in July.
Stay informed about the debate around AI and all other human rights issues by signing up for our regular newsletter.