How to ensure we benefit society with the most impactful technology being developed today

As COO of one of the largest artificial intelligence labs in the world, I spend a lot of time thinking about how our technologies impact people’s lives and how we can ensure that our efforts lead to a positive result. This is the purpose of my work and the essential message I bring when I meet world leaders and key figures in our industry. For example, it was front and center in the panel on “Equity Through Technology” that I moderated this week at the World Economic Forum in Davos, Switzerland.

Inspired by the important conversations taking place in Davos about building a greener, fairer and better world, I wanted to share some thoughts on my own journey as a technology leader, as well as insight into how we at DeepMind tackle the challenge of building technology that truly benefits the global community.

In 2000, I took a sabbatical from my job at Intel to visit the orphanage in Lebanon where my father grew up. For two months, I worked to install 20 PCs in the orphanage’s first computer lab, and to train students and teachers in their use. The trip started as a way to honor my father. But being in a place with such limited technical infrastructure also gave me a new perspective on my own work. I realized that without real effort from the technology community, many of the products I was building at Intel would be inaccessible to millions of people. I became aware of how this access gap exacerbated inequalities; even as computers solved problems and accelerated progress in some parts of the world, others lagged even further behind.

After that first trip to Lebanon, I started to reevaluate my career priorities. I’ve always wanted to be part of building breakthrough technology. But when I returned to the United States, my focus shifted to helping create technologies that could have a positive and lasting impact on society. This led me to various roles at the intersection of education and technology, including co-founding Team4Tech, a non-profit organization that works to improve access to technology for students in developing countries.

When I joined DeepMind as COO in 2018, I did so largely because I could tell the founders and team had the same focus on positive social impact. In fact, at DeepMind, we now champion a term that perfectly reflects my own values and hopes for integrating technology into people’s daily lives: pioneering responsibly.

I think pioneering responsibly should be a priority for anyone working in technology. But I also recognize that it’s especially important when it comes to powerful and pervasive technologies like artificial intelligence. AI is arguably the most impactful technology being developed today. It has the potential to benefit humanity in countless ways – from fighting climate change to preventing and treating disease. But it is essential that we consider both its positive and negative downstream impacts. For example, we need to design AI systems carefully and thoughtfully to avoid amplifying human biases, such as in hiring and policing contexts.

The good news is that if we continually challenge our own assumptions about how AI can and should be built and used, we can build this technology in a way that truly benefits everyone. It requires inviting discussion and debate, iterating as we learn, integrating social and technical safeguards, and seeking diverse perspectives. At DeepMind, everything we do stems from our company’s mission to solve intelligence to advance science and benefit humanity, and building a culture of pioneering responsibly is essential to making this mission a reality.

What does pioneering responsibly look like in practice? I believe it starts with creating space for open and honest conversations about responsibility within an organization. One place we’ve done this at DeepMind is in our multidisciplinary leadership group, which advises on the potential risks and social impact of our research.

Developing our ethical governance and formalizing this group was one of my first initiatives when I joined the company – and in a somewhat atypical approach, I didn’t give it a name or even a specific objective until after several meetings. I wanted us to focus on the operational and practical aspects of responsibility, starting with a space without expectations in which everyone could speak candidly about what it means to be a pioneer. These conversations were essential in establishing a shared vision and mutual trust, which allowed us to have more open discussions going forward.

Another element of pioneering responsibly is adopting a kaizen philosophy and approach. I discovered the term kaizen in the 1990s, when I moved to Tokyo to work on DVD technology standards for Intel. It’s a Japanese word that translates to “continuous improvement” – and in the simplest sense, a kaizen process is one in which small incremental improvements, made continuously over time, lead to a more efficient and ideal system. But it’s the mindset behind the process that really matters. For kaizen to work, everyone who touches the system must watch for weaknesses and opportunities for improvement. This means everyone must have both the humility to admit something could be broken and the optimism to believe they can change it for the better.

When I was COO of the online learning company Coursera, we used a kaizen approach to optimize our course structure. When I joined Coursera in 2013, courses on the platform had strict deadlines and each course was only offered a few times a year. We quickly realized that this didn’t offer enough flexibility, so we moved to an entirely on-demand, self-paced format. Enrollment increased, but completion rates plummeted – it turns out that while too much structure is stressful and inconvenient, too little leads to a loss of motivation. So we pivoted again, to a format where course sessions start several times a month and learners work toward suggested weekly milestones. It took time and effort to get there, but continuous improvement eventually led to a solution that allowed people to get the most out of their learning experience.

In the example above, our kaizen approach was largely effective because we asked our community of learners for feedback and listened to their concerns. This is another crucial element of pioneering responsibly: recognizing that we don’t have all the answers and building relationships that allow us to continually tap into outside input.

For DeepMind, this sometimes means consulting experts on topics such as security, privacy, bioethics, and psychology. It can also mean reaching out to diverse communities of people who are directly impacted by our technology and inviting them to a discussion about what they want and need. And sometimes that just means listening to the people in our lives — regardless of their technical or scientific background — when they talk about their hopes for the future of AI.

Fundamentally, pioneering responsibly means prioritizing initiatives focused on ethics and social impact. An increasingly important area of our research at DeepMind is how we can make AI systems more equitable and inclusive. Over the past two years, we have published research on decolonial AI, queer fairness in AI, mitigating ethical and social risks in AI language models, and more. At the same time, we are also working to increase diversity in the field of AI through our dedicated scholarship programs. Internally, we recently started hosting Responsible AI Community sessions that bring together the different teams and efforts working on safety, ethics and governance – and several hundred people have signed up to get involved.

I am inspired by the enthusiasm for this work among our employees and deeply proud of all my colleagues at DeepMind who put social impact first. By ensuring that technology benefits those who need it most, I believe we can make real progress in addressing the challenges facing our society today. In that sense, pioneering responsibly is a moral imperative – and personally, I can’t think of a better way forward.