The Key to Developing Responsible AI by Klaus Schwab and Cathy Li

Generative AI will change the world, whether we like it or not. At this pivotal moment in the evolution of technology, public and private stakeholders must do all they can to ensure that the process leads to fair, equitable and sustainable outcomes.

GENEVA – In recent months, the pace of development of artificial intelligence has accelerated exponentially, with generative AI systems such as ChatGPT and Midjourney rapidly transforming a wide range of professional activities and creative processes. The window of opportunity to direct the development of this powerful technology in ways that minimize risks and maximize benefits is closing fast.

AI-based capabilities exist along a continuum, with generative AI systems such as GPT-4 (the latest version of ChatGPT) falling into the most advanced category. Given that such systems hold the greatest promise and can lead to the most insidious risks, they deserve particularly close scrutiny by public and private stakeholders.

Almost all technological developments have had both positive and negative impacts on society. On the one hand, they have boosted economic productivity and income growth, expanded access to information and communication technologies, extended human lifespans, and improved overall well-being. On the other hand, they have displaced workers, stagnated wages, deepened inequality, and concentrated resources among a small number of individuals and firms.

Artificial intelligence is no different. Generative AI systems open up abundant opportunities in areas such as product design, content creation, drug discovery, healthcare, personalized education, and energy optimization. At the same time, they can be highly disruptive, and even harmful, to our economies and societies.

The risks posed by advanced AI that already exists, and those that can reasonably be anticipated, are significant. Besides massively reorienting labor markets, large language models can amplify the spread of misinformation and perpetuate harmful biases. Generative AI also threatens to exacerbate economic inequality, and such systems may even pose existential risks to humanity.

For some, this is a reason to put the brakes on AI research. Last month, more than 1,000 AI technologists, from Elon Musk to Steve Wozniak, signed an open letter recommending that AI labs "immediately pause" the training of systems more powerful than GPT-4 for at least six months. During this pause, they say, a set of shared safety protocols should be developed and implemented, "rigorously audited and overseen by independent outside experts."


The open letter, and the heated debate it sparked, underscore the urgent need for stakeholders to engage in a broad, good-faith process aimed at aligning on robust common guidelines for the development and deployment of advanced AI. This effort must account for issues such as automation and job displacement, the digital divide, and the concentration of control over technological assets and resources, such as data and computing power. And a top priority must be the continual work of eliminating systemic biases in AI training, so that systems like ChatGPT do not end up reproducing, or even exacerbating, them.

Proposals on artificial intelligence and the governance of digital services are already emerging, including in the United States and the European Union. Organizations such as the World Economic Forum are also contributing. In 2021, the Forum launched the Global Coalition for Digital Safety, which aims to unite stakeholders in tackling harmful content online and to facilitate the sharing of best practices for regulating online safety. The Forum subsequently created the Digital Trust Initiative to ensure that advanced technologies such as artificial intelligence are developed with the public's best interests in mind.

Now, the Forum is calling for urgent collaboration between the public and private sectors to address the challenges accompanying the advent of generative AI and to build consensus on next steps for developing and deploying the technology. To facilitate progress, the Forum, in partnership with AI Commons (a nonprofit supported by AI practitioners, academia, and NGOs focused on the public good), will hold a Global Summit on Generative AI in San Francisco on April 26-28. Stakeholders will discuss the technology's impact on business, society, and the planet, and will work together to devise ways to mitigate negative externalities and achieve safer, more sustainable, and more equitable outcomes.

Generative AI will change the world, whether we like it or not. At this pivotal moment in the technology's development, a collaborative approach is essential to ensuring that the process aligns with our shared interests and values.
