Invisible to us, artificial intelligence (AI) is already embedded in many domains of our everyday lives. First thing in the morning, when we're still tired and dishevelled and look at our smartphone, AI unlocks it using image recognition software. Then, when we check our email, AI intercepts unwanted advertising and phishing emails and sends them straight to the spam folder. Later, when we drive to work, the navigation system's AI warns us of traffic jams and finds us the fastest alternative route. In the evening, Netflix's AI tells us which films we might like.
A dangerous black box?
The processing power of computer chips has grown enormously over recent decades, huge quantities of data have become available, and machine learning has advanced. All this has moulded AI into its present form. Thanks to AI, computer-powered machines can now be assigned tasks that previously only humans could perform. And that has the potential to deeply change economies and societies. Machine translation, self-driving cars, chatbots, and image recognition are just a few examples. But for many people, AI remains a possibly dangerous black box. AI's lack of decision-making transparency, as well as its use for mass surveillance, election manipulation, and autonomous weapons systems, influences the extent to which people trust it. According to a survey conducted for the World Economic Forum, 60% of adults worldwide expect AI-powered products and services to make their lives easier. Yet at the same time, only 50% say they trust companies that use AI as much as companies that do not.
What does it take to build up trust in AI systems?
AI systems, developed worldwide, facilitate human endeavours in many fields. Yet they also raise fundamental ethical, legal, and standardisation issues that we must grapple with, because technology is not value-neutral: the developers of AI are part of a culture and a social environment whose values find their way into its applications. While these issues are being debated internationally in many forums, the various discussions are dispersed, sector-specific and poorly coordinated. Switzerland's foreign policy aims to promote exchanges between the major players in AI. Based on its Foreign Policy Strategy 2020–23, Switzerland is taking an active role in shaping an international regulatory framework for AI that will lay the foundation for trust in AI systems. This framework must set out how the various actors are to cooperate, define clearly who is supposed to do what, and lay down clear rules on the application of AI. These rules must also allow the private sector enough leeway for innovation.
The international debate: Switzerland's added value
Compared to other countries, Switzerland is ahead of the field in AI research, development and innovation. Numerous globally active companies in the medical technology, pharmaceutical and machinery industries are offshoring their production while keeping their product and service development in innovation-friendly Switzerland. As a host state, Switzerland brings together in International Geneva numerous actors and organisations that are considered centres of normative power, i.e. actors such as states, multilateral organisations and private companies that play a key role in shaping globally applicable norms and standards for AI systems.
Another actor in Geneva is GESDA (Geneva Science and Diplomacy Anticipator), a foundation that aims to anticipate technological and scientific revolutions – including AI – and analyse their impact on humanity. International discussions of AI are also highly geopolitical. Thanks to its neutrality and political stability, Switzerland can add value in this area, facilitating compromises as a credible mediator.
Designing an international regulatory framework
The two-day AI with Trust conference in Geneva will bring together experts and key players in AI, providing a platform for exchanging ideas on the most pressing AI challenges. In the global debates so far, an international regulatory framework has been emerging across five levels, which should be better interlinked: