Transparency and accountability: the challenges of artificial intelligence
Artificial intelligence (AI) and its potential applications are developing extremely quickly, and we are witnessing a genuine digital revolution. Some of its practical applications represent major steps forward. Artificial intelligence improves calculations, medical diagnoses and forecasting in areas such as weather and road traffic. It helps optimize production capacities and strengthens cybersecurity.
However, faced with this radical transformation of our societies, we must not lose sight of the risks attached to such technologies, such as:
• Development in the private sector of algorithms that are not subject to French or European regulation and lack transparency;
• Possible breaches of privacy, violations of confidentiality, data loss or discrimination;
• Unresolved issues of liability in cases of “machine error”.
Civil society will rightly demand more and more transparency and accountability, especially where algorithms are concerned. Regulation of AI is therefore urgently needed.
This regulation should set down general principles and norms for the development of AI technologies, based on three fundamental concerns: investment and innovation, digital sovereignty, and ethical and human considerations.
Investment and innovation
In February 2020, the European Commission published the European Strategy for Data and the White Paper on Artificial Intelligence. In these documents, the Commission considers investment to be key to supporting the development of AI. These investments need to be focused on strategic sectors such as the environment, health, transport, and defence and security. The European Union intends to increase public and private investment in AI to at least €20 billion per year over the next decade. Investment is also needed in vocational training and education to enable European citizens to “requalify” in these fields. SMEs and start-ups need specific support to limit the burden that AI regulation could place on them. Lastly, strengthening innovation capacities will help create an environment conducive to AI development, with solid infrastructure for benchmarking and experimentation.
Digital sovereignty
Where AI is concerned, national sovereignty and independence are at stake. This has long been a major concern for technology leaders. Major companies in every sector, from retail to agriculture, are seeking to incorporate machine learning into their products. At the same time, they are suffering from a shortage of AI talent. This situation is fuelling a fierce race to attract the best AI start-ups, many of which are still in the early stages of research and development.
To guarantee our digital sovereignty, it is essential to:
• Develop our own AI applications, technologies and infrastructure;
• Build a European regulatory model based on European values and promote it worldwide.
To do so, and to better serve our economy and citizens, we need to analyse the data that drive AI and assess how they are created and can be used. That requires sovereign cloud computing solutions and easier data transfers, which is possible through the creation of common data spaces. At European level, that could take the form of a single market for data.
Moreover, Europe needs to seize the potential of processing non-personal and industrial data.
Ethical and human considerations
The EU, and France in particular, is strongly committed to taking ethical and human considerations into account.
In April 2019, the High-Level Expert Group on Artificial Intelligence (AI HLEG) appointed by the Commission defined seven key requirements to guarantee trustworthy AI:
- Human agency and oversight;
- Technical robustness and safety;
- Privacy and data governance;
- Transparency;
- Diversity, non-discrimination and fairness;
- Societal and environmental wellbeing;
- Accountability.
On this basis, the Commission’s White Paper calls for an “ecosystem of trust” to ensure protection of fundamental rights, security and regulatory stability. Since 2019, France and Canada have been particularly active on this subject, with the launch of the Global Partnership on AI (GPAI), which aims to establish independent global artificial intelligence expertise and focus on ethical regulation.
France is also very active within the UN Secretary-General’s High-level Panel on Digital Cooperation, which seeks solutions to mitigate AI risks, such as requiring explainability [1] of the decisions and results of autonomous intelligent systems and ensuring humans are ultimately accountable for their use of these technologies: France’s vision is for humans to be primarily liable in the event of damage caused by the use of AI systems.
This point is in line with the French 1978 Act on Information Technology, Data Files and Freedoms, which stipulates that court decisions cannot be based solely on automated processing of data. France also contributes actively to ongoing work on AI in other forums, including UNESCO and the Council of Europe.
[1] Explainability (or explicability) refers to the possibility of explaining the results produced by algorithms (machine learning in particular).
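As a purely illustrative sketch of this notion, the short Python example below shows the simplest case: a linear scoring model, whose result decomposes exactly into per-feature contributions that a human can inspect. The feature names and weights are invented for the example, not drawn from any real system; more complex models (deep learning in particular) require approximate attribution techniques to play the same role.

# Purely illustrative linear scoring model: all feature names and
# weights below are invented for this example.
FEATURES = ["income", "debt_ratio", "years_employed"]
WEIGHTS = {"income": 0.4, "debt_ratio": -1.2, "years_employed": 0.3}
BIAS = 0.1

def predict(sample):
    """Score one case with the linear model."""
    return BIAS + sum(WEIGHTS[f] * sample[f] for f in FEATURES)

def explain(sample):
    """Decompose the score into per-feature contributions.

    For a linear model this decomposition is exact; for complex models,
    approximate feature-attribution techniques serve the same purpose.
    """
    return {f: WEIGHTS[f] * sample[f] for f in FEATURES}

applicant = {"income": 0.8, "debt_ratio": 0.5, "years_employed": 0.2}
print("score:", round(predict(applicant), 3))
# List the features that drove the result, largest influence first.
for feature, contribution in sorted(explain(applicant).items(),
                                    key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {contribution:+.3f}")

Run on the sample case, the sketch reports the score (-0.12) together with each feature’s signed contribution, which is exactly the kind of human-readable account of an automated decision that explainability requires.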