
Communication
Published on 8 May 2025

Update - Current data protection legislation is directly applicable to AI

Artificial intelligence (AI) is permeating economic and social life in Switzerland, as elsewhere. The FDPIC therefore wishes to point out that the Federal Act on Data Protection (FADP), which has been in force since 1 September 2023, is directly applicable to AI-supported data processing.

In March 2025, Switzerland signed the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. As announced by the Federal Council, the necessary amendments to Swiss law will be made so that Switzerland can ratify the Convention. The Swiss regulatory approach to AI is based on three objectives: strengthening Switzerland as a centre of innovation, safeguarding fundamental rights, including economic freedom, and bolstering public confidence in AI.

In view of the rapid increase in AI-supported data processing, the FDPIC draws attention to the fact that, regardless of future regulation, the data protection provisions already in force must be complied with. The FADP is formulated in a technology-neutral way and is therefore directly applicable to AI-supported data processing as well. The FDPIC therefore reminds manufacturers, providers and users of such applications that, when developing new technologies and planning their use, they are required by law to ensure that data subjects have the highest possible degree of digital self-determination.

In view of these requirements of the FADP, manufacturers, providers and users of AI systems must make the purpose, functionality and data sources of AI-based processing transparent. This legal right to transparency is closely linked to the right of data subjects to object to automated data processing or to request that automated individual decisions be reviewed by a human being, both of which are expressly provided for in the FADP. In the case of intelligent language models that communicate directly with users, those users have a legal right to know whether they are speaking or corresponding with a machine and whether the data they enter is being processed to improve self-learning programs or for other purposes. The use of programs that enable the falsification of faces, images or voice messages of identifiable persons must likewise always be clearly indicated; in specific cases, such use may even be entirely unlawful due to prohibitions under criminal law.

AI-supported data processing involving high risks is permitted in principle under the FADP, provided appropriate measures are taken to protect the data subjects. For this reason, the law requires a data protection impact assessment to be carried out in high-risk cases. Applications designed to undermine the privacy and informational self-determination protected by the FADP, however, are prohibited under data protection law. These include, in particular, forms of AI-based data processing observed in authoritarian states, such as comprehensive real-time facial recognition or the blanket observation and evaluation of lifestyle, known as 'social scoring'.