Current data protection legislation is directly applicable to AI

09.11.2023 – Artificial intelligence (AI) is permeating economic and social life in Switzerland, as elsewhere. The FDPIC therefore wishes to point out that the Federal Act on Data Protection, in force since 1 September 2023, is directly applicable to AI-supported data processing.
The executive order signed by US President Biden on 30 October 2023 marks a significant step towards the regulation of artificial intelligence (AI). Back in June, the European Parliament approved the continuation of legislative work on the AI Act, which will regulate AI throughout the EU. A Council of Europe committee on AI is also currently working on a framework treaty on AI, human rights, democracy and the rule of law.
In Switzerland, the Federal Administration is evaluating various approaches to the regulation of AI. This work will be completed by the end of 2024, after which the Federal Council will decide whether to call for regulations on the subject. Switzerland currently takes a sectoral approach to technology-specific regulation, which means that legislation relates to a specific sector or topic. It remains to be seen whether this sectoral approach will continue unchanged or whether across-the-board legislation on AI will be introduced.
In view of the rapid increase in AI-supported data processing, the FDPIC draws attention to the fact that, regardless of the approach taken to future regulation, the data protection provisions already in force must be complied with. The Federal Act on Data Protection (FADP) is formulated in a technology-neutral way and is therefore directly applicable to AI-supported data processing. The FDPIC accordingly reminds manufacturers, providers and users of such applications that, when developing new technologies and planning their use, they are required by law to ensure that data subjects have the highest possible degree of digital self-determination.
In view of these requirements set out in the FADP, manufacturers, providers and users of AI systems must make the purpose, functionality and data sources of AI-based processing transparent. The legal right to transparency is closely linked to the right of data subjects to object to automated data processing or to request that automated individual decisions be reviewed by a human being, as expressly provided for in the FADP. In the case of intelligent language models that communicate directly with users, data subjects have a legal right to know whether they are speaking or corresponding with a machine and whether the data they enter is processed to improve self-learning programs or for other purposes. The use of programs that enable the falsification of faces, images or voice messages of identifiable persons must likewise always be clearly indicated; in a specific case, such use may even be entirely unlawful due to prohibitions under criminal law.
AI-supported data processing involving high risks is permitted in principle under the FADP, provided that appropriate measures to protect the data subjects are taken. For this reason, the law requires a data protection impact assessment to be conducted in high-risk cases. Applications that aim to undermine the privacy and informational self-determination protected by the FADP, however, are prohibited under data protection law. These include in particular AI-based data processing of the kind observed in authoritarian states, such as comprehensive real-time facial recognition or the comprehensive observation and evaluation of lifestyle, known as 'social scoring'.
Last modification 13.11.2023