Towards a responsible use of AI in research

The rapid development of artificial intelligence (AI) is not only improving efficiency in research and research funding but also opening entirely new pathways to scientific insight. However, the risks associated with AI must also be considered.

The development and use of AI-based tools are advancing rapidly, including in research and research funding. On the one hand, such tools are employed for the processing and analysis of data (e.g. accessing volumes of data that cannot be processed by other methods, or searching for new, non-obvious correlations). On the other hand, generative AI applications are used for tasks such as literature research, editorial work and translations. The transformative nature of AI can impact numerous scientific disciplines.

The SNSF is aware of both the potential of AI systems and the unresolved questions and risks associated with their use in science. The latter include, for example, upholding scientific integrity and ensuring the traceability and reproducibility of results achieved with AI. Addressing these questions is itself a subject of research funded by the SNSF.

AI as an umbrella term

Artificial intelligence encompasses a wide range of technologies and methods. These include approaches that have been researched intensively for decades, such as machine learning (statistical methods that recognise patterns in datasets and make predictions based on them), neural networks (models loosely inspired by the structure of the human brain) or natural language processing (the processing and understanding of human language). Simulating human intelligence is just one of many possible goals and applications.
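
To make the machine-learning definition above concrete, here is a minimal sketch in Python using scikit-learn. The dataset and model are purely illustrative and not related to any SNSF system: a model is fitted to labelled examples (pattern recognition) and then predicts labels for unseen data.

```python
# Minimal sketch of machine learning as pattern recognition:
# a model learns from labelled examples and predicts unseen cases.
# Purely illustrative; not an SNSF tool.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic dataset: 200 samples with 2 informative features and binary labels.
X, y = make_classification(n_samples=200, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)   # learn patterns from the data
print(f"Accuracy on unseen data: {model.score(X_test, y_test):.2f}")
```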

Currently, so-called generative AI technologies are at the centre of public debate. These technologies generate content such as texts, images or videos in response to written instructions (so-called prompts). Researchers also employ these methods across various disciplines for data analysis and processing, identifying patterns and correlations in datasets.

Researchers must assume responsibility

The SNSF welcomes researchers harnessing the potential of AI for their work. However, the underlying principle for funding proposals remains that researchers who use AI in their work are wholly responsible for the results produced. The basic principles of scientific integrity also apply to the use of AI. To maintain the confidentiality of the applications submitted to the SNSF, the guidelines for reviewers and referees have been adapted: producing summaries or translations of whole application dossiers or of novel ideas using generative AI tools may violate this confidentiality. Such data must not be passed on to unauthorised third parties, including the providers of AI applications.

  • AI at the SNSF

    The SNSF currently uses two key AI technologies in the processing of funding applications: natural language processing and machine learning. The SNSF data team has developed an approach to assist employees in assigning funding applications to reviewers with the necessary expertise. The application, which is currently being tested in day-to-day operations, analyses textual similarities between excerpts from the applications and the publications of the experts. Based on this analysis, the system suggests potential reviewers. The suggestions are then reviewed and, if necessary, adjusted by SNSF staff. A minimal sketch of this kind of text-similarity matching appears at the end of this section.

    In addition, the SNSF is collaborating with international research funding organisations to explore how the processing of funding applications could be improved with the help of AI. This includes participation in the GRAIL project ("Getting Responsible about AI and machine learning in research funding and evaluation") by the Research on Research Institute (RoRI), which was launched in mid-2023 and will run until 2025.
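
    The SNSF has not published implementation details of the reviewer-matching application described above. The following is only a rough sketch of how such text-similarity matching can work in principle, assuming a TF-IDF representation and cosine similarity (both standard NLP techniques); the expert names and texts are invented, and the actual system may use entirely different methods.

    ```python
    # Hypothetical sketch of matching an application to expert reviewers
    # via textual similarity (TF-IDF vectors + cosine similarity).
    # Not the SNSF implementation; purely illustrative.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    application = "Deep learning methods for protein structure prediction"
    expert_publications = {                      # invented examples
        "Expert A": "Convolutional networks for biomedical image analysis",
        "Expert B": "Field studies of alpine plant ecology",
        "Expert C": "Transformer models for protein folding",
    }

    # Vectorise the application excerpt together with the experts' publications.
    vectoriser = TfidfVectorizer(stop_words="english")
    tfidf = vectoriser.fit_transform([application, *expert_publications.values()])

    # Rank experts by similarity of their publications to the application.
    scores = cosine_similarity(tfidf[0], tfidf[1:]).flatten()
    for name, score in sorted(zip(expert_publications, scores), key=lambda x: -x[1]):
        print(f"{name}: {score:.2f}")  # suggestions still require human review
    ```

    In practice such a system would work on full publication corpora and would likely combine several signals; the ranked list here simply mirrors the workflow described above, in which SNSF staff review and adjust the suggestions.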

  • Regulation of AI

    As a technology with significant potential, artificial intelligence raises important questions about how society should manage and regulate it. Numerous ethical and legal aspects are the subject of public debate.

    With the Artificial Intelligence Act (AI Act), the European Union has presented a detailed set of regulations that may also have an impact on Switzerland. The law prohibits specific applications of AI that could, for example, pose a threat to civil rights. AI systems that risk violating fundamental rights, e.g. in the health sector, are subject to strict regulatory requirements. The implications of the AI Act and other such regulatory efforts could be far-reaching for science.

    A particularly concrete debate revolves around the use of copyrighted material for training generative AI models. For scientific articles resulting from SNSF-funded research, the SNSF requires publication under a CC-BY licence (Creative Commons Attribution). This allows articles to be freely distributed and used commercially, provided the authors are correctly credited and any changes are clearly indicated. These publications can therefore be used to train AI models. However, AI tools must also correctly cite researchers if they reproduce content from their publications in their results or responses.
