https://mohnfoundation.no/en/palitelig-kunstig-intelligens/

Trustworthy AI

Over the last decade, we have seen significant technological breakthroughs in AI, and it has become a natural part of research, education, and work. However, for AI to gain broad acceptance in society, it is crucial that AI systems are reliable. Current AI systems lack scientific definitions and clear criteria for human values such as justice, responsibility, safety, privacy, and other important ethical aspects. There are also serious weaknesses in the legal frameworks within which AI systems operate, and issues with accuracy and robustness in the systems themselves.

To meet these challenges and strengthen high-quality AI research at UiB, Trond Mohn Research Foundation (TMF) and the University of Bergen (UiB) have developed the research program Trustworthy AI.

TMF announced a competition where UiB’s academic communities could apply for support for ambitious research projects with the goal of significantly increasing the trustworthiness of AI systems. The projects were to include expertise from multiple research groups and disciplines, and aim to develop processes, methods, algorithms, and/or tools to help solve the challenges.

Three new research projects have been selected for funding. They are:

AI and Education: Layers of Trust (EduTrust AI)

The main goals of the project are to identify the layers of trust associated with the use of AI in the educational sector, taking into account the complex accountability relationships involved; to develop new knowledge, methods, guidelines, and tools for more reliable AI systems; to translate insights about the legal, psychological, and socio-cultural determinants of trust into legal requirements; and to provide input for practicable frameworks addressing the challenging questions surrounding the use of student data and AI systems in education.
The project is a collaboration between the Centre for the Science of Learning & Technology (SLATE), the Faculty of Psychology, and the Faculty of Law. Professor Barbara Wasson from SLATE will lead the project.

TRUSTworthy AI models to predict progression to complications in patients with Diabetes (TRUST-AI4D)

This project aims to develop and implement a better, fairer, and safer machine learning algorithm for predicting complications in patients with diabetes. The project is a collaboration between the Department of Clinical Science at the Faculty of Medicine, and the Department of Mathematics at the Faculty of Mathematics and Natural Sciences. Professor Valeriya Lyssenko, Mohn Research Center for Diabetes Precision Medicine at the Department of Clinical Science, will lead the project.

Algorithmic Foundations of Trustworthy AI

The project aims to build a new theoretical framework for understanding, developing, and designing socially responsible algorithms that incorporate human values into AI systems. The project is a collaboration between the Department of Informatics at the Faculty of Mathematics and Natural Sciences and the Department of Information Science and Media Studies at the Faculty of Social Sciences. Professor Fedor V. Fomin, Department of Informatics, will lead the project.

The joint cooperative project Trustworthy AI Synergy (TAIS)

To facilitate cooperation and draw on synergies between these projects and other research at UiB, in Norway, and internationally, Trustworthy AI Synergy (TAIS) supports cross-project collaboration by providing common meeting places, hosting international researchers, supporting interdisciplinary networking among young researchers across the projects, organising an international symposium, and promoting joint publications and dissemination.
The proposed activity will lead to more significant and impactful outcomes, particularly in the complex and vital domain of Trustworthy AI.

The project is a cooperation between the project leaders in the program and UiB AI. Professor Barbara Wasson, SLATE, is the project leader of TAIS.

Trustworthy AI in short

Program period 2023-2028

TMF funding 20 MNOK
Total funding 49 MNOK

The project’s web pages:
Trustworthy AI Synergy – TAIS, PI Barbara Wasson
TRUST-AI4D, PI Valeriya Lyssenko
EduTrust AI, PI Barbara Wasson
Algorithmic Foundations of Trustworthy AI, PI Fedor Fomin