Friedrich Schiedel Fellows
Meet the new Friedrich Schiedel Fellows, who are building new bridges between the social sciences, technology, and other disciplines. Their interdisciplinary research projects, under the motto "Human-Centered Innovation for Technology in Society", focus on how technologies can be developed in a responsible, human-centered, and democratic way that serves the public good.
2024
Psychology impact assessment for interactional systems: defining the evaluation scope (PSAIS)
This research project aims to address the lack of frameworks for systematically assessing the diverse psychological impacts of AI. By adopting a participatory approach and considering cultural values, the project seeks to develop a multi-cultural mapping framework for evaluating the psychological impact of AI systems. The research will involve workshops and consultations with stakeholders from various sectors and regions to define evaluation criteria. The project will contribute to the development of concrete recommendations for action by providing a culturally informed framework that can guide the responsible development and application of AI technologies. The impact of the project extends to academic disciplines, partner institutions, societal stakeholder groups, and policy actors. It will foster interdisciplinary knowledge exchange, stimulate discussions on standardization, and contribute to reducing potential inequities arising from technology adoption. The project aims to be a change agent by actively contributing to the implementation of the recommendations and by ensuring user well-being and trust in AI systems.
Pitch: Auxane Boch
Sponsor 1: Prof. Dr. Christoph Lütge
Sponsor 2: Prof. Dr. Jochen Hartmann
Echoes of privacy: exploring user privacy decision-making processes towards large language model-based agents in immersive realities
User privacy concerns and preferences have been researched extensively in the context of various technologies, such as smart speakers, IoT devices, and augmented reality glasses, to facilitate better privacy decision-making and human-centered solutions. With the emergence of generative artificial intelligence (AI), large language models (LLMs) have started being integrated into our daily routines, where models are tuned with vast amounts of data, including sensitive information. The possibility of embedding these models in immersive settings raises a plethora of questions from a privacy and usability point of view. In this project, through several user studies, including crowdsourced ones, we will explore privacy concerns and preferences towards LLM-powered, speech-based chat agents in immersive settings, as well as the likelihood of inferring sensitive user attributes. The findings will help us understand the privacy implications of such settings, design informed consent procedures that support users in immersive spaces that include LLMs, and facilitate privacy-aware technical solutions.
Pitch: Dr. Efe Bozkir
Sponsor 1: Prof. Dr. Enkelejda Kasneci
Sponsor 2: Prof. Dr. Gjergji Kasneci
Future Finance Law Hub (“F2L_Hub”)
The Future Finance Law Hub (“F2L_Hub”) project aims to establish a policy-maker hub at the intersection of law, technology, and finance. In this domain, the objective is to attain comprehensive, innovative, and applicable results by adopting a multi-disciplinary, multi-stakeholder framework. In the short to medium term, F2L_Hub’s influence is anticipated to encompass primarily Germany and Europe.
The Friedrich Schiedel Fellowship provides invaluable and generous support for the establishment, structuring, and first outputs of F2L_Hub.
Pitch: Barış C. Cantürk
Sponsor 1: Prof. Dr. Boris Paal
Sponsor 2: Prof. Dr. Stefanie Jung
Harmful speech proactive moderation
Offensive speech remains a pervasive issue despite ongoing efforts, as underscored by recent EU regulations aimed at mitigating digital violence. Existing approaches rely primarily on binary solutions, such as outright blocking or banning, and thus fail to address the complex nature of hate speech. In this work, we advocate a more comprehensive approach that assesses and classifies offensive speech into several new categories: (i) hate speech whose publication can be prevented by recommending a detoxified version; (ii) hate speech that necessitates counter-speech initiatives to persuade the speaker; (iii) hate speech that should indeed be blocked or banned; and (iv) instances mandating further human intervention.
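The proposed four-way taxonomy can be illustrated as a routing decision. The sketch below is purely hypothetical: the thresholds, the input signals (`toxicity`, `detoxifiable`, `confidence`), and the function itself are illustrative placeholders, not part of the project's actual method.

```python
from enum import Enum

class ModerationAction(Enum):
    DETOXIFY = "suggest a detoxified rewrite"        # category (i)
    COUNTER_SPEECH = "initiate counter speech"       # category (ii)
    BLOCK = "block or ban the post"                  # category (iii)
    HUMAN_REVIEW = "escalate to a human moderator"   # category (iv)

def route(toxicity: float, detoxifiable: bool, confidence: float) -> ModerationAction:
    """Route an offensive post to one of the four proposed categories.

    All thresholds and signals are invented for illustration only.
    """
    if confidence < 0.6:          # classifier unsure -> category (iv)
        return ModerationAction.HUMAN_REVIEW
    if toxicity >= 0.9:           # severe content -> category (iii)
        return ModerationAction.BLOCK
    if detoxifiable:              # a milder rewrite exists -> category (i)
        return ModerationAction.DETOXIFY
    return ModerationAction.COUNTER_SPEECH  # otherwise -> category (ii)
```

The point of the sketch is that moderation becomes a multi-way routing problem rather than a binary keep/remove decision.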
Pitch: Dr. Daryna Dementieva
Sponsor 1: Prof. Dr. Jürgen Pfeffer & Prof. Dr. Yannis Theocharis
Sponsor 2: Prof. Dr. Georg Groh
Setting up the future with sustainable choices: GenAI support in resolving multi-stakeholder conflicts in sustainable critical metals & minerals development
This project outlines an innovative approach to resolving multi-stakeholder conflicts in the sustainable development of critical metals and minerals essential for decarbonization efforts. Recognizing the complexities and sustainability challenges within the supply chains of these materials, particularly those sourced from the Global South and emerging economies, the project proposes a digital platform leveraging reactive machine AI (RM-AI) and generative AI (Gen-AI) with human-in-the-loop functionalities. This platform is designed to facilitate transparent and inclusive discussions among public/community representatives, government, and industry stakeholders, ensuring a balanced consideration of environmental, economic, and social sustainability targets. Through co-developing a concept for an interactive, game-based decision-making tool powered by Gen-AI, the project aims to identify common interests, model sustainability trade-offs, and find consensus solutions that align with the societal goals of reducing inequality and promoting economic growth with decent work conditions. The project's integration of RM-AI and Gen-AI aims to bridge the gap between technical and non-technical decision-makers, enhancing stakeholder engagement and trust in AI-driven processes, thereby aligning closely with the fellowship’s mission of human-centered innovation and interdisciplinary collaboration for the public good.
Pitch: Dr. Mennatullah Hendawy
Sponsor 1: Prof. Dr. Christoph Lütge & Dr. Caitlin Corrigan
Sponsor 2: Prof. Dr. Svetlana Ikonnikova
Research-based theater: an innovative method for communicating and co-shaping AI ethics research & development
This project will implement a creative approach to conducting, educating, and communicating AI ethics research through the lens of the arts (i.e., research-based theater). The core idea revolves around conducting qualitative interviews and user studies on the impact of AI systems on human ethical decision-making. It focuses specifically on exploring the potential opportunities and risks of employing these systems as aids for ethical decision-making, along with their broader societal impacts and recommended system requirements. Generated scientific findings will be translated into a theater script and (immersive) performance. This performance seeks to effectively educate civil society on up-to-date research in an engaging manner and facilitate joint discussions (e.g., on necessary and preferred system requirements or restrictions). The insights from these discussions, in turn, are intended to inform the scientific community, thereby facilitating a human-centered development and use of AI systems as moral dialogue partners or advisors. Overall, this project should serve as a proof of concept for innovative teaching, science communication and co-design in AI ethics research, laying the groundwork for similar projects in the future.
More information on the project can be found here: https://www.ieai.sot.tum.de/research/moralplai/
Pitch: Franziska M. Poszler
Sponsor 1: Prof. Dr. Christoph Lütge
Sponsor 2: Prof. Dr. Johannes Betz
Developing the Google Maps for the climate transition
I envision developing the Google Maps for the climate transition. Business leaders and policy-makers need more comprehensive and timely evidence to accelerate the industrial development of climate-tech effectively. With recent advances in artificial intelligence (AI), it is now possible to develop models that generate such evidence at large scale and in near real-time. In this project, I will analyze the global network of organizations collaborating on climate-tech innovation. The network is based on processing organizations' social media posts with large language models (LLMs). It includes key public and private actors and spans various types of climate technologies (e.g., solar, hydrogen, electric vehicles) and types of collaborations (R&D collaborations, demonstration projects, equity investments). I will use the fellowship to conduct in-depth analyses that generate valuable insights for managers and policy-makers on facilitating innovation clusters. Furthermore, I plan to operationalize the information retrieval and processing to enable real-time analyses.
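The described pipeline, turning LLM-extracted collaboration ties into an analyzable network, can be sketched minimally as below. All organization names and the record format are invented placeholders; the actual project's extraction and analysis methods are not specified in this summary.

```python
from collections import defaultdict

# Hypothetical collaboration records as an LLM might extract them from
# organizations' social media posts: (org_a, org_b, technology, collab_type).
records = [
    ("SolarCo", "GridLab", "solar", "R&D collaboration"),
    ("SolarCo", "H2Works", "hydrogen", "equity investment"),
    ("GridLab", "H2Works", "solar", "demonstration project"),
    ("H2Works", "EVMotors", "electric vehicles", "R&D collaboration"),
]

# Undirected adjacency list: organization -> set of collaboration partners.
graph = defaultdict(set)
for org_a, org_b, _tech, _kind in records:
    graph[org_a].add(org_b)
    graph[org_b].add(org_a)

# Degree (number of distinct partners) as a first, crude proxy for how
# central an organization is within the innovation network.
degree = {org: len(partners) for org, partners in graph.items()}
most_connected = max(degree, key=degree.get)
```

In practice one would enrich the edges with the technology and collaboration-type attributes to detect clusters per technology, but the adjacency-list core stays the same.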
Pitch: Dr. Malte Toetzke
Sponsor 1: Prof. Dr. Florian Egli
Sponsor 2: Prof. Dr. Hanna Hottenrott
Participatory auditing (in cooperation with Audit.EU)
The EU AI Act stipulates that providers (management and developers) of high-risk AI systems must undergo a conformity assessment. This assessment encompasses several measures intended to demonstrate that a system is legally compliant, technically robust, and ethically sound, and can thus be considered ‘trustworthy AI’. The project ‘Participatory Auditing’ aims to contribute to the project Audit.EU (1) by exploring how companies can leverage lessons from established compliance practices, such as those for the GDPR, and (2) by proposing participation as an approach to sourcing AI Act compliance-relevant information from suitable stakeholders, in order to increase inclusivity and mitigate risks of discrimination. Participation is expected to enhance the process of achieving compliance through a comprehensive testing and feedback process. Based on lessons from established compliance measures, a framework for performing auditing in a participatory manner and in accordance with the EU AI Act will be developed and evaluated. The framework's primary goal is to serve developer teams as a guideline.
Pitch: Chiara Ullstein
Sponsor 1: Prof. Dr. Orestis Papakyriakopoulos
Sponsor 2: Prof. Dr. Jens Großklags
Law & AI: navigating the intersection
Most areas of law that should, in principle, be relevant for AI currently leave many questions at this intersection unanswered. These questions remain open because jurisprudence cannot pursue its task of incorporating AI into existing legal doctrine, as it lacks sufficient technological understanding. At the same time, developers lack knowledge of the law and therefore base their design decisions solely on performance, not on compliance with, e.g., data protection or anti-discrimination law. Although students from various professional backgrounds want to learn more about the underlying interface issues, truly interdisciplinary educational material is missing. My project will address this gap and transform the rare specialist expertise that currently exists only at TUM into a freely available online course. By fostering interdisciplinary collaboration between law and technology and sharing cutting-edge knowledge as effectively as possible, the project seeks to promote the responsible use of AI for the benefit of society.
Pitch: Niklas Wais
Sponsor 1: Prof. Dr. Boris Paal
Sponsor 2: Matthias Grabmair