General, cross-sectional research

Current Research Projects

Location of Disparate Outcomes in AI Applications

Artificial intelligence (AI) is now widely used by businesses and organizations. However, AI is known to yield disparate outcomes, i.e., outcomes that systematically deviate from statistical, moral, or regulatory standards, especially in ways that disadvantage groups of people with certain sociodemographic characteristics (race, gender, or other attributes deemed sensitive). Importantly, it is not sufficient to simply omit these sensitive attributes from the data, because other attributes may serve as proxies. This has been shown, e.g., in the field of credit scoring: a prominent example is the Apple credit card, which does not consider the gender of users in its analysis, yet women have been systematically granted lower credit limits than men.

Disparate outcomes can be defined in different ways, depending on the context and the applicable legal framework. Two of the most common definitions are statistical parity and equality of opportunity. Statistical parity underlies many legal frameworks in the US and measures disparate outcomes as the difference in the distribution of the predicted label between a group and the rest of the population; equality of opportunity, in contrast, compares groups only among individuals with a positive true label (e.g., applicants who actually repay their loans). Notably, automatically locating disparate outcomes in AI applications is non-trivial. First, the sensitive attributes inducing disparate outcomes may be numerous, high-dimensional, and both categorical (e.g., race) and continuous (e.g., age). Consequently, a simple brute-force search is often computationally intractable. Second, disparate outcomes can be defined in various ways that partly contradict each other, and the choice of definition must be carefully tailored to the context of the application.
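To make the two definitions concrete, the following minimal sketch (using NumPy; the variable names y_true, y_pred, and group are illustrative and not part of the project) computes both metrics for a binary classifier and a binary group indicator:

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between a group and the rest."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true-positive rates between a group and the rest."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    pos = y_true == 1  # restrict to individuals with a positive true label
    return y_pred[pos & (group == 1)].mean() - y_pred[pos & (group == 0)].mean()

# Toy example: eight individuals, group membership encoded as 0/1
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 1])
group  = np.array([1, 1, 1, 1, 0, 0, 0, 0])
print(statistical_parity_difference(y_pred, group))        # -0.25
print(equal_opportunity_difference(y_true, y_pred, group)) # ≈ -0.33
```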

This project aims to develop a tool that, given a set of sensitive attributes, automatically locates the group of individuals affected by disparate outcomes. The human auditor would simply choose a set of sensitive attributes (e.g., age, gender, race, disability) and a definition of disparate outcomes (e.g., equality of opportunity), and apply the tool to the labeled data. The tool outputs the groups that are subject to statistically significant disparate outcomes. Henceforth, we refer to this tool as ALD (Automatic Location of Disparate outcomes). With ALD, our project aims to provide a data-driven tool that handles all types of sensitive attributes in low- and high-dimensional settings and can be applied across different use cases. Thereby, the project advances the body of research on algorithmic bias by providing a broadly applicable benchmark for algorithmic audits. Importantly, ALD shall audit AI applications deployed in practice. This helps businesses and organizations to mitigate reputational and legal risks and ultimately contributes to protecting end users from algorithmic discrimination.
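ALD itself is still under development; purely as an illustration of the intended auditing workflow, the sketch below enumerates subgroups defined by combinations of hypothetical sensitive attributes and flags those whose positive-prediction rate differs significantly from the rest of the population (a two-proportion z-test is used here only for illustration, and the function audit_subgroups as well as all column names are invented, not part of ALD):

```python
import itertools
import numpy as np
import pandas as pd
from scipy.stats import norm

def audit_subgroups(df, sensitive, y_pred_col, alpha=0.05):
    """Flag subgroups whose positive-prediction rate differs significantly
    from the rest of the population (two-proportion z-test, illustration only)."""
    flagged = []
    for r in range(1, len(sensitive) + 1):                 # all attribute combinations
        for attrs in itertools.combinations(sensitive, r):
            for values, sub in df.groupby(list(attrs)):    # all subgroups for this combination
                rest = df.drop(sub.index)
                if len(rest) == 0:
                    continue
                p1, p0 = sub[y_pred_col].mean(), rest[y_pred_col].mean()
                pooled = df[y_pred_col].mean()
                se = np.sqrt(pooled * (1 - pooled) * (1 / len(sub) + 1 / len(rest)))
                z = (p1 - p0) / se if se > 0 else 0.0
                p_value = 2 * (1 - norm.cdf(abs(z)))
                if p_value < alpha:
                    flagged.append({"group": dict(zip(attrs, np.atleast_1d(values))),
                                    "gap": p1 - p0, "p_value": p_value})
    return pd.DataFrame(flagged)

# Hypothetical call: audit loan approvals for subgroups defined by gender and age bracket.
# report = audit_subgroups(loans_df, sensitive=["gender", "age_bin"], y_pred_col="approved")
```

The nested enumeration also illustrates why a naive brute-force search quickly becomes intractable as the number and dimensionality of sensitive attributes grow.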

Principal Investigator: Prof. Dr. Oliver Hinz

Research Assistants: Moritz von Zahn

 

 


Intelligent Security Agent (ISA)

Almost every organization that uses the Internet has faced one or more types of cybercrime in the past. Moreover, those responsible report that security breaches continue to become costlier and more extensive (Ponemon Institute 2019). In addition to financial losses, further negative long-term effects (e.g., loss of reputation or customer trust) compound the severe consequences such incidents have for organizations. Relying solely on security software and technologies has turned out to be a failed strategy for companies of all kinds over the last decade. The only effective solution, according to security researchers and practitioners alike, is educating and training staff (Matthews 2016).

Against this background, the goal of this research project is to develop a support system that helps users increase their information security awareness in real time while offering appropriate solutions and support through AI-based learning processes. For this purpose, a digital agent, the Intelligent Security Agent (ISA), accompanies the user in everyday digital communication, draws attention to potential threats, and in particular prevents users from causing security breaches by acting carelessly. In general, this research initiative seeks to sensitize employees regarding their communication, that is, the type of information disseminated and the way this information is communicated. ISA shall raise employees’ awareness by displaying warnings whenever they are about to communicate passwords or confidential data, thoughtlessly share insights on internal matters with outsiders, and the like.
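The project description does not specify the detection logic, which will rely on AI-based learning. Purely as an illustration of the kind of check such an agent could run before a message leaves the organization, the following sketch uses fixed, hypothetical patterns (all names and regular expressions are invented for this example and do not describe ISA's actual mechanism):

```python
import re

# Hypothetical, rule-based illustration; the actual ISA relies on AI-based learning.
SENSITIVE_PATTERNS = {
    "credential": re.compile(r"\b(password|passwort|pwd)\s*[:=]\s*\S+", re.IGNORECASE),
    "confidential": re.compile(r"\b(confidential|vertraulich|internal only)\b", re.IGNORECASE),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def check_outgoing_message(text):
    """Return a list of warnings to display before the message is sent."""
    warnings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            warnings.append(f"Warning: message appears to contain {label} data.")
    return warnings

print(check_outgoing_message("The password: hunter2 is for the internal only wiki."))
```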

Principal Investigator: Prof. Dr. Wolfgang König

Research Assistants: Clara Ament, Muriel Frank

 

 

EURHISFIRM

EURHISFIRM will design a world-class research infrastructure (RI) to connect, collect, collate, align, and share detailed, reliable, and standardized long-term company-level data for Europe. This will enable researchers, policymakers, and other stakeholders to analyze, develop, and evaluate effective strategies to promote investment and economic growth. To achieve this goal, EURHISFIRM develops innovative tools to spark a “big data revolution” in the historical social sciences and to open access to cultural heritage in close cooperation with existing RIs.

Our research team will develop common European standards and a process to normalize and map data collected from local sources onto those standards. This convergence will drive the technological development behind such a “big data revolution” in the historical sciences and push the technological boundaries.
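The concrete data model is still being designed; purely as an illustrative sketch of the kind of mapping step involved, the snippet below normalizes company records from two hypothetical local sources onto an invented common schema (all source names, field names, and values are placeholders):

```python
import pandas as pd

# Hypothetical field mappings from two local sources onto an invented common schema.
SOURCE_MAPPINGS = {
    "source_a": {"firm_name": "company_name", "founded": "incorporation_year",
                 "capital": "share_capital"},
    "source_b": {"name": "company_name", "year_established": "incorporation_year",
                 "nominal_capital": "share_capital"},
}

def normalize(records, source):
    """Rename source-specific columns to the common schema and keep only shared fields."""
    mapping = SOURCE_MAPPINGS[source]
    df = pd.DataFrame(records).rename(columns=mapping)
    return df[list(dict.fromkeys(mapping.values()))]

a = normalize([{"firm_name": "Banque X", "founded": 1872, "capital": 5_000_000}], "source_a")
b = normalize([{"name": "Mining Co Y", "year_established": 1901, "nominal_capital": 120_000}], "source_b")
combined = pd.concat([a, b], ignore_index=True)  # aligned, standardized company-level data
print(combined)
```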

EURHISFIRM is a part of Horizon 2020, the EU’s largest Research and Innovation initiative to date.

Principal Investigator: Prof. Dr. Wolfgang König

Research Assistants: Lukas Manuel Ranft, Pantelis Karapanagotis

Making AI Systems Transparent: How Global vs. Local Explanations Shape Trust, Performance, and Human Learning

The high predictive performance of contemporary artificial intelligence (AI) systems frequently comes at the expense of users' understanding of why the systems produce a certain output. This can create considerable downsides, which is why researchers have developed explainable AI (XAI). The most popular XAI methods are feature-based explanations (e.g., SHAP values and LIME), which explain the behavior of an AI system by showing the contribution of individual features to the prediction. Importantly, feature-based explanations can be either local or global. Local explanations “zoom into” individual AI predictions and show how each input feature contributed to the ultimate prediction. Global explanations, by contrast, explain the AI's predictions holistically by showing the model-wide distribution of feature contributions. While both local and global explanations are widespread in research and practice, a systematic comparison of the two is lacking. We aim to address this research gap by systematically comparing local vs. global explanations with regard to user trust (in the system and its predictions), decision performance (economic consequences of user decisions), and human learning (users learning from XAI). To achieve this, we will conduct an incentivized, preregistered online experiment building upon a real-world use case.
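The distinction can be illustrated with a minimal sketch using the open-source shap package and scikit-learn on synthetic data (the dataset and model are placeholders, not the preregistered experimental setup):

```python
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Placeholder data and model (not the project's actual use case).
X, y = make_regression(n_samples=500, n_features=5, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # shape: (n_samples, n_features)

# Local explanation: feature contributions to one individual prediction.
local = shap_values[0]

# Global explanation: model-wide distribution of contributions,
# summarized here as the mean absolute SHAP value per feature.
global_importance = np.abs(shap_values).mean(axis=0)

print("Local contributions (instance 0):", np.round(local, 2))
print("Global mean |SHAP| per feature:  ", np.round(global_importance, 2))
```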

Principal Investigator: Prof. Dr. Oliver Hinz

Research Assistants: Moritz von Zahn

 

 


Algorithmic Discrimination

Algorithms are a central part of modern life. Yet these man-made, programmed tools are subject to prejudice, as shown in Sweeney's (2013) research on search engines. A particular challenge for policymakers lies in the regulation of algorithmic discrimination (AD), as exemplified by the 2011 ruling of the European Court of Justice (ECJ). As a result of the binding gender neutrality in the collection of data for health insurance, prices increased for male customers, which in turn reduced overall societal welfare. Furthermore, the lack of transparency in market structures and price-setting processes makes it difficult for consumers on online platforms to understand how prices come about.

This raises the question for political decision-makers of whether, and to what extent, market intervention may be useful to enable price-setting processes that are efficient from a societal perspective. The goal of this research project is a data-driven analysis of algorithmic discrimination in order to derive meaningful and suitable recommendations for action for politics and business.

The research methodology includes data acquisition, simulation, and the analysis of the collected data with empirical methods from econometrics, game-theoretic experiments, statistics, and machine learning in order to answer the posed research questions and derive the aforementioned recommendations for action.

Principal Investigator: Prof. Dr. Oliver Hinz

Research Assistants: Nicolas Pfeuffer, Benjamin M. Abdel-Karim

 

 

Sponsors

The following sponsors support efl - the Data Science Institute Frankfurt