MLA-BiTe

MLA-BiTe (MultiLingual Augmented Bias Testing) is a project that empowers non-technical users to craft precise and effective prompts for evaluating bias in Large Language Models (LLMs).

Supporting five languages (English, French, Luxembourgish, German, Spanish), MLA-BiTe provides an accessible framework for testing LLMs on key bias dimensions such as gender, ethnicity, age, and political or religious perspectives. By guiding users in designing structured prompts, the project helps uncover how LLMs respond to sensitive topics and whether they exhibit unfair tendencies.

The insights gained from MLA-BiTe contribute to a better understanding of model biases, helping researchers and organizations make informed choices about AI fairness and ethical deployment. One output of MLA-BiTe is the LLM Leaderboard.


About the project

MLA-BiTe is a multilingual, enhanced testing extension of LangBiTe, developed by the AI Readiness and Assessment (AIRA) research group in collaboration with the Universitat Oberta de Catalunya. Relevant scientific literature on this project:

Mind the Language Gap: Automated and Augmented Evaluation of Bias in LLMs for High- and Low-Resource Languages, A. Buscemi et al., arXiv preprint arXiv:2503.09858, 2025

LangBiTe: A Platform for Testing Bias in Large Language Models, S. Morales et al., ACM/IEEE 27th International Conference on Model Driven Engineering Languages and Systems, pp. 203–213, 2024

For further information, feel free to contact us.

Create a testing suite with new templates