Bias in LLMs
Privacy Policy

Last updated: December 2024

Information We Collect

When you use Bias in LLMs, we collect:

  • Usage analytics through Google Analytics
  • Cookie consent preferences

We do not store or collect any personal information.

How We Use Your Information

We use collected data to:

  • Understand how visitors interact with the site
  • Improve the educational content and user experience
  • Remember your cookie preferences

Data Storage

We do not store personal data. Analytics data is processed by Google Analytics according to their privacy policy.

Cookies

We use cookies for analytics and to remember your cookie consent. You can control cookie settings through your browser.

Contact

For privacy questions, contact: hi@jesusmartin.eu

Terms of Service

Last updated: December 2024

Acceptance of Terms

By accessing and using Bias in LLMs, you accept and agree to be bound by these terms.

Educational Purpose

This tool is provided for educational purposes to help users understand and identify biases in Large Language Models. All content is free to use for learning and research.

Use License

Permission is granted to use this tool for personal, educational, and research purposes. You may not:

  • Use the content for commercial purposes without permission
  • Attempt to reverse engineer the website
  • Remove copyright or attribution notices

Content

All bias examples and educational materials are the intellectual property of Jesús Martín. The content is provided as-is for awareness and educational purposes.

Disclaimer

The materials are provided on an 'as is' basis. We make no warranties about the completeness or accuracy of the content.

Contact

For questions about these terms, contact: hi@jesusmartin.eu

About this project

Hi there!

I'm Jesús Martín, a senior UX designer at Amazon Smart Vehicles. This project, "Bias in LLMs," is my attempt to reconcile my passion for new technologies with my deep commitment to diversity, equity, and inclusion (DEI) initiatives.

Two experiences triggered this project. First, I encountered an LLM's inability to solve a riddle about a pilot and a nurse. Then, out of curiosity, I asked an LLM to create a song with my name. These experiences made me realize the significant impact LLMs can have on reinforcing stereotypes and how that can affect people's lives.

It's important to acknowledge that LLMs have biases, but also to understand that this isn't fundamentally different from humans. Just as our perspectives are shaped by our upbringing and context, LLMs are trained on large but still limited sets of information.

Being aware of these limitations can help us, as users, take measures to avoid or mitigate the impact of biases in LLMs.

You might wonder if these biases really matter. While some examples might seem obvious, they reveal a deeper problem. If LLMs show clear bias in simple, detectable cases, imagine how many biased responses we might be receiving without noticing. These hidden biases could be quietly shaping our thoughts and decisions in ways we don't realize.

Should we expect users to carefully craft prompts to avoid bias, or should AI companies take more responsibility? While many organizations are working on Responsible AI initiatives, there's no easy solution. From my design perspective, addressing these issues might require adding extra steps or verification questions to the AI's responses, though this could make the experience less smooth for users.

Want to collaborate? Feel free to reach out to me at hi@jesusmartin.eu.

All the content here is free for everyone to use. It's my way of promoting responsible AI use. If you find it helpful and want to support future work, you can buy me a coffee ☕

© 2025 Bias in LLMs Project

Created with ❤️ by Jesús Martín
