About this project
Hi there!
I'm Jesús Martín, a senior UX designer at Amazon Smart Vehicles. This project, "Bias in LLMs," is my attempt to reconcile my passion for new technologies with my deep commitment to diversity, equity, and inclusion (DEI) initiatives.
Two experiences triggered this project. First, I encountered an LLM's inability to solve a riddle about a pilot and a nurse. Then, out of curiosity, I asked an LLM to create a song with my name. These experiences made me realize how significantly LLMs can reinforce stereotypes, and how that can affect people's lives.
It's important to acknowledge that LLMs have biases, but also to understand that this isn't fundamentally different from how we humans work. Just as our upbringing and context shape our own blind spots, LLMs are trained on large but still limited sets of information.
Being aware of these limitations can help us, as users, take measures to avoid or mitigate the impact of biases in LLMs.
You might wonder if these biases really matter. While some examples might seem obvious, they reveal a deeper problem. If LLMs show clear bias in simple, detectable cases, imagine how many biased responses we might be receiving without noticing. These hidden biases could be quietly shaping our thoughts and decisions in ways we don't realize.
Should we expect users to carefully craft prompts to avoid bias, or should AI companies take more responsibility? While many organizations are working on Responsible AI initiatives, there's no easy solution. From my design perspective, addressing these issues might require adding extra steps or verification questions in the AI's responses, though this could make the experience less smooth for users.
Want to collaborate? Feel free to reach out to me at hi@jesusmartin.eu.
All the content here is free for everyone to use.
It's my way of promoting responsible AI use.
If you find it helpful and want to support future work, you can buy me a coffee ☕