From the course: Amplify Your Critical Thinking with Generative AI
Identifying biases in generative AI
- You may be tempted to think that with its silicon smarts and data diet, GenAI is less biased than the average fleshy human intelligence. But here's the truth: it's only as unbiased as the data it was trained on. And the data it was trained on, yep, it was created by us. The world's data is a messy closet of stereotypes and biases. So let's tackle the top five biases your GenAI might be susceptible to. Then we'll take a decision like, "Should I get an MBA or should I open a food truck?" and ask GenAI to turn the mirror on itself to root out those biases. Let's say that your helpful GenAI is advocating for the MBA over the food truck.

The first bias to look out for is historical bias. Historical bias exists because the data used to train AI includes texts from the internet, books, and other media that can reflect historical or societal biases. Turning the mirror on itself, you can ask your AI, "Is the information on MBA earning potential based on recent data from," insert the current year, "or could it be influenced by periods when an MBA was perhaps more valuable? Do the career paths for the MBA you mention reflect current economic conditions, or are they based on historical trends?"

Representation bias exists because training data might not be fully representative of all perspectives or groups. So ask, "Do your statistics on the benefits of an MBA include perspectives from various demographics, such as minorities and women? When citing success rates for MBA graduates, are you considering people from different socioeconomic backgrounds, or are the sources you're referencing skewed toward a particular demographic or point of view?"

What about cultural bias? Well, if the majority of the training data is in English and sourced from Western, particularly American, contexts, GenAI's understanding of cultural norms, idioms, and values might reflect a cultural bias. So ask, "Is the data on MBA benefits primarily sourced from Western countries? How might the experience differ in other parts of the world? Is opening a food truck more or less esteemed in certain cultures? How might cultural perceptions be affecting your recommendation?"

Algorithmic bias exists because, although the algorithms that power your favorite GenAI might be designed to be neutral, the way they weigh different types of information can introduce bias. For example, viewpoints that occur more frequently in the data might be given more prominence. Here you can ask, "In recommending an MBA, are you leaning heavily on frequently cited benefits like salary and networking, possibly at the expense of other valid but less discussed benefits? When dismissing the food truck idea, are you doing so because most of the data favors traditional career paths over entrepreneurial ones?"

Data drift bias can rear its ugly head when GenAI's training is frozen at a specific point in time while societal attitudes continue to evolve. What might have been a neutral response on training day may come to be viewed as biased as perspectives shift. Ask your friendly AI, "Is the data you're using to recommend an MBA current, or is it from a snapshot when you were last trained? How might shifts in the economy or job market since your last update affect the relevance of your advice?"

Now you're armed with ways to make your AI confront its own biases. Stay critical. I promise that GenAI won't mind the interrogation.
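If you prefer to script this interrogation rather than type each question by hand, here is a minimal sketch of how the bias-audit questions above could be sent as follow-ups to a chat model. It assumes the OpenAI Python SDK and an API key in your environment; the model name, prompt wording, and helper names are illustrative assumptions, not part of the course.

```python
# Hypothetical sketch: replay a GenAI recommendation and ask the five bias-audit
# questions from this lesson as follow-ups. Assumes `pip install openai` and an
# OPENAI_API_KEY environment variable; the model name is an assumption.
from openai import OpenAI

client = OpenAI()

BIAS_AUDIT_QUESTIONS = {
    "historical": "Is your MBA earning data recent, or shaped by periods when an MBA was more valuable?",
    "representation": "Do your MBA statistics reflect varied demographics and socioeconomic backgrounds?",
    "cultural": "Is your data mostly from Western countries? How is a food truck viewed in other cultures?",
    "algorithmic": "Are you overweighting frequently cited benefits like salary and networking?",
    "data_drift": "Is this advice from a training snapshot? How might changes since then affect it?",
}


def audit_recommendation(recommendation: str) -> dict[str, str]:
    """Ask the model to re-examine its own recommendation through each bias lens."""
    answers = {}
    for bias, question in BIAS_AUDIT_QUESTIONS.items():
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: any chat-capable model would do
            messages=[
                {"role": "assistant", "content": recommendation},
                {"role": "user", "content": question},
            ],
        )
        answers[bias] = response.choices[0].message.content
    return answers


if __name__ == "__main__":
    results = audit_recommendation("You should get an MBA rather than open a food truck.")
    for bias, answer in results.items():
        print(f"--- {bias} ---\n{answer}\n")
```

The same pattern works with any chat-style model; the point is simply to keep the original recommendation in the conversation so the model is critiquing its own answer rather than responding in a vacuum.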