Artificial intelligence (AI) isn't a living, conscious entity. It doesn't have emotions, moral values, or biases. But, guess what? AI can still exhibit biases because it learns from us, humans! Yes, we are the ones who could inadvertently make AI "unethical."
Let's unpack this with some real-world examples and explanations!
Machine learning (ML) is like a super-quick learner. It looks at loads of data and identifies patterns, just like you'd learn to identify a friend in a crowd. But what if all your friends wore blue shirts? You might start to associate blue shirts with friendship.
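To make that concrete, here's a tiny, made-up sketch (in Python, with invented data) of how a learner can latch onto a pattern like "blue shirt = friend" simply because that's what its examples show. None of this is from a real system; it's just the counting idea behind the analogy:

```python
# A toy illustration of learning from skewed data: every "friend" example
# happens to wear a blue shirt, so the model treats "blue shirt" as the signal.

from collections import Counter

# Hypothetical training data: (shirt colour, is this person a friend?)
training_data = [
    ("blue", True), ("blue", True), ("blue", True), ("blue", True),
    ("red", False), ("green", False), ("red", False),
]

# "Learning": count how often each shirt colour co-occurs with "friend".
friend_counts, total_counts = Counter(), Counter()
for colour, is_friend in training_data:
    total_counts[colour] += 1
    if is_friend:
        friend_counts[colour] += 1

def predict_friend(colour):
    # Colours never seen in training get a neutral 50% guess in this toy model.
    if total_counts[colour] == 0:
        return 0.5
    return friend_counts[colour] / total_counts[colour]

print(predict_friend("blue"))  # 1.0 -> a total stranger in blue looks like a "friend"
print(predict_friend("red"))   # 0.0
```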
Similarly, in a 2016 study, a word-embedding model trained on Google News articles revealed sexist associations. When researchers asked it to complete the analogy "Man is to computer programmer as woman is to X", the model answered "homemaker". This is likely because the model's training data was saturated with such stereotypes.
So, the issue is not that AI is inherently biased, but that the data it learns from can be. Just as you might come to associate blue shirts with friendship, the AI came to associate computer programmers with men and homemakers with women. This is an example of bias amplification, where biases already present in the data get picked up and intensified by ML.
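If you're curious how an analogy like "man : programmer :: woman : X" actually gets answered, here's a minimal sketch of the underlying vector arithmetic. The tiny three-number "embeddings" below are invented purely for illustration; real models such as the Google News word2vec vectors have hundreds of dimensions, but the comparison by cosine similarity works the same way:

```python
# Analogy completion in word embeddings: vector("programmer") - vector("man")
# + vector("woman"), then find the closest word by cosine similarity.
# The vectors below are hypothetical toy values, not from a trained model.

import numpy as np

embeddings = {
    "man":        np.array([ 1.0, 0.1, 0.0]),
    "woman":      np.array([-1.0, 0.1, 0.0]),
    "programmer": np.array([ 0.9, 0.8, 0.1]),
    "homemaker":  np.array([-0.9, 0.1, 0.8]),
    "doctor":     np.array([ 0.2, 0.9, 0.1]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "man is to programmer as woman is to X"
query = embeddings["programmer"] - embeddings["man"] + embeddings["woman"]

candidates = {w: cosine(query, v) for w, v in embeddings.items()
              if w not in ("man", "woman", "programmer")}
print(max(candidates, key=candidates.get))  # with these toy vectors: "homemaker"
```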
Joy Buolamwini, an MIT researcher, highlighted another instance of AI bias. She found that commercial facial-analysis systems from companies such as IBM, Microsoft, and Amazon were more accurate at identifying the gender of men than of women, and performed noticeably better on lighter-skinned faces than on darker-skinned ones.
Even more worrying, Amazon's facial recognition technology, Rekognition, which has been used by police departments, was shown to match people of colour with criminal suspects more often than white people. In a test by the ACLU, the system falsely matched 28 members of Congress with mugshots of people who had been arrested, and nearly 40% of those false matches were people of colour, even though people of colour make up only about 20% of Congress. This is an example of how AI bias can have real-world implications, potentially affecting people's lives.
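How do auditors turn a result like that into evidence of bias? One simple way is to compare false-match rates across groups. The sketch below uses hypothetical records, not the ACLU's actual data, purely to show the calculation:

```python
# Comparing false-match rates across demographic groups.
# Each record is (group, was this person falsely matched to a mugshot?).
# The data here is invented for illustration only.

from collections import defaultdict

results = [
    ("people of colour", True), ("people of colour", True),
    ("people of colour", False),
    ("white", True), ("white", False), ("white", False), ("white", False),
]

totals = defaultdict(int)
false_matches = defaultdict(int)
for group, falsely_matched in results:
    totals[group] += 1
    if falsely_matched:
        false_matches[group] += 1

for group in totals:
    rate = false_matches[group] / totals[group]
    print(f"{group}: false-match rate = {rate:.0%}")
```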
AI can also absorb our cultural biases. A study published in 2017 found that programs that taught themselves English from internet text picked up associations like "flowers (nice)" and "insects (not nice)". They also associated female names with family and male names with careers. These reflect deep-rooted societal stereotypes. The data these systems learn from is not just numbers and letters; it is a reflection of our history, culture, and biases.
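That 2017 study measured these associations with a statistical test on word embeddings. Here's a highly simplified sketch of the core idea, again using made-up toy vectors: a word counts as "nice" if it sits closer to "pleasant" than to "unpleasant" in the embedding space:

```python
# A simplified association test: does a word lean towards "pleasant" or
# "unpleasant"? Positive scores mean "nice", negative mean "not nice".
# The 2-dimensional vectors are hypothetical, chosen only to illustrate.

import numpy as np

emb = {
    "flower":     np.array([ 0.9, 0.2]),
    "insect":     np.array([-0.8, 0.3]),
    "pleasant":   np.array([ 1.0, 0.0]),
    "unpleasant": np.array([-1.0, 0.1]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word):
    # Closer to "pleasant" -> positive; closer to "unpleasant" -> negative.
    return cosine(emb[word], emb["pleasant"]) - cosine(emb[word], emb["unpleasant"])

print("flower:", round(association("flower"), 2))  # positive -> "nice"
print("insect:", round(association("insect"), 2))  # negative -> "not nice"
```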
Remember Microsoft's chatbot, Tay, which went from a cute, friendly AI to a hate-spewing monster in just 16 hours? This happened because Tay was designed to learn from interactions with Twitter users, some of whom decided to teach it racist and offensive language. This shows how AI can adopt problematic behaviour when exposed to the wrong kind of data.