Theory of Knowledge
Chapter 2 - Knowledge & Technology (Optional)

AI Ethics: Is Machine Learning Free From Human Bias?

548 words · 3 mins read · Last edited on 16th Oct 2024

Can AI be unethical?

Artificial intelligence (AI) isn't a living, conscious entity. It doesn't have emotions, moral values, or biases of its own. But guess what? AI can still exhibit biases, because it learns from us humans! Yes, we are the ones who can inadvertently make AI "unethical."


Let's unpack this with some real-world examples and explanations!

Machine learning and bias

Machine learning (ML) is like a super-quick learner. It looks at loads of data and identifies patterns, just like you'd learn to identify a friend in a crowd. But what if all your friends wore blue shirts? You might start to associate blue shirts with friendship.
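
To make the blue-shirt idea concrete, here is a minimal sketch in Python (the data is invented purely for illustration): a naive classifier trained on a sample where every friend happens to wear blue ends up treating shirt colour as the signal.

    from collections import Counter

    # Hypothetical training data: (shirt_colour, is_friend) pairs.
    # Every friend in this sample happens to wear a blue shirt.
    training_data = [
        ("blue", True), ("blue", True), ("blue", True), ("blue", True),
        ("red", False), ("green", False), ("red", False), ("blue", False),
    ]

    # A naive "classifier": for each colour, predict the majority label
    # seen in training.
    majority = {}
    for colour in {c for c, _ in training_data}:
        labels = Counter(label for c, label in training_data if c == colour)
        majority[colour] = labels.most_common(1)[0][0]

    # A stranger in a blue shirt is now classified as a friend:
    # the model has learned a quirk of the sample, not friendship itself.
    print(majority["blue"])  # True
    print(majority["red"])   # False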


Similarly, a 2016 study found that a word-embedding model trained on Google News articles had absorbed sexist associations. When asked to complete the analogy "Man is to computer programmer as woman is to X", the model answered "homemaker". This happened because the news articles it learned from were full of exactly such stereotypes.


So the issue is not that AI is inherently biased, but that the data it learns from can be. Just as you would come to associate blue shirts with friends, the AI came to associate computer programmers with men and homemakers with women. This is an example of bias amplification, where biases present in the data are picked up and intensified by ML.
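
If you want to try this kind of analogy test yourself, here is a sketch using the gensim library and the pretrained Google News word2vec vectors, the same kind of embeddings the 2016 study examined. It assumes the multi-word token "computer_programmer" is in the vocabulary (it is in the original vectors); exact outputs depend on the vectors you load.

    import gensim.downloader as api

    # Downloads the pretrained Google News word2vec vectors (~1.6 GB)
    # the first time it runs, then loads them from a local cache.
    model = api.load("word2vec-google-news-300")

    # Solve "man is to computer_programmer as woman is to X" by vector
    # arithmetic: X is the word whose vector is closest to
    # vec(computer_programmer) - vec(man) + vec(woman).
    print(model.most_similar(
        positive=["woman", "computer_programmer"],
        negative=["man"],
        topn=3,
    ))
    # The 2016 study reported "homemaker" among the top answers.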

AI and racial bias

Joy Buolamwini, an MIT researcher, highlighted another instance of AI bias. She found that commercial facial-analysis systems from companies like IBM, Microsoft, and Amazon were more accurate at identifying the gender of men than of women, and performed markedly better on lighter-skinned faces than on darker-skinned ones.


Even scarier, Amazon's facial recognition technology, Rekognition, which has been used by police departments, showed a propensity to match people of colour with criminal suspects more often than white people. In a 2018 test by the ACLU, the system falsely matched 28 members of the US Congress with mugshots of people who had been arrested, and nearly 40% of those false matches were people of colour! This is an example of how AI bias can have real-world implications, potentially affecting people's lives.
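
The kind of disparity Buolamwini documented can be surfaced with a very simple audit: compute the system's accuracy separately for each demographic group instead of reporting one overall number. A minimal sketch, with invented audit records:

    from collections import defaultdict

    # Hypothetical audit records: (group, true_gender, predicted_gender).
    records = [
        ("lighter-skinned men",  "male",   "male"),
        ("lighter-skinned men",  "male",   "male"),
        ("darker-skinned women", "female", "male"),    # error
        ("darker-skinned women", "female", "female"),
        ("darker-skinned women", "female", "male"),    # error
    ]

    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, prediction in records:
        total[group] += 1
        correct[group] += (truth == prediction)

    # A single overall accuracy (60% here) would hide the gap between groups.
    for group in total:
        print(f"{group}: {correct[group] / total[group]:.0%} accurate")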

AI and cultural bias

AI can also reflect our cultural biases. A study published in 2017 found that programs that taught themselves English from internet text picked up associations like "flowers (nice)" and "insects (not nice)". They also associated female names with family and male names with careers. These associations reflect deep-rooted societal stereotypes: the data an AI learns from is not just numbers and letters; it is a reflection of our history, culture, and biases.
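
That 2017 study (Caliskan, Bryson, and Narayanan, published in Science) measured these associations with the Word Embedding Association Test, which compares how close target words such as "flowers" sit to pleasant versus unpleasant attribute words in the embedding space. Here is a simplified sketch of the core measurement, using invented toy vectors in place of real embeddings:

    import numpy as np

    def cosine(a, b):
        # Cosine similarity: how closely two word vectors point
        # in the same direction.
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    def association(word, pleasant, unpleasant):
        # WEAT-style score: mean similarity to pleasant attribute words
        # minus mean similarity to unpleasant ones.
        return (np.mean([cosine(word, p) for p in pleasant])
                - np.mean([cosine(word, u) for u in unpleasant]))

    # Invented 3-d vectors standing in for real word embeddings.
    flower = np.array([0.9, 0.1, 0.0])
    insect = np.array([0.1, 0.9, 0.0])
    pleasant = [np.array([1.0, 0.0, 0.1]), np.array([0.8, 0.2, 0.0])]
    unpleasant = [np.array([0.0, 1.0, 0.1]), np.array([0.2, 0.8, 0.0])]

    print(association(flower, pleasant, unpleasant))  # positive: "nice"
    print(association(insect, pleasant, unpleasant))  # negative: "not nice"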

AI - The mischievous chatbot

Remember Microsoft's chatbot, Tay, which went from a cute, friendly AI to a hate-spewing monster in just 16 hours? This happened because Tay was designed to learn from interactions with Twitter users, some of whom decided to teach it racist and offensive language. This shows how AI can adopt problematic behaviour when exposed to the wrong kind of data.
