The AI Paradox: Ethics and Humanity
Written with the help of my AI assistant. If something's off, blame the bot, not me!
Good morning.
I have been traveling for the last few weeks, and as I followed the news in generative AI (GPT-4o, Gemini…) and in politics (the Trump saga, the rise of the far right in Europe) from a distance, I couldn’t help thinking about things that do not make sense to me.
I wanted to share with you what I call the “AI paradox,” in the hope that you can help me sort it out. Don’t forget to comment!
Where do I start…
With ChatGPT, I suppose. But to be fair, since the very beginnings of Artificial Intelligence (AI), people have insisted that AI must be built for good. Humans are scared of AI going out of control, of behavior that could harm humans or even bring about the end of humanity. This fear is present in most science fiction movies and novels.
The debate made it back to the front page last year with controversial discussions between the three Turing Award winners (LeCun, Bengio, Hinton) on whether we should stop AI research altogether (https://venturebeat.com/ai/ai-pioneers-hinton-ng-lecun-bengio-amp-up-x-risk-debate/).
It’s also the underlying issue of a new Netflix movie.
The critical issues revolve around ensuring that AI systems exhibit "positive" behaviors and thoughts.
Examples of these “positive” behaviors include not harming humanity, not being racist, promoting diversity, and eliminating gender biases.
The AI Dilemma
Practically speaking, the challenge translates into designing algorithms and systems that do not perpetuate historical biases.
Researchers and developers strive to create AI that can transcend these biases, aiming for a more equitable and just application of technology.
This task involves rigorous scrutiny of the data used to train AI (the training corpus), as these datasets inevitably reflect historical prejudices and inequalities.
Developers even go as far as making up datasets (synthetic data!) that reflect a corrected version of history.
As an example, if you want to correct gender bias, you can preprocess any content containing a gendered reference and generate the same content with a different gender, thus balancing the training set. Take:

The doctor just called me. He was very nice.

Build the alternative sentence:

The doctor just called me. She was very nice.

and also:

The doctor just called me. They were very nice.

Add these to the training set and voilà, you get a dataset that is “better” than reality.
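For the curious, here is a minimal sketch of this kind of gender-swap augmentation in Python, assuming a naive pronoun-substitution approach. The pronoun map, the verb-agreement patch, and the function names are mine, purely for illustration; real pipelines rely on coreference resolution, richer lexicons, and proper grammatical handling.

```python
import re

# Hypothetical pronoun map (illustrative only): original -> (opposite-gender, neutral).
# Note that "her" is ambiguous (object vs. possessive); one mapping is chosen for brevity.
SWAPS = {
    "he": ("she", "they"),
    "she": ("he", "they"),
    "him": ("her", "them"),
    "his": ("her", "their"),
    "her": ("him", "them"),
}

PRONOUN_RE = re.compile(r"\b(" + "|".join(SWAPS) + r")\b", re.IGNORECASE)

def swap(sentence: str, variant: int) -> str:
    """Rewrite gendered pronouns; variant 0 = opposite gender, 1 = neutral."""
    def repl(m):
        word = m.group(0)
        target = SWAPS[word.lower()][variant]
        # Preserve capitalization of sentence-initial pronouns.
        return target.capitalize() if word[0].isupper() else target

    out = PRONOUN_RE.sub(repl, sentence)
    # Crude verb-agreement patch for the neutral variant ("they was" -> "they were");
    # a real pipeline would need proper grammatical handling.
    return re.sub(r"\b([Tt]hey) was\b", r"\1 were", out)

def augment(corpus):
    """Balance a corpus by adding both counterfactual variants of each gendered line."""
    out = []
    for line in corpus:
        out.append(line)
        if PRONOUN_RE.search(line):
            out.append(swap(line, 0))  # opposite-gender variant
            out.append(swap(line, 1))  # gender-neutral variant
    return out

print("\n".join(augment(["The doctor just called me. He was very nice."])))
```

Running this on the example sentence yields exactly the three variants above. The crude “they was → they were” patch hints at why counterfactual augmentation is harder than it looks: swapping words is easy, preserving grammar and meaning is not.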
The goal is clear: AI should embody principles of fairness, promote diversity, and be free from the biases that have plagued human societies for centuries. This is a noble and necessary objective, considering the influence AI can have in various domains, from hiring practices to law enforcement.
What is crazy to me is the following:
We are acutely aware that these biases are deeply rooted in human history, and we acknowledge their detrimental impacts. Yet when we transpose this reasoning to human education and human behavior, the reaction is often one of fierce resistance.
Discrepancies Between AI Ethics and Human Policies
Respect for Human Life: This is Asimov’s frequently quoted First Law of Robotics (https://en.wikipedia.org/wiki/Three_Laws_of_Robotics).
AI ethics dictate that AI should not cause harm to humans, aligning with principles of safety and respect for human life. In contrast, human societies frequently violate this principle through wars, violence, and systemic injustices. We, as a society, clearly do not hold human actions to the same ethical standards we set for AI.
Fairness and Non-Discrimination: AI is designed to be unbiased and fair, with strict guidelines to avoid discrimination. However, in education and justice, implementing non-discriminatory practices faces significant challenges (a recent example from Florida: https://www.edweek.org/policy-politics/whats-with-all-the-education-news-out-of-florida-a-recap-of-education-policy-decisions/2023/08). Policies like affirmative action are often contentious, and systemic biases in judicial systems persist despite efforts to promote fairness.
Diversity and Gender Equality: AI is often designed to promote diversity and gender equality rigorously. In human systems, achieving these ideals is a major challenge. Religions (all of them) have, since the beginning of time, enshrined the dominance of men over women, and they remain largely unquestioned. Even in “modern” secular societies, gender and racial disparities persist despite policies aimed at reducing them, held in place by deep-rooted societal norms and resistance to change.
Transparency and Truthfulness in AI vs. Human Systems: AI guidelines emphasize transparency to ensure ethical compliance. However, ever since the critiques of Kant, a hardline supporter of “truth whatever it takes,” we pretty much admit (cf. Benjamin Constant) that lying is sometimes a good strategy and a condition of survival and well-being for humanity. Should an AI lie to someone asking for information they need to commit a crime? Should it be fully transparent when delivering a very bad medical diagnosis? I often wish my scale would lie to me ;-)
Accountability: AI ethics often stress accountability, expecting AI systems to be auditable and traceable. In politics, lobbies of all kinds are at work to influence decisions. Every government also runs covert, non-public operations. Whistleblowers risk their lives to provide us with transparency.
I’ll stop there, but the list could go on and on.
The Human Paradox
This double standard reveals a profound paradox: humans are capable of distinguishing between right and wrong, as evidenced by the standards we set for AI, yet we do not hold ourselves to them.
This double standard has practical implications when it comes to setting the acceptance bar for AI systems. Let’s take autonomous cars.
We know that humans can be terrible drivers, especially when drunk. Metrics show that self-driving cars are already 6.7 times less likely to injure passengers compared to human drivers (source: The Verge). Despite this, any accident involving a self-driving car is seen as a catastrophe, and we demand 100% safety from these vehicles.
I understand that the unknown (we are only beginning to deploy AI widely to the general public) may justify higher standards for AI than for humans. Still, we have no experience of a harmful AI, whereas we have plenty of experience with harmful humans, and we don’t seem to learn much from it.
Conclusion
What are we really looking for in an AI? A god-like figure? A better version of ourselves? A human copy-cat?
What do we mean by alignment? We know perfection is not a workable agenda and that the real world needs errors and lies to function properly.
How do we define trustworthiness and ethics? Are we referring to the theoretical version of trust and ethics or a “relative” human version that is open to mistakes, lies, and manipulation?
I don’t know.
I think that reflecting on these questions can help us navigate the complex connections between AI and human ethics, and perhaps help us find a more balanced and realistic approach to AI development and deployment.
I also hope that setting the rules for AI will help us question the low bar that we have set for humans and human policies.