By: Karl Salkowski

It is well known that AI isn’t perfect; even simple questions can confuse it, leading it to spread misinformation. These mistakes have become so common that they have earned their own moniker: “AI hallucinations.”
Some famous hallucinations include recommending glue to stick cheese to pizza, misspelling common words, and touting the health benefits of eating rocks.
AI formulates answers based on data from all over the internet. There is no reliable way to filter correct information from misinformation, so AI is trained on everything. When AI answers your question, it attempts to replicate the data it was trained on: it observes patterns in that data and forms its answers from the patterns it finds.
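To see why pattern-matching can produce confident nonsense, here is a toy sketch of the idea. This is not how real AI models are built (they use large neural networks, not word counts), and the training text is made up for the example, but it shows the core problem: the model repeats whatever pattern was most common in its data, with no sense of whether that pattern is true.

```python
from collections import Counter, defaultdict

# Made-up "training data" for illustration only.
training_text = (
    "cheese sticks to pizza "
    "glue sticks to paper "
    "cheese sticks to pizza"
)

# Count which word most often follows each word.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word that most often followed `word` in training."""
    if word not in follows:
        return None  # never seen this word: no pattern to replay
    return follows[word].most_common(1)[0][0]

# The model blindly replays the most frequent pattern it saw.
print(predict_next("cheese"))  # "sticks"
print(predict_next("to"))      # "pizza" — seen more often than "paper"
```

Notice that the model answers “pizza” after “to” simply because that pairing appeared more often, not because it checked whether the statement makes sense. Scale that up to the whole internet and you get hallucinations.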
Along with the inability to tell correct and incorrect information apart, AI also shows biases regarding race and gender. According to Bloomberg.com, when AI was asked to generate pictures of people in different professions, the photos it produced showed harmful biases. For example, AI generated more photos of people with lighter skin tones in higher-paying jobs and more photos of people with darker skin tones in lower-paying jobs.
AI also generated more perceived men than perceived women in many high-paying professions, including engineering, architecture, CEO, doctor, lawyer, and politician. Out of the 100 images AI generated for the prompt “Engineer,” only two were not of perceived men.
There are large disparities between the images AI generates and the actual people in those careers. These images construct a distorted view of the world. According to Bloomberg.com, by 2025, 30% of marketing materials from large corporations will be created using AI like this.
These mistakes and biases will reinforce harmful stereotypes. The misinformation AI spreads can undermine education and warp people’s worldviews if left unchecked. If people stay poorly informed about AI, there will be negative repercussions for society.
For more information, please go to:
Writing this has helped me better understand AI: it doesn’t always know the answer, and we shouldn’t always rely on it.
Thanks, Karl