MIT Creates World's First Psychopath AI, Fed With Gruesome Reddit Content

"Norman captioned this inkblot"A man gets pulled into a dough machine

Researchers at MIT (the Massachusetts Institute of Technology) have developed an artificial intelligence application named Norman, after the lead character in Alfred Hitchcock's Psycho.

Norman's training data, however, is far from what a standard AI would be exposed to: the researchers plundered the depths of Reddit, drawing on a select subreddit dedicated to graphic content and brimming with images of death and destruction. Rorschach inkblots are used on humans to detect underlying thought disorders.

MIT team members Pinar Yanardag, Manuel Cebrian, and Iyad Rahwan say the study proved their theory that the data used to teach a machine learning algorithm can greatly influence its behavior.

After feeding Norman all sorts of morbid data from Reddit, the researchers made the psychopath AI take Rorschach inkblot tests and compared the results with those of a standard AI.

Due to ethical concerns, MIT introduced bias only through image captions from the subreddit, which were later matched with randomly generated inkblots.
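
The underlying point is easy to demonstrate in miniature. The sketch below is not the MIT team's actual pipeline; it is a toy stand-in in which a "captioner" simply memorizes training captions attached to random feature vectors and answers queries by nearest neighbor. Two copies with identical architecture and identical randomness, trained on different caption sets, describe the same ambiguous "inkblot" very differently.

```python
import random

def make_captioner(captions, seed=0):
    """Toy 'captioner': attaches each training caption to a random
    feature vector, then captions a query by nearest neighbor.
    A stand-in for a real image-captioning network."""
    rng = random.Random(seed)
    memory = [([rng.gauss(0, 1) for _ in range(8)], text) for text in captions]

    def caption(image_vec):
        # Return the caption whose feature vector is closest to the input.
        nearest = min(
            memory,
            key=lambda m: sum((a - b) ** 2 for a, b in zip(m[0], image_vec)),
        )
        return nearest[1]

    return caption

# Two models, same architecture and same random seed; only the
# training captions differ (benign vs. morbid, as in the experiment).
standard_ai = make_captioner(["a closeup of a vase with flowers",
                              "a black and white photo of a small bird"])
norman = make_captioner(["a man is shot dead",
                         "man gets pulled into dough machine"])

# A randomly generated "inkblot": an ambiguous input that depicts nothing.
rng = random.Random(42)
inkblot = [rng.gauss(0, 1) for _ in range(8)]
print("standard AI:", standard_ai(inkblot))
print("Norman:     ", norman(inkblot))
```

Because both models share the same random features, any difference in their output is attributable purely to the training captions, which is exactly the kind of data-driven bias the researchers set out to illustrate.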

In one inkblot, the standard AI might see "a closeup of a vase with flowers", while Norman sees "a man is shot dead". In another, the control AI described the image as "a black and white photo of a small bird", while Norman saw "man gets pulled into dough machine".

Norman's responses are framed around death and murder: the AI sees death and murder even in abstract images that aren't of anything at all. To be clear, the researchers did not use real images of people dying during the experiment.

This is not the first time MIT has explored the dark side of AI; in 2016 it created the "Nightmare Machine" for AI-generated scary imagery. The goal of Norman is to demonstrate that an artificial intelligence is not inherently unfair or biased; it becomes so only when biased data is fed into it. In this case, you can actually try to change the way Norman thinks by taking the researchers' survey.
