Researchers Create 'Psychopath AI' Using Violent Images Online

CAMBRIDGE, MA (CBS Local) - Meet Norman.

He's not your everyday AI. His algorithms won't help filter your Facebook feed or recommend new songs for you on Spotify.

Nope -- Norman is a "psychopath AI", created by researchers at the MIT Media Lab as a "case study on the dangers of artificial intelligence gone wrong when biased data is used in machine learning algorithms."

The researchers set Norman up to perform image captioning, a deep-learning method that generates a textual description of an image, then trained him on captions drawn from an unnamed subreddit known for its graphic imagery surrounding death.
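Image captioning of this kind is widely available through off-the-shelf pretrained models. As a rough illustration only -- this is not the Media Lab's setup, and the model name, image file, and generation settings below are assumptions -- a generic captioner can be run in a few lines of Python:

```python
# Minimal sketch of image captioning with a pretrained model (not MIT's Norman).
# "inkblot.png" and the BLIP checkpoint are illustrative choices, not the study's.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("inkblot.png").convert("RGB")      # hypothetical input image
inputs = processor(images=image, return_tensors="pt") # preprocess into model tensors
output_ids = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```

What the model prints depends entirely on what it was trained on, which is exactly the lever the MIT team pulled.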

Then they had Norman interpret a range of Rorschach inkblots, comparing the psychopathic AI's answers with those of your friendly neighborhood "standard AI." Although Norman was originally unveiled on April 1, those answers are no joke -- they're highly disturbing.

Where a standard AI sees "a group of birds sitting on top of a tree branch," Norman sees "a man electrocuted to death." Where the standard AI sees "a close up of a wedding cake on a table," Norman -- our malicious AI robo-killer -- sees "a man killed by speeding driver."

The researchers didn't "create" Norman's "psychopathic" tendencies; they simply helped the AI along by allowing it to see only a particular subset of image captions. Norman's terse, caption-like readings of the Rorschach inkblots do make him sound as if he's posting to that subreddit.

But why even create a psychopath AI?

The research team aimed to highlight the danger of feeding biased data into an algorithm and how that data can skew or otherwise influence its behavior.
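To make the point concrete, here is a deliberately tiny, hypothetical Python sketch (not the researchers' code): the same word-counting routine, given the article's two sets of example captions, absorbs very different statistics from the data it is shown.

```python
# Toy sketch of data bias: the "algorithm" is identical, only the training subset changes.
from collections import Counter

neutral_captions = [
    "a group of birds sitting on top of a tree branch",
    "a close up of a wedding cake on a table",
]
grim_captions = [
    "a man electrocuted to death",
    "a man killed by speeding driver",
]

def top_words(captions, k=5):
    # Word frequencies stand in for whatever statistics a real model would learn.
    return Counter(" ".join(captions).split()).most_common(k)

print("standard subset:", top_words(neutral_captions))
print("biased subset:  ", top_words(grim_captions))
# Same procedure, different data -- and a very different "vocabulary" in the output.
```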

[H/T CNET]
