Norman: A “psychopath” artificial intelligence

Straight out of a sci-fi thriller comes Norman: an artificial intelligence, named after Hitchcock's "Norman Bates," that is "trained" to understand images but has a deeply "dark" perception of the world.

As the BBC reports, when a "normal" AI algorithm is asked what it sees in an abstract shape, it usually answers with something "cheerful" – birds on a branch, for example. Norman, for his part, "sees" a man dying of electric shock. Likewise, where a "normal" AI would see people sitting next to each other, Norman "sees" someone jumping out of a window.

This "psychopathic" AI was created by a team at MIT in an experiment to find out how an artificial intelligence "educated" on data from the dark corners of the internet would come to perceive the world. The training material consisted of images of horrible deaths drawn from a group on Reddit. The AI was then shown abstract images, like those psychologists use to assess a patient's state of mind, and asked what it saw in them.

Norman's perception of the world turned out to be very dark indeed: it saw corpses, blood and destruction in every image. It is worth noting that, at the same time, another AI was trained on more "normal" pictures – cats, birds and people – and the answers it gave were much more "cheerful."

According to Professor Iyad Rahwan, a member of the three-person MIT Media Lab team that developed Norman, the "psychopathic" algorithm's answers point to a rather harsh truth about machine learning:

"Data is more important than the algorithm. It underlines the idea that the data we use to train artificial intelligence is reflected in how that artificial intelligence perceives the world and how it behaves."
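The point can be illustrated with a minimal sketch: the same trivial "captioning" routine, trained on two different (entirely hypothetical) corpora, describes the same ambiguous input in opposite ways. This is not the MIT team's actual model – just a toy word-frequency example showing how the output tracks the training data, not the algorithm.

```python
from collections import Counter

def train(captions):
    """Learn word frequencies from a list of caption strings."""
    counts = Counter()
    for caption in captions:
        counts.update(caption.lower().split())
    return counts

def describe(model, cues):
    """Return the cue word the model saw most often in training."""
    candidates = {w: model[w] for w in cues if w in model}
    return max(candidates, key=candidates.get) if candidates else "unknown"

# Same algorithm, two different (made-up) training corpora.
normal_model = train(["birds on a branch", "a cat in the sun", "people in a park"])
dark_model = train(["a man shot dead", "a man shot in the street", "blood on the street"])

# An ambiguous stimulus offers both readings, like an inkblot.
cues = ["birds", "shot"]
print(describe(normal_model, cues))  # birds
print(describe(dark_model, cues))    # shot
```

The code never changes between the two runs; only the data does, yet the "perception" flips from benign to grim – exactly the asymmetry the Norman experiment was designed to expose.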

It is noted that last May an AI program used by a US court for risk assessment was reported to be biased against black defendants, flagging them as twice as likely as whites to re-offend – again because of the data it had been "fed." Likewise, in 2016 Microsoft launched the chatbot Tay on Twitter, which users rushed to "educate" into expressing sympathy for Hitler, calling for genocide and endorsing white supremacy.


Source: www.naftemporiki.gr

