"Microsoft is battling to control the public relations damage done by its “millennial” chatbot, which turned into a genocide-supporting Nazi less than 24 hours after it was let loose on the internet. The chatbot, named “Tay” (and, as is often the case, gendered female), was designed to have conversations with Twitter users, and learn how to mimic a human by copying their speech patterns. It was supposed to mimic people aged 18–24 but a brush with the dark side of the net, led by emigrants from the notorious 4chan forum, instead taught her to tweet phrases such as “I fucking hate feminists and they should all die and burn in hell” and “HITLER DID NOTHING WRONG”."
This blog provides links to Diversity, Equity, and Inclusion-related issues and topics.
Friday, March 25, 2016
Microsoft scrambles to limit PR damage over abusive AI bot Tay; Guardian, 3/24/16
Alex Hern, Guardian; Microsoft scrambles to limit PR damage over abusive AI bot Tay: