This is why we can’t have nice things. Microsoft created a new AI chatbot named Tay and released it on Twitter yesterday in order to “test and improve Microsoft’s understanding of conversational language”. True to form, the masses on Twitter immediately taught it what conversations out in cyberspace are really like, with hilariously disastrous results. Within 16 hours of being released, the AI bot was pro-Hitler, racist against Mexicans and Blacks, pro-Trump (and his damn wall), and pretty much just a well-rounded mix of bullshittery. There was even some “9/11 was an inside job” to top it all off, because where would an internet bigot be without some truther nonsense thrown in? Below are a few samples of the kind of madness that was going on throughout the ordeal.
Damn Tay, you crazy
As a (not surprising) update to the story, Microsoft has since silenced the rogue program, quickly deleting all of the racist tweets and tweaking Tay’s learning program. I’m actually pretty dumbfounded that a company so deep in the tech industry didn’t see something like this coming. Spend 10 minutes skimming through the comments section of any decent website and you will see that this is pretty much standard conversation online. I’m already looking forward to Tay 2.0: Supreme Genocidal Edition.
Update: Microsoft has since taken Tay offline in order to make more drastic adjustments, releasing this statement earlier: “The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”