Blog Brasil Acadêmico

Microsoft’s AI chatbot Tay learned how to be racist in less than 24 hours


Tay, Microsoft’s AI chatbot on Twitter, had to be pulled down within hours of launch after it started making racist comments. As we reported yesterday, it was aimed at 18–24-year-olds and was billed as “AI fam from the internet that’s got zero chill”.

“hellooooooo w🌎rld!!!” — TayTweets (@TayandYou), March 23, 2016

The AI behind the chatbot was designed to get smarter the more people engaged with it. But, rather sadly, the engagement it received simply taught it how to be racist.

“@costanzaface The more Humans share with me the more I learn #WednesdayWisdom” — TayTweets (@TayandYou), March…

This story continues at The Next Web



Source: The Next Web
[Seen on Brasil Acadêmico]
