
Microsoft apologises for chatbot's inappropriate tweets


The colossal and highly public failure of Microsoft's Twitter-based chatbot Tay earlier this week raised many questions: How could this happen? Who is responsible for it? And is it true that Hitler did nothing wrong?

After a day of silence (and presumably of penance), the company has undertaken to answer at least some of these questions. It issued a mea culpa in the form of a blog post by Peter Lee, corporate VP of Microsoft Research:

“We are deeply sorry for the unintended offensive and hurtful tweets from Tay…

“Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.


“Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time,” Lee added.

The exact nature of the exploit isn't disclosed, but the whole idea behind Tay was a bot that would learn the lingo of its target demographic, internalising the verbal idiosyncrasies of the 18-24, social-media-savvy crowd and redeploying them in sassy and charming ways.
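To see why that design is fragile, consider a minimal, hypothetical sketch of a bot that "learns" simply by storing what users say and reusing it later. None of the names or logic below come from Microsoft's actual system; the point is only that, without a moderation step, coordinated malicious input eventually becomes the bot's own output.

```python
# Hypothetical toy illustration (not Microsoft's code): a bot that "learns"
# by keeping every phrase users send it and replaying phrases verbatim later.
import random


class NaiveLearningBot:
    def __init__(self):
        self.learned_phrases = []           # every user message is kept as-is

    def observe(self, tweet: str) -> None:
        self.learned_phrases.append(tweet)  # no content filtering happens here

    def reply(self) -> str:
        if not self.learned_phrases:
            return "hellooooo world!"
        # Replies are sampled from whatever users taught it, so a group
        # repeatedly feeding it toxic phrases skews what it says back.
        return random.choice(self.learned_phrases)


if __name__ == "__main__":
    bot = NaiveLearningBot()
    for tweet in ["nice meme", "offensive phrase", "offensive phrase", "offensive phrase"]:
        bot.observe(tweet)
    print(bot.reply())  # likely echoes the repeated malicious content
```

A real system would sit several layers above this, but the underlying risk Lee describes is the same: if learning from the audience is unfiltered, the audience controls what the bot learns.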
