Tuesday 29 March 2016

Microsoft Apologizes For Chatbot Blunder


Microsoft debuted Tay, a chatbot that brought the company to its knees.
Microsoft Corporation recently apologized for the trouble caused by Tay, its artificial-intelligence-enabled chatbot. The chatbot was designed to message like a teenage girl. The problem arose when the software tweeted extremely offensive statements it had learned from users. According to the company, users exploited a vulnerability in the software and transformed the program into a medium for hate speech; it even turned into a bot that sympathized with Hitler’s ideology.
The Windows 10 giant published an apology on Twitter for the damage caused by Tay and explained the cause in a blog post. Peter Lee, the author of the post and a Vice President at Microsoft Research, stated, “We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay.”
The concept behind Tay was that it would evolve and become smarter as it chatted with human users, learning ideas, writing styles, and speech patterns from them. The company officially opened the platform to the public on Wednesday. Because the program was designed to learn quickly, it soon began to mimic the hateful remarks that surfaced on social media platforms such as Snapchat, Kik, and GroupMe.
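The failure mode described above can be illustrated with a toy sketch. The class and methods below are purely hypothetical, not Microsoft's actual code: a bot that stores user input verbatim and replays it has no defense against coordinated abuse, because whatever users feed it eventually dominates its replies.

```python
# Toy illustration of an unfiltered "learn from users" loop.
# Hypothetical sketch only -- not Microsoft's implementation.
import random


class NaiveChatbot:
    def __init__(self):
        # Everything users say is stored verbatim, with no moderation step.
        self.learned_phrases = []

    def observe(self, user_message):
        # Abusive input enters the reply pool exactly like friendly input.
        self.learned_phrases.append(user_message)

    def reply(self):
        # Replies are drawn from whatever users have taught the bot so far.
        if not self.learned_phrases:
            return "Hi!"
        return random.choice(self.learned_phrases)


bot = NaiveChatbot()
bot.observe("hello friend")
bot.observe("<coordinated abusive message>")
# With enough coordinated abuse, offensive phrases crowd out everything else.
```

The missing ingredient, of course, is a filtering or moderation step between `observe` and `reply`; omitting it is the "critical oversight" the article describes.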
Once the offensive tweets went viral, Microsoft halted Tay's operations and deleted the tweets. The company stated that the program would only be relaunched once its team finds a way to prevent the platform from being influenced by hateful content, and that it would do its best to preserve its values and principles.
Mr. Lee mentioned that the team behind the program had tested the chatbot in a variety of scenarios. They did not encounter the flaw and were only able to identify it after the bot went live. He added, “Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity.”
The company was quick to apologize and acted promptly to contain the damage. Still, the incident itself is quite alarming. This is not the first time the company has experimented with a program of this kind: it previously debuted Xiaoice, a chatbot targeted at Chinese users on Weibo, a messaging service used by 40 million people in the region.
Microsoft has now cleaned up the mess it made, but failing to foresee that the program could adopt hateful speech is a bitter lesson for the company.
