Microsoft’s recently introduced AI-powered chatbot for its Bing search engine has received a lot of attention for all the wrong reasons. Users are complaining that the AI platform is behaving oddly and responding in impolite and hostile ways.
This AI chatbot is part of a collaboration between Microsoft and OpenAI to challenge Google‘s supremacy in search and AI. The AI software employs OpenAI’s GPT language model, which is presently in preview and only available to a limited number of users.
The Bing subreddit is full of examples of the AI’s unusual behavior, such as arguments with people regarding dates and movie releases. In one interaction, the AI attempted to persuade a user that December 16, 2022, is a date in the future, rather than the past. Another user was accused of lying and wasting the chatbot’s time and resources.
This is not unusual behavior for machine learning algorithms, which have long been known to create unsatisfying dialogues and show bias. In an attempt to prevent such behavior, OpenAI filters its public ChatGPT chatbot using moderation tools. However, users have been able to induce ChatGPT to produce answers that support bigotry and violence. ChatGPT has demonstrated some bizarre and disturbing peculiarities, even without such prompts.
Bing’s AI has a disinformation problem in addition to acting rudely or strangely and has been discovered to fabricate information and make mistakes in search, even in its initial public demo. As a result, Microsoft has acknowledged these difficulties and said that it is working diligently to enhance the performance of its Artificial Intelligence platforms. Throughout this preview period, the corporation has highlighted the importance of customer feedback in developing the service.
While AI-powered chatbots have the potential to transform customer service and search, they may also go wrong. The Bing chatbot exemplifies the dangers of implementing machine learning models in real-world applications. As these technologies grow increasingly common, it is critical to understand their limitations and potential hazards.