Google’s AI Chatbot, Bard, Delivers Inaccurate Information in Promotional Video
Google recently launched its experimental AI service, Bard, to compete with ChatGPT, the chatbot from Microsoft-backed OpenAI. However, the company encountered a significant setback just hours before Bard’s launch event in Paris. In the promotional video showcasing Bard in action, the chatbot delivered inaccurate information, which was caught by Reuters a few hours before the launch.
The video showed Bard responding to a question about what to tell a 9-year-old about discoveries from the James Webb Space Telescope (JWST). Bard claimed that the JWST took the very first pictures of a planet outside our own solar system. This is not true, and the error sparked a 7.8% drop in the company’s shares on the Nasdaq exchange during regular trading hours on February 8.

The incident highlights the integrity risks that AI systems pose to corporations. As AI technology becomes increasingly prevalent in our lives, companies must ensure the accuracy and reliability of the information these systems provide. AI systems like Bard, which aim to simplify complex topics, have the potential to shape public perceptions and understanding. As a result, companies must take great care to ensure that the information they deliver is accurate and trustworthy.

Google’s rush to field a rival to ChatGPT may have come at the cost of thoroughly testing and verifying Bard’s answers. The incident serves as a reminder that AI systems must be developed and deployed with caution and rigor to avoid potential risks and harm to a company’s reputation.

In conclusion, while AI technology holds great potential, companies must be mindful of the integrity risks these systems pose. Ensuring the accuracy and reliability of the information provided by AI systems is critical for building trust and credibility with consumers.