Amazon's cloud division recently unveiled Q, an AI chatbot designed to assist employees with various tasks. However, according to internal communications leaked to Platformer, a tech newsletter, employees using the chatbot have raised concerns about its capabilities.
The primary issue is that Q could reveal confidential information, such as the locations of AWS data centers or details about unreleased features. The chatbot has also been experiencing what employees describe as "severe hallucinations," a phenomenon in which an AI confidently presents incorrect information as fact.
This has led to situations where Q could offer erroneous legal advice, raising alarms among Amazon employees. One staff member humorously noted in a company Slack channel that such advice could "potentially induce cardiac incidents in Legal." In response to these concerns, Amazon issued a statement to Business Insider denying any security issues related to Q and refuting claims of leaked confidential information.
The company acknowledged the feedback received and indicated its commitment to further refine Q as it moves from a preview product to general availability.
The Challenges of Generative AI Chatbots
Amazon Q's troubles highlight the broader challenges associated with generative AI chatbots.
Similar issues surfaced with Microsoft's Bing chatbot, internally codenamed Sydney, shortly after its release. The irony in Q's case is particularly striking, as the bot was designed to be a reliable and secure tool for businesses. Q's intended functions are diverse, ranging from helping workers generate emails and summarize reports to troubleshooting, research, and coding.
Amazon emphasized in a blog post that Q is meant to provide helpful answers based only on content each user is authorized to see, maintaining strict access controls. With over 40 built-in connectors to popular enterprise systems, Amazon Q aims to give employees streamlined access to the knowledge stored in a company's content repositories.
It also includes features like providing references and citations, allowing users to trace the sources of the responses generated by the chatbot.