Google Bard is here, but with plenty of disclaimers
- After years of cautious development, Google is granting select users in the US and UK access to Bard, its AI chatbot.
- Google also shared that Bard may display inaccurate or offensive information that doesn’t represent the company’s views.
In February, Google introduced Bard, its ChatGPT rival, to the world. The launch underwhelmed: a factual error in a promotional ad instantly put the internet giant’s chatbot on the back foot compared to OpenAI’s ChatGPT. Even Google employees called the launch of the AI chatbot “rushed” and “botched” in posts across the company’s internal message boards last month.
After all, Google was once a frontrunner in all things AI – until OpenAI leapfrogged it in recent times. In fact, the Transformer architecture at the heart of OpenAI’s chatbot was developed by researchers at Google. Google’s own conversational technology, LaMDA (Language Model for Dialogue Applications), has been in internal testing for years.
At that point, the technology underlying Bard had not been released beyond a small group of early testers. But when ChatGPT finally launched late last year, Google CEO Sundar Pichai declared a “code red,” making AI the company’s central priority. That spurred teams inside the company, including researchers who specialize in AI safety, to collaborate on speeding up the approval of a wave of new products.
Even Google’s LaMDA team was asked to prioritize work on a response to ChatGPT, according to an internal memo viewed by CNBC last month. “In the short term, it takes precedence over other projects,” the email read, asking some employees to stop attending certain unrelated meetings.
Microsoft’s decision to pump billions more into OpenAI over the past year only added to the pressure on Pichai’s teams. Many doubted how quickly Google could release Bard, particularly given the breakneck pace at which OpenAI and Microsoft were shipping their tools.
Google Bard is here, limited to just the US and UK for now
More than a month after unveiling Bard for the first time, Google this week began opening its AI platform to a limited number of users in selected countries. The company now officially lets people in the US and UK sign up for its generative AI product, with plans to expand availability to more countries and languages over time.
The post, “Try Bard and share your feedback,” was authored by Sissie Hsiao, vice president of product, and Eli Collins, vice president of research. “We’ve learned a lot so far by testing Bard, and the next critical step in improving it is to get feedback from more people,” the blog post reads.
Google said Bard is powered by a research large language model (LLM), which it described as a kind of prediction engine. “When given a prompt, it generates a response by selecting, one word at a time, from words that are likely to come next. Picking the most probable choice every time wouldn’t lead to very creative responses, so there’s some flexibility factored in,” the post noted.
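Bard’s actual decoding method isn’t public, but the tradeoff the post describes – always picking the most probable word versus allowing some “flexibility” – can be sketched with simple temperature sampling over a toy next-word distribution. The words and probabilities below are made up for illustration:

```python
import math
import random

def sample_next_word(probs, temperature=1.0):
    """Pick the next word from a {word: probability} distribution.

    temperature=0 is greedy decoding (always the most probable word);
    higher temperatures flatten the distribution, adding variety.
    """
    if temperature == 0:
        return max(probs, key=probs.get)
    # Rescale log-probabilities by temperature, then sample proportionally.
    scaled = {w: math.exp(math.log(p) / temperature) for w, p in probs.items()}
    r = random.random() * sum(scaled.values())
    cumulative = 0.0
    for word, weight in scaled.items():
        cumulative += weight
        if r <= cumulative:
            return word
    return word  # fallback for floating-point edge cases

# Hypothetical distribution over next words after the prompt "The sky is"
probs = {"blue": 0.6, "clear": 0.25, "falling": 0.15}
print(sample_next_word(probs, temperature=0))    # greedy: always "blue"
print(sample_next_word(probs, temperature=1.0))  # sampled: any of the three
```

At temperature 0 the output is deterministic and repetitive; at higher temperatures less likely words occasionally get picked, which is the “creativity” the post alludes to.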
Google reckons that the more people use them, the better LLMs get at predicting which responses might be helpful. In a memo to employees on Tuesday, Pichai said 80,000 Google employees had contributed to testing Bard, responding to his all-hands-on-deck call to action last month, which included a plea for workers to rewrite the chatbot’s wrong answers.
Pichai also said the company is trying to test responsibly and invited 10,000 trusted testers “from a variety of backgrounds and perspectives.”
Plenty of caveats
Google’s blog post was also full of disclaimers, mainly highlighting that Bard may spout misinformation. “While LLMs are an exciting technology, they’re not without their faults. For instance, because they learn from a wide range of information that reflects real-world biases and stereotypes, those sometimes show up in their outputs,” the post reads.
Google noted that Bard might provide inaccurate, misleading, or false information while presenting it confidently. The internet giant also said that Bard is guided by the company’s AI Principles, with a continued focus on quality and safety.
Even in Pichai’s memo to his employees, he reminded them, “As more people start to use Bard and test its capabilities, they’ll surprise us. Things will go wrong. But the user feedback is critical to improving the product and the underlying technology.” The internal memo signals how the company has been trying to keep pace with the quickly evolving advancements in generative AI technology over the last several months.
For now, users with access can conduct back-and-forth conversations with Bard, similar to Microsoft’s new Bing service. Google will initially limit the length of conversations for safety reasons, but those limits will be raised over time. We can expect to see the breadth of Google’s AI progress, including Bard, at the company’s annual developer conference in May.