Google and Anthropic, joining forces to reshape AI. (Source – Shutterstock)

Google boosts AI commitment with up to US$2 billion investment in Anthropic

  • Google commits up to US$2 billion to AI company Anthropic.
  • Google and Amazon’s investments highlight a rising focus on AI development, safety, and governance.
  • Big cloud players are investing heavily in Anthropic.

Alphabet’s Google and AI firm Anthropic have maintained a mutually beneficial relationship, highlighted by Google’s investment of approximately US$300 million in the startup in late 2022. Anthropic has also named Google Cloud its preferred cloud provider, and the two companies have collaborated on developing AI computing systems ever since.

Since that initial collaboration, Google and Anthropic have partnered on various projects. Recently, a representative from Anthropic revealed that Google has committed to a further investment of up to US$2 billion in the startup: an upfront investment of US$500 million, with an additional US$1.5 billion to follow over time.

Google’s stakes in Anthropic: a strategic move in AI investments

Google’s latest financial commitment builds on its earlier stake and signals an intensification of its strategy to rival Microsoft, a primary backer of OpenAI, the creator of ChatGPT. The move is part of a broader race among major tech corporations to incorporate AI into their offerings.

Last month, Amazon.com also announced plans to invest up to US$4 billion in Anthropic, a move seen as an effort to keep pace with rival AI-focused cloud services. The investment will give Amazon’s employees and cloud customers early access to Anthropic’s technology, offering them new opportunities to enhance their business operations.

Reuters recently reported that, in its quarterly filing with the US Securities and Exchange Commission, Amazon disclosed an investment in a US$1.25 billion Anthropic note that can convert into equity. Amazon also holds the option to invest up to US$2.75 billion in a second note, an option that expires in the first quarter of 2024.

The increasing investment in AI startups like Anthropic, co-founded by former OpenAI executives Dario and Daniela Amodei, illustrates the strategic moves by cloud companies to align themselves with influential AI developers. Anthropic, which aims to rival OpenAI and emerge as a leader in the tech industry, is seeking substantial financial backing and resources to do so.

AI safety and governance: the role of Anthropic, Google, and co.

Moreover, Google and Anthropic, along with Microsoft and OpenAI, are part of a recent collaboration that appointed Chris Meserole as the inaugural executive director of the Frontier Model Forum. The four companies have also launched an AI Safety Fund of more than US$10 million to foster AI safety research.

Google to invest another US$2 billion in Anthropic. (Source – X)

The Frontier Model Forum, an organization dedicated to promoting the safe and responsible development of cutting-edge AI models, also shares its expertise on red teaming through its first technical working group update. According to a recent blog post published by Google, the Forum aims to widen the discussion of AI governance through this initiative.

Over the past year, the rapid advancement of AI technology has highlighted the need for more academic research into AI safety. To address this gap, the Forum and its philanthropic partners are launching a new AI Safety Fund, which will support independent researchers around the world affiliated with academic institutions, research institutions, and startups.

The AI Safety Fund’s initial funding of over US$10 million comes from Anthropic, Google, Microsoft, and OpenAI, supplemented by generous contributions from philanthropic partners like the Patrick J. McGovern Foundation, the David and Lucile Packard Foundation, Eric Schmidt, and Jaan Tallinn. Additional funding from other partners is anticipated.

Earlier, Forum members agreed to voluntary AI commitments at the White House, pledging to support third-party identification and reporting of AI system vulnerabilities. The AI Safety Fund is seen as a critical step in honoring this pledge, funding external assessments to deepen the understanding of frontier AI systems. This initiative is expected to enrich the global discourse on AI safety and broaden the overall knowledge base.

Empowering global AI safety research

The Fund’s primary goal is to back the development of new evaluation methods and red teaming techniques for AI models, particularly focusing on the potentially hazardous aspects of frontier systems. Enhanced funding in this domain is believed to elevate safety and security standards, offering vital insights into how the industry, governments, and civil society should respond to AI-related challenges.

The Fund, which will be administered by the Meridian Institute, plans to issue a call for research proposals in the coming months. An advisory committee of independent external experts, AI industry professionals, and experienced grantmakers will support its work.

In recent months, the Forum has worked on establishing a unified set of definitions and concepts related to AI safety and governance, aiming for a common foundation for discussions in this field among researchers, governments, and industry peers.

As part of this initiative, the Forum is disseminating best practices on red teaming within the industry. Its first step was to agree on a shared definition of “red teaming” in the AI context and compile a collection of case studies in a new working group update. Future efforts will continue to expand upon these foundational activities in red teaming.

The Forum is also developing a new process for responsible disclosure, allowing AI labs to share information about discovered vulnerabilities or potentially dangerous aspects in AI models and their mitigations. Some member companies of the Frontier Model Forum have already identified specific capabilities, trends, and mitigations within AI that impact national security.

Their combined research is expected to serve as a model for refining and implementing responsible disclosure processes in frontier AI labs.