Google has the perfect plan for generative AI in cybersecurity
For enterprises, technologies like cloud computing and artificial intelligence (AI) not only improve productivity and efficiency but also help them stay ahead of the competition. Over the years, both the cloud and AI have continued to play a big role in the technology investments businesses make.
In fact, Gartner projects that worldwide IT spending will reach US$4.6 trillion in 2023, a 5.5% increase from 2022 despite continued global economic turbulence. The software segment in particular is expected to see double-digit growth in 2023 as enterprises prioritize spending to capture competitive advantages.
Another recent Gartner poll showed that the hype generated by ChatGPT has led to increased investment in generative AI. Businesses want to explore how the technology can transform their organizations, despite regulators' concerns about generative AI.
According to Sunil Potti, VP/GM for Google Cloud Security, generative AI delivers mainstream value through consumer apps, unlike the cloud, which is primarily an enterprise technology. When business leaders experience that value as consumers, they become more eager to leverage it for their organizations. Because the cloud had few consumer use cases, he believes the growing consumer adoption of generative AI will push it into the enterprise mainstream much faster.
“Each of our daily lives is now impacted by Generative AI. The same tech will be more accelerated on the enterprise side. If they can taste the value directly, they will go to it faster. While we take a measured approach in generative AI, the more enterprise customers get a taste of the experience on a personal side, they will be more convicted to use it on the enterprise side as well,” said Potti in a media briefing.
Enhancing cybersecurity with generative AI
Looking at enterprise use cases, cybersecurity is where Potti expects generative AI adoption to deliver the greatest value. Put simply, generative AI can enable businesses to take a more cost-effective approach to security while still maintaining sufficient protection.
For Potti, enterprises are often challenged by the complexity and cost of managing many cybersecurity solutions, while cybercriminals are already using open-source large language models (LLMs) to target them. Potti hopes to give enterprises an edge here.
“Instead of creating another interface or security product, we have taken the approach to build a platform powered by LLM. This enables anyone to build a security app on top of it. This model now comes with Sec-PaLM built in around the Google Cloud’s Security AI Workbench. It allows customers to directly leverage these capabilities for their security use cases,” commented Potti.
Google Cloud Security AI Workbench is an extensible platform powered by a specialized security LLM, Sec-PaLM, which leverages Google’s visibility into the threat landscape and Mandiant’s frontline intelligence on vulnerabilities, malware, threat indicators, and more.
Google Cloud Security AI Workbench will power new offerings to uniquely address three top security challenges: threat overload, toilsome tools, and the talent gap. It will also feature partner plug-in integrations to bring threat intelligence, workflow, and other critical security functionality to customers, with Accenture being the first partner to utilize Security AI Workbench. Potti highlighted that a partner could choose to interact as a customer or a contributor.
“The platform will also let customers make their private data available to the platform at inference time, ensuring we honor all our data privacy commitments to customers. Because Security AI Workbench is built on Google Cloud’s Vertex AI infrastructure, customers control their data with enterprise-grade capabilities such as data isolation, data protection, sovereignty, and compliance support,” explained Potti in a blog post.
Vertex AI is Google Cloud’s machine learning platform for training and deploying ML models and AI applications. For enterprises, generative AI support in Vertex AI offers the simplest way for data science teams to take advantage of foundation models like PaLM, in a way that provides them with the most choice and control, including the ability to:
- Choose the use case they want to solve. Developers can now easily access PaLM API on Vertex AI to immediately address use cases such as content generation, chat, summarization, classification, and more.
- Choose from Google’s latest foundation models. Options will include models invented by Google Research and DeepMind, and support for a variety of data formats, including text, image, video, code, and audio.
- Choose from a variety of models. Over time, Vertex AI will support open-source and third-party models. With the widest variety of model types and sizes available in one place, Vertex AI gives customers the flexibility to use the best resource for their business needs.
- Choose how to tune, customize, and optimize prompts. Use business data to increase the relevance of foundation model output and maintain control over costs, while ensuring data sovereignty and privacy.
- Choose how to engage with models. Whether via notebooks, APIs, or interactive prompts, a variety of tools lets developers, data scientists, and data engineers all contribute to building generative AI apps and customized models.
LLM-powered threats will be more advanced than the techniques cybercriminals use today. In phishing, for example, generative AI lets a bot prey on victims with a more personalized approach, making it harder for consumers to distinguish a real person from a bot.
For Potti, just as Google protects the cloud, it can bring the same level of protection to AI.
“We have only just begun to realize the power of applying generative AI to security, and we look forward to continuing to leverage this expertise for our customers and drive advancements across the security community,” concluded Potti.