Barracuda embracing the generative AI revolution. (Generated with AI)

How Barracuda’s cybersecurity strategy embraces the generative AI revolution

  • Generative AI is reshaping Barracuda’s cybersecurity, enhancing user experiences and strengthening defenses.
  • Barracuda’s CTO, Fleming Shi, discusses the firm’s responsible AI approach, focusing on training, threat analysis, and ethical use.

The advanced AI landscape, epitomized by models such as ChatGPT, has profoundly shaped Barracuda’s cybersecurity strategy, echoing a trend evident across many firms. Generative AI—a breakthrough within the expansive AI domain that’s been in the making for decades—ushers in innovative capabilities that bolster an organization’s cybersecurity stance.

The advent of generative AI represents a watershed moment, with pivotal large foundation models pioneered by OpenAI. Such models have empowered companies, including Barracuda, to elevate user experiences dramatically. It’s also important to distinguish generative AI from the conventional AI/ML techniques that have long been in circulation.

Tech Wire Asia recently spoke with Barracuda’s CTO, Fleming Shi, to gain insights into AI’s pivotal role on offensive and defensive fronts and discern how Barracuda navigates this novel paradigm. Shi emphasized generative AI’s distinct capacity to generate content, be it text, voice, or video, likening it to an erudite college scholar: vastly informed but needing specific guidance for intricate tasks.

According to Shi, one salient use-case of generative AI that Barracuda perceives as vital revolves around security awareness training. While traditional training regimens generally adhere to a periodic schedule, involving simulations or sham attacks, Barracuda’s ‘Link Protection’ integrates real-time, pragmatic training.

Confronted with potential phishing attempts, Shi explained that the system nullifies malevolent links and immerses users in tailored dialogues, leveraging generative AI for timely education—a methodology particularly apt for the younger demographic, who resonate with dynamic, age-aligned content.
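To make that workflow concrete, here is a minimal sketch, not Barracuda’s actual implementation, of turning a neutralized phishing click into a just-in-time lesson. The `generate_text` function is a hypothetical stand-in for whatever LLM call a real system would make, and the event fields are invented for illustration.

```python
# Sketch: a blocked phishing click becomes a tailored coaching moment.
from dataclasses import dataclass


@dataclass
class ClickEvent:
    user: str
    url: str
    lure_type: str  # e.g. "fake invoice", "password reset"


def generate_text(prompt: str) -> str:
    # Placeholder for a real LLM call; returns canned text so the sketch runs.
    return ("That link imitated a password-reset page. Real resets come from "
            "your IT helpdesk's address; check the sender before clicking.")


def coach_user(event: ClickEvent) -> str:
    """Turn a neutralized phishing click into a short, user-specific lesson."""
    prompt = (
        f"User {event.user} just clicked a neutralized phishing link "
        f"({event.url}) that used a '{event.lure_type}' lure. Write two "
        "sentences explaining the red flags in plain, friendly language."
    )
    return generate_text(prompt)


if __name__ == "__main__":
    evt = ClickEvent(user="alice",
                     url="hxxp://reset-yourpass.example",
                     lure_type="password reset")
    print(coach_user(evt))
```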

“Barracuda’s XDR SOC service embodies another application. The voluminous data these frameworks deal with makes swift and precise reactions imperative. Generative AI facilitates the instant crafting of queries via natural language—a chore traditionally demanding a data engineer and ample time. In the frenetic world of cybersecurity, executing such queries within moments is priceless, amplifying generative AI’s game-changing influence.”
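A rough sketch of that natural-language-to-query idea follows. The dataset, field names, and the stubbed translator are assumptions for illustration, not Barracuda’s XDR internals; in practice an LLM would produce the structured query that `nl_to_query` returns as canned output here.

```python
# Sketch: translate an analyst's question into a structured query, then run it.
def nl_to_query(question: str) -> dict:
    # A real system would prompt an LLM; a canned result keeps the sketch runnable.
    return {
        "dataset": "email_events",
        "filter": {"verdict": "phishing", "window": "last_24h"},
        "group_by": "recipient_domain",
    }


def run_query(query: dict, events: list[dict]) -> dict:
    """Tiny in-memory executor standing in for the real telemetry store."""
    hits = [e for e in events if e["verdict"] == query["filter"]["verdict"]]
    counts: dict[str, int] = {}
    for e in hits:
        key = e[query["group_by"]]
        counts[key] = counts.get(key, 0) + 1
    return counts


if __name__ == "__main__":
    events = [
        {"verdict": "phishing", "recipient_domain": "finance.example"},
        {"verdict": "clean", "recipient_domain": "hr.example"},
        {"verdict": "phishing", "recipient_domain": "finance.example"},
    ]
    q = nl_to_query("Which teams received the most phishing in the last day?")
    print(run_query(q, events))  # {'finance.example': 2}
```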

TWA: In the realm of coding, how do you reconcile the productivity prospects of generative AI with the security apprehensions it brings into play, especially considering ChatGPT’s potential to help even the technically unsavvy misuse it?

Wielding the right tools, encompassing conventional security apparatus, is paramount. I believe the software supply chain has inherent links to large language models and generative AI, which are set to become cornerstone elements. Envisioning the future, whether for software on phones, online, or behind SaaS applications, generative AI models will play a dominant role.

Having an auxiliary akin to a ‘copilot’ alongside your IDE during development becomes vital. Drawing from generative AI for software code is feasible but demands trust, verification, and affiliated certification. This ensures the produced code aligns with your security criteria.
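As a toy example of that trust-and-verify step, the sketch below gates AI-generated code on a simple static policy check using Python’s standard `ast` module. The banned-call list is an illustrative policy only; a real pipeline would layer on SAST, dependency, and license checks.

```python
# Sketch: reject obviously dangerous constructs before generated code enters review.
import ast

BANNED_CALLS = {"eval", "exec"}  # example policy, not an exhaustive list


def violates_policy(source: str) -> list[str]:
    """Return names of banned calls found in the generated snippet."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in BANNED_CALLS:
                findings.append(node.func.id)
    return findings


if __name__ == "__main__":
    generated = "result = eval(user_input)"  # pretend this came from a copilot
    problems = violates_policy(generated)
    print("rejected:" if problems else "accepted", problems)
```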

Naturally, multitudinous safeguards are imperative. I frequently draw parallels with the wheel’s evolution, from its rudimentary invention, to its amalgamation into bicycles, and, eventually, automobiles. These vehicles, boosting our productivity manifold, were fortified with brakes, seatbelts, and sensors—measures ensuring their controlled operation.

Similarly, while transformational, generative AI mandates meticulous regulation, policies, and controlled environments, ensuring its precise deployment. Collaborative endeavors with governmental bodies will be our initial steps. We need to guard against pitfalls in software development, much like using open-source libraries accessible to all, including malefactors. Generative AI, available to adversaries, demands vigilance to prevent exploitation.

Coding is easier now with generative AI, but could be misused for malicious purposes. (Source – Shutterstock)

Discussing ROI becomes intricate, necessitating comprehensive data for accurate measurement. Ascertaining effectiveness demands telemetry and a reasonable balance with privacy considerations, enabling the derivation of use-case-specific statistics in distinct scenarios.

Anticipating AI-driven cyber threats

TWA: With malefactors increasingly employing generative AI, how do you envision the trajectory of cyber threats, and how might AI bolster real-time threat analysis and incident management?

While humans can craft potent phishing campaigns, doing so is time-intensive. Generative AI slashes that time to mere seconds. Our concern is twofold: the surge in volume and the enhanced quality of threats. Addressing this necessitates a counterbalancing AI strategy. As I alluded to, natural language-based queries can expedite analysts’ efforts.

An underappreciated reality is hackers’ propensity for laziness. They might not invest substantial time refining tools, while defenders, backed by dedicated analysts, can. Leveraging tools like XDR for exhaustive data collection from every potential threat vector and fostering analysts’ prowess with natural language facilitates swift correlation, dramatically shortening response times.
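A toy illustration of that cross-vector correlation: group alerts from different sensors by a shared indicator so that a single query surfaces the whole incident. The field names and the escalation rule are invented for the example.

```python
# Sketch: cluster alerts from different vectors by a shared indicator (source IP).
from collections import defaultdict

alerts = [
    {"vector": "email",    "indicator": "203.0.113.7",  "detail": "phishing link"},
    {"vector": "endpoint", "indicator": "203.0.113.7",  "detail": "macro execution"},
    {"vector": "network",  "indicator": "198.51.100.2", "detail": "port scan"},
]


def correlate(alerts: list[dict]) -> dict:
    incidents = defaultdict(list)
    for alert in alerts:
        incidents[alert["indicator"]].append(alert)
    # Toy policy: only clusters spanning multiple alerts are escalated.
    return {ip: hits for ip, hits in incidents.items() if len(hits) > 1}


print(correlate(alerts))  # the 203.0.113.7 email+endpoint pair forms one incident
```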

Additionally, there’s growing chatter about ChatGPT’s nefarious counterpart, WormGPT. This entity’s risks are manifold, encompassing system infiltrations, data theft, and widespread disruptions. Its automated hacking prowess implies attacks of unparalleled scale and velocity, posing formidable challenges to cybersecurity professionals.

Successful WormGPT breaches underscore the urgency for robust countermeasures. (Source – Shutterstock)

WormGPT’s learning and adaptation capacities further complicate matters. Conventional defenses, reliant on patterns and signatures, find it arduous to keep pace, exposing even fortified systems. The ramifications of successful WormGPT breaches can be dire, underscoring the urgency for robust countermeasures.
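The sketch below illustrates why signature matching alone struggles against reworded, machine-generated lures, while even a crude behavioral score can still flag them. The phrases, features, and weights are purely illustrative, not any product’s detection logic.

```python
# Sketch: a reworded lure slips past an exact signature but trips a behavior score.
SIGNATURES = {"wire transfer urgent"}  # known-bad phrases


def signature_hit(message: str) -> bool:
    return any(s in message.lower() for s in SIGNATURES)


def behavior_score(message: str, sender_is_new: bool) -> float:
    """Score message-level behaviors instead of exact content."""
    score = 0.0
    if sender_is_new:
        score += 0.5
    if "payment" in message.lower() or "invoice" in message.lower():
        score += 0.3
    if "today" in message.lower():  # urgency cue
        score += 0.2
    return score


msg = "Please settle this invoice today."       # reworded, no known signature
print(signature_hit(msg))                       # False: the signature misses it
print(behavior_score(msg, sender_is_new=True))  # 1.0: the behavioral score flags it
```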

In this dynamic battleground, security must be integral from inception. Adopting a proactive approach, akin to immunization, helps entities stay ahead. By leveraging attackers’ tactics for user education and threat identification, defenders can turn the tables, ensuring they maintain the upper hand.

TWA: In the evolving threat landscape with generative AI, how is Barracuda upskilling its security personnel and planning training for the responsible use of AI tools?

First of all, at Barracuda, we’re facing a challenge as we adapt to generative AI. Our initial step was to launch a generative AI policy delineating the permissible uses of this technology. We’ve also decided to block all user access to ChatGPT, a third-party tool from OpenAI, due to concerns over information security, especially regarding the potential sharing of sensitive ideas by our salespeople or engineers. This is crucial given our customers’ reliance on us to safeguard their information.

In response, we’ve created our own system, what we refer to as a ‘closed loop,’ ensuring that interactions remain exclusively within Barracuda’s domain and never leave our network.
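One hypothetical way to realize such a closed loop is an internal gateway that forwards prompts only to a model endpoint inside the corporate network and refuses everything else. The hostnames below are placeholders, not Barracuda’s actual setup.

```python
# Sketch: an egress-restricted prompt gateway for a "closed-loop" LLM deployment.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"llm.internal.example"}  # internal model endpoint only


def route_prompt(prompt: str, endpoint: str) -> str:
    host = urlparse(endpoint).hostname
    if host not in ALLOWED_HOSTS:
        raise PermissionError(f"blocked: {host} is outside the closed loop")
    # A real gateway would also log the request and strip sensitive fields.
    return f"forwarded {len(prompt)} chars to {host}"


print(route_prompt("summarize this ticket", "https://llm.internal.example/v1"))
# route_prompt("...", "https://api.openai.com/v1")  -> raises PermissionError
```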

Beyond this, we’ve identified several areas where generative AI could enhance productivity, including sales, customer support, partner engagement, and even HR functions. For instance, consider a scenario in which 10,000 employees each provide feedback during reviews. The task of digesting that information and formulating a strategy is immense, and it’s an area where generative AI could be instrumental.
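One plausible pattern for digesting feedback at that scale is map-reduce-style summarization: summarize comments in batches, then summarize the summaries. The `summarize` stub below stands in for a real LLM call; batch size and phrasing are assumptions for the sketch.

```python
# Sketch: map-reduce summarization of thousands of review comments.
def summarize(texts: list[str]) -> str:
    # Placeholder: a real implementation would prompt an LLM here.
    return f"summary of {len(texts)} items"


def digest_feedback(comments: list[str], batch_size: int = 100) -> str:
    # Map step: summarize each batch independently.
    partials = [
        summarize(comments[i:i + batch_size])
        for i in range(0, len(comments), batch_size)
    ]
    # Reduce step: a summary of the partial summaries.
    return summarize(partials)


comments = [f"feedback #{i}" for i in range(10_000)]
print(digest_feedback(comments))  # "summary of 100 items"
```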

While we embrace generative AI, we also implement strict policies around its use. We’re cautious about developing tools in-house or sourcing them externally. In fact, we’re partnering with Microsoft to execute this transition cost-effectively.

One feature we’re excited about is Link Protection, which operates without additional training. We block approximately 500,000 phishing links that users might accidentally click daily. Imagine using those instances as ‘just-in-time’ training opportunities; that’s up to 500,000 potential educational interactions daily.

However, this requires deliberate design for optimal performance, and we believe we’ve found a solution. We must keep our costs manageable; if they skyrocket, the services become unaffordable for everyone, which isn’t a viable business model.

In the second part of the conversation, Shi discusses Barracuda’s unique approach to generative AI in cybersecurity, as well as its ethical implications.