HPE GreenLake was the highlight of HPE Discover 2023.

HPE is the global edge-to-cloud company that helps organizations accelerate outcomes by unlocking value from all of their data, everywhere. (Source – HPE)

HPE GreenLake introduces AI Cloud for enterprises

  • HPE GreenLake for LLMs is the first in a series of industry and domain-specific AI applications.
  • HPE also unveiled several other updates at HPE Discover 2023.

With over 65,000 customers, the HPE GreenLake platform enables organizations to streamline and execute their hybrid cloud strategy through automation, orchestration and management in a single location. The platform also powers over two million connected devices and manages more than one exabyte of data globally.

Given the amount of data customers are processing and the popularity of artificial intelligence (AI), it only seems logical that HPE would want a piece of the pie. And what better way to enter the AI cloud market than with the introduction of HPE GreenLake for Large Language Models (LLMs) for enterprises?

Unveiled at HPE Discover 2023, the new service enables organizations to privately train, tune and deploy large-scale AI through an on-demand, multi-tenant supercomputing cloud service. The new product is the first in a series of industry and domain-specific AI applications that HPE plans to launch in the future. These applications will include support for climate modeling, healthcare and life sciences, financial services, manufacturing, and transportation.

According to Antonio Neri, President and CEO at HPE, organizations have reached a generational market shift in AI that will be as transformational as the web, mobile, and cloud.

“HPE is making AI, once the domain of well-funded government labs and the global cloud giants, accessible to all by delivering a range of AI applications, starting with large language models, that run on HPE’s proven, sustainable supercomputers. Now, organizations can embrace AI to drive innovation, disrupt markets, and achieve breakthroughs with an on-demand cloud service that trains, tunes, and deploys models, at scale and responsibly,” said Neri.

What makes HPE GreenLake for LLMs different from other general-purpose cloud offerings is that it runs on an AI-native architecture uniquely designed to run a single large-scale AI training and simulation workload at full computing capacity. The offering will support AI and HPC jobs on hundreds or thousands of CPUs or GPUs at once. Put simply, enterprises can speed up the journey from proof of concept to production and solve problems faster, because dedicating full capacity to one workload makes training more effective, reliable and efficient, and produces more accurate models.
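To make the idea of one training job spanning many accelerators more concrete, the sketch below shows a generic multi-GPU training loop using PyTorch’s DistributedDataParallel. It is an illustrative example only, not HPE’s software stack: the tiny linear model and random data stand in for a real model and dataset, and the script assumes it is launched with torchrun so that one process runs per GPU.

```python
# Minimal multi-GPU training sketch with PyTorch DistributedDataParallel.
# Illustrative only -- not HPE's stack; the model and data are placeholders.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, WORLD_SIZE and LOCAL_RANK for every process (one per GPU)
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)   # stand-in for a large model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(100):
        x = torch.randn(32, 1024, device=local_rank)        # stand-in for a data shard
        loss = model(x).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()    # gradients are averaged across every GPU in the job
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()   # launch with: torchrun --nproc_per_node=<gpus_per_node> train.py
```

Scaling this same pattern from a handful of GPUs to hundreds or thousands is where a dedicated, AI-native architecture such as the one HPE describes comes into play.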

HPE GreenLake for Large Language Models

Antonio Neri, President and CEO at HPE unveils HPE GreenLake for Large Language Models at HPE Discover 2023 in Las Vegas. (Source – HPE)

Access to HPE GreenLake for LLMs

According to Nick Gorga, HPE’s General Manager for HPC and AI in Asia Pacific and India, GreenLake for LLMs enables enterprises to leverage AI for innovation, problem-solving, and decision-making in their respective domains. Enterprises can privately train, tune, and deploy large-scale AI models, meaning their sensitive data and proprietary knowledge can be securely processed and used without relying on external cloud services.

Additionally, HPE GreenLake for LLMs includes access to Luminous, a pre-trained large language model from Aleph Alpha, offered in multiple languages. Customers can leverage their own data, train, and fine-tune a customized model, and gain real-time insights based on their proprietary knowledge. This customization allows enterprises to develop tailored AI applications that integrate seamlessly into their workflows.
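As a rough illustration of what it means to fine-tune a customized model on proprietary data, here is a minimal sketch using the open-source Hugging Face Transformers and Datasets libraries. This is not HPE’s or Aleph Alpha’s actual tooling; the gpt2 model name and the internal_docs.txt file are hypothetical placeholders for whatever pre-trained model and private corpus an enterprise actually uses.

```python
# Minimal fine-tuning sketch on a private text corpus (illustrative only;
# not HPE GreenLake for LLMs or Aleph Alpha tooling -- names are placeholders).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "gpt2"  # placeholder; swap in the pre-trained model you have access to
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# "internal_docs.txt" is a hypothetical file of proprietary text kept in-house
dataset = load_dataset("text", data_files={"train": "internal_docs.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the adapted weights stay on infrastructure the enterprise controls
```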

“By using HPE’s supercomputers and AI software, we efficiently and quickly trained Luminous, a large language model for critical businesses such as banks, hospitals, and law firms to use as a digital assistant to speed up decision-making and save time and resources,” commented Jonas Andrulis, founder and CEO, Aleph Alpha.

Gorga also pointed out that the primary use case will be companies planning to deploy their own large language models through HPE GreenLake for LLMs. In the future, however, several other promising use cases can be explored, spanning climate modeling, financial services, drug discovery, and manufacturing.

Apart from that, HPE GreenLake for LLMs will also be available on demand, running on the world’s most powerful, sustainable supercomputers, HPE Cray XD supercomputers. This removes the need for customers to purchase and manage a supercomputer of their own, which is typically costly, complex, and demands specific expertise. Leveraging the HPE Cray Programming Environment, a fully integrated software suite for optimizing HPC and AI applications, users get access to a complete set of tools for developing, porting, debugging, and tuning code.

“HPE GreenLake for LLMs is a comprehensive turnkey solution specifically developed to streamline the deployment of LLM models for organizations. HPE’s solutions offer a range of key components to facilitate efficient implementation. It encompasses world-leading energy-efficient AI infrastructure with the Cray EX, empowering organizations with cutting-edge technology. Additionally, HPE GreenLake for LLMs includes a software stack comprising MLDM and MLDE, enabling customers to curate their data and train AI models. Moreover, it provides flexible deployment options through HPE GreenLake and leverages HPE’s services expertise to ensure seamless integration and optimal performance. In addition, HPE offers a global consulting practice that collaborates closely with customers, tailoring solutions to meet their specific requirements and objectives,” explained Gorga.

A screenshot of a tweet about HPE’s big announcement.

HPE Discover 2023

Apart from GreenLake for LLMs, HPE also made several other announcements at its main event. These include the close of the OpsRamp acquisition: the solution is now available as a SaaS offering on the HPE GreenLake platform, providing customers with AI-driven operations for multi-vendor, multi-cloud IT environments. OpsRamp will enable HPE GreenLake customers to monitor and manage their hybrid and multi-cloud operations, improving the performance and reliability of those environments while reducing complexity and technical debt.

HPE GreenLake for Private Cloud Business Edition is the newest offering in the private cloud portfolio. The offering allows customers to spin up virtual machines (VMs) across hybrid clouds on demand and self-manage their private cloud from VMs to infrastructure with AIOps-driven simplicity.

Meanwhile, HPE GreenLake for Private Cloud Enterprise has added capabilities to address edge use cases: it can now connect to thousands of distributed IT locations to provide managed services for cloud-native and traditional applications. The offering will also add support for deploying Red Hat OpenShift in an upcoming release.

HPE GreenLake for Private Cloud Enterprise will also provide new hybrid and multi-cloud capabilities to easily self-provision workloads with Amazon Web Services, Google Cloud Platform, Microsoft Azure, and VMware. A key benefit of Private Cloud Enterprise is that the infrastructure is “managed for you,” offloading the burden of monitoring, updating, and upgrading hardware and software, so IT teams can focus on meeting other business objectives.

HPE also announced pre-configured and tested cloud modules optimized for VMware Cloud Foundation, together with software licenses, installation, and ongoing management services, all delivered on a pay-per-use basis through HPE GreenLake. Managing VMware Cloud Foundation through HPE GreenLake gives customers a secure, self-service hybrid cloud running on fully managed HPE infrastructure, enabling enterprise IT to shift resources to meet changing business demands and avoid overprovisioning.

“In 2019, we introduced our strategy to deliver everything as a service through HPE GreenLake. Since then, our strategy has been a winner, as customers increasingly seek to combine the modern cloud experience with the control, governance, performance, and predictability of hybrid cloud,” added Neri.

HPE GreenLake is the future of AI for LLMs


Echoing Neri’s comments, Gorga also highlighted that since 2019, HPE has invested US$2 billion in AI, including through the acquisitions of Cray, Pachyderm, and Determined AI. Gorga explained that HPE’s vision is to make AI accessible to all organizations across a range of industries such as manufacturing and retail.

On concerns about AI, Gorga said that HPE is highly committed to ensuring ethical AI practices across its entire edge-to-cloud portfolio of solutions for businesses of all sizes. The company emphasizes the importance of data governance to monitor and control how data is used in AI-powered systems, upholding privacy and security standards across industries.

“Trust is the key, without which you can’t be accurate. It’s necessary to ensure that the AI’s creators are responsible, but not the technology itself. Implementing a system of checks and balances is a way to achieve this. Additionally, people involved in the decision-making process must be transparent with their employees on the risks of AI as well as how these algorithms function and impact workflows and processes,” said Gorga.

He also emphasized that HPE strongly believes AI should not be designed to make decisions beyond human control, and instead encourages AI development that respects human rights and dignity. Furthermore, the company believes that AI should not be developed with the intention of replacing humans or causing harm, but rather to advance humanity and society for good.