HPE has entered the AI cloud market with the expansion of its HPE GreenLake portfolio to include large language models (LLMs) for any enterprise.
The new offering is the first in a series of industry- and domain-specific AI applications, with future support planned for climate modeling, healthcare and life sciences, financial services, manufacturing, and transportation, the company said.
With the introduction of HPE GreenLake for LLMs, businesses can privately train, tune, and deploy large-scale AI on a sustainable supercomputing platform that combines HPE’s AI software with the industry’s most powerful supercomputers.
HPE’s AI Cloud GreenLake for Large Language Models
“We have reached a generational market shift in AI that will be as transformational as the web, mobile, and cloud,” stated Antonio Neri, president and CEO of HPE.
Organizations can now leverage AI to drive innovation, disrupt markets, and achieve breakthroughs with an on-demand cloud service that trains, tunes, and deploys models responsibly and at scale, according to Neri.
HPE GreenLake for LLMs will be delivered in collaboration with HPE’s first AI startup partner, Aleph Alpha, to provide users with a field-proven, ready-to-use LLM for use cases that require text and image processing and analysis.
HPE GreenLake for LLMs operates on an AI-native architecture uniquely designed to run a single large-scale AI training and simulation workload at full computing capacity, in contrast to general-purpose cloud offerings that run multiple workloads in parallel.
According to the company, the offering will enable AI and HPC tasks to run on hundreds or thousands of CPUs or GPUs simultaneously. HPE is currently accepting orders for HPE GreenLake for LLMs and anticipates additional availability by the end of this year, beginning in North America, with availability in Europe expected to follow early next year.