Dell releases open source suite Omnia to manage AI, analytics workloads


Dell today announced the release of Omnia, an open source software package aimed at simplifying the deployment and management of AI and other compute-intensive workloads. Developed at Dell's High Performance Compute (HPC) and AI Innovation Lab in collaboration with Intel and Arizona State University (ASU), Omnia automates the provisioning and management of HPC, AI, and data analytics workloads to create a single pool of hardware resources.

The release of Omnia comes as enterprises turn to AI during the health crisis to drive innovation. According to a Statista survey, 41.2% of enterprises say they're competing on data and analytics, while 24% say they've created data-driven organizations. Meanwhile, 451 Research reports that 95% of companies surveyed for its recent study consider AI technology important to their digital transformation efforts.

Dell describes Omnia as a set of Ansible playbooks that speed the deployment of converged workloads with containers and Slurm, along with library frameworks, services, and apps. Ansible, an open source automation tool now maintained by Red Hat, handles configuration management and app deployment, while Slurm is a job scheduler for Linux used by many of the world's supercomputers and computer clusters.
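For readers unfamiliar with Slurm, work is typically described in a batch script whose `#SBATCH` directives request resources from the scheduler, then submitted with `sbatch`. A minimal sketch (the job name, node counts, time limit, and workload command are illustrative, not specific to Omnia):

```shell
#!/bin/bash
# Minimal Slurm batch script (illustrative values)
#SBATCH --job-name=train-model     # name shown in the job queue
#SBATCH --nodes=2                  # request two nodes
#SBATCH --ntasks-per-node=4        # run four tasks on each node
#SBATCH --time=01:00:00            # one-hour wall-clock limit

# srun launches the workload across the allocated nodes;
# the training script here is a hypothetical example
srun python train.py
```

On a cluster, this would be submitted with `sbatch train.sh` and monitored with `squeue`; the scheduler queues the job until the requested nodes are free.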


Omnia automatically imprints software solutions onto servers, specifically networked Linux servers, based on the particular use case: HPC simulations, neural networks for AI, or in-memory processing for data analytics, for example. Dell claims the software can reduce deployment time from weeks to minutes.

“As AI with HPC and data analytics converge, storage and networking configurations have remained in siloes, making it challenging for IT teams to provide required resources for shifting demands,” Peter Manca, SVP at Dell Technologies, said in a press release. “With Dell’s Omnia open source software, teams can dramatically simplify the management of advanced computing workloads, helping them speed research and innovation.”

Above: A flow chart describing how Omnia works.

Omnia can build clusters that use Slurm or Kubernetes for workload management, and it tries to leverage existing projects rather than reinvent the wheel. The software automates the cluster deployment process, starting with provisioning the operating system to servers, and can install Kubernetes, Slurm, or both along with additional drivers, services, libraries, and apps.
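Ansible playbooks like Omnia's are applied to an inventory of hosts with the `ansible-playbook` command line tool. A hedged sketch of how such a deployment is typically driven (the inventory and playbook file names below are generic illustrations, not Omnia's actual file layout):

```shell
# Apply a playbook to every host listed in an inventory file
# (inventory.ini and site.yml are illustrative names)
ansible-playbook -i inventory.ini site.yml

# Limit a run to one group of hosts, e.g. just the compute nodes
ansible-playbook -i inventory.ini site.yml --limit compute

# Preview what would change without applying anything
ansible-playbook -i inventory.ini site.yml --check
```

Because playbooks are idempotent descriptions of desired state, rerunning them against the same inventory only applies what is missing, which is what makes this approach practical for managing drift across a large cluster.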

“Engineers from ASU and Dell Technologies worked together on Omnia’s creation,” Douglas Jennewein, senior director of research computing at Arizona State University, said in a statement. “It’s been a rewarding effort working on code that will simplify the deployment and management of these complex mixed workloads, at ASU and for the entire advanced computing industry.”

In a related announcement today, Dell said it's expanding its HPC on demand offering to support additional VMware environments, including VMware Cloud Foundation, VMware Cloud Director, and VMware vRealize Operations. Beyond this, the company now offers Nvidia A30 and A10 Tensor Core GPUs as options for its Dell EMC PowerEdge R750, R750xa, and R7525 servers.
