Today’s enterprises need an end-to-end strategy for AI innovation to accelerate time to insight and open new business frontiers. To stay ahead of the competition, they also need a streamlined AI development workflow that supports fast prototyping, frequent iteration, and continuous feedback, together with a robust infrastructure that can scale in an enterprise production setting. NVIDIA DGX™ systems are purpose-built to meet the demands of enterprise AI and data science, delivering the fastest start in AI development, effortless productivity, and revolutionary performance, so teams gain insights in hours instead of months.
Ubuntu is the enterprise-grade Linux most loved by developers, in the cloud and at the edge. Unlike other enterprise Linux distributions, Ubuntu lets developers get started for free. Users benefit from teams working continuously to bring the latest NVIDIA software to Ubuntu, enabling native integration with technologies such as NVIDIA PeerDirect, NVIDIA GPUDirect, and GPUDirect Storage, along with a choice of signed NVIDIA GPU drivers. Each Ubuntu LTS release brings 10 years of bug fixes and security patches to NVIDIA DGX systems, so your systems remain secure and run perfectly out of the box. Leading AI practitioners today use the Ubuntu/DGX OS combination to run their high-performance workloads cleanly on DGX systems. Integrations with Canonical Kubernetes provide a highly performant, one-stop solution for orchestrating a data center of NVIDIA DGX nodes, while enterprise-class support is just a few clicks away through Canonical and NVIDIA.
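To give a flavour of what Kubernetes orchestration of DGX nodes looks like in practice, the sketch below uses the standard Kubernetes Python client to schedule a single-GPU pod. It assumes a cluster where the NVIDIA device plugin (or GPU operator) is already exposing GPUs as the `nvidia.com/gpu` resource, that a kubeconfig is available to the caller, and it uses an illustrative CUDA container image; it is a minimal example, not part of the Canonical or NVIDIA tooling itself.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (use load_incluster_config()
# when running inside the cluster instead).
config.load_kube_config()

# A one-shot pod that requests one GPU and runs nvidia-smi as a smoke test.
# The image tag is illustrative; pick a CUDA image that matches your driver.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-smoke-test"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="cuda",
                image="nvcr.io/nvidia/cuda:12.2.0-base-ubuntu22.04",
                command=["nvidia-smi"],
                resources=client.V1ResourceRequirements(
                    # The NVIDIA device plugin advertises GPUs under this
                    # resource name; the scheduler places the pod on a node
                    # (e.g. a DGX system) with a free GPU.
                    limits={"nvidia.com/gpu": "1"},
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

The same resource-request pattern scales from this single-pod check to multi-node, multi-GPU training jobs once the cluster is in place.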
This solution brief introduces the attributes of both Charmed Kubernetes and MicroK8s, and shows how AI experts leverage Canonical Kubernetes, a perfect match for NVIDIA DGX systems, for at-scale training.