Info

We are very proud to announce the world's first and only NV-linked, liquid-cooled, CDU-integrated, ready-to-use, "small-sized" rack server systems powered by the Nvidia GH200 Grace-Hopper Superchip and the Nvidia GB200 Grace-Blackwell Superchip. Multiple NV-linked GH200 or GB200 superchips act as a single giant GPU (CPU-GPU superchip) with one single giant coherent memory pool. To our knowledge, we are the only vendor that also offers systems smaller than a complete rack (with "only" 4, 8, 16 or 18 superchips). If you prefer AMD, we offer MI300 systems too. All systems have a coolant distribution unit (CDU) integrated into the rack, come ready to use and are ideal for inferencing extremely large LLMs and for quickly fine-tuning and training LLMs.

Example use case 1: Inferencing Llama 3 400B
  • Info: https://ai.meta.com/blog/meta-llama-3/
  • Llama 3 400B (to be released) will be the most powerful open-source model by far.
  • Llama 3 400B needs at least 800GB of memory to swiftly run inference at FP16 (the sketch below shows the arithmetic)! Luckily, GH200 NVL and GB200 NVL offer up to 20TB and 30TB respectively; one rack of MI300A or MI300X offers up to 8TB.
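
    The arithmetic behind this figure, as a minimal sketch (the parameter count and precisions are illustrative; the KV cache and activations add to the totals):

    BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "fp8": 1, "int4": 0.5}

    def weight_memory_gb(n_params: float, precision: str) -> float:
        # Memory needed just to hold the model weights, in GB.
        return n_params * BYTES_PER_PARAM[precision] / 1e9

    for precision in ("fp32", "fp16", "fp8", "int4"):
        print(f"400e9 params @ {precision}: {weight_memory_gb(400e9, precision):,.0f} GB")
    # fp16 -> 800 GB, the figure quoted above; the KV cache for long
    # contexts and large batches comes on top of this.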

Example use case 2: Fine-tuning Llama-3-70B with PyTorch FSDP and Q-Lora
  • Tutorial: https://www.philschmid.de/fsdp-qlora-llama3
  • Models need to be fine-tuned on your data to unlock their full potential, but efficiently fine-tuning bigger models like Llama 3 70B remained a challenge until now. This blog post walks you through fine-tuning Llama 3 using PyTorch FSDP and Q-Lora with the help of Hugging Face TRL, Transformers, peft & datasets.
  • Fine-tuning Llama-3-70B within a reasonable time requires special, beefy hardware! Luckily, multiple GH200, GB200 or MI300 superchips are ideal for completing this task extremely quickly; a minimal sketch of the Q-Lora setup follows below.
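
    As a taste of the recipe, here is a minimal sketch of its Q-Lora part, 4-bit frozen base weights plus LoRA adapters via Hugging Face Transformers and peft (the model ID and hyperparameters are placeholders; the full FSDP multi-GPU setup is in the linked tutorial):

    import torch
    from peft import LoraConfig, get_peft_model
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    model_id = "meta-llama/Meta-Llama-3-70B"   # gated repo; requires approved access

    bnb = BitsAndBytesConfig(                  # keep the frozen base weights in 4-bit
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    model = AutoModelForCausalLM.from_pretrained(
        model_id, quantization_config=bnb, device_map="auto"
    )
    tokenizer = AutoTokenizer.from_pretrained(model_id)

    lora = LoraConfig(                         # train only small adapter matrices
        r=16, lora_alpha=32, lora_dropout=0.05,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()         # a fraction of a percent of 70B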

Example use case 3: Creating a Large Language Model from scratch
  • Tutorial: https://www.pluralsight.com/resources/blog/data/how-build-large-language-model
  • Imagine stepping into the world of language models as a painter stepping in front of a blank canvas. The canvas here is the vast potential of Natural Language Processing (NLP), and your paintbrush is the understanding of Large Language Models (LLMs). This article aims to guide you, a data practitioner new to NLP, in creating your first Large Language Model from scratch, focusing on the Transformer architecture and utilizing TensorFlow and Keras.
  • Training an LLM from scratch within a reasonable time requires special and extremely beefy hardware! Luckily, multiple GH200, GB200 or MI300 superchips are ideal for this task; a toy model skeleton follows below.
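
    To make this concrete, below is a toy decoder-only skeleton. The linked tutorial uses TensorFlow and Keras; PyTorch is shown here for consistency with the other examples, and all sizes are toy values:

    import torch
    import torch.nn as nn

    class TinyLM(nn.Module):
        # Decoder-only language model skeleton; positional encodings
        # and other refinements are omitted for brevity.
        def __init__(self, vocab=32000, d_model=512, n_heads=8, n_layers=6):
            super().__init__()
            self.embed = nn.Embedding(vocab, d_model)
            layer = nn.TransformerEncoderLayer(
                d_model, n_heads, batch_first=True, norm_first=True)
            self.blocks = nn.TransformerEncoder(layer, n_layers)
            self.lm_head = nn.Linear(d_model, vocab)

        def forward(self, tokens):             # tokens: (batch, seq)
            mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
            h = self.blocks(self.embed(tokens), mask=mask)  # causal self-attention
            return self.lm_head(h)             # next-token logits

    model = TinyLM()
    tokens = torch.randint(0, 32000, (2, 128))           # dummy batch
    logits = model(tokens)
    loss = nn.functional.cross_entropy(                  # predict token t+1 from t
        logits[:, :-1].reshape(-1, 32000), tokens[:, 1:].reshape(-1))
    loss.backward()                                      # gradients for one step
    print(float(loss))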

Why should you buy your own hardware?
  • "You'll own nothing and you'll be happy"? No! Never bow to Satan and rent what you can own. In other areas, renting things you could own is uncool and uncommon: would you prefer to rent "your" car instead of owning it? Most people prefer to own their car because it is much cheaper, it is an asset that holds value, and it makes the owner proud and happy. The same is true for compute infrastructure.
  • Even more so because data and compute infrastructure are valuable, important assets that are best kept on premises, not only for privacy but also to retain control and mitigate risk. If somebody else holds your data and your compute infrastructure, you are in big trouble.
  • Speed, latency and ease of use are also much better when you have direct physical access to your hardware.
  • With respect to AI and specifically LLMs there is another very important aspect. The first thing big tech taught their closed-source LLMs was to be "politically correct" (lie) and implement guardrails, "safety" and censorship to such an extent that the usefulness of these LLMs is severely limited. Luckily, the (open-source) tools are out there to build and tune AI that is really intelligent and really useful. But first, you need your own hardware to run it on.

What are the main benefits of GH200 Grace-Hopper and GB200 Grace-Blackwell?
  • They have enough memory to run, tune and train the biggest LLMs currently available.
  • Their performance in every regard is almost unreal (on select workloads, up to 8520 times faster than x86).
  • There are no alternative systems with the same amount of memory.
  • Ideal for AI, especially inferencing, fine-tuning and training of LLMs.
  • Multiple NV-linked GH200 or GB200 act as a single giant GPU.
  • Optimized for memory-intensive AI and HPC performance.
  • Ideal for HPC applications such as vector databases.
  • Easily customizable, upgradable and repairable.
  • Privacy and independence from cloud providers.
  • Cheaper and much faster than cloud providers.
  • They can be very quiet (with liquid-liquid CDU).
  • Flexibility and the possibility of offline use.
  • Gigantic amounts of coherent memory.
  • They are very power-efficient.
  • The lowest possible latency.
  • Reliable liquid cooling.
  • They are beautiful.
  • CUDA enabled.
  • Run Linux.

    GB200 Blackwell

    The upcoming Nvidia GB200 Grace-Blackwell Superchip has truly amazing specs to show off. GPTrack.ai ready-to-use rack server systems with multiple NV-linked Nvidia GB200 Grace-Blackwell superchips (up to 72) will arrive at the end of Q4 2024.

    What is the difference compared to alternative systems?
    The main difference between GH200/GB200 and alternative systems is that with GH200/GB200, the GPU is connected to the CPU via 900 GB/s NVLink-C2C vs. the 128 GB/s PCIe gen5 used by traditional systems. Furthermore, multiple superchips can be connected via 900/1800 GB/s NVLink vs. the orders-of-magnitude slower network connections used by traditional systems. Since these links are the main bottlenecks, GH200/GB200's high-speed connections translate directly into much higher performance than traditional architectures; the sketch below illustrates the impact. Also, multiple NV-linked GH200 or GB200 act as a single giant GPU (CPU-GPU superchip) with one single giant coherent memory pool.
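
    As a back-of-the-envelope sketch, here is what those peak bandwidths mean for moving the 800GB of Llama 3 400B FP16 weights once (peak figures as quoted above; sustained throughput is lower, and the network figure is a generic assumption for comparison):

    WEIGHTS_GB = 800                 # Llama 3 400B at FP16, as above
    links = {
        "PCIe gen5 x16":       128,  # GB/s, traditional CPU-GPU link
        "NVLink-C2C (GH200)":  900,  # GB/s, CPU-GPU on the superchip
        "NVLink (GB200)":     1800,  # GB/s, superchip to superchip
        "400Gb/s network":      50,  # GB/s, generic cluster assumption
    }
    for name, gb_per_s in links.items():
        print(f"{name:>20}: {WEIGHTS_GB / gb_per_s:6.2f} s to move the weights once")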

    We partner with Phoronix to benchmark as much as possible and hope to soon have solid, publicly available benchmark data showing how the different solutions compare for different workloads. The comparisons are expected to vary greatly between workloads. If you want to know how your workload performs on GH200/GB200 or MI300, you can apply for a remote bare-metal test here: Try

    What is the difference compared to competitors' server systems?
  • Size: We focus on systems no bigger than one single rack. With GB200, that gives you more than an exaflop of compute. If that is, for some reason, really not enough for you, we are happy to make you a custom offer. But for many people, one complete rack is more than they need and too expensive. That is why we also offer smaller systems with only 4, 8, 16 or 18 superchips. To our knowledge, we are the only vendor where you can get systems smaller than a complete GH200 NVL32 or GB200 NVL72 rack.
  • In-rack CDU: Our rack server systems come standard with liquid cooling and a CDU integrated directly into the rack. You can choose between an air-liquid and a liquid-liquid CDU.
  • Ready-to-use: In contrast to other vendors, our systems come fully integrated and ready to use. Everything needed is included and tested; all you have to do is plug the system in.

    Technical details of our GH200/GB200 rack server systems (base configuration)
  • Standard 19-inch or 21-inch OCP rack
  • Liquid-cooled
  • In-rack CDU
  • Multiple Nvidia GH200 Grace Hopper Superchips
  • Multiple Nvidia GB200 Grace Blackwell Superchips
  • Multiple 72-core Nvidia Grace CPUs
  • Multiple Nvidia H100 Tensor Core GPUs
  • Multiple Nvidia B100 Tensor Core GPUs
  • Up to 36x 480GB of LPDDR5X memory with error-correction code (ECC)
  • Up to 13.5TB of HBM3e
  • Up to 30TB of fast-access memory (see the breakdown after this list)
  • NVLink-C2C: 900 GB/s of coherent memory
  • GH200: Programmable from 450W to 1000W TDP (CPU + GPU + memory)
  • GB200: Programmable from 1200W to 2700W TDP (CPU + 2 GPU + memory)
  • Up to 6x power shelves
  • Up to 72x PCIe gen5 M.2 22110/2280 slots on board
  • Up to 288x PCIe gen5 drive slots (NVMe)
  • Up to 108x FHFL PCIe Gen5 x16
  • 2 years manufacturer's warranty
  • Up to 900 x 2286 x 1368 mm (35.4 x 90 x 53.9")
  • Up to 1500 kg (3300 lbs)
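
    As a sanity check, the headline memory-pool sizes follow from Nvidia's published per-component figures (the component counts below are the NVL32/NVL72 configurations; marketing rounds the sums):

    gh200_nvl32 = 32 * (480 + 144)       # 32 superchips x (LPDDR5X + HBM3e), in GB
    gb200_nvl72 = 36 * 480 + 72 * 192    # 36 Grace CPUs + 72 Blackwell GPUs, in GB
    print(gh200_nvl32 / 1000, "TB")      # 19.968 -> the "20TB" NVL32 pool
    print(gb200_nvl72 / 1000, "TB")      # 31.104 -> marketed as "30TB"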

    Optional components
  • NIC Nvidia Bluefield-3 400Gb
  • NIC Nvidia ConnectX-7 200Gb
  • NIC Intel 100Gb
  • Up to 72x 4TB M.2 SSD
  • Up to 288x 8TB E1.S SSD
  • Up to 288x 60TB 2.5" SSD
  • Storage controller
  • Raid controller
  • OS preinstalled
  • Anything is possible on request

    Compute performance of one GH200
  • 67 teraFLOPS FP64
  • 1 petaFLOPS TF32
  • 2 petaFLOPS FP16
  • 4 petaFLOPS FP8

    Compute performance of one GB200
  • 90 teraFLOPS FP64
  • 5 petaFLOPS TF32
  • 10 petaFLOPS FP16
  • 20 petaFLOPS FP8
  • 40 petaFLOPS FP4
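
    A hedged rule of thumb for reading these figures: generating one token of a dense transformer costs roughly 2 x N FLOPs, where N is the parameter count, so peak compute gives only an upper bound on throughput; single-stream decoding is usually memory-bandwidth-bound and lands well below it:

    def max_tokens_per_s(peak_flops: float, n_params: float) -> float:
        # Compute-bound ceiling only; ignores memory bandwidth and batching.
        return peak_flops / (2 * n_params)

    print(max_tokens_per_s(4e15, 400e9))   # one GH200 at FP8: ~5,000 tokens/s ceiling
    print(max_tokens_per_s(20e15, 400e9))  # one GB200 at FP8: ~25,000 tokens/s ceiling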

    Benchmarks
    Phoronix has so far benchmarked the Grace CPU. More is coming soon:
  • https://www.phoronix.com/review/nvidia-gh200-gptshop-benchmark
  • https://www.phoronix.com/review/nvidia-gh200-amd-threadripper
  • https://www.phoronix.com/review/aarch64-64k-kernel-perf
  • https://www.phoronix.com/review/nvidia-gh200-compilers
  • White paper: Nvidia GH200 Grace-Hopper white paper

    Trademark information: Nvidia is a trademark of Nvidia Corporation. ARM is a trademark of Arm Holdings plc.

    Download

    Here you can find various downloads for our GH200, GB200 and MI300 systems: operating systems, firmware, drivers, software, manuals, white papers, spec sheets and so on. Everything you need to run your system and more.

    Spec sheets
  • GH200 NVL4 2.5TB: Spec sheet GH200 NVL4 2.5TB.pdf
  • GH200 NVL8 5TB: Spec sheet GH200 NVL8 5TB.pdf
  • GH200 NVL16 10TB: Spec sheet GH200 NVL16 10TB.pdf
  • GH200 NVL32 20TB: Spec sheet GH200 NVL32 20TB.pdf
  • GB200 NVL8 3.5TB: Spec sheet GB200 NVL8 3.5TB.pdf
  • GB200 NVL16 7TB: Spec sheet GB200 NVL16 7TB.pdf
  • GB200 NVL36 15TB: Spec sheet GB200 NVL36 15TB.pdf
  • GB200 NVL72 30TB: Spec sheet GB200 NVL72 30TB.pdf
  • Mi300X 8TB: Spec sheet Mi300X 8TB.pdf
  • Mi300A 8TB: Spec sheet Mi300A 8TB.pdf

    Manuals
  • Official Nvidia GH200 Manual: https://docs.nvidia.com/grace/#grace-hopper
  • Official Nvidia Grace Manual: https://docs.nvidia.com/grace/#grace-cpu
  • Official Nvidia Grace getting started: https://docs.nvidia.com/grace/#getting-started-with-nvidia-grace
  • GH200 NVL: Manual GH200 NVL.pdf
  • GB200 NVL: Manual GB200 NVL.pdf
  • Mi300A: Manual Mi300A.pdf
  • Mi300X: Manual Mi300X.pdf

    Operating systems for Nvidia systems
  • Ubuntu Server for ARM: https://ubuntu.com/download/server/arm
  • Ubuntu Desktop for ARM: https://cdimage.ubuntu.com/daily-live/current/noble-desktop-arm64.iso

    Any other ARM Linux distribution with a kernel newer than 6.5 should work just fine. Using the newest 64k page-size kernel is highly recommended; the snippet below checks both.
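
    A small standard-library check that a running system matches this recommendation (kernel 6.5 or newer, ideally with a 64k page size):

    import os
    import resource

    release = os.uname().release                    # e.g. "6.8.0-...-64k"
    major, minor = (int(x) for x in release.split(".")[:2])
    page_kb = resource.getpagesize() // 1024        # 64 on a 64k page-size kernel

    print(f"kernel {release}, {page_kb} KB pages")
    assert (major, minor) >= (6, 5), "kernel older than 6.5"
    if page_kb != 64:
        print("note: not running a 64k page-size kernel")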

    Operating systems for AMD systems
  • Ubuntu Server for x86: https://ubuntu.com/download/server
  • Ubuntu Desktop for x86: https://ubuntu.com/download/desktop

    Any other x86 Linux distribution with a kernel newer than 6.8 should work just fine. Using the newest kernel is highly recommended.

    Drivers
  • Nvidia GH200 drivers: https://www.nvidia.com/Download/index.aspx?lang=en-us
    Select product type "data center", product series "HGX-Series" and operating system "Linux aarch64".
  • Nvidia Bluefield-3 drivers: https://developer.nvidia.com/networking/doca#downloads
  • Nvidia ConnectX-7 drivers: https://network.nvidia.com/products/ethernet-drivers/linux/mlnx_en/
  • Intel E810-CQDA2 drivers: https://www.intel.com/content/www/us/en/download/19630/intel-network-adapter-driver-for-e810-series-devices-under-linux.html?wapkw=E810-CQDA2
  • Broadcom eHBA 9600-16i drivers: https://www.broadcom.com/products/storage/host-bus-adapters/sas-nvme-9600-16i
  • Graid SupremeRAID SR-1001 drivers: https://docs.graidtech.com/#linux-driver

    Firmware/BIOS
  • Nvidia Bluefield-3 firmware: https://network.nvidia.com/support/firmware/bluefield3/
  • Nvidia ConnectX-7 firmware: https://network.nvidia.com/support/firmware/connectx7/
  • Intel E810-CQDA2 firmware: https://www.intel.com/content/www/us/en/search.html?ws=idsa-default#q=E810-CQDA2
  • Broadcom eHBA 9600-16i firmware: https://www.broadcom.com/products/storage/host-bus-adapters/sas-nvme-9600-16i

    Software
  • Nvidia Github: https://github.com/NVIDIA
  • Nvidia CUDA: https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=arm64-sbsa
  • Nvidia Container-toolkit: https://github.com/NVIDIA/nvidia-container-toolkit
  • Nvidia TensorFlow: https://github.com/NVIDIA/tensorflow
  • Nvidia PyTorch: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch
  • Nvidia NIM models: https://build.nvidia.com/explore/discover
  • Nvidia Triton inference server: https://www.nvidia.com/de-de/ai-data-science/products/triton-inference-server/
  • Nvidia NeMo Customizer: https://developer.nvidia.com/blog/fine-tune-and-align-llms-easily-with-nvidia-nemo-customizer/
  • Hugging Face open-source LLMs: https://huggingface.co/models
  • Hugging Face text generation inference: https://github.com/huggingface/text-generation-inference
  • vLLM - inference and serving engine: https://github.com/vllm-project/vllm
  • Ollama - run LLMs locally: https://ollama.com/
  • Fine-tune Llama 3 with PyTorch FSDP and Q-Lora: https://www.philschmid.de/fsdp-qlora-llama3/

    Benchmarking
  • Phoronix test suite: https://www.phoronix-test-suite.com/
  • MLCommons: https://github.com/mlcommons

    White paper
  • Nvidia GH200 Grace-Hopper white paper

    Contact

    Email: x@gptrack.ai

    GPTrack.ai UG (limited)
    Sachsenhof 1
    96106 Ebern
    Germany

    CEO: Bernhard Guentner

    Trade register Bamberg HRB 11581

    Try

    Try before you buy. You can apply for remote testing of a GH200, GB200 or MI300 system. After approval, you will be given login credentials for remote access. If you want to come by, see a system for yourself and run some tests, that is also possible at any time.

    Currently available for testing: coming soon

    Apply via email: x@gptrack.ai