Info
We are very proud to announce the world's first and only Nvidia GH200 Grace-Hopper Superchip and Nvidia GB200 Grace-Blackwell Superchip-powered, NV-linked, liquid-cooled, CDU-integrated, ready-to-use, "small-sized" rack server systems (GB300 Grace-Blackwell Ultra coming soon). Multiple NV-linked GH200, GB200 or GB300 superchips act as a single giant GPU (CPU-GPU superchip) with one single giant coherent memory pool. We are the only ones who offer systems smaller than a complete rack, with "only" 1, 2, 4, 8, 16 or 18 superchips (NVL2, NVL4, NVL8, NVL16 and NVL36). If you like AMD, we offer Mi325X and Mi300A systems too. All systems are available with a coolant distribution unit (CDU) integrated into the rack, come ready to use, and are perfect for inferencing insanely huge LLMs, quickly fine-tuning and training LLMs, and generating and editing images and videos.
Example use case 1: Inferencing Deepseek R1 671B, Nvidia Nemotron Super 49B, Nvidia Nemotron Ultra 253B or QwQ-32B
Deepseek R1 671B: https://huggingface.co/deepseek-ai/DeepSeek-R1
Nvidia Nemotron Super 49B and Ultra 253B: https://www.nvidia.com/en-us/ai-data-science/foundation-models/llama-nemotron/
QwQ-32B: https://qwenlm.github.io/blog/qwq-32b/
Deepseek R1 671B, Nvidia Nemotron Super 49B, Nvidia Nemotron Ultra 253B and QwQ-32B are the most powerful open-source models by far and even beat OpenAI o1/o3 and Claude 3.7 Sonnet. Surprisingly, the small QwQ-32B performs (almost) as well as Deepseek R1 671B. Deepseek R1 671B with 4-bit quantization needs at least 404GB of memory to swiftly run inference! Nvidia Nemotron Ultra 253B with 8-bit quantization needs at least 270GB. QwQ-32B with 16-bit quantization needs at least 66GB. Luckily, GH200 offers a minimum of 576GB of coherent memory, GB200 a minimum of 864GB and GB300 a minimum of 1056GB. With GH200, QwQ-32B in 16-bit can run entirely in VRAM for ultra-high inference speed (approx. 100 tokens/s). With Mi300, GB200 Blackwell and GB300 Blackwell Ultra, this is also possible for Deepseek R1 671B. With GB200 Blackwell and GB300 Blackwell Ultra you can expect significantly more than 200 tokens/s. If the model is bigger than VRAM, you can only expect approx. 10-20 tokens/s. Surprisingly, Deepseek R1 671B in 4-bit runs on GH200 at 20 tokens/s (using Nvidia Dynamo). That is usable! 4-bit quantization seems to be the best trade-off between speed and accuracy, but is natively supported only by GB200 Blackwell and GB300 Blackwell Ultra. We recommend using Nvidia Dynamo (https://www.nvidia.com/en-us/ai/dynamo/) for inferencing. A rough way to estimate such memory requirements yourself is sketched after use case 4 below.
Example use case 2: Fine-tuning Deepseek R1 671B with PyTorch FSDP and Q-Lora
Tutorial: https://www.philschmid.de/fsdp-qlora-llama3
The ultimate guide to fine-tuning: https://arxiv.org/abs/2408.13296
Models need to be fine-tuned on your data to unlock their full potential. But efficiently fine-tuning bigger models like Deepseek R1 671B remained a challenge until now. The tutorial above walks you through fine-tuning with PyTorch FSDP and Q-Lora with the help of Hugging Face TRL, Transformers, peft & datasets (a minimal code sketch follows at the end of this section). Fine-tuning big models within a reasonable time requires special and beefy hardware! Luckily, GH200, GB200 or Mi300 are ideal to complete this task extremely quickly.
Example use case 3: Generating videos with Mochi1, HunyuanVideo or Wan 2.1
Mochi1: https://github.com/genmoai/models
Tencent HunyuanVideo: https://aivideo.hunyuan.tencent.com/
Wan 2.1: https://github.com/Wan-Video/Wan2.1
Mochi1, HunyuanVideo and Wan 2.1 are democratizing efficient video production for all. Generating videos requires special and beefy hardware! Mochi1 and HunyuanVideo each need 80GB of VRAM. Luckily, GH200, GB200 and GB300 or Mi300/Mi325 are ideal for this task. GH200 has a minimum of 96GB of VRAM, GB200 a minimum of 288GB, GB300 a minimum of 576GB, Mi300A a minimum of 512GB and Mi325X a minimum of 2TB.
Example use case 4: Image generation with Flux.1 or SANA-Sprint
Flux: https://github.com/black-forest-labs/flux
SANA-Sprint: https://nvlabs.github.io/Sana/Sprint/
Flux.1 is the best image generator at the moment. And it's uncensored, too. SANA-Sprint is very fast and efficient. For maximum inference speed, FLUX requires approximately 33GB of VRAM. For training the FLUX model, more than 40GB of VRAM is needed. SANA-Sprint requires up to 67GB of VRAM. Luckily, GH200 has a minimum of 96GB of VRAM, GB200 a minimum of 288GB, GB300 a minimum of 576GB, Mi300A a minimum of 512GB and Mi325X a minimum of 2TB.
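How do we arrive at memory figures like the ones above? As a rule of thumb, a model needs (parameters × bits / 8) bytes for its weights, plus headroom for the KV cache, activations and framework overhead. Below is a minimal Python sketch of that estimate; the flat 20% overhead margin is an illustrative assumption, since the real margin depends on context length and batch size.

```python
# Rough VRAM estimate for running an LLM at a given quantization.
# The 20% overhead margin is an illustrative assumption covering KV cache,
# activations and framework overhead, not a vendor figure.

def model_memory_gb(params_billion: float, bits: int, overhead: float = 1.2) -> float:
    """Approximate memory needed to hold the weights plus runtime overhead."""
    weight_gb = params_billion * bits / 8  # 1B params at 8-bit ~= 1 GB
    return weight_gb * overhead

if __name__ == "__main__":
    for name, params, bits in [
        ("Deepseek R1", 671, 4),
        ("Nemotron Ultra", 253, 8),
        ("QwQ", 32, 16),
    ]:
        print(f"{name} {params}B @ {bits}-bit: ~{model_memory_gb(params, bits):.0f} GB")
```

For Deepseek R1 671B at 4-bit this yields roughly 400GB, in line with the 404GB figure quoted in use case 1.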
Example use case 5: Image editing with Omnigen or Nvidia Add-it
Omnigen: https://github.com/VectorSpaceLab/OmniGen
Nvidia Add-it: https://research.nvidia.com/labs/par/addit/
Omnigen and Add-it are the most innovative and easiest-to-use image editors at the moment. For maximum speed in high-resolution image generation and editing, beefier hardware than consumer graphics cards is needed. Luckily, GH200 and GB200 excel at this task.
Example use case 6: Video editing with AutoVFX or VACE
AutoVFX: https://haoyuhsu.github.io/autovfx-website/
VACE: https://ali-vilab.github.io/VACE-Page/
AutoVFX and VACE are the most innovative and easiest-to-use video editors at the moment. For maximum speed in high-resolution video editing, beefier hardware than consumer graphics cards is needed. Luckily, GH200 and GB200 excel at this task.
Example use case 7: Creating a Large Language Model from scratch
Tutorial: https://www.pluralsight.com/resources/blog/data/how-build-large-language-model
Imagine stepping into the world of language models as a painter stepping in front of a blank canvas. The canvas here is the vast potential of Natural Language Processing (NLP), and your paintbrush is the understanding of Large Language Models (LLMs). This article aims to guide you, new to NLP, in creating your first Large Language Model from scratch, focusing on the Transformer architecture and utilizing TensorFlow and Keras. Training an LLM from scratch within a reasonable time requires special and extremely beefy hardware! Luckily, GH200, GB200 and GB300 or Mi300/Mi325 are ideal for this task.
Why should you buy your own hardware?
"You'll own nothing and you'll be happy?" No!!! Never should you bow to Satan and rent stuff that you can own. In other areas, renting stuff that you can own is very uncool and uncommon. Or would you prefer to rent "your" car instead of owning it? Most people prefer to own their car, because it's much cheaper, it's an asset that has value and it makes the owner proud and happy. The same is true for compute infrastructure.
Even more so, because data and compute infrastructure are of great value and importance and are preferably kept on premises, not only for privacy reasons but also to keep control and mitigate risks. If somebody else has your data and your compute infrastructure, you are in big trouble.
Speed, latency and ease of use are also much better when you have direct physical access to your equipment.
With respect to AI, and specifically LLMs, there is another very important aspect. The first thing big tech taught their closed-source LLMs was to be "politically correct" (lie) and to implement guardrails, "safety" and censorship to such an extent that the usefulness of these LLMs is severely limited. Luckily, the open-source tools are out there to build and tune AI that is really intelligent and really useful. But first, you need your own hardware to run it on.
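To make example use case 2 above concrete, here is a minimal, hedged sketch of Q-Lora fine-tuning with Hugging Face Transformers, peft and TRL. The model id, dataset file and hyperparameters are placeholders, the TRL/peft APIs vary between versions, and the multi-GPU FSDP launch configuration from the linked tutorial (handled via accelerate) is omitted for brevity.

```python
# Minimal Q-LoRA fine-tuning sketch: frozen 4-bit base model + trainable
# LoRA adapters. Placeholders throughout; assumes bitsandbytes is available
# on your platform and recent versions of transformers/peft/trl.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from trl import SFTConfig, SFTTrainer

model_id = "meta-llama/Llama-3.1-8B"  # placeholder; swap in your target model

# Load the base model quantized to 4-bit (NF4) so it fits in far less memory.
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb,
                                             device_map="auto")

# LoRA adapters on all linear layers; only these small matrices are trained.
peft_cfg = LoraConfig(r=16, lora_alpha=32, target_modules="all-linear",
                      task_type="CAUSAL_LM")

dataset = load_dataset("json", data_files="train.jsonl", split="train")  # your data

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_cfg,
    args=SFTConfig(output_dir="qlora-out", per_device_train_batch_size=1,
                   gradient_accumulation_steps=8, bf16=True),
)
trainer.train()
```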
What are the main benefits of GH200 Grace-Hopper and GB200 Grace-Blackwell?
They have enough memory to run, tune and train the biggest LLMs currently available.
Their performance in every regard is almost unreal (up to 8520 times faster than x86).
There are no alternative systems with the same amount of memory.
Ideal for AI, especially inferencing, fine-tuning and training of LLMs.
Multiple NV-linked GH200 or GB200 act as a single giant GPU.
Optimized for memory-intensive AI and HPC performance.
Ideal for HPC applications, e.g. vector databases.
Easily customizable, upgradable and repairable.
Privacy and independence from cloud providers.
Cheaper and much faster than cloud providers.
They can be very quiet (with liquid-liquid CDU).
Reliable and energy-efficient liquid cooling.
Flexibility and the possibility of offline use.
Gigantic amounts of coherent memory.
They are very power-efficient.
The lowest possible latency.
They are beautiful.
CUDA enabled.
Run Linux.
GB200 Blackwell
The Nvidia GB200 Grace-Blackwell superchip has truly amazing specs to show off. GPTrack.ai ready-to-use rack server systems with multiple NV-linked Nvidia GB200 Grace-Blackwell (up to 72) are available now. GB300 Blackwell Ultra will be available in Q4 2025. Be one of the first in the world to get a GB200 or GB300 rack system. Order now!
What is the difference from alternative systems?
The main difference between GH200/GB200 and alternative systems is that with GH200/GB200, the GPU is connected to the CPU via a 900 GB/s NVLink vs. 128 GB/s PCIe gen5 used by traditional systems. Furthermore, multiple superchips can be connected via 900/1800 GB/s NVLink vs. orders of magnitude slower network connections used by traditional systems. Since these are the main bottlenecks, GH200/GB200's high-speed connections directly translate to much higher performance compared to traditional architectures. Also, multiple NV-linked GH200 or GB200 act as a single giant GPU (CPU-GPU superchip) with one single giant coherent memory pool.
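A quick back-of-the-envelope calculation shows why this matters. Using the bandwidth figures above, the sketch below estimates how long it takes just to move the weights of a large model between CPU and GPU memory; the 335GB weight size is an illustrative assumption, roughly Deepseek R1 671B at 4-bit.

```python
# Time to move a model's weights between CPU and GPU memory over
# NVLink-C2C vs. PCIe gen5, using the bandwidths quoted above.

def transfer_seconds(size_gb: float, bandwidth_gb_per_s: float) -> float:
    return size_gb / bandwidth_gb_per_s

weights_gb = 335  # illustrative: ~Deepseek R1 671B at 4-bit
print(f"NVLink-C2C (900 GB/s): {transfer_seconds(weights_gb, 900):.2f} s")
print(f"PCIe gen5 (128 GB/s):  {transfer_seconds(weights_gb, 128):.2f} s")
```

That is roughly a 7x difference, before NVLink's latency and cache-coherency advantages are even counted.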
What is the difference from competitors' server systems?
Size: We focus on systems no bigger than a single rack. With GB200, that gives you more than an exaflop of compute. If that is really not enough for you, we are happy to make you a custom offer. But for many people, one complete rack is more than needed and too expensive. That is why we also offer smaller systems with only 1, 2, 4, 8, 16 or 18 superchips (NVL2, NVL4, NVL8, NVL16 and NVL36). We are, to our knowledge, the only ones in the world where you can get systems smaller than a complete GB200 NVL72 rack.
In-rack CDU: Our rack server systems come standard with liquid cooling and a CDU integrated directly into the rack. You can choose between an air-liquid and a liquid-liquid CDU.
Ready-to-use: In contrast to other vendors, our systems come fully integrated and ready to use. Everything that is needed is included and tested. All you need to do is plug your system in to run it.
Technical details of our GH200/GB200 rack server systems (base configuration)
Standard 19-inch or 21-inch OCP rack
Liquid-cooled
In-rack CDU (air-liquid or liquid-liquid)
Multiple Nvidia GH200 Grace-Hopper Superchips
Multiple Nvidia GB200 Grace-Blackwell Superchips
Multiple Nvidia GB300 Grace-Blackwell Ultra Superchips
Multiple 72-core Nvidia Grace CPUs
Multiple Nvidia Hopper H100 Tensor Core GPUs (on request)
Multiple Nvidia Hopper H200 Tensor Core GPUs (on request)
Multiple Nvidia Blackwell B100 Tensor Core GPUs
Multiple Nvidia Blackwell B300 Tensor Core GPUs
Up to 72x 480GB of LPDDR5X memory with error-correction code (ECC)
Up to 13.5TB of HBM3e memory
Up to 30TB of total fast-access memory
NVLink-C2C: 900 GB/s of bandwidth
GH200: programmable from 450W to 1000W TDP (CPU + GPU + memory)
GB200: programmable from 1200W to 2700W TDP (CPU + 2 GPUs + memory)
Up to 6x power shelves
Up to 72x PCIe gen5 M.2 slots on board
Up to 288x PCIe gen5 drives (NVMe)
Up to 108x FHFL PCIe gen5 x16 slots
3 years manufacturer's warranty
Up to 48U, 600 x 2616 x 1200 mm (23.6 x 103 x 47.2")
Up to 1500 kg (3300 lbs)
Optional components
NIC Nvidia Bluefield-3
NIC Nvidia ConnectX-7/8
NIC Intel 100Gb
Up to 72x 4TB M.2 SSD
Up to 288x 8TB E1.S SSD
Up to 288x 60TB 2.5" SSD
Storage controller
Raid controller
OS preinstalled
Anything possible on request
Need something different? We are happy to build custom systems to your liking.
Compute performance of one GH200
67 teraFLOPS FP64
1 petaFLOPS TF32
2 petaFLOPS FP16
4 petaFLOPS FP8
Compute performance of one GB200
90 teraFLOPS FP64
5 petaFLOPS TF32
10 petaFLOPS FP16
20 petaFLOPS FP8
40 petaFLOPS FP4
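As a rough illustration of what these numbers mean for LLM serving: a common rule of thumb is about 2 FLOPs per parameter per generated token for transformer decoding. The sketch below turns the FP4 figure above into a theoretical aggregate throughput ceiling; real decoding is usually memory-bandwidth-bound, so treat this as an upper bound for large-batch serving, not a single-stream prediction.

```python
# Theoretical ceiling on aggregate token throughput from compute alone,
# using the ~2 FLOPs per parameter per token rule of thumb.
# Real single-stream decoding is typically memory-bandwidth-bound.

def max_tokens_per_second(petaflops: float, params_billion: float) -> float:
    flops_per_second = petaflops * 1e15
    flops_per_token = 2 * params_billion * 1e9
    return flops_per_second / flops_per_token

# One GB200 at FP4 (40 petaFLOPS, from the list above) running Deepseek R1 671B:
print(f"{max_tokens_per_second(40, 671):,.0f} tokens/s (theoretical ceiling)")
```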
Benchmarks
GPU benchmark: https://github.com/mag-/gpu_benchmark
Phoronix has so far benchmarked the Grace CPU. More is coming soon:
https://www.phoronix.com/review/nvidia-gh200-gptshop-benchmark
https://www.phoronix.com/review/nvidia-gh200-amd-threadripper
https://www.phoronix.com/review/aarch64-64k-kernel-perf
https://www.phoronix.com/review/nvidia-gh200-compilers
https://www.phoronix.com/review/nvidia-grace-epyc-turin
White paper: Nvidia GH200 Grace-Hopper white paper
Download
Here you can find various downloads concerning our GH200, GB200 and Mi300 systems: operating systems, firmware, drivers, software, manuals, white papers, spec sheets and so on. Everything you need to run your system and more.
White papers
Nvidia GH200 Grace-Hopper white paper
Nvidia GB200 Grace-Blackwell white paper
Developing for Nvidia superchips
The ultimate guide to fine-tuning
Diffusion LLMs
Spec sheets
GH200 624GB: Spec sheet GH200 624GB.pdf
GH200 Giga 624GB: Spec sheet GH200 Giga 624GB.pdf
GH200 NVL2 1.2TB: Spec sheet GH200 NVL2 1.2TB.pdf
GH200 4-Node Liquid 2.5TB: Spec sheet GH200 4-Node Liquid 2.5TB.pdf
GB300 Blackwell Ultra NVL2 1TB: Spec sheet GB300 Blackwell Ultra NVL2 1TB.pdf
Mi300A 512GB: Spec sheet Mi300A 512GB.pdf
Mi325X Air 2TB: Spec sheet Mi325X Air 2TB.pdf
Mi325X Liquid 2TB: Spec sheet Mi325X Liquid 2TB.pdf
8x B200 Air 1.5TB: Spec sheet 8x B200 Air 1.5TB.pdf
GB200 NVL4 1.8TB: Spec sheet GB200 NVL4 1.8TB.pdf
GB200 NVL8 3.5TB: Spec sheet GB200 NVL8 3.5TB.pdf
GB200 NVL16 7TB: Spec sheet GB200 NVL16 7TB.pdf
GB200 NVL36 15TB: Spec sheet GB200 NVL36 15TB.pdf
GB200 NVL72 30TB: Spec sheet GB200 NVL72 30TB.pdf
Manuals
Official Nvidia GH200 Manual: https://docs.nvidia.com/grace/#grace-hopper
Official Nvidia Grace Manual: https://docs.nvidia.com/grace/#grace-cpu
Official Nvidia Grace getting started: https://docs.nvidia.com/grace/#getting-started-with-nvidia-grace
GH200 624GB: Manual GH200 624GB.pdf
GH200 Giga 624GB: Manual GH200 Giga 624GB.pdf
GH200 NVL2 1.2TB: Manual GH200 NVL2 1.2TB.pdf
Mi300A: Manual Mi300A.pdf
GB200 NVL: Manual GB200 NVL.pdf
Operating systems for Nvidia systems
Ubuntu Server for ARM: https://cdimage.ubuntu.com/releases/24.04/release/ubuntu-24.04.2-live-server-arm64+largemem.iso
Using the newest Nvidia 64k kernel is highly recommended: https://packages.ubuntu.com/search?keywords=linux-nvidia-64k-hwe
Operating systems for AMD systems
Ubuntu Server for x86: https://ubuntu.com/download/server
Any other x86 Linux distribution with kernel > 6.8 should work just fine. Using the newest kernel is highly recommended.
Drivers
Nvidia GH200 drivers: https://www.nvidia.com/Download/index.aspx?lang=en-us
Select product type "data center", product series "HGX-Series" and operating system "Linux aarch64".
Aspeed drivers: https://aspeedtech.com/support_driver/
Nvidia Bluefield-3 drivers: https://developer.nvidia.com/networking/doca#downloads
Nvidia ConnectX-7 drivers: https://network.nvidia.com/products/ethernet-drivers/linux/mlnx_en/
Intel E810-CQDA2 drivers: https://www.intel.com/content/www/us/en/download/19630/intel-network-adapter-driver-for-e810-series-devices-under-linux.html?wapkw=E810-CQDA2
Graid SupremeRAID SR-1010 drivers: https://docs.graidtech.com/#linux-driver
Firmware
GH200 BMC: GH200 BMC.zip
GH200 BIOS: GH200 BIOS.zip
Nvidia Bluefield-3 firmware: https://network.nvidia.com/support/firmware/bluefield3/
Nvidia ConnectX-7 firmware: https://network.nvidia.com/support/firmware/connectx7/
Intel E810-CQDA2 firmware: https://www.intel.com/content/www/us/en/search.html?ws=idsa-default#q=E810-CQDA2
Top open source LLMs
Nvidia Llama Nemotron Super 49B and Ultra 253B: https://www.nvidia.com/en-us/ai-data-science/foundation-models/llama-nemotron/
Deepseek R1 671B: https://huggingface.co/deepseek-ai/DeepSeek-R1
QwQ-32B: https://qwenlm.github.io/blog/qwq-32b/
Llama 3.1, 3.2 and 3.3: https://www.llama.com/
Mistral Large 2 123B: https://huggingface.co/mistralai/Mistral-Large-Instruct-2407
Pixtral Large 123B: https://mistral.ai/news/pixtral-large/
Llama-3.2 Vision 90B: https://huggingface.co/meta-llama/Llama-3.2-90B-Vision
Llama-3.1 405B: https://huggingface.co/meta-llama/Llama-3.1-405B
Deepseek V3 671B: https://huggingface.co/deepseek-ai/DeepSeek-V3
MiniMax-01 456B: https://www.minimaxi.com/en/news/minimax-01-series-2
Tülu 3 405B: https://allenai.org/tulu
Qwen2.5 VL 72B: https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct
Aya Vision: https://cohere.com/blog/aya-vision
Gemma-3 27B: https://blog.google/technology/developers/gemma-3/
Mistral Small 3.1 24B: https://mistral.ai/news/mistral-small-3-1
EXAONE Deep 32B: https://github.com/LG-AI-EXAONE/EXAONE-Deep
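To pull any of these models onto your system, here is a minimal sketch using the huggingface_hub library. The target directory is a placeholder, and gated models additionally require a Hugging Face access token.

```python
# Download a model listed above from Hugging Face for local use.
from huggingface_hub import snapshot_download

path = snapshot_download(
    repo_id="deepseek-ai/DeepSeek-R1",  # any repo id from the list above
    local_dir="/models/deepseek-r1",    # placeholder path; adjust to your storage
)
print(f"Model files in {path}")
```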
Software
Nvidia Dynamo: https://www.nvidia.com/en-us/ai/dynamo/
Nvidia Github: https://github.com/NVIDIA
Nvidia CUDA: https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=arm64-sbsa
Nvidia Container-toolkit: https://github.com/NVIDIA/nvidia-container-toolkit
Nvidia Tensorflow: https://github.com/NVIDIA/tensorflow
Nvidia Pytorch: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch
AMD ROCm: https://www.amd.com/en/products/software/rocm.html
Keras: https://keras.io/
Apache OpenNLP: https://opennlp.apache.org/
Nvidia NIM models: https://build.nvidia.com/explore/discover
Nvidia Triton inference server: https://www.nvidia.com/de-de/ai-data-science/products/triton-inference-server/
Nvidia NeMo Customizer: https://developer.nvidia.com/blog/fine-tune-and-align-llms-easily-with-nvidia-nemo-customizer/
Huggingface open-source LLMs: https://huggingface.co/models
Huggingface text generation inference: https://github.com/huggingface/text-generation-inference
vLLM - inference and serving engine: https://github.com/vllm-project/vllm (see the client sketch after this list)
vLLM docker image: https://hub.docker.com/r/drikster80/vllm-gh200-openai
Ollama - run LLMs locally: https://ollama.com/
Open WebUI: https://openwebui.com/
ComfyUI: https://www.comfy.org/
LM Studio: https://lmstudio.ai/
Llamafile: https://github.com/Mozilla-Ocho/llamafile
Fine-tune Llama 3 with PyTorch FSDP and Q-Lora: https://www.philschmid.de/fsdp-qlora-llama3/
Perplexica: https://github.com/ItzCrazyKns/Perplexica
Morphic: https://github.com/miurla/morphic
Open-Sora: https://github.com/hpcaitech/Open-Sora
Flux.1: https://github.com/black-forest-labs/flux
Storm: https://github.com/stanford-oval/storm
Stable Diffusion 3.5: https://huggingface.co/stabilityai/stable-diffusion-3.5-large
Genmo Mochi1: https://github.com/genmoai/models
Genmo Mochi1 (reduced VRAM): https://github.com/victorchall/genmoai-smol
Rhymes AI Allegro: https://github.com/rhymes-ai/Allegro
OmniGen: https://github.com/VectorSpaceLab/OmniGen
Segment anything: https://github.com/facebookresearch/segment-anything
AutoVFX: https://haoyuhsu.github.io/autovfx-website/
DimensionX: https://chenshuo20.github.io/DimensionX/
Nvidia Add-it: https://research.nvidia.com/labs/par/addit/
MagicQuill: https://magicquill.art/demo/
AnythingLLM: https://github.com/Mintplex-Labs/anything-llm
Pyramid-Flow: https://pyramid-flow.github.io/
LTX-Video: https://github.com/Lightricks/LTX-Video
CogVideoX: https://github.com/THUDM/CogVideo
OmniControl: https://github.com/Yuanshi9815/OminiControl
Samurai: https://yangchris11.github.io/samurai/
All Hands: https://www.all-hands.dev/
Tencent HunyuanVideo: https://aivideo.hunyuan.tencent.com/
Aider: https://aider.chat/
Unsloth: https://github.com/unslothai/unsloth
Axolotl: https://github.com/axolotl-ai-cloud/axolotl
Star: https://nju-pcalab.github.io/projects/STAR/
Sana: https://nvlabs.github.io/Sana/
RepVideo: https://vchitect.github.io/RepVid-Webpage/
UI-TARS: https://github.com/bytedance/UI-TARS
DiffuEraser: https://lixiaowen-xw.github.io/DiffuEraser-page/
Go-with-the-Flow: https://eyeline-research.github.io/Go-with-the-Flow/
3DTrajMaster: https://fuxiao0719.github.io/projects/3dtrajmaster/
YuE: https://map-yue.github.io/
DynVFX: https://dynvfx.github.io/
ReasonerAgent: https://reasoner-agent.maitrix.org/
Open-source DeepResearch: https://huggingface.co/blog/open-deep-research
Deepscaler: https://github.com/agentica-project/deepscaler
InspireMusic: https://funaudiollm.github.io/inspiremusic/
FlashVideo: https://github.com/FoundationVision/FlashVideo
MatAnyone: https://pq-yang.github.io/projects/MatAnyone/
LocalAI: https://localai.io/
Stepvideo: https://huggingface.co/stepfun-ai/stepvideo-t2v
SkyReels: https://github.com/SkyworkAI/SkyReels-V1
OctoTools: https://octotools.github.io/
SynCD: https://www.cs.cmu.edu/~syncd-project/
Mobius: https://mobius-diffusion.github.io/
Wan 2.1: https://github.com/Wan-Video/Wan2.1
TheoremExplainAgent: https://tiger-ai-lab.github.io/TheoremExplainAgent/
RIFLEx: https://riflex-video.github.io/
Browser use: https://browser-use.com/
HunyuanVideo-I2V: https://github.com/Tencent/HunyuanVideo-I2V
Spark-TTS: https://sparkaudio.github.io/spark-tts/
GEN3C: https://research.nvidia.com/labs/toronto-ai/GEN3C/
DiffRhythm: https://aslp-lab.github.io/DiffRhythm.github.io/
Babel: https://babel-llm.github.io/babel-llm/
Diffusion Self-Distillation: https://primecai.github.io/dsd/
OWL: https://github.com/camel-ai/owl
ANUS: https://github.com/nikmcfly/ANUS
Long Context Tuning for Video Generation: https://guoyww.github.io/projects/long-context-video/
Tight Inversion: https://tight-inversion.github.io/
VACE: https://ali-vilab.github.io/VACE-Page/
SANA-Sprint: https://nvlabs.github.io/Sana/Sprint/
Sesame Conversational Speech Model: https://github.com/SesameAILabs/csm
Search-R1: https://github.com/PeterGriffinJin/Search-R1
AI Scientist: https://github.com/SakanaAI/AI-Scientist
SpatialLM: https://manycore-research.github.io/SpatialLM/
Nvidia Cosmos: https://www.nvidia.com/en-us/ai/cosmos/
AudioX: https://zeyuet.github.io/AudioX/
AccVideo: https://aejion.github.io/accvideo/
Video-T1: https://liuff19.github.io/Video-T1/
InfiniteYou: https://bytedance.github.io/InfiniteYou/
BizGen: https://bizgen-msra.github.io/
ParetoQ: https://github.com/facebookresearch/ParetoQ
DAPO: https://dapo-sia.github.io/
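Several of the serving tools listed above (vLLM, Ollama, Llamafile, LM Studio) expose an OpenAI-compatible HTTP API, so one small client works for all of them. Here is a minimal sketch, assuming a vLLM server running locally on its default port 8000 and serving the model named below; adjust base_url and model to your deployment.

```python
# Query a locally served model through an OpenAI-compatible endpoint.
# Host, port and model name are assumptions matching vLLM's defaults.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1",  # whatever model the server was started with
    messages=[{"role": "user", "content": "Summarize NVLink-C2C in one sentence."}],
)
print(response.choices[0].message.content)
```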
Benchmarking
GPU benchmark: https://github.com/mag-/gpu_benchmark
Ollama benchmark: https://llm.aidatatools.com/results-linux.php
Phoronix test suite: https://www.phoronix-test-suite.com/
MLCommons: https://mlcommons.org/benchmarks/
Artificial Analysis: https://artificialanalysis.ai/
Lmarena: https://lmarena.ai/
Livebench: https://livebench.ai/
Contact

Email: x@GPTrack.ai
GPT LLC
Fifth Floor, Zephyr House, 122 Mary Street
George Town, P.O. Box 31493
Grand Cayman KY1-1206
Cayman Islands
Company register number: HM-7509
European branch:
GPT LLC
Sachsenhof 1
96106 Ebern
Germany
We accept almost all currencies. Payment is possible via wire transfer or cash.
Try
Try before you buy. You can apply for remote testing of a GH200, GB200 or Mi300 system. After approval, you will be given login credentials for remote access. If you want to come by, see it for yourself and run some tests, that is also possible at any time.
Currently available for testing:
GH200 624GB
GH200 Giga 624GB
GH200 2x576GB
Mi300X 1.5TB
GB200 NVL72 30TB
Apply via email: x@GPTrack.ai