Dedicated GPU VM with RTX A2000 12GB
Virtual server

A single-tenant GPU virtual machine for AI inference, ML experiments, CUDA workloads, containers, and private GPU-accelerated services. You get root access and manage the software stack.

€220.00

Electronic cloud service delivery

Services are provisioned electronically after payment and onboarding. There is no physical shipping charge.

Agents can read product data and request quotes through the public API. Commit actions such as payment or order placement require human confirmation.
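The agent flow described above could be sketched as follows. This is a minimal, illustrative example: the product identifier, field names, and payload shape are assumptions for the sketch, not CLIopen's documented API. It only builds a read-style quote request; an agent would still hand off payment or order placement for human confirmation.

```python
import json

# Assumed product identifier for this sketch; not a documented CLIopen ID.
PRODUCT_ID = "gpu-vm-rtx-a2000-12gb"

def build_quote_request(product_id: str, vcpus: int, ram_gb: int, disk_gb: int) -> str:
    """Build a JSON body for a quote request (a read/request action).

    Commit actions such as payment or order placement are deliberately
    excluded: per the product terms, they require human confirmation.
    """
    payload = {
        "product": product_id,
        "config": {"vcpus": vcpus, "ram_gb": ram_gb, "disk_gb": disk_gb},
        "action": "quote",            # never "order" or "pay" from an agent
        "requires_human_commit": True,
    }
    return json.dumps(payload)

# Example: quote an 8 vCPU / 32 GB RAM / 200 GB disk configuration.
body = build_quote_request(PRODUCT_ID, vcpus=8, ram_gb=32, disk_gb=200)
```

The resulting JSON body would then be submitted to the public API's quote endpoint; only the confirmed, human-approved step would place an order.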

EU infrastructure with global access

Customer services run primarily on EU/EEA infrastructure. Customers outside Europe, including the US, can order services but should expect cross-region latency. Availability excludes sanctioned or prohibited jurisdictions.

Credit and refunds

Prepaid credit can be used for eligible future services. Payments are non-refundable except where mandatory law applies or CLIopen expressly agrees in writing.

Dedicated RTX A2000 12GB GPU with root access

Why buy it

This product is a dedicated GPU server, not a managed AI application. CLIopen provisions the VM, host infrastructure, network reachability, and the assigned RTX A2000 12GB GPU. The customer receives root access and is responsible for the operating system configuration, CUDA stack, containers, AI frameworks, model downloads, application code, prompts, updates, and security inside the VM.

What you get

  • Full RTX A2000 12GB GPU dedicated to a single customer VM.
  • Root access for CUDA, Docker, AI frameworks, and custom workloads, handed off after provisioning.
  • vCPU, RAM, encrypted 3x-replicated NVMe disk, public IPs, backup retention, and archive retention, all configurable before checkout.
  • Ubuntu 24.04 LTS as the supported default operating system image.
  • Support covering VM availability, network reachability, and host infrastructure, not customer-installed AI software.

Customer responsibility

  • CUDA, Docker, NVIDIA runtime, PyTorch, Ollama, vLLM, llama.cpp, and other software inside the VM.
  • Model downloads, licenses, datasets, prompts, application code, user accounts, firewall configuration, updates, and security inside the guest OS.
  • Debugging customer-installed AI applications, unless ordered separately as a paid professional service.

Good fits

  • Private AI inference and model-serving experiments.
  • GPU-accelerated containers, ML development, document processing, embeddings, and automation labs.
  • Teams that want root access and accept responsibility for the software stack.

Configure a dedicated GPU VM

Choose the hostname, vCPU count, RAM, encrypted NVMe disk, public IPs, backup retention, archive retention, and SSH access. The full RTX A2000 12GB GPU is dedicated to your VM.

Configure GPU VM