Verda (formerly DataCrunch) is positioning as a horizontal AI infrastructure play, building foundational capabilities around vertically integrated GPU cloud infrastructure.
As agentic architectures emerge as the dominant build pattern, Verda (formerly DataCrunch) is positioned to benefit from enterprise demand for autonomous workflow solutions. The timing aligns with broader market readiness for AI systems that can execute multi-step tasks without human intervention.
Verda is building a new kind of hyperscaler, with AI at the core
Vertical integration plus deep NVIDIA partnership and an in-house AI lab that turns frontier research into platform-level optimizations and customer implementations (early Blackwell Ultra hardware deployments, low-precision inference optimizations, and confidential compute work).
No mentions of graphs, entity linking, or graph DBs. The content focuses on infrastructure, hardware, and model tooling rather than knowledge-graph style architectures.
No references to NL→code interfaces or auto-generation of software from plain-language prompts. The infra supports Terraform/OpenTofu and CLI, but not NL-driven code generation.
There is a strong emphasis on security, compliance, and confidential compute (secure enclaves) which implies policy and infrastructure-level guardrails. However, there is no explicit mention of secondary LLMs used to check or filter outputs (i.e., safety/compliance models layered on top of primary models).
Accelerates AI deployment in compliance-heavy industries. Creates new category of AI safety tooling.
Explicit signals of MoE and multi-model inference: specialized tooling (SGLang) and orchestration for very large models indicate a multi-model architecture supporting MoE inference, routed or partitioned large-model deployments, and optimized per-model runtimes across clusters. Router-like concepts are implied by SGLang and the model-specific optimization and precision work.
Cost-effective AI deployment for mid-market. Creates opportunity for specialized model providers.
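The routing layer such a multi-model architecture implies can be sketched in a few lines. This is a generic top-k mixture-of-experts gate for illustration only; the function names are hypothetical and do not reflect Verda's or SGLang's actual APIs.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of gate scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def route_top_k(gate_scores, k=2):
    """Standard MoE-style routing: pick the top-k experts by gate score
    and renormalize their weights so they sum to 1."""
    probs = softmax(gate_scores)
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    mass = sum(probs[i] for i in ranked)
    return [(i, probs[i] / mass) for i in ranked]
```

In a production router the gate scores come from a learned gating network and the selected "experts" may be separate model replicas on different nodes, which is where cluster-level orchestration enters the picture.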
Verda (formerly DataCrunch) builds on 1X World Model (1XWM), SGLang, and NVIDIA GB300 NVL72. The technical approach emphasizes vertically integrated GPU infrastructure, low-precision inference optimization, and confidential compute.
Launched a self-serve GPU cloud from a garage in Helsinki in 2020; led seed funding and growth to Europe and US markets; central figure in founding DataCrunch/Verda
Joined Ruben in founding DataCrunch/Verda; public background details not disclosed in provided material
Joined Ruben in founding DataCrunch/Verda; public background details not disclosed in provided material
Founders bring hands-on experience building a GPU cloud and in-house AI research capabilities, complemented by strong fundraising and enterprise credibility (NVIDIA Preferred Partner, European operations, ExpressVPN confidential compute work). This aligns well with Verda's product focus on end-to-end AI cloud infrastructure, though public detail on individual leadership roles is limited.
developer-first
Target: enterprise
usage-based
hybrid
• ExpressVPN case study (Confidential Compute on NVIDIA Blackwell for secure AI workloads).
• Customer quotes and success stories (various executives praising reliability, speed, and collaboration).
• In-house AI lab collaborations and public showcases (1X World Model, SGLang, challenges).
Hosting AI/ML workloads on dedicated GPU infrastructure, with managed services spanning provisioning to inference
Verda (formerly DataCrunch) operates in a competitive landscape that includes Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure.
Differentiation: Verda is Europe-focused with data sovereignty and GDPR/ISO compliance, claims up to ~90% lower GPU access costs vs hyperscalers, offers hands-on engineering collaboration, early access to NVIDIA Blackwell Ultra in Europe and a vertically integrated stack optimized specifically for ML workloads.
Differentiation: Verda emphasizes full-stack control (datacenter → hardware → platform), Nordic renewable-hosted data centers, confidential compute work (secure enclaves) with customers like ExpressVPN, and closer ML engineering support and bespoke optimization for cutting-edge NVIDIA hardware.
Differentiation: Verda positions as a specialist AI cloud with focused ML-first tooling (Instant Clusters, NVLink/SXM deployments), earlier Blackwell deployments in Europe, and developer-first integrations (Terraform/OpenTofu, native SDKs) combined with local sovereignty and sustainability messaging.
Early, production-grade deployment of NVIDIA Blackwell Ultra (GB300 NVL72 and HGX B300) with explicit support for virtualization on GB300: Verda appears to run vGPU-style or partitioned Blackwell instances at scale. This is unusual because GB300 is very recent hardware and virtualization support/firmware-level integration is non-trivial — they are offering it as a customer-facing capability rather than waiting for third-party layers.
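A toy model of the slicing such virtualization involves, assuming MIG-style fixed memory slices (the seven-slice default mirrors common NVIDIA MIG profiles; nothing here reflects Verda's actual scheduler or GB300 firmware):

```python
class GpuPartitioner:
    """Toy model of MIG/vGPU-style slicing: a physical GPU exposes a fixed
    number of memory slices, and tenant instances claim capacity from it."""

    def __init__(self, total_slices=7):  # 7 slices as on many NVIDIA MIG profiles
        self.total = total_slices
        self.free = total_slices
        self.instances = {}

    def allocate(self, name, slices):
        """Claim `slices` units for a named instance, or fail if oversubscribed."""
        if slices > self.free:
            raise RuntimeError(f"not enough free slices for {name}")
        self.instances[name] = slices
        self.free -= slices
        return name

    def release(self, name):
        """Return an instance's slices to the free pool."""
        self.free += self.instances.pop(name)
```

The hard part on real hardware is everything this sketch elides: firmware support, isolation guarantees between slices, and scheduler integration, which is why shipping it on brand-new silicon is notable.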
Confidential computing for LLMs on GPU hardware (ExpressVPN case): Verda collaborated to enable secure enclaves on Blackwell-era accelerators. Confidential GPU compute—end-to-end attestation, enclave lifecycle and framework integration for model inference—is still niche and technically hard to ship; Verda highlights having done it in production.
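The attestation handshake at the heart of confidential compute can be sketched generically: the verifier issues a fresh nonce, and the enclave returns a quote binding its code measurement to that nonce. Real GPU attestation uses hardware-rooted keys and NVIDIA's attestation flow; this illustration substitutes a shared-key HMAC purely to show the shape of the protocol.

```python
import hashlib
import hmac

# Shared key stands in for a hardware-rooted attestation key (illustration only).
ATTESTATION_KEY = b"demo-key"

def make_quote(measurement: bytes, nonce: bytes) -> bytes:
    """Enclave side: bind the code measurement to the verifier's fresh nonce,
    so a captured quote cannot be replayed against a different challenge."""
    return hmac.new(ATTESTATION_KEY, measurement + nonce, hashlib.sha256).digest()

def verify_quote(quote: bytes, expected_measurement: bytes, nonce: bytes) -> bool:
    """Verifier side: recompute the quote over a known-good measurement and
    compare in constant time. Accept only if both measurement and nonce match."""
    expected = hmac.new(ATTESTATION_KEY, expected_measurement + nonce,
                        hashlib.sha256).digest()
    return hmac.compare_digest(quote, expected)
```

Only after a quote verifies would the client release model weights or prompts to the enclave; the enclave lifecycle and framework integration mentioned above wrap around this exchange.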
Aggressive low-precision stack work (NVFP4 / FP8 / INT4): Their in‑house AI lab sponsors and publishes work on RL training with FP8/INT4 and NVFP4 inference. That implies bespoke kernels, optimizer changes, dynamic loss scaling and validation strategies to keep training stable at ultra-low precision — moving beyond simple post-training quantization.
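The primitive underneath all of this is low-bit quantization. A minimal sketch of symmetric per-tensor INT4 quantization follows; the stabilization machinery the paragraph describes (bespoke kernels, optimizer changes, dynamic loss scaling) sits on top of this and is not shown.

```python
def quantize_int4(values):
    """Symmetric per-tensor INT4 quantization: choose a scale so the largest
    magnitude maps to 7 (INT4 range is [-8, 7]), then round-to-nearest."""
    amax = max(abs(v) for v in values) or 1.0
    scale = amax / 7.0
    q = [max(-8, min(7, round(v / scale))) for v in values]
    return q, scale

def dequantize_int4(q, scale):
    """Recover approximate real values; error per element is at most scale/2."""
    return [v * scale for v in q]
```

Formats like FP8 and NVFP4 refine this idea with floating-point codes and block-wise scales, but the round-trip structure (scale, round, clip, rescale) is the same.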
Heterogeneous 'agentic-first' architecture: mentions pairing an Arm-based CPU with NVIDIA GB300 and "Vera Rubin" for agentic workloads. This signals a CPU+GPU co-design aimed at agent control loops (low-latency planning on specialized CPU cores plus heavy model execution on GPUs), which is not offered by most GPU cloud vendors as a first-class product.
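The division of labor such a co-design targets is the agent control loop: cheap planning logic interleaved with expensive accelerator-bound model calls. A minimal sketch with hypothetical names (`model_call` stands in for a GPU-served inference request):

```python
def plan_next_step(state):
    """Lightweight control logic of the kind that would run on CPU cores:
    decide whether another model call is needed."""
    return "call_model" if state["remaining"] > 0 else "done"

def run_agent(task_steps, model_call):
    """Agent loop: fast planning decisions between slow, GPU-bound calls.
    Latency of the planning side gates how tightly calls can be packed."""
    state = {"remaining": task_steps, "outputs": []}
    while plan_next_step(state) == "call_model":
        state["outputs"].append(model_call(state))
        state["remaining"] -= 1
    return state["outputs"]
```

The co-design argument is that when `plan_next_step` runs on low-latency CPU cores tightly coupled to the GPUs, the loop spends its time on model execution rather than on orchestration overhead.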
Vertically-integrated stack + regional sovereign data centers: full ownership from data center to platform with ISO certifications and GDPR positioning. This gives them the ability to tune firmware, networking, and scheduling in ways public hyperscalers cannot in regulated European markets.
If Verda (formerly DataCrunch) achieves its technical roadmap, it could become foundational infrastructure for the next generation of AI applications. Success here would accelerate the timeline for downstream companies to build reliable, production-grade AI products. Failure or pivot would signal continued fragmentation in the AI tooling landscape.
“In-house AI Lab Turning frontier research into customer wins and platform capabilities”
“1X World Model Verda collaborates with 1X on building multi-GPU inference for the 1XWM generative video model”
“Powered by AI research Our internal AI lab turns cutting-edge research into customer wins and platform capabilities”
“The Verda Cloud Platform Powering the entire AI model lifecycle - at any scale”
“Inference Serverless inference API for image and audio models”
“NVIDIA GB300 NVL72 and NVIDIA Blackwell Ultra platforms mentioned as hardware foundations”