
Verda (formerly DataCrunch)

Horizontal AI
4 risks

Verda (formerly DataCrunch) is positioning as a horizontal AI infrastructure play, building a vertically integrated, full-stack AI cloud.

verda.com
GenAI: core • Helsinki, Finland
$116.9M raised
21KB analyzed • 12 quotes • Updated Apr 30, 2026
Why This Matters Now

As agentic architectures emerge as the dominant build pattern, Verda (formerly DataCrunch) is positioned to benefit from enterprise demand for autonomous workflow solutions. The timing aligns with broader market readiness for AI systems that can execute multi-step tasks without human intervention.

Verda is building a new kind of hyperscaler, with AI at the core

Core Advantage

Vertical integration plus deep NVIDIA partnership and an in-house AI lab that turns frontier research into platform-level optimizations and customer implementations (early Blackwell Ultra hardware deployments, low-precision inference optimizations, and confidential compute work).

Build Signals

Knowledge Graphs

emerging

No mentions of graphs, entity linking, or graph DBs. The content focuses on infrastructure, hardware, and model tooling rather than knowledge-graph style architectures.

What This Enables

Emerging pattern with potential to unlock new application categories.

Time Horizon: 12-24 months
Primary Risk: Limited data on long-term viability in this context.

Natural-Language-to-Code

emerging

No references to NL→code interfaces or auto-generation of software from plain-language prompts. The infra supports Terraform/OpenTofu and CLI, but not NL-driven code generation.

What This Enables

Emerging pattern with potential to unlock new application categories.

Time Horizon: 12-24 months
Primary Risk: Limited data on long-term viability in this context.

Guardrail-as-LLM

4 quotes
emerging

There is a strong emphasis on security, compliance, and confidential compute (secure enclaves), which implies policy- and infrastructure-level guardrails. However, there is no explicit mention of secondary LLMs used to check or filter outputs (i.e., safety or compliance models layered on top of primary models).
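The layered pattern the analysis is probing for — a secondary checker screening a primary model's outputs before they reach the caller — can be sketched as follows. Both "models" here are hypothetical stand-ins, not Verda APIs; a real deployment would replace them with LLM calls.

```python
# Toy sketch of the guardrail-as-LLM pattern: a secondary checker screens
# the primary model's output. The functions and blocklist are illustrative
# stand-ins, not any real Verda or vendor API.

BLOCKLIST = {"ssn", "credit card"}  # illustrative policy terms

def primary_model(prompt: str) -> str:
    """Stand-in for the primary LLM."""
    return f"Answer to: {prompt}"

def guardrail_model(text: str) -> bool:
    """Stand-in for a secondary safety/compliance model.

    Returns True when the text passes the policy check.
    """
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKLIST)

def guarded_generate(prompt: str) -> str:
    draft = primary_model(prompt)
    if not guardrail_model(draft):
        return "[withheld by policy]"
    return draft
```

The trade-off noted under Primary Risk is visible even in this toy: every request pays for a second check, which is where the added latency and cost come from.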

What This Enables

Accelerates AI deployment in compliance-heavy industries. Creates new category of AI safety tooling.

Time Horizon: 0-12 months
Primary Risk: Adds latency and cost to inference. May become integrated into foundation model providers.

Micro-model Meshes

5 quotes
high

Explicit signals of MoE and multi-model inference, specialized tooling (SGLang), and orchestration for very large models indicate a multi-model architecture: MoE inference, routed or partitioned large-model deployments, and optimized per-model runtimes across clusters. Router-like concepts are implied by SGLang and by model-specific optimization and precision work.
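The router-like pattern implied above can be sketched as a cheap classification step dispatching to specialized backends. Backend names and the keyword classifier are illustrative only, not Verda's or SGLang's actual API; a production mesh would typically use a small router model instead of keywords.

```python
# Minimal sketch of a micro-model mesh router: requests are classified
# cheaply, then dispatched to a specialized model backend. All names are
# hypothetical; real backends would be served model endpoints.

def classify(prompt: str) -> str:
    """Cheap routing step; production meshes often use a small router model."""
    lowered = prompt.lower()
    if any(k in lowered for k in ("def ", "class ", "bug")):
        return "code"
    if any(k in lowered for k in ("image", "render")):
        return "vision"
    return "general"

BACKENDS = {
    "code": lambda p: f"[code-model] {p}",
    "vision": lambda p: f"[vision-model] {p}",
    "general": lambda p: f"[general-model] {p}",
}

def route(prompt: str) -> str:
    return BACKENDS[classify(prompt)](prompt)
```

The Primary Risk below maps directly onto this sketch: every routing rule, backend, and fallback is orchestration surface that a single larger model would not need.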

What This Enables

Cost-effective AI deployment for mid-market. Creates opportunity for specialized model providers.

Time Horizon: 12-24 months
Primary Risk: Orchestration complexity may outweigh benefits. Larger models may absorb capabilities.
Technical Foundation

Verda (formerly DataCrunch) builds on 1X World Model (1XWM), SGLang, and NVIDIA GB300 NVL72.

Team
Ruben • Founder/CEO (not explicitly stated in source) • high technical

Launched a self-serve GPU cloud from a garage in Helsinki in 2020; led seed funding and growth to Europe and US markets; central figure in founding DataCrunch/Verda

Milosz • Co-founder (title not specified) • medium technical

Joined Ruben in founding DataCrunch/Verda; public background details not disclosed in provided material

Tamir • Co-founder (title not specified) • medium technical

Joined Ruben in founding DataCrunch/Verda; public background details not disclosed in provided material

Founder-Market Fit

Founders bring hands-on experience building a GPU cloud and in-house AI research capabilities, complemented by strong fundraising and enterprise credibility (NVIDIA Preferred Partner, European operations, ExpressVPN confidential compute work). This aligns well with Verda's product focus on end-to-end AI cloud infrastructure, though public detail on individual leadership roles is limited.

  • Engineering-heavy
  • ML expertise
  • Domain expertise
  • Hiring: Visible Careers page
  • Hiring: 110+ team members noted in milestones
  • Hiring: Active growth with Series A funding and global expansion, implying ongoing hiring for ML/infra/ops roles
Considerations
  • Limited public information on individual founders' detailed backgrounds, roles, and prior companies
  • Ambiguity around current leadership structure (CTO/COO/etc.) beyond founder titles
  • Public visibility of team composition beyond overall headcount is limited
Business Model
Go-to-Market

developer first

Target: enterprise

Pricing

usage based

Enterprise focus
Sales Motion

hybrid

Distribution Advantages
  • NVIDIA Preferred Partner status enabling early access and credibility with hardware ecosystem.
  • Sovereign European service with GDPR/ISO 27001/27017/27018/27701 compliance for data protection and regulatory alignment.
  • Nordic data centers with 100% renewable energy for sustainable operations.
  • In-house AI lab and full-stack AI cloud offering, enabling rapid platform capability expansion and differentiation.
Customer Evidence

• ExpressVPN case study (Confidential Compute on NVIDIA Blackwell for secure AI workloads).

• Customer quotes and success stories (various executives praising reliability, speed, and collaboration).

• In-house AI lab collaborations and public showcases (1X World Model, SGLang, challenges).

Product
Stage: general availability
Differentiating Features
  • In-house AI Lab turning frontier research into customer-visible platform capabilities
  • Full-stack AI cloud with end-to-end lifecycle support (provisioning, training, inference, scaling)
  • Confidential Computing capabilities leveraging NVIDIA Blackwell for secure enclaves
  • European sovereign cloud with GDPR compliance and ISO certifications (27001/27017/27018/27701)
  • NVIDIA Preferred Partner with early access to latest hardware
Integrations
Terraform • OpenTofu • Verda CLI • Native SDK • Web console • API
Primary Use Case

Providing AI/ML workloads on dedicated GPU infrastructure with managed services from provisioning to inference

Competitive Context

Verda (formerly DataCrunch) operates in a competitive landscape that includes Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure.

Amazon Web Services (AWS)

Differentiation: Verda is Europe-focused with data sovereignty and GDPR/ISO compliance, claims up to ~90% lower GPU access costs vs hyperscalers, offers hands-on engineering collaboration, early access to NVIDIA Blackwell Ultra in Europe and a vertically integrated stack optimized specifically for ML workloads.

Google Cloud Platform (GCP)

Differentiation: Verda emphasizes full-stack control (datacenter → hardware → platform), Nordic renewable-hosted data centers, confidential compute work (secure enclaves) with customers like ExpressVPN, and closer ML engineering support and bespoke optimization for cutting-edge NVIDIA hardware.

Microsoft Azure

Differentiation: Verda positions as a specialist AI cloud with focused ML-first tooling (Instant Clusters, NVLink/SXM deployments), earlier Blackwell deployments in Europe, and developer-first integrations (Terraform/OpenTofu, native SDKs) combined with local sovereignty and sustainability messaging.

Notable Findings

Early, production-grade deployment of NVIDIA Blackwell Ultra (GB300 NVL72 and HGX B300) with explicit support for virtualization on GB300: Verda appears to run vGPU-style or partitioned Blackwell instances at scale. This is unusual because GB300 is very recent hardware and virtualization support/firmware-level integration is non-trivial — they are offering it as a customer-facing capability rather than waiting for third-party layers.

Confidential computing for LLMs on GPU hardware (ExpressVPN case): Verda collaborated to enable secure enclaves on Blackwell-era accelerators. Confidential GPU compute—end-to-end attestation, enclave lifecycle and framework integration for model inference—is still niche and technically hard to ship; Verda highlights having done it in production.

Aggressive low-precision stack work (NVFP4 / FP8 / INT4): Their in-house AI lab sponsors and publishes work on RL training with FP8/INT4 and NVFP4 inference. That implies bespoke kernels, optimizer changes, dynamic loss scaling and validation strategies to keep training stable at ultra-low precision, moving beyond simple post-training quantization.
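For reference, the "simple post-training quantization" baseline that this work reportedly moves beyond can be sketched as symmetric per-tensor INT4 rounding; stable FP8/INT4 training adds the bespoke kernels and loss-scaling machinery on top of something like this.

```python
# Baseline symmetric INT4 post-training quantization (the simple scheme the
# analysis contrasts against). Values map to integers in [-7, 7] with one
# per-tensor scale, then dequantize back to floats for use.

def quantize_int4(values: list[float]) -> tuple[list[int], float]:
    # A zero tensor would yield scale 0, so fall back to 1.0 in that case.
    scale = max(abs(v) for v in values) / 7 or 1.0
    q = [max(-7, min(7, round(v / scale))) for v in values]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    return [v * scale for v in q]
```

Even this baseline shows why low precision is lossy: only 15 representable levels per tensor, which is exactly what the validation strategies mentioned above have to account for.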

Heterogeneous 'agentic-first' architecture: mentions pairing an Arm AGI CPU with NVIDIA GB300 and "Vera Rubin" for agentic workloads. This signals a CPU+GPU co-design aimed at agent control loops (low-latency planning on specialized CPU cores + heavy model execution on GPUs), which is not offered by most GPU cloud vendors as a first-class product.

Vertically-integrated stack + regional sovereign data centers: full ownership from data center to platform with ISO certifications and GDPR positioning. This gives them the ability to tune firmware, networking, and scheduling in ways public hyperscalers cannot in regulated European markets.

Risk Factors
Overclaiming: high severity
Undifferentiated: medium severity
No Clear Moat: medium severity
Feature, Not Product: low severity
What This Changes

If Verda (formerly DataCrunch) achieves its technical roadmap, it could become foundational infrastructure for the next generation of AI applications. Success here would accelerate the timeline for downstream companies to build reliable, production-grade AI products. Failure or pivot would signal continued fragmentation in the AI tooling landscape.

Source Evidence (12 quotes)
“In-house AI Lab Turning frontier research into customer wins and platform capabilities”
“1X World Model Verda collaborates with 1X on building multi-GPU inference for the 1XWM generative video model”
“Powered by AI research Our internal AI lab turns cutting-edge research into customer wins and platform capabilities”
“The Verda Cloud Platform Powering the entire AI model lifecycle - at any scale”
“Inference Serverless inference API for image and audio models”
“NVIDIA GB300 NVL72 and NVIDIA Blackwell Ultra platforms mentioned as hardware foundations”