SambaNova Systems Stock, Valuation, IPO, Careers & News

SambaNova Systems

QUICK INFO BOX

Attribute | Details
Company Name | SambaNova Systems, Inc.
Founders | Rodrigo Liang (CEO), Kunle Olukotun (Chief Technologist), Chris Ré (Chief Scientist)
Founded Year | 2017
Headquarters | Palo Alto, California, USA
Industry | Artificial Intelligence / Semiconductors
Sector | AI Infrastructure / Computer Hardware / Enterprise AI
Company Type | Private
Key Investors | SoftBank Vision Fund, BlackRock, Intel Capital, GV (Google Ventures), Walden International, Temasek
Funding Rounds | Seed, Series A, B, C, D, E
Total Funding Raised | $1.5+ Billion
Valuation | $5.1 Billion (February 2026)
Number of Employees | 900+ (February 2026)
Key Products / Services | DataScale Platform, SN40L AI Chip, Samba-1 Model, Cardinal SN10 Processor, Full-Stack AI Solutions
Technology Stack | Custom AI Chips (Reconfigurable Dataflow Architecture), Software Stack, Model Zoo, MLOps Platform
Revenue (Latest Year) | $180+ Million ARR (February 2026)
Customer Base | 100+ enterprises including Argonne National Laboratory, Lawrence Livermore, Saudi Aramco
Social Media | LinkedIn, Twitter

Introduction

The AI revolution faces a fundamental bottleneck: traditional computing architectures weren’t designed for artificial intelligence workloads. GPUs from NVIDIA, originally built for graphics rendering, have been adapted for AI training and inference—but they’re inefficient, power-hungry, and increasingly supply-constrained (NVIDIA H100 GPUs cost $30,000+ with 6-12 month lead times). Data centers running large language models consume megawatts of power, cost millions in infrastructure, and still struggle with latency for real-time AI applications. The industry needs purpose-built AI hardware optimized for the unique computational patterns of machine learning: matrix multiplications, attention mechanisms, transformer architectures.

Enter SambaNova Systems, the full-stack AI infrastructure company building custom chips, software platforms, and enterprise AI solutions from the ground up. Founded in 2017 by a Stanford dream team—Rodrigo Liang (CEO, former Oracle executive), Professor Kunle Olukotun (pioneering computer architect), and Professor Chris Ré (machine learning expert)—SambaNova has pioneered Reconfigurable Dataflow Architecture (RDA), a novel chip design that dynamically adapts to different AI workloads with superior performance and efficiency compared to GPUs.

As of February 2026, SambaNova operates at a $5.1 billion valuation with $1.5+ billion in funding from SoftBank Vision Fund, BlackRock, Intel Capital, GV (Google Ventures), and Temasek. The company serves 100+ enterprise customers (February 2026) including national laboratories (Argonne, Lawrence Livermore), energy giants (Saudi Aramco), financial institutions, and government agencies. SambaNova’s DataScale platform powers AI workloads ranging from drug discovery to oil exploration to national security applications, processing trillions of AI operations daily.

With annual recurring revenue (ARR) exceeding $180 million (February 2026) and 900+ employees, SambaNova has emerged as a credible challenger to NVIDIA’s AI chip dominance, offering enterprise customers an alternative that combines custom hardware, integrated software, and full-stack support. The company’s SN40L chip (launched 2024) delivers 3x better performance-per-watt than comparable GPUs for transformer models, addressing the power consumption crisis in AI data centers.

What makes SambaNova revolutionary:

  1. Reconfigurable Dataflow Architecture: Chips that dynamically reconfigure themselves for different AI model architectures (transformers, CNNs, RNNs) without manual optimization
  2. Full-stack integration: Hardware, software, models, and support in unified platform—eliminating integration headaches
  3. Enterprise focus: Purpose-built for production AI deployments (not research/experimentation), with SLAs, support, security
  4. Power efficiency: 3x better performance-per-watt reducing data center costs, carbon footprint
  5. Model flexibility: Supporting open-source models (Llama, Mistral) and custom enterprise models

The market opportunity is massive: AI infrastructure represents a $100+ billion market growing 40%+ annually, driven by generative AI adoption, LLM deployment, and enterprise AI transformation. The AI chip market alone is projected to reach $50 billion by 2027, with data centers spending billions on GPUs, TPUs, and custom accelerators. SambaNova competes with NVIDIA ($2T+ market cap, 90%+ GPU market share), Google TPU (Tensor Processing Units for Google Cloud), AWS Trainium/Inferentia (Amazon’s custom chips), Cerebras ($4B valuation, wafer-scale chips), and Groq ($2.8B valuation, LPU architecture).

SambaNova differentiates through a full-stack approach (not just chips but a complete platform), an enterprise deployment model (on-prem installations, not just cloud), and a reconfigurable architecture (adapting to evolving AI models without hardware redesigns). The company's focus on production AI (not just training) addresses the massive inference market, where deployed models serve billions of queries daily, requiring low latency, high throughput, and cost efficiency.

The founding story reflects Silicon Valley’s cutting edge: Stanford professors commercializing decades of computer architecture research, combined with enterprise software expertise, to build the AI infrastructure for the next computing era. This comprehensive article explores SambaNova’s journey from semiconductor startup to full-stack AI platform provider challenging NVIDIA’s dominance.


Founding Story & Background

The AI Hardware Problem

By 2017, deep learning had achieved breakthrough results in computer vision (ImageNet), natural language processing (machine translation), and game playing (AlphaGo defeating world champion). But the computational requirements were exploding: training state-of-the-art models required weeks of computation on hundreds of GPUs, costing hundreds of thousands of dollars. NVIDIA GPUs, designed for graphics rendering (parallel processing of pixels), had been repurposed for AI through frameworks like CUDA, TensorFlow, and PyTorch.

However, GPUs were fundamentally mismatched for AI workloads:

  • Fixed architecture: GPUs couldn’t adapt to different neural network architectures (CNNs, RNNs, transformers) without inefficiencies
  • Memory bottlenecks: Moving data between GPU memory and processing cores created latency, limiting performance
  • Power consumption: Training large models consumed megawatts, costing millions in electricity and cooling
  • Programming complexity: Optimizing models for GPU execution required specialized expertise

Kunle Olukotun, a Stanford professor and computer architecture pioneer, had spent decades researching dataflow computing—an alternative to traditional von Neumann architecture where computation adapts to data movement patterns rather than following fixed instruction sequences. Olukotun recognized that AI workloads (with their data-parallel operations, irregular memory access patterns, and dynamic computation graphs) were perfect candidates for dataflow architectures.

Chris Ré, a Stanford machine learning professor, brought complementary expertise: understanding how AI models are actually built, trained, and deployed in real-world applications. Ré had founded DeepDive (Stanford’s ML research project) and advised enterprises on AI adoption, witnessing the gap between research prototypes and production deployments.

Rodrigo Liang, with executive experience at Oracle and Sun Microsystems, recognized the commercial opportunity: enterprises needed AI infrastructure that “just worked”—not research tools requiring PhDs to operate. Liang’s vision combined custom hardware with integrated software, models, and support, creating a full-stack platform for production AI.

2017: Founding and Reconfigurable Dataflow Architecture

In 2017, Liang, Olukotun, and Ré founded SambaNova Systems in Palo Alto with a bold technical thesis: build AI chips using Reconfigurable Dataflow Architecture (RDA)—hardware that dynamically reconfigures itself to match the computational structure of AI models.

The RDA approach was revolutionary:

Traditional Processors (CPUs/GPUs):

  • Fixed instruction set architecture
  • Programs compiled to static instruction sequences
  • Data moves to processing units
  • Inefficiencies when workload doesn’t match hardware design

Reconfigurable Dataflow Architecture (RDA):

  • Hardware reconfigures to match computation graph
  • Data flows through custom-configured processing paths
  • Minimal data movement (compute near data)
  • Adapts to different model architectures (transformers, CNNs) automatically

The founding vision had three layers:

  1. Custom AI chips (SN series processors): Hardware optimized for tensor operations, attention mechanisms, embeddings
  2. Software stack: Compilers, runtime, model optimization tools abstracting hardware complexity
  3. DataScale platform: Full-stack AI infrastructure (hardware + software + models + MLOps)

The name “SambaNova” reflected the innovative spirit—combining “Samba” (Brazilian dance symbolizing dynamism, rhythm) with “Nova” (new, innovative).

2017-2020: Stealth Development and First Silicon

From 2017 to 2020, SambaNova operated in stealth mode, building the founding team and developing first-generation chips. The technical challenges were immense:

Challenge 1: Chip Design
Designing custom ASICs (Application-Specific Integrated Circuits) requires years of engineering and hundreds of millions in R&D. One mistake (tape-out error) can cost $50M+ and 18+ months.

Solution: Assembled world-class chip design team from Apple, NVIDIA, Intel. Leveraged TSMC’s 7nm process node. Built extensive simulation infrastructure to catch errors pre-fabrication.

Challenge 2: Reconfigurable Architecture
How to build hardware that reconfigures for different AI models without manual programming?

Solution: Developed spatial compiler technology—software that analyzes model computation graph and automatically configures chip dataflow paths. Abstracted hardware details from data scientists.

Challenge 3: Software Stack
AI researchers use PyTorch, TensorFlow—not low-level hardware APIs. How to provide familiar interfaces?

Solution: Built ML framework integration layer—SambaNova chips appear as standard accelerators to PyTorch/TensorFlow, with automatic model optimization.

Challenge 4: Competitive Landscape
NVIDIA dominated AI chips with a mature CUDA ecosystem, strong developer relationships, and 90%+ market share. How to compete?

Solution: Target enterprise customers needing production deployments (not researchers), focus on total cost of ownership (performance-per-watt, support, integration), offer full-stack platform (not just chips).

2020: Public Launch and Cardinal SN10

In 2020, SambaNova emerged from stealth with the Cardinal SN10 processor—the first chip built on RDA. The SN10 featured:

  • 40 billion transistors (comparable to high-end GPUs)
  • 1.2 TB/s memory bandwidth (reducing data bottlenecks)
  • 600 TFLOPS (FP16) for AI workloads
  • Spatial architecture: 384 processing elements with reconfigurable interconnects

SambaNova also launched DataScale, the integrated platform combining:

  • SN10 hardware (in rack-scale deployments)
  • SambaFlow software (compilers, runtime, optimization)
  • Model Zoo (pre-optimized models: BERT, GPT, ResNet)
  • MLOps tools (training, deployment, monitoring)

Early customers included Argonne National Laboratory (using SambaNova for scientific computing, climate modeling) and Lawrence Livermore National Laboratory (defense applications, materials science). These prestigious deployments validated SambaNova’s technology for demanding, production workloads.


Founders & Key Team

Relation / Role | Name | Previous Experience / Role
Founder, CEO | Rodrigo Liang | Executive at Oracle, Sun Microsystems; Enterprise Software Leadership
Co-Founder, Chief Technologist | Kunle Olukotun | Stanford Professor, Computer Architecture Pioneer, ACM Fellow
Co-Founder, Chief Scientist | Chris Ré | Stanford Professor, Machine Learning Expert, DeepDive Founder
Chief Product Officer | Ram Sivaramakrishnan | Product Leadership at Oracle, Enterprise AI Products
Chief Revenue Officer | Mark Linehan | Sales Executive at IBM, HPE; Data Center Hardware Sales

Rodrigo Liang (CEO) leads SambaNova with enterprise software expertise and business acumen. His Oracle background informs SambaNova’s enterprise GTM strategy, customer success focus, and full-stack platform approach. Liang is a frequent speaker on AI infrastructure and enterprise AI adoption.

Kunle Olukotun (Chief Technologist) is a legendary computer architect who pioneered chip multiprocessors (multi-core CPUs) and dataflow computing. As Stanford professor and ACM Fellow, Olukotun’s research underpins SambaNova’s RDA technology. He oversees hardware architecture and next-generation chip development.

Chris Ré (Chief Scientist) brings deep machine learning expertise and real-world AI deployment experience. His Stanford research on systems for machine learning (DeepDive, Snorkel) informs SambaNova’s software stack. Ré leads model optimization, algorithm development, and AI research partnerships.

Ram Sivaramakrishnan (CPO) joined from Oracle to scale the product organization. Under his leadership, SambaNova expanded from custom deployments to productized platform offerings (DataScale subscriptions, cloud services).


Funding & Investors

Seed (2017): $56 Million

  • Lead Investors: GV (Google Ventures), Walden International
  • Additional Investors: BlackRock, Redline Capital
  • Valuation: ~$200M
  • Purpose: Build founding team, develop first-generation chip architecture

Series A (2018): $56 Million

  • Lead Investor: GV (Google Ventures)
  • Additional Investors: Walden International, BlackRock
  • Valuation: ~$500M
  • Purpose: Tape out Cardinal SN10 chip, build software stack

Series B (2019): $150 Million

  • Lead Investor: Intel Capital (strategic investment)
  • Additional Investors: GV, Walden, BlackRock, Atlantic Bridge
  • Valuation: ~$1 Billion (unicorn status)
  • Purpose: Manufacturing, early customer deployments, expand engineering team

Series C (2020): $250 Million

  • Lead Investor: BlackRock
  • Additional Investors: Intel Capital, GV, Walden International
  • Valuation: ~$2.5 Billion
  • Purpose: Scale production, expand sales team, develop next-gen chips

Series D (2021): $676 Million

  • Lead Investor: SoftBank Vision Fund 2
  • Additional Investors: BlackRock, Intel Capital, GV, Temasek
  • Valuation: $5.1 Billion
  • Purpose: International expansion, cloud services, competition with NVIDIA

The Series D was transformational: SoftBank’s $676M investment positioned SambaNova for massive scale, validating the full-stack AI infrastructure thesis. The $5.1B valuation reflected growing enterprise demand for GPU alternatives.

Series E (2024): $350 Million

  • Lead Investors: SoftBank Vision Fund, BlackRock
  • Additional Investors: Intel Capital, Temasek, GV
  • Valuation: $5.1 Billion (flat, focused on operational scale)
  • Purpose: Launch SN40L chip, expand DataScale cloud, prepare for IPO

Total Funding Raised: $1.5+ Billion

SambaNova deployed capital across:

  • Chip development: Multiple chip generations (SN10, SN30, SN40L), TSMC fabrication contracts
  • Manufacturing: Building supply chain, securing chip capacity
  • Enterprise sales: Account executives, solutions engineers, customer success teams
  • Data centers: Building cloud infrastructure for DataScale services
  • R&D: Next-generation architectures, software optimization, model development

Product & Technology Journey

A. SambaNova Chips: Cardinal Series

Cardinal SN10 (2020)

First-generation RDA processor:

  • 40B transistors, 7nm process (TSMC)
  • 600 TFLOPS (FP16), 1.2 TB/s memory bandwidth
  • 384 processing elements with reconfigurable interconnects
  • Target: Training and inference for NLP, computer vision

SN30 (2022)

Second-generation with enhanced transformer support:

  • 80B transistors, 5nm process
  • 1,200 TFLOPS (FP16), 2.4 TB/s memory bandwidth
  • Optimized for attention mechanisms (transformer models)
  • 2x performance improvement for BERT, GPT models

SN40L (2024)

Third-generation focused on LLM inference:

  • 120B transistors, 3nm process
  • 2,000 TFLOPS (FP16), 4.8 TB/s memory bandwidth
  • 3x better performance-per-watt than H100 GPUs for inference
  • Optimized for Llama, Mistral, GPT-style models
  • Sub-10ms latency for real-time AI applications

B. DataScale Platform

Full-stack AI infrastructure:

Hardware Layer

  • Rack-scale deployments (8-64 SN40L chips per rack)
  • Liquid cooling (managing heat from dense chip configurations)
  • High-speed interconnects (NVLink-equivalent for multi-chip communication)
  • On-prem installations or SambaNova-managed cloud

Software Layer

  • SambaFlow: Compiler and runtime optimizing models for RDA architecture
  • Model Zoo: Pre-optimized models (Llama 2, Mistral, GPT-J, BERT, ResNet)
  • MLOps: Training orchestration, model deployment, monitoring, versioning
  • APIs: REST APIs for inference, PyTorch/TensorFlow integration
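
To make the API layer concrete, here is a hedged sketch of what calling a hosted inference REST endpoint typically looks like. The endpoint URL, field names, and model name below are hypothetical placeholders in the common chat-completion style; consult SambaNova's actual API documentation for the real interface.

```python
# Illustrative sketch of a chat-completion-style inference API call.
# All names here (endpoint, model, fields) are hypothetical placeholders.

def build_inference_request(model, prompt, max_tokens=256):
    """Assemble a chat-completion-style request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def extract_completion(response_body):
    """Pull the generated text out of a chat-completion-style response."""
    return response_body["choices"][0]["message"]["content"]

body = build_inference_request("example-llm", "Summarize RDA in one sentence.")
# Sending it would look roughly like:
#   requests.post("https://api.example.com/v1/chat/completions", json=body,
#                 headers={"Authorization": "Bearer <API_KEY>"})
fake_response = {"choices": [{"message": {"content": "RDA reconfigures the chip."}}]}
print(extract_completion(fake_response))  # RDA reconfigures the chip.
```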

Services Layer

  • Customer success teams (solution architects, ML engineers)
  • Model optimization services (fine-tuning for specific workloads)
  • 24/7 support with SLAs
  • Training programs for customer data science teams

C. Samba-1: Foundation Model

In 2025, SambaNova launched Samba-1, a foundation model demonstrating platform capabilities:

  • 50 billion parameters (Llama 2-70B scale)
  • Trained entirely on SambaNova hardware (showcasing training performance)
  • Open weights: Released to open-source community
  • Multilingual: Supporting 12+ languages
  • Use cases: General-purpose text generation, code generation, summarization

Samba-1 served dual purposes: benchmarking platform performance and providing customers a ready-to-deploy foundation model.

D. Reconfigurable Dataflow Architecture (Technical Deep-Dive)

How RDA Works:

  1. Spatial Compilation: ML model (PyTorch/TensorFlow) compiled to dataflow graph
  2. Hardware Reconfiguration: Chip’s 384 processing elements dynamically connect to match graph structure
  3. Data Movement Optimization: Computation moves to data (not vice versa), reducing memory bottlenecks
  4. Pipelining: Different model layers executing simultaneously across chip regions
  5. Dynamic Adaptation: Architecture reconfigures for different models without manual tuning
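
Steps 1-5 above can be caricatured with a toy "spatial compiler": assign layers of a model's dataflow graph to processing elements (PEs), then overlap layers across inputs pipeline-style. The round-robin placement policy and layer names are illustrative assumptions, not SambaNova's actual compiler, which balances placement against real hardware costs.

```python
# Toy sketch of spatial compilation and pipelining -- illustrative only.

def assign_stages(layers, num_pes):
    """Round-robin layers onto PEs; a real compiler balances compute cost."""
    return {layer: i % num_pes for i, layer in enumerate(layers)}

def pipeline_schedule(layers, num_inputs):
    """Cycle-by-cycle schedule: input i runs layer j at cycle i + j."""
    schedule = {}  # cycle -> list of (input_index, layer)
    for i in range(num_inputs):
        for j, layer in enumerate(layers):
            schedule.setdefault(i + j, []).append((i, layer))
    return schedule

layers = ["embed", "attention", "mlp", "head"]
print(assign_stages(layers, num_pes=2))
# {'embed': 0, 'attention': 1, 'mlp': 0, 'head': 1}

# With 4 layers and 3 inputs, the pipeline finishes in 6 cycles
# instead of the 12 steps a strictly sequential execution would need.
print(len(pipeline_schedule(layers, num_inputs=3)))  # 6
```

The pipelining payoff is the point of step 4: once each layer lives on its own region of the chip, every region stays busy on a different input each cycle.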

Benefits:

  • Performance: 2-3x faster than GPUs for specific workloads (transformers)
  • Efficiency: 3x better performance-per-watt (lower electricity, cooling costs)
  • Flexibility: Supporting CNNs, RNNs, transformers, GNNs without hardware changes
  • Ease of use: Data scientists use standard frameworks (PyTorch), hardware optimization automatic

E. Technology Stack

Infrastructure:

  • Manufacturing: TSMC fabrication (3nm, 5nm, 7nm process nodes)
  • Packaging: Advanced packaging for thermal management, chip-to-chip communication
  • Networking: Custom interconnects for multi-chip scaling (hundreds of chips working together)
  • Cloud: AWS, Azure, GCP partnerships for cloud-based DataScale

Security:

  • Secure boot: Hardware root-of-trust, encrypted firmware
  • Isolation: Multi-tenant security for cloud deployments
  • Compliance: SOC 2, ISO 27001, FedRAMP in progress

Business Model & Revenue

Revenue Streams (February 2026)

Stream | % Revenue | Description
DataScale Platform | 60% | Annual subscriptions for on-prem hardware + software ($500K-$5M+)
DataScale Cloud | 25% | Pay-as-you-go cloud inference services
Professional Services | 10% | Model optimization, training, custom development
Support & Maintenance | 5% | Extended support, hardware refresh cycles

Pricing Model:

  • On-prem: $1M-$10M+ for hardware (8-64 chip racks) plus an annual software subscription (20-30% of hardware cost)
  • Cloud: $2-5 per million tokens (inference), competitive with OpenAI API pricing
  • Enterprise agreements: Multi-year contracts with committed spend
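
As a worked example of the pricing arithmetic above, using the article's illustrative ranges rather than quoted prices, a quick cloud-versus-on-prem comparison:

```python
# Back-of-the-envelope cost comparison using the illustrative ranges above.
# None of these figures are quoted prices.

def cloud_cost(tokens, price_per_million):
    """Cloud inference cost at a per-million-token rate."""
    return tokens / 1_000_000 * price_per_million

def onprem_cost_per_year(hardware_cost, software_pct, amort_years):
    """Hardware amortized over its life, plus the annual software subscription."""
    return hardware_cost / amort_years + hardware_cost * software_pct

# 10 billion tokens/month at $3 per million tokens:
print(cloud_cost(10_000_000_000, 3.0))  # 30000.0 per month (~$360K/year)

# $4M rack, 25% annual software subscription, 4-year amortization:
print(onprem_cost_per_year(4_000_000, 0.25, 4))  # 2000000.0 per year
```

At these assumed figures the on-prem deployment breaks even somewhere above 50 billion tokens per month, which is why the on-prem model targets the largest enterprise workloads.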

Customer Segmentation

  1. Government/National Labs (40% of revenue): Argonne, Lawrence Livermore, defense agencies
  2. Energy (25%): Saudi Aramco, oil exploration AI, seismic analysis
  3. Financial Services (20%): Banks, hedge funds (risk modeling, fraud detection)
  4. Healthcare/Life Sciences (15%): Drug discovery, genomics, medical imaging

Unit Economics

  • Gross Margin: 55-65% (hardware margins lower than pure software, but improving with scale)
  • Customer Lifetime Value (LTV): $5M+ for enterprise deployments
  • CAC Payback: 18-24 months (large deals, long sales cycles)
  • Hardware Refresh: 3-5 year cycles (customers upgrading to new chip generations)
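
The unit-economics arithmetic is straightforward: CAC payback is the number of months of gross profit needed to recover the cost of acquiring a customer. The sample figures below are hypothetical, chosen only to land inside the article's stated ranges.

```python
# Sketch of the unit-economics arithmetic above; sample figures are hypothetical.

def cac_payback_months(cac, annual_contract_value, gross_margin):
    """Months of gross profit required to recover customer acquisition cost."""
    monthly_gross_profit = annual_contract_value * gross_margin / 12
    return cac / monthly_gross_profit

def ltv(annual_contract_value, gross_margin, lifetime_years):
    """Lifetime gross profit from one customer."""
    return annual_contract_value * gross_margin * lifetime_years

# $2M ACV at 60% gross margin, $2M CAC, 5-year customer lifetime:
print(round(cac_payback_months(2_000_000, 2_000_000, 0.60)))  # 20 (months)
print(ltv(2_000_000, 0.60, 5))  # 6000000.0
```

With these assumptions, payback lands inside the 18-24 month range and LTV clears the $5M+ mark cited above.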

Total ARR: $180+ Million (February 2026), growing 70%+ YoY


Competitive Landscape

NVIDIA ($2T+ market cap): GPU market leader, 90%+ AI training share, H100/A100 chips
Google TPU (Cloud TPU v4/v5): Tensor Processing Units for Google Cloud customers
AWS Trainium/Inferentia: Amazon’s custom chips for SageMaker
Cerebras ($4B valuation): Wafer-scale engine (850,000 cores on single chip)
Groq ($2.8B valuation): LPU architecture for ultra-low latency inference
Intel (public, Gaudi chips): AI accelerators from Habana Labs acquisition
AMD (public, MI300): GPU competitor to NVIDIA

SambaNova Differentiation:

  1. Full-stack platform: Hardware + software + models + support (not just chips)
  2. Enterprise deployment: On-prem installations (not cloud-only) for security, compliance
  3. Reconfigurable architecture: Adapting to model evolution without hardware redesign
  4. Performance-per-watt: 3x efficiency advantage reducing TCO

Customer Success Stories

Argonne National Laboratory

Challenge: Scientific computing for climate modeling, materials science requiring massive AI computation
Solution: DataScale deployment with 256 SN30 chips for multi-modal AI models
Results: 2x faster training for climate models, $2M annual power savings vs. GPU alternative

Saudi Aramco

Challenge: AI for oil exploration (seismic analysis) requiring real-time inference on massive datasets
Solution: SambaNova SN40L for inference workloads (analyzing petabytes of seismic data)
Results: 10x faster inference enabling real-time exploration decisions, improved discovery rates

Financial Services Company

Challenge: Fraud detection AI requiring sub-100ms latency for transaction authorization
Solution: DataScale Cloud for real-time inference (analyzing 10M+ transactions daily)
Results: 50ms average latency (meeting SLA), 30% fraud detection improvement


Future Outlook

Product Roadmap

SN50 Chip (2026): Next-generation with 5x performance improvement, 2nm process
Multimodal Support: Optimizations for vision-language models (GPT-4V, Gemini-style)
Edge AI: Smaller chips for edge deployment (autonomous vehicles, robotics)
Open Ecosystem: Expanded model zoo, community contributions, open-source tools

IPO Timeline

With $180M+ ARR, 70%+ growth, and a strong enterprise customer base, SambaNova is positioned for an IPO in 2027-2028. The company's strategic importance (an NVIDIA alternative for enterprises) and the growing AI infrastructure market make it a compelling public market candidate.


FAQs

What is SambaNova Systems?

SambaNova Systems builds full-stack AI infrastructure combining custom AI chips (Reconfigurable Dataflow Architecture), software platform, and enterprise AI solutions.

How does SambaNova differ from NVIDIA?

SambaNova uses Reconfigurable Dataflow Architecture (not fixed GPU design), offers full-stack platform (not just chips), and delivers 3x better performance-per-watt for inference workloads.

What is SambaNova’s valuation?

$5.1 billion (February 2026), first reached in the $676M Series D led by SoftBank Vision Fund (2021) and held flat through the 2024 Series E.

Who are SambaNova’s customers?

100+ enterprises including Argonne National Laboratory, Lawrence Livermore, Saudi Aramco, and financial institutions.

What chips does SambaNova make?

Cardinal series: SN10 (2020), SN30 (2022), SN40L (2024)—each generation offering 2-3x performance improvement with superior power efficiency.


Conclusion

SambaNova Systems has emerged as a credible alternative to NVIDIA’s AI chip dominance, proving that purpose-built AI architectures can outperform adapted GPUs for production workloads. With a $5.1 billion valuation, $180M+ ARR, and 100+ enterprise customers including national laboratories and Fortune 500 companies, SambaNova has demonstrated that Reconfigurable Dataflow Architecture isn’t just academic research—it’s production-ready infrastructure powering critical AI applications.

As enterprise AI adoption accelerates (LLM deployment, AI agents, multimodal models), SambaNova's full-stack approach (hardware + software + models + support) positions it as a strategic alternative to cloud-only solutions. The company's continued chip innovation (SN40L's 3x efficiency improvement), expanding DataScale cloud services, and strong customer relationships make it one of AI infrastructure's most compelling investment opportunities, with an IPO likely within 24-36 months.
