Trusted By

Key Features

Accelerated AI-Pipelines
High-performance Data Lake for handling large datasets – 10-15x query acceleration

Open Source Support
Supports all open-source frameworks

Micro Cloud – GPU Supercomputing
Ready-to-use 20+ tensor petaFLOPS in a GPU Micro Cloud

Multi-Cloud from Core to Edge
Runs anywhere: AWS, Azure, on-premises, or at the Edge

Cloud-Native
Container-based – runs and scales anywhere

Curated AI-Algorithms
Curated algorithms – closed-loop validation – IP exclusives

Composable Foundational Components
AI-Runtime*
* Coming soon
Our Solutions

Enterprise
- Single AI PaaS across your enterprise
- Unify the entire AI life cycle
- Security & compliance of your own data center
- Open-source framework support
- HPC infrastructure support in one click
- Access to curated AI algorithms

Cloud Service Providers
- Onsell AI-PaaS to your customers
- Compete with public clouds
- Offer low-latency AI solutions
- Promote high-margin services
- Keep customers longer
- API integration for metering & IaaS orchestration apps

Edge Operators
- Make AI/ML PaaS part of the Edge-Telco-Cloud ecosystem
- Reap the benefits of 5G & low latency
- AI-Inference at the Edge for quick decisions
- Deliver real-time, production-ready AI/ML solutions

Case Studies

- Accelerate drug discovery for the SARS-CoV-2 virus using Computational Physics (MELD) modeling
- HPC notebook with MPI, using 128 NVIDIA RTX 6000 GPUs and 500 GB of parallel file system (see the sketch after this list)
- Multiple containers to support multi-GPU, multi-CPU compute engines
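MELD's own molecular dynamics code is domain-specific, but distributing such a job across many GPUs typically follows a standard MPI pattern: each rank binds to one GPU and contributes a share of the work. The sketch below is a minimal, hypothetical illustration of that rank-to-GPU mapping with mpi4py and PyTorch; it is not the case study's actual notebook, and the per-rank workload is a placeholder.

```python
# Minimal sketch (not the case study's actual code): map MPI ranks to GPUs so a
# large job runs one worker per GPU, e.g. launched as:
#   mpirun -np 128 python rank_to_gpu.py
from mpi4py import MPI
import torch

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's index, 0 .. size-1
size = comm.Get_size()   # total number of MPI ranks

# Bind each rank to one of the GPUs visible on its node (assumes GPU nodes).
local_gpu = rank % torch.cuda.device_count()
torch.cuda.set_device(local_gpu)
device = torch.device(f"cuda:{local_gpu}")

# Placeholder per-rank workload (a real run would execute one slice of the
# simulation here and write checkpoints to the parallel file system).
x = torch.randn(1024, 1024, device=device)
local_result = (x @ x.T).trace().item()

# Aggregate per-rank results on rank 0.
total = comm.reduce(local_result, op=MPI.SUM, root=0)
if rank == 0:
    print(f"{size} ranks finished; aggregate result = {total:.3f}")
```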

- In silico drug discovery simulations, computer-aided drug design (CADD), molecular modeling, and quantitative structure-activity relationship (QSAR) analysis for pharmaceutical companies
- HPC notebook usage: 8 NVIDIA RTX 6000 GPUs
- Zeblok's platform provides a 6x performance improvement compared to NSF's Blue Waters Cray GPU-based system

- Dance Classification
- Image classification via computer vision, using convolution blocks and the VGG family of convnets to identify dance genre (see the sketch after this list)
- Open-source Jupyter Notebook
- Multiple containers to support multi-GPU, multi-CPU compute engines, using 1 NVIDIA RTX 6000 GPU
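The case study names the approach (convolution blocks from the VGG family) without code, so here is a minimal sketch of that pattern under assumed details: VGG16 from torchvision as a frozen feature extractor with a new classification head for dance genres. The class count, batch, and image size are illustrative, not taken from the case study.

```python
# Sketch of a VGG-based dance-genre classifier (assumed setup, not the case
# study's notebook). VGG16's convolution blocks act as a frozen feature
# extractor; only the classifier head is trained.
import torch
import torch.nn as nn
from torchvision import models

NUM_GENRES = 8  # hypothetical number of dance genres

# Pretrained VGG16 (weights download on first use, torchvision >= 0.13).
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
for p in model.features.parameters():
    p.requires_grad = False  # freeze the convolution blocks

# Swap the final fully connected layer for a dance-genre head.
model.classifier[6] = nn.Linear(4096, NUM_GENRES)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# One training step on a dummy batch; real data would come from a DataLoader
# of 224x224 RGB dance images with genre labels.
images = torch.randn(4, 3, 224, 224, device=device)
labels = torch.randint(0, NUM_GENRES, (4,), device=device)

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
optimizer.zero_grad()
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss on dummy batch: {loss.item():.4f}")
```
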
- Covid-19 CT Scan Segmentation
- MOSMEDDATA, the largest unlabeled data set of infected-lung CT scans
- 800 CT volumes of COVID-infected patients, yielding around 25,000 CT slices
- 2 NVIDIA RTX 6000 GPUs, 500 GB of parallel file system

- COVID-19 Epidemiology
- Assist health care administration, insurance, and government agencies in emergency preparedness
- Open-source Kaggle repository dataset: the UNCOVER COVID-19 Challenge, provided by the Roche Data Science Coalition
- Zeblok AI-Rover™ WorkStation, 1 NVIDIA RTX 6000 GPU, 50 GB of block storage, 100 GB of object storage

- Predictive Analytics
- TRMI data from Refinitiv, combined with pricing data, in a project aimed at understanding the impact of market sentiment on returns
- Zeblok AI-Rover™ WorkStation, 7 vCPUs, 16 GB RAM, 50 GB of block storage

- Rare Disease Study Design
- C-Path Institute dataset of natural history data from the Friedreich's Ataxia Clinical Outcome Measure Study (FA-COMS), used to identify novel endpoints, biomarkers, and baseline characteristics that may optimize clinical study design
- Zeblok AI-Rover™ WorkStation, 1 NVIDIA RTX 6000 GPU, 50 GB of block storage

- Skin Cancer Classification
- ISIC 2020 Challenge dataset containing 33,126 dermoscopic training images of unique benign and malignant skin lesions from over 2,000 patients
- Zeblok AI-Rover™ WorkStation, 1 NVIDIA RTX 6000 GPU, 1 vCPU, 16 GB RAM, 50 GB of block storage, 100 GB of object storage
Just 3 Steps

Step #1: Select a Notebook
Step #2: Pick Infrastructure
Step #3: Start Modeling

- Choice of open-source framework notebooks that are CUDA optimized
- Choice of curated algorithm notebooks

AI-Landing
Zeblok Micro Cloud 21-day trial
A complete user experience on the Zeblok Micro Cloud for Data Engineers, Data Analysts, and Data Scientists
Step #1
Select a Notebook
Choice of:
- CUDA-optimized open-source framework notebook, OR
- Notebook with an algorithm exclusively available on the Zeblok platform

Step #2
Select Infrastructure
Choice of:
- CPU environment on our Micro Cloud, OR
- A single GPU or hundreds of GPUs in our Micro Cloud, up to 20+ petaFLOPS (see the sketch below)
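Whichever option is picked, a first notebook cell can confirm what the session actually received. The sketch below uses standard PyTorch calls only; it is not a Zeblok-specific API.

```python
# Sketch of a first notebook cell to verify the provisioned infrastructure.
# Standard PyTorch calls only; nothing here is Zeblok-specific.
import torch

if torch.cuda.is_available():
    n = torch.cuda.device_count()
    print(f"{n} GPU(s) visible to this notebook:")
    for i in range(n):
        props = torch.cuda.get_device_properties(i)
        print(f"  cuda:{i} {props.name}, {props.total_memory / 1e9:.1f} GB")
else:
    print("No GPU visible: running in a CPU environment.")
```
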
Step #3
Start Modeling
- Ingest data into the high-performance Data Lake
- Train models and integrate runtime AI solutions into your enterprise business processes, from Core to Edge (see the sketch below)
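How data reaches the Data Lake depends on the deployment, so the sketch below keeps that step generic: it assumes a tabular dataset at a placeholder path, fits a baseline scikit-learn model, and persists the artifact for a downstream runtime service. Nothing here is a Zeblok-specific API; it only illustrates the ingest, train, and hand-off flow.

```python
# Sketch of the "start modeling" step under assumed inputs: the parquet path
# and column names below are placeholders, not real Data Lake paths.
import pandas as pd
import joblib
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# 1. Ingest: read a tabular dataset (placeholder path, placeholder columns).
df = pd.read_parquet("data/sensor_readings.parquet")
X, y = df.drop(columns=["label"]), df["label"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# 2. Train: fit a baseline model and check it on a holdout split.
model = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
model.fit(X_train, y_train)
print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))

# 3. Hand off: persist the trained model so a runtime service (core or edge)
#    can load it for inference.
joblib.dump(model, "model.joblib")
```
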
Partnerships
Zeblok Ingenuity
Algorithms and Expertise from Top Minds in AI
Benefit from a government/academia/industry relationship program

Zeblok Frontier
Extend Your Infrastructure to AI

Zeblok Build Intelligence Services
Ideas to execution for pragmatic AI projects!

Announcements and News
AI-Rover™
Data Comprehension in Days
Full implementation of an explainable AI algorithm on a virtualized notebook, in concert with Zeblok's GPU-powered data lake.