Ai-API Engine

Bridge Data Science and DevOps
Deliver Prediction Services Anywhere


Getting completed machine learning models into production is challenging. Data scientists are not typically experts in building production services or in DevOps best practices, and trained Ai/ML models produced by a data science team are hard to test and hard to deploy. This often leads to a time-consuming and error-prone workflow, where a pickled model or weights file is handed over to a software engineering team.

Our Ai-API Engine is a framework within Zeblok’s Ai-MicroCloud™ for serving, managing and deploying completed Ai/ML models. It bridges the gap between data science and DevOps, and enables teams to deliver prediction services in a fast, repeatable and scalable way.  

Zeblok’s Ai-API Engine makes moving trained Ai/ML models to production easy:

  • Package models trained with any supported ML framework, then containerize the model server for production deployment (see the example sketch after this list)

  • Deploy anywhere, as online API serving endpoints or as offline batch inference jobs

  • High-performance API model server with adaptive micro-batching support

  • The Ai-API server handles high request volumes without crashing and supports multi-model inference, API server Dockerization, a built-in Prometheus metrics endpoint, a Swagger/OpenAPI endpoint for API client library generation, serverless endpoint deployment, and more

  • Central hub for managing models and the deployment process via web UI and APIs

  • Supports various ML frameworks including: Scikit-Learn, PyTorch, TensorFlow 2.0, Keras, FastAI v1 & v2, XGBoost, H2O, ONNX, Gluon and more

  • Supports API input data types including: DataframeInput, JsonInput, TfTensorflowInput, ImageInput, FileInput, MultifileInput, StringInput, AnnotatedImageInput and more

  • Supports API output adapters including: BaseOutputAdapter, DefaultOutput, DataframeOutput, TfTensorOutput and JsonOutput
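The input and output adapter names above match those of the open-source BentoML framework (v0.13); whether the Ai-API Engine exposes exactly that interface is an assumption, so the following is only a minimal sketch in that style of packaging a scikit-learn model as a prediction service with a DataframeInput adapter:

    # service.py -- illustrative BentoML 0.13-style service definition (assumed interface)
    import pandas as pd
    from bentoml import BentoService, api, artifacts, env
    from bentoml.adapters import DataframeInput, JsonOutput
    from bentoml.frameworks.sklearn import SklearnModelArtifact

    @env(infer_pip_packages=True)                 # capture pip dependencies for the container image
    @artifacts([SklearnModelArtifact("model")])
    class IrisClassifier(BentoService):

        @api(input=DataframeInput(), output=JsonOutput(), batch=True)
        def predict(self, df: pd.DataFrame):
            # with batch=True, adaptive micro-batching groups concurrent requests into one DataFrame
            return self.artifacts.model.predict(df).tolist()

    if __name__ == "__main__":
        from sklearn.datasets import load_iris
        from sklearn.ensemble import RandomForestClassifier

        X, y = load_iris(return_X_y=True)
        clf = RandomForestClassifier().fit(X, y)

        svc = IrisClassifier()
        svc.pack("model", clf)                    # attach the trained model to the service
        print(svc.save())                         # saved bundle can then be containerized and deployed

Within the Ai-MicroCloud, the remaining containerization and deployment steps are handled through the Ai-API Engine's web UI, as described in the steps below.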

Easy Steps to Ai-API Deployment

Select Model to Deploy as API

Select the completed Ai/ML model to deploy as an API

select-model-to-deploy.png

Select NameSpace

Option to select a namespace

select-namespace.png

Select Data Centers/Edge Locations

Select from the list of your data centers or Edge locations where the model is to be deployed

select-edge-datacenter.png

Create and Distribute API

Click the Create button to create the API – the Ai-API Engine does the rest, deploying the Ai inference service to all your selected locations

create-api.png

List of APIs

Quick view of the APIs that have been successfully deployed

List-of-api.png
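Once an API has been created and distributed, each selected location serves an HTTP prediction endpoint. The example below is only a hedged illustration: the hostname, path, and feature names are hypothetical placeholders rather than the Ai-API Engine's documented interface.

    # client.py -- hypothetical example of calling a deployed prediction endpoint
    import requests

    # placeholder URL; the real endpoint is shown in the Ai-API Engine UI after deployment
    ENDPOINT = "https://edge-location.example.com/predict"

    # DataframeInput-style adapters typically accept a JSON list of records
    payload = [
        {"sepal_length": 5.1, "sepal_width": 3.5, "petal_length": 1.4, "petal_width": 0.2},
    ]

    response = requests.post(ENDPOINT, json=payload, timeout=10)
    response.raise_for_status()
    print(response.json())    # e.g. [0]

The built-in Prometheus metrics endpoint and the Swagger/OpenAPI endpoint mentioned above are served alongside the prediction routes, which is what makes each deployed location easy to monitor and straightforward to generate API client libraries for.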