AI Analytics Runtime

Artificial Intelligence (AI) pipelines involve multiple specialized roles: Software Architect, Software Engineer, DevOps Engineer, Data Scientist, IT Systems Administrator, and Big Data Engineer. Data storage and retrieval require a Big Data Engineer, while provisioning and administering infrastructure falls to an IT Systems Administrator. Data Scientists process and transform data into AI models, and DevOps Engineers and Software Engineers deploy and run those models in a secure, orchestrated manner. The Software Architect oversees interconnections, technology selection, and architectural decisions. The AI Analytics Runtime component automates the provisioning and deployment of infrastructure and AI models, integrating with other i4 components for authentication, monitoring, data acquisition, and marketplace integration.


  • Artificial Intelligence model configurations can be specified through a Manifest file, depending on user and model needs: training from historic data, adaptive learning, API configuration, etc.

  • Artificial Intelligence models can be uploaded to be automatically configured, deployed, trained, and run, making them available for generating data predictions

  • Deployed AI models can be visualized and managed: they can be stopped/started on-demand or by creating a model schedule

  • New datasets can be uploaded to be used in the training phase of AI model deployments
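
As a hypothetical illustration of the Manifest options listed above, a model's configuration might look like the following. The field names are assumptions for the sake of the sketch, not the actual i4 Manifest schema:

```yaml
# Hypothetical Manifest sketch -- field names are illustrative assumptions
model:
  name: quality-predictor
  training:
    historic_data: true          # train from an uploaded dataset
    dataset: sensor-readings.csv
    adaptive_learning: true      # keep learning as new data arrives
  api:
    expose: true                 # generate a public prediction endpoint
    path: /predict
  resources:
    ram: 512Mi
```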



Visualization of Artificial Intelligence model deployments at runtime, displaying model-specific information such as the start time, the RAM in use, and the public API URL.


AI models can be deployed in a Kubernetes cluster by uploading scripts and a Manifest file. The runtime then creates Docker images, trains the models, generates API endpoints, configures data transformations, restricts access by user role, integrates with Wazuh security monitoring, and provides management actions for the deployed models.
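
Once a model is deployed and its API endpoint generated, it can be queried over HTTP with the access token of an authorized role. The sketch below shows how such a request could be assembled; the URL, token, and payload schema are assumptions, not the actual i4 API:

```python
# Hypothetical sketch: querying a deployed model's generated API endpoint.
# The base URL, token, and payload schema are illustrative assumptions.
import json
import urllib.request


def build_prediction_request(base_url, token, features):
    """Build an authenticated POST request for a deployed model's endpoint."""
    payload = json.dumps({"data": features}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/predict",
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",  # access restricted by user role
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_prediction_request(
    "https://models.example.org/quality-predictor", "TOKEN", [[1.0, 2.0]]
)
# urllib.request.urlopen(req) would send the request once the model is running
```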


Deployed AI models can be scheduled to be started or stopped at a specific frequency (e.g. every Monday) with configurable options (e.g. a specific time of day), automating each model's runtime.
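
The scheduling idea above can be sketched as follows. This is a minimal reimplementation for illustration only, not the i4 scheduler: given a weekly frequency (every Monday) and a time of day (02:00), it computes the next start time.

```python
# Minimal sketch of a weekly model schedule (e.g. "every Monday at 02:00").
# Illustrative only -- this is not the actual i4 scheduling component.
from datetime import datetime, timedelta


def next_run(now, weekday, hour, minute=0):
    """Return the next datetime falling on `weekday` (0=Monday) at hour:minute."""
    candidate = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    days_ahead = (weekday - now.weekday()) % 7
    candidate += timedelta(days=days_ahead)
    if candidate <= now:  # this week's slot already passed -> schedule next week
        candidate += timedelta(days=7)
    return candidate


# From a Wednesday, the next "Monday at 02:00" slot is the following Monday.
run = next_run(datetime(2024, 5, 15, 10, 0), weekday=0, hour=2)
```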


Dataset files can be uploaded for use in the training phase of AI model deployments. All available datasets are displayed in the upper table; after an AI model is trained, the resulting model file is created and displayed in the lower table. Both datasets and trained model files can be downloaded for other purposes.

Additional resources

Learn more about i4FS by visiting the project website for general information, the wiki for information about the core components, the Technical Manual for API documentation, and the repository to download the source code.

Training Academy

Get a better understanding of the global architecture and information flow.

Source code

Our source code is open source and available on our GitLab repository.

Software Documentation

Read our easy-to-follow documentation to learn how to use the i4 Components.

Software Tutorials

Follow our step-by-step tutorials to create your first zApp.