
What is AIchor?

The AIchor platform allows users to manage Machine Learning (ML) and Reinforcement Learning (RL) experiments at scale, leveraging the underlying infrastructure either on-premise or in the cloud. It uses enterprise-grade servers to take advantage of high-density CPU cores and advanced GPUs, distributing machine learning workloads at scale while abstracting the hardware and technical intricacies away from AI engineers and researchers.

AIchor applies cutting-edge MLOps and GitOps practices and sits on top of Kubernetes clusters, allowing its users to deploy Machine Learning jobs that take full advantage of hundreds of CPU cores and GPUs. Users can deploy seamlessly on custom hardware infrastructure as well as on various cloud providers, avoiding cloud vendor lock-in.

The AIchor platform integrates with any Machine Learning project hosted in a Git repository, using commit webhooks. This allows AI engineers and researchers to trigger Machine Learning pipelines simply by pushing their code to the repository: each push automatically triggers a centralized pipeline that handles all the steps required to run the experiment.
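The GitOps flow described above boils down to an ordinary commit-and-push cycle. The sketch below simulates it locally: a bare repository stands in for the hosted remote, and the repository path, file name, and config contents are all placeholders, not AIchor-specific conventions. On a real Git host, the final push is what fires the commit webhook that starts the pipeline.

```shell
set -e
# A bare repository stands in for the hosted Git remote (placeholder paths).
git init --bare /tmp/aichor-demo-remote.git
git clone /tmp/aichor-demo-remote.git /tmp/aichor-demo
cd /tmp/aichor-demo
git config user.email "dev@example.com"   # placeholder identity
git config user.name "AIchor Developer"

# Edit the experiment code or configuration (placeholder content).
echo "learning_rate: 0.001" > config.yaml
git add config.yaml
git commit -m "Update experiment configuration"

# On a real Git host, this push triggers the commit webhook,
# which in turn starts the centralized AIchor pipeline.
git push origin HEAD
```

No AIchor-specific tooling is involved on the developer's side; the platform reacts to the webhook delivered by the Git host.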

This user guide is intended for everyone who uses the AIchor Platform to provision and manage experiments. The AIchor Platform defines two user profiles per organization (ORG): administrator and developer. Each organization has a single administrator and multiple developers.

Access to the AIchor Platform is web-based over a secure SSL connection. Every organization is provided with its own entry point: https://organisationname.aichor.ai. The AIchor web interface is accessible through recent versions of major commercial browsers. The interface will be optimised for tablets and smartphones in upcoming versions.

At the moment, the only supported web browser is Google Chrome.

AIchor releases:

AIchor v1.0.0

The initial version of AIchor marked its debut as a unified platform, with all components deployed within a single cluster. This streamlined setup served as the foundation for the platform's core functionality, enabling efficient integration and coordination across its various features.

AIchor v2.0.0

The second version of AIchor introduced a re-architected platform, separating the control plane from the data plane to enhance scalability and flexibility. The control plane, deployed in a dedicated Kubernetes cluster, houses all core components essential for managing the platform. Meanwhile, the data plane consists of clusters specifically allocated for running workloads. This architecture enables the execution of jobs across multiple Kubernetes clusters, allowing each tenant—whether an organization or a customer—to operate in its own isolated environment.

AIchor v2.1.0

This version of AIchor extends the platform's versatility by supporting various engines as data planes beyond Kubernetes clusters, such as AWS ParallelCluster, GCP Vertex AI, and AWS SageMaker. This enhancement allows the platform to integrate seamlessly with a wide range of computational frameworks, enabling tenants to leverage the infrastructure that best suits their workload requirements and operational preferences.