AI-Stack Appliance

A customized solution for different stages of
AI adoption and budget requirements

Designed for industrial AI development teams and for education and research institutions running machine learning or deep learning training, AI-Stack Appliance helps users build GPU resource pools, centrally manage and deploy computing resources, and accelerate research and development in artificial intelligence. It suits application scenarios such as artificial intelligence research, deep learning training, medical image analysis, and various recognition and analysis tasks.


Efficient Resource Management

Maximize GPU utilization

Supports integrated hardware systems, unified management, and a multi-container GPU-sharing mechanism, so that multiple users and multiple containers can share the same GPU for AI development, while one or more AI computing nodes are accommodated at the same time.
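The shared-GPU mechanism can be pictured as fractional allocation: each container reserves a share of a GPU, and new reservations are admitted only while headroom remains. The sketch below is purely illustrative; the class and field names are assumptions for explanation, not AI-Stack APIs.

```python
# Minimal sketch of multi-container GPU sharing via fractional allocation.
# All names here are illustrative assumptions, not AI-Stack interfaces.

class SharedGpu:
    def __init__(self, gpu_id, capacity=1.0):
        self.gpu_id = gpu_id
        self.capacity = capacity      # 1.0 == one whole GPU
        self.allocations = {}         # container name -> reserved fraction

    def allocate(self, container, fraction):
        """Reserve a fraction of this GPU for a container, if it fits."""
        used = sum(self.allocations.values())
        if used + fraction > self.capacity:
            return False              # not enough headroom left
        self.allocations[container] = fraction
        return True

    def release(self, container):
        """Free the container's share so others can use it."""
        self.allocations.pop(container, None)

gpu = SharedGpu(gpu_id=0)
gpu.allocate("train-job-a", 0.5)      # two containers share one GPU
gpu.allocate("notebook-b", 0.25)
```

A real scheduler would also track memory and enforce isolation, but the admission logic above is the core idea behind letting several containers draw on one physical GPU.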

Visualized Management Interface

Enhance the scalability, performance, and operational control of AI environments

Provides a rich web-based visualization interface that lets you view real-time and historical utilization of CPU, memory, network, and GPU, along with GPU temperature, node count, execution status, usage status, available capacity, and device load.

Smart Monitoring and Notification

Achieve better GPU utilization through performance optimization and increase return on investment

While containers are running, you can monitor the usage of resources such as CPU, memory, network, GPU, and GPU temperature in real time. When computing resources are heavily used, the system automatically triggers an email alert, reminding administrators to check the remaining capacity and helping them allocate and manage computing resources rationally.
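The alert behavior described above amounts to threshold checks over sampled metrics, with a notification fired on breach. Here is a minimal sketch of that logic; the threshold values and the `notify` callback (a stand-in for sending an email) are assumptions, not AI-Stack's actual configuration.

```python
# Sketch of threshold-based resource alerting.
# Thresholds and the notify callback are illustrative assumptions.

def check_and_alert(metrics, thresholds, notify):
    """Compare sampled metrics against thresholds; call notify() on breach."""
    alerts = []
    for name, value in metrics.items():
        limit = thresholds.get(name)
        if limit is not None and value >= limit:
            alerts.append(f"{name} at {value}% (limit {limit}%)")
    if alerts:
        notify("Resource alert: " + "; ".join(alerts))
    return alerts

sent = []                             # collects "emails" for this demo
check_and_alert(
    {"gpu_util": 97, "memory": 62},   # sampled utilization, in percent
    {"gpu_util": 90, "memory": 80},   # alert thresholds
    sent.append,                      # stand-in for an email sender
)
```

In a deployment, `metrics` would come from the monitoring agents on each node and `notify` would hand the message to the mail system administrators have configured.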

Abundant Software Resources

A single platform for all the required resources, tools, and AI frameworks

Embeds NVIDIA-optimized builds of commonly used AI frameworks such as TensorFlow and PyTorch, and features an extensible AI-framework design. Users can quickly build an experimental environment to meet their needs for model research, model building, batch training, and model deployment in deep learning training and inference.
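For context, NVIDIA distributes its optimized TensorFlow and PyTorch builds as containers on NGC, which is the usual way such framework images are launched on a GPU node. The commands below are a generic illustration of that workflow, not AI-Stack-specific steps, and the image tag shown is an example that changes monthly.

```shell
# Pull an NVIDIA-optimized PyTorch container from NGC
# (the tag below is an example; check ngc.nvidia.com for current releases).
docker pull nvcr.io/nvidia/pytorch:24.05-py3

# Start it with GPU access; inside, PyTorch is preinstalled and tuned.
docker run --rm --gpus all -it nvcr.io/nvidia/pytorch:24.05-py3
```

A platform like the one described here automates this kind of image management and container launch behind its web interface, so users select a framework rather than run these commands by hand.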