Since NCS directly exploits the results of traditional simulations, the integration of HPC clusters with Kubernetes, distributed training on multiple GPUs, and the efficient exchange of large volumes of data are force multipliers for AI. NCS has therefore been built to leverage the flexibility and power of cloud HPC services (such as Microsoft Azure), which provide seamless integration with large volumes of simulation results, access to heterogeneous computing platforms, and the ability to deploy trained models in a secure environment for remote engineering and design teams working in different locations.
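As a rough illustration of the multi-GPU training pattern mentioned above, the sketch below uses PyTorch's DistributedDataParallel with a placeholder model and dummy data; it does not reflect NCS's actual training stack, model architecture, or data pipeline, all of which are assumptions here.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # Each process drives one GPU; torchrun sets RANK, LOCAL_RANK and WORLD_SIZE.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder network standing in for a geometry-based surrogate (hypothetical,
    # not the NCS model): maps per-node coordinates to a scalar field value.
    model = torch.nn.Sequential(
        torch.nn.Linear(3, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1)
    ).cuda(local_rank)
    ddp_model = DDP(model, device_ids=[local_rank])

    optimizer = torch.optim.Adam(ddp_model.parameters(), lr=1e-3)
    loss_fn = torch.nn.MSELoss()

    # Dummy batch: random node coordinates and targets (real data would come
    # from simulation results stored on shared cloud storage).
    x = torch.randn(1024, 3, device=local_rank)
    y = torch.randn(1024, 1, device=local_rank)

    for _ in range(10):
        optimizer.zero_grad()
        loss = loss_fn(ddp_model(x), y)
        loss.backward()   # gradients are all-reduced across GPUs automatically
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched, for example, with `torchrun --nproc_per_node=4 train_ddp.py`, each process handles one GPU and gradients are synchronized automatically; on a Kubernetes-managed cluster the same script would typically run inside a GPU-enabled job or pod.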
In short, combining GCNNs with cloud-based HPC breaks a new barrier in possibilities, performance, and convenience for deploying AI-based design optimization at large organizations, and significantly reduces design times thanks to high performance in simulation, training, and inference.