Meta is currently building a new AI Research SuperCluster (RSC) that aims to be among the fastest AI supercomputers in the world.
The supercomputer is being built in phases and already features 6,080 Nvidia A100 GPUs, 175 petabytes of bulk storage, 46 petabytes of cache storage, and 10 petabytes of network file system storage. Once finished, the RSC will include 16,000 GPUs alongside a data system capable of serving an exabyte of training data.
The company plans to use the RSC to train large natural language processing (NLP) models, including research models trained on trillions of examples. It is also training AI to work across different languages, analyze media, and power augmented reality tools, work that should in turn help the company identify harmful content.
In the long term, Meta wants to leverage the RSC to build entirely new AI models for its workloads, and it hopes to complete the final phase of the buildout in July.