NVIDIA
Nvidia unveils new supercomputer to train artificial intelligence
During a technology conference in San Jose, USA, Nvidia introduced a new supercomputer, the DGX-2, expected to launch later this year. It is a "monster" of a server platform built to help scientists and researchers deepen the training of artificial intelligence through deep learning. The machine is billed as the first system in the world to deliver two petaflops of performance. That is the equivalent of 300 servers, but in 60 times less space and with greater energy efficiency. In just six months, Nvidia managed to build a system with eight times the performance of the previous DGX-1.

The company also revealed that its Tesla V100 graphics processor will have double the memory capacity, delivering the performance of roughly 100 CPUs in a single GPU. The computer features the new NVSwitch technology, a GPU interconnect capable of linking 16 Volta-based graphics cards simultaneously, thus doubling first-generation performance. And what does this mean in terms of data communication? "Only" 2.4 terabytes per second...

The supercomputer is equipped with 16 Tesla V100 graphics cards, 512GB of HBM2 memory, two Intel Xeon processors, and 30TB of SSD storage based on next-generation NVMe technology.
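For readers curious what "training artificial intelligence across 16 interconnected GPUs" looks like in practice, here is a minimal, hypothetical sketch using PyTorch's DistributedDataParallel. The model, data, and hyperparameters are toy placeholders and are not taken from Nvidia's announcement; it simply illustrates the kind of multi-GPU workload a machine like the DGX-2 is built for.

```python
# A minimal sketch of multi-GPU deep-learning training, assuming PyTorch with CUDA.
# The model and data below are toy placeholders, not anything from the article.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def train(rank, world_size):
    # One process per GPU; the NCCL backend handles GPU-to-GPU communication
    # (over NVLink/NVSwitch on machines that have it).
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    model = DDP(torch.nn.Linear(1024, 10).cuda(rank), device_ids=[rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for _ in range(100):  # toy training loop on random data
        x = torch.randn(64, 1024, device=rank)
        y = torch.randint(0, 10, (64,), device=rank)
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()  # gradients are averaged across all GPUs
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = torch.cuda.device_count()  # 16 on a DGX-2
    mp.spawn(train, args=(world_size,), nprocs=world_size)
```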
The new processor is intended for artificial intelligence research machines and has already been ordered by several companies, including HP, IBM, Oracle, Dell and Lenovo, which plan to start using it in their solutions in the coming months.
The new generation of supercomputers will accelerate the artificial intelligence training process, which in practical terms means better voice recognition and translation systems, with results that sound closer to natural speech. A Microsoft spokesman pointed to the importance of Nvidia's technology for its Cortana, Bing and Microsoft Translator systems, which will reach a new level of "human capabilities."
Sapo