The A100 SXM GPUs are also better suited for scale-up deployments, supporting four-, eight- or even 16-GPU configurations interconnected using Nvidia’s NVLink and NVSwitch technologies ...
which the chipmaker said is up to 4.9 times faster for HPC applications and up to 20 percent faster for AI applications compared to the 400-watt SXM version of Nvidia’s flagship A100 GPU.
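The article itself contains no code, but for readers curious how that NVLink/NVSwitch scale-up shows up to software, below is a minimal, illustrative CUDA runtime sketch (an assumption added for illustration, not part of Nvidia’s or the chipmaker’s materials). It simply enumerates the GPUs in a multi-GPU node and reports which device pairs can access each other’s memory directly, which is the capability the NVLink/NVSwitch fabric provides.

```c
/* Illustrative sketch only, not from the announcement: probe direct
 * GPU-to-GPU (peer) access on a multi-GPU node such as a four-, eight-
 * or 16-way A100 SXM system. */
#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count < 2) {
        printf("Need at least two GPUs to test peer access.\n");
        return 0;
    }
    for (int src = 0; src < count; ++src) {
        cudaSetDevice(src);  /* peer-access calls below apply to 'src' */
        for (int dst = 0; dst < count; ++dst) {
            if (src == dst) continue;
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, src, dst);
            if (canAccess) {
                /* Map the peer's memory into this device's address space;
                 * over NVLink/NVSwitch this allows direct GPU-to-GPU copies. */
                cudaDeviceEnablePeerAccess(dst, 0);
                printf("GPU %d -> GPU %d: peer access enabled\n", src, dst);
            } else {
                printf("GPU %d -> GPU %d: no direct peer access\n", src, dst);
            }
        }
    }
    return 0;
}
```

On an NVLink/NVSwitch-connected system of the kind described above, most or all GPU pairs would typically report peer access; on a node where GPUs are linked only over PCIe, fewer pairs usually do.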