SPRING 2021
Purdue’s new Gilbreth supercomputer is accelerating the training time of machine learning models for research in the lab of Prof. Ananth Grama.
A Purdue team has used the Gilbreth community cluster operated by ITaP Research Computing to develop a new algorithm that harnesses multiple GPU nodes to accelerate the training time of machine learning models.
“I can submit 50 to 100 jobs at the same time and because Gilbreth is such a powerful cluster, I only have to wait overnight before I have the results,” says Chih-hao Fang, the first author on the paper and a PhD student at Purdue working under the supervision of Ananth Grama, Samuel D. Conte Professor of Computer Science.
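Submitting a batch of jobs like the ones Fang describes is typically done through the cluster's scheduler. The sketch below is a hypothetical job script for a Slurm-managed GPU cluster such as Gilbreth; the account, module, script, and file names are placeholders, not details from the article.

```shell
#!/bin/bash
# Hypothetical Slurm job script for a GPU cluster (placeholder names throughout).
#SBATCH --job-name=train-model
#SBATCH --nodes=1
#SBATCH --gpus-per-node=1
#SBATCH --time=08:00:00

module load anaconda                 # load the site-provided Python environment
# Each array task trains one model configuration (train.py is a placeholder).
python train.py --config run_${SLURM_ARRAY_TASK_ID}.yaml
```

With a job array, many near-identical jobs can be queued in one command, e.g. `sbatch --array=1-100 train.slurm`, and monitored with `squeue -u $USER`.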
Empirical results show that their algorithm trains models significantly faster than other state-of-the-art distributed optimization methods.
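The general pattern behind multi-GPU training like this is synchronous data parallelism: each worker (here standing in for a GPU node) computes a gradient on its own shard of the data, the gradients are averaged, and one shared model is updated. The sketch below illustrates that pattern on a toy 1-D linear model; it is not the Purdue team's actual algorithm, and all names are illustrative.

```python
# Illustrative sketch of synchronous data-parallel training (not the
# Purdue team's actual method). Workers are simulated sequentially here;
# on real hardware each shard's gradient is computed on its own GPU node.

def local_gradient(w, shard):
    """Mean-squared-error gradient for the 1-D model y = w*x on one data shard."""
    g = 0.0
    for x, y in shard:
        g += 2.0 * (w * x - y) * x
    return g / len(shard)

def data_parallel_sgd(shards, w=0.0, lr=0.05, steps=200):
    """Each step: every worker computes a local gradient, the gradients are
    averaged (an all-reduce on real hardware), and the shared weight is updated."""
    for _ in range(steps):
        grads = [local_gradient(w, s) for s in shards]  # one gradient per worker
        w -= lr * sum(grads) / len(grads)               # average + single update
    return w

# Toy data drawn from y = 3x, split across two simulated workers.
shards = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
w = data_parallel_sgd(shards)
```

Because every worker sees the averaged gradient, all workers stay in lockstep on the same model, which is what lets adding nodes shorten training time rather than just splitting the data.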
Before beginning this project, Fang had little experience with Purdue’s GPU clusters, so he worked with Amiya Maji, senior computational scientist for ITaP Research Computing, to get started. Maji “patiently taught me from scratch,” Fang says, helping him install required software, submit jobs to the cluster, and monitor those jobs.
Fang and his colleagues presented their work in the “GPU Algorithms and Optimizations” track of the SC20 supercomputing conference, which was held virtually in November.
In addition to Fang and Grama, co-authors on the paper include Sudhir Kylasa, a postdoctoral research associate at Purdue and former student of Grama’s; Fred Roosta, a faculty member in the School of Mathematics and Physics at the University of Queensland; and Michael Mahoney, associate adjunct professor of statistics at the University of California, Berkeley.
To learn more about Purdue’s Community Cluster Program, contact Preston Smith, executive director of ITaP Research Computing, at firstname.lastname@example.org or 49-49729.
Writer: Adrienne Miller, science and technology writer, Information Technology at Purdue (ITaP), 765-496-8204, email@example.com