By Valentina De Vincenti
When it comes to high-speed processing of information from enormous datasets, expensive, mammoth memory is not an ally but rather the beast within the beauty of the computational game. An agile, cost-effective, and timely 'in place' computational solver running on Graphics Processing Units (GPUs) is the ultimate solution.
Professor David Keyes, head of the Extreme Computing Research Center, together with Hatem Ltaief, Senior Research Scientist, and Ali Charara, Ph.D. student at the Center, has won that computational game.
"Solving systems of multiple simultaneous equations involving thousands to millions of variables incurs tremendous energy costs to store and then process the data," explained Professor Keyes, presenting research conducted jointly with NVIDIA, the California-based global leader in GPUs for gaming graphics and parallel computing.
With new applications already foreseen for the company's next scientific software library, the approach proposed by Keyes and co-workers overcomes severe limitations imposed by the large memory-based infrastructure of standard heavyweight hardware, such as CPUs, generally used to perform such tasks.
The team instead opted for the lightweight architecture of GPUs. Broadly used in computer gaming and in mobile and PC graphics, GPUs allowed the team to build an effective, result-driven computational framework that increases the number of processors while reducing the memory required to temporarily store the data.
But there is more to it. Drawing on first-hand experience as an NVIDIA intern, Ph.D. student Ali Charara pushed the team's success even further. Charara designed a solver scheme that operates directly on the data 'in place', without making an extra copy. This cutting-edge solver works on successive panels of columns of the matrix derived from the simultaneous equations; each panel is broken down into smaller rectangular and triangular tasks that can be processed in the GPU's memory hierarchy at the speed of the higher cache levels.
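The idea of splitting a triangular problem into smaller triangular and rectangular tasks can be sketched generically. The following minimal Python/NumPy illustration solves a lower-triangular system in place, recursively reducing it to a smaller triangular solve plus a rectangular matrix-multiply update; the function name, blocking strategy, and block size are hypothetical choices for illustration, not the team's actual GPU implementation.

```python
import numpy as np

def trsm_inplace(L, B, block=2):
    """Solve L @ X = B for X, overwriting B in place (no extra copy).

    L is lower-triangular. Each recursion step produces one smaller
    triangular solve plus a rectangular (matrix-multiply) update,
    mirroring the triangular/rectangular task split described above.
    Illustrative sketch only, not the actual GPU library code.
    """
    n = L.shape[0]
    if n <= block:
        # Base case: direct forward substitution on a tiny block
        for i in range(n):
            B[i, :] = (B[i, :] - L[i, :i] @ B[:i, :]) / L[i, i]
        return
    m = n // 2
    # Triangular task: solve the top-left block for the first panel
    trsm_inplace(L[:m, :m], B[:m, :], block)
    # Rectangular task: update the remaining rows with a matrix multiply
    B[m:, :] -= L[m:, :m] @ B[:m, :]
    # Triangular task: solve the bottom-right block
    trsm_inplace(L[m:, m:], B[m:, :], block)

# Small demonstration on a well-conditioned triangular system
rng = np.random.default_rng(0)
L = np.tril(rng.random((8, 8))) + 8 * np.eye(8)
B = rng.random((8, 3))
X = B.copy()
trsm_inplace(L, X)   # X now holds the solution, computed in place
```

On a GPU, the appeal of this decomposition is that the rectangular updates map to highly efficient matrix-multiply kernels, while the small triangular base cases fit in fast cache-level memory.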
Several tests proved the efficiency of the GPU-based computational solver, which recorded an impressive eightfold speedup on large dense matrices over NVIDIA's existing implementations of basic linear algebra software.
Mirroring this success, the team has attracted NVIDIA's interest in bringing the GPU-based solver optimization to industrial scale, advancing solutions for research laboratories and consumer applications alike.