By Nages Sieslack
Few individuals have influenced high performance computing (HPC) as profoundly as Jack Dongarra. Over a career spanning more than four decades, his work has underpinned modern scientific discovery, supporting everything from climate modeling and physics simulations to artificial intelligence.
In 2021, Dongarra received the ACM A.M. Turing Award, often described as the “Nobel Prize of computing,” in recognition of his contributions to numerical algorithms and high-performance software. The ACM cited his role in developing methods and libraries that allowed scientific software to keep pace with dramatic changes in computing hardware over successive generations. Indeed, Dongarra’s work has been adapted across multiple generations of architectures, from early vector systems through HPC clusters to today’s heterogeneous, AI-accelerated supercomputers.
His influence began early. In the late 1970s, Dongarra developed LINPACK, a software library designed for solving systems of linear equations efficiently. In 1992, he led the development of LAPACK, extending those capabilities to new computer architectures. These tools became foundational to scientific computing. As Dongarra has observed, linear algebra sits “at the heart of many scientific and engineering applications,” and his work ensured those applications could run efficiently on emerging machines.
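To make the kind of problem concrete: LINPACK and LAPACK solve dense systems of linear equations. The sketch below (an illustration, not code from either library) uses NumPy, whose `solve` routine dispatches to LAPACK under the hood, performing an LU factorization with partial pivoting and then back-substitution.

```python
# Solving a dense linear system Ax = b, the core task LINPACK and
# LAPACK were built for. NumPy delegates this to LAPACK routines.
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))   # dense coefficient matrix
b = rng.standard_normal(n)        # right-hand side

x = np.linalg.solve(A, b)         # LU factorization + back-substitution

# The residual should be near machine precision for a small system
residual = np.linalg.norm(A @ x - b)
print(residual)
```

The same call scales from this 4×4 toy to the very large systems that arise in climate models and physics simulations, which is precisely why efficient, portable implementations mattered so much.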
In 1993, LINPACK became the basis for the TOP500 list, the global ranking of the world’s fastest supercomputers. What began as an effort to create a consistent way to measure performance evolved into one of the most widely recognized benchmarks in computing. The TOP500 did more than rank systems; it helped define progress, offering a transparent way to understand how architectures were evolving and where the performance frontier lay.
Dongarra’s career has spanned academia, national laboratories, and international collaborations. At the University of Tennessee, he founded the Innovative Computing Laboratory, which became a leading center for high performance computing research. He also maintained deep ties with Oak Ridge National Laboratory, home to some of the world’s most powerful supercomputers, and held other positions, including a visiting professorship at the University of Manchester. His contributions have been recognized globally through election to the U.S. National Academy of Sciences, the National Academy of Engineering, and the Royal Society.
Yet Dongarra has consistently emphasized that computing advances are rarely the result of isolated breakthroughs. Instead, they emerge from sustained collaboration between mathematicians, computer scientists, and hardware engineers. His career reflects that intersection, bridging theory and practical implementation at a time when computing was evolving from specialized research equipment into essential scientific infrastructure.
Even now, his influence continues. Though formally retired from teaching, Dongarra remains active in research and mentorship at the University of Tennessee. In recent months, he has traveled extensively, speaking to students and researchers about the future of extreme-scale computing and artificial intelligence. In Tunisia, he met with young researchers exploring computational science. In China and Singapore, he delivered lectures on how numerical algorithms continue to shape modern AI and simulation. These appearances reflect his enduring role not only as a pioneer, but as a guide to the next generation.
That perspective will inform his closing keynote at ISC High Performance 2026 in Hamburg, where he will speak on “HPC in Transition.” Given that few figures have witnessed and shaped so many phases of computing’s evolution, his insights into what lies ahead promise to be enlightening.