Sparse Matrix Algorithms: Combinatorics + Numerical Methods + Applications

Time

-

Locations

E1 104

Host

Applied Mathematics

Speaker

Tim Davis
Texas A&M
http://www.cise.ufl.edu/~davis/welcome.html


Tim Davis received his PhD from the University of Illinois at Urbana-Champaign in 1989. After a postdoc at CERFACS in Toulouse, France, he joined the faculty at the University of Florida. In June 2014, he became a Professor in the Computer Science and Engineering Department at Texas A&M. His primary scholarly contribution is the creation of widely used sparse matrix algorithms and software. This research domain lies at the intersection of graph algorithms, combinatorics, robust software design, and matrix computations. His software is relied upon by a vast array of commercial, government lab, and open source applications, including MATLAB (x=A\b when A is sparse), Mathematica, Google (Street View, Photo Tours, and 3D Earth), Octave, ANSYS, Cadence, MSC NASTRAN, Mentor Graphics, and many other applications in finite element methods, mathematical optimization, circuit simulation, VLSI design, robotics, graphics, computer vision, structural engineering, and geophysical modeling. As an NVIDIA Academic Partner, he is creating a new suite of highly parallel sparse direct methods that can exploit the high computational throughput of recent GPUs. He was recently elected a Fellow of the Society for Industrial and Applied Mathematics, "for contributions to sparse matrix algorithms and software, including the University of Florida Sparse Matrix Collection." Work is fun, but for pure fun, Tim does algorithmic art by translating music into visual art via Fourier transforms, graph algorithms, and force-directed graph visualization. His most recent commissioned work appeared as the theme artwork for the 2013 London Electronic Arts Festival. For more details, his professional web page is http://faculty.cse.tamu.edu/davis and his algorithmic art appears at NotesArtStudio.com.


Description

Sparse matrix algorithms lie at the intersection of graph theory and numerical linear algebra, and are a key component of high-performance combinatorial scientific computing. This talk highlights four of my contributions in this domain, ranging from theory and algorithms to reliable mathematical software and its impact on applications (all of which appear in x=A\b in MATLAB):

(1) Sparse Cholesky update/downdate
(2) Approximate minimum degree
(3) Unsymmetric multifrontal method for sparse LU factorization
(4) Multifrontal sparse QR factorization
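
The MATLAB calls below are a minimal sketch, not part of the abstract, of where algorithms of this kind surface in everyday use; which backend library each call dispatches to depends on the MATLAB release and on the properties of the matrix.

    % Build a small random sparse symmetric positive definite test problem.
    n = 1000;
    A = sprandsym(n, 0.01, 0.1, 1);   % sparse SPD test matrix
    b = rand(n, 1);

    x = A \ b;                        % backslash selects a sparse direct solver

    p = amd(A);                       % approximate minimum degree fill-reducing ordering
    R = chol(A(p, p));                % sparse Cholesky factorization of the reordered matrix

    [L, U, P, Q] = lu(A);             % sparse LU with row and column permutations
    [Q2, R2] = qr(A);                 % sparse QR factorization

    norm(A*x - b)                     % residual check on the backslash solve

At a high level, the fill-reducing ordering and the Cholesky, LU, and QR factorizations above correspond to items (1)-(4), though the exact mapping of MATLAB functions to underlying libraries is version-dependent.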

This talk also presents my current work in GPU-based heterogeneous high-performance parallel computing for sparse multifrontal methods. The method assembles and factorizes all frontal matrices on the GPU, without the need to transfer large amounts of data between the GPU and CPU: the sparse matrix is shipped to the GPU, and the final factors are retrieved when the factorization completes. A novel scheduling algorithm for communication-avoiding dense QR exposes a higher degree of parallelism than previous methods. Our research prototype reaches 150 GFlops for a large sparse QR factorization on the NVIDIA K20c GPU, with a typical 5x to 11x speedup on large problems (and a peak of 30x), compared to a highly optimized multicore sparse QR on the CPU.
