Saturday, February 26

MS81
Parallel Algorithms for Tensor Computations and Their Applications - Part III of III

1:50 PM - 3:30 PM

For Part II, see MS71

Tensors, or multidimensional arrays, are a natural way to represent the high-dimensional data arising in a multitude of applications. Tensor decompositions, such as the CANDECOMP/PARAFAC (CP), Tucker, and Tensor Train models, help identify latent structure, achieve data compression, and enable other tasks in scientific computing and data analysis. This minisymposium explores recent advances in algorithms for computing tensor decompositions, parallel algorithms for computing key tensor decomposition kernels, and applications of these methods to scientific and data analysis use cases.
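
For readers unfamiliar with the CP model named above, the sketch below shows the standard alternating least squares (ALS) fitting approach on a dense 3-way NumPy array. It is a minimal serial illustration under our own naming (cp_als, khatri_rao are hypothetical helpers), not any speaker's implementation, and it omits the normalization and convergence checks a production code would have.

```python
import numpy as np

def khatri_rao(U, V):
    # Column-wise Khatri-Rao product: row (u, v) holds U[u, :] * V[v, :].
    r = U.shape[1]
    return (U[:, None, :] * V[None, :, :]).reshape(-1, r)

def cp_als(X, rank, iters=50, seed=0):
    # Minimal rank-`rank` CP (CANDECOMP/PARAFAC) fit of a 3-way tensor X
    # by alternating least squares, so that
    #   X[i, j, k] ~= sum_r A[i, r] * B[j, r] * C[k, r].
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    # Mode-n unfoldings (rows indexed by mode n, C-order on the rest).
    X0 = X.reshape(I, J * K)
    X1 = np.moveaxis(X, 1, 0).reshape(J, I * K)
    X2 = np.moveaxis(X, 2, 0).reshape(K, I * J)
    for _ in range(iters):
        # Each factor update is a linear least-squares solve against the
        # Khatri-Rao product of the other two factors; the X @ khatri_rao(...)
        # products are the MTTKRP, the dominant kernel of CP-ALS and a
        # typical target for the parallel algorithms discussed in this series.
        A = X0 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X1 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X2 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Example: recover a random rank-3 tensor (relative error should be ~0).
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 3)) for n in (8, 9, 10))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(X, rank=3)
print(np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(X))
```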

Organizers: Grey Ballard
Wake Forest University, U.S.
Cannada A. Lewis
Sandia National Laboratories, U.S.
Jeremy Myers
College of William & Mary, U.S.
Eric Phipps
Sandia National Laboratories, U.S.

1:50-2:10 Memory-Efficient Tensorized Embedding Layers for Neural Networks
Chunxing Yin and Rich Vuduc, Georgia Institute of Technology, U.S.
2:15-2:35 Robust Approximation of Tensor Networks, and Its Applications in Quantum Chemistry
Edward Valeev and Karl Pierce, Virginia Tech, U.S.
2:40-3:00 Structured Matrix Approximations via Tensor Decompositions
Arvind Saibaba, North Carolina State University, U.S.; Misha E. Kilmer, Tufts University, U.S.
3:05-3:25 Parallel Memory-Efficient Computation of Symmetric Higher-Order Joint Moment Tensors
Zitong Li, Wake Forest University, U.S.; Hemanth Kolla and Eric Phipps, Sandia National Laboratories, U.S.