
Recent Talks

  1. Matrix computations: Going beyond libraries
    eSSENCE, Swedish e-Science Academy, Umeå, Sweden, October 2022.
  2. The fragmented landscape of tensor computations
    Chalmers University, 4th Workshop on Scientific Computing in Sweden (SwedComp22), Göteborg, Sweden, October 2022.
  3. High-performance matrix computations: It’s not all about libraries
    RWTH Aachen University, EU Regional School, Aachen, Germany, May 2022.
  4. Software for tensor computations: What is happening???
    Dagstuhl Seminar 22101, Tensor Computations: Applications and Optimization, Dagstuhl, Germany, March 2022.
  5. The MTTKRP, a closer look
    Dagstuhl Seminar 22101, Tensor Computations: Applications and Optimization, Dagstuhl, Germany, March 2022.
  6. Parallel Algorithms --- Introduction to High Performance Computing
    PDC summer school on High-Performance Computing, KTH, Stockholm, August 2021.
  7. High-Performance Tensor Computations: Where Do We Stand?
    SIAM Conference on Computational Science and Engineering, Dallas (via Zoom), March 2021.
    Since the introduction of the BLAS-1 library more than 40 years ago, the entire domain of matrix computations has evolved around well-defined layers and a few "container" libraries that include state-of-the-art algorithms and implementations for a specific class of problems and/or a specific type of parallelism; these libraries have served, and still serve, the needs of a vast ecosystem of applications. In stark contrast, the domain of tensor computations still lacks a set of building blocks, and many similar libraries are developed independently in different application domains. This situation inevitably leads to redundancy and to suboptimal results. Furthermore, the software landscape for tensor computations is fragmented in terms of features, programming languages, and computing platforms, to the point that comparisons between new and existing algorithms are excessively challenging. In this talk we survey the software for high-performance tensor computations and make suggestions for an initial set of computational building blocks. (A minimal sketch of one such candidate building block, the MTTKRP, appears after this list.)
  8. Tensor computations: A fragmented landscape
    Huawei, January 2021.
    Since the 1970s, the domain of matrix computations has evolved around well-established libraries (e.g., LINPACK, BLAS, LAPACK, PETSc) with clearly defined interfaces. Such libraries include state-of-the-art algorithms for specific problems and/or for specific types of parallelism. Written in Fortran and/or C, these libraries have served, and still serve, the needs of a vast ecosystem of applications. In stark contrast, the domain of tensor computations is evolving in a seemingly uncoordinated fashion. Countless libraries and packages are being developed in different application domains, resulting in redundancy and suboptimal performance. The software landscape is fragmented in terms of features, programming languages, and computing platforms, to the point of obstructing comparisons between new and existing algorithms. In this talk we first give an overview of different tensor operations as they arise in scientific computing and data science. We focus on software, and contrast the development of linear algebra and tensor libraries. Finally, we make suggestions for an initial set of computational building blocks for tensor operations, and discuss related challenges. (A sketch mapping a tensor contraction onto a single GEMM call appears after this list.)
  9. Achieving the compute bound with CP-CALS
    BLIS Retreat 2020, 5 October 2020.
  10. How fast can you drive your computer?
    Kunskapsveckan 2020, Umeå, October 2020.
    Nowadays, any computer is computationally incredibly powerful. As a reference, the computational power of any of today's laptops surpasses that of the world's fastest supercomputer of 1996. By analogy with cars, it is as if everyone owned a Ferrari, a Lamborghini, or a Formula 1 car. And just as only select drivers can race such supercars at more than 300 km/h, exploiting the full potential of a computer is a challenging task that only a few experts can perform. In most cases, users rely on well-known programming languages and compilers, trusting them to "drive their computers fast enough". Is this really the case? Our investigation illustrates that while programming languages are excellent at computations with numbers, they still cannot compete with human solutions when it comes to more advanced mathematical operations involving vectors and matrices. Our study aims to give directions for the future development of programming languages. (A toy timing experiment illustrating this gap appears after this list.)
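As a concrete illustration of the "building blocks" discussed in talks 5 and 7, here is a minimal sketch, not taken from any of the talks, of the MTTKRP (matricized tensor times Khatri-Rao product): once as a one-line numpy.einsum call, and once reduced to a single BLAS-3 matrix product. All variable names and sizes are made up for illustration.

```python
import numpy as np

# Mode-1 MTTKRP for a third-order tensor X (I x J x K) with factor
# matrices B (J x R) and C (K x R):
#     M[i, r] = sum over j, k of X[i, j, k] * B[j, r] * C[k, r]

rng = np.random.default_rng(0)
I, J, K, R = 30, 40, 50, 8
X = rng.standard_normal((I, J, K))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))

# One-line formulation via einsum.
M = np.einsum('ijk,jr,kr->ir', X, B, C)

# Equivalent formulation: mode-1 unfolding of X times the Khatri-Rao
# product of C and B, i.e., the operation reduced to a single GEMM.
khatri_rao = (C[:, None, :] * B[None, :, :]).reshape(K * J, R)
X1 = X.reshape(I, J * K, order='F')  # mode-1 unfolding (column j + J*k)
M_gemm = X1 @ khatri_rao
assert np.allclose(M, M_gemm)
```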
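In the spirit of talks 2 and 8, the sketch below, again purely illustrative and with assumed names and sizes, writes one and the same tensor contraction through two different high-level interfaces and then as a single GEMM over flattened modes; the multiplicity of notations for a single operation is exactly the fragmentation the abstracts describe.

```python
import numpy as np

# Contraction: C[i, l] = sum over j, k of A[i, j, k] * B[j, k, l].
rng = np.random.default_rng(1)
ni, nj, nk, nl = 20, 30, 40, 25
A = rng.standard_normal((ni, nj, nk))
B = rng.standard_normal((nj, nk, nl))

# Two high-level interfaces, each with its own notation.
C_einsum = np.einsum('ijk,jkl->il', A, B)
C_tensordot = np.tensordot(A, B, axes=([1, 2], [0, 1]))

# The same contraction as one BLAS-3 call after flattening the
# contracted modes (no transpose is needed here, since the contracted
# indices are already contiguous in both operands).
C_gemm = A.reshape(ni, nj * nk) @ B.reshape(nj * nk, nl)

assert np.allclose(C_einsum, C_tensordot)
assert np.allclose(C_einsum, C_gemm)
```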
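Finally, the performance gap mentioned in talk 10 can be made concrete with a toy experiment, a sketch rather than the actual study from the talk, with arbitrary sizes: the same matrix product computed with straightforward loops and through an optimized library (NumPy delegates the product to a tuned BLAS).

```python
import time
import numpy as np

n = 200  # kept small so the loop version finishes quickly
rng = np.random.default_rng(2)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

def naive_matmul(A, B):
    """Textbook triple loop: what a plain program would do."""
    rows, cols, inner = A.shape[0], B.shape[1], A.shape[1]
    C = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            s = 0.0
            for k in range(inner):
                s += A[i, k] * B[k, j]
            C[i, j] = s
    return C

t0 = time.perf_counter()
C_naive = naive_matmul(A, B)
t1 = time.perf_counter()
C_blas = A @ B  # dispatched to an optimized BLAS kernel
t2 = time.perf_counter()

assert np.allclose(C_naive, C_blas)
print(f"naive loops: {t1 - t0:.3f} s, BLAS-backed: {t2 - t1:.5f} s")
```

On a typical laptop the library version is orders of magnitude faster, which is the gap the talk attributes to expert-written kernels.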