TridiagLU 1.0
Scalable, parallel solver for tridiagonal systems of equations
TridiagLU Documentation
Debojyoti Ghosh [Email: (first name) (dot) (last name) (at) gmail (dot) com]

TridiagLU is a distributed-memory solver for non-periodic, tridiagonal (including block tridiagonal) systems of equations. It is written completely in C and uses the MPICH library. The following solvers are available:


  • These functions are designed to solve several systems in one call, to exploit the higher arithmetic density of multiple simultaneous solves; a single system can, of course, be solved as well.
  • See the documentation of the above functions to understand the array layouts used here.
  • There are some customizable parameters (e.g., the type of reduced-system solve, and the solver tolerances and verbosity for the iterative solvers) that can be set using an optional input file (see the documentation for tridiagLUInit()). If the input file is absent, default values are used.

References for the algorithm implemented in tridiagLU() and blocktridiagLU():


The code is available at:

It can be cloned using git as follows:

Bitbucket also allows downloading the package, see

Using in another code

Copy the header files in /include to the include directory of your code, copy the source files in /src/TridiagLU to its source directory, and compile as usual.

Note: for tridiagScaLPK() to be available, compile with the -Dwith_scalapack flag.

See test_mpi() and test_block_mpi() for examples of how to call the solvers.


To generate a local copy of the documentation, run "doxygen Doxyfile" in the top-level directory. The folder /doc should then contain the generated documentation in HTML and PDF formats.

Compiling and Testing

A test suite is also available to quickly test the implementation of these solvers. To compile and run the test suite, follow these steps:


  autoreconf -i
  [CFLAGS="..."] ./configure [options]
  make install

CFLAGS should include all the compiler flags. The flags specific to tridiagLU are:

The configure options can include options such as BLAS/LAPACK location, MPI directory, etc. Type "./configure --help" to see a full list. The options specific to tridiagLU are:

  • --with-mpi-dir: Specify the path where mpicc is installed.
  • --enable-scalapack: Enable ScaLAPACK (this adds the compilation flag -Dwith_scalapack).
  • --with-blas-dir: Specify the path where BLAS is installed (relevant only if --enable-scalapack is specified).
  • --with-lapack-dir: Specify the path where LAPACK is installed (relevant only if --enable-scalapack is specified).
  • --with-scalapack-dir: Specify the path where ScaLAPACK is installed (relevant only if --enable-scalapack is specified).
  • --with-fortran-lib: Specify the path where the Fortran libraries (needed by ScaLAPACK) are installed (relevant only if --enable-scalapack is specified).
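Putting the steps together, a build with ScaLAPACK enabled might look like the following sketch; every path below is a placeholder, to be replaced with the install locations on your machine.

```shell
# Sketch of a full build with ScaLAPACK support enabled.
# All /path/to/... directories are placeholders.
autoreconf -i
CFLAGS="-O3" ./configure \
  --with-mpi-dir=/path/to/mpich \
  --enable-scalapack \
  --with-blas-dir=/path/to/blas \
  --with-lapack-dir=/path/to/lapack \
  --with-scalapack-dir=/path/to/scalapack
make install
```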

Once everything is compiled, the executable can be run to test the solvers. It needs an input file called "input" containing 3 integers: the global size of the system, the number of systems, and the number of repeated solves to run for wall-time measurement. For example, an input file with the following content

1000 20 500

will solve 20 systems of global size 1000, and 500 solves will be carried out for wall time measurements (see test_mpi() and test_block_mpi() for more details).
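For instance, the input file above can be created as follows; the MPI launch line is shown only as a comment, since the name of the test executable produced by the build is not fixed here and the process count is an arbitrary choice.

```shell
# Write the three-integer input file described above: global system
# size, number of systems, and number of repeated solves for timing.
printf "1000 20 500\n" > input

# Then launch the test executable under MPI from the same directory,
# e.g. (executable name and process count are placeholders):
#   mpiexec -n 4 ./<test_executable>
```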


  • This package has been tested using the GNU and IBM C compilers. The configuration script is designed to look for these compilers only.
  • Feel free to contact me about anything regarding this (doubts/difficulties/suggestions).