LAPACK

LAPACK relies on an underlying BLAS implementation to provide efficient and portable computational building blocks for its routines.
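
For illustration, the sketch below drives LAPACK from C through the LAPACKE interface: it solves a small linear system A·x = b with LAPACKE_dgesv. The 3×3 matrix values and the assumption that the program is linked against a LAPACK build together with some BLAS library are arbitrary choices for the example. In the reference implementation, dgesv factors the matrix with dgetrf and back-substitutes with dgetrs, and those routines spend their inner loops in BLAS kernels, which is what the layering described above looks like in practice.

#include <stdio.h>
#include <lapacke.h>

int main(void)
{
    /* 3x3 system A*x = b, stored column by column (Fortran order). */
    double a[9] = { 2.0, 1.0, 1.0,    /* first column of A  */
                    1.0, 3.0, 0.0,    /* second column of A */
                    1.0, 2.0, 1.0 };  /* third column of A  */
    double b[3] = { 4.0, 5.0, 6.0 };
    lapack_int ipiv[3];

    /* dgesv = LU factorization with partial pivoting followed by a solve;
     * on success b is overwritten with the solution x. */
    lapack_int info = LAPACKE_dgesv(LAPACK_COL_MAJOR, 3, 1, a, 3, ipiv, b, 3);
    if (info != 0) {
        fprintf(stderr, "dgesv failed: info = %d\n", (int)info);
        return 1;
    }
    printf("x = [%g, %g, %g]\n", b[0], b[1], b[2]);
    return 0;
}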

In contrast to LINPACK, LAPACK was designed to effectively exploit the caches on modern cache-based architectures and the instruction-level parallelism of modern superscalar processors,[2]: "Factors that Affect Performance" and thus can run orders of magnitude faster than LINPACK on such machines, given a well-tuned BLAS implementation.[2]: "The BLAS as the Key to Portability"
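
The role of a well-tuned BLAS can be made concrete with a rough timing sketch such as the one below, which factors a 2000×2000 random matrix with LAPACKE_dgetrf and prints the elapsed wall-clock time. It is not a rigorous benchmark: it assumes a POSIX clock_gettime, the LAPACKE interface, and an arbitrary matrix size and random fill. Linking the same code first against the reference BLAS and then against a tuned implementation such as OpenBLAS or a vendor BLAS, and comparing the two timings, shows how much of the routine's speed comes from the underlying BLAS level-3 kernels.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <lapacke.h>

int main(void)
{
    const lapack_int n = 2000;
    double *a = malloc((size_t)n * n * sizeof *a);
    lapack_int *ipiv = malloc((size_t)n * sizeof *ipiv);
    if (!a || !ipiv) return 1;

    /* Fill A with pseudo-random values; good enough for a timing run. */
    srand(42);
    for (lapack_int i = 0; i < n * n; i++)
        a[i] = (double)rand() / RAND_MAX;

    /* Time the LU factorization (dgetrf), whose blocked algorithm does
     * most of its work in BLAS level-3 calls. */
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    lapack_int info = LAPACKE_dgetrf(LAPACK_COL_MAJOR, n, n, a, n, ipiv);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("dgetrf on %d x %d: info = %d, %.3f s\n",
           (int)n, (int)n, (int)info, secs);

    free(a);
    free(ipiv);
    return 0;
}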

[2]: "The BLAS as the Key to Portability"  LAPACK has also been extended to run on distributed memory systems in later packages such as ScaLAPACK and PLAPACK.

Additionally, many other software libraries and tools for scientific and numerical computing are built on top of LAPACK, such as R,[8] MATLAB,[9] and SciPy.[10] Several alternative language bindings are also available.

As with BLAS, LAPACK is sometimes forked or rewritten to provide better performance on specific systems.