Computational complexity of matrix multiplication
Algorithmic runtime requirements for matrix multiplication
Unsolved problem in computer science:
What is the fastest algorithm for matrix multiplication?
In theoretical computer science, the computational complexity of matrix multiplication dictates how quickly the operation of matrix multiplication can be performed. Matrix multiplication algorithms are a central subroutine in theoretical and numerical algorithms for numerical linear algebra and optimization, so finding the fastest algorithm for matrix multiplication is of major practical relevance.
Directly applying the mathematical definition of matrix multiplication gives an algorithm that requires n^3 field operations to multiply two n × n matrices over that field (Θ(n^3) in big O notation). Surprisingly, algorithms exist that provide better running times than this straightforward "schoolbook algorithm". The first to be discovered was Strassen's algorithm, devised by Volker Strassen in 1969 and often referred to as "fast matrix multiplication".[1] The optimal number of field operations needed to multiply two square n × n matrices up to constant factors is still unknown. This is a major open question in theoretical computer science.
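The contrast between the schoolbook algorithm and Strassen's approach can be sketched as follows. This is an illustrative sketch, not the implementation used in optimized libraries; the key point is that Strassen's identities trade one multiplication for extra additions:

```python
def schoolbook_matmul(A, B):
    """Multiply two n x n matrices by the definition: Theta(n^3) field operations."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):          # i-k-j loop order improves memory locality
            for j in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

def strassen_2x2(A, B):
    """One level of Strassen's recursion on 2x2 matrices:
    7 multiplications instead of 8, at the cost of extra additions."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]
```

Applying the seven-multiplication step recursively to block matrices (treating a, b, …, h as n/2 × n/2 blocks) yields Strassen's O(n^log₂7) ≈ O(n^2.807) running time, since each level replaces 8 recursive multiplications with 7.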
As of January 2024, the best bound on the asymptotic complexity of a matrix multiplication algorithm is O(n^2.371552).[2][3] However, this and similar improvements to Strassen are not used in practice, because they are galactic algorithms: the constant coefficient hidden by the big O notation is so large that they are only worthwhile for matrices that are too large to handle on present-day computers.[4][5]
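A toy calculation illustrates why such galactic algorithms are impractical. The constant factor C below is a purely hypothetical assumption (the true hidden constants are not published as simple numbers); the point is that even a modest-looking constant pushes the crossover to astronomically large dimensions:

```python
# Toy crossover estimate: an algorithm running in C * n**omega time beats
# schoolbook n**3 only once C * n**omega < n**3, i.e. for n > C**(1/(3 - omega)).
# C is an illustrative assumption, not a measured value for any real algorithm.
OMEGA = 2.371552   # current best exponent bound
C = 1e20           # hypothetical constant hidden by the big O notation

crossover_n = C ** (1 / (3 - OMEGA))
print(f"the fast algorithm wins only for n > {crossover_n:.1e}")
```

With this C, the crossover dimension is on the order of 10^31, far beyond any matrix that could be stored on present-day hardware.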
^ Strassen, Volker (Aug 1969). "Gaussian elimination is not optimal". Numerische Mathematik. 13 (4): 354–356. doi:10.1007/BF02165411. S2CID 121656251.
^ Vassilevska Williams, Virginia; Xu, Yinzhan; Xu, Zixuan; Zhou, Renfei (2024). "New Bounds for Matrix Multiplication: from Alpha to Omega". Proceedings of the 2024 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA). pp. 3792–3835. arXiv:2307.07970. doi:10.1137/1.9781611977912.134.
^ Nadis, Steve (March 7, 2024). "New Breakthrough Brings Matrix Multiplication Closer to Ideal". Quanta Magazine. Retrieved 2024-03-09.
^ Iliopoulos, Costas S. (1989). "Worst-case complexity bounds on algorithms for computing the canonical structure of finite abelian groups and the Hermite and Smith normal forms of an integer matrix" (PDF). SIAM Journal on Computing. 18 (4): 658–669. CiteSeerX 10.1.1.531.9309. doi:10.1137/0218045. MR 1004789. Archived from the original (PDF) on 2014-03-05. Retrieved 2015-01-16. "The Coppersmith–Winograd algorithm is not practical, due to the very large hidden constant in the upper bound on the number of multiplications required."
^ Robinson, Sara (November 2005). "Toward an Optimal Algorithm for Matrix Multiplication" (PDF). SIAM News. 38 (9). "Even if someone manages to prove one of the conjectures—thereby demonstrating that ω = 2—the wreath product approach is unlikely to be applicable to the large matrix problems that arise in practice. [...] the input matrices must be astronomically large for the difference in time to be apparent."