Hello World,
In this article, let us look at matrix multiplication from a different perspective. Matrix multiplication is often taught and learned as a dull process of memorising rules, which spoils the fun of doing it. So why don't we look for a better way to think about it, and have some fun while we are at it?
Before discussing matrix multiplication, let me start with an even more basic question: what does simple multiplication mean?
To me, it feels like this. Say your IQ is 120. You have a magic wand; you wave it and say, "Let my IQ double." With multiplication you can grow or shrink a quantity as you wish.
Suppose I have a Rs. 1000/- note in my purse. If I wish, "Let the amount in my purse grow 1.5x," the Rs. 1000/- turns into Rs. 1500/- :-)))
We can't multiply things that don't exist. I don't have a private jet, so I can't multiply it.
So this concept can be thought of as shrinking or expanding values.
Let's take the simple case of multiplying a $1\times N$ matrix by an $N\times 1$ matrix.
Analogy 1
$\begin{pmatrix}\text{Lemon} & \text{Sugar} & \text{Salt} & \text{Water}\end{pmatrix}\times
\begin{pmatrix}
2\ \text{spoons}\\
2\ \text{spoons}\\
0.5\ \text{spoon}\\
1\ \text{glass}
\end{pmatrix}
= \begin{pmatrix}\text{Lemonade with a particular taste}\end{pmatrix}$
In this analogy, the ingredients are laid out on the table as a row and the recipe as a column:
[Ingredients][quantity] = [final product]
The output number can be thought of as the taste or quality of the final product. It is not a perfect analogy (for example, negative spoons are not possible in real life; -2 spoons of sugar means nothing), but the point is that we have made a connection between abstract matrix multiplication and a real-life event, and the chances are slim that we forget both together.
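To make the analogy concrete, here is a minimal sketch in Python with NumPy. The per-unit "taste scores" for the ingredients are made-up numbers, used only to show the row-times-column mechanics of a $1\times N$ by $N\times 1$ product.

```python
import numpy as np

# Hypothetical per-unit "taste scores" for Lemon, Sugar, Salt, Water (made-up numbers).
ingredients = np.array([[3.0, 5.0, -1.0, 0.5]])   # 1 x 4 row vector

# The recipe: how much of each ingredient goes in, as a column.
quantities = np.array([[2.0],    # 2 spoons of lemon juice
                       [2.0],    # 2 spoons of sugar
                       [0.5],    # 0.5 spoon of salt
                       [1.0]])   # 1 glass of water

# (1 x 4) @ (4 x 1) -> a 1 x 1 matrix: one number summarising the "taste".
taste = ingredients @ quantities
print(taste)   # [[16.]]
```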
Analogy 2
$\begin{pmatrix}\text{Quizzes} & \text{Assignments} & \text{Midterm} & \text{Final}\end{pmatrix}\times
\begin{pmatrix}
10\%\\
20\%\\
20\%\\
50\%
\end{pmatrix}
= \begin{pmatrix}\text{Final mark}\end{pmatrix}$
An additional insight here is that the instructor can adjust the weights as desired. Doesn't this process look like combining features in machine learning, or applying the weights of a neural network?
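Here is a small sketch of that grade calculation in Python with NumPy. The weights come from the example above; the individual marks are made up purely for illustration.

```python
import numpy as np

# Marks out of 100 for quizzes, assignments, midterm and final (hypothetical numbers).
marks = np.array([[80, 90, 70, 85]])      # 1 x 4 row vector

# The instructor's weights from the example: 10%, 20%, 20%, 50%.
weights = np.array([[0.10],
                    [0.20],
                    [0.20],
                    [0.50]])              # 4 x 1 column vector

# (1 x 4) @ (4 x 1) -> the final mark as a weighted sum.
final_mark = marks @ weights
print(final_mark)   # [[82.5]]
```

Changing the weights column changes the grading scheme without touching the marks, which is exactly the "weights" idea from machine learning.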
Matrix multiplication can be thought of as a way to combine two matrices to obtain a third matrix. The resulting matrix represents a transformation of the data or vectors that are represented by the input matrices.
The intuition behind matrix multiplication is best understood by looking at how it operates on individual elements of the matrices. When multiplying two matrices, the value of each element in the resulting matrix is obtained by taking the dot product of a row in the first matrix with a column in the second matrix.
One way to visualize this operation is to think of each row of the first matrix as a vector, and each column of the second matrix as a vector. The resulting matrix is then obtained by taking the dot product of each row vector with each column vector.
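To make that row-times-column rule visible, here is a plain Python sketch of general matrix multiplication, written with explicit loops only for clarity; in practice you would use an optimised routine such as NumPy's `@` operator.

```python
def matmul(A, B):
    """Multiply matrix A (m x n) by matrix B (n x p) using the row-column rule."""
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "columns of A must match rows of B"
    C = [[0] * p for _ in range(m)]
    for i in range(m):            # pick a row of A ...
        for j in range(p):        # ... and a column of B
            # C[i][j] is the dot product of row i of A with column j of B.
            C[i][j] = sum(A[i][k] * B[k][j] for k in range(n))
    return C

A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]
print(matmul(A, B))   # [[19, 22], [43, 50]]
```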
Creative questions on this topic
- What is the need for such a row-by-column rule? Is it just for easy readability?
- Why is A*B not equal to B*A?
Matrix multiplication is defined the way it is because it allows us to represent and perform linear transformations on vectors and data sets in a concise and efficient manner.
Matrix multiplication can be thought of as a way to combine the rows of one matrix with the columns of another, resulting in a new matrix whose elements are obtained by taking the dot product of the corresponding row and column. This operation is used to transform and manipulate vectors, and it allows us to represent complex transformations and operations in a compact and computationally efficient way.
The definition of matrix multiplication is based on the properties of linear transformations, which are operations that preserve the properties of lines and planes in space. Linear transformations are important in many fields of study, including physics, engineering, and computer science, and matrix multiplication is a powerful tool for representing and performing these transformations.
The way matrix multiplication is defined also allows us to perform various algebraic operations, such as matrix inversion, matrix transposition, and eigenvalue decomposition, which are important in many applications. These operations can be used to solve systems of linear equations, analyze data sets, and perform machine learning tasks.
In short, matrix multiplication is defined the way it is because of the principles of linear algebra and the need to represent and perform complex transformations on data sets and vectors in an efficient and concise manner. Its usefulness and versatility make it an essential tool in many fields of study.
Matrix multiplication is not commutative because the order of the matrices matters when we multiply them. In other words, the product A × B is generally not the same as B × A, except in certain special cases (for non-square matrices, B × A may not even be defined).
The reason for this is that matrix multiplication is based on the concept of linear transformations, which are not commutative in general. When we multiply two matrices, we are essentially composing two linear transformations, and the order in which we apply them can affect the result.
For example, if A represents a rotation of a vector and B represents a non-uniform scaling of the same vector (stretching one axis more than another), the product AB will give a different result than BA, since such a rotation and scaling do not commute.
In short, matrix multiplication is not commutative because the order of the matrices matters, and the result depends on their dimensions and on the nature of the linear transformations they represent.
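Here is a quick numerical check of that point, assuming a 90° rotation and a non-uniform scaling in 2D; the stretch factors are deliberately unequal, since a uniform scaling would actually commute with the rotation.

```python
import numpy as np

# A = rotation by 90 degrees, B = non-uniform scaling (x is stretched, y is not).
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
B = np.array([[2.0, 0.0],
              [0.0, 1.0]])

print(A @ B)        # [[ 0. -1.]
                    #  [ 2.  0.]]
print(B @ A)        # [[ 0. -2.]
                    #  [ 1.  0.]]

# Applying them to a vector in different orders also gives different results:
v = np.array([1.0, 0.0])
print(A @ (B @ v))  # scale first, then rotate -> [0. 2.]
print(B @ (A @ v))  # rotate first, then scale -> [0. 1.]
```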
Applications
Matrix multiplication is a fundamental operation in linear algebra and has many important applications in mathematics, engineering, and computer science. Here are a few examples of how matrix multiplication is used in different fields:
- Linear transformations: Matrix multiplication is used to represent and apply linear transformations in computer graphics, robotics, physics, and other fields. A matrix can represent a transformation of a vector, such as a rotation, scaling, or translation.
- Solving systems of linear equations: Matrix multiplication can be used to express and solve systems of linear equations, which are common in engineering, physics, economics, and other fields. Writing the coefficients as a matrix A and the unknowns as a vector x turns the system into A x = b, which can then be solved for x.
- Markov chains: Matrix multiplication is used to analyze Markov chains, which are mathematical models of systems that transition between different states over time. By multiplying a probability vector with a transition matrix, we can calculate the probability of being in each state at a given time (see the sketch after this list).
- Image processing: Matrix multiplication is used in image processing to apply filters and transformations to images. A matrix can represent a convolution kernel, which can be multiplied with an image matrix to apply a filter.
- Machine learning: Matrix multiplication is used extensively in machine learning algorithms, such as linear regression, neural networks, and principal component analysis. Matrices are used to represent data sets and model parameters, and matrix multiplication is used to update and apply the models.
These are just a few examples of the many applications of matrix multiplication. Its versatility and usefulness make it an essential tool in many fields of study.
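As a concrete illustration of the Markov chain item above, here is a minimal sketch using a made-up two-state weather model; the transition probabilities are hypothetical, and each matrix multiplication advances the state distribution by one day.

```python
import numpy as np

# Hypothetical two-state weather model: states are (sunny, rainy).
# Each row gives the probabilities of moving from that state to each state.
P = np.array([[0.9, 0.1],    # sunny -> sunny, sunny -> rainy
              [0.5, 0.5]])   # rainy -> sunny, rainy -> rainy

# Start with a 100% chance of sun today (row vector of state probabilities).
state = np.array([1.0, 0.0])

# Each multiplication by P advances the distribution by one day.
for day in range(1, 4):
    state = state @ P
    print(f"day {day}: {state}")
# day 1: [0.9 0.1]
# day 2: [0.86 0.14]
# day 3: [0.844 0.156]
```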
Conclusion
Let me wind up this look at the simple concept of matrix multiplication! If you are interested in this topic, please share your suggestions; they will add a lot of value. Matrix multiplication is closely related to the dot product, which we will explore in another post.