This page makes use of JavaScript.
My code is 100% nice: No external code, no trackers.
Enabling scripts for this document is recommended.
Algebraic Definition: a · b = Σ_{i=1}^{n} a_{i}b_{i} = a_{1}b_{1} + a_{2}b_{2} + ⋯ + a_{n}b_{n}
Geometric Definition: a · b = |a| |b| cos θ
The dot product tells us how much one vector projects onto another: it is a measure of how similar the direction of a is to the direction of b. For unit vectors, it ranges from -1 to +1, where +1 indicates both vectors pointing in the same direction. This can be used to determine whether a face is pointing towards or away from the camera. For convex objects, only faces pointing towards the camera need to be rendered, because faces seen from their back are obscured anyway.
cos θ = (a · b) / (|a| |b|) = (a_{x}b_{x} + a_{y}b_{y}) / (√(a_{x}^{2} + a_{y}^{2}) · √(b_{x}^{2} + b_{y}^{2}))
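Both definitions translate directly into code. A minimal sketch in JavaScript (the names dot3, angle_between and is_front_facing are made up for this example; the facing test assumes the view direction points from the camera towards the face):

```javascript
// Dot product of two 3D vectors, given as flat arrays [x, y, z].
function dot3 (a, b) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

// Angle between two vectors, from cos θ = (a · b) / (|a| |b|).
function angle_between (a, b) {
    const len = v => Math.sqrt(dot3(v, v));
    return Math.acos(dot3(a, b) / (len(a) * len(b)));
}

// Back-face test: a face points towards the camera if its normal
// points against the view direction, i.e. the dot product is negative.
function is_front_facing (face_normal, view_direction) {
    return dot3(face_normal, view_direction) < 0;
}
```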
Computer hardware does not know about multi-dimensional arrays, or even more sophisticated data structures. Memory is only ever accessed via the address of each byte. Your programming language takes care of translating high-level representations of data structures, like arrays, structs or objects, into physical memory addresses.
There are two ways a matrix can be stored in memory:
Matrix in row major order:

R = [ r_{1}_{1} r_{1}_{2} r_{1}_{3} ]
    [ r_{2}_{1} r_{2}_{2} r_{2}_{3} ]
    [ r_{3}_{1} r_{3}_{2} r_{3}_{3} ]
  = Array(r_{1}_{1}, r_{1}_{2}, r_{1}_{3}, r_{2}_{1}, r_{2}_{2}, r_{2}_{3}, r_{3}_{1}, r_{3}_{2}, r_{3}_{3}),

Matrix in column major order:

C = [ c_{1}_{1} c_{1}_{2} c_{1}_{3} ]
    [ c_{2}_{1} c_{2}_{2} c_{2}_{3} ]
    [ c_{3}_{1} c_{3}_{2} c_{3}_{3} ]
  = Array(c_{1}_{1}, c_{2}_{1}, c_{3}_{1}, c_{1}_{2}, c_{2}_{2}, c_{3}_{2}, c_{1}_{3}, c_{2}_{3}, c_{3}_{3}).
Obviously, it is important to know which major order your system expects.
M = [xAxis_{x}, xAxis_{y}, xAxis_{z}, 0, yAxis_{x}, yAxis_{y}, yAxis_{z}, 0, zAxis_{x}, zAxis_{y}, zAxis_{z}, 0, Trans_{x}, Trans_{y}, Trans_{z}, 1];
According to Wikipedia, OpenGL uses vector-major order (M[vector][coordinate] or M[column][row] respectively), which confuses me somewhat. Further investigation is needed.
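Whichever order is used, the mapping from a (row, column) pair to a flat array index is simple arithmetic. A sketch (function names are made up for this example; indices are 0-based):

```javascript
// Flat index of element (row, col) in a matrix with num_cols columns,
// stored row by row.
function index_row_major (row, col, num_cols) {
    return row * num_cols + col;
}

// Flat index of element (row, col) in a matrix with num_rows rows,
// stored column by column.
function index_column_major (row, col, num_rows) {
    return col * num_rows + row;
}

// Example: element (1, 2) of a 3 × 3 matrix.
index_row_major(1, 2, 3);    // lands at offset 5
index_column_major(1, 2, 3); // lands at offset 7
```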
Calculating the determinant of a 2 × 2 or 3 × 3 matrix is fairly straightforward. An easy-to-remember rule is to add the products following the diagonals from top left to bottom right (wrapping around the edges), and to subtract the products of the diagonals going from top right to bottom left, like so:
|M_{3}| = det [ a b c ]
              [ d e f ] = aei + bfg + cdh - ceg - bdi - afh,
              [ g h i ]
and for a 2 × 2 matrix:
|M_{2}| = det [ a b ]
              [ c d ] = ad - bc.
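Both rules translate directly into code. A sketch in JavaScript (the names det2 and det3 are made up here; matrices are arrays of rows):

```javascript
// Determinant of a 2 × 2 matrix [[a, b], [c, d]]: ad - bc.
function det2 (m) {
    return m[0][0]*m[1][1] - m[0][1]*m[1][0];
}

// Determinant of a 3 × 3 matrix, following the diagonal rule:
// aei + bfg + cdh - ceg - bdi - afh.
function det3 (m) {
    const [[a, b, c], [d, e, f], [g, h, i]] = m;
    return a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h;
}
```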
The process for larger matrices is not as easily expressed by a simple rule; it involves recursively partitioning the matrix into smaller sub-matrices, whose determinants can then be calculated as shown above. Check the Wikipedia article for details.
Still, solving equation systems with Cramer's rule follows the same pattern of dividing determinants.
You have probably seen a linear equation before:

a_{1}x_{1} + a_{2}x_{2} + ⋯ + a_{n}x_{n} = b.
A general system of m linear equations with n unknowns can be written as:

a_{1}_{1}x_{1} + a_{1}_{2}x_{2} + ⋯ + a_{1}_{n}x_{n} = b_{1}
a_{2}_{1}x_{1} + a_{2}_{2}x_{2} + ⋯ + a_{2}_{n}x_{n} = b_{2}
⋮
a_{m}_{1}x_{1} + a_{m}_{2}x_{2} + ⋯ + a_{m}_{n}x_{n} = b_{m}
Cramer's rule is a method of solving such an equation system that is very easy to code. It uses several matrices containing the coefficients of the equations. The basic matrix M looks like so:
M = [ a_{1}_{1} a_{1}_{2} ⋯ a_{1}_{n} ]
    [ a_{2}_{1} a_{2}_{2} ⋯ a_{2}_{n} ]
    [    ⋮         ⋮      ⋱     ⋮     ]
    [ a_{m}_{1} a_{m}_{2} ⋯ a_{m}_{n} ].
Note that we did not include the constant offsets b_{m} in this matrix.
We need another matrix for each of the unknowns, in which the column of coefficients a_{m}_{n} for that unknown is replaced with the offsets b_{m}. Let's assume a system of 3 equations with 3 unknowns, x, y, and z, for easier demonstration:

A_{1}x + B_{1}y + C_{1}z = D_{1}
A_{2}x + B_{2}y + C_{2}z = D_{2}
A_{3}x + B_{3}y + C_{3}z = D_{3}
Our base matrix M and substituted matrices M_{x}, M_{y} and M_{z} are:
M = [ A_{1} B_{1} C_{1} ]
    [ A_{2} B_{2} C_{2} ]
    [ A_{3} B_{3} C_{3} ],

M_{x} = [ D_{1} B_{1} C_{1} ]
        [ D_{2} B_{2} C_{2} ]
        [ D_{3} B_{3} C_{3} ],

M_{y} = [ A_{1} D_{1} C_{1} ]
        [ A_{2} D_{2} C_{2} ]
        [ A_{3} D_{3} C_{3} ],

M_{z} = [ A_{1} B_{1} D_{1} ]
        [ A_{2} B_{2} D_{2} ]
        [ A_{3} B_{3} D_{3} ].
Then, we get the solutions for our equation system by simply dividing the determinants of the matrices:
x = |M_{x}| / |M|, y = |M_{y}| / |M|, and z = |M_{z}| / |M|.
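Putting the determinant rule and the column substitution together, a 3 × 3 solver could look like this (a sketch; cramer3 is an illustrative name, matrices are arrays of rows, and it assumes |M| ≠ 0, i.e. the system has a unique solution):

```javascript
// Determinant of a 3 × 3 matrix via the diagonal rule.
function det3 (m) {
    const [[a, b, c], [d, e, f], [g, h, i]] = m;
    return a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h;
}

// Solve the 3 × 3 system M·(x, y, z) = d with Cramer's rule.
// m is the coefficient matrix, d the vector of constant offsets.
function cramer3 (m, d) {
    // Copy m, substituting column col with the offsets d.
    const replace_column = (col) =>
        m.map((row, i) => row.map((v, j) => j === col ? d[i] : v));
    const det_m = det3(m);
    return [
        det3(replace_column(0)) / det_m, // x
        det3(replace_column(1)) / det_m, // y
        det3(replace_column(2)) / det_m, // z
    ];
}
```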
Since a vector is essentially a matrix with only one element per row, the process of matrix-vector multiplication is rather simple; one takes the dot product of x with each row of A:
Ax = [ a_{1}_{1} a_{1}_{2} ⋯ a_{1}_{n} ] [ x_{1} ]   [ a_{1}_{1}x_{1} + a_{1}_{2}x_{2} + ⋯ + a_{1}_{n}x_{n} ]
     [ a_{2}_{1} a_{2}_{2} ⋯ a_{2}_{n} ] [ x_{2} ] = [ a_{2}_{1}x_{1} + a_{2}_{2}x_{2} + ⋯ + a_{2}_{n}x_{n} ]
     [    ⋮         ⋮      ⋱     ⋮     ] [   ⋮   ]   [                          ⋮                          ]
     [ a_{m}_{1} a_{m}_{2} ⋯ a_{m}_{n} ] [ x_{n} ]   [ a_{m}_{1}x_{1} + a_{m}_{2}x_{2} + ⋯ + a_{m}_{n}x_{n} ]
If A = [ 1 -1 2 ]  and  x = [ 2 ]
       [ 0 -3 1 ]           [ 1 ]
                            [ 0 ],

then Ax = [ 1·2 - 1·1 + 2·0 ] = [  1 ]
          [ 0·2 - 3·1 + 1·0 ]   [ -3 ]
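This is a one-liner in JavaScript (mat_vec_multiply is a made-up name; the matrix is an array of rows):

```javascript
// Multiply an m × n matrix (array of rows) with an n-vector:
// each entry of the result is the dot product of one row with x.
function mat_vec_multiply (a, x) {
    return a.map(row => row.reduce((sum, v, j) => sum + v * x[j], 0));
}

// The example above:
const A = [[1, -1, 2], [0, -3, 1]];
const x = [2, 1, 0];
mat_vec_multiply(A, x); // [1, -3]
```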
If A is an m × n matrix and B is an n × p matrix,

A = [ a_{1}_{1} a_{1}_{2} ⋯ a_{1}_{n} ]      B = [ b_{1}_{1} b_{1}_{2} ⋯ b_{1}_{p} ]
    [ a_{2}_{1} a_{2}_{2} ⋯ a_{2}_{n} ]          [ b_{2}_{1} b_{2}_{2} ⋯ b_{2}_{p} ]
    [    ⋮         ⋮      ⋱     ⋮     ]          [    ⋮         ⋮      ⋱     ⋮     ]
    [ a_{m}_{1} a_{m}_{2} ⋯ a_{m}_{n} ],         [ b_{n}_{1} b_{n}_{2} ⋯ b_{n}_{p} ],

the matrix product C = AB is defined to be the m × p matrix

C = AB = [ c_{1}_{1} c_{1}_{2} ⋯ c_{1}_{p} ]
         [ c_{2}_{1} c_{2}_{2} ⋯ c_{2}_{p} ]
         [    ⋮         ⋮      ⋱     ⋮     ]
         [ c_{m}_{1} c_{m}_{2} ⋯ c_{m}_{p} ],

where

c_{i}_{j} = a_{i}_{1}b_{1}_{j} + a_{i}_{2}b_{2}_{j} + ⋯ + a_{i}_{n}b_{n}_{j} = Σ_{k=1}^{n} a_{i}_{k}b_{k}_{j},

for i = 1, ..., m and j = 1, ..., p.
That is, the entry c_{i}_{j} of the product is obtained by multiplying term-by-term the entries of the ith row of A and the jth column of B, and summing these n products. In other words, c_{i}_{j} is the dot product of the ith row of A and the jth column of B.
Therefore, AB can also be written as

C = [ a_{1}_{1}b_{1}_{1} + ⋯ + a_{1}_{n}b_{n}_{1}   a_{1}_{1}b_{1}_{2} + ⋯ + a_{1}_{n}b_{n}_{2}   ⋯   a_{1}_{1}b_{1}_{p} + ⋯ + a_{1}_{n}b_{n}_{p} ]
    [ a_{2}_{1}b_{1}_{1} + ⋯ + a_{2}_{n}b_{n}_{1}   a_{2}_{1}b_{1}_{2} + ⋯ + a_{2}_{n}b_{n}_{2}   ⋯   a_{2}_{1}b_{1}_{p} + ⋯ + a_{2}_{n}b_{n}_{p} ]
    [                    ⋮                                              ⋮                          ⋱                      ⋮                       ]
    [ a_{m}_{1}b_{1}_{1} + ⋯ + a_{m}_{n}b_{n}_{1}   a_{m}_{1}b_{1}_{2} + ⋯ + a_{m}_{n}b_{n}_{2}   ⋯   a_{m}_{1}b_{1}_{p} + ⋯ + a_{m}_{n}b_{n}_{p} ]
Thus the product AB is defined if and only if the number of columns in A equals the number of rows in B, in this case n.
A = [ a_{1}_{1} a_{1}_{2} a_{1}_{3} ]      B = [ b_{1}_{1} b_{1}_{2} b_{1}_{3} ]
    [ a_{2}_{1} a_{2}_{2} a_{2}_{3} ]          [ b_{2}_{1} b_{2}_{2} b_{2}_{3} ]
    [ a_{3}_{1} a_{3}_{2} a_{3}_{3} ],         [ b_{3}_{1} b_{3}_{2} b_{3}_{3} ],

AB = [ a_{1}_{1}b_{1}_{1} + a_{1}_{2}b_{2}_{1} + a_{1}_{3}b_{3}_{1}   a_{1}_{1}b_{1}_{2} + a_{1}_{2}b_{2}_{2} + a_{1}_{3}b_{3}_{2}   a_{1}_{1}b_{1}_{3} + a_{1}_{2}b_{2}_{3} + a_{1}_{3}b_{3}_{3} ]
     [ a_{2}_{1}b_{1}_{1} + a_{2}_{2}b_{2}_{1} + a_{2}_{3}b_{3}_{1}   a_{2}_{1}b_{1}_{2} + a_{2}_{2}b_{2}_{2} + a_{2}_{3}b_{3}_{2}   a_{2}_{1}b_{1}_{3} + a_{2}_{2}b_{2}_{3} + a_{2}_{3}b_{3}_{3} ]
     [ a_{3}_{1}b_{1}_{1} + a_{3}_{2}b_{2}_{1} + a_{3}_{3}b_{3}_{1}   a_{3}_{1}b_{1}_{2} + a_{3}_{2}b_{2}_{2} + a_{3}_{3}b_{3}_{2}   a_{3}_{1}b_{1}_{3} + a_{3}_{2}b_{2}_{3} + a_{3}_{3}b_{3}_{3} ].
C_{2 × 2} = A_{2 × 3}B_{3 × 2} = [ a_{1}_{1} a_{1}_{2} a_{1}_{3} ] [ b_{1}_{1} b_{1}_{2} ]
                                 [ a_{2}_{1} a_{2}_{2} a_{2}_{3} ] [ b_{2}_{1} b_{2}_{2} ]
                                                                   [ b_{3}_{1} b_{3}_{2} ],

where n = 3, m = p = 2;

C = AB = [ a_{1}_{1}b_{1}_{1} + a_{1}_{2}b_{2}_{1} + a_{1}_{3}b_{3}_{1}   a_{1}_{1}b_{1}_{2} + a_{1}_{2}b_{2}_{2} + a_{1}_{3}b_{3}_{2} ]
         [ a_{2}_{1}b_{1}_{1} + a_{2}_{2}b_{2}_{1} + a_{2}_{3}b_{3}_{1}   a_{2}_{1}b_{1}_{2} + a_{2}_{2}b_{2}_{2} + a_{2}_{3}b_{3}_{2} ].
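The general definition of c_{i}_{j} maps directly onto nested loops, or, more compactly, onto map/reduce. A sketch (mat_multiply is a made-up name; matrices are arrays of rows):

```javascript
// Product of an m × n matrix a and an n × p matrix b (arrays of rows):
// c[i][j] is the dot product of row i of a and column j of b.
function mat_multiply (a, b) {
    const p = b[0].length;
    return a.map(row =>
        Array.from({ length: p }, (_, j) =>
            row.reduce((sum, v, k) => sum + v * b[k][j], 0)));
}
```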
For practical reasons, 4 × 4 matrices are used in 3D graphics. For details, see the section about Transformation Matrices.
rotation_x_mat4()
rotation_y_mat4()
rotation_z_mat4()
function on_update_scene (elapsed_seconds) {
    const angle = 60*DtoR * elapsed_seconds;
    VMath.mat4_multiply_mat4(
        entity.rotation_matrix,
        entity.rotation_matrix,
        VMath.rotation_<axis>_mat4( angle ),
    );
}
X = rot_{x}(θ) = [ 1    0        0      0 ]
                 [ 0  cos θ  -sin θ  0 ]
                 [ 0  sin θ   cos θ  0 ]
                 [ 0    0        0      1 ],

Y = rot_{y}(θ) = [  cos θ  0  sin θ  0 ]
                 [    0      1    0      0 ]
                 [ -sin θ  0  cos θ  0 ]
                 [    0      0    0      1 ],

Z = rot_{z}(θ) = [ cos θ  -sin θ  0  0 ]
                 [ sin θ   cos θ  0  0 ]
                 [   0        0      1  0 ]
                 [   0        0      0  1 ].
Thus, a point a = (x, y, z, 1), rotated around the x-axis, becomes

a' = Xa = [ x                    ]
          [ y cos θ - z sin θ ]
          [ y sin θ + z cos θ ]
          [ 1                    ].
For a rotation by θ around an arbitrary unit axis

n = [ n_{x} ]
    [ n_{y} ]
    [ n_{z} ],

the rotation matrix is

M_{rot n} = [ tn_{x}^{2}+c          tn_{x}n_{y}-sn_{z}  tn_{x}n_{z}+sn_{y}  0 ]
            [ tn_{x}n_{y}+sn_{z}  tn_{y}^{2}+c          tn_{y}n_{z}-sn_{x}  0 ]
            [ tn_{x}n_{z}-sn_{y}  tn_{y}n_{z}+sn_{x}  tn_{z}^{2}+c          0 ]
            [ 0                   0                   0                     1 ],

where s = sin θ, c = cos θ and t = 1 - cos θ.
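This formula translates directly into code, in the style of the rotation_x_mat4() functions above (rotation_axis_mat4 is an assumed name, not from any particular library; it returns an array of rows and expects n to be normalized):

```javascript
// 4 × 4 matrix (array of rows) for a rotation by theta radians
// around the unit axis n = [nx, ny, nz].
function rotation_axis_mat4 (n, theta) {
    const s = Math.sin(theta), c = Math.cos(theta), t = 1 - c;
    const [x, y, z] = n;
    return [
        [t*x*x + c,   t*x*y - s*z, t*x*z + s*y, 0],
        [t*x*y + s*z, t*y*y + c,   t*y*z - s*x, 0],
        [t*x*z - s*y, t*y*z + s*x, t*z*z + c,   0],
        [0,           0,           0,           1],
    ];
}
```

With n = (0, 0, 1) this reduces to rot_{z}(θ), which makes for an easy sanity check.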