posted: 30-Jul-2025
&
updated: 31-Aug-2025
NotebookLM Podcasts
Vector Spaces
Definition
A nonempty set of elements with two laws of combination,
which we call addition and multiplication,
satisfying the following conditions is called a
field
and is denoted by
$F$.
-
addition
-
to every pair of elements $a,b\in F$,
there is associated a unique element,
called their sum, which we denote by
$$
\newcommand{\sign}{\mathop{\bf sign}}
\newcommand{\lspan}[1]{\langle{#1}\rangle} % linear span
\newcommand{\image}{\text{Im}}
%
\newcommand{\algA}{\algk{A}}
\newcommand{\algC}{\algk{C}}
\newcommand{\bigtimes}{\times}
\newcommand{\compl}[1]{\tilde{#1}}
\newcommand{\complexes}{\mathbb{C}}
\newcommand{\dom}{\mathop{\bf dom {}}}
\newcommand{\ereals}{\reals\cup\{-\infty,\infty\}}
\newcommand{\field}{\mathbb{F}}
\newcommand{\integers}{\mathbb{Z}}
\newcommand{\lbdseqk}[1]{\seqk{\lambda}{#1}}
\newcommand{\meas}[3]{({#1}, {#2}, {#3})}
\newcommand{\measu}[2]{({#1}, {#2})}
\newcommand{\meast}[3]{\left({#1}, {#2}, {#3}\right)}
\newcommand{\naturals}{\mathbb{N}}
\newcommand{\nuseqk}[1]{\seqk{\nu}{#1}}
\newcommand{\pair}[2]{\langle {#1}, {#2}\rangle}
\newcommand{\rationals}{\mathbb{Q}}
\newcommand{\reals}{\mathbb{R}}
\newcommand{\seq}[1]{\left\langle{#1}\right\rangle}
\newcommand{\powerset}{\mathcal{P}}
\newcommand{\pprealk}[1]{\reals_{++}^{#1}}
\newcommand{\ppreals}{\mathbb{R}_{++}}
\newcommand{\prealk}[1]{\reals_{+}^{#1}}
\newcommand{\preals}{\mathbb{R}_+}
\newcommand{\tXJ}{\topos{X}{J}}
%
\newcommand{\relint}{\mathop{\bf relint {}}}
\newcommand{\boundary}{\mathop{\bf bd {}}}
\newcommand{\subsetset}[1]{\mathcal{#1}}
\newcommand{\Tr}{\mathcal{\bf Tr}}
\newcommand{\symset}[1]{\mathbf{S}^{#1}}
\newcommand{\possemidefset}[1]{\mathbf{S}_+^{#1}}
\newcommand{\posdefset}[1]{\mathbf{S}_{++}^{#1}}
\newcommand{\ones}{\mathbf{1}}
\newcommand{\Prob}{\mathop{\bf Prob {}}}
\newcommand{\prob}[1]{\Prob\left\{#1\right\}}
\newcommand{\Expect}{\mathop{\bf E {}}}
\newcommand{\Var}{\mathop{\bf Var{}}}
\newcommand{\Mod}[1]{\;(\text{mod}\;#1)}
\newcommand{\ball}[2]{B(#1,#2)}
\newcommand{\generates}[1]{\langle {#1} \rangle}
\newcommand{\isomorph}{\approx}
\newcommand{\nullspace}{\mathcalfont{N}}
\newcommand{\range}{\mathcalfont{R}}
\newcommand{\diag}{\mathop{\bf diag {}}}
\newcommand{\rank}{\mathop{\bf rank {}}}
\newcommand{\Ker}{\mathop{\mathrm{Ker} {}}}
\newcommand{\Map}{\mathop{\mathrm{Map} {}}}
\newcommand{\End}{\mathop{\mathrm{End} {}}}
\newcommand{\Img}{\mathop{\mathrm{Im} {}}}
\newcommand{\Aut}{\mathop{\mathrm{Aut} {}}}
\newcommand{\Gal}{\mathop{\mathrm{Gal} {}}}
\newcommand{\Irr}{\mathop{\mathrm{Irr} {}}}
\newcommand{\arginf}{\mathop{\mathrm{arginf}}}
\newcommand{\argsup}{\mathop{\mathrm{argsup}}}
\newcommand{\argmin}{\mathop{\mathrm{argmin}}}
\newcommand{\ev}{\mathop{\mathrm{ev} {}}}
\newcommand{\affinehull}{\mathop{\bf aff {}}}
\newcommand{\cvxhull}{\mathop{\bf Conv {}}}
\newcommand{\epi}{\mathop{\bf epi {}}}
\newcommand{\injhomeo}{\hookrightarrow}
\newcommand{\perm}[1]{\text{Perm}(#1)}
\newcommand{\aut}[1]{\text{Aut}(#1)}
\newcommand{\ideal}[1]{\mathfrak{#1}}
\newcommand{\bigset}[2]{\left\{#1\left|{#2}\right.\right\}}
\newcommand{\bigsetl}[2]{\left\{\left.{#1}\right|{#2}\right\}}
\newcommand{\primefield}[1]{\field_{#1}}
\newcommand{\dimext}[2]{[#1:{#2}]}
\newcommand{\restrict}[2]{#1|{#2}}
\newcommand{\algclosure}[1]{#1^\mathrm{a}}
\newcommand{\finitefield}[2]{\field_{#1^{#2}}}
\newcommand{\frobmap}[2]{\varphi_{#1,{#2}}}
%
%\newcommand{\algfontmode}{}
%
%\ifdefined\algfontmode
%\newcommand\mathalgfont[1]{\mathcal{#1}}
%\newcommand\mathcalfont[1]{\mathscr{#1}}
%\else
\newcommand\mathalgfont[1]{\mathscr{#1}}
\newcommand\mathcalfont[1]{\mathcal{#1}}
%\fi
%
%\def\DeltaSirDir{yes}
%\newcommand\sdirletter[2]{\ifthenelse{\equal{\DeltaSirDir}{yes}}{\ensuremath{\Delta #1}}{\ensuremath{#2}}}
\newcommand{\sdirletter}[2]{\Delta #1}
\newcommand{\sdirlbd}{\sdirletter{\lambda}{\Delta \lambda}}
\newcommand{\sdir}{\sdirletter{x}{v}}
\newcommand{\seqk}[2]{#1^{(#2)}}
\newcommand{\seqscr}[3]{\seq{#1}_{#2}^{#3}}
\newcommand{\xseqk}[1]{\seqk{x}{#1}}
\newcommand{\sdirk}[1]{\seqk{\sdir}{#1}}
\newcommand{\sdiry}{\sdirletter{y}{\Delta y}}
\newcommand{\slen}{t}
\newcommand{\slenk}[1]{\seqk{\slen}{#1}}
\newcommand{\ntsdir}{\sdir_\mathrm{nt}}
\newcommand{\pdsdir}{\sdir_\mathrm{pd}}
\newcommand{\sdirnu}{\sdirletter{\nu}{w}}
\newcommand{\pdsdirnu}{\sdirnu_\mathrm{pd}}
\newcommand{\pdsdiry}{\sdiry_\mathrm{pd}}
\newcommand\pdsdirlbd{\sdirlbd_\mathrm{pd}}
%
\newcommand{\normal}{\mathcalfont{N}}
%
\newcommand{\algk}[1]{\mathalgfont{#1}}
\newcommand{\collk}[1]{\mathcalfont{#1}}
\newcommand{\classk}[1]{\collk{#1}}
\newcommand{\indexedcol}[1]{\{#1\}}
\newcommand{\rel}{\mathbf{R}}
\newcommand{\relxy}[2]{#1\;\rel\;{#2}}
\newcommand{\innerp}[2]{\langle{#1},{#2}\rangle}
\newcommand{\innerpt}[2]{\left\langle{#1},{#2}\right\rangle}
\newcommand{\closure}[1]{\overline{#1}}
\newcommand{\support}{\mathbf{support}}
\newcommand{\set}[2]{\{#1|#2\}}
\newcommand{\metrics}[2]{\langle {#1}, {#2}\rangle}
\newcommand{\interior}[1]{#1^\circ}
\newcommand{\topol}[1]{\mathfrak{#1}}
\newcommand{\topos}[2]{\langle {#1}, \topol{#2}\rangle} % topological space
%
\newcommand{\alg}{\algk{A}}
\newcommand{\algB}{\algk{B}}
\newcommand{\algF}{\algk{F}}
\newcommand{\algR}{\algk{R}}
\newcommand{\algX}{\algk{X}}
\newcommand{\algY}{\algk{Y}}
%
\newcommand\coll{\collk{C}}
\newcommand\collB{\collk{B}}
\newcommand\collF{\collk{F}}
\newcommand\collG{\collk{G}}
\newcommand{\tJ}{\topol{J}}
\newcommand{\tS}{\topol{S}}
\newcommand\openconv{\collk{U}}
%
\newenvironment{my-matrix}[1]{\begin{bmatrix}}{\end{bmatrix}}
\newcommand{\colvectwo}[2]{\begin{my-matrix}{c}{#1}\\{#2}\end{my-matrix}}
\newcommand{\colvecthree}[3]{\begin{my-matrix}{c}{#1}\\{#2}\\{#3}\end{my-matrix}}
\newcommand{\rowvecthree}[3]{\begin{bmatrix}{#1}&{#2}&{#3}\end{bmatrix}}
\newcommand{\mattwotwo}[4]{\begin{bmatrix}{#1}&{#2}\\{#3}&{#4}\end{bmatrix}}
%
\newcommand\optfdk[2]{#1^\mathrm{#2}}
\newcommand\tildeoptfdk[2]{\tilde{#1}^\mathrm{#2}}
\newcommand\fobj{\optfdk{f}{obj}}
\newcommand\fie{\optfdk{f}{ie}}
\newcommand\feq{\optfdk{f}{eq}}
\newcommand\tildefobj{\tildeoptfdk{f}{obj}}
\newcommand\tildefie{\tildeoptfdk{f}{ie}}
\newcommand\tildefeq{\tildeoptfdk{f}{eq}}
\newcommand\xdomain{\mathcalfont{X}}
\newcommand\xobj{\optfdk{\xdomain}{obj}}
\newcommand\xie{\optfdk{\xdomain}{ie}}
\newcommand\xeq{\optfdk{\xdomain}{eq}}
\newcommand\optdomain{\mathcalfont{D}}
\newcommand\optfeasset{\mathcalfont{F}}
%
\newcommand{\bigpropercone}{\mathcalfont{K}}
%
\newcommand{\prescript}[3]{\;^{#1}{#3}}
%
%
a+b
$$
-
additive associativity
-
addition is associative;
$$
(\forall a, b, c \in F)((a+b)+c = a+(b+c)).
$$
-
existence of additive identity
-
there exists an element, which we denote by
$$
0
$$
such that
$$
(\forall a\in F)(a+0=a).
$$
-
existence of additive inverse
-
for each $a\in F$, there exists an element, which we denote by
$$
-a
$$
such that
$$
a+(-a)=0.
$$
Following the usual practice, we write $b+(-a)=b-a$.
-
additive commutativity
-
addition is commutative;
$$
(\forall a, b \in F)
(a+b=b+a).
$$
-
multiplication
-
to every pair of elements $a,b\in F$,
there is associated a unique element,
called their product, which we denote by
$$
ab
$$
or
$$
a\cdot b
$$
-
multiplicative associativity
-
multiplication is associative;
$$
(\forall a, b, c \in F)
((ab)c = a(bc)).
$$
-
existence of multiplicative identity
-
there exists an element different from $0$,
which we denote by
$$
1
$$
such that
$$
(\forall a\in F)(a\cdot 1=a).
$$
-
existence of multiplicative inverse
-
for each $a\in F$ with $a\neq0$,
there exists an element, which we denote by
$$
a^{-1}
$$
such that
$$
a\cdot a^{-1}=1.
$$
-
multiplicative commutativity
-
multiplication is commutative;
$$
(\forall a,b \in F)
(ab=ba).
$$
-
multiplicative distributivity over addition
-
multiplication is distributive with respect to addition:
$$
(\forall a,b,c \in F)
((a+b)c = ac + bc).
$$
The elements of a field are called scalars.
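For example, the two-element set $\{0,1\}$ with addition and multiplication taken modulo $2$ is a field (often denoted $\mathbb{F}_2$). A minimal Python sketch, checking the axioms by brute force purely for illustration:
```python
# Brute-force check that {0,1} with arithmetic mod 2 satisfies the field axioms.
from itertools import product

F = [0, 1]
add = lambda a, b: (a + b) % 2
mul = lambda a, b: (a * b) % 2

assert all(add(add(a, b), c) == add(a, add(b, c)) for a, b, c in product(F, repeat=3))
assert all(add(a, 0) == a for a in F)                            # additive identity
assert all(any(add(a, x) == 0 for x in F) for a in F)            # additive inverses
assert all(add(a, b) == add(b, a) for a, b in product(F, repeat=2))
assert all(mul(mul(a, b), c) == mul(a, mul(b, c)) for a, b, c in product(F, repeat=3))
assert all(mul(a, 1) == a for a in F)                            # multiplicative identity
assert all(any(mul(a, x) == 1 for x in F) for a in F if a != 0)  # multiplicative inverses
assert all(mul(a, b) == mul(b, a) for a, b in product(F, repeat=2))
assert all(mul(add(a, b), c) == add(mul(a, c), mul(b, c)) for a, b, c in product(F, repeat=3))
print("{0,1} with mod-2 arithmetic satisfies the field axioms")
```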
A
vector space $V$ over a field $F$
is a nonempty set of elements,
called
vectors,
with two laws of combination,
called
vector addition (or just
addition)
and
scalar multiplication,
satisfying the following conditions.
-
vector addition
-
to every pair of vectors $x,y\in V$,
there is associated a unique vector in $V$ called their sum,
which we denote by
$$
x+y.
$$
-
additive associativity
-
vector addition is associative;
$$
(\forall x, y, z \in V)((x+y)+z = x+(y+z)).
$$
-
existence of additive identity
-
there exists a vector, which we denote by
$$
0
$$
such that
$$
(\forall x\in V)(x+0=x).
$$
-
existence of additive inverse
-
for each $x\in V$, there exists an element, which we denote by
$$
-x
$$
such that
$$
x+(-x)=0.
$$
-
additive commutativity
-
addition is commutative;
$$
(\forall x, y \in V)
(x+y=y+x).
$$
-
scalar multiplication
-
to every scalar $\alpha\in F$ and vector $x\in V$,
there is associated a unique vector,
called the product of $\alpha$ and $x$, which we denote by
$$
\alpha x.
$$
-
multiplicative associativity
-
scalar multiplication is associative;
$$
(\forall \alpha, \beta \in F \;\&\; \forall x\in V)
(\alpha(\beta x) = (\alpha\beta)x).
$$
-
multiplicative distributivity over vector addition
-
scalar multiplication is distributive with respect to vector addition;
$$
(\forall \alpha \in F \;\&\; \forall x,y\in V)
(\alpha(x+y) = \alpha x + \alpha y).
$$
-
multiplicative distributivity over scalar addition
-
scalar multiplication is distributive with respect to scalar addition;
$$
(\forall \alpha, \beta \in F \;\&\; \forall x\in V)
((\alpha+\beta)x = \alpha x + \beta x).
$$
-
scalar multiplicative identity
-
for the multiplicative identity $1\in F$,
$$
(\forall x \in V)
(1\cdot x = x).
$$
Note that an identical definition of a vector space
is given in this section of From Ancient Equations to Artificial Intelligence – Linear Algebra.
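The prototypical example is $F^n$, the set of $n$-tuples of scalars with componentwise addition and scalar multiplication. A short numerical sketch over $\mathbb{R}$ (using NumPy, purely as an illustration) spot-checks the axioms involving scalar multiplication:
```python
# Spot-check vector space axioms for R^3 with componentwise operations.
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([-1.0, 0.5, 4.0])
alpha, beta = 2.0, -3.0

assert np.allclose(alpha * (x + y), alpha * x + alpha * y)    # distributivity over vector addition
assert np.allclose((alpha + beta) * x, alpha * x + beta * x)  # distributivity over scalar addition
assert np.allclose(alpha * (beta * x), (alpha * beta) * x)    # associativity of scalar multiplication
assert np.allclose(1.0 * x, x)                                # 1 acts as the identity
```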
Linear independence & linear dependence
A set of vectors is said to be linearly dependent
if there exists a non-trivial linear relation among them.
Otherwise,
the set is said to be linearly independent.
If $x$ is linearly dependent on $\{y_i\}$
and each $y_i$ is linearly dependent on $\{z_j\}$,
$x$ is linearly dependent on $\{z_j\}$.
For a subset $A$ of a vector space $V$,
the set of all linear combinations of vectors in $A$
is called the set spanned by $A$,
and we denote it by $\lspan{A}$.
A set of nonzero vectors $\{x_1, x_2, \ldots\}$ is linearly dependent
if and only if
some $x_k$ is a linear combination of $x_1,\ldots,x_{k-1}$.
A set of nonzero vectors $\{x_1,x_2,\ldots\}$ is linearly independent
if and only if
for each $k$, $x_k \notin \lspan{\{x_1,\ldots,x_{k-1}\}}$.
For two subsets $A, B \subset V$ such that $A\subset \lspan{B}$,
$\lspan{A} \subset \lspan{B}$.
For a subset $A\subset V$,
if $x\in A$ is dependent on some other vectors in $A$,
$\lspan{A} = \lspan{A-\{x\}}$.
For any subset $A\subset V$,
$\lspan{\lspan{A}}= \lspan{A}$.
If a finite set $\{x_1,\ldots,x_n\}$ spans $V$,
every linearly independent set contains at most $n$ elements.
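Over $\mathbb{R}$, linear (in)dependence of finitely many vectors can be tested numerically: stack the vectors as columns and compare the rank of the resulting matrix with the number of vectors. A NumPy sketch (floating-point rank computations can misjudge borderline cases, so this is an illustration rather than a proof):
```python
import numpy as np

def is_linearly_independent(vectors):
    # Columns are linearly independent iff the rank equals the number of vectors.
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == A.shape[1]

assert is_linearly_independent([np.array([1., 0., 0.]),
                                np.array([1., 1., 0.])])
# the third vector is the sum of the first two, so the set is linearly dependent
assert not is_linearly_independent([np.array([1., 0., 0.]),
                                    np.array([1., 1., 0.]),
                                    np.array([2., 1., 0.])])
```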
Bases of vector spaces
A linearly independent set spanning a vector space $V$
is called a basis or base
(the plural is bases) of $V$.
If a vector space has one basis with a finite number of elements,
then all other bases are finite and have the same number of elements.
Any $n+1$ vectors in an $n$-dimensional vector space are linearly dependent.
A set of $n$ vectors in an $n$-dimensional vector space is a basis if and only if it is linearly independent.
A set of $n$ vectors in an $n$-dimensional vector space $V$ is a basis if and only if it spans $V$.
In a finite dimensional vector space $V$,
every set spanning $V$ contains a basis.
In a finite dimensional vector space,
any linearly independent set of vectors can be extended to a basis.
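For instance, in $\mathbb{R}^n$ a linearly independent set can be extended to a basis by greedily appending standard basis vectors that increase the rank. A small NumPy sketch of this idea (an illustration, not the only possible construction):
```python
import numpy as np

def extend_to_basis(independent, n):
    # Greedily extend a linearly independent list of vectors to a basis of R^n.
    basis = list(independent)
    for i in range(n):
        e = np.zeros(n)
        e[i] = 1.0
        if np.linalg.matrix_rank(np.column_stack(basis + [e])) == len(basis) + 1:
            basis.append(e)
    return basis

basis = extend_to_basis([np.array([1., 1., 0.])], 3)
assert len(basis) == 3
assert np.linalg.matrix_rank(np.column_stack(basis)) == 3   # the result spans R^3
```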
Subspaces
A subspace $W$ of a vector space $V$
is a nonempty subset of $V$ which is itself a vector space
with respect to the operations of addition and scalar multiplication
defined in $V$.
In particular,
the subspace must be a vector space over the same field $F$.
The intersection of any collection of subspaces is a subspace.
For a vector space $V$ and a subset $A\subset V$,
the smallest subspace containing $A$ is the subspace spanned by $A$,
i.e.,
$$
\lspan{A}
=
\bigcap_{W:\;\text{subspace with } A \subset W} W.
$$
For two subspaces $W_1$ and $W_2$ of a vector space $V$,
the sum $W_1+W_2 = \{w_1 + w_2 \mid w_1\in W_1,\; w_2\in W_2\}$ is a subspace of $V$.
For two subspaces $W_1$ and $W_2$ of a vector space $V$,
$W_1+W_2$ is the smallest subspace containing $W_1$ and $W_2$,
i.e.,
$$
W_1 + W_2 = \lspan{W_1\cup W_2}.
$$
If $A_1$ spans $W_1$ and $A_2$ spans $W_2$,
then
$$
\lspan{A_1\cup A_2} = W_1 + W_2.
$$
A subspace $W$ of an $n$-dimensional vector space $V$
is
a finite dimensional vector space of dimension $m\leq n$.
For a subspace $W$ of dimension $m$ in an $n$-dimensional vector space $V$,
there exists a basis $\{a_1,\ldots,a_m,a_{m+1},\ldots,a_n\}$ of $V$
such that $\{a_1,\ldots,a_m\}$ is a basis of $W$.
If two subspaces $U$ and $W$ of a vector space $V$ have the same finite dimension and $U\subset W$,
then $U=W$.
For two subspaces $W_1$ and $W_2$ of a finite dimensional vector space $V$,
$$
\dim (W_1 + W_2) = \dim W_1 + \dim W_2 - \dim (W_1 \cap W_2).
$$
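A numerical spot-check of this formula with NumPy (an illustration; the intersection dimension is obtained independently as the nullity of $[A\;\,{-B}]$, which equals $\dim(W_1\cap W_2)$ when the columns of $A$ and of $B$ are each linearly independent):
```python
import numpy as np

# W1 = column space of A = span{e1, e2}, W2 = column space of B = span{e2, e3} in R^3
A = np.array([[1., 0.], [0., 1.], [0., 0.]])
B = np.array([[0., 0.], [1., 0.], [0., 1.]])

dim_W1 = np.linalg.matrix_rank(A)                       # 2
dim_W2 = np.linalg.matrix_rank(B)                       # 2
dim_sum = np.linalg.matrix_rank(np.hstack([A, B]))      # dim(W1 + W2) = 3
M = np.hstack([A, -B])
dim_cap = M.shape[1] - np.linalg.matrix_rank(M)         # dim(W1 ∩ W2) = 1

assert dim_sum == dim_W1 + dim_W2 - dim_cap             # 3 == 2 + 2 - 1
```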
For two subspaces $W_1$ and $W_2$ of a finite dimensional vector space $V$,
if $W_1\cap W_2 = \{0\}$, the sum $W_1 + W_2$ is said to be direct;
$W_1+W_2$ is said to be a direct sum of $W_1$ and $W_2$.
To indicate that a sum is direct, we use the notation:
$$
W_1 \oplus W_2
$$
For two subspaces $W_1$ and $W_2$ of a finite dimensional vector space $V$,
if $W_1 \oplus W_2 = V$,
$W_1$ and $W_2$ are said to be complementary
and
$W_2$ said to be a complementary subspace of $W_1$,
or a complement of $W_1$.
For a subspace $W$ of a vector space $V$,
there exists a subspace $W'$ such that $V = W \oplus W'$.
For a sum $W_1 + \cdots + W_k$ of several subspaces of a finite dimensional vector space
to be direct, it is necessary and sufficient that
$$
\dim (W_1 + \cdots + W_k) = \dim W_1 + \cdots + \dim W_k.
$$
Linear transformations
Let $U$ and $V$ be vector spaces over a field $F$.
A linear transformation $\sigma$ of $U$ into $V$
is
a single-valued mapping of $U$ into $V$
which associates to each element $x\in U$ a unique element $\sigma(x)\in V$
such that
for all $x,y\in U$ and all $\alpha,\beta \in F$,
$$
\sigma(\alpha x + \beta y) = \alpha \sigma(x) + \beta \sigma(y).
$$
To describe the special role of the elements of $F$
in the condition $\sigma(\alpha x) = \alpha\sigma(x)$,
we say that
a linear transformation is a homomorphism over $F$
or an $F$-homomorphism.
When a homomorphism is one-to-one,
it is called a monomorphism.
When a homomorphism is onto, i.e., $\sigma(U) = V$,
it is called an epimorphism.
A homomorphism that is both an epimorphism and a monomorphism is called an isomorphism.
The inverse of an isomorphism is also an isomorphism.
If a homomorphism or isomorphism can be defined uniquely by intrinsic properties
independent of a choice of basis,
the mapping is said to be natural or canonical.
Any two vector spaces of dimension $n$ over a field $F$ are isomorphic.
Such an isomorphism can be established by setting up an isomorphism between each one and $F^n$.
Such an isomorphism, dependent upon the arbitrary choice of bases, is not canonical.
For a linear transformation $\sigma: U\to V$,
$\sigma(U)=\{\sigma(u)|u \in U\}\subset V$ is a subspace of $V$.
For a linear transformation $\sigma: U\to V$,
the subspace $\sigma(U)$ is called the image of $\sigma$,
and denoted by $\image(\sigma)$.
For a linear transformation $\sigma: U\to V$,
if $W$ is a subspace of $U$,
$\sigma(W)$ is a subspace of $V$.
The rank
of a linear transformation $\sigma: U\to V$
is defined by the dimension of the image of $\sigma$,
i.e.,
$\dim \image(\sigma)$,
denoted by $\rho(\sigma)$.
For a linear transformation $\sigma: U\to V$,
$$
\rho(\sigma) \leq \min\{\dim U, \dim V\}.
$$
For a linear transformation $\sigma: U\to V$,
if $W$ is a subspace of $V$,
the set $\sigma^{-1}(W)$ is a subspace of $U$.
For a linear transformation $\sigma: U\to V$,
the subspace $\sigma^{-1}(0)$ is called the kernel of $\sigma$,
and denoted by $K(\sigma)$.
For a linear transformation $\sigma: U\to V$,
the dimension of $K(\sigma)$ is called the nullity of $\sigma$,
and denoted by $\nu(\sigma)$.
For a linear transformation $\sigma: U\to V$,
$$
\rho(\sigma) + \nu(\sigma) = \dim U.
$$
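In matrix terms (anticipating the Matrices section below), this is the rank-nullity theorem: for $A\in F^{m\times n}$, the rank of $A$ plus the dimension of its kernel equals $n$. A NumPy spot-check on a matrix of known rank (an illustration; the kernel dimension is read off from the singular values):
```python
import numpy as np

rng = np.random.default_rng(0)
# a 4x6 matrix of rank 3, built as a product of 4x3 and 3x6 factors
A = rng.standard_normal((4, 3)) @ rng.standard_normal((3, 6))

rank = np.linalg.matrix_rank(A)                      # dimension of the image
s = np.linalg.svd(A, compute_uv=False)
nullity = A.shape[1] - np.count_nonzero(s > 1e-10)   # dimension of the kernel

assert rank == 3 and nullity == 3
assert rank + nullity == A.shape[1]                  # rank + nullity = dim of the domain
```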
A linear transformation $\sigma: U\to V$
is a monomorphism
if and only if
$$
\nu(\sigma)=0.
$$
A linear transformation $\sigma: U\to V$
is an epimorphism
if and only if
$$
\rho(\sigma)=\dim V.
$$
For two vector spaces $U$ and $V$ with $\dim U = \dim V < \infty$,
a linear transformation $\sigma: U\to V$
is an isomorphism if and only if it is an epimorphism, if and only if it is a monomorphism.
This implies that
a linear transformation $\sigma$ of $U$ into $V$
is an isomorphism if any two of the following are satisfied.
-
$\dim U = \dim V$
-
$\sigma$ is an epimorphism
-
$\sigma$ is a monomorphism
For three vector spaces $U$, $V$, and $W$ over a field $F$,
and linear transformations $\sigma: U\to V$ and $\tau: V \to W$,
$$
\rho(\sigma) = \rho(\tau\sigma) + \dim \{\image(\sigma)\cap K(\tau)\}.
$$
For three vector spaces $U$, $V$, and $W$ over a field $F$,
and linear transformations $\sigma: U\to V$ and $\tau: V \to W$,
$$
\rho(\tau\sigma) = \dim \{\image(\sigma) + K(\tau)\} - \nu(\tau).
$$
The first identity above and the dimension formula for the sum of two subspaces together imply
$$
\begin{eqnarray*}
\dim \{\image(\sigma) + K(\tau)\}
&=&
\dim \image(\sigma)
+
\dim K(\tau)
-
\dim \{\image(\sigma) \cap K(\tau)\}
\\
&=&
\rho(\sigma) + \nu(\tau) - (\rho(\sigma) - \rho(\tau \sigma))
=
\nu(\tau) + \rho(\tau \sigma),
\end{eqnarray*}
$$
which proves the second identity.
If $K(\tau) \subset \image(\sigma)$, $\rho(\sigma) = \rho(\tau\sigma) + \nu(\tau)$.
The rank of a product (i.e., composition) of two linear transformations
is less than or equal to the rank of either factor:
$$
\rho(\tau\sigma) \leq \min\{\rho(\tau), \rho(\sigma)\}
$$
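A quick random-matrix check of this inequality with NumPy (illustration only):
```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 6))   # sigma: R^6 -> R^5, rank 2
T = rng.standard_normal((4, 3)) @ rng.standard_normal((3, 5))   # tau:   R^5 -> R^4, rank 3

r = np.linalg.matrix_rank
assert r(T @ S) <= min(r(T), r(S))   # rank(tau sigma) <= min(rank tau, rank sigma)
```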
If $\sigma$ is an epimorphism, $\rho(\tau\sigma) = \rho(\tau)$.
If $\tau$ is a monomorphism, $\rho(\tau\sigma) = \rho(\sigma)$.
The rank of a linear transformation is not changed by
multiplication by an isomorphism (on either side).
$\sigma: U\to V$ is an epimorphism if and only if, for every linear transformation $\tau$ defined on $V$, $\tau\sigma = 0$ implies $\tau=0$.
$\tau: V\to W$ is a monomorphism if and only if, for every linear transformation $\sigma$ into $V$, $\tau\sigma = 0$ implies $\sigma=0$.
Similarly, $\sigma$ is an epimorphism if and only if $\tau_1\sigma = \tau_2\sigma$ implies $\tau_1=\tau_2$,
and $\tau$ is a monomorphism if and only if $\tau\sigma_1 = \tau\sigma_2$ implies $\sigma_1=\sigma_2$.
For any basis $\{a_1,\ldots,a_n\}$ of $U$
and
any $n$ vectors $b_1, \ldots, b_n\in V$ (not necessarily linearly independent),
there exists a uniquely determined linear transformation $\sigma: U \to V$
such that $\sigma(a_i)=b_i$ for all $1\leq i\leq n$.
For any $r$ linearly independent vectors $\{u_1,\ldots,u_r\}$ in a finite dimensional vector space $U$
and
any $r$ vectors $v_1, \ldots, v_r\in V$ (not necessarily linearly independent),
there exists a (not necessarily unique) linear transformation $\sigma:U \to V$
such that $\sigma(u_i)=v_i$ for all $1\leq i\leq r$
(extend $\{u_1,\ldots,u_r\}$ to a basis of $U$, assign arbitrary images to the added vectors, and apply the previous statement).
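Concretely, in coordinates: if the basis vectors $a_i$ are the columns of a nonsingular matrix $A$ and the prescribed images $b_i$ are the columns of $B$, the unique $\sigma$ of the first statement is represented by the matrix $BA^{-1}$. A two-dimensional NumPy sketch (illustration only):
```python
import numpy as np

A = np.array([[1., 1.],
              [0., 1.]])      # columns a_1, a_2 form a basis of R^2
B = np.array([[2., 0.],
              [3., -1.]])     # prescribed images b_1, b_2

M = B @ np.linalg.inv(A)      # matrix of the unique sigma with sigma(a_i) = b_i
assert np.allclose(M @ A[:, 0], B[:, 0])
assert np.allclose(M @ A[:, 1], B[:, 1])
```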
A linear transformation $\pi$ of a vector space into itself with the property that $\pi^2 = \pi$ is called
a projection.
If $\pi$ is a projection of $V$ into itself,
then
$$
V = \image(\pi) \oplus K(\pi)
$$
and $\pi$ acts like the identity on $\image(\pi)$.
For any $v\in V$,
let $v_1 = \pi(v)\in \image(\pi)$
and $v_2 = v-v_1$.
Then $\pi(v_2) = \pi(v) - \pi(v_1) = v_1 - v_1 = 0$
since $\pi(v_1) = \pi^2(v) = \pi(v) = v_1$,
hence $v_2\in K(\pi)$
and $v = v_1 + v_2 \in \image(\pi) + K(\pi)$,
thus $V\subset \image(\pi) + K(\pi)$.
Hence, we conclude that $V=\image(\pi) + K(\pi)$.
Now let $x\in \image(\pi) \cap K(\pi)$.
Then there exists $y\in V$ such that $x=\pi(y)$,
thus $0=\pi(x) = \pi^2(y) = \pi(y) = x$.
Therefore $\image(\pi) \cap K(\pi) = \{0\}$.
Therefore $V$ is a direct sum of $\image(\pi)$ and $K(\pi)$.
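A numerical illustration: the matrix below projects $\mathbb{R}^2$ onto the $x$-axis along the line $y=x$, and the decomposition above can be checked directly (NumPy sketch, illustration only):
```python
import numpy as np

# projection of R^2 onto the x-axis along the line y = x
P = np.array([[1., -1.],
              [0.,  0.]])
assert np.allclose(P @ P, P)      # pi^2 = pi

v = np.array([3., 5.])
v1 = P @ v                        # component in Im(pi)
v2 = v - v1                       # component in K(pi)
assert np.allclose(P @ v2, 0.0)   # v2 lies in the kernel
assert np.allclose(P @ v1, v1)    # pi acts as the identity on Im(pi)
assert np.allclose(v1 + v2, v)    # v = v1 + v2
```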
Matrices
A matrix over a field $F$
is
a rectangular array of scalars.
The array will be written in the form
$$
A=
\begin{bmatrix}
A_{1,1} & A_{1,2} & \cdots & A_{1,n}
\\
A_{2,1} & A_{2,2} & \cdots & A_{2,n}
\\
\vdots & \vdots & \ddots & \vdots
\\
A_{m,1} & A_{m,2} & \cdots & A_{m,n}
\end{bmatrix}
\in
F^{m\times n}
$$
A matrix with $m$ rows and $n$ columns is called an $m\times n$ matrix
or an $m$-by-$n$ matrix.
$A_{i,j}$ are called
elements
or
entries.
The main diagonal of the matrix $A\in F^{m\times n}$
is the list of elements $(A_{1,1}, A_{2,2}, \ldots, A_{t,t})$ where $t=\min\{m,n\}$.
A diagonal matrix is a square matrix in which
the elements not in the main diagonal are zero.
Matrix-matrix multiplication is defined as a (non-commutative) binary operation
on
two matrices $A\in F^{r\times m}$ and $B\in F^{m\times n}$
in such a way that
the product $AB$ is an $r$-by-$n$ matrix where
$$
(AB)_{i,j} = \sum_{k=1}^m A_{i,k} B_{k,j}
$$
for all $1\leq i\leq r$ and $1\leq j\leq n$.
The motivation for this specific way of defining matrix-matrix multiplication
is well explained in
Matrix-matrix multiplication
of another blog post of mine about linear algebra, From Ancient Equations to Artificial Intelligence – Linear Algebra.
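The defining formula translates directly into code. A short NumPy sketch (for illustration; real libraries use far more efficient algorithms) that compares a literal triple-loop implementation with NumPy's built-in product:
```python
import numpy as np

def matmul(A, B):
    # (AB)_{i,j} = sum over k of A_{i,k} * B_{k,j}, a literal transcription of the definition
    r, m = A.shape
    m2, n = B.shape
    assert m == m2, "inner dimensions must agree"
    C = np.zeros((r, n))
    for i in range(r):
        for j in range(n):
            C[i, j] = sum(A[i, k] * B[k, j] for k in range(m))
    return C

rng = np.random.default_rng(2)
A, B = rng.standard_normal((3, 4)), rng.standard_normal((4, 2))
assert np.allclose(matmul(A, B), A @ B)
```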
For an $A\in F^{m\times n}$,
the rank of $A$ plus the nullity of $A$ is equal to $n$.
The rank of a product $BA$ is less than or equal to the rank of either factor.
Nonsingular matrices
A homomorphism of a vector space into itself is called an endomorphism.
A one-to-one linear transformation $\sigma$ of a vector space onto itself is called an automorphism.
The inverse of an automorphism is an automorphism.
A linear transformation of an $n$-dimensional vector space into itself
is an automorphism if and only if it is of rank $n$,
i.e., if and only if it is an epimorphism.
A linear transformation of an $n$-dimensional vector space into itself
is an automorphism if and only if its nullity is 0,
i.e., if and only if it is a monomorphism.
A linear transformation that has an inverse is said to be nonsingular or invertible;
otherwise it is said to be singular.
A matrix that has an inverse is said to be nonsingular or invertible.
Only a square matrix can have an inverse.
Suppose that for $A\in F^{n\times n}$ there exists $B\in F^{n\times n}$ such that $BA=I$.
This implies $\rank A = n$,
hence $A$ represents an automorphism $\sigma$.
By definition, $B$ represents the inverse transformation $\sigma^{-1}$,
hence $B = A^{-1}$.
Using the very same argument,
if $C\in F^{n\times n}$ satisfies $AC=I$,
then $C=A^{-1}$.
If $A$ and $B$ are square matrices satisfying $BA=I$, then $AB=I$.
If $A$ and $B$ are square matrices satisfying $AB=I$, then $BA=I$.
In either case, $B$ is the unique inverse of $A$.
If $A$ and $B$ are nonsingular,
-
$AB$ is nonsingular and $(AB)^{-1} = B^{-1} A^{-1}$,
-
$A^{-1}$ is nonsingular and $(A^{-1})^{-1} = A$,
-
for $a\neq0$, $aA$ is nonsingular and $(aA)^{-1} = a^{-1} A^{-1}$.
If $A$ is nonsingular,
we can solve uniquely the equations $XA=B$ and $AY=B$ for any matrix $B$ of the proper size
(but the two solutions need not be equal).
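For nonsingular $A$ the two solutions are $X = BA^{-1}$ and $Y = A^{-1}B$, which in general differ. A NumPy sketch (illustration only) that also checks $(AB)^{-1} = B^{-1}A^{-1}$ from the list above:
```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)   # nonsingular (with overwhelming probability)
B = rng.standard_normal((3, 3))

X = B @ np.linalg.inv(A)          # unique solution of X A = B
Y = np.linalg.inv(A) @ B          # unique solution of A Y = B
assert np.allclose(X @ A, B) and np.allclose(A @ Y, B)
assert not np.allclose(X, Y)      # the two solutions need not be equal

C = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)   # another nonsingular matrix
assert np.allclose(np.linalg.inv(A @ C), np.linalg.inv(C) @ np.linalg.inv(A))
```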
The rank of a (not necessarily square) matrix is not changed
by multiplication by a nonsingular matrix.