In the set R^{2} one usually introduces two operations: the sum of two pairs and the product of a pair by a number: (x_{1},x_{2})+(y_{1},y_{2})=(x_{1}+y_{1},x_{2}+y_{2}) and c(x_{1},x_{2})=(cx_{1},cx_{2}). These operations turn the set of all pairs of real numbers into a set that behaves exactly like the set of ordinary vectors in the plane, where the sum and the multiplication by a number are defined in the usual geometric way.
If we consider the set R^{n} of all n-tuples of real numbers (x_{1}, x_{2}, ..., x_{n}), we can define the sum and the multiplication by a number in exactly the same way:
(x_{1}, x_{2}, ..., x_{n})+(y_{1}, y_{2}, ..., y_{n})=(x_{1}+y_{1}, x_{2}+y_{2}, ..., x_{n}+y_{n}) and c(x_{1}, x_{2}, ..., x_{n})=(cx_{1}, cx_{2}, ..., cx_{n}).
These definitions turn the set R^{n} into a set where the operations just defined have all the usual properties of the operations between vectors (of course no geometric representation is possible when n>3). We shall say that R^{n}, with these two operations, is a vector space of dimension n. More general sets can be turned into vector spaces by suitable definitions of a sum and of a product by a real number: these spaces play an important role in many applications of mathematics, but they are beyond the scope of this brief introduction. It is usual to represent the elements of the vector space R^{n} as one-column matrices (column vectors).
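The two operations above can be sketched in a few lines of code; vectors are stored as plain Python lists, and the function names `vec_add` and `scal_mul` are illustrative choices, not notation from the text.

```python
# Componentwise operations in R^n, with n-tuples stored as Python lists.
# Works for any n: the two lists just need the same length.

def vec_add(x, y):
    """Sum of two n-tuples: (x_1+y_1, ..., x_n+y_n)."""
    return [xi + yi for xi, yi in zip(x, y)]

def scal_mul(c, x):
    """Product of an n-tuple by a number: (c*x_1, ..., c*x_n)."""
    return [c * xi for xi in x]

print(vec_add([1, 2, 3], [4, 5, 6]))  # [5, 7, 9]
print(scal_mul(2, [1, 2, 3]))         # [2, 4, 6]
```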
We can now go one step further and introduce functions between vector spaces: f : R^{n}→R^{m}. A function of this kind transforms an n-tuple into an m-tuple. For example the function f(x_{1},x_{2})=(x_{1}x_{2}, x_{1}+x_{2}, x_{2}^{2}) transforms each pair of real numbers into a triple of real numbers, so it is a function f : R^{2}→R^{3}.
If we consider a function f : R^{n}→R^{m}, the elements of R^{n} are called independent variables, while the elements of R^{m} are called dependent variables (as usual with functions!).
The most important functions of this kind are the so-called linear functions: the only allowed operations on the independent variables are sums between the variables and multiplications by numbers. The function in the previous example is not a linear function, while the following one is: f(x_{1},x_{2})=(x_{1}+2x_{2}, 3x_{1}, x_{1}-x_{2}). If we remember matrix multiplication, it is easy to check that this function can also be written as f(x)=Ax, where x is the one-column matrix with entries x_{1}, x_{2} and A is the 3×2 matrix with rows (1, 2), (3, 0), (1, -1). This is the reason why we represent the elements of R^{n} as one-column matrices. We say that the matrix A represents the function f. This fact is of a general nature: every linear function f : R^{n}→R^{m} can be written as f(x)=Ax for a suitable m×n matrix A.
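A minimal sketch of this matrix representation, using an illustrative linear function f(x_{1},x_{2})=(x_{1}+2x_{2}, 3x_{1}, x_{1}-x_{2}) (chosen here for the example; any linear function would do) and checking that multiplying by its matrix gives the same result:

```python
# An illustrative linear function f : R^2 -> R^3.
def f(x1, x2):
    return [x1 + 2 * x2, 3 * x1, x1 - x2]

# The 3x2 matrix representing f: row i holds the coefficients
# of the i-th component of the output.
A = [[1, 2],
     [3, 0],
     [1, -1]]

def mat_vec(A, x):
    """Matrix-by-column-vector product: (Ax)_i = sum_j A[i][j] * x[j]."""
    return [sum(a_ij * xj for a_ij, xj in zip(row, x)) for row in A]

x = [5, -2]
print(f(*x))          # [1, 15, 7]
print(mat_vec(A, x))  # same triple: the matrix A represents the function f
```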
In the special case where n=m=1 the function is simply f(x)=ax, where a is a real number, and the matrix A reduces to a single-entry matrix: A=[a]. The graph of this function in a Cartesian plane is a straight line through the origin. This is the reason why these functions are called linear.
Using the terminology just introduced, we can interpret the problem of solving a linear system of m equations in n unknowns, Ax = b, as the problem of finding the inverse image of an element b of R^{m} under the function represented by the matrix A: f^{-1}(b) = {x in R^{n} : Ax = b}.
The theory we have developed proves that this inverse image can have no elements (inconsistent system), exactly one element, or infinitely many elements.
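The three possibilities can be illustrated for 2×2 systems, where the determinant decides the matter; `classify_2x2` and the sample systems below are an illustrative sketch, not part of the text.

```python
# Sketch: the three possible outcomes for a 2x2 linear system Ax = b.

def classify_2x2(A, b):
    a, p = A[0]
    c, d = A[1]
    det = a * d - p * c
    if det != 0:
        # Nonzero determinant: Cramer's rule gives the unique solution.
        x1 = (b[0] * d - p * b[1]) / det
        x2 = (a * b[1] - b[0] * c) / det
        return ("unique", [x1, x2])
    # det == 0: the rows of A are dependent; the system is consistent
    # exactly when every 2x2 minor of the augmented matrix also vanishes.
    if a * b[1] - c * b[0] == 0 and p * b[1] - d * b[0] == 0:
        return ("infinitely many", None)
    return ("none", None)

print(classify_2x2([[1, 1], [1, -1]], [3, 1]))  # unique solution [2.0, 1.0]
print(classify_2x2([[1, 1], [2, 2]], [3, 7]))   # inconsistent system
print(classify_2x2([[1, 1], [2, 2]], [3, 6]))   # infinitely many solutions
```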