The notion of a vector space is extremely general, and can be applied in all sorts of surprising situations. A vector space may be loosely defined as a set of lists of values that can be added to and subtracted from one another, and which can be scaled by another set of values. The most familiar examples of vector spaces are those representing two- or three-dimensional space, such as R^{2} or R^{3}, in which the vectors are things like (x,y) and (x,y,z).
We also often think of vectors as column or row vectors, which are n×1 or 1×n matrices, respectively. Understanding vectors that way will become more important as we investigate topics such as subspaces, spanning sets, and maps between vector spaces. The general notion of a column vector should already be familiar from basic linear systems.
Consider R^{2}; is this a vector space? It had better be! Let's assuage any suspicion by checking that the conditions above are satisfied.
First, we see that vector addition checks out. Given two vectors (a,b) and (c,d) in R^{2}, by adding them we get (a+c, b+d), which is again a pair of real numbers and hence again in R^{2}. Furthermore, we can see that scalar multiplication checks out too, for if given any r in R, then we can multiply this by a vector (a,b) to get r(a,b) = (ra, rb), again in R^{2}.
As for the remaining properties, the vector (0,0) is the zero vector, the additive inverse of any vector (a,b) is just (-a,-b), and 1(a,b) = (a,b). Hence R^{2} is indeed a vector space.
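The checks above can be sketched in a few lines of code. This is a minimal illustration, not a proof: vectors in R^{2} are represented as tuples of floats, and `add` and `scale` are hypothetical helper names.

```python
# A minimal sketch of R^2 as a vector space, with vectors as tuples of floats.
def add(v, w):
    """Component-wise vector addition in R^2."""
    return (v[0] + w[0], v[1] + w[1])

def scale(r, v):
    """Scalar multiplication in R^2."""
    return (r * v[0], r * v[1])

v = (1.0, 2.0)
w = (3.0, -1.0)

print(add(v, w))               # closure under addition: (4.0, 1.0)
print(scale(2.0, v))           # closure under scaling: (2.0, 4.0)
print(add(v, scale(-1.0, v)))  # additive inverse gives the zero vector: (0.0, 0.0)
print(scale(1.0, v) == v)      # 1(a,b) = (a,b): True
```

Each printed result lands back inside R^{2}, which is exactly what the closure conditions demand.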
Next we will see two examples of slightly more interesting vector spaces.
Ok, so I lied a little. In addition to the four-point definition above, there are a few additional properties that all vector spaces must satisfy. I neglected to mention these properties above because they are the sort of thing we usually take for granted, but they become important as we begin to think more abstractly about vector spaces. Knowing about these additional properties is of paramount importance when we are trying to define weird-looking vector spaces.
So what do we usually take for granted? First, we take for granted that addition is commutative, that a + b = b + a. For a vector space, addition must likewise be commutative: v + w = w + v for all vectors v and w.
Second, we never expect multiplication by 1 to give us anything but the number we multiplied, that is, 1x = x for all x. Scalar multiplication must have this feature as well: 1 in R must act as the multiplicative identity for scalar multiplication, so that 1v = v for every vector v.
Likewise, we generally assume that multiplication by zero always gives zero. But here we actually have two different concepts of zero! We have the regular zero, 0 in R, and we have the zero vector, 0_{V} in V. What we must ensure is that 0w = 0_{V} for every vector w in V.
Finally, we usually take for granted the distributive property of multiplication. The situation with vector spaces is again a little goofy because we don't actually multiply vectors, we only add them. Multiplication comes into play only when scalars are concerned. So in order for a set to count as a real vector space, we have to make sure that r(v+w) = rv+rw and that (r+s)v = rv + sv for all real numbers r and s, and for all vectors v and w in V.
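The "taken for granted" properties can be spot-checked numerically. This is a hedged sketch, not a proof: it samples random vectors and scalars in R^{2} (tuples of floats, with hypothetical helpers `add`, `scale`, and `close`) and asserts each axiom, using a tolerance to absorb floating-point rounding.

```python
import math
import random

# Numerical spot-check (not a proof) of commutativity, identities,
# and distributivity on R^2, with vectors as tuples of floats.
def add(v, w):
    return (v[0] + w[0], v[1] + w[1])

def scale(r, v):
    return (r * v[0], r * v[1])

def close(v, w):
    """Component-wise comparison with a tolerance for float rounding."""
    return all(math.isclose(a, b, abs_tol=1e-9) for a, b in zip(v, w))

random.seed(0)
for _ in range(100):
    r = random.uniform(-5, 5)
    s = random.uniform(-5, 5)
    v = (random.uniform(-5, 5), random.uniform(-5, 5))
    w = (random.uniform(-5, 5), random.uniform(-5, 5))
    assert add(v, w) == add(w, v)                                     # v + w = w + v
    assert scale(1, v) == v                                           # 1v = v
    assert close(scale(0, v), (0.0, 0.0))                             # 0v = 0_V
    assert close(scale(r, add(v, w)), add(scale(r, v), scale(r, w)))  # r(v+w) = rv+rw
    assert close(scale(r + s, v), add(scale(r, v), scale(s, v)))      # (r+s)v = rv+sv
print("all sampled axioms hold")
```

Passing 100 random samples is of course no substitute for the algebraic argument, but it is a useful sanity check when experimenting with stranger candidate vector spaces.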
Example 1: Consider the subset of R^{2} of vectors with components x and y such that x+y = 1. Is this a vector space? No, we see that it is not closed under addition: both (1,0) and (0,1) lie on the line, yet their sum (1,1) satisfies 1 + 1 = 2 ≠ 1.
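The failure of closure can be made concrete with a tiny check. This is a sketch: vectors are tuples, and `on_line` is a hypothetical helper name.

```python
# Sketch: the subset {(x, y) : x + y = 1} of R^2 is not closed under addition.
def on_line(v):
    """True if the vector lies on the line x + y = 1."""
    return v[0] + v[1] == 1

v, w = (1, 0), (0, 1)
total = (v[0] + w[0], v[1] + w[1])

print(on_line(v), on_line(w))  # True True: both vectors are in the subset
print(total, on_line(total))   # (1, 1) False: 1 + 1 = 2 != 1
```

A single counterexample like this is all it takes: closure must hold for every pair of vectors, so one failing pair sinks the whole candidate.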
Example 2: Let N be the set of natural numbers, i.e., N = {1,2,3,4,...}. Is N a real vector space? Again the answer is no. We see that the set fails on a number of points: it contains no zero vector, no element has an additive inverse (there is no natural number n with 1 + n = 0), and it is not closed under scalar multiplication (for instance, (1/2)·3 = 3/2 is not a natural number).
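These failures can also be sketched in code. This is an illustration only: `in_N` is a hypothetical membership test modeling N as the positive integers.

```python
# Sketch: N = {1, 2, 3, ...} fails several vector-space axioms.
def in_N(x):
    """True if x is a natural number as defined here (a positive integer)."""
    return isinstance(x, int) and x >= 1

print(in_N(0))         # False: N has no zero vector
print(in_N(1 + (-1)))  # False: 1 has no additive inverse inside N
print(in_N(0.5 * 3))   # False: 1.5 shows N is not closed under real scaling
```

Any one of these failures alone already disqualifies N; a set must satisfy every axiom to count as a vector space.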