|
Hey guys: Been burying my nose in my assignment all night, just have a few questions that I hope you guys can help me clarify (not really a homework thread, but I just want to seek some additional insights).
So the question is: Let U, V, W be finite dimensional vector spaces over R and let L: V -> U and M: U -> W be linear mappings.
a) Prove that rank(M*L) is less than or equal to rank(M).
b) Prove that if M is invertible, then rank(M*L) = rank(L),
where M*L is the composition of the two linear mappings.
So after several attempts at the proofs, one question arises: let's look at L: V->U. Let's suppose the vector space V has dimension A, and the output range U has dimension B. Does this necessarily mean A must be equal to or greater than B? Or can you do a linear mapping and obtain a higher dimension?
My basic approach to the first proof is that M*L first maps V into U via L, and then maps that image into W via M. However, some of the original vectors from V may land in the nullspace, so the maximum dimension of the image of L is just the dimension of V. Applying the same logic to the mapping M, we prove a). Is this a valid approach to proving this question? Or am I being too vague?
For part b), if M is invertible, that means it is a square matrix (suppose it's n x n), so the dimension of U is n and the dimension of W is also n. Moreover, an invertible M doesn't collapse any dimensions, so dim M(L(V)) = dim L(V), and hence the question is proven.
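As a quick numerical sanity check of both claims (not a proof; just random matrices pushed through numpy, with made-up sizes):

```python
import numpy as np

rng = np.random.default_rng(0)
rank = np.linalg.matrix_rank

# a) rank(M L) <= rank(M): random L: V -> U and M: U -> W as matrices
dimV, dimU, dimW = 4, 6, 5
L = rng.standard_normal((dimU, dimV))
M = rng.standard_normal((dimW, dimU))
assert rank(M @ L) <= rank(M)
assert rank(M @ L) <= rank(L)

# b) if M is invertible (square, full rank), then rank(M L) = rank(L)
dimU = dimW = 5
L = rng.standard_normal((dimU, dimV))
M = rng.standard_normal((dimW, dimU))    # a random square matrix is almost surely invertible
assert abs(np.linalg.det(M)) > 1e-9      # confirm invertibility for this particular sample
assert rank(M @ L) == rank(L)

print("both claims hold for these random examples")
```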
|
Hmm, been a while since my algebra class, but can't both a) and b) just be trivially proven using the linear mapping dimension lemma?
It would give a more concrete proof of what you intuitively already see in your 'basic approach' and resolve the vagueness.
|
Btw, I don't know if it's actually called the linear mapping dimension lemma in English. I just know it's 'dimensiestelling' in Dutch.
|
On September 21 2010 17:49 Talent.L wrote: So after several attempts at the proofs, one question arises: let's look at L: V->U. Let's suppose the vector space V has dimension A, and the output range U has dimension B. Does this necessarily mean A must be equal to or greater than B? Or can you do a linear mapping and obtain a higher dimension?
Yes, provided B is the dimension of the image L(V) (the codomain U itself can of course be bigger): it must hold that A >= dim L(V). You can prove it as follows: Let e_1, ..., e_n be a basis of V. Then L(e_1), ..., L(e_n) span L(V). If L(e_1), ..., L(e_n) are linearly independent, then dim L(V) = dim V; otherwise dim L(V) < dim V. Either way, dim L(V) <= dim V.
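A quick way to see this numerically (just an illustrative numpy sketch with made-up sizes): the matrix of L has dim V columns, and its rank, which is dim L(V), can never exceed the number of columns, even when the codomain U is bigger.

```python
import numpy as np

rng = np.random.default_rng(1)

dimV, dimU = 3, 7                         # codomain deliberately bigger than the domain
L = rng.standard_normal((dimU, dimV))     # matrix of L: V -> U

# dim L(V) is the rank of the matrix; it is capped by dim V = 3,
# no matter how large dim U = 7 is.
print(np.linalg.matrix_rank(L))           # 3 (almost surely), never more
```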
|
On September 21 2010 17:49 Talent.L wrote: Hey guys: Been burying my nose in my assignment all night, just have a few questions that I hope you guys can help me clarify (not really a homework thread, but I just want to seek some additional insights).
So the question is: Let U, V, W be finite dimensional vector spaces over R and let L: V -> U and M: U -> W be linear mappings.
a) Prove that rank(M*L) is less than or equal to rank(M).
b) Prove that if M is invertible, then rank(M*L) = rank(L),
where M*L is the composition of the two linear mappings.
So after several attempts at the proofs, one question arises: let's look at L: V->U. Let's suppose the vector space V has dimension A, and the output range U has dimension B. Does this necessarily mean A must be equal to or greater than B? Or can you do a linear mapping and obtain a higher dimension?
No, you cannot obtain a higher dimension: the image of a linear transformation always has dimension less than or equal to the dimension of the domain. The basic idea is that the images of a basis form a generating set for the image.
The reasoning goes like this: Let K be a linear transformation from V to U and Q = {v1, v2, ..., vn} a basis of V.
Let K(w) be any vector in the image of K. Since w is in V it can be written as w = c1*v1 + ... + cn*vn, so by the linearity of K, K(w) = c1*K(v1) + ... + cn*K(vn).
Since every element in the image of K can be written in this form, the set R = {K(v1), K(v2), ..., K(vn)} is a generating set for Im(K). So we have n >= dim(Im(K)), i.e. dim(V) >= rank(K).
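Here is that chain checked numerically (an illustrative numpy sketch; the sizes and the random matrix are made up): write a vector w in terms of the standard basis, and confirm that K(w) is the same combination of the K(vi).

```python
import numpy as np

rng = np.random.default_rng(2)

n, m = 4, 6
K = rng.standard_normal((m, n))       # matrix of K: V -> U, with dim V = n
basis = np.eye(n)                     # standard basis v1, ..., vn of V
c = rng.standard_normal(n)            # coefficients c1, ..., cn

w = basis @ c                         # w = c1*v1 + ... + cn*vn
lhs = K @ w                           # K(w)
rhs = (K @ basis) @ c                 # c1*K(v1) + ... + cn*K(vn)

# Linearity: K(w) is the same combination of the K(vi),
# so {K(v1), ..., K(vn)} spans Im(K) and dim Im(K) <= n.
assert np.allclose(lhs, rhs)
```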
|
Look, as long as the flux vortex and corvonal nibulus reach a resonant constant, then the linear mappings will reach equilibrium. This goes without saying that the induced transient equation will never fulfill the allotted serialistic parameters of the synaptical dimension. Let's pretend Y=({J8}-8C{8&]@h2{_), then that would mean we have a disproportionate element of mathematical clarity that can only be functioned to a state of the cosine opposite. Agree? It's really very easy. The range that has to be considered is solely dependent on the inverted and reciprocated equation of naturalistic dimentia that can be covered by a 4x4 matrii and a constalisis neurabulation chart.
Just run it through a calculator and you'll see what I mean.
|
On September 21 2010 21:38 Horrde wrote: Look, as long as the flux vortex and corvonal nibulus reach a resonant constant, then the linear mappings will reach equilibrium. This goes without saying that the induced transient equation will never fulfill the allotted serialistic parameters of the synaptical dimension. Let's pretend Y=({J8}-8C{8&]@h2{_), then that would mean we have a disproportionate element of mathematical clarity that can only be functioned to a state of the cosine opposite. Agree? It's really very easy. The range that has to be considered is solely dependent on the inverted and reciprocated equation of naturalistic dimentia that can be covered by a 4x4 matrii and a constalisis neurabulation chart.
Just run it through a calculator and you'll see what I mean.
this is the correct answer... obviously. lol.
|
If you're not going to be helpful, refrain from posting.
|
I don't know if it's completely correct, but this is what I got for exercise a:
Define:
A = m x n matrix, B = n x p matrix
Nul(B) is a subset of Nul(AB): every x in the nullspace of B (Bx = 0) is also in the nullspace of AB (since ABx = A*0 = 0). So dim Nul(B) <= dim Nul(AB).
It follows from the rank-nullity theorem ('dimensiestelling' in Dutch) that rank(AB) = p - dim Nul(AB) <= p - dim Nul(B) = rank(B).
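A quick numerical check of the two facts used here (just a sketch with numpy/scipy; the matrix sizes and the rank-deficient construction of B are made up for illustration):

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(3)
rank = np.linalg.matrix_rank

m, n, p = 5, 4, 6
A = rng.standard_normal((m, n))
# Build B deliberately rank-deficient so that Nul(B) is nontrivial.
B = rng.standard_normal((n, 2)) @ rng.standard_normal((2, p))

# Nul(B) is a subset of Nul(AB): every null vector of B is killed by AB too.
N = null_space(B)                     # orthonormal basis of Nul(B), shape (p, p - rank(B))
assert np.allclose(A @ B @ N, 0)

# Rank-nullity chain: rank(AB) = p - dim Nul(AB) <= p - dim Nul(B) = rank(B)
assert rank(A @ B) == p - null_space(A @ B).shape[1]
assert rank(A @ B) <= rank(B)
```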
Rank(M*L) = Rank(Transpose(M*L)) = Rank(Transpose(L)*Transpose(M)) <= Rank(Transpose(M)) = Rank(M)
(using Rank(Transpose(A)) = Rank(A), and the inequality above with A = Transpose(L), B = Transpose(M))
So Rank(M*L) <= Rank(M)
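And a tiny numpy check of the transpose step (illustrative only, with made-up sizes):

```python
import numpy as np

rng = np.random.default_rng(4)
rank = np.linalg.matrix_rank

L = rng.standard_normal((6, 4))   # L: V -> U
M = rng.standard_normal((5, 6))   # M: U -> W

# Rank is invariant under transposition ...
assert rank(M.T) == rank(M)
assert rank((M @ L).T) == rank(M @ L)

# ... and Transpose(M*L) = Transpose(L)*Transpose(M), so the chain above gives rank(ML) <= rank(M).
assert np.allclose((M @ L).T, L.T @ M.T)
assert rank(M @ L) <= rank(M)
```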
|