Talk:Square root of a matrix: Difference between revisions


Revision as of 18:11, 6 December 2008

Proof of unitary freedom

I think the proof of the unitary freedom of roots is not correct. It assumes that the rows of B, the positive root of T, span C^n. This is equivalent to saying B is invertible, which need not be the case. B won't be invertible if T isn't, and T will only be invertible if it is positive-definite. If it is only positive-semidefinite, with some zero eigenvalues, T will be singular. For example, pick T = 0; then B = 0, but the columns of B don't span anything at all. Daniel Estévez (talk) 12:46, 6 December 2008 (UTC)[reply]
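The point above can be checked numerically. A minimal sketch, using an assumed 2×2 singular positive-semidefinite matrix (any singular PSD matrix would do): the positive square root B inherits T's rank deficiency, so its columns cannot span C^n.

```python
import numpy as np

# Hypothetical singular positive-semidefinite matrix (eigenvalues 2 and 0).
T = np.array([[1.0, 1.0],
              [1.0, 1.0]])

# Positive square root via the spectral decomposition T = U D U*.
w, U = np.linalg.eigh(T)
B = U @ np.diag(np.sqrt(np.clip(w, 0, None))) @ U.T

# B is a genuine square root of T ...
print(np.allclose(B @ B, T))        # True
# ... but it is singular: its columns span a 1-dimensional subspace, not C^2.
print(np.linalg.matrix_rank(B))     # 1
```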

In the finite-dimensional setting, a partial isometry can be extended to a unitary. For the same reason, U can be taken to be unitary in the polar decomposition UP of a matrix. Mct mht (talk) 17:16, 6 December 2008 (UTC)[reply]
I really don't see how that helps. In the proof it is said that each column of A is a linear combination of only the columns of B, because the columns of B span C^n. Moreover, I don't see how B can be an orthogonal basis for C^n. If T diagonalizes as T = UDU*, with the eigenvalues real and nonnegative because T is positive semidefinite, then you pick B = UD^(1/2)U*. U is an orthonormal basis for C^n, but B isn't. What I'm saying is that if T (and hence B) is positive definite, then the proof is OK, but if T is only positive semidefinite, then B has a nontrivial kernel, and so its columns cannot span the whole of C^n. Of course I may be completely wrong, as I am not really into linear algebra/functional analysis. Daniel Estévez (talk) 18:11, 6 December 2008 (UTC)[reply]


Cholesky vs square root

This article states that Cholesky decomposition gives the square root. This is, I think, a mistake. B.B = A and L.L^T = A are not the same, and B will not equal L unless L is diagonal. Cf. http://yarchive.net/comp/sqrtm.html I have edited this article appropriately --Winterstein 11:03, 26 March 2007 (UTC)[reply]
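The contrast can be made concrete. A minimal sketch, using an assumed symmetric positive-definite matrix: both factorizations reproduce A, but the Cholesky factor L (lower triangular) and the symmetric square root B differ whenever A is not diagonal.

```python
import numpy as np

# Assumed 2x2 symmetric positive-definite example.
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

# Cholesky factor: A = L @ L.T, with L lower triangular.
L = np.linalg.cholesky(A)

# Symmetric (positive) square root: A = B @ B, via the eigendecomposition.
w, U = np.linalg.eigh(A)
B = U @ np.diag(np.sqrt(w)) @ U.T

print(np.allclose(L @ L.T, A))   # True: Cholesky factorization
print(np.allclose(B @ B, A))     # True: matrix square root
print(np.allclose(L, B))         # False: L is triangular, B is symmetric
```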

Well, the Cholesky decomposition gives a square root; the term "square root" is used in different senses in the two sections. Mct mht 11:09, 26 March 2007 (UTC)[reply]
At the beginning of the section Square root of positive operators the article defines the square root of A as the operator B for which A = B*B, and in the next sentence it denotes by T^(1/2) the matrix for which T = (T^(1/2))^2. This was very confusing until I realized that T^(1/2) is necessarily self-adjoint, that it is therefore also a square root in the former sense, and that only the self-adjoint square root is unique. Is this reasoning correct? --Drizzd (talk) 11:17, 9 February 2008 (UTC)[reply]
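The reasoning in the comment above can be sketched as follows (a sketch, not a full proof, in the article's notation):

```latex
% Since T^{1/2} is self-adjoint, the two definitions agree for it:
\left(T^{1/2}\right)^{*} T^{1/2} \;=\; \left(T^{1/2}\right)^{2} \;=\; T .
% Any B with B^{*}B = T has polar decomposition B = U\,T^{1/2}
% with U unitary, so T^{1/2} is the unique \emph{positive} such factor.
B = U\,T^{1/2}, \qquad U \text{ unitary.}
```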
Yes; more precisely, the positive square root is unique. Mct mht (talk) 17:23, 6 December 2008 (UTC)[reply]

Calculating the square root of a diagonalizable matrix

The description of calculating the square root of a diagonalizable matrix could be improved.

Currently it takes the matrix of eigenvectors as a given, then takes steps to calculate the eigenvalues from it. It is a very rare situation to have eigenvectors before you have eigenvalues: they are usually calculated simultaneously, or, for small matrices, the eigenvalues are found first by finding the roots of the characteristic polynomial.

I realize it is easier to describe the step from eigenvectors to eigenvalues in matrix notation than the other way around, but the description should decide whether it wants to be a recipe or a theorem. If it's a recipe, it should take practical input, and if it's a theorem, the eigenvalues should be given alongside the eigenvectors.
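For what it's worth, the "recipe" view is short in practice: standard eigensolvers return eigenvalues and eigenvectors together, after which the square root is one reconstruction step. A sketch with an assumed diagonalizable matrix (eigenvalues 6 and 1, so the real square root exists):

```python
import numpy as np

# Assumed diagonalizable example with positive real eigenvalues (6 and 1).
A = np.array([[5.0, 4.0],
              [1.0, 2.0]])

# Eigenvalues and eigenvectors come out of one call, together.
w, V = np.linalg.eig(A)             # A = V diag(w) V^{-1}

# Square root: take sqrt of the eigenvalues, then undo the diagonalization.
R = V @ np.diag(np.sqrt(w)) @ np.linalg.inv(V)

print(np.allclose(R @ R, A))        # True: R is a square root of A
```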

Please comment if you believe this to be a bad idea. I will fix the article in a few weeks if no one stops me - if I remember. :-) -- Palmin

Good point. Please do fix it. As an alternative, perhaps consider diagonalization as one step (and refer to diagonalizable matrix), but if you think it's better to spell it out in more detail, be my guest! -- Jitse Niesen (talk) 00:34, 15 March 2007 (UTC)[reply]
As I didn't see any edits from you, I rewrote the section myself. -- Jitse Niesen (talk) 12:38, 20 May 2007 (UTC)[reply]