Talk:Sparse matrix
Vertex reordering
While the Cuthill-McKee algorithm is a fine algorithm, A. George first showed in 1971 that the Reverse Cuthill-McKee algorithm is often better. There have been many arguably better algorithms in the last 30 years.
I added some [incomplete, sketchy] remarks.
-- User:Nahaj 2005-08-28
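For concreteness, here is a minimal sketch (my own, not from this discussion) of the two orderings mentioned above: Cuthill-McKee orders the vertices of the matrix graph by a breadth-first search that visits neighbors in order of increasing degree, and Reverse Cuthill-McKee simply reverses that ordering. The toy graph, names, and starting vertex are assumptions for illustration.

```python
from collections import deque

def cuthill_mckee(adj, start):
    """Breadth-first ordering; adj maps vertex -> set of neighbors.
    Assumes every vertex is reachable from start."""
    order, seen = [], {start}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        # Visit unseen neighbors in order of increasing degree.
        for w in sorted(adj[v] - seen, key=lambda u: len(adj[u])):
            seen.add(w)
            queue.append(w)
    return order

def reverse_cuthill_mckee(adj, start):
    # RCM is just the Cuthill-McKee ordering, reversed.
    return cuthill_mckee(adj, start)[::-1]

# Toy path graph 0-1-2-3:
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(cuthill_mckee(adj, 0))          # [0, 1, 2, 3]
print(reverse_cuthill_mckee(adj, 0))  # [3, 2, 1, 0]
```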
Cholesky factorization
In my experience, Cholesky factorization usually shows up for large sparse matrices only in solutions of least squares problems via the Normal Equations.
There have been many, many alternatives since the 1960s and Householder's work. [Lawson and Hanson, 1974] covered several, for example. Reference: Lawson, Charles L. and Hanson, Richard J., 1974. Solving Least Squares Problems. Prentice-Hall.
The article seems to me to have a very old-fashioned bias. I added some remarks against that bias.
-- User:Nahaj 2005-08-28
- My first sentence shows how small an area I play in. In Tim Davis's note below, under "too much emphasis on band matrices", he corrects this, and in email he sent me a pointer to his collection at http://www.cise.ufl.edu/research/sparse/matrices which has lots and lots of symmetric positive definite matrices that arise in non-least-squares contexts and for which Cholesky factorization would be appropriate. Nahaj 14:54, 27 September 2007 (UTC)
References
A pointer to one person's implementation does not make for a reasonable reference list. There are many good books on the subject.
I added one.
-- User:Nahaj 2005-08-28
Sparse polynomials?
In algebraic geometry, sparse (multivariate) polynomials are sometimes mentioned, although I do not know an explicit definition. Assuming somebody here knows a bit about this, it could be a nice topic to add...
guest, 2005-09-14
- Well, looking forward to seeing a sparse polynomial article! :) I know nothing of the topic, so I can't help. Oleg Alexandrov 21:10, 14 September 2005 (UTC)
too much emphasis on band matrices
This article has too much emphasis on band matrices.
"Band" matrices are often thought of as arising from the discretization of a 2D mesh. These are not truly "banded" matrices, however, since the optimal ordering (nested dissection) results in a matrix that is far from banded. Bandwidth reducing orderings are not suitable for a matrix arising from the discretization of a 2D or 3D square mesh (for example).
For example, an n-by-n matrix arising from an s-by-s 2D mesh can be factorized in O(s^3) time, where n = s^2, and with only O(n log n) entries in the factor L, or about O(s^2 log s).
On the other hand, if the "natural" ordering is used (the one that "looks banded"), the time taken is O(n^2), or O(s^4). That's quite a bit higher. The number of entries in the factor is O(n sqrt(n)), or O(s^3).
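Putting the two orderings side by side (a restatement of the bounds above, using n = s^2, so that O(s^3) = O(n^{3/2})):

\[
\begin{aligned}
\text{nested dissection:}\quad & \text{time } O(s^3) = O(n^{3/2}), \quad \text{fill } O(n \log n) = O(s^2 \log s),\\
\text{natural ("banded"):}\quad & \text{time } O(n^2) = O(s^4), \quad \text{fill } O(n\sqrt{n}) = O(s^3).
\end{aligned}
\]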
Cholesky factorization arises in more problems than the Normal Equations (using Cholesky factorization for the Normal Equations is usually a bad idea; QR factorization is more accurate).
-- Tim Davis, Univ. of Florida
- I agree it is a bad idea... but it is still being promoted in current Surveying texts, and still used by the National Geodetic Survey for the national adjustments. So I see it a lot. Nahaj 15:02, 27 September 2007 (UTC)
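For readers comparing the two approaches in this exchange, here is a minimal numerical sketch (my own example in NumPy; the data, sizes, and variable names are assumptions, not anything from the discussion) of least squares via the Normal Equations with Cholesky versus via QR:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))   # tall matrix: 100 equations, 5 unknowns
b = rng.standard_normal(100)

# Normal Equations: A^T A x = A^T b, solved via Cholesky (A^T A = L L^T).
# Forming A^T A roughly squares the condition number of A.
L = np.linalg.cholesky(A.T @ A)
y = np.linalg.solve(L, A.T @ b)      # forward solve with lower-triangular L
x_chol = np.linalg.solve(L.T, y)     # back solve; a real code would use a
                                     # triangular solver here, not the general one

# QR: A = Q R, then R x = Q^T b; works with cond(A) rather than cond(A)^2.
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

print(np.allclose(x_chol, x_qr))     # True on well-conditioned data like this
```

The accuracy point is that squaring the condition number in the Normal Equations route loses roughly twice as many digits as QR on ill-conditioned problems; on a well-conditioned random example like the one above, the two answers agree.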
Row Bandwidth def'n incorrect
I don't have a reference in front of me, but I think the definition of row bandwidth is wrong. Something like n-m, where m is the minimizer, is within 1 of the lower bandwidth. But it doesn't appear to capture upper bandwidth. Jonas August 16:20, 30 April 2007 (UTC)
- Yes, the definition was rather strange; thanks for bringing that to our attention. I replaced it by another one. Let me know if the new one doesn't make sense to you either. -- Jitse Niesen (talk) 02:00, 1 May 2007 (UTC)
Banker's algorithm
I removed the link to a "Banker's algorithm" since it points to another algorithm of the same name, not Dr. Snay's algorithm. Nahaj 14:52, 27 September 2007 (UTC)
Orthogonal List?
As I remember, Orthogonal List (or a similar name) is a common data structure to store a sparse matrix. Is it correct? Btw, I can't even find any info about Orthogonal List in wikipedia. Could anyone more knowledgeable on this issue write some stuff? Took 05:10, 9 November 2007 (UTC)
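If "orthogonal list" refers to the cross-linked structure sometimes taught under that name, every nonzero is a node that sits on two linked lists at once, one threading its row and one threading its column. A minimal, hypothetical node sketch (my guess at the structure the poster means, not a reference implementation):

```python
class Node:
    """One nonzero matrix entry, cross-linked into its row and its column."""
    def __init__(self, row, col, value):
        self.row, self.col, self.value = row, col, value
        self.right = None  # next nonzero in the same row
        self.down = None   # next nonzero in the same column
```

A full structure would also keep one header pointer per row and per column to reach the first node of each list.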
Bandwidth and invariants
Is this correct?: It seems to me that the bandwidth of a matrix describes the matrix but not the underlying linear operator. That is, by changing the basis in which you write a matrix, you can change the matrix's bandwidth. As such, bandwidth is not an invariant the way trace and determinant are.
Is that right? —Ben FrantzDale (talk) 02:38, 8 April 2008 (UTC)
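A small worked example (mine, not from the thread) supporting this: a 2-by-2 diagonal matrix has bandwidth 0, but after an orthogonal change of basis it is dense with bandwidth 1, while the trace and determinant are unchanged.

\[
D = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}, \qquad
Q = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \qquad
Q D Q^{\mathsf{T}} = \begin{pmatrix} \tfrac{3}{2} & -\tfrac{1}{2} \\ -\tfrac{1}{2} & \tfrac{3}{2} \end{pmatrix}.
\]

Both matrices have trace 3 and determinant 2, so those invariants survive the basis change while the bandwidth does not.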
Why are academic explanations so hard to comprehend?
"It stores an initial sparse N×N matrix M in row form using three arrays, A, IA, JA"
I admit, I'm not academic. I believe I have a need for a sparse matrix. I understand the basic theory, but wonder how they deal with insertions. I search the web, and find nothing but incomprehensible academic articles. I decide to search Wikipedia, which I find almost always answers my questions, except now. I see this article, try to understand this explanation, and get nowhere... what is wrong with academics that they cannot communicate clearly? Problems with this statement:
- Does N*N mean rows and columns? If so, say "rows and columns"!
- Does reusing the letter N mean that the number of rows and columns are the same? If so, say "where the number of rows and columns is the same"!
- Why is A reused in A, IA, JA? If there is no reason, then DON'T DO IT! If there is, then say so, IN PLAIN ENGLISH! It confused the hell out of me as to what the relationship was, given that A is everywhere.
Why do academics make it so hard to understand their writing? It seems like a college social fraternity, with all its social-pressure glory. REBEL AGAINST THE PRESSURE TO WRITE INCOMPREHENSIBLE DOUBLETHINK! Please write so us dumb-dumbs can understand... I believe that is in the spirit of Wikipedia! —Preceding unsigned comment added by 69.107.136.52 (talk) 17:01, 12 April 2008 (UTC)
- Thanks for the suggestions. NxN means it's square. mxn means it's m rows and n columns; this is standard matrix notation that could be referenced but is common knowledge for anyone attempting a sparse-matrix implementation. I cleaned it up a bit. Is that clearer? Cheers. —Ben FrantzDale (talk) 23:47, 14 April 2008 (UTC)
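To make the quoted sentence concrete, here is a minimal sketch (the toy matrix and loop are mine; only the array names A, IA, JA come from the article) of building the three arrays of the row form in Python:

```python
# A  : the nonzero values, stored row by row
# IA : IA[i] is the index in A where row i starts; it has N+1 entries,
#      so row i's values are A[IA[i]:IA[i+1]]
# JA : the column index of each value in A

M = [
    [5, 0, 0, 0],
    [0, 8, 0, 0],
    [0, 0, 3, 0],
    [0, 6, 0, 0],
]

A, IA, JA = [], [0], []
for row in M:
    for j, v in enumerate(row):
        if v != 0:
            A.append(v)
            JA.append(j)
    IA.append(len(A))   # row boundary: total nonzeros seen so far

print(A)   # [5, 8, 3, 6]
print(IA)  # [0, 1, 2, 3, 4]
print(JA)  # [0, 1, 2, 1]
```

On the insertion question above: in this layout an insertion forces everything after the insertion point in A and JA to shift, so implementations typically build the matrix in an easier-to-modify form first (for example, a dictionary keyed by (row, column)) and convert to the three-array form once the sparsity pattern is fixed.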