Inverse eigenvalues theorem

From Wikipedia, the free encyclopedia

In numerical analysis and linear algebra, this theorem states that if A is an invertible matrix, then λ is an eigenvalue of A if and only if λ⁻¹ is an eigenvalue of A⁻¹. Since A is invertible, every eigenvalue λ of A is nonzero, and applying A⁻¹ to both sides of Av = λv gives v = λA⁻¹v, that is, A⁻¹v = λ⁻¹v, so v is also an eigenvector of A⁻¹ with eigenvalue λ⁻¹.
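The statement is easy to check numerically. The sketch below, using an illustrative 2×2 matrix not taken from the article, compares the eigenvalues of A⁻¹ with the reciprocals of the eigenvalues of A:

```python
import numpy as np

# An arbitrary invertible matrix chosen for illustration
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

eig_A = np.linalg.eigvals(A)                      # eigenvalues of A
eig_A_inv = np.linalg.eigvals(np.linalg.inv(A))   # eigenvalues of A^-1

# The eigenvalues of A^-1 should be the reciprocals of those of A
match = np.allclose(np.sort(eig_A_inv), np.sort(1.0 / eig_A))
print(match)
```

Sorting both arrays before comparing accounts for the fact that an eigenvalue routine may return the eigenvalues in any order.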