I am trying to determine the (real) left-eigenvectors and eigenvalues of a 100 by 100 matrix. Elements of the matrix, T, are probabilities from 0 to 1. Each row of the matrix, T, sums to 1.0 (100% probability). Columns do not have to sum to anything in particular.

The scipy and numpy libraries return complex numbers - apparently either because the characteristic polynomial cannot be solved over the real numbers, or because the algorithms are optimised to work over the complex numbers. I turned to the sympy library, which also returned complex numbers as the solution. Using sympy as described in "Is it possible somehow to find complex eigenvalues using SymPy?" makes sense. My understanding is that this will force the answers to be real numbers:

```python
T = np.loadtxt('rep10_T_ij.dat', delimiter=' ')
...
display(roots(poly(char_poly, domain=CC)))
```

However, this does not output an answer in 12 hours. I can throw more hardware at it on a cluster, but have not been able to try that yet.

The advantage of using SymPy rather than NumPy or SciPy in this context is just if you want to perform the calculation exactly or symbolically. Determining that the eigenvalues are precisely real will require exact arithmetic in general. Exact or symbolic calculations are a lot slower than fixed-precision floating point, though, so you should expect things to slow down a lot compared to using NumPy or SciPy.

If your initial matrix is a matrix of floats, then it is already approximate, in which case there is a good chance that there will be no benefit in using SymPy. In any case, here is some code that shows how to get the exact real eigenvalues of a matrix more efficiently using SymPy. This is usually only worth doing if your original matrix has exact rational numbers (not floats), but it should obtain precisely the real eigenvalues:

```python
from sympy import *

def make_matrix(n):
    """Random matrix of rational numbers with all integer eigenvalues"""
    ...
```

This uses the Berkowitz algorithm to compute the characteristic polynomial, which is the slowest part of the operation (the charpoly method). It is much more efficient than using determinants to find the characteristic polynomial, but still slow for large matrices. Finding the characteristic polynomial this way is roughly O(N**4) for an N x N dense matrix if the underlying coefficient field has O(1) arithmetic cost. Unfortunately, exact rational numbers do not have O(1) arithmetic cost, so in fact the complexity is significantly worse than O(N**4) as you scale up to larger matrices.

You can get a sense for the timings involved for different sizes:

```
In []: M = make_matrix(5)
...
CPU times: user 652 ms, sys: 2 µs, total: 652 ms
...
CPU times: user 6.1 s, sys: 0 ns, total: 6.1 s
```

So already finding the exact integer eigenvalues of a 40x40 matrix takes 30 seconds. Scaling up to 100x100 will be slow but might be possible. I recommend having gmpy2 installed if you want to do this, because it makes exact rational numbers in SymPy a lot faster.

Most likely, though, computing exact eigenvalues like this is not what you want to do, and you should instead use NumPy/SciPy. If they give eigenvalues with small imaginary parts, then you can discard those if you know that the eigenvalues should be real. Otherwise, as long as the initial matrix is only approximate (i.e. a matrix of floats), there is little to be gained from exact computation anyway.
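The body of the `make_matrix` helper quoted above did not survive the copy; only its docstring did. Here is a minimal sketch of one way such a helper could be written - the particular construction (conjugating an integer diagonal matrix by a random invertible integer matrix) is my assumption, not the original answer's code:

```python
import random
from sympy import Matrix

def make_matrix(n, seed=0):
    """Random matrix of rational numbers with all integer eigenvalues."""
    rng = random.Random(seed)
    # Put random integer eigenvalues on a diagonal matrix D.
    D = Matrix.diag(*[rng.randint(-5, 5) for _ in range(n)])
    # Find a random invertible integer matrix P (retry while singular).
    while True:
        P = Matrix(n, n, lambda i, j: rng.randint(-2, 2))
        if P.det() != 0:
            break
    # P*D*P.inv() is similar to D: rational entries, same integer eigenvalues.
    return P * D * P.inv()

M = make_matrix(4)
evals = M.eigenvals()  # exact {eigenvalue: multiplicity}, computed via charpoly
print(evals)
```

Because the matrix is exact (Rational entries), `eigenvals` returns the integer eigenvalues exactly rather than floating-point approximations.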
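The exact route can also be spelled out explicitly via `charpoly` and `roots`, which is what `eigenvals` does internally. A small sketch using a hypothetical 3x3 row-stochastic matrix of exact rationals standing in for T (the matrix itself is made up for illustration):

```python
from sympy import Matrix, Rational, roots

# Hypothetical exact row-stochastic matrix (Rational entries, not floats).
T = Matrix([[Rational(9, 10), Rational(1, 10), 0],
            [Rational(1, 5),  Rational(7, 10), Rational(1, 10)],
            [0,               Rational(3, 10), Rational(7, 10)]])

p = T.charpoly()           # characteristic polynomial (Berkowitz algorithm)
eigs = roots(p.as_expr())  # exact {eigenvalue: multiplicity}
print(eigs)
```

For a row-stochastic matrix one eigenvalue is exactly 1, and here all three come out as exact rationals, so the question of "is this eigenvalue really real?" never arises.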
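For the floating-point route, SciPy can return the left eigenvectors directly (`left=True`), and `np.real_if_close` is one way to discard imaginary parts that are within numerical noise of zero. A sketch with the same kind of hypothetical small row-stochastic matrix standing in for the 100x100 T:

```python
import numpy as np
from scipy.linalg import eig

# Hypothetical row-stochastic matrix standing in for the real 100x100 T.
T = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])

# left=True, right=False returns eigenvalues w and left eigenvectors vl,
# where vl[:, i].conj().T @ T == w[i] * vl[:, i].conj().T.
w, vl = eig(T, left=True, right=False)

# Drop imaginary parts that are negligibly small (returns a real array
# only if every imaginary part is within tolerance of zero).
w = np.real_if_close(w)
print(w)
```

This only discards imaginary parts you have decided are numerical noise; it does not prove the eigenvalues are real - that is what the exact SymPy route above is for.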