Arbitrary-Precision Arithmetic
We'll be using the mpmath library for arbitrary-precision arithmetic. We can set however many bits of precision we like. We'll base it roughly on the number of bits in $F_n$, with an extra buffer (so that the division by $\sqrt{5}$ and the error caused by not calculating the $\psi^n$ term can't affect the final result).
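As a minimal sketch of that setup (the helper name and the 10-bit buffer here are my own choices, not from the original code): since $F_n \approx \varphi^n / \sqrt{5}$, it has roughly $n \log_2 \varphi$ bits, and mpmath's binary precision can be set accordingly.

```python
import math
from mpmath import mp

def set_precision_for_fib(n, buffer_bits=10):
    """Set mpmath's binary precision for computing F_n via Binet's formula."""
    phi = (1 + 5 ** 0.5) / 2
    # F_n has about n * log2(phi) ~ 0.694 * n bits; the buffer absorbs the
    # rounding error from the division by sqrt(5) and the dropped psi^n term.
    mp.prec = int(n * math.log2(phi)) + buffer_bits
```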
Calculating $\varphi^n$ can be sped up using the same technique we used for the matrix exponents (exponentiation by squaring), so this should be roughly the same number of multiplications. The question now becomes: is it more efficient to work with floating-point numbers or matrices of integers?
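Here's a sketch of what that looks like with mpmath (the function name `fib_binet` is mine, and it assumes the precision has already been set as above). The square-and-multiply loop uses $O(\log n)$ multiplications of arbitrary-precision floats, just like the matrix version:

```python
from mpmath import mpf, sqrt, floor

def fib_binet(n):
    sqrt5 = sqrt(5)
    phi = (1 + sqrt5) / 2
    # Exponentiation by squaring: the same O(log n) multiplication count
    # as the matrix-power approach, but on a single mpf instead of a 2x2 matrix.
    result = mpf(1)
    base = phi
    while n:
        if n & 1:
            result *= base
        base *= base
        n >>= 1
    # Rounding phi^n / sqrt(5) to the nearest integer recovers F_n, provided
    # the working precision covers all the bits of F_n.
    return int(floor(result / sqrt5 + mpf('0.5')))
```

For example, calling `set_precision_for_fib(1000)` and then `fib_binet(1000)` reproduces $F_{1000}$ exactly.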