Arbitrary-Precision Arithmetic

We'll be using the mpmath library for arbitrary-precision arithmetic. We can set however many bits of precision we like. We'll base it roughly on the number of bits in $\phi^n$, with an extra buffer, so that the division by $\sqrt{5}$ and the error from not calculating $\psi^n$ can't affect the final result.

Calculating $\phi^n$ can be sped up using the same exponentiation-by-squaring technique we used for the matrix exponents, so this should take roughly the same number of multiplications. The question now becomes: is it more efficient to work with arbitrary-precision floating-point numbers or with integer matrices?

Output: 87919 87919
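
A minimal sketch of how the floating-point approach might look, assuming Binet's formula $F(n) = \mathrm{round}(\phi^n / \sqrt{5})$; the helper names `pow_by_squaring`, `fib_float`, and `fib_matrix`, the 64-bit safety buffer, and the check at $n = 1000$ are illustrative choices, not the original code:

```python
import math
import mpmath

def pow_by_squaring(x, n):
    """x**n for integer n >= 0 by repeated squaring (same trick as the matrix version)."""
    result = mpmath.mpf(1)
    while n:
        if n & 1:
            result *= x
        x *= x
        n >>= 1
    return result

def fib_float(n, buffer_bits=64):
    """Fibonacci via Binet's formula: F(n) = round(phi**n / sqrt(5)).

    Precision is roughly the number of bits in phi**n plus a buffer
    (buffer_bits is an illustrative safety margin), so the division by
    sqrt(5) and the omitted psi**n term can't disturb the rounded result.
    """
    mpmath.mp.prec = int(n * math.log2((1 + math.sqrt(5)) / 2)) + buffer_bits
    phi = (1 + mpmath.sqrt(5)) / 2
    return int(mpmath.nint(pow_by_squaring(phi, n) / mpmath.sqrt(5)))

def fib_matrix(n):
    """Fibonacci via repeated squaring of [[1, 1], [1, 0]] with exact integers."""
    def mul(a, b):  # 2x2 matrices stored as (top-left, top-right, bottom-left, bottom-right)
        return (a[0] * b[0] + a[1] * b[2], a[0] * b[1] + a[1] * b[3],
                a[2] * b[0] + a[3] * b[2], a[2] * b[1] + a[3] * b[3])
    result, base = (1, 0, 0, 1), (1, 1, 1, 0)
    while n:
        if n & 1:
            result = mul(result, base)
        base = mul(base, base)
        n >>= 1
    return result[1]  # top-right entry of the n-th matrix power is F(n)

print(fib_float(1000) == fib_matrix(1000))  # expected: True
```

The explicit `pow_by_squaring` just makes the repeated-squaring structure visible; timing `fib_float` against `fib_matrix` for large `n` is one way to answer the floats-versus-matrices question.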
