Multiplication Speed
One thing to remember is that multiplying very large integers and multiplying very precise decimals end up being roughly the same speed. If we compute to 10 decimal digits of precision, we have a multiplication like $1.234567890 \times 2.345678901$. If we wanted to, we could instead multiply the integers $1234567890 \times 2345678901$ and just shift the decimal point back when we're done.
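As a quick illustration (a minimal sketch using Python's `decimal` module; the specific numbers are just placeholders), the decimal product and the shifted integer product agree exactly:

```python
from decimal import Decimal, getcontext

getcontext().prec = 20  # enough digits to hold the full product exactly

# Multiply as 10-significant-digit decimals...
a = Decimal("1.234567890")
b = Decimal("2.345678901")

# ...or multiply the underlying integers and shift the decimal point
# 18 places back (each mantissa was scaled up by 10**9).
int_product = 1234567890 * 2345678901

assert a * b == Decimal(int_product).scaleb(-18)
```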
To get the right answer at the end of Binet's formula, we need a little more precision than there are decimal digits in the answer. The final result only has to have the first decimal digit correct to round properly, but rounding errors can accumulate, and we're always leaving out the $\psi^n / \sqrt{5}$ term (where $\psi = \frac{1 - \sqrt{5}}{2}$, so that term's magnitude is always less than $\tfrac{1}{2}$). That means our working precision needs to be higher than the number of digits in the answer by a fair margin: I'm using a 1% buffer (defaulting to Python's 53-bit float precision as a minimum), which works for all of the tests I'm doing.
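Here's a minimal sketch of that precision rule (assuming the `mpmath` library; the 1% buffer matches the text, but the bit estimate and the helper name `binet_fib` are mine, not a fixed API):

```python
import math
from mpmath import mp

def binet_fib(n: int) -> int:
    """Compute F(n) by rounding phi**n / sqrt(5) at just-enough precision."""
    # Bits in F(n): roughly n * log2(phi); slightly overestimates, which is safe.
    answer_bits = max(1, int(n * math.log2((1 + math.sqrt(5)) / 2)))
    # Work at 1% more precision than the answer needs, with a 53-bit floor.
    mp.prec = max(53, int(answer_bits * 1.01) + 1)
    sqrt5 = mp.sqrt(5)
    phi = (1 + sqrt5) / 2
    # Omitting the psi**n / sqrt(5) term is fine: its magnitude is below 1/2.
    return int(mp.nint(phi ** n / sqrt5))

assert [binet_fib(n) for n in range(1, 8)] == [1, 1, 2, 3, 5, 8, 13]
```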
Here I've done a rudimentary test of the speed of computing this product with different levels of precision. We plot on a log-log scale: the straight line at the end is a power-law curve, with time growing like a fixed power of the number of digits.1 We can see that, once we get values large enough that the meat of the computation dominates everything else, multiplying very precise decimals and very large integers has essentially the same time cost.
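The test looked something like this (a rough sketch of the kind of benchmark, not the exact one; the operand sizes and repetition counts are arbitrary):

```python
import random
import time
from decimal import Decimal, getcontext

def time_products(digits: int, reps: int = 100) -> tuple[float, float]:
    """Time a d-digit integer multiply vs. the equivalent Decimal multiply."""
    a = random.randrange(10 ** (digits - 1), 10 ** digits)
    b = random.randrange(10 ** (digits - 1), 10 ** digits)
    getcontext().prec = 2 * digits  # room for the full product
    # Same mantissas, rescaled so one digit sits before the decimal point.
    da, db = Decimal(a).scaleb(1 - digits), Decimal(b).scaleb(1 - digits)

    start = time.perf_counter()
    for _ in range(reps):
        a * b
    int_time = (time.perf_counter() - start) / reps

    start = time.perf_counter()
    for _ in range(reps):
        da * db
    dec_time = (time.perf_counter() - start) / reps
    return int_time, dec_time

for digits in (10, 100, 1_000, 10_000, 100_000):
    i, d = time_products(digits)
    print(f"{digits:>7} digits: int {i:.2e}s  decimal {d:.2e}s")
```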
- You might wonder why the curve is flat at the start. I don't have an authoritative answer, but I imagine it's largely due to the speed of computer memory: the CPU has very efficient caches that are limited in size, so while the numbers fit in cache the per-multiplication cost barely changes, and as they grow, retrieving those values becomes less efficient.