I know my critics will probably get upset with me for spending so little time on the argument they consider the strongest proof that .999… equals 1, but it really doesn’t take long to explain why it’s wrong.
I think the proof was well-explained in a “Yahoo! Answers” forum, and so I’m just going to quote it verbatim:
Using calculus (an infinite series to be more exact), we know that .999 repeating can actually be written as the sum from 0 to infinity of .9(.1^n), which is a geometric series of the form a = .9, r = .1. (Meaning that it follows the form a + ar^1 + ar^2 + ar^3… etc.) Using the geometric series convergence rule, we know that any geometric series converges to a/(1-r). So, the series converges to .9/(1-.1), or .9/.9, which obviously equals 1.
This is an infallible proof, using one of the highest levels of math (differential Calculus), that cannot be disproved…
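Just so it’s clear what series the quote is talking about, here’s a quick sketch in Python (my own illustration, not part of the quote) that prints the first few partial sums and the a/(1-r) value the proof invokes:

    from fractions import Fraction

    # The quoted series: .9 + .09 + .009 + ... with a = .9 and r = .1
    a, r = Fraction(9, 10), Fraction(1, 10)
    total = Fraction(0)
    for n in range(6):
        total += a * r ** n        # n-th term: .9 * (.1 ^ n)
        print(n, float(total))     # prints 0.9, 0.99, 0.999, ...
    print("a / (1 - r) =", a / (1 - r))   # the closed form the quoted proof uses

The partial sums come out as .9, .99, .999, and so on, and a/(1-r) comes out to 1, which is exactly the step I take issue with below.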
Basically, calculus states that .999… can be written as the sum of the terms .9(.1^n)… if it followed geometric convergence. It doesn’t. The problem can be graphed as Y = 1/X, where Y approaches 0 as X keeps getting bigger and bigger. So how can I be so sure that these lines don’t converge? Because Y can never actually equal 0; if it did, 0 would have to be multiplied by some very large X to get 1, and 0 times any number is always 0. Y gets infinitely close to 0 without ever actually reaching it, and I’m positing that this infinitely close value (the smallest possible Y) is .00…01. You can reach it by dividing 1 by the largest possible number, 1000… with zeroes going on into infinity, as I guessed in an earlier blog.
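To picture what I mean, here’s a similar quick sketch (again just Python, for illustration): Y = 1/X keeps shrinking as X grows, but it never actually lands on 0.

    # Y = 1/X shrinks as X grows, but never equals 0 for any finite X.
    for exponent in [1, 3, 6, 9, 12]:
        x = 10 ** exponent
        y = 1 / x
        print(f"X = 10^{exponent}:  Y = 1/X = {y}")
    # 0 times any X gives 0, never 1, so Y = 0 is never actually reached.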
I’ve mentioned this before, and it bears repeating: most of these arguments from calculus are egotistical. The author of that Yahoo! answer wrote that his proof was “using one of the highest levels of math (differential calculus)”. I’ve tried to prove my own point using subtraction, and none of my critics say that subtraction itself is faulty or inferior; they merely think I’m misapplying it. That’s exactly what I’m saying about the geometric proof: it is misapplied calculus. I’m not saying that calculus itself is untrue or inferior, but that those who turn it toward this problem made an error in application, and those who think they’ve made their point because they’re using “a superior form of math” are too blinded by their egos to spot the proofs made in “simpler forms of math”.