I know my critics will probably get upset with me for spending so little time on the argument that they think is the strongest proof for .999… equaling 1, but it really doesn’t take long to explain why this is wrong.

I think the proof was well-explained in a “Yahoo! Answers” forum, and so I’m just going to quote it verbatim:

Using calculus (an infinite series to be more exact), we know that .999 repeating can actually be written as the sum from 0 to infinity of .9(.1^n), which is a geometric series of the form a = .9, r = .1. (Meaning that it follows the form a + ar^1 + ar^2 + ar^3… etc.) Using the geometric series convergence rule, we know that any geometric series converges to a/(1-r). So, the series converges to .9/(1-.1), or .9/.9, which obviously equals 1.

This is an infallible proof, using one of the highest levels of math (differential Calculus), that cannot be disproved…
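The quoted series can be checked numerically. Here is a minimal Python sketch of its partial sums (the helper name `partial_sum` is my own, not from the quoted answer):

```python
# Partial sums of the geometric series 0.9 + 0.9*(0.1) + 0.9*(0.1)**2 + ...
# Each extra term appends another 9 after the decimal point.

def partial_sum(n_terms):
    """Sum the first n_terms of 0.9 * 0.1**n, starting at n = 0."""
    return sum(0.9 * 0.1 ** n for n in range(n_terms))

for k in (1, 2, 5, 10):
    print(k, partial_sum(k))
```

The partial sums climb toward 1 while each individual sum falls short of it, which is the behavior both sides of this argument agree on; the dispute is over what the limit of that process means.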

Basically, calculus states that .999… can be written as the sum of .9(.1^n)… if it followed geometric convergence. It doesn’t. This problem could be graphed like 1/X = Y, where X just keeps getting bigger and bigger as Y approaches 0. So how can I be so sure that these lines don’t converge? Because Y can never actually equal 0; otherwise it would have to be multiplied by some very large X to get 1, and 0 times any number is always 0. It gets infinitely close to 0 without ever actually converging, and I’m positing that this infinitely close sum (the smallest possible Y) is .00…01. You can reach this by dividing by the largest possible number, 1000… with zeroes into infinity, as I guessed in an earlier blog post.
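The 1/X = Y graph described above can be tabulated directly; a quick Python sketch of my own to illustrate the observation that Y never reaches 0 for any finite X:

```python
# y = 1/x for ever-larger x: y shrinks toward 0 but never equals 0,
# since 0 multiplied by any finite x gives 0, never 1.
for exp in (1, 3, 6, 12):
    x = 10 ** exp
    y = 1 / x
    print(x, y, y == 0)   # y == 0 is False for every finite x shown
```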

I’ve mentioned this before, and it bears repeating: most of these arguments from calculus are egotistical. The author of this Yahoo! answer wrote that his proof was “using one of the highest levels of math (differential calculus)”. I’ve tried to prove my own point using subtraction, and none of my critics say that subtraction itself is faulty or inferior; they merely think that I’m misapplying it. That’s exactly what I’m saying about the geometric proof: it is misapplied calculus. I’m not saying that calculus itself is untrue or inferior, but that those who turned it toward this problem made an error in application, and those who think they’ve made their point because they’re using “a superior form of math” are too blinded by their egos to spot the proofs made in “simpler forms of math”.


## About starcrashx

I love statistics. They drive my poker playing, my reasoning, and my research. As Penn Jillette said, "Luck is probability taken personally". There's no such thing as luck... but I wish you positive chance.

“calculus states that .999… can be written as \sum_{i=1}^\infty 9*.1^i”. Okay, it is completely irrelevant where you state it… it’s merely a definition. The one which says that .999… = 9/9 = 1 shows just that: it’s the same thing…. The way you define it is naturally not the same as how it is actually defined… It is defined either as 9/9 = 1 or as a series… both will give us the same result… Defining it some other way, you can get that .999… is not equal to 1. This is definitely not high calculus… this is merely first semester…

Please don’t be personally abusive by suggesting things like “this is merely first semester [calculus]”. I know it’s easy to be cruel on the internet thanks to anonymity, but if you genuinely want to win someone over with your argument, insulting them is a poor choice of method.

Instead of arguing this myself, I’ll just refer you to a mathematician who has put it in more detailed (and more “mathematical”) terms and language than I have. https://www.filesanywhere.com/fs/v.aspx?v=8b6966895b6673aa6b6c

I didn’t mean to upset you when I said that this is merely first semester, really. I just wanted to say that the concepts are actually not that difficult to understand. And no… you really shouldn’t cite John Gabriel… Okay, I’ll try to explain to you the rules of mathematics… You define something, you prove that it’s well-defined (based on the fundamental assumptions), but first of all you need to understand the assumptions. (Just think about it: if someone proves something and someone else, reasoning correctly, disproves it, we have a clear contradiction, so the assumptions were not well-made.) The deduction process is in fact really important in mathematics, but done carelessly it will eventually lead you to fallacies… It’s like solving differential equations: there are several methods of solving them, but just because the steps seem plausible doesn’t mean they are rigorous.

Do you not know how to test for convergence? The ratio of consecutive terms, a(n+1 term)/a(nth term), is 1/10, which is less than 1, so this series does converge. If you went by comparing it to your f(x) = 1/x example, then all series would diverge. Also, the p-adics and the reals are two different ways to extend the rational numbers. To quote Wikipedia:

“As a final extension, since 0.999… = 1 (in the reals) and …999 = −1 (in the 10-adics), then by “blind faith and unabashed juggling of symbols”[62] one may add the two equations and arrive at …999.999… = 0. This equation does not make sense either as a 10-adic expansion or an ordinary decimal expansion, but it turns out to be meaningful and true if one develops a theory of “double-decimals” with eventually repeating left ends to represent a familiar system: the real numbers.[63]”
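Both claims in this comment can be sketched numerically: the ratio test for the series, and the 10-adic behavior of …999. A small Python illustration (the helper name `term` is mine, and the finite moduli below only approximate the 10-adic statement):

```python
# 1) Ratio test: consecutive terms of 0.9 * 0.1**n have ratio 0.1 < 1,
#    so the geometric series converges.
def term(n):
    return 0.9 * 0.1 ** n

ratios = [term(n + 1) / term(n) for n in range(5)]
assert all(abs(r - 0.1) < 1e-12 for r in ratios)

# 2) 10-adic flavor of ...999 = -1: a string of k nines plus 1 is 10**k,
#    which is 0 modulo 10**k -- just as -1 + 1 == 0.
for k in (1, 5, 20):
    nines = 10 ** k - 1               # e.g. 9, 99999, ...
    print(k, (nines + 1) % 10 ** k)   # prints 0 each time
```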

Just check out the paper I cited above. I’m tired of arguing this point, and calculus is honestly not my strong suit so I’ll leave it to someone whose career is teaching calculus.

https://www.filesanywhere.com/fs/v.aspx?v=8b6966895b6673aa6b6c