Please explain this in a way that makes sense to me (I’m an algebraist). I don’t know what it would mean for infinite decimals to be supported “properly” or “improperly”. Furthermore, I’m not aware of any arguments worth taking seriously that don’t use logic, so I’m wondering why that’s a criticism of the notation.
Decimal notation is a number system where fractions are accommodated with additional digits representing smaller, more precise parts. It is an extension of the place-value system, in which very large tallies can be expressed in a much simpler form.
One of the core rules of this system is how to handle values larger than the highest digit and lower than the smallest. If any place goes above 9, set that place to 0 and increment the next place by 1. If any place goes below 0, increase that place by 10 and decrement the next place by one (this operation passes through a non-existent digit, which is also a common sticking point).
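For concreteness, here is a minimal Python sketch of that carry rule (the digits-as-a-list representation and the function name are purely illustrative, not part of how anyone teaches it):

```python
def carry_normalize(digits):
    """Apply the carry rule to a finite list of base-10 digit values.

    digits[0] is the ones place, digits[1] the tens place, and so on.
    Any place that has gone above 9 is set back into the 0-9 range and
    the excess is carried into the next place up.
    """
    result = list(digits)
    i = 0
    while i < len(result):
        if result[i] > 9:
            if i + 1 == len(result):
                result.append(0)          # grow a new highest place
            result[i + 1] += result[i] // 10
            result[i] %= 10
        i += 1
    return result

# 199 + 1 done digit-wise: the ones place becomes 10, then the carry ripples up.
print(carry_normalize([10, 9, 1]))        # [0, 0, 2], i.e. 200
```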
This is the decimal system as it is originally taught. One consequence of its rules is that each digit-wise operation must be performed in order, with a beginning and an end. Thus even getting a repeating decimal goes beyond the system. This is usually taught as special handling, and sometimes as baby’s first limit (each step down produces the same digit, so it’s that digit all the way down).
The issue happens when digit-wise calculation is applied to infinite decimals. For most operations it’s fine, but incrementing up can only begin once a digit goes beyond 9, which never happens in the case of 0.999… . Understanding how to resolve this requires ditching the digit-wise method, relearning decimals as a series of terms, and then learning about infinite series. That is a much more robust and widely applicable method, but a very different one from how decimals are first taught.
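A quick sketch of that series view, just to show the trend (exact fractions via Python’s fractions module, so no floating-point noise gets in the way):

```python
from fractions import Fraction

# 0.999... read as the series 9/10 + 9/100 + 9/1000 + ...
partial = Fraction(0)
for k in range(1, 8):
    partial += Fraction(9, 10 ** k)
    print(k, partial, "gap to 1:", 1 - partial)
# After k terms the gap to 1 is exactly 1/10**k, which shrinks below any
# positive bound, so the value of the infinite series is exactly 1.
```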
Thus I say that the original digit-wise method of decimals has a bug in the case of incrementing infinite sequences. There’s really only one number where this is an issue, but telling people they’re wrong for using the tools as they’ve been taught isn’t helpful. Much better to say that the tool they’re using is limited in this way, and then show the more advanced method.
That’s how we teach Newtonian gravity and then expand to relativity. You aren’t wrong for applying Newtonian gravity to Mercury, but the tool you’re using is limited. All models are wrong, but some are useful.
Said a simpler way:
1/3 = 0.333…
1/3 + 1/3 = 0.666… = 0.333… + 0.333…
1/3 + 1/3 + 1/3 = 1 = 0.333… + 0.333… + 0.333…
The quirk you mention about infinite decimals not incrementing properly can be seen by adding fractions that should sum to a whole number.
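A tiny illustration of that quirk under the digit-wise rules (necessarily truncated to finitely many places, since that is all the digit-wise method can actually process):

```python
# Add 0.333... + 0.333... + 0.333... column by column, truncated to n places.
# Every column sums to 9, so no column ever exceeds 9 and no carry is ever
# triggered: the digit-wise method can only produce 0.999...9, even though
# the three fractions sum to exactly 1.
n = 10
third = [3] * n                                       # digits after the point
columns = [a + b + c for a, b, c in zip(third, third, third)]
print("0." + "".join(str(d) for d in columns))        # 0.9999999999
```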
I can’t help but notice you didn’t answer the question.
each digit-wise operation must be performed in order

I’m sure I don’t know what you mean by digit-wise operation, because my conceptualization of it renders this statement obviously false. For example, we could apply digit-wise modular addition base 10 to any pair of real numbers and the order we choose to perform this operation in won’t matter. I’m pretty sure you’re also not including standard multiplication and addition in your definition of “digit-wise”, because we can construct algorithms that address many different orders of digits, meaning this statement would also then be false. In fact, as I lie here having just woken up, I’m having a difficult time figuring out an operation where the order you address the digits in actually matters.
Later, you bring up “incrementing”, which has no natural definition in a densely ordered set (there is no “next” real number to increment to). It seems to me that you came up with a function that relies on the notation we’re using (the decimal-increment function, let’s call it) rather than the emergent properties of the objects we’re working with, noticed that the function doesn’t cover the desired domain, and have decided that means the notation is somehow improper. Or maybe you’re saying that the reason it’s improper is that the advanced techniques for interacting with the system are dissimilar from the understanding imparted by the simple techniques.
In base 10, if we add 1 and 1, we get the next digit, 2.
In base 2, if we add 1 and 1 there is no 2, thus we increment the next place by 1 getting 10.
We can expand this to numbers with more digits, still in base 2 (111 is seven): 111 + 1 = 112 = 120 = 200 = 1000
In base 10, with A representing 10 in a single digit: 199 + 1 = 19A = 1A0 = 200
We could do this with larger carryover too:
999 + 111 = AAA = AB0 = B10 = 1110
Different orders are possible here:
AAA = 10AA = 10B0 = 1110
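Roughly what that looks like if you write the column sums out and then resolve the carries one place at a time (digit lists are least-significant first; A and B above are just shorthand for the values 10 and 11):

```python
# 999 + 111 done digit-wise: add each column first, then resolve the carries.
a, b = [9, 9, 9], [1, 1, 1]                # least-significant digit first
digits = [x + y for x, y in zip(a, b)]
print(digits)                              # [10, 10, 10] -- the "AAA" stage

i = 0
while i < len(digits):
    if digits[i] > 9:                      # carry: keep low digit, push rest up
        if i + 1 == len(digits):
            digits.append(0)               # grow a new highest place
        digits[i + 1] += digits[i] // 10
        digits[i] %= 10
        print(digits)                      # [0, 11, 10] "AB0", then [0, 1, 11] "B10"
    i += 1

print(digits)                              # [0, 1, 1, 1] -- i.e. 1110
```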
The “carry the 1” process only starts when a digit exceeds the available digits. Thus 192 is not written as 2Z2 (with Z standing in for -1), nor is 100 written as A0. The whole point of carryover is to keep each digit within the 0-9 range. Furthermore, because we only process individual digits, we can’t start carryover in the middle of a chain: 999 doesn’t carry over to 1000 - 1, and while 0.999 does equal 1 - 0.001, (1 - 0.001) isn’t a decimal digit. Thus we can’t know whether any string of 9s will carry over until we find a digit that is already trying to be greater than 9.
This logic is how basic binary adders work, and some variation of this bitwise logic runs in every digital computer ever made. It works great with integers. It’s when we try to have infinite digits that this method falls apart, and then only in the case of infinite 9s. This is because a carry must start at the smallest digit, and a number with infinite decimals has no smallest digit.
Without changing this logic radically, you can’t fix this flaw. Computers use workarounds to speed up arithmetic functions, like carry-lookahead and carry-save, but they still require the smallest digit to be computed before the result of the operation can be known.
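And for the binary case, a toy ripple-carry adder really is just a chain of per-bit full adders, each one waiting on the carry from the bit below it (a model of the idea, not any particular piece of hardware):

```python
def ripple_carry_add(a_bits, b_bits):
    """Toy ripple-carry adder.

    a_bits and b_bits are equal-length lists of 0/1, least-significant
    bit first. Each position is a full adder that has to wait for the
    carry produced by the position below it.
    """
    out, carry = [], 0
    for a, b in zip(a_bits, b_bits):
        s = a + b + carry
        out.append(s % 2)        # sum bit for this position
        carry = s // 2           # carry handed up to the next position
    if carry:
        out.append(carry)        # final carry becomes a new highest bit
    return out

# 111 (seven) + 001 (one) -> 1000 (eight); bits are least-significant first.
print(ripple_carry_add([1, 1, 1], [1, 0, 0]))    # [0, 0, 0, 1]
```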
If I remember, I’ll give a formal proof when I have time, so long as no one else has done so before me. Simply put, we’re not dealing with floats, and there are algorithms to add infinite decimals together from the ones place down using back-propagation. Disproving my statement is as simple as providing a pair of real numbers for which doing this is impossible.
Are those algorithms taught to people in school?
Once again, I have no issue with the math. I just think the commonly taught system of decimal arithmetic is flawed at representing that math. This flaw is why people get hung up on 0.999… = 1.
Furthermore, I’m not aware of any arguments worth taking seriously that don’t use logic, so I’m wondering why that’s a criticism of the notation.

If you hear someone shout at a mob “mathematics is witchcraft, therefore, get the pitchforks”, I very much recommend taking that argument seriously no matter its logical veracity.
Fair, but that still uses logic; it’s just using false premises. Also, more than the argument, what I’d be taking seriously is the threat of imminent violence.
But is it a false premise? It certainly passes Occam’s razor: “They’re witches, they did it” is an eminently simple explanation.
By definition, mathematics isn’t witchcraft (most witches I know are pretty bad at math). Also, I think you need to look more deeply into Occam’s razor.
By definition, all sufficiently advanced mathematics is isomorphic to witchcraft (*vaguely gestures at numerology as proof*). Also, Occam’s razor has never been robust against reductionism: if you are free to reduce “equal explanatory power” to arbitrarily small tunnel vision, every explanation becomes permissible, and taking the simplest of those probably doesn’t match the holistic view. Or, put differently: I think you need to look more broadly at Occam’s razor :)