We’re taught about the decimal system by manipulating whole number representations of fractions, but when that method fails, we get told that we are wrong.
In chemistry, we’re taught about atoms by manipulating little rings of electrons, and when that system fails to explain bond angles and excitation, we’re told the model is wrong, but still useful.
This is my issue with the debate. Someone uses decimals as they were taught and everyone piles on saying they’re wrong instead of explaining the limitations of systems and why we still use them.
For the record, my favorite demonstration is using different bases.
In base 10:
1/3 ≈ 0.333…
0.333… × 3 = 0.999…
In base 12:
1/3 = 0.4
0.4 × 3 = 1
The issue only appears if you resort to infinite decimals. If you instead change your base, everything works fine. Of course the only base where every whole fraction fits nicely is unary, and there are some very good reasons we don’t use tally marks much anymore, and that has nothing to do with math.
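If you’d rather check the values themselves than argue about notation, exact rational arithmetic settles it in a couple of lines. A minimal sketch in Python, using the standard library’s fractions module (the variable names are just mine):

```python
from fractions import Fraction

third = Fraction(1, 3)            # the value 1/3, stored exactly as a ratio
print(third * 3 == 1)             # True: three thirds make one, in any base

# "0.4" in base 12 means 4/12, which is the same exact value.
print(Fraction(4, 12) == third)   # True
```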
you’re thinking about this backwards: the decimal notation isn’t something that’s natural, it’s just a way to represent numbers that we invented. 0.333… = 1/3 because that’s the way we decided to represent 1/3 in decimals. the problem here isn’t that 1 cannot be divided by 3 at all, it’s that 10 cannot be divided by 3 and give a whole number. and because we use the decimal system, we have to notate it with infinitely repeating digits, but that doesn’t change the value of 1/3 or 10/3.
different bases don’t change the values either. 12 can be divided by 3 and give a whole number, so we don’t need infinite digits. but both 0.333… in decimal and 0.4 in base 12 are still 1/3.
there’s no need to change the base. we know a third of one is a third and three thirds is one. how you notate it doesn’t change this at all.
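to make the “it’s the base, not the value” point concrete, here’s a rough sketch that just grinds out fractional digits in whatever base you like (Python; expand is a throwaway helper made up for illustration):

```python
from fractions import Fraction

def expand(frac, base, digits=8):
    """Return the first `digits` fractional digits of `frac` in `base`."""
    frac -= int(frac)               # keep only the fractional part
    out = []
    for _ in range(digits):
        frac *= base                # shift the next digit to the left of the point
        d = int(frac)
        out.append(d)
        frac -= d
    return out

print(expand(Fraction(1, 3), 10))   # [3, 3, 3, 3, 3, 3, 3, 3] -> repeats forever
print(expand(Fraction(1, 3), 12))   # [4, 0, 0, 0, 0, 0, 0, 0] -> terminates: 0.4
```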
I’m not saying that math works differently in different bases; I’m using different bases exactly because the values don’t change. Using different bases restates the equation without using repeating decimals, thus sidestepping the flaw altogether.
My whole point here is that the decimal system is flawed. It’s still useful, but trying to claim it is perfect leads to a conflict with reality. All models are wrong, but some are useful.
you said 1/3 ≠ 0.333… which is false. it is exactly equal. there’s no flaw; it’s a restriction in notation that is not unique to the decimal system. there’s no “conflict with reality”, whatever that means. this just sounds like not being able to wrap your head around the concept. but that doesn’t make it a flaw.
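and the “exactly equal” part isn’t hand-waving: a repeating decimal is shorthand for an infinite sum, and that sum can be evaluated with the ordinary geometric-series formula a/(1 − r). written out in the same plain notation as above:

0.333… = 3/10 + 3/100 + 3/1000 + … = (3/10) / (1 − 1/10) = 3/9 = 1/3

0.999… = 9/10 + 9/100 + 9/1000 + … = (9/10) / (1 − 1/10) = 9/9 = 1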
Let me restate: I am of the opinion that repeating decimals are imperfect representations of the values we use them to represent. This imperfection only matters in the case of 0.999… , but I still consider it a flaw.
I am also of the opinion that focusing on this flaw rather than the incorrectness of the person using it is a better method of teaching.
I accept that 1/3 is exactly equal to the value typically represented by 0.333… , however I do not agree that 0.333… is a perfect representation of that value. That is what I mean by 1/3 ≠ 0.333… , that repeating decimal is not exactly equal to that value.
After reading this, I have decided that I am no longer going to provide a formal proof for my other point, because odds are that you wouldn’t understand it, and I’m now reasonably confident that anyone who would understand it already understands the fact the proof would’ve supported.
It is my opinion that repeating decimals cannot properly represent the values we use them for, and I would rather avoid them entirely (kinda like the meme).
Besides, I have never disagreed with the math, just that we go about correcting people poorly. I have used some basic mathematical arguments to try to intimate how basic arithmetic is a limited system, but this has always been about solving the systemic problem of people getting caught by 0.999… = 1. Math proofs won’t add to this conversation, and I think they are part of the issue.
Is it possible to have a conversation about math without either fully agreeing or calling the other person stupid? Must every argument, even on this topic, be backed up with proof (a sociological one in this case)? Or did you just want to feel superior?
Your opinion is incorrect as a matter of definition.
I have never disagreed with the math
You had in the previous paragraph.
Is it possible to have a conversation about math without either fully agreeing or calling the other person stupid?
Yes, however the problem is that you are speaking on matters of which you are clearly ignorant. This isn’t a question of different axioms where we can show clearly how two models are incompatible but resolve that both are correct in their own contexts; this is a case where you are entirely, irredeemably wrong, and are simply refusing to correct yourself. I am an algebraist; understanding how two systems differ and compare is my specialty. We know that infinite decimals are capable of representing real numbers because we do so all the time. There. You’re wrong and I’ve shown it via proof by demonstration. QED.
They are just symbols we use to represent abstract concepts; the same way I can inscribe a “1” to represent 3 − 2 = { {} }, I can inscribe “.9~” to do the same. The fact that our convention is occasionally confusing is irrelevant to the question; we could have a system whereby each number gets its own unique glyph when it’s used and it’d still be a valid way to communicate the ideas. The level of weirdness you can do and still have a valid notational convention goes so far beyond the meager oddities you’ve been hung up on here. Don’t believe me? Look up lambda calculus.
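If the lambda calculus pointer feels too abstract, here is a toy rendition of the same idea in Python: numbers encoded as nothing but functions, with no digits in sight. Purely an illustrative sketch; the names are made up:

```python
# Church numerals: the number n is "the function that applies f, n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

three = succ(succ(succ(zero)))

# Decode back to an ordinary int by counting how many times f gets applied.
print(three(lambda k: k + 1)(0))   # prints 3
```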
And my argument is that 3 ≠ 0.333…
EDIT: 1/3 ≠ 0.333…