• pyre@lemmy.world · 4 months ago

    you’re thinking about this backwards: the decimal notation isn’t something that’s natural, it’s just a way to represent numbers that we invented. 0.333… = 1/3 because that’s the way we decided to represent 1/3 in decimals. the problem here isn’t that 1 cannot be divided by 3 at all, it’s that 10 cannot be divided by 3 and give a whole number. and because we use the decimal system, we have to notate it with infinitely repeating digits, but that doesn’t change the value of 1/3 or 10/3.
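
    to make that concrete, here’s the usual algebraic check that 0.333… and 1/3 are the same value (writing x for the repeating decimal):

    \begin{align*}
    x &= 0.333\ldots \\
    10x &= 3.333\ldots \\
    10x - x &= 3.333\ldots - 0.333\ldots = 3 \\
    9x &= 3 \quad\Rightarrow\quad x = \tfrac{3}{9} = \tfrac{1}{3}
    \end{align*}

    the subtraction cancels the repeating tail, which is why the repeating decimal and the fraction can’t name different values.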

    different bases don’t change the values either. 12 can be divided by 3 and give a whole number, so we don’t need infinitely many digits. but both 0.333… in decimal and 0.4 in base 12 are still 1/3.
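
    spelling out the base-12 version (the subscript marks the base):

    (0.4)_{12} \;=\; \frac{4}{12} \;=\; \frac{1}{3} \;=\; (0.333\ldots)_{10}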

    there’s no need to change the base. we know a third of one is a third and three thirds is one. how you notate it doesn’t change this at all.

    • Tlaloc_Temporal@lemmy.ca · 4 months ago

      I’m not saying that math works differently in different bases; I’m using different bases exactly because the values don’t change. Using a different base restates the equation without any repeating decimals, thus sidestepping the flaw altogether.
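
      For example, if the equation being restated is 3 × 1/3 = 1 (an illustrative choice on my part, since the original equation isn’t quoted here), the two notations look like:

      3 \times 0.333\ldots = 0.999\ldots \quad \text{(base 10)} \qquad\qquad 3 \times (0.4)_{12} = (1.0)_{12} \quad \text{(base 12)}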

      My whole point here is that the decimal system is flawed. It’s still useful, but trying to claim it is perfect leads to a conflict with reality. All models are wrong, but some are useful.

      • pyre@lemmy.world · 4 months ago

        you said 1/3 ≠ 0.333…, which is false. it is exactly equal. there’s no flaw; it’s a restriction in notation that isn’t unique to the decimal system. there’s no “conflict with reality”, whatever that means. this just sounds like not being able to wrap your head around the concept. but that doesn’t make it a flaw.
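
        for precision, the repeating decimal is shorthand for an infinite geometric series, and that series sums to exactly 1/3:

        0.333\ldots \;=\; \sum_{k=1}^{\infty} \frac{3}{10^{k}} \;=\; \frac{3/10}{1 - 1/10} \;=\; \frac{1}{3}

        and the restriction really isn’t unique to base 10: in base 12, for example, 1/5 becomes the repeating 0.24972497….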

        • Tlaloc_Temporal@lemmy.ca · 4 months ago

          Let me restate: I am of the opinion that repeating decimals are imperfect representations of the values we use them to represent. This imperfection only matters in the case of 0.999…, but I still consider it a flaw.

          I am also of the opinion that focusing on this flaw, rather than on the incorrectness of the person using it, is a better method of teaching.

          I accept that 1/3 is exactly equal to the value typically represented by 0.333…; however, I do not agree that 0.333… is a perfect representation of that value. That is what I mean by 1/3 ≠ 0.333…: that the repeating decimal is not exactly equal to that value.