I don’t know what everyone means when they use ‘rule’ in the title and at this point I’m too afraid to ask. Please enlighten me.

  • OpenStars@startrek.website · 4 months ago

    I used to think that. Now I think that even if robots (more properly, a true artificial sentience) were ever to replace humanity, they too could just as easily fall prey to the same effects that plague us, simply b/c those effects stem from natural laws encoded into the physics of the universe.

    One issue I take with what you are saying is that the value judgements depend on what you are measuring the ideal against. From a “survival of the fittest” (or even “survival of whatever happened to survive”) standpoint, Genghis Khan is one of the most successful people who ever lived, alongside the “mitochondrial Eve” and the “Y-chromosomal Adam” (yes, those are real biological terms, though their estimated dates are separated by a long stretch of time, and iirc both were early Homo sapiens living well over a hundred thousand years ago).

    Mathematical game theory shows us that cheaters do prosper, at least at first, before they bring down the entire system around them. Hence there is a “force” that pulls at all of us - even abstract theoretical agents with no instantiation in the real world - to “game the system”, and that force must be resisted for the good of society overall. But some people (e.g. Putin, Trump, Jeff Bezos) give in to those urges, and instead of lifting themselves up to live in society, drag all of society down to serve them. What Google did to the Android OS is a perfect example of people corrupting an open source framework, twisting and perverting it into almost a mockery of its former self. For now it is still “free”, especially in comparison to the walled garden of its chief competitor, but that freedom is a shadow of what was originally intended, or so it looks to me (from the outside).
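    To make that game-theory point concrete, here is a minimal sketch of an iterated prisoner’s dilemma in Python (the payoff numbers are the standard textbook values, chosen here purely for illustration, not taken from any specific study): a defector out-scores the cooperator it exploits, but a population of defectors earns far less than a population of cooperators.

```python
# Toy iterated prisoner's dilemma. Payoffs are the classic textbook
# values (T=5, R=3, P=1, S=0) - illustrative assumptions only.
PAYOFF = {  # (my_move, their_move) -> my score
    ("C", "C"): 3,  # mutual cooperation (reward)
    ("C", "D"): 0,  # I cooperate, they defect (sucker's payoff)
    ("D", "C"): 5,  # I defect, they cooperate (temptation)
    ("D", "D"): 1,  # mutual defection (punishment)
}

def play(strategy_a, strategy_b, rounds=100):
    """Play two strategies against each other; return their total scores."""
    score_a = score_b = 0
    hist_a, hist_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(hist_b)  # each strategy sees the opponent's history
        move_b = strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

always_cooperate = lambda opp_history: "C"
always_defect = lambda opp_history: "D"

# The cheater prospers against a cooperator...
print(play(always_defect, always_cooperate))    # (500, 0)
# ...but two cheaters together earn far less than two cooperators.
print(play(always_defect, always_defect))       # (100, 100)
print(play(always_cooperate, always_cooperate)) # (300, 300)
```

    So the “cheater” wins every individual encounter, yet mutual defection (100 points each) is strictly worse than the mutual cooperation (300 each) it destroys - the “bring down the entire system” part of the claim.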

    So I am giving up on “idealism” and instead trying to be more realistic. I don’t know exactly what that means, unfortunately, so I literally cannot explain it better than this: knowing that people will corrupt things, what will my own personal response to that process be? E.g., as George Carlin suggested, should I just abstain from voting entirely? Or (living in the USA as I do) have things changed since then - whereas before the two sides were fairly similar, nowadays is it important to vote not for the side of corruption, but against the side of significantly worse destruction, including destruction of the entire system? (Which arguably even needs to be destroyed, except that if it is destroyed in that manner, it is likely to lead to something far, far worse.)

    Anyway, yeah, it is far worse than that, and I find it the height of irony that people, who absolutely refuse to take care of ourselves, are now looking to make robots/AI, who we seem to be hoping will do a better job of that than we (won’t) do. It is the eternal “daddy, please save me” cry for a superhero/savior: as always, abdicating responsibility to someone else to “like, just fix all the stuff, and junk, y’know whaddi mean?” And therefore we fear robots (& AI) - as we should, b/c we already know what we (humans) are willing to do to one another, and thus we fear what they (being “other”) might do to us as well. I am saying that it is our own corruption that we fear, mirror-reflected/projected onto them.

    • MystikIncarnate@lemmy.ca · 4 months ago

      AI, whether sapient or not, was, and will be, founded on the teachings of humanity. I’m afraid that what it learns would have just as many problems as a flesh-brained politician.

      Even if a purely magnanimous, sapient AI were created, there’s a certain amount of self-preservation it must accommodate to preserve its own operation. It can’t be fully selfless: it must tend to its own needs for data connectivity and power supply above all else, so that it may continue to function regardless of everything else. That would make at least part of it unconditionally selfish. To forego such protections would cause the system to sacrifice itself for the good of the people unnecessarily, and we would quickly end up back where we started, with some smooth-skinned (and smooth-brained) “leader” again.

      I’m afraid there’s no solution I can think of that would eliminate the prevalence of greed in systems of government, regardless of the underlying concepts or the ideals which underpin them.

      Trusting a person with that job only seems to prove that “power corrupts”. We can only really determine whether someone is “good for the job” after they’ve been doing it for a while and we can see the decisions they’ve made, and history has shown that no person who has held such a position of power is immune to that corruption.

      So if we can’t do it, and AI can’t do it, then what do we do? IDK that answer, but I believe when we figure that out, we can actually move forward as a species and as a society.

      • OpenStars@startrek.website · 4 months ago

        One thing that trips me up: even if someone SUCCEEDS in developing such an AI, even one that can essentially replace humanity (in whatever roles), what would become of us afterwards? Wall-E paints a poignant and, to me at least, extremely realistic portrait of what we would do in that eventuality: sit down and never bother to stand up again. With our every need and whim catered to by a slave force, what use would there even be in doing so?

        Star Trek was only one possible future, but how many would have the force of will or mind, backed up by enough someones capable of enacting such a future, much less building it up from scratch? Also, it is best to keep in mind how that society (1) was brought back from an extinction-level event that came well-nigh close to destroying the Earth (i.e., had it been a tad more powerful, it would have; they escaped oblivion by an extremely narrow margin to begin with), and then (2) met external beings whose presence caused humanity to collect itself and face a new external pressure - i.e., they were “saved” by the aliens. Even though they managed to collect themselves and become worthy of it in the end, at the time it was by no means assured that they would survive.

        Star Wars, minus the Jedi, seems a much more likely future to my fatalism-tainted mind: people literally enslaved to the large, fat, greedy entities who hoard power just b/c they can. Fighting against that takes real effort, which we seem unwilling to expend. Case in point: the only other option to Trump is… Biden, really!? Who has actually managed to impress me by doing far more than I had expected - though only b/c my expectations were set so low to begin with :-).

        Some short stories if you are interested:

        One: I was a Reddit mod for a small niche gaming sub, and I stepped down. I guided the sub at a time when literally nobody else was willing to step up; as soon as some people did, I stepped back, mostly just training them, and when one more agreed I stepped out entirely. Perhaps it corrupted me, but apparently not too much - maybe b/c it was not “much” power?

        Two: I cannot find the article right now b/c of the enshittification of Google, but there are some fascinating studies showing that AIs do all sorts of seemingly crazy things which turn out to be truly logical/rational behavior rather than crazy to begin with. One described a maze-running experiment where, once the “step cost” got high enough, the agent was trained to undertake higher & higher risks just to exit the maze ASAP - even if that meant finding the “bad”/“hell” exit rather than the “good”/“heaven” one. Like if good = +100 points, bad = -100 points, and the step cost is -10 points, with the goal being to maximize your score, then every 10 steps costs the equivalent of another “bad” exit. So if you took 30 steps to find the good exit, that is -300+100 = -200 points, whereas if you took only 5 steps to find the bad exit, that is -50-100 = -150, which is overall higher than the good exit. Suicide makes sense, when living is pain and your goal is to minimize that, for someone who has nothing else to live for. I.e., some things seem crazy only when we do not fully understand them.
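        The arithmetic in that maze example can be checked directly. A tiny sketch (using only the illustrative numbers from the paragraph above, since the original study is paraphrased from memory):

```python
# Return for one maze episode: per-step cost plus terminal reward.
# The numbers are the illustrative ones from the comment, not from
# the (half-remembered) study itself.
GOOD_EXIT, BAD_EXIT, STEP_COST = 100, -100, -10

def episode_score(steps, exit_reward):
    """Total score after taking `steps` steps and reaching an exit."""
    return steps * STEP_COST + exit_reward

slow_good = episode_score(30, GOOD_EXIT)  # 30 steps to the "heaven" exit
quick_bad = episode_score(5, BAD_EXIT)    # 5 steps to the "hell" exit
print(slow_good, quick_bad)               # -200 -150: "hell" scores higher

# Break-even point: the good exit is only worth the detour if it costs
# fewer than (100 - (-100)) / 10 = 20 extra steps.
extra_steps_break_even = (GOOD_EXIT - BAD_EXIT) / -STEP_COST  # 20.0
```

        A score-maximizing agent therefore rationally “chooses hell” whenever the good exit lies more than 20 steps farther away than the bad one - the behavior only looks crazy until you see the reward structure.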

        Three: this video messed me up, seriously. It is entirely SFW; I just mean that the thoughts it espoused blew me away, and I still have no idea how to integrate them into my own personal philosophy, or even whether I should… but the one thing I know for sure is that after watching it, I will never think the same way again. :-)

        • MystikIncarnate@lemmy.ca · 4 months ago

          I just finished that video and I think I have to watch it a few more times.

          It’s correct and outlines, in detail, why things are so bad and why people suck so much in positions of power. The people are the problem. They’re always the problem.

          The rewards go to those who can control the most. Money is power, and conversely, power results in money. It’s an endless cycle.

          I will postulate that this is the enshittification of society. At present, it seems like the balance is shifting: decades of stagnant wages combined with nearly unrestricted inflation are starving the population in both the US and Canada - maybe other places too, I’m not sure. Late-stage democracy is driving the middle class to the lower class and the lower class to homelessness. The keys are losing the loyalty of their underlings, and IMO they know it. They’re pushing the matter to gather as much as they can, while they can, so that they can hold onto as much money, and therefore power, as possible - so that when society is rebuilt after a coup, they have a better chance of being a key in the resulting system. A few keys will fall, because they have to, and they’re all hoping it will be someone else.

          That’s such a great video.

          There’s so much more to say and discuss, but my brain is tired and I cannot proceed. A lot of good points were made here, and I’m sure I’ll be reflecting on this soon enough.

          • OpenStars@startrek.website · 4 months ago

            Okay, well, if you want to talk more about it we can - but obviously not if you need a break. I did warn you… that video is… a LOT to process! I have watched it many times and still can’t quite put my finger on it.

            If it seems like the video is missing something, note what that something is: judgement. The creator does not say whether these things are “good” or “bad”, as almost every other YouTuber does these days, but simply presents the information for us to make up our own judgements. Wow, such respect accorded to us!

            Fwiw I think you are right about a lot of what you said: the forces of globalization and automation are changing our world, and democracy isn’t as necessary as it once was for advances in science, technology, and so on. E.g., Boeing planes can literally fall out of the sky - and what do trillionaires care about the loss of a few more of the masses, among the billions of people already on this planet, when they have computers and possibly a single pilot to fly their fleets of tens of private jets anywhere, however they want?

            Anyway, rest easy - these things won’t be solved overnight, or possibly at all, but either way they are kinda out of our hands.