I can’t contest the first point because I’m not a Firefox junkie, so I won’t.
What I will contest is the claim that the existence of AI, or deep learning, or LLMs, or neural networks, or matrix multiplication, or whatever type of shit they come up with next, isn’t problematic. I kind of think it is, inherently; I think its existence is not great. Mostly because it obfuscates, even internally, the processing of data: it obfuscates the inputs from the outputs, the work from the results. You can do that with regular programming just fine, and you can do most of the shit AI does with normal programming too, like that guy who made a program that calculates the prices of Japanese baked goods and also recognizes cancer. But AI goes a step further than that; it obfuscates more. I’m skeptical of its broad implementation.
For trivial use cases it’s kind of fine, but maybe the use cases we consider trivial are actually kind of fucked. An AI summary of an article? I dunno if that’s good. We might think, oh, this is trivial because the user shouldn’t really trust what the AI says, but, as with all technology, what if the user is an idiot and a moron? They might just use it to read the article for them, and then spout off whatever talking points and headlines it gives them. I can’t really think of a scenario where that’s actually a good thing, and it’s highly possible. It might make an article easier to parse, but I don’t think that makes it a good or useful tool; it just presents a kind of illusion of utility, especially because it’s redundant (we could just write a summary and put it at the top of the article, like every article on the face of the earth) and totally beyond our control, at least in most circumstances.
Also, the Mozilla Foundation is a nonprofit, but the Mozilla Corporation is not. The Foundation owns the Corporation, which manages Firefox development. So depending on which one you’re referring to, it might be a nonprofit or it might not. In any case, the nonprofit is a step removed from Firefox development, which I think is an important side note, even if it’s not actually that relevant to whatever conversations about AI there might be.
Perhaps, comically, it is the perfect representation of the world as it is now: “knowledge” in people’s brains is created by consuming whatever source aligns with the beliefs that they think are theirs. No source or facts are required. Only the interpretation matters.