Damn, those are not rookie numbers!
I hope they managed to grab some content creators.
Yeah, I don’t think they would have been fired if they had just held a vigil without shitting all over their employer’s brands.
It’s easy to nitpick all the details in the video, but keep in mind that 2 years ago generative AI videos consisted mostly of shape-shifting mosaics that vaguely resembled the things they were supposed to be. And now we’re down to “in this frame the 10x10 pixel airplane has a third wing”.
That doesn’t excuse the use of copyrighted material to get to this point, mind you. But to claim that this tech is going nowhere is just a contextless circlejerk.
Now that sounds like the job for an extension, since for most users “stfu and always apply the fix” would be the preferred option.
How about, if you want a broken version of Firefox, you compile it yourself, rather than let everyone else suffer?
Like, the vast majority of browser users don’t even know what an extension is, let alone install one.
I like the thought, but I can’t imagine that most people will enjoy getting even more popups when they load up a site, especially when they come from the browser itself.
Just take a look at OP here. If they respond this way to settings that are there for their actual benefit - just imagine how much they’ll like those popups.
Awww, it learned to write a word without understanding what it means.
Saw a great video about this (project is still ongoing).
It’s one thing to claim that the current machine learning approach won’t lead to AGI, which I can get behind. But this article claims AGI is impossible simply because there are not enough physical resources in the world? That’s a stretch.
It’s “funny”, because without that injection from Google, Mozilla would surely die. And the only reason Google hasn’t stopped doing that is because then Chrome (Blink) would be more likely to be treated as a monopoly.
Yay, mob justice!
Yeah, fair enough. The key part was “arcs that go nowhere”. I got so incredibly tired of TV shows that think the way to do mystery is drawing out the plot far too slowly, in hopes you’ll tune in next episode.
Then again, regarding new Trek, I only watched season 1 of Discovery, and the first episode of Picard. I ain’t got no patience for this.
And most important (for me): self-contained episodes. No season long story arcs that go nowhere.
That’s a bit too absolute a way to look at it.
From their point of view the goal isn’t to abolish human involvement, but to minimise cost. So if AI assistance lets them do the job at the same quality with a quarter of the personnel, obviously they’re gonna do that.
At the same time, just because humans having crappy jobs is the current way we solve the problem of people getting money, doesn’t mean we should keep on doing that. Basic income would be a much nicer solution for that, for example. Try to think a bit less conservatively.
I’m not sure how long ago that was, but LLM context sizes have grown exponentially in the past year, from 4k tokens to over a hundred k. That doesn’t necessarily affect the quality of the output, although you can’t expect it to summarize what it can’t hold in memory.
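For reference, a quick way to sanity-check whether a document even fits in the window before asking for a summary (a minimal sketch, assuming the tiktoken tokenizer library; the 128k limit is a hypothetical example, not any specific model’s spec):

```python
# Rough pre-flight check: count tokens and compare against an assumed window size.
import tiktoken

CONTEXT_WINDOW = 128_000  # assumed limit for illustration, not a real model's spec

def fits_in_context(text: str) -> bool:
    enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by recent OpenAI models
    n_tokens = len(enc.encode(text))
    print(f"{n_tokens} tokens (window: {CONTEXT_WINDOW})")
    return n_tokens <= CONTEXT_WINDOW
```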
troed:
It’s problematic when people conflate their gut feelings for facts.
Also troed:
I understand ActivityPub better than the creator of Lemmy
Well, that convinced me. Thanks for your insight on the matter, I now know how to value the rest of your comments.
It sucks that it overshadows the actual news.
On the plus side, this post serves as a wonderful tool to clean up some garbage users/servers.