

Or it might not. It would be a huge short term risk to do so.
As FaceDeer said, that we truly don’t know.
To be fair, OpenAI’s negative profitability has been extensively reported on.
Your point stands though; there’s no evidence they’re trying to decrease revenue. On the contrary, that would be a huge red flag to any vested interests.
BTW you can share NTFS partitions with Windows and Linux, if that’s your SO’s concern.
I do this, and it works really well! This used to be an issue in Mint because its kernel was kinda old, but no more.
Obviously getting into partitioning is a lot of configuration for most people, but still.
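For anyone curious what the shared-partition setup looks like: modern kernels (5.15+) ship the in-kernel ntfs3 driver, which is why newer Mint kernels handle this fine. A rough sketch of an fstab entry (the UUID and mountpoint are placeholders, adjust for your system):

```
# /etc/fstab — mount a shared NTFS data partition with the in-kernel ntfs3 driver
# Replace the UUID with your partition's (see `blkid`) and pick your own mountpoint
UUID=XXXX-XXXX  /mnt/shared  ntfs3  defaults,uid=1000,gid=1000  0  0
```

One caveat worth knowing: disable Windows Fast Startup first, otherwise Windows leaves the NTFS volume in a hibernated/dirty state and Linux may refuse to mount it read-write.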
I don’t buy the research paper at all. Of course we have no idea what OpenAI does because they aren’t open at all, but DeepSeek’s published papers suggest it’s much more complex than 1 model per node… I think they recommended something like a 576 GPU cluster, with a scheme to split experts.
That, and going by the really small active parameter count of gpt-oss, I bet the model is sparse as heck.
There’s no way the effective batch size is 8, it has to be waaay higher than that.
Fair.
And it’s the default.
Eventually, OEMs might find all this stuff is hurting sales and start offering Linux. I think that would be huge, as 99% of buyers will just stick to the default.
Even if true, that doesn’t mean you have to put up with it everywhere else.
I dual boot linux. These days, I only boot an extremely neutered Windows for most games or anything HDR, but basically for nothing else. Honestly a lot of old Windows stuff works better in Wine anyway.
I honestly think Zuck is more of a coward than the Palantir CEO and that techno feudalism crowd. He’s just so obviously insecure in all his decision making it’s unreal.
Zuckerberg is such a coward.
He bends over backwards for even the slightest change in wind, like VR or a fascist govt. He dropped open-weight llama at the first experimental stumble.
Mark my words, if mega liberals got into power, he’d fire this guy and act as woke as can be.
I mean, you might as well do it right then. Use free, crowd-hosted roleplaying finetunes, not a predatory OpenAI frontend.
Reply/PM me, and I’ll spin up a 32B or 49B instance myself and prioritize it for you, anytime. I would suggest this over ollama as the bigger models are much, much smarter.
Does a fanfic count?
I don’t think there are any real techbros on Lemmy. They have to go to the crowd to signal, which is Twitter, Linkedin, and sometimes YouTube I guess.
Palantir buying Chrome would be so cyberpunk it hurts.
The pathological need to find something to use LLMs for is so bizarre.
It’s like the opposite of classic ML: relatively tiny special-purpose models trained for something critical, out of desperation, because it just can’t be done well conventionally.
But this:
AI-enhanced tab groups. Powered by a local AI model, these groups identify related tabs and suggest names for them. There is even a “Suggest more tabs for group” button that users can click to get recommendations.
Take out the word AI.
Enhanced tab groups. Powered by a local algorithm, these groups identify related tabs and suggest names for them. There is even a “Suggest more tabs for group” button that users can click to get recommendations.
If this feature took, say, a gigabyte of RAM and a bunch of CPU, it would be laughed out. But somehow it ships because it has the word AI in it? That makes no sense.
I am a massive local LLM advocate. I like “generative” ML, within reason and ethics. But this is just stupid.
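To make the “take out the word AI” point concrete: grouping related tabs doesn’t obviously need a model at all. Here’s a toy sketch (purely hypothetical, nothing to do with the browser’s actual implementation) that groups tabs by hostname and uses the hostname as the suggested group name:

```python
from urllib.parse import urlparse
from collections import defaultdict

def group_tabs(urls):
    """Group tab URLs by hostname; the hostname doubles as the suggested group name."""
    groups = defaultdict(list)
    for url in urls:
        host = urlparse(url).hostname or "other"
        # Strip a leading "www." so www.example.com and example.com land together
        if host.startswith("www."):
            host = host[4:]
        groups[host].append(url)
    return dict(groups)

tabs = [
    "https://docs.python.org/3/library/urllib.parse.html",
    "https://www.example.com/a",
    "https://example.com/b",
]
print(group_tabs(tabs))
```

A real heuristic would probably also look at page titles and visit history, but the point stands: this runs in microseconds with no model loaded.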
That’s not what I’m saying. They’ve all but outright said they’re unprofitable.
But revenue is increasing. Now, if it stops increasing like they’ve “leveled out”, that is a problem.
Hence it’s a stretch to assume they would decrease costs for a more expensive model since that would basically pop their bubble well before 2029.