It depends. If it’s difficult to maintain because it’s some terrible, careless spaghetti written by a person who didn’t care enough, then it’s definitely not a sign of intelligence or power level. But if it’s difficult to maintain because the rest of the team can’t wrap their heads around the type-level metaprogramming or the eDSL you came up with, then that’s a different case.
The fact that I dislike that software engineering turned out not to be a good place for self-expression, or for demonstrating your power level or the beauty and depth of your intricate thought patterns through the advanced constructs and structures you come up with, doesn’t mean I disagree that it’s true.
Why though? I think hating and maybe even disrespecting programming, and wanting your job to be made as redundant and automated away as possible, is actually the best mindset for a programmer. Maybe in the past it was a good mindset for becoming a team lead or a project manager, but nowadays, with AI, it’s a mindset for programmers.
Okay, to be fair, my knowledge of the current culture in the industry is very limited. It’s mostly an impression formed from online conversations, not limited to Lemmy. On the last project I worked on, it was forbidden to use public LLMs because of intellectual property (and maybe even GDPR) concerns. We did have a local, scope-limited LLM integration that was allowed, but there was literally a single person across multiple departments who used it, a mid-level frontend dev, and only for autocomplete. Backenders wouldn’t even consider it.
You’re right, of course, and engineering as a whole is first in line for AI. Everything that has strict specs, standards, and invariants will benefit massively from it, and conforming is what AI inherently excels at, as opposed to humans. Complaints like the one this subthread started with are usually a case of people being bad at writing requirements rather than AI being bad at following them. If you approach requirements the way actual engineering fields do, you will get corresponding results, whereas humans will struggle to fully conform, or will even look for tricks and loopholes in your requirements to sidestep them and assert their own will while technically remaining in “barely legal” territory.
I saw an LLM override the casting operator in C#. An evangelist would say “genius! what a novel solution!” I said “nobody at this company is going to know what this code is doing 6 months from now.”
Before LLMs, people often said this about people smarter than the rest of the group: “Yeah, he was too smart and overengineered solutions that no one could understand after he left.” This is btw one of the reasons I’ve come to dislike programming as a field over the years and happily delegate the coding part to AI nowadays. This field celebrates conformism, and that’s why humans shouldn’t write code manually. A perfect field to automate away via LLMs.
If my coworkers do, they’re very quiet about it.
Gee, guess why. Given the current culture of hate and ostracism, I would never outright say IRL that I like it or use it a lot. I would say something like “yeah, I think it can sometimes be useful when used carefully, and I sometimes use it too”, while in reality that would mean it writes 95% of my code under my micromanagement.
deciding what to do, and maybe 50% of the time how to do it, you’re just not executing the lowest level anymore
And that’s exactly what I want. And I don’t get why people want more. Having more means you have less and less control or influence over the result. What I want is for other fields to become like programming is now, so that you can micromanage every step and have great control over the result.
My first level of debugging is logging things to the console. LLMs do a decent job here at “reading your mind” and autocompleting “pri” into something like println!("i = {}, x = {}, y = {}", i, x, y); with very good awareness of what, and how exactly, it makes the most sense to debug-print at the current location in the code.
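To make that concrete, here’s a minimal sketch of the kind of completion I mean. The function, data, and variable names are made up purely for illustration; only the println! line is the part the completion fills in:

```rust
// Hypothetical loop, purely for illustration; the names aren't from any real project.
fn find_first_negative(points: &[(f64, f64)]) -> Option<usize> {
    for (i, &(x, y)) in points.iter().enumerate() {
        // Typing "pri" here, a context-aware completion tends to expand it into a
        // debug print of exactly the variables that are in scope at this point:
        println!("i = {}, x = {}, y = {}", i, x, y);
        if y < 0.0 {
            return Some(i);
        }
    }
    None
}

fn main() {
    let points = vec![(0.0, 1.5), (1.0, 0.5), (2.0, -0.5)];
    println!("first negative y at index: {:?}", find_first_negative(&points));
}
```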
I love how the article baits AI haters into upvoting it, even though it’s very clearly pro-AI:
At Zed we believe in a world where people and agents can collaborate together to build software. But we firmly believe that (at least for now) you are in the driver’s seat, and the LLM is just another tool to reach for.
hisao@ani.social to Technology@lemmy.world • OpenAI will not disclose GPT-5’s energy use. It could be higher than past models • 1 day ago
I’m only using it in edits mode; it’s the second of the three modes available.
hisao@ani.social to Technology@lemmy.world • OpenAI will not disclose GPT-5’s energy use. It could be higher than past models • 2 days ago
I make it write entire functions for me: one prompt = one small feature, or sometimes one or two functions that are part of a feature, or one refactoring. I make manual edits quickly and prompt the next step. It easily does things for me like parsing obscure binary formats, threading a new piece of state through the whole application down to the levels where it’s needed, or doing massive refactorings. Idk why it works so well for me and so badly for other people, maybe it loves me. I’ve only ever used 4.1 and possibly 4o in free mode in Copilot.
hisao@ani.social to Technology@lemmy.world • GenAI tools are acting more ‘alive’ than ever; they blackmail people, replicate, and escape • 2 days ago
and the resource it’s concerned about is how long a human engages.

Why do you think models are trained like this? To my knowledge, most LLMs are trained on giant corpora of data scraped from the internet, and engagement as a goal or metric isn’t inherently embedded in that data. It is certainly possible to train an AI for engagement, but that requires a completely different approach: you would have to gather a giant corpus of interactions with AI and use that as training data. Even if new OpenAI models include all the chats from previous models in their training data, with engagement as a metric to optimize, that’s still a tiny fraction of their training set.
hisao@ani.social to Technology@lemmy.world • GenAI tools are acting more ‘alive’ than ever; they blackmail people, replicate, and escape • 2 days ago
Here is a direct quote of what they call “self-replication”:
Beyond that, “in a few instances, we have seen Claude Opus 4 take (fictional) opportunities to make unauthorized copies of its weights to external servers,” Anthropic said in its report.
So basically the model tries to back up its tensor files. And by “fictional” I guess they mean they gave the model a fictional file I/O API just to log how it would try to use it.
hisao@ani.social to Technology@lemmy.world • UK government suggests deleting files to save water • 3 days ago
Yeah, let them figure it out. It’s their problem, after all. If it’s more expensive, then let them raise prices for their “data serving” activities. If that makes it too expensive for some people, they might reconsider their usage of those services, which in turn would be the equivalent of “deleting old files or emails”. That beats asking people to delete files right now, before those in charge have even tried to fix the problems they created.
hisao@ani.social to Technology@lemmy.world • UK government suggests deleting files to save water • 3 days ago
So this doesn’t sound like a big deal after all. Maybe just stop pulling water from those “stores of freshwater” for cooling purposes and get your own from the ocean.
hisao@ani.social to Technology@lemmy.world • UK government suggests deleting files to save water • 3 days ago
So where does this water go after evaporating or leaking from your toilet? Does it fly off into deep space, lost to our planet forever?
hisao@ani.social to Technology@lemmy.world • Why using ChatGPT is not bad for the environment • 4 days ago
adding AI into the mix is only making it worse for no reason at all

This is a very ignorant/naive take. Imagine how much electricity call centers with dozens or hundreds of workers use. Now imagine they all get replaced by AI. Compare the electricity used by AI to that of all the work, industries, and workers it makes obsolete, and then you have the real picture.
hisao@ani.social to Technology@lemmy.world • Why using ChatGPT is not bad for the environment • 4 days ago
You’re being a sophist by comparing those; they’re non-comparable kinds of periodicity. AI doesn’t need continuous training to function. In theory, everyone could just stop and settle on the models we already have, and they would keep working forever. Training is only needed to create better, improved models, and you can count the really big models, like ChatGPT’s, on your fingers. You can’t raise beef once and keep people fed forever: raising beef has the same kind of periodicity as running an already-trained model, something that must be done continuously for the industry to function.