• 0 Posts
  • 20 Comments
Joined 1 year ago
Cake day: September 2nd, 2024

  • hisao@ani.social to Technology@lemmy.world · Why LLMs can't really build software · +1/-3 · 11 hours ago

    It depends. If it’s difficult to maintain because it’s some terrible, careless spaghetti written by a person who didn’t care enough, then it’s definitely not a sign of intelligence or power level. But if it’s difficult to maintain because the rest of the team can’t wrap their heads around the type-level metaprogramming or the eDSL you came up with, then it’s a different case.
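    To make the distinction concrete, here is a minimal sketch, invented for illustration rather than taken from any real codebase, of the expression-template flavor of type-level programming: code that is perfectly sound, yet opaque to a team that has never seen the technique.

    ```cpp
    #include <iostream>

    // Invented example: a tiny expression-template eDSL. The structure of
    // each expression is encoded in its type, which is exactly the kind of
    // thing that is natural to its author and baffling in a code review.
    struct Lit {
        double v;
        double eval() const { return v; }
    };

    template <typename L, typename R>
    struct Add {
        L lhs;
        R rhs;
        auto eval() const { return lhs.eval() + rhs.eval(); }
    };

    // Unconstrained for brevity; a real eDSL would restrict this overload
    // to expression types only.
    template <typename L, typename R>
    Add<L, R> operator+(L l, R r) { return {l, r}; }

    int main() {
        // The type of expr is Add<Add<Lit, Lit>, Lit>.
        auto expr = Lit{1.0} + Lit{2.0} + Lit{3.0};
        std::cout << expr.eval() << "\n";  // prints 6
    }
    ```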

  • hisao@ani.social to Technology@lemmy.world · Why LLMs can't really build software · +1/-1 · edited · 12 hours ago

    Okay, to be fair, my knowledge of the current culture in the industry is very limited. It’s mostly an impression formed from online conversations, not limited to Lemmy. On the last project I worked on, it was illegal to use public LLMs because of intellectual property (and maybe even GDPR) concerns. We did have a local, scope-limited LLM integration that was allowed, but literally one person across multiple departments used it: a “middle” frontend dev, and only for autocomplete. The backenders wouldn’t even consider it.


  • hisao@ani.social to Technology@lemmy.world · Why LLMs can't really build software · +1/-4 · 12 hours ago

    You’re right, of course, and engineering as a whole is first in line for AI. Everything with strict specs, standards, and invariants will benefit massively from it, because conforming is what AI inherently excels at, unlike humans. Complaints like the one that started this subthread usually come down to people being bad at writing requirements rather than AI being bad at following them. If you approach requirements the way actual engineering fields do, you will get corresponding results, while humans will struggle to fully conform, or will even hunt for tricks and loopholes in your requirements to sidestep them and assert their own will while technically staying in “barely legal” territory.


  • hisao@ani.social to Technology@lemmy.world · Why LLMs can't really build software · +2/-7 · 14 hours ago

    I saw an LLM override the casting operator in C#. An evangelist would say “genius! what a novel solution!” I said “nobody at this company is going to know what this code is doing 6 months from now.”

    Before LLMs, people often said this about anyone smarter than the rest of the group: “Yeah, he was too smart and overengineered solutions that no one could understand after he left.” This is, by the way, one of the reasons I’ve grown to dislike programming as a field over the years and happily delegate the coding part to AI nowadays. The field celebrates conformism, and that’s why humans shouldn’t write code manually. A perfect field to automate away via LLMs.
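    For anyone who hasn’t run into it: “overriding the casting operator” means defining a user-defined conversion. A minimal sketch of the pattern, written here in C++ (whose conversion operators play the same role as C#’s implicit operator), with a Money type invented for illustration:

    ```cpp
    #include <iostream>

    // Invented example of a user-defined conversion. The conversion is
    // invisible at the call site, which is why one reviewer calls it
    // genius and another calls it a maintenance trap.
    struct Money {
        long cents;

        // Implicit conversion to double: Money values silently leak into
        // plain arithmetic anywhere a double is expected.
        operator double() const { return cents / 100.0; }
    };

    int main() {
        Money m{1999};
        double d = m;            // no cast visible here
        std::cout << d << "\n";  // prints 19.99
    }
    ```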


  • hisao@ani.social to Technology@lemmy.world · Why LLMs can't really build software · +9/-7 · 14 hours ago

    If my coworkers do, they’re very quiet about it.

    Gee, guess why. Given the current culture of hate and ostracism, I would never outright say IRL that I like it or use it a lot. I would say something like, “yeah, I think it can sometimes be useful when used carefully, and I sometimes use it too,” while in reality it writes 95% of my code under my micromanagement.


  • deciding what to do, and maybe 50% of the time how to do it, you’re just not executing the lowest level anymore

    And that’s exactly what I want. I don’t get why people want more. Having more means you have less and less control or influence over the result. What I want is for other fields to become like programming is now, where you micromanage every step and have great control over the result.

  • I make it write entire functions for me: one prompt = one small feature, or sometimes one or two functions that are part of a feature, or one refactoring. I make manual edits fast and prompt the next step. It easily does things for me like parsing obscure binary formats, threading a new piece of state through the whole application to the levels where it’s needed, or doing massive refactorings. Idk why it works so well for me and so badly for other people; maybe it loves me. I only ever used 4.1 and possibly 4o in free mode in Copilot.
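    For a sense of that granularity, here is the kind of single, self-contained function that fits in one prompt; the binary layout and names are invented for illustration:

    ```cpp
    #include <cstdint>
    #include <cstring>
    #include <optional>
    #include <vector>

    // Invented 10-byte little-endian header: magic (4 bytes),
    // version (2), payload length (4). Assumes a little-endian host
    // for simplicity.
    struct Header {
        uint32_t magic;
        uint16_t version;
        uint32_t payload_len;
    };

    std::optional<Header> parse_header(const std::vector<uint8_t>& buf) {
        if (buf.size() < 10) return std::nullopt;
        Header h{};
        std::memcpy(&h.magic, buf.data(), 4);
        std::memcpy(&h.version, buf.data() + 4, 2);
        std::memcpy(&h.payload_len, buf.data() + 6, 4);
        if (h.magic != 0x31474948u) return std::nullopt;  // invented magic
        return h;
    }
    ```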


  • and the resource it’s concerned about is how long a human engages.

    Why do you think models are trained like this? To my knowledge, most LLMs are trained on giant corpora of data scraped from the internet, and engagement as a goal or metric isn’t inherently embedded in such data. It is certainly possible to train an AI for engagement, but that requires a completely different approach: you would have to gather a giant corpus of interactions with the AI and use that as training data. Even if new OpenAI models include all the chats from previous models in their training data, with engagement as a metric to optimize, that’s still a tiny fraction of the training set.

  • Yeah, let them figure it out; it’s their problem, after all. If it’s more expensive, let them raise prices for their “data serving” activities. If that gets too expensive for some people, they might reconsider their usage of those services, which in turn might be the equivalent of “deleting old files or emails”. That beats asking people to delete files right now, before those in charge have even tried to fix the problems they created.

  • You’re being a sophist by comparing those; they are non-comparable kinds of periodicity. AI doesn’t need continuous training to function. In theory, everyone could just stop and settle on the models we already have, and they would keep working forever. Training is only needed to create better, improved models, and you can count the really big models, like ChatGPT’s, on your fingers. You can’t raise beef once and keep people fed forever: raising beef has the same kind of periodicity as running an already-trained model, something done continuously for the industry to function.