• Nighed@feddit.uk
    19 hours ago

    Building that mental model CAN be done by AI.

    In my experience, if you get it to build a requirements doc first, then ask it to implement that while updating the doc as required (effectively its mental state), you will get pretty good output with decent ‘debugging’ ability.

    This even works ok with the older ‘dumber’ models.

    That only works when you have a comprehensive set of requirements available, though. It works when you want to add a new screen/process (mostly), but good luck updating an existing one! (I haven’t tried getting it to convert existing code to a requirements doc - anyone tried that?)
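The doc-as-mental-state loop above can be sketched roughly like this. `ask_llm` and `implement_with_doc` are hypothetical names, and the prompts are illustrative, not tested recipes; wire in whatever chat-completion client you actually use:

```python
# Rough sketch of the requirements-doc workflow described above.
# ask_llm is a hypothetical placeholder for a real LLM client call.

def ask_llm(prompt: str) -> str:
    """Stand-in for an actual chat-completion API call."""
    raise NotImplementedError("wire up your LLM client here")

def implement_with_doc(feature_request: str, ask=ask_llm) -> tuple[str, str]:
    # 1. Have the model write the requirements doc first.
    doc = ask("Write a requirements doc for: " + feature_request)
    # 2. Implement against that doc.
    code = ask("Implement these requirements:\n" + doc)
    # 3. Update the doc to match the code, so it keeps serving as the
    #    model's persistent 'mental state' for later changes.
    doc = ask("Update this requirements doc to match the code:\n" + doc + "\n" + code)
    return doc, code
```

The point is only that the doc is fed back into every subsequent prompt, so the model never has to re-derive its own state from the code.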

    • flop_leash_973@lemmy.world
      15 hours ago

      I tried feeding ChatGPT a Terraform codebase once and asked it to produce an architecture diagram of what the codebase would deploy to AWS.

      It got most of the little blocks right for the services that would get touched. But the layout and traffic direction flow between services was nonsensical.

      Truth be told, it did a better job than I initially thought it would.

      • Nighed@feddit.uk
        14 hours ago

        The trick is to split up the tasks into chunks.

        Ask it to identify the blocks.

        Then ask it to identify the connections.

        Then ask it to produce the diagram.
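The three chunked steps above might look like this in code. `ask_llm` is a hypothetical stand-in for a real chat-completion call, and the prompt wording is just illustrative; the only real idea is that each step's answer is fed into the next prompt:

```python
# Hypothetical sketch of the chunked-prompt approach described above.
# ask_llm is NOT a real library function; substitute your own LLM client.

def ask_llm(prompt: str, context: str) -> str:
    """Stand-in for an actual chat-completion API call."""
    raise NotImplementedError("wire up your LLM client here")

def diagram_from_terraform(tf_source: str, ask=ask_llm) -> str:
    # Step 1: identify the blocks (services) only.
    blocks = ask("List every AWS service this code deploys, one per line.",
                 tf_source)
    # Step 2: identify the connections, feeding step 1's answer back in.
    connections = ask(
        "For these services:\n" + blocks +
        "\nlist each traffic flow as 'A -> B', one per line.",
        tf_source,
    )
    # Step 3: produce the diagram from the two structured answers.
    return ask(
        "Render these services and flows as a Graphviz digraph:\n"
        + blocks + "\n" + connections,
        tf_source,
    )
```

Each call gets a narrow, checkable task, which is where the human-in-the-loop decomposition happens.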

        • Pumasuedeblue@sh.itjust.works
          13 hours ago

          Which means you just did four things to help the AI which the AI can’t do itself. That makes it a tool: useful in some applications, not useful in others, and constantly requiring a human to properly utilize it.