The idea of the mental model CAN be done by AI.
In my experience, if you get it to build a requirements doc first, then ask it to implement that while updating the doc as required (effectively its mental state), you will get a pretty good output with decent ‘debugging’ ability.
This even works ok with the older ‘dumber’ models.
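The loop described above can be sketched as two prompt phases, where the doc from phase one is fed back in as the model’s persistent state. This is a minimal sketch, not anyone’s actual setup: `llm` is a stand-in for whatever chat-completion call you use (hosted API, local model), and the prompt wording is made up for illustration.

```python
def build_requirements_prompt(feature: str) -> str:
    """Phase 1: ask the model to write a requirements doc, not code."""
    return (
        "Write a requirements document for the feature below. "
        "List inputs, outputs, edge cases, and acceptance criteria. "
        "Do not write any code yet.\n\n"
        f"Feature: {feature}"
    )

def build_implementation_prompt(requirements: str) -> str:
    """Phase 2: implement against the doc; the doc is the model's
    'mental state', so deviations must update it first."""
    return (
        "Implement the requirements below. If you must deviate, "
        "update the requirements document first and explain why, "
        "then return both the revised doc and the code.\n\n"
        f"Requirements:\n{requirements}"
    )

def requirements_first(llm, feature: str) -> str:
    """Run phase 1, then feed its output into phase 2."""
    requirements = llm(build_requirements_prompt(feature))
    return llm(build_implementation_prompt(requirements))

# Smoke test with a canned stand-in model (no real API calls):
transcript = []
def canned_llm(prompt: str) -> str:
    transcript.append(prompt)
    return "DOC v1" if len(transcript) == 1 else "code for DOC v1"

result = requirements_first(canned_llm, "CSV export screen")
```

The point of keeping the two phases as separate calls is that you can re-run phase 2 against an edited doc without regenerating the doc itself.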
That only works when you have a comprehensive set of requirements available, though. It works when you want to add a new screen/process (mostly), but good luck updating an existing one! (I haven’t tried getting it to convert existing code to a requirements doc - anyone tried that?)
I tried feeding ChatGPT a Terraform codebase once and asked it to produce an architecture diagram of what the codebase would deploy to AWS.
It got most of the little blocks right for the services that would get touched. But the layout and the direction of traffic flow between services were nonsensical.
Truth be told, it did a better job than I initially thought it would.
The trick is to split up the tasks into chunks.
Ask it to identify the blocks.
Then ask it to identify the connections.
Then ask it to produce the diagram.
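To make the chunking concrete: only the first two steps need the model at all. Once you have the blocks and connections as lists, the final diagram step is mechanical. Here’s a rough sketch assuming the block/connection lists came out of the first two prompts (the AWS names are invented examples), rendering them as Graphviz DOT:

```python
def to_dot(blocks: list[str], connections: list[tuple[str, str]]) -> str:
    """Render identified blocks and directed connections as Graphviz DOT."""
    lines = ["digraph aws {"]
    for block in blocks:
        lines.append(f'  "{block}";')
    for src, dst in connections:
        lines.append(f'  "{src}" -> "{dst}";')
    lines.append("}")
    return "\n".join(lines)

# Hypothetical output of steps 1 and 2 for a small Terraform stack:
blocks = ["ALB", "ECS service", "RDS"]
connections = [("ALB", "ECS service"), ("ECS service", "RDS")]
dot = to_dot(blocks, connections)
```

Since the last step is deterministic code, the traffic directions can’t come out scrambled the way they can when one mega-prompt does everything.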
Which means you just did four things to help the AI which the AI can’t do itself. That makes it a tool: useful in some applications, not useful in others, and constantly requiring a human to properly utilize it.