If you just let it do a full rewrite again and again, what protects against breaking changes in the API? Software doesn’t exist in a vacuum; there might be other businesses or people relying on a certain API. A breaking change can be as simple as the same endpoint suddenly being named slightly differently.
So if you start marking every API method as “please, no breaking changes for this”, at what point do you need a full software developer again just to take care of the AI?
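To make that concrete, here’s a rough sketch in Python (hypothetical base URL; the /v2/device endpoint is just an example) of what an external consumer’s integration typically looks like. The path is hard-coded on their side, and nothing in your own code base knows that this dependency exists:

```python
import requests

# Hypothetical external integration against someone's public API.
# BASE_URL and the endpoint path are made up for illustration.
BASE_URL = "https://api.example.com"

def fetch_devices():
    # The path is pinned on the consumer's side. If a regenerated
    # backend ships a renamed endpoint, this call starts returning
    # 404 even though the vendor's own build and tests stay green.
    resp = requests.get(f"{BASE_URL}/v2/device", timeout=5)
    resp.raise_for_status()
    return resp.json()
```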
I’ve also never seen AI modify an existing code base; it’s always new code getting spit out, maybe 80% correct, and it likes to hallucinate functions that don’t even exist. Sure, you can use it for run-of-the-mill templates, but even a developer on here who told me they rely heavily on ChatGPT said they have to verify all the code it spits out, because sometimes it’s garbage.
In the end it’s a damn language model that uses probability to pick the next word. It’s fantastic at what it does, but it has no consistent internal logic, and given the way it works, it never will.
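Stripped of all the engineering, the core step looks something like this toy sketch (made-up three-word vocabulary and made-up scores, nowhere near a real model’s scale): the scores become probabilities and the next word is a weighted dice roll. Nothing in that step ever checks for logical consistency:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up vocabulary and made-up model scores (logits).
vocab = ["device", "appliance", "gadget"]
logits = np.array([2.1, 1.9, 0.3])

# Softmax turns the scores into a probability distribution,
probs = np.exp(logits) / np.exp(logits).sum()

# and the "next word" is simply a weighted draw from it.
next_word = rng.choice(vocab, p=probs)
print(dict(zip(vocab, probs.round(2))), "->", next_word)
```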
Mate, I’ve used ChatGPT before; it straight up hallucinates functions the moment you want anything more complex than a basic template or a simple program. And that’s how programming works: if even one tiny detail is wrong, things straight up don’t work. Also, have fun pasting ChatGPT answers into a real program you have to compile. Are you going to copy code into hundreds of files by hand?
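A typical failure looks like this. To be clear, the method name below is deliberately made up; it’s exactly the kind of plausible-sounding call these models invent on top of a real library:

```python
import pandas as pd

df = pd.DataFrame({"price": [10, 12, 11, 900]})

# remove_outliers() does not exist anywhere in pandas. It is the
# sort of plausible-sounding method a model hallucinates: the
# snippet looks fine in the chat window and dies with an
# AttributeError the moment you actually run it.
clean = df.remove_outliers(column="price")
```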
My example was public APIs. You might have an endpoint /v2/device that was generated the first time around, and external customers and businesses have built their software against it. The next run around, the AI generates /v2/appliance instead, and everything breaks, while to the AI the software and the unit tests still look fine, because all it did was change a name.
If you don’t want that change, you now have to tell the AI what to name things (or what to keep consistent). Who is going to do that? The CEO? The intern? Who writes the perfect specification?
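In a normal team the answer is a contract test: a developer writes down the public surface once and CI fails any change that breaks it. A minimal sketch with pytest and a stand-in Flask app (the app and the endpoint list here are hypothetical placeholders, not anyone’s real setup):

```python
import pytest
from flask import Flask

# Stand-in for the generated backend; in real life you would
# import the actual app object from the code base.
app = Flask(__name__)

@app.get("/v2/device")
def list_devices():
    return {"devices": []}

# The public contract, written down once by a human and kept
# outside whatever the AI regenerates. Renaming /v2/device to
# /v2/appliance fails this test even if the app itself and its
# own unit tests still pass.
PUBLIC_ENDPOINTS = ["/v2/device"]

@pytest.mark.parametrize("path", PUBLIC_ENDPOINTS)
def test_public_endpoint_still_exists(path):
    assert app.test_client().get(path).status_code != 404
```

And now you’re back to needing someone who knows what a contract test is and which endpoints belong in that list.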
“Management and sound technical specifications”? That sounds to me like you’ve never actually worked at a real software company.
You just named the main problem yourself: ChatGPT is not perfect. Code that isn’t perfect (compiles and has consistent logic) is worthless. If you need a developer to look over it, you’ve already lost, and it would be faster to have that developer write the code themselves.
Have you ever gotten a pull request with 10k lines of code? The AI could spit out that much code in an instant; no developer would be able to debug that mess or do a real code review. They’d just click “Approve” and throw whatever the AI decided to spit out onto the giant garbage heap.
If there’s a bug down the line (if you even get the whole thing to run), good luck finding it when no one on your developer team wrote the code in the first place.
You misunderstood; I never said management is worthless. The product managers know what customers want. The product owners keep 8 out of 10 dumb ideas away from the development team. And management in turn leans on the development team to find out what is actually technically possible, and in what time frame.
If management just threw every customer wish into a magic black box to get code out, then even if that code was perfect you wouldn’t have a product. You’d have a steaming pile of crap.
I’ve done plenty of code reviews, and they only work as small, human-readable increments. Like the saying goes: a code review of 100 lines might take an hour; a code review of 10,000 lines takes thirty minutes, because at that size the reviewer just skims it and clicks approve.
AI would spit out so much code, with so little context for the developer, that it would be impossible to review properly.