90%? You don't have to be in the 90th percentile to be able to code a web app, which doesn't seem like something an LLM will be able to do in 3 years. How would that work? You think there will be an LLM where you just drop the spec into it and out comes the codebase? If you do it piecemeal, how do you ensure all the pieces cohere, that they fit into the architecture, basically, that it doesn't turn into a big pile of spaghetti? How do you make an LLM maintain previously existing software, which is where probably most programming effort goes into?
Check this out, and put it in a percentile:
https://twitter.com/InternetIsHell/status/1636149754148147201
I'd say top 1%.
These are valid questions for responsible CTOs. Those are in the upper 10% of "coders".
Have you tried coding a web app with ChatGPT? I did, and it worked (GPT-3.5). I even made it do some cosmetic changes with CSS, which is outside my expertise, as I'm back-end focused.
It was very fast, very pleasant, and dirt cheap.
> how do you ensure all the pieces cohere
The problem needs to fit inside its context window. Since best practice, for a while now, has been to delegate business logic to untouchable vendor libraries, very little code is actually needed to stitch an app together.
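To make the "very little glue code" point concrete, here is a minimal sketch of what such stitching can look like. Everything in it is hypothetical (the task-list "app", the schema, the function names); the heavy lifting is delegated to libraries (Python's stdlib sqlite3 and json standing in for vendor libraries), and the app itself is just a thin layer that an LLM with a modest context window could plausibly generate whole:

```python
import json
import sqlite3

def make_store() -> sqlite3.Connection:
    # Storage is delegated entirely to sqlite3; no hand-rolled persistence.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, title TEXT)")
    return conn

def add_task(conn: sqlite3.Connection, title: str) -> int:
    cur = conn.execute("INSERT INTO tasks (title) VALUES (?)", (title,))
    conn.commit()
    return cur.lastrowid

def list_tasks(conn: sqlite3.Connection) -> str:
    # Serialization is delegated to json; the app only maps rows to dicts.
    rows = conn.execute("SELECT id, title FROM tasks").fetchall()
    return json.dumps([{"id": r[0], "title": r[1]} for r in rows])

conn = make_store()
add_task(conn, "write spec")
add_task(conn, "feed spec to LLM")
print(list_tasks(conn))
# → [{"id": 1, "title": "write spec"}, {"id": 2, "title": "feed spec to LLM"}]
```

The entire "business logic" is a few lines of glue between two library calls, which is exactly the kind of code that fits comfortably in a prompt.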
With GPT-4's much greater context window, I'm looking forward to delegating even more complex projects to my new underling. (I do have the Pro subscription and it's worth every cent; hell, it's the most valuable subscription I've ever had to any service.)
Remember that GPT-3.5-based ChatGPT could already hallucinate a Linux virtual machine and stay coherent throughout the exercise: https://www.engraved.blog/building-a-virtual-machine-inside/
> I live in Eastern Europe: I can say it as it is.
Beams of appreciation.
> check out this video about Meta’s efforts, skip to Prompt Pre-Training at 35:05. (I
> recommend the whole video (and Meta’s paper as well, linked in the video notes).)
Please provide a link to the video -- I'd like to watch it... and read their paper, too.
The video is the one under the link. The paper is linked in the comments, but the video itself is a review (Yannic does a ton of paper reviews on YouTube; I highly recommend his channel). My timestamp might be off.
Also, this method might be a dead end as far as solutions go.
https://www.youtube.com/watch?v=ZTs_mXwMCs8
https://galactica.org/static/paper.pdf
My bad -- I missed the single-word link ("Meta"). Sorry... and thanks.