https://arstechnica.com/ai/2025/01/how-i-program-with-llms/ [arstechnica.com]
This article is a summary of my personal experiences with using generative models while programming over the past year. It has not been a passive process. I have intentionally sought ways to use LLMs while programming to learn about them.
[...]
Along the way, I have found oft-repeated steps that can be automated, and a few of us are working on building those into a tool specifically for Go programming: sketch.dev [sketch.dev]. It’s very early, but so far, the experience has been positive.
[...]
The only technological shift I have experienced that feels similar to me happened in 1995, when we first configured my LAN with a usable default route. I replaced the shared computer in the other room running Trumpet Winsock with a machine that could route a dialup connection, and all at once, I had the Internet on tap.
[...]
There are three ways I use LLMs in my day-to-day programming:
- Autocomplete. [...]
- Search. [...]
- Chat-driven programming. [...]
[...]
As this is about the practice of programming, it has been a fundamentally qualitative process that is hard to write about with quantitative rigor. The closest I will get to data is to say that it appears from my records that for every two hours of programming I do now, I accept more than 10 autocomplete suggestions, use an LLM for a search-like task once, and program in a chat session once. The rest of this is about extracting value from chat-driven programming.
[...]
Chat-based programming may not be for you. I am doing a particular kind of programming, product development, which could be roughly described as trying to bring programs to a user through a robust interface. That means I am building a lot, throwing away a lot, and bouncing around between environments.
[...]
Chat-based LLMs do best with exam-style questions
[...]
- Avoid creating a situation with so much complexity and ambiguity that the LLM gets confused and produces bad results. [...]
- Ask for work that is easy to verify. Your job as a programmer using an LLM is to read the code it produces, think about it, and decide if the work is good. [...]
[...]
You always need to pass an LLM’s code through a compiler and run the tests before spending time reading it. They all sometimes produce code that doesn’t compile.
[...]
There was a programming movement some 25 years ago focused on the principle “don’t repeat yourself.” As is so often the case with short snappy principles taught to undergrads, it got taken too far.
[...]
The past 10–15 years have seen a far more tempered approach to writing code, with many programmers understanding that it's better to reimplement a concept if the cost of sharing the implementation is higher than the cost of implementing and maintaining separate code.
[...]
What we have now is a world where the tradeoffs have shifted. It is now easier to write more comprehensive tests. You can have the LLM write the fuzz test implementation you want but didn’t have the hours to build properly.
[...]
I foresee a world with far more specialized code, fewer generalized packages, and more readable tests. Reusable code will continue to thrive around small, robust interfaces and otherwise will be pulled apart into specialized code. Depending on how well this is done, it will lead to either better software or worse software. I would expect both, with a long-term trend toward better software by the metrics that matter.