SoylentNews is people

posted by janrinok on Saturday January 11, @02:34PM   Printer-friendly
from the machine-programming-the-machine dept.

https://crawshaw.io/blog/programming-with-llms/

This article is a summary of my personal experiences with using generative models while programming over the past year. It has not been a passive process. I have intentionally sought ways to use LLMs while programming to learn about them.

[...] Along the way, I have found oft-repeated steps that can be automated, and a few of us are working on building those into a tool specifically for Go programming: sketch.dev. It's very early, but so far, the experience has been positive. [...] The only technological shift I have experienced that feels similar to me happened in 1995, when we first configured my LAN with a usable default route. I replaced the shared computer in the other room running Trumpet Winsock with a machine that could route a dialup connection, and all at once, I had the Internet on tap.

[...] There are three ways I use LLMs in my day-to-day programming:

  1. Autocomplete. [...]
  2. Search. [...]
  3. Chat-driven programming. [...]

[...] As this is about the practice of programming, this has been a fundamentally qualitative process that is hard to write about with quantitative rigor. The closest I will get to data is to say that it appears from my records that for every two hours of programming I do now, I accept more than 10 autocomplete suggestions, use an LLM for a search-like task once, and program in a chat session once.

The rest of this is about extracting value from chat-driven programming. [...] chat-based programming may not be for you. I am doing a particular kind of programming, product development, which could be roughly described as trying to bring programs to a user through a robust interface. That means I am building a lot, throwing away a lot, and bouncing around between environments.

[...] Chat-based LLMs do best with exam-style questions:

  1. Avoid creating a situation with so much complexity and ambiguity that the LLM gets confused and produces bad results.
  2. Ask for work that is easy to verify. Your job as a programmer using an LLM is to read the code it produces, think about it, and decide if the work is good.

[...] You always need to pass an LLM's code through a compiler and run the tests before spending time reading it. They all produce code that doesn't compile sometimes.
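In the Go setting the article is about, that compile-then-verify habit can be sketched as a single runnable file: a hypothetical LLM-drafted helper plus a few cheap table-driven checks run before any careful reading (the function and cases here are invented for illustration, not from the article):

```go
package main

import (
	"fmt"
	"reflect"
)

// reverseWords is the kind of small helper an LLM might draft for you.
func reverseWords(words []string) []string {
	out := make([]string, len(words))
	for i, w := range words {
		out[len(words)-1-i] = w
	}
	return out
}

func main() {
	// Cheap checks before spending time reading the code:
	// run it on inputs where the answer is obvious.
	cases := []struct {
		in, want []string
	}{
		{[]string{}, []string{}},
		{[]string{"a"}, []string{"a"}},
		{[]string{"a", "b", "c"}, []string{"c", "b", "a"}},
	}
	for _, c := range cases {
		if got := reverseWords(c.in); !reflect.DeepEqual(got, c.want) {
			panic(fmt.Sprintf("reverseWords(%v) = %v, want %v", c.in, got, c.want))
		}
	}
	fmt.Println("ok")
}
```

If `go run` fails or a case panics, the generated code goes back to the chat before you spend any reading time on it.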

[...] The past 10–15 years have seen a far more tempered approach to writing code, with many programmers understanding that it's better to reimplement a concept if the cost of sharing the implementation is higher than the cost of implementing and maintaining separate code.

[...] What we have now is a world where the tradeoffs have shifted. It is now easier to write more comprehensive tests. You can have the LLM write the fuzz test implementation you want but didn't have the hours to build properly.
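Go's built-in fuzzing (`go test -fuzz`) is the natural target for that kind of LLM-written test. As a self-contained sketch of the idea, here is a hand-rolled randomized round-trip check of the sort you might ask an LLM to produce; the function under test and the property are invented for illustration:

```go
package main

import (
	"fmt"
	"math/rand"
	"strings"
)

// splitJoin stands in for whatever function you asked the LLM to test.
func splitJoin(s, sep string) string {
	return strings.Join(strings.Split(s, sep), sep)
}

func main() {
	// Property: splitting then joining with the same separator
	// must reproduce the original string exactly.
	r := rand.New(rand.NewSource(1))
	alphabet := []rune("ab,;")
	for i := 0; i < 1000; i++ {
		var b strings.Builder
		for j := r.Intn(20); j > 0; j-- {
			b.WriteRune(alphabet[r.Intn(len(alphabet))])
		}
		s := b.String()
		if got := splitJoin(s, ","); got != s {
			panic(fmt.Sprintf("round-trip failed: %q -> %q", s, got))
		}
	}
	fmt.Println("1000 random cases passed")
}
```

In real Go code the same property would live in a `FuzzXxx(f *testing.F)` function so the toolchain manages the corpus; the point is that generating this boilerplate is exactly the "hours you didn't have" the article describes.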

[...] I foresee a world with far more specialized code, fewer generalized packages, and more readable tests. Reusable code will continue to thrive around small, robust interfaces and otherwise will be pulled apart into specialized code. Depending how well this is done, it will lead to either better software or worse software. I would expect both, with a long-term trend toward better software by the metrics that matter.


Original Submission

This discussion was created by janrinok (52) for logged-in users only, but has now been archived. No new comments can be posted.
  • (Score: 2) by RamiK on Saturday January 11, @04:58PM

    by RamiK (1813) on Saturday January 11, @04:58PM (#1388409)

    I foresee a world with far more specialized code, fewer generalized packages, and more readable tests.

    The future is developers asking the LLM to sort an array, then correcting it by asking it to "use an existing library for the algorithm", while writing a test, since we're too lazy to review but coming up with a couple of examples is trivial.

    So, more generalized packages, less-readable tests, and probably about the same amount of specialized code, I guess?

    --
    compiling...
  • (Score: 5, Insightful) by Rosco P. Coltrane on Saturday January 11, @05:16PM (2 children)

    by Rosco P. Coltrane (4757) on Saturday January 11, @05:16PM (#1388411)

    - Big Data snooping on you continuously as you program with LLMs
    - Leaking proprietary company data all the time to the aforementioned Big Data provider without the approval of your boss

    • (Score: 5, Interesting) by stormreaver on Saturday January 11, @06:02PM

      by stormreaver (5101) on Saturday January 11, @06:02PM (#1388414)

      - Courts determining that LLMs are massive copyright violation engines. Then you open yourself up to massive copyright lawsuits, with unemployment soon to follow. This one is still an open question, as none of the existing copyright lawsuits have been resolved.

    • (Score: 3, Interesting) by turgid on Saturday January 11, @06:48PM

      by turgid (4318) on Saturday January 11, @06:48PM (#1388418) Journal

      So far, just as I have avoided the likes of Farcebook, I have avoided LLMs. From what I gather, there's nothing these LLMs can do for me in the sphere of programming that my shell scripts can't already do. Furthermore, my scripts don't advertise to the world what I'm doing, and they don't hallucinate based on the nonsense out there on the Internet. Now get off my lawn!

  • (Score: 3, Interesting) by corey on Saturday January 11, @09:15PM (2 children)

    by corey (2202) on Saturday January 11, @09:15PM (#1388444)

    I get the feeling, with all this talk about using “AI” to speed up writing code, that it’ll be bad for software overall. It seems like it is/will erode the ability of the coder (being lazy, not doing the thinking they used to do), as well as reduce understanding of what the code actually does. If someone else does work for me, I don’t need to think about it or understand its nuances, which saves me time, but I also miss the underlying characteristics of how it integrates with other parts of my work, as well as bugs it might contain.

    Glad I’m a hardware engineer though; seems software engineering is going through another transition. I wouldn’t feel comfortable having an LLM design me an op-amp comparator or RF amplifier.

    • (Score: 3, Informative) by deimtee on Saturday January 11, @11:37PM (1 child)

      by deimtee (3272) on Saturday January 11, @11:37PM (#1388462) Journal

      Glad I’m a hardware engineer though; seems software engineering is going through another transition. I wouldn’t feel comfortable having an LLM design me an op-amp comparator or RF amplifier.

      Don't suggest that too loud. LLM-style AI is just as applicable to modular hardware design. The only reason they don't do it is that nobody has bothered to train one on op-amp and RF amplifier designs. Yet.

      --
      If you cough while drinking cheap red wine it really cleans out your sinuses.
      • (Score: 2) by corey on Monday January 13, @02:17AM

        by corey (2202) on Monday January 13, @02:17AM (#1388626)

        Yeah, I have had that thought. I think the likes of Altium are working on it.

  • (Score: 3, Insightful) by VLM on Saturday January 11, @09:42PM (1 child)

    by VLM (445) on Saturday January 11, @09:42PM (#1388447)

    I find it surprising that the only "official" way we're supposed to use LLMs is intense social pressure to ask for code and copy and paste it without testing or analysis.

    One thing I've used it for is data generation. Create a JSON of a database dump of (glances across desk) beef jerky store inventories, with at least 5 flavors of beef jerky and 3 package sizes and reasonable random qtys for a food store.

    Some of the most fun I've had is asking essay questions. OK given the previous JSON "database" of beef jerky provide me about one paragraph of justification why I should breadth-first search it vs depth-first search it. I don't need the searches I'll be using libraries and my example is utterly awful but the general idea of asking architecture level essay questions is pretty good. Hey should I store the above as JSON or YAML or CSV gimmie a paragraph thanks!

    Another use is handling make-work projects. Guess who's creating a five-page PowerPoint presentation for next Thursday's brown bag lunch on the topic of introducing mid-level Python programmers to ... (glances at an open tab) "Plotly Open Source graphing library for Python". That's right, the answer is "not me"; the LLM will take care of that, thanks. Just ask for an outline of the overall presentation and it'll do all the busywork. Sure, I could gin something up in an hour or two, but I have better things to do, or at least other things to do.

    Or here's some code that's already set in stone from above, so it doesn't matter if anyone likes it or not, but we need to pencil-whip it while pretending we do code reviews: give me three topics to bring up at the code review meeting about the following chunk of code. As long as I pay more attention to the LLM output than any of the meeting attendees pay to the meeting in general, I'll get away with it. Well, if they used asyncio they could have used trio, or if they used trio they could have used asyncio; is that worth discussing? Hell no, but it pencil-whips a code review, doesn't it?

    • (Score: 2) by Freeman on Wednesday January 15, @07:34PM

      by Freeman (732) on Wednesday January 15, @07:34PM (#1389013) Journal

      One thing I've used it for is data generation. Create a JSON of a database dump of (glances across desk) beef jerky store inventories, with at least 5 flavors of beef jerky and 3 package sizes and reasonable random qtys for a food store.

      I love your usage example. While LLMs aren't the be-all and end-all, they can still be useful.

      --
      Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"