You're right, you are using it wrong. An LLM can read code faster than you can, write code faster than you can, and knows more things than you do. By "you" I mean you, me, and anyone with a biological brain.
Where LLMs are behind humans is depth of insight. Doing anything non-trivial requires insight.
The key to effectively using LLMs is to provide the insight yourself, then let the LLM do the grunt work. Kind of like paint by numbers. In your case, I would recommend some combination of defining the API of the library you want yourself manually, thinking through how you would implement it and writing down the broad strokes of the process for the LLM, and collecting reference materials like a format spec, any docs, the code that's creating these packets, and so on.
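To make the "paint by numbers" idea concrete, here is a minimal sketch of what "defining the API yourself" might look like for a binary-packet library before handing the grunt work to an LLM. All names (`Packet`, `parse_packet`) and the byte layout are hypothetical placeholders, not anything from this thread:

```python
# Hypothetical API skeleton you might write by hand, then ask an LLM to
# flesh out against the real format spec. The layout assumed below
# (1-byte type, 2-byte big-endian length, payload) is illustrative only.
from dataclasses import dataclass


@dataclass
class Packet:
    """One decoded packet; fields are placeholders for your real format."""
    msg_type: int
    payload: bytes


def parse_packet(data: bytes) -> Packet:
    """Decode a single packet from raw bytes.

    Broad strokes for the LLM: byte 0 is the message type, bytes 1-2 are
    a big-endian payload length, and the payload follows immediately.
    """
    msg_type = data[0]
    length = int.from_bytes(data[1:3], "big")
    return Packet(msg_type, data[3:3 + length])
```

The point is that the signatures, the dataclass, and the one-paragraph layout description are the insight you supply; the LLM fills in validation, error handling, and the rest of the message types.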
> An LLM can read code faster than you can, write code faster than you can, and knows more things than you do.
I don't agree. It can't write code at all; it can only copy things it's already seen. Besides, if what you say is true, why can't it solve my problem?
> The key to effectively using LLMs is to provide the insight yourself, then let the LLM do the grunt work
Okay, so how do I do that? Remember, I want to do ZERO TYPING. I do not want to type a single character that is not code. I already know what I want the code to do, I just want it typed in.
I just don't think AI can ever solve a problem I have.
When you write a library, the first step is always designing it. LLMs don't get rid of that step; they get rid of the next step, where you implement your design.
Is this really "additional"? Do you not do design docs/ADRs/RFCs etc. and talk about them with your team? Do you take any notes or write out your design/plan in some way, even just for yourself?
If I'm writing a library to work with a binary format, there is very little English in my head required, let alone written English.
That is a heavily symbolic exercise. I will "read" the spec, but I will not pronounce it in literal audible English in my head (I'm a better reader than that.)
I write Haskell tho so maybe I'm biased. I do not have an inner narrative when programming ever.
I’m not part of any team, I work on my projects alone. I rarely write long-form design documents; usually I either just start coding or write very vague notes that only make sense when combined with what’s in my head.
I think one side of the issues folks are having is that, combined with the mandate to use these tools, there is also an expectation or assumption that developers will instantly get X% more productive. Like, "you must use this tool and you will be twice as productive".
Where I work there has certainly been that kind of discussion: "we need to use AI for this, because no offense, but you are simply not fast enough". And this from people who do not understand software development and have never worked with it. They have only read the online stuff about 20X speeds and FOMO. (And my workplace is generally quite laid back and reasonable. I am sure many other places are much more aggressively steered.)
Many say it has only recently become good enough, but I don't agree. It is clearly better now, but I had basically the same view of code gen-AI a year ago as I have now. It was obvious even then that LLMs were a big deal. They were really cool then and are amazing now. But some issues are undeniably still there. Maybe they are not a question of some simple quality measure, meaning they might not be solved by simply crunching more tokens with larger context.
I used Claude Code before August 2025 and it was definitely usable, although clearly more capable now. The difference is noticeable but not a completely different world, all in all, in my eyes.
I notice on a daily basis even now that it can easily lead to bloat and unnecessary complexity. We will see if it can be fixed by using even stronger models or not.
Yes. Claude added a suggested random scramble (if that's what you mean?), also a running average of 5/12/100, local storage of past times on the first iteration, and my son told it to also add a button for +2s penalties and touch screen support.
Ok cool! I have not done any cubing related coding so I don't know how complicated it gets but making sure suggested scrambles are solvable etc seems like it could be non-trivial?
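For what it's worth, the random-*move* style of scramble is trivially solvable: any sequence of legal face turns leaves the cube in a reachable state, so the only real work is avoiding redundant consecutive moves. (Random-*state* scrambles, like official WCA software produces, do require a real solver.) A minimal sketch, with illustrative names:

```python
# Sketch of a random-move scramble generator. A stricter version would
# also avoid repeating the same axis (e.g. "R L R"); this only avoids
# repeating the same face, which already rules out "R R'"-style waste.
import random

FACES = "UDLRFB"
SUFFIXES = ["", "'", "2"]  # clockwise, counterclockwise, half turn


def scramble(length=20, rng=None):
    """Return a space-separated scramble of `length` moves."""
    rng = rng or random.Random()
    moves, prev_face = [], None
    while len(moves) < length:
        face = rng.choice(FACES)
        if face == prev_face:  # consecutive same-face moves are redundant
            continue
        moves.append(face + rng.choice(SUFFIXES))
        prev_face = face
    return " ".join(moves)
```

Since every generated sequence is just a list of legal turns, inverting it move by move solves the cube again, which is why solvability is a non-issue for this style.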
Regarding the OP's dilemma. I am split. I enjoy both the process and the destination. With AI, the process is faster and less satisfying, but reaching the destination is satisfying in its own way, and enables certain professional ambitions.
I have always had other outlets for my "process" needs, and I believe I will spend more time on them in the future. Other hobbies. I love "artisanal coding" but that aspect was never really my job.
I am conservative regarding AI driven coding but I still see tremendous value.
It makes me want to ask you: do you ever see helpful things from your colleagues at all?