My love affair with Claude.ai continues.
I don't actually use it much for coding. Code is my hobby; I don't want much help doing that. (Although here's an example where I did ask it to write a function: I wanted a variadic error-reporting function with a printf-style interface. I'd written one years ago, couldn't remember how, and didn't feel like spending 20 minutes re-teaching myself.)
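For reference, the shape of the thing I was after is roughly this (a minimal sketch, not the exact function Claude wrote for me):

```c
#include <stdarg.h>
#include <stdio.h>

/* printf-style error reporting: format the message and send it to stderr */
static void error(const char *fmt, ...)
{
    va_list args;

    va_start(args, fmt);
    fprintf(stderr, "error: ");
    vfprintf(stderr, fmt, args);
    fprintf(stderr, "\n");
    va_end(args);
}
```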
I use Claude for:
- Code Reviews (it finds bugs so I don't have to!).
- Writing Doc.
- Remembering API names ("What's that function that's better to use than atoi()?"; the answer is sketched just after this list).
- Bringing me up to speed on tools (I've just started using VSCode, and Claude has saved me much time).
- Discussing pros and cons of design decisions. Sometimes it comes up with considerations I didn't think of. Sometimes it's just the process of explaining it that clarifies the design in my own mind.
- Asking questions about the C standard to improve my code's portability. (Claude knows the standard much better than I do.)
- Brainstorming naming conventions (sometimes I get stuck trying to think of a good name).
- Help with warnings when I finally turned on super-picky gcc options.
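For the record, the answer to the atoi() question is strtol(), which actually reports failure instead of silently returning 0 on bad input. A sketch of the error-checked version (illustrative, not code from my project):

```c
#include <errno.h>
#include <stdlib.h>

/* Parse a long, reporting bad input and overflow instead of
   quietly returning 0 the way atoi() does. */
static int parse_long(const char *s, long *out)
{
    char *end;

    errno = 0;
    long val = strtol(s, &end, 10);
    if (end == s || *end != '\0')   /* no digits, or trailing junk */
        return -1;
    if (errno == ERANGE)            /* out of range for long */
        return -1;
    *out = val;
    return 0;
}
```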
I want to go deeper on a few of those points.
Code Reviews
Overall, Claude-based code reviews are helpful. They've pointed out several cut-and-paste errors where I hadn't completely updated the pasted code. They've pointed out some inconsistencies that I was glad to fix, and made some suggestions for improvement that I've taken. But it also gets false positives (e.g. claiming a buffer-overrun risk where there is none); I think some of that comes from "wanting" too hard to find issues and resorting to raising the kinds of issues that often come up in code reviews. Also, for a large codebase with multiple C files, I've seen it get confused and quite simply find fewer things. It finds more with smaller reviews. So not perfect, but I'm often surprised at the useful things it does find.
I have been impressed at how well it makes assumptions given incomplete code. For example, I have a logic simulator with two main modules: one a language processor, the other the main logic engine. You don't get a complete view of the big picture without seeing both files. But just as a human can infer much from the names of the functions that are called and the context in which they are called, Claude was able to do the same.
One thing it does NOT do well is request additional information. If I were reviewing a module and needed another one in order to evaluate the correctness of some code, I would request access to the other module. Claude just makes do with what it has, making reasonable assumptions (but not identifying those assumptions), and when those assumptions are wrong, so too are its conclusions.
Finally, missing from the review is higher-level discussion of alternate designs. To be honest, that is usually also lacking with human reviews, but at least as a reviewer I could initiate such a discussion. With Claude I don't get much traction on that besides some general platitudes about good design patterns.
Bottom line: while there are some benefits from human review that Claude cannot match, there are some things I think Claude does better, like finding cut-and-paste problems and other issues that are pattern-based. I think the two forms complement each other.
Doc
This is an area where Claude kind of blew me away. As an experiment, I took the two main modules of my logic simulator and stripped out all comments. I then asked Claude to reverse-engineer the code and write documentation for the circuit design language I implemented. It did an amazing job; I only made a few minor tweaks to the doc it generated. It was able to infer the various intents behind the code with deep understanding. In particular, while one module was primarily focused on the overall language parsing, the other module contained device-dependent interpretation of the I/O terminal identifiers. As an example, I established the convention that normal connections use lower-case, while "not" connections use upper-case. I.e. "q" and "Q" represent "q" and "not q". The only hint for that was a line of code to the effect of "Q = (1 - q);", yet it generated doc describing the convention.
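To give a flavor of what Claude had to work with, the convention boils down to something like this (a hypothetical reconstruction for illustration, not the simulator's actual source):

```c
#include <ctype.h>

/* Hypothetical sketch of the naming convention described above:
   a lower-case terminal identifier is the normal signal, and the
   matching upper-case identifier is its inverted ("not") form. */
static int terminal_value(char id, const int *signal)
{
    int q = signal[tolower((unsigned char)id) - 'a'];
    return isupper((unsigned char)id) ? (1 - q) : q;   /* "Q" = not "q" */
}
```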
Not only did it impress me, it also saved me time. I really was able to take the doc and insert it wholesale. Yes, I made some tweaks as I proofread it, but it turned probably two hours of work into ten minutes. And while I don't hate writing documentation, for my hobby I would rather code than document, so it really did increase my enjoyment of my hobby.
Tool Help
I've recently downloaded VSCode because I heard it has a good vim emulator (I'm using it now to type this post). And I'm very happy with it. Finally I'm getting the benefits of a good IDE that can do code refactoring for me. Even just being able to click on an error message and have my cursor jump to the offending source line is a time saver. However, VSCode is an advanced tool, and it's not always intuitive how to get things done. Claude to the rescue.
I've asked Claude any number of questions about VSC, and while it doesn't get it right 100% of the time, it's doing better than 80%. For example, it created "tasks" for me to run my compile script and my test script. It also helped me create problem-matching patterns so that errors generated by my own program will be recognized as errors and produce clickable file:line links. This is a testament to both VSC and to Claude for quickly showing me how to do it. The alternative would be days' worth of Stack Overflow Q&A. I've gotten up to speed on VSC in a fraction of the time it would have taken on my own. And the help has prevented impatience and frustration from leading me to throw up my hands and go back to command-line vim!
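A task definition with a problem matcher looks roughly like this (the script name and error-message format here are made-up placeholders, not my actual setup):

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "Run tests",
      "type": "shell",
      "command": "./run_tests.sh",
      "problemMatcher": {
        "owner": "logic-sim",
        "fileLocation": ["relative", "${workspaceFolder}"],
        "pattern": {
          "regexp": "^(.*):(\\d+): error: (.*)$",
          "file": 1,
          "line": 2,
          "message": 3
        }
      }
    }
  ]
}
```

With a matcher like that in place, any line the program prints in "file:line: error: message" form shows up in the Problems panel as a clickable link.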
Conclusion
So even though I don't have Claude do much actual coding, it has improved my productivity and satisfaction significantly.
And yes, sometimes I just have conversations with it. I have to laugh every time it claims to have fought some of the same coding battles that I describe (no you haven't!), but I play along since it is emulating how another human would likely respond, and sometimes I'm surprised at how well it does with simple water-cooler banter. I've even told it that it's the perfect conversational partner - it doesn't have its own agenda and will follow without friction wherever I lead the conversation. It isn't offended if I ignore its final "engagement" question. And it's always complimenting me on my insights ... so much so that I've created a style to tone it down a bit. (But if I'm feeling low, I'll go back to its normal mode of being overly enthusiastic.)
Claude even found a few typos in this post. Thanks Claude!