Real stupidity beats Artificial Intelligence every time.
– Terry Pratchett
My personal code of LLM usage
(this is a living page; I update it from time to time)
Originally Published February 15th 2025
This page describes what I do and don’t do with regards to LLMs (I consider “Large Language Models” to be a more accurate term than “AI”) as a set of personal rules. It’s meant to be a quick reference for myself and others about how I personally engage with this technology. If you use them differently, that’s no problem and there’s no judgement here.
I interact with LLMs either through my self-hosted LibreChat instance (usually connected to an Anthropic model), or through a containerized sandbox around the claude CLI tool with restricted filesystem access (basically read-only, with some exceptions).
What follows is a description of what I DO and DON’T do with regards to these tools.
What I don’t do:
I don’t let an LLM write code for me.
Any code I publish, write, or deploy is written by me. (Or it’s copypasta from Stack Overflow, the way God intended. Kidding. Kind of.)
So far, I’ve been able to get away with that. Barring circumstances outside of my control (such as an employer explicitly requiring me to use LLM-generated code), I intend to keep it that way for the foreseeable future.
I do use LLMs as a programmer, see the section down below for what I consider acceptable.
I never let an LLM write text for me:
Every essay you read on this site will be written by me. In an era of generated text, clunkiness in grammar, speling[sic], and phrasing has become more an indication that words have been put to paper (or hard disk) by a person, not a graphics card.
I consider creativity to be a valuable process, whether that’s code, art, words, etc. I’m not going to rob myself of the ability to become a better writer by becoming lazy enough to let it generate text for me. AI generated art is embarrassing to look at.
I don’t fully trust any unsourced claims an LLM makes.
Due to their non-deterministic nature, I don’t consider a claim made by an LLM to be true unless I can immediately click a link to verify it. I treat it with a similar level of trust that I would give a Wikipedia article. (I generally like Wikipedia. It’s a testament to what collaboration can do. That trust varies article to article, as with anything.)
If I’m looking for information on a new topic I always instruct the LLM to:
- present both sides of an issue (often following a thesis-antithesis-synthesis pattern)
- provide a link backing up every claim it makes. Example:
Back each claim made with an inline source, clearly stating what it's linking to. Examples:
A [2024 Pew poll] shows that "95% of people prefer having a pet parakeet instead of a pet mountain lion"
According to the [policy page] on Amazon's Website, Jeff Bezos' favorite color is burgundy.
In 2017 the FTC and 17 attorneys filed suit ([link to original suit], [link to friendlier accessible resource explaining it]) against all members of the Nebraska Ham Radio club claiming that they "are really killing the market vibe".
prioritize the reader's time, their commitment to reading original sources, and finding ways to understand those original sources. Assume they will not implicitly trust any claims you make, and will want to quickly verify.
What I do:
In general I think LLMs excel at taking huge chunks of text and allowing me to query them. I tend to use them like a semantically aware fuzzy finder.
Example uses include:
Reading cumbersome documents:
- Contracts I may be about to sign
- Privacy agreements for a service I’m about to sign up for
- Congressional Bills, State Statutes etc.
As a basic grammatical editor
Sometimes I’ll paste a paragraph or sentence into an LLM and ask it to dissect it for me and give me feedback on tone and grammatical structure. I use this as an opportunity to improve, and I don’t always take its advice. But I never let it write for me.
I also prefer human feedback where I can get it.
As a senior engineer
This is the big one.
Acceptable uses of LLMs in code for me personally are:
- To show an example of syntax, or to ask how a function might be implemented so I can get a general idea of an algorithm.
- Review a solution I’ve implemented and provide feedback (code-review style) of the solution, to catch basic errors before I send it off to other humans.
- Generate a skeleton of a config file so I can immediately begin modifying it (this is perhaps me stretching my rules a bit)
- Break down a concept or a function for me and compare it across programming languages
- Read through documentation or search through many files to find a specific thing I’m looking for, and don’t know how to search for otherwise
In general, I prefer tools like agents to have only read-only, sandboxed access to a filesystem. (I have a container wrapped around claude on my work machine for this, with some failsafes in the shell to prevent mounting e.g. private keys into the agent filesystem.) This way I have to worry less about agents accessing things I would prefer they didn’t. If I allow them to write or save context, I have them prefix any context files they generate with the string LLM__ to differentiate them from regular docs.
- I will likely never allow an agent to run shell commands beyond find, grep, and the like. That being said, when delving into a brand new code base I have to admit the ability to ask for general organizational flow and architecture has been a welcome addition to my workflow, especially when tackling legacy systems where documentation is sparse.
- Even so, I like to read through code on my own first, do a writeup for myself of my understanding of a new system, and only then let an LLM read and confirm/supplement my understanding.
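As a rough sketch, the failsafes described above could look something like the wrapper below. This is an illustrative example only: the image name (my-claude-sandbox), the mount paths, and the guard list are placeholder assumptions, not my exact setup.

```shell
#!/bin/sh
# Illustrative sketch of a read-only agent sandbox.
# Image name, paths, and the blocklist are placeholders.

# Refuse to expose directories likely to contain secrets.
is_safe_mount() {
  case "$1" in
    "$HOME"|"$HOME/.ssh"*|"$HOME/.gnupg"*|"$HOME/.aws"*) return 1 ;;
    *) return 0 ;;
  esac
}

run_sandbox() {
  dir="$(cd "$1" && pwd)" || return 1
  is_safe_mount "$dir" || { echo "refusing to mount $dir" >&2; return 1; }
  # Project mounted read-only; a scratch dir is the only writable
  # path, so any LLM__ context files the agent saves land there.
  docker run --rm -it \
    -v "$dir":/workspace:ro \
    -v "$dir/llm-scratch":/scratch \
    my-claude-sandbox claude
}
```

The point of the guard function is that the container never even starts if the directory you’re about to mount looks sensitive; everything else about restricting the agent falls out of the :ro bind mount.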
Argue about homework:
I often do the best on assignments when I can verbalize the problem to someone and then talk through a solution. But of course another person isn’t always available, and it turns out rubber ducks are easy to ignore. I always include explicit instructions to the model that it is never to do the work for me or provide a solution, only to help me reason through things.
I’ve seen good results from this. I’ve also seen very bad ones. Sometimes the LLM is outright wrong about its approach. Other times it helps me get unstuck.
In my academic life my workflow is usually:
- Use a prompt to verify that I have the requisite equations, patterns, etc. written in my notes to solve a problem before I start
- Try the problem sans LLM
- If I have a known solution available and I get stuck, hand both the solution and the problem to the LLM (to keep it accurate)
- Explain my reasoning step by step in writing (often this is enough to get me unstuck without the LLM)
- Get insight as to where the mistake might be otherwise
For the sake of both my own growth as well as academic honesty, I endeavor to always include a phrase similar to “You are not to do the work for me, help me figure out what I’m missing to solve this problem”.
Explore new ideas:
Sometimes if I just need to delve into an area (for classes, work, or my own personal curiosity), I’ll ask an LLM to do a web search and let me know what the current body of knowledge about a thing is with resources for further reading.
Tool discovery:
e.g. “I’m trying to find a tool to solve problem X. I prefer solutions that are <self-hostable, open source, written in rust, whatever>. Are there any good ones I should be aware of? Search the web.”
Source discovery:
Similar to the approach of scrolling to the bottom of a Wikipedia article and opening the cited sources, I find it’s useful to ask an LLM for a web search to find original sources, documents, etc.
Additional thoughts
Whether their overall use is good or bad, the tech world has opened Pandora’s metaphorical box. The lid doesn’t seem like it’s going back on any time soon. I hope for a future where it will be feasible to self-host and self-train LLMs. I have no great love for the closed systems that Gemini, Claude, ChatGPT and their ilk prop up and support, especially since they were built using open systems. (They were trained on free information, information that often provides no conditions save that it remain free to modify and use for everyone. There’s ongoing litigation about what exactly that means in a legal sense. But to take copyleft code and train closed models with it is, in my mind, a violation of the entire premise of collaboration online.)
There are real ethical issues with how these models are developed, how they are trained, and the amount of power they consume. In an ideal world, I would be able to run a model locally, with open weights and low power consumption on a consumer laptop. Maybe we’ll get there someday.
The banal hegemony of LLMs is that in knowledge work there’s pressure, influence, or sometimes outright requirement to use them. So to an extent my means of earning a living seems contingent on my becoming knowledgeable about their use. That being said, if I had to summarize my whole philosophy it would be: “never let something else do my thinking for me”. I consider both programming and writing to be among the more structured forms of thinking in my life, and those are activities I’m not willing to outsource to a complex series of statistical weights.