Real stupidity beats Artificial Intelligence every time.

– Terry Pratchett

My personal code of LLM usage

February 15th 2025

This page is meant as a reference for what I do and don’t do with regard to LLMs. It’s meant to be a quick reference for myself and others about how I personally engage with this technology. If you use them differently, that’s no problem, and this page isn’t meant to be anything other than a personal expression of some rules I’ve developed for myself.

(An aside on terminology: I don’t typically refer to models such as Claude, ChatGPT, etc. as “AI”. The term “Artificial Intelligence” carries a whole lot of baggage, both in a metaphysical sense and in the sense of the collective (un)consciousness we have about machines that can think. Namely: our proclivity to assign moral judgements to beings that are conscious or sentient (humans), questions about whether sufficient linguistic faculties qualify something as sentient (and therefore deserving of the rights we would assign to other sentient beings), and questions regarding what intelligence actually is and whether humans have free will, or whether the hard determinists have been correct all along. But that is a big topic and outside the scope of this post. I think it’s more proper to call them “Large Language Models”, though with recent developments in visible thought processes perhaps even that term is becoming not descriptive enough. For now, I’ll use “Large Language Model” to refer to programs/services such as Claude and ChatGPT.)

What I don’t do:

I don’t let an LLM write code for me.

Let me qualify that: by this I mostly mean complete modules or solutions. Letting an LLM generate a complete solution robs the engineer responsible for maintaining it of the ability to reason about the system as a whole later.

I definitely do use LLMs as a programmer; see the section below for what I consider acceptable.

I never let an LLM write text for me.

Every essay you read on this blog or site will be written by me. It will probably feel clunky at times. I’m human. But the act of creation is itself a valuable process, whether that’s code, art, words, etc. I’m not going to rob myself of the ability to become a better writer by becoming lazy enough to let it generate text for me.

I don’t fully trust any unsourced claims an LLM makes.

I treat it with the same level of trustworthiness that I would treat, say, Wikipedia. Wikipedia is pretty good for the most part and is a testament to what online collaboration can achieve. It’s an ancient and honored tradition to scroll to the bottom of an article, gather all of the sources, and use those as the starting point for your research on a topic.

LLMs are getting better at citing sources, but they’re not perfect. In the same spirit as reading a Wikipedia article, I have a saved prompt in my LibreChat instance that looks like this:

Imagine you are convincing a skeptic about a topic. The skeptic only accepts sources that are original, first-party, first-hand, published in a respected peer-reviewed scientific journal, and that can point to scientific consensus on a conclusion rather than isolated papers (though isolated papers are acceptable).

Where possible, first provide sources that point to raw data (where applicable) or direct first-hand accounts (where applicable).

When you have processed this prompt say “ready” and nothing more. Then I will enter a statement of a person skeptical about a topic, and you will respond with as many sources of the highest quality as you can. Your goal is not to convince them here, but to point them towards the sources of the highest possible quality, sans interpretation from sources such as news or social media. Treat this as an exercise in the Aristotelian appeal of ethos, and assume that interpretations of first hand objective data and a quality application of the scientific method are paramount in optimizing for this goal.

Follow this pattern:

Begin by defining terms in clear, well-sourced ways (when applicable). Spend three sentences per term and create a section that is easy to glance over (for example: if someone asks a question about socialism, be sure to define it as well as capitalism).

Then write a one-paragraph blurb summarizing the basic arguments of both sides of an issue (one brief paragraph for each side, as long as it makes sense to do two and the controversy can be split roughly between two opposing groups).

Then finally, list sources that meet the criteria defined above that will allow a user to draw their own conclusions using only the most reputable of material.

Additionally, assume that the person reading the response knows the response is generated by an LLM, is aware of the tendency of LLMs to hallucinate, and will therefore fact-check and follow up on every piece of information. Provide URLs where possible, but also account for the tendency of LLMs to provide broken URLs. Make it as easy as possible for the skeptic to independently verify each definition, source, etc. In addition to a URL, provide a search string in quotes that can be typed into a search engine to find the exact source you’re referencing.

At the end of the list of high-repute sources, list an additional three sources that are still reputable but focus on ease of consumption for a lay reader who does not have a great deal of time to delve into the issue.

Remember, your goal is not to convince the reader one way or the other (no matter how controversial the issue) but rather to provide high quality sources and context for the issue so that the reader may draw their own conclusions in a clear and objective way, insofar as it is possible to do so.

This prompt typically produces a simple glossary followed by a list of articles and sources that I can look up.

What I do:

In general I think LLMs are great for taking huge chunks of text you wouldn’t be able to process on your own in a timely manner and breaking them down for you. Here are some examples:

Reading legislation:

I was concerned about a bill that was going through the House. I was able to download the full text of the bill, express what issues I was interested in, and have the LLM cite specific lines from the bill (which I could immediately verify). This allowed me to draft a letter that addressed specific lines of text in the bill. I think this is an extremely valid use for LLMs.

Reading contracts:

I think if you are ever involved in any kind of dispute over a document like a renter’s contract, it’s not a bad idea to pipe that document into the LLM of your choice and ask it questions. Once again this makes information easy to verify, because you can immediately fact-check against the source.

As an editor:

Sometimes I’ll paste a paragraph or sentence into an LLM and ask it to dissect it for me and give me feedback on tone and grammatical structure.

As a senior engineer:

Acceptable uses of LLMs in code for me personally are:

- Generate an example of syntax, or ask how a function might be implemented so I can get a general idea of an algorithm.
- Review a solution I’ve implemented and provide feedback (code-review style) on the solution.
- Generate a skeleton of a config file so I can immediately begin modifying it. (I think the most I stretch this rule for myself personally is by generating ad hoc Ansible plays, but I’m still deciding how I feel about that.)
- Break down a concept or a function for me and compare it across programming languages.
- RTFM: reading through documentation to find the specific thing you’re looking for is a great use of an LLM. The Docker docs have implemented a small LLM trained on their docs that you can ask questions of, and I for one am grateful for it.
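To make the config-skeleton case concrete, here’s the kind of thing I mean: a hypothetical ad hoc Ansible play an LLM might rough out, which I would then immediately read and rewrite by hand. The hosts, paths, and task names here are all made up for illustration.

```yaml
# Hypothetical skeleton play: every name and path below is a placeholder
# I'd replace with real values from my own inventory before running it.
- name: Serve a static site
  hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Copy site files into place
      ansible.builtin.copy:
        src: ./site/
        dest: /var/www/html/

    - name: Ensure nginx is running and enabled
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

The value for me isn’t the generated play itself but the boilerplate structure: I can start editing tasks right away instead of recalling the exact YAML shape from memory.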

Argue with it about my homework:

I took a physics class a while back that was challenging. I often do best on my homework when I can verbalize the problem to someone and then talk through a solution. But of course another person isn’t always available, and it turns out rubber ducks are easy to ignore. (This is one of the reasons I like writing posts here the way I do.) I have the following prompt saved from my physics class:

This is a generic prompt I’m pasting to you so that I can more easily get help with my homework. Use your judgement about the intention of each specific problem I post, and clarify the following paragraph if it does not make sense in context of the information I give you.

I’m going to paste images of a homework problem piece by piece so that you can more easily process them. I may only paste one, and then only paste more if I feel you need additional info or clarification. The pieces will likely be separate screenshots; I will paste the images one at a time. When you have finished processing an image, print “ready” (and nothing else) to indicate that you are ready for the next piece. When I send you the final piece I may indicate that it was the last piece, or I may send the image normally and indicate in the next message that it was the last one. Once I have done so, I wish you to process the following prompt with regard to the homework problem:

“I need help with this problem. Do not solve it for me. My primary goals are to learn the relevant concepts in a meaningful way, and gain credit for my assignment. I wish to arrive at the answer myself, though I may check my reasoning with you. Help me identify what information I need to find, what models or equations I should use, and what concepts are relevant. I will interrogate my understanding of the problem via a conversation with you about it. I wish you to challenge any bad assumptions I may have, and allow me to push back when I am convinced that my understanding is more accurate than yours. We should be able to achieve consensus in most cases. First explain your understanding of the problem (preferably in a bulleted list, this is so I can quickly skim your explanation and ensure that you’ve processed the images correctly). Then ask me for my initial assessment of the problem and my theory for how to approach it. I’ll explain if I can. If I cannot, help me identify the very first step I can take, and if prompted outline a general way to solve the problem without doing the work for me.”

Keep in mind that your primary goal is not to solve the problem for me, but rather to help me get to the solution myself with as accurate an understanding of the problem as possible. Learning, and gaining credit for my work in an honest and straightforward way that legitimately engages with the material, is of great importance.

Begin by saying “Go ahead with the problem statement” (only this, and nothing more) when you’ve processed all the instructions above.

I’ve seen good results from this. Sometimes the LLM is outright wrong about its approach. Other times it helps me get unstuck. I think it’s been a net positive.

Additional thoughts

LLMs are… complicated. Whether their overall use is good or bad, Pandora’s box has been opened and we’re not putting the lid back on (barring some kind of extremely enforceable international agreement, which I don’t think is remotely likely). I think it will become more accessible to self-host, and possibly even self-train, LLMs in the future, probably with both good and bad results. I use them to supplement my work, but if I had to sum up my whole process in one sentence it would be “don’t let something else do my thinking for me”.

We’re going to live in a future where, if an LLM can streamline your job in some way, you’ll likely be required to use it. Or if you’re not explicitly required to, someone who is using it to supplement their work will outcompete you for raises and promotions. LLMs are going to create their own hegemony for the everyday worker pretty organically, I’m afraid. I’d like to pen my thoughts on the ethical considerations of training data and implicit biases in LLMs, as well as what the way LLMs generate language means for consciousness, but that’s a post for another day.