After a bad case of Covid, I threw up every day for seven months.
I did not know why. I was doing blood tests, seeing doctors, trying to figure out what my body had become after the infection. Eventually a test came back showing IgE reactions — immune responses — to milk, eggs, almonds, and shrimp. Real allergies, not intolerances. My body had apparently decided, somewhere in the wreckage of that illness, to start treating ordinary food like an attack.
The problem was that I had no idea how much of everyday food is milk and eggs. Not obvious dairy. Not a glass of milk or a fried egg. I mean the milk solids in crackers. The egg whites in salad dressing. The whey protein in a protein bar that does not advertise itself as a dairy product. The casein in the non-dairy creamer, which contains dairy despite its name.
If I could avoid every trigger for three days in a row, I could feel human again. Three days was the reset. But I could not get three days clean because I did not yet know what I was avoiding. Every label was a puzzle. Every ingredient list was a potential trap written in small print by someone who assumed you already knew what things were.
I wanted a tool that I could point at a package and ask: can I eat this?
I started building one. It could read ingredient labels from photos. It could pull nutrition information from chain restaurant websites. It had a menu rewriter that would take a full menu and hand back only the things you could safely order. It knew your allergies because you told it once, and it never forgot.
I eventually had to set it aside — the liability of being wrong, for someone in anaphylaxis, was more than one developer should carry alone. But the idea never stopped being the right idea. And in the past year, something changed.
The AI already in your pocket got good enough to do this itself. You just have to teach it who you are.
What “personal context” actually is
Every major AI assistant — Claude, ChatGPT, Gemini — has a place where you can store information about yourself that the AI reads before every single conversation. It goes by different names. Claude calls it memory and user preferences. ChatGPT calls it custom instructions. Gemini calls it personal context. Same idea everywhere: you write it once, and from that point on, the AI knows it without you having to repeat yourself.
Most people use this for things like “I prefer concise answers” or “I work in marketing.” But it is also exactly the right place to store your allergy profile — permanently — so that every time you ask your AI about food, a label, a menu, or an ingredient you don’t recognize, it already knows what you cannot eat.
Here is where to find it on each platform (menus move around between releases, so if a path has changed, search the settings for the feature name):

- Claude: Settings, under your profile — the memory and personal preferences fields.
- ChatGPT: Settings → Personalization → Custom Instructions.
- Gemini: Settings → Saved Info, which is Google's name for personal context.
The template — copy, fill in, paste
Here is a context file you can adapt. Replace the example allergies with your own. Keep the rules section — that part is doing important work, which I will explain shortly.
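One possible shape for that file. The wording here is mine and purely illustrative, and the allergen list is the example set from this article; replace both with whatever fits your own diagnosis.

```
# Who I am
I have confirmed IgE food allergies (example list; replace with your own):
- Milk and all dairy: whey, casein, sodium caseinate, milk solids, butter, ghee
- Eggs: egg whites, egg yolks, albumin, lysozyme
- Almonds, including almond flour and almond milk
- Shrimp and shrimp-derived ingredients

# Rules for every food question
1. Check every ingredient against my allergy list, including derivatives
   and chemical names that hide the allergen.
2. Tell me exactly which ingredient is a problem and why, not just
   "this may contain dairy."
3. Always remind me to double-check the label myself.
4. If you are not certain whether something is safe, say so. Do not guess.
```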
That last rule is the most important one in the whole file. We will come back to it.
How to use it
Once the context is saved, using it is exactly what you would hope. You are standing in a grocery aisle. You flip a package over. The ingredient list is printed in six-point type and you cannot read it in the store lighting. You open your AI, take a photo, and ask: can I eat this?
The AI reads the label, checks it against your profile, and tells you specifically what is safe and what is not — and why. Not “this might contain dairy.” It will say “this contains whey, which is a dairy protein you listed as an allergy.”
You can also ask it to look up a restaurant. “I’m going to Olive Garden tonight, what can I order?” It will pull their published nutrition information and filter the menu down to what works for you. You can ask it to read a recipe and flag every problem ingredient. You can ask it what “sodium caseinate” is when you don’t recognize a term on a label — and now you know to avoid it, because it is a milk protein hiding behind a chemical name.
Now let’s talk about whether you can trust it
This is the part of the article where I have to be honest with you, because your health is not a place for cheerleading.
AI makes mistakes. Labels change and the AI does not know about last week’s reformulation. A manufacturer can add an ingredient without the AI knowing. The AI might not recognize an obscure derivative of your allergen. It might miss something in a long ingredient list. It might be confident when it should not be.
This tool is a first pass, not a final answer. Always read the label yourself. When in doubt, do not eat it. For anaphylactic allergies, no AI is a substitute for your own eyes on the ingredients and, when necessary, a direct call to the manufacturer.
The rules in the template above — especially “if you are not certain, say so” and “always remind me to check the label” — are there specifically to reduce this risk. Use them.
Why rules make AI more accurate — not less
Here is something worth understanding, especially if you have heard that AI “just makes things up” or “only tells you what you want to hear.” Both of those criticisms are real — but they apply most to AI that has been given no rules and no constraints. The context file you just wrote changes that dynamic significantly.
When an AI hallucinates — invents something that is not true — it usually happens because it is being asked to fill a gap. It does not know the answer, but it has been trained to be helpful, so it generates something plausible. In an open-ended conversation with no guardrails, that tendency can produce confident-sounding nonsense.
But look at what you are actually asking the AI to do when you photograph an ingredient label. You are not asking it to invent anything. You are asking it to compare. Here is a list of ingredients. Here is your allergy profile. Does one appear in the other? That is a matching task, not a creative task. The surface area for hallucination shrinks dramatically when the job is “find this word in this list” rather than “tell me something interesting about this topic.”
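To make that concrete, here is a toy sketch of the matching task in Python. This is not how the AI works internally, and the synonym map is just the example allergens from this article; it only illustrates why "check this list against that list" leaves so little room for invention.

```python
# Toy illustration of the matching task: compare an ingredient list
# against a map of allergens and their hidden names.
ALLERGEN_SYNONYMS = {
    "milk": {"milk", "whey", "casein", "sodium caseinate", "milk solids", "butter"},
    "egg": {"egg", "egg white", "albumin", "lysozyme"},
}

def flag_ingredients(ingredients, synonyms=ALLERGEN_SYNONYMS):
    """Return (allergen, ingredient) pairs where an ingredient matches a known name."""
    hits = []
    for item in ingredients:
        lowered = item.lower()
        for allergen, names in synonyms.items():
            # Substring match catches derivatives like "sodium caseinate".
            if any(name in lowered for name in names):
                hits.append((allergen, item))
    return hits

label = ["wheat flour", "sodium caseinate", "salt", "soy lecithin"]
print(flag_ingredients(label))  # [('milk', 'sodium caseinate')]
```

The answer is either in the list or it is not. That is the whole reason a rules-constrained lookup is so much safer territory for an AI than open-ended generation.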
The rules in your context file do something else important. When you write “if you are not certain, say so — do not guess,” you are giving the AI explicit permission to say no. To say I don’t know. Without that instruction, AI assistants have a natural pull toward resolution — toward giving you a complete, satisfying answer. That pull is what produces false confidence. Your rules interrupt it. They tell the AI that uncertainty is the correct answer sometimes, and that you prefer an honest “I’m not sure” over a reassuring guess.
That is not a workaround for a broken tool. That is how you use any tool well. A smoke detector with dead batteries will not tell you there is no fire — it will tell you nothing at all. Your job is to keep the batteries fresh. Your context file is the batteries.
What it still cannot do
It cannot account for cross-contamination — the cheese in the olive tub, the butter in the pan. It does not know what happened in the kitchen before your food was plated. It can only work with what is on the label or the published menu. For that gap, you still need the allergy card we covered in our last article — the one that goes to the kitchen and asks a human to be careful.
The two tools together — a personal AI that knows your allergies and an allergy card for restaurants — cover most of the ground. Neither one covers all of it. That is the honest answer.
But here is what I know from seven months of being sick every single day: having a starting point matters. Having something that catches the obvious things — the whey in the crackers, the albumin in the dressing, the casein in the non-dairy creamer — frees up your attention for the harder cases. You stop drowning in the basics. You start being able to think.
The AI in your pocket is not a doctor, not a dietitian, and not a guarantee. But it is available at 10pm in a grocery store when you are standing in the cracker aisle trying to read six-point type. And with the right context file, it already knows what it is looking for.
That is more than I had when I was sick. It is worth something.
STAY SALTY!