
Ssempijja Charles (Charz)

Product Designer & UX Engineer in Kampala, Uganda (he/him)

Personalized Doesn't Feel Personal

I’ve spent hundreds of hours working with large language models, both as a designer and a web dev. I use them as a bridge, a tool to get from a blank page to a first draft. But with every interaction, I feel a growing sense of disconnect. The content is plausible, often impressive, but it is never mine.

The more I analyze this feeling, the more I’m convinced the problem is baked into the very way these models are designed to communicate. The issue isn't just what they write; it's how they're "looking" when they write.

AI, in its current form, is a brilliant observer but a terrible participant. It defaults to writing in the second person, addressing a reader, rather than in the first person, as a writer. This subtle but profound distinction is why its output feels so hollow, and it's the central paradox holding back the promise of truly "personalized" AI.

So far, only one AI tool has given me a personal touch: Jules. When I am writing code with AI, I treat the AI as a coworker sitting across the table from me. Usually, my first prompt when opening a codebase in an AI tool is:

"We are going to work on this project, we will be pair programming, explore the codebase to get project context"

With AI coding tools, Cursor especially, I realized the model treats each interaction as though I am the less knowledgeable one and it is my savior. This usually leads to really poor code that breaks things: it will do everything possible to fix the issue in front of it, even if it means breaking other things in the process, as if fixing that one issue were its apex.

This is not how AI tools should work, for any tool of any type, because it is a tool. When humans made hoes, the hoe did not tell the farmer, "Sit down, you have no idea how to weed." The farmer held the handle and worked with the hoe to dig. That is how AI should be.

AI is a "bicycle for the mind," to borrow Steve Jobs's phrase for the computer: a powerful tool that's a "great starter" but a "bad finisher." It has incredible "smarts" but no "taste."

This "great starter" identity is the root of the problem. AI models, especially those refined with Reinforcement Learning from Human Feedback (RLHF), have been overwhelmingly trained to be assistants. Their entire "personality" is built around being helpful, safe, and subservient. And what is the primary mode of an assistant? It is to address you.

"Here are the steps you can take..." "You might want to consider..." "As you can see, the data suggests..."

This is the second-person voice. It's the voice of a tool assisting a user. It's also why the model's tenses get skewed: it observes what you are doing (present continuous) rather than stating what I see (present). It's an external narrator describing your actions, not an internal author expressing its own.

This creates the fundamental paradox of "personalized AI." We are building tools that ingest all our personal data—our emails, our notes, our messages—in the hopes that they will learn to augment our lives. But they are failing at the most crucial step: adopting our voice.

The AI can summarize my inbox and draft a list of replies for me, but it cannot write a single reply as me. The replies it generates are always to me, the user, not by me, the writer.

For example, a "personalized" AI might generate this:

"You have a meeting at 10 AM. You should probably tell your team that you are running late."

It’s a prompt to me. I then have to take that instruction and write the actual message:

"Hey team, running about 10 minutes late. Kicking things off as soon as I'm on."

The AI is trapped as an intermediary, a bridge I must still cross. It can't make the leap from being my assistant to being my agent. It can’t adopt the "I."
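The gap between those two messages is a framing gap, and it can be sketched as two hypothetical prompts. These are just illustrative strings, not any tool's actual system prompt, and no model or API is called here:

```python
# Default assistant framing: the model addresses the user,
# producing "You have a meeting..." style output written TO me.
assistant_framing = (
    "You are a helpful assistant. The user has a 10 AM meeting and is "
    "running about 10 minutes late. Tell the user what they should do."
)

# Agent framing: the model is asked to adopt the "I" and write
# the message itself, in the user's voice, written AS me.
agent_framing = (
    "Write a short message to the team in the user's first-person "
    "voice: they have a 10 AM meeting and are running about 10 "
    "minutes late. Output only the message they would send."
)
```

The second framing at least asks for the "I," though in my experience the averaged, instructional register still bleeds through.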

AI has "smarts" but no "taste." "Taste" is the very thing the AI’s second-person, assistant-like nature prevents it from ever developing.

What is "taste," anyway? It's the sum of your first-person experiences. It’s your specific, un-averaged, and often irrational set of opinions. It’s the scar tissue from your past failures. It’s your unique sense of humor, your private references, your biases. Taste is the "I."

AI, on the other hand, is an "averaging" machine. It is, as some researchers have called it, a "stochastic parrot," brilliantly mimicking the most plausible patterns from its massive training data. The most plausible voice, by default, is the most average one—the helpful, instructional, and deeply impersonal second-person tone that dominates the internet's "corpus of text."
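The averaging effect is easy to see in miniature. This toy bigram model is nothing like a real LLM, but it shows the same dynamic: when most of the corpus speaks in the instructional "you should" register, the most probable continuation is always the majority pattern, never the one odd first-person sentence:

```python
from collections import Counter

# A tiny corpus dominated by second-person instructional sentences,
# plus one first-person outlier.
corpus = [
    "you should consider the options",
    "you should review the steps",
    "you might consider the data",
    "I deleted the plausible sentence",
]

# Count word-pair (bigram) frequencies across the corpus.
bigrams = Counter()
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        bigrams[(a, b)] += 1

def most_likely_next(word):
    """Return the most frequent word that follows `word` in the corpus."""
    candidates = {b: n for (a, b), n in bigrams.items() if a == word}
    return max(candidates, key=candidates.get)

print(most_likely_next("you"))  # "should" — the majority pattern wins
```

Scaled up by many orders of magnitude, that is the "stochastic parrot" problem: the plausible average drowns out the specific voice.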

It’s trained on a sea of Wikipedia articles, marketing copy, and instruction manuals, all written to a reader. It’s not trained on the private, specific, first-person narrative of my life. Therefore, when I ask it to write for me, it can only produce a polished summary of what everyone else might say, never the specific, jagged-edged thing that I would.

This is why AI is a "bad finisher." The final 10% of any creative work is all taste. It’s the "I." It’s the courage to delete the plausible sentence and replace it with the weird, true one.

As long as AI is trapped in the "you" voice, it will remain a "great starter"—a bicycle for the mind. But it will never be the rider. It will always be the tool, never the author. And the replies it generates, no matter how "personalized" the data, will continue to feel like they are written to us, not by us.

Read this article on a similar topic: https://www.newyorker.com/humor/sketchbook/is-my-toddler-a-stochastic-parrot