
Building Gift Genie

Fullstack

Most AI chatbots feel like talking to a wall. You type something, wait, then get slapped with a block of text. Gift Genie started the same way, but through a series of incremental improvements it became something that feels genuinely responsive, closer to the experience of using the leading provider LLMs like ChatGPT.

Here's what went into it.

The Foundation: Anthropic's Claude API

The backbone is Claude Haiku via the Anthropic SDK. The server acts as a middleman, which matters because the client never touches the API key. Express handles the routing, dotenv loads the credentials, and every message goes through a single /chat endpoint.
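As a rough sketch of that relay pattern (illustrative names, not the actual Gift Genie code), the model client is injected so the handler can be exercised without a real API key or network call:

```javascript
// Minimal sketch of the /chat relay. The API key stays server-side
// (loaded via dotenv into process.env); the browser only ever sends
// and receives plain text.
function makeChatHandler(callClaude) {
  return async function chatHandler(req, res) {
    try {
      const reply = await callClaude(req.body.message);
      res.status(200).json({ reply });
    } catch (err) {
      // Never leak upstream details (or the key) to the client.
      res.status(500).json({ error: "upstream failure" });
    }
  };
}
```

In the real server the injected function would wrap the Anthropic SDK call; keeping it injectable also makes the route trivial to test.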

One thing that catches people out: the Claude API is stateless. It doesn't remember your last message, so it can't build context for its latest response on its own. The server therefore maintains a conversation history array that gets sent with every request. This gives Claude the context it needs for a coherent back-and-forth, while letting us cap the history at 100 messages to keep token costs in check.
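The workaround boils down to a rolling array that gets trimmed before every request. A minimal sketch (the cap of 100 matches the post; the function names are illustrative):

```javascript
const MAX_HISTORY = 100;

// Append the latest turn and drop the oldest ones once we exceed the cap,
// keeping token costs bounded while preserving recent context.
function appendAndTrim(history, role, content) {
  const next = [...history, { role, content }];
  return next.length > MAX_HISTORY
    ? next.slice(next.length - MAX_HISTORY)
    : next;
}
```

The full array is what gets sent as the `messages` payload on each request, which is how a stateless API ends up feeling like a conversation.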

The System Prompt: Giving It a Personality

A blank Claude is helpful but generic. The system prompt transforms it into the Gift Genie, a slightly cheeky character inspired by the genie from Aladdin. But personality isn't enough. The prompt also defines behaviour:

  • Start with constraints (budget, occasion)
  • Play "this or that" to narrow down preferences
  • Keep responses concise, micro-decisions over walls of text
  • Suggest up to 3 gifts at a time with a rating system to refine

The key insight: telling the model how to interact matters more than telling it who to be.
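To make that concrete, here's an illustrative prompt in the same spirit (this is not the actual Gift Genie prompt). Notice that only the first line is about identity; the rest is about interaction:

```javascript
// Behaviour-first system prompt: one line of "who", four lines of "how".
const SYSTEM_PROMPT = [
  "You are the Gift Genie, a slightly cheeky genie who finds perfect gifts.",
  "Always start by asking for constraints: budget and occasion.",
  "Narrow preferences with quick 'this or that' questions.",
  "Keep every reply short; prefer micro-decisions over walls of text.",
  "Suggest at most 3 gifts at a time and ask the user to rate them.",
].join("\n");
```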

Streaming: The Biggest UX Win

This was the single biggest improvement. Before streaming, you'd click the lamp, watch it animate for a few seconds, then get the full response dumped at once, which is confusing as a user because it's not clear what's actually happening in that moment. With streaming, text appears word by word as Claude generates it.

The implementation uses Server-Sent Events (SSE). The server opens a persistent connection, sends each text chunk as a data: event, and closes with a done signal that includes token usage stats. The client reads chunks via a ReadableStream reader and progressively appends them to the conversation state.
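The wire format itself is simple. A sketch of both sides (illustrative helpers, not the actual implementation): the server serializes each chunk as a `data:` event terminated by a blank line, and the client splits the byte stream back into events.

```javascript
// Server side: one SSE event per chunk; the blank line terminates it.
function formatSSE(event) {
  return `data: ${JSON.stringify(event)}\n\n`;
}

// Client side: split on the blank-line delimiter, strip the prefix, parse.
function parseSSE(buffer) {
  return buffer
    .split("\n\n")
    .filter((block) => block.startsWith("data: "))
    .map((block) => JSON.parse(block.slice("data: ".length)));
}
```

In practice the client reads raw bytes from a `ReadableStream` reader, so partial events can arrive mid-chunk; a real parser buffers until it sees the blank-line delimiter before parsing.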

The result: responses feel instant, even when they're long.

The Little Things That Make It Feel Alive

These are small but they compound to make a more engaging and familiar UX:

Typing indicator: Three bouncing dots appear the moment you send a message, before Claude's first word arrives. It fills that awkward gap where you're wondering "did it work?"

Typing cursor: A blinking golden cursor sits at the end of the text while streaming. It disappears when the response completes. Subtle, but it tells you the model is still going.

Stop button: The lamp button swaps to a red "Stop" during streaming. You can cancel mid-response if it's heading in the wrong direction.

Error recovery: If the stream fails partway through, the partial response stays visible with a "Retry" button. No lost context.
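The stop and retry behaviours share one mechanism: cancelling the read loop without discarding what already arrived. A sketch assuming the client wires an AbortController to the Stop button (illustrative, not the real code):

```javascript
// Cancellable stream consumer: the Stop button calls stop(), the read
// loop checks the signal, and whatever text already arrived is kept
// so the Retry flow loses no context.
function makeStreamController() {
  const controller = new AbortController();
  let collected = "";
  async function consume(chunks) {
    for (const chunk of chunks) {
      if (controller.signal.aborted) break; // Stop button pressed
      collected += chunk;
    }
    return collected; // partial response survives a cancel or failure
  }
  return { consume, stop: () => controller.abort() };
}
```

With `fetch`, the same signal can be passed as `{ signal: controller.signal }` so aborting also tears down the HTTP connection, not just the loop.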

Token tracking: After each response, the header shows token usage (input/output/total). Useful for keeping an eye on costs during development.

UI: Dark Theme, Golden Accents, Zero Clutter

The interface is deliberately minimal. Dark background, golden amber accent colour, and one interaction: type and rub the lamp. shadcn components handle the structure (Cards for messages, Textarea for input, Button for the lamp), while Tailwind handles everything else.

The lamp animates on hover (scale + rotate + glow) and shakes while loading. Markdown responses render properly with styled lists, headers, and bold text. Conversation flows newest-first so the latest response is always at the top.

What's Next

The key item in the backlog is tool use: giving Claude the ability to actually search for products via Google Shopping and return real links with prices. That's the difference between "here are some ideas" and "here's the actual gift, click to buy."

Erik Cavan

Applied AI