How I Built an AI Chat Agent for My Portfolio

A concise, dev-friendly story of turning a static resume into a conversational AI agent using RAG.

Published: February 21, 2026
6 min read

I asked a simple question: what if my portfolio and resume could talk to visitors and recruiters in real time? This post shares the why, the architecture, and the exact implementation steps behind my portfolio AI agent.

The Question That Started It

I kept thinking: What if my resume and portfolio could talk? Not just display static text, but actually answer recruiter questions in seconds.

Recruiters usually scan quickly. Great details often get missed. I wanted an interface where someone could ask, "What has Uzair shipped in AI?" and get a precise, contextual answer instantly.

The Problem It Solves

  • Reduces friction for recruiters who want fast, direct answers.
  • Keeps answers consistent with my real experience and projects.
  • Turns a passive portfolio into an interactive technical conversation.

Architecture (RAG, Not Fine-Tuning)

I did not train or fine-tune a custom model. I used RAG (retrieval-augmented generation): store my resume and portfolio knowledge as embeddings, retrieve the most relevant chunks at query time, then ground the LLM response in that retrieved context.
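The whole query-time flow fits in one small function. This is a minimal sketch, not my exact implementation: `answerWithRag`, `retrieve`, and `generate` are hypothetical names, and the dependencies are injected so the flow is easy to test; in the real app they would wrap Upstash Vector and Groq.

```typescript
// Minimal RAG flow: retrieve relevant chunks, then ground the LLM's
// answer in them. `retrieve` and `generate` are injected stand-ins
// for the vector store and LLM clients.
type Chunk = { text: string; score: number };

async function answerWithRag(
  question: string,
  retrieve: (q: string) => Promise<Chunk[]>,
  generate: (prompt: string) => Promise<string>,
): Promise<string> {
  const chunks = await retrieve(question);
  if (chunks.length === 0) {
    // Guardrail: no relevant context means no answer, not a guess.
    return "I can only answer questions about Uzair's work.";
  }
  const context = chunks.map((c) => c.text).join("\n---\n");
  const prompt =
    `Answer using ONLY the context below.\n\nContext:\n${context}\n\nQuestion: ${question}`;
  return generate(prompt);
}
```

Injecting the dependencies also makes it trivial to swap providers later without touching the core flow.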

How Resume Context Became Queryable

Step 1: Structure the knowledge base

I converted resume content into focused markdown sources: about, skills, projects, and FAQ. This improved retrieval quality and made updates easy.

Step 2: Ingest once, query many times

A local ingestion script chunks the text, generates embeddings via Hugging Face, and upserts the vectors into Upstash.
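A sketch of that ingestion step, under assumed details: the ~500-character cap, the `chunkMarkdown`/`ingest` names, and the injected `embed`/`upsert` functions (which would wrap the Hugging Face and Upstash clients) are all illustrative, not my exact script.

```typescript
// One-off ingestion: split each markdown source into paragraph-level
// chunks capped at roughly maxLen characters, then embed and upsert each.
function chunkMarkdown(text: string, maxLen = 500): string[] {
  const chunks: string[] = [];
  let current = "";
  for (const para of text.split(/\n\s*\n/)) {
    const p = para.trim();
    if (!p) continue;
    // Start a new chunk when adding this paragraph would exceed the cap.
    if (current && current.length + p.length + 2 > maxLen) {
      chunks.push(current);
      current = p;
    } else {
      current = current ? `${current}\n\n${p}` : p;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}

async function ingest(
  sources: Record<string, string>, // e.g. { about, skills, projects, faq }
  embed: (text: string) => Promise<number[]>,
  upsert: (id: string, vector: number[], metadata: { text: string }) => Promise<void>,
): Promise<number> {
  let count = 0;
  for (const [name, text] of Object.entries(sources)) {
    for (const [i, chunk] of chunkMarkdown(text).entries()) {
      // Stable ids like "about-0" make re-ingestion an overwrite, not a duplicate.
      await upsert(`${name}-${i}`, await embed(chunk), { text: chunk });
      count++;
    }
  }
  return count;
}
```

Keeping chunks at paragraph granularity is what makes the focused markdown sources pay off: each chunk stays self-contained and retrieval stays precise.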

Step 3: Add strict guardrails

If no relevant context is found, the API refuses the question instead of guessing. This keeps responses factual and reduces hallucinations.
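The refusal logic can be as simple as a similarity cutoff. A minimal sketch, assuming a `selectContext` helper and an illustrative 0.75 threshold (the real value should be tuned against actual queries):

```typescript
// Guardrail: keep only matches that clear a similarity threshold.
// Returning null tells the API route to refuse politely instead of
// passing weak context to the LLM and inviting hallucination.
type Match = { text: string; score: number };

function selectContext(matches: Match[], minScore = 0.75): string | null {
  const relevant = matches.filter((m) => m.score >= minScore);
  return relevant.length > 0
    ? relevant.map((m) => m.text).join("\n---\n")
    : null;
}
```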

Tech Stack I Used

  • Next.js (App Router) for UI + serverless route
  • Groq for low-latency LLM responses
  • Upstash Vector for semantic retrieval
  • Hugging Face all-MiniLM-L6-v2 for embeddings
  • Markdown files as maintainable source of truth
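Here is how those pieces could meet in the serverless route (app/api/chat/route.ts). This is a hedged sketch based on the public REST APIs, not my exact handler: the Upstash Vector query shape, the Groq model name, and the `embedQuestion` helper (which would wrap the Hugging Face inference call) are assumptions.

```typescript
// Assumed helper wrapping the Hugging Face embedding call (not shown).
declare function embedQuestion(q: string): Promise<number[]>;

// Grounding prompt: system message carries the retrieved context.
export function buildMessages(context: string, question: string) {
  return [
    {
      role: "system",
      content:
        "You are Uzair's portfolio assistant. Answer ONLY from this context:\n" + context,
    },
    { role: "user", content: question },
  ];
}

export async function POST(req: Request): Promise<Response> {
  const { question } = await req.json();

  // Semantic retrieval via the Upstash Vector REST API.
  const matches = await fetch(`${process.env.UPSTASH_VECTOR_REST_URL}/query`, {
    method: "POST",
    headers: { Authorization: `Bearer ${process.env.UPSTASH_VECTOR_REST_TOKEN}` },
    body: JSON.stringify({
      vector: await embedQuestion(question),
      topK: 4,
      includeMetadata: true,
    }),
  }).then((r) => r.json());

  const context = (matches.result ?? [])
    .map((m: { metadata?: { text?: string } }) => m.metadata?.text ?? "")
    .join("\n---\n");

  // Groq's OpenAI-compatible chat completions endpoint.
  const completion = await fetch("https://api.groq.com/openai/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.GROQ_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "llama-3.1-8b-instant", // any Groq-hosted chat model works here
      messages: buildMessages(context, question),
    }),
  }).then((r) => r.json());

  return Response.json({ answer: completion.choices[0].message.content });
}
```

In practice the official @upstash/vector and groq-sdk packages are the easier path; the raw fetch calls above just make the data flow explicit.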

What I Learned

Better answers come from better context, not bigger prompts. Small, clean, trustworthy data chunks + strict refusal logic made the assistant feel both smart and reliable.

If you are building your own portfolio agent, start with data quality first. Model quality is important, but retrieval quality decides trust.

Build Yours Next

Interested in more tips and guides? Browse my other blog posts to keep learning and stay inspired!
Have questions or want to work together? Contact me through my portfolio.