Chatbot Architecture & Guardrails
This site includes an AI-powered chatbot designed to answer questions about my experience, skills, and career history. It's intentionally built like a small production system: secure by default, guardrail-driven, and run with operational discipline.
Architecture Overview
- Frontend: Next.js App Router application
- API Layer: Server-side /api/chat route (Node runtime)
- LLM Integration: OpenAI API invoked server-side only
- Knowledge Layer: Resume snippets and curated success stories for grounding
- Source Control: GitLab (gitlab.com/daburritoking_project_group/personal-site)
- CI/CD & Hosting: Vercel (vercel.com/daburritoking-projects/personal-site)
CI/CD Pipeline
- Code is committed to GitLab and merged into main via standard Git workflows
- Vercel is connected directly to the GitLab repository
- Pushes to main trigger production deployments automatically
- Branches and merge requests generate preview deployments (when enabled)
- Build step runs dependency installation and next build before deploy
- Secrets (e.g., OPENAI_API_KEY) are stored in Vercel environment variables — never committed to source control
Request Flow
- User submits a question via the chat interface
- Browser sends a POST request to /api/chat
- Server validates the payload and applies guardrails
- Relevant resume/success story content is selected
- Server invokes OpenAI API using environment-based credentials
- Response is returned to the client and rendered
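The flow above can be sketched as a single App Router route handler. This is a minimal illustration, not the site's actual code: the function names, the message-length limit, the placeholder knowledge snippets, and the model choice are all assumptions.

```typescript
// Illustrative sketch of the /api/chat route handler. All names, limits, and
// the model choice below are assumptions, not the site's actual code.

const MAX_MESSAGE_LENGTH = 2000; // server-side size limit (assumed value)

// Stand-in for the knowledge layer: curated resume/success-story snippets.
const KNOWLEDGE = [
  "Resume snippet (placeholder): experience with Next.js and cloud deployments",
  "Success story (placeholder): shipped a production chatbot with guardrails",
];

// Naive keyword overlap as a placeholder for real snippet selection.
export function selectContext(question: string): string[] {
  const words = question.toLowerCase().split(/\s+/).filter((w) => w.length > 3);
  return KNOWLEDGE.filter((doc) =>
    words.some((w) => doc.toLowerCase().includes(w))
  );
}

// Guardrail: reject malformed or oversized payloads before any model call.
export function validateChatRequest(body: unknown): string | null {
  if (typeof body !== "object" || body === null) return "Invalid payload";
  const msg = (body as { message?: unknown }).message;
  if (typeof msg !== "string" || msg.trim().length === 0) return "Message required";
  if (msg.length > MAX_MESSAGE_LENGTH) return "Message too long";
  return null; // payload is acceptable
}

// App Router-style POST handler (Node runtime); the API key stays server-side.
export async function POST(req: Request): Promise<Response> {
  const body = await req.json().catch(() => null);
  const error = validateChatRequest(body);
  if (error) {
    // Safe error response: no stack traces or internal details leak out.
    return new Response(JSON.stringify({ error }), { status: 400 });
  }
  const { message } = body as { message: string };
  const context = selectContext(message);

  const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // assumed model
      messages: [
        { role: "system", content: `Ground answers in:\n${context.join("\n")}` },
        { role: "user", content: message },
      ],
    }),
  });
  const data = await upstream.json();
  return new Response(
    JSON.stringify({ reply: data.choices?.[0]?.message?.content ?? "" }),
    { headers: { "Content-Type": "application/json" } }
  );
}
```

Validation runs before any upstream call, so malformed or oversized requests never cost an OpenAI invocation.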
Security & Guardrails
Secrets Management
- API keys never exposed to the browser
- Local development uses .env.local (ignored by Git)
- Production secrets stored securely in Vercel
- Credential rotation performed after exposure event
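Concretely, the local setup above usually looks like the fragment below; the placeholder key value is illustrative, and the `.env*.local` ignore pattern is the Next.js default.

```
# .env.local: read by Next.js in development; never committed
OPENAI_API_KEY=sk-placeholder-not-a-real-key

# .gitignore entry that keeps it out of source control
.env*.local
```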
Input Controls
- Server-side validation of request payloads
- Size limits and structured request handling
- Safe error responses without leaking internal details
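The size-limit and safe-error bullets above can be sketched as two small helpers. Both names and the 4 KB limit are hypothetical, chosen for illustration.

```typescript
// Hypothetical request guardrails; the 4 KB body limit is an assumed value.
const MAX_BODY_BYTES = 4096;

// Reject requests whose declared body size is missing, zero, or too large.
export function withinSizeLimit(contentLength: string | null): boolean {
  const n = Number(contentLength ?? "");
  return Number.isInteger(n) && n > 0 && n <= MAX_BODY_BYTES;
}

// Map any internal failure to a generic client-facing message;
// the full error stays in server logs only.
export function toSafeError(err: unknown): { status: number; error: string } {
  console.error("chat route failure:", err); // detail for operators, not users
  return { status: 500, error: "Something went wrong. Please try again." };
}
```

Keeping the client-facing message constant means a probing user learns nothing about the stack from error responses.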
Git Guardrails
- Global gitignore for sensitive paths
- Pre-commit hooks to block secrets
- Clean project baselines to prevent accidental inclusion of local files
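A pre-commit hook like the one above might look like this sketch. The OpenAI-style `sk-` key pattern is an assumption and deliberately not exhaustive; real setups often use a dedicated scanner instead.

```shell
#!/bin/sh
# Hypothetical pre-commit hook: block commits whose staged diff contains
# an OpenAI-style key. Install by copying to .git/hooks/pre-commit and
# marking it executable.
pattern='sk-[A-Za-z0-9_-]{20,}'
if git diff --cached -U0 2>/dev/null | grep -qE "$pattern"; then
  echo "pre-commit: possible API key in staged changes; commit blocked." >&2
  exit 1
fi
```

Because the hook inspects only the staged diff, it catches a key at the moment of commit rather than after it reaches the remote.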
Design Philosophy
AI features should be treated like any production system: secure by default, observable where possible, and protected by guardrails rather than relying on individual discipline alone.
Roadmap
- Additional professional stories
- Improved telemetry and request tracing
- Vector-based retrieval for better grounding
- Enhanced rate limiting and abuse detection
- Expanded architectural transparency