
Vibe Engineering: From Zero to a SaaS ChatGPT App - Part 2

Nicholas Dickey

A compact companion to Part 2 of the Vibe Engineering tutorial series https://youtu.be/J3NVCD6w4IE, showing how we build the ChatVault backend: a fully tested MCP server with PostgreSQL, pgvector semantic search, Drizzle ORM, and strict JSON-RPC behavior — all created through disciplined vibe-engineering prompts.


Part 2 – Building the MCP Backend for ChatVault

Companion Summary for the Vibe Engineering Master-Class

https://youtu.be/J3NVCD6w4IE

In Part 1 of this tutorial series, we created a Skybridge widget and an upstream MCP server for the ChatVault ChatGPT App. That episode established the UI and the core interaction model: a widget rendered inside ChatGPT talking to an MCP server over a clean JSON-RPC interface.

Part 2 is where the magic deepens.
Here we build the backend engine — a full MCP server with real infrastructure, real persistence, embeddings, vector search, and end-to-end testing. This is where the vibe engineering method proves its discipline: keep everything small, verifiable, and safe, while still moving fast.

This article summarizes the prompts, design principles, and engineering techniques used in Part 2 of the video master-class.

The Goal of Part 2

By the end of this episode, you will have built a database-backed MCP server that powers the ChatVault ChatGPT App:

  • Stores chats
  • Loads chats with pagination
  • Performs semantic vector search across previous conversations
  • Enforces strict JSON-RPC and protocol correctness
  • Runs on real PostgreSQL (local & Neon)
  • Includes a complete end-to-end test harness
  • Works seamlessly with the Part 1 widget via Skybridge

This is the backend half of a fully-functional SaaS ChatGPT App.

The Vibe Engineering Mindset

Before diving into prompts, Part 2 reinforces several principles that shape the entire project:

1. Verify, don’t guess

Every decision is anchored in real experiments:
Run the code, hit the real database, check the MCP spec, try the Apps SDK in practice.

2. Minimize scope and keep components bite-sized

The project is sliced into prompts so small that each can be completed in 2–3 hours.
This is central to vibe engineering: modular, finishable units that never overload the agent’s “circuits.”

3. Treat safety and rigor as first-class

Live AI is a volcano — beautiful but dangerous.
We apply engineering discipline: bounded behaviors, strict error handling, and oversized test coverage.

4. Separate concerns

  • Schema lives in its own module
  • MCP server code stands alone
  • Vector search is isolated
  • Tests run the full stack but keep clean fixtures

This structure keeps the AI assistant honest and prevents accidental entanglement.

Part 2 Prompt Overview

The work is driven through a sequence of prompts — generic prompts (0–5) and ChatVault-specific prompts (6–10). Below is a high-level summary of what each accomplishes.

Prompt 0 — Neon Database Setup

You configure PostgreSQL in Neon (for dev & prod), enable pgvector, and collect connection strings.
Vector search becomes a first-class part of the architecture from the start.

Prompt 1 — Node.js Project with Drizzle & Apps SDK

We bootstrap the backend MCP server project using:

  • TypeScript
  • Drizzle ORM
  • Apps SDK helpers
  • PostgreSQL + pgvector libraries

Configuration is strict and minimalistic — only what is needed.

Prompt 2 — Move to a Monorepo

Part 1 and Part 2 are unified under a single root repository:

chatvault-tutorial/
  part1/
  part2/
  prompts/

Git history from Part 1 is preserved. Pre-commit hooks are removed to avoid friction.

This step creates a clean foundation for a multi-component SaaS app.

Prompt 3 — Create the MCP HTTP Streaming Server

A fully compliant MCP server is implemented:

✔ JSON-RPC 2.0 request/response
✔ POST /mcp only (no SSE)
✔ Session management via mcp-session-id
✔ Proper handling of notifications (204 No Content)
✔ Correct CORS preflight
✔ One JSON object per response — no NDJSON
✔ Rich logging for observability

This forms the core protocol engine used by ChatGPT.
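The notification rule above (a JSON-RPC message without an id receives 204 No Content and no body) can be sketched as a pure function. The names and shapes here are illustrative, not the actual ChatVault code:

```typescript
// JSON-RPC 2.0: a message without an "id" member is a notification and
// must not receive a response body; requests get exactly one JSON object.

interface JsonRpcMessage {
  jsonrpc: "2.0";
  method: string;
  params?: unknown;
  id?: string | number;
}

interface HttpReply {
  status: number;
  body?: string; // exactly one JSON object when present, never NDJSON
}

function replyTo(msg: JsonRpcMessage, result: unknown): HttpReply {
  // Notifications carry no "id": acknowledge with 204 and an empty body.
  if (msg.id === undefined) {
    return { status: 204 };
  }
  // Requests get a single JSON-RPC response object echoing the same id.
  return {
    status: 200,
    body: JSON.stringify({ jsonrpc: "2.0", id: msg.id, result }),
  };
}
```

The real server layers session handling and CORS on top of this dispatch, but the notification/request distinction is the rule ChatGPT's client depends on.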

Prompt 4 — Drizzle Initialization and pgvector

You wire in database connectivity, migrations, and pgvector support:

✔ .env ignored by git
✔ Connection tested at startup
✔ Initial schema file created
✔ Migrations run cleanly
✔ pgvector verified before use

The project is now database-ready.

Prompt 5 — Build the Generic Test Framework

We create a real test environment using:

  • Docker-based PostgreSQL (with pgvector)
  • Jest for e2e tests
  • MCP client utilities
  • Server lifecycle utilities
  • Test DB migrate → truncate → clean lifecycle

Tests validate:

  • MCP handshake
  • Session IDs
  • JSON-RPC correctness
  • tools/list + resources/list

The backend now has a reproducible testing ecosystem.
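The truncate step of that lifecycle can be sketched with a minimal client interface; the SqlClient shape and table names are assumptions for illustration, and the real utilities live in the test harness:

```typescript
// Minimal shape of a SQL client (a pg Pool or Client would satisfy it).
interface SqlClient {
  query(sql: string): Promise<unknown>;
}

// Reset the given tables between tests so every case starts from a clean
// fixture; migrations run once before the suite, truncation runs per test.
async function truncateAll(db: SqlClient, tables: string[]): Promise<void> {
  for (const t of tables) {
    await db.query(`TRUNCATE TABLE "${t}" RESTART IDENTITY CASCADE`);
  }
}
```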

ChatVault-Specific Prompts (6–10)

With the generic foundation in place, we implement ChatVault’s real features.

Prompt 6 — Implement saveChat

You define the chat schema:

  • id
  • userId
  • title
  • timestamp
  • turns (JSONB)
  • embedding (vector)
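A schema with these fields might look like the following in Drizzle, assuming drizzle-orm's pg-core helpers, its vector() column type for pgvector, and OpenAI's 1536-dimension embeddings; the exact table and column names come from the video:

```typescript
// Sketch of the chats table with a pgvector embedding column.
// Names and dimensions are illustrative assumptions, not the exact schema.
import { pgTable, text, timestamp, jsonb, vector } from "drizzle-orm/pg-core";

export const chats = pgTable("chats", {
  id: text("id").primaryKey(),
  userId: text("user_id").notNull(),
  title: text("title").notNull(),
  timestamp: timestamp("timestamp").defaultNow().notNull(),
  turns: jsonb("turns").notNull(),                       // full conversation as JSONB
  embedding: vector("embedding", { dimensions: 1536 }),  // pgvector column
});
```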

The tool:

  1. Concatenates all chat turns
  2. Generates an OpenAI embedding
  3. Stores everything in PostgreSQL
  4. Returns the new chat ID

Non-negotiables enforce correctness and disciplined error handling.
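Step 1 of that pipeline, flattening the turns into a single embedding input, might look like this; the turn shape and role labels are assumptions, and the embedding and database calls are elided:

```typescript
// Illustrative turn shape; the real schema stores turns as JSONB.
interface ChatTurn {
  role: "user" | "assistant";
  content: string;
}

// Step 1: concatenate the title and all turns into one text blob,
// which is then sent to the embeddings API (step 2) and stored (step 3).
function embeddingInput(title: string, turns: ChatTurn[]): string {
  const body = turns.map((t) => `${t.role}: ${t.content}`).join("\n");
  return `${title}\n${body}`;
}
```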

Prompt 7 — Implement loadChats (Pagination)

A paginated loader that exactly matches the Part 1 widget's expected response format, including:

{ _meta: {...}, chats: [...], pagination: {...} }

This preserves backwards compatibility and allows ChatVault Part 1 to plug into a real backend without a single UI change.
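A minimal sketch of that envelope: only the three-key shape ({ _meta, chats, pagination }) is taken from the article, while the fields inside each key are illustrative assumptions:

```typescript
// Illustrative summary of a stored chat as the widget lists it.
interface ChatSummary {
  id: string;
  title: string;
}

// Build one page of results in the { _meta, chats, pagination } envelope
// the Part 1 widget expects. Inner field names are assumptions.
function loadChatsPage(all: ChatSummary[], page: number, pageSize: number) {
  const start = (page - 1) * pageSize;
  return {
    _meta: { tool: "loadChats" },               // widget metadata (assumed shape)
    chats: all.slice(start, start + pageSize),  // current page of chats
    pagination: {
      page,
      pageSize,
      total: all.length,
      totalPages: Math.ceil(all.length / pageSize),
    },
  };
}
```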

Prompt 8 — Implement searchChats (Vector Search)

Semantic search arrives:

  1. Generate embedding for query
  2. Perform cosine similarity search (pgvector's embedding <=> query_vector; <-> is Euclidean distance)
  3. Sort by similarity
  4. Return results with metadata

Search honors:

  • Required userId
  • Required query
  • Results must belong to the user
  • Chats without embeddings are excluded

This is where ChatVault becomes a knowledge vault.
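The ranking that pgvector performs server-side (cosine distance, lower meaning more similar) can be illustrated in pure TypeScript; in the real tool this runs inside PostgreSQL via an ORDER BY on the embedding column, and the row shape here is an assumption:

```typescript
// Cosine distance, matching pgvector's <=> operator: 0 for identical
// directions, 1 for orthogonal vectors, 2 for opposite vectors.
function cosineDistance(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return 1 - dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank stored chats by similarity to the query embedding, excluding rows
// without embeddings, as the search contract above requires.
function rank(query: number[], rows: { id: string; embedding: number[] | null }[]) {
  return rows
    .filter((r) => r.embedding !== null)
    .map((r) => ({ id: r.id, distance: cosineDistance(query, r.embedding!) }))
    .sort((x, y) => x.distance - y.distance);
}
```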

Prompt 9 — Test Everything End-to-End

You add tests for:

  • saveChat
  • loadChats
  • searchChats
  • Error cases
  • Cross-tool workflows (save → load → search)
  • Database state verification

Tests build confidence and eliminate the subtle regressions that happen when vibe engineering relies too much on intuition.

Prompt 10 — Integrate With ChatGPT

At this stage:

  • MCP server is running locally
  • ngrok exposes it
  • Production DB is ready
  • All tools work inside ChatGPT
  • Logs confirm correct DB operations
  • The Part 1 widget can now load from the real backend

This completes the backend half of the SaaS ChatGPT App.

What Part 2 Demonstrates

Part 2 is a case study in vibe engineering applied to backend infrastructure:

✔ How to tightly guide an LLM through complex backend development

Without falling into hallucinations, protocol drift, or code rot.

✔ How to enforce quality through prompts, not post-hoc debugging

Each prompt encodes constraints and non-negotiables that shape the output.

✔ How to replace “guessing” with controlled experiments

Everything is verified — from pgvector availability to JSON-RPC semantics.

✔ How small, isolated steps compound into a production-grade system

This is the essence of vibe engineering.

What’s Next — Part 3

Part 3 will layer on the SaaS system:

  • User accounts
  • Billing
  • Resource usage
  • Admin dashboards
  • Multi-tenant deployments

This transforms ChatVault from an app into a business — powered by Findexar.

Conclusion

Part 2 is the backbone of the entire tutorial series.
It teaches you how to build a disciplined, production-ready MCP backend using a vibe-engineering process: small steps, strict constraints, relentless verification, and an agent kept on a tight leash.

By the end of Part 2, ChatVault is no longer a demo.
It is a real ChatGPT App with a Skybridge widget, a backend database, vector search, and a fully tested MCP server — ready for SaaS integration.