TL;DR: CI/CD is an automated pipeline that runs every time you push code. CI (Continuous Integration) runs your tests to catch broken code before it ships. CD (Continuous Deployment) automatically pushes passing code to production. Platforms like Vercel and Netlify give you the CD part for free. GitHub Actions gives you both. You write a YAML config file that tells the pipeline what to do — and AI can write that config for you.

What CI/CD Actually Is

CI/CD stands for Continuous Integration / Continuous Deployment. The name sounds more complicated than the concept actually is.

Before CI/CD, shipping software worked like this: developers wrote code for weeks, then manually tested everything, then manually uploaded files to a server, then crossed their fingers. It was slow, error-prone, and stressful.

CI/CD flips that process. Instead of shipping in big batches, you push small changes constantly — and an automated system handles the testing and deployment every single time. The "continuous" part just means "every push triggers the process automatically."

CI — Continuous Integration

Every time code is pushed, automatically run tests to catch bugs. If tests fail, block the merge. Keeps the main branch always working.

CD — Continuous Deployment

Every time tests pass, automatically deploy to production. No manual "upload files to server" step. Push code, ship code.

For vibe coders, the practical translation is: you push your AI-generated code to GitHub, a pipeline wakes up, tests your code, builds it, and puts it on the internet — all without you clicking anything.

The Pipeline: What Happens When You Push

Here is the exact sequence of events from "git push" to "live site."

  1. You push code to GitHub. This is the trigger. The moment code hits the remote branch, the pipeline starts.
  2. The CI runner spins up. A fresh virtual machine (a cloud computer no one else is using) boots up. It has nothing on it except the operating system.
  3. Your code is checked out. The runner downloads your repository.
  4. Dependencies are installed. The runner runs npm install or pip install or whatever your project needs.
  5. Tests run. Your test suite executes. If any test fails, the pipeline stops here and marks the build red. Nothing ships.
  6. The project builds. npm run build compiles your code, bundles assets, generates output files.
  7. Deployment happens. The built files are sent to your hosting platform — Vercel, Netlify, a VPS, wherever.
  8. You get a notification. Green checkmark = success. Red X = something broke at one of the steps above.

The whole thing typically takes 2-5 minutes. You can watch it happen in real time in your GitHub repository under the "Actions" tab.

Why AI-Generated Code Especially Needs This

This is the section most CI/CD guides skip, but it is the most important one for vibe coders.

When you write code yourself, you have a mental model of how everything connects. When AI writes code, you often do not. That is fine — you do not need to. But it creates a specific risk: the AI might generate code that works for the feature you asked for but quietly breaks something else you did not mention.

This is called a regression — a change that fixes one thing and breaks another. AI-generated code is especially prone to regressions because the AI lacks context about everything your app does; it optimizes for the specific prompt you gave it.

Automated tests catch regressions before they reach users. Here is how that plays out:

  • You ask AI to "add a discount code field to the checkout form."
  • AI adds the field and rewrites part of the checkout logic.
  • The discount field works. But the AI's rewrite broke how tax is calculated.
  • Without tests: this reaches production. Users get wrong tax amounts. You find out via a support ticket.
  • With tests: the CI pipeline runs your checkout tests, sees that tax calculation now returns wrong values, marks the build red. Nothing ships. You ask AI to fix the tax bug before merging.

Automated testing is the safety net that makes AI-assisted development sustainable at scale. The faster you ship with AI, the more important that net becomes.

The Vibe Coding Risk

AI moves fast. You can add a feature in 10 minutes. Without a test pipeline, you can also break something in 10 minutes and not find out for days. CI catches that gap.

GitHub Actions

GitHub Actions is the most common CI/CD tool for small teams and solo developers. It lives inside GitHub, so there is nothing extra to sign up for. You define your pipeline in a YAML file, push it to your repo, and GitHub runs it on every push.

GitHub Actions is free for public repos and gives 2,000 free minutes per month for private repos. Most projects stay under that.

Vercel Auto-Deploy

When you connect a GitHub repo to Vercel, it automatically deploys on every push to the main branch. It also creates preview deployments for every pull request — a live URL where you can see your changes before merging. This is the CD part, built in, zero configuration required.

Vercel does not run your custom tests — it just builds and deploys. For the full CI+CD pipeline, combine Vercel deploy with a GitHub Actions workflow that runs tests first.
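One common way to wire that up, sketched below: a test job gates a deploy job through the needs keyword. The deploy step here is a placeholder; with Vercel's Git integration you would typically let Vercel deploy on push and instead mark the test job as a required status check for the branch.

```yaml
# Sketch: a test job gates a deploy job via `needs`.
# The deploy command is a placeholder -- adapt it to your platform.
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm test
  deploy:
    needs: test              # only runs if the test job succeeded
    runs-on: ubuntu-latest
    steps:
      - run: echo "trigger your deploy here"
```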

Netlify Auto-Deploy

Same idea as Vercel. Connect your repo, Netlify deploys on every push. It also has branch deploys and deploy previews. Great for static sites and JAMstack projects. See Vercel vs. Netlify for a side-by-side comparison.

Other Options Worth Knowing

  • CircleCI — Popular at larger companies. More configuration options than GitHub Actions, steeper learning curve.
  • GitLab CI — If you use GitLab instead of GitHub, it has a built-in CI system similar to GitHub Actions.
  • Railway / Render — Platform-as-a-service tools with built-in deploy pipelines, similar to Vercel but for backend apps.

For most vibe coders building web projects: GitHub Actions for tests + Vercel or Netlify for deployment is the standard stack.

What a YAML Config File Does

GitHub Actions reads a YAML file you place in the .github/workflows/ directory of your repo. YAML is just a configuration format — it uses indentation to define structure. You do not need to understand YAML deeply. You need to know what the pieces are.

Here is a real GitHub Actions workflow for a React project:

name: CI Pipeline

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  test-and-build:
    runs-on: ubuntu-latest

    steps:
      - name: Check out code
        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Run tests
        run: npm test

      - name: Build project
        run: npm run build

Here is what each section means:

  • name: What this workflow is called. Shows up in the GitHub Actions UI.
  • on: When to run this workflow. This one runs on any push to main, and on any pull request targeting main.
  • jobs: What to actually do. A workflow can have multiple jobs that run in parallel or in sequence.
  • runs-on: What type of machine to use. ubuntu-latest is the standard choice.
  • steps: The ordered list of things to do. Each step either uses a pre-built action (uses:) or runs a shell command (run:).

The indentation matters in YAML. Two spaces for each level. GitHub Actions will reject the file if the indentation is wrong. When in doubt, paste the file into a YAML validator or ask AI to check it.

Prompt I Would Type

Generate a GitHub Actions workflow file for my project:
- Framework: [Next.js / React / Vue / etc.]
- Tests: [Vitest / Jest / pytest / etc.]
- Hosting: [Vercel / Netlify / Railway / etc.]
- Node version: [20 / 22 / etc.]

It should run on push to main and on pull requests.
Run tests first. If tests pass, trigger deployment.
Add caching for node_modules.

Common CI/CD Failures and What They Mean

When the pipeline goes red, the error is always in the logs. GitHub Actions shows you the exact step that failed and the exact error message. Here are the most common failures and what to do.

"Tests failed"

A test in your test suite returned a failure. This is the pipeline working correctly — it caught something before it shipped. Read the test output to see which test failed and what the expected vs. actual values were. This is exactly what you want CI to catch.

"Cannot find module" / "Package not found"

A dependency is not installed or is not listed in package.json. This happens when a package is installed on your machine but never committed to package.json (for example, installed globally), or when package.json and package-lock.json are out of sync. Run npm ci locally and see if you get the same error.

"Environment variable undefined"

Your code uses an environment variable (like an API key or database URL) that exists on your machine but is not configured in GitHub. Go to your repo Settings → Secrets and Variables → Actions, and add the missing secret. Reference it in your YAML as ${{ secrets.YOUR_SECRET_NAME }}.
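As a sketch, once the secret exists in repo settings, exposing it to a step looks like this (DATABASE_URL is a placeholder name; use whatever your code reads):

```yaml
# Exposing a GitHub secret to a step as an environment variable.
# DATABASE_URL is a placeholder -- use the name your code expects.
- name: Run tests
  run: npm test
  env:
    DATABASE_URL: ${{ secrets.DATABASE_URL }}
```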

"Build failed" / "Out of memory"

The build step failed or ran out of resources. For most projects the cause is a real build error in your code or a misbehaving dependency rather than memory. Check the build output — it usually points to the specific file and line that caused the failure.

"YAML parsing error" / "Invalid workflow"

Your .github/workflows/ file has a syntax problem — usually wrong indentation or a missing colon. Paste the file into yamllint.com or ask AI to find the formatting error.
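The failure almost always comes down to nesting. A minimal illustration of the kind of mistake involved (annotated fragments, not a complete workflow):

```yaml
# Wrong: runs-on is not nested under the job, so the workflow is invalid.
jobs:
  test:
  runs-on: ubuntu-latest

# Right: each level is indented two spaces deeper than its parent.
jobs:
  test:
    runs-on: ubuntu-latest
```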

Pipeline times out

The workflow ran for too long (GitHub Actions kills any job after 6 hours, and free-tier runners can be slow). The usual cause is a test that hangs instead of finishing. Check whether any test is waiting on a network call that never resolves, or starts a server that never shuts down.
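A cheap safeguard, assuming nothing in your workflow legitimately needs longer: cap the job with timeout-minutes so a hung test fails fast instead of burning your free minutes.

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    timeout-minutes: 10   # fail the job if it runs longer than 10 minutes
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm test
```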

Debug Tip

When CI fails, click on the red X next to the commit in GitHub, then click "Details." This takes you to the exact step that failed with the full log output. Copy the error message and paste it to AI with: "This is my GitHub Actions error, here is my workflow file, what is wrong?"

What AI Gets Wrong About CI/CD

Generating YAML with outdated action versions

GitHub Actions uses versioned actions like actions/checkout@v4. AI trained on older data often generates workflows that pin deprecated versions such as @v2 or @v3. This causes warnings or failures. Always check that the action versions in AI-generated YAML are current — look at the action's GitHub page for the latest version tag.

Forgetting to add secrets

AI generates a workflow that references ${{ secrets.DATABASE_URL }} but does not remind you that you need to add that secret in your GitHub repo settings. The workflow will fail with a confusing error until you add the secret. After AI generates a workflow, ask: "What secrets do I need to configure in GitHub for this workflow to run?"

Skipping test setup

AI sometimes generates workflows that only build and deploy — no test step. That gives you the CD part but skips the CI part entirely. A pipeline without tests is just an automated deploy button. Always check that the generated workflow includes a test step before the build step.

Recommending CI for everything

AI will suggest setting up CI/CD for a static landing page you built in an afternoon. That is overkill. More on this in the next section.

Not caching dependencies

Without caching, every pipeline run re-downloads all your node_modules. That adds 1-3 minutes to every run and wastes your free minutes. AI-generated workflows often omit the cache configuration. Ask specifically: "Add caching for node_modules to speed up the workflow."
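Two ways to add it, both sketches: the setup-node shortcut (which caches npm's download cache, as in the workflow shown earlier) or an explicit actions/cache step keyed on your lockfile.

```yaml
# Option 1: let setup-node manage the npm cache (simplest).
- uses: actions/setup-node@v4
  with:
    node-version: '20'
    cache: 'npm'

# Option 2: an explicit cache step keyed on the lockfile.
- uses: actions/cache@v4
  with:
    path: ~/.npm
    key: npm-${{ hashFiles('package-lock.json') }}
```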

Connecting Tests to Your Pipeline

CI is only as useful as your tests. If you have no tests, the CI pipeline has nothing to check — it just builds and deploys, which Vercel already does for free.

For JavaScript and TypeScript projects, Vitest is the modern testing tool of choice. It runs fast, integrates with Vite (which most AI-generated projects use), and produces clean output that is easy to read in CI logs.

The most valuable tests to write for AI-generated projects are not unit tests on individual functions — they are integration tests on the behaviors that would hurt if they broke:

  • Does the checkout flow complete without errors?
  • Does the login form reject bad credentials?
  • Does the API return the right shape of data?
  • Does the form validation block empty required fields?

You do not need 100% test coverage. You need tests on the things that would cause you real problems if they silently broke.

Prompt I Would Type

I have a [React / Next.js / Vue] app with no tests.
I use Vitest. Write tests for the most critical user flows:
- [describe your main features: checkout, login, form submission, etc.]

Focus on integration tests that would catch regressions
if AI changed the related code. Keep the tests simple and readable.

CI/CD and Docker

If your project uses Docker, the CI pipeline can build and push your Docker image as part of the workflow. Instead of deploying built files, the pipeline packages your app into a container image and pushes it to a registry (Docker Hub, GitHub Container Registry, etc.). The hosting platform then pulls that image and runs it.

This is more complex than the Vercel/Netlify deploy flow, but it is how most production backend APIs ship. AI can generate the full workflow including Docker build steps — just ask specifically for it.
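As a rough sketch of what those Docker steps look like in a workflow, using Docker's official actions and pushing to GitHub Container Registry (the registry and image tag are assumptions to adapt):

```yaml
# Sketch: build the image and push it to GitHub Container Registry.
# Registry and tag are placeholders -- adapt them to your setup.
- uses: docker/login-action@v3
  with:
    registry: ghcr.io
    username: ${{ github.actor }}
    password: ${{ secrets.GITHUB_TOKEN }}
- uses: docker/build-push-action@v6
  with:
    context: .
    push: true
    tags: ghcr.io/${{ github.repository }}:latest
```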

When You Actually Need CI/CD vs. Just Deploying Manually

Not every project needs a full CI/CD pipeline. Here is a practical decision framework.

Skip CI/CD when:

  • You are building a personal portfolio or static site with no user data
  • You are the only person pushing code and you test manually before deploying
  • The project is a one-time build that will not be updated frequently
  • You are in early exploration — still figuring out what you are even building

Vercel and Netlify auto-deploy is enough for these cases. You get automatic deployment without the overhead of writing workflow files.

Set up CI/CD when:

  • Multiple people are pushing code to the same repository
  • The app handles real user data, payments, or authentication
  • You have shipped bugs to production that tests would have caught
  • You are iterating fast with AI and need a safety net for regressions
  • Your project has grown past "prototype" and into something people depend on

The rule of thumb: set up CI when the cost of a production bug exceeds the cost of writing a workflow file. For most real apps, that threshold hits earlier than you expect.

Just Auto-Deploy

Vercel / Netlify connected to GitHub. Push to main = live. No YAML file needed. Good for: portfolios, landing pages, early prototypes.

Full CI/CD Pipeline

GitHub Actions running tests + build, then triggering deploy. Catches bugs before they ship. Good for: apps with users, payment flows, multiple contributors.

What to Learn Next

  • What Is GitHub? — CI/CD lives inside your GitHub repository. Start here if you are new to GitHub.
  • What Is GitHub Actions? — A deeper look at the specific tool that powers most CI pipelines.
  • What Is Vitest? — The fast testing framework that plugs into your CI pipeline.
  • What Is Docker? — How containerization fits into more advanced CI/CD pipelines.
  • Vercel vs. Netlify — Both include auto-deploy CD. Pick the right one for your project.

Next Step

If you already deploy to Vercel or Netlify, you have CD. Add CI by creating a .github/workflows/ci.yml file with a test step. Ask AI to generate it for your stack. The first time a test catches a bug before it ships, you will understand exactly why this is worth the 20 minutes it took to set up.

FAQ

What do CI and CD stand for?

CI stands for Continuous Integration — automatically testing and merging code changes. CD stands for Continuous Delivery or Continuous Deployment — automatically releasing tested code to production. Together they form a pipeline that runs every time you push code, so bugs get caught before they reach users and new features ship without manual steps.

Do I need CI/CD for my project?

If you deploy to Vercel or Netlify, you already have the CD part — they auto-deploy on push. You need explicit CI (automated tests) once you have more than one contributor, when bugs are reaching production, or when your app handles real user data. Solo hobby projects and static portfolios can skip it entirely.

What does the YAML config file do?

A YAML file is a configuration file that tells GitHub Actions what to do when you push code. It lives in .github/workflows/ and defines the steps: install dependencies, run tests, build the project, deploy. You do not need to write it from scratch — AI can generate the correct YAML for your specific stack and hosting platform.

Why does my build pass locally but fail in CI?

The most common causes are missing environment variables (secrets not configured in GitHub repo settings), a different Node.js or Python version in CI versus local, or a dependency that works locally but behaves differently in a fresh install. Check the error log in the GitHub Actions tab for the exact failing step, compare the CI environment to your local setup, and make sure all secrets are added under Settings → Secrets and Variables → Actions.

Is GitHub Actions free?

GitHub Actions is free for public repositories with no limits. For private repositories, you get 2,000 minutes per month on the free plan. Most small projects stay well under that limit — a typical workflow run uses 2-5 minutes. If you exceed the limit, GitHub will pause your workflows until the next month or you upgrade to a paid plan.

Set Up Your First CI Pipeline

The best way to understand CI/CD is to watch it catch a real bug. Connect your GitHub repo to Vercel or Netlify for auto-deploy, add a GitHub Actions workflow with a test step, then deliberately break a test and push. Watch the pipeline block the deploy. That is the entire concept, made concrete.

Use the prompts in this article to generate the YAML and tests. The setup takes under 30 minutes for most projects.

Start With GitHub Actions →