TL;DR

Edge computing runs your code in data centers scattered around the world — not just one central server. Users get responses from servers physically close to them, which means faster load times. Your AI-generated Next.js middleware already uses it. The catch: edge runtimes are stripped-down — no Prisma, no filesystem, limited CPU time.

What Is Edge Computing? (The Plain English Version)

When you deploy a typical web app, your code runs on a server. That server is in a specific building, in a specific city. Most cloud providers default to something like us-east-1 — Amazon's data center in Northern Virginia.

That's fine when your users are in New York or Chicago. But what happens when someone in Tokyo hits your site? Their request has to travel from Japan, across the Pacific Ocean, through fiber optic cables to Virginia, get processed, then travel all the way back. We're talking 150–250 milliseconds, just for the round trip. That's before your server even runs any code.

Edge computing solves this by running your code in dozens — sometimes hundreds — of locations simultaneously. Instead of one server in Virginia, your code runs in a data center in Tokyo. And in London. And in São Paulo. And in Sydney. When someone in Tokyo visits your app, they hit the Tokyo server. They get a response in 10–20ms instead of 200ms. That's not a small improvement — it's a 10–20x difference in speed.

The term "edge" comes from network topology. The center is your origin server. The edge is where your network meets the world — close to the end user. Think of it as pushing your code to the perimeter, where your users actually are.

The Numbers

The speed of light through fiber is roughly 200,000 km/second. Virginia to Tokyo is about 11,000 km. At best, that's a 55ms one-way trip — 110ms round trip — before your server does anything. Real-world latency is typically 150–250ms due to routing, hops, and processing. Edge reduces this to 5–20ms.

Cloud vs. Edge: A Construction Analogy

Here's a way to think about it that makes immediate sense if you've spent time in construction.

Imagine you're running a construction supply company. You have one massive warehouse in Columbus, Ohio. Every contractor in America orders from you. When a crew in Phoenix needs lumber, they place an order, and a truck drives from Columbus to Phoenix. That's a two-day delivery. When a crew in Miami needs it, same thing. Columbus to Miami. Two days.

You're spending a lot on shipping. Your customers are frustrated. They can't move fast because they're always waiting on the warehouse.

Now imagine you open regional supply depots. One in Phoenix. One in Miami. One in Seattle. One in Atlanta. Contractors order from the nearest depot. Phoenix crews get same-day delivery. Miami crews get same-day delivery. Your Columbus warehouse still exists — it's where you stock everything and handle the big orders — but the daily fast-moving stuff lives close to the customer.

That's edge computing. Your origin server in Virginia is the Columbus warehouse. The edge locations are the regional depots. Most of your users are getting served from the depot closest to them, not waiting on the cross-country truck.

The difference is felt every single time someone uses your app. Not just on first load — on every API call, every page transition, every real-time update. Shaving 200ms off every interaction changes how an app feels. It goes from "fine" to "fast."

When Your AI Uses the Edge Without Telling You

Here's something that surprises a lot of vibe coders: if you've been building Next.js apps with Claude or ChatGPT, your AI has almost certainly written edge code for you already. You just didn't know it.

This happens most often in three situations:

1. Next.js Middleware

If you asked your AI to add authentication, redirects, geolocation, or A/B testing to your Next.js app, it probably created a middleware.ts file in your project root. Here's the thing: Next.js middleware runs at the edge by default. Every time.

You didn't have to configure anything. Vercel (the default hosting for Next.js) automatically runs middleware in edge locations worldwide. That middleware file your AI wrote? It's running in Tokyo, London, and São Paulo right now.

middleware.ts
// This file runs at the EDGE — not on your origin server
// Next.js middleware is edge by default
import { NextResponse } from 'next/server'
import type { NextRequest } from 'next/server'

export function middleware(request: NextRequest) {
  // This runs in a data center near your user
  const country = request.geo?.country || 'US'
  
  if (country === 'DE') {
    return NextResponse.redirect(new URL('/de', request.url))
  }
  
  return NextResponse.next()
}

export const config = {
  matcher: '/((?!api|_next/static|favicon.ico).*)',
}

This code doesn't run in Virginia when someone in Germany visits your site. It runs in Frankfurt. The request.geo object — that country detection — is available because it's running at the edge, where Vercel has geolocation data baked in.

2. Edge Route Handlers

Your AI might have written an API route and added one line that you might have scrolled past:

app/api/check-auth/route.ts
export const runtime = 'edge'  // ← THIS LINE changes everything

export async function GET(request: Request) {
  // Now this route runs at edge locations worldwide
  const token = request.headers.get('authorization')
  
  if (!token) {
    return new Response('Unauthorized', { status: 401 })
  }
  
  // A real route would verify the token's signature here:
  // fast, stateless work that edge runtimes handle well
  return new Response(JSON.stringify({ valid: true }), {
    headers: { 'Content-Type': 'application/json' }
  })
}

That single line — export const runtime = 'edge' — opts this entire route into edge execution. It's no longer running on a Node.js server in Virginia. It's running in ~70 edge locations worldwide (see the comparison table below) with near-zero cold start time.

3. Cloudflare Workers

If you asked Claude or GPT to add geolocation, rate limiting, or smart redirects and you're not on Vercel, it may have suggested Cloudflare Workers. Workers are edge functions that run on Cloudflare's network — about 300 locations, some of the most distributed infrastructure on the planet.

A basic Worker looks like this:

worker.js (Cloudflare Worker)
export default {
  async fetch(request, env) {
    const country = request.cf.country
    
    // Show different content based on user's country
    // This runs in the data center nearest to them
    if (country === 'CA') {
      return Response.redirect('https://yoursite.com/ca/', 302)
    }
    
    return fetch(request)
  }
}

If you've seen code like this and wondered "where does this actually run?" — it runs in the Cloudflare data center closest to whoever hits that URL. Not your server. Not AWS. Cloudflare's edge.

The Error That Sends People to Google

If you've seen the error "Dynamic server usage: cookies() was used", that's Next.js telling you it tried to pre-render a page ahead of time, but the page reads per-request data (cookies or headers) that only exists once a real request arrives. It often shows up while people are experimenting with edge, but the root cause is static pre-rendering, not the edge runtime itself. The usual fix is to mark the route dynamic with export const dynamic = 'force-dynamic' (sketched below), or to move the cookie logic into middleware, which always runs per-request.
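Here's what that fix looks like in practice. A minimal sketch — the file path and session cookie name are made up for illustration, but export const dynamic = 'force-dynamic' is standard Next.js route segment config:

app/api/me/route.ts
import { cookies } from 'next/headers'

// cookies() only exists at request time, so opt this route out of
// static pre-rendering instead of letting the build error out
export const dynamic = 'force-dynamic'

export async function GET() {
  const session = (await cookies()).get('session')?.value ?? null
  return Response.json({ loggedIn: session !== null })
}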

What Works at the Edge (and What Doesn't)

Edge runtimes are deliberately stripped down. They're not full Node.js environments. Think of it like this: a regular server is a full construction trailer — generator, tools, everything. An edge function is a tool belt. Lighter, faster, but you can only bring what fits.

What Edge Is Great For

  • Auth token validation — Checking a JWT or session token before a request hits your origin. Fast, stateless, perfect for edge.
  • Geolocation routing — Show Canadian users the Canadian version of your site. Redirect EU users to your GDPR-compliant pages.
  • A/B testing — Split traffic 50/50 between two versions without adding latency. The split happens at the edge, not on your server.
  • Personalization headers — Add a header like X-User-Plan: pro based on a cookie, so your origin server knows what to render (see the sketch after this list).
  • Rate limiting — Block bad actors before they ever touch your origin server.
  • Static asset manipulation — Resize images on the fly, add watermarks, serve WebP to browsers that support it.
  • Caching dynamic content — Cache API responses at the edge so 100 users in London all get the same fast response.
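Here's what the personalization-header item looks like in practice: a minimal sketch of Next.js middleware, assuming a plan cookie set at login (the cookie and header names are made up for illustration).

middleware.ts
import { NextResponse } from 'next/server'
import type { NextRequest } from 'next/server'

export function middleware(request: NextRequest) {
  // Read the user's plan from a cookie (assumed to be set at login)
  const plan = request.cookies.get('plan')?.value ?? 'free'

  // Forward a copy of the request headers with the hint added,
  // so the origin server knows what to render
  const requestHeaders = new Headers(request.headers)
  requestHeaders.set('x-user-plan', plan)

  return NextResponse.next({
    request: { headers: requestHeaders },
  })
}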

What Edge Can't Do

  • Traditional database queries — Connecting to PostgreSQL or MySQL requires persistent TCP connections and Node.js drivers. Edge runtimes don't support this. (There are workarounds — more on that in the Prisma section.)
  • File system access — No fs.readFile(), no reading local files. Edge functions are stateless.
  • Long-running processes — Edge functions have CPU limits of 10–50ms. They're built for fast, in-and-out work. Not for generating PDFs, processing video, or anything that takes real compute time.
  • Node.js-specific APIs — Many npm packages rely on Node.js internals. If a package uses process, Buffer (in certain ways), child_process, or native Node modules, it won't work at edge.
  • Stateful connections — WebSockets require persistent connections. Edge functions handle individual requests, not long-lived connections.

The Edge Runtime Web Standards

Edge runtimes implement the Web Platform APIs — the same APIs that run in your browser. fetch(), Request, Response, Headers, URL, crypto (the Web Crypto API), TextEncoder. If it runs in Chrome, it probably runs at edge. If it requires Node.js, it probably doesn't.
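For instance, HMAC-signing a value needs nothing beyond the Web Crypto API, so it runs unchanged in the browser and at the edge. A minimal sketch:

// Uses only Web APIs, so it runs in any edge runtime (and in Chrome)
async function hmacSha256(secret: string, message: string): Promise<string> {
  const encoder = new TextEncoder()
  const key = await crypto.subtle.importKey(
    'raw',
    encoder.encode(secret),
    { name: 'HMAC', hash: 'SHA-256' },
    false,
    ['sign']
  )
  const signature = await crypto.subtle.sign('HMAC', key, encoder.encode(message))
  // Hex-encode the raw signature bytes
  return [...new Uint8Array(signature)]
    .map((b) => b.toString(16).padStart(2, '0'))
    .join('')
}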

The Big Gotcha: Prisma Doesn't Work at the Edge

This one burns vibe coders constantly. You have a Next.js app. Your AI built your database layer with Prisma because Prisma is excellent and widely recommended. You add export const runtime = 'edge' to a route or create a middleware that touches the database, and suddenly everything breaks.

The error looks something like:

Error
PrismaClientUnknownRequestError: 
  @prisma/client does not support edge runtimes yet.
  
  Please try one of the following:
  - Use the Prisma Accelerate extension
  - Use the Prisma Pulse extension

Here's why this happens: Prisma's query engine is a native binary, loaded through Node.js, that talks to your database over a TCP connection. Edge runtimes don't have Node.js, can't load native binaries, and don't allow TCP connections to arbitrary hosts. Standard Prisma simply cannot run in that environment.

Your Options

Option 1: Don't use edge for database routes. This is the simplest fix. Remove export const runtime = 'edge' from any route that touches Prisma. Let those routes run on normal serverless (Node.js) functions. Only use edge for routes that don't need the database.

Option 2: Use Prisma Accelerate. Prisma offers a connection pooler/proxy service that exposes an HTTP API. Your edge function calls Prisma Accelerate's HTTP endpoint instead of connecting directly to the database. Works at edge, but adds Prisma Accelerate to your stack (it's a paid service after the free tier).
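If you go the Accelerate route, the setup looks roughly like this. It's a sketch based on Prisma's documented edge client and Accelerate extension; the user model and route path are assumptions, so check the current Prisma docs before relying on the details.

app/api/users/route.ts (Prisma Accelerate at the edge)
import { PrismaClient } from '@prisma/client/edge'
import { withAccelerate } from '@prisma/extension-accelerate'

export const runtime = 'edge'

// DATABASE_URL must be your Accelerate connection string, not a direct
// database URL; queries go over HTTP to Accelerate, which holds the
// real TCP connection pool
const prisma = new PrismaClient().$extends(withAccelerate())

export async function GET() {
  const users = await prisma.user.findMany() // assumes a User model
  return Response.json(users)
}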

Option 3: Switch to Drizzle ORM + HTTP-based database driver. This is what many teams do when they're building edge-first. Drizzle is a lightweight TypeScript ORM. Neon (serverless PostgreSQL) and PlanetScale (MySQL) both offer HTTP drivers that work in edge runtimes — no TCP, no Node.js required.

Edge-compatible database (Drizzle + Neon HTTP)
import { neon } from '@neondatabase/serverless'
import { drizzle } from 'drizzle-orm/neon-http'
import { users } from './schema'

// This works at the edge because it uses HTTP, not TCP
export const runtime = 'edge'

export async function GET() {
  const sql = neon(process.env.DATABASE_URL!)
  const db = drizzle(sql)
  
  const allUsers = await db.select().from(users)
  
  return Response.json(allUsers)
}

If your AI set you up with Prisma and you want to go edge, Option 1 (just don't run database routes at edge) is usually the right call unless you have a specific performance reason to push database access to the edge. Most apps don't need it.

Tell Your AI This

If you want your AI to write edge-compatible database code, give it this context: "I'm using Next.js deployed on Vercel with edge functions. Use Drizzle ORM with the Neon serverless HTTP driver — NOT Prisma, which doesn't work at edge." That one sentence saves you a debugging session.

Cloudflare Workers vs. Vercel Edge vs. Netlify Edge

The three main edge platforms you'll encounter as a vibe coder each have their own flavor. Here's how they compare:

| Provider | Locations | Cold Start | CPU Limit | Best For |
| --- | --- | --- | --- | --- |
| Cloudflare Workers | ~300 worldwide | ~0ms | 10ms (free), 30ms (paid) | Any stack, maximum global reach |
| Vercel Edge Functions | ~70 locations | ~0ms | 25ms | Next.js apps on Vercel |
| Netlify Edge Functions | ~35 locations | ~0ms | 50ms | Netlify-hosted apps, Deno-based |
| AWS Lambda@Edge | ~30 CloudFront PoPs | 50–200ms | 5 seconds | AWS/CloudFront architectures |

Cloudflare Workers

The most widely distributed edge platform on the planet. Cloudflare has ~300 data center locations — if there's a city with significant internet infrastructure, Cloudflare is probably there. Workers run on Cloudflare's V8 isolate technology, which means near-zero cold starts and extremely low latency.

Workers is a standalone product. You don't have to host your site on Cloudflare to use it. You can run Workers in front of any backend — AWS, Vercel, a VPS, whatever. This makes it extremely flexible for things like:

  • Global rate limiting in front of any backend
  • Smart routing between multiple origin servers
  • Transforming responses before they hit the user

The free tier is generous: 100,000 requests/day, 10ms CPU per request. That's plenty for most hobby projects and small apps.
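To make the rate-limiting use case concrete, here's a minimal sketch of a fixed-window rate limiter using Workers KV. RATE_KV is an assumed KV namespace binding, and the 60-requests-per-minute threshold is arbitrary:

worker.js (Cloudflare Worker)
export default {
  async fetch(request, env) {
    // Cloudflare sets this header to the client's real IP
    const ip = request.headers.get('cf-connecting-ip') ?? 'unknown'

    // One counter per IP per minute (fixed window)
    const minute = Math.floor(Date.now() / 60_000)
    const key = `rl:${ip}:${minute}`

    const count = parseInt((await env.RATE_KV.get(key)) ?? '0', 10)
    if (count >= 60) {
      return new Response('Too Many Requests', { status: 429 })
    }

    // KV is eventually consistent, so treat this as an approximate
    // limit: good for blunting abuse, not for exact quotas
    await env.RATE_KV.put(key, String(count + 1), { expirationTtl: 120 })

    // Under the limit: pass the request through to the origin
    return fetch(request)
  }
}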

Vercel Edge Functions

If you're using Next.js on Vercel (and most vibe coders are), Vercel Edge Functions is what runs your middleware and any routes you mark with export const runtime = 'edge'. You don't configure it separately — it's built into the Vercel platform.

Vercel's edge network is smaller than Cloudflare's (~70 locations vs ~300), but for Next.js apps it's deeply integrated. The request.geo object, ISR (Incremental Static Regeneration), and edge caching all work together out of the box.

If you're comparing deployment platforms, check out our breakdown of Vercel vs. Netlify — it covers edge functions, build performance, and pricing in detail.

Netlify Edge Functions

Netlify's edge layer runs on Deno rather than Node.js. This matters if your AI writes code for it — Deno uses ES modules natively, has different import syntax, and has access to different APIs than Node.js-based edge runtimes. If you see code with import ... from "https://deno.land/...", you're looking at Netlify Edge code.
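A minimal sketch of what that looks like, based on Netlify's documented Context type and context.geo object (worth verifying against their current docs):

netlify/edge-functions/hello.ts
import type { Context } from "https://edge.netlify.com";

export default async (request: Request, context: Context) => {
  // Netlify populates context.geo with the user's location at the edge
  const country = context.geo?.country?.name ?? "somewhere";
  return new Response(`Hello from ${country}!`);
};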

Netlify Edge has a longer CPU time limit (50ms) compared to Cloudflare (10–30ms) and Vercel (25ms), which gives you a bit more room for complex logic.

AWS Lambda@Edge

Lambda@Edge runs your functions at CloudFront edge locations — Amazon's CDN. It's more powerful than the others (longer timeouts, more memory) but it's also more complex to set up and has actual cold starts. If your stack is already deep in AWS, it's worth knowing about. If not, Cloudflare Workers or Vercel Edge is a simpler path.

When Should You Actually Use Edge Computing?

Okay, so edge computing sounds powerful. Should you use it everywhere? Short answer: no. Here's the decision framework.

Use Edge When:

  • Your users are global — If most of your traffic comes from one country, edge computing gives you less benefit. If you have users in Asia, Europe, and the Americas, edge makes a real difference.
  • The operation is fast and stateless — Auth checks, redirects, header manipulation, simple transforms. These are edge-native operations.
  • You're already using Next.js middleware — Congratulations, you're already using edge. You don't need to do anything else.
  • You need near-zero cold starts — Traditional serverless functions (AWS Lambda, Vercel Serverless Functions) have cold starts of 50–500ms when they haven't been called recently. Edge functions start in under 5ms, consistently.
  • You're building geolocation features — Redirecting based on country, showing localized content, compliance routing (EU data stays in EU). Edge makes this trivial.

Don't Use Edge When:

  • You need a traditional database connection — Unless you're using an HTTP-based driver like Neon or PlanetScale, keep your database routes in regular serverless functions.
  • Your operation takes more than 50ms of CPU — Edge has hard CPU limits. If you're doing image processing, PDF generation, or complex data transforms, use regular serverless or a dedicated worker.
  • You need lots of memory — Edge functions typically have 128MB or less. Background processing jobs and data-heavy operations need more.
  • You're just starting out — If your app is new and most of your users are in one region, don't over-engineer it. A regular Next.js app on Vercel works great. You can add edge optimizations later when you actually have the traffic to justify them.

The Practical Rule

If your AI already wrote edge code (middleware, Workers), don't fight it — learn to work with the constraints. If you're choosing whether to add edge computing to a new feature, only do it if the operation is fast, stateless, and needs to be close to users. Everything else? Regular serverless is fine.

The Real Performance Win: Combining Edge + Origin

The best architectures don't pick "edge OR origin" — they use both. Here's a pattern that works well for Next.js apps:

  • Middleware at edge: Auth validation, geolocation, A/B test assignment, rate limiting. Runs worldwide, ~0ms cold start.
  • Static pages at CDN: Pre-rendered HTML served from Vercel/Netlify CDN. Instant, globally cached.
  • API routes at origin: Database queries, complex business logic, integrations. Runs in Node.js with full capabilities.

This pattern gives you edge speed for fast operations and origin power for heavy lifting. Your users in Tokyo get lightning-fast auth checks at the edge, then pull data from your origin server — which only has to run once per cache lifetime.
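Pinning the third layer in place takes one line. A sketch of an origin route, with a made-up /api/report path:

app/api/report/route.ts
// 'nodejs' is already the default for route handlers; declaring it makes
// the edge/origin split explicit and guards against an accidental edge opt-in
export const runtime = 'nodejs'

export async function GET() {
  // Heavy lifting belongs here: Prisma queries, fs access, PDF generation
  return Response.json({ generatedAt: new Date().toISOString() })
}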

What to Learn Next

Edge computing connects to several concepts that vibe coders run into constantly. Here's where to go deeper:

  • What Is Next.js Middleware? — If you want to understand the most common place edge code lives in a Next.js app, start here. Middleware is the gateway to understanding edge computing in practice.
  • Vercel vs. Netlify: Which Should You Use? — Both platforms have edge functions, but they work differently and have different pricing. This comparison breaks it down for vibe coders.
  • What Is Coolify? Self-Hosting Your App — If edge computing feels too abstract and you'd rather understand traditional server hosting first, Coolify is an excellent self-hosting platform to learn on.
  • Supabase vs. Firebase: Database for Vibe Coders — Supabase (PostgreSQL) works with edge via HTTP drivers. Firebase's real-time database has edge-friendly APIs. If your app needs a database at the edge, understanding both options helps.
  • What Is GitHub Pages? — On the other end of the spectrum: the simplest possible hosting with no edge computing at all. Good context for understanding the spectrum from static to edge.

The Fast Path

If you're using Next.js on Vercel, you already have edge computing. Read up on Next.js Middleware to understand what's already running at the edge in your app, then come back to this article to understand why it's structured the way it is.

FAQ

What is edge computing in simple terms?

Edge computing means running your code in data centers that are physically close to your users — not just in one central location like Virginia. Instead of one warehouse serving the whole country, you have regional depots everywhere. Your users in Tokyo get a response from a server in Tokyo, not one that has to travel nearly 7,000 miles and back.

Does Prisma work at the edge?

No — not without workarounds. Prisma requires Node.js runtime features that aren't available in edge environments. Your options: (1) don't use edge for database routes, (2) use Prisma Accelerate as a proxy, or (3) switch to Drizzle ORM with Neon or PlanetScale's HTTP drivers, which are designed specifically for edge compatibility.

What's the difference between Vercel Edge Functions and Cloudflare Workers?

Both run JavaScript at the edge with near-zero cold starts. The main differences: Cloudflare Workers is a standalone product available on any stack with ~300 locations worldwide. Vercel Edge Functions is built into the Vercel platform specifically for Next.js apps and integrates deeply with Next.js features. If you're on Vercel with Next.js, use Vercel Edge. If you need global edge on any stack, use Cloudflare Workers.

Why is my Next.js middleware throwing an edge runtime error?

Next.js middleware runs at the edge by default, which means it can't use Node.js-only APIs. If your middleware imports something that requires Node.js (Prisma, the fs module, certain authentication libraries), you'll get an edge runtime error. The fix: use edge-compatible alternatives, or move Node.js-dependent logic into a regular API route and call it from middleware.

When should I NOT use edge computing?

Skip edge for: database queries (unless using an HTTP driver), anything that takes more than 50ms of CPU, operations needing lots of memory, long-running background tasks, and code that depends on Node.js-specific features. Edge is purpose-built for fast, stateless operations. If your code needs to "think" for a while or talk to a database directly, keep it in a regular serverless function or server.