
Next.js File Uploads Done Right: Direct-to-S3 and Cloudflare R2 with Presigned URLs, Validation, and Security

Adrijan Omićević · 15 min read

# What You’ll Build

This guide shows a production-ready pattern for direct browser uploads to AWS S3 or Cloudflare R2 using presigned URLs in Next.js App Router. You’ll implement server-side signing, layered validation, optional malware scanning, and resilient retries without routing file bytes through your Next.js server.

If you need the short reason this matters: server-side uploads frequently fail under load due to memory limits and timeouts, while direct-to-object-storage uploads shift the heavy lifting to infrastructure designed for it.


# Architecture Overview: Direct Upload, Then Verify

A secure upload flow has three phases:

  1. Request: the browser asks your server for permission to upload.
  2. Upload: the browser uploads directly to S3 or R2 using a presigned URL.
  3. Finalize: the browser tells your server the upload is done, and your server verifies and records it.

This removes large request bodies from your Next.js runtime, which is critical on serverless platforms where request bodies are commonly limited and long uploads hit timeouts.

## Direct Upload Flow (High Level)

| Step | Who | What happens | Why it matters |
| --- | --- | --- | --- |
| 1 | Client | Sends filename, size, type to your API | Minimal payload, fast request |
| 2 | Server | Auth checks, validates metadata, generates object key, returns presigned URL | Central security gate |
| 3 | Client | PUTs file to S3 or R2 | Uses storage network throughput |
| 4 | Client | Calls finalize endpoint with key and ETag | Avoids trusting client claims |
| 5 | Server | HEADs object, validates size and content-type, marks status | Prevents spoofing and partial uploads |
| 6 | Scanner (optional) | Scans object, then moves to safe bucket or marks clean | |

🎯 Key Takeaway: Your app should treat a file as untrusted until it is verified server-side and, for risky use cases, scanned.

# Prerequisites

| Requirement | Version | Notes |
| --- | --- | --- |
| Next.js | 14 or 15 | App Router route handlers used in examples |
| Node.js runtime for signing | 18+ | Presigning uses crypto and AWS SDK |
| AWS S3 or Cloudflare R2 | | R2 uses S3-compatible endpoint |
| Auth system | Any | Use session cookies or JWT; tie uploads to a user |

For authentication patterns, see Next.js authentication options. Upload endpoints must be protected; otherwise, presigned URLs become a public upload relay.

# Storage Setup: S3 vs R2 Basics

S3 and R2 expose the same S3-compatible API, but operational details differ.

Use at least two logical areas:

  • staging: where direct uploads land
  • public or processed: where you move files after verification and scanning

This reduces risk of serving unsafe content and makes lifecycle policies easier.

## Quick Comparison for This Use Case

| Capability | AWS S3 | Cloudflare R2 |
| --- | --- | --- |
| Presigned PUT URLs | Yes | Yes |
| Multi-part upload | Yes | Yes |
| Event triggers | Strong native options | Works, but depends on your stack |
| Egress fees | Typical cloud pricing | Often lower, especially via Cloudflare |
| S3 API compatibility | Native | S3-compatible endpoint |

# Step 1: Server-Side Signing in App Router

You’ll create a route handler that validates upload intent and returns a presigned PUT URL plus the object key.

## Environment Variables

Keep credentials server-side only.

| Variable | Example | Notes |
| --- | --- | --- |
| S3_ACCESS_KEY_ID | AKIA... | Use least-privilege IAM |
| S3_SECRET_ACCESS_KEY | ... | Never expose to client |
| S3_BUCKET | myapp-uploads | Separate buckets per env |
| S3_REGION | eu-central-1 | For R2, often auto |
| S3_ENDPOINT | `https://<account>.r2.cloudflarestorage.com` | Needed for R2 |

## S3 or R2 Client Setup

Create a small helper. This stays on the server.

```typescript
// lib/s3.ts
import { S3Client } from "@aws-sdk/client-s3";
 
export function getS3Client() {
  return new S3Client({
    region: process.env.S3_REGION!,
    endpoint: process.env.S3_ENDPOINT || undefined,
    credentials: {
      accessKeyId: process.env.S3_ACCESS_KEY_ID!,
      secretAccessKey: process.env.S3_SECRET_ACCESS_KEY!,
    },
    forcePathStyle: !!process.env.S3_ENDPOINT,
  });
}
```

forcePathStyle is often required for S3-compatible providers and local testing.

## Route Handler: Create Presigned URL

This endpoint should:

  • check auth
  • validate the claimed size and type against your policy
  • generate a safe object key tied to the user
  • create a short-lived presigned URL

```typescript
// app/api/uploads/sign/route.ts
import { NextResponse } from "next/server";
import { randomUUID } from "crypto";
import { PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import { getS3Client } from "@/lib/s3";
 
const MAX_BYTES = 10 * 1024 * 1024;
const ALLOWED_TYPES = ["image/jpeg", "image/png", "application/pdf"];
 
export async function POST(req: Request) {
  const sessionUserId = req.headers.get("x-user-id"); // replace with real auth
  if (!sessionUserId) return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
 
  const body = await req.json();
  const fileName = String(body.fileName || "");
  const fileType = String(body.fileType || "");
  const fileSize = Number(body.fileSize || 0);
 
  if (!ALLOWED_TYPES.includes(fileType)) {
    return NextResponse.json({ error: "Unsupported file type" }, { status: 400 });
  }
  if (!Number.isFinite(fileSize) || fileSize <= 0 || fileSize > MAX_BYTES) {
    return NextResponse.json({ error: "File too large" }, { status: 400 });
  }
 
  const safeExt = fileType === "application/pdf" ? "pdf" : fileType.split("/")[1];
  const objectKey = `staging/${sessionUserId}/${randomUUID()}.${safeExt}`;
 
  const s3 = getS3Client();
  const bucket = process.env.S3_BUCKET!;
 
  const cmd = new PutObjectCommand({
    Bucket: bucket,
    Key: objectKey,
    ContentType: fileType,
    // Optionally: metadata to help downstream processing
    Metadata: {
      "original-name": fileName.slice(0, 120),
      "uploader-id": sessionUserId,
    },
  });
 
  const uploadUrl = await getSignedUrl(s3, cmd, { expiresIn: 60 });
 
  return NextResponse.json({
    uploadUrl,
    key: objectKey,
    maxBytes: MAX_BYTES,
    requiredContentType: fileType,
  });
}
```

Why the short expiry matters: if a presigned URL leaks, the attacker’s window is small. A common production value is 30 to 120 seconds.

⚠️ Warning: Don’t build object keys from user-provided filenames. Path tricks and collisions are real. Always generate your own key and store the original name as metadata.

# Step 2: Browser Upload with Retries and Progress

The client uploads directly to the presigned URL using fetch with PUT. Add retries because mobile networks fail, corporate proxies can reset connections, and users close laptops mid-upload.

## Minimal Client Upload Function

```typescript
// lib/uploadDirect.ts
export async function uploadViaPresignedUrl(params: {
  file: File;
  uploadUrl: string;
  contentType: string;
  retries?: number;
}) {
  const { file, uploadUrl, contentType } = params;
  const retries = params.retries ?? 2;
 
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      const res = await fetch(uploadUrl, {
        method: "PUT",
        headers: { "Content-Type": contentType },
        body: file,
      });
      if (!res.ok) throw new Error(`Upload failed: ${res.status}`);
      return { etag: res.headers.get("etag") };
    } catch (err) {
      lastError = err;
      // Linear backoff before the next attempt; skip the wait after the final failure.
      if (attempt < retries) {
        await new Promise((r) => setTimeout(r, 400 * (attempt + 1)));
      }
    }
  }
  throw lastError;
}
```

If you need progress, fetch still lacks stable upload progress events in many browsers. For large files, use XMLHttpRequest for progress or implement multipart uploads with a client-side library.
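
If a progress bar matters, one option is a small XMLHttpRequest wrapper. The sketch below is illustrative and not part of the code above; the function name and shape simply mirror uploadViaPresignedUrl:

```typescript
// lib/uploadWithProgress.ts — illustrative sketch for upload progress via XHR
export function uploadWithProgress(params: {
  file: File;
  uploadUrl: string;
  contentType: string;
  onProgress?: (fraction: number) => void;
}): Promise<{ etag: string | null }> {
  const { file, uploadUrl, contentType, onProgress } = params;

  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    xhr.open("PUT", uploadUrl);
    xhr.setRequestHeader("Content-Type", contentType);

    // Upload progress events fire on xhr.upload, not on xhr itself.
    xhr.upload.onprogress = (event) => {
      if (event.lengthComputable && onProgress) {
        onProgress(event.loaded / event.total);
      }
    };

    xhr.onload = () => {
      if (xhr.status >= 200 && xhr.status < 300) {
        resolve({ etag: xhr.getResponseHeader("ETag") });
      } else {
        reject(new Error(`Upload failed: ${xhr.status}`));
      }
    };
    xhr.onerror = () => reject(new Error("Network error during upload"));

    xhr.send(file);
  });
}
```

As with the fetch version, reading the ETag response header only works if the bucket CORS rules expose it, which the CORS section below covers.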

## Client Orchestration: Sign, Upload, Finalize

Keep the finalize step even if you already have the key. It’s the server’s chance to verify and attach the upload to your database.

```typescript
// example usage in a client component action
export async function uploadFile(file: File) {
  const signRes = await fetch("/api/uploads/sign", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ fileName: file.name, fileType: file.type, fileSize: file.size }),
  });
  if (!signRes.ok) throw new Error("Sign failed");
  const { uploadUrl, key, requiredContentType } = await signRes.json();
 
  const { etag } = await uploadViaPresignedUrl({
    file,
    uploadUrl,
    contentType: requiredContentType,
    retries: 2,
  });
 
  const finRes = await fetch("/api/uploads/finalize", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ key, etag }),
  });
  if (!finRes.ok) throw new Error("Finalize failed");
  return finRes.json();
}
```

# Step 3: Finalize Endpoint With Server-Side Verification

Finalization should confirm the object exists and matches your expectations.

At minimum:

  • ensure the key belongs to the requesting user
  • HEAD the object and validate:
    • ContentLength less than or equal to your max
    • ContentType matches your allowlist
  • persist a DB record with status uploaded or pending_scan

## Route Handler: Finalize

```typescript
// app/api/uploads/finalize/route.ts
import { NextResponse } from "next/server";
import { HeadObjectCommand } from "@aws-sdk/client-s3";
import { getS3Client } from "@/lib/s3";
 
const MAX_BYTES = 10 * 1024 * 1024;
const ALLOWED_TYPES = ["image/jpeg", "image/png", "application/pdf"];
 
export async function POST(req: Request) {
  const sessionUserId = req.headers.get("x-user-id"); // replace with real auth
  if (!sessionUserId) return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
 
  const { key } = await req.json();
  const objectKey = String(key || "");
 
  if (!objectKey.startsWith(`staging/${sessionUserId}/`)) {
    return NextResponse.json({ error: "Invalid key" }, { status: 403 });
  }
 
  const s3 = getS3Client();
  const bucket = process.env.S3_BUCKET!;
 
  let head;
  try {
    head = await s3.send(new HeadObjectCommand({ Bucket: bucket, Key: objectKey }));
  } catch {
    // HeadObject throws if the object does not exist, e.g. when the client never completed the PUT.
    return NextResponse.json({ error: "Object not found" }, { status: 404 });
  }
 
  const size = Number(head.ContentLength || 0);
  const type = String(head.ContentType || "");
 
  if (size <= 0 || size > MAX_BYTES) {
    return NextResponse.json({ error: "Invalid size" }, { status: 400 });
  }
  if (!ALLOWED_TYPES.includes(type)) {
    return NextResponse.json({ error: "Invalid type" }, { status: 400 });
  }
 
  // Persist to DB here: userId, key, size, type, status
  // For risky content: set status = "pending_scan"
 
  return NextResponse.json({
    ok: true,
    key: objectKey,
    size,
    type,
    status: "uploaded",
  });
}
```

Why this matters: client-provided metadata is easy to spoof. HEAD-based verification is cheap and stops a large class of abuse.

💡 Tip: If you run a CDN in front, never serve directly from staging. Only serve from a separate public prefix or bucket after verification and scanning.

# Validation Strategy: Size, Type, and Content

Validation needs layers. Each layer catches different failures.

## Client-Side Validation (UX)

Client checks reduce wasted time, but they are not security; a minimal pre-flight check is sketched below.

  • block obviously wrong file types
  • show max size before upload starts
  • show progress and expected time remaining
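
A pre-flight check can stay tiny. The sketch below duplicates the MAX_BYTES and ALLOWED_TYPES values from the sign endpoint for clarity; in a real codebase you would likely share them from one module:

```typescript
// lib/preflight.ts — client-side UX check only; the server still enforces the real policy
const MAX_BYTES = 10 * 1024 * 1024;
const ALLOWED_TYPES = ["image/jpeg", "image/png", "application/pdf"];

export function preflightCheck(file: File): { ok: true } | { ok: false; reason: string } {
  if (!ALLOWED_TYPES.includes(file.type)) {
    return { ok: false, reason: "Unsupported file type" };
  }
  if (file.size > MAX_BYTES) {
    return { ok: false, reason: `File exceeds the ${MAX_BYTES / (1024 * 1024)} MB limit` };
  }
  return { ok: true };
}
```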

## Server-Side Validation (Security Gate)

The server should enforce policy in both endpoints:

  • in /sign: validate requested fileType and fileSize
  • in /finalize: validate actual stored object metadata

## Don’t Trust MIME Type Alone

Browsers can send application/octet-stream or incorrect types, and attackers can fake image/png.

If your risk is high, add at least one of:

  • content sniffing after upload
  • magic-bytes detection in a worker (sketched below)
  • antivirus scanning before making the file available
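
As a concrete example of the magic-bytes option, the first bytes of the stored object can be checked after upload. This is a rough sketch that reuses the getS3Client helper from earlier and only covers the three types allowed in this guide:

```typescript
// lib/magicBytes.ts — rough content sniffing for the allowlisted types
import { GetObjectCommand } from "@aws-sdk/client-s3";
import { getS3Client } from "@/lib/s3";

export async function sniffContentType(bucket: string, key: string): Promise<string | null> {
  const s3 = getS3Client();
  // Fetch only the first bytes, not the whole object.
  const res = await s3.send(
    new GetObjectCommand({ Bucket: bucket, Key: key, Range: "bytes=0-11" })
  );
  const bytes = await res.Body!.transformToByteArray();

  // JPEG: FF D8 FF
  if (bytes[0] === 0xff && bytes[1] === 0xd8 && bytes[2] === 0xff) return "image/jpeg";
  // PNG: 89 50 4E 47
  if (bytes[0] === 0x89 && bytes[1] === 0x50 && bytes[2] === 0x4e && bytes[3] === 0x47) return "image/png";
  // PDF: 25 50 44 46 ("%PDF")
  if (bytes[0] === 0x25 && bytes[1] === 0x50 && bytes[2] === 0x44 && bytes[3] === 0x46) return "application/pdf";

  return null; // unknown or disallowed content
}
```

Comparing the sniffed type with the ContentType returned by HEAD in the finalize handler catches most mislabeled files.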

For a broader set of security controls, use this checklist: Web application security checklist.

# Security Hardening: Keys, Permissions, and Abuse Prevention

Direct upload endpoints are common targets for bandwidth abuse and storing illegal content. Use these controls by default.

## IAM or R2 API Token: Least Privilege

Your server credentials should be able to:

  • PutObject to staging/
  • HeadObject in staging/
  • optionally CopyObject to public/ after scanning
  • optionally DeleteObject for cleanup

Avoid wildcard bucket access.

## Object Keys: Predictability and Access Control

Use unguessable keys, and tie them to a user namespace.

Good pattern:

  • staging/userId/uuid.ext

Avoid:

  • uploads/myphoto.png
  • uploads/2026/05/myphoto.png

## Short Expirations and One-Time Intent

Presigned URL expiry should be short, but you also need app-level intent:

  • create an upload_intent record with userId, key, maxBytes, type, expiresAt
  • only allow finalize if there is a matching intent
  • expire and garbage-collect intents

This blocks “sign once, upload forever” abuse when a URL leaks.
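
A sketch of the intent check inside finalize, assuming a hypothetical UploadIntent table accessed through Prisma; the model, its fields, and the prisma import are illustrative and not part of the earlier code:

```typescript
// Illustrative only. Assumes roughly:
// model UploadIntent { key String @id; userId String; expiresAt DateTime; consumedAt DateTime? }
import { prisma } from "@/lib/prisma"; // hypothetical shared Prisma client

export async function consumeUploadIntent(objectKey: string, userId: string) {
  const intent = await prisma.uploadIntent.findUnique({ where: { key: objectKey } });

  if (!intent || intent.userId !== userId) return null; // no matching intent for this user
  if (intent.expiresAt < new Date()) return null;       // intent expired
  if (intent.consumedAt) return intent;                 // already finalized; safe to treat as success

  // Mark the intent as consumed so the same presigned grant cannot be finalized twice.
  return prisma.uploadIntent.update({
    where: { key: objectKey },
    data: { consumedAt: new Date() },
  });
}
```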

## CORS

For direct browser PUTs, configure the bucket CORS policy to allow your site origin, the PUT method, and the headers you send; a minimal rule set is sketched below.

Common gotchas:

  • ETag missing from the exposed headers, so the browser cannot read it after the PUT
  • an AllowedOrigin wildcard on an authenticated app, which is far too broad
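
Here is a minimal sketch of such a rule set, applied through the S3 API. R2 accepts the same S3-compatible call, though many teams configure CORS in the Cloudflare dashboard instead, and the origin below is a placeholder:

```typescript
// scripts/set-cors.ts — run once per bucket, not on every request
import { PutBucketCorsCommand } from "@aws-sdk/client-s3";
import { getS3Client } from "@/lib/s3";

export async function configureUploadCors() {
  await getS3Client().send(
    new PutBucketCorsCommand({
      Bucket: process.env.S3_BUCKET!,
      CORSConfiguration: {
        CORSRules: [
          {
            AllowedOrigins: ["https://app.example.com"], // your real site origin, not "*"
            AllowedMethods: ["PUT"],
            AllowedHeaders: ["Content-Type"],
            ExposeHeaders: ["ETag"], // lets the browser read the ETag after the PUT
            MaxAgeSeconds: 3600,
          },
        ],
      },
    })
  );
}
```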

## Rate Limiting

Rate limit /sign by user and IP. Even with short URL expiry, a bot can generate thousands of signed URLs per minute.
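
As an illustration only, a fixed-window counter like the one below is enough for a single long-lived server process; on serverless or multi-instance deployments the counter needs to live in a shared store such as Redis:

```typescript
// lib/rateLimit.ts — in-memory fixed window, illustrative only
const WINDOW_MS = 60_000;
const MAX_SIGNS_PER_WINDOW = 20;

const counters = new Map<string, { count: number; windowStart: number }>();

export function allowSignRequest(userId: string, ip: string): boolean {
  const key = `${userId}:${ip}`;
  const now = Date.now();
  const entry = counters.get(key);

  // Start a fresh window if none exists or the previous one has expired.
  if (!entry || now - entry.windowStart > WINDOW_MS) {
    counters.set(key, { count: 1, windowStart: now });
    return true;
  }
  entry.count += 1;
  return entry.count <= MAX_SIGNS_PER_WINDOW;
}
```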

# Antivirus and Malware Scanning Options

If users can upload PDFs, Office files, archives, or anything that might be redistributed, scanning is usually required.

A practical approach is to treat uploads as quarantined until scanned.

## Option Matrix

| Option | Where it runs | Pros | Cons |
| --- | --- | --- | --- |
| ClamAV in a container worker | Your infra | Low cost, mature | Needs ops and updates |
| Managed malware scanning | Vendor | Less ops burden | Extra cost, vendor lock-in |
| Custom rule-based scanning | Worker | Tailored rules | Not a substitute for AV |

## Typical Scanning Pipeline

  1. Upload goes to staging/ with status pending_scan.
  2. A background job downloads the object to a worker and scans it.
  3. If clean, copy to public/ and mark clean.
  4. If infected, delete and mark rejected.

Keep scanning off the request path. Even small files can cause scan times that exceed serverless time limits.
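
Promotion after a clean scan can be a server-side copy followed by a delete. A minimal sketch, reusing the getS3Client helper and the staging/ and public/ prefixes suggested earlier:

```typescript
// worker/promote.ts — move a verified object out of staging
import { CopyObjectCommand, DeleteObjectCommand } from "@aws-sdk/client-s3";
import { getS3Client } from "@/lib/s3";

export async function promoteToPublic(stagingKey: string) {
  const s3 = getS3Client();
  const bucket = process.env.S3_BUCKET!;
  const publicKey = stagingKey.replace(/^staging\//, "public/");

  // CopySource is "bucket/key"; the copy happens inside the storage service, not through your app.
  await s3.send(
    new CopyObjectCommand({
      Bucket: bucket,
      Key: publicKey,
      CopySource: `${bucket}/${stagingKey}`,
    })
  );
  await s3.send(new DeleteObjectCommand({ Bucket: bucket, Key: stagingKey }));

  return publicKey;
}
```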

ℹ️ Note: If you must process images, do it in a worker and re-encode them. Re-encoding strips many malicious payload techniques embedded in metadata and reduces file size for CDN delivery.

# Handling Retries, Idempotency, and Partial Uploads

Retries are not optional in real user conditions. The tricky part is making retries safe.

Retry Rules That Work#

Failure typeWhat to doWhy
Network error during PUTRetry same presigned URL if still validMost common transient failure
403 from storageRequest a new presigned URLURL expired or signature mismatch
Upload succeeded but finalize failedRetry finalize with same keyMake finalize idempotent
User reloads page mid-uploadResume using stored key and intentAvoid orphaned objects

## Make Finalize Idempotent

Finalize should be safe to call multiple times. In DB terms:

  • store a unique constraint on key
  • if record exists and belongs to user, return success

This prevents duplicate records if the client retries.
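
With Prisma as an example ORM, the unique key plus an upsert keeps retries safe. The Upload model and the prisma import are hypothetical, shown only to make the pattern concrete:

```typescript
// Illustrative only. Assumes roughly:
// model Upload { key String @id; userId String; size Int; contentType String; status String }
import { prisma } from "@/lib/prisma"; // hypothetical shared Prisma client

export async function recordUpload(params: {
  key: string;
  userId: string;
  size: number;
  contentType: string;
}) {
  // upsert: insert on the first finalize, no-op update when the client retries with the same key
  return prisma.upload.upsert({
    where: { key: params.key },
    update: {},
    create: { ...params, status: "uploaded" },
  });
}
```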

## Cleanup Job for Orphaned Uploads

Expect orphaned objects from abandoned uploads. A daily cleanup job should delete:

  • objects in staging/ older than 24 to 72 hours
  • intents that expired

This reduces storage cost and limits exposure.
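
A sketch of the object-cleanup pass, assuming the job runs with the same credentials as the app; deleting expired intent rows is a simple database query and is omitted here:

```typescript
// jobs/cleanupStaging.ts — delete staging objects older than the cutoff
import { ListObjectsV2Command, DeleteObjectCommand } from "@aws-sdk/client-s3";
import { getS3Client } from "@/lib/s3";

const MAX_AGE_MS = 48 * 60 * 60 * 1000; // 48 hours

export async function cleanupStaging() {
  const s3 = getS3Client();
  const bucket = process.env.S3_BUCKET!;
  const cutoff = Date.now() - MAX_AGE_MS;

  // ListObjectsV2 is paginated; follow continuation tokens until the prefix is exhausted.
  let continuationToken: string | undefined;
  do {
    const page = await s3.send(
      new ListObjectsV2Command({
        Bucket: bucket,
        Prefix: "staging/",
        ContinuationToken: continuationToken,
      })
    );

    for (const obj of page.Contents ?? []) {
      if (obj.Key && obj.LastModified && obj.LastModified.getTime() < cutoff) {
        await s3.send(new DeleteObjectCommand({ Bucket: bucket, Key: obj.Key }));
      }
    }
    continuationToken = page.NextContinuationToken;
  } while (continuationToken);
}
```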

# Common Pitfalls in Next.js Upload Implementations

These issues cause most production incidents with uploads.

## Pitfall 1: Uploading Through Route Handlers

Uploading file bytes through Next.js endpoints is tempting but fragile:

  • serverless platforms often enforce body size limits
  • large requests can exceed execution timeouts
  • Node memory spikes can crash processes when multiple uploads happen concurrently

Direct-to-storage avoids these failure modes almost entirely.

Pitfall 2: Long-Lived Presigned URLs#

Long expirations turn a one-time authorization into a long-lived capability. If a URL leaks via logs, browser extensions, or shared screenshots, it becomes an abuse vector.

Use short TTL and app-level intents.

## Pitfall 3: Serving Unscanned Content

If you serve directly from staging/, you are effectively publishing untrusted user content. That can lead to malware distribution, phishing, and brand damage.

Quarantine and scan, then promote to public.

## Pitfall 4: Missing Observability

Uploads fail in ways users can’t describe precisely. Instrument your flow:

  • log sign and finalize responses with correlation IDs
  • track upload failure rates by browser and network type
  • alert on spikes in signed URLs per user

Use a structured approach from Web app observability: logging, metrics, tracing.

# Production Checklist: What to Ship

Use this checklist before going live.

| Area | Must-have | Recommended |
| --- | --- | --- |
| Auth | Protected sign and finalize | Per-user quotas |
| Validation | Size and allowlist types | Post-upload sniffing |
| Security | Random keys, short TTL | Staging and promotion buckets |
| Reliability | Client retries | Multipart for large files |
| Ops | Cleanup for orphans | Alerts on abuse patterns |
| Compliance | Audit logs | Malware scanning for risky content |

# Key Takeaways

  • Use a three-step flow: sign on the server, upload directly to S3 or R2 from the browser, then finalize with server-side verification.
  • Enforce validation twice: at signing time using claimed metadata and at finalize time using HEAD to validate actual stored object size and type.
  • Treat staging as untrusted: quarantine uploads, optionally scan for malware, then promote to a safe location before serving.
  • Keep presigned URLs short-lived and tie them to an upload intent to reduce abuse if a URL leaks.
  • Build for failure: add retries for PUT and make finalize idempotent, plus run a cleanup job for abandoned uploads.

# Conclusion

Direct-to-object-storage uploads are the most reliable way to handle files in Next.js because they avoid server memory pressure and serverless timeouts while improving scalability. Implement the sign-upload-finalize pattern, validate aggressively, quarantine and scan when needed, and make retries safe with idempotent finalize logic.

If you want Samioda to implement a secure upload pipeline for S3 or Cloudflare R2, including scanning, observability, and hardening aligned with your auth setup, contact us and we’ll help you ship it fast and safely.
