# What You’ll Build#
This guide shows a production-ready pattern for direct browser uploads to AWS S3 or Cloudflare R2 using presigned URLs in Next.js App Router. You’ll implement server-side signing, layered validation, optional malware scanning, and resilient retries without routing file bytes through your Next.js server.
If you need the short reason this matters: server-side uploads frequently fail under load due to memory limits and timeouts, while direct-to-object-storage uploads shift the heavy lifting to infrastructure designed for it.
# Architecture Overview: Direct Upload, Then Verify#
A secure upload flow has three phases:
1. Request: the browser asks your server for permission to upload.
2. Upload: the browser uploads directly to S3 or R2 using a presigned URL.
3. Finalize: the browser tells your server the upload is done, and your server verifies and records it.
This removes large request bodies from your Next.js runtime, which is critical on serverless platforms where request bodies are commonly limited and long uploads hit timeouts.
Direct Upload Flow (High Level)#
| Step | Who | What happens | Why it matters |
|---|---|---|---|
| 1 | Client | Sends filename, size, type to your API | Minimal payload, fast request |
| 2 | Server | Auth checks, validates metadata, generates object key, returns presigned URL | Central security gate |
| 3 | Client | PUTs file to S3 or R2 | Uses storage network throughput |
| 4 | Client | Calls finalize endpoint with key and ETag | Avoids trusting client claims |
| 5 | Server | HEADs object, validates size and content-type, marks status | Prevents spoofing and partial uploads |
| 6 | Scanner (optional) | Scans object, then moves it to a safe bucket or marks it clean | Keeps unverified content quarantined |
🎯 Key Takeaway: Your app should treat a file as untrusted until it is verified server-side and, for risky use cases, scanned.
# Prerequisites#
| Requirement | Version | Notes |
|---|---|---|
| Next.js | 14 or 15 | App Router route handlers used in examples |
| Node.js runtime for signing | 18+ | Presigning uses crypto and AWS SDK |
| AWS S3 or Cloudflare R2 | — | R2 uses S3-compatible endpoint |
| Auth system | Any | Use session cookies or JWT; tie uploads to a user |
For authentication patterns, see Next.js authentication options. Upload endpoints must be protected, otherwise presigned URLs become a public upload relay.
# Storage Setup: S3 vs R2 Basics#
S3 and R2 support the same S3-compatible API, but operational details differ.
Recommended Bucket Layout#
Use at least two logical areas:
- staging: where direct uploads land
- public or processed: where you move files after verification and scanning
This reduces risk of serving unsafe content and makes lifecycle policies easier.
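The staging-then-promote layout can be captured in a tiny helper. This is a sketch with an illustrative name (`toPublicKey` is not from any library); it assumes the two-prefix layout described above:

```typescript
// Hypothetical helper: derive the post-verification key from a staging key.
// Promotion itself would be a CopyObject from staging/ to public/.
export function toPublicKey(stagingKey: string): string {
  if (!stagingKey.startsWith("staging/")) {
    throw new Error("Only staging objects can be promoted");
  }
  return "public/" + stagingKey.slice("staging/".length);
}
```

Keeping the rest of the key identical makes it easy to correlate staging and public copies in logs and lifecycle rules.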
Quick Comparison for This Use Case#
| Capability | AWS S3 | Cloudflare R2 |
|---|---|---|
| Presigned PUT URLs | Yes | Yes |
| Multi-part upload | Yes | Yes |
| Event triggers | Strong native options | Works, but depends on your stack |
| Egress fees | Typical cloud pricing | Often lower, especially via Cloudflare |
| S3 API compatibility | Native | S3-compatible endpoint |
# Step 1: Server-Side Signing in App Router#
You’ll create a route handler that validates upload intent and returns a presigned PUT URL plus the object key.
Environment Variables#
Keep credentials server-side only.
| Variable | Example | Notes |
|---|---|---|
| S3_ACCESS_KEY_ID | AKIA... | Use least-privilege IAM |
| S3_SECRET_ACCESS_KEY | ... | Never expose to client |
| S3_BUCKET | myapp-uploads | Separate buckets per env |
| S3_REGION | eu-central-1 | For R2, often auto |
| S3_ENDPOINT | https://<account>.r2.cloudflarestorage.com | Needed for R2 |
S3 or R2 Client Setup#
Create a small helper. This stays on the server.
```typescript
// lib/s3.ts
import { S3Client } from "@aws-sdk/client-s3";

export function getS3Client() {
  return new S3Client({
    region: process.env.S3_REGION!,
    endpoint: process.env.S3_ENDPOINT || undefined,
    credentials: {
      accessKeyId: process.env.S3_ACCESS_KEY_ID!,
      secretAccessKey: process.env.S3_SECRET_ACCESS_KEY!,
    },
    // Path-style addressing is often needed for S3-compatible providers.
    forcePathStyle: !!process.env.S3_ENDPOINT,
  });
}
```

`forcePathStyle` is often required for S3-compatible providers and local testing.
Route Handler: Create Presigned URL#
This endpoint should do:
- auth check
- validate size and type against your policy
- generate a safe object key tied to the user
- create a short-lived presigned URL
```typescript
// app/api/uploads/sign/route.ts
import { NextResponse } from "next/server";
import { randomUUID } from "crypto";
import { PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import { getS3Client } from "@/lib/s3";

const MAX_BYTES = 10 * 1024 * 1024; // 10 MB
const ALLOWED_TYPES = ["image/jpeg", "image/png", "application/pdf"];

export async function POST(req: Request) {
  const sessionUserId = req.headers.get("x-user-id"); // replace with real auth
  if (!sessionUserId) {
    return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
  }

  const body = await req.json();
  const fileName = String(body.fileName || "");
  const fileType = String(body.fileType || "");
  const fileSize = Number(body.fileSize || 0);

  if (!ALLOWED_TYPES.includes(fileType)) {
    return NextResponse.json({ error: "Unsupported file type" }, { status: 400 });
  }
  if (!Number.isFinite(fileSize) || fileSize <= 0 || fileSize > MAX_BYTES) {
    return NextResponse.json({ error: "Invalid file size" }, { status: 400 });
  }

  // Never derive the key from the user's filename; generate it server-side.
  const safeExt = fileType === "application/pdf" ? "pdf" : fileType.split("/")[1];
  const objectKey = `staging/${sessionUserId}/${randomUUID()}.${safeExt}`;

  const s3 = getS3Client();
  const bucket = process.env.S3_BUCKET!;
  const cmd = new PutObjectCommand({
    Bucket: bucket,
    Key: objectKey,
    ContentType: fileType,
    // Optional: metadata to help downstream processing
    Metadata: {
      "original-name": fileName.slice(0, 120),
      "uploader-id": sessionUserId,
    },
  });

  const uploadUrl = await getSignedUrl(s3, cmd, { expiresIn: 60 });

  return NextResponse.json({
    uploadUrl,
    key: objectKey,
    maxBytes: MAX_BYTES,
    requiredContentType: fileType,
  });
}
```

Why the short expiry matters: if a presigned URL leaks, the attacker’s window is small. A common production value is 30 to 120 seconds.
⚠️ Warning: Don’t build object keys from user-provided filenames. Path tricks and collisions are real. Always generate your own key and store the original name as metadata.
# Step 2: Browser Upload with Retries and Progress#
The client uploads directly to the presigned URL using fetch with PUT. Add retries because mobile networks fail, corporate proxies can reset connections, and users close laptops mid-upload.
Minimal Client Upload Function#
```typescript
// lib/uploadDirect.ts
export async function uploadViaPresignedUrl(params: {
  file: File;
  uploadUrl: string;
  contentType: string;
  retries?: number;
}) {
  const { file, uploadUrl, contentType } = params;
  const retries = params.retries ?? 2;
  let lastError: unknown;

  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      const res = await fetch(uploadUrl, {
        method: "PUT",
        headers: { "Content-Type": contentType },
        body: file,
      });
      if (!res.ok) throw new Error(`Upload failed: ${res.status}`);
      return { etag: res.headers.get("etag") };
    } catch (err) {
      lastError = err;
      // Back off before the next attempt, but don't sleep after the last one.
      if (attempt < retries) {
        await new Promise((r) => setTimeout(r, 400 * (attempt + 1)));
      }
    }
  }
  throw lastError;
}
```

If you need progress, fetch still lacks stable upload progress events in many browsers. For large files, use XMLHttpRequest for progress or implement multipart uploads with a client-side library.
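The XMLHttpRequest alternative can be sketched as follows. `uploadWithProgress` and `progressPercent` are illustrative names, and the XHR parts assume a browser environment:

```typescript
// Pure helper: clamp and round an upload progress ratio to a percentage.
export function progressPercent(loaded: number, total: number): number {
  if (total <= 0) return 0;
  return Math.min(100, Math.round((loaded / total) * 100));
}

// Sketch: PUT to a presigned URL via XHR so we can observe upload progress.
export function uploadWithProgress(params: {
  file: File;
  uploadUrl: string;
  contentType: string;
  onProgress: (percent: number) => void;
}): Promise<{ etag: string | null }> {
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    xhr.open("PUT", params.uploadUrl);
    xhr.setRequestHeader("Content-Type", params.contentType);
    xhr.upload.onprogress = (e) => {
      if (e.lengthComputable) params.onProgress(progressPercent(e.loaded, e.total));
    };
    xhr.onload = () =>
      xhr.status >= 200 && xhr.status < 300
        ? resolve({ etag: xhr.getResponseHeader("ETag") })
        : reject(new Error(`Upload failed: ${xhr.status}`));
    xhr.onerror = () => reject(new Error("Network error during upload"));
    xhr.send(params.file);
  });
}
```

You can combine this with the same retry loop as the fetch version; only the transport changes.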
Client Orchestration: Sign, Upload, Finalize#
Keep the finalize step even if you already have the key. It’s the server’s chance to verify and attach the upload to your database.
```typescript
// example usage in a client component action
import { uploadViaPresignedUrl } from "@/lib/uploadDirect";

export async function uploadFile(file: File) {
  // 1. Ask the server for a presigned URL.
  const signRes = await fetch("/api/uploads/sign", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ fileName: file.name, fileType: file.type, fileSize: file.size }),
  });
  if (!signRes.ok) throw new Error("Sign failed");
  const { uploadUrl, key, requiredContentType } = await signRes.json();

  // 2. Upload directly to storage.
  const { etag } = await uploadViaPresignedUrl({
    file,
    uploadUrl,
    contentType: requiredContentType,
    retries: 2,
  });

  // 3. Tell the server the upload is done so it can verify and record it.
  const finRes = await fetch("/api/uploads/finalize", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ key, etag }),
  });
  if (!finRes.ok) throw new Error("Finalize failed");
  return finRes.json();
}
```

# Step 3: Finalize Endpoint With Server-Side Verification#
Finalization should confirm the object exists and matches your expectations.
At minimum:
- ensure the key belongs to the requesting user
- HEAD the object and validate:
  - `ContentLength` is less than or equal to your max
  - `ContentType` matches your allowlist
- persist a DB record with status `uploaded` or `pending_scan`
Route Handler: Finalize#
```typescript
// app/api/uploads/finalize/route.ts
import { NextResponse } from "next/server";
import { HeadObjectCommand } from "@aws-sdk/client-s3";
import { getS3Client } from "@/lib/s3";

const MAX_BYTES = 10 * 1024 * 1024;
const ALLOWED_TYPES = ["image/jpeg", "image/png", "application/pdf"];

export async function POST(req: Request) {
  const sessionUserId = req.headers.get("x-user-id"); // replace with real auth
  if (!sessionUserId) {
    return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
  }

  const { key } = await req.json();
  const objectKey = String(key || "");

  // The key must live in this user's staging namespace.
  if (!objectKey.startsWith(`staging/${sessionUserId}/`)) {
    return NextResponse.json({ error: "Invalid key" }, { status: 403 });
  }

  const s3 = getS3Client();
  const bucket = process.env.S3_BUCKET!;

  let head;
  try {
    head = await s3.send(new HeadObjectCommand({ Bucket: bucket, Key: objectKey }));
  } catch {
    // HeadObject throws if the object was never uploaded or was already removed.
    return NextResponse.json({ error: "Object not found" }, { status: 404 });
  }

  const size = Number(head.ContentLength || 0);
  const type = String(head.ContentType || "");

  if (size <= 0 || size > MAX_BYTES) {
    return NextResponse.json({ error: "Invalid size" }, { status: 400 });
  }
  if (!ALLOWED_TYPES.includes(type)) {
    return NextResponse.json({ error: "Invalid type" }, { status: 400 });
  }

  // Persist to DB here: userId, key, size, type, status.
  // For risky content: set status = "pending_scan".
  return NextResponse.json({
    ok: true,
    key: objectKey,
    size,
    type,
    status: "uploaded",
  });
}
```

Why this matters: client-provided metadata is easy to spoof. HEAD-based verification is cheap and stops a large class of abuse.
💡 Tip: If you run a CDN in front, never serve directly from `staging`. Only serve from a separate `public` prefix or bucket after verification and scanning.
# Validation Strategy: Size, Type, and Content#
Validation needs layers. Each layer catches different failures.
Client-Side Validation (UX)#
Client checks reduce wasted time, but they are not security.
- block obviously wrong file types
- show max size before upload starts
- show progress and expected time remaining
Server-Side Validation (Security Gate)#
The server should enforce policy in both endpoints:
- in `/sign`: validate the requested `fileType` and `fileSize`
- in `/finalize`: validate the actual stored object metadata
Don’t Trust MIME Type Alone#
Browsers can send application/octet-stream or incorrect types, and attackers can fake image/png.
If your risk is high, add at least one of:
- content sniffing after upload
- magic-bytes detection in a worker
- antivirus scanning before making the file available
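A magic-bytes check for the three allowlisted types in this guide can be sketched like this. It is illustrative, not a full sniffer; real content inspection should cover more formats and still be paired with scanning:

```typescript
// Leading byte signatures for the allowlisted types.
// PNG: 0x89 "PNG", JPEG: FF D8 FF, PDF: "%PDF".
const SIGNATURES: Record<string, number[]> = {
  "image/png": [0x89, 0x50, 0x4e, 0x47],
  "image/jpeg": [0xff, 0xd8, 0xff],
  "application/pdf": [0x25, 0x50, 0x44, 0x46],
};

// Returns the detected type from the file's first bytes, or null if none match.
export function sniffType(firstBytes: Uint8Array): string | null {
  for (const [type, sig] of Object.entries(SIGNATURES)) {
    if (sig.every((b, i) => firstBytes[i] === b)) return type;
  }
  return null;
}
```

In the scan worker, compare `sniffType` of the downloaded object against the stored `ContentType` and reject mismatches.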
For a broader set of security controls, use this checklist: Web application security checklist.
# Security Hardening: Keys, Permissions, and Abuse Prevention#
Direct upload endpoints are common targets for bandwidth abuse and storing illegal content. Use these controls by default.
IAM or R2 API Token: Least Privilege#
Your server credentials should be able to:
- `PutObject` to `staging/`
- `HeadObject` in `staging/`
- optionally `CopyObject` to `public/` after scanning
- optionally `DeleteObject` for cleanup

Avoid wildcard bucket access.
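For AWS, the permission list above might look like the policy below. The bucket name is the example from the environment table; note that `HeadObject` is authorized by the `s3:GetObject` action. Treat this as a starting sketch, not a complete policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "StagingUploads",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::myapp-uploads/staging/*"
    },
    {
      "Sid": "PromoteAfterScan",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::myapp-uploads/public/*"
    }
  ]
}
```

For R2, scope the API token to the bucket with the narrowest permission set the dashboard allows.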
Object Keys: Predictability and Access Control#
Use unguessable keys, and tie them to a user namespace.
Good pattern: `staging/{userId}/{uuid}.ext`

Avoid:
- `uploads/myphoto.png`
- `uploads/2026/05/myphoto.png`
Short Expirations and One-Time Intent#
Presigned URL expiry should be short, but you also need app-level intent:
- create an `upload_intent` record with `userId`, `key`, `maxBytes`, `type`, `expiresAt`
- only allow finalize if there is a matching intent
- expire and garbage-collect intents
This blocks “sign once, upload forever” abuse when a URL leaks.
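The intent mechanism can be sketched in memory like this. In production the record would live in your database and consumption would coordinate with the idempotent finalize record; names here are illustrative:

```typescript
type UploadIntent = {
  userId: string;
  key: string;
  maxBytes: number;
  type: string;
  expiresAt: number; // epoch ms
};

const intents = new Map<string, UploadIntent>();

// Called from /sign after generating the object key.
export function createIntent(intent: UploadIntent): void {
  intents.set(intent.key, intent);
}

// Called from /finalize: returns the intent only if it exists, belongs to
// the user, and has not expired. Deleted on success so it matches only once.
export function consumeIntent(userId: string, key: string, now: number): UploadIntent | null {
  const intent = intents.get(key);
  if (!intent || intent.userId !== userId || intent.expiresAt < now) return null;
  intents.delete(key);
  return intent;
}
```

A presigned URL without a live intent then fails at finalize even if the URL itself is still valid.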
CORS#
For direct browser PUTs, configure bucket CORS to allow your site origin, PUT, and the needed headers.
Common gotchas:
- missing `ETag` in exposed headers
- too-broad `AllowedOrigin` (a wildcard on an authenticated app)
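A bucket CORS rule covering the flow in this guide might look like the following (`https://app.example.com` is a placeholder for your site origin); R2 accepts an equivalent configuration:

```json
[
  {
    "AllowedOrigins": ["https://app.example.com"],
    "AllowedMethods": ["PUT"],
    "AllowedHeaders": ["Content-Type"],
    "ExposeHeaders": ["ETag"],
    "MaxAgeSeconds": 3600
  }
]
```

`ExposeHeaders: ["ETag"]` is what lets the client-side code read `res.headers.get("etag")` after the PUT.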
Rate Limiting#
Rate limit `/sign` by user and IP. Even with short URL expiry, a bot can generate thousands of signed URLs per minute.
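A minimal fixed-window limiter illustrates the idea. This in-memory sketch only protects a single process; in production you would back it with Redis or an API gateway, and the quota below is illustrative:

```typescript
const WINDOW_MS = 60_000; // 1-minute window
const MAX_SIGNS_PER_WINDOW = 20; // illustrative per-user quota

const windows = new Map<string, { windowStart: number; count: number }>();

// Returns true if this user may request another presigned URL right now.
export function allowSign(userId: string, now: number): boolean {
  const w = windows.get(userId);
  if (!w || now - w.windowStart >= WINDOW_MS) {
    windows.set(userId, { windowStart: now, count: 1 });
    return true;
  }
  if (w.count >= MAX_SIGNS_PER_WINDOW) return false;
  w.count++;
  return true;
}
```

Call it at the top of the `/sign` handler and return 429 when it denies the request.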
# Antivirus and Malware Scanning Options#
If users can upload PDFs, Office files, archives, or anything that might be redistributed, scanning is usually required.
A practical approach is to treat uploads as quarantined until scanned.
Option Matrix#
| Option | Where it runs | Pros | Cons |
|---|---|---|---|
| ClamAV in a container worker | Your infra | Low cost, mature | Needs ops and updates |
| Managed malware scanning | Vendor | Less ops burden | Extra cost, vendor lock-in |
| Custom rule-based scanning | Worker | Tailored rules | Not a substitute for AV |
Typical Scanning Pipeline#
1. Upload goes to `staging/` with status `pending_scan`.
2. A background job downloads the object to a worker and scans it.
3. If clean, copy it to `public/` and mark it `clean`.
4. If infected, delete it and mark it `rejected`.
Keep scanning off the request path. Even small files can cause scan times that exceed serverless time limits.
ℹ️ Note: If you must process images, do it in a worker and re-encode them. Re-encoding strips many malicious payload techniques embedded in metadata and reduces file size for CDN delivery.
# Handling Retries, Idempotency, and Partial Uploads#
Retries are not optional in real user conditions. The tricky part is making retries safe.
Retry Rules That Work#
| Failure type | What to do | Why |
|---|---|---|
| Network error during PUT | Retry same presigned URL if still valid | Most common transient failure |
| 403 from storage | Request a new presigned URL | URL expired or signature mismatch |
| Upload succeeded but finalize failed | Retry finalize with same key | Make finalize idempotent |
| User reloads page mid-upload | Resume using stored key and intent | Avoid orphaned objects |
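The retry table can be encoded as a small decision function the client calls after each failure. This is a sketch; `decideRetry` is an illustrative name and the policy mirrors the table above:

```typescript
export type RetryAction = "retry-same-url" | "resign" | "retry-finalize" | "give-up";

// step: which call failed; status: HTTP status, or null for a network error.
export function decideRetry(
  step: "put" | "finalize",
  status: number | null,
  attempt: number,
  maxAttempts: number
): RetryAction {
  if (attempt >= maxAttempts) return "give-up";
  if (step === "finalize") return "retry-finalize"; // finalize is idempotent
  if (status === 403) return "resign"; // URL expired or signature mismatch
  return "retry-same-url"; // network error or other transient failure
}
```

Centralizing this keeps the upload loop readable and makes the policy easy to unit test.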
Make Finalize Idempotent#
Finalize should be safe to call multiple times. In DB terms:
- store a unique constraint on `key`
- if a record exists and belongs to the user, return success
This prevents duplicate records if the client retries.
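In-memory, the idempotent finalize looks like the sketch below; in a real database the unique constraint on `key` plays the role of this Map, and `finalizeOnce` is an illustrative name:

```typescript
type UploadRecord = { userId: string; key: string; size: number; type: string };

const records = new Map<string, UploadRecord>();

// Safe to call repeatedly: a retry returns the existing record as success.
export function finalizeOnce(rec: UploadRecord): { created: boolean; record: UploadRecord } {
  const existing = records.get(rec.key);
  if (existing) {
    if (existing.userId !== rec.userId) throw new Error("Key owned by another user");
    return { created: false, record: existing };
  }
  records.set(rec.key, rec);
  return { created: true, record: rec };
}
```

With SQL, the same effect comes from an upsert (`INSERT ... ON CONFLICT (key) DO NOTHING`) followed by a select.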
Cleanup Job for Orphaned Uploads#
Expect orphaned objects from abandoned uploads. A daily cleanup job should delete:
- objects in `staging/` older than 24 to 72 hours
- expired intents
This reduces storage cost and limits exposure.
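The selection step of that cleanup job is easy to keep pure and testable. Feed it the keys and timestamps from a `ListObjectsV2` call; the deletions themselves would use `DeleteObject`. `selectOrphans` is an illustrative name:

```typescript
// Pick staging objects older than the cutoff; everything else is left alone.
export function selectOrphans(
  objects: { key: string; lastModified: Date }[],
  now: Date,
  maxAgeHours: number
): string[] {
  const cutoff = now.getTime() - maxAgeHours * 3600 * 1000;
  return objects
    .filter((o) => o.key.startsWith("staging/") && o.lastModified.getTime() < cutoff)
    .map((o) => o.key);
}
```

Restricting the filter to the `staging/` prefix means the job can never delete promoted content, even if it is pointed at the whole bucket.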
# Common Pitfalls in Next.js Upload Implementations#
These issues cause most production incidents with uploads.
Pitfall 1: Uploading Through Route Handlers#
Uploading file bytes through Next.js endpoints is tempting but fragile:
- serverless platforms often enforce body size limits
- large requests can exceed execution timeouts
- Node memory spikes can crash processes when multiple uploads happen concurrently
Direct-to-storage avoids these failure modes almost entirely.
Pitfall 2: Long-Lived Presigned URLs#
Long expirations turn a one-time authorization into a long-lived capability. If a URL leaks via logs, browser extensions, or shared screenshots, it becomes an abuse vector.
Use short TTL and app-level intents.
Pitfall 3: Serving Unscanned Content#
If you serve directly from staging/, you are effectively publishing untrusted user content. That can lead to malware distribution, phishing, and brand damage.
Quarantine and scan, then promote to public.
Pitfall 4: Missing Observability#
Uploads fail in ways users can’t describe precisely. Instrument your flow:
- log sign and finalize responses with correlation IDs
- track upload failure rates by browser and network type
- alert on spikes in signed URLs per user
Use a structured approach from Web app observability: logging, metrics, tracing.
# Production Checklist: What to Ship#
Use this checklist before going live.
| Area | Must-have | Recommended |
|---|---|---|
| Auth | Protected sign and finalize | Per-user quotas |
| Validation | Size and allowlist types | Post-upload sniffing |
| Security | Random keys, short TTL | Staging and promotion buckets |
| Reliability | Client retries | Multipart for large files |
| Ops | Cleanup for orphans | Alerts on abuse patterns |
| Compliance | Audit logs | Malware scanning for risky content |
# Key Takeaways#
- Use a three-step flow: sign on the server, upload directly to S3 or R2 from the browser, then finalize with server-side verification.
- Enforce validation twice: at signing time using claimed metadata and at finalize time using HEAD to validate actual stored object size and type.
- Treat `staging` as untrusted: quarantine uploads, optionally scan for malware, then promote to a safe location before serving.
- Keep presigned URLs short-lived and tie them to an upload intent to reduce abuse if a URL leaks.
- Build for failure: add retries for PUT and make finalize idempotent, plus run a cleanup job for abandoned uploads.
# Conclusion#
Direct-to-object-storage uploads are the most reliable way to handle files in Next.js because they avoid server memory pressure and serverless timeouts while improving scalability. Implement the sign-upload-finalize pattern, validate aggressively, quarantine and scan when needed, and make retries safe with idempotent finalize logic.
If you want Samioda to implement a secure upload pipeline for S3 or Cloudflare R2, including scanning, observability, and hardening aligned with your auth setup, contact us and we’ll help you ship it fast and safely.