These integration guides are not official documentation and the Strapi Support Team will not provide assistance with them.
What Is Upstash?
Upstash is a serverless data platform that provides managed Redis, messaging (QStash), vector search, and workflow orchestration — all with pay-per-request pricing and native REST API access.
Unlike traditional Redis hosting that requires persistent TCP connections and manual connection pooling, Upstash's @upstash/redis SDK communicates over HTTP. That means it works everywhere: AWS Lambda, Cloudflare Workers, Vercel Edge Functions, and standard Node.js servers like the one powering Strapi.
The SDK is TypeScript-native with full type definitions built in — no separate @types package needed. It includes automatic retry logic (five retries with exponential backoff by default) and supports all standard Redis data structures: strings, hashes, lists, sets, and sorted sets.
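To make "five retries with exponential backoff" concrete, here is an illustrative model of that retry schedule. This is a sketch only: the base delay and doubling factor are assumptions for demonstration, not the SDK's actual internals.

```javascript
// Illustrative model of exponential backoff (not @upstash/redis internals).
// baseMs and the doubling factor are assumed values for demonstration.
function backoffDelays(retries = 5, baseMs = 100) {
  const delays = [];
  for (let attempt = 0; attempt < retries; attempt++) {
    // Each retry waits twice as long as the previous one.
    delays.push(baseMs * 2 ** attempt);
  }
  return delays;
}

console.log(backoffDelays()); // → [100, 200, 400, 800, 1600]
```

The practical takeaway: transient network hiccups are absorbed automatically, but a hard outage will still surface as an error after the final retry, which is why the middleware later in this guide fails open.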
For Strapi developers, the most relevant Upstash products are:
- Redis — API response caching, rate limiting, session storage
- QStash — Background job processing triggered from Strapi lifecycle hooks
- Global Database — Multi-region replication for low-latency reads worldwide
Why Integrate Upstash with Strapi
Strapi v5's REST API is powerful, but every uncached request runs through the full request pipeline — routing, policies, controllers, services, and database queries. Adding Upstash Redis to the stack addresses several performance and security concerns at once.
- Reduce API response latency. Caching Strapi REST responses in Upstash Redis can cut response times significantly by serving content directly from memory instead of re-querying the database on every request.
- Protect endpoints from abuse. The @upstash/ratelimit library provides sliding window, fixed window, and token bucket algorithms that plug directly into Strapi's middleware pipeline.
- Zero connection overhead. Upstash's REST-based SDK eliminates the connection pool management that plagues traditional Redis clients in serverless environments — no cold-start connection delays, no socket exhaustion.
- Pay only for what you use. Upstash's per-request pricing means zero cost during low-traffic periods, making it practical for side projects and production apps alike.
- Automatic cache invalidation. Strapi v5's lifecycle hooks (afterUpdate, afterDelete) let you purge stale cache entries the moment content changes in the Admin Panel.
- TypeScript-first developer experience. Both Strapi v5 and the Upstash SDK support TypeScript natively, giving you type-safe Redis operations without extra configuration.
How to Integrate Upstash with Strapi
This section covers building a custom Upstash integration from scratch using @upstash/redis and Strapi v5's middleware system. You'll create a caching middleware, a rate limiting middleware, and lifecycle-based cache invalidation.
Prerequisites
Before starting, make sure you have:
- Node.js 18+ (LTS recommended)
- Strapi v5 project initialized (project structure reference)
- Upstash account with a Redis database created at console.upstash.com
- Your Upstash Redis REST URL and REST Token from the database details page
- Basic familiarity with Strapi's backend customization concepts (middleware, controllers, services)
If you don't have a Strapi v5 project yet:
npx create-strapi@latest my-project --quickstart

Step 1: Install Dependencies
From your Strapi project root, install the @upstash/redis SDK and @upstash/ratelimit package:
npm install @upstash/redis @upstash/ratelimit

The @upstash/redis package provides the core Redis client. The @upstash/ratelimit package builds on top of it with production-ready algorithms for request throttling.
Step 2: Configure Environment Variables
Add your Upstash credentials to the .env file in your Strapi project root. The @upstash/redis SDK's Redis.fromEnv() method expects these specific variable names:
# Upstash Redis
UPSTASH_REDIS_REST_URL=https://your-endpoint.upstash.io
UPSTASH_REDIS_REST_TOKEN=your-token-here
# Cache Configuration
CACHE_TTL_SECONDS=3600
CACHE_KEY_PREFIX=strapi

Strapi v5 provides an env() helper with type-casting utilities for use in configuration files:
// config/upstash.js
module.exports = ({ env }) => ({
redis: {
url: env("UPSTASH_REDIS_REST_URL"),
token: env("UPSTASH_REDIS_REST_TOKEN"),
},
cache: {
ttl: env.int("CACHE_TTL_SECONDS", 3600),
prefix: env("CACHE_KEY_PREFIX", "strapi"),
},
});

You can access these values anywhere in Strapi via strapi.config.get("upstash.redis.url"). Make sure .env is listed in your .gitignore — never commit credentials to version control.
Step 3: Create the Caching Middleware
Strapi v5 runs on Koa, and its middleware system supports four tiers: global, API-scoped, route-scoped, and plugin-scoped. For API response caching, a global middleware gives you the broadest coverage.
Create the middleware file:
// src/middlewares/upstash-cache.js
"use strict";
const { Redis } = require("@upstash/redis");
module.exports = (config, { strapi }) => {
const redis = Redis.fromEnv();
const DEFAULT_TTL = config.ttl || 3600;
return async (ctx, next) => {
// Only cache GET requests to the API
if (ctx.request.method !== "GET" || !ctx.request.url.startsWith("/api/")) {
return next();
}
    // Use the configurable prefix, falling back to "strapi"
    const cacheKey = `${config.prefix || "strapi"}:${ctx.request.url}`;
// REQUEST PHASE: Check cache
try {
const cached = await redis.get(cacheKey);
if (cached) {
ctx.body = cached;
ctx.set("X-Cache", "HIT");
return;
}
} catch (err) {
strapi.log.warn("Cache lookup failed, proceeding without cache", err);
}
// Cache miss — execute the controller
await next();
// RESPONSE PHASE: Store successful responses
try {
if (ctx.status === 200 && ctx.body) {
await redis.set(cacheKey, ctx.body, { ex: DEFAULT_TTL });
ctx.set("X-Cache", "MISS");
}
} catch (err) {
strapi.log.warn("Cache store failed", err);
}
};
};

A few things worth noting here. The try/catch blocks implement a fail-open pattern: if Upstash is unreachable, the request still goes through to the database. The X-Cache header makes it easy to verify caching behavior during development. And the Redis client is initialized once when the middleware is created, then reused across all requests.
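One refinement worth considering: the key above is built from the raw request URL, so /api/articles?a=1&b=2 and /api/articles?b=2&a=1 produce separate cache entries for identical data. A small normalizer (a sketch, not part of the middleware above; the function name is ours) can sort query parameters so equivalent requests share a key:

```javascript
// Sketch: normalize a request URL so query-parameter order doesn't
// fragment the cache. Uses only Node's built-in URL API.
function normalizeCacheKey(url, prefix = "strapi") {
  // A base is required to parse relative URLs; only path + query are kept.
  const parsed = new URL(url, "http://localhost");
  const params = [...parsed.searchParams.entries()].sort(([a], [b]) =>
    a.localeCompare(b)
  );
  const query = params.map(([k, v]) => `${k}=${v}`).join("&");
  return `${prefix}:${parsed.pathname}${query ? "?" + query : ""}`;
}

console.log(normalizeCacheKey("/api/articles?b=2&a=1"));
// → strapi:/api/articles?a=1&b=2
```

Swapping this in for the template-literal key would raise the hit rate on endpoints that clients call with varying parameter order.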
Step 4: Create the Rate Limiting Middleware
The @upstash/ratelimit package provides three algorithms. The sliding window algorithm is the best fit for most API rate limiting — it smooths out burst traffic at window boundaries:
// src/middlewares/upstash-ratelimit.js
"use strict";
const { Ratelimit } = require("@upstash/ratelimit");
const { Redis } = require("@upstash/redis");
module.exports = (config, { strapi }) => {
  // Build the limiter inside the factory so the configured maxRequests
  // value is actually applied (falls back to 100 requests per minute).
  const ratelimit = new Ratelimit({
    redis: Redis.fromEnv(),
    limiter: Ratelimit.slidingWindow(config.maxRequests || 100, "60 s"),
    analytics: true,
  });
  return async (ctx, next) => {
    // Prefer per-user limits for authenticated requests; fall back to the
    // client IP (x-forwarded-for may hold a comma-separated proxy chain).
    const identifier =
      ctx.state?.user?.id?.toString() ||
      (ctx.request.headers["x-forwarded-for"] || "").split(",")[0].trim() ||
      ctx.request.ip ||
      "anonymous";
const { success, remaining, reset } = await ratelimit.limit(identifier);
    ctx.set("X-RateLimit-Limit", String(config.maxRequests || 100));
ctx.set("X-RateLimit-Remaining", remaining.toString());
ctx.set("X-RateLimit-Reset", reset.toString());
if (!success) {
ctx.status = 429;
ctx.body = {
error: "Too Many Requests",
message: "Rate limit exceeded. Please try again later.",
retryAfter: Math.ceil((reset - Date.now()) / 1000),
};
return;
}
return next();
};
};

The identifier strategy matters. When a user is authenticated (via Strapi's built-in auth), ctx.state.user.id gives you per-user limits. For anonymous requests, the IP address is the fallback. You can also combine identifiers — ${ip}:${ctx.request.url} — for per-endpoint limits.
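The priority order described above can be sketched as a small pure helper (the function name and the user: prefix are our conventions, not a Strapi or Upstash API):

```javascript
// Sketch: choose a rate-limit identifier. Authenticated users get
// per-user limits; anonymous requests fall back to the client IP.
// x-forwarded-for may be a comma-separated chain; the first entry
// is the original client.
function rateLimitIdentifier({ userId, forwardedFor, ip, url, perEndpoint = false }) {
  const base =
    (userId && `user:${userId}`) ||
    (forwardedFor && forwardedFor.split(",")[0].trim()) ||
    ip ||
    "anonymous";
  // Optionally scope the limit to a single endpoint.
  return perEndpoint ? `${base}:${url}` : base;
}

console.log(rateLimitIdentifier({ forwardedFor: "203.0.113.7, 10.0.0.1" }));
// → 203.0.113.7
```

In the middleware, you would call this with values pulled from ctx.state and ctx.request.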
Step 5: Register Middlewares in Strapi
Add both middlewares to your global middleware configuration. Array order controls execution order — place your custom middlewares after the built-in body parser and query parser so those process the request first:
// config/middlewares.js
module.exports = [
"strapi::errors",
"strapi::security",
"strapi::cors",
"strapi::poweredBy",
"strapi::logger",
"strapi::query",
"strapi::body",
"strapi::session",
"strapi::favicon",
"strapi::public",
{
name: "global::upstash-ratelimit",
config: {
maxRequests: 100,
},
},
{
name: "global::upstash-cache",
config: {
ttl: 3600,
},
},
];

The rate limiter sits before the cache middleware intentionally. You want to reject abusive requests before they even hit the cache layer.
Step 6: Add Cache Invalidation with Lifecycle Hooks
Caching is only useful if stale content gets purged when editors update it. In Strapi v5, Document Service middleware is the recommended way to react to content changes, but lifecycle hooks like afterUpdate and afterDelete remain supported and are the simpler option, so this guide uses them.
One detail worth knowing: lifecycle hooks sit at the database layer, so they fire for Query Engine calls (strapi.db.query) and for Document Service operations (strapi.documents) alike. Since the Admin Panel goes through the Document Service, the hooks fire as expected for editor workflows, though a single Document Service operation can trigger a hook more than once.
// src/api/article/content-types/article/lifecycles.js
"use strict";
const { Redis } = require("@upstash/redis");
async function invalidateArticleCache() {
const redis = Redis.fromEnv();
try {
const keys = await redis.keys("strapi:/api/articles*");
if (keys.length > 0) {
await Promise.all(keys.map((key) => redis.del(key)));
}
} catch (err) {
console.warn("Cache invalidation failed:", err);
}
}
module.exports = {
async afterUpdate() {
await invalidateArticleCache();
},
async afterDelete() {
await invalidateArticleCache();
},
};

This clears all cached article responses when any article is updated or deleted. For content types with high update frequency, you could scope invalidation to specific documentId values using Strapi v5's flat response format — where documentId (a string) replaces the nested numeric id from v4.
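A hedged sketch of that scoped approach: compute the exact keys affected by one document instead of wildcard-matching everything. The key layout mirrors the strapi:<url> convention used by the caching middleware earlier; the helper name is ours, and the assumption that event.result.documentId is available in the hook should be verified against your Strapi version.

```javascript
// Sketch: the cache keys affected by a change to a single article,
// following the `strapi:<request url>` key convention used earlier.
function scopedInvalidationKeys(documentId, prefix = "strapi") {
  return [
    `${prefix}:/api/articles/${documentId}`, // the detail endpoint
    `${prefix}:/api/articles`,               // the unfiltered list
  ];
}

// Inside the lifecycle hook (assuming event.result.documentId is set):
//   const keys = scopedInvalidationKeys(event.result.documentId);
//   await redis.del(...keys);
console.log(scopedInvalidationKeys("abc123"));
// → ["strapi:/api/articles/abc123", "strapi:/api/articles"]
```

Note the limitation: paginated or filtered list URLs are not covered by these two keys, so those still need a wildcard purge or a registry of known keys.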
Step 7: Verify the Integration
Start your Strapi server and test the caching behavior:
npm run develop

Make a request to any content API endpoint:
curl -i http://localhost:1337/api/articles

On the first request, you should see X-Cache: MISS in the response headers. The second identical request returns X-Cache: HIT — served directly from Upstash Redis without touching the database.
To verify rate limiting, send requests rapidly:
for i in $(seq 1 105); do
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:1337/api/articles
done

After 100 requests within 60 seconds, you should start seeing 429 status codes.
Project Example: Cached Blog API with Upstash and Next.js
This project demonstrates a common architecture: Strapi v5 as the content backend, Upstash Redis as the caching layer, and a Next.js frontend consuming the cached API. The integration works at two levels — Strapi-side middleware caching reduces database load, and frontend-side cache lookups reduce requests to Strapi entirely.
Project Architecture
[Next.js Frontend] → [Upstash Redis (cache check)] → [Strapi v5 REST API] → [Database]
                              ↑                              |
                              └────── cache store on miss ───┘

Setting Up the Strapi Content Types
Assume you have an Article Collection Type with these fields: title (text), content (rich text), slug (text, unique), and a relation to a Category Collection Type. Create these through the Strapi Admin Panel or using the Content-Type Builder.
Strapi-Side: Custom Controller with Cache Headers
Beyond the global caching middleware from earlier steps, you can set Cache-Control headers at the controller level for CDN and browser caching:
// src/api/article/controllers/article.js
"use strict";
const { createCoreController } = require("@strapi/strapi").factories;
module.exports = createCoreController("api::article.article", ({ strapi }) => ({
async find(ctx) {
const { data, meta } = await super.find(ctx);
ctx.set("Cache-Control", "public, max-age=300, s-maxage=600");
return { data, meta };
},
async findOne(ctx) {
const result = await super.findOne(ctx);
ctx.set("Cache-Control", "public, max-age=600");
return result;
},
}));

This creates a two-layer caching strategy: the Upstash middleware caches full responses server-side, while Cache-Control headers let CDNs and browsers cache on the client side. Both layers reduce load on your Strapi database.
Frontend: Cache-Aside Pattern in Next.js
On the Next.js side, implement a cache-aside pattern that checks Upstash before calling Strapi. This is especially useful in Next.js server components and API routes:
// lib/articles.ts
import { Redis } from "@upstash/redis";
const redis = Redis.fromEnv();
interface Article {
documentId: string;
title: string;
content: string;
slug: string;
publishedAt: string;
}
interface StrapiResponse {
data: Article[];
meta: { pagination: { page: number; pageSize: number } };
}
export async function getArticles(page: number = 1): Promise<StrapiResponse> {
const cacheKey = `blog:articles:page:${page}`;
// Check Upstash cache first
const cached = await redis.get<StrapiResponse>(cacheKey);
if (cached) return cached;
// Cache miss — fetch from Strapi
const response = await fetch(
`${process.env.STRAPI_URL}/api/articles?pagination[page]=${page}&populate=category`,
{
headers: {
Authorization: `Bearer ${process.env.STRAPI_API_TOKEN}`,
},
}
);
const result: StrapiResponse = await response.json();
// Store in Upstash with 30-minute TTL
await redis.set(cacheKey, result, { ex: 1800 });
return result;
}

Note the response structure — Strapi v5 uses a flat format where fields like title and content sit directly on the data object, not nested under attributes like in v4.
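If you are migrating from v4, a small normalizer lets one codebase read both shapes. This is a defensive sketch (the function name and the documentId-from-id fallback are our choices); in a pure v5 project it is unnecessary:

```javascript
// Sketch: accept either Strapi v4's nested `attributes` shape or
// v5's flat shape, and always return a flat article object.
function normalizeArticle(entry) {
  if (entry.attributes) {
    // v4 shape: { id: 1, attributes: { title, ... } }
    // Fall back to the numeric id as a string identifier.
    return { documentId: String(entry.id), ...entry.attributes };
  }
  // v5 shape: fields already sit directly on the object.
  return entry;
}

console.log(normalizeArticle({ id: 1, attributes: { title: "Hello" } }));
// → { documentId: "1", title: "Hello" }
```

Applying this to each element of response.data keeps the rest of the frontend agnostic to the Strapi version behind it.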
Lifecycle-Based Invalidation for the Blog
Extend the lifecycle hooks to handle both articles and categories, since a category change might affect how articles render:
// src/api/category/content-types/category/lifecycles.js
"use strict";
const { Redis } = require("@upstash/redis");
async function invalidateBlogCache() {
  const redis = Redis.fromEnv();
  try {
    // A category change can affect how articles render, so clear
    // both the category and article key patterns on every mutation.
    const articleKeys = await redis.keys("strapi:/api/articles*");
    const categoryKeys = await redis.keys("strapi:/api/categories*");
    const keys = [...articleKeys, ...categoryKeys];
    if (keys.length > 0) {
      await redis.del(...keys);
    }
  } catch (err) {
    console.warn("Cache invalidation failed:", err);
  }
}
module.exports = {
  async afterUpdate() {
    await invalidateBlogCache();
  },
  async afterDelete() {
    await invalidateBlogCache();
  },
};

TTL Strategy by Content Type
Different content types have different update frequencies. Here's a practical breakdown:
| Content Type | Recommended TTL | Invalidation Strategy |
|---|---|---|
| Blog posts (published) | 5 minutes–2 hours | Document Service middleware or afterUpdate lifecycle hook |
| Categories/tags | 24 hours | afterUpdate lifecycle hook |
| Homepage (Single Type) | 30–60 minutes | afterUpdate lifecycle hook |
| Navigation menus | 24 hours | afterUpdate lifecycle hook |
| Search results | 10–30 minutes | TTL expiry only |
Note: Strapi v5 recommends Document Service middleware over afterUpdate lifecycle hooks for cache invalidation in new projects; the hook-based strategies above still work, as this guide demonstrates.
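The table above can be encoded directly as configuration. The UIDs and the midpoint TTL values below are assumptions to illustrate the idea; tune them to your own content types and traffic:

```javascript
// Sketch: per-content-type TTLs (in seconds) based on the table above.
const CACHE_TTLS = {
  "api::article.article": 60 * 30,        // blog posts: 30 minutes
  "api::category.category": 60 * 60 * 24, // categories/tags: 24 hours
  "api::homepage.homepage": 60 * 45,      // homepage single type: 45 minutes
  "api::menu.menu": 60 * 60 * 24,         // navigation menus: 24 hours
  search: 60 * 15,                        // search results: 15 minutes
};

// Look up a TTL, defaulting to the global CACHE_TTL_SECONDS value.
function ttlFor(uid, fallback = 3600) {
  return CACHE_TTLS[uid] ?? fallback;
}

console.log(ttlFor("api::article.article")); // → 1800
```

The caching middleware could call ttlFor() with the matched content-type UID instead of using a single global TTL.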
This project gives you a solid foundation. From here, you can add QStash for background jobs (like sending newsletters when articles publish), or Upstash's Global Database feature for multi-region read replicas if your audience is geographically distributed.
Strapi Open Office Hours
If you have any questions about Strapi 5 or just would like to stop by and say hi, you can join us at Strapi's Discord Open Office Hours, Monday through Friday, from 12:30 pm to 1:30 pm CST: Strapi Discord Open Office Hours.
For more details, visit the Strapi documentation and Upstash documentation.
Get Started in Minutes
Run npx create-strapi-app@latest in your terminal and follow our Quick Start Guide to build your first Strapi project.