Integration · 3 min read
GPT Image 2 in a Next.js App (Production Pattern)
A server action, a streaming response, and a cached result table. Ship GPT Image 2 inside a Next.js 15 app without exposing your FAL_KEY.
Wiring GPT Image 2 into a Next.js app needs three things done right: the API call made on the server so FAL_KEY never leaks to the browser, a progress surface for the user, and a cache layer so you are not re-rendering the same prompt on every refresh.
Server action
```ts
// app/actions/generate.ts
"use server";
import { createHash } from "crypto";
import { fal } from "@fal-ai/client";
import { cache } from "@/lib/cache";

fal.config({ credentials: process.env.FAL_KEY });

// Hash the prompt so the cache key stays short and uniform.
function hash(input: string) {
  return createHash("sha256").update(input).digest("hex");
}

export async function generateImage(prompt: string) {
  const key = `img:${hash(prompt)}`;
  const cached = await cache.get(key);
  if (cached) return cached;

  const res = await fal.subscribe("fal-ai/gpt-image-2", {
    input: {
      prompt,
      image_size: "1024x1024",
      quality: "medium",
      num_images: 1,
      output_format: "png",
    },
  });

  const url = res.data.images[0].url;
  await cache.set(key, url, 60 * 60 * 24 * 30); // 30 days
  return url;
}
```
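The `cache` module imported above is not shown. A minimal sketch of `@/lib/cache` as an in-memory Map with TTL — in production you would swap in Redis or Vercel KV, since this version resets on every deploy and shares no state across serverless instances:

```typescript
// lib/cache.ts — hypothetical in-memory implementation of the
// interface used above: get(key) and set(key, value, ttlSeconds).
type Entry = { value: string; expiresAt: number };

const store = new Map<string, Entry>();

export const cache = {
  async get(key: string): Promise<string | null> {
    const entry = store.get(key);
    if (!entry) return null;
    if (Date.now() > entry.expiresAt) {
      store.delete(key); // evict lazily on read
      return null;
    }
    return entry.value;
  },
  async set(key: string, value: string, ttlSeconds: number): Promise<void> {
    store.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
  },
};
```

The `get`/`set` signatures here are inferred from how the server action calls them; match whatever your real cache client exposes.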
Client component
```tsx
"use client";
import { useState, useTransition } from "react";
import { generateImage } from "@/app/actions/generate";

export function Generator() {
  const [url, setUrl] = useState<string | null>(null);
  const [pending, start] = useTransition();

  async function onSubmit(form: FormData) {
    start(async () => {
      const result = await generateImage(String(form.get("prompt")));
      setUrl(result);
    });
  }

  return (
    <form action={onSubmit}>
      <input name="prompt" />
      <button disabled={pending}>Generate</button>
      {url && <img src={url} alt="" />}
    </form>
  );
}
```
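For the progress surface, `fal.subscribe` accepts an `onQueueUpdate` callback that reports queue status while the render runs. A sketch that maps those statuses to a user-facing label — the status strings assumed here follow the fal queue status shape, and note that inside a server action the update is only visible server-side (streaming it to the client needs a route handler or polling, left out for brevity):

```typescript
// Hypothetical helper: map fal queue statuses to a user-facing label.
export function statusLabel(status: string): string {
  switch (status) {
    case "IN_QUEUE":
      return "Waiting in queue…";
    case "IN_PROGRESS":
      return "Rendering…";
    case "COMPLETED":
      return "Done";
    default:
      return "Working…";
  }
}

// Wiring it into the server call (sketch):
// const res = await fal.subscribe("fal-ai/gpt-image-2", {
//   input: { prompt },
//   logs: true,
//   onQueueUpdate: (update) => console.log(statusLabel(update.status)),
// });
```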
Why a cache
fal.media URLs are stable for the life of the render. If two users submit the same prompt inside a short window, you do not need to pay for the render twice. Keying the cache on a hash of the normalized prompt can save 20 to 40 percent of a typical SaaS image bill.
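The hash-of-normalized-prompt key can be sketched like this. The normalization rules (trim, lowercase, collapse whitespace) are one reasonable choice, not the only one — tune them to how much prompt variation you want to collapse into one entry:

```typescript
import { createHash } from "crypto";

// Normalize so trivially different submissions ("A cat", " a  cat ")
// share one cache entry, then hash to keep the key short and uniform.
function normalizePrompt(prompt: string): string {
  return prompt.trim().toLowerCase().replace(/\s+/g, " ");
}

export function cacheKey(prompt: string): string {
  const digest = createHash("sha256")
    .update(normalizePrompt(prompt))
    .digest("hex");
  return `img:${digest}`;
}
```

Normalizing before hashing is what buys the savings: without it, "A cat" and "a cat" are two paid renders.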
