
GPT Image 2 Python Quickstart with fal.ai

A copy-paste quickstart that renders your first image in under two minutes using fal-client in Python.


Here is the shortest path from a fresh Python virtualenv to a rendered image file on disk using fal-ai/gpt-image-2 (or the 1.5 stand-in today). Two lines of setup, one call.

Install

```bash
pip install fal-client
export FAL_KEY=your_key_here
```

The full script

```python
import urllib.request

import fal_client

handler = fal_client.submit(
    "fal-ai/gpt-image-1.5",  # swap to fal-ai/gpt-image-2 when public
    arguments={
        "prompt": "A photoreal lighthouse on a rocky cliff at dawn, mist rising off the water, a single gull in the distance.",
        "image_size": "1536x1024",
        "quality": "high",
        "num_images": 1,
        "output_format": "png",
    },
)

result = handler.get()
print(result["images"][0]["url"])

# Save the render locally.
urllib.request.urlretrieve(result["images"][0]["url"], "lighthouse.png")
```

Run it, wait about 10 seconds, and you have a 1536x1024 render saved locally. Switch to fal_client.subscribe when you want logs streamed during generation, or fal_client.submit_async when you want to fan out dozens of jobs in parallel.

Streaming progress

```python
for event in fal_client.stream("fal-ai/gpt-image-1.5", arguments={...}):
    print(event)
```

The stream yields progress events so you can wire a progress bar without polling.
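To drive that progress bar, a small helper can pull the newest log line out of each event. This is a minimal sketch: the event shape assumed here (a plain dict with an optional `logs` list of `{"message": ...}` entries) is an illustration, not a documented schema — print a few real events first and adjust.

```python
def latest_message(event, fallback="working..."):
    """Return the newest log message from a progress event, if any.

    Assumes the event is a dict that may carry a "logs" list of
    {"message": ...} entries; verify against real stream events.
    """
    logs = event.get("logs") or []
    if logs:
        return logs[-1].get("message", fallback)
    return fallback


# Feed each streamed event through the helper instead of printing it raw:
print(latest_message({"logs": [{"message": "rendering 40%"}]}))
print(latest_message({}))
```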

Batch

```python
handlers = [fal_client.submit("fal-ai/gpt-image-1.5", arguments=args) for args in jobs]
results = [h.get() for h in handlers]
```

One call per job, then block on all. Concurrency tops out around 10 requests per account before you see 429s, so batch in chunks of 8 if you are running a thousand-image pipeline.
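The chunking itself is a plain generator, nothing fal-specific. The chunk size of 8 mirrors the limit above; the commented pipeline is a sketch that assumes `jobs` is a list of argument dicts, not an official fal-client pattern.

```python
def chunked(items, size=8):
    """Yield successive slices of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]


# Sketch of a chunked pipeline (assumes `jobs` is a list of argument dicts):
# results = []
# for batch in chunked(jobs, 8):
#     handlers = [fal_client.submit("fal-ai/gpt-image-1.5", arguments=a) for a in batch]
#     results.extend(h.get() for h in handlers)

print([len(c) for c in chunked(list(range(20)))])  # → [8, 8, 4]
```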

[Screenshot: a terminal showing the Python quickstart running end to end]
