GPT Image 2 Python Quickstart with fal.ai
A copy-paste quickstart that renders your first image in under two minutes using fal-client in Python.
Here is the shortest path from a fresh Python virtualenv to a rendered image file on disk using fal-ai/gpt-image-2 (or the 1.5 stand-in today). Two lines of setup, one call.
Install
```bash
pip install fal-client
export FAL_KEY=your_key_here
```
The full script
```python
import fal_client
import urllib.request

handler = fal_client.submit(
    "fal-ai/gpt-image-1.5",  # swap to fal-ai/gpt-image-2 when public
    arguments={
        "prompt": "A photoreal lighthouse on a rocky cliff at dawn, mist rising off the water, a single gull in the distance.",
        "image_size": "1536x1024",
        "quality": "high",
        "num_images": 1,
        "output_format": "png",
    },
)

result = handler.get()
print(result["images"][0]["url"])

urllib.request.urlretrieve(result["images"][0]["url"], "lighthouse.png")
```
Run it, wait about 10 seconds, and you have a 1536x1024 render saved locally. Switch to fal_client.subscribe when you want logs streamed during generation, or fal_client.submit_async when you want to fan out dozens of jobs in parallel.
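For instance, here is a minimal subscribe call that prints logs as they arrive, following fal-client's queue-update callback pattern (the prompt is shortened here for brevity):

```python
import fal_client

def on_queue_update(update):
    # InProgress updates carry the log lines emitted so far when with_logs=True.
    if isinstance(update, fal_client.InProgress):
        for log in update.logs or []:
            print(log["message"])

result = fal_client.subscribe(
    "fal-ai/gpt-image-1.5",
    arguments={"prompt": "A photoreal lighthouse on a rocky cliff at dawn."},
    with_logs=True,
    on_queue_update=on_queue_update,
)
print(result["images"][0]["url"])
```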
Streaming progress
```python
for event in fal_client.stream("fal-ai/gpt-image-1.5", arguments={...}):
    print(event)
```
The stream yields progress events so you can wire a progress bar without polling.
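If you want an actual progress readout, something like the sketch below works. Note that the "progress" key is an assumption: the exact event payload depends on the endpoint, so print a few raw events first and adjust the field name to match what you see.

```python
import fal_client

for event in fal_client.stream(
    "fal-ai/gpt-image-1.5",
    arguments={"prompt": "A photoreal lighthouse on a rocky cliff at dawn."},
):
    # "progress" is an assumed field name; inspect the raw events your
    # endpoint emits and adjust this check accordingly.
    if isinstance(event, dict) and "progress" in event:
        print(f"\rprogress: {event['progress']:.0%}", end="", flush=True)
print()
```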
Batch
```python
handlers = [fal_client.submit("fal-ai/gpt-image-1.5", arguments=args) for args in jobs]
results = [h.get() for h in handlers]
```
One call per job, then block on all. Concurrency tops out around 10 requests per account before you see 429s, so batch in chunks of 8 if you are running a thousand-image pipeline.
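A minimal sketch of that chunking, reusing the jobs list from above and an assumed chunk size of 8:

```python
import fal_client

CHUNK_SIZE = 8  # stay below the ~10 concurrent request ceiling
results = []
for start in range(0, len(jobs), CHUNK_SIZE):
    chunk = jobs[start:start + CHUNK_SIZE]
    # Submit the whole chunk first, then block on it before moving on.
    handlers = [
        fal_client.submit("fal-ai/gpt-image-1.5", arguments=args) for args in chunk
    ]
    results.extend(handler.get() for handler in handlers)
```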
