Creative tech

2025

An AI content pipeline for a hemispherical planetarium dome

Context

NMD 345, University of Maine

Role

Solo build

Stack

Python · NumPy · Pillow · atan2 spherical math · DALL-E 3

Problem

Generating visually correct content for a hemispherical planetarium dome requires every image to be projected so that straight lines and proportions look right on a curved 180-degree surface. Manual reprojection in Photoshop is slow and brittle, and nothing I tried off the shelf gave the quality the dome needed.

Architecture

01

Use the two-argument arctangent (atan2) to map each pixel of the fisheye output back into the source equirectangular image

Why: Reverse mapping (iterating over output pixels and sampling the source) avoids the aliasing and gaps you get from forward mapping. atan2(y, x) gives a numerically stable azimuth across all four quadrants, which is exactly what's needed to wrap an equirectangular image around a dome.

Tradeoff: Slower than forward mapping for small outputs; trivially parallelizable for the 8K case. NumPy vectorization made the speed difference irrelevant.

02

Built it as a single Python script with a CLI, not a library or web app

Why: The workflow is: generate or pick an image, reproject, display on the dome. A CLI tool matches that flow exactly. A library would have added abstraction with no benefit; a web app would have added latency and a deployment story I didn't need.

Tradeoff: Less reusable from other code. Acceptable: the script is short enough that copy-pasting it into a notebook is a non-issue.

03

Made DALL-E generation optional rather than required

Why: Half the time I want to dome-project a photo or an existing render, not generate from scratch. Coupling the projection step to the generation step would have made the tool worse at the more common task.

Tradeoff: Two code paths to maintain. They share 90% of the implementation; the divergence is one CLI flag.

Outcomes

Output resolution

8192 × 8192

Quality mode produces dome-correct images at the dome's native projector resolution

Pipeline time per image

~12 seconds

8K projection on a single MacBook, including disk I/O. Generation adds 15 to 40 seconds depending on DALL-E response time.

Batch capability

unlimited

`--batch ./raw_images/` reprojects an entire folder; used for sequence work where 30+ frames needed consistent reprojection

Why this project existed

NMD 345 is a course on immersive media at the University of Maine, and one of its capstone deliverables involved presenting AI-generated content on a planetarium dome. The dome projects onto a hemisphere, so standard rectangular images get severely distorted when displayed there. It expects images in fisheye projection: a circular image whose center is the zenith (straight up) and whose edge is the horizon.

AI image generators like DALL-E produce standard rectangular or equirectangular output, not fisheye. The gap between "what AI gives you" and "what the dome accepts" is a coordinate transform.

The math

For each pixel in the fisheye output, I needed to know which pixel of the equirectangular source it should sample.

The fisheye image is parameterized by polar coordinates (r, θ) from the center. r tells you how far from the zenith you are (0 = straight up, 1 = horizon); θ tells you the azimuth angle. The equirectangular image is parameterized by (longitude, latitude), a flat unwrap of the sphere.

The map between them is straightforward once written out:

r = sqrt(x**2 + y**2)              # distance from fisheye center
theta = atan2(y, x)                # azimuth; atan2 handles all four quadrants
latitude = pi/2 - r * pi/2         # zenith (r = 0) to horizon (r = 1)
longitude = theta                  # direct passthrough

# Then sample source[latitude, longitude] for each output pixel

The reason atan2 matters and plain atan doesn't is that atan(y/x) only returns angles in (-π/2, π/2), so it collapses opposite quadrants onto each other. atan2(y, x) looks at the signs of both arguments and returns the correct angle in [-π, π], exactly what you need to wrap a 360-degree azimuth. The first version of this code used plain atan and produced an image that was correct in one quadrant and mirrored in the other three.
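
In the script this runs vectorized over the whole output grid at once. Here is a minimal NumPy sketch of the reverse mapping with nearest-neighbor sampling, assuming a source whose top row is the zenith; the function name and the I/O lines are illustrative, not the script's actual code:

import numpy as np
from PIL import Image

def equirect_to_fisheye(src, size):
    """Reverse-map an equirectangular array (H x W x 3) onto a
    square fisheye dome master of size x size pixels."""
    h, w = src.shape[:2]

    # Normalized output coordinates in [-1, 1], y pointing up
    xs = np.linspace(-1.0, 1.0, size)
    x, y = np.meshgrid(xs, -xs)

    r = np.sqrt(x**2 + y**2)             # 0 at zenith, 1 at horizon
    theta = np.arctan2(y, x)             # azimuth, correct in all four quadrants

    lat = np.pi / 2 - r * np.pi / 2      # zenith to horizon
    lon = theta                          # [-pi, pi]

    # Spherical coordinates -> nearest source pixel indices
    col = ((lon + np.pi) / (2 * np.pi) * (w - 1)).round().astype(int)
    row = ((np.pi / 2 - lat) / np.pi * (h - 1)).round().astype(int)

    out = src[row, col]                  # fancy indexing returns a fresh array
    out[r > 1.0] = 0                     # black outside the dome circle
    return out

src = np.asarray(Image.open("photo.jpg").convert("RGB"))
Image.fromarray(equirect_to_fisheye(src, 8192)).save("my_scene.png")

Every output pixel is written exactly once, which is why reverse mapping avoids the gaps forward mapping leaves; the whole-array operations are the NumPy vectorization that decision 01's tradeoff note refers to.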

Why a CLI

I considered three shapes for this tool: a Jupyter notebook, a Python library, a CLI script. The CLI won because the actual workflow looks like this:

# Reproject an image I already have
python planetarium_dome.py --input photo.jpg --name my_scene

# Or generate from a prompt and project in one step
python planetarium_dome.py --prompt "whales in the galaxy" --name whale

# Or batch a folder of frames for a sequence
python planetarium_dome.py --batch ./raw_images/

Every interaction is "give me this output for that input." No state to manage, no live preview to maintain, no library API surface to design. The script weighs in at about 200 lines, including argument parsing and the DALL-E integration.
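
A plausible skeleton of that argument surface, reconstructed from the invocations above; the mutually exclusive grouping and the default name are my assumptions, not necessarily how the real script is wired:

import argparse

parser = argparse.ArgumentParser(
    description="Reproject images into dome-ready fisheye projection.")
source = parser.add_mutually_exclusive_group(required=True)
source.add_argument("--input", help="existing image to reproject")
source.add_argument("--prompt", help="DALL-E 3 prompt: generate, then reproject")
source.add_argument("--batch", help="folder of images to reproject in bulk")
parser.add_argument("--name", default="scene", help="basename for the output file")
args = parser.parse_args()

# Generation and projection share one code path; --prompt just produces
# the input image first (the one-flag divergence from decision 03).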

What I'd do differently

I'd add a preview mode. The current workflow is: run the command, wait, open the output in an image viewer, decide if it's right. A preview mode that generates a fast 1024 × 1024 version with a --quick flag would tighten the iteration loop. The infrastructure is there; it'd be a 20-line change.
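
The change could be as small as this sketch, assuming the projection function from the math section above; the flag name and sizes come from the paragraph itself, the rest is illustrative:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--quick", action="store_true",
                    help="fast 1024 x 1024 preview instead of the full 8K render")
args = parser.parse_args()

size = 1024 if args.quick else 8192
# out = equirect_to_fisheye(src, size)   # same projection path, smaller grid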

I'd also write tests for the math. The atan2 quadrant bug took me an embarrassingly long time to track down by visual inspection of outputs. Four unit tests that picked specific known points (north pole, south pole, four cardinal directions) and verified their output coordinates would have caught it in five seconds.
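
Those tests are cheap to sketch. Assuming the mapping above, with fisheye_to_spherical as an illustrative helper rather than the script's actual API, something like this (runnable under pytest) would have flagged the quadrant bug immediately:

import math

def fisheye_to_spherical(x, y):
    """Normalized fisheye (x, y) -> (latitude, longitude), per the mapping above."""
    r = math.hypot(x, y)
    return math.pi / 2 - r * math.pi / 2, math.atan2(y, x)

def test_zenith():
    lat, _ = fisheye_to_spherical(0.0, 0.0)
    assert math.isclose(lat, math.pi / 2)

def test_horizon_east():
    lat, lon = fisheye_to_spherical(1.0, 0.0)
    assert math.isclose(lat, 0.0, abs_tol=1e-12)
    assert math.isclose(lon, 0.0, abs_tol=1e-12)

def test_horizon_north():
    _, lon = fisheye_to_spherical(0.0, 1.0)
    assert math.isclose(lon, math.pi / 2)

def test_horizon_west():
    # The case plain atan() folds into the wrong quadrant
    _, lon = fisheye_to_spherical(-1.0, 0.0)
    assert math.isclose(abs(lon), math.pi)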
