Accurate map art from raw geographic data, AI-assisted rendering, and automated quality assurance.
An end-to-end pipeline that turns raw geographic data into print-ready map art across four curated styles.
Every competitor uses pre-rendered tiles. We start from source data, producing output no tile-based approach can replicate.
AI-assisted development, multi-agent QA, and structured decision-making: a single operator running a full production pipeline.
Every custom map art product on the market uses the same approach: tile-based rendering from Mapbox or Google Maps, styled with a limited set of overlays. The results are serviceable but generic. They look like screenshots with a filter, not art.
We wanted to build something different: map prints where the data itself creates the visual texture. Where building density, road hierarchy, water boundaries, and green space aren't just displayed but composed. And where every print is accurate enough that someone who lives in the neighborhood would recognize it instantly.
That required starting from raw geographic data, not pre-rendered tiles.
Data quality is the actual moat.
The pipeline reconciles data from three primary sources, each with different coverage, accuracy, and update cadences:
OpenStreetMap provides the structural skeleton: building footprints, road networks, water polygons, park boundaries, rail lines, and bridge geometry. Coverage is excellent for dense urban areas but varies in completeness.
NOAA provides authoritative harbor and coastal boundary data. For waterfront neighborhoods, the accuracy of the water boundary is critical. We reconcile differences between OSM's community-contributed polygons and official shoreline data in the geometry processing layer.
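As an illustrative sketch of that reconciliation (not the production geometry layer), one approach is to prefer the authoritative NOAA vertex wherever the community-contributed OSM vertex deviates beyond a tolerance. The function name, tolerance, and point-list pairing here are all hypothetical:

```python
import math

def reconcile_shoreline(osm_points, noaa_points, tolerance_m=15.0):
    """Walk paired vertices from the OSM and NOAA shorelines and prefer
    the authoritative NOAA coordinate wherever the community-contributed
    OSM vertex drifts beyond the tolerance (metres, assuming points are
    already in a projected CRS)."""
    reconciled = []
    for osm, noaa in zip(osm_points, noaa_points):
        deviation = math.hypot(osm[0] - noaa[0], osm[1] - noaa[1])
        reconciled.append(noaa if deviation > tolerance_m else osm)
    return reconciled

# A vertex 40 m off the official shoreline snaps to NOAA's coordinate;
# one within tolerance keeps OSM's (often finer-grained) position.
osm = [(0.0, 0.0), (100.0, 5.0)]
noaa = [(0.0, 40.0), (100.0, 0.0)]
print(reconcile_shoreline(osm, noaa))  # [(0.0, 40.0), (100.0, 5.0)]
```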
NYC PLUTO provides parcel-level data including building floor counts, year built, and land use classification. This enables per-building visual variation grounded in real-world characteristics: taller buildings emit more light in dark mode; older buildings render in warmer tones in the vintage style.
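A minimal sketch of that attribute-to-style mapping, with hypothetical thresholds and formulas (the real pipeline's parameters are not shown here):

```python
def building_params(floors, year_built, style):
    """Map PLUTO parcel attributes to per-building render parameters.
    Illustrative only: every threshold below is an assumption.
    - dark mode: taller buildings get a stronger glow
    - vintage: older buildings shift toward warmer tones
    """
    if style == "dark":
        # Normalise floor count into a 0.2-1.0 glow intensity, capped at 60 floors.
        glow = 0.2 + 0.8 * min(floors, 60) / 60
        return {"glow_alpha": round(glow, 3)}
    if style == "vintage":
        # Buildings a century old or older land in the warmest tone bucket.
        age = max(0, 2024 - year_built)
        warmth = min(age / 100, 1.0)
        return {"warmth": round(warmth, 2)}
    return {}

print(building_params(45, 1931, "dark"))    # {'glow_alpha': 0.8}
print(building_params(6, 1910, "vintage"))  # {'warmth': 1.0}
```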
Each map is rendered in one of four curated styles across two pipeline branches. The critical design decision: all styles render the same accurate geographic data. The geometry pipeline is style-independent. Cartographic precision is the constant; artistic treatment is the variable.
All styles render the same underlying data.
Precision is constant. Style is variable.
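That separation can be sketched as one geometry pass feeding style-specific parameter sets. The style names, palettes, and layer list below are invented placeholders, not the product's actual styles:

```python
# Hypothetical sketch: geometry is processed once; each style is just a
# parameter set applied to the same processed layers.
GEOMETRY_LAYERS = ["buildings", "roads", "water", "parks", "rail", "bridges"]

STYLES = {
    "blueprint": {"bg": "#0b2545", "buildings": "#eef4ed", "line_weight": 0.4},
    "dark":      {"bg": "#050505", "buildings": "#f5d547", "line_weight": 0.3},
    "vintage":   {"bg": "#f4ecd8", "buildings": "#5b4636", "line_weight": 0.5},
    "minimal":   {"bg": "#ffffff", "buildings": "#111111", "line_weight": 0.25},
}

def render_jobs(geometry):
    """Yield one render job per style over the SAME geometry object.
    Precision lives in `geometry`; only presentation varies per job."""
    for name, params in STYLES.items():
        yield {"style": name, "params": params, "geometry": geometry}

geometry = {layer: [] for layer in GEOMETRY_LAYERS}  # placeholder geometry
jobs = list(render_jobs(geometry))
print(len(jobs))  # 4
```

Because every job holds a reference to the same geometry, a cartographic fix propagates to all four styles automatically.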
A single operator producing hundreds of map renders can't manually QA each one. We built a multi-agent QA system using Claude's vision API: three stateless agents, each a single-shot API call returning structured JSON.
Bounding box validation reviews a thumbnail of the proposed map area before any render begins, evaluating whether the framing captures the neighborhood's compositional anchors. It rejects poor compositions before pipeline resources are spent.
Geometry cleanup reviews the processed geometry output, identifying artifacts from OSM data quality issues: duplicate buildings, misclassified features, clipped polygons at the render boundary.
Render QA reviews the completed render against style-specific criteria: tonal hierarchy, print fidelity, and compositional quality. Each render receives a structured verdict with explicit notes.
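Since each agent is a single-shot call that must return structured JSON, the pipeline needs to parse model output defensively. A sketch of one possible verdict schema and a fail-closed parser (field names are illustrative, not the production schema; the vision API call itself is omitted):

```python
import json
from dataclasses import dataclass, field

@dataclass
class Verdict:
    """Structured verdict a QA agent returns. Illustrative fields."""
    agent: str            # e.g. "bbox" | "geometry" | "render"
    passed: bool
    score: float          # 0.0-1.0 quality score
    notes: list = field(default_factory=list)

def parse_verdict(agent: str, raw: str) -> Verdict:
    """Parse a model response into a Verdict, failing closed: anything
    malformed counts as a rejection so a bad render never ships."""
    try:
        data = json.loads(raw)
        return Verdict(agent=agent,
                       passed=bool(data["passed"]),
                       score=float(data["score"]),
                       notes=list(data.get("notes", [])))
    except (json.JSONDecodeError, KeyError, TypeError, ValueError):
        return Verdict(agent=agent, passed=False, score=0.0,
                       notes=["unparseable model output"])

ok = parse_verdict("render", '{"passed": true, "score": 0.91, "notes": ["strong tonal hierarchy"]}')
bad = parse_verdict("render", "Sure! Here is my assessment...")
print(ok.passed, bad.passed)  # True False
```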
Solo product development creates a specific risk: decisions go unchallenged. Without a team to push back, confirmation bias compounds.
To address this, we developed a structured debate methodology where AI agents simulate distinct executive perspectives (CEO, CMO, CPO, Contrarian, and domain experts) to stress-test every major strategic decision. Each "participant" has a defined mandate and argues from that position.
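A minimal sketch of how such a debate scaffold could be wired, assuming one single-shot prompt per persona (the persona mandates and prompt wording here are invented for illustration):

```python
# Hypothetical debate scaffold: each persona has a fixed mandate, and
# every major decision is argued from all of them before a verdict.
PERSONAS = {
    "CEO":        "Argue for long-term positioning and focus.",
    "CMO":        "Argue from customer acquisition and messaging.",
    "CPO":        "Argue from product coherence and user experience.",
    "Contrarian": "Attack the strongest assumption in the proposal.",
}

def debate_prompts(decision: str):
    """Build one single-shot prompt per persona for a given decision."""
    return [
        f"You are the {role}. Mandate: {mandate}\n"
        f"Decision under debate: {decision}\n"
        "State your strongest position and the tradeoff you refuse to accept."
        for role, mandate in PERSONAS.items()
    ]

prompts = debate_prompts("Should we add a fifth map style?")
print(len(prompts))  # 4
```

Each prompt is sent as an independent call, so no persona can drift toward consensus by reading the others' answers.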
The constraint shifts from "can I build this?" to "can I make the right product decisions?"
The hard problem is drawing the line between deterministic and probabilistic. Geometry processing, coordinate math, and tonal hierarchy are all solved precisely in code. The LLM enters only where human judgment would otherwise be required: texture simulation, compositional QA, and visual assessment. Getting that boundary wrong in either direction is expensive. Too much AI and you lose reliability. Too little and you're hand-tuning every render.
More of taste is deterministic than you'd expect. Color relationships, spatial hierarchy, typographic rhythm, print resolution: these feel like aesthetic judgments, but they're governed by rules that can be encoded. The pipeline defines taste in code wherever possible and reserves the model for the parts that genuinely resist specification.
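One hedged example of encoding a "taste" rule deterministically: a tonal-hierarchy check that runs before any model ever sees the render. The layer ordering and luminance values are assumptions, not the pipeline's actual constants:

```python
def check_tonal_hierarchy(luminance: dict) -> list:
    """Hypothetical deterministic taste rule for dark mode: layer
    luminance must strictly decrease from buildings to roads to water
    to background. Returns a list of violations (empty = pass)."""
    order = ["buildings", "roads", "water", "background"]
    violations = []
    for brighter, darker in zip(order, order[1:]):
        if luminance[brighter] <= luminance[darker]:
            violations.append(f"{brighter} must be brighter than {darker}")
    return violations

good = {"buildings": 0.85, "roads": 0.55, "water": 0.20, "background": 0.05}
print(check_tonal_hierarchy(good))  # []
```

A rule like this costs nothing per render; the vision model is reserved for judgments that cannot be reduced to an ordering or a threshold.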
AI-assisted development changes the solo operator equation. With Claude Code, Gemini API, and vision-model QA, a single person can architect, build, and quality-assure a system that would traditionally require a small engineering team.
Data quality is the actual moat. The visual distinctiveness of these maps comes from working with raw geographic data rather than pre-rendered tiles. That's harder, but it produces output no tile-based competitor can replicate.
Structured AI debate is a bridge, not a destination. The methodology catches obvious blind spots and forces explicit tradeoff analysis. It doesn't replace the generative friction of real disagreement between people with different experiences.
Python · Matplotlib · Pillow · MapLibre · Turf.js
OpenStreetMap / Overpass API · NOAA · NYC PLUTO
Claude Code · Anthropic API / Claude Vision · Gemini API
Next.js / React · GitHub Actions
Have a hard problem in an overlooked industry?
Get in Touch