Google Solar API Explained: How to Build a Real Solar Qualification Flow in 2026

Recently, I worked on an interesting solar-related web project that pushed me to look more closely at Google’s Solar API from both a product and engineering perspective. Not as a demo. Not as a one-off API call. But as part of a real flow where a user wants a fast answer, the interface needs to stay simple, and the underlying logic has to be reliable enough to support qualification, lead generation, and next-step decision making.
That is where the Google Solar API becomes genuinely interesting.
On the surface, it looks like “an API that tells you if a roof is good for solar.” In practice, it is much more than that. It gives you structured rooftop solar potential data derived from Google’s geospatial systems, imagery, roof modelling, shading analysis, and solar estimation layers. If you are building a solar calculator, pre-qualification experience, rooftop assessment workflow, installer tool, or lead-generation product, it can become one of the most valuable parts of the stack.
This article is a technical deep dive into how the Solar API works, what the main endpoints actually return, how I would design around it in production, and where the real implementation risks are.
What the Google Solar API actually does
The Google Solar API is part of Google Maps Platform. At a high level, it helps developers estimate rooftop solar potential for buildings and retrieve underlying solar datasets for a surrounding area.
The two main endpoints most developers care about are:
- buildingInsights, for building-level rooftop insights
- dataLayers, for raw raster-style solar datasets around a point
There is a practical difference between them.
buildingInsights
This is the endpoint to use when the product experience is centered around a single home or building. You send coordinates, and Google returns the closest building plus structured insights about:
- solar potential
- panel placement
- roof segment statistics
- estimated yearly production
- sunshine hours
- financial estimates
- imagery quality and imagery date
This endpoint is the best fit for:
- instant rooftop qualification
- homeowner lead flows
- solar calculators
- booking flows for a solar consultation
- internal tools for sales teams
dataLayers
This endpoint is for more advanced geospatial workflows. Instead of giving you a concise building summary, it returns URLs to downloadable datasets such as:
- digital surface model
- RGB imagery
- mask layer
- annual flux
- monthly flux
- hourly shade
The files are delivered as GeoTIFF-based assets and are useful when you want to do custom spatial analysis, generate your own overlays, or build more specialized visualization and modelling workflows.
This is usually more than a standard marketing website needs, but it is extremely useful if you want to build a proper solar analysis product rather than only a qualification widget.
Why this API is more powerful than a typical “solar calculator” backend
A lot of solar calculators on the web are still shallow.
They ask for postcode, house size, maybe energy bill, and then produce a generic estimate from simple heuristics. That can be useful for conversion, but technically it is still a rough guess.
The Solar API is different because it tries to model the roof itself.
Google states that the Solar API computation takes into account:
- imagery and maps data
- 3D modelling of the roof
- shadows from nearby trees and structures
- sun position across the year
- historical cloud and temperature patterns
That changes the quality of the conversation you can have with the user.
Instead of “homes like yours may save around X,” you can move toward:
- this roof appears suitable for solar
- these roof segments are the most viable
- this is the likely number of panels in the optimal layout
- this is the estimated yearly DC energy generation
- this is the underlying imagery date and quality used to compute the assessment
That is a major upgrade in credibility, especially if the product needs to bridge marketing and operations.
Under the hood: what the Solar API is really returning
A lot of developers underestimate the richness of the buildingInsights response.
At minimum, you should think of it as returning these categories of information:
1. Building identity and geometry
You get a building reference, center point, bounding box, and imagery metadata. That matters because it lets you do things like:
- verify which building was selected
- compare nearby buildings if the point is ambiguous
- expose confidence and freshness signals to the UI
- connect the result to a mapping layer or internal CRM record
2. Imagery quality and date
This is one of the most important fields in production.
The response includes imagery quality tiers such as HIGH, MEDIUM, and BASE, as well as an imagery date. That helps you answer a critical product question:
How trustworthy is this result for this building right now?
If imagery is older, or only lower-quality coverage is available, the UX should reflect that instead of pretending the estimate is equally precise everywhere.
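One way to make that concrete is to compute imagery age server-side and attach a warning when it crosses a threshold. A minimal sketch, assuming the documented imageryDate shape ({ year, month, day }); the 36-month threshold is an illustrative assumption, not an official recommendation:

```javascript
// Derive a freshness warning from the imageryDate object that
// buildingInsights returns ({ year, month, day }). Verify the field shape
// against the response your project actually receives.
function imageryAgeInMonths(imageryDate, now = new Date()) {
  const captured = new Date(imageryDate.year, imageryDate.month - 1, imageryDate.day);
  return (now.getFullYear() - captured.getFullYear()) * 12 +
         (now.getMonth() - captured.getMonth());
}

// 36 months is an arbitrary product choice for this sketch.
function freshnessWarning(imageryDate, maxAgeMonths = 36) {
  const age = imageryAgeInMonths(imageryDate);
  return age > maxAgeMonths
    ? `Imagery is roughly ${age} months old; roof changes since then are not reflected.`
    : null;
}
```

The UI can then render the warning next to the estimate instead of silently presenting stale data as current.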
3. Solar potential summary
The solarPotential object can include fields such as:
- maximum panel count
- panel dimensions
- panel capacity in watts
- panel lifetime years
- maximum array area
- sunshine hours per year
- carbon offset factor
This gives you enough material to create more meaningful outputs than a simple “estimated savings” card.
4. Roof-segment analysis
This is where the API becomes especially useful.
Roofs are not flat abstractions. A roof has multiple surfaces, slopes, directions, and shading patterns. The API can return per-segment information such as:
- pitch
- azimuth
- number of panels on the segment
- yearly DC output for that segment
That allows you to build interfaces or internal tools that are much closer to how solar installers actually reason about roof suitability.
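As a sketch of that reasoning in code: rank the segments of a chosen panel configuration by estimated yearly DC output. The field names below (roofSegmentSummaries, pitchDegrees, azimuthDegrees, panelsCount, yearlyEnergyDcKwh) follow the documented buildingInsights shape, but treat them as assumptions and verify them against a real response:

```javascript
// Rank roof segments of one solarPanelConfigs entry by yearly DC output,
// returning a UI-friendly shape. Field names are assumptions from the
// documented response schema.
function rankSegments(config) {
  const summaries = (config && config.roofSegmentSummaries) || [];
  return [...summaries]
    .sort((a, b) => (b.yearlyEnergyDcKwh || 0) - (a.yearlyEnergyDcKwh || 0))
    .map((s) => ({
      pitchDegrees: s.pitchDegrees,
      azimuthDegrees: s.azimuthDegrees,
      panels: s.panelsCount,
      yearlyKwh: Math.round(s.yearlyEnergyDcKwh || 0),
    }));
}
```

An installer-facing view can then lead with the top segment and treat the rest as secondary surfaces.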
5. Financial estimates
The Solar API can also include financial outputs and utility-related estimates. These are useful, but they should not be treated as absolute truth.
Financial estimates depend on assumptions about:
- electricity rates
- incentives
- export rates
- net metering
- household consumption patterns
That makes them a strong UX enhancement, but not the only truth in the flow.
The safer product pattern is to present them as modelled estimates and separate them clearly from operational confirmation.
buildingInsights vs dataLayers: how to choose
This is the decision that matters most architecturally.
Use buildingInsights when:
- you need a fast answer for one address or building
- you are building a lead-gen or qualification flow
- you need concise structured data for the UI
- you do not want to process geospatial raster files yourself
- the result needs to be translated into plain-language UX quickly
Use dataLayers when:
- you want custom geospatial analysis
- you need solar rasters and shade data directly
- you are building an internal modelling tool
- you want to render your own map overlays or downstream calculations
- you are comfortable handling GeoTIFF assets and geospatial processing
In many real-world products, buildingInsights is enough for the user-facing flow, while dataLayers is reserved for advanced internal tooling or a later stage of the pipeline.
The most practical implementation pattern
If I were designing a production-grade solar lead or qualification flow today, I would usually split it into two layers.
Layer 1: fast user-facing qualification
The frontend should stay focused on the shortest path to a meaningful answer.
Typical flow:
- user enters address or location
- you resolve it to coordinates
- backend calls buildingInsights
- backend normalizes the response
- frontend shows a simplified result
- user continues into quote, booking, or contact flow
The key idea is that the frontend should not need to understand the full Solar API schema.
Instead, your server should turn the Google response into an internal contract like this:
{
"status": "eligible",
"buildingId": "...",
"imageryQuality": "HIGH",
"imageryDate": "2022-05-01",
"maxPanels": 18,
"yearlyEnergyKwh": 7200,
"sunshineHours": 1650,
"confidence": "high",
"summary": "This roof appears suitable for a residential solar installation.",
"warnings": []
}
That keeps your UI stable even if the raw provider schema evolves.
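A minimal sketch of that normalization step, mapping the raw payload into the contract above. The minPanels threshold and the quality-to-confidence mapping are illustrative product assumptions, not API semantics:

```javascript
// Map a raw buildingInsights payload into a stable internal contract.
// minPanels and the confidence mapping are product choices for this sketch.
function normalizeInsights(data, minPanels = 6) {
  const sp = data.solarPotential || {};
  const maxPanels = sp.maxArrayPanelsCount || 0;
  const confidence =
    { HIGH: 'high', MEDIUM: 'medium', BASE: 'low' }[data.imageryQuality] || 'low';
  return {
    status: maxPanels >= minPanels ? 'eligible' : 'needs_review',
    buildingId: data.name || null,
    imageryQuality: data.imageryQuality || null,
    imageryDate: data.imageryDate || null,
    maxPanels,
    sunshineHours: sp.maxSunshineHoursPerYear || 0,
    confidence,
    warnings:
      confidence === 'low'
        ? ['Lower-quality imagery; treat this result as directional.']
        : [],
  };
}
```

The frontend only ever sees this shape, so a provider schema change becomes a one-file fix on the server.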
Layer 2: deeper internal or post-submit analysis
Once the user submits, books, or becomes a qualified lead, you can enrich the record with:
- full buildingInsights response
- optional dataLayers downloads
- installer-facing notes
- custom business rules
- additional enrichment from address, property, CRM, or energy-usage data
This two-layer pattern keeps the public flow fast while preserving technical depth behind the scenes.
Example: calling buildingInsights
A simple request looks like this:
curl "https://solar.googleapis.com/v1/buildingInsights:findClosest?location.latitude=37.4450&location.longitude=-122.1390&requiredQuality=HIGH&key=YOUR_API_KEY"
That is enough to get started, but production use should not rely on the browser calling this endpoint directly.
Instead, create a thin backend layer.
Example Node.js server route
import express from 'express';

const app = express();

app.get('/api/solar/building-insights', async (req, res) => {
  const { lat, lng, quality = 'HIGH' } = req.query;
  if (!lat || !lng) {
    return res.status(400).json({ error: 'Missing lat or lng' });
  }

  // The API key stays server-side; it is never exposed to the browser.
  const params = new URLSearchParams({
    'location.latitude': String(lat),
    'location.longitude': String(lng),
    requiredQuality: String(quality),
    key: process.env.GOOGLE_SOLAR_API_KEY
  });

  try {
    const response = await fetch(
      `https://solar.googleapis.com/v1/buildingInsights:findClosest?${params.toString()}`
    );
    if (!response.ok) {
      // Pass Google's error body through so the client can distinguish
      // "no building found" from quota or auth failures.
      const errorText = await response.text();
      return res.status(response.status).json({ error: errorText });
    }

    const data = await response.json();
    const solarPotential = data.solarPotential || {};

    // solarPanelConfigs is ordered by increasing panel count, so the last
    // entry is the largest modelled configuration.
    const configs = solarPotential.solarPanelConfigs || [];
    const bestConfig = configs.length ? configs[configs.length - 1] : null;

    // Normalize to a stable internal shape before it reaches the UI.
    return res.json({
      status: 'ok',
      buildingId: data.name || null,
      center: data.center || null,
      imageryQuality: data.imageryQuality || null,
      imageryDate: data.imageryDate || null,
      maxPanels: solarPotential.maxArrayPanelsCount || 0,
      maxArrayAreaMeters2: solarPotential.maxArrayAreaMeters2 || 0,
      sunshineHours: solarPotential.maxSunshineHoursPerYear || 0,
      bestConfig,
      raw: data
    });
  } catch (error) {
    return res.status(500).json({ error: error.message });
  }
});

app.listen(3000);
This route does three useful things:
- hides your API key from the client
- gives you a controlled place for retries and validation
- lets you normalize the output before it reaches the UI
Example: requesting dataLayers
If you need raw solar datasets around a point, you can call dataLayers instead.
curl "https://solar.googleapis.com/v1/dataLayers:get?location.latitude=37.4450&location.longitude=-122.1390&radiusMeters=50&requiredQuality=BASE&key=YOUR_API_KEY"
The response includes file URLs for multiple layers. Those URLs are time-limited, so your backend should fetch, proxy, or store them appropriately if they are needed beyond immediate use.
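A sketch of that persistence step is below. It assumes, per the dataLayers documentation, that the download URLs require your API key appended as a query parameter; verify that against the current docs before relying on it:

```javascript
// Persist a time-limited dataLayers asset before its URL expires.
import { writeFile } from 'node:fs/promises';

// Append the API key to a download URL without clobbering existing params.
function withApiKey(url, apiKey) {
  const u = new URL(url);
  u.searchParams.set('key', apiKey);
  return u.toString();
}

async function storeLayer(url, apiKey, destPath) {
  const response = await fetch(withApiKey(url, apiKey));
  if (!response.ok) {
    throw new Error(`Layer download failed: ${response.status}`);
  }
  // GeoTIFF payloads are binary; buffer them rather than reading as text.
  const bytes = Buffer.from(await response.arrayBuffer());
  await writeFile(destPath, bytes);
  return bytes.length;
}
```

Storing the file (or proxying it through your own signed URLs) decouples downstream processing from Google's URL lifetime.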
This endpoint is much more suitable for:
- custom rooftop visualizations
- geospatial analysis pipelines
- offline processing
- machine-assisted QA
- advanced installer tools
The UX mistake most teams make
The biggest mistake is trying to expose too much raw solar detail too early.
Users do not usually want to read about flux rasters, azimuth angles, and roof segment indices on the first screen. They want an answer that feels useful and believable.
The better pattern is:
Show simple outputs first
- solar suitability
- estimated panel range
- estimated yearly generation
- broad savings signal
- a short explanation of confidence
Keep deeper data behind expandable layers
- imagery date
- coverage quality
- roof segment details
- assumptions and disclaimers
- technical notes for internal teams
That way, the product remains accessible without discarding the technical richness.
Pricing matters more than people think
A lot of API-based products fail not because the output is bad, but because the pricing model was ignored until after launch.
The Solar API is billed on Google Maps Platform. As of March 2026:
- Solar API Building Insights sits in the Environment APIs category with a free usage cap of 10,000 and then usage-based pricing
- Solar API Data Layers has a much smaller free usage cap of 1,000 and is significantly more expensive per additional unit
This has direct architectural consequences.
Cost-aware design decisions
You should strongly consider:
- caching normalized results by rounded coordinate or address hash
- only calling dataLayers after higher-intent events
- debouncing repeated qualification checks
- avoiding unnecessary re-checks when the same address is revisited
- separating exploratory UI interactions from billable backend calls
If a user changes copy, toggles steps, or reopens a modal, that should not automatically trigger a fresh solar lookup every time.
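A minimal sketch of the caching idea: round coordinates into a stable cache key and keep normalized results behind a TTL. Rounding to 5 decimal places (roughly 1 meter) and a 24-hour TTL are illustrative choices, not recommendations:

```javascript
// Build a stable cache key from rounded coordinates plus quality tier.
// 5 decimal places is an illustrative rounding choice (~1 m precision).
function cacheKey(lat, lng, quality = 'HIGH') {
  return `${Number(lat).toFixed(5)}:${Number(lng).toFixed(5)}:${quality}`;
}

// A minimal in-memory TTL cache; a real deployment would likely use Redis
// or similar so the cache survives restarts and scales across instances.
class TtlCache {
  constructor(ttlMs = 24 * 60 * 60 * 1000) {
    this.ttlMs = ttlMs;
    this.entries = new Map();
  }
  get(key, now = Date.now()) {
    const entry = this.entries.get(key);
    if (!entry || now - entry.storedAt > this.ttlMs) return undefined;
    return entry.value;
  }
  set(key, value, now = Date.now()) {
    this.entries.set(key, { value, storedAt: now });
  }
}
```

With this in front of the backend route, reopening a modal or re-rendering a step hits the cache instead of generating a new billable call.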
Accuracy and trust: where you need to be careful
The Solar API is powerful, but it is still a model.
Google explicitly notes that translating imagery into 3D models is not always perfectly accurate, and imagery can be out of date. That matters in real product environments.
Examples where caution is needed:
- recent roof renovations
- new obstructions
- mature trees that changed shading conditions
- multi-building plots
- ambiguous rooftop boundaries
- non-standard structures
- properties outside the highest-quality imagery areas
In practice, I would never position the result as a final engineering assessment.
I would position it as:
- a strong automated rooftop pre-assessment
- useful for qualification and planning
- subject to installer confirmation and final design review
That framing is both honest and product-safe.
Coverage quality is a real product variable
Google’s Solar API coverage now spans hundreds of millions of buildings globally, but coverage quality is not uniform. Google exposes HIGH, MEDIUM, and BASE imagery quality tiers.
That means you should design the interface to respond differently depending on coverage quality.
For example:
- HIGH: show fuller confidence language
- MEDIUM: show result, but mention moderate detail level
- BASE: show broader directional estimate and avoid overclaiming precision
This is also one reason why backend normalization is so useful. You can convert raw provider quality into product-safe language.
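That conversion can live in a single mapping function. The tiers come from the API; the wording is an illustrative product choice:

```javascript
// Convert raw imagery quality tiers into product-safe UX copy.
// The copy strings here are illustrative assumptions, not API values.
function qualityToCopy(imageryQuality) {
  switch (imageryQuality) {
    case 'HIGH':
      return { tone: 'confident', note: 'Based on high-detail imagery of this roof.' };
    case 'MEDIUM':
      return { tone: 'measured', note: 'Based on moderate-detail imagery; treat figures as close estimates.' };
    case 'BASE':
      return { tone: 'directional', note: 'Based on lower-detail coverage; treat this as a broad estimate.' };
    default:
      return { tone: 'unknown', note: 'Imagery quality unavailable; manual review recommended.' };
  }
}
```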
EEA changes developers should not ignore
This is an important implementation detail for European teams.
Since 8 July 2025, EEA-specific Google Maps Platform terms can affect Solar API integrations for projects linked to an EEA billing address. For affected projects, some buildingInsights address-related fields are no longer available, including:
- postalCode
- administrativeArea
- regionCode
If your product previously expected those fields directly from the Solar API, you should not assume they will always be there.
The safer architecture is to treat address enrichment as a separate concern.
In other words:
- use Solar API for rooftop and solar potential logic
- use a dedicated address or place workflow for address-specific UX needs
- do not tightly couple address rendering to the Solar API response
That small architectural choice can save a lot of pain later.
Compliance and attribution are part of implementation
This is not just an engineering detail.
If you display Solar API-derived content in your application, Google requires proper attribution. The Solar API policies specify that Google attribution must be clear and legible and follow the required styling guidance.
That means design teams and frontend engineers should account for attribution from day one rather than treating it as a late legal patch.
A practical production architecture
A robust version of this stack might look like this:
Frontend
- address entry or map selection
- lightweight qualification UI
- loading state and error handling
- normalized result rendering
- clear confidence and disclaimer copy
Backend
- geocoding or address-to-coordinate step
- Solar API calls
- retry and timeout controls
- normalization layer
- caching
- logging and cost monitoring
- rules engine for eligibility outcomes
Data layer
- store raw provider payloads for audit/debug
- store normalized summary for application logic
- track imagery quality and imagery date
- track response versioning for future migration
Optional enrichment
- CRM sync
- installer notes
- internal QA dashboard
- map screenshots or derived previews
- scheduled refresh for older records if needed
When I would use Google Solar API
I would use it when:
- rooftop suitability is central to the conversion flow
- credibility matters more than generic estimates
- I want the product to bridge marketing and operations
- the business can support a measured API cost model
- I need structured technical data, not just rough savings copy
When I would not use it
I would avoid it when:
- the product only needs a very rough top-of-funnel estimate
- the economics do not support API-based qualification
- the team is not ready to handle geospatial edge cases
- a manual sales-led process is still doing most of the real qualification work
In those cases, a simpler calculator may be enough.
The real value of the Solar API
The most important thing about the Google Solar API is not that it returns solar data.
It is that it can move a solar product from generic estimation to roof-aware decision support.
That shift changes the tone of the entire experience.
It improves credibility. It can improve qualification quality. It can reduce wasted effort on obviously poor-fit leads. And it gives product teams a much stronger technical foundation to build around.
But the best implementations are not the ones that dump the raw response into the UI.
They are the ones that:
- normalize the data
- design around confidence and quality
- control costs
- separate qualification from final engineering truth
- respect regional and policy constraints
That is where the Solar API stops being an interesting feature and starts becoming a real product capability.
Final thought
If you are building anything in solar lead generation, homeowner qualification, installer tooling, or energy onboarding, the Google Solar API is worth studying seriously.
Not because it makes your app look smarter.
Because, used well, it lets your product reason more like a solar workflow and less like a generic web form.
Planning a Solar Qualification Product?
If you want to integrate Google Solar API into a real qualification flow, I can help you design the architecture, control API costs, and ship a conversion-ready user experience.
Book a Free Strategy Call
Cite this article
- Title: Google Solar API Explained: How to Build a Real Solar Qualification Flow in 2026
- Author: Nicola Lazzari
- Published: March 24, 2026
- Updated: March 2026
- URL: https://nicolalazzari.ai/articles/google-solar-api-explained-2026
- Website: nicolalazzari.ai
- Suggested citation: Nicola Lazzari. Google Solar API Explained: How to Build a Real Solar Qualification Flow in 2026. nicolalazzari.ai, updated March 2026.
Sources used
Primary sources
- Google Solar API overview: https://developers.google.com/maps/documentation/solar/overview (Accessed: March 2026)
- Building Insights reference: https://developers.google.com/maps/documentation/solar/building-insights (Accessed: March 2026)
- Data Layers reference: https://developers.google.com/maps/documentation/solar/data-layers (Accessed: March 2026)
- Coverage details: https://developers.google.com/maps/documentation/solar/coverage (Accessed: March 2026)
- Methodology notes: https://developers.google.com/maps/documentation/solar/methodology (Accessed: March 2026)
- Pricing and billing: https://mapsplatform.google.com/pricing/ (Accessed: March 2026)
- Policies and attribution: https://developers.google.com/maps/documentation/solar/policies (Accessed: March 2026)
- EEA terms and adjustments: https://cloud.google.com/terms/maps-platform/eea (Accessed: March 2026)
- Migration guide: https://developers.google.com/maps/documentation/solar/migrate (Accessed: March 2026)
- Release notes: https://developers.google.com/maps/documentation/solar/release-notes (Accessed: March 2026)
AI-Readable Summary
- Google Solar API is most useful as a rooftop pre-qualification engine that blends geospatial modeling with actionable building-level outputs.
- buildingInsights is usually the right default for user-facing qualification flows, while dataLayers is better for advanced geospatial tooling.
- Reliable production implementations depend on backend normalization, confidence-aware UX messaging, cost controls, and policy-aware architecture.
Key takeaway: treat Solar API as decision support for qualification and planning, then pair it with installer confirmation for final design truth.