
AI Can't Save the World If It Can't See Our Problems Clearly


AI and the humans using AI are only as powerful as the data they learn from. Yet when it comes to understanding the real world, from disasters and conflicts to the daily conditions that shape people’s lives, today’s models are flying blind. Public datasets are too coarse to be operational, and commercial ones are locked away, priced and licensed for governments and unreachable to the average person. The communities that need insights the most are left out.


That’s where Common Space comes in. We’re building a high-resolution (~1 meter), openly licensed satellite constellation to supply what AI and humanity actually need: reliable, transparent, independent, globally accessible imagery. Without it, AI cannot deliver on its promise as a transformational tool for the public good; it will remain out of reach to those who could do the most good with it.


The current explosion of Large World Models (LWMs) and Earth Foundation Models (EFMs) is a huge step forward, but most are trained on the same datasets. That means the same outputs, bounded by the same limitations. Some models are faster, more efficient, or score slightly better on a benchmark, but the inputs limit their utility. Open sources like Landsat (30 m) and Sentinel (10 m) are invaluable. Still, their resolution can’t show damaged buildings, collapsed bridges, abandoned crops, or other details essential to disaster response, human rights documentation, public health, population measurement, and recovery efforts.
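To make the resolution gap concrete, here is a rough back-of-the-envelope sketch (the building size and the pixel-count framing are our illustrative assumptions, not figures from any dataset): at 30 m ground sampling distance, a typical building occupies less than one pixel, while at ~1 m it covers on the order of a hundred.

```python
# Illustrative arithmetic: how many pixels a given ground sampling
# distance (GSD) puts on a square target, e.g. a ~10 m building footprint.
def pixels_on_target(target_m: float, gsd_m: float) -> float:
    """Approximate number of pixels covering a square target of side target_m."""
    return (target_m / gsd_m) ** 2

building_side_m = 10.0  # assumed rough footprint of a small building

for name, gsd in [("Landsat (30 m)", 30.0),
                  ("Sentinel-2 (10 m)", 10.0),
                  ("~1 m constellation", 1.0)]:
    px = pixels_on_target(building_side_m, gsd)
    print(f"{name:>20}: ~{px:g} pixels on target")
```

At sub-pixel coverage, damage assessment is effectively impossible; at ~100 pixels, a damaged roof or collapsed wall becomes visible.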


Higher-resolution data exists, but it’s locked behind commercial walls, accessible only to organizations with consistently high levels of funding. Providers treat imagery as a competitive advantage: expensive, restrictive, and siloed. This mirrors the early days of computer vision, where the pressure for corporate returns led to defensive data strategies that impeded progress… anyone who worked at Maxar during the building footprints debacle can attest to this.


Common Space is breaking this cycle. By making high-quality imagery openly available, we enable AI models to actually serve communities, close the digital and AI divide, and unlock the humanitarian potential that proprietary systems have no incentive to support.


So, where is Geospatial AI going? We’ve got some ideas:

For AI to be most helpful, it has to have a concept of the real world.


Spatial AI Is Coming:


We’re seeing the first wave of organizations tackling spatially enabled AI: the effort to make machines understand, reason, and eventually interact with the physical world. Google is pushing in two directions: Geospatial Reasoning, which builds on its long history in mapping and Earth data, and Alpha Earth, a massive AI model trained on petabytes of Earth observation data. Esri is improving its statistically based workflows with its Esri GeoAI platform. Others are carving their own paths: Niantic Spatial and World Labs are developing Large World Models to perceive and generate 3D environments; Zephr focuses on localization, an often overlooked and necessary step; LGND is transforming geographic embeddings into powerful new data objects; and Clay is lowering barriers to access for climate and nature data. The ecosystem is coming together quickly.


Each represents a different approach, but all share the same mission: embedding AI with spatial intelligence. This space could ultimately disrupt even the LLM market, yet progress will be slower and harder than many expect (see Moravec’s Paradox). True spatial intelligence means more than just locating objects; it requires understanding physical reality, real-time conditions, and the systems that shape human life. A weather app with bad data isn't a useful app at all; timely, accurate data is the key to useful tools. Geospatial tech has always been the translation layer between digital tools and the physical world; AI will make that role even more critical. But success depends on both models and data: without one, the other cannot deliver.


Spatial Data Will Be A Scarce Resource For The Winning AI Models:

Public data (Landsat, Sentinel, SRTM, NASA) is open, accurate, and globally available, but limited to low resolution and focused on landmass, not people. Commercial providers offer data at far higher resolution, making it especially valuable, but it’s tightly controlled through licensing structures and expense. Commercial data can show fine details that enable an explosion of use cases across industries and sectors. As a result, only a few companies provide the real-world data AI needs to become spatially aware. And while launching satellites is becoming easier, access to differentiated, high-resolution data remains limited and costly. There is a very real chance that access to commercial data will become even more restricted in the coming years as companies try to squeeze value out of government users, and we’re already seeing the enclosure of public data through the potential commercialization of Landsat. We, as a community, are not guaranteed access to any high-resolution imagery.


AI Models That Integrate Better Data Will Win:


Current models face the same data limitations that have long constrained human analysis. Critical tasks, like building damage assessment, migration tracking, change detection, and object identification, require datasets not yet integrated into AI. While Earth foundation models are advancing, they remain data-limited. Incorporating high-resolution optical, SAR, and hyperspectral imagery will unlock new scientific, industrial, and defense applications, including early warning systems that could save billions. Existing resolutions can be stretched further, but true breakthroughs will come from higher-resolution data that enables entirely new use cases. We’ve only begun to tap this potential.


High-resolution, multi-sensor spatial data will be the scarce input that determines which Large World Models succeed. Just as public, low-res data enabled early progress, the next leap requires differentiated commercial datasets. Because only a handful of providers control these higher-resolution feeds, access will be limited and costly, making data scarcity, not algorithms, the key bottleneck. This is already starting to happen with the announcement of Project Orbion, a partnership between Niantic Spatial (the AI model), BlackSky and ICEYE (the data sources), and Aechelon (for localization). Planet’s partnership with Anthropic is another example of hard-to-get data teaming up with cutting-edge AI to deliver results for users in the real world. AI without up-to-date data sources is going to be left behind.


We Need a Catalyst to End This Data Winter:


The harder your data is to collect, the more valuable it becomes, and thus the more likely it is to be hoarded. The result of a data winter would be like living in a dark age and an information age simultaneously. Data about reality becomes a luxury good, out of reach to the public. Some datasets remain open, but many will be locked behind walls, making public access difficult or impossible, and keeping the populace informed will become even harder. The commons will be enclosed, threatening the digital public goods that underpin research, equity, and social benefit. We need a counterbalance to push us out of this data winter, toward a world where data is created for mutual benefit, not held as leverage over organizations without access. Satellite data is particularly valuable to AI models and particularly likely to be hoarded, because the timelines and technical effort needed to build satellite systems far exceed those needed to build Geospatial AI models.

Satellite imagery is the well, and LWMs are the bottled water brands. You can have countless brands competing to package and sell, but they all depend on the same water source. If only a few players control the wells, then access to water, the lifeblood of useful AI, becomes restricted, hoarded, and deeply unequal.


The Case for Common Space

Common Space is our response to this asymmetrical data war, where public data is under attack and accurate knowledge of our world risks becoming a luxury. We are building a layer of transparency and access to ensure that essential data remains open, shared, and accountable, even though the systems we currently have were designed to gatekeep, limit access, and create artificial scarcity.


Common Space is building a foundational, essential missing piece of the data ecosystem: a dedicated, openly licensed, high-resolution satellite constellation. The raw data will show human-level activity where it is needed most, and where AI insights can do the most to reduce the friction and latency of life-saving information.


The Call to Action


Data is the chokepoint for both people and machines, especially for difficult-to-create data streams such as satellite imagery and geospatial data. We are in danger of losing access to both the commercial and government datasets needed to train these models, which will result in worse outcomes for everyone.


Common Space is one attempt to keep the commons alive. We are leading an ambitious mission to ensure the data AI needs to do good remains open, transparent, and globally accessible. We need collaborators. We need funders. We need people who believe that data should serve humanity. The future of AI and Geospatial shouldn't be determined by who can afford the best data; it should be built on a foundation that serves people. Join us as we improve AI, Geospatial, and Humanitarian Action.

 
 
 



Image by Alec Douglas


Common Space is a fiscally sponsored program of Radiant Earth, a 501(c)(3) public charity
