Towards a ‘Mission Control for Earth’: Better understanding Earth’s systems using AI and space data

In August 2025, the FDL Earth Systems Lab presented three major AI research outcomes that improve how we understand and predict Earth’s changing systems, offering a window onto how we might build a ‘Mission Control for Earth’. Leveraging the European Space Agency’s missions and funded by ESA Φ-lab, this initiative combines fresh datasets with innovative AI tools to give the global community better ways to track and respond to our planet’s most urgent environmental shifts.
“Guided by artificial intelligence, driven by human good”. This could be the motto of the FDL Earth Systems Lab (ESL). ESL is a research collaboration framework funded by ESA Φ-lab and implemented by Trillium Technologies, with the support of the University of Oxford, Google Cloud, NVIDIA, Scan AI, and Pasteur ISI. It focuses on artificial intelligence (AI) – in particular machine learning (ML) – to support Earth sciences, helping researchers create practical tools for some of humanity’s toughest challenges with the best of motivations: ‘planetary stewardship’.
FDL Earth Systems Lab has run annually since 2008. Experts with deep knowledge of the challenge domain work side by side with data scientists to develop new AI-enhanced approaches and tools. The short, focused format encourages quick testing and refinement, ensuring stronger results.
Last August, the ESL 2025 Live Showcase featured three ambitious research sprints: (1) refining 3D cloud models to improve forecasts of extreme events; (2) testing how well foundation models perform on sparsely observed events; and (3) advancing onboard ML to spot short-lived atmospheric events, such as greenhouse gas emissions. Each sprint brought together unique datasets and new AI-based methods to support the global research community.
Advancing global 3D cloud reconstruction is essential to deepen our understanding of cloud structure and its interactions with terrestrial and atmospheric phenomena. This is critical for tropical cyclones, which remain among the hardest weather systems to predict, especially during the intensification stage. Forecasts often poorly resolve a cyclone’s internal dynamics, simulations of cloud properties are highly uncertain, and observational records are limited, with only about 80 to 90 tropical cyclones occurring each year. The ‘3D Clouds for Climate Extremes’ sprint builds on a mature model training pipeline established in ESL 2024, which successfully modelled 3D clouds from geostationary data.
First, the team pre-trained a sensor-independent model on a large dataset of top-view satellite imagery from GOES-16, MSG and Himawari-8, to reconstruct masked versions of the observations. Second, they fine-tuned the model using a dataset from CloudSat, which provides vertical cloud profiles. The team also created a benchmark dataset, by combining satellite imagery with the timing and location of cyclone events. Since the model is sensor-independent, it is possible to include other satellite data that were not used for training, ensuring global coverage.
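The first, self-supervised stage of this pipeline can be illustrated with a toy sketch of masked-image pre-training: random patches of a top-view tile are hidden, and the model’s job is to reconstruct them from the visible context. The function name, patch size, and mask ratio below are illustrative assumptions, not the team’s actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_patches(image, patch=4, mask_ratio=0.75):
    """Randomly hide a fraction of square patches, as in masked-image pre-training.

    Returns the partially hidden image and the coordinates of the patches
    left visible; a reconstruction model would be trained to fill the rest in.
    """
    h, w = image.shape
    masked = image.copy()
    visible = []
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            if rng.random() < mask_ratio:
                masked[i:i + patch, j:j + patch] = 0.0  # hidden patch
            else:
                visible.append((i, j))
    return masked, visible

# Toy stand-in for a top-view satellite tile: the pre-training target is to
# reconstruct the hidden patches from the visible ones.
tile = rng.random((16, 16))
masked_tile, visible = mask_patches(tile)
```

After this sensor-independent pre-training, the fine-tuning stage swaps the reconstruction target for CloudSat’s vertical cloud profiles, reusing the learned representations.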
Together, these data enable the reconstruction of key microphysical properties of clouds, including ice water content (notably elevated in rapidly intensifying cyclones), droplet effective radius (a critical factor in cloud absorption and reflection of sunlight), and radar reflectivity (linked to cloud density and an indicator of rainfall).
Improving the prediction of cloud structures in three dimensions opens opportunities for a wide range of scientific and applied use cases: forecasting hurricane intensity, performing discriminative cloud classification, or understanding how deforestation influences cloud cover and type. This ambition aligns closely with the objectives of ESA’s cloud, aerosol and radiation explorer mission, EarthCARE, which aims to advance our understanding of cloud-aerosol-radiation interactions.
Earth observation foundation models are very powerful tools, but they also have limitations, especially when facing unfamiliar scenarios such as extreme events. One reason is that training datasets typically contain limited examples of these events, leading to weaker performance when the models are applied outside the conditions represented in the data.
When queried about a particular topic, foundation models can be ‘confidently wrong’. This becomes especially problematic when these models are used in critical, time-sensitive situations such as disaster response. It is essential to increase model transparency in cases where the model output has a high degree of uncertainty and requires human validation. But how can we know if the model is uncertain?
The ‘Foundation Models for Extreme Environments’ team offered a novel answer to that question. The team – mentored by Φ-lab’s Internal Research Fellows Patrick Ebel and Ruben Cartuyvels – focused on distinguishing between two types of uncertainty: data-driven and model-driven.
SHRUG-FM (Systematic Handling of Real-world Uncertainty for Geospatial Foundation Models) was developed as an adaptable framework for the community. It combines comparisons between input and training imagery, comparisons in embedding space, and the foundation model’s output and uncertainty into a planning and selective-prediction mechanism, ensuring that the model can give a prediction, raise a warning, or simply say that it does not know the answer.
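The predict/warn/abstain behaviour can be sketched as a simple decision rule. In this illustrative Python sketch, an input whose embedding lies far from anything seen in training triggers an abstention, an in-distribution input with high predictive uncertainty raises a warning, and anything else yields a prediction. The function name, thresholds, and nearest-neighbour distance test are assumptions for illustration, not SHRUG-FM’s actual implementation.

```python
import numpy as np

def shrug_decision(embedding, train_embeddings, uncertainty,
                   dist_threshold=2.0, unc_threshold=0.3):
    """Toy selective-prediction rule combining two uncertainty signals.

    - data-driven: distance from the input embedding to the nearest
      training embedding (is this input like anything the model has seen?)
    - model-driven: the model's own predictive uncertainty on this input
    """
    nearest = np.linalg.norm(train_embeddings - embedding, axis=1).min()
    if nearest > dist_threshold:
        return "abstain"   # unlike anything in training: say "I don't know"
    if uncertainty > unc_threshold:
        return "warn"      # in-distribution but unsure: flag for human review
    return "predict"       # confident and in-distribution: emit the prediction

# Toy usage with zero-vector training embeddings
train = np.zeros((10, 4))
shrug_decision(np.zeros(4), train, uncertainty=0.1)       # familiar, confident
shrug_decision(np.zeros(4), train, uncertainty=0.9)       # familiar, uncertain
shrug_decision(np.full(4, 5.0), train, uncertainty=0.1)   # out-of-distribution
```

The key design point is that the two signals are kept separate: a low model uncertainty is not trusted on its own when the input itself looks unlike the training data.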
One of the most urgent applications of Earth observation is detecting and tracking greenhouse gas (GHG) emissions that are driving global warming. Methane, in particular, is one of the most powerful heat-trapping gases. Hyperspectral satellites play a crucial role in the detection of such gases: each gas interacts with light in a unique way, creating a distinct ‘spectral signature’ or ‘fingerprint’ that allows its identification from space.
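This ‘spectral fingerprint’ idea underlies classical plume-detection techniques such as the matched filter. A heavily simplified version (identity background covariance, hypothetical names and values) scores how strongly a pixel’s background-subtracted spectrum projects onto the gas’s known signature:

```python
import numpy as np

def matched_filter_score(pixel_spectrum, background_mean, target_signature):
    """Score a pixel for the presence of a gas with a known spectral signature.

    Simplified matched filter: with the background covariance taken as the
    identity, the score is the projection of the background-subtracted
    spectrum onto the target signature, normalised by the signature's energy.
    """
    residual = pixel_spectrum - background_mean
    return float(residual @ target_signature) / float(target_signature @ target_signature)

# Toy example: a pixel containing the signature scores higher than background.
signature = np.array([0.0, 1.0, 0.5, 0.0])    # hypothetical gas fingerprint
background = np.array([0.2, 0.2, 0.2, 0.2])
plume_pixel = background + 0.8 * signature
clean_pixel = background.copy()
```

Real retrievals use many more spectral bands and an estimated background covariance, but the principle is the same: each gas’s distinct interaction with light makes it separable from the scene behind it.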
The STARCOP 2.0 solution is built on a ‘tip-and-cue’ system that makes use of hyperspectral satellite data. In this setup, the ‘tip’ satellite is responsible for quickly detecting methane plumes. Once a plume is identified, it alerts the ‘cue’ satellite, which carries out more advanced tasks such as detailed plume segmentation and estimating methane concentrations using a U-Net ML model.
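The tip-and-cue handoff amounts to a two-stage filter: a cheap detector screens everything, and only flagged scenes receive the expensive follow-up. A toy sketch of that control flow (all function names here are hypothetical, and the stages are stubbed out rather than real detection or U-Net segmentation):

```python
def tip_and_cue(tiles, detect_plume, segment_and_quantify):
    """Sketch of a tip-and-cue pipeline.

    The 'tip' stage runs a fast detector over every tile; only tiles flagged
    as containing a plume are handed to the 'cue' stage, which performs the
    costly segmentation and concentration estimate.
    """
    results = []
    for tile in tiles:
        if detect_plume(tile):                          # fast screening
            results.append(segment_and_quantify(tile))  # detailed follow-up
    return results

# Toy stand-ins for the two stages
tiles = [{"id": 1, "plume": True}, {"id": 2, "plume": False}, {"id": 3, "plume": True}]
cued = tip_and_cue(tiles,
                   detect_plume=lambda t: t["plume"],
                   segment_and_quantify=lambda t: {"id": t["id"], "mask": "segmented"})
```

The split matters for onboard use: the expensive stage runs only on the small fraction of scenes that actually contain a candidate plume, which is what makes the compute budget of a small spacecraft workable.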
Unlike traditional approaches, image analysis happens directly onboard, avoiding delays from sending images to ground stations for processing. To achieve this, the team built two ML-ready datasets: one with orthorectified images, and another with un-orthorectified images that are more suitable and realistic for onboard implementation. These datasets were used to train three models, bypassing the need for image correction and reducing inference time.
The datasets have been shared with the community, and the models are being optimised for spacecraft limitations in computing power, memory and energy. This system makes it possible to detect methane and other GHG leaks quickly, helping policymakers hold polluters accountable and support efforts to reduce emissions.
“We’re motivated to show how AI’s powerful predictive and insight-extracting toolbox can make a significant difference to how we monitor and manage our planet. What’s exciting about this year’s research products is that we are showing how multi-instrument methods and context-aware AI can be harnessed to make a dent in open problems – such as rapidly determining the anatomy of a cyclone or identifying erroneous greenhouse gas emissions from orbit. If you are a tech optimist – which we are – you will see that the puzzle pieces for a ‘mission control for Earth’ are now within our reach,” commented James Parr, Founder and Chief Executive Officer at Trillium Technologies.
Nicolas Longépé, Earth Observation Data Scientist at Φ-lab, is ESA’s Technical Officer for the initiative: “The FDL sprint format works because it brings together experts from different fields to collaborate intensively and prototype solutions quickly. By combining domain specialists, AI researchers, and technical mentors, we can tackle complex, carefully chosen challenges with real impact. These three sprints fit perfectly into the Earth Action paradigm we pursue at Φ-lab, moving beyond passive observation towards proactive insights and decision-making for a more resilient planet.”
To find out more: ESA Φ-lab, Trillium Technologies, FDL ESL AI SOTA Live Showcase
Photo courtesy of Trillium Technologies