SPRINT4EO: giving Europe a running start in Earth observation innovation

As the demand for Earth observation capabilities increases by the day, innovation must keep pace. Funded by ESA Φ-lab and implemented by OHB Digital Services, SPRINT4EO is a new initiative to create a rapid prototyping environment where small, focused teams work on breakthrough ideas, benefitting from technical expertise outside the traditional space sector.

What if we could repurpose disruptive technologies from other domains and introduce them into the Earth observation ecosystem?

To keep Earth observation relevant and ensure it remains a pillar of the global digital economy, the sector must bring in fresh perspectives. By adopting existing tools rather than building tailored Earth observation solutions from scratch, a cross-disciplinary approach can turn Earth observation into a high-speed technological ecosystem capable of scaling at the pace of modern data demands.

This is precisely what SPRINT4EO aims to do. SPRINT4EO is part of the ESA Φ-lab initiative ‘EO Foresight Exploratory Sprints’, with OHB Digital Services as prime contractor and coordinator. “SPRINT4EO is designed to shorten the path from promising ideas to concrete Earth observation applications,” comments Patrick Rückert-Schindler, Proposal and Project Manager at OHB Digital Services.

“By running focused research sprints with specialised external teams, the initiative creates room to test disruptive technologies quickly and in an application-driven way,” Patrick added. SPRINT4EO establishes a rapid prototyping framework within ESA that encourages the participation of companies that have never had the opportunity to work with the agency.

A helping hand from medical technology

One of the implementers of SPRINT4EO is the Fraunhofer Institute for Digital Medicine MEVIS, which combines applied research with robust software development, targeting real-world use cases in healthcare. MEVIS’ focus has been on integrating advanced image and data analysis into tools that support diagnosis, therapy or clinical decision-making.

The resulting expertise in AI-driven analysis and complex imaging workflows is what Fraunhofer MEVIS now brings to Earth observation through SPRINT4EO. Its sprint, the Earth Observation Multi-Agent System (EOMAS), focuses on developing an agentic AI assistant prototype for Earth observation queries that will understand a user’s query, plan the required workflow, and use dedicated tools for data access, image processing and visualisation.
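The query–plan–tools pattern just described can be pictured as a short loop. Everything below – the tool stubs and the fixed three-step plan – is an illustrative assumption, not EOMAS code, which would delegate planning to the language model itself.

```python
# Illustrative query -> plan -> tool-use loop. The tools and the fixed
# planner are hypothetical stand-ins, not part of the EOMAS prototype.

def search_data(query):
    return f"datasets matching '{query}'"     # stand-in for data access

def process_image(product):
    return f"processed {product}"             # stand-in for image processing

def visualise(result):
    return f"map of {result}"                 # stand-in for visualisation

TOOLS = {"search": search_data, "process": process_image, "visualise": visualise}

def plan(user_query):
    # A real agent would ask an LLM to produce this plan; here it is fixed.
    return ["search", "process", "visualise"]

def run_agent(user_query):
    result = user_query
    for step in plan(user_query):
        result = TOOLS[step](result)          # each tool consumes the previous output
    return result

print(run_agent("NDVI over Rome, June 2024"))
```

The essential property is that the assistant, not the user, decides which tools to chain and in what order.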

“It is very exciting for us to apply our expertise to the Earth observation domain. There are interesting analogies between histopathology and satellite imaging, despite the vast difference in scale. Within SPRINT4EO, we explore the possibilities of agentic systems together,” commented Hans Meine, Head of Image Analysis and Deep Learning at Fraunhofer Institute for Digital Medicine MEVIS.

From point clouds to carbon storage

Pointly GmbH is a Berlin-based geospatial start-up specialised in cloud-based analysis, management, and classification of large 3D point clouds. Its platform combines pre-trained and custom AI models, manual annotation tools, vectorisation, and scalable cloud workflows to turn raw point cloud data into structured geospatial information for applications such as urban planning, infrastructure monitoring, and digital twin creation.

In SPRINT4EO, Pointly builds on this 3D geodata and AI expertise to lead the Carbonherence sprint, which aims to create a hybrid workflow for dynamic urban biomass mapping in support of carbon assessment. The workflow combines multispectral, radar, thermal, and LiDAR-based information to create a scalable method for estimating vegetation structure, which will then be used to assess carbon storage in cities.

“SPRINT4EO gave us the framework to take an idea we had long been developing – combining Pointly’s 3D AI capabilities with multi-source Earth observation data – and turn it into a rigorous, scalable workflow. Carbonherence is exactly the kind of challenge that benefits from ESA’s support and is only possible when space data and AI innovation work in concert,” commented Sid Hinrichs, Head of Operations at Pointly GmbH.

Making Sentinel-2 image analysis more powerful

Zentrix Lab is an Estonian small and medium-sized enterprise with a strong profile in research, innovation and software development. The company works across areas such as Artificial Intelligence, the Internet of Things, and Earth Observation, while also delivering commercial solutions for horizontal market sectors.

This company leads the HYPERFUSE sprint, which is advancing a new AI-enabled fusion layer designed to transform Sentinel-2 imagery into harmonised, very-high resolution (VHR) proxy mosaics. By preserving the radiometric quality of Sentinel-2 data while introducing VHR-like structural organisation, the approach aims to support more effective downstream Earth observation analytics, particularly in settings where access to real very-high resolution imagery is limited or too expensive.

“With HYPERFUSE, we explore how AI-driven fusion can enhance the analytical value of Sentinel-2 data by introducing a harmonised high-resolution proxy layer, aiming to support more scalable and cost-efficient Earth observation analytics,” commented Anđela Marković, Researcher at Zentrix Lab.

Cross-disciplinary collaboration is the key to success

Looking beyond traditional space-sector expertise and embracing fresh perspectives is key to generating innovation in the Earth observation domain and to strengthening the competitiveness of Europe’s Earth observation industry.

“A key strength of SPRINT4EO is the combination of Earth observation domain expertise, agile execution and external innovation capacity. That creates a practical framework for exploring new technologies before they move into larger operational contexts,” commented Mounia El Baz, Earth Observation Digital Innovation Engineer at ESA Φ-lab and Technical Officer for SPRINT4EO.

“The first round of research sprints already shows the wealth of innovation the initiative aims to unlock – an agentic AI model for Earth observation question answering, multi-sensor carbon intelligence for cities, and AI-enabled fusion that makes Sentinel data analytically more powerful,” Mounia added.

The SPRINT4EO activities are organised in three overlapping lots, with individual sprints running for up to six months after the kick-off date. The initiative will run until June 2027.

More information about SPRINT4EO and how to participate can be found here.

To know more: SPRINT4EO, ESA Φ-lab, OHB Digital Services, Fraunhofer Institute for Digital Medicine MEVIS, Pointly GmbH, Zentrix Lab

Rome, Italy, is featured in this image captured by the Copernicus Sentinel-2 mission. Photo contains modified Copernicus Sentinel data (2020), processed by ESA.

Turning data from space into action for Earth

Happy Earth Day, 22 April – a global call to act and protect our planet. At the European Space Agency, that action begins in orbit, where satellites deliver a continuous, global view of Earth and track environmental change.

Working with partners, ESA turns this stream of data into actionable information through its FutureEO programme, helping governments and communities respond faster and more effectively to climate-driven risks.

As climate change accelerates the spread of mosquito-borne diseases, satellite-based early warning systems are giving health authorities a critical head start. By fusing Earth observation data with machine learning, ESA and UNICEF have developed a digital platform that helps countries predict, prepare for and respond to dengue and malaria outbreaks weeks before they escalate.

At the heart of this effort is the Disease Incidence and Resource Estimator (DIRE), developed by ESA Φ‑lab for UNICEF. DIRE combines machine learning with satellite‑derived environmental data to model how climate and geography influence disease transmission and predict imminent disease epidemics.

Read the full article on www.esa.int.

Photo courtesy of Pixabay/A Different Perspective

EVE: making Earth observation knowledge accessible to everyone

Despite the wealth of Earth observation and Earth sciences knowledge, much of it is scattered across many different sources and formats and only accessible to experts. EVE (Earth Virtual Expert) is a new intelligent companion for exploring the world of Earth observation and Earth sciences. It can explain both beginner and advanced concepts, guide users to trusted sources, summarise scientific documents and deliver insights on trends and tools, acting as a centralised platform for Earth observation and Earth sciences insights.

Anyone who has ever had to compile information for a report knows how difficult it is to include all the pertinent data and cross-check references. This is the reality in several domains – and Earth observation and Earth sciences are no exception.

Earth observation and Earth science research generates a lot of high-value knowledge, but this knowledge is scattered across many different sources and formats. Accessing this information usually requires deep expertise, limiting comprehensive understanding.

All of this creates a significant entry barrier for many potential users, like domain practitioners and decision-makers who need transparent, trusted and scientifically robust information – something that traditional systems struggle to provide.

As environmental decisions and interventions rely more and more on Earth observation, there is a need for systems that not only retrieve information but also interpret and reason across heterogeneous sources. Recent advances have been made in the field of large language models (LLMs), but general-purpose models lack the domain specificity and rigorous evaluation needed for reliable Earth Intelligence applications.

Meet EVE (Earth Virtual Expert), an Earth observation and Earth science-specialised LLM. Funded by ESA Φ-lab and built by Pi School, EVE was developed in partnership with Imperative Space and Mistral AI to close the gap between Earth sciences and decision-making.

EVE-Instruct, the core 24B LLM for Earth Intelligence behind EVE’s chat platform, was built on Mistral’s Small 3.2 model and further optimised for reasoning and question answering. As corpus design and domain-adaptive pre-training are central to the performance of a specialised LLM, the team curated a large-scale Earth observation and Earth sciences corpus by manually selecting 172 sources across 22 trusted publishing institutions that included open-access, private and proprietary collections, the latter as the result of a partnership agreement with Wiley.

Adapting an instruction-tuned LLM to a target domain may come at the expense of the model’s ability to follow instructions, its conversational stability or its tool-use behaviour. The team therefore implemented a fine-tuning strategy that interleaves instruction fine-tuning with long-form text, mixing general-domain replay data with synthetic Earth observation and Earth sciences text. As a result, EVE-Instruct is more stable and follows instructions better than its parent model.
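The interleaving strategy can be pictured as a weighted sampler over three data sources. The weights and toy examples below are illustrative assumptions, not EVE’s actual training recipe.

```python
# Sketch of a mixed fine-tuning stream: domain long-form text is
# interleaved with instruction examples and general-domain 'replay'
# data so the model keeps its instruction-following ability.
# The 50/30/20 weights are an illustrative assumption.
import random

def mixed_stream(domain_text, instructions, replay,
                 weights=(0.5, 0.3, 0.2), seed=0):
    rng = random.Random(seed)
    sources = [list(domain_text), list(instructions), list(replay)]
    while any(sources):
        live = [i for i, s in enumerate(sources) if s]   # non-empty sources
        i = rng.choices(live, weights=[weights[j] for j in live])[0]
        yield sources[i].pop(0)                          # keep in-source order

batch = list(mixed_stream(["EO passage 1", "EO passage 2"],
                          ["Q: ... A: ..."],
                          ["general-domain text"]))
print(len(batch))  # → 4: every example appears exactly once, interleaved
```

Replay data from the general domain acts as a regulariser: the model keeps seeing the kind of text its parent was trained on while absorbing the new domain corpus.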

Due to a lack of standardised benchmarks for dialogue and natural language processing capabilities applied to Earth observation and Earth sciences, the team also curated an evaluation set targeting domain-relevant tasks like multiple choice question-answering (MCQA), hallucination detection and open-ended question-answering (QA), in what constitutes the first systematic benchmarks within Earth observation and Earth sciences for language modelling.

EVE-Instruct was evaluated using these benchmarks alongside general-domain benchmarks to assess domain gains and the preservation of its general capabilities. It was compared against its parent model, Mistral Small 3.2, and three additional LLMs of comparable scale: Gemma3, Qwen3 and Llama4 Scout.

EVE-Instruct achieved the highest performance across MCQA benchmarks, indicating effective incorporation of Earth observation and Earth science knowledge during the fine-tuning step. It also leads competing models on open-ended QA without context under both the ‘LLM-as-a-judge’ and ‘Win Rate’ evaluations.

To address the issue of factual hallucinations and to extend EVE’s knowledge beyond the training data, the team developed a Retrieval-Augmented Generation pipeline that grounds EVE’s answers in relevant documents from team-curated Earth observation and Earth sciences knowledge bases.

For hallucination detection, EVE-Instruct goes through a first fact-checking stage after a query, in which it acts as an evaluator, producing a binary hallucination label and a justification for the label. If a hallucination is detected, the query is reformulated using the justification to address the identified issues. With the newly retrieved information, the model generates a revised, more grounded response.

EVE-Instruct can then critique the original answer using both prior and newly retrieved evidence and produce a revised answer. Finally, the model ranks the original and revised outputs, selecting the most evidence-supported, reliable response.
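Put together, the retrieve–check–revise–rank cycle described in the last three paragraphs amounts to a simple control loop. The sketch below assumes generic `llm()` and `retrieve()` callables; it mirrors only the loop structure reported by the team, not EVE’s implementation.

```python
# Control-flow sketch of the self-checking loop: answer, fact-check,
# reformulate on failure, retrieve fresh evidence, revise, then rank.
# llm() and retrieve() are placeholder callables, not EVE's real API.

def answer_with_check(query, llm, retrieve, max_rounds=2):
    docs = retrieve(query)
    answer = llm("answer", query, docs)
    for _ in range(max_rounds):
        hallucinated, justification = llm("fact_check", query, answer)
        if not hallucinated:
            break
        # Reformulate the query using the evaluator's justification,
        # fetch new evidence, and draft a revised, grounded answer.
        new_query = llm("reformulate", query, justification)
        docs = docs + retrieve(new_query)
        revised = llm("revise", query, answer, docs)
        # Keep whichever of the two answers is better supported.
        answer = llm("rank", answer, revised, docs)
    return answer
```

The key design choice is that the same model plays both roles: generator of the answer and evaluator of its own factual grounding.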

Beyond offline evaluation, a six-month pilot stage for EVE’s chat platform was carried out starting in September 2025 with the help of 350 users, through a graphical user interface and an application programming interface. Interested parties can read more about the development of EVE in this technical paper and in this one-pager.

The models, code, curated corpus, benchmarks and a subset of the synthetically-generated fine-tuning dataset used to create EVE-Instruct are now available on EVE-ESA’s Hugging Face and GitHub.

By using EVE’s chat platform, anyone – regardless of scientific background and level of expertise – can explore and ask Earth observation and Earth science-related questions using natural language. The platform can explain both beginner and advanced concepts, guide users to trusted sources, summarise scientific documents and deliver insights on trends and tools.

An operational version of the platform is undergoing the final stages of development and will be available soon. Registrations for its public launch are now open here.

For now, EVE is text-only and does not reason directly over Earth observation and Earth sciences imagery or structured geospatial data. However, the team aims to expand it into a multimodal, agentic platform capable of reasoning over imagery and geospatial data, supporting multi-step scientific workflows for large-scale Earth observation and Earth sciences analyses and data-driven inference.

To facilitate this transition, the next steps have already been prepared: EVE natively operates using the standard Model Context Protocol (MCP), enabling seamless connectivity with a wide range of external geospatial tools, services, and processing backends. This design choice ensures that multimodal and agentic capabilities can be integrated incrementally, allowing EVE to orchestrate and interact with geospatial computation resources as they are plugged into the MCP ecosystem.

Discover more about EVE and register for EVE’s public opening on the EVE website.

To know more: ESA Φ-lab, Pi School, Imperative Space, Mistral AI, Wiley

Photo courtesy of Unsplash/Vimal S.

OroraTech’s journey from start-up to Copernicus data provider

Since joining ESA’s European Emerging Copernicus Contributing Missions (CCM) activity in June 2023, OroraTech has grown from a space start-up to a leading supplier of thermal sensing data and predictive wildfire solutions.

The Munich-based company now operates a growing constellation of thermal-sensing spacecraft and recently reached a significant milestone – participating in a hands-on workshop directly with the Copernicus Emergency Management Service (CEMS).

OroraTech’s relationship with ESA began through the ESA BIC Bavaria and ESA Kick-Start incubation programmes. This was followed in 2022 by co-funding from ESA’s InCubed programme for the development of FOREST-3 – the company’s first fully internally developed spacecraft, which launched in January 2025.

Read the full article on www.rapidresponse.copernicus.eu.

Photo courtesy of Unsplash/Mike Newbry.

Earth observation community spotlight: Saturnalia

In this article, we speak with Gianni Iannelli, Chief Executive of Saturnalia, about the challenges facing agriculture, the importance of Earth observation in crop monitoring and risk assessment, and how data provided through the TPM programme are helping Saturnalia achieve its aims.

Italy-based Saturnalia is a geospatial intelligence company that harnesses the power of Earth observation data to improve decision-making and risk management in agriculture. The company offers an easy-to-use platform that is used by farmers and agriculture insurers to better protect crops, assess exposure, and enhance productivity.

Saturnalia works with data from ESA’s Third Party Missions (TPM) programme, which disseminates data from commercial and institutional partners to European businesses participating in ESA incubation activities or developing pre-commercial applications.

Saturnalia gained access to these datasets through an ESA InCubed project, which was instrumental in enabling the development of the company’s AI-driven processing pipeline. As part of this work, Saturnalia integrated data provided by ESA TPM to prove the feasibility of the solution without incurring prohibitive upfront costs.

Read the full article on www.earth.esa.int.

Photo courtesy of Unsplash/Ant Rozetsky

Join us for the 2026 ESA EO Commercialisation Forum

Held in Seville (Spain) from 12 to 14 May 2026, the 3rd ESA Earth Observation Commercialisation Forum will give attendees the opportunity to meet institutions, industry leaders, start-ups, investors, users and entrepreneurs, and connect with potential partners while staying ahead of key Earth observation market trends and challenges.

The 3rd ESA Earth Observation Commercialisation Forum (ESA CommEO) will take place from 12 to 14 May 2026 at the prestigious Hotel Meliá Lebreros (Seville, Spain). Organised by ESA Φ-lab and supported by the Spanish Space Agency, the event will bring together the global Earth observation ecosystem for three days of insight, innovation, and high-level networking.

This year’s edition is focused on the latest trends arising in the Earth observation commercial market, featuring an engaging programme that includes keynote speeches, panel discussions, exhibitor booths, and curated matchmaking opportunities designed to foster new partnerships and boost commercial growth within the Earth observation sector.

The programme is divided into three key axes – ‘Strategy, Finance & Market Dynamics’, ‘Earth Intelligence, AI & Commercial Adoption’ and ‘Key Verticals & Future Capabilities’ – creating a well-rounded experience that caters to diverse interests, expertise levels and strategic priorities.

For the first time, the event will offer dedicated sponsorship opportunities, giving organisations the chance to strengthen relationships with end users, institutions, entrepreneurs, and investors. Sponsors will also be able to generate qualified leads by connecting directly with key stakeholders who are actively shaping the future of Earth observation commercialisation.

The event will also feature various parallel sessions, from matchmaking with investors and exploring commercial opportunities in Africa, to supporting New Space companies and exploring Copernicus Contributing Missions.

While the event has a strong focus on commercialisation, it also supports innovation. The top three finalists of the ESA Φ-lab Grand Marathon will pitch their Earth observation-based solutions aimed at protecting civilians in disaster and public-safety contexts.

For the third year, ESA CommEO will give start-ups the opportunity to compete for the CommEO Award. Powered by ESA and Creative Destruction Lab (CDL-Milan), the 3rd ESA CommEO Award is designed for ambitious, early-stage start-ups looking to anchor their technical innovation in a robust commercial strategy. Prizes include a guaranteed interview for the Creative Destruction Lab’s Global CDL Space Programme, a €25 000 voucher for ESA Third Party Missions (TPM) data, a €10 000 voucher for OVHcloud’s cloud computing services, and free admission to this event.

A great event goes beyond keynote speeches and panel discussions: attendees will have the opportunity to network during the event’s Gala Dinner and wander through Seville’s grand plazas during the event’s social activity.

More information about the event and registrations is available on the ESA CommEO website.

To know more: ESA CommEO, ESA Φ-lab, Spanish Space Agency, Creative Destruction Lab (CDL-Milan)

Photo courtesy of ESA

Using Φ-lab’s machine learning algorithms to fight mosquito-borne outbreaks in Brazil and Peru

As the number of dengue and malaria cases rises each year, governments and health authorities are in a race against time. DIRE (Disease Incidence and Resource Estimator) is a digital, predictive data analysis and visualisation platform that transforms climate and epidemiological data into a concrete operational roadmap by using a machine learning approach developed by ESA Φ-lab for UNICEF. This platform will help governments in high-burden regions like Brazil and Peru to shift from reactive crisis management to proactive, life-saving preparation.

Dengue and malaria are two of the most threatening mosquito-borne diseases worldwide, placing an immense burden on global healthcare systems and economies. According to the World Health Organization (WHO), about half of the world’s population is now at risk of dengue, with an estimated 100 to 400 million infections occurring each year.

As for malaria, it remains a leading cause of mortality, particularly among children under five years old in sub-Saharan Africa. The World Malaria Report from 2024 states that, in 2023, there were an estimated 263 million cases and 597 000 deaths globally.

While these two diseases are transmitted by different mosquito species, the conditions that drive their spread within populations are very similar. Dengue and malaria are both deeply tethered to the environment: climate change, land-use change, deforestation, rapid urbanisation and poor drainage create ‘hotspots’ where mosquitoes thrive, increasing human exposure.

When we talk about infectious diseases, timing is everything. Tools that predict outbreaks are therefore paramount to shift public health action from reactive – responding once people are already sick – to proactive, allowing governments to plan ahead and act before cases spike.

Meet DIRE, a digital, predictive data analysis and visualisation platform for imminent disease epidemics. This tool was funded by Wellcome Trust and developed by the University of California San Diego School of Global Policy and Strategy and New Light Technologies.

DIRE translates disease forecasting into actionable guidance for decision-makers through an interactive map that uses geospatial predictive analytics, showing where dengue and malaria outbreaks are likely to occur and what public resources may be needed to control them.

At the heart of DIRE lies a climate-based ensemble model developed by ESA Φ-lab for UNICEF that uses multiple machine learning approaches and Earth observation products to account for geographical variations in dengue incidence. The model proved to be more accurate than previous predictive techniques when piloted in Brazil and Peru. This novel approach was selected as one of UNICEF’s top research initiatives of 2022 and one of UNESCO’s Top 100 AI solutions for Sustainable Development Goals.
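In outline, an ensemble of this kind combines the predictions of several member models into one estimate. The toy members, coefficients and inputs below are invented for illustration; the real model uses trained machine learning approaches fed with Earth observation products.

```python
# Minimal ensemble illustration: several member models each map
# environmental features to a dengue-incidence estimate, and their
# outputs are averaged. All coefficients here are made up.
from statistics import mean

def model_rain(f):  return 0.8 * f["rainfall"]
def model_temp(f):  return 1.2 * f["temperature"]
def model_urban(f): return 5.0 * f["urban_fraction"]

def ensemble_predict(features, members):
    # Unweighted mean; weighted or stacked combinations are also common.
    return mean(m(features) for m in members)

obs = {"rainfall": 120.0, "temperature": 27.0, "urban_fraction": 0.6}
print(round(ensemble_predict(obs, [model_rain, model_temp, model_urban]), 1))  # → 43.8
```

Averaging several imperfect predictors tends to cancel out their individual errors, which is one reason ensembles often beat any single member model.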

As the senior author of the study behind Φ-lab’s machine learning approach used in DIRE, Rochelle Schneider (Copernicus Ecosystem Operations Engineer and ESA Φ-lab ambassador) shares her thoughts: “Predicting outbreaks is challenging work, where complexity is present in the data, model, and decision-support layers. By leveraging the machine learning framework we originally developed at Φ-lab, DIRE abstracts these complexities away for non-expert users.”

“Seeing this technology transition from the lab to a tool that predicts needs and resource allocation in Brazil and Peru is the ultimate evidence of Φ-lab’s impact. It aligns with our ‘AI for Good’ mission of creating and implementing new ideas through AI and Earth observation,” Rochelle added.

DIRE focuses on Brazil and Peru, as these two countries have faced persistent, climate-related outbreaks of both dengue and malaria. Its interactive, user-friendly format allows users to view predicted disease risks at multiple geographic levels and to see both recent trends and short-term predictions.

The DIRE visualisation platform shows the dengue outbreak risk prediction in Brazilian municipalities (middle). Municipalities with a lower outbreak risk are shown in blue, while the regions with higher outbreak risk are shown in red (as per the map legend on the left-side panel). Municipalities in stripes have a low-confidence prediction. The numbers of young (purple), adult (pink) and total (green) cases per month in a given municipality are shown in the right-side panel. In this panel, the number and cost of commodities/personnel required to mitigate the outbreak and the model indicators used in the prediction are also shown in two separate tabs. Credits: New Light Technologies, Inc.

Users can select a country (Brazil or Peru) to view past reported cases and projections for the current month and up to two months in advance. DIRE provides a range of socio-economic and environmental indicators that were used by the model and flags regions where predictions are less certain, helping users weigh the risks alongside uncertainty.

“UNICEF and ESA previously pioneered machine learning-based predictive models for dengue outbreaks in Latin America by synthesising UNICEF’s granular field data with ESA Φ-lab’s robust Earth observation and machine learning capabilities. This foundational work garnered significant interest from major entities, including the Wellcome Trust, and ultimately served as the analytical backbone for the DIRE project—a private-public collaboration focused on scalability”, commented Do-Hyung Kim, Data Science Specialist at UNICEF’s Climate and Environment Data Unit.

“It is a compelling testament to our partnership that such research initiatives produce high-quality, open-source algorithms that can be scaled to support diverse regions globally. I hope UNICEF and ESA continue to lead in this space”, Do-Hyung added.

DIRE has come a long way in predicting disease outbreaks and its capabilities go beyond forecasting. This platform also estimates the quantity and the cost of commodities and personnel required for disease control and treatment in each region – for example, the number of vaccines and fumigation kits needed, as well as their costs. With these data, DIRE generates a PDF report to be shared with local authorities who need clear information about the risk and resource readiness.

For Carlos Zegarra Zamalloa, Health Specialist at UNICEF Peru, DIRE is a reflection of the collaborative spirit between all stakeholders involved: “Climate-related outbreaks like dengue and malaria are becoming more frequent and dangerous in Peru, especially for children and pregnant women. In 2025 alone, Peru reported 39,000 dengue cases, with a substantial proportion of those affected being children; the scale has been overwhelming the capacity of governments and communities to respond effectively. We were therefore delighted to work together with UC San Diego and New Light Technologies to bring a range of stakeholders together to troubleshoot the problem.”

During this soft launch phase, DIRE’s interface and data quality are undergoing improvement tests. The long-term impact of this platform will be determined by its adoption by local authorities to plan and respond to disease outbreaks, supported by real examples and testimonials of its use in the field.

The DIRE visualisation platform is available here. The technical details about the model are available in this Scientific Reports article.

To know more: DIRE, ESA Φ-lab, UNICEF, UNESCO, Wellcome Trust, University of California San Diego School of Global Policy and Strategy, New Light Technologies.

Photo courtesy of Unsplash/John Cameron

A thunderous shift in foundation model architecture with THOR

Foundation models are enabling new ways to use Earth observation data, but most existing models struggle to handle data from diverse sensors and are limited to fixed patch sizes. This makes them hard to use in real-world applications that require flexibility. Funded by ESA Φ-lab and developed by the Norwegian Computing Center, THOR is a new foundation model designed to overcome both challenges: heterogeneous inputs and rigid deployment constraints.

Foundation models are driving a paradigm shift in Earth observation, moving the field away from specialised models towards general-purpose geospatial intelligence. Although they promise to revolutionise the way we interact with satellite data, most current foundation models are architecturally rigid.

This means they are trained using a fixed input image size and a fixed patch size (the size of small, non-overlapping segments into which input images are divided before being fed to the model), making it more difficult to process data that differs, even slightly, from the format they saw during training.

This rigidity creates a bottleneck for data-efficient adaptation: when the workflow breaks the data down into patches, it produces a low-resolution sequence of tokens – the units of data that a foundation model processes to understand its input and generate an output. Dense pixel-level tasks like segmentation then require large, complex decoders to upsample these features, and such decoders often need large amounts of data for fine-tuning, undermining the efficiency of foundation models.

Inspired by the Norse god of thunder and his legendary hammer, THOR (Transformer-based foundation model for Heterogeneous Observation and Resolution) is a versatile multi-modal foundation model that will shatter these shortcomings. This model has been developed by the Norwegian Computing Center, funded and supported by ESA Φ-lab through ESA’s Foundation Models for Climate and Society (FM4CS) project.  

THOR is the first foundation model with an architecture that unifies the 10–1000 m ground sampling distance range of Sentinel-1, -2 and -3, including the OLCI (Ocean and Land Colour Instrument) and SLSTR (Sea and Land Surface Temperature Radiometer) sensors.

This model has been trained on the LUMI high-performance computer using the THOR Pretrain dataset, a 22 TB dataset aligned spatio-temporally and across modalities that contains diverse land cover products, digital elevation models, and ERA5-Land variables. By incorporating randomised patch sizes and input image sizes during pre-training, THOR becomes ‘compute-adaptive’.

Other state-of-the-art models like TerraMind, DOFA or Copernicus-FM are flexible in handling diverse inputs, but less versatile when it comes to deployment. These models have a fixed internal resolution, meaning that, for very fine-grained tasks like detailed flood or crop-boundary mapping, they often rely on large, complex task-specific decoders to recover detail.

Instead of locking the model into a fixed image size and resolution, THOR can change its internal resolution at inference time, allowing users to trade accuracy for computational cost without retraining the model: coarser patches could be used for faster, global analyses, while smaller patches can be used for more detailed, local maps.
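The trade-off just described is easy to quantify: a transformer-style encoder splits an H × W image into (H/P) × (W/P) patches, so the token count – and with it the compute cost – grows quadratically as the patch size P shrinks. The image and patch sizes below are illustrative examples, not figures from the THOR paper.

```python
# Token count for a ViT-style encoder: an H x W image cut into P x P
# patches yields (H/P) * (W/P) tokens. Sizes here are examples only.

def token_count(height, width, patch):
    assert height % patch == 0 and width % patch == 0
    return (height // patch) * (width // patch)

for patch in (8, 16, 32):
    print(f"patch {patch:>2}: {token_count(256, 256, patch):>4} tokens")
# patch  8: 1024 tokens  (dense grid: detailed local maps, higher cost)
# patch 16:  256 tokens
# patch 32:   64 tokens  (coarse grid: fast global analyses, lower cost)
```

Because THOR sees many patch sizes during pre-training, the same weights can be run anywhere along this curve at inference time, which is what makes the accuracy-for-compute trade possible without retraining.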

This way, THOR solves both input heterogeneity and deployment versatility, focusing on making a single model adaptable and efficient across resolutions, data availability, and deployment constraints. THOR achieved state-of-the-art performance and demonstrated its superior data efficiency in the PANGAEA 10% benchmark, a standardised, open-source benchmarking framework designed specifically to evaluate the performance of geospatial foundation models (GFMs). The 10% benchmark refers to a specific, low-data evaluation scenario within PANGAEA designed to assess the effectiveness of GFMs when they are trained using only 10% of the labelled data for downstream tasks. 

Valerio Marsocci, Internal Research Fellow at ESA Φ-lab, comments on the importance of THOR for real-world scenarios: “With dense, high‑quality features produced directly from the encoder, THOR often requires much simpler downstream models, which improves robustness and reduces costs. By providing a flexible pre-training starting point, we empower scientists to solve both local and global problems – whether it is mapping disasters or monitoring crop health – without needing to reinvent the architectural wheel.”

For Arnt-Børre Salberg, Chief Research Scientist at the Norwegian Computing Center, THOR sets a new standard for foundation models in the European space ecosystem: “We developed THOR to be a global ‘go-to’ foundation model for Earth observation. This open-access tool transforms satellite data into vital intelligence for maritime security, hydropower energy management and emergency preparedness against floods and avalanches, making it an essential tool for a safer, more sustainable future driven by Norwegian innovation.”

THOR is helping Norway consolidate its strategic position in the Arctic region, according to Dag Anders Moldestad, Lead, Earth Observation at the Norwegian Space Agency: “Norway occupies a unique vantage point in the Northern Hemisphere. For us, satellites are not just tools, but our eyes on the ground.”

“What makes THOR a game-changer is its flexibility. It allows us to develop and deploy services in real time with significantly less computing power, so we can respond to crises as they happen. In disaster management, where every second counts, or in tracking the rapid shifts of our climate, THOR provides the speed and efficiency necessary to turn raw data into valuable information”, Moldestad added. 

Find more information about THOR’s technical details in this arXiv paper. The model and pretrain dataset are now available on Hugging Face. Its source code and TerraTorch extension are available on GitHub. A showcase of THOR can be found here.

To know more: FM4CS, ESA Φ-lab, Norwegian Computing Center

Photo courtesy of Unsplash/Mark König

A new training explored AI in Earth observation

From 8 to 11 December, ESA Academy’s Training and Learning Facility in Belgium hosted the pilot edition of the Disruptive Innovation and Commercialisation in Earth Observation Training Course. Organised in collaboration with ESA Φ-lab, this first edition brought together 30 Master’s and PhD students of 16 different nationalities, creating a vibrant and diverse learning environment.

One of the aspects that made this course unique was its dual focus. Participants were trained not only in Artificial Intelligence (AI) applied to Earth observation, but also in the business and commercialisation strategies necessary to turn innovative ideas into viable ventures. This combination of technical and entrepreneurial skills was designed to push students beyond traditional academic boundaries.

“The unique combination of AI, business and Earth observation made it truly one of a kind,” said one student. “Collaborating with motivated participants and learning from the ESA Academy and Φ-lab experts pushed me to think beyond disciplines.”

Read the full article on www.esa.int.

Breaking the satellite trade-off: AI creates near real-time 3D cloud maps

Clouds play a critical role in Earth’s climate system and are a major source of uncertainty in climate projections. The vertical distribution of ice and water particles in clouds impacts their radiative properties and, in turn, Earth’s energy balance. Recent research also showed that the internal properties of clouds in tropical cyclones influence how storms intensify. Yet satellites face a fundamental trade-off: systems that measure vertical structure lack continuous coverage, while those with continuous coverage cannot see inside clouds.

Now, research conducted through the Earth Systems Lab research programme, funded by ESA Φ-lab and involving former ESA research fellow Dr Anna Jungbluth, has produced a breakthrough machine learning framework that translates two-dimensional geostationary satellite imagery into detailed three-dimensional cloud maps in near real time. Published in November 2025, the study demonstrates for the first time the ability to create global instantaneous 3D cloud reconstructions, with particular success in mapping the internal structure of intense tropical cyclones.

Read the full article on www.climate.esa.int.

Photo courtesy of Unsplash/Zbynek Burival