June 11, 2024

Φ-lab leads the way for new ChatGPT-style tools for Earth observation

As recently announced, ESA Φ-lab, together with its technology partners, is leading activities to develop ChatGPT-style AI foundation models aimed at intelligent information retrieval in Earth observation (EO). With the launch of further initiatives exploring large language models, now is a good time to look in more detail at the new and existing work Φ-lab is doing in this field.

ESA, other space agencies and New Space enterprises operate Earth observation missions for the benefit of science, commerce and society as a whole, but the volume of satellite data available far exceeds the capacity of humans to process it and derive actionable insights in a timely manner.

Progress with more traditional AI can, however, be hampered by the need for a pool of labelled data to train the models. Foundation models help to circumvent this limitation through largely self-supervised learning from large and varied sources of unlabelled data, alongside the supervised sources that are still necessary. Foundation models also deliver tools that can be adapted to a broad range of tasks, and since their inception in 2018 they have driven a huge transformation in machine learning, even leading to chatbots with impressive natural language capabilities and several other emergent properties.
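
As a simple illustration of the self-supervised idea behind foundation models, the minimal sketch below (written for this article, not taken from any Φ-lab codebase) masks part of an unlabelled multispectral chip and trains a toy network to reconstruct it, so that no manual labels are required.

```python
# Minimal masked-reconstruction sketch of self-supervised pretraining.
import torch
import torch.nn as nn

class TinyMaskedAutoencoder(nn.Module):
    """Toy encoder-decoder that reconstructs masked multispectral patches."""
    def __init__(self, bands: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(bands, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Conv2d(64, bands, 3, padding=1)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyMaskedAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in for a batch of unlabelled 4-band satellite chips (64 x 64 pixels).
images = torch.rand(8, 4, 64, 64)

for step in range(5):
    # Randomly hide about half of the pixels; the loss is computed only there.
    mask = (torch.rand(8, 1, 64, 64) < 0.5).float()
    reconstruction = model(images * (1 - mask))
    loss = ((reconstruction - images) ** 2 * mask).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: reconstruction loss {loss.item():.4f}")
```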

Φ-lab has a proven pedigree in disruptive innovation in Earth observation, with a particular focus on AI4EO and innovative computing paradigms. As covered in an article in March, given the enormous potential of foundation models for rapid, self-supervised learning, Φ-lab is undertaking various initiatives to create foundation models that exploit EO and remote sensing datasets.

The PhilEO project has been running for over a year. Developed by Φ-lab in conjunction with e-GEOS and Leonardo Labs, and exploiting the davinci-1 supercomputer, PhilEO is a geospatial foundation model trained on global Copernicus Sentinel-2 data. The model uses metadata from Sentinel-2 images and is trained to identify geographical features around the Earth, enabling it to learn general representations and to perform downstream tasks such as land cover classification, building density estimation and road segmentation.
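
To illustrate how such a pretrained model is typically reused, the hypothetical sketch below attaches a small task-specific head to a pretrained-style encoder for land cover classification; the class count, band count and module names are illustrative assumptions, not the actual PhilEO interfaces.

```python
# Illustrative downstream fine-tuning: frozen pretrained encoder + small head.
import torch
import torch.nn as nn

NUM_CLASSES = 11  # hypothetical number of land cover classes

# Stand-in for an encoder whose weights came from self-supervised pretraining.
pretrained_encoder = nn.Sequential(
    nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

# A task-specific classification head is attached on top of the encoder.
classifier = nn.Sequential(pretrained_encoder, nn.Linear(64, NUM_CLASSES))

for p in pretrained_encoder.parameters():
    p.requires_grad = False  # freeze the backbone, train only the head

optimizer = torch.optim.Adam(
    (p for p in classifier.parameters() if p.requires_grad), lr=1e-3
)

chips = torch.rand(8, 4, 64, 64)              # labelled Sentinel-2-like chips
labels = torch.randint(0, NUM_CLASSES, (8,))  # toy land cover labels

loss = nn.CrossEntropyLoss()(classifier(chips), labels)
loss.backward()
optimizer.step()
print(f"classification loss: {loss.item():.4f}")
```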

In a major milestone, the PhilEO team is now releasing the model itself and associated resources to further research and testing throughout the EO community. PhilEO Bench, an evaluation benchmark that allows the performance of various models to be compared, can already be found on GitHub, and PhilEO Globe, the Sentinel-2 dataset, has been uploaded to Hugging Face. The code for the model will be available on the Hugging Face page in the coming weeks.

Two new activities supported by Φ-lab have also just been launched. A consortium comprising DLR, FZ Jülich, KP Labs and IBM will develop FAST-EO (Fostering Advancements in foundation models via unsupervised and Self-supervised learning for downstream Tasks in Earth Observation), a European multi-modal foundation model that is expected to significantly advance the state of the art. Incorporating worldwide Sentinel-1 SAR and Sentinel-2 optical datasets, the model will integrate natural language capabilities and undergo validation in a range of environmentally critical applications such as methane leak detection, biomass estimation and land cover change monitoring.
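
The sketch below shows one common way such multi-modal models are structured, with separate Sentinel-1 and Sentinel-2 encoders feeding a shared representation; it is an assumption-laden illustration, not the FAST-EO architecture itself.

```python
# Illustrative multi-modal fusion of SAR and optical inputs.
import torch
import torch.nn as nn

def make_encoder(in_bands: int, dim: int = 128) -> nn.Module:
    """Tiny per-modality encoder producing one embedding per image chip."""
    return nn.Sequential(
        nn.Conv2d(in_bands, dim, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

sar_encoder = make_encoder(in_bands=2)       # e.g. Sentinel-1 VV/VH channels
optical_encoder = make_encoder(in_bands=12)  # e.g. Sentinel-2 spectral bands
fusion = nn.Sequential(nn.Linear(256, 256), nn.ReLU())  # shared representation

sar = torch.rand(4, 2, 64, 64)
optical = torch.rand(4, 12, 64, 64)

# Fuse modality-specific embeddings into one joint representation that
# downstream heads (methane, biomass, land cover change) could consume.
joint = fusion(torch.cat([sar_encoder(sar), optical_encoder(optical)], dim=1))
print(joint.shape)  # torch.Size([4, 256])
```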

A second initiative, FM4CS (Foundation Models for Climate and Society), has also commenced in recent months. Led by the Norwegian Computing Center together with various national meteorological offices, the project will develop a foundation model focused on climate adaptation and extreme-weather-event mitigation. It will also benefit from the use of LUMI (Large Unified Modern Infrastructure), a petascale, world-class supercomputer.

AI foundation models serve as the engines of digital assistants, in which the core processing of the foundation model is integrated with natural language models and interactive user interfaces. The general idea of a digital assistant is for all users – from non-technical to EO experts – to be able to query EO data archives with questions such as “How many different crop types are in this Sentinel-1 image?”, or to ask more general questions linked to EO and Earth science, such as “How can EO help to monitor urban heat islands?”
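
A toy sketch of this assistant pattern is given below: a natural-language front end routes a query either to an EO analysis step over an archive or to a general question-answering step. All function names are hypothetical placeholders rather than any actual Φ-lab tooling.

```python
# Toy routing layer between a language interface and an EO foundation model.
from dataclasses import dataclass

@dataclass
class Answer:
    source: str
    text: str

def analyse_archive(query: str) -> Answer:
    # Placeholder for running a foundation-model task (e.g. crop-type
    # mapping) over the relevant Sentinel scenes in an archive.
    return Answer("foundation model", f"Ran an EO analysis for: {query!r}")

def answer_general_question(query: str) -> Answer:
    # Placeholder for a language-model answer grounded in EO documentation.
    return Answer("LLM", f"General EO/Earth-science answer for: {query!r}")

def assistant(query: str) -> Answer:
    # Extremely naive routing; a real assistant would let the LLM decide
    # which tool to call and with which parameters.
    needs_imagery = any(w in query.lower() for w in ("image", "scene", "sentinel"))
    return analyse_archive(query) if needs_imagery else answer_general_question(query)

print(assistant("How many different crop types are in this Sentinel-1 image?"))
print(assistant("How can EO help to monitor urban heat islands?"))
```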

To mature the human-interfacing aspect of a digital assistant, Φ-lab has just launched a new project with Pi School to build an EO Virtual Expert (EOVE). The team is exploring a set of large language models (LLMs), which will be trained and fine-tuned on a curated set of documents related to EO and Earth science. A web platform with a simple graphical user interface and an application programming interface will be created as the gateway to the trained LLM.
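
Assuming, purely for illustration, that such a gateway exposes the fine-tuned LLM through a simple REST endpoint, a minimal sketch of the web API might look as follows; the endpoint, model call and file names are hypothetical, not the actual EOVE design.

```python
# Minimal illustrative web gateway in front of a fine-tuned EO language model.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="EO Virtual Expert (illustrative gateway)")

class Query(BaseModel):
    question: str

def run_llm(question: str) -> str:
    # Placeholder for calling the fine-tuned EO/Earth-science LLM.
    return f"(model answer to: {question})"

@app.post("/ask")
def ask(query: Query) -> dict:
    """Single entry point that the web GUI and API clients would both use."""
    return {"answer": run_llm(query.question)}

# Run locally with: uvicorn eove_gateway:app --reload  (filename is hypothetical)
```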

The end goal of Φ-lab’s various ventures in foundation models and LLMs is to chart a path towards an EO digital assistant that responds to information and knowledge queries posed in natural language and produces reliable, validated content.

“Foundation models are bringing a paradigm shift in AI, thanks to the scaling in training data and model size. These intelligent agents can be adapted to several specific applications and are showing impressive emerging properties, unlocking the potential of AI like never seen before. In this field, LLMs are currently disrupting the way humans interact with intelligent agents via natural language,” comments Head of ESA Φ-lab Division Giuseppe Borghi. “The integration of these models with EO and other heterogeneous data will ultimately place a dedicated ChatGPT-style tool at the fingertips of EO end users in many sectors.”

To find out more: Φ-lab, Norwegian Computing Center, Pi School

Photo courtesy of Pexels/ThisIsEngineering