The Russo-Ukrainian war gave rise to a surge of new interactive visualization tools that integrate open-source intelligence (OSINT) with geoinformation. For example, in January 2022 the Centre for Information Resilience launched the project “Eyes on Russia” [1], which collects image and video data from social media, as well as satellite imagery and other related media. The collected data is subsequently geolocalized by hand and visualized on an interactive map of Ukraine. As such, the tool provides intelligence for timely situational awareness and investigations. Other OSINT platforms provide detailed information on the location of military formations (uawardata [2]) or on equipment losses (Oryx [3]).

The main bottleneck for such tools is the manual geolocalization and confirmation of the collected data. Recent advances in deep learning, including Contrastive Language-Image Pretraining (CLIP) [4], may provide models that automatically infer the geolocation of a given image or video [5]. Such a model would significantly reduce the manual workload and provide a verification mechanism for previously annotated data.
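The core retrieval step behind such CLIP-based geolocalization can be sketched as follows: rank candidate locations by the cosine similarity between an image embedding and text embeddings of location prompts. In practice the embeddings would come from CLIP's image and text encoders; the vectors below are only toy stand-ins, and the function name is our own illustration, not an API from [4] or [5].

```python
import numpy as np

def rank_places(image_emb: np.ndarray, place_embs: np.ndarray, places: list[str]) -> list[str]:
    """Rank candidate places by cosine similarity to an image embedding (CLIP-style)."""
    # Normalize embeddings so that the dot product equals cosine similarity.
    img = image_emb / np.linalg.norm(image_emb)
    txt = place_embs / np.linalg.norm(place_embs, axis=1, keepdims=True)
    sims = txt @ img  # one similarity score per candidate place
    order = np.argsort(-sims)  # descending similarity
    return [places[i] for i in order]
```

With real CLIP encoders, the text embeddings would be computed from prompts such as "a photo taken in Kharkiv", and the top-ranked place serves as the predicted geolocation.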

Goal

The goal of this project is divided into two parts: i) implementation of an interactive visualization that integrates information from different OSINT archives on the Russo-Ukrainian war, and ii) investigation of current deep learning methods for the automated geolocation of image/video data. In particular, the combination of regular OSINT with advanced deep learning methods forms the core of this project. The final outcome is an interactive demonstrator that shows how information from different OSINT archives can be fused with automated geolocation in the case of the Russo-Ukrainian war.

Requirements

  • Good programming skills (Python)
  • Basic knowledge of machine learning
  • Interest in OSINT and conflict monitoring
  • Knowledge of web engineering (e.g. Flask) and visualization of geodata (e.g. folium) is a plus

If you are interested and want to hear more about the project, please contact us.

References

[1] https://eyesonrussia.org/ [accessed 27.02.2023]

[2] https://github.com/simonhuwiler/uawardata [accessed 27.02.2023]

[3] https://github.com/leedrake5/Russia-Ukraine [accessed 27.02.2023]

[4] Radford, Alec, et al. “Learning transferable visual models from natural language supervision.” International conference on machine learning. PMLR, 2021.

[5] Haas, Lukas, Silas Alberti, and Michal Skreta. “Learning Generalized Zero-Shot Learners for Open-Domain Image Geolocalization.” arXiv preprint arXiv:2302.00275 (2023).