
Navigating data privacy and responsible geoinformation use in humanitarian contexts: The “Do No Harm” Project

This year alone, climate change, natural hazards and conflicts will result in approximately 300 million people requiring aid. While data-driven technologies can help identify those in need and prioritise the allocation of aid resources, geodata technologies also pose a challenge to societal values such as privacy, fairness and justice, especially when vulnerable communities are involved. In the ever-evolving landscape of technology and data, the University of Twente-led “Do No Harm” project seeks to understand how to harness geoinformation for disaster mapping while safeguarding privacy and ensuring fairness.

We spoke to project members – Postdoctoral Researcher Rogers Alunge and PhD Candidate Brian Masinde – to learn more about their work, which includes fieldwork in Malawi and a case study with data from the Philippines, addressing privacy, bias and fairness issues arising from geodata and Artificial Intelligence (AI).

Why the name “Do No Harm”?

The project’s core principle is simple yet profound. As geoinformation becomes a critical tool for disaster response, it also becomes a potential double-edged sword. “The data we collect holds immense power. But how do we wield this power responsibly? Solutions should not cause more harm than good,” said Alunge.

Data protection, education and awareness in Malawi 

With a legal background, Alunge was intrigued by the interdisciplinary nature of technology law and data protection. “I always wondered: how do we maintain our humanity in innovation? That led me to take part in case studies on African data protection, cutting across humanitarian and cultural issues, which was the beginning of my journey to do no harm,” he said.

Alunge emphasises the need to take regulations into account when judging whether a project will do more good than harm. In times of conflict, for example, data can easily fall into the wrong hands and cause harm. Awareness is crucial, as data can serve both good and bad purposes. “There is a great need to identify who collects data and to ensure transparency. The local communities, too, must be data-literate and aware of their privacy rights,” said Alunge.

To understand this better, Alunge conducted field research with cyclone-affected communities in the districts of Rumphi and Karonga in Northern Malawi. In collaboration with UNICEF, he held focus group discussions with community groups and, through questionnaires and recordings, learnt about their perspectives on drones and data usage.

During his research, Alunge found that many of the project’s participants showed little concern about data privacy. They placed a great deal of trust in the government and local chiefs to collect and use any information about them or their community diligently. “Data literacy” seemed a distant concept to them. Yet this vulnerability highlights the urgency for humanitarian organisations to prioritise data privacy education and awareness when collecting data, something Alunge is championing.

Rogers Alunge holding a focus group meeting with residents of the farming community of Mzokoto-Chisokwo, Rumphi district, Malawi, 28th November 2022. Discussions centred on the residents’ perspectives on privacy, data protection and other cultural concerns related to humanitarian organisations flying drones over their communities and collecting high-resolution aerial images for disaster preparedness and resilience, images that equally depict their livestock and lifestyle in detail.

Navigating privacy, bias and fairness in AI for humanitarian causes 

The intersection of artificial intelligence (AI) and humanitarian efforts presents both promise and peril. Algorithms, sequences of instructions that guide a computer to perform a task, are increasingly used to make critical decisions, often with little consideration of data ethics. Brian Masinde is committed to addressing the privacy, bias and fairness challenges arising from geodata and AI.

Masinde’s journey began at the crossroads of mathematics, statistics and machine learning (ML). “In my work, I noticed that sensitive data and algorithms are being used more and more, and with the evolution of ML and AI, there are lots of ethical, privacy and fairness concerns,” said Masinde.

The awareness gap regarding data privacy persists: literacy about data rights and privacy remains low. Masinde’s primary goal is to alert humanitarian organisations to the pitfalls of automating aid distribution. Who should receive assistance, where, and when? These questions drive his work. His solutions aim to balance efficiency with ethical considerations, in particular distributive equity – ensuring that AI-driven decisions benefit all, especially the most vulnerable.

Auditing geo-intelligence workflows

Masinde’s focus lies in auditing geo-intelligence workflows for group privacy issues and algorithmic biases. He examines the potential consequences of AI decisions in humanitarian contexts, using data from Malawi and the Philippines as case studies. By dissecting real-world scenarios, he can uncover vulnerabilities and opportunities for improvement. Three critical concepts guide his work:

  1. Justice and Fairness: Evaluating whether AI systems treat different groups equitably. 
  2. Causality: Understanding the cause-and-effect relationships in AI outcomes. 
  3. Methods for Fairness: Exploring techniques to mitigate bias and promote fairness (a simple example is sketched below).
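
To give a concrete sense of what such an audit involves, here is a minimal sketch of one common fairness check: the demographic parity difference, i.e. the gap in the rate of positive decisions (here, being flagged to receive aid) between groups. The data, group labels and function names below are invented for illustration; this is not the project’s actual methodology or data.

```python
# Hypothetical sketch: demographic parity difference for an aid-allocation
# model. A decision of 1 means a household is flagged to receive aid.
# All values below are invented for illustration.

from collections import defaultdict

def selection_rates(decisions, groups):
    """Fraction of positive decisions per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for decision, group in zip(decisions, groups):
        counts[group][0] += decision
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def demographic_parity_difference(decisions, groups):
    """Largest gap in selection rate between any two groups (0 = parity)."""
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["urban"] * 5 + ["rural"] * 5

print(selection_rates(decisions, groups))
# {'urban': 0.6, 'rural': 0.4}
print(demographic_parity_difference(decisions, groups))
# 0.2 -> urban households are flagged 20 percentage points more often
```

A difference of zero would mean every group is flagged at the same rate. In practice, an audit such as Masinde’s goes further, asking why a gap exists (the causality question above) before deciding how to mitigate it.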

Find out more about Masinde’s research on group privacy threats for geodata in the humanitarian context in his recent publication, where he sheds light on the intricacies of safeguarding collective data rights.  

Kyungu, Karonga, Northern Malawi. Demonstration of a drone flight and aerial image collection of the community, amidst curious onlookers, 29th November 2022. 

The way ahead: Current work and next steps 

The Do No Harm project finds that people’s apathy towards data privacy poses real risks. Humanitarian organisations must champion responsible data practices, and protection should extend beyond national legal frameworks to community awareness and literacy.

AI’s impact extends beyond algorithms – it shapes lives. Privacy, bias and fairness must be thoroughly considered in technological innovations to ensure that they serve humanity without harm.

The “Do No Harm” project continues its vital work, advocating for a harmonious coexistence between data-driven solutions and individual well-being.  

Project partners 

The Do No Harm project is led by Prof. Jaap Zevenbergen (PGM, ITC Faculty) in collaboration with Dr. Caroline Gevaert (EOS, ITC Faculty), Dr. Michael Nagenborg (Philosophy department, BMS Faculty) and Dr. Fran Meissner (PGM, ITC Faculty).  

External project partners include two main organisations: 510, an initiative of the Netherlands Red Cross, and UNICEF Malawi. Among the main contributors to the project are Prof. Marc van den Homberg, Scientific Lead at 510 and ITC Princess Margriet Chair; Joachim Ramakers, Program Manager for Data and Digital Support at 510; Dr. Michael Scheibenreif from UNICEF Eastern and Southern Africa; and Tautvydas Juskauskas and Postar Chikaoneka from UNICEF Malawi.

Please contact Rogers Alunge and Brian Masinde for more information about the project: 

Rogers Alunge, Postdoctoral Researcher
Brian Masinde, PhD Candidate