Detecting Buildings From Drone Imagery For Disaster Response with WeRobotics

Objectives

  • Build a prototype model that can generate building outlines from drone imagery for use in disaster response analysis
  • Achieve an Intersection over Union (IoU) score – a common metric for evaluating such models that measures how closely the model-generated building outlines agree with ground truth data – of over 0.6
  • Develop code and pipelines for processing drone imagery for modeling
  • Rectify and align drone imagery from different years to enable easier comparison over time
  • Create a simple front-end interface and workflow to allow Tanzania Flying Labs to easily run models

Question

Autonomous robotic solutions, such as drones, are increasingly being used in disaster-prone countries to support a wide range of humanitarian efforts. For example, mapping drones are used to quickly pinpoint disaster damage and document destroyed homes and infrastructure, photographing in hours what once took days or weeks. But while drones collect information quickly, the data from a single 20-minute drone flight can require over 13 hours of analysis; using artificial intelligence (AI) to accelerate the generation of insights is therefore incredibly valuable.

WeRobotics is a nonprofit organization dedicated to scaling and accelerating the positive impact of humanitarian, environmental, and public health projects by localizing appropriate robotics solutions such as drones. The organization does this through strategic partnerships coupled with the co-creation of a growing network of Flying Labs in Asia, Africa, Latin America, and Oceania. Together with their technology partners, they match the latest in robotics technology with local needs and demands, creating a vibrant and high-end market and workforce. WeRobotics has spearheaded a range of humanitarian drone missions around the world over the years with the UN, Red Cross, World Bank, and others.

Time and time again, one of the biggest challenges during these missions has been the lack of rapid analysis of the aerial data (photos, videos, and 3D models). Put simply, aerial data is a big data problem. Tanzania Flying Labs (TFL) has identified the automatic classification of buildings in drone imagery of Dar es Salaam as a pressing need across numerous potential projects. Being able to identify where buildings are, or aren't, allows for examining change over time: an analysis that is especially crucial after a natural disaster.

DataKind first explored this type of problem in 2018 with the World Bank Global Facility for Disaster Reduction and Recovery and showed that applying machine learning/AI (ML/AI) was a possible solution. Such work can support efforts in flood impact estimation and the analysis of change in the development of certain areas of Dar es Salaam. Besides the identification of buildings and generation of building outlines, TFL was also very interested in being able to identify changes in imagery taken at different points in time. This would allow them to see how the built environment was changing, support their efforts to estimate impact after natural disasters, and support other data-driven efforts undertaken by associated groups.

What Happened

This DataCorps project was sponsored by The Rockefeller Foundation and the volunteer team was composed of Data Ambassador Krishna Bhogaonker and team members Carlos Espino, Will McCluskey, Tony Szedlak, and Terence Tam. The team worked with project champion Leka Tingitana from TFL. 

TFL and WeRobotics, more broadly, were interested in and committed to harnessing the power of AI to accelerate image processing and review. There are numerous potential use cases for such models – from crop identification to disaster impact estimation. WeRobotics and TFL wanted to know if a high-performing deep learning model – a machine learning technique that has been found to perform well for image classification tasks – for building identification could be developed for areas of Dar es Salaam that are prone to flooding, and if these models could support change detection efforts to measure the impact of that flooding.

The volunteer team endeavored over the past year to develop high-performing building classifiers that can automatically generate building outlines for Dar es Salaam. In the process of developing this model, they discovered a critical need for improved image alignment solutions to allow for the comparison of images from different flying missions. Image alignment is a perennial challenge for drone and even satellite imagery. Images collected at different times can be aligned using existing software, but misalignment can persist due to differences in drone type, software, flying altitude, and so on. Such misalignment makes change detection incredibly difficult, as misaligned objects appear as changes when in fact they are just errors. The team developed a deep learning model to improve the alignment by shifting and slightly warping the image. They are currently working on an open-source version so more users of such images can benefit from the tool.
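To illustrate the alignment problem, here is a minimal classical sketch – not the team's deep learning aligner, which also corrects warping – that estimates the pixel translation between two co-captured images using phase correlation. All function names here are illustrative, not from the project's codebase.

```python
import numpy as np

def estimate_shift(ref: np.ndarray, moved: np.ndarray) -> tuple:
    """Estimate the integer (row, col) translation between two same-size
    grayscale images via phase correlation (normalized cross-power spectrum)."""
    f_ref = np.fft.fft2(ref)
    f_mov = np.fft.fft2(moved)
    cross = f_ref * np.conj(f_mov)
    cross /= np.abs(cross) + 1e-12       # keep phase only, avoid divide-by-zero
    corr = np.fft.ifft2(cross).real      # a sharp peak appears at the offset
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Offsets past half the image size wrap around to negative shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# Toy example: shift a synthetic "image" by (3, -5) pixels and recover it.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(3, -5), axis=(0, 1))
print(estimate_shift(shifted, img))  # -> (3, -5)
```

Pure translation like this is what off-the-shelf tools handle well; the residual warping the team tackled is what the classical approach misses.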

The final building detection model was trained on imagery from Ghana and then tested on Dar es Salaam. Using imagery from a different location prevented the model from overfitting, or "over-learning" specific features of Dar es Salaam instead of learning how to identify buildings in general. The model achieved an IoU score of over 0.6, meeting the goal set by WeRobotics and achieving cutting-edge performance. As a result, the model's outputs are more accurate than those of other available models and can more readily be used for analysis.
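The IoU metric itself is simple to compute: divide the area where predicted and ground-truth building footprints overlap by the area covered by either. A minimal sketch over binary raster masks (illustrative code, not the project's evaluation pipeline):

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over Union of two binary masks (1 = building pixel)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(intersection / union) if union else 1.0

# Toy 4x4 tile: the predicted footprint overlaps half of the true one.
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
pred = np.array([[0, 0, 1, 1],
                 [0, 0, 1, 1],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
print(round(iou(pred, truth), 2))  # 2 shared pixels / 6 total -> 0.33
```

A score of 1.0 means perfect agreement; 0.6 means the overlap is large relative to the combined area, which is strong performance for building segmentation.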

It used to take teams of analysts days, if not weeks or months, to survey areas damaged by floods, fires, and other natural disasters. The neural network model and associated data pipeline developed by DataKind and its team of pro bono data scientists allow TFL to compare images from before and after heavy rainfall and identify damaged or destroyed homes in hours. With the modeling complete, the team developed an interactive front-end tool that lets team members – particularly non-technical ones – identify destroyed structures more swiftly and easily. The tool simplifies running the model: TFL can simply upload data to standard cloud storage services and then run the models from a simple web application. All of this was deployed to cloud service providers so that TFL can run it from anywhere.
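Once before-and-after imagery is aligned and buildings are outlined, the before/after comparison reduces to differencing the two sets of footprints. A minimal pixelwise sketch of that idea, assuming two co-registered binary building masks (illustrative only, not the deployed pipeline):

```python
import numpy as np

def change_map(before: np.ndarray, after: np.ndarray) -> dict:
    """Pixelwise change between two co-registered binary building masks."""
    before, after = before.astype(bool), after.astype(bool)
    return {
        "lost": np.logical_and(before, ~after),  # footprint gone: possible damage
        "new": np.logical_and(~before, after),   # new construction
        "unchanged": np.logical_and(before, after),
    }

# Toy example: one 2x2 building disappears and a different one appears.
before = np.zeros((6, 6), dtype=int)
after = np.zeros((6, 6), dtype=int)
before[1:3, 1:3] = 1   # building present before the rainfall
after[4:6, 4:6] = 1    # different building present afterwards
maps = change_map(before, after)
print(int(maps["lost"].sum()), int(maps["new"].sum()))  # 4 4
```

In practice the "lost" pixels would be grouped into per-building regions and surfaced on a map in the web tool, which is why accurate alignment upstream matters so much.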

(source: WeRobotics)

What’s Next

Currently, TFL is testing the tool on their own infrastructure and new imagery. They have presented the team's work at the 2019 African Drone Forum. We're excited to share that the volunteer team continues to open source portions of this project, making future image detection projects far easier so others can benefit from this important work.


African Drone Forum 2020 (source: WeRobotics)
