Combining drone-mapping and AI-powered object detection to localize mosquito breeding spots
According to the World Health Organization, mosquitoes are among the deadliest animals in the world. Every year, mosquitoes infect nearly 700 million people worldwide. Mosquito-borne diseases like Dengue, Zika, Malaria and Yellow Fever represent a major public health issue. More than 40 percent of the world’s population is at risk of dengue infection. Mosquitoes have no respect for borders and spread disease wherever they go. The Zika virus had its first large outbreak on the Island of Yap (Federated States of Micronesia) in 2007 and has since spread to 86 countries.
“For all of these diseases, vector control is a powerful preventive tool that is not used to its full potential while taking action is entirely feasible.” – Dr. Margaret Chan, Director-General of the World Health Organization (2014).
Mosquitoes thrive in stagnant water, and containers like used car tires can serve as ideal mosquito-breeding spots. Finding and properly disposing of used tires is essential to preventing outbreaks; however, scouting for them from ground level can be very challenging. Inspecting a large area on foot would take several hours.
Flying a drone and capturing an aerial view can greatly facilitate the scouting process. An aerial image lets us visually spot and localize all potential breeding spots. However, a manual visual inspection would be a long and exhausting process given the size of the area. Processing the aerial image with AI instead allows us to detect used tires and localize them at a large scale. Picterra offers a solution for tailoring object detection algorithms to analyze images and look for objects, such as, in this case, used tires.
Drones vs. Mosquitoes
Let’s go through the project!
Upload the right image
After capturing aerial images and creating an orthomosaic with photogrammetry software (e.g. Metashape, Pix4D, DroneDeploy, nFrames, Correlator3D, etc.), you can upload it in TIFF format to the Picterra platform.
For this example project, we uploaded an orthomosaic of a 2.54-hectare site located in the industrial area of Sevastopol, Ukraine.
Note: The ideal image resolution depends on the size of the objects you want to detect. Generally, objects need to be at least 10 px wide to be detected efficiently. This image has a resolution of 1.18 cm/pixel.
For more information about the image resolution, check out our FAQ article What resolution do I need for my images?
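The 10-pixel rule of thumb above can be turned into a quick back-of-the-envelope calculation. The sketch below is ours, not part of the Picterra platform: the function names are invented for illustration, and the ~60 cm tire width is an assumed figure.

```python
def min_detectable_width_cm(gsd_cm_per_px, min_pixels=10):
    """Smallest object width (cm) that still spans `min_pixels` pixels."""
    return gsd_cm_per_px * min_pixels

def max_gsd_cm_per_px(object_width_cm, min_pixels=10):
    """Coarsest resolution (cm/pixel) that keeps the object `min_pixels` wide."""
    return object_width_cm / min_pixels

# At this project's 1.18 cm/pixel, anything wider than ~11.8 cm is detectable.
print(min_detectable_width_cm(1.18))

# A car tire is roughly 60 cm across (assumption), so even a coarser
# ~6 cm/pixel orthomosaic would still meet the 10-pixel guideline.
print(max_gsd_cm_per_px(60))
```

In other words, the 1.18 cm/pixel resolution used here leaves a comfortable margin for objects the size of a tire.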
Zooming in, we can see many used tires scattered around. They blend into the environment, so spotting and counting them manually would be time-consuming. Luckily, we can use AI to automate this task.
Building a Custom Detector
First things first, we will create an “Old tires detector” using Picterra’s “Custom Detector Builder” tool. We choose “Detectors” and click on “New Custom Detector” to open the tool.
Defining training areas
In order to build the custom detector, the first step is to define “Training Areas”. These will be used to “teach” Picterra’s AI model the type of objects we want to detect (old tires). There are three main types of training areas you should focus on to achieve optimal results:
- Training areas that contain examples of old tires (positive examples). This teaches the model what to detect.
- Training areas that contain examples of objects similar to old tires (similar counterexamples). This teaches the model what not to detect.
- Training areas that are empty and contain only “background” (general counterexamples). This teaches the model what background looks like.
Then we need to outline our objects of interest, the old tires, inside these training areas. We end up with all the annotated areas below:
Make sure to annotate each and every object in the “positive” training areas, even partially contained objects. Leave the counterexample training areas empty.
For more insights on how your annotations affect the detection outputs, check out our crash intro into AI-powered object detection.
Training and saving the “Old tires detector”
With the training areas and annotated examples ready, we can train the detector and run it. Detection over the whole image should take a few minutes:
Taking a closer look, we can see that all the old tires have been detected. If the first detection results miss some tires or wrongly spot other objects, you can improve your detector by iterating: add more training areas and annotations, then retrain.
Once we are satisfied with the results, we can save the detector and run it from the detector library on our project image.
Sharing the project
After adding the detection outputs to the map, it is easy to share the project using a URL.
You can check the project here: http://bit.ly/2W1Cisk
Sharing such a map of the localized old tires with the clean-up team will greatly support the process of tracking and disposing of them.
Stats & Export
An additional step you could take, in the Stats & Report panel, is to define areas of interest to quantify the number of tires, compare areas, generate PDF reports, or download the geolocated detections in multiple formats (GeoJSON, KML, Shapefile, and CSV).
You can incorporate the data outputs into your statistical analysis, mapping or GIS workflow.
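As a minimal sketch of such a downstream workflow, the snippet below counts detections and derives rough point locations from a GeoJSON export using only plain Python. The sample data is illustrative and hand-written here; Picterra’s actual export schema and property names may differ.

```python
# A minimal FeatureCollection in the GeoJSON format the platform exports
# (the structure shown is illustrative, not Picterra's exact schema).
sample = {
    "type": "FeatureCollection",
    "features": [
        {"type": "Feature",
         "geometry": {"type": "Polygon",
                      "coordinates": [[[33.520, 44.600], [33.521, 44.600],
                                       [33.521, 44.601], [33.520, 44.601]]]},
         "properties": {}},
    ],
}

def centroids(collection):
    """Approximate each polygon detection by the mean of its ring vertices."""
    points = []
    for feature in collection["features"]:
        geom = feature["geometry"]
        if geom["type"] == "Polygon":
            ring = geom["coordinates"][0]
            lon = sum(p[0] for p in ring) / len(ring)
            lat = sum(p[1] for p in ring) / len(ring)
            points.append((lon, lat))
    return points

print(f"Detected tires: {len(sample['features'])}")
for lon, lat in centroids(sample):
    print(f"~tire near lon={lon:.4f}, lat={lat:.4f}")
```

Those centroid points could then be loaded into any GIS tool or routing app to plan a clean-up route for the team.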
It’s time to take action!
“No one in the 21st century should die from the bite of a mosquito, a sandfly, a blackfly or a tick.” – Dr. Margaret Chan, Director-General of the World Health Organization (2014).
By combining the strengths of drone mapping and AI-powered image analysis, localizing and disposing of old tires, and other stagnant-water containers, is easier than ever.
You can help prevent outbreaks of deadly mosquito-borne diseases. Get started now with your own “fighting mosquitoes” project.