About the Client
According to the WWF (World Wildlife Fund), forests cover more than 30% of the Earth’s land surface and are considered the lungs of the planet. Unfortunately, people are not using this resource wisely, and every day an alarmingly high number of trees is cut down. Whether the loss is natural or human-driven, deforestation has devastating consequences.
In cooperation with the ecological organization SCGIS, Quantum has developed ClearCut, a cloud-based deforestation-monitoring platform that helps ecological institutions and private initiatives track logging and react promptly to illegal cases.
Ecological organizations supervise vast territories with limited resources while trying to keep track of logging. Manual oversight takes a lot of time and resources, and it is often inaccurate and outdated, preventing conservation organizations from reacting quickly to illegal cutting.
The mission of this web platform is to provide continuous analysis of logging data to scientists and to the people and organizations that use geographic information systems (GIS) to conserve natural resources.
ClearCut is an online platform, developed by Quantum, that uses computer vision and artificial intelligence to monitor deforestation automatically.
By collecting satellite data every 3-5 days, the platform automatically compares the state of forests and reports on the changes that have occurred. A notification about land lots that have been exposed to felling is delivered by email, containing the location and updated data. This enables environmentalists to respond quickly to the situation on the ground.
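The comparison step can be sketched as a simple mask difference: given deforestation masks from two consecutive passes, flag the pixels that were cleared in between. This is a minimal illustration, not ClearCut's actual pipeline; the mask names and shapes are assumptions.

```python
import numpy as np

def detect_new_clearings(mask_prev, mask_curr):
    """Return a mask of pixels that are cleared now but were not
    cleared in the previous satellite pass."""
    return mask_curr & ~mask_prev

# Toy 2x2 example: one pixel was already cleared, two are new.
mask_prev = np.array([[False, True], [False, False]])
mask_curr = np.array([[True, True], [False, True]])
new = detect_new_clearings(mask_prev, mask_curr)
print(int(new.sum()))  # 2 newly cleared pixels
```

In practice the masks would come from the segmentation model's output for each pass, and the resulting change mask would feed the email notifications.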
“Being a socially conscious company and understanding the importance of such an acute environmental problem as deforestation, especially illegal logging, made us create this open-source platform,” says Ruben Melkonian, CTO at Quantum.
With the satellite data that ClearCut collects and processes, conservation organizations can replace inaccurate and outdated information about the state of deforestation, and automatically take measurements and analyze the state of the forest.
It took 4 months to release the project.
First, we did thorough research and defined the subject area by analyzing the available data and existing approaches to segmentation problems. Then we developed and tested various models. Only after that did we develop the server and client parts, integrate them, and deploy the system.
While developing the project, we faced several challenges, such as integrating the system with external services that grant access to satellite images. We tried different services; some were paid, while others required significant modifications.
We worked with multi-channel satellite data that required additional handling, such as image normalization, merging the channels into one image, and calculating additional channels (NDVI). Since these images were quite large (10,000×10,000 px), it was impossible to perform complex calculations on them in one pass due to RAM limitations, so we had to break the calculations into smaller parts and save intermediate results. There were also imperfections in the markups that affected model quality; as a result, the geo-specialists received multiple requests to update the markups.
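Breaking a large raster into tiles and collecting per-tile results can be sketched as follows. This is a minimal illustration under assumed parameters (tile size, the per-tile statistic, and an in-memory array standing in for the actual satellite mosaic), not ClearCut's production code.

```python
import numpy as np

def process_in_tiles(image, tile_size=256):
    """Apply a computation tile by tile and keep intermediate results,
    so the whole image never has to be processed at once."""
    h, w = image.shape[:2]
    results = {}
    for row in range(0, h, tile_size):
        for col in range(0, w, tile_size):
            tile = image[row:row + tile_size, col:col + tile_size]
            # Placeholder computation: mean reflectance of the tile.
            results[(row, col)] = float(tile.mean())
    return results

# Small synthetic image; a real mosaic would be ~10,000x10,000 px.
image = np.arange(512 * 512, dtype=np.float32).reshape(512, 512)
stats = process_in_tiles(image, tile_size=256)
print(len(stats))  # 4 tiles for a 512x512 image with 256 px tiles
```

For real imagery the tiles would typically be read windowed from disk (e.g. with rasterio) rather than sliced from an in-memory array, which is what keeps RAM usage bounded.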
In order to track changes, a system of color identification was implemented:
- Deforestation areas that have not changed over a certain period are marked YELLOW.
- Areas where deforestation has increased are marked RED.
- Areas with no satellite data are marked BLUE.
For each land lot, there is an opportunity to receive detailed information about the changes in deforestation.
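The color-coding rule described above can be sketched as a simple lookup. The status labels here are assumptions for illustration; only the color semantics come from the text.

```python
# Map a land lot's change status to its display color.
STATUS_COLORS = {
    "unchanged": "YELLOW",  # deforestation area with no change over the period
    "increased": "RED",     # area where deforestation has grown
    "no_data": "BLUE",      # no satellite data available for the area
}

def color_for(status):
    """Return the map color for a land lot's deforestation status."""
    return STATUS_COLORS[status]

print(color_for("increased"))  # RED
```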
The segmentation models were initially built with Keras, which is easy to use in the early stages of development; later we moved to PyTorch, which allows building more complex models.
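As a hedged illustration of the PyTorch side, here is a tiny fully convolutional network mapping multi-channel satellite tiles to a per-pixel logit mask. This is not ClearCut's actual architecture; the channel count, layer sizes, and tile size are assumptions.

```python
import torch
import torch.nn as nn

# Minimal segmentation sketch: 4 input channels (e.g. R, G, B, NIR)
# in, a 1-channel per-pixel clearcut logit out.
model = nn.Sequential(
    nn.Conv2d(4, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)

x = torch.randn(1, 4, 64, 64)  # one 4-channel 64x64 tile
logits = model(x)
print(tuple(logits.shape))     # (1, 1, 64, 64): same spatial size, 1 channel
```

Real models for this task are usually encoder-decoder networks (U-Net-style), but the input/output contract is the same: an image in, a per-pixel mask out.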
The data science part of the platform was written in Python, so we decided to use Python for the back end as well, for easier integration.
As the database we chose PostgreSQL, since its spatial database extender, PostGIS, allowed us to store geospatial objects in the database.
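Storing a detected polygon with PostGIS might look like the sketch below, assuming a hypothetical table `clearcut_polygons` with a `geometry(Polygon, 4326)` column; the table name and SRID are illustrative, not taken from the project. The WKT string is built with the standard library so the example stays self-contained.

```python
def to_wkt(points):
    """Build a closed WKT POLYGON from (lon, lat) pairs."""
    ring = points + [points[0]]  # a WKT ring must be explicitly closed
    coords = ", ".join(f"{lon} {lat}" for lon, lat in ring)
    return f"POLYGON(({coords}))"

wkt = to_wkt([(30.0, 50.0), (30.1, 50.0), (30.1, 50.1)])

# ST_GeomFromText is the standard PostGIS constructor; in real code the
# WKT would be passed as a bound query parameter, not interpolated.
sql = (
    "INSERT INTO clearcut_polygons (geom) "
    f"VALUES (ST_GeomFromText('{wkt}', 4326));"
)
print(wkt)
```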
For image processing and GIS we have used:
– OpenCV (powerful imaging tools)
– GDAL (used for calculating additional channels and image normalization)
– geopandas (used for working with polygons and binding them to geographic coordinates)
– rasterio (used for images in TIFF format).
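The NDVI channel mentioned above has a standard definition, NDVI = (NIR − Red) / (NIR + Red), computed per pixel. A minimal NumPy sketch with synthetic band values (in the pipeline the bands would come from the normalized satellite channels read via GDAL/rasterio):

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index per pixel; eps guards
    against division by zero on empty pixels."""
    return (nir - red) / (nir + red + eps)

# Synthetic reflectance values for two pixels.
nir = np.array([[0.6, 0.8]])
red = np.array([[0.2, 0.2]])
print(np.round(ndvi(nir, red), 2))
```

Higher NDVI indicates denser vegetation, so a drop in NDVI between passes is one signal of possible clearing.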
All system services were deployed on AWS infrastructure.