The use of airborne mapping is rising steadily, in both government and industry. Many countries use aerial imagery to monitor and manage environmental change or to improve town planning. Mining companies use it to quantify and manage disturbance and rehabilitation. Engineers rely on it to determine optimal route locations and to design major infrastructure projects. Emergency services rely on rapid-response aerial imagery to assess the damage caused by natural disasters and to plan future mitigation strategies.
This raises some basic questions:
- How is aerial mapping done?
- What outputs do we get from aerial mapping?
- What are its applications?
What’s Aerial Mapping?
Aerial mapping is a method of collecting geomatics data or other imagery using airplanes, helicopters, UAVs, balloons, and other aerial platforms. Typical types of data collected include aerial photography, LiDAR, remote sensing (using various visible and invisible bands of the electromagnetic spectrum, such as infrared, gamma, or ultraviolet), and geophysical data (such as aeromagnetic and gravity surveys).
In a nutshell, there are two major components in Aerial Mapping: Carrier and Sensors.
Here is an image of different types of carriers in aerial mapping.
The most commonly used sensors in aerial mapping are:
Digital Camera (RGB format)
Digital cameras can quickly acquire grayscale or color images, and these sensors have the advantages of low cost, light weight, and convenient operation.
With their help, simple data can be processed almost anywhere, even in constrained working environments and under ever-changing requirements. They are primarily used to map and inspect sites.
Light Detection and Ranging (LiDAR)
LiDAR is a surveying method that measures the distance to a target by emitting pulses of laser light and timing their return.
It is an active remote sensing device that uses photoelectric detection: a laser serves as the transmitting light source, and the sensor measures its distance from the target object. LiDAR sensors are primarily used for topographic surveying.
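The timing principle above reduces to a one-line formula: since the pulse travels out and back, the range is half the round-trip distance. A minimal sketch, with an illustrative return time:

```python
# Sketch: LiDAR range from pulse time-of-flight.
C = 299_792_458.0  # speed of light in m/s

def lidar_range(round_trip_time_s: float) -> float:
    """Distance to target: the pulse travels out and back, so halve the path."""
    return C * round_trip_time_s / 2.0

# A pulse returning after ~6.67 microseconds corresponds to ~1 km.
print(round(lidar_range(6.67e-6)))  # ~1000 m
```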
Multispectral/Hyperspectral imaging sensor
Multispectral imaging sensors are devices capable of sensing and recording radiation from invisible as well as visible parts of the electromagnetic spectrum.
Multispectral and hyperspectral cameras are deployed across many applications and industries, but especially in agriculture, since they can capture a large number of very narrow, continuous spectral bands at a time. Hyperspectral images have the added advantage of more band information and higher spectral resolution.
Thermal Infrared Imaging Sensor
Thermal infrared imaging sensors use infrared detectors. They are often described as “non-visible” imaging, since the infrared spectrum is not visible to the human eye.
With the help of an optical imaging lens, the sensor focuses infrared radiation energy onto its photosensitive element, the infrared detector. These sensors are sensitive to wavelengths in the infrared region of the electromagnetic spectrum.
Synthetic-Aperture radar (SAR)
In SAR imaging, microwave pulses are sent towards the Earth from an antenna, and the reflected pulses are measured. The time delay of each echo is used to build up an image.
Coherent processing of the echoes received at different locations yields high-resolution data. SAR can produce two-dimensional images or three-dimensional reconstructions of objects such as landscapes, and it is rapidly becoming a key dataset in geospatial investigation.
Magnetometer
A magnetometer is the most integral part of an electronic compass. It measures magnetism: the direction, strength, or relative change of a magnetic field at a particular location.
Magnetometers are widely used for measuring the Earth’s magnetic field, and in geophysical surveys, to detect magnetic anomalies of various types.
Now let’s discuss the outputs and use cases of the sensors mentioned above. This article is restricted to the applications and outputs of the first four sensors.
RGB and LiDAR Sensor
RGB and LiDAR sensors are used primarily for topographic analysis. RGB imagery requires a further photogrammetry step to produce results that can be used for analysis, while LiDAR sensors directly deliver outputs similar to those of photogrammetry, minus the visual image. Let us look at some of the most popular outputs of RGB and LiDAR sensors.
What is Orthomosaic/ Orthophotograph/ Orthoimage?
“Orthomosaic” is made up of two words: ortho (meaning perpendicular) and mosaic (meaning small pieces joined together to form a larger whole). An orthophoto, orthophotograph, or orthoimage is an aerial photograph that has been geometrically corrected (“orthorectified”) so that its scale is uniform.
Unlike an uncorrected aerial photograph, an orthophotograph can be used to measure true distances, because it is an accurate representation of the Earth’s surface, having been adjusted for topographic relief, lens distortion, and camera tilt.
In simple words, an orthomosaic is an accurate 2D representation of the Earth’s surface seen from above. It gives X (longitude) and Y (latitude).
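Because the scale of an orthomosaic is uniform, measuring a true ground distance reduces to scaling a pixel distance by the GSD. A minimal sketch with hypothetical pixel coordinates and GSD:

```python
import math

def ground_distance_m(px1, px2, gsd_cm_per_px: float) -> float:
    """Euclidean pixel distance scaled by GSD (cm/pixel) to metres.
    Valid only on an orthorectified image, where scale is uniform."""
    d_px = math.hypot(px2[0] - px1[0], px2[1] - px1[1])
    return d_px * gsd_cm_per_px / 100.0

# Two points 400 px apart on a 5 cm/px orthomosaic span 20 m on the ground.
print(ground_distance_m((100, 100), (100, 500), 5.0))  # 20.0
```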
Quick facts on Orthomosaic:
- Ground Sampling Distance (GSD) is the resolution of the orthomosaic, typically measured in centimetres per pixel. In practice, only objects larger than the GSD can be distinguished in the orthomosaic.
- Accuracy defines the error in X (longitude), Y (latitude), and Z (elevation), and typically lies in the range of 2–5 cm. It can be improved with ground control points (GCPs) and RTK/PPK-corrected positioning.
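For a nadir photo, the GSD follows from the flight altitude and the camera geometry. A minimal sketch, assuming hypothetical drone-camera parameters (sensor width, focal length, image width):

```python
def gsd_cm(altitude_m: float, sensor_width_mm: float,
           focal_length_mm: float, image_width_px: int) -> float:
    """Ground sampling distance in cm/pixel for a nadir photograph:
    ground footprint width divided by the number of pixels across it."""
    return (altitude_m * 100 * sensor_width_mm) / (focal_length_mm * image_width_px)

# Hypothetical camera: 13.2 mm sensor, 8.8 mm lens, 5472 px wide, flown at 100 m.
print(round(gsd_cm(100, 13.2, 8.8, 5472), 2))  # ~2.74 cm/px
```

Halving the altitude halves the GSD, which is why low, slow flights yield the sharpest orthomosaics.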
The commonly used orthomosaic/DSM formats are:
- Multi-resolution Seamless Image Database (MrSID)
- Enhanced Compressed Wavelets (ECW)
- Tagged Image File Format (TIFF) and its georeferenced variant, GeoTIFF
What is Digital Elevation Model (DEM)?
Elevation simply means height, so in the simplest terms a DEM can be thought of as a digital height model: a representation of a terrain’s surface purely in terms of its height. DEM is the umbrella term covering both digital surface models (DSM) and digital terrain models (DTM).
Digital Surface Model (DSM):
As the name suggests, a digital surface model is a digital representation of the Earth’s surface (in terms of height). In other words, DSM provides the height of all the points on Earth as seen from above such as buildings, trees, etc. It can be obtained directly from any standard photogrammetric software.
Digital Terrain Model (DTM):
It is similar to a digital surface model, except that it represents only the bare surface of the Earth, excluding the height of structures above the ground such as buildings and trees. Thus, it is a bare-earth model. A DTM can be obtained by further processing of a DSM and is used to generate contours of a given area.
In simple words, a DEM is an accurate representation of the Earth’s topography. It gives Z (elevation).
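The DSM/DTM distinction becomes concrete when you subtract one from the other: the difference (often called a normalised DSM) is the height of objects above the ground. A toy sketch with 3×3 grids standing in for real GeoTIFF rasters:

```python
import numpy as np

# Toy 3x3 elevation grids in metres (real DEMs come as GeoTIFF rasters).
dsm = np.array([[110.0, 110.0, 118.0],
                [110.0, 122.0, 122.0],
                [110.0, 110.0, 110.0]])  # surface incl. buildings/trees
dtm = np.full((3, 3), 110.0)             # bare-earth terrain

# Normalised DSM: per-cell object height above ground.
ndsm = dsm - dtm
print(ndsm.max())  # 12.0 -> tallest structure is 12 m
```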
What is Point Cloud ?
Have you ever imagined the real world as a set of points? That is exactly what a point cloud is: a 3D representation of the real world in the form of points.
In other words, it is a collection of a very large number of points (often tens of millions), where each point has an X, Y, and Z location in a given 3D coordinate system, and usually a corresponding R, G, B colour value, such that when all the points are placed according to their positions, the result looks like a digital copy of the real world.
Unlike an orthomosaic, where we can make only 2D measurements (distance/length), a point cloud gives us the freedom of measuring in three dimensions (3D). Point clouds thus enable us to find the height difference between two points, estimate the volume of a specified region, extract the elevation profile of a section, and so on.
In simple words, point clouds are the data points that usually exist along the x, y, and z coordinates within the 3D scanned space.
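The description above maps directly onto an N×6 array: three coordinate columns plus three colour columns. A tiny illustrative cloud, showing the kind of 3D measurement an orthomosaic cannot give:

```python
import numpy as np

# A tiny point cloud: each row is X, Y, Z (metres) plus R, G, B colour.
points = np.array([
    [10.0, 20.0, 100.0, 120, 110,  90],
    [10.5, 20.0, 103.5, 200, 200, 200],
    [11.0, 21.0, 100.2,  80, 140,  60],
])

xyz = points[:, :3]
# Height difference between the first two points (Z column).
print(xyz[1, 2] - xyz[0, 2])              # 3.5
# Elevation range across the whole cloud.
print(xyz[:, 2].max() - xyz[:, 2].min())  # 3.5
```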
What is 3D Mesh / 3D Model ?
A 3D mesh is the most complete representation of the real world. Unlike a point cloud, which is a discrete set of points, a 3D mesh is a continuous surface, much like the real world itself.
It also gives us the freedom of measuring in three dimensions (3D), enabling us to find the height difference between two points, the volume of a specified region, the elevation profile of a section, and so on.
In simple words, 3D object representation can be a polygon mesh, which consists of a collection of vertices and polygons that define the shape of an object in 3D.
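A vertices-plus-faces structure makes measurements like surface area straightforward: each triangle’s area is half the norm of the cross product of two of its edges. A minimal sketch with a hypothetical two-triangle mesh:

```python
import numpy as np

# A minimal mesh: vertices and triangular faces (indices into the vertex list).
vertices = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [1.0, 1.0, 0.0]])
faces = np.array([[0, 1, 2], [1, 3, 2]])  # two triangles forming a unit square

def mesh_area(vertices, faces):
    """Sum of triangle areas: half the norm of each face's edge cross product."""
    tri = vertices[faces]          # shape (n_faces, 3, 3)
    e1 = tri[:, 1] - tri[:, 0]
    e2 = tri[:, 2] - tri[:, 0]
    return 0.5 * np.linalg.norm(np.cross(e1, e2), axis=1).sum()

print(mesh_area(vertices, faces))  # 1.0
```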
Hyperspectral / Multispectral Sensor
Both sensors find the majority of their use cases in the assessment of crop stresses, the characterization of soils and vegetative cover, and yield estimation, in addition to predictive analysis.
The benefits of hyperspectral and multispectral imaging are that these technologies are low cost (compared with traditional scouting methods), consistent, simple to use, rapid, non-destructive, and highly accurate, with a broad range of applications.
Let us look at some of the indices by Hyper-spectral / Multispectral Sensor.
What is NDVI ?
Normalized Difference Vegetation Index (NDVI) quantifies vegetation by measuring the difference between near-infrared (which vegetation strongly reflects) and red light (which vegetation absorbs).
In simple words, NDVI is a measurement of a plant’s health based on how the plant reflects light (usually sunlight) at specific frequencies.
Here is an image showing how NDVI is calculated and how vegetation is classified using it.
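The calculation itself is a single normalized ratio, NDVI = (NIR − Red) / (NIR + Red), giving values in [−1, 1]. A minimal sketch over toy reflectance bands (the classification thresholds shown are common conventions, not universal):

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red); values fall in [-1, 1]."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-9)  # epsilon guards divide-by-zero

# Toy reflectance bands: healthy vegetation reflects NIR strongly, absorbs red.
nir = np.array([0.50, 0.40, 0.10])
red = np.array([0.08, 0.30, 0.09])
values = ndvi(nir, red)
# A common (threshold-dependent) reading: >0.4 dense vegetation,
# 0.2-0.4 sparse vegetation, <0.2 bare soil/water.
print(np.round(values, 2))  # [0.72 0.14 0.05]
```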
Applications of NDVI?
There are a plethora of applications of NDVI, some of them being :
- In agriculture, farmers use NDVI for precision farming and to measure biomass.
- In forestry, foresters use NDVI to quantify forest supply and leaf area index.
- NDVI is a good indicator of drought: when water limits vegetation growth, the relative NDVI and the density of vegetation are lower.
Here’s an interactive NDVI map of soyabean crops. More open datasets are available in the Indshine Project Library.
Thermal Infrared Sensor
Infrared radiation coming from a scene is focused onto an infrared detector. From there it is processed into the visible light spectrum, which our eyes can see.
As a result, thermal images can look different depending on how the thermal image is reproduced in the visible spectrum. Typical go-to color palette options include black hot, white hot, fusion, and ironbow.
To sum it all up, thermal imaging cameras are precise non-contact temperature measurement devices.
Types of Thermal Sensors
- Radiometric Thermal Cameras
Cameras labeled “R” are radiometrically calibrated. Such cameras capture an absolute temperature value in every pixel of an image and save their images in RJPG (radiometric JPG) format.
- Non-Radiometric Thermal Cameras
Some non-radiometric thermal cameras can also yield absolute temperature values for each pixel, but a formula or mapping generally needs to be applied to the raw image to obtain them. In short, they are not calibrated the way radiometric cameras are.
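The raw-to-temperature mapping for such a camera might, in the simplest case, be linear. The sketch below is purely illustrative: the gain and offset are hypothetical placeholders, not values from any real camera datasheet, since real mappings are camera-specific and published by the manufacturer.

```python
# Hypothetical example only: real raw-to-temperature mappings are
# camera-specific; a linear model is the simplest possible stand-in.
def raw_to_celsius(raw_value: int, gain: float = 0.04,
                   offset: float = -273.15) -> float:
    """Map a 16-bit raw sensor count to degrees Celsius via an assumed
    linear calibration (gain/offset are illustrative, not from a datasheet)."""
    return raw_value * gain + offset

print(round(raw_to_celsius(7400), 2))  # 22.85
```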