Deep learning has been widely applied in image classification and object detection [21,23,24]. At present, the deep belief network (DBN) [25], stacked autoencoder (SAE) [26], convolutional neural network (CNN) [27], and other models have been applied to HI classification, and the CNN is significantly superior to the other models in classification and target detection tasks [28–30]. Consequently, the CNN model has been extensively employed in PWD studies in recent years. In one study, two advanced object detection models, You Only Look Once version 3 (YOLOv3) and the Faster Region-based Convolutional Neural Network (Faster R-CNN), were employed for early diagnosis of PWD infection, obtaining good results and providing an efficient and fast method for the early diagnosis of PWD [19]. In another study, Yu et al. [20] employed Faster R-CNN and YOLOv4 to identify pine trees infected by PWD at an early stage, revealing that early detection of PWD can be improved by taking broadleaved trees into account. Qin et al. [31] proposed a new framework, the spatial-context-attention network (SCANet), to recognize PWD-infected pine trees using UAV images; the study obtained an overall accuracy (OA) of 79% and offered a useful approach to monitor and manage PWD. Tao et al. [32] applied two CNN models (i.e., AlexNet and GoogLeNet) and a traditional template matching (TM) method to predict the distribution of dead pine trees caused by PWD, revealing that the detection accuracy of the CNN-based approaches was higher than that of the traditional TM method. The above studies are all based on the two-dimensional CNN (2D-CNN).

A 2D-CNN [27] can extract spatial information from the original raw images, but it cannot effectively extract spectral information. When a 2D-CNN is applied to HI classification, 2-D convolution must be performed on the original data of all bands, and the convolution operation becomes very complex because each band requires its own group of convolution kernels to be trained. Unlike images with only RGB bands, the hyperspectral data fed into the network usually contain hundreds of spectral dimensions, which demands a large number of convolution kernels. This may cause over-fitting of the model and drastically increase the computational cost. To address this problem, the three-dimensional CNN (3D-CNN) has been introduced to HI classification [33–35]. A 3D-CNN uses 3-D convolution, operating simultaneously in three dimensions, to directly extract the spectral and spatial information of hyperspectral images. The 3-D convolution kernel extracts 3-D information, of which two dimensions are spatial and the third is spectral. Since the HRS image is a 3-D cube, a 3D-CNN can extract spatial and spectral information at the same time. These advantages make the 3D-CNN a more suitable model for HI classification. For example, Mäyrä et al. [21] collected hyperspectral and LiDAR data (a canopy height model derived from the LiDAR data was used to match the ground reference data to the aerial imagery) and employed a 3D-CNN model for individual tree species classification from the hyperspectral data, showing that 3D-CNNs were effective in distinguishing coniferous species from each other and, at the same time, achieved high accuracy in classifying aspen.
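The parameter argument above can be made concrete with a small sketch. The following is a minimal, illustrative PyTorch comparison, not code from any of the cited studies; the patch size, band count, and channel widths are assumed values chosen only to show how a 2-D layer must allocate one kernel slice per band while a 3-D kernel slides jointly over the spectral and spatial axes.

```python
# Minimal sketch (assumed sizes): 2-D vs 3-D convolution on a hyperspectral patch.
import torch
import torch.nn as nn

bands, height, width = 200, 17, 17             # hypothetical hyperspectral patch
patch = torch.randn(1, bands, height, width)   # (batch, bands, H, W)

# 2D-CNN view: every band becomes an input channel, so the first layer alone
# needs bands * out_channels kernel slices, and the spectral order is summed away.
conv2d = nn.Conv2d(in_channels=bands, out_channels=32, kernel_size=3, padding=1)
out2d = conv2d(patch)                          # (1, 32, 17, 17)

# 3D-CNN view: the cube keeps one input channel and the kernel slides jointly
# over (spectral, spatial, spatial), extracting spectral-spatial features.
cube = patch.unsqueeze(1)                      # (1, 1, bands, H, W)
conv3d = nn.Conv3d(in_channels=1, out_channels=8,
                   kernel_size=(7, 3, 3), padding=(3, 1, 1))
out3d = conv3d(cube)                           # (1, 8, bands, 17, 17)

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(conv2d), count(conv3d))            # 57632 vs 512 weights in the first layer
```

With these assumed sizes the 2-D layer already carries tens of thousands of weights for a single layer, which is the over-fitting and computational-cost concern raised above, whereas the 3-D kernel stays small because it is shared along the spectral axis.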
In another study, Zhang et al. [24] employed hyperspectral images and proposed a 3D-1D convolutional neural network model for tree species classification, turning the captured high-level semantic concepts …
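The excerpt does not describe the 3D-1D architecture in detail. The sketch below is a generic, hypothetical illustration of that pattern, a 3-D convolutional stage for joint spectral-spatial features followed by a 1-D stage along the remaining spectral axis; all layer sizes, the pooling choices, and the reshape strategy are assumptions and are not taken from Zhang et al. [24].

```python
# Hedged, generic sketch of a 3D-then-1D CNN for hyperspectral classification.
# Layer sizes and reshaping are illustrative assumptions, not the cited model.
import torch
import torch.nn as nn

class Hybrid3D1DCNN(nn.Module):
    def __init__(self, bands=200, n_classes=8):
        super().__init__()
        # 3-D stage: joint spectral-spatial feature extraction on the raw cube.
        self.block3d = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(2, 2, 2)),
        )
        # 1-D stage: after collapsing the spatial dimensions, convolve along the
        # spectral axis to build higher-level features for classification.
        self.block1d = nn.Sequential(
            nn.Conv1d(8, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):              # x: (batch, 1, bands, H, W)
        x = self.block3d(x)            # (batch, 8, bands/2, H/2, W/2)
        x = x.mean(dim=(3, 4))         # average over spatial dims -> (batch, 8, bands/2)
        x = self.block1d(x)            # (batch, 16, 1)
        return self.classifier(x.squeeze(-1))

model = Hybrid3D1DCNN()
logits = model(torch.randn(2, 1, 200, 16, 16))   # -> (2, 8)
```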
