
Plant species recognition is a prominent research area, similar to the face recognition problem. Plant species recognition through image processing faces common obstacles in images such as illumination, viewpoint, orientation, and background colour variations. Apart from this, plant species recognition is hard because there are many species with inter-class and intra-class variations. To address these problems, a backpropagation neural network (BPNN) has been created. This BPNN model contains an input layer, two hidden layers and an output layer. The input layer contains 3072 neurons, and the number of neurons in the output layer depends on the number of classes to be classified in the dataset. We have collected and created our own Indian plant dataset, named Leaf12. The Leaf12 dataset contains inter-class species with 3840 images. The BPNN model was tested using our dataset and compared with other benchmark datasets such as Folio, Swedish, 17Flowers, and Flavia. Our Indian plant dataset worked well under variations such as illumination, viewpoint, orientation, and background colour, and across inter-class species.

Keywords


Plant species recognition, backpropagation neural network, Indian plant dataset, inter-class species, classification

1.    Introduction

Plant species recognition is one of the interesting research areas in image processing, since it has wide applications in agriculture, medicine, etc. Plant species recognition is complicated because of the common challenges that appear in images, such as illumination, viewpoint, orientation, colour variation, background changes, and intra-class and inter-class variations between species. There are several applications, such as Pl@ntNet, Leafsnap, etc., for plant image retrieval and for displaying plant information. These applications also tend to collect plant images from the users. Since the leaf is the part commonly available in plants, most of the datasets contain leaf images. Features such as shape, texture or colour are used in plant species recognition.

Camilla et al. used a backpropagation neural network (BPNN) for intra-class classification of tea plants (17 tea plant varieties) from Vietnam. Fourteen morphological parameters were used as inputs. Fifty hidden neurons were activated by the logistic sigmoid activation function. The BPNN outputs were further investigated by cluster analysis using the Unweighted Pair Group Method with Arithmetic mean (UPGMA), which formed a dendrogram.

Hongfei et al. classified five species from the Camellia genus (93 plant samples) using a clustering approach, a Learning Vector Quantization neural network (LVQ-ANN), a Dynamic Architecture for Artificial Neural Networks (DAN2), and a C-Support Vector Machine. For classification, architectural and morphological characteristics of the leaf were considered. Camellia species were identified best using the DAN2 and SVM methods.

Stephen et al. proposed a model for classification of plant leaves using the Flavia dataset. Twelve morphological features, derived from five basic geometric features, were reduced using Principal Component Analysis (PCA), and a Probabilistic Neural Network (PNN) was used for classification. The five features identified by PCA as important were taken as the inputs for the PNN.

Naresh et al. proposed a modified LBP (Local Binary Pattern) for feature extraction and a nearest neighbour classifier for medicinal plant classification. In general, hard thresholding is used in LBP; in the modified LBP, the mean and standard deviation were taken into consideration instead of threshold values. The LBP method was chosen for feature extraction because it is well suited to texture analysis. This method was tested over the UoM medicinal plant, Flavia, Foliage, Swedish, and Outex datasets. The UoM medicinal plant dataset was collected from Mysore, India, and contains 33 medicinal plant species with 1320 images in total.

Yu Sun et al. proposed a 26-layer ResNet (Residual Network) model for plant identification, which was said to be suitable for smart forestry. A large dataset, named BJFU100, was created from images taken with a mobile phone. This dataset contains 10,000 images of 100 ornamental plant species found on the Beijing Forestry University campus. For experimental analysis, the BJFU100 and Flavia datasets were used. Deep residual networks with 18, 26, 34, and 50 layers were considered; amongst these four, ResNet26 outperformed the other three models. For experimental training, the learning rate was set to 0.001. On the Flavia dataset, the accuracy was compared with other approaches such as PBPNN, SVM and DBN, and ResNet26 showed a 99.65% recognition rate.

Although deep neural networks are the hot topic in image recognition, it was observed that work on simple neural networks for Indian plant species has been sparse in recent years. Even where neural-network-based works exist, morphological or physiological features were considered. Hence, instead of using geometrical or morphological features of the leaf, the images were normalized and those values were given as inputs to the BPNN. Our paper is organized into four sections: backpropagation neural network, datasets, results and discussion, and conclusion and future work.

2.    Backpropagation Neural Network (BPNN)

An Artificial Neural Network (ANN) mimics the functionality of the human brain. An ANN has several groups of interconnected nodes called neurons, and the interconnections are weighted. The neurons are arranged in layers, and each neuron takes features as input.

A BPNN initially takes a random set of weights and biases. Here, a Backpropagation Neural Network (BPNN) was created with two hidden layers, and the images from the datasets were given as the direct input. This is a multi-layer perceptron network, and its predicted output has N classes, i.e., the number of classes available in the dataset. Fig. 1 shows the model diagram of the BPNN for Leaf12.

[Fig. 1 schematic: input images of size 32x32x3 (3072 values) are fed to two hidden layers of 500 and 250 neurons, followed by an output layer of N classes; the weight matrices between the layers contain 3072×500, 500×250 and 250×N values.]
Fig. 1 Backpropagation Neural Network (BPNN)

The steps involved in a BPNN are divided into the following three phases; a minimal code sketch of one training step is given after the phases.

Phase I

i) Initialize the weights w_ij and biases b_j, and let the inputs be x_i.

ii) Calculate the net input for each hidden unit: z_in_j = b_j + Σ_i x_i w_ij.

iii) Apply the activation function to obtain the hidden output: z_j = f(z_in_j).

iv) The output z_j is forwarded as input to the next layer. If another hidden layer is present, repeat steps (ii) and (iii). Otherwise, proceed with the next step.

v) Calculate the net input for each output unit: o_in_k = b_k + Σ_j z_j w_jk.

vi) Apply the activation function to o_in_k to obtain the predicted output: y_k = f(o_in_k).

 

Phase II

 

i) Calculate the error term E_k from the target output t_k, the predicted output y_k and the first derivative of f(o_in_k): E_k = (t_k − y_k) f'(o_in_k).

ii) Based on step (i), compute the weight and bias corrections for the output layer: Δw_jk = α E_k z_j and Δb_k = α E_k, where α is the learning rate.

iii) Backpropagate the error from the output layer to the hidden layer: E_in_j = Σ_k E_k w_jk and E_j = E_in_j f'(z_in_j).

iv) Depending on E_j, compute the corrections for the hidden layer: Δw_ij = α E_j x_i and Δb_j = α E_j.

Phase III

i) For every unit y_k in the output layer, k = 1 to m, the updated bias and weight are given by w_jk(new) = w_jk(old) + Δw_jk and b_k(new) = b_k(old) + Δb_k.

ii) For every unit z_j in the hidden layer, j = 1 to n, the updated weight and bias are given by w_ij(new) = w_ij(old) + Δw_ij and b_j(new) = b_j(old) + Δb_j.

 

Repeat the steps in Phases I, II and III until convergence.

Convergence point: stop after reaching a certain number of epochs or when the predicted output equals the target output.
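The following is a minimal NumPy sketch of one training step following the three phases above, for the 3072-500-250-N architecture of Fig. 1. The sigmoid activation, squared-error-style error terms, learning rate and random initialization shown here are illustrative assumptions chosen to match the generic equations, not the final Keras model of Section 2.3 (which uses ReLU, softmax and categorical cross-entropy).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_deriv(x):
    s = sigmoid(x)
    return s * (1.0 - s)

# Layer sizes matching the paper's architecture (3072-500-250-N, N = 12 for Leaf12).
n_in, n_h1, n_h2, n_out = 3072, 500, 250, 12
alpha = 0.01  # learning rate (assumed value)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 0.01, (n_in, n_h1)), np.zeros(n_h1)
W2, b2 = rng.normal(0, 0.01, (n_h1, n_h2)), np.zeros(n_h2)
W3, b3 = rng.normal(0, 0.01, (n_h2, n_out)), np.zeros(n_out)

x = rng.random(n_in)           # one flattened 32x32x3 image (illustrative values)
t = np.zeros(n_out); t[3] = 1  # one-hot target for a hypothetical class

# Phase I: feedforward
z_in1 = b1 + x @ W1;  z1 = sigmoid(z_in1)
z_in2 = b2 + z1 @ W2; z2 = sigmoid(z_in2)
o_in  = b3 + z2 @ W3; y  = sigmoid(o_in)

# Phase II: backpropagate the error terms
E_k  = (t - y) * sigmoid_deriv(o_in)        # output-layer error
E_j2 = (E_k @ W3.T) * sigmoid_deriv(z_in2)  # second hidden layer
E_j1 = (E_j2 @ W2.T) * sigmoid_deriv(z_in1) # first hidden layer

# Phase III: update weights and biases
W3 += alpha * np.outer(z2, E_k);  b3 += alpha * E_k
W2 += alpha * np.outer(z1, E_j2); b2 += alpha * E_j2
W1 += alpha * np.outer(x,  E_j1); b1 += alpha * E_j1
```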

2.1 Image Preprocessing

All the images in the datasets were initially cropped and resized to 200×200 pixels. For faster processing, they were further resized to 32×32 pixels. Colour images were used for processing, so each image had 32×32 pixels across three colour channels and hence 32x32x3 (3072) dimensions.
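A minimal preprocessing sketch of this step, assuming Pillow and NumPy; the function name, file path and normalization to [0, 1] are illustrative assumptions rather than details reported in the paper:

```python
import numpy as np
from PIL import Image

def preprocess(path):
    """Resize a leaf image to 32x32 RGB and flatten it to a 3072-value vector."""
    img = Image.open(path).convert("RGB")
    img = img.resize((200, 200))           # initial crop/resize stage
    img = img.resize((32, 32))             # final size fed to the network
    x = np.asarray(img, dtype=np.float32)  # shape (32, 32, 3)
    return (x / 255.0).reshape(-1)         # normalized vector of shape (3072,)

# x = preprocess("leaf12/sample_leaf.jpg")  # hypothetical path
```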

 

2.2 Hidden Layers and Output Layer

Two hidden layers (H1, H2) with 500 and 250 neurons were created. ReLU was the activation function used for the hidden layers, and softmax was used in the output layer. The number of neurons in the output layer depends on the number of classes in each dataset. The real-time dataset (Leaf12) considered in our work required twelve output neurons, since 12 classes of plants were present in it.

2.3 BPNN Model

As described in subsection 2.1, the 3072 dimensions were used as the input neurons of the input layer. Between the layers, weight and bias calculations take place. The numbers of biases in H1, H2 and the output layer were 500, 250 and 12, respectively. The numbers of weight parameters were (i) 1,536,000 (3072*500) between the input and H1 layers, (ii) 125,000 (500*250) between the H1 and H2 layers, and (iii) 3000 (250*12) between the H2 and output layers. The output layer had 12 neurons in the case of the Leaf12 dataset. The total number of parameters (weights and biases) was 1,664,762. The number of parameters for each layer was displayed through the model.summary() function in Keras. The created BPNN model was compiled using optimizers such as Adam, SGD, Adamax and Adadelta. The loss function used was categorical cross-entropy, which is suited for multi-class classification. Batch updates were performed instead of updating after every training sample.
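A minimal Keras sketch of the model described above. The specific optimizer instance, batch size and epoch count shown are assumptions; the paper compiles the same model with Adam, SGD, Adamax and Adadelta in turn.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

num_classes = 12  # 12 for Leaf12; set to N for the other datasets

model = Sequential([
    Dense(500, activation="relu", input_shape=(3072,)),  # H1: 3072*500 weights + 500 biases
    Dense(250, activation="relu"),                       # H2: 500*250 weights + 250 biases
    Dense(num_classes, activation="softmax"),            # output: 250*12 weights + 12 biases
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

model.summary()  # reports 1,664,762 total parameters for Leaf12

# Batch training (X_train, y_train assumed to hold the preprocessed
# 3072-dimensional vectors and one-hot labels):
# model.fit(X_train, y_train, batch_size=32, epochs=50, validation_split=0.2)
```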

 

2.3.1         Activation Functions and Error function

Categorical cross-entropy is a loss function for multi-class classification; in the output layer it is combined with the softmax activation function, so the error is computed on the predicted class probabilities.
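A small sketch of this combination (softmax followed by categorical cross-entropy); the three-class logits and target shown are illustrative values only:

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - np.max(logits))  # shift for numerical stability
    return e / e.sum()

def categorical_crossentropy(t, y):
    """t: one-hot target vector, y: predicted class probabilities."""
    return -np.sum(t * np.log(y + 1e-12))

o_in = np.array([2.0, 0.5, -1.0])  # net inputs of a 3-class output layer (illustrative)
t = np.array([1.0, 0.0, 0.0])      # one-hot target
y = softmax(o_in)
loss = categorical_crossentropy(t, y)  # small when y assigns high probability to the true class
```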

 

2.3.2         Optimizers

There are several optimizers discussed by Sebastian Ruder, which can be treated as black-box optimizers; these are the weight update rules commonly used today. SGD is Stochastic Gradient Descent, which updates the weights for every training example.
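A sketch of how the model from Section 2.3 could be compiled with each of the optimizers in turn. The default learning rates, the build_model() helper and the training data names are assumptions introduced for illustration, not details from the paper.

```python
from tensorflow.keras.optimizers import Adam, SGD, Adamax, Adadelta

# Each optimizer implements a different weight update rule.
optimizers = {
    "SGD": SGD(),            # plain stochastic gradient descent
    "Adam": Adam(),          # adaptive moment estimation
    "Adamax": Adamax(),      # Adam variant based on the infinity norm
    "Adadelta": Adadelta(),  # adaptive learning rate method
}

# Hypothetical comparison loop; build_model() is assumed to recreate the
# BPNN of Section 2.3, and X_train / y_train the preprocessed data.
# for name, opt in optimizers.items():
#     model = build_model()
#     model.compile(optimizer=opt, loss="categorical_crossentropy", metrics=["accuracy"])
#     model.fit(X_train, y_train, batch_size=32, epochs=50)
```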