Figures 1.6-1.9: Automated TensorFlow Object Detection API data were collected over the global training steps, as displayed in the command prompt. As noted in the CNN background, the total loss is a summation of component loss terms computed from the trained network's weights and biases (including the classification term produced by the Softmax function and its counterparts). Although the loss did not decay as sharply as in the Faster RCNN Inception model, the SSD Mobilenet model maintained a high inference speed of ~30 ms per full-image analysis. Because of its single-shot approach, the network's training loss did not converge to low values immediately; convergence occurred around 8,000 steps, near a value of 4.00. Nevertheless, the SSD architecture permitted real-time image analysis (e.g., an OpenCV webcam window performing live detection). Thus, although the SSD Mobilenet model's converged loss was higher, and its precision correspondingly lower, it made immediate image visualization easier.
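The convergence behavior described above can be sketched with a simple exponential-decay model of the total loss. This is illustrative only: the initial loss value and the decay constant `tau` are assumptions chosen for the sketch, while the ~4.00 floor and the ~8,000-step convergence point come from the observed training run.

```python
import math

def modeled_loss(step, initial=12.0, floor=4.0, tau=2000.0):
    """Illustrative exponential decay of total loss toward a floor.

    `initial` and `tau` are assumed parameters for this sketch;
    `floor` reflects the ~4.00 value the SSD Mobilenet loss
    converged to near 8,000 global steps.
    """
    return floor + (initial - floor) * math.exp(-step / tau)

# Sample the modeled curve at a few global steps; by step 8,000
# the modeled loss sits close to the 4.00 convergence value.
samples = {step: round(modeled_loss(step), 2) for step in (0, 2000, 4000, 8000)}
```

Under these assumed parameters, the modeled loss falls quickly in early steps and flattens near the floor, mirroring the slower-than-Faster-RCNN but eventually stable decay reported above.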
Data Analysis: Faster RCNN Inception V2 Training Loss Scalars