Figures 2.0-2.3: The Faster RCNN Inception V2 model converged to a minimum total loss of approximately 0.05 during training. Total loss converged between roughly steps 4,500 and 6,000, though with natural fluctuations along the way. The Faster RCNN model also scaled well with respect to its training parameters: loss decreased sharply while precision rose correspondingly within 10,000 global steps. However, the Faster RCNN model is known for its heavy consumption of GPU memory, and during training nearly all available NVIDIA CUDA cores were utilized, which limits the model's versatility and its potential for deployment. The trained model accurately localized smaller cells and borderline features in the images (i.e., details that are not readily apparent). Python shell testing showed that Faster RCNN demanded more time per image, at ~58 ms. Due to the computational scale of the R-CNN architecture, real-time analysis is limited to ~2-4 frames per second (FPS), which rules out deployment on mobile operating systems such as Android and iOS.
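
For reference, the sketch below shows one way such per-image latency and frame-rate figures might be measured from a Python shell. It is a minimal illustration, not the original test harness: `detect_fn` is a hypothetical placeholder for whatever callable runs the frozen Faster RCNN Inception V2 graph on a single image, and the input size is assumed.

```python
# Minimal latency/FPS benchmark sketch.
# Assumption: detect_fn(image) runs the frozen Faster RCNN Inception V2 graph
# on one image and returns its detections; it stands in for the real callable.
import time
import statistics
import numpy as np

def benchmark(detect_fn, image, warmup=5, runs=50):
    # Warm-up runs so one-time graph/kernel initialization does not skew timing.
    for _ in range(warmup):
        detect_fn(image)
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        detect_fn(image)
        latencies.append(time.perf_counter() - start)
    mean_s = statistics.mean(latencies)
    print(f"mean latency: {mean_s * 1000:.1f} ms, throughput: {1.0 / mean_s:.1f} FPS")

if __name__ == "__main__":
    # Assumed input resolution for illustration only.
    dummy_image = np.zeros((600, 800, 3), dtype=np.uint8)
    # benchmark(detect_fn, dummy_image)  # supply the actual detection callable here
```

Note that the end-to-end frame rate observed in practice can be lower than the raw per-inference latency would suggest, since each frame also incurs preprocessing, post-processing, and visualization overhead.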