The Mobilenet architecture follows a standard layer pattern: each convolution is followed by batch normalization and a ReLU non-linearity, with the exception of the final fully connected layer, which has no non-linearity and feeds a softmax layer for classification. The compact nature of the SSD Mobilenet framework stems from its reduction in both parameters and multiply-add operations (Mult-Adds). Although SSD Mobilenet is known for lower accuracy relative to the Faster R-CNN Inception models, an intensively trained Mobilenet model can match their accuracy and precision in object detection output. Nevertheless, preliminary mean average precision (mAP) scores on small background objects remain lower than those of the Inception models. Table 1.0 lists the pretrained COCO Tensorflow models and their associated speed (ms) and mAP [1]:
| COCO Model Name                                      | Speed (ms) | mAP [1] |
|------------------------------------------------------|------------|---------|
| SSD Mobilenet v1                                     | 30         | 21      |
| SSD Mobilenet v2                                     | 31         | 22      |
| SSD Lite Mobilenet v2                                | 27         | 22      |
| SSD Inception v2                                     | 42         | 24      |
| Faster RCNN Inception v2                             | 58         | 28      |
| Faster RCNN ResNet 50                                | 89         | 30      |
| Faster RCNN ResNet 50 Low Proposals                  | 64         | 30      |
| RFCN-ResNet 101                                      | 92         | 30      |
| Faster RCNN ResNet 101                               | 106        | 32      |
| Faster RCNN ResNet 101 Low Proposals                 | 82         | 32      |
| Faster RCNN Inception ResNet Atrous v2               | 620        | 37      |
| Faster RCNN Inception ResNet Atrous v2 Low Proposals | 241        | 37      |
| Faster RCNN NAS                                      | 1833       | 43      |
| Faster RCNN NAS Low Proposals                        | 540        | 43      |
Table 1.0 (above) and Figure 1.4 (below): Relationships between typical object detection API models and their associated speeds and measures of average precision. The models of interest, SSD Mobilenet and Faster RCNN Inception, offer the best balance of efficiency and precision for future deployment.
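The parameter and Mult-Add reduction mentioned above comes from Mobilenet's use of depthwise separable convolutions in place of standard convolutions. As a rough sketch (the function names and example layer shape below are illustrative, not taken from the paper), the cost of each variant can be counted directly:

```python
# Illustrative cost model: parameters and multiply-adds for a standard
# convolution versus the depthwise-separable factorization Mobilenet uses.

def standard_conv_cost(k, c_in, c_out, h, w):
    """Params and mult-adds for a k x k standard convolution on an h x w map."""
    params = k * k * c_in * c_out
    mult_adds = params * h * w          # each weight fires once per output pixel
    return params, mult_adds

def depthwise_separable_cost(k, c_in, c_out, h, w):
    """Params and mult-adds for a depthwise k x k conv plus a pointwise 1x1."""
    dw_params = k * k * c_in            # one k x k filter per input channel
    pw_params = c_in * c_out            # 1x1 convolution mixes the channels
    params = dw_params + pw_params
    mult_adds = params * h * w
    return params, mult_adds

if __name__ == "__main__":
    # Hypothetical mid-network layer: 3x3 kernel, 256 -> 256 channels, 14x14 map.
    std_p, std_m = standard_conv_cost(3, 256, 256, 14, 14)
    sep_p, sep_m = depthwise_separable_cost(3, 256, 256, 14, 14)
    print(f"standard:  {std_p:>7} params, {std_m:>9} mult-adds")
    print(f"separable: {sep_p:>7} params, {sep_m:>9} mult-adds")
    print(f"reduction: {std_m / sep_m:.1f}x")   # ~ 1 / (1/c_out + 1/k^2)
```

For this layer shape the separable form needs roughly 8.7x fewer parameters and Mult-Adds, which is the source of the efficiency gap between the SSD Mobilenet rows and the Faster RCNN rows in Table 1.0.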