Figure 1.0: The Cognitive Domain
The Cognitive Domain highlights strategies that pathologists can use to form preliminary hypotheses and deductions based on radiologic, microscopic, and clinical data. A default (given) procedure of pattern recognition is followed throughout the tissue-blood analysis, allowing the pathologist to extract features from the image and relate them to other biological samples. Multiple branching, or arborization, allows a structured breakdown of incoming data, whereas the exhaustive strategy is a rushed and uncoordinated approach to analyzing massive amounts of data.
The Cognitive Domain reflects a human-oriented investigation in the pathology prognosis process; however, the efficacy of this approach is not universal, since it is limited by natural human error. Moreover, the backbone of the Cognitive Domain takes advantage of metacognition to create an efficient toolkit and guideline of strategies to be utilized during the investigation.

Convolutional Neural Networks (CNNs) and Image Processing

Prior to considering Faster R-CNN, it is necessary to briefly introduce the basis of Convolutional Neural Networks, since Faster R-CNN uses this type of neural network as its framework. Convolutional Neural Networks have been effective in image recognition and object detection. There are four essential components of the network: the Convolution Step, ReLU, Pooling, and the Fully Connected Layer. The Convolution Step extracts features from the input image by sliding small matrices (filters) over the input data, preserving the spatial relationship between features within the image; this produces a feature map. Non-linearity (ReLU) is then introduced by replacing all negative pixel values in the feature map with zero. As a result, the network learns to process inputs in a non-linear fashion, similar to how data is conveyed in the real world.
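The Convolution Step and ReLU described above can be sketched in a few lines of NumPy. The image values and the edge-detecting kernel below are illustrative placeholders, not taken from any particular network:

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a kernel over a 2-D image (valid padding, stride 1) to build a feature map."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each feature-map value is the elementwise product of the kernel
            # with the image patch it currently covers, summed up.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """ReLU non-linearity: replace every negative value with zero."""
    return np.maximum(x, 0)

# Hypothetical 4x4 grayscale patch and a simple vertical-edge kernel.
image = np.array([[1., 2., 0., 1.],
                  [0., 1., 3., 1.],
                  [2., 0., 1., 2.],
                  [1., 1., 0., 0.]])
edge_kernel = np.array([[1., -1.],
                        [1., -1.]])

feature_map = relu(conv2d(image, edge_kernel))  # 3x3, all values >= 0
```

In a trained CNN the kernel weights are learned from data rather than hand-picked; this sketch only shows the mechanics of how a feature map is produced and rectified.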
Pooling reduces the dimensionality of each feature map while retaining its most important information; it also helps minimize overfitting by reducing the number of parameters and computations in the network. In the final stage, the input reaches the Fully Connected Layer, a Multi-Layer Perceptron that uses a softmax activation function in the output layer to ensure that the outputs are probabilities summing to 1. The softmax function does this by taking a vector of arbitrary real-valued scores and normalizing it to a vector of values strictly between 0 and 1. As one class's probability approaches 1, this creates a positive inference for that detection class, leaving the other classes with probabilities near 0 to indicate a non-match. At this stage in the process, the output represents high-level features of the input image. The objective of the Fully Connected Layer is to use these features to classify the input image into different classes, dependent upon the training data (Britz, 2015).
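The pooling and softmax operations above can likewise be sketched with NumPy. The feature-map values and class scores below are made-up examples, used only to show the mechanics:

```python
import numpy as np

def max_pool(fmap, size=2):
    """2x2 max pooling with stride 2: halves each spatial dimension,
    keeping only the strongest activation in each window."""
    h, w = fmap.shape
    trimmed = fmap[:h - h % size, :w - w % size]
    return trimmed.reshape(h // size, size, w // size, size).max(axis=(1, 3))

def softmax(scores):
    """Normalize arbitrary real-valued scores into probabilities that sum to 1."""
    e = np.exp(scores - scores.max())  # subtract the max for numerical stability
    return e / e.sum()

# Hypothetical 4x4 feature map; pooling shrinks it to 2x2.
fmap = np.array([[1., 3., 0., 2.],
                 [4., 2., 1., 0.],
                 [0., 1., 2., 2.],
                 [3., 0., 1., 5.]])
pooled = max_pool(fmap)  # [[4., 2.], [3., 5.]]

# Hypothetical class scores from a fully connected layer.
probs = softmax(np.array([2.0, 1.0, 0.1]))
predicted_class = np.argmax(probs)  # class 0 has the highest probability
```

Note how max pooling discards three of every four values yet keeps the dominant response in each window, and how softmax turns unbounded scores into a valid probability distribution over the detection classes.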
Figure 1.1: Typical feature extraction in a CNN passing through pooling layers, reaching the final neural network.