Figure 1.5: High-contrast dark-field image of a Treponema pallidum (Syphilis) blood sample. The image illustrates the potential for clear, sharp isolation of the specimen for diagnosis.

Disease Control in Impoverished Third-World Countries

Health systems in developing countries typically pursue two objectives to enhance social, economic, and medical well-being in the region: (a) improving the level and distribution of health outcomes in the population, and (b) protecting individuals from financial risks that are frequently substantial and drive households into poverty (World Health Organization, 2000). Common financial risks arise from illness-related loss of income and heavy expenditures on treatment and care. Much of this loss, however, can be reduced by preventing illness or its progression through new medical-financial architectures in the region. This paper centers on Malaria and Syphilis as a disease demographic, with a 90% prevalence in South Africa, while also taking advantage of visualization techniques for compact and straightforward diagnosis of human disease. According to the 2010 Global Burden of Disease Study, diseases such as Malaria and Syphilis have gone largely unreported and unmentioned by local medical communities and health departments in southern African countries. This continued lack of awareness is partly rooted in the dense population pool of southern sub-Saharan Africa experiencing these diseases and in the lack of regulation, assessment, and modern diagnostic tools.

Proposal and Rationale: Real-Time Object Detection in Disease Diagnostics

The objective in deploying object detection models in disease diagnostics is to use the Faster R-CNN model to detect Malaria and Syphilis in a hospital setting, where blood samples imaged via dark-field microscopy are analyzed as frozen/still images. In addition, this model provides significantly higher precision and accuracy in detecting every instance of the pathogens in each image, as seen in the figures and graphs of the Data Analysis portion of this paper.
The plan for the SSD MobileNet model is to deploy it in a mobile Android application so that pathogens can be detected in real time. Theoretically, the rationale is to create a Raspberry Pi device connected to a microscope that gathers blood-sample imagery, with the SSD MobileNet model detecting instances of the pathogens in these samples as they are viewed. The accuracy with which this model detects each individual instance of the objects is significantly lower than that of the Faster R-CNN model, as shown in the figures and graphs in the Data Analysis portion of this paper; however, its speed and small computational footprint make it the more practical choice for real-time detection on low-cost hardware. Therefore, the SSD MobileNet model is used to accomplish this goal.
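To make the intended real-time pipeline concrete, the sketch below shows how a frozen SSD MobileNet detection graph could be run against frames from a camera attached to the Raspberry Pi. It is a minimal illustration in the TensorFlow 1.x API used in this paper; the graph path, camera index, class labels, and 0.6 score threshold are assumptions for illustration rather than values taken from the trained models.

```python
# Minimal real-time detection loop (TensorFlow 1.x API, as installed in the Methods section).
# The graph path, camera index, label names, and 0.6 score threshold are illustrative assumptions.
import cv2
import numpy as np
import tensorflow as tf

PATH_TO_FROZEN_GRAPH = 'inference_graph/frozen_inference_graph.pb'  # exported SSD MobileNet graph
LABELS = {1: 'Malaria', 2: 'Syphilis'}                              # matches the label map in the Methods

# Load the frozen detection graph once.
detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

sess = tf.Session(graph=detection_graph)
image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
boxes_t = detection_graph.get_tensor_by_name('detection_boxes:0')
scores_t = detection_graph.get_tensor_by_name('detection_scores:0')
classes_t = detection_graph.get_tensor_by_name('detection_classes:0')

cap = cv2.VideoCapture(0)  # microscope camera attached to the Raspberry Pi (assumed at index 0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # The detection graph expects a batch of RGB images.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    boxes, scores, classes = sess.run(
        [boxes_t, scores_t, classes_t],
        feed_dict={image_tensor: np.expand_dims(rgb, axis=0)})
    h, w = frame.shape[:2]
    for box, score, cls in zip(boxes[0], scores[0], classes[0]):
        if score < 0.6:
            continue
        ymin, xmin, ymax, xmax = box  # normalized coordinates
        cv2.rectangle(frame, (int(xmin * w), int(ymin * h)),
                      (int(xmax * w), int(ymax * h)), (0, 255, 0), 2)
        cv2.putText(frame, '%s: %.0f%%' % (LABELS.get(int(cls), '?'), score * 100),
                    (int(xmin * w), int(ymin * h) - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    cv2.imshow('Pathogen detection', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```

On the Raspberry Pi itself, the same loop structure applies; only the camera source and display handling would change.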

Methods and Protocols for the Faster R-CNN Inception V2 COCO Model and SSD MobileNet V1

Before detailing the methodology, the TensorFlow Object Detection API setup and the data collection steps are briefly outlined:
  1. Install TensorFlow-GPU v1.5, CUDA v9.0 and cuDNN v7
  2. Download the official TensorFlow Object Detection API repository from GitHub and set up the TensorFlow directory on the Windows C:\ drive
  3. Download the Faster-RCNN-Inception-V2-COCO and SSD MobileNet V1 models from the official TensorFlow model zoo and place them inside the \object_detection folder
  4. Set up the Anaconda virtual environment and install packages: anaconda protobuf, pillow, lxml, Cython, jupyter, matplotlib, pandas, opencv-python
  5. Configure the PYTHONPATH environment variable to point to the correct directories in \models-master
  6. Compile the Protobufs and run setup.py in the Anaconda Command Prompt in order to create a name_pb2.py file from every name.proto file in the \protos folder
  7. Test the TensorFlow Object Detection API to verify it has been set up successfully by launching the object_detection_tutorial.ipynb notebook with Jupyter
  8. Gather 31 images of Syphilis and 31 images of Malaria and save them into the images folder of the object detection directory
  9. Open the folder of images in the LabelImg software and create a new folder to hold the .xml files generated once bounding boxes have been drawn and saved around every instance of the objects of interest in each image
  10. Generate the training data by converting the .xml files to .csv files that can be read by the object detection classifier (a sketch of this conversion appears after this list)
  11. Create a label map that tells the trainer what each object is by mapping class names (Malaria and Syphilis) to class ID numbers (Malaria: 1, Syphilis: 2), and save the map as a .pbtxt file (see the label-map sketch after this list)
  12. Configure the object detection training pipeline by changing the number of classes to 2, pointing the fine-tune checkpoint to the Faster RCNN Inception V2 COCO model, setting the number of examples to 62 images, and updating the eval_input_reader section so that input_path points to the test.record file and label_map_path points to labelmap.pbtxt
  13. Initiate training
  14. After training, export the inference graph, updating the checkpoint value to the last saved checkpoint
  15. Modify the image path in the Python detection file (via IDLE) to point to a newly saved image (see the single-image detection sketch after this list)
  16. Run the detection command from the Anaconda Prompt
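As referenced in step 10, the conversion from LabelImg's Pascal VOC-style .xml annotations to .csv training data can be done with a short script. The following is a minimal sketch; the images/train and images/test folder layout and the output file names are illustrative assumptions.

```python
# Sketch of step 10: flatten the LabelImg .xml annotations into one .csv per image set.
# The images/train and images/test folder names and output paths are illustrative assumptions.
import glob
import os
import xml.etree.ElementTree as ET

import pandas as pd

def xml_to_csv(folder):
    rows = []
    for xml_file in glob.glob(os.path.join(folder, '*.xml')):
        root = ET.parse(xml_file).getroot()
        size = root.find('size')
        for obj in root.findall('object'):
            box = obj.find('bndbox')
            rows.append({
                'filename': root.find('filename').text,
                'width': int(size.find('width').text),
                'height': int(size.find('height').text),
                'class': obj.find('name').text,          # 'Malaria' or 'Syphilis'
                'xmin': int(box.find('xmin').text),
                'ymin': int(box.find('ymin').text),
                'xmax': int(box.find('xmax').text),
                'ymax': int(box.find('ymax').text),
            })
    return pd.DataFrame(rows)

for split in ('train', 'test'):
    df = xml_to_csv(os.path.join('images', split))
    df.to_csv(os.path.join('images', '%s_labels.csv' % split), index=False)
```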
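The label map from step 11 is a short plain-text protobuf file mapping each class ID to its name. The snippet below writes the two-class map described above; the training/ output directory is an assumption.

```python
# Sketch of step 11: write the two-class label map used by the training pipeline.
# The training/ output directory is an assumption.
LABEL_MAP = """item {
  id: 1
  name: 'Malaria'
}
item {
  id: 2
  name: 'Syphilis'
}
"""

with open('training/labelmap.pbtxt', 'w') as f:
    f.write(LABEL_MAP)
```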
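Finally, steps 15 and 16 correspond to running single-image detection against the exported graph. The sketch below, assuming the exported graph lives in inference_graph/, the label map in training/, and an image named test_sample.jpg, uses the Object Detection API's label map and visualization utilities to draw the detected pathogens on a saved dark-field image.

```python
# Sketch of steps 15-16: detect pathogens in one saved image with the exported graph.
# The inference_graph/, training/, and test_sample.jpg paths are illustrative assumptions.
import cv2
import numpy as np
import tensorflow as tf
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util

PATH_TO_GRAPH = 'inference_graph/frozen_inference_graph.pb'
PATH_TO_LABELS = 'training/labelmap.pbtxt'
IMAGE_NAME = 'test_sample.jpg'  # the newly saved image referenced in step 15
NUM_CLASSES = 2

# Build the category index (class ID -> display name) from the label map.
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(
    label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)

# Load the frozen graph exported in step 14.
detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_GRAPH, 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

with tf.Session(graph=detection_graph) as sess:
    image = cv2.imread(IMAGE_NAME)
    rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    boxes, scores, classes = sess.run(
        [detection_graph.get_tensor_by_name(n)
         for n in ('detection_boxes:0', 'detection_scores:0', 'detection_classes:0')],
        feed_dict={detection_graph.get_tensor_by_name('image_tensor:0'):
                   np.expand_dims(rgb, axis=0)})

    # Overlay boxes and class labels for every detection above the score threshold.
    vis_util.visualize_boxes_and_labels_on_image_array(
        image,
        np.squeeze(boxes),
        np.squeeze(classes).astype(np.int32),
        np.squeeze(scores),
        category_index,
        use_normalized_coordinates=True,
        line_thickness=4,
        min_score_thresh=0.60)

    cv2.imshow('Detections', image)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
```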