MODEL INFERENCE (PRODUCTION)

After confirming that your model works and correctly classifies your species, it is time to deploy it to a server or mobile app so you can run inference against it. Here I will illustrate how to build a web page where users can upload and classify images; a more complex user interface and design may require the services of a front-end software engineer, of whom there is no shortage in the industry.
Depending on the complexity of the project, there are other deployment options, including Google App Engine, AWS Lambda, Amazon SageMaker, AWS Elastic Beanstalk, and Microsoft Azure Functions. For large projects, you can also host the model on your own servers, which may require dedicated DevOps engineers.
For our purposes we will deploy to Render, a pay-as-you-go platform that scales easily and is simple for anyone to set up. The fast.ai guide at https://course.fast.ai/deployment_render.html shows how to deploy your trained models to Render; below I explore the process step by step, assuming basic GitHub skills.
1. GitHub configuration