The high-level software architecture was developed so that researchers can pick and choose the most appropriate components, or use the entire platform as a turnkey solution. The complete solution requires administrative privileges in order to run JupyterHub, but the platform can also be run in a cut-down form with the Jupyter/JupyterHub integration disabled; in this mode users are prompted to run their own Jupyter notebook server and connect it to the data server. This covers most use cases, but removes the ability to store notebooks on the server and to link from calculations to the notebooks that generated them.
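As an illustration of the cut-down mode, the following minimal sketch shows how a locally run notebook might query the data server over HTTP. The base URL, the \texttt{/api/v1/datasets} endpoint, and the bearer-token authentication are assumptions made for illustration, not the platform's documented interface.

\begin{verbatim}
# Minimal sketch: querying the data server from a locally
# run Jupyter notebook (cut-down mode, no JupyterHub).
# The base URL, endpoint path, and token header are
# illustrative assumptions, not a documented API.
import os
import requests

DATA_SERVER = os.environ.get("DATA_SERVER_URL",
                             "https://localhost:8443")
TOKEN = os.environ["DATA_SERVER_TOKEN"]  # issued by the admin

resp = requests.get(f"{DATA_SERVER}/api/v1/datasets",
                    headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
for dataset in resp.json():
    print(dataset["id"], dataset.get("name"))
\end{verbatim}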
As the project has evolved it has become clear that, whilst a complete solution provides a broader overview of the data generation, analysis, and publication process, it can be too onerous to maintain. It may also be preferable to export as much of the process as possible to services that offer stronger guarantees of longevity, whether months, years, decades, or centuries depending upon need. This led to adapting the project to serve in different roles: active data generation benefits greatly from a running data server, whereas once a project is complete it may not be feasible to keep that server running. At that stage a number of community services can be employed to store static assets that clearly document the path from data generation to final figures, such as Zenodo \cite{shared}, Binder \cite{Jupyter_2018} coupled with GitHub \cite{project}, and Figshare \cite{figshare}. To this end the data and metadata standards enable the export of individual calculations, and the notebooks can be archived. The Binder export discussed later shows one example of moving to a static repository containing data and a notebook; the process is currently manual and we are working on improved automation.
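Archiving to such services can be scripted against their public APIs. The sketch below uses Zenodo's deposition REST API to upload an exported calculation and its notebook as a draft deposition; the file names are placeholders, the token is assumed to be a Zenodo personal access token, and the script illustrates the currently manual process rather than the automation under development.

\begin{verbatim}
# Sketch: pushing an exported notebook and data file to
# Zenodo as a draft deposition via its REST API.
# Assumes a personal access token in ZENODO_TOKEN; the
# file names are placeholders for an exported calculation.
import os
import requests

TOKEN = os.environ["ZENODO_TOKEN"]
API = "https://zenodo.org/api/deposit/depositions"

# Create an empty draft deposition.
r = requests.post(API, params={"access_token": TOKEN}, json={})
r.raise_for_status()
bucket = r.json()["links"]["bucket"]

# Upload the static assets into the deposition's file bucket.
for name in ("analysis.ipynb", "calculation.json"):
    with open(name, "rb") as fp:
        requests.put(f"{bucket}/{name}", data=fp,
                     params={"access_token": TOKEN}
                     ).raise_for_status()

# The draft can then be described and published from the
# Zenodo web interface (or via a further metadata request).
\end{verbatim}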