Background

Context: Linked Data 

Although the break-neck speed of innovation in science and technology makes these very exciting times, we also face serious bottlenecks in communicating scientific and technical knowledge. The current tools of scientific communication (lectures, scientific papers, and data) are inefficient, making research and education agonizingly slow processes. The heart of the matter seems to be a general stagnancy in communicating implicit knowledge to others, or a lack of participation in the effort of making the implicit explicit, so that it can be processed faster by humans and machines alike. After all, answering a question by querying a knowledge graph in natural language is likely to be far more efficient than relying on the modest text-recognition and reasoning faculties of humans, and a machine can reason over such a graph far more efficiently than it can scan the texts of billions of documents in an attempt to extract knowledge from them. Indeed, this bottleneck in knowledge flow is widely recognized, and solutions such as the Semantic Web and linked data have been proposed and acknowledged as the next step in the World Wide Web's evolution. These are relatively old propositions, however, and their adoption still seems limited after more than 20 years. Awe-inspiring initiatives such as DBpedia and adoption by giant data producers like Twitter aside, semantic web technology remains inaccessible (or simply undesirable) to the average Web citizen, developer, or designer today. The Web is full of dead SPARQL endpoints, and technologies such as REST APIs are preferred over triple stores.
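To make the contrast with document scanning concrete, the minimal sketch below asks a structured question of DBpedia's public SPARQL endpoint from Python. The SPARQLWrapper package, the particular query, and the choice of listing Dutch universities are illustrative assumptions, not part of any tool discussed here.

\begin{verbatim}
# A minimal sketch of asking a structured question instead of scanning
# documents: query DBpedia's public SPARQL endpoint from Python.
# Assumes the third-party SPARQLWrapper package (pip install sparqlwrapper);
# the query itself is purely illustrative.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo:  <http://dbpedia.org/ontology/>
    PREFIX dbr:  <http://dbpedia.org/resource/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

    SELECT ?label WHERE {
        ?university a dbo:University ;
                    dbo:country dbr:Netherlands ;
                    rdfs:label ?label .
        FILTER (lang(?label) = "en")
    }
    LIMIT 10
""")
sparql.setReturnFormat(JSON)

# One round trip answers a question that would otherwise require
# reading and cross-referencing many documents.
results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["label"]["value"])
\end{verbatim}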
One of the major reasons for the slow adoption of semantic web technologies could be their inaccessibility. As with any new technology, the semantic web is still in a technical state and generally lacks user-friendly interfaces. For instance, Protege, the industry-standard ontology editor, is disliked by many but used anyway for lack of a better alternative, and the tools for the semantic web are scattered across the Web. Beyond the shortcomings of the existing standard tools even in the most basic respects (e.g., both Protege and Stardog lack RDF validators, and Protege 5 does not even give an error message upon encountering a Turtle file with a faulty line!), there is, as of today, no integrated development environment (IDE) for the semantic web. Tools that are better designed from both a software-development and a user-experience perspective (e.g., Neo4j) are therefore urgently needed to overcome the linked data and semantic web initiatives' own bottleneck: the lack of good tools. Linked Data Reactor, a project at Vrije Universiteit Amsterdam, is one such effort, and it could significantly lower the threshold for using and producing linked data. This project aims to utilize it to demonstrate this point about usability with a prototype, while also using the framework to create an analysis platform for a scientific problem: knowledge flows and interdisciplinarity in research.
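The basic validation step the standard tools are missing can be sketched in a few lines. The example below assumes the Python rdflib package and a hypothetical input file data.ttl; it simply parses the Turtle file and reports the syntax error instead of failing silently.

\begin{verbatim}
# A minimal sketch of the basic RDF validation the standard tools lack:
# parse a Turtle file and report the syntax error instead of failing
# silently. Assumes the third-party rdflib package (pip install rdflib)
# and a hypothetical input file data.ttl.
from rdflib import Graph

graph = Graph()
try:
    graph.parse("data.ttl", format="turtle")
    print(f"OK: parsed {len(graph)} triples.")
except Exception as err:  # rdflib raises a BadSyntax error for a faulty line
    print(f"Turtle syntax error: {err}")
\end{verbatim}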

Domain: Interdisciplinarity

Interdisciplinarity in research is generally seen as desirable, and it is likely an important factor in bringing new perspectives and solutions to our increasingly sophisticated and multi-faceted research pursuits today. However, the impact of interdisciplinarity (put simply, the effect of the diversity of research in an article, journal, or institute) on scientific quality and merit is a matter of debate, and the findings so far are inconclusive. Some authors suggest that the 'distance' between disciplines may play a critical role in the effectiveness of interdisciplinarity \cite{Jensen_2013,Zhang_2015}, some claim a 'U-shaped' relationship \cite{Wang_2015}, and the discussion includes various other theories and findings \cite{Yegros_Yegros_2015,Barry_2008}. The ongoing debate and the possible impact of the results on policy making invite further studies in this direction.