Constraint Addition
Here is one rudimentary, though intuitive, model. We have a problem P which is not fully formulated: C is not completely known. One consequence of an underdeveloped knowledge of C is that one cannot delimit a sufficiently narrow solution space. An obvious way in which additional disciplines might contribute is by supplying more, or more precisely formulated, constraints. As these are applied, the solution space can be successively narrowed until it is adequately downsized. A form of this model was suggested as early as 1948 by Wassily Leontief as a way of conceiving of interdisciplinary relationships (Leontief 1948). Leontief modelled the situation with Venn diagrams overlapping in different ways (Figure 1). In situations where the intersection is neither empty nor identical to the solution space defined by either discipline (I call this interdependence in Figure 1), interdisciplinarity is warranted (even necessary) on Leontief’s account.
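The set-theoretic picture can be made concrete in a toy sketch. Everything here is illustrative (the candidate set, the three "disciplinary" predicates), not drawn from Leontief; the point is only to show constraint addition as successive intersection:

```python
# Toy illustration of Leontief-style constraint addition. Candidate solutions
# are integers, and each "discipline" contributes a constraint (a predicate)
# that narrows the admissible set. All names here are illustrative.

candidates = set(range(100))  # an initially wide solution space

constraint_a = lambda x: x % 2 == 0  # "discipline A": even numbers
constraint_b = lambda x: x > 40      # "discipline B": sufficiently large
constraint_c = lambda x: x % 7 == 0  # "discipline C": divisible by 7

def narrow(space, constraint):
    """Constraint addition: keep only solutions satisfying the constraint."""
    return {x for x in space if constraint(x)}

space = candidates
for c in (constraint_a, constraint_b, constraint_c):
    space = narrow(space, c)

# The intersection is neither empty nor identical to any single discipline's
# space -- Leontief's "interdependence" case, where all three are needed.
print(sorted(space))  # -> [42, 56, 70, 84, 98]
```

Each application of `narrow` can only shrink the space, which is the formal counterpart of the successive downsizing described above.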
We can isolate at least two motives behind constraint addition. The first is exploratory and aims to reveal the constraints associated with the problem at hand. There are many examples of problems in which several disciplines have to be recruited in order to provide a proper understanding of what a solution might look like. One example: the problem of understanding how carbon cycles through the climate system cannot be completely solved unless one is able to integrate resources from physics, oceanography, chemistry, biology, ecology, and at times even economics.
This particular problem involves a kind of compartmentalization of the problem space, allowing the overarching problem to be sub-divided and its parts solved individually. Interest in the influence of atmospheric carbon on the temperature of the earth was originally related to ice-age theory. Geological oddities, such as misplaced boulders and the gigantic mounds and eskers found around Europe, had been hypothesized to be the result of one, or several, ice ages. But if the earth had been considerably cooler in the past, then the climate system apparently could change. This prompted the question: how? Joseph Fourier had already, in the 1820s, suggested that gas concentrations might be involved in heating and cooling, as they could trap energy in the form of heat. John Tyndall, following Fourier’s lead, proceeded to find candidates for such “greenhouse gases” and concluded that both CO2 and ordinary water vapour qualified. In 1896 the Swedish physicist Svante Arrhenius produced the first quantitative climate model describing the impact of various concentrations of CO2 in the atmosphere on mean surface temperature at different latitudes (Arrhenius 1896). But Arrhenius’ model—although the estimates it produced are, curiously, actually close to current best guesses—was simplistic and could not accurately represent the climate system.
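Arrhenius' central finding—that warming varies roughly with the logarithm of CO2 concentration—survives in modern simplified forcing formulas. The sketch below is a modern toy formulation in the spirit of that result, not Arrhenius' own model: the forcing constant 5.35 W/m² is a conventional modern value, and the sensitivity parameter is purely illustrative.

```python
import math

# Simplified logarithmic forcing law (modern convention, not Arrhenius' own
# calculation): forcing grows with the log of the CO2 concentration ratio.

def forcing(c, c0=280.0):
    """Radiative forcing (W/m^2) relative to a reference concentration c0 (ppm)."""
    return 5.35 * math.log(c / c0)

def delta_t(c, sensitivity=0.8):
    """Illustrative equilibrium warming (K); sensitivity in K per W/m^2 is assumed."""
    return sensitivity * forcing(c)

# Doubling CO2 gives a forcing of about 3.7 W/m^2 and, under these
# assumptions, roughly 3 K of warming.
print(round(forcing(560.0), 2), round(delta_t(560.0), 2))
```

The logarithmic form is what makes "the effect of doubling CO2" a natural unit of analysis, from Arrhenius' day to current climate models.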
Arrhenius was himself well aware of some of these problems and noted, for instance, that the ocean was likely to play a role unaccounted for in the model. In fact, several important oceanic mechanisms were not properly understood until the mid-1950s, when the joint efforts of Roger Revelle, Hans Suess and Harmon Craig at the Scripps Institution of Oceanography connected the chemical properties of the ocean (specifically, the fact that it is a buffer solution) with its mechanical behaviour (the fact that vertical turnover between layers is very slow) (Revelle and Suess 1957; Craig 1957). In other words, Arrhenius was not aware of all the constraints involved in his problem; and these constraints would be added, one after the other, over many ensuing decades (the process continues still). This process of adding constraints has been a distinctly interdisciplinary one: Revelle and Suess revealed an important mechanism in the oceans, others have described the role of biological matter, and so on.
The other motive is not exploratory but pragmatic, and involves solvability as a virtue in itself. Certain problems are open, or ill-defined, in that they do not appear to have a solution space that can be non-arbitrarily delimited (Reitman 1963; see also Simon 1973). The standard example here, which happens to be non-scientific, is the problem of composing a fugue. A problem solver approaching this issue will, in order to solve the problem, aim for something considerably narrower than the ‘actual’ problem. For any set of solutions to this problem, there will always be an admissible solution that is not included in the set. Often enough, scientific problem solving shares this feature. A broad, or open, problem is reinterpreted as a more specific one. In a way, Arrhenius’ numerical climate model is an example; the problem he was interested in was how CO2 affected mean surface temperature. He treated this issue—via a range of different idealizations and simplifying assumptions—as a mathematical problem, and duly solved it. The problems that confront us in sustainability science are perhaps even more obvious examples. Again, treating Solow’s argument—that consumption can be maintained under conditions of limited resources—as showing us how to realize a sustainable economy involves introducing many further constraints on what is to count as a solution. If sustainability is indeed a concept that is impossible to capture precisely, as some have maintained, then it is necessary to introduce constraints in this fashion to procure a problem that it is so much as possible to approach. Let us call this type of arbitrary constraint addition pragmatic.
For one reason or another it may be impossible, or very hard, for a single discipline to provide even arbitrary constraints to narrow the solution space of a problem sufficiently, and hence there may be reason to draw on several disciplines when solving an open problem. Interestingly, interdisciplinarity is at times seen as a virtue in itself, so that drawing on several disciplines is regarded as preferable to drawing on just one, even where that is possible.
Constraint Revision and Constraint Subtraction
The focus here will be on revision. However, the subtraction of constraints is also a powerful way of solving a problem. There are classical examples of this, such as the so-called ‘nine dots puzzle’. The difficulty, generally, is that people who fail to solve the puzzle tacitly over-constrain the problem. They assume that the lines to be drawn have to be confined within the area delimited by the dots, and this makes the task impossible. In order to obtain the solution the tacit constraint has to be subtracted. In interdisciplinary problem solving a new discipline may highlight the fact that a constraint implicitly, or explicitly, encompassed in other disciplines is obscuring the solution. An interesting case in which this failed to happen is the discovery of nuclear fission (see Andersen 1996). During the 1930s several research groups were working on the effects of bombarding uranium with neutrons—most notably Fermi’s group in Rome, and Meitner, Hahn and Strassmann in Berlin. In 1934 Fermi surmised that they had produced transuranic elements, and it was only with reluctance that Hahn and Strassmann showed, in 1938, that this could not have been the case. The products of the nuclear reactions were not, as had been widely expected, larger elements, but smaller and lighter ones. Interestingly, the chemist Ida Noddack had sent papers to both these research groups in the mid-1930s pointing to this possibility, but her suggestions were dismissed. Andersen writes:
Apparently, the chemist Noddack was not aware of the physical constraints on the taxonomy of disintegration processes and cared only for the chemical categorization which she found inadequate. Hence, she suggested a further chemical analysis to check if the elements produced by Fermi and his collaborators could be much lighter elements than the transuranic elements suggested by Fermi’s team. What she here suggested was categorizing the elements as fission products—but at a time when the conceptual structure did not allow for the existence of nuclear fission. (Andersen 1996, 485f)
A different, and rarely highlighted, feature of interdisciplinary problem solving works in the opposite direction. In phase-1 this takes the form of constraint revision. Here C is revised under the influence of several disciplines. An interesting function that revision might serve—and one that cannot be provided by constraint addition—is to open up new possibilities. This may be desirable when, for instance, problems are over-constrained and thus hard, or even impossible, to solve. Adding constraints can never increase the solution space, but revision can.
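The asymmetry between addition and revision can be shown in the same toy terms as before. The predicates are invented for illustration; the point is that adding a constraint filters an existing set (and so can only shrink it), whereas revising a constraint replaces it, and can therefore enlarge the admissible set:

```python
# Toy contrast between constraint addition and constraint revision.
# All predicates are illustrative stand-ins for disciplinary constraints.

space = set(range(50))

too_strict = lambda x: x % 10 == 0  # original, over-constraining
revised = lambda x: x % 5 == 0      # revised, more permissive

def apply_constraints(space, constraints):
    return {x for x in space if all(c(x) for c in constraints)}

before = apply_constraints(space, [too_strict])
after = apply_constraints(space, [revised])   # revision: swap the constraint
added = apply_constraints(before, [revised])  # addition on top of the old one

assert added <= before           # addition can never increase the space
assert len(after) > len(before)  # revision can
print(len(before), len(after))   # 5 solutions before, 10 after revision
```

This is why over-constrained problems cannot be rescued by piling on further constraints; only revision (or subtraction) widens the space.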
Given the suggested model, constraint revisions can take one of two forms. They can be corrective: here enquirers are wrong about some constraint and revise it in light of new, and more accurate, information. Or, where the problem is altered, they can be transformative: here it is a matter not of knowledge, but rather of producing a new problem on the basis of an old one. Transformative constraint revision is explained by reference to the values encompassed within a disciplinary matrix. A new problem is produced that is considered to be more interesting than the one from which it was produced. Often the motivation for transformation lies in trying to produce a problem that can be solved given the resources available to the problem solvers. One may consider Solow’s argument concerning the maintenance of consumption a case in point. The overarching problem of sustainability is, for quite trivial reasons, impossible to solve: it is simply too broad. However, by radical transformation a formal problem can be extracted—namely, that of the maintenance of consumption under very specific circumstances for very specific economies—which can be solved by means readily available to economists.
Thus we can introduce a further distinction. Let us separate epistemological and ontological over-constraining. Over-constraining in general involves situations where the set C makes P either impossible or extremely difficult to solve. What we will here call epistemological over-constraining occurs in situations where C, erroneously, has been made out to be inconsistent, or includes constraints that are too prohibitive. Such a situation is ameliorated by corrective constraint revision. Hence epistemological over-constraining is a property of the problem formulation, rather than the problem itself—it implies that we made a mistake when trying to spell out the constraints constitutive of the problem we intended to solve. Ontological over-constraining, on the other hand, means that the problem itself is in some way too narrow. It may, for instance, be impossible to solve, in which case we might salvage a problem that it is possible to solve by manipulating C. Again, the n-body problem appears to qualify; the requirement that the solution to the problem has to be analytical simply means that the solution space is empty. This can be resolved by transformative constraint revision; in this case one might, for instance, permit numerical solutions.
Non-Ideal Problem Solving Processes and Problem Stability
Suppose we think of an ideal problem solving process in phase-1 as a process that involves only explorative constraint addition and corrective constraint revision. In the interdisciplinary variety of this type of problem solving, different disciplines are drawn upon in order to contribute to, and revise, the problem so as to provide a formulation that is, eventually, identical with what the problem really is. This is, essentially, Popper’s ideal for an interdisciplinary science if we apply it to phase-1.
Are there good examples of ideal problem solving processes? It is hard to say. First, it is rather difficult to establish whether a specific process was indeed ideal. To begin with, meticulous historical study would be required. There is also reason to think that problem solvers who present their results ex post facto may be inclined to tell the story as if it were an ideal process. Nonetheless, certain problems are extraordinarily stable and may therefore provide plausible examples. Interdisciplinary ideal processes may be harder to come by, but looking at disciplinary varieties, some logical and mathematical problems, and the processes that led to their solution, may provide a source of examples. A spectacular potential example is Andrew Wiles’ proof of Fermat’s Last Theorem. Although the problem had been clearly stated for centuries, it is quite clear, in light of Wiles’ eventual proof, that those who attempted to prove the theorem before him were not aware of its full complexity. Crucially, however, Wiles did not solve some variety of the problem, nor did he transform it into some other, different problem. He solved exactly the problem he set out to solve. Hence the process seems to be one in which the set of constraints was successively uncovered by means of explorative constraint addition and corrective constraint revision.
There are several things to note about such a process, but one remark which can be made in light of the example above is that it can only work with a specific type of problem: one that is neither open nor ontologically over-constrained. We will return to this below.
A non-ideal problem solving process in phase-1, then, is a process which is not limited to explorative constraint addition and corrective constraint revision. It is a process involving either pragmatic constraint addition or transformative constraint revision, or both. For the most part this type of process will also feature explorative constraint addition and corrective constraint revision; and hence it will typically be a process in which our understanding of the problem is improved whilst the problem is, simultaneously, transformed.
Let us now move on and talk about problem stability.
Procedural vs. Cross-Boundary Problem Stability
In an ideal problem solving process the problem is stable; the process is always directed at the same problem. In other words, diachronic—or perhaps more appropriately, procedural—problem stability is maintained throughout the process. Sometimes this stability can be provided by the problem itself: that is, it is a non-open problem that is not over-constrained. But having such a problem is no guarantee that procedural problem stability will be retained. For example, the problem solvers may find that some transformation of the problem they set out to solve is more interesting, and thus change their focus.
In phase-1 of any problem solving process, procedural problem stability will either be maintained or (to some extent) lost. In interdisciplinary problem solving processes, however, another type of problem stability becomes crucial: cross-boundary stability. This type of problem stability is about maintaining problem identity across disciplinary boundaries or, more broadly, across relevant contexts.
Trivially, for an interdisciplinary problem solving process to solve the problem it set out to solve both procedural and cross-boundary stability need to be maintained. Clearly, this is neither always possible nor always desirable. Many problems are open, to some extent, and some are ontologically over-constrained. But it appears that an interdisciplinary problem solving process can be successful, at least according to its own standards, as long as cross-boundary problem stability is maintained.
Interestingly, the notion of cross-boundary problem stability offers a way of distinguishing that which is merely multidisciplinary from that which is interdisciplinary. In the previous chapter we contrasted interdisciplinarity with disciplinarity. Commonly, however, interdisciplinarity has been contrasted with multidisciplinarity. The idea is that the latter involves the “juxtaposition of various disciplines, sometimes with no apparent connection between them” (Apostel et al. 1972, 24), or is “essentially additive, not integrative” (Klein 1990, 56). Interdisciplinarity, on the other hand, is integrative. My suggestion, therefore, is that we understand multidisciplinary problem solving as a process in which cross-boundary problem stability is not maintained, and interdisciplinary problem solving as one in which it is. Hence, in the multidisciplinary case there is no knowing how the solutions that eventually come out of the involved disciplines actually relate to one another.
In other words, cross-boundary problem stability is necessary for a problem solving process to be interdisciplinary. Intuitively, this is what it means to share a problem.
A final remark on procedural problem stability. Although an ideal problem merely provides an opportunity for an ideal process, it remains the case that if the process is indeed ideal—and thus procedural problem stability has been maintained—then cross-boundary problem stability follows trivially. Otherwise the process cannot have been ideal after all.
Problem Stability and Bilateral Problem Feeding
We can also use this notion of cross-boundary problem stability to improve our understanding of problem feeding. Maintaining problem stability necessarily involves transferring problems between disciplines. That much was already established at the outset. Thus any interdisciplinary problem solving process will involve problem feeding.
It seems we are now in a position where the notions of cross-boundary problem stability and bilateral problem feeding can inform one another.
Suppose we draw a sharp distinction between procedural and cross-boundary problem stability. One might object to such a distinction on the grounds that transferring a problem is in itself a part of the process of solving the problem. It involves formulating theories about how the disciplines (or fields) in question are connected, some type of sharing of terms, and so on. Thus, crossing a boundary starts to look very much like one step in the process, and the distinction between cross-boundary and procedural problem stability suddenly seems a lot less straightforward.
This notwithstanding, I think that we can grant the substance of this objection and maintain the distinction. What is crucial in maintaining cross-boundary problem stability is not, for example, that problem P is identical before and after it is transferred from discipline A to discipline B, but rather that both disciplines, A and B, accept the transformations.
Two further points. First, we can deploy this idea to develop our understanding of the distinction between unilateral and bilateral problem feeding. In bilateral problem feeding cross-boundary problem stability is maintained explicitly. Thus a relationship of mutual relevance is established, even if this is perhaps not explicitly recognized. In unilateral problem feeding, on the other hand, no efforts are made to retain this sense of mutual relevance, and thus there is nothing to guarantee that problem stability is maintained. This does not necessarily mean that it is not maintained—that may happen accidentally—but it seems unlikely.
Second, the explicit maintenance of cross-boundary problem stability is crucial to collaborative interdisciplinarity. It is simply what the latter means.
Popper, Kuhn, and the Challenge of Interdisciplinarity Revisited
Let us return to the topic of the previous chapter. It seems that the kinds of problem that Popper imagined science to be involved with were precisely those where the set of constraints generate a problem that is neither open nor ontologically over-constrained. Let us call these problems ideal problems. In solving them phase-1 becomes one of revealing the structure of the problem. It may not be possible to find the constraints within the confines of a single discipline—as was the case in the CO2 example. Many disciplines may need to be deployed. Furthermore, the problem will often be too vaguely understood for us to determine at the outset precisely which disciplines will ultimately be involved. So the process may be slow. The circulation of CO2 in the climate system has been investigated explicitly for at least a century.
Now, not all problems are ideal, and not all problem solvers stick to what they set out to do. As has been mentioned, other, more interesting and important problems are often discovered along the way, and attention may be diverted to them. But instead of dwelling on that matter, let us consider the problems themselves. What about problems that fail to meet the criteria required for an ideal problem solving process to be possible?
Let us first try to say something about openness, as I suspect this is more prevalent. Openness comes in degrees. The problem of managing sustainability transitions (see Kates et al. 2001; Kates 2011), as such and without qualification, is open in the extreme. One reason for this is that the concept of sustainability just is not precise (see e.g. Pezzey 1997). Other problems are much less open.
How common are non-ideal processes in interdisciplinary problem solving? That is difficult to gauge with any accuracy, but a tempting analysis presents itself here. In solving open problems the solution will always involve stipulatively narrowing the solution space by introducing, or revising, constraints arbitrarily. The situation is highly reminiscent of what Kuhn describes, and the discussion above can be used to model the dilemma described in the previous chapter. Kuhnian challenges to interdisciplinarity therefore arise not with respect to all problems, but only non-ideal ones. In such cases it is simply necessary to make transformations that are not identity preserving—i.e. procedural identity cannot be maintained. It is with these non-ideal problems that frameworks associated with different disciplines become problematic, and the reason they become problematic is that they threaten cross-boundary problem stability.
Where does this leave us? First, we can now dissolve the seeming contradictions outlined in the previous chapter: Popper’s ideal process really concerns a particular type of problem. Second, if we understand the Kuhnian challenges in these terms, then overcoming them involves maintaining cross-boundary problem stability. This is not, perhaps, a method or scheme for overcoming these challenges—especially not if we think of interdisciplinarity as the maintenance of such problem stability. But it nonetheless seems to enrich our understanding. In all likelihood there is no definitive method for achieving problem stability that is both highly specific and never fails. The process is one in which the disciplines involved need to stay in touch with one another and remain in active communication. This is the collaborative heart of interdisciplinarity, and it also provides some basis for the idea that collaborativity—communication, trust, explicit and mutual exchange, and so on—is part of what it means for something to be interdisciplinary. The borrowing, or exchanging, of various cognitive tools is interdisciplinary only in a very weak sense, and in many ways it is virtually indiscernible from disciplinarity.
A further possible benefit of using this model with respect to Kuhn, in particular, is that it offers a way of understanding how interdisciplinarity may be motivated and made to work in normal science. A problem that arises within a disciplinary matrix, and which is such that the matrix cannot solve it (because there is no transformation that can admissibly be carried out within the matrix that makes the problem suitably similar to some exemplar), although it is believed within the matrix that it should be solved (our D is present), is an anomaly that threatens the paradigm. By transferring this problem, however, the paradigm is protected: an anomaly is avoided and at the same time the problem in question is solved.
The Purpose of Problem Feeding
We are now in a position to make the purpose of problem feeding somewhat clearer. The immediate purpose is of course to solve problems, but as already noted, solving problems can be
On Ill-Structured and Wicked Problems
As already mentioned in passing, Walter Reitman once suggested that some problems—what he calls ill-defined problems—have open constraints. Reitman elaborates with a representation of problems, or problem situations, in terms of a three-component vector [A, B, ⇒], where A is an initial state, B a terminal state, and ⇒ denotes a process, or sequence of operations, that brings the problem solver from the initial state to the terminal state. Reitman then discusses a number of possibilities with respect to how well each of these components is specified. Sometimes the terminal state is well-defined—as when a thief struggles to invent an alibi that casts the sequence of events leading to his arrest as mere accidents—but the process and initial state are left wide open. In other problems it is the initial state that is well-defined, but neither the process nor the terminal state (you have eggs, some carrots, a little milk, and a small piece of veal—make dinner!), and so on. Solving a problem involves going from a problem vector such as the one described above to another, more specific one, [A′, B′, ⇒′], where the components of the latter vector are elements of the components of the former. Solving the problem, then, is working through a kind of meta-process, ⇒*, which makes the problem more specific, until a unique solution can be produced. Solving problems that contain open constraints thus involves interpreting them more specifically. This transforms the problem into a new, precise version in which a specific solution can be produced. The fugue composer eventually settles on a single fugue.
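Reitman's vector can be sketched as a small data structure. The representation and the refinement step below are illustrative, not Reitman's own formalism: an "open" component is modelled as an unset field, and the meta-process ⇒* as successive filling-in of components.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Minimal sketch of Reitman's three-component problem vector [A, B, =>].
# The names and the refinement mechanics are illustrative assumptions.

@dataclass
class Problem:
    initial: Optional[str]       # A: initial state, possibly open
    terminal: Optional[str]      # B: terminal state, possibly open
    process: Optional[Callable]  # =>: the transforming process, possibly open

    def is_open(self):
        """A component left unspecified makes the problem ill-defined."""
        return any(c is None for c in (self.initial, self.terminal, self.process))

def refine(problem: Problem, **specifics) -> Problem:
    """One step of the meta-process =>*: fill in a component more specifically."""
    return Problem(
        specifics.get("initial", problem.initial),
        specifics.get("terminal", problem.terminal),
        specifics.get("process", problem.process),
    )

# "Compose a fugue": only the terminal state is (loosely) given at first.
fugue = Problem(initial=None, terminal="a fugue", process=None)
assert fugue.is_open()

# Successive refinement settles on one specific, solvable problem.
fugue = refine(fugue, initial="theme in D minor")
fugue = refine(fugue, process=lambda theme: f"four-voice fugue on {theme}")
assert not fugue.is_open()
print(fugue.process(fugue.initial))
```

The composer's settling on a single fugue corresponds to the final, fully specified vector.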
Herbert Simon (1973) argued, a few years after the publication of Reitman’s famous paper, that what makes a problem ill-structured or well-structured has nothing to do with properties of the problems themselves, but is the result of their relationship to the problem solver. A well-structured problem, on Simon’s account, needs to meet a number of criteria. These include that there should be some “definite criterion” for determining whether or not something is a solution, that there is a “mechanizable process for applying the criterion”, and that there is “at least one problem space” in which the different states of the problem (i.e. the initial state, goal states, and whatever other intermediate states are possible) can be represented. An ill-structured problem fails to meet some or all of these criteria, and because of this it would not be possible for a general problem solver to solve it. Simon, however, stresses that for real problem solvers most problems are in fact ill-structured, since real problem solvers—unlike their idealized counterparts—do not have infinite amounts of computational power, or the time required to sift through all possibilities. Winning a game of chess looks like a well-structured problem as long as we consider a problem solver that is powerful enough. For a human, or a simple chess computer, there will be no practicable way of determining which move is really best in a given situation. What the chess computer does, then, is transform the ill-structured problem into a well-structured one. It plays a kind of pseudo chess in which winning means maximizing some evaluative function.
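Simon's "pseudo chess" point can be made with a deliberately tiny stand-in. The game tree and the evaluation function below are invented for illustration (this is not a chess engine, and real engines use minimax over alternating players): the ill-structured problem "find the objectively best move" is replaced by the well-structured problem "maximize an evaluation function over a depth-limited search".

```python
# A tiny abstract game tree: each position maps to its successor positions.
# All positions and scores are illustrative placeholders.
tree = {
    "start": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1", "b2"],
}

# An evaluation function standing in for chess heuristics (material, etc.).
score = {"a1": 1, "a2": 5, "b1": 3, "b2": 2}

def best_move(position, depth=2):
    """Choose the move maximizing the evaluation over a depth-limited search
    -- structuring the problem to fit the solver's resources, as Simon
    describes (single-player maximization here, for simplicity)."""
    def value(pos, d):
        children = tree.get(pos, [])
        if d == 0 or not children:
            return score.get(pos, 0)
        return max(value(c, d - 1) for c in children)
    return max(tree[position], key=lambda c: value(c, depth - 1))

print(best_move("start"))  # -> "a", since it leads to the highest evaluation (5)
```

The solver never answers the original question of which move is really best; it answers the structured substitute, and the depth limit is exactly the kind of resource-matching constraint Simon has in mind.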
The transformation, in effect, limits the knowledge base of potentially relevant information, which is in essence how Simon suggests we view the distinction between ill-structured and well-structured problems. A problem is ill-structured if there is more potentially relevant information than the problem solver can feasibly compute. Structuring a problem involves limiting the amount of background information until it matches the available computational resources. Solving a problem is largely a matter of structuring it.
How are we to accommodate this in the account above? Interdisciplinarity, it could be argued, involves increasing—often dramatically—the amount of potentially relevant knowledge. In the present context this seems to point in the wrong direction. It is likely to transform well-structured problems into ill-structured ones, and perhaps that is indeed what sometimes happens. However, as has already been argued, seeing successful interdisciplinary problem solving as a series of operations on the constraints on admissible solutions often has the opposite effect: when the constraints are specified more precisely, two disciplines may arrive at a problem formulation that is better—that is, one in which it is clearer how the problem should be solved—than the formulation available within a single discipline.
Interestingly, in passing Simon also mentions issues of problem stability, although not in that terminology:
Now some obvious difficulties can arise from solving problems in this manner. Interrelations among the various well-structured sub-problems are likely to be neglected or underemphasized. Solutions to particular sub-problems are apt to be disturbed or undone at a later stage when new aspects are attended to, and the considerations leading to the original solutions forgotten or not noticed. (Simon 1973, 191)
The issues to which Simon alludes here appear quite likely to be exacerbated in interdisciplinary contexts, and this further emphasizes both the challenges involved in realizing genuine interdisciplinary problem solving and the need to maintain cross-boundary problem stability.
Ill-structured problems were, and are, a topic of interest among cognitive scientists and AI researchers, as well as philosophers of science. In sustainability science, however, another kind of problem is usually referred to, namely wicked problems (Rittel and Webber 1973; Norton 2005). Wicked problems—paradigmatically exemplified by broad social problems like crime or poverty—more or less lack definitive solutions. Solutions—that is to say, interventions—have system-wide consequences that are impossible to survey beforehand and that tend to radically change the problem situation in ways that generate new problems of the same form. Solving wicked problems, then, is a continual adaptive process in which certain types of solutions (once-and-for-all solutions) are categorically avoided and outcomes are closely monitored.
Concluding Remarks
A leading thought here is that interdisciplinary problem solving is similar to problem solving in general but harder to realize. In most cases the process of solving a problem oscillates between exploring a problem perceived as stable and transforming that problem into one which, for instance, better fits the resources available. Interdisciplinarity offers opportunities to approach problems which, for one reason or another, are difficult to solve within single disciplines. But, crucially, to take advantage of those opportunities efforts have to be made continually to align disciplines to one another with respect to some particular problem.
In this chapter I have developed an extensive nomenclature to describe interdisciplinary problem solving. I have then deployed this nomenclature—especially the notions of problem feeding, ideal and non-ideal problems, problem solving processes, and cross-boundary problem stability—to provide a better understanding of the Kuhnian challenges to interdisciplinarity outlined in the previous chapter and to explain what interdisciplinarity really amounts to. In the next chapter we will discuss a different feature of cross-disciplinary interaction: the idea that disciplines can overstep their boundaries and infringe on other disciplines.