Technology has a way of creating unintended consequences. I have been reading Jaron Lanier’s stuff lately (I don’t agree with 90% of it, but it’s interesting and provocative, which is more than I can say for 90% of the stuff I read; I also notice that the digerati who bash him have almost never actually read him) and it seems to be a key undertone of his work. It’s easy to see in his Edge interview on the “local-global flip”.
There, he’s talking about the outcomes of Wal-Mart, of Apple, of Google. He’s negative about them. And in many ways I share some of the negativity. I miss the mom and pop stores of my childhood in East Tennessee. I fear the implications of technology that make people passive consumers. And I am working full time on making data something that people own and control.
But he’s somehow lost the good parts of these systems, and there are good parts. They’re where these systems came from. Wal-Mart means not just the destruction of small stores, but the proliferation of warm, cheap clothing and cheap calories. That’s a benefit. Apple means my mom can send email and see her grandson (don’t tell me about other systems - I run Ubuntu - but Apple made it *easy*, and that’s what some people actually need). Google is, well, Google, for better or worse.
The negative parts are the unintended long-term consequences of the technologies and their implications. The link is for a lovely paper, with the lovely example of the microwave oven, whose inventor did not intend to be part of the long-term destruction of the family meal (negative unintended consequence) or of the long-term movement to liberate women (positive unintended consequence). Indeed, he was a guy fixing a radar system who noticed his chocolate had melted, so it’s safe to say he didn’t have much of a social agenda at all.
This is all a setup to my real point, which comes back to Open Access, and indeed to free culture generally. We are pursuing what Merton called a purposive social action in advocating for the liberation of scholarly, educational, and cultural content through free licensing (if you haven’t read the paper, do so now - it’s short, it’s beautiful, and it’s important). We are intentionally trying to change a system from closed to open under the belief that this effort will be more natural and native to a networked, digital world, and that the long term outcome will be a net positive to humanity. Part of our job should be to think about the consequences of that, for good and for bad.
We tend to think of “undesired” results as negative ones, but Merton’s paper makes the key distinction that not all undesired results are undesirable. I think that’s the fundamental key to understanding innovation, actually.
A lot of the arguments against OA focus on the undesired but foreseeable outcomes: business models will have to change, filtering and quality control methods will have to change, some people in power will have less, some new people will have new power. I don’t really give a hoot about those: the internet is here, the king is dead, long live the king.
Some of the more nuanced arguments focus on foreseeable and truly negative outcomes: the concentration of wealth among the large publishers who can afford the move to author-pays, the lack of funds to make author-pays work in many disciplines, the inequality of asking authors to pay in the developing world, and so forth. I am far more sympathetic to these arguments, and we have to address them. If we don’t, that failure will cloud our ability to address the outcomes in the previous paragraph, which only really affect people in power.
Lanier’s point about the flip is well taken here - we don’t want Elsevier, Springer, and Nature to concentrate publishing even more fully in their hands. They’ve already got more than half the market, and the local-global flip means we could easily see that share skyrocket, not shrink, as a consequence of openness.
That’s a short-term (10 year) view, however. I tend to think these things shake themselves out over time. There will be a Cambrian explosion of models to address them, and a small number of the models will work, and will then mutate to address the needs. If we know these will be problems, we can set up boundary conditions and facilitate the experimentation that needs to happen. Open Journal Systems is the kind of thing that helps here, by way of example.
But there are going to be “undesired” effects of this purposive action - as in, effects that weren’t part of our argument for the change, or part of the arguments against it. Donald Rumsfeld famously called these “unknown unknowns”.
Some of the unknown unknowns are going to be positive. Some are going to be negative. The beauty of systems that are open at the core is that those who follow us will have the rights to amplify the good ones, and the rights to fix the bad ones. And that’s in the end the point of Open Access, for me, as a purposive social action. It’s to guarantee first that the world has the right to read the literature of scholarship through the network, but the real goal is to make sure that whoever is reading has the knowledge to address the things we screw up, the negative consequences of Wal-Mart and Apple and Google. To fix the unknown unknowns.
We have to deal with the foreseeable negative outcomes - especially the concentration of power that Lanier points out so well, which is looming over scholarly publishing like a wave at Mavericks.
But we can’t lose sight of the goal in attempting to fix all the negative outcomes that we can predict. We have a massive set of scientific problems to deal with, and if we charge the world $30 an article, we are statistically less likely to have the right brains filled with the right knowledge at the right time to fix the problems we’ve left them. Work back from that, not forward from problems we can already see.