With consumer data and surveillance, we already blew our chance to build
a data world that truly empowers individuals. We screwed it up well and
good. We let the argument stand that functionality - features, free
stuff - is a solid, ethical, and, most of all, normal trade for data
that describes us in intimate detail. We let the companies that create
that functionality use that intimate detail to sift our mail for us, to
recommend sweaters to us, to share it with the surveillance state, and
to watch us.
But we haven’t yet blown that chance with health. Health data is
stubbornly illiquid, a truth that frustrates so many attempts to bring
consumer-level functionality to the health care system, especially here
in the USA.
But the same structural barriers to data liquidity that are so
frustrating have given us a gift, the gift of time. Time has revealed
exactly how abusive “data liquidity” can be when it’s controlled
completely by large corporations, by an app economy that uses design to
make us think as little as possible about the tradeoffs involved with
our data. If we already had that kind of liquidity in health, do you
think it’d be any more individually empowering?
We have a chance to do it right.
We have a chance to build a health system from the inside out around
mobile devices, around commodity prices for clinical data, around cheap
DNA sequences, around electronic health records owned by the individual
rather than the health care Borg.
It’s no small irony that the very same devices that have been revealed
to relentlessly surveil us, without our agency, can be turned around to
relentlessly surveil ourselves for our own health and to pool that data
to advance our
collective understanding of health. And it’s going to happen, one way or
another. There’s too much money at stake.
But the choices that get made in what we measure, and how we measure,
encode our politics. We can measure depression with the secondary
measure of movement, for example. Do we take in GPS data? If we do, do
we record latitude and longitude all the time, or just the percentage
change from a single reference point over the course of the day, a
point that is itself deleted every day? One of those choices is far
more exploitative than the other, and choosing between them is clear
proof of how we think about the individual’s rights.
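To make the contrast concrete, here is a minimal sketch of the two
data-collection designs. It is purely illustrative - none of these names
come from any real app or study, and it records displacement in
kilometers from a daily anchor rather than a literal percentage - but
the political point is the same: one design keeps an identifying
location trace forever, the other keeps only a daily movement summary
and deletes every absolute coordinate.

```python
# Illustrative sketch only: two ways to turn GPS fixes into a movement
# signal for depression research. Names and details are hypothetical.

from dataclasses import dataclass, field
from math import radians, sin, cos, asin, sqrt
from typing import List, Optional, Tuple

LatLon = Tuple[float, float]

def distance_km(a: LatLon, b: LatLon) -> float:
    """Haversine distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (a[0], a[1], b[0], b[1]))
    h = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))

@dataclass
class ExploitativeTracker:
    """Keeps the full location trace forever: maximally re-identifiable."""
    trace: List[LatLon] = field(default_factory=list)

    def record(self, fix: LatLon) -> None:
        self.trace.append(fix)

@dataclass
class RespectfulTracker:
    """Keeps only movement relative to a daily anchor; raw fixes are discarded."""
    anchor: Optional[LatLon] = None   # today's reference point, forgotten nightly
    max_displacement_km: float = 0.0

    def record(self, fix: LatLon) -> None:
        if self.anchor is None:
            self.anchor = fix
        self.max_displacement_km = max(self.max_displacement_km,
                                       distance_km(self.anchor, fix))

    def end_of_day(self) -> float:
        """Return the day's movement summary and forget every absolute location."""
        summary = self.max_displacement_km
        self.anchor = None
        self.max_displacement_km = 0.0
        return summary
```

Both designs answer the research question - how much did this person
move today? - equally well. Only one of them can also answer where the
person lives.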
Doing research this way involves risks. Some of those risks we know:
disclosure, loss of privacy, re-identification, discrimination,
bullying. Some of them we don’t. But clinical study has never been
without risk. Informed consent exists because we believe that “normal”
people are capable of
balancing the risks
of a study against its benefits, and exercising their agency to decide
to join or not.
We can’t transition to a new kind of science based on individual
engagement if we don’t agitate for people’s rights to share their data.
We won’t be able to do that until we start to loudly and regularly
proclaim, basically, that people can - should - must - be trusted to
process complex concepts about research and make an informed choice to
participate.
Right now we run informed consent as a zero-sum choice: a form written
by a doctor, edited by a lawyer and a committee, with no agency at all
beyond yes or no. The forms are typically long, filled with boilerplate
that limits the liability of the organizations involved, and they ban
re-use of the data. Generally, we allow clinical research to treat the
individual involved like a piece of land on which valuable oil sits
waiting to be extracted and piped away. Those forms are what the
participant gets to slog through, and they have a predictable impact.
That’s unacceptable. We have an entire industry devoted to making
complex things engaging, as long as those complex things make it easier
to purchase goods and services. We need to apply the same design tools
to illustrating complex concepts like de-identification, the risks of
data disclosure, and the benefits of data sharing and collaborative
analysis.
We need a visual language for the concepts of informed consent and
research. We need technical frameworks that encode the agency of the
individual: returning results to participants, letting them connect
with the researchers and with other participants. We need new norms and
structures that impose obligations on data users: to be good stewards,
to respect clinical study as a partnership with real people, to be
trustable.
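What might it look like for a framework to actually encode that agency
and those obligations? Here is a purely hypothetical sketch - not drawn
from any real consent standard - of a machine-readable consent record
that travels with the data and carries the participant’s choices,
including the right to revoke, so that a data user has to check them
before acting.

```python
# Hypothetical sketch only: a consent record that encodes the
# participant's agency as machine-readable choices, not a signed PDF.

from dataclasses import dataclass
from datetime import date

@dataclass
class ConsentRecord:
    participant_id: str                    # pseudonymous identifier
    study_id: str
    granted_on: date
    allow_reuse: bool = False              # may the data be re-used beyond this study?
    return_results: bool = True            # obligation: send findings back to the participant
    allow_recontact: bool = True           # may researchers contact the participant again?
    share_with_participants: bool = False  # may the participant connect with other participants?
    revoked: bool = False                  # agency includes the right to walk away

    def permits(self, action: str) -> bool:
        """Check a proposed use of the data against the consent actually given."""
        if self.revoked:
            return False
        return {
            "reuse": self.allow_reuse,
            "return_results": self.return_results,
            "recontact": self.allow_recontact,
            "participant_network": self.share_with_participants,
        }.get(action, False)

# A data user is obliged to check before acting:
record = ConsentRecord("p-0042", "movement-and-depression-study",
                       date(2014, 5, 1), allow_reuse=True)
assert record.permits("reuse")
assert not record.permits("participant_network")
```

The point is not the field names; it is that yes or no on a form gives
way to a set of choices the participant controls and the data user is
bound to honor.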
We can do much of this with good informed consent. Informed consent is
a vector into the entire engagement of people in research, and it’s a
vector that uniquely favors the open-society world over the corporate
one.
And that’s another reason not just to act on this chance, but to act
quickly: if we wait until the health care behemoths get to it, we can
expect informed consent to get road-graded down to a one-click. And
that’d be a damn shame.
(derived in part from a talk given at the Broad Institute yesterday)