Status Quaestionis – Tamar Sharon

Each Status Quaestionis, Splijtstof interviews a teacher or researcher in philosophy about their current projects and academic topics of interest. For this edition, Jochem and Ted visited Dr. Tamar Sharon, associate professor of practical philosophy at Radboud University and co-founder of the brand-new Interdisciplinary Hub for Security, Privacy and Data Governance (iHub). We sat down in her office on the refurbished 19th floor – now home to iHub – where we talked about her academic career, her past and current research, the aims of iHub, and the reasons why philosophers should go out into the field a lot more.

Tamar Sharon

Could you start by telling us what you have done in academia so far and how you ended up at Radboud University?
It took me a long time to know what I wanted to study, and even longer to know that I wanted to be an academic. I have quite a complicated background: I am half French, half Israeli, but I was born in the United States. When I was about eleven, my family and I moved to France. In France, you study philosophy instead of French literature in the last year of high school, and it is part of the final exams. And I really loved it. But when thinking about university, I was quite pragmatic – I thought it would be difficult to get a job after a degree in philosophy. I thought I wanted to work in international relations later on, the UN or something. I began studying history and political science in Paris, and moved to Israel shortly after.

At Tel Aviv University, I did an MA in political science, but with an emphasis on political theory. You can see that I kept getting drawn to philosophy, although I tried to keep it at bay. I wrote a master’s thesis on Charles Taylor’s hermeneutics and drew some comparisons with Michel Foucault. After that, I wasn’t sure what to do. Political science didn’t seem so attractive anymore. I decided to start another research master’s, in a new interdisciplinary programme at Bar-Ilan University that included philosophy, sociology, linguistics, psychoanalysis, etc. It was called ‘Hermeneutics and Culture’. I also took courses in the programme on science and technology there. I had always been interested in questions about how new technologies influence how we see the world and understand ourselves. I wrote my dissertation on human enhancement, or ‘post-humanism’, at the intersection of both of these programmes, with one supervisor who specialized in post-structuralist theory and another who was a historian of science. I was fascinated by these types of technologies (such as gene editing, assisted reproductive technologies, cognitive enhancement techniques), and how they seemed to challenge fundamental concepts about what it means to be human. Are humans defined by what is given, ‘naturally’, or are they defined by the aspiration to be more than what is given, or are they never more than their social conditions? I received my PhD in 2011 and published the dissertation as a book shortly after (Human Nature in the Age of Biotechnology, 2013, ed.).

After my PhD I moved to Maastricht for a post-doc at Maastricht University. I was happy to come to the Netherlands because the discipline called Science and Technology Studies (STS – which studies science and technology from a humanities and social science perspective) is very strong in the Netherlands. I worked there for a couple of years. Then, a little over a year ago, Bart Jacobs, a professor of computer science here at Radboud University, asked me if I would join him in opening a new interdisciplinary research centre on digitalization and society that Radboud University was interested in setting up. I knew Bart from my research. He has done a lot of excellent work on “privacy-by-design”, that is, ensuring that the value of privacy is safeguarded by default in technological architectures. We had also been discussing possibilities to work together: the research project I’m working on at the moment looks at a new way of doing medical research that is based on collaborations between tech companies like Google, Apple, and Amazon, and public research institutions like hospitals. One of these collaborations is taking place at Radboud UMC (Parkinson op Maat, ed.), with Verily, a sister company of Google, and Bart and his digital security group were brought in to develop a safe data sharing framework for the project – to make sure the collaboration with Verily was lawful and ethical, on the privacy front at least. I’m a strong believer in interdisciplinary research and in the need to bring many disciplines together to work on the challenges of digitalization. The opportunity to help develop a research centre on this was really exciting, and I jumped at it. At the same time, our dean, Christoph Lüthy, was thinking about ways to develop a new research line at our faculty on digital ethics. Marcel Becker in the Practical Philosophy group was already working on this, and they wanted to expand it, also in terms of teaching, for example in the new PPS bachelor. So, the stars aligned, and that’s how I ended up at Radboud University.

Could you tell us a bit more about this new research centre, the Interdisciplinary Hub for Security, Privacy and Data Governance (iHub)?
The aim is to do state of the art research on how digitalization or digital technologies are changing our world. Society, science, relationships between people, between people and the state, between institutions and the state: all these things are changing in light of digitalization. Our starting point, drawing on philosophy of technology and science and technology studies, is that technologies are not just neutral instruments, but have an immense transformative potential. As they are introduced and as we use them, they destabilize fundamental values that we use to make sense of the world and our place in it, and routine practices and norms that embody these values.

“Addressing these questions requires help from many disciplines: computer science, philosophy, law, communication sciences, organizational studies and others.”

As a first step, we’re looking at four sets of fundamental values that are being disrupted by technologies: privacy and identity, justice and solidarity, autonomy and freedom, and knowledge and expertise. Examples of the research questions we’re asking along these lines are: how do you build a data sharing framework that protects people’s privacy? What kind of discriminatory effects can AI or automated decision-making have? Which different conceptions of the common good are being mobilized in digital health, and how will this change biomedical research and healthcare? Some of us are looking at the value of autonomy. Fleur Jongepier, for example, is exploring what algorithmic decision-making – or algorithms that claim to know us better than we know ourselves – means for first-person authority. Many digital technologies are being promoted with the promise of enhancing citizens’ autonomy, but that’s a big question. You can think of the platforms that are being built so that people can manage their own health data. There are a lot of initiatives like this that say: here is a platform where you can upload all your medical data – your electronic health records, data you’re collecting on your phone that is related to your health – and you can store it and manage it there. The promise is that this will enhance your autonomy because you are in charge of your data, and therefore you might be more in charge of your health. At the same time, this offloads a lot of responsibilities from states or organisations (e.g. hospitals) onto individuals, who now have to decide who can access that data and how to manage it. I don’t think we’re ready for this onslaught of new responsibilities and their implications.

Addressing these questions requires help from many disciplines: computer science, philosophy, law, communication sciences, organizational studies and others. Individual disciplines cannot solve them. So at iHub we’re bringing people from these disciplines to work on these questions together. Philosophers and social scientists, for example, are good at diagnosing problems, at framing them and making explicit what is at stake. But we’re also looking to develop ways of steering digital transformations in beneficial ways. These can be technical solutions (designing values like privacy into a technology), they can be legal (updating non-discrimination law to account for new forms of unfair differentiation resulting from algorithmic decision-making), and they can be normative (deciding with medical researchers which conditions are necessary for collaborating with big tech in a way that secures the common good of healthcare).

You chose to create a physical space for the iHub on the 19th floor of the Erasmus Building. Why did you make that decision?
The iHub started officially on the first of May; that was when we moved here and got this great space on the 19th floor. It is very important to us to have a physical space where we are all located. We also insist that people come into work every day, or almost every day, and don’t work remotely too much. The idea is to work on these topics together: we really want people to rub shoulders, to drink coffee together, to discuss what they are working on, and to work through the problems they are facing together.

You mentioned your own research project on ‘the Googlization of health’. Could you elaborate on this project that focuses on developing a normative framework for healthcare studies?
The ‘Googlization of health’ is a bit of a snappy title – not that I am in favour of those, but it sticks. It is a good marketing trick. What I mean by ‘Googlization’ is that the ‘Big Five’ (Google, Amazon, Facebook, Apple and Microsoft) and some other companies have become aware that the wealth of data they are sitting on and the expertise they have in data management can make them important players in health and medical research. They have started moving quite fast and aggressively into the sector of biomedical research and, more recently, into healthcare. The project at Radboud UMC, led by Prof. Bas Bloem, is a very good example of this. The idea there is to gather lots of different types of data – DNA, brain images, performance metrics, stool samples – on over 600 Parkinson’s patients over the course of two years, in order to gain insights into the disease at an individual level. Verily has developed a ‘study watch’, which they are piloting in this project, that patients can wear and which gathers data, ubiquitously, 24/7, outside of the clinic.

Medical research, now and in the future, is very much going to be based on this type of approach: bringing large heterogeneous datasets together to infer personalized insights. These data will include information generated by smartphones and wearables, like how many steps people walk or how they sleep, electronic health records, genomic data, etc. A company like Google is quite good at managing such large amounts of data; they have been doing this for some time with services like Google Search. Amazon is also very good at these kinds of things. These companies have an expertise that has become, in a sense, quite attractive to the medical community. That is why we are seeing these kinds of collaborations.

My starting point on this topic was that this is problematic, because we know of all the issues we are seeing in the online world: everything from the Snowden revelations to Cambridge Analytica shows that it is kind of the Wild West online. We can assume that many of those problems will be imported into medicine and health if these companies get involved in this research. Now, most of the public discussion around these issues is about privacy. That is what everybody is worried about, because we know that the business model of these companies has long been to collect data that we are basically giving away for free or in return for their services. That data is then used, for example, for targeted advertising. This makes privacy a hot topic right now.

“We know that who asks the questions in science also determines which questions get asked. Do we want these companies to be the ones doing that?”

But privacy is maybe not the only issue here. The framework that Bart Jacobs and his group have developed for the Parkinson’s project is a very hermetic, privacy-friendly framework: it might well erase the privacy issue. However, I think there are issues that remain even if we have solved the privacy problem. Do we want these companies to be involved in health, in our health, in medical research? If they start amassing these datasets that are useful for the medical community, do they also become gatekeepers? They probably will, meaning they have a say in who can access the data and at what price. Also, what role will they start playing in setting research agendas? We know that who asks the questions in science also determines which questions get asked. Do we want these companies to be the ones doing that? These kinds of questions move beyond privacy and are more concerned with the common good – with medical research and health as a common good.

I think it is the question of the common good we need to be asking. Interestingly, in the world of data, practices are often framed in terms of two separate spheres: the sphere of the market and the sphere of social value. You are either sharing data for profit or for social value. Although this dichotomy is helpful for sensitizing us to the exploitative practices of the data economy, it is too limiting to understand a phenomenon like the Googlization of health. If we look at what the companies are saying and at the press releases around the medical projects that are being set up, there is very much an appeal to the common good in these. We can think that this is just a rhetorical appeal, but I think that’s too simple, because it would imply that everyone who is sharing data has been fooled by these companies. I think other things are happening as well.

If we take these appeals to the common good at face value, rather than assuming that they are rhetorical tricks to get people to share data, what we start seeing is that there are many different conceptions of the common good that are kind of pushing this phenomenon of the Googlization of health forward. This is the real starting point of the project. If we are interested in securing the common good, the first step is to understand that there are many different conceptions of the common good that are being mobilized by different actors in this phenomenon: the companies, the medical researchers, the patients who are involved in this, and by the governments who are funding some of this research.

So how do we go about studying that? I use a framework that was developed by two economic sociologists, Luc Boltanski and Laurent Thévenot, who have a book called On Justification, in which they look at how people justify what they do. When there is a conflict and you ask people why they do what they do, they tend to try to justify what they do in a way that will be understandable to the person who is asking, or to the person they are in conflict with. Boltanski and Thévenot found that there are about six types of justifications that people appeal to. They call these ‘moral repertoires’. These moral repertoires are ways of understanding the world that are organized around one understanding of the common good. So, you can have a market repertoire, which understands the common good in terms of increased economic growth. You can have a civic repertoire, in which the common good is understood in terms of inclusivity, of solidarity, of participation. But you can have other values, like efficiency, for example in what is called the industrial repertoire, which is very important in something like the Googlization of health: here the common good is about making things more functional and efficient and smoothly running.

What we’re doing first in my project is to map these repertoires. A lot of empirical work has to be done to understand which repertoires the actors who are involved in this research are appealing to. I think there are appeals to a lot more than just the market repertoire and the civic repertoire that we see in the dichotomy I mentioned before. We are seeing other repertoires as well: the industrial repertoire; the project repertoire, which is very much about innovation for the sake of innovation; and the vitality repertoire, which is about increasing health above all. The aim of the research, in a sense, is then to start a serious conversation about which repertoire should be the dominant one concerning the Googlization of health.

So, these repertoires are not mutually exclusive, but maybe one should prevail over the others?
Yes, they are not mutually exclusive, and we also see that the same people might appeal to different repertoires in different situations. My idea is that we should aim to develop combinations of repertoires as governing mechanisms for different research projects; combinations that would be acceptable to all actors involved, but in which the civic repertoire would play a dominant role. That’s because the civic repertoire is about inclusivity, solidarity and democratic control, and I think that this is precisely what we are losing in these developments. So, we can think of combinations where the civic repertoire protects against the shortcomings or perversions of other repertoires. This is still at the level of hypothesis, but we might say: okay, maybe the only way to do the data-intensive research that is necessary for the future of medical research is by collaborating with a company like Google or Verily, because they have the required technical skills. That is an appeal to an industrial repertoire. But we then say that the company cannot do data collection, data analysis and data storage all at once: we will split that up. This separation of roles, these checks and balances, is a solution that appeals to the civic repertoire. We can think of all kinds of combinations like that. The companies can be involved, but if they use publicly generated datasets, they must pay taxes on that use. That is also a civic solution.

“I think we should be open to the benefits that technology companies can bring to this kind of research and what we can get out of it. However, we have got to set down the conditions.”

The aim will be to come up with these combinations, because they are more open to multiple conceptualisations of the common good. Theoretically and normatively speaking, I think that is good. And pragmatically it is important, because if we want to have all these different actors on board, we must come up with solutions that speak to them as well. I think we should be open to the benefits that technology companies can bring to this kind of research and what we can get out of it. However, we have got to set down the conditions. And this implies giving the civic repertoire a prominent role. At the same time, the civic repertoire also has its shortcomings in the digital age, which need to be studied and assessed. The civic repertoire also needs some ‘updating’.

Which philosophers or ethicists are important for your current research?
I am currently working with the notion of ‘separate spheres’, or what the economic sociologist Viviana Zelizer calls the ‘hostile worlds doctrine’, which has traditionally been very prominent in social theory and philosophy, and which I think is limiting in the context of the Googlization of health for several reasons. So I’m drawn to theorists who work with ‘sphere plurality’ – like Boltanski and Thévenot (though ‘repertoires’ are not the same as ‘spheres’) or Michael Walzer. Sphere plurality accounts for a richer and more realistic ethical landscape. It allows us to say: wait, we should not focus only on transgressions from the sphere of the market into the sphere of healthcare, for instance; any type of sphere transgression is potentially dangerous and therefore deserves our attention. This opens our eyes to other risks – like those posed by the industrial repertoire, which is currently becoming so pervasive in healthcare and medicine. We are then better prepared to deal with a phenomenon like the Googlization of health, in which repertoires from many spheres are competing. What’s challenging here is that often, with philosophers like Walzer, or Michael Sandel, or Elizabeth Anderson, in order to decide how to draw the limits between spheres, you first need to understand the nature of the good that is at stake (healthcare, commodities, education, etc.). This makes it possible to decide how to valuate it appropriately. But the good here, in the Googlization of health, is personal health data, whose definition and social meaning are highly controversial at the moment.

As far as philosophers of technology go, I have been influenced a lot by Bruno Latour and especially his ideas around the agency of technological artefacts and how they steer our behaviour. I have also been influenced by philosophers like Annemarie Mol and Peter-Paul Verbeek, who take a more nuanced approach in comparison to the quite pessimistic approach of more “classical” philosophers of technology like Martin Heidegger, Hans Jonas and Herbert Marcuse. The approach of Mol and Verbeek is more “neither good, nor bad, nor neutral.” Technologies need to be studied individually and in their context.

“I think philosophers would benefit from learning some skills from the empirical toolbox. Not only to look at reality but to also get into it, to feel it a bit more.”

In my earlier writings on posthumanism I was inspired by Deleuze and his work with Guattari, but then I took what they call in philosophy of technology an ‘empirical turn’, where I wanted to be much closer to the people who are actually using the technologies that many philosophers only write about. For that I had to learn how to do some ethnography, how to speak to people and observe practices. I’m also influenced by pragmatist philosophers like Dewey: how do we think about morality as something that changes? Morality doesn’t always stay the same, so how do we study that? I think philosophers would benefit from learning some skills from the empirical toolbox as well. Not only to look at reality but to also get into it, to feel it a bit more, and to get a sense of how people experience the use of technology. That use is always going to be quite different from the grand discourses around technology. Social theorists studying new health technologies today often draw on Foucauldian critiques of neoliberalism and how it responsibilizes people in relation to health. Yet in my empirical research I’ve seen that things are more complicated. People are aware of these discourses. And while they often reproduce them in their practices, they also challenge and resist them, and re-appropriate them in surprising ways. You can study technology practices as tactical engagements with everyday life, drawing on Michel de Certeau. I very much agree with his phrase “We mustn’t take people for fools”. It’s important to go out into the field and get a feel for what people are doing and how they experience these discourses.

Can you give some examples of how these discourses are challenged by the users of technology?
I did quite a lot of research on people who use tracking devices, what is called the ‘Quantified Self’. These people are avid users of mobile devices for collecting data about themselves, health-related or otherwise. There is a very promissory discourse around these technologies: people will be empowered to become healthier and will be more in charge of their health by collecting this data. The ‘counter-discourse’, generated by social theorists and critical theorists, is that this is a neoliberalization of what used to be a collective responsibility for health: people are expected to become active and proactive in managing themselves. This is seen as an economic agenda to reduce healthcare costs and spending.

I found that the Quantified Self people have indeed adopted this entrepreneurial attitude towards their health – the so-called neoliberal discourse in which they are individuals who want to be in charge of their own health. Looking at their practices, however, you see that the way they understand this ‘empowerment’ is not the way the promissory discourse talks about it, nor what the counter-discourse fears. Say, for example, that the promissory discourse tells us to eat one apple a day, consume X amount of protein, and take ten thousand steps a day. Some of these individuals then say: ‘Well, I am indeed an individual, so that does not mean I comply with any norms that public health is setting. My uniqueness is very different from anyone else’s.’ In one article I call this ‘empowerment in the wild’, as opposed to the kind of ‘controlled empowerment’ that public health discourse has in mind when it talks about empowerment, where it actually wants everyone to be more or less the same. Autonomy among the Quantified Self was very much about not following norms and making their own norms. They develop very intricate individual experiments to find out what is optimal for them, but it is a very individualized understanding of optimization. In this sense, it is a reproduction of the discourse that you as an individual are unique and should be involved in and responsible for your health. However, they are taking it to an extreme that certainly nobody in public health imagined. I found this again and again among the Quantified Self.

There is also a lot of use of AI in the medical world, for instance to give diagnoses. In one of your iHub lectures[1] you say that as these artificial intelligences grow more complex, there is a growing problem of accountability. Could you maybe elaborate on that?
Well, AI and machine learning – the type of AI that’s being used most for something like making decisions automatically – are very interesting within the medical field. The way it works, though I am no specialist, is that you have to feed a lot of data into the AI so that it can learn from previous cases. Say you are looking at medical data, for example people with a certain type of cancer who have responded well to one type of treatment and worse to another. This data goes into the AI and it learns from these cases. Based on this data, and on the training and learning, it will come up with new recommendations: the output. How those decisions are made, however, is very much a black box. The whole point of machine learning is that once it trains on the data you have put into it, it learns by itself. It learns how to learn – that is kind of the great thing about it. However, because it does this on its own, we also do not know exactly how it comes to its decision and why.
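To make the “learning from previous cases” she describes a little more concrete, here is a minimal sketch in Python using scikit-learn. The patient features, numbers and choice of model are entirely made-up illustrative assumptions, not the setup of any project mentioned in this interview:

```python
from sklearn.ensemble import RandomForestClassifier
import numpy as np

# Hypothetical patient features: age, a biomarker level, prior-treatment flag
X = np.array([
    [54, 1.2, 0],
    [63, 3.4, 1],
    [47, 0.8, 0],
    [71, 2.9, 1],
    [59, 1.0, 1],
    [66, 3.1, 0],
])
# Hypothetical outcomes: 1 = responded well to treatment A, 0 = did not
y = np.array([1, 0, 1, 0, 1, 0])

# "Training and learning": the model fits itself to the previous cases
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# The output: a recommendation for a new patient
new_patient = np.array([[58, 2.1, 1]])
print(model.predict(new_patient))        # e.g. [0]
print(model.predict_proba(new_patient))  # a probability, but no reasoning a doctor could retrace
```

The point of the sketch is in the last two lines: the trained model hands back a recommendation and a probability, but not a chain of reasoning that could be retraced step by step – the “black box” she goes on to discuss.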

“One of the fun things about technologies is that they often make us reflect back onto our own practices and make us question these.”

Traditionally, a doctor decides. If you ask the doctor: ‘why did you decide this?’, the doctor can trace back the steps of her decision. It is explainable, whereas the decisions coming out of the black box of the AI are not. If you decide to follow the recommendation that the AI made, and you treat the patient, and then the patient dies, you cannot ask the doctor to retrace the steps. So the question is: where does the accountability lie? I think this is interesting for philosophers because moral responsibility is traditionally something we pin to a human actor. What we are seeing here is that non-human actors are involved, which maybe should be considered responsible. That is one question. Another might be that there is a network of human and non-human actors acting together, where accountability might lie with the whole network: a shared responsibility. This undermines our traditional concept of responsibility, which focuses on individuals. So you see how the technology here explodes our whole understanding of moral responsibility. We have not found solutions for this yet, and yet the technologies are running ahead and they are being used.

Does that not also call into question the way the doctor motivates her conclusions? If the black-box AI can spew out the right answer, you can also wonder what the causal motivation behind the doctor’s decision is. Is it as autonomous as we have presumed all this time?
I think it also makes us question human cognition. Is that not a black box as well? Do we really know everything that goes into making a decision? Can a doctor actually retrace all of her steps? I implied that she could, but can you really articulate every factor that went into deciding? One of the fun things about technologies is that they often make us reflect back onto our own practices and make us question these. Maybe the way we have always thought of responsibility was far too simplistic: thinking of one human actor as responsible when many other factors play into any kind of decision-making process. In the medical field there are many things that we also do not understand, and we just go with them because they work. Take aspirin, for example: scientists did not know how it worked for many years, but they saw that there was an effect. Sometimes you have to be pragmatic about it and say: ‘alright, if it helps these people and we do not see side effects, then just go with it, even if we do not understand it.’ We might relate to AI like that as well.

How do you, as an ethicist, go about making your argument for more morally ‘right’ approaches in medical science when these big data companies have approaches that ‘just work’?
First of all, there is the question of whether things actually work. That is a technical question that I am not really interested in, but it needs to be answered first. But secondly, the question that the ethicist asks is not necessarily a question about feasibility but rather about desirability. What happens – and what do we lose – if it works? That is the question that politicians and maybe scientists are not thinking about, but the ethicists are. Certainly, in all of the discussion around health surveillance, or surveillance in general, the values at stake include privacy and autonomy. If the big promise of personalized medicine comes to pass, it will also involve a kind of continuous monitoring of people. That might make us healthier, but we will probably also lose a sense of autonomy and self-determination – what Julie Cohen calls the “breathing room” we need in order to develop as human beings, without being watched. I think it is our job to pinpoint what the trade-offs are, and then to make space for discussion about whether we agree to them.

What is difficult is that health is very much a supervalue today. It is almost impossible to argue against health. It is like our religion: everybody wants to be healthy, right? Everybody is in favour of doing things to improve population health. I don’t know the exact percentage of the budget spent on health and on research into health (including my own), but it is extremely high. But what do we lose when we gain more health? At what cost is this supervalue being upheld? That is a conversation we do not have much in our society. So that is really what the ethicist needs to do, and that is what we are good at: thinking about these shortcomings, these trade-offs. And we are doing this in my project with each of the repertoires as well. We try to think about what the shortcomings of each repertoire are and to open up a space for some public deliberation about that.

iHub organized seminars last year; will you continue doing that as well?
Yes. The seminar series last year was just in-house, but now we are opening up to speakers from outside of Radboud and the Netherlands. That will be monthly. All the dates and guests will be up on our website soon. The seminars are open to anyone and you can sign up to our mailing list to get announcements about this and other events.

Traditionally, we finish these interviews by asking what advice you have for students who are interested in these topics – in particular in getting involved in iHub, the philosophy of science, and the ethics of technology.
One piece of advice I would give philosophy students is to read non-philosophy as well. [Laughs] The kind of philosophy I do incorporates a lot of social science. Getting at these practices, and at the reality of how people experience the challenges brought on by new technologies, requires some kind of empirical work as well. You have this traditional division of labour between what philosophers do, which is theoretical, and what the social scientists do, which is to go out and do the empirical work. I think we benefit from knowing how to do some of both: the deepest normative insights come from also engaging with our practical reality.

“One piece of advice I would give philosophy students is to read non-philosophy as well. The deepest normative insights come from also engaging with our practical reality.”

I think philosophy students also tend to shy away from technology. But the challenges we are seeing in digital society today very much call upon philosophy students to think about these things as well. We should not leave it to the computer scientists to engineer our social world. A lot of the questions that emerge from transformations in digital society are almost traditional philosophical questions. Questions about moral responsibility, justice, distributive justice, autonomy: these are all questions philosophers are very good at asking. I want to say: apply this more to digital society. I would like more philosophy students to get interested in questions around digital technology and to understand that these are the philosophical questions of today. Companies know this as well: they are out looking for ethicists and philosophers. Society very much needs us today.

At iHub, our intention is to grow, and we will be hiring PhD students in the future. So if you are interested in doing a PhD project that is somehow related to these topics, keep your eyes open, come to our seminars, and I’m always happy to discuss research so you can drop me a line.

To keep up with iHub, visit https://www.ru.nl/ihub/


[1] The digital disruption of health: promises, challenges and ways forward: https://www.youtube.com/watch?v=3grITwmSYX0.