Big data and analytics raise puzzling ethical questions, especially when combined with the level of personalization enabled by mobile computing and wearable devices, and with the power of sensors and actuators that allow machines to react to events and make choices on behalf of individuals, sometimes without those individuals even being aware.
I must confess, I am not a big fan of science fiction. Not of that series of movies about wars in space, nor of that TV series that tells tales of a ship navigating through galaxies. But I have read a few things; for instance, I am fascinated by the writings of 20th-century geniuses like George Orwell and Ray Bradbury. I should also say that I am not particularly concerned about my privacy in the virtual world. I use many social networks, and I am happy with the level of privacy protection they offer and with the ability to configure what I authorize them to do. Similarly, I own a smartphone, and I like that I can choose when and whether to disclose my location to a given app. I am certainly annoyed by spam, and by my mobile operator, which sends me advertising through SMS at least three or four times a day; but they also offer me a very good plan, so that spam is a price worth paying. There is a transparent contract between us.
Notwithstanding my own level of comfort, I see how the emergence of big data and analytics, particularly in combination with mobile computing, wearable computing and, more broadly, the Internet of Things, triggers a number of ethical concerns. Rather than discussing vague concepts, I imagined two public sector scenarios that would affect what I consider the most vulnerable population clusters: young people (in a student scenario) and people affected by illness (in a healthcare patient scenario).
Let's start with the student scenario. Big data and analytics now make it possible to leverage vast arrays of student data to predict which students are at risk of dropping out, to identify domains where students are more likely to succeed, and to adapt learning and testing pathways for topics where individual students struggle. The more quickly these insights can be made available to school administrators, teachers and students themselves, the more rapidly remedial or preventive action can be taken; but careful consideration must be given to the risk of stigmatizing, isolating, or embarrassing students. Another popular use case is using student data to protect school revenues. What if schools decided to offer their services only to students who are more likely to succeed ("creaming" their customers) while "parking" those who are less likely to? If you think about it, this is the approach traditionally applied in graduate education: institutions use international test scores, essays, recommendations and other data to decide who is most likely to succeed, hence who is worth the scarce and prestigious resources they offer. The difference is that graduate school applicants knowingly approach institutions understanding that this is the underlying "social contract". With BDA capabilities becoming more widely available and easier and quicker to use, schools could apply such rules without transparently communicating that they are doing so, increasing the risk of pushing the boundaries of elite education.
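To make the dropout-prediction use case a little more concrete, here is a minimal sketch of how such a risk score might be computed. Everything in it is hypothetical: the features, the weights and the threshold are invented for illustration; a real system would learn the weights from historical student records rather than hand-pick them.

```python
from math import exp

# Hypothetical, hand-picked weights for illustration only.
# A real deployment would estimate these from historical data.
WEIGHTS = {
    "absences_per_month": 0.35,
    "failed_courses": 0.80,
    "avg_grade": -0.06,  # higher grades reduce the risk score
}
BIAS = 3.0

def dropout_risk(student: dict) -> float:
    """Return a 0-1 dropout-risk score via a logistic function."""
    z = BIAS + sum(WEIGHTS[k] * student[k] for k in WEIGHTS)
    return 1 / (1 + exp(-z))

# Two invented student profiles:
at_risk = dropout_risk(
    {"absences_per_month": 6, "failed_courses": 2, "avg_grade": 55}
)
low_risk = dropout_risk(
    {"absences_per_month": 1, "failed_courses": 0, "avg_grade": 85}
)
# at_risk scores well above low_risk with these weights
```

The ethical questions in the text start exactly here: who sees these scores, whether students know they are being scored, and what actions (remedial support versus "parking") the scores trigger.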
The second scenario (admittedly a bit more "1984-ish") is that of a diabetic teenager whose mother uses indoor location, video surveillance and an internet-connected refrigerator and microwave to control what he or she eats and make sure the diet is appropriate. But the world is full of temptations, so the teenager could go out and eat unhealthy food away from home. Or worse, develop habits that lead to bulimia: if, for example, the refrigerator is timed to open only at certain times of the day, there is an incentive to eat everything at once.
IDC research indicates that these and other similar ethical questions will proliferate. Banning these technologies would do more harm than good, as was the case with the security and policy concerns posed by BYOD and social media. Tolerating and embracing the opportunities, with appropriate policies, is the best course of action. But public sector IT and non-IT executives should take into account that:
- A number of use cases will emerge that we cannot foresee today. CIOs will retain a crucial role in large organizations, helping business users and their managers understand the implications.
- Data privacy rules are lagging behind the pace of technology evolution, and legislators are still debating whether to maintain the current regime, where authorization is requested at the point of data collection, possibly with the option of a "right to be forgotten", or whether authorization should instead be given on a use-case-by-use-case basis.
- There could be an "ethical gap". I am certainly not a technology expert, but I understand enough to change my privacy settings in a social network, or to consider what company policies ask me to do or not to do. Younger generations of consumers (citizens, patients, students, workers, taxpayers, voters...) know more about technology than I do, but are neither as familiar with nor as interested in policies. For older generations the opposite is true.