Why “notice and choice” approaches to privacy reduce our privacy

In a recently published article, we (Lemi Baruh and Mihaela Popescu) discuss the limitations of reliance on market mechanisms for privacy protection.

Self-management frameworks such as “notice and choice” are inherently biased towards 1) reducing the level of privacy enjoyed by members of society and 2) creating privacy inequities (i.e., privacy haves and have-nots). In the article, we also discuss an alternative way of approaching privacy protection in the age of big data analytics.

Here is the abstract:

This article looks at how the logic of big data analytics, which lends an aura of unchallenged objectivity to the algorithmic analysis of quantitative data, preempts individuals’ ability to self-define and closes off any opportunity for those inferences to be challenged or resisted. We argue that the predominant privacy protection regimes based on the privacy self-management framework of “notice and choice” not only fail to protect individual privacy, but also underplay privacy as a collective good. To illustrate this claim, we discuss how two possible individual strategies—withdrawal from the market (avoidance) and complete reliance on market-provided privacy protections (assimilation)—may result in fewer privacy options being available to society at large. We conclude by discussing how acknowledging the collective dimension of privacy could provide more meaningful alternatives for privacy protection.

Full Text of New Article on Privacy Protection in Mobile Environments

A new article entitled “Captive But Mobile: Privacy Concerns and Remedies for the Mobile Environment” has now been published in The Information Society.

Authors: Mihaela Popescu (California State University, San Bernardino) and Lemi Baruh (Koç University, Istanbul)

Abstract
We use the legal framework of captive audience to examine the Federal Trade Commission’s 2012 privacy guidelines as applied to mobile marketing. We define captive audiences as audiences without functional opt-out mechanisms to avoid situations of coercive communication. By analyzing the current mobile marketing ecosystem, we show that the Federal Trade Commission’s privacy guidelines inspired by the Canadian “privacy by design” paradigm fall short of protecting consumers against invasive mobile marketing in at least three respects: (a) the guidelines overlook how, in the context of data monopolies, the combination of location and personal history data threatens autonomy of choice; (b) the guidelines focus exclusively on user control over data sharing, while ignoring control over communicative interaction; (c) the reliance on market mechanisms to produce improved privacy policies may actually increase opt-out costs for consumers. We conclude by discussing two concrete proposals for improvement: a “home mode” for mobile privacy and target-specific privacy contract negotiation.

What Your Communication Metadata Says About You

A few weeks ago, when information about the U.S. National Security Agency’s phone surveillance program surfaced, President Obama was quick to announce that “nobody is listening to your telephone calls”. It was, rather, a “harmless” little system that merely collected metadata about individuals’ phone calls.

We have been hearing similar claims about electronic surveillance systems lately. For example, the “don’t be evil” company Google attempts to reassure us that its e-mail surveillance system is not creepy by saying that:

Ad targeting in Gmail is fully automated, and no humans read your email or Google Account information in order to show you advertisements or related information.

So no peeping tom is reading your e-mails, Google says. And that should be enough to comfort you about the privacy of your e-mails. Or is it?

The problem is that metadata often says more about an individual than one can ever imagine. As the EFF recently put it, it may even say more about a person than the actual content of a phone call (or an e-mail):

Sorry, your phone records—oops, “so-called metadata”—can reveal a lot more about the content of your calls than the government is implying. Metadata provides enough context to know some of the most intimate details of your lives.  And the government has given no assurances that this data will never be correlated with other easily obtained data. They may start out with just a phone number, but a reverse telephone directory is not hard to find. Given the public positions the government has taken on location information, it would be no surprise if they include location information demands in Section 215 orders for metadata.

However, as I said, it may often be very difficult to imagine what your communication metadata says about you. Fortunately, the good folks at MIT have come to the rescue: they created a very simple visualization tool that demonstrates how companies or the government can make inferences about your relationships based on your e-mail contacts alone. The tool is called Immersion.
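To get a rough sense of how such inferences work, here is a minimal sketch of my own (not Immersion’s actual code): it builds a weighted contact graph from nothing but sender/recipient header pairs. The sample addresses and the simple co-occurrence heuristic are purely illustrative assumptions.

```python
from collections import Counter
from itertools import combinations

# Hypothetical metadata only: (sender, recipients) pairs from message headers.
# No message bodies are needed for any of the inferences below.
messages = [
    ("alice@example.com", ["bob@example.com", "carol@example.com"]),
    ("alice@example.com", ["bob@example.com"]),
    ("dave@example.com", ["alice@example.com"]),
    ("alice@example.com", ["bob@example.com", "carol@example.com"]),
]

# Count how often each pair of addresses appears on the same message;
# repeated co-addressing is a crude proxy for tie strength.
edge_weights = Counter()
for sender, recipients in messages:
    participants = sorted({sender, *recipients})
    for a, b in combinations(participants, 2):
        edge_weights[(a, b)] += 1

for (a, b), weight in edge_weights.most_common():
    print(f"{a} <-> {b}: co-addressed on {weight} message(s)")
```

Even this toy version surfaces clusters and relative tie strengths without touching a single message body, which is exactly the point Immersion makes interactively.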

You can link your Gmail account to MIT’s “Immersion” tool here and see what comes up.

Mine is below (contacts anonymized, of course):

[Image: my anonymized Immersion contact network]

Privacy, Literacy, and Awareness Paradox in Big Data Ecosystems

The abstract and a copy of the presentation that we (Mihaela Popescu & Lemi Baruh) gave at the IAMCR conference in Dublin are available below. The paper introduces the concept of the “awareness paradox” to discuss privacy literacy in an era of Big Data analytics. Thanks to everyone in the audience for their responses and questions. The full paper is still a work in progress.

Title: Digital Literacy & Privacy Self-governance: A Value-based Approach to Privacy in Big Data Ecosystems

Abstract: The growth of interactive and mobile technologies, which equip institutions with vast capabilities to amass an unprecedented amount of information about individuals, has enabled the development of new business models. These business models rely on data-mining techniques and real-time or near real-time Big Data analytics designed to uncover hidden patterns and relations. Increasingly, the use of personal information, including behavioral and locational data, for large-scale data mining is becoming a normative capital-generating practice.

This new regime of data-intensive surveillance is akin to a fishing expedition that starts by comparing each data-point to the population base, flagging any deviation as either a risk to avoid or an opportunity to capitalize on. More importantly, by relying on algorithmic analysis of data, this regime of surveillance removes humans from the interpretation process, makes the process increasingly opaque, and adds an aura of objectivity that preempts challenges to the epistemological foundations of its inferences. At the same time, as policies for digital media use shift toward promoting self-management, they increasingly assume an ideal “omni-competent” user who is at once able to “benefit” from being open to information sharing and communication availability, weigh these benefits against the potential risks in the digital environment, and engage in risk-averse behaviors. This assumption is distinctly at odds with the reality of individuals’ understanding of privacy risks. The ubiquitous and technically specialized nature of modern data collection makes it increasingly difficult for online and mobile users to understand which entities are collecting data about them and how. Moreover, studies show that less than a third of online users read, however partially, a website’s privacy policy (McDonald et al. 2009), with only about 2% likely to read it thoroughly (Turow et al. 2007). Ironically, as the work of the Ponemon Institute in the United States demonstrates, absent legislation to the contrary, Big Data also means that companies are able to identify privacy-centric customers and treat them differently in order to allay their concerns, rather than providing privacy-conscious policies for all customers (Urbanski, 2013).

This paper positions the discussion of privacy self-governance in the context of normative digital literacy skills. The paper seeks to unpack and question the nature of these apparently conflicting digital literacy competencies as they relate to current privacy policies in the United States, policies which emphasize individual responsibility, choice, and informed consent by educated consumers. Taking as the point of departure an exploration of what it means to be aware of privacy risks, the paper analyzes recent policy documents issued by governmental agencies as well as commercial discourse in marketing trade journals, in order to examine how state and market-based entities frame both the meaning of user privacy risk awareness and the meaning of voluntary opt-in into data collection regimes. Next, by referencing data collection and data mining practices in Big Data ecosystems, the paper discusses to what extent these differential framings translate into a coherent set of principles that position privacy education among the “new” literacies said to enhance individual participation in the digital economy. Given the “take it or leave it” approach adopted by corporations that engage in data collection and use, the paper questions the potential of digital literacy to translate into meaningful action by users that could prompt a change in the data practices that dominate the U.S. market. The paper concludes with a discussion of the policy implications of this “awareness paradox”—a concept that describes how more digital literacy about privacy may lead to a higher tendency for some users to withdraw themselves from the market for certain types of online services, which, in turn, may decrease the incentives for the market to cater to their privacy needs.
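As an aside, the “fishing expedition” logic described in the abstract is easy to caricature in a few lines of code. The sketch below is a deliberately crude, hypothetical illustration (a simple z-score screen over a made-up behavioral metric), not a description of any real system:

```python
import statistics

# Hypothetical per-user behavioral metric, e.g., nightly upload volume in MB.
observations = {
    "user_01": 10.2, "user_02": 9.8, "user_03": 11.0,
    "user_04": 10.5, "user_05": 10.1, "user_06": 9.6,
    "user_07": 10.8, "user_08": 10.4, "user_09": 9.9,
    "user_10": 48.7,  # the deviation this dragnet will flag
}

mean = statistics.mean(observations.values())
stdev = statistics.stdev(observations.values())

THRESHOLD = 2.0  # arbitrary: "deviant" is whatever the analyst decides it is
for user, value in observations.items():
    z = (value - mean) / stdev
    if abs(z) > THRESHOLD:
        print(f"flagged {user}: z = {z:.2f}")
```

Everything here, including who counts as “normal,” is baked into the analyst’s choice of metric and threshold, which is precisely why the abstract treats the resulting objectivity as an aura rather than a property of the data.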

Click Here for the Presentation

In case it is useful, please cite as: Popescu, M. & Baruh, L. (2013, June). Digital Literacy and Privacy Self-Governance: A Value-Based Approach to Privacy in Big Data Ecosystems. Paper presented at the annual meeting of the International Association for Media and Communication Research Conference, Dublin, Ireland.

Freedom-Not-Fear, Day of Protest

The Freedom-Not-Fear movement, a group of European citizens concerned with the growing expansion of surveillance in the post-9/11 era of the “war on terror,” is taking its protest to Brussels, the capital of the EU, on September 17, 2011.

The movement draws attention to how the climate of fear instilled by the vivid imagery of planes crashing into the World Trade Center, together with the rhetoric of the “War on Terror,” has contributed to an expansion of authoritarian control at the expense of individual liberties, and argues that it is time for a change:

European policy making is affecting our every day lives and civil liberties more and more. The EU is increasingly imposing unnecessary and disproportionate governmental surveillance measures on us. We will not take this any longer. Let’s carry our protest into the capital of the EU!

We invite you to join us for a protest march “Freedom not Fear – Stop the surveillance mania!” in Brussels on Saturday, 17 September 2011.

The program of the protest: Saturday, 17 September 2011

  • from 13h30: gathering at Place Luxembourg
  • from 14h00: demonstration starts at Place Luxembourg
  • ca. 15h00: demonstration at Place Schuman
  • ca. 16h30: continuing at the Chapelle de la Madeleine (near the Grasmarkt)
  • ca. 18h00: closing of the demonstration

Farewell to IAMCR 2011

This week, esteemed friends from Kadir Has University in Istanbul successfully hosted the five-day IAMCR conference. I was proud to be a part of it, met some very interesting colleagues, and listened to even more interesting papers presented throughout the conference.

Congratulations to IAMCR, the local organizing committee at Kadir Has University, and Delano for bringing IAMCR to Istanbul.

Hope to see you in South Africa in 2012.

My two presentations from the conference are below:

  1. Lemi Baruh & Mihaela Popescu. Communicating Turkish-Islamic Identity in the Aftermath of Gaza Flotilla Raid: But Who is the “Us” in “Us” vs. “Them”?
  2. Mihaela Popescu & Lemi Baruh. Captive Audience Protections for the Digital Environment.

New issue of Çizgidışıdergi is out

A new Turkish journal called Çizgidışıdergi is out with its third issue.

I also had the pleasure of writing a short article titled “Benibendendahaiyibilenler.com: Etkileşimli Ortamda Tüketici Gözetimi (Surveillance) ve Kimlik” [roughly translated as ThosewhoknowmebetterthanIdo.com: Consumer Surveillance and Identity in Interactive Media].

The issue is available for free download.

Resist The Face(book) of Surveillance

About a week has passed since Facebook decided to change its privacy policies to make users’ information available to everyone (not just everyone on Facebook, but everyone online).

Public backlash seems to be rising:

The Electronic Privacy Information Center has filed a complaint with the U.S. Federal Trade Commission.

You can also take action by filing a complaint with TRUSTe.

Beware what you read and what you opt out of…

A few years ago, in an article published in New Media & Society (2007), I attempted to draw attention to a recurring problem with respect to user privacy online: the lack of transparency surrounding data collection, sharing, and use practices makes it almost impossible for even the most “savvy” user to actually challenge corporations’ “interpretations” of who we as individuals are… The problem gets even more complicated when we start confusing “data security” with privacy and falsely believe that, coupled with higher consumer knowledge, opt-out systems will be able to protect the privacy of individuals (no… not consumers, we are individuals).

In a recent entry, blogger and security and privacy analyst Christopher Soghoian provides some very useful examples from Google. For those of you who are too busy to read the rest, the gist is that when you opt out of Google’s Web History and similar services, all you are doing is stopping Google from using the data; the data itself remains on their servers:

Consider this snippet from the Frequently Asked Questions page for the Google Web History service:

You can choose to stop storing your web activity in Web History either temporarily or permanently, or remove items, as described in Web History Help. If you remove items, they will be removed from the service and will not be used to improve your search experience. As is common practice in the industry, Google also maintains a separate logs system for auditing purposes and to help us improve the quality of our services for users. For example, we use this information to audit our ads systems, understand which features are most popular to users, improve the quality of our search results, and help us combat vulnerabilities such as denial of service attacks.

As this page makes clear, Google does not promise to delete all copies of your old search records when you delete them using the Web History feature. No, the company will merely no longer show them to you, and will no longer use that information to provide customized search. I’m sure this was an honest mistake on Mayer’s part, right? As the company’s vice president of search products and user experience, it’s not like she should actually be expected to understand the fine-grained details of the company’s policies for search and user privacy.

In other words, should Big Brother or one of the smaller brothers need data about you, it is still there. And all you are left with is the comforting but false belief that Google or Yahoo or some other company is protecting your privacy.