Do online privacy concerns predict privacy behavior?

In a new article, we (Lemi Baruh, Ekin Seçinti, Zeynep Cemalcılar) meta-analytically chime in on the frequently debated concept of the “privacy paradox”. We investigate whether users’ reported privacy concerns and privacy literacy predict the extent to which they utilize online services (including but not limited to SNSs), disclose personal information, and adopt measures to protect their privacy. Privacy concerns did not predict SNS use; however, they were associated with lower disclosure of information, lower use of other types of online services (e.g., e-commerce), and a higher tendency to engage in privacy-protective measures.

Click here for access to the article.

Click here for access to additional information about the meta-analysis.

Cyberpsychology’s special issue on self-disclosure and privacy published

Privacy and disclosure special issue of Cyberpsychology: Journal of Psychosocial Research on Cyberspace, edited by Michel Walrave, Sonja Utz, Alexander P. Schouten, and Wannes Heirman, is now out and available for download (and hopefully for wide reading, discussing, citing).

Also included in the special issue is an article from SIMLAB (Murat Kezer, Barış Sevi, Zeynep Cemalcılar, and Lemi Baruh). The article compares three age groups (18-40, 41-65, 65+) in terms of their tendency to self-disclose on Facebook, their privacy attitudes, their privacy literacy, and their use of privacy protective measures.

The study reports that young adults are more likely than other age groups to self-disclose on Facebook; yet, they are also the age group that is most likely to utilize privacy protective measures on Facebook. Furthermore, using a multidimensional approach to privacy attitude measurement, the study reports that while young adults are more likely to be concerned about their own privacy, mature adults tend to be more concerned about others’ privacy. Finally, the findings of the study suggest that the impact of privacy attitudes on privacy-protective behaviors is strongest among mature adults.

Here is the link to the full article.

We thank the editors of the special issue, Michel Walrave, Sonja Utz, Alexander P. Schouten, and Wannes Heirman, for the opportunity.

PhD Studentship(s) at Social Interaction and Media Lab, Koç University, Istanbul

SIMLAB at Koç University, Istanbul is looking for PhD studentship candidates interested in working on the following topics:

  • Online socialization
  • Impression formation, relationship initiation and maintenance on social media
  • Self-disclosure, communication and detection of emotions on social media
  • Social media and identity
  • Privacy attitudes, preferences and privacy management behavior of users

Candidates should have a background, and preferably graduate training, in social psychology, media studies, or other related fields. Candidates who have applied experience in quantitative research methods, statistical analysis, field management, and/or programming languages are particularly welcome.

Information regarding application procedures for PhD studentships is provided on the Design, Technology, & Society (DTS) PhD program webpage. Further inquiries about the application procedures should be sent to simlab@ku.edu.tr.

Candidates who are accepted as PhD students will receive scholarships as described on the webpage of the Graduate School of Social Sciences and Humanities at Koç University.


Why “notice and choice” approaches to privacy reduce our privacy

In a recently published article, we (Lemi Baruh and Mihaela Popescu) discuss the limitations of reliance on market mechanisms for privacy protection.

Self-management frameworks such as “notice and choice” are inherently biased towards 1) reducing the level of privacy enjoyed by the members of the society and 2) creating privacy inequities (i.e., privacy haves and have-nots). In the article we also discuss an alternative way of approaching privacy protection in the age of big data analytics.

Here is the abstract:

This article looks at how the logic of big data analytics, which promotes an aura of unchallenged objectivity to the algorithmic analysis of quantitative data, preempts individuals’ ability to self-define and closes off any opportunity for those inferences to be challenged or resisted. We argue that the predominant privacy protection regimes based on the privacy self-management framework of “notice and choice” not only fail to protect individual privacy, but also underplay privacy as a collective good. To illustrate this claim, we discuss how two possible individual strategies—withdrawal from the market (avoidance) and complete reliance on market-provided privacy protections (assimilation)—may result in less privacy options available to the society at large. We conclude by discussing how acknowledging the collective dimension of privacy could provide more meaningful alternatives for privacy protection.

Sharing sensitive information on Twitter and its “rubbernecking effect”

In a new article titled “Rubbernecking Effect of Intimate Information on Twitter: When Getting Attention Works Against Interpersonal Attraction” published in Cyberpsychology, Behavior, and Social Networking, we (Lemi Baruh and Zeynep Cemalcılar) discuss the effects of sharing sensitive (intimate) information on the social media platform Twitter.

The article focuses on how viewers react to sensitive information they see on a Twitter profile: while viewers may initially stick around longer to look at a profile containing more sensitive information, they find profiles sharing sensitive information less attractive. We link this reaction to the satisfaction of a voyeuristic curiosity. Just like the rubbernecking behavior of “a driver passing by a car accident, the satisfaction of voyeuristic curiosity through profile browsing on Twitter is temporarily enjoyed at the moment when the opportunity is available”.

Below is the abstract:

Social networking sites offer individuals an opportunity to document and share information about themselves, as well as engaging in social browsing to learn about others. As a micro-blogging site within which users often share information publicly, Twitter may be a particularly suitable venue that can help satisfy both of these motivations. This study investigates how viewers react to disclosure of intimate information on Twitter. Specifically, the impact of disclosure intimacy is studied on attention that viewers pay to a Twitter page, reduction in their uncertainty about the attributes of the page owner, and their interpersonal attraction to the owner of the page. A total of 618 adult online panel members viewed one of six Twitter pages that contained either low-intimacy or high-intimacy tweets. Analyses indicated that viewers exposed to the Twitter pages containing high-intimate information paid more attention to the pages, were more confident about the attributions they could make about the page owner, yet were less willing to pursue further socialization with the page owner. Furthermore, attributional confidence mediated and perceived similarity moderated the relationship between disclosure intimacy and interpersonal attraction. This interaction between disclosure intimacy and perceived similarity was such that viewers who considered the page owner to be similar (dissimilar) to themselves were more (less) socially attracted to page owners who disclosed intimate information. These findings suggest that while intimate information shared on a Twitter page draws attention, this does not necessarily result in further socialization with the page owner—an effect we named as the “rubbernecking effect” of intimate information.

Introducing a “multidimensional privacy attitudes scale”

As one of the first research projects at SIMLAB (founded in late 2012), we (Lemi Baruh & Zeynep Cemalcılar) developed a multidimensional privacy orientation scale. The scale is summarized in an article published in November 2014 in Personality and Individual Differences.

The article reports that individuals’ decisions about the level of privacy they need are determined not only by concern about their own privacy but also by concern about the privacy of other individuals:

  • There are four distinct dimensions of privacy: (1) belief in the value of “privacy as a right”; (2) “other-contingent privacy”; (3) “concern about own informational privacy”; and (4) “concern about privacy of others.”
  • A segmentation of users in terms of these four dimensions points to three distinct types of users: (1) privacy advocates, who are concerned about both their own and other people’s privacy; (2) privacy individualists, who are concerned mostly about their own privacy; and (3) privacy indifferents, who score lower on all dimensions than the other segments.
  • Users who value others’ privacy are less likely to invade others’ informational privacy.
  • Privacy individualists use social network sites for satisfying voyeuristic curiosity.
  • Reciprocating disclosure is more likely for privacy advocates than for individualists.

The multidimensional scale has 18 items, all measured on a 5-point Likert scale (ranging from strongly disagree to strongly agree):

Dimension 1: Privacy as a Right

  • Privacy laws should be strengthened to protect personal privacy.
  • People need legal protection against misuse of personal data.
  • If I were to write a constitution today, I would probably add privacy as a fundamental right.

Dimension 2: Concern about Own Informational Privacy

  • When I share the details of my personal life with somebody, I often worry that he/she will tell those details to other people.
  • I am concerned that people around me know too much about me.
  • I am concerned with the consequences of sharing identity information.
  • I worry about sharing information with more people than I intend to.

Dimension 3: Other-Contingent Privacy

  • If somebody is not careful about protecting their own privacy, I cannot trust them about respecting mine.
  • If I am to enjoy some privacy in my life, I need my friends to be careful about protecting their privacy as well.
  • I could never trust someone as my confidant if they go around sharing details about their own private lives.
  • The level of privacy that I can enjoy depends on the extent to which people around me protect their own privacy.

Dimension 4: Concern about Privacy of Others

  • It is important for me to respect the privacy of individuals, even if they are not careful about protecting their own privacy.
  • I value other people’s privacy as much as I value mine.
  • Even when somebody is not careful about his/her privacy, I do my best to respect that person’s privacy.
  • I always do my best not to intrude into other people’s private lives.
  • Respect for others’ privacy should be an important priority in social relations.
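If it helps when administering the scale, the items above can be scored per dimension. The sketch below is ours, not from the article: it assumes a simple mean of each dimension’s items (the published article may use a different scoring procedure), and it covers only the 16 items listed above, in the order given.

```python
# Hypothetical scoring sketch for the privacy orientation scale items listed
# above. Assumption: each item is answered 1-5 (strongly disagree to strongly
# agree) and a dimension score is the plain mean of its items.

DIMENSIONS = {
    "privacy_as_a_right": [0, 1, 2],          # Dimension 1 (3 items)
    "concern_own_privacy": [3, 4, 5, 6],      # Dimension 2 (4 items)
    "other_contingent_privacy": [7, 8, 9, 10],  # Dimension 3 (4 items)
    "concern_others_privacy": [11, 12, 13, 14, 15],  # Dimension 4 (5 items)
}

def score(responses):
    """Return the mean score per dimension for one respondent's 16 answers."""
    if len(responses) != 16:
        raise ValueError("expected 16 item responses")
    if not all(1 <= r <= 5 for r in responses):
        raise ValueError("responses must be on a 1-5 scale")
    return {dim: sum(responses[i] for i in idx) / len(idx)
            for dim, idx in DIMENSIONS.items()}

# Example respondent: strongly endorses "privacy as a right", neutral on own
# privacy, low on other-contingent privacy, high on others' privacy.
example = [5, 5, 5] + [3] * 4 + [2] * 4 + [4] * 5
print(score(example))
```

Respondents who score high on all four dimensions would fall into the “privacy advocates” segment described above; reverse-coding and weighting are left out of this sketch.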

Please feel free to use (and/or translate) the scale. We would greatly appreciate it if you could notify us about any translation of the scale.

Citation information: 

Baruh, Lemi, and Zeynep Cemalcılar. 2014. “It Is More than Personal: Development and Validation of a Multidimensional Privacy Orientation Scale.” Personality and Individual Differences 70 (November). Elsevier Ltd: 165–70. doi:10.1016/j.paid.2014.06.042.

Fulltext of New Article on Privacy Protection in Mobile Environments

A new article entitled “Captive But Mobile: Privacy Concerns and Remedies for the Mobile Environment” has now been published in The Information Society.

Authors: Mihaela Popescu (California State University, San Bernardino) and Lemi Baruh (Koç University, Istanbul)

Abstract
We use the legal framework of captive audience to examine the Federal Trade Commission’s 2012 privacy guidelines as applied to mobile marketing. We define captive audiences as audiences without functional opt-out mechanisms to avoid situations of coercive communication. By analyzing the current mobile marketing ecosystem, we show that the Federal Trade Commission’s privacy guidelines inspired by the Canadian “privacy by design” paradigm fall short of protecting consumers against invasive mobile marketing in at least three respects: (a) the guidelines overlook how, in the context of data monopolies, the combination of location and personal history data threatens autonomy of choice; (b) the guidelines focus exclusively on user control over data sharing, while ignoring control over communicative interaction; (c) the reliance on market mechanisms to produce improved privacy policies may actually increase opt-out costs for consumers. We conclude by discussing two concrete proposals for improvement: a “home mode” for mobile privacy and target-specific privacy contract negotiation.

What Does Your Communication Metadata Say About You?

A few weeks ago, when information about the U.S. National Security Agency’s phone surveillance program surfaced, President Obama was quick to announce that “nobody is listening to your telephone calls”. Rather, it was a “harmless” little system that collected metadata about individuals’ phone calls.

We have been hearing similar claims about electronic surveillance systems lately. For example, the “don’t be evil” company Google tries to reassure us that its e-mail surveillance system is not creepy by saying that:

Ad targeting in Gmail is fully automated, and no humans read your email or Google Account information in order to show you advertisements or related information.

So no peeping tom is reading your e-mails, Google says. And that should be enough to comfort you about the privacy of your e-mails. Or is it?

The problem is that metadata often says more about an individual than one can ever imagine. As the EFF recently put it, it may even say more about a person than the actual content of a phone call (or an e-mail):

Sorry, your phone records—oops, “so-called metadata”—can reveal a lot more about the content of your calls than the government is implying. Metadata provides enough context to know some of the most intimate details of your lives.  And the government has given no assurances that this data will never be correlated with other easily obtained data. They may start out with just a phone number, but a reverse telephone directory is not hard to find. Given the public positions the government has taken on location information, it would be no surprise if they include location information demands in Section 215 orders for metadata.

However, as I said, it is often very difficult to imagine what your communication metadata says about you. Here the good folks at MIT come to the rescue. They created a very simple visualization tool, called Immersion, that demonstrates how companies or the government can make inferences about your relationships based on your e-mail contacts.
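To get a feel for this kind of inference, here is a toy sketch (not Immersion’s actual code, and the addresses are made up): with nothing but message headers reduced to sender and recipients, co-occurrence counts alone already reveal who talks to whom, and how often.

```python
from collections import Counter
from itertools import combinations

# Toy relationship inference from e-mail metadata only: no message bodies
# are needed. Each record is (sender, [recipients]); every pair of addresses
# appearing on the same message strengthens the tie between them by one.

def relationship_graph(messages):
    edges = Counter()
    for sender, recipients in messages:
        people = sorted({sender, *recipients})
        for a, b in combinations(people, 2):
            edges[(a, b)] += 1
    return edges

# Hypothetical inbox of three messages.
inbox = [
    ("alice@example.com", ["bob@example.com"]),
    ("bob@example.com", ["alice@example.com", "carol@example.com"]),
    ("alice@example.com", ["bob@example.com", "carol@example.com"]),
]
# Strongest tie: alice and bob appear together on all three messages.
print(relationship_graph(inbox).most_common(1))
```

Immersion’s real analysis is of course richer (it uses timestamps and volume over time), but even this minimal version shows why “we only collect metadata” is cold comfort.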

You can link your Gmail account to MIT’s “Immersion” tool here and see what comes up.

Mine is below (contacts anonymized, of course).

[Immersion network map, contacts anonymized]

Privacy, Literacy, and Awareness Paradox in Big Data Ecosystems

The abstract and a copy of the presentation that we (Mihaela Popescu & Lemi Baruh) gave at the IAMCR conference in Dublin are available below. The paper introduces the concept of the “awareness paradox” to discuss privacy literacy in an era of Big Data analytics. Thanks to all listeners for their responses and questions. The full paper is still a work in progress.

Title: Digital Literacy & Privacy Self-governance: A Value-based Approach to Privacy in Big Data Ecosystems

Abstract: The growth of interactive and mobile technologies, which equip institutions with vast capabilities to amass an unprecedented amount of information about individuals, has enabled the development of new business models. These business models rely on data-mining techniques and real-time or near real-time Big Data analytics designed to uncover hidden patterns and relations. Increasingly, the use of personal information, including behavioral and locational data, for large-scale data mining is becoming a normative capital-generating practice.

This new regime of data intensive surveillance is akin to a fishing expedition that starts by comparing each data-point to the population base, while potentially signaling any deviation as a potential risk to avoid or an opportunity to capitalize on. More importantly, by relying on algorithmic analysis of data, this regime of surveillance removes humans from the interpretation process, makes the process increasingly opaque, and adds an aura of objectivity that preempts challenges to the epistemological foundations of its inferences. At the same time, as policies for digital media use shift toward promoting self-management, they increasingly assume an ideal “omni-competent” user who is at once able to “benefit” from being open to information sharing and communication availability, weigh these benefits against the potential risks in the digital environment, and engage in risk-aversive behaviors. This assumption is distinctly at odds with the reality of individuals’ understanding of privacy risks. The ubiquitous and technically specialized nature of modern data collection makes it increasingly difficult for online and mobile users to understand which entities are collecting data about them and how. Moreover, studies show that less than a third of online users read, however partially, a website’s privacy policy (McDonald et al. 2009), with only about 2% likely to read it thoroughly (Turow et al. 2007). Ironically, as the work of the Ponemon Institute in the United States demonstrates, absent legislation to the contrary, Big Data also means that companies are able to identify privacy-centric customers and treat them differently in order to allay their concerns, rather than providing privacy-conscious policies for all customers (Urbanski, 2013).

This paper positions the discussion of privacy self-governance in the context of normative digital literacy skills. The paper seeks to unpack and question the nature of these apparently conflicting digital literacy competencies as they relate to current privacy policies in the United States, policies which emphasize individual responsibility, choice, and informed consent by educated consumers. Taking as the point of departure an exploration of what it means to be aware of privacy risks, the paper analyzes recent policy documents issued by governmental agencies as well as commercial discourse in marketing trade journals, in order to examine how state and market-based entities frame both the meaning of user privacy risk awareness and the meaning of voluntary opt-in into data collection regimes. Next, by referencing data collection and data mining practices in Big Data ecosystems, the paper discusses to what extent these differential framings translate into a coherent set of principles that position privacy education among the “new” literacies said to enhance individual participation in the digital economy. Given the “take it or leave it” approach adopted by corporations that engage in data collection and use, the paper questions the potential of digital literacy to translate into meaningful actions from users able to prompt a change in data practices that dominate the U.S. market. The paper concludes with a discussion of the policy implications of this “awareness paradox”—a concept that describes how more digital literacy about privacy may lead to a higher tendency for some users to withdraw themselves from the market for certain types of online services, which, in turn, may decrease the incentives for the market to cater to their privacy needs.

Click Here for the Presentation

In case it is useful, please cite as: Popescu, M. & Baruh, L. (2013, June). Digital Literacy and Privacy Self-Governance: A Value-Based Approach to Privacy in Big Data Ecosystems. Paper presented at the annual meeting of the International Association for Media and Communication Research Conference, Dublin, Ireland.

Freedom-Not-Fear, Day of Protest

The Freedom-Not-Fear movement, a group of European citizens concerned with the growing expansion of surveillance in the post-9/11 era of the “war on terror”, is taking its protest to the EU capital Brussels on September 17, 2011.

The movement draws attention to how the climate of fear instilled by the vivid imagery of planes crashing into the World Trade Center and the rhetoric of the “War on Terror” has contributed to an expansion of authoritarian control at the expense of individual liberties, and argues that it is time for a change:

European policy making is affecting our every day lives and civil liberties more and more. The EU is increasingly imposing unnecessary and disproportionate governmental surveillance measures on us. We will not take this any longer. Let’s carry our protest into the capital of the EU!

We invite you to join us for a protest march “Freedom not Fear – Stop the surveillance mania!” in Brussels on Saturday, 17 September 2011.

The program of the protest (Saturday, 17 September 2011):

  • from 13h30: gathering at Place Luxembourg
  • from 14h00: start of the manifestation at Place Luxembourg
  • ca. 15h00: manifestation at Place Schuman
  • ca. 16h30: continuing at Chapel Madeleine (near Grasmarkt)
  • ca. 18h00: closing of the manifestation