Published in Macalester Today
Amazon tracks your purchases. Google sifts your email. And Uncle Sam may be monitoring your international calls. Is privacy even possible in the age of the Internet?
Like many college students, Jesse Russell ’14 (Eden Prairie, Minn.) logs onto Facebook several times a day. He posts status updates, sends messages to friends, and uploads photos and videos. But he’s careful to monitor what he and others post on the social-media site—and who in his network can see his activity. For example, though Russell is 21, the legal drinking age, he says, “I don’t want a lot of pictures of me on Facebook holding a beer.” He reviews everything that others post on his timeline and controls who can access each post (friends see party pix; Grandma doesn’t). “I grew up around technology,” says Russell, a political science major. “I love computers and how much they can help people. I love that we can use digital technology to improve communication and even save lives. But we also have to be careful with how we use it.”
As digital technology seeps into every aspect of modern life—from cell phones to cars, entertainment to cooking—our ability to maintain the privacy of personal information is increasingly threatened. We no longer assume our Google searches are anonymous. We never know exactly who views our Tweets or YouTube videos. Some of us worry that marketers are mining our data for commercial purposes—or that government officials are reading our emails. Hackers lurk everywhere.
Privacy may indeed be under attack. But where did our notions of privacy originate? What’s a reasonable level of privacy to expect in the digital age? And shouldn’t we acknowledge that often the biggest threat to our privacy is, well, ourselves?
What Orwell Didn't Predict
Assaults on privacy routinely make headlines in the media. But reaction among the general public has been harder to gauge. When reports were published last May indicating that the National Security Agency was monitoring, among other things, international communications made by private U.S. citizens, many Americans reacted with concern rather than outrage.
Days later, news broke that the U.S. Postal Service routinely photographed every piece of mail, capturing information about the addressee and sender, if not the actual contents. The postmaster general acknowledged that such information was collected but said it was rarely analyzed. Again, public reaction was muted.
George Orwell, author of the dystopian novel 1984, warned us decades ago about the dangers of the government tracking personal information. But the writer failed to anticipate that our behaviors and movements might someday be assiduously followed by commercial ventures as well. Last spring the New York Times reported that the high-end retailer Nordstrom was using customers’ cell-phone connections to its Wi-Fi network to track buyers’ paths between departments. The information—collected anonymously, according to Nordstrom—revealed how much time customers spent in each area. (The retailer has since ended the surveillance, according to news reports.)
And cell-phone data isn’t the only information companies are eager to assess. In 2012 the Times revealed that mass merchandiser Target could mine digital data from past purchases at its stores with enough accuracy to predict when a particular customer was pregnant, linking such information to the customer’s Guest ID so that coupons would automatically be dispensed for related goods such as baby food and maternity wear.
“Most of us don’t like the idea of someone tracking our data and collecting our information,” says philosophy professor Diane Michelfelder, who teaches a course on ethics and the Internet. “We worry that our data may be used to harm us in some way.” Says Russell, who participated in Michelfelder’s course last fall, “Privacy isn’t about whether the information is out there. Privacy is about what gets done with it.”
A Right to Privacy?
Changes in technology have given rise to privacy concerns for more than a century, says political science professor Patrick Schmidt. In 1890 Boston lawyers Samuel Warren and [future Supreme Court justice] Louis Brandeis published a persuasive essay in the Harvard Law Review arguing that individuals who had not sought the spotlight had a legal right to privacy, or, as Brandeis later put it, a “right to be left alone.” (Some scholars believe the article was written in response to the rise of both photography and yellow journalism and their potential intrusions into people’s lives.)
“What courts defended in the 19th century was essentially a ‘your home is your castle’ kind of doctrine,” Schmidt says. “At home, you could expect privacy—from the public and from the government.” Photographers couldn’t nose a lens through your curtains. Reporters—and the government—couldn’t enter your home without permission or a police warrant.
That view prevailed until the 1960s and 1970s, when Americans began to realize the many ways their privacy was affected by what happened outside their homes. The nation was rocked by revelations that President Richard Nixon was using the government’s resources to spy on civil rights demonstrators and Vietnam War protesters. Certainly some citizens saw that as a legitimate reason to encroach on privacy—an effort to protect the nation from radicals. But Idaho Senator Frank Church thought otherwise, leading an effort to investigate a shadowy government entity that few Americans had ever heard of, the National Security Agency. Peter Fenn ’70, a staffer on the Senate Intelligence Committee led by Church, remembers, “People did feel violated. They didn’t think their mail should be opened. They were worried about people listening in on their phone conversations.”
The Church Committee ultimately led to government curbs on information gathering. But neither Church nor anyone else anticipated the Internet age and its potential privacy perils, according to Fenn, now a political-communications consultant based in Washington, D.C. “We didn’t even consider digital technology,” he laughs. “That wasn’t even part of our vocabulary. We were only concerned about the Postal Service reading our mail and people tapping pay phones.”
Nixon failed to persuade most Americans that sometimes privacy must be sacrificed for the public good. (Few citizens liked the idea of spying on Americans—even if those people disagreed with their political views.) It would take 9/11 to reshape that view: The war on terror, the public agreed, occasionally necessitated some infringements on personal privacy and liberty.
In the wake of 9/11, federal officials argued that privacy rights needed to be balanced with security needs. Privacy is important, went the line of reasoning, but the fight against global terrorism occasionally requires some trespass on privacy rights. So now we surrender to searches at the airport. And when we discover that the U.S. Postal Service photographs every piece of our personal mail, we only shrug. We’ve willingly traded some rights of privacy for the possibility of security.
In fact, giving up privacy often has public and personal benefits. Philosophy professor Martin Gunderson points to public health as an arena where, in recent years, privacy rights and the public weal have been reevaluated and rebalanced. Prior to the spread of HIV/AIDS in the early 1980s, many bioethicists focused on patient rights, arguing that those rights were sacrosanct. “Privacy was pitted against public health,” Gunderson says.
As HIV ripped through the gay community, however, officials at the Centers for Disease Control and elsewhere argued that getting access to information about patients was vital to stopping the spread of the disease and educating the very community being decimated by the plague. GLBT advocates worried that collecting patient information would result in the “outing” and persecution of closeted gay men. But ultimately, Gunderson says, CDC officials managed to protect personal privacy and access the data they needed to track HIV.
More recently, Google has used web search data to help health officials predict the spread of influenza across the United States. Anonymous searches are being used to benefit the public at large: health officials can respond with vaccines and public-service announcements. A net gain, right?
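The logic behind such predictions is simple correlation: in weeks when flu-related searches spike, reported cases tend to rise as well. A minimal sketch of that idea, using invented numbers (Google's actual model was far more elaborate):

```python
# Toy illustration: flu-related search volume tracking reported cases.
# All figures below are invented for illustration only.

search_volume = [120, 150, 210, 300, 280, 190]   # weekly "flu symptoms" searches
reported_cases = [40, 55, 80, 115, 100, 70]      # case counts for the same weeks

def mean(xs):
    return sum(xs) / len(xs)

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(search_volume, reported_cases)
print(f"correlation: {r:.2f}")  # strongly positive for this toy data
```

A strong correlation like this is what lets search trends serve as an early-warning proxy before official case counts are tallied.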
Giving up privacy can benefit us personally, too. Making your profile public on Facebook allows old high school friends to find you—possibly resulting in reconnection, a social gain. Allowing Google to track your searches can result in browser ads serving up deals on the very products or services you’re seeking—a potential money- or time-saver.
The pros of sharing personal information on Facebook and Twitter outweigh the cons for English major Michael Abramson ’15 (Atherton, Calif.). In fact, his experience interning at two tech startups—one in Palo Alto and another in St. Paul—suggests that digitally sharing information may be essential to his future employment. “Having a developed social media presence is a very valuable thing,” Abramson says. “If you’re someone my age and you’re not doing social media, that can be a detriment. It’s a job skill at this point.”
Abramson regularly posts controversial articles as a way of provoking discussions among his friends—timing his posts to maximize visibility and click-throughs, so their popularity can be measured. Does he worry that a prospective employer may someday sift through those posts and scrap his résumé based on his views? Nope. “If a future employer is unwilling to hire me because of my opinions, I’m okay with that,” Abramson says. “I would rather stand by my morals and let my views be out there than censor myself.”
Disclosure and Dataveillance
But even 9/11 couldn’t do what the Internet and social media would eventually do: allow us to share our lives’ most private details in public forums. Political science professor Adrienne Christiansen remembers posting opinions and personal information to online bulletin boards in the early days of the World Wide Web. Often she used a pseudonym. Sometimes she pretended to be a man. “Those were the days when people on the Internet wouldn’t know if you were a dog,” Christiansen, who teaches a course in cyberpolitics, says. “That was half the fun.”
The details Christiansen shared online led one user to accurately guess that she was a professor—in St. Paul. Christiansen didn’t mind (the two eventually met and became friends), but the episode is an unforgettable reminder that the hints we drop online can be used to build profiles of us that are astonishingly spot on. The sheer size of the Internet doesn’t guarantee anonymity, either. The needle in the haystack that is personal data can be easily found. A few years ago researchers mined anonymous search data released by Internet service provider AOL to accurately identify several users. “We reveal so many things about our lives online,” Christiansen says. “A lot of the privacy breaches that people worry about are ones that we created ourselves.”
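The mechanics of this kind of re-identification are straightforward: each "anonymous" detail narrows the pool of candidates until only one person fits. A minimal sketch with an invented population (not the actual AOL analysis, which worked from real query logs):

```python
# Toy re-identification: intersecting a few quasi-identifiers.
# The population and its attributes are invented for illustration.

population = [
    {"name": "A. Smith", "city": "St. Paul", "job": "professor", "dog_owner": False},
    {"name": "B. Jones", "city": "St. Paul", "job": "professor", "dog_owner": True},
    {"name": "C. Lee",   "city": "Chicago",  "job": "professor", "dog_owner": True},
    {"name": "D. Kim",   "city": "St. Paul", "job": "student",   "dog_owner": True},
]

# Hints gleaned from supposedly anonymous posts:
hints = {"city": "St. Paul", "job": "professor", "dog_owner": True}

# Keep only the people consistent with every hint.
matches = [p for p in population
           if all(p[key] == value for key, value in hints.items())]
print(matches)  # only B. Jones fits all three hints
```

Three casual details were enough to single out one person here; at Internet scale the pool is larger, but so is the number of hints we scatter.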
The Mirage of Privacy
More and more, we’ve come to understand that online privacy is an illusion. Once we’ve hit “send” on the email, posted the video to YouTube, submitted the online comment, or uploaded the document to Dropbox, we’ve essentially relinquished control: Our private information is now subject to the vagaries of weak passwords and murky privacy policies. The information can be forwarded, copied, analyzed, and—thanks to improvements in digital-storage technologies—potentially accessed for generations.
“I think most first-year students are aware that what they post to Facebook will live forever,” says computer science professor Shilad Sen. Services like Snapchat, which lets users send photos that disappear within seconds of reaching the recipient, are increasingly popular, Sen notes, precisely because they lack permanence.
But what about the data we don’t share, the information about our behaviors and habits that we don’t want disseminated? Analysis of cell-phone data could reveal that you travel to Las Vegas at least once a month—a precious morsel of marketing data that might be sold to a hotel chain desiring your patronage. Cars are now outfitted with computers that can track speeds and other driving details, notes philosophy professor Michelfelder. Should your insurance company have access to such data? What if you were in an accident caused by a speeding driver? Would that change your mind? What if sharing your driving data could lower your premiums? Trading privacy can pay handsomely.
There are plenty of reasons to welcome the spread of digital technology and the miracles it has wrought. Amazon knows what we like to read. Facebook automatically tags photos of our friends so we don’t have to. Someday our coffeemakers may switch on the second they sense we’re stirring in bed, and our medicine cabinets may call the pharmacist when our prescriptions are getting low. But for the time being, no technology can accurately read our minds. No technology can match the spark of intimacy that occurs when two humans connect and reveal their private thoughts and opinions.
“I like talking to people more in person these days, especially with all the NSA stuff,” says Russell, the political science major. He’s less interested in cultivating Facebook friends. “If someone wants to get to know me,” he says, “I hope they want to get to know me in person.”