Friday, April 13, 2018

How Facebook could reinvent itself – 3 ideas from academia

What will he decide to do? AP Photo/Andrew Harnik
Jeff Inglis, The Conversation

Facebook CEO Mark Zuckerberg’s testimony in front of Congress, following disclosures of personal data being misused by third parties, has raised the question of whether, and how, the social media company should be regulated. But short of regulation, the company can take a number of steps to address privacy concerns and the ways its platform has been used to disseminate false information to influence elections.

Scholars of privacy and digital trust have written for The Conversation about concrete ideas – some of them radical breaks with its current business model – the company could use right away.

1. Act like a media company

Facebook plays an enormous role in U.S. society and in civil society around the world. The leader of a multiyear global study of how digital technologies spread and how much people trust them, Tufts University’s Bhaskar Chakravorti, recommends the company accept that it is a media company, and therefore

“take responsibility for the content it publishes and republishes. It can combine both human and artificial intelligence to sort through the content, labeling news, opinions, hearsay, research and other types of information in ways ordinary users can understand.”

2. Focus on truth

Facebook could then, perhaps, embrace the mission of journalism and watchdog organizations, and as American University scholars of public accountability and digital media systems Barbara Romzek and Aram Sinnreich suggest,

“start competing to provide the most accurate news instead of the most click-worthy, and the most trustworthy sources rather than the most sensational.”

3. Cut users in on the deal

If Facebook wants to keep making money from its users’ data, Indiana University technology law and ethics scholar Scott Shackelford suggests

“flip[ping] the relationship and having Facebook pay people for their data, [which] could be [worth] as much as US$1,000 a year for the average social media user.”

The multi-billion-dollar company has an opportunity to find a new path before the public and lawmakers weigh in.

Jeff Inglis, Science + Technology Editor, The Conversation

This article was originally published on The Conversation. Read the original article.

Thursday, April 5, 2018

Understanding Facebook's data crisis: 5 essential reads

What will Mark Zuckerberg say to Congress? AP Photo/Noah Berger
Jeff Inglis, The Conversation

Most of Facebook’s 2 billion users have likely had their data collected by third parties, the company revealed April 4. That follows reports that 87 million users’ data were used to target online political advertising in the run-up to the 2016 U.S. presidential election.

As company CEO Mark Zuckerberg prepares to testify before Congress, Facebook is beginning to respond to international public and government criticism of its data-harvesting and data-sharing policies. Many scholars around the U.S. are discussing what happened, what’s at stake, how to fix it, and what could come next. Here we spotlight five examples from our recent coverage.

1. What actually happened?

A lot of the concern has arisen from reporting that indicated Cambridge Analytica’s analysis was based on profiling people’s personalities, drawing on work by Cambridge University researcher Aleksandr Kogan.

Media scholar Matthew Hindman actually asked Kogan what he had done. As Hindman explained, “Information on users’ personalities or ‘psychographics’ was just a modest part of how the model targeted citizens. It was not a personality model strictly speaking, but rather one that boiled down demographics, social influences, personality and everything else into a big correlated lump.”

2. What were the effects of what happened?

On a personal level, this level of data collection – particularly for the 50 million Facebook users who had never consented to having their data collected by Kogan or Cambridge Analytica – was distressing. Ethical hacker Timothy Summers noted that democracy itself is at stake:

“What used to be a public exchange of information and democratic dialogue is now a customized whisper campaign: Groups both ethical and malicious can divide Americans, whispering into the ear of each and every user, nudging them based on their fears and encouraging them to whisper to others who share those fears.”

3. What should I do in response?

The backlash has been significant, with most Facebook users expressing some level of concern over what might be done with personal data Facebook has on them. As sociologists Denise Anthony and Luke Stark explain, people shouldn’t trust Facebook or other companies that collect massive amounts of user data: “Neither regulations nor third-party institutions currently exist to ensure that social media companies are trustworthy.”

4. What if I want to quit Facebook?

Many people have thought about, and talked about, deleting their Facebook accounts. But it’s harder than most people expect to actually do so. A communications research group at the University of Pennsylvania discussed all the psychological boosts that keep people hooked on social media, including Facebook’s own overt protestations:

“When one of us tried deactivating her account, she was told how huge the loss would be – profile disabled, all the memories evaporating, losing touch with over 500 friends.”

5. Should I be worried about future data-using manipulation?

If Facebook is that hard to leave, just think about what will happen as virtual reality becomes more popular. The powerful algorithms that manipulate Facebook users are not nearly as effective as VR will be, with its full immersion, writes user-experience scholar Elissa Redmiles:

“A person who uses virtual reality is, often willingly, being controlled to far greater extents than were ever possible before. Everything a person sees and hears – and perhaps even feels or smells – is totally created by another person.”

And people are concerned now that they’re too trusting.

Jeff Inglis, Science + Technology Editor, The Conversation

This article was originally published on The Conversation. Read the original article.

Monday, February 5, 2018

Improve your internet safety: 4 essential reads

Staying safe online requires more than just a good password.
Jeff Inglis, The Conversation

On Feb. 6, technology companies, educators and others mark Safer Internet Day and urge people to improve their online safety. Many scholars and academic researchers around the U.S. are studying aspects of cybersecurity and have identified ways people can help themselves stay safe online. Here are a few highlights from their work.

1. Passwords are a weakness

With all the advice to make passwords long, complex, unique and never reused from site to site, remembering them all becomes a problem. But there’s help, writes Elon University computer scientist Megan Squire:

“The average internet user has 19 different passwords. … Software can help! The job of password management software is to take care of generating and remembering unique, hard-to-crack passwords for each website and application.”

That’s a good start.
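Squire’s description can be sketched in a few lines of Python. This is a toy illustration of what such software does internally – the names (`generate_password`, `vault`) are invented for this example, and a real password manager would also encrypt its vault behind a master password:

```python
import secrets
import string

# Alphabet for generated passwords: letters, digits and punctuation.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    """Generate a hard-to-guess password using a cryptographically secure RNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# A password manager keeps one entry per site, so no password is ever reused.
vault = {site: generate_password() for site in ("bank.example", "mail.example")}
```

The key idea is that `secrets` draws from the operating system’s secure random source, so each site gets a password no human has to remember.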

2. Use a physical key

To add another layer of protection, keep your most important accounts locked with an actual physical key, writes Penn State-Altoona information sciences and technology professor Jungwoo Ryoo:

“A new, even more secure method is gaining popularity, and it’s a lot like an old-fashioned metal key. It’s a computer chip in a small portable physical form that makes it easy to carry around. The chip itself contains a method of authenticating itself.”

Just don’t leave your keys on the table at home.
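The chip Ryoo describes authenticates itself by proving it holds a secret without ever revealing it, a pattern called challenge-response. Here is a simplified sketch using a shared-secret HMAC; real security keys (the FIDO U2F and WebAuthn standards) use public-key signatures instead, and the function names here are illustrative:

```python
import hashlib
import hmac
import secrets

# Stand-in for the secret baked into the key's chip; it never leaves the device.
device_secret = secrets.token_bytes(32)

def device_sign(challenge: bytes) -> bytes:
    """What the chip does: answer a challenge in a way only the secret holder can."""
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()

def server_verify(challenge: bytes, response: bytes, registered_secret: bytes) -> bool:
    """The website recomputes the expected answer and compares in constant time."""
    expected = hmac.new(registered_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# At login, the server sends a fresh random challenge and the key answers it.
challenge = secrets.token_bytes(16)
assert server_verify(challenge, device_sign(challenge), device_secret)
```

Because each challenge is random and used once, a phishing site that records one response learns nothing it can replay later.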

3. Protect your data in the cloud

Many people store documents, photos and even sensitive private information in cloud services like Google Drive, Dropbox and iCloud. That’s not always the safest practice because of where the data’s encryption keys are stored, explains computer scientist Haibin Zhang at University of Maryland, Baltimore County:

“Just like regular keys, if someone else has them, they might be stolen or misused without the data owner knowing. And some services might have flaws in their security practices that leave users’ data vulnerable.”

So check with your provider, and consider where to best store your most important data.

4. Don’t forget about the rest of the world

Sadly, in the digital age, nowhere is truly safe. Jeremy Straub from North Dakota State University explains how physical objects can be used to hijack your smartphone:

“Attackers may find it very attractive to embed malicious software in the physical world, just waiting for unsuspecting people to scan it with a smartphone or a more specialized device. Hidden in plain sight, the malicious software becomes a sort of ‘sleeper agent’ that can avoid detection until it reaches its target.”

It’s a reminder that using the internet more safely isn’t just a one-day effort.

Jeff Inglis, Science + Technology Editor, The Conversation

This article was originally published on The Conversation. Read the original article.

Thursday, December 21, 2017

Is there such a thing as online privacy? 7 essential reads

Who’s sharing your secrets? Antonio Guillem/Shutterstock
Jeff Inglis, The Conversation

Over the course of 2017, people in the U.S. and around the world became increasingly concerned about how their digital data are transmitted, stored and analyzed. As news broke that every Yahoo email account had been compromised, as well as the financial information of nearly every adult in the U.S., the true scale of how much data private companies have about people became clearer than ever.

This data, of course, brings those companies enormous profits, but comes with significant social and individual risks. Many scholars are researching aspects of this issue, both describing the problem in greater detail and identifying ways people can reclaim power over the data their lives and online activity generate. Here we spotlight seven examples from our 2017 archives.

1. The government doesn’t think much of user privacy

One major concern people have about digital privacy is how much access the police might have to their online information, like what websites people visit and what their emails and text messages say. Mobile phones can be particularly revealing, not only containing large amounts of private information, but also tracking users’ locations. As H.V. Jagadish at University of Michigan writes, the government doesn’t think smartphones’ locations are private information. The legal logic defies common sense:

“By carrying a cellphone – which communicates on its own with the phone company – you have effectively told the phone company where you are. Therefore, your location isn’t private, and the police can get that information from the cellphone company without a warrant, and without even telling you they’re tracking you.”

2. Neither do software designers

But mobile phone companies and the government aren’t the only people with access to data on people’s smartphones. Mobile apps of all kinds can monitor location, user activity and data stored on their users’ phones. As an international group of telecommunications security scholars found, “More than 70 percent of smartphone apps are reporting personal data to third-party tracking companies like Google Analytics, the Facebook Graph API or Crashlytics.”

Those companies can even merge information from different apps – one that tracks a user’s location and another that tracks, say, time spent playing a game or money spent through a digital wallet – to develop extremely detailed profiles of individual users.

3. People care, but struggle to find information

Despite how concerned people are, they can’t actually easily find out what’s being shared about them, when or to whom. Florian Schaub at the University of Michigan explains the conflicting purposes of apps’ and websites’ privacy policies:

“Companies use a privacy policy to demonstrate compliance with legal and regulatory notice requirements, and to limit liability. Regulators in turn use privacy policies to investigate and enforce compliance with regulations.”

That can leave consumers without the information they need to make informed choices.

4. Boosting comprehension

Another problem with privacy policies is that they’re incomprehensible. Anyone who does try to read and understand them will be quickly frustrated by the legalese and awkward language. Karuna Pande Joshi and Tim Finin from the University of Maryland, Baltimore County suggest that artificial intelligence could help:

“What if a computerized assistant could digest all that legal jargon in a few seconds and highlight key points? Perhaps a user could even tell the automated assistant to pay particular attention to certain issues, like when an email address is shared, or whether search engines can index personal posts.”

That would certainly make life simpler for users, but it would preserve a world in which privacy is not a given.

5. Programmers could help, too

Jean Yang at Carnegie Mellon University is working to change that assumption. At the moment, she explains, computer programmers have to keep track of users’ choices about privacy protections throughout all the various programs a site uses to operate. That makes errors both likely and hard to track down.

Yang’s approach, called “policy-agnostic programming,” builds sharing restrictions right into the software design process. That both forces developers to address privacy, and makes it easier for them to do so.

6. So could a new way of thinking about it

But it may not be enough for some software developers to choose programming tools that would protect their users’ data. Scott Shackelford from Indiana University discussed the movement to declare cybersecurity – including data privacy – a human right recognized under international law.

He predicts real progress will result from consumer demand:

“As people use online services more in their daily lives, their expectations of digital privacy and freedom of expression will lead them to demand better protections. Governments will respond by building on the foundations of existing international law, formally extending into cyberspace the human rights to privacy, freedom of expression and improved economic well-being.”

But governments can be slow to act, leaving people to protect themselves in the meantime.

7. The real basis of all privacy is strong encryption

The fundamental way to protect privacy is to make sure data is stored so securely that only the people authorized to access it are able to read it. Susan Landau at Tufts University explains the importance of individuals having access to strong encryption. And she observes police and the intelligence community are coming around to understanding this view:

“Increasingly, a number of former senior law enforcement and national security officials have come out strongly in support of end-to-end encryption and strong device protection …, which can protect against hacking and other data theft incidents.”

One day, perhaps, governments and businesses will have the same concerns about individuals’ privacy as people themselves do. Until then, strong encryption without special access for law enforcement or other authorities will remain the only reliable guardian of privacy.

Jeff Inglis, Science + Technology Editor, The Conversation

This article was originally published on The Conversation. Read the original article.

Wednesday, November 8, 2017

FBI tries to crack another smartphone: 5 essential reads

Who should be allowed inside? PopTika/
Jeff Inglis, The Conversation

Editor’s note: The following is a roundup of archival stories.

Federal investigators following up on the mass shooting at a Texas church on Nov. 5 have seized the alleged shooter’s smartphone – reportedly an iPhone – but report they are unable to unlock it to decrypt and read any data or messages stored on it.

The situation adds fuel to an ongoing dispute over whether, when and how police should be allowed to defeat encryption systems on suspects’ technological devices. Here are highlights of The Conversation’s coverage of that debate.

#1. Police have never had unfettered access to everything

The FBI and the U.S. Department of Justice have in recent years – especially since the 2015 mass shooting in San Bernardino, California – been increasing calls for what they term “exceptional access,” a way around encryption that police could use to gather information on crimes both future and past. Technology and privacy scholar Susan Landau, at Tufts University, argues that limits and challenges to investigative power are strengths of democracy, not weaknesses:

“[L]aw enforcement has always had to deal with blocks to obtaining evidence; the exclusionary rule, for example, means that evidence collected in violation of a citizen’s constitutional protections is often inadmissible in court.”

Further, she notes that almost any person or organization, including community groups, could be a potential target for hackers – and therefore should use strong encryption in their communications and data storage:

“This broad threat to fundamental parts of American society poses a serious danger to national security as well as individual privacy. Increasingly, a number of former senior law enforcement and national security officials have come out strongly in support of end-to-end encryption and strong device protection (much like the kind Apple has been developing), which can protect against hacking and other data theft incidents.”

#2. FBI has other ways to get this information

The idea of weakening encryption for everyone just so police can have an easier time is increasingly recognized as unworkable, writes Ben Buchanan, a fellow at Harvard’s Belfer Center for Science and International Affairs. Instead,

“The future of law enforcement and intelligence gathering efforts involving digital information is an emerging field that I and others who are exploring it sometimes call ‘lawful hacking.’ Rather than employing a skeleton key that grants immediate access to encrypted information, government agents will have to find other technical ways – often involving malicious code – and other legal frameworks.”

Indeed, he observes, when the FBI failed to force Apple to unlock the San Bernardino shooter’s iPhone,

“the FBI found another way. The bureau hired an outside firm that was able to exploit a vulnerability in the iPhone’s software and gain access. It wasn’t the first time the bureau had done such a thing.”

#3. It’s not just about iPhones

When the San Bernardino suspect’s iPhone was targeted by investigators, Android researchers William Enck and Adwait Nadkarni at North Carolina State University tried to crack a smartphone themselves. They found that one key to encryption’s effectiveness is proper setup:

“Overall, devices running the most recent versions of iOS and Android are comparably protected against offline attacks, when configured correctly by both the phone manufacturer and the end user. Older versions may be more vulnerable; one system could be cracked in less than 10 seconds. Additionally, configuration and software flaws by phone manufacturers may also compromise security of both Android and iOS devices.”

#4. What they’re not looking for

What are investigators hoping to find, anyway? It’s nearly a given that they aren’t looking for emails the suspect may have sent or received. As Georgia State University constitutional scholar Clark Cunningham explains, the government already believes it is allowed to read all of a person’s email, without the email owner ever knowing:

“[The] law allows the government to use a warrant to get electronic communications from the company providing the service – rather than the true owner of the email account, the person who uses it.

“And the government then usually asks that the warrant be ‘sealed,’ which means it won’t appear in public court records and will be hidden from you. Even worse, the law lets the government get what is called a ‘gag order,’ a court ruling preventing the company from telling you it got a warrant for your email.”

#5. The political stakes are high

With this new case, federal officials risk weakening public support for giving investigators special access to circumvent or evade encryption. After the controversy over the San Bernardino shooter’s phone, public demand for privacy and encryption climbed, wrote Carnegie Mellon professor Rahul Telang:

“Repeated stories on data breaches and privacy invasion, particularly from former NSA contractor Edward Snowden, appear to have heightened users’ attention to security and privacy. Those two attributes have become important enough that companies are finding it profitable to advertise and promote them.

“Apple, in particular, has highlighted the security of its products recently and reportedly is doubling down and plans to make it even harder for anyone to crack an iPhone.”

It seems unlikely this debate will ever truly go away: Police will continue to want easy access to all information that might help them prevent or solve crimes, and regular people will continue to want to protect their private information and communications from prying eyes, whether that’s criminals, hackers or, indeed, the government itself.

Jeff Inglis, Science + Technology Editor, The Conversation

This article was originally published on The Conversation. Read the original article.