Friday, July 13, 2018

Securing America's voting systems against spying and meddling: 6 essential reads

Can they be confident their votes will count? 4zevar/Shutterstock.com
Jeff Inglis, The Conversation

The federal indictments of 12 Russian government agents accuse them of hacking computers to spy on and meddle with the U.S. 2016 presidential election – including state and county election databases.

With the 2018 midterm congressional elections approaching – along with countless state and local elections – here are highlights of The Conversation’s coverage of voting system integrity, offering ideas, lessons and cautions for officials and voters alike.

1. Voting machines are old

After the vote-counting debacle of the 2000 presidential election, the federal government handed out massive amounts of money to the states to buy newer voting equipment that, everyone hoped, would avoid a repeat of the “hanging chad” mess. But almost two decades later, as Lawrence Norden and Christopher Famighetti at the Brennan Center for Justice at New York University explain, that one-time cash infusion has left a troubling legacy of aging voting machines:

“Imagine you went to your basement and dusted off the laptop or mobile phone that you used in 2002. What would happen if you tried to turn it on?”

That’s the machinery U.S. democracy depends on.

2. Not everyone can use the devices

Most voting machines don’t make accommodations for people with physical disabilities that affect how they vote. Juan Gilbert at the University of Florida quantified the problem during the 2012 presidential election:

“The turnout rate for voters with disabilities was 5.7 percent lower than for people without disabilities. If voters with disabilities had voted at the same rate as those without a disability, there would have been three million more voters weighing in on issues of local, state and national significance.”

To date, most efforts to solve the problems have involved using special voting equipment just for people with particular disabilities. That’s expensive and inefficient – and remember, separate is not equal. Gilbert has invented an open-source (read: inexpensive) voting machine system that can be used by people with many different disabilities, as well as people without disabilities.

With the system, which has been tested and approved in several states, voters can cast their ballots using a keyboard, a joystick, physical buttons, a touchscreen or even their voice.

3. Machines are not secure

In part because of their age, nearly every voting machine in use is vulnerable to various sorts of cyberattacks. For years, researchers have documented ways to tamper with vote counts, and yet few machines have had their cyberdefenses upgraded.

The fact that the election system is so widespread – with multiple machines in every municipality nationwide – also makes it weaker, writes Richard Forno at the University of Maryland, Baltimore County: There are simply more opportunities for an attacker to find a way in.

“Voter registration and administration systems operated by state and national governments are at risk too. Hacks here could affect voter rosters and citizen databases. Failing to secure these systems and records could result in fraudulent information in the voter database that may lead to improper (or illegal) voter registrations and potentially the casting of fraudulent votes.”

4. Even without an attack, major concerns

Even if any particular election isn’t actually attacked – or if nobody can prove it was – public trust in elections is vulnerable to sore losers taking advantage of the fact that cyberweaknesses exist. Just that prospect could destabilize the country, argues Herbert Lin of Stanford University:

“State and local election officials can and should provide for paper backup of voting this (and every) November. But in the end, debunking claims of election rigging, electronically or otherwise, amounts to trying to prove something didn’t happen – it can’t be done.”

5. The Russians are a factor

Deputy Attorney General Rod Rosenstein announces the indictments of 12 Russian government officials for hacking in connection with the 2016 U.S. presidential election. AP Photo/Evan Vucci

American University historian Eric Lohr explains the centuries of experience Russia has in meddling in other countries’ affairs, but notes that the U.S. isn’t innocent itself:

“In fact, the U.S. has a long record of putting its finger on the scales in elections in other countries.”

Neither country is unique: Countries have attempted to influence each other’s domestic politics throughout history.

6. Other problems aren’t technological

Other major threats to U.S. election integrity have to do with domestic policies governing how voting districts are designed, and who can vote.

Penn State technologist Sascha Meinrath discusses how partisan panels have “systematically drawn voting districts in ways that dilute the power of their opponent’s party” and “chosen to systematically disenfranchise poor, minority and overwhelmingly Democratic-leaning constituencies.”

There’s plenty of work to be done.

Editors’ note: This is an updated version of an article originally published Oct. 18, 2016.

Jeff Inglis, Science + Technology Editor, The Conversation

This article was originally published on The Conversation. Read the original article.

Friday, April 13, 2018

How Facebook could reinvent itself – 3 ideas from academia

What will he decide to do? AP Photo/Andrew Harnik
Jeff Inglis, The Conversation

Facebook CEO Mark Zuckerberg’s testimony in front of Congress, following disclosures of personal data being misused by third parties, has raised the question of whether and how the social media company should be regulated. But short of regulation, the company can take a number of steps to address privacy concerns and the ways its platform has been used to disseminate false information to influence elections.

Scholars of privacy and digital trust have written for The Conversation about concrete ideas – some of them radical breaks with its current business model – the company could use right away.

1. Act like a media company

Facebook plays an enormous role in U.S. society and in civil society around the world. The leader of a multiyear global study of how digital technologies spread and how much people trust them, Tufts University’s Bhaskar Chakravorti, recommends the company accept that it is a media company, and therefore

“take responsibility for the content it publishes and republishes. It can combine both human and artificial intelligence to sort through the content, labeling news, opinions, hearsay, research and other types of information in ways ordinary users can understand.”

2. Focus on truth

Facebook could then, perhaps, embrace the mission of journalism and watchdog organizations, and as American University scholars of public accountability and digital media systems Barbara Romzek and Aram Sinnreich suggest,

“start competing to provide the most accurate news instead of the most click-worthy, and the most trustworthy sources rather than the most sensational.”

3. Cut users in on the deal

If Facebook wants to keep making money from its users’ data, Indiana University technology law and ethics scholar Scott Shackelford suggests

“flip[ping] the relationship and having Facebook pay people for their data, [which] could be [worth] as much as US$1,000 a year for the average social media user.”

The multi-billion-dollar company has an opportunity to find a new path before the public and lawmakers weigh in.

Jeff Inglis, Science + Technology Editor, The Conversation

This article was originally published on The Conversation. Read the original article.

Thursday, April 5, 2018

Understanding Facebook's data crisis: 5 essential reads

What will Mark Zuckerberg say to Congress? AP Photo/Noah Berger
Jeff Inglis, The Conversation

Most of Facebook’s 2 billion users have likely had their data collected by third parties, the company revealed April 4. That follows reports that 87 million users’ data were used to target online political advertising in the run-up to the 2016 U.S. presidential election.

As company CEO Mark Zuckerberg prepares to testify before Congress, Facebook is beginning to respond to international public and government criticism of its data-harvesting and data-sharing policies. Many scholars around the U.S. are discussing what happened, what’s at stake, how to fix it, and what could come next. Here we spotlight five examples from our recent coverage.

1. What actually happened?

Much of the concern arose from reporting that Cambridge Analytica’s analysis relied on profiles of people’s personalities, building on work by Cambridge University researcher Aleksandr Kogan.

Media scholar Matthew Hindman actually asked Kogan what he had done. As Hindman explained, “Information on users’ personalities or ‘psychographics’ was just a modest part of how the model targeted citizens. It was not a personality model strictly speaking, but rather one that boiled down demographics, social influences, personality and everything else into a big correlated lump.”

2. What were the effects of what happened?

On a personal level, this level of data collection – particularly for the 50 million Facebook users who had never consented to having their data collected by Kogan or Cambridge Analytica – was distressing. Ethical hacker Timothy Summers noted that democracy itself is at stake:

“What used to be a public exchange of information and democratic dialogue is now a customized whisper campaign: Groups both ethical and malicious can divide Americans, whispering into the ear of each and every user, nudging them based on their fears and encouraging them to whisper to others who share those fears.”

3. What should I do in response?

The backlash has been significant, with most Facebook users expressing some level of concern over what might be done with personal data Facebook has on them. As sociologists Denise Anthony and Luke Stark explain, people shouldn’t trust Facebook or other companies that collect massive amounts of user data: “Neither regulations nor third-party institutions currently exist to ensure that social media companies are trustworthy.”

4. What if I want to quit Facebook?

Many people have thought about, and talked about, deleting their Facebook accounts. But it’s harder than most people expect to actually do so. A communications research group at the University of Pennsylvania discussed all the psychological boosts that keep people hooked on social media, including Facebook’s own overt protestations:

“When one of us tried deactivating her account, she was told how huge the loss would be – profile disabled, all the memories evaporating, losing touch with over 500 friends.”

5. Should I be worried about future data-using manipulation?

If Facebook is that hard to leave, just think about what will happen as virtual reality becomes more popular. The powerful algorithms that manipulate Facebook users are not nearly as effective as VR will be, with its full immersion, writes user-experience scholar Elissa Redmiles:

“A person who uses virtual reality is, often willingly, being controlled to far greater extents than were ever possible before. Everything a person sees and hears – and perhaps even feels or smells – is totally created by another person.”

And people are concerned now that they’re too trusting.

Jeff Inglis, Science + Technology Editor, The Conversation

This article was originally published on The Conversation. Read the original article.

Monday, February 5, 2018

Improve your internet safety: 4 essential reads

Staying safe online requires more than just a good password. Rawpixel.com/Shutterstock.com
Jeff Inglis, The Conversation

On Feb. 6, technology companies, educators and others mark Safer Internet Day and urge people to improve their online safety. Many scholars and academic researchers around the U.S. are studying aspects of cybersecurity and have identified ways people can help themselves stay safe online. Here are a few highlights from their work.

1. Passwords are a weakness

With all the advice to make passwords long, complex and unique – and not reused from site to site – remembering passwords becomes a problem, but there’s help, writes Elon University computer scientist Megan Squire:

“The average internet user has 19 different passwords. … Software can help! The job of password management software is to take care of generating and remembering unique, hard-to-crack passwords for each website and application.”

That’s a good start.
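The core job Squire describes, generating a unique, hard-to-crack password for each site and remembering it for you, can be sketched in a few lines. This is a toy illustration of the concept, not any real password manager’s code; the plain-dict “vault” and site names are invented for the example, and a real manager would encrypt its vault at rest.

```python
import secrets
import string

# Characters a generated password may draw from.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    """Return a cryptographically random password of the given length."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Toy vault: site -> password. A real manager encrypts this at rest
# behind one master password the user actually has to remember.
vault = {}
for site in ("example.com", "mail.example.org"):
    vault[site] = generate_password()

# Every site gets its own random password, so a breach at one site
# doesn't unlock the others.
assert vault["example.com"] != vault["mail.example.org"]
```

The `secrets` module, rather than `random`, matters here: it draws from the operating system’s cryptographic randomness source, which is what makes the passwords hard to guess.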

2. Use a physical key

To add another layer of protection, keep your most important accounts locked with an actual physical key, writes Penn State-Altoona information sciences and technology professor Jungwoo Ryoo:

“A new, even more secure method is gaining popularity, and it’s a lot like an old-fashioned metal key. It’s a computer chip in a small portable physical form that makes it easy to carry around. The chip itself contains a method of authenticating itself.”

Just don’t leave your keys on the table at home.
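The “chip that authenticates itself” works by challenge-response: the service sends a fresh random challenge, and only a device holding the right secret can produce the correct answer. Below is a simplified sketch of that flow using a shared secret and HMAC; real hardware keys such as FIDO U2F devices use public-key signatures instead, so this is an illustration of the protocol shape, not of any actual product.

```python
import hashlib
import hmac
import secrets

class SecurityKey:
    """Toy stand-in for the chip: the secret never leaves it."""
    def __init__(self, secret: bytes):
        self._secret = secret

    def sign(self, challenge: bytes) -> bytes:
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

class Server:
    """The website, holding the secret registered when the key was enrolled."""
    def __init__(self, secret: bytes):
        self._secret = secret

    def make_challenge(self) -> bytes:
        # A fresh nonce per login attempt defeats replayed responses.
        return secrets.token_bytes(32)

    def verify(self, challenge: bytes, response: bytes) -> bool:
        expected = hmac.new(self._secret, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

secret = secrets.token_bytes(32)
key, server = SecurityKey(secret), Server(secret)

challenge = server.make_challenge()
print(server.verify(challenge, key.sign(challenge)))  # True
```

Because each challenge is used once, intercepting a response does an attacker no good on the next login, which is the property a phished or reused password lacks.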

3. Protect your data in the cloud

Many people store documents, photos and even sensitive private information in cloud services like Google Drive, Dropbox and iCloud. That’s not always the safest practice because of where the data’s encryption keys are stored, explains computer scientist Haibin Zhang at University of Maryland, Baltimore County:

“Just like regular keys, if someone else has them, they might be stolen or misused without the data owner knowing. And some services might have flaws in their security practices that leave users’ data vulnerable.”

So check with your provider, and consider where to best store your most important data.
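Zhang’s point about key custody can be made concrete: if you encrypt a file yourself before uploading it, the cloud provider only ever stores unreadable bytes. The sketch below uses a deliberately simple SHA-256-keystream XOR cipher to show the idea; it is a teaching toy, not real cryptography, and in practice you would use a vetted authenticated cipher such as AES-GCM from an established library.

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom byte stream from the key and a per-file nonce."""
    out = b""
    counter = 0
    while len(out) < length:
        block = key + nonce + counter.to_bytes(8, "big")
        out += hashlib.sha256(block).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # XOR the data against the keystream; XOR is its own inverse,
    # so the same function decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))

decrypt = encrypt

key = secrets.token_bytes(32)    # kept by the user, never uploaded
nonce = secrets.token_bytes(16)  # stored alongside the file; not secret
document = b"tax records 2017"

ciphertext = encrypt(key, nonce, document)   # this is all the cloud sees
assert ciphertext != document
assert decrypt(key, nonce, ciphertext) == document
```

The design choice in question is simply where `key` lives: on the user’s device, the provider cannot read the file; on the provider’s servers, anyone who compromises the provider can.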

4. Don’t forget about the rest of the world

Sadly, in the digital age, nowhere is truly safe. Jeremy Straub from North Dakota State University explains how physical objects can be used to hijack your smartphone:

“Attackers may find it very attractive to embed malicious software in the physical world, just waiting for unsuspecting people to scan it with a smartphone or a more specialized device. Hidden in plain sight, the malicious software becomes a sort of ‘sleeper agent’ that can avoid detection until it reaches its target.”

It’s a reminder that using the internet more safely isn’t just a one-day effort.

Jeff Inglis, Science + Technology Editor, The Conversation

This article was originally published on The Conversation. Read the original article.

Thursday, December 21, 2017

Is there such a thing as online privacy? 7 essential reads

Who’s sharing your secrets? Antonio Guillem/Shutterstock
Jeff Inglis, The Conversation

Over the course of 2017, people in the U.S. and around the world became increasingly concerned about how their digital data are transmitted, stored and analyzed. As news broke that every Yahoo email account had been compromised, as well as the financial information of nearly every adult in the U.S., the true scale of how much data private companies have about people became clearer than ever.

This, of course, brings them enormous profits, but comes with significant social and individual risks. Many scholars are researching aspects of this issue, both describing the problem in greater detail and identifying ways people can reclaim power over the data their lives and online activity generate. Here we spotlight seven examples from our 2017 archives.

1. The government doesn’t think much of user privacy

One major concern people have about digital privacy is how much access the police might have to their online information, like what websites people visit and what their emails and text messages say. Mobile phones can be particularly revealing, not only containing large amounts of private information, but also tracking users’ locations. As H.V. Jagadish at University of Michigan writes, the government doesn’t think smartphones’ locations are private information. The legal logic defies common sense:

“By carrying a cellphone – which communicates on its own with the phone company – you have effectively told the phone company where you are. Therefore, your location isn’t private, and the police can get that information from the cellphone company without a warrant, and without even telling you they’re tracking you.”

2. Neither do software designers

But mobile phone companies and the government aren’t the only ones with access to data on people’s smartphones. Mobile apps of all kinds can monitor location, user activity and data stored on their users’ phones. As an international group of telecommunications security scholars found, “More than 70 percent of smartphone apps are reporting personal data to third-party tracking companies like Google Analytics, the Facebook Graph API or Crashlytics.”

Those companies can even merge information from different apps – one that tracks a user’s location and another that tracks, say, time spent playing a game or money spent through a digital wallet – to develop extremely detailed profiles of individual users.

3. People care, but struggle to find information

Despite how concerned people are, they can’t actually easily find out what’s being shared about them, when or to whom. Florian Schaub at the University of Michigan explains the conflicting purposes of apps’ and websites’ privacy policies:

“Companies use a privacy policy to demonstrate compliance with legal and regulatory notice requirements, and to limit liability. Regulators in turn use privacy policies to investigate and enforce compliance with regulations.”

That can leave consumers without the information they need to make informed choices.

4. Boosting comprehension

Another problem with privacy policies is that they’re incomprehensible. Anyone who does try to read and understand them will be quickly frustrated by the legalese and awkward language. Karuna Pande Joshi and Tim Finin from the University of Maryland, Baltimore County suggest that artificial intelligence could help:

“What if a computerized assistant could digest all that legal jargon in a few seconds and highlight key points? Perhaps a user could even tell the automated assistant to pay particular attention to certain issues, like when an email address is shared, or whether search engines can index personal posts.”

That would certainly make life simpler for users, but it would preserve a world in which privacy is not a given.

5. Programmers could help, too

Jean Yang at Carnegie Mellon University is working to change that assumption. At the moment, she explains, computer programmers have to keep track of users’ choices about privacy protections throughout all the various programs a site uses to operate. That makes errors both likely and hard to track down.

Yang’s approach, called “policy-agnostic programming,” builds sharing restrictions right into the software design process. That both forces developers to address privacy, and makes it easier for them to do so.
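The contrast Yang draws can be sketched in miniature: instead of scattering permission checks across every page and program that displays a value, the sharing policy is attached to the data itself, so every read path enforces it automatically. This is a loose illustration of the idea only, not Yang’s actual system; the class, field names and viewers below are invented for the example.

```python
from typing import Any, Callable

class PolicyProtected:
    """A value that carries its own sharing policy."""
    def __init__(self, value: Any, default: Any, policy: Callable[[str], bool]):
        self._value = value      # the sensitive value
        self._default = default  # what unauthorized viewers see instead
        self._policy = policy    # who may see the real value

    def reveal_to(self, viewer: str) -> Any:
        # The single enforcement point every display path must go through.
        return self._value if self._policy(viewer) else self._default

# The policy is declared once, alongside the data...
location = PolicyProtected(
    value="Pittsburgh, PA",
    default="(location hidden)",
    policy=lambda viewer: viewer in {"alice", "bob"},  # the user's friends
)

# ...so no individual page can forget to check it.
print(location.reveal_to("bob"))       # Pittsburgh, PA
print(location.reveal_to("stranger"))  # (location hidden)
```

Centralizing the check is what makes errors easier to find: there is one gate to audit rather than a check to hunt for in every program that touches the value.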

6. So could a new way of thinking about it

But it may not be enough for some software developers to choose programming tools that would protect their users’ data. Scott Shackelford from Indiana University discussed the movement to declare cybersecurity – including data privacy – a human right recognized under international law.

He predicts real progress will result from consumer demand:

“As people use online services more in their daily lives, their expectations of digital privacy and freedom of expression will lead them to demand better protections. Governments will respond by building on the foundations of existing international law, formally extending into cyberspace the human rights to privacy, freedom of expression and improved economic well-being.”

But governments can be slow to act, leaving people to protect themselves in the meantime.

7. The real basis of all privacy is strong encryption

The fundamental way to protect privacy is to make sure data is stored so securely that only the people authorized to access it are able to read it. Susan Landau at Tufts University explains the importance of individuals having access to strong encryption. And she observes that police and the intelligence community are coming around to this view:

“Increasingly, a number of former senior law enforcement and national security officials have come out strongly in support of end-to-end encryption and strong device protection …, which can protect against hacking and other data theft incidents.”

One day, perhaps, governments and businesses will have the same concerns about individuals’ privacy as people themselves do. Until then, strong encryption without special access for law enforcement or other authorities will remain the only reliable guardian of privacy.

Jeff Inglis, Science + Technology Editor, The Conversation

This article was originally published on The Conversation. Read the original article.