If you have a smartphone, it probably is a significant part of your life, storing appointments and destinations as well as being central to your communications with friends, loved ones and co-workers. Research and investigative reporting continue to reveal the degree to which your smartphone is aware of what you’re up to and where you are – and how much of that information is shared with companies that want to track your every move, hoping to better target you with advertising.
Several scholars at U.S. universities have written for The Conversation about how these technologies work, and the privacy problems they raise.
1. Most apps give away personal data
A study based at the University of California, Berkeley found that 7 in 10 apps shared personal data, like location and what apps a person uses, with companies that exist to track users online and in the physical world, digital privacy scholars Narseo Vallina-Rodriguez and Srikanth Sundaresan write. Fifteen percent of the apps the study examined sent that data to five or more tracking websites.
In addition, 1 in 4 trackers received “at least one unique device identifier, such as the phone number … [which] are crucial for online tracking services because they can connect different types of personal data provided by different apps to a single person or device.”
2. Turning off tracking doesn’t always work
Even people who tell their phones and apps not to track their activity are vulnerable. Northeastern University computer scientist Guevara Noubir found that “a phone can listen in on a user’s finger typing to discover a secret password – and […] simply carrying a phone in your pocket can tell data companies where you are and where you’re going.”
3. Your profile is worth money
All of this information on who you are, where you are and what you’re doing gets assembled into enormously detailed digital profiles, which get turned into money, Wayne State University law professor Jonathan Weinberg explains: “By combining online and offline data, Facebook can charge premium rates to an advertiser who wants to target, say, people in Idaho who are in long-distance relationships and are thinking about buying a minivan. (There are 3,100 of them in Facebook’s database.)”
4. Rules and laws don’t exist – in the US
Right now in the U.S., there’s not much regulatory oversight making sure digital apps and services protect people’s privacy and the privacy of their data. “Federal laws protect medical information, financial data and education-related records,” writes University of Michigan privacy scholar Florian Schaub, before noting that “Online services and apps are barely regulated, though they must protect children, limit unsolicited email marketing and tell the public what they do with data they collect.”
European rules are more comprehensive, but the problem remains that people’s digital companions collect and share large amounts of information about their real-world lives.
The Northeast Blackout of 2003 wasn’t caused by an attacker, but many of the recommendations of the final incident report focused on cybersecurity. Fifteen years later, the stakes of a long-term outage are even higher, as American business and society are even more dependent on electronic devices. Scholars around the country are studying the problem of protecting the grid from cyberattacks and software flaws. Several of them have written about their work for The Conversation:
1. Attacks could be hard to detect
Though the software error that amplified the blackout was not the result of a cyberattack, power grid scholar Michael McElfresh at Santa Clara University explains that a clever attacker could disguise the intrusion “as something as simple as a large number of apparent customers lowering their thermostat settings in a short period on a peak hot day.”
2. Grid targets are tempting
Iowa State University’s Manimaran Govindarasu and Washington State University’s Adam Hahn, both grid security scholars, noted that the grid is an attractive target for hackers, who could shut off power to large numbers of people: “It happened in Ukraine in 2015 and again in 2016, and it could happen here in the U.S., too.”
3. What to do now?
In another article, Govindarasu and Hahn went on to describe the level to which “Russians had penetrated the computers of multiple U.S. electric utilities and were able to gain … privileges that were sufficient to cause power outages.”
The response, they wrote, involves extending federal grid-security regulations to “all utility companies – even the smallest,” having “all companies that are part of the grid participate in coordinated grid exercises to improve cybersecurity preparedness and share best practices” and – crucially – insisting that power utilities “ensure the hardware and software they use are from trustworthy sources and have not been tampered with or modified to allow unauthorized users in.”
Those steps won’t prevent software bugs, but they could reduce the likelihood of attackers exploiting computer systems’ vulnerabilities to shut off the lights.
4. Restructuring the grid itself
To protect against all types of threats to the grid – including natural and human-caused ones – engineering professor Joshua M. Pearce at Michigan Technological University suggests generating energy at many locations around the country, rather than in centralized power plants. He reports that his research has found that connecting those smaller power producers together with nearby electricity users would make supply more reliable, less vulnerable and cheaper. In fact, he found the U.S. military “could generate all of its electricity from distributed renewable sources by 2025 using … microgrids.”
At least that way a small problem with the grid would be less likely to spread and become a major problem for tens of millions of people, like the Northeast Blackout of 2003 was.
Editor’s note: This story is a roundup of articles from The Conversation’s archives.
Securing the electrical grid, upon which is built almost the entirety of modern society, is a monumental challenge. Several experts have explained aspects of the task, potential solutions and the risks of failure for The Conversation:
1. What’s at stake?
A major cyberattack on the electricity grid could send surges through its wires, much as solar storms have done. The scale of disruption would depend, in part, on how much damage the attackers wanted to do.
Those events, explains Rochester Institute of Technology space weather scholar Roger Dube, cause power surges, damaging transmission equipment. One solar storm in March 1989, he writes, left “6 million people without power for nine hours … [and] destroyed a large transformer at a New Jersey nuclear plant. Even though a spare transformer was nearby, it still took six months to remove and replace the melted unit.”
More serious attacks, like larger solar storms, could knock out manufacturing plants that build replacement electrical equipment, gas pumps to fuel trucks to deliver the material and even “the machinery that extracts oil from the ground and refines it into usable fuel. … Even systems that seem non-technological, like public water supplies, would shut down: Their pumps and purification systems need electricity.”
In the most severe cases, with fuel-starved transportation stalled and other basic infrastructure not working, “[p]eople in developed countries would find themselves with no running water, no sewage systems, no refrigerated food, and no way to get any food or other necessities transported from far away. People in places with more basic economies would also be without needed supplies from afar.”
2. It wouldn’t be the first time
Russia has penetrated other countries’ electricity grids in the past, and used its access to do real damage. In the middle of winter 2015, for instance, a Russian cyberattack shut off power to Ukraine’s capital.
Power grid scholar Michael McElfresh at Santa Clara University discusses what happened to cause hundreds of thousands of Ukrainians to lose power for several hours, and notes that U.S. utilities use software similar to their Ukrainian counterparts’ – and therefore share the same vulnerabilities.
3. Security work is ongoing
These threats aren’t new, write grid security experts Manimaran Govindarasu from Iowa State and Adam Hahn from Washington State University. There are a lot of people planning defenses, including the U.S. government. And the “North American Electric Reliability Corporation, which oversees the grid in the U.S. and Canada, has rules … for how electric companies must protect the power grid both physically and electronically.” The group holds training exercises in which utility companies practice responding to attacks.
4. There are more vulnerabilities now
Grid researcher McElfresh also explains that the grid is increasingly complex, with thousands of companies responsible for different aspects of generation, transmission and delivery to customers. In addition, new technologies have led companies to incorporate more sensors and other “smart grid” technologies. He describes how that “has created many more access points for penetrating into the grid computer systems.”
5. It’s time to ramp up efforts
The depth of access and potential control over electrical systems means there has never been a better time than right now to step up grid security, writes public-utility researcher Theodore Kury at the University of Florida. He notes that many of those efforts may also help protect the grid from storm damage and other disasters.
6. A possible solution could be smaller grids
One protective effort was identified by electrical engineer Joshua Pearce at Michigan Technological University, who has studied ways to protect electricity supplies to U.S. military bases both within the country and abroad. He found that the Pentagon has already begun testing systems that combine solar-panel arrays with large-capacity batteries. “The equipment is connected together – and to buildings it serves – in what is called a ‘microgrid,’ which is normally connected to the regular commercial power grid but can be disconnected and become self-sustaining when disaster strikes.”
He found that microgrid systems could make military bases more resilient in the face of cyberattacks, criminals or terrorists and natural disasters – and even help the military “generate all of its electricity from distributed renewable sources by 2025 … which would provide energy reliability and decrease costs, [and] largely eliminate a major group of very real threats to national security.”
Editor’s note: This story is a roundup of articles from The Conversation’s archives.
With the 2018 midterm congressional elections approaching – along with countless state and local elections – here are highlights of The Conversation’s coverage of voting system integrity, offering ideas, lessons and cautions for officials and voters alike.
1. Voting machines are old
After the vote-counting debacle of the 2000 presidential election, the federal government handed out massive amounts of money to the states to buy newer voting equipment that, everyone hoped, would avoid a repeat of the “hanging chad” mess. But almost two decades later, as Lawrence Norden and Christopher Famighetti at the Brennan Center for Justice at New York University explain, that one-time cash infusion has left a troubling legacy of aging voting machines:
“Imagine you went to your basement and dusted off the laptop or mobile phone that you used in 2002. What would happen if you tried to turn it on?”
That’s the machinery U.S. democracy depends on.
2. Not everyone can use the devices
Most voting machines don’t make accommodations for people with physical disabilities that affect how they vote. Juan Gilbert at the University of Florida quantified the problem during the 2012 presidential election:
“The turnout rate for voters with disabilities was 5.7 percent lower than for people without disabilities. If voters with disabilities had voted at the same rate as those without a disability, there would have been three million more voters weighing in on issues of local, state and national significance.”
To date, most efforts to solve the problems have involved using special voting equipment just for people with particular disabilities. That’s expensive and inefficient – and remember, separate is not equal. Gilbert has invented an open-source (read: inexpensive) voting machine system that can be used by people with many different disabilities, as well as people without disabilities.
With the system, which has been tested and approved in several states, voters can cast their ballots using a keyboard, a joystick, physical buttons, a touchscreen or even their voice.
3. Machines are not secure
In part because of their age, nearly every voting machine in use is vulnerable to various sorts of cyberattacks. For years, researchers have documented ways to tamper with vote counts, and yet few machines have had their cyberdefenses upgraded.
The fact that the election system is so widespread – with multiple machines in every municipality nationwide – also makes it weaker, writes Richard Forno at the University of Maryland, Baltimore County: There are simply more opportunities for an attacker to find a way in.
“Voter registration and administration systems operated by state and national governments are at risk too. Hacks here could affect voter rosters and citizen databases. Failing to secure these systems and records could result in fraudulent information in the voter database that may lead to improper (or illegal) voter registrations and potentially the casting of fraudulent votes.”
4. Even without an attack, major concerns
Even if any particular election isn’t actually attacked – or if nobody can prove it was – public trust in elections is vulnerable to sore losers taking advantage of the fact that cyberweaknesses exist. Just that prospect could destabilize the country, argues Herbert Lin of Stanford University:
“State and local election officials can and should provide for paper backup of voting this (and every) November. But in the end, debunking claims of election rigging, electronically or otherwise, amounts to trying to prove something didn’t happen – it can’t be done.”
5. The Russians are a factor
Photo: Deputy Attorney General Rod Rosenstein announces the indictments of 12 Russian government officials for hacking in connection with the 2016 U.S. presidential election. (AP Photo/Evan Vucci)
American University historian Eric Lohr explains the centuries of experience Russia has in meddling in other countries’ affairs, but notes that the U.S. isn’t innocent itself:
“In fact, the U.S. has a long record of putting its finger on the scales in elections in other countries.”
Neither country is unique: Countries have attempted to influence each other’s domestic politics throughout history.
6. Other problems aren’t technological
Other major threats to U.S. election integrity have to do with domestic policies governing how voting districts are designed, and who can vote.
Penn State technologist Sascha Meinrath discusses how partisan panels have “systematically drawn voting districts in ways that dilute the power of their opponent’s party” and “chosen to systematically disenfranchise poor, minority and overwhelmingly Democratic-leaning constituencies.”
There’s plenty of work to be done.
Editors’ note: This is an updated version of an article originally published Oct. 18, 2016.
Facebook CEO Mark Zuckerberg’s testimony in front of Congress, following disclosures of personal data being misused by third parties, has raised the question of whether, and how, the social media company should be regulated. But short of regulation, the company can take a number of steps to address privacy concerns and the ways its platform has been used to disseminate false information to influence elections.
Scholars of privacy and digital trust have written for The Conversation about concrete ideas – some of them radical breaks with its current business model – the company could use right away.
1. Act like a media company
Facebook plays an enormous role in U.S. society and in civil society around the world. The leader of a multiyear global study of how digital technologies spread and how much people trust them, Tufts University’s Bhaskar Chakravorti, recommends the company accept that it is a media company, and therefore
“take responsibility for the content it publishes and republishes. It can combine both human and artificial intelligence to sort through the content, labeling news, opinions, hearsay, research and other types of information in ways ordinary users can understand.”
2. Focus on truth
Facebook could then, perhaps, embrace the mission of journalism and watchdog organizations, as American University scholars of public accountability and digital media systems Barbara Romzek and Aram Sinnreich have suggested.
As company CEO Mark Zuckerberg prepares to testify before Congress, Facebook is beginning to respond to international public and government criticism of its data-harvesting and data-sharing policies. Many scholars around the U.S. are discussing what happened, what’s at stake, how to fix it, and what could come next. Here we spotlight five examples from our recent coverage.
1. What actually happened?
A lot of the concern has arisen from reporting that indicated Cambridge Analytica’s analysis profiled people’s personalities, drawing on work by Cambridge University researcher Aleksandr Kogan.
Media scholar Matthew Hindman actually asked Kogan what he had done. As Hindman explained, “Information on users’ personalities or ‘psychographics’ was just a modest part of how the model targeted citizens. It was not a personality model strictly speaking, but rather one that boiled down demographics, social influences, personality and everything else into a big correlated lump.”
2. What were the effects of what happened?
On a personal level, this degree of data collection – particularly for the 50 million Facebook users who never consented to having their data collected by Kogan or Cambridge Analytica – was distressing. But ethical hacker Timothy Summers noted that democracy itself is at stake:
“What used to be a public exchange of information and democratic dialogue is now a customized whisper campaign: Groups both ethical and malicious can divide Americans, whispering into the ear of each and every user, nudging them based on their fears and encouraging them to whisper to others who share those fears.”
3. What should I do in response?
The backlash has been significant, with most Facebook users expressing some level of concern over what might be done with personal data Facebook has on them. As sociologists Denise Anthony and Luke Stark explain, people shouldn’t trust Facebook or other companies that collect massive amounts of user data: “Neither regulations nor third-party institutions currently exist to ensure that social media companies are trustworthy.”
4. What if I want to quit Facebook?
Many people have thought about, and talked about, deleting their Facebook accounts. But it’s harder than most people expect to actually do so. A communications research group at the University of Pennsylvania discussed all the psychological boosts that keep people hooked on social media, including Facebook’s own overt protestations:
“When one of us tried deactivating her account, she was told how huge the loss would be – profile disabled, all the memories evaporating, losing touch with over 500 friends.”
5. Should I be worried about future data-using manipulation?
If Facebook is that hard to leave, just think about what will happen as virtual reality becomes more popular. The powerful algorithms that manipulate Facebook users are not nearly as effective as VR will be, with its full immersion, writes user-experience scholar Elissa Redmiles:
“A person who uses virtual reality is, often willingly, being controlled to far greater extents than were ever possible before. Everything a person sees and hears – and perhaps even feels or smells – is totally created by another person.”
And people are concerned now that they’re too trusting.
On Feb. 6, technology companies, educators and others mark Safer Internet Day and urge people to improve their online safety. Many scholars and academic researchers around the U.S. are studying aspects of cybersecurity and have identified ways people can help themselves stay safe online. Here are a few highlights from their work.
1. Passwords are a weakness
With all the advice to make passwords long, complex and unique – and not reused from site to site – remembering passwords becomes a problem, but there’s help, writes Elon University computer scientist Megan Squire:
“The average internet user has 19 different passwords. … Software can help! The job of password management software is to take care of generating and remembering unique, hard-to-crack passwords for each website and application.”
That’s a good start.
2. Use a physical key
To add another layer of protection, keep your most important accounts locked with an actual physical key, writes Penn State-Altoona information sciences and technology professor Jungwoo Ryoo:
“A new, even more secure method is gaining popularity, and it’s a lot like an old-fashioned metal key. It’s a computer chip in a small portable physical form that makes it easy to carry around. The chip itself contains a method of authenticating itself.”
Just don’t leave your keys on the table at home.
3. Protect your data in the cloud
Many people store documents, photos and even sensitive private information in cloud services like Google Drive, Dropbox and iCloud. That’s not always the safest practice because of where the data’s encryption keys are stored, explains computer scientist Haibin Zhang at the University of Maryland, Baltimore County:
“Just like regular keys, if someone else has them, they might be stolen or misused without the data owner knowing. And some services might have flaws in their security practices that leave users’ data vulnerable.”
So check with your provider, and consider where to best store your most important data.
4. Watch what you scan
Malicious code can even lurk in physical objects, waiting for a phone to read them:
“Attackers may find it very attractive to embed malicious software in the physical world, just waiting for unsuspecting people to scan it with a smartphone or a more specialized device. Hidden in plain sight, the malicious software becomes a sort of ‘sleeper agent’ that can avoid detection until it reaches its target.”
It’s a reminder that using the internet more safely isn’t just a one-day effort.
Over the course of 2017, people in the U.S. and around the world became increasingly concerned about how their digital data are transmitted, stored and analyzed. As news broke that every Yahoo email account had been compromised, as well as the financial information of nearly every adult in the U.S., the true scale of how much data private companies have about people became clearer than ever.
This, of course, brings them enormous profits, but comes with significant social and individual risks. Many scholars are researching aspects of this issue, both describing the problem in greater detail and identifying ways people can reclaim power over the data their lives and online activity generate. Here we spotlight seven examples from our 2017 archives.
1. The government doesn’t think much of user privacy
One major concern people have about digital privacy is how much access the police might have to their online information, like what websites people visit and what their emails and text messages say. Mobile phones can be particularly revealing, not only containing large amounts of private information, but also tracking users’ locations. As H.V. Jagadish at the University of Michigan writes, the government doesn’t think smartphones’ locations are private information. The legal logic defies common sense:
“By carrying a cellphone – which communicates on its own with the phone company – you have effectively told the phone company where you are. Therefore, your location isn’t private, and the police can get that information from the cellphone company without a warrant, and without even telling you they’re tracking you.”
2. Neither do software designers
But mobile phone companies and the government aren’t the only ones with access to data on people’s smartphones. Mobile apps of all kinds can monitor location, user activity and data stored on their users’ phones. As an international group of telecommunications security scholars found, “More than 70 percent of smartphone apps are reporting personal data to third-party tracking companies like Google Analytics, the Facebook Graph API or Crashlytics.”
Those companies can even merge information from different apps – one that tracks a user’s location and another that tracks, say, time spent playing a game or money spent through a digital wallet – to develop extremely detailed profiles of individual users.
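As a toy illustration of that merging, here is a hedged sketch in Python: records reported by two hypothetical apps are joined on a shared device identifier. All field names and values are invented for illustration.

```python
# Hypothetical records reported by two different apps to the same tracker.
location_app = [{"device_id": "A1", "lat": 42.28, "lon": -83.74}]
wallet_app = [{"device_id": "A1", "spent_usd": 19.99}]

# Joining on the shared device identifier merges separate data streams
# into a single profile per device.
profiles = {}
for record in location_app + wallet_app:
    profiles.setdefault(record["device_id"], {}).update(record)

print(profiles)
# {'A1': {'device_id': 'A1', 'lat': 42.28, 'lon': -83.74, 'spent_usd': 19.99}}
```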
3. People care, but struggle to find information
Despite how concerned people are, they can’t easily find out what is being shared about them, when, or with whom. Florian Schaub at the University of Michigan explains the conflicting purposes of apps’ and websites’ privacy policies:
"Companies use a privacy policy to demonstrate compliance with legal and regulatory notice requirements, and to limit liability. Regulators in turn use privacy policies to investigate and enforce compliance with regulations.”
That can leave consumers without the information they need to make informed choices.
4. Boosting comprehension
Another problem with privacy policies is that they’re incomprehensible. Anyone who does try to read and understand them will be quickly frustrated by the legalese and awkward language. Karuna Pande Joshi and Tim Finin from the University of Maryland, Baltimore County suggest that artificial intelligence could help:
“What if a computerized assistant could digest all that legal jargon in a few seconds and highlight key points? Perhaps a user could even tell the automated assistant to pay particular attention to certain issues, like when an email address is shared, or whether search engines can index personal posts.”
That would certainly make life simpler for users, but it would preserve a world in which privacy is not a given.
5. Programmers could help, too
Jean Yang at Carnegie Mellon University is working to change that assumption. At the moment, she explains, computer programmers have to keep track of users’ choices about privacy protections throughout all the various programs a site uses to operate. That makes errors both likely and hard to track down.
Yang’s approach, called “policy-agnostic programming,” builds sharing restrictions right into the software design process. That both forces developers to address privacy, and makes it easier for them to do so.
6. So could a new way of thinking about it
But it may not be enough for some software developers to choose programming tools that would protect their users’ data. Scott Shackelford from Indiana University discussed the movement to declare cybersecurity – including data privacy – a human right recognized under international law.
He predicts real progress will result from consumer demand:
“As people use online services more in their daily lives, their expectations of digital privacy and freedom of expression will lead them to demand better protections. Governments will respond by building on the foundations of existing international law, formally extending into cyberspace the human rights to privacy, freedom of expression and improved economic well-being.”
But governments can be slow to act, leaving people to protect themselves in the meantime.
7. The real basis of all privacy is strong encryption
The fundamental way to protect privacy is to make sure data is stored so securely that only the people authorized to access it are able to read it. Susan Landau at Tufts University explains the importance of individuals having access to strong encryption. And she observes that police and intelligence officials are coming around to this view:
“Increasingly, a number of former senior law enforcement and national security officials have come out strongly in support of end-to-end encryption and strong device protection …, which can protect against hacking and other data theft incidents.”
One day, perhaps, governments and businesses will have the same concerns about individuals’ privacy as people themselves do. Until then, strong encryption without special access for law enforcement or other authorities will remain the only reliable guardian of privacy.
Editor’s note: The following is a roundup of archival stories.
Federal investigators following up on the mass shooting at a Texas church on Nov. 5 have seized the alleged shooter’s smartphone – reportedly an iPhone – but are reporting they are unable to unlock it, to decode its encryption and read any data or messages stored on it.
The situation adds fuel to an ongoing dispute over whether, when and how police should be allowed to defeat encryption systems on suspects’ technological devices. Here are highlights of The Conversation’s coverage of that debate.
#1. Police have never had unfettered access to everything
The FBI and the U.S. Department of Justice have in recent years – especially since the 2015 mass shooting in San Bernardino, California – been increasing calls for what they term “exceptional access,” a way around encryption that police could use to gather information on crimes both future and past. Technology and privacy scholar Susan Landau, at Tufts University, argues that limits and challenges to investigative power are strengths of democracy, not weaknesses:
“[L]aw enforcement has always had to deal with blocks to obtaining evidence; the exclusionary rule, for example, means that evidence collected in violation of a citizen’s constitutional protections is often inadmissible in court.”
Further, she notes that almost any person or organization, including community groups, could be a potential target for hackers – and therefore should use strong encryption in their communications and data storage:
“This broad threat to fundamental parts of American society poses a serious danger to national security as well as individual privacy. Increasingly, a number of former senior law enforcement and national security officials have come out strongly in support of end-to-end encryption and strong device protection (much like the kind Apple has been developing), which can protect against hacking and other data theft incidents.”
#2. FBI has other ways to get this information
The idea of weakening encryption for everyone just so police can have an easier time is increasingly recognized as unworkable, writes Ben Buchanan, a fellow at Harvard’s Belfer Center for Science and International Affairs. Instead,
“The future of law enforcement and intelligence gathering efforts involving digital information is an emerging field that I and others who are exploring it sometimes call ‘lawful hacking.’ Rather than employing a skeleton key that grants immediate access to encrypted information, government agents will have to find other technical ways – often involving malicious code – and other legal frameworks.”
Indeed, he observes, when the FBI failed to force Apple to unlock the San Bernardino shooter’s iPhone,
“the FBI found another way. The bureau hired an outside firm that was able to exploit a vulnerability in the iPhone’s software and gain access. It wasn’t the first time the bureau had done such a thing.”
#3. It’s not just about iPhones
When the San Bernardino suspect’s iPhone was targeted by investigators, Android researchers William Enck and Adwait Nadkarni at North Carolina State University tried to crack a smartphone themselves. They found that one key to encryption’s effectiveness is proper setup:
“Overall, devices running the most recent versions of iOS and Android are comparably protected against offline attacks, when configured correctly by both the phone manufacturer and the end user. Older versions may be more vulnerable; one system could be cracked in less than 10 seconds. Additionally, configuration and software flaws by phone manufacturers may also compromise security of both Android and iOS devices.”
#4. What they’re not looking for
What are investigators hoping to find, anyway? It’s nearly a given that they aren’t looking for emails the suspect may have sent or received. As Georgia State University constitutional scholar Clark Cunningham explains, the government already believes it is allowed to read all of a person’s email, without the email owner ever knowing:
“[The] law allows the government to use a warrant to get electronic communications from the company providing the service – rather than the true owner of the email account, the person who uses it.
"And the government then usually asks that the warrant be "sealed,” which means it won’t appear in public court records and will be hidden from you. Even worse, the law lets the government get what is called a “gag order,” a court ruling preventing the company from telling you it got a warrant for your email.“
#5. The political stakes are high
With this new case, federal officials risk weakening public support for giving investigators special access to circumvent or evade encryption. After the controversy over the San Bernardino shooter’s phone, public demand for privacy and encryption climbed, wrote Carnegie Mellon professor Rahul Telang:
"Repeated stories on data breaches and privacy invasion, particularly from former NSA contractor Edward Snowden, appears to have heightened users’ attention to security and privacy. Those two attributes have become important enough that companies are finding it profitable to advertise and promote them.
"Apple, in particular, has highlighted the security of its products recently and reportedly is doubling down and plans to make it even harder for anyone to crack an iPhone.”
It seems unlikely this debate will ever truly go away: Police will continue to want easy access to all information that might help them prevent or solve crimes, and regular people will continue to want to protect their private information and communications from prying eyes, whether that’s criminals, hackers or, indeed, the government itself.
Editor’s note: The following is a roundup of previously published articles.
Passwords are everywhere – and they present an impossible puzzle. Social media profiles, financial records, personal correspondence and vital work documents are all protected by passwords. To keep all that information safe, the rules sound simple: Passwords need to be long, different for every site, easy to remember, hard to guess and never written down. But we’re only human! What is to be done about our need for secure passwords?
Get good advice
Sadly, much of the password advice people have been given over the past decade-plus is wrong, and in part that’s because the real threat is not an individual hacker targeting you specifically, write five scholars who are part of the Carnegie Mellon University passwords research group:
“People who are trying to break into online accounts don’t just sit down at a computer and make a few guesses…. [C]omputer programs let them make millions or billions of guesses in just a few hours…. [So] users need to go beyond choosing passwords that are hard for a human to guess: Passwords need to be difficult for a computer to figure out.”
To help, those researchers have developed a system that checks passwords as users create them, and offers immediate advice about how to make each password stronger.
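To make the scale of machine guessing concrete, here is a back-of-the-envelope sketch in Python; the guessing rate is an illustrative assumption, not a figure from the researchers.

```python
# Rough time to exhaust a password search space at an assumed guessing rate.
GUESSES_PER_SECOND = 1e10  # illustrative offline cracking rate; real rates vary

def years_to_try_all(alphabet_size: int, length: int) -> float:
    combinations = alphabet_size ** length
    return combinations / GUESSES_PER_SECOND / (3600 * 24 * 365)

print(years_to_try_all(26, 8))   # 8 lowercase letters: ~6.6e-07 years (about 21 seconds)
print(years_to_try_all(95, 12))  # 12 printable-ASCII characters: ~1.7 million years
```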
Use a password manager
All that computing power can work to our advantage too, writes Elon University computer scientist Megan Squire:
“The average internet user has 19 different passwords. It’s easy to see why people write them down on sticky notes or just click the ‘I forgot my password’ link. Software can help! The job of password management software is to take care of generating and remembering unique, hard-to-crack passwords for each website and application.”
That sounds like a good start.
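For the “generating” half of that job, here is a minimal sketch using Python’s standard secrets module; a real password manager also encrypts, stores and fills in these credentials for you.

```python
import secrets
import string

# Build a random password from letters, digits and punctuation.
# The secrets module draws from a cryptographically secure source,
# unlike the general-purpose random module.
def generate_password(length: int = 16) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different every run, e.g. 'k2#Vq&...'
```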
Getting emoji – 🐱💦🎆🎌 – into the act
Then again, it might be even better not to use any regular characters. A group of emoji could improve security, writes Florian Schaub, an assistant professor of information and of electrical engineering and computer science at the University of Michigan:
“We found that emoji passcodes consisting of six randomly selected emojis were hardest to steal over a user’s shoulder. Other types of passcodes, such as four or six emojis in a pattern, or four or six numeric digits, were easier to observe and recall correctly.”
Still, emoji are – like letters and numbers – drawn from a finite library of options. So they’re vulnerable to being guessed by powerful computers.
Drawing toward a solution
To add even more potential variation to the mix, consider making a quick doodle-like drawing to serve as a password. Janne Lindqvist from Rutgers University calls that sort of motion a “gesture,” and is working on a system to do just that:
“We have explored the potential for people to use doodles instead of passwords on several websites. It appeared to be no more difficult to remember multiple gestures than it is to recall different passwords for each site. In fact, it was faster: Logging in with a gesture took two to six seconds less time than doing so with a text password. It’s faster to generate a gesture than a password, too: People spent 42 percent less time generating gesture credentials than people we studied who had to make up new passwords. We also found that people could successfully enter gestures without spending as much attention on them as they had to with text passwords.”
Easier to make, faster to enter, and not any more difficult to remember? That’s progress.
A world without passwords
Any type of password is inherently vulnerable, though, because it is an heir to centuries of tradition in writing, writes literature scholar Brian Lennon of Pennsylvania State University:
“[E]ven the strongest password … can be used anywhere and at any time once it has been separated from its assigned user. It is for this reason that both security professionals and knowledgeable users have been calling for the abandonment of password security altogether.”
What would be left then? Only attributes about who we are as living beings.
The unknowable password
Identifying people based not on what they know, but rather their actual biology, is perhaps the ultimate goal. This goes well beyond fingerprints and retina scans, Elon’s Squire explains:
“[A] computer game similar to ‘Guitar Hero’ [can] train the subconscious brain to learn a series of keystrokes. When a musician memorizes how to play a piece of music, she doesn’t need to think about each note or sequence. It becomes an ingrained, trained reaction usable as a password but nearly impossible even for the musician to spell out note by note, or for the user to disclose letter by letter.”
That might just do away with passwords altogether. And yet if you’re really just longing for the days of deadbolts, padlocks and keys, you’re not alone.
Don’t just leave things to a password
User authentication using an electronic key is here, as Penn State-Altoona information sciences and technology professor Jungwoo Ryoo writes:
“A new, even more secure method is gaining popularity, and it’s a lot like an old-fashioned metal key. It’s a computer chip in a small portable physical form that makes it easy to carry around. (It even typically has a hole to fit on a keychain.) The chip itself contains a method of authenticating itself … And it has USB or wireless connections so it can either plug into any computer easily or communicate wirelessly with a mobile device.”
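The key’s “method of authenticating itself” is typically a challenge-response exchange: the service sends a fresh random challenge, and the chip answers using a secret that never leaves it. Here is a minimal sketch of that idea in Python using an HMAC; it is an illustration only, since real security keys use public-key protocols such as FIDO U2F rather than a shared secret.

```python
import hashlib
import hmac
import secrets

# Simplification: a secret stored only inside the key's chip. Real keys
# hold per-site private keys and sign challenges with public-key crypto.
DEVICE_SECRET = secrets.token_bytes(32)

def key_response(challenge: bytes) -> bytes:
    # What the hardware key computes to prove it holds the secret.
    return hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).digest()

# Server side: issue a fresh challenge, then verify the key's response.
challenge = secrets.token_bytes(16)
expected = hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).digest()
print(hmac.compare_digest(key_response(challenge), expected))  # True
```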
Editor’s note: The following is a roundup of archival stories.
On March 14, or 3/14, mathematicians and other obscure-holiday aficionados celebrate Pi Day, honoring π, the Greek symbol representing an irrational number that begins with 3.14. Pi, as schoolteachers everywhere repeat, represents the ratio of a circle’s circumference to its diameter.
What is Pi Day, and what, really, do we know about π anyway? Here are three-and-a-bit more articles to round out your Pi Day festivities.
A silly holiday
First off, a reflection on this “holiday” construct. Pi itself is very important, writes mathematics professor Daniel Ullman of George Washington University, but celebrating it is absurd:
The Gregorian calendar, the decimal system, the Greek alphabet, and pies are relatively modern, human-made inventions, chosen arbitrarily among many equivalent choices. Of course a mood-boosting piece of lemon meringue could be just what many math lovers need in the middle of March at the end of a long winter. But there’s an element of absurdity to celebrating π by noting its connections with these ephemera, which have themselves no connection to π at all, just as absurd as it would be to celebrate Earth Day by eating foods that start with the letter “E.”
And yet, here we are, looking at the calendar and getting goofily giddy about the sequence of numbers it shows us.
There’s never enough
In fact, as Jon Borwein of the University of Newcastle and David H. Bailey of the University of California, Davis, document, π is having a sustained cultural moment, popping up in literature, film and song:
Sometimes the attention given to pi is annoying. On 14 August 2012, the U.S. Census Office announced the population of the country had passed exactly 314,159,265. Such precision was, of course, completely unwarranted. But sometimes the attention is breathtakingly pleasurable.
Come to think of it, pi can indeed be a source of great pleasure. Apple’s always comforting, and cherry packs a tart pop. Chocolate cream, though, might just be where it’s at.
Strange connections
Of course π appears in all kinds of places that relate to circles. But it crops up in other places, too – often where circles are hiding in plain sight. Lorenzo Sadun, a professor of mathematics at the University of Texas at Austin, explores surprising appearances:
Pi also crops up in probability. The function f(x) = e^(−x²), where e = 2.71828… is Euler’s number, describes the most common probability distribution seen in the real world, governing everything from SAT scores to locations of darts thrown at a target. The area under this curve is exactly the square root of π.
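Written out, that last claim is the classic Gaussian integral:

$$\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}$$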
It’s enough to make your head spin.
Historical pi
If you want to engage with π more directly, follow the lead of Georgia State University mathematician Xiaojing Ye, whose guide starts thousands of years ago:
The earliest written approximations of pi are 3.125 in Babylon (1900-1600 B.C.) and 3.1605 in ancient Egypt (1650 B.C.). Both approximations start with 3.1 – pretty close to the actual value, but still relatively far off.
By the end of his article, you’ll find a method to calculate π for yourself. You can even try it at home!
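If you would rather not wait, here is a minimal sketch of one classic approach in Python – the slowly converging Leibniz series, which is not necessarily the method Ye describes.

```python
# Approximate pi with the Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
# Convergence is slow: about a million terms for six correct digits.
def leibniz_pi(terms: int) -> float:
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

print(leibniz_pi(1_000_000))  # 3.1415916..., versus pi = 3.1415926...
```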
An irrational bonus
And because π is irrational, we’ll irrationally give you even one more, from education professor Gareth Ffowc Roberts at Bangor University in Wales, who highlights the very humble beginnings of the symbol π:
After attending a charity school, William Jones of the parish of Llanfihangel Tre’r Beirdd landed a job as a merchant’s accountant and then as a maths teacher on a warship, before publishing A New Compendium of the Whole Art of Navigation, his first book in 1702 on the mathematics of navigation. On his return to Britain he began to teach maths in London, possibly starting by holding classes in coffee shops for a small fee.
Shortly afterwards he published “Synopsis palmariorum matheseos,” a summary of the current state of the art developments in mathematics which reflected his own particular interests. In it is the first recorded use of the symbol π as the number that gives the ratio of a circle’s circumference to its diameter.
What made him realize that this ratio needed a symbol to represent a numeric value? And why did he choose π? It’s all Greek to us.
1. A conflict rooted in history
The basic conflict is a result of the history of the internet, and the telecommunications industry more generally, writes internet law scholar Allen Hammond at Santa Clara University:
Like the telephone, broadcast and cable predecessors from which they evolved, the wire and mobile broadband networks that carry internet traffic travel over public property. The spectrum and land over which these broadband networks travel are known as rights of way. Congress allowed each network technology to be privately owned. However, the explicit arrangement has been that private owner access to the publicly owned spectrum and rights of way necessary to exploit the technology is exchanged for public access and speech rights.
The government is trying to balance competing interests in how the benefits of those network services are shared. Should people have unfiltered access to any and all data services, or should some internet providers be allowed to charge a premium to let companies reach audiences more widely and more quickly?
2. Media is the basis of democracy
Pai’s move against net neutrality, media scholar Christopher Ali at the University of Virginia writes, is just part of a larger effort at the FCC to accelerate the deregulation trend of the past 30 years. The stakes are high:
Media is more than just our window on the world. It’s how we talk to each other, how we engage with our society and our government. Without a media environment that serves the public’s need to be informed, connected and involved, our democracy and our society will suffer….
If only a few wealthy companies control how Americans communicate with each other, it will be harder for people to talk among ourselves about the kind of society we want to build.
3. Pushing back against corporate control
Competition is already fairly limited, it turns out. Across America, most people have very little – if any – choice in who their internet provider is. Communication studies professor Amanda Lotz at the University of Michigan explains the concerns raised by a monopoly marketplace and the potential effects of turning back the current policy of net neutrality:
The rules were created out of concern internet service providers would reserve high-speed internet lanes for content providers who could pay for it, while relegating to slower speeds those that didn’t – or couldn’t, such as libraries, local governments and universities. Net neutrality is also important for innovation, because it protects small and start-up companies’ access to the massive online marketplace of internet users.
In this view, the internet is a public utility that should be preserved and protected for all to access freely.
4. Getting around the rules
Even with net neutrality rules in place, companies were pushing the boundaries of what is legal. In recent years, many mobile internet providers have imposed monthly limits on how much data their customers can use, while simultaneously creating exemptions from those limits. Called “zero rating” policies, these exemptions omit from the monthly cap certain types of data, or certain companies’ data. For example, T-Mobile customers can listen endlessly to Spotify internet radio regardless of how much high-speed data they use for other purposes. Information systems scholars Liangfei Qiu, Soohyun Cho and Subhajyoti Bandyopadhyay at the University of Florida examined the effects of those policies on the marketplace:
At first glance, zero rating plans would seem to be good for consumers because they allow users to consume traffic for free. But our research suggests the variety of content may be reduced, which in the long run harms consumers.
Their findings suggest that keeping the internet open would be best for the public.
5. Rules could lock in today’s industry
Today’s business models may not be viable in the future. Net neutrality rules run counter to that reality by freezing in place a particular industry structure, making it difficult for firms to respond to underlying changes in technology and consumer demand over time.
6. A vestige of the 20th century
Whether net neutrality rises or falls, however, the debate will continue. The rules and frameworks the government uses to try to regulate the internet are long out of date, and were written to address a very different time, when landline telephone service was not yet ubiquitous. Boston University communication and law professor T. Barton Carter explained what the real solution is:
The laws governing the internet were written in the early 20th century, decades before the companies that dominate the internet like Google and YouTube even existed. The only solution is a complete rewrite of the 80-year-old Communications Act – unfortunately a fool’s errand in today’s Washington.
7. Can net neutrality even happen?
And maintaining net neutrality itself could be a major challenge, if not a fool’s errand, thanks to important technical details that could make the ideal impossible, writes University of Michigan computer scientist Harsha Madhyastha:
If one user is streaming video and another is backing up data to the cloud, should both of them have their data slowed down? Or would users’ collective experience be best if those watching videos were given priority? That would mean slightly slowing down the data backup, freeing up bandwidth to minimize video delays and keep the picture quality high.
8. Measuring how providers treat traffic
Northeastern University computer scientist David Choffnes describes how his team built an app that can measure exactly how internet service providers handle different types of traffic:
The methods we used and the tools we developed investigate how internet service providers manage your traffic and demonstrate how open the internet really is – or isn’t – as a result of evolving internet service plans, as well as political and regulatory changes. Regular people can explore their own services with our mobile app for Android, which is out now; an iOS version is coming soon.
Letting people see whether, and how, their data service handles internet traffic may be the best way to show people the importance of an open internet.
9. Startups need an open internet
Based on our findings, I believe that rolling back net neutrality rules will jeopardize the digital startup ecosystem that has created value for customers, wealth for investors and globally recognized leadership for American technology companies and entrepreneurs. The digital economy in the U.S. is already on the verge of stalling; failing to protect an open internet would further erode the United States’ digital competitiveness, making a troubling situation even worse.
10. Setting clearer guidelines
If Pai’s proposal goes through, it will signal that future changes in partisan control in Washington, D.C., could also lead to major shifts in internet regulation. A key part of this potential problem is lack of clarity in the laws, meaning regulators and courts have to sort through major policy questions that would better be dealt with in Congress, writes Timothy Brennan, a former chief economist at the FCC who is now a public policy scholar at the University of Maryland, Baltimore County. He explains three steps Congress could take to simplify the debate – without even having to agree on the policy itself:
If Congress could enact legislation that removed the distinction between “telecommunication” and “information” services, reinforced the importance of the public interest in communications and restored antitrust enforcement power for regulators, the FCC would be better able to develop net neutrality regulations – whatever they may turn out to be – with solid substantive and legal foundations.
That could go a long way to furthering both public debate and public policy.
Editor’s note: The following is a roundup of stories related to election cybersecurity.
Every vote counts. It’s the key principle underlying democracy. Through the history of democratic elections, people have created many safeguards to ensure votes are cast and counted fairly: paper ballots, curtains around voting booths, locked ballot boxes, supervised counting, provisions for recounting and more.
With the advent of computer technology has come the prospect of faster counting of votes, and even, some hope, more secure and accurate voting. That’s much harder to achieve than it might seem, though. Here are highlights of The Conversation’s coverage of why that is.
Voting machines are old
After the vote-counting debacle of the 2000 presidential election, the federal government handed out massive amounts of money to the states to buy newer voting equipment that, everyone hoped, would avoid a repeat of the “hanging chad” mess. But more than a decade later, as Lawrence Norden and Christopher Famighetti at the Brennan Center for Justice at New York University explain, that one-time cash infusion has left a troubling legacy:
Imagine you went to your basement and dusted off the laptop or mobile phone that you used in 2002. What would happen if you tried to turn it on? We don’t have to guess. Around the country this election year, people are going into storage, pulling out computers that date back to 2002 and asking us to vote on them.
They asked election officials around the country about the situation, and report on some worrying findings, including how vulnerable the equipment is to cyberattack, and how voting machine breakdowns lead to long lines that deter voters from participating.
Not everyone can use the devices
Also limiting voter turnout is the fact that most voting machines don’t make accommodations for people with physical disabilities that affect how they vote. Juan Gilbert at the University of Florida quantifies the problem:
“In the 2012 presidential election, … The turnout rate for voters with disabilities was 5.7 percent lower than for people without disabilities. If voters with disabilities had voted at the same rate as those without a disability, there would have been three million more voters weighing in on issues of local, state and national significance.”
To date, most efforts to solve the problems have involved using special voting equipment just for people with particular disabilities. That’s expensive and inefficient – and remember, separate is not equal. Gilbert has invented an open-source (read: inexpensive) voting machine system that can be used by people with many different disabilities, as well as people without disabilities.
With the system, which has been tested and approved in several states, voters can cast their ballots using a keyboard, a joystick, physical buttons, a touchscreen or even their voice.
Machines are not secure
Nearly every voting machine in use, though, is vulnerable to various sorts of cyberattacks. For years, researchers have documented ways to tamper with vote counts, and yet few machines have had their cyberdefenses upgraded.
The fact that the election system is so widespread – with multiple machines in every municipality nationwide – also makes it weaker, writes Richard Forno at the University of Maryland, Baltimore County: There are simply more opportunities for an attacker to find a way in.
“Voter registration and administration systems operated by state and national governments are at risk too. Hacks here could affect voter rosters and citizen databases. Failing to secure these systems and records could result in fraudulent information in the voter database that may lead to improper (or illegal) voter registrations and potentially the casting of fraudulent votes.”
Even without an attack, major concerns
Even if an attack never happens – or if nobody can prove one happened – November’s election is vulnerable to sore losers taking advantage of the fact that cyberweaknesses exist.
There is more than enough evidence that a cyberattack is possible. And just that prospect could destabilize the country, argues Herbert Lin of Stanford University:
Imagine that on Nov. 9, the day after Election Day, the early presidential election returns show that Donald Trump has lost. … Trump could call the electronically tallied vote counts obviously fraudulent. Even without pointing to any internal campaign polling suggesting he would win, he could highlight the indisputable fact that no one knows what is going on inside the voting machines.
It’s enough to make you turn out to vote, and keep you up all night afterward.
Since its launch in 2011, FracFocus, a government- and industry-funded website, has been the only place where Americans could learn the details about chemicals and water used in fracking operations near their homes, schools and businesses. But FracFocus has never lived up to its promise of bringing true transparency to fracking. And now, at least one state is planning to set its own course for fracking disclosure.
Pennsylvania’s Department of Environmental Protection has announced that it is withdrawing from FracFocus. Starting in March 2016, Pennsylvania’s fracking operators will have to report electronically to a state database that will present citizens with a map-based interface with simple one-click summaries of specific wells, in addition to downloadable bulk data.
Pennsylvania officials say this is to counter FracFocus’s lack of user-friendliness, which has long been a source of consternation to researchers attempting to document the impacts and risks of fracking. For many years, FracFocus’s website was populated with individual PDF files, scanned copies of forms filed by fracking companies. Initially, many of those disclosures were voluntary; as the site’s influence grew, states began requiring frackers to file with FracFocus. But the database was always far from complete.
FracFocus could be useful for citizens curious about an individual well, but the database was notoriously unfriendly to those wanting to probe more deeply into fracking. For a long time, searches could not return more than 2,000 records. From those search results, users could not download more than a small number of actual disclosure forms each day. What they were able to download was not machine-readable or searchable in any way.
It was 2015 before the public was allowed to download machine-readable data. This latest improvement in FracFocus transparency is welcome, but still falls short of modern standards for making data available and accessible to the public. In Frontier Group’s work on government spending transparency, we have argued that, to be useful to the public, transparency data must (among other things) be searchable, bulk-downloadable, and “one-stop,” meaning that citizens shouldn’t have to jump through multiple hoops or have specialized knowledge to obtain important information.
By contrast, here’s what the average person would have to do to even look at the bulk-downloadable data from FracFocus:
Download, install, configure and operate Microsoft SQL Server along with its administration tools, SQL Server Management Studio and SQL Server Configuration Manager. They are free, but hard to find on the Microsoft download website, and they have terribly unintuitive interfaces once they’re running. They are also PC-specific, so Mac users are out of luck.
Purchase Microsoft Access, a database program not included in the regular version of Microsoft Office (the one that includes Word, Excel and PowerPoint). Microsoft charges $109.99; Amazon’s price is $99.99. (You could use a different database program, but the instructions provided by FracFocus are Access-specific.)
Follow a complicated series of steps – laid out in a nine-page PDF document provided by FracFocus – to convert the data to a form usable by Microsoft Access.
Construct queries in Access – not a simple point-and-click database program by any stretch – and interpret the results.
This process is not for the faint of heart, nor for the computer-inexperienced.
Even then, the data are not presented clearly. Rather than a company simply listing how many gallons of water and how many pounds of which chemicals it pumped deep underground at which well, key numbers are presented as percentages of the final fracking fluid. That requires a significant series of careful database queries and spreadsheet calculations to get actual usable figures.
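As a rough illustration of that arithmetic, here is a hedged sketch in Python that converts a “percent of fluid by mass” disclosure into pounds of one additive. The numbers are made up, and the sketch approximates the fluid’s density as that of water; a real conversion would use the actual disclosure fields.

```python
# Hypothetical disclosure values for a single well (all numbers invented).
WATER_GALLONS = 4_000_000        # total water pumped at the well
LBS_PER_GALLON_WATER = 8.34      # weight of a gallon of water
ADDITIVE_PCT_OF_FLUID = 0.05     # one additive's mass share of the fluid, in percent

# Approximate the fluid's total mass by the water's mass, since water
# dominates fracking fluid by volume.
fluid_mass_lbs = WATER_GALLONS * LBS_PER_GALLON_WATER
additive_lbs = fluid_mass_lbs * ADDITIVE_PCT_OF_FLUID / 100

print(f"{additive_lbs:,.0f} pounds of additive")  # 16,680 pounds
```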
With luck, Pennsylvania’s reporting system will set a new standard for public disclosure of, and citizen access to, data related to fracking. The creation of separate databases for every state where fracking occurs is not the ideal solution – a high-quality national database would be better. But until FracFocus catches up to the standards of data quality and user-friendliness people expect in the 21st century, citizens will need to look to the states to protect their access to this important information that affects their health and well-being.