Wednesday, December 19, 2018

Remember, you're being manipulated on social media: 4 essential reads

Beware the strings attached to social media and smartphone use. VAZZEN/Shutterstock.com
Jeff Inglis, The Conversation

Editor’s note: As we come to the end of the year, Conversation editors take a look back at the stories that – for them – exemplified 2018.

Sometime in the political frenzy of the past year, I realized I had to stop scanning Twitter.

I had become used to taking the pulse of online society, but I was no longer confident that the tweets I was reading were accurate portrayals of the authentic views of real humans. Some of them were, no doubt – yet I had worked with so many scholars on articles about how social media sites leave users vulnerable to being misled and misinformed. There was plenty of evidence that social media platforms had misused my data, and had allowed trolls and bots to exploit their systems, to manipulate my thinking.

I haven’t been back to Twitter since – nor have I used Facebook for anything other than looking at friends’ photos of babies and other celebrations. Here are some of the articles I worked on that informed me how wary I should be of secret, malicious influencers online.

1. Don’t trust social media

When 2018 began, I – like many in the U.S. – was worried about the previous year’s revelations about how Facebook data had been used to influence voters in the 2016 election. I considered deleting my Facebook account, but as part of my job I need to be aware of what’s happening on the platform. So I took the advice of Dartmouth College social media scholars Denise Anthony and Luke Stark:

“Without full information about what happens to their personal data once it’s gathered, we recommend people default to not trusting companies until they’re convinced they should.”

Since then, I have spent far less time on the site than I used to. Also, I deleted some information from my profile, and I rarely click on links, comment on posts or even click "like." Facebook can still track what I see, but not how I react to it. I imagine, and hope, that means the company has less information about me, and is less able to manipulate me.

2. Checking my own perceptions

To further understand how manipulative and misleading online activity spreads, I used the tools created by Filippo Menczer, Giovanni Luca Ciampaglia and their colleagues at the Observatory on Social Media at Indiana University. They want to “help people become aware of [biases in the brain, society and technologies] and protect themselves from outside influences designed to exploit them.”

The most fun is their game “Fakey,” which asks players to identify which news stories and information sources are reliable – and which aren’t. They’ve also built Hoaxy, which shows graphically how falsehoods spread across social networks, and Botometer, which rates how likely it is that a particular Twitter account is a bot – or not.

3. Bots are powerful

Those bots, I learned from MIT professor Tauhid Zaman, can be dangerous even if there aren’t very many of them. He analyzed Twitter activity, including both people and bots, and measured users’ political opinions. Then he found a way to simulate what the humans’ views would have been if the bots weren’t there.

“A small number of very active bots can actually significantly shift public opinion,” he found. The key wasn’t how many Twitter bots there were, but how many posts they made.
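Zaman’s actual model isn’t spelled out in the article, but the intuition – that post volume matters more than bot head count – can be illustrated with a toy opinion-dynamics simulation. The following is a hypothetical sketch in Python, not his method: a few bots holding an extreme view post far more often than any human, so they come to dominate what everyone reads.

```python
import random

def simulate(n_humans=500, n_bots=5, posts_per_bot=200, steps=50, seed=1):
    """Toy opinion-dynamics sketch (not Zaman's model): each step, every
    human reads a random sample of recent posts and nudges their opinion
    toward the average of what they read. Bots hold an extreme opinion
    (+1.0) and can post far more often than humans, so they crowd feeds."""
    rng = random.Random(seed)
    opinions = [rng.uniform(-1, 1) for _ in range(n_humans)]  # humans start spread out
    for _ in range(steps):
        # Each human posts once per step; each bot posts `posts_per_bot` times.
        posts = list(opinions) + [1.0] * (n_bots * posts_per_bot)
        for i in range(n_humans):
            feed = rng.sample(posts, 20)                  # a small random feed
            seen = sum(feed) / len(feed)
            opinions[i] += 0.1 * (seen - opinions[i])     # drift toward what was seen
    return sum(opinions) / n_humans

print("mean opinion, no bots:      %+.2f" % simulate(n_bots=0))
print("mean opinion, 5 quiet bots: %+.2f" % simulate(posts_per_bot=1))
print("mean opinion, 5 loud bots:  %+.2f" % simulate(posts_per_bot=200))
```

In this toy setup the same five bots barely budge the average when they post once each, but pull it sharply toward their own position when they post hundreds of times per step – the volume, not the head count, does the work.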

4. Engaging with real people

All the free time I gained by spending less time on social media went to good use: socializing in person and being by myself – which likely made me feel happier. As Georgetown psychologist Kostadin Kushlev found, "Digital socializing doesn't add to, but in fact subtracts from, the psychological benefits of nondigital socializing."

I certainly feel best when socializing face-to-face and, as Kushlev found in his research subjects, focusing on the people who are right in front of me is even more enjoyable than hanging out in person while also messaging others on my phone.

Avoiding psychological and political manipulation and having a more enjoyable time with friends and loved ones in person sounds like a great plan for 2019, too.

Jeff Inglis, Science + Technology Editor, The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Tuesday, December 11, 2018

Your smartphone apps are tracking your every move – 4 essential reads

If you feel like you’re being watched, it could be your smartphone spying on you. Jakub Grygier/Shutterstock.com
Jeff Inglis, The Conversation

If you have a smartphone, it probably is a significant part of your life, storing appointments and destinations as well as being central to your communications with friends, loved ones and co-workers. Research and investigative reporting continue to reveal the degree to which your smartphone is aware of what you’re up to and where you are – and how much of that information is shared with companies that want to track your every move, hoping to better target you with advertising.

Several scholars at U.S. universities have written for The Conversation about how these technologies work, and the privacy problems they raise.

1. Most apps give away personal data

A study based at the University of California, Berkeley found that 7 in 10 apps shared personal data, like location and what apps a person uses, with companies that exist to track users online and in the physical world, digital privacy scholars Narseo Vallina-Rodriguez and Srikanth Sundaresan write. Fifteen percent of the apps the study examined sent that data to five or more tracking websites.

In addition, 1 in 4 trackers received “at least one unique device identifier, such as the phone number … [which] are crucial for online tracking services because they can connect different types of personal data provided by different apps to a single person or device.”

2. Turning off tracking doesn’t always work

Even people who tell their phones and apps not to track their activity are vulnerable. Northeastern University computer scientist Guevara Noubir found that “a phone can listen in on a user’s finger typing to discover a secret password – and […] simply carrying a phone in your pocket can tell data companies where you are and where you’re going.”

3. Your profile is worth money

All of this information on who you are, where you are and what you’re doing gets assembled into enormously detailed digital profiles, which get turned into money, Wayne State University law professor Jonathan Weinberg explains: “By combining online and offline data, Facebook can charge premium rates to an advertiser who wants to target, say, people in Idaho who are in long-distance relationships and are thinking about buying a minivan. (There are 3,100 of them in Facebook’s database.)”

4. Rules and laws don’t exist – in the US

Right now in the U.S., there’s not much regulatory oversight making sure digital apps and services protect people’s privacy and the privacy of their data. “Federal laws protect medical information, financial data and education-related records,” writes University of Michigan privacy scholar Florian Schaub, before noting that “Online services and apps are barely regulated, though they must protect children, limit unsolicited email marketing and tell the public what they do with data they collect.”

European rules are more comprehensive, but the problem remains that people’s digital companions collect and share large amounts of information about their real-world lives.

Jeff Inglis, Science + Technology Editor, The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Tuesday, August 14, 2018

Keeping the electricity grid running – 4 essential reads

A man reads the newspaper by flashlight during the Northeast Blackout in August 2003. AP Photo/Joe Kohen
Jeff Inglis, The Conversation

On Aug. 14, 2003, a software bug contributed to a blackout that left 50 million people across nine U.S. northeastern states and a Canadian province without power. The outage lasted for as long as four days, with rolling blackouts in some areas for days after that.

That event wasn’t caused by an attacker, but many of the recommendations of the final incident report focused on cybersecurity. Fifteen years later, the stakes of a long-term outage are even higher, as American business and society are even more dependent on electronic devices. Scholars around the country are studying the problem of protecting the grid from cyberattacks and software flaws. Several of them have written about their work for The Conversation:

1. Attacks could be hard to detect

Though the software error that amplified the blackout was not the result of a cyberattack, power grid scholar Michael McElfresh at Santa Clara University explains that a clever attacker could disguise the intrusion “as something as simple as a large number of apparent customers lowering their thermostat settings in a short period on a peak hot day.”

2. Grid targets are tempting

Iowa State University’s Manimaran Govindarasu and Washington State University’s Adam Hahn, both grid security scholars, noted that the grid is an attractive target for hackers, who could shut off power to large numbers of people: “It happened in Ukraine in 2015 and again in 2016, and it could happen here in the U.S., too.”

3. What to do now?

In another article, Govindarasu and Hahn went on to describe the level to which “Russians had penetrated the computers of multiple U.S. electric utilities and were able to gain … privileges that were sufficient to cause power outages.”

The response, they wrote, involves extending federal grid-security regulations to “all utility companies – even the smallest,” having “all companies that are part of the grid participate in coordinated grid exercises to improve cybersecurity preparedness and share best practices” and – crucially – insisting that power utilities “ensure the hardware and software they use are from trustworthy sources and have not been tampered with or modified to allow unauthorized users in.”

Those steps won’t prevent software bugs, but they could reduce the likelihood of attackers exploiting computer systems’ vulnerabilities to shut off the lights.

4. Restructuring the grid itself

To protect against all types of threats to the grid – including natural and human-caused ones – engineering professor Joshua M. Pearce at Michigan Technological University suggests generating energy at many locations around the country, rather than in centralized power plants. He reports that his research has found that connecting those smaller power producers together with nearby electricity users would make supply more reliable, less vulnerable and cheaper. In fact, he found the U.S. military “could generate all of its electricity from distributed renewable sources by 2025 using … microgrids.”

At least that way a small problem with the grid would be less likely to spread and become a major problem for tens of millions of people, like the Northeast Blackout of 2003 was.

Editor’s note: This story is a roundup of articles from The Conversation’s archives.

Jeff Inglis, Science + Technology Editor, The Conversation

This article was originally published on The Conversation. Read the original article.

Tuesday, July 24, 2018

Russians hacked into US electric utilities: 6 essential reads

Who’s in control of what’s flowing in these wires? D Sharon Pruitt, CC BY
Jeff Inglis, The Conversation

The U.S. Department of Homeland Security has revealed that Russian government hackers gained deep access to hundreds of U.S. electrical utility companies – far deeper access, at far more companies, than federal officials had previously disclosed.

Securing the electrical grid, upon which is built almost the entirety of modern society, is a monumental challenge. Several experts have explained aspects of the task, potential solutions and the risks of failure for The Conversation:

1. What’s at stake?

The scale of disruption would depend, in part, on how much damage the attackers wanted to do. But a major cyberattack on the electricity grid could send surges through the grid, much as solar storms have done.

Those events, explains Rochester Institute of Technology space weather scholar Roger Dube, cause power surges, damaging transmission equipment. One solar storm in March 1989, he writes, left “6 million people without power for nine hours … [and] destroyed a large transformer at a New Jersey nuclear plant. Even though a spare transformer was nearby, it still took six months to remove and replace the melted unit.”

More serious attacks, like larger solar storms, could knock out manufacturing plants that build replacement electrical equipment, gas pumps to fuel trucks to deliver the material and even “the machinery that extracts oil from the ground and refines it into usable fuel. … Even systems that seem non-technological, like public water supplies, would shut down: Their pumps and purification systems need electricity.”

In the most severe cases, with fuel-starved transportation stalled and other basic infrastructure not working, “[p]eople in developed countries would find themselves with no running water, no sewage systems, no refrigerated food, and no way to get any food or other necessities transported from far away. People in places with more basic economies would also be without needed supplies from afar.”

2. It wouldn’t be the first time

Russia has penetrated other countries’ electricity grids in the past, and used its access to do real damage. In the middle of winter 2015, for instance, a Russian cyberattack shut off the power to Ukraine’s capital.

Power grid scholar Michael McElfresh at Santa Clara University discusses what happened to cause hundreds of thousands of Ukrainians to lose power for several hours, and notes that U.S. utilities use software similar to their Ukrainian counterparts – and therefore share the same vulnerabilities.

3. Security work is ongoing

These threats aren’t new, write grid security experts Manimaran Govindarasu from Iowa State and Adam Hahn from Washington State University. There are a lot of people planning defenses, including the U.S. government. And the “North American Electric Reliability Corporation, which oversees the grid in the U.S. and Canada, has rules … for how electric companies must protect the power grid both physically and electronically.” The group holds training exercises in which utility companies practice responding to attacks.

4. There are more vulnerabilities now

Grid researcher McElfresh also explains that the grid is increasingly complex, with thousands of companies responsible for different aspects of generation, transmission and delivery to customers. In addition, new technologies have led companies to incorporate more sensors and other “smart grid” technologies. He describes how that “has created many more access points for penetrating into the grid computer systems.”

5. It’s time to ramp up efforts

The depth of access and potential control over electrical systems means there has never been a better time than right now to step up grid security, writes public-utility researcher Theodore Kury at the University of Florida. He notes that many of those efforts may also help protect the grid from storm damage and other disasters.

6. A possible solution could be smaller grids

One protective effort was identified by electrical engineer Joshua Pearce at Michigan Technological University, who has studied ways to protect electricity supplies to U.S. military bases both within the country and abroad. He found that the Pentagon has already begun testing systems that combine solar-panel arrays with large-capacity batteries. “The equipment is connected together – and to buildings it serves – in what is called a ‘microgrid,’ which is normally connected to the regular commercial power grid but can be disconnected and become self-sustaining when disaster strikes.”

He found that microgrid systems could make military bases more resilient in the face of cyberattacks, criminals or terrorists and natural disasters – and even help the military “generate all of its electricity from distributed renewable sources by 2025 … which would provide energy reliability and decrease costs, [and] largely eliminate a major group of very real threats to national security.”

Editor’s note: This story is a roundup of articles from The Conversation’s archives.

Jeff Inglis, Science + Technology Editor, The Conversation

This article was originally published on The Conversation. Read the original article.

Friday, July 13, 2018

Securing America's voting systems against spying and meddling: 6 essential reads

Can they be confident their votes will count? 4zevar/Shutterstock.com
Jeff Inglis, The Conversation

The federal indictments of 12 Russian government agents accuse them of hacking computers to spy on and meddle with the U.S. 2016 presidential election – including state and county election databases.

With the 2018 midterm congressional elections approaching – along with countless state and local elections – here are highlights of The Conversation’s coverage of voting system integrity, offering ideas, lessons and cautions for officials and voters alike.

1. Voting machines are old

After the vote-counting debacle of the 2000 election, the federal government handed out massive amounts of money to the states to buy newer voting equipment that, everyone hoped, would avoid a repeat of the “hanging chad” mess. But almost two decades later, as Lawrence Norden and Christopher Famighetti at the Brennan Center for Justice at New York University explain, that one-time cash infusion has left a troubling legacy of aging voting machines:

“Imagine you went to your basement and dusted off the laptop or mobile phone that you used in 2002. What would happen if you tried to turn it on?”

That’s the machinery U.S. democracy depends on.

2. Not everyone can use the devices

Most voting machines don’t make accommodations for people with physical disabilities that affect how they vote. Juan Gilbert at the University of Florida quantified the problem during the 2012 presidential election:

“The turnout rate for voters with disabilities was 5.7 percent lower than for people without disabilities. If voters with disabilities had voted at the same rate as those without a disability, there would have been three million more voters weighing in on issues of local, state and national significance.”

To date, most efforts to solve the problems have involved using special voting equipment just for people with particular disabilities. That’s expensive and inefficient – and remember, separate is not equal. Gilbert has invented an open-source (read: inexpensive) voting machine system that can be used by people with many different disabilities, as well as people without disabilities.

With the system, which has been tested and approved in several states, voters can cast their ballots using a keyboard, a joystick, physical buttons, a touchscreen or even their voice.

3. Machines are not secure

In part because of their age, nearly every voting machine in use is vulnerable to various sorts of cyberattacks. For years, researchers have documented ways to tamper with vote counts, and yet few machines have had their cyberdefenses upgraded.

The fact that the election system is so widespread – with multiple machines in every municipality nationwide – also makes it weaker, writes Richard Forno at the University of Maryland, Baltimore County: There are simply more opportunities for an attacker to find a way in.

“Voter registration and administration systems operated by state and national governments are at risk too. Hacks here could affect voter rosters and citizen databases. Failing to secure these systems and records could result in fraudulent information in the voter database that may lead to improper (or illegal) voter registrations and potentially the casting of fraudulent votes.”

4. Even without an attack, major concerns

Even if any particular election isn’t actually attacked – or if nobody can prove it was – public trust in elections is vulnerable to sore losers taking advantage of the fact that cyberweaknesses exist. Just that prospect could destabilize the country, argues Herbert Lin of Stanford University:

“State and local election officials can and should provide for paper backup of voting this (and every) November. But in the end, debunking claims of election rigging, electronically or otherwise, amounts to trying to prove something didn’t happen – it can’t be done.”

5. The Russians are a factor

Deputy Attorney General Rod Rosenstein announces the indictments of 12 Russian government officials for hacking in connection with the 2016 U.S. presidential election. AP Photo/Evan Vucci

American University historian Eric Lohr explains the centuries of experience Russia has in meddling in other countries’ affairs, but notes that the U.S. isn’t innocent itself:

“In fact, the U.S. has a long record of putting its finger on the scales in elections in other countries.”

Neither country is unique: Countries have attempted to influence each other’s domestic politics throughout history.

6. Other problems aren’t technological

Other major threats to U.S. election integrity have to do with domestic policies governing how voting districts are designed, and who can vote.

Penn State technologist Sascha Meinrath discusses how partisan panels have “systematically drawn voting districts in ways that dilute the power of their opponent’s party” and “chosen to systematically disenfranchise poor, minority and overwhelmingly Democratic-leaning constituencies.”

There’s plenty of work to be done.

Editors’ note: This is an updated version of an article originally published Oct. 18, 2016.

Jeff Inglis, Science + Technology Editor, The Conversation

This article was originally published on The Conversation. Read the original article.

Friday, April 13, 2018

How Facebook could reinvent itself – 3 ideas from academia

What will he decide to do? AP Photo/Andrew Harnik
Jeff Inglis, The Conversation

Facebook CEO Mark Zuckerberg’s testimony in front of Congress, following disclosures of personal data being misused by third parties, has raised the question of whether and how the social media company should be regulated. But short of regulation, the company can take a number of steps to address privacy concerns and the ways its platform has been used to disseminate false information to influence elections.

Scholars of privacy and digital trust have written for The Conversation about concrete ideas – some of them radical breaks with its current business model – the company could use right away.

1. Act like a media company

Facebook plays an enormous role in U.S. society and in civil society around the world. The leader of a multiyear global study of how digital technologies spread and how much people trust them, Tufts University’s Bhaskar Chakravorti, recommends the company accept that it is a media company, and therefore

“take responsibility for the content it publishes and republishes. It can combine both human and artificial intelligence to sort through the content, labeling news, opinions, hearsay, research and other types of information in ways ordinary users can understand.”

2. Focus on truth

Facebook could then, perhaps, embrace the mission of journalism and watchdog organizations, and as American University scholars of public accountability and digital media systems Barbara Romzek and Aram Sinnreich suggest,

“start competing to provide the most accurate news instead of the most click-worthy, and the most trustworthy sources rather than the most sensational.”

3. Cut users in on the deal

If Facebook wants to keep making money from its users’ data, Indiana University technology law and ethics scholar Scott Shackelford suggests

“flip[ping] the relationship and having Facebook pay people for their data, [which] could be [worth] as much as US$1,000 a year for the average social media user.”

The multi-billion-dollar company has an opportunity to find a new path before the public and lawmakers weigh in.

Jeff Inglis, Science + Technology Editor, The Conversation

This article was originally published on The Conversation. Read the original article.

Thursday, April 5, 2018

Understanding Facebook's data crisis: 5 essential reads

What will Mark Zuckerberg say to Congress? AP Photo/Noah Berger
Jeff Inglis, The Conversation

Most of Facebook’s 2 billion users have likely had their data collected by third parties, the company revealed April 4. That follows reports that 87 million users’ data were used to target online political advertising in the run-up to the 2016 U.S. presidential election.

As company CEO Mark Zuckerberg prepares to testify before Congress, Facebook is beginning to respond to international public and government criticism of its data-harvesting and data-sharing policies. Many scholars around the U.S. are discussing what happened, what’s at stake, how to fix it, and what could come next. Here we spotlight five examples from our recent coverage.

1. What actually happened?

A lot of the concern has arisen from reporting that indicated Cambridge Analytica’s analysis was based on profiling people’s personalities, drawing on work by Cambridge University researcher Aleksandr Kogan.

Media scholar Matthew Hindman actually asked Kogan what he had done. As Hindman explained, “Information on users’ personalities or ‘psychographics’ was just a modest part of how the model targeted citizens. It was not a personality model strictly speaking, but rather one that boiled down demographics, social influences, personality and everything else into a big correlated lump.”

2. What were the effects of what happened?

On a personal level, this degree of data collection – particularly for the 50 million Facebook users who had never consented to having their data collected by Kogan or Cambridge Analytica – was distressing. Ethical hacker Timothy Summers noted that democracy itself is at stake:

“What used to be a public exchange of information and democratic dialogue is now a customized whisper campaign: Groups both ethical and malicious can divide Americans, whispering into the ear of each and every user, nudging them based on their fears and encouraging them to whisper to others who share those fears.”

3. What should I do in response?

The backlash has been significant, with most Facebook users expressing some level of concern over what might be done with personal data Facebook has on them. As sociologists Denise Anthony and Luke Stark explain, people shouldn’t trust Facebook or other companies that collect massive amounts of user data: “Neither regulations nor third-party institutions currently exist to ensure that social media companies are trustworthy.”

4. What if I want to quit Facebook?

Many people have thought about, and talked about, deleting their Facebook accounts. But it’s harder than most people expect to actually do so. A communications research group at the University of Pennsylvania discussed all the psychological boosts that keep people hooked on social media, including Facebook’s own overt protestations:

“When one of us tried deactivating her account, she was told how huge the loss would be – profile disabled, all the memories evaporating, losing touch with over 500 friends.”

5. Should I be worried about future data-using manipulation?

If Facebook is that hard to leave, just think about what will happen as virtual reality becomes more popular. The powerful algorithms that manipulate Facebook users are not nearly as effective as VR will be, with its full immersion, writes user-experience scholar Elissa Redmiles:

“A person who uses virtual reality is, often willingly, being controlled to far greater extents than were ever possible before. Everything a person sees and hears – and perhaps even feels or smells – is totally created by another person.”

And people are concerned now that they’re too trusting.

Jeff Inglis, Science + Technology Editor, The Conversation

This article was originally published on The Conversation. Read the original article.

Monday, February 5, 2018

Improve your internet safety: 4 essential reads

Staying safe online requires more than just a good password. Rawpixel.com/Shutterstock.com
Jeff Inglis, The Conversation

On Feb. 6, technology companies, educators and others mark Safer Internet Day and urge people to improve their online safety. Many scholars and academic researchers around the U.S. are studying aspects of cybersecurity and have identified ways people can help themselves stay safe online. Here are a few highlights from their work.

1. Passwords are a weakness

With all the advice to make passwords long, complex and unique – and not reused from site to site – remembering passwords becomes a problem, but there’s help, writes Elon University computer scientist Megan Squire:

“The average internet user has 19 different passwords. … Software can help! The job of password management software is to take care of generating and remembering unique, hard-to-crack passwords for each website and application.”

That’s a good start.

2. Use a physical key

To add another layer of protection, keep your most important accounts locked with an actual physical key, writes Penn State-Altoona information sciences and technology professor Jungwoo Ryoo:

“A new, even more secure method is gaining popularity, and it’s a lot like an old-fashioned metal key. It’s a computer chip in a small portable physical form that makes it easy to carry around. The chip itself contains a method of authenticating itself.”

Just don’t leave your keys on the table at home.

3. Protect your data in the cloud

Many people store documents, photos and even sensitive private information in cloud services like Google Drive, Dropbox and iCloud. That’s not always the safest practice because of where the data’s encryption keys are stored, explains computer scientist Haibin Zhang at University of Maryland, Baltimore County:

“Just like regular keys, if someone else has them, they might be stolen or misused without the data owner knowing. And some services might have flaws in their security practices that leave users’ data vulnerable.”

So check with your provider, and consider where to best store your most important data.
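One way to stay in control of those keys is client-side encryption: scramble files on your own device before they ever reach the cloud, so the provider only ever stores ciphertext. Here is a minimal sketch using the open-source Python `cryptography` package – an illustration of the idea, not an endorsement of any particular tool or service:

```python
# pip install cryptography
from cryptography.fernet import Fernet

# Generate a key and keep it somewhere the cloud provider never sees
# (a local file, a hardware token, a password manager).
key = Fernet.generate_key()
f = Fernet(key)

plaintext = b"Tax documents and family photos"
ciphertext = f.encrypt(plaintext)      # this is what gets uploaded

# Later, only someone holding the key can recover the original data.
assert f.decrypt(ciphertext) == plaintext
```

The trade-off is that you, not the provider, are responsible for the key: lose it and the data is unrecoverable, which is the flip side of no one else being able to read it.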

4. Don’t forget about the rest of the world

Sadly, in the digital age, nowhere is truly safe. Jeremy Straub from North Dakota State University explains how physical objects can be used to hijack your smartphone:

“Attackers may find it very attractive to embed malicious software in the physical world, just waiting for unsuspecting people to scan it with a smartphone or a more specialized device. Hidden in plain sight, the malicious software becomes a sort of ‘sleeper agent’ that can avoid detection until it reaches its target.”

It’s a reminder that using the internet more safely isn’t just a one-day effort.

Jeff Inglis, Science + Technology Editor, The Conversation

This article was originally published on The Conversation. Read the original article.