Tech companies don’t have a clear answer when it comes to data privacy for the Internet of Things

‘In some respects we’re solving yesterday’s problems.’

Aaron Sankin

If you regularly use a computer or smartphone to access the Internet, which is probable considering you’re reading this right now, there are likely no organizations on the planet that know more about you than Google, Facebook, and Microsoft. When the people in charge of privacy at those three tech giants gathered onstage for a panel at the South By Southwest Interactive conference in Austin, Texas, on Saturday morning, the question that dominated the discussion was how to manage users’ understanding of and expectations about privacy when everything is connected to the Internet—not just computers and smartphones.

The panel, titled “How the Big Tech Firms Protect Consumer Privacy” and hosted by International Association of Privacy Professionals President Trevor Hughes, brought together Facebook Chief Privacy Officer Erin Egan, Google Legal Privacy Director Keith Enright, and Microsoft Chief Privacy Counsel Mike Hintze for a conversation about how the three firms handle privacy issues, both in complying with government regulations and in maintaining the trust of their users.

“One of the things that makes privacy interesting is that the law is evolving and the technology is evolving, so the questions are constantly evolving,” said Hintze.

As a result, when tech companies look at the privacy implications of a new product or feature, the question is often less about whether it runs afoul of the current patchwork of government regulations than about the perceptions of the consumers using it. “Would people be surprised about the data being collected by a product?” Hintze asked. “Would they be unhappy about not being able to turn it off?”

The panelists agreed that the best way to accomplish this is by thinking about privacy from the very earliest stages of the development cycle. For example, Egan recounted how Facebook brought in privacy experts from academia and the nonprofit sector to look at the privacy implications of a feature the social network implemented last year that alerts users when their friends are nearby.

The company was understandably concerned about a backlash from users having their locations reported to people they know on Facebook without their explicit consent. The experts advised Facebook to represent the geolocation feature with a picture of a phone in a pocket as a visual companion to the text notification.

“Consent is a core aspect of privacy,” Egan insisted. “Consent can be really good too, if it’s in context. The question is what does it mean, and how do we present it in a way so people understand it?”

However, getting user consent on how information is used has issues of its own. The nuts and bolts of how Facebook uses the data of its more than 1 billion users are tucked away in the depths of its lengthy user agreement. The company has implemented a simple interface where users can do privacy checkups to manage what information they share and with whom, but those efforts haven’t seemed to engender a significant level of faith among the public at large.

According to a recent survey of nearly 2,000 registered voters conducted by Morning Consult, only 32 percent of respondents had “total” confidence in Facebook’s ability to keep their personal information and data secure.

Facebook isn’t exactly alone in having a user trust problem. The survey found that much of the tech industry is facing a crisis of confidence when it comes to the responsible use and storage of user data. The share of respondents who said they trust each company stood at 50 percent for Google and 54 percent for Apple. For Uber, the figure was an abysmal 18 percent. Not a single organization the pollsters asked about cracked 60 percent.

What’s interesting here is that a lack of confidence in data security isn’t stopping people from using these services. Uber, the least trusted of all the companies in the survey, still delivers over two million rides per day. Nevertheless, likely as a result of that uncertainty, people are worried about what companies are doing with their data, which is precisely why the panelists emphasized that informing the public about what happens to their information is so crucial.

“To the degree that we’re engaging with a user on a screen, even if it’s a small screen on a phone, we’re getting better and better at giving people ways to meaningfully give consent,” noted Enright, adding one major caveat.

“In some respects we’re solving yesterday’s problems,” he continued. “All of that work doesn’t necessarily move the ball forward on solving tomorrow’s problem when we’re living in a world of connected cars and thermostats. It’s going to have to be more of a focus on user education … We are necessarily going to be moving away from the paradigm we have now with keyboards and screens.”

As much of the technology on display at the conference indicates, the industry is rapidly moving into a world where a laptop or smartphone screen is no longer the only window into the Internet. The number of connected devices that comprise the so-called Internet of Things is expected to grow to 38 billion individual units by 2020, generating an unimaginably vast and precise amount of data about every aspect of our lives. Informing users of exactly where their privacy ends and how their data is used is going to become even more difficult.

Take, for example, an Internet of Things light switch installed in a person’s house. What’s the responsible way to inform people about how the data from that light switch is used? If it’s buried deep in a user manual that comes with the device, the odds that anyone will thoroughly read and understand it are essentially zero. It could be put on the device itself, but what if that light switch doesn’t come with a display panel, or that panel isn’t set up to easily display a lot of text?

In that case, the issue of consent isn’t just dictated by whether the manufacturer places a bold privacy alert on its website, but also by the very nature of the device itself.

Hintze suggested drawing a line based on what is being done with the data to determine whether a privacy notification is necessary. “If the data is anonymous and aggregated, the privacy issue maybe isn’t that big an issue,” he said. “But if the data is connected to an individual, that could be a problem.”

He added that, in the latter instance, there’s likely to be a sign-up process that does involve a traditional screen. At that point, the user can, at least in theory, be presented with information about, or even given the agency to proactively change, precisely what information they share.
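To make Hintze’s line concrete, here is a minimal sketch, in Python, of how telemetry from that hypothetical light switch might split along the two paths he describes. Everything here (the names LightSwitchEvent and TelemetryPipeline, the consent set, the whole structure) is illustrative, not any company’s actual code:

    from collections import Counter
    from dataclasses import dataclass, field

    @dataclass
    class LightSwitchEvent:
        device_id: str    # ties the event to a household, so privacy-sensitive
        hour_of_day: int  # 0 through 23
        turned_on: bool

    @dataclass
    class TelemetryPipeline:
        consented_devices: set = field(default_factory=set)
        hourly_totals: Counter = field(default_factory=Counter)  # anonymous aggregate
        per_device_log: list = field(default_factory=list)       # individual-level data

        def record(self, event: LightSwitchEvent) -> None:
            # Aggregated, anonymous path: drop the identifier and keep only
            # population-level counts. In Hintze's framing, the smaller issue.
            self.hourly_totals[event.hour_of_day] += 1
            # Individual path: data tied to a person is stored only if consent
            # was recorded, presumably during a screen-based sign-up flow.
            if event.device_id in self.consented_devices:
                self.per_device_log.append(event)

    pipeline = TelemetryPipeline(consented_devices={"switch-42"})
    pipeline.record(LightSwitchEvent("switch-42", hour_of_day=19, turned_on=True))
    pipeline.record(LightSwitchEvent("switch-99", hour_of_day=19, turned_on=True))
    # Both events are counted in the anonymous aggregate; only the consented
    # device's event lands in the individual-level log.

The point of the sketch is simply that the consent question attaches to the second path; the aggregate path never records who flipped the switch.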

But what happens when there is no screen anywhere in the process? Is it better to just assume that consumers will value the convenience and utility offered by their products over any concerns about privacy and bury data use policies somewhere in the dark recesses of their corporate website? Or is it the obligation of companies to build privacy notification and accessibility into the very design of the product?

“I think that ethics is the next big issue in the privacy space,” said Hintze. “It’s not just what can you do. It’s what should you do. … We need philosophers on staff these days.”

Illustration by Max Fleishman

 