IoT Functionality and Personal Privacy are Inversely Correlated

The Internet of Things is powered by data. The more data the better, because the data powers the killer feature of IoT: personalization.

As I write about in The Real Internet of Things, one of the most compelling features of IoT is going to be Ubiquitous Customization. Everywhere you go, things will shape and mold themselves to your preferences.

The problem is that this requires those things to have your preferences. And have them they will.

As I mentioned in a previous post, people are often reluctant to give up their personal information—including preferences—but if you ask for help customizing something for them they’re eager to assist.

The issue is that this data flow is one-way. The data goes into the IoT, but it doesn't come out. Your preferences are most useful to you when you give them in extreme detail, and when as many companies as possible have them.

And that brings us to the functionality/privacy tradeoff chart above. There's a fundamental conflict here: ideal IoT functionality requires your personal data to be as exhaustive, as realtime, and as widely distributed as possible.

The more data you give—to as many companies as possible—the more useful IoT becomes. But you destroy your privacy in the process. And the more private you are with your data the less adaptive, contextual, and personalized your IoT experience will be.

In short, it’s a zero-sum game between personal privacy and IoT functionality, and that raises the question of which one we’ll choose.

I think the answer is clearly that we’ll choose the functionality, and personal privacy will become a legacy concept—like handwritten letters sent through the post.

The best we can hope for is that people understand this IoT privacy/functionality tradeoff before they provide their data. Because once they provide it, it cannot easily be undone.

Notes

  1. The allure of IoT functionality will be ever-present and unrelenting. You could go 15 years giving zero data, then see a product you love and give everything, and it'll be no different than if you had been giving it all along. In the IoT world, preference data will soak into the environment. It'll be in everything, and like rain soaking into the ground, you never really get it back.
  2. Disregard the exact timeline and the year at which privacy and functionality cross over. All I was trying to show is that these two things are trending aggressively in opposite directions.
  3. The reason this data surrender is one-directional is that it's hard to change most of the data and preferences you provide. Are you going to change your preference in male suitors? Are you going to change your date of birth? Your national ID number? The names of your kids? Not really.
  4. Another consideration with personal data and preferences is that this is also the type of data that can give con artists and intelligence types leverage over you. Preferences are buttons, and smart, trained people know how to press them. It's an interesting dynamic, because this is precisely what machine learning is going to do. Algorithms are going to parse all your data, mix it with billions of other people's data, and then recommend the exact way to make you happy. But the flip side is that this will also be a recipe for manipulating you.

__

I do a weekly show called Unsupervised Learning, where I curate the most interesting stories in infosec, technology, and humans, and talk about why they matter. You can subscribe here.
