This week in #digitalethics: the useful versus creepy problem

Photo: Phillippe Teuwen, cc 2.0

Remember when we were twelve, and we thought the funniest thing ever was to play the game where you add the phrase “in bed” to every sentence? As the mom to a middle-schooler, I’m rapidly being re-introduced to this delightful genre of humor. Puberty is disruptive. Context is everything.

And as I continue to think about the events of the last week, I can’t help but add the phrase “in digital” to every story I read. It changes, well, not everything, but a lot.

In digital, data is less transactional, more ambient

An excellent article by Tom Goodwin in AdAge argues that connected devices (refrigerators, wearables, cars and, of course, mobile phones: what we call the Internet of Things) are driving a redefinition of data collection, from something that requires deliberate action (dialing up, in the olden days) to something that just…happens. And this data (what you eat, where you go, how much you move) is increasingly intimate.

Speaking of which, your television may be listening

You’ve seen the movies; this is classic thriller/horror-story fare. You would hope to be entitled to privacy in your own home (unless you are a spy, under investigation, the owner of a baby monitor, or Olivia Pope), but if you use voice commands, your TV may actually be listening to all your conversations. Samsung just added a supplemental disclosure to its Smart TV web page stating that the company:

… may collect and your device may capture voice commands and associated texts so that we can provide you with Voice Recognition features and evaluate and improve the features. Please be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party through your use of Voice Recognition.

In situations like these, is a disclosure on a website enough? As a consumer, I say nope. I only happened to see a news story about this; otherwise I would have had no idea it was even possible. Is this mass collection of spoken data even necessary, or can the data collection be turned on and off?

Facebook will soon be able to identify you in any photo

An article in Science last week revealed that Facebook, using its DeepFace technology, will soon be able to identify you in any photo. At the same time, a Facebook spokesperson says, “you will get an alert from Facebook telling you that you appear in the picture…You can then choose to blur out your face from the picture to protect your privacy.” This, of course, raises the question of when consent to be identified actually occurs: when you accept the Terms of Service, or when you are presented with the option of allowing yourself to be tagged or blurring your face?

Artificial intelligence either does or doesn’t signal the end of humanity as we know it

Dina Bass and Jack Clark of Bloomberg ran a story last week about efforts to balance the discussion about what ambient data and artificial intelligence mean for the future of humanity. On the “AI is a potential menace” side of the debate: Elon Musk, Stephen Hawking and Bill Gates. Representing “Please don’t let AI be misunderstood”: Paul Allen, Jack Ma and researchers from MIT and Stanford.

Wherever one’s convictions may lie, this is a conversation that must become more explicit and specific. It requires a deeper understanding not only of the technologically possible (what machines can do today and are likely to do in the future) but also of the ethical implications. One useful example harks back to the old “lifeboat ethics” question: what happens when a self-driving car is in a no-win situation and must either sacrifice itself and its passengers or hit a pedestrian or bicyclist? What would you do? Who decides?

Context is everything

You probably have no expectation that your TV is listening to conversations in your home. Your fridge is there to keep things cold, not to provide data on your eating habits. If you happen to be photographed while going about your daily business, there’s little chance (unless you’re a celebrity) that there will be consequences. Anonymity bestows a degree of privacy. But when data is ambient, anonymity is no longer possible.

Some people call this the “useful versus creepy” problem: in the digital age, does technology have to be creepy to be useful? Or does respecting privacy and building in appropriate controls make technology inherently less useful?

I think this is a false dichotomy.

Earl Warren, former chief justice of the United States, once said, “Law floats in a sea of ethics.” To that I’d add, “Ethics floats in a sea of technology.”

Jess Groopman and I are collecting use cases and working on frameworks to parse these issues and make these conversations more explicit. We welcome and will cite your contributions.
