SXSW15 Redux: What happens at SX spreads everywhere

Photo: Alan Cleaver, CC 2.0

First, let’s get this out of the way: every year, SXSW regulars say the festival has jumped the shark. It’s too big, there are too many panels, and they’re poorly curated. It’s impossible to get anywhere. Too many lines, too many wristbands, too few taxis, too damn many cards, pens, pins, stickers that will inevitably end up in landfill. Breakfast tacos become a temporary food group. And there’s always a contingent who mistake the festival for Spring Break and leave their trash, noise and bodily fluids everywhere.

At the same time, certain things happen at SXSW that rarely happen elsewhere: conversations in hallways, on the street, or in line for barbecue that build or change businesses; serendipitous combinations of technologies; new and old friends settling into the Driskill or the Four Seasons or a corner at a party somewhere at 1:00 am to plan out the innovations that will drive next year’s technology agenda.

Judging from the conversations I had, this was generally agreed to be a transitional year. Social media, long the darling of SX, was significantly less prominent, replaced by the maker movement, the collaborative economy, IoT, privacy and surveillance, cognitive computing/AI, digital ethics, and data, data, everywhere.

Sure, people were Yik Yakking and Meerkating away, but the tone of the conversation was a bit more sober, at least among the people I interacted with. Meerkat in particular raised the spectre of Google Glass, specifically because of its privacy implications. The Beacon sensors throughout the conference logged the movements of thousands of people in an effort to better understand attendee traffic patterns and preferences, although a lot of people I spoke with were unaware that they were being tracked, a sign of cognitive dissonance if there ever was one.

I counted 21 panels that featured “privacy” in the title (121 that included it in the description), and five with “ethics” (93 that included it in the description). “Surveillance” clocked in at five, with 38 total results. The panel on DARPA was so oversubscribed that many, many people were turned away, while other panels (as usual) had barely enough attendees to fill the first row.

Overall, it felt as though the zeitgeist was catching up to Albert Einstein’s assertion, so many decades ago, that “it has become appallingly obvious that our technology has exceeded our humanity.” I wasn’t able to attend as many sessions as I wanted (who is?), but the ones I attended were terrific. Parry Aftab, CEO of WiredTrust, and Mat Honan, bureau chief of BuzzFeed, proposed a framework for privacy by design that seeks to embed privacy into business practices and operational processes. Not sexy, not even cool (yet), but so, so needed. In that panel, Ann Cavoukian (via video) rejected the notion of privacy as a zero-sum game: we don’t have to trade competitiveness for ethics. To me, that was a breath of fresh air.

The panel I participated in, “Emerging Issues in Digital Ethics” (hashtag: #ethicscode), was moderated by the brilliant Don Heider, Founding Dean and Professor at the School of Communication at Loyola University Chicago, and Founder of the Center for Digital Ethics and Policy there. My co-panelists, Brian Abamont from State Farm (speaking on his own behalf) and Erin Reilly, Managing Director & Research Fellow at the USC Annenberg School for Communication and Journalism, covered everything from privacy to cyberbullying, Gamergate to doxxing to scraping. There was so much ground to cover, and yet the panel started to feel more like an open conversation than a transfer of “knowledge” (at least to me). The audience was at least as fluent with these topics as we were; we’re all still figuring it out.

A final thought: every year I insist that it’s my last SXSW, and every year I break down, pack my comfiest shoes and attend. If there’s any takeaway this year, it’s that SX continues to be a pretty good indicator of the tech zeitgeist. I’d love to see some of my data science friends go through the schedule and analyze its trending topics from year to year. With all our obsession with data, wouldn’t that be an interesting benchmark to have?
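For anyone inclined to take that on, here is a minimal sketch of what such an analysis could look like, assuming each year’s schedule has been exported to a CSV with year, title and description columns. The file name, column names and keyword list below are hypothetical placeholders, not an actual SXSW data feed.

```python
import csv
from collections import Counter, defaultdict

# Topics to track; extend this list as the zeitgeist shifts.
KEYWORDS = ["privacy", "ethics", "surveillance", "maker", "iot", "social media"]

def keyword_counts(path):
    """Count, per year, how many sessions mention each keyword in the title or description."""
    counts = defaultdict(Counter)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            text = (row["title"] + " " + row["description"]).lower()
            for keyword in KEYWORDS:
                if keyword in text:
                    counts[row["year"]][keyword] += 1
    return counts

if __name__ == "__main__":
    # "sxsw_schedule.csv" stands in for however the schedule data gets exported.
    for year, topic_counts in sorted(keyword_counts("sxsw_schedule.csv").items()):
        print(year, dict(topic_counts))
```

Even something this crude, run over several years of schedules, would show whether the shift away from social media toward privacy, ethics and IoT is a blip or a trend.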


Three Implications of ‘POPI’: South Africa’s New Legislation on Data Hoarding

Photo: BuzzFarmers, cc 2.0

There probably won’t be a TLC reality show about it anytime soon, but the concept of “data hoarding” is real, and it’s about to be declared illegal in South Africa, according to an article last week by ITWeb SA. The story concerns the imminent implementation of the Protection of Personal Information (POPI) Act, which stipulates that “data may only be processed for as long as there are clear and defined business purposes to do so” (italics mine).

If you haven’t heard the term before, “data hoarding” is, according to Michiel Jonker, director of IT advisory at Grant Thornton:

“The gathering of data without a clear business reason or security strategy to protect the underlying information.”

Implication #1: data hoarding legislation could spread to other countries

This is big news for organizations doing business in South Africa, as well as data-watchers outside the country: marketers, data scientists, strategists and anyone whose business depends on collecting, processing, using and storing data.

This means you.

Says Jonker, “We are all data hoarders. Data is hoarded in electronic and non-electronic formats and, with the emergence of the Internet of Things, machines are also creating data. People also have a tendency to multiply data by sharing it, processing it and storing it.” “The problem with data hoarding,” he says, “is it attracts ‘flies.’ As data is being referred to as the new currency, big data also attracts criminals.”

I asked Judy Selby, a partner at Baker Hostetler and an expert in data privacy law, whether legislation such as POPI could ever be adopted in the United States. She believes that it could. “Some of our privacy laws have criminal penalties, so it’s not unheard of. In the context of data hoarding, especially involving a data broker, I suspect if there’s a big privacy or security incident associated with the data, some of the more active states in this space (such as California, for example) might make a move in that direction.”

Implication #2: Data hoarding legislation and risk avoidance put pressure on data strategy

This piece of legislation gets at a particularly thorny issue for data scientists, ethicists, marketers—really anyone interested in balancing the twin imperatives of extracting insight and fostering trust. Extracting the most useful insights, developing the most personalized services, and running the most effective and efficient campaigns and organizations all require data—lots of it. It’s not always possible to anticipate what will be needed, so the natural impulse is to store everything until it comes in handy.

But to protect privacy, and to reduce what Jonker refers to as a company’s “risk surface,” we actually need to collect as little data as practically possible, and only for uses that we can define today. POPI lays down the law for that decision in South Africa.
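To make that concrete, here is a minimal, hypothetical sketch of what purpose-bound retention might look like in code. The record structure, purpose registry and retention periods are invented for illustration; they are not drawn from the POPI text.

```python
from datetime import date, timedelta

# Hypothetical purpose registry: each defined business purpose gets an explicit retention period.
RETENTION = {
    "billing": timedelta(days=7 * 365),        # e.g., statutory financial record-keeping
    "support_ticket": timedelta(days=2 * 365),
    "marketing_optin": timedelta(days=365),
}

def should_delete(record, today=None):
    """Flag records with no declared purpose, an unknown purpose, or an expired
    retention window as candidates for deletion rather than indefinite storage."""
    today = today or date.today()
    purpose = record.get("purpose")
    if purpose not in RETENTION:
        return True  # no clear and defined business purpose, so don't keep it
    return today - record["collected_on"] > RETENTION[purpose]

# Example run over a made-up dataset.
records = [
    {"id": 1, "purpose": "billing", "collected_on": date(2012, 3, 1)},
    {"id": 2, "purpose": None, "collected_on": date(2014, 6, 1)},
    {"id": 3, "purpose": "marketing_optin", "collected_on": date(2014, 11, 15)},
]
for r in records:
    print(r["id"], "delete" if should_delete(r, today=date(2015, 2, 1)) else "keep")
```

The point is less the code than the discipline it forces: every piece of stored data has to name its purpose and its expiry, which is exactly the habit data-hoarding legislation is trying to create.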

Implication #3: Organizations should address security and define use cases now

Organizations should look closely at the two main tenets of the POPI legislation—clear and defined business reasons, and a security strategy—for leading indicators of issues that may crop up in other geographies.

Both tenets are challenging, partly because of the potential multitude of business cases for data, and because of the many and disparate data types available. While security strategy may be the most obvious (albeit challenging) first step, we would also recommend early thinking on future uses of big data, including IoT (sensor) data.

My colleague Jessica Groopman’s research report, entitled “Customer Experience in the Internet of Things,” published today, offers excellent examples of how organizations are using IoT today, and how they may do so in the future. Reading this report is a terrific first step toward envisioning how such data might be used in the enterprise.

We’ll be watching this space closely for future developments, and suggest you do the same.


What’s the ROI of Trust? Results of the Edelman Trust Barometer 2015

Photo: purplejavatroll, CC 2.0

Each year, Edelman conducts research that culminates in the publication of its Trust Barometer (disclosure: I worked at Edelman when the first report was published). It’s been long enough now–15 years and counting–that the study has seen meaningful shifts in patterns of trust in governments, NGOs, business and business leaders. We’ve seen these shifts through the lens of many significant geopolitical events: the 1999 “Battle in Seattle,” the war in Iraq, and the Great Recession of 2007-2008 are some examples.

But this year’s results have troubling implications for the technology industry in particular. According to the report, “[F]or the first time since the end of the Great Recession, trust in business faltered.” Technology, while still the most trusted of industries at 78 percent, experienced declines in many countries across multiple sectors. Trust in the consumer electronics sector fell across 74 percent of countries, trust in the telecommunication sector fell in 67 percent of countries, and, “in 70 percent of countries trust in technology in general sank.”

But most troubling of all were the findings about trust in innovation. Granted, this was the first time that Edelman has asked this particular set of questions, but it likely won’t be the last. And it sets a useful baseline against which to measure and better understand how the general population feels about the nature and pace of innovation over time.

The finding–that 51 percent of respondents feel the pace of innovation is “too fast”–is worth unpacking, especially as it varies substantially among industries. One of the more interesting findings: trust in a particular sector–healthcare or financial services, for example–does not guarantee that the innovations in that sector are also trusted. Consider financial services (trust at 54 percent overall) compared to electronic payments (62 percent, up 8 percent), or Food & Beverage (a 67 percent overall level of trust, compared to 30 percent for GMO foods). Of course, these numbers change dramatically as one views them across regions, from developed to developing countries, for example.

So, even given high overall levels of trust in the technology industry, we cannot sit comfortably and assume that there is a “trust dividend” that we can collect on as we continue to work on topics from cloud computing to big data to artificial intelligence.

While we don’t have data that specifically links levels of trust in technology and innovation to Edward Snowden’s revelations about NSA surveillance methods, or recent corporate data breaches, or disclosures about just how many pieces of credit card metadata you need to identify an individual, we do have evidence as to what kinds of behaviors affect perceptions of trust.

Unsurprisingly, integrity and engagement are the winners.

From there, you don’t need a map to get to the value of integrating trust into everything we do as an industry, or, more accurately, as a collection of many interrelated industries. Here is how Edelman reported respondents’ actions based on levels of trust:

Source: Edelman

And this is where we have to turn to the inevitable question: what are the ingredients in the ROI of trust? Based on the implicit and explicit findings above, I’d propose the following list of metrics, for starters:

  • Propensity to buy/revenue opportunity
  • Brand reputation
  • Customer value (transaction to aggregate)
  • Shareholder value
  • Cost savings/improvement based on loyalty
  • Cost savings/improvement based on advocacy

The next step, of course, is to take those twin behaviors–integrity and engagement–and drill down so that we really understand what moves the needle one way or another. That will be a continuing topic for upcoming research and blog posts.


This week in #digitalethics: the useful versus creepy problem

Photo: Phillippe Teuwen, cc 2.0

Remember when we were twelve, and we thought the funniest thing ever was to play the game where you add the phrase “in bed” to every sentence? As the mom to a middle-schooler, I’m rapidly being re-introduced to this delightful genre of humor. Puberty is disruptive. Context is everything.

And as I continue to think about the events of the last week, I can’t help but add the phrase “in digital” to every story I read. It changes, well, not everything, but a lot.

In digital, data is less transactional, more ambient

An excellent article by Tom Goodwin in AdAge argues that connected devices—from refrigerators to wearables to cars to, of course, mobile phones, what we call the Internet of Things—are driving a redefinition of data collection, from something that requires action (e.g., dial up in the olden days) to something that just…happens. And this data—what you eat, where you go, how much you move—is increasingly intimate.

Speaking of which, your television may be listening

You’ve seen the movies; this is classic thriller/horror-story fare. You would hope to be entitled to privacy in your own home (unless you are a spy, under investigation, the owner of a baby monitor, or Olivia Pope), but if you use voice commands, your TV may actually be listening to all your conversations. Samsung just added a supplemental disclosure to its Smart TV web page stating that the company:

… may collect and your device may capture voice commands and associated texts so that we can provide you with Voice Recognition features and evaluate and improve the features. Please be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party through your use of Voice Recognition.

In situations like these, is a disclosure on a website enough? As a consumer, I say nope. I happened to see a news story about this; otherwise I would have had no idea it was even possible. Is this mass collection of spoken data even necessary, or can the data collection be turned on and off?

Facebook will soon be able to identify you in any photo

An article in Science last week revealed that, soon, using its DeepFace technology, Facebook will be able to identify you in any photo. At the same time, a Facebook spokesperson says, “you will get an alert from Facebook telling you that you appear in the picture…You can then choose to blur out your face from the picture to protect your privacy.” This of course raises the question of when consent to be identified actually occurs: when you accept the Terms of Service, or when you are presented with the option of allowing yourself to be tagged or blurring your face?

Artificial intelligence either does or doesn’t signal the end of humanity as we know it

Dina Bass and Jack Clark of Bloomberg ran a story last week about efforts to balance the discussion about what ambient data and artificial intelligence mean for the future of humanity. On the “AI is a potential menace” side of the debate: Elon Musk, Stephen Hawking and Bill Gates. Representing “Please don’t let AI be misunderstood”: Paul Allen, Jack Ma and researchers from MIT and Stanford.

Wherever one’s convictions may lie, this is a conversation that must become more explicit and specific. It requires a deeper understanding not only of the technologically possible (what machines can do today and likely in the future) but also of the ethical implications. One useful example harks back to the old “lifeboat ethics” question: what happens when a self-driving car is in a no-win situation and must sacrifice itself and its passengers, or hit a pedestrian or bicyclist? What would you do? Who decides?

Context is everything

For most people, there is no expectation that your TV is listening to conversations in your home. Your fridge is there to keep things cold, not to provide data on your eating habits. If you happen to be photographed while going about your daily business, there’s little chance (unless you’re a celebrity) that there will be consequences. Anonymity bestows a degree of privacy. But when data is ambient, anonymity is no longer possible.

Some people call this the “useful versus creepy” problem: in the digital age, does technology have to be creepy to be useful? Or does respecting privacy and building in appropriate controls make technology inherently less useful?

I think this is a false dichotomy.

Earl Warren, former chief justice of the United States, once said “Law floats in a sea of ethics.” To that I’d add, “Ethics floats in a sea of technology.”

Jess Groopman and I are collecting use cases and working on frameworks to parse these issues and make these conversations more explicit. We welcome and will cite your contributions.


Big Boulder Initiative: What’s on the Social Data Ethics Agenda for 2015?

Today, #snowpocalypse2015 permitting, the board of directors of the Big Boulder Initiative is meeting in San Francisco to plan 2015 in more granular detail. As a member, I’m really proud of what we accomplished during the past year, but recognize that there is a lot of ground to cover. Here are some of the highlights of the past year, from a post last week by board director Chris Moody, VP Data Strategy at Twitter:

  1. We established the first independently operated and self-sustaining 501(c)(6) nonprofit trade association dedicated to laying the foundation for the long-term success of the social data industry.
  2. We formed a board of directors composed of representatives from enterprises, startups and academia within the ecosystem, whose mission is to collectively address key challenges within the industry.
  3. We published a Code of Ethics and Standards in an effort to define a set of ethical values for the treatment of social data that will be used as a benchmark for companies and individuals associated with the social data industry around the world.
  4. Earlier this month, BBI held a half-day workshop in Boston, hosted by Fidelity Investments, focused on the ethics of social data.
  5. We added three new board members:
    • Justin DeGraaf, Global Media Insights Director at The Coca-Cola Company
    • Mark Josephson, CEO of Bitly
    • Farida Vis, Director of the Visual Social Media Lab and Faculty Research Fellow at the University of Sheffield
  6. Finally, Brandwatch, IBM, NetBase and Twitter have joined the Big Boulder Initiative as founding members. In recognition of their efforts, BBI has added the following board observers to the board of directors:
    • Will McInnes, CMO of Brandwatch
    • Jason Breed, Partner/Global Lead, Social Business at IBM
    • Pernille Bruun-Jensen, CMO of NetBase
    • Randy Almond, Head of Data Marketing at Twitter

Over the next several weeks and months, we’ll be holding events (details to come!) and publishing more about our activities. In the meantime, if you’d like to hear about membership or you have any questions about the Big Boulder Initiative overall, please contact:

Bre Zigich
Big Boulder Initiative Board Secretary
bre@twitter.com
720.212.2120
