Facebook Reactions: Not Just Another Smiley Face

Yesterday, Facebook started testing its answer to the “dislike button” in Spain and Ireland: a set of six animated emoji called “Reactions” (love, haha, yay, wow, sad, and angry). The emoji address a lot of what people have asked for on Facebook; specifically, a bit more nuance in how to respond to posts. Here’s what Chris Cox, Chief Product Officer at Facebook, had to say:

Today we’re launching a pilot test of Reactions — a more expressive Like button.

As you can see, it’s not a “dislike” button, though we hope it addresses the spirit of this request more broadly. We studied which comments and reactions are most commonly and universally expressed across Facebook, then worked to design an experience around them that was elegant and fun. Starting today Ireland and Spain can start loving, wow-ing, or expressing sympathy to posts on Facebook by hovering or long-pressing the Like button wherever they see it. We’ll use the feedback from this to improve the feature and hope to roll it out to everyone soon.

This is a much smarter move than the more obvious and problematic option of a “dislike” button, for several reasons. One is context: a “dislike” can be a sympathetic response to a friend’s hard day, but it is also vulnerable to trolling and other (context-free) negativity. Another is range of expression: even a set of six emoji can cover expressive territory that a simple “like” or “share” cannot. (For more on how Facebook arrived at these options, and a couple of other fun nuggets, see Casey Newton’s hilarious piece in The Verge.)

Of course, brands’ first question will be how “Reactions” will affect ranking, a hot-button issue for some time now. Chris Tosswill, Facebook Product Manager, says on the Facebook blog that:

We see this as an opportunity for businesses and publishers to better understand how people are responding to their content on Facebook. During this test, Page owners will be able to see Reactions to all of their posts on Page insights. Reactions will have the same impact on ad delivery as Likes do.

But one of the more interesting aspects for me is what these six little guys mean from a brand-strategy point of view. Here’s why I think this was a smart move on Facebook’s part:

  1. Universality. Pictorial languages such as emoji don’t require translation and are more culturally universal than written language. That’s a cost and time saver for global organizations.
  2. Familiarity. Emoji are becoming more commonly used. In fact, a May 2015 article in the BBC cited a study that found that emoji is the United Kingdom’s fastest-growing language.
  3. Simplicity. From a data point of view, structured data is easier to process and analyze than unstructured data (see the sketch after this list). At the same time, however, not everything will be black and white (or other colors); human beings are notoriously resourceful when it comes to applying sarcasm and other shades of gray.
  4. Succinctness. Emoji are fairly economical in terms of screen real estate, a boon to UX everywhere.
  5. Extensibility. You can always add more!
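A minimal sketch of that simplicity point, in Python with invented data: a closed set of reactions can be counted directly, while free-text comments need natural language processing before they yield anything comparable.

```python
from collections import Counter

# Structured: a fixed vocabulary of reactions aggregates trivially.
reactions = ["love", "haha", "wow", "sad", "angry", "love", "haha"]
print(Counter(reactions))
# e.g. Counter({'love': 2, 'haha': 2, 'wow': 1, 'sad': 1, 'angry': 1})

# Unstructured: free-text comments need NLP (tokenization, sentiment
# analysis, sarcasm detection) before you can count anything meaningful.
comments = ["ugh, AGAIN?!", "love it!!!", "oh yeah, 'great' news..."]
# ...there is no one-liner for this side.
```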

True to Facebook’s style, this is a test-and-learn process, as it should be for brands, too, once Reactions are available to them. It also means there are a lot of as-yet-unanswered questions: Will brands be able to use these emoji outside Facebook? If so, when? And, more importantly, should they? Will they have a range of options in terms of which emoji they can use, or will they have to offer all six? As you can imagine, this can get complex fairly quickly.

It’s also interesting to imagine our experience as consumers. For example, I could see using “angry” or “sad” emoji if a favorite item is out of stock, or “haha” for fashions I consider particularly ludicrous. Can I say “haha” if my flight is delayed? What does “wow” really mean? Six little faces can mean a lot of things, guys.

As a brand, I would be interested in benchmarking for a while to see what my “normal” looks like. Then I’d want to better understand cases where the emoji is actually being used in unexpected ways. And of course I’d want to compare “Reactions” to other signals. That’s going to require some upfront thinking and scenario planning.
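As a rough illustration of what that benchmarking could look like, here’s a hedged sketch with invented weekly counts and an arbitrary threshold; the idea is simply to establish a baseline (“normal”) and flag reaction volumes that deviate from it.

```python
import statistics

# Hypothetical weekly counts of "haha" reactions on a brand's posts.
weekly_haha = [120, 135, 128, 118, 131, 124, 402]  # the last week spikes

baseline = weekly_haha[:-1]           # everything before the latest week
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

latest = weekly_haha[-1]
z = (latest - mean) / stdev
if abs(z) > 3:                        # the threshold is a judgment call
    print(f"Unexpected 'haha' volume (z = {z:.1f}): investigate context")
```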

Of course, there are some interesting analytics options for media and other types of organizations. I can already see an “anger index” or even a “wow index” to compare reactions to news stories over time. I also have to wonder whether the team saw evidence for a WTF emoji (or its more SFW cousin, WTH) but sidelined the idea because of its obvious trolling potential. It sure would make for a great Buzzfeed-style roundup story at the end of 2016, though. So I hereby lobby for some kind of “disbelief” emoji come 2016. I think “wow” may be our only option for now.
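For what it’s worth, an “anger index” could be as simple as the share of total reactions that are angry, tracked over time. Here’s a sketch with invented counts (real numbers would presumably come from Page Insights or a social analytics tool):

```python
# Hypothetical daily reaction counts for one news story.
daily = [
    {"angry": 40, "sad": 10, "wow": 25, "haha": 5, "love": 20, "like": 300},
    {"angry": 90, "sad": 15, "wow": 30, "haha": 8, "love": 18, "like": 280},
]

for day, counts in enumerate(daily, start=1):
    total = sum(counts.values())
    anger_index = counts["angry"] / total  # share of all reactions that are angry
    print(f"Day {day}: anger index = {anger_index:.1%}")
```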

But I’m getting ahead of myself. For now, this looks to be a solid first step toward providing more expressive options for Facebook users, and more food for thought for brands as they plan their digital strategies.

Feel free to haha, yay, or wow in the comments.


With privacy, communication is critical (but it isn’t everything)

Analysis and chart of emotions expressed in social media about Windows 10 release, courtesy NetBase.

Every week brings new stories that highlight the uneasy state of privacy norms.

The announcement of Windows 10 came with swift backlash against “privacy nightmares,” Spotify’s new privacy policy sparked another wave of disbelief and outrage, and other stories–such as the one about how JFK airport may be pinging your phone to deliver more accurate wait times–are being reported with a mixture of breathlessness and unease.

Basically, according to the news, you have a choice between two extremes:

  • Everyone is tracking everything about you, and we’re hurtling toward 1984; or
  • Just calm down already, you paranoid Luddite.

As you’d expect, the truth is somewhere in the middle. But where, exactly?

If you read past the first wave of reporting on Windows and Spotify, a new theme emerges. It’s not about what is being collected and why; it’s about how the data collection was communicated. Consider this comment from Whitson Gordon, published on Lifehacker:

Microsoft’s language on one or two settings is very vague, which means it’s hard to tell when it is and isn’t collecting data related to some settings. The “Getting to Know You” setting is particularly vague and problematic.

Now compare this comment to one made by Spotify CEO Daniel Ek in a blog post apologizing for the rollout of the new privacy policy:

We are in the middle of rolling out new terms and conditions and privacy policy and they’ve caused a lot of confusion about what kind of information we access and what we do with it. We apologize for that. We should have done a better job in communicating what these policies mean and how any information you choose to share will – and will not – be used.

Implicit in both arguments is that the issue is less what companies are doing than how they communicate about what they’re doing. And both comments came in response to popular backlash.

Interestingly, the story about JFK airport pinging your phone with beacons and pulling off its MAC address–your phone’s unique identifier–did not garner nearly as much attention as the other stories, especially given that this data collection is being done at TSA and border control checkpoints. [Wouldn’t you like to know whether the TSA and Border Control have access to that data? I bet a lot of people would.]

Certainly there is a tremendous opportunity for more active transparency (meaning that companies make a concerted effort to communicate) and clarity (meaning that those communications are actually effective) when it comes to data and how it is used, both within privacy policies and in the apps themselves. This would, as the Lifehacker and Fast Company articles assert, solve a lot of problems. For example:

  • Want to upload a photo to your profile?
    Then you have to grant access to the app to access your photos.
  • Want your ride-share service to pick you up where you actually are?
    Then you have to share your location with the app and give it your precise whereabouts.
  • Want to use voice control on your apps?
    Then you need to let the app collect, record and process your speech into something the machine can understand.

All of the above are rational trade-offs, assuming the data is used as advertised and isn’t stored, used for another undisclosed purpose, or shared with someone or something you didn’t intend.

But, as the saying goes, there’s the rub.

Our recent report, “The Trust Imperative,” uses a framework developed by the Information Accountability Foundation that identifies five principles critical to ethical data use. Here is the list:

  • Beneficial
  • Progressive
  • Sustainable
  • Respectful
  • Fair

As you can see, there is a lot of ground to cover. Let’s run the JFK example through this filter.

  1. Is it beneficial? Yes, because collecting the MAC address helps the airport communicate more accurate wait times. (But, of course, benefit is in the eye of the beholder.)
  2. Is it progressive? Is the minimum amount of data (MAC address only, not stored, no additional personal information) being collected? Arguably yes: Blip Systems, the company that makes the BlipTrack technology, says that only the MAC address is captured, and tells us it is not stored (see the sketch after this list).
  3. Is it sustainable? Harder to know. If, for the sake of argument, Blip Systems goes out of business and this service is discontinued, what happens?
  4. Is it respectful? I’d have to say no. No one let passengers know their phones were being pinged by beacons or that their MAC addresses were being collected, encrypted or not. You could make an argument that people should know not to make their phones discoverable in public places, but from what I can tell, there was also no signage that explained that this technology was being used.
  5. Is it fair? Unclear. If the only use of the MAC address is to communicate accurate wait times, probably. But if the data were to be used for any other purpose, commercial or legal, it could be a different story.
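On the “progressive” point, it may help to see how a system like this could compute wait times without retaining raw MAC addresses. The sketch below is my own illustration of a generic technique (salted one-way hashing), not a description of BlipTrack’s actual implementation:

```python
import hashlib
import time

SALT = "rotate-me-hourly"  # hypothetical; rotating the salt limits long-term tracking

def pseudonymize(mac: str) -> str:
    """One-way token: enough to match sightings, not enough to recover the MAC."""
    return hashlib.sha256((SALT + mac).encode()).hexdigest()

sightings: dict[str, float] = {}

def record_sighting(mac: str, checkpoint: str) -> None:
    """Match the same (tokenized) device across checkpoints to estimate wait time."""
    token = pseudonymize(mac)
    now = time.time()
    if token in sightings:
        print(f"{checkpoint}: transit time {now - sightings.pop(token):.0f}s")
    else:
        sightings[token] = now
```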

While this isn’t a perfect science, it’s a good filter for determining whether a new or existing data collection use case might have unwanted consequences. For more detailed information on this framework and its implications, please download “The Trust Imperative: A Framework for Ethical Data Use.”


Are we building a data “genome”?


Photo: Dave Fayram, cc 2.0

Lately, I can’t stop collecting examples of how data and algorithms have infused our daily lives. But it’s not just the ads we click on or the items we place in shopping carts. Today, data carries intimate information about our bodies, finances, friends, interests, politics, family histories and emotions.

Here are some examples from the past few weeks:

  • Lawyers are using data from fitness devices as evidence in court cases [Dark Reading].
  • The FBI now considers retweets of ISIS content to constitute probable cause for terrorism charges [Huffington Post].
  • Facebook recently received a patent that would enable lenders to consider whether your social network makes you a good credit risk [CNN Money].
  • Google/Alphabet just received a patent that would enable it to search a video archive of your life [Huffington Post].
  • In case you happen to post anything to Facebook today with a smiling emoji, an LOL, a haha or a snarky hehe, guess what? Facebook knows you’re laughing. (Also, if you’re using LOL, you’re probably old) [Marketplace and Facebook].

Taken collectively, these and other anecdotes illustrate just how pervasive–and intimate–data has become. But more than that, they show how, without even realizing it, we are each creating a detailed and potentially permanent record of ourselves throughout our lifetimes (and beyond); a data genome, so to speak.

Does that mean it will one day be possible–even common–to sequence virtually an entire life into a “digital blueprint”?

Before you go telling me I’ve been drinking too much coffee and watching too much Mr. Robot, Humans and Black Mirror (all true, I admit), consider this: we already have the precedent of a “customer profile”; it’s just the extent of that profile–what can and cannot be included, and under what circumstances–that will require careful oversight and negotiation over the coming years.

William Gibson once said, “The future is already here; it’s just not very evenly distributed.”

I’d argue that–at least with respect to data–the future is distributing itself faster and faster these days. We’re actually lucky to have lived vicariously through the assorted paranoid visions of Huxley, Orwell, Dick, Gibson and others.

Now the responsibility is ours. We need to consider these issues deeply, build a set of data usage practices that protect us as organizations and individuals, and establish the foundation for a world we want to live in: one, five, ten or fifty years from now.



In social analytics, maturity is relative

Photo: Daniel Kleeman, cc 2.0

At Altimeter Group, we’ve been measuring the state of social business for several years now.

Each year brings new shifts. Some are surprising, others not so much. This year’s theme, in “The 2015 State of Social Business” by my colleague Ed Terpening, is a shift from scaling to integrating. That makes sense from a data perspective, too.

I have to admit I had an ambivalent reaction to this particular chart, which shows the relative maturity of various aspects of social business today.

Social engagement–the ability for an organization to interact digitally with communities at scale–is unsurprisingly first, at 72 percent maturity (those figures are self-reported, by the way). Coming up next is social analytics, with 63 percent of organizations surveyed reporting that their programs are mature.

But when you unpack these numbers, they suggest a bit of a different narrative. All of these functions–from social selling to governance to employee recruitment–must be measurable, and must have relevant and credible KPIs to demonstrate performance. Do those exist in these organizations?

It’s hard to know, as these capabilities are themselves maturing, and social identity–the ability to match social posts with individuals–has only achieved maturity in about a quarter of the organizations we surveyed.

So what exactly are these social analytics measuring?

In my own work with brands and social technology companies, the answer is highly variable, but there are some consistent themes. Engagement is pretty measurable, but the outcome of engagement is much harder to measure. Social customer service metrics in many organizations have matured to the point that social service levels aren’t too different from service levels in “traditional” channels. Event/sponsorship activation works when there are ways to attribute outcomes to those programs. But we still struggle mightily with the dimming effect of last-click attribution on the actual, meaningful outcomes we all want to see: revenue generation, cost reduction, time-to-hire, and so on.
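To make the last-click point concrete, here’s a minimal sketch with an invented conversion path, comparing last-click attribution to a simple linear multi-touch model. The numbers are made up; the point is that last-click credits social with exactly nothing.

```python
# A hypothetical path to a $100 purchase.
path = ["social_post", "email", "paid_search"]  # last touch is paid_search
revenue = 100.0

# Last-click: all credit goes to the final touchpoint.
last_click = {channel: 0.0 for channel in path}
last_click[path[-1]] = revenue

# Linear multi-touch: credit is split evenly across touchpoints.
linear = {channel: revenue / len(path) for channel in path}

print(last_click)  # {'social_post': 0.0, 'email': 0.0, 'paid_search': 100.0}
print(linear)      # social gets $33.33, same as the other channels
```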

What this chart suggests to me is that we are still at a point when, even though analytics have supposedly matured, the actual criteria for business value (impact of social on sales, on activations, on recruitment, on acquisition, on churn) may still be cloudy for many companies.

The other point–not in scope for this research, but a related theme–is the extent to which social data is being tied to other data streams. Anecdotally, I’m hearing far more evidence of this in 2015. Most companies I speak with are looking at social data in the context of other business-critical data, but norming remains a challenge. And so social analytics tends to revert to the mean, which in this case means counting volumes rather than gauging outcomes.
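One simple version of that norming problem: social volume and, say, weekly sales live on wildly different scales, so comparing their trends requires standardizing both. A hedged sketch with invented numbers:

```python
import statistics

def z_scores(series: list[float]) -> list[float]:
    """Standardize a series so different data streams share a common scale."""
    mean = statistics.mean(series)
    stdev = statistics.stdev(series)
    return [(x - mean) / stdev for x in series]

mentions = [1200, 1350, 1100, 2400, 1250]          # hypothetical weekly social mentions
sales = [80_000, 82_000, 79_000, 81_000, 80_500]   # hypothetical weekly sales ($)

# On a shared scale, the week-4 mention spike clearly isn't mirrored in
# sales: volume went up, but the outcome didn't.
print([round(z, 1) for z in z_scores(mentions)])
print([round(z, 1) for z in z_scores(sales)])
```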

For the most part, I agree with what Forrester and others have said: social analytics need to become more predictive. So I look at this data as a bit of context for brands, and a challenge for analytics companies to take up. We need to focus less on volumes and more on holding our own feet to the fire on what executives really care about: real business indicators and outcomes that suggest meaningful action.

You can download “The 2015 State of Social Business” here.


Is your data cheating on you? Five life lessons from the Ashley Madison hack

If you’re not one of the 37 million people whose data was hacked in the Ashley Madison breach, you can breathe a sigh of relief.

Sort of.

The Ashley Madison story may be great for a few news cycles of schadenfreude, but it also illustrates the realities we face in the age of data ubiquity: as people, consumers, businesspeople, patients and citizens.

1. Intimate data about us is everywhere. Our purchases, location, sexuality, religion, health history, political party, whose house we went to last night, the stiletto heels or sleek watch or expensive bourbon we clicked on a website–all of it is out there, somewhere. In most cases this data is protected by layers of security, encryption, policy and regulation, but, as we’ve seen from Anthem to Target to Ashley Madison, that protection isn’t always effective. Beyond data security, however, is the question of how this data is actually used by the businesses that collect it. Is it to deliver better services, products, ads? Is it being sold to a third party?

2. Profiling is not just for the FBI. Marketers love profiling. Why? Because good marketers realize that it’s good business to sell you something you are likely to want, rather than wasting your attention (and their money) trying to sell you something you don’t. So, naturally, they want to know more about you: who you are, what you covet, where you shop, where you live, how old you are and how much money you have, so they can target ads, products and services more effectively. Whoever you are, you’re profiled somewhere: thrifty boomer, young married, millennial hipster; sounds like a Hollywood casting call, doesn’t it? Like any tool, profiling can be extremely effective when properly used, and dangerous when it’s not.

3. You leave digital footsteps everywhere you go, and they just may live forever. Even if you were “just browsing” in a store, you may have left a digital trace if you used a retail app, or if the store used beacons or shelf weights. Add to that your web, mobile and social activity, and any apps you’ve used. Now imagine a ten-year timeline of that data being used to try to predict your next purchase. Or next spouse.

4. Chances are, you haven’t the slightest idea what data is being collected about you at any given time. If you want to run a simple test, install Ghostery on your web browser for a while. It’ll show you which trackers are collecting data on the websites you visit. Did you know this data was being collected? Do you know how it’s used? I bet not.

5. Your data may be cheating on you. When you clicked “Accept” on any one of a number of apps you’ve used, or bought a book, or downloaded a movie, you may have digitally consented to share this data with third parties. But did you really know what you were consenting to? Sometimes this is a non-issue (some companies will never share your data with others). Sometimes it can have uncomfortable implications, as when Borders declared bankruptcy and decided to sell one of its greatest assets: its customer purchase history. (The FTC stepped in and required Borders to provide an “opt out” option.)

To be clear, I’m not saying any of this is inherently bad, or suggesting we can roll back the clock; it’s just reality these days. But as data becomes more intrinsic to our lives and our business, I believe in finding “teachable moments” anywhere we can:

  1. As individuals, we will never have a better time to educate ourselves about the tradeoffs we are making, consciously or unconsciously, with our data.
  2. As business people, we need to decide what kind of data stewards we will be, especially as data becomes more ingrained in business strategy.
  3. As an industry, we need to start putting clear and practical norms in place to clarify these issues so that we can have a fair and productive conversation about them and, frankly, set a good example.

I’ve outlined a lot of these issues and recommendations in The Trust Imperative: A Framework for Ethical Data Use. If you’re not lying on a beach somewhere, I’d love your thoughts and feedback.



Altimeter Group Joins Forces with Prophet

In the flurry of excitement today, I wanted to make sure to mark a momentous occasion: the company I work for, Altimeter Group, announced today that we have been acquired by Prophet. It’s a great move for both teams; we have similar clients, outlooks and cultures, which always makes for a great partnership. And we can help each other in many ways.

Here’s a video describing the new relationship, and what we hope to do together…

…and a nice story by Anthony Ha in TechCrunch. I’m thrilled for Charlene, who has taken this company to an important milestone, and for my colleagues old and new.

More to come!


What Brands Can Learn From Pinterest’s Privacy Updates

In the midst of all the complexity and fear about data usage and privacy, it’s nice to see an example of disclosure done well.

A couple of weeks ago, Pinterest announced Buyable Pins, which will enable their users to buy products directly from Pinterest on iPhones and iPads. Like any new feature, this one comes with data privacy implications: if I buy something on Pinterest, both Pinterest and the seller will have access to this transaction information–and possibly more about me.

I’m a Pinterest user myself, so last week I received an email explaining the change.


Long story short: Pinterest and the seller receive enough information to complete the transaction, facilitate future transactions and make promotions more relevant to me. If I don’t want to share information to customize my experience, I can turn it off. Short, sweet and to the point.

If I want more information, Pinterest’s privacy policy covers a range of other issues in similarly clear language. The other thing I like about it is that it prompts me to dig deeper if I want to. Clearly, this should be true of any privacy policy update, but the natural, concise language makes that process a little less intimidating.

I asked the Pinterest team what they were trying to achieve with the privacy language, and here’s what they told me:

Buyable Pins has been a highly requested feature, so we wanted to make sure the language for the policy was clear right from the start. The goal was for Pinners to have an understanding of why the updates are being made, how they can customize settings, and where they can learn more. The approach was similar to past policy updates, where we aim to put Pinners first and be as helpful and concise as possible.

There are two really important issues at play here: first, people have been asking for this feature, so there is going to be a lot of scrutiny among the Pinner community; and second, Pinterest is now dealing with people’s money. So there’s a lot at stake.

Privacy Policies in Context

Two weeks ago, we at Altimeter Group published The Trust Imperative: A Framework for Ethical Data Use. The central framework in this report combines the data life cycle with ethical data use principles developed by the Information Accountability Foundation (IAF).


The Pinterest privacy policy explicitly fits into two areas of the framework:

  • Collection and Respect. Have we been transparent about the fact that we collect data?
  • Communication and Respect. Have we communicated clearly about what information we collect, and why?

This is why our use of language is an ethical choice:

While dense and legalistic language may satisfy the legal team, clear and simple language demonstrates respect for the user. 

You could further state that Pinterest, like many other ad-supported sites, is arguing that increasing the relevance of promoted pins is a benefit to pinners, which would cover Collection and Benefit as well. [That argument only holds up if users agree that the benefit is worth the exchange of data.]

This is not to say that a privacy policy is the only thing organizations need to consider when it comes to ethical data use. Many other issues have gotten organizations into hot water, whether in courts of law or of public opinion. Some top-of-mind examples include Borders (for attempting to sell customer transaction data as part of its bankruptcy process) and Anthem and others (for data breaches). These examples map to Respect and Fairness at the Usage stage, and at the Storage and Security stage, respectively.

But now that the framework is out, I will be testing it (and suggest you do too) against real-world examples, using the IAF principles and the data lifecycle stages to examine and illustrate examples of ethical data use in theory and, most importantly, in practice.
