Communities have always been positively and negatively affected by the actions of individual members. So why do people behave differently in the digital world? If you haven’t already, check whether your Facebook data was shared with Cambridge Analytica. Here are my results:
Yep. I was one of the 87 million people whose data was shared with Cambridge Analytica as a result of a controversial Facebook app called “This Is Your Digital Life”. Interestingly, I never used this app myself. My only error? Being Facebook friends with someone who did. As you can see in the screenshot, my friend’s participation authorized the sharing of my public profile, page likes, birthday, and current city.
Debating the implications and consequences, and whether or not this controversy actually caused me any harm, is best saved for another article. For now, I’m interested in digging into some potential factors that led my friend to think it was socially acceptable to share my birthday and zip code with a random developer. Wouldn’t this same transaction of information have seemed strange in real life?
When we offer up information in the physical world, we first run some simple math in our heads. “What’s the worst that could happen if I give this salesman my phone number?” We calculate the potential damage this person could cause (based on realistic human capabilities), weigh the pros and cons, and make a decision in line with our risk tolerance. However, when computers are involved, the average consumer vastly underestimates the speed, scale, and reach of automation-assisted systems. To put it in perspective, readily available software can process 1,000+ records per second. I’m sure some of you have even more powerful examples. Authorizing an app to access my friends list seems harmless on its own. Yet, when mixed, matched, and overlaid with other data sets, a list of page likes can yield scary results. Check out the article “Your Facebook Data is Scary as Hell”.
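To make the scale point concrete, here is a minimal sketch (with entirely invented data and field names) of how cheaply seemingly harmless data sets can be cross-referenced. Joining a hundred thousand “profile” records with a matching set of “page likes” is a sub-second operation on an ordinary laptop:

```python
import time

# Hypothetical data: 100,000 synthetic profile records and a matching
# set of page-like records, keyed by user ID. All values are invented
# for illustration only.
N = 100_000
profiles = {uid: {"city": "Springfield", "birthday": "1990-01-01"} for uid in range(N)}
likes = {uid: ["Page A", "Page B"] for uid in range(N)}

start = time.perf_counter()
# Overlay the two data sets: one combined record per user.
combined = {uid: {**profiles[uid], "likes": likes[uid]} for uid in profiles}
elapsed = time.perf_counter() - start

print(f"Merged {len(combined):,} records in {elapsed:.3f} seconds")
```

No human salesperson could cross-reference a hundred thousand phone-book entries in the blink of an eye; a few lines of code can.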
Digital communities aren’t a “fixed pie”. In the physical world, personal circles are dictated by the number of relationships a person can juggle, whereas in the digital world, personal circles are limited only by database capacity. For example, I’ve found that as I move to a new town or job, making new friends almost always leads me to drop older friends from my community. After all, it’s only humanly possible for me to manage a certain number of friendships. Dunbar’s Number suggests that humans can comfortably maintain only about 150 stable relationships. It seems we implement some version of a FIFO method to keep our casual acquaintances at a manageable size.

Furthermore, physical relationships require two-sided attention. If one party stops participating in a friendship, the relationship fades. Yet this dynamic doesn’t seem to exist in the digital world. I don’t actively contribute to the 1,000+ friends in my Facebook friends list, yet keeping them as friends apparently represents some level of risk.

Smaller, physical communities function well because there is a set of norms, expectations, and a tangible understanding of how individual decisions may negatively affect all members of the group. Yet unwieldy social communities seem impenetrable. Surely, poor decisions couldn’t affect all these people? LinkedIn apparently enforces a 30,000-connection limit. I wonder what their reason is for this specific number? Would a fixed community limit on social platforms, one that more closely reflected our real-world capacity, help?
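The “FIFO” dynamic described above can be sketched in a few lines: a bounded circle of acquaintances where adding a new friend beyond the limit quietly drops the oldest one. The 150 cap is Dunbar’s number; the friend names are invented placeholders:

```python
from collections import deque

# A bounded friend circle: when a new acquaintance is added past the
# cap, the oldest acquaintance is silently dropped (first in, first out).
DUNBAR_LIMIT = 150

circle = deque(maxlen=DUNBAR_LIMIT)
for i in range(200):               # meet 200 people over a lifetime of moves
    circle.append(f"friend_{i}")

print(len(circle))                 # capped at 150
print(circle[0])                   # oldest remaining acquaintance: friend_50
```

A digital friends list has no such cap, so the circle only ever grows, and every stale entry keeps its standing authorization to share your data.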
What do you think? Do users share the blame, or should Facebook have prevented this in the first place? How can we create better incentives or regulations to equip people to make better online decisions?
If you enjoyed this, feel free to clap 👏 and follow me.