
what's the big deal? cont'd

Remember the good ol’ days, when the Internet was just a simple, innocent place to go for useful information like movie times and weather?

Just a simple place where you could track down your old high school boyfriend or girlfriend, to specifically see if their life had indeed gone on without you and to make sure their significant other was not as pretty or fun as you are?

How times have changed in such a short period of time.  Over the last few years we have, unfortunately, seen the Internet’s dark side.

We have seen foreign countries maliciously attack our sacred elections; intimate photos of women posted without their consent for revenge, or just a cheap thrill; fake social media accounts created to harass and embarrass ex-boyfriends and girlfriends; terrorist propaganda accounts enabled and empowered; and Americans accused of murder and other horrible crimes, with zero evidence — openly defamed, maligned and slandered with little recourse.

We have seen social media firms shamelessly sell us out by not only failing to protect our personal information, but actively pimping it out; we have seen truth and productive discourse replaced by disinformation and hate.

Like a slow-moving car crash, we have seen the Internet morph from an innocent, cuddly Calico kitten into an irresponsible, out-of-control Bengal tiger — weaponized for the destruction of almost everything we hold dear, from our personal privacy to our hallowed democracy.

The Cyberspace Solarium Commission, a bipartisan commission ordered by the U.S. Congress, put it this way: “The digital connectivity that has brought economic growth, technological dominance, and an improved quality of life to nearly every American has also created a strategic dilemma.  The more digital connections people make and data they exchange, the more opportunities adversaries have to destroy private lives, disrupt critical infrastructure, and damage our economic and democratic institutions.”

We have to get a handle on this, and fast.  There are tons of issues that need to be addressed involving the Internet — everything from cybersecurity to online influence operations to cyber bullying (recommendations for all of these are covered in these books) — but the behavior and responsibility of social networks is at the top of the list. 

What’s the big deal?  Facebook and Twitter accounts are free anyway, right?  Not even close.

Many services in the digital economy appear to be free, but you actually pay for them not with money, but with your personal data.  In fact, your personal information is a currency far more valuable to social media companies than if you paid them a large monthly fee.

We are the product being sold here.  Our likes and dislikes, our penchants and preferences, our vulnerabilities and insecurities.  What we eat, when we sleep, why we vote, where we shop.  Who we worship, who our friends are, who our enemies are…all sold to the highest bidder. 

Social media companies not only have access to a mind-boggling pool of our personal data, but they also possess an unprecedented “social graph” that allows them to know not only the desires and habits of each of their members, but also how each member connects and interacts with every other member.  This goldmine is invaluable to advertisers.

Until recently, when the public became more aware of their behavior, these companies showed little regard for the consequences of their actions, even though they knew exactly how they were manipulating their users and negatively affecting society.  Their irresponsible behavior did not stop with enabling Russian bots and fake antifa accounts, or even the spread of disinformation, conspiracy theories and hate speech.  They also punted on basic human decency.

For example, in 2018, Facebook employees created a slide presentation as part of an internal effort to understand how Facebook shapes user behavior, and how the company could possibly alleviate potential harmful effects.  One of the slides said: “Our algorithms exploit the human brain’s attraction to divisiveness.  If left unchecked, Facebook would feed users more and more divisive content in an effort to gain user attention and increase time on the platform.” 

Facebook founder and chief executive Mark Zuckerberg, along with other senior members of his team, seemingly buried the results of the research.

What’s even more disturbing is that, according to The Wall Street Journal, “the concern was that some proposed changes would have disproportionately affected conservative users and publishers, at a time when the company faced accusations from the right of political bias.”  In other words, the leaders of Facebook threw us all under the bus because of political pressure.  Guess the Facebook executive conveniently left that out of his interview with Politico.

The Wall Street Journal also reported that “a 2016 presentation that names as author a Facebook researcher and sociologist, Monica Lee, found extremist content thriving in more than one-third of large German political groups on the platform.”

 

“Swamped with racist, conspiracy-minded and pro-Russian content, the groups were disproportionately influenced by a subset of hyperactive users, the presentation notes.  Most of them were private or secret.  The high number of extremist groups was concerning, the presentation says.”

“Worse was Facebook’s realization that its algorithms were responsible for their growth.  The 2016 presentation states that ’64 percent of all extremist group joins are due to our recommendation tools’ and that most of the activity came from the platform’s Groups You Should Join and Discover algorithms: ‘Our recommendation systems grow the problem.’”

Please reread that paragraph.  Facebook has known for years that their own algorithms promote and even encourage extremism.  That is truly beyond the pale.

In July 2020, Facebook released the results of a long-awaited audit of its civil rights policies.  It wasn’t good. 

“With each success the auditors became more hopeful that Facebook would develop a more coherent and positive plan of action that demonstrated, in word and deed, the company’s commitment to civil rights.  Unfortunately, in our view Facebook’s approach to civil rights remains too reactive and piecemeal.”

Perhaps most exasperating to the auditors is Mark Zuckerberg’s stance on political speech.  Using the example of Donald Trump’s May 2020 Facebook post that warned protesters “when the looting starts, the shooting starts,” they said:

“After the company publicly left up the looting and shooting post, more than five political and merchandise ads have run on Facebook sending the same dangerous message that ‘looters’ and ‘antifa terrorists’ can or should be shot by armed citizens.  The auditors do not believe that Facebook is sufficiently attuned to the depth of concern on the issue of polarization and the way that the algorithms used by Facebook inadvertently fuel extreme and polarizing content.”

“When powerful politicians do not have to abide by the same rules that everyone else does, a hierarchy of speech is created that privileges certain voices over less powerful voices.” 

To be fair, social media firms were far more disciplined right before, during and after the 2020 election.  According to The Economist, Facebook removed ten times as many hate speech posts as it had two years earlier.  They also deactivate 17 million fake accounts every single day, double the number from three years prior.

Facebook also reinforced its security teams, conducted practice drills to plan for every possible election outcome, blocked new political ads for certain time periods, limited the number of people and/or groups with which a message can be shared, and strengthened transparency rules for advertisers.

Honestly, we're grateful they are trying to do better but, because of their size and scale of impact on communication, media, and civil society overall, it’s clear we cannot rely on their self-policing alone.  The stakes are just way too high.

For one, cleaning this mess up flies directly in the face of their entire profit model, which is obviously a disincentive.  Social media algorithms are designed to attract as much of the user’s attention as possible, then push the user to interact with others.  The algorithms don’t distinguish between “good” and “bad” content; they just understand that they need to push the content that gets the most comments, clicks and shares.
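To see why that design rewards the extremes, here is a deliberately simplified toy sketch (not Facebook’s actual code; the post fields and weights are invented for illustration) of a purely engagement-driven feed ranker — note that nothing in it asks whether content is true, healthy or divisive:

```python
# Toy illustration of an engagement-only feed ranker.
# The post fields and interaction weights below are hypothetical.

def engagement_score(post):
    # Score a post only by predicted interactions; divisive posts
    # tend to rack up comments and shares, so they float to the top.
    return (post["comments"] * 3.0
            + post["shares"] * 2.0
            + post["clicks"] * 1.0)

def rank_feed(posts):
    # Sort purely by engagement -- there is no notion of
    # "good" or "bad" content anywhere in this logic.
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "cat-video",    "comments": 10,  "shares": 20,  "clicks": 500},
    {"id": "outrage-bait", "comments": 400, "shares": 150, "clicks": 300},
]
print([p["id"] for p in rank_feed(posts)])  # outrage-bait ranks first
```

Even though the cat video gets more clicks, the outrage post wins on comments and shares, so an engagement-only ranker promotes it automatically.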

Recent experience proves that this is a disaster waiting to happen.  We now know that primitive emotion and extreme behavior generate more attention and interest than cats playing Pat-a-Cake, meaning these companies make more money on the extremes — which is exactly the reason Facebook executives buried their own research.

Thankfully, there is a silver lining to this.  Because of the way social network business models work, social media companies need us more than we need them.  They need as many of us as possible to participate, because their survival depends on “network effects” — meaning, the more people they have using their services, the more valuable their services are.
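One common way to make the “network effects” intuition concrete is Metcalfe’s law — offered here only as a rough heuristic, not as these companies’ actual valuation model — which says a network’s value scales with the number of possible connections between its users, roughly the square of the user count:

```python
# Rough sketch of Metcalfe's law: n users allow n*(n-1)/2
# pairwise connections, so "value" grows roughly as n squared.

def possible_connections(n_users):
    # Each unordered pair of users is one possible connection.
    return n_users * (n_users - 1) // 2

for n in [10, 100, 1000]:
    print(n, possible_connections(n))  # 45, 4950, 499500
```

The point for users: because value compounds with headcount, every member who leaves (or credibly threatens to) costs the platform disproportionately more than one account’s worth of data.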

This fact alone gives all of us social media users tremendous power.  And we need to wield it.  The good news is that it is absolutely possible to strike an appropriate balance between guardrails and innovation.

As we search for the best solutions to this challenge, we have to be extremely mindful of protecting self-expression and free speech.  If we are not careful, those pesky unintended consequences could come and bite us really quickly.  #TheButterflyEffect

Find Sources for This Section Here
