Why the Internet is Broken and How it Breaks Everything Else
It's election season in the USA and thinking people are raising an alarm. The Social Dilemma, the Netflix documentary, is getting lots of attention. Online disinformation is pervasive. No one seems to agree on the facts. Polarization is at an all-time high.
What exactly led us here? What needs to be fixed?
I work in machine learning and am known for recommender systems, so let's take a look at how algorithms, business models, and bad leadership have led us here.
The last 20 years of venture capital have given us huge companies like Facebook, Twitter, and Google. These companies thrive and are global because – in part – they are free. Well, not exactly free: they gather information about you and are paid to put things in your field of view. For instance, they all watch your behavior and put ads in front of you. "No surprise and no problem," you might say. But taking this several steps further leads to one key problem. If you build a gigantic company with a huge market cap, you will face correspondingly huge pressure to get more attention from your "users" so you can sell it to your "clients," which should lead to more profit and higher stock prices. Since Milton Friedman called "maximizing profit" the highest corporate good, all would seem to be well with this system – or so the ever-increasing stock value of the tech giants would seem to say. But Friedman was so obviously wrong that we need to look deeper into tech business models.
The single-minded push for more "eyeball time" from users has led to better algorithms that learn to maximize this "user attention," and some of these are tuned to metrics that are poor proxies for "user satisfaction." This mistake is disastrous in ways we'll see, but at first it doesn't seem wrong. What a small price to pay for something as complex and big as a search engine that can find almost anything on the web. If the trade-off only went that far, we might forgive the surveillance and ignore the ads we're not interested in.
The real trouble comes when social media like Facebook and Twitter get their hands on eyeball-time optimizers. This is illustrated by the catch-phrase, "If it enrages, it engages." The algorithms optimize eyeball time – even if it is user time spent hating, ranting, and raging. There are several solutions to this inevitable slide into an unwell state of mind in an optimized dystopia, but first...
There once was an exceedingly blithe personality who meant no one harm. She loved chatting with friends when time allowed and went out of her way to help them when they were in need. She loved keeping up with news of their lives on social media and often "liked" their photos or commented with encouraging words.
One day she recognized her own need to be more supportive when examples from "Black Lives Matter" got wide broadcast. Equality was a human right, was it not? She left comments of support in places where the issue was discussed.
Meanwhile she had a childhood friend who moved far away and with whom she had diminishing contact. He was funny and witty so she tried to keep in touch every so often. But he had grown intolerant over time and downright racist – judging by his posts. It seemed to her like they were becoming mostly about race as time progressed.
Rather than confront him, she unfollowed him. She would rather remember him as a caring friend than witness his growing hate.
IF she could have seen everything he posted, rather than the subset of posts that were shown to her, she might have seen a more nuanced person. But the big-bad social media company made the selection for her based on which posts led to more clicks and those were his most controversial posts.
IF she could have seen which of her posts had been shown to him, she would have seen that her comments in support of BLM protesters were all that got through. IF she only knew that her posts were shown to even more blatant racists by algorithms trained to get users to click, she would have been appalled.
How did the big-bad social media company subvert our heroine's supportive and empathetic comments, turning them into dog whistles for intolerance?
Venture capitalists and other big stockholders pushed the social media company toward ever more ad sales. This was motivated by Friedmanesque values and the simple profit motive. But the algorithms created in these tech companies need to be given a goal to optimize; they do not make moral judgments. Website analytics typically include Key Performance Indicators (KPIs) that measure things like the number of pages viewed per user visit. This is considered a good indication of user engagement, and it is fairly easy to use as an optimization goal for algorithms. If users see more pages per visit, the goal is satisfied, more ads are seen, the social media company makes more money, and VCs and stockholders are happy.
The problem with this kind of goal is that it serves the money, not the user. If all users of a near monopoly are taken together, this means society is not served either. Algorithms tuned to enrage users, applied en masse, will enrage society.
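The mechanics above can be sketched in a few lines of toy Python. Everything here is hypothetical and invented for illustration – real feed-ranking systems use learned models over live behavioral data, not hand-set scores – but the sketch shows how the choice of optimization goal alone decides which post wins the top slot:

```python
# Toy sketch: the same posts, ranked under two different optimization goals.
# All posts, authors, and scores are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float        # proxy for the "pages per visit" KPI
    predicted_satisfaction: float  # proxy for how the user feels afterward

posts = [
    Post("Vacation photos from the lake", 0.20, 0.90),
    Post("Inflammatory rant about an outgroup", 0.80, 0.10),
    Post("Thoughtful essay on local politics", 0.35, 0.75),
]

def rank(posts, key):
    """Order the feed so the highest-scoring post appears first."""
    return sorted(posts, key=key, reverse=True)

# Goal 1: maximize the engagement KPI -- the enraging rant tops the feed.
by_engagement = rank(posts, key=lambda p: p.predicted_clicks)

# Goal 2: maximize predicted satisfaction -- the pleasant post tops the feed.
by_satisfaction = rank(posts, key=lambda p: p.predicted_satisfaction)

print(by_engagement[0].text)
print(by_satisfaction[0].text)
```

The algorithm itself is identical in both cases; only the goal handed to it differs. That is the whole argument in miniature: the moral weight sits in the choice of KPI, not in the ranking code.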
How Do We Fix This?
- Milton Friedman: yes, he is to blame for a patently silly premise that has survived too long, advanced by lazy or greedy minds. Stakeholder Capitalism is on the right track but lacks a framework like shareholder value, which was Friedman's one seductive idea. He is gone now, but he needs to be repudiated and his legacy disavowed by business. Seek out information about Stakeholder Capitalism and spread the ideas. What Stakeholder Capitalism is missing is a framework that makes broader interests a natural part of corporate governance. Without this, corporations may act responsibly only as an exercise in public relations. Friedman wanted corporations to act as if capital were the only driver of corporate success, and the stock market is an easy measure of that kind of success. As yet, we have no alternative that measures the contribution of other factors to corporate success. Factors like customer satisfaction, employee wellbeing, sustainable environmental impact, and even the full cost of doing business are merely things to exploit in Friedman's Capitalism – but that's another blog post. Still, it seems clear that if the social media companies had a framework for measuring their users' satisfaction and optimizing it (as the stock price does for capital), it would help move us away from our growing dystopia.
- The CEOs and leaders of the social media companies: there is plenty of blame here, but would they have survived if they had tried to fix the problem? I think so, and moves by Twitter, and to a lesser extent Facebook, will tell us the answer. Mark Zuckerberg, the CEO of Facebook, has shown utter callousness with regard to reforming his broken platforms, and he should go. Many of his own employees have joined the call for changes (and this is only the latest of several internal protests). Unfortunately, most discussions involve the spread of disinformation by "users," NOT misfiring or badly aimed algorithms, which are the subject of this post. There are many problems with social media as it exists today, but the algorithms are not receiving enough attention and may have more impact in the long run than misinformation, which is bad enough on its own. I call on the CEOs and their engineers to answer for what their algorithms optimize – what goal the algorithms have, what KPIs they are designed to increase. This would help us see clearly the key algorithmic factors leading to today's dystopia. We do not need to know their proprietary algorithms, only their optimization goals.
- US Regulators and Congress: there is plenty of interest here, but government too often misunderstands the problem, so its solutions may also miss the mark. We can hope that those who are in place to protect the common good will do so, but get involved and help influence the direction of this debate. Anti-trust investigations into the tech giants seem to center on breakup as a solution, which is sometimes beside the point. Let's push Congress to focus on the problems, not just the tools they have (anti-trust laws).
- We the Users: here we collectively have the most power – refuse to participate, delete your account (notice how they try to get you to take any action but deletion). It will be hard; you will miss your friends and feel disoriented in the dark. But make no mistake: as long as the algorithms are designed to create divisiveness, they will do so, and they will use your content to do it no matter your intent. You may think you aren't involved in all this, but you are. After all, your content is all the social media companies' algorithms have to get eyeball time from others. Take it away.