<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[metareflections]]></title><description><![CDATA[In pursuit of our potential]]></description><link>https://metareflections.com/</link><image><url>https://metareflections.com/favicon.png</url><title>metareflections</title><link>https://metareflections.com/</link></image><generator>Ghost 3.33</generator><lastBuildDate>Fri, 22 Aug 2025 04:05:05 GMT</lastBuildDate><atom:link href="https://metareflections.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Checkmate humanity? The Social Dilemma's Contribution]]></title><description><![CDATA[If you watch the film, you won’t be able to look at your social media feeds in the same way.]]></description><link>https://metareflections.com/social-dilemma/</link><guid isPermaLink="false">5f7a41e1e547da0001613ab9</guid><category><![CDATA[Review]]></category><dc:creator><![CDATA[Patrick Ward]]></dc:creator><pubDate>Sun, 04 Oct 2020 21:47:39 GMT</pubDate><media:content url="https://metareflections.com/content/images/2020/10/output-onlinepngtools-2-1.png" medium="image"/><content:encoded><![CDATA[<figure class="kg-card kg-image-card"><img src="https://metareflections.com/content/images/2020/10/Screen-Shot-2020-10-01-at-12.58.43-PM.png" class="kg-image" alt="Checkmate humanity? The Social Dilemma's Contribution"></figure><img src="https://metareflections.com/content/images/2020/10/output-onlinepngtools-2-1.png" alt="Checkmate humanity? 
The Social Dilemma's Contribution"><p><em>Netflix's The Social Dilemma</em> is a dramatic wake-up call to the inherent dangers of social media.</p><p>Although the screenshot above screams the sky is falling and the film has other sensationalist moments, the overall tone is more <em>After School Special</em> than <em>Chicken Little</em>. As the credits roll, the experts even offer practical tips for dealing with social media in your personal life. But the film’s most important contribution is providing viewers with a powerful mental model for how social media platforms operate vis-à-vis their users. If you watch the film, you won’t be able to look at your social media feeds in the same way.</p><p>As a wake-up call, it is highly effective. It brings to life social media’s potential to enable abuse and addiction and to shape our perceptions of reality. It explains the underlying attention-based business models and algorithm-powered technologies of social media platforms in an accessible and powerful way. Moreover, it shows how their combination can take advantage of our psychology and accelerate extremism.</p><p>Even if you think the actual harm caused by social media to date is overblown, the film suggests you shouldn’t take much comfort about the future: these problems are likely to magnify as the AI-powered algorithms continue to improve rapidly while human users and society are much slower to adapt.</p><p><strong><em>Note: spoilers below</em></strong></p><h2 id="a-narrative-solution">A narrative solution?</h2><!--kg-card-begin: markdown--><p>As a wake-up call, it works. The message is clear: Something must be done! But as “Dilemma” suggests, the film ultimately raises more questions than it answers. However, in a notable way, the film itself is part of the solution.</p>
<p>If social media’s superpower is the ability to personalize and micro-target using seemingly infinite amounts of data, film’s superpower is life-like, multi-sensory storytelling. <em>The Social Dilemma</em> uses that superpower to bring Facebook’s<sup class="footnote-ref"><a href="#fn1" id="fnref1">[1]</a></sup> algorithms to life.<sup class="footnote-ref"><a href="#fn2" id="fnref2">[2]</a></sup> In between standard documentary fare—interviewer-less shots of former employees and executives, academics and cultural observers—the film interweaves the story of a fictional American family using and succumbing to the dangers of social media.</p>
<p>The toy narrative is bookended with two dramatic examples:</p>
<p>Towards the start, a family dinner ends in chaos. Mom tries to get everyone’s full participation by storing all family members’ cell phones in a plastic time-lock container for one hour. But within minutes, the dinner ends with a literal bang. One of the teenage daughters—embodying full-blown social-media addiction—smashes the container to pieces with a hammer to get to her phone.<sup class="footnote-ref"><a href="#fn3" id="fnref3">[3]</a></sup></p>
<p>Later, near the end of the film, the two other siblings end up in handcuffs face down on the ground. The teenage son had attended a demonstration after being radicalized to the fictional “Extreme Center” by propaganda and conspiracy theories promoted to him in his social media feeds. His sister is trying to take him home but is also swept up.</p>
<p>The fictional narrative and its characters are often simplistic and shallow. But that is wholly appropriate for the mini-story’s main character: the Facebook algorithm anthropomorphized. This representation is the film’s biggest contribution.</p>
<p><img src="https://metareflections.com/content/images/2020/10/Screen-Shot-2020-10-01-at-12.54.12-PM-2.png" alt="Checkmate humanity? The Social Dilemma's Contribution"></p>
<h2 id="doyourealizewhatyoureupagainst">Do you realize what you’re up against?!?</h2>
<p>The algorithm is literally brought to life in the form of three lookalike guys operating a vast <em>Minority Report</em>-style control room that is entirely dedicated to the Facebook feed of a single teenage boy, the mini-story’s main character. As he is using the app, the control room operators analyze all the incoming data: how long he paused on that picture, which links he clicked, which friends are physically nearby, how long he has been using the app this session, and how that compares to an average session. They use all that data to decide what piece of content to show next and when to show an ad—all the while trying to make sure he keeps using the app as long as possible.</p>
<p>When he is not using the app, the operators work to get him back on it—sending him notifications most likely to get his attention when seen and most likely to prompt further action on the app if he checks them. The control room’s digital “voodoo doll”-like model drives home the ability of the algorithm’s stimuli to prompt robot-like responses.</p>
<p>Watching the film may be the first time some viewers consider how social media apps work under the hood, and the film is not revealing any secrets in the details. Nevertheless, the film’s synthesis of known components into a metaphorical whole feels revelatory even to someone familiar with how the apps work.</p>
<p>The three men are a nod to the social media platforms’ three goals: engagement (time spent on the app), growth (new users) and monetization (displaying ads). The algorithms care about these three things. The algorithms don’t care about anything else.</p>
<p>The film’s commentary emphasizes one axis of power: the vast computing power and brainpower dedicated to achieving those goals. But the story shows there is another, even more important axis of power: the unyielding focus on those objectives. From that focus necessarily follows an indifference to all other objectives.</p>
<p>As a result, the algorithm does not care about the actual contents of the “content” it shows you. It only cares that you keep scrolling, keep liking, keep sharing. If family pictures and cat videos do the job, the algorithm will show those. But if clickbait or misleading headlines or conspiracy theories do a better job, goodbye cat videos.</p>
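The logic the film dramatizes reduces to a very small idea, which can be sketched in a few lines of code. This is purely my own toy illustration under stated assumptions (the item names and a single "predicted_engagement" score are hypothetical), not any platform's actual ranking code: a ranker that orders candidate posts solely by predicted engagement never inspects what the content is.

```python
# Toy sketch of an engagement-only feed ranker (hypothetical illustration,
# not any real platform's code). Each candidate post carries a hypothetical
# predicted-engagement score; the ranker sorts by that score alone.

def rank_feed(candidates):
    """Order candidate posts by predicted engagement, highest first."""
    return sorted(candidates,
                  key=lambda post: post["predicted_engagement"],
                  reverse=True)

feed = rank_feed([
    {"type": "cat video",           "predicted_engagement": 0.31},
    {"type": "family photo",        "predicted_engagement": 0.27},
    {"type": "conspiracy theory",   "predicted_engagement": 0.64},
    {"type": "misleading headline", "predicted_engagement": 0.52},
])

# Note that "type" is never consulted: whatever keeps you scrolling wins.
print([post["type"] for post in feed])
# → ['conspiracy theory', 'misleading headline', 'cat video', 'family photo']
```

The sketch makes the indifference concrete: swap the scores and the cat video comes back; nothing in the objective itself prefers it.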
<p>“You are the product” is often invoked in the context of privacy and personal data. But <em>The Social Dilemma</em> reframes the metaphor and suggests we should worry more about how these platforms demand our attention and how they change our behavior than we should worry about the data they are extracting.</p>
<p>If you could design a sophisticated control room to filter the information you received, I doubt you would optimize it for engagement, i.e., time spent using it. It would be like directing an elementary school teacher to focus only on making sure the students have maximum fun every day. <em>The Social Dilemma</em> demonstrates those sophisticated control rooms <em>do</em> exist and maximum engagement <em>is</em> their goal. And we continue to rely on control rooms like this for more and more of our information.</p>
<p>Facebook apparently felt unfairly attacked and maligned by the film and <a href="https://about.fb.com/wp-content/uploads/2020/10/What-The-Social-Dilemma-Gets-Wrong.pdf">released a statement</a> on Friday with a list of contentions. The statement correctly points out that Facebook has taken steps to limit the worst effects suggested by the film (and by the simple depiction above), including down-ranking (but not removing) misinformation and limiting internal incentives to maximize users’ screen time. But the statement mostly misses the point. Its implicit hubris aside, Facebook cannot solve all the potential downsides of social media through its own engineering efforts, and Facebook does not represent the entire social media universe even today. Facebook aims to be benign, but future platforms may not.</p>
<p>Algorithms tailored to individual behavior will increasingly be used in technology beyond social media. Much good can come from that. But evil can too. Some skepticism is healthy. The depiction in <em>The Social Dilemma</em> helps.</p>
<p><em>Have you watched The Social Dilemma? Add your thoughts below.</em></p>
<p>Thanks to Job Chanasit, JD Co-Reyes and Mark Terrelonge for discussion about the film. Thanks to <a href="https://noahmaier.substack.com">Noah Maier</a> from the <a href="https://www.compoundwriting.com">Compound Writing</a> community for feedback on an earlier draft of this post.</p>
<hr class="footnotes-sep">
<section class="footnotes">
<ol class="footnotes-list">
<li id="fn1" class="footnote-item"><p>In the fictional mini-story within the film, the social media platform is unnamed and not specifically identified as Facebook. But the product depicted is more similar to Facebook than to any other major social network. <a href="#fnref1" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn2" class="footnote-item"><p>It also leverages the unique strengths of film in other ways. It uses a set of dramatic interstitials reminiscent of <em>Requiem for a Dream</em> (showing short clips of drug use and hyper-magnified physiological changes) to associate social media with hard drug use. <a href="#fnref2" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn3" class="footnote-item"><p>She is seen after the fact holding a hammer and wearing safety glasses, which I found hilarious in its ridiculousness. But it also fits the <em>After School Special</em> vibe of the mini-story. <a href="#fnref3" class="footnote-backref">↩︎</a></p>
</li>
</ol>
</section>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Conceptualizing our potential]]></title><description><![CDATA[<p>In 1965, Intel's co-founder Gordon Moore predicted that the number of components in an integrated circuit would double every year until 1975. The forecast proved remarkably accurate. In 1975, he extended his prediction at a slower rate of two years per doubling. His prescient prediction continued to hold true until</p>]]></description><link>https://metareflections.com/conceptualizing-our-potential/</link><guid isPermaLink="false">5f6ba873e547da0001613a88</guid><dc:creator><![CDATA[Patrick Ward]]></dc:creator><pubDate>Fri, 07 Aug 2020 12:00:00 GMT</pubDate><media:content url="https://metareflections.com/content/images/2020/09/0AD999DC-5672-4CDD-9258-D0FD83168203-2-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://metareflections.com/content/images/2020/09/0AD999DC-5672-4CDD-9258-D0FD83168203-2-1.png" alt="Conceptualizing our potential"><p>In 1965, Intel's co-founder Gordon Moore predicted that the number of components in an integrated circuit would double every year until 1975. The forecast proved remarkably accurate. In 1975, he extended his prediction at a slower rate of two years per doubling. His prescient prediction continued to hold true until recently.</p><p>This astounding and regular progress has been one of the driving forces of the seemingly inevitable improvements in computer power and affordability that we have experienced in the past few decades. The chart below shows the increasing transistor counts of processors over time on a log scale.</p><figure class="kg-card kg-image-card"><img src="https://www.remixingwork.com/content/images/2020/08/1920px-Moore-s_Law_Transistor_Count_1971-2018.png" class="kg-image" alt="Conceptualizing our potential"></figure><p>Moore based his forecast on the brief historical trends in the nascent semiconductor industry and his understanding of the manufacturing process. 
But when he made his prediction, nothing preordained that the "law" would persist.</p><p>David Rotman recently <a href="https://www.technologyreview.com/2016/05/13/245938/moores-law-is-dead-now-what/">recounted</a>:</p><blockquote>Moore wrote that "cramming more components onto integrated circuits," the title of his 1965 article, would "lead to such wonders as home computers—or at least terminals connected to a central computer—automatic controls for automobiles, and personal portable communications equipment." In other words, stick to his road map of squeezing ever more transistors onto chips and it would lead you to the promised land. And for the following decades, a booming industry, the government, and armies of academic and industrial researchers poured money and time into upholding Moore's Law, creating a self-fulfilling prophecy that kept progress on track with uncanny accuracy. Though the pace of progress has slipped in recent years, the most advanced chips today have nearly 50 billion transistors.</blockquote><p>The prophecy's fulfillment became expected, and that expectation became destiny, in part, "because the semiconductor industry decided it would," Rotman writes. The law gave the industry a clear R&amp;D target and a drumbeat-like cadence. Much like Apple has for years organized all of its iPhone hardware and software development around an annual product release cycle, Moore's law kept continuous pressure on semiconductor manufacturers to advance the state of the art.</p><p>A similarly steady beat has marked progress in extending human life expectancy. Until a slight reversal after 2014, life expectancy in the US increased by an average of two years with each passing decade. 
Unlike the exponential growth of Moore's law, this growth is linear—but it has still been steady.</p><figure class="kg-card kg-image-card"><img src="https://www.remixingwork.com/content/images/2020/08/fredgraph-3.png" class="kg-image" alt="Conceptualizing our potential"></figure><p>There is no famous "law" that has served as a drumbeat for pursuing increasing longevity. Nevertheless, these extra years of expected life are undeniable evidence of our potential to improve ourselves—against what many consider a rather important dimension of human performance.</p><p>Improving other dimensions of human performance can happen at the level of a single person and can occur much faster. Athletics provides many such examples of transcending what previously seemed to be boundaries of human potential.</p><p>In 1954, Roger Bannister became the first person to run a mile in less than four minutes. It was a goal long pursued but long out of reach. <a href="https://hbr.org/2018/03/what-breaking-the-4-minute-mile-taught-us-about-the-limits-of-conventional-thinking">Bill Taylor described how substantial the barrier had seemed</a>:</p><blockquote>[R]unners had been chasing the goal seriously since at least 1886, and that the challenge involved the most brilliant coaches and gifted athletes in North America, Europe, and Australia. "For years, milers had been striving against the clock, but the elusive four minutes had always beaten them," he notes. "It had become as much a psychological barrier as a physical one. And like an unconquerable mountain, the closer it was approached, the more daunting it seemed."</blockquote><p>Because the barrier had stood for decades, many thought Bannister would be unique in that accomplishment for some time. Instead, he quickly had company. Only 46 days later, John Landy broke the four-minute mark and Bannister's record time. A year later, three runners finished the same race in under four minutes. 
Over the last 50 years, more than a thousand runners have accomplished a feat that previously may have seemed superhuman (to use the contemporary phrase).</p><p>The fastest have gotten faster still. The world record is now held by Hicham El Guerrouj, with a time of 3 minutes, 43.13 seconds.</p><p>These athletic feats are a powerful illustration of the potential to improve even our most basic physical capabilities. What if we could similarly enhance our cognitive capabilities?</p><p>Almost no one's livelihood depends on how fast they run. But virtually everyone's livelihood depends on how well they think. If we could each double our cognitive capabilities, we would be individually and collectively much better off.</p><p>The obvious objection is that this isn't possible. But there is good evidence that we can improve at least some of our cognitive abilities on scales much greater than a mere doubling.</p><p>For instance, there is substantial evidence that dramatic improvements in memory are possible for anyone committed to training. Journalist Joshua Foer demonstrated what is possible in his account of his memory training journey in his 2011 book "Moonwalking with Einstein."</p><p>Reigning world memory champion Ben Pridmore's ability to memorize "the precise order of 1,528 random digits in an hour" and "any poem handed to him" sparked Foer's curiosity. It burst into flames when he read Pridmore's explanation:</p><blockquote>"It's all about technique and understanding how the memory works," he told the reporter. "Anyone could do it, really."</blockquote><p>Alongside investigating the people who compete in memory tournaments, Foer decided to compete too. After a year of intense practice, Foer won the US Memory Championship. 
Along with completing various other feats, Foer memorized every card in a thoroughly shuffled deck faster than all of his competitors.</p><p>The techniques Foer learned to memorize random strings of numbers aren't the solution to learning the things that matter to us in school and work. But this narrow ability is still remarkable given that humans can usually remember only "seven [items], plus or minus two." That he and others can remember sequences orders of magnitude longer suggests the correct scale to conceptualize our cognitive abilities is much greater than we commonly think.</p><p>Consider this in the abstract: If you score 90 on some measure of performance and believe the maximum score possible is 100, then you don't have too much room for improvement. Maxing out the scale would only make you ~10% better. If it seemed hard or risky to improve, it probably wouldn't be worth it: you don't have much to gain and have plenty you could lose.</p><p>But what if your belief about the maximum score is entirely wrong? What if the maximum is actually 1,000? Then there are tremendous potential gains ahead of you.</p><figure class="kg-card kg-image-card"><img src="https://www.remixingwork.com/content/images/2020/08/0AD999DC-5672-4CDD-9258-D0FD83168203-2.png" class="kg-image" alt="Conceptualizing our potential"></figure><p>How we conceive of our potential has significant effects. If you thought taking care of your health might give you another hundred years of life, wouldn't you act differently than if you thought it might buy you an extra decade at most?</p><p>Unfortunately, it is common to think of our cognitive abilities as if the scale only goes to 100. This view is partly related to our education system's performance, which has not generated gains that look like Moore's law or our increasing longevity. The cause could be a lack of understood and accepted measures for our cognitive abilities. 
Perhaps thousands of people have learned to think at a four-minute pace.</p><p>But there is no readily available data to support that narrative. Most of the available data related in some way to aggregate cognitive performance is from our education system, where the indicators are quite mixed. I won't draw any conclusions about our education system in this essay—there are both successes and failures that could be analyzed. But, notably, there is no easily accessible story of increasing capability. In inputs and hours spent, yes. But in outputs that matter, it's not so clear. Dysfunctions burden our healthcare system, but healthcare can at least tell a story of progress based on our increasing longevity.</p><p>We devote enormous resources to education, and many dedicated people have worked tirelessly to improve our education system. Yet, with perceived results failing to match the perceived effort, many people doubt that future initiatives to improve education will have much impact.</p><p>This narrative in which substantial effort translates into minimal results is even more destructive: it shrinks our perceived potential. It suggests the scale really only goes to 100.</p><p>But the performance of the memory champions suggests this is misguided. With the right techniques, they can not only memorize a few more cards than the average person; they can memorize a few more <em>decks</em> of cards. It suggests that substantially more powerful cognitive abilities may be available to all of us.</p><p>Unfortunately, stories of great human achievement are often told in a way that inhibits other people rather than expanding their sense of potential. Instead of holding up people with notable accomplishments in a way that says "Look, anyone can walk this path if they want," cultural storytellers instead often spin a tale of "superhuman" ability based on prodigious "genius" or "talent." Potential progress needs to seem accessible. 
We only get better by being challenged, but most people will give up if the challenge seems insurmountable.</p><p>"Superhuman" sounds impossible. Using it to describe the impressive abilities of people who are all too human is a misuse. More often than not, those abilities are the product of years of deliberate practice, which is accessible to all.</p><p>There is much we don't know about how learning and intelligence work and how to optimize them. Yet the existing knowledge is tremendous compared to how much we actually apply. We could all be better learners and more cognitively capable with deliberate practice and different approaches to learning. This will require changes to how we learn at school and work and changes to the institutions in which we learn. These changes are easy to dismiss if we don't appreciate our potential.</p><p>The scale doesn't stop at 100. Our potential is greater than we realize.</p><p><em>This post originally appeared at <a href="https://www.remixingwork.com/potential/">Remixing Work</a>.</em></p>]]></content:encoded></item></channel></rss>