Hello there! Here's your weekly (ish) issue.
Now, on to this week’s story: what I learned from hitting my first (semi)viral LinkedIn post.
I touched a nerve.
Five months, 124 LinkedIn posts, and one jaw-dropping insight later, the numbers confirmed what many people debating AI on LinkedIn have smelled for years: if you cheerlead AI, you're rewarded; if you challenge it, you're quietly throttled.
Positive takes on AI earned me 73% more reach than critical ones, a gulf big enough to drown nuance. My polite data packet to LinkedIn was met with the algorithmic equivalent of a shrug. That shrug is why you're reading this newsletter.
What the Numbers Say When You Let Them Speak
My average impressions per post by sentiment about AI:
Cheering AI: 3,754 (59 posts)
Critical thinking, raising red flags: 2,164 (37 posts)
Other topics (productivity, careers, future of work): 2,410 (28 posts)
Controlling for time of day, format, and visuals left the pattern intact. Research into social-platform ranking confirms a built-in positivity preference: feeds surface content that maximises easy engagement, even if it flattens critical thought. [UC Press Online and Oxford Academic offer related findings]
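For the curious, here's a minimal sketch of what this kind of check looks like in Python. The CSV layout and column names (impressions, sentiment, hour, format) are illustrative assumptions on my part, not anything LinkedIn exports:

```python
# Minimal sketch: raw reach by sentiment, then the same gap with controls.
# Assumes a hand-labelled CSV with columns: impressions, sentiment
# ("positive" / "critical" / "other"), hour (0-23), format ("text", "image", ...).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

posts = pd.read_csv("posts.csv")

# The headline numbers: average impressions per post, by sentiment label.
print(posts.groupby("sentiment")["impressions"].agg(["mean", "count"]))

# Does the gap survive controls? Regress log-impressions on sentiment,
# with hour-of-day and post format as categorical controls.
model = smf.ols("np.log(impressions) ~ C(sentiment) + C(hour) + C(format)",
                data=posts).fit()
print(model.summary())  # look at the C(sentiment) coefficients
```

If the sentiment coefficient barely moves once the controls are in, the raw gap isn't just a posting-schedule or format artifact.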
A Chorus of “I See It Too”
Shadow-bans & sudden drops. Several commenters reported post-impressions collapsing overnight—classic symptoms of “soft” demotion. [LinkedIn shadowban is real]
Audience-effect questions. Others wondered whether humans simply like hype more. Behavioural studies show positivity and excitement reliably attract quick reactions, pushing such posts higher in engagement-ranked feeds. [More support for this point: Socialinsider, Knight First Amendment Institute]
Migration talk. From Substack to decentralised networks, creators are hedging against opaque algorithms that dictate reach.
Wait, but this post went viral! One commenter argued that the post received algorithmic attention: hundreds of reactions and comments, and more than 50 reposts. No, it received THE PEOPLE's attention. If you look at the number of impressions, the story is more nuanced:
Impressions are a function of organic reach and people's response, plus whatever the algorithm decides to boost: if it pushes your post onto many feeds, many people will see it. This post topped out just under 15,000 impressions:
Yes, far higher than my average 3,000 per post. But far below what the support it received would typically generate.
To give you an example, a friend with less organic reach (2.3K followers vs. 8K for my account) had a post that received a third of the reactions, a tenth of the comments, and zero reposts, yet it still racked up 100,000 impressions. Telling, isn't it?
Zooming Out: The Incentive Engine Behind Every Scroll
Digging deeper, I found several incentive loops backed by research and real-world evidence:
Engagement-first ranking: algorithms promote content that triggers fast reactions, even divisive or low-quality posts [Knight First Amendment Institute, Oxford Academic]. For example, Facebook's 2014 "emotional contagion" experiment manipulated the emotional content of users' feeds without their consent [The New Yorker].
Opaque optimisation: low transparency breeds user mistrust and "algorithmic gaslighting" [ScienceDirect, blogs.cornell.edu]. LinkedIn's brief reply to my request for explanation ("We aim to be fair") offered zero detail on ranking logic.
Business alignment: platform owners benefit when the dominant narrative fuels product demand [Academy of Management]. Microsoft's stake in OpenAI, plus the gazillions invested in AI, cloud, and data centers, raises obvious questions about AI-positive content getting a leg-up.
Why This Matters Beyond LinkedIn
When hype reliably outranks critique, we risk an information diet that over-indexes on optimism and under-invests in caution.
The same mechanics are muting news on Facebook [The Guardian], censoring medical education on Instagram [Allure] and nudging gig-workers with opaque controls [BMC Psychology].
Algorithmic curation is already policy-making by other means.
What We Can Do—Five Practical Moves
Publish your own mini-audits. Label 20 of your posts, plot reach against sentiment, and share the chart (a minimal sketch follows this list). Normalising open, lightweight audits pressures platforms to respond.
Cross-post critical takes. Diversify to newsletters, Mastodon, Bluesky or Substack where ranking is chronological or follower-based.
Inject evidence, not just opinion. Links to peer-reviewed research or reputable reports cue readers to save and share, signals that algorithms rate highly even for sober content [Noble Intent]. Just make sure links go in the comments, or LinkedIn will penalize your post (they don't like links in the post's body that drive readers away from the platform).
Use collaborative crowdsourcing. Pool impression data with fellow creators to enlarge the sample and strengthen the signal.
Ask for stated-preference ranking. Experiments show feeds optimised for what users say they value amplify nuance over outrage. Push platforms to pilot such options. [Oxford Academic]
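To make the first move concrete, here's a minimal mini-audit sketch, assuming you've hand-labelled your posts in a CSV (the file name and column names are mine, purely for illustration):

```python
# Mini-audit sketch: label ~20 of your posts by hand, then chart reach by sentiment.
# The file name and columns (sentiment, impressions) are illustrative assumptions.
import pandas as pd
import matplotlib.pyplot as plt

audit = pd.read_csv("my_posts.csv")  # one row per post, sentiment labelled by hand

reach = (audit.groupby("sentiment")["impressions"]
              .mean()
              .sort_values(ascending=False))

ax = reach.plot.bar(rot=0)
ax.set_ylabel("Average impressions per post")
ax.set_title(f"My reach by AI sentiment (n = {len(audit)} posts)")
plt.tight_layout()
plt.savefig("mini_audit.png", dpi=150)  # the chart you share with your post
```

A sample this small won't prove throttling on its own, but a stack of these charts from many creators is much harder to dismiss.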
An Invitation
If you’ve felt the penalty of critical thinking, reply with your own data or forward this to someone who can add theirs. Evidence scales when communities connect the dots.
Algorithms can’t bury all of us at once.
Looking forward to hearing your stories,
-a
Thanks for reading. Next issue: can a fairness-by-design AI rewrite the rules of social feeds?