This whole thing, dubbed the #WearthePants experiment, feels less like a bug and more like a feature of a system that’s learned its biases from the world it reflects. It’s a messy, fascinating and, frankly, infuriating glimpse into the black box of social media algorithms. The anecdata is piling up, and it’s not a good look for anyone.
The Tell-Tale Numbers
Before we get into the corporate denials, let’s look at the data coming directly from the users on the front lines, as reported by outlets like TechCrunch. These aren’t minor fluctuations; these are statistical earthquakes.
– Marilynn Joyner, one of the participants, reported her post impressions leapt by a whopping 238% in a single day after she changed her profile gender from female to male.
– Another user, identified only as Michelle, saw a 200% jump in impressions when she posted identical content as “Michael”.
– The experiment’s instigator, Cindy Gallop, highlighted a stark comparison where her own post reached a paltry 801 people, while an identical post from a male colleague soared to 10,408.
Numbers like these are difficult to dismiss as coincidence. They point to a systemic issue, whether intended or not. The experience of these women suggests a pattern of AI discrimination that actively disadvantages them, turning the platform from a tool of opportunity into a barrier.
LinkedIn’s Defence: “It’s Not Us, It’s Complicated”
Predictably, LinkedIn has responded with a carefully worded statement. Tim Jurka, a VP of Engineering, and Sakshi Jain, Head of Responsible AI, have assured everyone that “The algorithm and AI systems do not use demographic information such as age, race, or gender as a signal.” They point out that posting and engagement are up, with comments rising 24% year-over-year.
Honestly, this response is both technically true and utterly unhelpful. No one seriously thinks there’s a line of code that says if (user.gender == "female") { reduce_reach_by_80% }. That’s not how modern machine learning works. The problem is far more subtle and, in many ways, far more dangerous.
Think of the algorithm as a club bouncer who has been told to let in “influential people”. The bouncer doesn’t have a list of pre-approved guests. Instead, they learn by observing who gets the most attention inside the club. If, historically, men in suits have been cheered and celebrated more often, the bouncer starts to develop an unconscious pattern: suit equals influential. They don’t explicitly ban anyone else, but they start prioritising people who fit the pattern they’ve learned. This is implicit bias, and it’s the core of the problem.
The Real Culprit: A Mirror to Our Own Biases
The real issue is the training data. An algorithm learns from the mountains of historical data we feed it – in this case, decades of user interactions on a professional networking site. If that history contains societal biases where male voices were, on average, amplified more than female voices, the AI will learn that as a successful pattern. As data ethics consultant Brandeis Marshall put it, “What we don’t know of is all the other levers that make this algorithm prioritize one person’s content over another.”
The AI isn’t sexist. It’s just an excellent student of a world that, all too often, is. It identifies proxies for success. Perhaps men, on average, write with a certain tone, use specific business jargon, or post about topics that have historically gained more traction from a male-dominated user base. The algorithm learns to favour that style, which inadvertently ends up favouring male users. This is where the conversation must shift from simple accusations to a demand for genuine algorithmic fairness.
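To make that concrete, here is a deliberately toy Python sketch of proxy bias, not anything resembling LinkedIn’s actual systems; every feature name and number is invented. The model is never shown gender. It is trained on a writing-style feature and a content-quality feature against historical engagement that happened to reward the style more common among the synthetic male authors, and its scores still end up split along gender lines.

```python
# A minimal, synthetic sketch of proxy bias: the model never sees gender,
# yet learns a writing-style proxy that historically correlated with it.
# All feature names and numbers here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hidden attribute the model is never given as an input.
is_male = rng.integers(0, 2, n)

# "jargon_score": a style feature that, in this synthetic history,
# skews higher for male authors (a stand-in for any correlated proxy).
jargon = rng.normal(loc=0.5 * is_male, scale=1.0, size=n)
quality = rng.normal(size=n)  # genuine content quality, gender-neutral

# Historical engagement labels reflect past audience behaviour:
# quality matters, but the jargon-heavy style also got rewarded.
p_engage = 1 / (1 + np.exp(-(0.8 * quality + 0.8 * jargon)))
engaged = rng.binomial(1, p_engage)

# Train on style + quality only -- no demographic features at all.
X = np.column_stack([jargon, quality])
model = LogisticRegression().fit(X, engaged)

# Yet the model's predicted "reach-worthiness" still splits by gender,
# because the proxy feature carries the historical skew.
scores = model.predict_proba(X)[:, 1]
print("mean score, male authors:  ", scores[is_male == 1].mean())
print("mean score, female authors:", scores[is_male == 0].mean())
```

The point of the toy is simple: removing the demographic column, which is essentially what LinkedIn’s statement describes, does nothing about the proxies a model discovers on its own. That is why auditing outcomes matters more than auditing inputs.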
Where Is the Platform Accountability?
This experiment blows the lid off the silent bargain we’ve made with tech platforms. We provide the content, and they control who sees it, all behind a veil of algorithmic secrecy. This lack of transparency is no longer acceptable, especially when it has real-world economic consequences. Being less visible on LinkedIn means fewer job opportunities, fewer consulting gigs, and a smaller professional network.
The demand for platform accountability is growing louder. Users are both the product and the labour force for social networks, yet they have no say in, and no visibility into, how their work is distributed. This has to change. We need a fundamental rethink of our relationship with these platforms.
The solution requires two key things:
– Bias Mitigation: Companies like LinkedIn need to move beyond simple denials and actively engage in bias mitigation. This means auditing their algorithms not for explicit demographic flags, but for biased outcomes. It means deliberately searching for and correcting these learned patterns.
– Inclusion Metrics: You can’t fix what you don’t measure. Platforms must develop and publicise inclusion metrics. Are posts from different demographics receiving equitable visibility for similar-quality content? Are job recommendations being distributed fairly? These are the questions we should be asking, and platforms should be answering with hard data; one way such a metric might look is sketched after this list.
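Neither of those asks is technically exotic. As one hypothetical illustration of an inclusion metric (the column names, quality bands and toy data below are assumptions for the sketch, not LinkedIn’s actual schema or pipeline), here is a Python snippet that compares median impressions by gender within bands of comparable content quality, scaled so that 1.0 means parity with the best-served group:

```python
# Hypothetical outcome audit: within each content-quality band, how do
# median impressions per gender compare to the best-served group?
# Column names and data are illustrative only, not a real platform schema.
import pandas as pd

def inclusion_ratio(posts: pd.DataFrame) -> pd.DataFrame:
    """Per-quality-band median impressions by gender, scaled so the
    best-served group in each band is 1.0 (1.0 everywhere = parity)."""
    banded = posts.assign(
        quality_band=pd.qcut(posts["quality_score"], q=4, labels=False)
    )
    medians = (
        banded.groupby(["quality_band", "gender"])["impressions"]
        .median()
        .unstack("gender")
    )
    return medians.div(medians.max(axis=1), axis=0).round(2)

# Toy data; the 801 vs 10,408 pair echoes the figures reported above.
demo = pd.DataFrame({
    "gender": ["female", "male"] * 4,
    "quality_score": [0.2, 0.2, 0.5, 0.5, 0.7, 0.7, 0.9, 0.9],
    "impressions": [801, 10_408, 900, 2_500, 1_200, 3_000, 2_000, 4_100],
})
print(inclusion_ratio(demo))
```

Published regularly, a table like that would tell us far more than any reassurance about which signals are or are not “used”. A ratio that sits persistently below parity for the same quality band is exactly the kind of learned pattern the earlier sketch produces, and exactly what bias mitigation should be measured against.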
This isn’t just about making people feel better; it’s about ensuring these powerful tools are creating opportunity, not reinforcing old hierarchies. The future implications are significant. If left unchecked, this kind of algorithmic bias could solidify existing inequalities for generations, creating a digital glass ceiling that is all the more difficult to shatter because it’s invisible.
The #WearthePants experiment has done us all a service by making the invisible visible. Now the ball is firmly in LinkedIn’s court. The era of “trust the algorithm” is over. The era of “show us the proof” has begun. So, LinkedIn, what’s it going to be? More PR statements, or a genuine commitment to building a platform that works for everyone? And for the rest of us, how long are we willing to work for platforms that may be actively working against us?