Amid all the angst and argument sparked by Elon Musk’s most recent actions in suspending journalists from the app, in between all the rage-bait and trolling from both sides of the political spectrum, some new seeds of uneasy agreement may actually be forming, from which we could potentially see a new moderation approach grow and flourish within Elon’s ‘Twitter 2.0’ experiment.

I suspect that won’t be the eventual outcome, but the latest sequence of events has shown that there is a threshold at which almost everyone agrees that there needs to be some level of moderation at the app.

We even now have a clearer benchmark for such, a threshold on which to base those future decisions.

To recap, earlier this week, Twitter abruptly banned an automated account that shared the real-time location of Elon’s private jet. According to Musk, somebody had used this info to seek him out in LA, eventually confronting a car that Musk wasn’t actually in, and harassing those inside, including Musk’s youngest child.

The rise in threat level prompted an immediate, angry response from Musk, who not only banned the @ElonJet account, but also implemented a new Twitter policy which outlaws all forms of live location tracking of individuals via tweet.

Which, as I wrote, will likely have implications beyond the edge case he’s trying to address, and could even provide a new means for government censorship of citizen dissent.

Nonetheless, Musk implemented the new policy, which, the following day, he then also expanded to cover anyone, including journalists, who dared to share a link to the @ElonJet account on other platforms – which, in Musk’s view, was an effort to skirt the new rules.

Which it wasn’t, but Musk banned a handful of prominent tech reporters anyway, sparking a whole new wave of criticism around freedom of the press, and accusations that Musk was seeking to quell dissenting opinion by wielding, effectively, authoritarian powers in his new app.

In Musk’s view, the actions he took were merely an extension of his new rules, designed to protect his family. But to others, they’re a significant overstep in power, one that’s also raised eyebrows among EU officials, who have strict protections for media freedom built into their rules for social platforms.

Still, Musk has remained steadfast in his actions, even jumping into a Twitter Spaces chat with several journalists to defend his decision.

You’ll note that the @ElonJet account is a speaker in this Space, which shouldn’t be possible, because the account is actually suspended. Several other suspended accounts were also able to join the Space and interact as normal – a bug that Musk eventually used as an excuse to shut down the conversation, and the entire Spaces functionality altogether, so the Twitter team could work on a fix (the Space carried on for around half an hour after Musk’s brief appearance).

Musk then went back to his own safe space, exchanging tweets with the same five or so people as always who offered support and endorsement of his actions.

This is the core assertion behind Musk’s action to prevent location tracking, and it’s not entirely correct. Musk did apply for a PIA exemption, which would limit public tracking of his private jet, back in October, but it’s unclear if/when that exemption went into effect. And there are ways around this – it remains possible to track the location of Elon’s private jet via publicly available data, though that tracking doesn’t extend to what happens outside of airports. So the assertion that this was some kind of intricate personal tracking operation (Musk said that these accounts were publishing ‘assassination coordinates’) is a misrepresentation at best.

But that’s the line that Elon and Co. are taking. Musk’s friend Jason Calacanis also jumped on the Twitter Spaces chat before Musk, and repeatedly asked speakers ‘how would you feel if someone from Elon’s family got hurt’ as a result of the tracking data being published.

Which is a deliberate obfuscation of the core concern, or a blatant misunderstanding. But that’s the thing – on one side of the debate, you now have Elon’s biggest supporters, who are predominantly conservative commentators, saying things like:

Yet, on the other side, critics point out that Elon is apparently fine with levelling unfounded claims of paedophilia against innocent people (which has now happened more than once), and unleashing his mob of supporters onto people he personally doesn’t like. Musk seems to believe that this type of accusation, which can also cause real-world harm, should be allowed, but re-sharing public flight info should not.

Because one is a risk, and one isn’t?

Indeed, after recently being named and shamed by Musk over his work at Twitter, former platform Trust and Safety head Yoel Roth was forced to flee his home due to fears that Musk’s supporters may target him in real life.

Musk’s supporters expressed little sympathy for Roth’s plight.

But the same question that Calacanis posed to journalists could be put to Musk and his fans in this instance – what if Yoel Roth was actually attacked as a result of Musk’s ‘name and shame’ approach?

The question itself gets to the core of the moderation issue at social platforms, which is not about political censorship or making calls on content along ideological lines, as Musk has suggested via his recent ‘Twitter Files’ exposé of internal discussions around key moderation concerns.

If anything, social platforms want to leave as much borderline content up as possible, because it generates more engagement – as past research has shown, the closer people get to the borderline of a platform’s rules, the more interaction their posts see.

Borderline Content Research

Social platforms are incentivized to leave the worst, most divisive comments up – but at some point, they need to ask whether that comment or post could actually lead to real-world harm.

In this sense, both sides of the discussion here are absolutely correct – Elon Musk should not be vilifying past employees by name, and making them a target, because that could lead to real-world harm. And an account dedicated to sharing Elon Musk’s private flights, whether it’s publicly available data or not, arguably shouldn’t be posting such to a platform of millions of people, because that, too, could lead to real-world harm.

That’s the right threshold that we should be using for critical moderation decisions – which would also relate to this:

Tweet From Donald Trump

Elon Musk has been a vocal critic of Twitter’s past decisions to censor and suspend the former President. But Trump’s comments, viewed through the same prism of assessment, could indeed lead to real-world harm.

Which is why this comment was removed, and what led to Trump, eventually, being banned.

As Musk himself said in his brief appearance in yesterday’s Spaces chat:

“There’s not going to be any distinction in the future between journalists and regular people. Everyone’s going to be treated the same. You’re not special because you’re a journalist. You’re a Twitter user. You’re a citizen.”

That same approach should apply to politicians – if this is the barrier at which we rule what should and should not be allowed in the app, then all users should be held to the same standards, and the red line should be ‘potential for harm’.

Could this tweet lead to real world harm? Then it should be removed.

There’s nuance and complexity within that question, as it’s difficult to ascertain the actual potential of harm in each comment. But that’s been the challenge of online content moderation forever, and the positive here is that we’re getting to the core of it with Elon and Co. quickly, which could lead to meaningful reform in approach.

Though I suspect it won’t. Right now, the platform is being split into sides, with each accusing the other of bad faith, and Twitter, seemingly, benefitting (at least in some ways) from the increased engagement.

But in reality, this incident shows that they aren’t in opposition. The Musk jet tracking controversy shows that both are actually closer to agreement than they seemingly realize, with the ‘free speech’ crew acknowledging that there are limits to such, and the left-leaning group conceding that, yes, there could be harm from an account that tracks the location of Musk’s vehicles.

So the meeting point here is ‘harm’ and the potential of a tweet to cause such in real life.

Now we know the limit that seemingly everyone accepts. Hopefully, Musk and Co. can use it to formulate more effective moderation parameters moving forward.