Verification has a serious UI problem
Blue checks, grey checks, and signatures… what is actually helping with credibility?
The conversation around generative AI's potential to amplify misinformation harms has been ongoing for years. To combat this, several techniques have been deployed - from robust content filters and community annotations to the trendy 'pre-bunking' approach.
Yet, what has become most popular are 'verification' systems like the blue checks you see on Instagram or Twitter.
But I'm skeptical. Do these digital signatures genuinely address the 'trust' problem with misinformation among those most susceptible to it? There is minimal empirical evidence suggesting that blue/grey checkmarks, digital signatures, or any flashy cryptographic feature at the UI level truly enhances the credibility of information.
In fact, it wouldn't shock me if, for those wary of the government, an official cryptographic signature turned them off instead of winning their trust.
In this post, I'll be throwing the spotlight on a few noteworthy UI issues currently present in verification tools, as well as sharing some insightful papers that have caught my eye in this domain.
UI Flaws
Consider this: I want to be sure that what I'm reading as coming from President Joe Biden is genuinely from him (or his team).
Naturally, I head over to https://joebiden.com/
It appears trustworthy enough - no overt red flags, professional photos, and overall, it looks credible. Yet, this impression is solely based on basic heuristics like "the domain seems legit" or "it doesn't appear fraudulent". Notably, there's no cryptographic foundation at play here to reinforce my belief.
Given the apprehensions of our AI-driven era, let's say I decide to further solidify my confidence in this source. To the best of my knowledge, I can turn to two tools to validate the registration of joebiden.com: ICANN's lookup service or the site's SSL certificate.
The ICANN lookup doesn't yield anything particularly insightful.
The SSL certificate doesn't tell me much of value either.
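For the curious, the certificate check is easy to script yourself. Here is a minimal sketch using Python's stdlib `ssl` and `socket` modules; `fetch_cert` and `summarize_cert` are hypothetical helper names, not an established API:

```python
import socket
import ssl


def fetch_cert(host: str, port: int = 443) -> dict:
    """Fetch the TLS certificate a host presents, as a parsed dict."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()


def summarize_cert(cert: dict) -> dict:
    """Pull out the few fields a human might actually look at.

    getpeercert() returns subject/issuer as tuples of RDN tuples,
    e.g. ((("commonName", "example.com"),),) -- flatten them to dicts.
    """
    subject = dict(rdn[0] for rdn in cert.get("subject", ()))
    issuer = dict(rdn[0] for rdn in cert.get("issuer", ()))
    return {
        "common_name": subject.get("commonName"),
        # organizationName is usually absent on domain-validated certs
        "organization": subject.get("organizationName"),
        "issuer": issuer.get("organizationName"),
        "not_after": cert.get("notAfter"),
    }
```

Running `summarize_cert(fetch_cert("joebiden.com"))` illustrates the problem: most sites use domain-validated certificates, which assert control of the domain but say nothing about who operates it, so the `organization` field tends to come back empty.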
In the end, after extensive exploration, the most reliable method to verify the website's credibility was to cross-check it against Joe Biden's Twitter account, which carries a government-verified label.
I've come to trust the grey-check mark on Twitter - it seems more credible than the purchasable blue-check mark. Plus, the fact that this account has been verified since 2012 adds to its authenticity over time. An examination of the followers, mutual connections, and the characteristic style of posts, likes, and replies all boost its credibility, convincing me of the site's legitimacy.
Next, I decided to test this with another US policymaker - Governor of Florida, Ron DeSantis. A quick Twitter search reveals a blue-check account with a comparable verification timeline and an impressive follower count. Does this instill the same level of trust? Surprisingly, I find myself less convinced.
I realize that there is also a government account for Ron DeSantis.
There's also an official government account for POTUS. Interestingly, while Biden's personal account has a grey-check mark, DeSantis's doesn't. The reason for this discrepancy, given they're both policymakers, is unclear - accentuating the confusion in the verification system.
This exercise demonstrates that even the most robust cryptographic tools, while theoretically providing a high level of confidence in the authenticity of information, fail to instill trust when the UI experience is inconsistent. As someone growing more suspicious of content authenticity, I find these inconsistencies troubling.
Numerous questions arise. Does a public registry exist for cross-checking policymakers' websites and Twitter handles? Who would manage such a database? What other tools am I missing here that can be helpful in addressing these concerns? How does the trust factor vary among individuals, especially those inherently distrusting of government bodies, when presented with grey-checked, blue-checked, or digitally signed content?
Looking ahead, the 2024 election raises greater concerns. The prospective rise in generative content could significantly disrupt the information space. If we pin our hopes on the existing verification system, we may unwittingly be undermining our own goals. These verification tools might alienate, rather than assure, those skeptical demographics, leading to greater distrust.
Historical Examples
The case of Pretty Good Privacy (PGP), and its limited adoption beyond specific privacy and security circles, serves as a crucial learning point. We might be navigating a different era where PGP's role becomes increasingly pivotal, yet fundamental UI obstacles persist, hindering its widespread adoption.
It is possible that the place for authentication and digital signatures isn't so much at the user level as at the content-filtering stage, much like how HTTPS certificates currently function for websites.
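As a thought experiment, that filtering stage might look something like the sketch below. It uses Python's stdlib `hmac` as a stand-in for a real asymmetric signature scheme (a production system would use something like Ed25519 with a registry of public keys, not shared secrets); the registry, account names, and function names here are all hypothetical:

```python
import hashlib
import hmac

# Hypothetical registry mapping an account to its signing key. In a real
# deployment this would hold *public* keys for an asymmetric scheme.
KEY_REGISTRY = {
    "@POTUS": b"potus-demo-key",
}


def sign(account: str, message: bytes) -> str:
    """Produce a signature for a message (done by the account holder)."""
    key = KEY_REGISTRY[account]
    return hmac.new(key, message, hashlib.sha256).hexdigest()


def filter_verified(account: str, message: bytes, signature: str) -> bool:
    """Platform-side check: admit content only if the signature matches."""
    key = KEY_REGISTRY.get(account)
    if key is None:
        return False  # unknown account: nothing to verify against
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

The appeal of this placement is that, like HTTPS, the verification happens invisibly before content reaches the user, so no checkmark iconography is needed at all: tampered or unsigned content simply never gets the "authentic" treatment.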
With more empirical research, we may even see certain political campaigns opt out of blue checkmarks when targeting specific demographics, adding another layer of complexity to this issue.
Recommended Reads
I have enjoyed reading the work of Professor Brendan Nyhan and Professor Joshua Tucker. Here are some recommended reads from their work:
“A platform penalty for news? How social media context can alter information credibility online”
“Less than you think: Prevalence and predictors of fake news dissemination on Facebook”
And some other general reads I have aggregated on this topic:
I will update these links as I gather more resources here.