🦋 Glasswing #2: You Will Care for the Wrong Reasons
Your reputation is at stake if you don't know how to defend it.
The complexity and rapid development of AI have left many people struggling to grasp its true magnitude. While AI has increasingly been embedded in our daily lives, from recommendation tools to virtual assistants, the underlying processes and implications of this technology have remained opaque to non-technical individuals.
That opacity changed in December 2022. For the first time, it became common knowledge that “some big AI system read the whole internet and can spit out what I need.”
While the discussion around the adoption of ChatGPT is ongoing, it's worth emphasizing the extent to which generative AI tools have permeated the internet. To illustrate this point, consider the figure below, which compares ChatGPT's adoption with that of other popular social media products.
There is great promise in AI and the impact it can have on human productivity, but the near-term use cases of ChatGPT are discussed in the Twitter post I shared below.
With the growing strength of AI tools and the decreasing cost of deception, the ability to pollute our information ecosystems is becoming accessible to anyone.
Currently, we have social norms that help us distinguish between human-generated and bot-generated content. I have a pretty good sense, for example, of what is and is not a bot on Instagram and Twitter. However, as these tools become more advanced, humans may no longer be able to differentiate between AI-generated and human-generated content, muddying those norms.
What happens when you can no longer distinguish what is real from what is fake? You operate in a low-trust environment. You tend toward verification.
I don’t verify things with my friends today. When a friend texts me on iMessage, I assume it is my friend texting me and not ChatGPT texting on their behalf. When a friend FaceTimes me, I trust that it is them and not a generative AI version of them. When a friend posts from a social media account that shares many mutuals with mine, I trust that it is their account.
All of these trust assumptions vanish in the limit toward which generative AI systems are headed. Already, a picture of a government ID is no longer sufficient to prove who you are. It is going to be incredibly easy to generate accounts that look like you but are not you, in image, text, and video form.
We'll need to verify everything we see and hear, fundamentally changing how we interact with the world around us, especially with those close to us.
Today, I get the sense that people make fun of those who subscribe to Twitter Blue or would even consider buying an Instagram blue check mark. The sentiment I see is “paying for status,” “trying to be elite,” and so on. That sentiment will shift: a blue check will become a requirement not for status but for people to trust it is actually you (the purpose of these verification methods in the first place).
No longer are Beyoncé and other influencers the only ones susceptible to fake accounts and fan pages; now you and your mother are too, because of how easy it is to create these fakes in a very persuasive way.
Everyday individuals who know nothing about AI from a technical perspective will soon have the power to create fake social graphs of you and your friends, with accounts and pictures of you that look more real than anything on your camera roll today. These accounts will post at a cadence optimized for your followers and DM them in the tone they anticipate you would use. Someone could even turn malicious and create porn with you in it, or videos of you committing hate crimes or killing babies. I am exercising the worst-case scenarios to make this picture clear.
In short, when grossly inaccurate text, images, or videos about you or people you know circulate in your local community, your reputation is at risk. It is this reputational risk that I believe will cause people to start to care about identity and information verification.
The more verification and authenticity matter, the more people will understand why the challenge is no longer “are you a robot?” but rather “are you the REAL Shrey?”
A more optimistic view of this challenge is that AI could be the precise attack the internet has always needed to push humans to adopt stronger privacy-preserving technology, as it becomes a necessity to communicate.
These privacy-preserving tools could form the identity layer the internet was never built with in the first place, a layer that will become necessary for the birth of a new internet in the age of AI.
Enjoyed this post? Subscribe to Glasswing below for more content like this and leave a comment for others.