In a world increasingly shaped by Artificial Intelligence, we are seeing incredible innovations. But with that power come significant privacy risks.
We regularly share parts of ourselves online: photos, thoughts, and our voices. When it comes to AI, this casual sharing can have serious, long-term consequences, from losing control of your identity to blurring the very concept of truth.
Let’s break down why you need to think twice before handing over your digital self to AI.
Truth is the foundation of how we understand the world and trust each other. Without it, our ability to make sense of things together crumbles. We are already navigating an ocean of misinformation, but AI-generated deepfakes are taking this to a new level.
These are not just clever fakes. They are hyper-realistic images, videos, and audio that can convincingly mimic real people and events. Imagine seeing a video of a public figure saying something they never said, or hearing an audio clip of a loved one’s voice making a request they never actually made.
AI deepfakes pose a unique threat because they do not just tell a lie. Instead, they manufacture an entirely alternate reality that looks and sounds identical to the truth. When we can no longer believe our own eyes or ears, the line between fact and fiction disappears. This makes it easier for people to be manipulated and for reputations to be destroyed. Protecting the truth is not just about debunking fakes; it is about making sure we do not lose our fundamental grasp on reality.
The Danger of Giving Your Likeness and Data to AI
When you upload a selfie to an AI tool to turn it into a cartoon or enhance it, you are often giving that company permission to use your unique biometric data. This includes your face, your voice, and your specific mannerisms.
Once your likeness is learned by an AI model, you lose control. It can be used to generate new images or videos of you doing or saying things you never did. In addition, extracting your data from an AI model once it has been trained is virtually impossible. Your digital twin could exist within their systems forever.
The Pre-Upload Checklist
Before you hit "upload" on any AI tool, follow these steps to protect your privacy and your identity.
Strip the Metadata
Photos contain hidden metadata known as EXIF data. This can include your GPS coordinates, the exact time the photo was taken, and your device type. You can remove it on your phone with the steps below, or with a small script on a computer (see the sketch after these steps).
On iPhone: Open the photo, tap the "i" icon at the bottom, and tap "Adjust" in the location section to select "No Location."
On Android: Open the Gallery, select the photo, tap "Details" or "Info," and look for an option to "Remove location data."
The Shortcut: Take a screenshot of the photo and upload the screenshot instead. Screenshots generally do not carry the original GPS data.
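If you prefer to clean photos on a computer, a short script can do the same job. Here is a minimal sketch using Python's Pillow library; the file names are placeholders you would replace with your own, and it simply re-saves the pixels so the EXIF metadata is left behind.

```python
# Minimal sketch: re-save an image with its pixel data only, dropping EXIF metadata.
# Assumes the Pillow library is installed (pip install Pillow) and that
# "vacation.jpg" is a placeholder file name you would swap for your own photo.
from PIL import Image

def strip_exif(src_path, dst_path):
    """Copy only the pixels into a fresh image, so no metadata carries over."""
    with Image.open(src_path) as img:
        pixels = list(img.getdata())           # pixel values only
        clean = Image.new(img.mode, img.size)  # a new image starts with no metadata
        clean.putdata(pixels)
        clean.save(dst_path)

if __name__ == "__main__":
    strip_exif("vacation.jpg", "vacation_clean.jpg")
    print("Saved a copy without EXIF data to vacation_clean.jpg")
```

Copying the pixels into a brand-new image is a blunt but reliable approach: location, timestamps, and device details are never carried into the new file.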
Scrub Personal Details
Check the background of your images. Are there house numbers, street signs, or school logos visible? Use your phone's built-in "markup" tool to blur or black out these identifying details. If you are editing on a computer instead, a few lines of code can do the same job (see the sketch below).
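Here is a minimal sketch, again using Python's Pillow library; the file name and the (left, top, right, bottom) box coordinates are made-up examples you would adjust to cover the detail in your own photo.

```python
# Minimal sketch: black out an identifying detail (e.g., a house number) before uploading.
# Assumes Pillow is installed; the file names and coordinates below are hypothetical examples.
from PIL import Image, ImageDraw

def redact_region(src_path, dst_path, box):
    """Draw an opaque black rectangle over the (left, top, right, bottom) region."""
    with Image.open(src_path) as img:
        draw = ImageDraw.Draw(img)
        draw.rectangle(box, fill="black")
        img.save(dst_path)

if __name__ == "__main__":
    # Example: cover a 200 x 80 pixel area near the top-left corner of the photo.
    redact_region("front_porch.jpg", "front_porch_redacted.jpg", (40, 60, 240, 140))
```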
Read the Fine Print
Look for the "Terms of Use." If the company claims "a royalty free, perpetual license" to your content, they own your likeness forever. If you would not want to see that photo on a billboard in five years, do not upload it.
Opt Out of Data Training
Many AI apps use your uploads by default to "improve their models."
How to do it: Go to the "Settings" or "Privacy" menu in the app. Look for a toggle that says "Data Training" or "Improve AI Models" and turn it OFF.
Use a Burner Email
Avoid signing up with your primary email or your social media accounts. Use a secondary email address to keep your personal identity separate from the data you provide to the AI.
