AS a society, we are currently grappling with a profound sense of violation. Recent global reports about certain generative AI platforms and their capacity to generate non-consensual, sexually explicit deepfakes of women and children have rightly sparked widespread outrage.
These reports force us to confront a reality many find difficult to process: the troubling potential for automated exploitation. The strong global reaction to such non-consensual deepfakes, a clear violation of human dignity and online safety, stems from a collective understanding that our image, body and identity are intrinsically our own.

Yet, while we recoil from the potential theft and misuse of our digital identity, we often voluntarily surrender intimate details for the sake of a viral trend.
This is evident in phenomena like recent AI caricature trends, where users upload selfies and provide detailed personal prompts or simply instruct the AI to generate portraits based on “everything it knows”.
Whether users actively describe their jobs and home environments or passively grant the AI permission to scour their cumulative chat history, the result is the same.
Users are allowing AI to aggregate scattered data points into a cohesive, high-resolution psychographic profile linked to their biometric data.
This is alarming. On one hand, there is a global call for stricter measures against AI misuse. On the other hand, we treat our sensitive personal data as currency to purchase a fleeting moment of social media engagement.
From a legal and data privacy perspective, this normalisation of “data surrender” carries inherent risks. When individuals participate in these trends, they are not merely “playing” with AI; they are actively training it.
Algorithms learn to recognise faces, understand contexts, and map lives with increasing precision. Every piece of data fed into these models contributes to a digital profile that renders individuals increasingly identifiable and vulnerable to targeting.
The implications for the vulnerable, particularly children, are profound. Children cannot legally provide consent, yet well-meaning adults upload their images for AI-generated content, establishing digital footprints with significant long-term privacy implications.
Such actions contribute to an ever-expanding digital dossier for a child, established without their future agency or understanding.
This is not to suggest that technology is inherently malicious, nor that progress should be halted. Innovation offers immense benefits and is crucial for societal advancement. However, it is imperative to critically assess the terms of our engagement with these powerful tools.
We cannot effectively advocate for robust protections against the non-consensual weaponisation of AI if we simultaneously cultivate a culture of uncritical over-sharing.
Responsible digital citizenship requires a clear understanding that privacy is not merely a passive right to be enforced, but an active discipline that individuals must exercise.
To foster a digital ecosystem that genuinely respects human dignity and drives responsible innovation, we must recognise that in the age of AI, our identity — our face, history and context — is our most valuable asset.
Protecting it demands not just robust legal frameworks against exploitation, but also a conscious cultivation of data hygiene and digital discernment.
Thulasy Suppiah
The views expressed here are the views of the writer and do not necessarily reflect those of the Daily Express. If you have something to share, write to us at: Forum@dailyexpress.com.my