Sudha Murty’s Deepfake Video Used for Investment Scam, Author Reacts Strongly
Renowned author, philanthropist, and Rajya Sabha member Sudha Murty has reacted after a deepfake video impersonating her was allegedly used to promote a fraudulent investment scheme. The incident has once again exposed the growing threat posed by artificial intelligence–driven misinformation and digital fraud, particularly involving trusted public figures.
The fake video, circulated online, falsely showed Sudha Murty endorsing a high-return investment opportunity, misleading unsuspecting viewers and potentially putting their savings at risk.

How the Scam Came to Light
The deepfake video began circulating on social media platforms and messaging apps, using manipulated visuals and audio to make it appear as though Sudha Murty was personally recommending a specific financial investment. The content was designed to look credible, leveraging her reputation for integrity, simplicity, and social responsibility.
Alert viewers and followers soon flagged the video as suspicious, prompting a public clarification from Murty and calls for further investigation.
Sudha Murty’s Reaction
Reacting to the misuse of her identity, Sudha Murty strongly condemned the act and clarified that she has no association with any investment scheme or financial advisory service. She warned people against trusting such videos blindly and urged them to verify information before acting on it.
She described the incident as deeply disturbing, especially because such scams often target ordinary people, particularly senior citizens and those with limited digital awareness.
Concern Over Misuse of Trust
Sudha Murty emphasised that the scam was particularly dangerous because it exploited public trust. Because she is known for her grounded lifestyle and ethical values, her image was deliberately used to lend credibility to a fraudulent activity.
She expressed concern that people might lose hard-earned money by believing such fake endorsements, calling it a serious misuse of technology.
Rising Threat of Deepfake Technology
The incident has highlighted the alarming rise of deepfake technology being used for criminal purposes. With advancements in artificial intelligence, it has become increasingly easy to create realistic fake videos and audio clips that are difficult to distinguish from genuine content.
Experts warn that public figures are especially vulnerable, as their speeches, interviews, and public appearances provide ample material for manipulation.
Legal and Cybercrime Angle
Authorities have been alerted about the fake video, and cybercrime agencies are expected to investigate the source of the scam. Legal experts say impersonation, identity theft, and digital fraud using deepfakes constitute serious criminal offences.
There are growing calls for stricter regulations and faster response mechanisms to tackle AI-driven scams.
Public Advisory Issued
Following the incident, cybersecurity experts have advised the public to remain cautious of investment offers promoted through social media videos, especially those promising unusually high returns.
They stressed that no credible public figure or institution promotes investments through unsolicited videos or messaging platforms.
Call for Digital Awareness
Sudha Murty also used the opportunity to stress the importance of digital literacy. She urged people, especially the elderly, to consult family members or trusted financial advisors before investing money based on online content.
She said technology should be used to educate and uplift society, not deceive and exploit vulnerable individuals.
Broader Implications
This case is not an isolated incident. Several celebrities, business leaders, and public figures have recently been impersonated in deepfake-based scams. The trend has raised urgent questions about online safety, platform accountability, and the ethical use of artificial intelligence.
The misuse of respected personalities adds another layer of harm, eroding public trust and increasing fear around digital content.
Conclusion
The deepfake scam involving Sudha Murty serves as a stark warning about the dangers of unchecked technology and digital deception. Her swift clarification and strong reaction have helped prevent further damage, but the incident underscores the urgent need for awareness, regulation, and vigilance.
As deepfake technology becomes more sophisticated, experts stress that collective responsibility—from authorities, tech platforms, and users—is essential to protect individuals from fraud and preserve trust in the digital age.
