- AI is getting dangerous: a Polish researcher demonstrated the risk by creating a fake passport with the help of ChatGPT.
- The fake passport could likely bypass automated checks on major fintech platforms, raising the stakes considerably.
- This raises serious concerns about mass identity theft and could open the door to large-scale fraud.
The Polish Experiment
A Polish researcher used ChatGPT-4o to forge a passport good enough to fool the basic digital identity checks employed by major financial platforms.
“You can now generate fake passports with GPT-4o. It took me 5 minutes to create a replica of my own passport that most automated KYC systems would likely accept without blinking,” Polish researcher Borys Musielak posted on X. His test produced more realistic results than older tools such as Photoshop, undercutting the assumption that convincing forgeries require specialist software.
Musielak continued: “The implications are obvious: any verification flow relying on images as ‘proof’ is now officially obsolete. The same applies to selfies, static or video; it doesn’t matter. GenAI can fake them too. Photo-based KYC is done. Game over.”
He added: “The only viable path forward is digitally verified identity, like eID wallets mandated by the EU. One of the companies ahead of this shift is our portfolio startup. If you’re running KYC in banking, insurance, travel, crypto, or anywhere else, it’s time to upgrade your process. Your users deserve better. So does your compliance team.”
Security experts caution that this capability could enable mass identity theft and fake account creation at historic scale.
Concerns and Risks
Following Musielak’s post, several risks deserve serious attention. Bypassing Know Your Customer (KYC) systems is a growing concern, as most of these systems rely on automated image-based screening. AI-generated fake passports or IDs may pass these checks, enabling fraudulent account openings in areas such as banking, crypto exchanges, and government services.
Another major risk is mass identity theft: highly realistic fake IDs can now be created in a matter of minutes, allowing malicious actors to produce stolen or synthetic identities in bulk and commit fraud at scale. This also erodes trust in digital documents; if visual documents cannot be trusted, entire systems built on document verification are at risk.

Governments, businesses, and platforms may face a trust crisis, forcing them to reimagine their verification pipelines. Compounding the problem are regulatory and legal loopholes, as current laws and frameworks are not yet equipped to keep pace with the rapid development of generative AI.
This lag allows these technologies to be used maliciously without clear accountability. Additionally, organized crime groups may weaponize generative AI to amplify their operations, using forged documents to facilitate trafficking, money laundering, or cybercrime at far greater scale than before.
Lastly, AI generated documents may deceive border control in situations where digital submission is followed by physical checks, potentially undermining national security and immigration systems, especially in countries heavily reliant on e-visas or offshore application processing.
Musielak’s test is a wake-up call: static image authentication is not secure in the AI age. Systems need to shift toward cryptographically provable digital identities, such as those in the EU’s eIDAS 2.0 framework, or they will remain vulnerable to attack.
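To make the contrast concrete, here is a minimal toy sketch of why a cryptographically signed credential resists the forgery described above while a photo does not. This is not the eIDAS 2.0 protocol: real eID schemes use asymmetric signatures (the issuer signs with a private key and anyone can verify with the public key), whereas this illustration uses a symmetric HMAC from Python’s standard library as a simplified stand-in, with a hypothetical issuer key.

```python
import hashlib
import hmac
import json

# Hypothetical issuer key for illustration only; real eID schemes use
# asymmetric key pairs, not a shared secret.
ISSUER_KEY = b"issuer-secret-key"

def issue_credential(claims: dict) -> dict:
    """Issuer binds a signature (here, an HMAC tag) to the claims."""
    payload = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": tag}

def verify_credential(cred: dict) -> bool:
    """Verifier recomputes the tag; any edit to the claims breaks it."""
    payload = json.dumps(cred["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["signature"])

cred = issue_credential({"name": "Jan Kowalski", "doc_type": "passport"})
print(verify_credential(cred))   # genuine credential verifies

# Tampering with the claims -- the analogue of editing a passport image --
# invalidates the signature, something no photo-based check can guarantee.
cred["claims"]["name"] = "Forged Name"
print(verify_credential(cred))   # forged credential fails
```

Unlike an image, the credential’s integrity does not depend on how realistic it looks; it depends on a key the forger does not have.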