Cybercrimes and AI: How Emerging Technologies Create New Risks
Artificial intelligence (AI) has rapidly evolved from a promising innovation into a core part of our digital ecosystem. It powers search engines, assists in medical diagnoses, streamlines legal research, and automates decision-making in finance, logistics, and governance. Yet, as AI tools become more capable, they are also being weaponized. Cybercriminals are exploiting these technologies to carry out more advanced, targeted, and harder-to-detect attacks.
This transformation in the threat landscape poses complex questions: How do we regulate crimes committed with the aid of AI? Can existing cyber laws adapt quickly enough? And how do we balance innovation with public safety?
A New Era of AI-Driven Cybercrimes
AI has given cybercriminals a toolkit that makes traditional attack methods more sophisticated:
Deepfake scams — Fraudsters use AI to create hyper-realistic videos or audio that impersonate executives, politicians, or even family members. This is not just a reputational threat but a financial one, with cases reported where employees were tricked into transferring millions.
Hyper-personalized phishing — AI can scan social media, emails, and public data to craft phishing messages tailored to an individual’s interests, writing style, and relationships, drastically increasing the success rate.
Automated hacking and vulnerability scanning — AI can process huge datasets to identify system weaknesses faster than any human hacker. Once vulnerabilities are found, malware or ransomware can be deployed almost instantly.
AI-powered misinformation campaigns — From fake news articles to automated social media bots, AI can manipulate public opinion or disrupt elections on a large scale, raising concerns for both democracy and national security.
Legal and Regulatory Challenges
Attribution
Pinpointing the origin of an AI-driven attack is far more difficult than in traditional cybercrimes. Attackers can route actions through multiple layers of AI-generated scripts, anonymizing networks, and compromised devices worldwide.
Jurisdiction
Cybercrimes involving AI often cross borders. A deepfake scam may be orchestrated in one country, hosted on servers in another, and target victims in multiple jurisdictions. This makes prosecution complex and sometimes politically sensitive.
Evidentiary issues
Proving in court that AI-generated content is fraudulent requires specialized forensic expertise. Judges and juries may also struggle to understand the underlying technology, potentially undermining fair-trial guarantees.
Legislative gaps
Most countries, including Uzbekistan, currently regulate cybercrime under general criminal and IT laws. While these frameworks address hacking, fraud, and identity theft, they are not always specific enough to deal with AI-assisted attacks.
Uzbekistan’s Legal Position and Global Practices
Uzbekistan has introduced the Law on Information Security and the Law on Personal Data, which provide a foundation for addressing cybersecurity threats. However, AI-specific risks demand more targeted provisions. Possible reforms include:
- Creating a dedicated AI and cybercrime statute defining offenses such as deepfake fraud, AI-aided phishing, and automated hacking.
- Expanding digital evidence rules to include AI-generated media.
- Establishing specialized cybercrime units with AI expertise within law enforcement.
International Comparisons
- European Union: The AI Act sets standards for high-risk AI systems, while the GDPR enforces strict rules on personal data processing.
- United States: No single federal AI law exists, but states such as California and New York have introduced AI-specific consumer protection measures.
- Singapore: Combines strong cybercrime legislation with AI governance frameworks to encourage safe innovation.
Prevention and Protection Strategies
For Governments
- Establish AI-focused cybersecurity task forces.
- Strengthen international cooperation for cross-border investigations.
- Implement public awareness campaigns on AI-enabled scams.
For Businesses
- Adopt AI-powered cybersecurity tools to detect anomalies.
- Regularly audit digital infrastructure for vulnerabilities.
- Train employees to identify and report suspicious activity, including deepfakes.
For Individuals
- Verify unexpected requests for money or sensitive information, especially those received via audio or video messages.
- Use multi-factor authentication for all online accounts.
- Stay informed about the latest AI-related fraud tactics.
Conclusion
AI is a double-edged sword: a driver of innovation and efficiency, yet also a catalyst for new forms of cybercrime. Legal systems, technology developers, and security professionals must collaborate to create adaptive, AI-specific regulatory frameworks. Without proactive measures, society risks falling behind in the battle against AI-powered threats.
Final Note: For Uzbekistan, now is the time to lead regionally by introducing comprehensive AI and cybercrime legislation. This will not only protect citizens and businesses but also position the country as a safe and forward-looking digital economy.
Nuriddin Khudoyberdiev
The Pennsylvania State University, Penn State Law.

