Artificial Intelligence, Intellectual Property, and the Rise of Cyber Risks: A New Legal Battleground
Artificial intelligence is no longer a futuristic concept—it is embedded in our daily lives, from content recommendation systems to generative models creating images, text, and even music. While this technological leap offers enormous benefits, it also brings with it a new set of legal challenges that lie at the intersection of intellectual property law and cyber security. The speed of AI development has outpaced the ability of lawmakers to provide clear rules, creating an uncertain environment for creators, businesses, and regulators alike.
One of the biggest debates in recent years has been whether AI-generated works deserve copyright protection. Courts in the United States have so far been consistent in saying no, most notably in Thaler v. Perlmutter, where the courts upheld the U.S. Copyright Office's refusal to register a work created without human authorship. This outcome signals that copyright law remains rooted in the human element, but it leaves unanswered questions: if companies rely on AI to generate creative outputs, who will own the commercial value of those works, and how will disputes be resolved across jurisdictions where the laws differ?
At the same time, the training of AI models on massive datasets raises equally pressing concerns. Many of these datasets include copyrighted materials scraped from the internet without permission. Lawsuits against companies like Stability AI and OpenAI show how fragile the balance is between innovation and rights protection. In the absence of clearer fair use standards, courts will play a central role in shaping the boundaries of what is acceptable, but this patchwork approach risks creating fragmented rules that complicate international commerce.
Beyond intellectual property, the rise of AI has fueled cyber law concerns. Deepfake technologies, for instance, are being weaponized in political campaigns, financial scams, and even personal harassment cases. While existing fraud and defamation laws may provide partial remedies, they are not always well-equipped to handle the scale and sophistication of AI-enabled threats. The European Union has adopted the EU AI Act, and regulators in the United States are still weighing new frameworks, but enforcement and cross-border coordination remain major hurdles.
The convergence of AI, IP law, and cyber risks reveals that these issues cannot be treated in isolation. Intellectual property rules must adapt not only to protect human creators but also to ensure that AI-driven innovation does not collapse under the weight of endless litigation. Cyber security frameworks, on the other hand, need to recognize that AI is both a tool for defense and a weapon for attackers. In this space, collaboration between technologists, lawyers, and policymakers is not optional—it is essential.
For businesses and legal practitioners, the lesson is clear: the traditional silos of copyright law, data protection, and cyber security are blurring. Contracts, compliance policies, and risk management strategies must evolve to address scenarios where an AI tool generates potentially infringing content, or where a deepfake attack damages a company’s reputation overnight. Waiting for legislators to catch up is not an option; proactive governance and ethical guidelines will likely define the winners and losers of this new era.
Ultimately, the question is not whether AI should be regulated under IP or cyber law, but how these fields can adapt together to create a balanced legal ecosystem. Ignoring one side of the equation risks creating loopholes that malicious actors can exploit. The challenge for lawyers today is to anticipate these risks and guide clients through a world where creativity, ownership, and security are all being reshaped by algorithms.
Lukas Schneider, LL.M. student at the University of Illinois College of Law