The Next Frontier of Cybercrime Law for Artificial Intelligence and Criminal Liability



Authors

  • Sadia Sattar, Leads University, Lahore

DOI:

https://doi.org/10.59022/ijlp.355

Keywords:

Artificial Intelligence, Cybercrime, Liability, Autonomy, Regulation

Abstract

Artificial intelligence (AI) is spreading rapidly through society, yet criminal law still deals mainly with human actions. This research examines how AI systems challenge basic concepts of criminal law, such as intention (mens rea), action (actus reus), and causation. Unlike humans, AI can act on its own, learn from data, and sometimes cause harm without direct human control. Current laws are not fully equipped to handle situations in which AI commits or assists in criminal acts. By studying recent cases, proposed legislation, and legal theories, this research identifies major gaps in existing rules. It argues that new legal frameworks are needed to address different levels of AI autonomy. Suggested solutions include shared responsibility models, stronger corporate liability, and clear rules for AI design and use. These ideas are important not only for cybercrime but also for areas such as self-driving cars, medical AI, and automated decision-making.

References

Abbott, R. (2020). The reasonable robot: Artificial intelligence and the law. Cambridge University Press. https://doi.org/10.1017/9781108640534

Alqodsi, E. M., & Gura, D. (2023). High tech and legal challenges: Artificial intelligence-caused damage regulation. Cogent Social Sciences, 9(2), Article 2270751. https://doi.org/10.1080/23311886.2023.2270751

Barocas, S., Hardt, M., & Narayanan, A. (2019). Fairness and machine learning: Limitations and opportunities. MIT Press. https://fairmlbook.org/

Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press. https://global.oup.com/academic/product/superintelligence-9780199678112

Bryson, J., Winfield, A. F., & Theodorou, A. (2017). The artificial intelligence liability puzzle and a solution. Paladyn, Journal of Behavioral Robotics, 8(1), 180–194. https://doi.org/10.1515/pjbr-2017-0020

Bursztein, E. (2023, March). AI and cybersecurity: The new arms race. Google Security Research. https://security.googleblog.com/2023/03/ai-and-cybersecurity-new-arms-race.html

Calo, R. (2015). Robotics and the lessons of cyberlaw. California Law Review, 103(3), 513–563. https://www.californialawreview.org/wp-content/uploads/2015/06/2Calo.pdf

European Commission. (2024). AI liability directive proposal. EUR-Lex. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52022PC0496

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., … Vayena, E. (2018). AI4People—an ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5

Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a “right to explanation.” AI Magazine, 38(3), 50–57. https://doi.org/10.1609/aimag.v38i3.2741

Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2018). A survey of methods for explaining black box models. ACM Computing Surveys, 51(5), 1–42. https://doi.org/10.1145/3236009

Hallevy, G. (2015). Liability for crimes involving artificial intelligence systems. Springer International Publishing. https://doi.org/10.1007/978-3-319-10124-8

IEEE Standards Association. (2023). IEEE 2857-2021: Standard for privacy engineering and risk management. IEEE. https://standards.ieee.org/ieee/2857/7063/

Laukyte, M. (2017). AI and criminal liability. European Criminal Law Review, 7(2), 178–195. https://doi.org/10.5235/219174717821819828

Mamak, K. (2025). AI personhood, criminal law, and punishment. In P. Hacker (Ed.), Oxford intersections: AI in society (online ed.). Oxford Academic. https://doi.org/10.1093/9780198945215.003.0015

MIT Technology Review. (2024, January 15). The state of AI liability law. https://www.technologyreview.com/2024/01/15/1086435/ai-liability-law-status/

National Institute of Standards and Technology. (2023). Artificial intelligence risk management framework. https://www.nist.gov/itl/ai-risk-management-framework

National Security Commission on Artificial Intelligence. (2021). Final report. NSCAI. https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf

OECD. (2024). AI governance and liability framework. OECD Publishing. https://doi.org/10.1787/5k9fnh0vf8nj-en

Pagallo, U. (2013). The laws of robots: Crimes, contracts, and torts. Springer Netherlands. https://doi.org/10.1007/978-94-007-6564-1

Partnership on AI. (2023). AI liability and responsibility framework. Partnership on AI. https://partnershiponai.org/ai-liability-framework/

Robinson, P. H. (1993). Should the criminal law abandon the actus reus–mens rea distinction? In S. Shute, J. Gardner, & J. Horder (Eds.), Action and value in criminal law (pp. 187–211). Oxford University Press. https://doi.org/10.1093/acprof:oso/9780198258063.003.0009

Russell, S., & Norvig, P. (2020). Artificial intelligence: A modern approach (4th ed.). Pearson. https://aima.cs.berkeley.edu/

Selbst, A. D. (2021). An institutional view of algorithmic impact assessments. Harvard Journal of Law & Technology, 35(1), 117–186. https://jolt.law.harvard.edu/assets/articlePDFs/v35/35HarvJLTech117.pdf

Stanford HAI. (2024). AI index report 2024. Stanford University. https://aiindex.stanford.edu/wp-content/uploads/2024/04/HAI_AI-Index-Report_2024.pdf

UK Law Commission. (2023). Automated vehicles: Consultation on liability. Law Commission. https://www.lawcom.gov.uk/project/automated-vehicles/

United Nations. (2023). AI for good global summit report. ITU. https://aiforgood.itu.int/summit23/report/

US Government Accountability Office. (2024). Artificial intelligence: Status of developing agency guidance. GAO. https://www.gao.gov/products/gao-24-105541

Vladeck, D. C. (2014). Machines without principals: Liability rules and artificial intelligence. Washington Law Review, 89(1), 117–150. https://digitalcommons.law.uw.edu/wlr/vol89/iss1/5/

World Economic Forum. (2024). AI governance alliance report. WEF. https://www.weforum.org/publications/ai-governance-alliance-report/

Published

2025-08-30

How to Cite

Sattar, S. (2025). The Next Frontier of Cybercrime Law for Artificial Intelligence and Criminal Liability. International Journal of Law and Policy, 3(8), 12–25. https://doi.org/10.59022/ijlp.355

Issue

Vol. 3 No. 8 (2025)

Section

Articles