Indonesia’s Legal Vacuum on AI: A Rising Threat to Stability and Rights


Jakarta – As artificial intelligence (AI) rapidly transforms industries and public life, Indonesia is grappling with a critical gap: the absence of comprehensive legislation to regulate the technology. The country’s expanding AI use — from creative content generation to social media manipulation — is outpacing its legal system, leaving a dangerous vacuum with growing national and international implications.

Today, AI powers music, video, and digital content across Indonesia’s booming creative economy. Yet there is still no formal legal framework recognizing or regulating AI’s role in authorship, copyright, or accountability. The nation relies solely on the Electronic Information and Transactions Law (UU ITE) and a non-binding 2023 circular by the Ministry of Communication and Informatics (Kominfo) — neither of which is designed to handle the ethical, economic, and legal complexities of autonomous AI.

Legal Blind Spots in an AI-Powered Society

Current Indonesian laws fall short in addressing machine-generated content and decision-making. The UU ITE focuses broadly on digital communication but lacks any mention of algorithmic production, machine authorship, or AI accountability.

According to legal scholar Harry Surden (2019), the fundamental limitation of traditional law lies in its human-centric focus. In Indonesia, this means that songs, illustrations, and even entire videos created by AI software like Boomy or Synthesia cannot be legally copyrighted — because, under current interpretation, only humans can own intellectual property.

This loophole is more than academic. AI-generated content is already monetized en masse on platforms like YouTube. Creators using generative AI are profiting, while rights over royalties, content reuse, and attribution remain undefined. If an AI replicates elements from copyrighted materials — intentionally or not — who is liable? The app user? The developer? Or no one at all?

Disinformation and Deepfake Dangers

The stakes go beyond the economy. AI-generated content is increasingly deployed in political contexts — to spread disinformation, sway public opinion, and even incite distrust toward institutions. Deepfakes, which manipulate public figures’ faces and voices with chilling realism, have been used to create false narratives targeting political leaders, law enforcement, and state agencies.

This opens an entirely new dimension of national security risk. When malicious actors use AI to produce politically charged, manipulated content, Indonesia’s law enforcement is left without a clear mandate. The police cannot prosecute what the law does not define — and that ambiguity offers a dangerous safe haven for digital sabotage.

Experts argue that the legal vacuum leaves Indonesia vulnerable to polarization, manipulation, and even foreign interference. With elections on the horizon and digital engagement intensifying, the lack of enforceable AI governance becomes not just a legal issue, but a democratic one.

The Need for Inclusive and Adaptive AI Legislation

To address these growing risks, Indonesia must move urgently toward enacting AI-specific legislation — but not hastily, and not in isolation. Legal experts and technologists emphasize the need for a participatory approach, involving private sector innovators, civil society, academia, and policymakers.

Future AI laws must cover authorship and ownership of machine-generated content, establish responsibility in the event of infringement or harm, and ensure transparency, accountability, and fairness in automated decision-making. They must also balance innovation with regulation — enabling growth while protecting rights and institutions.

International examples abound. The European Union’s AI Act and Japan’s adaptive policy model provide useful frameworks for Indonesia to study. Both approaches stress open dialogue, risk-tiered classification, and sector-specific guidelines — principles that Indonesia’s emerging AI landscape desperately needs.

A Policy Crisis in Motion

Without clear rules, AI in Indonesia will continue to operate in legal ambiguity. Creative industries will expand on shaky intellectual property foundations. Law enforcement will struggle to respond to increasingly sophisticated digital threats. And public trust in both information and institutions will erode.

The ITE Law and ministry circulars are no longer sufficient. As AI becomes more integrated into Indonesia’s economy and politics, legal silence is no longer an option. A robust, inclusive, and adaptive AI law — grounded in human rights and technological realities — is not just desirable. It is imperative.