Malaysia mulls enacting law on AI

PETALING JAYA – The Science, Technology and Innovation Ministry is looking into the possibility of regulating artificial intelligence (AI) applications in Malaysia, including labelling material produced by such apps.

Science, Technology and Innovation Minister Chang Lih Kang said the ministry is considering spearheading the drafting of a Bill that would involve consultations with technology experts, legal professionals, stakeholders and the public to ensure it is robust and relevant.

“It is a strategic move considering the global trend towards stronger regulations around AI usage,” he told the Sunday edition of The Star.

Mr Chang said that, given the widespread use of AI, it would be essential to label any material produced by generative AI as “AI-generated” or “AI-assisted” to ensure transparency and enable informed consumption.

“We should actively explore and advocate for policy measures that require content produced entirely or in part by AI to be clearly identified. Additionally, adopting global standards for AI transparency and pushing for relevant certification can bolster these transparency efforts,” he said.

“These standards might include guidelines on how to label AI-produced content and how to provide easy-to-understand explanations about the workings of AI systems,” he added.

In March, the World Economic Forum reported that the European Union was working on a legal framework for regulating the use of AI, chiefly focusing on strengthening rules on data transparency, quality, accountability and human oversight.

Dubbed the “AI Act”, the legislation is also designed to resolve “ethical questions and implementation challenges” in various industries, including education, finance and healthcare.

On July 21, AI companies, including OpenAI, Alphabet and Meta, made voluntary commitments to the United States government to implement measures such as watermarking AI-generated content.

Mr Chang pointed out that such a Bill in Malaysia would cover crucial aspects such as data privacy and public awareness of AI use.

“It would be important for this AI Act to, among other things, encompass areas such as transparency, data privacy, accountability and cyber security.

“The legislation could also include provisions for educating the public about AI and promoting research and development in the field,” he said.

The legislation, he said, would not curtail the development of AI technology. He added that it was important to balance the need to manage risks against the potential for innovation, while ensuring AI continues to contribute positively to the economy and society.

“It is also crucial for the ministry to continuously advance research and development in AI and machine learning technologies, promote ethical guidelines, and support innovation that can help detect and counter misinformation and other forms of harmful content,” he added.

On the possible abuse of AI in elections through libellous content or misinformation, Mr Chang said such risks were precisely why clear regulations are needed.

“It is crucial to have strong legal frameworks and ethical guidelines for AI use.

“This could include laws that mandate transparency about the source of information, and severe penalties for those who use AI tools to spread false information.

“We also need to work with relevant ministries, social media companies and other platforms where misinformation is often spread, pushing them to increase their efforts to identify and remove such content,” he said.

Mr Chang also said people would need to be taught to recognise AI-generated content so they can form informed opinions and make informed choices.

He stressed the need to develop resources and public awareness campaigns on the basics of AI and how it is being used to generate content.

“This includes understanding the biases that can be inherent in AI, as well as the distinction between human-produced and AI-produced content.

“Raising awareness about AI has many advantages. It helps people make better choices and decisions, encourages them to be more critical about the media they consume, and enables them to participate in discussions about AI rules and guidelines.

“Ultimately, it can lead to a more cautious and aware community, reducing the impact of AI-generated misinformation,” he said. THE STAR/ASIA NEWS NETWORK
