In its latest move to step up oversight of powerful artificial intelligence (AI) technology, China has issued draft guidelines for standardising the industry.
The draft guidelines are aimed at “seizing the early opportunities from the development of the AI industry”, China’s industry ministry said in a statement posted on its website on Wednesday.
The draft proposes to form more than 50 national and industry-wide standards for AI by 2026.
It also said China aimed to participate in forming more than 20 international standards for AI by that time.
The ministry added that 60% of these prospective standards should be geared towards serving “general key technologies and application development projects”.
Furthermore, it aims to have more than 1,000 companies adopt and advocate for these new standards.
The development comes at a time when China is trying to catch up with the United States in AI development after US company OpenAI shocked the world with its generative AI sensation ChatGPT in 2022.
China sees AI as an area in which it wants to rival the US, and has set its sights on becoming a world leader in the field by 2030.
China’s push to regulate AI
The Xi Jinping-led government has also taken a lead in developing regulations to oversee AI at a time when countries globally are grappling with setting guardrails for the technology.
In April last year, China’s cyberspace regulator unveiled draft measures for managing ChatGPT-like generative AI services, saying it wanted firms to submit security assessments to authorities before launching their offerings to the public.
The regulator added that China supports AI innovation and application, but content generated by generative AI had to be in line with the country’s core socialist values.
At a time when the industry globally was debating ethical issues around material used to train AI models, the Chinese regulator also said that generative AI providers would be responsible for the legitimacy of the data used to train their products.
It further directed providers to take measures to prevent discrimination when designing algorithms and training data for their services.
In October, the regulator also proposed a blacklist of sources that could not be used to train AI models.
The sources included those “advocating terrorism” or violence, as well as “overthrowing the socialist system”, “damaging the country’s image”, and “undermining national unity and social stability”.
Most recently, in December — a time when copyright concerns around AI-generated material were at their peak — a court in China awarded damages to a creator who sued a blogger for using his AI-generated image without permission.
The court said the creator had “made a certain degree of intellectual investment” in selecting prompts to generate the image and was, thus, protected by copyright law.
- Reuters, with additional inputs from Vishakha Saxena