ARTIFICIAL INTELLIGENCE (AI) is rewriting modern Chinese lives in every way it can. Glazers in the ceramic factories of southwestern China’s Yunnan Province were replaced by robots three times more efficient at spraying glazes. The robots do not suffer from pneumoconiosis and can work 24 hours a day.
A TV station in Beijing quickly found tools to liberate its anchors: it takes only a few minutes to generate an anchor’s own avatar, which can “broadcast” around the clock in different accents and languages.
Industry insiders believe that algorithms are learning from, and beginning to surpass, human intelligence. The question is no longer whether AI can replace human labour, but how much of it, and how soon.
Following the emergence of ChatGPT, domestic large language models appeared in a steady stream across various fields last year. AI-driven robotics, intelligent manufacturing and autonomous driving, among others, are the technologies now under the spotlight.
Then there is Sora, a text-to-video model developed by OpenAI that generates video from text prompts, giving people a glimpse of how rapidly AI is advancing.
When I saw it for the first time, I felt it could be the start of something big, and that my own industry was, for the first time, being challenged by AI. Although Sora still cannot render legible text within its videos, the overall picture looks very close to the input prompt.
This quickly raised concerns about “fake news.” Think how much trouble an AI-generated Elon Musk, Donald Trump or government spokesperson could cause the world. This is one of the reasons Sora has not yet been released to the public, and why regulation of the AI industry is particularly needed.
As the world’s largest producer of artificial intelligence research and applications, China rolled out a New Generation AI Development Plan as early as 2017. While the plan primarily focuses on encouraging AI development, it also lays out a timetable for building AI governance regulations through 2030.
In the following years, several more specific and influential regulations were introduced: the 2021 regulation on recommendation algorithms, which prohibits excessive price discrimination and protects the rights of workers affected by algorithmic scheduling, and the 2022 rules for deep synthesis, which require prominent labelling of synthetically generated content. The most recent are the 2023 draft rules on generative AI, which require training data and model output to be “true and accurate” and “not to be evil.”
The sweeping development of AI naturally made it a buzzword at China’s 2024 Two Sessions, an opportunity to reflect on broader trends in Chinese politics and policymaking. The government realises that, as the country’s demographic dividend declines, AI will become a “new productive force” and play a big part in its economic blueprint.
Data from McKinsey & Company suggests generative AI could add $2.6 trillion to $4.4 trillion to the world’s economy annually in the coming decades, and could unlock a further $2 trillion in economic benefits in China each year.
Chinese academics, businesses and policymakers are actively discussing how to maintain effective content control without suppressing the country’s emerging generative AI industry.
Official support is pivotal. In January, China’s State-owned Assets Supervision and Administration Commission announced that it would promote several major projects, build strategic emerging-industry clusters, and launch special “AI+” initiatives. Investment in the corresponding areas has also been encouraged.
AI is a technology bound to boom, and China is embracing it, governing it and working out how to develop it further.