Nvidia Tightens Ecosystem Barriers, Chinese GPU Manufacturers Accelerate Autonomous Ecosystem Development

Nvidia’s (NVDA) recent move to bar CUDA software from running on other GPUs through translation layers has sparked widespread discussion in China’s chip industry.

A GPU’s core competitiveness lies in factors such as its architecture, which determine both raw performance and the strength of the surrounding computing ecosystem. Nvidia, with its first-mover advantage and the CUDA platform, which dramatically lowers the development threshold, has locked in a large base of users. This has not only pushed GPUs to center stage in general-purpose computing but also built a moat around Nvidia itself.

“At the toolchain level, GPU makers that offer CUDA compatibility will be affected, though technically the impact is quite complex. Nvidia is sending a strong signal that it is tightening the fence around its own ecosystem,” a GPU industry insider in China told reporters.

Chinese brokerage CITIC Securities noted that, given CUDA’s closed-source nature and rapid pace of updates, it is difficult for latecomers to achieve perfect compatibility through instruction translation, and even partial compatibility can come with significant performance losses, leaving them perpetually behind Nvidia on cost-effectiveness. Moreover, CUDA is Nvidia’s proprietary software stack and exposes many features specific to Nvidia GPU hardware that have no counterpart in chips from other manufacturers.
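To make “proprietary hardware features” concrete, the sketch below (our own illustration, not drawn from the article or from any of the companies discussed) shows a CUDA kernel built around warp shuffle intrinsics, a capability Nvidia exposes directly in its programming model. A translation layer targeting a non-Nvidia GPU has to map or emulate such intrinsics, which is one place the performance losses described above can arise.

```cuda
// Warp-level sum reduction using __shfl_down_sync, an Nvidia-specific
// intrinsic. Data moves register-to-register inside a warp without touching
// shared memory, the kind of hardware-tied behavior a translation layer
// would have to emulate on other GPUs.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void warpSum(const float* in, float* out) {
    float v = in[threadIdx.x];               // one element per lane
    for (int offset = 16; offset > 0; offset >>= 1)
        v += __shfl_down_sync(0xffffffff, v, offset);  // tree reduction
    if (threadIdx.x == 0) *out = v;          // lane 0 holds the warp total
}

int main() {
    float h_in[32], h_out = 0.0f;
    for (int i = 0; i < 32; ++i) h_in[i] = 1.0f;      // expected sum: 32
    float *d_in, *d_out;
    cudaMalloc(&d_in, sizeof(h_in));
    cudaMalloc(&d_out, sizeof(float));
    cudaMemcpy(d_in, h_in, sizeof(h_in), cudaMemcpyHostToDevice);
    warpSum<<<1, 32>>>(d_in, d_out);                  // one warp of 32 threads
    cudaMemcpy(&h_out, d_out, sizeof(float), cudaMemcpyDeviceToHost);
    printf("sum = %f\n", h_out);
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```

Built with the standard CUDA toolchain (for example, `nvcc warpsum.cu`), this runs natively on Nvidia hardware; reproducing the same register-level data exchange elsewhere is where compatibility layers tend to pay a performance tax.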

This is the dilemma facing Chinese manufacturers. Chinese GPU makers are currently investing heavily in research and iterating on their architectures, seeking to build autonomous software and hardware ecosystems of their own. Compatibility with existing ecosystems makes a Chinese GPU’s early development easier, but in the long run these companies need to move beyond compatibility and develop their own core technologies.

“We often talk about compatibility, but compatibility doesn’t mean doing exactly what Nvidia does; it means your product can support the full technology ecosystem, absorb Nvidia’s ecosystem, and use it directly. Matching the comprehensive functionality of Nvidia’s GPU chips, however, is very difficult, so most manufacturers’ strategy at present is to implement only some of the AI-acceleration functions of Nvidia’s GPUs,” said Zhang Yubo, CTO of China’s Moore Threads. “But Moore Threads covers the four major functions of Nvidia’s system architecture: general-purpose computing, AI acceleration, graphics rendering, and video encoding and decoding.”

Moore Threads, established in 2020, is an integrated-circuit company focused mainly on designing full-featured GPU chips. It has reportedly launched its MUSA architecture, which it positions as a full counterpart to CUDA: users can recompile applications written in CUDA into MUSA applications with Moore Threads’ compiler, achieving near-zero-cost migration, while new applications can be developed in standard programming languages. “So MUSA is itself an independent ecosystem, and at the same time an open one that can absorb the existing ecosystem,” Zhang Yubo said.
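For a sense of what such a migration operates on, here is a minimal, self-contained CUDA program of the kind a recompilation path like this would target. Only the CUDA side is shown; how Moore Threads’ compiler maps it onto MUSA is taken on the article’s description and not reproduced here.

```cuda
// Minimal CUDA vector addition. According to the article, source like this
// would be recompiled with Moore Threads' toolchain into a MUSA application;
// the MUSA side of that mapping is not shown here.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Managed (unified) memory keeps the example short; explicit cudaMalloc
    // plus cudaMemcpy would work just as well.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f (expected 3.0)\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The appeal of the recompile-rather-than-translate approach is that code like this is rebuilt once against the target toolchain instead of being intercepted at run time, which appears to be what “near-zero-cost migration” refers to.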

“Independence and openness are not contradictory. On the one hand we develop and control the technology ourselves; on the other, we stay open and compatible so we can draw on Nvidia’s strengths,” Zhang Yubo told reporters. “Only when the hardware functions are fully comparable can we effectively absorb the applications of the CUDA ecosystem. If you cannot absorb the existing ecosystem and have to build a new one from scratch, it takes ten to twenty years to truly establish it.”

Customer migration costs are in fact one of the major factors pushing Chinese GPU manufacturers to accelerate ecosystem building. Some Chinese companies are also taking the “difficult but correct” path, choosing to build their own ecosystems rather than pursue compatibility. Sugon is one of them.

Sugon focuses on cloud and edge computing products for artificial intelligence, aiming to build a computing foundation for general artificial intelligence with independently developed AI accelerator cards, system clusters, and hardware-software solutions.

For companies like Sugon, customer migration costs never go away, so they need to find like-minded customers. “Sugon hopes to build an open-source ecosystem with industry partners, and our customers are willing to work with partners who take a long-term view to refine products together,” said Li Xingyu, Chief Ecosystem Officer of Sugon.

As the technology advances, the path of independent ecosystem building by Chinese manufacturers is expected to widen.

“The paradigm shift in technology ecosystems brings a new opportunity for startups like Sugon,” Li Xingyu believes. In the era of large models, model architectures have converged on a common base, the Transformer, which narrows the demands placed on hardware, makes the direction of hardware design more focused and clear, and reduces fragmentation. At the same time, increasingly popular open-source frameworks and programming languages give chip companies a better foundation for adapting to different models and make it easier for developers to target different hardware at the tooling level.

“A customer’s migration cost depends on many factors, but the overall trend is toward greater convenience,” Li Xingyu said. “For example, we are compatible with PyTorch’s mainstream operators, so models built on those operators can in principle be migrated directly without changing the source code. Going forward we will also support more mainstream open-source programming languages, making it easier for customers to develop new models.”

Many AI chip makers in China are currently choosing to build their own ecosystems, but they have not coalesced into a unified one, and each is in a period of rapid development. In the early stages of a fast-iterating technology it is hard to set a unified standard. In the GPU industry’s early days overseas, more than forty companies competed; after the shakeout, only a few survived and grew stronger. With technology trends changing rapidly, every player has its own view, and letting the market and customers choose may be the better way.

“The improvement of technology ultimately depends on the pull of the market and customer demand. China’s real advantage lies in having the world’s largest market and many developers willing to embrace new technologies,” Li Xingyu said.

Nvidia’s recent ban on running CUDA software on other GPUs through translation layers may serve as a wake-up call for manufacturers that rely solely on the compatibility path. According to GPU industry insiders, Nvidia’s “symbolic restriction measures” are still relatively restrained and do not yet restrict the application programming interfaces themselves. For an entrepreneur, however, what matters is not just the current situation but what the second and third rounds of restrictions might be.
