The complexity of AI training algorithms is growing astonishingly fast, and the amount of computation required to run the new training algorithms seems to double roughly every four months.
To keep pace with this growth, you need hardware for AI applications that is not only scalable, but also capable of managing increasingly complex models at a point close to the end user.
IDTechEx predicts that the growth of AI will continue unabated over the next ten years, as our world and the devices that inhabit it become increasingly automated and interconnected.
Training and inference stages
Machine learning is the process by which computer programs use data to make predictions based on a model and then optimize the model to better fit that data by adjusting its weights. This computation involves two stages: training and inference.
In the training stage, data is fed into the model, which adjusts its weights until it properly fits the data provided. In the inference stage, the trained AI algorithm is executed, and new data (not seen during training) is classified in a way that is consistent with what the model has learned.
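The two stages described above can be sketched in a few lines of Python. This is a minimal illustration, not a production training loop: a one-weight linear model fitted by gradient descent, where the data, learning rate, and step count are all illustrative choices.

```python
# Minimal sketch of the training and inference stages:
# a one-weight linear model y = w * x fitted by gradient descent.

def train(xs, ys, lr=0.01, steps=1000):
    """Training stage: repeatedly adjust the weight to fit the data."""
    w = 0.0
    for _ in range(steps):
        # Gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad  # adjust the weight to better fit the data
    return w

def infer(w, x):
    """Inference stage: apply the trained model to new, unseen input."""
    return w * x

# Data generated by y = 3x; training should recover w close to 3.
xs, ys = [1.0, 2.0, 3.0, 4.0], [3.0, 6.0, 9.0, 12.0]
w = train(xs, ys)
print(round(w, 2))               # 3.0
print(round(infer(w, 5.0), 1))   # 15.0
```

Real AI models repeat this same adjust-the-weights calculation across millions or billions of parameters, which is why the training stage demands so much parallel hardware.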
The training stage involves performing the same calculation millions of times. It is therefore carried out in cloud computing environments, where large numbers of chips capable of the parallel processing needed to train algorithms efficiently can be deployed.
Chip manufacturing & supply
Intel, Samsung, and TSMC are the only companies that make 5nm chips. TSMC's global market share in semiconductor production is around 60%. In the case of advanced nodes, it's close to 90%. Of TSMC's six 12-inch and six 8-inch factories, two are in China and one is in the United States. The rest are in Taiwan.
This concentration carries a great deal of risk should the supply chain be disrupted in any way, as happened in 2020 when there was a global chip shortage.
Since then, the United States, the European Union, South Korea, Japan and China have all sought to reduce their exposure to a manufacturing shortfall, should another, even more severe chip shortage occur.
Proof of this can be seen in several government initiatives that aim to stimulate additional private investment through the appeal of tax breaks and partial financing in the form of grants and loans.
Growth in the next decade
Revenue generated from the sale of AI chips is expected to rise to nearly $300 billion by 2034, with a compound annual growth rate of 22% between 2024 and 2034.
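A quick back-of-the-envelope check shows what these two figures together imply. The 2024 baseline below is derived from the stated $300 billion target and 22% CAGR; it is not a figure given in the source.

```python
# Implied 2024 revenue from the stated 2034 target and CAGR:
# baseline = target / (1 + rate) ** years
target_2034 = 300.0   # projected revenue in 2034, $ billions
cagr = 0.22           # compound annual growth rate, 2024-2034
years = 10

implied_2024 = target_2034 / (1 + cagr) ** years
print(round(implied_2024, 1))  # 41.1 ($ billions, derived estimate)
```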
This revenue figure incorporates the use of chips for the acceleration of machine learning workloads at the network edge, for the telecom edge, and within cloud data centers.
As of 2024, chips for inference purposes (both at the edge and within the cloud) comprise 63% of the revenue generated, a share that will grow to more than two-thirds of total revenue by 2034.
In terms of industry verticals, IT and telecommunications are expected to lead the use of AI chips over the next decade, followed closely by banking, financial services and insurance (BFSI) and consumer electronics.