We’ve entered the artificial intelligence (AI) era, and there is growing interest across the telecom industry in how best to leverage AI’s superpowers to generate business growth, build differentiated networks and achieve unprecedented operational efficiencies. All of these outcomes are achievable, but the journey is daunting, and falling behind can be fatal for a business.
The telecom industry has been gradually adopting AI/machine learning (ML) in networks and for business operations. For example, the Open RAN industry has been working on several use cases to increase spectral efficiency, improve energy efficiency and optimize network performance. Similarly, telecom business operations have been working on chatbots to improve customer engagement, conduct sentiment analysis and predict churn.
The journey ahead is going to be transformational.
In a survey released in February 2023, NVIDIA found that nearly all telecommunications respondents (95%) were already engaged with AI. However, much of this engagement was in early stages: only one-third (34%) had been using AI for over six months, while nearly a quarter (23%) were still learning how to apply AI to their networks.
In this blog, I’ll cover several aspects of how AI/ML will transform the telecom business.
AI Applied to the Network
Today, AI is already being used to help networks achieve some level of automation and efficiency. Going forward, the industry is poised to move from AI-assisted, where AI is used only peripherally, to AI-native, where the core of the system is built and operated with AI/ML models.
AI/ML will permeate through the entire network stack, presenting unique opportunities for communications service providers (CSPs) to create differentiated networks that are highly automated, delivering higher performance, achieving higher energy efficiency and more. Every layer of the network stack—from neural receivers to neural schedulers to neural optimizers—is poised to be transformed with AI/ML to deliver higher performance and better quality.
And from network planning to network deployment to network optimization—the entire network lifecycle is poised to be automated. The use cases range from applying traditional AI/ML for network optimization, to harnessing the natural language processing (NLP) superpowers of generative AI (GenAI) for network troubleshooting. Further, network digital twins will enable CSPs to carry out exhaustive simulations in the digital world before rolling out changes in the real physical world.
This is just scratching the surface. AI will enable CSPs to build differentiated and automated networks.
AI Supporting the Business
AI will play a pivotal role in simplifying, modernizing and automating CSPs’ business operations. AI is poised to enable CSPs to achieve unprecedented productivity gains, to effectively monetize their networks and to grow both their top line and bottom line.
The journey has already begun for many CSPs with chatbots and call center copilots that are creating new ways to deliver customer care, perform sentiment analysis and proactively mitigate churn. We are collaborating with our partners in a TM Forum Catalyst project, “AI chat agent: The game changer for telecoms,” to pave the way for the industry on this. This Catalyst project powers a chatbot with telecom-trained large language models (LLMs) to provide swift, precise and personalized customer care. The goal is both to enhance the customer experience and to deliver revenue growth. The Catalyst demonstrates the power of GenAI by showing how LLM-based chatbots can understand customer intent and trigger automated actions to offer personalized data plans or initiate new service subscriptions.
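To make the pattern concrete, here is a minimal, hypothetical sketch in Python of the intent-to-action flow described above. The helper names (classify_intent, fetch_offers, open_ticket) are illustrative stand-ins for a telecom-trained LLM and a CSP’s BSS APIs, not the Catalyst project’s actual implementation.

```python
# Hypothetical sketch: an LLM classifies customer intent, the chatbot triggers an action.
# All helpers below are illustrative stand-ins, not real product APIs.

def classify_intent(message: str) -> str:
    """Stand-in for a telecom-trained LLM call; returns one of a small set of intents."""
    text = message.lower()
    if "data" in text and ("more" in text or "upgrade" in text):
        return "upgrade_data_plan"
    if "no signal" in text or "outage" in text:
        return "report_outage"
    return "other"

def fetch_offers(customer_id: str) -> list[str]:
    """Stand-in for a personalization service keyed on the customer's usage profile."""
    return ["10 GB booster", "Unlimited 5G upgrade"]

def open_ticket(customer_id: str, message: str) -> str:
    """Stand-in for a trouble-ticketing API."""
    return "TT-000123"

def handle_message(customer_id: str, message: str) -> str:
    intent = classify_intent(message)
    if intent == "upgrade_data_plan":
        return f"Plans that fit your usage: {', '.join(fetch_offers(customer_id))}"
    if intent == "report_outage":
        return f"Ticket {open_ticket(customer_id, message)} opened; we'll keep you posted."
    return "Let me connect you with a live agent."

print(handle_message("cust-42", "I keep running out of data, can I upgrade?"))
```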
Copilots will drive productivity gains, from code development copilots to field support copilots, and GenAI-powered digital assistants will enable telecom teams to achieve outcomes much faster. Along the same lines, GenAI will enable network operations teams to quickly analyze large logs and sift through a high volume of signaling message flows to detect anomalies and reduce troubleshooting time from days to hours to minutes.
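As one illustration of that triage loop, the sketch below uses a simple statistical filter to flag anomalous minutes in a stream of signaling failures, so that only the suspicious windows (rather than gigabytes of raw logs) are handed to a GenAI model for explanation. The summarize_with_llm function is a hypothetical placeholder for whatever LLM endpoint a team actually uses.

```python
# Sketch: statistically flag unusual failure spikes, then pass only those windows to an LLM.
from collections import Counter
from statistics import mean, stdev

def flag_anomalous_minutes(failure_timestamps: list[int], threshold_sigma: float = 3.0) -> list[int]:
    """Return the minutes whose signaling-failure count deviates strongly from the mean."""
    per_minute = Counter(ts // 60 for ts in failure_timestamps)
    counts = list(per_minute.values())
    if len(counts) < 2:
        return []
    mu, sigma = mean(counts), stdev(counts)
    return [minute for minute, c in per_minute.items() if sigma > 0 and (c - mu) / sigma > threshold_sigma]

def summarize_with_llm(log_excerpt: str) -> str:
    """Hypothetical placeholder: ask a GenAI model for a root-cause hypothesis on the excerpt."""
    raise NotImplementedError("wire up to your LLM endpoint")

# Example: 30 quiet minutes (2 failures each) plus one minute with a burst of 40 failures.
timestamps = [m * 60 for m in range(30) for _ in range(2)] + [31 * 60] * 40
print(flag_anomalous_minutes(timestamps))   # -> [31]
```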
With sustainability and lowering energy costs being top of mind for almost all CSPs, AI will play a pivotal role in network energy efficiency—especially at the edge in (usually) unstaffed locations. By using AI/ML to understand what equipment is in use and when, organizations can automate energy management at edge locations.
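A minimal sketch of that idea follows, assuming a simple exponential-moving-average load forecast and an illustrative 10% utilization threshold; both are assumptions for illustration and do not reflect any specific product.

```python
# Illustrative sketch: use a short-horizon load forecast to decide when a radio carrier
# at an unstaffed edge site can enter a low-power sleep state.

def forecast_next_load(history: list[float], alpha: float = 0.3) -> float:
    """Exponential moving average as a stand-in for a trained load-forecasting model."""
    ema = history[0]
    for value in history[1:]:
        ema = alpha * value + (1 - alpha) * ema
    return ema

def should_sleep(carrier_load_history: list[float], sleep_threshold: float = 0.10) -> bool:
    """Power down a secondary carrier if predicted utilization stays below the threshold."""
    return forecast_next_load(carrier_load_history) < sleep_threshold

# Example: overnight utilization trending toward ~3% suggests the carrier can sleep.
print(should_sleep([0.12, 0.08, 0.05, 0.04, 0.03]))   # -> True
```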
Meaningful efficiency and productivity gains, as well as the ability to lower operating costs, will be unlocked for networks with GenAI.
Building an AI-ready Infrastructure
To capture the full business potential of AI, CSPs need to get their network infrastructure ready for a whole new class of workloads: AI workloads.
AI-ready infrastructure will enable CSPs not only to create differentiated networks and realize productivity gains, but also to offer new services and generate new revenue streams. CSPs are uniquely positioned to offer inferencing services at the edge.
Networks are the edge of the real world, and they are pervasive. As AI proliferates across different industries, demand for inferencing will substantially grow. Inferencing closer to where the data is generated has both technical and business benefits—consider data transport costs, latency, ability to reject low value data at the edge and so on. With AI-ready infrastructure, CSPs can offer inferencing services for this new class of AI workloads at the edge. And in many markets, CSPs are also building AI-factories to offer sovereign AI model training/tuning services.
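Here is a back-of-envelope example of the “reject low-value data at the edge” point; the camera count, bitrate and keep fraction are purely assumed numbers, not measured values.

```python
# Back-of-envelope: if an edge model discards most frames locally,
# only a small fraction of the data needs to transit the backhaul.

cameras = 200                     # video sources at one edge site (assumed)
raw_mbps_per_camera = 4.0         # raw stream bitrate (assumed)
keep_fraction = 0.02              # share of data the edge model forwards (assumed)

raw_backhaul_mbps = cameras * raw_mbps_per_camera
filtered_backhaul_mbps = raw_backhaul_mbps * keep_fraction
print(f"Raw backhaul: {raw_backhaul_mbps:.0f} Mbps -> after edge inference: {filtered_backhaul_mbps:.0f} Mbps")
# Raw backhaul: 800 Mbps -> after edge inference: 16 Mbps
```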
To build the right AI-ready infrastructure, CSPs will have to navigate a range of questions. Right sizing the AI infrastructure for training and inferencing will require taking AI model sizes and complexity into account and simultaneously meeting real-world constraints for power, cooling, form factor, etc. for a given location in the network.
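As a rough illustration of the sizing exercise, the sketch below estimates the GPU memory needed to serve an LLM from its parameter count. The 2-bytes-per-parameter (FP16) weight footprint and the ~20% overhead factor are rule-of-thumb assumptions; real sizing also depends on batch size, context length and the serving stack.

```python
# Rule-of-thumb sizing sketch: weights footprint plus a fixed overhead factor,
# checked against a candidate accelerator's memory (sizes here are illustrative).

def inference_memory_gb(params_billion: float, bytes_per_param: int = 2, overhead: float = 1.2) -> float:
    """Approximate serving footprint: FP16 weights (2 bytes/param) times an overhead factor."""
    weights_gb = params_billion * 1e9 * bytes_per_param / 1e9
    return weights_gb * overhead

for size_b, gpu_gb in [(7, 24), (70, 80)]:
    need = inference_memory_gb(size_b)
    verdict = "fits on one" if need <= gpu_gb else "needs multiple"
    print(f"{size_b}B model: ~{need:.0f} GB -> {verdict} {gpu_gb} GB GPU(s)")
```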
Dell Technologies offers a wide range of PowerEdge servers and GPUs that cater to the growing performance requirements of AI workloads while also meeting real-world constraints. For example:
- PowerEdge XE9680 with 8x NVIDIA H100 GPUs is well-suited for large AI model training/tuning and can be hosted in CSPs’ national and/or regional data centers (NDC/RDC).
- PowerEdge R760XA can hold up to 4x double-wide GPUs or up to 12x single-wide GPUs and is well-suited for telecom’s core network deployments.
- PowerEdge XR8000 is AI-ready for small edge AI inferencing workloads on its CPUs alone and, when paired with GPUs, can handle medium inferencing workloads.
These are just some examples; with our comprehensive PowerEdge server and GPU portfolio, CSPs can right-size their infrastructure.
These are exciting times in technology. AI is a transformational force that will redefine the telecom landscape. Those who get out of the gates early have much to gain, and those who get left behind have a lot to worry about. To start (or continue) your AI voyage and learn how Dell can help CSPs, visit our website or get in touch.
AI phone home.