Using AI to Automate HFC Upstream
Noise Localization

By: Dr. Foad Towfiq

Since the inception of the Data Over Cable Service Interface Specification (DOCSIS), network operators have faced the difficult task of locating upstream noise in the network. This difficulty stems mainly from the branch-and-tree architecture of the coaxial portion of the HFC network, which allows noise originating at any point to accumulate in the upstream direction and affect every subscriber’s cable modem on that upstream channel.
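The accumulation problem can be illustrated with a minimal sketch. The topology, helper name, and noise values below are hypothetical, chosen only to show that the receiver at the root of the tree sees the sum of all noise injected anywhere below it, with no indication of which leg contributed what:

```python
# Minimal sketch (hypothetical topology and values): the coaxial plant is a
# tree, and the upstream receiver at the root sees only the *sum* of noise
# injected anywhere below it.

def upstream_noise_mw(tree):
    """Return total noise power (mW) reaching the root of a branch-tree.

    `tree` is a dict: {"noise_mw": float, "children": [subtrees...]}.
    Every branch funnels its noise upstream, so the root observes the
    accumulated total without any locational information.
    """
    return tree["noise_mw"] + sum(
        upstream_noise_mw(child) for child in tree.get("children", [])
    )

# Hypothetical node with two legs; the ingress source sits deep in leg B.
node = {
    "noise_mw": 0.0,
    "children": [
        {"noise_mw": 0.01, "children": []},            # leg A: clean
        {"noise_mw": 0.02, "children": [               # leg B
            {"noise_mw": 1.5, "children": []},         # ingress point
        ]},
    ],
}

print(upstream_noise_mw(node))  # root sees ~1.53 mW; the source could be anywhere
```

The measurement at the root is identical whether the 1.5 mW of ingress comes from leg A, leg B, or is spread across many drops, which is exactly why localization is hard.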

Beyond the fact that a single noise problem affects many customers, it is extremely difficult to pinpoint its source. Network management approaches have evolved and improved, but cable operators still face the same challenge. In most cases, technicians trying to find the source of a noise issue must start from one end of a node and work their way through the network. To make matters worse, actions like this can impact large groups of customers, many of whom were not affected in the first place, for example when the diagnostic step involves disconnecting network legs to see if the noise “goes away.”
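The manual procedure described above can be sketched as a walk down the tree. This is a simplified model, not an operational tool: the helper names, threshold, and topology are all assumptions made for illustration. Each recursive step models disconnecting sibling legs and re-measuring at the receiver, which is precisely the step that drops service to whole legs of unaffected customers:

```python
# Hedged sketch (hypothetical names and values) of the manual "disconnect a
# leg and see if the noise goes away" procedure: descend into whichever leg
# still carries noise until no child exceeds the assumed noise floor.

NOISE_FLOOR_MW = 0.05  # assumed threshold separating normal floor from ingress

def total_noise_mw(tree):
    """Noise power the upstream receiver would see from this subtree."""
    return tree["noise_mw"] + sum(total_noise_mw(c) for c in tree.get("children", []))

def locate_ingress(tree, path="node"):
    """Descend toward the noisy leg until no child exceeds the floor.

    Each recursion models disconnecting the other legs and re-measuring,
    so every step temporarily interrupts service on entire branches.
    """
    for i, child in enumerate(tree.get("children", [])):
        if total_noise_mw(child) > NOISE_FLOOR_MW:  # this leg still carries noise
            return locate_ingress(child, f"{path}/leg{i}")
    return path  # noise originates at (or directly below) this point

# Same hypothetical two-leg node, with the ingress source deep in leg 1.
node = {
    "noise_mw": 0.0,
    "children": [
        {"noise_mw": 0.01, "children": []},
        {"noise_mw": 0.02, "children": [
            {"noise_mw": 1.5, "children": []},
        ]},
    ],
}

print(locate_ingress(node))  # → node/leg1/leg0
```

Even in this idealized form the search visits one leg at a time; in the field each “measurement” is a truck roll and a service interruption, which is why the process can take days or weeks.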

The costs mount quickly. There is the direct cost of technicians working for hours, days, or even weeks to find such issues; the cost of customer dissatisfaction; and ultimately the opportunity cost of time spent diagnosing noise issues that could be better spent on proactive network maintenance. Operators need solutions that help pinpoint the causes of noise, lessening the impact on customer satisfaction and freeing up technicians to pursue other critical network issues.

A short history of AI technology

As Buchanan (2006) wrote: “The history of AI is a history of fantasies, possibilities, demonstrations, and promise. Ever since Homer wrote of mechanical ‘tripods’ waiting on the gods at dinner, imagined mechanical assistants have been a part of our culture. However, only in the last half century have we, the AI community, been able to build experimental machines that test hypotheses about the mechanisms of thought and intelligent behavior and thereby demonstrate mechanisms that formerly existed only as theoretical possibilities.” While the foundations of AI were laid in the 1950s by pioneers such as Alan Turing, hardware powerful enough to deliver true AI has become available only in the last 20 years. With the theoretical foundation in place and sufficiently powerful hardware now at hand, AI technology is changing the world every day.

Another AI pioneer, Marvin Minsky at MIT, sought to encode well-crafted subject-matter expertise as advanced software logic running on expert systems. This track of AI R&D achieved some results but was ultimately deemed too brittle for practical, economic use. A separate track applied neural network methods toward a similar goal, essentially building large decision-tree paths as networks of local decision-making in response to data inputs, allowing the AI system to make more informed and intelligent recommendations from given inputs.

In contrast to these traditional models of AI development, the recent success stories in AI revolve around fully exploiting the massive improvement in compute power for Big Data processing. New ML models use that available compute power to process enormous amounts of data with repetitive, nuanced software logic, leading to unexpectedly brilliant conclusions about “what all that data means.”

This synergy, connecting the DOCSIS network operators’ noise dilemma with Big Data processing and the recent success of AI/ML models, has led to surprising results in noise localization. This new form of AI has allowed Promptlink to deliver innovative network management solutions, adding capabilities that improve cable operators’ business and their customers’ cable experience.

