The Impacts of Open Optical Systems - Now and In the Future

By: Dean Campbell

The 2018 IHS Markit Optical Network Strategies Service Provider survey found that, "of service providers using optical transmission and switching equipment, 47 percent of respondents indicated interest in the use of disaggregated optical equipment in their networks, up from 33 percent in 2016." Breaking network functionality down into smaller elements allows for more network flexibility, faster updates, and more reliable systems. A fundamental requirement to make this architecture work, however, is the availability of standardized management interfaces (APIs) – "open" interfaces that facilitate the integration of elements within a network.

What Does “Open” Really Mean?

In most contexts, "open" denotes that the protocols, APIs, and other interfaces are well-documented and available for use by parties outside of the vendor's control. By this definition, an individual vendor could provide its own set of unique interfaces and still be considered "open." Indeed, most of the networking devices and software in use today are "open": they document various APIs that can be used to configure, manage, and monitor their hardware and software. Each, however, requires customizing the network management tools to account for variation in commands, parameters, and configuration data.

If we look at "open" in a more recent sense—adherence to industry-wide standards for both protocols and parameters—then the picture becomes considerably different. Although we have common protocols (languages) for management of devices, the parameters (nouns and verbs) are still unique to each manufacturer's device. There are few true industry-wide management standards for network element types.

Nearly all vendors support Simple Network Management Protocol (SNMP) standards. SNMP provides the ability to retrieve settings and status, receive alerts, and update configuration settings. However, much of the data sent to and retrieved from each device is custom for that vendor and device. Vendors publish a Management Information Base (MIB)—the definition of the custom data structures used in the SNMP communication—for each device. When using SNMP, there is a consistent communication mechanism, but most of the content varies by vendor and element.
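
To make this concrete, consider a minimal sketch of what an SNMP collector ends up doing. The vendor names, metric, and private-enterprise OIDs below are invented for illustration; the point is that the same reading can arrive under entirely different MIB objects, so polling code must carry a per-vendor mapping:

```python
# Hypothetical sketch: SNMP gives every vendor the same transport, but each
# vendor defines its own MIB objects. The same "laser bias current" reading
# might live under different private-enterprise OIDs, so a collector must
# normalize per vendor. (Vendors and OIDs below are invented.)

VENDOR_MIB_MAP = {
    "vendor_a": {"laser_bias_ma": "1.3.6.1.4.1.11111.2.3.1"},
    "vendor_b": {"laser_bias_ma": "1.3.6.1.4.1.22222.5.9.4"},
}

def normalize_snmp_value(vendor: str, oid: str, raw_value: int) -> dict:
    """Map a vendor-specific OID/value pair onto a vendor-neutral metric name."""
    for metric, known_oid in VENDOR_MIB_MAP[vendor].items():
        if oid == known_oid:
            return {"metric": metric, "value": raw_value}
    raise KeyError(f"Unknown OID {oid} for {vendor}")

# The same logical metric arrives under different OIDs per vendor:
print(normalize_snmp_value("vendor_a", "1.3.6.1.4.1.11111.2.3.1", 42))
print(normalize_snmp_value("vendor_b", "1.3.6.1.4.1.22222.5.9.4", 42))
# both print {'metric': 'laser_bias_ma', 'value': 42}
```

Every new vendor (or firmware release) means another entry in a table like this—the customization cost the article describes.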

Network Configuration Protocol (NETCONF) is defined as "the standard for installing, manipulating and deleting configuration of network devices," and is quickly gaining traction in the marketplace for managing packet networks. Yet Another Next Generation (YANG) data models are taking over the functionality previously provided by MIBs. NETCONF provides the communication mechanism, and the YANG data models define the data available for reading and writing through the NETCONF interface—very similar to the capabilities provided by SNMP and MIBs. Even in this newer interface, though, there is little standardization of the data exchanged, as each vendor strives to differentiate its own products in the marketplace.
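
A small sketch illustrates the consequence. NETCONF replies are XML shaped by each vendor's YANG models, so two devices can answer the same logical question ("is this port administratively enabled?") with different namespaces, element names, and value encodings. The vendor namespaces and elements below are invented for illustration:

```python
# Hypothetical sketch: NETCONF provides a common transport, but the XML it
# carries is defined by vendor-specific YANG models. A controller needs
# per-vendor paths and value translations for the same logical leaf.
# (Namespaces and element names below are invented.)
import xml.etree.ElementTree as ET

VENDOR_A_REPLY = """<data xmlns:a="urn:vendor-a:ifmgr">
  <a:interface><a:name>0/1</a:name><a:admin-state>up</a:admin-state></a:interface>
</data>"""

VENDOR_B_REPLY = """<data xmlns:b="urn:vendor-b:ports">
  <b:port><b:id>0/1</b:id><b:enabled>true</b:enabled></b:port>
</data>"""

# Per-vendor search path and value translation for the same question:
VENDOR_PATHS = {
    "vendor_a": (".//{urn:vendor-a:ifmgr}admin-state", {"up": True, "down": False}),
    "vendor_b": (".//{urn:vendor-b:ports}enabled", {"true": True, "false": False}),
}

def admin_enabled(vendor: str, reply_xml: str) -> bool:
    """Answer a vendor-neutral question from a vendor-specific NETCONF reply."""
    path, value_map = VENDOR_PATHS[vendor]
    leaf = ET.fromstring(reply_xml).find(path)
    return value_map[leaf.text]

print(admin_enabled("vendor_a", VENDOR_A_REPLY))  # True
print(admin_enabled("vendor_b", VENDOR_B_REPLY))  # True
```

Standardized YANG models (such as the industry efforts mentioned later) would collapse this per-vendor table into a single path.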

In the optical space, the picture is similar. We have well-defined interface standards (TL-1, for example), but even simple things like alarms can have widely varying content and descriptions. Some of the work from the packet domain is bleeding into the optical space, as many new optical devices support SNMP/MIBs and NETCONF/YANG as monitoring and configuration interfaces. But, as with packet devices, even when these interfaces are implemented, they rely on custom product data.
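
The alarm problem can be sketched the same way. Even over a common TL-1-style interface, two vendors may report the same fault under different condition strings, so a monitoring system must translate them into one normalized vocabulary. The condition strings below are invented for illustration:

```python
# Hypothetical sketch: vendor-specific alarm condition strings mapped onto a
# normalized alarm vocabulary. (Vendors and condition codes are invented.)

ALARM_NORMALIZATION = {
    "vendor_a": {"LOS": "loss_of_signal", "OPT-HI": "optical_power_high"},
    "vendor_b": {"SIGLOSS": "loss_of_signal", "TXPWR-OOR-HIGH": "optical_power_high"},
}

def normalize_alarm(vendor: str, condition: str) -> str:
    """Translate a vendor condition string to a normalized alarm name."""
    return ALARM_NORMALIZATION[vendor].get(condition, "unknown")

# Different vendor strings, same underlying failure:
print(normalize_alarm("vendor_a", "LOS"))      # loss_of_signal
print(normalize_alarm("vendor_b", "SIGLOSS"))  # loss_of_signal
```

Any condition string not in the table falls through to "unknown"—exactly the gap an automated control plane cannot tolerate.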

Describing today's carrier networks as "open" thus points to a distinctly mixed picture. Management communication protocols are consistent, but the content and syntax of commands are unique to each manufacturer and product. To date, there has been little prospect for industry-led adoption of fully standardized APIs. Most of the efforts currently underway are driven by consumer constituencies rather than enjoying full industry-wide support. To truly benefit from "open" systems, there needs to be increased standardization of the commands and parameters exchanged across the management protocols. We believe there are both pros and cons to this goal, as outlined below.

Pros:

5G and the Internet of Things (IoT) are two trends that require extensive amounts of new bandwidth. The latest Ericsson Mobility Report states that "5G will cover more than 20 percent of the global population six years from now." The adoption of 5G and IoT is also increasing the pace of change in the network environment. Because future bandwidth demands will be significantly more mobile and dynamic, enhancing the network's ability to adapt to these changes becomes a "must have." Applications that make and apply intelligent decisions at machine speed will start to become the norm. This demands the creation of an intelligent, automated control plane, and standardization of management across network element functionality is required for this upper-layer control plane to function effectively.


