VIAVI Solutions partners with hyperscale data center operators, cloud and network service providers, and those deploying data center interconnects (DCIs). Our solutions are designed to reduce testing time, optimize optical networks, reduce latency and ensure the 100% reliability that SLAs demand.
Hyperscale Data Centers
VIAVI Solutions is an active participant in over thirty standards bodies and open source initiatives, including the Telecom Infra Project (TIP). When standards don’t move quickly enough, we anticipate where they are heading and develop equipment to test the evolving infrastructure. We believe in open APIs, so hyperscale data center companies can write their own automation code.
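As a minimal sketch of what that automation can look like, the snippet below starts a test through a hypothetical REST endpoint; the URL, payload fields and job-ID response are illustrative placeholders, not an actual VIAVI API:

```python
import requests  # third-party HTTP client

# Hypothetical instrument endpoint; the URL and payload field names are
# illustrative placeholders, not an actual VIAVI API.
INSTRUMENT = "https://test-instrument.example.net/api/v1"

def start_throughput_test(port: str, rate_gbps: int) -> str:
    """Kick off a throughput test on one instrument port and return a job ID."""
    resp = requests.post(
        f"{INSTRUMENT}/tests",
        json={"type": "throughput", "port": port, "rate_gbps": rate_gbps},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["job_id"]

if __name__ == "__main__":
    job = start_throughput_test(port="100GE-1", rate_gbps=100)
    print(f"Started test job {job}")
```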
VIAVI has been testing business communications equipment for nearly 100 years, and we work closely with standards bodies such as the FOA as standards continue to evolve. We guarantee the performance of optical hardware from the lab through turn-up and monitoring. This includes equipment that can inspect MPO connectors in seconds, as well as equipment that can test two 100G ports simultaneously.
What is a Hyperscale Data Center?
It is a large-scale distributed computing center that is owned and operated by the company it supports. The term “hyperscale” does not refer exclusively to data center size, but rather to the ability to scale up capacity rapidly in response to increased demand. This can include horizontal scaling (scaling out), by adding more hardware to the data center, or vertical scaling (scaling up), by increasing the power, speed or bandwidth capabilities of existing hardware. Hyperscale data centers frequently leverage software-defined networking alongside their own proprietary technology.
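As a simple illustration of the two strategies, the toy calculation below compares aggregate network capacity after scaling out versus scaling up; every number is invented:

```python
# Toy comparison of the two scaling strategies; all numbers are invented.
servers = 5000          # current fleet size
per_server_gbps = 25    # per-server network capacity

baseline = servers * per_server_gbps

# Horizontal scaling (scaling out): add more servers of the same kind.
scaled_out = (servers + 1000) * per_server_gbps

# Vertical scaling (scaling up): upgrade each existing server to 100G instead.
scaled_up = servers * 100

print(f"baseline:   {baseline:>8} Gbps")
print(f"scaled out: {scaled_out:>8} Gbps")
print(f"scaled up:  {scaled_up:>8} Gbps")
```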
Data center interconnects (DCIs) are used to link massive hyperscale data centers around the world. The size and complexity of hyperscale data center architecture makes testing essential during network construction, expansion and monitoring phases. With 16 QAM modulation now being used to achieve 200G on a single wavelength, DCI connection testing eases installation by quickly verifying throughput and accurately pinpointing problem areas.
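As a rough illustration of how 16 QAM reaches 200G on one wavelength, consider the line-rate arithmetic below; the symbol rate and FEC overhead are typical coherent-transport values chosen for illustration, not any specific transponder’s specification:

```python
import math

# Back-of-the-envelope line-rate check for a 200G wavelength using 16 QAM.
# Symbol rate and FEC overhead are illustrative coherent-transport values.
bits_per_symbol = math.log2(16)   # 16 QAM -> 4 bits per symbol
polarizations = 2                 # dual-polarization coherent transmission
symbol_rate_gbaud = 30.0          # illustrative symbol rate
fec_overhead = 0.20               # ~20% FEC overhead (illustrative)

raw_gbps = bits_per_symbol * polarizations * symbol_rate_gbaud
net_gbps = raw_gbps / (1 + fec_overhead)
print(f"raw line rate: {raw_gbps:.0f} Gb/s, net payload ~ {net_gbps:.0f} Gb/s")
```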
VIAVI is a provider offering an unmatched breadth and depth of interoperable test products and expertise. This ensures an elite class of service and reliability commensurate with hyperscale cloud data center performance.
What is a Hyperscaler?
The hyperscaler threshold has been defined by the International Data Corporation (IDC) as five thousand or more servers on a ten thousand square foot or larger footprint. Internet content providers (ICPs), cloud services and big data storage have become closely associated with hyperscale computing, although not all companies within these categories utilize such massive data centers.
Social media and enterprise software are other arenas where data centers have been established. The actual number of companies meeting the hyperscaler definition is relatively low (fewer than thirty) and predominantly U.S.-based.
There are over 400 hyperscale data centers in operation worldwide today, and this figure is only growing. Distinctive hallmarks of the hyperscaler are the motivation and technical acumen necessary to customize data center hardware and software according to their business model and applications.
This customization can ripple throughout the supply chain as vendors react to the high volume and rapid evolution of hyperscale hardware and components. To commit to the requisite costs, hyperscalers must establish a long-term vision and runway for the future that includes significant scalability and growth potential.
Challenges of Hyperscale Computing
The sheer size associated with hyperscale data center architecture provides several benefits including more efficient cooling and power distribution, balanced workloads across servers, and built-in redundancies.
These economies of scale can be negated by the challenges associated with hyperscale computing. High traffic volumes and complex flow patterns can outstrip the capabilities of traditional monitoring tools and practices. Visibility into external traffic flows can also suffer as the sheer speed and quantity of new connections dilute monitoring focus.
Security concerns are another challenge that can be magnified by the size of the data center. Although preventive and proactive security systems are an essential element of hyperscale computing, a single breach can potentially expose enormous amounts of sensitive customer data.
Data center resource planning must balance the proximity of available talent with physical size demands that can sometimes limit location options to more remote or previously undeveloped areas. With some of the largest centers exceeding 500,000 square feet of floor space, establishing the utilities, roads and other infrastructure needed to support these locations can be equally challenging.
Despite these constraints, many of the largest centers in the world are currently located in or near major cities with dense population and development. Hyperscale data center proportions will expand considerably over the next decade, even as the available talent pool continues to shrink. More automation, machine learning and virtualization will be required to prevent an exponential demand for resources and talent from overwhelming the ecosystem.
Testing practices for fiber connections, network performance and service quality remain consistent with conventional data center testing, only on a much larger scale. Uptime reliability becomes increasingly important even as the testing complexity grows through the sheer volume of pathways and design components. Hyperscale DCIs running close to full capacity should be tested consistently to verify throughput and find potential issues before a fault occurs. Automated monitoring solutions should be leveraged to downscale resource demands.
The customization of data center hardware and software makes interoperability essential for premium test solutions. Test tools supporting open APIs add flexibility to accommodate the hyperscaler diversity. As the hardware has diverged, common interface conventions such as PCIe and MPO have continued to grow throughout the ecosystem due to their optimized mix of density and capacity.
Bit error rate testers like the MAP-2100 have been developed specifically for environments where personnel are rarely, if ever, available to perform network tests. Network monitoring solutions intended for the hyperscale ecosystem can flexibly launch large-scale performance monitoring tests from multiple physical or virtual access points.
Best practices defined through the development of testing tools for elements such as MPO and ribbon fiber can also be applied on an immense scale within hyperscale deployments.
New hyperscale data center companies continue to push the boundaries of size, complexity and density. Test solutions and products originally conceived for other high-capacity fiber applications can be utilized in tandem to verify and maintain enterprise data center performance.
The substantial volume of fiber connections within and between data centers underscores the need for reliable and efficient fiber inspection tools. A single particle, defect, or contaminated end-face can lead to insertion loss and compromised performance of the network. The best fiber inspection tools for hyperscale applications combine compact form factors with automated inspection routines and multi-fiber connector compatibility.
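As a simplified sketch of the zone-based pass/fail logic behind automated inspection, the example below loosely follows the IEC 61300-3-35 approach of grading defects by zone; the zone radii and defect limits shown are simplified placeholders, not the full standard:

```python
from dataclasses import dataclass

@dataclass
class Defect:
    radius_um: float    # distance of defect center from fiber core center
    size_um: float      # defect diameter

# Simplified zone table loosely modeled on IEC 61300-3-35; real inspection
# tools apply the full standard, including scratch/defect distinctions.
ZONES = [
    # (name, inner radius um, outer radius um, max allowed defect size um)
    ("core",     0,  25,  0.0),
    ("cladding", 25, 120, 10.0),
]

def inspect(defects: list[Defect]) -> bool:
    """Return True (pass) if no zone's defect-size limit is exceeded."""
    for name, r_in, r_out, max_size in ZONES:
        for d in defects:
            if r_in <= d.radius_um < r_out and d.size_um > max_size:
                print(f"FAIL: {d.size_um} um defect in {name} zone")
                return False
    return True

result = inspect([Defect(radius_um=40.0, size_um=3.0)])
print("PASS" if result else "FAIL")
```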
High Speed Transport: 400G and 800G
Emerging technologies like IoT and 5G, with their inherent bandwidth demands, have made 400G and 800G technology essential to hyperscale computing. While these cutting-edge, high-speed Ethernet standards have enabled data centers to keep pace, PAM-4 modulation and Forward Error Correction (FEC) have added testing complexity. Scalable, upgradable testing tools can help facilitate the advanced error rate and throughput testing needed to meet performance demands.
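For example, 400GBASE-R links rely on RS(544,514) “KP4” FEC, and a link is commonly considered healthy when its pre-FEC bit error ratio sits below roughly 2.4e-4. A minimal sketch of that check follows; the error and bit counts are invented for illustration:

```python
# Pre-FEC BER sanity check for a PAM-4 link protected by RS(544,514) "KP4" FEC.
# The ~2.4e-4 threshold is the commonly cited pre-FEC limit for 400GBASE-R;
# the counts below are invented for illustration.
KP4_PRE_FEC_LIMIT = 2.4e-4

def pre_fec_ber(errored_bits: int, total_bits: int) -> float:
    """Bit error ratio before FEC correction is applied."""
    return errored_bits / total_bits

measured = pre_fec_ber(errored_bits=1_200_000, total_bits=10**13)
verdict = "OK" if measured < KP4_PRE_FEC_LIMIT else "DEGRADED"
print(f"pre-FEC BER = {measured:.2e} ({verdict}, "
      f"margin {KP4_PRE_FEC_LIMIT / measured:.0f}x)")
```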
100G and 200G
Interconnects between hyperscale data centers use up to 100G over a single wavelength, and continue to upgrade to 200G. Network testing and monitoring tools with automated test scripts can ensure the integrity and security of these important connections. Intentional stressing of fiber connections prior to traffic turn-up can identify and locate trouble spots. RFC 2544 benchmarking and RFC 6349 TCP throughput testing can also be performed to verify the 16 QAM modulated links required for 200G.
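As a rough sketch of the arithmetic behind RFC 6349-style testing, the bandwidth-delay product determines the TCP window needed to fill a DCI path; the link speed and round-trip time below are illustrative:

```python
# RFC 6349-style TCP throughput expectations: the bandwidth-delay product
# (BDP) sets the TCP window needed to fill a path. Numbers are illustrative.
def bdp_bytes(bottleneck_gbps: float, rtt_ms: float) -> float:
    """Bandwidth-delay product: bytes in flight needed to fill the path."""
    return bottleneck_gbps * 1e9 / 8 * (rtt_ms / 1000)

def expected_throughput_gbps(window_bytes: float, rtt_ms: float) -> float:
    """Maximum TCP throughput achievable with a given window over a given RTT."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1e9

link_gbps, rtt = 100.0, 2.0           # 100G DCI with a 2 ms round-trip time
window = bdp_bytes(link_gbps, rtt)    # window required to fill the link
print(f"required TCP window: {window / 1e6:.0f} MB")
print(f"with a 4 MB window:  {expected_throughput_gbps(4e6, rtt):.1f} Gb/s")
```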
The increase of fiber optics within the hyperscale data center ecosystem has made robust fiber monitoring a formidable yet essential task. Versatile testing tools for continuity, optical loss measurement, optical power measurement and OTDR are a must for construction, activation and troubleshooting activities. Automated fiber monitoring systems like the ONMSi Remote Fiber Test System (RFTS) can provide scalable, centralized fiber monitoring with immediate alerts.
Portable test instruments that once required the dispatch of a network technician can now be virtually operated through the Fusion test agent. This virtualization helps to offset the copious resource demands of the data center. The software-based Fusion platform can be used to monitor networks, ensure performance and verify SLAs. Ethernet activation tasks such as RFC 6349 TCP throughput testing can also be virtually initiated and executed.
Multi-fiber push on (MPO) connectors were once used primarily for dense trunk cables. Today, the density constraints of the hyperscale data center have led to rapid adoption of this interface for patch panel, server and switch connections. Fiber testing and inspection can be accelerated through dedicated MPO test solutions with automated pass/fail results.
The diverse VIAVI test product offerings cover all aspects and phases of hyperscale data center construction, activation and maintenance. Throughout the migration towards larger and denser data centers, fiber connector inspection has remained an essential element of the overall test strategy.
The rapid expansion of MPO connector utilization makes the FiberChek Sidewinder an ideal solution for automated multi-fiber end face certification. Optical loss test sets (OLTS) designed specifically for the MPO interface, such as the SmartClass Fiber MPOLx, can make Tier 1 fiber certification easier and more reliable.
The T-BERD 5800 100G (or MTS-5800 100G outside of the North American market) is an industry-leading, compact dual-port 100G test instrument that can facilitate fiber testing, service activation and troubleshooting. This versatile tool can be used for metro/core applications as well as DCI testing. With an advanced feature set and a rugged, compact form factor, OTN and Ethernet service activation testing can be completed quickly and accurately. The T-BERD 5800 supports consistency of operations with repeatable methods and procedures.
Hyperscale data centers are a natural application for automated OTDR testing through MPO connections. The multi-fiber MPO switch module is designed to produce an all-in-one solution for MPO-dominated, high-density fiber environments. When used in conjunction with the T-BERD test platform, fibers can be characterized through OTDR without the need for time-consuming fan-out/break-out cables. Automated test workflows for certification can be performed for up to 12 fibers simultaneously.
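A rough sketch of such a workflow appears below; the Switch and Otdr classes are hypothetical stand-ins for instrument drivers, and the loss values and threshold are invented, not an actual VIAVI API:

```python
# Sketch of an automated OTDR sweep across a 12-fiber MPO trunk via an
# optical switch. Switch and Otdr are hypothetical stand-ins for instrument
# drivers; loss values and the pass/fail threshold are invented.
class Switch:
    def select(self, channel: int) -> None:
        print(f"switch -> fiber {channel}")

class Otdr:
    def trace(self) -> float:
        return 0.21  # illustrative end-to-end loss in dB

LOSS_LIMIT_DB = 0.75  # illustrative pass/fail threshold

switch, otdr = Switch(), Otdr()
for fiber in range(1, 13):            # 12 fibers in an MPO trunk
    switch.select(fiber)
    loss_db = otdr.trace()
    verdict = "PASS" if loss_db <= LOSS_LIMIT_DB else "FAIL"
    print(f"fiber {fiber:02d}: {loss_db:.2f} dB -> {verdict}")
```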
Standalone, remote fiber monitoring is another category of advanced test solution that can find unlimited utility in the hyperscale cloud. With SmartOTU fiber monitoring, detected events including degradation, fiber tapping or intrusion are quickly converted to alerts. The system can be deployed right out of the box, with no configuration required. This cost-effective, accurate and modular tool provides yet another level of assurance for SLA contracts and DCI uptime.
The Future of Hyperscale Data Networks
New technology and applications will continue to drive the demand for hyperscale data center testing and computing into the next decade and beyond. As the monetization opportunities presented by 5G expand, new entrants will drive applications that further challenge density, throughput and efficiency constraints. More data centers will be interconnected through immense fiber networks as each hyperscaler expands their reach and base in an effort to provide top performance levels.
Hyperscale computing is truly a global digital phenomenon, with a growing proportion of new footprints appearing in Asia, Europe, South America and Africa as the building boom continues. Massive new submarine cables (submerged DCIs) connecting these diverse geographies, and perhaps even new data centers, will be deployed on the ocean floor.
Green technologies, such as solar rooftops and wind turbines, will make the massive computing power consumption more manageable and environmentally sustainable.
As hyperscale technology evolves, VIAVI continues to draw upon experience, expertise and end-to-end (e2e) involvement to help our customers. Our hyperscale service provider solutions are used for planning, deployment and management of discrete elements, from conception in the lab to servicing in the field, and beyond.