Learn about hyperscale computing and advanced testing solutions from VIAVI
Hyperscale Data Centers
VIAVI Solutions partners with hyperscale data center operators, cloud network or service providers and those deploying robust data center interconnects (DCI) to reduce testing time, optimize optical networks, reduce latency and ensure 100% reliability that supports SLAs. We guarantee the performance of optical hardware from the lab to turn-up to monitoring, with equipment that can inspect MPO connectors in seconds and test two 100G ports simultaneously. Because VIAVI Solutions is involved in all stages of hyperscale optical testing, we understand how you’re building high-speed networks to 400G and beyond – and we have the equipment to test it all.
VIAVI Solutions is an active participant in over thirty standards bodies and open source initiatives including the Telecom Infra Project (TIP). But when standards don’t move quickly enough, we anticipate and develop equipment to test evolving standards. We believe in open APIs, so hyperscale companies can write their own automation code. VIAVI Solutions has been testing communications equipment for nearly 100 years.
What is a Hyperscale Data Center?
A hyperscale data center is a large-scale distributed computing facility owned and operated by the company it supports. The term “hyperscale” does not refer exclusively to data center size, but rather to the ability to scale up capacity rapidly in response to increased demand. This can include horizontal scaling (scaling out), by adding more hardware to the data center, or vertical scaling (scaling up), by increasing the power, speed or bandwidth capabilities of existing hardware. Hyperscale data centers frequently leverage software-defined networking along with their own proprietary technology.
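The distinction between scaling out and scaling up can be sketched in a few lines of Python. The fleet size and per-server capacity figures below are purely illustrative assumptions, not data from this article:

```python
def scale_out(servers: int, per_server_rps: float, added_servers: int) -> float:
    """Horizontal scaling: add more hardware of the same capacity."""
    return (servers + added_servers) * per_server_rps

def scale_up(servers: int, per_server_rps: float, upgrade_factor: float) -> float:
    """Vertical scaling: boost the power/speed/bandwidth of existing hardware."""
    return servers * per_server_rps * upgrade_factor

# Hypothetical fleet: 5,000 servers handling 10,000 requests/s each.
baseline = 5_000 * 10_000.0

# Either path can deliver the same 20% capacity gain:
out = scale_out(5_000, 10_000.0, 1_000)  # add 1,000 more servers
up = scale_up(5_000, 10_000.0, 1.2)      # upgrade each server by 20%
```

In practice hyperscalers mix both: scaling out is limited by floor space, power and cooling, while scaling up is limited by what the hardware generation allows.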
Data center interconnects (DCIs) are used to link massive hyperscale facilities around the world. The size and complexity of hyperscale data center architecture make testing essential during network construction, expansion and monitoring phases. With 16-QAM modulation now being used to gain 200G from a single wavelength, DCI connection testing supports ease of installation by quickly verifying throughput and accurately pinpointing problem areas.
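As a rough back-of-the-envelope check on the 200G figure: 16-QAM carries log2(16) = 4 bits per symbol. Assuming dual-polarization transmission at a symbol rate of about 32 Gbaud (a typical value for such systems, not stated in this article), the raw line rate comes to roughly 256 Gb/s, which nets out near 200G after FEC and framing overhead:

```python
import math

def bits_per_symbol(qam_order: int) -> int:
    # 16-QAM maps each transmitted symbol to log2(16) = 4 bits
    return int(math.log2(qam_order))

def raw_line_rate_gbps(qam_order: int, baud_rate_gbaud: float,
                       polarizations: int = 2) -> float:
    # bits/symbol x symbols/s, doubled for dual-polarization transmission
    return bits_per_symbol(qam_order) * polarizations * baud_rate_gbaud

rate = raw_line_rate_gbps(16, 32.0)  # ~256 Gb/s raw; ~200G net after overhead
```

This is a sketch of the modulation arithmetic only; real transceivers differ in exact baud rate and overhead.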
VIAVI provides an unmatched breadth and depth of interoperable test products and expertise. This ensures an elite class of service and reliability commensurate with hyperscale cloud performance.
What is a Hyperscaler?
The hyperscaler threshold has been defined by the International Data Corporation (IDC) as five thousand or more servers on a ten-thousand-square-foot or larger footprint. Internet content providers (ICPs), cloud services and big data storage have become nearly synonymous with hyperscale, although not all companies within these categories utilize these massive data centers.
Social media and enterprise software are other arenas where data centers have been established. The actual number of companies meeting the hyperscaler definition is relatively low (below thirty) and predominantly U.S. based.
There are over 400 hyperscale data centers in operation worldwide today, and this figure is only growing. Distinctive hallmarks of the hyperscaler are the motivation and technical acumen necessary to customize data center hardware and software according to their business model and applications.
This customization can ripple throughout the supply chain as vendors react to the high volume and rapid evolution of hyperscale hardware and components. To commit to the requisite costs, hyperscalers must establish a long-term vision and runway for the future that includes significant scalability and growth potential.
Challenges of Hyperscale Computing
The sheer size associated with hyperscale data center architecture provides several inherent benefits including more efficient cooling and power distribution, balanced workloads across servers, and built-in redundancies.
These economies of scale can be negated by the challenges associated with hyperscale computing. High traffic volumes and complex flow patterns can outstrip the capabilities of traditional monitoring tools and practices. Visibility into external traffic flows can also suffer when the sheer speed and quantity of connections dilutes monitoring focus.
Security concerns are another challenge magnified by the size of the data center. Although preventive and proactive security systems are an essential element of hyperscale computing, a single breach can expose enormous amounts of sensitive customer data at once.
Hyperscale data center resource planning must balance the proximity of available talent with physical size demands that can sometimes limit location options to more remote or previously undeveloped areas. With some of the largest centers exceeding 500,000 square feet of floor space, establishing the utilities, roads and other infrastructure needed to support these locations can be equally challenging.
Despite these constraints, many of the largest centers in the world are currently located in or near major cities with dense population and development. Hyperscale data center proportions will expand considerably over the next decade, even as the available talent pool continues to shrink. More automation, machine learning and virtualization will be required to prevent an exponential demand for resources and talent from overwhelming the ecosystem.
Testing practices for fiber connections, network performance and service quality remain consistent with conventional data center testing, only on a much larger scale. Uptime reliability becomes increasingly important even as testing complexity grows with the sheer volume of pathways and components. Hyperscale DCIs running close to full capacity should be tested consistently to verify throughput and find potential issues before a fault occurs. Automated monitoring solutions should be leveraged to reduce resource demands.
The customization by hyperscalers of data center hardware and software makes interoperability essential for premium test solutions. Test tools supporting open APIs add the flexibility needed to accommodate this diversity. Even as hardware has diverged, common interface conventions such as PCIe and MPO have continued to propagate throughout the hyperscale ecosystem because they balance density and capacity. To further mitigate the challenges of interoperability and fiber network conformity, VIAVI has worked closely with standards bodies such as the FOA as the rules of this diverse playing field have evolved.
Bit error rate testers like the MAP-2100 have been developed specifically for environments where few or no personnel are available to perform network tests. Network monitoring solutions intended for the hyperscale ecosystem can flexibly launch large-scale performance monitoring tests from multiple physical or virtual access points.
By conceptualizing and developing the optimal testing tools for elements such as MPO and ribbon fiber, the same attention to procedures and consistency can be applied on an immense scale within hyperscale deployments.
New data centers continue to push the boundaries of size, complexity and density. Test solutions designed specifically for the hyperscale environment, as well as tools and products originally conceived for other high-capacity fiber applications, can be utilized in tandem to effectively verify and maintain data center performance.
The diverse VIAVI test product offerings cover all aspects and phases of hyperscale data center construction, activation and maintenance. Throughout the migration towards larger and denser data centers, fiber connector inspection has remained an essential element of the overall test strategy.
The rapid expansion of MPO connector utilization makes the FiberChek Sidewinder an ideal solution for automated multi-fiber end face certification. Optical loss test sets (OLTS) designed specifically for the MPO interface, such as the SmartClass Fiber MPOLx, can make Tier 1 fiber certification within the hyperscale data center easier and more reliable.
The T-BERD 5800 100G (or MTS-5800 100G outside of North America) is an industry leading, compact dual-port 100G test instrument that can facilitate fiber testing, service activation and troubleshooting. This versatile tool can be used for metro/core applications as well as DCI testing. With an advanced feature set and a rugged, compact form factor, OTN and Ethernet service activation testing can be completed quickly and accurately. The T-BERD 5800 supports consistency of operations with repeatable methods and procedures.
Hyperscale data centers are a natural application for automated OTDR testing through MPO connections. The multi-fiber MPO switch module is designed to provide an all-in-one solution for MPO-dominated, high-density fiber environments. When used in conjunction with the T-BERD test platform, fibers can be characterized by OTDR without the need for time-consuming fan-out/break-out cables. Automated certification test workflows can be performed on up to 12 fibers simultaneously.
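The core calculation behind any OTDR measurement is locating an event from the round-trip time of a reflected pulse. A minimal sketch, assuming a typical single-mode fiber group index of about 1.468 (an illustrative value, not specific to any VIAVI instrument):

```python
C_VACUUM = 299_792_458.0  # speed of light in vacuum, m/s

def otdr_distance_m(round_trip_time_s: float, group_index: float = 1.468) -> float:
    """Distance from the OTDR to a reflective or loss event on the fiber.
    Divide by 2 because the pulse travels out to the event and back."""
    return C_VACUUM * round_trip_time_s / (2.0 * group_index)

d = otdr_distance_m(1e-6)  # a 1 microsecond round trip maps to ~102 m
```

The same arithmetic applies per fiber whether the OTDR is connected directly or sequenced across an MPO switch.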
Standalone, remote fiber monitoring is another category of advanced test solution that can find unlimited utility in the hyperscale cloud. With SmartOTU fiber monitoring, detected events including degradation, fiber tapping or intrusion are quickly converted to alerts. The system can be deployed right out of the box, with no configuration required. This cost-effective, accurate and modular tool provides yet another level of assurance for SLA contracts and DCI uptime.
The Future of Hyperscale
New technology and applications will continue to drive the demand for hyperscale computing into the next decade and beyond. As the monetization opportunities presented by 5G expand, new entrants will drive applications that further challenge density, throughput and efficiency constraints. More data centers will be interconnected through immense fiber networks as each hyperscaler expands their reach and base.
Hyperscale computing is truly a global phenomenon, with a growing proportion of new footprints appearing in Asia, Europe, South America and Africa as the building boom continues. Massive new submarine cables (submerged DCI) connecting these disparate geographies, and perhaps even new data centers themselves, will be deployed on the ocean floor.
Green technologies, such as solar rooftops and wind turbines, will make the massive computing power consumption more manageable and environmentally sustainable. As hyperscale technology evolves, VIAVI will continue to draw upon its decades of experience, expertise and end-to-end (e2e) involvement in the planning, deployment and management of discrete hyperscale elements from conception in the lab to servicing in the field.