What is a Hyperscale Data Center?
A hyperscale data center is a large-scale distributed computing center that is owned and operated by the company it supports. The term “hyperscale” does not refer exclusively to data center size, but rather to the ability to scale up capacity rapidly in response to increased demand. This can include horizontal scaling (scaling out), by adding more hardware to the data center, or vertical scaling (scaling up), by increasing the power, speed or bandwidth of existing hardware. Hyperscale data centers frequently leverage software-defined networking along with their own proprietary technology.
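The two scaling strategies can be illustrated with a minimal sketch (illustrative only; the server counts and per-node figures below are example values, not taken from any specific data center):

```python
# Illustrative sketch of the two scaling strategies described above.

def scale_out(node_count: int, capacity_per_node: float, added_nodes: int) -> float:
    """Horizontal scaling: add more hardware (nodes) of the same capacity."""
    return (node_count + added_nodes) * capacity_per_node

def scale_up(node_count: int, capacity_per_node: float, upgrade_factor: float) -> float:
    """Vertical scaling: increase the power/speed/bandwidth of existing hardware."""
    return node_count * capacity_per_node * upgrade_factor

# Example: 5,000 servers at 10 Gb/s each (50,000 Gb/s aggregate).
print(scale_out(5_000, 10.0, 1_000))   # add 1,000 servers -> 60000.0
print(scale_up(5_000, 10.0, 1.5))      # upgrade each node 1.5x -> 75000.0
```

In practice hyperscalers combine both: scale-up raises per-node capability between hardware generations, while scale-out absorbs demand spikes.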
Data center interconnects (DCIs) are used to link massive hyperscale data centers around the world. The size and complexity of hyperscale data center architecture make testing essential during network construction, expansion and monitoring phases. With 16 QAM modulation now being used to gain 200G out of a single wavelength, DCI connection testing supports ease of installation by quickly verifying throughput and accurately pinpointing problem areas.
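Back-of-envelope math shows why 16 QAM enables 200G on a single wavelength. The sketch below assumes typical dual-polarization coherent optics and ignores FEC and framing overhead, so treat the result as illustrative rather than a transceiver specification:

```python
import math

# 16 QAM carries log2(16) = 4 bits per symbol; dual-polarization coherent
# optics double that per symbol interval.
bits_per_symbol = math.log2(16)   # 4.0
polarizations = 2                 # assumed dual-polarization transmission
line_rate_gbps = 200

# Required symbol rate in gigabaud, ignoring FEC/framing overhead.
symbol_rate_gbaud = line_rate_gbps / (bits_per_symbol * polarizations)
print(symbol_rate_gbaud)          # 25.0
```

A roughly 25 GBaud symbol rate is well within reach of the same electronics used for 100G QPSK, which is why the 16 QAM upgrade path is attractive for DCIs.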
VIAVI provides an unmatched breadth and depth of interoperable test products and expertise. This ensures an elite class of service and reliability commensurate with hyperscale cloud performance.
What is a Hyperscaler?
The hyperscaler threshold has been defined by the International Data Corporation (IDC) as five thousand or more servers on a ten thousand square foot or larger footprint. Internet content providers (ICPs), cloud services and big data storage have become synonymous with hyperscale, although not all companies within these categories utilize these massive data centers.
Social media and enterprise software are other arenas where hyperscale data centers have been established. The actual number of companies meeting the hyperscaler definition is relatively low (below thirty) and predominantly U.S.-based.
There are over 400 hyperscale data centers in operation worldwide today, and this figure is only growing. Distinctive hallmarks of the hyperscaler are the motivation and technical acumen necessary to customize data center hardware and software according to their business model and applications.
This customization can ripple throughout the supply chain as vendors react to the high volume and rapid evolution of hyperscale hardware and components. To commit to the requisite costs, hyperscalers must establish a long-term vision and runway for the future that includes significant scalability and growth potential.
Challenges of Hyperscale Computing
The sheer size associated with hyperscale data center architecture provides several inherent benefits including more efficient cooling and power distribution, balanced workloads across servers, and built-in redundancies.
These economies of scale can be negated by the challenges associated with hyperscale computing. High traffic volumes and complex flow patterns can outstrip the capabilities of traditional monitoring tools and practices. Visibility into external traffic flows can also suffer as the speed and sheer quantity of connections dilute monitoring focus.
Security concerns are another challenge that can be magnified by the size of the data center. Although preventive and proactive security systems are an essential element of hyperscale computing, a single breach can simultaneously expose enormous amounts of sensitive customer data.
Hyperscale data center resource planning must balance the proximity of available talent with physical size demands that can sometimes limit location options to more remote or previously undeveloped areas. With some of the largest centers exceeding 500,000 square feet of floor space, establishing the utilities, roads and other infrastructure needed to support hyperscale locations can be equally challenging.
Despite these constraints, many of the largest hyperscale data centers in the world are currently located in or near major cities with dense population and development. Hyperscale data center proportions will expand considerably over the next decade, even as the available talent pool continues to shrink. More automation, machine learning and virtualization will be required to prevent an exponential demand for resources and talent from overwhelming the hyperscale ecosystem.
How to Test in a Hyperscale Data Center Environment
Testing practices for fiber connections, network performance and service quality remain consistent with conventional data center testing, only on a much larger scale. Uptime reliability becomes increasingly important even as the testing complexity grows through the sheer volume of pathways and components. Hyperscale DCIs running close to full capacity should be tested consistently to verify throughput and find potential issues before a fault occurs. Automated monitoring solutions should be leveraged to reduce resource demands.
The customization by hyperscalers of data center hardware and software makes interoperability essential for premium test solutions. Test tools supporting open APIs add flexibility to accommodate the hyperscaler diversity. As the hardware has diverged, common interface conventions such as PCIe and MPO have continued to propagate throughout the hyperscale ecosystem due to their effective combination of density and capacity. To further mitigate the challenges of interoperability and fiber network conformity, VIAVI has worked closely with standards bodies such as the FOA as the rules for the diverse hyperscale playing field have evolved.
Bit error rate testers like the MAP-2100 have been developed specifically for environments such as hyperscale data centers where few or no personnel are generally available to perform network tests. Network monitoring solutions intended for the hyperscale ecosystem can flexibly launch large-scale performance monitoring tests from multiple physical or virtual access points.
Purpose-built test tools for hyperscale elements such as MPO and ribbon fiber allow the same attention to procedure and consistency to be applied on an immense scale within hyperscale deployments.
New hyperscale data centers continue to push the boundaries of size, complexity and density. Test solutions designed specifically for the hyperscale environment, as well as tools and products originally conceived for other high-capacity fiber applications, can be utilized in tandem to effectively verify and maintain data center performance.
- Fiber Inspection
The substantial volume of fiber connections within and between hyperscale data centers underscores the need for reliable and efficient fiber inspection tools. A single particle, defect, or contaminated end-face can lead to insertion loss and compromised performance of the network. The best fiber inspection tools for hyperscale applications combine compact form factors with automated inspection routines and multi-fiber connector compatibility.
- High Speed Transport: 400G and 800G
Emerging technologies like the IoT and 5G with their inherent bandwidth demands have made 400G and 800G technology essential to hyperscale computing. While these cutting-edge, high speed Ethernet standards have enabled data centers to keep pace, PAM-4 modulation and Forward Error Correction (FEC) have contributed to testing complexity. Scalable, upgradable testing tools can help facilitate advanced error rate and throughput testing to meet the performance demands of the hyperscale data center.
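The lane math behind 400G helps explain the added complexity. The sketch below uses the published figures for 8-lane PAM-4 variants such as 400GBASE-FR8 (26.5625 GBaud per lane); treat it as an illustrative check rather than a test specification:

```python
import math

# Rough lane math for 8-lane 400G PAM-4 interfaces (e.g., 400GBASE-FR8).
bits_per_symbol = math.log2(4)    # PAM-4 -> 2 bits per symbol
lane_baud_g = 26.5625             # symbol rate per lane, GBaud
lanes = 8

lane_rate_gbps = lane_baud_g * bits_per_symbol    # 53.125 Gb/s per lane
raw_rate_gbps = lane_rate_gbps * lanes            # 425.0 Gb/s on the wire
payload_gbps = 400                                # client payload rate
overhead_pct = (raw_rate_gbps - payload_gbps) / payload_gbps * 100

print(lane_rate_gbps, raw_rate_gbps, round(overhead_pct, 2))  # 53.125 425.0 6.25
```

That 6.25% gap between wire rate and payload is consumed by RS-FEC and line encoding, which is why FEC-aware error rate testing is unavoidable at these speeds.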
- 100G and 200G
Interconnects between data centers use up to 100G over a single wavelength, and continue to upgrade to 200G. Network testing and monitoring tools with automated test scripts can ensure the integrity and security of these important connections. Intentional stressing of fiber connections prior to traffic turn up can identify and locate trouble spots. RFC 2544 and RFC 6349 TCP throughput testing can also be performed to verify throughput over the 16 QAM modulated links used for 200G.
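A core calculation in RFC 6349 testing is the bandwidth-delay product (BDP), which sets the TCP window needed to fill a path. The link rate and RTT below are example values for a hypothetical DCI, not measured data:

```python
# RFC 6349-style bandwidth-delay product check (minimal sketch).

def bdp_bytes(bottleneck_bw_bps: float, rtt_s: float) -> float:
    """Bandwidth-Delay Product: TCP window (bytes) needed to fill the path."""
    return bottleneck_bw_bps * rtt_s / 8   # bits -> bytes

# Example: a 100 Gb/s DCI with a 2 ms round-trip time.
window = bdp_bytes(100e9, 0.002)
print(window)   # 25000000.0 bytes (~25 MB of TCP window required)
```

If the sender's window is smaller than the BDP, measured throughput will fall short of line rate even on a clean link, which is exactly the class of issue RFC 6349 testing is designed to expose.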
- Fiber Monitoring
The proliferation of fiber optics within the hyperscale data center ecosystem has made robust fiber monitoring a formidable yet essential task. Versatile testing tools for continuity, optical loss measurement, optical power measurement and OTDR are a must for construction, activation and troubleshooting activities. Automated fiber monitoring solutions like the ONMSi Remote Fiber Test System (RFTS) can provide scalable, centralized fiber monitoring with immediate alerts.
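Optical loss measurement ultimately feeds a link loss budget. The sketch below sums typical single-mode planning values (the per-kilometer, splice and connector figures are assumed defaults; actual pass/fail limits depend on the application standard in force):

```python
# Simple fiber link loss budget (assumed typical single-mode values).

def link_loss_db(length_km: float, splices: int, connectors: int,
                 fiber_db_per_km: float = 0.35,
                 splice_db: float = 0.1,
                 connector_db: float = 0.5) -> float:
    """Sum fiber attenuation plus splice and connector insertion losses."""
    return (length_km * fiber_db_per_km
            + splices * splice_db
            + connectors * connector_db)

# Example: a 10 km DCI span with 2 splices and 2 connector pairs.
print(round(link_loss_db(10, 2, 2), 2))   # 4.7 dB
```

OTDR traces serve as the ground truth for each term in this budget, flagging any splice or connector whose measured loss exceeds its allocated share.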
- Virtual Test
Portable test instruments that once required the dispatch of a network technician can now be virtually operated through the Fusion test agent. This virtualization helps to offset the copious resource demands of the hyperscale data center. The software-based Fusion platform can be used to monitor networks, ensure performance and verify SLAs. Ethernet activation tasks such as RFC 6349 TCP throughput testing can also be virtually initiated and executed.
Multi-fiber push on (MPO) connectors were once used primarily for dense trunk cables. Today, the density constraints of the hyperscale data center have led to rapid adoption of this interface for patch panel, server and switch connections. With ribbon fiber counts of up to 72 fibers contained in a compact form factor, fiber testing and inspection can be accelerated through dedicated MPO test solutions with automated pass/fail results.
The diverse VIAVI test product offerings cover all aspects and phases of hyperscale data center construction, activation and maintenance. Throughout the migration towards larger and denser data centers, fiber connector inspection has remained an essential element of the overall test strategy.
The rapid expansion of MPO connector utilization makes the FiberChek Sidewinder an ideal solution for automated multi-fiber end face certification. Optical loss test sets (OLTS) designed specifically for the MPO interface, such as the SmartClass Fiber MPOLx, can make Tier 1 fiber certification within the hyperscale data center easier and more reliable.
The T-BERD 5800 100G (or MTS-5800 100G outside of North America) is an industry leading, compact dual-port 100G test instrument that can facilitate fiber testing, service activation and troubleshooting. This versatile tool can be used for metro/core applications as well as DCI testing. With an advanced feature set and a rugged, compact form factor, OTN and Ethernet service activation testing can be completed quickly and accurately. The T-BERD 5800 supports consistency of operations with repeatable methods and procedures.
Hyperscale data centers are a natural application for automated OTDR testing through MPO connections. The multi-fiber MPO switch module is designed to produce an all-in-one solution for MPO dominated, high density fiber environments. When used in conjunction with the T-BERD test platform, fibers can be characterized through OTDR without the need for time-consuming fan-out/break-out cables. Automated test workflows for certification can be performed for up to 12 fibers simultaneously.
Standalone, remote fiber monitoring is another category of advanced test solution that can find unlimited utility in the hyperscale cloud. With SmartOTU fiber monitoring, detected events including degradation, fiber tapping or intrusion are quickly converted to alerts. The system can be deployed right out of the box, with no configuration required. This cost-effective, accurate and modular tool provides yet another level of assurance for SLA contracts and DCI uptime.
The Future of Hyperscale
New technology and applications will continue to drive the demand for hyperscale computing into the next decade and beyond. As the monetization opportunities presented by 5G expand, new hyperscale entrants will drive applications that further challenge density, throughput and efficiency constraints. More hyperscale data centers will be interconnected through immense fiber networks as each hyperscaler expands their reach and base.
Hyperscale computing is truly a global phenomenon, with a growing proportion of new footprints appearing in Asia, Europe, South America and Africa as the building boom continues. Massive new submarine cables (submerged DCI) connecting these disparate geographies, and perhaps even new hyperscale data centers, will be deployed on the ocean floor.
Green technologies, such as solar rooftops and wind turbines, will make the massive power consumption more manageable and environmentally sustainable. As hyperscale technology evolves, VIAVI will continue to draw upon its decades of experience, expertise and end-to-end (e2e) involvement in the planning, deployment and management of discrete hyperscale elements from conception in the lab to servicing in the field.
Test your hyperscale data center with help from VIAVI today!
Are you ready to take the next step with one of our hyperscale products or solutions?
Complete one of the following forms to continue: