VIAVI Solutions partners with hyperscale data center operators, cloud and network service providers, and those deploying data center interconnects (DCI). Our solutions are designed to reduce testing time, optimize optical networks, reduce latency, and ensure 100% reliability to support SLAs.
Hyperscale Data Centers
VIAVI Solutions is an active participant in over thirty standards bodies and open-source initiatives, including the Telecom Infra Project (TIP). When standards don’t move quickly enough, we anticipate change and develop the equipment needed to test evolving infrastructure. We believe in open APIs, so hyperscale data center companies can continue to write their own automation code.
Decades of innovation, partnership, and collaboration with over 4000 global customers and standards bodies like the FOA have uniquely qualified VIAVI to address the testing scale and complexity of hyperscale data centers.
We guarantee the performance of optical hardware over the lifecycle of the hyperscale ecosystem, from lab to turn-up to monitoring. Testing and monitoring within the data center extend outward to data center campuses, large-scale metro networks, and global distributed data center networks and their interconnects.
Automated MPO connector inspection completed in seconds, simultaneous testing of multiple 400G or 800G ports, and virtual service activation and monitoring are some of the test capabilities required to keep pace with hyperscale evolution.
What is a Hyperscale Data Center?
A hyperscale data center is a large-scale distributed computing facility that is frequently owned and operated by the company it supports. The term “hyperscale” refers to data center size as well as the ability to scale up capacity in response to demand.
- Horizontal scaling (scaling out) is accomplished by adding more hardware to the data center.
- Vertical scaling (scaling up) increases the power, speed, or bandwidth of existing hardware. Both models are contrasted in the sketch after this list.
- Software defined networking (SDN) is often incorporated, along with the hyperscaler’s own unique software and hardware technologies.
- Data center interconnects (DCIs) are used to link massive data centers to one another as well as intelligent edge computing centers around the world.
- DCI connection testing eases installation by quickly verifying throughput and accurately pinpointing sources of latency or other issues.
- PAM4 modulation and Forward Error Correction (FEC) add complexity to ultra-fast 400-800G Ethernet DCI connections.
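To make the two scaling models concrete, here is a minimal Python sketch; the class names and capacity figures are hypothetical illustrations, not a model of any particular data center.

```python
# Minimal sketch contrasting horizontal and vertical scaling.
# All names and capacity figures are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Server:
    gbps: float  # per-server bandwidth

@dataclass
class DataCenter:
    servers: list[Server] = field(default_factory=list)

    @property
    def capacity_gbps(self) -> float:
        return sum(s.gbps for s in self.servers)

    def scale_out(self, count: int, gbps: float = 100.0) -> None:
        """Horizontal scaling: add more hardware."""
        self.servers.extend(Server(gbps) for _ in range(count))

    def scale_up(self, factor: float) -> None:
        """Vertical scaling: boost existing hardware."""
        for server in self.servers:
            server.gbps *= factor

dc = DataCenter()
dc.scale_out(5000)        # scale out: 5,000 servers at 100G each
print(dc.capacity_gbps)   # 500000.0
dc.scale_up(4.0)          # scale up: e.g., migrate 100G ports to 400G
print(dc.capacity_gbps)   # 2000000.0
```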
The size and complexity of hyperscale data center architecture make testing essential during network construction, expansion, and monitoring phases. VIAVI offers an unmatched breadth and depth of interoperable test products and expertise, ensuring levels of service and reliability that meet the high standards for hyperscale cloud computing performance.
What is a Hyperscaler?
According to the International Data Corporation (IDC) definition, a “hyperscaler” must operate at least 5,000 servers across at least 10,000 square feet, although many hyperscale data centers are significantly larger.
Internet content providers (ICPs), cloud service providers, and big data storage companies are known for hyperscale computing, although not all companies in these categories use such massive data centers. Social media, e-commerce, and enterprise software are other industries that commonly rely on large standalone or colocation data centers.
- There are over 600 hyperscale data centers in operation worldwide today, and this number continues to grow.
- The total number of companies meeting the hyperscaler criteria remains relatively low, with almost 40% of all hyperscale deployments within the U.S.
- Synergy Research Group data shows that the three largest hyperscale players, Amazon Web Services (AWS), Microsoft, and Google, account for over half of all installations.
- Hyperscalers typically maintain the in-house technical skills needed to customize data center hardware and software according to their business model and applications.
- Data center colocation is a beneficial strategy for both established hyperscalers and smaller companies seeking to lease cloud computing capacity without the ground-up investment.
- The data center industry supply chain is complicated by customization and the early adoption of new hardware and software technologies. This makes high-volume production and fast delivery challenging for many vendors.
Challenges of Hyperscale Computing
The size associated with scalable cloud architecture provides several benefits, including more efficient cooling and power distribution, balanced workloads across servers, flexibility, and built-in redundancies. These economies of scale can also lead to challenges. High traffic volumes and complex flows can make real-time hyperscale monitoring difficult. Visibility into external traffic can also be complicated by the speed and quantity of fiber and connections.
- Security concerns are magnified by the size of the hyperscale data center. Although proactive security systems are an essential part of cloud computing, a single breach can expose huge amounts of sensitive customer data.
- Energy consumption and greenhouse emissions are growing concerns for the data center industry. Data centers already consume approximately 3% of the world’s electricity. This has prompted many leading cloud computing companies and data center owners, including Google and Amazon, to pledge climate neutrality by 2030.
- Talent and resource availability becomes a concern in more remote or underdeveloped hyperscale locations. With a lower limit of 5,000 servers and 10,000 square feet, and some centers exceeding 500,000 square feet, establishing the utilities, roads, and other infrastructure needed to support these locations can be challenging.
- Hyperscale data center proportions will continue to expand, even as the available talent pool continues to shrink. More automation, machine learning, and virtualization will be required to prevent the demand for resources from overwhelming the ecosystem.
- Construction schedules are compressed by the increased demand for internet content, big data storage, and telecom applications. The addition of 5G, the IoT, and intelligent edge computing centers adds to the burden. These pressures can lead to minimized or omitted pre-deployment fiber and performance testing, and to more problems discovered after commissioning.
Hyperscale Testing Practices
Many testing practices for fiber connections, network performance, and service quality remain consistent with conventional data center testing, only on a significantly larger scale. Uptime reliability becomes more important even as the testing complexity grows. DCIs running close to full capacity should be tested and monitored consistently to verify throughput and find potential issues before a fault occurs. Automated monitoring solutions should also be used to minimize resource demands.
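As a minimal sketch of the kind of automated threshold check such monitoring performs, the snippet below polls a link’s throughput and raises an alert as it approaches capacity. The measurement function, link name, and thresholds are hypothetical stand-ins for a real test agent.

```python
# Minimal sketch of an automated DCI utilization check. The measurement
# function is a hypothetical stand-in for polling a real test probe.

import random
import time

LINK_CAPACITY_GBPS = 400.0
ALERT_THRESHOLD = 0.90  # flag links running close to full capacity

def measure_throughput_gbps(link_id: str) -> float:
    # Placeholder: a real agent would query a probe for this link.
    return random.uniform(300.0, 400.0)

def monitor(link_id: str, polls: int = 5, interval_s: float = 1.0) -> None:
    for _ in range(polls):
        utilization = measure_throughput_gbps(link_id) / LINK_CAPACITY_GBPS
        if utilization >= ALERT_THRESHOLD:
            print(f"ALERT: {link_id} at {utilization:.0%} of capacity")
        time.sleep(interval_s)

monitor("dci-east-west-01")
```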
The customization of data center hardware and software makes interoperability essential for premium test solutions. This includes test tools supporting open APIs to accommodate hyperscaler diversity. Common interfaces like PCIe and MPO that enable high density and capacity have grown in popularity and require solutions that can test and manage them efficiently.
- Bit error rate testers like the MAP-2100 have been developed specifically for environments where few or no personnel are available to perform network tests (the confidence calculation behind BER testing is sketched after this list).
- Network monitoring solutions intended for this type of ecosystem can flexibly launch large-scale performance monitoring tests from multiple physical or virtual access points.
- Testing best practices defined for elements like MPO and ribbon fiber can be applied on a massive scale within these large data center deployments.
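Bit error rate testing is ultimately a statistical exercise: with zero errors observed, demonstrating that the BER is at or below a target at a given confidence level requires sending roughly -ln(1 - confidence) / BER bits. The sketch below works through that standard calculation; the 1e-12 target and 400G line rate are illustrative values.

```python
# Standard zero-error BER confidence calculation: to claim
# BER <= target at a given confidence with no observed errors,
# send N >= -ln(1 - confidence) / target bits.

import math

def bits_required(target_ber: float, confidence: float = 0.95) -> float:
    return -math.log(1.0 - confidence) / target_ber

def test_time_seconds(target_ber: float, line_rate_bps: float,
                      confidence: float = 0.95) -> float:
    return bits_required(target_ber, confidence) / line_rate_bps

# Verifying BER <= 1e-12 on a 400G link at 95% confidence:
print(f"{bits_required(1e-12):.2e} bits")          # ~3.00e+12 bits
print(f"{test_time_seconds(1e-12, 400e9):.1f} s")  # ~7.5 s
```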
Hyperscale Solutions
New data center installations continue to add size, complexity, and density. Test solutions and products originally developed for other high-capacity fiber applications can also be used to verify and maintain enterprise data center performance.
- Fiber Inspection: The high volume of fiber connections within and between data centers requires reliable and efficient fiber inspection tools. A single particle, defect, or contaminated end-face can lead to insertion loss and compromised network performance. The best fiber inspection tools for hyperscale applications include compact form factors, automated inspection routines, and multi-fiber connector compatibility.
- Fiber Monitoring: The increase in fiber optics within the hyperscale data center industry has made fiber monitoring a difficult yet essential task. Versatile testing tools for continuity, optical loss measurement, optical power measurement, and OTDR are a must throughout all phases of construction, activation, and troubleshooting (a worked loss calculation follows this list). Automated fiber monitoring systems like the ONMSi Remote Fiber Test System (RFTS) can provide scalable, centralized fiber monitoring with immediate alerts.
- MPO: Multi-fiber push on (MPO) connectors were once used primarily for dense trunk cables. Today, density constraints have led to rapid adoption of MPO for patch panel, server, and switch connections. Fiber testing and inspection can be accelerated through dedicated MPO test solutions with automated pass/fail results.
- High Speed Transport - 400G and 800G: Emerging technologies like the IoT and 5G, with their high bandwidth demands, have made 400G and 800G technology essential to hyperscale computing. While these cutting-edge, high-speed Ethernet standards have enabled data centers to keep pace, PAM4 modulation and Forward Error Correction (FEC) add complexity. Scalable, automated test tools can perform advanced error rate and throughput testing to meet performance demands. Industry-standard Y.1564 and RFC 2544 service activation workflows also assess latency, jitter, and frame loss at high speeds (a simplified rate-search sketch follows this list).
- Virtual Test: Portable test instruments that once required a network technician on site can now be virtually operated through the Fusion test agent to reduce resource demands. The software-based Fusion platform can be used to monitor networks, ensure performance, and verify SLAs. Ethernet activation tasks such as RFC 6349 TCP throughput testing can also be virtually initiated and executed.
- 5G Networks: Distributed, disaggregated 5G networks bring more demand for hyperscale computing and virtual testing of network functionality, applications, and security. The TeraVM solution is a valuable tool for validating both physical and virtual network functions and emulating millions of unique application flows to assess overall QoE. Learn more about our complete end-to-end RANtoCore™ testing and validation solutions.
- Observability: Observability is the practice of achieving deeper levels of network insight, which lead to higher levels of performance and reliability. The VIAVI Observer platform utilizes valuable network data sources to produce flexible, intuitive dashboards and actionable insights. This results in faster problem resolution, improved scalability, and optimized service delivery. Learn more about Performance Management & Security.
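To ground the optical loss measurement mentioned above: insertion loss in dB is simply the reference power minus the measured power when both are expressed in dBm. A minimal sketch follows; the 0.75 dB pass/fail limit is an illustrative connector-loss budget, not a quoted requirement.

```python
# Worked sketch of the insertion-loss arithmetic behind optical loss
# measurement. The 0.75 dB limit is illustrative, not a quoted standard.

def insertion_loss_db(reference_dbm: float, measured_dbm: float) -> float:
    # Loss in dB = reference power (dBm) - measured power (dBm).
    return reference_dbm - measured_dbm

def passes(loss_db: float, limit_db: float = 0.75) -> bool:
    return loss_db <= limit_db

loss = insertion_loss_db(reference_dbm=-10.0, measured_dbm=-10.6)
print(f"{loss:.2f} dB ->", "PASS" if passes(loss) else "FAIL")  # 0.60 dB -> PASS
```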
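RFC 2544 throughput testing is commonly implemented as an iterative rate search: find the highest offered load the device under test forwards with zero frame loss. Below is a simplified binary-search sketch of that procedure; send_trial() is a hypothetical stand-in for driving real test hardware, with its loss behavior hard-coded for illustration.

```python
# Simplified sketch of the rate search commonly used for RFC 2544
# throughput testing. send_trial() is a hypothetical stand-in that
# pretends the device under test drops frames above 380 Gbps.

def send_trial(rate_gbps: float) -> float:
    """Return the frame loss ratio observed at the offered rate."""
    return 0.0 if rate_gbps <= 380.0 else 0.01

def throughput_search(line_rate_gbps: float, resolution: float = 0.1) -> float:
    low, high = 0.0, line_rate_gbps
    while high - low > resolution:
        mid = (low + high) / 2.0
        if send_trial(mid) == 0.0:
            low = mid    # no loss: try a higher rate
        else:
            high = mid   # loss seen: back off
    return low

print(f"Throughput: {throughput_search(400.0):.1f} Gbps")  # converges near 380.0
```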
What We Offer
The diverse VIAVI test product offerings cover all aspects and phases of hyperscale data center construction, activation, and maintenance. Throughout the migration towards larger and denser data centers, fiber connector inspection has remained an essential element of the overall test strategy.
- Certification: The rapid expansion of MPO connector utilization makes the FiberChek Sidewinder an ideal solution for automated multi-fiber end face certification. Optical loss test sets (OLTS) designed specifically for the MPO interface, such as the SmartClass Fiber MPOLx, also make Tier 1 fiber certification easier and more reliable.
- High-Speed Test: Optical Transport Network (OTN) testing and Ethernet service activation must be performed quickly and accurately to support the high-speed connectivity of hyperscale data centers:
- The T-BERD 5800 100G (or MTS-5800 100G outside of the North American market) is an industry-leading, ruggedized, and compact dual-port 100G test instrument for fiber testing, service activation, and troubleshooting. This versatile tool can be used for metro/core applications as well as DCI testing.
- The T-BERD 5800 supports consistency of operations with repeatable methods and procedures.
- The versatile, cloud-enabled OneAdvisor-1000 takes high speed testing to the next level with full rate and protocol coverage, PAM4 native connectivity, and service activation testing for 400G as well as legacy technologies.
- Multi-fiber, All-in-One: Hyperscale data centers are an ideal setting for automated OTDR testing through MPO connections. The multi-fiber MPO switch module is an all-in-one solution for MPO-dominated, high-density fiber environments. When used in conjunction with the T-BERD test platform, fibers can be characterized through OTDR without the need for time-consuming fan-out/break-out cables. Automated certification test workflows can be performed for up to 12 fibers simultaneously.
- Automated Testing: Test process automation (TPA) reduces data center construction times, manual test processes, and training hours. Automation enables efficient throughput and BER testing between hyperscale data centers as well as end-to-end verification of complex 5G network slices.
- The SmartClass Fiber MPOLx optical loss test set brings TPA to Tier 1 fiber certification with native MPO connectivity, automated workflows, and full visibility of both ends of the link. Comprehensive 12-fiber test results are delivered in under 6 seconds.
- The handheld Optimeter optical fiber meter makes the “no-test” option irrelevant by completing fully automated, one-touch fiber link certification in less than a minute.
- Standalone, Remote Fiber Monitoring: Advanced, remote test solutions find unlimited utility in scalable cloud settings. With SmartOTU fiber monitoring, detected events including degradation, fiber-tapping, or intrusions are quickly converted to alerts, safeguarding SLA contracts and DCI uptime. The ONMSi Remote Fiber Test System (RFTS) performs ongoing OTDR “sweeps” to accurately detect and predict fiber degradation throughout the network. Hyperscale data center OpEx, MTTR, and network downtime are dramatically reduced.
- Observability and Validation: The same machine learning (ML), artificial intelligence (AI), and network function virtualization (NFV) breakthroughs that enable scalable cloud and edge computing for 5G are also driving advanced hyperscale data center test solutions.
- The Observer platform goes beyond traditional monitoring by intelligently converting enriched flow data and traffic conversation details into real-time health assessments and valuable end-user experience scoring.
- The flexible TeraVM software appliance is ideal for validating virtualized network functions. Key network segments including access, mobile network backhaul, and security can be fully validated in the lab, data center, or cloud.
The Future of Hyperscale Data Networks
New technology and applications will continue to drive the demand for testing and computing into the next decade and beyond. As the monetization opportunities presented by 5G and the IoT expand, new entrants will drive applications that further challenge density, throughput, and efficiency limits.
As data centers increase in size, they will also become more distributed. Smaller edge computing centers move intelligence closer to users while reducing latency and susceptibility to large-scale distributed denial-of-service (DDoS) attacks. This trend toward distributed networks, led by Google, Amazon (AWS), and other hyperscale leaders, means even more data centers will be interconnected through immense fiber networks as each provider expands its reach and user base to improve performance.
Data center consolidation and colocation will continue to drive interoperability and reduce barriers to entry. Fortunately, these consolidated hyperscale locations are also well equipped to employ green technologies such as liquid cooling, solar rooftops, and wind turbines, along with advanced AI to optimize cooling and power consumption.
Hyperscale computing is a truly global digital phenomenon, with a growing proportion of new footprints appearing in Asia, Europe, South America, and Africa as the building boom continues. Massive new submarine cables (submerged DCIs) connecting these diverse geographies, and perhaps one day new data centers themselves, will be deployed on the ocean floor.
As the technology evolves, VIAVI continues to draw upon experience, expertise, and end-to-end (e2e) involvement to help our customers. Our hyperscale solutions are used for planning, deployment, and management of discrete elements, from conception in the lab to servicing in the field, and beyond.
Test your data center with help from VIAVI today!