Remote PHY: Problems Solved, and Problems Created By DAA

August 3, 2017

Let’s face it, business as usual is no longer an option for cable operators trying to keep up with ever-increasing consumer demand for faster and higher quality broadband services. Traditional node splits have been the answer to increase speed and capacity until recently, but cost and hub space/power issues are now breaking this model. Distributed access architectures (DAA) like Remote PHY are the consensus solution to these challenges, but their implementation will not come without new challenges. In this webinar we will share what we have learned in working with early-adopter MSOs and leading DAA vendors in the planning and early roll-out phases of DAA.

Topics will include:

  • Industry trends driving DAA adoption
  • Brief explanation of DAA variants
  • Operational/HFC maintenance challenges created by DAA and potential solutions
  • Current DAA adoption status and lessons learned to date
  • What’s next and how best to prepare for DAA rollouts

Duration: 1 hour


Read the Remote PHY: Problems Solved, and Problems Created By DAA webinar transcript below:

Today we're going to talk about distributed access architectures. We're going to talk a little bit about the problems solved, but we're going to talk mostly about the problems that are created by distributed access architecture from an operational standpoint, and what can be done to address those. We've been working with DAA vendors for a little over three years now, and with MSOs for almost two, so again, we're going to share what we've learned from them about the operational challenges of distributed access architecture roll-outs.

We'll start off with a quick look at the state of the HFC today. Where are we? No surprise, we have unprecedented demand growth for downstream broadband services, largely driven by over-the-top video, things like Netflix, but also by some of the skinny bundles and over-the-top offerings that are challenging our traditional video model. Upstream demand hasn't been growing nearly as fast. A couple of years ago, the big prediction was that YouTube, and to a greater extent Skype, were going to be the killer apps that would drive upstream demand. But that really hasn't happened yet.

It will be interesting to see, as the iPhone 8 releases, whether all of the rumored virtual reality and, probably more so, augmented reality capabilities are as cool as they say they are; if so, they may be the killer app that drives the upstream. The other big thing driving downstream demand has been disruptors like Google Fiber and similar providers in EMEA, who really forced our hands as a cable industry on one gigabit service. The actual need for one gig is somewhat limited, as evidenced by the low take rates for our one gig packages, but in order to compete, we must meet their billboard speeds. The good news is, the need for speed is nothing new for cable operators. We've had these needs for faster speeds and more capacity in the past, and we have several tricks up our sleeve for increasing bandwidth. The first one has been just to add more carriers.

But we're running out of space to do that. Things are filling up, even in the 860 MHz and one gigahertz plants of today. Analog reclamation helps some, but again, we've already managed to fill the space we recovered. The good folks at CableLabs helped us out with DOCSIS 3.1 and its optional frequency extensions, including out to 1.2 GHz in the downstream, but we're really only seeing adoption of those in Europe, so they aren't being taken up in a lot of places. Simply adding more carriers, that old trick, really isn't working for us anymore. The next trick is squeezing more bits per hertz out of the spectrum that we have. We're pretty well tapped out at 256-QAM on our current DOCSIS 3.0 plants. Again, the CableLabs folks helped us out in DOCSIS 3.1 with OFDM and, even more so, LDPC to get more bits out of each hertz.
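To put a number on that "more bits per hertz" point, the raw bits-per-symbol gain from a higher-order QAM constellation is just the base-2 logarithm of the constellation size (a back-of-the-envelope sketch, not figures from the webinar; real throughput also depends on FEC overhead and SNR):

```python
import math

def bits_per_symbol(qam_order: int) -> int:
    """Bits carried per symbol by an M-QAM constellation (log2 of M)."""
    return int(math.log2(qam_order))

# DOCSIS 3.0 tops out around 256-QAM; DOCSIS 3.1 allows 4096-QAM and beyond.
print(bits_per_symbol(256))   # 8 bits/symbol
print(bits_per_symbol(4096))  # 12 bits/symbol
```

Going from 256-QAM to 4096-QAM is a 50% raw gain in bits per symbol, which is why the SNR headroom of the optical link matters so much.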

But even those alone really aren't enough to meet the capacity demands a couple of years out, for sure. So we're left with what we've been doing a lot of lately, and that's splitting nodes. Traditional node splits mean shrinking service group sizes, which means adding more CMTS ports. With the CMTS ports come more chassis and more splitting and combining networks, all of which require more rack space, which we're constrained on already. When you throw in the power and cooling that need to go with all that new hardware being racked up, we simply don't have the space, power, or cooling to do this, and it's quite expensive for cable operators to be adding more hubs. In reality, what they're trying to do is collapse some of these hubs down.

So splitting nodes the way we always have in the past really isn't the answer. We have to do something different. What else is happening in the HFC today? DOCSIS 3.1 has arrived, like I said. 4096-QAM and better are possible. Great, but in order to get there, we need the best SNR we can get. The centralized access architecture nodes of today, with their analog fiber, are one of the choke points that will prevent us from gaining the full benefit of DOCSIS 3.1. Then the last thing in the state of the HFC that really affects this is 5G wireless and Full Duplex DOCSIS. Why in the world are we talking about wireless in an HFC discussion? Well, this is all about the fact that 5G is still being defined, but for sure it's going to use higher frequencies, which means shorter range and more cell sites needed.

I've heard 5G discussed as a football field technology, meaning that each cell site can only cover the area of a football field in some cases. A lot of fiber is going to be needed to feed all those cell sites. I think the quote is, "Great wires make great wireless." And cable is pushing fiber deeper and deeper to do this. The wireless operators are talking about doing the same in parallel. Just today I saw in the news that Verizon bought the fiber network from WOW in the Chicago area for $250 million just for this reason. So I think this is what's driving a lot of the merger and acquisition discussions and the synergies between cable and fiber. Full duplex DOCSIS, great technology that has the potential to push out the need to pull fiber for the last mile for a decade or more, but requires node plus zero; therefore, lots of fiber. So these are all the things that are happening in the industry that's leading us up to distributed access architectures.

Given that we can't get enough megahertz out of the spectrum that we have today to meet demand, and we can't get enough bits per hertz out of the spectrum that we have, we must reduce service group sizes. Business-as-usual node splits are overflowing our hubs, so we have to do something different. That something different is distributed access architectures. Again, this is a very operations-focused webinar, so I'm not going to go through all of the details, but we need to cover at least the basics of the different variants for the rest of the discussion to make sense. Centralized access architecture is where we're at today: we have our cable RF plant, all of our amplifiers, hard line, and cable, funneling down to a single fiber node, which feeds back over analog fiber to our upstream demodulators and our downstream transmitters in the head end or hub, which also houses the PHY layer and the MAC layer. This is what we have today. This is business as usual today.
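The service-group arithmetic behind node splits is simple enough to sketch (the capacity and subscriber counts below are purely illustrative assumptions, not figures from the webinar):

```python
def capacity_per_subscriber_mbps(total_capacity_mbps: float,
                                 subscribers: int) -> float:
    """Average share of a service group's shared downstream capacity."""
    return total_capacity_mbps / subscribers

# Hypothetical numbers: ~5 Gbps of downstream shared by a 500-home service group.
before = capacity_per_subscriber_mbps(5000, 500)
# A node split halves the service group without adding any spectrum.
after = capacity_per_subscriber_mbps(5000, 250)
print(before, after)  # 10.0 20.0 Mbps average per home
```

Each split doubles the average capacity per subscriber, which is exactly why splits have been the go-to tool, and why the hub space they consume has become the bottleneck.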

As its name implies, Remote PHY (also known as R-PHY, R PHY, and RPHY) essentially takes the physical layer and moves it from the hub out into the field. So we're taking our upstream demodulation and our downstream modulation and moving them out into the field with the rest of the cable and RF plant. Instead of the analog fiber, what we now have is a 10 Gig, or potentially 40 Gig, optical Ethernet connection between our node and the head end. In the case of Remote PHY, the pipe itself is a commodity pipe shared with other telecom technologies, which greatly reduces the price, but the protocol over it is a little different. There's some special stuff that has to happen, in the form of pseudowires and the like, to handle the timing aspects when we split the MAC and the PHY layers. This is, right now, the most common form of distributed access architecture out there.

This tends to be the choice of the incumbent CMTS vendors, the people who already have chassis in place in the cable hubs and head ends where a lot of these controls live. This is generally the approach they're taking. Contrasting are Remote MACPHY and Remote CCAP, and I'll talk about these as one because the differences between them are somewhat nuanced. As the name would imply, in this case you're also taking the MAC layer and moving it out into the field, so you're removing the issue of timing between the MAC and the PHY by combining them both out there. You're using the same 10 or 40 GigE optical Ethernet pipe, but in this case it is truly just pure Ethernet being pumped across it. This generally tends to yield a larger footprint reduction, in that it generally doesn't use the large chassis in the hub and head end that Remote PHY uses.

This tends to be the choice of the new entrant vendors, the people who don't have the footprint in the hubs; people who started with a clean sheet tend to go more in this direction. We've seen published examples where people have compared what it would take to do a node in Remote CCAP versus a centralized access architecture, and there have been claims of the Remote CCAP type of implementation requiring only 10% of the power and rack space versus CAA, at less than half the cost, both the upfront initial cost and the ongoing operating expense. Again, there are lots and lots of great papers that get into the subtleties of each of these, and I'm definitely not going to join the religious argument over which is the best approach for each individual circumstance. But these are the general categories of distributed access architectures.

The next slide we're going to share has some really good industry data. We're seeing about 20% with no plans, and more than half starting in 2017 and 2018. So if we move back to the slides ... I absolutely love this slide. I stole it from the Light Reading Cable Next-Gen session from March. It was data taken by the S&P Global Market Intelligence group. They went out and surveyed 35 MSOs from around the world, director- and VP-level people, so people who really know what the plans are. The key takeaway is that, much like the informal poll we just did here, 80% of the MSOs out there today have DAA deployment plans. So this isn't some niche technology. This is something that's pretty widespread, and more than half of them plan to deploy by the end of next year.

I think we saw 67 or 70% on this call; it's about 50% in this global study, so the numbers track relatively well. But again, the takeaway is, this isn't a niche. This is something that's going to happen widespread, and very soon. So, distributed access architectures: a key enabler, something we have to do to achieve the node splits we need in order to reach the target service group sizes. We've talked about a lot of these benefits: hub space, power, and cooling savings. Those are the obvious ones, although the power savings are a little nuanced, because we can't just say that all of the power that used to be in the hub is being eliminated. Some of that power consumption is actually being moved out into the field and consumed by the device out there.

Now, in aggregate, there are going to be power savings. What's being transferred from the hub out into the field is less than 100% of it, but it's not a 100% reduction, if that makes sense. The cooling savings, though, are 100%, because we are going to be cooling in ambient air versus requiring HVAC in the hubs and head ends. Next, higher bits per hertz out of the optical link. By having a digital optical link, we gain a substantial improvement in SNR, which allows us to more comfortably go to the higher-order modulations and get more bits per hertz out of the optical link. And that is not the only benefit of the digital optical link. It is a much more robust link, in that it just generally tends to work. I've heard a lot of descriptions of the way people set up an analog optical link. There's a handful of people out there, technicians who are truly experts, who can follow the rules, follow the procedures, and do it right every time.

For everybody else, it tends to be more of an art than a science. The digital link is basically set it and forget it. You turn it on, it works, and it's much more robust. It has more range and more forgiveness for variations in the plant, and again, it's just generally much more reliable. This equates to increased customer satisfaction: you have fewer customer issues because of optical issues, and you spend a lot less of your technicians' time, and a lot fewer truck rolls, tweaking it, which leads to reduced total cost of ownership. The total cost of ownership of DAA, again, is not just the initial cost of buying it up front, but also includes the reduction in maintenance that you get through the more robust link and things related to that.

Another benefit of distributed access architecture that doesn't get as much press as I think it should is the flexibility aspect. As you're pushing fiber deeper and deeper into the plant, you have the opportunity to do some things you couldn't do before. I've seen several studies, which I've talked about before, showing that the top one or two percent of bandwidth users consume a wildly disproportionate share of the overall bandwidth, sometimes forcing operators to do node splits to accommodate them sooner than they would otherwise have to. With this Ethernet pushed way deep into the plant, it's a whole lot easier to split off a PON connection, a fiber-to-the-home connection, to accommodate these high-bandwidth users. The same applies if you have a small or medium business requiring higher reliability or higher bandwidth. You have the opportunity to do some things with this fiber and this Ethernet deeper out in the plant.

With the good come the challenges, though. The majority of the rest of this discussion is going to be about how disruptive distributed access architectures are to HFC maintenance practices, so I won't dwell on that now. But there are some other challenges I've heard; not all are substantiated, but they're definitely being discussed. One is complexity: there are concerns about pushing these complex devices from the controlled environment of the head end, with specialists close by, out into the field. And one that is definitely real is putting high-dollar devices in the field. In Latin America, it's not that uncommon for amplifiers to be stolen in one region and sold to a competitor in a different region. So now the concern is, if we put much more expensive devices out in the field, are we going to run into those theft issues?

Again, that's somewhat regionally focused, but it is a concern. Another one, not entirely substantiated, is data security and IT concerns. Is it going to be easier to hack into the network, get at data, and spy, with some of the higher layers pushed out into the field? We don't know. Then there's the power envelope. It's assumed at this point that distributed access architecture nodes are going to fit roughly within the power envelope of the plant today, and operators won't have to do a lot differently with plant powering to feed them. If that's not true, it changes the economics. It changes the picture. And lastly is the ability to update thousands of remote devices. We all know firmware updates and the like don't always go perfectly. With these things spread out across the field versus centrally located, is there going to be that one catastrophic event where you have to go out and touch every node, and have downtime because of it? Again, not a substantiated concern, but definitely one that is valid and worth thinking about as you go forward.

So we say DAA is disruptive to traditional plant maintenance. Why is that? If you look at today's hub, the way things work today, you have your return fiber coming back into the hub. It's split, with one RF feed to the CCAP and one to your return path monitoring and return sweep systems. In the forward path, you have the opportunity to combine sweep pulses and sweep traces with your downstream feed and send them out into the plant to feed your sweep and monitoring systems. In a distributed access architecture plant, you have a 10 Gig Ethernet pipe. You have no RF in the hub, so you don't have anything to feed your return path monitoring or your reverse sweep systems, and you don't have the ability to inject downstream sweep pulses, again given the lack of RF. So you can't use your existing head end hardware, and unless something different is done back at this end, the processes out in the field will have to change as well, which will be very disruptive.

An additional challenge with the DAA roll-outs is that these are not expected to be an overnight thing. This is something that's going to start, as we saw, in 2017 and 2018 for more than half of the operators, but it's expected to last potentially five to ten years as operators slowly start replacing the analog fiber and the centralized access architecture nodes with distributed access architecture through node splits or other inflection points. So what's that going to look like? Today, operators have the vast majority of their nodes on centralized access architectures, but they might dip their toe in the water with greenfields or select areas of node splits with DAA. For the sake of argument, let's say they go Remote PHY, using Remote PHY from the same vendor as the CCAP they have today, so it's just a matter of putting another card in the chassis to support it.

So now they have CAA and Remote PHY nodes. Shortly thereafter, for whatever reason, be it long fiber runs or something else, they may implement some Remote MACPHY nodes. Now remember, I said before that Remote PHY generally is the form factor preferred by incumbent CMTS vendors, while Remote MACPHY tends to be the new guys. So almost necessarily, these two are going to be from different network equipment manufacturers. Now you have three architectures from at least two different network equipment manufacturers. Shortly after that, they may decide there are reasons to put a Remote CCAP right on the side of a large MDU. That very well might be from a different network equipment manufacturer. A couple of years down the line, we may be deeper into a fiber-deep node plus zero plan, and a new low-cost DAA system, a Remote PHY system designed specifically for small service groups, may emerge.

That may be from yet another network equipment manufacturer. So why does this matter? Why is this important? Well, put yourself in a technician's shoes. In the morning they may be working on a CAA node. The first call after lunch may be a Remote PHY node from one NEM, and the next call may be a Remote CCAP node from another NEM. If the operator isn't able to come up with a way to have standardized processes across all of these different architectures and NEMs, it's going to be very difficult for the technician to keep up with which tools to use, how to use them, and all the subtle differences between them. So the proliferation is a very real concern, knowing that the DAA rollout is going to be much more of a marathon than a sprint, and there are going to be multiple technologies deployed.

Remote PHY Modules

So we lose visibility due to no RF in the hub or head end. What are the solutions that have been suggested in the industry? What are the ones that have been, or are being, investigated to fill that gap? The first is add-on hardware modules: essentially developing a hardware module that can be cut into the plant either ahead of or behind the Remote PHY or DAA module, or some type of card that can be made in variants to work with all the different types of DAA. The general feedback we've gotten is not to go there; there are a lot of concerns about whether it can be cost effective and scalable as we get down to really, really small service group sizes and potentially exponential growth in node count and the number of these units.

So there's not a lot of interest in that approach. NDF/NDR, narrowband digital forward and narrowband digital return, are the approaches that were proposed by CableLabs. These are designed specifically for carrying narrowband telemetry in DAA plants for set-top boxes, transponders, and test and measurement uses. They will definitely play a part in the solution, but they are not, in and of themselves, the solution, because by their very nature and definition they're narrowband. They are for carrying narrowband telemetry, not digitizing the entire RF spectrum and shipping it back. In some instances, even in narrowband fashion, they're not very hub-space efficient, so we have to be careful how we use NDF and NDR. But the good news is that the CableLabs folks had the foresight to put in these allowances to let set-top box telemetry and test and measurement work in the DAA environment, or at least have adequate substitutes.

Remote PHY and Virtualization

The last approach is virtualization. This is really the path that is scalable and really aligns with the DAA goals of not having a lot of new equipment in the hubs and head ends. It is the best overall option. So what does that look like? Again, in the chart we looked at before, where we used to have RF in the hub, we no longer do; we don't have the visibility for return path monitoring and sweep. The way it will work is that the distributed access architecture unit out in the field becomes the return visibility point, replacing the hub hardware, and it talks back to the same server that supports the hardware for centralized access architecture nodes. In some cases, it will also be the transmitter: instead of injecting sweep or telemetry in the downstream, this will be the opportunity to do any kind of transmitting we need to do.

So this unit, which is already going to be deployed, becomes the hardware for distributed access architecture plant maintenance. But some gaps still remain. At this point, I'm going to go deep on one specific gap, just as an example of what it takes to address all of these gaps as they come up, and to help frame some of the challenges we face. I picked this one because it's one of the first that really emerged for us, and one we spent a lot of calories addressing: sweep. From a downstream sweep standpoint, again, remember we can't inject downstream sweep pulses, but in general, sweepless sweep has been accepted by operators as an acceptable solution. I'll spend one minute on the next slide talking over the basics of what it is.

PNM, especially 3.1 PNM, has a lot of really good stuff in it to help us prioritize which nodes, and even potentially which legs within a node, we need to sweep, and to help prevent time wasted sweeping clean plant. But it doesn't actually do the sweep out in the field itself. For that, operators are absolutely still demanding meter-based return sweep capabilities, and there really isn't a specified solution for that today. Sweepless sweep may be familiar to many of you, but let's spend a minute reviewing what it is and how it works. This is a capability that's available from VIAVI Solutions, as well as pretty much any test and measurement provider who sells sweep systems. Essentially, the way it works is you take your meter out in the field and you take a reference measurement, say at the node. Or, if you're trying to sweep between two amplifiers, you take a measurement of the downstream at the first amplifier.

You save that as a reference, then you go to the other endpoint of the plant you want to characterize and take another measurement. What sweepless sweep does is determine the difference between these two: it solves for the difference, and the delta is the frequency response between those points. The advantage of sweepless sweep is that it uses existing carriers for characterizing downstream spectral performance, so there's no hub or head end gear required. One of the big drawbacks is that it doesn't cover vacant spectrum, but as we said earlier, vacant spectrum really doesn't exist unless you're doing frequency extensions. In those cases, you could potentially just turn up carriers on your CCAP, with some small business model changes from the CCAP vendors, to address this.
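The delta computation at the heart of sweepless sweep is simple enough to sketch (the function name and the carrier-level readings below are purely illustrative, not from any vendor's implementation):

```python
def sweepless_sweep_delta(reference_dbmv, measurement_dbmv):
    """Frequency response between two test points, per carrier, in dB.

    Both inputs are carrier-level readings (dBmV) taken at the same set of
    frequencies; their difference is the response of the plant between the
    two points, with no head end gear involved.
    """
    return [m - r for r, m in zip(reference_dbmv, measurement_dbmv)]

# Hypothetical readings at four downstream carriers:
reference   = [5.0, 5.2, 5.1, 4.8]   # saved at the node
measurement = [2.0, 1.7, 1.1, 0.3]   # taken at the next amplifier
print(sweepless_sweep_delta(reference, measurement))
# A steadily steeper delta like this would suggest loss increasing with frequency.
```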

Downstream sweep is pretty well taken care of, but return sweep is still a gap. Operators absolutely still need this for aligning amplifiers, because most distributed access nodes are not going to be node plus zero, at least in the first few years; a lot of this is going to be replacing business-as-usual node splits. But for node plus zero and otherwise, sweep is still considered an absolutely critical troubleshooting tool by operators, something they tell us again and again they cannot let go of. They need it for troubleshooting outages and even for certain proactive maintenance activities. So, in asking what return sweep needs to look like in a DAA plant, we came up with about five common requirements from the operators. One is that it absolutely has to be software based; I hate to sound like a broken record, but no specialized hardware can be deployed.

Their goal is to clear out the hubs and head ends, to get rid of all the hardware in there and move as much as possible up into data centers. So, no specialized hardware. It needs to be multi-user, largely to support the way the contractors who generally do sweep work: they'll send in a whole herd of people, gang up on a node, and sweep through it efficiently. So you need to be able to have multiple users sweeping a node at the same time. Standards based: this is a very, very common theme in the industry. Whatever method we use to replace the way the instrument and the sweep controller used to talk has to be standards based, in that it doesn't tie to a particular test and measurement or network equipment vendor, and the NEMs shouldn't have to do anything special to support multiple test and measurement vendors.

This is all about making it interchangeable, so you can drop in a different NEM or a different test and measurement vendor non-obtrusively. It has to be fast, meaning there's very little time budgeted for the contractors when operators are doing some of these frequency upgrades or frequency extensions. They don't have a lot of time to stop and arrange and register, or do anything that requires a lot of setup time before each measurement, so it needs to be quick. It needs to cover both occupied and vacant spectrum: we need to know what the frequency response is where the carriers are, and also above the carriers and, more importantly, below the carriers, where there tends to be a lot of noise. And it has to be reliable and always work. Again, sweep is viewed as a critical troubleshooting tool. It has got to work when DOCSIS services are down; if there's an outage, they want sweep available to help resolve it themselves.

Before I explain some of the options we looked at for return sweep, it's probably worth spending a minute on sweep 101 and understanding how the telemetry works, so we can see what needs to be replaced and where the gaps are with DAA. Today, the ONX, the OneExpert CATV field meter, would be out in the field with the technician, and there would be a sweep control unit, an SCU, mounted in the hub. When it was time to sweep, the user on the ONX would push a button and say, hey, I'd like to sweep. That sends an FSK telemetry carrier, generally buried down in the noise band somewhere, across the HFC plant up to the SCU, which will hear that and say, okay, request granted. And it will tell the meter, I want you to put sweep points at these locations; here's when we're going to start, and here's the duration of the sweep.

Then, at that agreed-to time, the ONX will start transmitting sweep pulses and the SCU will capture them. After it's done, the ONX will say, okay, how did we do? Please send me the data. The data will be returned on that same FSK carrier in the form of sweep measurements, and the display will update on the field meter. Then the steps just repeat. The problem in DAA is, we don't have a path to get this telemetry carrier from the field meter back to the SCU. We also don't have a method to get the full band of the upstream back: there is no RF in the hub for this hub-mounted, hardware-based unit to take measurements from, and no way to get that RF back to it. Those are the two challenges that prevent us from doing things the way we've always done them with upstream sweep.
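The request/grant handshake described above can be sketched as plain data structures (a hypothetical simplification; the real FSK protocol, message layout, and field names are vendor specific and not part of this webinar):

```python
from dataclasses import dataclass, field

@dataclass
class SweepRequest:
    """Meter -> SCU over the FSK carrier: 'I'd like to sweep.'"""
    meter_id: str

@dataclass
class SweepGrant:
    """SCU -> meter: 'Request granted,' plus the agreed sweep plan."""
    point_freqs_mhz: list = field(default_factory=list)  # where to place sweep points
    start_time_s: float = 0.0                            # when transmission begins
    duration_s: float = 0.0                              # how long the sweep lasts

def grant_sweep(req: SweepRequest) -> SweepGrant:
    """Stand-in for the SCU side of the handshake (fixed plan for illustration)."""
    return SweepGrant(point_freqs_mhz=[8.0, 16.0, 24.0, 32.0, 40.0],
                      start_time_s=0.5, duration_s=2.0)

grant = grant_sweep(SweepRequest(meter_id="ONX-001"))
print(len(grant.point_freqs_mhz))  # 5 sweep points agreed
```

Once the grant is in hand, both sides know the sweep points and timing, which is exactly the coordination that breaks when there is no RF path back to the SCU.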

Remote PHY Nodes and Units

On this slide, we'll quickly go through some of the options we looked at for replacing that. The first is just using the return signal generator on the ONX OneExpert field meter. Basically, this can generate up to eight carriers, or CWs in this case, that you can place across the upstream, and it will shoot those out to the Remote PHY node. Then, through various mechanisms, those can be sent back to the ONX and viewed as they were received by the Remote PHY unit, kind of like the field view capability for people who are used to using VIAVI meters. From that, you can see the frequency response from wherever you are in the field back to the Remote PHY unit.

The second option was to leverage the cable modem that's embedded within the OneExpert. In this case, you would take the field meter out to where you wanted to sweep from and bring it online. As soon as it was online, the OneExpert would of course know what its equalization taps were, and it could request what the frequency response looks like for whatever carriers were in use. Best case, you'd be bonding across all of your upstreams; worst case, you'd be on one carrier. Then, by solving for the pre-equalization and the actual received response, it could see what the frequency response is from here to there.
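The idea of turning equalization taps into a frequency response can be illustrated with a small DFT (a generic sketch, not the OneExpert's actual algorithm; tap conventions and normalization vary by implementation):

```python
import cmath
import math

def taps_to_frequency_response_db(taps, n_points=None):
    """Magnitude response (dB) implied by a vector of complex equalizer taps.

    The DFT of the tap vector approximates the response the pre-equalizer
    is applying, and hence (inverted) the channel it is compensating for.
    """
    n = n_points or len(taps)
    response_db = []
    for k in range(n):
        h = sum(t * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, t in enumerate(taps))
        response_db.append(20 * math.log10(abs(h)) if abs(h) > 0 else float("-inf"))
    return response_db

# An ideal channel: all energy in the main tap -> flat 0 dB response.
flat = taps_to_frequency_response_db([1.0] + [0.0] * 7)
print([round(v, 6) for v in flat])  # every bin is 0.0 dB
```

Energy spread into the other taps would show up here as ripple or tilt, which is the in-channel response the meter can report without any head end hardware.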

The third option takes a step back to the way we've always done things, using the sweep pulses generated from the ONX meter, but in this case carrying the telemetry over the DOCSIS channels. Instead of sending it as a telemetry carrier, we would packetize it into TCP/IP packets and transmit it over the DOCSIS carrier, and back in the other direction. So, sweep the way we've always known it, just using DOCSIS for the telemetry instead of an FSK carrier. The fourth option we're actually quite excited about: using LTE instead of DOCSIS. There are some limitations of DOCSIS we'll talk about later, but basically you have your OneExpert tethered to a wireless device, like the phone a technician carries, and do the telemetry over LTE instead of DOCSIS.
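"Packetizing the telemetry" in option three just means serializing the sweep messages into bytes that can ride any IP path, DOCSIS or LTE alike. A minimal sketch with a completely hypothetical wire format (none of these field names or sizes come from a real protocol):

```python
import struct

# Hypothetical layout: msg_type (1 byte), sweep point count (2 bytes),
# start time in ms (4 bytes), all network byte order with no padding.
MSG_FORMAT = "!BHI"

def pack_sweep_request(msg_type: int, point_count: int, start_ms: int) -> bytes:
    """Serialize a sweep-telemetry message for transport over TCP/IP."""
    return struct.pack(MSG_FORMAT, msg_type, point_count, start_ms)

def unpack_sweep_request(payload: bytes):
    """Recover the fields on the receiving side."""
    return struct.unpack(MSG_FORMAT, payload)

wire = pack_sweep_request(1, 128, 5000)
print(len(wire), unpack_sweep_request(wire))  # 7 (1, 128, 5000)
```

The appeal of this approach is that once the message is bytes on a socket, it no longer cares whether the underlying transport is a DOCSIS channel or an LTE tether.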

Remote Phy Devices in the Field

A fifth option is looking at generally how CableLabs had things drawn up, so I'll go through that in this slide. To lay the groundwork, this is what we saw before: we have the OneExpert field meter in the field, and we have our remote PHY device also in the field, talking to a CCAP, which then talks to a new hardware box in the hub, the NDR/NDF box. This box's job is to take Ethernet signals from the CCAP and convert them into the RF domain so that they can be communicated to an RF-based hardware box. The way this would work is that the ONX, just like before, would send out the exact same out-of-band upstream telemetry carrier saying, I'd like to sweep, please. The remote PHY unit would have an NDR session set up, so it would know where that carrier is. It would receive that carrier, convert it into IQ points, and then pipe it over IP, over the 10 gig Ethernet pipe, up to this NDR/NDF box, which would then convert it back into RF and send the FSK upstream telemetry carrier to a hardware-based SCU, the same way hardware-based sweep works.

The SCU would then communicate with the orchestration server. It would say, okay, these are where I want the sweep points, and this is when we're going to do the timing. It would transmit that back over telemetry, which the NDF/NDR would then convert into IQ samples to be piped over Ethernet, back to the field meter. So it's a very similar flow to what we had before, just using this NDF/NDR as a pass-through to send the telemetry carrier back to the hub to these hardware boxes in the RF domain. Then, at the predetermined time, XPERTrak, the orchestration server, would tell the remote PHY device, please start capturing spectrum. The sweep pulses would be transmitted by the ONX, they would be captured, and then the results sent back. So that was option five.
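At its core, the NDR/NDF pass-through is a conversion between an RF carrier and baseband IQ samples at each end of the Ethernet link. The toy snippet below illustrates just that conversion; a real NDF/NDR implementation per the CableLabs specs would add channel filtering, decimation, and packet framing, which are omitted here.

```python
import numpy as np

def rf_to_iq(rf: np.ndarray, fs: float, f_carrier: float) -> np.ndarray:
    """NDR direction: mix the received upstream RF telemetry carrier down
    to baseband IQ samples, ready to be packetized onto the 10GigE link."""
    t = np.arange(len(rf)) / fs
    return rf * np.exp(-2j * np.pi * f_carrier * t)

def iq_to_rf(iq: np.ndarray, fs: float, f_carrier: float) -> np.ndarray:
    """NDF direction: mix baseband IQ samples back up to an RF carrier so
    the hub-side hardware sees the original telemetry signal."""
    t = np.arange(len(iq)) / fs
    return np.real(iq * np.exp(2j * np.pi * f_carrier * t))

fs, fc = 1_000_000.0, 100_000.0     # illustrative sample rate and carrier
t = np.arange(1000) / fs
rf = np.cos(2 * np.pi * fc * t)     # stand-in for the FSK telemetry carrier
rf_restored = iq_to_rf(rf_to_iq(rf, fs, fc), fs, fc)
print(np.allclose(rf, rf_restored))  # the carrier survives the round trip
```

The point of the exercise: once the carrier is IQ samples, it can ride any IP transport between the node and the hub, which is what makes the pass-through architecture possible.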

The last option, option six, is very similar, although there are some problems and challenges with creating a new hardware-based box and requiring RF in the hub for a sweep solution. In this case, again, the exact same out-of-band upstream telemetry is sent to the remote PHY device, and NDF/NDR is set up to convert the RF into IQ samples, sent over 10 gig Ethernet to the CCAP. But now we have a dedicated telemetry API, replacing that RF loop in the hub, to talk to the orchestration server, which then sets up all the parameters and sends them back over the telemetry API, back over the 10 gig, out to the ONX again. And again, at the predetermined time, XPERTrak tells the remote PHY device to start collecting spectrum, your sweep pulses are sent across, and sweep happens just as expected.

This approach eliminates all of the hardware in the hub and is a completely virtual solution. So this is what it would look like. Now, what are the pros and cons of these different approaches? Quickly: injecting carriers and viewing them on a spectrum analyzer is certainly not full band, because you only have eight CWs, and it's not multi-user, so that one was pretty quickly eliminated. Using in-channel frequency response has several issues. One is that it's not fast, since it requires the DOCSIS modem within the instrument to range and register. Taking 30 to 60 seconds every time you connect and disconnect from the network was a showstopper. It's also not full band: the best case is capturing the frequency response for all of your bonded carriers, but in a degraded-plant situation that might be only one carrier, and you're missing everything out of band.

It also fails the always-works test, because it requires DOCSIS services to be up. In the event of an outage, you don't have the ability to use it to troubleshoot the plant. TCP/IP over DOCSIS likewise fails the fast test, because it requires DOCSIS services to be up, and for the same reason it fails the always-works test. TCP/IP over LTE we thought was a good approach initially. But going out and doing concept reviews with operators, primarily in North America, we heard that there are just too many rural areas where LTE coverage is inadequate. And in EMEA and APAC we heard largely that when you get down into access tunnels and things like that, there are areas where LTE doesn't work. So it fails the always-works test.
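The elimination logic for these first four options can be summarized as a small requirements matrix. The boolean scores below are our paraphrase of the evaluation just described, not a formal scoring.

```python
# Requirements each candidate sweep replacement had to meet
REQUIREMENTS = ["fast", "full_band", "multi_user", "always_works"]

# True/False scores paraphrasing the discussion above (illustrative only)
OPTIONS = {
    "cw_injection":         {"fast": True,  "full_band": False, "multi_user": False, "always_works": True},
    "in_channel_freq_resp": {"fast": False, "full_band": False, "multi_user": True,  "always_works": False},
    "tcp_ip_over_docsis":   {"fast": False, "full_band": True,  "multi_user": True,  "always_works": False},
    "tcp_ip_over_lte":      {"fast": True,  "full_band": True,  "multi_user": True,  "always_works": False},
}

# Each of these four options fails at least one requirement
failures = {name: [r for r in REQUIREMENTS if not scores[r]]
            for name, scores in OPTIONS.items()}
for name, failed in failures.items():
    print(f"{name}: fails {failed}")
```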

Remote Phy CCAP Interface

NDF/NDR telemetry: as you remember, there were two hardware boxes required for this approach, so pure NDR/NDF fails on the software-based requirement. The NDR/NDF direct API was the only one that met all of the requirements. This is what the architecture looks like that we have chosen specifically and that we are recommending for standardization. You have your ONX instrument out in the field, transmitting the same out-of-band FSK telemetry signals that it would for centralized access architecture sweep, talking through the RPD, down to the router, up to the CCAP, and then to this XPERTrak RCI, the remote PHY CCAP interface. What this is, is software that can run on a VM, co-located with the CCAP for latency reasons, but it can be anywhere. This software application running on a virtual machine core can be wherever the CCAP is, be it in a head end, a hub, or even somewhere off in a data center. It really aligns with the operators' desire to push the CCAP processing farther back toward, or into, the data centers.

This can support multiple RPDs. For scalability, if you keep growing the number of remote PHY devices out there, you would just spin up another instance of this on a virtual machine farm to take care of that. The result is a process that looks identical to CAA sweep to the tech: they do the same things to sweep, and they get the same results back in the same format. So it really takes care of some of the proliferation arguments, and it makes it a lot simpler for a technician to work on a node no matter what kind it is. That was a lot of detail on just one subject, but the point was to illustrate that replacing and improving existing test and measurement capabilities is not really straightforward, especially when you have multiple systems involved.
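The scale-out model described here is simple capacity planning: more RPDs means more RCI instances. A trivial sketch, with a made-up per-instance capacity, since actual sizing would come from the vendor:

```python
import math

def rci_instances_needed(num_rpds: int, rpds_per_instance: int = 500) -> int:
    """Number of RCI VM instances to spin up for a given RPD count.

    rpds_per_instance is an illustrative capacity figure, not a real spec.
    """
    if num_rpds <= 0:
        return 0
    return math.ceil(num_rpds / rpds_per_instance)

# As the RPD footprint grows, you just add instances on the VM farm
print(rci_instances_needed(400))   # 1
print(rci_instances_needed(1200))  # 3
```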

That was a lot to take in going through there. Jennifer, if we could jump into the second poll question, please.

Question about Remote Phy Rollout and relevant DAA architectures

Yeah, great. I'll go ahead and share that now. So it says which architectures are you planning to use for your initial DAA rollout? I'll give you guys a few seconds to do that. Also just a reminder that if you have any questions for the Q&A portion at the end to type it into the questions box on the right hand side of your screen and we'll save all those until the end. All right. I see them going still. I'll give you guys just a few more seconds. All right. Looks like it's about stopped so I'm going to close it now.


And results.

Answers to Remote Phy Rollout Question

Yeah, interesting. About what I expected. There are a lot of people looking at remote PHY for the initial rollout, which again is what I expected, because there are the most players in this space today, and it's generally the approach chosen by a lot of the incumbents. But I'm surprised, and I'm not surprised, to see 33% looking at multiple architectures right from the start. We fully expected a heavy remote PHY number to start. If I had asked a poll question about what you expect to have deployed a year from now, or a year after you start, then I would have expected 33%, but that's an even higher number than I thought right at the start.

Remote Phy Variants

It reinforces what we've heard: the remote PHY variants are not one size fits all. It's going to be a heterogeneous plant, a mix of all these different architectures, because they each have their strengths and benefits. So, good stuff, good stuff. Thanks. If we could go back to the slides, please. What is the current state of DAA implementation? Most operators are in lab trials today; I'd say probably 75% of them are in lab trials on components, testing individual pieces of the DAA solution rather than the end-to-end solution, although there are a few doing end-to-end. They're still a little bit earlier on. There are some very limited production deployments, and I probably should have put production deployments in quotes, because these were forced by various external reasons that required deploying quickly.

And there have been a lot of challenges with those, a lot of problems, a lot of growing pains. Perhaps some of these early production deployments went out sooner than they should have. We are seeing a steady stream of initial production release announcements coming out, so it looks like we are getting close to general commercial availability for a selection of different DAA architectures. We're definitely moving along, but I would say we're on day one of a 365-day project, if you want to look at it that way, of a multi-year deployment. We're still pretty early in the actual operationalization portion. So we've talked about drivers considerably, but what about alignment points? DAA rollouts are a pretty extensive task in and of themselves, but what else are these deployments aligning with? Again, as I said, Europe is primarily where we're seeing the 1.2 GHz frequency extensions, and we are seeing operators trying to be very conscious of aligning their DAA deployments with these 1.2 GHz extensions.

For them, they have to go out and touch almost everything in their plant for the 1.2 GHz expansions: not necessarily the diplexers, but the amplifier modules themselves, to be able to go up to the higher frequency ranges. So they're trying to align the DAA with that so that they're only touching those nodes once. Smaller service groups kind of goes without saying, because smaller service groups mean node splits, which DAA is an enabler for. But what I meant by this is that operators don't want to do these node splits twice. They can't just say, I'm going to stop doing node splits until DAA is available, but they are being very strategic about where they do the node splits, putting off as many as they can so that they can roll out DAA with those node splits instead of touching the nodes twice. There are various ways they can do that.

The last one is DOCSIS 3.1. DOCSIS 3.1 generally means that you are forklifting out head-end gear if you're going with the standard CAA types of architectures. So what some operators are doing is looking at regions where they absolutely have to do DOCSIS 3.1 for competitive reasons, whether they're going against one gig from Google Fiber or whether they just need the additional capacity, whatever the reason, and moving those to the front of the pack if they're looking at doing remote MAC-PHY or remote CCAP. Instead of replacing all of their head-end gear, what they're looking at doing is putting DOCSIS 3.1-capable remote MAC-PHY and remote CCAP units into those regions first, to minimize the amount of work they have to do in the head end. So they're coordinating their DOCSIS 3.1 rollouts with their DAA rollouts at the same time. That covers some of the drivers and alignment points and the current status of operators' rollouts. So what about test? We've talked about the gaps in test and the things that can be done to help with those gaps.

So where are we at? I can only speak for VIAVI, but from an upstream spectrum integration standpoint, in other words the ingress remediation use case with live spectrum analyzers, we have integration complete for multiple DAA vendors in production releases today, and we're adding more as we go. We're in various stages of engagement with most of the DAA vendors we don't have production releases with yet. And we actually have field trials in process for the integrated upstream sweep architecture that we showed before, so we're working that out in the field, in field trial mode. We showed it at ANGA, and we'll be showing it at SCTE Expo if anyone wants to come by and see it in action.

What have we learned? What are the takeaways from where we've been with this? One lesson learned to date is about strange bedfellows and some different collaborations than we've had in the past. Our first step into distributed access architectures was actually a little over three years ago, when we were approached by one of the thought leaders in this area who was developing a solution and pitching the concept to cable operators. The cable operators completely got the revolutionary way they were looking at breaking up some of the head-end components, where they were splitting things up, and how they were going to make it easier to scale to these greater node counts. But the almost universal objection they got from the cable operators was, how in the world am I going to maintain my plant?

So they actually approached us and said, hey, we are going to break the maintenance models that operators are using with your tools, and unless we can come up with a solution for helping the operators maintain their plant, we can't sell our solution. So it's very symbiotic; we need each other. Test and measurement was not their core competency, but having a maintenance solution was table stakes, and the model that traditional test and measurement vendors have been using was going to be broken by the removal of RF, so we needed their help to come up with the best possible solution to help the operator. It's been a great collaboration between the T&M vendors and the distributed access architecture vendors to come up with the best possible solutions for the operators.

Another lesson learned is about standardization: just because something is standardized, don't assume that it's standard. NDF and NDR requirements have been standardized, and CableLabs did a great job of predicting what would be needed to shuttle around these telemetry carriers, both for set-tops and for test and measurement. But the protocol implementation is not specified; how it's actually mechanized is not, and those details tend to be a little different from DAA vendor to DAA vendor. So again, we're working with them, trying to come up with a common denominator, and I think we're all working very well together to come up with something that will meet the cable operators' need: even for things that aren't standardized, a de facto industry standard for how this is done, so it's interchangeable between test and measurement vendors and DAA vendors.

But again, the takeaway is, just because the standard is written doesn't mean that everything is going to flow smoothly and be completely interchangeable without some coordination. I touched on this before: this is definitely a long-term prospect. We're looking at a 10-plus-year DAA rollout, with lots of concerns about proliferation and uncertainty. We're finding out more every day that everything is not 100% figured out yet. The industry has gone very, very fast with rolling out distributed access architectures, getting them into the labs, getting hardware built, getting the software behind them built, and potentially we've gotten a bit ahead of ourselves on some things, like the plant maintenance activities and some of the gotchas we found with NDF/NDR, as an example.

You come to expect that any time you're rolling out a new technology extremely fast. We don't have everything figured out, but I think we're doing a great job of reacting as things come up. Last slide: what's next, and how to prepare. Hopefully, we've given you a few things to think about regarding plant maintenance and how it's going to change before and after your transition. If you think about these points from the start, when you start thinking about which DAA system you're going to purchase as well as how you're going to maintain it, then I think you can be some steps ahead. When choosing a DAA system, it seems like an obvious question to ask: hey, does it support the CableLabs standards? You might say that's kind of a silly question, because in order to be certified, it has to. But certification is still much more about passing packets than it is about the maintenance pieces.

So it's a question worth asking: if they don't support the maintenance pieces, do they have a road map to do it in the future, or are those considered optional in their opinion? Are the candidate solutions open and accessible by third parties, or are they proprietary? Generally this is not an issue, but some systems tend to be quite closed; they rely on proprietary software and don't really interface well with external systems, so they tend to operate a bit in a vacuum. It's a question worth asking. Then there's your overall maintenance tool strategy. Think about the critical maintenance capabilities that you have today. What are the tasks? What are the things you need to accomplish with plant maintenance, and are you going to have the ability to meet those goals in the future, whether by doing things the way you've always done them or by doing things differently? Do you have a plan? Do you have a path to sustain your core test and measurement capabilities? A sub-question might be: are your deployed instruments going to be supported? And if they're not, that's fine; you just need to consider the re-buy of your instruments as part of your overall DAA deployment plan.

Make sure you don't miss that. Think through the proliferation, so play long ball here. Think not just about your initial rollout and how you're going to support that from a maintenance standpoint, but about when you have that heterogeneous plant with multiple architectures and multiple vendors: how are you going to handle testing, monitoring, and troubleshooting your centralized access architecture nodes and the different variants of DAA nodes at the same time? Are you going to go with a strategy of multiple discrete point solutions? Are you going to go with a single platform? How are you going to manage these? These are all questions to think through to help you plan ahead and have the smoothest transition possible. To quote one of our great founders, Benjamin Franklin: "By failing to prepare, you are preparing to fail." Hopefully, nobody on this call is in that boat today, or will be in that boat after we've given you some things to think about moving forward.

Looking for more Remote Phy resources? Check out www.viavisolutions.com/remote-phy