Operator: Good day, and welcome to Ciena Corporation's fiscal first quarter 2026 financial results conference call. All participants will be in listen-only mode. Please note this event is being recorded. I would now like to turn the conference over to Gregg Lampf, Vice President of Investor Relations. Please go ahead.
Gregg Lampf: Thank you. Good morning, and welcome to Ciena Corporation's 2026 fiscal first quarter conference call. On the call today is Gary Smith, President and CEO, and Mark Graff, CFO. Scott McFeely, Executive Adviser, is also with us for Q&A. In addition to this call and the press release, we have posted to the investors section of our website an accompanying investor presentation that reflects this discussion as well as certain highlighted items from the quarter. Our comments today speak to our recent performance, our view on current market dynamics and drivers of our business, as well as a discussion of our financial outlook. Today's discussion includes certain adjusted or non-GAAP measures of Ciena Corporation's results of operations. A reconciliation of these non-GAAP measures to our GAAP results is included in today's press release. Before turning the call over to Gary, I remind you that during this call, we will be making certain forward-looking statements. Such statements, including our quarterly and annual guidance, commentary on market dynamics, and the discussion of our opportunities and strategy, are based on current expectations, forecasts, and assumptions regarding the company and its markets, which include risks and uncertainties that could cause actual results to differ materially from the statements discussed today. Assumptions relating to our outlook, whether mentioned on this call or included in the investor presentation that we posted earlier today, are an important part of such forward-looking statements and we encourage you to consider them. Forward-looking statements should also be viewed in the context of the risk factors detailed in our most recent 10-Ks and our forthcoming 10-Q. Ciena Corporation assumes no obligation to update the information discussed in this conference call whether as a result of new information, future events, or otherwise. 
As always, to allow for as much Q&A as possible today, we ask that you limit yourselves to one question and one follow-up. With that, I will turn the call over to Gary.
Gary Smith: Thanks, Gregg, and good morning, everyone. Today, we reported strong fiscal first quarter financial performance. We delivered revenue of $1.43 billion in the quarter, our highest ever and at the top end of our guidance, reflecting strong execution across the business. Demand is incredibly strong with exceptional order activity in the quarter. This, along with long-term planning conversations with customers, gives us confidence in the durability of demand and our ability to drive growth as we move through the year and into 2027 and beyond. Adjusted gross margin came in at 44.7%, which was ahead of expectations, and we continued to drive increased profitability, illustrated in part by our adjusted earnings per share of $1.35, which is more than double our EPS in Q1 of last year. These record results reflect Ciena Corporation's market leadership and reinforce our role as a critical provider of the high-speed optical systems and interconnects that enable AI workloads to scale and to be monetized. In fact, we are taking meaningful share of the increase in AI-driven connectivity spend as customers trust our technology leadership, deep collaboration, and proven execution. To this end, we believe 2025 will ultimately stand out as one of our strongest years of market share gains, and we believe it will be even stronger in 2026. With our recent inclusion in the S&P 500, we may have new listeners on the call, so allow me to begin with a brief summary of our business. At the highest level, Ciena Corporation is the global leader in high-speed connectivity. We build solutions that move enormous amounts of data across cities, data center campuses, countries, and oceans, quickly, reliably, and at massive scale. 
Through industry-leading optical systems and interconnect solutions along with automation software and services, we power the world's most advanced networks, helping service providers, cloud companies, hyperscalers, governments, and enterprises meet explosive connectivity demands, especially in an increasingly AI-driven world. Our foundational business has always been to address connectivity needs in the wide area network, or WAN, spanning subsea, long-haul, metro, and data center interconnect, or DCI. We remain the undisputed global leader in this domain. Today, much of this business is driven by the continued adoption of cloud services across our global customer base and the network infrastructure required to support it. It is also increasingly fueled by the rise of large-scale AI data centers that need to be interconnected with DCI solutions linking data centers across campuses, regions, and continents. Additionally, service providers around the world have begun reinvesting in their optical transport infrastructure alongside autonomous networking capabilities, both to support surging AI-driven traffic growth across their networks and to improve operating efficiencies. And service provider and cloud provider customers are increasingly working together to deliver connectivity through managed optical fiber networks, or MOFN, as they navigate regulatory requirements and capacity needs in the U.S. and in other new and emerging geographies around the world. By way of example, our orders in India were up 40% year over year, reflecting ongoing high demand specifically for MOFN in that country. Together, we view these as structural multi-year demand drivers that reinforce the critical need to serve WAN connectivity requirements, fueling both our growth and continued momentum.
We expect revenue from the MOFN application will continue to be an important contributor to overall service provider growth going forward, and we are uniquely well positioned to further strengthen our leadership in high-speed WAN connectivity for service providers, cloud providers, and the growing group of neoscalers, from whom we saw increased momentum in the quarter for both direct and MOFN-related design wins. In parallel to this, we are focused on the significant expansion of our addressable market opportunities in and around the data center. It is now well understood that cloud providers are investing heavily in data centers to deliver on both the current and future promises of AI. In just the last few weeks, we have seen announcements from the four largest global hyperscalers that outlined a step-function increase in their 2026 CapEx to more than $600 billion in aggregate, driven by infrastructure needs related to AI training and inference workloads at massive scale. These build-outs involve several areas of opportunity for Ciena Corporation, not only in the WAN, but increasingly in and around the data center, including scale across, scale out, scale up, and our unique data center out-of-band management solution, or DCOM. I will start with scale across, an application, supported in part by our interconnects portfolio, that is emerging as AI data centers grow in size and begin to hit power and space limitations. To overcome these constraints, customers are distributing compute across multiple sites and using high-performance optical networks to interconnect them, effectively creating one single AI training environment that operates across distance.
We believe that we are in the very early stages of this wave of opportunity, and we are already experiencing extraordinary demand, with three hyperscalers choosing to use our optical solutions for their training applications across distance, which we have talked to you about in recent quarters. And all three hyperscalers are significantly ramping, including multiple additional orders from the first hyperscaler we announced in Q3 2025. We are addressing this demand for scale across solutions with our RLS platform, the de facto industry line-system standard for cloud providers, as well as our 800ZR pluggable optics. To underscore this, we realized a second consecutive record quarter for RLS shipments and revenue. We expect to expand our role in scale across applications with the introduction of our new RLS HyperRail solution. HyperRail delivers an order-of-magnitude increase in fiber density within existing rack footprints, helping customers scale traffic while reducing, and in some cases avoiding, the costs and complexity associated with adding substantial numbers of amplifier huts. The solution, developed in close collaboration with our hyperscaler and service provider customers, represents another inflection point for Ciena Corporation, and we expect to be first to market again. In fact, we will be demoing the first prototype of our HyperRail system at the OFC trade show in a few weeks' time. We expect this solution to begin standardization in 2026 and to ramp in 2027, allowing us to capture share and incremental value as distributed AI training expands across regional clusters and moves to greater distances. In addition to scale across, we see meaningful opportunities inside the data center, including scale-out connectivity between racks and scale-up connectivity within racks. As we know, the physics of copper inside the data center is reaching its limit.
While there will be a place for copper solutions with shorter distance scale-up interconnects, network architectures will include more optical co-packaged interconnects, and over time, as data rates and bandwidth requirements continue to increase, coherent optical connections will overtake IMDD ones for shorter reaches to address growing capacity volumes inside the data center. And as the world's leading high-speed connectivity company, we are investing meaningfully to intersect these important use cases. We continue to demonstrate progress toward our in-and-around-the-data-center growth objectives, and our expanding interconnect portfolio, including ZR and ZR+ pluggables and optical components, is well positioned to address the rising power and space constraints associated with those evolving scale-up and scale-out architectures. We have just reached an important milestone with our first product introduction following the Nubis acquisition last fall, which addresses scale-out and scale-up needs. Last week, we announced the Vesta 206.4T optical engine, which is the industry's first high-density, low-power, open-ecosystem pluggable CPO solution. Samples of the Vesta product will be available in calendar Q2 2026, and we are actively discussing Vesta, as you would expect, with our cloud provider customers and partners, and we are excited to be showcasing it at OFC again in a few weeks' time. For scale-up opportunities inside the rack, where XPUs are getting faster and driving heat and power concerns, we are advancing the Nitro Linear Redriver technology, also from our Nubis acquisition. We believe this is a critical element to active copper cabling solutions, which extend the distance that signals can travel and reduce power by up to 80% versus AEC-type solutions. We also expect samples of the Nitro Redriver to be available in calendar Q2 2026. 
Finally, our data center out-of-band management, or DCOM, solution continues to represent another significant opportunity inside the data center. Leveraging our XGS-PON and routing and switching platforms, DCOM was initially designed with Meta to meet hyperscale provisioning and configuration requirements. We continue to work with them and are engaged in technical discussions with two other major global hyperscalers. Let me summarize by emphasizing that demand in Q1 2026 was unprecedented, reflected in very strong order intake and a meaningfully higher backlog. We executed well and demonstrated strong performance on both top and bottom lines. This exceptional demand was broad-based across service providers, hyperscalers, and an expanding set of neoscalers. Opportunity continues to build in waves, from our traditional and expanding WAN business to multiple applications in and around the data center. Furthermore, to monetize AI for both training and inference workloads, the latter of which represents another significant growth vector still in its infancy, the foundational requirement is again high-speed connectivity. These dynamics, combined with our deep collaborative customer relationships that improve our long-term visibility, plus our continued focus on execution, give us increased confidence in multiple years of strong growth and profitability ahead. I will now turn the call over to Mark to cover our financial performance and guidance in more detail. Mark?
Mark Graff: Thank you, Gary, and thanks everybody for joining the call this morning. As Gary noted, demand remains robust and has been, in fact, increasing. We are focusing our resources not only to strengthen our financial results, but also to secure near- and long-term supply and manufacturing capacity to deliver for both our customers and our owners. The results delivered in Q1 are a testament to the progress we are making and will continue to make. With that, I would like to update progress against our financial priorities previously discussed. We continue to make progress toward our next milestone of 45% gross margin, as witnessed by our 44.7% gross margin performance in Q1. Q1 results benefited from product mix, inclusive of contributions from incremental demand for capacity infills, the execution of cost reductions, and early progress on advancing the value exchange with our customers. Longer term, an improving price environment, new product inflections like HyperRail, and focused cost optimization all provide opportunity to deliver improved gross margins. Our balance sheet continues to be a source of strength, with working capital improving, driven by cash from operations yielding $228 million in Q1, a three-day decrease in our cash conversion cycle, and inventory turns growing to 3.2 times. With respect to capital allocation, we are taking a balanced, disciplined approach, prioritizing R&D to advance our technology leadership in the fastest growing segments of the market and to drive product velocity, all while holding OpEx levels approximately flat to 2025, delivering significant operating leverage. We are investing our CapEx to expand capacity, scale output, and meet rapidly growing demand. In Q1, capital expenditures were $74 million, inclusive of the accelerated capacity investments. For context, this is approximately two to three times our average CapEx over the last twelve quarters.
Let me take a moment to comment on industry supply and its impact on Ciena Corporation. As you have heard from many others in the industry over the last few weeks, the supply landscape remains challenging. To be blunt, our revenue in the first quarter would have been higher but for these constraints. Our close relationships with customers give us early visibility into their demand and our need to expand capacity to address it. We have been working with partners to scale by way of two key initiatives. First, we continue to partner with contract manufacturers with respect to their manufacturing capacity and output expansion, which is yielding strong results. Second, we are deeply engaged with component vendors, which is where more of the industry challenges exist, to secure and expand supply, including through responsible long-term purchase commitments. As shown by our Q1 results, we are navigating the supply environment well and are investing to expand capacity. However, we expect demand will continue to outstrip supply, at least for the next several quarters. Turning to Q1, as Gary noted, revenue reached $1.43 billion, up 33% year over year and a quarterly record for the company. Our optical revenue was up over 40% year over year, led by the Waveserver and RLS product lines, each of which was up over 80% from the year-ago period. We had three greater-than-10% customers, including two global cloud providers and one Tier 1 North American service provider with strong MOFN activity. Regarding backlog, as Gary discussed, our order intake has been incredibly strong over the past ninety days, leading to a new record by a significant margin. Given the extraordinary nature of the demand, we want to share with you that backlog increased by approximately $2 billion this quarter to exit Q1 at approximately $7 billion. In fact, nearly all new orders we are taking now will be for fulfillment in fiscal 2027, providing ongoing confidence in our outlook.
As a result, we expect backlog to continue to grow throughout the year. Rounding out Q1, adjusted operating expense met expectations, leading to an adjusted operating margin of 17.9%, 190 basis points above the midpoint of our December guide. We achieved adjusted net income of $197 million in the quarter, which delivered adjusted EPS of $1.35, more than double a year ago. We exited the quarter with a cash balance of $1.4 billion after purchasing approximately 400,000 shares for $81 million under the current repurchase authorization. Before I discuss our Q2 and updated 2026 outlook, I would like to make a few comments on tariffs. As you know, on February 20, the Supreme Court struck down the IEEPA tariffs originally implemented in March 2025. As previously stated, these tariffs have been immaterial to our financial results. While we have noted this ruling as a subsequent event in our forthcoming 10-Q, it has not had any impact on our reported results. The administration has announced a new global replacement tariff under a separate legal authority, with final rates still pending. Based on current information, we believe these developments will have an immaterial effect on our business. Obviously, we are monitoring new developments and working closely with customers and suppliers to assess any future impacts. Now, with respect to our view for the remainder of the fiscal year and Q2, given the current dynamics, we now expect to deliver revenue for fiscal 2026 between $5.9 billion and $6.3 billion, essentially raising our year-over-year growth rate from 24% to 28% at the midpoint of the range. We believe this range appropriately balances strong market demand with ongoing industry supply conditions. Given our Q1 results and expectations for Q2, we expect our 2026 gross margin to be between 43.5% and 44.5%, one point above our December guide and a 130-basis-point improvement over 2025.
With the first half exceeding our expectations and the supply challenges we are actively managing, we now expect first-half and second-half gross margins to be roughly equivalent. And we now expect adjusted operating expense of approximately $1.52 billion to $1.53 billion, resulting in adjusted operating margin of 17.5% to 19.5%. This small difference in OpEx is really due to the stronger demand environment. In Q2 2026, we expect to deliver revenue in the range of $1.5 billion, plus or minus $50 million; adjusted gross margins between 43.5% and 44.5%; and adjusted operating expense of approximately $375 million to $390 million, which will result in an adjusted operating margin of 17.5% to 18.5%. To conclude, we had a strong start to fiscal 2026. Demand for our technology is robust and durable. We see multiple waves of opportunity ahead, from continued AI training to expanding inference workloads, both domestically and internationally, to new HyperRail solutions and faster interconnects inside the data center as higher-speed requirements come online. We continue to offer market-leading, innovative technology that uniquely enables AI both in the WAN and in and around the data center, and we continue to thoughtfully allocate shareholder capital to deliver value to both our customers and our owners. Given all these opportunities, we are confident our momentum will extend beyond 2026. With that, we will now take questions from the sell-side analysts.
Operator: We will now begin the question-and-answer session. To ask a question, you may press star then 1 on your touchtone phone. If you are using a speakerphone, please pick up your handset before pressing the keys. If at any time your question has been addressed and you would like to withdraw your question, please press star then 2. Our first question comes from Amit Daryanani with Evercore ISI. Please go ahead.
Amit Daryanani: Yep. Thanks for taking my question. I guess I have two from my side. First, just on the gross margin side, really impressive performance in the first half of the year despite some of the supply chain issues folks are having, and I think mix was slightly negative. Just spend some time on what the upside levers on gross margins are that are helping you out, and are you seeing a shift in pricing at this point whatsoever? That would be really helpful to understand.
Mark Graff: Sure. Hey, Amit. It is Mark. Yeah, I agree. We had a very strong performance. We are quite happy with the 44.7% that we printed this morning, and it is really driven by a couple of things. We saw customers requiring increased capacity, both hyperscalers and service providers, that increased their infill rates, and so we got quite a bit of tailwind from that. Secondly, I think the engineering team has done a wonderful job of engineering cost reductions into our products, which is really separate from the supply chain activities that are helping us increase revenue. So between those two things, I think we are really seeing some good tailwinds. Moving forward, I think we have got a few more levers that we are going to start working through. You mentioned price increases. One of the things that we are trying to do is really balance the price increases with our share position in the market, and I think what you have seen is that we have been able to increase our gross margin as well as increase our share, so I think we are doing a really good job of balancing those two things. I think moving forward, you will see even more aggressive cost reductions, and then the price increases that we talked about at the end of last year really will not fully kick in until the second half of the year. So I think that creates additional tailwinds for us. So all in all, again, I think we are making really good progress towards that 45% waypoint, and you should see that throughout the year.
Amit Daryanani: Got it. And then if I would just follow up, how do you see the pluggables market, especially with 800-gig ramping up through fiscal 2026 and 2027? And if you could just maybe compare and contrast a bit about your positioning in 400 versus 800, that will be helpful as you go into the next cycle. Thank you.
Scott McFeely: Yeah, I mean, we have seen pluggable revenue increase period over period. We have talked in the past about our interconnect business: it doubled from 2024 to 2025, so that doubling is in the rearview mirror, and then we talked about it as a major portion of our inside-and-around-the-data-center opportunity, with our aspiration to triple it this year, and we are well on track for that. So we do see significant growth. From a competitive perspective, as we have talked about in the past, through choices that we made to focus early introduction of the technology in the last generation more on our systems business than on our pluggable business, because that was a bigger opportunity, we were not necessarily first movers in that market, so that probably cost us some share and probably cost us, actually, frankly, some margin dollars. That is not the case in 800-gig. We are first to market there, and 800-gig is moving along quite well. Now, I will say, though, and I just want to make sure people understand this: we are talking about capacity adds across the portfolio. It is not just pluggables. Mark mentioned the growth that we are seeing on Waveserver. If you want to be the strategic supplier to the web scalers, they have networks that span campuses, metros, national networks, submarine networks. You have to have all the things in the toolkit, and we are seeing increases across all of those components, systems business and pluggables.
Operator: And the next question comes from Simon Leopold with Raymond James. Please go ahead.
Jeff Cocci: Yeah, thanks, guys. Jeff Cocci in for Simon. So just a couple of housekeeping items. Can you give RPO for the quarter and the percentage of the $7 billion backlog that is product? And then, while you are doing that, maybe you could just give the percentage of sales that are ZR pluggables for the quarter. And then I guess my second follow-up would be what percentage of the telco revenue is now MOFN, and how did traditional telco grow? Thank you.
Mark Graff: Yeah, there were quite a few questions in there, Jeff, so let me start. If you think about the backlog, I think right now roughly 80% is products and software, and the rest I would think about as services. We are not going to really disclose the percent of pluggable revenue in the quarter. As Scott said, we expect that to triple year on year, and we are on track to deliver the 800 pluggable ramp that we talked about. Sorry, I lost track of all your questions.
Scott McFeely: What else did you have? What—
Mark Graff: RPO and then percentage of telco that is MOFN.
Gary Smith: I will take percentage on the MOFN thing for you. By the way, I would say the interconnect is somewhat of a proxy at this stage for pluggables to some extent, so we clearly disclose all of that. I would say you are looking at about 10% to 15% of our service provider business being MOFN. We have visibility to a fair amount of it, but not all of it. We partner with service providers on identifying some of these particular build-outs, and we are seeing a good steady ramp in that. You are seeing service provider growth; I think in the first quarter, it is like 22%. Of that growth rate, MOFN is a big contributor to it. But I think overall, it is going to be about 10% to 15% of our total service provider business.
Mark Graff: And then RPO, if you think about RPO as a percent of the orders that we took in Q1, Jeff, you should be thinking roughly 60%.
Jeff Cocci: Great. Thanks, guys.
Operator: And the next question comes from Ruben Roy with Stifel. Please go ahead.
Sahid Singh: Hey, guys. This is Sahid Singh on for Ruben Roy. I guess just digging into and following up on the last set of questions around backlog, you guys have gone from $5 billion last quarter to $7 billion this quarter. I think you just said 80% of the $7 billion is products and software. And so if I just apply that 80% to the 5, that is implying $1.6 billion in product and software growth, which, you know, loose math and loose assumptions there. So then I am thinking through, okay, last quarter you said Meta expanded their DCOM engagement, the RLS customer expanded, there are a couple more hyperscalers added on, and we are talking hundreds of millions per opportunity as you have mentioned. So could you just help us bridge the gap and perhaps provide some color as to what the incremental here is relative to the expansions that were announced last quarter or the new hyperscalers that were announced?
Gary Smith: Yeah, I would say that first of all, it is very broad demand that we are seeing. It is very strong on service providers, submarine, MOFN, and obviously hyperscalers. And I would also say hyperscalers in their various applications, because I think the point to note is we have very broad relationships with most of them now, across multiple applications—submarine cable, long-haul, metro, in and around the data center, and with things like DCOM inside the data center as well. So basically, if you look at all of those from an order point of view, they were all up and to the right. And I think that is sort of systemic around the drive of the traffic outside the data center now. You are seeing growth in cloud, general cloud. You are seeing inference. You are seeing this new market of training now emerge. As I said in my comments, we have now got three hyperscalers deploying us for training, and we are at the very, very early stages of that. So you put all of that together and that yields the incredible demand that we saw in Q1. And as Mark said, despite the fact that we are ramping our capacity for delivery as seen in our results, demand is going to continue, we believe, to outstrip our ability to supply, and that is going to continue for, we believe, this year. And so we are going to end up with a larger backlog than we have right now as we turn the year, despite the fact that we are ramping our capacity strongly throughout the year and obviously through 2027 and 2028.
Mark Graff: Yeah, and the one thing I just maybe want to clarify a little bit for you: that 80% is across the entire $7 billion of backlog, not just the $2 billion increment. So you can look through where we ended Q4 to where we are ending Q1 and back into, I think, the information you need.
Sahid Singh: Yeah, I think I got you there. The $2 billion was simply coming from the incremental, as you are saying, but I assumed the 80% held through last quarter as well, which may not be the case, is what I am understanding. Okay. On the follow-up, maybe just touching on what Amit had asked at the start of the call around pricing: how much of the pricing increase is currently baked into backlog relative to volume?
Mark Graff: Yeah, we are probably not going to give you that number specifically. As we disclosed in Q4, the pricing increases that we talked about were really on the new orders, and because we had such a big backlog at the time, most of that was going to be seen in the second half. So you should expect those price increases to show up in Q3 and Q4.
Operator: And the next question comes from Meta Marshall with Morgan Stanley. Please go ahead.
Meta Marshall: Great, thanks for taking the question, and congrats on the quarter. Maybe just on the impressive operating leverage you guys are getting out of the business: where are you finding those levers to keep OpEx flat, as I assume bonus plans need to reset and there are obviously a lot of projects that you are working on with various hyperscalers? And then second, did you mention whether there were any 10% customers within the quarter? That is just a small nit, thanks.
Mark Graff: Yes, Meta. So on OpEx, the first part of your question: we were able to hold OpEx flat year on year for three reasons. The first is, if you recall, last year each quarter it seemed that we were increasing our OpEx guidance to take into account our increasing performance. We basically reset that, and we were able to scoop up that increment and reinvest it back into the business. So that is one. Two is, you will recall, we announced a small RIF, somewhere between 4% and 5% of the population. We have been able to harvest those savings and reinvest them into the business. And then you will recall that we ceased further investment in our 25-gig PON activity. So those three things, we were able to scoop those up, reinvest them back into the business, and that met our needs year on year; nominally, that is how we got to flat OpEx and the, to be honest, quite impressive operating leverage. On the 10% customers, we had three: two hyperscalers and one Tier 1 North America service provider that is pretty exposed to MOFN.
Meta Marshall: Great. Thank you.
Operator: And the next question comes from Karl Ackerman with BNP Paribas. Please go ahead.
Karl Ackerman: Yes, thank you. I have two, Mark. I will ask both of them for you. Could you speak to the duration of this accelerated CapEx spending, which seems driven by enhanced visibility you now see extending over a multiyear period? And for my follow-up, you also spoke about more aggressive cost reductions to support margins. I am curious if you could expand on that and whether that relates primarily to further outsourcing to the EMS partners or if there are other things we should consider. Thank you.
Mark Graff: Yeah, so let me take those, and on the second one, maybe Scott can add some more color here. On the duration of CapEx, you will remember in our December call we talked about doubling our CapEx year on year, and within that doubling of CapEx, we were increasing our productive CapEx by 50%. So really think about working with our contract manufacturers to expand their manufacturing capacity. Now, obviously, that has some lead time, and so we are investing through the year, and we expect that increase in capacity to start showing up towards the end of the year. And the intent was really to set up a 2027 plan for us. I am not going to go into 2027 yet, but the intent is to invest in 2026 and to realize the benefits in 2027. On the cost reductions, I would not say that it is more outsourcing to EMS folks. I think our engineering and product teams are really looking at the cost components of the products and looking at different materials, different solutions, and trying to drive a lot of those costs out. I would also remind you that we have got the most vertically integrated supply chain, and that drives a lot of cost advantage for us but, I would say right now, more importantly, supply stability. And so between those two things, as I said, we are starting to see the ability to increase revenue as well as bring in a little better cost profile. Scott, if you have got something to add?
Scott McFeely: Yeah, I think on the cost reduction piece, I think of it as three levers. One is we are driving a lot more volume through the machine, and we do have some fixed costs; you get a tailwind there. That is the first one to get your mind around. On the engineering aspects or design aspects that Mark talked about, think of it as a couple of things. Number one is where you do not change the function of a product, but you are going after the cost base of it, and that can be through more vertical integration, that could be through substituting parts for different parts, that could be opening up your supply chain to multiple other sources, and we are pushing on all of those levers, by the way. The other piece of the design stuff is as you go from generation to generation, where you are changing the function of the products, you get back to those price-value conversations with customers and, you know, sticking more dollars into our pocket as we do those transitions, and those are going on all the time to some degree. The third piece—and we did not talk a lot about it—it is not all on the lines that you said where we are depending more on the EMSs, but we are constantly looking at that supply chain design, the whole ecosystem design, and trying to optimize that to get cost out of it as well. So it is not the engineering design, but the supply chain design. And we are pushing on all of those, and that is why you are seeing the results you are getting. The team is doing a good job executing on those, and there is more in the future.
Karl Ackerman: Very clear. Thank you.
Operator: And the next question comes from George Notter with Wolfe Research. Please go ahead.
George Notter: Hi, guys. Thanks very much. Was curious about your comments about the progress with the value exchange with customers. Obviously, you are raising pricing. I know it is going to come through later in the year as you eat down the backlog. But just stepping back and thinking about the space, you have got higher memory costs, you have got component suppliers that are being really aggressive on price—they are repricing their own backlogs. It just seems like it is an environment you guys could be more aggressive on price and even perhaps reprice your own backlog. So I am just curious, why not be more aggressive here given the supply-demand dynamics and what is going on in the supply chain? Thanks a lot.
Gary Smith: Yeah, George, this is Gary. I think you know we have talked a lot about the good things that we are doing to manage our margins and the rest of it, including the value exchange, but there is a balance to it all, and that is what we are trying to strike as we go through this. You are seeing it translate into improved financial performance in all dimensions: market share gains, revenue, gross margin improvement, and operating leverage. We are seeing that, and it is a confluence of things. Scott talked about some of the cost reduction work. Mark talked about the value exchange. All of those things are happening and are being woven into the business over time. As you know, we take a very long-term view of how we run the business, and I think we see this as a multiyear opportunity for us, and we will strike a balance between those supply chain challenges, because you have got a lot of shortages going on right now as well, which we are navigating through pretty well. So, it is the confluence of those things that results in the approach that we are taking.
Mark Graff: Gary said it well. Pricing is a lever, George, but we are also looking at can we improve cash conversion, can we get better terms and conditions, can we get longer-term purchasing commits with maybe some more non-cancelable, less risky terms as we satisfy this quite large backlog. We are not taking pricing off the table, so we should say that. And you are right, we are seeing some cost increases coming from the supply chain, and we are in early days of having those conversations with customers, so I do not want to get too far into that. But I think we are trying to pull on all the levers and overall, I am pretty pleased with the progress we are making so far across the board.
George Notter: Got it. Super. Anything new competitively? Obviously, the competitive environment is, I guess, more benign than it has been in recent years. You have had some consolidation among competitors. Anything new in terms of their behavior on pricing or terms or just general competitiveness in the space? Thanks.
Gary Smith: On the WAN business, I think you articulate the environment well there. We are fortunate because we have got such close relationships with the hyperscalers to get out in front, as Mark said, around the capacity and component supply to that, which is showing up in our growth rate. We were able to stay out ahead of that, and we took market share in 2025, and I think we will take even more market share in 2026. This is all really now about—we are on our next generation of line systems with the HyperRail; we are on our next generation of modem technologies in their various forms. So, our competitive position continues to improve there. Obviously, as you get in and around the data center, particularly inside of it, it is a different set of competitors. It is a different set of dynamics. What we bring to the table there is our leading high-speed technology and our systems knowledge, frankly, and translating that into the component purchase we believe is meaningful, and we have got a lot of the hyperscalers leaning in with us on that. But it is a different ecosystem and environment. We have got new and different competitors there, some of which are very large. So we do not underestimate that, but we think we are coming from a position of strength and uniqueness around our optical technology, as you are really looking at the opticalization—if that is a word—of the data center, as the electrical stuff runs out of steam from a physics point of view. And we are starting to pick off some of those applications where that is most pronounced. DCOM, I think, is a decent example of that. We have got the new technology that we announced in market from the Nubis acquisition. So that is going to be a different set of competitors for us.
Operator: And the next question comes from Tal Liani with Bank of America. Please go ahead.
Operator: You there? You might be on mute.
Operator: Kyle?
Operator: Alright, we will move on to the next question. Tim Long with Barclays. Please go ahead.
Alyssa Shreves: Hi, this is Alyssa Shreves on for Tim. I just had two quick ones. Were you seeing any dynamic in the quarter with the order growth? Was there any trend in customers trying to get ahead of pricing actions, or was it really just underlying demand kind of driving the growth there? And then I had a follow-up.
Gary Smith: Pure underlying demand across the board. Not driven by pricing thresholds or anything. There is so much demand for capacity out there across the board. Service providers have not invested in their optical infrastructure for about five years, as they have been so preoccupied with 5G, etc., so there has been an underinvestment in the optical infrastructure in the world, and you are seeing very strong growth from the service providers and MOFN activity as well. And then you have got hyperscalers with the scale-across training clustering, a new market for optical that is really ramping pretty significantly. And then you have got the inside-the-data-center optical moves as well. So across the board, Alyssa.
Alyssa Shreves: Okay, that is helpful. And then just a quick one on APAC. The orders for India in the quarter were really strong. Should we expect the region to be driving APAC this year? Just given that last year saw more mediocre growth in the region, and it was down the prior year, should we expect a step change now with India?
Gary Smith: I think that India will probably be very, very strong and robust this year, largely driven by MOFN. Obviously, it is the fastest growing Internet market in the world. All of the hyperscalers are leaning in and playing there, and because of the regulatory environment, they have to really partner with local folks and service providers to provision their optical networks. So I think that is going to be very sustainable. We are seeing an uptick in the amount of projects there. I would say overall, we are going to see good growth out of Asia Pacific this year in a number of areas, including Japan. That is largely driven by two things. One, my point earlier about service providers have largely underinvested in optical in the last five years, so that is beginning to play a part in it. The second part of it is the increase in MOFN activity in the whole Asia Pacific area, and submarine cable being a part of that too.
Alyssa Shreves: Great. Thank you so much.
Scott McFeely: Thank you.
Operator: And the next question comes from Tal Liani with Bank of America. Please go ahead.
Tal Liani: This time, you hear me?
Operator: Yes. How are you doing?
Tal Liani: I got so excited, I broke my headset. I have a question about the risk of early ordering. What we are seeing in every cycle is that when there are constraints, customers start ordering much, much earlier, and that creates big increases in backlog and then declines. How can you manage it? I am sure you probably do not know if there is early ordering, or to what extent, but is there any way you can manage early ordering through pricing the way Cisco does it, or any other way, in order to mitigate the phenomenon? So you do not have what we had in 2022 or 2023, whenever we had the previous cycle.
Gary Smith: Tal, that is a good question. First of all, I think having suffered through that, we are suitably sensitized to it, and we learned some lessons through that, one of which is visibility into things like installation and what customers are actually doing and when with the equipment. I would say that the dynamic here with the service providers is good, steady growth. We have good visibility into that and what they are doing with it. And they were the main folks that were having the challenges around the ordering piece. With the hyperscalers, I think we have deep collaborative relationships. They are our biggest services customers as well, and you saw our installation services were up 42% in the quarter, which gives us unique visibility into what they are doing and deploying across the board there. So, given the scale of this, these are deep and collaborative relationships with them around precisely what they are trying to do and where. And so that gives us good confidence and visibility in the way we structure our agreements with them, given these lead times and the rest of it, which they are mindful of. I think we have great assurance, to say it another way, in the quality of our backlog.
Mark Graff: Yeah, I think the only thing that I would probably add, Tal, is when we talked about value exchange, part of that is making sure we have got the right terms and conditions in place so that we do not get stuck holding the bag, and we have not really seen a lot of people pushing back on that.
Tal Liani: Got it. Second question is on margins. The risk is that in times like that, the component pricing will keep going up, and you start to see it. It started with memory. We start to see it now with other companies or other types of components. What can you do going forward? What can you do in order to mitigate the future risk? I understand what you are doing now and how you are trying to mitigate the current risk, but are there any forward pricing or forward purchase commitments, etc., you can take in order to mitigate the future increase in component pricing? Or what are you trying to do, or how are you trying to address it?
Mark Graff: Yeah, Tal, I think there is—again, we keep coming back to this word “balance.” I think we are really focused on ensuring that we have got the secure supply to satisfy the demand that we are looking at, and we are locking in the pricing as we know it today with our component suppliers and the contract manufacturing folks. All that said, there is still future risk of them repricing their backlog, and we are having conversations both on our supplier side as well as on the customer side, so that we are not getting squeezed in the middle. But, again, it is the balance of pricing and supply on one side and pricing and share on the other. And I think given the results that you have seen and the basis of our raise, I am feeling pretty comfortable that we are striking those right tones.
Tal Liani: Got it. Great. Thank you.
Operator: And the next question comes from Atif Malik with Citi. Please go ahead.
Adrienne Colby: Hi, it is Adrienne Colby for Atif. Thank you for taking the question. I wanted to ask another one about gross margin. With the 800ZR pluggables ramping in the latter part of the year and also with the pricing increases kicking in, why would we not see gross margin expansion in the second half?
Mark Graff: The guide that we gave was a good range based on what we see from the product mix and from the supply chain challenges that we are trying to work through—again, that balance that I talked about before. From our seat right now, we think that is a pretty good guide. As we make more progress, we will give you guys updates.
Adrienne Colby: Great, that is helpful. Thank you. And then just as a follow-up, I was wondering if you could provide some more color on the momentum that you are seeing with neoscalers, maybe just in the relative size of the opportunities, if most of that is falling in cloud direct versus MOFN.
Gary Smith: Yeah, we are seeing, obviously, an emerging ramp here around a bunch of the—loosely called—the neoscalers, which encompass a fair range of different players. I would say largely right now MOFN-orientated, given the capital expenditures, time to market for them, etc. But what is clear from it all is that the network is now a real priority for them. And I think that plays through to the hyperscalers too. There has been such a maniacal focus—and continues to be, obviously—on things like power, GPU accessibility, etc. Now it is really about the network. The traffic is beginning to come out of the network both for inference and for training. And the neoscalers are obviously seeing that too. So they are leaning in on the network. We are also beginning to see some of them wish to have control of some of that network as well and do their own builds. We are cautious about that approach given the financial structure of some of those neoscalers—not all of them. But we are seeing across the board the neoscalers leaning in on their whole network requirements, largely really, Adrienne, currently going for MOFN.
Operator: Thank you. We will take one other question today. Thank you. The next question comes from Ryan Koontz with Needham & Company. Please go ahead.
Ryan Koontz: Great, thanks. You touched on scale-across a bit. It seems like we are very early in the momentum around that area. Can you maybe expand on those projects: where we are in terms of a rough count, and how your visibility is improving there relative to backlog and specific scale-across projects? Thank you.
Gary Smith: Hi, Ryan. We shared—I think it was in Q3—we announced the first large hyperscaler rollout. We have actually seen during the course of this quarter additional sites being added to that. Again, I would say all of these currently that we are seeing are in North America, which is, I think, to be expected. We have added two more hyperscalers to that that are also rolling this out. I think we are in the very, very early stages of this, and in talking with them, though, the plans are large and expansive, as you would expect for the scale of what they are trying to do here. It is absolutely enormous. So we are at the very early innings of this whole training, clustering. I would say that what we are also observing is there are—all of these hyperscalers we talk about homogeneously; they are not. They have very different business models. They have very different architectures both inside the data center to some extent and certainly outside from a networking point of view. Their training varies as well. And so you have got lots of different variables in there in terms of distance, capacity, speed, etc. They all want low latency, and they all want super high speed, but you are seeing a lot of variables about how they are clustering this. And again, I would say we are at the very early stages of this, Ryan.
Ryan Koontz: Really helpful. Thanks for that, Gary. One last question on DCOM here. Great early move here; seems like you have got a big lead in this opportunity to bring PON to out-of-band. Do you feel like that space is defensible for you, and how do you sustain a competitive advantage there? Thank you.
Gary Smith: I think there are a number of elements to that sustainability. I think it is deep collaboration, first off, and understanding and intimacy, and obviously Meta were incredibly helpful in instigating that. But there are different use cases; they are slightly different in the different hyperscalers. The defensibility of it is that we are very vertically integrated into it. We own the core technology, and it is the software that we are putting on top of that as well. We are uniquely positioned on that. So we think it is the combination of all of those elements: the collaboration, the vertical integration, the uniqueness and high speed of it, all of our software integration capability, and also, by the way, installation, which we are also doing. It is the confluence of those things that provides what we think is a quite defensible position.
Ryan Koontz: Really helpful, Gary. Thank you.
Operator: This concludes our question-and-answer session. The conference has now concluded. Thank you for attending today's presentation. You may now disconnect.