Operator: Good afternoon. My name is Christa, and I will be your conference operator today. At this time, I would like to welcome everyone to Meta's First Quarter 2026 Earnings Conference Call. [Operator Instructions] And this call will be recorded. Thank you very much. Kenneth Dorell, Meta's Director of Investor Relations. You may begin.
Kenneth Dorell: Thank you. Good afternoon, and welcome to Meta Platforms' First Quarter 2026 Earnings Conference Call. Joining me today to discuss our results are Mark Zuckerberg, CEO; and Susan Li, CFO. Our remarks today will include forward-looking statements, which are based on assumptions as of today. Actual results may differ materially as a result of various factors, including those set forth in today's earnings press release and in our annual report on Form 10-K filed with the SEC. We undertake no obligation to update any forward-looking statement. During this call, we will present both GAAP and certain non-GAAP financial measures. A reconciliation of GAAP to non-GAAP measures is included in today's earnings press release. The earnings press release and an accompanying investor presentation are available on our website at investor.atmeta.com. And now I'd like to turn the call over to Mark.
Mark Zuckerberg: All right. Hey, everyone. Thanks for joining today. We had a strong quarter for our community, our business and our progress on AI. More than 3.5 billion people use at least one of our apps every day. We saw a small decrease in total family dailies due to Internet outages in Iran and blocks in Russia. But otherwise, trends across our apps are strong. Daily and monthly actives on Instagram and Facebook continue to grow, with video driving all-time high engagement across both apps. WhatsApp continues to see strong momentum, too, including in the U.S. And Threads continues on its trajectory to be the leading app in its category. Our biggest milestone so far this year has been the release of our Muse family of models and our first model, MuSpark, along with a significantly upgraded new version of Meta AI. This was the first release from Meta Super Intelligence Labs, and it shows that our work is on track to build a leading lab. Over the past 10 months, we have built the strongest research team in the industry and established the scientific and technical foundations to scale very advanced models. Spark is just one step on that scaling ladder, and we are already training even more advanced models. But Spark has already made Meta AI a world-class assistant that leads in several areas related to our vision of personal super intelligence, including visual understanding, health, shopping, social content, local, creating games and more. We're hearing very positive feedback on it so far. We've seen large increases in Meta AI use since releasing the updates, and the Meta AI app has consistently been near the top of the app stores as well. Now that we have a strong model, we can develop more novel products as well. Since I first wrote about our vision for personal super intelligence last year, we've been focused on delivering personal and business agents to billions of people around the world.
Our goal is not just to deliver Meta AI as an assistant, but to deliver agents that can understand your goals and then work day and night to help you achieve them. My view of AI is very different from many others in the industry. I hear a lot of people out there talk about how AI is going to replace people. Instead, I think that AI is going to amplify people's ability to do what they want, whether that's to improve your health, your learning, your relationships, your ability to achieve your personal career goals and more. My view is that human progress has always been driven by people pursuing their individual aspirations. And I believe that this will continue to be true in the future. People will be more important in the future, not less. Meta believes in empowering individuals. And those are the kinds of products that we're going to build, and I believe that they're going to be some of the most important and valuable products of all time. We are building a personal agent focused on helping people achieve the diverse goals in their lives. We're also building a business agent focused on helping entrepreneurs and businesses across the world use our tools and others to grow their efforts, reach new customers and serve existing customers better. These agents will work together to form an ecosystem. And whether you use our personal or business agents to achieve your goals, I believe that the future will see a massive increase in entrepreneurship from people creating new things that they've always wanted to exist, but previously didn't have the tools to bring into the world. We're already testing an early version of business AIs, and weekly conversations have grown 10x since the start of this year. We're also working on using Spark and our upcoming models to improve our recommendation systems and core business in Facebook, Instagram and ads.
Right now, our apps primarily help people accomplish 3 important goals: connecting with people, learning about the world and entertainment. But we've always wanted our apps to understand more of people's goals so we can help improve their lives in all the ways that they want. These new AI models will let us understand this in more detail. So instead of just looking at statistical patterns of what types of people engage with what content, for the first time in Meta's history, we're going to be able to develop a first principles understanding of what you care about and what each piece of content in our system is about, so that we can show you more useful things for what you're trying to accomplish. And we'll also be able to create personalized content specifically for people to help you achieve your goals as well. Since our recommendation systems are operating at such a large scale, we'll phase in this new research and technology over time. But the trend over the last few years seems clear: we are seeing increasing returns on the amount that we can improve engagement for people and value for advertisers. This encourages us to continue investing heavily in what we expect will provide increasing value over the coming years as well. On that note, we are increasing our infrastructure CapEx forecast for this year. Most of that is due to higher component costs, particularly memory pricing, but every sign that we're seeing in our own work and across the industry gives us confidence in this investment. That said, we are very focused on increasing the efficiency of our investments, and as part of that, we are rolling out more than 1 gigawatt of our own custom silicon that we're developing with Broadcom, as well as a significant amount of AMD chips to complement the new NVIDIA systems that we're rolling out as well.
One of the primary goals of our Meta compute initiative is to lead the industry in efficiency of building compute, and we expect that will be a strategic advantage over time. Speaking of building physical goods at scale, our AI glasses continue to perform well, with the number of people using them daily tripling year-over-year. This continues to be one of the fastest-growing categories of consumer electronics ever. We released Ray-Ban Meta optics this quarter, designed for all-day wear rather than primarily as sunglasses. And building on our release of Oakley last year, we have some exciting new partnerships and styles coming later this year that I think have the potential to reach even more people. All of our glasses are designed to easily update to use our newest AI models and features. I'm also really excited to see the glasses evolve from being able to answer questions to being a personal agent that's with you all day long, helping you remember things and achieve your goals. Beyond glasses, I am excited for more of our Metaverse efforts to be powered by the AI models we're training as well. We remain the biggest investor in the VR space across the industry, but we are focused on making our VR business sustainable as we invest more in other areas like AI and glasses. Before wrapping, I want to talk for a moment about how AI is transforming our work. We are seeing more and more examples where one or two people are building something in a week that would have previously taken dozens of people months. And I want to make sure that Meta is the best place in the world for these types of people to come and make an impact. We're building the next evolution of our company around these people. And there's a lot that we can do to enable this: building the best infrastructure for creating and delivering products at scale, streamlining our teams so they aren't bigger than they need to be,
recognizing and rewarding the people who are having outsized impacts and setting ourselves up to try many more ideas and take on many new projects in the future. Of course, we will continue pushing to increase our efficiency as well. But overall, I think the future is about building many more higher-quality things than we've ever built before. All right. That is what I wanted to cover today. We are living through a historic technological transformation. We are among the few companies positioned to shape the future, and we are on track to do that. I'm looking forward to delivering personal super intelligence to billions of people. And as always, I am grateful for the hard work of our teams and to all of you for being on this journey with us.
Susan Li: Thanks, Mark, and good afternoon, everyone. Let's begin with our segment results. All comparisons are on a year-over-year basis, unless otherwise noted. We estimate 3.56 billion people used at least one of our family of apps on a daily basis in March, which declined slightly from December due to Internet disruptions in Iran and a restriction on access to WhatsApp in Russia. Absent these impacts, growth in family daily active people would have been positive quarter-over-quarter. Q1 total Family of Apps revenue was $55.9 billion, up 33% year-over-year. Q1 Family of Apps ad revenue was $55 billion, up 33%, or 29% on a constant currency basis. In Q1, the total number of ad impressions served across our services increased 19%. Impression growth was healthy across all regions, driven primarily by growth in engagement and users as well as ad load optimizations. The global average price per ad increased 12% year-over-year in Q1, with broad-based growth as we benefited from ad performance improvements, better macro conditions versus Q1 of last year, and currency tailwinds in international regions. This was partially offset by strong impression growth, including from lower monetizing regions. Family of Apps Other revenue was $885 million, up 74%, driven primarily by WhatsApp paid messaging and subscriptions revenue. Within our Reality Labs segment, Q1 revenue was $402 million, down 2% year-over-year due to lower Quest headset sales, which were partially offset by continued strong growth in AI glasses revenue. Moving now to our consolidated results. Q1 total revenue was $56.3 billion, up 33%, or 29% on a constant currency basis. Q1 total expenses were $33.4 billion, up 35% compared to last year. Year-over-year growth was driven mainly by infrastructure costs and employee compensation. The growth in infrastructure costs was due to higher depreciation, data center operating costs and third-party cloud spend.
The growth in employee compensation was driven by technical hires we've added over the past year, particularly AI talent. We ended Q1 with over 77,900 employees, down 1% from Q4 as the impact of headcount optimization efforts in certain functions was partially offset by hiring in priority areas of monetization and infrastructure. First quarter operating income was $22.9 billion, representing a 41% operating margin. Q1 interest and other income was negative $1.1 billion, driven by unrealized losses on our equity investments. Our tax rate for the quarter was negative 23%, which was favorably impacted by a tax benefit of $8.03 billion. This benefit partially relieves the $15.93 billion noncash tax charge we recorded in the third quarter of 2025 and reflects updated guidance from the U.S. Treasury issued in February 2026 regarding the tax treatment of previously capitalized R&D expenditures in the United States. Absent the tax benefit, our Q1 tax rate would have been 14%. Net income was $26.8 billion, or $10.44 per share. Absent the tax benefit, our net income and EPS would have been $18.7 billion and $7.31, respectively. Capital expenditures, including principal payments on finance leases, were $19.8 billion, driven by investments in servers, data centers and network infrastructure. Free cash flow was $12.4 billion. We ended the quarter with $81.2 billion in cash and marketable securities and $58.7 billion in debt. Turning now to the business performance. There are two primary factors that drive our revenue performance: our ability to deliver engaging experiences for our community and our effectiveness at monetizing that engagement over time. On the first, we're continuing to see significant gains from our content recommendation initiatives. On Instagram, the ranking improvements that we made in Q1 drove a 10% lift in Reels time spent. On Facebook, total video time increased more than 8% globally in Q1, the largest quarter-over-quarter gain in 4 years.
Within the U.S. and Canada, ranking improvements we made drove a 9% increase in video watch time on Facebook in Q1. These gains are benefiting from advances we're making across the full stack. Starting with data, we doubled the length of user interaction sequences we use for training on Instagram in Q1 and increased the richness of how each user interaction is described, enabling our systems to develop a deeper understanding of user interests. Within our models, we've significantly increased the speed with which our ranking models index new posts, which is enabling us to recommend them sooner after they are published. We're also applying more advanced content understanding techniques, which is enabling us to quickly identify posts that may be interesting to someone even if they haven't engaged with a lot of similar content. These and other improvements have enabled us to increase the diversity and recency of recommended content, with same-day posts now representing more than 30% of recommended Reels on both Instagram and Facebook, more than double the levels of a year ago. We're also using AI to unlock more inventory by auto-translating and dubbing videos into a viewer's local language, enabling us to recommend a more diverse set of content. Over half a billion users on each of Facebook and Instagram are now watching AI-translated videos weekly. Looking forward, we're making several investments we expect will deliver more valuable recommendations. This year, we will continue scaling up our models in several dimensions, including their size and complexity, while incorporating LLMs to deepen content understanding across our platform. This will enable us to better match people to a wider variety of content aligned to their interests. At the same time, we are executing on our longer-term efforts to develop the next generation of our recommendation systems.
This includes building foundation models that power organic content and ads recommendations as well as developing LLM-based recommender systems. Our focus this year is validating the model architectures and techniques in these domains before we scale them out in future years. Aside from our recommendations work, we are focused on deploying the models from Meta Super Intelligence Labs to enable a new set of product experiences. We're seeing encouraging results within Meta AI since we began powering responses with the first model from MSL, MuSpark. In tests we ran leading up to the launch, we saw meaningful engagement gains that accelerated week-over-week with each new iteration of the model. We're seeing similar gains within Meta AI following the broad rollout of our new model, with double-digit percent increases in Meta AI sessions per user. MuSpark is now powering Meta AI in direct chat threads across our family of apps as well as the stand-alone Meta AI app and website, giving billions of people globally access to our latest model. Overall, we're very encouraged by the momentum within our research and product road map and look forward to sharing more detail on what we're building over the course of the year. Turning to the second driver of our revenue performance, increasing monetization efficiency. The first part of this work is optimizing the level of ads within organic engagement. Here, we continue to enhance our systems to show ads at the optimal time and location. In Q1, we also expanded availability of ads on our newer services, including bringing ads on Threads to people in more markets. On WhatsApp, we're making good progress with the rollout of ads in Status, with hundreds of millions of people now viewing them daily. Moving to the second part of increasing monetization efficiency, improving performance for the businesses who use our services. To do so, we're deploying AI more deeply across each layer of our systems and tools.
Within our ad systems, we're delivering performance gains as we deploy more complex and predictive models. In Q1, enhancements we made to Lattice's modeling and learning techniques, along with advances in our GEM model architecture, drove a more than 6% increase in conversion rate for landing page view ads. In addition, we've been investing in more performant inference models for serving ads. In the second half of last year, we began rolling out our new adaptive ranking model, which is an LLM-scale ads recommender model that we use for inference. This model improves our inference ROI by routing requests to more compute-intensive inference models when it determines there is a higher probability of conversion. In Q1, we expanded coverage of our adaptive ranking model to support off-site conversions, which drove a 1.6% increase in conversion rates across the major surfaces on Facebook and Instagram. We're also leveraging AI to make it easier for businesses to manage their campaigns, develop ad creative and engage with customers. The Meta AI business assistant has now been fully rolled out to all eligible advertisers on supported Meta buying services, providing personalized recommendations to advertisers, resolving account issues, and surfacing campaign insights to help optimize results. Performance has been strong since we began testing the assistant in Q4, with common account issues being resolved at a 20% higher rate. This week, we're also introducing Meta ads AI connectors in open beta, providing advertisers the ability to connect their Meta ad account directly to an AI agent. We've always supported advertisers both on our platform and through tools like the Marketing API. And now we're extending that to AI, so businesses and agencies can analyze and optimize campaigns with the tools they're already using.
Usage of our ad creative tools is also scaling, with more than 8 million advertisers using at least one of our Gen AI ad creative tools and particularly strong adoption among small- and medium-sized advertisers. These tools are benefiting performance as well, with advertisers using our video generation feature seeing more than 3% higher conversion rates in tests. We're also seeing good traction in using AI to facilitate customer engagement. In Q1, we expanded business AIs on WhatsApp to SMBs across Latin America and Indonesia as well as on Messenger in Asia Pacific. We now have more than 10 million conversations each week being facilitated through business AIs, up from 1 million at the start of the year. We'll further expand access to more countries this quarter while adding more capabilities to the AIs. We also continue to invest in the value optimization suite, which helps advertisers maximize their return on ad spend by prioritizing the highest-value conversions rather than optimizing solely for the most conversions at the lowest cost. Adoption by businesses has been strong following performance improvements we've made over the past year, with the annual revenue run rate of our value optimization suite now over $20 billion, more than doubling year-over-year. Last, I want to touch on our commerce efforts. People discover products on our platforms through ads and organic posts, with brands increasingly turning to creators to promote their products. This is contributing to rapid growth in our partnership ads product, with its revenue run rate more than doubling year-over-year in Q1 to $10 billion. To support the product discovery and purchasing happening through creators, we're expanding our solutions beyond ads. Last month, we rolled out our affiliate partnerships offering on Facebook to more test partners, so creators can tag products from participating retailers on their posts and earn a commission when someone makes a purchase.
We have also started testing similar experiences on Instagram. We see a real opportunity to help people more easily discover and buy products within our services, particularly as we incorporate AI deeply across our platforms. Next, I would like to discuss our approach to capital allocation. Compute is becoming increasingly important as it determines the quality of services we can provide, including powering more capable models and delivering innovative new products. It is also becoming more critical to how we work, as we are entering a world where employees are managing agents to help them generate new ideas, run experiments, execute tasks and build products. We are investing aggressively to meet our infrastructure needs and ensure we maximize our strategic flexibility over the coming years. This includes substantially expanding our own data center footprint and striking deals throughout the supply chain to secure necessary components for future capacity. We're also signing cloud deals that will come online over the course of this year and 2027, allowing us to scale more quickly. These multiyear cloud deals and our infrastructure purchase agreements drove a $107 billion step-up in our contractual commitments this quarter. Our investments will support our training needs for future models and, most importantly, provide us the inference capacity necessary to deliver personal and business agents to billions of people around the world, along with several other AI product experiences we're developing. As we grow our infrastructure spend, we remain committed to operating efficiently, and we recently shared internally that we plan to reduce the size of our employee base in May. We believe a leaner operating model will allow us to move more quickly while also helping to offset the substantial investments we're making. Moving to our financial outlook. We expect second quarter 2026 total revenue to be in the range of $58 billion to $61 billion.
Our guidance assumes foreign currency is an approximately 2% tailwind to year-over-year total revenue growth based on current exchange rates. Turning to the expense and CapEx outlooks. We expect full year 2026 total expenses to be in the range of $162 billion to $169 billion, unchanged from our prior outlook. We continue to expect to deliver operating income this year that is above 2025 operating income. We anticipate 2026 capital expenditures, including principal payments on finance leases, to be in the range of $125 billion to $145 billion, increased from our prior range of $115 billion to $135 billion. This reflects our expectations for higher component pricing this year and, to a lesser extent, additional data center costs to support future year capacity. Absent any changes to our tax landscape, we expect our tax rate for the remaining quarters of 2026 to be between 13% and 16%. Finally, we continue to monitor active legal and regulatory matters, including headwinds in the EU and the U.S. that could significantly impact our business and financial results. For example, we continue to see scrutiny on youth-related issues and have additional trials scheduled for this year in the U.S., which may ultimately result in a material loss. In closing, Q1 was a solid start to the year with strong execution across our core ads and engagement initiatives. We're also making exciting progress on our AI research and product efforts and expect to build on that momentum over the course of this year. With that, Christa, let's open up the call for questions.
Operator: [Operator Instructions] And your first question comes from Brian Nowak with Morgan Stanley.
Brian Nowak: I wanted to ask you about the level of investment you're making and sort of the signposts you're watching to ensure you're going to generate ROIC on all these investments in Meta AI and the other products. So if you could just sort of let us know some of the key factors you're watching over the next 12 to 24 months, whether it's Meta AI, Muse advances or core algorithm improvements, what are you sort of watching foremost just to make sure that you're on the right path to generating healthy ROIC on all this CapEx and infrastructure spend?
Mark Zuckerberg: Basically, the things that we're watching are to make sure that we're on track building leading models and leading products. The formula for our company has always been to build experiences that can get to billions of people and focus on monetizing them once you get to scale. We're seeing a little bit of that here, where basically we invest in advance to build leading models, and we convert that into leading products. And then we think that these are going to be some of the most important products that get built over the next decade. So I think just like anything else that we've done over time, the basic milestones that I look at are around, first, technically, are we delivering the quality to enable a great product; then second, when you have the product, how is it scaling; and then third, you look at the monetization and then you drive up the efficiency of it towards increasing profitability. I don't think we have a very precise plan for exactly how each product is going to scale month-over-month or anything like that. But I think we have a sense of the shape of where these things need to be. And I think if you look at the usage of these and the quality of the products and the quality of the models that are out there, and the use that other frontier models are getting and the trajectory of that, I'm quite comfortable that, A, the lab that we're building is on track to be a leading lab in the world. I think MuSpark was a very high-quality model. It powers Meta AI, which I think is now a world-class assistant. We have an ability to be able to grow that and have a large amount of engagement. And over the coming quarters, we're just going to be tracking how our next set of training runs go, how our products scale and how excited we are about the products in the pipeline, where right now, we're very excited.
And then we'll also ramp up monetization over that period of time as well. So those are the set of things that I look at. I think for the kind of specific financial questions, I think Susan can jump in if there's anything more to add.
Operator: Your next question comes from the line of Mark Shmulik with Bernstein.
Mark Shmulik: Mark, I guess now that we've got MuSpark kind of out there launched -- how are you thinking about the team's focus here kind of divided on to further model training runs and kind of further specialization in that personal intelligence goal versus product launches and kind of shipping more product out the door. And Susan, I guess, kind of as a follow-up to Brian's question, I know it's too early to discuss 2027 CapEx. But we've had peers mention tonight a potential significant step-up. Any way to think about dimensionalizing kind of how we think about some of the returns or traction this year and how it might affect the 2027 spend?
Mark Zuckerberg: I mean, I think the road map from the team has been pretty consistent. So we have the research team, which is focused on scaling increasingly intelligent models with capabilities for the specific things that we're focused on, which are business and personal agents. We just released our first model, and I talked about in my comments how we're climbing the scaling ladder towards greater capabilities and scale for the models. That work continues. We have our next set of more advanced models in training now, and that work will, I think, just continue. I mean, that's a loop. I don't think we're going to be done with that anytime soon. We're going to have teams that are just consistently focused on training more intelligent and more capable models in the way that we want. Then we have our product team, and that team is now really unlocked to be able to build things on top of our models because we now have a very strong model. Before this, we had been prototyping a bunch of things using other different models, whether it was our previous older models or the APIs from other companies. And now we're unlocked to be able to go build things and get them to scale on top of our own models. So I think you will see that over some period of time. I tried in my opening remarks to give a bit of a sense of where we're going, but I think that more of the details of that will become clear over the coming months. And I think that these are just both loops that we'll iterate on. We'll keep on iterating on the intelligence. We'll keep on working on building new products and scaling the products. And then as we get to product market fit, we're also going to increasingly focus on building the businesses around them and decreasing the costs. And this is kind of how we've done everything over the last 20 years of running the company, and that is basically the plan.
Susan Li: Mark, on your second question, we aren't providing a specific outlook for 2027 CapEx. And we are, frankly, undergoing a very dynamic planning process ourselves as we're working through what our capacity needs will be over the coming years. Our experience so far has been that we have continued to underestimate our compute needs even as we have been ramping capacity significantly, as the advances in AI have continued and our teams continue to identify compelling new projects and initiatives. And now, too, there are very compelling internal use cases. So our expectation is that compute will become even more central to the business going forward. And it will be critical to determining the quality of the models we develop, the types of products we can introduce and how productive we can be as an organization. So we're going to continue building out our infrastructure with flexibility in mind. And if we end up not needing as much as we anticipate, we can choose to bring it online more slowly or reduce our spending in future years as we grow into the capacity that we're building now.
Operator: Your next question comes from the line of Eric Sheridan with Goldman Sachs.
Eric Sheridan: Maybe if I can build out on one of the topics that was discussed in the prepared remarks. But just the opportunity set that sits in front of the company with respect to putting agentic compute in front of both consumers and enterprises. You've long been associated with sort of the consumer landscape. And I am curious about how you're thinking about extensions of the media engagement parts of your business model and the commerce part of the business model to become more agentic over time. But what do you see also as the opportunity set that sits in front of you across SMEs and enterprises where historically, you maybe haven't had as much product velocity?
Susan Li: Thanks, Eric. So I would say in the near term, obviously, the biggest focuses are some of the areas that you mentioned: deepening engagement with our existing community and user base, making ad experiences meaningfully more personalized, more engaging, and more valuable, and helping SMBs find and engage with customers across our platform. Those are some of the, I think, most intuitive and adjacent opportunities to the business that we have today. And then, of course, as we are able to build out more agentic capabilities, enabling agents to help people be more productive, but also agents for businesses, and enabling, frankly, those agents to interact with each other and build what we hope will be a thriving commerce ecosystem on our platform. So I would say some of these are a little bit further out, especially in that latter category of things. Again, the focus is on building personal super intelligence -- building a consumer agent that can work for you and help you get things done. That right now is a consumer experience that we're focused on, but we think there will be clear monetization opportunities over time. You can imagine commission structures or a premium offering. And on the business side, we're seeing a large opportunity, of course, around agents and scaling our business AI initiatives. I think I mentioned earlier in my remarks that there are over 10 million weekly conversations between people and business AIs on our messaging platforms. That's up from 1 million at the start of the year, and we're going to continue expanding globally in Q2. And business AIs today are currently free for most businesses on our messaging apps. But as we make more progress, we expect that we will also work towards establishing a longer-term monetization model. And we'll also consider other services that we can offer to businesses in the future, but we don't have anything more to share today.
Operator: Your next question comes from the line of Youssef Squali with Truist Securities.
Youssef Squali: Maybe one for Mark and one for Susan. Mark, Ray-Ban and Oakley AI glasses continue to perform really well for you guys, but EssilorLuxottica owns and manages a lot more brands. What are the gating factors to see the launch of additional glasses under these other brands this year? And what would be a successful year for you as you look back at 2026, maybe in terms of units sold? And then Susan, on that 10% RIF, how much of that is due to efficiencies from maybe AI implementation versus just the need to stay fit? And as you look at your employee needs over time, how do you see that growing, maybe relative to your overall top line growth?
Susan Li: I can go ahead and take both of those. I might answer your second question first, and I'm just trying to make sure I got all of the parts of the question. So in terms of the optimal size of the company over time -- we don't really know what the optimal size of the company will be in the future. I think there's a lot of change right now with AI capabilities advancing rapidly. We're very focused on leveraging AI tools to substantially increase our productivity, and we're seeing that reflected in the accelerating output from our engineers. And we're approaching this with a bias towards wanting to use these tools to build even more products and services than we would have before. At the same time, we're making very significant investments in infrastructure, and we are very focused on continuing to operate efficiently. So I think we will be continuously evaluating how we're structured just to make sure we're best set up to deliver against our priorities over the coming years. So that is, I think, your second question. The first question was about the AI glasses. We're continuing to see strong growth in AI glasses sales over the course of Q1. Demand for the expanded portfolio lineup has generally been quite strong, and we're seeing sales shift now from the prior generation of Ray-Ban Metas to the latest generation, which I think speaks to the value of the improved features like extended battery life and higher-resolution video capture. So we're pretty excited about the progress we've made with glasses. We see strong interest now in the Meta Ray-Ban Displays with the Meta Neural Band. So that's an encouraging sign that there is consumer appetite for display glasses, which is kind of the next generation of how this product evolves. And so I think this is an area that we continue to be excited about and are investing in.
Operator: Your next question comes from the line of Justin Post with Bank of America.
Justin Post: Mark, it took about 10 months to get MuSpark out. I think that's a pretty good pace. Just help us understand what kind of unlock that is for some of the new products you're developing. And what is the product cadence going to be over the next 9 months on either consumer or business enterprise products built on top of that model?
Mark Zuckerberg: I mean the field is moving pretty quickly. So I mean, I'm very happy that we're -- I think we're the lab that has gone the fastest from standing up the lab to having a very widely accepted, strong model. So that's good. I think that is a very significant validation of the effort -- that the team is working well together, that the infrastructure is working, that the effort is on track. And I think that's basically the main thing that we've learned over the last quarter that I would take away: we started what is a pretty big bet, and it's on track for our plan. In terms of what exactly the cadence is going to be, it's tough for me to say, both because I don't really want to share competitively sensitive information and because for some of this stuff, we are more focused on quality than hitting a specific date. I mean, on the research side, this is research, right? We are trying novel things. You don't exactly know when they're going to land. And on the product side, I think we care a lot about quality. Let me put it this way. There's a lot of agents out there, right, that people are building for different things. And there aren't that many that I would want to give to my mother. And I think getting to that quality bar is something that I care about more than hitting a specific week for launching or something like that. But with that said, I mean we're in a zone here where the teams don't check in with me like once a quarter -- we make meaningful progress like day over day. I think that's part of the fun of developing in this world, that people -- small groups of people and teams -- can make very rapid progress. So I think we're going to see a lot of innovation. The timing of this call is good in some ways because the MuSpark release, I think, was positive. The first Meta AI release, I think, is positive. I think that shows that we're on track. 
I'm trying to kind of paint a picture of the very high-level direction that we're going in, but I think that the picture is going to come into focus a lot more over the subsequent quarters.
Operator: Your next question comes from the line of Ross Sandler with Barclays.
Ross Sandler: Yes. Mark, just sort of related to that last answer, but there's a lot of new consumer applications kind of cropping up, everything from like an OpenClaude to something a little bit more consumer-friendly than what you would build for your mom, like you said, with like Pi or Dreamer, which you recently acquired. So how are these new ideas, I guess, changing your view around the direction that core Meta AI or Dreamer or kind of your overall agentic strategy needs to go? And then the second part of it would be: do you think the lab will stay in this consumer lane? Or do you think you need -- or want -- to go down the route that others are going down with code writing and the recursive self-improvement loop, in that direction, kind of in parallel? Just thoughts on that.
Mark Zuckerberg: Yes. So look, on the OpenClaude and other agents, I think that they give you a very exciting glimpse of what types of things should be possible. Now they're pretty rough systems today. And to set up OpenClaude, you need to like install it on a computer locally and then get into a terminal and configure a bunch of things -- again, like maybe there's hundreds of thousands of people or small numbers of millions of people who can do that. But what we're talking about is delivering personal super intelligence for billions of people around the world. So how do you make a version of that experience that is a lot more polished and dialed and easy, that has all the infrastructure basically done for people already, and that just works? That's kind of what we're focused on, on the consumer side, and I'm really excited about that. I think if you had something like that, that worked quite a bit better than those systems and was easy enough that people could just get it, then I think you go from having something that hundreds of thousands or millions of people are going to use to something that is going to be addressable to billions of people. And that has been our primary focus from day 1 of the lab -- being able to deliver something like that as a product, and I think it's just going to be very exciting. By the way, the same thing is true for businesses, right? I mean, there's the personal version of this, but also, a lot of people's goals are that they want to create things, right? They want to create websites. They want to create products. They want to grow their products. These are all things that good agents are going to be able to help people do, which I think is partially why this is so exciting. And in my opening comments, I talked about how today we can help with a few goals for people, and they're big goals, right? We can help people stay connected with people they care about and learn about the world. These are big things that people care about. 
But they're not the only things that people care about. And one of the things that I would love for our products to be able to do is just understand people's goals specifically and then be able to just go work on them for them, and check back in whenever it has questions that it needs you to answer. So whether those are personal goals or you're trying to create a business or do work, I think that this is stuff that literally every person in the world is going to want some version of. And also, I think it is something that scales, where the more you want to get out of it, I think people are going to also be willing to pay a lot of money to have premium or high-compute versions of it. So I think that this is a very exciting area. But I think what you all should be waiting to see is whether we can build the version that really just works and how effective we are at converting people who are using our products into being hundreds of millions and then billions of people using this stuff. And then over time, how can we effectively convert that into something that's increasingly profitable by monetizing it and getting the costs down. So I think that that's the road map of what we need to do. You asked about whether we're primarily focused on consumers or also recursive self-improvement. We've talked about two main goals for the team. One is this kind of agents vision of what we're doing. The other is that self-improvement is really important, because you can't build a leading AI product if you don't have leading models. And you're not going to have leading models in the future if your models can't improve themselves, right? So you're getting to a point where today, the models are still able to learn from people, and then I think at some point, the models will have to improve themselves. And that's how the improvement in the models is going to happen. 
And if we don't have an ability to do that -- I think the companies that don't do that are not going to be leading labs, and then they're not going to produce leading products. So I think that is a table-stakes thing that we are focused on. Now does that make us a developer tools company? Not necessarily. I mean, I'm not against having an API or coding tools or anything like that. But it's not our primary focus. But I actually think people conflate coding with self-improvement more than they should. Coding is one ingredient for the model self-improving. It's not the only thing. And we are focused on all of the parts that are going to be necessary for self-improvement in service of the personal super intelligence vision that we have for people and businesses.
Operator: Your next question comes from the line of Ron Josey with Citigroup.
Ronald Josey: Mark, maybe a quick follow-up to a prior question around personal agents and business agents. With Spark now live and more models in development, do you look at the personal agent opportunity, which we talked about earlier in the call, as more of a short-term, medium-term, or long-term goal? I'm sure it's a never-ending goal, but when we see a product, is the question short or medium term? And then Susan, I think the ranking and recommendation model improvements are very impressive to see, given the size and scale of both Instagram and Facebook. Could you help us understand just how doubling the length of these interaction sequences can drive greater usage? There's a thesis out there that maybe some of the ranking and recommendation improvements are long in the tooth, but it seems there's a lot more room to go. So any help there would be helpful.
Mark Zuckerberg: I mean I think that the agents work, there's going to be short-term versions of it, but then I think that there's going to be massive upside for delivering more intelligence and more capabilities in the models. And you're kind of seeing this across the industry. Each month, each generation of models, they just have more capabilities and can do more things and people absorb it. and are able to get more superpowers and it's awesome. It's like the most exciting time in the industry. So I think of the agents as the product vehicle for delivering that capability to people. And we certainly -- I think this year is going to be a key period for establishing that as the vehicle for how people are going to use this, but then the model improvement, I think, is going to be something that's going to go on for a very long time. So there's a lot to do here in both the short, medium and long term.
Susan Li: And then on your second question, which I think is about the ranking and recommendations improvements that I talked about in my earlier remarks -- first of all, there is still a lot of room to continue improving recommendations over the rest of the year, and we expect we'll be able to do that to drive additional engagement on both Facebook and Instagram. A couple of things. First, we're going to continue to improve our data infrastructure, which is going to allow our models to train on more data. And we're adding more detail to how we describe the content that users have engaged with in the past, and scaling up the complexity of our model architecture to take advantage of those larger data sets, like using even longer histories of content interactions, and that should all be in service of improving the overall quality of recommendations. We also are focused on making the recommendations even more personalized and more relevant to any given user's interests. There's work we're doing to redesign our content retrieval system to show more content that matches the full range of a user's interests and to tailor the diversity of the topics we recommend to the broadness of someone's interests. So someone with particularly concentrated interests might see relatively more of that content, while people with a broader set of interests might see a greater range in the topics that we show them. And then finally, we're continuing to make improvements to our LLM-based tuning features that allow users to provide more granular natural language feedback on what they want to see more of or less of in their feed. So the sequence length, which is the thing that you called out, is one of really many improvements we made in Q1, and there is a big road map of further improvements going forward.
Operator: Your next question comes from the line of Doug Anmuth with JPMorgan.
Douglas Anmuth: Mark, how do you think about the step up as you go from leveraging smaller models in the ads business to using Spark and future large language models going forward? Where are some of the key unlocks across engagement and monetization? And then on Manus, can you just talk at all about the strategic importance and the role in developing agentic products for Meta, and then just the current status around the tech and the deal?
Susan Li: I'll take that question. On Manus, we're still working through the details, so we don't have an update right now. On your first question, which is about going from leveraging smaller models in the ads business to growing the size of the ads models -- there's already some work underway, and I think I alluded to some of this in my earlier remarks, even in the current landscape of the ads road map, where we're basically trying to advance the architecture here to allow us to leverage the abilities of larger models. Historically, we haven't used larger model architectures like GEM for inference because their size and complexity would make them too cost prohibitive. And the way we drive performance from those models is by using them to transfer knowledge to smaller, more lightweight models that are used at run time. The inference models are bound by strict latency requirements since they need to find the right ad within milliseconds, and that has, again, historically prevented us from meaningfully scaling up their size and complexity. But in the second half of last year, we introduced a new adaptive ranking model, which enables us to leverage LLM-scale model complexity of 1 trillion parameters, and we made advances in the model architecture and co-designed the system with the underlying silicon so it maintains the sub-second speed that is required to serve ads at scale. We also developed an approach that intelligently routes requests to more compute-intensive inference models if it determines that there is a higher probability of conversion, and that lets us drive both better performance and increased inference ROI. So there's a lot of work being done there before we even incorporate more of the LLM work into our underlying ads ranking models.
Operator: We have time for one more question, Ken Gawrelski with Wells Fargo.
Kenneth Gawrelski: Two, if I may. First, on the MuSpark launch, you've talked about two categories or two verticals -- health and wellness and shopping. Can I ask you to dive a little deeper into the latter, on the shopping and commerce side? And were there any learnings from the 2021, '22 phase, when you pushed deeper into commerce on Instagram and on Facebook, that you might apply? Is there an opportunity for a next-gen marketplace-type business in commerce? And then second, please, maybe, Susan, can you talk a little bit about -- based on your model improvements and the content recommendations, how much visibility do you think you have into the growth trajectory of the core business? You continue to grow at basically double the pace of the industry despite being a very large share of the industry. Could you talk a little bit about your visibility into that continued performance?
Mark Zuckerberg: Yes. So I might give you a somewhat loftier answer to the question. You're asking about shopping. I think it's sort of an interesting example of the way in which the work that we're doing is different from what I think others are doing out there. AI agents get better when you fully optimize the stack. That's why we believe that we need to be a company that builds frontier models in addition to building the agents. And in order to do that, you, of course, need to build your infrastructure in order to be able to do that well. So we're undertaking this large investment to be able to do that top to bottom. And I think a lot of the way to think about the investment that we're making is as a bet that individuals and the things that people care about are going to be more important in the future. And I think it should be a pretty obvious thing to say. But so much of the rhetoric around AI in the industry is around a company trying to build some kind of centralized thing that does all the productive work in society in some way or something like that. And that just is very different from how we see the world. Our vision for the future is one where society makes progress by individuals pursuing their own aspirations. Some people care about big grand things like curing diseases. And a lot of people care about personal things like finding the right for my daughter. And I just think that we're going to build things that help deliver this vision of personal agents for people. And I think that part of what is interesting and differentiated about what we're doing is that that's just so different from how I hear everyone else talking about the work that we're doing. So even though I think some of these ideas seem like they should be so obvious, 
I actually think that our approach of trying to empower individuals and building consumer things is, in the details, extremely different from what others are doing. And shopping might be one specific example that I think is going to have interesting commercial implications. And I think consumers are going to like it. But I don't hear any other labs out there talking about how they're building an AI that's really good at shopping. And I think that the reason for that is not because shopping is the most important thing by itself, but because empowering people to do the things that matter in their lives -- whether that's local, or understanding social context, or shopping, or personal health things, or understanding what's going on around them visually, which is going to be really important on the glasses -- these are all elements of the personal super intelligence vision. And when you're thinking about the investment in Meta over time, I think you should think about it as coming down to this set of values around what we want AI to do in society. And if what you want it to do is empower individuals and build a world where the AI is in service to individuals' goals, then that is what we are going to build, and I think it's going to be incredibly valuable.
Susan Li: Gosh, I almost wish we could end on that answer, but I will answer the second question, which I think has two versions. One is a version of, what's the revenue outlook? And obviously, we gave the Q2 guide, which embeds, I think, both a range of macro outcomes and the ongoing work that we have to continue improving both the usage and engagement on our family of apps and our ability to continue making the ads better and more performant. The second version of that question is more of a higher-level question about the overall trajectory of the road map here. And one of the things I will say, having been working on this for a very long time, is that I'm always really impressed by the team's ability to continue to advance the state of the art here. And our planning process now is, I think, really fine-tuned around this. I've mentioned on a couple of calls the budgeting process, in which we run a very ROI-based process to make sure that we are funding all of the ads initiatives that we think will drive growth in future years. That's something that is quite dialed in, and I think our ability to measure the impact of that has been pretty robust. It's been a very important driver of our ads revenue growth, and that continues to be a process that we ran in this past budget, and as far as we have line of sight, we feel good about the investment opportunities ahead of us.
Kenneth Dorell: Great. Thank you, everyone, for joining us today. We look forward to speaking with you again soon.
Operator: This concludes today's conference call. Thank you for joining, and you may now disconnect.