
Meta and Ray-Ban Smart Glasses Signal an Inflection Point for AR

The News: Meta introduced its next-generation smart glasses in partnership with Ray-Ban (EssilorLuxottica) at the fall Meta Connect event. The glasses, which are powered by Qualcomm’s Snapdragon AR1 Gen 1 platform, bring capabilities never before seen in smart glasses and seem to signal that the category is finally ready for mainstream consumers. Read the full Meta, Qualcomm, and Ray-Ban smart glasses press release on the Meta website.

Analyst Take: When I look at the full spectrum of extended reality (XR) products today, from fully immersive virtual reality (VR) and mixed reality (MR) blending VR and augmented reality (AR) to the digital layering of AR and the thin functionality of smart glasses, I cannot help but notice how deeply segmented the category already is. This segmentation is giving OEMs and developers the opportunity to focus on expanding the range of category-specific designs, features, and experiences based on market needs. From a product development angle, the bonus effect of this segmentation is that different product teams are working on solving very specific engineering and user experience (UX) challenges. For smart glasses and AR glasses, these center on implementing advanced features in lightweight, subtle form factors designed for all-day, near-frictionless use. In short, the work being done on AR glasses and smart glasses in parallel splits the engineering challenge of creating the perfect pair of smart AR glasses into two separate projects: perfecting the AR heads-up display (HUD) and perfecting hands-free AI and user interface (UI) integrations for what is essentially the same unobtrusive form factor.

The HUD half of the challenge is technically complex but has been well defined since we started talking about AR: delivering power-efficient, interactive digital overlays of virtual objects, screens, notifications, and video content within thermal budget and depth of field (DOF) envelopes.

The other half has a few more moving parts. The first layer deals with managing arrays of intelligent cameras, audio components, and mics. The second layer deals with integrating AI-powered interfaces that let users easily control the device and the applications it supports, preferably hands-free (or with minimal physical touchpoints): jumping on calls, interacting with digital assistants, capturing and sharing video and photo content, opening apps and files, navigating calendars, reading and drafting emails, and so on. (This functionality is in addition to the back-of-house AI that controls power management, camera and mic performance, etc.)

The most obvious difficulty lies in how to eventually fold both technology challenges into the same form factor: one that looks and feels like regular glasses and can carry a full day’s charge while delivering all of these features and more. Given the popularity and frictionless nature of voice interfaces and AI, I do not think we can keep talking about AR glasses (for broad consumer and enterprise markets) without also talking about the need for them to deliver built-in AI functionality. AR glasses and smart glasses might seem to be on parallel tracks, but they will inevitably converge into a single track of smart AR glasses. The form factor and power envelope math there is 1+1=1; it is taking a little time to get there, but it feels like we are getting very, very close.

If we frame the category’s trajectory in this way and think of the current state of the commercialization of AR and smart glasses as two parallel tracks working to eventually merge into one, Meta and Ray-Ban’s new smart glasses become more than just a cool, fun, impressive product. These glasses are important. They represent a very real inflection point for the category and constitute an important milestone on the journey that will inevitably bring AI and AR glasses together to create the next wave of ubiquitous, hands-free, human-machine interfaces.

Although these are not the first smart glasses to hit the market, they deliver on the second of the two challenges outlined earlier without looking goofy or bulky or compromising on features and performance … or price, which I will circle back to in a moment. What gets me most excited about the trajectory of smart AR glasses is that it promises to bring most (if not all) of the functionality of smartphones and tablets to a wearable, hands-free, near-frictionless interface. We could very well see, in the not-so-distant future, smart AR glasses becoming our primary mobile interface, with mobile phones becoming more of a support device or secondary interface.

What These Glasses Can Do

Powered by Qualcomm’s new Snapdragon AR1 Gen 1 platform, the glasses sport a new 14-bit dual image signal processor (ISP) delivering ultra-wide 12-megapixel photos with portrait mode, auto exposure, and auto face detection, along with support for 6-megapixel, 1080p videos. Although the glasses come with some local storage for those files, their built-in Bluetooth and Wi-Fi 7/6e capabilities can push photos and videos in real time to whatever device they are wirelessly tethered to.

Audio-wise, the glasses come equipped with new custom-designed speakers that deliver 2x the bass, 50% more volume, and improved directional audio (reducing audio leakage during calls or when listening to music, podcasts, or webinars, even in noisy or windy environments) compared with Ray-Ban’s previous generation of smart glasses. But what I find especially important about the new audio specs is the refreshed five-microphone array, which not only supports spatial audio recording but also is much better at canceling background noise and echo.

These noise cancellation and mic array improvements are going to be vitally important to the glasses’ impressive new AI-powered voice command capabilities, brought on by the integration of Meta AI, Meta’s conversational assistant. From a UX standpoint, Meta AI can be activated by saying “Hey Meta,” giving users hands-free access to a broad set of actions, from search to device controls. Users will be able to, for example, share photos with friends and family with a simple “send a photo” voice command, livestream to Facebook or Instagram, and unlock an increasingly rich universe of features and capabilities.
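To make that interaction model concrete, here is a minimal sketch, in Python, of the wake-word-plus-intent-dispatch pattern a “Hey Meta”-style interface implies. Audio is faked as text frames, and every name in it is a hypothetical stand-in rather than Meta’s actual API.

```python
# Illustrative sketch only: audio frames are faked as text, and all names
# here are hypothetical stand-ins, not Meta's actual assistant API.

WAKE_WORD = "hey meta"

INTENT_HANDLERS = {
    "send a photo": lambda: print("-> capturing and sharing latest photo"),
    "start a livestream": lambda: print("-> starting livestream to a linked account"),
}

def handle_frame(frame: str) -> None:
    """Ignore frames until the wake word appears, then dispatch the command."""
    text = frame.lower().strip()
    if not text.startswith(WAKE_WORD):
        return  # stay in low-power listening mode
    command = text[len(WAKE_WORD):].strip(" ,.")
    handler = INTENT_HANDLERS.get(command)
    if handler:
        handler()
    else:
        print(f"-> unrecognized command: {command!r}")

# Simulated stream: one background frame, then a recognized voice command.
for frame in ["background chatter", "Hey Meta, send a photo"]:
    handle_frame(frame)
```

The interesting engineering is in making the always-on listening stage both accurate and extremely power-efficient, which is exactly where the mic array and on-device AI hardware come in.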

This front-of-house AI feature set highlights why the mic array and the drivers supporting its noise-canceling features must be able to clearly pick up voice commands in noisy environments, from the inside of a car with the windows down or a packed airport terminal to a noisy street or busy hotel lobby. Because the primary value of smart glasses is their portability, they must be able to work anywhere, across a broad range of less-than-ideal acoustic conditions. Meta’s focus on improving the mic array speaks to the role that AI-powered voice interfaces will play in the future of the product. This focus on smart voice and audio interfaces is all the more important because this iteration of Meta and Ray-Ban smart glasses does not come equipped with a visual display.
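To see why the mic array upgrade matters so much, here is a toy delay-and-sum beamformer, sketched in Python with NumPy on simulated audio under simplified assumptions (integer sample delays, a pure tone standing in for speech). Delay-aligned channels add the voice coherently while uncorrelated noise partially cancels. This is a textbook illustration, not Meta’s or Qualcomm’s actual noise-cancellation pipeline.

```python
import numpy as np

# Toy delay-and-sum beamforming over a simulated five-mic array.
rng = np.random.default_rng(0)
fs = 16_000                                  # 16 kHz sample rate, 1 second
t = np.arange(fs) / fs
voice = np.sin(2 * np.pi * 220 * t)          # stand-in for the talker's voice

delays = [0, 2, 4, 6, 8]                     # per-mic arrival delays, in samples
mics = [np.roll(voice, d) + rng.normal(scale=1.0, size=fs) for d in delays]

# Undo each mic's known delay so the voice lines up, then average the channels.
beamformed = np.mean([np.roll(m, -d) for m, d in zip(mics, delays)], axis=0)

def snr_db(signal, clean):
    """Signal-to-noise ratio of `signal` against the known clean reference."""
    noise = signal - clean
    return 10 * np.log10(np.sum(clean**2) / np.sum(noise**2))

print(f"single-mic SNR: {snr_db(mics[0], voice):5.1f} dB")
print(f"beamformed SNR: {snr_db(beamformed, voice):5.1f} dB")
```

On this toy setup, averaging the five aligned channels buys roughly 7 dB of SNR, the 10·log10(5) gain you would expect from averaging five uncorrelated noise sources, which is the basic physics behind cleaner voice pickup in noisy places.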

At launch, Meta AI features will be available only in the US, and only in beta.

The AR Features That Meta and Ray-Ban Chose Not To Implement

The technological magic behind this Ray-Ban and Meta partnership is Qualcomm’s Snapdragon AR1 Gen 1 platform. Qualcomm has quietly been at the forefront of XR innovation for years, already powering more than 80 devices that have been announced or launched, with many more on the way. The smart glasses category, with its blending of physical and digital spaces and on-device AI integration in highly portable, low-friction form factors, presents a significant opportunity for Qualcomm, which has spent years developing and miniaturizing the technologies necessary to make that dream a reality. One of the ways Qualcomm managed to pull off this feat is by incorporating a dedicated AI block into its AR1 Gen 1 system on a chip (SoC). The block includes its Spectra ISP, its Hexagon processor, a sensing hub, and what the company simply calls its “engine for visual analytics,” which is self-explanatory.
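As a rough illustration of why a dedicated AI block with a sensing hub matters for all-day wearables, the Python sketch below shows the gating pattern such hardware enables: a cheap, always-on check decides when the expensive analytics engine wakes up. The class and method names are my own hypothetical stand-ins, not Qualcomm’s SDK.

```python
import random

# Hypothetical sketch of the power-saving pattern a sensing hub enables:
# a cheap, always-on check gates the power-hungry perception engine.

class SensingHub:
    """Always-on, low-power stage; a trivial activity check stands in for it."""
    def worth_waking_for(self, frame: dict) -> bool:
        return frame["activity"] > 0.5

class VisualAnalyticsEngine:
    """Power-hungry stage: only invoked when the sensing hub fires."""
    def analyze(self, frame: dict) -> str:
        return f"frame {frame['id']}: full visual analysis ran"

hub, engine = SensingHub(), VisualAnalyticsEngine()
rng = random.Random(42)
for frame_id in range(5):
    frame = {"id": frame_id, "activity": rng.random()}
    if hub.worth_waking_for(frame):
        print(engine.analyze(frame))
    else:
        print(f"frame {frame_id}: below threshold, engine stays asleep")
```

Real implementations are far more sophisticated, but the principle, spend almost nothing until there is something worth spending on, is central to making all-day smart glasses viable.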

Meta’s overlapping hardware, partnership, and platform ecosystems are also ideal gateways for Qualcomm’s Snapdragon AR platform to reach broad user segments and scale quickly.

It is important to note, however, that the Snapdragon AR1 Gen 1 platform also supports AR/HUD features such as visual search, visual notifications (calendars, emails, timers, navigation, etc.), and content consumption (video blending seamlessly in the user’s field of view with 1280 x 1280 resolution per eye). Meta and Ray-Ban opted not to implement this family of features in this release.

Two quick thoughts: The first is that if Meta and Ray-Ban had wanted to release the first pair of smart AR glasses this year, they could have. Snapdragon AR1 Gen 1 could have made that possible. The second is that their decision not to do so makes sense. Here is why: For starters, Ray-Ban glasses are not a tech product. They are glasses. It would make little sense for an eyewear brand like Ray-Ban to pivot that hard into the cutting-edge technology market. Better to ease into it with fun, easy, useful features that add value to its products without changing the nature of its value proposition or brand positioning. Ray-Ban does not want to be in the business of competing against Apple or Google or Samsung (at least not yet). Ray-Ban is in the business of keeping its brand relevant in a world that values technology integration. Through its partnership with Meta, Ray-Ban can begin to integrate technology features into its products, build momentum for its program with this new market opportunity, and gauge consumer interest in more advanced AR features at its own pace. If and when the time comes for Ray-Ban to introduce full-featured smart AR glasses at scale, it will have the building blocks in place to do so.

Second, I suspect that for Ray-Ban, form factor trumps tech features. If I were a product manager for Ray-Ban, I would want to get as close to the analog Wayfarer (and Headliner) form factor as possible without compromising the brand’s design aesthetic. As I pointed out earlier, fitting every AR and smart glasses feature into a single device comes with real estate, weight, power management, and thermal envelope constraints. Those might simply not have been compatible with Ray-Ban’s presumably strict brand guidelines at this juncture.

Third, by keeping the glasses’ features limited, Ray-Ban can keep price points safely out of sticker-shock territory. Analog Wayfarers tend to be priced between $120 and $170. Their smart, digital cousins start at only $299, which is not a huge jump considering how much technology is packed into them. Full-featured smart AR versions would likely have commanded a much heftier price point, resulting in a much higher adoption hurdle. The point for Ray-Ban is to make these glasses accessible to anyone who has the budget for analog Ray-Bans and is curious about these new capabilities. For the product to be commercially successful, it has to be scalable. I doubt that Ray-Ban is interested in capturing niche markets. Keeping the price point low and the features fun, useful, cool, and accessible lowers the barriers to adoption that would otherwise plague an over-featured, premium-priced product offering.

Last, there are not enough apps on the market yet to make advanced AR features all that relevant to most users. It is still a bit early. Ray-Ban and Meta know this, and the decision to deprioritize AR capabilities until developers have had a chance to build out the app ecosystem makes sense. Better to wait. The added benefit of waiting a year or two (or three) is that by the time Ray-Ban is ready to implement AR features into its glasses, Qualcomm’s AR-specific Snapdragon platform will likely deliver significant performance improvements.

All of this is to say that although part of me feels a little disappointed that Meta and Ray-Ban did not release fully featured smart AR glasses, what they put together is exactly what the market is actually ready for and what fits within their current brand ecosystem.

At some point in the presumably near future, however, the combination of these voice and audio features and a HUD will create a perfect dashboard of hands-free capabilities for users that could replace mobile phone screens as their primary UI, and both Meta and Ray-Ban will be perfectly positioned to make the technology as mainstream as smartphones are today.

Meta’s Well-Executed Pivot To an AI-Powered Lifestyle Platform

It makes perfect sense for Meta to be highlighting the glasses’ Facebook- and Instagram-specific features, and to be folding more well-designed hardware into its broad XR ecosystem. Even if conversations about “the metaverse” have been largely sidelined for now, Meta is smart to continue building its leadership in the entire space by expanding its post-Oculus strategy beyond VR and MR into the smart glasses segment.

Some Thoughts on Meta versus Apple for This Segment

What strikes me most about the on-target feature set of the Meta-Qualcomm-Ray-Ban smart glasses is that they have essentially succeeded where Apple failed. Let me explain: For years now, consumers have been promised some version of Apple AR or smart glasses in stylish, lightweight, all-day-use form factors. After all that buildup, though, Apple ended up releasing an MR product instead, and an expensive one at that. And although I have a lot of very good things to say about Apple’s Vision Pro headset, I cannot help but feel that it was not what consumers really wanted or were hoping for from Apple. Watching Meta’s presentation of the new Ray-Ban smart glasses, I could not help but feel that, given Apple’s history of launching innovative, exciting, market-defining premium consumer devices (iPod, iPod Touch, iPhone, iPad, Apple Watch, AirPods, and so on), Apple, not Meta, should have been the company to introduce this new generation of smart glasses to the market.

In short, Meta’s win here does not exist in a vacuum. Apple somehow is not even in this market, and that is a remarkable miss given the potential for this new product segment … and one that Meta will absolutely capitalize on. With no competition from Apple, Snapdragon’s AR platform appears to have an early market share capture advantage no matter who else releases smart glasses in the next 12-24 months.

The Market Adoption Case for Partnering With Established Eyewear Brands

One trend relevant to market share capture that I want to highlight in the consumer-facing smart glasses segment is the tendency for tech companies to partner with established eyewear brands rather than release smart glasses under their own banners. Although Meta’s partnership with Ray-Ban is not new, this latest release comes on the heels of Amazon announcing its own smart glasses partnership with Carrera. Given that eyewear brands already have well-established markets and distribution channels, deep (and sometimes multigenerational) relationships with consumers, brand loyalty, identity-defining branding, and product design expertise, it makes sense for companies such as Meta and Amazon to partner with them rather than try to compete with them.

This is not to say that there is not also an opportunity for tech giants such as Google/Pixel, Samsung, Microsoft, HP/Poly, or Sony to come out with their own designs, especially for the enterprise market, but it is not difficult to understand the value of sliding into established brand and design ecosystems such as Ray-Ban, Oakley, Warby Parker, and Persol for mass-market appeal, and more exclusive brands such as Prada, Gucci, Tom Ford, Carrera, and Versace for more differentiated consumer segments. Apple might be the sole exception to that model, but Apple has a unique brand story, design aesthetic, and role to play in the consumer tech ecosystem.

For every tech brand not named Apple, there is undeniable value in letting experienced, established eyewear brands take the design lead. I do not think I am an outlier for arguing that discreetly integrating smart glasses tech into Ray-Ban’s Wayfarer and Headliner frames has much stronger consumer market appeal than “Meta glasses” would have had.

No Cellular Modem for Smart Glasses or AR Glasses Yet

I could not help but notice the absence of a cellular modem in the Meta and Ray-Ban smart glasses (or in the Snapdragon AR1 Gen 1 platform, for that matter), but this early in the game, that is fine. Until the category can produce glasses that can legitimately replace phones, tethering them to a mobile device via Bluetooth (or to a Wi-Fi network) will do just fine. In the future, though, especially as power efficiency improves, I expect to see 5G and 6G modems start working their way into later versions of the platform.

From discussions I have had with US carriers in the past few months, 5G and 6G support for AR-centric use cases is not currently an investment priority for operators. It seems like a bit of a chicken-and-egg problem for now, and the onus will be on device manufacturers and platform companies such as Meta to make their demand-side case before that changes.

One Last Thing

I had an opportunity to ask Said Bakadir, Qualcomm’s senior director of XR Product Management, about AR1 Gen 1’s HUD performance across different degrees of lens tint. Although he was not able to comment on any specific OEM products, he did confirm that the platform’s drivers are designed to support a broad range of lens gradients. Optimization will generally fall to OEMs based on how they choose to implement the feature.

All in all, this announcement was a win for Meta, Ray-Ban, Qualcomm, and the XR ecosystem, and for the smart glasses segment especially. I feel that between the capabilities of the Snapdragon AR1 Gen 1 platform and the excitement around the Meta/Ray-Ban smart glasses announcement, the market has taken a significant step toward bringing smartphone functionality to an entirely new era of hands-free, AI-enabled interfaces.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other Insights from The Futurum Group:

AI-Enabled Features in Amazon Echo Products Point to the Future of UX

Apple WWDC 2023 Recap: Yep, It’s Mixed Reality!

Qualcomm’s New Snapdragon XR2 5G Reference Design Opens the Door to Truly Wireless On-Demand 8K XR Experiences

Author Information

Olivier Blanchard has extensive experience managing product innovation, technology adoption, digital integration, and change management for industry leaders in the B2B, B2C, and B2G sectors, as well as the IT channel. His passion is helping decision-makers and their organizations understand the many risks and opportunities of technology-driven disruption, and leverage innovation to build stronger, better, more competitive companies.
