According to a new report from Ming-Chi Kuo (via 9to5Mac), a reliable analyst on all things Apple, the company has been working on an augmented reality headset and could launch it relatively soon. This pair of glasses could go into mass production as early as Q4 2019 and should be available at some point during the first half of 2020.
It’s still unclear what you’ll be able to do with this mysterious headset. Kuo says that it’ll work more or less like an Apple Watch: you won’t be able to use it without an iPhone, as it’ll rely heavily on the phone for most tasks.
The glasses will act as an external display to give you information right in front of your eyes. Your iPhone will do the heavy lifting when it comes to internet connectivity, location services and computing. I wouldn’t be surprised if the AR headset relies on Bluetooth to communicate with your iPhone.
Kuo’s report doesn’t say what you’ll find inside the headset. Apple could embed displays and sensors so that the device is aware of your surroundings; an AR headset only makes sense if it can detect the objects around you.
Apple has already experimented with augmented reality with its ARKit framework on iOS. Developers have been able to build apps that integrate digital elements into the real world, as viewed through your phone’s camera.
While many apps have added AR features, most of them feel gimmicky and don’t add any real value. There haven’t been a ton of AR-native apps either.
One interesting use case for augmented reality is mapping. Google recently unveiled an augmented reality mode for Google Maps. You can hold your phone in front of your face to see arrows indicating where you’re supposed to go.
Apple has also been rebuilding Apple Maps with its own data. The company isn’t just drawing maps. It is collecting a ton of real-world data using LiDAR sensors and eight cameras attached to a car roof. Let’s see if Apple Maps will play an important part in Apple’s rumored AR headset.
The growth of augmented and virtual reality applications and hardware is ushering in a new age of digital media and imaging technologies, and startups that are putting themselves at the center of that are attracting interest.
TechCrunch has learned and confirmed that Matterport, which started out making cameras but has since diversified into a wider platform to capture, create, search and utilise 3D imagery of interior and enclosed spaces in immersive real estate, design, insurance and other B2C and B2B applications, has raised $48 million. Sources tell us the money came at a pre-money valuation of around $325 million, although the company is not commenting on that.
From what we understand, the funding is coming ahead of a larger growth round from existing and new investors, to tap into what they see as a big opportunity for building and providing (as a service) highly accurate 3D images of enclosed spaces.
The company in December appointed a new CEO, RJ Pittman — who had been the chief product officer at eBay, and before that held executive roles at Apple and Google — in part to help fill out that bigger strategy.
Matterport had raised just under $63 million prior to this and had been valued at around $207 million, according to PitchBook estimates. The current round comes from existing backers, which include Lux Capital, DCM, Qualcomm Ventures and more.
Matterport’s roots are in high-end cameras built to capture multiple images to create 3D interior imagery for a variety of applications from interior design and real estate to gaming. Changing tides in the worlds of industry and hardware have somewhat shifted its course.
On the hardware side, we’ve seen a rise in the functionality of smartphone cameras, as well as a proliferation of specialised 3D cameras at lower price points. So while Matterport still sells its own high-end cameras, it is also starting to work with less expensive devices with spherical lenses — such as the Ricoh Theta, which is nearly 10 times less expensive than Matterport’s Pro2 camera — and smartphones.
Using an AI engine — which it has been building for some time — packaged into a service it calls Matterport Cloud 3.0, it converts 2D panoramic and 360-degree images into 3D ones. (Matterport Cloud 3.0 is currently in beta and will be launching fully on the 18th of March, initially supporting the Ricoh Theta V, the Theta Z1, the Insta360 ONE X, and the Leica Geosystems BLK360 laser scanner.)
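Matterport hasn’t published how its AI reconstructs depth, but the 2D inputs it names (equirectangular 360-degree panoramas, the stitched format cameras like the Theta produce) map each pixel to a viewing direction on a sphere; recovering 3D then amounts to estimating a distance along each of those rays. A minimal sketch of that pixel-to-ray mapping, for illustration only and not Matterport’s code:

```javascript
// Map a pixel (x, y) in a width×height equirectangular panorama to a
// unit direction vector on the viewing sphere. Longitude spans
// [-π, π] left to right; latitude spans [π/2, -π/2] top to bottom.
function pixelToDirection(x, y, width, height) {
  const lon = (x / width) * 2 * Math.PI - Math.PI;
  const lat = Math.PI / 2 - (y / height) * Math.PI;
  return {
    x: Math.cos(lat) * Math.sin(lon), // right
    y: Math.sin(lat),                 // up
    z: Math.cos(lat) * Math.cos(lon), // forward
  };
}
```

A depth-estimation model that predicts one distance per pixel can then scale each of these unit vectors into a 3D point cloud of the room.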
Matterport is further using this technology to grow its wider database of images. It has already racked up 1.6 million 3D images and millions of 2D images, and at its current growth rate the aim is to expand the library to 100 million in the coming years, positioning it as a Getty for 3D images of enclosed spaces.
These, in turn, will be used in two ways: to feed Matterport’s machine learning to train it to create better and faster 3D images; and to become part of a wider library, accessible to other businesses by way of a set of APIs.
And, from what I understand, the aim is not just to use images as they are: people would be able to manipulate them to, for example, remove all the furniture in a room and re-stage it completely without needing to physically do that work ahead of listing a house for sale. Another use case is adding immersive interior shots to mapping applications like Google’s Street View.
“We are a data company,” RJ Pittman told me when I met him for coffee last month.
The ability to convert 2D into 3D images using artificial intelligence to help automate the process is a potentially big area that Matterport, and its investors, believe will be in increasing demand. That’s not just because people still think there will one day be a bigger market for virtual reality headsets, which will need more interesting content; but because we as consumers already have come to expect more realistic and immersive experiences today, even when viewing things on regular screens; and because B2B and enterprise services (for example design or insurance applications) have also grown in sophistication and now require these kinds of images.
(That demand is driving the creation of other kinds of 3D imaging startups, too. Threedy.ai launched last week with a seed round from a number of angels and VCs to perform a similar kind of 2D-to-3D mapping technique for objects rather than interior spaces. It is already working with a number of e-commerce sites to bypass some of the costs and inefficiencies of more established, manual methods of 3D rendering.)
While Matterport is doubling down on its cloud services strategy, it has also been making hires to take the business to its next steps. In addition to Pittman, these have included Dave Lippman, formerly design head at eBay, as its chief design officer; and engineering veteran Lou Marzano as its VP of hardware, R&D and manufacturing, with more hires to come.
Qualcomm wants to create a new device category, XR viewer headsets, that combine the compute power of its current Snapdragon 855 platform with the speed of 5G on a smartphone to provide you with mobile VR and AR experiences — or ‘Extended Reality,’ as Qualcomm likes to call it — with six degrees of freedom tracking. The company announced this new initiative at MWC in Barcelona and noted that it expects OEMs like Pico to launch devices later this year.
The idea here is that the headsets will be tethered to a smartphone via a USB-C connection that drives high-res displays, with a lot of the content being streamed over what is ideally a 5G connection.
The headsets are an extension of the company’s previous XR work, which mostly focused on using a phone’s cameras and display to power AR experiences. The company did start an accelerator program for head-mounted displays (HMDs), the aptly named HMD Accelerator Program, back in 2017. In many ways, today’s announcement is an extension of this work.
“Our HMD Accelerator Program has been a critical catalyst for ecosystem partners ranging from component suppliers and ODMs, to bring quality standalone XR headsets to consumers,” said Hugo Swart, senior director, Product Management, Qualcomm. “Building upon the momentum of this program, we will extend this to XR viewers and compatible smartphones, starting with smartphones enabled by the Snapdragon 855 Mobile Platform.”
Qualcomm has signed up a number of platform and software partners like Arvizio, NetEase-AR, Iconic Engine, NextVR, SenseTime and Wikitude, as well as manufacturers like Acer and Asus.
Tyra Banks, the supermodel-turned-entrepreneur, has unveiled her latest venture. Dubbed Modelland, the in-person theme park-like experience will bring technology to the forefront, Banks told TechCrunch over the phone last week.
“We’re very open to partnering with and having integrations with different brands that bring technology to the forefront and make sure what we’re providing in Modelland are things you cannot do on your phone,” she told me.
The attraction will combine fantasy with interactive entertainment (think some augmented reality and virtual reality), as well as what people have come to expect from theme parks: food, events and shopping. While it’s called Modelland, Banks said it’s geared toward the masses and aims to celebrate all expressions of beauty.
“I am being very deliberate to make sure this is something families can come to,” Banks said. “I’m not creating this to service people who want to become models or are models.”
Modelland’s ultimate purpose is to continue to help redefine standards of beauty — no matter what your age, race, body size and so forth.
“I’m happy to see the world is much more celebrating of different types of beauty,” she said. “I think Modelland can be the next iteration of that.”
Modelland is slated to open later this year in Santa Monica, Calif.
The beta for iOS 12.2 contains a change to mobile Safari that could have implications for the advertising and marketing worlds, as well as for Web-based augmented or virtual reality more generally.
In the beta, a toggle labeled “Motion & Orientation Access” exists in the Safari privacy settings panel. This toggle determines whether sites visited in the mobile Safari browser will be able to access the iPhone, iPod touch, or iPad’s gyroscope or accelerometer. This setting currently defaults to “off,” which means users would have to have the foresight to navigate to the Settings app and enable it before being able to use AR experiences from the Web.
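Because the setting suppresses sensor data silently rather than raising an error — listeners attach normally but the events simply never arrive — web AR code has little choice but to detect the absence of data with a timeout and fall back to touch controls. A minimal sketch of that detection pattern (the `win` parameter stands in for `window` here so the function is self-contained; this is illustrative, not an Apple API):

```javascript
// Resolve true if a 'deviceorientation' event arrives within the
// timeout, false otherwise (e.g. when the Safari privacy toggle is
// off and events are silently withheld).
function hasOrientationData(win, timeoutMs = 500) {
  return new Promise((resolve) => {
    const timer = setTimeout(() => {
      win.removeEventListener('deviceorientation', onEvent);
      resolve(false); // no sensor data: fall back to drag-to-look controls
    }, timeoutMs);
    function onEvent() {
      clearTimeout(timer);
      win.removeEventListener('deviceorientation', onEvent);
      resolve(true);
    }
    win.addEventListener('deviceorientation', onEvent);
  });
}
```

A WebVR or WebXR page would call this once at startup and pick its camera-control scheme accordingly.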
Two Apple employees on Twitter elaborated on the change. Apple software engineer Ricky Mondello wrote in a tweet thread recounting the various notes in the Safari 12.1 release for iOS: