Google launches new AI initiatives in Japan

It’s no surprise that Google used its Cloud Next 2018 event in Tokyo today — one of a number of international Cloud Next events that follow its flagship San Francisco conference — to announce a couple of new initiatives that specifically focus on the Japanese market.

These announcements include some basic localization updates: translating its Machine Learning with TensorFlow on Google Cloud Platform Coursera specialization, its Associate Cloud Engineer certification and fifty of its hands-on Qwiklabs into Japanese.

In addition, Google is launching an Advanced Solutions Lab in Tokyo. It previously opened similar labs in Dublin, Ireland, as well as in Sunnyvale and New York. These labs offer a wide range of machine learning-centric training options, collaborative workspaces for teams that are part of the company’s four-week machine learning training program, and access to Google experts.

(Photo by Hitoshi Yamada/NurPhoto via Getty Images)

The company also today announced that it is working with Fast Retailing, the company behind brands like Uniqlo, to help it adopt new technologies. As its name implies, Fast Retailing would like to retail faster, so it’s looking at Google and its G Suite and machine learning tools to help it accelerate its growth. The code name for this project is ‘Ariake.’

“Making information accessible to all our employees is one of the foundations of the Ariake project, because it empowers them to use human traits like logic, judgment, and empathy to make decisions,” says Tadashi Yanai, CEO of Fast Retailing. “We write business plans every season, and we use collaborative tools like G Suite to make sure they’re available to all. Our work with Google Cloud has gone well beyond demand forecasting; it’s fundamentally changed the way we work together.”

An Intel drone fell on my head during a light show

It didn’t hurt. I thought someone had dropped a small cardboard box on my head. It felt sharp and light. I was sitting on the floor along the back of the crowd when an Intel Shooting Star Mini drone dropped on my head.

Audi put on a massive show to reveal its first EV, the e-tron. The automaker went all out, put journalists, executives and car dealers on a three-story paddle boat and sent us on a two-hour journey across San Francisco Bay. I had a beer and two dumplings. We were headed to a long-vacated Ford manufacturing plant in Richmond, CA.

By the time we reached our destination, the sun had set and Audi was ready to begin. Suddenly, in front of the boat, Intel’s Shooting Star drones put on a show that ended with Audi’s trademark four-ring logo. The show continued as music pounded inside the warehouse, and just before the reveal of the e-tron, Intel’s Shooting Star Minis celebrated the occasion with a light show a couple of feet above attendees’ heads.

That’s when one hit me.

Natalie Cheung, GM of Intel Drone Light Shows, told me the team knew one drone had gone rogue when it failed to land in its designated zone. According to Cheung, the Shooting Star Mini drones were designed with safety in mind.

“The drone frame is made of flexible plastics, has prop guards, and is very small,” she said. “The drone itself can fit in the palm of your hand. In addition to safety being built into the drone, we have systems and procedures in place to promote safety. For example, we have visual observers around the space watching the drones in flight and communicating with the pilot in real-time. We have built-in software to regulate the flight paths of the drones.”

After the crash, I assumed someone from Audi or Intel would be around to collect the lost drone, but no one did, and at the end of the show, I was unable to find someone who knew where I could find the Intel staff. I notified my Intel contacts first thing the following morning and provided a local address where they could get the drone. As of publication, the drone is still on my desk.

I have covered Intel’s Shooting Star program since its first public show at Disney World in 2016. It’s a fascinating program and one of the most impressive uses of drones I’ve seen. The outdoor shows, which have been used at the Super Bowl and the Olympics, are breathtaking. Hundreds of drones take to the sky, perform a seemingly impossible dance and then return home. A sophisticated program designates the route of each drone, GPS ensures each is where it’s supposed to be, and the whole fleet is controlled by just one person.
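A rough idea of what that kind of controller does can be sketched in a few lines. This is purely illustrative; the function names, greedy assignment and one-meter tolerance are my own assumptions, not Intel's actual software:

```python
# Illustrative sketch of a swarm controller: assign each drone a target
# position, then use GPS fixes to flag any drone that strays off course.
# Hypothetical logic only; not Intel's actual Shooting Star software.

import math

def assign_targets(drones, targets):
    """Greedy assignment: each drone takes the closest unclaimed target."""
    remaining = list(targets)
    assignment = {}
    for name, pos in drones.items():
        best = min(remaining, key=lambda t: math.dist(pos, t))
        assignment[name] = best
        remaining.remove(best)
    return assignment

def off_course(assignment, gps_fixes, tolerance_m=1.0):
    """Drones whose GPS fix deviates from the target by more than tolerance."""
    return [d for d, target in assignment.items()
            if math.dist(gps_fixes[d], target) > tolerance_m]

drones = {"d1": (0.0, 0.0), "d2": (10.0, 0.0)}
targets = [(0.0, 5.0), (10.0, 5.0)]
plan = assign_targets(drones, targets)
fixes = {"d1": (0.1, 5.0), "d2": (10.0, 9.0)}  # d2 has gone rogue
print(off_course(plan, fixes))  # ['d2']
```

In a real show the assignment is precomputed per choreography frame rather than greedily, but the monitoring loop, comparing each drone's reported position against its designated spot, is the part that lets a single pilot oversee hundreds of drones.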

Intel launched an indoor version of the Shooting Star program at CES in 2018. The concept is the same, but these drones do not use GPS to determine their location. The result is something even more magical than the outside version because with the Shooting Star Minis, the drones are often directly above the viewers. It’s an incredible experience to watch drones dance several feet overhead. It feels slightly dangerous. That’s the draw.

And that poses a safety concern.

The drone that hit me is light and mostly plastic. It weighs very little and is about 6 inches by 4 inches. A cage surrounds the bottom of the rotors, though not the top. If there’s a power button, I can’t find it. The full-size drones are made out of plastic and Styrofoam.

Safety has always been baked into the Shooting Star programs, but I’m not sure the current protocols are enough.

I was seated on the floor along the back of the venue. Most of the attendees were standing, taking selfies with the performing drones. It was a lovely show.

When the drone came down on my head, it tumbled onto the floor and the rotors continued to spin. A member of the catering staff, walking behind the barrier I was sitting against, reached out and touched the spinning rotors. I’m sure she’s fine, but when her finger touched the spinning rotor, she jumped in surprise. At this point, seconds after it crashed, the drone was upside down and, like an upturned beetle, continued to operate for a few seconds until the rotors shut off.

To be clear, I was not hurt. And that’s not the point. Drone swarm technology is fascinating and could lead to incredible use cases. Swarms of drones could quickly and efficiently inspect industrial equipment and survey crops. And they make for great shows in outside venues. But are they ready to be used inside, above people’s heads? I’m already going bald. I don’t need help.

Committed to privacy, Snips founder wants to take on Alexa and Google, with blockchain

Earlier this year we saw headlines about how users of popular voice assistants like Alexa and Siri continue to face issues when their private data is compromised, or even sent to random people. In May it was reported that Amazon’s Alexa recorded a private conversation and sent it to a random contact. Amazon insists its Echo devices aren’t always recording, but it did confirm the audio was sent.

The story could be a harbinger of things to come when voice becomes more and more ubiquitous. After all, Amazon announced the launch of Alexa for Hospitality, its Alexa system for hotels, in June. News stories like this simply reinforce the idea that voice control is seeping into our daily lives.

The French startup Snips thinks it might have an answer to this issue of security and data privacy. It has built its software to run 100% on-device, independently of the cloud. As a result, user data is processed on the device itself, acting as a potentially stronger guarantor of privacy. Unlike centralized assistants like Alexa and Google Assistant, Snips knows nothing about its users.

Its approach is convincing investors. To date, Snips has raised €22 million in funding from investors like Korelya Capital, MAIF Avenir, BPI France and Eniac Ventures. Created in 2013 by three PhDs, and now employing more than 60 people in Paris and New York, Snips offers its voice assistant technology as a white-labelled solution for enterprise device manufacturers.

It has tested its theories about voice by releasing the results of a consumer poll. The survey of 410 people found that 66% of respondents would be apprehensive about using a voice assistant in a hotel room because of concerns over privacy; 90% said they would like to control the ways corporations use their data, even if it meant sacrificing convenience.

“Consumers are increasingly aware of the privacy concerns with voice assistants that rely on cloud storage — and these concerns will actually impact their usage,” says Dr Rand Hindi, co-founder and CEO at Snips. “However, emerging technologies like blockchain are helping us to create safer and fairer alternatives for voice assistants.”

Indeed, blockchain is very much part of Snips’ future. As Hindi told TechCrunch in May, the company will release a new set of consumer devices independent of its enterprise business. The idea is to create a consumer business that will prompt further enterprise development. At the same time, it will issue a cryptographic token via an ICO to incentivize developers to improve the Snips platform, as an alternative to using data from consumers. The theory goes that this will put it at odds with the approach used by Google and Amazon, which are constantly criticised for invading our private lives merely to improve their platforms.

As a result, Hindi believes that as voice-controlled devices become an increasingly common sight in public spaces, there could be a significant shift in public opinion about how people’s privacy is being protected.

In an interview conducted last month with TechCrunch, Hindi told me the company’s plans for its new consumer product are well advanced; the device will be designed from the beginning to be improved over time using a combination of decentralized machine learning and cryptography.

By using blockchain technology to share data, they will be able to train the network “without ever anybody sending unencrypted data anywhere,” he told me.
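Hindi doesn't spell out the mechanism, but the idea resembles federated learning: each device trains a shared model on its own local data and sends back only a model update, never the raw recordings. A minimal toy sketch, entirely my own simplification and not Snips' actual system:

```python
# Illustrative sketch of decentralized ("federated") training: each device
# fits a shared model on its own local data and shares only a weight update,
# never the raw data. My own toy example, not Snips' implementation.

def local_update(w, local_data, lr=0.1):
    """One on-device gradient step for a 1-D linear model y = w * x."""
    grad = sum(2 * (w * x - y) * x for x, y in local_data) / len(local_data)
    return w - lr * grad

def federated_round(global_w, device_datasets):
    """Each device computes an update locally; the server averages them."""
    updates = [local_update(global_w, data) for data in device_datasets]
    return sum(updates) / len(updates)

# Two devices whose private data both follow y = 2x; the data never leaves them.
devices = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (1.5, 3.0)]]
w = 0.0
for _ in range(50):
    w = federated_round(w, devices)
print(round(w, 2))  # the shared model converges to 2.0
```

The cryptography layer Hindi mentions would sit on top of this, encrypting or aggregating the updates themselves so that even those never leave a device in readable form.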

And “training the network” is where it gets interesting. By issuing a cryptographic token for developers to use, Hindi says they will incentivize devs to work on their platform and process data in a decentralized fashion. They are starting from a good place: he claims they already have 14,000 developers on the platform who will be further incentivized by a token economy.

“Otherwise people have no incentive to process that data in a decentralized fashion, right?” he says.

“We got into blockchain because we’re trying to find a way to get people to participate in decentralized machine learning. We’ve been wanting to get into consumer [devices] for a couple of years but didn’t really figure out the end goal because we had always had this missing element which was: how do you keep making it better over time.”

“This is the main argument for Google and Amazon to pretend that you need to send your data to them, to make the service better. If we can fix this [by using blockchain] then we can offer a real alternative to Alexa that guarantees Privacy by Design,” he says.

“We now have over 14,000 developers building for us and that’s really completely organic growth, zero marketing, purely word of mouth, which is really nice because it shows that there’s a very big demand for decentralized voice assistants, effectively.”

It could be a high-risk strategy. Launching a voice-controlled device is one thing. Layering it with applications produced by developers supposedly incentivized by tokens, especially when crypto prices have crashed, is quite another.

It definitely feels like a moonshot idea, though, and we’ll only really know if Snips can live up to such lofty ideals after the launch.

Sen. Harris tells federal agencies to get serious about facial recognition risks

Facial recognition technology presents myriad opportunities as well as risks, but it seems like the government tends to only consider the former when deploying it for law enforcement and clerical purposes. Senator Kamala Harris (D-CA) has written the Federal Bureau of Investigation, Federal Trade Commission, and Equal Employment Opportunity Commission telling them they need to get with the program and face up to the very real biases and risks attending the controversial tech.

In three letters provided to TechCrunch (and embedded at the bottom of this post), Sen. Harris, along with several other notable legislators, pointed out recent research showing how facial recognition can produce or reinforce bias, or otherwise misfire. This must be considered and accommodated in the rules, guidance, and applications of federal agencies.

Other lawmakers and authorities have sent letters to various companies and CEOs or held hearings, but representatives for Sen. Harris explained that there is a need to advance the issue within the government as well.

Sen. Harris at a recent hearing.

Attention paid to agencies like the FTC and EEOC that are “responsible for enforcing fairness” is “a signal to companies that the cop on the beat is paying attention, and an indirect signal that they need to be paying attention too. What we’re interested in is the fairness outcome rather than one particular company’s practices.”

If this research and the possibility of poorly controlled AI systems aren’t considered in the creation of rules and laws, or in the applications and deployments of the technology, serious harm could ensue. Not just positive harm, such as the misidentification of a suspect in a crime, but negative harm, such as calcifying biases in data and business practices in algorithmic form and depriving those affected by the biases of employment or services.

“While some have expressed hope that facial analysis can help reduce human biases, a growing body of evidence indicates that it may actually amplify those biases,” the letter to the EEOC reads.

Here Sen. Harris, joined by Senators Patty Murray (D-WA) and Elizabeth Warren (D-MA), expresses concern over the growing automation of the employment process. Recruitment is a complex process and AI-based tools are being brought in at every stage, so this is not a theoretical problem. As the letter reads:

Suppose, for example, that an African American woman seeks a job at a company that uses facial analysis to assess how well a candidate’s mannerisms are similar to those of its top managers.

First, the technology may interpret her mannerisms less accurately than a white male candidate.

Second, if the company’s top managers are homogeneous, e.g., white and male, the very characteristics being sought may have nothing to do with job performance but are instead artifacts of belonging to this group. She may be as qualified for the job as a white male candidate, but facial analysis may not rate her as highly because her cues naturally differ.

Third, if a particular history of biased promotions led to homogeneity in top managers, then the facial recognition analysis technology could encode and then hide this bias behind a scientific veneer of objectivity.

If that sounds like a fantasy use of facial recognition, you probably haven’t been paying close enough attention. Besides, even if it’s still rare, it makes sense to consider these things before they become widespread problems, right? The idea is to identify issues inherent to the technology.

“We request that the EEOC develop guidelines for employers on the fair use of facial analysis technologies and how this technology may violate anti-discrimination law,” the Senators ask.

A set of questions also follows (as it does in each of the letters): have there been any complaints along these lines, or are there any obvious problems with the tech under current laws? If facial technology were to become mainstream, how should it be tested, and how would the EEOC validate that testing? Sen. Harris and the others request a timeline of how the Commission plans to look into this by September 28.

Next on the list is the FTC. This agency is tasked with identifying and punishing unfair and deceptive practices in commerce and advertising; Sen. Harris asserts that the purveyors of facial recognition technology may be considered in violation of FTC rules if they fail to test or account for serious biases in their systems.

“Developers rarely if ever test and then disclose biases in their technology,” the letter reads. “Without information about the biases in a technology or the legal and ethical risks attendant to using it, good faith users may be unintentionally and unfairly engaging in discrimination. Moreover, failure to disclose these biases to purchasers may be deceptive under the FTC Act.”

Another example is offered:

Consider, for example, a situation in which an African American female in a retail store is misidentified as a shoplifter by a biased facial recognition technology and is falsely arrested based on this information. Such a false arrest can cause trauma and substantially injure her future housing, employment, credit, and other opportunities.

Or, consider a scenario in which a young man with a dark complexion is unable to withdraw money from his own bank account because his bank’s ATM uses facial recognition technology that does not identify him as their customer.

Again, this is very far from fantasy. On stage at Disrupt just a couple weeks ago Chris Atageka of UCOT and Timnit Gebru from Microsoft Research discussed several very real problems faced by people of color interacting with AI-powered devices and processes.

The FTC actually had a workshop on the topic back in 2012. But, amazing as it sounds, this workshop did not consider the potential biases on the basis of race, gender, age, or other metrics. The agency certainly deserves credit for addressing the issue early, but clearly the industry and topic have advanced and it is in the interest of the agency and the people it serves to catch up.

The letter ends with questions and a deadline rather like those for the EEOC: have there been any complaints? How will they assess and address potential biases? Will they issue “a set of best practices on the lawful, fair, and transparent use of facial analysis?” The letter is cosigned by Senators Richard Blumenthal (D-CT), Cory Booker (D-NJ), and Ron Wyden (D-OR).

Last is the FBI, over which Sen. Harris has something of an advantage: the Government Accountability Office issued a report on the very topic of facial recognition tech that had concrete recommendations for the Bureau to implement. What Harris wants to know is, what have they done about these, if anything?

“Although the GAO made its recommendations to the FBI over two years ago, there is no evidence that the agency has acted on those recommendations,” the letter reads.

The GAO had three major recommendations. Briefly summarized: do some serious testing of the Next Generation Identification-Interstate Photo System (NGI-IPS) to make sure it does what they think it does, follow that with annual testing to make sure it’s meeting needs and operating as intended, and audit external facial recognition programs for accuracy as well.

“We are also eager to ensure that the FBI responds to the latest research, particularly research that confirms that face recognition technology underperforms when analyzing the faces of women and African Americans,” the letter continues.

The list of questions here is largely in line with the GAO’s recommendations, merely asking the FBI to indicate whether and how it has complied with them. Has it tested NGI-IPS for accuracy in realistic conditions? Has it tested for performance across races, skin tones, genders, and ages? If not, why not, and when will it? And in the meantime, how can it justify usage of a system that hasn’t been adequately tested, and in fact performs poorest on the targets it is most frequently loosed upon?
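The demographic testing those questions describe amounts to a simple audit: score the system separately for each subgroup and report the gap. The data, group labels and output below are hypothetical, purely to illustrate the audit, and not drawn from any actual FBI or NGI-IPS evaluation:

```python
# Illustrative sketch of a demographic performance audit: measure a
# face-matcher's accuracy separately per subgroup and flag large gaps.
# Hypothetical data; not an actual FBI/NGI-IPS test.

def per_group_accuracy(results):
    """results: list of (group, was_correct) pairs -> {group: accuracy}."""
    totals, correct = {}, {}
    for group, ok in results:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (1 if ok else 0)
    return {g: correct[g] / totals[g] for g in totals}

def accuracy_gap(acc):
    """Largest difference in accuracy between any two subgroups."""
    return max(acc.values()) - min(acc.values())

# 100 match attempts per (hypothetical) subgroup.
results = [("group_a", True)] * 95 + [("group_a", False)] * 5 \
        + [("group_b", True)] * 80 + [("group_b", False)] * 20
acc = per_group_accuracy(results)
print(acc)                        # {'group_a': 0.95, 'group_b': 0.8}
print(round(accuracy_gap(acc), 2))  # 0.15 -> a 15-point disparity
```

A real evaluation would also break out false-positive and false-negative rates per group, since a system can have equal overall accuracy while erring in very different ways for different populations.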

The FBI letter, which has a deadline for response of October 1, is cosigned by Sen. Booker and Cedric Richmond, Chair of the Congressional Black Caucus.

These letters are just a part of what certainly ought to be a government-wide plan to inspect and understand new technology and how it is being integrated with existing systems and agencies. The federal government moves slowly, even at its best, and if it is to avoid or help mitigate real harm resulting from technologies that would otherwise go unregulated it must start early and update often.


You can find the letters in full below.

EEOC:

SenHarris – EEOC Facial Rec… on Scribd

FTC:

SenHarris – FTC Facial Reco… on Scribd

FBI:

SenHarris – FBI Facial Reco… on Scribd

Here’s what Google’s $149 Home Hub smart display will reportedly look like

Google is reportedly getting ready to launch new hardware at its October 9 event, and we just learned a lot more about one product that might be on the way.

It was rumored that Google was working on its own Smart Display; now we’ve got images of the Google Home Hub and details about its price tag via a report from Android Authority.

via Android Authority

The device certainly looks like a Google Home product, with all the fabric anyone could ask for and then far, far more on top of it.

It’s rocking a 7-inch screen and will cost just $149, quite a bit cheaper than the 8-inch Lenovo Smart Display, currently the cheapest option at $199. The 10-inch Lenovo variant ships for $249, as does the stereo-speakered JBL Link View.

Having played around with Lenovo’s product, I can say Google has some very pretty software for its Smart Displays, but there are some strange quirks: the screen is basically superfluous by design, because the software can never assume that the person speaking can see the screen when an answer is being given. Google has its work cut out for it, but it might be in its best interest to introduce some light touch interactions that let you perform more actions without speaking at all; otherwise, the screen is always going to feel a bit misplaced, aside from pulling up a YouTube video or watching a slideshow.

What will be interesting to see is what exclusive software wizardry the device has, if any. The report details that the device will not have a camera like other Smart Displays, which is a bit odd given that a big selling point of the category was to bolster Google’s Duo video call service; Google seems to have decided the feature isn’t worth the inexpensive components or the potential privacy overhead.

If the rumored price of $149 proves accurate and Google opts for most of the internals that the partner Smart Displays have, this will be a very cool device at a great price that will not get used very often. It is wildly unclear what the point of this product vertical is, and without breaking it free of its software prison, Google seems to be missing a big opportunity that could be fulfilled by whatever its competitors eventually release.

This report seems pretty solid, but we only have to wait a couple more weeks to see what Google has in store. TechCrunch will be keeping up with the details at the company’s Pixel 3 hardware event on October 9.

Nintendo is offering an exclusive Fortnite bundle with the Switch

Fortnite has taken the world by storm. In fact, the game is so popular that Epic has released versions for PC, Xbox, PS4, iOS, Android and the Nintendo Switch, making the game about as accessible as possible.

The popularity of the game stems from the general popularity of the Battle Royale genre and popular streamers like Ninja, who have made the game so much fun to watch. But it also comes from the fun, and often fleeting, skins, dances and pickaxes the game offers in its Item Shop.

On October 5th, folks interested in the Switch can pick up some extra Fortnite swag.

Nintendo is releasing a bundle that will include an exclusive Fortnite skin, glider and pickaxe, as well as an extra 1,000 V-Bucks. To be clear, 1,000 V-Bucks is the equivalent of $10 and won’t get you much from the Item Shop.

Plus, as pointed out by The Verge, Nintendo has offered several different bundles that allow customers to pick up a Switch for $329 alongside one of a few games. In most cases, those games cost money, whereas Fortnite is free to play.

But the Nintendo Switch bundle is the only way to get your hands on the Switch gear that comes with it.

This isn’t the first time that Epic has given out exclusive gear to players using different hardware or services. There is an exclusive Twitch Prime skin, a Sony PS4 skin, and even a skin for Galaxy Note 9 owners.

The bundle will be available for $329 on October 5.

Maison Me nabs $1M from Google’s Assistant fund and more for made-to-order clothes

Amazon’s focus on using its camera-enabled Echo devices to help you figure out what to wear every day has highlighted how the tech world sees a big opportunity in building fashion-related tools and services beyond the now-ubiquitous but still quite basic business of e-commerce, where clothes are displayed on websites and ordered for delivery to your home. Now one of the latest startups in the space has raised a seed round from an interesting group of investors.

Maison Me, a startup that has built a platform that lets people provide either a few clues or very specific details about a piece of clothing they would like, and then makes it to order, has raised $1 million to build out its business from backers that include Founders Fund, the new Google Assistant investment program, Gagarin Capital and others that are not being named for now. Maison Me’s co-founder and CEO Anastasia Sartan said the startup will be using the money to continue optimising its services and building for new platforms such as Google Home.

That particular device is a notable one to build for: currently the Google Home has neither a camera nor a screen, but many reports speculate that Google will launch a new version very soon that does. Meanwhile, Amazon is level-pegging with me-too functionality of its own.

Google, of course, is not saying anything about any upcoming hardware, and sees Maison Me as something useful for the Google Home speaker we know today, as well as for the displays out there not made by Google that are powered by the Assistant.

“A lot of people start their daily routines asking their Google Home speakers for a weather forecast, looking for some help before they pick out their outfits for the day,” said Ilya Gelfenbeyn, head of the Google Assistant investment program, in a statement. “Smart Displays with the Google Assistant make it possible to build services and recommendations in such a visual industry like fashion, and we believe that personalized what-to-wear recommendations can really simplify the morning routines for people.”

Maison Me will have the first Google app for its clothing service ready at the beginning of November. But Maison Me (and Epytom, which is the name of the actual startup) has been in business for a while already, starting first with a chatbot that helped people figure out what to wear. That data, Sartan said, has been used to help feed the algorithms for its clothing-making service. The bot is still active and has racked up 300,000 users so far.

“Google is emphasizing routines in voice assistants,” Sartan explained. “In the morning, 78 percent of people ask about the weather. But why? It is to figure out what to wear. Knowing the location and preferences, we can help here.”

The pitch that Maison Me is making for the clothing service is that it can go from specs to delivery in 15 days — longer than a Prime-style turnaround of a day, or nipping out to the shops for a quick purchase, but potentially more rewarding and individualised. The garments, she said, are all made in the US.

Within those 15 days there are a number of stages. After a conversation about the garment-to-come, which includes questions about a buyer’s interests beyond fashion, the data goes through algorithms created by Maison Me and is then handed off to a human (not robot!) designer. A custom sketch is made for approval or modifications. All of that costs $15.

Then comes the making of the garment. A professional tailor — again, not a bot or computer vision program — takes a customer’s measurements and starts to produce the clothes, which end up with the user within 15 days.

One of the big issues that Sartan says she was trying to tackle is “dead stock” — the massive amount of overproduction that fashion houses and retailers create in the process of making mass-produced clothes. The creation and “disposal” of dead stock — in order for a brand to continue to have cachet — has been the subject of some controversy in the fashion industry, since not only does it fundamentally feel wasteful, but there is an environmental impact as well.

“Dead stock is the burden of the global fashion industry and the environment, and so are the unmet customers’ expectations regarding the fit, quality, and cost. Our goal now is to create clothes you reach for the most, because they fit your body and life perfectly, go with the rest of your wardrobe, and are truly worth their price,” Sartan said.

Zumper raises $46M more to take on Zillow and the rest with its apartment rental platform

While the property market in the US appears headed for a slowdown, a startup that’s homing in on rentals is doubling down on growth. Zumper, a San Francisco-based startup that has built an end-to-end platform to source, rent out and help service apartments across the US, has raised another $46 million in funding — money it plans to use to continue enhancing the services it offers to renters and landlords, and to continue its growth nationally. The funding comes on the back of a strong period of growth for the company, which has around 1 million listings on offer, sees 8 million visitors in an average month, and counts a quarter of a million landlords using Zumper to connect with those renters.

This Series C was led by media giant Axel Springer and growth-stage investor Stereo Capital, with participation also from previous investors Dawn Capital, Kleiner Perkins Caufield & Byers (KPCB), Breyer Capital, Scott Cook, Goodwater Capital and xfund. Zumper has raised $90 million to date. It’s not revealing its valuation, but a source tells me it’s more than double its previous valuation, which was just over $100 million two years ago. That likely puts it at over $200 million (but probably less than $300 million).

Axel Springer, with its vast experience in media, and specifically publishing and classifieds, is an obvious strategic backer here. But Blackstone, I’ll note, is a strategic investor of sorts, too. “As a client of Zumper and one of the largest landlords in the US, Blackstone is keenly focused on technological innovation to improve the rental process for both renters and landlords,” said Sean Muellers of Blackstone who will act as a board observer, in a statement. “Partnering with the best-in-class Zumper team to effectuate this industry change is the definition of a win-win situation.”

Anthemos Georgiades, CEO and cofounder of Zumper, says that the company was built in 2014 on the premise that there were already too many portals online that helped people search for apartments. “Everyone had done that already,” he said, so instead the startup decided that it would focus on simplifying, speeding up and overall improving the whole process of renting.

As anyone who has rented, or rents out, an apartment knows, search aimed at apartment seekers (and “lead generation” for the landlords) is just the beginning of the process. There are many hoops to jump through after that with credit checks, lease signing, moving out of your old place, moving into a new one, and signing up for local services in the new place.

With that in mind, in addition to basic apartment searches, the Zumper platform has gradually been trying to address all of that. Today, it includes a prequalification tool that runs credit checks on would-be tenants, and Zumper is also running a beta in which would-be tenants can instantly leave a deposit on a place while touring it, taking it off the market. It’s currently building one tool that will let landlords generate leases and have them signed via Zumper, and another so that tenants can pay rent through the platform.

Further on down the line, Georgiades said that the plan will be to add on more “transactional” services to increase its margins further, such as helping tenants sign up for utilities and other services, and perhaps buy or rent things for their apartments. He said that there are already talks in the works with “half a dozen” companies to integrate these kinds of services, but he wouldn’t say when they would come online.

Interestingly, Zumper’s focus on taking a different approach to tackling the rental market is not without precedent. Zillow is offering an increasing range of services beyond basic search, and other startups like Compass have also focused on how they can provide more than simple listings to would-be tenants to attract top clients on both sides of the marketplace. (Compass, however, puts more of an emphasis on the broker experience.) Others in Europe, like Homelike, are also looking at more innovative ways of providing an online platform for renters.

Zumper’s typical and target tenant is someone looking for a one-year lease, a market it feels has not really been addressed well enough yet in the US.

“Airbnb has improved the experience in short-term, but no one is doing this in long-term rental,” Georgiades said.