Reminder: Extra Crunch Event Discounts for Summer Party
TechCrunch’s annual Summer Party is just around the corner — come meet the staff at the Park Chalet beer garden on the Pacific coast in San Francisco next Thursday evening. We handed out 50 free tickets to EC subscribers by email this past week, but if you weren’t able to snag one, be sure to use your event discount (part of the annual EC subscription offering) by emailing your member customer service representative at email@example.com.
How US national security is holding the internet hostage
I have written quite a bit about CFIUS, the inter-agency process for reviewing venture capital investments and company acquisitions made by foreigners. Now, our special correspondent Mark Harris explores a much less well-known group called Team Telecom, which has been actively reviewing — and denying — requests for additional fiber bandwidth beneath the Pacific Ocean.
Introduced a few I/Os back, Fast Pair is Google’s attempt to make its own mark on the post-AirPod headphone landscape. Many of the features are similar to Apple’s offerings, but Google’s got a leg up in one key way: third-party hardware. Like Android, the company’s focused on bringing Fast Pair to as many manufacturers as possible.
That list now includes Libratone, Jaybird, JBL (four models), Cleer, LG (four models), Anker (one pair of headphones and a speaker) and, of course, Google’s own Pixel Buds. This week, the company announced a number of key features coming to Fast Pair headphones.
New this time around is Find My Device functionality, aimed at helping owners locate missing headsets. The app will show the time and location they were last in use, and will send out a chime from buds that are still in Bluetooth range.
Also new is individual battery life for the buds and case. Opening the case near a paired handset will pop up that information. All of the above features will arrive on the 15 or so headphone models that currently support Fast Pair.
Spotting and diagnosing cancer is a complex and difficult process even for the dedicated medical professionals who do it for a living. A new tool from Google researchers could improve the process by providing what amounts to reverse image search for suspicious or known cancerous cells. But it’s more than a simple matching algorithm.
Part of the diagnosis process is often examining tissue samples under a microscope and looking for certain telltale signals or shapes that may indicate one or another form of cancer. This can be a long and arduous process because every cancer and every body is different, and the person inspecting the data must not only look at the patient’s cells but also compare them to known cancerous tissues from a database or even a printed book of samples.
As has been amply demonstrated for years now, matching similar images to one another is a job well suited to machine learning agents. It’s what powers things like Google’s reverse image search, where you put in one picture and it finds ones that are visually similar. But this technique has also been used to automate processes in medicine, where a computer system can highlight areas of an X-ray or MRI that have patterns or features it has been trained to recognize.
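Under the hood, reverse image search of this kind typically reduces each image to a feature vector (an embedding) and ranks a corpus by vector similarity. The sketch below illustrates the idea with cosine similarity over toy four-dimensional vectors; in a real system like Google's, the embeddings would come from a trained convolutional network, and all names and numbers here are illustrative.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest_images(query_vec, corpus_vecs, top_k=3):
    """Rank corpus images by similarity to the query embedding.

    In a production system the vectors would be produced by a trained
    CNN and searched with an approximate-nearest-neighbor index; these
    toy vectors just demonstrate the ranking step.
    """
    scores = [(i, cosine_similarity(query_vec, v))
              for i, v in enumerate(corpus_vecs)]
    scores.sort(key=lambda s: s[1], reverse=True)
    return scores[:top_k]

# Toy 4-dimensional "embeddings" for a query region and three corpus images.
query = np.array([1.0, 0.0, 1.0, 0.0])
corpus = [
    np.array([0.9, 0.1, 1.1, 0.0]),   # visually similar
    np.array([0.0, 1.0, 0.0, 1.0]),   # dissimilar
    np.array([1.0, 0.0, 0.9, 0.1]),   # similar
]
ranked = nearest_images(query, corpus, top_k=2)
print(ranked)  # most similar corpus indices first
```

The same ranking machinery works whether the query is a whole photo or, as in SMILY's case, a boxed region of a pathology slide.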
That’s all well and good, but the complexity of cancer pathology rules out simple pattern recognition between two samples. One may be from the pancreas, another from the lung, for example, meaning the two situations might be completely different despite being visually similar. And an experienced doctor’s “intuition” is not to be replaced, nor would the doctor suffer it to be replaced.
Aware of both the opportunities and limitations here, Google’s research team built SMILY (Similar Medical Images Like Yours), which is a sort of heavily augmented reverse image search built specifically for tissue inspection and cancer diagnosis.
A user puts into the system a new sample from a patient — a huge, high-resolution image of a slide on which a dyed section of tissue is laid out. (This method is standardized and has been for a long time — otherwise how could you compare any two?)
Once it’s in the tool, the doctor can inspect it as they would normally, zooming in and panning around. When they see a section that piques their interest, they can draw a box around it and SMILY will perform its image-matching magic, comparing what’s inside the box to the entire corpus of the Cancer Genome Atlas, a huge database of tagged and anonymized samples.
Similar-looking regions pop up in the sidebar, and the user can easily peruse them. That’s useful enough right there. But as the researchers found out while they were building SMILY, what doctors really needed was to be able to get far more granular in what they were looking for. Overall visual similarity isn’t the only thing that matters; specific features within the square may be what the user is looking for, or certain proportions or types of cells.
As the researchers write:
Users needed the ability to guide and refine the search results on a case-by-case basis in order to actually find what they were looking for… This need for iterative search refinement was rooted in how doctors often perform “iterative diagnosis” — by generating hypotheses, collecting data to test these hypotheses, exploring alternative hypotheses, and revisiting or retesting previous hypotheses in an iterative fashion. It became clear that, for SMILY to meet real user needs, it would need to support a different approach to user interaction.
To this end the team added extra tools that let the user specify much more closely what they are interested in, and therefore what type of results the system should return.
First, a user can select a single shape within the area they are concerned with, and the system will focus only on that, ignoring other features that may only be distractions.
Second, the user can select from among the search results one that seems promising and the system will return more like it, less closely tied to the original query. This lets the user go down a sort of rabbit hole of cell features and types, doing that “iterative” process the researchers mentioned above.
And third, the system was trained to understand when certain features are present in the search result, such as fused glands, tumor precursors, and so on. These can be included or excluded in the search — so if someone is sure it’s not related to this or that feature, they can just sweep all those examples off the table.
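That third refinement tool can be thought of as a post-filter over the ranked results. Here is a minimal sketch, assuming each hit carries feature tags; the tag names ("fused_glands", "tumor_precursor") and the tuple layout are stand-ins for illustration, not SMILY's actual labels or data model.

```python
def refine_results(results, must_have=(), exclude=()):
    """Filter ranked search hits by pathology feature tags.

    `results` is a list of (image_id, similarity, tags) tuples.
    Hits must contain every tag in `must_have` and none in `exclude`.
    """
    must, out = set(must_have), set(exclude)
    kept = [r for r in results
            if must <= set(r[2]) and not (out & set(r[2]))]
    # Preserve the similarity ranking among the survivors.
    return sorted(kept, key=lambda r: r[1], reverse=True)

hits = [
    ("slide_a", 0.95, ["fused_glands"]),
    ("slide_b", 0.91, ["tumor_precursor"]),
    ("slide_c", 0.88, ["fused_glands", "tumor_precursor"]),
]
# Keep fused glands, sweep tumor precursors off the table.
print(refine_results(hits,
                     must_have=["fused_glands"],
                     exclude=["tumor_precursor"]))
```

Running the include/exclude pass after the similarity search, rather than re-querying the whole corpus, is what makes the iterative "rabbit hole" workflow cheap: each refinement only touches the already-retrieved hits.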
In a study of pathologists given the tool to use, the results were promising. The doctors appeared to adopt the tool quickly, not only using its official capabilities but doing things like reshaping the query box to test the results or see if their intuition on a feature being common or troubling was right. “The tools were preferred over a traditional interface, without a loss in diagnostic accuracy,” the researchers write in their paper.
It’s a good start, but clearly still only an experiment. The processes used for diagnosis are carefully guarded and vetted; you can’t just bring in a random new tool and change up the whole thing when people’s lives are on the line. Rather, this is merely a bright start for “future human-ML collaborative systems for expert decision-making,” which may at some point be put into service at hospitals and research centers.
Doronichev compared Google’s commitment to Stadia to services like Gmail, Docs, Music, Movies and Photos, which have persisted for years with no sign of imminent shutdown. “We’ve been investing a ton in tech, infrastructure, and partnerships [for Stadia] over the past few years,” Doronichev said. “Nothing in life is certain, but we’re committed to making Stadia a success… Of course, it’s OK to doubt my words. There’s nothing I can say now to make you believe if you don’t. But what we can do is to launch the service and continue investing in it for years to come.”
Doronichev also compared the transition to streaming gaming to similar transitions that have already largely taken place in the movie and music industries, and with cloud storage of personal files like photos and written documents. While acknowledging that “moving to the cloud is scary,” he also insisted that “eventually all of our games will be safely in the cloud too and we’ll feel great about it.”
A future where drones can easily and cheaply do many useful things, such as delivering packages, undertaking search and rescue missions and ferrying urgent medical supplies (not to mention unclogging our roads with flying taxis), seems like a future worth shooting for. But before all this can happen, we need to make sure the thousands of drones in the sky are operating safely. A drone needs to be able to automatically detect when it is entering the flight path of another drone, a manned aircraft or a restricted area, and to alter its course accordingly so it can safely continue its journey. The alternative is the chaos and danger of the recent incidents of drones buzzing major airports, for instance.
There is a race on to produce just such a system. Wing LLC, an offshoot of X, the moonshot company owned by Google’s parent Alphabet, has announced a platform it calls OpenSky that it hopes will become the basis for a full-fledged air-traffic control system for drones. So far, it has only been approved to manage drone flights in Australia, although it is also working on demonstration programs with the US Federal Aviation Administration.
But this week Altitude Angel, a UK-based startup backed by Seraphim Capital with $4.9M in funding, launched its own UTM (Unmanned Traffic Management) system.
Its ‘Conflict Resolution System’ is essentially automatic collision avoidance technology. It means that any drone flying beyond the line of sight will remain safe in the sky, neither crossing existing flight plans nor straying into restricted areas. Because it is automated, Altitude Angel says the technology will prevent mid-air collisions: by knowing where everything else is in the sky, there will be no surprises.
Altitude Angel’s CRS has both ‘Strategic’ and ‘Tactical’ aspects.
The Strategic part happens during the planning stages of a flight, i.e. when someone is submitting flight plans and requesting airspace permission. The system analyses the proposed route and cross-references it with any other flight plans that have been submitted, along with any restricted areas on the ground, then proposes a reroute to eliminate any flight plan conflicts. Eventually, a drone operator will do all this from an app on their phone, and approval to fly will be automated.
The next stage is Tactical. This happens while the drone is actually in flight. The dynamic system continuously monitors the airspace around the aircraft, both for other aircraft and for changes in the airspace (such as a temporary flight restriction around a police incident), and automatically adjusts the route.
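At its core, the strategic pre-flight check amounts to cross-referencing time-stamped routes and flagging points where two aircraft would come too close. The sketch below shows that idea in miniature on a flat local grid; the separation distance, coordinate scheme and data layout are all assumptions for illustration, and a real UTM system like Altitude Angel's would use geodetic coordinates, interpolate between fixes and account for altitude.

```python
import math

MIN_SEPARATION_M = 50.0  # illustrative minimum horizontal separation

def conflicts(plan_a, plan_b, min_sep=MIN_SEPARATION_M):
    """Flag times at which two flight plans come too close.

    Each plan is a list of (t_seconds, x_m, y_m) samples on a shared
    local grid. Returns the timestamps where the horizontal distance
    between the two drones falls below `min_sep`.
    """
    b_by_t = {t: (x, y) for t, x, y in plan_b}
    found = []
    for t, x, y in plan_a:
        if t in b_by_t:
            bx, by = b_by_t[t]
            if math.hypot(x - bx, y - by) < min_sep:
                found.append(t)
    return found

plan_1 = [(0, 0.0, 0.0), (60, 100.0, 0.0), (120, 200.0, 0.0)]
plan_2 = [(0, 500.0, 0.0), (60, 130.0, 0.0), (120, -200.0, 0.0)]
print(conflicts(plan_1, plan_2))  # [60]: only 30 m apart at t=60
```

Given such a conflict list, the strategic stage would propose a reroute or a delayed departure; the tactical stage re-runs the same kind of check continuously against live traffic and airspace changes.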
The key aspect of this CRS is that drones and drone pilots can store flight plans with a globally distributed service, without needing to exchange private or potentially sensitive data with each other, while benefiting from immediate pre-flight conflict resolution advice.
Richard Parker, Altitude Angel’s CEO and founder, says: “The ability for drones and automated aircraft to strategically plan flights, be made aware of potential conflict, and alter their route accordingly is critical in ensuring safety in our skies. This first step is all about pre-flight coordination, between drone pilots, fleet operators and other UTM companies. Being able to predict and resolve conflict mid-flight by providing appropriate and timely guidance will revolutionize automated flight. CRS is one of the critical building blocks on which the drone and automated flight industries will grow.”
Altitude Angel won’t be the last to unveil a CRS of this type, but it’s instructive that there are startups confident of taking on the mighty Google and Amazon – both of which have similar drone delivery plans – to build this type of platform.
Team Telecom, a shadowy US national security unit tasked with protecting America’s telecommunications systems, is delaying plans by Google, Facebook and other tech companies for the next generation of international fiber optic cables.
Team Telecom is comprised of representatives from the departments of Defense, Homeland Security, and Justice (including the FBI), who assess foreign investments in American telecom infrastructure, with a focus on cybersecurity and surveillance vulnerabilities.
Team Telecom works at a notoriously sluggish pace, taking over seven years to decide that letting China Mobile operate in the US would “raise substantial and serious national security and law enforcement risks,” for instance. And while Team Telecom is working, applications are stalled at the FCC.
The on-going delays to submarine cable projects, which can cost nearly half a billion dollars each, come with significant financial impacts. They also cede advantage to connectivity projects that have not attracted Team Telecom’s attention – such as the nascent internet satellite mega-constellations from SpaceX, OneWeb and Amazon.
Team Telecom’s investigations have long been a source of tension within Silicon Valley. Google’s subsidiary GU Holdings Inc has been building a network of international submarine fiber-optic cables for over a decade. Every cable that lands on US soil is subject to Team Telecom review, and each one has faced delays and restrictions.
The U.S. Federal Trade Commission is considering an update to the laws governing children’s privacy online, known as the COPPA Rule (the Children’s Online Privacy Protection Act). The Rule first went into effect in 2000 and was amended in 2013 to address changes in how children use mobile devices and social networking sites. Now, the FTC believes it may be due for more revisions. The organization is seeking input and comments on possible updates, some of which are specifically focused on how to address sites that aren’t necessarily aimed at children, but have large numbers of child users.
The advocacy groups allege that YouTube is hiding behind its terms of service, which claim YouTube is “not intended for children under 13” — a statement that’s clearly no longer true. Today, the platform is filled with videos designed for viewing by kids. Google even offers a YouTube Kids app aimed at preschool- to tween-aged children, but it’s optional. Kids can freely browse YouTube’s website and often interact with the service via the YouTube TV app — a platform where YouTube Kids has a limited presence.
According to the letter written by the Campaign for a Commercial-Free Childhood (CCFC) and the Center for Digital Democracy (CDD), Google has now collected personal information from nearly 25 million children in the U.S., and it used this data to engage in “very sophisticated digital marketing techniques.”
The groups want YouTube to delete the children’s data, set up an age-gate on the site, and separate out any kids content into its own app where YouTube will have to properly follow COPPA guidelines.
These demands are among those pushing the FTC to this action.
The Commission says it wants input as to whether COPPA should be updated to better address websites and online services that are not traditionally aimed at children but are used by kids, as well as whether these “general audience platforms” should have to identify and police the child-directed content that’s uploaded by third parties.
In other words, should the FTC amend COPPA so it can protect the privacy of the kids using YouTube?
“In light of rapid technological changes that impact the online children’s marketplace, we must ensure COPPA remains effective,” said FTC Chairman Joe Simons, in a published statement. “We’re committed to strong COPPA enforcement, as well as industry outreach and a COPPA business hotline to foster a high level of COPPA compliance. But we also need to regularly revisit and, if warranted, update the Rule,” he added.
While YouTube is a key focus, the FTC will also seek comment on whether there should be an exception for parental consent for the use of educational technology in schools. And it wants to better understand the implications for COPPA in terms of interactive media, like interactive TV (think Netflix’s Minecraft: Story Mode, for example), or interactive gaming.
More broadly, the FTC wants to know how COPPA has impacted the availability of sites and services aimed at children, it says.
The decision to initiate a review of COPPA was a unanimous decision from the FTC’s five commissioners, which includes three Republicans and two Democrats.
Led by Simons, the FTC in February took action against Musical.ly (now TikTok), by issuing a record $5.7 million fine for its COPPA violations. Similar to YouTube, the app was used by a number of under-13 kids without parental consent. The company knew this was the case, but continued to collect the kids’ personal information, regardless.
“This record penalty should be a reminder to all online services and websites that target children: We take enforcement of COPPA very seriously, and we will not tolerate companies that flagrantly ignore the law,” Simons had said at the time.
The settlement with TikTok required the company to delete children’s videos and data and restrict underage users from being able to film videos.
It’s unclear why the FTC can’t now require the same of YouTube, given the similarities between the two services, without amending the law.
“They absolutely can and should fine YouTube, not to mention force YouTube to make significant changes, under the current regulations,” says Josh Golin, the Executive Director for CCFC. “As for the YouTube decision – by far the most important COPPA case in the agency’s history – it’s extremely concerning that the Commission appears to be signaling they do not have the authority under the current rules to hold YouTube accountable,” he says.
“COPPA rules could use some updating but the biggest problem with the law is the FTC’s lack of enforcement, which is something the Commission could address right away without a lengthy comment period,” Golin adds.
Bug hunting can be a lucrative gig. Depending on the company, a serious bug reported through the proper channels can earn whoever found it first tens of thousands of dollars.
Google launched a bug bounty program for Chrome in 2010. Today they’re increasing the maximum rewards for that program by 2-3x.
Rewards in Chrome’s bug bounty program vary considerably based on how severe a bug is and how detailed your report is — a “baseline” report with fewer details will generally earn less than a “high-quality” report that does things like explain how a bug might be exploited, why it’s happening, and how it might be fixed. You can read about how Google rates reports right here.
But in both cases, the potential reward size is being increased. The maximum payout for a baseline report is increasing from $5,000 to $15,000, while the maximum payout for a high quality report is being bumped from $15,000 to $30,000.
There’s one type of exploit that Google is particularly interested in: those that compromise a Chromebook or Chromebox device running in guest mode, and that aren’t fixed with a quick reboot. Google first offered a $50,000 reward for this type of bug, increasing it to $100,000 in 2016 after no one had managed to claim it. Today they’re bumping it to $150,000.
They’ve also introduced a new exploit category for Chrome OS rewards: lockscreen bypasses. If you can get around the lockscreen (by pulling information out of a locked user session, for example), Google will pay out up to $15,000.
Google pays additional rewards for any bugs found using its “Chrome Fuzzer Program” — a program that lets researchers write automated tests and run them on lots and lots of machines in the hopes of finding a bug that only shows up at much larger scales. The bonus for bugs found through the Fuzzer program will be increased from $500 to $1,000 (on top of whatever reward you’d normally get for a bug in that category).
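Fuzzing in this sense means throwing large volumes of random or mutated inputs at a target and watching for unexpected crashes. The toy harness below shows the idea in miniature; Chrome's real fuzzers are vastly more sophisticated, and the parser, exception policy and parameters here are all illustrative assumptions.

```python
import random

def parse_length_prefixed(data: bytes) -> bytes:
    """Toy parser under test: the first byte declares the payload length."""
    if not data:
        raise ValueError("empty input")
    n = data[0]
    payload = data[1:1 + n]
    if len(payload) != n:
        raise ValueError("truncated payload")
    return payload

def fuzz(target, runs=1000, max_len=8, seed=0):
    """Feed random byte strings to `target`, collecting crashing inputs.

    ValueError is treated as the parser's documented way of rejecting
    malformed input, so only unexpected exception types count as bugs.
    """
    rng = random.Random(seed)  # fixed seed so failures are reproducible
    crashes = []
    for _ in range(runs):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(max_len)))
        try:
            target(data)
        except ValueError:
            pass  # expected rejection of malformed input
        except Exception:
            crashes.append(data)
    return crashes

print(len(fuzz(parse_length_prefixed)))  # 0: no unexpected crashes found
```

The "larger scales" point in the paragraph above is exactly why such harnesses are distributed across many machines: a bug hit once per billion inputs is effectively invisible at 1,000 runs but routine at cluster scale.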
Google says that it’s paid out over $5M in bug bounties through its Chrome Vulnerability Rewards Program since it was introduced in 2010. As of February of this year, the company had paid out over $15M across all of their bug bounty programs.