CrowdStrike, the developer of a security technology that looks at changes in user behavior on networked devices and uses that information to identify potential cyber threats, has reached a $3 billion valuation on the back of a new $200 million round of funding.
The company’s hosted endpoint security technology has seen tremendous adoption worldwide, and that popularity won the attention of General Atlantic, Accel and IVP, which co-led the company’s latest round. Previous investors March Capital and CapitalG both participated in the new financing.
For companies seeing the number of devices that are accessing their corporate networks proliferate rapidly, the CrowdStrike hosted security technology is one of several potential fixes to what’s becoming a significant problem.
For CrowdStrike, that has meant doubling revenues and headcount, and winning contracts with over 16% of Fortune 1000 companies and 20% of the Fortune 500.
The company claims that its software processes over 100 billion “security events” a day and its automated threat detection service makes 2.3 million decisions each second.
Other security companies like Cylance and Carbon Black have raised hundreds of millions for similar technologies. Indeed, the security market remains hotly contested in part because no vendor has yet come up with a silver bullet for cyber attacks, even as the number of attacks continues to proliferate.
Many chief security officers at big companies have mandates to only work with vendors that can replace at least three existing technologies that they’re already deploying, according to sources in the security industry.
In a blog post announcing the company’s new round, chief executive George Kurtz acknowledged the increasingly complex security environment that companies face, calling it “more global and dangerous” with lines blurring between “nation state and criminal adversaries”.
That’s why security companies like Cylance, Carbon Black and CrowdStrike have raised over $800 million between them. And why security remains such an attractive area for new venture investment.
The ongoing shift of emphasis in the cyber security industry from defensive, reactive actions towards pro-active detection and response has fueled veteran Finnish security company F-Secure’s acquisition of MWR InfoSecurity, announced today.
F-Secure is paying £80 million (€91.6M) in cash to purchase all outstanding shares in MWR InfoSecurity, funding the transaction with its own cash reserves and a five-year bank loan. In addition, the terms include an earn-out of up to £25M (€28.6M) in cash, payable 18 months after completion, subject to the achievement of agreed business targets for the period from 1 July 2018 until 31 December 2019.
F-Secure says the acquisition will enable it to offer its customers access to the more offensive skillsets needed to combat targeted attacks — specialist capabilities that most companies are not likely to have in-house.
It points to endpoint detection and response (EDR) solutions and managed detection and response (MDR) services as among the fastest-growing segments in the security market. And it says the acquisition makes it the largest single European source of cyber security services and detection and response solutions, positioning it to cater to both mid-market companies and large enterprises globally.
“The acquisition brings MWR InfoSecurity’s industry-renowned technologies to F-Secure making our detection and response offering unrivaled,” said F-Secure CEO Samu Konttinen in a statement. “Their threat hunting platform (Countercept) is one of the most advanced in the market and is an excellent complement to our existing technologies.”
As well as having experts in-house skilled in offensive techniques, MWR InfoSecurity — a UK company that was founded in 2002 — is well known for its technical expertise and research.
And F-Secure says it expects learnings from major incident investigations and targeted attack simulations to provide insights that can be fed directly back into product creation, as well as be used to upgrade its offerings to reflect the latest security threats.
MWR InfoSecurity also has a suite of managed phishing protection services (phishd), which F-Secure says will further enhance its offering.
The acquisition is expected to close in early July, and will add around 400 employees to F-Secure’s headcount. MWR InfoSecurity’s main offices are located in the UK, the US, South Africa and Singapore.
“I’m thrilled to welcome MWR InfoSecurity’s employees to F-Secure. With their vast experience and hundreds of experts performing cyber security services on four continents, we will have unparalleled visibility into real-life cyber attacks 24/7,” added Konttinen. “This enables us to detect indicators across an incredible breadth of attacks so we can protect our customers effectively. As most companies currently lack these capabilities, this represents a significant opportunity to accelerate F-Secure’s growth.”
“We’ve always relied on research-driven innovations executed by the best people and technology. This approach has earned MWR InfoSecurity the trust of some of the largest organizations in the world,” added MWR InfoSecurity CEO, Ian Shaw, who will be joining F-Secure’s leadership team after the transaction closes. “We see this approach thriving at F-Secure, and we look forward to working together so that we can break new ground in the cyber security industry.”
The companies will be holding a webcast to provide more detail on the news for investors and analysts later today, at 13:30 EEST.
European electronics and telecoms retailer Dixons Carphone has revealed a hack of its systems in which the intruders attempted to compromise 5.9 million payment cards.
In a statement put out today it says a review of its systems and data unearthed the data breach. It also confirms it has informed the UK’s data watchdog the ICO, financial conduct regulator the FCA, and the police.
According to the company, the vast majority of the cards (5.8M) were protected by chip-and-PIN technology — and it says the data accessed in respect of these cards contains “neither pin codes, card verification values (CVV) nor any authentication data enabling cardholder identification or a purchase to be made”.
However, around 105,000 of the accessed cards were non-EU issued and lacked chip-and-PIN protection, and it says those cards have been compromised.
“As a precaution we immediately notified the relevant card companies via our payment provider about all these cards so that they could take the appropriate measures to protect customers. We have no evidence of any fraud on these cards as a result of this incident,” it writes.
In addition to payment cards, the intruders also accessed 1.2M records containing non-financial personal data — such as name, address or email address.
“We have no evidence that this information has left our systems or has resulted in any fraud at this stage. We are contacting those whose non-financial personal data was accessed to inform them, to apologise, and to give them advice on any protective steps they should take,” the company adds.
In a statement about the breach, Dixons Carphone chief executive, Alex Baldock, said: “We are extremely disappointed and sorry for any upset this may cause. The protection of our data has to be at the heart of our business, and we’ve fallen short here. We’ve taken action to close off this unauthorised access and though we have currently no evidence of fraud as a result of these incidents, we are taking this extremely seriously.
“We are determined to put this right and are taking steps to do so; we promptly launched an investigation, engaged leading cyber security experts, added extra security measures to our systems and will be communicating directly with those affected. Cyber crime is a continual battle for business today and we are determined to tackle this fast-changing challenge.”
The company does not reveal when its systems were compromised; nor exactly when it discovered the intrusion; nor how long it took to launch an investigation — writing only that: “As part of a review of our systems and data, we have determined that there has been unauthorised access to certain data held by the company. We promptly launched an investigation, engaged leading cyber security experts and added extra security measures to our systems. We have taken action to close off this access and have no evidence it is continuing. We have no evidence to date of any fraudulent use of the data as result of these incidents.”
New European data protection rules are very strict in respect of data breaches, requiring that data controllers report any security incidents where personal data has been lost, stolen or otherwise accessed by unauthorized third parties to their data protection authority within 72 hours of them becoming aware of it. (Or even sooner if the breach is likely to result in a “high risk of adversely affecting individuals’ rights and freedoms”.)
And failure to promptly disclose breaches can attract major fines under the GDPR data protection framework.
Yesterday the ICO issued a £250k penalty for a Yahoo data breach dating back to 2014 — though that was under the UK’s prior data protection regime which capped fines at a maximum of £500k. Whereas under GDPR fines can scale up to 4% of a company’s global annual turnover (or €20M, whichever is greater).
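The GDPR fine ceiling the article describes is a simple greater-of-two-values rule. A minimal sketch (the function name and euro-denominated turnover input are our own illustration, not anything from the regulation's text):

```python
def max_gdpr_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound on a GDPR fine for the most serious infringements:
    4% of global annual turnover or EUR 20M, whichever is greater."""
    return max(0.04 * global_annual_turnover_eur, 20_000_000.0)

# A company with EUR 1B turnover faces up to EUR 40M;
# a smaller firm with EUR 100M turnover still faces the EUR 20M floor.
```

So, unlike the UK's previous £500k cap, the exposure scales with the size of the business.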
We’ve reached out to the ICO for comment on the Dixons Carphone breach and will update this story with any response. Update: An ICO spokesperson said: “An incident involving Dixons Carphone has been reported to us and we are liaising with the National Cyber Security Centre, the Financial Conduct Authority and other relevant agencies to ascertain the details and impact on customers. Anyone concerned about lost data and how it may be used should follow the advice of Action Fraud.”
Carphone Warehouse, a mobile division of Dixons Carphone, also suffered a major hack in 2015 — and the company was fined £400k by the ICO in January for that data breach which affected around 3M people.
The company’s stock dropped around 5% this morning after it reported the latest breach, before recovering slightly but still down around 3.5% at the time of writing.
Criminals and terrorists, like millions of others, rely on smartphone encryption to protect the information on their mobile devices. But unlike most of us, the data on their phones could endanger lives and pose a great threat to national security.
The challenge for law enforcement, and for us as a society, is how to reconcile the advantages of gaining access to the plans of dangerous individuals with the cost of opening a door to the lives of everyone else. It is the modern manifestation of the age-old conflict between privacy versus security, playing out in our pockets and palms.
The FBI has increasingly pressed the case that criminals and terrorists use smartphone security measures to avoid detection and investigation, arguing for a technological, cryptographic solution to stop these bad actors from “going dark.” In fact, there are recent reports that the Executive Branch is engaged in discussions to compel manufacturers to build technological tools so law enforcement can read otherwise-encrypted data on smartphones.
But the FBI is also tasked with protecting our nation against cyber threats. Encryption has a critical role in protecting our digital systems against compromises by hackers and thieves. And of course, a centralized data access tool would be a prime target for hackers and criminals. As recent events prove – from the 2016 elections to the recent ransomware attack against government computers in Atlanta – the problem will likely only become worse. Anything that weakens our cyber defenses will only make it more challenging for authorities to balance these “dual mandates” of cybersecurity and law enforcement access.
There is also the problem of internal threats: when they have access to customer data, service providers themselves can misuse or sell it without permission. Once someone’s data is out of their control, they have very limited means to protect it against exploitation. The current, growing scandal around the data harvesting practices on social networking platforms illustrates this risk. Indeed, our company Symphony Communications, a strongly encrypted messaging platform, was formed in the wake of a data misuse scandal by a service provider in the financial services sector.
So how do we help law enforcement without making data privacy even thornier than it already is? A potential solution is through a non-technological method, sensitive to the needs of all parties involved, that can sometimes solve the tension between government access and data protection while preventing abuse by service providers.
Agreements between some of our clients and the New York State Department of Financial Services (“NYSDFS”) proved popular enough that FBI Director Wray recently pointed to them as a model of “responsible encryption” that solves the problem of “going dark” without compromising the robust encryption critical to our nation’s business infrastructure.
The solution requires storage of encryption keys (the codes needed to decrypt data) with third-party custodians. The service provider itself does not keep its clients’ encryption keys. Rather, the custodians give the access tool to the clients, who can then choose how to use it and to whom to grant access. A core component of strong digital security is that a service provider should have neither access to clients’ unencrypted data nor control over clients’ encryption keys.
The distinction is crucial. This solution is not technological, like backdoor access built by manufacturers or service providers, but a human solution built around customer control. Such arrangements provide robust protection from criminals hacking the service, but they also prevent customer data harvesting by service providers.
Where clients choose their own custodians, they may subject those custodians to their own, rigorous security requirements. The clients can even split their encryption keys into multiple pieces distributed over different third parties, so that no one custodian can access a client’s data without the cooperation of the others.
This solution protects against hacking and espionage while safeguarding against the misuse of customer content by the service provider. But it is not a model that supports service provider or manufacturer built back doors; our approach keeps the encryption key control in clients’ hands, not ours or the government’s.
A custodial mechanism that utilizes customer-selected third parties is not the answer to every part of the cybersecurity and privacy dilemma. Indeed, it is hard to imagine that this dilemma will submit to a single solution, especially a purely technological one. Our experience shows that reasonable, effective solutions can exist. Technological features are core to such solutions, but just as critical are non-technological considerations. Advancing purely technical answers – no matter how inventive – without working through the checks, balances and risks of implementation would be a mistake.
Russian cybersecurity software maker Kaspersky Labs has announced it will be moving core infrastructure processes to Zurich, Switzerland, as part of a shift announced last year to try to win back customer trust.
It also said it’s arranging for the process to be independently supervised by a Switzerland-based third party qualified to conduct technical software reviews.
“By the end of 2019, Kaspersky Lab will have established a data center in Zurich and in this facility will store and process all information for users in Europe, North America, Singapore, Australia, Japan and South Korea, with more countries to follow,” it writes in a press release.
“Kaspersky Lab will relocate to Zurich its ‘software build conveyer’ — a set of programming tools used to assemble ready to use software out of source code. Before the end of 2018, Kaspersky Lab products and threat detection rule databases (AV databases) will start to be assembled and signed with a digital signature in Switzerland, before being distributed to the endpoints of customers worldwide.
“The relocation will ensure that all newly assembled software can be verified by an independent organization, and show that software builds and updates received by customers match the source code provided for audit.”
In October the company unveiled what it dubbed a “comprehensive transparency initiative” as it battled suspicion that its antivirus software had been hacked or penetrated by the Russian government and used as a route for scooping up US intelligence.
Being a trusted global cybersecurity firm and operating core processes out of Russia where authorities might be able to lean on your company for access has essentially become untenable as geopolitical concern over the Kremlin’s online activities has spiked in recent years.
Yesterday the Dutch government became the latest public sector customer to announce a move away from Kaspersky products (via Reuters) — saying it was doing so as a “precautionary measure”, and advising companies operating vital services to do the same.
Responding to the Dutch government’s decision, Kaspersky described it as “very disappointing”, saying its transparency initiative is “designed precisely to address any fears that people or organisations may have”.
“We are implementing these measures first and foremost in response to the evolving, ultra-connected global landscape and the challenges the cyber-world is currently facing,” the company adds in a detailed Q&A about the measures. “This is not exclusive to Kaspersky Lab, and we believe other organizations will in future also choose to adapt to these trends. Having said that, the overall aim of these measures is transparency, verified and proven, which means that anyone with concerns will now be able to see the integrity and trustworthiness of our solutions.”
The core processes that Kaspersky will move from Russia to Switzerland over this year and next include customer data storage and processing (for “most regions”), and software assembly, including threat detection updates.
As a result of the shift it says it will be setting up “hundreds” of servers in Switzerland and establishing a new data center there, as well as drawing on facilities of a number of local data center providers.
Kaspersky is not exiting Russia entirely, though, and products for the Russian market will continue to be developed and distributed out of Moscow.
“In Switzerland we will be creating the ‘worldwide’ (ww) version of our products and AV bases. All modules for the ww-version will be compiled there. We will continue to use the current software build conveyer in Moscow for creating products and AV bases for the Russian market,” it writes, claiming it is retaining a software build conveyor in Russia to “simplify local certification”.
Data of customers from Latin America and Asia (with the exception of Japan, South Korea and Singapore) will also continue to be stored and processed in Russia. But Kaspersky says the list of countries for which data will be processed and stored in Switzerland will be “further extended”, adding: “The current list is an initial one… and we are also considering the relocation of further data processing to other planned Transparency Centers, when these are opened.”
Whether retaining a presence and infrastructure in Russia will work against Kaspersky’s wider efforts to win back trust globally remains to be seen.
In the Q&A it claims: “There will be no difference between Switzerland and Russia in terms of data processing. In both regions we will adhere to our fundamental principle of respecting and protecting people’s privacy, and we will use a uniform approach to processing users’ data, with strict policies applied.”
However other pre-emptive responses in the document underline the trust challenge it is likely to face, such as a question asking what kind of data stored in Switzerland will be sent to, or be made available to, staff in its Moscow HQ.
On this it writes: “All data processed by Kaspersky Lab products located in regions excluding Russia, CIS, Latin America, Asian and African countries, will be stored in Switzerland. By default only aggregated statistics data will be sent to R&D in Moscow. However, Kaspersky Lab experts from HQ and other locations around the world will be able to access data stored in the Transparency Center. Each information request will be logged and monitored by the independent Swiss-based organization.”
Clearly the robustness of the third party oversight provisions will be essential to its Global Transparency Initiative winning trust.
Kaspersky’s activity in Switzerland will be overseen by an (as yet unnamed) independent third party which the company says will have “all access necessary to verify the trustworthiness of our products and business processes”, including: “Supervising and logging instances of Kaspersky Lab employees accessing product meta data received through KSN [Kaspersky Security Network] and stored in the Swiss data center; and organizing and conducting a source code review, plus other tasks aimed at assessing and verifying the trustworthiness of its products.”
Switzerland will also host one of the dedicated Transparency Centers the company said last year that it would be opening as part of the wider program aimed at securing customer trust.
It expects the Swiss center to open this year, although the shift of core infrastructure processes won’t be completed until Q4 2019. (It attributes the timeline to the complexity of redesigning infrastructure that has been operating for around 20 years, and estimates the cost of the project at $12M.)
Within the Transparency Center, which Kaspersky will operate itself, the source code of its products and software updates will be available for review by “responsible stakeholders” — from the public and private sector.
It adds that the details of review processes — including how governments will be able to review code — are “currently under discussion” and will be made public “as soon as they are available”.
And providing government review in a way that does not risk further undermining customer trust may also prove a tricky balancing act for Kaspersky, given multi-directional geopolitical sensibilities. So the devil will be in the policy detail vis-a-vis “trusted” partners, and in whether the processes it deploys can reassure all of its customers all of the time.
“Trusted partners will have access to the company’s code, software updates and threat detection rules, among other things,” it writes, saying the Center will provide these third parties with: “Access to secure software development documentation; Access to the source code of any publicly released product; Access to threat detection rule databases; Access to the source code of cloud services responsible for receiving and storing the data of customers based in Europe, North America, Australia, Japan, South Korea and Singapore; Access to software tools used for the creation of a product (the build scripts), threat detection rule databases and cloud services”; along with “technical consultations on code and technologies”.
It is still intending to open two additional centers, one in North America and one in Asia, but precise locations have not yet been announced.
On supervision and review Kaspersky also says that it’s hoping to work with partners to establish an independent, non-profit organization for the purpose of producing professional technical reviews of the trustworthiness of the security products of multiple members — including but not limited to Kaspersky Lab itself.
Which would certainly go further to bolster trust. Though it has nothing firm to share about this plan as yet.
“Since transparency and trust are becoming universal requirements across the cybersecurity industry, Kaspersky Lab supports the creation of a new, non-profit organization to take on this responsibility, not just for the company, but for other partners and members who wish to join,” it writes on this.
Next month it’s also hosting an online summit to discuss “the growing need for transparency, collaboration and trust” within the cybersecurity industry.
Commenting in a statement, CEO Eugene Kaspersky, added: “In a rapidly changing industry such as ours we have to adapt to the evolving needs of our clients, stakeholders and partners. Transparency is one such need, and that is why we’ve decided to redesign our infrastructure and move our data processing facilities to Switzerland. We believe such action will become a global trend for cybersecurity, and that a policy of trust will catch on across the industry as a key basic requirement.”
A new exploit allows hackers to spoof two-factor authentication requests by sending a user to a fake login page and then stealing the username, password, and session cookie.
KnowBe4 Chief Hacking Officer Kevin Mitnick demonstrated the hack in a public video. By convincing a victim to visit a typo-squatting domain like “LunkedIn.com” and capturing the login, password and authentication code, the hacker can pass the credentials to the actual site and capture the session cookie. Once this is done, the hacker can log in indefinitely. This essentially uses the one-time 2FA code as a way to spoof a login and grab data.
“A white hat hacker friend of Kevin’s developed a tool to bypass two-factor authentication using social engineering tactics – and it can be weaponized for any site,” said Stu Sjouwerman, KnowBe4 CEO. “Two-factor authentication is intended to be an extra layer of security, but in this instance, we clearly see that you can’t rely on it alone to protect your organization.”
White hat hacker Kuba Gretzky created the system, called evilginx, and describes its implementation in a wonderfully thorough post on his site.
Sjouwerman notes that anti-phishing education is deeply important, and that a hack like this is impossible to pull off if the victim is savvy about security and the dangers of clicking links that arrive in their inbox. To demonstrate this, Sjouwerman sent me an email seemingly addressed to me from Matt Burns (firstname.lastname@example.org) talking about a typo in a post. When I clicked on it I was transferred to a SendGrid redirect site and dumped into TechCrunch, but the payload could have been more nefarious.
“This highlights the need for new-school security awareness training and simulated phishing because people are truly your last line of defense,” said Sjouwerman. He estimates that hackers will begin trying this technique in the next few weeks and urges users and IT managers to harden their security protocols.
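One simple defensive heuristic against the typo-squatting lures described above is flagging domains that sit within a small edit distance of well-known ones ("LunkedIn.com" is one character away from "linkedin.com"). The sketch below is our own illustration, not anything KnowBe4 ships; the trusted-domain list is made up, and production tools also handle homoglyphs and subdomain tricks that pure edit distance misses.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# Hypothetical allowlist of domains the organization trusts.
TRUSTED = ["linkedin.com", "google.com", "techcrunch.com"]

def looks_like_typosquat(domain: str, max_dist: int = 2):
    """Return the trusted domain this one imitates, or None."""
    domain = domain.lower()
    for good in TRUSTED:
        if domain != good and edit_distance(domain, good) <= max_dist:
            return good
    return None
```

A mail gateway could run such a check on every link before the user ever sees it, which is exactly the kind of layered defense Sjouwerman argues 2FA alone cannot provide.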
Cyan Banister is a partner at Founders Fund, where she invests across sectors and stages with a particular interest in augmented reality, fertility, heavily regulated industries and businesses that help people with basic skills find meaningful work.
Augmented Reality (AR) is still in its infancy and has a very promising youth and adulthood ahead. It has already become one of the most exciting, dynamic, and pervasive technologies ever developed. Every day someone is creating a novel way to reshape the real world with a new digital innovation.
Over the past couple of decades, the Internet and smartphone revolutions have transformed our lives, and AR has the potential to be that big. We’re already seeing AR act as a catalyst for major change, driving advances in everything from industrial machines to consumer electronics. It’s also pushing new frontiers in education, entertainment, and health care.
But as with any new technology, there are inherent risks we should acknowledge, anticipate, and deal with as soon as possible. If we do so, these technologies are likely to continue to thrive. Some industry watchers are forecasting a combined AR/VR market value of $108 billion by 2021, as businesses of all sizes take advantage of AR to change the way their customers interact with the world around them in ways previously only possible in science fiction.
As wonderful as AR is and will continue to be, there are some serious privacy and security pitfalls, including dangers to physical safety, that as an industry we need to collectively avoid. There are also ongoing threats from cyber criminals and nation states bent on political chaos and worse — to say nothing of teenagers who can be easily distracted and fail to exercise judgement — all creating virtual landmines that could slow or even derail the success of AR. We love AR, and that’s why we’re calling out these issues now to raise awareness.
Without widespread familiarity with the potential pitfalls, as well as robust self-regulation, AR will not only suffer from systemic security issues, it may be subject to stringent government oversight that slows innovation, or even threaten existing First Amendment rights. In a climate where technology has come under attack from many fronts for unintended consequences and vulnerabilities, including Russian interference with the 2016 election as well as ever-growing incidents of hacking and malware, we should work together to make sure this doesn’t happen.
If anything causes government overreach in this area, it’ll likely be safety and privacy issues. An example of these concerns is shown in this dystopian video, in which a fictional engineer is able to manipulate both his own reality and that of others via retinal AR implants. Because AR by design blurs the divide between the digital and real worlds, threats to physical safety, job security, and digital identity can emerge in ways that were simply inconceivable in a world populated solely by traditional computers.
While far from exhaustive, the lists below present some of the pitfalls, as well as possible remedies for AR. Think of these as a starting point, beginning with pitfalls:
AR can cause big identity and property problems: Catching Pokémon on a sidewalk or receiving a Valentine on a coffee cup at Starbucks is really just scratching the surface of AR capabilities. On a fundamental level, we could lose the power to control how people see us. Imagine a virtual, 21st century equivalent of a sticky note with the words “kick me” stuck to some poor victim’s back. What if that note was digital, and the person couldn’t remove it? Even more seriously, AR could be used to create a digital doppelganger of someone doing something compromising or illegal. AR might also be used to add indelible graffiti to a house, business, sign, product, or art exhibit, raising some serious property concerns.
AR can threaten our privacy: Remember Google Glass and “Glassholes?” If a woman was physically confronted in a San Francisco dive bar just for wearing Google Glass (reportedly, her ability to capture the happenings at the bar on video was not appreciated by other patrons), imagine what might happen with true AR and privacy. We may soon see the emergence of virtual dressing rooms, which would allow customers to try on clothing before purchasing online. A similar technology could be used to overlay virtual nudity onto someone without their permission. With AR wearables, for example, someone could surreptitiously take pictures of another person and publish them in real time, along with geotagged metadata. There are clear points at which the problem moves from the domain of creepiness to harassment and potentially to a safety concern.
AR can cause physical harm: Although hacking bank accounts and IoT devices can wreak havoc, these events don’t often lead to physical harm. With AR, however, this changes drastically when it is superimposed on the real world. AR can increase distractions and make travel more hazardous. As it becomes more common, over-reliance on AR navigation will leave consumers vulnerable to buggy or hacked GPS overlays that can manipulate drivers or pilots, making our outside world less safe. For example, if a bus driver’s AR headset or heads-up display starts showing illusory deer on the road, that’s a clear physical danger to pedestrians, passengers, and other drivers.
AR could launch disturbing career arms races: As AR advances, it can improve everything from individual productivity to worker data access, significantly impacting job performance. Eventually, workers with training and experience with AR technology might be preferred over those who don’t. That could lead to an even wider gap between so-called digital elites and those without such digital familiarity. More disturbingly, we might see something of an arms race in which a worker with eye implants as depicted in the film mentioned above might perform with higher productivity, thereby creating a competitive advantage over those who haven’t had the surgery. The person in the next cubicle could then feel pressure to do the same just to remain competitive in the job market.
How can we address and resolve these challenges? Here are some initial suggestions and guidelines to help get the conversation started:
Industry standards: Establish a sort of AR governing body that would evaluate, debate and then publish standards for developers to follow. Along with this, develop a centralized digital service akin to air traffic control for AR that classifies public, private and commercial spaces as well as establishes public areas as either safe or dangerous for AR use.
A comprehensive feedback system: Communities should feel empowered to voice their concerns. When it comes to AR, a strong and responsive mechanism for reporting insecure vendors that don’t comply with AR safety, privacy, and security standards will go a long way in driving consumer trust in next-gen AR products.
Responsible AR development and investment: Entrepreneurs and investors need to care about these issues when developing and backing AR products. They should follow a basic moral compass and not simply chase dollars and market share.
Guardrails for real-time AR screenshots: Rather than disallowing real-time AR screenshots entirely, control them through mechanisms such as geofencing. For example, an establishment such as a nightclub would need to set and publish its own rules, which would then be enforced by hardware or software.
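The geofencing idea above can be sketched as a simple policy check. This is a minimal illustration, not a real AR API: the venue list, rule fields, and default policy are all hypothetical assumptions, and a production system would pull published rules from the kind of centralized registry proposed earlier rather than a hardcoded table.

```python
import math

# Hypothetical geofence rules published by venues. Each entry gives a center
# point, a radius in meters, and whether real-time AR capture is permitted
# inside that zone. All names and coordinates here are illustrative.
GEOFENCE_RULES = [
    {"venue": "Example Nightclub", "lat": 37.7749, "lon": -122.4194,
     "radius_m": 50.0, "allow_ar_capture": False},
    {"venue": "Example Public Plaza", "lat": 37.7790, "lon": -122.4150,
     "radius_m": 200.0, "allow_ar_capture": True},
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def capture_allowed(lat, lon):
    """Apply the first matching venue rule; allow capture in unzoned areas."""
    for rule in GEOFENCE_RULES:
        if haversine_m(lat, lon, rule["lat"], rule["lon"]) <= rule["radius_m"]:
            return rule["allow_ar_capture"]
    return True  # assumed default policy for unclassified public space

# At the nightclub's center, capture is blocked by the venue's rule.
print(capture_allowed(37.7749, -122.4194))  # False
```

The enforcement point matters as much as the check: to be meaningful, a rule like this would have to run in the headset’s hardware or operating system, where an app could not simply ignore it.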
While ambitious companies focus on innovation, they must also be vigilant about the potential hazards of those breakthroughs. In the case of AR, working to proactively wrestle with the challenges around identity, privacy and security will help mitigate the biggest hurdles to the success of this exciting new technology.
Recognizing risks to consumer safety and privacy is only the first step to resolving long-term vulnerabilities that rapidly emerging new technologies like AR create. Since AR blurs the line between the real world and the digital one, it’s imperative that we consider the repercussions of this technology alongside its compelling possibilities. As innovators, we have a duty to usher in new technologies responsibly and thoughtfully so that they improve society in ways that can’t also be abused; we need to anticipate problems and police ourselves. If we don’t safeguard our breakthroughs and the consumers who use them, someone else will.
Unlike most previous threats, vulnerabilities like Meltdown, Spectre and the newer MasterKey, RyzenFall, Fallout and Chimera bugs attack a computer’s hardware rather than its software. This second wave of attacks may be an early indication that Meltdown and Spectre have opened a new front in the war between hackers and defenders in the realm of computer chips.
While experts are working to make and distribute patches for these bugs, the question remains: What does this mean for cybersecurity as a whole? The answer to that question starts with understanding a bit about how hackers work.
Hackers are a social and trendy bunch. A couple of years ago, hacking onboard computers on cars was common, so a bunch of vulnerabilities were found and patched and now cars have become somewhat harder to commandeer. Then drone hacking was all the rage, and drone manufacturers too have implemented patches and become somewhat more secure.
That is how cyber defenses work. Some smart researcher finds a new hole. If they’re nice (most are nice), they tell the manufacturers about it so they can fix the bugs. With Meltdown and Spectre, the researchers were nice and informed the manufacturers months beforehand. The MasterKey, RyzenFall, Fallout and Chimera researchers were not so nice, and only gave them a day. If the researchers are really not nice and decide instead to use their exploit, then some unlucky person or organization is probably going to have a very bad day.
That moment of discovery is the starting gun for an intense race between the defense community and the hacker community. Some hacker genius somewhere already knows how to use the bug and other hacker geniuses start working overtime to write their own code that exploits it.
Once a few of them figure it out, one of them will write a simpler version for people who don’t understand the details so that hackers who aren’t geniuses can use it too. Soon after that, it gets included in the common hacking databases. From that point on, anyone can literally point and click their way into your computer.
Although not much can be done for the folks who already had their bad day, the defense community, as a whole, almost always wins that race. As soon as their fastest programmer finds a fix, it can be quickly distributed throughout the world, making the new hacking toys only useful against the stragglers who fell behind the herd. And these days, it’s gotten pretty hard to fall behind. The patching process has become invisibly smooth, and most regular computer users never even know that there was a race on.
With hardware vulnerabilities, things could be different. You can’t change hardware by sending an invisible string of 1s and 0s through the air. For Meltdown and Spectre, workarounds where changing the software can help block the hardware problem are still being figured out and distributed. These workarounds showed up quickly at first, but the process has been anything but smooth, and proof-of-concept code for exploiting these vulnerabilities has been seen online for more than a month. As for the more recent vulnerabilities, it’s not clear yet what workarounds exist, and there might not always be a workaround that creates software solutions to hardware problems.
Though stark, this situation is not entirely unprecedented. Some operating systems are no longer supported by their vendors, which means that any new hole will go unpatched. The most famous example is Windows XP. Most people know by now that using Windows XP is not safe, but don’t fully understand how unsafe it is.
Today, any computer-savvy high schooler can watch a YouTube video and learn in just a couple hours how to point and click their way to control of someone else’s computer on the internet, so long as it is running Windows XP. Even with Windows XP though, when a truly nasty bug comes out, Microsoft can choose to go back and patch it like they did last year for the WannaCry ransomware. With a nasty hardware vulnerability, that may not even be an option.
So what can be done? Hopefully, the hacking community will not become enthralled with searching for hardware vulnerabilities. They might not. It is hard and requires rare expertise that is not as easy to come by as software hacking. If we are not so lucky, then defending the herd by responding quickly to the first attack may no longer be a viable approach — but herd immunity comes in many forms.
Perhaps it will be from increased diversity of chip designs or perhaps approaches to slow the spread of information from hacker genius to amateur. Perhaps it will be from improved perimeter defenses, although hardware at the perimeter may be just as vulnerable as the rest.
Time and again, the adaptability of the world’s smartest engineers has overcome the most dire threats to computing and the internet. The safe money is on them to win the day again, but with hardware vulnerabilities it may require a whole new approach for defending the herd.