MIT researchers say memory splitting breakthrough could prevent another Meltdown or Spectre

Virtually every modern computer processor was thrown under the bus earlier this year when researchers found a fundamental design weakness in Intel, AMD and ARM chips, making it possible to steal sensitive data from the computer’s memory.

The Meltdown and Spectre vulnerabilities — which date back to 1995 — punched holes in the walls that keep apps from accessing parts of the system’s memory they don’t have permission to read. That meant a skilled attacker could figure out where sensitive data, like passwords and encryption keys, was stored. While the companies mitigated some of the flaws, they acknowledged that their long-term plan would require a core redesign of how their processors work.

Now, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) say they have found a way to prevent a whole range of Meltdown- and Spectre-like flaws in the future.

When an app needs to store something in memory, it asks the processor where to put it. Because searching for that memory is slow, processors use a trick known as “speculative execution” to run several sets of tasks at the same time while they find the right memory slot. But attackers can exploit the same technique to trick the processor into letting an app read parts of memory it shouldn’t be allowed to read.
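To make that concrete, here is the kind of code pattern at the heart of Spectre variant 1. This is a purely illustrative sketch (the array names and sizes are invented; it is not code from the researchers):

    /* Illustrative Spectre variant 1 gadget. Array names and sizes are
       invented for this sketch; this is not the researchers' code. */
    #include <stddef.h>
    #include <stdint.h>

    uint8_t array1[16];          /* the array an app may legitimately read */
    uint8_t array2[256 * 512];   /* probe array: one cache line per byte value */
    volatile size_t array1_size = 16;

    void victim(size_t x) {
        /* The bounds check is the security barrier. While the CPU waits
           for array1_size to arrive from memory, it may speculatively run
           the body even when x is out of bounds... */
        if (x < array1_size) {
            /* ...and the out-of-bounds byte array1[x] then selects which
               line of array2 is pulled into the cache. The speculative
               result is discarded, but the cache footprint remains. */
            volatile uint8_t tmp = array2[array1[x] * 512];
            (void)tmp;
        }
    }

The attacker then times a read of each candidate line in array2; the one that comes back fast reveals the secret byte the bounds check was supposed to protect.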

MIT’s CSAIL says its technique would split up memory so that data is not all stored in the same place — in what the team calls “secure way partitioning.”

They call it DAWG — or “Dynamically Allocated Way Guard” — which, admittedly, might sound ridiculous, but it’s meant as a counterpoint to Intel’s Cache Allocation Technology, or CAT. According to their work, DAWG works similarly to CAT and doesn’t require many changes to the device’s operating system — making it potentially as easy to install on an affected computer as Meltdown’s microcode fix.
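DAWG itself is a hardware change, so no software snippet can capture it exactly, but the core idea of way partitioning can be sketched in miniature. In the toy model below (all names are invented, and it is far simpler than the actual CSAIL design), each protection domain is handed a bit mask of cache ways it may use, so a victim and an attacker never share cache lines in the first place:

    /* Toy model of cache way partitioning. All names are invented; the
       real DAWG design is hardware, so this only illustrates the idea. */
    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_WAYS 8

    typedef struct { uint64_t tag; bool valid; } CacheLine;
    typedef struct { CacheLine way[NUM_WAYS]; } CacheSet;

    /* domain_mask assigns ways to a protection domain, e.g. 0x0F for the
       kernel and 0xF0 for user code. */
    bool cache_lookup(CacheSet *set, uint64_t tag, uint8_t domain_mask) {
        for (int w = 0; w < NUM_WAYS; w++) {
            /* Ways outside the domain's mask are invisible to it, so hits
               and misses in another domain's ways leak no timing signal. */
            if (!(domain_mask & (1u << w)))
                continue;
            if (set->way[w].valid && set->way[w].tag == tag)
                return true;   /* hit, confined to our own partition */
        }
        return false;          /* miss: a fill would also stay in our ways */
    }

Because a domain can neither hit nor evict outside its own ways, the cache-timing side channel that Spectre-class attacks rely on is cut off at the partition boundary.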

According to Vladimir Kiriansky, one of the research paper’s authors, the new technique “establishes clear boundaries for where sharing should and should not happen, so that programs with sensitive information can keep that data reasonably secure.”

Not only could the technology help protect regular computers, but also vulnerable cloud infrastructure.

Although DAWG can’t prevent every speculative execution attack, the researchers are now working to improve the technology to defend against more — if not all — of them.

But if their technology is picked up by Intel or any other chip maker, the researchers say techniques like DAWG could “restore our confidence in public cloud infrastructure, and hardware and software co-design will help minimize performance overheads.”

Apple to Australia: “This is no time to weaken encryption”

Apple has filed its formal opposition to a new bill currently being proposed by the Australian government that critics say would weaken encryption.

If it passes, the “Assistance and Access Bill 2018” would create a new type of warrant that would allow what governments often call “lawful access” to thwart encryption, something that the former Australian attorney general proposed last year.

The California company said in a filing provided to reporters on Friday that the proposal was flawed.

Apple Criticizes Proposed Anti-Encryption Legislation in Australia

The Australian government is considering a bill that would require tech companies like Apple to provide “critical assistance” to government agencies that are investigating crimes.

According to the Australian government, encryption is problematic because encrypted communications “are increasingly being used by terrorist groups and organized criminals to avoid detection and disruption.”

As noted by TechCrunch, Apple today penned a seven-page letter to the Australian parliament criticizing the proposed legislation.

In the letter, Apple calls the bill “dangerously ambiguous” and explains the importance of encryption in “protecting national security and citizens’ lives” from criminal attackers who are finding more serious and sophisticated ways to infiltrate iOS devices.

In the face of these threats, this is no time to weaken encryption. There is profound risk of making criminals’ jobs easier, not harder. Increasingly stronger — not weaker — encryption is the best way to protect against these threats.

Apple says that it “challenges the idea” that weaker encryption is necessary to aid law enforcement investigations as it has processed more than 26,000 requests for data to help solve crimes in Australia over the course of the last five years.

According to Apple, the language in the bill is broad and vague, with “ill-defined restrictions.” As an example, Apple says the bill’s language would permit the government to order companies that make smart home speakers to “install persistent eavesdropping capabilities” or require device makers to create a tool to unlock devices.

Apple says additional work needs to be done on the bill to include a “firm mandate” that “prohibits the weakening of encryption or security protections,” with the company going on to outline a wide range of specific concerns that it hopes the Australian parliament will address. The list of flaws Apple has found with the bill can be found in the full letter.

Apple has been fighting against anti-encryption legislation and attempts to weaken device encryption for years, and its most public battle was against the U.S. government in 2016 after Apple was ordered to help the FBI unlock the iPhone owned by Syed Farook, one of the shooters in the December 2015 attacks in San Bernardino.

Apple opposed the order and claimed that it would set a “dangerous precedent” with serious implications for the future of smartphone encryption. Apple ultimately held its ground and the U.S. government backed off after finding an alternate way to access the device, but Apple has continually had to deal with further law enforcement efforts to combat encryption.

Siilo injects $5.1M to try to transplant WhatsApp use in hospitals

Consumer messaging apps like WhatsApp are not only insanely popular for chatting with friends but have pushed deep into the workplace too, thanks to the speed and convenience they offer. They have even crept into hospitals, as time-strapped doctors reach for a quick and easy way to collaborate over patient cases on the ward.

Yet WhatsApp is not specifically designed with the safe sharing of highly sensitive medical information in mind. This is where Dutch startup Siilo has been carving a niche for itself for the past 2.5 years — via a free-at-the-point-of-use encrypted messaging app intended for medical professionals to securely collaborate on patient care, such as via in-app discussion groups and the ability to securely store and share patient notes.

It’s a business goal that could be buoyed by tighter EU regulations around handling personal data, say if hospital managers decide they need to address compliance risks around staff use of consumer messaging apps.

The app’s WhatsApp-style messaging interface will be instantly familiar to any smartphone user. But Siilo bakes in additional features for its target healthcare professional users, such as keeping photos, videos and files sent via the app siloed in an encrypted vault that’s entirely separate from any personal media also stored on the device.

Messages sent via Siilo are also automatically deleted after 30 days unless the user specifies a particular message should be retained. And the app does not make automated back-ups of users’ conversations.

Other doctor-friendly features include the ability to blur images (for patient privacy purposes); augment images with arrows for emphasis; and export threaded conversations to electronic health records.

There’s also mandatory security for accessing the app — with a requirement for either a PIN code, fingerprint or facial recognition biometric to be used. And a remote wipe function to nix any locally stored data is baked into Siilo in the event of a device being lost or stolen.

Like WhatsApp, Siilo also uses end-to-end encryption — though in its case it says this is based on the open-source NaCl library.
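Siilo hasn’t published its exact protocol, but NaCl-style end-to-end encryption generally means public-key authenticated encryption along the lines of this minimal sketch, written against libsodium (a widely used, portable fork of NaCl), with invented key names and an invented message:

    /* Minimal sketch of NaCl-style end-to-end encryption using libsodium,
       a portable fork of NaCl. Key names and the message are invented. */
    #include <sodium.h>
    #include <stdio.h>

    int main(void) {
        if (sodium_init() < 0) return 1;

        /* Each party generates a Curve25519 key pair. */
        unsigned char alice_pk[crypto_box_PUBLICKEYBYTES], alice_sk[crypto_box_SECRETKEYBYTES];
        unsigned char bob_pk[crypto_box_PUBLICKEYBYTES], bob_sk[crypto_box_SECRETKEYBYTES];
        crypto_box_keypair(alice_pk, alice_sk);
        crypto_box_keypair(bob_pk, bob_sk);

        const unsigned char msg[] = "Patient note: review MRI before rounds";
        unsigned char nonce[crypto_box_NONCEBYTES];
        randombytes_buf(nonce, sizeof nonce);

        /* Alice encrypts and authenticates for Bob; only Bob's secret key
           can open it, so a relaying server can't read the plaintext. */
        unsigned char ct[crypto_box_MACBYTES + sizeof msg];
        crypto_box_easy(ct, msg, sizeof msg, nonce, bob_pk, alice_sk);

        /* Bob decrypts and verifies the sender in one step. */
        unsigned char pt[sizeof msg];
        if (crypto_box_open_easy(pt, ct, sizeof ct, nonce, alice_pk, bob_sk) != 0) {
            puts("forged or corrupted message");
            return 1;
        }
        printf("decrypted: %s\n", pt);
        return 0;
    }

The relaying server only ever sees the nonce and ciphertext, which is the property that keeps patient notes unreadable in transit.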

It also specifies that user messaging data is stored encrypted on European ISO-27001 certified servers — and deleted “as soon as we can”.

It also says it’s “possible” for its encryption code to be open to review on request.

Another addition is a user vetting layer to manually verify that the medical professionals using its app are who they say they are.

Siilo says every user gets vetted — though not before they’re able to use the messaging functions. Users who have passed verification unlock greater functionality, such as being able to search among other (verified) users to find peers or specialists to expand their professional network. Siilo says verification status is displayed on profiles.

“At Siilo, we coin this phenomenon ‘network medicine’, which is in contrast to the current old-fashioned, siloed medicine,” says CEO and co-founder Joost Bruggeman in a statement. “The goal is to improve patient care overall, and patients have a network of doctors providing input into their treatment.”

While Bruggeman brings the all-important medical background to the startup, another co-founder, Onno Bakker, has been in the mobile messaging game for a long time — having been one of the entrepreneurs behind the veteran web and mobile messaging platform, eBuddy.

A third co-founder, CFO Arvind Rao, tells us Siilo transplanted eBuddy’s messaging dev team — couching this ported in-house expertise as an advantage over some of the smaller rivals also chasing the healthcare messaging opportunity.

It is also of course having to compete technically with the very well-resourced and smoothly operating WhatsApp behemoth.

“Our main competitor is always WhatsApp,” Rao tells TechCrunch. “Obviously there are also other players trying to move in this space. TigerText is the largest in the US. In the UK we come across local players like Hospify and Forward.

“A major difference [is that] we have [a] very experienced in-house dev team… The experience of this team has helped to build a messenger that really can compete in usability with WhatsApp, [which] is reflected in our rapid adoption and usage numbers.”

“Having worked in the trenches as a surgery resident, I’ve experienced the challenges that healthcare professionals face firsthand,” adds Bruggeman. “With Siilo, we’re connecting all healthcare professionals to make them more efficient, enable them to share patient information securely and continue learning and share their knowledge. The directory of vetted healthcare professionals helps ensure they’re successful team players within a wider healthcare network that takes care of the same patient.”

Siilo launched its app in May 2016 and has since grown to ~100,000 users, with more than 7.5 million messages currently being processed monthly and 6,000+ clinical chat groups active monthly.

“We haven’t come across any other secure messenger for healthcare in Europe with these figures in the App Store/Google Play rankings and therefore believe we are the largest in Europe,” adds Rao. “We have multiple large institutions across Western-Europe where doctors are using Siilo.”

On the security front, as well as flagging the ISO 27001 certification it has for its servers, he notes that it obtained “the highest NHS IG Toolkit level 3” — aka the now-replaced system for organizations to self-assess their compliance with the UK’s National Health Service’s information governance processes — claiming “we haven’t seen [that] with any other messaging company”.

Siilo’s toolkit assessment was finalized at the end of February 2018 and is valid for a year — so it will be up for re-assessment under the replacement system (which was introduced this April) in Q1 2019. (Rao confirms they will be doing this “new (re-)assessment” at the end of the year.)

As well as being in active use in European hospitals such as St. George’s Hospital, London, and Charité Berlin, Germany, Siilo says its app has had some organic adoption by medical pros further afield — including among smaller home healthcare teams in California, and “entire transplantation teams” from Astana, Kazakhstan.

It also cites British Medical Journal research that found that of the 98.9% of U.K. hospital clinicians who now have smartphones, around a third are using consumer messaging apps in the clinical workplace. Persuading those healthcare workers to ditch WhatsApp at work is Siilo’s mission and challenge.

The team has just announced a €4.5 million (~$5.1M) seed round to help it get onto the radar of more doctors. The round is led by EQT Ventures, with participation from existing investors. It says it will be using the funding to scale up its user base across Europe, with a particular focus on the UK and Germany.

Commenting on the funding in a statement, EQT Ventures’ Ashley Lundström, a venture lead and investment advisor at the VC firm, said: “The team was impressed with Siilo’s vision of creating a secure global network of healthcare professionals and the organic traction it has already achieved thanks to the team’s focus on building a product that’s easy to use. The healthcare industry has long been stuck using jurassic technologies and Siilo’s real-time messaging app can significantly improve efficiency and patient care without putting patients’ data at risk.”

While the messaging app itself is free for healthcare professionals to use, Siilo also offers a subscription service to monetize the freemium product.

This service, called Siilo Connect, offers organisations and professional associations what it bills as “extensive management, administration, networking and software integration tools” — or just data regulation compliance services if they want the basic flavor of the paid tier.

Don’t want New Zealand officials to get into your phone? Pay up to NZ$5,000

New Zealand privacy activists have raised concerns over a new law that imposes a fine of up to NZ$5,000 (more than $3,200) for travelers—citizens and foreigners alike—who decline to unlock their digital devices when entering the country. (Presumably your phone would be seized anyway if it came to that.)

The South Pacific nation is believed to be the first in the world to impose such a law.

According to the Customs and Excise Act of 2018, which took effect on October 1, travelers must comply if officials believe there is a “reasonable cause.” The law does not spell out exactly what that means, nor does it provide a means for individuals to challenge this assessment.

FBI vs. Facebook Messenger: What’s at stake?

Greg Nojeim is director of the Freedom, Security, & Technology Project at the Center for Democracy & Technology. Eric Wenger is the director of Cybersecurity and Privacy Policy at Cisco Systems, Inc. Marc Zwillinger is the founder of ZwillGen PLLC and frequently represents technology companies on surveillance-related issues. The opinions expressed here do not necessarily represent those of Ars Technica.

In the wake of news from Reuters on Friday that a federal court in California rejected Department of Justice demands that Facebook break, bypass, or remove the encryption in its Messenger app, it’s worth noting how little we still know about such an important dispute.

Depending on what specific relief the government sought from the court, the case may signal a potentially significant threat to the security of Internet-based communications. In a hyperconnected world, the implications of the government’s demand for expanded surveillance capabilities go far beyond any legitimate law enforcement equities in any single case.

Facebook is weaponizing security to erode privacy

At a Senate hearing this week in which US lawmakers quizzed tech giants on how they should go about drawing up comprehensive federal consumer privacy protection legislation, Apple’s VP of software technology described privacy as a “core value” for the company.

“We want your device to know everything about you but we don’t think we should,” Bud Tribble told them in his opening remarks.

Facebook was not at the commerce committee hearing which, as well as Apple, included reps from Amazon, AT&T, Charter Communications, Google and Twitter.

But the company could hardly have made such a claim had it been in the room, given that its business is based on trying to know everything about you in order to dart you with ads.

You could say Facebook has ‘hostility to privacy‘ as a core value.

Earlier this year one US senator wondered of Mark Zuckerberg how Facebook could run its service given it doesn’t charge users for access. “Senator we run ads,” was the almost startled response, as if the Facebook founder couldn’t believe his luck at the not-even-surface-level political probing his platform was getting.

But there have been tougher moments of scrutiny for Zuckerberg and his company in 2018, as public awareness about how people’s data is being ceaselessly sucked out of platforms and passed around in the background, as fuel for a certain slice of the digital economy, has grown and grown — fuelled by a steady parade of data breaches and privacy scandals which provide a glimpse behind the curtain.

On the data scandal front Facebook has reigned supreme, whether it’s as an ‘oops we just didn’t think of that’ spreader of socially divisive ads paid for by Kremlin agents (sometimes with roubles!); or as a carefree host for third-party apps to party at its users’ expense by silently hoovering up info on their friends, in the multi-millions.

Facebook’s response to the Cambridge Analytica debacle was to loudly claim it was ‘locking the platform down‘. And try to paint everyone else as the rogue data sucker — to avoid the obvious and awkward fact that its own business functions in much the same way.

All this scandalabra has kept Facebook execs very busy this year, with policy staffers and execs being grilled by lawmakers on an increasing number of fronts and issues — from election interference and data misuse, to ad transparency, hate speech and abuse, and also directly, and at times closely, on consumer privacy and control.

Facebook shielded its founder from one sought-after grilling on data misuse, as UK MPs investigated online disinformation vs democracy, as well as examining wider issues around consumer control and privacy. (They’ve since recommended a social media levy to safeguard society from platform power.)

The DCMS committee wanted Zuckerberg to testify to unpick how Facebook’s platform contributes to the spread of disinformation online. The company sent various reps to face questions (including its CTO) — but never the founder (not even via video link). And committee chair Damian Collins was withering and public in his criticism of Facebook sidestepping close questioning — saying the company had displayed a “pattern” of uncooperative behaviour, and “an unwillingness to engage, and a desire to hold onto information and not disclose it.”

As a result, Zuckerberg’s tally of public appearances before lawmakers this year stands at just two domestic hearings, in the US Senate and Congress, and one at a meeting of the EU parliament’s conference of presidents (which switched from a behind closed doors format to being streamed online after a revolt by parliamentarians) — and where he was heckled by MEPs for avoiding their questions.

But three sessions in a handful of months is still a lot more political grillings than Zuckerberg has ever faced before.

He’s going to need to get used to awkward questions now that lawmakers have woken up to the power and risk of his platform.

Security, weaponized 

What has become increasingly clear from the growing sound and fury over privacy and Facebook (and Facebook and privacy), is that a key plank of the company’s strategy to fight against the rise of consumer privacy as a mainstream concern is misdirection and cynical exploitation of valid security concerns.

Simply put, Facebook is weaponizing security to shield its erosion of privacy.

Privacy legislation is perhaps the only thing that could pose an existential threat to a business that’s entirely powered by watching and recording what people do at vast scale. And relying on that scale (and its own dark pattern design) to manipulate consent flows to acquire the private data it needs to profit.

Only robust privacy laws could bring Facebook’s self-serving house of cards tumbling down. User growth on its main service isn’t what it was but the company has shown itself very adept at picking up (and picking off) potential competitors — applying its surveillance practices to crushing competition too.

In Europe lawmakers have already tightened privacy oversight on digital businesses and massively beefed up penalties for data misuse. Under the region’s new GDPR framework compliance violations can attract fines as high as 4% of a company’s global annual turnover.

Which would mean billions of dollars in Facebook’s case — vs the pinprick penalties it has been dealing with for data abuse up to now.

Though fines aren’t the real point; if Facebook is forced to change its processes — i.e. how it harvests and mines people’s data — that could knock a major, major hole right through its profit-center.

Hence the existential nature of the threat.

The GDPR came into force in May and multiple investigations are already underway. This summer the EU’s data protection supervisor, Giovanni Buttarelli, told the Washington Post to expect the first results by the end of the year.

Which means 2018 could result in some very well known tech giants being hit with major fines. And — more interestingly — being forced to change how they approach privacy.

One target for GDPR complainants is so-called ‘forced consent‘ — where consumers are told by platforms leveraging powerful network effects that they must accept giving up their privacy as the ‘take it or leave it’ price of accessing the service. Which doesn’t exactly smell like the ‘free choice’ EU law actually requires.

It’s not just Europe, either. Regulators across the globe are paying greater attention than ever to the use and abuse of people’s data. And also, therefore, to Facebook’s business — which profits, so very handsomely, by exploiting privacy to build profiles on literally billions of people in order to dart them with ads.

US lawmakers are now directly asking tech firms whether they should implement GDPR style legislation at home.

Unsurprisingly, tech giants are not at all keen — arguing, as they did at this week’s hearing, for the need to “balance” individual privacy rights against “freedom to innovate”.

So a lobbying joint-front to try to water down any US privacy clampdown is in full effect. (Though, also asked this week whether they would leave Europe or California as a result of tougher-than-they’d-like privacy laws, none of the tech giants said they would.)

The state of California passed its own robust privacy law, the California Consumer Privacy Act, this summer, which is due to come into force in 2020. And the tech industry is not a fan. So its engagement with federal lawmakers now is a clear attempt to secure a weaker federal framework to ride over any more stringent state laws.

Europe and its GDPR obviously can’t be rolled over like that, though. Even as tech giants like Facebook have certainly been seeing how much they can get away with — to force an expensive and time-consuming legal fight.

While ‘innovation’ is one oft-trotted angle tech firms use to argue against consumer privacy protections, Facebook included, the company has another tactic too: Deploying the ‘S’ word — security — both to fend off increasingly tricky questions from lawmakers, as they finally get up to speed and start to grapple with what it’s actually doing; and — more broadly — to keep its people-mining, ad-targeting business steamrollering on by greasing the pipe that keeps the personal data flowing in.

In recent years multiple major data misuse scandals have undoubtedly raised consumer awareness about privacy, and put greater emphasis on the value of robustly securing personal data. Scandals that even seem to have begun to impact how some Facebook users use Facebook. So the risks for its business are clear.

Part of its strategic response, then, looks like an attempt to collapse the distinction between security and privacy — by using security concerns to shield privacy hostile practices from critical scrutiny, specifically by chain-linking its data-harvesting activities to some vaguely invoked “security purposes”, whether that’s security for all Facebook users against malicious non-users trying to hack them; or, wider still, for every engaged citizen who wants democracy to be protected from fake accounts spreading malicious propaganda.

So the game Facebook is playing here is to use security as a very broad brush to try to defang legislation that could radically shrink its access to people’s data.

Here, for example, is Zuckerberg responding to a question from an MEP in the EU parliament asking for answers on so-called ‘shadow profiles’ (aka the personal data the company collects on non-users) — emphasis mine:

It’s very important that we don’t have people who aren’t Facebook users that are coming to our service and trying to scrape the public data that’s available. And one of the ways that we do that is people use our service and even if they’re not signed in we need to understand how they’re using the service to prevent bad activity.

At this point in the meeting Zuckerberg also suggestively referenced MEPs’ concerns about election interference — to better play on a security fear that’s inexorably close to their hearts. (With the spectre of re-election looming next spring.) So he’s making good use of his psychology major.

“On the security side we think it’s important to keep it to protect people in our community,” he also said when pressed by MEPs to answer how a person who isn’t a Facebook user could delete its shadow profile of them.

He was also questioned about shadow profiles by the House Energy and Commerce Committee in April. And used the same security justification for harvesting data on people who aren’t Facebook users.

“Congressman, in general we collect data on people who have not signed up for Facebook for security purposes to prevent the kind of scraping you were just referring to [reverse searches based on public info like phone numbers],” he said. “In order to prevent people from scraping public information… we need to know when someone is repeatedly trying to access our services.”

He claimed not to know “off the top of my head” how many data points Facebook holds on non-users (nor even on users, which the congressman had also asked for, for comparative purposes).

These sorts of exchanges are very telling because for years Facebook has relied upon people not knowing or really understanding how its platform works to keep what are clearly ethically questionable practices from closer scrutiny.

But, as political attention has dialled up around privacy, and it’s become harder for the company to simply deny or fog what it’s actually doing, Facebook appears to be evolving its defence strategy — by defiantly arguing it simply must profile everyone, including non-users, for user security.

No matter that this is the same company which, despite maintaining all those shadow profiles on its servers, famously failed to spot Kremlin election interference going on at massive scale in its own back yard — and thus failed to protect its users from malicious propaganda.

Nor was Facebook capable of preventing its platform from being repurposed as a conduit for accelerating ethnic hate in a country such as Myanmar — with some truly tragic consequences. Yet it must, presumably, hold shadow profiles on non-users there too — and it was seemingly unable (or unwilling) to use that intelligence to help protect actual lives…

So when Zuckerberg invokes overarching “security purposes” as a justification for violating people’s privacy en masse, it pays to ask critical questions about what kind of security it’s actually purporting to be able to deliver. Beyond, y’know, continued security for its own business model as it comes under increasing attack.

What Facebook indisputably does do with ‘shadow contact information’, acquired about people via other means than the person themselves handing it over, is to use it to target people with ads. So it uses intelligence harvested without consent to make money.

Facebook confirmed as much this week, when Gizmodo asked it to respond to a study by some US academics that showed how a piece of personal data that had never been knowingly provided to Facebook by its owner could still be used to target an ad at that person.

Responding to the study, Facebook admitted it was “likely” the academic had been shown the ad “because someone else uploaded his contact information via contact importer”.

“People own their address books. We understand that in some cases this may mean that another person may not be able to control the contact information someone else uploads about them,” it told Gizmodo.

So essentially Facebook has finally admitted that consentless scraped contact information is a core part of its ad targeting apparatus.

Safe to say, that’s not going to play at all well in Europe.

Basically Facebook is saying you own and control your personal data until it can acquire it from someone else — and then, er, nope!

Yet given the reach of its network, the chances of your data not sitting on its servers somewhere seems very, very slim. So Facebook is essentially invading the privacy of pretty much everyone in the world who has ever used a mobile phone. (Something like two-thirds of the global population then.)

In other contexts this would be called spying — or, well, ‘mass surveillance’.

It’s also how Facebook makes money.

And yet when called in front of lawmakers and asked about the ethics of spying on the majority of the people on the planet, the company seeks to justify this supermassive privacy intrusion by suggesting that gathering data about every phone user without their consent is necessary for some fuzzily defined “security purposes” — even as its own record on security really isn’t looking so shiny these days.

It’s as if Facebook is trying to lift a page out of national intelligence agency playbooks — when governments claim ‘mass surveillance’ of populations is necessary for security purposes like counterterrorism.

Except Facebook is a commercial company, not the NSA.

So it’s only fighting to keep being able to carpet-bomb the planet with ads.

Profiting from shadow profiles

Another example of Facebook weaponizing security to erode privacy was also confirmed via Gizmodo’s reportage. The same academics found the company uses phone numbers provided to it by users for the specific (security) purpose of enabling two-factor authentication, which is a technique intended to make it harder for a hacker to take over an account, to also target them with ads.

In a nutshell, Facebook is exploiting its users’ valid security fears about being hacked in order to make itself more money.

Any security expert worth their salt will have spent long years encouraging web users to turn on two-factor authentication for as many of their accounts as possible in order to reduce the risk of being hacked. So Facebook exploiting that security vector to boost its profits is truly awful. Because it works against those valiant infosec efforts — so it risks eroding users’ security as well as trampling all over their privacy.

It’s just a double whammy of awful, awful behavior.

And of course, there’s more.

A third example of how Facebook seeks to play on people’s security fears to enable deeper privacy intrusion comes by way of the recent rollout of its facial recognition technology in Europe.

In this region the company had previously been forced to pull the plug on facial recognition after being leaned on by privacy conscious regulators. But after having to redesign its consent flows to come up with its version of ‘GDPR compliance’ in time for May 25, Facebook used this opportunity to revisit a rollout of the technology on Europeans — by asking users there to consent to switching it on.

Now you might think that asking for consent sounds okay on the surface. But it pays to remember that Facebook is a master of dark pattern design.

Which means it’s expert at extracting outcomes from people by applying these manipulative dark arts. (Don’t forget, it has even directly experimented in manipulating users’ emotions.)

So can it be a free consent if ‘individual choice’ is set against a powerful technology platform that’s both in charge of the consent wording, button placement and button design, and which can also data-mine the behavior of its 2BN+ users to further inform and tweak (via A/B testing) the design of the aforementioned ‘consent flow’? (Or, to put it another way, is it still ‘yes’ if the tiny greyscale ‘no’ button fades away when your cursor approaches while the big ‘YES’ button pops and blinks suggestively?)

In the case of facial recognition, Facebook used a manipulative consent flow that included a couple of self-serving ‘examples’ — selling the ‘benefits’ of the technology to users before they landed on the screen where they could choose either ‘yes, switch it on’ or ‘no, leave it off’.

One of which explicitly played on people’s security fears — by suggesting that without the technology enabled users were at risk of being impersonated by strangers. Whereas, by agreeing to do what Facebook wanted you to do, Facebook said it would help “protect you from a stranger using your photo to impersonate you”…

That example shows the company is not above actively jerking on the chain of people’s security fears, as well as passively exploiting similar security worries when it jerkily repurposes 2FA digits for ad targeting.

There’s even more too; Facebook has been positioning itself to pull off what is arguably the greatest (in the ‘largest’ sense of the word) appropriation of security concerns yet to shield its behind-the-scenes trampling of user privacy — when, from next year, it will begin injecting ads into the WhatsApp messaging platform.

These will be targeted ads, because Facebook has already changed the WhatsApp T&Cs to link Facebook and WhatsApp accounts — via phone number matching and other technical means that enable it to connect distinct accounts across two otherwise entirely separate social services.
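Facebook hasn’t detailed how that matching works; the sketch below only illustrates the general idea of joining two account databases on a normalized phone number, with all names and numbers invented:

    /* Purely illustrative sketch of joining two account databases on a
       normalized phone number. All names and numbers are invented. */
    #include <ctype.h>
    #include <stdio.h>
    #include <string.h>

    /* Strip "+1 (415) 555-0100" style formatting down to bare digits so
       the same person's number matches across both services. */
    static void normalize(const char *in, char *out, size_t outsz) {
        size_t j = 0;
        for (size_t i = 0; in[i] != '\0' && j + 1 < outsz; i++)
            if (isdigit((unsigned char)in[i]))
                out[j++] = in[i];
        out[j] = '\0';
    }

    int main(void) {
        const char *whatsapp_number = "+1 (415) 555-0100"; /* number seen by WhatsApp */
        const char *facebook_number = "14155550100";       /* number on a Facebook account */

        char a[32], b[32];
        normalize(whatsapp_number, a, sizeof a);
        normalize(facebook_number, b, sizeof b);

        if (strcmp(a, b) == 0)
            puts("same person: link the WhatsApp and Facebook accounts");
        return 0;
    }

Scale that kind of join across billions of uploaded address books and two nominally separate services collapse into one identity graph.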

Thing is, WhatsApp got fat on its founders’ promise of 100% ad-free messaging. The founders were also privacy and security champions, pushing to roll e2e encryption right across the platform — even after selling their app to the adtech giant in 2014.

WhatsApp’s robust e2e encryption means Facebook literally cannot read the messages users are sending each other. But that does not mean Facebook is respecting WhatsApp users’ privacy.

On the contrary: the company has given itself broader rights to user data by changing the WhatsApp T&Cs and by matching accounts.

So, really, it’s all just one big Facebook profile now — whichever of its products you do (or don’t) use.

This means that even without literally reading your WhatsApps, Facebook can still know plenty about a WhatsApp user, thanks to any other Facebook Group profiles they have ever had and any shadow profiles it maintains in parallel. WhatsApp users will soon become 1.5BN+ bullseyes for yet more creepily intrusive Facebook ads to seek their target.

No private spaces, then, in Facebook’s empire as the company capitalizes on people’s fears to shift the debate away from personal privacy and onto the self-serving notion of ‘secured by Facebook spaces’ — in order that it can keep sucking up people’s personal data.

This is a very dangerous strategy, though.

Because if Facebook can’t even deliver security for its users, thereby undermining those “security purposes” it keeps banging on about, it might find it difficult to sell the world on going naked just so Facebook Inc can keep turning a profit.

What’s the best security practice of all? That’s super simple: Not holding data in the first place.

US government loses bid to force Facebook to wiretap Messenger calls

US government investigators have lost a case to force Facebook to wiretap calls made over its Messenger app.

A joint federal and state law enforcement effort investigating the MS-13 gang had pushed a district court to hold the social networking giant in contempt of court for refusing to permit real-time listening in on voice calls.

According to sources speaking to Reuters, the judge later ruled in Facebook’s favor — although, because the case remains under seal, it’s not known for what reason.

The case, filed in a Fresno, Calif. district court, centers on alleged gang members accused of murder and other crimes. The government had been pushing to prosecute 16 suspected gang members, but is said to have leaned on Facebook to obtain further evidence.

Reuters said that an affidavit submitted by an FBI agent stated that “there is no practical method available by which law enforcement can monitor” calls on Facebook Messenger. Because Facebook-owned WhatsApp uses end-to-end encryption to prevent eavesdroppers, not even the company can listen in — something law enforcement has long claimed hinders investigations.

But Facebook Messenger doesn’t end-to-end encrypt voice calls, making real-time listening in on calls possible.

Although phone companies and telcos are required under US law to allow police and federal agencies access to real-time phone calls with a court-signed wiretap order, internet companies like Facebook fall outside the scope of the law.

Privacy advocates saw this case as a way to remove that exemption, accusing the government of trying to backdoor the encrypted app, just two years after the FBI sued Apple over a similar request to break into the encrypted iPhone belonging to San Bernardino shooter Syed Farook.

The FBI declined to comment. Facebook did not respond to a request for comment.