Union’s human rights challenge to Deliveroo dismissed by UK High Court

A UK union that has been fighting to win collective bargaining rights for gig economy riders who provide delivery services via Deliveroo’s platform has had its claim for a judicial review of an earlier blocking decision dismissed by the High Court today.

Six months ago the IWGB Union was granted permission to challenge Deliveroo’s opposition to collective bargaining for couriers on human rights grounds.

The union had already lost a challenge to Deliveroo’s employment classification for couriers last year, when the Central Arbitration Committee (CAC) ruled that Deliveroo riders could not be considered workers because they had a genuine right to find a substitute to do their job for them.

The union disputes that finding but so far the courts have accepted Deliveroo’s assertion that riders are independent contractors — an employment classification that does not support forming a collective bargaining unit.

Even so, the union sought to pursue a case for collective bargaining on one ground related to Article 11 of the European Convention on Human Rights, which protects freedom of assembly and association.

But the High Court has now dismissed its argument, blocking its claim for a judicial review.

Writing in today’s judgement, Mr Justice Supperstone concludes: “I do not consider that, on the findings made by the CAC, the Riders have the right for which the Union contends under Article 11(1). Neither domestic nor Strasbourg case law supports this contention. Article 11(1) is not engaged in this case.”

Commenting in a statement, IWGB general secretary Dr Jason Moyer-Lee said: “Today’s judgement is a terrible one, not just in terms of what it means for low paid Deliveroo riders, but also in terms of understanding the European Convention on Human Rights. Deliveroo riders should be entitled to basic worker rights as well as to the ability to be represented by trade unions to negotiate pay and terms and conditions.”

The union has vowed to appeal the decision.

Deliveroo, meanwhile, described the ruling as a “victory for riders”. It also argues that the judgement is consistent with previous decisions reached across Europe — including in France and the Netherlands.

“We are pleased that today’s judgment upholds the earlier decisions of the High Court and the CAC that Deliveroo riders are self-employed, providing them the flexibility they want,” said Dan Warne, UK MD, in a statement. “In addition to emphatically confirming this under UK national law, the Court also carefully examined the question under European law and concluded riders are self-employed.

“This is a victory for riders who have consistently told us the flexibility to choose when and where they work, which comes with self-employment, is their number one reason for riding with Deliveroo. We will continue to seek to offer riders more security and make the case that Government should end the trade-off in Britain between flexibility and security.”

Despite not having collective bargaining rights, in recent years UK gig economy workers have carried out a number of wildcat strikes — often related to changes to pricing policies.

Two years ago Deliveroo couriers in the UK staged a number of protests after the company trialed a new pricing structure.

In recent months UberEats couriers in a number of UK cities have protested over pay.

UK Uber drivers have also organized to protest pay and conditions this year.

The UK government revealed a package of labor market reforms early this year that it said were intended to bolster workers’ rights, including for those in the gig economy.

It also announced it would be carrying out a number of consultations, however — leaving the full details of the reforms to be confirmed.

Facebook failed to stop a child bride being auctioned on its platform

Facebook failed to prevent its platform being used to auction a 16-year-old girl off for marriage in South Sudan.

Child early and forced marriage (CEFM) is the most commonly reported form of gender-based violence in South Sudan, according to a recent Plan International report on the myriad risks for adolescent girls living in the war-torn region.

Now it seems girls in that part of the world have to worry about social media too.

Vice reported on the story in detail yesterday, noting that Facebook took down the auction post but not until after the girl had already been married off — and more than two weeks after the family first announced the intention to sell the child via its platform, on October 25.

Facebook said it first learned about the auction post on November 9, after which it says it took it down within 24 hours. It’s not clear how many hours out of the 24 it took Facebook to take the decision to remove the post.

A multimillionaire businessman from South Sudan’s capital city reportedly won the auction after offering a record ‘price’ — of 530 cows, three Land Cruiser V8 cars and $10,000 — to marry the child, Nyalong Ngong Deng Jalang.

Plan International told Vice it’s the first known incident of Facebook being used to auction a child bride.

“It is really concerning because, as it was such a lucrative transaction and it attracted so much attention, we are worried that this could act as an incentive for others to follow suit,” the development organization told Vice.

A different human rights NGO posted a screengrab of the deleted auction post to Twitter, writing: “Despite various appeals made by human rights group, a 16 year old girl child became a victim to an online marriage auction post, which was not taken down by Facebook in South Sudan.”

We asked Facebook to explain how it failed to act in time to prevent the auction and it sent us the following statement, attributed to a spokesperson:

Any form of human trafficking — whether posts, pages, ads or groups — is not allowed on Facebook. We removed the post and permanently disabled the account belonging to the person who posted this to Facebook. We’re always improving the methods we use to identify content that breaks our policies, including doubling our safety and security team to more than 30,000 and investing in technology.

The more than two-week delay between the auction post going live and its removal by Facebook raises serious questions about the company’s claims to have made substantial investments in improving its moderation processes.

Human rights groups had directly tried to flag the post to Facebook. The auction had also reportedly attracted heavy local media attention. Yet it still failed to notice and act until weeks later — by which time it was too late because the girl had been sold and married off.

Facebook does not release country-level data about its platform so it’s not clear how many users it has in the South Sudan region.

Nor does it offer a breakdown of the locations of the circa 15,000 people it employs or contracts to carry out content review duties across its global content platform (which has 2BN+ users).

Facebook admits that the content reviewers it uses do not speak every language in the world where its platform is used. Nor do they even speak every language that’s widely used in the world. So it’s highly unlikely it has any reviewers at all with a strong grasp of the indigenous languages spoken in the South Sudan region.

We asked Facebook how many moderators it employs who speak any of the languages in the South Sudan region (which is multilingual). A spokeswoman was unable to provide an immediate answer.

The upshot of Facebook carrying out retrospective content moderation from afar, relying on a tiny number of reviewers (relative to its total users), is that the company is failing to respond to human rights risks as it should.

Facebook has not established on-the-ground teams across its international business with the necessary linguistic and cultural sensitivities to be able to respond directly, or even quickly, to risks being created by its platform in every market where it operates. (A large proportion of its reviewers are sited in Germany — which passed a social media hate speech law a year ago.)

AI is not going to fix that very hard problem either — not on any human time-scale. And in the meantime Facebook is letting actual humans take the strain.

But two weeks to notice and take down a child bride auction is not the kind of metric any business wants to be measured by.

It’s increasingly clear that Facebook’s failure to invest adequately across its international business to oversee and manage the human rights impacts of its technology tools can have a very high cost indeed.

In South Sudan a lack of adequate oversight has resulted in its platform being repurposed as the equivalent of a high-tech slave market.

Facebook also continues to be on the hook for serious failings in Myanmar, where its platform has been blamed for spreading hate speech and accelerating ethnic violence.

You don’t have to look far to see other human rights abuses being aided and abetted by access to unchecked social media tools.

UN warns over human rights impact of a ‘digital welfare state’

The UN special rapporteur on extreme poverty and human rights has raised concerns about the UK’s rush to apply digital technologies and data tools to socially re-engineer the delivery of public services at scale, warning in a statement today that the impact of a digital welfare state on vulnerable people will be “immense”.

He has also called for stronger laws and enforcement of a rights-based legal framework to ensure that the use of technologies like AI for public service provision does not end up harming people.

“There are few places in government where these developments are more tangible than in the benefit system,” writes professor Philip Alston. “We are witnessing the gradual disappearance of the postwar British welfare state behind a webpage and an algorithm. In its place, a digital welfare state is emerging. The impact on the human rights of the most vulnerable in the UK will be immense.”

It’s a timely intervention, with UK ministers also now pushing to accelerate the use of digital technologies to transform the country’s free-at-the-point-of-use healthcare system.

Alston’s statement also warns that the push towards automating public service delivery — including through increasing use of AI technologies — is worryingly opaque.

“A major issue with the development of new technologies by the UK government is a lack of transparency. The existence, purpose and basic functioning of these automated government systems remains a mystery in many cases, fuelling misconceptions and anxiety about them,” he writes, adding: “Evidence shows that the human rights of the poorest and most vulnerable are especially at risk in such contexts.”

So, much like tech giants in their unseemly disruptive rush, UK government departments are presenting shiny new systems as sealed boxes — and that’s also a blocker to accountability.

“Central and local government departments typically claim that revealing more information on automation projects would prejudice its commercial interests or those of the IT consultancies it contracts to, would breach intellectual property protections, or would allow individuals to ‘game the system’,” writes Alston. “But it is clear that more public knowledge about the development and operation of automated systems is necessary.”

Radical social re-engineering

He argues that the “rubric of austerity” framing of domestic policies put in place since 2010 is misleading — saying the government’s intent, using the trigger of the global financial crisis, has rather been to transform society via a digital takeover of state service provision.

Or, as he puts it: “In the area of poverty-related policy, the evidence points to the conclusion that the driving force has not been economic but rather a commitment to achieving radical social re-engineering.”

Alston’s assessment follows a two-week visit to the UK during which he spoke to people across British society, touring public service and community-provided institutions such as job centers and food banks; meeting with ministers and officials across all levels of government, as well as opposition politicians; and talking to representatives from civil society institutions, including frontline workers.

His statement discusses in detail the much criticized overhaul of the UK’s benefits system, in which the government has sought to combine multiple benefits into a single so-called Universal Credit, zooming in on the “highly controversial” use of “digital-by-default” service provision here — and wondering why “some of the most vulnerable and those with poor digital literacy had to go first in what amounts to a nationwide digital experiment”.

“Universal Credit has built a digital barrier that effectively obstructs many individuals’ access to their entitlements,” he warns, pointing to big gaps in digital skills and literacy for those on low incomes and also detailing how civil society has been forced into a lifeline support role — despite its own austerity-enforced budget constraints.

“The reality is that digital assistance has been outsourced to public libraries and civil society organizations,” he writes, suggesting that for the most vulnerable in society, a shiny digital portal is operating more like a firewall.

“Public libraries are on the frontline of helping the digitally excluded and digitally illiterate who wish to claim their right to Universal Credit,” he notes. “While library budgets have been severely cut across the country, they still have to deal with an influx of Universal Credit claimants who arrive at the library, often in a panic, to get help claiming benefits online.”

Alston also suggests that digital-by-default is — in practice — “much closer to digital only”, with alternative contact routes, such as a telephone helpline, being actively discouraged by government — leading to “long waiting times” and frustrating interactions with “often poorly trained” staff.

Human cost of automated errors

His assessment highlights how automation can deliver errors at scale too — saying he was told by various experts and civil society organizations of problems with the Real Time Information (RTI) system that underpins Universal Credit.

The RTI system is supposed to take data on earnings submitted by employers to one government department (HMRC) and share it with another (the DWP) to automatically calculate monthly benefits. But if incorrect (or late) earnings data is passed along there’s a knock-on impact on the payout — with Alston saying government has chosen to give the automated system the “benefit of the doubt” over and above the claimant.

Yet here a ‘computer says no’ response can literally mean a vulnerable person not having enough money to eat or properly heat their house that month.

“According to DWP, a team of 50 civil servants work full-time on dealing with the 2% of the millions of monthly transactions that are incorrect,” he writes. “Because the default position of DWP is to give the automated system the benefit of the doubt, claimants often have to wait for weeks to get paid the proper amount, even when they have written proof that the system was wrong. An old-fashioned pay slip is deemed irrelevant when the information on the computer is different.”
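
To make the failure mode concrete, here is a minimal, hypothetical sketch of an earnings-tapered award calculation in which the automated feed always wins. The figures (a £317.82 monthly standard allowance, a 63% taper) reflect 2018 Universal Credit parameters, but the function and its logic are illustrative assumptions, not the DWP’s actual code:

    # Hypothetical sketch: the automated system gets the 'benefit of the
    # doubt', so claimant-held evidence (e.g. a payslip) never enters the
    # calculation.
    def monthly_award(standard_allowance, rti_reported_earnings,
                      claimant_payslip_earnings=None, taper_rate=0.63):
        """Compute a Universal Credit-style monthly award from earnings."""
        # Trust the employer-submitted RTI figure outright; the payslip
        # argument exists but is deliberately never consulted.
        deduction = rti_reported_earnings * taper_rate
        return max(standard_allowance - deduction, 0.0)

    # If an employer files late, two months' pay can land in one assessment
    # period, wiping out that month's award:
    print(monthly_award(317.82, 300.0))  # pay counted once  -> 128.82
    print(monthly_award(317.82, 600.0))  # pay counted twice -> 0.0

On this model an error in the feed flows straight into the payment, with no human check unless the claimant disputes it afterwards — consistent with the weeks-long correction delays Alston describes.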

Another automated characteristic of the benefits system he discusses — applied in contexts such as ‘Risk-based verification’ — segments claimants into low, medium and high risk categories.

This is also problematic because, as Alston points out, people flagged as ‘higher risk’ are subjected to “more intense scrutiny and investigation, often without even being aware of this fact”.

“The presumption of innocence is turned on its head when everyone applying for a benefit is screened for potential wrongdoing in a system of total surveillance,” he warns. “And in the absence of transparency about the existence and workings of automated systems, the rights to contest an adverse decision, and to seek a meaningful remedy, are illusory.”
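
The scoring criteria behind such segmentation are not public — which is exactly the transparency problem Alston identifies. As a purely hypothetical sketch of what risk-based tiering can look like (the thresholds and labels here are invented, not drawn from any actual system):

    def verification_tier(risk_score):
        """Map an opaque model score onto an intensity of scrutiny."""
        if risk_score >= 0.7:
            return "high"    # e.g. extra documentary checks, investigation
        if risk_score >= 0.4:
            return "medium"  # e.g. additional evidence requested
        return "low"         # standard checks only

    # The claimant sees none of this: not the score, not the tier, nor the
    # features that produced either.
    print(verification_tier(0.82))  # -> "high"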

Summing up his concerns he argues that for automation to have positive political — and democratic — outcomes it must be accompanied by adequate levels of transparency so that systems can be properly assessed.

Rule of law, not ‘ethics-washing’

“There is nothing inherent in Artificial Intelligence and other technologies that enable automation that threatens human rights and the rule of law. The reality is that governments simply seek to operationalize their political preferences through technology; the outcomes may be good or bad. But without more transparency about the development and use of automated systems, it is impossible to make such an assessment. And by excluding citizens from decision-making in this area we may set the stage for a future based on an artificial democracy,” he writes.

“Transparency about the existence, purpose, and use of new technologies in government and participation of the public in these debates will go a long way toward demystifying technology and clarifying distributive impacts. New technologies certainly have great potential to do good. But more knowledge may also lead to more realism about the limits of technology. A machine learning system may be able to beat a human at chess, but it may be less adept at solving complicated social ills such as poverty.”

His statement also raises concerns about new institutions that are currently being set up by the UK government in the area of big data and AI, which are intended to guide and steer developments — but which he notes “focus heavily on ethics”.

“While their establishment is certainly a positive development, we should not lose sight of the limits of an ethics frame,” he warns. “Ethical concepts such as fairness are without agreed upon definitions, unlike human rights which are law. Government use of automation, with its potential to severely restrict the rights of individuals, needs to be bound by the rule of law and not just an ethical code.”

Calling for existing laws to be strengthened to properly regulate the use of digital technologies in the public sector, Alston also raises an additional worry — warning over a rights carve out the government baked into updated privacy laws for public sector data (a concern we flagged at the start of this year).

On this he notes: “While the EU General Data Protection Regulation includes promising provisions related to automated decision-making and Data Protection Impact Assessments, it is worrying that the Data Protection Act 2018 creates a quite significant loophole to the GDPR for government data use and sharing in the context of the Framework for Data Processing by Government.”

Internet Freedom Declines Again, with ‘Polarized Echo Chambers’ Aiding Censorship Efforts

The amount of freedom on the global Internet has declined for the eighth straight year, with a group of countries moving toward “digital authoritarianism,” according to a new report from Freedom House.

A number of factors, including the spread of false rumors and hateful propaganda online, have contributed to an Internet that “can push citizens into polarized echo chambers that pull at the social fabric of the country,” said the report, released Thursday. These rifts often give aid to antidemocratic forces, including government efforts to censor the Internet, Freedom House said.

During 2018, authoritarians used claims of fake news and of data breaches and other scandals as an excuse to move closer to a Chinese model of Internet censorship, said the report, cosponsored by the Internet Society.

“China is exporting its model of digital authoritarianism throughout the world, posing a serious threat to the future of free and open Internet,” said Sanja Kelly, director for Internet Freedom at Freedom House. “In order to counter it, democratic governments need to showcase that there is a better way to manage the Internet, and that cybersecurity and disinformation can be successfully addressed without infringing on human rights.”

Thirty-six countries have sent representatives to Chinese training programs on censorship and surveillance since January 2017. Another 18 countries have purchased monitoring technology or facial recognition systems from Chinese companies during the same time frame.

“Digital authoritarianism is being promoted [by China] as a way for governments to control their citizens through technology, inverting the concept of the Internet as an engine for human liberation,” Freedom House said.

About 71 percent of the Internet’s 3.7 billion users live in countries where technology users were arrested or imprisoned for posting content related to political, social, or religious issues, the report said. Fifty-five percent live in countries where political, social or religious content was blocked online, and 48 percent live in countries where people have been attacked or killed for their online activities since June 2017.

About 47 percent of Internet users live in countries where access to social media or messaging platforms were temporarily or permanently blocked.

Freedom House reviewed the Internet-related policies of 65 countries. Internet freedom declined in 26 countries, including the United States, with the biggest score declines in Egypt and Sri Lanka. Nineteen countries posted gains in Internet freedom, although most of the increases were minor, the organization said.

During the year, 17 governments approved or proposed new laws restricting online media in the name of fighting fake news. Eighteen countries increased surveillance efforts.

The most restrictive countries were China, Iran, Ethiopia, Syria, and Cuba, the group said. Iceland, Estonia, Canada, Germany, and Australia were the countries with the most Internet freedom. The United States ranked sixth highest, the U.K. seventh, and Japan ninth.

In a dozen countries, declines in Internet freedom were related to elections. In these countries, the lead-up to an election resulted in a spread of disinformation, new censorship, technical attacks, or arrests of government critics, Freedom House said.

In addition to concerns about censorship and the spread of disinformation, the report also decries a loss of online privacy. Even as some countries push for more personal protections, “the unbridled collection of personal data has broken down traditional notions of privacy,” Freedom House said.

The report offers several recommendations for policymakers, for private companies, and for civil society. Governments should ensure that all Internet-related laws adhere to international human rights laws, and they should enact strong data protection laws, the report recommends.

Members of civil society can work with private companies on fact-checking efforts and can monitor their home countries’ collaboration with Chinese surveillance and censorship efforts, the report says.

In addition to the Internet Society, sponsors of the report include Google, Oath, the New York Community Trust, the Dutch Ministry of Foreign Affairs, and the U.S. Department of State’s Bureau of Democracy, Human Rights, and Labor.

Read the Freedom on the Net report.


Big tech must not reframe digital ethics in its image

Facebook founder Mark Zuckerberg’s visage loomed large over the European parliament this week, both literally and figuratively, as global privacy regulators gathered in Brussels to interrogate the human impacts of technologies that derive their power and persuasiveness from our data.

The eponymous social network has been at the center of a privacy storm this year. And every fresh Facebook content concern — be it about discrimination or hate speech or cultural insensitivity — adds to a damaging flood.

The overarching discussion topic at the privacy and data protection confab, both in the public sessions and behind closed doors, was ethics: How to ensure engineers, technologists and companies operate with a sense of civic duty and build products that serve the good of humanity.

So, in other words, how to ensure people’s information is used ethically — not just in compliance with the law. Fundamental rights are increasingly seen by European regulators as a floor not the ceiling. Ethics are needed to fill the gaps where new uses of data keep pushing in.

As the EU’s data protection supervisor, Giovanni Buttarelli, told delegates at the start of the public portion of the International Conference of Data Protection and Privacy Commissioners: “Not everything that is legally compliant and technically feasible is morally sustainable.”

As if on cue Zuckerberg kicked off a pre-recorded video message to the conference with another apology. Albeit this was only for not being there to give an address in person. Which is not the kind of regret many in the room are now looking for, as fresh data breaches and privacy incursions keep being stacked on top of Facebook’s Cambridge Analytica data misuse scandal like an unpalatable layer cake that never stops being baked.

Evidence of a radical shift of mindset is what champions of civic tech are looking for — from Facebook in particular and adtech in general.

But there was no sign of that in Zuckerberg’s potted spiel. Rather he displayed the kind of masterfully slick PR manoeuvring that’s associated with politicians on the campaign trail. It’s the natural patter for certain big tech CEOs too, these days, in a sign of our sociotechnical political times.

(See also: Facebook hiring ex-UK deputy PM, Nick Clegg, to further expand its contacts database of European lawmakers.)

And so the Facebook founder seized on the conference’s discussion topic of big data ethics and tried to zoom right back out again. Backing away from talk of tangible harms and damaging platform defaults — aka the actual conversational substance of the conference (from talk of how dating apps are impacting how much sex people have and with whom they’re doing it; to shiny new biometric identity systems that have rebooted discriminatory caste systems) — to push the idea of a need to “strike a balance between speech, security, privacy and safety”.

This was Facebook trying to reframe the idea of digital ethics — to make it so very big-picture-y that it could embrace his people-tracking ad-funded business model as a fuzzily wide public good, with a sort of ‘oh go on then’ shrug.

“Every day people around the world use our services to speak up for things they believe in. More than 80 million small businesses use our services, supporting millions of jobs and creating a lot of opportunity,” said Zuckerberg, arguing for a ‘both sides’ view of digital ethics. “We believe we have an ethical responsibility to support these positive uses too.”

Indeed, he went further, saying Facebook believes it has an “ethical obligation to protect good uses of technology”.

And from that self-serving perspective almost anything becomes possible — as if Facebook is arguing that breaking data protection law might really be the ‘ethical’ thing to do. (Or, as the existentialists might put it: ‘If god is dead, then everything is permitted’.)

It’s an argument that radically elides some very bad things, though. And glosses over problems that are systemic to Facebook’s ad platform.

A little later, Google’s CEO Sundar Pichai also dropped into the conference in video form, bringing much the same message.

“The conversation about ethics is important. And we are happy to be a part of it,” he began, before an instant hard pivot into referencing Google’s founding mission of “organizing the world’s information — for everyone” (emphasis his), before segueing — via “knowledge is empowering” — to asserting that “a society with more information is better off than one with less”.

Is having access to more information of unknown and dubious or even malicious provenance better than having access to some verified information? Google seems to think so.


The pre-recorded Pichai didn’t have to concern himself with all the mental ellipses bubbling up in the thoughts of the privacy and rights experts in the room.

“Today that mission still applies to everything we do at Google,” his digital image droned on, without mentioning what Google is thinking of doing in China. “It’s clear that technology can be a positive force in our lives. It has the potential to give us back time and extend opportunity to people all over the world.

“But it’s equally clear that we need to be responsible in how we use technology. We want to make sound choices and build products that benefit society. That’s why earlier this year we worked with our employees to develop a set of AI principles that clearly state what types of technology applications we will pursue.”

Of course it sounds fine. Yet Pichai made no mention of the staff who’ve actually left Google because of ethical misgivings. Nor the employees still there and still protesting its ‘ethical’ choices.

It’s not just ‘almost as if’ the Internet’s adtech duopoly is singing from the same ‘ads for greater good trumping the bad’ hymn sheet; the duopoly is doing exactly that.

The ‘we’re not perfect and have lots more to learn’ line that also came from both CEOs seems mostly intended to manage regulatory expectation vis-a-vis data protection — and indeed on the wider ethics front.

They’re not promising to do no harm. Nor to always protect people’s data. They’re literally saying they can’t promise that. Ouch.

Meanwhile, another common FaceGoog message — an intent to introduce ‘more granular user controls’ — just means they’re piling even more responsibility onto individuals to proactively check (and keep checking) that their information is not being horribly abused.

This is a burden neither company can speak to in any other fashion — because the real solution would be for their platforms not to hoard people’s data in the first place.

The other ginormous elephant in the room is big tech’s massive size; which is itself skewing the market and far more besides.

Neither Zuckerberg nor Pichai directly addressed the notion of overly powerful platforms themselves causing structural societal harms, such as by eroding the civically minded institutions that are essential to defend free societies and indeed uphold the rule of law.

Of course it’s an awkward conversation topic for tech giants if vital institutions and societal norms are being undermined because of your cut-throat profiteering on the unregulated cyber seas.

A great tech fix to avoid answering awkward questions is to send a video message in your CEO’s stead. And/or a few minions. Facebook VP and chief privacy officer, Erin Egan, and Google’s SVP of global affairs Kent Walker, were duly dispatched and gave speeches in person.

They also had a handful of audience questions put to them by an on stage moderator. So it fell to Walker, not Pichai, to speak to Google’s contradictory involvement in China in light of its foundational claim to be a champion of the free flow of information.

“We absolutely believe in the maximum amount of information available to people around the world,” Walker said on that topic, after being allowed to intone on Google’s goodness for almost half an hour. “We have said that we are exploring the possibility of ways of engaging in China to see if there are ways to follow that mission while complying with laws in China.

“That’s an exploratory project — and we are not in a position at this point to have an answer to the question yet. But we continue to work.”

Egan, meanwhile, batted away her trio of audience concerns — about Facebook’s lack of privacy by design/default; and how the company could ever address ethical concerns without dramatically changing its business model — by saying it has a new privacy and data use team sitting horizontally across the business, as well as a data protection officer (an oversight role mandated by the EU’s GDPR; into which Facebook plugged its former global deputy chief privacy officer, Stephen Deadman, earlier this year).

She also said the company continues to invest in AI for content moderation purposes. So, essentially, more trust us. And trust our tech.

She also replied in the affirmative when asked whether Facebook will “unequivocally” support a strong federal privacy law in the US — with protections “equivalent” to those in Europe’s data protection framework.

But of course Zuckerberg has said much the same thing before — while simultaneously advocating for weaker privacy standards domestically. So who now really wants to take Facebook at its word on that? Or indeed on anything of human substance.

Not the EU parliament, for one. MEPs sitting in the parliament’s other building, in Strasbourg, this week adopted a resolution calling for Facebook to agree to an external audit by regional oversight bodies.

But of course Facebook prefers to run its own audit. And in a response statement the company claims it’s “working relentlessly to ensure the transparency, safety and security” of people who use its service (so bad luck if you’re one of those non-users it also tracks then). Which is a very long-winded way of saying ‘no, we’re not going to voluntarily let the inspectors in’.

Facebook’s problem now is that trust, once burnt, takes years and mountains’ worth of effort to restore.

This is the flip side of ‘move fast and break things’. (Indeed, one of the conference panels was entitled ‘move fast and fix things’.) It’s also the hard-to-shift legacy of an unapologetically blind ~decade-long dash for growth regardless of societal cost.

Given that, it looks unlikely that Zuckerberg’s attempt to paint a portrait of digital ethics in his company’s image will do much to restore trust in Facebook.

Not so long as the platform retains the power to cause damage at scale.

It was left to everyone else at the conference to discuss the hollowing out of democratic institutions, societal norms, human interactions and so on — as a consequence of data (and market capital) being concentrated in the hands of the ridiculously powerful few.

“Today we face the gravest threat to our democracy, to our individual liberty in Europe since the war and the United States perhaps since the civil war,” said Barry Lynn, a former journalist and senior fellow at the Google-backed New America Foundation think tank in Washington, D.C., where he had directed the Open Markets Program — until it was shut down after he wrote critically about, er, Google.

“This threat is the consolidation of power — mainly by Google, Facebook and Amazon — over how we speak to one another, over how we do business with one another.”

Meanwhile the original architect of the World Wide Web, Tim Berners-Lee, who has been warning about the crushing impact of platform power for years, is now working on trying to decentralize the net’s data hoarders via new technologies intended to give users greater agency over their data.

On the democratic damage front, Lynn pointed to how news media is being hobbled by an adtech duopoly now sucking hundreds of billions of ad dollars out of the market annually — by renting out what he dubbed their “manipulation machines”.

Not only do they sell access to these ad targeting tools to mainstream advertisers — to sell the usual products, like soap and diapers — they’re also, he pointed out, taking dollars from “autocrats and would be autocrats and other social disruptors to spread propaganda and fake news to a variety of ends, none of them good”.

The platforms’ unhealthy market power is the result of a theft of people’s attention, argued Lynn. “We cannot have democracy if we don’t have a free and robustly funded press,” he warned.

His solution to the society-deforming might of platform power? Not a newfangled decentralization tech but something much older: Market restructuring via competition law.

“The basic problem is how we structure or how we have failed to structure markets in the last generation. How we have licensed or failed to license monopoly corporations to behave.

“In this case what we see here is this great mass of data. The problem is the combination of this great mass of data with monopoly power in the form of control over essential pathways to the market combined with a license to discriminate in the pricing and terms of service. That is the problem.”

“The result is to centralize,” he continued. “To pick and choose winners and losers. In other words the power to reward those who heed the will of the master, and to punish those who defy or question the master — in the hands of Google, Facebook and Amazon… That is destroying the rule of law in our society and is replacing rule of law with rule by power.”

For an example of an entity that’s currently being punished by Facebook’s grip on the social digital sphere you need look no further than Snapchat.

Also on the stage in person: Apple’s CEO Tim Cook, who didn’t mince his words either — attacking what he dubbed a “data industrial complex” which he said is “weaponizing” people’s personal data against them for private profit.

The adtech modus operandi sums to “surveillance”, Cook asserted.

Cook called this a “crisis”, painting a picture of technologies being applied in an ethics-free vacuum to “magnify our worst human tendencies… deepen divisions, incite violence and even undermine our shared sense of what is true and what is false” — by “taking advantage of user trust”.

“This crisis is real… And those of us who believe in technology’s potential for good must not shrink from this moment,” he warned, telling the assembled regulators that Apple is aligned with their civic mission.

Of course Cook’s position also aligns with Apple’s hardware-dominated business model — in which the company makes most of its money by selling premium priced, robustly encrypted devices, rather than monopolizing people’s attention to sell their eyeballs to advertisers.

The growing public and political alarm over how big data platforms stoke addiction and exploit people’s trust and information — and the idea that an overarching framework of not just laws but digital ethics might be needed to control this stuff — dovetails neatly with the alternative track that Apple has been pounding for years.

So for Cupertino it’s easy to argue that the ‘collect it all’ approach of data-hungry platforms is both lazy thinking and irresponsible engineering, as Cook did this week.

“For artificial intelligence to be truly smart it must respect human values — including privacy,” he said. “If we get this wrong, the dangers are profound. We can achieve both great artificial intelligence and great privacy standards. It is not only a possibility — it is a responsibility.”

Yet Apple is not only a hardware business. In recent years the company has been expanding and growing its services business. It even involves itself in (a degree of) digital advertising. And it does business in China.

It is, after all, still a for-profit business — not a human rights regulator. So we shouldn’t be looking to Apple to spec out a digital ethical framework for us, either.

No profit making entity should be used as the model for where the ethical line should lie.

Apple sets a far higher standard than other tech giants, certainly, even as its grip on the market is far more partial because it doesn’t give its stuff away for free. But it’s hardly perfect where privacy is concerned.

One inconvenient example for Apple is that it takes money from Google to make the company’s search engine the default for iOS users — even as it offers iOS users a choice of alternatives (if they go looking to switch) which includes pro-privacy search engine DuckDuckGo.

DDG is a veritable minnow vs Google, and Apple builds products for the consumer mainstream, so it is supporting privacy by putting a niche search engine alongside a behemoth like Google — as one of just four choices it offers.

But defaults are hugely powerful. So Google search being the iOS default means most of Apple’s mobile users will have their queries fed straight into Google’s surveillance database, even as Apple works hard to keep its own servers clear of user data by not collecting their stuff in the first place.

There is a contradiction there. So there is a risk for Apple in amping up its rhetoric against a “data industrial complex” — and making its naturally pro-privacy preference sound like a conviction principle — because it invites people to dial up critical lenses and point out where its defence of personal data against manipulation and exploitation does not live up to its own rhetoric.

One thing is clear: In the current data-based ecosystem all players are conflicted and compromised.

Though only a handful of tech giants have built unchallengeably massive tracking empires via the systematic exploitation of other people’s data.

And as the apparatus of their power gets exposed, these attention-hogging adtech giants are making a dumb show of papering over the myriad ways their platforms pound on people and societies — offering paper-thin promises to ‘do better next time’ — when ‘better’ is not even close to being enough.

Call for collective action

Increasingly powerful data-mining technologies must be sensitive to human rights and human impacts, that much is crystal clear. Nor is it enough to be reactive to problems after or even at the moment they arise. No engineer or system designer should feel it’s their job to manipulate and trick their fellow humans.

Dark pattern designs should be repurposed into a guidebook of what not to do and how not to transact online. (If you want a mission statement for thinking about this it really is simple: Just don’t be a dick.)

Sociotechnical Internet technologies must always be designed with people and societies in mind — a key point that was hammered home in a keynote by Berners-Lee, the inventor of the World Wide Web, and the tech guy now trying to defang the Internet’s occupying corporate forces via decentralization.

“As we’re designing the system, we’re designing society,” he told the conference. “Ethical rules that we choose to put in that design [impact society]… Nothing is self evident. Everything has to be put out there as something that we think will be a good idea as a component of our society.”

The penny looks to be dropping for privacy watchdogs in Europe: the idea that assessing fairness — not just legal compliance — must be a key component of their thinking going forward, and so of the direction of regulatory travel.

Watchdogs like the UK’s ICO — which just fined Facebook the maximum possible penalty for the Cambridge Analytica scandal — said so this week. “You have to do your homework as a company to think about fairness,” said Elizabeth Denham, when asked ‘who decides what’s fair’ in a data ethics context. “At the end of the day if you are working, providing services in Europe then the regulator’s going to have something to say about fairness — which we have in some cases.”

“Right now, we’re working with some Oxford academics on transparency and algorithmic decision making. We’re also working on our own tool as a regulator on how we are going to audit algorithms,” she added. “I think in Europe we’re leading the way — and I realize that’s not the legal requirement in the rest of the world but I believe that more and more companies are going to look to the high standard that is now in place with the GDPR.

“The answer to the question is ‘is this fair?’ It may be legal — but is this fair?”

So the short version is data controllers need to prepare themselves to consult widely — and examine their consciences closely.

Rising automation and AI makes ethical design choices even more imperative, as technologies become increasingly complex and intertwined, thanks to the massive amounts of data being captured, processed and used to model all sorts of human facets and functions.

The closed session of the conference produced a declaration on ethics and data in artificial intelligence — setting out a list of guiding principles to act as “core values to preserve human rights” in the developing AI era — which included concepts like fairness and responsible design.

Few would argue that a powerful AI-based technology such as facial recognition isn’t inherently in tension with a fundamental human right like privacy.

Nor that such powerful technologies aren’t at huge risk of being misused and abused to discriminate and/or suppress rights at vast and terrifying scale. (See, for example, China’s push to install a social credit system.)

Biometric ID systems might start out with claims of the very best intentions — only to shift function and impact later. The dangers to human rights of function creep on this front are very real indeed. And are already being felt in places like India — where the country’s Aadhaar biometric ID system has been accused of rebooting ancient prejudices by promoting a digital caste system, as the conference also heard.

The consensus from the event is it’s not only possible but vital to engineer ethics into system design from the start whenever you’re doing things with other people’s data. And that routes to market must be found that don’t require dispensing with a moral compass to get there.

The notion of data-processing platforms becoming information fiduciaries — i.e. having a legal duty of care towards their users, as a doctor or lawyer does — was floated several times during public discussions. Though such a step would likely require more legislation, not just adequately rigorous self examination.

In the meanwhile civic society must get to grips, and grapple proactively, with technologies like AI so that people and societies can come to collective agreement about a digital ethics framework. This is vital work to defend the things that matter to communities so that the anthropogenic platforms Berners-Lee referenced are shaped by collective human values, not the other way around.

It’s also essential that public debate about digital ethics does not get hijacked by corporate self interest.

Tech giants are not only inherently conflicted on the topic but — right across the board — they lack the internal diversity to offer a broad enough perspective.

People and civic society must teach them.

A vital closing contribution came from the French data watchdog’s Isabelle Falque-Pierrotin, who summed up discussions that had taken place behind closed doors as the community of global data protection commissioners met to plot next steps.

She explained that members had adopted a roadmap for the future of the conference to evolve beyond a mere talking shop and take on a more visible, open governance structure — to allow it to be a vehicle for collective, international decision-making on ethical standards, and so alight on and adopt common positions and principles that can push tech in a human direction.

The initial declaration document on ethics and AI is intended to be just the start, she said — warning that “if we can’t act we will not be able to collectively control our future”, and couching ethics as “no longer an option, it is an obligation”.

She also said it’s essential that regulators get with the program and enforce current privacy laws — to “pave the way towards a digital ethics” — echoing calls from many speakers at the event for regulators to get on with the job of enforcement.

This is vital work to defend values and rights against the overreach of the digital here and now.

“Without ethics, without an adequate enforcement of our values and rules our societal models are at risk,” Falque-Pierrotin also warned. “We must act… because if we fail, there won’t be any winners. Not the people, nor the companies. And certainly not human rights and democracy.”

If the conference had one short sharp message it was this: Society must wake up to technology — and fast.

“We’ve got a lot of work to do, and a lot of discussion — across the boundaries of individuals, companies and governments,” agreed Berners-Lee. “But very important work.

“We have to get commitments from companies to make their platforms constructive and we have to get commitments from governments to look at whenever they see that a new technology allows people to be taken advantage of, allows a new form of crime to get onto it by producing new forms of the law. And to make sure that the policies that they do are thought about in respect to every new technology as they come out.”

This work is also an opportunity for civic society to define and reaffirm what’s important. So it’s not only about mitigating risks.

But, equally, not doing the job is unthinkable — because there’s no putting the AI genie back in the bottle.

Apple’s Tim Cook makes blistering attack on the “data industrial complex”

Apple’s CEO Tim Cook has joined the chorus of voices warning that data itself is being weaponized against people and societies — arguing that the trade in digital data has exploded into a “data industrial complex”.

Cook did not namecheck the adtech elephants in the room: Google, Facebook and other background data brokers that profit from privacy-hostile business models. But his target was clear.

“Our own information — from the everyday to the deeply personal — is being weaponized against us with military efficiency,” warned Cook. “These scraps of data, each one harmless enough on its own, are carefully assembled, synthesized, traded and sold.

“Taken to the extreme this process creates an enduring digital profile and lets companies know you better than you may know yourself. Your profile is a bunch of algorithms that serve up increasingly extreme content, pounding our harmless preferences into harm.”

“We shouldn’t sugarcoat the consequences. This is surveillance,” he added.

Cook was giving the keynote speech at the 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC), which is being held in Brussels this year, right inside the European Parliament’s Hemicycle.

“Artificial intelligence is one area I think a lot about,” he told an audience of international data protection experts and policy wonks, which also included the inventor of the World Wide Web itself, Sir Tim Berners-Lee, another keynote speaker at the event.

“At its core this technology promises to learn from people individually to benefit us all. But advancing AI by collecting huge personal profiles is laziness, not efficiency,” Cook continued.

“For artificial intelligence to be truly smart it must respect human values — including privacy. If we get this wrong, the dangers are profound. We can achieve both great artificial intelligence and great privacy standards. It is not only a possibility — it is a responsibility.”

That sense of responsibility is why Apple puts human values at the heart of its engineering, Cook said.

In the speech, which we previewed yesterday, he also laid out a positive vision for technology’s “potential for good” — when combined with “good policy and political will”.

“We should celebrate the transformative work of the European institutions tasked with the successful implementation of the GDPR. We also celebrate the new steps taken, not only here in Europe but around the world — in Singapore, Japan, Brazil, New Zealand. In many more nations regulators are asking tough questions — and crafting effective reform.

“It is time for the rest of the world, including my home country, to follow your lead.”

Cook said Apple is “in full support of a comprehensive, federal privacy law in the United States” — making the company’s clearest statement yet of support for robust domestic privacy laws, and earning himself a burst of applause from assembled delegates in the process.

Cook argued for a US privacy law to prioritize four things:

  1. data minimization — “the right to have personal data minimized”, saying companies should “challenge themselves” to de-identify customer data or not collect it in the first place (see the sketch after this list)
  2. transparency — “the right to knowledge”, saying users should “always know what data is being collected and what it is being collected for”, which he said is the only way to “empower users to decide what collection is legitimate and what isn’t”. “Anything less is a sham,” he added
  3. the right to access — saying companies should recognize that “data belongs to users”, and it should be made easy for users to get a copy of, correct and delete their personal data
  4. the right to security — saying “security is foundational to trust and all other privacy rights”
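
As a rough illustration of what the first of those principles can mean in practice, here is a hypothetical de-identification step — dropping direct identifiers and replacing a stable user ID with a salted hash before storage. This is a generic sketch, not Apple’s (or anyone’s) actual pipeline, and the field names are invented:

    import hashlib

    SALT = b"per-deployment-secret"  # assumption: stored apart from the data

    def deidentify(record):
        """Drop direct identifiers and pseudonymize the user ID."""
        minimized = {k: v for k, v in record.items()
                     if k not in {"name", "email", "phone", "address"}}
        digest = hashlib.sha256(SALT + str(record["user_id"]).encode())
        minimized["user_id"] = digest.hexdigest()[:16]
        return minimized

    print(deidentify({"user_id": 42, "name": "Jane Doe",
                      "email": "jane@example.com", "item": "news_app"}))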

“We see vividly, painfully how technology can harm, rather than help,” he continued, arguing that platforms can “magnify our worst human tendencies… deepen divisions, incite violence and even undermine our shared sense of what is true or false”.

“This crisis is real. Those of us who believe in technology’s potential for good must not shrink from this moment”, he added, saying the company hopes “to work with you as partners”, and saying: “Our missions are closely aligned.”

He also made a sideswipe at tech industry efforts to defang privacy laws — saying that some companies will “endorse reform in public and then resist and undermine it behind closed doors”.

“They may say to you our companies can never achieve technology’s true potential if there were strengthened privacy regulations. But this notion isn’t just wrong it is destructive — technology’s potential is and always must be rooted in the faith people have in it. In the optimism and the creativity that stirs the hearts of individuals. In its promise and capacity to make the world a better place.”

“It’s time to face facts,” he added. “We will never achieve technology’s true potential without the full faith and confidence of the people who use it.”

Opening the conference before Cook took to the stage, Europe’s data protection supervisor Giovanni Buttarelli argued that digitization is driving a new generational shift in the respect for privacy — saying there is an urgent need for regulators and indeed societies to agree on and establish “a sustainable ethics for a digitised society”.

“The so-called ‘privacy paradox’ is not that people have conflicting desires to hide and to expose. The paradox is that we have not yet learned how to navigate the new possibilities and vulnerabilities opened up by rapid digitization,” Buttarelli argued.

“To cultivate a sustainable digital ethics, we need to look, objectively, at how those technologies have affected people in good ways and bad; we need a critical understanding of the ethics informing decisions by companies, governments and regulators whenever they develop and deploy new technologies.”

The EU’s data protection supervisor told an audience largely made up of data protection regulators and policy wonks that laws that merely set a minimum standard are not enough, including the EU’s freshly minted GDPR.

“We need to ask whether our moral compass has been suspended in the drive for scale and innovation,” he said. “At this tipping point for our digital society, it is time to develop a clear and sustainable moral code.”

“We do not have a[n ethical] consensus in Europe, and we certainly do not have one at a global level. But we urgently need one,” he added.

“Not everything that is legally compliant and technically feasible is morally sustainable,” Buttarelli continued, pointing out that “privacy has too easily been reduced to a marketing slogan. But ethics cannot be reduced to a slogan.”

“For us as data protection authorities, I believe that ethics is among our most pressing strategic challenges,” he added.

“We have to be able to understand technology, and to articulate a coherent ethical framework. Otherwise how can we perform our mission to safeguard human rights in the digital age?”

Facebook is hiring a director of human rights policy to work on “conflict prevention” and “peace-building”

Facebook is advertising for a human rights policy director to join its business, located either at its Menlo Park HQ or in Washington DC — with “conflict prevention” and “peace-building” among the listed responsibilities.

In the job ad, Facebook writes that as the reach and impact of its various products continues to grow “so does the responsibility we have to respect the individual and human rights of the members of our diverse global community”, saying it’s:

… looking for a Director of Human Rights Policy to coordinate our company-wide effort to address human rights abuses, including by both state and non-state actors. This role will be responsible for: (1) Working with product teams to ensure that Facebook is a positive force for human rights and apply the lessons we learn from our investigations, (2) representing Facebook with key stakeholders in civil society, government, international institutions, and industry, (3) driving our investigations into and disruptions of human rights abusers on our platforms, and (4) crafting policies to counteract bad actors and help us ensure that we continue to operate our platforms consistent with human rights principles.

Among the minimum requirements for the role, Facebook lists experience “working in developing nations and with governments and civil society organizations around the world”.

It adds that “global travel to support our international teams is expected”.

The company has faced fierce criticism in recent years over its failure to take greater responsibility for the spread of disinformation and hate speech on its platform, especially in international markets it has targeted for business growth via its Internet.org initiative, which seeks to get more people ‘connected’ to the Internet (and thus to Facebook).

More connections means more users for Facebook’s business and growth for its shareholders. But the costs of that growth have been cast into sharp relief over the past several years as the human impact of handing millions of people lacking in digital literacy some very powerful social sharing tools — without a commensurately large investment in local education programs (or even in moderating and policing Facebook’s own platform) — has become all too clear.

In Myanmar Facebook’s tools have been used to spread hate and accelerate ethnic cleansing and/or the targeting of political critics of authoritarian governments — earning the company widespread condemnation, including a rebuke from the UN earlier this year which blamed the platform for accelerating ethnic violence against Myanmar’s Muslim minority.

In the Philippines Facebook also played a pivotal role in the election of president Rodrigo Duterte — who now stands accused of plunging the country into its worst human rights crisis since the dictatorship of Ferdinand Marcos in the 1970s and 80s.

While in India the popularity of the Facebook-owned WhatsApp messaging platform has been blamed for accelerating the spread of misinformation — leading to mob violence and the deaths of several people.

Facebook famously failed even to spot mass manipulation campaigns going on in its own backyard — when in 2016 Kremlin-backed disinformation agents injected masses of anti-Clinton, pro-Trump propaganda into its platform and garnered hundreds of millions of American voters’ eyeballs at a bargain basement price.

So it’s hardly surprising the company has been equally naive in markets it understands far less. Though also hardly excusable — given all the signals it has access to.

In Myanmar, for example, local organizations that are sensitive to the cultural context repeatedly complained to Facebook that it lacked Burmese-speaking staff — complaints that apparently fell on deaf ears for the longest time.

The cost to American society of social media-enabled political manipulation and increased social division is certainly very high. The costs of the weaponization of digital information in markets such as Myanmar look incalculable.

In the Philippines Facebook also indirectly has blood on its hands — having provided services to the Duterte government to help it make more effective use of its tools. This same government is now waging a bloody ‘war on drugs’ that Human Rights Watch says has claimed the lives of around 12,000 people, including children.

Facebook’s job ad for a human rights policy director includes the pledge that “we’re just getting started” — referring to its stated mission of helping people “build stronger communities”.

But when you consider the impact its business decisions have already had in certain corners of the world it’s hard not to read that line with a shudder.

Citing the UN Guiding Principles on Business and Human Rights (and “our commitments as a member of the Global Network Initiative”), Facebook writes that its product policy team is dedicated to “understanding the human rights impacts of our platform and to crafting policies that allow us both to act against those who would use Facebook to enable harm, stifle expression, and undermine human rights, and to support those who seek to advance rights, promote peace, and build strong communities”.

Clearly it has an awful lot of “understanding” to do on this front. And hopefully it will now move fast to understand the impact of its own platform, circa fifteen years into its great ‘society reshaping experiment’, and prevent Facebook from being repeatedly used to trash human rights.

As well as representing the company in meetings with politicians, policymakers, NGOs and civil society groups, Facebook says the new human rights director will work on formulating internal policies governing user, advertiser, and developer behavior on Facebook. “This includes policies to encourage responsible online activity as well as policies that deter or mitigate the risk of human rights violations or the escalation of targeted violence,” it notes. 

The director will also work with internal public policy, community ops and security teams to try to spot and disrupt “actors that seek to misuse our platforms and target our users” — while also working to support “those using our platforms to foster peace-building and enable transitional justice”.

So you have to wonder how, for example, Holocaust denial continuing to be protected speech on Facebook will square with that stated mission for the human rights policy director.

At the same time, Facebook is currently hiring for a public policy manager for Francophone Africa — who it writes can “combine a passion for technology’s potential to create opportunity and to make Africa more open and connected, with deep knowledge of the political and regulatory dynamics across key Francophone countries in Africa”.

That job ad does not explicitly reference human rights — talking only about “interesting public policy challenges… including privacy, safety and security, freedom of expression, Internet shutdowns, the impact of the Internet on economic growth, and new opportunities for democratic engagement”.

But among the role’s listed responsibilities is working with Facebook’s Politics & Government team to “promote the use of Facebook as a platform for citizen and voter engagement to policymakers and NGOs and other political influencers”. So here it looks to be continuing its ‘business as usual’ strategy of pushing for more political activity to take place on Facebook.

If Facebook wants an accelerated understanding of human rights issues around the world it might be better advised to take a more joined-up approach to human rights across its own policy staff, and at least include it among the listed responsibilities of all the policy shapers it’s looking to hire.

From Idea to Action: Beyond the Net Selects 15 Amazing Chapter Projects!

The Beyond the Net Funding Programme is pleased to announce the results of our 2018 Grant Cycle. A total of 49 applications were received, and after a thorough review process, 15 amazing projects were selected.

These projects are at the core of our mission, and will use the Internet to develop Community Networks in underserved areas, to empower women through ICT, and to raise awareness of Internet policies around the world.

This is the result of months of effort from our Chapter Community: many discussions, numerous clarifications, proposals, updates, and revisions from the Beyond the Net Selection Committee. We are proud of you all.

Please join us in celebrating the following projects!

Developing community networks in the Northern region of Brazil – Brazil Chapter

Supporting and promoting the development of the Internet to enrich people’s lives, the project aims to contribute to the growth and improvement of community network policies and practices in rural Brazil, in order to strengthen marginalized communities. Instituto Nupef will work to develop a new network in the state of Maranhão as well as a communications plan for the Babassu coconut breakers’ organizations and movements. Objectives include expanding the reach of community networks with broadband Internet, monitoring legislative and regulatory issues, and documenting the work by disseminating the experiences by way of videos, photos, and texts.

Migrant Community Networks – Mexico Chapter

The project aims to understand how a particular community of migrants lives and communicates beyond societal spaces. We plan to analyze the re-appropriation of space and communication, digital connectivity, and social discourse through observation, data collection on forms of digital communication and social interaction, and audiovisual recording of refugees’ everyday lives. The project doubles as an exploratory and social intervention that will help open a dialogue on connectivity among the migrant community. Objectives include the implementation of a community network with trans-border communication in the Tijuana area and the creation of a digital archive of migrant communities’ experiences.

Creation of an Internet Traffic Exchange Point (IXP) – Dominican Republic Chapter

The project aims to create an IXP in a neutral, reliable, safe, and efficient location, achieving the interconnection and exchange of traffic between those involved. Objectives are to raise awareness among local stakeholders of both the need for and the advantages of an IXP, reducing the cost of international interconnection and keeping local Internet traffic within national borders. Improving the stability and resilience of the Internet service can optimize response times to security incidents and technical problems, and the creation of a “community” of operators will give continuity to the project, promoting its expansion and operation according to best local and international practices.

Improving Livelihood of Women Through ICT Empowerment – Malaysia Chapter

The project target is to train 400 women to use the MyHelper crowdsourcing application to encourage earning extra income. This three-pronged project provides opportunities for women to develop essential entrepreneurial skills through ICT, empowering them to start their own businesses and use the Internet to improve their livelihoods. Training modules will be developed in English as well as local languages such as Malay and Tagalog during a 3-month period, benefitting a large pool of women and ensuring the sustainability of the project. The creation and improvement of worker profiles will increase crowdsource workers’ visibility and their applications for jobs.

Creating Networks – Youth Special Interest Group (SIG)

Firstly, the project aims to map organizations “of young people” in Latin America, to identify how many work on issues related to the Internet and ICT, and to highlight the importance of that work. A website will be created displaying this information, followed by a capacity-building phase and introduction, plus chartered topics and sessions related to individual work modules. Objectives include, after analysis, face-to-face capacity-building sessions on Internet Governance to encourage proactiveness and general connection. Survey results will be published, as well as a general guide on the development and experience of the project and the materials used, for use by the general public and in both Spanish and Portuguese.

“Multistakeholder Internet Governance Training” – Guinea Chapter

For the first time, a training project aims to set up a multilateral, inclusive, multistakeholder discussion platform for general Internet issues in Guinea, and particularly for Internet Governance. Discussions will contribute to the development of the Internet at local, regional, and international levels. Specific objectives are the training of approximately 70 people from different areas of life, including government, business, and civil society, as well as engineers and standards development professionals. A committee will be created to ensure that Guinea’s concerns are addressed, to increase Internet Governance capacity among Internet users, and to ensure that stakeholders are well prepared for improved contributions and interactions.

Zaria Community Network and Culture Hub – Nigeria Chapter

The project seeks to use the Internet to improve the quality of education for the formally enrolled, as well as those outside the formal schooling system, as a resource for basic education, vocational development, and self-employment opportunities. A campaign will be run to enlighten communities on the opportunities available. Goals include the implementation of a free-to-use ISM-band network to reach research and educational institutions, community WiFi hotspots and solar-powered back-up solutions, culture hub web portals, a shared learning management system, and a network monitoring infrastructure. A community engagement session for 500 teachers, students, and individuals will be conducted, as well as continuous enlightenment campaigns and surveys to estimate the effectiveness of strategies.

Women in Cyber Security – Kazakhstan Chapter

The implementation of the project will increase potential and ensure that young women have the necessary skills and knowledge to understand, participate in, and benefit fully from cybersecurity and its applications, as well as creating future role models and thus increasing the percentage of women in the field. The aim of the training is to bridge the digital gender divide in cybersecurity in Kazakhstan by conducting 8 training sessions for approximately 50 students over a period of two years. Experienced female trainers will use up-to-date cybersecurity educational programs, with the objective of increasing the share of women in this field to up to 50% over the next decade.

LibreRouter Phase 2 – Community Networks Special Interest Group (SIG)

The LibreRouter is the first multi-radio mesh router designed for community networks. It enables simple mesh deployment with little to no manual configuration and provides easy-to-follow documentation covering not only technical aspects but also planning and coordination. This Phase 2 project intends to cover an important missing piece: organized remote support for LibreRouter-based networks. Main objectives are the design and implementation of a support system dashboard with a support request and follow-up mechanism, as well as extending LibreRouter software tools to improve on problems identified. Other aims include the completion of documentation materials, hardware improvements, and exploration of designs with the objective of lowering costs.

Spring of Knowledge – Kyrgyzstan Chapter

Schools in Kyrgyzstan have a great need for teachers, with over 2,500 teaching positions unfilled every year. The project aims to improve the quality of education in Kyrgyzstan and increase the number of teaching personnel, allowing teachers to spend more time with students and providing additional materials to improve their own training. Objectives are to expand opportunities for studies in pilot locations, stimulating independence and responsibility and reducing the divide between schoolchildren in developed countries and those living in Kyrgyzstan, in both rural and urban areas. Our aim is to increase the digital literacy of schoolchildren in Kyrgyzstan in pilot locations within 1 academic year.

Better Internet for Everyone in Lebanon – Lebanon Chapter

In Lebanon, the daily challenge is peak time, when users’ consumption outgrows total bandwidth capacity and quality of service degrades for shared bandwidth offerings, which constitute more than 90% of the residential Internet market. Our project is a new business model for shared bandwidth offerings, consisting of a different pricing model based on time of use, as well as a subscriber panel to monitor service quality and accountability. The proof of concept will be tested first with up to 10 local community WISPs, ranging from 50 to 1,000 subscribers, and later in other developing countries. Comparisons will be made of effects on aggregated traffic graphs, consumption behavior, and old vs. new ISP revenues, and finally community polls will evaluate the new model, preparing it to scale once proven.

DigiGen – Serbia Belgrade Chapter

The project aim is to explore how ICT and the Internet can play a role in decreasing the existing gender digital gap, and how to take gender awareness into consideration when developing new and evolving technologies. Our objective is to determine how new technologies can meet societal challenges across gender lines to promote and accelerate access to quality education, entrepreneurship, and innovation. Research topics include understanding the factors behind the acceptance of new technologies across genders, using the learning acquired for maximum impact, and developing a leadership platform in rural areas. Our aim is also to leverage free access to the Internet through “Internet Light”, as well as creating digital literacy recommendations in documented form for further program implementation in the region.

Contributing towards better ICT Policy Environment in Nepal – Nepal Chapter

The project goal is to make Nepal’s ICT and Internet-related laws and policies compatible with international standards and best practices, ensuring the fundamental human rights of individuals. It will, after analysis, organize consultations with stakeholders and prepare policy recommendations aiming to ensure an open and sustainable Internet and ICT for the benefit of all. Objectives include reviewing draft bills from an international standards perspective, informing major stakeholders of loopholes by sharing policy recommendations, and publishing a policy brief for the enhancement of knowledge. Our aim is to ensure the adoption of Internet-related laws that best uphold Internet rights.

Empowering Village Development Committee Leaders – Botswana Chapter

In Botswana, Village Development Committees (VDCs) are “the main institutions charged with the responsibility for community development activities.” This project will provide training to VDC committee leaders on use of the Internet, as well as introducing the opportunities on offer. The project targets VDC leaders in 2 remote regions, aiming to empower these village leaders by showcasing, to the best of its ability, the benefits of using the Internet. By donating a laptop for use by the VDCs of the 4 most rural areas, we can empower these leaders to access information and facilitate communication. No local program has yet targeted these leaders, yet they are influential in community development. The full objective is to target 40 leaders in 4 regions to become Internet champions in their respective areas and contribute to village development issues in a productive way.

KASBUY: Promoting Moroccan Women’s Participation in the Digital Economy – Morocco Chapter

Our proposition is the KASBUY project, a web platform to help cooperatives overcome marketing difficulties in advertising their products and reaching out to clients. KASBUY is an e-commerce platform that will allow any registered cooperative to have its own online space from which to sell its products and manage its business and inventory activities. The project will encourage the best use of the Internet for the sustainable development of local communities and includes opportunities from which women and their families will benefit. With the promotion and preservation of Moroccan artisanal heritage and the use of a universal and accessible web showroom, we aim to maximize employment for women and their families, particularly in rural areas.

Do you have a great idea to make your community better via the Internet? Find out if you’re eligible for a Beyond the Net grant!


Image: Nyirarukobwa Primary School in the Eastern Province of Rwanda, which was connected to the Internet via a Beyond the Net project, ©Nyani Quarmyne
