Reminder: Other people’s lives are not fodder for your feeds

#PlaneBae

You should cringe when you read that hashtag. Because it’s a reminder that people are being socially engineered by technology platforms to objectify and spy on each other for voyeuristic pleasure and profit.

The short version of the story attached to the cringeworthy hashtag is this: Earlier this month a woman named Rosey Blair spent an entire plane flight using her smartphone and social media feeds to invade the privacy of her seat neighbors — publicly gossiping about the lives of two strangers.

Her speculation was set against a backdrop of rearview creepshots, with a few barely-there scribbles added to blot out actual facial features — even as an entire privacy-invading narrative was being spun around the pair without their knowledge.

#PlanePrivacyInvasion would be a more fitting hashtag. Or #MoralVacuumAt35000ft

And yet our youthful surveillance society started with a far loftier idea: Citizen journalism.

Once we’re all armed with powerful smartphones and ubiquitously fast Internet there will be no limits to the genuinely important reportage that will flow, we were told.

There will be no way for the powerful to withhold the truth from the people.

At least that was the nirvana we were sold.

What did we get? Something that looks much closer to mass manipulation. A tsunami of ad stalking, intentionally fake news and social media-enabled demagogues expertly appropriating these very same tools by gaming mindless, ethically nil algorithms.

Meanwhile, masses of ordinary people + ubiquitous smartphones + omnipresent social media feeds seems, for the most part, to be resulting in a kind of mainstream attention deficit disorder.

Yes, there is citizen journalism — such as people recording and broadcasting everyday experiences of aggression, racism and sexism, for example. Experiences that might otherwise go unreported, and which are definitely underreported.

That is certainly important.

But there are also these telling moments of #hashtaggable ethical blackout. As a result of what? Let’s call it the lure of ‘citizen clickbait’ — as people use their devices and feeds to mimic the worst kind of tabloid celebrity gossip ‘journalism’ by turning their attention and high tech tools on strangers, with (apparently) no major motivation beyond the simple fact that they can. Because technology is enabling them.

Social norms and common courtesy should kick in and prevent this. But social media is pushing in an unequal and opposite direction, encouraging users to turn anything — even strangers’ lives — into raw material to be repackaged as ‘content’ and flung out for voyeuristic entertainment.

It’s life reflecting commerce. But a particularly insidious form of commerce that does not accept editorial let alone ethical responsibility, has few (if any) moral standards, and relies, for continued function, upon stripping away society’s collective sense of privacy in order that these self-styled ‘sharing’ (‘taking’ is closer to the mark) platforms can swell in size and profit.

But it’s even worse than that. Social media as a data-mining, ad-targeting enterprise relies upon eroding our belief in privacy. So these platforms worry away at that by trying to disrupt our understanding of what privacy means. Because if you were to consider what another person thinks or feels — even for a millisecond — you might not post whatever piece of ‘content’ you had in mind.

For the platforms it’s far better if you just forget to think.

Facebook’s business is all about applying engineering ingenuity to eradicate the thoughtful friction of personal and societal conscience.

That’s why, for instance, it uses facial recognition technology to automate content identification — meaning there’s almost no opportunity for individual conscience to kick in and pipe up to quietly suggest that publicly tagging others in a piece of content isn’t actually the right thing to do.

Because it’s polite to ask permission first.

But Facebook’s antisocial automation pushes people away from thinking to ask for permission. There’s no button provided for that. The platform encourages us to forget all about the existence of common courtesies.

So we should not be at all surprised that such fundamental abuses of corporate power are themselves trickling down to infect the people who use and are exposed to these platforms’ skewed norms.

Viral episodes like #PlaneBae demonstrate that the same sense of entitlement to private information is being actively passed on to the users these platforms prey on and feed off — and is then getting beamed out, like radiation, to harm the people around them.

The damage is collective when societal norms are undermined.

#PlaneBae

Social media’s ubiquity means almost everyone works in marketing these days. Most people are marketing their own lives — posting photos of their pets, their kids, the latte they had this morning, the hipster gym where they work out — having been nudged to perform this unpaid labor by the platforms that profit from it.

The irony is that most of this work is being done for free. Only the platforms are being paid. Though there are some people making a very modern living: the new breed of ‘life sharers’ who willingly polish, package and post their professional existence as a brand of aspirational lifestyle marketing.

Social media’s gift to the world is that anyone can be a self-styled model now, and every passing moment a fashion shoot for hire — thanks to the largess of highly accessible social media platforms providing almost anyone who wants it with their own self-promoting shop window on the world. Plus all the promotional tools they could ever need.

Just step up to the glass and shoot.

And then your vacation beauty spot becomes just another backdrop for the next aspirational selfie. Although those aquamarine waters can’t be allowed to dampen or disrupt photo-coifed tresses, nor sand get in the camera kit. In any case, the makeup took hours to apply and there’s the next selfie to take…

What does the unchronicled life of these professional platform performers look like? A mess of preparation for projecting perfection, presumably, with life’s quotidian business stuffed higgledy-piggledy into the margins — where they actually sweat and work to deliver the lie of a lifestyle dream.

Because these are also fakes — beautiful fakes, but fakes nonetheless.

We live in an age of entitled pretence. And while it may be totally fine for an individual to construct a fictional narrative that dresses up the substance of their existence, it’s certainly not okay to pull anyone else into your pantomime. Not without asking permission first.

But the problem is that social media is now so powerfully omnipresent its center of gravity is actively trying to pull everyone in — and its antisocial impacts frequently spill out and over the rest of us. And they rarely if ever ask for consent.

What about the people who don’t want their lives to be appropriated as digital windowdressing? Who weren’t asking for their identity to be held up for public consumption? Who don’t want to participate in this game at all — neither to personally profit from it, nor to have their privacy trampled by it?

The problem is the push and pull of platforms against privacy has become so aggressive, so virulent, that societal norms that protect and benefit us all — like empathy, like respect — are getting squeezed and sucked in.

The ugliness is especially visible in these ‘viral’ moments when other people’s lives are snatched and consumed voraciously on the hoof — as yet more content for rapacious feeds.

#PlaneBae

Think too of the fitness celebrity who posted a creepshot + commentary about a less slim person working out at their gym.

Or the YouTuber parents who monetize videos of their kids’ distress.

Or the men who post creepshots of women eating in public — and try to claim it’s an online art project rather than what it actually is: A privacy violation and misogynistic attack.

Or, on a public street in London one day, I saw a couple of giggling teenage girls watching a man at a bus stop who was clearly mentally unwell. Pulling out a smartphone, one girl hissed to the other: “We’ve got to put this on YouTube.”

For platforms built by technologists without thought for anything other than growth, everything is a potential spectacle. Everything is a potential post.

So they press on their users to think less. And they profit at society’s expense.

It’s only now, after social media has embedded itself everywhere, that platforms are being called out for their moral vacuum; for building systems that encourage abject mindlessness in users — and serve up content so bleak it represents a form of visual cancer.

#PlaneBae

Humans have always told stories. Weaving our own narratives is both how we communicate and how we make sense of personal experience — creating order out of events that are often disorderly, random, even chaotic.

The human condition demands a degree of pattern-spotting for survival’s sake; so we can pick our individual path out of the gloom.

But platforms are exploiting that innate aspect of our character. And we, as individuals, need to get much, much better at spotting what they’re doing to us.

We need to recognize how they are manipulating us; what they are encouraging us to do — with each new feature nudge and dark pattern design choice.

We need to understand their underlying pull. The fact they profit by setting us as spies against each other. We need to wake up, personally and collectively, to social media’s antisocial impacts.

Perspective should not have to come at the expense of other people getting hurt.

This week the woman whose privacy was thoughtlessly repackaged as public entertainment when she was branded and broadcast as #PlaneBae — and who has suffered harassment and yet more unwelcome attention as a direct result — gave a statement to Business Insider.

“#PlaneBae is not a romance — it is a digital-age cautionary tale about privacy, identity, ethics and consent,” she writes. “Please continue to respect my privacy, and my desire to remain anonymous.”

And as a strategy to push against the antisocial incursions of social media, remembering to respect people’s privacy is a great place to start.

Facebook reportedly hires AI chip head from Google

Facebook is continuing to devote more resources to the development of AI-focused chips, bringing aboard a senior director of engineering from Google who worked on chips for Google’s products to lead its efforts, Bloomberg reports.

We’ve reached out to Google and Facebook for confirmation.

Shahriar Rabii spent nearly seven years at Google before joining Facebook this month as its VP and Head of Silicon, according to his LinkedIn profile.

Facebook’s work on AI-focused custom silicon has been the topic of rumors and reports over the past several months. It’s undoubtedly a bold direction for the company, though it’s unclear whether Facebook is more interested in creating custom silicon for consumer devices or in building chips for its own server infrastructure as it looks to accelerate its AI research efforts.

Rabii’s work at Google appears to have focused largely on chips for consumer devices, specifically the Pixel 2’s Visual Core chip, which brought machine learning intelligence to the device’s camera.

Facebook has long held hardware ambitions, and its Building 8 hardware division appears to be closer than ever to shipping its first products as the company’s rumored work on a touchscreen smart speaker to compete with the Echo Show continues. Meanwhile, Facebook has also continued building virtual reality hardware on Qualcomm’s mobile chipsets.

As Silicon Valley’s top tech companies continue to compete aggressively for talent amongst artificial intelligence experts, this marks another departure from Google. Earlier this year, Apple poached Google’s AI head.

As facial recognition technology becomes pervasive, Microsoft (yes, Microsoft) issues a call for regulation

Technology companies have a privacy problem. They’re terribly good at invading ours and terribly negligent at protecting their own.

And with the push by technologists to map, identify and index our physical as well as virtual presence with biometrics like face and fingerprint scanning, the increasing digital surveillance of our physical world is causing some of the companies that stand to benefit the most to call out to government to provide some guidelines on how they can use the incredibly powerful tools they’ve created.

That’s what’s behind today’s call from Microsoft President Brad Smith for government to start thinking about how to oversee the facial recognition technology that’s now at the disposal of companies like Microsoft, Google, Apple and government security and surveillance services across the country and around the world.

In what companies have framed as a quest to create “better,” more efficient and more targeted services for consumers, they have tried to solve the problem of user access by moving to increasingly passive (for the user) and intrusive (by the company) forms of identification — culminating in features like Apple’s Face ID and the frivolous filters that Snap overlays over users’ selfies.

Those same technologies are also being used by security and police forces in ways that have gotten technology companies into trouble with consumers or their own staff. Amazon has been called to task for its work with law enforcement, Microsoft’s own technologies have been used to help identify immigrants at the border (indirectly aiding in the separation of families and the virtual and physical lockdown of America against most forms of immigration) and Google faced an internal company revolt over the facial recognition work it was doing for the Pentagon.

Smith posits this nightmare scenario:

Imagine a government tracking everywhere you walked over the past month without your permission or knowledge. Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech. Imagine the stores of a shopping mall using facial recognition to share information with each other about each shelf that you browse and product you buy, without asking you first. This has long been the stuff of science fiction and popular movies – like “Minority Report,” “Enemy of the State” and even “1984” – but now it’s on the verge of becoming possible.

What’s impressive about this is the intimation that it isn’t already happening (and that Microsoft isn’t enabling it). Across the world, governments are deploying these tools right now as ways to control their populations (the ubiquitous surveillance state that China has assembled, and is investing billions of dollars to upgrade, is just the most obvious example).

In this moment when corporate innovation and state power are merging in ways that consumers are only just beginning to fathom, executives who have to answer to a buying public are now pleading for government to set up some guardrails. Late capitalism is weird.

But Smith’s advice is prescient. Companies do need to get ahead of the havoc their innovations can wreak on the world, and they can look good while doing nothing by hiding their own abdication of responsibility on the issue behind the government’s.

“In a democratic republic, there is no substitute for decision making by our elected representatives regarding the issues that require the balancing of public safety with the essence of our democratic freedoms. Facial recognition will require the public and private sectors alike to step up – and to act,” Smith writes.

The fact is, something does, indeed, need to be done.

As Smith writes, “The more powerful the tool, the greater the benefit or damage it can cause. The last few months have brought this into stark relief when it comes to computer-assisted facial recognition – the ability of a computer to recognize people’s faces from a photo or through a camera. This technology can catalog your photos, help reunite families or potentially be misused and abused by private companies and public authorities alike.”

All of this takes on faith that the technology actually works as advertised. And the problem is, right now, it doesn’t.

In an op-ed earlier this month, Brian Brackeen, the chief executive of a startup working on facial recognition technologies, pulled back the curtains on the industry’s not-so-secret huge problem.

Facial recognition technologies, used in the identification of suspects, negatively affects people of color. To deny this fact would be a lie.

And clearly, facial recognition-powered government surveillance is an extraordinary invasion of the privacy of all citizens — and a slippery slope to losing control of our identities altogether.

There’s really no “nice” way to acknowledge these things.

Smith himself admits that the technology has a long way to go before it’s perfect. But the implications of applying imperfect technologies are vast — and in the case of law enforcement, not academic. Designating an innocent bystander or civilian as a criminal suspect influences how police approach an individual.

Those instances, even if they amount to only a handful, would lead me to argue that these technologies have no business being deployed in security situations.

As Smith himself notes, “Even if biases are addressed and facial recognition systems operate in a manner deemed fair for all people, we will still face challenges with potential failures. Facial recognition, like many AI technologies, typically have some rate of error even when they operate in an unbiased way.”

While Smith lays out the problem effectively, he’s less clear on the solution. He’s called for a government “expert commission” to be empaneled as a first step on the road to eventual federal regulation.

That we’ve gotten here is an indication of how bad things actually are. It’s rare that a tech company has pleaded so nakedly for government intervention into an aspect of its business.

But here’s Smith writing, “We live in a nation of laws, and the government needs to play an important role in regulating facial recognition technology. As a general principle, it seems more sensible to ask an elected government to regulate companies than to ask unelected companies to regulate such a government.”

Given the current state of affairs in Washington, Smith may be asking too much. Which is why perhaps the most interesting — and admirable — call from Smith in his post is for technology companies to slow their roll.

“We recognize the importance of going more slowly when it comes to the deployment of the full range of facial recognition technology,” writes Smith. “Many information technologies, unlike something like pharmaceutical products, are distributed quickly and broadly to accelerate the pace of innovation and usage. ‘Move fast and break things’ became something of a mantra in Silicon Valley earlier this decade. But if we move too fast with facial recognition, we may find that people’s fundamental rights are being broken.”

Machine learning boosts Swiss startup’s shot at human-powered land speed record

The current world speed record for riding a bike down a straight, flat road was set in 2012 by a Dutch team, but the Swiss have a plan to topple their rivals — with a little help from machine learning. An algorithm trained on aerodynamics could streamline their bike, perhaps cutting air resistance by enough to set a new record.

Currently the record is held by Sebastiaan Bowier, who in 2012 set a record of 133.78 km/h, or just over 83 mph. It’s hard to imagine how his bike, which looked more like a tiny landbound rocket than any kind of bicycle, could be significantly improved on.

But every little bit counts when records are measured down to a hundredth of a unit, and anyway, who knows but that some strange new shape might totally change the game?

To pursue this, researchers at the École Polytechnique Fédérale de Lausanne’s Computer Vision Laboratory developed a machine learning algorithm that, trained on 3D shapes and their aerodynamic qualities, “learns to develop an intuition about the laws of physics,” as the university’s Pierre Baqué said.

“The standard machine learning algorithms we use to work with in our lab take images as input,” he explained in an EPFL video. “An image is a very well-structured signal that is very easy to handle by a machine-learning algorithm. However, for engineers working in this domain, they use what we call a mesh. A mesh is a very large graph with a lot of nodes that is not very convenient to handle.”

Nevertheless, the team managed to design a convolutional neural network that can sort through countless shapes and automatically determine which should (in theory) provide the very best aerodynamic profile.

“Our program results in designs that are sometimes 5-20 percent more aerodynamic than conventional methods,” Baqué said. “But even more importantly, it can be used in certain situations that conventional methods can’t. The shapes used in training the program can be very different from the standard shapes for a given object. That gives it a great deal of flexibility.”

That means the algorithm isn’t just limited to slight variations on established designs; it is also flexible enough to take on other fluid dynamics problems, like wing shapes, windmill blades or cars.
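
To make the general recipe concrete, here is a minimal, hypothetical Python sketch of surrogate-model shape search: fit a cheap learned model on pairs of shape parameters and simulated drag, then use it to rank thousands of candidate shapes. The two-parameter shape encoding and the toy “simulation” are invented for illustration; EPFL’s actual system learns from full 3D meshes with a convolutional neural network.

```python
# Hypothetical sketch of surrogate-based shape search (not EPFL's code).
import numpy as np

rng = np.random.default_rng(0)

def expensive_simulation(params):
    # Stand-in for a costly CFD run: drag rises with frontal bluntness
    # (params[0]) and falls with tail taper (params[1]), plus noise.
    return 0.8 * params[0] - 0.3 * params[1] + 0.05 * rng.normal()

# 1. Collect a modest training set from the expensive simulator.
train_x = rng.uniform(0.0, 1.0, size=(200, 2))
train_y = np.array([expensive_simulation(p) for p in train_x])

# 2. Fit a cheap surrogate (plain least squares here; EPFL trains a mesh CNN).
design = np.hstack([train_x, np.ones((len(train_x), 1))])
coef, *_ = np.linalg.lstsq(design, train_y, rcond=None)

def predicted_drag(params):
    return np.append(params, 1.0) @ coef

# 3. Score thousands of candidate shapes cheaply and keep the most promising.
candidates = rng.uniform(0.0, 1.0, size=(10_000, 2))
best = min(candidates, key=predicted_drag)
print("most promising shape parameters:", best)
```

In the real pipeline the candidates are 3D meshes and the surrogate is the trained network, but the search loop is the same in spirit: the learned model stands in for physics so that far more shapes can be explored than simulation alone would allow.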

The tech has been spun out into a separate company, Neural Concept, of which Baqué is the CEO. It was presented today at the International Conference on Machine Learning in Stockholm.

A team from the Annecy University Institute of Technology will attempt to apply the computer-honed model in person at the World Human Powered Speed Challenge in Nevada this September — after all, no matter how much computer assistance there is, as the name says, it’s still powered by a human.

A new hope: AI for news media

To put it mildly, news media has been on the sidelines of AI development. As a consequence, in the age of AI-powered personalized interfaces, news organizations no longer get to define what’s real news or, even more importantly, what’s truthful or trustworthy. Today, social media platforms, search engines and content aggregators control user flows to media content and directly affect what kind of news content is created. As a result, the future of news media is no longer in its own hands. Case closed?

The (Death) Valley of news digitalization

There’s a history: News media hasn’t been quick or innovative enough to become a change maker in the digital world. Historically, news used to be the signal that attracted and guided people (and advertisers) in its own right. The internet and the exponential explosion of available information online changed that for good.

In the early internet, the portals channeled people to the content in which they were interested. Remember Yahoo? As the amount of information increased, the search engine(s) took over, changing the way people found relevant information and news content online. As the mobile technologies and interfaces started to get more prominent, social media with News Feed and tweets took over, changing again the way people discovered media content, now emphasizing the role of our social networks.

Significantly, news media didn’t play an active role in any of these key developments. Quite the opposite, it was late in utilizing the rise of the internet, search engines, content aggregators, mobile experience, social media and other new digital solutions to its own benefit.

The ad business followed suit. First, news organizations let Google handle searches on their websites, and the up-and-coming search champion got a unique chance to index media content. With the rise of social media, news organizations, especially in the U.S., turned to Facebook and Twitter to break the news rather than focusing on their own breaking news features. As a consequence, news media lost its core business to the rising giants of the new digital economy.

To put it very strongly, news media hasn’t ever been fully digital in its approach to user experience, business logic or content creation. Think paywalls and e-newspapers for the iPad! The internet and digitalization forced the news media to change, but the change was reactive, not proactive. The old, partly obsolete, paradigms of content creation, audience understanding, user experience and content distribution still actively affect the way news content is created and distributed today (and to be 110 percent clear — this is not about the storytelling and the unbelievable creativity and hard work done by ingenious journalists all around the globe).

Due to these developments, today’s algorithmic gatekeepers like Google and Facebook dominate the information flows and the ad business previously dominated by the news media. Significantly, personalization and the ad-driven business logic of today’s internet behemoths isn’t designed to let the news media flourish on its own terms ever again.

From observers to change makers

News media has been reporting on the rise of the new algorithmic world order as an outside observer. And the reporting has been thorough, veracious and enlightening — the stories told by the news media have had a concrete effect on how people perceive our continuously evolving digital realities.

However, as the information flows have moved into the algorithmic black boxes controlled by the internet giants, it has become obvious that it’s very difficult or close to impossible for an outside observer to understand the dynamics that affect how or why a certain piece of information becomes newsworthy and widely spread. For the mainstream news media, Trump’s rise to the presidency came as a “surprise,” and this is but one example of the new dynamics of today’s digital reality.

And here’s a paradox. As the information moves closer to us, to the mobile lock screen and other surfaces that are available and accessible for us all the time, its origins and background motives become more ambiguous than ever.

Social media, combined with self-reinforcing feedback loops built on the latest machine learning methods and vulnerable to malicious or unintended gaming, has led us to the world of “alternative facts” and fake news. In this era of automated troll hordes and algorithmic manipulation, the ideals of news media sound vitally important and relevant: distributing truthful and relevant information; nurturing freedom of speech; giving a voice to the unheard; widening and enriching people’s worldviews; supporting democracy.

But the driving values of news media won’t ever be fully realized in the algorithmic reality if the news media itself isn’t actively developing solutions that shape the algorithmic reality.

The current course won’t be changed by commenting on or criticizing the actions of the ruling algorithmic platforms. #ChangeFacebook is not on the table for news media. New AI-powered Google News is controlled and developed by Google, based on its company culture and values, and thus can’t be directly affected by the news organizations.

After the rise of the internet and today’s algorithmic rule, we are again on the verge of a significant paradigm shift. Machine learning-powered AI solutions will have an increasingly significant impact on our digital and physical realities. This is again a time to affect the power balance, to affect the direction of digital development and to change the way we think when we think about news — a time for news media to transform from an outside observer into a change maker.

AI solutions for news media

If the news media wants to affect how news content is created, developed, presented and delivered to us in the future, it needs to take an active role in AI development. If news organizations want to understand the way data and information are constantly affected and manipulated in digital environments, they need to start embracing the possibilities of machine learning.

But how can news media ever compete with today’s AI leaders?

News organizations have one thing that Google, Facebook and other big internet players don’t yet have: they own the content creation process and thus have a deep and detailed understanding of their content. By focusing on appropriate AI solutions, they can combine data about content creation and content consumption in a unique and powerful way.

News organizations need to use AI to augment you and me. And they need to augment journalists and the newsroom. What does this mean?

Augment the user-citizen

Personalization has been around for a while, but has it ever been designed and developed on the terms of news media itself? The goal for news media is to combine great content and a personalized user experience into a seamless, meaningful news experience that is in line with journalistic principles and values.

For news, the upcoming real-time machine learning methods, such as online learning, offer new possibilities to understand the user’s preferences in their real-life context. These technologies provide new tools to break news and tell stories directly on your lock screen.

An intelligent notification system sending personalized news notifications could be used to optimize content and content distribution on the fly by understanding the impact of news content in real time on the lock screens of people’s mobile devices. The system could personalize the way the content is presented, whether serving voice, video, photos, augmented reality material or visualizations, based on users’ preferences and context.
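
As a hedged illustration of what “online learning” could mean here (this is not any publisher’s actual system), the sketch below keeps a tiny per-user preference model that is updated after every notification based on whether the reader opened it, and only sends a new alert when the predicted open probability clears a threshold. The topic features, threshold and learning rate are all invented.

```python
# Hypothetical online-learning notification filter; all data here is invented.
import math
import random

random.seed(1)

class NotificationModel:
    """Tiny logistic model over topic features, updated one event at a time."""

    def __init__(self, n_topics, lr=0.1):
        self.weights = [0.0] * n_topics
        self.bias = 0.0
        self.lr = lr

    def open_probability(self, topics):
        z = self.bias + sum(w * t for w, t in zip(self.weights, topics))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, topics, opened):
        # One stochastic gradient step on the log loss.
        error = (1.0 if opened else 0.0) - self.open_probability(topics)
        self.bias += self.lr * error
        self.weights = [w + self.lr * error * t
                        for w, t in zip(self.weights, topics)]

model = NotificationModel(n_topics=3)  # e.g. politics, sports, tech intensities

def maybe_notify(topic_mix, threshold=0.5):
    """Only push the alert if the model expects this reader to open it."""
    return model.open_probability(topic_mix) >= threshold

# Simulated feedback loop: this imaginary reader only opens tech-heavy stories.
for _ in range(2000):
    story = [random.random() for _ in range(3)]
    model.update(story, opened=story[2] > 0.6)

print(maybe_notify([0.9, 0.1, 0.0]))  # politics-heavy: likely suppressed
print(maybe_notify([0.0, 0.1, 0.9]))  # tech-heavy: likely sent
```

The point of the sketch is the loop, not the model: every delivered notification produces feedback that immediately reshapes what gets delivered next, which is what distinguishes online learning from periodically retrained batch personalization.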

Significantly, machine learning can be utilized to create new forms of interaction between people, journalists and the newsroom. Automatically moderated commenting is just one example already in use today. Imagine if it were possible to build interactions directly on the lock screen that let journalists better understand the way content is consumed, while simultaneously capturing in real time the emotions the story evokes.

By opening up the algorithms and data usage through data visualizations and in-depth articles, the news media could create a new, truly human-centered form of personalization that lets the user know how personalization is done and how it’s used to affect the news experience.

And let’s stop blaming algorithms when it comes to filter bubbles. Algorithms can be used to diversify your news experience. By understanding what you see, it’s also possible to understand what you haven’t seen before. By turning some of the personalization logic upside down, news organizations could create a machine learning-powered recommendation engine that amplifies diversity.
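
Purely as a sketch of that inversion (the topics, scores and weighting below are invented), a re-ranker could trade a story’s predicted relevance against how different it is from what the reader has already seen:

```python
# Hypothetical diversity-aware re-ranking of news recommendations.
# 'relevance' would come from the usual personalization model; here it is made up.
already_seen_topics = {"politics", "economy"}

candidates = [
    {"title": "Budget vote tonight", "topic": "politics", "relevance": 0.9},
    {"title": "New opera premiere", "topic": "culture", "relevance": 0.5},
    {"title": "Local flood defenses", "topic": "climate", "relevance": 0.6},
    {"title": "Coalition talks stall", "topic": "politics", "relevance": 0.8},
]

def diversified_score(story, seen, diversity_weight=0.5):
    # Boost stories whose topic the reader has not seen recently.
    novelty = 0.0 if story["topic"] in seen else 1.0
    return (1 - diversity_weight) * story["relevance"] + diversity_weight * novelty

ranking = sorted(candidates,
                 key=lambda s: diversified_score(s, already_seen_topics),
                 reverse=True)
for story in ranking:
    print(round(diversified_score(story, already_seen_topics), 2), story["title"])
```

With the diversity weight turned up, the less “relevant” climate and culture stories outrank yet another politics item; turned down to zero, the ranking collapses back to plain relevance, i.e. the familiar filter bubble.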

Augment the journalist

In the domain of abstracting and contextualizing new information and unpredictable (news) events, human intelligence is still invincible.

The deep content understanding of journalists can be used to teach an AI-powered news assistant system that would become better over time by learning directly from the journalists using it, simultaneously taking into account the data that flows from the content consumption.

A smart news assistant could point out what kinds of content are connected implicitly and explicitly, for example based on their topic, tone of voice or other meta-data such as author or location. Such an intelligent news assistant could help the journalist understand their content even better by showing which previous content is related to the now-trending topic or breaking news. The stories could be anchored into a meaningful context faster and more accurately.
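
A rough sketch of the “what is connected to what” part, assuming nothing fancier than bag-of-words similarity (a real newsroom system would lean on richer embeddings plus the metadata mentioned above), might look like this:

```python
# Hypothetical related-articles lookup using TF-IDF cosine similarity.
import math
from collections import Counter

archive = {
    "a1": "city council approves new cycling infrastructure budget",
    "a2": "mayor defends budget cuts to public transport",
    "a3": "local bakery wins national pastry award",
}

def tf_idf_vectors(docs):
    tokenized = {doc_id: text.split() for doc_id, text in docs.items()}
    df = Counter(word for words in tokenized.values() for word in set(words))
    n = len(docs)
    vectors = {}
    for doc_id, words in tokenized.items():
        tf = Counter(words)
        vectors[doc_id] = {w: tf[w] * math.log(n / df[w]) for w in tf}
    return vectors

def cosine(u, v):
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

def related(breaking_text, docs, top_k=2):
    vectors = tf_idf_vectors({**docs, "_query": breaking_text})
    query = vectors.pop("_query")
    scores = {doc_id: cosine(query, vec) for doc_id, vec in vectors.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(related("breaking: council fight over transport budget", archive))
```

Here the budget-and-transport archive pieces surface ahead of the unrelated bakery story, which is the kernel of the “anchor breaking news in prior coverage” idea.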

AI solutions could be used to help journalists gather and understand data and information faster and more thoroughly. An intelligent news assistant could remind the journalist if there’s something important that should be covered next week or in the coming holiday season, for example by recognizing trends in social media or search queries, or by highlighting patterns in historic coverage. Simultaneously, AI solutions will become increasingly essential for fact-checking and for detecting content manipulation, e.g. recognizing faked images and videos.

An automated content production system could create and annotate content automatically or semi-automatically, for example by producing draft versions from an audio interview that are then finished by human journalists. Such a system could be developed further to create news compilations from different content pieces and formats (text, audio, video, image, visualization, AR experiences and external annotations) or to create hyper-personalized, atomized news content such as personalized notifications.

The news assistant also could recommend which article should be published next using an editorial push notification, simultaneously suggesting the best time for sending the push notification to the end users. And as a reminder, even though Google’s Duplex is quite a feat, natural language processing (NLP) is far from solved. Human and machine intelligence can be brought together in the very core of the content production and language understanding process. Augmenting the linguistic superpowers of journalists with AI solutions would empower NLP research and development in new ways.

Augment the newsroom

Innovation and digitalization doesn’t change the culture of news media if it’s not brought into the very core of the news business concretely in the daily practices of the newsroom and business development, such as audience understanding.

One could start thinking of the news organization as a system and platform that provides different personalized mini-products to different people and segments of people. Newsrooms could get deeper into relevant niche topics by utilizing automated or semi-automated content production. And the more topics covered and the deeper the reporting, the better the newsroom can produce personalized mini-products, such as personalized notifications or content compilations, to different people and segments.

In a world where it’s increasingly hard to distinguish a real thing from fake, building trust through self-reflection and transparency becomes more important than ever. AI solutions can be used to create tools and practices that enable the news organization and newsroom to understand its own activities and their effects more precisely than ever. At the same time, the same tools can be used to build trust by opening the newsroom and its activities to a wider audience.

Concretely, AI solutions could detect and analyze possible hidden biases in the reporting and storytelling. For example, are some groups of people over-represented in certain topics or materials? What has been the tone of voice or the angle taken on challenging, multi-faceted topics or widely covered news? Are most of the photos depicting people with a certain ethnic background? Are there important topics or voices that are not represented in the reporting at all? AI solutions also can be used to analyze and understand what kind of content works now and what has worked before, thus giving context-specific insights to create better content in the future.
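
Here is a deliberately small, hypothetical example of the counting side of such an audit; the category keyword lists and articles are invented, and a production system would use trained entity and demographic classifiers rather than keyword matching:

```python
# Hypothetical coverage audit: how often do articles on a topic quote each
# (made-up) category of sources? Keyword matching stands in for real
# entity and demographic classifiers.
from collections import Counter

source_categories = {
    "government": {"minister", "ministry", "official", "spokesperson"},
    "experts": {"professor", "researcher", "analyst"},
    "residents": {"resident", "neighbor", "parent", "tenant"},
}

articles = [
    "The ministry spokesperson said the plan was on track.",
    "A professor and an analyst questioned the official figures.",
    "The minister declined to comment; one resident called the plan unfair.",
]

counts = Counter()
for text in articles:
    words = set(
        text.lower().replace(".", "").replace(";", "").replace(",", "").split()
    )
    for category, keywords in source_categories.items():
        counts[category] += len(words & keywords)

total = sum(counts.values()) or 1
for category, count in counts.most_common():
    print(f"{category}: {count} mentions ({100 * count / total:.0f}% of sourced voices)")
```

Even this toy tally makes the skew visible (government voices dominate the invented sample); the real value comes from running such audits continuously and putting the results in front of the newsroom.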

AI solutions would help news organizations reflect on their reporting and storytelling and their effects more thoroughly, also giving new tools for decision-making, e.g. to determine what should be covered and why.

Also, such data and information could be visualized to make the impact of reporting and content creation more tangible and accessible for the whole newsroom. Thus, the entire editorial and journalistic decision-making process can become more open and transparent, affecting the principles of news organizations from the daily routines to the wider strategic thinking and management.

Tomorrow’s news organizations will be part human and part machine. This transformation, augmenting human intelligence with machines, will be crucial for the future of news media. To maintain their integrity and trustworthiness, news organizations themselves need to be able to define how their AI solutions are built and used. And the only way to fully realize this is for the news organizations to start building their own AI solutions. The sooner, the better — for us all.

Microsoft wants to make you a better team player by nudging you into submission

Microsoft announced a number of new features today for its MyAnalytics tool for Office 365 users that are geared toward giving employees more data about how they work, as well as ways to improve how teams work together. In today’s businesses, everybody has to be a team player, after all, and if you want to bring technology to bear on this, you first need data — and once you have data, you can go into full-on analytics mode and maybe even throw in a smidge of machine learning, too.

So today, Microsoft is launching two new products: Workplace Analytics and MyAnalytics nudges. Yes, Office 365 will now nudge you to be a better team player. “Building better teams starts with transparent, data-driven dialog—but no one is perfect and sticking to good collaboration habits can be challenging in a fast-paced job,” Microsoft’s Natalie McCullough and Noelle Beaujon, using language only an MBA could love, write in today’s announcement.

I’m not sure what exactly that means or whether I have good collaboration habits or not, but in practice, Office 365 can now nudge you when you need more focus time as your calendar fills up, for example. You can block off those times without leaving your Inbox (or, I guess, you could always ignore this and just set up a standing block of time every day where you don’t accept meetings and just do your job…). MyAnalytics can also now nudge you to delegate meetings to a co-worker when your schedule is busy (because your co-workers aren’t busy and will love you for putting more meetings on your calendar) and tell you to avoid after-hours emails as you draft them to co-workers so they don’t have to work after hours, too (that’s actually smart, but may not work well in every company).

With this new feature, Microsoft is also using some machine learning smarts, of course. MyAnalytics was already able to remind you of tasks you promised to co-workers over email, and now it’ll nudge you when you read new emails from those co-workers, too. Because the more you get nudged, the more likely you are to finish that annoying task you never intended to do but promised your co-worker you would do so he’d go away.

If your whole team needs some nudging, Microsoft will also allow the group to enroll in a change program and provide you with lots of data about how you are changing. And if that doesn’t work, you can always set up a few meetings to discuss what’s going wrong.

These new features will roll out this summer. Get ready to be nudged.

Glasswing Ventures closes its artificial intelligence-focused fund with $112 million

One year after receiving a whopping $75 million commitment to invest in early stage companies applying artificial intelligence to various industries, Glasswing Ventures has closed its debut fund with $112 million. 

It’s a significant milestone for a firm that purports to be the largest early stage investor focused on machine learning on the East Coast, and one of the largest early stage funds to be led by women.

Founded by Rudina Seseri alongside her longtime investing partner Rick Grinnell and bolstered by the addition of former portfolio executive Sarah Fay, Glasswing so far has invested in three startups: BotChain (a company spun up from Glasswing’s early investment in the AI management company, Talla); Allure Security, a threat detection company; and Terbium Labs, whose service alerts companies when sensitive or stolen information of theirs appears on the Internet.

For Seseri and Glasswing, the close is actually just the beginning. As she said in a statement:

“Raising an AI-focused fund on the East Coast is just the beginning for Glasswing Ventures. As we embark on a journey to shape the future, we are laser-focused on investing in exceptional founders who leverage AI to build disruptive companies and transform markets. Beyond providing smart capital, we are firmly committed to supporting our entrepreneurs with all facets of building and scaling their businesses.”

The story for Seseri and her co-founder Grinnell actually begins nearly a decade ago at the venture firm Fairhaven Capital, the rebranded investment arm of the TD Bank Group.

At the time of the firm’s launch in 2016, Glasswing was targeting $150 million for its first fund, with a 2.5% management fee and 20% carried interest (pretty standard terms for a venture fund), according to reporting by Dan Primack back when he was at Fortune.

In a pitch deck seen by Primack, the firm was touting a 4.25x return multiple on its investments, including 6x realized and 1.8x unrealized, in deals like Grinnell’s exit from EqualLogic (which was sold to Dell for $1.46 billion) and Seseri’s investments in Jibo (which is now basically worthless) and SocialFlow (which isn’t).

Fay, who worked at one of Fairhaven’s portfolio companies, was brought on soon after the two partners launched their new venture.

Glasswing definitely benefits from its proximity to Boston’s stellar universities. And Seseri, a Harvard University graduate, maintains close ties with the research communities at both Harvard and MIT — tapping luminaries like Tim Berners-Lee to sit on the firm’s advisory council for networking.

[Photo: Chase Martin, Marketing and Events Manager; Emma Marty, Operations and Support Coordinator; Rick Grinnell, Founder and Managing Partner; Rudina Seseri, Founder and Managing Partner; Sarah Fay, Managing Director; and Andre Rocha, Investment Associate]

Facial recognition startup Kairos acquires Emotion Reader

Kairos, whose face recognition technology is used for brand marketing, has announced the acquisition of EmotionReader.

EmotionReader is a Limerick, Ireland-based startup that uses algorithms to analyze facial expressions around video content. The startup allows brands and marketers to measure viewers’ emotional responses to video, analyze those responses via an analytics dashboard, and make different decisions around media spend based on viewer response.

The acquisition makes sense considering that Kairos’ core business is focused on facial identification for enterprise clients. Knowing who someone is, paired with how they feel about your content, is a powerful tool for brands and marketers.

The idea for Kairos started when founder Brian Brackeen was making HR time-clocking systems for Apple. People were cheating the system, so he decided to implement facial recognition to ensure that employees were actually clocking in and out when they said they were.

That premise spun out into Kairos, and Brackeen soon realized that facial identification as a service was much more powerful than any niche time clocking service.

But Brackeen is very cautious with the technology Kairos has built.

While Kairos aims to make facial recognition technology (and all the powerful insights that come with it) accessible and available to all businesses, Brackeen has been very clear about the fact that Kairos isn’t interested in selling this technology to government agencies.

Brackeen recently contributed a post right here on TechCrunch outlining the various reasons why governments aren’t ready for this type of technology. Alongside the outstanding invasion of personal privacy, there are also serious issues around bias against people of color.

From the post:

There is no place in America for facial recognition that supports false arrests and murder. In a social climate wracked with protests and angst around disproportionate prison populations and police misconduct, engaging software that is clearly not ready for civil use in law enforcement activities does not serve citizens, and will only lead to further unrest.

As part of the deal, EmotionReader CTO Dr. Stephen Moore will run Kairos’ new Singapore-based R&D center, allowing for upcoming APAC expansion.

Kairos has raised approximately $8 million from investors New World Angels, Kapor Capital, 500 Startups, Backstage Capital, Morgan Stanley, Caerus Ventures, and Florida Institute.