How to save the third wave of technology from itself

As The New York Times recently profiled, new startups are arising to solve the housing crisis. These startups disrupt what ex-AOL CEO Steve Case calls the “Third Wave,” industries with large social impact. Think: housing, healthcare and finance.

To survive, these companies need to ensure compliance with regulations early on, because mistakes here can have large social consequences. To help new entrants survive in these industries, two closely related technologies — legal technology (“legaltech”) and regulation technology (“regtech”) — help companies navigate rules embedded in text, such as contracts or regulations. Without them, incumbents, who have the most resources to hire lawyers to navigate these rules, are set up to dominate in the Third Wave.

Third Wave startups must tread carefully. Unaudited prefabricated housing designs might mean the use of subpar safety measures and tenant deaths during an earthquake. Oversights in financial transactions, for instance, may unintentionally facilitate money laundering. Privacy violations in healthcare data could lead to an unfair increase in insurance premiums for affected individuals.

To mitigate these social harms, regulations can be complex. In finance, for instance, the new Markets in Financial Instruments Directive has 30,000 pages. To comply, banks can spend $1 billion a year (often 20 percent of their operational budget). Citigroup reportedly hired 30,000 lawyers, auditors and compliance officers in 2014.

For startups, ignorance is no longer a viable strategy. In just the past three years, fintech startups have suffered more than $200 million (almost 5 percent of the total venture dollars invested over that same period) in regulatory fines: 50 percent involving consumer mistreatment and 25 percent involving privacy violations. Zenefits fired 17 percent of its staff, including its CEO, after violating insurance brokerage laws. LendingClub paused operations and cut 10 percent of its workforce after violating state usury and unfair dealing laws.

Companies cannot — and should not — avoid their regulatory and social responsibilities.

Uber — once infamous for its “do first, ask for forgiveness later” strategies — now engages with regulators directly, by building partnerships and applying for permits. VCs, such as Evan Burfield in Regulatory Hacking, argue that these strategies are critical for the next wave of startups.

This work requires not only perseverance but also tremendous resources. Large companies, such as J.P. Morgan or even Uber, have the most money and staff to navigate an increasingly complex regulatory landscape. Because of this, they are in the best position to shape the future and the Third Wave.

Legaltech and regtech can change this trend. These technologies use anything from data analytics to decision trees to help companies navigate rules embedded in text, such as regulations and contracts. Since technology is scalable in ways that hiring 30,000 lawyers is not, small innovators can better compete in a big company’s game.

In one example, Fenergo transformed a highly manual document review for Know Your Customer (KYC) regulations using text analysis and rule logic, speeding up the process by 37 percent.

Other related startups are reducing the costs associated with complying with corporate contracts (such as Ironclad), bankruptcy (such as UpSolve), zoning requirements generally (such as Envelope and Symbium) and for accessory dwelling units (such as Cover), permitting processes, and energy standards (such as Cove Tool).

Because of this environment, analysts are bullish about these technologies. In 2018, nearly $1 billion was invested in legaltech. Spend on regtech in finance alone is estimated to rise from $10 billion in 2017 to $76 billion in 2022 (a more than sevenfold increase in five years). For comparison, spend on the sharing economy is estimated to rise from $18 billion in 2017 to $40 billion in 2022.

In the Third Wave, companies cannot — and should not — avoid their regulatory and social responsibilities. If the scandals of Uber and Facebook are any indication, when a company violates laws or loses its integrity, the public and the stock market respond in kind. Journalistic coverage of breaches and unethical data practices has captured public attention. Waves of data regulation have passed across major jurisdictions, such as China, California and Brazil.

Embracing legaltech and regtech can plant long-term competitive advantages. Adopting technology that automates data protection, for instance, can create better customer experiences. By safely analyzing more data, even smaller companies can quickly generate insights and build programs that provide value to their customers.

Technology can empower companies both large and small to embrace the mitigation of social harms and the promotion of positive impact.

Startup executives should take notice.

Despite short-term questions, games software/hardware to top $200 billion by 2023

There has been some negative sentiment surrounding the games industry recently, with stock prices of public games companies in question in both the U.S. and China. While being contrarian to market sentiment is always risky, it’s also possible that folks are mistaking a short-term problem for a long-term one. Games industry software/hardware combined revenue could drive well over $200 billion of revenue by 2023, and there was a record $5.7 billion investment in games companies in 2018. So what’s going on?

The games industry isn’t one monolithic sector. Depending on how you slice it, the market is made up of 15 sectors, eight platform types (e.g. mobile, PC, console) and even more proprietary hardware/software platforms (e.g. iOS, Android, Xbox One, Sony PS4, Nintendo Switch).

Games software/hardware sector revenue share versus growth (2018-2023)

(Note: See selected data below. Free charts do not include all the numbers, axes and data from Digi-Capital’s Games Report, with underlying data sourced directly from companies and reliable secondary sources.)

Mobile games rule

We first forecast mobile’s dominance of the games market way back in 2011. At that time, many traditional games companies didn’t believe mobile/online games could become the driving force for games. Some of those companies no longer exist, so what’s happening today is nothing new.

Total global mobile app store revenues (gross across games and non-games apps, including app store revenue share) topped $100 billion for the first time in 2018. Mobile games delivered around three quarters of that number, as they have consistently for years. So where mobile games drove more than $70 billion gross revenue globally last year, they could top $100 billion revenue (again gross, including app store revenue share) in their own right in the next five years. But like all games sectors, mobile games are hit-driven. And this could be the source of some of the mismatch between the market’s understanding of short-term trends and long-term potential.

For example, Supercell’s Clash of Clans and Clash Royale have delivered over $10 billion revenue to date. However, Supercell also saw revenues and profits decline in 2018 for the second year in a row as its franchises matured. Yet Supercell’s newest franchise, Brawl Stars, delivered $100 million revenue within its first two months. Swings and roundabouts.

Epic Games had the biggest breakout mobile games hit of 2018, with Fortnite contributing significantly to a reported $3 billion profit in 2018. It also anchored part of the interest behind a record $1.25 billion fundraising round last year. Yet the company removed once-dominant mobile franchise Infinity Blade from the App Store, and redirected internal development resources to focus on Fortnite by closing Paragon and stopping further development on Unreal Tournament. We will come back to Fortnite in the context of mobile games becoming platforms in their own right.

Perhaps the biggest concern for mobile games after last year is China, in which the regulator ceased approving new games for most of 2018. This weighed particularly heavily on market heavyweights Tencent and NetEase, although the regulator returned to approving their games this year. However, the regulator again stopped accepting games in February, only to approve more games in March. This regulatory risk has resulted in our downgrading Chinese games revenue growth rates until a clearer long-term pattern emerges.

Niantic’s mobile AR smash Pokémon GO took just over 1 percent of mobile games revenue globally last year, and has been reported to drive some astonishingly big numbers: 800 million downloads, more than $2.5 billion lifetime revenue, 147 million MAU, 5 million DAU, 78 percent of users aged 18 to 34, 144 billion steps taken by users, 500 million visits to sponsored locations and a valuation for Niantic of nearly $4 billion. (Note: Not all of these figures have been confirmed by Niantic.) Off the back of this, Niantic is exploring Pokémon GO’s potential to become a platform, with GO Snapshot challenging Snapchat, and the Niantic Real World Platform as a serious AR Cloud player. We’ll come back to these.

PC games hardware/software is big, too

PC games hardware/software is made up of four individual sectors: PC games hardware (gaming computers, upgrades and peripherals), PC games online (DLC, IAP and subscriptions), PC games (digital sales) and PC games (physical sales). While each subsector has different characteristics, scales and growth rates, together they make up the only part of the market close to mobile games long-term. Google’s new Stadia cloud gaming platform and competitors could also fundamentally impact high-end gaming across all platforms (not just PC). Mobile games software and PC games hardware/software combined could deliver three quarters of total games industry revenues by 2023.

Selected multiplayer PC games (ex-China)

While PC games hardware is massive, users are buying that hardware mainly to play MMO/MOBA games. This part of the market is consolidated around franchises from major public games publishers such as Tencent and Activision Blizzard, as well as independents like Wargaming and Bluehole.

The console abides

Console games were the market leader for games hardware/software for decades, and remain huge despite no longer being an engine of growth. The highest growth here could come from console games (digital sales) and console games (online), with console games hardware and console games (physical sales) both ex-growth long-term. Despite flattish platform growth for console games hardware/software, they could still deliver multiple tens of billions of dollars revenue by 2023.

High-growth from a low base

Of the remaining market sectors, a handful are small today but have high-growth potential long-term. These include VR games, VR hardware, AR games and esports. Yet taken individually, each sector is likely to deliver in the 1 percent to 2 percent range of total games market revenue in five years’ time. So great for indie developers, but more challenging commercially for the big guns in terms of scale.

United nations of games

Geographical games market discussions tend to focus on China and the U.S., but there are more than 50 country markets driving growth at a global level. Scales and growth rates vary dramatically from giant, stable growth countries such as China (even with its current uncertainty), the U.S. and Japan to higher growth markets like India and Russia. In aggregate, Asia could take around half of global games market revenue by 2023 (despite short-term concerns about China). Europe might deliver around a quarter of global revenue, followed by North America at around one fifth in the same time frame. Countries in MEA and Latin America make up the balance at a much lower level.

Concentration versus growth

The law of large numbers caught up with the games industry years ago, with the 10 largest publicly listed games companies taking three quarters of public games company revenues globally. (Note: This ratio does not include private games company revenues, which are substantial.) When you already produce billions to tens of billions of dollars in revenue, high growth rates aren’t easy to come by as new hits counterbalance maturing franchises.

Public games company revenue share

(Note: Heat map displays relative revenue scale of publicly listed games companies. Private games company revenues not shown on this chart.)

Top grossing mobile games of recent years (outside China) often came from independents. Standouts include Supercell, King, Epic Games, Niantic, Machine Zone and others. Perhaps in response to this dynamic, there was more than $75 billion of games M&A over the last five years. Major games companies have been buying both growth and cash flow.

Mobile games as platforms?

The beauty of what Steve Jobs created with the App Store is that it democratized distribution of apps at scale beyond the early social games market. It also enabled indie games developers to build some of the rocket ships we’ve seen over the last decade. Yet despite massive growth, even the biggest mobile games couldn’t really be described as platforms in the traditional sense. Not yet.

Where Tencent’s WeChat messaging platform looks like a domestic app store rival with its “mini-programs,” some mobile games pureplays are taking very different routes to becoming platforms in their own right.

For Epic Games, the recent Marshmello concert in Fortnite held out the tantalizing prospect of the beginnings of the “Metaverse” on ubiquitous, affordable mobile devices. With 10.7 million concurrent attendees, this represents a significant milestone in the evolution of games as platforms. Given Fortnite’s previous records for streaming on Twitch and concurrent esports tournament viewers, the savvy Tim Sweeney is beginning to leverage all that scale in a totally new way. Together with building its own app store and the quality of its Unreal Engine, the lessons learned from Fortnite and partial owner Tencent are leading to new horizons.

Where Epic Games is building a metaverse that is a little like Ready Player One without the headsets, Niantic has taken a different approach. Leveraging the real-world, big data stream coming from Pokémon GO, Niantic is building the core of an AR cloud ecosystem to challenge Google, Apple and Facebook. It could also move the company far beyond its entertainment origins for real-world navigation, social, e-commerce, advertising and more.

Epic Games and Niantic could become two of the most valuable platform companies in the world, with long-term potential even they might not fully understand yet.

To infinity and beyond

All this potential doesn’t mean that short-term concerns aren’t valid, or that some games companies (even those currently at scale) might not fall from grace. Some of the volatility of recent times could turn out to be right on the money. When we talked to Epic Games’ CEO Tim Sweeney about all of this, he said “I think that we’re just in the final days of a long transition away from the old retail-centric game release model. Good times ahead.”

With the long-term prospects for games still looking positive, the brave, bold and lucky could have a bright future.

Quantum-safe communication over the internet infrastructure? Yeah, that’s doable

Quantum computing promises to do many things for business and industry, processing data at far greater speeds and rates than today’s binary computers can accomplish. But it also promises to do something else — essentially render current security standards useless, as hackers will be able to utilize quantum systems to crack the cryptographic schemes that are used to protect systems today.

We’re closer than ever to the deployment of a commercial quantum computing system — which means that we need to develop a security scheme that will protect data exchanged between quantum computers on the existing internet infrastructure.

Researchers have developed ideas and theories about how to proceed, but we believe that the basic components of a solution are already out there — and it won’t require the development of new hardware or mechanisms to accomplish. Using a combination of overlay security, blockchain, advanced cryptographic systems and Merkle trees with Lamport signatures, we believe we can develop a practical, inexpensive — and even easy to implement — quantum-safe security system for internet exchanges.

Commercial-grade quantum systems are almost — if not already — here: at least one quantum system has been sold, and a Maryland firm has recently developed a 79-qubit system. One of the operating principles of qubits — the quantum bits that are the bedrock of quantum computing — is the quantum entanglement that allows quantum gate operations over non-neighboring quantum bits (enabling teleportation of qubits). As a result, operations are completed far faster than in standard bit computers — meaning that, given Shor’s algorithm, hackers should have no problem breaking current security systems. Even without quantum computing, some of the algorithms that were thought to be impenetrable were found to be vulnerable; with the power of qubits at work, the danger is even greater.

The asymmetric encryption schemes proposed by Merkle, by Diffie and Hellman, and by Rivest, Shamir and Adleman, pioneers in the use of cryptography for computer security, brought about a revolution in cryptography. Asymmetric encryption enabled the creation of a symmetric key among communicating parties in a communication link, and can even detect the intervention of a malicious party in the communication. This is possible because these encryption schemes allow for the signing of certificates, unambiguously associating a public key with the description of the entity to which the public key belongs. The signature is issued by a trusted third party, the certificate authority. This public key infrastructure is the de facto security infrastructure in use today, securing the entire internet — including sensitive and super-secure communications for the military, government and financial institutions.

Open up your favorite tech website on any given day, though, and you’ll likely find news of a breach — sometimes a big one that compromised the data, finances and livelihood of millions. Clearly, even before the quantum computing era, there is work that needs to be done to shore up the internet.

Overlay security as a model

A model for quantum-safe communications is already in use — in the concept of overlay security, used by many services to secure their communications. Let’s take as an example the sending of credit card information to a web site. Clearly, sending that data in clear-text via a single message is asking for trouble. One alternative would be to send the data via different segments — i.e. sending one email with the first digits of the credit card and then another email with the rest. But savvy hackers could compromise the data on a server, essentially using the email servers and the internet server providers to carry out a man-in-the-middle or “tap in” attack, capturing part of or all the digits of the credit card.

But overlay networks provide a window to a solution. Overlay networks provide “closed” networks utilizing security protocols for services atop an existing network (in this case, the internet). As internet services have proliferated, each one — e-mail, SMS, push notifications, messengers such as WhatsApp, Facebook Messenger, Skype, Snapchat, LINE, LinkedIn, Telegram, Weibo, Slack, etc. — has created its own logical secured channels. Each channel, even if using the same physical infrastructure, is secured in its own overlay, with trust in identification and authentication of communications through the channel taken as a matter of faith.

This overlay system provides what could be a model for quantum-safe communications. If, for example, we were to send part of our credit card data (encrypted, of course) via WhatsApp, and another string via Gmail, we would in essence be reproducing the entangled aspect of quantum communications. In this sense, we are using the overlay network these services provide — with the accumulated secrecy, authenticity and identification of the diverse capabilities of the communication channels, applications and protocols — to ensure security.
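The channel-splitting idea can be sketched as an n-of-n XOR split, in which every share is required to reconstruct the secret and any single share is statistically random noise. This is an illustration of the concept only; it is not what WhatsApp, Gmail or any of the services named above actually implement:

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret: bytes, n: int) -> list:
    """Split `secret` into n shares; all n are needed to reconstruct it.
    The first n-1 shares are uniformly random, and the last share is the
    secret XORed with all of them, so any n-1 shares reveal nothing."""
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]

def combine(shares: list) -> bytes:
    """XOR all shares back together to recover the secret."""
    out = shares[0]
    for s in shares[1:]:
        out = xor_bytes(out, s)
    return out
```

Each share could then travel over a different overlay channel, so an attacker who taps one channel learns nothing about the underlying data.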

That said, there are some flaws with this approach. Overlay security uses several channels and random numbers to obtain a high level of confidence in identification, authentication and secrecy. In a quantum-safe security system, each channel would carry part of the encrypted data, and all parts would need to be reassembled to recover the message. If the channels are known — i.e. if hackers know we are using SMS, e-mail, etc., and in which order we authenticate communications — each of those channels could be compromised, with the communication at the very least blocked.

Secret sharing: the secret to quantum-safe security?

One way around this is with the secret-sharing protocol developed by Professor Adi Shamir, which requires a configurable threshold of channels to reconstruct a message. Shamir’s secret sharing is based on polynomials over a finite field: each “participant” — in our case each channel — receives one point of the polynomial, and the secret is the free coefficient of the polynomial. For example, if the polynomial is a random linear function with the secret being the free coefficient, any two participants/channels can reveal the secret, but no single participant/channel has the information needed to reveal it. Following the logic, the higher the degree of the polynomial and the more channels required, the more remote the possibility that a hacker can reconstruct the secret.
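A minimal sketch of Shamir’s scheme over a prime field, using only Python’s standard library. This is illustrative only (production systems use vetted libraries, constant-time arithmetic and careful secret handling):

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime; the secret must be smaller than this

def make_shares(secret: int, k: int, n: int, prime: int = PRIME) -> list:
    """Split `secret` into n shares; any k of them reconstruct it.
    Builds a random polynomial of degree k-1 whose free coefficient
    is the secret, then evaluates it at x = 1..n."""
    coeffs = [secret] + [random.randrange(prime) for _ in range(k - 1)]
    def poly(x: int) -> int:
        return sum(c * pow(x, i, prime) for i, c in enumerate(coeffs)) % prime
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares: list, prime: int = PRIME) -> int:
    """Lagrange interpolation at x = 0 recovers the free coefficient."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % prime
                den = den * (xi - xj) % prime
        # pow(den, -1, prime) is the modular inverse (Python 3.8+)
        secret = (secret + yi * num * pow(den, -1, prime)) % prime
    return secret
```

With k = 3 and n = 5, any three channels can reassemble the message, while any two learn nothing about it.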

Blockchain authentication

One of the “keys” (pun intended) for the public key infrastructure upon which the authentication systems we rely on are built is the certificate authority. A trusted authority signs off on a certificate that associates a public key with the entity description, thus providing assurances that the entity we are contacting and providing authentication to is indeed the entity we intend to contact, and not a rogue pretender. However, the certificate system is far from perfect, and there have been plenty of certificate authority compromises over the years.

One way to bolster authentication is to entrust the verification to blockchains. Combined with secret sharing, blockchains could prove a formidable challenge to even the most talented of hackers.

In a blockchain-based scheme, verification of a trusted party’s identity would be carried out by numerous already trusted entities, including governmental, financial and notary entities. Each trusted entity would have a portion of the security secret (as described above) in its portion of the ledger; when a user seeks to ascertain the trustworthiness of a service or site that relies on this scheme, the security system searches through the ledger for the required polynomial points, enabling the creation of a new random symmetric key that can be used in an advanced encryption standard (AES) authentication scheme over a single channel. Unlike the asymmetric encryption largely used today, AES is considered quantum-safe, and a long-enough key should be enough to protect the communication from super-charged quantum attacks (taking into account the quadratic search speed-up implied in Grover’s algorithm). Along with AES, secure hash algorithms (SHA) are also considered to be quantum-safe.

Quantum-safe signatures

Finally, we need a way to sign messages and transactions, like financial transactions, in a way that hackers will not be able to compromise. There are numerous signature schemes already in use, including Lamport one-time signatures, which can utilize a secure hash function, such as the secure hash algorithms (SHA). Lamport signatures are fine for single-use authentication, but even better are Merkle trees, which include many private keys in the leaves (which also can be produced by several nested hash functions). Those leaves supply a vast number of one-time private keys, with the root of the tree serving as the public key. Distributing that public key over a blockchain ledger would provide even more security — giving even quantum systems a run for their money in trying to guess authentication information.
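A toy sketch of a Lamport one-time signature, plus a Merkle root over leaf public keys, using SHA-256 from Python’s standard library. It is illustrative rather than production-grade (real hash-based schemes such as LMS or XMSS add state management, tree authentication paths and other safeguards):

```python
import hashlib
import os

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def lamport_keygen():
    """256 pairs of random secrets; the public key is their hashes."""
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def lamport_sign(message: bytes, sk: list) -> list:
    """Reveal one secret per bit of the message digest. One-time use:
    signing two messages with the same key leaks enough to forge."""
    digest = int.from_bytes(H(message), "big")
    return [sk[i][(digest >> i) & 1] for i in range(256)]

def lamport_verify(message: bytes, sig: list, pk: list) -> bool:
    """Hash each revealed secret and compare against the public key."""
    digest = int.from_bytes(H(message), "big")
    return all(H(sig[i]) == pk[i][(digest >> i) & 1] for i in range(256))

def merkle_root(leaves: list) -> bytes:
    """Hash pairs up the tree; the 32-byte root can serve as a compact
    public key covering many one-time leaf keys."""
    level = [H(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:           # duplicate the last node if odd
            level.append(level[-1])
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```

Publishing only the Merkle root commits to all the leaf keys at once; each signature then includes the path from its leaf to the root as proof of membership.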

There’s been much wringing of hands in recent years about the seemingly inevitable Quantum Apocalypse — the end of security as we know it. In a sense, that’s accurate; if we are using standard systems that utilize standard bits and standard security protocols, then yes, quantum systems will probably kill them on the first day their superior computing power is unleashed.

But it doesn’t have to be that way. There are schemes and technologies — overlay security, secret sharing, blockchain, advanced signature systems and more — that can protect communications even over the standard, open internet. Those technologies are not theoretical; they exist, and are in use in some capacity or another right now. By implementing these systems now, we can segue into the quantum computing era with nary a worry.

Why convertible notes are safer than SAFEs

As the saying goes, where you stand on an issue often rests on where you sit. Translated into startup law and finance, your views on how to approach fundraising are often heavily influenced by where your company and your investors are located. As a startup lawyer at Egan Nelson LLP (E/N), a leading boutique firm focused on tech markets outside of Silicon Valley — like Austin, Seattle, NYC, Denver, etc. — that’s the perspective I bring to this post. 

At a very high level, the three most common financing structures for startup seed rounds across the country are (i) equity, (ii) convertible notes and (iii) SAFEs. Others have come and gone, but never really achieved much traction. As to which one is appropriate for your company’s early funding, there’s no universal answer. It depends heavily on the context; not just of what the company’s own priorities and leverage are, but also the expectations and norms of the investors you plan to approach. Maintaining flexibility, and not getting bogged down by a rigid one-approach-fits-all mindset is important in that regard.

Here’s the TL;DR: When a client comes to me suggesting they might do a SAFE round, my first piece of advice is that a convertible note with a long maturity (three years) and low interest rate (like 2 percent or 3 percent) will give them functionally the same thing — while minimizing friction with more traditional investors.

Why? Read on for more details.

Convertible notes for smaller seed rounds

Convertible securities (convertible notes and SAFEs) are often favored, particularly for smaller rounds (less than $2 million), for their simplicity and speed to close. They defer a lot of the heavier terms and negotiation to a later date. The dominant convertible security (when equity is not being issued) across the country for seed funding is a convertible note, which is basically a debt instrument that is intended to convert into equity in the future when you close a larger round (usually a Series A). The note’s conversion economics are more favorable than what Series A investors pay, due to the greater risk the seed investors took on.
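To make those conversion economics concrete, here is a minimal sketch with hypothetical numbers. Real notes vary in interest convention, cap definition and share-count mechanics, so treat this as an illustration, not legal or financial advice:

```python
def note_conversion_shares(principal: float, rate: float, years: float,
                           discount: float, cap: float,
                           series_a_price: float,
                           premoney_fd_shares: float) -> float:
    """Shares a convertible note converts into at a priced round.
    The note converts at the better (lower) of the discounted Series A
    price and the valuation-cap-implied price; accrued interest converts
    alongside principal. Assumes simple (non-compounding) interest."""
    balance = principal * (1 + rate * years)
    discounted_price = series_a_price * (1 - discount)
    cap_price = cap / premoney_fd_shares
    conversion_price = min(discounted_price, cap_price)
    return balance / conversion_price

# Hypothetical example: a $500k note at 2% for two years ($520k balance),
# converting at a $1.00 Series A price with a 20% discount and an $8M cap
# over 10M fully diluted shares. Both routes imply $0.80 per share,
# so the note converts into 650,000 shares.
shares = note_conversion_shares(500_000, 0.02, 2, 0.20,
                                8_000_000, 1.00, 10_000_000)
```

This is also why the long-maturity, low-rate note mentioned above behaves much like a SAFE: with a small rate and a distant maturity, the interest term barely moves the share count.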

How to build The Matrix

Released this month 20 years ago, “The Matrix” went on to become a cultural phenomenon. This wasn’t just because of its ground-breaking special effects, but because it popularized an idea that has come to be known as the simulation hypothesis. This is the idea that the world we see around us may not be the “real world” at all, but a high-resolution simulation, much like a video game.

While the central question raised by “The Matrix” sounds like science fiction, it is now debated seriously by scientists, technologists and philosophers around the world. Elon Musk is among those; he puts the odds at a billion to one in favor of our being inside a simulated, video-game world!

As a founder and investor in many video game startups, I started to think about this question seriously after seeing how far virtual reality has come in creating immersive experiences. In this article we look at the development of video game technology past and future to ask the question: Could a simulation like that in “The Matrix” actually be built? And if so, what would it take?

What we’re really asking is how far away we are from The Simulation Point, the theoretical point at which a technological civilization would be capable of building a simulation that was indistinguishable from “physical reality.”

[Editor’s note: This article summarizes one section of the upcoming book, “The Simulation Hypothesis: An MIT Computer Scientist Shows Why AI, Quantum Physics and Eastern Mystics All Agree We Are in a Video Game.”]

From science fiction to science?

But first, let’s back up.

“The Matrix,” you’ll recall, starred Keanu Reeves as Neo, a hacker who encounters enigmatic references to something called the Matrix online. This leads him to the mysterious Morpheus (played by Laurence Fishburne, and aptly named after the Greek god of dreams) and his team. When Neo asks Morpheus about the Matrix, Morpheus responds with what has become one of the most famous movie lines of all time: “Unfortunately, no one can be told what The Matrix is. You’ll have to see it for yourself.”

Even if you haven’t seen “The Matrix,” you’ve probably heard what happens next — in perhaps its most iconic scene, Morpheus gives Neo a choice: Take the “red pill” to wake up and see what the Matrix really is, or take the “blue pill” and keep living his life. Neo takes the red pill and “wakes up” in the real world to find that what he thought was real was actually an intricately constructed computer simulation — basically an ultra-realistic video game! Neo and other humans are actually living in pods, jacked into the system via cords plugged into their cerebral cortexes.

Who created the Matrix and why are humans plugged into it at birth? In the two sequels, “The Matrix Reloaded” and “The Matrix Revolutions,” we find out that Earth has been taken over by a race of super-intelligent machines that need the electricity from human brains. The humans are kept occupied, docile and none the wiser thanks to their all-encompassing link to the Matrix!  

But the Matrix wasn’t all philosophy and no action; there were plenty of eye-popping special effects during the fight scenes. Some of these now have their own names in the entertainment and video game industry, such as the famous “bullet time.” When a bullet is shot at Neo, the visuals slow down time and manipulate space; the camera moves in a circular motion while the bullet is frozen in the air. In the context of a 3D computer world, this makes perfect sense, though now the camera technique is used in both live action and video games. AI plays a big role too: in the sequels, we find out much more about the agents pursuing Neo, Morpheus and the team. Agent Smith (played brilliantly by Hugo Weaving), the main adversary in the first movie, is really a computer agent — an artificial intelligence meant to keep order in the simulation. Like any good AI villain, Agent Smith (who was voted the 84th most popular movie character of all time!) is able to reproduce and overlay himself onto any part of the simulation.

“The Matrix” storyboard from the original movie. (Photo by Jonathan Leibson/Getty Images for Warner Bros. Studio Tour Hollywood)

The Wachowskis, creators of “The Matrix,” claim to have been inspired by, among others, science fiction master Philip K. Dick. Most of us are familiar with Dick’s work from the many film and TV adaptations, ranging from Blade Runner and Total Recall to the more recent Amazon show, The Man in the High Castle. Dick often explored questions of what was “real” versus “fake” in his vast body of work. These are some of the same themes we will have to grapple with to build a real Matrix: AI that is indistinguishable from humans, implanting false memories and broadcasting directly into the mind.

As part of writing my upcoming book, I interviewed Dick’s wife, Tessa B. Dick, and she told me that Philip K. Dick actually believed we were living in a simulation. He believed that someone was changing the parameters of the simulation, and that most of us were unaware this was going on. This was, of course, the theme of his short story “Adjustment Team” (which served as the basis for the blockbuster “The Adjustment Bureau,” starring Matt Damon and Emily Blunt).

A quick summary of the basic (non-video game) simulation argument

Today, the simulation hypothesis has moved from science fiction to a subject of serious debate because of several key developments.

The first was when Oxford professor Nick Bostrom published his 2003 paper, “Are You Living in a Computer Simulation?” Bostrom doesn’t say much about video games or how we might build such a simulation; rather, he makes a clever statistical argument. Bostrom theorized that if a civilization ever reached the Simulation Point, it would create many ancestor simulations, each with large numbers (billions or trillions?) of simulated beings. Since simulated beings would vastly outnumber real beings, any given being (including us!) is more likely to be living inside a simulation than outside of it!

Other scientists, like astrophysicist and “Cosmos” host Neil deGrasse Tyson and physicist Stephen Hawking, weighed in, saying they found it hard to argue against this logic.

Bostrom’s argument implied two things that are the subject of intense debate. The first is that if any civilization ever reached the Simulation Point, then we are more likely to be in a simulation now. The second is that we are all more likely AI or simulated consciousnesses than biological ones. On this second point, I prefer to use the “video game” version of the simulation argument, which is a little different from Bostrom’s version.
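Bostrom's counting argument can be made concrete with a toy calculation. The numbers below are made-up illustrations, not figures from his paper:

```python
# A toy version of Bostrom's counting argument. All numbers here are
# made-up illustrations, not figures from his paper.

def fraction_simulated(civilizations, sims_per_civ, beings_per_sim, real_beings_per_civ):
    """Fraction of all conscious beings that are simulated."""
    simulated = civilizations * sims_per_civ * beings_per_sim
    real = civilizations * real_beings_per_civ
    return simulated / (simulated + real)

# Even a single civilization running 1,000 ancestor simulations of
# 10 billion beings each, against its own 10 billion biological beings,
# makes a randomly chosen being overwhelmingly likely to be simulated:
p = fraction_simulated(1, 1_000, 10_000_000_000, 10_000_000_000)
print(f"{p:.4f}")  # 0.9990
```

The point of the exercise is that the conclusion is insensitive to the exact numbers: as long as simulations outnumber base realities at all, the simulated beings dominate the count.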

Video games hold the key

Let’s look more closely at the video game version of the argument, which rests on the rapid development of video game and computer graphics technology over the past decades. In video games, we have both “players,” who exist outside the video game, and “characters,” who exist inside the game. In the game, we have PCs (player characters), which are controlled by (you might say mentally attached to) the players, and NPCs (non-player characters), which are artificial characters generated by the simulation.

Fifty years of the internet

When my team of graduate students and I sent the first message over the internet on a warm Los Angeles evening in October, 1969, little did we suspect that we were at the start of a worldwide revolution. After we typed the first two letters from our computer room at UCLA, namely, “Lo” for “Login,” the network crashed.

Hence, the first Internet message was “Lo” as in “Lo and behold” – inadvertently, we had delivered a message that was succinct, powerful, and prophetic.

The ARPANET, as it was called back then, was designed by government, industry and academia so scientists and academics could access each other’s computing resources and trade large research files, saving time, money and travel costs. ARPA, the Advanced Research Projects Agency (now called DARPA), awarded a contract to scientists at the private firm Bolt Beranek and Newman to implement a router, or Interface Message Processor; UCLA was chosen to be the first node in this fledgling network.

By December 1969, there were only four nodes – UCLA, Stanford Research Institute, the University of California, Santa Barbara and the University of Utah. The network grew exponentially from its earliest days, with the number of connected host computers reaching 100 by 1977, 100,000 by 1989, a million by the early 1990s and a billion by 2012; it now serves more than half the planet’s population.
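As a sanity check on that exponential claim, one can fit a constant doubling time to two of the host counts quoted above. A quick sketch, using only the article's own figures:

```python
import math

# Fit a constant doubling time between two of the host counts quoted
# above (100 hosts in 1977, a billion by 2012) as a sanity check on
# the exponential-growth claim.

def doubling_time(year0, count0, year1, count1):
    """Years per doubling, assuming exponential growth between two points."""
    return (year1 - year0) * math.log(2) / math.log(count1 / count0)

# Roughly a year and a half per doubling, sustained for 35 years:
print(round(doubling_time(1977, 100, 2012, 1_000_000_000), 2))
```

A doubling time that short, held for three and a half decades, is what turns four nodes into a system serving billions.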

Along the way, we found ourselves constantly surprised by unanticipated applications that suddenly appeared and gained huge adoption across the Internet; this was the case with email, the World Wide Web, peer-to-peer file sharing, user-generated content, Napster, YouTube, Instagram, social networking, etc.

It sounds utopian, but in those early days, we enjoyed a wonderful culture of openness, collaboration, sharing, trust and ethics. That’s how the Internet was conceived and nurtured.  I knew everyone on the ARPANET in those early days, and we were all well-behaved. In fact, that adherence to “netiquette” persisted for the first two decades of the Internet.

Today, almost no one would say that the internet is unequivocally wonderful, open, collaborative, trustworthy or ethical. How did a medium created for sharing data and information turn into such a mixed blessing? How did we go from collaboration to competition, from consensus to dissension, from a reliable digital resource to an amplifier of questionable information?

The decline began in the early 1990s, when spam first appeared just as an intensifying drive to monetize the Internet reached deeply into the world of the consumer. This enabled many aspects of the dark side to emerge (fraud, invasion of privacy, fake news, denial of service, etc.).

It also changed the nature of internet technical progress and innovation, as risk aversion began to stifle the earlier culture of “moon shots.” We are still suffering from those shifts. The internet was designed to promote decentralized information, democracy and consensus based on shared values and factual information. In this, it has fallen short of the aspirations of its founding fathers.

As the private sector gained more influence, their policies and goals began to dominate the nature of the Internet.  Commercial policies gained influence, companies could charge for domain registration, and credit card encryption opened the door for e-commerce. Private firms like AOL, CompuServe and Earthlink would soon charge monthly fees for access, turning the service from a public good into a private enterprise.

This monetization of the internet has changed its flavor. On the one hand, it has led to services of great value: pervasive search engines, access to extensive information repositories, consumer aids, entertainment, education, connectivity among humans, etc. On the other hand, it has led to excess and control in a number of domains.

Among these one can identify restricted access imposed by corporations and governments, limited progress in technology deployment when economic incentives are not aligned with (possibly short-term) corporate interests, excessive use of social media for many forms of influence, etc.

If we ask what we could have done to mitigate some of these problems, one can easily name two things. First, we should have provided strong file authentication – the ability to guarantee that the file I receive is an unaltered copy of the file I requested. Second, we should have provided strong user authentication – the ability for users to prove they are who they claim to be.

Had we done so, we could have left these capabilities turned off in the early days (when false files were not being dispatched and users were not falsifying their identities), then gradually turned them on as the dark side began to emerge, matching the level of protection to the extent of the abuse. Since we did not build in an easy way to provide these capabilities from the start, it is now problematic to retrofit them onto the vast legacy system we call the Internet.
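Today, the file-authentication half of that wish is routinely approximated with cryptographic hashes: publish a digest alongside a file, and any receiver can check that their copy is unaltered. A minimal sketch of the idea follows; note it verifies integrity only, and proving who published the digest would additionally require digital signatures:

```python
import hashlib

# Verify that a received file is an unaltered copy by comparing its
# cryptographic hash against a digest published by the sender.
# Integrity only: this does not prove *who* published the digest.

def file_digest(path, algorithm="sha256", chunk_size=65536):
    """Hash a file in chunks so large files need not fit in memory."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def verify_file(path, expected_digest):
    """True if the file on disk matches the published digest."""
    return file_digest(path) == expected_digest
```

This is exactly the pattern behind the checksum files that accompany most software downloads today.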


Having come these 50 years since its birth, how is the Internet likely to evolve over the next 50? What will it look like?

That’s a foggy crystal ball. But we can foresee that it is fast on its way to becoming “invisible” (as I predicted 50 years ago) in the sense that it will and should disappear into the infrastructure.

It should be as simple and convenient to use as electricity, which is available via a trivially simple interface: you plug into the wall, and without knowing or caring how it gets there or where it comes from, you receive its services on demand.

Sadly, the internet is far more complicated to access than that. When I walk into a room, the room should know I’m there and it should provide to me the services and applications that match my profile, privileges and preferences.  I should be able to interact with the system using the usual human communication methods of speech, gestures, haptics, etc.

We are rapidly moving into such a future as the Internet of Things pervades our environmental infrastructure with logic, memory, processors, cameras, microphones, speakers, displays, holograms and sensors. Such an invisible infrastructure, coupled with intelligent software agents embedded in the internet, will seamlessly deliver such services. In a word, the internet will essentially be a pervasive global nervous system.

That is what I judge will be the likely essence of the future infrastructure. However, as I said above, the applications and services are extremely hard to predict as they come out of the blue as sudden, unanticipated, explosive surprises!  Indeed, we have created a global system for frequently shocking us with surprises – what an interesting world that could be!

Pre- and Post-Money SAFEs: Choosing the right one for your startup

With Y Combinator’s Demo Day taking place at Pier 48 in San Francisco next week, its largest batch of companies ever is getting ready to present to an audience of select investors. Having taken Atrium through Demo Day myself, I have first-hand knowledge of the process. When the founders have finished their pitches, the time to talk numbers will closely follow. Chief among the many decisions founders will face during this time is whether to opt for the Pre-Money SAFE or the new Post-Money SAFE, the two standardized legal documents that YC has introduced in recent years.

Both versions are meant to make the process fast, easy and fair for both parties in the early-stage fundraising process. But there are crucial differences between the two that founders should examine carefully.

Essentially, the Pre-Money SAFE is exceptionally favorable to founders because it gets them pre-valuation funding like a convertible note, but debt-free. The Post-Money SAFE sweetens some of the terms for investors, like locking in their percentage ownership in a priced round later on.

Overall, we expect the Post-Money version to become more common, especially if the company is raising a round above $1 million or $2 million, and the investors have more leverage to ask for it in the negotiation.
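The difference between the two documents can be reduced to simple dilution arithmetic. The sketch below uses made-up numbers and a deliberate simplification (all SAFEs convert at their valuation caps, with no discounts); the real outcome depends on the full cap table and the terms of the eventual priced round:

```python
# Illustrative numbers only: actual ownership depends on the full cap
# table and the terms of the eventual priced round. This also assumes
# all SAFEs convert at their valuation caps, with no discounts.

def post_money_safe_ownership(investment, post_money_cap):
    """Post-Money SAFE: the investor's stake heading into the priced
    round is locked in as investment / post-money valuation cap."""
    return investment / post_money_cap

def pre_money_safe_ownership(investment, pre_money_cap, other_safes_total):
    """Pre-Money SAFE (simplified): every converting SAFE shares the
    same conversion price, so this investor's stake shrinks as more
    SAFEs are raised alongside theirs."""
    return investment / (pre_money_cap + investment + other_safes_total)

# $500k on a $5M post-money cap is exactly 10%, regardless of what
# else the company raises before the priced round:
print(post_money_safe_ownership(500_000, 5_000_000))  # 0.1
# $500k on a $5M pre-money cap, with $1.5M of other SAFEs outstanding,
# converts to a smaller stake than the investor may have expected:
print(round(pre_money_safe_ownership(500_000, 5_000_000, 1_500_000), 4))  # 0.0714
```

This is why investors writing larger checks tend to push for the Post-Money form: their percentage is fixed up front, and any further SAFE fundraising dilutes the founders rather than them.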

(Note: This article is aimed at giving founders a general understanding of the changes from Pre-Money SAFEs to Post-Money SAFEs. The information provided is based on my professional experience and opinions, and should not be used without careful consideration and advice by qualified advisors and legal counsel. Also, to learn more and ask questions about Pre and Post-Money SAFEs, join me on April 16th for a webinar where I’ll dive in a bit deeper.)

Two structures for raising startup investment

Today there are two general ways of structuring a startup fundraising round. The first can be called a “priced equity round,” and is characterized by the sale of preferred stock with a fixed valuation.

What to watch for in a VC term sheet

When startup founders review a VC term sheet, they are mostly interested in just the pre-money valuation and the board composition. They assume the rest of the language is “standard,” and they don’t want to ruffle any feathers with their new VC partner by “nickel and diming the details.” But these details do matter.

VCs are savvy and experienced negotiators, and all of the language included in the term sheet is there because it is important to them. In the vast majority of cases, every benefit and protection a VC gets in a term sheet comes with some sort of loss or sacrifice on the part of the founders – either in transferring some control away from the founders to the VC, shifting risk from the VC to the founders, or providing economic benefits to the VC and away from the founders. And you probably have more leverage to get better terms than you may think. We are in an era of record levels of capital flowing into the venture industry and more and more firms targeting seed stage companies. This competition makes it harder for VCs to dictate terms the way they used to.

But like any negotiating partner, a VC will likely be evaluating how savvy you appear to be in approaching a proposed term sheet when deciding how hard to push on terms. If the VC sees you as naïve or green, they can easily take advantage of that in negotiating terms that benefit themselves. So what really matters when you are negotiating a term sheet? As a founder, you want to come out of the financing with as much control over, and flexibility in shaping, the company’s future as possible, and with as large a share of its future economic prosperity as possible. With these principles in mind, let’s take a look at four specific issues in a term sheet that are often overlooked by founders and company counsel:

  • What counts in pre-money capitalization
  • The CEO common director
  • Drag-along provisions
  • Liquidation preference
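The first item on this list comes down to arithmetic: the price per share in the round is the pre-money valuation divided by whatever the term sheet defines as the pre-money capitalization, so everything counted in that denominator (most notably a newly created option pool) lowers the price and shifts dilution onto existing holders. A sketch with hypothetical numbers:

```python
# Hypothetical numbers showing why the pre-money capitalization
# definition matters: price per share is the pre-money valuation
# divided by whatever the term sheet counts as the capitalization.

def price_per_share(pre_money_valuation, pre_money_capitalization):
    return pre_money_valuation / pre_money_capitalization

valuation = 8_000_000        # $8M pre-money valuation
existing_shares = 8_000_000  # existing fully diluted shares
new_option_pool = 2_000_000  # pool created as a condition of the round

# Pool excluded from the pre-money capitalization: founders sell at $1.00.
print(price_per_share(valuation, existing_shares))  # 1.0
# Pool included: the price drops to $0.80, and the pool's dilution
# falls entirely on existing holders rather than the new investor.
print(price_per_share(valuation, existing_shares + new_option_pool))  # 0.8
```

Two term sheets with the identical headline valuation can therefore produce meaningfully different founder ownership, purely based on what is counted in the capitalization.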

What counts in pre-money capitalization