VCs like what they are hearing out of the podcasting sector

Podcasts are television for the earbud generation.

And podcasts have been around for a surprisingly long time. If you’re one of the folks who got hooked on podcasts around 2014, when Sarah Koenig and other producers from This American Life launched the wildly popular Serial podcast, you might think that it’s a brand new medium. But podcasts — audio that’s packaged and syndicated over RSS — have been around since the early 2000s.

And although many podcasters make money, typically through sponsorships, the podcasting industry (such as it is) hasn’t received much in the way of venture funding until quite recently. 2017 was a pivotal year for venture investment in the industry.

A venture-ready industry?

In the chart below, we plot deal and dollar volume for venture rounds raised by companies that are either in Crunchbase’s podcast category (https://www.crunchbase.com/search/organizations/field/organizations/categories/podcast) or use the word “podcast” in their descriptions:

In charts like this, one typically expects a significant spike in dollar volume to come from one really big round, but that’s not what happened in the podcast world. Rather, there were several large deals struck with early-stage companies in the space. Here are some of the highlights from 2017:

So far in 2018, a number of other podcasting startups have also raised venture funding, including West Hollywood-based podcast network Wondery, which raised $5 million in a Series A round. A company with a name that’s a little on the nose, The Podcast App, went through Y Combinator.

VC interest in podcasting: Why now?

Why has the podcast industry taken so long to appeal to VCs in a big way? In part, it’s a fairly decentralized industry. While there are some larger podcasting networks, most podcasts are still produced and promoted independently. But, perhaps more importantly, the business value of podcasts has been difficult to quantify until relatively recently. Unlike a web page or streaming video platform, where basically every user action can be tracked and optimized, historically it’s been difficult to analyze podcast listening habits and target ads.

But this is changing. Podcasts are now a mainstream medium for news and entertainment. And in December 2017, Apple, a longtime podcast booster and the largest distributor of podcasts, rolled out podcast episode analytics. This lets podcast producers and their advertisers know whether people actually listened to the entire episode and heard the ads. (Note: a few smaller podcast players offered similar analytics and ad monitoring features before Apple did.)

This leads some investors to believe they can achieve “venture scale” returns by putting money into podcasting startups.


Source: Tech Crunch

Former Twitter employees prove innovation isn’t just for profit

It’s no secret that the tech industry has a reputation for harmfully disrupting communities. But not everyone in tech is to blame for the negative effects. In 2015, Ben Kovacs and Joel Lunenfeld founded the nonprofit Guardian Gym, a buy-one-give-one mixed martial arts gym and after-school program that now boasts over 300 adult members and youth mentees. Kovacs attributes the gym’s growth partly to Dick Costolo’s example at Twitter.

“He talked to the people, he made everyone feel important, everyone thought he was their friend,” Kovacs said. “And I realized that we needed to build a similar type culture if we wanted to be here for the next couple of decades.”

The first-of-its-kind gym is outgrowing its current space and is in the process of securing a second location to meet the community’s needs. Kovacs plans on having a proper classroom space, nap pods, an indoor/outdoor BBQ and lounge area, as well as an all-youth jiujitsu and boxing program from 5-9 p.m. that runs concurrently with the adult classes.

“Imagine a place a kid could go every day and essentially have everything they need to be healthy, to get their exercise out, eat nutritious meals, and of course do their homework,” said Kovacs.

You can donate to the new site on their GoFundMe page.


Source: Tech Crunch

‘Arrested Development’ struggles to shake off recent controversies

Netflix’s revival of Arrested Development may have had a mixed reception from critics and fans, but the dysfunctional Bluth family isn’t done yet.

Five years after the premiere of the much-anticipated fourth season, Arrested Development is back for season five — or rather, the first eight episodes of the season, with more to follow later this year. On the latest episode of the Original Content podcast, we’re joined by TechCrunch’s Lucas Matney to discuss our thoughts on the show.

For many fans, this new season may feel like a return to form. Not everything works — there’s still some awkward editing and greenscreen — but it’s back to the format of the show’s classic episodes, with lots more delightful bickering between characters like Michael (Jason Bateman), George Michael (Michael Cera), Gob (Will Arnett), Maeby (Alia Shawkat) and Lucille (Jessica Walter).

Unfortunately, it’s tough to talk about the show’s quality without also acknowledging a recent group interview with The New York Times, in which actor Jeffrey Tambor was asked about yelling at Walter, who apparently started crying while a number of the male cast members seemed to defend Tambor’s behavior. They’ve since apologized, but we still wrestle with how the controversy changes our perception of the show.

We also talk about another big piece of streaming news, namely Amazon’s decision to revive The Expanse after it was canceled by Syfy.

You can listen in the player below, subscribe using Apple Podcasts or find us in your podcast player of choice. If you like the show, please let us know by leaving a review on Apple. You also can send us feedback directly.


Source: Tech Crunch

Will smart home tech make us care more about privacy?

For most people, the thought of a smart device sharing their intimate conversations and sending those recordings along to their acquaintances is the stuff of dystopian nightmares. And for one family in Portland, it’s a nightmare that became all too real when their Amazon Echo sent a recording of a private conversation to a random contact in their phone book.

Mercifully, the recorded conversation was fairly banal — a chat about home renovations. But as smart home technology is swiftly being integrated into our daily lives and private spaces, it’s not difficult to imagine far worse scenarios.

Smart speakers record residents’ conversations. Thermostats equipped with motion sensors track the whereabouts of each household member, and when they leave the house. Refrigerators remember grocery lists and spending habits. One thing is clear: when residents invite smart technology into their homes, they are gambling with their privacy.

Ironically, the smart home may turn out to be the salvation of online privacy itself. Internet companies have gotten away with hoarding people’s personal data for so long in part because of what experts call “the privacy paradox”: while most people claim to care deeply about online privacy, very few of them take action to protect it. Just look at the recent furor over Facebook’s lack of data privacy protections, which resulted in the compromise of 87 million users’ personal information. Though plenty of people tweeted they would #DeleteFacebook, how many actually permanently closed their accounts? Certainly far fewer than 87 million.

While experts disagree about why this paradox exists, at least some of the problem seems rooted in the fact that online space is virtual, whereas our privacy instincts evolved in physical space. By bringing virtual privacy incursions into the physical world—particularly into the protected private space of the home—smart home technology could short-circuit that dynamic.

The internet is intangible, and so its privacy risks appear to be too. It’s one thing to know, in the back of your mind, that Facebook has the ability to comb through your private messages. But when devices in your home are recording your spoken conversations and physical movements, it’s harder to ignore the looming threat of potentially disastrous privacy violations.

If smart fridges and smart locks get people to take online privacy as seriously as physical privacy, they could do what the Equifax hack and other high-profile data breaches could not: actually get people to change their behavior. If users vote for privacy with their feet—or their wallets—they could spur a wholesale rethinking of the online economy, away from one-sided exploitation and toward greater trust and transparency.

Privacy in virtual space

In Western culture, the home has long been recognized as a protected zone; the Talmud includes prohibitions against putting windows in a house where they look directly into a neighbor’s. When a stranger peeps through our window or listens at our door, millennia-old norms tell us we should chase them away. This desire for isolation may stem from a fundamental biological need; whether you’re a human or a possum, physical withdrawal means concealment and protection from predation, making privacy an evolutionary life-or-death matter.

But websites and apps have no physical presence in our lives. A software algorithm, no matter how malicious, doesn’t have the visceral menace of an unknown face at the glass. The internet disarms us by making our interactions feel abstract, even unreal. One 2016 study posited that this sense of unreality leads to contradictory attitudes about online privacy: while people know rationally that they should be concerned about virtual incursions, they simply don’t have a strong “gut feeling” about it intuitively. And when making decisions in the moment, gut feeling often wins out.

The problem is exacerbated by the fact that online, there is less of a clear distinction between private and public space. We use social media to communicate simultaneously with hundreds or thousands of anonymous followers and with our closest friends. Email inboxes, Slack channels, and the like are more obviously “closed” spaces, but even there it’s often unclear to users which algorithms might be listening in. Even Snapchat—known for auto-deleting users’ photos, videos, and chats to protect their privacy—announced it would allow retargeted ads in fall 2017, to relatively little backlash. It’s hard to think about protecting ourselves from the stranger peeping in the window when we’re not even sure if it’s a public or private space he or she is looking into. What’s more, many users tend to imagine online “walls” that aren’t really there.

Multiple studies have shown that the mere existence of a privacy policy on a website makes users feel more secure, even though a policy in itself is no guarantee that their data won’t be sold to third parties.

“How secure are your light bulbs?”

When the internet enters the clearly private space of the home, some of that ambiguity will disappear. It’s telling that a November 2017 survey by Deloitte found that consumers are more cautious about smart home devices than about general online activities or even other categories of IoT. Forty percent of respondents said that they felt smart home technology “reveals too much about their personal lives,” while another 40 percent said they were worried about their usage being tracked. By comparison, they were less mistrustful of other IoT applications like autonomous vehicles and smart car technology, even though those have similar tracking capabilities.

And that survey only considers people’s reaction to fairly abstract privacy risks. The reality is that in a smart home, security vulnerabilities and data breaches can have much more dramatic real-world impacts. On his blog Charged, developer and journalist Owen Williams recently detailed his experience trying to figure out who or what kept overriding the brightness settings for his Philips Hue smart light bulbs. It turned out that an app he’d enabled to dim his office lights at night had taken over all the bulbs hooked up to Williams’ Hue system and was keeping them at one uniform brightness.

As Williams points out, if a malicious app accomplished the same feat, it could extort money from the user by “randomly changing the brightness or color of lights until they pay.” When a cyberattack results in lights that won’t stop flashing—or doors that won’t lock, windows that won’t close, or a fridge that turns itself off and melts all your ice cream—it’s logical that people’s reactions to digital privacy incursions will become that much more extreme.


Trust is the antidote

How can internet companies thrive in the privacy-sensitive space of the home? If privacy behavior is mostly about gut feelings, they’ll need to reinforce positive ones by winning consumers’ trust.

Trust has not historically been a major factor in the adoption of complex new technologies—research into technology acceptance models for both virtual and IoT systems shows that usability has been much more important. Even heavy users of Google and Facebook probably wouldn’t say that they trust either company very deeply.

However, a look at another internet giant, Airbnb, shows how this calculus changes when users’ homes and not just their online identities are involved. Airbnb puts trust at the core of its business model. Hosts are only willing to open their homes to strangers because the company empowers them with access to information about potential guests (which the guests themselves choose to provide), including their bio, reviews, and public Facebook profile.

By focusing on forging connections between hosts and guests, Airbnb builds community and reduces the uncertainty that pervades users’ relationships with so many internet companies. Airbnb is also relatively transparent about how it collects and analyzes user data, and often puts it to use in ways that increase users’ control over how they use the platform—for instance, to generate more accurate pricing suggestions for hosts. The result: it pushes users’ concerns about opening their homes or staying in others’ spaces out of the realm of gut feeling into that of a more considered, rational (and easy to ignore) concern.

If they want to thrive amid rising privacy concerns in the long term, manufacturers of smart home products would be wise to take a page from Airbnb’s book. They should find ways to forge trust through absolute transparency, sharing with customers what data is being collected and how it’s being used. They should create new business models that don’t rely on collecting terabytes and terabytes of personal data, but on building trust – and even community – with customers.

Companies should not only implement best practices for personal data encryption, storage, sharing, and deletion, but design their products around the customer’s ability to control their own data. If the development of IoT follows this path, the next 10 to 15 years won’t bring an inevitable erosion of privacy, but its renaissance.


Source: Tech Crunch

Whither VR/AR?

“Despite many pronouncements that 2016 was the year of VR, a more apt word for virtual reality might be absence,” The Economist observed caustically last summer, noting that during that year forecasts of combined sales of VR hardware and software dropped from $5.1bn to $3.6bn to the harsh reality of $1.8bn. But hey, one rough holiday season does not an industry make, right? Surely in 2017 things began to —

— oh. “Shock Stat: In 2017, VR Headset Shipments For Most Top Brands Went DOWN Compared To 2016.” So much for the many predictions that VR headset shipments would grow exponentially for years. Crow appears to be the appetizer for nearly every industry dinner these days. But that was before the Oculus Go, right? Except … the Go seems to have sold at most a quarter of a million units in its first few weeks, far behind the comparably priced Nintendo Switch released months earlier, and as I write this languishes well outside the top 20 of Amazon’s “Video Games > Accessories” bestsellers.

I mean. These aren’t terrible numbers. Sony’s PlayStation VR has sold almost 3 million units! … which is to say, it’s reached almost 4% of PlayStation owners. But aren’t VR and AR supposed to be the Next Big Thing, not the Next Little Niche? And doesn’t that mean their reach is supposed to grow exponentially, not linearly?

AR is in everyone’s hands, of course, courtesy of Apple’s ARKit, Google’s ARCore, Facebook’s AR Studio, etc. But, quick, name a popular/successful AR smartphone app a) that isn’t Pokémon GO b) doesn’t involve furniture!

If I’m pointing accusatory fingers at anyone, I’m pointing them at myself. I too expected VR/AR to be much further along by now. I thought we’d see hit games that could only be played in VR. I thought Pokémon GO, which launched twenty-three months ago, was the harbinger of a whole new wave of AR worlds, some of which would then begin to interrelate and cross over. In the long run maybe it will still seem that way. But in the short term —

— well, I dropped by the Augmented World Expo in Santa Clara this week, and my main takeaway was that the industry has essentially abandoned the consumer AR/VR space, at least for now. Everyone’s aiming at AR/VR for work now. But how many jobs are there, really, where complex information needs to be accessed in a hands-free way? How many problems can be solved by VR conferencing but not videoconferencing? Sure, they exist, and the tech can be spectacularly great for them; but, again, for now at least, we’re talking Next Little Niche.

I did see one really eye-opening thing, which led me to the sudden belief that the humble QR code will achieve its apotheosis in mixed reality:

…but what use is a bridge between two worlds when nobody bothers spending any time in one of them?

“But gaming!” you say. “I mean, immersive storytelling!” Sure. I’m super excited about that too; I’m a novelist in my spare time, after all. And that is the industry bright spot right now; “location-based VR,” i.e. “VR arcades,” are growing in number, and they seem like an obvious fit with the recent upsurge of immersive theater such as Punchdrunk’s Sleep No More, Meow Wolf’s House of Eternal Return, and The Latitude.

…But all the VR / mixed-reality immersive storytelling I’ve seen has been really cool for about 15 minutes max, heavy on hype and buzzwords, and basically failed at telling anything more than the crudest of stories. “Rather than storytelling, you’re storyliving,” enthused some Industrial Light & Magic folks at an event I went to a few months ago, and that sure sounds nice — but the VR ‘storyliving’ I’ve seen to date is all far, far less sophisticated than that of even my teenage Dungeons & Dragons campaigns.

I know. It’s the very early days of a new technology. It’s expensive. It’s still hardware-intensive. We’re still figuring out its best uses, and how it interacts with human physical location, and a whole new grammar of storytelling. But the Oculus Kickstarter launched almost six years ago, and I’ve seen a whole lot of VR/AR/mixed-reality demos since then, and every time, I walk away thinking: “This technology has so much potential.”

But in order to be the Next Big Thing at some point you have to actually start realizing your potential. Maybe Magic Leap will do it. (Not joking. At least, not entirely.) Otherwise, though, the disheartening truth is that, despite the low-price new standalone hardware, despite all the effort that’s gone into software and design and storytelling, I still don’t feel like we’re meaningfully closer to that than we were two years ago. Please, somebody show me I’m wrong.


Source: Tech Crunch

It’s OK to leave Facebook

The slow-motion privacy train wreck that is Facebook has many users, perhaps you, thinking about leaving or at least changing the way you use the social network. Fortunately for everyone but Mark Zuckerberg, it’s not nearly as hard to leave as it once was. The main thing to remember is that social media is for you to use, and not vice versa.

Social media has now become such an ordinary part of modern life that, rather than have it define our interactions, we can choose how we engage with it. That’s great! It means that everyone is free to design their own experience, taking from it what they need instead of participating to an extent dictated by social norms or the progress of technology.

Here’s why now is a better time than ever to take control of your social media experience. I’m going to focus on Facebook, but much of this is applicable to Instagram, Twitter, LinkedIn, and other networks as well.

Stalled innovation means a stable product

The Facebooks of 2005, 2010, and 2015 were very different things and existed in very different environments. Among other things over that eventful ten-year period, mobile and fixed broadband exploded in capabilities and popularity; the modern world of web-native platforms matured and became secure and reliable; phones went from dumb to smart to, for many, their primary computer; and internet-based companies like Google, Facebook, and Amazon graduated from niche players to embrace and dominate the world at large.

It’s been a transformative period for lots of reasons and in lots of ways. And products and services that have been there the whole time have been transformed almost continuously. You’d probably be surprised at what they looked like and how limited they were not long ago. Many things we take for granted today online were invented and popularized just in the last decade.

But the last few years have seen drastically diminished returns. Where Facebook used to add features regularly that made you rely on it more and more, now it is desperately working to find ways to keep people online. Why is that?

Well, we just sort of reached the limit of what a platform like Facebook can or should do, that’s all! Nothing wrong with that.

It’s like improving a car — no matter how many features you add or engines you swap in, it’ll always be a car. Cars are useful things, and so is Facebook. But a car isn’t a truck, or a bike, or an apple, and Facebook isn’t (for example) a broadcast medium, a place for building strong connections, or a VR platform (as hard as they’re trying).

The things that Facebook does well and that we have all found so useful — sharing news and photos with friends, organizing events, getting and staying in contact with people — haven’t changed considerably in a long time. And as the novelty has worn off those things, we naturally engage in them less frequently and in ways that make more sense to us.

Facebook has become the platform it was intended to be all along, with its own strengths and weaknesses, and its failure to advance beyond that isn’t a bad thing. In fact, I think stability is a good thing. Once you know what something is and will be, you can make an informed choice about it.

The downsides have become obvious

Every technology has its naysayers, and social media was no exception — I was and to some extent remain one myself. But over the years of changes these platforms have gone through, some fears were shown to be unfounded or old-fashioned.

The idea that people would cease interacting in the “real world” and live in their devices has played out differently from how we expected, surely; trying to instruct the next generation on the proper way to communicate with each other has never worked out well for the olds. And if you told someone in 2007 that foreign election interference would be as much a worry for Facebook as oversharing and privacy problems, you might be met with incredulous looks.

Other downsides were for the most part unforeseen. The development of the bubble or echo chamber, for instance, would have been difficult to predict when our social media systems weren’t also our news-gathering systems. And the phenomenon of seeing only the highlights of others’ lives posted online, leading to self-esteem issues in those who view them with envy, is an interesting but sad development.

Whether some risk inherent to social media was predicted or not, or proven or not, people now take such risks seriously. The ideas that one can spend too much time on social networks, or suffer deleterious effects from them, or feel real pain or turmoil because of interactions on them are accepted (though sadly not always without question).

Taking the downsides of something as seriously as the upsides is another indicator of the maturity of that thing, at least in terms of how society interacts with it. When the hype cycle winds down, realistic judgment takes its place and the full complexities of a relationship like the one between people and social media can be examined without interference.

Between the stability of social media’s capabilities and the realism with which those capabilities are now being considered, choice is no longer arbitrary or absolute. Your engagement is not being determined by them any more.

Social media has become a rich set of personal choices

Your experience may differ from mine here, but I feel that in those days of innovation among social networks your participation was more of a binary. You were either on or you were off.

The way they were advancing and changing defined how you engaged with them, by adding features and opting you into them, or changing layouts and algorithms. It was hard to really choose how to engage in any meaningful way when the sands were shifting under your feet (or rather, fingertips). Every few months brought new features and toys and apps, and you sort of had to be there, using them as prescribed, or risk being left behind. So people either kept up or voluntarily stayed off.

Now all that has changed. The ground rules are set, and have been for long enough that there is no risk that if you left for a few months and came back, things would be drastically different.

As social networks have become stable tools used by billions, any combination or style of engagement with them has become inherently valid.

Your choice to engage with Facebook or Instagram does not boil down to simply whether you are on it or not any more, and the acceptance of social media as a platform for expression and creation as well as socializing means that however you use it or present on it is natural and no longer (for the most part) subject to judgment.

That extends from choosing to make it an indispensable tool in your everyday life to quitting and not engaging at all. There’s no longer an expectation that the former is how a person must use social media, and there is no longer a stigma to the latter of disconnectedness or Luddism.

You and I are different people. We live in different places, read different books, enjoy different music. We drive different cars, prefer different restaurants, like different drinks. Why should we be the same in anything as complex as how we use and present ourselves on social media?

It’s analogous, again, to a car: you can own one and use it every day for a commute, or use it rarely, or not have one at all — who would judge you? It has nothing to do with what cars are or aren’t, and everything to do with what a person wants or needs in the circumstances of their own life.

For instance, I made the choice to remove Facebook from my phone over a year ago. I’m happier and less distracted, and engage with it deliberately, on my terms, rather than it reaching out and engaging me. But I have friends who maintain and derive great value from their loose network of scattered acquaintances, and enjoy the immediacy of knowing and interacting with them on the scale of minutes or seconds. And I have friends who have never been drawn to the platform in the first place, content to select from the myriad other ways to stay in touch.

These are all perfectly good ways to use Facebook! Yet only a few years ago the zeitgeist around social media and its exaggerated role in everyday life — resulting from novelty for the most part — meant that to engage only sporadically would be more difficult, and to disengage entirely would be to miss out on a great deal (or fear that enough that quitting became fraught with anxiety). People would be surprised that you weren’t on Facebook and wonder how you got by.

Try it and be delighted

Social networks are here to improve your life the same way that cars, keyboards, search engines, cameras, coffee makers, and everything else are: by giving you the power to do something. But those networks and the companies behind them were also exerting power over you and over society in general, the way (for example) cars and car makers exerted power over society in the ’50s and ’60s, favoring highways over public transportation.

Some people and some places, more than others, are still subject to the influence of car makers — ever try getting around L.A. without one? And the same goes for social media — ever try planning a birthday party without it? But the last few years have helped weaken that influence and allow us to make meaningful choices for ourselves.

The networks aren’t going anywhere, so you can leave and come back. Social media doesn’t control your presence.

It isn’t all or nothing, so you can engage at 100 percent, or zero, or anywhere in between. Social media doesn’t decide how you use it.

You won’t miss anything important, because you decide what is important to you. Social media doesn’t share your priorities.

Your friends won’t mind, because they know different people need different things. Social media doesn’t care about you.

Give it a shot. Pick up your phone right now and delete Facebook. Why not? The absolute worst that will happen is you download it again tomorrow and you’re back where you started. But it could also be, as it was for me and has been for many people I’ve known, like shrugging off a weight you didn’t even realize you were bearing. Try it.


Source: Tech Crunch

Looks like macOS 10.14 will have a new dark mode and an Apple News app

Apple’s Worldwide Developers Conference is just a couple of days away, but some of the updates appear to have been revealed early.

Specifically, developer Steve Troughton-Smith tweeted some screenshots this morning of what he said was macOS 10.14. And while the screenshots focused on Xcode 10, they also revealed a couple of bigger changes to the operating system.

For one thing, it looks like the new version of macOS will include a more comprehensive dark mode — one that doesn’t just darken the menu bar and the dock, but applies much more broadly, affecting apps and even the Trash can. The screenshots also include an icon for Apple News in the dock, so there’s probably a new desktop version of the app on the way.

How did Troughton-Smith get ahold of these screenshots? He said Apple posted a preview video for Xcode to the Mac App Store API — a video he then shared with 9to5Mac. So it seems the Mac App Store will start including preview videos like this one (the iOS App Store already does).

Ahead of WWDC, there have been rumors that Apple will launch “universal” apps that work on both desktop and mobile. Nothing here confirms that, but it does suggest Apple is working to make iOS and macOS — and their respective App Stores — more similar.


Source: Tech Crunch

Not just another decentralized web whitepaper?

Given all the hype and noise swirling around crypto and decentralized network projects, which run the full gamut from scams and stupidity to very clever and inspired ideas, the release of yet another whitepaper does not immediately set off an attention klaxon.

But this whitepaper — which details a new protocol for achieving consensus within a decentralized network — is worth paying more attention to than most.

MaidSafe, the team behind it, are also the literal opposite of fly-by-night crypto opportunists. They’ve been working on decentralized networking since long before the space became the hot, hyped thing it is now.

Their overarching mission is to engineer an entirely decentralized Internet which bakes in privacy, security and freedom of expression by design — the ‘Safe’ in their planned ‘Safe Network’ stands for ‘Secure access for everyone’ — meaning it’s encrypted, autonomous, self-organizing, self-healing. And the new consensus protocol is just another piece towards fulfilling that grand vision.

What’s consensus in decentralized networking terms? “Within decentralized networks you must have a way of the network agreeing on a state — such as can somebody access a file or confirming a coin transaction, for example — and the reason you need this is because you don’t have a central server to confirm all this to you,” explains MaidSafe’s COO Nick Lambert, discussing what the protocol is intended to achieve.

“So you need all these decentralized nodes all reaching agreement somehow on a state within the network. Consensus occurs by each of these nodes on the network voting and letting the network as a whole know what it thinks of a transaction.

“It’s almost like consensus could be considered the heart of the networks. It’s required for almost every event in the network.”
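To make that voting idea concrete, here is a minimal sketch in Python — not MaidSafe’s actual code; the node names and event labels are invented — of a network accepting a state change once a supermajority of nodes report the same view:

```python
from collections import Counter

def tally_votes(votes, total_nodes):
    """Return the agreed value once more than 2/3 of nodes back it, else None.

    `votes` maps node id -> the state that node claims is correct, e.g.
    "alice may read file X" or "coin 42 now belongs to bob". The 2/3
    supermajority is the classic Byzantine fault tolerance threshold: it holds
    as long as fewer than a third of the voters are faulty or malicious.
    """
    counts = Counter(votes.values())
    value, support = counts.most_common(1)[0]
    if support * 3 > 2 * total_nodes:
        return value
    return None  # no consensus yet; keep gossiping and voting

# Example: 7 nodes, 5 agreeing on an event and 2 dissenting.
votes = {f"node{i}": "tx-42-valid" for i in range(5)}
votes.update({"node5": "tx-42-invalid", "node6": "tx-42-invalid"})
print(tally_votes(votes, total_nodes=7))  # -> "tx-42-valid" (5 of 7 > 2/3)
```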

We wrote about MaidSafe’s alternative, server-less Internet in 2014. But they actually began work on the project in stealth all the way back in 2006. So they’re over a decade into the R&D at this point.

The network is p2p because it’s being designed so that data is locally encrypted, broken up into pieces and then stored, distributed and replicated, across the network, relying on the users’ own compute resources to stand in and take the strain. No servers necessary. A rough sketch of that split-and-distribute idea follows.
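This sketch uses invented chunk sizes and peer names, leaves out the client-side encryption step, and assumes content-addressed chunks are each assigned to a handful of peers — it illustrates the general approach, not MaidSafe’s actual scheme:

```python
import hashlib

CHUNK_SIZE = 1024 * 1024  # 1 MB chunks (illustrative size)
REPLICAS = 4              # how many peers hold a copy of each chunk

def store_file(data, nodes):
    """Toy split-and-distribute: cut the file into chunks, address each chunk
    by its hash, and assign each chunk to several peers chosen
    deterministically from that hash. A real system would encrypt every chunk
    client-side before it ever leaves the machine."""
    placement = {}
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        chunk_id = hashlib.sha256(chunk).hexdigest()
        start = int(chunk_id, 16) % len(nodes)
        holders = [nodes[(start + i) % len(nodes)] for i in range(REPLICAS)]
        placement[chunk_id] = holders
    return placement

peers = [f"peer-{i}" for i in range(20)]
print(store_file(b"hello safe network" * 100000, peers))
```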

The prototype Safe Network is currently in an alpha testing stage (they opened for alpha in 2016). Several more alpha test stages are planned, with a beta release still a distant, undated prospect at this stage. But rearchitecting the entire Internet was clearly never going to be a day’s work.

MaidSafe also ran a multimillion-dollar crowdsale in 2014 — for a proxy token of the coin that will eventually be baked into the network — and did so long before ICOs became a crypto-related bandwagon that all sorts of entities were jumping onto. The SafeCoin cryptocurrency is intended to operate as the incentive mechanism for developers to build apps for the Safe Network and for users to contribute compute resources, and thus bring MaidSafe’s distributed dream alive.

Their timing on the token sale front, coupled with prudent hodling of some of the Bitcoins they’ve raised, means they’re essentially in a position of not having to worry about raising more funds to build the network, according to Lambert.

A rough, back-of-an-envelope calculation on MaidSafe’s original crowdsale suggests, given they raised $2M in Bitcoin in April 2014 when the price for 1BTC was up to around $500, the Bitcoins they obtained then could be worth between ~$30M-$40M by today’s Bitcoin prices — though that would be assuming they held on to most of them. Bitcoin’s price also peaked far higher last year too.
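Spelled out, that envelope calculation looks like this; the 2018 spot prices are assumptions chosen to bracket the article’s range, not quoted figures:

```python
raised_usd_2014 = 2_000_000   # crowdsale proceeds cited above
btc_price_2014 = 500          # approximate April 2014 price cited above

btc_obtained = raised_usd_2014 / btc_price_2014   # roughly 4,000 BTC
for spot_2018 in (7_500, 10_000):                 # assumed mid-2018 prices
    value_m = btc_obtained * spot_2018 / 1_000_000
    print(f"at ${spot_2018:,}/BTC the haul is worth ~${value_m:,.0f}M")
# ~$30M and ~$40M respectively, assuming most of the coins were held.
```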

As well as the token sale they also did an equity raise in 2016, via the fintech investment platform bnktothefuture, pulling in around $1.7M from that — in a mixture of cash and “some Bitcoin”.

“It’s gone both ways,” says Lambert, discussing the team’s luck with Bitcoin. “The crowdsale we were on the losing end of Bitcoin price decreasing. We did a raise from bnktothefuture in autumn of 2016… and fortunately we held on to quite a lot of the Bitcoin. So we rode the Bitcoin price up. So I feel like the universe paid us back a little bit for that. So it feels like we’re level now.”

“Fundraising is exceedingly time consuming right through the organization, and it does take a lot of time away from what you want to be focusing on, and so to be in a position where you’re not desperate for funding is a really nice one to be in,” he adds. “It allows us to focus on the technology and releasing the network.”

The team’s headcount is now up to around 33, with founding members based at the HQ in Ayr, Scotland, and other engineers working remotely or distributed (including in a new dev office they opened in India at the start of this year), even though MaidSafe is still not taking in any revenue.

This April they also made the decision to switch from a dual licensing approach for their software — previously offering both an open source license and a commercial license (which let people close source their code for a fee) — to going only open source, to encourage more developer engagement and contributions to the project, as Lambert tells it.

“We always see the SafeNetwork a bit like a public utility,” he says. “In terms of once we’ve got this thing up and launched we don’t want to control it or own it because if we do nobody will want to use it — it needs to be seen as everyone contributing. So we felt it’s a much more encouraging sign for developers who want to contribute if they see everything is fully open sourced and cannot be closed source.”

MaidSafe’s story so far is reason enough to take note of their whitepaper.

But the consensus issue the paper addresses is also a key challenge for decentralized networks so any proposed solution is potentially a big deal — if indeed it pans out as promised.

 

Protocol for Asynchronous, Reliable, Secure and Efficient Consensus

MaidSafe reckons they’ve come up with a way of achieving consensus on decentralized networks that’s scalable, robust and efficient. Hence the name of the protocol — ‘Parsec’ — being short for: ‘Protocol for Asynchronous, Reliable, Secure and Efficient Consensus’.

They will be open sourcing the protocol under a GPL v3 license — with a rough timeframe of “months” for that release, according to Lambert.

He says they’ve been working on Parsec for the last 18 months to two years — but also drawing on earlier research the team carried out into areas such as conflict-free replicated data types, synchronous and asynchronous consensus, and topics such as threshold signatures and common coin.

More specifically, the research underpinning Parsec is based on the following five papers:

1. Baird L., The Swirlds Hashgraph Consensus Algorithm: Fair, Fast, Byzantine Fault Tolerance, Swirlds Tech Report SWIRLDS-TR-2016-01 (2016)
2. Mostefaoui A., Hamouna M., Raynal M., Signature-Free Asynchronous Byzantine Consensus with t < n/3 and O(n²) Messages, ACM PODC (2014)
3. Micali S., Byzantine Agreement, Made Trivial (2018)
4. Miller A., Xia Y., Croman K., Shi E., Song D., The Honey Badger of BFT Protocols, CCS (2016)
5. Team Rocket, Snowflake to Avalanche: A Novel Metastable Consensus Protocol Family for Cryptocurrencies (2018)

One tweet responding to the protocol’s unveiling just over a week ago wonders whether it’s too good to be true. Time will tell — but the potential is certainly enticing.

Bitcoin’s use of a drastically energy-inefficient ‘proof of work’ method to achieve consensus and write each transaction to its blockchain very clearly doesn’t scale. It’s slow, cumbersome and wasteful. And how to get blockchain-based networks to support the billions of transactions per second that might be needed to sustain the various envisaged applications remains an essential work in progress — with projects investigating various ideas and approaches to try to overcome the limitation.

MaidSafe’s network is not blockchain-based. It’s engineered to function with asynchronous voting of nodes, rather than synchronous voting, which should avoid the bottleneck problems associated with blockchain. But it’s still decentralized. So it needs a consensus mechanism to enable operations and transactions to be carried out autonomously and robustly. That’s where Parsec is intended to slot in.

The protocol does not use proof of work. And it is able, so the whitepaper claims, to achieve consensus even if a third of the network is composed of malicious nodes — i.e. nodes that are attempting to disrupt network operations or otherwise attack the network.

Another claimed advantage is that decisions made via the protocol are both mathematically guaranteed and irreversible.

“What Parsec does is it can reach consensus even with malicious nodes. And up to a third of the nodes being malicious is what the maths proofs suggest,” says Lambert. “This ability to provide mathematical guarantees that all parts of the network will come to the same agreement at a point in time, even with some fault in the network or bad actors — that’s what Byzantine Fault Tolerance is.”
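That “up to a third” figure is the classic Byzantine fault tolerance bound, usually written as n ≥ 3f + 1 for n voters and f faulty ones. A one-line check, purely illustrative:

```python
def tolerates(total_nodes, faulty_nodes):
    """Classic asynchronous BFT bound: agreement is only guaranteed while
    strictly fewer than a third of the voting nodes are faulty or malicious,
    i.e. total_nodes >= 3 * faulty_nodes + 1."""
    return total_nodes >= 3 * faulty_nodes + 1

print(tolerates(100, 33))  # True:  33 malicious voters out of 100 is survivable
print(tolerates(100, 34))  # False: at 34 the guarantee no longer holds
```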

In theory a blockchain using proof of work could be hacked if any one entity controlled 51% of the network’s hashing power (although in reality so much energy would be required that such an attack is pretty much impractical).

So on the surface MaidSafe’s decentralized network — which ‘only’ needs 33% of its nodes to be compromised for its consensus decisions to be attacked — sounds rather less robust. But Lambert says it’s more nuanced than the numbers suggest. And in fact the malicious third would also need to be nodes that have the authority to vote. “So it is a third but it’s a third of well reputed nodes,” as he puts it.

So there’s an element of proof of stake involved too, bound up with additional planned characteristics of the Safe Network — related to dynamic membership and sharding (Lambert says MaidSafe has additional whitepapers on both those elements coming soon).

“Those two papers, particularly the one around dynamic membership, will explain why having a third of malicious nodes is actually harder than just having 33% of malicious nodes. Because the nodes that can vote have to have a reputation as well. So it’s not just purely you can flood the Safe Network with lots and lots of malicious nodes and override it only using a third of the nodes. What we’re saying is the nodes that can vote and actually have a say must have a good reputation in the network,” he says.

“The other thing is proof of stake… Everyone is desperate to move away from proof of work because of its environmental impact. So proof of stake — I liken it to the Scottish landowners, where people with a lot of power have more say. In the cryptocurrency field, proof of stake might be if you have, let’s say, 10 coins and I have one coin your vote might be worth 10x as much authority as what my one coin would be. So any of these mechanisms that they come up with it has that weighting to it… So the people with the most vested interests in the network are also given the more votes.”
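A toy illustration of the stake weighting Lambert describes (names and balances invented), where a 10-coin holder’s vote simply carries ten times the weight of a 1-coin holder’s:

```python
def stake_weighted_outcome(votes, stakes):
    """Proof-of-stake style tally: each node's vote counts in proportion to
    the coins it holds."""
    weight_for = {}
    for node, choice in votes.items():
        weight_for[choice] = weight_for.get(choice, 0) + stakes[node]
    return max(weight_for, key=weight_for.get)

votes  = {"whale": "approve", "minnow1": "reject", "minnow2": "reject"}
stakes = {"whale": 10, "minnow1": 1, "minnow2": 1}
print(stake_weighted_outcome(votes, stakes))  # "approve": 10 coins outweigh 2
```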

Sharding refers to closed groups that allow for consensus votes to be reached by a subset of nodes on a decentralized network. By splitting the network into small sections for consensus voting purposes the idea is you avoid the inefficiencies of having to poll all the nodes on the network — yet can still retain robustness, at least so long as subgroups are carefully structured and secured.

“If you do that correctly you can make it more secure and you can make things much more efficient and faster,” says Lambert. “Because rather than polling, let’s say 6,000 nodes, you might be polling eight nodes. So you can get that information back quickly.

“Obviously you need to be careful about how you do that because with much less nodes you can potentially game the network so you need to be careful how you secure those smaller closed groups or shards. So that will be quite a big thing because pretty much every crypto project is looking at sharding to make, certainly, blockchains more efficient. And so the fact that we’ll have something coming out in that, after we have the dynamic membership stuff coming out, is going to be quite exciting to see the reaction to that as well.”
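A minimal sketch of that shard-selection idea, assuming a deterministic hash-based mapping from an item to its small closed group; the hard parts Lambert flags, securing membership and reputation, are deliberately left out:

```python
import hashlib

def closed_group(item_id, nodes, group_size=8):
    """Pick the small 'shard' of nodes responsible for voting on this item,
    deterministically from the item's address, instead of polling every node.
    (Membership rules and reputation checks are omitted here.)"""
    ranked = sorted(nodes,
                    key=lambda n: hashlib.sha256((item_id + n).encode()).hexdigest())
    return ranked[:group_size]

all_nodes = [f"node-{i}" for i in range(6000)]
print(closed_group("chunk-abc123", all_nodes))  # only 8 nodes to poll, not 6,000
```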

Voting authority on the Safe Network might be based on a node’s longevity, quality and historical activity — so a sort of ‘reputation’ score (or ledger) that can yield voting rights over time.

“If you’re like that then you will have a vote in these closed groups. And so a third of those votes — and that then becomes quite hard to game because somebody who’s then trying to be malicious would need to have their nodes act as good corporate citizens for a time period. And then all of a sudden become malicious, by which time they’ve probably got a vested stake in the network. So it wouldn’t be possible for someone to just come and flood the network with new nodes and then be malicious because it would not impact upon the network,” Lambert suggests.
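As a sketch of that reputation gate, eligibility to vote might simply filter out young or badly behaved nodes; the thresholds and scores below are invented for illustration only:

```python
def eligible_voters(nodes):
    """Only long-lived, well-behaved nodes earn a vote, per the reputation
    idea Lambert describes, so freshly joined nodes can't simply flood the
    group. `nodes` maps node id -> (age_in_days, behaviour_score in 0..1)."""
    return [n for n, (age, score) in nodes.items() if age >= 30 and score >= 0.9]

nodes = {
    "veteran-1": (400, 0.98),
    "veteran-2": (200, 0.95),
    "newcomer":  (1,   1.00),   # brand new, however well it has behaved so far
    "flaky":     (300, 0.40),   # long-lived but unreliable
}
print(eligible_voters(nodes))  # ['veteran-1', 'veteran-2']
```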

The computing power that would be required to attack the Safe Network once it’s public and at scale would also be “really, really significant”, he adds. “Once it gets to scale it would be really hard to co-ordinate anything against it because you’re always having to be several hundred percent bigger than the network and then have a co-ordinated attack on it itself. And all of that work might get you to impact the decision within one closed group. So it’s not even network wide… And that decision could be on who accesses one piece of encrypted shard of data for example… Even the thing you might be able to steal is only an encrypted shard of something — it’s not even the whole thing.”

Other distributed ledger projects are similarly working on asynchronous Byzantine fault tolerant (ABFT) consensus models, including those using directed acyclic graphs (DAGs) — another nascent decentralization technology that’s been suggested as an alternative to blockchain.

And indeed ABFT techniques predate Bitcoin, though MaidSafe says these kinds of models have only more recently become viable thanks to research and the relative maturing of decentralized computing and data types, itself a consequence of increased interest and investment in the space.

However in the case of Hashgraph — the DAG project which has probably attracted the most attention so far — it’s closed source, not open. So that’s one major difference with MaidSafe’s approach. 

Another difference that Lambert points to is that Parsec has been built to work in a dynamic, permissionless network environment (essential for the intended use-case, as the Safe Network is intended as a public network). Whereas he claims Hashgraph has only demonstrated its algorithms working on a permissioned (and therefore private) network “where all the nodes are known”.

He also suggests there’s a question mark over whether Hashgraph’s algorithm can achieve consensus when there are malicious nodes operating on the network. Which — if true — would limit what it can be used for.

“The Hashgraph algorithm is only proven to reach agreement if there’s no adversaries within the network,” Lambert claims. “So if everything’s running well then happy days, but if there’s any maliciousness or any failure within that network then — certainly on the basis of what’s been published — it would suggest that that algorithm was not going to hold up to that.”

“I think being able to do all of these things asynchronously with all of the mathematical guarantees is very difficult,” he continues, returning to the core consensus challenge. “So at the moment we see that we have come out with something that is unique, that covers a lot of these bases, and is a very good use for our use-case. And I think will be useful for others — so I think we like to think that we’ve made a paradigm shift or a vast improvement over the state of the art.”

 

Paradigm shift vs marginal innovation

Despite the team’s conviction that, with Parsec, they’ve come up with something very notable, early feedback includes some very vocal Twitter doubters.

For example there’s a lengthy back-and-forth between several MaidSafe engineers and Ethereum researcher Vlad Zamfir — who dubs the Parsec protocol “overhyped” and a “marginal innovation if that”… so, er, ouch.

Lambert is, if not entirely sanguine, then solidly phlegmatic in the face of a bit of initial Twitter blowback — saying he reckons it will take more time for more detailed responses to come, i.e. allowing for people to properly digest the whitepaper.

“In the world of async BFT algorithms, any advance is huge,” MaidSafe CEO David Irvine also tells us when we ask for a response to Zamfir’s critique. “How huge is subjective, but any advance has to be great for the world. We hope others will advance Parsec like we have built on others (as we clearly state and thank them for their work).  So even if it was a marginal development (which it certainly is not) then I would take that.”

“All in all, though, nothing was said that took away from the fact Parsec moves the industry forward,” he adds. “I felt the comments were a bit juvenile at times and a bit defensive (probably due to us not agreeing with POS in our Medium post) but in terms of the only part commented on (the coin flip) we as a team feel that part could be much more concrete in terms of defining exactly how small such random (finite) delays could be. We know they do not stop the network and a delaying node would be killed, but for completeness, it would be nice to be that detailed.”

A developer source of our own in the crypto/blockchain space — who’s not connected to the MaidSafe or Ethereum projects — also points out that Parsec “getting objective review will take some time given that so many potential reviewers have vested interest in their own project/coin”.

It’s certainly fair to say the space excels at public spats and disagreements. Researchers pouring effort into one project can be less than kind to rivals’ efforts. (And, well, given all the crypto Lambos at stake it’s not hard to see why there can be no love lost — and, ironically, zero trust — between competing champions of trustless tech.)

Another fundamental truth of these projects is they’re all busily experimenting right now, with lots of ideas in play to try and fix core issues like scalability, efficiency and robustness — often having different ideas over implementation even if rival projects are circling and/or converging on similar approaches and techniques.

“Certainly other projects are looking at sharding,” says Lambert. “So I know that Ethereum are looking at sharding. And I think Bitcoin are looking at that as well, but I think everyone probably has quite different ideas about how to implement it. And of course we’re not using a blockchain which makes that another different use-case where Ethereum and Bitcoin obviously are. But everyone has — as with anything — these different approaches and different ideas.”

“Every network will have its own different ways of doing [consensus],” he adds when asked whether he believes Parsec could be adopted by other projects wrestling with the consensus challenge. “So it’s not like some could lift [Parsec] out and just put it in. Ethereum is blockchain-based — I think they’re looking at something around proof of stake, but maybe they could take some ideas or concepts from the work that we’re open sourcing for their specific case.

“If you get other blockchain-less networks like IOTA, Byteball, I think POA is another one as well. These other projects it might be easier for them to implement something like Parsec with them because they’re not using blockchain. So maybe less of that adaption required.”

Whether other projects will deem Parsec worthy of their attention remains to be seen at this point with so much still to play for. Some may prefer to expend effort trying to rubbish a rival approach, whose open source tech could, if it stands up to scrutiny and operational performance, reduce the commercial value of proprietary and patented mechanisms also intended to grease the wheels of decentralized networks — for a fee.

And of course MaidSafe’s developed-in-stealth consensus protocol may also turn out to be a relatively minor development. But finding a non-vested expert to give an impartial assessment of complex network routing algorithms conjoined to such a self-interested and, frankly, anarchical industry is another characteristic challenge of the space.

Irvine’s view is that DAG based projects which are using a centralized component will have to move on or adopt what he dubs “state of art” asynchronous consensus algorithms — as MaidSafe believes Parsec is — aka, algorithms which are “more widely accepted and proven”.

“So these projects should contribute to the research, but more importantly, they will have to adopt better algorithms than they use,” he suggests. “So they can play an important part, upgrades! How to upgrade a running DAG based network? How to hard fork a graph? etc. We know how to hard fork blockchains, but upgrading DAG based networks may not be so simple when they are used as ledgers.

“Projects like Hashgraph, Algorand etc will probably use an ABFT algorithm like this as their whole network with a little work for a currency; IOTA, NANO, Byteball etc should. That is entirely possible with advances like Parsec. However adding dynamic membership, sharding, a data layer then a currency is a much larger proposition, which is why Parsec has been in stealth mode while it is being developed.

“We hope that by being open about the algorithm, and making the code open source when complete, we will help all the other projects working on similar problems.”

Of course MaidSafe’s team might be misguided in terms of the breakthrough they think they’ve made with Parsec. But it’s pretty hard to stand up the idea they’re being intentionally misleading.

Because, well, what would be the point of that? While the exact depth of MaidSafe’s funding reserves isn’t clear, Lambert doesn’t sound like a startup guy with money worries. And the team’s staying power cannot be in doubt — over a decade into the R&D needed to underpin their alt network.

It’s true that being around for so long does have some downsides, though. Especially, perhaps, given how hyped the decentralized space has now become. “Because we’ve been working on it for so long, and it’s been such a big project, you can see some negative feedback about that,” as Lambert admits.

And with such intense attention now on the space, injecting energy which in turn accelerates ideas and activity, there’s perhaps extra pressure on a veteran player like MaidSafe to be seen making a meaningful contribution — ergo, it might be tempting for the team to believe the consensus protocol they’ve engineered really is a big deal.

To stand up and be counted amid all the noise, as it were. And to draw attention to their own project — which needs lots of external developers to buy into the vision if it’s to succeed, yet, here in 2018, it’s just one decentralization project among so many. 

 

The Safe Network roadmap

Consensus aside, MaidSafe’s biggest challenge is still turning the sizable amount of funding and resources the team’s ideas have attracted to date into a bona fide alternative network that anyone really can use. And there’s a very long road to travel still on that front, clearly.

The Safe Network is in its alpha 2 testing incarnation (which has been up and running since September last year) — consisting of around a hundred nodes that MaidSafe is maintaining itself.

The core decentralization proposition of anyone being able to supply storage resource to the network via lending their own spare capacity is not yet live — and won’t come fully until alpha 4.

“People are starting to create different apps against that network. So we’ve seen Jams — a decentralized music player… There are a couple of storage style apps… There is encrypted email running as well, and also that is running on Android,” says Lambert. “And we have a forked version of the Beaker browser — that’s the browser that we use right now. So if you can create websites on the Safe Network, which has its own protocol, and if you want to go and view those sites you need a Safe browser to do that, so we’ve also been working on our own browser from scratch that we’ll be releasing later this year… So there’s a number of apps that are running against that alpha 2 network.

“What alpha 3 will bring is it will run in parallel with alpha 2 but it will effectively be a decentralized routing network. What that means is it will be one for more technical people to run, and it will enable data to be passed around a network where anyone can contribute their resources to it but it will not facilitate data storage. So it’ll be a command line app, which is probably why it’ll suit technical people more because there’ll be no user interface for it, and they will contribute their resources to enable messages to be passed around the network. So secure messaging would be a use-case for that.

“And then alpha 4 is effectively bringing together alpha 2 and alpha 3. So it adds a storage layer on top of the alpha 3 network — and at that point it gives you the fully decentralized network where users are contributing their resources from home and they will be able to store data, send messages and things of that nature. Potentially during alpha 4, or a later alpha, we’ll introduce test SafeCoin. Which is the final piece of the initial puzzle to provide incentives for users to provide resources and for developers to make apps. So that’s probably what the immediate roadmap looks like.”

On the timeline front, Lambert won’t be coaxed into attaching any deadlines to all these planned alphas. The team long ago learnt not to try to predict the pace of progress, he says with a laugh, though he doesn’t question that progress is being made.

“These big infrastructure projects are typically only government funded because the payback is too slow for venture capitalists,” he adds. “So in the past you had things like Arpanet, the precursor to the Internet — that was obviously a US government funded project — and so we’ve taken on a project which has, not grown arms and legs, but certainly there’s more to it than what was initially thought about.

“So we are almost privately funding this infrastructure. Which is quite a big scope, and I will say why it’s taking a bit of time. But we definitely do seem to be making lots of progress.”


Source: Tech Crunch

Scaling startups are setting up secondary hubs in these cities

America’s mayors have spent the past nine months tripping over each other to curry favor with Amazon.com in its high-profile search for a second headquarters.

More quietly, however, a similar story has been playing out in startup-land. Many of the most valuable venture-backed companies are venturing outside their high-cost headquarters and setting up secondary hubs in smaller cities.

Where are they going? Nashville is pretty popular. So is Phoenix. Portland and Raleigh are also picking up jobs. A number of companies also have plenty of remote openings, seeking candidates with coveted skills who don’t want to relocate.

Those are some of the findings from a Crunchbase News analysis of the geographic hiring practices of U.S. unicorns. Since most of these companies are based in high-cost locations, like the San Francisco Bay Area, Boston and New York, we were looking to see whether there is a pattern of setting up offices in smaller, cheaper cities. (For more on survey technique, see the Methodology section below.)

Here is a look at some of the hotspots.

Nashville

One surprise finding was the prominence of Nashville among secondary locations for startup offices.

We found at least four unicorns scaling up Nashville offices, plus another three with growing operations in or around other Tennessee cities. Here are some of the Tennessee-loving startups:

When we referred to Nashville’s popularity with unicorns as surprising, that was largely because the city isn’t known as a major hub for tech startups or venture funding. That said, it has a lot of attributes that make for a practical and desirable location for a secondary office.

Nashville’s attractions include high quality of life ratings, a growing population and economy, mild climate and lots of live music. Home prices and overall cost of living are also still far below Silicon Valley and New York, even though the Nashville real estate market has been on a tear for the past several years. An added perk for workers: Tennessee has no income tax on wages.

Phoenix

Phoenix is another popular pick for startup offices, particularly West Coast companies seeking a lower-cost hub for customer service and other operations that require a large staff.

In the chart below, we look at five unicorns with significant staffing in the desert city:


Affordability, ease of expansion and a large employable population look like big factors in Phoenix’s appeal. Homes and overall cost of living are a lot cheaper than the big coastal cities. And there’s plenty of room to sprawl.

One article about a new office opening also cited low job turnover rates as an attractive Phoenix-area attribute, which is an interesting notion. Startup hubs like San Francisco and New York see a lot of job-hopping, particularly for people with in-demand skill sets. Scaling companies may be looking for people who measure their job tenure in years rather than months.

Those aren’t the only places

Nashville and Phoenix aren’t the only hotspots for unicorns setting up secondary offices. Many other cities are also seeing some scaling startup activity.

Let’s start with North Carolina. The Research Triangle region is known for having a lot of STEM grads, so it makes sense that deep tech companies headquartered elsewhere might still want a local base. One such company is cybersecurity unicorn Tanium, which has a lot of technical job openings in the area. Another is Docker, developer of software containerization technology, which has open positions in Raleigh.

The Orlando metro area stood out mostly due to Robinhood, the zero-fee stock and crypto trading platform that recently hit the $5 billion valuation mark. The Silicon Valley-based company has a significant number of open positions in Lake Mary, an Orlando suburb, including HR and compliance jobs.

Portland, meanwhile, just drew another crypto-loving unicorn, digital currency transaction platform Coinbase. The San Francisco-based company recently opened an office in the Oregon city and is currently in hiring mode.

Anywhere with a screen

But you don’t have to be anywhere in particular to score jobs at many fast-growing startups. A lot of unicorns have a high number of remote positions, including specialized technical roles that may be hard to fill locally.

GitHub, which makes tools developers can use to collaborate remotely on projects, does a particularly good job of practicing what it codes. A notable number of engineering jobs open at the San Francisco-based company are available to remote workers, and other departments also have some openings for telecommuters.

Others with a smattering of remote openings include Silicon Valley-based cybersecurity provider CrowdStrike, enterprise software developer Apttus and also Docker.

Not everyone is doing it

Of course, not every unicorn is opening large secondary offices. Many prefer to keep staff closer to home base, seeking to lure employees with chic workplaces and lavish perks. Other companies find that when they do expand, it makes strategic sense to go to another high-cost location.

Still, the secondary hub phenomenon may offer a partial antidote to complaints that a few regions are hogging too much of the venture capital pie. While unicorns still overwhelmingly headquarter in a handful of cities, at least they’re spreading their wings and providing more jobs in other places, too.

Methodology

For this analysis, we were looking at U.S. unicorns with secondary offices in other North American cities. We began with a list of 125 U.S.-based companies and looked at open positions advertised on their websites, focusing on job location.

We excluded job openings tied to serving a local market. For instance, a San Francisco company seeking a sales rep in Chicago to sell to Chicago customers doesn’t count. Instead, we looked for openings for team members handling core operations, including engineering, finance and company-wide customer support. We also excluded secondary offices outside of North America.

Additionally, we were looking principally for companies expanding into lower-cost areas. In many cases, we did see companies strategically adding staff in other high-cost locations, such as New York and Silicon Valley, but those expansions aren’t the focus of this analysis.
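To make those screening rules concrete, here is a minimal, purely illustrative Python sketch of the kind of filter described above. The data structure, field names and sample entries are hypothetical; the actual review was based on open positions advertised on each company’s website, not a structured dataset, so this simply restates the inclusion criteria rather than reproducing the Crunchbase News pipeline.

from dataclasses import dataclass

# Hypothetical labels used only for illustration.
HQ_METROS = {"San Francisco Bay Area", "New York", "Boston"}
EXCLUDED_METROS = {"Austin"}  # treated as a mature startup hub in its own right
CORE_FUNCTIONS = {"engineering", "finance", "customer support"}
NORTH_AMERICA = {"United States", "Canada", "Mexico"}

@dataclass
class JobOpening:
    company: str
    metro: str                  # metro area where the opening is based
    country: str
    function: str               # e.g. "engineering", "sales"
    serves_local_market: bool   # e.g. a Chicago rep selling to Chicago customers

def counts_toward_secondary_hub(job: JobOpening) -> bool:
    """Return True if an opening counts toward a lower-cost secondary hub."""
    if job.country not in NORTH_AMERICA:
        return False  # secondary offices outside North America were excluded
    if job.serves_local_market:
        return False  # local-market sales roles don't count
    if job.metro in HQ_METROS or job.metro in EXCLUDED_METROS:
        return False  # skip other high-cost hubs, plus Austin
    return job.function in CORE_FUNCTIONS  # keep core-operations roles only

# Toy usage with made-up listings:
openings = [
    JobOpening("ExampleCo", "Nashville", "United States", "customer support", False),
    JobOpening("ExampleCo", "Chicago", "United States", "sales", True),
]
print([(o.company, o.metro) for o in openings if counts_toward_secondary_hub(o)])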

A final note pertains to Austin, Texas. We did see several unicorns based elsewhere with job openings in Austin. However, we did not include the city in the sections above because Austin, although a lower-cost location than Silicon Valley, may also be characterized as a large, mature technology and startup hub in its own right.


Source: Tech Crunch

Gillmor Gang: Hollywood Signs

The Gillmor Gang — Frank Radice, Keith Teare, Esteban Kolsky, Michael Markman, and Steve Gillmor. Recorded live Friday, June 1, 2018. Why Mary Meeker’s report is real news, the streaming economy flexes its muscles, sit-down comedy.

G3: Ethical Healing — Mary Hodder, Francine Hardaway, Maria Ogneva, and Tina Chase Gillmor. Recorded live Thursday, May 31, 2018.

@stevegillmor, @ekolsky, @fradice, @mickeleh, @kteare

Produced and directed by Tina Chase Gillmor @tinagillmor

Liner Notes

Live chat stream

The Gillmor Gang on Facebook

G3: Ethical Healing

G3 chat stream

G3 on Facebook


Source: Tech Crunch