HBO’s new trailer for Westworld season 2 showcases robot-induced chaos

As HBO prepares for the end of Game of Thrones, it’s apparent that the network is putting its weight behind Westworld as its next dominant fantasy epic. A new trailer for the show’s second season dropped today, and it’s clear that the robot uprising is going to take the brutal, violent spirit of the season-one finale and pour it over the existential questions that are the backbone of the show.

Things are going to get even darker very quickly, it looks like.

The new trailer captures all of the action and struggles of Dolores, Teddy, Bernard, Maeve and the Man in Black with an orchestral cover of Nirvana’s “Heart-Shaped Box” playing in the background. While the end of the last season suggested that a new Shogunworld destination would play a major role in season two, our peeks at that destination have been pretty limited across the first pair of trailers.

We’re still seeing the show’s central characters traverse the desolation of the old West and the high-tech opulence of the world behind it.

Westworld’s second season begins April 22 on HBO.


Source: Tech Crunch

The CW goes live on Hulu with Live TV

Hulu has added the live, linear version of the CW to its Hulu with Live TV platform.

Hulu has had a deal with the CW to offer streaming on-demand content from the network, but this is the first time that the CW will be available live on Hulu.

The company first launched Hulu with Live TV in the summer of 2017, offering more than 50 channels for $39.99/month, complete with access to Hulu’s on-demand content library and 50 hours of DVR storage.

The service launched into competition from YouTube, which had debuted a similar offering, YouTube TV, in April 2017.

According to a report from January, Hulu with Live TV has around 450,000 subscribers, while YouTube TV has 300,000 subscribers.

The live CW feed isn’t available everywhere yet, but it will be on Hulu with Live TV in the following markets: Philadelphia, San Francisco, Atlanta, Tampa, Detroit, Seattle, Sacramento and Pittsburgh. The company says it’s rolling out live CW to more markets soon.


Source: Tech Crunch

Security flaw in Grindr exposed locations to third-party service

Users of Grindr, the popular dating app for gay men, may have been broadcasting their location even after disabling that particular feature. Two security flaws allowed a user’s location data to be discovered against their will, though exploiting them takes a bit of doing.

The first of the flaws, which were discovered by Trever Faden and first reported by NBC News, allowed users to see a variety of data that isn’t normally available: who had blocked them, deleted photos, the locations of people who had chosen not to share that data and more.

The catch is that if you wanted to find out about this, you had to hand over your username and password to Faden’s purpose-built website, C*ckblocked (asterisk original), which would then scour your Grindr account for this hidden metadata.

Of course, it’s a bad idea to surrender your credentials to any third party whatsoever, but regardless, this particular third party was able to surface data that a user shouldn’t have access to in the first place.

The second flaw involved location data being sent unencrypted, meaning a traffic snooper might be able to detect it.

It may not sound too serious to have someone watching a wi-fi network know a person’s location — they’re there on the network, obviously, which narrows it down considerably. But users of a gay dating app are members of a minority often targeted by bigots and governments, and having their phone essentially send out a public signal saying “I’m here and I’m gay” without their knowledge is a serious problem.

I’ve asked Grindr for comment and confirmation; the company told NBC News that it had changed how data was handled in order to prevent the C*ckblocked exploit (the site has since been shut down), but did not address the second issue.


Source: Tech Crunch

Cruise’s CTO, a former Uber manager, is out

General Motors’ self-driving car unit, Cruise Automation, is parting ways with CTO A.G. Gangadhar, Bloomberg first reported. The move follows public complaints about his alleged role in fostering an unsafe work environment for women.

“After serious consideration, Cruise and AG have elected to part ways,” a Cruise spokesperson told TechCrunch in a statement. “We wish him the best in all future endeavors.”

Before Cruise, Gangadhar had most recently worked at Uber, where he led the company’s storage, machine learning and infrastructure groups. Gangadhar, who left Uber in July, was reportedly a director whom former Uber engineer Susan Fowler referenced in her blog post about mismanagement, sexual harassment and other issues at Uber. His departure, however, was reportedly unrelated to Fowler’s claims.


Source: Tech Crunch

Hide 3D paintings anywhere with AR app Artopia

Public places may soon be filled with secret pieces of art unlocked by looking through the lens of AR, if Artopia’s cheerily creative app catches on. It essentially lets you geocache your 3D scribbles so anyone else can find, appreciate, and share them.

Artopia, currently in beta for Android and iOS, is a straightforward combination of AR painting and real world discovery. You make your art by selecting brushes, colors, and so on and moving your phone as you would the brush. Grab objects and move them around, attach them, etc.

When you’re done, you save it, and its precise location is recorded by Artopia’s service. Now anyone passing by will be able to see it (a map shows nearby creations) and who made it, give it a like, and maybe draw some complementary work nearby.

It’s simple (in concept, not in execution), but also a thoroughly pleasant and natural combo. Of course, there will also be a report button in case someone draws a fence of phalluses around your house (for example), and the usual caveats of crowd-sourced content and moderation apply.

Artopia was created by Kuwaiti developer Omar Khalil, so the density of art might be a bit higher around the American University of Kuwait. But if this sounds like something you’re into, apply to get into the beta and start filling the parks and streets around your neighborhood with color and shape.


Source: Tech Crunch

Scotty Labs raises $6 million for remote-controlled autonomous car platform

Scotty Labs, a tele-operations company that is working on technology to enable people to remotely control self-driving cars, has raised a $6 million seed round from Gradient Ventures with participation from Horizon Ventures and Hemi Ventures. Gradient Ventures is an early-stage venture fund housed within Google.

“Usman and I founded Scotty on the belief that human intelligence is critical to solving the autonomous driving problem,” Scotty co-founder and CEO Tobenna Arodiogbu wrote in a blog post. “The company exists to answer the fundamental questions — what role do humans play in the future of robotics and automation, and how do we leverage human and machine intelligence to build a better future?”

That’s what led to the creation of the company’s first product, a tele-operations platform that lets humans virtually control cars. The idea, Arodiogbu wrote, is that this type of human intervention will help “solve some of the hardest edge cases of driving, while allowing AV companies and their teams to focus on what they do best — building and improving their autonomous driving technology.”

If something goes wrong, a human could theoretically intervene from the safety of their home, rather than from the car itself. Scotty Labs’ first partner is Voyage, an Udacity spin-out that’s aiming to build a fully self-driving taxi platform. In October, Voyage began testing its self-driving vehicles in retirement communities.

“We decided to work with Voyage as a partner because we are excited by and fundamentally believe in the work they are doing,” Arodiogbu wrote. “We believe it is critical to provide autonomy to the communities that need it the most. We also both share a belief that human intelligence will be needed to achieve level 4 autonomy, and we share a deep and uncompromising focus on safety above speed in the deployment of fully autonomous systems. We will continue to support Voyage in the coming months and years as they achieve their goal of building a level 4 autonomous fleet.”


Source: Tech Crunch

Microsoft can ban you for using offensive language

A report by CSO Online presented the possibility that Microsoft would be able to ban “offensive language” from Skype, Xbox and, inexplicably, Office. The post, which cites Microsoft’s new terms of use, said that the company would not allow users to “publicly display or use the Services to share inappropriate content or material (involving, for example, nudity, bestiality, pornography, offensive language, graphic violence, or criminal activity)” and that you could lose your Xbox Live membership if you curse out a kid in Overwatch.

“We are committed to providing our customers with safe and secure experiences while using our services. The recent changes to the Microsoft Service Agreement’s Code of Conduct provide transparency on how we respond to customer reports of inappropriate public content,” said a Microsoft spokesperson. The company notes that “Microsoft Agents” do not watch Skype calls and that they can only respond to complaints with clear evidence of abuse. The changes, which go into effect May 1, allow Microsoft to ban you from its services if you’re found sharing “inappropriate content” or using “offensive language.”

These new rules give Microsoft more power over abusive users, and it seems the company is cracking down on bad behavior across its platforms. This is good news for victims of abuse in private communications channels on Microsoft products, and it may give trolls pause before they yell something about your mother on Xbox. We can only dare to dream.


Source: Tech Crunch

Data is not the new oil


It’s easier than ever to build software, which makes it harder than ever to build a defensible software business. So it’s no wonder investors and entrepreneurs are optimistic about the potential of data to form a new competitive advantage. Some have even hailed data as “the new oil.” We invest exclusively in startups leveraging data and AI to solve business problems, so we certainly see the appeal — but the oil analogy is flawed.

In all the enthusiasm for big data, it’s easy to lose sight of the fact that all data is not created equal. Startups and large corporations alike boast about the volume of data they’ve amassed, ranging from terabytes of data to quantities surpassing all of the information contained in the Library of Congress. Quantity alone does not make a “data moat.”

Firstly, raw data is not nearly as valuable as data employed to solve a problem. We see this in the public markets: companies that serve as aggregators and merchants of data, such as Nielsen and Acxiom, sustain much lower valuation multiples than companies that build products powered by data in combination with algorithms and ML, such as Netflix or Facebook. Today’s AI startups recognize this difference and apply machine learning models to extract value from the data they collect.

Even when data is put to work powering ML-based solutions, the size of the data set is only one part of the story. The value of a data set, the strength of a data moat, comes from context. Some applications require models to be trained to a high degree of accuracy before they can provide any value to a customer, while others need little or no data at all. Some data sets are truly proprietary, others are readily duplicated. Some data decays in value over time, while other data sets are evergreen. The application determines the value of the data.

Defining the “data appetite”

Machine learning applications can require widely different amounts of data to provide valuable features to the end user.

MAP threshold

In the cloud era, the idea of the minimum viable product (or MVP) has taken hold: the collection of software features with just enough value to attract initial customers. In the intelligence era, we see an analog emerging for data and models: the minimum level of accurate intelligence required to justify adoption. We call this the minimum algorithmic performance (MAP).

Most applications don’t require 100 percent accuracy to create value. For example, a productivity tool for doctors might initially streamline data entry into electronic health record systems, but over time could automate that entry by learning from what doctors put into the system. In this case, the MAP is zero, because the application has value from day one based on software features alone; intelligence can be added later. However, solutions where AI is central to the product (for example, a tool to identify strokes from CT scans) would likely need to equal the accuracy of status quo (human-based) solutions. In this case, the MAP is matching the performance of human radiologists, and an immense volume of data might be needed before a commercial launch is viable.

Performance threshold

Not every problem can be solved with near 100 percent accuracy. Some problems are too complex to fully model given the current state of the art; in those cases, volume of data won’t be a silver bullet. Adding data might incrementally improve the model’s performance, but it will quickly hit diminishing marginal returns.

At the other extreme, some problems can be solved with near 100 percent accuracy with a very small training set, because the problem being modeled is relatively simple, with few dimensions to track and few variations in outcome.

In short, the amount of data you need to effectively solve a problem varies widely. We call the amount of training data needed to reach viable levels of accuracy the performance threshold.
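
To make the performance threshold concrete, here is a minimal sketch in Python (using scikit-learn) of how a team might estimate it for their own problem: train the same model on progressively larger slices of the training data and note where test accuracy clears the bar. The synthetic dataset, the logistic-regression model and the 90 percent target are illustrative assumptions, not anything from the piece.

```python
# Minimal sketch: locating a "performance threshold" with a learning curve.
# The synthetic data, model choice and target accuracy are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# A relatively simple problem: 20 features, only 5 of which carry signal.
X, y = make_classification(n_samples=20_000, n_features=20, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

TARGET_ACCURACY = 0.90  # the accuracy at which the feature becomes viable (domain-specific)

for n in (100, 300, 1_000, 3_000, 10_000):
    model = LogisticRegression(max_iter=1_000).fit(X_train[:n], y_train[:n])
    accuracy = accuracy_score(y_test, model.predict(X_test))
    print(f"{n:>6} training examples -> test accuracy {accuracy:.3f}")
    if accuracy >= TARGET_ACCURACY:
        print(f"Performance threshold reached at roughly {n:,} examples")
        break
```

Where that curve flattens is where additional data stops paying for itself; a curve that flattens early signals a low performance threshold, which is good for bootstrapping but, per the argument above, not much of a moat.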

AI-powered contract processing is a good example of an application with a low performance threshold. There are thousands of contract types, but most of them share key fields: the parties involved, the items of value being exchanged, time frame, etc. Specific document types like mortgage applications or rental agreements are highly standardized in order to comply with regulation. Across multiple startups, we’ve seen algorithms that automatically process documents needing only a few hundred examples to train to an acceptable degree of accuracy.

Entrepreneurs need to thread a needle. If the performance threshold is high, you’ll have a bootstrap problem acquiring enough data to create a product to drive customer usage and more data collection. Too low, and you haven’t built much of a data moat!

Stability threshold

Machine learning models train on examples taken from the real-world environment they represent. If conditions change over time, gradually or suddenly, and the model doesn’t change with them, the model will decay. In other words, the model’s predictions will no longer be reliable.

For example, Constructor.io is a startup that uses machine learning to rank search results for e-commerce websites. The system observes customer clicks on search results and uses that data to predict the best order for future search results. But e-commerce product catalogs are constantly changing. A model that weighs all clicks equally, or that is trained only on a data set from one period of time, risks overvaluing older products at the expense of newly introduced and currently popular products.

Keeping the model stable requires ingesting fresh training data at the same rate that the environment changes. We call this rate of data acquisition the stability threshold.
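
To illustrate, here is a short sketch of recency-weighted click scoring, the kind of adjustment the Constructor.io example suggests; the exponential decay, the two-week half-life and the toy click log are assumptions made for illustration, not a description of Constructor.io’s actual system.

```python
# Sketch: decaying click scores so a ranking tracks a changing product catalog.
# The half-life and the toy click log are illustrative assumptions.
from collections import defaultdict

HALF_LIFE_DAYS = 14.0  # assumed: a click loses half its weight every two weeks


def recency_weight(click_age_days: float) -> float:
    """Exponential decay: older clicks count for less than fresh ones."""
    return 0.5 ** (click_age_days / HALF_LIFE_DAYS)


def score_products(click_log):
    """click_log: iterable of (product_id, click_age_days) pairs -> decayed click scores."""
    scores = defaultdict(float)
    for product_id, age_days in click_log:
        scores[product_id] += recency_weight(age_days)
    return dict(scores)


# An older product with many stale clicks vs. a newly popular one with fewer, fresher clicks.
clicks = [("old_sku", 60.0)] * 100 + [("new_sku", 1.0)] * 40
print(score_products(clicks))
# old_sku: 100 clicks at ~0.05 weight each is roughly 5; new_sku: 40 clicks at ~0.95 each is roughly 38,
# so the currently popular product outranks the one with more lifetime clicks.
```

The faster the catalog (or whatever environment the model describes) changes, the faster those weights decay, and the more fresh data must be ingested just to stand still; that required ingestion rate is the stability threshold.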

Perishable data doesn’t make for a very good data moat. On the other hand, ongoing access to abundant fresh data can be a formidable barrier to entry when the stability threshold is high.

Identifying opportunities with long-term defensibility

The MAP, the performance threshold and the stability threshold are all central to identifying strong data moats.

First movers may face a low MAP when entering a new category, but once they have created that category and lead it, the minimum bar for future entrants is to equal or exceed the first mover.

Domains requiring less data to reach the performance threshold and less data to maintain that performance (the stability threshold) are not very defensible: new entrants can readily amass enough data to match or leapfrog your solution. On the other hand, companies attacking problems with a low performance threshold (not much data is needed to get started) but a high stability threshold (data decays rapidly) could still build a moat by acquiring new data faster than the competition.

More elements of a strong data moat

AI investors talk enthusiastically about “public data” versus “proprietary data” to classify data sets, but the strength of a data moat has more dimensions, including:

  • Accessibility
  • Time — how quickly can the data be amassed and used in the model? Can the data be accessed instantly, or does it take a significant amount of time to obtain and process?
  • Cost — how much money is needed to acquire this data? Does the user of the data need to pay for licensing rights or pay humans to label the data?
  • Uniqueness — is similar data widely available to others who could then build a model and achieve the same result? Such so-called proprietary data might better be termed “commodity data” — for example: job listings, widely available document types (like NDAs or loan applications), images of human faces.
  • Dimensionality — how many different attributes are described in a data set? Are many of them relevant to solving the problem?
  • Breadth — how widely do the values of attributes vary? Does the data set account for edge cases and rare exceptions? Can data or learnings be pooled across customers to provide greater breadth of coverage than data from just one customer?
  • Perishability — how broadly applicable over time is this data? Is a model trained from this data durable over a long time period, or does it need regular updates?
  • Virtuous loop — can outcomes such as performance feedback or predictive accuracy be used as inputs to improve the algorithm? Can performance compound over time?

Software is now a commodity, making data moats more important than ever for companies to build a long-term competitive advantage. With tech titans democratizing access to AI toolkits to attract cloud computing customers, data sets are one of the most important ways to differentiate. A truly defensible data moat doesn’t come from just amassing the largest volume of data. The best data moats are tied to a particular problem domain, in which unique, fresh data compounds in value as it solves problems for customers.


Source: Tech Crunch

Nvidia CEO comments on GPU shortage caused by Ethereum

There’s currently a shortage of Nvidia GPUs, and the company’s CEO, Jensen Huang, pointed to Ethereum mining as the cause. Speaking to a group of journalists following his keynote at Nvidia’s GTC conference today, he addressed the shortage.

Huang stated plainly that Nvidia is not in the business of cryptocurrency or distributed ledgers. As such, he said he would prefer his company’s GPUs be used in the areas Nvidia is targeting, though he explained why Nvidia’s products are used for crypto mining.

“[Cryptocurrency] is not our business,” he said. “Gaming is growing and workstation is growing because of ray tracing.” He noted that Nvidia’s high-performance business is also growing, and that these are the areas to which he wishes Nvidia could allocate units.

Huang explained why crypto miners are using Nvidia’s products, echoing what he told me in an interview last week.

“We’re sold out of many of our high-end SKUs, and so it’s a real challenge keeping [graphics cards] in the marketplace for games,” he said, adding, “At the highest level, the way to think about that is because of the philosophy of cryptocurrency — which is really about taking advantage of distributed high-performance computing — there are supercomputers in the hands of almost everybody in the world, so that no singular force or entity can control the currency.”

So what is he going to do about it? “We have to build a whole lot more,” he told TechCrunch last week. “The video supply chain is working really hard, and you know all of our partners are working around the clock. We’ve got to come closer to the demand of the market. And right now, we’re not anywhere near close to that and so we’re just going to have to keep running.”


Source: Tech Crunch

Lightspeed just filed for $1.8 billion in new funding, as the race continues

Just a day after General Catalyst, the 18-year-old venture firm, revealed plans in an SEC filing to raise a record $1.375 billion in capital to shower on startups, another firm that we’d said was likely to file any second has done just that.

According to a fresh SEC filing, Lightspeed Venture Partners, also 18 years old at this point, is raising a record $1.8 billion in new capital commitments from its investors, just two years after raising what was then a record for the firm: $1.2 billion in funding across two funds (one early stage and the other for “select” companies in its portfolio that had garnered traction).

Still on our watch list: news of bigger-and-better-than-ever funds from other firms that announced their latest funds roughly two years ago, including Founders Fund, Andreessen Horowitz, and Accel Partners.

The supersizing of venture firms isn’t a shock, as we wrote yesterday — though it’s also not necessarily good for returns, as we also noted. Right now, venture firms are reacting in part to the $100 billion SoftBank Vision Fund, which SoftBank has hinted is merely the first of more gigantic funds it plans to raise, including from investors in the Middle East who’d like to plug more money into Silicon Valley than they’ve been able to do historically.

The game, as ever, has also changed, these firms could argue. For one thing, the size of rounds has soared in recent years, making it easy for venture firms to convince themselves that to “stay in the game,” they need to have more cash at their disposal.

Further, so-called limited partners from universities, pension funds and elsewhere want to plug more money into venture capital, given the lackluster performance some other asset classes have produced.

When they want to write bigger checks to the funds in which they are already investors, the funds often try accommodating them out of loyalty. (We’re guessing the greater management fees they receive, which are tied to the amount of assets they manage, are also persuasive.)

What’s neglected in this race is the fact that the biggest outcomes can usually be traced to the earlier rounds in which VCs participate. Look at Sequoia’s early investment in Dropbox, for example, or Lightspeed’s early check to Snapchat. No matter the outcome of these companies, short of total failure, both venture firms will have made a mint, unlike later investors that might not be able to say the same.

There is also ample evidence that it’s far harder to produce meaningful returns to investors when managing a giant fund. (This Kauffman study from 2012 is among the most highly cited, if you’re curious.)

Whether raising so much will prove wise for Lightspeed is an open question. What is not in doubt: Lightspeed is right now among the best-performing venture firms in Silicon Valley.

In addition to being the first institutional investor in now publicly traded Snap, the firm wrote early checks to MuleSoft, which staged a successful IPO in 2017; to Stitch Fix, which went public later that year; and to AppDynamics, which sold to Cisco for $3.7 billion last year. It was an early investor in Nimble Storage, which sold to Hewlett Packard Enterprise for just north of $1 billion in cash last March. And just two weeks ago, another of its portfolio companies, Zscaler, also staged a successful IPO.

At a StrictlyVC event hosted last year by this editor, firm cofounders Ravi Mhatre and Barry Eggers talked about their very long “overnight” success story, and about the importance of funding companies early to help them set up durable businesses.

It will be interesting to see whether this new capital is invested in more early-stage deals, or the firm sees growing opportunity to compete at the growth stage. Probably both? Stay tuned.

Pictured, left to right: investors Semil Shah, Ravi Mhatre, and Barry Eggers.


Source: Tech Crunch