Facebook quietly acquired another UK AI startup and almost no one noticed

Over the last few years, Facebook has been busy building out AI capabilities in areas like computer vision, natural language processing (NLP) and ‘deep learning,’ in part by acquiring promising startups in the space.

Understandably, this has seen the U.S. social networking giant look to the U.K. for AI talent, including the acqui-hire of NLP startup Bloomsbury AI in 2018 and, most recently, the acquisition of Scape Technologies, a British company using computer vision to offer more accurate location positioning for augmented reality.

Now TechCrunch has learned that a third U.K. acquisition quietly took place in December, with Facebook acquiring Deeptide Ltd., the company behind Atlas ML, which is also the custodian of “Papers With Code,” the free and open resource for machine learning papers and code.

A regulatory filing for Deeptide reveals that Facebook became a majority owner on 13th December 2019. The same day, Atlas ML co-founder Robert Stojnic published a Medium post titled “Papers with Code is joining Facebook AI,” which went largely unnoticed outside of the machine learning research community.

Terms of the deal — or even that the acquisition took place — weren’t announced by Facebook at the time, beyond Stojnic’s sanctioned post. However, according to my sources within London’s tech community, the price is thought to have been around $40 million.

Founded in 2018 by Stojnic and Ross Taylor, Atlas ML wanted to “make it easier to discover and apply deep learning research.” The young startup was an alumnus of Entrepreneur First (EF) — along with Bloomsbury and Scape — and subsequently raised seed funding from Episode1 and Kindred Capital.

I’ve contacted Facebook for comment and will update this post if and when I hear back.


Source: Tech Crunch

After $479M round on $12.4B valuation, Snowflake CEO says IPO is next step

Snowflake, the cloud-based data warehouse company, doesn’t tend to do small rounds. On Friday night, word leaked out about its latest mega round. This one was for $479 million on a $12.4 billion valuation. That’s more than triple the company’s previous $3.9 billion valuation from October 2018, and CEO Frank Slootman suggested that the company’s next financing event is likely an IPO.

Dragoneer Investment led the round along with new investor Salesforce Ventures. Existing Snowflake investors Altimeter Capital, ICONIQ Capital, Madrona Venture Group, Redpoint Ventures, Sequoia, and Sutter Hill Ventures also participated. The new round brings the total raised to over $1.4 billion, according to PitchBook data.

All of this investment raises the question of when the company will go public. As you might expect, Slootman is keeping his cards close to the vest, but he acknowledges that an IPO is the next logical step for his organization, even if he is not feeling pressure to make that move right now.

“I think the earliest that we could actually pull that trigger is probably early- to mid-summer timeframe. But whether we do that or not is a totally different question because we’re not in a hurry, and we’re not getting pressure from investors,” he said.

He grants that what pressure there is comes from employees wanting to get their equity out of the company, which can only happen once the company goes public. “The only reason that there’s always a sense of pressure around this is because it’s important for employees, and I’m not minimizing that at all. That’s a legitimate thing. So, you know, it’s certainly a possibility in 2020 but it’s also a possibility the year thereafter. I don’t see it happening any later than that,” he said.

The company’s most recent round prior to this was $450 million in October 2018. Slootman says he absolutely didn’t need the money, but the capital was available, and the chance to forge a relationship with Salesforce was also key to the company’s thinking in taking this funding.

“At a high level, the relationship is really about allowing Salesforce data to be easily accessed inside Snowflake. Not that it’s impossible to do that today because there are lots of tools that will help you do that, but this relationship is about making that seamless and frictionless, which we find is really important,” Slootman said.

Snowflake now has relationships with AWS, Microsoft Azure and Google Cloud Platform, and has a broad content strategy aimed at getting as much quality data (like Salesforce’s) onto the platform as possible. Slootman says this helps induce a network effect, while making it easier to move data between major cloud platforms, a big concern as more companies adopt a multi-cloud strategy.

“One of the key distinguishing architectural aspects of Snowflake is that once you’re on our platform, it’s extremely easy to exchange data with other Snowflake users. That’s one of the key architectural underpinnings. So content strategy induces network effect which in turn causes more people, more data to land on the platform, and that serves our business model,” he said.

Slootman says investors want to be part of his company because it’s solving some real data interchange pain points in the cloud market, and the company’s growth shows that, in spite of its size, it continues to attract new customers at a high rate.

“We just closed off our previous fiscal year, which ended last Friday, and our revenue grew at 174%. For the scale that we are, this is by far the fastest growing company out there… So, that’s not your average asset,” he said.

The company has 3,400 active customers, which he defines as customers who were actively using the platform in the last month. He says the company added 500 new customers in the last quarter alone.


Source: Tech Crunch

Watch two rocket launches live, including a Space Station supply flight and a mission to study the Sun

There are two – that’s right, two – launches happening this Sunday, and both are set to broadcast live on NASA’s official stream. The first is a NASA International Space Station resupply mission, with a Northrop Grumman Cygnus spacecraft launching aboard an Antares rocket from Wallops Island in Virginia at 5:39 PM EST (2:39 PM PST). The second is the launch of the Solar Orbiter spacecraft, a joint scientific mission by NASA and the European Space Agency (ESA) that’s set to take off aboard a United Launch Alliance (ULA) Atlas V rocket from Cape Canaveral, Florida at 11:03 PM EST (8:03 PM PST).

The ISS resupply mission is the 13th operated by Northrop Grumman, and will carry around 8,000 lbs of experiment materials, supplies for the Station’s astronaut crew, and additional cargo. If all goes to plan, the Cygnus spacecraft will reach the Space Station on Tuesday at around 4:30 AM EST, where astronauts on board will capture the spacecraft with the station’s robotic arm for docking.

The NASA/ESA Solar Orbiter mission is a bit more of an event, since it’s the launch of a very special payload with a dedicated mission to study the Sun, aboard a brand-new configuration of ULA’s Atlas V rocket tailor-made for the Orbiter. The Orbiter has a mass of nearly 4,000 lbs and a wingspan of nearly 60 feet, and carries a complement of 10 instruments for gathering data from our Solar System’s central player.

Solar Orbiter will take the first-ever direct images of the Sun’s poles once it arrives at our star, but it first has to get there, using the gravitational force of both Earth and Venus to help propel it along its path. The planned launch has already been delayed by a few days – and timing is key to making sure those gravitational forces can work as designed to get it to its goal – so here’s hoping today’s launch goes off as planned.

As its name implies, Solar Orbiter is designed to orbit the Sun – and it’ll do so from a relatively close distance of around 26 million miles. That’s closer than Mercury, the planet in our solar system closest to the Sun, and at that distance it’ll still face maximum temperatures of around 520 degrees Celsius (968 degrees Fahrenheit). To endure those temperatures, the spacecraft is protected by a titanium heat shield that will always be oriented towards the star, and even its solar panels will have to tilt away from the Sun during the spacecraft’s closest approach to make sure they don’t get too hot while powering the satellite.

Solar Orbiter will study the Sun’s polar regions, as mentioned, and shed some light on how the star’s magnetic field and particle emissions affect its surrounding cosmic environment, including the region of space that we inhabit here on Earth. After launch, Orbiter should make its way to Venus for a flyby this December, then cross paths with Earth for a planned approach in November 2021, before making its first close approach to the Sun in 2022.

Check NASA’s stream for live views of both launches, with coverage of the first mission kicking off shortly after 5 PM EST (2 PM PST).


Source: Tech Crunch

The war against space hackers: how the JPL works to secure its missions from nation-state adversaries

NASA’s Jet Propulsion Laboratory designs, builds, and operates billion-dollar spacecraft. That makes it a target. What the infosec world calls Advanced Persistent Threats — meaning, generally, nation-state adversaries — hover outside its online borders, constantly seeking access to its “ground data systems,” its networks on Earth, which in turn connect to the ground relay stations through which those spacecraft are operated.

Their presumptive goal is to exfiltrate secret data and proprietary technology, but the risk of sabotage of a billion-dollar mission also exists. Over the last few years, in the wake of multiple security breaches which included APTs infiltrating their systems for months on end, the JPL has begun to invest heavily in cybersecurity.

I talked to Arun Viswanathan, a key NASA cybersecurity researcher, about that work, which is a fascinating mix of “totally representative of infosec today” and “unique to the JPL’s highly unusual concerns.” The key message is firmly in the former category, though: information security has to be proactive, not reactive.

Each mission at JPL is like its own semi-independent startup, but the technical constraints tend to be very unlike those of Valley startups. For instance, mission software is usually homegrown and innovative, because the requirements are so much more stringent: you absolutely cannot have software going rogue and consuming 100% of the CPU on a space probe.

Successful missions can last a very long time, so the JPL has many archaic systems, multiple decades old, which are no longer supported by anyone; they have to architect their security solutions around the limitations of that ancient software. Unlike most enterprises, they are open to the public, who tour the facilities by the hundred. Furthermore, they have many partners, such as other space agencies, with privileged access to their systems.

All that … while being very much the target of nation-state attackers. Theirs is, to say the least, an interesting threat model.

Viswanathan has focused largely on two key projects. One is the creation of a model of JPL’s ground data systems — all its heterogeneous networks, hosts, processes, applications, file servers, firewalls, etc. — and a reasoning engine on top of it. This then can be queried programmatically. (Interesting technical side note: the query language is Datalog, a non-Turing-complete offshoot of venerable Prolog which has had a resurgence of late.)

Prior to this model, no one person could confidently answer “what are the security risks of this ground data system?” As with many decades-old institutions, that knowledge was largely trapped in documents and brains.

With the model, ad hoc queries such as “could someone in the JPL cafeteria access mission-critical servers?” can be asked, and the reasoning engine will search out pathways and itemize their services and configurations. Similarly, researchers can work backwards from attackers’ goals to construct “attack trees”: paths attackers could conceivably use to reach their goals, which can then be mapped against the model to identify mitigations to apply.
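
JPL hasn’t published its model or rules, but the flavor of this kind of reasoning engine is easy to sketch. Here is a toy Python stand-in for the Datalog setup described above; every host name and connection is invented for illustration:

    # Toy stand-in for a Datalog-style reachability query. All host names
    # and connections are invented; JPL's real model is not public.
    facts = {
        ("cafeteria_wifi", "guest_gateway"),
        ("guest_gateway", "corp_proxy"),       # hypothetical misconfiguration
        ("corp_proxy", "mission_ops_server"),
    }

    def reachable(connects):
        """Transitive closure of connects(src, dst), computed as a naive
        fixpoint -- the way a simple Datalog engine evaluates the rules:
            reach(A, B) :- connects(A, B).
            reach(A, C) :- connects(A, B), reach(B, C)."""
        closure = set(connects)
        changed = True
        while changed:
            changed = False
            for (a, b) in list(closure):
                for (c, d) in connects:
                    if b == c and (a, d) not in closure:
                        closure.add((a, d))
                        changed = True
        return closure

    # "Could someone in the JPL cafeteria access mission-critical servers?"
    print(("cafeteria_wifi", "mission_ops_server") in reachable(facts))  # True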

His other major project is to increase the JPL’s “cyber situational awareness” — in other words, instrumenting their systems to collect and analyze data, in real time, to detect attacks and other anomalous behavior. For instance, a spike in CPU usage might indicate a compromised server being used for cryptocurrency mining.

In the bad old days, security was reactive: if someone had a problem and couldn’t access their machine, they’d call, but that was the extent of their observability. Nowadays, they can watch for malicious and anomalous patterns which range from the simple, such as a brute-force attack indicated by many failed logins followed by a successful one, to the much more complex, e.g. machine-learning based detection of a command system operating outside its usual baseline parameters.
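
The brute-force pattern, at least, is simple enough to express in a few lines. A minimal sketch of such a detector, assuming a stream of (user, outcome) login events and an arbitrary threshold — both assumptions for illustration, not JPL’s actual tooling:

    from collections import defaultdict

    # Minimal brute-force detector: flag a user when a run of failed
    # logins is immediately followed by a success. The event format and
    # the threshold are assumptions for illustration.
    def detect_bruteforce(events, threshold=5):
        consecutive_fails = defaultdict(int)
        alerts = []
        for user, outcome in events:
            if outcome == "fail":
                consecutive_fails[user] += 1
            else:  # a successful login
                if consecutive_fails[user] >= threshold:
                    alerts.append(user)
                consecutive_fails[user] = 0
        return alerts

    stream = [("ops1", "fail")] * 6 + [("ops1", "success"), ("ops2", "success")]
    print(detect_bruteforce(stream))  # ['ops1']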

Of course, sometimes it’s just an anomaly, not an attack. On the upside, this new observability is also helping to identify system inefficiencies, memory leaks and the like proactively rather than reactively.

This may all seem fairly basic if you’re accustomed to, say, your DigitalOcean dashboard and its panoply of server analytics. But re-engineering an installed base of heterogeneous, complex legacy systems for observability at scale is another story entirely. Looking at the borders and interfaces isn’t enough; you have to observe all the behavior inside the perimeter too, especially in light of partners with privileged access, who might abuse that access if compromised. (This was the root cause of the infamous 2018 attack on the JPL.)

While the JPL’s threat model is fairly unique, Viswanathan’s work is quite representative of our brave new world of cyberwarfare. Whether you’re a space agency, a big company, or a growing startup, your information security nowadays needs to be proactive. Ongoing monitoring for anomalous behavior is key, as is thinking like an attacker; reacting after you find out something bad happened is not enough. May your organization learn this the easy way, rather than joining the seemingly endless stream of headlines telling us of breach after breach.


Source: Tech Crunch

Startups Weekly: Asana numbers likely to be what the market wants

[Editor’s note: Want to get this weekly review of news that startups can use? Just subscribe here.] 

Asana may get more attention than the average SaaS company due to the Facebook pedigrees and outspoken views of its founders, but in practice it’s a low-profile, cash-efficient machine. Today, the productivity toolmaker does not need to raise cash via a traditional IPO, as we explored this week following its filing for a direct listing, even though it hasn’t raised that much money compared to other unicorns.

Alex Wilhelm dug into public numbers on Extra Crunch to make an educated guess about its pricing prospects:

Let’s presume that Asana crossed the $100 million ARR mark as 2018 came to a close. And, for the sake of discussion, that its eight quarters of revenue growth acceleration left the company with a 60% expansion rate. Then, Asana would have closed up 2019 with $160 million in ARR. (You can easily change up the numbers by tweaking when the company reached the nine-figure ARR mark and its ensuing growth rate.) …

Asana is likely worth more than its final private valuation of $1.5 billion. Presuming it can get a bog-standard 12x multiple on its ARR, the company would be worth $1.8 billion. If it can do better, or is larger than that, the value of the firm quickly rises.
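
Wilhelm’s invitation to change up the numbers is easy to take, since the whole scenario is a few lines of arithmetic. A quick sketch (inputs are the article’s assumptions, not company disclosures; note that 12x on $160 million strictly pencils out a touch above the quoted $1.8 billion):

    # Re-run the back-of-the-envelope math from the excerpt above.
    # All inputs are the article's assumptions, not company disclosures.
    base_arr = 100e6   # assumed ARR as 2018 came to a close
    growth = 0.60      # assumed 2019 expansion rate
    multiple = 12      # "bog-standard" SaaS revenue multiple

    arr_2019 = base_arr * (1 + growth)
    implied_value = arr_2019 * multiple
    print(f"2019 ARR: ${arr_2019 / 1e6:.0f}M")            # $160M
    print(f"Implied value: ${implied_value / 1e9:.2f}B")  # $1.92B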

Unlike Casper’s struggles, and One Medical’s somewhat surprising consumery pop, Asana is a straightforward bet for a good public performance based on traditional SaaS metrics. Stay tuned for more next week.


VCs are still pouring money into open source

In this week’s investor survey, Arman Tabatabai talked to 18 of the most active and successful investors in open-source and devops software about the latest trends. The money going into the sector has grown at a 10% CAGR over the last five years, and nobody he talked to plans to slow down — in fact, many said the market was under-heated, or just halfway there. Why? Every company is trying to become more of a software company, developers now get to make more adoption and purchasing decisions, and there are countless software problems yet to solve.

The full roster of participating investors is in Part 1 and Part 2 of the survey on Extra Crunch.


The latest startup funds are even more meta

It seems like everyone wants to invest in tech startups these days, including any large company or government body — and even tech startups. In the latest news on this long-running trend, cap table management unicorn Carta is starting its own fund to invest in companies. Given its in-house data and broad relationships in the industry, this seems like great positioning for some hot deals (as long as the clients on the platform don’t mind, of course).

Meanwhile, a couple of successful, currently active founders will also be ramping up their seed investments. Superhuman founder and CEO Rahul Vohra and Eventjoy founder Todd Goldberg are teaming up to create “The Todd & Rahul Angel Fund” which will put $7 million from an LP base of other founders and operators to work. The dollars involved may be small, but the signaling is likely to be very high.

Organized (tech) labor

Silicon Valley investors and founders have avoided unions for decades by giving employees a cut of the ownership directly. But is this arrangement changing? The rise of gig work, the questions about high valuations and future stock prices, the grind of life at many unicorn startups, and general concern about tech culture and ethics have combined to make some workers look harder at unions, as Megan Rose Dickey covered this week in an ongoing series.

Other workers, meanwhile, are striking out to form tech co-ops that share ownership from the start. She talked to a couple of folks on this front as well, including one co-op that is helping ride-share drivers make more money.

Around the horn

Here’s why so many fintech startups are loaning to small businesses (EC)

Europe risks squandering its global advantage in deep tech innovation (TC)

What to expect when pitching European VCs (TC)

Dear Sophie: My H-1B was renewed, but I’m getting laid off (EC)

Latin America takes the global lead in VC directed to female co-founders (TC)

Why VCs are dumping money into insurance marketplaces (EC)

As a top manager leaves amid fundraising woes, SoftBank’s vision looks dimmer — and schadenfreude abounds (TC)

Why this VC thinks we’re heading for a cloud slowdown (EC)

#EquityPod

In this week’s episode, Alex and Danny sat down with Rick Yang of NEA, examined Casper and One Medical in more detail, and covered a few new funds and fundraises — including more thoughts on the Asana numbers. Check it out!


Source: Tech Crunch

‘A city where you can pilot almost anything and figure out if it’s going to work’

As founding executive director of Tech:NYC, Julie Samuels is one of the state’s most prominent advocates for the tech sector, both in Albany and at City Hall.

Samuels, a lawyer by training, came to New York after serving as executive director of Engine, a San Francisco organization on which Tech:NYC is modeled. In an interview with TechCrunch, Samuels spoke about several issues, including her rationale for why, despite the controversy over Amazon’s decision not to build its second headquarters in Queens, the area is well-positioned for the next wave of tech innovation.

TechCrunch: What is the need for organizations like Tech:NYC and Engine?

Julie Samuels: As the tech industry matures, it is incredibly important that there are organizations [that] represent these companies politically, civically, making sure they have a seat at the table with so many public policy debates. There is no shortage of public policy debates surrounding technology.

It is also incredibly important that there are organizations who are talking from the viewpoint of smaller companies and startups. There are a lot of organizations that represent the biggest and most well-known companies, including Tech:NYC. But [we] also have hundreds of members who are small and growing startups. We think that diversity of the ecosystem is what really sets the technology sector apart and it is something we want to foster and celebrate.

Who are your members, then?


Source: Tech Crunch

Why your next TV needs ‘filmmaker mode’

TVs this year will ship with a new feature called “filmmaker mode,” but unlike the last dozen things the display industry has tried to foist on consumers, this one actually matters. It doesn’t magically turn your living room into a movie theater, but it’s an important step in that direction.

This new setting arose out of concerns among filmmakers (hence the name) that users were getting a sub-par viewing experience of the media that creators had so painstakingly composed.

The average TV these days is actually a quality piece of kit compared to a few years back. But few ever leave the default settings. This was becoming a problem, explained LG’s director of special projects, Neil Robinson, who helped define the filmmaker mode specification and execute it on the company’s displays.

“When people take TVs out of the box, they play with the settings for maybe five minutes, if you’re lucky,” he said. “So filmmakers wanted a way to drive awareness that you should have the settings configured in this particular way.”

In the past, they’ve taken to social media and other platforms to mention this sort of thing, but it’s hard to say how effective a call to action is, even when it’s Tom Cruise and Chris McQuarrie begging you.

While very few people really need to tweak the gamma or adjust individual color levels, there are a couple of settings that are absolutely crucial for a film or show to look the way it’s intended. The most important fall under the general term “motion processing.”

These settings have a variety of fancy-sounding names, like “game mode,” “motion smoothing,” “truemotion” and the like, and they are on by default on many TVs. What they do differs from model to model, but it amounts to taking content at, say, 24 frames per second and converting it to content at, say, 120 frames per second.

Generally this means inventing the images that come between the 24 actual frames — so if a person’s hand is at point A in one frame of a movie and point C in the next, motion processing will create a point B to go in between — or B, X, Y, Z, and dozens more if necessary.
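
The crudest version of that invented “point B” is a straight linear blend between the two real frames. A toy sketch (real TVs estimate per-pixel motion vectors rather than simply cross-fading, so this illustrates the idea, not what your set actually runs):

    import numpy as np

    def invent_between_frames(frame_a, frame_b, n):
        """Invent n intermediate frames between two real ones by linear
        blending -- a crude stand-in for a TV's motion processing."""
        return [frame_a + (frame_b - frame_a) * (i / (n + 1))
                for i in range(1, n + 1)]

    # 24 fps -> 120 fps means inventing four frames between each real pair.
    a = np.zeros((1080, 1920), dtype=np.float32)  # hand at point A
    c = np.ones((1080, 1920), dtype=np.float32)   # hand at point C
    betweens = invent_between_frames(a, c, n=4)
    print(len(betweens))  # 4 "point B" frames the filmmakers never shot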

This is bad for several reasons:

First, it produces a smoothness of motion that lies somewhere between real life and film, giving an uncanny look to motion-processed imagery that people often say reminds them of bad daytime TV shot on video — which is why people call it the “soap opera effect.”

Second, some of these algorithms are better than others, and some media is more compatible than the rest (sports broadcasts, for instance). While at best they produce the soap opera effect, at worst they can produce weird visual artifacts that can distract even the least sensitive viewer.

And third, it’s an aesthetic affront to the creators of the content, who usually crafted it very deliberately, choosing this shot, this frame rate, this shutter speed, this take, this movement, and so on with purpose and a careful eye. It’s one thing if your TV has the colors a little too warm or the shadows overbright — quite another to create new frames entirely with dubious effect.

So filmmakers, and in particular cinematographers, whose work crafting the look of the movie is most affected by these settings, began petitioning TV companies to either turn motion processing off by default or create some kind of easily accessible method for users to disable it themselves.

Ironically, the option already existed on some displays. “Many manufacturers already had something like this,” said Robinson. But with different names, different locations within the settings, and different exact effects, no user could really be sure what these various modes actually did. LG’s was “Technicolor Expert Mode.” Does that sound like something the average consumer would be inclined to turn on? I like messing with settings, and I’d probably keep away from it.

So the movement was more about standardization than reinvention. With a single name, icon, and prominent placement instead of being buried in a sub-menu somewhere, this is something people may actually see and use.

Not that there was no back-and-forth on the specification itself. For one thing, filmmaker mode also lowers the peak brightness of the TV to a relatively dark 100 nits — at a time when high brightness, daylight visibility, and contrast ratio are specs manufacturers want to show off.

The reason for this is, very simply, to make people turn off the lights.

There’s very little anyone in the production of a movie can do to control your living room setup or how you actually watch the film. But restricting your TV to certain levels of brightness does have the effect of making people want to dim the lights and sit right in front. Do you want to watch movies in broad daylight, with the shadows pumped up so bright they look grey? Feel free, but don’t imagine that’s what the creators consider ideal conditions.


“As long as you view in a room that’s not overly bright, I’d say you’re getting very close to what the filmmakers saw in grading,” said Robinson. Filmmaker mode’s color controls are rather loose, he noted, but you’ll get the correct aspect ratio, white balance, no motion processing, and generally no weird surprises from not delving deep enough into the settings.

The full list of changes can be summarized as follows:

  • Maintain source frame rate and aspect ratio (no stretched or sped up imagery)
  • Motion processing off (no smoothing)
  • Peak brightness reduced (keeps shadows dark — this may change with HDR content)
  • Sharpening and noise reduction off (standard items with dubious benefit)
  • Other “image enhancements” off (non-standard items with dubious benefit)
  • White point at D65/6500K (prevents colors from looking too warm or cool)
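
Boiled down, the preset is just a handful of toggles. Here’s a hypothetical rendering of the list above as a settings map (field names are invented; no manufacturer’s actual firmware API is implied):

    # Hypothetical sketch of the filmmaker mode preset described above.
    # Field names are invented; no manufacturer's API is implied.
    FILMMAKER_MODE = {
        "preserve_source_frame_rate": True,  # no 24 -> 120 fps conversion
        "preserve_aspect_ratio": True,       # nothing stretched or sped up
        "motion_processing": False,          # no smoothing
        "peak_brightness_nits": 100,         # may change with HDR content
        "sharpening": False,
        "noise_reduction": False,
        "image_enhancements": False,
        "white_point_kelvin": 6500,          # D65
    }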

All this, however, relies on people being aware of the mode and choosing to switch to it. Exactly how that will work depends on several factors. The ideal option is probably a filmmaker mode button right on the clicker, which is at least theoretically the plan.

The alternative is a content specification — as opposed to a display one — that allows TVs to automatically enter filmmaker mode when a piece of media requests it. But this requires content providers to take advantage of the APIs that make the automatic switching possible, so don’t count on it.

And of course this has its own difficulties, including privacy concerns — do you really want your shows to tell your devices what to do and when? So a middle road, where the TV prompts the user with “Show this content in filmmaker mode? Yes/No” and automatically falls back to the previous settings afterwards, might be the best option.

There are other improvements that can be pursued to make home viewing more like the theater, but as Robinson pointed out, there are simply fundamental differences between LCD and OLED displays and the projectors used in theaters — and even then there are major differences between projectors. But that’s a whole other story.

At the very least, the mode as planned represents a wedge that content purists (it has a whiff of derogation but they may embrace the term) can widen over time. Getting the average user to turn off motion processing is the first and perhaps most important step — everything after that is incremental improvement.

So which TVs will have filmmaker mode? It’s unclear. LG, Vizio, and Panasonic have all committed to bringing out models with the feature, and it’s even possible it could be added to older models with a software update (but don’t count on it). Sony is a holdout for now. No one is sure exactly which models will have filmmaker mode available, so just cast an eye over the spec list of any set you’re thinking of getting and, if you’ll take my advice, don’t buy a TV without it.


Source: Tech Crunch

California’s new privacy law is off to a rocky start

California’s new privacy law was years in the making.

The law, California’s Consumer Privacy Act — or CCPA — took effect on January 1, allowing state residents to reclaim their right to access and control their personal data. Inspired by Europe’s GDPR, the CCPA is the largest statewide privacy law change in a generation. The new law lets users request a copy of the data that tech companies have on them, delete the data when they no longer want a company to have it, and demand that their data isn’t sold to third parties. All of this is much to the chagrin of the tech giants, some of which had spent millions to comply with the law and have many more millions set aside to deal with the anticipated influx of consumer data access requests.

But to say things are going well is a stretch.

Many of the tech giants that kicked and screamed in resistance to the new law have acquiesced and accepted their fate — at least until something different comes along. The California tech scene had more than a year to prepare, but some have made it downright difficult and — ironically — more invasive in some cases for users to exercise their rights, largely because every company has a different interpretation of what compliance should look like.

Alex Davis is just one California resident who tried to use his new rights under the law to make a request to delete his data. He vented his annoyance on Twitter, saying companies have responded to CCPA by making requests “as confusing and difficult as possible in new and worse ways.”

“I’ve never seen such deliberate attempts to confuse with design,” he told TechCrunch. He referred to what he described as “dark patterns,” a type of user interface design that tries to trick users into making certain choices, often against their best interests.

“I tried to make a deletion request but it bogged me down with menus that kept redirecting… things to be turned on and off,” he said.

Despite his frustration, Davis got further than others. While some companies have made it easy for users to opt out of having their data sold by adding the legally required “Do not sell my info” links on their websites, many have not. Some have made it near-impossible to find these “data portals,” which companies set up so users can request a copy of their data or delete it altogether. For now, California companies are in a grace period that lasts until July, when the CCPA’s enforcement provisions kick in. Until then, users are finding workarounds — collating and sharing links to data portals to help others access their data.

“We really see a mixed story on the level of CCPA response right now,” said Jay Cline, who heads up consulting giant PwC’s data privacy practice, describing it as a patchwork of compliance.

PwC’s own data found that only 40% of the largest 600 U.S. companies had a data portal. Only a fraction, Cline said, extended their portals to users outside of California, even though other states are gearing up to push similar laws to the CCPA.

But not all data portals are created equally. Given how much data companies store on us — personal or otherwise — the risks of getting things wrong are greater than ever. Tech companies are still struggling to figure out the best way to verify each data request to access or delete a user’s data without inadvertently giving it away to the wrong person.

Last year, security researcher James Pavur impersonated his fiancee and tricked tech companies into turning over vast amounts of data about her, including credit card information, account logins and passwords and, in one case, a criminal background check. Only a few of the companies asked for verification. Two years ago, Akita founder Jean Yang described someone hacking into her Spotify account and requesting her account data as an “unfortunate consequence” of GDPR, which mandated companies operating on the continent allow users access to their data.


The CCPA says companies should verify a person’s identity to a “reasonable degree of certainty.” For some, that’s just an email address to send the data to.

Others require sending in even more sensitive information just to prove it’s them.

Indeed, i360, a little-known advertising and data company, until recently asked California residents for their full Social Security number; it now asks for just the last four digits. Verizon (which owns TechCrunch) wants its customers and users to upload their driver’s license or state ID to verify their identity. Comcast asks for the same, but goes a step further by asking for a selfie before it will turn over any of a customer’s data.

Comcast asks for the same amount of information to verify a data request as the controversial facial recognition startup, Clearview AI, which recently made headlines for creating a surveillance system made up of billions of images scraped from Facebook, Twitter and YouTube to help law enforcement trace a person’s movements.

As much as CCPA has caused difficulties, it has helped forge an entirely new class of compliance startups ready to help large and small companies alike handle the regulatory burdens to which they are subject. Several startups in the space are taking advantage of the $55 billion expected to be spent on CCPA compliance in the next year — like Segment, which gives customers a consolidated view of the data they store; Osano, which helps companies comply with CCPA; and Securiti, which just raised $50 million to help expand its CCPA offering. With CCPA and GDPR under their belts, their services are designed to scale to accommodate new state or federal laws as they come in.

Another startup, Mine, which lets users “take ownership” of their data by acting as a broker to allow users to easily make requests under CCPA and GDPR, had a somewhat bumpy debut.

The service asks users to grant it access to their inbox, scanning email subject lines for company names and using that data to determine which companies a user can ask to hand over or delete their data. (The service requests access to a user’s Gmail, but the company claims it will “never read” users’ emails.) Last month, during a publicity push, Mine inadvertently copied a couple of emailed data requests to TechCrunch, allowing us to see the names and email addresses of two requesters who wanted Crunch, a popular gym chain with a similar name, to delete their data.


TechCrunch alerted Mine — and the two requesters — to the security lapse.

“This was a mix-up on our part where the engine that finds companies’ data protection offices’ addresses identified the wrong email address,” said Gal Ringel, co-founder and chief executive at Mine. “This issue was not reported during our testing phase and we’ve immediately fixed it.”

For now, many startups have caught a break.

The smaller, early-stage startups that don’t yet make $25 million in annual revenue or store personal data on more than 50,000 users or devices will largely escape having to immediately comply with CCPA. But that doesn’t mean startups can be complacent. As early-stage companies grow, so will their legal responsibilities.

“For those who did launch these portals and offer rights to all Americans, they are in the best position to be ready for these additional states,” said Cline. “Smaller companies in some ways have an advantage for compliance if their products or services are commodities, because they can build in these controls right from the beginning,” he said.

CCPA may have gotten off to a bumpy start, but time will tell if things get easier. Just this week, California’s attorney general Xavier Becerra released newly updated guidance aimed at trying to “fine tune” the rules, per his spokesperson. It goes to show that even California’s lawmakers are still trying to get the balance right.

But with the looming threat of hefty fines just months away, time is running out for the non-compliant.


Source: Tech Crunch

This Week in Apps: Chinese giants take on Google Play, Iowa caucus disaster, TikTok’s power over App Store charts

Welcome back to This Week in Apps, the Extra Crunch series that recaps the latest OS news, the applications they support and the money that flows through it all.

The app industry is as hot as ever, with a record 204 billion downloads and $120 billion in consumer spending in 2019, according to App Annie’s recently released “State of Mobile” annual report. People are now spending 3 hours and 40 minutes per day using apps, rivaling TV. Apps aren’t just a way to pass idle hours — they’re a big business. In 2019, mobile-first companies had a combined $544 billion valuation, 6.5x higher than those without a mobile focus.

In this Extra Crunch series, we help you keep up with the latest news from the world of apps, delivered on a weekly basis.

This week, we look at the app making headlines for causing a disaster in Iowa, TikTok’s power to move apps up the charts, all the news from Apple’s new betas, the plan from Chinese mobile giants to take on Google Play, subscription scams, plus app trends and other news.

Headlines

Iowa’s caucus app was a disaster

A smartphone app really screwed things up in Iowa. The app, built by Shadow Inc., was designed to help the Iowa Democratic Party tabulate votes from the caucuses. But instead of helping, the app failed, causing a massive delay of almost an entire day. According to The New York Times, the app was hastily put together over just the past two months — and wasn’t properly tested.


Source: Tech Crunch

3 unicorn takeaways from the Casper and One Medical IPOs

With Casper’s public offering earlier this week, we’ve closed the book on the first two venture-backed IPOs of note in 2020. Casper, joined by One Medical, carried over $870 million of private capital, venture and otherwise, across the finish line.

Even though each IPO featured an unprofitable tech-enabled business that had posted sub-30% growth and gross margins under 50% (far under, in the case of One Medical), the two wound up miles apart in terms of their market reception and resulting valuation, measured in revenue multiple terms.

So what can we learn from the two IPOs as we look ahead to other unicorn debuts in 2020? A great number of things that help set the stage for the rest of 2020’s IPO class. Let’s discuss three observations that stick out the most.

Tech-enabled businesses can secure high-flying valuations in public offerings

The surprise of the year so far has been the public market’s reaction to One Medical’s IPO. The company, today worth $3.13 billion, is trading at 11.3x the top end of its 2019 revenue projections (the company has yet to close the books on its Q4 accounting).


Source: Tech Crunch