After twenty years of Salesforce, what Marc Benioff got right and wrong about the cloud

As Salesforce enters its 20th year, there’s an interesting opportunity to reflect on the change Marc Benioff created when he launched Salesforce.com and introduced the software-as-a-service (SaaS) model for enterprise software.

This model has been validated by the annual revenue stream of SaaS companies, which is fast approaching $100 billion by most estimates, and it will likely continue to transform many slower-moving industries for years to come.

However, for the cornerstone market in IT — large enterprise-software deals — SaaS represents less than 25 percent of total revenue, according to most market estimates. This split is even evident in the most recent high-profile “SaaS” acquisition, Microsoft’s purchase of GitHub, with over 50 percent of GitHub’s revenue coming from the sale of its on-prem offering, GitHub Enterprise.

Data privacy and security are also becoming major issues, with Benioff himself even pushing for a U.S. privacy law on par with GDPR in the European Union. While consumer data is often the focus of such discussions, it’s worth remembering that SaaS providers store and process an incredible amount of personal data on behalf of their customers, and the content of that data goes well beyond email addresses for sales leads.

It’s time to reconsider the SaaS model in a modern context, integrating developments of the last nearly two decades so that enterprise software can reach its full potential. More specifically, we need to consider the impact of IaaS and “cloud-native computing” on enterprise software, and how they’re blurring the lines between SaaS and on-premises applications. As the world around enterprise software shifts and the tools for building it advance, do we really need such stark distinctions about what can run where?

Source: Getty Images/KTSDESIGN/SCIENCE PHOTO LIBRARY

The original cloud software thesis

In his book, Behind the Cloud, Benioff lays out four primary reasons for the introduction of the cloud-based SaaS model:

  1. Realigning vendor success with customer success by creating a subscription-based pricing model that grows with each customer’s usage (providing the opportunity to “land and expand”). Previously, software licenses often cost millions of dollars and were paid upfront, after which the customer was obligated to pay an additional 20 percent per year in support fees. This traditional pricing structure created significant financial barriers to adoption and made procurement painful and elongated.
  2. Putting software in the browser to kill the client-server enterprise software delivery experience. Benioff recognized that consumers were increasingly comfortable using websites to accomplish complex tasks. By utilizing the browser, Salesforce avoided the complex local client installation and allowed its software to be accessed anywhere, anytime and on any device.
  3. Sharing the cost of expensive compute resources across multiple customers by leveraging a multi-tenant architecture. This ensured that no individual customer needed to invest in expensive computing hardware required to run a given monolithic application. For context, in 1999 a gigabyte of RAM cost about $1,000 and a TB of disk storage was $30,000. Benioff cited a typical enterprise hardware purchase of $385,000 in order to run Siebel’s CRM product that might serve 200 end-users.
  4. Democratizing the availability of software by removing the installation, maintenance and upgrade challenges. Drawing from his background at Oracle, he cited experiences where it took 6-18 months to complete the installation process. Additionally, upgrades were notorious for their complexity and caused significant downtime for customers. Managing enterprise applications was a very manual process, generally with each IT org becoming the ops team executing a physical run-book for each application they purchased.
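The pricing contrast in the first point can be made concrete with a small sketch. The numbers below are entirely hypothetical (a $1M perpetual license versus 200 seats at $65 per month), chosen only to illustrate the shape of the two models:

```python
# Hypothetical numbers, purely to illustrate the two models described
# above: a perpetual license paid upfront plus 20 percent annual
# support fees, versus per-seat subscription pricing.

def perpetual_cost(license_fee, years, support_rate=0.20):
    """Upfront license, then support fees in each subsequent year."""
    return license_fee + license_fee * support_rate * (years - 1)

def subscription_cost(seats, per_seat_monthly, years):
    """Pay-as-you-go: seats times monthly price, no upfront payment."""
    return seats * per_seat_monthly * 12 * years

# A $1M license carried for five years vs. 200 seats at $65/month:
print(perpetual_cost(1_000_000, 5))    # → 1800000.0
print(subscription_cost(200, 65, 5))   # → 780000
```

The point is not the totals (both are invented) but the shape: the subscription buyer pays nothing upfront and can shrink or grow the contract as usage changes, which is exactly the realignment Benioff was after.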

These arguments also happen to be, more or less, the same ones made by infrastructure-as-a-service (IaaS) providers such as Amazon Web Services during their early days in the mid-to-late ’00s. However, IaaS adds value at a layer deeper than SaaS, providing the raw building blocks rather than the end product. The result of their success in renting cloud computing, storage and network capacity has been many more SaaS applications than ever would have been possible if everybody had to follow the model Salesforce did several years earlier.

Suddenly able to access computing resources by the hour—and free from large upfront capital investments or having to manage complex customer installations—startups forsook software for SaaS in the name of economics, simplicity and much faster user growth.

Source: Getty Images

It’s a different IT world in 2018

Fast-forward to today, and in some ways it’s clear just how prescient Benioff was in pushing the world toward SaaS. Of the four reasons laid out above, Benioff nailed the first two:

  • Subscription is the right pricing model: The subscription pricing model for software has proven to be the most effective way to create customer and vendor success. Stalwart products like Microsoft Office and the Adobe Suite successfully made the switch from the upfront model to thriving subscription businesses years ago. Today, subscription pricing is the norm for many flavors of software and services.
  • Better user experience matters: Software accessed through the browser or thin, native mobile apps (leveraging the same APIs and delivered seamlessly through app stores) has long since become ubiquitous. The consumerization of IT was a real trend, and it has driven the habits from our personal lives into our business lives.

In other areas, however, things today look very different than they did back in 1999. In particular, Benioff’s other two primary reasons for embracing SaaS no longer seem so compelling. Ironically, IaaS economies of scale (especially once Google and Microsoft began competing with AWS in earnest) and software-development practices developed inside those “web scale” companies played major roles in spurring these changes:

  • Computing is now cheap: The cost of compute and storage has been driven down so dramatically that there are limited cost savings in shared resources. Today, a gigabyte of RAM is about $5 and a terabyte of disk storage is about $30 if you buy them directly. Cloud providers give away resources to small users and charge only pennies per hour for standard-sized instances. By comparison, at the same time that Salesforce was founded, Google was running on its first data center—with combined total compute and RAM comparable to that of a single iPhone X. That is not a joke.
  • Installing software is now much easier: The process of installing and upgrading modern software has become automated with the emergence of continuous integration and deployment (CI/CD) and configuration-management tools. With the rapid adoption of containers and microservices, cloud-native infrastructure has become the de facto standard for local development and is becoming the standard for far more reliable, resilient and scalable cloud deployment. Enterprise software packaged as a set of Docker containers orchestrated by Kubernetes or Docker Swarm, for example, can be installed pretty much anywhere and be live in minutes.
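To make that portability concrete, here is a minimal, entirely hypothetical Kubernetes manifest; the application name and image are placeholders, but a manifest of this shape can be installed on a public cloud’s managed Kubernetes, in a VPC, or on an on-prem cluster with a single `kubectl apply -f` command:

```yaml
# Hypothetical example: "crm-app" and its image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: crm-app
spec:
  replicas: 3                       # run three identical copies
  selector:
    matchLabels:
      app: crm-app
  template:
    metadata:
      labels:
        app: crm-app
    spec:
      containers:
        - name: crm-app
          image: example.com/crm-app:1.0   # placeholder image
          ports:
            - containerPort: 8080
```

Nothing in the manifest says anything about where the cluster runs, which is precisely the point: the deployment description has been decoupled from the infrastructure underneath it.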

Source: Getty Images/ERHUI1979

What Benioff didn’t foresee

Several other factors have also emerged in the last few years that raise the question of whether the traditional definition of SaaS can really be the only one going forward. Here, too, there’s irony in the fact that many of the forces pushing software back toward self-hosting and management can be traced directly to the success of SaaS itself, and of cloud computing in general:

  1. Cloud computing can now be “private”: Virtual private clouds (VPCs) in the IaaS world allow enterprises to maintain root control of the OS, while outsourcing the physical management of machines to providers like Google, DigitalOcean, Microsoft, Packet or AWS. This allows enterprises (like Capital One) to relinquish hardware management and the headache it often entails, but retain control over networks, software and data. It is also far easier for enterprises to get the necessary assurance for the security posture of Amazon, Microsoft and Google than it is to get the same level of assurance for each of the tens of thousands of possible SaaS vendors in the world.
  2. Regulations can penalize centralized services: One of the underappreciated consequences of Edward Snowden’s leaks, as well as an awakening to the sometimes questionable data-privacy practices of companies like Facebook, is an uptick in governments and enterprises trying to protect themselves and their citizens from prying eyes. Using applications hosted in another country or managed by a third party exposes enterprises to a litany of legal issues. The European Union’s GDPR law, for example, exposes SaaS companies to more potential liability with each piece of EU-citizen data they store, and puts enterprises on the hook for how their SaaS providers manage data.
  3. Data breach exposure is higher than ever: A corollary to the point above is the increased exposure to cybercrime that companies face as they build out their SaaS footprints. All it takes is one employee at a SaaS provider clicking on the wrong link or installing the wrong Chrome extension to expose that provider’s customers’ data to criminals. If the average large enterprise uses 1,000+ SaaS applications and each of those vendors averages 250 employees, that’s an additional 250,000 possible points of entry for an attacker.
  4. Applications are much more portable: The SaaS revolution has resulted in software vendors developing their applications to be cloud-first, but they’re now building those applications using technologies (such as containers) that can help replicate the deployment of those applications onto any infrastructure. This shift to what’s called cloud-native computing means that the same complex applications you can sign up to use in a multi-tenant cloud environment can also be deployed into a private data center or VPC far more easily than previously possible. Companies like BigID, StackRox, Dashbase and others are taking a private cloud-native-instance-first approach to their application offerings. Meanwhile, SaaS stalwarts like Atlassian, Box, GitHub and many others are transitioning over to Kubernetes-driven, cloud-native architectures that provide this optionality in the future.
  5. The script got flipped on CIOs: Individuals and small teams within large companies now drive software adoption by selecting the tools (e.g., GitHub, Slack, HipChat, Dropbox), often SaaS, that best meet their needs. Once they learn what’s being used and how it’s working, CIOs are faced with the decision to either restrict network access to shadow IT or pursue an enterprise license—or the nearest thing to one—for those services. This trend has been so impactful that it spawned an entirely new category called cloud access security brokers—another vendor that needs to be paid, an additional layer of complexity, and another avenue for potential problems. Managing local versions of these applications brings control back to the CIO and CISO.

Source: Getty Images/MIKIEKWOODS

The future of software is location agnostic

As the pace of technological disruption picks up, the previous generation of SaaS companies is facing a future similar to the legacy software providers they once displaced. From mainframes up through cloud-native (and even serverless) computing, the goal for CIOs has always been to strike the right balance between cost, capabilities, control and flexibility. Cloud-native computing, which encompasses a wide variety of IT facets and often emphasizes open source software, is poised to deliver on these benefits in a manner that can adapt to new trends as they emerge.

The problem for many of today’s largest SaaS vendors is that they were founded and scaled out during the pre-cloud-native era, meaning they’re burdened by some serious technical and cultural debt. If they fail to make the necessary transition, they’ll be disrupted by a new generation of SaaS companies (and possibly traditional software vendors) that are agnostic toward where their applications are deployed and who applies the pre-built automation that simplifies management. This next generation of vendors will put more control in the hands of end customers (who crave control), while maintaining what vendors have come to love about cloud-native development and cloud-based resources.

So, yes, Marc Benioff and Salesforce were absolutely right to champion the “No Software” movement over the past two decades, because the model of enterprise software they targeted needed to be destroyed. In the process, however, Salesforce helped spur a cloud computing movement that would eventually rewrite the rules on enterprise IT and, now, SaaS itself.


Source: Tech Crunch

Original Content podcast: ‘Queer Eye’ season two is even more of a tearjerker

It’s only been a couple months since we reviewed the first season of Netflix’s revival of Queer Eye, but the show’s Fab Five are already back with another eight episodes in which they remake the homes, wardrobes and lives of their subjects.

For season two, however, they mix things up a little — not only does the format feel more varied, but the folks being helped now include a woman and a transgender man.

On the latest episode of the Original Content podcast, we’re joined by Henry Pickavet (editorial director at TechCrunch and co-host of the CTRL+T podcast) to discuss the show. We’re all fans: Queer Eye has its shortcomings, but it really works for us, with multiple episodes ending with tears, on- and off-screen.

We also recap some of the latest streaming and entertainment news, including AT&T’s acquisition of Time Warner, Comcast’s new bid for Fox and Netflix’s addition of Minecraft: Story Mode.

You can listen in the player below, subscribe using Apple Podcasts or find us in your podcast player of choice. If you like the show, please let us know by leaving a review on Apple. You also can send us feedback directly.


Source: Tech Crunch

The techlash

People hate hubris and hypocrisy more than they hate evil, which is, I think, why we’re seeing the beginnings of a bipartisan cultural backlash against the tech industry. A backlash which is wrongly conceived and wrongly targeted … but not entirely unfounded. It’s hard to shake the sense that, as an industry, we are currently abdicating some of our collective responsibility to the world.

I don’t want to overstate the case. The tech industry remained the single most trusted entity in America as recently as last year, according to the Edelman Trust Barometer. Jeff Bezos is the wealthiest man in the world, and Elon Musk probably its highest-profile billionaire; of course they’re going to attract flak from all sides.

Furthermore, tech has become enormously more powerful and influential over the last decade. The Big Five tech companies now occupy the top five slots on the Fortune 500, whereas in 2008, Hewlett-Packard was tech’s lone Top Ten representative at #9. Power breeds resentment. Some kind of backlash was inevitable.

And yet — the tech industry is by some distance the least objectionable of the world’s power centers right now. The finance industry has become, to paraphrase Rolling Stone, a vampire squid wrapped around our collective economic throat, siphoning off a quarter of our lifeblood via increasingly complex financial structures which provide very little benefit to the rest of us. But a combination of learned helplessness and lack of hypocrisy — in that very few hedge fund managers pretend to be making the world a better place for anyone but their clients — shields them from anything like the rancor they deserve.

Meanwhile, we’re in the midst of a worldwide right-wing populist uprising that has led governments around the world to treat desperate refugees like nonhuman scum: turning them away by the boatload in Europe; imprisoning them on a godforsaken remote island in Australia; tearing children from their parents and caging them in America.

Tesla and Amazon’s treatment of factory and warehouse workers is at best questionable and at worst egregiously wrong … though if they were all replaced by robots, that would eliminate those complaints but also all of those jobs, which makes the complaints look pretty short-sighted. But it’s not whataboutism to suggest that outrage should be proportional to the relative scale of the offense in question. If it isn’t, then that indicates some seriously skewed priorities. What is it about the tech industry’s relatively venial sins, compared to those of finance and government, which so sticks in the craw of its critics?

Partly it’s the perceived hubris and hypocrisy — that we talk about “making the world a better place” when in fact we sometimes seem to only be making it a better place for ourselves. Life is pretty nice for those of us in the industry, and keeps getting nicer. We like to pretend that slowly, bit by bit, life is getting better for everyone else, too, while, or sometimes even because, we focus on our cool projects, and that the rest of the world will eventually get to live like us.

Which is even true, for a lot of people! I was in China a couple of months ago: it has changed almost inconceivably since my first visit two decades ago, and overwhelmingly for the better, despite all of the negative side effects of that change. The same is true for India. That’s 2.6 billion people right there whose lives have mostly been transformed for the better over the last couple of decades, courtesy of capitalism and technology. The same is true for other, smaller populations around the world.

However. There are many, many millions of people, including throngs in our own back yards, for whom the world has gotten decidedly worse over the last ten years, sometimes as a result of those same changes or related ones (such as increasing inequality, which is at least arguably partly driven by technology). Many more have been kept out of, or driven away from, our privileged little world for no good reason. Why is it somehow OK for us to shrug and turn our backs on them? The tech industry is enormously powerful now, and Peter Parker was on to something when he said: “with great power comes great responsibility.”

So why is it that we’re only willing to work on really cool long-term goals like electric cars and space exploration, and not the messy short-term stuff like inequality, housing, and the ongoing brutal oppression of refugees and immigrants? Don’t tell me it’s because those fields are too regulated and political; space travel and road transportation are heavily regulated and not exactly apolitical in case you haven’t noticed.

That painful, difficult stuff is for governments, we say. That’s for international diplomacy. That’s someone else’s problem. Until recently — and maybe even still, for now — this has been true. But with growing power comes growing responsibility. At some point, and a lot of our critics think we have already passed it, those problems become ours, too. Kudos to people like Salesforce’s Marc Benioff, who says, “But we cannot delegate these complex problems off to the government and say, ‘We’re not all part of it,’” for beginning to tackle them.

Let’s hope he’s only among the first. And let’s hope we find a way for technology to help with the overarching problem of incompetent and/or malevolent governments, while we’re at it.


Source: Tech Crunch

TechCrunch’s Startup Battlefield is coming soon to Beirut, São Paulo and Lagos

Everyone knows there are thriving startup communities outside of obvious hubs like San Francisco, Berlin, Bangalore and Beijing, but they don’t always get the support they deserve. Last year, TechCrunch took a major page from its playbook, the Startup Battlefield competition, and staged the event in Nairobi, Kenya, to find the best early-stage startup in Sub-Saharan Africa, and in Sydney, Australia, to do the same for Australia and New Zealand. Both were successes, thanks to talented founders and the hard-traveling TechCrunch team. And now we’re pleased to announce that we’re stepping up our commitment to emerging ecosystems.

TechCrunch is once again teaming up with Facebook, our partner for last year’s Nairobi event, to bring the Startup Battlefield to three major cities representing regions with vital, emerging startup communities. In Beirut, TechCrunch’s editors will strive to find the best early stage startup in the Middle East and North Africa. In São Paulo, the hunt is for the best in Latin America. And in Lagos, Nigeria, TechCrunch will once again find the top startup in Sub-Saharan Africa.

Early-stage startups are welcome to apply. We will choose 15 companies in each region to compete, and we will provide travel support for the finalists to reach the host city. The finalists will also receive intensive coaching from TechCrunch’s editors to hone their pitches to a razor’s edge before they take the stage in front of top venture capitalists from the region and around the world. Winners will receive $25,000 plus a trip for two to the next TechCrunch Disrupt event, where they can exhibit free of charge and, if qualified, have a chance to be selected to participate in the Startup Battlefield competition associated with that Disrupt. In the world of founders, the Startup Battlefield finalists are an elite group; the more than 750 Startup Battlefield alums have raised over $8 billion and produced 100+ exits to date.

What are the dates? They will be finalized shortly, but Beirut is on track for early October, São Paulo for early November, and Lagos for early December. In the meantime, founders eager to start an application for one of these Startup Battlefields may do so by visiting apply.techcrunch.com. Look for more details next week.

Interested in sponsoring one of the events? Email us at Sponsors@TechCrunch.com


Source: Tech Crunch

Facebook’s new AI research is a real eye-opener

There are plenty of ways to manipulate photos to make you look better, remove red eye or lens flare, and so on. But so far the blink has proven a tenacious opponent of good snapshots. That may change with research from Facebook that replaces closed eyes with open ones in a remarkably convincing manner.

It’s far from the only example of intelligent “in-painting,” as the technique is called when a program fills in a space with what it thinks belongs there. Adobe in particular has made good use of it with its Content-Aware Fill, allowing users to seamlessly replace undesired features, for example a protruding branch or a cloud, with a pretty good guess at what would be there if it weren’t.

But some features are beyond the tools’ capacity to replace, one of which is eyes. Their detailed and highly variable nature makes it particularly difficult for a system to change or create them realistically.

Facebook, which probably has more pictures of people blinking than any other entity in history, decided to take a crack at this problem.

It does so with a generative adversarial network (GAN), essentially a machine learning system that tries to fool itself into thinking its creations are real. In a GAN, one part of the system learns to recognize, say, faces, and another part of the system repeatedly creates images that, based on feedback from the recognition part, gradually grow in realism.
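For readers curious about the mechanics, here is a deliberately tiny sketch of that adversarial loop in plain NumPy. It is not Facebook’s model: the “data” is a 1-D Gaussian rather than images of eyes, and both networks are single linear units, but the structure is the same: a discriminator learns to separate real from fake while a generator learns to fool it.

```python
import numpy as np

# Toy GAN (illustrative only, not Facebook's eye-inpainting model).
# The generator g(z) = w*z + b learns to mimic samples from a 1-D
# Gaussian; the discriminator d(x) = sigmoid(a*x + c) learns to tell
# real samples from generated ones.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w, b = 0.1, 0.0   # generator parameters
a, c = 0.0, 0.0   # discriminator parameters
lr = 0.01

for step in range(2000):
    real = rng.normal(3.0, 1.0, size=32)   # samples of the "real" data
    z = rng.uniform(-1.0, 1.0, size=32)    # noise fed to the generator
    fake = w * z + b

    # Discriminator ascends log d(real) + log(1 - d(fake)).
    dr = sigmoid(a * real + c)
    df = sigmoid(a * fake + c)
    a += lr * np.mean((1.0 - dr) * real - df * fake)
    c += lr * np.mean((1.0 - dr) - df)

    # Generator ascends log d(fake): it improves only by fooling
    # the current discriminator.
    df = sigmoid(a * fake + c)
    w += lr * np.mean((1.0 - df) * a * z)
    b += lr * np.mean((1.0 - df) * a)

# After training, b should have drifted toward the real data mean (3.0).
```

In the real system both players are deep convolutional networks and the generator is conditioned on the photo being repaired, but the push-and-pull of the two gradient updates is exactly this loop.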

From left to right: “Exemplar” images, source images, Photoshop’s eye-opening algorithm, and Facebook’s method.

In this case the network is trained to both recognize and replicate convincing open eyes. This could be done already, but as you can see in the examples at right, existing methods left something to be desired. They seem to paste in the eyes of the people without much consideration for consistency with the rest of the image.

Machines are naive that way: they have no intuitive understanding that opening one’s eyes does not also change the color of the skin around them. (For that matter, they have no intuitive understanding of eyes, color, or anything at all.)

What Facebook’s researchers did was to include “exemplar” data showing the target person with their eyes open, from which the GAN learns not just what eyes should go on the person, but how the eyes of this particular person are shaped, colored, and so on.

The results are quite realistic: there’s no color mismatch or obvious stitching because the recognition part of the network knows that that’s not how the person looks.

In testing, people mistook the fake eyes-opened photos for real ones, or said they couldn’t be sure which was which, more than half the time. And unless I knew a photo was definitely tampered with, I probably wouldn’t notice if I was scrolling past it in my newsfeed. Gandhi looks a little weird, though.

It still fails in some situations, creating weird artifacts if a person’s eye is partially covered by a lock of hair, or sometimes failing to recreate the color correctly. But those are fixable problems.

You can imagine the usefulness of an automatic eye-opening utility on Facebook that checks a person’s other photos and uses them as reference to replace a blink in the latest one. It would be a little creepy, but that’s pretty standard for Facebook, and at least it might save a group photo or two.


Source: Tech Crunch

First look at Instagram’s self-policing Time Well Spent tool

Are you Overgramming? Instagram is stepping up to help you manage overuse rather than leaving it to iOS and Android’s new screen time dashboards. Last month, after TechCrunch first reported Instagram was prototyping a Usage Insights feature, the Facebook sub-company’s CEO Kevin Systrom confirmed its forthcoming launch.

Tweeting our article, Systrom wrote “It’s true . . . We’re building tools that will help the IG community know more about the time they spend on Instagram – any time should be positive and intentional . . . Understanding how time online impacts people is important, and it’s the responsibility of all companies to be honest about this. We want to be part of the solution. I take that responsibility seriously.”

Now we have our first look at the tool via Jane Manchun Wong, who’s recently become one of TechCrunch’s favorite sources thanks to her skills at digging new features out of apps’ Android APK code. Though Usage Insights might change before an official launch, these screenshots give us an idea of what Instagram will include. Instagram declined to comment, saying it didn’t have any more to share about the feature at this time.

This unlaunched version of Instagram’s Usage Insights tool offers users a daily tally of their minutes spent on the app. They’ll be able to set a daily time-spent limit, and get a reminder once they exceed it. There’s also a shortcut to manage Instagram’s notifications so the app is less interruptive. Instagram has been spotted testing a new hamburger button that opens a slide-out navigation menu on the profile. That might be where the link for Usage Insights shows up, judging by this screenshot.

Instagram doesn’t appear to be going so far as to lock you out of the app after you hit your limit, or to fade it to grayscale, which might annoy advertisers and businesses. But offering a handy way to monitor your usage that isn’t buried in your operating system’s settings could make users more mindful.

Instagram has an opportunity to be a role model here, especially if it gives its Usage Insights feature sharper teeth. For example, rather than a single notification when you hit your daily limit, it could remind you every 15 minutes thereafter, or create some persistent visual flag so you know you’ve broken your self-imposed rule.
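That sharper-teeth policy is simple to state precisely. A minimal sketch, with an invented function name and thresholds (this is not Instagram’s implementation):

```python
def reminders_due(minutes_used, daily_limit, nag_interval=15):
    """Usage marks (in minutes) at which a reminder would fire:
    one when the daily limit is hit, then one per nag_interval
    of additional use beyond it."""
    if minutes_used < daily_limit:
        return []
    overage = minutes_used - daily_limit
    return [daily_limit + k * nag_interval
            for k in range(overage // nag_interval + 1)]

print(reminders_due(50, 30))   # → [30, 45]
print(reminders_due(10, 30))   # → []
```

The design choice worth noting is that the reminders recur: a single dismissable alert is easy to ignore, while a repeating one imposes a small, persistent cost on breaking your own rule.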

Instagram has already started to push users towards healthier behavior with a “You’re all caught up” notice when you’ve seen everything in your feed and should stop scrolling.

I expect more apps to attempt to self-police with tools like these rather than leaving themselves at the mercy of iOS’s Screen Time and Android’s Digital Wellbeing features that offer more drastic ways to enforce your own good intentions.

Both let you see overall usage of your phone and stats about individual apps. iOS lets you easily dismiss alerts about hitting your daily limit in an app but delivers a weekly usage report (ironically via notification), while Android will gray out an app’s icon and force you to go to your settings to unlock an app once you exceed your limit.

For Android users especially, Instagram wants to avoid looking like such a time sink that you put one of those hard limits on your use. In that sense, self-policing shows both empathy for its users’ mental health, but is also a self-preservation strategy. With Instagram slated to launch a long-form video hub that could drive even longer session times this week, Usage Insights could be seen as either hypocritical or more necessary than ever.

New time management tools coming to iOS (left) and Android (right). Images via The Verge

Instagram is one of the world’s most beloved apps, but also one of the most easily abused. From envy spiraling as you watch the highlights of your friends’ lives to body-image issues propelled by its endless legions of models, there are plenty of ways to make yourself feel bad scrolling the Insta feed. And since there’s so little text, no links and few calls for participation, it’s easy to zombie-browse in the passive way research shows is most dangerous.

We’re in a crisis of attention. Mobile app business models often rely on maximizing our time spent to maximize their ad or in-app purchase revenue. But carrying the bottomless temptation of the Internet in our pockets threatens to leave us distracted, less educated, and depressed. We’ve evolved to crave dopamine hits from blinking lights and novel information, but never had such an endless supply.

There’s value to connecting with friends by watching their days unfold through Instagram and other apps. But tech giants are thankfully starting to be held responsible for helping us balance that with living our own lives.


Source: Tech Crunch

VCs serve up a large helping of cash to startups disrupting food

Here is what your daily menu might look like if recently funded startups have their way.

You’ll start the day with a nice, lightly caffeinated cup of cheese tea. Chase away your hangover with a cold bottle of liver-boosting supplement. Then slice up a few strawberries, fresh-picked from the corner shipping container.

Lunch is full of options. Perhaps a tuna sandwich made with a plant-based, tuna-free fish. Or, if you’re feeling more carnivorous, grab a grilled chicken breast fresh from the lab that cultured its cells, while crunching on a side of mushroom chips. And for extra protein, how about a brownie?

Dinner might be a pizza so good you send your compliments to the chef — only to discover the chef is a robot. For dessert, have some gummy bears. They’re high in fiber with almost no sugar.

Sound terrifying? Tasty? Intriguing? If you checked tasty and intriguing, then here is some good news: The concoctions highlighted above are all products available (or under development) at food and beverage startups that have raised venture and seed funding this past year.

These aren’t small servings of capital, either. A Crunchbase News analysis of venture funding for the food and beverage category found that startups in the space gobbled up more than $3 billion globally in disclosed investment over the past 12 months. That includes a broad mix of supersize deals, tiny seed rounds and everything in-between.

Spending several hours looking at all these funding rounds leaves one with a distinct sense that eating habits are undergoing a great deal of flux. And while we can’t predict what the menu of the future will really hold, we can highlight some of the trends. For this initial installment in our two-part series, we’ll start with foods. Next week, we’ll zero in on beverages.

Chickenless nuggets and fishless tuna

For protein lovers disenchanted with commercial livestock farming, the future looks good. At least eight startups developing plant-based and alternative proteins closed rounds in the past year, focused on everything from lab meat to fishless fish to fast-food nuggets.

New investments add momentum to what was already a pretty hot space. To date, more than $600 million in known funding has gone to what we’ve dubbed the “alt-meat” sector, according to Crunchbase data. Actual investment levels may be quite a bit higher since strategic investors don’t always reveal round size.

In recent months, we’ve seen particularly strong interest in the lab-grown meat space. At least three startups in this area — Memphis Meats, SuperMeat and Wild Type — raised multi-million dollar rounds this year. That could be a signal that investors have grown comfortable with the concept, and now it’s more a matter of who will be early to market with a tasty and affordable finished product.

Makers of meatless versions of common meat dishes are also attracting capital. Two of the top funding recipients in our data set include Seattle Food Tech, which is working to cost-effectively mass-produce meatless chicken nuggets, and Good Catch, which wants to hook consumers on fishless seafoods. While we haven’t sampled their wares, it does seem like they have chosen some suitable dishes to riff on. After all, in terms of taste, both chicken nuggets and tuna salad are somewhat removed from their original animal protein sources, making it seemingly easier to sneak in a veggie substitute.

Robot chefs

Another trend we saw catching on with investors is robot chefs. Modern cooking is already a gadget-driven process, so it’s not surprising investors see this as an area ripe for broad adoption.

Pizza, the perennial takeout favorite, seems to be a popular area for future takeover by robots, with at least two companies securing rounds in recent months. Silicon Valley-based Zume, which raised $48 million last year, uses robots for tasks like spreading sauce and moving pies in and out of the oven. France’s EKIM, meanwhile, recently opened what it describes as a fully autonomous restaurant staffed by pizza robots cooking as customers watch.

Salad, pizza’s healthier companion side dish, is also getting roboticized. Just this week, Chowbotics, a developer of robots for food service whose lineup includes Sally the salad robot, announced an $11 million Series A round.

Those aren’t the only players. We’ve put together a more complete list of recently launched or funded robot food startups here.

Beyond sugar

Sugar substitutes aren’t exactly a new area of innovation. Diet Rite, often credited as the original diet soda, hit the market in 1958. Since then, we’ve had 60 years of mass-marketing for low-calorie sweeteners, from aspartame to stevia.

It’s not over. In recent quarters, we’ve seen a raft of funding rounds for startups developing new ways to reduce or eliminate sugar in many of the foods we’ve come to love. On the dessert and candy front, Siren Snacks and SmartSweets are looking to turn favorite indulgences like brownies and gummy bears into healthy snack options.

The quest for good-for-you sugar also continues. The latest funding recipient in this space appears to be Bonumuse, which is working to commercialize two rare sugars, Tagatose and Allulose, as lower-calorie and potentially healthier substitutes for table sugar. We’ve compiled a list of more sugar-reduction-related startups here.

Where is it all headed?

It’s tough to tell which early-stage food startups will take off and which will wind up in the scrap bin. But looking in aggregate at what they’re cooking up, it looks like the meal of the future will be high in protein, low in sugar and prepared by a robot.


Source: Tech Crunch

UK report warns DeepMind Health could gain ‘excessive monopoly power’

DeepMind’s foray into digital health services continues to raise concerns. The latest worries are voiced by a panel of external reviewers appointed by the Google-owned AI company to report on its operations after its initial data-sharing arrangements with the U.K.’s National Health Service (NHS) ran into a major public controversy in 2016.

The DeepMind Health Independent Reviewers’ 2018 report flags a series of risks and concerns, as they see it, including the potential for DeepMind Health to be able to “exert excessive monopoly power” as a result of the data access and streaming infrastructure that’s bundled with provision of the Streams app — and which, contractually, positions DeepMind as the access-controlling intermediary between the structured health data and any other third parties that might, in the future, want to offer their own digital assistance solutions to the Trust.

While the underlying FHIR (Fast Healthcare Interoperability Resources) standard deployed by DeepMind for Streams uses an open API, the contract between the company and the Royal Free Trust funnels connections via DeepMind’s own servers and prohibits connections to other FHIR servers — a commercial structure that seemingly works against the openness and interoperability DeepMind’s co-founder Mustafa Suleyman has claimed to support.
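The openness at issue is a property of the data format, not the contract: a FHIR resource is standardized JSON that any conformant client can parse, regardless of which vendor’s server produced it. As a minimal illustration, using values borrowed from the FHIR specification’s own sample Patient resource rather than any real Royal Free data:

```python
import json

# A minimal FHIR "Patient" resource, of the kind a FHIR-conformant
# server returns from GET [base]/Patient/{id}. Values are illustrative,
# taken from the FHIR spec's example Patient, not real patient data.
raw = """
{
  "resourceType": "Patient",
  "id": "example",
  "name": [{"family": "Chalmers", "given": ["Peter", "James"]}],
  "birthDate": "1974-12-25"
}
"""

patient = json.loads(raw)
assert patient["resourceType"] == "Patient"

# Because the schema is standardized, any client can extract the same
# fields without knowing which vendor's server produced the resource.
name = patient["name"][0]
display = " ".join(name["given"]) + " " + name["family"]
print(display)  # Peter James Chalmers
```

The lock-in the reviewers describe therefore sits in the contractual routing of connections, not in any technical limitation of the resources themselves.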

“There are many examples in the IT arena where companies lock their customers into systems that are difficult to change or replace. Such arrangements are not in the interests of the public. And we do not want to see DeepMind Health putting itself in a position where clients, such as hospitals, find themselves forced to stay with DeepMind Health even if it is no longer financially or clinically sensible to do so; we want DeepMind Health to compete on quality and price, not by entrenching legacy position,” the reviewers write.

They do point to DeepMind’s “stated commitment to interoperability of systems” and “their adoption of the FHIR open API” as positive indications, writing: “This means that there is potential for many other SMEs to become involved, creating a diverse and innovative marketplace which works to the benefit of consumers, innovation and the economy.”

“We also note DeepMind Health’s intention to implement many of the features of Streams as modules which could be easily swapped, meaning that they will have to rely on being the best to stay in business,” they add. 

However, stated intentions and future potentials are clearly not the same as on-the-ground reality. And, as it stands, a technically interoperable app-delivery infrastructure is being encumbered by prohibitive clauses in a commercial contract — and by a lack of regulatory pushback against such behavior.

The reviewers also raise concerns about an ongoing lack of clarity around DeepMind Health’s business model — writing: “Given the current environment, and with no clarity about DeepMind Health’s business model, people are likely to suspect that there must be an undisclosed profit motive or a hidden agenda. We do not believe this to be the case, but would urge DeepMind Health to be transparent about their business model, and their ability to stick to that without being overridden by Alphabet. For once an idea of hidden agendas is fixed in people’s mind, it is hard to shift, no matter how much a company is motivated by the public good.”

“We have had detailed conversations about DeepMind Health’s evolving thoughts in this area, and are aware that some of these questions have not yet been finalised. However, we would urge DeepMind Health to set out publicly what they are proposing,” they add. 

DeepMind has suggested it wants to build healthcare AIs that are capable of charging by results. But Streams does not involve any AI. The service is also being provided to NHS Trusts for free, at least for the first five years — raising the question of how exactly the Google-owned company intends to recoup its investment.

Google of course monetizes a large suite of free-at-the-point-of-use consumer products — such as the Android mobile operating system; its cloud email service Gmail; and the YouTube video sharing platform, to name three — by harvesting people’s personal data and using that information to inform its ad targeting platforms.

Hence the reviewers’ recommendation for DeepMind to set out its thinking on its business model to avoid its intentions vis-a-vis people’s medical data being viewed with suspicion.

The company’s historical modus operandi also underlines the potential monopoly risks if DeepMind is allowed to carve out a dominant platform position in digital healthcare provision — given how effectively its parent has been able to turn a free-for-OEMs mobile OS (Android) into global smartphone market OS dominance, for example.

So, while DeepMind only has a handful of contracts with NHS Trusts for the Streams app and delivery infrastructure at this stage, the reviewers’ concerns over the risk of the company gaining “excessive monopoly power” do not seem overblown.

They are also worried about DeepMind’s ongoing vagueness about how exactly it works with its parent Alphabet, and what data could ever be transferred to the ad giant — an inevitably queasy combination when stacked against DeepMind’s handling of people’s medical records.

“To what extent can DeepMind Health insulate itself against Alphabet instructing them in the future to do something which it has promised not to do today? Or, if DeepMind Health’s current management were to leave DeepMind Health, how much could a new CEO alter what has been agreed today?” they write.

“We appreciate that DeepMind Health would continue to be bound by the legal and regulatory framework, but much of our attention is on the steps that DeepMind Health have taken to take a more ethical stance than the law requires; could this all be ended? We encourage DeepMind Health to look at ways of entrenching its separation from Alphabet and DeepMind more robustly, so that it can have enduring force to the commitments it makes.”

Responding to the report’s publication on its website, DeepMind writes that it’s “developing our longer-term business model and roadmap.”

“Rather than charging for the early stages of our work, our first priority has been to prove that our technologies can help improve patient care and reduce costs. We believe that our business model should flow from the positive impact we create, and will continue to explore outcomes-based elements so that costs are at least in part related to the benefits we deliver,” it continues.

So it has nothing to say to defuse the reviewers’ concerns about making its intentions for monetizing health data plain — beyond deploying a few choice PR soundbites.

On its links with Alphabet, DeepMind also has little to say, writing only that: “We will explore further ways to ensure there is clarity about the binding legal frameworks that govern all our NHS partnerships.”

“Trusts remain in full control of the data at all times,” it adds. “We are legally and contractually bound to only using patient data under the instructions of our partners. We will continue to make our legal agreements with Trusts publicly available to allow scrutiny of this important point.”

“There is nothing in our legal agreements with our partners that prevents them from working with any other data processor, should they wish to seek the services of another provider,” it also claims in response to additional questions we put to it.

“We hope that Streams can help unlock the next wave of innovation in the NHS. The infrastructure that powers Streams is built on state-of-the-art open and interoperable standards, known as FHIR. The FHIR standard is supported in the UK by NHS Digital, NHS England and the INTEROPen group. This should allow our partner trusts to work more easily with other developers, helping them bring many more new innovations to the clinical frontlines,” it adds in additional comments to us.

“Under our contractual agreements with relevant partner trusts, we have committed to building FHIR API infrastructure within the five year terms of the agreements.”

Asked about the progress it’s made on a technical audit infrastructure for verifying access to health data, which it announced last year, it reiterated the wording on its blog, saying: “We will remain vigilant about setting the highest possible standards of information governance. At the beginning of this year, we appointed a full time Information Governance Manager to oversee our use of data in all areas of our work. We are also continuing to build our Verifiable Data Audit and other tools to clearly show how we’re using data.”

So developments on that front look as slow as we expected.

The Google-owned U.K. AI company began its push into digital healthcare services in 2015, quietly signing an information-sharing arrangement with a London-based NHS Trust that gave it access to around 1.6 million people’s medical records for developing an alerts app for a condition called Acute Kidney Injury.

It also inked an MoU with the Trust where the pair set out their ambition to apply AI to NHS data sets. (They even went so far as to get ethical sign-off for an AI project — but have consistently claimed the Royal Free data was not fed to any AIs.)

However, the data-sharing collaboration ran into trouble in May 2016 when the scope of patient data being shared by the Royal Free with DeepMind was revealed (via investigative journalism, rather than by disclosures from the Trust or DeepMind).

None of the ~1.6 million people whose non-anonymized medical records had been passed to the Google-owned company had been informed or asked for their consent. And questions were raised about the legal basis for the data-sharing arrangement.

Last summer the U.K.’s privacy regulator concluded an investigation of the project — finding that the Royal Free NHS Trust had broken data protection rules during the app’s development.

Yet despite ethical questions and regulatory disquiet about the legality of the data sharing, the Streams project steamrollered on. And the Royal Free Trust went on to implement the app for use by clinicians in its hospitals, while DeepMind has also signed several additional contracts to deploy Streams to other NHS Trusts.

More recently, the law firm Linklaters completed an audit of the Royal Free Streams project, after being commissioned by the Trust as part of its settlement with the ICO. Though this audit only examined the current functioning of Streams. (There has been no historical audit of the lawfulness of people’s medical records being shared during the build and test phase of the project.)

Linklaters did recommend the Royal Free terminates its wider MoU with DeepMind — and the Trust has confirmed to us that it will be following the firm’s advice.

“The audit recommends we terminate the historic memorandum of understanding with DeepMind which was signed in January 2016. The MOU is no longer relevant to the partnership and we are in the process of terminating it,” a Royal Free spokesperson told us.

So DeepMind, probably the world’s most famous AI company, is in the curious position of being involved in providing digital healthcare services to U.K. hospitals that don’t actually involve any AI at all. (Though it does have some ongoing AI research projects with NHS Trusts too.)

In mid 2016, at the height of the Royal Free DeepMind data scandal — and in a bid to foster greater public trust — the company appointed the panel of external reviewers who have now produced their second report looking at how the division is operating.

And it’s fair to say that much has happened in the tech industry since the panel was appointed to further undermine public trust in tech platforms and algorithmic promises — including the ICO’s finding that the initial data-sharing arrangement between the Royal Free and DeepMind broke U.K. privacy laws.

The eight members of the panel for the 2018 report are: Martin Bromiley OBE; Elisabeth Buggins CBE; Eileen Burbidge MBE; Richard Horton; Dr. Julian Huppert; Professor Donal O’Donoghue; Matthew Taylor; and Professor Sir John Tooke.

In their latest report the external reviewers warn that the public’s view of tech giants has “shifted substantially” versus where it was even a year ago — asserting that “issues of privacy in a digital age are if anything, of greater concern.”

At the same time politicians are also gazing rather more critically on the works and social impacts of tech giants.

Yet the U.K. government has also been keen to position itself as a supporter of AI, providing public funds for the sector and, in its Industrial Strategy white paper, identifying AI and data as one of four so-called “Grand Challenges” where it believes the U.K. can “lead the world for years to come” — even specifically name-checking DeepMind as one of a handful of leading-edge homegrown AI businesses for the country to be proud of.

Still, questions over how to manage and regulate public sector data and AI deployments — especially in highly sensitive areas such as healthcare — remain to be clearly addressed by the government.

Meanwhile, the encroaching ingress of digital technologies into the healthcare space — even when those technologies involve no AI at all — is already presenting major challenges, putting pressure on existing information governance rules and structures, and raising the specter of monopoly risk.

Asked whether it offers any guidance to NHS Trusts around digital assistance for clinicians, including specifically whether it requires multiple options be offered by different providers, the NHS’ digital services provider, NHS Digital, referred our question on to the Department of Health (DoH), saying it’s a matter of health policy.

The DoH in turn referred the question to NHS England, the executive non-departmental body which commissions contracts and sets priorities and directions for the health service in England.

And at the time of writing, we’re still waiting for a response from the steering body.

Ultimately it looks like it will be up to the health service to put in place a clear and robust structure for AI and digital decision services that fosters competition by design by baking in a requirement for Trusts to support multiple independent options when procuring apps and services.

Without that important check and balance, the risk is that platform dynamics will quickly dominate and control the emergent digital health assistance space — just as big tech has dominated consumer tech.

But publicly funded healthcare decisions and data sets should not simply be handed to the single market-dominating entity that’s willing and able to burn the most resource to own the space.

Nor should government stand by and do nothing when there’s a clear risk that a vital area of digital innovation is at risk of being closed down by a tech giant muscling in and positioning itself as a gatekeeper before others have had a chance to show what their ideas are made of, and before even a market has had the chance to form. 


Source: Tech Crunch

Crown, a new app from Tinder’s parent company, turns dating into a game

If you’re already resentful of online dating culture and how it turned finding companionship into a game, you may not be quite ready for this: Crown, a new dating app that actually turns getting matches into a game. Crown is the latest project to launch from Match Group, the operator of a number of dating sites and apps including Match, Tinder, Plenty of Fish, OkCupid and others.

The app was thought up by Match product manager Patricia Parker, who understands first-hand both the challenges and the benefits of online dating: she met her husband online.

Crown won Match Group’s internal “ideathon,” and was then developed in-house by a team of millennial women, with a goal of serving women’s needs in particular.

The main problem Crown is trying to solve is the cognitive overload of using dating apps. As Match Group scientific advisor Dr. Helen Fisher explained a few years ago to Wired, dating apps can become addictive because there’s so much choice.

“The more you look and look for a partner the more likely it is that you’ll end up with nobody… It’s called cognitive overload,” she said. “There is a natural human predisposition to keep looking—to find something better. And with so many alternatives and opportunities for better mates in the online world, it’s easy to get into an addictive mode.”

Millennials are also prone to swipe fatigue, as they spend an average of 10 hours per week in dating apps, and are being warned to cut down or face burnout.

Crown’s approach to these issues is to turn getting matches into a game of sorts.

While other dating apps present you with an endless stream of people to pick from, Crown offers a more limited selection.

Every day at noon, you’re presented with 16 curated matches, picked by some mysterious algorithm. You move through them two at a time: the screen displays two photos instead of one, and you “crown” your winner. (Get it?) The process repeats, pair by pair, until you reach your “Final Four.”
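Crown’s actual matching logic isn’t public, but the elimination flow described above amounts to a single-elimination bracket: 16 profiles halve each round until four remain. A hypothetical sketch, with a `choose` function standing in for the user’s tap:

```python
def run_bracket(profiles, choose, final_size=4):
    """Pair profiles two at a time, keep each 'crowned' winner,
    and repeat rounds until only final_size profiles remain."""
    rnd = list(profiles)
    while len(rnd) > final_size:
        if len(rnd) % 2:
            raise ValueError("each round needs an even number of profiles")
        # Pair neighbours: (rnd[0], rnd[1]), (rnd[2], rnd[3]), ...
        rnd = [choose(a, b) for a, b in zip(rnd[::2], rnd[1::2])]
    return rnd

# Deterministic stand-in for the user's picks: prefer the earlier name.
finalists = run_bracket([f"user{i:02d}" for i in range(16)], min)
print(finalists)  # 16 -> 8 -> 4 profiles: the "Final Four"
```

With 16 entrants the loop runs exactly twice (16 to 8, then 8 to 4), which is why the daily batch size and the “Final Four” fit together so neatly.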

Those winners are then given the opportunity to chat with you, or they can choose to pass.

In addition to your own winners, you may also “win” the crown among other brackets, which gives you more matches to contend with.

Of course, getting dubbed a winner is a stronger signal on Crown than on an app like Tinder, where it’s more common for matches to not start conversations. This could encourage Crown users to chat, given they know there’s more of a genuine interest since they “beat out” several others. But on the flip side, getting passed on Crown is going to be a lot more of an obvious “no,” which could be discouraging.

“It’s like a ‘Bachelorette’-style process of elimination that helps users choose quality over quantity,” explains Andy Chen, vice president at Match Group. “Research shows that the human brain can only track a set number of relationships…and technology has not helped us increase this limit.”

Chen is referring to the Dunbar number, which says that people can only really maintain a max of some 150 social relationships. Giving users a never-ending list of possible matches on Tinder, then, isn’t helping people feel like they have options – it’s overloading the brain.

While turning matchmaking into a game feels a bit dehumanizing – maybe even more so than on Tinder, with its Hot-or-Not-inspired vibe – the team says Crown actually increases the odds, on average, of someone being selected, compared with traditional dating apps.

“When choosing one person over another, there is always a winner. The experience actually encourages a user playing the game to find reasons to say yes,” says Chen.

Crown has been live in a limited beta for a few months, but is now officially launched in L.A. (how appropriate) with more cities to come. For now, users outside L.A. will be matched with those closest to them.

There are today several thousand users on the app, and it’s organically growing, Chen says.

Plus, Crown is seeing day-over-day retention rates which are “already as strong” as Match Group’s other apps, we’re told.

Sigh. 

The app is a free download on iOS only for now. An Android version is coming, the website says.

 


Source: Tech Crunch

Judge says ‘literal but nonsensical’ Google translation isn’t consent for police search

Machine translation of foreign languages is undoubtedly a very useful thing, but if you’re going for anything more than directions or recommendations for lunch, its shallowness is a real barrier. And when it comes to the law and constitutional rights, a “good enough” translation doesn’t cut it, a judge has ruled.

The ruling (PDF) is not hugely consequential, but it is indicative of the evolving place in which translation apps find themselves in our lives and legal system. We are fortunate to live in a multilingual society, but for the present and foreseeable future it seems humans are still needed to bridge language gaps.

The case in question involved a Mexican man named Omar Cruz-Zamora, who was pulled over by cops in Kansas. When they searched his car, with his consent, they found quite a stash of meth and cocaine, which naturally led to his arrest.

But there’s a catch: Cruz-Zamora doesn’t speak English well, so the consent to search the car was obtained via an exchange facilitated by Google Translate — an exchange that the court found was insufficiently accurate to constitute consent given “freely and intelligently.”

The Fourth Amendment prohibits unreasonable search and seizure, and lacking a warrant or probable cause, the officers needed Cruz-Zamora to understand that he could refuse to let them search the car. That understanding is not evident from the exchange, during which both sides repeatedly failed to comprehend what the other was saying.

Not only that, but the actual translations provided by the app weren’t good enough to accurately communicate the question. For example, the officer asked “¿Puedo buscar el auto?” — the literal meaning of which is closer to “can I find the car,” not “can I search the car.” There’s no evidence that Cruz-Zamora made the connection between this “literal but nonsensical” translation and the real question of whether he consented to a search, let alone whether he understood that he had a choice at all.
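Google Translate is a neural system rather than a word-for-word lookup, but a deliberately naive glossary makes the court’s point vivid: a token-level rendering of the Spanish yields exactly the “literal but nonsensical” English quoted above. A toy sketch, with illustrative glossary entries:

```python
# Toy word-by-word "translation": each Spanish token maps to its most
# common out-of-context English gloss, losing the idiomatic sense of
# "buscar" (which in this exchange should read as "search", not "find").
glossary = {"puedo": "can I", "buscar": "find", "el": "the", "auto": "car"}

def literal(sentence):
    words = sentence.lower().strip("¿?").split()
    return " ".join(glossary.get(w, f"[{w}]") for w in words)

print(literal("¿Puedo buscar el auto?"))  # can I find the car
```

Real machine translation does far better than this on average, which is precisely why occasional failures like the one in this case are easy to miss.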

With consent invalidated, the search of the car is rendered unconstitutional, and the evidence against Cruz-Zamora is suppressed.

It doesn’t mean that consent is impossible via Google Translate or any other app — for example, if Cruz-Zamora had himself opened his trunk or doors to allow the search, that likely would have constituted consent. But it’s clear that app-based interactions are not a sure thing. This will be a case to consider not just for cops on the beat looking to help or investigate people who don’t speak English, but in courts as well.

Providers of machine translation services would have us all believe that those translations are accurate enough to use in most cases, and that in a few years they will replace human translators in all but the most demanding situations. This case suggests that machine translation can fail even the most basic tests, and as long as that possibility remains, we have to maintain a healthy skepticism.


Source: Tech Crunch