Registration is open for TC Sessions: Robotics + AI 2020

It’s time to get your robotics fix, startup fans. That’s right, TC Sessions: Robotics & AI returns to UC Berkeley’s Zellerbach Hall on March 3, 2020. Join us for a day-long deep-dive focused on the intersection of robotics and AI — arguably two of the most exciting and world-changing technologies.

Registration is now open. Save the date and save $100 when you buy an early bird ticket to TC Sessions: Robotics & AI 2020. Want to save even more? Buy in bulk. You’ll save an extra 18% when you purchase four or more tickets at once.

This is our fourth year hosting this event. Last year, 1,500 founders, technologists, engineering students and investors heard TechCrunch editors interview top leaders in AI and robotics, participated in workshops, watched live demos, attended speaker Q&As and enjoyed world-class networking. With so many advances in a range of technologies — AI, GPUs and sensors, to name just a few — it’s an exciting time to be part of this rapidly evolving space.

We’re building out the speaker roster and agenda, so keep checking back. In the meantime, take a look at last year’s agenda to get a sense of the quality programming you can expect.

Boston Dynamics founder Marc Raibert, a perennial favorite at TC Sessions: Robotics & AI, offers this perspective on the conference. It “blends the best of thoughtful, research-focused robotics with a unique business in technology focus.”

TC Sessions: Robotics & AI takes place on March 3, 2020 at UC Berkeley’s Zellerbach Hall. It’s not too early to save the date, and it’s never too early to save $100 on the price of admission. Join the top people in robotics and AI for a full day devoted to world-changing technologies.

Is your company interested in sponsoring or exhibiting at TC Sessions: Robotics & AI 2020? Contact our sponsorship sales team by filling out this form.



Source: Tech Crunch

WhatsApp blames — and sues — mobile spyware maker NSO Group over its zero-day calling exploit

WhatsApp has filed a suit in federal court accusing Israeli mobile surveillance maker NSO Group of creating an exploit that was used hundreds of times to hack into targets’ phones.

The lawsuit, filed in a California federal court, said the mobile surveillance outfit “developed their malware in order to access messages and other communications after they were decrypted” on target devices.

The attack worked by exploiting an audio-calling vulnerability in WhatsApp. Targets appeared to receive an ordinary call, but the malware would quietly infect the device with spyware, giving the attackers full access to the device.

In some cases it happened so quickly, the target’s phone may not have rung at all.

Because WhatsApp is end-to-end encrypted, it’s near-impossible to access the messages as they traverse the internet. But in recent years, governments and mobile spyware companies have begun targeting the devices where the messages were sent or received. The logic goes that if you hack the device, you can obtain its data.

That’s what WhatsApp says happened.

WhatsApp, owned by Facebook, quickly patched the vulnerability. Although blame fell fast on NSO Group, WhatsApp did not publicly accuse the company at the time — until now.

In an op-ed posted shortly after the suit was filed, WhatsApp head Will Cathcart said the messaging giant “learned that the attackers used servers and Internet-hosting services that were previously associated” with NSO Group, and that certain WhatsApp accounts used during the attacks were traced back to the company.

“While their attack was highly sophisticated, their attempts to cover their tracks were not entirely successful,” said Cathcart.

The attack involved disguising the malicious code as call settings, allowing the surveillance outfit to deliver the code as if it came from WhatsApp’s signaling servers. Once the malicious calls were delivered to the target’s phone, they “injected the malicious code into the memory of the target device — even when the target did not answer the call,” the complaint read. When the code was run, it sent a request to the surveillance company’s servers, and downloaded additional malware to the target’s device.

In total, some 1,400 targeted devices were affected by the exploit, the lawsuit said.

Most people were unaffected by the WhatsApp exploit. But WhatsApp said that over a hundred human rights defenders, journalists and “other members of civil society” were targeted by the attack.

Other targets included government officials and diplomats.

We’ve reached out to NSO Group for comment, but did not hear back.


Source: Tech Crunch

Tech giants still not doing enough to fight fakes, says European Commission

It’s a year since the European Commission got a bunch of adtech giants together to spill ink on a voluntary Code of Practice to do something — albeit, nothing very quantifiable — as a first step to stop the spread of disinformation online.

Its latest report card on this voluntary effort sums to: the platforms could do better.

The Commission said the same in January. And will doubtless say it again. Unless or until regulators grasp the nettle of online business models that profit by maximizing engagement. As the saying goes, lies fly while the truth comes stumbling after. So attempts to shrink disinformation without fixing the economic incentives to spread BS in the first place are mostly dealing in cosmetic tweaks and optics.

Signatories to the Commission’s EU Code of Practice on Disinformation are: Facebook, Google, Twitter, Mozilla, Microsoft and several trade associations representing online platforms, the advertising industry, and advertisers — including the Internet Advertising Bureau (IAB) and World Federation of Advertisers (WFA).

In a press release assessing today’s annual reports, compiled by signatories, the Commission expresses disappointment that no other Internet platforms or advertising companies have signed up since Microsoft joined as a late addition to the Code this year.

“We commend the commitment of the online platforms to become more transparent about their policies and to establish closer cooperation with researchers, fact-checkers and Member States. However, progress varies a lot between signatories and the reports provide little insight on the actual impact of the self-regulatory measures taken over the past year as well as mechanisms for independent scrutiny,” commissioners Věra Jourová, Julian King and Mariya Gabriel said in a joint statement. [emphasis ours]

“While the 2019 European Parliament elections in May were clearly not free from disinformation, the actions and the monthly reporting ahead of the elections contributed to limiting the space for interference and improving the integrity of services, to disrupting economic incentives for disinformation, and to ensuring greater transparency of political and issue-based advertising. Still, large-scale automated propaganda and disinformation persist and there is more work to be done under all areas of the Code. We cannot accept this as a new normal,” they add.

The risk, of course, is that the Commission’s limp-wristed code risks rapidly cementing a milky jelly of self-regulation in the fuzzy zone of disinformation as the new normal, as we warned when the Code launched last year.

The Commission continues to leave the door open (a crack) to doing something platforms can’t (mostly) ignore — i.e. actual regulation — saying its assessment of the effectiveness of the Code remains ongoing.

But that’s just a dangled stick. At this transitionary point between outgoing and incoming Commissions, it seems content to stay in a ‘must do better’ holding pattern. (Or: “It’s what the Commission says when it has other priorities,” as one source inside the institution put it.)

A comprehensive assessment of how the Code is working is slated as coming in early 2020 — i.e. after the new Commission has taken up its mandate. So, yes, that’s the sound of the can being kicked a few more months on.

Summing up its main findings from signatories’ self-marked ‘progress’ reports, the outgoing Commission says they have reported improved transparency compared with a year ago in discussing their respective policies against disinformation.

But it flags poor progress on implementing commitments to empower consumers and the research community.

“The provision of data and search tools is still episodic and arbitrary and does not respond to the needs of researchers for independent scrutiny,” it warns. 

This is, ironically, an issue that one of the signatories, Mozilla, has actively criticized others over — including Facebook, whose political ad API it reviewed damningly this year, finding it not fit for purpose and “designed in ways that hinders the important work of researchers, who inform the public and policymakers about the nature and consequences of misinformation”. So, er, ouch.

The Commission is also critical of what it says are “significant” variations in the scope of actions undertaken by platforms to implement “commitments” under the Code, noting that differences in implementation of platform policy, cooperation with stakeholders and sensitivity to electoral contexts persist across Member States, as do differences in the EU-specific metrics provided.

But given the Code only ever asked for fairly vague action in some pretty broad areas, without prescribing exactly what platforms were committing themselves to doing, nor setting benchmarks for action to be measured against, inconsistency and variety are really what you’d expect. That and the can being kicked down the road.

The Code did extract one quasi-firm commitment from signatories — on the issue of bot detection and identification — by getting platforms to promise to “establish clear marking systems and rules for bots to ensure their activities cannot be confused with human interactions”.

A year later it’s hard to see clear signs of progress on that goal. Platforms might argue, though, that most of their sweat on that front goes into what they claim is increased effort to catch and kill malicious bot accounts before they have a chance to spread any fakes.

Twitter’s annual report, for instance, talks about what it’s doing to fight “spam and malicious automation strategically and at scale” on its platform — saying its focus is “increasingly on proactively identifying problematic accounts and behaviour rather than waiting until we receive a report”; after which it says it aims to “challenge… accounts engaging in spammy or manipulative behavior before users are ​exposed to ​misleading, inauthentic, or distracting content”.

So, in other words, if Twitter does this perfectly — and catches every malicious bot before it has a chance to tweet — it might plausibly argue that bot labels are redundant. Though it’s clearly not in a position to claim it’s won the spam/malicious bot war yet. Ergo, its users remain at risk of consuming inauthentic tweets that aren’t clearly labeled as such (or even as ‘potentially suspect’ by Twitter). Presumably because these are the accounts that continue slipping under its bot-detection radar.

There’s also nothing in Twitter’s report about it labelling even (non-malicious) bot accounts as bots — for the purpose of preventing accidental confusion (after all satire misinterpreted as truth can also result in disinformation). And this despite the company suggesting a year ago that it was toying with adding contextual labels to bot accounts, at least where it could detect them.

In the event it has resisted adding any more badges to accounts, while an internal reform of its verification policy for verified account badges was put on pause last year.

Facebook’s report also only makes a passing mention of bots, under a section sub-headed “spam” — where it writes circularly: “Content actioned for spam has increased considerably, since we found and took action on more content that goes against our standards.”

It includes some data-points to back up this claim of more spam squashed — citing a May 2019 Community Standards Enforcement report — where it states that in Q4 2018 and Q1 2019 it acted on 1.8 billion pieces of spam in each of the quarters vs 737 million in Q4 2017; 836 million in Q1 2018; 957 million in Q2 2018; and 1.2 billion in Q3 2018. 

Though it’s lagging on publishing more up-to-date spam data now, noting in the report submitted to the EC that: “Updated spam metrics are expected to be available in November 2019 for Q2 and Q3 2019″ — i.e. conveniently late for inclusion in this report.

Facebook’s report notes ongoing efforts to put contextual labels on certain types of suspect/partisan content, such as labelling photos and videos which have been independently fact-checked as misleading; labelling state-controlled media; and labelling political ads.

Labelling bots is not discussed in the report — presumably because Facebook prefers to focus attention on self-defined spam-removal metrics vs muddying the water with discussion of how much suspect activity it continues to host on its platform, either through incompetence, lack of resources or because it’s politically expedient for its business to do so.

Labelling all these bots would mean Facebook signposting inconsistencies in how it applies its own policies — in a way that might foreground its own political bias. And there’s no self-regulatory mechanism under the sun that will make Facebook fess up to such double standards.

For now, the Code’s requirement for signatories to publish an annual report on what they’re doing to tackle disinformation looks to be the biggest win so far. Albeit, it’s very loosely bound self-reporting. While some of these ‘reports’ don’t even run to a full page of A4-text — so set your expectations accordingly.

The Commission has published all the reports here. It has also produced its own summary and assessment of them (here).

“Overall, the reporting would benefit from more detailed and qualitative insights in some areas and from further big-picture context, such as trends,” it writes. “In addition, the metrics provided so far are mainly output indicators rather than impact indicators.”

Of the Code generally — as a “self-regulatory standard” — the Commission argues it has “provided an opportunity for greater transparency into the platforms’ policies on disinformation as well as a framework for structured dialogue to monitor, improve and effectively implement those policies”, adding: “This represents progress over the situation prevailing before the Code’s entry into force, while further serious steps by individual signatories and the community as a whole are still necessary.”


Source: Tech Crunch

The future of cybersecurity VC investing with Lightspeed’s Arif Janmohamed

There are two types of enterprise startups: those that create value and those that protect value. Cybersecurity is most definitely part of the latter group, and as a vertical it has sprawled over the past few years as the scale of attacks on companies, organizations and governments has continuously expanded.

That may be a constant threat for the executives of major companies, but for cybersecurity VCs who pick the right startup targets for investment, it’s a potential gold mine. Here at Extra Crunch, we compiled a list of top VCs who have invested in cybersecurity and enterprise more broadly and asked them what’s interesting in the space these days. We compiled ten of their responses as part of our investor survey and you should definitely take a look for their interesting takes on the space.

But we wanted to go a bit deeper on the topic to learn more about what’s happening right now in cybersecurity. So today, we talk with Arif Janmohamed of Lightspeed Venture Partners, one of the leading investors at one of the top enterprise VC firms in the world. He’s invested in companies ranging from cloud-access security broker Netskope and search analytics platform ThoughtSpot to Qubole (big data analytics), Nutanix (hyper-converged infrastructure), and Arceo.ai (cyber risk management).


Arif Janmohamed. Image via Lightspeed Venture Partners

TechCrunch’s security guru Zack Whittaker, managing editor Danny Crichton and operations editor Arman Tabatabai sat down with him to discuss what he’s seeing at the earliest stages in cybersecurity, which trends are being ignored by the industry and what he sees as the future of security in an always-changing present.

Introduction and Background

The following interview has been condensed and edited for clarity.

Danny Crichton: Let’s start with a bit of your background.

Arif Janmohamed: Sure. I’m on the early-stage side, so I have the most fun when I’m working with founders at the very earliest stages of company formation, where I can focus on company design, product and go-to-market and then find the right balance of teams to fill that out.

I’m on the board of Netskope, which is a cloud-security company. That one I did the Series B back in 2013. I’m on the board of TripActions, which is a corporate travel company, I did that one and then led the Series A and the Series B. I’m on the board of Moveworks, which is an AI engine for IT that was seeded by me and then I’ve supported them through their subsequent financing. I’m also on the board of a number of other companies.

Am I purely security-focused? The answer is no, I’m very much enterprise-focused. Security in my mind really fits within that rubric of the enterprise stack that’s getting rebuilt for a cloud-first world.

What’s snake oil and what has real value?

Zack Whittaker: So I’ve got a question that I just want to jump right in with. I’m always curious about this, especially when it comes to the very early stage, how do you go about distinguishing between potential snake oil and the things that seem really viable in the security world?


Source: Tech Crunch

Facebook staff demand Zuckerberg limit lies in political ads

Submit campaign ads to fact checking, limit microtargeting, cap spending, observe silence periods, or at least warn users. These are the solutions Facebook employees put forward in an open letter pleading with CEO Mark Zuckerberg and company leadership to address misinformation in political ads.

The letter, obtained by the New York Times’ Mike Isaac, insists that “Free speech and paid speech are not the same thing . . . Our current policies on fact checking people in political office, or those running for office, are a threat to what FB stands for.” The letter was posted to Facebook’s internal collaboration forum a few weeks ago.

The sentiments echo what I called for in a TechCrunch opinion piece on October 13th calling on Facebook to ban political ads. Unfettered misinformation in political ads on Facebook lets politicians and their supporters spread inflammatory and inaccurate claims about their views and their rivals while racking up donations to buy more of these ads.

The social network can still offer freedom of expression to political campaigns on their own Facebook Pages while limiting the ability of the richest and most dishonest to pay to make their lies the loudest. We suggested that if Facebook won’t drop political ads, they should be fact checked and/or use an array of generic “vote for me” or “donate here” ad units that don’t allow accusations. We also criticized how microtargeting of communities vulnerable to misinformation and instant donation links make Facebook ads more dangerous than equivalent TV or radio spots.

Mark Zuckerberg Hearing In Congress

The Facebook CEO, Mark Zuckerberg, testified before the House Financial Services Committee on Wednesday October 23, 2019 Washington, D.C. (Photo by Aurora Samperio/NurPhoto via Getty Images)

Over 250 of Facebook’s 35,000 staffers have signed the letter, which declares: “We strongly object to this policy as it stands. It doesn’t protect voices, but instead allows politicians to weaponize our platform by targeting people who believe that content posted by political figures is trustworthy.” It suggests the current policy undermines Facebook’s election integrity work, confuses users about where misinformation is allowed, and signals Facebook is happy to profit from lies.

The solutions suggested include:

  1. Don’t accept political ads unless they’re subject to third-party fact checks
  2. Use visual design to more strongly differentiate between political ads and organic non-ad posts
  3. Restrict microtargeting for political ads, including the use of Custom Audiences, since microtargeting hides ads from the public scrutiny that Facebook claims keeps politicians honest
  4. Observe pre-election silence periods for political ads to limit the impact and scale of misinformation
  5. Limit ad spending per politician or candidate, with spending by them and their supporting political action committees combined
  6. Make it more visually clear to users that political ads aren’t fact-checked

A combination of these approaches could let Facebook stop short of banning political ads without allowing rampant misinformation or having to police individual claims.

Facebook’s response to the letter was “We remain committed to not censoring political speech, and will continue exploring additional steps we can take to bring increased transparency to political ads.” But that straw-mans the letter’s request. Employees aren’t asking for politicians to be kicked off Facebook or to have their posts/ads deleted. They’re asking for warning labels and limits on paid reach. That’s not censorship.


Zuckerberg had stood resolute on the policy despite backlash from the press and lawmakers, including Representative Alexandria Ocasio-Cortez (D-NY). She left him tongue-tied during congressional testimony when she asked exactly what kinds of misinfo were allowed in ads.

But then on Friday Facebook blocked an ad designed to test its limits by claiming Republican Lindsey Graham had voted for Ocasio-Cortez’s Green New Deal, which he actually opposes. Facebook told Reuters it will fact-check PAC ads.

One sensible approach for politicians’ ads would be for Facebook to ramp up fact-checking, starting with presidential candidates until it has the resources to scan more. Ads fact-checked as false should receive an interstitial warning blocking their content rather than just a “false” label. That could be paired with a bigger disclaimer on political ads, without making them look too prominent in general, and with targeting allowed only by state.

Deciding on potential spending limits and silence periods would be messier. Low limits could level the playing field, and broad silence periods, especially while voting is underway, could prevent voter suppression. Perhaps these specifics should be left to Facebook’s upcoming independent Oversight Board, which acts as a supreme court for moderation decisions and policies.


Zuckerberg’s core argument for the policy is that over time, history bends toward more speech, not censorship. But that succumbs to the utopian fallacy that technology evenly advantages the honest and dishonest. In reality, sensational misinformation spreads much further and faster than level-headed truth. Microtargeted ads with thousands of variants undercut and overwhelm the democratic apparatus designed to punish liars, while partisan news outlets counter attempts to call them out.

Zuckerberg wants to avoid Facebook becoming the truth police. But as we and Facebook’s employees have put forward, there are progressive approaches to limiting misinformation if he’s willing to step back from his philosophical orthodoxy.

The full text of the letter from Facebook employees to leadership about political ads can be found below, via the New York Times:

We are proud to work here.

Facebook stands for people expressing their voice. Creating a place where we can debate, share different opinions, and express our views is what makes our app and technologies meaningful for people all over the world.

We are proud to work for a place that enables that expression, and we believe it is imperative to evolve as societies change. As Chris Cox said, “We know the effects of social media are not neutral, and its history has not yet been written.”

This is our company.

We’re reaching out to you, the leaders of this company, because we’re worried we’re on track to undo the great strides our product teams have made in integrity over the last two years. We work here because we care, because we know that even our smallest choices impact communities at an astounding scale. We want to raise our concerns before it’s too late.

Free speech and paid speech are not the same thing.

Misinformation affects us all. Our current policies on fact checking people in political office, or those running for office, are a threat to what FB stands for. We strongly object to this policy as it stands. It doesn’t protect voices, but instead allows politicians to weaponize our platform by targeting people who believe that content posted by political figures is trustworthy.

Allowing paid civic misinformation to run on the platform in its current state has the potential to:

— Increase distrust in our platform by allowing similar paid and organic content to sit side-by-side — some with third-party fact-checking and some without. Additionally, it communicates that we are OK profiting from deliberate misinformation campaigns by those in or seeking positions of power.

— Undo integrity product work. Currently, integrity teams are working hard to give users more context on the content they see, demote violating content, and more. For the Election 2020 Lockdown, these teams made hard choices on what to support and what not to support, and this policy will undo much of that work by undermining trust in the platform. And after the 2020 Lockdown, this policy has the potential to continue to cause harm in coming elections around the world.

Proposals for improvement

Our goal is to bring awareness to our leadership that a large part of the employee body does not agree with this policy. We want to work with our leadership to develop better solutions that both protect our business and the people who use our products. We know this work is nuanced, but there are many things we can do short of eliminating political ads altogether.

These suggestions are all focused on ad-related content, not organic.

1. Hold political ads to the same standard as other ads.

a. Misinformation shared by political advertisers has an outsized detrimental impact on our community. We should not accept money for political ads without applying the standards that our other ads have to follow.

2. Stronger visual design treatment for political ads.

a. People have trouble distinguishing political ads from organic posts. We should apply a stronger design treatment to political ads that makes it easier for people to establish context.

3. Restrict targeting for political ads.

a. Currently, politicians and political campaigns can use our advanced targeting tools, such as Custom Audiences. It is common for political advertisers to upload voter rolls (which are publicly available in order to reach voters) and then use behavioral tracking tools (such as the FB pixel) and ad engagement to refine ads further. The risk with allowing this is that it’s hard for people in the electorate to participate in the “public scrutiny” that we’re saying comes along with political speech. These ads are often so micro-targeted that the conversations on our platforms are much more siloed than on other platforms. Currently we restrict targeting for housing and education and credit verticals due to a history of discrimination. We should extend similar restrictions to political advertising.

4. Broader observance of the election silence periods

a. Observe election silence in compliance with local laws and regulations. Explore a self-imposed election silence for all elections around the world to act in good faith and as good citizens.

5. Spend caps for individual politicians, regardless of source

a. FB has stated that one of the benefits of running political ads is to help more voices get heard. However, high-profile politicians can out-spend new voices and drown out the competition. To solve for this, if you have a PAC and a politician both running ads, there would be a limit that would apply to both together, rather than to each advertiser individually.

6. Clearer policies for political ads

a. If FB does not change the policies for political ads, we need to update the way they are displayed. For consumers and advertisers, it’s not immediately clear that political ads are exempt from the fact-checking that other ads go through. It should be easily understood by anyone that our advertising policies about misinformation don’t apply to original political content or ads, especially since political misinformation is more destructive than other types of misinformation.

Therefore, the section of the policies should be moved from “prohibited content” (which is not allowed at all) to “restricted content” (which is allowed with restrictions).

We want to have this conversation in an open dialog because we want to see actual change.

We are proud of the work that the integrity teams have done, and we don’t want to see that undermined by policy. Over the coming months, we’ll continue this conversation, and we look forward to working towards solutions together.

This is still our company.


Source: Tech Crunch

Spider eyes inspire a new kind of depth-sensing camera

As robots and gadgets continue to pervade our everyday lives, they increasingly need to see in 3D — but as evidenced by the notch in your iPhone, depth-sensing cameras are still pretty bulky. A new approach inspired by how some spiders sense the distance to their prey could change that.

Jumping spiders don’t have room in their tiny, hairy heads for structured light projectors and all that kind of thing. Yet they have to see where they’re going and what they’re grabbing in order to be effective predators. How do they do it? As is usually the case with arthropods, in a super weird but interesting way.

Instead of having multiple eyes capturing a slightly different image and taking stereo cues from that, as we do, each of the spider’s eyes is in itself a depth-sensing system. Each eye is multi-layered, with transparent retinas seeing the image with different amounts of blur depending on distance. The differing blurs from different eyes and layers are compared in the spider’s small nervous system and produce an accurate distance measurement — using incredibly little in the way of “hardware.”

Researchers at Harvard have created a high-tech lens system that uses a similar approach, producing the ability to sense depth without traditional optical elements.


The “metalens” created by electrical engineering professor Federico Capasso and his team detects an incoming image as two similar ones with different amounts of blur, like the spider’s eye does. These images are compared using an algorithm that is also like the spider’s — at least in that it is very quick and efficient — and the result is a lovely little real-time, whole-image depth calculation.
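The general idea of comparing two differently blurred views of the same scene can be illustrated with a toy depth-from-defocus sketch. To be clear, this is an illustrative simplification, not the Harvard team’s actual metalens algorithm: the box blurs, the gradient-energy sharpness measure, and the one-dimensional “scene” are all assumptions made for demonstration.

```python
import numpy as np

def box_blur(signal, width):
    """Blur a 1-D signal with a box kernel of the given width."""
    kernel = np.ones(width) / width
    return np.convolve(signal, kernel, mode="same")

def sharpness(signal, window=15):
    """Local gradient energy as a rough proxy for focus/sharpness."""
    grad = np.gradient(signal) ** 2
    return box_blur(grad, window)

# Simulate the same edge seen through two layers with different
# amounts of defocus, loosely analogous to the spider's layered retinas.
edge = np.zeros(200)
edge[100:] = 1.0
near_layer = box_blur(edge, 5)    # less defocus
far_layer = box_blur(edge, 15)    # more defocus

# The ratio of local sharpness between the two views is a monotonic
# cue for distance: how much blurrier one view is relative to the
# other encodes how far the object is from the focal plane.
s_near = sharpness(near_layer)
s_far = sharpness(far_layer)
ratio = s_near[100] / s_far[100]
print(f"sharpness ratio at the edge: {ratio:.2f}")
```

Because the comparison is purely local and needs only two images, this kind of computation can be done per pixel with very little hardware, which is the efficiency property the researchers highlight.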


The process is not only efficient, meaning it can be done with very little computing hardware and power, but it can be extremely compact: the one used for this experiment was only 3 millimeters across.

This means it could be included not just on self-driving cars and industrial robots but on small gadgets, smart home items, and of course phones — probably won’t replace Face ID, but it’s a start.

The paper describing the metalens system will be published today in the Proceedings of the National Academy of Sciences.


Source: Tech Crunch

Omidyar Network CEO opens up about VC-influenced philanthropy

In 2004, eBay founder Pierre Omidyar and his wife, Pam, set aside some of the wealth they acquired after the online marketplace went public and created Omidyar Network, a philanthropic investment firm “dedicated to harnessing the power of markets,” according to an official overview.

Since then, the firm — which operates a 501(c)(3) nonprofit and an LLC — has committed $839 million in nonprofit grants and $735 million in for-profit investments. Today, 60 employees in Mumbai, London, Washington D.C. and Redwood City look for opportunities to invest and contribute across four main areas: Reimagining Capitalism, Beneficial Technology, Discovering Emergent Issues, and Expanding Human Capability.

In 2018, coinciding with a strategic shift that saw Omidyar Network spin out several of its initiatives, the firm elevated Mike Kubzansky, who had started the firm’s Intellectual Capital arm, to CEO. In a wide-ranging discussion, Scott Bade spoke to Kubzansky about Omidyar Network’s origins and evolution, and his approach to venture philanthropy.

(This interview has been edited for length and clarity.)

Scott Bade: Omidyar Network has stood out because of its unique structure as both a grant-making institution and as an investor. Could you describe how Omidyar Network got started and how it evolved over the last decade and a half?

Mike Kubzansky: Pierre [Omidyar] originally started the Family Foundation. But having looked at the experience of eBay, he became frustrated that he couldn’t [achieve] the same scale of impact [that eBay had] in a conventional grant-making structure. So we converted Omidyar Family Foundation to Omidyar Network in 2004, with the fundamental insight being to add an LLC to the classic 501(c)(3) foundation structure, enabling us to invest in companies.

Great, and by investment, how does that work? Are you a typical LP or is there a different investment thesis?

Yeah, so historically first it’s worth saying, being influenced by Silicon Valley DNA, we have typically taken a venture lens on things and typically have invested at the seed or Series A round. Again, that comes straight out of the Silicon Valley experience.

Within that, we’ve had this notion of investing across the returns continuum. In some cases, we feel you can get a fully risk-adjusted market rate return. In some cases you might be ahead of the market, or looking at a firm that’s actually having a market-level impact, in addition to a firm-level impact. In those cases we’ve been willing to take a lower rate of return, at least at entry, in terms of what we would invest in. Typically it’s been venture, often as part of a syndicate; we have never taken a majority share in a company.

Before we dig deeper into the programmatic work, I want to dig deeper on your methodology. Clearly when it comes to both defining impact and figuring out how to measure it and maximize it, ON has been different from traditional philanthropy. But how do you define whether a given objective warrants either a grant or an investment or an advocacy approach?

You’ve hit on a question that we’ve spent a lot of time discussing internally. Having this flexible capital structure enables you to range across a lot of different forms of engagement in the world. So our thinking currently is – and this gets into our strategy shift – focus less on things that are easy to measure, like service delivery and financial inclusion and how many people are reached, and focus much more on upstream structural power, rules of the game, mindsets and beliefs about the underlying systems, which we think actually are at the root cause of a lot of the distress and income inequality we see in the world today. 

Thinking like venture capitalists

You talked about thinking like a venture capitalist. Does that mean that that even with your philanthropy or advocacy you take on greater risks that are a long shot at achieving, but perhaps have a high-expected value return? 

Yeah, so you’ve hit on exactly an issue that’s really important to us, which is the ability to take risk. Philanthropic money is the most risk-tolerant capital out there, whether it’s deployed for-profit or not-for-profit or on advocacy. And we view part of our role, in terms of social impact, as being risk capital for very difficult issues that society needs to take on. That mindset pervades how we think about approaching a problem.

We think about risk in a bunch of different ways: one, the ability to take on long-term issues which others may not be able to take on because they’re trying to make quarterly profits or that sort of thing. So that’s where we can take a run at some of the upstream rules of the game and checks on power, which might take time to accomplish. We [also] see it as an ability to take on difficult issues, not just time-consuming ones with long time horizons.

So what is your theory of change? Is your goal to be a think or do tank, is it to be an advocacy group, is it to shape norms, is it to fund pilots or some combination of that? 

Yeah, I think we are, it’s fair to say we are still working through that, but we are in the process of putting out our points of view on what we think needs to change under capitalism and under technology. So for instance, we’ve published a point of view on what we think good digital ID looks like and ought to be. 

Under the Reimagining Capitalism banner, our take is that it is going to take a mix of things. One [part is] about rebalancing structural power. For instance, working people have not typically seen any of the gains over the last 40 years, when profits and productivity have gone up very dramatically but wages have stayed stagnant. So how do you rebalance power between working people and the companies or the capital sources that are working in the economy?

And so our theory of change includes some level of, how do you change the way people understand economics – everything from how you teach economics to how you measure the results of our economy, not in GDP but perhaps in wellbeing or other formats [like] by income decile – all the way straight through to ideas about who the economy is for.

We would argue that neoliberalism is a version of capitalism, not capitalism itself, and that we can get to a better version of capitalism if we change some of these underlying beliefs and mindsets about the economy.

… The original ethos of the Valley has tracked through to our notion that we want to see power redistributed back to people and away from concentrated sources of power. 

How has being in Silicon Valley, the mindset of being in the tech world, influenced that thesis on capitalism? 


Source: Tech Crunch

Max Q: International Astronautical Congress 2019 recap edition

Our weekly round-up of what’s going on in space technology is back, and it’s a big one (and a day late) because last week was the annual International Astronautical Congress. I was on the ground in Washington, D.C. for this year’s event, and it’s fair to say that the top-of-mind topics were 1) Public-private partnerships on future space exploration; 2) So-called ‘Old Space’ or established companies vs./collaborating with so-called ‘New Space’ or younger companies; and 3) Who will own and control space as it becomes a resource trough, and through what mechanisms.

There’s a lot to unpack there, and I plan to do so not all at once, but through conversations and coverage to follow. In the meantime, here’s just a taste based on the highlights from my perspective at the show.

1. SpaceX aims for 2022 Moon landing for Starship

SpaceX timelines are basically just incredibly optimistic dreams, but it’s still worth paying attention to what timeframes the company is theoretically marching towards, because they do at least provide some kind of baseline from which to extrapolate actual timelines based on past performance.

There’s a reason SpaceX wants to send its newest spacecraft there that early, however – beyond being aggressive to motivate the team. The goal is to use that demonstration mission to set up actual cargo transportation flights, to get stuff to the lunar surface ahead of NASA’s planned 2024 human landing.


2. Starlink satellite service should go live next year

More SpaceX news, but significant because it could herald the beginning of a new era where the biggest broadband providers are satellite constellation operators. SpaceX COO and President Gwynne Shotwell says that the company’s Starlink broadband service should go live for consumers next year. Elon also used it this week to send a tweet, so it’s working in some capacity already.

3. NASA’s Jim Bridenstine details how startups will be able to participate in the U.S. mission to return to the Moon to stay

Bridenstine did a lot of speaking and press opportunities at IAC this year, which makes sense since it’s the first time the U.S. has hosted the show in many years. But I managed to get one question in, and the NASA Administrator detailed how he sees entrepreneurs contributing to his ambitious goal of returning to the Moon (this time to set up a more or less permanent presence) by 2024.

4. Virgin Galactic goes public

Virgin Galactic listed itself on the New York Stock Exchange today, and we got our very first taste of what public market investors think about space tourism and commercial human spaceflight. So far, looks like they… approve? Stock is trading up about 2 percent as of this writing, at least.

5. Bezos announces a Blue Origin-led space dream team

Amazon CEO Jeff Bezos got a first-ever IAC industry award during the show (it has an actual name but it seems pretty clear it’s an invention designed to lure billionaire space magnates to the stage). The award is fine, but the actual news is that Blue Origin is teaming up with space frenemies Lockheed Martin, Northrop Grumman and Draper – old and new space partnering to develop a full-featured lunar lander system to help get payloads to the surface of the Moon.

6. Rocket Lab is developing a ride-share offering for the Moon and more

Launch startup Rocket Lab has become noteworthy for being among the extremely elite group of new space companies that is actually launching payloads to orbit for paying customers. It wants to do more, of course, and one of its new goals is to adapt its Photon payload delivery spacecraft to bring customer satellites and research equipment to the Moon – and eventually beyond, too. Why? Customer demand, according to Rocket Lab CEO Peter Beck.

7. Europe’s space tech industry is heading for a boom

It seems like there’s a lot of space startup activity the world over, but Europe has possibly more than its fair share, thanks in part to the very encouraging efforts of the multinational European Space Agency. (Extra Crunch subscription required.)


Source: Tech Crunch

Denny’s inks deal with Beyond Meat to supply new menu item — Denny’s Beyond Burger

Denny’s signed an agreement with plant-based food manufacturer Beyond Meat to use Beyond’s meat replacement in a new menu item — the Denny’s Beyond Burger.

Beyond Meat and its largest rival, Impossible Foods, are engaged in a fierce competition to provide meat alternatives to some of the nation’s largest food companies, but increasingly Beyond Meat is pulling away.

In recent months the company has signed agreements with McDonald’s and Denny’s, and expanded a supply agreement with Dunkin for signature sandwiches.

The initial pilot will include all of the South Carolina-based restaurant chain’s Los Angeles locations. At Denny’s, the Beyond Burger will come with tomatoes, onions, lettuce, pickles, American cheese and a special sauce on a multigrain bun.

As part of the promotion behind the rollout of the sandwich, Denny’s in Los Angeles will give guests a free burger on Halloween night with the purchase of a sandwich. The restaurant chain (and a former employer of mine) will roll out the Beyond Burger nationwide in 2020.

“We could not be more excited to announce this game-changing partnership with Beyond Meat,” said John Dillon, chief brand officer for Denny’s, in a statement. “As a company we strive to evolve with the tastes and demands of our customers and we knew finding a plant-based option that met our incredibly high-quality standards and taste expectations was critical in staying at the top of our game. The new Beyond Burger at Denny’s offers guests a great tasting burger, and we’re delighted to launch it in Los Angeles, and will be preparing for the national rollout in 2020.”


Source: Tech Crunch

Meet Utah’s next unicorn

Weave, a developer of patient communications software focused on the dental and optometry market, was the first Utah-headquartered company to graduate from Y Combinator in 2014. Now, it’s poised to join a small but growing class of startups in the ‘Silicon Slopes’ to reach ‘unicorn’ status.

The business announced a $70 million Series D last week at a valuation of $970 million. Tiger Global Management led the round, with participation from existing backers Catalyst Investors, Bessemer Venture Partners, Crosslink Capital, Pelion Venture Partners and LeadEdge Capital.

The company was founded in 2011 and fully bootstrapped until enrolling in the Silicon Valley accelerator program five years ago. Since then, it’s raised a total of $156 million in private funding, tripling its valuation with the latest infusion of capital.


“Our aim with this funding round is to exceed our customers’ expectations at every touchpoint, investing heavily in the products we create, the markets we serve and the overall customer experience we provide,” Weave co-founder and chief executive officer Brandon Rodman said in a statement. “We will continue to invest in our customers, our products and our people to build a solid, sustainable, and scalable business.”

Weave charges its customers, small and medium-sized businesses, upwards of $500 per month for access to its Voice Over IP-based unified communications service. Rodman previously launched a scheduling service for dentists and realized the opportunity to integrate texting, phone service, fax and reviews to facilitate the patient-provider relationship.

While his second effort, Weave, has long been targeting the dentistry and optometry market, Rodman told VentureBeat last year the opportunities for the company are endless: “Ultimately, if a business needs to communicate with their customer, we see that as a possible future customer of Weave.”

Based in Lehi, Weave added 250 employees this year with total headcount now reaching 550. The company claims to have doubled its revenue in 2018, too. While we don’t have any real insight into its financials, given the interest it’s garnered among Bay Area investors, we’re guessing it’s posting some pretty attractive numbers.

“Weave has some of the best retention numbers we’ve ever seen for an SMB SaaS company,” Catalyst partner Tyler Newton said in a statement. “We’re continually impressed by their accelerated growth and results.”


Source: Tech Crunch