JBL’s $250 Google Assistant smart display is now available for pre-order

It’s been a week since Lenovo’s Google Assistant-powered smart display went on sale. Slowly but surely, its competitors are launching their versions, too. Today, JBL announced that its $249.95 JBL Link View is now available for pre-order, with an expected ship date of September 3, 2018.

JBL went for a slightly different design than Lenovo (and the upcoming LG WK9), but in terms of functionality, these devices are pretty much the same. The Link View features an 8-inch HD screen; unlike Lenovo’s Smart Display, JBL is not making a larger 10-inch version. It’s got two 10W speakers and the usual support for Bluetooth, as well as Google’s Chromecast protocol.

JBL says the unit is splash proof (IPX4), so you can safely use it to watch YouTube recipe videos in your kitchen. It also offers a 5MP front-facing camera for your video chats and a privacy switch that lets you shut off the camera and microphone.

JBL, Lenovo and LG all announced their Google Assistant smart displays at CES earlier this year. Lenovo was the first to actually ship a product, and both the hardware and Google’s software received a positive reception. There’s no word on when LG’s WK9 will hit the market.


Source: Tech Crunch

Midterm attackers cited Black Lives Matter in false flag Facebook rally

The unknown midterm election attackers whose accounts Facebook has removed were hosting a political rally next month that they pinned on Black Lives Matter, Antifa and other organizations, according to third-party event websites that scraped the now-removed Facebook events.

Facebook provided an image of the deleted “No Unite The Right 2 – DC” event as part of its announcement today, showing only the event’s image, title, date, location, and that a Page called “Resisters” was one of the hosts of the propaganda event. But a scraped event description TechCrunch discovered on Rallyist provides deeper insight into the disruptive information operation. Facebook won’t name the source of the election interference, but said the attackers shared a connection through a single account to the Russian Internet Research Agency responsible for 2016 presidential election interference on Facebook.

“We are calling all anti-fascists and people of good conscience to participate in international days of action August 10 through August 12 and a mass mobilization in Washington DC” the description reads. “We occupy ICE offices, confront racism, antisemitism, islamaphobia, xenophobia, and white nationalism. We will be in the streets on August 10-12, and we intend to win.”

But what’s especially alarming is how the event description concludes [emphasis mine, in full below]. “Signed, Black Lives Matter Charlottesville, Black Lives Matter D.C., Charlottesville Summer of Resistance Welcoming Committee Agency, Crimethinc Ex-Workers Collective, Crushing Colonialism, D.C. Antifascist Collective, Future is Feminists, Holler Network, Hoods4Justice, The International, Capoeira Angola Foundation-DC (FICA-DC), Libertarian Socialist Caucus Of The DSA, March For Racial Justice, Maryland Antifa, One People’s Project, Resist This (Former DisruptJ20), Rising Tide North America, Smash Racism D.C., Showing Up for Racial Justice Charlottesville, Suffolk County DSA, Workers Against Racism, 350 DC.”

It’s unclear if the attackers effectively ‘forged’ the signature of these groups, or duped them into signing off on supporting the rally. The attackers were potentially trying to blame these groups for the rallies in an effort to further sow discord in the political landscape.

Facebook initially provided no comment about the description of the event, but then confirmed that it was originally created by the attackers’ since-deleted Page ‘Resisters’ which then later added several legitimate organizations as co-hosts: Millenials For Revolution, March To Confront White Supremacy – from Charlottesville to DC, Workers Against Racism – WAR, Smash Racism DC, and Tune Out Trump. Strangely, those co-hosts have relaunched a new event with a similar name “Nazis Not Welcome No Unite The Right 2” and similar description including a similar but expanded “Signed by” list, and now include BLM Charlottesville and D.C. as co-hosts.

Meanwhile, Facebook also shared an image of a November 4th, 2017 “Trump Nightmare Must End – NYC” event, also without details of the description. A scraped version on the site AllEvents shows the description as “History has shown that fascism must be stopped before it becomes too late. There is only one force that can stop this nightmare: we, the people, acting together. On November 4 we’ll take to the streets demanding that Trump regime must go! We meet at Times Square (42 St and Broadway) at 2 PM!”

The co-opting of left-wing messaging and protests is a powerful strategy for the election interferers. It could provide the right-wing with excuses to claim that all left-wing protest against Trump or white supremacy is actually foreign governments or hackers, and that those protests don’t represent the views of real Americans.


Source: Tech Crunch

TV Time debuts an analytics platform for the streaming era

TV Time, the consumer app that helps bingers keep track of where they are with favorite shows and socialize with fellow viewers, is today expanding its business with the launch of an analytics platform called TVLytics. The new service will allow creators and distributors to tap into real-time data from across more than 60,000 TV shows. It will also offer other anonymized data collected from viewers, including which platforms viewers watched on, their favorite characters, bingeing behavior, viewers’ locations, fans’ anticipation for new episodes, social engagement and more.

The data is pulled from the app’s community of around a million daily users from more than 200 countries who check in with the app some 45 million times per month. To date, TV Time has tracked more than 10 billion TV episodes, and has seen 210 million reactions.

TV Time began its life as a source for TV show GIFs known as WhipClip, but later pivoted to a social TV community after acquiring TVShow Time in December 2016. This proved to be a smart move on its part, as the company has since grown to 12 million registered users.

The app’s core functionality is focused on offering TV viewers a place where they can follow shows and mark off the ones they’ve watched — something that’s especially helpful in the streaming era where people are often hopping from one binge-watching session to another, then back again, or are watching multiple series at once and need to remember where they left off.

In addition to being a utility for tracking shows, the app offers a community section for each episode where fans can post photos, videos, GIFs and memes, as well as like and comment on the content others share. Viewers can even leave video reactions about each episode, in a format similar to the “Stories” found on apps like Instagram or Snapchat.

TV Time also interjects questions of its own — asking about your reaction (good, funny, wow, sad, etc.), favorite character, device watched on and more. And it inserts its own polls in the middle of the fan discussion page, which ask about pivotal moments from the episode and what people thought.

With the launch of analytics, TV Time aims to make use of all this data by offering it to clients in the TV industry who are looking for more comprehensive viewership data for planning purposes.

Of course, TV Time’s data is not a Nielsen equivalent — it’s user-generated and self-reported. That means it’s not going to be able to tell content creators, networks, distributors and other clients exactly how many people are watching a show. Nor can it give a holistic overview of a show’s fan base. TV Time’s viewers skew younger — in the 18 to 34-year-old range — and only around 10 to 15 percent are based in the U.S., though that market is the fastest growing.

But TV Time can tap into the reactions and sentiments shared by a subset of a show’s most engaged fans.

Its paying clients today include a handful of TV networks, streaming services and talent agencies that have been testing the app in beta for around a month. They use TV Time’s analytics to help spot trends, develop and expand a show’s audience and make decisions about how to cast and market their shows. Some have also used it in advertising negotiations. Customers pay a flat annual subscription fee for access to this data, but TV Time won’t disclose exact pricing.

“We’ve been testing it to figure out which of the insights we’ve launched are most valuable. That’s how we landed on things like the completion rate, the binge rate, affinity reports, mobility scores and favorite characters,” explains TV Time’s head of programming, Jeremy Reed.

The value offered by TVLytics data doesn’t just come from the data itself, but also how hard it is to collect. In today’s fragmented TV viewing ecosystem, consumers now watch across devices, and split their time between live TV, recorded TV, live TV delivered over the internet, subscription video services and internet video sites, like YouTube.

In addition, TV Time notes that, overall, the number of long-form shows on television has grown by 69 percent since 2012, with nearly 500 scripted original series airing in 2017, citing data from FX Networks Research. The majority of these scripted shows are coming from over-the-top platforms such as Netflix, Amazon and others. That’s a lot of TV content to keep up with, especially as consumers hop between devices — even in the midst of a single episode.

What TV Time does is keep all this viewing data together in a single destination, and can make connections about what viewers are watching across platforms — from TV to Netflix and beyond.

“With studios — they’re looking two years out in producing content. They start to see trends in types of characters, and certainly start to see the characters of this show resonate with the characters of this other show and start to see the overlap,” notes Reed. Plus, he adds, that overlap is “agnostic to platform.”

TV Time data is put to use for consumers as well, in terms of helping to recommend their next binge.

And now its community is demanding the ability to track movies, too — especially now that streaming services are backing their own feature films. Reed says this isn’t something TV Time has planned for the near-term, as there’s so much to do around episodic content — but that it’s absolutely “a never-say-never” kind of thing, he hints.

Santa Monica-based TV Time’s team of 35 is backed by $60+ million in funding, according to Crunchbase, from investors including Eminence Capital, WME, IVP, Raine Ventures and Greycroft, plus individual entertainment and media industry executives like Ari Emanuel, Peter Guber, Steve Bornstein, Scooter Braun, Gordon Crawford and Ron Zuckerman.


Source: Tech Crunch

Digital therapeutics are just what the doctor ordered for patients — and for global healthcare systems

It would be hard to argue that digital products have a net-positive impact on our health. Most are designed to provide the same dopamine hit as a slot machine. We all know someone who wasted their youth playing games that were designed to be all-consuming, with the World Health Organization recently going so far as to categorize video game addiction as a mental health disorder.

But this habit-forming power of digital products can be used for therapeutic benefit too, often by changing the behavior that causes disease or ill health. This new range of products is being commonly referred to as digital therapeutics. These apps and services offer evidence-based and personalized behavioral therapy, and cater to a broad cross-section of illnesses and conditions — from diabetes to loneliness, and everything in-between.

Given the difficulty developing traditional therapeutics, the likelihood of the next blockbuster treatment or cure emerging from digital therapeutics is ever-increasing. And thanks to their low cost, adaptability and speed-of-deployment, they could have a transformative impact on millions of lives, and on ailing healthcare systems.

I live and work in the U.K., so I will be using the NHS as a recurring reference point in this article — however, fee-for-service, or value-based healthcare systems equally stand to benefit.

Digital therapeutics work for patients…

A range of startups are leading the charge in digital therapeutics, tackling some of the biggest problems facing patients and our healthcare system today. And the evidence proves that these treatments work.

Type 2 diabetes, the type determined mostly by diet and lifestyle, has been called the “scourge of the 21st century” by the Royal College of Physicians. And rightly so: the NHS spends around £12 billion annually, or 10 percent of its budget, treating the condition. However, in many cases, lifestyle change alone is enough to prevent, or even cure it. OurPath has developed a digital program that does exactly that, with a recent study showing a mean 7.5kg weight loss in participants, which is enough to put type 2 diabetes sufferers into remission.

Another leader is QuitGenius, whose app helps 36 percent of its users to quit smoking completely — versus just 3 percent of smokers who are able to quit on their own. Smoking is a massive burden on our collective health, and on global healthcare systems. In the U.K. alone, smoking cigarettes leads to an estimated 16 percent of all deaths.


For those suffering from a mental health condition, Ieso has been a leader in delivering psychological therapies digitally, and has shown that standard treatments (like cognitive behavioral therapy) are more effective when delivered digitally (e.g. via messaging app) than in person.

However, while one in four of us suffer from a mental health condition, we can all benefit from looking after our mental well-being. Newer entrants like HelloSelf are helping all of us be our best selves, initially by providing digital access to therapists, and by building an AI life coach that helps us deeply understand what makes us happy, and what we can do to improve our mental well-being.

Other players, like Soma Analytics, Unmind and SilverCloud, are helping users look after their mental well-being where they feel most stressed: at work. The data behind these products demonstrates a triple win: a reduction in stress levels for employees, boosted productivity for employers and reduced burden on our public healthcare system.

Digital therapeutics are also a great fit for notoriously complex conditions like IBS, a condition affecting 800 million people, 60 percent of whom go on to develop depression or anxiety, hitherto only treated imperfectly by a range of measures from restricted diet to antidepressants. Companies like Bold Health are using data to personalize treatments and improve outcomes, and pioneering the use of hypnotherapy to treat IBS.

… and our healthcare systems need digital therapeutics to work!

Bringing traditional therapeutics to market is becoming exponentially more expensive, a trend known as Eroom’s law: in short, the cost to develop a new drug has doubled every nine years since 1950. And even after a lengthy testing and approval process, drugs may have unintended consequences. Or, quite simply, they might not work at all.


Additionally, healthcare systems are under pressure from aging populations and tightening purse strings. This is, of course, particularly true in the U.K.

Against this backdrop, digital therapeutics are a great solution. They are relatively cheap to develop — all the companies I have mentioned raised less than $5 million to develop their products. This is particularly true in contrast to traditional therapeutics — it now takes on average 14 years and $2.5 billion to develop a market-ready drug.

The digital delivery method means it is much easier to collect data, iterate and refine the treatment and evidence efficacy, allowing treatments to change with the needs of the population. Quantifying the resulting cost savings is tricky, but healthcare consultancy IQVIA recently released a report estimating the NHS would save £170 million if it adopted currently available digital therapeutics in five disease areas (with £131 million saved in diabetes alone).

Digital therapeutics companies have so far found success in selling direct to consumers, even in the U.K., where healthcare is theoretically free at the point of service for all. However, helped by the evidence that they work, the NHS is “learning” how to purchase and prescribe digital therapeutics. The NHS recently launched its App Library (still in beta), showcasing trusted digital apps to consumers; and AppScript, a platform for doctors to discover, prescribe and track the best digital health apps, is being rolled out across GP surgeries in the U.K.

And if they were to develop their own digital therapeutic solutions, national health systems like the NHS would be at a tremendous advantage, thanks to the huge amounts of longitudinal health data they own (data relating to how patients, and their health, fare over time).

Consumers are discovering digital therapeutics, and the treatments are already transforming lives. Now that the body of evidence shows they work, it is my hope that healthcare systems, particularly the U.K.’s NHS, begin to reap the benefits offered by this new treatment mode.


Source: Tech Crunch

Can Electronauts help make VR more social?

Virtual reality is an isolating experience. You power it up, strap the headset on and just sort of drift off into your own world. But maybe that doesn’t have to be the case. Maybe there’s a way to slip into a virtual world and still interact with your surroundings.

Electronauts presents an interesting example. Survios sees the title as a party game — something akin to what Guitar Hero and Rock Band were at the height of their collective powers, when people would set them up in their living room and invite friends over to play.

The new title has one decided advantage over those older games, however: It’s impossible to hit a wrong note. That’s kind of the whole point, in fact. Unlike the gamification of Guitar Hero/Rock Band, Electronauts is more experiential, designed to create remixes of songs on the fly.

I played a near-final version of the title at a private demo in New York the other week, and mostly enjoyed the experience — my own personal hang-ups about doing VR in front of a room full of strangers aside. The experience has a very Daft Punk/Tron vibe to it as you operate a spaceship’s controls while hurtling through psychedelic space.

There are several ways to interact with the basic track in the process, using the Vive or Oculus controller. The more complex tasks take some figuring out — I was lucky and happened to have the game’s creators in the room with me at the time. I suppose not everyone has that luxury, but the good news here is that the title is designed so that, regardless of what you do, you can’t really mess it up.

I can see how that might be tiresome for some. Again, there’s no scoring built into the title, so while it can be collaborative, you don’t actually compete against anyone. The idea is just to, well, make music. Hooked up to a big screen and a home theater speaker system, it’s easy to see how it could add an extra dimension to a home gathering, assuming, of course, the music selection is your cup of tea.

Here’s the full rundown of songs [deep breath]:

  • The Chainsmokers – Roses (ft. ROZES)

  • ODESZA – Say My Name (ft. Zyra)

  • Steve Aoki & Boehm – Back 2 You (ft. WALK THE MOON)

  • Tiesto & John Christian – I Like It Loud (ft. Marshall Masters & The Ultimate MC)

  • ZHU & Tame Impala – My Life

  • ZHU & NERO – Dreams

  • ZHU – Intoxicate

  • 12th Planet – Let Me Help You (ft. Taylr Renee)

  • Netsky – Nobody

  • Dada Life – B Side Boogie, Higher Than The Sun, We Want Your Soul

  • Keys N Krates – Dum Dee Dum [Dim Mak Records]

  • Krewella & Yellow Claw – New World (ft. Vava)

  • Krewella – Alibi

  • Amp Live & Del The Funky Homosapien – Get Some of Dis

  • DJ Shadow – Bergshrund (ft. Nils Frahm)

  • 3LAU – Touch (ft. Carly Paige)

  • Machinedrum – Angel Speak (ft. Melo-X), Do It 4 U (ft. Dawn Richard)

  • People Under The Stairs – Feels Good

  • Tipper – Lattice

  • TOKiMONSTA – Don’t Call Me (ft. Yuna), I Wish I Could (ft. Selah Sue)

  • Reid Speed & Frank Royal – Get Wet

  • AHEE – Liftoff

  • BIJOU – Gotta Shine (ft. Germ) [Dim Mak Records]

  • Anevo – Can’t Stop (ft. Heather Sommer) [Dim Mak Records]

  • KRANE & QUIX – Next World [Dim Mak Records]

  • B-Sides & SWAGE – On The Floor [Dim Mak Records]

  • Gerald Le Funk vs. Subshock & Evangelos – 2BAE [Dim Mak Records]

  • Max Styler – Heartache (Taiki Nulight Remix), All Your Love [Dim Mak Records]

  • Riot Ten & Sirenz – Scream! [Dim Mak Records]

  • Fawks – Say You Like It (ft. Medicienne) [Dim Mak Records]

  • Taiki Nulight – Savvy [Dim Mak Records]

  • Jovian – ERRBODY

  • Madnap – Heat

  • MIKNNA – Trinity Ave, Us

  • 5AM – Peel Back (ft. Wax Future)

  • Jamie Prado & Gregory Doveman – Young (Club Mix)

  • Coral Fusion – Klip [Survios original]

  • GOODHENRY – Wonder Wobble [Survios original]

  • Starbuck – Mist [Survios original]

Can’t say I go in for most of those, but I can pick out a handful I wouldn’t mind sticking in rotation — Del the Funky Homosapien, DJ Shadow and People Under The Stairs, for instance. I wouldn’t be too surprised to see additional music packs arrive as the company secures more licensing deals.

In the meantime, Electronauts will be available on Steam for the Oculus Rift and HTC Vive, priced at $20. The PlayStation version will run $18. For those who want an even more public experience, it will also be arriving at the 38 locations in Survios’ VR Arcade Network.


Source: Tech Crunch

Test.ai nabs $11M Series A led by Google to put bots to work testing apps

For developers, the process of determining whether every new update is going to botch some core functionality can take up a lot of time and resources, and things get far more complicated when you’re managing a multitude of apps.

Test.ai is building a comprehensive system for app testing that relies on bots, not human labor, to see whether an app is ready to start raking in the downloads.

The startup has just closed an $11 million Series A round led by Gradient Ventures, Google’s AI-focused venture fund. Also participating in the round were e.ventures, Uncork Capital and Zetta Venture Partners. Test.ai, which was founded in 2015, has raised $17.6 million to date.

“Every advancement in training AI systems enables an advancement in user testing, and test.ai is the leader in AI-powered testing technology. We’re excited to help them supercharge their growth as they test every app in the world,” Gradient Ventures founder Anna Patterson said in a statement. “In a couple years, AI testing will be ingrained into every company’s product flow.”

The company’s technology doesn’t just leverage AI to cut down on how long it takes for an app to be tested; it also helps eliminate the much lengthier process of developers readying lists of scenarios to be tested. Test.ai has trained its bots on “tens of thousands of apps” to help them understand what an app looks like and which interface patterns apps are typically composed of. From there, the bots are able to build their own scenario lists and find what works and what doesn’t.

That can mean, in the case of an app like our own, tracking down a bookmark button and then deducing that there are certain processes that users would go through to use its functionality.
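The description above suggests a pipeline of roughly this shape: classify the elements on each screen, match them to known interface patterns, then generate test scenarios from those patterns. Here is a hypothetical Python sketch of that idea — every name and pattern in it is invented for illustration; test.ai has not published its implementation:

```python
# Hypothetical sketch of pattern-based test generation. This is NOT
# test.ai's code; the patterns and step names are illustrative only.

KNOWN_PATTERNS = {
    # interface pattern -> scenario steps a bot could try and verify
    "login_form": ["enter_username", "enter_password", "tap_submit"],
    "search_bar": ["tap_search", "type_query", "tap_first_result"],
    "bookmark_button": ["open_item", "tap_bookmark", "check_saved_list"],
}

def classify_elements(screen_elements):
    """Map raw UI elements to known interface patterns.
    A real system would use a trained classifier; here we match labels."""
    return [p for p in KNOWN_PATTERNS if p in screen_elements]

def build_scenarios(screen_elements):
    """Generate a scenario list from the patterns found on a screen."""
    return {p: KNOWN_PATTERNS[p] for p in classify_elements(screen_elements)}

# e.g. a screen containing a bookmark button and a search bar yields
# two executable scenarios, with unrecognized elements ignored.
scenarios = build_scenarios(["bookmark_button", "search_bar", "nav_drawer"])
```

The point of the sketch is the division of labor: the learned part recognizes what is on screen, while scenario generation falls out mechanically from the recognized patterns, which is what lets the approach scale across thousands of apps.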

Right now, the utility lies in the fact that bots scale so broadly and so quickly. While a startup working on a single app may have the flexibility to choose among a few options, larger enterprises with several aging products that have to grapple with updated systems are in a bit more of a bind. Some of Test.ai’s larger unnamed partners that “make app stores” or devices operate at a stratospheric level, having to verify tens of thousands of apps to ensure that everything is in working order.

“That’s an easy sell for us, almost too easy, because they don’t have the resources to individually test ten thousand apps every time something like Android gets updated,” CEO Jason Arbon tells TechCrunch.

The startup’s capabilities operate on a much more quantitative scale than human-powered competitors like UserTesting, which tend to emphasize testing for feedback that’s a bit more qualitative in nature. Test.ai’s founders believe that their system will be able to grapple with more nebulous concepts in the future as it analyzes more apps, and that it’s already gaining insights into concepts like whether a product appears “trustworthy,” though there are certainly other areas where bots are trailing the insights that can be delivered by human testers.

The founders say they hope to use this latest funding to scale operations for their growing list of enterprise clients and hire some new people.


Source: Tech Crunch

Optoro raises $75 million more to make it easier for brands to manage and resell returned and excess inventory

As the economy has chugged along, so have retail sales, which last year capped their strongest year since 2014. Online sales have been especially brisk, growing 16 percent between 2016 and 2017 alone, according to the U.S. Commerce Department, which estimates that consumers spent $453.5 billion online last year.

Of course, with every booming market comes supporting cast members that benefit. Such is the case with eight-year-old, Washington, D.C.-based Optoro, which itself just rang up $75 million in new funding. A logistics company, Optoro’s software helps retailers — both online and off — more easily re-sell inventory that has been returned by customers.

Returns are a big number: merchandise returned amounted to 10 percent of total sales in 2017, according to the National Retail Federation. In dollars, that’s $351 billion.

Right now, that includes sales from big box retailers and many other “legacy” companies that allow shoppers to buy items — and return them — in their stores. But as online sales rise, so do online returns. Indeed, Optoro cofounder and CEO Tobin Moore tells the WSJ that the “return rate from e-commerce sales is two to three times the return rate of brick-and-mortar” and “sometimes higher in fashion and apparel.” And with most retailers also paying for shipping on returns — after all, a happy customer is a repeat customer — it’s a major logistics cost for these online brands.

Little wonder that Optoro, which uses data analytics and multi-channel online marketing to determine the best path for each item (ostensibly maximizing recovery and reducing environmental waste in the process) is a hit with a growing base of customers.

A growing number of investors are getting behind the company, too. Optoro’s newest round was led by Franklin Templeton Investments, but the company has now raised at least $200 million altogether, including from Revolution Growth, Generation Investment Management, Grotech Ventures, and even the UPS Strategic Enterprise Fund.


Source: Tech Crunch

Google gives Chrome the virtual reality treatment

Google is injecting a little Chrome into its VR platform, bringing the web browser to Daydream headsets, the company announced today. It’s been a long time coming considering the depths of Google’s WebVR experimentation on desktop and mobile Chrome.

The Mountain View tech giant announced it was working on this quite a while ago, back at I/O 2017.

Google has been moving pretty slowly with any big Daydream updates lately, while Facebook’s Oculus has driven most of the news in mobile VR thanks to new standalone hardware. Daydream rolled out its own positionally tracked headset with Lenovo earlier this summer, but a major lack of content has been the system’s biggest issue. Bringing the web to Daydream could help correct this, and directing more mobile developer attention to WebVR might be a positive move for Google as it looks to make content discovery simpler.

Last year, the company made it possible to open WebVR content in mobile Chrome on your phone, then drop the phone into a Cardboard headset to check out the content. With this update, you’ll be able to launch the browser inside VR, explore inside VR and then move on to something else.

Loading desktop webpages inside a VR headset doesn’t necessarily seem earth-shatteringly disruptive, but Google has added some optimizations: certain non-WebVR content gets special treatment, including a “cinema mode” that drops videos into a special environment to keep your eyes on the content. You’ll also get incognito mode, voice search and access to your saved bookmarks.

The browser is available for Lenovo’s Mirage Solo as well as Google’s own Daydream View headset and you’ll gain access after updating Chrome on Android.

The web is largely still an untested wilderness for virtual reality that nobody is racing to conquer, given that headset volume is still pretty low and a lot of wind has been sucked out of VR’s sails lately. There’s a lot of interesting stuff the web enables, especially for virtual social environments, and though most major players are drawing attention to their own platforms, a browser like Chrome arriving on Daydream could start to spark some developer imagination about what’s possible.


Source: Tech Crunch

Google’s lead lawyer moves into a global policy role

Google is promoting its top lawyer, Kent Walker, into a global policy position, CNBC reports. Walker, Google SVP and general counsel, has already been a public voice in the company’s recent privacy tangles, but will move into a formal role as senior vice president of global affairs, overseeing Google’s policy, trust and safety, corporate philanthropy and legal teams.

Last year, Walker joined Richard Salgado, Google’s Director, Law Enforcement and Information Security, to head to Capitol Hill for the first round of reckoning on big tech’s failure to mitigate political disinformation campaigns during the 2016 U.S. presidential election.

Since then, Walker has commented publicly on Google’s policies around political ad transparency and extremist content on YouTube, among other policy issues facing the company. With social platforms at an ethical crossroads globally and tech chafing at its newly forced compliance with international privacy laws, any public-facing global policy role will be very much in the spotlight in 2018 and beyond.

Google hired Walker away from eBay in 2006, where he served as the company’s deputy general counsel. Prior to his time at eBay (and AOL, prior to that), Walker was an assistant U.S. attorney with the Department of Justice.


Source: Tech Crunch

OpenAI’s robotic hand doesn’t need humans to teach it human behaviors

Gripping something with your hand is one of the first things you learn to do as an infant, but it’s far from a simple task, and it only gets more complex and variable as you grow up. This complexity makes gripping difficult for machines to learn on their own, but researchers at the Elon Musk and Sam Altman-backed OpenAI have created a system that not only holds and manipulates objects much like a human does, but developed these behaviors all on its own.

Many robots and robotic hands are already proficient at certain grips or movements — a robot in a factory can wield a bolt gun even more dexterously than a person. But the software that lets that robot do that task so well is likely to be hand-written and extremely specific to the application. You couldn’t, for example, give it a pencil and ask it to write. Even something on the same production line, like welding, would require a whole new system.

Yet for a human, picking up an apple isn’t so different from picking up a cup. There are differences, but our brains automatically fill in the gaps and we can improvise a new grip, hold an unfamiliar object securely and so on. This is one area where robots lag severely behind their human models. And furthermore, you can’t just train a bot to do what a human does — you’d have to provide millions of examples to adequately show what a human would do with thousands of given objects.

The solution, OpenAI’s researchers felt, was not to use human data at all. Instead, they let the computer try and fail over and over in a simulation, slowly learning how to move its fingers so that the object in its grasp moves as desired.

The system, which they call Dactyl, was provided only with the positions of its fingers and three camera views of the object in hand — but remember, while it was being trained, all this data was simulated, taking place in a virtual environment. There, the computer doesn’t have to work in real time — it can try a thousand different ways of gripping an object in a few seconds, analyzing the results and feeding that data forward into the next try. (The hand itself is a Shadow Dexterous Hand, which is also more complex than most robotic hands.)

In addition to different objects and poses the system needed to learn, there were other randomized parameters, like the amount of friction the fingertips had, the colors and lighting of the scene and more. You can’t simulate every aspect of reality (yet), but you can make sure that your system doesn’t only work in a blue room, on cubes with special markings on them.
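Varying friction, lighting, object appearance and other properties between training episodes is a technique commonly called domain randomization. A minimal Python sketch of the idea follows — the parameter names and ranges here are illustrative assumptions, not OpenAI’s actual values:

```python
import random

def randomize_sim_params():
    """Sample a fresh set of simulator parameters for each training
    episode, so the policy never overfits to one exact environment.
    (Parameter names and ranges are invented for illustration.)"""
    return {
        "fingertip_friction": random.uniform(0.5, 1.5),   # physics
        "object_mass_kg": random.uniform(0.03, 0.3),
        "light_intensity": random.uniform(0.2, 1.0),      # rendering
        "object_hue_shift": random.uniform(-0.5, 0.5),
        "camera_jitter_m": random.uniform(0.0, 0.01),
    }

def train(num_episodes, run_episode):
    """Outer training loop: each episode runs in a differently
    randomized simulated world; policy updates happen inside it."""
    for _ in range(num_episodes):
        params = randomize_sim_params()
        run_episode(params)
```

A policy trained this way has to succeed across a whole distribution of simulated worlds rather than one perfectly tuned simulator, which is what lets it transfer to the real world without ever training on real data.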

They threw a lot of power at the problem: 6144 CPUs and 8 GPUs, “collecting about one hundred years of experience in 50 hours.” And then they put the system to work in the real world for the first time — and it demonstrated some surprisingly human-like behaviors.

The things we do with our hands without even noticing, like turning an apple around to check for bruises or passing a mug of coffee to a friend, use lots of tiny tricks to stabilize or move the object. Dactyl recreated several of them, for example holding the object with a thumb and single finger while using the rest to spin to the desired orientation.

What’s great about this system is not just the naturalness of its movements and that they were arrived at independently by trial and error, but that it isn’t tied to any particular shape or type of object. Just like a human, Dactyl can grip and manipulate just about anything you put in its hand, within reason of course.

This flexibility is called generalization, and it’s important for robots that must interact with the real world. It’s impossible to hand-code separate behaviors for every object and situation in the world, but a robot that can adapt and fill in the gaps while relying on a set of core understandings can get by.

As with OpenAI’s other work, the paper describing the results is freely available, as are some of the tools they used to create and test Dactyl.


Source: Tech Crunch