Twitter hires God-is Rivera as global director of culture and community

Twitter has brought on its first-ever global director of culture and community, God-is Rivera. In the role, Rivera will report to Nola Weinstein, Twitter’s global head of culture, engagement and experiential. Rivera previously led internal diversity and inclusion efforts at VMLY&R, a digital and creative agency.

“As a black woman who has worked in industries in which I have been underrepresented, I feel a great responsibility to amplify and support diverse communities, and they exist in full force on Twitter,” Rivera said in a statement. “The team has shown a passion to serve and spotlight their most active users and I am honored to step into this new role as a part of that commitment.”

For context, 26 percent of U.S. adults who identify as black use Twitter, while 24 percent of white-identified adults and 20 percent of Latinx-identified adults in the U.S. use Twitter, according to a March 2018 survey from Pew Research Center.

At Twitter, the plan is for Rivera to “better serve and engage communities” on Twitter through the company’s brand marketing, campaigns, events and other experiences. Internally, Rivera will be tasked with ensuring Twitter’s campaigns and programs are inclusive and “reflective of the communities we serve,” according to Twitter’s press release. Externally, Rivera will be responsible for developing relationships and programs with content creators, community leaders, brands and more — similar to the company’s existing relationship with HBO’s Insecure.

Here’s the internal note Weinstein sent to Twitter employees earlier today:

Team,

I am so excited to welcome @GodisRivera to the team as Twitter’s new Global Director of Culture & Community. She captivated us at #OneTeam with her enlightening presentation on #BlackTwitter and we are thrilled that she will now be bringing her passion and perspective inside.

In this newly created role, God-is will help lead our efforts to better serve and engage the powerful voices and global communities who take to Twitter to share, discover and discuss what matters to them. This will come to life through Twitter’s brand efforts, campaigns, events and experiences. She will help ensure that our programs are connective, inclusive and reflective of the communities we serve. You can imagine more efforts that engage and excite our communities like #HereWeAre, #NBATwitter, thoughtful tweetups, etc.

God-is’ deep expertise in marketing and social strategy, cultural understanding and ability to elevate and connect communities makes for a rare and incredibly powerful combination. She was previously Director, Inclusion and Cultural Resonance at VMLY&R, where she led internal diversity efforts to fuse the importance of internal culture and representation to creative work outputs. In 2018, God-is was named an Ad Age “Woman to Watch” and Adweek “Disruptor” for continuing to fight for representation and equity in the advertising industry. She currently resides in New York, NY with her husband and daughter.

On a personal note, I have had the pleasure of spending time with God-is at #HereWeAre, #Influence, and #OneTeam and her energy, passion and positivity are infectious. I know her presence will make a difference and am excited by all that the culture & experiential team will create together.

God-is will start on November 12th and will be based in NYC reporting to me.

Please join me in welcoming her to the flock!


Source: Tech Crunch

The Minte raises $2.25 million in seed funding to bring hotel-style housekeeping to luxury residences

As an MBA student at the University of Chicago’s Booth School of Business, Kathleen Wilson was struck with an idea while looking at businesses that provided daily housekeeping in one of her classes. Given the density and physical structure of many apartment buildings, she wondered why a housekeeper couldn’t similarly push a cart down the hall and spend an hour or less in each unit.

To test out her theory, Wilson and a classmate started cleaning the apartments of friends, spending 30 minutes to an hour at a time and trying to establish a reasonable price point for the work. Armed with enough data, Wilson then landed at a local real estate tech accelerator, formed her company, and locked down her first property management company client, Waterton — and her efforts have been gaining momentum since.

In fact, her 20-month-old startup, The Minte, which now employs roughly 60 people, is today announcing that it has raised $2.25 million in a round that brings the company’s total seed funding to $4.7 million. Dundee Venture Capital led this newest round; other investors in the company include MATH Venture Partners, Revolution’s Rise of the Rest Seed Fund, Firebrand Ventures, Blue Note Ventures and numerous angel investors. We had a quick chat with Wilson earlier this week to learn more.

TC: Can you tell us a bit more about your customers? Are they all property management companies like Waterton?

KW: We only provide service to apartments and condos, so our clients are currently property management companies such as Greystar, Bozzuto, Lincoln Property Company, and CA Ventures. We have just under 70 properties in Chicago, another 20 in D.C., and we’ve been launching 6 to 10 new properties in each market each month.

TC: The Minte promises to make a housekeeper available to a property full-time, correct?

KW: Yes. A housekeeper is located on site so residents can book cleaning services with them, which gives residents consistency and trust. To be clear, our housekeepers are full-time Minte employees with health benefits and paid time off. We keep our housekeeping cart and supplies at each property, and there’s a place for housekeepers to go if they have a bit of downtime, although that’s rare.

We do have some housekeepers who split their time between properties, either if the property is smaller or if we’re still in the first couple months of service and still building demand.

TC: What makes the company think people would prefer to work with The Minte versus housekeepers they know? These are trust-heavy relationships, a feature that other housecleaning startups have overlooked to their detriment.

KW: Exactly. We bring the personal trust by having the same housekeeper assigned to the property, which allows the housekeeper to get to know the residents, and we bring the corporate side of trust by having insurance, QA by managers, and the ability to send a backup housekeeper if someone is out sick. We also have top-notch, live customer service if there is ever an issue.

TC: What does your quality assurance process involve?

KW: It’s a multi-tier process. First, we’ve implemented an eight-day training program for all new housekeepers. Second, housekeepers and housekeeping managers with whom we work almost always have hotel backgrounds, having worked at the Waldorf Astoria, The Conrad, and Sofitel, to name a few. Third, housekeeping managers do random spot checks of service. And fourth, users can rate and comment on every service, which we review in real time. It’s company policy to reach out to the resident any time something is less than four stars.

Also worth mentioning: our products are eco-friendly P&G products, so there’s no compromise on the quality of our supplies.

TC: How do clients pay, and how much do they pay? Is this a subscription model?

KW: They can pay à la carte — paying $30 for a hotel-style service, $90 for a deep clean for a one-bedroom apartment, for example — but over half of our cleans are residents who are on a recurring package. For customers on a package, they can customize how many deep cleans and/or hotel-style cleans they have every four weeks, including which days those cleans occur.

TC: The home services model is more prone to leakage, meaning people form relationships and stop using the platform. Is this a concern?

KW: Our employees are full-time, so this is essentially a non-issue for us. With our housekeepers on our schedule throughout the entire week, it’s not feasible for someone to poach them.

Potentially a resident could do this on a weekend, but in our experience, people want housekeepers to come when they are not home. Furthermore, the property manager would tell us if our housekeeper was getting keys outside of their Minte schedule.

TC: And how are you marketing the company?

KW: Through our partnership with the property managers, primarily.

TC: How will you use your new funding?

KW: We’ll continue to enhance our tech. Our app is out this week, and we’re rolling out our smart home integration in the coming months. We’re making our button — which is physical hardware that goes on the wall inside each unit — more readily available. We’ll also expand more into condos and corporate housing and target our third city in early 2019.


Source: Tech Crunch

Zume reportedly snags $375 million from SoftBank for its robotic food operations

Zume, the robotics and logistics company that got its start slinging out pizza, just raised $375 million from SoftBank, the WSJ first reported. SoftBank is also reportedly looking to invest an additional $375 million, which would value the company at $2.25 billion. The round comes a couple of months after reports of SoftBank looking to invest anywhere from $500 million to $750 million in Zume.

Zume, which started back in 2015, uses robotics, artificial intelligence, automation and mobile kitchen technologies to predict food trends and serve up freshly cooked foods. The startup owns a patent for delivery trucks that can cook food while en route to customers.

Earlier this year, Zume created a larger umbrella company to house Zume Pizza, now a subsidiary of Zume. That marked Zume’s more ambitious approach to move beyond pizza and license its technology to restaurants looking to deploy food trucks.

“Pizza was our prototype,” Zume CEO Alex Garden told TC’s Brian Heater back in April. “There’s no reason why this technology wouldn’t work for any restaurant or any food category. Any restaurant who wants to adopt our system can now easily do that. They don’t have to be experts in technology or appliance manufacturing. They can just be restaurateurs, who have a more flexible offering for customers.”

Zume had previously raised about $70 million in funding. I’ve reached out to Zume and SoftBank and will update this story if I hear back.


Source: Tech Crunch

Thomas Reardon and CTRL-Labs are building an API for the brain

From Elon Musk’s Neuralink to Bryan Johnson’s Kernel, a new wave of businesses is specifically focusing on ways to access, read and write from the brain.

The holy grail lies in how to do that without invasive implants, and how to do it for a mass market.

One company aiming to do just that is New York-based CTRL-labs, which recently closed a $28 million Series B. The team, comprising over 12 PhDs, is decoding individual neurons and developing an electromyography-based armband that reads the nervous signals traveling from the brain to the fingers. These signals are then translated into desired intentions, enabling anything from thought-to-text to moving objects.

Scientists have known about electrical activity in the brain since Hans Berger first recorded it using an EEG in 1924, and the term “brain computer interface” (BCI) was coined as early as the 1970s by Jacques Vidal at UCLA. Since then, most BCI applications have been tested in the military or medical realm. Although it’s still the early innings of neurotech commercialization, in recent years the pace of capital going in and company formation has picked up.

For a conversation with Flux, I sat down with Thomas Reardon, the CEO of CTRL-labs, to discuss his journey to founding the company. Reardon explained why New York is the best place to build a machine-learning-based business right now and how he recruits top talent. He shared what developers can expect when the CTRL-kit ships in Q1 and explained how a brain control interface may well make the smartphone redundant.

An excerpt is published below. Full transcript on Medium.

AMLG: I’m excited to have Thomas Reardon on the show today. He is the co-founder and CEO of CTRL-labs, a company building the next generation of non-invasive neural computing here in Manhattan. He’s just cycled from uptown — thanks for coming down here to Chinatown. Reardon was previously the founder of a startup called Avogadro, which was acquired by Openwave. He also spent time at Microsoft, where he was project lead on Internet Explorer. He’s one of the founders of the World Wide Web Consortium, a body that has established many of the standards that still govern the Web, and he’s one of the architects of XML and CSS. Why don’t we get into your background, how you got to where you are today and why you’re so excited to be doing what you’re doing right now.

The W3C is an international standards organization founded and led by Tim Berners-Lee.

TR: My background — well I’m a bit of an old man so this is a longer story. I have a commercial software background. I didn’t go to college when I was younger. I started a company at 19 years old and ended up at Microsoft back in 1990, so this was before the Windows revolution stormed the world. I spent 10 years at Microsoft. The biggest part of that was starting up the Internet Explorer project and then leading the internet architecture effort at Microsoft so that’s how I ended up working on things like CSS and XML, some of the web nerds out there should be deeply familiar with those terms. Then after doing another company that focused on the mobile Internet, Phone.com and Openwave, where I served as CTO, I got a bit tired of the Web. I got fatigued at the sense that the Web was growing up not to introduce any new technology experience or any new computer science to the world. It was just transferring bones from one grave to another. We were reinventing everything that had been invented in the 80s and early 90s and webifying it but we weren’t creating new experiences. I got profoundly turned off by the evolution of the Web and what we were doing to put it on mobile devices. We weren’t creating new value for people. We weren’t solving new human problems. We were solving corporate problems. We were trying to create new leverage for the entrenched companies.

So I left tech in 2003. Effectively retired. I decided to go and get a proper college education. I went and studied Greek and Latin and got a degree in classics. Along the way I started studying neuroscience and was fascinated by the biology of neurons. This led me to grad school and a Ph.D., which I split across Duke and Columbia. I’d woken up sometime in 2005 or 2006 and was reading an article in The New York Times. It was something about a cell, and I scratched my head and said, we all hear that term, we all talk about cells in the body, but I have no idea what a cell really is. To the point where a New York Times article was too deep for me. That almost embarrassed and shocked me, and it led me down this path of studying biology in a deeper, almost molecular way.

AMLG: So you were really in the heart of it all when you were working at Microsoft and building your startup. Now you are building this company in New York — we’ve got Columbia and NYU and there’s a lot of commercial industries — does that feel different for you, building a company here?

TR: Well let’s look at the kind of company we’re building. We’re building a company which is at its heart about machine learning. We’re in an era in which every startup tries to have a slide in their deck that says something about ML, but most of them are a joke in comparison. This is the place in the world to build a company that has machine learning at its core. Between Columbia and NYU and now Cornell Tech, and the unbelievably deep bench of machine learning talent embedded in the finance industry, we have more ML people at an elite level in New York than any place on earth. It’s dramatic. Our ability to recruit here is unparalleled. We beat the big five all the time. We’re now 42 people and half of them are Ph.D. scientists. For every single one of them we were competing against Google, Facebook, Apple.

AMLG: Presumably this is a more interesting problem for them to work on. If they want to go work at Goldman in AI they can do that for a couple of years, make some dollars and then come back and do the interesting stuff.

TR: They can make a bigger salary but they will work on something that nobody in the rest of the world will ever get to hear about. The reason why people don’t talk about all this ML talent here is when it’s embedded in finance you never get to hear about it. It’s all secret. Underneath the waters. The work we’re doing and this new generation of companies that have ML at their core — even a company like Spotify is, on the one hand, fundamentally a licensing and copyright arbitrage company, but on the other hand what broke out for Spotify was their ML work. It was fundamental to the offer. That’s the kind of thing that’s happening in New York again and again now. There’s lots of companies — like a hardware company — that would be scary to build in New York. We have a significant hardware component to what we’re doing. It is hard to recruit A-team, world-class hardware folks in New York, but we can get them. We recently hired the head of product from Peloton, who formerly ran MakerBot.

AMLG: We support that and believe there’s a budding pool here. And I guess the third bench is neuro, which Columbia is very strong in.

Larry Abbott helped found the Center for Theoretical Neuroscience at Columbia

TR: Yes, as is NYU. Neuroscience is in some sense the signature department at Columbia. The field breaks across two domains — the biological and the computational. Computational neuroscience is machine learning for real neurons, building operating computational models of how real neurons do their work. It’s the field that drives a lot of the breakthroughs in machine learning. We have these biologically inspired concepts in machine learning that come from computational neuroscience. Columbia has by far the top computational neuroscience group in the world and probably the top biological neuroscience group in the world. There are five Nobel Prize winners in the program, and Larry Abbott, the legend of theoretical neuroscience. It’s an unbelievably deep bench.

AMLG: How do you recruit people that are smarter than you? This is a question that everyone listening wants to know.

Patrick Kaifosh, Thomas Reardon and Tim Machado, the co-founders of CTRL-labs

TR: I’m not dumb, but I’m not as smart as my co-founder and I’m not as smart as half of the scientific staff inside the company. I affectionately refer to my co-founder as a mutant. Patrick Kaifosh, who’s chief scientist, is one of the smartest human beings I’ve ever known. Patrick is one of those generational people that can change our concept of what’s possible, and he does that in a first-principles way. The recruiting part is to engage people in a way that lets them know that you’re going to take all the crap away and allow them to work on the hardest problems with the best people.

AMLG: I believe it and I’ve met some of them. So what was the conversation with Kaifosh and Tim when you first sat down and decided to pursue the idea?

TR: So we were wrapping up our graduate studies, the three of us. We were looking at what it would be like to stay in academia and the bureaucracy involved in trying to be a working scientist in academia and writing grants. We were looking around at the young faculty members we saw at Columbia and thought, that doesn’t look like they’re having fun.

AMLG: When you were leaving Columbia it sounds like there wasn’t another company idea. Was it clear that this was the idea that you wanted to pursue at that time?

TR: What we knew is we wanted to do something collaborative. We did not think, let’s go build a brain machine interface. We don’t actually like that phrase, we like to call them neural interfaces. We didn’t think about neural interfaces at all. The second idea we had, an ingredient we put into the stew and started mixing up, was that we wanted to leverage experimental technologies from neuroscience that hadn’t yet been commercialized. In some sense this was like when Genentech was starting in the mid 70s. We had found the crystal structure of DNA back in the late 40s, there had been 30 years of molecular biology, we figured out DNA, then RNA, then protein synthesis, then the ribosome. Thirty years of molecular biology, but nobody had commercialized it yet. Then Genentech came along with this idea that we could make synthetic protein, that we could start to commercialize some of these core experimental techniques and do translation work and bring value back to humanity. It was all just sitting there on the shelf ready to be exploited.

We thought, OK, what are the technologies in neuroscience that we use at the bench that could be exploited? For instance spike sorting, the ability to listen with a single electrode to lots of neurons at the same time and see all the different electrical impulses and de-convolve them. You get this big noisy signal and you can see the individual neurons’ activity. So we started playing with that idea: let’s harvest the last 30 or 40 years of bench experimental neuroscience. What are the techniques that were invented that we could harvest?
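The spike-sorting idea Reardon describes above can be sketched in a few lines: detect threshold crossings in a noisy recording, then match each spike snippet to the nearest known waveform template. Everything below (the signal, the templates, the threshold) is invented for illustration; real spike sorting works on microvolt-scale extracellular recordings with far more sophisticated clustering.

```python
# Toy spike sorting: find threshold crossings, then assign each spike
# snippet to the nearest waveform template by squared distance.

def detect_spikes(signal, threshold):
    """Indices where the signal crosses the threshold upward."""
    return [i for i in range(1, len(signal))
            if signal[i - 1] < threshold <= signal[i]]

def classify_spike(snippet, templates):
    """Assign a spike snippet to the nearest template by squared distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda name: dist(snippet, templates[name]))

# Two made-up "neurons" with distinct waveform shapes.
templates = {
    "neuron_A": [1.0, 0.5, 0.0],
    "neuron_B": [0.4, 1.0, 0.2],
}

# Synthetic recording: neuron_A fires at t=2, neuron_B at t=8.
signal = [0.0, 0.0, 1.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.4, 1.0, 0.2, 0.0]

spikes = detect_spikes(signal, threshold=0.3)
labels = [classify_spike(signal[t:t + 3], templates) for t in spikes]
# spikes == [2, 8]; labels == ["neuron_A", "neuron_B"]
```

The "big noisy signal" in practice carries overlapping spikes from many units at once, which is what makes the de-convolution step genuinely hard.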

AMLG: We’ve been reading about these things and there’s been so much excitement about BMI, but you haven’t really seen things in market that people can hack around with. I don’t know why that gap hasn’t been filled. Does no one have the balls to go take these off the shelf and try and turn them into something, or is it a timing question?

The brain has upper motor neurons in the cortex which map to lower motor neurons in the spinal cord, which send long axons down to contact the muscles. They release neurotransmitters that turn individual muscle fibers on and off. Motor units have a 1:1 correspondence with motor neurons. When motor neurons fire in the spinal cord, an output signal from the brain, you get a direct response in the muscle. If those EMG signals can be decoded, then you can decode the zeros and ones of the nervous system — action potentials.

TR: Some of this is chutzpah and some of it is timing. The technologies that we are leveraging weren’t fully developed for how we’re using them. We had to do some invention since we started the company three years ago. But they were far enough along that you could imagine the gap and come up with a way to cross the gap. How could we, for instance, decode an individual neuron using a technology called electromyography. Electromyography has been around for probably over a century and that’s the ability to — 

AMLG: That’s what we call EMG.

TR: EMG, yes. You can record the electrical activity of a muscle. EKG, electrocardiography, is basically EMG for the heart alone. You’re looking at the electrical activity of the heart muscles. We thought if you improve this legacy technology of EMG sufficiently, if you improve the signal-to-noise, you ought to be able to see the individual fibers of a muscle. If you know some neuroanatomy, what you figure out is that the individual fibers correspond to individual neurons. And by listening to individual fibers we can now reconstruct the activity of individual neurons. That’s the root of a neural interface: the ability to listen to an individual neuron.
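A classic first step in the EMG processing Reardon alludes to is rectify-smooth-threshold: full-wave rectify the raw signal, smooth it into an envelope, and threshold the envelope to decide when the muscle is active. The sample values and threshold below are invented; real surface EMG involves proper band-pass filtering and kHz-range sample rates.

```python
# Minimal EMG activity detection: rectify, moving-average smooth, threshold.

def envelope(raw, window=3):
    """Full-wave rectify, then smooth with a trailing moving average."""
    rect = [abs(x) for x in raw]
    out = []
    for i in range(len(rect)):
        lo = max(0, i - window + 1)
        out.append(sum(rect[lo:i + 1]) / (i + 1 - lo))
    return out

def active_samples(raw, threshold=0.5, window=3):
    """Indices where the smoothed envelope exceeds the threshold."""
    return [i for i, v in enumerate(envelope(raw, window)) if v > threshold]

# Quiet baseline, a burst of alternating-sign activity, then quiet again.
emg = [0.05, -0.04, 0.03, 0.9, -1.1, 1.0, -0.95, 0.08, -0.06, 0.04]
burst = active_samples(emg)  # the muscle is "on" around samples 4-7
```

Seeing an individual fiber, as opposed to the whole muscle, is the signal-to-noise leap he describes; this sketch only shows the coarse on/off decoding that legacy EMG already supports.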

EEG toy “the Force Trainer”

AMLG: My family are Star Wars fans and we had a device one Christmas that we sat around playing with, the Force Trainer. If you put the device around your head and stare long enough, the thing is supposed to move. Everything I’ve ever tried has been like that Force Trainer, a little frustrating — 

TR: That’s EEG, electroencephalography. That’s when you put something on your skull and record the electrical activity. The waves of activity that happen in the cortex, in the outer part of your brain.

AMLG: And it doesn’t work well because the skull is too thick?

TR: There’s a bunch of reasons why it doesn’t work that well. The unfortunate thing is that when most people hear about it that’s one of the first things they think about like, oh well all my thinking is up here in the cortex right underneath my skull and that’s what you’re interfacing with. That is actually —

AMLG: A myth?

TR: Both a myth and the wrong approach. I’m going to have to go deep on this one because it’s subtle but important. The first thing is let’s just talk about the signal qualities of EEG versus what we’re doing, where we listen to individual neurons and do it without having to drill into your body or place an electrode inside of you. EEG is trying to listen to the activity of lots of neurons all at the same time, tens of thousands, hundreds of thousands of neurons, and kind of get a sense of what the roar of those neurons is. I liken it to sitting outside of Giants Stadium with a microphone trying to listen to a conversation in Section 23, Row 4, Seat 9. You can’t do it. At best you can tell that one of the teams scored; you hear the roar of the entire stadium. That’s basically what we have with EEG today. The ability to hear the roar. So for instance we say the easiest thing to decode with EEG is surprise. I could put a headset on you and tell if you’re surprised.

AMLG: That doesn’t seem too handy.

TR: Yup not much more than that. Turns out surprise is this global brain state and your entire brain lights up. In every animal that we do this in surprise looks the same — it’s a big global Christmas tree that lights up across the entire brain. But you can’t use that for control. And this cuts to the name of our company, CTRL-labs. I don’t just want to decode your state. I want to give you the ability to control things in the world in a way that feels magical. It feels like Star Wars. I want you to feel like the Star Wars Emperor. What we’re trying to do is give you control and a kind of control you’ve never experienced before.

The MYO armband by Canadian startup Thalmic Labs

AMLG: This is control over motion, right? Maybe you can clarify — where I’ve seen other companies like Myo, which was an armband, it was really motion capture, where people were capturing how you intended to gesture rather than what you were thinking about?

TR: Yeah. In some sense we’re a successor to MYO (Thalmic Labs) — if Thalmic had been built by neuroscientists you would have ended up on the path that we’re on now.

Thomas Reardon demonstrating Myo control

We have two regimes of control, one we call Myo control and the other we call Neuro control. Myo control is our ability to decode what ultimately becomes your movements: the electrical input that causes your muscles to contract, and then when you stop activating them they slowly relax. We can decode the electrical activity that goes into those muscles even before the movement has started and even before it ends, and recapitulate that in a virtual way. Neuro control is something else. It’s kind of exotic and you have to try it to believe it. We can get to the level of the electrical activity of neurons — individual neurons — and train you rapidly, on the order of seconds, to control something. So imagine you’re playing a video game and you want to push a button to hop, like you’re playing Sonic the Hedgehog. I can train you in seconds to turn on a single neuron in your spinal cord to control that little thing.

AMLG: When I came to visit your lab in 2016 the guy had his hand out here. I tried it — it was an asteroid field.

TR: Asteroids, the old Atari game.

Patrick Kaifosh playing Asteroids — example of Neuro Control [from CTRL-labs, late 2017]

AMLG: Classic. And you’re doing Fruit Ninja now too? It gets harder and harder.

TR: It does get harder and harder. So the idea here is that rather than moving, you can just turn these neurons on and off and control something. Really there’s no muscle activity at that point; you’re just activating individual neurons. They might release a little pulse, a little electrochemical transmission to the muscle, but the muscle can’t respond at that level. What you find out is rather than using your neurons to control, say, your five fingers, you can use your neurons to control 30 virtual fingers without actually moving your hand at all.

AMLG: What does that mean for neuroplasticity? Do you have to imagine the third hand, fourth hand, fifth hand, or your tail like in Avatar?

TR: This is why I focus on the concept of control. We’re not trying to decode what you’re “thinking.” I don’t know what a thought is and there’s nobody in neuroscience who does know what a thought is. Nobody. We don’t know what consciousness is and we don’t know what thoughts are. They don’t exist in one part of the brain. Your brain is one cohesive organ and that includes your spinal cord all the way up. All of that embodies thought.

Inside Out (2015, Pixar). Great movie. Not how the brain, thoughts or consciousness work

AMLG: That’s a pretty crazy thought as thoughts go. I’m trying to mull that one over.

TR: It is. I want to pound that home. There’s not this one place. There’s not a little chair (to refer to Dan Dennett), there’s not a chair in a movie theater inside your brain where the real you sits, watching what’s happening and directing it. No, there’s just your overall brain, and you’re in there somewhere across all of it. It’s that collection of neurons together that gives you this sense of consciousness.

What we do with Neuro control and with CTRL-kit, the device that we’ve built, is give you feedback. We show you, by giving you direct feedback in real time, millisecond-level feedback, how to train a neuron to move, say, a cursor up and down, to go chase something or to jump over something. The way this works is that we engage your motor nervous system. Your brain has a natural output port — a USB port if you will — that generates output. In some sense this is sad for people, but I have to tell you your brain doesn’t do anything except turn muscles on and off. That’s the final output of the brain. When you’re generating speech, when you’re blinking your eyes at me, when you’re folding your hands and using your hands to talk to me, when you’re moving around, when you’re feeding yourself: your brain is just turning muscles on and off. That’s it. There is nothing else. It does that via motor neurons. Most of those are in your spine. Those motor neurons, it’s not so much that they’re plastic — they’re adaptive. So motor control is this ability to use neurons for very adaptive tasks. Take a sip of water from that bottle right in front of you. Watch what you’re doing.

Intention capture — rather than going through devices to interact, CTRL-labs will take the electrical activity of the body and decode that directly, allowing us to use that high bandwidth information to interact with all output devices. [Watch Reardon’s full keynote at O’Reilly]

AMLG: Watch me spill it all over myself — 

TR: You’re taking a sip. Everything you just did with that bottle you’ve never done before. You’ve never done that task. In fact you just did a complicated thing: you actually put it around the microphone and had to use one hand, then use the other hand to take the cap off the bottle. You did all of that without thinking. There was no cognitive load involved in that. That bottle is different than any other bottle: it’s slippery, it’s got a certain temperature, the weight changes. Have you ever seen these robots try to pour water? It’s comical how difficult it is. You do it effortlessly, like you’re really good —

AMLG: Well I practiced a few times before we got here.

TR: Actually you did practice! The first year or two of your life, that’s all you were doing: practicing, to get ready for what you just did. Because when you’re born you can’t do that. You can’t control your hands, you can’t control your body. You actually do something called motor babbling, where you just shake your hands around and move your legs and wiggle your fingers, and you’re trying to create a map inside your brain of how your body works and to gain control. But gain flexible, adaptive control.

AMLG: That’s the natural training that babies do, which is sort of what you’re doing in terms of decoding?

TR: We are leveraging that same process you went through when you were one to two years old to help you gain new skills that go beyond your muscles. So that was all about you learning how to control your muscles and do things. I want to emphasize that what you just did is more complex than anything else you do. It’s more complex than language, than math, than social skills. Of the eight billion people on earth who have a functioning nervous system, every one of them, no matter what their IQ, can do it really well. That’s the part of the brain that we’re interfacing with: that ability to adapt in real time to a task, skillfully. That’s not plasticity in neuroscience. It’s adaptation.

AMLG: What does that mean in terms of the amount of decoding you’ve had to do? Because you’ve got a working demo. And I know that people have to train it for their own individual use, right?

Myo control attempts to understand what each of the 14 muscles in the arm is doing, then deconvolve the signal into individual channels that map to muscles. If an accurate online map can be built, CTRL-labs believes there is no reason to have a keyboard or mouse.
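The deconvolution described in that caption can be pictured, in highly simplified form, as a linear unmixing problem: each electrode records a weighted mix of nearby muscle activations, and recovering per-muscle channels amounts to inverting that mixing. The sketch below is purely illustrative; the electrode count, the known mixing matrix, and the least-squares solve are assumptions for demonstration, not CTRL-labs’ actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

n_muscles = 14     # per the caption: ~14 muscles in the arm
n_electrodes = 16  # hypothetical electrode count on a wristband
n_samples = 200    # hypothetical window of samples

# Hypothetical ground-truth muscle activations (non-negative)
activations = np.abs(rng.normal(size=(n_muscles, n_samples)))

# Each electrode sees a weighted mix of muscle signals, plus noise
mixing = rng.normal(size=(n_electrodes, n_muscles))
emg = mixing @ activations + 0.001 * rng.normal(size=(n_electrodes, n_samples))

# If the mixing were known (say, from calibration), the per-muscle
# channels could be recovered with a least-squares solve
recovered, *_ = np.linalg.lstsq(mixing, emg, rcond=None)

error = np.abs(recovered - activations).mean()
print(f"mean absolute recovery error: {error:.4f}")
```

The real problem is far harder — the mixing is unknown, nonlinear, and changes as the arm moves — which is part of why the Neuro control model has to be trained per user.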

TR: With Myo control it works for anybody right out of the box. With Neuro control it adjusts to you. In fact the model that’s built is custom to you; it wouldn’t work on anybody else, it wouldn’t work on your twin, because your twin would train it differently. DNA is not determinative of your nervous output. What you have to realize is we haven’t decoded the brain — there are 15 billion neurons there. What we’ve done is create a very reduced but highly functional piece of hardware that listens to neurons in the spinal cord and gives you feedback that allows you to individually control those neurons.

When you think about the control that you exploit every day, it’s built up of two kinds of things. There’s what we call continuous control — think of that as a joystick: left and right, how much left, how much right. Then we have discrete controls, or symbols; think of that as button pushing or typing. Every single control problem you face, and that’s what your day is filled with, whether taking a sip of water, walking down the street, getting in a car, driving a car, reduces to some combination of continuous control (swiping) and discrete control (button pushing). We have this ability to get you to train these synthetic forms of control, new up-down-left-right dimensions if you will, that allow you to control things without moving, and then to move beyond the five fingers on your hand and get access to, say, 30 virtual fingers. What does that open up? Well, think about everything you control.
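That continuous/discrete split maps naturally onto how input events are modeled in software. A minimal sketch (the type names, the axis labels and the “virtual finger” symbol are illustrative assumptions, not CTRL-kit’s actual API):

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class ContinuousControl:
    """A joystick-like axis: which dimension, and how far (-1.0 to 1.0)."""
    axis: str
    value: float

@dataclass
class DiscreteControl:
    """A button-like symbol: pressed or released."""
    symbol: str
    pressed: bool

ControlEvent = Union[ContinuousControl, DiscreteControl]

def describe(event: ControlEvent) -> str:
    if isinstance(event, ContinuousControl):
        return f"axis {event.axis} -> {event.value:+.2f}"
    return f"symbol {event.symbol} {'down' if event.pressed else 'up'}"

# A "30 virtual fingers" device would simply emit more axes and symbols
# than a physical hand has fingers:
events = [
    ContinuousControl(axis="cursor_x", value=0.4),              # swipe right
    DiscreteControl(symbol="virtual_finger_17", pressed=True),  # press
]
for e in events:
    print(describe(e))
```

The point of the abstraction is that any downstream device only needs to consume these two event shapes, regardless of whether they originate from muscles, individual neurons, or a plain keyboard.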

AMLG: I’m picturing 30 virtual fingers right now — and I do want to get into VR; there are lots of forms one can take in there. The surprising thing to me in terms of target uses, and there are so many uses you can imagine for this in clinical populations, was that you didn’t start the company for clinical populations or motor pathologies, right? A lot of people have been working on bionics. I have a handicapped brother — I’ve been to his school and have seen the kids with all sorts of devices. They’re coming along, and obviously the army has been working on this. But you are not coming at it from that approach?

TR: Correct. We started the company almost ruthlessly focused on eight billion people. The market of eight billion, not the market of a million or 10 million who have motor pathologies. In some sense this is the part that’s informed by my Microsoft time. In the academy, when you’re doing neuroscience research, almost everybody focuses on pathologies: things that break in the nervous system, and what we can do to help people and work around them. They’ll work on Parkinson’s or Alzheimer’s or ALS for motor pathologies. What commercial companies get to do is bring new kinds of deep technology to mass markets, which then feed back to clinical communities. By pushing and making this stuff work at scale across eight billion people, the problems that we have to solve will ultimately be the same problems that people who want to bring relief to those with motor pathologies need to solve. If you do it at scale, lots of things fall out that wouldn’t have otherwise.

AMLG: It’s fascinating because you’re starting with “we’re gonna go big.” You’ve said you would like your devices, whether sold by you or by partners, to be on a million people within three or four years. A lot of things start in the realm of science but don’t get commercialized on a large scale. When you launched Internet Explorer, at one point it had 95 percent market share, so you’ve touched that many people before —

Internet Explorer browser market share, 2002–2016

TR: Yes and it’s addicting, when you’ve been able to put software into a billion plus hands. That’s the kind of scale that you want to work on and that’s the kind of impact that I want to have and the team wants to have.

AMLG: How do you get something like this to that scale?

TR: One user at a time. You pick segments in which there are serious problems to solve, and proximal problems. You’ve talked about VR. We think we solve a key problem in virtual reality, augmented reality, mixed reality: these emerging, immersive computing paradigms. No immersive computing technology so far has won. There is no default. There’s no standard. Nobody’s pointing at anything and saying “oh, I can already see how that’s the one that’s going to win.” It’s not Oculus, it’s not Microsoft HoloLens, it’s not Magic Leap. But the investment is still happening, and we’re now years into this new round of virtual realities. The investment is happening because people still have a hunger for it. We know we want immersive computing to work. What’s not working? It’s kind of obvious. We designed all of these experiences to get data, images, sounds into you: the human input problem. These immersive technologies do breakthrough work to change human input. But they’ve done nothing so far to change human output. That’s where we come in. You can’t have a successful immersive computing platform without solving the human output problem: how do I control this? How do I express my intentions? How do I express language inside of virtual reality? Am I typing or am I not typing?

AMLG: Everyone’s doing the iPad right now. You go into VR and you’re holding a thing that’s mimicking the real world.

TR: What we call skeuomorphic experiences that mimic real life, and that’s terrible. The first developer kits for the Oculus Rift, you know, shipped with an Xbox controller. Oh my god, is that dumb. There’s a myth that the only way to create a new technology is to make sure it has a deep bridge to the past. I call bullshit on that. We’ve been stuck in that model, and it’s one of the diseases of the venture world: “we’re Uber for neurons,” and it’s Uber for this or that.

AMLG: Well, ironically, people are afraid to take risks in venture. If you suddenly design a new way of communicating or of doing human output, it’s “that’s pretty risky, it should look more like the last thing.”

TR: I’m deeply thankful to the firms that stepped up to fund us, Spark and Matrix and most recently Lux and Google Ventures. We’ve got venture folks who want to look around the bend and make a big bet on a big future.


Source: Tech Crunch

Google walkout organizer: ‘I hope I still have a career in Silicon Valley after this’

Shouting “women’s rights are workers’ rights” and a number of other #TimesUp and #MeToo chants, upwards of 1,000 Google employees gathered in San Francisco’s Harry Bridges Plaza Thursday to protest the company’s handling of sexual harassment and misconduct cases.

Staffers from all of Google’s San Francisco offices were in attendance. An organizer, who declined to be named, told TechCrunch that 1,500 Google employees across the globe participated in the 48-hour effort to arrange a worldwide walkout. The effort was a major success: more than 3,000 Googlers and supporters of the movement attended the New York City walkout alone, and as many as 1,000 Googlers and others came out for the San Francisco walkout, which, the organizers said, was double the number they expected.

Cathay Bi, a Google employee in San Francisco and one of the walkout organizers, told a group of journalists at the rally that she was conflicted about participating in the walkout and ultimately decided not to go public with her own story of sexual harassment.

“I experienced sexual harassment at Google and I didn’t feel safe talking about it,” said Bi, pictured above. “That feeling of not being safe is why I’m out here today. I’d love it if everyone felt safe talking about it.”

“There were many times over the course of the last 24 hours that I emailed the group and said ‘I’m not doing this because I’m scared’ but that fear is something everyone else feels,” she said. “I said to myself last night, I hope I still have a career in Silicon Valley after this.”

Other organizers declined to go on the record.

There were protests around the globe today, including in London, Dublin, Montreal, Singapore, New York City, San Francisco, Seattle and Cambridge, following a New York Times investigation that revealed Google had given Android co-creator Andy Rubin a $90 million exit package despite multiple relationships with other Google staffers and credible accusations of sexual misconduct made against him. That story, coupled with tech’s well-established issue of harassment and discrimination toward women and underrepresented minorities, was a catalyst for today’s rallies.

At the rally, Googlers read off their list of demands, which includes an end to forced arbitration in cases of harassment and discrimination, a commitment to end pay and opportunity inequity and a clear, inclusive process for reporting sexual misconduct safely and anonymously.

They’re also requesting that the search giant promote chief diversity officer Danielle Brown to a role in which she reports directly to chief executive officer Sundar Pichai, as well as the addition of an employee representative to the company’s board of directors.

Here’s the statement from Pichai Google provided to TechCrunch this morning: “Earlier this week, we let Googlers know that we are aware of the activities planned for today and that employees will have the support they need if they wish to participate. Employees have raised constructive ideas for how we can improve our policies and our processes going forward. We are taking in all their feedback so we can turn these ideas into action.”

Now, employees around the globe will await Google’s highly anticipated course of “action.”

“These types of changes don’t happen overnight,” Bi said. “If we expected them overnight we would have the wrong expectations of how these movements take place.”


Source: Tech Crunch

Walmart adds an AR scanner to its iOS app for product comparisons

Walmart is giving augmented reality a shot. The retailer today announced the launch of a new AR scanning tool in its iPhone application that will help customers with product comparisons. However, unlike a typical barcode scanner meant only to compare prices on one item at a time, Walmart’s AR scanner can be panned across store shelves, offering details on pricing and customer ratings beneath the products it sees.

The technology was first developed by a team at an internal Walmart hackathon using Apple’s ARKit technology. At the time, their idea was to create a scanning experience that worked and felt faster for customers. They also wanted to build a scanner that offered more than just price comparisons.

“Walmart store shoppers love using our mobile app barcode scanner as a price checker. Our team sees the potential of this product as so much more, though,” explains Tim Sears, senior engineering manager at Walmart Labs, in a post announcing the feature’s launch. “When a customer launches the scanner, they get a direct connection between the digital and the physical world that their screen and camera lens creates for them,” he says.

The team won the hackathon, then went on to further redesign the experience to become the one that’s live today in Walmart’s application.

To use the scanner, you launch the feature in the Walmart app, then point it at the products on the shelf you want to compare. As you move the phone between one item and the other, the product tile at the bottom of the screen will update with information, including the product name, price and star rating across however many reviews it has received on Walmart.com. A link to related products is also available.

The AR scanner anchors small dots to the items you’ve scanned rather than pinning the full product details to the product itself, which avoids the clutter that could occur when multiple items are scanned together in a close space.

Despite the supposed advantages of AR scanning over a simpler barcode scan, it still remains to be seen to what extent consumers will adopt the feature now that it’s live.

Walmart isn’t the only retailer to give AR a go. Others have used it in various ways, including Amazon, Target, Wayfair and many more. But in several cases, AR’s adoption by retailers has been focused on visualizing products in your home, or — in the case of Target’s AR “studio” — makeup on your face.

Walmart’s AR scanner goes after a more practical use.

The AR scanner is in the latest version of the Walmart iOS app (18.20 and higher) and works on iPhones running at least iOS 11.3. The latter requirement is due to its use of ARKit 1.5, which will limit the audience largely to those with newer iPhones.


Source: Tech Crunch

FabFitFun surpasses $200 million in revenue as it hits million-customer milestone

At least one million people will be receiving the next FabFitFun box as the Los Angeles company surpasses $200 million in revenue and continues its run as one of the startups to watch in the Los Angeles tech community. 

As it renews its focus on media — doubling down on new programming in a bid to reach further into repeatable revenue through subscriptions that encompass more than just retail — the company is trying to frame itself as more than just makeup and accessories in a box.

“When we think of the potential behind the business … there are a few businesses in the world for whom membership is a no brainer. Netflix is, Spotify is and we think FabFitFun is a no brainer,” said Daniel Broukhim, co-founder and co-chief executive of FabFitFun.

The company’s reach spans demographics and geography, according to co-chief executives (and brothers) Daniel and Michael Broukhim, with users ranging in age from 15 to 85 and subscriptions covering all 50 states.

“FabFitFun is truly for every woman – whether you are a millennial, a mom of three, or a fashion-forward 50 year old; we see this milestone as a celebration of the diversity of our members and that’s why we launched the #IamFabFitFun initiative,” said Katie Rosen Kitchens, the company’s other co-founder and editor in chief, in a statement. “We have members from all walks of life – from nurses to lawyers, software developers, police officers, makeup artists, fashion designers, dog walkers, interior designers and more.”

It began as a media business reviewing new products and has only taken on a small amount of venture funding since its inception, but the business has become a social phenomenon and has moved into retail, launching brands like ISH, Summer & Rose and Chic & Tonic.

There will be a pop-up with Macy’s department stores during the holiday season to merge the subscription business with brick-and-mortar retail, and the company is expanding further into health and wellness.

“We think about it in the context of a lifestyle,” said Michael Broukhim. “We’ve only been doing a low-level pilot on the fashion side. In the food space we’ve had snack vendors who have snacks in the main box. There’s a FabFitFun way to shop, [with] a discovery orientation. We do the heavy lifting for you and become storytellers in curating your life for you.”

The goal is to become a curator for more of its members’ interests, he said. “We want to do that for pretty much everything that someone consumes,” Michael said. “There’s the everything approach where you know what you want and you type it in and you click it and you get it delivered in maybe two days. Then there’s the FabFitFun approach…. it’s a trusted relationship where we learn about [you] all the time and put as much of the process on autopilot as possible.”

With all of that, replenishment is not a focus for the company at the moment. “It’s upside for our brand partners,” said Daniel. “We’re helping their products get discovered. Then those members go to the brand partners and can continue on those relationships.”


Source: Tech Crunch

After canceling ‘Rift 2’ overhaul, Oculus plans a modest update to flagship VR headset

Facebook’s virtual reality arm may soon find itself in the unfamiliar position of playing catch-up with hardware competitors.

Last week, TechCrunch reported that Oculus co-founder Brendan Iribe had decided to leave Facebook partially due to his “fundamentally different views on the future of Oculus” and decisions surrounding the cancellation of a next-generation “Rift 2” project.

The company’s prototype “Rift 2” device, codenamed Caspar, was a “complete redesign” of the original Rift headset, a source familiar with the matter tells us. Its cancellation signified an interest by Facebook leadership to focus on more accessible improvements to the core Rift experience that wouldn’t require the latest PC hardware to function. Iribe did not agree with the direction, with a source telling us that he was specifically not interested in “offering compromised experiences that provided short-term user growth but sacrificed on comfort and performance.”

Former Oculus CEO Brendan Iribe sharing details on the Oculus Rift in 2015

In the wake of the overhaul’s cancellation, the company will be pursuing a more modest product update — possibly called the “Rift S” — to be released as early as next year, which makes minor upgrades to the device’s display resolution while, more notably, getting rid of the external sensor tracking system, sources tell us. Instead, the headset will utilize the integrated “inside-out” Insight tracking system which is core to Facebook’s recently announced Oculus Quest standalone headset.

The “Constellation” tracking system on the current-generation Rift offers precise accuracy thanks to the static external sensors that track the headset and Touch controllers. While the Insight system would likely offer users a much more simplified setup process, a clear pain point of the first-generation product, “inside-out” tracking systems have greater limitations when it comes to the lighting conditions they work in and are generally less accurate than systems with external trackers.

While Oculus has long led the way on hardware advances, this release could be seen as the company playing catch-up with competitors like Microsoft, which has partnered with OEMs including Samsung, Lenovo and LG to release headsets on its Windows Mixed Reality platform that also feature inside-out tracking as well as higher resolution displays than the Oculus Rift.

“While we don’t comment on rumors/speculation about our future products, as we shared last week, PC VR remains a part of our strategy and is a category we will continue to invest in. In addition to hardware, we have a robust software roadmap and are funding content well into 2020,” an Oculus spokesperson told TechCrunch.

Facebook CEO Mark Zuckerberg introducing the $399 Oculus Quest

There are some clear benefits for Oculus pushing iterative hardware in an iPhone-like “S” manner, especially around affordability, as a more drawn out device life cycle gives both Oculus and PC component manufacturers time to reduce VR’s high barrier to entry in terms of cost.

The cancellation of its Caspar “Rift 2” project does suggest a less aggressive pace of innovation for the company’s flagship premium VR product. The move away from a redesign could alienate early adopters and send them to other platforms. It could also lead Oculus into a situation where new titles that take advantage of the latest systems aren’t compatible with Rift hardware.

At its Oculus Connect developer conference, Facebook CEO Mark Zuckerberg shared that the Oculus Rift, Quest and Go represented “the completion of its first generation of VR products.” As Zuckerberg continues to double down on his long-term goal of bringing 1 billion users into VR, the need to build the Oculus user base is growing more important, but it’s unclear how essential the company believes leading the high-end PC VR market is to defining that early mainstream success.


Source: Tech Crunch

SpaceX shuffles Starlink leadership, hoping to accelerate launch

SpaceX is changing the lineup at the Seattle-based offices of Starlink, the company’s nascent satellite broadband division. A Reuters report depicts a whirlwind visit by CEO Elon Musk as a middle management bloodbath, but SpaceX says it’s just the usual fast-moving space company stuff.

Starlink plans to put thousands of satellites into space with which to blanket the world in broadband — SpaceX isn’t the only aspirant to this plan, but it is farther along than some. It launched a pair of prototype satellites in February, Tintin A and B, which are reportedly working perfectly well as ongoing test platforms.

Space is no place to rush into, however; that caution clashed with aggressive timelines set by Musk years ago that apparently were not quite being met. Reuters reported that several leads on the project were pushing for more testing, and that Musk visited Seattle to provide a kick in the pants.

Among those reported fired were VP of satellites Rajeev Badyal and designer Mark Krebs, both of whom have overseen the project through and after launch. SpaceX did not directly confirm their departures but confirmed that Starlink had seen significant restructuring.

“We have incorporated lessons learned and re-organized to allow for the next design iteration to be flown in short order,” a SpaceX representative told TechCrunch, saying the move was consistent with the “rapid iteration design and testing” the company is known for.

Will it be enough to put more birds in the air by mid-2019, as Musk hopes? That remains to be seen, but the SpaceX strategy of launching early and often has so far paid off in the long run, so perhaps this maneuver will as well.


Source: Tech Crunch

CBS launches a streaming entertainment network, ET Live

CBS is today launching another streaming network, this time focused on entertainment news. The service, which is called ET Live, was developed by CBS Interactive and CBS TV’s “Entertainment Tonight” news magazine, and will be available both as a standalone app as well as a part of the CBS streaming app aimed at cord cutters, CBS All Access.

The new service will deliver 24/7 coverage of entertainment news, including breaking news, celebrity interviews, features, behind-the-scenes and red carpet coverage, plus trends stories across celebrity fashion, beauty and lifestyle.

The content isn’t just a rehash of the “Entertainment Tonight” on-air broadcast, the network claims. Instead, it will feature original programming and a roster of new hosts, including Lauren Zima, Denny Directo, Cassie DiLaura, Tanner Thomason, Jason Carter and Melicia Johnson.

The flagship show’s current hosts — Nancy O’Dell, Kevin Frazier, Nischelle Turner and Keltie Knight — will make regular appearances, however, to promote what’s up next and other exclusives.

At launch, the service is available on its own website at ETLive.com and through an ET Live app on iOS, Android, Apple TV and Amazon Fire TV, with more platforms expected in the future.

It’s also being integrated into CBS All Access’s live feed across platforms, and as a feed within CBSN, the network’s 24/7 streaming news service.

The new streaming network is the latest of several launches aimed at bringing more CBS content to a new generation of viewers who no longer tune in to traditional pay TV.

A few months ago, CBS debuted a portfolio of streaming services under the brand CBS Local. These help deliver local news to cord cutters and other digital media consumers, including its CBS All Access subscribers. It also operates news network CBSN, which it added to CBS All Access last year. And it launched streaming sports news service CBS Sports HQ earlier this year. This can now also be found in CBS All Access.

Like CBSN, CBS Sports HQ and your local CBS News (where available), the new ET Live feed is available in the “Live” section of the CBS All Access app. Users can toggle between the various live streams with a tap, then can choose to watch live or jump back to watch previous segments on demand.

ET made sense as the next brand to transition to over-the-top viewers because of its existing reach, including on digital platforms. The TV show has nearly 5 million daily viewers, while the ETonline.com website averages 20 million monthly U.S. uniques, per comScore. Its social audience is even larger, with more than 70 million U.S. users monthly, the network says.

“From CBS All Access to CBSN and CBS Sports HQ, we are dedicated to bringing consumers best-in-class streaming services,” said Rob Gelick, executive vice president and general manager, CBS Entertainment Digital for CBS Interactive, in a statement about the launch.

“ET Live is a natural expansion of our strategy and expertise in this area. We have the great advantage of being able to apply key learnings from our leading digital entertainment properties and marry that with the #1 entertainment brand in ‘Entertainment Tonight’ to create a new offering for the next generation of entertainment consumers, those that are platform-agnostic and expect content to be accessible anytime, anywhere,” he said.


Source: Tech Crunch