Elon Musk says Tesla won’t make e-scooters, but might consider electric bikes

Tesla won’t be joining the scooter wars. But electric bikes? Yeah, maybe.

During a lengthy podcast with Recode’s Kara Swisher, Tesla CEO Elon Musk talked about everything from AI and his fights on Twitter with journalists to Saudi Arabia and Mars. Even scooters. Of course, scooters.

But don’t get your hopes up for a Tesla scooter. According to Musk, scooters lack dignity, and Swisher’s persistence on the topic wasn’t enough to change his mind.

Here’s the exchange. You can listen to the entire 80-minute session here.

Kara: Make a scooter. Make a scooter and I’ll go for it. They actually are electric, what am I talking about?

Elon: I don’t know, there was some people in the studio who wanted to make a scooter, but I was like, “Uh, no.”

Kara: I love the scooter, no, get on the scooter.

Elon: It lacks dignity.

Kara: No, it doesn’t lack dignity.

Elon: Yes, they do.

Kara: They don’t lack dignity, what are you talking about?

Elon: Have you tried driving one of those things? They —

Kara: Yes, I do it all the time, I look fantastic.

Elon: They do not, you are laboring under an illusion.

Kara: All right, well, everybody at Lime, don’t worry, Elon Musk is not coming for you.

Elon: Electric bike. I think we might do an electric bike, yeah.


Source: Tech Crunch

TikTok surpassed Facebook, Instagram, Snapchat & YouTube in downloads last month

Beijing-based ByteDance’s 2017 acquisition of tween and teen-focused social app Musical.ly is paying off. The company this year merged Musical.ly with its own short video app TikTok as a means of entering the U.S. market. Today, the result of that merger is sitting at the top of the U.S. App Store, ahead of Facebook. More importantly, in September it surpassed Facebook, Instagram, YouTube and Snapchat in monthly installs for the first time.

According to data from app intelligence firm Sensor Tower, TikTok’s installs were higher than those of Facebook, Instagram, Snapchat and YouTube in the U.S. last month.

It surpassed the four other apps in terms of daily downloads on September 29, accounting for 29.7 percent of the downloads from this cohort of apps, the firm says.

Since then, it has continued to increase its market share among this group of apps, reaching as high as 42.4 percent of downloads among the apps just days ago, on October 30.

In September, TikTok’s installs grew around 31 percent from the prior month to reach approximately 3.81 million on the App Store and Google Play combined. This beat No. 2 Facebook, which had 3.53 million first-time installs.

Year-over-year, TikTok’s U.S. installs were up 237 percent from 1.13 million in October 2017.
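For readers who want to sanity-check those Sensor Tower numbers, here is a quick back-of-the-envelope sketch in Python. The September, Facebook and October 2017 figures are the ones quoted above; the prior-month baseline is inferred from the reported ~31 percent growth rather than reported directly.

```python
# Back-of-the-envelope check on the Sensor Tower figures quoted above.
# The prior-month baseline is inferred from the reported ~31% growth.

september_installs = 3.81e6      # TikTok US installs, App Store + Google Play combined
facebook_installs = 3.53e6       # Facebook first-time installs in the same period
october_2017_installs = 1.13e6   # TikTok's year-ago baseline

implied_prior_month = september_installs / 1.31
print(f"Implied prior-month installs: ~{implied_prior_month / 1e6:.2f}M")   # ~2.91M

yoy_growth = september_installs / october_2017_installs - 1
print(f"Implied year-over-year growth: ~{yoy_growth:.0%}")                  # ~237%

lead_over_facebook = september_installs - facebook_installs
print(f"Lead over Facebook: ~{lead_over_facebook / 1e3:.0f}K installs")     # ~280K
```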

As floods of new users join TikTok, the app has also flirted with passing some of these leading social apps in the App Store’s Top Charts, at times, too. Today, it’s ahead of Facebook (No. 7) and Messenger (No. 5) as it sits in the No. 4 position, for example. But it’s behind YouTube (No. 1), Instagram (No. 2), and Snapchat (No. 3).

However, at other times it’s gotten as high as No. 3 in the Overall Free Apps Top Chart, according to App Annie data.


App researcher Apptopia reports similar findings in terms of TikTok’s surge. However, it noted that the app’s engagement rate (the portion of monthly users who open the app daily) was still behind the rest of the group. Apptopia said TikTok had a 29 percent engagement rate, compared with 96 percent for Facebook and 95 percent each for Instagram, Snapchat and YouTube.
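Apptopia’s engagement metric above is just the familiar DAU/MAU ratio. A minimal sketch, with hypothetical user counts chosen only to reproduce the reported 29 percent figure:

```python
# Engagement rate as Apptopia defines it above: the share of monthly active
# users who open the app on a given day (the DAU/MAU ratio).

def engagement_rate(daily_active_users: int, monthly_active_users: int) -> float:
    return daily_active_users / monthly_active_users

# Hypothetical numbers, picked only so the ratio lands at TikTok's reported 29%.
print(f"{engagement_rate(29_000_000, 100_000_000):.0%}")  # 29%
```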

It also noted the app’s gains have come, in part, from increased ad spend across Facebook, Google’s mobile ad platform AdMob, in-app ad platform Vungle, and others. Other gains are attributed to the merger.

In June, TikTok (known as Douyin in China) reported reaching a global monthly active user count of 500 million across 150 countries and regions, which is around the time when Instagram reached one billion monthly actives, for comparison’s sake.


Source: Tech Crunch

Village Global’s accelerator introduces founders to Bill Gates, Reid Hoffman, Eric Schmidt and more

Village Global is leveraging its network of tech luminaries to support the next generation of entrepreneurs.

The $100 million early-stage venture capital firm, which counts Microsoft’s Bill Gates, Facebook’s Mark Zuckerberg, Alphabet’s Eric Schmidt, Amazon’s Jeff Bezos, LinkedIn’s Reid Hoffman and many other high-profile techies as limited partners (LPs), quietly announced on Friday that the accelerator it piloted earlier this year would become a permanent fixture.

Called Network Catalyst, Village provides formation-stage startups with $150,000 and three months of programming in exchange for 7 percent equity. Its key offering, however, is access to its impressive roster of LPs.

To formally announce Network Catalyst, Village brought none other than Bill Gates to San Francisco for a fireside chat with Eventbrite CEO Julia Hartz. During the hour-long talk, Gates handed out candid advice on building a successful company, insights on philanthropy and predictions on the future of technology. He later met individually with the founders of Village’s portfolio companies.

“I have a fairly hardcore view that there should be a very large sacrifice made during those early years,” Gates said. “In those early years, you need to have a team that’s pretty maniacal about the company.”

During the Q&A session, Gates retold one of his favorite anecdotes. In the early days of Microsoft, he would memorize his employees’ license plates so he knew when they were coming and going, quietly noting who was working the longest hours. He admitted, to no one’s surprise, that he struggled with work-life balance.

“I think you can over worship the idea of working extremely hard,” he said. “For my particular makeup, it’s really true I didn’t believe in weekends or vacations … Once I got in my 30s, I could hardly imagine how I’d done that because by then some natural thing inside of me kicked in and I loved weekends and my girlfriend liked vacations and that turned out to be a great thing.”

Gates has been an active investor in Village since it emerged one year ago. VMware co-founder Diane Greene, Disney CEO Bob Iger and Spanx CEO Sara Blakely are also on the firm’s long list of LPs.

Village is led by four general partners: Erik Torenberg, Product Hunt’s first employee; LinkedIn’s former chief of staff Ben Casnocha; Chegg’s former chief business officer Anne Dwane; and former Canaan partner Ross Fubini. They initially filed to raise a $50 million fund in mid-2017 but ultimately closed on $100 million in March. The firm relies heavily on scouts — angel investors and others knowledgeable about the startup world — to source deals. The scouts, in return, earn a portion of the firm’s returns.

Former Alphabet chairman Eric Schmidt.

An accelerator program has been part of Village’s plan since the beginning.

Pinterest CEO Ben Silbermann, Fidelity CEO Abby Johnson, Hoffman, Iger, Blakely and Schmidt all worked with Network Catalyst’s debut cohort of founders. Village co-founder Anne Dwane said Hoffman and former Twitter CEO Ev Williams have signed on to work with the next cohort.

“It is about contacts, not content,” Dwane told TechCrunch. “The most important thing is who you can meet to help you take your business forward.”

San Francisco-based VeriSIM, a startup building AI-enabled biosimulation models, was among the debut class of companies that participated in Network Catalyst. Jo Varshney, the company’s founder and CEO, said the accelerator’s personalization and customization set it apart from competing options.

“It seemed like I had a team of people working alongside me even though I’m a solo founder,” Varshney told TechCrunch.

After Varshney completed the program, Schmidt introduced her to a number of investors. She quickly closed a $1.5 million seed round.

“One year in and I already have a one-on-one meeting with Bill Gates,” she added.

Applications for the accelerator close on Dec. 7 with programming kicking off Jan. 14. Village plans to enroll at least 12 companies across industries.


Source: Tech Crunch

Twitter hires God-is Rivera as global director of culture and community

Twitter has brought on its first-ever global director of culture and community, God-is Rivera. In the role, Rivera will report to Nola Weinstein, Twitter’s global head of culture, engagement and experiential. Rivera previously led internal diversity and inclusion efforts at VMLY&R, a digital and creative agency.

“As a black woman who has worked in industries in which I have been underrepresented, I feel a great responsibility to amplify and support diverse communities, and they exist in full force on Twitter,” Rivera said in a statement. “The team has shown a passion to serve and spotlight their most active users and I am honored to step into this new role as a part of that commitment.”

For context, 26 percent of U.S. adults who identify as black use Twitter, while 24 percent of white-identified adults and 20 percent of Latinx-identified adults in the U.S. use Twitter, according to a March 2018 survey from Pew Research Center.

At Twitter, the plan is for Rivera to “better serve and engage communities” on Twitter through the company’s brand marketing, campaigns, events and other experiences. Internally, Rivera will be tasked with ensuring Twitter’s campaigns and programs are inclusive and “reflective of the communities we serve,” according to Twitter’s press release. Externally, Rivera will be responsible for developing relationships and programs with content creators, community leaders, brands and more — similar to the company’s existing relationship with HBO’s Insecure.

Here’s the internal note Weinstein sent to Twitter employees earlier today:

Team,

I am so excited to welcome @GodisRivera to the team as Twitter’s new Global Director of Culture & Community. She captivated us at #OneTeam with her enlightening presentation on #BlackTwitter and we are thrilled that she will now be bringing her passion and perspective inside.

In this newly created role, God-is will help lead our efforts to better serve and engage the powerful voices and global communities who take to Twitter to share, discover and discuss what matters to them. This will come to life through Twitter’s brand efforts, campaigns, events and experiences. She will help ensure that our programs are connective, inclusive and reflective of the communities we serve. You can imagine more efforts that engage and excite our communities like #HereWeAre, #NBATwitter, thoughtful tweetups, etc.

God-is’ deep expertise in marketing and social strategy, cultural understanding and ability to elevate and connect communities makes for a rare and incredibly powerful combination. She was previously Director, Inclusion and Cultural Resonance at VMLY&R, where she led internal diversity efforts to fuse the importance of internal culture and representation to creative work outputs. In 2018, God-is was named an Ad Age “Woman to Watch” and Adweek “Disruptor” for continuing to fight for representation and equity in the advertising industry. She currently resides in New York, NY with her husband and daughter.

On a personal note, I have had the pleasure of spending time with God-is at #HereWeAre, #Influence, and #OneTeam and her energy, passion and positivity are infectious. I know her presence will make a difference and am excited by all that the culture & experiential team will create together.

God-is will start on November 12th and will be based in NYC reporting to me.

Please join me in welcoming her to the flock!


Source: Tech Crunch

The Minte raises $2.25 million in seed funding to bring hotel-style housekeeping to luxury residences

As an MBA student at the University of Chicago’s Booth School of Business, Kathleen Wilson was struck by an idea while studying businesses that provide daily housekeeping in one of her classes. Given the density and physical structure of many apartment buildings, she wondered why a housekeeper couldn’t similarly push a cart down the hall and spend an hour or less in each unit.

To test out her theory, Wilson and a classmate started cleaning the apartments of friends, spending 30 minutes to an hour at a time and trying to establish a reasonable price point for the work. Armed with enough data, Wilson then landed at a local real estate tech accelerator, formed her company, and locked down her first property management company client, Waterton — and her efforts have been gaining momentum since.

In fact, her 20-month-old startup, The Minte, which now employs roughly 60 people, is today announcing that it has raised $2.25 million in a round that brings the company’s total seed funding to $4.7 million. Dundee Venture Capital led this newest round; other investors in the company include MATH Venture Partners, Revolution’s Rise of the Rest Seed Fund, Firebrand Ventures, Blue Note Ventures and numerous angel investors. We had a quick chat with Wilson earlier this week to learn more.

TC: Can you tell us a bit more about your customers? Are they all property management companies like Waterton?

KW: We only provide service to apartments and condos, so our clients are currently property management companies such as Greystar, Bozzuto, Lincoln Property Company, and CA Ventures. We have just under 70 properties in Chicago, another 20 in D.C., and we’ve been launching 6 to 10 new properties in each market each month.

TC: The Minte promises to make a housekeeper available to a property full-time, correct?

KW: Yes. A housekeeper is located on site for residents to book cleaning services with them, so that residents are provided with consistency and trust. To be clear, our housekeepers are full-time Minte employees with health benefits and paid time off. We keep our housekeeping cart and supplies at each property, and there’s a place for housekeepers to go if they have a bit of downtime, although that’s rare.

We do have some housekeepers who split their time between properties, either if the property is smaller or if we’re still in the first couple months of service and still building demand.

TC: What makes the company think people would prefer to work with The Minte versus housekeepers they know? These are trust-heavy relationships, a feature that other housecleaning startups have overlooked to their detriment.

KW: Exactly. We bring the personal trust by having the same housekeeper assigned to the property, which allows the housekeeper to get to know the residents, and we bring the corporate side of trust by having insurance, QA by managers, and the ability to send a backup housekeeper if someone is out sick. We also have top-notch, live customer service if there is ever an issue.

TC: What does your quality assurance process involve?

KW: It’s a multi-tier process. First, we’ve implemented an eight-day training program for all new housekeepers. Second, housekeepers and housekeeping managers with whom we work almost always have hotel backgrounds, having worked at the Waldorf Astoria, The Conrad, and Sofitel, to name a few. Third, housekeeping managers do random spot checks of service. And fourth, users can rate and comment on every service, which we review in real time. It’s company policy to reach out to the resident any time something is less than four stars.

Also worth mentioning: our products are eco-friendly, P&G products, so there’s no compromise on the quality of our supplies.

TC: How do clients pay, and how much do they pay? Is this a subscription model?

KW: They can pay à la carte — paying $30 for a hotel-style service, $90 for a deep clean for a one-bedroom apartment, for example — but over half of our cleans are residents who are on a recurring package. For customers on a package, they can customize how many deep cleans and/or hotel-style cleans they have every four weeks, including which days those cleans occur.

TC: The home services model is more prone to leakage, meaning people form relationships and stop using the platform. Is this a concern?

KW: Our employees are full-time, so this is essentially a non-issue for us. With our housekeepers on our schedule throughout the entire week, it’s not feasible for someone to poach them.

Potentially a resident could do this on a weekend, but in our experience, people want housekeepers to come when they are not home. Furthermore, the property manager would tell us if our housekeeper was getting keys outside of their Minte schedule.

TC: And how are you marketing the company?

KW: Through our partnership with the property managers, primarily.

TC: How will you use your new funding?

KW: We’ll continue to enhance our tech. Our app is out this week, and we’re rolling out our smart home integration in the coming months. We’re making our button — which is physical hardware that goes on the wall inside each unit — more readily available. We’ll also expand more into condos and corporate housing and target our third city in early 2019.


Source: Tech Crunch

Zume reportedly snags $375 million from SoftBank for its robotic food operations

Zume, the robotics and logistics company that got its start slinging out pizza, just raised $375 million from SoftBank, the WSJ first reported. SoftBank is also reportedly looking to invest an additional $375 million, which would value the company at $2.25 billion. The round comes a couple of months after reports of SoftBank looking to invest anywhere from $500 million to $750 million in Zume.

Zume, which started back in 2015, uses robotics, artificial intelligence, automation and mobile kitchen technologies to predict food trends and serve up freshly cooked foods. The startup owns a patent for delivery trucks that can cook food while en route to customers.

Earlier this year, Zume created a larger umbrella company to house Zume Pizza, now a subsidiary of Zume. That marked Zume’s more ambitious push to move beyond pizza and license its technology to restaurants looking to deploy food trucks.

“Pizza was our prototype,” Zume CEO Alex Garden told TC’s Brian Heater back in April. “There’s no reason why this technology wouldn’t work for any restaurant or any food category. Any restaurant who wants to adopt our system can now easily do that. They don’t have to be experts in technology or appliance manufacturing. They can just be restaurateurs, who have a more flexible offering for customers.”

Zume had previously raised about $70 million in funding. I’ve reached out to Zume and SoftBank and will update this story if I hear back.


Source: Tech Crunch

Thomas Reardon and CTRL-Labs are building an API for the brain

From Elon Musk’s Neuralink to Bryan Johnson’s Kernel, a new wave of businesses is focusing on ways to access, read from and write to the brain.

The holy grail lies in how to do that without invasive implants, and how to do it for a mass market.

One company aiming to do just that is New York-based CTRL-labs, which recently closed a $28 million Series B. The team, comprising more than a dozen Ph.D.s, is decoding individual neurons and developing an electromyography-based armband that reads the nerve signals traveling from the brain to the fingers. These signals are then translated into desired intentions, enabling anything from thought-to-text to moving objects.

Scientists have known about electrical activity in the brain since Hans Berger first recorded it using an EEG in 1924, and the term “brain computer interface” (BCI) was coined as early as the 1970s by Jacques Vidal at UCLA. Since then most BCI applications have been tested in the military or medical realm. Although it’s still the early innings of neurotech commercialization, in recent years the pace of capital going in and company formation has picked up. 

For a conversation with Flux, I sat down with Thomas Reardon, the CEO of CTRL-labs, to discuss his journey to founding the company. Reardon explained why New York is the best place to build a machine-learning-based business right now and how he recruits top talent. He also shared what developers can expect when the CTRL-kit ships in Q1 and explained how a brain-control interface may well make the smartphone redundant.

An excerpt is published below. Full transcript on Medium.

AMLG: I’m excited to have Thomas Reardon on the show today. He is the co-founder and CEO of CTRL-labs, a company building the next generation of non-invasive neural computing here in Manhattan. He’s just cycled from uptown — thanks for coming down here to Chinatown. Reardon was previously the founder of a startup called Avogadro, which was acquired by Openwave. He also spent time at Microsoft, where he was project lead on Internet Explorer. He’s one of the founders of the World Wide Web Consortium, a body that has established many of the standards that still govern the Web, and he’s one of the architects of XML and CSS. Why don’t we get into your background, how you got to where you are today and why you’re most excited to be doing what you’re doing right now.

The W3C is an international standards organization founded and led by Tim Berners-Lee.

TR: My background — well I’m a bit of an old man so this is a longer story. I have a commercial software background. I didn’t go to college when I was younger. I started a company at 19 years old and ended up at Microsoft back in 1990, so this was before the Windows revolution stormed the world. I spent 10 years at Microsoft. The biggest part of that was starting up the Internet Explorer project and then leading the internet architecture effort at Microsoft so that’s how I ended up working on things like CSS and XML, some of the web nerds out there should be deeply familiar with those terms. Then after doing another company that focused on the mobile Internet, Phone.com and Openwave, where I served as CTO, I got a bit tired of the Web. I got fatigued at the sense that the Web was growing up not to introduce any new technology experience or any new computer science to the world. It was just transferring bones from one grave to another. We were reinventing everything that had been invented in the 80s and early 90s and webifying it but we weren’t creating new experiences. I got profoundly turned off by the evolution of the Web and what we were doing to put it on mobile devices. We weren’t creating new value for people. We weren’t solving new human problems. We were solving corporate problems. We were trying to create new leverage for the entrenched companies.

So I left tech in 2003. Effectively retired. I decided to go and get a proper college education. I went and studied Greek and Latin and got a degree in classics. Along the way I started studying neuroscience and was fascinated by the biology of neurons. This led me to grad school and doing a Ph.D. which I split across Duke and Columbia. I’d woken up sometime in 2005 or 2006 and was reading an article in The New York Times. It was something about a cell and I scratched my head and said, we all hear that term, we all talk about cells and cells in the body, but I have no idea what a cell really is. To the point where a New York Times article was too deep for me, and that almost embarrassed me and shocked me and led me down this path of studying biology in a deeper, almost molecular way.

AMLG: So you were really in the heart of it all when you were working at Microsoft and building your startup. Now you are building this company in New York — we’ve got Columbia and NYU and there’s a lot of commercial industries — does that feel different for you, building a company here?

TR: Well let’s look at the kind of company we’re building. We’re building a company which is at its heart about machine learning. We’re in an era in which every startup tries to have a slide in their deck that says something about ML, but most of them are a joke in comparison. This is the place in the world to build a company that has machine learning at its core. Between Columbia and NYU and now Cornell Tech, and the unbelievably deep bench of machine learning talent embedded in the finance industry, we have more ML people at an elite level in New York than any place on earth. It’s dramatic. Our ability to recruit here is unparalleled. We beat the big five all the time. We’re now 42 people and half of them are Ph.D. scientists. For every single one of them we were competing against Google, Facebook, Apple.

AMLG: Presumably this is a more interesting problem for them to work on. If they want to go work at Goldman in AI they can do that for a couple of years, make some dollars and then come back and do the interesting stuff.

TR: They can make a bigger salary but they will work on something that nobody in the rest of the world will ever get to hear about. The reason why people don’t talk about all this ML talent here is when it’s embedded in finance you never get to hear about it. It’s all secret. Underneath the waters. The work we’re doing and this new generation of companies that have ML at their core — even a company like Spotify is, on the one hand, fundamentally a licensing and copyright arbitrage company, but on the other hand what broke out for Spotify was their ML work. It was fundamental to the offer. That’s the kind of thing that’s happening in New York again and again now. There’s lots of companies — like a hardware company — that would be scary to build in New York. We have a significant hardware component to what we’re doing. It is hard to recruit A-team, world-class hardware folks in New York but we can get them. We recently hired the head of product from Peloton who formerly ran MakerBot.

AMLG: We support that and believe there’s a budding pool here. And I guess the third bench is neuro, which Columbia is very strong in.

Larry Abbott helped found the Center for Theoretical Neuroscience at Columbia.

TR: Yes, as is NYU. Neuroscience is in some sense the signature department at Columbia. The field breaks across two domains — the biological and the computational. Computational neuroscience is machine learning for real neurons, building operating computational models of how real neurons do their work. It’s the field that drives a lot of the breakthroughs in machine learning. We have these biologically inspired concepts in machine learning that come from computational neuroscience. Columbia has by far the top computational neuroscience group in the world and probably the top biological neuroscience group in the world. There are five Nobel Prize winners in the program and Larry Abbott, the legend of theoretical neuroscience. It’s an unbelievably deep bench.

AMLG: How do you recruit people that are smarter than you? This is a question that everyone listening wants to know.

Patrick Kaifosh, Thomas Reardon, Tim Machado the co-founders of CTRL-labs

TR: I’m not dumb but I’m not as smart as my co-founder and I’m not as smart as half of the scientific staff inside the company. I affectionately refer to my co-founder as a mutant. Patrick Kaifosh, who’s our chief scientist, is one of the smartest human beings I’ve ever known. Patrick is one of those generational people that can change our concept of what’s possible, and he does that in a first-principles way. The recruiting part is to engage people in a way that lets them know that you’re going to take all the crap away and allow them to work on the hardest problems with the best people.

AMLG: I believe it and I’ve met some of them. So what was the conversation with Kaifosh and Tim when you first sat down and decided to pursue the idea?

TR: So we were wrapping up our graduate studies, the three of us. We were looking at what it would be like to stay in academia and the bureaucracy involved in trying to be a working scientist in academia and writing grants. We were looking around at the young faculty members we saw at Columbia and thought, that doesn’t look like they’re having fun.

AMLG: When you were leaving Columbia it sounds like there wasn’t another company idea. Was it clear that this was the idea that you wanted to pursue at that time?

TR: What we knew is we wanted to do something collaborative. We did not think, let’s go build a brain machine interface. We don’t actually like that phrase, we like to call them neural interfaces. We didn’t think about neural interfaces at all. The second idea we had, an ingredient we put into the stew and started mixing up, was that we wanted to leverage experimental technologies from neuroscience that hadn’t yet been commercialized. In some sense this was like when Genentech was starting in the mid 70s. We had found the crystal structure of DNA back in the late 40s, there had been 30 years of molecular biology, we figured out DNA then RNA then protein synthesis then the ribosome. Thirty years of molecular biology but nobody had commercialized it yet. Then Genentech came along with this idea that we could make synthetic protein, that we could start to commercialize some of these core experimental techniques and do translation work and bring value back to humanity. It was all just sitting there on the shelf ready to be exploited.

We thought, OK, what are the technologies in neuroscience that we use at the bench that could be exploited? For instance spike sorting, the ability to listen with a single electrode to lots of neurons at the same time and see all the different electrical impulses and de-convolve them. You get this big noisy signal and you can see the individual neurons’ activity. So we started playing with that idea, let’s harvest the last 30 or 40 years of bench experimental neuroscience. What are the techniques that were invented that we could harvest?
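The “spike sorting” Reardon mentions is a standard bench technique, and a toy version is easy to sketch: detect threshold crossings in a noisy single-electrode trace, then group the detected spikes by amplitude as a crude stand-in for clustering. The sketch below runs on synthetic data and is purely illustrative; it is not CTRL-labs’ pipeline, and real pipelines filter, align and cluster on full waveform shape.

```python
# Toy spike sorting on a synthetic single-electrode trace. Illustrative only --
# not CTRL-labs' method. Real pipelines band-pass filter, align waveforms and
# cluster on waveform shape rather than peak amplitude alone.
import numpy as np

rng = np.random.default_rng(0)
fs = 10_000                                   # samples per second
trace = rng.normal(0.0, 1.0, fs)              # one second of baseline noise

# Inject single-sample spikes from two hypothetical neurons of different sizes.
for t in rng.integers(50, fs - 50, 20):
    trace[t] += 8.0                           # "neuron A"
for t in rng.integers(50, fs - 50, 20):
    trace[t] += 15.0                          # "neuron B"

# Detect local maxima that cross a simple amplitude threshold.
threshold = 5.0 * np.std(trace)
middle = trace[1:-1]
peaks = np.flatnonzero((middle > threshold) &
                       (middle >= trace[:-2]) &
                       (middle >= trace[2:])) + 1

# Crude "sorting": split detected spikes into two putative units by amplitude.
amplitudes = trace[peaks]
cutoff = amplitudes.mean()
unit_a = peaks[amplitudes < cutoff]
unit_b = peaks[amplitudes >= cutoff]
print(f"{len(peaks)} spikes detected: {len(unit_a)} assigned to unit A, "
      f"{len(unit_b)} to unit B")
```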

AMLG: We’ve been reading about these things and there’s been so much excitement about BMI, but you haven’t really seen things in market that people can hack around with. I don’t know why that gap hasn’t been filled. Does no one have the balls to go take these off the shelf and try and turn them into something or is it a timing question?

The brain has upper motor neurons in the cortex which map to lower motor neurons in the spinal cord, which send long axons down to contact the muscles. They release neurotransmitters that turn individual muscle fibers on and off. Motor units have a 1:1 correspondence with motor neurons. When motor neurons fire in the spinal cord, an output signal from the brain, you get a direct response in the muscle. If those EMG signals can be decoded, then you can decode the zeros and ones of the nervous system — action potentials.

TR: Some of this is chutzpah and some of it is timing. The technologies that we are leveraging weren’t fully developed for how we’re using them. We had to do some invention since we started the company three years ago. But they were far enough along that you could imagine the gap and come up with a way to cross the gap. How could we, for instance, decode an individual neuron using a technology called electromyography? Electromyography has been around for probably over a century and that’s the ability to —

AMLG: That’s what we call EMG.

TR: EMG. Yes, you can record the electrical activity of a muscle. EKG, electrocardiography, is basically EMG for the heart alone. You’re looking at the electrical activity of the heart muscles. We thought if you improve this legacy technology of EMG sufficiently, if you improve the signal-to-noise, you ought to be able to see the individual fibers of a muscle. If you know some neuroanatomy what you figure out is that the individual fibers correspond to individual neurons. And by listening to individual fibers we can now reconstruct the activity of individual neurons. That’s the root of a neural interface. The ability to listen to an individual neuron.

EEG toy “the Force Trainer”

AMLG: My family are Star Wars fans and we had a device one Christmas that we sat around playing with, the Force Trainer. If you put the device around your head and stare long enough the thing is supposed to move. Everything I’ve ever tried has been like that Force Trainer, a little frustrating —

TR: That’s EEG, electroencephalography. That’s when you put something on your skull and record the electrical activity. The waves of activity that happen in the cortex, in the outer part of your brain.

AMLG: And it doesn’t work well because the skull is too thick?

TR: There’s a bunch of reasons why it doesn’t work that well. The unfortunate thing is that when most people hear about it that’s one of the first things they think about like, oh well all my thinking is up here in the cortex right underneath my skull and that’s what you’re interfacing with. That is actually —

AMLG: A myth?

TR: Both a myth and the wrong approach. I’m going to have to go deep on this one because it’s subtle but important. The first thing is let’s just talk about the signal qualities of EEG versus what we’re doing, where we listen to individual neurons and do it without having to drill into your body or place an electrode inside of you. EEG is trying to listen to the activity of lots of neurons all at the same time, tens of thousands, hundreds of thousands of neurons, and kind of get a sense of what the roar of those neurons is. I liken it to sitting outside of Giants Stadium with a microphone trying to listen to a conversation in Section 23, Row 4, Seat 9. You can’t do it. At best you can tell that one of the teams scored; you hear the roar of the entire stadium. That’s basically what we have with EEG today. The ability to hear the roar. So for instance we say the easiest thing to decode with EEG is surprise. I could put a headset on you and tell if you’re surprised.

AMLG: That doesn’t seem too handy.

TR: Yup, not much more than that. Turns out surprise is this global brain state and your entire brain lights up. In every animal that we do this in, surprise looks the same — it’s a big global Christmas tree that lights up across the entire brain. But you can’t use that for control. And this cuts to the name of our company, CTRL-labs. I don’t just want to decode your state. I want to give you the ability to control things in the world in a way that feels magical. It feels like Star Wars. I want you to feel like the Star Wars Emperor. What we’re trying to do is give you control and a kind of control you’ve never experienced before.

The MYO armband by Canadian startup Thalmic Labs

AMLG: This is control over motion, right? Maybe you can clarify — other companies I’ve seen, like MYO, which was an armband, were really doing motion capture, capturing how you intended to gesture rather than what you were thinking about?

TR: Yeah. In some sense we’re a successor to MYO (Thalmic Labs) — if Thalmic had been built by neuroscientists you would have ended up on the path that we’re on now.

Thomas Reardon demonstrating Myo control

We have two regimes of control, one we call Myo control and the other we call Neuro control. Myo control is our ability to decode what ultimately becomes your movements — the electrical input to your muscles that causes them to contract and then, when you stop activating them, lets them slowly relax. We can decode the electrical activity that goes into those muscles even before the movement has started and even before it ends and recapitulate that in a virtual way. Neuro control is something else. It’s kind of exotic and you have to try it to believe it. We can get to the level of the electrical activity of neurons — individual neurons — and train you rapidly, on the order of seconds, to control something. So imagine you’re playing a video game and you want to push a button to hop like you’re playing Sonic the Hedgehog. I can train you in seconds to turn on a single neuron in your spinal cord to control that little thing.

AMLG: When I came to visit your lab in 2016 the guy had his hand out here. I tried it — it was an asteroid field.

TR: Asteroids, the old Atari game.

Patrick Kaifosh playing Asteroids — example of Neuro Control [from CTRL-labs, late 2017]

AMLG: Classic. And you’re doing fruit ninja now too? It gets harder and harder.

TR: It does get harder and harder. So the idea here is that rather than moving you can just turn these neurons on and off and control something. Really there’s no muscle activity at that point; you’re just activating individual neurons. They might release a little pulse, a little electrochemical transmission to the muscle, but the muscle can’t respond at that level. What you find out is rather than using your neurons to control, say, your five fingers, you can use your neurons to control 30 virtual fingers without actually moving your hand at all.

AMLG: What does that mean for neuroplasticity. Do you have to imagine the third hand fourth hand fifth hand, or your tail like in Avatar?

TR: This is why I focus on the concept of control. We’re not trying to decode what you’re “thinking.” I don’t know what a thought is and there’s nobody in neuroscience who does know what a thought is. Nobody. We don’t know what consciousness is and we don’t know what thoughts are. They don’t exist in one part of the brain. Your brain is one cohesive organ and that includes your spinal cord all the way up. All of that embodies thought.

Inside Out (2015, Pixar). Great movie. Not how the brain, thoughts or consciousness work

AMLG: That’s a pretty crazy thought as thoughts go. I’m trying to mull that one over.

TR: It is. I want to pound that home. There’s not this one place. There’s not a little chair (to refer to Dan Dennett) there’s not like a chair in a movie theater inside your brain where the real you sits watching what’s happening and directing it. No, there’s just your overall brain and you’re in there somewhere across all of it. It’s that collection of neurons together that give you this sense of consciousness.

What we do with Neuro Control and with CTRL-kit, the device that we’ve built, is give you feedback. We show you, by giving you direct feedback in real time — millisecond-level feedback — how to train a neuron to go move, say, a cursor up and down, to go chase something or to jump over something. The way this works is that we engage your motor nervous system. Your brain has a natural output port — a USB port if you will — that generates output. In some sense this is sad for people, but I have to tell you your brain doesn’t do anything except turn muscles on and off. That’s the final output of the brain. When you’re generating speech, when you’re blinking your eyes at me, when you’re folding your hands and using your hands to talk to me, when you’re moving around, when you’re feeding yourself — your brain is just turning muscles on and off. That’s it. There is nothing else. It does that via motor neurons. Most of those are in your spine. Those motor neurons, it’s not so much that they’re plastic — they’re adaptive. So motor control is this ability to use neurons for very adaptive tasks. Take a sip of water from that bottle right in front of you. Watch what you’re doing.

Intention capture — rather than going through devices to interact, CTRL-labs will take the electrical activity of the body and decode that directly, allowing us to use that high bandwidth information to interact with all output devices. [Watch Reardon’s full keynote at O’Reilly]

AMLG: Watch me spill it all over myself — 

TR: You’re taking a sip. Everything you just did with that bottle, you’ve never done that before. You’ve never done that task. In fact you just did a complicated thing: you actually put it around the microphone and had to use one hand, then use the other hand to take the cap off the bottle. You did all of that without thinking. There was no cognitive load involved in that. That bottle is different than any other bottle, it’s slippery, it’s got a certain temperature, the weight changes. Have you ever seen these robots try to pour water? It’s comical how difficult it is. You do it effortlessly, like you’re really good —

AMLG: Well I practiced a few times before we got here.

TR: Actually you did practice! The first year or two of your life, that’s all you were doing — practicing, to get ready for what you just did. Because when you’re born you can’t do that. You can’t control your hands, you can’t control your body. You actually do something called motor babbling where you just shake your hands around and move your legs and wiggle your fingers and you’re trying to create a map inside your brain of how your body works and to gain control. But gain flexible, adaptive control.

AMLG: That’s the natural training that babies do, which is sort of what you’re doing in terms of decoding?

TR: We are leveraging that same process you went through when you were a year to two years old to help you gain new skills that go beyond your muscles. So that was all about you learning how to control your muscles and do things. I want to emphasize that what you did again is more complex than anything else you do. It’s more complex than language, than math, than social skills. Of the eight billion people on earth who have a functioning nervous system, every one of them, no matter what their IQ, can do it really well. That’s the part of the brain that we’re interfacing with. That ability to adapt in real time to a task skillfully. That’s not plasticity in neuroscience. It’s adaptation.

AMLG: What does that mean in terms of the amount of decoding you’ve had to do. Because you’ve got a working demo. And I know that people have to train for their own individual use right?

Myo control attempts to understand what each of the 14 muscles in the arm is doing, then deconvolve the signal into individual channels that map to muscles. If it can build an accurate online map, CTRL-labs believes there is no reason to have a keyboard or mouse.

 

TR: In Myo control it works for anybody right out of the box. With Neuro control it adjusts to you. In fact the model that’s built is custom to you, it wouldn’t work on anybody else it wouldn’t work on your twin. Because your twin would train it differently. DNA is not determinative of your nervous output. What you have to realize is we haven’t decoded the brain —  there’s 15 billion neurons there. What we’ve done is created a very reduced but highly functional piece of hardware that listens to neurons in the spinal cord and gives you feedback that allows you to individually control those neurons.

When you think about the control that you exploit every day, it’s built up of two kinds of things: what we call continuous control — think of that as a joystick, left and right, and how much left, how much right. Those are continuous controls. Then we have discrete controls or symbols. Think of that as button pushing or typing. Every single control problem you face, and that’s what your day is filled with, whether taking a sip of water, walking down the street, getting in a car, driving a car — all of the control problems reduce to some combination of continuous control (swiping) and discrete control (button pushing). We have this ability to get you to train these synthetic forms of up, down, left, right dimensions, if you will, that allow you to control things without moving, but then allow you to move beyond the five fingers on your hand and get access to say 30 virtual fingers. What does that open up? Well, think about everything you control.
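To make the continuous-versus-discrete distinction concrete, here is a minimal, hypothetical sketch of how a decoded per-neuron firing rate could drive both primitives: a normalized rate acts as a one-dimensional joystick axis, and a threshold crossing acts as a button press. The rates, thresholds and function names are illustrative assumptions, not CTRL-labs’ API.

```python
# Hypothetical mapping from a decoded firing rate to the two control primitives
# described above: a continuous axis (joystick-like) and a discrete event
# (button-like). Purely illustrative; not CTRL-labs' actual interface.

def continuous_control(firing_rate_hz: float, max_rate_hz: float = 40.0) -> float:
    """Map a firing rate onto a -1..+1 axis, centered at half the max rate."""
    axis = (firing_rate_hz - max_rate_hz / 2) / (max_rate_hz / 2)
    return max(-1.0, min(1.0, axis))

def discrete_control(firing_rate_hz: float, threshold_hz: float = 30.0) -> bool:
    """Treat a firing rate above threshold as a 'button press'."""
    return firing_rate_hz >= threshold_hz

for rate in (5.0, 20.0, 35.0):
    print(f"{rate:>5.1f} Hz -> axis {continuous_control(rate):+.2f}, "
          f"pressed={discrete_control(rate)}")
```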

AMLG: I’m picturing 30 virtual fingers right now — and I do want to get into VR, there’s lots of forms one can take in there. The surprising thing to me in terms of target uses — and there are so many uses you can imagine for this in early populations — was that you didn’t start the company for clinical populations or motor pathologies, right? A lot of people have been working on bionics. I have a handicapped brother — I’ve been to his school and have seen the kids with all sorts of devices. They’re coming along, and obviously in the army they’ve been working on this. But you are not coming at it from that approach?

TR: Correct. We started the company almost ruthlessly focused on eight billion people. The market of eight billion. Not the market of a million or 10 million who have motor pathologies. In some sense this is the part that’s informed by my Microsoft time. So in the academy when you’re doing neuroscience research almost everybody focuses on pathologies, things that break in the nervous system and what we can do to help people and work around them. They’ll work on Parkinson’s or Alzheimer’s or ALS for motor pathologies. What commercial companies get to do is bring new kinds of deep technology to mass markets, but which then feed back to clinical communities. By pushing and making this stuff work at scale across eight billion people, the problems that we have to solve will ultimately be the same problems that people who want to bring relief to people with motor pathologies need to solve. If you do it at scale lots of things fall out that wouldn’t have otherwise fallen out.

AMLG: It’s fascinating because you’re starting with we’re gonna go big. You’ve said you would like your devices, whether sold by you or by partners, to be on a million people within three or four years. A lot of things start in the realm of science but don’t get commercialized on a large scale. When you launched Explorer, at one point it had 95 percent market share so you’ve touched that many people before — 

Internet Explorer browser market share, 2002–2016

TR: Yes and it’s addicting, when you’ve been able to put software into a billion plus hands. That’s the kind of scale that you want to work on and that’s the kind of impact that I want to have and the team wants to have.

AMLG: How do you get something like this to that scale?

TR: One user at a time. You pick segments in which there are serious problems to solve and proximal problems. You’ve talked about VR. We think we solve a key problem in virtual reality, augmented reality, mixed reality — these emerging, immersive computing paradigms. No immersive computing technology so far has won. There is no default. There’s no standard. Nobody’s pointing at anything and saying “oh I can already see how that’s the one that’s going to win.” It’s not Oculus, it’s not Microsoft HoloLens, it’s not Magic Leap. But the investment is still happening and we’re now years into this new round of virtual realities. The investment is happening because people still have a hunger for it. We know we want immersive computing to work. What’s not working? It’s kind of obvious. We designed all of these experiences to get data, images, sounds into you. The human input problem. These immersive technologies do breakthrough work to change human input. But they’ve done nothing so far to change human output. That’s where we come in. You can’t have a successful immersive computing platform without solving the human output problem of how do I control this? How do I express my intentions? How do I express language inside of virtual reality? Am I typing or am I not typing?

AMLG: Everyone’s doing the iPad right now. You go into VR and you’re holding a thing that’s mimicking the real world.

TR: What we call skeuomorphic experiences that mimic real life, and that’s terrible. The first developer kits for the Oculus Rift you know shipped with an Xbox controller. Oh my god is that dumb. There’s a myth that the only way to create a new technology is to make sure it has a deep bridge to the past. I call bullshit on that. We’ve been stuck in that model and it’s one of the diseases of the venture world, “we’re Uber for neurons” and it’s Uber for this or that.

AMLG: Well ironically people are afraid to take risks in venture. If you suddenly design a new way of communicating or doing human output it’s, “that’s pretty risky, it should look more like the last thing.”

TR: I’m deeply thankful to the firms that stepped up to fund us, Spark and Matrix and most recently Lux and Google Ventures. We’ve got venture folks who want to look around the bend and make a big bet on a big future.


Source: Tech Crunch

Google walkout organizer: ‘I hope I still have a career in Silicon Valley after this’

Shouting “women’s rights are workers’ rights” and a number of other #TimesUp and #MeToo chants, upwards of 1,000 Google employees gathered in San Francisco’s Harry Bridges Plaza Thursday to protest the company’s handling of sexual harassment and misconduct cases.

Staffers from all of Google’s San Francisco offices were in attendance. An organizer, who declined to be named, told TechCrunch that 1,500 Google employees across the globe participated in the 48-hour effort to arrange a worldwide walkout. The effort was a major success. More than 3,000 Googlers and supporters of the movement attended the New York City walkout alone. As many as 1,000 Googlers and others came out for the San Francisco walkout, which, the organizers said, was double the number they expected.

Cathay Bi, a Google employee in San Francisco and one of the walkout organizers, told a group of journalists at the rally that she was conflicted about participating in the walkout and ultimately decided not to go public with her own story of sexual harassment.

“I experienced sexual harassment at Google and I didn’t feel safe talking about it,” said Bi, pictured above. “That feeling of not being safe is why I’m out here today. I’d love it if everyone felt safe talking about it.”

“There were many times over the course of the last 24 hours that I emailed the group and said ‘I’m not doing this because I’m scared’ but that fear is something everyone else feels,” she said. “I said to myself last night, I hope I still have a career in Silicon Valley after this.”

Other organizers declined to go on the record.

There were protests around the globe today, including in London, Dublin, Montreal, Singapore, New York City, San Francisco, Seattle and Cambridge, following a New York Times investigation that revealed Google had given Android co-creator Andy Rubin a $90 million exit package despite multiple relationships with other Google staffers and credible accusations of sexual misconduct made against him. That story, coupled with tech’s well-established issue of harassment and discrimination toward women and underrepresented minorities, was a catalyst for today’s rallies.

At the rally, Googlers read off their list of demands, which includes an end to forced arbitration in cases of harassment and discrimination, a commitment to end pay and opportunity inequity and a clear, inclusive process for reporting sexual misconduct safely and anonymously.

They’re also requesting that the search giant promote chief diversity officer Danielle Brown to a role in which she reports directly to chief executive officer Sundar Pichai, as well as the addition of an employee representative to the company’s board of directors.

Here’s the statement from Pichai that Google provided to TechCrunch this morning: “Earlier this week, we let Googlers know that we are aware of the activities planned for today and that employees will have the support they need if they wish to participate. Employees have raised constructive ideas for how we can improve our policies and our processes going forward. We are taking in all their feedback so we can turn these ideas into action.”

Now, employees around the globe will await Google’s highly anticipated course of “action.”

“These types of changes don’t happen overnight,” Bi said. “If we expected them overnight we would have the wrong expectations of how these movements take place.”


Source: Tech Crunch

Walmart adds an AR scanner to its iOS app for product comparisons

Walmart is giving augmented reality a shot. The retailer today announced the launch of a new AR scanning tool in its iPhone application, which will help customers with product comparisons. However, unlike a typical barcode scanner meant only to compare prices on one item at a time, Walmart’s AR scanner can be panned across store shelves, offering details on pricing and customer ratings beneath the products it sees.

The technology was first developed by a team at an internal Walmart hackathon using Apple’s ARKit technology. At the time, their idea was to create a scanning experience that worked faster and felt faster when used by customers. They also wanted to build a scanner that offered more than just price comparisons.

“Walmart store shoppers love using our mobile app barcode scanner as a price checker. Our team sees the potential of this product as so much more, though,” explains Tim Sears, senior engineering manager at Walmart Labs, in a post announcing the feature’s launch. “When a customer launches the scanner, they get a direct connection between the digital and the physical world that their screen and camera lens creates for them,” he says.

The team won the hackathon, then went on to further redesign the experience to become the one that’s live today in Walmart’s application.

To use the scanner, you launch the feature in the Walmart app, then point it at the products on the shelf you want to compare. As you move the phone between one item and the other, the product tile at the bottom of the screen will update with information, including the product name, price and star rating across however many reviews it has received on Walmart.com. A link to related products is also available.

The AR scanner was designed to anchor small dots to what you’ve scanned, rather than anchoring the full product information to the item itself, to overcome the problems that could occur when multiple items are scanned together in a close space.

Despite the supposed advantages of AR scanning over a simpler barcode scan, it still remains to be seen to what extent consumers will adopt the feature now that it’s live.

Walmart isn’t the only retailer to give AR a go. Others have used it in various ways, including Amazon, Target, Wayfair and many more. But in several cases, AR’s adoption by retailers has been focused on visualizing products in your home, or — in the case of Target’s AR “studio” — makeup on your face.

Walmart’s AR scanner goes after a more practical use.

The AR scanner is in the latest version of the Walmart iOS app (18.20 and higher) and works on iPhones running at least iOS 11.3. The latter requirement is due to its use of ARKit 1.5 and will limit the audience largely to those with newer iPhones.


Source: Tech Crunch

FabFitFun surpasses $200 million in revenue as it hits million-customer milestone

At least one million people will be receiving the next FabFitFun box as the Los Angeles company surpasses $200 million in revenue and continues its run as one of the startups to watch in the city’s tech community.

As it renews its focus on media — doubling down on new programming in a bid to reach further into repeatable revenue through subscriptions that encompass more than just retail — the company is trying to frame itself as more than just makeup and accessories in a box.

“When we think of the potential behind the business … there are a few businesses in the world for whom membership is a no brainer. Netflix is, Spotify is and we think FabFitFun is a no brainer,” said Daniel Broukhim, co-founder and co-chief executive of FabFitFun.

The company’s reach spans demographics and geography, according to co-chief executives (and brothers) Daniel and Michael Broukhim, with users ranging in age from 15 to 85 and subscriptions covering all 50 states.

“FabFitFun is truly for every woman – whether you are a millennial, a mom of three, or a fashion-forward 50-year-old; we see this milestone as a celebration of the diversity of our members and that’s why we launched the #IamFabFitFun initiative,” said Katie Rosen Kitchens, the company’s other co-founder and editor in chief, in a statement. “We have members from all walks of life – from nurses to lawyers, software developers, police officers, makeup artists, fashion designers, dog walkers, interior designers and more.”

FabFitFun began as a media business reviewing new products and has only taken on a small amount of venture funding since its inception, but it has become a social phenomenon and has moved into retail, launching brands like ISH, Summer & Rose and Chic & Tonic.

There will be a pop-up with Macy’s department stores this holiday season to merge the subscription business with brick-and-mortar retail, and the company is expanding further into health and wellness.

“We think about it in the context of a lifestyle,” said Michael Broukhim. “We’ve only been doing a low-level pilot on the fashion side. In the food space we’ve had snack vendors who have snacks in the main box. There’s a FabFitFun way to shop, [with] a discovery orientation. We do the heavy lifting for you and become storytellers in curating your life for you.”

The goal is to become a curator for more of its members’ interests, he said. “We want to do that for pretty much everything that someone consumes,” Michael said. “There’s the everything approach where you know what you want and you type it in and you get it and you click it and you get it delivered in maybe two days. Then there’s the FabFitFun approach…. it’s a trusted relationship where we learn about all the time and put as much of the process on autopilot as possible.”

With all of that, replenishment is not a focus for the company at the moment. “It’s upside for our brand partners,” said Daniel. “We’re helping their products get discovered. Then those members go to the brand partners and can continue on those relationships.”


Source: Tech Crunch