Black Friday Sale: 2-for-1 passes to Disrupt Berlin

Synchronize your watches, startup fans, and get ready to score serious savings on passes to Disrupt Berlin 2019. For today only, you can get 2 passes for the price of one. Our Black Friday sale starts now and runs through 11:59pm CET on 29 November. Don’t miss out!

Simply purchase a pass to Disrupt Berlin now (Founder passes start at just €645 + VAT), and you’ll get two passes for the price of one. Split the cost with a colleague, gift the pass to a client or bring a member of your team to Disrupt. No matter how you choose to use that extra pass, you’ll reap extra value. Go BOGO — buy your passes — before the 24-hour clock runs out.

Now you and your buddy can get ready to make the most of two program- and opportunity-packed days in Berlin. Connection is the name of the game at Disrupt events, and there’s no better place to start promising conversations than Startup Alley. You’ll find hundreds of early-stage startups and sponsors exhibiting an array of products, platforms and services that span the tech spectrum.

Looking for customers, collaborators, incubators, investors? Need manufacturing advice or simply want to talk shop with other founders? Startup Alley has that and more. Be sure to check out the TC Top Picks — our hand-picked cohort of exceptional startups that represent the best in these specific tech categories: AI/Machine Learning, BioTech/HealthTech, Blockchain, FinTech, Mobility, Privacy/Security, Retail/eCommerce, Robotics/IoT/Hardware, SaaS and Social Impact & Education.

There’s plenty to experience outside the Alley, and the Disrupt Berlin agenda can help you make the most of your time. Be in the room when TechCrunch editors interview CEOs from companies such as Away, UiPath and Naspers, as well as leading investors from Atomico, SoftBank and GV.

If you’re a founder (aspiring or otherwise), don’t miss what goes down on the Extra Crunch stage. You’ll hear panelists discuss important startup trends and offer actionable tips and advice on topics like scaling a business, product management, raising money and building a brand.

There’s so much more to experience at Disrupt Berlin: The Hackathon, the always-epic Startup Battlefield pitch competition, workshops and Q&A Sessions. It all happens on 11-12 December, and now you have 24 hours to double up on value. Buy your pass before the clock runs out at 11:59pm CET on 29 November, and you’ll get a second one free. Go BOGO!

Is your company interested in sponsoring or exhibiting at Disrupt Berlin 2019? Contact our sponsorship sales team by filling out this form.

Source: Tech Crunch

Gift Guide: Black Friday tech deals that are actually worth considering

Ah, Black Friday. The day of a zillion “deals” — some good, many bad, most just meant to clear the shelves for next year’s models.

Hidden amidst ten thousand “LOWEST PRICE EVER! ONE DAY ONLY!” e-mails, though, are a handful of solid deals on legitimately good stuff.

Whether you’re trying to save some coin heading into Christmas or you just want to beef up your own gear collection, we’ve picked a few things that seemed worthwhile while trying to sift out most of the junk. We’ll add new deals throughout the day as we hear about them.

(Pro tip: Want to check if something on Amazon is actually on sale, or if they just tinkered with the price ahead of time to make it look like you’re getting a discount? Check a historical price checker like camelcamelcamel to see the price over time.)

Amazon Devices


Amazon generally slashes prices on its own devices to get the Black Friday train moving, and this year is no different.

  • The 4K Fire TV Stick, usually $50, is down to $25
  • The non-4K Fire TV Stick, usually $40, is down to $20. For the $5 difference, though, I’d go with the 4K model above. Future-proofing!
  • The incredibly good Kindle Oasis is about 30% off this week — $175 for the 8GB model (usually $249), or $199 for the 32GB model (usually $279)
  • If you’ve got Alexa devices around your house and are looking to expand, the current generation Echo Dot is down to $22 (usually $49) while the bigger, badder Echo Plus is down to $99 (usually $150)

Google Devices


  • Google’s latest flagship Android phone, the very, very good Pixel 4, is $200 off at $599 (usually $799) for an unlocked model. The heftier Pixel 4 XL, meanwhile, is down to $699 from $899.
  • The less current but still solid Pixel 3a is down to $299 (usually $399).
  • The Nest Mini (formerly known as Google Home Mini) is down to $30 from its usual price of $49.
  • Both the wired and battery versions of the Nest Protect smoke alarm are down to $99 (usually $119).
  • The 4K-ready Chromecast Ultra is down to $49 (usually $69), while the non-4K Chromecast is currently $25 (usually $35).


Roku

With both Amazon and Google slashing prices on their streaming devices, Roku isn’t looking to be left out. The company’s 4K-friendly Roku Ultra is down to $48 (usually $100), complete with a pair of JBL headphones you can plug into the remote for almost-wireless listening.

Xbox, PlayStation, and Nintendo Switch

If you’ve yet to pick up any of this generation’s consoles, now honestly isn’t a terrible time (as long as you can do it at a discount). Both Microsoft and Sony are prepping to launch new consoles in 2020, but that means you’ve got years and years of really great games from this generation to pick through — and it’ll probably be a few months before there’s much worthwhile/exclusive on the new consoles, anyway. Nintendo, meanwhile, just revised the Switch in August to significantly improve its battery life.

Microsoft has dropped the price of the 1-terabyte Xbox One X to $349 (usually $499), including your choice of Gears 5, NBA 2K20, or the pretty much brand new Star Wars Jedi: Fallen Order. The Xbox One S All-Digital Edition, meanwhile, is down to $149 (usually $249) with copies of Minecraft, Sea of Thieves, and about $20 worth of Fortnite V-Bucks. (Be aware that the All-Digital Edition has no disc drive, so anything you play on it must be a digital/downloaded copy. That’s not a huge issue! But be aware of it, particularly if you’ve got a slower internet connection or limited monthly bandwidth.)

Likewise, Sony has a killer deal on the PlayStation 4 — $199 gets you a 1TB PS4 and copies of God of War, The Last of Us (Remastered), and Horizon Zero Dawn. The deal is available at most of the big box retailers (Best Buy/Walmart/GameStop/Target/etc.), though it seems to be going in and out of stock everywhere, so you might have to poke around a bit.

Deals on the Switch console itself are few and far between so far (and many of the deals are for the older model with the weaker battery), but you can pick up a pair of Joy-Con controllers for $60 versus the usual $80.


AirPods Pro

Apple deals don’t tend to get too wild on Black Friday — especially not on the latest generation hardware. This year, though, there’s some surprisingly worthwhile stuff.


Sonos

Looking to expand your Sonos setup? Most things in the company’s lineup are on sale right now, including:

  • The Sonos Beam (the smaller of the company’s two sound bars), usually $399, is down to $299.
  • The bigger sound bar, the Sonos Playbar, is down from $699 to $529.
  • The massive Playbase (like a sound bar, except you sit your entire TV on it) is down from $699 to $559.
  • A two-pack of Play:1 speakers is going for $230 (usually $170-200 each), though you’ll need to be a Costco member to access it.

Ridiculously cheap microSD cards

The cost of microSD cards has plummeted over the last year, seemingly bottoming out for Black Friday. SanDisk’s 512GB microSD card was going for $100-$150 just a few months ago; today it’s down to $64. Need a faster model? The 512GB Extreme microSDXC was $200 earlier this year, and now it’s down to $80.

Steam games

Valve’s annual Autumn Steam sale is underway, slashing prices on a bunch of top-notch games — like Grand Theft Auto 5 for $15 (usually $30), Portal 2 for a buck, The Witness for $20 (usually $40), Return of the Obra Dinn for $16 (usually $20), Soul Calibur 6 for $18 (usually $60), or the just-released (and absurdly fun) Jackbox Party Pack 6 for $23 (usually $30).

Oh! And Valve’s Steam controller is down to $5 (from $50)… with the caveat that it’s because they’re discontinuing it and honestly for most games it’s just an okay controller.

Source: Tech Crunch

2019 Thanksgiving e-commerce sales show 14% rise on 2018, $470M spent so far

With popular social networks seeing some downtime, shops closed, and many people off work today for Thanksgiving, bargain hunters are flocking online to start their holiday shopping. Adobe says that so far some $470 million has been spent online, a rise of 14.5% compared to sales figures from the same time last year, with sales patterns largely on track to hit its prediction of $4.4 billion in sales today. And as of 11:30am Pacific time, Shopify notes that there are around 4,500 transactions per minute, working out to just under $400,000 spent each minute.
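As a quick sanity check, the two Shopify figures quoted above imply an average checkout size of just under $89. A minimal sketch of that arithmetic (both input numbers come from the article; the average is my own derivation, not a reported value):

```python
# Back-of-the-envelope check of Shopify's per-minute figures (from the article).
transactions_per_minute = 4_500
dollars_per_minute = 400_000  # "just under $400,000 spent each minute"

avg_order_value = dollars_per_minute / transactions_per_minute
print(f"Implied average checkout: ${avg_order_value:.2f}")  # roughly $88.89
```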

Adobe Analytics tracks sales in real-time for 80 of the top 100 US retailers, covering 55 million SKUs and some 1 trillion transactions during the holiday sales period. Shopify, meanwhile, uses data from across the range of online retailers that use Shopify APIs to run their sales.

Black Friday (the day after Thanksgiving) used to be seen as the traditional start to holiday sales, but consumers spending time at home on Thanksgiving itself are increasingly coming online — on a day when most brick-and-mortar stores are closed — to get the ball rolling.

This year, Thanksgiving is coming a week later than in 2018 (when it fell on the 22nd of the month), which will make for a more compressed, and potentially more frenzied, selling period.

As Sarah pointed out yesterday, many retailers this year made an early jump on their Black Friday deals, and so far some $53 billion has been spent in the month of November up to today. This year’s holiday sales overall are predicted to hit nearly $144 billion.

We’ll be updating this post with more figures as they come in.

As a point of comparison, in 2018, online sales hit $3.7 billion, according to Adobe’s analysis.

Adobe notes that of the $53 billion spent so far this month, all 27 days in November have surpassed $1 billion in sales. Eight days passed $2 billion, and yesterday saw $2.9 billion in sales. That was up 22% on a year ago, which either points to increased sales overall, or simply that the strategy of extending “holiday” shopping to start earlier and earlier is paying off for retailers.

Another interesting insight is that some $18.2B in purchases have been made on smartphones this month, which is up 49.5% compared to last year.
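Those year-over-year percentages also let you back out the prior year's baselines. A minimal sketch (the 2018 figures below are my own back-calculations from the article's numbers, not reported values):

```python
def implied_prior_year(current: float, yoy_growth_pct: float) -> float:
    """Back out last year's figure from this year's value and its YoY growth."""
    return current / (1 + yoy_growth_pct / 100)

# Wednesday's $2.9B, up 22% YoY, implies roughly $2.38B in 2018.
print(round(implied_prior_year(2.9, 22.0), 2))   # 2.38
# $18.2B in smartphone purchases, up 49.5% YoY, implies roughly $12.17B in 2018.
print(round(implied_prior_year(18.2, 49.5), 2))  # 12.17
```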

“The strong online sales performance to-date suggests that holiday shopping starts much earlier than ever before. Steep discounts on popular items like computers on the day before Thanksgiving indicate that many of the season’s best deals are already up for grabs. This has led to significant growth in online sales (16.1% YoY increase) so far. What will be important for retailers to track is whether the early discounts will drive continued retail growth overall, or if they have induced consumers to spend their holiday budgets earlier,” noted Jason Woosley, vice president of commerce product & platform at Adobe.

Source: Tech Crunch

Scientists turn undersea fiber optic cables into seismographs

Monitoring seismic activity all over the world is an important task, but one that requires equipment to be at the site it’s measuring — difficult in the middle of the ocean. But new research from Berkeley could turn existing undersea fiber optic cables into a network of seismographs, creating an unprecedented global view of the Earth’s tectonic movements.

Seismologists get almost all their data from instruments on land, which means most of our knowledge about seismic activity is limited to a third of the planet’s surface. We don’t even know where all the faults are since there’s been no exhaustive study or long-term monitoring of the ocean floor.

“There is a huge need for seafloor seismology,” explained lead study author Nathaniel Lindsey in a Berkeley news release. “Any instrumentation you get out into the ocean, even if it is only for the first 50 kilometers from shore, will be very useful.”

Of course, the reason we haven’t done so is because it’s very hard to place, maintain, and access the precision instruments required for long-term seismic work underwater. But what if there were instruments already out there just waiting for us to take advantage of them? That’s the idea Lindsey and his colleagues are pursuing with regard to undersea fiber optic cables.

These cables carry data over long distances, sometimes as part of the internet’s backbones, and sometimes as part of private networks. But one thing they all have in common is that they use light to do so — light that gets scattered and distorted if the cable shifts or changes orientation.

By carefully monitoring this “backscatter” phenomenon, researchers can see exactly where the cable bends and to what extent, sometimes to within a few nanometers. That means they can observe a cable to pinpoint the source of seismic activity with an extraordinary level of precision.

The technique is called Distributed Acoustic Sensing (DAS), and it essentially treats the cable as if it were a series of thousands of individual motion sensors. The cable the team tested on is 20 kilometers of the Monterey Bay Aquarium Research Institute’s underwater data infrastructure, which the system divided into some ten thousand segments that can detect the slightest movement of the surface to which they’re attached.
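That geometry works out to roughly one virtual sensor every two meters. A minimal sketch using only the 20 km length and ~10,000 channels from the article (the channel-position helper is a hypothetical illustration, not MBARI's actual configuration):

```python
# Treat the fiber as a line of evenly spaced virtual DAS channels.
cable_length_m = 20_000   # 20 km of MBARI's cable, per the article
num_channels = 10_000     # "some ten thousand segments"

channel_spacing_m = cable_length_m / num_channels
print(channel_spacing_m)  # 2.0 meters between virtual sensors

def channel_position_m(channel_index: int) -> float:
    """Hypothetical helper: a channel's distance from the shore end of the cable."""
    return channel_index * channel_spacing_m

# A backscatter disturbance seen on channel 2,500 would sit about 5 km offshore.
print(channel_position_m(2_500))  # 5000.0
```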

“This is really a study on the frontier of seismology, the first time anyone has used offshore fiber-optic cables for looking at these types of oceanographic signals or for imaging fault structures,” said Berkeley National Lab’s Jonathan Ajo-Franklin.

After hooking up MBARI’s cable to the DAS system, the team collected a ton of verifiable information: movement from a 3.4-magnitude quake miles inland, maps of known but unmapped faults in the bay, and water movement patterns that also hint at seismic activity.

The main science node of the Monterey Accelerated Research System. Good luck keeping crabs out of there.

The best part, Lindsey said, is that you don’t even need to attach equipment or repeaters all along the length of the cable. “You just walk out to the site and connect the instrument to the end of the fiber,” he said.

Of course most major undersea cables don’t just have a big exposed end for random researchers to connect to. And the signals that the technology uses to measure backscatter could conceivably interfere with others, though of course there is work underway to test that and prevent it if possible.

If successful, the larger active cables could be pressed into service as research instruments, and could help illuminate the blind spot that seismologists have regarding the activity and features of the ocean floor. The team’s work is published today in the journal Science.

Source: Tech Crunch

Will the future of work be ethical? Founder perspectives

In June, TechCrunch Ethicist in Residence Greg M. Epstein attended EmTech Next, a conference organized by the MIT Technology Review. The conference, which took place at MIT’s famous Media Lab, examined how AI and robotics are changing the future of work.

Greg’s essay, Will the Future of Work Be Ethical? reflects on his experiences at the conference, which produced what he calls “a religious crisis, despite the fact that I am not just a confirmed atheist but a professional one as well.” In it, Greg explores themes of inequality, inclusion and what it means to work in technology ethically, within a capitalist system and market economy.

Accompanying the story for Extra Crunch are a series of in-depth interviews Greg conducted around the conference, with scholars, journalists, founders and attendees.

Below, Greg speaks to two founders of innovative startups whose work provoked much discussion at the EmTech Next conference. Moxi, the robot assistant created by Andrea Thomaz of Diligent Robotics and her team, was a constant presence in the Media Lab reception hall immediately outside the auditorium in which all the main talks took place. And Prayag Narula of LeadGenius was featured, alongside leading tech anthropologist Mary Gray, in a panel on “Ghost Work” that sparked intense discussion throughout the conference and beyond.

Andrea Thomaz is the Co-Founder and CEO of Diligent Robotics. Image via MIT Technology Review

Could you give a sketch of your background?

Andrea Thomaz: I was always doing math and science, and did electrical engineering as an undergrad at UT Austin. Then I came to MIT to do my PhD. It really wasn’t until grad school that I started doing robotics. I went to grad school interested in doing AI and was starting to get interested in this new machine learning that people were starting to talk about. In grad school, at the MIT Media Lab, Cynthia Breazeal was my advisor, and that’s where I fell in love with social robots and making robots that people want to be around and are also useful.

Say more about your journey at the Media Lab?

My statement of purpose for the Media Lab, in 1999, was that I thought that computers that were smarter would be easier to use. I thought AI was the solution to HCI [Human-computer Interaction]. So I came to the Media Lab because I thought that was the mecca of AI plus HCI.

It wasn’t until my second year as a student there that Cynthia finished her PhD with Rod Brooks and started at the Media Lab. And then I was like, “Oh wait a second. That’s what I’m talking about.”

Who is at the Media Lab now that’s doing interesting work for you?

For me, it’s kind of the same people. Pattie Maes has kind of reinvented her group since those days and is doing fluid interfaces; I always really appreciate the kind of things they’re working on. And Cynthia, her work is still very seminal in the field.

So now, you’re a CEO and Founder?

CEO and Co-Founder of Diligent Robotics. I had twelve years in academia in between those. I finished my PhD, went and I was a professor at Georgia Tech in computing, teaching AI and robotics and I had a robotics lab there.

Then I got recruited away to UT Austin in electrical and computer engineering. Again, teaching AI and having a robotics lab. Then at the end of 2017, I had a PhD student who was graduating and also interested in commercialization, my Co-Founder and CTO Vivian Chu.

Let’s talk about the purpose of the human/robot interaction. In the case of your company, the robot’s purpose is to work alongside humans in a medical setting, who are doing work that is not necessarily going to be replaced by a robot like Moxi. How does that work exactly?

One of the reasons our first target market [is] hospitals is, that’s an industry where they’re looking for ways to elevate their staff. They want their staff to be performing, “at the top of their license.” You hear hospital administrators talking about this because there’s record numbers of physician burnout, nurse burnout, and turnover.

They really are looking for ways to say, “Okay, how can we help our staff do more of what they were trained to do, and not spend 30% of their day running around fetching things, or doing things that don’t require their license?” That for us is the perfect market [for] collaborative robots. You’re looking for ways to automate things that the people in the environment don’t need to be doing, so they can do more important stuff. They can do all the clinical care.

In a lot of the hospitals we’re working with, we’re looking at their clinical workflows and identifying places where there’s a lot of human touch, like nurses making an assessment of the patient. But then the nurse finishes making an assessment [and] has to run and fetch things. Wouldn’t it be better if as soon as that nurse’s assessment hit the electronic medical record, that triggered a task for the robot to come and bring things? Then the nurse just gets to stay with the patient.

Those are the kind of things we’re looking for: places you could augment the clinical workflow with some automation and increase the amount of time that nurses or physicians are spending with patients.

So your robots, as you said before, do need human supervision. Will they always?

We are working on autonomy. We do want the robots to be doing things autonomously in the environment. But we like to talk about care as a team effort; we’re adding the robot to the team and there’s parts of it that the robot’s doing and parts of it that the human’s doing. There may be places where the robot needs some input or assistance because it’s part of the clinical team. That’s how we like to think about it: if the robot is designed to be a teammate, it wouldn’t be very unusual for the robot to need some help or supervision from a teammate.

That seems different than what you could call Ghost Work.

Right. In most service robots being deployed today, there is this remote supervisor that is either logged in and checking in on the robots, or at least the robots have the ability to phone home if there’s some sort of problem.

That’s where some of this Ghost Work comes in. People are monitoring and keeping track of robots in the middle of the night. Certainly that may be part of how we deploy our robots as well. But we also think that it’s perfectly fine for some of that supervision or assistance to come out into the forefront and be part of the face-to-face interaction that the robot has with some of its coworkers.

Since you could potentially envision a scenario in which your robots are monitored from off-site, in a kind of Ghost Work setting, what concerns do you have about the ways in which that work can be kind of anonymized and undercompensated?

Currently we are really interested in our own engineering staff having high-touch customer interaction that we’re really not looking to anonymize. If we had a robot in the field and it was phoning home about some problem that was happening, at our early stage of the company, that is such a valuable interaction that in our company that wouldn’t be anonymous. Maybe the CTO would be the one phoning in and saying, “What happened? I’m so interested.”

I think we’re still at a stage where all of the customer interactions and all of the information we can get from robots in the field are such valuable pieces of information.

But how are you envisioning best-case scenarios for the future? What if your robots really are so helpful that they’re very successful and people want them everywhere? Your CTO is not going to take all those calls. How could you do this in a way that could make your company very successful, but also handle these responsibilities ethically?

Source: Tech Crunch

Will the future of work be ethical? Future leader perspectives

In June, TechCrunch Ethicist in Residence Greg M. Epstein attended EmTech Next, a conference organized by the MIT Technology Review. The conference, which took place at MIT’s famous Media Lab, examined how AI and robotics are changing the future of work.

Greg’s essay, Will the Future of Work Be Ethical? reflects on his experiences at the conference, which produced what he calls “a religious crisis, despite the fact that I am not just a confirmed atheist but a professional one as well.” In it, Greg explores themes of inequality, inclusion and what it means to work in technology ethically, within a capitalist system and market economy.

Accompanying the story for Extra Crunch are a series of in-depth interviews Greg conducted around the conference, with scholars, journalists, founders and attendees.

Below he speaks to two conference attendees who had crucial insights to share. Meili Gupta is a high school senior at Phillips Exeter Academy, an elite boarding school in New Hampshire; Gupta attended the EmTech Next conference with her mother and has attended with family in previous years as well; her voice and thoughts on privilege and inequality in education and technology are featured prominently in Greg’s essay. Walter Erike is a 31-year-old independent consultant and SAP Implementation Senior Manager from Philadelphia. Between conference sessions, he and Greg talked about diversity and inclusion at tech conferences and beyond.

Meili Gupta is a senior at Phillips Exeter Academy. Image via Meili Gupta

Greg Epstein: How did you come to be at EmTech Next?

Meili Gupta: I am a rising high school senior at Phillips Exeter Academy; I’m one of the managing editors for my school’s science magazine called Matter Magazine.

I [also] attended the conference last year. My parents have come to these conferences before, and that gave me an opportunity to come. I am particularly interested in the MIT Technology Review because I’ve grown up reading it.

You are the Managing Editor of Matter, a magazine about STEM at your high school. What subjects that Matter covers are most interesting to you?

This year we published two issues. The first featured a lot of interviews from top [AI] professors like Professor Fei-Fei Li, at Stanford. We did a review for her and an interview with Professor Olga Russakovsky at Princeton. That was an AI special issue and, being at this conference, you hear about how AI will transform industries.

The second issue coincided with Phillips Exeter Global Climate Action Day. We focused both on environmentalism clubs at Exeter and environmentalism efforts worldwide. I think Matter, as the only STEM magazine on campus, has a responsibility in doing that.

AI and climate: in a sense, you’ve already dealt with this new field people are calling the ethics of technology. When you hear that term, what comes to mind?

As a consumer of a lot of technology and as someone of the generation who has grown up with a phone in my hand, I’m aware my data is all over the internet. I’ve had conversations [with friends] about personal privacy and if I look around the classroom, most people have covers for the cameras on their computers. This generation is already aware [of] ethics whenever you’re talking about computing and the use of computers.

About AI specifically, as someone who’s interested in the field and has been privileged to be able to take courses and do research projects about that, I’m hearing a lot about ethics with algorithms, whether that’s fake news or bias or about applying algorithms for social good.

What are your biggest concerns about AI? What do you think needs to be addressed in order for us to feel more comfortable as a society with increased use of AI?

That’s not an easy answer; it’s something our society is going to be grappling with for years. From what I’ve learned at this conference, from what I’ve read and tried to understand, it’s a multidimensional solution. You’re going to need computer programmers to learn the technical skills to make their algorithms less biased. You’re going to need companies to hire those people and say, “This is our goal; we want to create an algorithm that’s fair and can do good.” You’re going to need the general society to ask for that standard. That’s my generation’s job, too. WikiLeaks, a couple of years ago, sparked the conversation about personal privacy and I think there’s going to be more sparks.

Seems like your high school is doing some interesting work in terms of incorporating both STEM and a deeper, more creative than usual focus on ethics and exploring the meaning of life. How would you say that Exeter in particular is trying to combine these issues?

I’ll give a couple of examples of my experience with that in my time at Exeter, and I’m very privileged to go to a school that has these opportunities and offerings for its students.

Don’t worry, that’s in my next question.

Absolutely. With the computer science curriculum, starting in my ninth grade they offered a computer science 590 about [introduction to] artificial intelligence. In the fall another 590 course was about self-driving cars, and you saw the intersection between us working in our robotics lab and learning about computer vision algorithms. This past semester, a couple of students, and I was involved, helped to set up a 999: an independent course which really dove deep into machine learning algorithms. In the fall, there’s another 590 I’ll be taking called social innovation through software engineering, which is specifically designed for each student to pick a local project and to apply software, coding or AI to a social good project.

I’ve spent 15 years working at Harvard and MIT. I’ve worked around a lot of smart and privileged people and I’ve supported them. I’m going to ask you a question about Exeter and about your experience as a privileged high school student who is getting a great education, but I don’t mean it from a perspective of it’s now me versus you.

Of course you’re not.

I’m trying to figure this out for myself as well. We live in a world where we’re becoming more prepared to talk about issues of fairness and justice. Yet by even just providing these extraordinary educational experiences to people like you and me and my students or whomever, we’re preparing some people for that world better than others. How do you feel about being so well prepared for this sort of world to come that it can actually be… I guess my question is, how do you relate to the idea that even the kinds of educational experiences that we’re talking about are themselves deepening the divide between haves and have nots?

I completely agree that the issue between haves and have nots needs to be talked about more, because inequality between the upper and the lower classes is growing every year. This morning, the talk by Mr. Isbell from Georgia Tech was really inspiring. For example, at Phillips Exeter, we have a social service club called ESA which houses more than 70 different social service clubs. One I’m involved with, junior computer programming, teaches programming to local middle school students. That’s the type of thing, at an individual level and smaller scale, that people can do to try to help out those who have not been privileged with opportunities to learn and get ahead with those skills.

What Mr. Isbell was talking about this morning was at a university level, and also about tying in corporations to bridge that divide. I don’t think that the issue itself should necessarily scare us from pushing forward to the frontier, even given the possibility that everybody who does not have a computer science education in five years won’t have a job.

Today we had that debate about robots and people’s jobs and robot taxes. That’s a very good debate to have, but it sometimes feeds a little bit into the AI hype, and I think it may be a disservice to society to try to pull back technology, which has been shown to have the power to save lives. It can be two transformations that are happening at the same time. One is trying to bridge an inequality, and is going to come in a lot of different and complicated solutions that happen at multiple levels; the second is allowing for a transformation in technology and AI.

What are you hoping to get out of this conference for yourself, as a student, as a journalist, or as somebody who’s going into the industry?

The theme for this conference is the future of the workforce. I’m a student. That means I’m going to be the future of the workforce. I was hoping to learn some insight about what I may want to study in college. After that, what type of jobs do I want to pursue that are going to exist and be in demand and really interesting, that have an impact on other people? Also, as a student, in particular that’s interested in majoring in computer science and artificial intelligence, I was hoping to learn about possible research projects that I could pursue in the fall with this 590 course.

Right now, I'm working on a research project with a professor at the University of Maryland about eliminating bias in machine learning algorithms. What type of dataset do I want to apply that project to? Where is the need, or the attention, for correcting bias in AI algorithms?

As a journalist, I would like to write a review summarizing what I’ve learned so other [Exeter students] can learn a little too.

What would be your biggest critique of the conference? What could be improved?

Source: Tech Crunch

Twitter to add a way to ‘memorialize’ accounts for deceased users before removing inactive ones

Twitter has changed its tune regarding inactive accounts after receiving a lot of user feedback: It will now develop a way to "memorialize" the accounts of users who have passed away before proceeding with the plan it confirmed this week to deactivate inactive accounts in order to "present more accurate, credible information" on the service. To the company's credit, it reacted swiftly after receiving a significant amount of negative feedback on the move; it seems the case of deceased users simply wasn't considered in the decision to terminate dormant accounts.

After Twitter confirmed the inactive account cleanup (covering accounts that haven't tweeted in more than six months) on Tuesday, a number of users noted that this would also have the effect of erasing the content of accounts whose owners have passed away. TechCrunch alum Drew Olanoff wrote about this impact from a personal perspective, asking Twitter to reconsider its move in light of the human impact and potential emotional cost.

In a thread today detailing its new thinking around inactive accounts, Twitter explained that its current inactive account policy has actually always been in place, but that it hasn't been diligent about enforcing it. It will begin doing so in the European Union, partly in accordance with local privacy laws, citing GDPR specifically. But the company also says it will not remove any inactive accounts before first implementing a way for inactive accounts belonging to deceased users to be "memorialized," which presumably means preserving their content.

Twitter went on to say that it might expand or refine its inactive account policy to ensure it works with global privacy regulations, but that it will be sure to communicate these changes broadly before they go into effect.

It's not yet clear how Twitter will offer this 'memorialization' of accounts, but there is some precedent it can look to for cues: Facebook introduced a 'memorialized accounts' feature for similar reasons.

Source: Tech Crunch

Latin America roundup: Neobanks raise $205M+; SoftBank backs VTEX

Argentina's Ualá became the most recent Latin American fintech to receive growth-stage funding ($150 million) from Asian investors Tencent and SoftBank.

This marks Tencent's second investment in Ualá, the first coming in April 2019; Tencent also invested $180M in Brazil's leading neobank, Nubank, in 2018. With Ualá, Tencent and SoftBank join a group of investors including Soros, Goldman Sachs, Endeavor, Monashees, Ribbit Capital, and Jefferies LLC, which have backed Ualá since it was founded in 2016. Ualá has opened over 1.3M accounts for unbanked and under-banked Argentine customers in the past two years and recently launched new products for lending and savings.

Ualá was not the only neobank celebrating a significant round this month; Brazil's Neon raised a $94M Series B from Banco Votorantim and General Atlantic just one week earlier. Neon offers a fully digital bank card to almost 2M customers across Brazil, mostly concentrated in Rio de Janeiro and São Paulo. The round will enable Neon to expand beyond Brazil's biggest cities and double its user base in 2020.

Neon has raised $121M to date, with previous investors Quona Capital, Propel Venture Partners, Omidyar Network, and Monashees also joining this most recent round. The two-year-old startup has been expanding its product offerings to include credit, investments, and most recently, a personal lending line launched in July 2019.

Neon's products are helping to bring banking services to a famously complex and competitive market in Brazil. Brazil's largest neobank, Nubank, is valued at $10B+, has 10M customers in Brazil and Mexico, and is now the most-downloaded neobank in the world. Brazil's banking sector is one of the most lucrative in the world, with credit card interest rates reaching triple digits; Nubank and its competitors offer more US-style rates, putting traditional Brazilian banks on the defensive against disruptors like Nubank and Neon, which will drive competition.

With strong funding from Asian, Brazilian, and US-based backers, these neobanks are gaining traction across the region, providing banking services to the 50% of Latin America's population that is still excluded from traditional financial institutions.

SoftBank invests $140M in VTEX

VTEX, a Brazilian cloud e-commerce platform for large companies, joined the growing list of SoftBank's Brazilian portfolio companies, which includes QuintoAndar, MadeiraMadeira, Creditas, Buser, Gympass, and Loggi. The Japanese investor is backing VTEX with a $140M investment to help the startup expand internationally and develop new products.

VTEX already has 14 offices across Latin America, Europe, and the US, and serves over 2,500 global clients including Ambev, Nestlé, The North Face, Coca-Cola, and General Electric. VTEX's comprehensive digital commerce platform includes order management, B2B marketplaces, web and in-store points of commerce, and customer service. As the back end for some of the world's largest companies, VTEX has an enormous opportunity for integration with other marketplaces and platforms.

LinkedIn expands to Mexico

Mexico is Latin America's second-largest market after Brazil for many US tech companies, such as Uber and Facebook. In November 2019, both LinkedIn and Stripe announced their intention to expand into the Mexican market with offices and operations. Over 13 million of LinkedIn's 92 million total clients are in Mexico, making the country a logical place for LinkedIn's second Latin American office. LinkedIn opened its first Latin American office, in São Paulo, in 2013.

The Mexican office will open in July 2020 and will help LinkedIn produce more Spanish-language content, as well as bring users closer to large clients like BBVA and Aeromexico. 

Notable Rounds and Acquisitions from November

  • Brazilian bank Itaú acquired growth-hacking and digital consulting startup Zup in a $140M deal that will be disbursed over four years. Zup will help the bank improve and develop digital channels for customer acquisition and management. Although Itaú now owns 51% of Zup, the two companies will continue to operate separately and under different brands for the foreseeable future. Acquisitions of this size are still very rare in Latin America and inject liquidity into the startup ecosystem, which can promote the development of a more dynamic environment for tech companies.
  • MUY Tech, a Colombian cloud kitchen startup, raised $15M this month to expand into Mexico and Brazil. MUY uses AI to predict food trends and reduce waste, allowing users to order personalized meals from MUY's physical restaurants or through a mobile app. The startup currently serves more than 200,000 meals per month, according to founder Jose Calderon, who previously exited Domicilios to Delivery Hero. Mexican investor ALLVP led the round with support from previous investor Seeya.
  • Brazilian mobility startup Kovi raised a $30M Series B led by Global Founders Capital and Quona Capital, with support from previous investors Monashees, Maya Capital, Kevin Efrusy, Y Combinator, Broadhaven Ventures, Justin Mateen, and ONEVC. Kovi rents cars to drivers who work for rideshare companies like Uber, Didi, and Cabify, making quality vehicles available to these potential gig-economy workers. The company will use the investment to grow its team and fleet, as well as to explore new geographies.
  • Mexico's virtual supermarket Justo raised a $10M seed round from Foundation Capital to continue growing in the local market. Justo is the first grocery store in Mexico with no physical branches, using a direct-to-consumer model that has been growing in popularity across Latin America. The startup was founded earlier in 2019 by Ricardo Weder, the former president of Cabify, to disrupt the wasteful grocery industry.
  • Brazilian identity verification startup idwall raised $10M from Qualcomm Ventures to continue developing facial recognition software that helps large companies like Loggi, 99, and OLX verify the identity of their employees and customers.

Looking ahead to December, Latin American financial institutions are bracing for a shaky future amid recent unrest in countries like Chile, Bolivia, Ecuador, and Colombia. This instability might provide a competitive edge for fintech startups that can use real-time data to adapt more quickly to the changing situation.

What to watch next? International investors have not pulled out of the region despite the recent political turmoil, and many are willing to wait out this period to support their startups. While we may not have access to Q4 2019 figures for a few months, it will be interesting to see whether growth and investment have been rocked by the changes of the past two months. Certainly the status quo for traditional players in Latin America is rapidly changing, potentially leaving room for startups to take over more market share and compete for disgruntled customers.

Source: Tech Crunch

Box looks to balance growth and profitability as it matures

Prevailing wisdom states that as an enterprise SaaS company evolves, there’s a tendency to sacrifice profitability for growth — understandably so, especially in the early days of the company. At some point, however, a company needs to become profitable.

Box has struggled to reach that goal since going public in 2015, but yesterday, it delivered a mostly positive earnings report. Wall Street seemed to approve, with the stock up 6.75% as we published this article.

Box CEO Aaron Levie says the goal moving forward is to find a better balance between growth and profitability. In his post-report call with analysts, Levie pointed to some positive numbers.

“As we shared in October [at BoxWorks], we are focused on driving a balance of long-term growth and improved profitability as measured by the combination of revenue growth plus free cash flow margin. On this combined metric, we expect to deliver a significant increase in FY ’21 to at least 25% and eventually reaching at least 35% in FY ’23,” Levie said.
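The combined metric Levie describes is simply the sum of two percentages: year-over-year revenue growth and free cash flow as a share of revenue (a Rule of 40-style measure). Here is a minimal sketch of that arithmetic; the function name and the dollar figures are illustrative, not Box's actual financials.

```python
def combined_metric(revenue_prior: float, revenue_current: float,
                    free_cash_flow: float) -> float:
    """Revenue growth (%) plus free cash flow margin (%)."""
    growth_pct = (revenue_current - revenue_prior) / revenue_prior * 100
    fcf_margin_pct = free_cash_flow / revenue_current * 100
    return growth_pct + fcf_margin_pct

# Illustrative: 15% revenue growth and a 10% FCF margin combine to 25,
# which would meet the FY '21 target Levie cites.
print(round(combined_metric(100.0, 115.0, 11.5), 2))
```

Under this framing, a company can hit the target by trading growth for profitability or vice versa, which is exactly the balance the article describes.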

Growing the platform

Part of the maturation and drive to profitability is spurred by the fact that Box now has a more complete product platform. While many struggle to understand the company's business model, Box provides content management in the cloud, modernizing that aspect of enterprise software. As a result, few pure-play content management vendors can do what Box does in a cloud context.

Source: Tech Crunch

This robot scientist has conducted 100,000 experiments in a year

Science is exciting in theory, but it can also be dreadfully dull. Some experiments require hundreds or thousands of repetitions or trials — an excellent opportunity to automate. That’s just what MIT scientists have done, creating a robot that performs a certain experiment, observes the results, and plans a follow-up… and has now done so 100,000 times in the year it’s been operating.

The field of fluid dynamics involves a lot of complex and unpredictable forces, and sometimes the best way to understand them is to repeat things over and over until patterns emerge. (Well, it’s a little more complex than that, but this is neither the time nor the place to delve into the general mysteries of fluid dynamics.)

One of the observations that needs to be performed is of “vortex-induced vibration,” a kind of disturbance that matters a lot to designing ships that travel through water efficiently. It involves close observation of an object moving through water… over, and over, and over.

Turns out it’s also a perfect duty for a robot to take over. But the Intelligent Tow Tank, as they call this robotic experimentation platform, is designed not just to do the mechanical work of dragging something through the water, but to intelligently observe the results, change the setup accordingly to pursue further information, and continue doing that until it has something worth reporting.
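The closed-loop pattern described above (run a trial, observe the result, choose the next parameters based on what was learned, repeat) can be sketched in a few lines. This is an illustrative stand-in, not the ITT's actual algorithm: the simulated "experiment" and the simple explore-near-the-best selection rule are both assumptions for the sake of the example.

```python
import random

def run_trial(velocity: float) -> float:
    """Stand-in for one tow-tank run: returns a noisy vibration amplitude."""
    return (velocity - 2.0) ** 2 + random.gauss(0.0, 0.05)

def next_parameter(results: dict) -> float:
    """Plan a follow-up: probe near the velocity with the lowest amplitude seen so far."""
    best_v = min(results, key=results.get)
    return best_v + random.uniform(-0.25, 0.25)

random.seed(0)
# Initial coarse sweep, then autonomous follow-up trials.
results = {v: run_trial(v) for v in (1.0, 2.5, 4.0)}
for _ in range(50):
    v = next_parameter(results)
    results[v] = run_trial(v)

best = min(results, key=results.get)
print(f"best velocity found: {best:.2f}")
```

The real system replaces the toy functions with physical tows and a far more sophisticated planner, but the structure (observe, decide, act, without a human in the loop) is the same.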

“The ITT has already conducted about 100,000 experiments, essentially completing the equivalent of all of a Ph.D. student’s experiments every 2 weeks,” say the researchers in their paper, published today in Science Robotics.

The hard part, of course, was not designing the robot (though that was undoubtedly difficult as well) but the logic that lets it understand, at a surface level so to speak, the currents and flows of the fluid system and conduct follow-up experiments that produce useful results.

Normally a human (probably a grad student) would have to observe every trial — the parameters of which may be essentially random — and decide how to move forward. But this is rote work — not the kind of thing an ambitious researcher would like to spend their time doing.

So it’s a blessing that this robot, and others like it, could soon take over the grunt work while humans focus on high-level concepts and ideas. The paper notes other robots at CMU and elsewhere that have demonstrated how automation of such work could proceed.

“This constitutes a potential paradigm shift in conducting experimental research, where robots, computers, and humans collaborate to accelerate discovery and to search expeditiously and effectively large parametric spaces that are impracticable with the traditional approach,” the team writes.

You can read the paper describing the Intelligent Tow Tank here.

Source: Tech Crunch