DefinedCrowd offers mobile apps to empower its AI-annotating masses

DefinedCrowd, the Startup Battlefield alumnus that produces and refines data for AI-training purposes, has just debuted iOS and Android apps for its army of human annotators. The move should help speed up a process that the company already touts as one of the fastest in the industry.

It’s no secret that AI relies almost totally on data that has been hand-annotated by humans, pointing out objects in photos, analyzing the meaning of sentences or expressions, and so on. Doing this work has become a sort of cottage industry, with many annotators doing it part time or between other jobs.

There’s a limit, however, to what you can do if the interface you must use to do it is only available on certain platforms. Just as others occasionally answer an email or look over a presentation while riding the bus or getting lunch, it’s nice to be able to do work on mobile — essential, really, at this point.

To that end DefinedCrowd has made its own app, which shares the Neevo branding of the company’s annotation community, that lets its annotators work whenever they want, tackling image or speech annotation tasks on the go. It’s available on iOS and Android starting today.

It’s a natural evolution of the market, CEO Daniela Braga told me. There’s a huge demand for this kind of annotation work, and it makes no sense to restrict the schedules or platforms of the people doing it. She suggested everyone in the annotation space would have apps soon, just as every productivity or messaging service does. And why not?

The company is growing quickly, going from a handful of employees to over a hundred, spread over its offices in Lisbon, Porto, Seattle, and Tokyo. The market, likewise, is exploding as more and more companies find that AI is not just applicable to what they do, but also not out of their reach.


Source: Tech Crunch

Biofourmis raises $35M to develop smarter treatments for chronic diseases

Biofourmis, a Singapore-based startup pioneering a distinctly tech-based approach to the treatment of chronic conditions, has raised a $35 million Series B round for expansion.

The round was led by Sequoia India and MassMutual Ventures, the VC fund from Massachusetts Mutual Life Insurance Company. Other investors include EDBI, the corporate investment arm of Singapore’s Economic Development Board, China-based healthcare platform Jianke and existing investors Openspace Ventures, Aviva Ventures and SGInnovate, a Singapore government initiative for deep tech startups. The round takes Biofourmis to $41.6 million raised to date, according to Crunchbase.

This isn’t your typical TechCrunch funding story.

Biofourmis CEO Kuldeep Singh Rajput moved to Singapore to start a PhD, but he dropped out to start the business with co-founder Wendou Niu in 2015 because he saw the potential to “predict disease before it happens,” he told TechCrunch in an interview.

AI-powered specialist post-discharge care

There are a number of layers to Biofourmis’ work, but essentially it uses a combination of data collected from patients and an AI-based system to customize treatments for post-discharge patients. The company is focused on a range of therapeutics, but its most advanced is cardiac, meaning patients who have been discharged after heart failure or other heart-related conditions.

With that segment of patients, the Biofourmis platform uses a combination of data from sensors — medical-grade sensors worn 24/7, rather than consumer wearables — and its tech to monitor patient health, detect problems ahead of time and prescribe an optimum treatment course. That information is disseminated through companion mobile apps for patients and caregivers.

Biofourmis uses a mobile app as a touch point to give patients tailored care and drug prescriptions after they are discharged from hospital.

That is to say, medicine works differently on different people, so by collecting and monitoring data and crunching the numbers, Biofourmis can provide the best drug to help optimize a patient’s health through what it calls a ‘digital pill.’ That’s not Matrix-style futurology; it’s more like a digital prescription that evolves based on the needs of a patient in real time. The company plans to use a network of medical delivery platforms, including Amazon-owned PillPack, to get the drugs to patients within hours.

Yes, that’s future tense because Biofourmis is waiting on FDA approval to commercialize its service. That’s expected to come by the end of this year, Singh Rajput told TechCrunch. But he’s optimistic given clinical trials, which have covered some 5,000 patients across 20 different sites.

On the tech side, Singh Rajput said Biofourmis has seen impressive results with its predictions. He cited tests in the U.S. which enabled the company to “predict heart failure 14 days in advance” with around 90 percent sensitivity. That was achieved using standard medical wearables at the cost of hundreds of dollars, rather than thousands with advanced kit such as Heartlogic from Boston Scientific — although the latter has a longer window for predictions.

The type of disruption Biofourmis is pursuing might appear to upset the applecart for pharma companies, but Singh Rajput maintains that the industry is moving toward a more qualitative approach to healthcare because it has been hard to evaluate the performance of drugs and price them accordingly.

“Today, insurance companies are blinded not having transparency on how to price drugs,” he said. “But there are already 50 drugs in the market paying based on outcomes so the market is moving in that direction.”

Outcome-based payments mean insurance firms reimburse based on the performance of the drugs, in other words how well patients recover. The rates vary, but if remission rates fail to drop, insurers can lower their payouts because the drugs aren’t working as well as expected.
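The arithmetic behind such contracts can be sketched in a few lines. The metric, rates and payout schedule below are invented for illustration; real outcome-based contracts define their own recovery measures and curves:

```python
def outcome_based_payout(list_price, expected_recovery_rate, observed_recovery_rate):
    """Scale a drug reimbursement by how well patients actually recovered.

    All numbers here are illustrative; real contracts negotiate their
    own outcome metrics and rate schedules.
    """
    if expected_recovery_rate <= 0:
        raise ValueError("expected recovery rate must be positive")
    # Payout is capped at list price: beating expectations doesn't pay extra
    # under this toy schedule.
    performance = min(1.0, observed_recovery_rate / expected_recovery_rate)
    return round(list_price * performance, 2)

# A drug expected to help 80% of patients but observed helping only 60%
# would be reimbursed at 75% of list price under this toy schedule.
payout = outcome_based_payout(1000.0, 0.80, 0.60)
```

The cap and the linear scaling are design choices for the sketch; actual contracts may use thresholds, tiers or rebates instead.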

Singh Rajput believes Biofourmis can level the playing field and add more granular transparency in terms of drug performance. He believes pharma companies are keen to show their products perform better than others, so over the long term that’s the model Biofourmis wants to encourage.

Indeed, the confidence is such that Biofourmis intends to initially go to market via pharma companies, which will sell the package into clinics bundled with their drugs, before moving to work with insurance firms once traction is gained. While the Biofourmis service is likely to be bundled with initial medication, the company will take a commission of 5-10 percent on the recommended drugs sold through its digital pill.

Biofourmis CEO and co-founder Kuldeep Singh Rajput dropped out of his PhD course to start the company in 2015

Doubling down on the US

With its new money, Biofourmis is doubling down on that imminent commercialization by relocating its headquarters to Boston. It will retain its presence in Singapore, where it has 45 people who handle software and product development, but the new U.S. office is slated to grow from 14 staff right now to up to 120 by the end of the year.

“The U.S. has been a major market focus since day one,” Singh Rajput said. “Being closer to customers and attracting the clinical data science pool is critical.”

While he praised Singapore and said the company remains committed to the country — adding EDBI to its investors is certainly a sign — he admitted that Boston, where he once studied, is a key market for finding “data scientists with core clinical capabilities.”

That expansion is not only to bring the cardio product to market, but also to prepare products to cover other therapeutics. Right now, it has six trials in place that cover pain, orthopedics and oncology. There are also plans to expand in other markets outside of the U.S., in particular Singapore and China, where Biofourmis plans to lean on Jianke.

Not lacking in confidence, Singh Rajput told TechCrunch that the company is on course to reach a $1 billion valuation when it next raises funding, which he estimates is around 18 months away. The company isn’t saying how much it is worth today.

Singh Rajput did confirm, however, that the round was heavily oversubscribed, and that the startup rebuffed investment offers from pharma companies in order to “avoid a conflict of interest and stay neutral.”

He is also eyeing a future IPO, which is tentatively set for 2023 — although by then, Singh Rajput said, Biofourmis would need at least two products in the market.

There’s a long way to go before then, but this round has certainly put Biofourmis and its digital pill approach on the map within the tech industry.


Source: Tech Crunch

Daily Crunch: Instagram influencer contact info exposed

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.

1. Millions of Instagram influencers had their private contact data scraped and exposed

A massive database containing contact information for millions of Instagram influencers, celebrities and brand accounts was found online by a security researcher.

We traced the database back to Mumbai-based social media marketing firm Chtrbox. Shortly after we reached out, Chtrbox pulled the database offline.

2. US mitigates Huawei ban by offering temporary reprieve

Last week, the Trump administration effectively banned Huawei from importing U.S. technology, a decision that forced several American companies, including Google, to take steps to sever their relationships. Now, the Department of Commerce has announced that Huawei will receive a “90-day temporary general license” to continue to use U.S. technology to which it already has a license.

3. GM’s car-sharing service Maven to exit eight cities

GM is scaling back its Maven car-sharing company and will stop service in nearly half of the 17 North American cities in which it operates.

4. Maisie Williams’ talent discovery startup Daisie raises $2.5M, hits 100K members

The actress who became famous playing Arya Stark on “Game of Thrones” has fresh funding for her startup.

5. ByteDance, TikTok’s parent company, plans to launch a free music streaming app

The company, which operates popular app TikTok, has held discussions with music labels to launch the app as soon as the end of this quarter.

6. Future Family launches a $200 membership for fertility coaching

In its recent user research, Future Family found that around 70% of new customers had yet to see a fertility doctor. So today, the startup is rolling out a new membership plan that offers customers a dedicated fertility coach, and helps them find a doctor in their area.

7. When will customers start buying all those AI chips?

Danny Crichton says it’s the best and worst time to be in semiconductors right now. (Extra Crunch membership required.)


Source: Tech Crunch

Stein Mart embraces the enemy with installation of Amazon Lockers in nearly 200 stores

Another brick-and-mortar retailer is turning to Amazon to help save its struggling business. Today, discount chain operator Stein Mart announced it will install Amazon Hub lockers in nearly 200 stores as soon as next month. The lockers are self-serve kiosks that allow Amazon shoppers to take advantage of in-store pickup and returns.

The deal will bring increased foot traffic to Stein Mart stores, potentially increasing its sales.

Meanwhile, Amazon gains the advantage of a brick-and-mortar presence for delivery and returns without having to invest in more real estate or make an acquisition, as it did with Whole Foods. The move also aids Amazon in its battle with Walmart, which has been quick to leverage its own brick-and-mortar locations to aid its online shoppers.

Walmart stores, for example, offer self-serve pickup towers for online orders, curbside pickup for groceries and other household needs, and other in-store pickup options. Last fall, it also began offering in-store returns for items from third-party marketplace sellers.

Stein Mart’s deal follows a larger industry trend of retailers and brands collaborating with, instead of fighting with, Amazon.

For example, department store chain Kohl’s recently expanded its own Amazon partnership.

Over the past couple of years, Kohl’s had been working with the e-commerce giant by allowing Amazon shoppers to bring their returns to one hundred Kohl’s stores across the U.S. The deal resulted in increased foot traffic and revenues — and some would say it even saved Kohl’s.

In April, Kohl’s said the Amazon returns program would expand to all 1,150 of its U.S. locations.

Stein Mart, which last year made Retail Dive’s list of 12 retailers at risk of bankruptcy, has been fighting across multiple fronts to survive. It has improved its merchandise, cleaned out inventory, cut costs, and tested services like ship-to-store. More recently, it began testing “endless aisles” (kiosks to connect store shoppers to broader online inventory), added mobile checkout and introduced a smarter fulfillment logic system to help fill web orders.

The company had also hinted last year it was open to almost anything, saying it planned to “explore strategic alternatives” to help improve its declining sales.

Despite its improvements, the chain still ended up with a disappointing 2018 holiday sales season, and remains in need of a bigger boost to its bottom line. That’s where the Amazon Hub lockers come in.

The program allows Amazon shoppers to choose a Locker location at their nearest Stein Mart as their shipping address for their online orders at checkout. When their item arrives, they’ll receive an email along with a barcode that’s used to pick up their package during store hours.

This immediately should increase foot traffic to Stein Mart stores, as it has at Kohl’s, Whole Foods, and other Amazon Locker locations. Over time, the hope is that Stein Mart sales will improve as well, if it’s able to successfully market its own in-store merchandise to the Amazon shoppers who drop by.

“We are thrilled to offer this innovative delivery experience to Amazon customers while introducing new shoppers to Stein Mart,” said Hunt Hawkins, Stein Mart’s CEO, in a statement. “Customer service and convenience are top priorities at Stein Mart, and the ability to give both to Amazon customers was a big factor in our decision to introduce this program.”

Stein Mart says the lockers will be available by early June.

Investors responded favorably to the news: shares jumped 45 percent after the Amazon deal was announced.


Source: Tech Crunch

The Exit: Getaround’s $300M roadtrip

In August of last year, Getaround scored $300 million from SoftBank. Eight months later, it handed that same amount to Drivy, a Parisian peer-to-peer car rental service that was Getaround’s ticket to tapping into European markets.

Both companies shared similar visions for the future of car ownership, were about the same size and were flirting with expanding beyond their home markets, but only one had the power of the Vision Fund behind it.

The Exit is a new series at TechCrunch. It’s an exit interview of sorts with a VC who was in the right place at the right time and made the right call on an investment that paid off. [Have feedback? Shoot me an email at lucas@techcrunch.com]

Alven Capital’s Jeremy Uzan

Alven Capital partner Jeremy Uzan first invested in Drivy’s seed round in 2013, joining Index Ventures in co-leading a $2 million round that valued the company at less than $10 million. The firms would later join forces again for the company’s $8.3 million Series A.

I chatted at length with Uzan about what lies ahead for the Drivy team, what Paris’s startup scene is still in desperate need of, and how SoftBank’s power is becoming ever more impossible to ignore.

The interview has been edited for length and clarity. 


Getting the checkbook

Lucas Matney: So before we dive into this acquisition, tell me a little bit about how you got to the point where you were writing these checks in the first place.

Jeremy Uzan: So, I studied computer science and business and then spent three years as a tech banker. I was actually in a very small investment banking boutique in Paris helping young startups to raise their Series A rounds. They were all French companies, my first deal was with the YouTube competitor DailyMotion.


Source: Tech Crunch

Why is Facebook doing robotics research?

It’s a bit strange to hear that the world’s leading social network is pursuing research in robotics rather than, say, making search useful, but Facebook is a big organization with many competing priorities. And while these robots aren’t directly going to affect your Facebook experience, what the company learns from them could be impactful in surprising ways.

Though robotics is a new area of research for Facebook, its reliance on and bleeding-edge work in AI are well known. Mechanisms that could be called AI (the definition is quite hazy) govern all sorts of things, from camera effects to automated moderation of restricted content.

AI and robotics are naturally overlapping magisteria — it’s why we have an event covering both — and advances in one often carry over to, or open new areas of inquiry in, the other. So really it’s no surprise that Facebook, with its strong interest in using AI for a variety of tasks in the real and social media worlds, might want to dabble in robotics to mine for insights.

What then could be the possible wider applications of the robotics projects it announced today? Let’s take a look.

Learning to walk from scratch

“Daisy” the hexapod robot.

Walking is a surprisingly complex action, or series of actions, especially when you’ve got six legs, like the robot used in this experiment. You can program in how it should move its legs to go forward, turn around, and so on, but doesn’t that feel a bit like cheating? After all, we had to learn on our own, with no instruction manual or settings to import. So the team looked into having the robot teach itself to walk.

This isn’t a new type of research — lots of roboticists and AI researchers are into it. Evolutionary algorithms (different but related) go back a long way, and we’ve already seen interesting papers along these lines.

By giving their robot some basic priorities like being “rewarded” for moving forward, but no real clue how to work its legs, the team let it experiment and try out different things, slowly learning and refining the model by which it moves. The goal is to reduce the amount of time it takes for the robot to go from zero to reliable locomotion from weeks to hours.
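The trial-and-error loop described above (perturb the behavior, keep changes that earn more "reward" for forward progress) can be sketched generically. This is a toy random-search optimizer, not Facebook's actual method; forward_distance stands in for a real robot or simulator rollout, and the "good gait" target point is invented:

```python
import random

def forward_distance(gait_params):
    """Toy stand-in for a real rollout: score a gait-parameter vector.

    A real setup would run the hexapod (or a simulator) with these
    controller parameters and measure forward progress. Here we simply
    reward parameters near an arbitrary 'good gait' point.
    """
    target = [0.5, -0.2, 0.8]
    return -sum((p - t) ** 2 for p, t in zip(gait_params, target))

def random_search(n_iters=500, step=0.1, seed=0):
    """Hill-climb by Gaussian perturbation: keep a change only if the
    robot is 'rewarded' with more forward progress."""
    rng = random.Random(seed)
    params = [0.0, 0.0, 0.0]  # initial, untrained gait
    best = forward_distance(params)
    for _ in range(n_iters):
        candidate = [p + rng.gauss(0, step) for p in params]
        score = forward_distance(candidate)
        if score > best:
            params, best = candidate, score
    return params, best
```

The point of the research is to make loops like this converge in hours rather than weeks; smarter methods replace blind perturbation with learned models of how actions affect progress.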

What could this be used for? Facebook is a vast wilderness of data, complex and dubiously structured. Learning to navigate a network of data is of course very different from learning to navigate an office — but the idea of a system teaching itself the basics on a short timescale given some simple rules and goals is shared.

Learning how AI systems teach themselves, and how to remove roadblocks like mistaken priorities, cheating the rules, weird data-hoarding habits and other stuff is important for agents meant to be set loose in both real and virtual worlds. Perhaps the next time there is a humanitarian crisis that Facebook needs to monitor on its platform, the AI model that helps do so will be informed by the autodidactic efficiencies that turn up here.

Leveraging “curiosity”

Researcher Akshara Rai adjusts a robot arm in the robotics AI lab in Menlo Park. (Facebook)

This work is a little less visual, but more relatable. After all, everyone feels curiosity to a certain degree, and while we understand that sometimes it kills the cat, most times it’s a drive that leads us to learn more effectively. Facebook applied the concept of curiosity to a robot arm being asked to perform various ordinary tasks.

Now, it may seem odd that they could imbue a robot arm with “curiosity,” but what’s meant by that term in this context is simply that the AI in charge of the arm — whether it’s seeing or deciding how to grip, or how fast to move — is given motivation to reduce uncertainty about that action.

That could mean lots of things — perhaps twisting the camera a little while identifying an object gives it a little bit of a better view, improving its confidence in identifying it. Maybe it looks at the target area first to double check the distance and make sure there’s no obstacle. Whatever the case, giving the AI latitude to find actions that increase confidence could eventually let it complete tasks faster, even though at the beginning it may be slowed by the “curious” acts.
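One common way to operationalize that kind of "curiosity" is to score each candidate action by how much an ensemble of predictors disagrees about its outcome, then act where uncertainty is highest. The sketch below is a generic illustration with made-up numbers, not Facebook's model:

```python
import statistics

def ensemble_disagreement(predictions):
    """Population variance across an ensemble's predictions, used as a
    cheap proxy for the agent's uncertainty about an action's outcome."""
    return statistics.pvariance(predictions)

def pick_curious_action(action_predictions):
    """Choose the action whose outcome the ensemble disagrees on most.

    action_predictions maps an action name to a list of predicted
    outcomes, one per ensemble member (all values are illustrative).
    """
    return max(action_predictions,
               key=lambda a: ensemble_disagreement(action_predictions[a]))

# Made-up predictions for two candidate arm movements:
preds = {
    "grip_now":      [0.90, 0.91, 0.89],  # members agree: low uncertainty
    "peek_at_angle": [0.40, 0.80, 0.10],  # members disagree: high uncertainty
}
chosen = pick_curious_action(preds)  # the "curious" agent peeks first
```

In practice this uncertainty bonus is blended with the task reward, so the arm balances reducing doubt against actually finishing the job.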

What could this be used for? Facebook is big on computer vision, as we’ve seen both in its camera and image work and in devices like Portal, which (some would say creepily) follows you around the room with its “face.” Learning about the environment is critical for both these applications and for any others that require context about what they’re seeing or sensing in order to function.

Any camera operating in an app or device like those from Facebook is constantly analyzing the images it sees for usable information. When a face enters the frame, that’s the cue for a dozen new algorithms to spin up and start working. If someone holds up an object, does it have text? Does it need to be translated? Is there a QR code? What about the background, how far away is it? If the user is applying AR effects or filters, where does the face or hair stop and the trees behind begin?

If the camera, gadget or robot left these tasks to be accomplished “just in time,” it would produce CPU usage spikes, visible latency in the image, and all kinds of other stuff the user or system engineer doesn’t want. But if it’s doing them all the time, that’s just as bad. If instead the AI agent exerts curiosity to check these things when it senses too much uncertainty about the scene, that’s a happy medium. This is just one way curiosity could be used, but given Facebook’s priorities it seems like an important one.

Seeing by touching

Although vision is important, it’s not the only way that we, or robots, perceive the world. Many robots are equipped with sensors for motion, sound, and other modalities, but actual touch is relatively rare. Chalk it up to a lack of good tactile interfaces (though we’re getting there). Nevertheless, Facebook’s researchers wanted to look into the possibility of using tactile data as a surrogate for visual data.

If you think about it, that’s perfectly normal — people with visual impairments use touch to navigate their surroundings or acquire fine details about objects. It’s not exactly that they’re “seeing” via touch, but there’s a meaningful overlap between the concepts. So Facebook’s researchers deployed an AI model that decides what actions to take based on video, but instead of actual video data, fed it high-resolution touch data.

Turns out the algorithm doesn’t really care whether it’s looking at an image of the world as we’d see it or not — as long as the data is presented visually, for instance as a map of pressure on a tactile sensor, it can be analyzed for patterns just like a photographic image.
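That idea, pressure data "presented visually," amounts to rendering the tactile grid as a grayscale frame. A minimal sketch, with illustrative units and sensor range (real tactile sensors and their scales vary):

```python
def pressure_to_image(pressure_grid, max_kpa=100.0):
    """Render a tactile sensor's pressure readings as an 8-bit grayscale
    'image' so a vision-style pipeline can consume it unchanged.

    pressure_grid: 2D list of readings in kPa (units and range are
    assumptions for illustration).
    """
    return [
        [min(255, int(255 * reading / max_kpa)) for reading in row]
        for row in pressure_grid
    ]

# The same downstream code that inspects camera frames can inspect this:
touch_frame = pressure_to_image([[0.0, 50.0], [100.0, 25.0]])
```

Once touch is in image form, spatial patterns like edges and contact blobs can be found with the same convolutional machinery built for photos, which is the substitution the researchers exploited.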

What could this be used for? It’s doubtful Facebook is super interested in reaching out and touching its users. But this isn’t just about touch — it’s about applying learning across modalities.

Think about how, if you were presented with two distinct objects for the first time, it would be trivial to tell them apart with your eyes closed, by touch alone. Why can you do that? Because when you see something, you don’t just understand what it looks like, you develop an internal model representing it that encompasses multiple senses and perspectives.

Similarly, an AI agent may need to transfer its learning from one domain to another — auditory data telling a grip sensor how hard to hold an object, or visual data telling the microphone how to separate voices. The real world is a complicated place and data is noisier here — but voluminous. Being able to leverage that data regardless of its type is important to reliably being able to understand and interact with reality.

So you see that while this research is interesting in its own right, and can in fact be explained on that simpler premise, it is also important to recognize the context in which it is being conducted. As the blog post describing the research concludes:

We are focused on using robotics work that will not only lead to more capable robots but will also push the limits of AI over the years and decades to come. If we want to move closer to machines that can think, plan, and reason the way people do, then we need to build AI systems that can learn for themselves in a multitude of scenarios — beyond the digital world.

As Facebook continually works on expanding its influence from its walled garden of apps and services into the rich but unstructured world of your living room, kitchen, and office, its AI agents require more and more sophistication. Sure, you won’t see a “Facebook robot” any time soon… unless you count the one they already sell, or the one in your pocket right now.


Source: Tech Crunch

Talk key takeaways from KubeCon 2019 with TechCrunch writers

The Linux Foundation’s annual KubeCon conference is going down at the Fira Gran Via exhibition center in Barcelona, Spain this week and TechCrunch is on the scene covering all the latest announcements.

The KubeCon/CloudNativeCon conference is the world’s largest gathering for the topics of Kubernetes, DevOps and cloud-native applications. TechCrunch’s Frederic Lardinois and Ron Miller will be on the ground at the event. Wednesday at 9:00 am PT, Frederic and Ron will be sharing what they saw and what it all means with Extra Crunch members on a conference call.

Tune in to dig into what happened onstage and off, and ask Frederic and Ron any and all things Kubernetes, open source development or dev tools.

To listen to this and all future conference calls, become a member of Extra Crunch. Learn more and try it for free.


Source: Tech Crunch

Instagram’s IGTV copies TikTok’s AI, Snapchat’s design

Instagram conquered Stories, but it’s losing the battle for the next video formats. TikTok is blowing up with an algorithmically suggested vertical one-at-a-time feed featuring videos of users remixing each other’s clips. Snapchat Discover’s 2 x infinity grid has grown into a canvas for multi-media magazines, themed video collections, and premium mobile TV shows.

Instagram’s IGTV…feels like a flop in comparison. Launched a year ago, it’s full of crudely cropped & imported viral trash from around the web. The long-form video hub that lives inside both a homescreen button in Instagram as well as a standalone app has failed to host lengthier must-see original vertical content. Sensor Tower estimates that the IGTV app has just 4.2 million installs worldwide with just 7,700 new ones per day — implying less than half a percent of Instagram’s billion-plus users have downloaded it. IGTV doesn’t rank on the overall charts and hangs low at #191 on the US – Photo & Video app charts according to App Annie.

Now Instagram has quietly overhauled the design of IGTV’s space inside its main app to crib what’s working from its two top competitors. The new design showed up in last week’s announcements for Instagram Explore’s new Shopping and IGTV discovery experiences, but the company declined to answer questions about it.

IGTV has ditched its category-based navigation system’s tabs like “For You”, “Following”, “Popular”, and “Continue Watching” for just one central feed of algorithmically suggested videos — much like TikTok. This affords a more lean-back, ‘just show me something fun’ experience that relies on Instagram’s AI to analyze your behavior and recommend content instead of putting the burden of choice on the viewer.

IGTV has also ditched its awkward horizontal scrolling design that always kept a clip playing in the top half of the screen. Now you’ll scroll vertically through a 2 x infinity grid of recommended clips in what looks just like the Snapchat Discover feed. Once you get past a first video that auto-plays up top, you’ll find a full-screen grid of things to watch. You’ll only see the horizontal scroller in the standalone IGTV app, or if you tap into an IGTV video and then tap the Browse button to find the next clip while the last one plays up top.

Instagram seems to be trying to straddle the designs of its two competitors. The problem is that TikTok’s one-at-a-time feed works great for punchy, short videos that get right to the point. If you’re bored after 5 seconds, you swipe to the next. IGTV’s focus on long-form means its videos might start too slowly to grab your attention if they were auto-played full-screen in the feed rather than being chosen by a viewer. But Snapchat makes the most of the two-previews-per-row design IGTV has adopted because professional publishers take the time to make compelling cover thumbnail images promoting their content. IGTV’s focus on independent creators means fewer have labored to make great cover images, so viewers have to rely on a screenshot and caption.

Instagram is prototyping a number of other features to boost engagement across its app, as discovered by reverse engineering specialist and frequent TechCrunch tipster Jane Manchun Wong. Those include options to blast a direct message to all your Close Friends at once but in individual message threads, see a divider between notifications and likes you have or haven’t seen, or post a Chat sticker to Stories that lets friends join a group message thread about that content. And to better compete with TikTok, it may let you add lyrics stickers to Stories that appear word-by-word in sync with Instagram’s licensed music soundtrack feature, and share Music Stories to Facebook.

When I spoke with Instagram co-founder and ex-CEO Kevin Systrom last year a few months after IGTV’s launch, he told me “It’s a new format. It’s different. We have to wait for people to adopt it and that takes time . . . Everything that is great starts small.”

But to grow large, IGTV needs to demonstrate how long-form portrait mode video can give us a deeper look at the nuances of the influencers and topics we care about. The company has rightfully prioritized other drives like safety and well-being with features that hide bullies and deter overuse. But my advice from August still stands despite all the ground Instagram has lost in the meantime. “Concentrate on teaching creators how to find what works on the format and incentivizing them with cash and traffic. Develop some must-see IGTV and stoke a viral blockbuster. Prove the gravity of extended, personality-driven vertical video.” Until the content is right, it won’t matter how IGTV surfaces it.


Source: Tech Crunch

In-car commerce startup Cargo extends Uber partnership to Brazil

Cargo, the startup that brings the convenience store into ride-hailing vehicles, is making its first international expansion through an exclusive partnership with Uber in Brazil.

Uber drivers in São Paulo and Rio de Janeiro will now be able to sign up for Cargo and potentially earn additional income by selling products to passengers during their rides.

Cargo, which launched in 2017, provides qualified ridesharing drivers with free boxes filled with the kinds of goods you might find in a convenience store, including snacks and phone chargers. Riders can use Cargo’s mobile web menu on their smartphones (without downloading an app) to buy what they need.

The expansion into Brazil includes a relationship with am/pm convenience stores. In Brazil, about 2,500 am/pm stores operate out of Ipiranga gas stations. Uber drivers who sign up with Cargo will collect their boxes of products at these stores.

The announcement is an extension of a partnership with Uber that began last July in San Francisco and Los Angeles. Cargo and Uber have added more U.S. cities to the partnership, including Boston, Miami, New York and Washington D.C.

The move will give Cargo access to the more than 600,000 Uber drivers in Brazil. It also signals the beginning of what will be a broader global expansion for the company. Some 20,000 U.S. drivers have used the Cargo service. 

In October, Cargo announced it had raised $22 million in a Series A round led by Founders Fund. The round included additional investment from Aquiline Technology Growth, Coatue Management and a number of high-profile entertainment, gaming and technology executives such as Zynga founder Mark Pincus, Twitch’s former CSO Colin Carrier, media investor Vivi Nevo, former NBA commissioner David Stern, Def Jam Records CEO Paul Rosenberg, Steve Aoki, Maria Shriver and Patrick and Christina Schwarzenegger.

To date, Cargo has raised $30 million in venture funding.


Source: Tech Crunch

When will customers start buying all those AI chips?

It’s the best and worst time to be in semiconductors right now. Silicon Valley investors are once again living up to their namesake and taking a deep interest in next-generation silicon, with leading lights like Graphcore in the United Kingdom hitting unicorn status while weirdly named and stealthy startups like Groq in the Bay Area grow up.

Growth in chips capable of processing artificial intelligence workflows is expected to swell phenomenally over the coming years. As Asa Fitch at the Wall Street Journal noted yesterday, “Demand for chips specialized for AI is growing at such a pace the industry can barely keep up. Sales of such chips are expected to double this year to around $8 billion and reach more than $34 billion by 2023, according to Gartner projections.”

Yet, all those rosy projections don’t suddenly make the financial results of companies like Nvidia any easier to swallow. The company reported its quarterly earnings last week, and the results were weak — pretty much across the board.


Source: Tech Crunch