Madrona Venture Group launches $100M acceleration fund

Seattle’s Madrona Venture Group has long been one of the most prominent early-stage funds in the backyard of Amazon and Microsoft. Now, however, the firm is starting to look beyond the Pacific Northwest with the launch of its $100 million Acceleration Fund, which will expand its geographic reach to the entire U.S. and give it a vehicle to invest in later rounds.

The new fund will see Madrona make more investments at the Series B and C stage. While Madrona has made a wide variety of investments over the years, including some into consumer services, its focus has long been on enterprise cloud companies, ranging from Apptio to Smartsheet and Heptio (which VMware recently acquired). We’ll see a similar focus with this new fund, as Madrona managing director Matt McIlwain told me, with an emphasis on cloud and applied machine learning companies. Unlike Madrona’s current focus on the Pacific Northwest — and Seattle in particular — this fund will also invest in companies across the country.

“Our long-time strategy has been early stage, broad-based technology, Pacific Northwest,” McIlwain told me. “We call it an acceleration fund because we want to differentiate it from what some people call opportunity funds, which is more of a ‘put more money into my existing company.’ This is not that. This is new money into great companies that have reached that initial product-market fit and that want to accelerate their growth.”

Madrona also expects these companies to have achieved product differentiation and to have founders and key executives who can sell those products.

McIlwain noted that Madrona has selectively made some of these investments in companies like Tigera, Snowflake and Accolade over the years already. This new fund gives the firm a dedicated vehicle to invest in companies where it believes it can add more value at this later stage.

“When I joined Accolade almost four years ago, the mission was to accelerate the company’s growth by finding the best talent to build a world-class product and distribution team,” said Accolade CEO Raj Singh. “To do that, you need world-class partners. Having worked with Matt McIlwain and Madrona on both the Apptio and Amperity boards of directors, reaching out to Madrona was high on my priority list on day one. And they have lived up to my expectations – helping with customer acquisition, critical hires, key partnerships, and invaluable counsel.”

McIlwain told me that Madrona has yet to make its first investment from the new fund. “But we’re eager to find that first one that’ll be special enough,” he said.


Source: Tech Crunch

Tesla sued in wrongful death lawsuit that alleges Autopilot caused crash

The family of Walter Huang, an Apple engineer who died after his Tesla Model X with Autopilot engaged crashed into a highway median, is suing Tesla. The State of California Department of Transportation is also named in the lawsuit.

The wrongful death lawsuit, filed in California Superior Court, County of Santa Clara, alleges that errors by Tesla’s Autopilot driver assistance system caused the crash that killed Huang on March 23, 2018. Huang, who was 38, died when his 2017 Tesla Model X hit a highway barrier on Highway 101 in Mountain View, California.

The lawsuit alleges that Tesla’s Autopilot driver assistance system misread lane lines, failed to detect the concrete median, failed to brake and instead accelerated into the median.

A Tesla spokesperson declined to comment on the lawsuit.

“Mrs. Huang lost her husband, and two children lost their father because Tesla is beta testing its Autopilot software on live drivers,” B. Mark Fong, a partner at the law firm Minami Tamaki, said in a statement.

Other allegations against Tesla include product liability, defective product design, failure to warn, breach of warranty, intentional and negligent misrepresentation and false advertising. California DOT is also named in the lawsuit because the concrete highway median that Huang’s vehicle struck was missing its crash attenuator guard, according to the filing. Caltrans failed to replace the guard after an earlier crash there, the lawsuit alleges.

The lawsuit aims to “ensure the technology behind semi-autonomous cars is safe before it is released on the roads, and its risks are not withheld or misrepresented to the public,” said Doris Cheng, a partner at Walkup, Melodia, Kelly & Schoenberger, who is also representing the family.

In the days following the crash, Tesla released two blog posts and ended up scuffling with the National Transportation Safety Board, which had sent investigators to the crash scene.

Tesla’s March 30 blog post acknowledged Autopilot had been engaged at the time of the crash. Tesla said the driver had received several visual and one audible hands-on warning earlier in the drive and the driver’s hands were not detected on the wheel for six seconds prior to the collision.

Those comments prompted a response from the NTSB, which indicated it was “unhappy with the release of investigative information by Tesla.” The NTSB requires companies that are party to an agency accident investigation not to release details about the incident to the public without approval.

Tesla CEO Elon Musk soon chimed in on Twitter to express his own disappointment with, and criticism of, the NTSB.

Three weeks after the crash, Tesla issued a statement placing the blame on Huang and denying moral or legal liability for the crash.

“According to the family, Mr. Huang was well aware that Autopilot was not perfect and, specifically, he told them it was not reliable in that exact location, yet he nonetheless engaged Autopilot at that location. The crash happened on a clear day with several hundred feet of visibility ahead, which means that the only way for this accident to have occurred is if Mr. Huang was not paying attention to the road, despite the car providing multiple warnings to do so.”

The relationship between NTSB and Tesla would disintegrate further following the statement. Tesla said it withdrew from its party agreement with the NTSB. Within a day, NTSB claimed that it had removed Tesla as a party to its crash investigation.

A preliminary report from the NTSB didn’t draw any conclusions about what caused the crash. But it did find that the vehicle accelerated from 62 mph to 70.8 mph in the final three seconds before impact and moved left as it approached the paved gore area dividing the main travel lanes of Highway 101 from the Highway 85 exit ramp.

The report also found that in the 18 minutes and 55 seconds prior to impact, the Tesla provided two visual alerts and one auditory alert for the driver to place his hands on the steering wheel. The alerts were made more than 15 minutes before the crash.

Huang’s hands were detected on the steering wheel for only 34 seconds during the last minute before impact. No pre-crash braking or evasive steering movement was detected, the report said.

The case is Sz Hua Huang et al v. Tesla Inc., The State of California, no. 19CV346663.

 


Source: Tech Crunch

Golden unveils a Wikipedia alternative focused on emerging tech and startups

Jude Gomila, who previously sold his mobile advertising company Heyzap to RNTS Media, is taking on a new challenge — building a “knowledge base” that can fill in Wikipedia’s blind spots, particularly when it comes to emerging technologies and startups.

While Gomila is officially launching Golden today, it’s already full of content about things like the latest batch of Y Combinator startups and morphogenetic engineering. And it’s already raised $5 million from Andreessen Horowitz, Gigafund, Founders Fund, SV Angel, Liquid 2 Ventures/Joe Montana, plus a long list of individual angel investors including Gomila’s Heyzap co-founder Immad Akhund.

To state the obvious: Wikipedia is an incredibly useful website, but Gomila pointed out that notable companies and technologies like SV Angel, Benchling, Lisk and Urbit don’t currently have entries. Part of the problem is what he called Wikipedia’s “arbitrary notability threshold,” where pages are deleted for not being notable enough. (Full disclosure: This is also what happened years ago to the Wikipedia page about yours truly — which I swear I didn’t write myself.)

Perhaps that threshold made sense when Wikipedia was just getting started and the infrastructure costs were higher, but Gomila said it doesn’t make sense now. In determining what should be included in Golden, he said the “more fundamental” question is more about existence: “Does this company exist? Does Anthony Ha exist?” If so, there’s a good chance that it should have a page on Golden, at least eventually.

In his blog post outlining his vision for the site, Gomila wrote:

We live in an age of extreme niches, an age when validation and completeness is more important than notability. Our encyclopedia on Golden doesn’t have limited shelf space — we eventually want to map everything that exists. Special relativity was not notable to the general public the moment Einstein released his seminal paper, but certainly was later on — could this have been the kind of topic to be removed from the world’s canon if it was discovered today?

Golden homepage

Gomila said he’s also bringing some new technologies and fresh approaches to the problem. Some of this is pretty straightforward, like allowing users to embed video, academic papers and other multimedia content onto Golden pages.

At the same time, he’s hoping to make it much easier to write and edit Golden pages. You do so in a WYSIWYG editor that doesn’t require you to know any HTML, and the site will help you with automated suggestions, for example pulling out author and title information when you’re adding a link to another site.

Gomila said that this will allow users to work much more quickly, so that “one hour spent on Golden is effectively 100 hours on other platforms.”

There’s also an emphasis on transparency, which includes features like “high resolution citations” (citations that make it extra clear which statement you’re trying to provide evidence for) and the fact that Golden account names are tied to your real identity — in other words, you’re supposed to edit pages under your own name. Gomila said the site backs this up with bot detection and “various protection mechanisms” designed to ensure that users aren’t pretending to be someone they’re not.

“I’m sure there will always be trolls up to their usual tricks, but they will be on the losing side,” he told me.

AI Suggestions

If you think someone has added incorrect or misleading information to a page, you can flag it as an issue. Gomila suggested AI could also play a more editorial role by pointing out when someone is using language that’s biased or seems too close to marketing-speak.

“AI can have bias and humans can have bias,” he acknowledged, but he’s hoping that both elements working together can help Golden get closer to the truth. He added that “rather than us editorially changing things, our team will act like normal users” who can edit and flag issues.

Golden is available to users for free, without advertising. Gomila said his initial plan for making money is charging investment funds and large companies for a more sophisticated query tool.


Source: Tech Crunch

Oculus announces a VR subscription service for enterprises

Oculus is getting serious about monetizing VR for enterprise.

The company has previously sold business-specific versions of its headsets, but now it’s adding a pricey annual device management subscription.

Oculus Go for business starts at $599 (64 GB) and the enterprise Oculus Quest starts at $999 (128 GB). These prices include the first year of enterprise device management and support, which goes for $180 per year per device.

Here’s what that fee gets you:

This includes a dedicated software suite offering device setup and management tools, enterprise-grade service and support, and a new user experience customized for business use cases.

The new Oculus for Business launches in the fall.


Source: Tech Crunch

Developers can now verify mobile app users over WhatsApp instead of SMS

Facebook today released a new SDK that allows mobile app developers to integrate WhatsApp verification into Account Kit for iOS and Android. This will allow developers to build apps where users can opt to receive their verification codes through the WhatsApp app installed on their phone, instead of through SMS.

Today, many apps give users the ability to sign up using only a phone number — a now popular alternative to Facebook Login, thanks to the social network’s numerous privacy scandals which led to fewer people choosing to use Facebook with third-party apps.

Plus, using phone numbers to sign up is common with a younger generation of users who don’t have Facebook accounts — and sometimes barely use email, except for joining apps and services.

When using a phone number to sign in, it’s common for the app to confirm the user by sending a verification code over SMS to the number provided. The user then enters that code to create their account. This process can also be used when logging in, as part of a multi-factor verification system where a user’s account information is combined with this extra step for added security.

While this process is straightforward and easy enough to follow, SMS is not everyone’s preferred messaging platform. That’s particularly true in emerging markets like India, where 200 million people are on WhatsApp, for example. In addition, those without an unlimited messaging plan are careful not to overuse texting when it can be avoided.

That’s where the WhatsApp SDK comes in. Once integrated into an iOS or Android app, developers can offer to send users their verification code over WhatsApp instead of text messaging. They can even choose to disable SMS verification, notes Facebook.
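To make that flow concrete, here is a minimal server-side sketch of phone-number verification with a selectable delivery channel. The helpers `send_sms` and `send_whatsapp_message` are hypothetical stand-ins for whatever messaging provider an app actually uses (Account Kit’s real API handles delivery internally and looks different), but the generate, deliver, confirm shape is the same.

```python
import secrets
import time

# Hypothetical delivery helpers -- stand-ins for a real messaging provider.
def send_sms(phone_number: str, text: str) -> None:
    print(f"[SMS to {phone_number}] {text}")

def send_whatsapp_message(phone_number: str, text: str) -> None:
    print(f"[WhatsApp to {phone_number}] {text}")

PENDING = {}  # phone_number -> (code, expiry timestamp)

def start_verification(phone_number: str, channel: str = "whatsapp") -> None:
    """Generate a one-time code and deliver it over the chosen channel."""
    code = f"{secrets.randbelow(1_000_000):06d}"        # 6-digit one-time code
    PENDING[phone_number] = (code, time.time() + 300)   # valid for 5 minutes
    message = f"Your verification code is {code}"
    if channel == "whatsapp":
        send_whatsapp_message(phone_number, message)
    else:
        send_sms(phone_number, message)

def confirm_verification(phone_number: str, submitted_code: str) -> bool:
    """Check the code the user typed back into the app."""
    code, expires_at = PENDING.get(phone_number, (None, 0))
    if code is None or time.time() > expires_at:
        return False
    return secrets.compare_digest(code, submitted_code)
```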

This is all a part of Facebook’s Account Kit, a larger set of developer tools designed to allow people to quickly register and log in to apps or websites using only a phone number or email, no password required.

This WhatsApp verification code option has been available in WhatsApp’s web SDK since late 2018, but hadn’t been available for mobile apps until today.


Source: Tech Crunch

Google employees are staging a sit-in to protest reported retaliation

Google employees are staging a sit-in tomorrow at 11 a.m. to protest alleged retaliation by managers against employees.

“From being told to go on sick leave when you’re not sick, to having your reports taken away, we’re sick of retaliation,” Google employees tweeted via @GoogleWalkout. “Six months ago, we walked out. This time, we’re sitting in.”

Google declined to comment on the sit-in but pointed to its previous statement regarding retaliation:

“We prohibit retaliation in the workplace and publicly share our very clear policy,” a Google spokesperson told TechCrunch. “To make sure that no complaint raised goes unheard at Google, we give employees multiple channels to report concerns, including anonymously, and investigate all allegations of retaliation.”

This comes six months after 20,000 Google employees walked out following the company’s mishandling of sexual harassment allegations. Last week, two Google employees accused the company of retaliating against them for organizing the walkout, Wired first reported.

Meredith Whittaker, the lead of Google’s Open Research and one of the organizers of the walkout, said her role was “changed dramatically.” Fellow walkout organizer Claire Stapleton said her manager told her she would be demoted and lose half of her reports.

That was followed by an employee-led town hall meeting to hear from other employees who had faced retaliation at Google. Yesterday, Googlers publicly shared additional stories of retaliation on Medium. Here’s one:

My retaliators were punished with “coaching”

I reported my tech lead to my manager for sexual harassment, but my manager thought I was “overreacting.” I then reported my manager, as I could no longer feel comfortable working with this colleague every day while no action was being taken. The tech lead provided unsolicited feedback in my perf that took four months for the perf team to remove. The manager boxed me out and denied my promotion nomination by my peers. Eventually HR found there was retaliation but simply offered “coaching” to the tech lead and manager. I was asked to accept this. I refused. No additional actions were taken. They both still work at Google.

In response, Google Global Director of Diversity, Equity & Inclusion Melonie Parker began publicly sharing the company’s workplace policies on harassment, discrimination and retaliation. That policy specifically states Google prohibits retaliation for “raising a concern about a violation of policy or law or participating in an investigation relating to a violation of policy or law. Retaliation means taking an adverse action against an employee or TVC as a consequence of reporting, for expressing an intent to report, for assisting another employee in an effort to report, for testifying or assisting in a proceeding involving sexual harassment under any federal, state or local anti-discrimination law, or for participating in the investigation of what they believe in good faith to be a possible violation of our Code of Conduct, Google policy or the law.”


Source: Tech Crunch

Hackers went undetected in Citrix’s internal network for six months

Hackers gained access to technology giant Citrix’s networks six months before they were discovered, the company has confirmed.

In a letter to California’s attorney general, the virtualization and security software maker said the hackers had “intermittent access” to its internal network from October 13, 2018 until March 8, 2019, two days after the FBI alerted the company to the breach.

Citrix said the hackers “removed files from our systems, which may have included files containing information about our current and former employees and, in limited cases, information about beneficiaries and/or dependents.”

Initially the company said hackers stole business documents. Now it’s saying the stolen information may have included names, Social Security numbers and financial information.

Citrix said in a later update on April 4 that the attack was likely the result of password spraying, a technique in which attackers try lists of commonly used passwords against accounts that aren’t protected by two-factor authentication.
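For readers unfamiliar with the term, here is a rough sketch of how a defender might spot password spraying in an authentication log. The log fields, the one-hour window and the 20-account threshold are illustrative assumptions rather than details from the Citrix incident; the telltale signal is failed logins spread across many distinct accounts from one source, rather than many attempts against a single account.

```python
from collections import defaultdict
from datetime import timedelta

def flag_password_spraying(events, window=timedelta(hours=1), threshold=20):
    """Flag source IPs whose failed logins touch many *distinct* accounts.

    Classic brute force hammers one account with many passwords; spraying
    tries one or two common passwords across many accounts, so the signal
    is breadth of accounts per source rather than attempts per account.
    `events` is an iterable of (timestamp, source_ip, account, success).
    """
    failures = defaultdict(list)  # source_ip -> [(timestamp, account)]
    for ts, source_ip, account, success in events:
        if not success:
            failures[source_ip].append((ts, account))

    suspicious = []
    for source_ip, attempts in failures.items():
        attempts.sort()
        for i, (start, _) in enumerate(attempts):
            accounts_in_window = {acct for ts, acct in attempts[i:] if ts - start <= window}
            if len(accounts_in_window) >= threshold:
                suspicious.append(source_ip)
                break
    return suspicious
```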

We asked Citrix how many staff were sent data-breach notification letters, but a spokesperson did not immediately comment.

Under California law, the authorities must be informed of a breach if more than 500 state residents are involved.


Source: Tech Crunch

Diving into TED2019, the state of social media, and internet behavior

Extra Crunch offers members the opportunity to tune into conference calls led and moderated by the TechCrunch writers you read every day. Last week, TechCrunch’s Anthony Ha gave us his recap of the TED2019 conference and offered key takeaways from the most interesting talks and provocative ideas shared at the event.

Under the theme ‘Bigger Than Us,’ the conference featured talks, Q&As and presentations from a wide array of high-profile speakers, including an appearance from Twitter CEO Jack Dorsey that was the talk of the week. Anthony dives deeper into the questions raised in Dorsey’s onstage interview that kept popping up: How has social media warped our democracy? How can the big online platforms fight back against abuse and misinformation? And what is the Internet good for, anyway?

“…So I would suggest that probably five years ago, the way that we wrote about a lot of these tech companies was too positive and they weren’t as good as we made them sound. Now the pendulum has swung all the way in the other direction, where they’re probably not as bad as we make them sound…

…At TED, you’d see the more traditional TED talks about, “Let’s talk about the magic of finding community in the internet.” There were several versions of that talk this year. Some of them very good, but now you have to have that conversation with the acknowledgement that there’s much that is terrible on the internet.”

Ivan Poupyrev (Image via Ryan Lash / TED)

Anthony also digs into what really differentiates the TED conference from other tech events, what types of people did and should attend the event, and even how he managed to get kicked out of the theater for typing too loud.

For access to the full transcription and the call audio, and for the opportunity to participate in future conference calls, become a member of Extra Crunch. Learn more and try it for free. 


Source: Tech Crunch

Why did last night’s ‘Game of Thrones’ look so bad? Here comes the science!

Last night’s episode of “Game of Thrones” was a wild ride and inarguably one of an epic show’s more epic moments — if you could see it through the dark and the blotchy video. It turns out even one of the most expensive and meticulously produced shows in history can fall prey to the scourge of low quality streaming and bad TV settings.

The good news is this episode is going to look amazing on Blu-ray or potentially in future, better streams and downloads. The bad news is that millions of people already had to see it in a way its creators surely lament. You deserve to know why this was the case. I’ll be simplifying a bit here because this topic is immensely complex, but here’s what you should know.

(By the way, I can’t entirely avoid spoilers, but I’ll try to stay away from anything significant in words or images.)

It was clear from the opening shots in last night’s episode, “The Long Night,” that this was going to be a dark one. The army of the dead faces off against the allied living forces in the darkness, made darker by a bespoke storm brought in by, shall we say, a Mr. N.K., to further demoralize the good guys.

If you squint you can just make out the largest army ever assembled

Thematically and cinematographically, setting this chaotic, sprawling battle at night is a powerful creative choice and a valid one, and I don’t question the showrunners, director, and so on for it. But technically speaking, setting this battle at night, and in fog, is just about the absolute worst case scenario for the medium this show is native to: streaming home video. Here’s why.

Compression factor

Video has to be compressed in order to be sent efficiently over the internet, and although we’ve made enormous strides in video compression and the bandwidth available to most homes, there are still fundamental limits.

The master video that HBO put together from the actual footage, FX, and color work that goes into making a piece of modern media would be huge: hundreds of gigabytes if not terabytes. That’s because the master has to include all the information on every pixel in every frame, no exceptions.

Imagine if you tried to “stream” a terabyte-sized video file. You’d have to be able to download 200 megabytes per second for the full 80 minutes of this episode. Few people in the world have that kind of connection — it would basically never stop buffering. Even 20 megabytes per second is asking too much by a long shot. Two megabytes per second is doable — slightly under the 25 megabit speed (that’s bits… divide by 8 to get bytes) we use to define broadband download speeds.
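That arithmetic is easy to sanity-check. A quick back-of-the-envelope script, assuming a round 1 TB master and an 80-minute runtime:

```python
# Back-of-the-envelope check of the numbers above: streaming an
# uncompressed ~1 TB master over an 80-minute episode vs. a 25 Mbps
# broadband connection.
master_bytes = 1e12                  # ~1 TB master (assumed round number)
episode_seconds = 80 * 60            # 80-minute episode

required_mb_per_s = master_bytes / episode_seconds / 1e6
print(f"Required: {required_mb_per_s:.0f} MB/s")        # ~208 MB/s

broadband_mbit = 25                                     # broadband definition, in megabits
broadband_mb_per_s = broadband_mbit / 8                 # bits -> bytes
print(f"Broadband: {broadband_mb_per_s:.1f} MB/s")      # ~3.1 MB/s
print(f"Shortfall: {required_mb_per_s / broadband_mb_per_s:.0f}x too slow")
```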

So how do you turn a large file into a small one? Compression — we’ve been doing it for a long time, and video, though different from other types of data in some ways, is still just a bunch of zeroes and ones. In fact it’s especially susceptible to strong compression because of how one video frame is usually very similar to the last and the next one. There are all kinds of shortcuts you can take that reduce the file size immensely without noticeably impacting the quality of the video. These compression and decompression techniques fit into a system called a “codec.”
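A toy way to see how much frame-to-frame similarity helps: compress a synthetic frame on its own, then compress only its difference from the previous frame. The snippet below uses a general-purpose compressor (zlib) and random texture as a stand-in for real image detail, so the absolute numbers mean nothing, but the delta, which is almost all zeros, collapses to a tiny fraction of the size.

```python
import zlib
import numpy as np

# Consecutive video frames are nearly identical, so storing only the
# *difference* from the previous frame takes far less data than storing
# each frame from scratch. Real codecs (H.264, HEVC, AV1) are far more
# sophisticated than this, but the intuition holds.
rng = np.random.default_rng(0)
frame1 = rng.integers(0, 256, size=(720, 1280), dtype=np.uint8)  # busy, detailed frame
frame2 = frame1.copy()
frame2[100:140, 200:260] = 80  # only a small patch changes in the next frame

whole_frame = len(zlib.compress(frame2.tobytes()))
delta = frame2.astype(np.int16) - frame1.astype(np.int16)
frame_delta = len(zlib.compress(delta.tobytes()))

print(f"frame on its own:     {whole_frame:>9,} bytes")
print(f"frame-to-frame delta: {frame_delta:>9,} bytes")
```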

But there are exceptions to that, and one of them has to do with how compression handles color and brightness. Basically, when the image is very dark, the codec can’t represent color very well.

The color of winter

Think about it like this: There are only so many ways to describe colors in a few words. If you have one word you can say red, or maybe ochre or vermilion depending on your interlocutor’s vocabulary. But if you have two words you can say dark red, darker red, reddish black, and so on. The codec has a limited vocabulary as well, though its “words” are the numbers of bits it can use to describe a pixel.

This lets it succinctly describe a huge array of colors with very little data by saying: this pixel has this bit value of color, this much brightness, and so on. (I didn’t originally want to get into this, but this is what people are talking about when they say bit depth, or even “highest quality pixels.”)

But this also means that there are only so many gradations of color and brightness it can show. Going from a very dark grey to a slightly lighter grey, it might be able to pick 5 intermediate shades. That’s perfectly fine if it’s just on the hem of a dress in the corner of the image. But what if the whole image is limited to that small selection of shades?

Then you get what we saw last night. See how Jon (I think) is made up almost entirely of only a handful of different colors (brightnesses of a similar color, really), with big, obvious borders between them?

This issue is called “banding,” and it’s hard not to notice once you see how it works. Images on video can be incredibly detailed, but places where there are subtle changes in color — often a clear sky or some other large but mild gradient — will exhibit large stripes as the codec goes from “darkest dark blue” to “darker dark blue” to “dark blue,” with no “darker darker dark blue” in between.

Check out this image.

Above is a smooth gradient encoded with high color depth. Below that is the same gradient encoded with lossy JPEG encoding — different from what HBO used, obviously, but you get the idea.
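You can reproduce the effect in a few lines of Python: take a smooth 0–255 gradient and quantize it down to a handful of levels, which is roughly the situation a dark, foggy scene leaves the codec in. The choice of six levels is arbitrary, just enough to make the bands obvious.

```python
import numpy as np

# A minimal sketch of banding: a smooth 0-255 gradient quantized down to a
# handful of levels, roughly what happens when a dark scene leaves the codec
# only a few usable shades between "black" and "dark grey".
gradient = np.linspace(0, 255, 1920)           # smooth horizontal gradient
levels = 6                                     # only a few shades survive
step = 256 / levels
banded = (np.floor(gradient / step) * step).astype(np.uint8)

print("distinct shades before:", len(np.unique(gradient.astype(np.uint8))))  # 256
print("distinct shades after: ", len(np.unique(banded)))                     # 6 visible bands
```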

Banding has plagued streaming video forever, and it’s hard to avoid even in major productions — it’s just a side effect of representing color digitally. It’s especially distracting because obviously our eyes don’t have that limitation. A high-definition screen may actually show more detail than your eyes can discern from couch distance, but color issues? Our visual systems flag them like crazy. You can minimize it, but it’s always going to be there, until the point when we have as many shades of grey as we have pixels on the screen.

So back to last night’s episode. Practically the entire show took place at night, which removes about 3/4 of the codec’s brightness-color combos right there. It also wasn’t a particularly colorful episode, a directorial or photographic choice that highlighted things like flames and blood, but further limited the ability to digitally represent what was on screen.

It wouldn’t be too bad if the background was black and people were lit well so they popped out, though. The last straw was the introduction of the cloud, fog, or blizzard, whatever you want to call it. This kept the brightness of the background just high enough that the codec had to represent it with one of its handful of dark greys, and the subtle movements of fog and smoke came out as blotchy messes (often called “compression artifacts” as well) as the compression desperately tried to pick what shade was best for a group of pixels.

Just brightening it doesn’t fix things, either — because the detail is already crushed into a narrow range of values, you just get a bandy image that never gets completely black, making it look washed out, as you see here:

(Anyway, the darkness is a stylistic choice. You may not agree with it, but that’s how it’s supposed to look and messing with it beyond making the darkest details visible could be counterproductive.)

Now, it should be said that compression doesn’t have to be this bad. For one thing, the more data it is allowed to use, the more gradations it can describe, and the less severe the banding. It’s also possible (though I’m not sure where it’s actually done) to repurpose the rest of the codec’s “vocabulary” to describe a scene where its other color options are limited. That way the full bandwidth can be used to describe a nearly monochromatic scene even though strictly speaking it should be only using a fraction of it.

But neither of these are likely an option for HBO: Increasing the bandwidth of the stream is costly, since this is being sent out to tens of millions of people — a bitrate increase big enough to change the quality would also massively swell their data costs. When you’re distributing to that many people, that also introduces the risk of hated buffering or errors in playback, which are obviously a big no-no. It’s even possible that HBO lowered the bitrate because of network limitations — “Game of Thrones” really is on the frontier of digital distribution.

And using an exotic codec might not be possible because only commonly used commercial ones are really capable of being applied at scale. Kind of like how we try to use standard parts for cars and computers.

This episode almost certainly looked fantastic in the mastering room and FX studios, where they not only had carefully calibrated monitors with which to view it but also were working with brighter footage (it would be darkened to taste by the colorist) and less or no compression. They might not even have seen the “final” version that fans “enjoyed.”

We’ll see the better copy eventually, but in the meantime the choice of darkness, fog, and furious action meant the episode was going to be a muddy, glitchy mess on home TVs.

And while we’re on the topic…

You mean it’s not my TV?

Well… to be honest, it might be that too. What I can tell you is that simply having a “better” TV by specs, such as 4K or a higher refresh rate or whatever, would make almost no difference in this case. Even built-in de-noising and de-banding algorithms would be hard pressed to make sense of “The Long Night.” And one of the best new display technologies, OLED, might even make it look worse! Its “true blacks” are much darker than an LCD’s backlit blacks, so the jump to the darkest grey could be way more jarring.

That said, it’s certainly possible that your TV is also set up poorly. Those of us sensitive to this kind of thing spend forever fiddling with settings and getting everything just right for exactly this kind of situation.

Usually “calibration” is actually a pretty simple process of making sure your TV isn’t on the absolute worst settings, which unfortunately many are out of the box. Here’s a very basic three-point guide to “calibrating” your TV:

  1. Go through the “picture” or “video” menu and turn off anything with a special name, like “TrueMotion,” “Dynamic motion,” “Cinema mode,” or anything like that. Most of these make things look worse, especially anything that “smooths” motion. Turn those off first and never ever turn them on again. Don’t mess with brightness, gamma, color space, anything you have to turn up or down from 50 or whatever.
  2. Figure out lighting by putting on a good, well-shot movie in the situation you usually watch stuff — at night maybe, with the hall light on or whatever. While the movie is playing, click through any color presets your TV has. These are often things like “natural,” “game,” “cinema,” “calibrated,” and so on and take effect right away. Some may make the image look too green, or too dark, or whatever. Play around with it and whichever makes it look best, use that one. You can always switch later – I myself switch between a lighter and darker scheme depending on time of day and content.
  3. Don’t worry about HDR, dynamic lighting, and all that stuff for now. There’s a lot of hype about these technologies and they are still in their infancy. Few will work out of the box and the gains may or may not be worth it. The truth is a well shot movie from the ’60s or ’70s can look just as good today as a “high dynamic range” show shot on the latest 8K digital cinema rig. Just focus on making sure the image isn’t being actively interfered with by your TV and you’ll be fine.

Unfortunately none of these things will make “The Long Night” look any better until HBO releases a new version of it. Those ugly bands and artifacts are baked right in. But if you have to blame anyone, blame the streaming infrastructure that wasn’t prepared for a show taking risks in its presentation, risks I would characterize as bold and well executed, unlike the writing in the show lately. Oops, sorry, couldn’t help myself.

If you really want to experience this show the way it was intended, the fanciest TV in the world wouldn’t have helped last night, though when the Blu-ray comes out you’ll be in for a treat. But here’s hoping the next big battle takes place in broad daylight.


Source: Tech Crunch

Interactive content is coming to Walmart’s Vudu & the BBC

Netflix’s early experiments with interactive content may not have always hit the mark. Its flagship effort on this front, Black Mirror: Bandersnatch, was a frustrating experiment — and now, the subject of a lawsuit. But the industry has woken up to the potential of personalized programming. Not only is Netflix pursuing more interactive content, including perhaps a rom-com, but others are following suit with interactive offerings of their own, including Amazon, Google — and now, it seems — Walmart and the BBC.

A couple of months ago, Amazon’s audiobook division Audible launched professionally performed audio stories for Alexa devices in order to test whether voice-controlled, choose-your-own-adventure-style narratives would work on smart speakers, like the Amazon Echo.

YouTube is also developing interactive programming and live specials, including its own choose-your-own-adventure-style shows.

Now, according to a new report from Bloomberg, Walmart is placing its own bet on interactive media — but with an advertising-focused twist. Through its investment in interactive media company Eko, Walmart will debut several new shows for its streaming service Vudu that feature “shoppable” advertisements. That is, instead of just seeing an ad for a product that Walmart carries, customers will be able to buy the products seen in the shows, too.

Bloomberg’s report is light on the details — more is expected at Walmart’s NewFronts announcement this week — but Eko has already developed ads tied to interactive TV where the ad that plays matches the emotion of the viewer/participant, based on their choices within the branching narrative. It also created ads that viewers click their way through, seeing different versions of the ad’s story with each click.

And today, the BBC announced it’s venturing into interactive content for the first time, too.

As part of its NewFronts announcements, the broadcaster unveiled its plans for interactive news programming within its technology news show Click.

For the show’s 1,000th episode airing later this year, it will introduce a full-length branching narrative episode, where the experience is personalized and localized to individual viewers. Unlike choose-your-own-adventure-style programs, which present only a few options to pick from, the episode will also have viewers answer questions at the beginning of the show to tailor their experience.

Part of the focus will be on presenting different versions of the program based on the viewer’s own technical knowledge, the BBC said.

A team of a dozen coders is currently building the episode, so the broadcaster can’t yet confirm how many different variations will be available in the end, or what topics will be featured on the episode. However, one topic being considered is lab-grown meat, we’re told.

The BBC says it very much plans to make interactivity an ongoing effort.

This collective rush to interactive, personalized programming may lead some to believe this is indeed the next big thing in media and entertainment. But the reality is that these shows are costly to produce and difficult to scale compared with traditional programming. Plus, viewer reaction has been mixed so far.

Some may decide further experiments aren’t worth pursuing if they don’t produce a bump in viewership, subscriber numbers, or advertiser click-throughs — depending on which metric they care about.

In the meantime, though, it will be interesting to see these different approaches to interactive content make their debut.


Source: Tech Crunch