Google looks to former Oracle exec Thomas Kurian to move cloud business along

Diane Greene announced on Friday that she was stepping down after three years running Google’s cloud business. She will stay on until the first of the year to help her successor, Thomas Kurian, in the transition. He left Oracle at the end of September after more than 20 years with the company, and is charged with making Google’s cloud division more enterprise-friendly, a goal that has oddly eluded the company.

Greene was brought on board in 2015 to bring some order and enterprise savvy to the company’s cloud business. While she did help move the company along that path and grew the cloud business, it simply hasn’t been enough. There have been rumblings for months that Greene’s time was coming to an end.

So the torch is being passed to Kurian, a man who spent over two decades at a company that might be the exact opposite of Google. He ran product at Oracle, a traditional enterprise software company. Oracle itself has struggled to make the transition to a cloud company, but Bloomberg reported in September that one of the reasons Kurian was taking a leave of absence at the time was a difference of opinion with Chairman Larry Ellison over cloud strategy. According to the report, Kurian wanted to make Oracle’s software available on public clouds like AWS and Azure (and Google Cloud). Ellison apparently didn’t agree and a couple of weeks later Kurian announced he was moving on.

Even though Kurian’s background might not seem to be perfectly aligned with Google, it’s important to keep in mind that his thinking was evolving. He was also in charge of thousands of products and helped champion Oracle’s move to the cloud. He has experience successfully nurturing products enterprises have wanted, and perhaps that’s the kind of knowledge Google was looking for in its next cloud leader.

Ray Wang, founder and principal analyst at Constellation Research says Google still needs to learn to support the enterprise, and he believes Kurian is the right person to help the company get there. “Kurian knows what’s required to make a cloud company work for enterprise customers,” Wang said.

If he’s right, perhaps an old-school enterprise executive is just what Google requires to turn its cloud division into an enterprise-friendly powerhouse. Greene has always maintained that it was still early days for the cloud, and that Google had plenty of time to capture part of the untapped market, a point she reiterated in her blog post on Friday. “The cloud space is early and there is an enormous opportunity ahead,” she wrote.

She may be right about that, but market share positions seem to be hardening. AWS, which was first to market, has an enormous market share lead, with over 30 percent by most accounts. Microsoft is the only company with the market strength at the moment to give it a run for its money, and the only other company with double-digit market share. In fact, Amazon has a larger market share than the next four companies combined, according to data from Synergy Research.

While Google is always mentioned alongside AWS and Microsoft among the Big 3 cloud companies, with around $4 billion in revenue a year it has a long way to go to reach the level of those other companies. Despite Greene’s assertions, time could be running out to make a run. Perhaps Kurian is the person to push the company to grab some of that untapped market as companies move more workloads to the cloud. At this point, Google is counting on him to do just that.


Source: Tech Crunch

The slow corrosion of techno-optimism

Two weeks from now, the Swahilipot Hub, a hackerspace / makerspace / center for techies and artists in Mombasa, Kenya, is hosting a Pwani Innovation Week, “to stimulate the innovation ecosystem in the Pwani Region.” Some of its organizers showed me around Mombasa’s cable landing site some years ago; they’re impressive people. The idea of the Hub and its forthcoming event fills me with unleavened enthusiasm, and optimism … and a bleak realization that it’s been a while since I’ve felt this way about a tech initiative.

What happened? How did we go from predictions that the tech industry would replace the hidebound status quo with a new democratized openness, power to the people, now that we all carry a networked supercomputer in our pocket … to widespread, metastasizing accusations of abuse of power? To cite just a few recent examples: Facebook being associated with genocide and weaponized disinformation; Google with sexual harassment and nonconsensual use of patients’ medical data; and Amazon’s search for a new headquarters called “shameful — it should be illegal” by The Atlantic.

To an extent some of this was inevitable. The more powerful you become, the less publicly acceptable it is to throw your increasing weight around like Amazon has done. I’m sure that to Google, subsuming DeepMind is a natural, inevitable corporate progression, a mere structural reshuffling, and it’s not their fault that the medical providers they’re working with never got explicit consent from their patients to share the provided data. Facebook didn’t know it was going to be a breeding ground for massive disinformation campaigns; it was, and remains, a colossal social experiment in which we are all participating, despite the growing impression that its negatives may outweigh its positives. And at both the individual and corporate levels, as a company grows more powerful, “power corrupts” remains an inescapable truism.

But let’s not kid ourselves. There’s more going on here than mischance and the natural side effects of growth, and this is particularly true for Facebook and Twitter. When we talk about loss of faith in tech, most of the time, I think, we mean loss of faith in social media. It’s true that we don’t want them to become censors. The problem is that they already are, as a side effect, via their algorithms which show posts and tweets with high “engagement” — i.e. how vehemently users respond. The de facto outcome is to amplify outrage, and hence disinformation.

It may well be true, in a neutral environment, that the best answer to bad speech is more speech. The problem is that Facebook and Twitter are anything but neutral environments. Their optimization for “engagement” is a Brobdingnagian thumb on their scales, tilting their playing fields into whole Himalayas of advantages for bad faith, misinformation, disinformation, outrage and hate.

This optimization isn’t even necessary for their businesses to be somewhat successful. In 2014, Twitter had a strict chronological timeline, and recorded a $100 million profit before stock-based compensation — with relatively primitive advertising infrastructure, compared to today. Twitter and Facebook could kill the disinformation problem tomorrow, with ease, by switching from an algorithmic, engagement-based timeline back to a strict chronological one.
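The structural difference between the two timelines can be sketched in a few lines. This is an illustrative toy, not how either platform actually ranks content; the `Post` fields and the single `engagement` score are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    timestamp: int      # seconds since epoch
    engagement: float   # e.g. weighted likes + replies + shares (assumed metric)

def chronological(posts):
    # strict reverse-chronological feed: newest first, engagement ignored
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_ranked(posts):
    # engagement-based feed: the most-reacted-to posts rise to the top,
    # regardless of when they were posted
    return sorted(posts, key=lambda p: p.engagement, reverse=True)

posts = [
    Post("calm update", timestamp=300, engagement=2.0),
    Post("outrage bait", timestamp=100, engagement=50.0),
    Post("news link", timestamp=200, engagement=5.0),
]

print([p.text for p in chronological(posts)])      # newest first
print([p.text for p in engagement_ranked(posts)])  # most provocative first
```

Under the chronological sort, the old but inflammatory post sinks with time; under the engagement sort it leads the feed indefinitely, which is the amplification effect described above.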

Never going to happen, of course. It would hurt their profits and their stock price too much. Just like Google was never going to consider itself bound to DeepMind’s cofounder’s assurance two years ago that “DeepMind operates autonomously from Google.” Just like Amazon was never going to consider whether siphoning money from local governments at its new so-called “co-headquarters” was actually going to be good for its new homes. Because while technology has benefited individuals, enormously, it’s really benefited technology’s megacorporations, and they’re going to follow their incentives, not ours.

Mark Zuckerberg’s latest post begins: “Many of us got into technology because we believe it can be a democratizing force for putting power in people’s hands.” I agree with that statement. Many of us did. But, looking back, were we correct? Is it really what the available evidence shows us? Has it, perhaps, put some power in people’s hands — but delivered substantially more to corporations and governments?

I fear that the available evidence seems to confirm, instead, the words of tech philosopher-king Maciej Ceglowski. His most relevant rant begins with a much simpler, punchier phrase: “Technology concentrates power.” Today it seems harder than ever to argue with that.


Source: Tech Crunch

Vision Direct reveals breach that skimmed customer credit cards

European online contact lens supplier Vision Direct has revealed a data breach which compromised full credit card details for a number of its customers, as well as personal information.

Compromised data includes full name, billing address, email address, password, telephone number and payment card information, including card number, expiry date and CVV.

It’s not yet clear how many of Vision Direct’s customers are affected — we’ve reached out to the company with questions.

Detailing the data theft in a post on its website, Vision Direct writes that customer data was compromised between 12.11am GMT November 3, 2018 and 12.52pm GMT November 8 — with any logged-in users who were ordering or updating their information on visionDirect.co.uk in that time window potentially being affected.

It says it has emailed customers to notify them of the data theft.

“This data was compromised when entering data on the website and not from the Vision Direct database,” the company writes on its website. “The breach has been resolved and our website is working normally.”

“We advise any customers who believe they may have been affected to contact their banks or credit card providers and follow their advice,” it adds.

(As an aside, Fintech startup Revolut didn’t hang around waiting for concerned customers to call — blogging today that, on hearing the breach news, it quickly identified 80 of its customers who had been affected. “As a precaution, we immediately contacted all affected customers letting them know that we had cancelled their existing cards and would be sending them a replacement one for free,” it adds.)

Vision Direct says affected payment methods include Visa, Mastercard and Maestro — but not PayPal (although it says PayPal users’ personal data may still have been swiped).

It claims existing personal data previously stored in its database was not affected by the breach — writing that the theft “only impacted new information added or updated on the VisionDirect.co.uk website” (and only during the aforementioned time window).

“All payment card data is stored with our payment providers and so stored payment card information was not affected by the breach,” it adds.

Data appears to have been compromised via a JavaScript keylogger running on the Vision Direct website, according to security researcher chatter on Twitter.

After the breach was made public, security researcher Troy Mursch quickly found a fake Google Analytics script had been running on Vision Direct’s UK website:

The malicious script also looks to have affected additional Vision Direct domains in Europe; and users of additional ecommerce sites (at least one of which they found still running the fake script)…

Another security researcher, Willem de Groot, picked up on the scam in September, writing in a blog post then that: “The domain g-analytics.com is not owned by Google, as opposed to its legitimate google-analytics.com counterpart. The fraud is hosted on a dodgy Russian/Romanian/Dutch/Dubai network called HostSailor.”
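The defense de Groot’s observation suggests is mechanical enough to sketch: compare the host of every script a page loads against an explicit allowlist, so a lookalike domain like g-analytics.com fails to match its legitimate counterpart. The allowlist and URLs below are hypothetical, and a real deployment would pair a check like this with a Content-Security-Policy header; this is only an illustration of the idea.

```python
from urllib.parse import urlparse

# Hypothetical allowlist; a real site would derive this from its own
# templates or enforce it via a Content-Security-Policy header.
ALLOWED_SCRIPT_HOSTS = {"shop.example.co.uk", "www.google-analytics.com"}

def suspicious_scripts(script_srcs):
    """Flag script URLs served from hosts outside the allowlist.

    A lookalike such as g-analytics.com does not string-match
    www.google-analytics.com, so it gets flagged.
    """
    flagged = []
    for src in script_srcs:
        host = urlparse(src).hostname or ""
        if host not in ALLOWED_SCRIPT_HOSTS:
            flagged.append(src)
    return flagged

srcs = [
    "https://www.google-analytics.com/analytics.js",
    "https://g-analytics.com/analytics.js",  # the fake domain reported here
]
print(suspicious_scripts(srcs))  # only the lookalike is flagged
```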

He also found the malware had “spread to various websites”, saying its creator had crafted “14 different copies over the course of 3 weeks”, and tailored some versions to include a fake payment popup form “that was built for a specific website”.

“These instances are still harvesting passwords and identities as of today,” de Groot warned about two months before Vision Direct got breached.


Source: Tech Crunch

Microsoft to shut down HockeyApp

Microsoft announced plans to shut down HockeyApp and replace it with Visual Studio App Center. The company acquired the startup behind HockeyApp back in 2014. And if you’re still using HockeyApp, the service will officially shut down on November 16, 2019.

HockeyApp was a service that let you distribute beta versions of your app and get crash reports and analytics. There are other similar SDKs, such as Google’s Crashlytics, TestFairy, Appaloosa, DeployGate and native beta distribution channels (Apple’s TestFlight and Google Play Store’s beta feature).

Microsoft hasn’t really been hiding its plans to shut down the service. Last year, the company called App Center “the future of HockeyApp”. The company has also been cloning your HockeyApp projects into App Center for a while.

It doesn’t mean that you’ll find the same features in App Center just yet. The company has put up a page with a feature roadmap. Let’s hope that Microsoft has enough time to release everything before HockeyApp shuts down.


Source: Tech Crunch

China’s hottest news app Jinri Toutiao announces new CEO

You may not have heard of ByteDance, but you probably know its red-hot video app TikTok, which gobbled up Musical.ly in August. The Beijing-based company also runs a popular news aggregator called Jinri Toutiao, which means “today’s headlines” in Chinese, and the app just named a new CEO.

At a company event on Saturday, Chen Lin, an early ByteDance employee, made his first appearance as Toutiao’s new CEO. That means Toutiao’s creator Zhang Yiming has handed the helm to Chen, who previously headed product management for the news app.

Zhang’s not going anywhere though. A company spokesperson told TechCrunch that he remains as the CEO of ByteDance, which operates a slew of media apps besides TikTok and Toutiao to lock horns with China’s tech giants Baidu, Alibaba, and Tencent.

The story of ByteDance started when Zhang created Toutiao in 2012. The news app collects articles and videos from third-party providers and uses AI algorithms to personalize content for users. Toutiao flew off the shelves and soon went on to incubate new media products, including a Quora-like Q&A platform and TikTok, known as Douyin in China.

The handover may signal a need for Zhang to step back from daily operations in his brainchild and oversee strategies for ByteDance, which has swollen into the world’s highest-valued startup. The company spokesperson did not provide further details on the reshuffle.

Toutiao itself is installed on over 240 million monthly unique devices, which makes it a top news aggregator in China, according to data analytics firm iResearch. TikTok and Douyin collectively command 500 million monthly active users around the world, while Musical.ly has a user base of 100 million, the company previously announced.

Toutiao’s success has prompted Tencent, which is best known for creating WeChat and controlling a large slice of China’s gaming market, to build its own AI-powered news app. Toutiao’s fledgling advertising business has also stepped on the toes of Baidu, which makes the bulk of its income from search ads. More recently, Toutiao muscled in on Alibaba’s territory with an ecommerce feature.

At the Saturday event, Chen also shared updates that hint at Toutiao’s growing ambition. For one, the news goliath is working to help content providers cash in through a suite of tools, for instance, ecommerce sales and virtual gifts from livestreaming. The move is poised to help Toutiao retain quality creators as the race to grab digital eyeball time intensifies in China.

Toutiao also recently launched its first wave of “mini programs,” or stripped-down versions of native apps that operate inside super apps like Toutiao. Tencent has proven the system to be a big traffic driver after WeChat mini programs crossed two million daily users.

Lastly, Toutiao said it will take more proactive measures to monitor what users consume. In recent months, the news app has run afoul of media regulators, who slammed it for hosting illegal and “inappropriate” content. Douyin has faced similar criticisms. While ByteDance prides itself on automated distribution, the company has demonstrated a willingness to abide by government rules by hiring thousands of human censors and using AI to filter content.


Source: Tech Crunch

Quantum computing, not AI, will define our future

The word “quantum” gained currency in the late 20th century as a descriptor signifying something so significant, it defied the use of common adjectives. For example, a “quantum leap” is a dramatic advancement (also an early-’90s television series starring Scott Bakula).

At best, that is an imprecise (though entertaining) definition. When “quantum” is applied to “computing,” however, we are indeed entering an era of dramatic advancement.

Quantum computing is technology based on the principles of quantum theory, which explains the nature of energy and matter on the atomic and subatomic level. It relies on the existence of mind-bending quantum-mechanical phenomena, such as superposition and entanglement.

Erwin Schrödinger’s famous 1930s thought experiment involving a cat that was both dead and alive at the same time was intended to highlight the apparent absurdity of superposition, the principle that quantum systems can exist in multiple states simultaneously until observed or measured. Today quantum computers contain dozens of qubits (quantum bits), which take advantage of that very principle. Each qubit exists in a superposition of zero and one (i.e., has non-zero probabilities to be a zero or a one) until measured. The development of qubits has implications for dealing with massive amounts of data and achieving the previously unattainable levels of computing efficiency that are the tantalizing potential of quantum computing.
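The measurement behavior described above can be mimicked classically for intuition (a real quantum computer cannot be simulated this cheaply once entanglement and interference enter the picture, but single-qubit measurement statistics can). A rough sketch, with the state representation and function names invented for the example:

```python
import random

def make_qubit(alpha, beta):
    # state alpha|0> + beta|1>, normalized so the probabilities sum to 1
    norm = (abs(alpha) ** 2 + abs(beta) ** 2) ** 0.5
    return (alpha / norm, beta / norm)

def measure(qubit):
    # measurement collapses the superposition to a definite 0 or 1,
    # with probability |alpha|^2 of reading 0
    alpha, _beta = qubit
    return 0 if random.random() < abs(alpha) ** 2 else 1

random.seed(0)
q = make_qubit(1, 1)  # equal superposition: 50/50 odds of 0 or 1
counts = {0: 0, 1: 0}
for _ in range(10_000):
    counts[measure(q)] += 1
print(counts)  # roughly an even split between 0 and 1
```

Until `measure` is called, the state holds both amplitudes at once; only the act of measurement forces a definite answer, which is the point of Schrödinger’s cat.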

While Schrödinger was thinking about zombie cats, Albert Einstein was observing what he described as “spooky action at a distance,” particles that seemed to be communicating faster than the speed of light. What he was seeing were entangled electrons in action. Entanglement refers to the observation that the state of particles from the same quantum system cannot be described independently of each other. Even when they are separated by great distances, they are still part of the same system. If you measure one particle, the rest seem to know instantly. The current record distance for measuring entangled particles is 1,200 kilometers or about 745.6 miles. Entanglement means that the whole quantum system is greater than the sum of its parts.

If these phenomena make you vaguely uncomfortable so far, perhaps I can assuage that feeling simply by quoting Schrödinger, who purportedly said after his development of quantum theory, “I don’t like it, and I’m sorry I ever had anything to do with it.”

Various parties are taking different approaches to quantum computing, so a single explanation of how it works would be subjective. But one principle may help readers get their arms around the difference between classical computing and quantum computing. Classical computers are binary. That is, they depend on the fact that every bit can exist only in one of two states, either 0 or 1. Schrödinger’s cat merely illustrated that subatomic particles could exhibit innumerable states at the same time. If you envision a sphere, a binary state would be if the “north pole,” say, was 0, and the south pole was 1. In a qubit, the entire sphere can hold innumerable other states and relating those states between qubits enables certain correlations that make quantum computing well-suited for a variety of specific tasks that classical computing cannot accomplish. Creating qubits and maintaining their existence long enough to accomplish quantum computing tasks is an ongoing challenge.

IBM researcher Jerry Chow in the quantum computing lab at IBM’s T.J. Watson Research Center.

Humanizing Quantum Computing

These are just the beginnings of the strange world of quantum mechanics. Personally, I’m enthralled by quantum computing. It fascinates me on many levels, from its technical arcana to its potential applications that could benefit humanity. But a qubit’s worth of witty obfuscation on how quantum computing works will have to suffice for now. Let’s move on to how it will help us create a better world.

Quantum computing’s purpose is to aid and extend the abilities of classical computing. Quantum computers will perform certain tasks much more efficiently than classical computers, providing us with a new tool for specific applications. Quantum computers will not replace their classical counterparts. In fact, quantum computers require classical computers to support their specialized abilities, such as systems optimization.

Quantum computers will be useful in advancing solutions to challenges in diverse fields such as energy, finance, healthcare and aerospace. Their capabilities will help us cure diseases, improve global financial markets, detangle traffic, combat climate change, and more. For instance, quantum computing has the potential to speed up pharmaceutical discovery and development, and to improve the accuracy of the atmospheric models used to track and explain climate change and its adverse effects.

I call this “humanizing” quantum computing, because such a powerful new technology should be used to benefit humanity, or we’re missing the boat.

Intel’s 17-qubit superconducting test chip for quantum computing has unique features for improved connectivity and better electrical and thermo-mechanical performance. (Credit: Intel Corporation)

An Uptick in Investments, Patents, Startups, and more

That’s my inner evangelist speaking. In factual terms, the latest verifiable global figures for investment and patent applications reflect an uptick in both areas, a trend that’s likely to continue. Going into 2015, non-classified national investments in quantum computing reflected an aggregate global spend of about $1.75 billion, according to The Economist. The European Union led with $643 million. The U.S. was the top individual nation with $421 million invested, followed by China ($257 million), Germany ($140 million), Britain ($123 million) and Canada ($117 million). Twenty countries have invested at least $10 million in quantum computing research.

At the same time, according to a patent search enabled by Thomson Innovation, the U.S. led in quantum computing-related patent applications with 295, followed by Canada (79), Japan (78), Great Britain (36), and China (29). The number of patent families related to quantum computing was projected to increase 430 percent by the end of 2017.

The upshot is that nations, giant tech firms, universities, and start-ups are exploring quantum computing and its range of potential applications. Some parties (e.g., nation states) are pursuing quantum computing for security and competitive reasons. It’s been said that quantum computers will break current encryption schemes, kill blockchain, and serve other dark purposes.

I reject that proprietary, cutthroat approach. It’s clear to me that quantum computing can serve the greater good through an open-source, collaborative research and development approach that I believe will prevail once wider access to this technology is available. I’m confident crowd-sourcing quantum computing applications for the greater good will win.

If you want to get involved, check out the free tools that the household-name computing giants such as IBM and Google have made available, as well as the open-source offerings out there from giants and start-ups alike. Actual time on a quantum computer is available today, and access opportunities will only expand.

In keeping with my view that proprietary solutions will succumb to open-source, collaborative R&D and universal quantum computing value propositions, allow me to point out that several dozen start-ups in North America alone have jumped into the QC ecosystem along with governments and academia. Names such as Rigetti Computing, D-Wave Systems, 1Qbit Information Technologies, Inc., Quantum Circuits, Inc., QC Ware and Zapata Computing, Inc. may become well-known, or they may be subsumed by bigger players or succumb to their burn rate — anything is possible in this nascent field.

Developing Quantum Computing Standards

Another way to get involved is to join the effort to develop quantum computing-related standards. Technical standards ultimately speed the development of a technology, introduce economies of scale, and grow markets. Quantum computer hardware and software development will benefit from a common nomenclature, for instance, and agreed-upon metrics to measure results.

Currently, the IEEE Standards Association Quantum Computing Working Group is developing two standards. One is for quantum computing definitions and nomenclature so we can all speak the same language. The other addresses performance metrics and performance benchmarking to enable measurement of quantum computers’ performance against classical computers and, ultimately, each other.

The need for additional standards will become clear over time.


Source: Tech Crunch

WhatsApp could wreck Snapchat again by copying ephemeral messaging

WhatsApp already ruined Snapchat’s growth once. WhatsApp Status, its clone of Snapchat Stories, now has 450 million daily active users compared to Snapchat’s 188 million. That’s despite its 24-hour disappearing slideshows missing tons of features, including augmented reality selfie masks, animated GIFs, or personalized avatars like Bitmoji. A good-enough version of Stories, conveniently baked into the messaging app beloved in the developing world where Snapchat never took hold, proved massively successful. Snapchat actually lost total daily users in Q2 and Q3 2018, and even lost Rest of World daily users in Q2, despite that being the region late-stage social networks rely on for growth.

That’s why it’s so surprising that WhatsApp hasn’t already copied the other big Snapchat feature, ephemeral messaging. When chats can disappear, people feel free to be themselves — more silly, more vulnerable, more expressive. For teens who’ve purposefully turned away from the permanence of the Facebook profile timeline, there’s a sense of freedom in ephemerality. You don’t have to worry about old stuff coming back to haunt or embarrass you. Snapchat rode this idea to become a cultural staple for the younger generation.

Yet right now WhatsApp only lets you send permanent photos, videos, and texts. There is an Unsend option, but it only works for an hour after a message is sent. That’s far from the default ephemerality of Snapchat where seen messages disappear once you close the chat window unless you purposefully tap to save them.

Instagram has arrived at a decent compromise. You can send both permanent and temporary photos and videos. Text messages are permanent by default, but you can unsend even old ones. The result is the flexibility to both chat through expiring photos and off-the-cuff messages knowing they will or can disappear, while also being able to have reliable, utilitarian chats and privately share photos for posterity without the fear that one wrong tap could erase them. When Instagram Direct added ephemeral messaging, it saw a growth spurt to over 375 million monthly users as of April 2017.

Snapchat lost daily active users the past two quarters

WhatsApp should be able to build this pretty easily. Add a timer option when people send media so photos or videos can disappear after 10 seconds, a minute, an hour, or a day. Let people add a similar timer to specific messages they send, or set a per chat thread default for how long your messages last similar to fellow encrypted messaging app Signal.

Snap CEO Evan Spiegel’s memo, leaked by Cheddar’s Alex Heath, indicates that he views chat with close friends as the linchpin of his app, one that was hampered by this year’s disastrous redesign. He constantly refers to Snapchat as the fastest way to communicate. That might be true for images but not necessarily text, as BTIG’s Rich Greenfield points out, citing how expiring text can cause conversations to break down. It’s likely that Snapchat will double down on messaging now that Stories has been copied to death.

Given its interest in onboarding older users, that might mean making texts easier to keep permanent or at least lengthening how long they last before they disappear. And with its upcoming Project Mushroom re-engineering of the Snapchat app so it works better in developing markets, Snap will increasingly try to become WhatsApp.

…Unless WhatsApp can become Snapchat first. Spiegel proved people want the flexibility of temporary messaging. Who cares who invented something if it can be brought to more people to deliver more joy? WhatsApp should swallow its pride and embrace the ephemeral.


Source: Tech Crunch

How cities can fix tourism hell

A steep and rapid rise in tourism has left behind a wake of economic and environmental damage in cities around the globe. In response, governments have enacted policies that attempt to limit the number of visitors who come in. We’ve decided to spare you from any more Amazon HQ2 talk and instead focus on why cities should shy away from reactive policies and should instead utilize their growing set of technological capabilities to change how they manage tourists within city lines.

Consider this an ongoing discussion about Urban Tech, its intersection with regulation, issues of public service, and other complexities that people have full PhDs on. I’m just a bitter, born-and-bred New Yorker trying to figure out why I’ve been stuck in between subway stops for the last 15 minutes, so please reach out with your take on any of these thoughts: @Arman.Tabatabai@techcrunch.com.

The struggle for cities to manage “Overtourism”

Well – it didn’t take long for the phrase “overtourism” to get overused. The popular buzzword describes the influx of tourists who flood a location and damage the quality of life for full-time residents. The term has become such a common topic of debate in recent months that it was even featured this past week on Oxford Dictionaries’ annual “Words of the Year” list.

But the expression’s frequent appearance in headlines highlights the growing number of cities plagued by the externalities from rising tourism.

In the last decade, travel has become easier and more accessible than ever. Low-cost ticketing services and apartment-rental companies have brought down the costs of transportation and lodging; the ubiquity of social media has ticked up tourism marketing efforts and consumer demand for travel; economic globalization has increased the frequency of business travel; and rising incomes in emerging markets have opened up travel to many who previously couldn’t afford it.

Now, unsurprisingly, tourism has spiked dramatically, with the UN’s World Tourism Organization (UNWTO) reporting that tourist arrivals grew an estimated 7% in 2017 – materially above the roughly 4% seen consistently since 2010. The sudden and rapid increase of visitors has left many cities and residents overwhelmed, dealing with issues like overcrowding, pollution, and rising costs of goods and housing.

The problems cities face with rising tourism are only set to intensify. And while it’s hard for me to imagine when walking shoulder-to-shoulder with strangers on tight New York streets, the number of tourists in major cities like these can very possibly double over the next 10 to 15 years.

China and other emerging markets have already seen significant growth in the middle class and have a long runway ahead. According to the Organization for Economic Co-operation and Development (OECD), the global middle class is expected to rise from the 1.8 billion observed in 2009 to 3.2 billion by 2020 and 4.9 billion by 2030. The new money brings with it a new wave of travelers looking to catch a selfie with the Eiffel Tower, with the UNWTO forecasting international tourist arrivals to increase from 1.3 billion to 1.8 billion by 2030.

With a growing sense of urgency around managing their guests, more and more cities have been implementing policies focused on limiting the number of tourists that visit altogether by imposing hard visitor limits, tourist taxes or otherwise.

But as the UNWTO points out in its report on overtourism, the negative effects of swelling tourism are not tied solely to the number of visitors in a city; they are also driven largely by tourism seasonality, tourist behavior, the behavior of the resident population, and the functionality of city infrastructure. Cities with relatively few tourists, for example, have experienced issues similar to those seen in cities with millions.

While many cities have focused on reactive policies meant to quell tourism, they should instead focus on technology-driven solutions that can help manage tourist behavior and make structural changes to city tourism infrastructure, while allowing cities to continue capturing the significant revenue stream that tourism provides.

Smart city tech enabling more “tourist-ready” cities


Yes, cities are faced with the headwind of a growing tourist population, but city policymakers also benefit from the tailwind of having more technological capabilities than their predecessors. With the rise of smart city and Internet of Things (IoT) initiatives, many cities are equipped with tools such as connected infrastructure, lidar sensors, high-quality broadband, and troves of data that make it easier to manage issues like congestion and strained infrastructure.

On the congestion side, we have already seen companies use geo-tracking and other smart city technologies to manage congestion around event venues, roads, and stores. Cities can apply the same strategies to manage the flow of tourists and residents.
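As a rough illustration of the idea, here is a minimal, hypothetical sketch (the zone names, capacities, and ping format are all invented for illustration): anonymized location pings are bucketed by zone, and any zone whose count exceeds a capacity threshold is flagged so visitors can be nudged toward quieter areas.

```python
from collections import Counter

# Hypothetical zone capacities -- real deployments would derive these
# from infrastructure data rather than hard-coded constants.
ZONE_CAPACITY = {"louvre_plaza": 3, "museum_row": 5, "riverfront": 4}

def congested_zones(pings, capacity=ZONE_CAPACITY):
    """Return zones whose current ping count exceeds their capacity."""
    counts = Counter(zone for _device, zone in pings)
    return sorted(z for z, n in counts.items() if n > capacity.get(z, 0))

pings = [
    ("device_a", "louvre_plaza"), ("device_b", "louvre_plaza"),
    ("device_c", "louvre_plaza"), ("device_d", "louvre_plaza"),
    ("device_e", "riverfront"),
]
print(congested_zones(pings))  # ['louvre_plaza']
```

A real system would of course ingest streaming sensor data and respect privacy constraints; the point is only that the core congestion signal is simple once movement data is available.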

And while you can’t necessarily prevent people from visiting the Louvre or the Colosseum, cities are using a variety of methods to incentivize the use of less congested spaces or to spread out the times at which people flock to highly trafficked locations, using tools such as real-time congestion notifications, data-driven ticketing schedules for museums and landmarks, and digitally guided tours along less congested routes.
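A data-driven ticketing schedule can be sketched in a few lines. This is a hypothetical, simplified example (the slot times and capacity are invented): each booking request is assigned to the earliest time slot with remaining capacity, spreading arrivals across the day instead of letting everyone cluster at opening time.

```python
# Hypothetical museum ticketing slots and per-slot capacity.
SLOTS = ["09:00", "11:00", "13:00", "15:00"]
SLOT_CAPACITY = 2

def assign_slots(requests, slots=SLOTS, capacity=SLOT_CAPACITY):
    """Greedily place each visitor in the earliest slot with room."""
    booked = {s: [] for s in slots}
    waitlist = []
    for visitor in requests:
        for s in slots:
            if len(booked[s]) < capacity:
                booked[s].append(visitor)
                break
        else:  # no slot had room
            waitlist.append(visitor)
    return booked, waitlist

booked, waitlist = assign_slots(["v1", "v2", "v3", "v4", "v5"])
print(booked["09:00"])  # ['v1', 'v2']
print(booked["13:00"])  # ['v5']
```

Real schedulers would weight slots by historical demand rather than filling greedily, but the mechanism is the same: capacity caps per time window, not per day.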

Companies and municipalities in cities like London and Antwerp are already working on using tourist movement tracking to manage crowds and help notify and guide tourists to certain locations at the most efficient times. Other cities have developed augmented reality tours that can guide tourists in real-time to less congested spaces by dynamically adjusting their routes.

A number of startups are also working with cities to use collected movement data to help reshape infrastructure to better fit the long-term needs and changing demographics of their occupants. Companies like Stae or Calthorpe Analytics apply analytics to movement, permitting, business trends and other signals to help cities implement more effective zoning and land-use plans. City planners can use the same technology to design street layouts that increase usable sidewalk space and to better allocate zoning for hotels, retail and other tourist-friendly attractions.

Focusing counter-overtourism efforts on smart city technologies can help adjust the behavior and movement of travelers in a city through a number of avenues, in a way tourist caps or tourist taxes do not.

And at the end of the day, tourism is one of the largest sources of city income, meaning it also plays a vital role in determining the budgets cities have to plow back into transit, roads, digital infrastructure, the energy grid, and other pain points that plague residents and travelers alike year-round. By disallowing or disincentivizing tourism, cities can lose valuable capital for infrastructure, which can exacerbate congestion problems in the long run.

Some cities have justified tourist taxes by saying the revenue would be invested in fixing the issues overtourism has caused. But the daily or upon-entry tourist taxes we’ve seen so far haven’t come close to offsetting the lost revenue from disincentivized tourists, who at the start of 2017 spent nearly $700 per day all-in in the US on transportation, souvenirs and other expenses, according to the U.S. National Travel and Tourism Office.
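A back-of-the-envelope calculation shows why the math is hard to make work. Every figure below is hypothetical except the roughly $700/day average spend cited above: even if a $3/day tax deterred just 1% of visitor-days, the lost spending would dwarf the tax take.

```python
# All numbers hypothetical except daily_spend (cited from the
# U.S. National Travel and Tourism Office figure above).
daily_spend = 700        # average all-in tourist spend per day (USD)
tax_per_day = 3          # hypothetical per-day tourist tax (USD)
visitor_days = 1_000_000 # hypothetical annual visitor-days
deterred_share = 0.01    # hypothetical: 1% of visits deterred by the tax

tax_revenue = tax_per_day * visitor_days * (1 - deterred_share)
lost_spending = daily_spend * visitor_days * deterred_share

print(f"tax revenue:   ${tax_revenue:,.0f}")    # $2,970,000
print(f"lost spending: ${lost_spending:,.0f}")  # $7,000,000
```

Under these assumptions the city collects under $3 million in tax while forgoing $7 million in tourist spending, which is the trade-off the article describes.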

In 2017, international tourism alone drove $1.6 trillion in earnings, and in 2016, travel and tourism accounted for roughly one in 10 jobs in the global economy, according to the World Travel and Tourism Council. And the benefits of travel are not only economic: cross-border tourism promotes the transfer of culture, knowledge and experience.

But to be clear, I don’t mean to say smart city technology initiatives alone are going to solve overtourism. The significant wave of growth in the number of global travelers is a serious challenge and many of the issues that result from spiking tourism, like housing affordability, are incredibly complex and come down to more than just data. However, I do believe cities should be focused less on tourist reduction and more on solutions that enable tourist management.

Utilizing and allocating more resources to smart city technologies can not only limit the negative impacts of overtourism more effectively and structurally, but also let cities benefit from a significant and fast-growing tourism revenue stream. Cities can then create a virtuous cycle of reinvestment, plowing that money back into their infrastructure to better manage visitor growth, resident growth, and quality of life over the long term. Cities can have their cake and eat it too.

And lastly, some reading while in transit:


Source: Tech Crunch

Stoop aims to improve your news diet with an easy way to find and read newsletters

Stoop is looking to provide readers with what CEO Tim Raybould described as “a healthier information diet.”

To do that, it’s launched an iOS and Android app where you can browse through different newsletters based on category, and when you find one you like, it will direct you to the standard subscription page. If you provide your Stoop email address, you’ll then be able to read all your favorite newsletters in the app.

“The easiest way to describe it is: It’s like a podcast app but for newsletters,” Raybould said. “It’s a big directory of newsletters, and then there’s the side where you can consume them.”

Why newsletters? Well, he argued that they’re one of the key ways for publishers to develop a direct relationship with their audience. Podcasts are another, but he said newsletters are “an order of magnitude more important” because you can convey more information with the written word and there are lower production costs.

That direct relationship is obviously an important one for publishers, particularly as Facebook’s shifting priorities have made it clear that publications need to “establish the right relationship to readers, as opposed to renting someone else’s audience.” But Raybould said it’s better for readers too, because you’ll spend your time on journalism that’s designed to provide value, not just attract clicks: “You will find you use the newsfeed less and consume more of your content directly from the source.”

“Most content [currently] is distributed through a third party and that software is choosing what to surface next not based on the quality of the content, but based on what’s going to keep people scrolling,” he added. “Trusting an algorithm with what you’re going to read next is like trusting a nutritionist who’s incentivized based on how many chips you eat.”

Stoop Discover

So Raybould is a fan of newsletters, but he said the current system is pretty cumbersome. There’s no one place where you can find new newsletters to read, and you may also hesitate to subscribe to another one because it “crowds out your personal inbox.” So Stoop is designed to reduce the friction, making it easy to subscribe to and read as many newsletters as your heart desires.

Raybould said the team has already curated a directory of around 650 newsletters (including TechCrunch’s own Daily Crunch), and the list continues to grow. Additional features include a “shuffle” option for discovering new newsletters, plus the ability to share a newsletter with other Stoop users or forward it to your personal address, from which it can be sent along to whomever you like.

The Stoop app is free, with Raybould hoping to eventually add a premium plan for features like full newsletter archives. He’s also hoping to collaborate with publishers — initially, most publishers will probably treat Stoop readers as just another set of subscribers, but Raybould said they could get access to additional analytics and also make subscriptions easier by integrating with the app’s instant subscribe option.

And the company’s ambitions even go beyond newsletters. Raybould said Stoop is the first consumer product from a team with a larger mission to help publishers. They’re also working on OpenBundle, an initiative around bundled news subscriptions with a planned launch in 2019 or 2020.

“The overarching thing that is the same is the OpenBundle thesis and the Stoop thesis,” he said. “Getting publishers back in the role of delivering content directly to the audience is the antidote to the newsfeed.”


Source: Tech Crunch