Twitter unveils new political ad guidelines set to go into effect this summer

Following the unrelenting wave of controversy around Russian interference in the 2016 presidential election, Twitter announced new guidelines today for political advertisements on the social networking site.

The policy, which will go into effect this summer ahead of the midterm elections, aims to prevent foreign election interference by requiring organizations to self-identify and certify that they are based in the U.S. Organizations registered with the Federal Election Commission will have to present their FEC ID, while other organizations will have to present a notarized form, the company says.

Orgs buying political ads will also have to comply with a stricter set of rules for how they present their profiles. Twitter will mandate that the account header, profile photo and organization name are consistent with how the organization presents itself elsewhere online, a policy likely designed to ensure that orgs don’t obfuscate their identity or present their accounts in a way that would mislead users about which political organization an account belongs to.

In a blog post, the company noted that there would also be a special type of identifying badge for promoted content from these certified advertisers in the future.

Back in April — in the midst of Facebook’s Cambridge Analytica scandal — Twitter publicly shared its support for the Honest Ads Act. This Political Campaigning Policy will be followed up by the company’s work on a unified Ads Transparency Center which the company has promised “will dramatically increase transparency for political and issue ads, providing people with significant detail on the origin of each ad.”


Source: Tech Crunch

The AI in your non-autonomous car

Sorry. Your next car probably won’t be autonomous. But, it will still have artificial intelligence (AI).

While most of the attention has been on advanced driver assistance systems (ADAS) and autonomous driving, AI will penetrate far deeper into the car. These overlooked areas offer fertile ground for incumbents and startups alike. Where is that fertile ground, and where is the opportunity for startups?

Inside the cabin

Inward-facing AI cameras can be used to prevent accidents before they occur. These are already widely deployed in commercial vehicles and trucks, monitoring drivers for inebriation, distraction, drowsiness and fatigue and alerting them accordingly. ADAS, inward-facing cameras and coaching have been shown to drastically decrease insurance costs for commercial vehicle fleets.

The same technology is beginning to penetrate personal vehicles to monitor driver-related behavior for safety purposes. AI-powered cameras also can identify when children and pets are left in the vehicle to prevent heat-related deaths (on average, 37 children die from heat-related vehicle deaths in the U.S. each year).

Autonomous ridesharing will need to detect passenger occupancy and seat belt engagement, so that an autonomous vehicle can ensure passengers are safely on board before driving off. They’ll also need to identify that items such as purses or cellphones are not left in the vehicle upon departure.

AI also can help reduce crash severity in the event of an accident. Computer vision and sensor fusion will detect whether seat belts are fastened and estimate body size to calibrate airbag deployment. Real-time passenger tracking and calibration of airbags and other safety features will become a critical design consideration for the cabin of the future.

Beyond safety, AI also will improve the user experience. Vehicles as a consumer product have lagged far behind laptops, tablets, TVs and mobile phones. Gesture recognition and natural language processing make perfect sense in the vehicle, and will make it easier for drivers and passengers to adjust driving settings, control the stereo and navigate.

Under the hood

AI also can be used to help diagnose and even predict maintenance events. Currently, vehicle sensors produce a huge amount of data, but only spit out simple codes that a mechanic can use for diagnosis. Machine learning may be able to make sense of widely disparate signals from all the various sensors for predictive maintenance and to prevent mechanical issues. This type of technology will be increasingly valuable for autonomous vehicles, which will not have access to hands-on interaction and interpretation.
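As a hedged sketch of what “making sense of disparate signals” can mean at its simplest, the snippet below flags a sensor whose latest reading drifts far from its own recent history. The sensor names, values, and the 3-sigma threshold are all hypothetical, not any automaker’s actual system:

```python
import statistics

def anomaly_scores(readings, history):
    """Z-score each sensor's latest reading against its own recent history."""
    scores = {}
    for sensor, value in readings.items():
        mean = statistics.fmean(history[sensor])
        stdev = statistics.stdev(history[sensor])
        scores[sensor] = abs(value - mean) / stdev
    return scores

def flag_anomalies(scores, threshold=3.0):
    """Return sensors whose latest reading sits more than `threshold` sigmas out."""
    return sorted(s for s, z in scores.items() if z > threshold)

# Hypothetical engine telemetry: a healthy oil-pressure reading and a
# coolant-temperature spike far outside its recent range.
history = {
    "coolant_temp_c": [88, 90, 89, 91, 90, 89, 90, 88],
    "oil_pressure_kpa": [310, 305, 312, 308, 311, 309, 307, 310],
}
latest = {"coolant_temp_c": 118, "oil_pressure_kpa": 309}

print(flag_anomalies(anomaly_scores(latest, history)))  # ['coolant_temp_c']
```

A production system would of course learn correlations across sensors rather than score each channel independently, but the principle of comparing live signals to learned baselines is the same.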

AI also can be used to detect software anomalies and cybersecurity attacks. Whether the anomaly is malicious or just buggy code, it may have the same effect. Vehicles will need to identify problems quickly before they can propagate on the network.

Cars as mobile probes

In addition to providing ADAS and self-driving features, AI can be deployed on vision systems (e.g. cameras, radar, lidar) to turn the vehicle into a mobile probe. AI can be used to create high-definition maps for vehicle localization, identify road locations and building facades to supplement in-dash navigation systems, monitor traffic and pedestrian movements, and monitor crime, as well as serve a variety of emerging use cases.

Efficient AI will win

Automakers and suppliers are experimenting to see which features are technologically possible and commercially feasible. Many startups are tackling niche problems, and some of these solutions will prove their value. In the longer term, there will be so many possible features (some cataloged here, some yet unknown) that they will compete for space on cost-constrained hardware.

Making a car is not cheap, and consumers are price-sensitive. Hardware tends to be the cost driver, so these piecewise AI solutions will need to be deployed simultaneously on the same hardware. The power requirements will add up quickly, and even contribute significantly to the total energy consumption of the vehicle.

It has been shown that for some computations, algorithmic advances have outpaced Moore’s Law for hardware. Several companies have started building processors designed for AI, but these won’t be cheap. Algorithmic development in AI will go a long way toward enabling the intelligent car of the future. Fast, accurate, low-memory, low-power algorithms, like XNOR.ai’s,* will be required to “stack” these features on low-cost, automotive-grade hardware.
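XNOR.ai’s name references binarized neural networks, in which weights and activations are constrained to +1/-1 so that an expensive dot product reduces to an XNOR plus a popcount. Here is a toy sketch of that core trick, not XNOR.ai’s actual implementation:

```python
def binarize(vec):
    """Map real values to bits: 1 for values >= 0 (representing +1), 0 for < 0 (-1)."""
    return [1 if x >= 0 else 0 for x in vec]

def xnor_dot(a_bits, b_bits):
    """Dot product of two {-1,+1} vectors computed with XNOR and a popcount.

    XNOR is 1 wherever the signs agree. If m of n bits match, the
    {-1,+1} dot product is m - (n - m) = 2m - n.
    """
    n = len(a_bits)
    matches = sum(1 for a, b in zip(a_bits, b_bits) if not (a ^ b))  # XNOR + popcount
    return 2 * matches - n

a = [0.5, -1.2, 3.0, -0.1]
b = [1.1, -0.3, -2.0, 0.7]
# Signs of a: +,-,+,-  signs of b: +,-,-,+  -> agree on 2 of 4 -> dot = 0
print(xnor_dot(binarize(a), binarize(b)))  # 0
```

On real hardware the bits are packed into machine words, so each XNOR/popcount instruction processes 32 or 64 multiply-accumulates at once, which is where the speed and power savings come from.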

Your next car will likely have several embedded AI features, even if it doesn’t drive itself.

* Full disclosure: XNOR.ai is an Autotech Ventures portfolio company.


Source: Tech Crunch

Hitlist’s new premium service puts a travel agent in your pocket

Hitlist, a several-year-old app for finding cheap flights, has begun rolling out a subscription tier that will turn it into something more akin to your own mobile travel agent. While the core app experience, which monitors airlines for flight deals, will continue to be free, the new premium upgrade will unlock a handful of other useful features, including advanced filtering, exclusive members-only fares, and even custom travel advice from the Hitlist team.

The idea, says founder and CEO Gillian Morris, goes back to the original idea that inspired her to create Hitlist in the first place.

“Going back to the very beginning, Hitlist was essentially me giving travel advice to friends,” she says. “People had the time, inclination, and money to travel, but didn’t book because they got lost in the search process. When I sent custom advice, like ‘you said you wanted to go to Istanbul, there are $500 direct round trips in May available right now, that’s a good price and the weather will be good and the tulip festival, this unique cultural experience, will be happening’ – 4 out of 5 people would book,” Morris explains.

“I wouldn’t be able to scale that level of advice at the beginning, so we focused on just the flight deals. But now we have four years’ worth of data that we can learn from – browsing and searching within Hitlist – and we can start to build more sophisticated models that will inspire and enable people to travel at scale,” she says.

The new subscription feature will offer users the ability to better filter airline deals by things like the carrier, number of stops, and the time of day of both the departure and return.

It’s also working with airlines to market “closed group” fares that aren’t accessible through flight search engines, but are available to select travel agents and other resellers that market to a closed user group. These will be flagged in the app as “members-only” fares.

Hitlist says it’s currently working with one airline and, through a third party, with several more. But because this is still in a pilot phase and is only live with select users, it can’t say which.

Meanwhile, the app will continue to focus on helping users find the best, low-cost fares – not only by tracking deals, but also by bundling low-cost carriers and traditional airlines together. However, it won’t promote fares that are likely to be cancelled by airlines, nor will it venture into legally gray areas like skipping legs of a flight (like Skiplagged) to find cheaper fares.

Beyond just finding cheap flights – which remains a competitive space – Hitlist aims to offer users a more personalized experience, more like what you would have gotten with a travel agent in the past.

For starters, it developed a proprietary machine learning algorithm that sorts through over 50 million fares’ worth of data per day to find deals that appeal to each individual user. It also learns from how you use it – browsing flights, or how you react to alerts, for example.

“The app gets to know you better over time, just like a human travel agent would,” says Morris. “With the premium upgrade, we’re gaining more insight to the traveler’s preferences that helps us to develop even more sophisticated A.I. to provide advice and make sure you’re getting the best deal.”

When you find a flight you like, Hitlist will direct you over to a partner’s site – like the airline or online travel agency such as CheapOair.

Where the app differs from others who are also trying to replace the travel agent – like Lola, Pana or Hyper – is that Hitlist doesn’t offer a chat interface. Morris feels that ultimately, travelers don’t want to talk to a chatbot – they just want to browse and discover, then have an experience that’s tailored for them as the app gets smarter about what they like.

That’s where Hitlist’s editorially curated suggestions come in, which can be as broad as “escape to Mexico” or as weird and quirky as “best cities to find wild kittens.” (Yes really.)

Hitlist will also help travelers by offering a variety of travel advice to help them make a decision – similar to how Morris used to advise her friends. For example, it might suggest the best days to fly (similar to Google Flights or Hopper), or tell you about the baggage fees, or even what sort of events might be happening at a destination.

Since its launch, Hitlist has grown to over a million mostly millennial travelers, who have collectively saved over $25 million on their flights by booking at the right time.

The new subscription plan is live now on iOS as an in-app purchase for $4.99 per month, with better rates for quarterly and annual subscriptions at $4.00/mo and $3/mo, respectively. It will roll out on Android later in the year.


Source: Tech Crunch

Navigating the risks of artificial intelligence and machine learning in low-income countries

On a recent work trip, I found myself in a swanky-but-still-hip office of a private tech firm. I was drinking a freshly frothed cappuccino, eyeing a mini-fridge stocked with local beer, and standing amidst a group of hoodie-clad software developers typing away diligently at their laptops against a backdrop of Star Wars and xkcd comic wallpaper.

I wasn’t in Silicon Valley: I was in Johannesburg, South Africa, meeting with a firm that is designing machine learning (ML) tools for a local project backed by the U.S. Agency for International Development.

Around the world, tech startups are partnering with NGOs to bring machine learning and artificial intelligence (AI) to bear on problems that the international aid sector has wrestled with for decades. ML is uncovering new ways to increase crop yields for rural farmers. Computer vision lets us leverage aerial imagery to improve crisis relief efforts. Natural language processing helps us gauge community sentiment in poorly connected areas. I’m excited about what might come from all of this. I’m also worried.

AI and ML have huge promise, but they also have limitations. By nature, they learn from and mimic the status quo, whether or not that status quo is fair or just. We’ve seen AI and ML’s potential to hard-wire or amplify discrimination, exclude minorities, or simply be rolled out without appropriate safeguards, so we know we should approach these tools with caution. Otherwise, we risk these technologies harming local communities instead of serving as engines of progress.

Seemingly benign technical design choices can have far-reaching consequences. In model development, tradeoffs are everywhere. Some are obvious and easily quantifiable, like choosing to optimize a model for speed vs. precision. Others are less clear. How you segment data or choose an output variable, for example, may affect predictive fairness across different sub-populations. You could end up tuning a model to excel for the majority while failing for a minority group.
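To make that last point concrete, here is a toy illustration with made-up predictions: the overall accuracy number looks respectable even though the model fails badly on a small minority group.

```python
def accuracy(pairs):
    """Fraction of (prediction, label) pairs that match."""
    return sum(p == y for p, y in pairs) / len(pairs)

# Hypothetical results: 90 majority-group examples, 10 minority-group ones.
majority = [(1, 1)] * 85 + [(0, 1)] * 5   # 85 of 90 correct
minority = [(0, 1)] * 8 + [(1, 1)] * 2    # only 2 of 10 correct

print(f"overall:  {accuracy(majority + minority):.2f}")  # 0.87 -- looks fine
print(f"majority: {accuracy(majority):.2f}")             # 0.94
print(f"minority: {accuracy(minority):.2f}")             # 0.20 -- hidden failure
```

Reporting metrics per sub-population, not just in aggregate, is the simplest guard against this failure mode.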

These issues matter whether you’re working in Silicon Valley or South Africa, but they’re exacerbated in low-income countries. There is often limited local AI expertise to tap into, and the tools’ more troubling aspects can be compounded by histories of ethnic conflict or systemic exclusion. Based on ongoing research and interviews with aid workers and technology firms, we’ve learned five basic things to keep in mind when applying AI and ML in low-income countries:

  1. Ask who’s not at the table. Often, the people who build the technology are culturally or geographically removed from their customers. This can lead to user-experience failures like Alexa misunderstanding a person’s accent. Or worse. Distant designers may be ill-equipped to spot problems with fairness or representation. A good rule of thumb: if everyone involved in your project has a lot in common with you, then you should probably work hard to bring in new, local voices.
  2. Let other people check your work. Not everyone defines fairness the same way, and even really smart people have blind spots. If you share your training data, design to enable external auditing, or plan for online testing, you’ll help advance the field by providing an example of how to do things right. You’ll also share risk more broadly and better manage your own ignorance. In the end, you’ll probably end up building something that works better.
  3. Doubt your data. A lot of AI conversations assume that we’re swimming in data. In places like the U.S., this might be true. In other countries, it isn’t even close. As of 2017, less than a third of Africa’s 1.25 billion people were online. If you want to use online behavior to learn about Africans’ political views or tastes in cinema, your sample will be disproportionately urban, male, and wealthy. Generalize from there and you’re likely to run into trouble.
  4. Respect context. A model developed for a particular application may fail catastrophically when taken out of its original context. So pay attention to how things change in different use cases or regions. That may just mean retraining a classifier to recognize new types of buildings, or it could mean challenging ingrained assumptions about human behavior.
  5. Automate with care. Keeping humans ‘in the loop’ can slow things down, but their mental models are more nuanced and flexible than your algorithm. Especially when deploying in an unfamiliar environment, it’s safer to take baby steps and make sure things are working the way you thought they would. A poorly-vetted tool can do real harm to real people.
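Point 3 (“Doubt your data”) can be illustrated with made-up numbers: when only a minority of a population is online, an online-only sample can badly misestimate a population-wide rate.

```python
# Hypothetical population of 10,000: 3,000 online and 7,000 offline,
# with sharply different support for some policy in each group.
online  = [1] * 2100 + [0] * 900    # 70% support among the online minority
offline = [1] * 2450 + [0] * 4550   # 35% support among the offline majority

def rate(group):
    """Fraction of the group that supports the policy."""
    return sum(group) / len(group)

print(rate(online))            # online-only sample says 0.7
print(rate(online + offline))  # true population rate is 0.455
```

Any model trained or evaluated on the online slice alone would inherit that 25-point skew, which is exactly the generalization trap the point above warns about.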

AI and ML are still finding their footing in emerging markets. We have the chance to thoughtfully construct how we build these tools into our work so that fairness, transparency, and a recognition of our own ignorance are part of our process from day one. Otherwise, we may ultimately alienate or harm people who are already at the margins.

The developers I met in South Africa have embraced these concepts. Their work with the non-profit Harambee Youth Employment Accelerator has been structured to balance the perspectives of both the coders and those with deep local expertise in youth unemployment; the software developers are even foregoing time at their hip offices to code alongside Harambee’s team. They’ve prioritized inclusivity and context, and they’re approaching the tools with healthy, methodical skepticism. Harambee clearly recognizes the potential of machine learning to help address youth unemployment in South Africa–and they also recognize how critical it is to ‘get it right’. Here’s hoping that trend catches on with other global startups too.


Source: Tech Crunch

Elon Musk has a very bad idea for a website rating journalists

Elon Musk has, as I imagine he often does during meetings or long car rides, come up with an idea for a new thing. Unlike the Hyperloop, which was cool, and various space-related ideas, which we know he’s at least partly expert about, this one is just plain bad. It’s basically Yelp But For Journalism.

He may as well have said, I found this great can marked “worms” and I’m going to open it up, or, I’ve determined a new method for herding cats.

The idea of holding publications and people accountable is great. Unfortunately it is the kind of problem that does not yield to even the best of intentions and smart engineering, because it is quickly complicated by the ethical, procedural, and practical questions of crowdsourcing “the truth.”

He agreed with another Twitter user whose comment was indistinguishable from sarcasm.

My guess is Musk does not often use Yelp, and has never operated a small business like a restaurant or salon.

Especially in today’s fiercely divided internet landscape, there is no reliable metric for truth or accountability. Some will say the New York Times is the most trusted newspaper in America — others will call it a debased rag with a liberal agenda. Individual stories will receive the same treatment, with some disputing what they believe are biases and others disputing those same things as totally factual.

And while the truth lies somewhere in between these extremes, it is unlikely to be the mathematical mean of them. The “wisdom of the crowd,” so often evoked but so seldom demonstrated, cannot depend on an equal number of people being totally wrong in opposite ways, producing a sort of stable system of bias.

The forces at work here — psychological, political, sociological, institutional — are subtle and incalculable.

The origins of this faith, and of the idea that there is somehow a quorum of truth-seekers in this age of deception, are unclear.

Facebook’s attempts to crowdsource the legitimacy of news stories have had mixed results, and the predictable outcome is of course that people simply report news they disagree with as false. Independent adjudicators are needed, and Facebook has fired and hired them by the hundred, without yet arriving at a system that produces results worth talking about.

Fact-checking sites perform an invaluable service, but they are labor-intensive, not a self-regulating system like what Musk proposes. Such systems are inevitably and notoriously ruled by chaos, vote brigades, bots, infiltrators, agents provocateurs, and so on.

Easier said than done — in fact, often said and never done, for years and years, by some of the smartest people in the industry. It’s not to say it is impossible, but Musk’s glib positivity and ignorance or dismissal of a decade and more of efforts on this front are not inspiring. (Nate Silver, for one, is furious.)

Likely as a demonstration of his “faith in the people,” if there are any on bot-ridden Twitter, he has put the idea up for public evaluation.

Currently the vote is about 90 percent yes. It’s hard to explain how dumb this is. Yet like most efforts it will be instructive, both to others attempting to tame the zeitgeist, and hopefully to Musk.


Source: Tech Crunch

GUN raises more than $1.5M for its decentralized database system

GUN is an open-source decentralized database service that allows developers to build fast peer-to-peer applications that will work, even when their users are offline. The company behind the project (which should probably change its name and logo…) today announced that it has raised just over $1.5 million in a seed round led by Draper Associates. Other investors include Salesforce’s Marc Benioff through Aloha Angels, as well as Boost VC, CRCM and other angel investors.

As GUN founder Mark Nadal told me, it’s been about four years since he started working on this problem, mostly because he saw the database behind his early projects as a single point of failure. When the database goes down, most online services will die with it, after all. So the idea behind GUN is to offer a decentralized database system that offers real-time updates with eventual consistency. You can use GUN to build a peer-to-peer database or opt for a multi-master setup. In this scheme, a cloud-based server simply becomes another peer in the network (though one with more resources and reliability than a user’s browser). GUN users get tools for conflict resolution and other core features out of the box and the data is automatically distributed between peers. When users go offline, data is cached locally and then merged back into this database once they come online.
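GUN’s real conflict-resolution machinery (its HAM algorithm) is more sophisticated, but the basic idea of merging offline edits so that replicas converge can be sketched as a per-field last-write-wins merge. This is an illustrative Python sketch, not GUN’s actual API:

```python
def merge(local, remote):
    """Merge two replicas of a record, field by field.

    Each replica maps field -> (timestamp, value); the newer write wins,
    so replicas that sync in any order converge to the same state.
    (Equal timestamps would need a deterministic tie-break, e.g. comparing
    values, which is omitted here.)
    """
    merged = dict(local)
    for field, (ts, value) in remote.items():
        if field not in merged or ts > merged[field][0]:
            merged[field] = (ts, value)
    return merged

# A user edits "status" offline at t=5 while a peer edited "name" at t=3.
device = {"name": (1, "alice"), "status": (5, "offline edit")}
server = {"name": (3, "Alice B."), "status": (2, "online")}

# Merging in either order converges on the same record.
print(merge(device, server) == merge(server, device))  # True
print(merge(device, server))
# {'name': (3, 'Alice B.'), 'status': (5, 'offline edit')}
```

The order-independence is what lets an offline device cache writes locally and merge them back in later, as the paragraph above describes.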

Nadal built the first prototype of GUN back in 2014, based on a mix of Firebase, MySQL, MongoDB and Cassandra. That was obviously a bit of a hack, but it gained him some traction among developers and enough momentum to carry the idea forward.

Today, the system has been used to build everything from a decentralized version of Reddit (which isn’t currently working) that can handle a few million uniques per month to a similarly decentralized YouTube clone.

Nadal also argues that his system has major speed advantages over some of the incumbents. “From our initial tests we find that for caching, our product is 28 times faster than Redis, MongoDB and others. Now we are looking for partnerships with companies pioneering technology in gaming, IoT, VR and distributed machine learning,” he said.

The Dutch Navy is already using it for some IoT services on its ships and a number of other groups are using it for their AI/ML services. Because its use cases are similar to that of many blockchain projects, Nadal is also looking at how he can target some of those developers to take a closer look at GUN.


Source: Tech Crunch

It’s unconstitutional for Trump to block people on Twitter

A uniquely 21st-century constitutional question received a satisfying answer today from a federal judge: President Trump cannot block people on Twitter, as it constitutes a violation of their First Amendment rights. The court also ruled he must unblock all previously blocked users. “No government official is above the law,” the judge concluded.

The question arose in a suit brought by the Knight First Amendment Institute, which alleged that the official presidential Twitter feed amounts to a public forum, and that the government barring individuals from participating in it amounted to limiting their right to free speech.

After consideration, New York Southern District Judge Naomi Reice Buchwald determined that this is indeed the case:

We hold that portions of the @realDonaldTrump account — the “interactive space” where Twitter users may directly engage with the content of the President’s tweets — are properly analyzed under the “public forum” doctrines set forth by the Supreme Court, that such space is a designated public forum, and that the blocking of the plaintiffs based on their political speech constitutes viewpoint discrimination that violates the First Amendment.

The president’s side argued that Trump has his own rights, and that in this case the choice not to engage with certain people on Twitter is among them. These are both true, Judge Buchwald found, but that doesn’t mean blocking is okay.

There is nothing wrong with a government official exercising their First Amendment rights by ignoring someone. And indeed that is what the “mute” function on Twitter is equivalent to. No harm is done to either party by the president choosing not to respond, and so he is free to do so.

But to block someone both prevents that person from seeing tweets and from responding to them, preventing them from even accessing a public forum. As the decision puts it:

We reject the defendants’ contentions that the First Amendment does not apply in this case and that the President’s personal First Amendment interests supersede those of plaintiffs…

While we must recognize, and are sensitive to, the President’s personal First Amendment rights, he cannot exercise those rights in a way that infringes the corresponding First Amendment rights of those who have criticized him.

The court also examined the evidence and found that despite the Executive’s arguments that his Twitter accounts are, for various reasons, in part private and not subject to rules limiting government spaces, the president’s Twitter is definitively a public forum, meeting the criteria set out some time back by the Supreme Court.

At this point in time President Trump has by definition performed unconstitutional acts, but the court was not convinced that any serious legal remedy needs to be applied. And not because the Executive side of the case said it was monstrous of the Judicial to dare to tell it what to do:

While we find entirely unpersuasive the Government’s parade of horribles regarding the judicial interference in executive affairs presented by an injunction directing the President to comply with constitutional restrictions… declaratory relief is likely to achieve the same purpose.

By this the judge means that while the court would be legally in the clear if it issued an official order binding the Executive, there’s no reason to do so. Instead, merely declaring that the president has violated the rules of the Constitution should be more than enough to compel his team to take the appropriate action.

Specifically, Trump and (it is implied but not stated specifically) all public officials are to unblock any blocked users on Twitter and never hit that block button again:

No government official is above the law and because all government officials are presumed to follow the law once the judiciary has said what the law is, we must assume that the President and Scavino will remedy the blocking we have held to be unconstitutional.

No timeline is set but it’s clear that the Executive is on warning. You can read the full decision here.

“We’re pleased with the court’s decision, which reflects a careful application of core First Amendment principles to government censorship on a new communications platform,” said executive director of the Knight Institute, Jameel Jaffer, in a press release.

This also sets an interesting precedent regarding other social networks; in fact, the Institute is currently representing a user in a similar complaint involving Facebook, but it is too early to draw any conclusions. The repercussions of this decision are likewise impossible to predict at this time, including whether and how other officials, such as senators and governors, are also bound by these rules. Legal scholars and political agents will almost certainly weigh in on the issue heavily over the coming weeks.


Source: Tech Crunch

Spotify launches ‘The Game Plan,’ a 10-part educational video series for artists

On the same day that Spotify’s class-action settlement with musicians gets final approval, the company is making a big push to encourage artists to participate on its streaming service – in this case, by offering them a host of educational material to help them get started. The streaming service today is launching its own video series dubbed The Game Plan, which instructs artists on how to get started using “Spotify for Artists,” and the other steps they have to take to make their music available for streaming.

The series includes short videos like: Getting Your Music Up; What Is Spotify for Artists?; Releasing Music; Building Your Artist Profile; Understanding Your Audience; How to Read Your Data; Engaging Your Audience; The Follow Button; Promoting Your Work; and Building Your Team.

In the videos, Spotify attempts to demystify the world of streaming with tips about things like when is the best time to release music, how and why to use listening data, how to upload your music, when to hire a lawyer (irony alert), and more.

The series will also feature interviews with experts, among them Spotify staff, industry vets, and artists such as Rick Ross, Little Dragon, Mike Posner, and Vérité.

The idea is that, by sharing this knowledge with the wider community, Spotify will be able to help artists build their careers, the company explains. Naturally, it wants them to build those careers and invest in learning Spotify’s tools – not those from its rivals.

“From successful musicians, to employees who are industry experts, the Spotify community has a wealth of music industry knowledge,” said Charlie Hellman, Head of Creator Marketplace, Spotify, in a statement about the launch. “We want to equip artists at all stages of their career with that powerful knowledge, and make it as accessible as possible.”

The video series’ debut comes at a time when there’s increased competition for Spotify, including from the just-launched YouTube Music streaming service, which takes direct aim at Spotify with a similar price point and the addition of music videos, including harder-to-find performances that are often just on YouTube. Plus, Apple’s new Netflix-like streaming service is rumored to be launching next year as a bundle with Apple Music.

The Game Plan begins as a 10-part video series, but Spotify says there’s more to come in the future.


Source: Tech Crunch

Uber is done testing self-driving cars in Arizona

Uber, which had already pulled its autonomous cars off the road following a fatal crash in Tempe, Arizona, is officially calling it quits in the state of Arizona, The Wall Street Journal first reported, citing an internal memo from Uber Advanced Technologies Group lead Eric Meyhofer.

As part of the wind-down, Uber has let go 300 of its test drivers. This comes after the state of Arizona in March officially barred Uber from testing its autonomous vehicles on public roads.

“We’re committed to self-driving technology, and we look forward to returning to public roads in the near future,” an Uber spokesperson said in a statement. “In the meantime, we remain focused on our top-to-bottom safety review, having brought on former NTSB Chair Christopher Hart to advise us on our overall safety culture.”

Uber is hoping to have its self-driving cars performing tests on public roads again within the next few months, Uber CEO Dara Khosrowshahi said at an Uber conference earlier this month. Once the National Transportation Safety Board completes its investigation of the Tempe crash, Uber plans to continue testing in San Francisco, Toronto and Pittsburgh. But if Uber wants to continue its tests in California, it will need to apply for a new permit, as well as “address any follow-up analysis or investigations from the recent crash in Arizona,” DMV Deputy Director/Chief Counsel Brian Soublet wrote in a letter to Uber in March. Uber may also need to set up a meeting with the DMV.


Source: Tech Crunch

Alexa gets smarter about calendar appointments

As digital assistants improve, we’re learning new things to expect from them, but tasks that a real-life assistant would have handled easily can still be a bit of a challenge for home assistants.

Amazon’s Alexa voice assistant is gaining functionality to help it get smarter about working with your calendar. The new abilities will let users move appointments around and schedule meetings based on other people’s availability.

If someone has shared their calendar availability with you, Alexa will be able to suggest times that work for both of you. Just say, “Alexa, schedule a meeting with [name]” and Amazon’s assistant will search through your schedule for a good time, suggesting up to two time slots that could work.
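Under the hood, suggesting mutually workable times is an interval-intersection problem. Here is a minimal sketch with hours as integers and hypothetical busy blocks, not Amazon’s implementation:

```python
def free_slots(busy, day_start=9, day_end=17):
    """Return the free (start, end) gaps in a 9-to-5 day, given busy intervals."""
    free, cursor = [], day_start
    for start, end in sorted(busy):
        if start > cursor:
            free.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < day_end:
        free.append((cursor, day_end))
    return free

def common_slots(busy_a, busy_b, length=1, limit=2):
    """Intersect two people's free time; suggest up to `limit` meeting slots."""
    slots = []
    for a_start, a_end in free_slots(busy_a):
        for b_start, b_end in free_slots(busy_b):
            start, end = max(a_start, b_start), min(a_end, b_end)
            if end - start >= length:
                slots.append((start, start + length))
    return sorted(set(slots))[:limit]

# Busy blocks as (start_hour, end_hour) for two hypothetical calendars.
mine   = [(9, 10), (12, 14)]
theirs = [(10, 11), (13, 15)]
print(common_slots(mine, theirs))  # [(11, 12), (15, 16)]
```

A real scheduler would add time zones, working-hours preferences, and ranking, but the core step of intersecting free intervals is the same.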

On a more basic feature level, Alexa won’t make you cancel appointments and reschedule them if a meeting time changes. You’ll be able to just ask Alexa to move an existing meeting, something that should have probably been supported from the beginning, but hey, better late than never.

Both of these features are available to U.S. users today.


Source: Tech Crunch