Q Actions Auto Mode

What if cars could understand ALL voice commands?


The following transcript was taken from a casual conversation with my son.

Son: Dad, what are you working on?

Me: It’s a new feature in our product called “Auto Mode”.  We just released it in version 2.1 of our Q Actions App for Android.  We even made a video of it.  We can watch it after dinner if you’re interested.

Son: The feature sounds cool.  What’s it look like?

Me: Well, here.  We have this special setting that switches our software to look like the screen in a car. See how the screen is wider than it is tall? Yeah, that’s because most car screens are like that too.

Son: Wait. How do you get your software into cars? Can’t you just stick the tablet on the dashboard?

Me: Hmm, not quite. We develop the software so that car makers can combine it with their own software inside the car’s console. We’ll even make it look like they developed it by using their own colors and buttons. I’m showing you how this works on a tablet because it’s easier to demonstrate to other people – we just tell them to pretend it’s the car console. Until cars put our software into their consoles, we’ll make it easy for users to use “Auto Mode” directly on their phones. Just mount the phone on the car’s dash and say “turn on auto mode” – done!

Son:  So how do you use it?  And what does that blue button with a microphone in it do?

Me: Well, we want anyone in the car to be able to say a command like “navigate to Great America” or “what’s the weather like in San Jose?” or “who are Twenty One Pilots?”. The button is simply a way to tell the car to listen. When we hear a command, our software figures out what to do and what to show on the console in the car. Sometimes it even speaks back the answer. Now, we don’t always want people to have to press the button on the screen, so we’ll work with the car makers to add a button on the steering wheel or even a microphone that is always listening for a special phrase such as “Ok, Q” to start.

Son: How does it do that?  I mean, the command part.

Me: Good question. Since you’re smart and know a little about software, I’ll keep it short. Our software takes a command and tries to figure out which app or service can best provide the answer. For example, if the command is about showing the route to, say, an amusement park like Great America, we’ll ask Google Maps to handle it, which it does really well. Lots of cars come with mapping software like Google Maps already installed, so it’s best to let that handle those. For other types of commands that ask for information, like “what’s the weather like in San Jose” or “who are Twenty One Pilots”, we’ll send them off to servers in the cloud. They then send us back answers, and we format them and display them on the screen – in a pretty-looking card like this one.
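For the software-inclined, the routing idea boils down to something like the sketch below. Every name in it is invented for illustration; this is a conceptual sketch, not our actual implementation.

```kotlin
// Conceptual sketch: route a command to a local app when one can handle the
// intent (e.g. a maps app for navigation), otherwise ask a cloud service and
// display the answer as a card. All names are hypothetical.
sealed class CommandResult {
    data class AppAction(val packageName: String, val deepLink: String) : CommandResult()
    data class CloudAnswer(val cardTitle: String, val cardBody: String) : CommandResult()
}

class CommandRouter(
    private val installedHandlers: Map<String, String>,             // intent -> app package
    private val cloudService: (String) -> CommandResult.CloudAnswer // cloud Q&A fallback
) {
    fun route(intent: String, command: String): CommandResult {
        val pkg = installedHandlers[intent]
        return if (pkg != null) {
            // A local app (e.g. a maps app for "navigate to ...") handles it best.
            CommandResult.AppAction(pkg, "app://$pkg/execute?q=$command")
        } else {
            // Informational queries go to servers in the cloud and come back as cards.
            cloudService(command)
        }
    }
}
```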

Me: Sometimes apps running on our phones can best answer these commands, and we use those apps to handle them.

Son: Wait. Phones?  How are phones involved? I only see you using a tablet.

Me: Ahhh. You’ve discovered our coolest feature. We use apps already installed on your phone. Do you see those rectangle-looking things in the upper right corner of the tablet? The ones with the pictures and names of people? Well, those are phone profiles. They appear when a person connects their phone, running our Q Actions app, to the car’s console through Bluetooth, sort of like you do with wireless earbuds. When connected, our software in the console sends the phone your commands, and the phone in turn attempts to execute the command using one of the installed apps. Let me explain with an example. Let’s pretend you track your daily homework assignments using the Google Tasks app on your phone. You hop into the car and your phone automatically pairs with the console. Now suppose I ask you to show me your homework assignments. You press the mic button and say “show my homework tasks”. The software in the console would intelligently route the command to your phone (because Google Tasks is not on the console), open Google Tasks on your phone, grab all your homework assignments, and send them back to the console to be displayed in a nice card. Oh, and it would speak back your homework assignments as well. Let’s see what happens when I tell it to view my tasks.
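For the curious, the console-to-phone routing can be pictured roughly like this. Again, every name here is invented for illustration; it is a sketch of the idea, not our production code.

```kotlin
// Conceptual sketch: run a command on the console when the console has the
// required app; otherwise forward it to the paired phone profile and display
// the card it sends back. All names are hypothetical.
data class PhoneProfile(val owner: String, val installedApps: Set<String>)

class ConsoleRouter(
    private val consoleApps: Set<String>,
    private val activeProfile: PhoneProfile,
    private val sendToPhone: (String) -> String // returns a result-card payload
) {
    fun execute(requiredApp: String, command: String): String = when {
        // The console has the app (e.g. Google Maps): run it locally.
        requiredApp in consoleApps -> "console handled: $command"
        // Only the paired phone has the app (e.g. Google Tasks): forward the
        // command over the link to the phone and display the returned card.
        requiredApp in activeProfile.installedApps -> sendToPhone(command)
        else -> "No app available to handle \"$command\""
    }
}
```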

Son:  Big deal.  I can just pick up my phone and do that.  Why do I need to use voice for that?

Me: Because if you’re the driver, you don’t want to be fumbling around with your phone, possibly getting into an accident! Remember, this is supposed to help drivers with safe, “hands-free” operation. You put your phone in a safe place and our software figures out how to use it to get the answers.

Son: Why can’t the car makers put all these apps in the console so you don’t have to use your phone?

Me: Great question. Most people carry their phones on them at all times, especially when they drive. And these phones have all their favorite apps with all their important personal information stored in them. There’s no way the car makers could figure out which apps to include when you buy the car. And even if you could download these apps onto the console, all your personal information that’s on your phone would have to be transferred over to the console, app by app. Clumsy if you ask me. I prefer to keep my information on my phone and private, thank you very much!

Son: Oh. Now I get it.  So what else does the software do?

Me: The console can call a family member. If you say “call Dad”, the software looks for ‘dad’ in your phone’s address book and dials the number associated with it. But wait. You’re probably thinking ‘What’s so special about that? All the cool cars do it’. Well, we know that a bunch of apps can make phone calls, so we show you which ones and let you decide. Also, if you have two numbers for ‘dad’, say a home and a mobile number, the software will ask you to choose one to call. Let’s see how this works when I say “call Dad”.

Me: It asks you to pick an app.  I say ‘phone’ and then it asks me to pick a number since my dad has both a home and mobile number.  I say ‘mobile’ and it dials the number through my phone.

Son: Cool. But what if I have two people with the same name, like Julie?

Me: It will ask you to pick a ‘Julie’ when it finds more than one. And it will remember that choice the next time you ask it to call Julie. See what happens when I want to call Jason. It shows me all the people in my address book who are named Jason, along with their phone numbers. If a person has more than one number, it will say ‘Multiple’.
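Under the hood, the disambiguation flow is conceptually something like the sketch below. The `promptChoice` callback stands in for the voice prompts, and all of the names are invented for illustration.

```kotlin
// Conceptual sketch: when several contacts (or several numbers) match, ask
// the user to choose, and remember the contact choice for next time.
data class Contact(val name: String, val numbers: Map<String, String>) // label -> number

class CallResolver(private val promptChoice: (List<String>) -> Int) {
    private val remembered = mutableMapOf<String, Contact>()

    fun resolve(spokenName: String, addressBook: List<Contact>): String {
        val matches = addressBook.filter { it.name.equals(spokenName, ignoreCase = true) }
        require(matches.isNotEmpty()) { "No contact named $spokenName" }

        // Reuse a remembered choice; otherwise ask when several contacts share the name.
        val contact = remembered.getOrPut(spokenName.lowercase()) {
            if (matches.size == 1) matches.first()
            else matches[promptChoice(matches.map {
                "${it.name} (${it.numbers.values.singleOrNull() ?: "Multiple"})"
            })]
        }

        // Ask again if the chosen contact has several numbers (home vs. mobile).
        val labels = contact.numbers.keys.toList()
        val label = labels.singleOrNull() ?: labels[promptChoice(labels)]
        return contact.numbers.getValue(label)
    }
}
```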

Son: Wow.  What else?

Me: How about sending a message on WhatsApp? Or setting up a team meeting in the calendar. Or joining a meeting from the car if you’re running late. Or even checking which of your friends have birthdays today. All these actions are performed on your phone using the apps you are familiar with and already use.

Son: Which app shows you your friends’ birthdays? That’s kind of neat.

Me: Facebook

Son: I don’t use Facebook. I use Instagram. It’s way better.  Plus all the cool kids use it now.

Me:

Me: You get the picture though, right?

Son: Sure.

Son: So what if all of my friends are in the car with you and we connect to the console?  How does the software know where to send the command?

Me: We use the person’s voice to identify who they are and route the command to the right person’s phone automatically.

Son: Really? That seems way too hard.

Me: Not really.  Although we haven’t implemented it yet, the technology exists to do this sort of thing today.

Son: Going back to the main screen, why does the list of actions under ‘Recent’ and ‘Favorites’ change when you change people?

Me: Oh, you noticed that! Whenever the software switches to a new profile, we grab the ‘Recent’ and ‘Favorites’ sections from that person’s phone and display them on the tablet, er, console. This is our way of making the experience more personalized and familiar, matching the way the app appears on your phone. In fact, the ‘Favorites’ are like handy shortcuts for frequently used actions, like “call Mom”.

Me: One more thing. Remember the other buttons on the home screen? One looked like a music note, another a picture for messaging, and so on. Well, when you press those, a series of icons appears across the screen, each showing an action that belongs to that group. If your phone had Spotify installed, we would show you a few Spotify actions. If Pandora was installed, we would show you Pandora actions, and so on. Check out what happens when I activate my profile. Notice how Pandora appears? That’s because Pandora is on my phone, not on the tablet like Google Play Music and YouTube Music.

Me: The same is true for messaging and calling. Actions from apps installed on your phone would appear. You would simply tap on an icon to run the action. In fact, if you look carefully, you’ll notice that all the actions that show up on the console are also in the ‘My Actions’ screen in the Q Actions app on your Android phone. Check out what’s on the tablet vs. my phone.

Son: Yep.

Me: Oh and before I forget, there’s one last item I’d like to tell you about.

Son: What’s that?

Me: Notifications. If you send me a message on WhatsApp, Messenger or WeChat, a screen will pop up letting me know I have a message from you. I can listen to the message by pressing a button or respond to it – by voice, of course – all while keeping my focus on the road. You’ll get the response just as if I had sent it while holding the phone.

Son:  Cool. I’ll have fun sending you messages on your way home from work.

Me: Grrrrrr.

Son: Hey, can I try this out on my phone?

Me: Sure. Just download our latest app from the Google Play Store. After you get it installed, go to the Preferences section under Settings and check the box that says ‘Auto Mode (BETA)’. You’ll automatically be switched into Auto Mode on your phone. Now this becomes your console in the car.

Me: Of course, things appear a bit smaller on your phone than what I’ve shown you on the tablet. Oh, and since you’re not connected to another phone, all the commands you give it will be performed by apps on your phone. Try it out and let me know what you think.

Son:  Ok. I’ll play around with it this week.

Me: Great.  Now let’s go see what your mom’s made us for dinner.

 

Q Actions 2.0

Do more with Voice! Q Actions 2.0 now available on Google Play


Do more with Voice

Q Actions 2.0 is here. With this release, we wanted to focus on empowering users throughout their day. As voice is playing a more prevalent part in our everyday lives, we’re uncovering more use cases where Q Actions can be of help. In Q Actions 2.0, you’ll find new features and enhancements that are more conversational and useful.

Directed Dialogue™

Aiqudo believes the interaction with a voice assistant should be casual, intuitive, and conversational. Q Actions understands naturally spoken commands and is aware of the apps installed on your phone, so it will only return personalized actions that are relevant to you. When a bit more information is required from you to complete a task, Q Actions will guide the conversation until it fully understands what you want to do. Casually chat with Q Actions and get things done.

Sample commands:

  • “create new event” (Google Calendar)
  • “message Mario” (WhatsApp, Messenger, SMS)
  • “watch a movie/TV show” (Netflix, Hulu)
  • “play some music” (Spotify, Pandora, Google Play Music, Deezer)
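Conceptually, Directed Dialogue™ behaves like slot filling: the assistant asks follow-up questions until every required parameter of an action is known. The sketch below illustrates the idea; the names and API are invented, not Aiqudo's actual interfaces.

```kotlin
// Conceptual slot-filling sketch: prompt for each required parameter that the
// spoken command did not already supply. All names are hypothetical.
data class Slot(val name: String, val prompt: String)

class DirectedDialogue(private val askUser: (String) -> String) {
    fun collect(required: List<Slot>, parsed: MutableMap<String, String>): Map<String, String> {
        for (slot in required) {
            // Only ask about parameters the command itself left out.
            if (slot.name !in parsed) parsed[slot.name] = askUser(slot.prompt)
        }
        return parsed
    }
}

// Usage: "message Mario" already fills `recipient`, so only the body is prompted for.
// dialogue.collect(
//     listOf(Slot("recipient", "Who do you want to message?"),
//            Slot("body", "What should the message say?")),
//     mutableMapOf("recipient" to "Mario"))
```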

Q Cards™

In addition to providing relevant app actions from personal apps that are installed on your phone, Q Actions will now display rich information through Q Cards™. Get up-to-date information from cloud services on many topics: flight status, stock pricing, restaurant info, and more. Beyond presenting the information in a simple, easy-to-read card, Q Cards™ support Talkback and will read relevant information aloud.

Sample commands:

  • “What’s the flight status of United 875?”
  • “What’s the current price of AAPL?”
  • “Find Japanese food”
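One way to think about a Q Card™ is as a set of structured fields plus a spoken summary for Talkback. The shape below is a rough sketch with invented field names, not the actual Q Card™ format.

```kotlin
// Hypothetical card shape: structured fields for display plus a spoken summary.
data class QCard(
    val title: String,               // e.g. "United 875"
    val fields: Map<String, String>, // e.g. "Status" to "On time"
    val spokenSummary: String        // what gets read aloud
)

// Render the card as simple text and hand the summary to a text-to-speech callback.
fun renderAndSpeak(card: QCard, speak: (String) -> Unit): String {
    speak(card.spokenSummary)
    return buildString {
        appendLine(card.title)
        card.fields.forEach { (label, value) -> appendLine("$label: $value") }
    }
}
```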

Voice Talkback™

There are times when you need information but do not have the luxury of looking at a screen. Voice Talkback™ is a feature that reads aloud the critical snippets of information from an action. This enables you to continue to be productive, without the distraction of looking at a screen. Execute your actions safely and hands-free.

Sample commands:

  • “What’s the stock price of Tesla?” (E*Trade)
    • Q: “Tesla is currently trading at $274.96”
  • “Whose birthday is it today?” (Facebook)
    • Q: “Nelson Wynn and J Boss are celebrating birthdays today”
  • “Where is the nearest gas station?”
    • Q: “Nearest gas at Shell on 2029 S Bascom Ave and 370 E Campbell Ave, 0.2 miles away, for $4.35”
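On Android, reading a snippet aloud can be done with the platform TextToSpeech API. The sketch below shows standard Android usage; it is not necessarily how Voice Talkback™ is implemented internally.

```kotlin
import android.content.Context
import android.speech.tts.TextToSpeech
import java.util.Locale

// Minimal text-to-speech wrapper using the standard Android API.
class Talkback(context: Context) : TextToSpeech.OnInitListener {
    private val tts = TextToSpeech(context, this)
    private var ready = false

    override fun onInit(status: Int) {
        ready = status == TextToSpeech.SUCCESS
        if (ready) tts.setLanguage(Locale.US)
    }

    fun speak(snippet: String) {
        // QUEUE_FLUSH interrupts anything currently being spoken.
        if (ready) tts.speak(snippet, TextToSpeech.QUEUE_FLUSH, null, "talkback")
    }
}
```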

Compound Commands

As an enhancement to our existing curated Action Recipes, users can now create Action Recipes on the fly using Compound Commands. Simply join two of your favorite actions into a single command using “and”. This gives users the ability to create millions of Action Recipe combinations from our database of 4,000+ actions.

Sample commands:

  • “Play Migos on Spotify and set volume to max”
  • “Play NPR and navigate to work”
  • “Tell Monica I’m boarding the plane now and view my boarding pass”
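At its simplest, a compound command can be split on “and” and each part executed in sequence, as in the naive sketch below. Real parsing has to be smarter, since “and” can appear inside a message body; the names here are illustrative only.

```kotlin
// Naive sketch: split a compound command on "and" and run each part in order.
fun executeCompound(command: String, executeSingle: (String) -> Unit) {
    command.split(" and ", ignoreCase = true)
        .map { it.trim() }
        .filter { it.isNotEmpty() }
        .forEach(executeSingle)
}

// executeCompound("Play NPR and navigate to work") { part -> println("run: $part") }
```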

Simply do more with voice! Q Actions is now available on Google Play.

Q Actions - Directed Dialogue

Q Actions – Task completion through Directed Dialogue™


When an action or a set of actions requires specific input parameters, Directed Dialogue™ allows the user to submit the required information through a very simple, natural back-and-forth conversation. Enhanced with parameter validation and user confirmation, Directed Dialogue™ allows complex tasks to be performed with confidence. Directed Dialogue™ is not about open-ended conversations; it is about getting things done, simply and efficiently.

With Q Actions, Directed Dialogue™ is automatically enabled for every action in the system because we know the semantic requirements of each and every action’s parameters. It is not constrained, and it applies to all actions across all verticals.

Another application of Directed Dialogue™ is input refinement. Let’s say I want to purchase batteries. If I just say “add batteries to my shopping cart”, I can get the wrong product added to my cart, as happens on Alexa, which does the wrong thing for a new product order (the right thing happens on a reorder). With Q Actions, I can provide the brand (Duracell) and the type (9V 4-pack) through very simple Directed Dialogue™, and exactly the right product is added to my cart – in the Amazon or Walmart app.
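The refinement loop can be pictured as prompting plus validation: ask for any missing parameter, and re-prompt until the value passes the action's semantic checks. A minimal sketch with invented names:

```kotlin
// Conceptual sketch: prompt for missing parameters and re-prompt on invalid input.
data class Param(
    val name: String,
    val prompt: String,
    val validate: (String) -> Boolean // semantic check, e.g. a known battery type
)

fun refine(params: List<Param>, given: MutableMap<String, String>, ask: (String) -> String) {
    for (p in params) {
        var value = given[p.name] ?: ask(p.prompt)
        while (!p.validate(value)) value = ask("Sorry, ${p.prompt}")
        given[p.name] = value
    }
}

// "add batteries to my cart" might then refine brand ("Duracell") and type ("9V 4 pack").
```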

Get Q Actions today.

Auto in-cabin experience

The Evolution of Our In-Car Experience


As the usage model for cars continues to shift away from traditional ownership and leasing to on-demand, ridesharing, and in the future, autonomous vehicle (AV) scenarios, how we think about our personal, in-car experience will need to shift as well.

Unimaginable just a few short years ago, today we think nothing of jumping into our car and streaming our favorite music through the built-in audio system using our Spotify or Pandora subscription. We also expect the factory-installed navigation system to instantly pull up our favorite or most commonly used locations (after we’ve entered them) and present us with the best route from our current location. And once we pair our smartphone with the media system, we can have our text and email messages not only appear on the onboard screen but also read to us using built-in text-to-speech capabilities. It’s a highly personalized experience in our car.

When we use a pay-as-you-go service, such as Zipcar, we know we’re unlikely to have access to all of the tech comforts of our own vehicle, but we can usually find a way to get our smartphone paired for handsfree calling and streaming music using Bluetooth. If not, we end up using the navigation app on our phone and awkwardly holding it while driving, trying to multitask. It’s not pretty. And when we hail a rideshare, we don’t expect to have access to any of the creature comforts of our own car.

But what if we could?

Just as our relationship to media shifted from an ownership model–CDs or MP3 files on iPods–to subscription-based experiences that are untethered to a specific device but can be accessed anywhere at any time, it’s time to shift our thinking about in-car experiences in the same way.

It’s analogous to accessing your Amazon account and continuing to watch the new season of “True Detective” on the TV at your Airbnb–at the exact episode where you left off last week. Or listening to your favorite Spotify channel at your friend’s house through her speakers.

All your familiar apps (not just the limited Android Auto or Apple CarPlay versions) and your personalized in-car experience–music, navigation, messaging, even video (if you’re a passenger, of course)–will be transportable to any vehicle you happen to jump into, whether it’s a Zipcar, rental car or some version of a rideshare that’s yet to be developed. What’s more, you’ll be able to easily and safely access these apps using voice commands. Whereas today our personal driving environment is tied to our own vehicle, it will become something that’s portable, evolving as our relationship to cars changes over time.

Just on the horizon of this evolution in our relationship with automobiles? Autonomous vehicles, or AVs, in which we become strictly a passenger, perhaps one of several people sharing a ride. Automobile manufacturers today are thinking deeply about what this changing relationship means to them and to their brands. Will BMW become “The Ultimate Riding Machine”? (As a car guy, I personally hope not!) And if so, what will be the differentiators?

Many car companies see the automobile as a new digital platform, for which each manufacturer creates its own, branded, in-car digital experience. In time, when we hail a rideshare or an autonomous vehicle, we could request a Mercedes because we know that we love the Mercedes in-car digital experience, as well as the leather seats and the smooth ride.

What happens if we share the ride in the AV, because, well, they are rideshare applications after all? The challenge for the car companies becomes creating a common denominator of services that define that branded experience while still enabling a high degree of personalization. Clearly, automobile manufacturers don’t want to become dumb pipes on wheels, but if we all just plug in our headphones and live on our phones, comfy seats alone aren’t going to drive brand loyalty for Mercedes. On the other hand, we don’t all want to listen to that one guy’s death metal playlist all the way to the city.  

The car manufacturers cannot create direct integrations to all services to accommodate infinite personalization. In the music app market alone there are at least 15 widely used apps, but what if you’re visiting from China? Does your rideshare support China’s favorite music app, QQ?  We’ve already made our choices in the apps we have on our phones, so transporting that personalized experience into the shared in-car experience is the elegant way to solve that piece of the puzzle.

This vision of the car providing a unique digital experience is not that far-fetched, nor is it that far away from becoming reality. It’s not only going to change our personal ridesharing experience, but it’s also going to be a change-agent for differentiation in the automobile industry.

And it’s going to be very interesting to watch.

Q Actions power Moto Voice

Q Actions Platform now powers App Actions in Moto Voice. #HelloMoto


Our first official day at Aiqudo was in April 2017. One year later, we are excited to announce that our Q Actions platform is now live and powering app actions in Moto Voice. The experience is being rolled out, as we speak, to millions of users of Motorola phones in 7 languages in 12 markets, with more to come. Watch the coverage of the always-on voice capabilities during Motorola’s recent launch event.

Most of the app actions we power are not currently available in other digital assistant platforms – actions in apps like Facebook, WhatsApp, WeChat, Netflix, Spotify, Hulu, and Waze, to mention a few. And we’ve just gotten started…

On supported Motorola phones, you just say “Hello Moto” and issue simple commands – hands free.

Our solution provides high utility to users. You can get things done instantly within your favo(u)rite apps, privately and without having to register credentials. Check out the Voice-to-Action™ experience in the video below:

We’ve addressed several hard technical problems, including:

  • Command matching for simple, intuitive commands in multiple languages: You speak naturally – no need to learn a specific syntax. A single command can provide matching actions from multiple apps, providing user choice (see the sketch after this list).
  • Action execution of personal app actions: We execute actions in your favo(u)rite apps, including your private actions, without requiring registration or login credentials. We use several techniques for action execution, and can even execute tasks consisting of multiple actions in different apps.
  • Action onboarding operations: We support actions in multiple versions of apps simultaneously – in multiple locales. Our onboarding process takes minutes and does not mandate APIs, coding, or developer engagement, enabling rapid scale. Our flexible machine-learning systems are trained incrementally with simple exemplary commands.
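As a toy illustration of the command-matching point above: score each known action phrase against the spoken command and surface the best candidates from multiple apps. Real systems use trained, multilingual models rather than word overlap; everything below is invented for illustration.

```kotlin
// Toy matcher: rank (app, action phrase) pairs by word overlap with the command.
data class ActionCandidate(val app: String, val actionPhrase: String, val score: Double)

fun matchCommand(command: String, actions: List<Pair<String, String>>): List<ActionCandidate> {
    val commandWords = command.lowercase().split(" ").toSet()
    return actions
        .map { (app, phrase) ->
            val phraseWords = phrase.lowercase().split(" ").toSet()
            val overlap = (commandWords intersect phraseWords).size.toDouble() / phraseWords.size
            ActionCandidate(app, phrase, overlap)
        }
        .filter { it.score > 0.0 }
        .sortedByDescending { it.score }
}

// matchCommand("play some music", listOf("Spotify" to "play music", "Pandora" to "play music"))
// returns candidates from both apps, so the user gets a choice.
```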

We will be writing more about our contributions in these areas over the next few weeks.

For the most powerful, fully hands free experience, get a new phone with always on Moto Voice, and say “Hello Moto”!

Or, for other Android phones, you can download the Q Actions app from the Play Store.

Moto Voice - App Actions powered by Aiqudo

Aiqudo Announces Global Voice Assistant Agreement with Motorola


Aiqudo to Power Actions for Motorola’s Moto Voice Experience, Launch Globally in Seven Languages

BUSINESSWIRE. SAN JOSE, CALIFORNIA, APRIL 30, 2018

Aiqudo, a Voice AI pioneer that lets people use voice commands to execute actions in mobile apps, today announced that they have entered into an agreement for Aiqudo to work with Motorola’s Moto Voice experience globally. Aiqudo’s technology is now available in select Motorola smartphones in major markets around the world in English, Spanish, Portuguese, French, Italian, German and Russian. The technology will integrate seamlessly with the top apps in each market.

“With this agreement, Motorola brings Aiqudo’s unique virtual assistant capabilities to tens of millions of customers,” said John Foster, Aiqudo CEO and Co-Founder. “Motorola, who pioneered the mobile phone business, is now pioneering voice as the primary interface to our digital worlds and we are excited to work with them. At Aiqudo, we’ve solved issues with existing voice-assisted platforms to ensure an experience that offers ease of use and seamless integration with favorite apps without users needing to learn new syntax and running up against walled gardens that are so common with current solutions. This agreement brings to market far more powerful interactions than have ever before been enabled with voice on any device. We are thrilled to work with Motorola to bring what we believe is the next wave of voice to consumers.”


Aiqudo’s powerful Voice-to-Action™ platform brings the ease and power of voice to the mobile app ecosystem. Mobile apps have become central to consumers’ lives, providing massive utility across entertainment, shopping, navigation, messaging, and more. With Aiqudo, Moto Voice allows instant access to these actions, enabling consumers to use their favorite mobile apps with simple, intuitive natural-language commands – hands free. Users get both verbal and visual results, which is essential for making quick decisions, and their private data stays private, within the apps they originally entered it in.

“Aiqudo helps users get things done quickly with the most ubiquitous assistant – the phone, meaning you don’t lose functionality when you walk out of your living room or home. And it’s easy to use because the Q Platform does not require the user to learn a new command syntax or specify an app by name. The Q platform learns from the user instead of requiring the user to learn new skills,” said Rajat Mukherjee, Aiqudo CTO and Co-Founder. “Our Voice AI, built by our team in Belfast, Northern Ireland, enables rapid scalability to multiple languages and localization for apps in each of the markets we will be expanding to with Motorola. Aiqudo voice enables users’ favorite apps in each market and supports regional language variations, for example between Spain and Mexico.”

About Aiqudo

Aiqudo (pronounced: “eye-cue-doe”) is a Voice AI pioneer that connects the nascent world of voice interfaces to the useful, mature world of mobile apps through its Voice-to-Action™ platform. It lets people use natural voice commands to execute actions in mobile apps across devices. Aiqudo’s SaaS platform uses machine learning (AI) to understand natural-language voice commands and then triggers instant actions via mobile apps, enabling consumers to get things done quickly and easily.

Aiqudo’s proprietary technology is covered by more than 30 granted patents and patent applications. Aiqudo’s technology offers a scalable approach to creating voice-enabled actions without APIs or developer dependencies.

To view product demo videos, visit Aiqudo’s YouTube channel.

BUSINESSWIRE: Aiqudo Announces Global Voice Assistant Agreement with Motorola

Why Apps?


With all the hype around chatbots, skills, and other forms of custom voice UX, we’re often asked why we chose mobile apps as the first target domain for Q Actions – our voice AI platform.

The short answer is: apps are where the utility is – consumers spent a trillion hours using mobile apps last year. With voice, all those familiar apps are even easier to use.

We believe there is a critical gap in the voice assistant marketplace. The ideal assistant MUST:

  • Be ubiquitous – not just available in the kitchen or living room
  • Provide high utility – help us do useful things we do every day
  • Work intuitively – let users speak naturally, without the need to learn new syntax
  • Offer user choice – across platforms, applications and devices
  • Be private and secure – on device where possible

Mobile apps remain the best way to achieve these goals. Your phone is always with you, and mobile apps provide high utility for you wherever you are.

VentureBeat ran a survey last year asking 1000 people “Which of these (app, mobile website, or chatbot) would you prefer to use in order to engage with a brand?” There was a clear winner: Apps! It’s particularly interesting because these 1000 respondents were self-described “chatbot users”.

Users prefer to use Apps

Are we at “Peak App” and does it even matter?

We often hear the concept of “Peak App”, which describes a general state of app fatigue. In this narrative, people already have all the apps they need, so they no longer download new apps. And for developers, this peak means creating new apps is no longer exciting, and breaking through as a new app is increasingly rare, so maybe develop a skill or a chatbot for one of the closed platforms and see how that goes (aka starting over with your customers).  

Global app download rates defy the idea of “Peak App”. We’ve seen 60% growth over the past three years, and this trend continues in 2018, with app downloads (and revenue) breaking records yet again in Q1.

App downloads continue to grow

People continue to spend more money in the app economy. Both iOS and Google Play saw 20% year-over-year growth in worldwide consumer spend in Q4 2017. The total app spend in 2017 was $17 billion.

App spend continues to grow

As noted by Mary Meeker in her 2017 Internet Trends report, internet usage (engagement) continues to grow (+4% year over year), with U.S. users now spending more than 3 hours per day on mobile, versus less than 1 hour five years ago.

Mobile continues to dominate time spent

Peak is a moot point anyway, because…

People use 30 to 40 apps, and still have another 50+ apps installed on their phone.

Many apps are usable, but out of sight.

We want to bring easy-to-use voice AI to the apps people use, while also helping them make use of the apps that are installed but not used. Out of sight is out of mind, but if you could just ask, and the right action in the right app were executed, you would be more likely to use those installed apps. Further, if you don’t have to know which app can execute your command, you can just say what you want and our Q Actions platform will:

  • Understand what you intend to do
  • Determine which apps can get it done
  • Execute the action using the most relevant app installed on your phone

It’s easier, more natural, and … faster! It reduces time to action.
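Put together, those three steps form a simple pipeline. The sketch below is hypothetical; the interfaces are invented to illustrate the flow, not Aiqudo's actual APIs.

```kotlin
// Hypothetical understand -> determine -> execute pipeline.
interface IntentUnderstander { fun understand(command: String): String }  // returns an intent id
interface AppResolver { fun appsFor(intent: String, installed: Set<String>): List<String> }
interface ActionExecutor { fun execute(app: String, intent: String, command: String) }

class QPipeline(
    private val understander: IntentUnderstander,
    private val resolver: AppResolver,
    private val executor: ActionExecutor
) {
    fun run(command: String, installedApps: Set<String>) {
        val intent = understander.understand(command)            // 1. understand the intent
        val candidates = resolver.appsFor(intent, installedApps) // 2. find apps that can do it
        candidates.firstOrNull()                                 // 3. execute with the best app
            ?.let { executor.execute(it, intent, command) }
    }
}
```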

Unlocking Utility

This approach unlocks the high utility of mobile apps by putting the effort of app discovery on the voice AI platform, not the consumer.

ComScore’s “2017 U.S. Mobile App Report” illustrates that many people have apps they consider “Hidden Gems”. These are gems because they are helpful and offer high utility when needed, but are not in the top 25 most used apps. We help people make use of these gems by simply issuing natural voice commands.

Hidden gems: apps that are not head apps, but provide huge utility

Most of these “hidden gems”, along with millions more – photo apps, payment apps, airline apps, etc. – are just not available in existing voice platforms. Alexa Skills offer limited utility compared to the mobile apps already installed on your phone.

Critical Gaps In The Big Voice Platforms

The big voice platforms don’t currently support many of the most popular, helpful, and engaging mobile apps. Here’s a look at top mobile apps vs apps currently supported through an Alexa Skill.

Many popular apps are not available on Alexa or Google Assistant

Current voice platforms don’t support enough useful actions. Even for apps supported by Alexa, Google Assistant, Cortana, Siri, et al., voice support is often limited to a small number of app functions. For example, with Alexa, I can order a Lyft, but I can’t schedule one or look at my ride history. Voice should make using these familiar apps easier, not require you to remember what Lyft can do with Alexa.

Don’t Reinvent The Wheel

Current voice platforms require new, custom development, ongoing maintenance and support. Why would a developer reinvent the wheel just to offer voice support to their customers, expanding their maintenance and support requirements in the process?

Voice enabling your existing app gets developers and brands started on capturing customer voice search commands, a valuable asset that should be protected from competitors, some of which operate digital platforms eager to disintermediate brands from their customers.

Apps Are Useful, Personal, Private, and Secure

A compelling consumer voice experience is our goal, and apps are a great starting point. Further, because you already trust the apps you use, and we don’t require registration or any user credentials, we execute the right actions for you privately.  We can enable personal actions, like playing your personal playlists, viewing your photos, sending payments to friends, and messaging family – quickly and securely.

Through our Q Action Kit (developer SDK) or our Q Actions App, Aiqudo’s action intent AI connects voice computing to the mobile app ecosystem, helping you take action quickly and easily, wherever you go.

This Week In Voice Podcast March 15 2018

This Week in Voice – Podcast with Aiqudo


Season 2, Episode 8 of the “This Week In Voice” podcast features Aiqudo’s co-founders discussing the latest developments in the world of voice.

Hear CEO John Foster and CTO Rajat Mukherjee provide their opinions and perspectives on several recent developments: Alexa’s new “follow-up mode”, Google’s recently announced multi-step routines, the availability of Alexa and Assistant on tablets, the current issues with these assistant platforms on phones, the challenges for banking, payments and other private activities using standalone voice assistants, and the potential proliferation of vertical and specialized voice applications.

You can listen to the podcast here.


The Next Billion


When we started Aiqudo less than a year ago, we were focused on voice as the next big thing in tech, a UI that has the potential to be the most profound and disruptive change in consumer technology to date. Being a Silicon Valley company, we naturally focused on how voice could impact our world — savvy tech users who wanted the easiest, fastest way possible to use their technology, and the changes natural language voice could bring about for businesses serving us: voice search, voice commerce, hands free apps while driving, etc.

But as we work with partners who are focused on global deployments across multiple languages, we’re coming to realize that voice could have a much more far-reaching impact. When interacting with technology becomes completely seamless and intuitive, we will extend access to technology to billions of new users in emerging markets where mobile internet devices have arrived but where language or literacy issues may present barriers to usage.

Today, mobile carriers are pushing hard to capture new users in these frontier markets, offering inexpensive Android phones with unlimited data plans and putting internet connections into more hands than ever. Voice interfaces, localized for languages and for locale-specific apps, will unlock the final accessibility challenge for these users, allowing the benefits of the internet to reach far deeper into many societies that have until now been on the other side of the digital divide. Voice has the potential to become the universal interface to the digital world.

New users and a new user interface will certainly mean new entry points and new modalities of use for a broad range of established businesses. Industries that VCs would consider over with, done, un-investable in developed markets will be up for grabs again, serving billions of users. We’ll see new business models, serving localized needs with localized solutions — this won’t be a walkover for the established incumbents. The next disruption is likely to have its roots far from Silicon Valley.

These are the next billion internet users, and voice is the interface that will power their digital experience.

Walled gardens

Open or Walled?


Voice has the promise to be the next disruption, upending massive, established business models and putting search, commerce, messaging, and navigation up for grabs again. But a walled garden mentality could stifle that disruption.

Even over its relatively short history, we see a pattern of behavior on the Internet: some innovator creates a marketplace for consumers, helping to organize information (Yahoo and AOL in their first iterations), commerce (Amazon), a place to keep in touch with our friends (Facebook), and they create huge consumer value in bringing us together, providing us with tools that make it easy to navigate, buy, message, etc. But as the audience grows, there is always a slide away from an open marketplace toward a walled garden, with the marketplace operators initially becoming toll takers and moving toward ever greater control (and monetization) of their users’ experience, and more recently, their data.

Mobile carriers in the US tried to erect walled gardens around their users in the 1.0 version of mobile content — the carriers thought they had captive users and captive vendors, and so created closed platforms that forced their subscribers to buy content from them. Predictably, monopoly providers offered narrow product offerings at high prices and squeezed their vendors so hard that there was no free cash flow for innovation. Mobile content stagnated, as the carriers failed to cultivate fertile ecosystems in which vendors could make money and in which consumers had a growing variety of new and interesting content. When the iPhone came along (thankfully Steve Jobs could wave his magic wand over the guys at AT&T), consumers could finally use their phones to get to the Internet for the content they wanted, and the carriers went back to being dumb pipes.

Will voice platforms become walled gardens?

If you want to enable your users to reach you through Alexa, you have to create a Skill. Then you have to train your users to invoke your Skill using a precise syntax. Likewise Google Assistant. For Siri, your business has to fit into one of the handful of domains that SiriKit recognizes. There’s a reason we refer to them as voice platforms — their owners are in control.

Initially, there are good QA reasons for this, making sure we get a good user experience. But pretty quickly, the walls will become constraints on who can be included in the garden (will Amazon and Facebook play nice together?), and ultimately, what will be the tax that must be paid in order to offer services in the garden. As users, this results in less openness, fewer choices, and constraints on our ability to quickly and easily do what we want to do, which typically includes using different services from all of the different platform providers (does Tencent really think that if you block people from Alipay inside WeChat that users will stop using Alipay?)

The carriers’ experience should be a cautionary tale — walled gardens, with their limited choices and monopolist pricing, are bad for consumers. The Internet is a place of unlimited choice, and the world of mobile apps is vast and diverse, again allowing for broad consumer choice — this is what we expect, and if our horizons are constrained by a platform’s policies, we’ll abandon it. The carriers fumbled Mobile Content 1.0; their walled gardens never met their promise to become massive businesses, and today they don’t even exist.

Voice interfaces should be our gateway to everything we want to do, whether it’s in Alexa, in our mobile apps, or in our connected cars or homes. So will voice platforms be these open gateways that make our lives easier, or will they be cramped walled gardens that try to make our choices for us, funneling us to a narrow selection of preferred vendors?