Thought to Action!

Here at Aiqudo, we're always working on new ways to drive Actions, and today we're excited to announce a breakthrough in human-computer interaction that makes these Actions possible. We're calling it "Thought to Action™". It's in early-stage development, but it shows promising results.

Here's how it works. We capture the user's brainwave signals via implanted neural-synaptic receptors and transfer the resulting waveforms over BLE to our cloud, where advanced AI and machine learning models translate the user's "thoughts" into specific app actions that are then executed on the user's mobile device. In essence, we've transcended the use of voice to drive actions. Just think about the possibilities. No more messy and embarrassing moments when your phone's speech recognizer gets your command wrong: "Tweet Laura, I love soccer" might end up as "Tweet Laura, I'd love to sock her". With "Thought to Action™" we get it right every time. It's also perfect for today's noisy environments. Low on gas while driving your kids' entire soccer team home from a winning match? Simply think "Find me the nearest gas station" and let Aiqudo do the rest. Find yourself in a boring meeting? Send a text to a friend using just your thoughts.

Stay tuned as we work to bring this newest technology to a phone near you.

The Evolution of Our In-Car Experience

As the usage model for cars continues to shift away from traditional ownership and leasing to on-demand, ridesharing, and in the future, autonomous vehicle (AV) scenarios, how we think about our personal, in-car experience will need to shift as well.

Unimaginable just a few short years ago, today we think nothing of jumping into our car and streaming our favorite music through the built-in audio system using our Spotify or Pandora subscription. We also expect the factory-installed navigation system to instantly pull up our favorite or most commonly used locations (after we've entered them) and present us with the best route to or from our current location. And once we pair our smartphone with the media system, we can have our text and email messages not only appear on the onboard screen but also read to us using built-in text-to-speech capabilities. It's a highly personalized experience in our car.

When we use a pay-as-you-go service, such as Zipcar, we know we're unlikely to have access to all of the tech comforts of our own vehicle, but we can usually find a way to get our smartphone paired for hands-free calling and streaming music over Bluetooth. If not, we end up using the navigation app on our phone and awkwardly holding it while driving, trying to multitask. It's not pretty. And when we hail a rideshare, we don't expect to have access to any of the creature comforts of our own car.

But what if we could?

Just as our relationship to media shifted from an ownership model–CDs or MP3 files on iPods–to subscription-based experiences that are untethered to a specific device but can be accessed anywhere at any time, it’s time to shift our thinking about in-car experiences in the same way.

It’s analogous to accessing your Amazon account and continuing to watch the new season of “True Detective” on the TV at your Airbnb–at the exact episode where you left off last week. Or listening to your favorite Spotify channel at your friend’s house through her speakers.

All your familiar apps (not just the limited Android Auto or Apple CarPlay versions) and your personalized in-car experience–music, navigation, messaging, even video (if you’re a passenger, of course)–will be transportable to any vehicle you happen to jump into, whether it’s a Zipcar, rental car or some version of a rideshare that’s yet to be developed. What’s more, you’ll be able to easily and safely access these apps using voice commands. Whereas today our personal driving environment is tied to our own vehicle, it will become something that’s portable, evolving as our relationship to cars changes over time.

Just on the horizon of this evolution in our relationship with automobiles? Autonomous vehicles, or AVs, in which we become strictly a passenger, perhaps one of several people sharing a ride. Automobile manufacturers today are thinking deeply about what this changing relationship means to them and to their brands. Will BMW become "The Ultimate Riding Machine"? (As a car guy, I personally hope not!) And if so, what will be the differentiators?

Many car companies see the automobile as a new digital platform, for which each manufacturer creates its own, branded, in-car digital experience. In time, when we hail a rideshare or an autonomous vehicle, we could request a Mercedes because we know that we love the Mercedes in-car digital experience, as well as the leather seats and the smooth ride.

What happens if we share the ride in the AV, because, well, they are rideshare applications after all? The challenge for the car companies becomes creating a common denominator of services that define that branded experience while still enabling a high degree of personalization. Clearly, automobile manufacturers don’t want to become dumb pipes on wheels, but if we all just plug in our headphones and live on our phones, comfy seats alone aren’t going to drive brand loyalty for Mercedes. On the other hand, we don’t all want to listen to that one guy’s death metal playlist all the way to the city.  

The car manufacturers cannot create direct integrations to all services to accommodate infinite personalization. In the music app market alone there are at least 15 widely used apps, but what if you’re visiting from China? Does your rideshare support China’s favorite music app, QQ?  We’ve already made our choices in the apps we have on our phones, so transporting that personalized experience into the shared in-car experience is the elegant way to solve that piece of the puzzle.

This vision of the car providing a unique digital experience is not that far-fetched, nor is it that far away from becoming reality. It’s not only going to change our personal ridesharing experience, but it’s also going to be a change-agent for differentiation in the automobile industry.

And it’s going to be very interesting to watch.

AI for Voice to Action – Part 3: The Importance of Jargon in Understanding User Intent

In my last post I discussed how semiotics, and observations of how discourse communities interact, influenced the design of our machine learning algorithms. I also emphasized the importance of discovering jargon words as part of our process of understanding user commands and intents.

In this post, we describe in more depth how this "theory" behind our algorithms actually works. In that post we also discussed what constitutes a good jargon word: "computer" is a poor example because it is too broad in meaning, whereas a term relating to a computer chip, e.g. "Threadripper" (a gaming processor from AMD), is a better example because it is more specific in meaning and is used in fewer contexts.

Jargon terms and Entropy

So – how do we identify good jargon terms and what do we do with them in order to understand user commands?

To do this we use entropy. In general, entropy is a measure of chaos or disorder; in an information-theory context, it can be used to determine how much information is conveyed by a term. Because jargon words have a very narrow and specific meaning within particular discourse communities, they have lower entropy (and carry more information value) than broader, more general terms.

To determine entropy, we take each term in our synthetic documents (see this post for more information on how we create this data set) and build a probability profile of co-occurring terms. The diagram below shows an example (partial) probability distribution for the term 'computer'.

Figure 1: Entropy – probability distributions for jargon terms

These co-occurring terms can be thought of as the context for each potential jargon word. We then use this probability profile to determine the entropy of the word. If that entropy is low then we consider it to be a candidate jargon word.
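The idea can be sketched in a few lines of Python (a simplified illustration, not our production pipeline; the toy documents and tokenization are invented for the example): build a co-occurrence profile for each term and compute the Shannon entropy of that profile. Terms whose profiles are concentrated, and therefore low in entropy, are jargon candidates.

```python
import math
from collections import Counter, defaultdict

def cooccurrence_profiles(documents):
    """For each term, count how often every other term appears in the same document."""
    profiles = defaultdict(Counter)
    for doc in documents:
        tokens = doc.lower().split()
        for term in set(tokens):
            for other in tokens:
                if other != term:
                    profiles[term][other] += 1
    return profiles

def entropy(profile):
    """Shannon entropy (in bits) of a term's co-occurrence distribution."""
    total = sum(profile.values())
    return -sum((c / total) * math.log2(c / total) for c in profile.values())

# Toy synthetic "command documents", invented purely for illustration.
docs = [
    "overclock threadripper processor benchmark",
    "threadripper processor cores benchmark",
    "computer desktop monitor keyboard",
    "computer network firmware update",
    "computer tablet phone backup",
]

profiles = cooccurrence_profiles(docs)
for term in ("threadripper", "computer"):
    print(term, round(entropy(profiles[term]), 2))
# 'threadripper' co-occurs with a narrow set of terms, so its entropy is low (jargon candidate);
# 'computer' appears in many unrelated contexts, so its entropy is higher (too general).
```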

Having identified the low-entropy jargon words in our synthetic command documents, we then use their probability distributions as attractors for the documents themselves. In this way (as seen in the diagram below) we create a set of document clusters where each cluster relates semantically to a jargon term. (Note: for clarity, clusters in the figure below are labeled with high-level topics rather than the jargon words themselves.)

Figure 2: Using jargon words as attractors to form clusters
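One way to picture the attractor step (an illustrative sketch with hypothetical jargon profiles rather than ones learned from real data): score each synthetic document against the co-occurrence profile of each low-entropy jargon term and place it in the cluster of the best-matching term.

```python
from collections import Counter

def score(doc_tokens, profile):
    """Overlap between a document and a jargon term's co-occurrence profile."""
    total = sum(profile.values())
    return sum(profile[t] / total for t in doc_tokens if t in profile)

def cluster_by_attractor(documents, jargon_profiles):
    """Assign each document to the jargon term whose context profile it matches best."""
    clusters = {term: [] for term in jargon_profiles}
    for doc in documents:
        tokens = doc.lower().split()
        best = max(jargon_profiles, key=lambda term: score(tokens, jargon_profiles[term]))
        clusters[best].append(doc)
    return clusters

# Hypothetical jargon profiles; in practice these come from the entropy step above.
jargon_profiles = {
    "threadripper": Counter({"processor": 2, "benchmark": 2, "overclock": 1, "cores": 1}),
    "cardio": Counter({"workout": 3, "heart": 2, "rowing": 1, "cycling": 1}),
}
docs = ["overclock the processor benchmark", "rowing workout for heart rate"]
print(cluster_by_attractor(docs, jargon_profiles))
```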

We then build a graph within each cluster that connects documents based on how similar they are in terms of meaning. We identify 'neighborhoods' within these graphs that correspond to areas of intense similarity. For example, a cluster may be about "cardiovascular fitness", whereas a neighborhood may be more specifically about "high intensity training", "rowing", "cycling", etc.

Figure 3: Neighborhoods for the cluster "cardiovascular fitness"

These neighborhoods can be thought of as sub-topics within the overall cluster topic. Within each sub-topic we can then extract important meaning-based phrases that precisely describe what that neighborhood is about, e.g. "HIIT", "anaerobic high-intensity period", "cardio session", etc.

Figure 4: Meaning-based phrases for the "high intensity training" sub-topic
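The graph-and-neighborhood step can be sketched along the same lines (a toy version: the real similarity measures, thresholds and graph algorithms are more sophisticated): connect documents whose term vectors are sufficiently similar and treat each connected group as a neighborhood, i.e. a sub-topic.

```python
from collections import Counter
from itertools import combinations

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def neighborhoods(docs, threshold=0.4):
    """Link similar documents, then return the connected components (sub-topics)."""
    vectors = [Counter(d.lower().split()) for d in docs]
    adjacency = {i: set() for i in range(len(docs))}
    for i, j in combinations(range(len(docs)), 2):
        if cosine(vectors[i], vectors[j]) >= threshold:
            adjacency[i].add(j)
            adjacency[j].add(i)
    seen, groups = set(), []
    for start in adjacency:
        if start in seen:
            continue
        stack, group = [start], set()
        while stack:
            node = stack.pop()
            if node not in group:
                group.add(node)
                stack.extend(adjacency[node] - group)
        seen |= group
        groups.append(sorted(group))
    return groups

# Toy documents standing in for a "cardiovascular fitness" cluster.
cluster_docs = [
    "hiit anaerobic high intensity interval session",
    "high intensity interval training cardio session",
    "rowing machine steady state cardio",
    "rowing stroke rate steady state",
]
print(neighborhoods(cluster_docs))  # [[0, 1], [2, 3]]: a HIIT sub-topic and a rowing sub-topic
```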

In this way we create meaning-based structure from completely unstructured content. Documents from the same cluster relate to the same discourse community. Documents from the same cluster that share similar important terms or phrases can be regarded as relating to the same sub-topic. If two clusters share a large number of important phrases, this represents a dialogue between two discourse communities. If multiple important phrases are shared among many clusters, this represents a dialogue among multiple communities.

So having described a little bit about the algorithms themselves, how do they help us understand the correct meaning behind a user’s command? Given this contextual partitioning of the data into discourses based on jargon terms, we can disambiguate among the many different meanings a term can have. For example, if the user were to say ‘open the window’ – we will be able to understand that there is a meaning (discourse) relating to both buildings and to software but if the user were to say ‘minimize the window’, we would understand that this could only have a software meaning and context. Fully understanding the nuances behind a user’s command is, of course, much more complicated than what I have just described, but the goal here is to give a high level overview of the approach.
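A toy illustration of that disambiguation (the discourse profiles here are hand-made stand-ins for the learned clusters): checking a command's content words against each discourse's vocabulary leaves "open the window" ambiguous but narrows "minimize the window" to the software sense.

```python
from collections import Counter

# Hand-made context profiles for two discourses in which "window" can occur.
profiles = {
    "buildings": Counter({"open": 3, "close": 3, "glass": 2, "room": 2, "curtain": 1}),
    "software":  Counter({"open": 3, "close": 3, "minimize": 2, "maximize": 2, "browser": 1}),
}

def candidate_discourses(command):
    """Return the discourses whose vocabulary covers the command's content words."""
    tokens = [t for t in command.lower().split() if t not in {"the", "a", "an"}]
    return [name for name, profile in profiles.items()
            if all(token in profile or token == "window" for token in tokens)]

print(candidate_discourses("open the window"))      # ['buildings', 'software']: ambiguous
print(candidate_discourses("minimize the window"))  # ['software']: only one sense fits
```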

In subsequent posts, we will discuss how we extract parameters from commands, how we accurately determine which app action to execute, and how we pass the correct parameters to that action.

David Patterson and Vladimir Dobrynin

Aiqudo, Inc.

Silicon Valley Voice Pioneer Aiqudo Unveils Its Latest Software Platform

Enables Anyone to Use Their Voice to Control and Interact with Thousands of Mobile Apps

SAN JOSE, CALIFORNIA (BUSINESS WIRE), December 12, 2018

Aiqudo unveiled a set of breakthrough advances to Q Actions, its industry-leading voice enablement platform, that for the first time make it possible for anyone to navigate their lives through their mobile apps seamlessly using natural voice commands. Now, mobile applications can talk back to users to confirm instructions, conduct multi-step processes and even proactively alert users to new messages and read them back.

“Our Directed Dialogue feature helps users to easily complete complex tasks”

Unlike other voice platforms, Aiqudo serves users by working directly with the apps they have already downloaded on their mobile phones, eliminating the self-serving walled gardens erected by those platforms. Consumers may never be able to check Facebook instant messages from Alexa or access an Amazon wish list from Google Assistant and go shopping. Aiqudo removes this obstacle and makes voice the simplest, fastest, most intuitive interface for consumer technologies.

“By focusing on extending dominance in their legacy businesses such as ecommerce or search, the major voice platforms have failed to deliver on their own hype around voice,” said John Foster, CEO of Aiqudo. “We’ve taken a better route focused on making voice truly useful today. We’re app-centric, platform-agnostic and let consumers use voice on their own terms, not just when they’re standing next to a device in their living rooms. Our voice assistant needs to be available to us whether we’re in a car, on a train with our hands full or wandering around an amusement park.”

At the center of the latest version of Aiqudo are features such as:

  • Directed Dialogue: Aiqudo quickly and easily guides users to successful actions, prompting them to provide all required pieces of information, whether it’s a calendar event requiring start and end times, location and event name, or providing party size and time for booking a table at a restaurant.
  • Compound Commands: Your favorite apps and mobile phone features can now work collaboratively to get everyday requests completed. Executing multiple actions with a single command is easier than ever – navigate with Waze or another traffic app and notify your friends of a late arrival with your favorite messaging app – all with one single request.
  • Voice Talkback: Don’t want to be distracted looking at your phone? Aiqudo can read back results from your favorite apps such as news headlines, stock quotes and message responses.

“Our Directed Dialogue feature helps users to easily complete complex tasks,” said Rajat Mukherjee, CTO of Aiqudo. “A user is prompted only for information an action requires that she has not already provided in her command. Because we understand the semantics of all actions in the system, Directed Dialogue works out of the box for every one of our actions and does not require configuration, customized training or huge volumes of training data.”
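Conceptually, Directed Dialogue behaves like slot-filling: the action declares the parameters it needs, the command supplies some of them, and the user is prompted only for the rest. The sketch below illustrates the idea with an invented restaurant-booking action; the slot names and prompting logic are hypothetical, not Aiqudo's actual schema or API.

```python
# Hypothetical slot-filling loop for a "book a table" action (illustration only).
REQUIRED_SLOTS = {"restaurant", "party_size", "time"}

def missing_slots(provided):
    """Slots the action still needs before it can execute."""
    return REQUIRED_SLOTS - {slot for slot, value in provided.items() if value}

def directed_dialogue(provided):
    gaps = missing_slots(provided)
    if not gaps:
        return f"Booking {provided['restaurant']} for {provided['party_size']} at {provided['time']}."
    # Prompt only for what the command did not already contain.
    return "I still need: " + ", ".join(sorted(gaps))

# "Book a table at Luigi's at 7pm": the party size was never mentioned.
print(directed_dialogue({"restaurant": "Luigi's", "time": "7pm", "party_size": None}))
print(directed_dialogue({"restaurant": "Luigi's", "time": "7pm", "party_size": 4}))
```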

Deploying a semiotics-based language modeling platform enables multi-lingual natural language commands, while Aiqudo's app analysis engine allows rapid on-boarding of apps to provide high utility and broad coverage. Today Aiqudo supports thousands of applications, ranging from ecommerce apps like Amazon, Walmart and eBay, and entertainment apps like Netflix, Spotify and Pandora, to favorite messaging and social apps including WhatsApp, WeChat and Messenger.

Aiqudo Q Actions 2.0 will be available on Google Play by year end, and the company has already struck OEM relationships with the likes of Motorola for the technology to be embedded directly into phones.

To view product demo videos, visit Aiqudo’s YouTube channel.

About Aiqudo
Aiqudo (pronounced: “eye-cue-doe”) is a software pioneer that connects the nascent world of digital voice assistants to the useful, mature world of mobile apps through its Voice-to-Action™ platform. It lets people use voice commands to execute actions in mobile apps across devices. Aiqudo’s SaaS platform uses machine learning (AI) to understand natural-language requests and then triggers instant actions via mobile apps consumers prefer to use to get things done quickly and with less effort. For more info, visit: http://www.aiqudo.com

Business Wire: Silicon Valley Voice Pioneer, Aiqudo, Unveils Its Latest Software Platform

Voice Enable System Settings with Q Actions 1.3.3!

Somewhere in the Android Settings lies the option for you to turn on Bluetooth, turn off Wifi, or change your sound preferences. These options are usually buried deep under menus and sub-menus. Discoverability is an issue, and navigating to an option usually means multiple taps within the Settings app. Yes, there's a search bar within the Settings app, but it's clunky, requires typing and only returns exact matches. Some of these options are accessible through the quick-settings bar, but the discovery and navigation issues remain.

In the latest release, simply tell Q Actions what System Settings you want to change. Q Actions can now control your Bluetooth, Wifi, music session, and sound settings through voice.

Configure your Settings:

  • “turn on/off bluetooth”
  • “turn wifi on/off”

Control your music:

  • “play next song”
  • “pause music”
  • “resume my music”

Toggle your sound settings:

  • “enable do not disturb”
  • “mute ringer”
  • “increase the volume”
  • “put my phone on vibrate”

In addition to placing calls to your Contacts, Q Actions helps you manage Contacts via voice. Easily add a recent caller as a contact in your phonebook or share a friend’s contact info with simple commands. If you have your contact’s address in your Contacts, you can also get directions to the address using your favorite navigation app.

Place calls to Contacts:

  • "call Jason Chen"
  • “dial Mario on speaker”

Manage and share your Contacts:

  • "save recent number as Mark Johnson"
  • "edit Helen's contact information"
  • "share contact info of Daniel Phan"
  • “view last incoming call”

Bridge the gap between your Contacts and navigation apps:

  • “take me to Rob’s apartment”
  • “how do I get to Mike’s house?”

Unlock your phone’s potential with voice! Q Actions is now available on Google Play.

AI for Voice to Action – Part 2: Machine Learning Algorithms

My last post discussed the important step of automatically generating vast amounts of relevant content relating to commands, to which we then apply our machine learning algorithms. Here I want to delve into the design of those algorithms.

Given a command, our algorithms need to:

  1.   Understand the meaning and intent behind the command
  2.   Identify and extract parameters from it
  3.   Determine which app action is most appropriate
  4.   Execute the chosen action and pass the relevant parameters to the action

This post and the next one will address point 1. The other points will be covered in subsequent posts.

So how do we understand what a user means based on their command? Typically, commands are short (3 or 4 terms), which makes it very difficult to disambiguate among the multiple meanings a term can have. If someone says "search for Boston", do they want directions to a city, or do they want to listen to a rock band on Spotify? In order to disambiguate among all the possibilities we need to know a) whether any of the command terms can have different meanings, b) what those meanings are, and finally c) which is the correct one based on context.

Semiotics

In order to do this we developed a suite of algorithms that feed off the data we generated previously (see post #3). These algorithms are inspired by semiotics, the study of how meaning is communicated. Semiotics originated as a theory of how we interpret the meaning of signs and symbols. Given a sign in one context, for example a flag with a skull and crossbones on it, you would assign a particular meaning to it (i.e. pirates).

Pirate Symbol

Whereas if you change the context to a bottle, the meaning changes completely:

Poison Bottle

Poison – do not drink!

Linguists took these ideas and applied them to language: given a term (e.g. 'window'), its meaning can change depending on the meaning of the words around it in the sentence (a physical window in a room, a software window, a window of opportunity, etc.). By applying these ideas to our data we can understand the different meanings a term can have based on its context.

Discourse Communities

We also drew inspiration from discourse communities. A discourse community is a group of people involved in and communicating about a particular topic. They tend to use the same language for important concepts (sometimes called jargon) within their community, and these terms have a specific, understood and agreed meaning within the community that makes communication easier. For example, members of a cycling community have their own set of terms, fairly unique to them, that they all understand and adhere to. If you want to see what I mean, go here and learn the meanings of such terms as an Athena, a Cassette, a Chamois (very important!) and many others. Similarly, motor enthusiasts have their own 'lingo'. If you want to be able to differentiate your AWS from your ABS and your DDI from your DPF, then get up to speed here.

Our users use apps, so we would also expect to discover gaming discourses, financial discourses, music discourses, social media discourses and so on. Our goal was to develop a suite of machine learning algorithms that could automatically identify these communities through their important jargon terms. By identifying the jargon terms we can build a picture of the relationship between these terms and the other terms used by each discourse community within our data. A characteristic of jargon words is that they have a very narrow meaning within a discourse compared to other terms. For example, the term 'computer' is a very general term that can have multiple meanings across many discourses – programming, desktop, laptop, tablet, phone, firmware, networks, etc. 'Computer' isn't a very good example of a jargon term because it is too general and broad in meaning. We want to identify narrow, specific terms that have a very precise meaning within a single discourse, e.g. a specific type of processor, or a motherboard; the toy calculation below makes this distinction concrete. Our algorithms do a remarkable job of identifying these jargon terms, and they are foundational to our ability to extract meaning, precisely understand user commands and thereby the real intent that lies behind them.
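The toy calculation (with invented probabilities) shows why narrow terms stand out: a general term like 'computer' spreads its co-occurrence probability across many contexts, while a specific term like 'motherboard' concentrates it on a few, giving it much lower entropy.

```python
import math

def entropy(probabilities):
    """Shannon entropy (in bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Invented context distributions, purely for illustration.
computer    = [0.125] * 8        # spread evenly across 8 unrelated contexts
motherboard = [0.6, 0.3, 0.1]    # concentrated in a few hardware contexts

print(round(entropy(computer), 2))     # 3.0 bits: broad, general term
print(round(entropy(motherboard), 2))  # ~1.3 bits: narrow, jargon-like term
```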

In my next post I will go into the details behind the algorithms that enable us to identify these narrow-meaning, community-specific jargon terms and ultimately to build a model that understands the meaning and intent behind user queries.

AI for Voice to Action – Part 1: Data

At Aiqudo two critical problems we solve in voice control are the action discovery problem and the cognitive load problem.

In my first post I discussed how using technology to overcome the challenges of bringing voice control into the mainstream motivated me to get out of bed in the morning. I get a kick out of seeing someone speaking naturally to their device and smiling when it does exactly what they wanted.

In our second post in the series we discussed how Aiqudo has built the largest (and growing) mobile app action index in the world, and our process for on-boarding actions. On-boarding an action takes only minutes – there is no programming involved and we are not reliant on the app developer to set anything up or provide an API. This enables enormous scalability of actions compared to the Amazon and Google approaches, which rely on a programming solution where developers are required to code to these platforms, add specific intents, and go through a painful approval process.

In this post I want to start to elaborate on our overall approach, and specifically to discuss how we create the large amounts of content that our patented machine learning algorithms analyze in order to understand a user's intent. This is a significant achievement, since even large teams are struggling to solve this problem in a generic fashion – as the following quote from Amazon shows.

“The way we’re solving that is that you’ll just speak, and we will find the most relevant skill that can answer your query … The ambiguity in that language, and the incredible number of actions Alexa can take, that’s a super hard AI problem.” – Amazon

At Aiqudo, we have already solved the challenge that Amazon is working on. Our users don't have to specify which app to use; we automatically pick the right actions for their command, thereby reducing the cognitive load on the user.

The starting point for generating the content we need is the end of the action on-boarding process, when a few sample commands are added to the action. These training commands kick off the machine learning processes that enable us to

  1. extract the correct meaning from the natural language command;
  2. understand the intent; and
  3. execute the correct action on the best app.

The first step in this process is to gather content relating to each on-boarded command (command content). As is typical with machine learning approaches, we are data-hungry – the more data we have, the better our performance. We therefore use numerous data repositories specific to the on-boarded commands and apps and interrogate them to identify related content that can be used to augment the language used in the command, as sketched below.
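As a rough sketch of this augmentation step (the 'repository' and the overlap-based matching are simple stand-ins for our actual repositories and relevance measures): pull passages that share vocabulary with the on-boarded command and pool them into the command content.

```python
def augment_command(command, repository, min_overlap=2):
    """Collect repository passages that share enough vocabulary with the command."""
    command_terms = set(command.lower().split())
    related = [passage for passage in repository
               if len(command_terms & set(passage.lower().split())) >= min_overlap]
    return " ".join([command] + related)

# Toy repository standing in for app descriptions, help pages and other sources.
repository = [
    "open a new browser window to search the web",
    "close the window and lock the door to the room",
    "adjust the window blinds to let in more light",
]
print(augment_command("open the window", repository))
```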

Content augmentation removes noise and increases the semantic coverage of terms

Teaching a machine to correctly understand what a user intends from just a few terms in a command is problematic (as it would be for a human) – there isn't enough context to fully understand the command. For example, is 'open the window' a software-related command or a command related to a room? Augmenting the command with additional content gives the algorithms much more context with which to understand meaning and intent. This augmented content forms the basis of a lexicon of terms relating to each on-boarded command. Later, when we apply our machine learning algorithms, this lexicon provides the raw data that enables us to build and understand meaning – e.g. we can understand that a movie is similar to a film, that rain is related to weather, that the term 'window' has multiple meanings, and so on.

It is equally important that each command's lexicon is highly relevant to the command and low in noise, so we automatically assess each term within the lexicon to determine its relevance and remove noise. The resulting low-noise lexicon becomes the final lexicon of terms for each command. We then generate multiple command documents from the lexicon for each command. Each command document is generated by selecting terms based on the probability of their occurrence within the command's lexicon: the more likely a term is to occur within the lexicon, the more likely it is to occur in a command document. Note that these synthetically created documents do not make sense to a human; they are simply a reflection of the probabilities of occurrence of terms in the command's lexicon. It is these synthetic command documents that we use to train our machine learning algorithms to understand meaning and intent. Because they are synthetically generated, we can also control the number of command documents we create to fine-tune the learning process.
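A minimal sketch of this generation step (the lexicon and its probabilities are invented for illustration): treat the cleaned lexicon as a probability distribution and sample terms from it to synthesize command documents of whatever size and number we need.

```python
import random

def generate_command_documents(lexicon, num_docs=3, doc_length=8, seed=42):
    """Sample terms in proportion to their weight in the command's lexicon."""
    rng = random.Random(seed)
    terms = list(lexicon)
    weights = [lexicon[t] for t in terms]
    return [" ".join(rng.choices(terms, weights=weights, k=doc_length))
            for _ in range(num_docs)]

# Hypothetical low-noise lexicon for the command "navigate to the gas station".
lexicon = {"navigate": 0.25, "route": 0.2, "directions": 0.2,
           "gas": 0.15, "station": 0.15, "fuel": 0.05}
for doc in generate_command_documents(lexicon):
    print(doc)
# The output is not human-readable prose; it simply mirrors the lexicon's term probabilities.
```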

Once we have carefully created a relevant command lexicon and built a repository of documents which relate to each command that has been on-boarded, we are ready to analyze the content, identify topics and subtopics, disambiguate among the different meanings words have and understand contextual meaning.  Our innovative content augmentation approach allows us to quickly deploy updated machine learned models that can immediately match new command variants, so we don’t have to wait for large numbers of live queries for training as with other approaches.

The really appealing thing about this approach is that it is language-agnostic – it allows us to support users speaking in any language by interrogating multilingual content repositories. Currently we are live in 12 markets in 7 languages, and we are enabling new languages. We're proud of this major accomplishment in such a short timeframe.

In my next post in this series, I will say a little more about the machine learning algorithms we have developed that have enabled us to build such a scalable, multi-lingual solution.

The largest mobile App Action index in the world!

You often hear the phrase "going from 0 to 1" when it comes to reaching a first milestone – an initial product release, the first user, the first partner, the first sale. Here at Aiqudo, I believe our "0 to 1" moment occurred at the end of the summer of 2017, when we reached our aspirational goal of on-boarding a total of 1,000 Actions. It was a special milestone for us: we had built an impressive library of actions across a broad range of app categories, using simple software tools, in a relatively short time, with only a handful of devs and interns. For comparison, we were only 5 months in operation and already had one-tenth as many actions as that "premier bookseller in the cloud" company. These were not actions for games and trivia – these were high-utility actions in mobile apps that were not available on other voice platforms. On top of that, we did it all without a single app developer's help – no APIs required. That's right, no outside help!

So how were we able to accomplish this? Quite simply, we took what we knew about Android and Android apps and built a set of tools and techniques that allow us to reach specific app states or execute app functions. Our initial approach provided simple record-and-replay mechanics, allowing us to reach virtually any app state that could be reached by the user. Consequently, actions such as showing a boarding pass for an upcoming flight, locating nearby friends through social media or sending a message could be built, tested, and deployed in a matter of minutes, with absolutely no programming involved! But we haven't stopped there. We also incorporate app-specific and system-level intents whenever possible, providing even more flexibility in the action on-boarding process and in our growing library of actions, including those that control Alarms, Calendar, Contacts, Email, Camera, Messaging and Phone, to name a few. With the recent addition of system-level actions, we now offer a catalog of very useful actions for controlling various mobile device settings such as audio controls, display orientation and brightness, Wifi, Bluetooth, flash and speaker volume.
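To give a feel for what a recorded action looks like conceptually, here is a hypothetical sketch: a small declarative recipe of UI steps that a device-side executor replays. The field names, steps and package name are invented for illustration and are not our internal format.

```python
# Hypothetical representation of a recorded app action (not Aiqudo's actual schema).
boarding_pass_action = {
    "app": "com.example.airline",          # invented package name
    "action": "show boarding pass",
    "steps": [
        {"type": "launch"},
        {"type": "tap", "element": "My Trips"},
        {"type": "tap", "element": "Boarding Pass"},
    ],
    "sample_commands": ["show my boarding pass", "pull up my boarding pass"],
}

def replay(action, executor):
    """Replay each recorded step through a device-side executor (a UI automation layer)."""
    for step in action["steps"]:
        executor(step)

# A stand-in executor that just logs what would be performed on the device.
replay(boarding_pass_action, executor=lambda step: print("perform", step))
```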

Our action on-boarding process and global actions library solve the action discovery problem that we described in an earlier post. We do the heavy lifting, so all you need to say is "show my actions", or "show my actions for Facebook", and get going! And you don't need to register your credentials to invoke your personal actions.

Today our action library is ~4000 strong and supports 7 languages across 12 locales.  Not bad for a company less than a year and a half old! We haven’t fully opened up the spigot either! 

Of course, all of this would not be possible without the hard work of the Aiqudo on-boarding team whose job, among other things, is to create and maintain Actions for our reference Q Actions app as well as our partner integrations.   The team continues to add new and interesting actions to the Aiqudo Action library and optimize and re-onboard actions as needed to maintain a high quality of service.

Check back with us for a follow-on post where we’ll discuss how our team maintains actions through automated testing.

What motivates me to get out of bed in the morning?

A while back a friend bought an Alexa speaker. He was so excited about the prospect of speaking to his device and getting cool things done without leaving the comfort of his chair. A few weeks later, when I next saw him, I asked how he was getting on with it, and his reply was very insightful and typical of the problems current voice platforms pose.

Initially, when he plugged it in, after asking the typical questions everyone does ('what is the weather' and 'play music by Adele'), he set about seeing what other useful things he could do. He quickly discovered that it wasn't easy to find out which third-party skills were integrated with Alexa (I call this the action discovery problem). When he found a resource that provided this information, he went about adding skills – local news headlines, a joke teller, Spotify (requiring registration), quiz questions and so on. Then he hit his next problem: in order to use these skills, he had to learn a very specific set of commands to execute their functionality. This was fine for two or three skills, but it very soon became overwhelming. He found himself forgetting the precise language to use for each specific skill and soon became frustrated (the cognitive load problem).

Last week, when I saw him again, he had actually given the speaker to his son, who was using it as a music player in his bedroom. Once the initial 'fun' of the device wore off, it became apparent that there was very little real utility in it for him. While some skills had value, it was painful to find out about them in the first place, add them to Alexa and then remember the specific commands to execute them…

The reason I found this so interesting was that these are precisely the problems we have solved at Aiqudo. Our goal is to give consumers a truly natural voice interface to actions, starting with all the functionality in their phone apps, without having to remember the specific commands needed to execute them. For example, if I want directions to the SAP Center in San Jose to watch the Sharks, I might say 'navigate to the SAP Center', 'I want to drive to the SAP Center' or 'directions to the SAP Center'. Since a user can use any of these commands, or other variants, they should all just work. Constraining users to learn the precise form of a command just frustrates them and provides a poor user experience. In order to get the maximum utility from voice, we need to understand the meaning and intent behind the command, irrespective of what the user says, and be able to execute the right action.

So how do we do it?

This is not a simple answer, so we plan to cover the main points in a series of blog posts over the coming weeks. These will focus at a high level on the processes, the technology, the challenges and the rationale behind our approach. Our process has 2 main steps.

  • Understand the functionality available in each app and on-board these actions into our Action Index
  • Understand the intent of a user’s command and subsequently, automatically execute the correct action.

In step 1, by doing the ‘heavy lifting’ and understanding the functionality available within the app ecosystem for users, we overcome the action discovery problem my friend had with his Alexa speaker. Users can simply say what they want to do and we find the best action to execute automatically – the user doesn’t need to do anything. In fact if they don’t have an appropriate app on their device for the command they have just issued we actually recommend it to them and they can install it!  

Similarly, in step 2, by allowing users the freedom to speak naturally and choose whatever linguistic form of command they wish, we overcome the second problem with Alexa – the cognitive load problem: users no longer have to remember very specific commands to execute actions. Voice should be the most intuitive user interface – just say what you want to do. We built the Aiqudo platform to understand the wide variety of ways users might phrase their commands, allowing users to go from voice to action easily and intuitively. And did I mention that the Aiqudo platform is multilingual, enabling natural language commands in any language the user chooses to speak in?

So getting back to my initial question – what motivates me to get out of bed in the morning? – well, I’m excited to use technology to bring the utility of the entire app ecosystem to users all over the world so they can speak naturally to their devices and get stuff done without having to think about it!

In the next post in this series, we'll talk about step 1 – making the functionality in apps available to users.

Automate your day with Action Recipes in Q Actions 1.3.2!

Finding yourself routinely using the same set of actions as you commute to work or prepare for your workout session? Action Recipes string together your favorite actions to help you get through the day. With Action Recipes, you can run multiple actions from the apps you use with simple voice commands.

Hands on the steering wheel as you start your daily commute to work?

“start my morning commute”

  • Start streaming NPR as Google Maps navigates you through the best route to work

Earbuds on, phone stowed, as you get ready for your routine run around the neighborhood or nearby trail?

“start my workout”

  • Play your favorite tracks on Spotify, Pandora, or Google Play Music as MapMyRun, Mi Fit or Google Fit logs your workout session

Hands tied as you gather your carry-ons and prepare to board the plane?

“ready to board”

  • Send someone a quick message through SMS, WhatsApp, or WeChat as United, American Airlines, Alaska, or Delta brings up your boarding pass

These Action Recipes are already created for your convenience. Just grab the latest version of Q Actions from the Google Play Store and swipe left until you reach the My Action Recipes page to preview your supported Action Recipes. More interesting recipes will just start surfacing here as they come online.

Action Recipes are automatically personalized for you – the right actions are executed based on the apps you use for these tasks. We are working on further controls and customizability.
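Conceptually, a recipe is just an ordered list of actions, each resolved at run time to whichever capable app you actually have installed. The sketch below illustrates that idea; the recipe structure, app lists and names are invented for illustration and are not the product's internal format.

```python
# Hypothetical sketch of how a recipe might resolve to the user's own apps.
RECIPES = {
    "start my morning commute": ["stream news radio", "navigate to work"],
}

# Apps capable of each action, in preference order (invented for illustration).
CAPABLE_APPS = {
    "stream news radio": ["NPR One", "TuneIn"],
    "navigate to work": ["Google Maps", "Waze"],
}

def run_recipe(command, installed_apps):
    for action in RECIPES[command]:
        app = next((a for a in CAPABLE_APPS[action] if a in installed_apps), None)
        print(f"{action!r} -> {app or 'suggest an app to install'}")

run_recipe("start my morning commute", installed_apps={"NPR One", "Waze"})
# 'stream news radio' -> NPR One
# 'navigate to work' -> Waze
```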

We would love to hear your feedback on these Action Recipes. Please let us know what you think!