Erica - Bank of America Voice Assistant

A Voice Success Story: Erica from Bank of America


It’s no secret that a growing number of companies are recognizing the opportunities for new, branded experiences presented by voice interfaces powered by AI. In fact, Gartner predicts that 25 percent of digital workers will use virtual assistants daily by 2021, and brands already using chatbots have seen the number of leads they collect increase by as much as 600 percent over traditional lead generation methods.

These AI-driven voice assistants and chatbots have also become useful cost-cutting tools for companies with large subscriber bases – banks, insurance companies, and mobile phone operators, to name a few. A 2017 Juniper Research report calculates that, for every inquiry handled by a chatbot, banks save four minutes of an agent’s time, which translates to a cost saving of $0.70 per query. These platforms are expected to save banks an estimated $7.3 billion in operational costs by 2023. 

The real opportunity presented by voice assistants is in delighting the customer and strengthening brand loyalty, which inevitably drives revenue. We’re entering an exciting time where voice can redefine the relationship consumers have with their technology and open up features that users didn’t previously know existed, or didn’t know they cared about.

A 2017 PwC report described chatbots as adding “a new dimension to the power of ‘personal touch’ and massively [enhancing] customer delight and loyalty.” 

In my own life, I can’t think of a better example of this than Erica, Bank of America’s AI-driven virtual financial assistant. Having worked in and followed the space for a few years, I am really impressed with what Bank of America has built for its customers in Erica.

Erica caters to the bank’s customer service requirements in a number of ways: sending notifications to customers, providing balance information, sharing money-saving tips, providing credit report updates, facilitating bill payments, and helping customers with simple transactions. Recently, BofA expanded Erica’s capabilities to help clients make smarter financial decisions by providing them with personalized, proactive insights.

For me, instead of calling the BofA customer service 800 number and spending 20 to 30 minutes navigating menus, waiting on hold, or being transferred and repeating the process all over again, I can talk to Erica and quickly complete transactions. Erica averages a mere three-minute time-to-resolution via voice within the app. Think about all the things you could get done in those saved minutes instead, not to mention the break for your blood pressure.

Another aspect where Erica shines for me is in exposing capabilities within the app that aren’t obvious or are buried deep in the menu structure. One feature I use all the time is the ability to put an international travel notice on my card before I leave the country (so my credit card works overseas) — sometimes I even use it standing in the TSA security line. Another feature I love is being able to find my routing and account numbers quickly and easily by simply asking Erica. Who hasn’t spent valuable time on a fishing expedition in their banking app, hoping the page waiting for that automatic-payment information doesn’t time out first?

The proof of the value of Erica’s voice interface is in the user adoption numbers:  just over a year after introduction, Erica has surpassed 7 million users and has handled more than 50 million client requests. And since launching Erica’s proactive insights in late 2018, daily client engagement with Erica has more than doubled. In an interview with American Banker, BofA’s head of digital banking attributes Erica’s strong adoption to its easy-to-use transaction-search functions and financial advice, two areas where the bank continues to focus on harnessing the power of voice to delight its customers.

Thing is, for all of Erica’s benefits for both consumers and BofA, building this kind of voice-activated assistant in-house — from scratch — isn’t fast, easy, or cheap. The Erica development team numbered 100 people in 2017 — before introduction — and has surely grown since, given her success. And it took those 100 people nearly two years to get Erica ready for prime time, at an estimated cost of $30 million. Why so expensive? As one BofA VP noted, during development the bank “learned [that] there are over 2,000 different ways to ask us to move money.”

At Aiqudo, we’ve figured out — and operationalized — the technical heavy lifting needed to create a voice assistant: NLU, intent detection, action execution, multiple languages, the analytics platform; there’s no reason for partners to reinvent the wheel. We provide partners with a turnkey voice capability in their app. Developers retain control of this critical new Voice UI (and all of their users’ data) rather than surrendering the direct relationship with their users to voice platforms. Until now, developers have been required to create skills for each voice platform, which risks commoditizing the app and losing the brand they have worked so hard to develop. In contrast, Aiqudo offers a cost-effective solution that allows developers to focus on adding value to their app rather than on customizing for voice.

Disclaimer: Bank of America developed their voice technology without the assistance or use of Aiqudo technology.

Q Actions Auto Mode

What if cars could understand ALL voice commands?


The following transcript was taken from a casual conversation with my son.

Son: Dad, what are you working on?

Me: It’s a new feature in our product called “Auto Mode”.  We just released it in version 2.1 of our Q Actions App for Android.  We even made a video of it.  We can watch it after dinner if you’re interested.

Son: The feature sounds cool.  What’s it look like?

Me: Well, here.  We have this special setting that switches our software to look like the screen in a car. See how the screen is wider than it is tall? Yeah, that’s because most car screens are like that too.

Son: Wait. How do you get your software into cars? Can’t you just stick the tablet on the dashboard?

Me: Hmm, not quite.  We develop the software so that car makers can combine it with their own software inside the car’s console.  We’ll even make it look like they developed it by using their own colors and buttons. I’m showing you how this works on a tablet because it’s easier to demonstrate to other people – we just tell them to pretend it’s the car console.  Until cars put our software into their consoles, we’ll make it easy for users to use “Auto Mode” directly on their phones. Just mount the phone on the car’s dash and say “turn on auto mode” – done!

Son:  So how do you use it?  And what does that blue button with a microphone in it do?

Me:  Well, we want anyone in the car to be able to say a command like “navigate to Great America” or “what’s the weather like in San Jose?” or “who are Twenty One Pilots?”.  The button is simply a way to tell the car to listen. When we hear a command, our software figures out what to do and what to show on the console in the car. Sometimes it even speaks back the answer.  Now, we don’t always want people to have to press the button on the screen, so we’ll work with the car makers to add a button on the steering wheel or even a microphone that is always listening for a special phrase such as “Ok, Q” to start.

Son: How does it do that?  I mean, the command part.

Me: Good question.  Since you’re smart and know a little about software, I’ll keep it short.  Our software takes a command and tries to figure out which app or service can best provide the answer.  For example, if the command is about showing the route to, say, an amusement park like Great America, we’ll ask Google Maps to handle it, which it does really well. Lots of cars come with mapping software like Google Maps installed, so it’s best to let them handle those. Other commands that ask for information, like “what’s the weather like in San Jose” or “who are Twenty One Pilots”, we send off to servers in the cloud. They send back answers, which we format and display on the screen – in a pretty-looking card like this one.
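(For readers who want to see the gist of that routing decision in code, here is a toy sketch. The route_command() helper, its keyword rules, and the handler labels are invented for illustration; the real system relies on intent matching rather than simple keyword checks.)

```python
# Hypothetical sketch of the routing idea described above: decide whether a
# spoken command should go to an on-console app (e.g. maps), a cloud
# information service, or an app action on a connected phone.
# These rules are illustrative only, not the actual implementation.

def route_command(command: str) -> str:
    text = command.lower()

    # Navigation-style commands are handed to the console's mapping app.
    if any(phrase in text for phrase in ("navigate to", "directions to", "route to")):
        return "console:maps"

    # Informational questions go to cloud services, which return data
    # that the console formats into a card and optionally speaks back.
    if text.startswith(("what", "who", "when", "where", "how")):
        return "cloud:answers"

    # Everything else is a candidate for an app action on a paired phone.
    return "phone:app-action"


if __name__ == "__main__":
    for cmd in ("navigate to Great America",
                "what's the weather like in San Jose",
                "show my homework tasks"):
        print(cmd, "->", route_command(cmd))
```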

Me: Sometimes, apps running on our phones can best answer these commands and we use them to handle it.

Son: Wait. Phones?  How are phones involved? I only see you using a tablet.

Me:  Ahhh.  You’ve discovered our coolest feature.  We use apps already installed on your phone.  Do you see those rectangle-looking things in the upper right corner of the tablet? The ones with the pictures and names of people? Well, those are phone profiles.  They appear when a person connects their phone, running our Q Actions app, to the car’s console through Bluetooth, sort of like you do with wireless earbuds. When connected, our software in the console sends your commands to the phone, and the phone in turn attempts to execute the command using one of the installed apps.  Let me explain with an example. Let’s pretend you track your daily homework assignments using the Google Tasks app on your phone. You hop into the car and your phone automatically pairs with the console. Now suppose I ask you to show me your homework assignments. You press the mic button and say “show my homework tasks”.  The software in the console would intelligently route the command to your phone (because Google Tasks is not on the console), open Google Tasks on your phone, grab all your homework assignments, and send them back to the console to be displayed in a nice card. Oh, and it would speak your homework assignments back as well. Let’s see what happens when I tell it to view my tasks.
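(For the technically curious, here is a toy sketch of the console-to-phone handoff just described. The PhoneProfile and Console classes and the fake Google Tasks handler are invented for illustration; in the real system the console and the Q Actions app talk over Bluetooth.)

```python
# A minimal, hypothetical sketch of the console-to-phone flow described above.
# Class and method names are assumptions made for this example.

class PhoneProfile:
    def __init__(self, owner, installed_apps):
        self.owner = owner
        self.installed_apps = installed_apps  # e.g. {"Google Tasks": handler}

    def execute(self, command):
        # The phone picks an installed app that can handle the command and
        # returns structured results for the console to render and speak.
        for app, handler in self.installed_apps.items():
            result = handler(command)
            if result is not None:
                return {"app": app, "card": result, "speech": ", ".join(result)}
        return None


class Console:
    def __init__(self):
        self.profiles = []  # phones currently paired with the console

    def connect(self, profile):
        self.profiles.append(profile)

    def handle(self, command, active_owner):
        # Route the command to the active person's phone; display whatever
        # card the phone sends back and read the speech text aloud.
        for profile in self.profiles:
            if profile.owner == active_owner:
                return profile.execute(command)
        return None


# Example: a phone with a stand-in "Google Tasks" handler attached.
tasks = ["Math worksheet", "History essay"]
phone = PhoneProfile("Son", {"Google Tasks": lambda cmd: tasks if "homework" in cmd else None})
console = Console()
console.connect(phone)
print(console.handle("show my homework tasks", active_owner="Son"))
```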

Son:  Big deal.  I can just pick up my phone and do that.  Why do I need to use voice for that?

Me: Because if you’re the driver, you don’t want to be fumbling around with your phone, possibly getting into an accident! Remember, this is supposed to help drivers with safe, “hands-free” operation. You put your phone in a safe place and our software figures out how to use it to get the answers.

Son: Why can’t the car makers put all these apps in the console so you don’t have to use your phone?

Me: Great question.  Most people carry their phones on them at all times, especially when they drive.  And these phones have all their favorite apps with all their important personal information stored in them.  There’s no way the car makers could figure out which apps to include when you buy the car. And even if you could download these apps onto the console, all the personal information that’s on your phone would have to be transferred over to the console, app by app.  Clumsy if you ask me. I prefer to keep my information on my phone and private, thank you very much!

Son: Oh. Now I get it.  So what else does the software do?

Me: The console can call a family member.  If you say “call Dad”, the software looks for ‘dad’ in your phone’s address book and dials the number associated with it.  But wait. You’re probably thinking, ‘What’s so special about that? All the cool cars do it.’ Well, we know that a bunch of apps can make phone calls, so we show you which ones and let you decide.  Also, if you have two numbers for ‘dad’, say a home and a mobile number, the software will ask you to choose one to call. Let’s see how this works when I say “call Dad”.

Me: It asks you to pick an app.  I say ‘phone’ and then it asks me to pick a number since my dad has both a home and mobile number.  I say ‘mobile’ and it dials the number through my phone.

Son: Cool. But what if I have two people with the same name, like Julie?

Me: It will ask you to pick a ‘Julie’ when it finds more than one.  And it will remember that choice next time you ask it to call Julie.  See what happens when I want to call Jason. It shows me all the people in my address book who are named Jason along with their phone numbers.  If a person has more than one number, it will say ‘Multiple’.
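(Here is a hedged sketch of that contact-disambiguation flow. The address book, the ask() prompt function, and the remembered_choice cache are illustrative stand-ins, not the actual implementation.)

```python
# Hypothetical sketch: if several contacts match the requested name, ask the
# user to pick one, remember the choice, and then ask which number to dial
# when the contact has more than one.

address_book = {
    "Julie Smith": {"mobile": "555-0101"},
    "Julie Jones": {"home": "555-0102", "mobile": "555-0103"},
}
remembered_choice = {}  # name as spoken by the user -> full contact name


def resolve_contact(spoken_name, ask):
    if spoken_name in remembered_choice:
        return remembered_choice[spoken_name]
    matches = [full for full in address_book if spoken_name.lower() in full.lower()]
    if len(matches) == 1:
        return matches[0]
    choice = ask(f"I found {len(matches)} people named {spoken_name}: "
                 f"{', '.join(matches)}. Which one?", matches)
    remembered_choice[spoken_name] = choice  # reuse this choice next time
    return choice


def resolve_number(contact, ask):
    numbers = address_book[contact]
    if len(numbers) == 1:
        return next(iter(numbers.values()))
    label = ask(f"{contact} has multiple numbers: {', '.join(numbers)}. Which one?",
                list(numbers))
    return numbers[label]


# Simulated voice prompts: always pick the first option for this demo.
pick_first = lambda prompt, options: options[0]
contact = resolve_contact("Julie", pick_first)
print("Dialing", contact, resolve_number(contact, pick_first))
```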

Son: Wow.  What else?

Me: How about sending a message on WhatsApp? Or setting up a team meeting in the calendar. Or joining a meeting from the car if you are running late. Or even checking which of your friends have birthdays today.  All these actions are performed on your phone using the apps you already know and use.

Son: Which app shows you your friends’ birthdays? That’s kind of neat.

Me: Facebook

Son: I don’t use Facebook. I use Instagram. It’s way better.  Plus all the cool kids use it now.

Me:

Me: You get the picture though, right?

Son: Sure.

Son: So what if all of my friends are in the car with you and we connect to the console?  How does the software know where to send the command?

Me: We use the person’s voice to identify who they are and route the command to the right person’s phone automatically.

Son: Really? That seems way too hard.

Me: Not really.  Although we haven’t implemented it yet, the technology exists to do this sort of thing today.

Son: Going back to the main screen, why does the list of actions under ‘Recent’ and ‘Favorites’ change when you change people?

Me: Oh, you noticed that!  Whenever the software switches to a new profile, we grab the ‘Recent’ and ‘Favorites’ sections from that person’s phone and display them on the tablet, er, console.  This is our way of making the experience more personalized and familiar, matching the way the app appears on your phone. In fact, the ‘Favorites’ are like handy shortcuts for frequently used actions, like “call Mom”.

Me: One more thing.  Remember the other buttons on the home screen? One looked like a music note, the other a picture for messaging and so on.  Well, when you press those, a series of icons appear across the screen, each showing an action that belongs to that group.  If your phone had Spotify installed, we would show you a few Spotify actions. If Pandora was installed, we would show you Pandora actions and so on.   Check out what happens when I activate my profile. Notice how Pandora appears? That’s because Pandora is on my phone and not on the tablet like Google Play Music and YouTube Music.


Me: The same is true for messaging and calling.  Actions from apps installed on your phone appear, and you simply tap an icon to run the action.  In fact, if you look carefully, you’ll notice that all the actions that show up on the console are also in the ‘My Actions’ screen in the Q Actions app on your Android phone.  Check out what’s on the tablet vs. my phone.


Son: Yep.

Me: Oh and before I forget, there’s one last item I’d like to tell you about.

Son: What’s that?

Me: Notifications.  If you send me a message on WhatsApp, Messenger or WeChat, a screen will pop up letting me know I have a message from you.  I can listen to the message by pressing a button or respond to it – by voice, of course – all while keeping my focus on the road.  You’ll get the response just as if I had sent it while holding the phone.

Son:  Cool. I’ll have fun sending you messages on your way home from work.

Me: Grrrrrr.

Son: Hey, can I try this out on my phone?

Me: Sure.  Just download our latest app from the Google Play Store.  After you get it installed, go to the Preferences section under Settings and check the box that says ‘Auto Mode’ (BETA).  You’ll automatically be switched into Auto Mode on your phone. Now this becomes your console in the car.

Of course, things appear a bit smaller on your phone than what I’ve shown you on the tablet.  Oh, and since you’re not connected to another phone, all the commands you give it will be performed by apps on your phone.  Try it out and let me know what you think.

Son:  Ok. I’ll play around with it this week.

Me: Great.  Now let’s go see what your mom’s made us for dinner.

 

Q Actions 2.0

Do more with Voice! Q Actions 2.0 now available on Google Play


Do more with Voice

Q Actions 2.0 is here. With this release, we wanted to focus on empowering users throughout their day. As voice is playing a more prevalent part in our everyday lives, we’re uncovering more use cases where Q Actions can be of help. In Q Actions 2.0, you’ll find new features and enhancements that are more conversational and useful.

Directed Dialogue™

Aiqudo believes the interaction with a voice assistant should be casual, intuitive, and conversational. Q Actions understands naturally spoken commands and is aware of the apps installed on your phone, so it will only return personalized actions that are relevant to you. When a bit more information is required from you to complete a task, Q Actions will guide the conversation until it fully understands what you want to do. Casually chat with Q Actions and get things done.

Sample commands:

  • “create new event” (Google Calendar)
  • “message Mario” (WhatsApp, Messenger, SMS)
  • “watch a movie/tv show” (Netflix, Hulu)
  • “play some music” (Spotify, Pandora, Google Play Music, Deezer)

Q Cards™

In addition to providing relevant app actions from personal apps that are installed on your phone, Q Actions will now display rich information through Q Cards™. Get up-to-date information from cloud services on many topics: flight status, stock pricing, restaurant info, and more. In addition to presenting the information in a simple and easy-to-read card, Q Cards™ support Talkback and will read aloud relevant information.

Sample commands:

  • “What’s the flight status of United 875?”
  • “What’s the current price of AAPL?”
  • “Find Japanese food”

Voice Talkback™

There are times when you need information but do not have the luxury of looking at a screen. Voice Talkback™ is a feature that reads aloud the critical snippets of information from an action. This enables you to continue to be productive, without the distraction of looking at a screen. Execute your actions safely and hands-free.

Sample commands:

  • “What’s the stock price of Tesla?” (E*Trade)
    • Q: “Tesla is currently trading at $274.96”
  • “Whose birthday is it today?” (Facebook)
    • Q: “Nelson Wynn and J Boss are celebrating birthdays today”
  • “Where is the nearest gas station?”
    • Q: “Nearest gas at Shell on 2029 S Bascom Ave and 370 E Campbell Ave, 0.2 miles away, for $4.35”

Compound Commands

As an enhancement to our existing curated Action Recipes, users can now create Action Recipes on the fly using Compound Commands. Simply join two of your favorite actions into a single command using “and”. This gives users the ability to create millions of Action Recipe combinations from our database of 4,000+ actions (see the sketch after the sample commands below for the basic idea).

Sample commands:

  • “Play Migos on Spotify and set volume to max”
  • “Play NPR and navigate to work”
  • “Tell Monica I’m boarding the plane now and view my boarding pass”
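The sketch below illustrates the basic idea under a simplifying assumption: a hypothetical run_action() handles a single command, and the compound command is split on the word “and”. The production matcher is much smarter about deciding where one action ends and the next begins.

```python
# Minimal sketch of compound-command handling. run_action() is a stand-in
# for single-command matching and execution; splitting on " and " is a
# deliberate simplification for illustration.

def run_action(command: str) -> str:
    # Placeholder for matching the command to an app action and executing it.
    return f"executed: {command}"


def run_compound(command: str):
    # Split the utterance into individual actions and run them in sequence.
    parts = [part.strip() for part in command.split(" and ") if part.strip()]
    return [run_action(part) for part in parts]


print(run_compound("Play NPR and navigate to work"))
# ['executed: Play NPR', 'executed: navigate to work']
```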

Simply do more with voice! Q Actions is now available on Google Play.

Q Actions - Action Recipes and Compound Commands

Q Actions – Complex tasks through Compound Commands


In many cases, a single action does the job.

Say it. Do it!

Often, however, a task requires multiple actions to be performed across multiple independent apps. On the go, you just want things done quickly and efficiently without having to worry about which actions to run and which apps need to be in the mix.

Compound commands allow you to do just that – just say what you want to do – naturally – and, assuming it makes sense and you have access to the relevant apps, the right actions are magically executed. It’s not that complicated – just say “navigate to the tech museum and call Kevin”, firing off Maps and WhatsApp in the process. Driving, and in a hurry to catch the train? Just say “navigate to the Caltrain station and buy a train ticket”, launching Maps and the Caltrain app in sequence. Did you just hear the announcement that your plane is ready to board? Say “show my boarding pass and tell Susan I’m boarding now” (American, United, Delta, …) and (WhatsApp, Messenger, …) and you’re ready to get on the flight home – one, two … do!

Compound commands are … complex magic to get things done … simply!

Q Actions - Voice Talkback

Q Actions – Voice feedback from apps using Talkback™


Wonder why you can’t talk to your apps, and why your apps can’t talk back to you?  Stop wondering, as Talkback™ in Q Actions does exactly that. Ask “show my tasks” and the system executes the right action (Google Tasks) and, better yet, tells you what your tasks are – safely and hands-free, as you drive your car.

Driving to work and stuck in traffic? Ask “whose birthday is it today?” and hear the short list of your friends celebrating their birthdays (Facebook). You can then say “tell Michael happy birthday” to wish Mike a happy birthday (WhatsApp or Messenger). And if you are running low on gas, just say “find me a gas station nearby” and Talkback™ will tell you where the nearest gas station is and how much you’ll pay for a gallon of unleaded fuel.

Say it. Do it. Hear it spoken back!

Q Actions - Directed Dialogue

Q Actions – Task completion through Directed Dialogue™


When an action or a set of actions requires specific input parameters, Directed Dialogue™ allows the user to submit the required information through very simple, natural back-and-forth conversation. Enhanced with parameter validation and user confirmation, Directed Dialogue™ allows complex tasks to be performed with confidence. Directed Dialogue™ is not about open-ended conversations; it is about getting things done, simply and efficiently.

With Q Actions, Directed Dialogue™ is automatically enabled for every action in the system because we know the semantic requirements of each and every action’s parameters. It is not constrained, and it applies across all actions in all verticals.
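As an illustration only, the following sketch shows the kind of slot-filling loop Directed Dialogue™ implies: an action declares the parameters it needs, the user is prompted only for the ones that are missing or invalid, and the result is confirmed at the end. The parameter names, validators, and prompts are assumptions, not Aiqudo’s actual schema.

```python
# A minimal slot-filling sketch, assuming each action declares the
# parameters it needs. All names and checks here are illustrative.

def collect_parameters(required, provided, ask):
    """Prompt only for parameters the user hasn't already supplied or that fail validation."""
    filled = dict(provided)
    for name, validate in required.items():
        while name not in filled or not validate(filled[name]):
            filled[name] = ask(f"What {name} would you like?")
    return filled


# Example action: creating a calendar event needs a title and a start time.
required = {
    "title": lambda v: bool(v.strip()),
    "start time": lambda v: any(ch.isdigit() for ch in v),
}
provided = {"title": "Team sync"}          # already present in the spoken command

answers = iter(["tomorrow at 3pm"])         # simulated user replies
params = collect_parameters(required, provided, lambda prompt: next(answers))
print("Confirm: create event", params)      # final user-confirmation step
```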

Another application of Directed Dialogue™ is input refinement. Let’s say I want to purchase batteries. If I just say “add batteries to my shopping cart”, I can get the wrong product added to my cart, as happens on Alexa with a new product order (a reorder does the right thing). With Q Actions, I can provide the brand (Duracell) and the type (9V, 4-pack) through very simple Directed Dialogue™, and exactly the right product is added to my cart – in the Amazon or Walmart app.

Get Q Actions today.

Thought to Action

Thought to Action!


Here at Aiqudo, we’re always working on new ways to drive Actions and today we’re excited to announce a breakthrough in human-computer interaction that facilitates these operations.  We’re calling it “Thought to Action™”. It’s in early-stage development, but shows promising results.

Here’s how it works. We capture user brainwave signals via implanted neural-synaptic receptors and transfer the resulting waveforms over BLE to our cloud, where advanced AI and machine learning models translate the user’s “thoughts” into specific app actions that are then executed on the user’s mobile device. In essence, we’ve transcended the use of voice to drive actions. Just think about the possibilities. No more messy and embarrassing moments when your phone’s speech recognizer gets your command wrong: “Tweet Laura, I love soccer” might end up as “Tweet Laura, I’d love to sock her”. With “Thought to Action™” we get it right every time. And it’s perfect for today’s noisy environments. Low on gas while driving your kid’s entire soccer team home from a winning match? Simply think “Find me the nearest gas station” and let Aiqudo do the rest. Find yourself in a boring meeting? Send a text to a friend using just your thoughts.

Stay tuned as we work to bring this newest technology to a phone near you.

Auto in-cabin experience

The Evolution of Our In-Car Experience


As the usage model for cars continues to shift away from traditional ownership and leasing to on-demand, ridesharing, and in the future, autonomous vehicle (AV) scenarios, how we think about our personal, in-car experience will need to shift as well.

Unimaginable just a few short years ago, today we think nothing of jumping into our car and streaming our favorite music through the built-in audio system using our Spotify or Pandora subscription. We also expect the factory-installed navigation system to instantly pull up our favorite or most commonly used locations (after we’ve entered them) and present us with the best route from our current location. And once we pair our smartphone with the media system, we can have our text and email messages not only appear on the onboard screen but also read to us using built-in text-to-speech capabilities. It’s a highly personalized experience in our car.

When we use a pay-as-you-go service, such as Zipcar, we know we’re unlikely to have access to all of the tech comforts of our own vehicle, but we can usually find a way to get our smartphone paired for handsfree calling and streaming music using Bluetooth. If not, we end up using the navigation app on our phone and awkwardly holding it while driving, trying to multitask. It’s not pretty. And when we hail a rideshare, we don’t expect to have access to any of the creature comforts of our own car.

But what if we could?

Just as our relationship to media shifted from an ownership model–CDs or MP3 files on iPods–to subscription-based experiences that are untethered to a specific device but can be accessed anywhere at any time, it’s time to shift our thinking about in-car experiences in the same way.

It’s analogous to accessing your Amazon account and continuing to watch the new season of “True Detective” on the TV at your Airbnb–at the exact episode where you left off last week. Or listening to your favorite Spotify channel at your friend’s house through her speakers.

All your familiar apps (not just the limited Android Auto or Apple CarPlay versions) and your personalized in-car experience–music, navigation, messaging, even video (if you’re a passenger, of course)–will be transportable to any vehicle you happen to jump into, whether it’s a Zipcar, rental car or some version of a rideshare that’s yet to be developed. What’s more, you’ll be able to easily and safely access these apps using voice commands. Whereas today our personal driving environment is tied to our own vehicle, it will become something that’s portable, evolving as our relationship to cars changes over time.

Just on the horizon of this evolution in our relationship with automobiles? Autonomous vehicles, or AVs, in which we become strictly a passenger, perhaps one of several people sharing a ride. Automobile manufacturers today are thinking deeply about what this changing relationship means to them and to their brands. Will BMW become “The Ultimate Riding Machine”? (As a car guy, I personally hope not!) And if so, what will be the differentiators?

Many car companies see the automobile as a new digital platform, for which each manufacturer creates its own, branded, in-car digital experience. In time, when we hail a rideshare or an autonomous vehicle, we could request a Mercedes because we know that we love the Mercedes in-car digital experience, as well as the leather seats and the smooth ride.

What happens if we share the ride in the AV, because, well, they are rideshare applications after all? The challenge for the car companies becomes creating a common denominator of services that define that branded experience while still enabling a high degree of personalization. Clearly, automobile manufacturers don’t want to become dumb pipes on wheels, but if we all just plug in our headphones and live on our phones, comfy seats alone aren’t going to drive brand loyalty for Mercedes. On the other hand, we don’t all want to listen to that one guy’s death metal playlist all the way to the city.  

The car manufacturers cannot create direct integrations to all services to accommodate infinite personalization. In the music app market alone there are at least 15 widely used apps, but what if you’re visiting from China? Does your rideshare support China’s favorite music app, QQ?  We’ve already made our choices in the apps we have on our phones, so transporting that personalized experience into the shared in-car experience is the elegant way to solve that piece of the puzzle.

This vision of the car providing a unique digital experience is not that far-fetched, nor is it that far away from becoming reality. It’s not only going to change our personal ridesharing experience, but it’s also going to be a change-agent for differentiation in the automobile industry.

And it’s going to be very interesting to watch.

Semiotics

AI for Voice to Action – Part 3: The importance of Jargon to understanding User Intent


In my last post I discussed how semiotics and observing how discourse communities interact had influenced the design of our machine learning algorithms. I also emphasized the importance of discovering jargon words as part of our process of understanding user commands and intents.

In this post, we describe in more depth how the “theory” behind our algorithms actually works. In that post, we also discussed what constitutes a good jargon word. “Computer” is a poor example of a jargon word because it is too broad in meaning, whereas a term relating to a computer chip, e.g. “Threadripper” (a gaming processor from AMD), is a better example, as it is more specific in meaning and is used in fewer contexts.

Jargon terms and Entropy

So – how do we identify good jargon terms and what do we do with them in order to understand user commands?

To do this we use entropy. In general, entropy is a measure of chaos or disorder, and in an information-theory context it can be used to determine how much information is conveyed by a term. Because jargon words have a very narrow and specific meaning within specific discourse communities, they have lower entropy (and thus more information value) than broader, more general terms.

To determine entropy we take each term in our synthetic documents (see this post for more information on how we create this data set) and build a probability profile of co-occurring terms. The diagram below shows an example (partial) probability distribution for the term ‘computer’.


Figure 1: Entropy – probability distributions for jargon terms

These co-occurring terms can be thought of as the context for each potential jargon word. We then use this probability profile to determine the entropy of the word. If that entropy is low then we consider it to be a candidate jargon word.
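As a rough illustration of this step, the sketch below builds a co-occurrence distribution for each term in a tiny toy corpus and computes its Shannon entropy; terms with narrow contexts (low entropy) surface as jargon candidates. The corpus, the co-occurrence window, and any threshold you might apply are invented for illustration.

```python
# Toy sketch of the entropy test described above: terms whose co-occurrence
# distributions are narrow (low entropy) are treated as jargon candidates.

import math
from collections import Counter, defaultdict

docs = [
    "threadripper gaming processor benchmark overclock".split(),
    "threadripper processor cores overclock benchmark".split(),
    "computer desk office computer keyboard".split(),
    "computer science history computer art museum".split(),
]

# Build co-occurrence counts: for each term, count the other terms that
# appear in the same document.
cooccur = defaultdict(Counter)
for doc in docs:
    for term in set(doc):
        for other in doc:
            if other != term:
                cooccur[term][other] += 1


def entropy(term):
    counts = cooccur[term]
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


# The narrower term should come out with lower entropy than the broad one.
for term in ("threadripper", "computer"):
    print(term, round(entropy(term), 2))
```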

Having identified the low-entropy jargon words in our synthetic command documents, we then use their probability distributions as attractors for the documents themselves. In this way (as seen in the diagram below) we create a set of document clusters where each cluster relates semantically to a jargon term. (Note: in the interest of clarity, the clusters in the figure below are labeled with high-level topics rather than with the jargon words themselves.)


Figure 2: Using jargon words as attractors to form clusters

We then build a graph within each cluster that connects documents based on how similar they are in terms of meaning. We identify ‘neighborhoods’ within these graphs that relate to areas of intense similarity. For example a cluster may be about “cardiovascular fitness” whereas a neighborhood may be more specifically about “High Intensity Training”, or “rowing” or “cycling”, etc.


Figure 3: Neighborhoods for the cluster “cardiovascular fitness”
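The following toy sketch illustrates the graph-and-neighborhood step: documents in a cluster are connected when their similarity exceeds a threshold, and the connected components of that graph serve as “neighborhoods”. The Jaccard similarity over tiny bags of words and the 0.3 threshold are stand-ins for the meaning-based similarity used in practice.

```python
# Illustrative sketch: build a similarity graph within one cluster and treat
# connected components as neighborhoods (sub-topics).

from itertools import combinations

cluster = {
    "d1": {"hiit", "interval", "anaerobic", "cardio"},
    "d2": {"hiit", "interval", "sprint", "cardio"},
    "d3": {"rowing", "erg", "stroke", "cardio"},
    "d4": {"rowing", "erg", "split", "pace"},
}


def jaccard(a, b):
    return len(a & b) / len(a | b)


# Build the similarity graph as an adjacency list.
graph = {doc: set() for doc in cluster}
for d1, d2 in combinations(cluster, 2):
    if jaccard(cluster[d1], cluster[d2]) >= 0.3:
        graph[d1].add(d2)
        graph[d2].add(d1)

# Neighborhoods = connected components of the graph.
seen, neighborhoods = set(), []
for start in graph:
    if start in seen:
        continue
    stack, component = [start], set()
    while stack:
        node = stack.pop()
        if node not in component:
            component.add(node)
            stack.extend(graph[node] - component)
    seen |= component
    neighborhoods.append(component)

print(neighborhoods)  # e.g. a HIIT-like neighborhood and a rowing-like neighborhood
```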

These neighborhoods can be thought of as sub-topics within the overall cluster topic. Within each sub-topic we can then extract important meaning-based phrases that precisely describe what that neighborhood is about, e.g. “HIIT”, “anaerobic high-intensity period”, “cardio session”, etc.


Figure 4: Meaning based phrases for the “high intensity training” sub-topic

In this way we create meaning-based structure from completely unstructured content. Documents from the same cluster relate to the same discourse community. Documents from the same cluster that share similar important terms or phrases can be regarded as relating to the same sub-topic. If two clusters share a large number of important phrases, this represents a dialogue between two discourse communities. If multiple important phrases are shared among many clusters, this represents a dialogue among multiple communities.

So having described a little bit about the algorithms themselves, how do they help us understand the correct meaning behind a user’s command? Given this contextual partitioning of the data into discourses based on jargon terms, we can disambiguate among the many different meanings a term can have. For example, if the user were to say ‘open the window’ – we will be able to understand that there is a meaning (discourse) relating to both buildings and to software but if the user were to say ‘minimize the window’, we would understand that this could only have a software meaning and context. Fully understanding the nuances behind a user’s command is, of course, much more complicated than what I have just described, but the goal here is to give a high level overview of the approach.
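As a toy illustration of that disambiguation, the sketch below checks which discourse clusters can account for all the content words of a command: “open the window” remains ambiguous between the buildings and software discourses, while “minimize the window” resolves to software only. The clusters and their vocabularies are invented for this example.

```python
# Toy word-sense disambiguation via discourse clusters: keep the clusters
# whose vocabulary covers every content word in the command.

clusters = {
    "buildings": {"open", "close", "window", "door", "blinds"},
    "software":  {"open", "close", "minimize", "maximize", "window", "tab"},
}

STOPWORDS = {"the", "a", "my"}


def candidate_discourses(command):
    words = {w for w in command.lower().split() if w not in STOPWORDS}
    return [name for name, vocab in clusters.items() if words <= vocab]


print(candidate_discourses("open the window"))      # ambiguous: both discourses
print(candidate_discourses("minimize the window"))  # only the software discourse
```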

In subsequent posts, we will discuss how we extract parameters from commands, how we accurately determine which app action to execute, and how we pass the correct parameters to that action.

David Patterson and Vladimir Dobrynin

Aiqudo, Inc.

Silicon Valley Voice Pioneer Aiqudo Unveils Its Latest Software Platform


Enables Anyone to Use Their Voice to Control and Interact with Thousands of Mobile Apps

SAN JOSE, CALIFORNIA (BUSINESS WIRE), December 12, 2018

Aiqudo unveiled a set of breakthrough advances to Q Actions, its industry-leading voice enablement platform, that for the first time makes it possible for anyone to navigate their lives through their mobile apps seamlessly using natural voice commands. Now, mobile applications can talk back to users to confirm instructions, conduct multi-step processes, and even proactively alert users to new messages and read them back.

“Our Directed Dialogue feature helps users to easily complete complex tasks”

Unlike other voice platforms, Aiqudo serves users by working directly with apps users have downloaded on their mobile phones, eliminating the self-serving walled gardens erected by other voice platforms. Consumers may never be able to check Facebook instant messages from Alexa or access an Amazon wish list from Google Assistant and go shopping. Aiqudo removes this obstacle and makes voice the simplest, fastest, most intuitive interface for consumer technologies.

“By focusing on extending dominance in their legacy businesses such as ecommerce or search, the major voice platforms have failed to deliver on their own hype around voice,” said John Foster, CEO of Aiqudo. “We’ve taken a better route focused on making voice truly useful today. We’re app-centric, platform-agnostic and let consumers use voice on their own terms, not just when they’re standing next to a device in their living rooms. Our voice assistant needs to be available to us whether we’re in a car, on a train with our hands full or wandering around an amusement park.”

At the center of the latest version of Aiqudo are features such as:

  • Directed Dialogue: Aiqudo quickly and easily guides users to successful actions, prompting them to provide all required pieces of information, whether it’s a calendar event requiring start and end times, location and event name, or providing party size and time for booking a table at a restaurant.
  • Compound Commands: Your favorite apps and mobile phone features can now work collaboratively to get everyday requests completed. Executing multiple actions with a single command is easier than ever – navigate with Waze or another traffic app and notify your friends of a late arrival with your favorite messaging app – and it’s done with one single request.
  • Voice Talkback: Don’t want to be distracted looking at your phone? Aiqudo can read back results from your favorite apps such as news headlines, stock quotes and message responses.


“Our Directed Dialogue feature helps users to easily complete complex tasks,” said Rajat Mukherjee, CTO of Aiqudo. “A user is only prompted to provide any missing information required by an action that she has not already provided in a command. Because we understand the semantics of all actions in the system, Directed Dialogue works out of the box for every one of our actions and does not require configuration, customized training or huge volumes of training data.”

Deploying a semiotics-based language modeling platform enables multilingual natural-language commands, while Aiqudo’s app analysis engine allows rapid onboarding of apps to provide high utility and broad coverage across apps. Today Aiqudo supports thousands of applications, ranging from ecommerce apps like Amazon, Walmart, or eBay, to entertainment apps like Netflix, Spotify, or Pandora, to favorite messaging and social apps including WhatsApp, WeChat, Messenger and more.

Aiqudo Q Actions 2.0 will be available on Google Play by year end, and the company has already struck OEM relationships with the likes of Motorola for the technology to be embedded directly into phones.

To view product demo videos, visit Aiqudo’s YouTube channel.

About Aiqudo
Aiqudo (pronounced: “eye-cue-doe”) is a software pioneer that connects the nascent world of digital voice assistants to the useful, mature world of mobile apps through its Voice-to-Action™ platform. It lets people use voice commands to execute actions in mobile apps across devices. Aiqudo’s SaaS platform uses machine learning (AI) to understand natural-language requests and then triggers instant actions via mobile apps consumers prefer to use to get things done quickly and with less effort. For more info, visit: http://www.aiqudo.com

Business Wire: Silicon Valley Voice Pioneer, Aiqudo, Unveils Its Latest Software Platform