Internships: The New Normal

Intern Voice: Kenny Kang

Working at a startup can be described as interesting, but in the best way possible. As a comparison, my summer roommate interned at a larger corporate company, and we developed two completely different ideas of what a ‘normal’ working environment is. Apparently, it isn’t ‘normal’ for an internship project to become a feature in the company’s main product. It also isn’t ‘normal’ to have the opportunity to present directly to C-suite executives. And it definitely isn’t ‘normal’ to be talking to your CEO about his wild college days during company outings. I could write entire essays about all the reasons I loved working at Aiqudo, but there was one that always made my friends question my sanity: I kept describing the work itself as ‘fun!’ Even for a startup, I’m not sure how normal that is.

During my internship, I created a question answering service that dealt with knowledge-based queries, for example, “How old is Tom Brady?” or “Which movies were Laurence Fishburne and Keanu Reeves in together?”. While the problem itself was interesting, it was the freedom I had that made it really engaging. Since there isn’t always a straightforward solution to Natural Language Processing (NLP) problems, I constantly had to approach the next obstacle in new ways, whether by using existing tools unconventionally or by reading up on the latest research. Each day felt like solving a new puzzle, and that’s what made it so consistently enjoyable! Of course, I ran into plenty of issues that seemed impossible to get around, but luckily I had an amazing mentor, Sunil, who was always there to point me in the right direction.

This past summer has been an incredible experience. I came in thinking I would leave with a few new skills. Not only have I learned several valuable skills, I’ve also developed a newfound confidence in my ability to think through complex problems, and I’ve set a higher bar for any company I’d want to work for in the future.

Announcing The New Q Actions For iOS

For over two years, Aiqudo has been leading the charge on deep app integration with voice assistants on Android phones.  Today, our Android platform continues to do many things that no other platform can. Now, we’re incredibly proud to announce the latest release of our Q Actions app for iOS.  We’ve been working on this iOS release for months, and it represents a full suite of actions functionality driven by the new ActionKit SDK for iOS. ActionKit is also what iOS developers can use to easily add voice to their own apps.

iOS is a more restrictive and closed ecosystem than Android.  Many of the platform capabilities that Android provides are not available to third-party developers in Apple’s ecosystem.  For instance, apps are not allowed to freely communicate with each other, and it’s difficult to determine which apps are installed.  Such restrictions challenge digital assistants like Q Actions, which rely on knowledge of a user’s apps to provide relevant results, and on communication between apps to automate and execute actions.

Q Actions for iOS enables app developers to define their own voice experience for their users rather than being subject to the limitations of SiriKit or Siri Shortcuts. Currently, SiriKit limits developers’ ability to expose functionality in Siri, allowing only broad categories that dilute the differentiated app experiences that developers have built.  With Q Actions for iOS, brands and businesses will be able to maintain their differentiating features and brand recognition, rather than conform to a generalized category.

With this release, we took a hard look at what was needed to build a comparable experience to what we have on Android.  To make it more powerful for iOS app developers, we pushed most of the functionality into the ActionKit SDK. The result is that ActionKit powers all the actions available in the app, allowing developers to offer an equivalent experience in their iOS app.  The ActionKit SDK is available for embedding in any iOS app today.

Let’s take a look at what Q Actions and the Aiqudo platform offer right now:

Easily discover actions for your phone

Q Actions helpfully provides an Action Summary with a categorized list of apps and actions for your device.  Browse by category, tap on an app to view sample commands, or tap a command to execute the action.

Go beyond Siri

Q Actions supports hundreds of new actions!  Watch Netflix Originals or stream live video on Facebook with simple commands like “watch Narcos” or “stream live video”.

True Natural Language

Q Actions for iOS leverages Aiqudo’s proprietary, semiotics-based language modeling system to support natural language commands. Rather than the exact-match syntax required by Siri Shortcuts, Aiqudo understands the wide variations in commands that consumers use when interacting naturally with their voice. Plus, Aiqudo is multilingual, currently supporting commands in seven languages worldwide.
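Aiqudo’s semiotics-based models are proprietary, so the snippet below is only a loose, generic illustration of the underlying idea (many phrasings resolving to a single action), not Aiqudo’s actual algorithm. All action names and phrasings are made up.

```python
# Toy illustration: map varied phrasings to a single action via token overlap.
# This is NOT Aiqudo's semiotics-based approach; names and phrasings are made up.

ACTION_PHRASINGS = {
    "netflix.watch_title": [
        "watch narcos", "play narcos on netflix", "put on narcos",
    ],
    "facebook.go_live": [
        "stream live video", "go live on facebook", "start a live stream",
    ],
}

def tokens(text: str) -> set[str]:
    return set(text.lower().split())

def match_action(command: str) -> str | None:
    """Return the action whose example phrasings best overlap the command."""
    best_action, best_score = None, 0.0
    cmd = tokens(command)
    for action, phrasings in ACTION_PHRASINGS.items():
        for phrase in phrasings:
            p = tokens(phrase)
            score = len(cmd & p) / len(cmd | p)  # Jaccard similarity
            if score > best_score:
                best_action, best_score = action, score
    return best_action if best_score > 0.3 else None

print(match_action("can you put on Narcos please"))  # -> netflix.watch_title
```

A real system has to handle far more (parameters, synonyms, multiple languages), but the shape of the problem is the same: one intent, many ways to say it.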

Content-rich Cards for informational queries

Get access to web results from Bing, translate phrases or look at stock quotes directly from Q Actions.  Get rich audio and visual feedback from cards.

There’s still a lot to come!  We’ve already shown how Aiqudo can enable a better voice experience in the car.  We’ve also seen how voice can help users engage meaningfully with your app.  We’re working hard to build a ubiquitous voice assistant platform, and this release on iOS gets us one step closer.  Stay tuned as we’ll be talking more about some of the challenges of bringing our voice platform to iOS and to iOS app developers, and more importantly, how we’re aligned with Apple’s privacy-centric approach.

QTime: My Time as an Aiqudo Intern

Intern Voice: Jordan Knox

When I started my internship at Aiqudo, I had next to no experience doing any of the things I was here to do. I came to Aiqudo very scared that I was completely unqualified and that I would immediately make a fool of myself and get fired. But it was so much better than I had expected. As soon as I got here, I immediately felt welcome, like I was one of the team. I loved every bit of my time working here, and I’m so sad for it to end. While I was here, I learned more than I have throughout all of college so far. I learned an entirely new programming language and now feel more comfortable using it than any of the languages I had been using for years! I also created a suite of useful Slack actions that went into production.

We’ll write in more detail about Voice in Slack, but here are some examples of what’s now possible to do super-easily in Slack with simple voice commands (a rough sketch of the kind of API call behind one of these actions follows the list):

  • “send a message to John”
  • “send a poll to the General channel asking what’s for lunch today”
  • “post a gif about bananas”
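These actions run against Slack’s public Web API under the hood. As a minimal sketch (not the intern’s actual code), the kind of call a “send a message” action ultimately boils down to might look like the following, using Slack’s real chat.postMessage endpoint with a placeholder token and channel ID.

```python
# Minimal sketch: the kind of Slack Web API call a "send a message to John"
# action might ultimately make. chat.postMessage is Slack's real endpoint;
# the token, channel ID, and message text below are placeholders.
import requests

SLACK_TOKEN = "xoxb-your-bot-token"  # placeholder

def send_slack_message(channel_id: str, text: str) -> None:
    resp = requests.post(
        "https://slack.com/api/chat.postMessage",
        headers={"Authorization": f"Bearer {SLACK_TOKEN}"},
        json={"channel": channel_id, "text": text},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    if not data.get("ok"):
        raise RuntimeError(f"Slack API error: {data.get('error')}")

# e.g. after the voice platform has resolved "John" to a DM or channel ID:
send_slack_message("D0123456789", "On my way, see you in 10!")
```

Polls and GIF posts involve additional pieces (message formatting, third-party integrations), but the pattern is the same: turn a spoken command into the right API call with the right parameters.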

I don’t think I know anybody else who had a summer internship that led to them writing programs that actually went into production. Unlike all of them, I now have something that I can pull up on anybody’s phone and say “Hey look, I made that!” 

I got a ton of experience working with backend pieces like RESTful APIs and data manipulation, but I also got to do a lot on the frontend, working with HTML/CSS and user interaction with computing systems. I learned more than just technical skills, too, like how startups function and how to work with over 15 people on a single project. All in all, this internship has been invaluable to me and I wouldn’t trade it for anything else. This opportunity has not only made me a better software engineer, but also made me a million times more confident in myself and my abilities. The people here are some of the best people that I have ever met, and I cannot wait to start working here again part-time during the school year.

Interning at Aiqudo: A Summer Well Spent

Intern Voice: Dylan Hwang

As I walked to the Aiqudo office for the first time, I was not sure what to expect. A startup would surely provide more opportunities for me to grow and learn as a student. But a small, fast-paced environment could also mean a higher degree of disorganization, leaving me wandering aimlessly, unsure of what to do next and where. However, this all changed as soon as I was greeted by my colleagues and mentors. They were friendly and warmly welcomed me as part of the Aiqudo family.

Each week flew by as I learned to apply the plethora of knowledge I had gained in a classroom setting to industry. Whether it was putting database theory to real-world use or learning how to “google,” these experiences all helped me better understand the connection between a school education and industry.

In an internship, one very important aspect is the mentor. Whether or not the mentor has a plan greatly influences the efficacy and productivity of an internship. The mentorship that Sunil, my mentor, provided was invaluable. In a startup environment, everything is fast-paced and everyone always has a task at hand. Sunil was no exception. But despite all the work he had, he didn’t just agree to guide me; he genuinely wanted to. He made time to walk me through the backend structure and answer any questions I had. If I ever had questions about a specific topic, I was never scared or worried to ask him. And this wasn’t specific to Sunil. All my other colleagues were very open and caring. They checked up on me frequently and made sure there weren’t any other questions they could answer. They genuinely wanted me to have a good time at Aiqudo and to learn as much as I could.

Another aspect of my internship that stood out was the freedom I had. Working at Aiqudo, I was given the freedom to work on a variety of different topics. If there was ever anything specific I wanted to work on, whether that was backend data storage or a pipeline, I knew I could ask and Sunil would find something that fit what I wanted to do. This gave me the freedom not only to work on the backend, but also to design machine learning models to further optimize the architectural pipelines.

From this internship, I have learned much more than I could have imagined. I was able to dabble in every aspect of a company.  Whether that was the backend, the frontend, or even the business side of a startup, the chance to truly get a taste of everything is hard to come by.

A Voice Success Story: Erica from Bank of America

It’s no secret that a growing number of companies are recognizing the opportunities for new, branded experiences presented by voice interfaces powered by AI. In fact, Gartner predicts that 25 percent of digital workers will use virtual assistants daily by 2021, and brands already using chatbots have seen the number of leads they collect increase by as much as 600 percent over traditional lead generation methods.

These AI-driven voice assistants and chatbots have also become useful cost-cutting tools for companies with large subscriber bases – banks, insurance companies, and mobile phone operators, to name a few. A 2017 Juniper Research report calculates that, for every inquiry handled by a chatbot, banks save four minutes of an agent’s time, which translates to a cost saving of $0.70 per query. These platforms are expected to save banks an estimated $7.3 billion in operational costs by 2023. 

The real opportunity presented by voice assistants is in delighting the customer and strengthening brand loyalty, which inevitably drives revenue. We’re entering an exciting time where voice has the ability to redefine the relationship that consumers have with their technology and open up functionality that users previously didn’t know about, or didn’t even know they cared about.

A 2017 PwC report described chatbots as adding “a new dimension to the power of ‘personal touch’ and massively [enhancing] customer delight and loyalty.” 

In my own life, I can’t think of a better example of this than Erica, Bank of America’s AI-driven virtual financial assistant. Working in and following the space for a few years, I am really impressed with what Bank of America has built for its customers in Erica.

Erica caters to the bank’s customer service requirements in a number of ways: sending notifications to customers, providing balance information, sharing money-saving tips, providing credit report updates, facilitating bill payments, and helping customers with simple transactions. Recently, BofA expanded Erica’s capabilities to help clients make smarter financial decisions by providing them with personalized, proactive insight. 

For me, instead of calling the BofA customer service 800 number and spending 20 to 30 minutes navigating menus, waiting on hold, or being transferred and repeating the process all over again, I can talk to Erica and quickly complete transactions. Erica averages a mere three minutes time-to-resolution via voice within the app. Think about all the things you could get done in those saved minutes instead, not to mention a break on your blood-pressure medicine.

Another aspect where Erica shines for me is in exposing capabilities within the app that aren’t obvious or are buried deep in the menu structure. One feature I use all the time is the ability to put an international travel notice on my card before I leave the country (so my credit card works overseas) — sometimes I even use it standing in the TSA security line. Another feature I love is being able to find my routing and account numbers quickly and easily by simply asking Erica. Who hasn’t spent valuable time on a fishing expedition in their banking app, hoping the page doesn’t time out while they wait for their automatic payment information?

The proof of the value of Erica’s voice interface is in the user adoption numbers:  just over a year after introduction, Erica has surpassed 7 million users and has handled more than 50 million client requests. And since launching Erica’s proactive insights in late 2018, daily client engagement with Erica has more than doubled. In an interview with American Banker, BofA’s head of digital banking attributes Erica’s strong adoption to its easy-to-use transaction-search functions and financial advice, two areas where the bank continues to focus on harnessing the power of voice to delight its customers.

Thing is, for all of Erica’s benefits for both consumers and BofA, building this kind of voice-activated assistance in-house — from scratch — isn’t fast, easy, or cheap. The Erica development team boasted 100 people in 2017 — before introduction — and has surely grown by now, given her success. And it took those 100 people nearly two years to get Erica ready for prime time, at an estimated cost of $30 million. Why so expensive? As one BofA VP noted, during development the bank “learned [that] there are over 2,000 different ways to ask us to move money.”

At Aiqudo, we’ve figured out — and operationalized — the technical heavy lifting needed to create a voice assistant: NLU, intent detection, action execution, multiple languages, the analytics platform; there’s no reason for partners to reinvent the wheel. We provide partners with a turnkey voice capability in their app. Developers retain control of this critical new Voice UI (and all of their users’ data) rather than surrendering the direct relationship with their users to voice platforms. Until now, developers have been required to create skills for each voice platform, which risks commoditizing the app and losing the brand they have worked so hard to develop. In contrast, Aiqudo offers a cost-effective solution that allows developers to focus on adding value to their app rather than on customizing for voice.

Disclaimer: Bank of America developed their voice technology without the assistance or use of Aiqudo technology.

What if cars could understand ALL voice commands?

The following transcript was taken from a casual conversation with my son.

Son: Dad, what are you working on?

Me: It’s a new feature in our product called “Auto Mode”.  We just released it in version 2.1 of our Q Actions App for Android.  We even made a video of it.  We can watch it after dinner if you’re interested.

Son: The feature sounds cool.  What’s it look like?

Me: Well, here.  We have this special setting that switches our software to look like the screen in a car. See how the screen is wider than it is tall? Yeah, that’s because most car screens are like that too.

Son: Wait. How do you get your software into cars? Can’t you just stick the tablet on the dashboard?

Me: Humm, not quite.  We develop the software so that car makers can combine it with their own software inside the car’s console.  We’ll even make it look like they developed it by using their own colors and buttons. I’m showing you how this works on a tablet because it’s easier to demonstrate to other people – we just tell them to pretend it’s the car console.  Until cars put our software into their consoles, we’ll make it easy for users to use “Auto Mode” directly on their phones. Just mount the phone on the car’s dash and say “turn on auto mode” – done!

Son:  So how do you use it?  And what does that blue button with a microphone in it do?

Me:  Well, we want anyone in the car to be able to say a command like “navigate to Great America” or “what’s the weather like in San Jose?” or “who are Twenty One Pilots?”.  The button is simply a way to tell the car to listen. When we hear a command, our software figures out what to do and what to show on the console in the car. Sometimes it even speaks back the answer.  Now, we don’t always want people to have to press the button on the screen, so we’ll work with the car makers to add a button on the steering wheel, or even a microphone that is always listening for a special phrase such as “Ok, Q” to start.

Son: How does it do that?  I mean, the command part.

Me: Good question.  Since you’re smart and know a little about software, I’ll keep it short.  Our software takes a command and tries to figure out which app or service can best provide the answer.  For example, if the command is about showing the route to, say, an amusement park like Great America, we’ll ask Google Maps to handle it, which it does really well. Lots of cars come with mapping software like Google Maps already installed, so it’s best to let them handle those. Other types of commands that ask for information, like “what’s the weather like in San Jose” or “who are Twenty One Pilots”, we’ll send off to servers in the cloud. They then send us back answers, and we format them and display them on the screen – in a pretty-looking card like this one.

Me: Sometimes, apps running on our phones can best answer these commands and we use them to handle it.

Son: Wait. Phones?  How are phones involved? I only see you using a tablet.

Me:  Ahhh.  You’ve discovered our coolest feature.  We use apps already installed on your phone.  Do you see those rectangle-looking things in the upper right corner of the tablet? The ones with the pictures and names of people? Well, those are phone profiles.  They appear when a person connects their phone, running our Q Actions app, to the car’s console through Bluetooth, sort of like you do with wireless earbuds. When connected, our software in the console sends your commands to the phone, and the phone in turn attempts to execute the command using one of the installed apps.  Let me explain with an example. Let’s pretend you track your daily homework assignments using the Google Tasks app on your phone. You hop into the car and your phone automatically pairs with the console. Now suppose I ask you to show me your homework assignments. You press the mic button and say “show my homework tasks”.  The software in the console intelligently routes the command to your phone (because Google Tasks is not on the console), opens Google Tasks on your phone, grabs all your homework assignments, and sends them back to the console to be displayed in a nice card. Oh, and it speaks back your homework assignments as well. Let’s see what happens when I tell it to view my tasks.
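(An aside for the technically curious: the routing decision described above can be pictured, very roughly, like the sketch below. All app names and return values are illustrative, and the real Q Actions logic is considerably more involved.)

```python
# Very rough sketch of the console/phone/cloud routing decision described above.
# All names are hypothetical; the real Q Actions logic is far more involved.

CONSOLE_APPS = {"google maps", "google play music"}

def route_command(target_app: str, phone_apps: set[str]) -> str:
    """Decide where a voice command should be executed."""
    if target_app in CONSOLE_APPS:
        return "run on the console"                       # e.g. navigation
    if target_app in phone_apps:
        return "send to the paired phone over Bluetooth"  # e.g. Google Tasks
    return "answer from cloud services"                   # e.g. weather, general questions

phone = {"google tasks", "whatsapp", "pandora"}
print(route_command("google tasks", phone))    # -> send to the paired phone over Bluetooth
print(route_command("google maps", phone))     # -> run on the console
print(route_command("weather service", phone)) # -> answer from cloud services
```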

Son:  Big deal.  I can just pick up my phone and do that.  Why do I need to use voice for that?

Me: Because if you’re the driver, you don’t want to be fumbling around with your phone, possibly getting into an accident! Remember, this is supposed to help drivers with safe, “hands-free” operation. You put your phone in a safe place and our software figures out how to use it to get the answers.

Son: Why can’t the car makers put all these apps in the console so you don’t have to use your phone?

Me: Great question.  Most people carry their phones on them at all times, especially when they drive.  And these phones have all their favorite apps, with all their important personal information stored in them.  There’s no way the car makers could figure out which apps to include when you buy the car. And even if you could download these apps onto the console, all the personal information that’s on your phone would have to be transferred over to the console, app by app.  Clumsy, if you ask me. I prefer to keep my information on my phone and private, thank you very much!

Son: Oh. Now I get it.  So what else does the software do?

Me: The console can call a family member.  If you say “call Dad”, the software looks for ‘dad’ in your phone’s address book and dials the number associated with it.  But wait. You’re probably thinking, ‘What’s so special about that? All the cool cars do it.’ Well, we know that a bunch of apps can make phone calls, so we show you which ones and let you decide.  Also, if you have two numbers for ‘dad’, say a home and a mobile number, the software will ask you to choose one to call. Let’s see how this works when I say “call Dad”.

Me: It asks you to pick an app.  I say ‘phone’ and then it asks me to pick a number since my dad has both a home and mobile number.  I say ‘mobile’ and it dials the number through my phone.

Son: Cool. But what if I have two people with the same name, like Julie?

Me: It will ask you to pick a ‘Julie’ when it finds more than one.  And it will remember that choice the next time you ask it to call Julie.  See what happens when I want to call Jason. It shows me all the people in my address book who are named Jason, along with their phone numbers.  If a person has more than one number, it will say ‘Multiple’.

Son: Wow.  What else?

Me: How about sending a message on WhatsApp? Or setting up a team meeting in the calendar. Or joining a meeting from the car if you are running late. Or even checking which of your friends have birthdays today.  All these actions are performed on your phone using the apps you are familiar with and use.

Son: Which app shows you your friends’ birthdays? That’s kind of neat.

Me: Facebook

Son: I don’t use Facebook. I use Instagram. It’s way better.  Plus all the cool kids use it now.

Me:

Me: You get the picture though, right?

Son: Sure.

Son: So what if all of my friends are in the car with you and we connect to the console?  How does the software know where to send the command?

Me: We use the person’s voice to identify who they are and route the command to the right person’s phone automatically.

Son: Really? That seems way too hard.

Me: Not really.  Although we haven’t implemented it yet, the technology exists to do this sort of thing today.

Son: Going back to main screen, why does the list of actions under ‘Recent’ and ‘Favorites’ change when you change people?

Me: Oh, you noticed that!   Whenever the software switches to a new profile, we grab the ‘Recent’ and ‘Favorites’ sections from that person’s phone and display it in the tablet, er, console.  This is our way of making the experience more personalized or familiar to the way the app appears on your phone. In fact, the ‘Favorites’ are like handy shortcuts for frequently used actions, like “call Mom”.  

Me: One more thing.  Remember the other buttons on the home screen? One looked like a music note, the other a picture for messaging and so on.  Well, when you press those, a series of icons appear across the screen, each showing an action that belongs to that group.  If your phone had Spotify installed, we would show you a few Spotify actions. If Pandora was installed, we would show you Pandora actions and so on.   Check out what happens when I activate my profile. Notice how Pandora appears? That’s because Pandora is on my phone and not on the tablet like Google Play Music and YouTube Music.

Me: The same is true for messaging and calling.  Actions from apps installed on your phone appear, and you simply tap on an icon to run the action.  In fact, if you look carefully, you’ll notice that all the actions that show up on the console are also in the ‘My Actions’ screen in the Q Actions app on your Android phone.  Check out what’s on the tablet vs. my phone.

Son: Yep.

Me: Oh and before I forget, there’s one last item I’d like to tell you about.

Son: What’s that?

Me: Notifications.  If you send me a message on WhatsApp, Messenger, or WeChat, a screen will pop up letting me know I have a message from you.  I can listen to the message by pressing a button or respond to it – by voice, of course – all while keeping my focus on the road.  You’ll get the response just as if I had sent it while holding the phone.

Son:  Cool. I’ll have fun sending you messages on your way home from work.

Me: Grrrrrr.

Son: Hey, can I try this out on my phone?

Me: Sure.  Just download our latest app from the Google Play Store.  After you get it installed, go to the Preferences section under Settings and check the box that says ‘Auto Mode (BETA)’.  You’ll automatically be switched into Auto Mode on your phone. This now becomes your console in the car.

Of course, things appear a bit smaller on your phone than what I’ve shown you on the tablet.  Oh, and since you’re not connected to another phone, all the commands you give it will be performed by apps on your phone.  Try it out and let me know what you think.

Son:  Ok. I’ll play around with it this week.

Me: Great.  Now let’s go see what your mom’s made us for dinner.

Do more with Voice! Q Actions 2.0 now available on Google Play

Do more with Voice

Q Actions 2.0 is here. With this release, we wanted to focus on empowering users throughout their day. As voice plays an increasingly prevalent role in our everyday lives, we’re uncovering more use cases where Q Actions can help. In Q Actions 2.0, you’ll find new features and enhancements that are more conversational and useful.

Directed Dialogue™

Aiqudo believes the interaction with a voice assistant should be casual, intuitive, and conversational. Q Actions understands naturally spoken commands and is aware of the apps installed on your phone, so it will only return personalized actions that are relevant to you. When a bit more information is required from you to complete a task, Q Actions will guide the conversation until it fully understands what you want to do. Casually chat with Q Actions and get things done.

Sample commands:

  • “create new event” (Google Calendar)
  • “message Mario” (WhatsApp, Messenger, SMS)
  • “watch a movie/tv show” (Netflix, Hulu)
  • “play some music” (Spotify, Pandora, Google Play Music, Deezer)

Q Cards™

In addition to providing relevant app actions from personal apps that are installed on your phone, Q Actions will now display rich information through Q Cards™. Get up-to-date information from cloud services on many topics: flight status, stock pricing, restaurant info, and more. In addition to presenting the information in a simple and easy-to-read card, Q Cards™ support Talkback and will read relevant information aloud. (A rough sketch of what such a card might contain follows the sample commands below.)

Sample commands:

  • “What’s the flight status of United 875?”
  • “What’s the current price of AAPL?”
  • “Find Japanese food”
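You can think of a Q Card as a bundle of display fields plus a short spoken summary for Talkback. The structure below is purely a hypothetical illustration (the flight details are made up), not Aiqudo’s actual card schema.

```python
# Hypothetical illustration of a card-style result: displayable fields
# plus a short spoken summary for Talkback. Not Aiqudo's actual schema;
# the flight details are made up.
from dataclasses import dataclass, field

@dataclass
class InfoCard:
    title: str                      # e.g. "United 875"
    subtitle: str                   # e.g. "SFO -> NRT"
    fields: dict[str, str] = field(default_factory=dict)
    spoken_summary: str = ""        # what Talkback reads aloud

flight_card = InfoCard(
    title="United 875",
    subtitle="SFO -> NRT",
    fields={"Status": "On time", "Departs": "11:05 AM", "Gate": "G92"},
    spoken_summary="United 875 to Tokyo is on time, departing at 11:05 AM from gate G92.",
)
print(flight_card.spoken_summary)
```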

Voice Talkback™

There are times when you need information but do not have the luxury of looking at a screen. Voice Talkback™ is a feature that reads aloud the critical snippets of information from an action. This enables you to continue to be productive, without the distraction of looking at a screen. Execute your actions safely and hands-free.

Sample commands:

  • “What’s the stock price of Tesla?” (E*Trade)
    • Q: “Tesla is currently trading at $274.96”
  • “Whose birthday is it today?” (Facebook)
    • Q: “Nelson Wynn and J Boss are celebrating birthdays today”
  • “Where is the nearest gas station?”
    • Q: “Nearest gas at Shell on 2029 S Bascom Ave and 370 E Campbell Ave, 0.2 miles away, for $4.35”

Compound Commands

As an enhancement to our existing curated Action Recipes, users can now create Action Recipes on the fly using Compound Commands. Simply join two of your favorite actions into a single command using “and”. This gives users the ability to create millions of Action Recipe combinations from our database of 4,000+ actions. (A toy sketch of the idea follows the sample commands below.)

Sample commands:

  • “Play Migos on Spotify and set volume to max”
  • “Play NPR and navigate to work”
  • “Tell Monica I’m boarding the plane now and view my boarding pass”
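The simplest way to picture how this works (a toy sketch, not Aiqudo’s actual parser) is splitting the utterance on “and” and sending each part through the normal single-action pipeline.

```python
# Toy sketch of compound-command handling: split on "and" and run each
# sub-command through the normal single-action pipeline. Not Aiqudo's
# actual parser, which also has to handle "and" inside parameters
# (e.g. "tell Monica I'm boarding the plane now and view my boarding pass").
import re

def execute_single_action(command: str) -> None:
    print(f"executing: {command}")  # placeholder for the normal "say it, do it" pipeline

def execute_compound_command(utterance: str) -> None:
    parts = re.split(r"\band\b", utterance, flags=re.IGNORECASE)
    for part in (p.strip() for p in parts):
        if part:
            execute_single_action(part)

execute_compound_command("Play NPR and navigate to work")
# executing: Play NPR
# executing: navigate to work
```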

Simply do more with voice! Q Actions is now available on Google Play.

Q Actions – Complex tasks through Compound Commands

In many cases, a single action does the job.

Say it. Do it!

Often, however, a task requires multiple actions to be performed across multiple independent apps. On the go, you just want things done quickly and efficiently, without having to worry about which actions to run and which apps need to be in the mix.

Compound commands allow you to do just that – just say what you want to do, naturally, and, assuming the request makes sense and you have access to the relevant apps, the right actions are magically executed. It’s not that complicated – just say “navigate to the tech museum and call Kevin”, firing off Maps and WhatsApp in the process.  Driving, and in a hurry to catch the train? Just say “navigate to the Caltrain station and buy a train ticket”, launching Maps and the Caltrain app in sequence.  Did you just hear the announcement that your plane is ready to board? Say “show my boarding pass and tell Susan I’m boarding now” (American, United, Delta, …) and (WhatsApp, Messenger, …), and you’re ready to get on the flight home – one, two … do!

Compound commands are … complex magic to get things done … simply!

Q Actions – Voice feedback from apps using Talkback™

Wonder why you can’t talk to your apps, and why your apps can’t talk back to you?  Stop wondering, as Talkback™ in Q Actions does exactly that. Ask “show my tasks” and the system executes the right action (Google Tasks) and, better yet, tells you what your tasks are – safely and hands-free, as you drive your car.

Driving to work and stuck in traffic?  Ask “whose birthday is it today?” and hear the short list of your friends celebrating their birthdays (Facebook). You can then say “tell Michael happy birthday” to send him your wishes (WhatsApp or Messenger). And if you are running low on gas, just say “find me a gas station nearby” and Talkback™ will tell you where the nearest gas station is and how much you’ll pay for a gallon of unleaded fuel.

Say it. Do it. Hear it spoken back!

Q Actions – Task completion through Directed Dialogue™

When an action or a set of actions requires specific input parameters, Directed Dialogue™ allows the user to supply the required information through very simple, natural back-and-forth conversation. Enhanced with parameter validation and user confirmation, Directed Dialogue™ allows complex tasks to be performed with confidence. Directed Dialogue™ is not about open-ended conversation; it is about getting things done, simply and efficiently.

With Q Actions, Directed Dialogue™ is automatically enabled for every action in the system, because we know the semantic requirements of each and every action’s parameters. It is not constrained to particular actions; it applies to all actions across all verticals.

Another application of Directed Dialogue™ is input refinement. Let’s say I want to purchase batteries. If I just say, “add batteries to my shopping cart”, I can get the wrong product added to my cart, as happens on Alexa, which does the wrong thing for a new product order (the right thing happens on a reorder). With Q Actions, I can provide the brand (Duracell) and the type (9V, 4-pack) through a very simple Directed Dialogue™, and exactly the right product is added to my cart – in the Amazon or Walmart app.
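As a rough mental model, Directed Dialogue™ behaves like slot filling: the system knows which parameters an action needs, asks for the ones that are missing, and confirms before executing. The sketch below is hypothetical code illustrating that loop, not the actual Directed Dialogue™ engine; the slots and prompts are made up.

```python
# Rough slot-filling sketch of the idea behind Directed Dialogue(TM):
# ask for missing parameters, then confirm before executing.
# Hypothetical code; the real system derives slots from action semantics.

ADD_TO_CART_SLOTS = {
    "product": "What would you like to add to your cart?",
    "brand": "Which brand?",
    "variant": "Which size or pack?",
}

def ask(prompt: str) -> str:
    return input(prompt + " ")  # stand-in for the spoken back-and-forth

def add_to_cart(filled: dict) -> None:
    slots = dict(filled)
    for slot, question in ADD_TO_CART_SLOTS.items():
        while not slots.get(slot):          # keep asking until the slot is filled
            slots[slot] = ask(question).strip()
    summary = f'{slots["brand"]} {slots["product"]}, {slots["variant"]}'
    if ask(f"Add {summary} to your cart? (yes/no)").lower().startswith("y"):
        print(f"Added {summary} to the cart.")
    else:
        print("Okay, cancelled.")

# "add batteries to my shopping cart" fills only the product slot;
# the dialogue then asks for brand and variant before confirming.
add_to_cart({"product": "batteries"})
```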

Get Q Actions today.