Will develop solutions that simplify the process of integrating voice assistants into a variety of devices
BUSINESSWIRE, LAS VEGAS, January 6, 2020
Aiqudo, a leading voice technology pioneer, today announced that it is collaborating with embedded voice and vision AI leader Sensory to bring to market comprehensive voice solutions that serve as white-label alternatives for voice services and assistants. The two companies are working on solutions targeting automotive, mobile app, smart home and wearable device applications.
Currently, companies and brands must piece together different technologies to fully implement a voice solution. With their technologies combined, Aiqudo and Sensory will deliver a fully integrated end-to-end solution that combines Sensory’s wake word, voice biometrics and natural language recognition technologies with Aiqudo’s multilingual intent understanding and action execution (Q Actions®) to provide the complete Voice to Action® experience consumers expect.
“Voice adoption continues to grow rapidly, and brands are always exploring ways to streamline the process of integrating a convenient voice UX into their products,” said Todd Mozer, Sensory’s CEO. “Working with Aiqudo allows our two companies to provide the industry with a turn-key solution for integrating powerful voice assistants into their products, featuring brand-specific wake words and the ability to recognize who is speaking.”
With both Aiqudo and Sensory positioned as leaders in their respective fields, this collaboration is a natural fit, as their technologies are highly complementary. The initial integration is focused on the automotive vertical and will be showcased at the 2020 Consumer Electronics Show.
Aiqudo’s Auto Mode highlights a highly personalized user experience in the car using the Q Actions® platform. Enhanced with Sensory’s wake word (TrulyHandsfree™) and voice ID (TrulySecure™) functionality, multiple users can seamlessly access their personal mobile devices just by using their voice to execute personal actions. “Users just need to enter the cabin with their smartphones,” said Rajat Mukherjee, Aiqudo CTO. “There’s no registration required, and the personalized wake word and voice biometrics allow users to instantly access their personal apps seamlessly and securely.”
“Brands increasingly want to create their own branded voice experiences for their customers,” said John Foster, CEO of Aiqudo. “Working with Sensory, we have created the easiest and fastest way for brands to bring the power and convenience of voice to their customers. We are excited to integrate our areas of practice and expertise to deliver a comprehensive solution.”
To view the demo onsite while at CES, please email Aiqudo at CES@aiqudo.com.
For over 25 years, Sensory has pioneered and developed groundbreaking applications for machine learning and embedded AI – turning those applications into household technologies. Pioneering the concept of always-listening speech recognition more than a decade ago, Sensory’s flexible wake word, small to large vocabulary speech recognition, and natural language understanding technologies are fueling today’s voice revolution. Additionally, its biometric recognition technologies are making everything from unlocking a device to authenticating users for digital transactions faster, safer and more convenient. Sensory’s technologies are widely deployed in numerous markets, including automotive, home appliances, home entertainment, IoT, mobile phones, wearables and more, and have shipped in over two billion units of leading consumer products and software applications.
For more information about this announcement, Sensory or its technologies, visit https://www.sensory.com/, contact email@example.com or for press inquiries contact firstname.lastname@example.org.
Aiqudo (pronounced: “eye-cue-doe”) is a Voice AI pioneer that connects the nascent world of voice interfaces to the useful, mature world of mobile apps and cloud services through its Voice to Action® platform. It lets people use natural voice commands to execute actions in mobile apps and cloud services across devices. Aiqudo’s SaaS platform uses machine learning (AI) to understand natural-language voice commands and then triggers instant actions via mobile apps, cloud services, or device actions, enabling consumers to get things done quickly and easily.
Aiqudo’s proprietary technology is covered by more than 30 granted patents and patent applications. Aiqudo’s technology is delivered in a scalable approach to creating voice-enabled actions without mandating APIs or developer dependencies.
Aiqudo, a voice technology pioneer, announced ahead of CES 2020 a partnership with premium electric vehicle manufacturer BYTON, bringing the power of Aiqudo’s Voice AI platform to BYTON cars. Aiqudo’s Voice to Action® platform will enable interacting with your favorite apps on your mobile phone hands-free while driving, seamlessly integrating with BYTON’s unique Digital Experience.
Aiqudo’s industry-leading Voice AI platform will voice-enable actions in native apps within the BYTON ecosystem as well as intelligently launch app actions on personal mobile devices in the vehicle. BYTON drivers and passengers will be able to navigate, make calls, send messages, listen to music, shop, join meetings, make payments and more using simple voice commands with apps they use and love. In its integration with BYTON, Aiqudo incorporates the personalization and individual choice reflected by consumers’ favorite apps, as well as personal elements within apps such as preferred playlists, contacts, or favorites, all without user registration or setup. The BYTON experience powered by Aiqudo delivers the safest, easiest and most useful way to use a mobile device while in the car.
“A seamless voice experience is integral to BYTON’s groundbreaking user experience and Aiqudo Voice will make accessing your favorite apps convenient and safe,” said Jeff Chung, BYTON Vice President of Digital Engineering. “Aiqudo’s white label solution allows us to explore new possibilities with our expanding partnerships in the BYTON digital ecosystem.”
Aiqudo’s voice platform comprises a semiotics-based intent engine that currently understands natural-language commands in seven languages, plus an action-execution capability across thousands of applications that consumers rely on daily. The company’s white-label voice platform allows car manufacturers, phone and smart device OEMs and mobile app developers to define unique voice experiences for their customers.
“BYTON has reimagined the relationship between cars and the people who drive or ride in them, placing voice-based interactions at the center of the in-car experience. We believe that voice will soon be the primary way people interact with their digital world. We’re partnering with BYTON to bring a high-utility, personalized voice experience to their automobiles,” said John Foster, CEO of Aiqudo. “The in-car experience is a prime use case demonstrating the power of voice. Customers can now drive safely, undistracted and hands-free, and still use their favorite apps just by using their voice.”
Aiqudo’s Action Kit functionality will be offered to app developers through the BYTON developer portal.
“Action Kit enables BYTON app developers to easily and effortlessly enable voice within their applications for the company’s range of cars and expansive infotainment systems,” said Dr. Rajat Mukherjee, Aiqudo CTO. “BYTON’s vision of the car of the future, equipped for autonomous driving, will accelerate the need that users have to access their personal digital lives everywhere. Aiqudo makes this easy!”
BYTON is a global premium electric vehicle manufacturer that is creating the world’s first smart device on wheels. By integrating advanced digital technologies to offer a smart, connected, and comfortable mobility experience, the company is designing an EV that will meet the demands of an increasingly digital lifestyle now and into the future.
The company’s global headquarters and state-of-the-art manufacturing center are located in Nanjing, China. Its global R&D hub is located in the heart of Silicon Valley and devoted to the development of BYTON’s groundbreaking intelligent car experience, digital ecosystem, advanced connectivity, as well as other cutting-edge technologies. BYTON’s design and concept vehicle center is located in Munich, Germany.
BYTON’s core management team is made up of top innovators from leading-edge companies such as BMW, Tesla, Google, and Apple. This diverse group of leaders from China, Europe, and the US share the singular vision of creating an unprecedented automotive experience.
For over 2 years Aiqudo has been leading the charge of deep app integration with voice assistants on Android phones. Today, our Android platform continues to do many things that no other platform can. Now, we’re incredibly proud to announce the latest release of our Q Actions app for iOS. We’ve been working on the latest iOS release for months, and it represents a full suite of actions functionality driven by the new ActionKit SDK for iOS. This new ActionKit is also what iOS developers can use to easily configure voice into their own apps.
iOS is a more restrictive and closed ecosystem than Android. Many of the platform capabilities that Android provides are not available to third-party developers in Apple’s ecosystem. For instance, apps are not allowed to freely communicate with each other, and it’s difficult to determine what apps are installed. Such restrictions challenge digital assistants like Q Actions, which rely on knowledge of a user’s apps to provide relevant results and the ability to communicate with apps in order to automate and execute actions in other apps.
Q Actions for iOS enables app developers to define their own voice experience for their users rather than being subject to the limitations of SiriKit or Siri Shortcuts. Currently, SiriKit limits developers’ ability to expose functionality in Siri, allowing only broad categories that dilute the differentiated app experiences that developers have built. With Q Actions for iOS, brands and businesses will be able to maintain their differentiating features and brand recognition, rather than conform to a generalized category.
With this release, we took a hard look at what was needed to build a comparable experience to what we have on Android. To make it more powerful for iOS app developers, we pushed most of the functionality into the ActionKit SDK. The result is that ActionKit powers all the actions available in the app, allowing developers to offer an equivalent experience in their iOS app. The ActionKit SDK is available for embedding in any iOS app today.
Let’s take a look at what Q Actions and the Aiqudo platform offer right now:
Easily discover actions for your phone
Q Actions helpfully provides an Action Summary with a categorized list of apps and actions for your device. Browse by category, tap on an app to view sample commands, or tap a command to execute the action.
Go beyond Siri
Q Actions supports hundreds of new actions! Watch Netflix Originals or stream live video on Facebook with simple commands like “watch Narcos” or “stream live video”.
True Natural Language
Q Actions for iOS leverages Aiqudo’s proprietary, semiotics-based language modeling system to power support for natural language commands. Rather than the exact match syntax required by Siri Shortcuts, Aiqudo understands the wide variations in commands that consumers use when interacting naturally with their voice. Plus, Aiqudo is multilingual, currently supporting commands in seven languages worldwide.
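To make the contrast with exact-match grammars concrete, here is a toy keyword-overlap matcher in Python. The intents and keyword sets are invented for illustration; this is emphatically not Aiqudo’s semiotics-based engine, just a minimal sketch of why flexible matching handles the wide variation in natural commands:

```python
from typing import Optional

# Invented intents and keyword sets -- purely illustrative.
INTENTS = {
    "play_show": {"watch", "play", "stream"},
    "get_weather": {"weather", "forecast", "rain"},
}

def match_intent(command: str) -> Optional[str]:
    """Return the intent whose keyword set best overlaps the command,
    or None when nothing matches."""
    tokens = set(command.lower().split())
    best, best_score = None, 0
    for intent, keywords in INTENTS.items():
        score = len(tokens & keywords)
        if score > best_score:
            best, best_score = intent, score
    return best
```

With this approach, “watch Narcos”, “play Narcos” and “stream Narcos” all resolve to the same intent, whereas an exact-match grammar would need each phrasing registered separately.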
Content-rich Cards for informational queries
Get access to web results from Bing, translate phrases or look at stock quotes directly from Q Actions. Get rich audio and visual feedback from cards.
There’s still a lot to come! We’ve already shown how Aiqudo can enable a better voice experience in the car. We’ve also seen how voice can help users engage meaningfully with your app. We’re working hard to build a ubiquitous voice assistant platform, and this release on iOS gets us one step closer. Stay tuned as we’ll be talking more about some of the challenges of bringing our voice platform to iOS and iOS app developers, and more importantly, how we’re aligned with Apple’s privacy-centric approach.
The following transcript was taken from a casual conversation with my son.
Son: Dad, what are you working on?
Me: It’s a new feature in our product called “Auto Mode”. We just released it in version 2.1 of our Q Actions App for Android. We even made a video of it. We can watch it after dinner if you’re interested.
Son: The feature sounds cool. What’s it look like?
Me: Well, here. We have this special setting that switches our software to look like the screen in a car. See how the screen is wider than it is tall? Yeah, that’s because most car screens are like that too.
Son: Wait. How do you get your software into cars? Can’t you just stick the tablet on the dashboard?
Me: Humm, not quite. We develop the software so that car makers can combine it with their own software inside the car’s console. We’ll even make it look like they developed it by using their own colors and buttons. I’m showing you how this works on a tablet because it’s easier to demonstrate to other people – we just tell them to pretend it’s the car console. Until cars put our software into their consoles, we’ll make it easy for users to use “Auto Mode” directly on their phones. Just mount the phone on the car’s dash and say “turn on auto mode” – done!
Son: So how do you use it? And what does that blue button with a microphone in it do?
Me: Well, we want anyone in the car to be able to say a command like “navigate to Great America”, “what’s the weather like in San Jose?” or “who are Twenty One Pilots?”. The button is simply a way to tell the car to listen. When we hear a command, our software figures out what to do and what to show on the console in the car. Sometimes it even speaks back the answer. Now, we don’t always want people to have to press the button on the screen, so we’ll work with the car makers to add a button on the steering wheel or even a microphone that is always listening for a special phrase such as “Ok, Q” to start.
Son: How does it do that? I mean, the command part.
Me: Good question. Since you’re smart and know a little about software, I’ll keep it short. Our software takes a command and tries to figure out what app or service can best provide the answer. For example, if the command is about showing the route to say, an amusement park like Great America, we’ll ask Google Maps to handle it, which it does really well. Lots of cars come installed with mapping software like Google Maps so it’s best to let them handle those. For other types of commands that ask for information, like “what’s the weather like in San Jose” or “who are Twenty One Pilots”, we’ll send it off to servers in the cloud. They then send us back answers and we format it and display it on the screen – in a pretty looking card like this one.
Me: Sometimes, apps running on our phones can best answer these commands and we use them to handle it.
Son: Wait. Phones? How are phones involved? I only see you using a tablet.
Me: Ahhh. You’ve discovered our coolest feature. We use apps already installed on your phone. Do you see those rectangle-looking things in the upper right corner of the tablet? The ones with the pictures and names of people? Well, those are phone profiles. They appear when a person connects their phone, running our Q Actions app, to the car’s console through Bluetooth, sort of like you do with wireless earbuds. When connected, our software in the console sends the phone your commands, and the phone in turn attempts to execute the command using one of the installed apps. Let me explain with an example. Let’s pretend you track your daily homework assignments using the Google Tasks app on your phone. You hop into the car and your phone automatically pairs with the console. Now suppose I ask you to show me your homework assignments. You press the mic button and say “show my homework tasks”. The software in the console would intelligently route the command to your phone (because Google Tasks is not on the console), open Google Tasks on your phone, grab all your homework assignments and send them back to the console to be displayed in a nice card. Oh, and it would speak back your homework assignments as well. Let’s see what happens when I tell it to view my tasks.
Son: Big deal. I can just pick up my phone and do that. Why do I need to use voice for that?
Me: Because if you’re the driver, you don’t want to be fumbling around with your phone, possibly getting into an accident! Remember, this is supposed to help drivers with safe, “hands-free” operation. You put your phone in a safe place and our software figures out how to use it to get the answers.
Son: Why can’t the car makers put all these apps in the console so you don’t have to use your phone?
Me: Great question. Most people carry their phones on them at all times, especially when they drive. And these phones have all their favorite apps with all their important personal information stored in them. There’s no way the car makers could figure out which apps to include when you buy the car. And even if you could download these apps onto the console, all your personal information that’s on your phone would have to be transferred over to the console, app by app. Clumsy if you ask me. I prefer to keep my information on my phone and private, thank you very much!
Son: Oh. Now I get it. So what else does the software do?
Me: The console can call a family member. If you say “call Dad”, the software looks for ‘dad’ in your phone’s address book and dials the number associated with it. But wait. You’re probably thinking “What’s so special about that? All the cool cars do it”. Well, we know that a bunch of apps can make phone calls, so we show you which ones and let you decide. Also, if you have two numbers for ‘dad’, say a home and mobile number, the software will ask you to choose one to call. Let’s see how this works when I say “call Dad”.
Me: It asks you to pick an app. I say ‘phone’ and then it asks me to pick a number since my dad has both a home and mobile number. I say ‘mobile’ and it dials the number through my phone.
Son: Cool. But what if I have two people with the same name, like Julie?
Me: It will ask you to pick a ‘Julie’ when it finds more than one. And it will remember that choice next time you ask it to call Julie. See what happens when I want to call Jason. It shows me all the people in my address book who are named Jason along with their phone numbers. If a person has more than one number, it will say ‘Multiple’.
Son: Wow. What else?
Me: How about sending a message on WhatsApp? Or setting up a team meeting in the calendar. Or joining a meeting from the car if you are running late. Or even checking which of your friends has a birthday today. All these actions are performed on your phone using the apps you are familiar with and use.
Son: Which app shows you your friends’ birthdays? That’s kind of neat.
Me: Facebook does.
Son: I don’t use Facebook. I use Instagram. It’s way better. Plus all the cool kids use it now.
Me: You get the picture though, right?
Son: So what if all of my friends are in the car with you and we connect to the console? How does the software know where to send the command?
Me: We use the person’s voice to identify who they are and route the command to the right person’s phone automatically.
Son: Really? That seems way too hard.
Me: Not really. Although we haven’t implemented it yet, the technology exists to do this sort of thing today.
Son: Going back to the main screen, why does the list of actions under ‘Recent’ and ‘Favorites’ change when you change people?
Me: Oh, you noticed that! Whenever the software switches to a new profile, we grab the ‘Recent’ and ‘Favorites’ sections from that person’s phone and display them on the tablet, er, console. This is our way of making the experience more personalized and familiar, matching the way the app appears on your phone. In fact, the ‘Favorites’ are like handy shortcuts for frequently used actions, like “call Mom”.
Me: One more thing. Remember the other buttons on the home screen? One looked like a music note, the other a picture for messaging and so on. Well, when you press those, a series of icons appear across the screen, each showing an action that belongs to that group. If your phone had Spotify installed, we would show you a few Spotify actions. If Pandora was installed, we would show you Pandora actions and so on. Check out what happens when I activate my profile. Notice how Pandora appears? That’s because Pandora is on my phone and not on the tablet like Google Play Music and YouTube Music.
Me: Same is true for messaging and calling. Actions from apps installed on your phone would appear. You would simply tap on the icon to run the action. In fact, if you look carefully, you’ll notice that all the actions that show up on the console are also in the ‘My Actions’ screen in the Q Actions app on your Android Phone. Check out what’s on the tablet vs. my phone.
Me: Oh and before I forget, there’s one last item I’d like to tell you about.
Son: What’s that?
Me: Notifications. If you send me a message on WhatsApp, Messenger or WeChat, a screen will popup letting me know I have a message from you. I can listen to the message by pressing a button or respond to the message – by voice, of course, all while keeping my focus on the road. You’ll get the response just as if I had sent it while holding the phone.
Son: Cool. I’ll have fun sending you messages on your way home from work.
Son: Hey, can I try this out on my phone?
Me: Sure. Just download our latest app from the Google Play Store. After you get it installed, go to the Preferences section under Settings and check the box that says ‘Auto Mode’ (BETA). You’ll automatically be switched into Auto Mode on your phone. Now this becomes your console in the car.
Of course, things appear a bit smaller on your phone than what I’ve shown you on the tablet. Oh, and since you’re not connected to another phone, all the commands you give it will be performed by apps on your phone. Try it out and let me know what you think.
Son: Ok. I’ll play around with it this week.
Me: Great. Now let’s go see what your mom’s made us for dinner.
Somewhere in the Android Settings lies the option for you to turn on Bluetooth, turn off Wifi, or change sound preferences. These options are usually buried deep under menus and sub-menus. Discoverability is an issue, and navigating to an option usually means multiple taps within the Settings app. Yes, there’s a search bar within the Settings app, but it’s clunky, requires typing and only returns exact matches. Some of these options are accessible through the quick settings bar, but the discovery and navigation issues still exist.
In the latest release, simply tell Q Actions what System Settings you want to change. Q Actions can now control your Bluetooth, Wifi, music session, and sound settings through voice.
Configure your Settings:
“turn on/off bluetooth”
“turn wifi on/off”
Control your music:
“play next song”
“resume my music”
Toggle your sound settings:
“enable do not disturb”
“increase the volume”
“put my phone on vibrate”
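At its simplest, this kind of feature can be modeled as a dispatch table from a normalized spoken phrase to a settings change. The sketch below is purely illustrative (the phrase-to-setting mapping is hypothetical, and a real assistant would call platform APIs rather than mutate a dict):

```python
# Hypothetical mapping from normalized voice commands to settings
# changes -- illustrative only, not Q Actions' implementation.
SETTINGS_ACTIONS = {
    "turn on bluetooth": ("bluetooth", True),
    "turn off bluetooth": ("bluetooth", False),
    "turn wifi on": ("wifi", True),
    "turn wifi off": ("wifi", False),
    "enable do not disturb": ("do_not_disturb", True),
}

def apply_command(command: str, device_state: dict) -> dict:
    """Look up the command and apply the setting change to the state."""
    setting, value = SETTINGS_ACTIONS[command.lower()]
    device_state[setting] = value
    return device_state
```

A production system would of course layer natural-language matching on top of this, so that “switch bluetooth on” and “turn on bluetooth” resolve to the same action.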
In addition to placing calls to your Contacts, Q Actions helps you manage Contacts via voice. Easily add a recent caller as a contact in your phonebook or share a friend’s contact info with simple commands. If you have your contact’s address in your Contacts, you can also get directions to the address using your favorite navigation app.
Place calls to Contacts:
“call Jason Chen”
“dial Mario on speaker”
Manage and share your Contacts:
“save recent number as Mark Johnson”
“edit Helen’s contact information“
“share contact info of Daniel Phan”
“view last incoming call”
Bridge the gap between your Contacts and navigation apps:
“take me to Rob’s apartment”
“how do I get to Mike’s house?”
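The disambiguation behind a command like “call Jason Chen” can be sketched as a name lookup that returns every candidate, so the assistant knows when to dial immediately and when to ask a follow-up question. The phonebook below is invented for illustration (real contact data stays on the user’s phone):

```python
# Invented sample phonebook -- for illustration only.
CONTACTS = [
    {"name": "Jason Chen", "numbers": {"mobile": "555-0101"}},
    {"name": "Jason Lee", "numbers": {"home": "555-0102", "mobile": "555-0103"}},
    {"name": "Helen Park", "numbers": {"mobile": "555-0104"}},
]

def lookup(spoken_name: str) -> list:
    """Return every contact whose name contains the spoken name."""
    spoken = spoken_name.lower()
    return [c for c in CONTACTS if spoken in c["name"].lower()]
```

A single match can be dialed right away; multiple matches (two Jasons) prompt the user to choose, and a contact with several numbers prompts for which number to call.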
Unlock your phone’s potential with voice! Q Actions is now available on Google Play.
Finding yourself routinely using the same set of actions as you commute to work or prepare for your workout session? Action Recipes string together your favorite actions to help you get through the day. With Action Recipes, you can run multiple actions from the apps you use with simple voice commands.
Hands on the steering wheel as you start your daily commute to work?
“start my morning commute”
Start streaming NPR as Google Maps navigates you through the best route to work
Earbuds on, phone stowed, as you get ready for your routine run around the neighborhood or nearby trail?
“start my workout”
Play your favorite tracks on Spotify, Pandora, or Google Play Music as MapMyRun, Mi Fit or Google Fit logs your workout session
Hands tied as you gather your carry-ons and prepare to board the plane?
“ready to board”
Send someone a quick message through SMS, WhatsApp, or WeChat as United, American Airlines, Alaska, or Delta brings up your boarding pass
These Action Recipes are already created for your convenience. Just grab the latest version of Q Actions from the Google Play Store and swipe left until you reach the My Action Recipes page to preview your supported Action Recipes. More interesting recipes will just start surfacing here as they come online.
Action Recipes are automatically personalized for you – the right actions are executed based on the apps you use for these tasks. We are working on further controls and customizability.
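Conceptually, a recipe is just an ordered list of actions triggered by a single command. Here is a rough Python sketch of that idea; the recipe names and action identifiers are hypothetical stand-ins, not the platform’s actual data model:

```python
# One voice command fans out to an ordered list of app actions.
# Recipe contents here are illustrative stand-ins.
RECIPES = {
    "start my morning commute": ["navigate_to_work", "stream_npr"],
    "start my workout": ["log_workout", "play_running_playlist"],
}

def run_recipe(command: str) -> list:
    """Return the actions executed, in order. Stubbed: a real client
    would invoke each app's action instead of just recording it."""
    executed = []
    for action in RECIPES.get(command.lower(), []):
        executed.append(action)
    return executed
```

Personalization would slot in at the point where each abstract action (e.g. “play music”) is bound to a concrete app on the user’s phone.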
We would love to hear your feedback on these Action Recipes. Please let us know what you think!
Aiqudo to Power Actions for Motorola’s Moto Voice Experience, Launch Globally in Seven Languages
BUSINESSWIRE. SAN JOSE, CALIFORNIA, APRIL 30, 2018
Aiqudo, a Voice AI pioneer that lets people use voice commands to execute actions in mobile apps, today announced that it has entered into an agreement to power Motorola’s Moto Voice experience globally. Aiqudo’s technology is now available in select Motorola smartphones in major markets around the world in English, Spanish, Portuguese, French, Italian, German and Russian. The technology will integrate seamlessly with the top apps in each market.
“With this agreement, Motorola brings Aiqudo’s unique virtual assistant capabilities to tens of millions of customers,” said John Foster, Aiqudo CEO and Co-Founder. “Motorola, who pioneered the mobile phone business, is now pioneering voice as the primary interface to our digital worlds and we are excited to work with them. At Aiqudo, we’ve solved issues with existing voice-assisted platforms to ensure an experience that offers ease of use and seamless integration with favorite apps without users needing to learn new syntax and running up against walled gardens that are so common with current solutions. This agreement brings to market far more powerful interactions than have ever before been enabled with voice on any device. We are thrilled to work with Motorola to bring what we believe is the next wave of voice to consumers.”
Aiqudo’s powerful Voice-to-Action™ platform brings the ease and power of voice to the mobile app ecosystem. Mobile apps have become central to consumers’ lives, providing massive utility across entertainment, shopping, navigation, messaging, and more. With Aiqudo, Moto Voice allows instant access to these actions, enabling consumers to use their favorite mobile apps with simple, intuitive natural-language commands – hands free. Users get both verbal and visual results, which is essential for making quick decisions, and their private data stays private, within the apps they originally entered it in.
“Aiqudo helps users get things done quickly with the most ubiquitous assistant – the phone, meaning you don’t lose functionality when you walk out of your living room or home. And it’s easy to use because the Q Platform does not require the user to learn a new command syntax or specify an app by name. The Q platform learns from the user instead of requiring the user to learn new skills,” said Rajat Mukherjee, Aiqudo CTO and Co-Founder. “Our Voice AI, built by our team in Belfast, Northern Ireland, enables rapid scalability to multiple languages and localization for apps in each of the markets we will be expanding to with Motorola. Aiqudo voice enables users’ favorite apps in each market and supports regional language variations, for example between Spain and Mexico.”
Aiqudo (pronounced: “eye-cue-doe”) is a Voice AI pioneer that connects the nascent world of voice interfaces to the useful, mature world of mobile apps through its Voice-to-Action™ platform. It lets people use natural voice commands to execute actions in mobile apps across devices. Aiqudo’s SaaS platform uses machine learning (AI) to understand natural-language voice commands and then triggers instant actions via mobile apps, enabling consumers to get things done quickly and easily.
Aiqudo’s proprietary technology is covered by more than 30 granted patents and patent applications. Aiqudo’s technology is delivered in a scalable approach to creating voice-enabled actions without APIs or developer dependencies.
As our US team continued to grow and we scheduled more partner meetings and candidate visits, we realized pretty quickly that we needed a new, dedicated office space. A place to call home. We moved out of SPACES at the end of January and, after several weeks of working out of a temporary space in our new building, we are finally settled into our new office.
Check out the pictures before and after our official move-in. We’re at 1901 S. Bascom Ave, Campbell, CA. Suite 1220. Visit us!!
Google Assistant is popular among Android users. It is also integrated with Google Home, Google’s smart assistant device for the home. However, Google Assistant supports only a limited set of Actions. There are many actions that Assistant currently does not perform optimally for you, even on your phone. Instead of executing the right actions in the relevant app, Assistant offers web search results in many cases:
“Show my boarding pass” (You want to pull up your boarding pass in your airline app when you are in the security line at the airport)
“I need a haircut” (You want to check in to your favorite salon)
“Who’s at the front door?” (You want to open up the video from your security device app)
We integrated the Q Actions Android app with Google Assistant. Now you can talk to Google Assistant and execute actions instantly in your favorite apps on your mobile device. You can do this with Google Assistant on your phone or on Google Home.
The first step is to open Q Actions with the command – “Talk to Q Actions”
Now you can talk natively to the Q Actions voice application, for example:
“Play narcos” -> will open the Netflix app and start playing the TV show
“I’d like to board” -> will open the United Airlines app and take you right to your boarding pass
“Show hotels in Chicago” -> will open the HotelTonight app and show hotel deals in Chicago
“Show my photo albums” -> will allow you to choose between Flickr and Google Photos apps (if you have both apps) and will show your albums
It’s easy and convenient to use Q Actions with Google Assistant. Simply say what you want to do. You don’t need to learn a specific syntax; you can speak naturally. Further, Q personalizes actions based on the apps installed on your device. If you have two apps that are a match you will get an option to choose, using voice. In the future the system will learn your preferences and execute the best action for you.
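The “learn your preferences” behavior described above could work roughly like the sketch below, which remembers an explicit per-intent app choice and reuses it on the next command. This is an assumption about how such personalization might be designed, not the actual Q platform logic:

```python
# Toy per-user app preference: when several installed apps match an
# intent, remember the user's explicit choice and reuse it next time.
class AppChooser:
    def __init__(self):
        self.preferences = {}  # intent -> preferred app name

    def resolve(self, intent, matching_apps, chosen=None):
        """Return the app to run, or None to prompt the user by voice."""
        if chosen is not None:
            self.preferences[intent] = chosen  # learn the choice
            return chosen
        if intent in self.preferences:
            return self.preferences[intent]
        if len(matching_apps) == 1:
            return matching_apps[0]
        return None  # ambiguous: ask the user to choose
```

So the first “show my photo albums” with both Flickr and Google Photos installed would prompt a choice, and subsequent commands would go straight to the app the user picked.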
We would love your feedback on your experience with using Q Actions with Google Assistant. We are in the process of adding more features, so stay tuned!