
QTime: What I Learned as an Aiqudo Intern



Intern Voice: Mithil Chakraborty

Hi! My name is Mithil Chakraborty and I’m currently a senior at Saratoga High School. During the summer of 2020, I had the privilege of interning at Aiqudo for six weeks as a Product Operations intern. Although I had previously coded in Java, HTML/JavaScript, and Python, this was my first internship at a company. Coming in, I was excited but a bit uncertain, thinking that I would not be able to fully understand the core technology (the Q System) or how the app’s actions are created. But even amidst the COVID-19 pandemic, I learned a tremendous amount, not only about onboarding and debugging actions, but also about how startups work; the drive of each of the employees was admirable and really stood out to me. As the internship progressed, I felt like a part of the team. Phillip, Mark, and Steven did a great job making me feel welcome and explaining the Q Tools program, the Q App, and onboarding procedures.

As I played around with the app, I realized how cool its capabilities were. During the iOS stage of my internship, I verified and debugged numerous iOS Q App actions and contributed to the latest release of the iOS Q Actions app. From there, I researched new actions to onboard for Android, focusing on relevant information and new apps. As a result, I proposed actions that would display COVID-19 information in Facebook and open Messenger Rooms. Through this process, I also learned how to implement Voice Talkback for the Facebook COVID-19 info action, using Android Device Monitor and Q Tools. The unique actions I finally onboarded included:

  • “show me coronavirus info” >> talks back the first three headlines in the COVID-19 Info Center pane on Facebook
  • “open messenger rooms” >> creates and opens a Messenger Room

Covid Information

Users don’t have to say an exact phrase for the app to execute the correct action; the smart AI-based intent matching system runs only the relevant actions from Facebook or Messenger based on the user’s query. The user does not even have to mention the app by name – the system picks the right app automatically.

When these actions were finally implemented, it felt rewarding to see my work easily accessible on smartphones; afterwards, I told my friends and family about the amazing Q Actions app so they could see my work. Throughout my Aiqudo internship, the team was incredibly easy to talk to and always encouraged questions. The experience showed me the real-life applications of software engineering and AI, which I hadn’t been exposed to before, and the importance of collaboration and perseverance, especially when I was debugging pesky actions for iOS. This opportunity taught me, in a hands-on way, the business and technical skills a startup like Aiqudo needs to be nimble and successful, which I greatly appreciated. Overall, my time at Aiqudo was incredibly memorable and I hope to be back soon.

Thank you Phillip, Mark, Steven, Rajat and the rest of the Aiqudo team for giving me this valuable experience this summer! 


mCloud Brings Natural Language Processing to Connected Workers through Partnership with Aiqudo


CANADA NEWSWIRE, VANCOUVER, OCTOBER 1, 2020

mCloud Technologies Corp. (TSX-V: MCLD) (OTCQB: MCLDF) (“mCloud” or the “Company”), a leading provider of asset management solutions combining IoT, cloud computing, and artificial intelligence (“AI”), today announced it has entered into a strategic partnership with Aiqudo Inc. (“Aiqudo”), leveraging Aiqudo’s Q Actions® Voice AI platform and Action Kit SDK to bring new voice-enabled interactions to the Company’s AssetCare™️ solutions for Connected Workers.

By combining AssetCare with Aiqudo’s powerful Voice to Action® platform, mobile field workers will be able to interact with AssetCare solutions through a custom digital assistant using natural language.


In the field, industrial asset operators and field technicians will be able to communicate with experts, find documentation, and pull up relevant asset data instantly and effortlessly. This will expedite the completion of asset inspections and operator rounds – an industry first using hands-free, simple, and intuitive natural commands via head-mounted smart glasses. Professionals will be able to call up information on demand with a single natural language request, eliminating the need to search using complex queries or special commands.

Here’s a demonstration of mCloud’s AssetCare capabilities on smart glasses with Aiqudo.

“mCloud’s partnership with Aiqudo provides AssetCare with a distinct competitive edge as we deliver AssetCare to our oil and gas, nuclear, wind, and healthcare customers all around the world,” said Dr. Barry Po, mCloud’s President, Connected Solutions and Chief Marketing Officer. “Connected workers will benefit from reduced training time, ease of use, and support for multiple languages.”

“We are excited to power mCloud solutions with our Voice to Action platform, making it easier for connected workers using AssetCare to get things done safely and quickly,” said Dr. Rajat Mukherjee, Aiqudo’s Co-Founder and CTO. “Our flexible NLU and powerful Action Engine are perfect for creating custom voice experiences for applications on smart glasses and smartphones.”

Aiqudo technology will join the growing set of advanced capabilities mCloud is now delivering by way of its recent acquisition of kanepi Group Pty Ltd. (“kanepi”). The Company announced on September 22 it expected to roll out new Connected Worker capabilities to 1,000 workers in China by the end of the year, targeting over 20,000 in 2021.

BUSINESSWIRE:  mCloud Brings Natural Language Processing to Connected Workers through Partnership with Aiqudo

Official website: www.mcloudcorp.com  Further Information: mCloud Press 

About mCloud Technologies Corp.

mCloud is creating a more efficient future with the use of AI and analytics, curbing energy waste, maximizing energy production, and getting the most out of critical energy infrastructure. Through mCloud’s AI-powered AssetCare™ platform, mCloud offers complete asset management solutions in five distinct segments: commercial buildings, renewable energy, healthcare, heavy industry, and connected workers. IoT sensors bring data from connected assets into the cloud, where AI and analytics are applied to maximize their performance.

Headquartered in Vancouver, Canada with offices in twelve locations worldwide, the mCloud family includes an ecosystem of operating subsidiaries that deliver high-performance IoT, AI, 3D, and mobile capabilities to customers, all integrated into AssetCare. With over 100 blue-chip customers and more than 51,000 assets connected in thousands of locations worldwide, mCloud is changing the way energy assets are managed.

mCloud’s common shares trade on the TSX Venture Exchange under the symbol MCLD and on the OTCQB under the symbol MCLDF. mCloud’s convertible debentures trade on the TSX Venture Exchange under the symbol MCLD.DB. For more information, visit www.mcloudcorp.com.

About Aiqudo

Aiqudo’s Voice to Action® platform voice-enables applications across multiple hardware environments including mobile phones, IoT and connected home devices, automobiles, and hands-free augmented reality devices. Aiqudo’s Voice AI comprises a unique natural language command understanding engine, the largest Action Index and action execution platform available, and the company’s Voice Graph analytics platform, which drives personalization based on behavioral insights. Aiqudo powers customizable white-label voice assistants that give our partners control of their voice brand and enable them to define their users’ voice experience. Aiqudo currently powers the Moto Voice digital assistant experience on Motorola smartphones in 7 languages across 12 markets in North and South America, Europe, India and Russia. Aiqudo is based in Campbell, CA with offices in Belfast, Northern Ireland.

SOURCE mCloud Technologies Corp.

For further information:

Wayne Andrews, RCA Financial Partners Inc., T: 727-268-0113, wayne.andrews@mcloudcorp.com; Barry Po, Chief Marketing Officer, mCloud Technologies Corp., T: 866-420-1781


A Classifier Tuned to Action Commands


One thing we have learned through our journey of building the Q Actions® Voice platform is that there are few things as unpredictable as what users will say to their devices. Queries range from noise or nonsense (utterances with no obvious intent, such as “this is really great”) to genuine queries such as “when does the next Caltrain leave for San Francisco”. We needed a way to filter the noise before passing genuine queries to Q Actions. As we thought about this further, we decided to categorize commands into the following four classes:

  • Noise or nonsense commands
  • Action Commands that Apps were best suited to answer (such as the Caltrain query above)
  • Queries that were informational in nature, such as “how tall is Tom Cruise”
  • Mathematical queries – “what is the square root of 2024”.

This classifier would enable us to route each query internally within our platform to provide the best user experience. So we set about building a 4-class classifier for Noise, App, Informational and Math. Since we have the world’s largest mobile Action library, and Action commands are our specialty, it was critical to attain as high a classification accuracy as possible for the App class so that we route as many valid user commands as possible to our proprietary Action execution engine.

We initially considered a number of different approaches when deciding on the best technology for this. These included convolutional and recurrent multilayer perceptrons (MLPs), a 3-layer MLP, and Transformer models such as BERT and ALBERT, plus one we trained ourselves so we could assess the impact of different hyperparameters (number of heads, depth, etc.). We also experimented with different ways to embed the query information within the networks, such as word embeddings (Word2vec and GloVe) and sentence embeddings such as USE and NNLM.

We created a number of data sets with which to train and test the different models. Our goal was to identify the best classifier to deploy in production, as determined by its ability to accurately classify the commands in each test set. We used existing valid user commands for our App Action training and test data sets. Question datasets were gathered from sources such as Kaggle, Quora and Stanford QA. Mathematical queries were generated using a program written in-house and from https://github.com/deepmind/mathematics_dataset. Noise data was obtained from actual noisy queries in our live traffic from powering Motorola’s Moto Voice Assistant. All this data was split into training and test sets and used to train and test each of our models. The following table shows the size of each data set.

Dataset         Training set size   Test set size
APP             1,794,616           90,598
Noise           71,201              45,778
Informational   128,180             93,900
Math            154,518             22,850

The result of our analysis was that the 3-layer MLP with USE embeddings provided the best overall classification accuracy across all four categories.

The architecture of this classifier is shown in the following schematic. It gives a posterior probabilistic classification for an input query.

Classifier Architecture

Figure 1  Overview of the model

In effect, the network consists of two components: the embedding layer followed by a 3-layer feed-forward MLP. The first layer consists of N dense units, the second of M dense units (where M < N), and the output is a softmax function, which is typically used for multi-class classification and assigns a probability to each class. As can be seen from Figure 1, the “APP” class has the highest probability and would be the model prediction for the command “Call Bill”.

The embedding layer relies on a TensorFlow Hub module, which has two advantages:

  • we don’t have to worry about text preprocessing
  • we can benefit from transfer learning (utilizing a model pre-trained on a large volume of data, often based on transformer techniques for text classification)

The hub module used is based on the Universal Sentence Encoder (USE), which gives us a rich semantic representation of queries and can also be fine-tuned for our task. USE is much more powerful than word embedding approaches, as it can embed not only words but phrases and sentences. It is trained on a variety of data sources and a variety of tasks with the aim of dynamically facilitating a wide diversity of natural language understanding tasks. The output from this embedding layer is a 512-dimensional vector.

We expect similar sentences to have similar embeddings, as shown in the following heatmap, where the more similar two sentences are, the darker the color. Similarity is based on the cosine similarity of the vectors. We demonstrate the strong similarity between two APP commands (“view my profile”, “view my Facebook profile”); two INFORMATIONAL queries (“What is Barack Obama’s age”, “How old is Barack Obama”); and two MATH queries (“calculate 2+2”, “add 2+2”).

Heatmap

Figure 2  Semantic similarity
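As a rough, hedged illustration of this idea (not Aiqudo's production code), the snippet below embeds a few example queries with a Universal Sentence Encoder module from TensorFlow Hub and computes pairwise cosine similarities like those visualized in the heatmap. The hub URL and example queries are assumptions for the sketch.

```python
# A minimal sketch of query similarity with the Universal Sentence Encoder.
# The hub URL and example queries are illustrative assumptions.
import numpy as np
import tensorflow_hub as hub

# Load a pre-trained USE module; each query is embedded as a 512-d vector.
use = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

queries = [
    "view my profile",              # APP
    "view my Facebook profile",     # APP
    "What is Barack Obama's age",   # INFORMATIONAL
    "How old is Barack Obama",      # INFORMATIONAL
    "calculate 2+2",                # MATH
    "add 2+2",                      # MATH
]

embeddings = use(queries).numpy()

# Cosine similarity between every pair of query embeddings.
unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
similarity = unit @ unit.T  # values near 1.0 indicate semantically similar queries

print(np.round(similarity, 2))
```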

The MLP’s two hidden layers consist of N=500 and M=100 units.  If a model has more hidden units (a higher-dimensional representation space), and/or more layers, then the network can learn more complex representations. However, it makes the network more computationally expensive and may lead to learning unwanted patterns—patterns that improve performance only in terms of the training data (overfitting) but degrade generalization (poorer performance on the test data). This is why it is important to ensure MLP settings are chosen based on the performance on a range of unseen test sets.
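Putting the pieces described above together, here is a minimal Keras sketch of such a classifier: a trainable USE embedding layer followed by dense layers of 500 and 100 units and a 4-way softmax. The layer sizes match the post; the hub URL, optimizer, loss, and training details are assumptions, not Aiqudo's actual configuration.

```python
# A minimal sketch of the described architecture: USE embedding + 3-layer MLP.
# Hub URL, optimizer, and loss are assumptions; layer sizes follow the post.
import tensorflow as tf
import tensorflow_hub as hub

NUM_CLASSES = 4  # APP, Noise, Informational, Math

model = tf.keras.Sequential([
    # USE embedding layer: maps a raw query string to a 512-d vector.
    hub.KerasLayer(
        "https://tfhub.dev/google/universal-sentence-encoder/4",
        input_shape=[],      # each example is a single string
        dtype=tf.string,
        trainable=True,      # allow fine-tuning for this task
    ),
    tf.keras.layers.Dense(500, activation="relu"),              # N = 500 units
    tf.keras.layers.Dense(100, activation="relu"),              # M = 100 units
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),   # class probabilities
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# After training, predicting on a single command yields a posterior over classes,
# e.g. the highest probability on APP for "Call Bill".
probs = model.predict(tf.constant(["Call Bill"]))
print(probs)
```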

In terms of overall performance, our model gives us an accuracy of 98.8% for APP, 86.9% for Informational, 83.5% for Mathematical and 52.3% for Noise. From this it can be seen that we achieved our goal of correctly classifying almost all App Action commands. Informational and Mathematical commands were also classified with a high degree of accuracy, while Noise was the worst-performing class. The reason Noise was the poorest is that Noise is very difficult to define: it can range from grammatically correct sentences with no relevance to the other three categories (such as “the weather is hot today”) to complete random nonsense, which makes it very hard to build a good training set in advance. We are still working on this aspect of our classifier and plan to improve its performance on this category through improved training data.

Niall Rooney and David Patterson

Q Actions 1.6.2 just released to App Store!


New Q Actions version now in the App Store

This version of Q Actions features contextual downstream actions, integration with your calendar, as well as under-the-bonnet improvements to our matching engines. Q Actions helps users power through their day by being more useful and thoughtful.

Contextual Awareness

Q Actions understands the context when performing your actions. Let’s say you call a contact in your phonebook with the command “call Tiffany”. You can then follow up with the command “navigate to her house”. Q Actions is aware of the context based on your previous command and is able to use that information in a downstream action, as illustrated after the example below.


  • say “call Tiffany”
    • then “navigate to her house”
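To make the idea concrete, here is a toy, heavily simplified sketch of carrying context from one command into a downstream action. This is illustrative only and not Aiqudo's actual implementation; all names and the pronoun handling are hypothetical.

```python
# A toy sketch (not Aiqudo's implementation) of downstream-action context.
from dataclasses import dataclass
from typing import Optional


@dataclass
class SessionContext:
    last_contact: Optional[str] = None  # most recently referenced contact


PRONOUNS = {"her", "him", "them"}


def handle_command(command: str, context: SessionContext) -> str:
    words = command.lower().split()
    if words[0] == "call":
        contact = " ".join(words[1:])
        context.last_contact = contact          # remember the contact for follow-ups
        return f"Calling {contact}"
    if words[:2] == ["navigate", "to"]:
        # Resolve a pronoun ("her house") to the contact from the previous command.
        if context.last_contact and words[2] in PRONOUNS:
            place = " ".join(words[3:]) or "location"
            return f"Navigating to {context.last_contact}'s {place}"
        return f"Navigating to {' '.join(words[2:])}"
    return "Sorry, I didn't understand that"


ctx = SessionContext()
print(handle_command("call Tiffany", ctx))           # Calling tiffany
print(handle_command("navigate to her house", ctx))  # Navigating to tiffany's house
```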

Calendar Integration


Stay on top of your schedule and daily events with the recently added Calendar actions. Need to see what’s coming up next? Just ask “when is my next meeting?” and Q Actions will return a card with all the important event information. Need to quickly schedule something on your calendar? Say “create a new event” and after a few questions, your event is booked. On the go and need to join a video conferencing meeting? Simply say “join my next meeting” and Q Actions will take you directly to your meeting in Google Meet. All you have to do from there is confirm your camera/audio settings and join!

  • “when is my next meeting?”
  • “create a new event”
  • “join my next meeting”

Simply do more with voice! Q Actions is now available on the App Store.


What can you do with that Thing?


Often, when you have something to do, you start by searching for information about a particular Thing. Sometimes, you know exactly what that Thing is, but often, you find the Thing by using information related to it. 

“Who is Taylor Swift?” → Taylor Swift

“Who directed Avatar?” → James Cameron

The “Thing” is what we call a Knowledge Entity and something that you can do with that Thing is what we call a Downstream Action. The bond between that Knowledge Entity and the Downstream Action is what we refer to as Actionable Knowledge.

Actionable Knowledge

How do we do this? Our Knowledge database holds information about all kinds of entities such as movies, TV series, athletes, corporations etc. These Entities have rich semantic structure; we have detailed information about the different attributes of these Entities along with the Actions one can perform on those entities. An Action may be generic (watch a show), but can also be explicitly connected to a mobile app or service (watch the show on Disney+). This knowledge allows the user to follow up on an Entity command with an Action. 

For example, asking a question such as “How tall is Tom Brady?” allows you to get his height, i.e., 6’ 4” or 1.93 metres (based on the Locale of the person asking), since Knowledge captures these important attributes about Tom Brady. Note that these attributes are different for different types of Entities. That is determined by the Schema of the Entity, which allows validation, normalization and transformation of data.

A command like “Who is Tom Brady?” returns a Q Card with information about Tom Brady, as shown below. As there may be multiple entities referring to “Tom Brady”, a popularity measure is computed so that the correct Tom Brady is returned, based on popularity, context and your current session. Popularity is a special attribute that is computed from multiple attributes of the entity. An Entity Card surfaces the various attributes associated with the Entity, such as when Tom Brady was born, how tall and heavy he is, and what sport he plays. There are also attributes that define potential Actions that can follow, so “go to his Instagram” will instantly take you to Tom Brady’s account in the Instagram app.

Q Card for Tom Brady
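As a hedged illustration of the idea, the sketch below shows what a Knowledge Entity with typed attributes and downstream Actions might look like. The structure, field names, and values are hypothetical and do not reflect Aiqudo's actual schema.

```python
# A hypothetical sketch of a Knowledge Entity with attributes and downstream actions.
tom_brady = {
    "type": "Athlete",
    "name": "Tom Brady",
    "attributes": {
        "height_m": 1.93,               # normalized per the Schema; rendered per Locale
        "sport": "American football",
        "instagramHandle": "tombrady",  # illustrative value
    },
    "popularity": 0.97,                 # computed from multiple attributes
    "downstream_actions": {
        # follow-up command -> a concrete action in a specific app
        "go to his Instagram": {
            "app": "Instagram",
            "action": "open_profile",
            "param": "instagramHandle",
        },
    },
}


def execute_downstream(entity: dict, command: str) -> str:
    """Resolve a follow-up command against the entity's downstream actions."""
    action = entity["downstream_actions"].get(command)
    if action is None:
        return "No matching downstream action"
    value = entity["attributes"][action["param"]]
    return f"Launching {action['app']}: {action['action']}({value})"


print(execute_downstream(tom_brady, "go to his Instagram"))
# -> Launching Instagram: open_profile(tombrady)
```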

Actions are about getting things done! Here’s another example of being able to instantly go from information to Action using Actionable Knowledge. Asking “Who is Tom Petty?” followed by the command “listen to him on Spotify” will start playing his music. This is a powerful feature that provides a great user experience and rapid Time to Action®.

Q Card for Tom Petty

The three pillars of Aiqudo’s Q Actions Platform allow us to implement downstream Actions:

  1. Semantically rich Entities in Actionable Knowledge
  2. AI-based Search
  3. Powerful Action execution engine for mobile apps and cloud services

AI Search

We are not limited to just the name of the entity. Our AI-based search allows you to find entities using various attributes of the entity. For example, you can search for stock information by saying “How is Tesla stock doing today?” or “Show me TSLA stock price”. Aiqudo understands both the corporation name and the stock ticker when it needs to find information on a company’s stock price. Some apps, like Yahoo Finance, can only understand the stock ticker; they may not be built to accept the name of the company as an input. Our platform allows us to fill this gap by decoupling action execution from search intent detection. A middle-tier federation module acts as a bridge between intent extraction and Action execution by mapping the right attributes of the Entity returned by the search to those required by the Action execution engine. In the above example it extracts the stockTicker attribute (TSLA) from the corporation entity retrieved by the search (Tesla) and feeds it to the Action engine.
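Below is a hedged, minimal sketch of that federation idea: take the Entity returned by search, pull out the attribute the action needs (here, stockTicker), and hand it to the Action engine under the parameter name the app expects. Function and field names are hypothetical, not Aiqudo's actual API.

```python
# A minimal sketch of the middle-tier federation idea; names are hypothetical.
def federate(entity: dict, required_params: dict) -> dict:
    """Map entity attributes onto the parameters an action requires."""
    return {param: entity[attr] for param, attr in required_params.items()}


# Entity returned by AI-based search for "How is Tesla stock doing today?"
tesla = {"type": "Corporation", "name": "Tesla", "stockTicker": "TSLA"}

# The Yahoo Finance stock-quote action only understands a ticker symbol.
stock_quote_action = {
    "app": "Yahoo Finance",
    "required_params": {"symbol": "stockTicker"},
}

params = federate(tesla, stock_quote_action["required_params"])
print(params)  # {'symbol': 'TSLA'} -- handed to the Action engine for execution
```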

Q Card for Tesla Stock

Voila! Job done!

So, what can you do with that Thing? Well, you can instantly perform a meaningful Action on it using the apps on your mobile phone. In the example above, you can jump to Yahoo News to get the latest finance news about Tesla, or go to the stock quote screen within E*Trade, the app you use and trust, to buy Tesla shares and make some money!


Accessibility plus utility plus convenience!


It’s great to see various platforms announce specific accessibility features on this Global Accessibility Awareness Day.

A feature that caught our attention today was Google’s Assistant-powered Action Blocks.

It’s a new app that allows users to create simple shortcuts to Actions they commonly perform. They are powered by Google Assistant, but allow for invocation through a tap.

My Actions and Favorites

We built this functionality into Aiqudo’s Q Actions when we launched it in 2017. Our approach is different in several ways:

  • The user does not need to do any work – Q Actions does it automatically for the user
  • Q Actions builds these dynamically – your most recently used Actions, and your favorite ones are automatically tracked for you – you just need to say “show my actions”
  • These handy Action shortcuts are available to you with one swipe to the right in the Q Actions app. One tap to invoke your favorite action. 
  • There’s no new app, just for accessibility – it’s built in to your Assistant interface for convenience – you just need to say “Hello Q, show my Actions”
  • There are hundreds of unique high-utility Actions you can perform that are not available in any other platform, including Google Assistant. Here are a few examples:
    • “whose birthday is it today?” (Facebook)
    • “show my orders” (Amazon, Walmart)
    • “start meditating” (Headspace)
    • “watch the Mandalorian” (Disney+)
    • “watch Fierce Queens” (Quibi)
    • “show my tasks” (Microsoft To Do, Google Tasks)
    • “show my account balances” (Etrade)
    • “join my meeting with my camera off” (Google Hangouts, Zoom)
    • “call Mark” (Whatsapp, Messenger, Teams, Slack,…)
    • “send money to John” (PayPal)
    • … and on and on and on

It’s just easier, better and more powerful! 

And available to everyone!


Bark to Action


Mike and his Corgi

“It’s a dog’s life!” – Aiqudo work from home #4

Every Friday afternoon, during this lockdown, the Silicon Valley Team gets together for “Social Hour”. We can only assume that the Belfast team is more focused on Irish Whiskey and Guinness on Friday evenings!

Online meetings are not a new thing for Aiqudo. We’ve been doing this twice a week for our entire global team since Belfast came on board in mid-2017!! We’re experts at effective virtual meetings.

And … unlike other startups that just place cute photos of their dogs on the Team pages of their websites, our dogs actually participate and provide their opinions during our meetings. Their barks have bite!!

Happy Friday!
#BarkToAction


Aiqudo + Banking = Voice To Transaction


We all remember playing the game Monopoly as kids, right? Well, I recently stumbled upon a version of the game that uses voice commands to control the game flow and act as the “bank” – a role most of us avoided so we wouldn’t have to deal with all the annoying transactions such as selling properties and buildings, collecting taxes, exchanging currency, and paying out people as they passed “GO”. Admittedly, it’s a novel use for voice commands in a classic game. But what about using voice to manage apps controlling our real money?

Voice to power more than games

Back in June of last year, we wrote a blog post about the power of voice to perform activities in mobile banking apps. The post specifically referenced Bank of America’s Erica virtual voice assistant as a tool to help users accomplish common, often time-consuming banking activities without the need to memorize complex menus or, worse, speak to the dreaded online customer service representative. The net result: a simple, pleasant user experience that builds brand loyalty and customer retention.

Enter Aiqudo Voice to Action®

Well, that got me thinking. I’ve been an E*Trade banking customer for years and all this time I’ve never really used voice to make payments, transfer money or check balances. 

Nonetheless, I decided to see if I could recreate and hopefully improve upon my previous experience – this time using our very own Q Actions app.  The following video highlights some of my efforts. 

But can I trust this new way of banking?

Yes. You may have noticed in the video that I am not providing credentials to access my account in E*Trade. That’s because I’m already authenticated. Prior to shooting the video, I had provided credentials, by way of a fingerprint biometric, as part of the very first action execution. Note that Aiqudo did not manage this process; it was handled completely by the mobile app. Because of this, the data used to hold the credential lives entirely in the app itself and is neither passed to nor processed by Aiqudo systems at all. This separation of duties maintains the privacy of user data and hence increases trust in using the technology.

A personalized experience

Personalization is a word typically used to describe how an app or other system adjusts to provide an experience tailored specifically to a user. It’s often used in conjunction with AI and machine learning systems as the end result of acquiring, processing and suggesting courses of action or data upon which to act. We enable personalization in the previous actions in a couple of ways. If you have, say, more than one voice-enabled banking app on your mobile device (similar to what I have), our system can be configured to remember the user’s preferred app action. For instance, if I were to say the command “check my balances”, Aiqudo suggests actions from both E*Trade AND Wells Fargo. If I choose the E*Trade action, the next time I say the command it will remember E*Trade and perform the action right away – no need to ask again. Likewise, whenever the action requires the user to provide input such as an account number or payee, the system can store these away for subsequent use. These are simple examples, but they add a nice touch to an already-useful integration.
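A simplified, hypothetical sketch of that preference memory is shown below: remember which app the user chose for a command, and skip the question the next time. This is illustrative only, not Aiqudo's actual implementation.

```python
# A hypothetical sketch of remembering a preferred app per command.
class ActionPreferences:
    def __init__(self):
        self._preferred = {}  # command -> app chosen by the user last time

    def resolve(self, command, candidate_apps):
        """Return the remembered app, or the full candidate list if undecided."""
        app = self._preferred.get(command)
        if app in candidate_apps:
            return app           # execute right away, no need to ask again
        return candidate_apps    # present the choices to the user

    def remember(self, command, chosen_app):
        self._preferred[command] = chosen_app


prefs = ActionPreferences()
print(prefs.resolve("check my balances", ["E*Trade", "Wells Fargo"]))  # both offered
prefs.remember("check my balances", "E*Trade")                         # user picks E*Trade
print(prefs.resolve("check my balances", ["E*Trade", "Wells Fargo"]))  # E*Trade right away
```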

What if I don’t bank with E*Trade? What can I do with other apps and is it safe?

Aiqudo maintains similar actions for apps like Venmo and Paypal that allow “send money to <username>” type actions. In each of these cases, Aiqudo defers authentication to the app before completing the transaction and also ensures that the data used by the action in the app, e.g., the payee’s phone number or email address never leaves the device or the app. The following video illustrates this.

With the proper integration of our ActionKit SDK into a banking app such as E*Trade, the end user reaps the benefits of a trusted, highly useful voice-powered interface that enables complex and often multi-step operations with ease and reduces Time to Action® for many activities within the app.


Q Actions 2.4: “Under the Hood” improvements for Productivity and Utility


Q Actions 2.4 now available on Google Play

The recent release of Q Actions 2.4 emphasizes Aiqudo’s focus on productivity and utility through voice. As voice assistants are becoming an increasingly ubiquitous part of our daily lives, Aiqudo aims to empower users to get things done. Many of the improvements and enhancements are “under the hood” – we’ve increased personalization and expanded the knowledge that drives our Actions.

Actionable Knowledge™

Our content-rich Q Cards leverage Actionable Knowledge to extend functionality into popular 3rd-party apps. Start by asking about an artist, music group, athlete, or celebrity: “who is Tom Hanks?” Aiqudo’s Q Card not only presents information about the actor, but will also ask “what next?”. Say “view his Twitter account” or “go to his Instagram”, and Actionable Knowledge will drop you exactly where you want to go!

Sample Actionable Knowledge Flow:

  • Ask “who is Taylor Swift?”
  • Select one of the supported Actionable Knowledge apps
    • “listen to her on Spotify”
    • “go to her Facebook profile”
    • “check out her Instagram”

Personalization … with privacy

Q Actions is already personalized, showing you Action choices based on the apps you already trust. We can now leverage personal data as signals to personalize your experience, while still protecting your privacy. It’s another iteration of our continued focus and dedication to increase productivity and augment utility using voice.  For example, if you checked in to your United Airlines flight, and then, the following day, say “show my boarding pass”, the United Airlines action is promoted to the top – exactly what you’d expect the system to do for you.

Our new Personal Data Manager allows secure optimization for specific apps. If you have a Spotify playlist called “Beach Vibes”, and you say “play Beach Vibes”, we understand what you want and will promote your personal playlist over a random public channel by that name. Your playlists are not shipped off the device to our servers, but we can still use the relevant information to short-cut your day! If “Casimo Caputo” is a friend in Facebook Messenger, Messenger will trump WhatsApp for “tell Casimo Caputo let’s meet for lunch”. But “message Mark Smith let’s play FIFA tonight” brings up WhatsApp, since Mark Smith is your WhatsApp buddy.
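As a hedged illustration of how on-device personal data could promote one app's action over another without leaving the device, consider the toy ranking sketch below. The data structures and scoring are hypothetical, not Aiqudo's actual Personal Data Manager.

```python
# A hypothetical sketch of ranking candidate actions with local-only personal data.
def rank_actions(command_target, candidates, personal_data):
    """Promote actions whose app holds a local record matching the target."""
    def score(candidate):
        local_records = personal_data.get(candidate["app"], set())
        return 1 if command_target in local_records else 0
    return sorted(candidates, key=score, reverse=True)


# Personal data kept on the device (never shipped to servers in this sketch).
personal_data = {
    "Facebook Messenger": {"Casimo Caputo"},
    "WhatsApp": {"Mark Smith"},
}

candidates = [
    {"app": "WhatsApp", "action": "send_message"},
    {"app": "Facebook Messenger", "action": "send_message"},
]

# "tell Casimo Caputo let's meet for lunch" -> Messenger is promoted to the top.
print(rank_actions("Casimo Caputo", candidates, personal_data)[0]["app"])
# "message Mark Smith let's play FIFA tonight" -> WhatsApp comes up first.
print(rank_actions("Mark Smith", candidates, personal_data)[0]["app"])
```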

Simply do more with voice! Q Actions is now available on Google Play.


Aiqudo and Sensory Collaborate on Holistic Voice Solutions


Will develop solutions that simplify the process of integrating voice assistants into a variety of devices  

BUSINESSWIRE, LAS VEGAS, January 6, 2020

Aiqudo, a leading voice technology pioneer, today announced that it is collaborating with embedded voice and vision AI leader Sensory to bring to market comprehensive voice solutions that serve as white-label alternatives for voice services and assistants. The two companies are working on solutions targeting automotive, mobile app, smart home and wearable device applications.

Currently, companies and brands must piece together different technologies to fully implement a voice solution. With their technologies combined, Aiqudo and Sensory will deliver a fully integrated end-to-end solution that combines Sensory’s wake word, voice biometrics and natural language recognition technologies with Aiqudo’s multilingual intent understanding and action execution (Q Actions®) to provide the complete Voice to Action® experience consumers expect.

“Voice adoption continues to grow rapidly, and brands are always exploring ways to streamline the process of integrating a convenient voice UX into their products,” said Todd Mozer, Sensory’s CEO. “Working with Aiqudo allows our two companies to provide the industry a turn-key solution for integrating powerful voice assistants into their products that feature brand-specific wake words and are capable of recognizing who is speaking.”


With both Aiqudo and Sensory positioned as leaders in their respective fields, this collaboration is a natural fit, as their technologies are highly complementary. The initial integration is focused on the automotive vertical and will be showcased at the 2020 Consumer Electronics Show.

Aiqudo’s Auto Mode highlights a highly personalized user experience in the car using the Q Actions® platform. Enhanced with Sensory’s wake word (TrulyHandsfree™) and voice ID (TrulySecure™) functionality, multiple users can seamlessly access their personal mobile devices just by using their voice to execute personal actions. “Users just need to enter the cabin with their smartphones,” said Rajat Mukherjee, Aiqudo CTO. “There’s no registration required, and the personalized wake word and voice biometrics allow users to instantly access their personal apps seamlessly and securely.”

“Brands increasingly want to create their own branded voice experiences for their customers,” said John Foster, CEO of Aiqudo. “Working with Sensory, we have created the easiest and fastest way for brands to bring the power and convenience of voice to their customers. We are excited to integrate our areas of practice and expertise to deliver a comprehensive solution.”  

To view the demo onsite while at CES, please email Aiqudo at CES@aiqudo.com. 

BUSINESSWIRE: Aiqudo and Sensory Collaborate on Holistic Voice Solutions

About Sensory

For over 25 years, Sensory has pioneered and developed groundbreaking applications for machine learning and embedded AI – turning those applications into household technologies. Pioneering the concept of always-listening speech recognition more than a decade ago, Sensory’s flexible wake word, small to large vocabulary speech recognition, and natural language understanding technologies are fueling today’s voice revolution. Additionally, its biometric recognition technologies are making everything from unlocking a device to authenticating users for digital transactions faster, safer and more convenient. Sensory’s technologies are widely deployed in numerous markets, including automotive, home appliances, home entertainment, IoT, mobile phones, wearables and more, and have shipped in over two billion units of leading consumer products and software applications.

For more information about this announcement, Sensory or its technologies, visit https://www.sensory.com/, contact sales@sensory.com or for press inquiries contact press@sensory.com.

About Aiqudo

Aiqudo (pronounced: “eye-cue-doe”) is a Voice AI pioneer that connects the nascent world of voice interfaces to the useful, mature world of mobile apps and cloud services through its Voice to Action® platform. It lets people use natural voice commands to execute actions in mobile apps and cloud services across devices. Aiqudo’s SaaS platform uses machine learning (AI) to understand natural-language voice commands and then triggers instant actions via mobile apps, cloud services, or device actions, enabling consumers to get things done quickly and easily.

Aiqudo’s proprietary technology is covered by more than 30 granted patents and patent applications. Aiqudo’s technology is delivered in a scalable approach to creating voice-enabled actions without mandating APIs or developer dependencies.

To see Aiqudo in action, visit Aiqudo’s YouTube channel (youtube.com/aiqudo)