
Voice for the Connected Worker

Natural Voice Recognition for Safety and Productivity in Industrial IoT

Voice for AssetCare

Jim Christian, Chief Technology and Product Officer, mCloud

It is estimated that there are 20 million field technicians operating worldwide. A sizable percentage of those technicians can’t always get to the information they need to do their jobs.

Why is that? After all, we train technicians, provide modern mobile devices and specialized apps, and send them to the field with stacks of information and modern communications. Smartphone sales in the United States grew from $3.8 billion in 2005 to nearly $80 billion in 2020. So why isn’t that enough?

One problem is that tools that work fine in offices don’t necessarily work for field workers. A 25-year-old mobile app designer forgets that a 55-year-old field worker cannot read small text on a small screen, or see it through the glare of natural light. In industrial and outdoor settings, field workers frequently wear gloves and other protective gear. Consider a technician who needs to enter data on a mobile device while outside in freezing weather. He could easily choose to wait to enter the data until he’s back in his truck and can take off his gloves, and as a result not enter it exactly right. Or a technician who must keep gloves on may simply find it difficult to type on a mobile device at all.

A voice-based interface can be a great help in these situations. Wearable devices that respond to voice are becoming more common. For instance, RealWear makes a headset that is designed to be worn with a hardhat, and one model is intrinsically safe and can be used in hazardous areas. But voice interfaces have not become popular in industrial settings. Why is that?

We could look to the OODA loop (Observe, Orient, Decide, Act) for insights. The OODA concept was developed by the U.S. Air Force as a mental model for fighter pilots, who need to act quickly. Understanding the OODA loop that applies in a particular situation helps a person improve: to act more quickly and decisively. Field technicians rarely face life-and-death situations, but the OODA loop still applies: the speed and accuracy of their work depends on the OODA loop for the task at hand.

Consider two technicians who observe an unexpected situation, perhaps a failed asset. John orients himself by taking off his gloves to call his office, then searching for drawings in his company’s document management system, and then calling his office again to confirm his diagnosis. Meanwhile, Jane orients herself by doing the same search, but by talking instead of typing, keeping her eyes on the asset the whole time. Assuming the voice system is robust, Jane can use her eyes and her voice at the same time, accelerating her Observe and Orient phases. Jane will do a faster, better job. A system that makes the Observe and Orient phases difficult, as in John’s experience, will be rejected by users, whereas Jane’s experience with a short, easy OODA loop will be accepted.

A downside of speaking to a device is that traditional voice recognition systems can be painfully slow and limited. These systems recognize the same commands that a user would type or click with a mouse, but most people type and click much faster than they talk. Consider the sequence of actions required to take a picture and send it to someone on a smartphone using your fingers: open the photo app, take a picture, close the photo app, open the photo gallery app, select the picture, select the option to share that picture, select a recipient, type a note, and hit send. That could be nine or ten distinct operations. Many people can do this rapidly with their fingers, even though it is a lot of steps. Executing that same sequence with an old-style, separate voice command for each step would be slow and painful, and most people would find it worse than useless.

The solution is natural voice recognition, where the voice system recognizes what the speaker intends and understands what “call Dave” means. Humans naturally understand that a phrase such as “call Dave” is shorthand for a long sequence (“pick up the phone”, “open the contact list”, “search for ‘Dave’”, and so on). Natural voice recognition has come a long way in recent years, and systems like Siri and Alexa have become familiar for personal use. Field workers often have their own shorthand for their industry, like “drop the transmission” or “flush the drum”, which their peers understand but Siri and Alexa do not.
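To make the idea concrete, here is a minimal Kotlin sketch of a short phrase expanding into the longer sequence of operations it stands for. It is purely illustrative; the intent names and steps are assumptions made for this example, not Aiqudo’s implementation.

```kotlin
// Illustrative only: a short spoken phrase stands in for a longer sequence of
// device operations that a person would otherwise perform by hand.
data class VoiceIntent(val name: String, val steps: List<String>)

val knownIntents = listOf(
    VoiceIntent("call_contact", listOf("open dialer", "look up contact", "dial primary number")),
    VoiceIntent("share_photo", listOf("open camera", "capture photo", "open share sheet",
                                      "pick recipient", "attach note", "send"))
)

// Map an utterance to an intent and return the expanded step sequence.
fun expand(utterance: String): List<String> = when {
    utterance.startsWith("call ") -> knownIntents.first { it.name == "call_contact" }.steps
    utterance.startsWith("take a picture") -> knownIntents.first { it.name == "share_photo" }.steps
    else -> emptyList()
}

fun main() {
    println(expand("call Dave"))  // one short phrase, many implied steps
}
```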

At mCloud, we see great potential in applying natural voice recognition to field work in industries such as oil & gas. Consider a field operator who is given a wearable device with a camera and voice control, and who is able to say things like, “take a picture and send it to John” or “take a picture, add a note ‘new corrosion under insulation at the north pipe rack’ and send to Jane” or “give me a piping diagram of the north pipe rack.”  This worker will have no trouble accessing useful information, and in using that information to orient himself to make good decisions. An informed field operator will get work done faster, with less trouble, and greater accuracy.
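As a rough illustration of the parameter extraction such a compound command implies, the sketch below pulls the note text and the recipient out of one spoken request. The slot names and regular expressions are assumptions made for this example, not Aiqudo’s grammar.

```kotlin
// Illustrative slot extraction for a compound field command such as:
//   "take a picture, add a note 'new corrosion under insulation at the north pipe rack' and send to Jane"
data class PhotoShareCommand(val note: String?, val recipient: String?)

fun parsePhotoShare(command: String): PhotoShareCommand {
    val note = Regex("add a note '([^']+)'").find(command)?.groupValues?.get(1)
    val recipient = Regex("send (?:it )?to (\\w+)").find(command)?.groupValues?.get(1)
    return PhotoShareCommand(note, recipient)
}

fun main() {
    val cmd = "take a picture, add a note 'new corrosion under insulation at the north pipe rack' and send to Jane"
    println(parsePhotoShare(cmd))  // note and recipient pulled from one spoken request
}
```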

The U.S. Chemical Safety Board analyzes major safety incidents at oil & gas and chemical facilities. A fair number of incidents have a contributing factor of field workers not knowing something or not having the right information. For instance, an isobutane release at a Louisiana refinery in 2016 occurred in part because field operators used the wrong procedure to remove the gearbox on a plug valve. There was a standard procedure, but about 3% of the plug valves in the refinery were an older design that required different steps to remove the gearbox. In this case the field workers were wearing protective gear and followed a procedure that was correct for a different type of valve but wrong for the valve in front of them. Field workers like these generally have written procedures, but occasionally the work planner misses something or reality in the field is different from what was expected. This means that field workers need to adapt, perhaps by calling for help or looking up information such as alternate procedures.

Examples where natural voice recognition can help include finding information, calling other people for advice, recording measurements and observations, inspecting assets, stepping through repair procedures, describing the state of an asset along with recommendations and questions, writing a report about the work done, and working with other people to accomplish tasks. Some of these examples are ad hoc tasks, like taking a picture or deciding to call someone. Other examples are part of larger, structured jobs. An isolation procedure in a chemical plant or replacing a transmission are examples of complex procedures with multiple steps that can require specialized information or where unexpected results from one step may require the field worker to get re-oriented, find new information, or get help.

Aiqudo has powerful technology for natural voice recognition, and mCloud is pleased to be working with Aiqudo to apply it. Working together, we can help field workers get what they need simply by asking for it, reach the right people simply by asking for help, confirm their status in a natural way, and, in general, get the right job done effectively and without mistakes.


This post is authored by Jim Christian, Chief Technology and Product Officer, mCloud.

Aiqudo and mCloud recently announced a strategic partnership that brings natural Voice technology into emerging devices, such as smart glasses, to support AR/VR and connected worker applications, and also into new domains such as industrial IoT, remote support and healthcare.

Covid Information

QTime: What I Learned as an Aiqudo Intern


Mithil Chakraborty

Intern Voice: Mithil Chakraborty

Hi! My name is Mithil Chakraborty and I’m currently a senior at Saratoga High School. During the summer of 2020, I had the privilege of interning at Aiqudo for six weeks as a Product Operations intern. Although I had previously coded in Java, HTML/JavaScript, and Python, this was still my first internship at a company. Coming in, I was excited but a bit uncertain, thinking that I would not be able to fully understand the core technology (the Q System) or how the app’s actions are created. But even amidst the COVID-19 pandemic, I learned a tremendous amount, not only about onboarding and debugging actions, but also about how startups work; the drive of each of the employees was admirable and really stood out to me. As the internship progressed, I felt like a part of the team. Phillip, Mark, and Steven did a great job making me feel welcome and explaining the Q Tools program, the Q App, and onboarding procedures.

As I played around with the app, I realized how cool its capabilities were. During the iOS stage of my internship, I verified and debugged numerous iOS Q App actions and contributed to the latest release of the iOS Q Actions app. From there, I researched new actions to onboard for Android, focusing on relevant information and new apps. As a result, I proposed actions that would display COVID-19 information in Facebook and open Messenger Rooms. Through this process, I also learned how to implement Voice Talkback for the Facebook COVID-19 info action, using Android Device Monitor and Q Tools. The new actions I onboarded included:

  • “show me coronavirus info” >> talks back the first 3 headlines in the COVID-19 Info Center pane on Facebook
  • “open messenger rooms” >> creates and opens a Messenger Room

Covid Information

Users don’t have to say an exact phrase in order for the app to execute the correct action; the smart AI-based intent matching system will only run the relevant actions from Facebook or Messenger based on the user’s query.  The user does not even have to mention the app by name – the system picks the right app automatically.
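A toy sketch of this idea follows: candidate actions are ranked against the query so the user never has to recite an exact phrase or name the app. The scoring here is deliberately simplistic and is not how the Q System actually matches intents.

```kotlin
// Toy ranking: score each registered action by word overlap with the query, so the
// user never has to say an exact phrase or name the app. Deliberately simplistic.
data class AppAction(val app: String, val label: String, val phrases: List<String>)

val registeredActions = listOf(
    AppAction("Facebook", "COVID-19 Info Center", listOf("show me coronavirus info", "covid news")),
    AppAction("Messenger", "Create a room", listOf("open messenger rooms", "start a video room"))
)

fun bestMatch(query: String): AppAction? {
    val queryWords = query.lowercase().split(" ").toSet()
    return registeredActions
        .maxByOrNull { action ->
            action.phrases.maxOf { phrase -> phrase.split(" ").toSet().intersect(queryWords).size }
        }
        ?.takeIf { action ->
            action.phrases.any { phrase -> phrase.split(" ").toSet().intersect(queryWords).isNotEmpty() }
        }
}

fun main() {
    println(bestMatch("coronavirus headlines please")?.app)  // Facebook, without naming the app
}
```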

When these actions were finally implemented, it felt rewarding to see my work easily accessible on smartphones; I told my friends and family about the amazing Q Actions app so they could see it for themselves. Throughout my Aiqudo internship, the team was incredibly easy to talk to and always encouraged questions. The experience showed me the real-life applications of software engineering and AI, which I hadn’t been exposed to before, and the importance of collaboration and perseverance, especially when I was debugging pesky actions for iOS. This opportunity taught me, in a hands-on way, the business and technical skills needed for a startup like Aiqudo to be nimble and successful, which I greatly appreciated. Overall, my time at Aiqudo was incredibly memorable and I hope to be back soon.

Thank you Phillip, Mark, Steven, Rajat and the rest of the Aiqudo team for giving me this valuable experience this summer! 

AssetCare

mCloud Brings Natural Language Processing to Connected Workers through Partnership with Aiqudo


CANADA NEWSWIRE, VANCOUVER, OCTOBER 1, 2020

mCloud Technologies Corp. (TSX-V: MCLD) (OTCQB: MCLDF) (“mCloud”   or the “Company”), a leading provider of asset management solutions combining IoT, cloud computing, and artificial intelligence (“AI”), today announced it has entered into a strategic partnership with Aiqudo Inc. (“Aiqudo”), leveraging Aiqudo’s Q Actions® Voice AI platform and Action Kit SDK to bring new voice-enabled interactions to the Company’s AssetCare™️ solutions for Connected Workers.

By combining AssetCare with Aiqudo’s powerful Voice to Action® platform, mobile field workers will be able to interact with AssetCare solutions through a custom digital assistant using natural language.


In the field, industrial asset operators and field technicians will be able to communicate with experts, find documentation, and pull up relevant asset data instantly and effortlessly. This will expedite the completion of asset inspections and operator rounds – an industry first using hands-free, simple, and intuitive natural commands via head-mounted smart glasses. Professionals will be able to call up information on demand with a single natural language request, eliminating the need to search using complex queries or special commands.

Here’s a demonstration of mCloud’s AssetCare capabilities on smart glasses with Aiqudo.

“mCloud’s partnership with Aiqudo provides AssetCare with a distinct competitive edge as we deliver AssetCare to our oil and gas, nuclear, wind, and healthcare customers all around the world,” said Dr. Barry Po, mCloud’s President, Connected Solutions and Chief Marketing Officer. “Connected workers will benefit from reduced training time, ease of use, and support for multiple languages.”

“We are excited to power mCloud solutions with our Voice to Action platform, making it easier for connected workers using AssetCare to get things done safely and quickly,” said Dr. Rajat Mukherjee, Aiqudo’s Co-Founder and CTO. “Our flexible NLU and powerful Action Engine are perfect for creating custom voice experiences for applications on smart glasses and smartphones.”

Aiqudo technology will join the growing set of advanced capabilities mCloud is now delivering by way of its recent acquisition of kanepi Group Pty Ltd. (“kanepi”). The Company announced on September 22 it expected to roll out new Connected Worker capabilities to 1,000 workers in China by the end of the year, targeting over 20,000 in 2021.

BUSINESSWIRE:  mCloud Brings Natural Language Processing to Connected Workers through Partnership with Aiqudo

Official website: www.mcloudcorp.com  Further Information: mCloud Press 

About mCloud Technologies Corp.

mCloud is creating a more efficient future with the use of AI and analytics, curbing energy waste, maximizing energy production, and getting the most out of critical energy infrastructure. Through mCloud’s AI-powered AssetCare™ platform, mCloud offers complete asset management solutions in five distinct segments: commercial buildings, renewable energy, healthcare, heavy industry, and connected workers. IoT sensors bring data from connected assets into the cloud, where AI and analytics are applied to maximize their performance.

Headquartered in Vancouver, Canada with offices in twelve locations worldwide, the mCloud family includes an ecosystem of operating subsidiaries that deliver high-performance IoT, AI, 3D, and mobile capabilities to customers, all integrated into AssetCare. With over 100 blue-chip customers and more than 51,000 assets connected in thousands of locations worldwide, mCloud is changing the way energy assets are managed.

mCloud’s common shares trade on the TSX Venture Exchange under the symbol MCLD and on the OTCQB under the symbol MCLDF. mCloud’s convertible debentures trade on the TSX Venture Exchange under the symbol MCLD.DB. For more information, visit www.mcloudcorp.com.

About Aiqudo

Aiqudo’s Voice to Action® platform voice-enables applications across multiple hardware environments including mobile phones, IoT and connected home devices, automobiles, and hands-free augmented reality devices. Aiqudo’s Voice AI comprises a unique natural language command understanding engine, the largest Action Index and action execution platform available, and the company’s Voice Graph analytics platform to drive personalization based on behavioral insights. Aiqudo powers customizable white-label voice assistants that give our partners control of their voice brand and enable them to define their users’ voice experience. Aiqudo currently powers the Moto Voice digital assistant experience on Motorola smartphones in 7 languages across 12 markets in North and South America, Europe, India and Russia. Aiqudo is based in Campbell, CA with offices in Belfast, Northern Ireland.

SOURCE mCloud Technologies Corp.

For further information:

Wayne Andrews, RCA Financial Partners Inc., T: 727-268-0113, wayne.andrews@mcloudcorp.com; Barry Po, Chief Marketing Officer, mCloud Technologies Corp., T: 866-420-1781

Monopoly top hat

Aiqudo + Banking = Voice To Transaction


We all remember playing the game Monopoly as kids, right? Well, I recently stumbled upon a version of the game that uses voice commands to control the game flow and act as the “bank” – a role most of us avoided so we wouldn’t have to deal with all the annoying transactions such as selling properties and buildings, collecting taxes, exchanging currency, and paying out people as they passed “GO”. Admittedly, it’s a novel use for voice commands in a classic game. But what about using voice to manage apps controlling our real money?

Voice to power more than games

Back in June of last year, we wrote a blog post that talked about the power of voice to perform activities in mobile banking apps. The post specifically referenced Bank of America’s Erica virtual voice assistant as a tool to help users accomplish common, often time-consuming banking activities without the need to memorize complex menus or, worse, speak to the dreaded online customer service representative. The net result: a simple, pleasant user experience that builds brand loyalty and customer retention.

Enter Aiqudo Voice to Action®

Well, that got me thinking. I’ve been an E*Trade banking customer for years and all this time I’ve never really used voice to make payments, transfer money or check balances. 

Nonetheless, I decided to see if I could recreate and hopefully improve upon my previous experience – this time using our very own Q Actions app.  The following video highlights some of my efforts. 

But can I trust this new way of banking?

Yes. You may have noticed in the video that I am not providing credentials to access my account in E*Trade. That’s because I’m already authenticated. Prior to shooting the video, I had provided credentials, by way of a fingerprint biometric, as part of the very first action execution. Note that Aiqudo did not manage this process; it was handled completely by the mobile app. Because of this, the data used to hold the credential lives entirely in the app itself and is neither passed to nor processed by Aiqudo systems at all. This separation of duties maintains the privacy of user data and therefore increases trust in using the technology.
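For readers curious what “handled completely by the mobile app” can look like on Android, here is a minimal sketch using the standard androidx.biometric library. The function is hypothetical, but it shows the pattern: the voice layer merely requests an action, and the app gates it behind its own local biometric prompt, so nothing credential-related ever reaches Aiqudo.

```kotlin
import androidx.biometric.BiometricPrompt
import androidx.core.content.ContextCompat
import androidx.fragment.app.FragmentActivity

// Hypothetical handler inside the banking app itself: the voice layer only asks for an
// action; authentication and credentials stay entirely on-device, inside this app.
fun runAfterBiometricAuth(activity: FragmentActivity, action: () -> Unit) {
    val prompt = BiometricPrompt(
        activity,
        ContextCompat.getMainExecutor(activity),
        object : BiometricPrompt.AuthenticationCallback() {
            override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
                action()  // e.g. show balances; no credential is ever returned to the voice layer
            }
        }
    )
    val promptInfo = BiometricPrompt.PromptInfo.Builder()
        .setTitle("Confirm it's you")
        .setNegativeButtonText("Cancel")
        .build()
    prompt.authenticate(promptInfo)
}
```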

A personalized experience

Personalization is a word typically used to describe how an app or other system adjusts to provide an experience tailored specifically to a user. It’s often used in conjunction with AI and machine learning systems as the end result of acquiring and processing data and suggesting courses of action. We enable personalization in the previous actions in a couple of ways. If you have, say, more than one voice-enabled banking app on your mobile device (as I do), our system can be configured to remember the user’s preferred app for an action. For instance, if I say the command “check my balances”, Aiqudo suggests actions from both E*Trade and Wells Fargo. If I choose the E*Trade action, the next time I say the command it will remember E*Trade and perform the action right away, with no need to ask again. Likewise, whenever an action requires the user to provide input such as an account number or payee, the system can store these for subsequent use. These are simple examples, but they add a nice touch to an already-useful integration.
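Here is a minimal sketch of the preferred-app memory described above; the class and its behavior are illustrative assumptions, not the actual implementation.

```kotlin
// Illustrative preference memory: once the user picks E*Trade for "check my balances",
// the same command goes straight to that app the next time.
class ActionPreferences {
    private val preferredApp = mutableMapOf<String, String>()

    // Return the remembered app, or null when the user still needs to be asked.
    fun resolve(command: String, candidates: List<String>): String? =
        preferredApp[command] ?: candidates.singleOrNull()

    fun remember(command: String, chosenApp: String) {
        preferredApp[command] = chosenApp
    }
}

fun main() {
    val prefs = ActionPreferences()
    val banks = listOf("E*Trade", "Wells Fargo")
    println(prefs.resolve("check my balances", banks))  // null -> prompt the user once
    prefs.remember("check my balances", "E*Trade")
    println(prefs.resolve("check my balances", banks))  // E*Trade, no prompt needed
}
```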

What if I don’t bank with E*Trade? What can I do with other apps and is it safe?

Aiqudo maintains similar actions for apps like Venmo and PayPal that allow “send money to <username>” type actions. In each of these cases, Aiqudo defers authentication to the app before completing the transaction and also ensures that the data used by the action in the app (e.g., the payee’s phone number or email address) never leaves the device or the app. The following video illustrates this.

With the proper integration of our ActionKit SDK into a banking app such as E*Trade, the end user reaps the benefits of a trusted, highly useful voice-powered interface that enables complex and often multi-step operations with ease and reduces Time to Action® for many activities within the app.

Actionable Knowledge

Q Actions 2.4: “Under the Hood” improvements for Productivity and Utility


Q Actions 2.4 now available on Google Play

The recent release of Q Actions 2.4 emphasizes Aiqudo’s focus on productivity and utility through voice. As voice assistants are becoming an increasingly ubiquitous part of our daily lives, Aiqudo aims to empower users to get things done. Many of the improvements and enhancements are “under the hood” – we’ve increased personalization and expanded the knowledge that drives our Actions.

Actionable Knowledge™

Our content-rich Q Cards leverage Actionable Knowledge to extend functionality into popular third-party apps. Start by asking about an artist, music group, athlete, or celebrity: “who is Tom Hanks?” Aiqudo’s Q Card not only presents information about the actor, but also asks “what next?”. Say “view his Twitter account” or “go to his Instagram”, and Actionable Knowledge will drop you exactly where you want to go!

Sample Actionable Knowledge Flow:

  • Ask “who is Taylor Swift?”
  • Select one of the supported Actionable Knowledge apps
    • “listen to her on Spotify”
    • “go to her Facebook profile”
    • “check out her Instagram”

Personalization … with privacy

Q Actions is already personalized, showing you Action choices based on the apps you already trust. We can now leverage personal data as signals to personalize your experience, while still protecting your privacy. It’s another iteration of our continued focus on increasing productivity and utility through voice. For example, if you checked in to your United Airlines flight and then, the following day, say “show my boarding pass”, the United Airlines action is promoted to the top – exactly what you’d expect the system to do for you.

Our new Personal Data Manager allows secure optimization for specific apps. If you have a Spotify playlist called “Beach Vibes”, and you say “play Beach Vibes”, we understand what you want and will promote your personal playlist over a random public channel by that name. Your playlists are never shipped off the device to our servers, but we can still use the relevant information to shortcut your day! If “Casimo Caputo” is a friend in Facebook Messenger, Messenger will trump WhatsApp for “tell Casimo Caputo let’s meet for lunch”. But “message Mark Smith let’s play FIFA tonight” brings up WhatsApp, since Mark Smith is your WhatsApp buddy.
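A toy sketch of that on-device matching is below. The data and scoring are made up for the example, but they show how personal entities can boost one app over another without ever leaving the phone.

```kotlin
// Toy on-device ranking: personal entities (playlists, contacts) stay on the phone and
// only nudge which app handles the command. The data and scores are made up.
data class CandidateApp(val app: String, var score: Double)

val myPlaylists = setOf("Beach Vibes", "Road Trip")   // never leaves the device
val messengerFriends = setOf("Casimo Caputo")         // never leaves the device
val whatsappContacts = setOf("Mark Smith")            // never leaves the device

fun rank(command: String, candidates: List<CandidateApp>): List<CandidateApp> {
    candidates.forEach { candidate ->
        when (candidate.app) {
            "Spotify"   -> if (myPlaylists.any { command.contains(it) }) candidate.score += 1.0
            "Messenger" -> if (messengerFriends.any { command.contains(it) }) candidate.score += 1.0
            "WhatsApp"  -> if (whatsappContacts.any { command.contains(it) }) candidate.score += 1.0
        }
    }
    return candidates.sortedByDescending { it.score }
}

fun main() {
    val apps = listOf(CandidateApp("Messenger", 0.5), CandidateApp("WhatsApp", 0.5))
    println(rank("tell Casimo Caputo let's meet for lunch", apps).first().app)  // Messenger
}
```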

Simply do more with voice! Q Actions is now available on Google Play.

Aiqudo Auto Mode

Aiqudo and Sensory Collaborate on Holistic Voice Solutions


Will develop solutions that simplify the process of integrating voice assistants into a variety of devices  

BUSINESSWIRE, LAS VEGAS, January 6, 2020

Aiqudo, a leading voice technology pioneer, today announced that it is collaborating with embedded voice and vision AI leader Sensory to bring to market comprehensive voice solutions that serve as white-label alternatives for voice services and assistants. The two companies are working on solutions targeting automotive, mobile app, smart home and wearable device applications.

Currently, companies and brands must piece together different technologies to fully implement a voice solution. With their technologies combined, Aiqudo and Sensory will deliver a fully integrated end-to-end solution that combines Sensory’s wake word, voice biometrics and natural language recognition technologies with Aiqudo’s multilingual intent understanding and action execution (Q Actions®) to provide the complete Voice to Action® experience consumers expect.
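Conceptually, the combined pipeline looks something like the sketch below. The interfaces are illustrative stand-ins, not the actual Sensory or Aiqudo SDK APIs.

```kotlin
// Conceptual pipeline only; these interfaces are stand-ins, not the Sensory or Aiqudo SDKs.
// Wake word and voice biometrics run on-device (Sensory's role); intent understanding and
// action execution follow (Aiqudo's role).
interface WakeWordDetector { fun heardWakeWord(audio: ByteArray): Boolean }
interface VoiceIdentifier { fun identifySpeaker(audio: ByteArray): String? }
interface IntentEngine { fun understand(utterance: String): String }
interface ActionEngine { fun execute(intent: String, user: String) }

fun onAudioFrame(
    audio: ByteArray,
    transcribe: (ByteArray) -> String,
    wake: WakeWordDetector,
    voiceId: VoiceIdentifier,
    nlu: IntentEngine,
    actions: ActionEngine
) {
    if (!wake.heardWakeWord(audio)) return                // idle until the brand wake word
    val user = voiceId.identifySpeaker(audio) ?: return   // biometrics decide whose apps to use
    val intent = nlu.understand(transcribe(audio))        // multilingual intent understanding
    actions.execute(intent, user)                         // run the matching app action
}
```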

“Voice adoption continues to grow rapidly, and brands are always exploring ways to streamline the process of integrating a convenient voice UX into their products,” said Todd Mozer, Sensory’s CEO. “Working with Aiqudo allows our two companies to provide the industry a turn-key solution for integrating powerful voice assistants into their products that feature brand-specific wake words and are capable of recognizing who is speaking.”


With both Aiqudo and Sensory positioned as leaders in their respective fields, this collaboration is a natural fit, as their technologies are highly complementary. The initial integration is focused on the automotive vertical and will be showcased at the 2020 Consumer Electronics Show.

Aiqudo’s Auto Mode highlights a highly personalized user experience in the car using the Q Actions® platform. Enhanced with Sensory’s wake word (TrulyHandsfree™) and voice ID (TrulySecure™) functionality,  multiple users seamlessly access their personal mobile devices just by using their voice to execute personal actions. “Users just need to enter the cabin with their smartphones,” said Rajat Mukherjee, Aiqudo CTO. “There’s no registration required, and the personalized wake word and voice biometrics allow users to instantly access their personal apps seamlessly and securely”.  

“Brands increasingly want to create their own branded voice experiences for their customers,” said John Foster, CEO of Aiqudo. “Working with Sensory, we have created the easiest and fastest way for brands to bring the power and convenience of voice to their customers. We are excited to integrate our areas of practice and expertise to deliver a comprehensive solution.”  

To view the demo onsite while at CES, please email Aiqudo at CES@aiqudo.com. 

BUSINESSWIRE: Aiqudo and Sensory Collaborate on Holistic Voice Solutions

About Sensory

For over 25 years, Sensory has pioneered and developed groundbreaking applications for machine learning and embedded AI – turning those applications into household technologies. Pioneering the concept of always-listening speech recognition more than a decade ago, Sensory’s flexible wake word, small to large vocabulary speech recognition, and natural language understanding technologies are fueling today’s voice revolution. Additionally, its biometric recognition technologies are making everything from unlocking a device to authenticating users for digital transactions faster, safer and more convenient. Sensory’s technologies are widely deployed in numerous markets, including automotive, home appliances, home entertainment, IoT, mobile phones, wearables and more, and have shipped in over two billion units of leading consumer products and software applications.

For more information about this announcement, Sensory or its technologies, visit https://www.sensory.com/, contact sales@sensory.com or for press inquiries contact press@sensory.com.

About Aiqudo

Aiqudo (pronounced: “eye-cue-doe”) is a Voice AI pioneer that connects the nascent world of voice interfaces to the useful, mature world of mobile apps and cloud services through its Voice to Action® platform. It lets people use natural voice commands to execute actions in mobile apps and cloud services across devices. Aiqudo’s SaaS platform uses machine learning (AI) to understand natural-language voice commands and then triggers instant actions via mobile apps, cloud services, or device actions, enabling consumers to get things done quickly and easily.

Aiqudo’s proprietary technology is covered by more than 30 granted patents and patent applications. Aiqudo’s technology is delivered in a scalable approach to creating voice-enabled actions without mandating APIs or developer dependencies.

To see Aiqudo in action, visit Aiqudo’s YouTube channel (youtube.com/aiqudo)

Aiqudo in the Byton Cabin

The BYTON next generation infotainment experience


Byton Multiple Displays

Peter Mortensen, Solution Architect, BYTON, Santa Clara, California

BYTON is the automotive EV brand taking the lead in next-generation infotainment experiences, with a vision of giving vehicle occupants the best combination of safety and entertainment while on the road. The first BYTON is the M-BYTE SUV, which incorporates a groundbreaking infotainment system: an advanced Android Automotive app platform combined with BYTON’s unique user interface, including the industry’s largest display and user control via touch, gesture, and voice.

One of the key challenges when expanding the infotainment capabilities of a vehicle is avoiding unsafe distractions for the driver. With extensive support for voice control, the driver of a BYTON can operate the vehicle more safely, keeping more visual focus on the driving itself. Another challenge is how well a vehicle’s infotainment system provides the frequently used features of popular apps and online services common on the occupants’ smartphones.

Aiqudo’s Voice to Action® platform gives occupants of a BYTON vehicle a simple and seamless link between natural language voice interaction and their favorite personal apps. With this solution, the occupants of a BYTON can speak naturally to their favorite apps using the vehicle’s powerful microphone array system from any seat in the cabin. The apps can reside either in the vehicle’s own infotainment system or on the Bluetooth-connected Android or iOS smartphones of the occupants. The Aiqudo solution interprets the spoken commands and automatically identifies which specific app to control. For example, the driver can easily send a message by voice through his favorite social networking app installed on his smartphone, hands-free, without picking up the device, thus staying focused on safely driving the vehicle.
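One simple way to picture that routing is the sketch below: the platform picks an execution target based on where the app lives. The names and logic are illustrative assumptions, not BYTON’s actual integration.

```kotlin
// Illustrative target selection: run the action on the vehicle's own infotainment system
// when the app is installed there, otherwise hand it to a Bluetooth-connected phone that has it.
data class Device(val name: String, val installedApps: Set<String>)

fun chooseTarget(appForCommand: String, console: Device, phones: List<Device>): Device? =
    if (appForCommand in console.installedApps) console
    else phones.firstOrNull { appForCommand in it.installedApps }

fun main() {
    val console = Device("infotainment console", setOf("Maps", "Radio"))
    val phone = Device("driver's phone", setOf("WhatsApp", "Spotify"))
    println(chooseTarget("WhatsApp", console, listOf(phone))?.name)  // driver's phone
}
```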

BYTON is committed to intelligently combining the best of voice control with the safe use of touch control, gesture control, and display feedback. The Voice to Action® platform expands beyond voice interaction by allowing BYTON’s engineers to touch-enable relevant app controls and provide display feedback when appropriate. Furthermore, BYTON’s voice interaction concept aims high, creating a truly intuitive voice interaction experience through the integration of Aiqudo’s app control with vehicle control and online digital voice assistant services such as Amazon’s Alexa, depending on country.

More information on our Aiqudo partnership is available at BYTON’s new developer site: AIQUDO and BYTON Partner to enable actions for your phone and car

This post is authored by Peter Mortensen, BYTON Solution Architect. Aiqudo and BYTON announced a strategic partnership at CES 2020 that features voice control of apps in BYTON automobiles using Aiqudo technology.

Aiqudo Voice Platform To Power Digital Experience in BYTON’S Electric Vehicles


BUSINESSWIRE, LAS VEGAS, JANUARY 5, 2020

Aiqudo in the Byton Cabin

Aiqudo, a voice technology pioneer, announced ahead of CES 2020 a partnership with premium electric vehicle manufacturer BYTON, bringing the power of Aiqudo’s Voice AI platform to BYTON cars. Aiqudo’s Voice to Action®  platform will enable interacting with your favorite apps on your mobile phone hands-free while driving, seamlessly integrating with BYTON’s unique Digital Experience.

Aiqudo’s industry-leading Voice AI platform will voice-enable actions in native apps within the BYTON ecosystem as well as intelligently launch app actions on personal mobile devices in the vehicle.  BYTON drivers and passengers will be able to navigate, make calls, send messages, listen to music, shop, join meetings, make payments and more using simple voice commands with apps they use and love. In its integration with BYTON, Aiqudo incorporates the personalization and individual choice reflected by consumers’ favorite apps, as well as personal elements within apps such as preferred playlists, contacts, or favorites, all without user registration or setup. The BYTON experience powered by Aiqudo delivers the safest, easiest and most useful way to use a mobile device while in the car.

“A seamless voice experience is integral to BYTON’s groundbreaking user experience and Aiqudo Voice will make accessing your favorite apps convenient and safe,” said Jeff Chung, BYTON Vice President of Digital Engineering. “Aiqudo’s white label solution allows us to explore new possibilities with our expanding partnerships in the BYTON digital ecosystem.”


Aiqudo’s voice platform comprises a semiotics-based intent engine that understands natural language commands in 7 languages currently, plus an action execution capability across thousands of applications that consumers rely on daily. The company’s white-label voice platform allows car manufacturers, phone and smart device OEMs and mobile app developers to define unique voice experiences for their customers.  

“Byton has reimagined the relationship between cars and the people who drive or ride in them, placing voice-based interactions at the center of the in-car experience. We believe that voice will soon be the primary way people interact with their digital world. We’re partnering with BYTON to bring a high utility, personalized voice experience to their automobiles,” said John Foster, CEO of Aiqudo. “The in-car experience is a prime use case demonstrating the power of voice. Customers can now drive safely, undistracted and hands-free, and still use their favorite apps just by using their voice.”

Aiqudo’s Action Kit functionality will be offered to app developers through the BYTON developer portal. 

“Action Kit enables BYTON app developers to easily and effortlessly enable voice within their applications for the company’s range of cars and expansive infotainment systems,” said Dr. Rajat Mukherjee, Aiqudo CTO. “BYTON’s vision of the car of the future, equipped for autonomous driving, will accelerate the need that users have to access their personal digital lives everywhere. Aiqudo makes this easy!” 

BUSINESSWIRE: Aiqudo Voice Platform To Power Digital Experience in BYTON’S Electric Vehicles

About Aiqudo

Aiqudo (pronounced: “eye-cue-doe”) is a Voice AI pioneer that connects the nascent world of voice interfaces to the useful, mature world of mobile apps and cloud services through its Voice to Action® platform. It lets people use natural voice commands to execute actions in mobile apps across devices. Aiqudo’s SaaS platform uses machine learning (AI) to understand natural-language voice commands and then triggers instant actions via mobile apps, cloud services, or device actions, enabling consumers to get things done quickly and easily.

Aiqudo’s proprietary technology is covered by more than 30 granted patents and patent applications. Aiqudo’s technology is delivered in a scalable approach to creating voice-enabled actions without mandating APIs or developer dependencies.

To see Aiqudo in action, visit Aiqudo’s YouTube channel (youtube.com/aiqudo)

About BYTON 

BYTON is a global premium electric vehicle manufacturer that is creating the world’s first smart device on wheels. By integrating advanced digital technologies to offer a smart, connected, and comfortable mobility experience, the company is designing an EV that will meet the demands of an increasingly digital lifestyle now and into the future. 

The company’s global headquarters and state-of-the-art manufacturing center are located in Nanjing, China. Its global R&D hub is located in the heart of Silicon Valley and devoted to the development of BYTON’s groundbreaking intelligent car experience, digital ecosystem, advanced connectivity, as well as other cutting-edge technologies. BYTON’s design and concept vehicle center is located in Munich, Germany. 

BYTON’s core management team is made up of top innovators from leading-edge companies such as BMW, Tesla, Google, and Apple. This diverse group of leaders from China, Europe, and the US share the singular vision of creating an unprecedented automotive experience.

Official website: www.byton.com  Further Information: BYTON Newsroom 

Q Actions on iOS

Announcing The New Q Actions For iOS


For over two years, Aiqudo has been leading the charge of deep app integration with voice assistants on Android phones. Today, our Android platform continues to do many things that no other platform can. Now, we’re incredibly proud to announce the latest release of our Q Actions app for iOS. We’ve been working on this iOS release for months, and it represents a full suite of actions functionality driven by the new ActionKit SDK for iOS. The new ActionKit is also what iOS developers can use to easily add voice to their own apps.

iOS is a more restrictive and closed ecosystem than Android.  Many of the platform capabilities that Android provides are not available to third-party developers in Apple’s ecosystem.  For instance, apps are not allowed to freely communicate with each other, and it’s difficult to determine what apps are installed.  Such restrictions challenge digital assistants like Q Actions, which rely on knowledge of a user’s apps to provide relevant results and the ability to communicate with apps in order to automate and execute actions in other apps.

Q Actions for iOS enables app developers to define their own voice experience for their users rather than being subject to the limitations of SiriKit or Siri Shortcuts. Currently, SiriKit limits developers’ ability to expose functionality in Siri, allowing only broad categories that dilute the differentiated app experiences that developers have built.  With Q Actions for iOS, brands and businesses will be able to maintain their differentiating features and brand recognition, rather than conform to a generalized category.

With this release, we took a hard look at what was needed to build a comparable experience to what we have on Android.  To make it more powerful for iOS app developers, we pushed most of the functionality into the ActionKit SDK. The result is that ActionKit powers all the actions available in the app, allowing developers to offer an equivalent experience in their iOS app.  The ActionKit SDK is available for embedding in any iOS app today.

Let’s take a look at what Q Actions and the Aiqudo platform offer right now:

Easily discover actions for your phone

Q Actions helpfully provides an Action Summary with a categorized list of apps and actions for your device.  Browse by category, tap on an app to view sample commands, or tap a command to execute the action.


Go beyond Siri

Q Actions supports hundreds of new actions!  Watch Netflix Originals or stream live video on Facebook with simple commands like “watch Narcos” or “stream live video”.


True Natural Language

Q Actions for iOS leverages Aiqudo’s proprietary, semiotics-based language modeling system to power support for natural language commands. Rather than the exact match syntax required by Siri Shortcuts, Aiqudo understands the wide variations in commands that consumers use when interacting naturally with their voice. Plus, Aiqudo is multilingual, currently supporting commands in seven languages worldwide.

Content-rich Cards for informational queries

Get access to web results from Bing, translate phrases or look at stock quotes directly from Q Actions.  Get rich audio and visual feedback from cards.


There’s still a lot to come!  We’ve already shown how Aiqudo can enable a better voice experience in the car.  We’ve also seen how voice can help users engage meaningfully with your app.  We’re working hard to build a ubiquitous voice assistant platform and this release on iOS gets us one step closer.  Stay tuned as we’ll be talking more about some of the challenges of bringing our voice platform to iOS and iOS app developers, and more importantly, how we’re aligned with Apple’s privacy-centric approach.

Q Actions Auto Mode

What if cars could understand ALL voice commands?


The following transcript was taken from a casual conversation with my son.

Son: Dad, what are you working on?

Me: It’s a new feature in our product called “Auto Mode”.  We just released it in version 2.1 of our Q Actions App for Android.  We even made a video of it.  We can watch it after dinner if you’re interested.

Son: The feature sounds cool.  What’s it look like?

Me: Well, here.  We have this special setting that switches our software to look like the screen in a car. See how the screen is wider than it is tall? Yeah, that’s because most car screens are like that too.

Son: Wait. How do you get your software into cars? Can’t you just stick the tablet on the dashboard?

Me: Humm, not quite.  We develop the software so that car makers can combine it with their own software inside the car’s console.  We’ll even make it look like they developed it by using their own colors and buttons. I’m showing you how this works on a tablet because it’s easier to demonstrate to other people – we just tell them to pretend it’s the car console.  Until cars put our software into their consoles, we’ll make it easy for users to use “Auto Mode” directly on their phones. Just mount the phone on the car’s dash and say “turn on auto mode” – done!

Son:  So how do you use it?  And what does that blue button with a microphone in it do?

Me: Well, we want anyone in the car to be able to say a command like “navigate to Great America” or “what’s the weather like in San Jose?” or “who are Twenty One Pilots?”. The button is simply a way to tell the car to listen. When we hear a command, our software figures out what to do and what to show on the console in the car. Sometimes it even speaks back the answer. Now, we don’t always want people to have to press the button on the screen, so we’ll work with the car makers to add a button on the steering wheel or even a microphone that is always listening for a special phrase such as “Ok, Q” to start.

Son: How does it do that?  I mean, the command part.

Me: Good question. Since you’re smart and know a little about software, I’ll keep it short. Our software takes a command and tries to figure out what app or service can best provide the answer. For example, if the command is about showing the route to, say, an amusement park like Great America, we’ll ask Google Maps to handle it, which it does really well. Lots of cars come with mapping software like Google Maps installed, so it’s best to let them handle those. For other types of commands that ask for information, like “what’s the weather like in San Jose” or “who are Twenty One Pilots”, we’ll send them off to servers in the cloud. They then send us back answers, and we format them and display them on the screen, in a nice-looking card like this one.
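(For the technically inclined reader, the dispatch we are talking about boils down to something like the sketch below. The keyword rules are illustrative only; the real system relies on intent understanding rather than simple prefixes.)

```kotlin
// Illustrative dispatch only: navigation goes to the installed maps app, informational
// questions go to cloud services and come back as a card, everything else is tried
// against app actions (on the console or a connected phone).
enum class Route { MAPS_APP, CLOUD_ANSWER_CARD, PHONE_APP_ACTION }

fun route(command: String): Route = when {
    command.startsWith("navigate to") || command.startsWith("directions to") -> Route.MAPS_APP
    command.startsWith("what") || command.startsWith("who") -> Route.CLOUD_ANSWER_CARD
    else -> Route.PHONE_APP_ACTION
}

fun main() {
    println(route("navigate to Great America"))  // MAPS_APP
    println(route("who are Twenty One Pilots"))  // CLOUD_ANSWER_CARD
    println(route("show my homework tasks"))     // PHONE_APP_ACTION
}
```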

Me: Sometimes, apps running on our phones can best answer these commands and we use them to handle it.

Son: Wait. Phones?  How are phones involved? I only see you using a tablet.

Me: Ahhh. You’ve discovered our coolest feature. We use apps already installed on your phone. Do you see those rectangle-looking things in the upper right corner of the tablet? The ones with the pictures and names of people? Well, those are phone profiles. They appear when a person connects their phone, running our Q Actions app, to the car’s console through Bluetooth, sort of like you do with wireless earbuds. When connected, our software in the console sends the phone your commands, and the phone in turn attempts to execute each command using one of the installed apps. Let me explain with an example. Let’s pretend you track your daily homework assignments using the Google Tasks app on your phone. Now you hop into the car, your phone automatically pairs with the console, and I ask you to show me your homework assignments. You press the mic button and say “show my homework tasks”. The software in the console would intelligently route the command to your phone (because Google Tasks is not on the console), open Google Tasks on your phone, grab all your homework assignments, and send them back to the console to be displayed in a nice card. Oh, and it would also speak your homework assignments back to you. Let’s see what happens when I tell it to view my tasks.

Son:  Big deal.  I can just pick up my phone and do that.  Why do I need to use voice for that?

Me: Because if you’re the driver, you don’t want to be fumbling around with your phone, possibly getting into an accident! Remember, this is supposed to help drivers with safe, “hands-free” operation. You put your phone in a safe place and our software figures out how to use it to get the answers.

Son: Why can’t the car makers put all these apps in the console so you don’t have to use your phone?

Me: Great question. Most people carry their phones on them at all times, especially when they drive. And these phones have all their favorite apps with all their important personal information stored in them. There’s no way the car makers could figure out which apps to include when you buy the car. And even if you could download these apps onto the console, all the personal information that’s on your phone would have to be transferred over to the console, app by app. Clumsy, if you ask me. I prefer to keep my information on my phone and private, thank you very much!

Son: Oh. Now I get it.  So what else does the software do?

Me: The console can call a family member. If you say “call Dad”, the software looks for ‘Dad’ in your phone’s address book and dials the number associated with it. But wait, you’re probably thinking, ‘What’s so special about that? All the cool cars do it.’ Well, we know that a bunch of apps can make phone calls, so we show you which ones and let you decide. Also, if you have two numbers for ‘Dad’, say a home and a mobile number, the software will ask you to choose one to call. Let’s see how this works when I say “call Dad”.

Me: It asks you to pick an app.  I say ‘phone’ and then it asks me to pick a number since my dad has both a home and mobile number.  I say ‘mobile’ and it dials the number through my phone.

Son: Cool. But what if I have two people with the same name, like Julie?

Me: It will ask you to pick a ‘Julie’ when it finds more than one. And it will remember that choice the next time you ask it to call Julie. See what happens when I want to call Jason: it shows me all the people in my address book named Jason along with their phone numbers. If a person has more than one number, it will say ‘Multiple’.
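(Again for the curious, the disambiguation behavior described here can be sketched roughly as follows; the types and logic are illustrative assumptions, not the shipping code.)

```kotlin
// Illustrative contact disambiguation: list matching contacts, ask the user to pick when
// there is more than one, and remember the choice for next time.
data class Contact(val name: String, val numbers: Map<String, String>)

class ContactResolver(private val addressBook: List<Contact>) {
    private val remembered = mutableMapOf<String, Contact>()

    fun resolve(spokenName: String, ask: (List<Contact>) -> Contact): Contact? {
        remembered[spokenName]?.let { return it }
        val matches = addressBook.filter { it.name.contains(spokenName, ignoreCase = true) }
        val choice = when (matches.size) {
            0 -> return null
            1 -> matches.first()
            else -> ask(matches)          // "I found more than one Julie. Which one?"
        }
        remembered[spokenName] = choice   // next time, no question needed
        return choice
    }
}
```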

Son: Wow.  What else?

Me: How about sending a message on WhatsApp? Or setting up a team meeting in the calendar. Or joining a meeting from the car if you’re running late. Or even checking which of your friends have birthdays today. All these actions are performed on your phone, using the apps you are familiar with and already use.

Son: Which app shows you your friends birthdays? That’s kind of neat.

Me: Facebook

Son: I don’t use Facebook. I use Instagram. It’s way better.  Plus all the cool kids use it now.

Me:

Me: You get the picture though, right?

Son: Sure.

Son: So what if all of my friends are in the car with you and we connect to the console?  How does the software know where to send the command?

Me: We use the person’s voice to identify who they are and route the command to the right person’s phone automatically.

Son: Really? That seems way too hard.

Me: Not really.  Although we haven’t implemented it yet, the technology exists to do this sort of thing today.

Son: Going back to main screen, why does the list of actions under ‘Recent’ and ‘Favorites’ change when you change people?

Me: Oh, you noticed that!   Whenever the software switches to a new profile, we grab the ‘Recent’ and ‘Favorites’ sections from that person’s phone and display it in the tablet, er, console.  This is our way of making the experience more personalized or familiar to the way the app appears on your phone. In fact, the ‘Favorites’ are like handy shortcuts for frequently used actions, like “call Mom”.  

Me: One more thing.  Remember the other buttons on the home screen? One looked like a music note, the other a picture for messaging and so on.  Well, when you press those, a series of icons appear across the screen, each showing an action that belongs to that group.  If your phone had Spotify installed, we would show you a few Spotify actions. If Pandora was installed, we would show you Pandora actions and so on.   Check out what happens when I activate my profile. Notice how Pandora appears? That’s because Pandora is on my phone and not on the tablet like Google Play Music and YouTube Music.


Me: Same is true for messaging and calling.   Actions from apps installed on your phone would appear.  You would simply tap on the icon to run the action.   In fact, if you look carefully, you’ll notice that all the actions that show up on the console are also in the ‘My Actions’ screen in the Q Actions app on your Android Phone.   Check out what’s on the tablet vs. my phone.


Son: Yep.

Me: Oh and before I forget, there’s one last item I’d like to tell you about.

Son: What’s that.

Me: Notifications. If you send me a message on WhatsApp, Messenger or WeChat, a screen will pop up letting me know I have a message from you. I can listen to the message by pressing a button, or respond to it, by voice of course, all while keeping my focus on the road. You’ll get the response just as if I had sent it while holding the phone.

Son:  Cool. I’ll have fun sending you messages on your way home from work.

Me: Grrrrrr.

Son: Hey, can I try this out on my phone?

Me: Sure. Just download our latest app from the Google Play Store. After you get it installed, go to the Preferences section under Settings and check the box that says ‘Auto Mode (BETA)’. You’ll automatically be switched into Auto Mode on your phone. Now this becomes your console in the car.

Of course, things appear a bit smaller on your phone than what I’ve shown you on the tablet. Oh, and since you’re not connected to another phone, all the commands you give it will be performed by apps on your phone. Try it out and let me know what you think.

Son:  Ok. I’ll play around with it this week.

Me: Great.  Now let’s go see what your mom’s made us for dinner.