
Voice for the Connected Worker

Natural Voice Recognition for Safety and Productivity in Industrial IoT

Voice for AssetCare

Jim Christian, Chief Technology and Product Officer, mCloud

It is estimated that there are 20 million field technicians operating worldwide. A sizable percentage of those technicians can’t always get to the information they need to do their jobs.

Why is that? After all, we train technicians, provide modern mobile devices and specialized apps, and send them to the field with stacks of information and modern communications. Smartphone sales in the United States grew from $3.8 billion in 2005 to nearly $80 billion in 2020. So why isn’t that enough?

One problem is that tools that work fine in offices don’t necessarily work in the field. A 25-year-old mobile app designer forgets that a 55-year-old field worker cannot read small text on a small screen, or read it at all in the glare of natural light. In industrial and outdoor settings, field workers frequently wear gloves and other protective gear. Consider a technician who needs to enter data on a mobile device while outside in freezing weather. He could easily choose to wait until he’s back in his truck and can take off his gloves, and as a result not enter the data exactly right. Or he may keep the gloves on and find it difficult to type on the device at all.

A voice-based interface can be a great help in these situations. Wearable devices that respond to voice are becoming more common. For instance, RealWear makes a headset that is designed to be worn with a hardhat, and one model is intrinsically safe and can be used in hazardous areas. But voice interfaces have not become popular in industrial settings. Why is that?

We could look to the OODA loop, short for Observe, Orient, Decide, and Act, for insight. The OODA concept was developed by U.S. Air Force Colonel John Boyd as a mental model for fighter pilots, who need to act quickly. Understanding the OODA loop that applies in a particular situation helps a person improve: act more quickly and more decisively. Field technicians rarely face life-and-death situations, but the OODA loop still applies. The speed and accuracy of their work depends on their OODA loop for the task at hand.

Consider two technicians who observe an unexpected situation, perhaps a failed asset. John orients himself by taking off his gloves to call his office, then searching his company’s document management system for drawings, then calling his office again to confirm his diagnosis. Meanwhile, Jane orients herself by doing the same search, but by talking instead of typing, keeping her eyes on the asset the whole time. Assuming the voice system is robust, Jane can use her eyes and her voice at the same time, accelerating her Observe and Orient phases. Jane will do a faster, better job. A system that makes the Observe and Orient phases difficult, as in John’s experience, will be rejected by users, whereas Jane’s experience, with a short, easy OODA loop, will be accepted.

A downside of speaking to a device is that traditional voice recognition systems can be painfully slow and limited. These systems recognize the same commands that a user would type or click with a mouse, but most people type and click much faster than they talk. Consider the sequence of actions required to take a picture and send it to someone on a smartphone using your fingers: open the camera app, take the picture, close the camera app, open the photo gallery, select the picture, tap the share button, select a recipient, type a note, and hit send. That is nine distinct operations. Many people can do this rapidly with their fingers, even though it is a lot of steps. Executing the same sequence with an old-style, separate voice command for each step would be slow and painful, and most people would find it worse than useless.

The solution is natural voice recognition, where the voice system recognizes what the speaker intends and understands what “call Dave” means. Humans naturally understand that a phrase such as “call Dave” is shorthand for a long sequence (“pick up the phone”, “open the contact list”, “search for ‘Dave’”, and so on). Natural voice recognition has come a long way in recent years, and systems like Siri and Alexa have become familiar for personal use. But field workers often have their own shorthand for their industry, like “drop the transmission” or “flush the drum”, which their peers understand but Siri and Alexa don’t.
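
To make the idea concrete, here is a minimal, hypothetical sketch of how a voice system might expand a short intent like “call Dave” into the full sequence of device steps it stands for. All names are invented for illustration; this is not Aiqudo’s actual API.

    # Hypothetical sketch: expanding a short spoken intent into the
    # sequence of device steps it stands for. All names are invented;
    # this is not Aiqudo's actual API.

    INTENT_EXPANSIONS = {
        "call": [
            "open the phone app",
            "open the contact list",
            "search for '{argument}'",
            "dial the selected contact",
        ],
    }

    def expand_intent(utterance: str) -> list[str]:
        """Map a short natural command to the step sequence it implies."""
        verb, _, argument = utterance.partition(" ")
        steps = INTENT_EXPANSIONS.get(verb.lower())
        if steps is None:
            raise ValueError(f"No expansion known for: {utterance!r}")
        return [step.format(argument=argument) for step in steps]

    if __name__ == "__main__":
        for step in expand_intent("call Dave"):
            print(step)   # prints the four steps "call Dave" implies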

At mCloud, we see great potential in applying natural voice recognition to field work in industries such as oil & gas. Consider a field operator who is given a wearable device with a camera and voice control, and who is able to say things like “take a picture and send it to John”, or “take a picture, add a note ‘new corrosion under insulation at the north pipe rack’ and send to Jane”, or “give me a piping diagram of the north pipe rack.” This worker will have no trouble accessing useful information, or using that information to orient himself and make good decisions. An informed field operator will get work done faster, with less trouble and greater accuracy.
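
Commands like these carry parameters, such as the dictated note and the recipient, that the system must pull out of the utterance. Purely as an illustration, and not reflecting mCloud’s or Aiqudo’s actual implementation, extracting those parameters might look like this:

    import re

    # Hypothetical sketch: extracting the parameters ("slots") from a
    # spoken field command. The pattern and names are invented for
    # illustration only.

    COMMAND_PATTERN = re.compile(
        r"take a picture"
        r"(?:,? add a note '(?P<note>[^']+)')?"   # optional dictated note
        r",? and send (?:it )?to (?P<recipient>\w+)",
        re.IGNORECASE,
    )

    def parse_field_command(utterance: str) -> dict:
        match = COMMAND_PATTERN.search(utterance)
        if not match:
            raise ValueError(f"Unrecognized command: {utterance!r}")
        return {
            "action": "capture_and_send_photo",
            "note": match.group("note"),          # None if no note given
            "recipient": match.group("recipient"),
        }

    if __name__ == "__main__":
        print(parse_field_command("take a picture and send it to John"))
        print(parse_field_command(
            "take a picture, add a note 'new corrosion under insulation "
            "at the north pipe rack' and send to Jane"))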

The U.S. Chemical Safety Board analyzes major safety incidents at oil & gas and chemical facilities. In a fair number of incidents, a contributing factor was a field worker not knowing something or not having the right information. For instance, an isobutane release at a Louisiana refinery in 2016 occurred in part because field operators used the wrong procedure to remove the gearbox on a plug valve. There was a standard procedure, but about 3% of the plug valves in the refinery were an older design that required different steps to remove the gearbox. The field workers were wearing protective gear and followed a procedure that was correct for a different type of valve, but wrong for the valve in front of them. Field workers generally have written procedures, but occasionally the work planner misses something, or reality in the field differs from what was expected. That means field workers need to adapt, perhaps by calling for help or looking up information such as alternate procedures.

Examples where natural voice recognition can help include finding information, calling other people for advice, recording measurements and observations, inspecting assets, stepping through repair procedures, describing the state of an asset along with recommendations and questions, writing a report about the work done, and working with other people to accomplish tasks. Some of these are ad hoc tasks, like taking a picture or deciding to call someone. Others are part of larger, structured jobs. An isolation procedure in a chemical plant and a transmission replacement are both complex, multi-step procedures that can require specialized information, and where an unexpected result from one step may force the field worker to get re-oriented, find new information, or get help.

Aiqudo has powerful technology for natural voice recognition, and mCloud is pleased to be working with Aiqudo to apply it. Working together, we can help field workers get what they need by simply asking for it, reach the right people by simply asking for help, confirm their status in a natural way, and in general get the right job done, effectively and without mistakes.



Aiqudo and mCloud recently announced a strategic partnership that brings natural Voice technology into emerging devices, such as smart glasses, to support AR/VR and connected worker applications, and also into new domains such as industrial IoT, remote support and healthcare.


QTime: What I Learned as an Aiqudo Intern


Intern Voice: Mithil Chakraborty

Hi! My name is Mithil Chakraborty, and I’m currently a senior at Saratoga High School. During the summer of 2020, I had the privilege of interning at Aiqudo for six weeks as a Product Operations intern. Although I had previously coded in Java, HTML/JavaScript, and Python, this was my first internship at a company. Coming in, I was excited but a bit uncertain, thinking that I would not be able to fully understand the core technology (the Q System) or how the app’s actions are created. But even amidst the COVID-19 pandemic, I learned a tremendous amount, not only about onboarding and debugging actions, but also about how startups work; the drive of each of the employees was admirable and really stood out to me. As the internship progressed, I felt like a part of the team. Phillip, Mark, and Steven did a great job making me feel welcome and explaining the Q Tools program, the Q App, and onboarding procedures.

As I played around with the app, I realized how cool its capabilities were. During the iOS stage of my internship, I verified and debugged numerous iOS Q App actions and contributed to the latest release of the iOS Q Actions app. From there, I researched new actions to onboard for Android, focusing on relevant information and new apps. As a result, I proposed actions that would display COVID-19 information in Facebook and open Messenger Rooms. Through this process, I also learned how to implement Voice Talkback for the Facebook COVID-19 info action, using Android Device Monitor and Q Tools. The unique actions I finally onboarded, sketched below, included:

  • “show me coronavirus info” >> reads back the first three headlines in the COVID-19 Info Center pane on Facebook
  • “open messenger rooms” >> creates and opens a Messenger Room
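
Aiqudo’s action format is internal to Q Tools, so purely as an illustration (all field names invented, not the actual Q Tools format), an onboarded action with Voice Talkback might be described by a structure like this:

    # Purely illustrative: a hypothetical description of an onboarded
    # action with Voice Talkback. Field names are invented and do not
    # reflect the actual Q Tools action format.

    covid_info_action = {
        "app": "Facebook",
        "example_command": "show me coronavirus info",
        "intent": "show_covid_info",
        "steps": [
            "launch Facebook",
            "navigate to the COVID-19 Info Center pane",
            "read the first three headlines aloud",   # the talkback step
        ],
        "talkback": True,
    }

    def describe(action: dict) -> str:
        """Summarize what saying the example command will do."""
        return (f"Saying '{action['example_command']}' runs "
                f"{len(action['steps'])} steps in {action['app']}.")

    if __name__ == "__main__":
        print(describe(covid_info_action))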

Covid Information

Users don’t have to say an exact phrase in order for the app to execute the correct action; the smart AI-based intent-matching system runs only the relevant actions from Facebook or Messenger, based on the user’s query. The user does not even have to mention the app by name; the system picks the right app automatically.
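
As a toy illustration of that idea (invented for this post, not the actual Q Actions matcher), an intent matcher can score every registered action against the user’s words and run the best match:

    # Toy sketch of intent matching: score each registered action against
    # the user's words and pick the best match, so the user never has to
    # say an exact phrase or name the app. Not the actual Q Actions system.

    ACTIONS = [
        {"app": "Facebook", "intent": "show_covid_info",
         "keywords": {"coronavirus", "covid", "info", "information"}},
        {"app": "Messenger", "intent": "open_messenger_room",
         "keywords": {"messenger", "room", "rooms"}},
    ]

    def match_action(utterance: str) -> dict:
        words = set(utterance.lower().split())
        # Choose the action whose keywords overlap the query the most.
        return max(ACTIONS, key=lambda action: len(action["keywords"] & words))

    if __name__ == "__main__":
        print(match_action("show me the latest coronavirus information"))
        print(match_action("start a messenger room"))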

When these actions were finally implemented, it felt rewarding to see my work easily accessible on smartphones; I told my friends and family about the amazing Q Actions app so they could see it for themselves. Throughout my Aiqudo internship, the team was incredibly easy to talk to, and they always encouraged questions. The internship showed me real-life applications of software engineering and AI, which I hadn’t been exposed to before, and the importance of collaboration and perseverance, especially when I was debugging pesky actions for iOS. This opportunity taught me, in a hands-on way, the business and technical skills a startup like Aiqudo needs to be nimble and successful, which I greatly appreciated. Overall, my time at Aiqudo was incredibly memorable, and I hope to be back soon.

Thank you Phillip, Mark, Steven, Rajat and the rest of the Aiqudo team for giving me this valuable experience this summer! 


mCloud Brings Natural Language Processing to Connected Workers through Partnership with Aiqudo


CANADA NEWSWIRE, VANCOUVER, OCTOBER 1, 2020

mCloud Technologies Corp. (TSX-V: MCLD) (OTCQB: MCLDF) (“mCloud” or the “Company”), a leading provider of asset management solutions combining IoT, cloud computing, and artificial intelligence (“AI”), today announced it has entered into a strategic partnership with Aiqudo Inc. (“Aiqudo”), leveraging Aiqudo’s Q Actions® Voice AI platform and Action Kit SDK to bring new voice-enabled interactions to the Company’s AssetCare™ solutions for Connected Workers.

By combining AssetCare with Aiqudo’s powerful Voice to Action® platform, mobile field workers will be able to interact with AssetCare solutions through a custom digital assistant using natural language.

In the field, industrial asset operators and field technicians will be able to communicate with experts, find documentation, and pull up relevant asset data instantly and effortlessly. This will expedite the completion of asset inspections and operator rounds, an industry first, using hands-free, simple, and intuitive natural commands via head-mounted smart glasses. Professionals will be able to call up information on demand with a single natural language request, eliminating the need to search using complex queries or special commands.

Here’s a demonstration of mCloud’s AssetCare capabilities on smart glasses with Aiqudo.

“mCloud’s partnership with Aiqudo provides AssetCare with a distinct competitive edge as we deliver AssetCare to our oil and gas, nuclear, wind, and healthcare customers all around the world,” said Dr. Barry Po, mCloud’s President, Connected Solutions and Chief Marketing Officer. “Connected workers will benefit from reduced training time, ease of use, and support for multiple languages.”

“We are excited to power mCloud solutions with our Voice to Action platform, making it easier for connected workers using AssetCare to get things done safely and quickly,” said Dr. Rajat Mukherjee, Aiqudo’s Co-Founder and CTO. “Our flexible NLU and powerful Action Engine are perfect for creating custom voice experiences for applications on smart glasses and smartphones.”

Aiqudo technology will join the growing set of advanced capabilities mCloud is now delivering by way of its recent acquisition of kanepi Group Pty Ltd. (“kanepi”). The Company announced on September 22 it expected to roll out new Connected Worker capabilities to 1,000 workers in China by the end of the year, targeting over 20,000 in 2021.

BUSINESSWIRE:  mCloud Brings Natural Language Processing to Connected Workers through Partnership with Aiqudo

Official website: www.mcloudcorp.com  Further Information: mCloud Press 

About mCloud Technologies Corp.

mCloud is creating a more efficient future with the use of AI and analytics, curbing energy waste, maximizing energy production, and getting the most out of critical energy infrastructure. Through mCloud’s AI-powered AssetCare™ platform, mCloud offers complete asset management solutions in five distinct segments: commercial buildings, renewable energy, healthcare, heavy industry, and connected workers. IoT sensors bring data from connected assets into the cloud, where AI and analytics are applied to maximize their performance.

Headquartered in Vancouver, Canada with offices in twelve locations worldwide, the mCloud family includes an ecosystem of operating subsidiaries that deliver high-performance IoT, AI, 3D, and mobile capabilities to customers, all integrated into AssetCare. With over 100 blue-chip customers and more than 51,000 assets connected in thousands of locations worldwide, mCloud is changing the way energy assets are managed.

mCloud’s common shares trade on the TSX Venture Exchange under the symbol MCLD and on the OTCQB under the symbol MCLDF. mCloud’s convertible debentures trade on the TSX Venture Exchange under the symbol MCLD.DB. For more information, visit www.mcloudcorp.com.

About Aiqudo

Aiqudo’s Voice to Action® platform voice-enables applications across multiple hardware environments, including mobile phones, IoT and connected-home devices, automobiles, and hands-free augmented reality devices. Aiqudo’s Voice AI comprises a unique natural language command understanding engine, the largest Action Index and action execution platform available, and the company’s Voice Graph analytics platform, which drives personalization based on behavioral insights. Aiqudo powers customizable white-label voice assistants that give our partners control of their voice brand and enable them to define their users’ voice experience. Aiqudo currently powers the Moto Voice digital assistant experience on Motorola smartphones in 7 languages across 12 markets in North and South America, Europe, India, and Russia. Aiqudo is based in Campbell, CA, with offices in Belfast, Northern Ireland.

SOURCE mCloud Technologies Corp.

For further information:

Wayne Andrews, RCA Financial Partners Inc., T: 727-268-0113, wayne.andrews@mcloudcorp.com; Barry Po, Chief Marketing Officer, mCloud Technologies Corp., T: 866-420-1781

Q Actions – Complex tasks through Compound Commands


In many cases, a single action does the job.

Say it. Do it!

Often, however, a task requires multiple actions performed across multiple independent apps. On the go, you just want things done quickly and efficiently, without having to worry about which actions to run and which apps need to be in the mix.

Compound commands allow you to do just that: just say what you want to do, naturally, and, assuming the request makes sense and you have access to the relevant apps, the right actions are magically executed. It’s not that complicated. Just say “navigate to the tech museum and call Kevin”, firing off Maps and WhatsApp in the process. Driving, and in a hurry to catch the train? Just say “navigate to the Caltrain station and buy a train ticket”, launching Maps and the Caltrain app in sequence. Did you just hear the announcement that your plane is ready to board? Say “show my boarding pass and tell Susan I’m boarding now” (American, United, Delta, …) and (WhatsApp, Messenger, …), and you’re ready to get on the flight home. One, two … do!
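
Under the hood, a compound command has to be split into sub-commands, with each piece routed to the right app. Here is a deliberately naive sketch of that idea, invented for illustration; the real Q Actions engine is far more sophisticated:

    # Naive sketch of compound-command handling: split the utterance into
    # sub-commands, route each to an app, and run them in sequence.
    # Invented for illustration; not the actual Q Actions engine.

    APP_FOR_VERB = {
        "navigate": "Maps",
        "call": "WhatsApp",
        "buy": "Caltrain",
        "show": "United",
        "tell": "Messenger",
    }

    def run_compound(utterance: str) -> None:
        # A real parser would handle clauses more carefully than split().
        for part in utterance.split(" and "):
            verb = part.split()[0].lower()
            app = APP_FOR_VERB.get(verb, "<no matching app>")
            print(f"[{app}] {part.strip()}")

    if __name__ == "__main__":
        run_compound("navigate to the tech museum and call Kevin")
        run_compound("show my boarding pass and tell Susan I'm boarding now")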

Compound commands are … complex magic to get things done … simply!