

Natural Voice Recognition for Safety and Productivity in Industrial IoT


Jim Christian, Chief Technology and Product Officer, mCloud

It is estimated that there are 20 million field technicians operating worldwide. A sizable percentage of them cannot always get to the information they need to do their jobs.

Why is that? After all, we train technicians, provide modern mobile devices and specialized apps, and send them to the field with stacks of information and modern communications. Smartphone sales in the United States grew from $3.8 billion in 2005 to nearly $80 billion in 2020. So why isn’t that enough?

One problem is that tools that work fine in offices don’t necessarily work for field workers. A 25-year-old mobile app designer forgets that a 55-year-old field worker cannot read small text on a small screen, or see that screen at all in the glare of natural light. In industrial and outdoor settings, field workers frequently wear gloves and other protective gear. Consider a technician who needs to enter data on a mobile device while outside in freezing weather. He could easily decide to wait until he is back in his truck and can take off his gloves, and as a result not enter the data exactly right. Even without the cold, gloves make typing on a mobile device difficult.

A voice-based interface can be a great help in these situations. Wearable devices that respond to voice are becoming more common. For instance, RealWear makes a headset that is designed to be worn with a hardhat, and one model is intrinsically safe and can be used in hazardous areas. But voice interfaces have not become popular in industrial settings. Why is that?

We could look to the OODA loop (short for Observe, Orient, Decide, and Act) for insight. The concept was developed by U.S. Air Force Colonel John Boyd as a mental model for fighter pilots, who need to act quickly. Understanding the OODA loop that applies in a particular situation helps a person act more quickly and decisively. Field technicians rarely face life-and-death decisions, but the OODA loop still applies: the speed and accuracy of their work depends on the loop for the task at hand.

Consider two technicians who observe an unexpected situation, perhaps a failed asset. John orients himself by taking off his gloves to call his office, searching for drawings in his company’s document management system, and then calling his office again to confirm his diagnosis. Jane orients herself by doing the same search, but by talking instead of typing, keeping her eyes on the asset the whole time. Assuming the voice system is robust, Jane can use her eyes and her voice at the same time, accelerating her Observe and Orient phases, so she does a faster, better job. A system that makes the Observe and Orient phases difficult, as in John’s case, will be rejected by users, whereas a short, easy OODA loop like Jane’s will be accepted.

A downside of speaking to a device is that traditional voice recognition systems can be painfully slow and limited. They recognize the same commands that a user would type or click with a mouse, but most people type and click much faster than they talk. Consider the sequence required to take a picture and send it to someone on a smartphone using your fingers: open the camera app, take the picture, close the camera app, open the photo gallery, select the picture, choose to share it, select a recipient, type a note, and hit send. That is nine or ten distinct operations, yet many people can do it rapidly with their fingers. Executing the same sequence with an old-style, separate voice command for each step would be slow and painful, and most people would find it worse than useless.

The solution is natural voice recognition, where the voice system recognizes what the speaker intends and understands what “call Dave” means. Humans naturally understand that a phrase such as “call Dave” is shorthand for a long sequence (“pick up the phone”, “open the contact list”, “search for ‘Dave’”, and so on). Natural voice recognition has come a long way in recent years, and systems like Siri and Alexa have become familiar for personal use. Field workers often have their own industry shorthand, such as “drop the transmission” or “flush the drum”, which their peers understand but Siri and Alexa do not.
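
To make that concrete, here is a minimal sketch in Python of the idea behind intent expansion: one short phrase stands in for a whole sequence of low-level steps, and industry shorthand can be layered on top. Every name and mapping below is illustrative only, not how any particular assistant is built; a production system would use trained natural language understanding rather than string lookups.

```python
# Illustrative sketch only: map a spoken phrase to an intent, then to the
# sequence of low-level steps that phrase is shorthand for.

INTENT_SEQUENCES = {
    "call": ["open phone app", "open contact list", "search contact", "dial"],
    "send_photo": ["open camera", "take picture", "open gallery",
                   "select picture", "share", "choose recipient", "send"],
}

# Industry shorthand a general-purpose assistant would not know.
DOMAIN_SHORTHAND = {
    "flush the drum": ("run_procedure", {"procedure": "drum flush"}),
    "drop the transmission": ("run_procedure", {"procedure": "transmission removal"}),
}

def interpret(utterance: str):
    """Return an (intent, slots) pair for a spoken command."""
    text = utterance.lower().strip()
    if text in DOMAIN_SHORTHAND:
        return DOMAIN_SHORTHAND[text]
    if text.startswith("call "):
        return "call", {"contact": text[len("call "):].title()}
    if text.startswith("take a picture and send it to "):
        return "send_photo", {"recipient": text.rsplit(" ", 1)[-1].title()}
    return "unknown", {}

for cmd in ["call Dave", "take a picture and send it to John", "flush the drum"]:
    intent, slots = interpret(cmd)
    steps = INTENT_SEQUENCES.get(intent, ["look up site procedure"])
    print(f"{cmd!r} -> {intent}, {slots}, steps: {steps}")
```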

At mCloud, we see great potential in applying natural voice recognition to field work in industries such as oil & gas. Consider a field operator who is given a wearable device with a camera and voice control, and who is able to say things like “take a picture and send it to John”, “take a picture, add a note ‘new corrosion under insulation at the north pipe rack’ and send to Jane”, or “give me a piping diagram of the north pipe rack.” This worker will have no trouble accessing useful information, or using that information to orient himself and make good decisions. An informed field operator gets work done faster, with less trouble and greater accuracy.

The U.S. Chemical Safety Board analyzes major safety incidents at oil & gas and chemical facilities. In a fair number of incidents, a contributing factor was that field workers did not know something or did not have the right information. For instance, an isobutane release at a Louisiana refinery in 2016 occurred in part because field operators used the wrong procedure to remove the gearbox on a plug valve. There was a standard procedure, but about 3% of the plug valves in the refinery were an older design that required different steps to remove the gearbox. The field workers were wearing protective gear and followed a procedure that was correct for a different type of valve but wrong for the valve in front of them. Field workers like these generally have written procedures, but occasionally the work planner misses something or reality in the field differs from what was expected. That means field workers need to adapt, perhaps by calling for help or looking up information such as alternate procedures.

Examples where natural voice recognition can help include finding information, calling other people for advice, recording measurements and observations, inspecting assets, stepping through repair procedures, describing the state of an asset along with recommendations and questions, writing a report about the work done, and collaborating with other people to accomplish tasks. Some of these are ad hoc tasks, like taking a picture or deciding to call someone. Others are part of larger, structured jobs. An isolation procedure in a chemical plant or a transmission replacement is a complex, multi-step procedure that can require specialized information, and an unexpected result from one step may require the field worker to get re-oriented, find new information, or get help.

Aiqudo has powerful natural voice recognition technology, and mCloud is pleased to be working with Aiqudo to apply it. Working together, we can help field workers get what they need by simply asking for it, reach the right people by simply asking for help, confirm their status in a natural way, and in general get the right job done, effectively and without mistakes.


This post is authored by Jim Christian, Chief Technology and Product Officer, mCloud.

Aiqudo and mCloud recently announced a strategic partnership that brings natural Voice technology into emerging devices, such as smart glasses, to support AR/VR and connected worker applications, and also into new domains such as industrial IoT, remote support and healthcare.


mCloud Brings Natural Language Processing to Connected Workers through Partnership with Aiqudo


CANADA NEWSWIRE, VANCOUVER, OCTOBER 1, 2020

mCloud Technologies Corp. (TSX-V: MCLD) (OTCQB: MCLDF) (“mCloud” or the “Company”), a leading provider of asset management solutions combining IoT, cloud computing, and artificial intelligence (“AI”), today announced it has entered into a strategic partnership with Aiqudo Inc. (“Aiqudo”), leveraging Aiqudo’s Q Actions® Voice AI platform and Action Kit SDK to bring new voice-enabled interactions to the Company’s AssetCare™ solutions for Connected Workers.

By combining AssetCare with Aiqudo’s powerful Voice to Action® platform, mobile field workers will be able to interact with AssetCare solutions through a custom digital assistant using natural language.


In the field, industrial asset operators and field technicians will be able to communicate with experts, find documentation, and pull up relevant asset data instantly and effortlessly. This will expedite the completion of asset inspections and operator rounds – an industry first using hands-free, simple, and intuitive natural commands via head-mounted smart glasses. Professionals will be able to call up information on demand with a single natural language request, eliminating the need to search using complex queries or special commands.

Here’s a demonstration of mCloud’s AssetCare capabilities on smart glasses with Aiqudo.

“mCloud’s partnership with Aiqudo provides AssetCare with a distinct competitive edge as we deliver AssetCare to our oil and gas, nuclear, wind, and healthcare customers all around the world,” said Dr. Barry Po, mCloud’s President, Connected Solutions and Chief Marketing Officer. “Connected workers will benefit from reduced training time, ease of use, and support for multiple languages.”

“We are excited to power mCloud solutions with our Voice to Action platform, making it easier for connected workers using AssetCare to get things done safely and quickly,” said Dr. Rajat Mukherjee, Aiqudo’s Co-Founder and CTO. “Our flexible NLU and powerful Action Engine are perfect for creating custom voice experiences for applications on smart glasses and smartphones.”

Aiqudo technology will join the growing set of advanced capabilities mCloud is now delivering by way of its recent acquisition of kanepi Group Pty Ltd. (“kanepi”). The Company announced on September 22 it expected to roll out new Connected Worker capabilities to 1,000 workers in China by the end of the year, targeting over 20,000 in 2021.

BUSINESSWIRE:  mCloud Brings Natural Language Processing to Connected Workers through Partnership with Aiqudo

Official website: www.mcloudcorp.com  Further Information: mCloud Press 

About mCloud Technologies Corp.

mCloud is creating a more efficient future with the use of AI and analytics, curbing energy waste, maximizing energy production, and getting the most out of critical energy infrastructure. Through mCloud’s AI-powered AssetCare™ platform, mCloud offers complete asset management solutions in five distinct segments: commercial buildings, renewable energy, healthcare, heavy industry, and connected workers. IoT sensors bring data from connected assets into the cloud, where AI and analytics are applied to maximize their performance.

Headquartered in Vancouver, Canada with offices in twelve locations worldwide, the mCloud family includes an ecosystem of operating subsidiaries that deliver high-performance IoT, AI, 3D, and mobile capabilities to customers, all integrated into AssetCare. With over 100 blue-chip customers and more than 51,000 assets connected in thousands of locations worldwide, mCloud is changing the way energy assets are managed.

mCloud’s common shares trade on the TSX Venture Exchange under the symbol MCLD and on the OTCQB under the symbol MCLDF. mCloud’s convertible debentures trade on the TSX Venture Exchange under the symbol MCLD.DB. For more information, visit www.mcloudcorp.com.

About Aiqudo

Aiqudo’s Voice to Action® platform voice enables applications across multiple hardware environments including mobile phones, IoT and connected home devices, automobiles, and hands-free augmented reality devices.  Aiqudo’s Voice AI comprises a unique natural language command understanding engine, the largest Action Index and action execution platform available, and the company’s Voice Graph analytics platform to drive personalization based on behavioral insights.   Aiqudo powers customizable white label voice assistants that give our partners control of their voice brand and enable them to define their users’ voice experience.  Aiqudo currently powers the Moto Voice digital assistant experience on Motorola smartphones in 7 languages across 12 markets in North and South America, Europe, India and Russia.  Aiqudo is based in Campbell, CA with offices in Belfast, Northern Ireland.

SOURCE mCloud Technologies Corp.

For further information:

Wayne Andrews, RCA Financial Partners Inc., T: 727-268-0113, wayne.andrews@mcloudcorp.com; Barry Po, Chief Marketing Officer, mCloud Technologies Corp., T: 866-420-1781


What can you do with that Thing?


Often, when you have something to do, you start by searching for information about a particular Thing. Sometimes, you know exactly what that Thing is, but often, you find the Thing by using information related to it. 

“Who is Taylor Swift?” → Taylor Swift

“Who directed Avatar?” → James Cameron

The “Thing” is what we call a Knowledge Entity and something that you can do with that Thing is what we call a Downstream Action. The bond between that Knowledge Entity and the Downstream Action is what we refer to as Actionable Knowledge.

Actionable Knowledge

How do we do this? Our Knowledge database holds information about all kinds of entities, such as movies, TV series, athletes, and corporations. These Entities have a rich semantic structure: we hold detailed information about their attributes, along with the Actions one can perform on them. An Action may be generic (watch a show) or explicitly connected to a mobile app or service (watch the show on Disney+). This knowledge allows the user to follow up an Entity command with an Action.

For example, asking a question such as “How tall is Tom Brady?” gets you his height, i.e., 6’ 4” or 1.93 metres (depending on the locale of the person asking), since Knowledge captures these important attributes about Tom Brady. Note that these attributes differ for different types of Entities. They are determined by the Schema of the Entity, which allows validation, normalization, and transformation of data.
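
As a rough illustration of what a schema-backed Entity might look like, here is a small Python sketch. The field names, the record layout, and the locale handling are assumptions made for the example, not Aiqudo’s actual data model; the point is simply that a normalized attribute (height in metres) can be stored once and rendered differently per locale, alongside the Actions the Entity supports.

```python
# Hypothetical entity record: typed attributes plus the downstream Actions it supports.
from dataclasses import dataclass, field

@dataclass
class Entity:
    entity_type: str
    name: str
    attributes: dict = field(default_factory=dict)   # schema-validated values
    actions: list = field(default_factory=list)      # downstream Actions

def format_height(metres: float, locale: str) -> str:
    """Render a normalized height for the asking user's locale."""
    if locale == "en_US":
        total_inches = round(metres / 0.0254)
        return f"{total_inches // 12}' {total_inches % 12}\""
    return f"{metres:.2f} m"

tom_brady = Entity(
    entity_type="athlete",
    name="Tom Brady",
    attributes={"height_m": 1.93, "sport": "American football"},
    actions=["open_instagram", "show_latest_news"],
)

print(format_height(tom_brady.attributes["height_m"], "en_US"))  # 6' 4"
print(format_height(tom_brady.attributes["height_m"], "en_GB"))  # 1.93 m
```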

A command like “Who is Tom Brady?” returns a Q Card with information about Tom Brady, as shown below. Because multiple entities may match “Tom Brady”, a popularity measure is computed so that the correct Tom Brady is returned, based on popularity, context, and your current session. Popularity is a special attribute computed from multiple attributes of the entity. The Entity Card surfaces the attributes associated with the entity, such as when Tom Brady was born, how tall and heavy he is, and what sport he plays. Other attributes define potential Actions that can follow, so “go to his Instagram” will instantly take you to Tom Brady’s account in the Instagram app.

Q Card for Tom Brady
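
Here is one way the disambiguation step could work, sketched in Python. The signals and weights are invented for the example; the real popularity computation, and how context and the current session factor in, is more sophisticated.

```python
# Hypothetical popularity score: blend several attribute-derived signals and
# pick the highest-scoring candidate among entities that share a name.

def popularity(entity: dict) -> float:
    # Invented weights; a real system would learn them and also factor in
    # the user's context and current session.
    return (0.6 * entity.get("search_volume", 0.0)
            + 0.3 * entity.get("page_views", 0.0)
            + 0.1 * entity.get("recency", 0.0))

def disambiguate(candidates: list) -> dict:
    return max(candidates, key=popularity)

candidates = [
    {"name": "Tom Brady", "type": "athlete",
     "search_volume": 0.9, "page_views": 0.8, "recency": 0.7},
    {"name": "Tom Brady", "type": "author",
     "search_volume": 0.2, "page_views": 0.1, "recency": 0.3},
]
print(disambiguate(candidates)["type"])  # athlete
```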

Actions are about getting things done! Here’s another example of being able to instantly go from information to Action using Actionable Knowledge. Asking “Who is Tom Petty?” followed by a command “listen to him on Spotify” will start playing his music. This is a powerful feature that provides a great user experience and rapid Time to Action®.

Q Card for Tom Petty

The three pillars of Aiqudo’s Q Actions Platform allow us to implement downstream Actions:

  1. Semantically rich Entities in Actionable Knowledge
  2. AI-based Search
  3. Powerful Action execution engine for mobile apps and cloud services

AI Search

We are not limited to just the name of the entity. Our AI-based search allows you to find entities using various attributes of the entity. For example, you can search for stock information by saying “How is Tesla stock doing today?” or “Show me TSLA stock price”. Aiqudo understands both the corporation name and the stock ticker when it needs to find information on a company’s stock price. Some apps, like Yahoo Finance, only understand the stock ticker; they may not be built to accept the name of the company as input. Our platform fills this gap by decoupling action execution from search intent detection. A middle-tier federation module acts as a bridge between intent extraction and Action execution, mapping the attributes of the Entity returned by the search to those required by the Action execution engine. In the example above, it extracts the stockTicker attribute (TSLA) from the corporation entity retrieved by the search (Tesla) and feeds it to the Action engine.

Q Card for Tesla Stock

Voila! Job done!
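
For readers who like to see the moving parts, here is a small Python sketch of that federation step. The identifiers are made up for the example; what matters is that the bridge extracts only the entity attributes a given action declares it needs, such as the stockTicker.

```python
# Hypothetical federation step: map attributes of the Entity found by search
# onto the parameters the chosen app action expects.

ENTITY = {"type": "corporation", "name": "Tesla", "stockTicker": "TSLA"}

# Parameters each (made-up) action requires.
ACTION_PARAMS = {
    "yahoo_finance.show_quote": ["stockTicker"],
    "etrade.open_quote_screen": ["stockTicker"],
}

def federate(entity: dict, action_id: str) -> dict:
    """Extract just the entity attributes the chosen action requires."""
    required = ACTION_PARAMS[action_id]
    missing = [p for p in required if p not in entity]
    if missing:
        raise ValueError(f"entity lacks required parameters: {missing}")
    return {p: entity[p] for p in required}

print(federate(ENTITY, "yahoo_finance.show_quote"))  # {'stockTicker': 'TSLA'}
```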

So, what can you do with that Thing? Well, you can instantly perform a meaningful Action on it using the apps on your mobile phone. In the example above, you can jump to Yahoo News to get the latest finance news about Tesla, or go to the stock quote screen within E*Trade, the app you use and trust, to buy Tesla shares and make some money!


Accessibility plus utility plus convenience!


It’s great to see various platforms announce specific accessibility features on this Global Accessibility Awareness Day.

A feature that caught our attention today was Google’s Assistant-powered Action Blocks.

It’s a new app that allows users to create simple shortcuts to Actions they commonly perform. They are powered by Google Assistant, but allow for invocation through a tap.

My Actions and Favorites

We built this functionality into Aiqudo’s Q Actions when we launched it in 2017. Our approach is different in several ways:

  • The user does not need to do any work; Q Actions does it automatically
  • Q Actions builds these dynamically – your most recently used Actions, and your favorite ones are automatically tracked for you – you just need to say “show my actions”
  • These handy Action shortcuts are available to you with one swipe to the right in the Q Actions app. One tap to invoke your favorite action. 
  • There’s no new app just for accessibility – it’s built into your Assistant interface for convenience – you just need to say “Hello Q, show my Actions”
  • There are hundreds of unique high-utility Actions you can perform that are not available on any other platform, including Google Assistant. Here are a few examples:
    • “whose birthday is it today?” (Facebook)
    • “show my orders” (Amazon, Walmart)
    • “start meditating” (Headspace)
    • “watch the Mandalorian” (Disney+)
    • “watch Fierce Queens” (Quibi)
    • “show my tasks” (Microsoft To Do, Google Tasks)
    • “show my account balances” (Etrade)
    • “join my meeting with my camera off” (Google Hangouts, Zoom)
    • “call Mark” (Whatsapp, Messenger, Teams, Slack,…)
    • “send money to John” (PayPal)
    • …and on and on and on

It’s just easier, better and more powerful! 

And available to everyone!


Aiqudo and Sensory Collaborate on Holistic Voice Solutions


Will develop solutions that simplify the process of integrating voice assistants into a variety of devices  

BUSINESSWIRE, LAS VEGAS, January 6, 2020

Aiqudo, a leading voice technology pioneer, today announced that it is collaborating with embedded voice and vision AI leader Sensory to bring to market comprehensive voice solutions that serve as white-label alternatives for voice services and assistants. The two companies are working on solutions targeting automotive, mobile app, smart home and wearable device applications.

Currently, companies and brands must piece together different technologies to fully implement a voice solution. With their technologies combined, Aiqudo and Sensory will deliver a fully integrated end-to-end solution that combines Sensory’s wake word, voice biometrics and natural language recognition technologies with Aiqudo’s multilingual intent understanding and action execution (Q Actions®) to provide the complete Voice to Action® experience consumers expect.
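
Conceptually, the combined offering chains a few stages, from on-device wake word and speaker verification through to intent understanding and action execution. The Python sketch below is only an outline of that flow; every function is a stand-in, not the Sensory or Aiqudo SDKs.

```python
# Outline of an end-to-end voice pipeline: wake word, speaker verification,
# transcription, then intent understanding and action execution.

def detect_wake_word(utterance: str) -> bool:
    # Stand-in for brand-specific wake-word spotting (e.g. TrulyHandsfree).
    return utterance.lower().startswith("hello q")

def verify_speaker(utterance: str) -> str:
    # Stand-in for voice biometrics (e.g. TrulySecure); returns a user id.
    return "alice"

def extract_command(utterance: str) -> str:
    # Stand-in for speech-to-text; here the "audio" is already a transcript.
    return utterance.split(",", 1)[1].strip()

def execute_action(user_id: str, command: str) -> str:
    # Stand-in for intent understanding and action execution (Q Actions).
    return f"[{user_id}] executing: {command}"

def handle_utterance(utterance: str):
    if not detect_wake_word(utterance):
        return None                      # stay idle until the wake word
    user = verify_speaker(utterance)
    if user is None:
        return "Voice not recognized."
    return execute_action(user, extract_command(utterance))

print(handle_utterance("Hello Q, play my driving playlist"))
```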

“Voice adoption continues to grow rapidly, and brands are always exploring ways to streamline the process of integrating a convenient voice UX into their products,” said Todd Mozer, Sensory’s CEO. “Working with Aiqudo allows our two companies to provide the industry a turn-key solution for integrating powerful voice assistants into their products that feature brand-specific wake words and are capable of recognizing who is speaking.”


With both Aiqudo and Sensory positioned as leaders in their respective fields, this collaboration is a natural fit, as their technologies are highly complementary. The initial integration focuses on the automotive vertical and will be showcased at the 2020 Consumer Electronics Show.

Aiqudo’s Auto Mode highlights a highly personalized user experience in the car using the Q Actions® platform. Enhanced with Sensory’s wake word (TrulyHandsfree™) and voice ID (TrulySecure™) functionality, multiple users can seamlessly access their personal mobile devices just by using their voice to execute personal actions. “Users just need to enter the cabin with their smartphones,” said Rajat Mukherjee, Aiqudo CTO. “There’s no registration required, and the personalized wake word and voice biometrics allow users to instantly access their personal apps seamlessly and securely.”

“Brands increasingly want to create their own branded voice experiences for their customers,” said John Foster, CEO of Aiqudo. “Working with Sensory, we have created the easiest and fastest way for brands to bring the power and convenience of voice to their customers. We are excited to integrate our areas of practice and expertise to deliver a comprehensive solution.”  

To view the demo onsite while at CES, please email Aiqudo at CES@aiqudo.com. 

BUSINESSWIRE: Aiqudo and Sensory Collaborate on Holistic Voice Solutions

About Sensory

For over 25 years, Sensory has pioneered and developed groundbreaking applications for machine learning and embedded AI – turning those applications into household technologies. Pioneering the concept of always-listening speech recognition more than a decade ago, Sensory’s flexible wake word, small to large vocabulary speech recognition, and natural language understanding technologies are fueling today’s voice revolution. Additionally, its biometric recognition technologies are making everything from unlocking a device to authenticating users for digital transactions faster, safer and more convenient. Sensory’s technologies are widely deployed in numerous markets, including automotive, home appliances, home entertainment, IoT, mobile phones, wearables and more, and have shipped in over two billion units of leading consumer products and software applications.

For more information about this announcement, Sensory or its technologies, visit https://www.sensory.com/, contact sales@sensory.com or for press inquiries contact press@sensory.com.

About Aiqudo

Aiqudo (pronounced: “eye-cue-doe”) is a Voice AI pioneer that connects the nascent world of voice interfaces to the useful, mature world of mobile apps and cloud services through its Voice to Action® platform. It lets people use natural voice commands to execute actions in mobile apps and cloud services across devices. Aiqudo’s SaaS platform uses machine learning (AI) to understand natural-language voice commands and then triggers instant actions via mobile apps, cloud services, or device actions, enabling consumers to get things done quickly and easily.

Aiqudo’s proprietary technology is covered by more than 30 granted patents and patent applications. Aiqudo’s technology is delivered in a scalable approach to creating voice-enabled actions without mandating APIs or developer dependencies.

To see Aiqudo in action, visit Aiqudo’s YouTube channel (youtube.com/aiqudo)


The BYTON next generation infotainment experience


Byton Multiple Displays

Peter Mortensen, Solution Architect, BYTON, Santa Clara, California

BYTON is the automotive EV brand taking the lead in next-generation infotainment experiences, with a vision of giving vehicle occupants the best combination of safety and entertainment while on the road. The first BYTON is the M-BYTE SUV, which incorporates a groundbreaking infotainment system: an advanced Android Automotive app platform combined with BYTON’s unique user interface, including the industry’s largest in-car display and user control via touch, gesture, and voice.

One of the key challenges when expanding the infotainment capabilities of a vehicle is avoiding unsafe distractions for the driver. With extensive support for voice control, the driver of a BYTON can operate the vehicle more safely, keeping more visual focus on the driving itself. Another challenge is how well a vehicle’s infotainment system provides the frequently used features of the popular apps and online services on the occupants’ smartphones.

Aiqudo’s Voice to Action® platform gives occupants of a BYTON vehicle a simple and seamless link between natural language voice interaction and their favorite personal apps. With this solution, the occupants of a BYTON can speak naturally to their favorite apps, using the vehicle’s powerful microphone array, from any seat in the cabin. The apps can reside either in the vehicle’s own infotainment system or on the occupants’ Bluetooth-connected Android or iOS smartphones. The Aiqudo solution interprets the spoken commands and automatically identifies which specific app to control. This means, for example, that the driver can send a message by voice through his favorite social networking app on his smartphone, hands-free, without picking up the device, and stay focused on safely driving the vehicle.
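
A rough sketch of that routing decision is below, in Python. The device and app names are illustrative; the real platform discovers apps dynamically and resolves intents with its NLU, but the shape of the decision is similar: find an app that can handle the intent, wherever it lives in the cabin.

```python
# Illustrative routing: pick the device and app that can handle an intent,
# whether it lives on the head unit or on a connected phone in the cabin.

AVAILABLE_APPS = {
    "vehicle_head_unit": {"navigation", "radio"},
    "drivers_phone": {"whatsapp", "spotify", "navigation"},
    "passenger_phone": {"wechat", "qq_music"},
}

INTENT_TO_APP = {
    "send_message": "whatsapp",
    "play_music": "spotify",
    "navigate": "navigation",
}

def route(intent: str):
    """Return the (device, app) pair that should execute the intent, if any."""
    app = INTENT_TO_APP.get(intent)
    if app is None:
        return None
    for device, apps in AVAILABLE_APPS.items():
        if app in apps:
            return device, app
    return None

print(route("send_message"))  # ('drivers_phone', 'whatsapp')
```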

BYTON is committed to intelligently combining the best of voice control with the safe use of touch control, gesture control, and display feedback. The Voice to Action® platform expands beyond voice interaction by allowing BYTON’s engineers to touch-enable relevant app controls and provide display feedback where appropriate. Furthermore, BYTON’s voice interaction concept aims high, creating a truly intuitive voice experience by integrating Aiqudo’s app control with vehicle control and online digital assistant services such as Amazon Alexa, depending on the country.

More information on our Aiqudo partnership is available at BYTON’s new developer site: AIQUDO and BYTON Partner to enable actions for your phone and car

This post is authored by Peter Mortensen, BYTON Solution Architect. Aiqudo and BYTON announced a strategic partnership at CES 2020 that features voice control of apps in BYTON automobiles using Aiqudo technology.


Q Actions – Voice feedback from apps using Talkback™


Wonder why you can’t talk to your apps, and why your apps can’t talk back to you?  Stop wondering, as Talkback™ in Q Actions does exactly that. Ask “show my tasks” and the system executes the right action (Google Tasks) and, better yet, tells you what your tasks are – safely and hands-free, as you drive your car.

Driving to work and stuck in traffic?  Ask “whose birthday is it today?” and hear the short list of your friends celebrating their birthdays (Facebook). You can then say  “tell michael happy birthday”  to wish Mike (WhatsApp or Messenger). And if you are running low on gas, just say “find me a gas station nearby” and Talkback™ will tell you where the nearest gas station is and how much you’ll pay for a gallon of unleaded fuel.
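
Under the hood, the idea is simple: run the matched action, get its result back as data, and hand that data to text-to-speech. The Python sketch below mocks both sides to show the flow; it is not the actual Q Actions implementation.

```python
# Mocked Talkback flow: execute an action, then speak its results aloud.

def run_action(command: str) -> list:
    # Stand-in for executing the matched app action and capturing its output.
    mocked_results = {
        "show my tasks": ["Pick up filters", "Submit inspection report"],
        "whose birthday is it today?": ["Michael", "Priya"],
    }
    return mocked_results.get(command, [])

def speak(text: str) -> None:
    # Stand-in for the device's text-to-speech engine.
    print(f"(spoken) {text}")

def talkback(command: str) -> None:
    results = run_action(command)
    if not results:
        speak("I didn't find anything for that.")
        return
    speak(f"You have {len(results)} items: " + ", ".join(results))

talkback("show my tasks")
```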

Say it. Do it. Hear it spoken back!


Q Actions – Task completion through Directed Dialogue™


When an action or a set of actions requires specific input parameters, Directed Dialogue™ allows the user to supply the required information through a very simple, natural back-and-forth conversation. Enhanced with parameter validation and user confirmation, Directed Dialogue™ allows complex tasks to be performed with confidence. Directed Dialogue™ is not about open-ended conversation; it is about getting things done, simply and efficiently.

With Q Actions, Directed Dialogue™ is automatically enabled for every action in the system, because we know the semantic requirements of every action’s parameters. It is not constrained to particular cases; it applies to all actions across all verticals.

Another application of Directed Dialogue™ is input refinement. Let’s say I want to purchase batteries. If I just say “add batteries to my shopping cart”, I can get the wrong product added to my cart, as on Alexa, which does the wrong thing for a new product order (the right thing happens on a reorder). With Q Actions, I can provide the brand (Duracell) and the type (9V, 4 pack) through a very simple Directed Dialogue™, and exactly the right product is added to my cart – in the Amazon or Walmart app.
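
Viewed as code, Directed Dialogue™ is essentially slot filling with validation and a confirmation step. The Python sketch below walks through the batteries example with scripted answers in place of live speech; the slot names, prompts, and validation rule are all assumptions for illustration, not the production system.

```python
# Slot-filling sketch: ask only for missing, validated parameters, then confirm.

REQUIRED_SLOTS = {
    "add_to_cart": ["product", "brand", "variant"],
}

def valid(slot: str, value: str) -> bool:
    # Trivial stand-in; a real system would validate against the catalog.
    return bool(value.strip())

def directed_dialogue(action: str, filled: dict, ask) -> dict:
    """Collect missing slots, validate them, and confirm before executing."""
    slots = dict(filled)
    for slot in REQUIRED_SLOTS[action]:
        while slot not in slots or not valid(slot, slots[slot]):
            slots[slot] = ask(f"Which {slot}?")
    summary = ", ".join(f"{k}: {v}" for k, v in slots.items())
    return slots if ask(f"Add {summary} to your cart?").lower().startswith("y") else {}

# Scripted answers standing in for the user's spoken replies.
answers = iter(["Duracell", "9V 4 pack", "yes"])
print(directed_dialogue("add_to_cart", {"product": "batteries"},
                        lambda prompt: next(answers)))
# {'product': 'batteries', 'brand': 'Duracell', 'variant': '9V 4 pack'}
```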

Get Q Actions today.


The Evolution of Our In-Car Experience


As the usage model for cars continues to shift away from traditional ownership and leasing to on-demand, ridesharing, and in the future, autonomous vehicle (AV) scenarios, how we think about our personal, in-car experience will need to shift as well.

Unimaginable just a few short years ago, today, we think nothing of jumping into our car and streaming our favorite music through the built-in audio system using our Spotify or Pandora subscription. We also expect the factory-installed navigation system to instantly pull up our favorite or most-commonly used locations (after we’ve entered them) and present us with the best route to or from our current one. And once we pair our smartphone with the media system, we can have our text and email messages not only appear on the onboard screen but also read to us using built-in text-to-speech capabilities.  It’s a highly personalized experience in our car.

When we use a pay-as-you-go service, such as Zipcar, we know we’re unlikely to have access to all of the tech comforts of our own vehicle, but we can usually find a way to get our smartphone paired for handsfree calling and streaming music using Bluetooth. If not, we end up using the navigation app on our phone and awkwardly holding it while driving, trying to multitask. It’s not pretty. And when we hail a rideshare, we don’t expect to have access to any of the creature comforts of our own car.

But what if we could?

Just as our relationship to media shifted from an ownership model–CDs or MP3 files on iPods–to subscription-based experiences that are untethered to a specific device but can be accessed anywhere at any time, it’s time to shift our thinking about in-car experiences in the same way.

It’s analogous to accessing your Amazon account and continuing to watch the new season of “True Detective” on the TV at your Airbnb–at the exact episode where you left off last week. Or listening to your favorite Spotify channel at your friend’s house through her speakers.

All your familiar apps (not just the limited Android Auto or Apple CarPlay versions) and your personalized in-car experience–music, navigation, messaging, even video (if you’re a passenger, of course)–will be transportable to any vehicle you happen to jump into, whether it’s a Zipcar, rental car or some version of a rideshare that’s yet to be developed. What’s more, you’ll be able to easily and safely access these apps using voice commands. Whereas today our personal driving environment is tied to our own vehicle, it will become something that’s portable, evolving as our relationship to cars changes over time.

Just on the horizon of this evolution in our relationship with automobiles? Autonomous vehicles, or AVs, in which we become strictly a passenger, perhaps one of several people sharing a ride. Automobile manufacturers today are thinking deeply about what this changing relationship means to them and to their brands. Will BMW become “The Ultimate Riding Machine?” (As a car guy, I personally hope not!) And if so, what will be the differentiators?

Many car companies see the automobile as a new digital platform, for which each manufacturer creates its own, branded, in-car digital experience. In time, when we hail a rideshare or an autonomous vehicle, we could request a Mercedes because we know that we love the Mercedes in-car digital experience, as well as the leather seats and the smooth ride.

What happens if we share the ride in the AV, because, well, they are rideshare applications after all? The challenge for the car companies becomes creating a common denominator of services that define that branded experience while still enabling a high degree of personalization. Clearly, automobile manufacturers don’t want to become dumb pipes on wheels, but if we all just plug in our headphones and live on our phones, comfy seats alone aren’t going to drive brand loyalty for Mercedes. On the other hand, we don’t all want to listen to that one guy’s death metal playlist all the way to the city.  

The car manufacturers cannot create direct integrations to all services to accommodate infinite personalization. In the music app market alone there are at least 15 widely used apps, but what if you’re visiting from China? Does your rideshare support China’s favorite music app, QQ?  We’ve already made our choices in the apps we have on our phones, so transporting that personalized experience into the shared in-car experience is the elegant way to solve that piece of the puzzle.

This vision of the car providing a unique digital experience is not that far-fetched, nor is it that far away from becoming reality. It’s not only going to change our personal ridesharing experience, but it’s also going to be a change-agent for differentiation in the automobile industry.

And it’s going to be very interesting to watch.


Q Actions 1.3 update is now available on Google Play!


Q Actions now enables you to make calls directly using voice commands, regardless of whether your contact is in your phone book or in a third-party app like WhatsApp.

Remembering friends and family across multiple phone books and communication apps is cumbersome. Through voice, you can privately tell Q Actions which contact you want to connect with and what app you want to place the call with, safely and hands free.

Juggling multiple phone books across your apps can be tedious … we’ve got your back!

Also, try out some of the new and improved actions from familiar apps that you already have on your phone: Netflix, Spotify, Waze, Maps, Facebook, and more.

Just launch Q Actions and say:

    • “dial John”, “call Jason on WhatsApp” (Phone, WhatsApp)
    • “Play Stranger Things”, “watch Netflix originals” (Netflix)
    • “play songs by Drake”, “play mint playlist” (Spotify)
    • “take me to work”, “I want to drive home” (Waze)
    • “are any of my friends nearby?”, “view upcoming events” (Facebook)


As always, we welcome your feedback.