We’re ending a crazy 2020 with something sweet – the release of Q Actions 2.5! In our latest version, we’re proud to announce a couple of unique and useful features: Voice Announcements and new Parameter Options.
Now, app notifications have a Voice! We’ve made it super-simple for your apps to talk to you, and for you to follow up … hands-free! The Voice Announcements feature gives you full control – you decide which app notifications are announced, and when. You can select one of our preset time profiles, like Work (8:00 AM – 5:00 PM), or create your own. Specific Voice Announcements for incoming Calls, Texts, WhatsApp, and Twitter allow you to act upon the notification with a follow-on action. We empower users to simply do more with voice. Check out the video below to see hands-free Voice Announcements in action.
Q Actions can help users select from among multiple valid options for an action using voice. Our Custom Knowledge knows what actions and content are available on a particular app or service. For example, Q Actions knows an awful lot about popular movies, TV series, and music. So, the next time you feel like watching Star Wars, we can give you a list of titles relevant to the app you’ve selected. Of course, if you know exactly what you want, simply tell Q Actions and it’ll take you straight to that title.
Do more with voice, hands-free! Q Actions is now available on Google Play.
If you’ve been following technology news recently, you might have heard that there’s a privacy war brewing. It should also come as no surprise that the digital assistants you use on a daily basis know a terrifying amount of information about you. At the same time, there’s no arguing that some of this is ultimately useful to you, as a consumer. This personal information is used to enable phone calls to your loved ones, or to take you to the right address when you navigate “home”.
As consumers become more privacy-conscious, however, they’re starting to ask if perhaps they’re giving more than they’re getting. Where do you draw the line? You might be fine with letting Amazon access your calendar, but what about your Spotify password, or your online banking account?
At Aiqudo, we care deeply about user privacy and providing utility. Often, this means we need to work that much harder at things that may seem easy or trivial for other digital assistants because they have all this access to your data. Let’s look at a few of the ways that Aiqudo is able to deliver personal and private Actions for your mobile device.
The first and simplest way that Aiqudo can guarantee privacy is by simply not collecting the data in the first place. For example, we don’t require you to create an account, or to give us credentials to access any of the apps or services available on our platform – you use your trusted apps as you normally would, e.g., with biometric authentication. The only information Aiqudo collects is what apps the user has installed on their device, and what the user says, i.e., user commands. The former is used to personalize and filter our Action results to what is most relevant to that particular user and device. In addition, Aiqudo uses a randomized identifier to track a user within our system. This is not tied to any personal information like an email address or phone number. This identifier is also unique to the Q Actions application, which means that user data from other applications cannot be correlated to Aiqudo user activity either.
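The post doesn’t describe the implementation, but a randomized, app-specific identifier like the one described can be sketched in a few lines. This is only an illustration, assuming a random UUID kept in the app’s local storage; the `store` dict and function name are hypothetical:

```python
import uuid

def get_or_create_user_id(store: dict) -> str:
    """Return the app's randomized identifier, creating one on first run.

    The ID is a random UUID: it is not derived from an email address,
    phone number, or any other personal information, so it cannot be
    tied back to the user or correlated with activity in other apps.
    """
    if "user_id" not in store:
        store["user_id"] = str(uuid.uuid4())  # random, app-specific
    return store["user_id"]
```

Because the value is random rather than derived, rotating it (or reinstalling the app) severs any link to past activity.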
What happens to the data that Aiqudo does collect? Ultimately, only aggregated data is stored for the long term. This data is valuable to understand what kinds of queries users ask, or when we may have incorrectly classified an intent. We do not use this data to track queries made by an individual user, or create a user profile. Aiqudo is GDPR compliant.
Aiqudo’s Private Data Manager
However, in some cases, we need to know a little bit more about you. If you’re trying to send your TPS report to Bill, we’d like to be able to identify the right contact to send that critical document to. So while you may notice that we do ask for access to things like your calendar or your contact list in the Q Actions app, it’s important to know that we never send this information to our servers. Instead, we annotate user queries with hints to indicate that a certain word or phrase matches a local contact or meeting name. This improves the accuracy of our intent matching without requiring direct access to personal or private information.
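The hint-annotation idea can be sketched roughly as follows. This is a simplified illustration, not Aiqudo’s actual code; the function name and hint format are made up for the example:

```python
def annotate_query(query: str, local_contacts: list) -> dict:
    """Attach hints to a query where it matches on-device data.

    Only the query plus lightweight hints ("these words match a contact")
    would leave the device; the contact list itself never does.
    """
    hints = []
    lowered = query.lower()
    for name in local_contacts:
        idx = lowered.find(name.lower())
        if idx != -1:
            hints.append({"start": idx, "end": idx + len(name), "type": "contact"})
    return {"query": query, "hints": hints}
```

For example, `annotate_query("send my TPS report to Bill", ["Bill", "Tiffany"])` tags the span covering “Bill” as a contact, so the intent matcher knows that word is a person without ever seeing the address book.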
This approach is simple, but very powerful. We’ve also added the ability to send hints about previous Actions that a user has run, and their input or output. For example, if you searched for Chinese restaurants nearby, we might store the resultant list of restaurants on your device. Then, if you follow up by telling Q Actions “take me to the second one”, we know which restaurant you’re talking about and can start turn-by-turn directions to that address.
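Resolving a follow-up like “take me to the second one” against a locally cached result list can be sketched like this (a toy illustration under the assumption that prior results are cached on-device; names are hypothetical):

```python
ORDINALS = {"first": 0, "second": 1, "third": 2, "last": -1}

def resolve_followup(command: str, cached_results: list):
    """Resolve an ordinal reference against results cached on the device."""
    words = command.lower().split()
    for word, index in ORDINALS.items():
        if word in words and -len(cached_results) <= index < len(cached_results):
            return cached_results[index]
    return None
```

With the restaurant list stored locally, “take me to the second one” picks the second entry, whose address can then seed turn-by-turn directions.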
That’s not all we can do. A business has a lot more information. Sometimes we get review ratings or a phone number in addition to an address. We can search this information locally when you refer to previous actions that you’ve taken, or when starting a new interaction with Q Actions. This means we can take that restaurant and send its address to a friend. Or we can generate options when you say something like, “get me in touch with someone in the Engineering department”.
Another really powerful thing that we can do with our Private Data Manager is understand some of the relevant data in your apps (with your permission, of course), e.g., your Spotify playlists. So if you say “play Calming Acoustic”, which happens to refer to one of your favorite playlists, we kick off this action in Spotify (not Pandora) without you having to explicitly say so; this information stays safe on your device, and within your trusted apps.
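Choosing the app based on an on-device playlist index might look like the sketch below. This is an assumption-laden illustration (the data shape and function name are invented), not the Private Data Manager’s real API:

```python
def pick_music_app(utterance: str, local_playlists: dict):
    """Pick the app whose on-device playlist title appears in the utterance.

    local_playlists maps app name -> playlist titles indexed locally;
    nothing here is sent to a server.
    """
    text = utterance.lower()
    for app, titles in local_playlists.items():
        if any(title.lower() in text for title in titles):
            return app
    return None
```

So “play Calming Acoustic” selects Spotify when “Calming Acoustic” is one of your Spotify playlists, with the match computed entirely on the device.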
We’ve talked about how this works with simple, everyday examples, but the functionality we’ve built means we have the unique ability to work with privacy-conscious or sensitive applications in verticals like finance, or healthcare. Partners also have the ability to import structured data into the Action Kit (SDK) on the client. This data is searched whenever a user makes a request, and the user query is annotated with hints, just like contacts or other built in data types. Partners have full control over what is stored, or when it is updated.
I hope this gives you a better understanding of how we treat private data. As a company, we firmly believe that users should be able to control the flow of their data, and not feel like it’s being taken hostage because of a handful of useful or maybe even critical features that they have on their phone. Most users don’t fully understand what data is being collected, or how it can be used in the wrong hands. It’s our job to educate and put in place sensible safeguards that restrict the flow of private data while still being able to deliver the same level of utility. We’ve shown that with the right kind of thinking and a little (or a lot) of elbow grease, this is possible, and consumers should demand nothing less.
This version of Q Actions features contextual downstream actions, integration with your calendar, as well as under-the-bonnet improvements to our matching engines. Q Actions helps users power through their day by being more useful and thoughtful.
Q Actions understands the context when performing your actions. Let’s say you call a contact in your phonebook with the command “call Tiffany”. You can then follow-up with the command “navigate to her house”. Q Actions is aware of the context based on your previous command and is able to use that information in a downstream action.
say “call Tiffany”
then “navigate to her house”
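The context carry-over above can be sketched with a minimal context store. This is a toy model, assuming contacts and addresses live on the device; the class and return formats are invented for illustration:

```python
class ActionContext:
    """Minimal context store: entities from one command feed the next."""

    def __init__(self, contacts: dict):
        self.contacts = contacts      # contact name -> home address
        self.last_person = None

    def run(self, command: str) -> str:
        text = command.lower()
        for name in self.contacts:
            if name.lower() in text:
                self.last_person = name
        if ("her house" in text or "his house" in text) and self.last_person:
            # the pronoun resolves to the most recently mentioned contact
            return "navigate:" + self.contacts[self.last_person]
        return "call:" + (self.last_person or "?")
```

After “call Tiffany”, the follow-up “navigate to her house” resolves “her” to Tiffany and starts navigation to her stored address.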
Stay on top of your schedule and daily events with the recently added Calendar actions. Need to see what’s coming up next? Just ask “when is my next meeting?” and Q Actions will return a card with all the important event information. Need to quickly schedule something on your calendar? Say “create a new event” and after a few questions, your event is booked. On the go and need to join a video conferencing meeting? Simply say “join my next meeting” and Q Actions will take you directly to your meeting in Google Meet. All you have to do from there is confirm your camera/audio settings and join!
“when is my next meeting?”
“create a new event”
“join my next meeting”
Simply do more with voice! Q Actions is now available on the App Store.
The recent release of Q Actions 2.4 emphasizes Aiqudo’s focus on productivity and utility through voice. As voice assistants are becoming an increasingly ubiquitous part of our daily lives, Aiqudo aims to empower users to get things done. Many of the improvements and enhancements are “under the hood” – we’ve increased personalization and expanded the knowledge that drives our Actions.
Our content-rich Q Cards leverage Actionable Knowledge to extend functionality into popular 3rd-party apps. Start by asking about an artist, music group, sports athlete, or celebrity: “who is Tom Hanks”. Aiqudo’s Q Card not only presents information about the actor, but also asks “what next?”. Say “view his Twitter account” or “go to his Instagram”, and Actionable Knowledge will drop you exactly where you want to go!
Sample Actionable Knowledge Flow:
Ask “who is Taylor Swift?”
Select one of the supported Actionable Knowledge apps
“listen to her on Spotify”
“go to her Facebook profile”
“check out her Instagram”
Personalization … with privacy
Q Actions is already personalized, showing you Action choices based on the apps you already trust. We can now leverage personal data as signals to personalize your experience, while still protecting your privacy. It’s another iteration of our continued focus and dedication to increase productivity and augment utility using voice. For example, if you checked in to your United Airlines flight, and then, the following day, say “show my boarding pass”, the United Airlines action is promoted to the top – exactly what you’d expect the system to do for you.
Our new Personal Data Manager allows secure optimization for specific apps. If you have a Spotify playlist called “Beach Vibes”, and you say “play Beach Vibes”, we understand what you want and will promote your personal playlist over a random public channel by that name. Your playlists are not shipped off the device to our servers, but we can still use the relevant information to short-cut your day! If “Casimo Caputo” is a friend in Facebook Messenger, Messenger will trump WhatsApp for “tell Casimo Caputo let’s meet for lunch”. But “message Mark Smith let’s play FIFA tonight” brings up WhatsApp, since Mark Smith is your WhatsApp buddy.
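The app-promotion logic can be sketched as a simple ranking over on-device personal signals. A minimal sketch, assuming each app has a locally indexed set of personal terms (playlist titles, contact names); the function and data shape are illustrative:

```python
def rank_apps(command: str, candidates: list, personal_index: dict) -> list:
    """Order candidate apps by how well on-device personal data matches.

    personal_index maps app name -> terms indexed on the device;
    the index itself never leaves the device.
    """
    text = command.lower()

    def score(app):
        return sum(1 for term in personal_index.get(app, []) if term.lower() in text)

    return sorted(candidates, key=score, reverse=True)
```

With “Casimo Caputo” indexed under Messenger and “Mark Smith” under WhatsApp, each command promotes the app that actually knows the person you named.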
Simply do more with voice! Q Actions is now available on Google Play.
Peter Mortensen, Solution Architect, BYTON, Santa Clara, California
BYTON is the automotive EV brand taking the lead in next-generation infotainment experiences, with a vision of giving vehicle occupants the best combination of safety and entertainment while on the road. The first BYTON vehicle, the M-BYTE SUV, incorporates a groundbreaking infotainment system: an advanced Android Automotive app platform combined with BYTON’s unique user interface, including the industry’s largest display and user control via touch, gesture, and voice.
One of the key challenges when expanding the infotainment capabilities in a vehicle is to avoid unsafe distractions for the driver. With extensive support for voice control, the driver of a BYTON can operate the vehicle more safely, keeping more visual focus on the driving itself. Another challenge is how well a vehicle’s infotainment system provides the frequently used features of popular apps and online services common on the occupants’ smartphones.
Aiqudo’s Voice to Action® platform gives occupants of a BYTON vehicle a simple and seamless link between natural-language voice interaction and their favorite personal apps. With this solution, the occupants of a BYTON can speak naturally to their favorite apps using the vehicle’s powerful microphone array system from any seat in the cabin. The apps can either reside in the vehicle’s own infotainment system or on the Bluetooth-connected Android or iOS smartphones of the occupants. The Aiqudo solution interprets the spoken commands and automatically identifies which specific app to control. For example, this means the driver can easily send a message using voice through his favorite social networking app installed on his smartphone, hands-free, without picking up the device, thus staying focused on safely driving the vehicle.
BYTON is committed to intelligently combining the best of voice control with the safe use of touch control, gesture control, and display feedback. The Voice to Action® platform expands beyond voice interaction by allowing BYTON’s engineering to touch-enable relevant app controls and provide display feedback when relevant. Furthermore, BYTON’s voice interaction concept aims high by creating a truly intuitive voice interaction experience through integration of Aiqudo’s app control with vehicle control and online digital voice assistant services, such as Amazon’s Alexa, depending on country.
Aiqudo, a voice technology pioneer, announced ahead of CES 2020 a partnership with premium electric vehicle manufacturer BYTON, bringing the power of Aiqudo’s Voice AI platform to BYTON cars. Aiqudo’s Voice to Action® platform will enable interacting with your favorite apps on your mobile phone hands-free while driving, seamlessly integrating with BYTON’s unique Digital Experience.
Aiqudo’s industry-leading Voice AI platform will voice-enable actions in native apps within the BYTON ecosystem as well as intelligently launch app actions on personal mobile devices in the vehicle. BYTON drivers and passengers will be able to navigate, make calls, send messages, listen to music, shop, join meetings, make payments and more using simple voice commands with apps they use and love. In its integration with BYTON, Aiqudo incorporates the personalization and individual choice reflected by consumers’ favorite apps, as well as personal elements within apps such as preferred playlists, contacts, or favorites, all without user registration or setup. The BYTON experience powered by Aiqudo delivers the safest, easiest and most useful way to use a mobile device while in the car.
“A seamless voice experience is integral to BYTON’s groundbreaking user experience and Aiqudo Voice will make accessing your favorite apps convenient and safe,” said Jeff Chung, BYTON Vice President of Digital Engineering. “Aiqudo’s white label solution allows us to explore new possibilities with our expanding partnerships in the BYTON digital ecosystem.”
Aiqudo’s voice platform comprises a semiotics-based intent engine that currently understands natural language commands in seven languages, plus an action execution capability across thousands of applications that consumers rely on daily. The company’s white-label voice platform allows car manufacturers, phone and smart device OEMs, and mobile app developers to define unique voice experiences for their customers.
“Byton has reimagined the relationship between cars and the people who drive or ride in them, placing voice-based interactions at the center of the in-car experience. We believe that voice will soon be the primary way people interact with their digital world. We’re partnering with BYTON to bring a high utility, personalized voice experience to their automobiles,” said John Foster, CEO of Aiqudo. “The in-car experience is a prime use case demonstrating the power of voice. Customers can now drive safely, undistracted and hands-free, and still use their favorite apps just by using their voice.”
Aiqudo’s Action Kit functionality will be offered to app developers through the BYTON developer portal.
“Action Kit enables BYTON app developers to easily and effortlessly enable voice within their applications for the company’s range of cars and expansive infotainment systems,” said Dr. Rajat Mukherjee, Aiqudo CTO. “BYTON’s vision of the car of the future, equipped for autonomous driving, will accelerate the need that users have to access their personal digital lives everywhere. Aiqudo makes this easy!”
Aiqudo (pronounced: “eye-cue-doe”) is a Voice AI pioneer that connects the nascent world of voice interfaces to the useful, mature world of mobile apps and cloud services through its Voice to Action® platform. It lets people use natural voice commands to execute actions in mobile apps across devices. Aiqudo’s SaaS platform uses machine learning (AI) to understand natural-language voice commands and then triggers instant actions via mobile apps, cloud services, or device actions, enabling consumers to get things done quickly and easily.
Aiqudo’s proprietary technology is covered by more than 30 granted patents and patent applications. Aiqudo’s technology is delivered in a scalable approach to creating voice-enabled actions without mandating APIs or developer dependencies.
BYTON is a global premium electric vehicle manufacturer that is creating the world’s first smart device on wheels. By integrating advanced digital technologies to offer a smart, connected, and comfortable mobility experience, the company is designing an EV that will meet the demands of an increasingly digital lifestyle now and into the future.
The company’s global headquarters and state-of-the-art manufacturing center are located in Nanjing, China. Its global R&D hub is located in the heart of Silicon Valley and devoted to the development of BYTON’s groundbreaking intelligent car experience, digital ecosystem, advanced connectivity, as well as other cutting-edge technologies. BYTON’s design and concept vehicle center is located in Munich, Germany.
BYTON’s core management team is made up of top innovators from leading-edge companies such as BMW, Tesla, Google, and Apple. This diverse group of leaders from China, Europe, and the US share the singular vision of creating an unprecedented automotive experience.
For over 2 years Aiqudo has been leading the charge of deep app integration with voice assistants on Android phones. Today, our Android platform continues to do many things that no other platform can. Now, we’re incredibly proud to announce the latest release of our Q Actions app for iOS. We’ve been working on the latest iOS release for months, and it represents a full suite of actions functionality driven by the new ActionKit SDK for iOS. This new ActionKit is also what iOS developers can use to easily configure voice into their own apps.
iOS is a more restrictive and closed ecosystem than Android. Many of the platform capabilities that Android provides are not available to third-party developers in Apple’s ecosystem. For instance, apps are not allowed to freely communicate with each other, and it’s difficult to determine what apps are installed. Such restrictions challenge digital assistants like Q Actions, which rely on knowledge of a user’s apps to provide relevant results and the ability to communicate with apps in order to automate and execute actions in other apps.
Q Actions for iOS enables app developers to define their own voice experience for their users rather than being subject to the limitations of SiriKit or Siri Shortcuts. Currently, SiriKit limits developers’ ability to expose functionality in Siri, allowing only broad categories that dilute the differentiated app experiences that developers have built. With Q Actions for iOS, brands and businesses will be able to maintain their differentiating features and brand recognition, rather than conform to a generalized category.
With this release, we took a hard look at what was needed to build a comparable experience to what we have on Android. To make it more powerful for iOS app developers, we pushed most of the functionality into the ActionKit SDK. The result is that ActionKit powers all the actions available in the app, allowing developers to offer an equivalent experience in their iOS app. The ActionKit SDK is available for embedding in any iOS app today.
Let’s take a look at what Q Actions and the Aiqudo platform offers right now:
Easily discover actions for your phone
Q Actions helpfully provides an Action Summary with a categorized list of apps and actions for your device. Browse by category, tap on an app to view sample commands, or tap a command to execute the action.
Go beyond Siri
Q Actions supports hundreds of new actions! Watch Netflix Originals or stream live video on Facebook with simple commands like “watch Narcos” or “stream live video”.
True Natural Language
Q Actions for iOS leverages Aiqudo’s proprietary, semiotics-based language modeling system to power support for natural language commands. Rather than the exact match syntax required by Siri Shortcuts, Aiqudo understands the wide variations in commands that consumers use when interacting naturally with their voice. Plus, Aiqudo is multilingual, currently supporting commands in seven languages worldwide.
Content-rich Cards for informational queries
Get access to web results from Bing, translate phrases or look at stock quotes directly from Q Actions. Get rich audio and visual feedback from cards.
There’s still a lot to come! We’ve already shown how Aiqudo can enable a better voice experience in the car. We’ve also seen how voice can help users engage meaningfully with your app. We’re working hard to build a ubiquitous voice assistant platform, and this release on iOS gets us one step closer. Stay tuned as we’ll be talking more about some of the challenges of bringing our voice platform to iOS and iOS app developers, and more importantly, how we’re aligned with Apple’s privacy-centric approach.
It’s no secret that a growing number of companies are recognizing the opportunities for new, branded experiences presented by voice interfaces powered by AI. In fact, Gartner predicts that 25 percent of digital workers will use virtual assistants daily by 2021, and brands already using chatbots have seen the number of leads they collect increase by as much as 600 percent over traditional lead generation methods.
These AI-driven voice assistants and chatbots have also become useful cost-cutting tools for companies with large subscriber bases – banks, insurance companies, and mobile phone operators, to name a few. A 2017 Juniper Research report calculates that, for every inquiry handled by a chatbot, banks save four minutes of an agent’s time, which translates to a cost saving of $0.70 per query. These platforms are expected to save banks an estimated $7.3 billion in operational costs by 2023.
The real opportunity presented by voice assistants is in delighting the customer and strengthening brand loyalty, which inevitably drives revenue. We’re entering an exciting time where voice has the ability to redefine the relationship that consumers have with their technology, and to open up aspects or functionality that the user didn’t previously know about, or even know they cared about.
A 2017 PwC report described chatbots as adding “a new dimension to the power of ‘personal touch’ and massively [enhancing] customer delight and loyalty.”
In my own life, I can’t think of a better example of this than Erica, Bank of America’s AI-driven virtual financial assistant. Working in and following the space for a few years, I am really impressed with what Bank of America has built for its customers in Erica.
Erica caters to the bank’s customer service requirements in a number of ways: sending notifications to customers, providing balance information, sharing money-saving tips, providing credit report updates, facilitating bill payments, and helping customers with simple transactions. Recently, BofA expanded Erica’s capabilities to help clients make smarter financial decisions by providing them with personalized, proactive insight.
For me, instead of calling the BofA customer service 800 number and spending 20 to 30 minutes navigating menus, waiting on hold, or being transferred and repeating the process all over again, I can talk to Erica and quickly complete transactions. Erica averages a mere three minutes to resolution via voice within the app. Think about all the things you could get done in those saved minutes instead, not to mention a break on your blood-pressure medicine.
Another aspect where Erica shines for me is in exposing capabilities within the app that aren’t obvious or are buried deep in the menu structure. One feature I use all the time is the ability to put an international travel notice on my card before I leave the country (so my credit card works overseas) — sometimes I even use it standing in the TSA security line. Another feature I love is being able to find my routing and account numbers quickly and easily by simply asking Erica. Who hasn’t spent valuable time on a fishing expedition in their banking app while hoping the webpage (waiting for automatic payment information) doesn’t time-out first?
The proof of the value of Erica’s voice interface is in the user adoption numbers: just over a year after introduction, Erica has surpassed 7 million users and has handled more than 50 million client requests. And since launching Erica’s proactive insights in late 2018, daily client engagement with Erica has more than doubled. In an interview with American Banker, BofA’s head of digital banking attributes Erica’s strong adoption to its easy-to-use transaction-search functions and financial advice, two areas where the bank continues to focus on harnessing the power of voice to delight its customers.
Thing is, for all of Erica’s benefits for both consumers and BofA, building this kind of voice-activated assistance in-house — from scratch — isn’t fast, easy, or cheap. The Erica development team boasted 100 people in 2017 — before introduction — and has surely grown by now, given her success. And it took those 100 people nearly two years to get Erica ready for prime time, at a cost estimated at $30 million. Why so expensive? As one BofA VP noted, during development, the bank “learned [that] there are over 2,000 different ways to ask us to move money.”
At Aiqudo, we’ve figured out — and operationalized — the technical heavy lifting needed to create a voice assistant: NLU, intent detection, action execution, multiple languages, the analytics platform; there’s no reason for partners to reinvent the wheel. We provide partners with a turnkey voice capability in their app. Developers retain control of this critical new Voice UI (and all of their users’ data) rather than surrendering the direct relationship with their users to voice platforms. Until now, developers have been required to create skills for each voice platform, which risks commoditizing the app and losing the brand they have worked so hard to develop. In contrast, Aiqudo offers a cost-effective solution that allows developers to focus on adding value to their app rather than on customizing for voice.
Disclaimer: Bank of America developed their voice technology without the assistance or use of Aiqudo technology.
The following transcript was taken from a casual conversation with my son.
Son: Dad, what are you working on?
Me: It’s a new feature in our product called “Auto Mode”. We just released it in version 2.1 of our Q Actions App for Android. We even made a video of it. We can watch it after dinner if you’re interested.
Son: The feature sounds cool. What’s it look like?
Me: Well, here. We have this special setting that switches our software to look like the screen in a car. See how the screen is wider than it is tall? Yeah, that’s because most car screens are like that too.
Son: Wait. How do you get your software into cars? Can’t you just stick the tablet on the dashboard?
Me: Hmm, not quite. We develop the software so that car makers can combine it with their own software inside the car’s console. We’ll even make it look like they developed it by using their own colors and buttons. I’m showing you how this works on a tablet because it’s easier to demonstrate to other people – we just tell them to pretend it’s the car console. Until cars put our software into their consoles, we’ll make it easy for users to use “Auto Mode” directly on their phones. Just mount the phone on the car’s dash and say “turn on auto mode” – done!
Son: So how do you use it? And what does that blue button with a microphone in it do?
Me: Well, we want anyone in the car to be able to say a command like “navigate to Great America” or “what’s the weather like in San Jose?” or “who are Twenty One Pilots?”. The button is simply a way to tell the car to listen. When we hear a command, our software figures out what to do and what to show on the console in the car. Sometimes it even speaks back the answer. Now we don’t always want people to have to press the button on the screen, so we’ll work with the car makers to add a button on the steering wheel or even a microphone that is always listening for a special phrase such as “Ok, Q” to start.
Son: How does it do that? I mean, the command part.
Me: Good question. Since you’re smart and know a little about software, I’ll keep it short. Our software takes a command and tries to figure out what app or service can best provide the answer. For example, if the command is about showing the route to say, an amusement park like Great America, we’ll ask Google Maps to handle it, which it does really well. Lots of cars come installed with mapping software like Google Maps so it’s best to let them handle those. For other types of commands that ask for information, like “what’s the weather like in San Jose” or “who are Twenty One Pilots”, we’ll send it off to servers in the cloud. They then send us back answers and we format it and display it on the screen – in a pretty looking card like this one.
Me: Sometimes, apps running on our phones can best answer these commands and we use them to handle it.
Son: Wait. Phones? How are phones involved? I only see you using a tablet.
Me: Ahhh. You’ve discovered our coolest feature. We use apps already installed on your phone. Do you see those rectangle-looking things in the upper right corner of the tablet? The ones with the pictures and names of people? Well, those are phone profiles. They appear when a person connects their phone, running our Q Actions app, to the car’s console through Bluetooth, sort of like you do with wireless earbuds. When connected, our software in the console sends the phone your commands, and the phone in turn attempts to execute the command using one of the installed apps. Let me explain with an example. Let’s pretend you track your daily homework assignments using the Google Tasks app on your phone. Now you hop into the car and your phone automatically pairs with the console. Then I ask you to show me your homework assignments. You press the mic button and say “show my homework tasks”. The software in the console would intelligently route the command to your phone (because Google Tasks is not on the console), open Google Tasks on your phone, grab all your homework assignments, and send them back to the console to be displayed in a nice card. Oh, and it would also speak back your homework assignments as well. Let’s see what happens when I tell it to view my tasks.
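The routing the transcript describes can be sketched roughly as below. All names are illustrative, and `match_app` stands in for the real intent engine, which is far more sophisticated:

```python
def route_command(command: str, console_apps: set, phone_apps: set, match_app) -> str:
    """Decide where a command should execute.

    match_app maps a command to the app best able to handle it.
    Console-installed apps win, then apps on a Bluetooth-paired phone,
    and informational queries fall back to cloud services.
    """
    app = match_app(command)
    if app in console_apps:
        return "console:" + app
    if app in phone_apps:
        return "phone:" + app
    return "cloud"
```

So “show my homework tasks” runs on the paired phone (where Google Tasks lives), “navigate to Great America” stays on the console’s maps app, and “what’s the weather like in San Jose” goes to the cloud.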
Son: Big deal. I can just pick up my phone and do that. Why do I need to use voice for that?
Me: Because if you’re the driver, you don’t want to be fumbling around with your phone, possibly getting into an accident! Remember, this is supposed to help drivers with safe, “hands-free” operation. You put your phone in a safe place and our software figures out how to use it to get the answers.
Son: Why can’t the car makers put all these apps in the console so you don’t have to use your phone?
Me: Great question. Most people carry their phones on them at all times, especially when they drive. And these phones have all their favorite apps with all their important personal information stored in them. There’s no way the car makers could figure out which apps to include when you buy the car. And even if you could download these apps onto the console, all your personal information that’s on your phone would have to be transferred over to the console, app by app. Clumsy if you ask me. I prefer to keep my information on my phone and private, thank you very much!
Son: Oh. Now I get it. So what else does the software do?
Me: The console can call a family member. If you say “call Dad”, the software looks for ‘dad’ in your phone’s address book and dials the number associated with it. But wait. You’re probably thinking “What’s so special about that? All the cool cars do it”. Well, we know that a bunch of apps can make phone calls, so we show you which ones and let you decide. Also, if you have two numbers for ‘dad’, say a home and mobile number, the software will ask you to choose one to call. Let’s see how this works when I say “call Dad”.
Me: It asks you to pick an app. I say ‘phone’ and then it asks me to pick a number since my dad has both a home and mobile number. I say ‘mobile’ and it dials the number through my phone.
Son: Cool. But what if I have two people with the same name, like Julie?
Me: It will ask you to pick a ‘Julie’ when it finds more than one. And it will remember that choice the next time you ask it to call Julie. See what happens when I want to call Jason. It shows me all the people in my address book who are named Jason, along with their phone numbers. If a person has more than one number, it will say ‘Multiple’.
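For readers who like to see the mechanics, the disambiguate-then-remember flow above can be sketched like this. The contact data and the lookup rule are made up for illustration:

```python
# Toy sketch of contact disambiguation: ask when a name matches several
# contacts, then remember the choice. All names and numbers are invented.

contacts = {
    "Julie Adams": ["555-0101"],
    "Julie Baker": ["555-0202", "555-0303"],  # home and mobile
}
remembered = {}  # spoken name -> contact chosen last time

def resolve(name):
    """Return one contact, or the list of candidates needing a follow-up question."""
    if name in remembered:
        return remembered[name]
    matches = [c for c in contacts if c.split()[0].lower() == name.lower()]
    if len(matches) == 1:
        return matches[0]
    return matches  # caller asks "which Julie?" and stores the answer

candidates = resolve("Julie")        # two matches -> ask the user
remembered["Julie"] = candidates[1]  # user picks Julie Baker
print(resolve("Julie"))              # -> Julie Baker, no question this time
```

The same pattern generalizes to picking among apps (Phone vs. WhatsApp) and among a contact’s multiple numbers.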
Son: Wow. What else?
Me: How about sending a message on WhatsApp? Or setting up a team meeting in the calendar. Or joining a meeting from the car if you are running late. Or even checking which of your friends have birthdays today. All these actions are performed on your phone using the apps you already know and use.
Son: Which app shows you your friends’ birthdays? That’s kind of neat.
Me: Facebook.
Son: I don’t use Facebook. I use Instagram. It’s way better. Plus all the cool kids use it now.
Me: You get the picture though, right?
Son: So what if all of my friends are in the car with you and we connect to the console? How does the software know where to send the command?
Me: We use the person’s voice to identify who they are and route the command to the right person’s phone automatically.
Son: Really? That seems way too hard.
Me: Not really. Although we haven’t implemented it yet, the technology exists to do this sort of thing today.
Son: Going back to the main screen, why does the list of actions under ‘Recent’ and ‘Favorites’ change when you change people?
Me: Oh, you noticed that! Whenever the software switches to a new profile, we grab the ‘Recent’ and ‘Favorites’ sections from that person’s phone and display them in the tablet, er, console. This is our way of making the experience feel personal and consistent with the way the app appears on your phone. In fact, the ‘Favorites’ are like handy shortcuts for frequently used actions, like “call Mom”.
Me: One more thing. Remember the other buttons on the home screen? One looked like a music note, the other a picture for messaging and so on. Well, when you press those, a series of icons appear across the screen, each showing an action that belongs to that group. If your phone had Spotify installed, we would show you a few Spotify actions. If Pandora was installed, we would show you Pandora actions and so on. Check out what happens when I activate my profile. Notice how Pandora appears? That’s because Pandora is on my phone and not on the tablet like Google Play Music and YouTube Music.
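The per-profile action rows described above follow one simple rule: only show actions whose app exists on the active profile’s phone. A rough sketch, with an invented app catalog and invented action names:

```python
# Toy sketch of per-profile action rows: the music row shows only actions
# from apps installed on the active profile's phone. Catalog is invented.

MUSIC_ACTIONS = {
    "Spotify": ["Play playlist", "Liked songs"],
    "Pandora": ["Play station", "Thumbprint radio"],
    "YouTube Music": ["Play song"],
}

def music_row(installed_apps):
    """Build the row of music action icons for the active profile."""
    row = []
    for app, actions in MUSIC_ACTIONS.items():
        if app in installed_apps:
            row.extend(f"{app}: {a}" for a in actions)
    return row

# My profile has Pandora on the phone, so Pandora actions appear:
print(music_row({"Pandora", "WhatsApp"}))
```

Switching profiles just means re-running this with a different installed-apps set, which is why the rows change from person to person.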
Me: The same is true for messaging and calling. Actions from apps installed on your phone would appear. You would simply tap on the icon to run the action. In fact, if you look carefully, you’ll notice that all the actions that show up on the console are also in the ‘My Actions’ screen in the Q Actions app on your Android phone. Check out what’s on the tablet vs. my phone.
Me: Oh and before I forget, there’s one last item I’d like to tell you about.
Son: What’s that?
Me: Notifications. If you send me a message on WhatsApp, Messenger or WeChat, a screen will pop up letting me know I have a message from you. I can listen to the message by pressing a button, or respond to the message – by voice, of course – all while keeping my focus on the road. You’ll get the response just as if I had sent it while holding the phone.
Son: Cool. I’ll have fun sending you messages on your way home from work.
Son: Hey, can I try this out on my phone?
Me: Sure. Just download our latest app from the Google Play Store. After you get it installed, go to the Preferences section under Settings and check the box that says ‘Auto Mode’ (BETA). You’ll automatically be switched into Auto Mode on your phone. Now this becomes your console in the car.
Of course, things appear a bit smaller on your phone than what I’ve shown you on the tablet. Oh, and since you’re not connected to another phone, all the commands you give it will be performed by apps on your phone. Try it out and let me know what you think.
Son: Ok. I’ll play around with it this week.
Me: Great. Now let’s go see what your mom’s made us for dinner.