Aiqudo Auto Mode

Aiqudo and Sensory Collaborate on Holistic Voice Solutions


Will develop solutions that simplify the process of integrating voice assistants into a variety of devices  

BUSINESSWIRE, LAS VEGAS, January 6, 2020

Aiqudo, a leading voice technology pioneer, today announced that it is collaborating with embedded voice and vision AI leader Sensory to bring to market comprehensive voice solutions that serve as white-label alternatives for voice services and assistants. The two companies are working on solutions targeting automotive, mobile app, smart home and wearable device applications.

Currently, companies and brands must piece together different technologies to fully implement a voice solution. With their technologies combined, Aiqudo and Sensory will deliver a fully integrated end-to-end solution that combines Sensory’s wake word, voice biometrics and natural language recognition technologies with Aiqudo’s multilingual intent understanding and action execution (Q Actions®) to provide the complete Voice to Action® experience consumers expect.
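
The end-to-end flow described above can be pictured as a short pipeline: wake word detection, then speaker verification, then intent understanding and action execution. The sketch below is purely illustrative: every function name, and the way audio is faked as a delimited string, is an assumption made for the example, not an actual Sensory or Aiqudo API.

```python
# Hypothetical sketch of the combined pipeline: wake word detection ->
# speaker verification -> intent understanding -> action execution.
# All names, and the "wake|speaker|utterance" audio stand-in, are
# invented for illustration; they are not real Sensory or Aiqudo APIs.

WAKE_WORD = "hey brand"

def detect_wake_word(audio):
    """Stand-in for an embedded wake-word detector (e.g. a brand phrase)."""
    return audio.lower().startswith(WAKE_WORD)

def verify_speaker(audio, enrolled):
    """Stand-in for voice biometrics: returns a user id, or None."""
    parts = audio.split("|")  # pretend audio is "wake|speaker|utterance"
    return parts[1] if len(parts) > 2 and parts[1] in enrolled else None

def understand_intent(utterance):
    """Stand-in for multilingual intent understanding."""
    if utterance.startswith("play"):
        return {"intent": "play_music", "query": utterance[len("play"):].strip()}
    return {"intent": "unknown"}

def handle_command(audio, enrolled):
    if not detect_wake_word(audio):
        return "ignored"                  # no wake word: stay silent
    user = verify_speaker(audio, enrolled)
    if user is None:
        return "unrecognized speaker"     # biometrics rejected the voice
    intent = understand_intent(audio.split("|")[-1])
    if intent["intent"] == "play_music":
        return f"{user}: launching music action for '{intent['query']}'"
    return f"{user}: no matching action"

print(handle_command("hey brand|alice|play jazz", {"alice", "bob"}))
```

The point of the sketch is the layering: each stage can reject the input early (no wake word, unknown voice) before any intent processing or action execution happens.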

“Voice adoption continues to grow rapidly, and brands are always exploring ways to streamline the process of integrating a convenient voice UX into their products,” said Todd Mozer, Sensory’s CEO. “Working with Aiqudo allows our two companies to provide the industry a turn-key solution for integrating powerful voice assistants into their products that feature brand-specific wake words and are capable of recognizing who is speaking.”


With both Aiqudo and Sensory positioned as leaders in their respective fields, this collaboration is a natural fit: their technologies are highly complementary. The initial integration targets the automotive vertical and will be showcased at the 2020 Consumer Electronics Show.

Aiqudo’s Auto Mode highlights a highly personalized user experience in the car using the Q Actions® platform. Enhanced with Sensory’s wake word (TrulyHandsfree™) and voice ID (TrulySecure™) functionality, multiple users can seamlessly access their personal mobile devices just by using their voice to execute personal actions. “Users just need to enter the cabin with their smartphones,” said Rajat Mukherjee, Aiqudo CTO. “There’s no registration required, and the personalized wake word and voice biometrics allow users to instantly access their personal apps seamlessly and securely.”
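
The multi-user scenario Mukherjee describes is, at its core, a routing problem: voice ID resolves who is speaking, and the command is dispatched to that person's paired phone. A minimal sketch, with invented names standing in for TrulySecure™ identification and Q Actions® dispatch:

```python
# Hypothetical sketch of in-cabin multi-user routing: a resolved voice ID
# selects the speaker's paired smartphone, and the command executes there.
# The device table and function are illustrative, not a real API.

PAIRED_DEVICES = {            # voice ID -> Bluetooth-connected smartphone
    "alice": "alice-pixel",
    "bob": "bob-iphone",
}

def route_command(speaker_id, command):
    device = PAIRED_DEVICES.get(speaker_id)
    if device is None:
        return "speaker not enrolled; no device to route to"
    return f"executing '{command}' on {device}"

print(route_command("alice", "read my messages"))
print(route_command("carol", "play my playlist"))
```

This is why no registration step is needed at ride time: once a voice profile is linked to a phone, recognizing the voice is enough to pick the right device.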

“Brands increasingly want to create their own branded voice experiences for their customers,” said John Foster, CEO of Aiqudo. “Working with Sensory, we have created the easiest and fastest way for brands to bring the power and convenience of voice to their customers. We are excited to integrate our areas of practice and expertise to deliver a comprehensive solution.”  

To view the demo onsite while at CES, please email Aiqudo at CES@aiqudo.com. 


About Sensory

For over 25 years, Sensory has pioneered and developed groundbreaking applications for machine learning and embedded AI – turning those applications into household technologies. Pioneering the concept of always-listening speech recognition more than a decade ago, Sensory’s flexible wake word, small to large vocabulary speech recognition, and natural language understanding technologies are fueling today’s voice revolution. Additionally, its biometric recognition technologies are making everything from unlocking a device to authenticating users for digital transactions faster, safer and more convenient. Sensory’s technologies are widely deployed in numerous markets, including automotive, home appliances, home entertainment, IoT, mobile phones, wearables and more, and have shipped in over two billion units of leading consumer products and software applications.

For more information about this announcement, Sensory or its technologies, visit https://www.sensory.com/, contact sales@sensory.com or for press inquiries contact press@sensory.com.

About Aiqudo

Aiqudo (pronounced: “eye-cue-doe”) is a Voice AI pioneer that connects the nascent world of voice interfaces to the useful, mature world of mobile apps and cloud services through its Voice to Action® platform. It lets people use natural voice commands to execute actions in mobile apps and cloud services across devices. Aiqudo’s SaaS platform uses machine learning (AI) to understand natural-language voice commands and then triggers instant actions via mobile apps, cloud services, or device actions, enabling consumers to get things done quickly and easily.

Aiqudo’s proprietary technology is covered by more than 30 granted patents and patent applications. Its technology is delivered through a scalable approach that voice-enables actions without requiring APIs or developer dependencies.

To see Aiqudo in action, visit Aiqudo’s YouTube channel (youtube.com/aiqudo).

Aiqudo in the Byton Cabin

The BYTON next generation infotainment experience


Byton Multiple Displays

Peter Mortensen, Solution Architect, BYTON, Santa Clara, California

BYTON is the automotive EV brand taking the lead in next-generation infotainment experiences, with a vision of giving vehicle occupants the best combination of safety and entertainment while on the road. The first BYTON vehicle is the M-BYTE SUV, which incorporates a groundbreaking infotainment system: an advanced Android Automotive app platform combined with BYTON’s unique user interface, including the industry’s largest display and user control via touch, gesture, and voice.

One of the key challenges when expanding the infotainment capabilities of a vehicle is avoiding unsafe distractions for the driver. Through extensive support for voice control, the driver of a BYTON can operate the vehicle more safely, keeping more visual focus on the driving itself. Another challenge is how well a vehicle’s infotainment system provides the frequently used features of popular apps and online services common on the occupants’ smartphones.

Aiqudo’s Voice to Action® platform gives occupants of a BYTON vehicle a simple and seamless link between natural-language voice interaction and their favorite personal apps. With this solution, the occupants of a BYTON can speak naturally to their favorite apps from any seat in the cabin, using the vehicle’s powerful microphone array system. The apps can reside either in the vehicle’s own infotainment system or on the Bluetooth-connected Android or iOS smartphones of the occupants. The Aiqudo solution interprets the spoken commands and automatically identifies which specific app to control. For example, the driver can easily send a message by voice through his favorite social networking app installed on his smartphone, hands-free, without picking up the device, thus staying focused on safely driving the vehicle.
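
Conceptually, identifying "which specific app to control" is a matching problem between the spoken command and a registry of app actions spread across the head unit and the paired phones. The toy keyword matcher below, with an invented registry, only illustrates the idea; the actual Voice to Action® matching is proprietary and not shown here.

```python
# Illustrative sketch: match a spoken command to an action registry that
# spans both the vehicle's infotainment system and paired smartphones.
# The registry entries and scoring are invented for this example.

ACTION_REGISTRY = [
    {"app": "Messenger",  "location": "phone",   "keywords": {"send", "message"}},
    {"app": "Navigation", "location": "vehicle", "keywords": {"navigate", "drive"}},
    {"app": "Spotify",    "location": "phone",   "keywords": {"play", "song"}},
]

def match_app(command):
    """Return the registry entry with the most keyword overlap, or None."""
    words = set(command.lower().split())
    best, best_score = None, 0
    for entry in ACTION_REGISTRY:
        score = len(words & entry["keywords"])
        if score > best_score:
            best, best_score = entry, score
    return best

cmd = "send a message to Sam"
hit = match_app(cmd)
print(f"'{cmd}' -> {hit['app']} ({hit['location']})")
```

Note that the registry records *where* each action lives, so the same matching step also answers whether the command should run on the head unit or be relayed to a phone.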

BYTON is committed to intelligently combining the best of voice control with the safe use of touch control, gesture control, and display feedback. The Voice to Action® platform expands beyond voice interaction by allowing BYTON’s engineers to touch-enable relevant app controls and provide display feedback where relevant. Furthermore, BYTON’s voice interaction concept aims high, creating a truly intuitive voice experience by integrating Aiqudo’s app control with vehicle control and online digital voice assistant services such as Amazon’s Alexa, depending on country.

More information on the Aiqudo partnership is available at BYTON’s new developer site: AIQUDO and BYTON Partner to enable actions for your phone and car.

This post is authored by Peter Mortensen, BYTON Solution Architect. Aiqudo and BYTON announced a strategic partnership at CES 2020 that features voice control of apps in BYTON automobiles using Aiqudo technology.


Q Actions – Voice feedback from apps using Talkback™


Wonder why you can’t talk to your apps, and why your apps can’t talk back to you?  Stop wondering, as Talkback™ in Q Actions does exactly that. Ask “show my tasks” and the system executes the right action (Google Tasks) and, better yet, tells you what your tasks are – safely and hands-free, as you drive your car.
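
The pattern behind Talkback™, as described here, is that an action returns structured data, which is rendered into a sentence and spoken aloud. A minimal sketch, with hypothetical stand-ins for the app action and the text-to-speech engine:

```python
# Sketch of the talkback idea: an action returns structured data, a
# renderer turns it into a sentence, and a TTS engine speaks it. The
# task source and speak() are hypothetical stand-ins, not real APIs.

def get_tasks():
    # Stand-in for fetching tasks from a task app such as Google Tasks.
    return ["buy milk", "call dentist"]

def render_speech(tasks):
    """Turn the structured result into a sentence suitable for TTS."""
    if not tasks:
        return "You have no tasks today."
    return "You have {} tasks: {}.".format(len(tasks), "; ".join(tasks))

def speak(text):
    # Stand-in for a text-to-speech engine.
    print("TTS:", text)

speak(render_speech(get_tasks()))
```

Separating the action's data from the spoken rendering is what lets the same action drive both a screen (a task list) and a hands-free voice response.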

Driving to work and stuck in traffic? Ask “whose birthday is it today?” and hear the short list of your friends celebrating their birthdays (Facebook). You can then say “tell michael happy birthday” to wish Mike (WhatsApp or Messenger). And if you are running low on gas, just say “find me a gas station nearby” and Talkback™ will tell you where the nearest gas station is and how much you’ll pay for a gallon of unleaded fuel.

Say it. Do it. Hear it spoken back!


Q Actions – Task completion through Directed Dialogue™


When an action or a set of actions requires specific input parameters, Directed Dialogue™ allows the user to supply the required information through a very simple, natural back-and-forth conversation. Enhanced with parameter validation and user confirmation, Directed Dialogue™ allows complex tasks to be performed with confidence. Directed Dialogue™ is not about open-ended conversations; it is about getting things done, simply and efficiently.

With Q Actions, Directed Dialogue™ is automatically enabled for every action in the system, because we know the semantic requirements of each and every action’s parameters. It is not constrained and applies to all actions across all verticals.

Another application of Directed Dialogue™ is input refinement. Let’s say I want to purchase batteries. If I just say, “add batteries to my shopping cart,” I can get the wrong product added to my cart, as on Alexa, which does the wrong thing for a new product order (the right thing happens on a reorder). With Q Actions, I can provide the brand (Duracell) and the type (9V 4-pack) through very simple Directed Dialogue™, and exactly the right product is added to my cart – in the Amazon or Walmart app.
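
The battery example can be pictured as slot filling: each action declares its required parameters, the dialogue prompts for whatever is missing or invalid, and the result is confirmed before execution. The parameter schema and prompts below are invented for illustration; they are not the real Q Actions® metadata.

```python
# Minimal slot-filling sketch of a directed dialogue: prompt for each
# missing or invalid parameter, then confirm before executing. The
# schema, prompts, and validators are hypothetical.

ACTION_PARAMS = {
    "add_to_cart": [
        {"name": "brand", "prompt": "Which brand?", "validate": str.isalpha},
        {"name": "size", "prompt": "What size or pack?", "validate": lambda s: len(s) > 0},
    ],
}

def run_dialogue(action, provided, answers):
    """`answers` simulates the user's spoken replies, in order."""
    slots = dict(provided)
    replies = iter(answers)
    for param in ACTION_PARAMS[action]:
        while param["name"] not in slots or not param["validate"](slots[param["name"]]):
            # A real system would speak param["prompt"] and listen;
            # here the reply simply comes from the answers list.
            slots[param["name"]] = next(replies)
    return f"confirm: {action} with {slots}"

print(run_dialogue("add_to_cart", {"brand": "Duracell"}, ["9V 4 pack"]))
```

Because the parameter requirements are declared per action, the same loop works for any action in the system; that is the sense in which the dialogue can be enabled automatically rather than scripted by hand.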

Get Q Actions today.

Auto in-cabin experience

The Evolution of Our In-Car Experience


As the usage model for cars continues to shift away from traditional ownership and leasing to on-demand, ridesharing, and in the future, autonomous vehicle (AV) scenarios, how we think about our personal, in-car experience will need to shift as well.

Unimaginable just a few short years ago, today we think nothing of jumping into our car and streaming our favorite music through the built-in audio system using our Spotify or Pandora subscription. We also expect the factory-installed navigation system to instantly pull up our favorite or most-commonly used locations (after we’ve entered them) and present us with the best route to or from our current location. And once we pair our smartphone with the media system, we can have our text and email messages not only appear on the onboard screen but also read to us using built-in text-to-speech capabilities. It’s a highly personalized experience in our car.

When we use a pay-as-you-go service, such as Zipcar, we know we’re unlikely to have access to all of the tech comforts of our own vehicle, but we can usually find a way to get our smartphone paired for handsfree calling and streaming music using Bluetooth. If not, we end up using the navigation app on our phone and awkwardly holding it while driving, trying to multitask. It’s not pretty. And when we hail a rideshare, we don’t expect to have access to any of the creature comforts of our own car.

But what if we could?

Just as our relationship to media shifted from an ownership model–CDs or MP3 files on iPods–to subscription-based experiences that are untethered to a specific device but can be accessed anywhere at any time, it’s time to shift our thinking about in-car experiences in the same way.

It’s analogous to accessing your Amazon account and continuing to watch the new season of “True Detective” on the TV at your Airbnb–at the exact episode where you left off last week. Or listening to your favorite Spotify channel at your friend’s house through her speakers.

All your familiar apps (not just the limited Android Auto or Apple CarPlay versions) and your personalized in-car experience–music, navigation, messaging, even video (if you’re a passenger, of course)–will be transportable to any vehicle you happen to jump into, whether it’s a Zipcar, rental car or some version of a rideshare that’s yet to be developed. What’s more, you’ll be able to easily and safely access these apps using voice commands. Whereas today our personal driving environment is tied to our own vehicle, it will become something that’s portable, evolving as our relationship to cars changes over time.

Just on the horizon of this evolution in our relationship with automobiles? Autonomous vehicles, or AVs, in which we become strictly passengers, perhaps one of several people sharing a ride. Automobile manufacturers today are thinking deeply about what this changing relationship means to them and to their brands. Will BMW become “The Ultimate Riding Machine?” (As a car guy, I personally hope not!) And if so, what will be the differentiators?

Many car companies see the automobile as a new digital platform, for which each manufacturer creates its own, branded, in-car digital experience. In time, when we hail a rideshare or an autonomous vehicle, we could request a Mercedes because we know that we love the Mercedes in-car digital experience, as well as the leather seats and the smooth ride.

What happens if we share the ride in the AV, because, well, they are rideshare applications after all? The challenge for the car companies becomes creating a common denominator of services that define that branded experience while still enabling a high degree of personalization. Clearly, automobile manufacturers don’t want to become dumb pipes on wheels, but if we all just plug in our headphones and live on our phones, comfy seats alone aren’t going to drive brand loyalty for Mercedes. On the other hand, we don’t all want to listen to that one guy’s death metal playlist all the way to the city.  

The car manufacturers cannot create direct integrations to all services to accommodate infinite personalization. In the music app market alone there are at least 15 widely used apps, but what if you’re visiting from China? Does your rideshare support China’s favorite music app, QQ?  We’ve already made our choices in the apps we have on our phones, so transporting that personalized experience into the shared in-car experience is the elegant way to solve that piece of the puzzle.

This vision of the car providing a unique digital experience is not that far-fetched, nor is it that far away from becoming reality. It’s not only going to change our personal ridesharing experience, but it’s also going to be a change-agent for differentiation in the automobile industry.

And it’s going to be very interesting to watch.


Q Actions 1.3 update is now available on Google Play!


Q Actions now enables you to make calls directly using voice commands, regardless of whether your contact is in your phone book or in a third-party app like WhatsApp.

Remembering friends and family across multiple phone books and communication apps is cumbersome. Through voice, you can privately tell Q Actions which contact you want to connect with and which app you want to place the call with, safely and hands-free.

Juggling multiple phone books across your apps can be tedious … We’ve got your back!

Also, try out some of the new and improved actions from familiar apps that you already have on your phone: Netflix, Spotify, Waze, Maps, Facebook, and more.

Just launch Q Actions and say:

    • “dial John”, “call Jason on WhatsApp” (Phone/WhatsApp)
    • “Play Stranger Things”, “watch Netflix originals” (Netflix)
    • “play songs by Drake”, “play mint playlist” (Spotify)
    • “take me to work”, “I want to drive home” (Waze)
    • “are any of my friends nearby?”, “view upcoming events” (Facebook)


As always, we welcome your feedback.


The Next Billion


When we started Aiqudo less than a year ago, we were focused on voice as the next big thing in tech, a UI that has the potential to be the most profound and disruptive change in consumer technology to date. Being a Silicon Valley company, we naturally focused on how voice could impact our world — savvy tech users who wanted the easiest, fastest way possible to use their technology, and the changes natural language voice could bring about for businesses serving us: voice search, voice commerce, hands free apps while driving, etc.

But as we work with partners who are focused on global deployments across multiple languages, we’re coming to realize that voice could have a much more far-reaching impact. When interacting with technology becomes completely seamless and intuitive, we will extend access to technology to billions of new users in emerging markets where mobile internet devices have arrived but where language or literacy issues may present barriers to usage.

Today, mobile carriers are pushing hard to capture new users in these frontier markets, offering inexpensive Android phones with unlimited data plans and putting internet connections into more hands than ever. Voice interfaces, localized for languages and for locale-specific apps, will unlock the final accessibility challenge for these users, allowing the benefits of the internet to reach far deeper into many societies that have until now been on the other side of the digital divide. Voice has the potential to become the universal interface to the digital world.

New users and a new user interface will certainly mean new entry points and new modalities of use for a broad range of established businesses. Industries that VCs would consider over with, done, un-investable in developed markets will be up for grabs again, serving billions of users. We’ll see new business models serving localized needs with localized solutions — this won’t be a walkover for the established incumbents. The next disruption is likely to have its roots far from Silicon Valley.

These are the next billion internet users, and voice is the interface that will power their digital experience.

Voice will be our interface to everything

Let’s face it, technology has not always been very user-friendly. Sometimes that felt deliberate, as if coders wanted to keep their club small and exclusive. But usually there’s a step-function innovation that totally changes how we interact with technology and, in so doing, disrupts the old paradigm. The mouse and the graphical user interface launched the PC (if you’re old enough, you remember when saying GUI sounded cool). Touch screens were the brilliant innovation that enabled the whole new world of smartphones we live in today.

Voice is the next disruption. Voice will change how we search, how we shop and manage our experiences with retailers, how we create and consume media. The big guys are placing big bets in the space, and we’re starting to see the payoff on some of those components now – voice recognition now has accuracy above 90%, which is good enough to be workable. With improvements in AI, we’ll have contextual understanding, maintain state, and get to conversational capabilities.

But today, voice doesn’t do very much. Alexa sets a mean timer, but if I want to order an Uber, I have to go to my Alexa app to sign in and register, and then I only get limited capabilities. Why wouldn’t I just go to my Uber app? My Uber app has Home, Work, SFO, already in it, plus my payment info, and I can share my ETA with my contacts. And if I want to check Surfline, forget it – there’s no Skill for that.

This is why we created Aiqudo. Our mobile apps do tons of things for us already – get rides, order food, check the surf, and loads of other interactions every day. But the touch screen interface has resulted in each app becoming an individual silo: you have to open the app, navigate your way down to the action you want, maybe tap through a few screens to select your size or color, checkout, and confirm, and then move to the next app and repeat. Aiqudo lets you use simple, intuitive voice commands to instantly get to the action you want, then seamlessly move on to the next action in another app. Do all the things you want to do in your favorite apps, but now at the speed of voice.

Voice will be our interface to everything, eventually. We’re starting with making voice the interface to the things we do every day with our mobile apps.