SpeakQL: Towards Speech-driven Multi-Modal Querying

Tech ID: 29219 / UC Case 2018-111-0

Background

Automatic speech recognition (ASR) systems currently in use work well for routine tasks, such as posing a question to Siri (Apple) or Alexa (Amazon), but they do not interface well with complex, structured datasets. Querying structured data by speech calls for new approaches. Prior work has explored new querying modalities, including visual, touch-based, and natural language interfaces (NLIs), in which user commands are translated into the Structured Query Language (SQL). Unfortunately, these proposals are not well suited to complex datasets.

Technology Description

Researchers at UC San Diego have developed a solution to the above problem: a software system (SpeakQL) comprising algorithms and methods that enable users to easily query structured datasets using speech with high accuracy and low latency. Structured datasets (also called relational datasets) are organized as named tables with named columns. The querying language SpeakQL targets is the Structured Query Language (SQL), the most popular way to interact with structured data, especially in enterprise settings. SQL enables users to pose complex questions to retrieve facts or analyze their data. SpeakQL enables users to pose such questions using speech instead of typing by exploiting the latest advances in Automatic Speech Recognition (ASR) together with a suite of new methods built on SQL-specific insights.
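For illustration only, a user could dictate a request such as "total sales by region for 2017," which corresponds to a SQL query like the sketch below. The table name Sales and the columns region, amount, and year are hypothetical placeholders for this example, not part of the SpeakQL system or any particular schema.

    -- Hypothetical schema: Sales(region, amount, year)
    -- Spoken request: "total sales by region for 2017"
    SELECT region, SUM(amount) AS total_sales
    FROM Sales
    WHERE year = 2017
    GROUP BY region;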

Applications

Enabling conversational assistants such as Amazon Alexa or Google Home to also support rich SQL queries over datasets with arbitrary user schemas. This could increase adoption of such devices in enterprise settings such as healthcare, consulting, retail, and education, and could also benefit Web companies for internal use.

Advantages

SpeakQL enables users to pose complex SQL queries using speech instead of typing by exploiting the latest advances in Automatic Speech Recognition (ASR) and a suite of new methods built on SQL-specific insights.

State Of Development

A prototype of SpeakQL for the iOS environment (iPhone and iPad) is in development. The researchers also plan to prototype SpeakQL for the Amazon Echo Show/Alexa environment using Amazon Web Services.

Intellectual Property Info

A provisional patent application has been filed, and the technology is available for licensing.

Patent Status

Country: United States Of America
Type: Published Application
Number: 2019-034729
Dated: 11/14/2019
Case: 2018-111


Other Information

Keywords

Conversational assistants, Natural language, Querying, Speech, SQL, Structured data, Voice
