
US20170075953A1 - Handling failures in processing natural language queries - Google Patents

Handling failures in processing natural language queries

Info

Publication number
US20170075953A1
US20170075953A1 (application US 15/261,538)
Authority
US
United States
Prior art keywords
query
natural language
user
structured
language query
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/261,538
Inventor
Tolga Bozkaya
Armand Joseph Dijamco
Tran Bui
Andy Chu-I Yu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google LLC filed Critical Google LLC
Priority to US15/261,538 priority Critical patent/US20170075953A1/en
Assigned to GOOGLE INC. reassignment GOOGLE INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BOZKAYA, TOLGA, BUI, TRAN, DIJAMCO, Armand Joseph, YU, ANDY CHU-I
Publication of US20170075953A1 publication Critical patent/US20170075953A1/en
Assigned to GOOGLE LLC reassignment GOOGLE LLC CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: GOOGLE INC.
Abandoned legal-status Critical Current


Classifications

    • G06F17/30401
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/242Query formulation
    • G06F16/2423Interactive query statement specification based on a database schema
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/242Query formulation
    • G06F16/243Natural language query formulation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2452Query translation
    • G06F16/24522Translation of natural language queries to structured queries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2453Query optimisation
    • G06F17/2775
    • G06F17/30327
    • G06F17/30466
    • G06F17/30483
    • G06F17/30554
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • G06F40/289Phrasal analysis, e.g. finite state techniques or chunking

Definitions

  • This specification relates to handling failures in processing natural language queries.
  • Failures may occur when a computer system attempts to process natural language queries provided by users to provide matching search results.
  • An iterative model may be used to handle these failures.
  • This specification describes techniques for handling failures in generating SQL queries from natural language queries.
  • One innovative aspect of the subject matter described in this specification can be embodied in methods that include the actions of obtaining, through a natural language front end, a natural language query from a user; converting the natural language query into structured operations to be performed on structured application programming interfaces (APIs) of a knowledge base, comprising: parsing the natural language query, analyzing the parsed query to determine dependencies, performing lexical resolution, forming a concept tree based on the dependencies and lexical resolution, analyzing the concept tree to generate a hypergraph, generating a virtual query based on the hypergraph, and processing the virtual query to generate one or more structured operations; performing the one or more structured operations on the structured APIs of the knowledge base; and returning search results matching the natural language query to the user.
  • Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
  • For a system of one or more computers to be configured to perform particular operations or actions means that the system has installed on it software, firmware, hardware, or a combination of them that in operation cause the system to perform the operations or actions.
  • For one or more computer programs to be configured to perform particular operations or actions means that the one or more programs include instructions that, when executed by data processing apparatus, cause the apparatus to perform the operations or actions.
  • Parsing the natural language query includes breaking the natural language query into phrases and placing the phrases in a parsing tree as nodes.
  • Performing lexical resolution includes generating concepts for one or more of the parsed phrases.
  • Analyzing the concept tree includes: analyzing concepts and parent-child or sibling relationships in the concept tree; and transforming the concept tree including annotating concepts with new information, moving concepts, deleting concepts, or merging concepts with other concepts.
  • the hypergraph represents a database schema where data tables may have multiple join mappings among themselves. The method further includes analyzing the hypergraph including performing path resolution for joins using the concept tree.
  • the method further includes detecting a failure during conversion of the natural language query to the one or more structured operations.
  • the method further includes resolving the failure through additional processing including determining if an alternative parse for the natural language query is available.
  • the method further includes resolving the failure through additional processing including: providing, through a user interaction interface, one or more information items identifying the failure to the user; and, responsive to a user interaction with an information item, modifying the natural language query in accordance with the user interaction to generate one or more structured operations.
  • the failure can be based on one or more of a bad parse, an ambiguous column reference, an ambiguous constant, an ambiguous datetime, unused comparison keywords or negation keywords, aggregation errors, a missing join step, an unprocessed concept, an unmatched noun phrase, or missing data access.
  • the knowledge base, the natural language front end, and the user interaction interface are implemented on one or more computers and one or more storage devices storing instructions, and wherein the knowledge base stores information associated with entities according to a data schema and has the APIs for programs to query the knowledge base.
  • Natural language terms can be matched to lexicons recognized by a natural language processing system through user interactions, reducing the need to define upfront every query term that may appear in a natural language query.
  • linguistic ambiguities detected in a user-provided natural language query can be resolved as they arise, eliminating the need to produce search results based on each alternative interpretation.
  • data access issues can be brought to a user's attention early on without risking any data security breach.
  • User interactions can be minimized in generating structured queries from natural language queries.
  • the system uses techniques to avoid unnecessary iterations through user actions by assessing a quality of the parse and the structured query that can be generated through identification of certain errors or warnings during parsing and processing of the input query expressed in natural language.
  • This assessment allows the system to provide a translation of the natural language query to a structured query while overcoming some shortcomings of the parser or some grammatical/structural mistakes in the natural language query. Consequently, the system can often determine the structured query from compact sentences or even phrases. This improves the user experience and makes translating natural language queries into structured queries more useful.
  • In some cases, the system cannot determine the structured query without user interaction. In those cases, the system attempts to guide the user towards corrections that can resolve the errors and lead to a successful translation into a structured query. For example, if there is ambiguity, the system can identify and present possible interpretations and choices for disambiguation. This helps the user quickly correct the natural language query and improves the speed of generating the structured query in those cases.
  • the system allows users who are not experienced with the particular data domain or query languages to obtain specifically desired information using natural language queries.
  • the system accepts queries presented in plain English (or a language of the user's choice) and processes them through the use of natural language processing (NLP) techniques to generate and run the corresponding structured query in the query backend and return the result to the user.
  • a number of schema lexicons are generated which provide a number of mappings used to process the natural language query.
  • FIG. 1 is a flow diagram of an example process of converting a natural language query into a structured query.
  • FIG. 2 is a block diagram illustrating an example system for handling failures in processing natural language queries through user interactions.
  • FIG. 3 is a flow diagram illustrating an example process for iterating over query versions.
  • FIGS. 4-7 are diagrams of example concept trees.
  • FIG. 8 is a block diagram illustrating an example process for handling a missing token failure through user interactions.
  • FIG. 9 is a block diagram illustrating an example process for handling a lexicon matching failure through user interactions.
  • FIG. 10 is a block diagram illustrating an example process for handling a data access failure through user interactions.
  • FIG. 11 is a block diagram illustrating an example process for handling a linguistic ambiguity failure through user interactions.
  • FIG. 12 is a flow diagram illustrating an example process for handling failures in processing natural language queries through user interactions.
  • a system can convert the received natural language queries into structured queries, for example, structured query language (“SQL”) queries.
  • the structured queries can be executed and responsive data can be returned for output.
  • the converted structured query can be used to obtain data responsive to the query, which can then be returned to the user.
  • the system may not always be able to successfully convert a given natural language query into a structured query.
  • the natural language query can include errors made by the user including typos, malformed sentences, or missing keywords.
  • the system also may be unable to convert the natural language query due to limitations of the system in recognizing particular sentence formations.
  • FIG. 1 is a flow diagram of an example process 100 of converting a natural language query into a structured query. For convenience the process is described with respect to a system that performs the process, for example, the system described below with respect to FIG. 2 .
  • the system obtains 102 a natural language query.
  • the system can receive a query input by a user through a user interface.
  • the user interface can be a search interface through which a user can submit natural language search queries. Details of individual process steps are described in greater detail below with respect to FIGS. 2-7 .
  • the system parses 104 the obtained natural language query.
  • the parser can be used to parse a natural language query into tokens, for example, parsing the query “Where can I get bacon and egg sandwich?” into the following tokens: “Where,” “I,” “get,” “bacon and egg,” and “sandwich.”
  • Two types of parsers can be used: a dependency parser and a constituency parser.
  • Another example query can be “computer sales per sale country and production country for goods manufactured in ASIA and sold in EMEA.” This query can be parsed into tokens “sales,” “per,” “sale country,” “production country,” “manufactured,” “ASIA,” “sold,” and “EMEA.”
  • a constituency parser breaks a natural language query into phrases and places these phrases in a parsing tree as nodes.
  • the non-terminal nodes in a parsing tree are types of phrases, e.g., Noun Phrase or Verb Phrase; the terminal nodes are the phrases themselves, and the edges are unlabeled.
  • a dependency parser breaks words in a natural language query according to the relationships between these words.
  • Each node in a parsing tree represents a word, child nodes are words that are dependent on the parent, and edges are labeled by the relationship.
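  • The two parser outputs can be sketched with toy tree structures. This is an illustration only, not the patent's parser; the `Node` class, phrase labels, and the edge-label scheme are assumptions:

```python
# Toy representations of the two parser outputs for "sales per sale country".

class Node:
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []

# Constituency-style tree: non-terminal nodes are phrase types (NP, PP),
# terminal nodes are the phrases themselves, and edges are unlabeled.
constituency = Node("S", [
    Node("NP", [Node("sales")]),
    Node("PP", [Node("per"), Node("NP", [Node("sale country")])]),
])

# Dependency-style tree: every node is a word, child nodes depend on the
# parent, and each edge carries a relation label (label names hypothetical).
dependency = Node("sales", [Node("country")])
dependency_edge_labels = {("sales", "country"): "nmod"}

def terminals(node):
    """Collect the terminal (leaf) phrases of a constituency tree."""
    if not node.children:
        return [node.label]
    return [t for c in node.children for t in terminals(c)]
```

A traversal of the constituency tree recovers the phrase sequence, while the dependency tree directly exposes which word modifies which.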
  • the system analyzes 106 the parsed query to determine dependencies between constituents.
  • the dependency analysis allows the system to identify modifier relationships between the parsed phrases.
  • the system performs 108 lexical resolution to identify matching n-grams and generates concepts for the matched n-grams.
  • A concept created for a phrase, e.g., an n-gram, captures what the phrase means to some group of people. This meaning can be identified through the use of one or more lexicons.
  • the phrase “sales” can be recognized as an n-gram mapping to a “sales_cost_usd” column in a table for a particular schema lexicon. Consequently, an attribute concept is generated as corresponding to the phrase “sales” in the parsed query.
  • Other information may be known from the lexicon, for example, that the phrase is associated with a numeric and aggregatable column. This information can be used when eventually generating corresponding structured queries.
  • a number of different types of concepts can be created based on phrases that are recognized including, for example, attributes, date/time window expressions, parts of speech (e.g., per, by, for, in, or not), numeric/string constants, recognized constants, subcontexts, and aggregations. Recognized constants can be recognized, for example, through an inverted index or through common items.
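  • Lexical resolution as described above can be sketched as a lookup from recognized n-grams to column metadata. The dictionary layout and helper name here are assumptions; the mapping of "sales" to "sales_cost_usd" follows the example in the text:

```python
# Minimal schema-lexicon lookup (layout is an illustrative assumption).
SCHEMA_LEXICON = {
    "sales": {"column": "sales_cost_usd", "numeric": True, "aggregatable": True},
    "production country": {"column": "production_country", "numeric": False,
                           "aggregatable": False},
}

def resolve_ngram(phrase):
    """Return an attribute concept for a matched n-gram, or None."""
    entry = SCHEMA_LEXICON.get(phrase.lower())
    if entry is None:
        return None
    # The concept carries the column mapping plus metadata (numeric,
    # aggregatable) used later when generating the structured query.
    return {"type": "attribute", "phrase": phrase, **entry}
```

For example, `resolve_ngram("sales")` yields an attribute concept mapped to the `sales_cost_usd` column, with flags indicating the column can be aggregated.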
  • the system forms 110 a concept tree from the generated concepts and dependencies between n-grams.
  • The initial concept tree is created from the concepts corresponding to the parsed phrases and the identified dependency relationships.
  • the concepts are represented by nodes in the concept tree.
  • the initial concept tree does not include information that can be inferred from parent-child relationships of the concept tree itself.
  • the initial concept tree represents an intermediate structure used by the system to generate structured queries after performing additional analysis, simplifications, and transformations over the concept tree.
  • the analysis and transformations allow the system to identify meaningful and unambiguous mappings between entities represented in the concept tree to attributes, joins, aggregations, and/or predicates that can be used to form structured queries that accurately represent the intent of the user submitting the query.
  • the system processes 112 the concepts and dependencies of the concept tree to transform the concept tree.
  • the concepts and the parent-child or sibling relationships in the concept tree are analyzed.
  • the transformations are based on a system of inference rules based on the semantic representation provided by the concept tree that allows the system to de-tangle syntactic ambiguities.
  • the concepts that are transformed may be annotated with new information, they may be moved around, deleted, or merged with other concepts.
  • the remaining concepts after the processing form a transformed concept tree.
  • the concepts of the transformed concept tree deterministically map to query operations/components, facilitating translation into a structured query by simply processing them one by one to build up the query components.
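  • One transformation of the kind described above, merging a constant concept into its attribute parent, can be sketched as follows. The rule and the dict-based node shapes are illustrative assumptions, not the patent's inference system:

```python
# Illustrative concept-tree transformation pass: a constant concept that is
# a child of an attribute concept is merged into its parent as an equality
# predicate, and the constant node is deleted (mirroring the annotate /
# move / delete / merge operations described in the text).

def transform(node):
    """Recursively merge constant children into attribute parents."""
    node["children"] = [transform(c) for c in node["children"]]
    if node["type"] == "attribute":
        constants = [c for c in node["children"] if c["type"] == "constant"]
        for c in constants:
            node.setdefault("predicates", []).append(("=", c["value"]))
            node["children"].remove(c)  # merged: delete the constant node
    return node

tree = {"type": "attribute", "value": "name",
        "children": [{"type": "constant", "value": "JohnDoe", "children": []}]}
transformed = transform(tree)
```

After the pass, the attribute node carries the predicate directly, so it maps deterministically to a WHERE clause component.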
  • the system creates 114 a hypergraph from the concept tree and analyses the hypergraph to generate joins.
  • a hypergraph represents a database schema where data tables may have multiple join mappings among themselves.
  • the hypergraph can include a set of nodes representing columns of data tables stored in a database, as well as a set of edges representing tables to which the columns belong. Two nodes are connected by an edge if the columns represented by the two nodes are joinable; and the edge identifies the tables to which the columns belong.
  • the hypergraph analysis includes path resolution for joins using the concept tree.
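  • Path resolution for joins can be sketched as a breadth-first search over a simplified join graph. The adjacency representation and the table/column names are assumptions for illustration:

```python
from collections import deque

# Simplified join-path resolution: nodes are (table, column) pairs, and an
# edge between two nodes means the columns are joinable. A BFS finds a
# join path between two tables.
JOINABLE = {
    ("sales", "product_id"): [("products", "id")],
    ("products", "id"): [("sales", "product_id"), ("factories", "product_id")],
    ("factories", "product_id"): [("products", "id")],
}

def join_path(start_table, goal_table):
    """Return a list of (table, column) hops joining start to goal, or None."""
    starts = [n for n in JOINABLE if n[0] == start_table]
    queue = deque([(n, [n]) for n in starts])
    seen = set(starts)
    while queue:
        node, path = queue.popleft()
        if node[0] == goal_table:
            return path
        for nxt in JOINABLE.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None
```

When more than one path connects the same pair of tables (the "multiple join mappings" case above), the search would return several candidates and a subcontext would be needed to disambiguate.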
  • the system processes 116 the concept tree and the hypergraph to generate the building blocks of an output query, referred to as a virtual query.
  • the virtual query is a representation of the query components including, for example, selected attributes, grouped attributes, aggregations, filters, and joins. These components are created from the nodes of the transformed concept tree, in other words, concepts that are processed, merged, or annotated, except for the join specifications, which come from the hypergraph analysis.
  • the system processes 118 the virtual query to generate a structured query.
  • the virtual query can be translated into a structured query by processing the query components represented by the virtual query.
  • the translation can be customized to generate structured queries in different dialects depending on the type of query evaluation engine being used.
  • the virtual query can be translated into different query languages, e.g., corresponding to the language of the received query.
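  • A minimal sketch of translating a virtual query's components into a SQL string, with one dialect customization point. The component layout and the dialect handling shown are illustrative assumptions, not the patent's translator:

```python
# Translate a virtual query (dict of components) into a SQL string.
def to_sql(vq, dialect="standard"):
    sql = f"SELECT {', '.join(vq['select'])} FROM {vq['from']}"
    if vq.get("where"):
        sql += " WHERE " + " AND ".join(vq["where"])
    if vq.get("group_by"):
        sql += " GROUP BY " + ", ".join(vq["group_by"])
    if vq.get("limit") is not None:
        # Dialect customization point: T-SQL uses TOP, most others LIMIT.
        if dialect == "tsql":
            sql = sql.replace("SELECT", f"SELECT TOP {vq['limit']}", 1)
        else:
            sql += f" LIMIT {vq['limit']}"
    return sql

# Components for a query like "computer sales per production country
# where sales is more than 1000" (names follow earlier examples).
vq = {"select": ["production_country", "SUM(sales_cost_usd)"],
      "from": "sales",
      "where": ["sales_cost_usd > 1000"],
      "group_by": ["production_country"]}
```

Because the virtual query is dialect-neutral, the same component dict can be rendered for different query evaluation engines by swapping the rendering rules.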
  • a failure can occur at different stages of the conversion.
  • the present specification describes techniques for identifying the failure and acting on the failure.
  • the action can include resolving the failure through additional processing.
  • the action can be taken at the corresponding stage of the conversion. For example, if there is a failure at the parsing of the natural language query, the system can request an alternative parse.
  • the action is propagated all the way to the user. For example, the user can be prompted to clarify a portion of the input query, e.g., to clarify a binding of a constant value.
  • FIG. 2 is a block diagram illustrating an example system 200 for handling failures in processing natural language queries through user interactions.
  • the system 200 includes a natural language (NL) front end 220 and a knowledge base 230 .
  • the system 200 receives natural language queries originating from one or more user devices 210 , e.g., a smart phone 210 -B and a laptop 210 -A, and converts them into structured operations, e.g., programming statements, to be performed on application programming interfaces (APIs) of the knowledge base 230 .
  • the system 200 can cause a prompt to be presented to a user requesting the user to provide input to correct the failure.
  • the knowledge base 230 includes a knowledge acquisition subsystem 232 and an entity database 234 .
  • the knowledge base 230 provides structured APIs for use by programs to query and update the entity database 234 .
  • the knowledge acquisition subsystem 232 obtains, from external sources, e.g., the Internet, additional entity information and stores it in association with existing entity information in the entity database 234 and according to the data schema of the knowledge base.
  • the knowledge acquisition subsystem may communicate directly with external sources, bypassing the NL frontend 220 .
  • the entity database 234 stores entity information, i.e., information about entities, e.g., dates of birth of people, addresses for businesses, and relationships between multiple organizations.
  • entity information is stored in the entity database 234 according to a data schema.
  • the entity database 234 stores entity information using a table structure.
  • the entity database 234 stores entity information in a graph structure.
  • a data schema is generally expressed using a formal language supported by a database management system (DBMS) of the entity database.
  • a data schema specifies the organization of entity information as it is logically constructed in the entity database, e.g., dividing entity information into database tables when the entity database is a relational database.
  • a data schema can include data representing integrity constraints specific to an application, e.g., which columns in a table the application can access and how input parameters should be organized to query a certain table.
  • a data schema may define, for example, tables, fields, relationships, views, indexes, packages, procedures, functions, queues, triggers, types, sequences, materialized views, synonyms, database links, directories, XML schemas, and other elements.
  • the NL frontend 220 which can be implemented on one or more computers located at one or more locations, includes an NL input/output interface 222 , a conversion and failure handling subsystem 224 , and a conversion database 226 .
  • the NL input/output interface 222 receives, from users, natural language queries and, when the system 200 finishes processing these queries, provides matching search results back to the users, generally through a network connection to a user device.
  • Conversion rules stored in the conversion database 226 may be specific to the data schema used by the underlying knowledge base. For example, if the underlying knowledge base stores entity information as a graph structure that uses nodes to represent entities and edges to represent relationships between the entities, the conversion rules may specify how a natural language query or update statement is to be parsed to generate statements, e.g., input parameter, operands between these input parameters, and output parameters, for querying the graph structure.
  • the system may use conversion rules to generate the following statements: 1. find a node connected with the Node “US president” by a “1st” edge; and 2. retrieve the node's name “George Washington.”
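  • The two generated statements can be sketched against a toy graph store. The node and edge names follow the example; the storage layout and function name are assumptions:

```python
# Toy graph store: outgoing edges per node, and node attributes.
GRAPH = {
    "US president": {"1st": "node_42"},
}
NODES = {"node_42": {"name": "George Washington"}}

def query_first_president():
    # 1. find a node connected with the node "US president" by a "1st" edge
    node_id = GRAPH["US president"]["1st"]
    # 2. retrieve the node's name
    return NODES[node_id]["name"]
```
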
  • the conversion and failure handling subsystem 224 converts natural language queries received from users into structured operations to be performed on APIs of the knowledge base 230 .
  • the subsystem 224 performs these conversions based on conversion rules specified in the conversion database 226 .
  • the subsystem 224 can resolve the failure or can present information about the failure to a user and interact with the user to resolve the failure.
  • Different types of failures may occur because processing a natural language query includes several stages, e.g., parsing, tokenization, dependency analysis, concept tree analysis, and SQL query generation, and failures may occur at any one of these stages.
  • alternative parses can be generated and scored.
  • a winning alternative parse, e.g., one with a highest score, can be used to generate the structured query.
  • FIG. 3 is a flow diagram illustrating an example process 300 for iterating over query versions. For convenience the process 300 is described with respect to a system that performs the process 300 , for example, the system described with respect to FIG. 2 .
  • the system parses the natural language query 302 .
  • the natural language query can correspond with an obtained user input query.
  • the natural language query can be obtained and parsed, for example, as described above with respect to FIG. 1 .
  • the system determines 304, based on analysis of the parsed query, whether the parsed query triggers an error or a warning.
  • a warning can be used as a quality measure that indicates the parsed query is not as expected but can still be processed.
  • An error is a failure that indicates that something is wrong with the parsed query and the conversion process to a structured query cannot proceed. More than one warning can be triggered during analysis of the parsed query depending on the stage of the analysis.
  • if a warning is triggered, the system computes 306 a quality score.
  • the quality score can be stored along with state information, e.g., the parse result, and warning information, e.g., information on the cause, location, and relevant query tokens.
  • the system determines 308 whether there is an alternative parse.
  • the quality score can depend on the number of warnings triggered during the analysis of the parsed query.
  • if an error is triggered, the system likewise determines 308 whether there is an alternative parse, and additionally logs the error and state information.
  • the state information can include the cause, location, and relevant tokens associated with the error.
  • in response to a determination that there is an alternative parse, yes branch, the system iterates from step 302 . Thus, multiple alternative parses can be analyzed if subsequent warnings or errors are triggered.
  • the system selects a best available parse 310 .
  • the quality scores for the parses are compared. For example, the parse with the highest quality score can be selected.
  • the system determines whether this parse is a best parse.
  • a best parse is a parse that may have warnings, but does not have any errors. If such a best parse is found, the system generates 314 a structured query.
  • the analysis of the parsed query, or parsed alternative queries, includes the generation of a transformed concept tree, which can then be used to generate the structured query.
  • if a best parse is not found, for example, if the best available parse still has an error, the system generates 316 an error message. If each iteration resulted in an error being triggered, the system cannot continue. A particular error message can be presented to the user. In some implementations, the user can be prompted to take action to correct the input query. Additionally, even when a best parse is found, if there are generated warnings the system can generate 316 a warning message that can be provided to the user.
  • in response to a determination that a query or alternative query has no error or warning triggered, the system generates 314 the structured query.
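  • The iteration over parses can be condensed into a small control-flow sketch. The scoring by warning count is an assumption consistent with the description above, and the analysis function is a stand-in for the real pipeline stages:

```python
# Condensed sketch of the FIG. 3 loop: analyze each parse, score by
# warnings, then either generate a query from the best error-free parse
# or report the errors.
def best_parse(parses, analyze):
    """analyze(parse) -> (errors, warnings); return (parse, errors)."""
    candidates = []
    for parse in parses:                  # steps 302/308: try each parse
        errors, warnings = analyze(parse)
        quality = -len(warnings)          # step 306: fewer warnings = better
        candidates.append((quality, errors, parse))
    # step 310: select the best available parse by quality score
    quality, errors, parse = max(candidates, key=lambda c: c[0])
    if errors:                            # step 316: no error-free parse
        return None, errors
    return parse, []                      # step 314: generate structured query
```
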
  • an error can be determined that results in a failure, or a warning can be triggered, resulting in a quality score that indicates lower confidence.
  • a number of different types of errors can be determined.
  • the system can determine that a bad parse exists, for example, when the system is not able to generate a concept tree from the parsed query. In response to a bad parse, the system determines whether an alternative parse exists. If no alternative parse exists, a failure can occur. If an alternative parse does exist, the analysis is performed using the alternative parse.
  • An ambiguous column reference error can occur in several different stages of the conversion process. As described above with respect to FIG. 1 , the system matches the constituents identified by the parsing to particular n-grams. However, there may be multiple matches possible, e.g., there may be multiple column matches to a particular n-gram. Instead of recording the error at this stage, the system can record all possible matches and determine if further analysis in the concept tree transformation stage, described in FIG. 1 , resolves the ambiguity. Furthermore, during hypergraph analysis the system can determine that there are no subcontexts available to disambiguate which join path is the one to use for a column.
  • the system can prompt the user to specify a particular subcontext to resolve the ambiguity.
  • the ambiguity may be due to a bad parse.
  • the system can attempt alternate parses to resolve the ambiguity before prompting the user.
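  • Deferring an ambiguous column match, as described above, can be sketched as recording every candidate and erroring only if later stages leave more than one. The candidate lists and subcontext filtering are illustrative assumptions; the column names follow the "country" example in the text:

```python
# Candidate column matches per phrase (illustrative).
COLUMN_MATCHES = {
    "country": ["sale_country", "production_country"],
    "sales": ["sales_cost_usd"],
}

def match_column(phrase, subcontext=None):
    """Resolve a phrase to a column, using a subcontext to disambiguate."""
    candidates = COLUMN_MATCHES.get(phrase, [])
    if subcontext:  # e.g. "production" narrows "country"
        candidates = [c for c in candidates if c.startswith(subcontext)]
    if len(candidates) == 1:
        return candidates[0]
    # Ambiguity survived all stages: surface it to the user.
    raise ValueError(f"ambiguous column reference for {phrase!r}: {candidates}")
```
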
  • the input query can be “countries where sales is more than 1000.”
  • This query can generate the following error message, which can be provided to the user: We found an ambiguous column reference in the query for the phrase “country”. We were not able to disambiguate the column as it had multiple matches:
  • the modified query “Production countries where sales is more than 1000” can result in the following structured query:
  • Analysis of the parsed query can result in a malformed concept tree that prevents the system from identifying what a specified constant value references or that an identified column has an incompatible type with the constant.
  • the system can determine whether alternative parses resolve the problem as a way to ensure the problem is not a bad parse. If the alternative parses do not resolve the ambiguity, the error can be propagated to the user as a message identifying the particular constant phrase and requesting clarification.
  • the input query can be “likes for name ‘JohnDoe’”.
  • the parse for this query leads to a concept tree where the dependency relationship between the constant string ‘JohnDoe’ and the attribute name was not properly captured.
  • An example of this concept tree is shown in FIG. 4 .
  • the concept “JohnDoe” is not shown as dependent on the concept “name.”
  • a different query version, e.g., "likes where name is 'JohnDoe'", is parsed properly and results in the concept tree 500 shown in FIG. 5 .
  • in concept tree 500 , the dependency of "JohnDoe" on "name" is correctly defined. This results in a conversion to the following structured query:
  • Some datetime representations look very much like integer numbers, for example, 2015 is both a number and a datetime constant.
  • the parsing may not be able to disambiguate between the number and the datetime constant. Therefore, the system uses the context of the phrase to determine whether it is actually a datetime or a numeric constant. This can be performed during the concept tree analysis stage. If the system is unable to disambiguate, an error is generated.
  • the system checks for alternative parses to confirm that the ambiguity error is not caused by the parse. If alternative parses do not resolve the ambiguity, a message can be provided to the user pointing out the particular datetime/numeric expression and request clarification.
  • the query that can result in an error requiring user input to resolve is: “Total revenue in 2015.”
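Context-based disambiguation of this kind can be sketched as follows; the hint-word lists are invented for illustration and are not part of the described system.

```python
import re

# Words that, when they precede a four-digit token, hint at a reading.
DATE_HINTS = {"in", "during", "since", "before", "after", "year"}
NUMBER_HINTS = {"than", "is", "equals", "over", "under"}

def classify_constant(token, preceding_word):
    """Classify a four-digit token as a datetime or numeric constant
    from the word before it; return 'ambiguous' when context is no
    help, which is when the system would ask the user to clarify."""
    if not re.fullmatch(r"\d{4}", token):
        return "number"
    word = preceding_word.lower()
    if word in DATE_HINTS:
        return "datetime"
    if word in NUMBER_HINTS:
        return "number"
    return "ambiguous"
```

For "Total revenue in 2015," the preceding "in" resolves "2015" as a datetime; with no such hint, the user would be prompted.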
  • Negation and comparison keywords are important for generating predicates correctly.
  • the keywords are processed during the concept tree analysis stage. The system generates warnings when it is not able to process them properly, that is, when a keyword concept was not used to set or modify a relation.
  • the warnings are most likely caused by either a bad parse or a malformed sentence.
  • the system attempts alternate parses first to see if there is an alternative version that allows the system to process the keywords properly. Since the errors are warnings and not failures, the system may generate a structured query anyway, assuming there are no other errors. However, the system can still notify the user with a message indicating that the system was unable to process the keyword.
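This warn-but-continue behavior can be sketched as follows; the flat token layout and the position check are simplifications assumed for illustration only.

```python
def build_predicate(tokens):
    """Build a predicate from tokens such as
    ['production cost', 'is', 'not', '2000'].
    If 'not' is present but not directly before the constant, keep the
    un-negated predicate and return a warning instead of failing."""
    warnings = []
    attribute, value = tokens[0], tokens[-1]
    negated = "not" in tokens
    if negated and tokens.index("not") != len(tokens) - 2:
        warnings.append(
            "unable to interpret the negation keyword 'not' in the query")
        negated = False
    operator = "!=" if negated else "="
    return f"{attribute} {operator} {value}", warnings
```

A well-positioned "not" yields a negated predicate; a mis-positioned one still yields a query, plus a warning for the user.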
  • the input query can be “sales where production cost is not 2000.”
  • the parse result concept tree for the input query is illustrated in FIG. 6 .
  • the negation concept, “not,” is not located correctly. Consequently, a warning can be generated for the parse indicating that the system was unable to interpret the negation keyword “not” in the input query. If there are no alternative parses that do not generate a warning, the structured query generated from the input query can be:
  • If there is an alternative parse that resolves the issues, an example resulting concept tree is shown in FIG. 7.
  • the concept tree 700 shown in FIG. 7 correctly positions the negation concept.
  • the structured query generated can be:
  • There are different types of aggregation errors that can occur during analysis of the input query, in particular, during concept tree analysis.
  • One type of aggregation error occurs when an aggregation function is not applied. This can occur when the system is unable to associate an aggregation function with an attribute or structured query expression.
  • an input query “average where production country is France” can result in an error message being generated indicating that the system was unable to associate an aggregate function, specifically [average] in the input query with the column to which it has to be applied.
  • a corrected query “average sales where production country is France,” can be used to generate the structured query:
  • a second type of aggregation error that can occur during concept tree analysis is an aggregation function over non-compatible type. This aggregation error occurs when the query indicates that an aggregation is specified over an attribute that is not type compatible, for example, averaging a string attribute.
  • a third type of aggregation error can occur when a distinct keyword is recognized but was not properly associated with a compatible aggregate argument. For example, the query “number of distinct production countries where sold country is France” generates an error message because the system is not able to interpret the “distinct” keyword in the input query. A corrected query “distinct number of production countries where sold country is France” can be used to generate the structured query:
  • a fourth type of aggregation error can occur when one or more aggregate arguments are not specified.
  • a fifth type of aggregation error can occur when the query specifies an aggregate expression, e.g., a measure, as a grouping key. For example, consider the query “sum of clicks per sum of impressions,” where both “clicks” and “impressions” are numeric measures. The use of “per” in the query indicates the query is malformed. An error message can be generated indicating that the aggregate expression “sum of impressions” was specified as a dimension in the input query.
  • the issue may be caused by either a bad parse or a malformed sentence.
  • the system can attempt alternative parses to see if an alternative parse resolves the error. If an alternative parse does not exist, the error can be presented to the user, for example, with a prompt to correct the input query.
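The fifth error type, a measure used as a grouping key, can be detected with a check along these lines; the measure set and the reliance on the literal word "per" are assumptions made for this sketch.

```python
# Hypothetical set of numeric measures known to the schema.
MEASURES = {"clicks", "impressions", "sales", "revenue"}

def grouping_key_error(query):
    """Return an error message if the grouping key after 'per'
    contains a measure, e.g. 'sum of clicks per sum of impressions';
    return None if the grouping key looks like a plain dimension."""
    if " per " not in query:
        return None
    grouping_key = query.split(" per ", 1)[1]
    if set(grouping_key.split()) & MEASURES:
        return (f"aggregate expression '{grouping_key}' was specified "
                f"as a dimension in the input query")
    return None
```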
  • the system may determine that it is unable to uniquely identify a column reference.
  • the system may be able to perform a partial matching to join paths to determine which join step is missing.
  • the system checks for alternative parses to make sure that the error is not caused by the parse.
  • the system may communicate with the user the missing references that are needed, for example, subcontext phrases, with a request that the user identify correct join paths.
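Partial matching against known join paths might look like the following sketch; the path tuples and attribute names are hypothetical, invented only to illustrate the idea.

```python
# Hypothetical full join paths from the 'sales' table to each address.
FULL_PATHS = [
    ("sales", "buyer", "personal address", "state"),
    ("sales", "buyer", "business address", "state"),
]

def resolve_join(mentioned_steps):
    """Match the steps the query actually mentions against the full
    paths; if several paths fit, report the subcontext phrases the
    user could add to choose one."""
    hits = [path for path in FULL_PATHS
            if all(step in path for step in mentioned_steps)]
    if len(hits) > 1:
        choices = sorted(path[2] for path in hits)
        raise ValueError(f"ambiguous join path; add one of: {choices}")
    if not hits:
        raise KeyError("no join path found")
    return hits[0]
```

For "buyer's location," neither address subcontext is named, so both paths match and the user is asked to pick one.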
  • the input query can be “sales where buyer's location is in Nevada.”
  • the error generated can be a determination by the system of ambiguous references in the query that indicate a join step is missing.
  • the system can present the user with information indicating where the missing reference lies, e.g., as illustrated in the following table:
  • the example query can also cause an Info message to inform the user that the noun phrase “location” is not recognized.
  • a correction replacing “location” with “personal address” can result in generation of the following structured query:
  • n-grams generated by the system for concepts should be processed during the concept tree analysis, except for some keywords that the system recognizes that may also serve as parts of speech. For example, if there is a constant literal concept, the system should be able to determine which column it is relevant to and ultimately generate a predicate from it. If the system ends up with concepts that were not processed, it is an indication that something is missing, even if the system is still able to generate a structured query.
  • If a structured query is generated, the system should return it along with a warning to let the user know that there may be something missing.
  • the message can indicate, e.g., highlight, what may be missing.
  • the handling may depend on the concept type. At a minimum, an error message can be returned to the user.
  • the system monitors for noun phrases that are not matched to any lexicons, e.g., attributes, subcontexts, etc., and generates dummy concepts for them to make sure they play their role in forming the concept tree properly. It is highly possible that an unrecognized noun phrase is a misspelled phrase or a partially provided multi-gram.
  • the system can recognize either “personal address” or “business address” phrases but the user only includes the phrase “address” in the query.
  • the system will, if possible, generate a corresponding structured query without processing the unmatched phrase, but can also propagate a message to the user saying that the phrase “address” is not matched to any phrases that the system recognizes.
  • the message may further note that the phrase may correspond to “personal address” or “business address”.
  • the user's input query is misspelled and uses “personnel address”.
  • the system can recognize the similarity and ask the user if s/he meant “personal address” instead.
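A similarity check of this kind can be built on standard edit-distance tooling, e.g., Python's difflib; the lexicon below is hypothetical.

```python
import difflib

# Hypothetical lexicon of phrases the system recognizes.
LEXICON = ["personal address", "business address", "buyer name"]

def suggest_phrase(phrase):
    """Return the closest known lexicon phrase for an unmatched noun
    phrase, or None if nothing is similar enough to suggest."""
    matches = difflib.get_close_matches(phrase, LEXICON, n=1, cutoff=0.8)
    return matches[0] if matches else None
```

The misspelling "personnel address" is close enough to "personal address" to trigger a did-you-mean prompt, while an unrelated phrase yields no suggestion.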
  • the system can check to see if the user has access to a table (and column) whenever the system creates a concept for it.
  • the system can show the user an error message indicating that the user does not have any access to a table, or can show the query only, e.g., if the user has peeker access only, or can show both the query and the result, e.g., if the user has data access. If the user does not have data access but can see the schemas, the system may treat inverted index hits as constant literals or get explicit verification from the user to treat them as index hits.
  • the system may generate a bad parse. If the system is unable to identify one or more alternative parses that are processed successfully, then the user can be prompted with a message that describes the problem. The user can then modify the natural language query and the parsing can be attempted again.
  • the received natural language query can result in an ambiguous column reference.
  • the query “countries where sales is more than 1000” requires user input to disambiguate.
  • the user can be provided with a list of possible interpretations to aid the user in clarifying the use of “country” in the submitted query.
  • the system provides corresponding subcontext phrases to clarify each possible meaning of ‘country.’ The user can then add a particular phrase and retry, for example, “production countries where sales is more than 1000.”
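The subcontext-phrase disambiguation can be sketched as follows; the column map is a hypothetical stand-in for the system's lexicon, not the actual data model.

```python
# Hypothetical lexicon: the bare word "country" appears in two
# subcontext phrases, so it cannot be resolved without more context.
COLUMNS = {
    "production country": "production.country",
    "sold country": "sales.country",
    "sales": "sales.amount",
}

def resolve_column(phrase):
    """Return the unique column for `phrase`, or raise an error that
    lists the candidate subcontext phrases so the user can pick one."""
    matches = sorted(full for full in COLUMNS if phrase in full)
    if len(matches) > 1:
        raise ValueError(
            f"ambiguous column reference for phrase '{phrase}'; "
            f"did you mean one of {matches}?")
    if not matches:
        raise KeyError(phrase)
    return COLUMNS[matches[0]]
```

Adding a subcontext phrase, as in "production countries where sales is more than 1000," makes the lookup unique.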
  • the received natural language query can result in aggregation errors. For example, the query “number of distinct production countries where sold country is France” results in an error with a message to the user that indicates that the system is unable to associate “distinct” with an expression. The user then has an opportunity to rewrite the query.
  • the received natural language query can result in a missing join step.
  • the query “sales where buyer's location is in Nevada” does not provide enough information for the system to identify what “Nevada” refers to. From join analysis the system detects that it can reference either one of buyer's business location or buyer's home location. The system provides a display of the possible phrases that the user can use to fix the query.
  • the system can still provide all warnings (with context info) to the user if the best parse has warnings. For example, unused comparison or negation keywords will be highlighted in the natural language query along with the warning message. At that point the user may check the structured query and decide to modify the natural language query (possibly using more proper English) to avoid the warnings. Similar handling applies to ‘unprocessed concept’, ‘unmatched noun phrase’, and ‘ambiguous datetime’ errors.
  • the user receives the translated structured query and the version of the query (if an alternate parse is used) that the system used. Otherwise, the user is provided with guidance through error/warning messages.
  • FIGS. 8-12 illustrate some example user interactions for resolving failures.
  • One type of failure that may occur when processing a natural language query is the missing token failure.
  • Tokenization is the process of breaking up text into units, which are conventionally called tokens.
  • a token can represent one or more words, numbers, or punctuation marks.
  • FIG. 8 is a block diagram illustrating an example process 800 for handling a missing token failure through user interactions.
  • a missing token failure occurs when a natural language processing system cannot locate words in an original query that correspond to required tokens. For example, because the subject is missing from the natural language query “Where is?” a missing token failure may arise when the system processes this query.
  • process 800 will be described as being performed by a system of one or more computers, located in one or more locations, and programmed appropriately in accordance with this specification.
  • the system 200 of FIG. 2 appropriately programmed, can perform the process 800 .
  • the process 800 begins with the system obtaining a user-provided natural language query 802 , e.g., “How much is a non-red 2015?”
  • Having received the natural language query 802, the system attempts to convert the natural language query 802 into structured operations, e.g., SQL queries, suitable for operation on a table-based knowledge base 850.
  • one of the conversion steps includes tokenizing the natural language query 802 based on an underlying data schema of the knowledge base 850 , e.g., a vehicle table 810 .
  • the natural language processing system breaks the natural language query 802 down into the following tokens 804 : “Non-red” and “2015.”
  • the system deems the tokens 804 to have been incorrectly produced and a missing token failure to have occurred.
  • the system prompts a user for input to resolve the failure. For example, the system may ask a user to provide a make and model of a vehicle to clarify the submitted natural language query 802 as shown in step 806 . A user can respond by clarifying the natural language query 802 with additional context to produce a clarified natural language query, e.g., “How much is a blue color Cadillac ATS 2015?”
  • the process 800 may resume by processing the clarified query, e.g., using the natural language query 802 as context.
  • the system may produce the tokens “blue,” “Cadillac ATS,” and “2015” from the clarified query and generate SQL queries based on the new tokens.
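The lexicon-driven tokenization illustrated by FIG. 8 can be sketched as follows; the lexicon entries and the "required token" rule are invented for this example.

```python
# Hypothetical lexicon mapping surface phrases to schema columns.
LEXICON = {
    "non-red": "color", "blue": "color",
    "cadillac ats": "make_model",
    "2015": "year",
}
REQUIRED = {"make_model"}  # token kinds a price query must contain

def tokenize(query):
    """Greedily match bigrams then unigrams against the lexicon and
    report which required token kinds are missing."""
    words = query.lower().rstrip("?").split()
    tokens, i = {}, 0
    while i < len(words):
        bigram = " ".join(words[i:i + 2])
        if bigram in LEXICON:
            tokens[LEXICON[bigram]] = bigram
            i += 2
        elif words[i] in LEXICON:
            tokens[LEXICON[words[i]]] = words[i]
            i += 1
        else:
            i += 1
    return tokens, REQUIRED - tokens.keys()
```

The original query leaves the make-and-model token missing; the clarified query fills it in.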
  • Another type of failure that may occur when processing a natural language query is the overly complex query failure.
  • a query that is semantically complicated is likely to have a large number of lexicon matches and dependency relationships, which can cause a failure when they exceed a system's ability to process.
  • FIG. 9 is a block diagram illustrating an example process 900 for handling a lexicon matching or dependency failure through user interactions.
  • the process 900 will be described as being performed by a system of one or more computers, located in one or more locations, and programmed appropriately in accordance with this specification.
  • the system 200 of FIG. 2 appropriately programmed, can perform the process 900 .
  • a natural language processing system may attempt to resolve the dependencies of the phrase “second hand ones” when converting the natural language query 902 into one or more SQL queries.
  • Resolving the dependencies of the phrase may produce a large number of possible outcomes, e.g., “second hand non-red Cadillac CTS 2015”; “second hand non-red Cadillac CTS”; “second hand non-red Cadillac 2015”; “second hand Cadillac CTS 2015”; “second hand Cadillac CTS”; “second hand Cadillac 2015”; or “second hand sedan.” When these exceed a specified maximum number of outcomes the system can handle for a single natural language query, the system may experience a lexicon matching failure or a dependency failure 906.
  • the system may provide a query building user interface, through which the user can either rewrite the original natural language query 902 or provide linguistic boundaries for the terms included in the original natural language query 902 , to reduce query complexity.
  • the system may provide user interface (UI) controls, e.g., radio buttons and dropdown lists, as filters, so that a user may remove dependencies in the natural language query 902 .
  • a user may apply a condition filter, e.g., with the value “second hand,” in conjunction with a make and model filter, e.g., with the value “Cadillac CTS” and a year filter, e.g., with the value “2015,” to clarify that the term “second hand” refers to a “Cadillac CTS 2015.”
  • the system may process a new query based on the filter values.
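Building a query from explicit filter values can be sketched as follows; the table and column names are hypothetical, and each filter pins exactly one column, so no free-form dependencies remain to resolve.

```python
def query_from_filters(filters):
    """Build a SQL string from UI filter values; because every filter
    names its column explicitly, the dependency ambiguity of the
    original free-form phrase is gone."""
    clauses = [f"{column} = '{value}'"
               for column, value in sorted(filters.items())]
    return "SELECT price FROM vehicles WHERE " + " AND ".join(clauses)
```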
  • A third type of failure that may occur when processing a natural language query is the data access failure. For example, when a user queries against a data source to which the user lacks access, a data access failure occurs.
  • FIG. 10 is a block diagram illustrating an example process 1000 for handling a data access failure through user interactions.
  • the process 1000 will be described as being performed by a system of one or more computers, located in one or more locations, and programmed appropriately in accordance with this specification.
  • the system 200 of FIG. 2 appropriately programmed, can perform the process 1000 .
  • a natural language processing system may determine, at step 1004, that processing the natural language query 1002 requires read access to a vehicle table 1010. However, the system may determine that the user has not been granted read access to the vehicle table 1010, e.g., based on permissions specified in the user's profile.
  • the system can experience a data access failure 1004 .
  • the system provides a suggestion as to how to resolve the failure. For example, the system may suggest that the user contact a database administrator to obtain appropriate data access and then rerun the query. The user can then follow the suggestions to resolve the failure so that the processing can proceed.
  • the system avoids providing information that can potentially reveal data to which the user lacks access. For example, the system can refrain from revealing to the user the name of the data table, e.g., the vehicle table 1010 , or the data columns, e.g., the “color” and “make & model” columns, to which the user lacks read access. Instead, the system may provide only generic instructions directing a user to resolve a data access failure, e.g., suggesting that the user should contact a database administrator.
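An access check with a deliberately generic denial message can be sketched as follows; the ACL structure and user names are hypothetical.

```python
# Hypothetical per-user sets of tables with read access granted.
ACL = {"alice": {"sales"}, "bob": {"sales", "vehicles"}}

GENERIC_DENIAL = ("You lack access to data needed by this query; "
                  "please contact a database administrator.")

def check_read_access(user, tables):
    """Return (ok, message). The denial message never names the tables
    the user cannot read, to avoid leaking schema information."""
    granted = ACL.get(user, set())
    if set(tables) - granted:
        return False, GENERIC_DENIAL
    return True, None
```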
  • A fourth type of failure that may occur when processing a natural language query is the linguistic ambiguity failure. For example, when a natural language query includes ambiguities that can lead to multiple different interpretations of the query terms, a linguistic ambiguity failure occurs.
  • FIG. 11 is a block diagram illustrating an example process 1100 for handling a linguistic ambiguity failure through user interactions.
  • the process 1100 will be described as being performed by a system of one or more computers, located in one or more locations, and programmed appropriately in accordance with this specification.
  • the system 200 of FIG. 2 appropriately programmed, can perform the process 1100 .
  • a natural language processing system may, as shown in step 1104, interpret the natural language query 1102 as two separate queries of “Where can I get bacon?” and “Where can I get egg sandwich?”
  • the system may also interpret, as shown in step 1106 , the natural language query 1102 as a single query of “Where can I get a sandwich that includes both bacon and egg?”
  • the system may deem both alternatives plausible, or even equally likely.
  • the system can experience a linguistic ambiguity failure.
  • the system prompts a user to clarify the natural language query 1102 to remove ambiguity. For example, the system may prompt a user to clarify whether she meant to search for where to get “a bacon and egg sandwich,” as shown in step 1108 .
  • the system can proceed to process the clarified query and produce matching results.
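Enumerating the competing readings can be sketched with simple string surgery; this is a toy stand-in for a real parser, assuming the coordination pattern "A and B N."

```python
def coordination_readings(noun_phrase):
    """For a phrase like 'bacon and egg sandwich', return both
    readings: two separate targets ('bacon', 'egg sandwich') and one
    joint target ('bacon and egg sandwich'). More than one reading
    means the system should ask the user which was intended."""
    first, rest = noun_phrase.split(" and ", 1)
    split_reading = [first, rest]
    joint_reading = [noun_phrase]
    return [split_reading, joint_reading]
```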
  • FIG. 12 is a flow diagram illustrating an example process 1200 for handling failures in processing natural language queries through user interactions.
  • the process 1200 will be described as being performed by a system of one or more computers, located in one or more locations, and programmed appropriately in accordance with this specification.
  • the system 200 of FIG. 2 appropriately programmed, can perform the process 1200 .
  • the process 1200 begins with the system obtaining ( 1202 ) a natural language query from a user through a natural language frontend.
  • After obtaining the query, the system attempts to convert the query into structured operations to be performed on structured application programming interfaces (APIs) of a knowledge base. For example, the system may parse a plain English query to produce several tokens and map the produced tokens to a data table's schema in order to generate a SQL query.
  • Failures may occur when the system attempts to convert the natural language query into one or more structured operations.
  • When the system detects a failure, the system provides (1204), through a user interaction interface, information to the user describing the failure, e.g., to prompt the user to help resolve the failure. For example, when a linguistic ambiguity failure occurs, the system may provide the user a choice of interpreting a natural language query in a certain way, to resolve ambiguity.
  • the system modifies ( 1206 ) the conversion process based on the user's input.
  • the system modifies the conversion process by abandoning the original query and processing a new query.
  • the system modifies the conversion process by continuing to process the original query in view of the user's input, e.g., context.
  • the system may generate SQL queries accordingly.
  • the system then continues the process 1200 by performing ( 1208 ) the one or more structured operations, e.g., SQL queries, on the structured APIs of the knowledge base. Once operation results, e.g., matching query results, are produced, the system provides ( 1210 ) them to the user.
  • a user enters a natural language query through a user interface.
  • the natural language query processing system parses the query to generate a document tree and performs a phrase dependency analysis to generate dependencies between constituents.
  • the system then performs a lexical resolution, which includes an n-gram matching followed by generation of concepts for the matched n-grams.
  • the system forms a concept tree based on the generated concepts and the dependencies between the concepts.
  • the system may also transform the concept tree by modifying relationships between the concepts in the tree.
  • the next stage is virtual query generation; it starts with the hypergraph analysis step, in which path resolution is performed.
  • the system iterates through all the nodes (concepts) to generate the building blocks for the output query and uses the hypergraph to generate all the joins (if any).
  • the structured query can be processed to generate the actual SQL query.
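The final rendering step can be sketched as follows; the dict-shaped intermediate form is an assumption made for illustration, not the system's actual structured query representation.

```python
def to_sql(structured):
    """Render an intermediate structured query as a SQL string, the
    last step of the pipeline described above."""
    sql = "SELECT " + ", ".join(structured["select"])
    sql += " FROM " + structured["from"]
    if structured.get("where"):
        sql += " WHERE " + " AND ".join(structured["where"])
    if structured.get("group_by"):
        sql += " GROUP BY " + ", ".join(structured["group_by"])
    return sql
```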
  • a failure can happen in any of these stages and a natural language query processing system may catch and propagate the failure to a user for resolution or may record the issue to investigate as a bug.
  • the system keeps track of the context and provides a reasonable amount of information so that an action can be taken.
  • the action could be taken at any of the stages described earlier (e.g., requesting an alternate parse from the parser) or could be propagated all the way up to the user (e.g., requesting that a user clarify the binding of a constant value).
  • iterating over query versions can include determining alternative parses for a given original natural language query.
  • the parse result of the original query is examined. If the original query does not have any verbs or if the punctuation at the end of the query is not consistent with the parse output, the system can make one or more minor changes to the query to make it closer to a properly formed sentence or question.
  • the original query can be “Revenue in France yesterday per sales channel?” This query is actually a noun phrase with a question mark at the end.
  • the system may be able to get a better parse if it changes the original query to a proper question, for example, “What is revenue in France yesterday per sales channel?” which parses as a proper question.
  • the system may get a better parse by adding a verb to the original query, for example, “Show me revenue in France yesterday per sales channel” which parses as a proper sentence.
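These minor rewrites, turning a verbless noun phrase into a question or a command, can be sketched as follows; only the two rewrite templates mentioned above are used.

```python
def alternate_versions(query):
    """Produce candidate rewrites of a verbless noun-phrase query that
    are closer to a well-formed question or sentence, to be re-parsed
    in turn until one yields a successful analysis."""
    base = query.rstrip("?").rstrip()
    base = base[0].lower() + base[1:]
    return [f"What is {base}?", f"Show me {base}"]
```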
  • the original query input by the user can be “sales per buyer name where buyer's personal address is in California, and the seller's business address is in Nevada?”
  • This query parses as a sentence but with a question mark at the end. The parse loses some dependencies and results in errors being triggered during the parse analysis. However, the following changed queries parse correctly:
  • the resulting structured query can be:
  • the original input query can lack proper punctuation and/or be interpretable in multiple ways.
  • the initial parse result for such queries may not result in a successful analysis.
  • the system's attempt to try alternate parses based on basic modifications as discussed above may also fail to produce a successful analysis.
  • the system can generate alternative parses by using other techniques, e.g., external to the parser, to augment the input query with some token range constraints before sending the query to the parser. These constraints are processed by the parser as a unit and often result in an alternative version that can be interpreted correctly, e.g., with a successful analysis or a high quality score. There are different techniques that can be used to generate the alternative queries based on particular grammars.
  • An example original query is “sales and average likes of buyer where seller has more than 100 likes.”
  • the basic changes for generating alternative versions as described above do not result in a successful parse.
  • An example of a generated alternative query with token range constraints is “{sales and average likes of buyer} where {seller has more than 100 likes}”, which results in a successful parse.
  • the constraints are marked by the use of curly braces { }.
  • the system may generate multiple versions and use a ranking mechanism to feed those into the analysis based on their rank.
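Wrapping spans in token range constraints is simple once the spans are chosen; the sketch below assumes the spans are already given, which is the hard part only hinted at here.

```python
def add_token_constraints(query, spans):
    """Wrap each phrase span in curly braces so the parser treats it
    as a single unit."""
    for span in spans:
        query = query.replace(span, "{" + span + "}", 1)
    return query
```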
  • the resulting structured query can be:
  • Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly-embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them.
  • Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non-transitory storage medium for execution by, or to control the operation of, data processing apparatus.
  • the computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.
  • the program instructions can be encoded on an artificially-generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus.
  • data processing apparatus refers to data processing hardware and encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers.
  • the apparatus can also be, or further include, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
  • the apparatus can optionally include, in addition to hardware, code that creates an execution environment for computer programs, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
  • a computer program which may also be referred to or described as a program, software, a software application, an app, a module, a software module, a script, or code, can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages; and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • a program may, but need not, correspond to a file in a file system.
  • a program can be stored in a portion of a file that holds other programs or data, e.g., one or more scripts stored in a markup language document, in a single file dedicated to the program in question, or in multiple coordinated files, e.g., files that store one or more modules, sub-programs, or portions of code.
  • a computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a data communication network.
  • the processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output.
  • the processes and logic flows can also be performed by special purpose logic circuitry, e.g., an FPGA or an ASIC, or by a combination of special purpose logic circuitry and one or more programmed computers.
  • Computers suitable for the execution of a computer program can be based on general or special purpose microprocessors or both, or any other kind of central processing unit.
  • a central processing unit will receive instructions and data from a read-only memory or a random access memory or both.
  • the essential elements of a computer are a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data.
  • the central processing unit and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
  • a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks.
  • a computer need not have such devices.
  • a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device, e.g., a universal serial bus (USB) flash drive, to name just a few.
  • Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
  • a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer.
  • Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
  • a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's device in response to requests received from the web browser.
  • Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interaction interface, a web browser, or an app through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back-end, middleware, or front-end components.
  • the components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (LAN) and a wide area network (WAN), e.g., the Internet.
  • the computing system can include clients and servers.
  • a client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
  • a server transmits data, e.g., an HTML page, to a user device, e.g., for purposes of displaying data to and receiving user input from a user interacting with the device, which acts as a client.
  • Data generated at the user device e.g., a result of the user interaction, can be received at the server from the device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Machine Translation (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Systems, methods, and computer storage media for handling failures in generating structured queries from natural language queries. One of the methods includes obtaining, through a natural language front end, a natural language query from a user; converting the natural language query into structured operations to be performed on structured application programming interfaces (APIs) of a knowledge base, comprising: parsing the natural language query, analyzing the parsed query to determine dependencies, performing lexical resolution, forming a concept tree based on the dependencies and lexical resolution, analyzing the concept tree to generate a hypergraph, generating a virtual query based on the hypergraph, and processing the virtual query to generate one or more structured operations; performing the one or more structured operations on the structured APIs of the knowledge base; and returning search results matching the natural language query to the user.

Description

    CLAIM OF PRIORITY
  • This application claims the benefit under 35 U.S.C. §119(e) of the filing date of U.S. Provisional Patent Application Ser. No. 62/217,260, for “Handling Failures in Processing Natural Language Queries Through User Interactions,” which was filed on Sep. 11, 2015, and which is incorporated here by reference.
  • BACKGROUND
  • This specification relates to handling failures in processing natural language queries.
  • Failures may occur when a computer system attempts to process natural language queries provided by users to provide matching search results. An iterative model may be used to handle these failures.
  • Implementing an iterative model in this context, however, may be prohibitive, e.g., a complete set of definitions of terms that may be used in a user-provided natural language query is often needed.
  • SUMMARY
  • This specification describes techniques for handling failures in generating SQL queries from natural language queries.
  • In general, one innovative aspect of the subject matter described in this specification can be embodied in methods that include the actions of obtaining, through a natural language front end, a natural language query from a user; converting the natural language query into structured operations to be performed on structured application programming interfaces (APIs) of a knowledge base, comprising: parsing the natural language query, analyzing the parsed query to determine dependencies, performing lexical resolution, forming a concept tree based on the dependencies and lexical resolution, analyzing the concept tree to generate a hypergraph, generating a virtual query based on the hypergraph, and processing the virtual query to generate one or more structured operations; performing the one or more structured operations on the structured APIs of the knowledge base; and returning search results matching the natural language query to the user. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods. For a system of one or more computers to be configured to perform particular operations or actions means that the system has installed on it software, firmware, hardware, or a combination of them that in operation cause the system to perform the operations or actions. For one or more computer programs to be configured to perform particular operations or actions means that the one or more programs include instructions that, when executed by data processing apparatus, cause the apparatus to perform the operations or actions.
  • The foregoing and other embodiments can each optionally include one or more of the following features, alone or in combination. In particular, one embodiment includes all the following features in combination. Parsing the natural language query includes breaking the natural language query into phrases and placing the phrases in a parsing tree as nodes. Performing lexical resolution includes generating concepts for one or more of the parsed phrases. Analyzing the concept tree includes: analyzing concepts and parent-child or sibling relationships in the concept tree; and transforming the concept tree including annotating concepts with new information, moving concepts, deleting concepts, or merging concepts with other concepts. The hypergraph represents a database schema where data tables may have multiple join mappings among themselves. The method further includes analyzing the hypergraph including performing path resolution for joins using the concept tree. The method further includes detecting a failure during conversion of the natural language query to the one or more structured operations. The method further includes resolving the failure through additional processing including determining if an alternative parse for the natural language query is available. The method further includes resolving the failure through additional processing including: providing to the user, through a user interaction interface, one or more information items identifying the failure; and, responsive to a user interaction with an information item, modifying the natural language query in accordance with the user interaction to generate the one or more structured operations. The failure can be based on one or more of a bad parse, an ambiguous column reference, an ambiguous constant, an ambiguous datetime, unused comparison keywords or negation keywords, aggregation errors, a missing join step, an unprocessed concept, an unmatched noun phrase, or missing data access.
The knowledge base, the natural language front end, and the user interaction interface are implemented on one or more computers and one or more storage devices storing instructions, and wherein the knowledge base stores information associated with entities according to a data schema and has the APIs for programs to query the knowledge base.
  • The subject matter described in this specification can be implemented in particular embodiments so as to realize one or more of the following advantages. Efforts for handling failures in processing natural language queries can be reduced. Natural language terms can be matched to lexicons recognized by a natural language processing system through user interactions, reducing the need for complete definitions of query terms upfront that may appear in a natural language query. Also, linguistic ambiguities detected in a user-provided natural language query can be resolved as they arise, eliminating the need to produce search results based on each alternative interpretation. Further, data access issues can be brought to a user's attention early on without risking any data security breach.
  • User interactions can be minimized in generating structured queries from natural language queries. In particular, the system uses techniques to avoid unnecessary iterations through user actions by assessing the quality of the parse and the structured query that can be generated, through identification of certain errors or warnings during parsing and processing of the input query expressed in natural language. This assessment allows the system to perform operations to provide a translation of the natural language query to a structured query while overcoming some shortcomings of the parser or some grammatical/structural mistakes in the natural language query. Consequently, the system can often determine the structured query from compact sentences or even phrases. This improves the user experience and makes translating natural language queries into structured queries more useful.
  • In some situations, the system cannot determine the structured query without user interaction. In those cases, the system attempts to guide the user towards corrections that can resolve the errors and lead to a successful translation into a structured query. For example, if there is ambiguity, the system can identify and present possible interpretations and choices for disambiguation. This helps the user quickly correct the natural language query and improves the speed of generating the structured query in those cases.
  • The system allows users who are not experienced with the particular data domain or query languages to obtain specifically desired information using natural language queries. The system accepts queries presented in plain English (or a language of the user's choice) and processes them through the use of NLP (natural language processing) techniques to generate and run the corresponding structured query in the query backend and return the result to the user. To process the natural language query, a number of schema lexicons are generated which provide a number of mappings used to process the natural language query.
  • The details of one or more embodiments of the subject matter of this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a flow diagram of an example process of converting a natural language query into a structured query.
  • FIG. 2 is a block diagram illustrating an example system for handling failures in processing natural language queries through user interactions.
  • FIG. 3 is a flow diagram illustrating an example process for iterating over query versions.
  • FIGS. 4-7 are diagrams of example concept trees.
  • FIG. 8 is a block diagram illustrating an example process for handling a missing token failure through user interactions.
  • FIG. 9 is a block diagram illustrating an example process for handling a lexicon matching failure through user interactions.
  • FIG. 10 is a block diagram illustrating an example process for handling a data access failure through user interactions.
  • FIG. 11 is a block diagram illustrating an example process for handling a linguistic ambiguity failure through user interactions.
  • FIG. 12 is a flow diagram illustrating an example process for handling failures in processing natural language queries through user interactions.
  • Like reference numbers and designations in the various drawings indicate like elements.
  • DETAILED DESCRIPTION
  • Overview
  • Users can provide queries using natural language, for example, a free form English text string. A system can convert the received natural language queries into structured queries, for example, structured query language (“SQL”) queries. The structured queries can be executed and responsive data can be returned for output. For example, in response to a query the converted structured query can be used to obtain data responsive to the query, which can then be returned to the user.
  • The system may not always be able to successfully convert a given natural language query into a structured query. In particular, the natural language query can include errors made by the user including typos, malformed sentences, or missing keywords. The system also may be unable to convert the natural language query due to limitations of the system in recognizing particular sentence formations.
  • A process of converting a natural language query into a structured query can undergo a number of stages. FIG. 1 is a flow diagram of an example process 100 of converting a natural language query into a structured query. For convenience the process is described with respect to a system that performs the process, for example, the system described below with respect to FIG. 2.
  • The system obtains 102 a natural language query. The system can receive a query input by a user through a user interface. For example, the user interface can be a search interface through which a user can submit natural language search queries. Details of individual process steps are described in greater detail below with respect to FIGS. 2-7.
  • The system parses 104 the obtained natural language query. The parser can be used to parse a natural language query into tokens, for example, parsing the query “Where can I get bacon and egg sandwich?” into the following tokens: “Where,” “I,” “get,” “bacon and egg,” and “sandwich.” Two types of parsers can be used: a dependency parser and a constituency parser. Another example query can be “computer sales per sale country and production country for goods manufactured in ASIA and sold in EMEA.” This query can be parsed into tokens “sales,” “per,” “sale country,” “production country,” “manufactured,” “ASIA,” “sold,” and “EMEA.”
  • A constituency parser breaks a natural language query into phrases and places these phrases in a parsing tree as nodes. The non-terminal nodes in a parsing tree are types of phrases, e.g., Noun Phrase or Verb Phrase; the terminal nodes are the phrases themselves, and the edges are unlabeled.
  • A dependency parser breaks words in a natural language query according to the relationships between these words. Each node in a parsing tree represents a word, child nodes are words that are dependent on the parent, and edges are labeled by the relationship.
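The two parse representations described above can be contrasted with a toy sketch. The labels ("NP", "amod") follow common parser conventions and are purely illustrative, as is the small helper for reading modifier relationships out of a dependency tree:

```python
# Constituency view: a phrase-type label over the phrase itself.
constituency = ("NP", ["production countries"])

# Dependency view: each word is a node; edges carry labeled relationships.
dependency = {"word": "countries",
              "children": [{"word": "production", "relation": "amod"}]}

def modifier_pairs(node):
    """Collect (head, modifier, relation) triples from a dependency tree."""
    pairs = []
    for child in node.get("children", []):
        pairs.append((node["word"], child["word"], child["relation"]))
        pairs.extend(modifier_pairs(child))
    return pairs

pairs = modifier_pairs(dependency)
```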
  • The system analyzes 106 the parsed query to determine dependencies between constituents. The dependency analysis allows the system to identify modifier relationships between the parsed phrases. Additionally, the system performs 108 lexical resolution to identify matching n-grams and generates concepts for the matched n-grams. A concept created for a phrase, e.g., an n-gram, captures what the phrase means to some group of people. This meaning can be identified through the use of one or more lexicons. For example, in the above example, the phrase “sales” can be recognized as an n-gram mapping to a “sales_cost_usd” column in a table for a particular schema lexicon. Consequently, an attribute concept is generated as corresponding to the phrase “sales” in the parsed query. Other information may be known from the lexicon, for example, that the phrase is associated with a numeric and aggregatable column. This information can be used when eventually generating corresponding structured queries.
  • A number of different types of concepts can be created based on phrases that are recognized including, for example, attributes, date/time window expressions, parts of speech (e.g., per, by, for, in, or not), numeric/string constants, recognized constants, subcontexts, and aggregations. Recognized constants can be identified, for example, through an inverted index or through common items.
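Lexical resolution as described above can be sketched as matching query n-grams against a schema lexicon. The lexicon entries below reuse the column names from the examples in this description, but the structure and helper functions are hypothetical:

```python
# Toy schema lexicon: phrase -> (table, column, properties).
SCHEMA_LEXICON = {
    "sales": ("FactoryToConsumer", "sales_cost_usd",
              {"numeric": True, "aggregatable": True}),
    "sale country": ("FactoryToConsumer", "sale_country_code",
                     {"numeric": False}),
    "production country": ("FactoryToConsumer", "manufacture_country_code",
                           {"numeric": False}),
}

def ngrams(tokens, max_n=3):
    """Yield all n-grams (as strings) up to max_n, longest first."""
    for n in range(max_n, 0, -1):
        for i in range(len(tokens) - n + 1):
            yield " ".join(tokens[i:i + n])

def resolve(tokens):
    """Return attribute concepts for every n-gram found in the lexicon."""
    concepts = []
    for gram in ngrams(tokens):
        if gram in SCHEMA_LEXICON:
            table, column, props = SCHEMA_LEXICON[gram]
            concepts.append({"phrase": gram, "table": table,
                             "column": column, **props})
    return concepts

concepts = resolve(["sales", "per", "sale", "country"])
```

A real implementation would also suppress shorter matches subsumed by longer ones and record multiple candidate matches for later disambiguation.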
  • The system forms 110 a concept tree from the generated concepts and dependencies between n-grams. The initial concept tree is created from the concepts corresponding to the parsed phrases and the identified dependency relationships. The concepts are represented by nodes in the concept tree. However, the initial concept tree does not include information that can be inferred from parent-child relationships of the concept tree itself. Thus, the initial concept tree represents an intermediate structure used by the system to generate structured queries after performing additional analysis, simplifications, and transformations over the concept tree. The analysis and transformations allow the system to identify meaningful and unambiguous mappings between entities represented in the concept tree and attributes, joins, aggregations, and/or predicates that can be used to form structured queries that accurately represent the intent of the user submitting the query.
  • The system processes 112 the concepts and dependencies of the concept tree to transform the concept tree. In particular, the concepts and the parent-child or sibling relationships in the concept tree are analyzed. The transformations are based on a system of inference rules over the semantic representation provided by the concept tree that allows the system to untangle syntactic ambiguities. The concepts that are transformed may be annotated with new information, moved around, deleted, or merged with other concepts. The concepts remaining after the processing form a transformed concept tree, whose concepts deterministically map to query operations/components, facilitating translation into a structured query by simply processing them one by one to build up the query components.
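One such inference rule can be illustrated with a minimal sketch: a constant concept that depends on an attribute concept is merged into its parent as a filter annotation, mirroring the "name is 'JohnDoe'" example discussed later. The class and rule below are hypothetical stand-ins, not the actual transformation system:

```python
class Concept:
    """A concept-tree node: a kind (attribute, constant, ...), a value,
    children, and annotations added by transformations."""
    def __init__(self, kind, value, children=None):
        self.kind = kind
        self.value = value
        self.children = children or []
        self.filter = None

def merge_constants(node):
    """One inference rule: a constant that depends on an attribute becomes
    a filter annotation on that attribute, and the constant node is removed."""
    kept = []
    for child in node.children:
        merge_constants(child)
        if node.kind == "attribute" and child.kind == "constant":
            node.filter = ("=", child.value)   # annotate, drop the child
        else:
            kept.append(child)
    node.children = kept
    return node

# likes -> full_name -> 'JohnDoe'   (dependencies taken from the parse)
tree = Concept("attribute", "likes",
               [Concept("attribute", "full_name",
                        [Concept("constant", "JohnDoe")])])
merge_constants(tree)
```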
  • The system creates 114 a hypergraph from the concept tree and analyses the hypergraph to generate joins. A hypergraph represents a database schema where data tables may have multiple join mappings among themselves. The hypergraph can include a set of nodes representing columns of data tables stored in a database, as well as a set of edges representing tables to which the columns belong. Two nodes are connected by an edge if the columns represented by the two nodes are joinable; and the edge identifies the tables to which the columns belong. The hypergraph analysis includes path resolution for joins using the concept tree.
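Path resolution for joins can be approximated by a graph search over joinable columns. The sketch below uses breadth-first search over an illustrative schema (the table and column names are invented for this example); returning no path would correspond to a "missing join step" failure:

```python
from collections import deque

# Each entry says two columns are joinable: (table_a, col_a) <-> (table_b, col_b).
JOINABLE = [
    (("Orders", "customer_id"), ("Customers", "id")),
    (("Customers", "region_id"), ("Regions", "id")),
]

def join_path(src_table, dst_table):
    """Breadth-first search for a chain of joins from src_table to dst_table."""
    adjacency = {}
    for (ta, ca), (tb, cb) in JOINABLE:
        adjacency.setdefault(ta, []).append((tb, ca, cb))
        adjacency.setdefault(tb, []).append((ta, cb, ca))
    queue = deque([(src_table, [])])
    seen = {src_table}
    while queue:
        table, path = queue.popleft()
        if table == dst_table:
            return path
        for nxt, col_here, col_there in adjacency.get(table, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(table, col_here, nxt, col_there)]))
    return None   # no join path found

path = join_path("Orders", "Regions")
```

In a true hypergraph there can be several join paths between the same tables, which is where subcontexts from the concept tree are needed to disambiguate.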
  • Once the concept tree is transformed and the hypergraph analysis is complete, the system processes 116 the concept tree and the hypergraph to generate the building blocks of an output query, referred to as a virtual query. The virtual query is a representation of the query components including, for example, selected attributes, grouped attributes, aggregations, filters, and joins. These components are created from the nodes of the transformed concept tree, in other words, concepts that have been processed, merged, or annotated, except for the join specifications, which come from the hypergraph analysis.
  • The system processes 118 the virtual query to generate a structured query. The virtual query can be translated into a structured query by processing the query components represented by the virtual query. The translation can be customized to generate structured queries in different dialects depending on the type of query evaluation engine being used. Additionally, the virtual query can be translated into different query languages, e.g., corresponding to the language of the received query.
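A minimal sketch of this last step, assuming a dictionary-shaped virtual query, is shown below. The components reproduce the "Production countries where sales is more than 1000" example from this description; the translation function itself is a simplified stand-in for a dialect-aware generator:

```python
# Hypothetical virtual query: the components named above as plain data.
virtual_query = {
    "select": ["manufacture_country_code"],
    "aggregations": [("SUM", "sales_usd", "alias0_sales_usd")],
    "from": "FactoryToConsumer",
    "group_by": [1],
    "having": [("alias0_sales_usd", ">", 1000)],
}

def to_sql(vq):
    """Translate the virtual query components into one SQL dialect."""
    select_parts = list(vq["select"])
    select_parts += [f"{fn}({col}) AS {alias}"
                     for fn, col, alias in vq["aggregations"]]
    sql = f"SELECT {', '.join(select_parts)} FROM {vq['from']}"
    if vq["group_by"]:
        sql += " GROUP BY " + ", ".join(str(g) for g in vq["group_by"])
    for col, op, val in vq["having"]:
        sql += f" HAVING {col} {op} {val}"
    return sql + ";"

sql = to_sql(virtual_query)
```

Dialect customization would live inside `to_sql`, so the same virtual query can target different query evaluation engines.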
  • A failure can occur at different stages of the conversion. The present specification describes techniques for identifying the failure and acting on the failure. The action can include resolving the failure through additional processing. In particular, the action can be taken at the corresponding stage of the conversion. For example, if there is a failure at the parsing of the natural language query, the system can request an alternative parse. In some implementations, the action is propagated all the way to the user. For example, the user can be prompted to clarify a portion of the input query, e.g., to clarify a binding of a constant value.
  • System Architecture
  • FIG. 2 is a block diagram illustrating an example system 200 for handling failures in processing natural language queries through user interactions.
  • The system 200 includes a natural language (NL) front end 220 and a knowledge base 230.
  • The system 200 receives natural language queries originating from one or more user devices 210, e.g., a smart phone 210-B and a laptop 210-A, and converts them into structured operations, e.g., programming statements, to be performed on application programming interfaces (APIs) of the knowledge base 230.
  • When the system 200 detects a predefined type of conversion failure, the system 200 can cause a prompt to be presented to a user requesting the user to provide input to correct the failure. Note that not all conversion failures require user input or interaction; rather, only some types of failures, e.g., data access issues or selected linguistic ambiguities, require user input. The system is configured to handle most issues without user interaction using one or more techniques for handling failures as described in this specification.
  • The knowledge base 230 includes a knowledge acquisition subsystem 232 and an entity database 234. The knowledge base 230 provides structured APIs for use by programs to query and update the entity database 234.
  • The knowledge acquisition subsystem 232 obtains, from external sources, e.g., the Internet, additional entity information and stores it in association with existing entity information in the entity database 234 and according to the data schema of the knowledge base. The knowledge acquisition subsystem may communicate directly with external sources, bypassing the NL frontend 220.
  • The entity database 234 stores entity information, i.e., information about entities, e.g., dates of birth of people, addresses for businesses, and relationships between multiple organizations. The entity information is stored in the entity database 234 according to a data schema. In some implementations, the entity database 234 stores entity information using a table structure. In other implementations, the entity database 234 stores entity information in a graph structure.
  • A data schema is generally expressed using a formal language supported by a database management system (DBMS) of the entity database. A data schema specifies the organization of entity information as it is logically constructed in the entity database, e.g., dividing entity information into database tables when the entity database is a relational database.
  • A data schema can include data representing integrity constraints specific to an application, e.g., which columns in a table the application can access and how input parameters should be organized to query a certain table. In a relational database, a data schema may define, for example, tables, fields, relationships, views, indexes, packages, procedures, functions, queues, triggers, types, sequences, materialized views, synonyms, database links, directories, XML schemas, and other elements.
  • The NL frontend 220, which can be implemented on one or more computers located at one or more locations, includes an NL input/output interface 222, a conversion and failure handling subsystem 224, and a conversion database 226. The NL input/output interface 222 receives, from users, natural language queries and, when the system 200 finishes processing these queries, provides matching search results back to the users, generally through a network connection to a user device.
  • The conversion database 226 stores rules for generating structured operations to be performed on APIs of the knowledge base 230 based on natural language queries. For example, based on (1) the configuration that the knowledge base stores entity information using data tables and (2) the names of these tables specified in an application schema, which is explained in greater detail with reference to FIG. 8, a conversion rule may specify that a natural language query, “How much is a non-red Cadillac CTS 2015?” should be converted to a structured query language (SQL) statement “Select MSRP From Table vehicle Where make_and_model=‘Cadillac CTS’ and color=‘Non-red.’”
  • Conversion rules stored in the conversion database 226 may be specific to the data schema used by the underlying knowledge base. For example, if the underlying knowledge base stores entity information as a graph structure that uses nodes to represent entities and edges to represent relationships between the entities, the conversion rules may specify how a natural language query or update statement is to be parsed to generate statements, e.g., input parameter, operands between these input parameters, and output parameters, for querying the graph structure.
  • For example, after receiving the natural language query “Who is the first president of the United States?” the system may use conversion rules to generate the following statements: 1. find a node connected with the Node “US president” by a “1st” edge; and 2. retrieve the node's name “George Washington.”
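The two generated statements can be sketched against a toy graph structure. The edge table and lookup helper below are purely illustrative, not the knowledge base's actual API:

```python
# Toy graph: (node, edge label) -> connected node.
EDGES = {
    ("US president", "1st"): "George Washington",
    ("US president", "2nd"): "John Adams",
}

def find_connected(node, edge_label):
    """Statement 1: find the node connected by the labeled edge."""
    return EDGES.get((node, edge_label))

# Statement 2: retrieve the found node's name.
answer = find_connected("US president", "1st")
```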
  • The conversion and failure handling subsystem 224 converts natural language queries received from users into structured operations to be performed on APIs of the knowledge base 230. The subsystem 224 performs these conversions based on conversion rules specified in the conversion database 226.
  • During a conversion process, when a failure occurs, the subsystem 224 can resolve the failure or can present information about the failure to a user and interact with the user to resolve the failure. Different types of failures may occur, because processing a natural language query includes several stages, e.g., parsing, tokenization, dependency analysis, concept tree analysis, and SQL query generation, and failures may occur at any one of these stages.
  • Iterating Over Query Versions
  • When a failure occurs, alternative parses can be generated and scored. A winning alternative parse, e.g., one with a highest score, can be used to generate the structured query.
  • FIG. 3 is a flow diagram illustrating an example process 300 for iterating over query versions. For convenience the process 300 is described with respect to a system that performs the process 300, for example, the system described with respect to FIG. 2.
  • The system parses the natural language query 302. Initially, the natural language query can correspond with an obtained user input query. The natural language query can be obtained and parsed, for example, as described above with respect to FIG. 1.
  • The system determines 304, based on analysis of the parsed query, whether the parsed query triggers an error or a warning. A warning can be used as a quality measure that indicates the parsed query is not as expected but can still be processed. An error is a failure that indicates that something is wrong with the parsed query and the conversion process to a structured query cannot proceed. More than one warning can be triggered during analysis of the parsed query depending on the stage of the analysis.
  • In response to a determination that a warning is triggered by the parsed query, warning branch, the system computes 306 a quality score. The quality score can be stored along with state information, e.g., the parse result, and warning information, e.g., information on the cause, location, and relevant query tokens. After computing the quality score, the system determines 308 whether there is an alternative parse. The quality score can depend on the number of warnings triggered during the analysis of the parsed query.
  • In response to a determination that an error is triggered by the parsed query, error branch, the system determines 308 whether there is an alternative parse. Additionally, the system logs the error and state information. The state information can include the cause, location, and relevant tokens associated with the error.
  • In response to a determination that there is an alternative parse, yes branch, the system iterates from step 302. Thus, multiple alternative parses can be analyzed if subsequent warnings or errors are triggered.
  • In response to a determination that there is no alternative parse available, the system selects a best available parse 310.
  • If one or more of the iterations resulted in warnings, the quality scores for the parses are compared. For example, the parse with the highest quality score can be selected.
  • After selecting the best available parse, the system determines whether this parse is a best parse. A best parse is a parse that may have warnings, but does not have any errors. If such a best parse is found, the system generates 314 a structured query. The analysis of the parsed query, or parsed alternative queries, includes the generation of a transformed concept tree, which can then be used to generate the structured query.
  • If a best parse is not found, for example, if the best available parse still has an error, the system generates 316 an error message. If each iteration resulted in an error being triggered, the system cannot continue. A particular error message can be presented to the user. In some implementations, the user can be prompted to take action to correct the input query. Additionally, even when a best parse is found, if there are generated warnings the system can generate 316 a warning message that can be provided to the user.
  • Returning to the determining at step 304, in response to a determination that a query or alternative query has no error or warning triggered, the system generates 314 the structured query.
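The iteration described above can be condensed into a short sketch: try each available parse, discard parses with errors, score the rest by their warnings, and pick the winner. The `analyze` callable stands in for the full parsing and analysis pipeline:

```python
def best_parse(parses, analyze):
    """analyze(parse) -> (errors, warnings); fewest warnings wins."""
    candidates = []
    for parse in parses:
        errors, warnings = analyze(parse)
        if errors:
            continue                      # log the error, try the next parse
        candidates.append((len(warnings), parse))
    if not candidates:
        return None                       # every parse errored: report failure
    candidates.sort(key=lambda c: c[0])   # highest quality = fewest warnings
    return candidates[0][1]

# Toy analysis results: "B" is clean, "A" has a warning, "C" has an error.
results = {"A": ([], ["ambiguous column"]),
           "B": ([], []),
           "C": (["bad parse"], [])}
winner = best_parse(["A", "B", "C"], lambda p: results[p])
```

A fuller version would also carry the state information (cause, location, relevant tokens) recorded with each warning or error so it can be surfaced to the user.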
  • Recording and Propagation of Failures
  • During the conversion of a natural language query, an error can be determined that results in a failure or a warning can be triggered resulting in a quality score that indicates a lower confidence. A number of different types of errors can be determined.
  • Bad Parse:
  • The system can determine that a bad parse exists, for example, when the system is not able to generate a concept tree from the parsed query. In response to a bad parse, the system determines whether an alternative parse exists. If no alternative parse exists, a failure can occur. If an alternative parse does exist, the analysis is performed using the alternative parse.
  • Ambiguous Column Reference:
  • An ambiguous column reference error can occur in several different stages of the conversion process. As described above with respect to FIG. 1, the system matches the constituents identified by the parsing to particular n-grams. However, there may be multiple matches possible, e.g., there may be multiple column matches to a particular n-gram. Instead of recording the error at this stage, the system can record all possible matches and determine if further analysis in the concept tree transformation stage, described in FIG. 1, resolves the ambiguity. Furthermore, during hypergraph analysis the system can determine that there are no subcontexts available to disambiguate which join path is the one to use for a column.
  • In response to the error, the system can prompt the user to specify a particular subcontext to resolve the ambiguity. Alternatively, the ambiguity may be due to a bad parse. The system can attempt alternate parses to resolve the ambiguity before prompting the user.
  • For example, the input query can be “countries where sales is more than 1000.” This query can generate the following error message, which can be provided to the user: We found an ambiguous column reference in the query for the phrase “country”. We were not able to disambiguate the column as it had multiple matches:
  • Table Column Possible Phrase
    FactoryToConsumer Manufacture_country_code Production
    FactoryToConsumer Package_country_code Package
    FactoryToConsumer Sale_country_code Sold
  • The modified query: “Production countries where sales is more than 1000” can result in the following structured query:
  • SELECT
       manufacture_country_code,
       SUM(sales_usd) AS alias0_sales_usd
    FROM FactoryToConsumer
    GROUP BY 1
    HAVING alias0_sales_usd > 1000;
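  • The record-all-matches-then-disambiguate approach described above can be sketched as follows. The lexicon contents are hypothetical, modeled loosely on the FactoryToConsumer example; the real system resolves ambiguity during concept tree transformation and hypergraph analysis rather than with a flat dictionary.

```python
# Hypothetical lexicon: each column mapped to the phrases that can refer to it.
LEXICON = {
    "manufacture_country_code": {"country", "production"},
    "package_country_code": {"country", "package"},
    "sale_country_code": {"country", "sold"},
}

def resolve_column(phrase, subcontext=None):
    """Record every column matching a phrase; a subcontext phrase, if
    present, narrows the candidates. Raise when the reference stays ambiguous."""
    matches = [col for col, words in LEXICON.items() if phrase in words]
    if subcontext is not None:
        matches = [col for col in matches if subcontext in LEXICON[col]]
    if len(matches) != 1:
        raise LookupError(
            f"ambiguous column reference for the phrase {phrase!r}: {sorted(matches)}")
    return matches[0]
```

Adding the subcontext phrase "production", as in the modified query above, selects a unique column.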
  • Ambiguous Constant:
  • Analysis of the parsed query, particularly during concept tree analysis described above with respect to FIG. 1, can result in a malformed concept tree that prevents the system from identifying what a specified constant value references or that an identified column has an incompatible type with the constant.
  • In response to the identified error, the system can determine whether alternative parses resolve the problem as a way to ensure the problem is not a bad parse. If the alternative parses do not resolve the ambiguity, the error can be propagated to the user as a message identifying the particular constant phrase and requesting clarification.
  • For example, the input query can be “likes for name ‘JohnDoe’”. The parse for this query leads to a concept tree where the dependency relationship between the constant string ‘JohnDoe’ and the attribute name was not properly captured. An example of this concept tree is shown in FIG. 4. In the example concept tree 400 shown in FIG. 4, the concept “JohnDoe” is not shown as dependent on the concept “name.” However, a different query version, e.g., “likes where name is ‘JohnDoe’”, is parsed properly and results in the concept tree 500 shown in FIG. 5. In concept tree 500, the dependency of “JohnDoe” on “name” is correctly defined. This results in a conversion to the following structured query:
  • SELECT
       likes
    FROM buyer_seller.Person
    WHERE full_name = ‘JohnDoe’;
  • Ambiguous Datetime:
  • Some datetime representations look very much like integer numbers; for example, 2015 is both a number and a datetime constant. The parsing may not be able to disambiguate between the number and the datetime constant. Therefore, the system uses the context of the phrase to determine whether it is actually a datetime or a numeric constant. This can be performed during the concept tree analysis stage. If the system is unable to disambiguate, an error is generated.
  • In response to the error, the system checks for alternative parses to confirm that the ambiguity error is not caused by the parse. If alternative parses do not resolve the ambiguity, a message can be provided to the user pointing out the particular datetime/numeric expression and requesting clarification.
  • For example, the query that can result in an error requiring user input to resolve is: “Total revenue in 2015.”
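  • A context-based classification of a four-digit token can be sketched as follows. The cue lists are illustrative assumptions only; the actual system resolves this during concept tree analysis, and phrases such as “in 2015” can remain ambiguous and be escalated to the user.

```python
import re

# Illustrative context cues (not the system's actual rules).
DATETIME_CUES = {"year", "since", "until"}
NUMBER_CUES = {"than", "over", "under", "exactly"}

def classify_constant(token, prev_word):
    """Decide whether a four-digit token such as '2015' is a datetime
    or a numeric constant, based on the word to its left."""
    if not re.fullmatch(r"\d{4}", token):
        return "number"
    if prev_word in DATETIME_CUES:
        return "datetime"
    if prev_word in NUMBER_CUES:
        return "number"
    return "ambiguous"  # propagate to the user for clarification
```

The "ambiguous" result corresponds to the case where the system asks the user to clarify the datetime/numeric expression.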
  • Unused Comparison Keywords or Negation Keywords:
  • Negation and comparison keywords are important for generating predicates correctly. The keywords are processed during the concept tree analysis stage. The system generates warnings when it is not able to process them properly, that is, when the keyword concept was not used to set or modify a relation.
  • The warnings are most likely caused by either a bad parse or a malformed sentence. The system attempts alternate parses first to see if there is an alternative version that allows the system to process the keywords properly. Since these are warnings and not failures, the system may generate a structured query anyway, assuming there are no other errors. However, the system can still notify the user with a message indicating that the system was unable to process the keyword.
  • For example, the input query can be “sales where production cost is not 2000.” The parse result concept tree for the input query is illustrated in FIG. 6. In the concept tree 600 shown in FIG. 6 the negation concept, “not,” is not located correctly. Consequently, a warning can be generated for the parse indicating that the system was unable to interpret the negation keyword “not” in the input query. If there are no alternative parses that do not generate a warning, the structured query generated from the input query can be:
  • SELECT
       SUM(production_cost) AS alias0_production_cost,
       SUM(sales_usd) AS alias1_sales_usd
    FROM geo.FactoryToConsumer
    HAVING alias0_production_cost = 2000;
  • If there is an alternative parse that resolves the issues, an example resulting concept tree is shown in FIG. 7. The concept tree 700 shown in FIG. 7 correctly positions the negation concept. As a result, the structured query generated can be:
  • SELECT
       SUM(production_cost) AS alias0_production_cost,
       SUM(sales_usd) AS alias1_sales_usd
    FROM geo.FactoryToConsumer
    HAVING alias0_production_cost != 2000;
  • Aggregation Errors:
  • There are different types of aggregation errors that can occur during analysis of the input query, in particular, during concept tree analysis. One type of aggregation error occurs when an aggregation function is not applied. This can occur when the system is unable to associate an aggregation function with an attribute or structured query expression.
  • For example, an input query “average where production country is France” can result in an error message being generated indicating that the system was unable to associate an aggregate function, specifically [average] in the input query, with the column to which it has to be applied. A corrected query, “average sales where production country is France,” can be used to generate the structured query:
  • SELECT
       AVG(sales_usd) AS alias0_sales_usd
    FROM geo.FactoryToConsumer
    WHERE manufacture_country_code = ‘FR’;
  • A second type of aggregation error that can occur during concept tree analysis is an aggregation function over a non-compatible type. This aggregation error occurs when the query specifies an aggregation over an attribute that is not type compatible, for example, averaging a string attribute.
  • A third type of aggregation error can occur when a distinct keyword is recognized but was not properly associated with a compatible aggregate argument. For example, the query “number of distinct production countries where sold country is France” generates an error message because the system is not able to interpret the “distinct” keyword in the input query. A corrected query “distinct number of production countries where sold country is France” can be used to generate the structured query:
  • SELECT
       COUNT(DISTINCT manufacture_country_code)
          AS alias0_manufacture_country_code
    FROM geo.FactoryToConsumer
    WHERE manufacture_country_code = ‘FR’;
  • A fourth type of aggregation error can occur when one or more aggregate arguments are not specified.
  • A fifth type of aggregation error can occur when the query specifies an aggregate expression, e.g., a measure, as a grouping key. For example, consider the query “sum of clicks per sum of impressions,” where both “clicks” and “impressions” are numeric measures. The use of “per” in the query indicates the query is malformed. An error message can be generated indicating that the aggregate expression “sum of impressions” was specified as a dimension in the input query.
  • In each of the aggregation errors, the issue may be caused by either a bad parse or a malformed sentence. The system can attempt alternative parses to see if an alternative parse resolves the error. If an alternative parse does not exist, the error can be presented to the user, for example, with a prompt to correct the input query.
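  • The aggregation checks described above can be sketched as a single validation step. The type names and function set are illustrative assumptions; the real checks run during concept tree analysis.

```python
NUMERIC_TYPES = {"INT64", "FLOAT64", "NUMERIC"}

def check_aggregation(func, column, column_type, distinct=False):
    """Validate an aggregate before emitting it in the structured query:
    the function must have an associated column, SUM/AVG need a numeric
    argument, and DISTINCT is attached only to a countable argument."""
    if column is None:
        raise ValueError(f"unable to associate the aggregate function [{func}] "
                         "with the column to which it has to be applied")
    if func in {"SUM", "AVG"} and column_type not in NUMERIC_TYPES:
        raise TypeError(f"cannot apply {func} to {column} of type {column_type}")
    arg = f"DISTINCT {column}" if distinct and func == "COUNT" else column
    return f"{func}({arg})"
```

The first branch models the unassociated-function error; the second models aggregation over a non-compatible type.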
  • Missing Join Step:
  • During hypergraph analysis, the system may determine that it is unable to uniquely identify a column reference. The system may be able to perform a partial matching to join paths to determine which join step is missing.
  • The system checks for alternative parses to make sure that the error is not caused by the parse. The system may communicate to the user the missing references that are needed, for example, subcontext phrases, with a request that the user identify the correct join paths.
  • For example, the input query can be “sales where buyer's location is in Nevada.” The error generated can be a determination by the system of an ambiguous reference in the query that indicates a join step is missing. The system can present the user with information indicating where the missing reference lies, e.g., as illustrated in the following table:
  • Table Column Possible Phrases
    buyer_seller.Person business_address_id business address
    buyer_seller.Person personal_address_id personal address
  • The example query can also cause an Info message to inform the user that the noun phrase “location” is not recognized.
  • A correction replacing “location” with “personal address” can result in generation of the following structured query:
  • SELECT
       SUM(buyer_seller.BuyerSeller.sales_usd) AS
    alias0_sales_usd
    FROM buyer_seller.BuyerSeller.all AS buyer_seller.BuyerSeller
    INNER JOIN buyer_seller.Person
       ON (buyer_seller.BuyerSeller.buyer_id =
    buyer_seller.Person.person_id)
    INNER JOIN buyer_seller.Address
       ON (buyer_seller.Person.personal_address_id =
    buyer_seller.Address.address_id)
    WHERE buyer_seller.Address.state = ‘NV’;
  • Unprocessed Concept:
  • The n-grams generated by the system for concepts should be processed during the concept tree analysis, except for some keywords that the system recognizes that may also serve as parts of speech. For example, if there is a constant literal concept, the system should be able to figure out which column it is relevant to and ultimately generate a predicate from it. If the system ends up with concepts that were not processed, it is an indication that something is missing, even if the system is still able to generate a structured query.
  • If a structured query is generated, the system should return it along with a warning to let the user know that there may be something missing. The message can indicate, e.g., highlight, what may be missing. If a structured query is not generated, the handling may depend on the concept type. At a minimum, an error message can be returned to the user.
  • Unmatched Noun Phrases:
  • The system monitors for noun phrases that are not matched to any lexicons, e.g., attributes, subcontexts, etc., and generates dummy concepts for them to make sure they play their role in forming the concept tree properly. It is highly possible that an unrecognized noun phrase is a misspelled phrase or a partially provided multi-gram.
  • For example, the system can recognize either the “personal address” or “business address” phrase, but the user only includes the phrase “address” in the query. The system will generate a corresponding structured query if possible without processing the phrase, but can also propagate a message to the user saying that the phrase “address” is not matched to any phrases that the system recognizes. The message may further note that the phrase may correspond to “personal address” or “business address”. Once the user specifies which one was intended, the conversion goes through.
  • In a similar example, the user's input query contains the misspelled phrase “personnel address”. The system can recognize the similarity and ask the user whether “personal address” was intended instead.
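  • A misspelling suggestion of the kind described above can be sketched with fuzzy string matching. The phrase set and similarity cutoff are illustrative assumptions, not the system's actual lexicon or matching algorithm.

```python
import difflib

# Hypothetical set of multi-gram phrases the lexicon recognizes.
KNOWN_PHRASES = ["personal address", "business address"]

def suggest_phrase(phrase, known=KNOWN_PHRASES):
    """For an unmatched noun phrase, suggest the closest recognized
    phrase, e.g., mapping 'personnel address' to 'personal address'."""
    matches = difflib.get_close_matches(phrase, known, n=1, cutoff=0.8)
    return matches[0] if matches else None
```

When no known phrase is similar enough, the sketch returns `None` and the system would fall back to the unmatched-noun-phrase message.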
  • Missing Data Access:
  • During the lexeme resolver stage, the system can check whether the user has access to a table (and column) whenever the system creates a concept for it.
  • Depending on the type of access the user has, the system can show an error message indicating that the user does not have any access to a table, can show the query only, e.g., if the user has peeker access only, or can show both the query and the result, e.g., if the user has data access. If the user does not have data access but can see the schemas, the system may treat inverted index hits as constant literals or get explicit verification from the user to treat them as index hits.
  • Examples of Using User Interactions for Resolving Failures
  • As described above, different types of failures can be resolved using user interactions. For example, the system may generate a bad parse. If the system is unable to identify one or more alternative parses that are processed successfully, then the user can be prompted with a message that describes the problem. The user can then modify the natural language query and the parsing can be attempted again.
  • The received natural language query can result in an ambiguous column reference. For example, the query “countries where sales is more than 1000” requires user input to disambiguate. The user can be provided with a list of possible interpretations to aid the user in clarifying the use of “country” in the submitted query. In some implementations, the system provides corresponding subcontext phrases to clarify each possible meaning of ‘country.’ The user can then add a particular phrase and retry, for example, “production countries where sales is more than 1000.”
  • The received natural language query can result in aggregation errors. For example, the query “number of distinct production countries where sold country is France” results in an error with a message to the user that indicates that the system is unable to associate “distinct” with an expression. The user then has an opportunity to rewrite the query.
  • The received natural language query can result in a missing join step. For example, the query “sales where buyer's location is in Nevada” does not provide enough information for the system to identify what “Nevada” refers to. From join analysis the system detects that it can reference either one of buyer's business location or buyer's home location. The system provides a display of the possible phrases that the user can use to fix the query.
  • The above are only a few examples. Even if the system is able to move forward and generate a structured query, the system can still provide all warnings (with context information) to the user if the best parse has warnings. For example, unused comparison or negation keywords will be highlighted in the natural language query along with the warning message. At that point the user may check the structured query and decide to modify the natural language query (possibly using more proper English) to avoid the warnings. Similar handling applies to “unprocessed concept,” “unmatched noun phrase,” and “ambiguous datetime” errors.
  • If the system generates a parse that does not have any warnings or errors, the user receives the translated structured query and the version of the query (if an alternate parse is used) that the system used. Otherwise the user is provided with some sort of guidance through the use of error/warning messages.
  • FIGS. 8-12 illustrate some example user interactions for resolving failures. One type of failure that may occur when processing a natural language query is a missing token failure. Tokenization is the process of breaking up text into units, which are conventionally called tokens. A token can represent one or more words, numbers, or punctuation marks.
  • FIG. 8 is a block diagram illustrating an example process 800 for handling a missing token failure through user interactions. A missing token failure occurs when a natural language processing system cannot locate words in an original query that correspond to required tokens. For example, because the subject is missing from the natural language query “Where is?” a missing token failure may arise when the system processes this query.
  • For convenience, the process 800 will be described as being performed by a system of one or more computers, located in one or more locations, and programmed appropriately in accordance with this specification. For example, the system 200 of FIG. 2, appropriately programmed, can perform the process 800.
  • The process 800 begins with the system obtaining a user-provided natural language query 802, e.g., “How much is a non-red 2015?”
  • Having received the natural language query 802, the system attempts to convert the natural language query 802 into structured operations, e.g., SQL queries, suitable for operation on a table-based knowledge base 850. In some implementations, one of the conversion steps includes tokenizing the natural language query 802 based on an underlying data schema of the knowledge base 850, e.g., a vehicle table 810.
  • As shown in FIG. 8, based on a requirement that all SQL queries to the vehicle table 810 must provide a token corresponding to a vehicle's make & model, the natural language processing system breaks the natural language query 802 down into the following tokens 804: “Non-red” and “2015.”
  • In some implementations, because the token “Non-red” has no matching value in the “make and model” column of the vehicle table 810, the system deems the tokens 804 as having been incorrectly produced and a missing token failure as having occurred.
  • Once the natural language processing system detects this failure, the system prompts a user for input to resolve the failure. For example, the system may ask a user to provide a make and model of a vehicle to clarify the submitted natural language query 802 as shown in step 806. A user can respond by clarifying the natural language query 802 with additional context to produce a clarified natural language query, e.g., “How much is a blue color Cadillac ATS 2015?”
  • The process 800 may resume by processing the clarified query, e.g., using the natural language query 802 as a context. The system may produce the following tokens from the clarified query: “blue”; “Cadillac ATS”; and “2015,” and generate SQL queries based on the new tokens.
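  • The missing token check in process 800 can be sketched as follows. The set of make & model values is a hypothetical stand-in for the vehicle table's column contents.

```python
# Hypothetical values from the vehicle table's "make & model" column.
MAKE_MODEL_VALUES = {"Cadillac ATS", "Cadillac CTS"}

def check_required_tokens(tokens):
    """Detect a missing token failure: a query against the vehicle
    table must include a token matching the make & model column."""
    if not any(t in MAKE_MODEL_VALUES for t in tokens):
        raise LookupError(
            "missing token: please provide the vehicle's make and model")
    return tokens
```

The raised error corresponds to the prompt in step 806 asking the user to supply a make and model.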
  • Another type of failure that may occur when processing a natural language query is an overly complex query failure. For example, a query that is semantically complicated is likely to have a large number of lexicon matches and dependency relationships, which can cause a failure when they exceed a system's ability to process.
  • FIG. 9 is a block diagram illustrating an example process 900 for handling a lexicon matching or dependency failure through user interactions. For convenience, the process 900 will be described as being performed by a system of one or more computers, located in one or more locations, and programmed appropriately in accordance with this specification. For example, the system 200 of FIG. 2, appropriately programmed, can perform the process 900.
  • After receiving a user-provided natural language query 902, e.g., “How much is a non-red Cadillac CTS 2015 that's new? But second hand ones are ok if cheaper than 10K or have sunroof or turbo engine,” a natural language processing system may attempt to resolve the dependencies of the phrase “second hand ones” when converting the natural language query 902 into one or more SQL queries.
  • Because resolving the dependencies 904 of the phrase “second hand ones” may produce a large number of possible outcomes, e.g., “second hand non-red Cadillac CTS 2015”; “second hand non-red Cadillac CTS”; “second hand non-red Cadillac 2015”; “second hand Cadillac CTS 2015”; “second hand Cadillac CTS”; “second hand Cadillac 2015”; “second hand Cadillac,” which can exceed a specified maximum number of outcomes the system can handle for a single natural language query, the system may experience a lexicon matching failure or a dependency failure 906.
  • When a lexicon matching or dependency failure occurs, the system may provide a query building user interface, through which the user can either rewrite the original natural language query 902 or provide linguistic boundaries for the terms included in the original natural language query 902, to reduce query complexity. For example, the system may provide user interface (UI) controls, e.g., radio buttons and dropdown lists, as filters, so that a user may remove dependencies in the natural language query 902. For example, a user may apply a condition filter, e.g., with the value “second hand,” in conjunction with a make and model filter, e.g., with the value “Cadillac CTS” and a year filter, e.g., with the value “2015,” to clarify that the term “second hand” refers to a “Cadillac CTS 2015.”
  • Once a user applies appropriate filters, the system may process a new query based on the filter values.
  • A third type of failure that may occur when processing a natural language query is a data access failure. For example, when a user queries against a data source to which the user lacks access, a data access failure occurs.
  • FIG. 10 is a block diagram illustrating an example process 1000 for handling a data access failure through user interactions. For convenience, the process 1000 will be described as being performed by a system of one or more computers, located in one or more locations, and programmed appropriately in accordance with this specification. For example, the system 200 of FIG. 2, appropriately programmed, can perform the process 1000.
  • After receiving a natural language query 1002, e.g., “How much is a non-red Cadillac CTS 2015?,” a natural language processing system may determine, at step 1004, that processing the natural language query 1002 requires read access to a vehicle table 1010. However, the system may determine that the user has not been granted read access to the vehicle table 1010, e.g., based on permissions specified in the user's profile.
  • When detecting that appropriate data access permission is lacking, the system can experience a data access failure 1004. In some implementations, the system provides a suggestion as to how to resolve the failure. For example, the system may suggest that the user contact a database administrator to receive appropriate data access and then rerun the query. The user can then follow the suggestion to resolve the failure so that the processing can proceed.
  • Note that when providing a suggestion to a user, the system avoids providing information that can potentially reveal data to which the user lacks access. For example, the system can refrain from revealing to the user the name of the data table, e.g., the vehicle table 1010, or the data columns, e.g., the “color” and “make & model” columns, to which the user lacks read access. Instead, the system may provide only generic instructions directing a user to resolve a data access failure, e.g., suggesting that the user should contact a database administrator.
  • A fourth type of failure that may occur when processing a natural language query is a linguistic ambiguity failure. For example, when a natural language query includes ambiguities that can lead to multiple different interpretations of the query terms, a linguistic ambiguity failure occurs.
  • FIG. 11 is a block diagram illustrating an example process 1100 for handling a linguistic ambiguity failure through user interactions. For convenience, the process 1100 will be described as being performed by a system of one or more computers, located in one or more locations, and programmed appropriately in accordance with this specification. For example, the system 200 of FIG. 2, appropriately programmed, can perform the process 1100.
  • After receiving a user-provided natural language query 1102, e.g., “Where can I get bacon and egg sandwich?,” a natural language processing system may, as shown in step 1104, interpret the natural language query 1102 as two separate queries of “Where can I get bacon?” and “Where can I get egg sandwich?”
  • Alternatively, the system may also interpret, as shown in step 1106, the natural language query 1102 as a single query of “Where can I get a sandwich that includes both bacon and egg?”
  • Sometimes, e.g., due to a lack of further context, the system deems both alternatives equally plausible. When facing two competing plausible interpretations, the system can experience a linguistic ambiguity failure. To resolve this failure, the system prompts a user to clarify the natural language query 1102 to remove ambiguity. For example, the system may prompt a user to clarify whether she meant to search for where to get “a bacon and egg sandwich,” as shown in step 1108.
  • Once a user clarifies the natural language query 1102, removing one or more ambiguities, the system can proceed to process the clarified query and produce matching results.
  • FIG. 12 is a flow diagram illustrating an example process 1200 for handling failures in processing natural language queries through user interactions. For convenience, the process 1200 will be described as being performed by a system of one or more computers, located in one or more locations, and programmed appropriately in accordance with this specification. For example, the system 200 of FIG. 2, appropriately programmed, can perform the process 1200.
  • The process 1200 begins with the system obtaining (1202) a natural language query from a user through a natural language frontend.
  • After obtaining the query, the system attempts to convert the query into structured operations to be performed on structured application programming interfaces (APIs) of a knowledge base. For example, the system may parse a plain English query to produce several tokens and map the produced tokens to a data table's schema in order to generate a SQL query.
  • Failures, e.g., those described in this specification, may occur when the system attempts to convert the natural language query into one or more structured operations. When the system detects a failure, the system provides (1204), through a user interaction interface, information to the user describing the failure, e.g., to prompt the user to help resolve the failure. For example, when a linguistic ambiguity failure occurs, the system may provide the user a choice of interpreting a natural language query in a certain way, to resolve ambiguity.
  • In response to receiving a user's input regarding the failure, the system modifies (1206) the conversion process based on the user's input. In some implementations, the system modifies the conversion process by abandoning the original query and processing a new query. In some other implementations, the system modifies the conversion process by continuing to process the original query in view of the user's input, e.g., context.
  • For example, having received a user selection of how an ambiguity should be resolved, e.g., “a bacon and egg sandwich” rather than “bacon” and “egg sandwich,” the system may generate SQL queries accordingly.
  • The system then continues the process 1200 by performing (1208) the one or more structured operations, e.g., SQL queries, on the structured APIs of the knowledge base. Once operation results, e.g., matching query results, are produced, the system provides (1210) them to the user.
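  • The overall flow of process 1200 can be sketched as follows. The callable parameters are hypothetical stand-ins for the frontend, the conversion pipeline, the user interaction interface, and the structured APIs.

```python
def process_query(nl_query, convert, prompt_user, execute):
    """Sketch of process 1200: obtain a query (1202), surface a
    conversion failure to the user (1204), retry the conversion with
    the user's clarification (1206), then execute the structured
    operations and return the results (1208-1210)."""
    try:
        operations = convert(nl_query)
    except ValueError as failure:
        clarified = prompt_user(str(failure))  # user helps resolve the failure
        operations = convert(clarified)        # modified conversion process
    return execute(operations)
```

The sketch models the "abandon and re-process" variant; the variant that keeps the original query as context would pass both queries to `convert`.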
  • In some implementations, a user enters a natural language query through a user interface. The natural language query processing system parses the query to generate a document tree and performs a phrase dependency analysis to generate dependencies between constituents. The system then performs a lexical resolution, which includes an n-gram matching followed by generation of concepts for the matched n-grams. The system forms a concept tree based on the generated concepts and the dependencies between the concepts.
  • The system may also transform the concept tree by modifying relationships between the concepts in the tree. The next stage is virtual query generation, which starts with the hypergraph analysis step, in which path resolution is performed. The system iterates through all the nodes (concepts) to generate the building blocks for the output query and uses the hypergraph to generate all the joins (if any). The structured query can then be processed to generate the actual SQL query.
  • A failure can happen in any of these stages, and a natural language query processing system may catch and propagate the failure to a user for resolution or may record the issue to investigate as a bug. To resolve a failure through error propagation, the system keeps track of the context and provides a reasonable amount of information so that an action can be taken. In general, the action can be taken at any of the stages described earlier (e.g., requesting the parser for an alternate parse) or can be propagated all the way up to the user (e.g., requesting that a user clarify the binding of a constant value).
  • Generation of Alternative Parses
  • As described above with respect to FIG. 3, iterating over query versions can include determining alternative parses for a given original natural language query. In some implementations, the parse result of the original query is examined. If the original query does not have any verbs or if the punctuation at the end of the query is not consistent with the parse output, the system can make one or more minor changes to the query to make it closer to a properly formed sentence or question.
  • For example, the original query can be “Revenue in France yesterday per sales channel?” This query is actually a noun phrase with a question mark at the end. The system may be able to get a better parse if it changes the original query to a proper question, for example, “What is revenue in France yesterday per sales channel?” which parses as a proper question. The system may get a better parse by adding a verb to the original query, for example, “Show me revenue in France yesterday per sales channel” which parses as a proper sentence.
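  • The minor rewrites described above can be sketched as follows. The specific templates (“What is …”, “Show me …”) follow the examples in this section; a real system would rank many such variants.

```python
def alternative_versions(query):
    """Generate minor rewrites of a verbless query: recast it as a
    proper question, drop a trailing question mark to leave a proper
    fragment, or prepend a verb to form a proper sentence."""
    variants = []
    stripped = query.rstrip("?").strip()
    if query.endswith("?"):
        variants.append("What is " + stripped[0].lower() + stripped[1:] + "?")
        variants.append(stripped)  # a proper fragment
    else:
        variants.append("Show me " + stripped[0].lower() + stripped[1:])
    return variants
```

Each variant is then re-parsed, and the first parse that analyzes successfully is used.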
  • In another example, the original query input by the user can be “sales per buyer name where buyer's personal address is in California, and the seller's business address is in Nevada?” This query parses as a sentence but with a question mark at the end. The parse loses some dependencies and results in errors being triggered during the parse analysis. However, the following changed queries correctly parse:
  • “What is sales per buyer name where buyer's personal address is in California, and the seller's business address is in Nevada?” which parses as a proper question.
  • “sales per buyer name where buyer's personal address is in California, and the seller's business address is in Nevada” drops the question mark and parses as a proper fragment.
  • For completeness, the resulting structured query can be:
  • SELECT
       buyer_buyer_seller.Person.full_name,
       SUM(buyer_seller.BuyerSeller.sales_usd) AS
    alias0_sales_usd
    FROM buyer_seller.BuyerSeller.all AS buyer_seller.BuyerSeller
    INNER JOIN buyer_seller.Person AS seller_buyer_seller.Person
       ON (buyer_seller.BuyerSeller.seller_id =
       seller_buyer_seller.Person.person_id)
    INNER JOIN buyer_seller.Person AS buyer_buyer_seller.Person
       ON (buyer_seller.BuyerSeller.buyer_id =
       buyer_buyer_seller.Person.person_id)
    INNER JOIN buyer_seller.Address AS
    seller_business_address_buyer_seller.Address
       ON (seller_buyer_seller.Person.business_address_id =
       seller_business_address_buyer_seller.Address.address_id)
    INNER JOIN buyer_seller.Address AS
    buyer_personal_address_buyer_seller.Address
       ON (buyer_buyer_seller.Person.personal_address_id =
       buyer_personal_address_buyer_seller.Address.address_id)
    WHERE buyer_personal_address_buyer_seller.Address.state = ‘CA’
       AND seller_business_address_buyer_seller.Address.state =
    ‘NV’
    GROUP BY 1;
  • In some other implementations, the original input query can lack proper punctuation and/or be interpretable in multiple ways. The initial parse result for such queries may not result in a successful analysis. The system's attempt to try alternate parses based on basic modifications as discussed above may also fail to produce a successful analysis. The system can generate alternative parses by using other techniques, e.g., external to the parser, to augment the input query with some token range constraints before sending the query to the parser. These constraints are processed by the parser as a unit and often result in an alternative version that can be interpreted correctly, e.g., with a successful analysis or a high quality score. There are different techniques that can be used to generate the alternative queries based on particular grammars.
  • An example original query is "sales and average likes of buyer where seller has more than 100 likes." The basic changes for generating alternative versions as described above do not result in a successful parse. An example of a generated alternative query with token range constraints is "{sales and average likes of buyer} where {seller has more than 100 likes}," which results in a successful parse. The constraints are marked by curly braces { }. The system may generate multiple versions and use a ranking mechanism to feed them into the analysis based on their rank.
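  • The token-range augmentation described above can be sketched as follows. This is an illustrative sketch only: the connective list, the splitting heuristic (breaking on a keyword such as "where"), and the balance-based ranking are assumptions for demonstration, not the claimed mechanism.

```python
# Sketch: generate alternative queries marked with {...} token range
# constraints, then rank the candidates before handing them to the parser.
# Connective keywords and the ranking heuristic are illustrative assumptions.

CONNECTIVES = ("where", "when", "with")

def constrain(query: str):
    """Yield alternative queries whose segments around a connective
    keyword are wrapped in {...} so the parser treats each as a unit."""
    tokens = query.split()
    for i, tok in enumerate(tokens):
        if tok in CONNECTIVES and 0 < i < len(tokens) - 1:
            left = " ".join(tokens[:i])
            right = " ".join(tokens[i + 1:])
            yield "{%s} %s {%s}" % (left, tok, right)

def ranked_alternatives(query: str):
    # Toy ranking: prefer splits whose two constrained ranges are
    # closest in length (an assumption standing in for a real scorer).
    def imbalance(alt):
        parts = alt.split("} ")
        return abs(len(parts[0]) - len(parts[-1]))
    return sorted(constrain(query), key=imbalance)

alts = ranked_alternatives(
    "sales and average likes of buyer where seller has more than 100 likes")
print(alts[0])
# → {sales and average likes of buyer} where {seller has more than 100 likes}
```

  • In a real system, the ranked candidates would each be re-submitted to the parser until one produces a successful analysis or a high quality score.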
  • For completeness, the resulting structured query can be:
  • SELECT
        AVG(buyer_buyer_seller.Person.likes) AS alias0_likes,
        SUM(buyer_seller.BuyerSeller.sales_usd) AS alias1_sales_usd
     FROM buyer_seller.BuyerSeller.all AS buyer_seller.BuyerSeller
     INNER JOIN buyer_seller.Person AS seller_buyer_seller.Person
        ON (buyer_seller.BuyerSeller.seller_id =
        seller_buyer_seller.Person.person_id)
     INNER JOIN buyer_seller.Person AS buyer_buyer_seller.Person
        ON (buyer_seller.BuyerSeller.buyer_id =
        buyer_buyer_seller.Person.person_id)
     WHERE seller_buyer_seller.Person.likes > 100;
  • Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly-embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non-transitory storage medium for execution by, or to control the operation of, data processing apparatus. The computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them. Alternatively or in addition, the program instructions can be encoded on an artificially-generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus.
  • The term “data processing apparatus” refers to data processing hardware and encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can also be, or further include, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). The apparatus can optionally include, in addition to hardware, code that creates an execution environment for computer programs, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
  • A computer program, which may also be referred to or described as a program, software, a software application, an app, a module, a software module, a script, or code, can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages; and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data, e.g., one or more scripts stored in a markup language document, in a single file dedicated to the program in question, or in multiple coordinated files, e.g., files that store one or more modules, sub-programs, or portions of code. A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a data communication network.
  • The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by special purpose logic circuitry, e.g., an FPGA or an ASIC, or by a combination of special purpose logic circuitry and one or more programmed computers.
  • Computers suitable for the execution of a computer program can be based on general or special purpose microprocessors or both, or any other kind of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data. The central processing unit and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device, e.g., a universal serial bus (USB) flash drive, to name just a few.
  • Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
  • To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's device in response to requests received from the web browser.
  • Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interaction interface, a web browser, or an app through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (LAN) and a wide area network (WAN), e.g., the Internet.
  • The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In some embodiments, a server transmits data, e.g., an HTML page, to a user device, e.g., for purposes of displaying data to and receiving user input from a user interacting with the device, which acts as a client. Data generated at the user device, e.g., a result of the user interaction, can be received at the server from the device.
  • While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or on the scope of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially be claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
  • Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
  • Particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous.

Claims (20)

What is claimed is:
1. A method comprising:
obtaining, through a natural language front end, a natural language query from a user;
converting the natural language query into structured operations to be performed on structured application programming interfaces (APIs) of a knowledge base, comprising:
parsing the natural language query,
analyzing the parsed query to determine dependencies,
performing lexical resolution,
forming a concept tree based on the dependencies and lexical resolution,
analyzing the concept tree to generate a hypergraph,
generating a virtual query based on the hypergraph, and
processing the virtual query to generate one or more structured operations;
performing the one or more structured operations on the structured APIs of the knowledge base; and
returning search results matching the natural language query to the user.
2. The method of claim 1, wherein parsing the natural language query includes breaking the natural language query into phrases and placing the phrases in a parsing tree as nodes.
3. The method of claim 2, wherein performing lexical resolution comprises generating concepts for one or more of the parsed phrases.
4. The method of claim 1, wherein analyzing the concept tree comprises:
analyzing concepts and parent-child or sibling relationships in the concept tree; and
transforming the concept tree including annotating concepts with new information, moving concepts, deleting concepts, or merging concepts with other concepts.
5. The method of claim 1, wherein the hypergraph represents a database schema where data tables may have multiple join mappings among themselves.
6. The method of claim 1, comprising analyzing the hypergraph including performing path resolution for joins using the concept tree.
7. The method of claim 1, comprising detecting a failure during conversion of the natural language query to the one or more structured operations.
8. The method of claim 7, comprising resolving the failure through additional processing including determining if an alternative parse for the natural language query is available.
9. The method of claim 7, comprising resolving the failure through additional processing including:
providing, through a user interaction interface, to the user one or more information items identifying the failure;
responsive to a user interaction with an information item,
modifying the natural language query in accordance with the user interaction to generate one or more structured operations.
10. The method of claim 7, wherein the failure can be based on one or more of a bad parse, an ambiguous column reference, an ambiguous constant, an ambiguous datetime, unused comparison keywords or negation keywords, aggregation errors, a missing join step, an unprocessed concept, an unmatched noun phrase, or missing data access.
11. The method of claim 1, wherein the knowledge base, the natural language front end, and the user interaction interface are implemented on one or more computers and one or more storage devices storing instructions, and wherein the knowledge base stores information associated with entities according to a data schema and has the APIs for programs to query the knowledge base.
12. A computing system comprising:
one or more computers; and
one or more storage units storing instructions that when executed by the one or more computers cause the computing system to perform operations comprising:
obtaining, through a natural language front end, a natural language query from a user;
converting the natural language query into structured operations to be performed on structured application programming interfaces (APIs) of a knowledge base, comprising:
parsing the natural language query,
analyzing the parsed query to determine dependencies,
performing lexical resolution,
forming a concept tree based on the dependencies and lexical resolution,
analyzing the concept tree to generate a hypergraph,
generating a virtual query based on the hypergraph, and
processing the virtual query to generate one or more structured operations;
performing the one or more structured operations on the structured APIs of the knowledge base; and
returning search results matching the natural language query to the user.
13. The system of claim 12, wherein parsing the natural language query includes breaking the natural language query into phrases and placing the phrases in a parsing tree as nodes.
14. The system of claim 13, wherein performing lexical resolution comprises generating concepts for one or more of the parsed phrases.
15. The system of claim 12, wherein analyzing the concept tree comprises:
analyzing concepts and parent-child or sibling relationships in the concept tree; and
transforming the concept tree including annotating concepts with new information, moving concepts, deleting concepts, or merging concepts with other concepts.
16. The system of claim 12, wherein the hypergraph represents a database schema where data tables may have multiple join mappings among themselves.
17. The system of claim 12, comprising instructions that when executed by the one or more computers cause the computing system to perform operations including analyzing the hypergraph including performing path resolution for joins using the concept tree.
18. The system of claim 12, comprising instructions that when executed by the one or more computers cause the computing system to perform operations including detecting a failure during conversion of the natural language query to the one or more structured operations.
19. The system of claim 18, comprising instructions that when executed by the one or more computers cause the computing system to perform operations including resolving the failure through additional processing including determining if an alternative parse for the natural language query is available.
20. A computer storage medium encoded with a computer program, the computer program comprising instructions that when executed by a system cause the system to perform operations comprising:
obtaining, through a natural language front end, a natural language query from a user;
converting the natural language query into structured operations to be performed on structured application programming interfaces (APIs) of a knowledge base, comprising:
parsing the natural language query,
analyzing the parsed query to determine dependencies,
performing lexical resolution,
forming a concept tree based on the dependencies and lexical resolution,
analyzing the concept tree to generate a hypergraph,
generating a virtual query based on the hypergraph, and
processing the virtual query to generate one or more structured operations;
performing the one or more structured operations on the structured APIs of the knowledge base; and
returning search results matching the natural language query to the user.
US15/261,538 2015-09-11 2016-09-09 Handling failures in processing natural language queries Abandoned US20170075953A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/261,538 US20170075953A1 (en) 2015-09-11 2016-09-09 Handling failures in processing natural language queries

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201562217260P 2015-09-11 2015-09-11
US15/261,538 US20170075953A1 (en) 2015-09-11 2016-09-09 Handling failures in processing natural language queries

Publications (1)

Publication Number Publication Date
US20170075953A1 true US20170075953A1 (en) 2017-03-16

Family

ID=56893890

Family Applications (1)

Application Number Title Priority Date Filing Date
US15/261,538 Abandoned US20170075953A1 (en) 2015-09-11 2016-09-09 Handling failures in processing natural language queries

Country Status (3)

Country Link
US (1) US20170075953A1 (en)
EP (1) EP3142028A3 (en)
CN (1) CN107016012A (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170161262A1 (en) * 2015-12-02 2017-06-08 International Business Machines Corporation Generating structured queries from natural language text
US20170286494A1 (en) * 2016-03-29 2017-10-05 Microsoft Technology Licensing, Llc Computational-model operation using multiple subject representations
US20180165330A1 (en) * 2016-12-08 2018-06-14 Sap Se Automatic generation of structured queries from natural language input
US20190096396A1 (en) * 2016-06-16 2019-03-28 Baidu Online Network Technology (Beijing) Co., Ltd. Multiple Voice Recognition Model Switching Method And Apparatus, And Storage Medium
EP3514694A1 (en) * 2018-01-19 2019-07-24 Servicenow, Inc. Query translation
US20190303473A1 (en) * 2018-04-02 2019-10-03 International Business Machines Corporation Query interpretation disambiguation
US10846286B2 (en) * 2018-07-20 2020-11-24 Dan Benanav Automatic object inference in a database system
US20210064643A1 (en) * 2018-04-16 2021-03-04 British Gas Trading Limited Natural language interface for a data management system
US20210279247A1 (en) * 2020-03-03 2021-09-09 Sap Se Centralized multi-tenancy as a service in cloud-based computing environment
US11194799B2 (en) * 2018-06-27 2021-12-07 Bitdefender IPR Management Ltd. Systems and methods for translating natural language sentences into database queries
US11227114B1 (en) * 2018-11-28 2022-01-18 Kensho Technologies, Llc Natural language interface with real-time feedback
US20220129450A1 (en) * 2020-10-23 2022-04-28 Royal Bank Of Canada System and method for transferable natural language interface
US11416481B2 (en) * 2018-05-02 2022-08-16 Sap Se Search query generation using branching process for database queries
US11423229B2 (en) * 2016-09-29 2022-08-23 Microsoft Technology Licensing, Llc Conversational data analysis
US20230185610A1 (en) * 2021-12-09 2023-06-15 BillGO, Inc. Electronic communication and transaction processing
US11748562B2 (en) 2019-09-20 2023-09-05 Merative Us L.P. Selective deep parsing of natural language content
CN116992006A (en) * 2023-09-26 2023-11-03 武汉益模科技股份有限公司 Chain type natural language interaction method and system driven by large language model
WO2024059094A1 (en) * 2022-09-14 2024-03-21 Schlumberger Technology Corporation Natural language-based search engine for information retrieval in energy industry
JP7546664B2 (en) 2019-10-07 2024-09-06 インターナショナル・ビジネス・マシーンズ・コーポレーション Ontology-Based Data Storage for Distributed Knowledge Bases

Families Citing this family (18)

Publication number Priority date Publication date Assignee Title
US10949807B2 (en) 2017-05-04 2021-03-16 Servicenow, Inc. Model building architecture and smart routing of work items
US10977575B2 (en) 2017-05-04 2021-04-13 Servicenow, Inc. Machine learning auto completion of fields
CN107885786B (en) * 2017-10-17 2021-10-26 东华大学 Natural language query interface implementation method facing big data
US10831797B2 (en) * 2018-03-23 2020-11-10 International Business Machines Corporation Query recognition resiliency determination in virtual agent systems
US10635679B2 (en) * 2018-04-13 2020-04-28 RELX Inc. Systems and methods for providing feedback for natural language queries
EP3573073B1 (en) * 2018-05-22 2020-12-02 Siemens Healthcare GmbH Method for generating a knowledge base useful in identifying and/or predicting a malfunction of a medical device
CN108733359B (en) * 2018-06-14 2020-12-25 北京航空航天大学 Automatic generation method of software program
US11055489B2 (en) * 2018-10-08 2021-07-06 Tableau Software, Inc. Determining levels of detail for data visualizations using natural language constructs
CN109858020A (en) * 2018-12-29 2019-06-07 航天信息股份有限公司 A kind of method and system obtaining taxation informatization problem answers based on grapheme
CN109902303B (en) * 2019-03-01 2023-05-26 腾讯科技(深圳)有限公司 Entity identification method and related equipment
US10922486B2 (en) * 2019-03-13 2021-02-16 International Business Machines Corporation Parse tree based vectorization for natural language processing
US11157705B2 (en) * 2019-07-22 2021-10-26 International Business Machines Corporation Semantic parsing using encoded structured representation
US11841883B2 (en) * 2019-09-03 2023-12-12 International Business Machines Corporation Resolving queries using structured and unstructured data
CN110727695B (en) * 2019-09-29 2022-05-03 浙江大学 Natural language query analysis method for novel power supply urban rail train data operation and maintenance
CN112035506A (en) * 2019-10-28 2020-12-04 竹间智能科技(上海)有限公司 Semantic recognition method and equipment
TWI809350B (en) * 2021-02-02 2023-07-21 財團法人工業技術研究院 Method and device for managing equipment information of logistics equipment
CN113821533B (en) * 2021-09-30 2023-09-08 北京鲸鹳科技有限公司 Method, device, equipment and storage medium for data query
CN118312533B (en) * 2024-06-07 2024-08-27 上海数涞科技有限公司 Query result determining method and device, electronic equipment and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
US7263517B2 (en) * 2002-10-31 2007-08-28 Biomedical Objects, Inc. Structured natural language query and knowledge system
US20150324422A1 (en) * 2014-05-08 2015-11-12 Marvin Elder Natural Language Query
US20170075891A1 (en) * 2015-09-11 2017-03-16 Google Inc. Disambiguating join paths for natural language queries
US9645993B2 (en) * 2006-10-10 2017-05-09 Abbyy Infopoisk Llc Method and system for semantic searching

Family Cites Families (11)

Publication number Priority date Publication date Assignee Title
US5584024A (en) * 1994-03-24 1996-12-10 Software Ag Interactive database query system and method for prohibiting the selection of semantically incorrect query parameters
US6665666B1 (en) * 1999-10-26 2003-12-16 International Business Machines Corporation System, method and program product for answering questions using a search engine
US7027974B1 (en) * 2000-10-27 2006-04-11 Science Applications International Corporation Ontology-based parser for natural language processing
US6714939B2 (en) * 2001-01-08 2004-03-30 Softface, Inc. Creation of structured data from plain text
CN100361126C (en) * 2004-09-24 2008-01-09 北京亿维讯科技有限公司 Method of solving problem using wikipedia and user inquiry treatment technology
US20140201241A1 (en) * 2013-01-15 2014-07-17 EasyAsk Apparatus for Accepting a Verbal Query to be Executed Against Structured Data
US9405794B2 (en) * 2013-07-17 2016-08-02 Thoughtspot, Inc. Information retrieval system
US20150026153A1 (en) * 2013-07-17 2015-01-22 Thoughtspot, Inc. Search engine for information retrieval system
KR101491843B1 (en) * 2013-11-13 2015-02-11 네이버 주식회사 Conversation based search system and search method
CN104252533B (en) * 2014-09-12 2018-04-13 百度在线网络技术(北京)有限公司 Searching method and searcher
CN104657439B (en) * 2015-01-30 2019-12-13 欧阳江 Structured query statement generation system and method for precise retrieval of natural language

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
US7263517B2 (en) * 2002-10-31 2007-08-28 Biomedical Objects, Inc. Structured natural language query and knowledge system
US9645993B2 (en) * 2006-10-10 2017-05-09 Abbyy Infopoisk Llc Method and system for semantic searching
US20150324422A1 (en) * 2014-05-08 2015-11-12 Marvin Elder Natural Language Query
US20170075891A1 (en) * 2015-09-11 2017-03-16 Google Inc. Disambiguating join paths for natural language queries

Cited By (31)

Publication number Priority date Publication date Assignee Title
US10430407B2 (en) * 2015-12-02 2019-10-01 International Business Machines Corporation Generating structured queries from natural language text
US11068480B2 (en) * 2015-12-02 2021-07-20 International Business Machines Corporation Generating structured queries from natural language text
US20170161262A1 (en) * 2015-12-02 2017-06-08 International Business Machines Corporation Generating structured queries from natural language text
US20170286494A1 (en) * 2016-03-29 2017-10-05 Microsoft Technology Licensing, Llc Computational-model operation using multiple subject representations
US10592519B2 (en) * 2016-03-29 2020-03-17 Microsoft Technology Licensing, Llc Computational-model operation using multiple subject representations
US10847146B2 (en) * 2016-06-16 2020-11-24 Baidu Online Network Technology (Beijing) Co., Ltd. Multiple voice recognition model switching method and apparatus, and storage medium
US20190096396A1 (en) * 2016-06-16 2019-03-28 Baidu Online Network Technology (Beijing) Co., Ltd. Multiple Voice Recognition Model Switching Method And Apparatus, And Storage Medium
US11423229B2 (en) * 2016-09-29 2022-08-23 Microsoft Technology Licensing, Llc Conversational data analysis
US20180165330A1 (en) * 2016-12-08 2018-06-14 Sap Se Automatic generation of structured queries from natural language input
US10657124B2 (en) * 2016-12-08 2020-05-19 Sap Se Automatic generation of structured queries from natural language input
US11023461B2 (en) 2018-01-19 2021-06-01 Servicenow, Inc. Query translation
EP4170515A1 (en) * 2018-01-19 2023-04-26 ServiceNow, Inc. Query translation
EP3514694A1 (en) * 2018-01-19 2019-07-24 Servicenow, Inc. Query translation
US10838951B2 (en) * 2018-04-02 2020-11-17 International Business Machines Corporation Query interpretation disambiguation
US20190303473A1 (en) * 2018-04-02 2019-10-03 International Business Machines Corporation Query interpretation disambiguation
US20210064643A1 (en) * 2018-04-16 2021-03-04 British Gas Trading Limited Natural language interface for a data management system
US11609941B2 (en) * 2018-04-16 2023-03-21 British Gas Trading Limited Natural language interface for a data management system
US11416481B2 (en) * 2018-05-02 2022-08-16 Sap Se Search query generation using branching process for database queries
US11194799B2 (en) * 2018-06-27 2021-12-07 Bitdefender IPR Management Ltd. Systems and methods for translating natural language sentences into database queries
US11698899B2 (en) * 2018-07-20 2023-07-11 Dan Benanav Automatic object inference in a database system
US20210089524A1 (en) * 2018-07-20 2021-03-25 Dan Benanav Automatic object inference in a database system
US10846286B2 (en) * 2018-07-20 2020-11-24 Dan Benanav Automatic object inference in a database system
US11227114B1 (en) * 2018-11-28 2022-01-18 Kensho Technologies, Llc Natural language interface with real-time feedback
US11748562B2 (en) 2019-09-20 2023-09-05 Merative Us L.P. Selective deep parsing of natural language content
JP7546664B2 (en) 2019-10-07 2024-09-06 インターナショナル・ビジネス・マシーンズ・コーポレーション Ontology-Based Data Storage for Distributed Knowledge Bases
US11222035B2 (en) * 2020-03-03 2022-01-11 Sap Se Centralized multi-tenancy as a service in cloud-based computing environment
US20210279247A1 (en) * 2020-03-03 2021-09-09 Sap Se Centralized multi-tenancy as a service in cloud-based computing environment
US20220129450A1 (en) * 2020-10-23 2022-04-28 Royal Bank Of Canada System and method for transferable natural language interface
US20230185610A1 (en) * 2021-12-09 2023-06-15 BillGO, Inc. Electronic communication and transaction processing
WO2024059094A1 (en) * 2022-09-14 2024-03-21 Schlumberger Technology Corporation Natural language-based search engine for information retrieval in energy industry
CN116992006A (en) * 2023-09-26 2023-11-03 武汉益模科技股份有限公司 Chain type natural language interaction method and system driven by large language model

Also Published As

Publication number Publication date
CN107016012A (en) 2017-08-04
EP3142028A3 (en) 2017-07-12
EP3142028A2 (en) 2017-03-15

Similar Documents

Publication Publication Date Title
US20170075953A1 (en) Handling failures in processing natural language queries
Affolter et al. A comparative survey of recent natural language interfaces for databases
US10997167B2 (en) Disambiguating join paths for natural language queries
US11914627B1 (en) Parsing natural language queries without retraining
Zou et al. Natural language question answering over RDF: a graph data driven approach
US9477766B2 (en) Method for ranking resources using node pool
US11080295B2 (en) Collecting, organizing, and searching knowledge about a dataset
US8140559B2 (en) Knowledge correlation search engine
US7797303B2 (en) Natural language processing for developing queries
US8041697B2 (en) Semi-automatic example-based induction of semantic translation rules to support natural language search
EP2592572A1 (en) Facilitating extraction and discovery of enterprise services
TW201314476A (en) Automated self-service user support based on ontology
US9892191B2 (en) Complex query handling
US8554538B2 (en) Generating a unique name for a data element
US11934391B2 (en) Generation of requests to a processing system
US20200065344A1 (en) Knowledge correlation search engine
Šukys Querying ontologies on the base of semantics of business vocabulary and business rules
Kedwan NLQ into SQL translation using computational linguistics
Iqbal et al. A Negation Query Engine for Complex Query Transformations
Xiong et al. Inferring service recommendation from natural language api descriptions
Li et al. Term disambiguation in natural language query for XML
Chen et al. Nl2PSQL: Generating pseudo-SQL queries from under-specified natural language questions
Schwitter et al. Meaningful web annotations for humans and machines using controlled natural language
Kedwan NLP Application: Natural Language Questions and SQL Using Computational Linguistics
Li et al. Interactivity

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

AS Assignment

Owner name: GOOGLE INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BOZKAYA, TOLGA;DIJAMCO, ARMAND JOSEPH;BUI, TRAN;AND OTHERS;REEL/FRAME:040311/0830

Effective date: 20161114

AS Assignment

Owner name: GOOGLE LLC, CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:GOOGLE INC.;REEL/FRAME:044129/0001

Effective date: 20170929

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION