US20020193992A1 - Voice-enabled directory look-up - Google Patents
- Publication number
- US20020193992A1 (application Ser. No. 10/166,862)
- Authority
- US
- United States
- Prior art keywords
- characters
- character
- character position
- candidate characters
- candidate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M3/00—Automatic or semi-automatic exchanges
- H04M3/42—Systems providing special services or facilities to subscribers
- H04M3/487—Arrangements for providing information services, e.g. recorded voice services or time announcements
- H04M3/493—Interactive information services, e.g. directory enquiries; Arrangements therefor, e.g. interactive voice response [IVR] systems or voice portals
- H04M3/4931—Directory assistance systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2201/00—Electronic components, circuits, software, systems or apparatus used in telephone systems
- H04M2201/40—Electronic components, circuits, software, systems or apparatus used in telephone systems using speech recognition
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M3/00—Automatic or semi-automatic exchanges
- H04M3/42—Systems providing special services or facilities to subscribers
- H04M3/487—Arrangements for providing information services, e.g. recorded voice services or time announcements
- H04M3/493—Interactive information services, e.g. directory enquiries; Arrangements therefor, e.g. interactive voice response [IVR] systems or voice portals
- H04M3/4936—Speech interaction details
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10—TECHNICAL SUBJECTS COVERED BY FORMER USPC
- Y10S—TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10S707/00—Data processing: database and file management or data structures
- Y10S707/99931—Database or file accessing
- Y10S707/99933—Query processing, i.e. searching
- Y10S707/99935—Query augmenting and refining, e.g. inexact access
Definitions
- Audio subsystem 248 provides an interface between workstation 240 and the audio equipment used by operator 31, 131, such as headset 37, 137.
- Monitor 250 provides visual output from workstation 240 to operator 31, 131.
- Additional input device(s) 252 and output device(s) 254 provide interfaces with other computing and/or human entities.
- audio subsystem 248, headset 37, 137, and workstation 240 may include additional and/or alternative components as would occur to one skilled in the art.
- the signals acquired by voice capture units 41, 141 may be stored and processed in digital and/or analog form.
- the number of characters to be spoken in a particular context is predetermined. This additional a priori information will often allow the voice engine 43, 143 to parse and decode the captured audio signal more accurately. In other embodiments, feedback paths are introduced so that the voice engine 43, 143 “learns” to better decode the speech of a particular operator 31, 131 or set of operators over time.
- a similar process to those described above is applied to multiple fields of an address (e.g., ZIP code, street number, street name, directional modifiers, and/or apartment or suite number) to determine a correct, legal address for the recipient.
- the output record is then used to apply a complete bar code to the mail piece using means and for purposes well known in the art.
- the present invention might also be applied in other directory look-up contexts. For example, accuracy and recognition in an automated telephone directory assistance system might be improved by implementing the present invention therein.
- the user might select a state, then a city, then a listing.
- the user speaks the first few characters of the data item, and the system presents a list of candidate entries. The user selects the desired entry (in response to the list presented by the system) by pressing a key on the telephone keypad.
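The fixed-pattern constraint noted above (for example, two numeric characters followed by between one and four alphabetic characters) can be expressed as a simple filter on candidate decodings. This sketch assumes a regular-expression check, which is one of several ways such a predetermined grammar might be enforced:

```python
import re

# A predetermined pattern from the description: two numeric characters,
# then between one and four alphabetic characters (e.g., "44MAI").
# Enforcing it lets implausible decodings be discarded early.
GRAMMAR = re.compile(r"^[0-9]{2}[A-Z]{1,4}$")

def plausible(decoding: str) -> bool:
    """True if a candidate decoding fits the predetermined grammar."""
    return GRAMMAR.match(decoding) is not None

# "44MAI" fits the pattern; "4AMAI" has a letter where a digit belongs,
# and "44MAPLE" has too many letters.
accepted = [d for d in ["44MAI", "4AMAI", "44MAPLE"] if plausible(d)]
```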
Landscapes
- Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Machine Translation (AREA)
- Document Processing Apparatus (AREA)
Abstract
A novel voice-enabled directory look-up system is disclosed. In one embodiment, an operator reads the first few characters from each of the first and last names of a mail addressee. The system captures the speech as an audio signal, which is parsed into character position segments. The system determines one or more candidate characters that might have resulted in the audio signal for each character position segment. The system then expands the list of candidate characters for at least one character position to include one or more characters that sound like the original candidate characters for that character position. The candidate characters for the respective character positions are composed into a regular expression, which is applied using an inexact string matching look-up routine to a directory of records. Records with the best matches are returned in a menu for the operator. The operator selects the desired record from the menu.
In another embodiment, an operator reads aloud the thousands and hundreds digits of the street number and the first three letters of the street name from a mail piece. A voice engine parses and decodes the speech into candidate characters for each character position. The system selects alternative characters that sound similar to candidate characters in a given character position. An inexact string matching routine retrieves records from a carrier route directory that match either a candidate character or an alternative character in each position of each data field.
Description
- The present invention relates generally to voice recognition, and more particularly, but not exclusively, to the retrieval of records from a directory using spoken characters.
- Certain modern data retrieval systems use voice recognition technology to select a desired record from among many. These systems, however, fail to perform adequately in certain circumstances, such as in the recognition of certain characters that sound similar when spoken. Such failures severely limit the utility of these systems for many operators and in many applications.
- Other systems fail to correctly retrieve records when one or more characters are missing or incorrectly interpreted. Again, such systems are of limited utility in many applications and for many operators.
- It is, therefore, apparent that a need exists for improved systems that apply voice recognition technology to large-directory look-up situations.
- It is an object of this invention to provide an improved system for retrieving records from a directory using spoken characters as input.
- It is another object of this invention to provide an improved table look-up system for contexts in which operators' speech patterns are inconsistent, or the prefix letters that are read by the operator are not clearly legible.
- These objects and others are provided in a system, method, and apparatus that retrieve data from a directory based on the spoken initial characters of one or more fields. Substitution groups are established, each containing characters that sound alike when spoken. For each query, an operator speaks the first few characters of the one or more fields. The characters are parsed and decoded from the speech, thereby producing a set of candidate decodings for each character position. Then, for at least one character position, one or more alternative characters (from the same substitution group(s) as the candidate character(s) for that character position) are selected to broaden the search. In some such embodiments, a regular expression is created that, for each character position output by the voice engine, matches (1) any of the candidate characters presented by the voice engine, or (2) any alternative character that is in a substitution group with one or more of the decoded characters. The regular expression is processed by an inexact string matching look-up routine and applied to the directory. The best matches are presented to the operator, who selects the desired record.
- Other embodiments, forms, variations, objects, features, and applications may appear to those skilled in the art from the drawings and description contained herein.
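The substitution-group expansion and regular-expression composition summarized above can be sketched as follows. The groups, candidate sets, and directory entries here are illustrative assumptions; the patent leaves the actual groupings to the implementer:

```python
import re

# Hypothetical substitution groups of characters that sound alike when
# spoken; the actual groupings are left to the implementer.
SUBSTITUTION_GROUPS = [
    set("BCDEGPTVZ"),   # the "ee"-rhyming letters
    set("AJK"),
    set("MN"),
    set("FSX"),
]

def expand(candidates):
    """Expand one character position's candidate set to include every
    character that shares a substitution group with a candidate."""
    expanded = set(candidates)
    for ch in candidates:
        for group in SUBSTITUTION_GROUPS:
            if ch in group:
                expanded |= group
    return expanded

def build_regex(positions):
    """Compose one bracketed character class per character position."""
    return re.compile("".join(
        "[" + "".join(sorted(expand(p))) + "]" for p in positions
    ))

# Voice engine output for a spoken "M A I" street-name prefix: "M"
# expands to [MN] and "A" to [AJK], so sound-alike names still match.
pattern = build_regex([{"M"}, {"A"}, {"I"}])
directory = ["MAIN ST", "MAPLE AVE", "NAIL CT", "OAK DR"]
hits = [entry for entry in directory if pattern.match(entry)]
```

Because `re.match` anchors at the start of the string, the three character classes act as a prefix filter over the directory, which is exactly the role the spoken first-few-characters play in the look-up.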
- FIG. 1 shows a block diagram of a voice-enabled look-up system.
- FIG. 2 shows a block diagram of another voice-enabled look-up system.
- FIG. 3 shows a workstation suitable for use in the systems of FIGS. 1 and 2.
- For the purpose of promoting an understanding of the principles of the invention, reference will now be made to the embodiments illustrated in the drawings and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended; any alterations and further modifications of the described or illustrated embodiments, and any further applications of the principles of the invention as illustrated therein are contemplated as would normally occur to one skilled in the art to which the invention relates.
- Generally speaking, FIG. 1 shows a voice-enabled look-up system wherein a postal employee prepares a mail piece for automated processing. The operator reads at least the first few characters of the street number and name. The speech is parsed into letters and decoded by a voice engine. A regular expression is created using the characters so decoded and possible substitutes that sound similar to those selected by the voice engine. The regular expression is applied to the directory to retrieve a set of records, each of which contains an address that matches the regular expression. The set of records is presented to the operator as a list from which to select the address that actually appears on the mail piece. A bar code reflecting the proper sorting data (e.g., carrier route and ZIP+4 data) for the mail piece may then be applied to it.
- FIG. 2 shows an alternative application of this voice-enabled look-up technology. In this embodiment, mail arrives in an organization's mail room. An operator reads the first few characters of the addressees' first and last names, and the system returns the addressees' mail stop, department, and/or other directory information. The mail piece is then routed to the addressee using that mail stop information.
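The mail-room look-up of FIG. 2 can be sketched with a small in-memory organizational directory. All names, mail stops, departments, and the `find_addressee` helper are invented for illustration; the patent does not prescribe a storage format:

```python
import re

# Toy organizational directory: name fields plus routing data
# (all entries are invented for illustration).
ORG_DIRECTORY = [
    {"first": "ROBERT",  "last": "JOHNSON", "mail_stop": "B-204", "dept": "FINANCE"},
    {"first": "ROBERTA", "last": "JOHNSEN", "mail_stop": "C-110", "dept": "LEGAL"},
    {"first": "KAREN",   "last": "SMITH",   "mail_stop": "A-017", "dept": "SALES"},
]

def find_addressee(first_re, last_re):
    """Match the spoken three-letter prefixes of the first and last
    names against the directory and return candidate routing records."""
    f, l = re.compile(first_re), re.compile(last_re)
    return [rec for rec in ORG_DIRECTORY
            if f.match(rec["first"]) and l.match(rec["last"])]

# Spoken "R O B" and "J O H": the [O0] classes cover a sound-alike
# decoding, and both JOHNSON and JOHNSEN survive as candidates for the
# operator to choose between.
candidates = find_addressee(r"[R][O0][B]", r"[J][O0][H]")
```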
- In the illustrated embodiments, reference will be made to functional units and modules. It will be apparent to those skilled in the art that, in other embodiments within the scope of the present invention, these units and modules may be implemented in hardware, software, or a combination thereof. Furthermore, a variety of network topologies, directory table and storage structures, and query languages and schemes may be used as appropriate for a particular implementation of the present invention and would occur to one skilled in the art.
- Turning now to FIG. 1, system 20 will be described in more detail.
- Operator 31 examines the mail piece 33 and speaks part (preferably the thousands and hundreds digits) of the street number, then part (preferably the first three characters) of the street name from address 35 into headset 37. The spoken characters are captured by voice capture unit 41 and stored as a digitized audio signal. That signal is sent by voice capture unit 41 to voice engine 43. Voice engine 43 uses any suitable method to parse the digital audio signal into segments, each associated with a spoken character. Each segment is translated, using any suitable method, by voice engine 43 into one or more candidate characters that may have been spoken, each preferably with an associated confidence level. This operation is preferably, but not necessarily, constrained to a predetermined grammar, so that each character is decoded from a limited set of possible characters based on context and/or a predetermined pattern of characters (e.g., two numeric characters, then between one and four alphabetic characters). In many embodiments, such constraint dramatically improves the accuracy of parsing and decoding by voice engine 43.
- The candidate characters (and the associated confidence levels, if any) produced by voice engine 43 are sent to character set expansion module 45 and regular expression creation module 47. For each character position of data produced by voice engine 43, character set expansion module 45 examines the one or more candidate characters received from voice engine 43 and identifies potential alternative decodings. This identification may use predetermined groups of characters, each of which sounds similar to the candidate character when spoken. Character set expansion module 45 may also assign a confidence level to each alternative candidate character that it produces. The selection of candidate characters and/or confidence levels may be made using any method that would occur to one skilled in the art, such as by application of linguistic spelling or syntactical rules.
- Regular expression creation module 47 takes the candidate characters (and confidence level data, if available) from voice engine 43 and character set expansion module 45 to form a regular expression that describes all possible matches for the spoken street number, and another regular expression that describes all possible matches for the street name. In each case, the regular expression will match all records that contain either the candidate character (from voice engine 43) or an alternative candidate character (from expansion module 45) for a given character position.
- The regular expression created by module 47 is passed to an inexact string matching look-up module 49. String matching module 49 also receives city, state, and ZIP data for mail piece 33 from a suitable source (e.g., an OCR module or database (not shown)) and prepares a query designed to retrieve all records in address directory 61 that have street numbers and names that match the regular expressions provided by module 47, and that also match the given city, state, and ZIP code of address 35. Alternatively, all mail pieces to which the present system is applied in a particular batch or at a particular location are assumed to be destined for a particular geographical area, so directory 61 may be limited to addresses in that area.
- The record set produced in response to that query is sent to presentation module 51, which presents a menu of the directory hits to user 31. This menu preferably presents the possible matches in descending order of probability, given the confidence levels produced by voice engine 43 (and character set expansion module 45, if produced). The candidate record associated with the highest level of confidence is preferably presented as a default option that is most easily selected by user 31. The user's selection is made using any suitable means and is accepted by module 53. The selected record is provided as an output of the process at end point 55. Data from the selected record may, for example, be used to print on the mail piece 33 a bar code including ZIP+4 and carrier route data for improved routing, sorting, and delivery.
- Many variations on this system will occur to those skilled in the art. For example, the records searched by string matching module 49 may be limited to those records in directory 61 that match partial street address information obtained from an upstream OCR process.
- In other embodiments, information from the output record at end point 55 is used, but no bar code is applied to mail piece 33.
- In still other embodiments, enough information from each record is presented by presentation module 51 to obviate the need for a user to select a record at all. In such embodiments, operator 31 simply uses the desired information from the menu (e.g., sorts the mail piece 33 into a particular carrier route order) and proceeds to process the next piece.
- It will be apparent to those skilled in the art that the number and position of characters to be read may be varied widely depending upon the particular context of the implementation. Typically, the time required to speak more characters (and/or characters from additional fields) must be weighed against the additional narrowing of the output list to be achieved using the additional information.
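A minimal sketch of the query prepared by string matching module 49, assuming an in-memory tuple directory. The records, field layout, and `lookup` helper are invented for illustration; the patent does not prescribe a storage format or query language:

```python
import re

# Toy directory records: (street_number, street_name, city, state, zip5).
# All entries are invented for illustration.
DIRECTORY = [
    ("4401", "MAIN ST",   "SPRINGFIELD", "IL", "62701"),
    ("4410", "MAINE AVE", "SPRINGFIELD", "IL", "62701"),
    ("1200", "MAIN ST",   "SPRINGFIELD", "IL", "62702"),
]

def lookup(number_re, name_re, city, state, zip5):
    """Retrieve all records whose street number and name match the
    regular expressions AND whose city/state/ZIP equal the values
    supplied by an upstream source such as an OCR step."""
    num, name = re.compile(number_re), re.compile(name_re)
    return [r for r in DIRECTORY
            if num.match(r[0]) and name.match(r[1])
            and (r[2], r[3], r[4]) == (city, state, zip5)]

# Spoken "4 4" (thousands and hundreds digits) and "M A I": each
# position is a character class over the decoded and sound-alike
# characters, so both MAIN ST and MAINE AVE survive as candidates,
# while the record in ZIP 62702 is excluded by the exact-match fields.
hits = lookup(r"[4][4]", r"[MN][A8][IY]", "SPRINGFIELD", "IL", "62701")
```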
- Directory 61 is preferably optimized with respect to the voice engine to reduce the number of records displayed by presentation module 51. For example, adjacent (as in consecutive blocks of the same street) or interwoven (as in odd and even numbers along the same street) address ranges may be combined into one record.
- An alternative application will now be described with reference to FIG. 2. This embodiment is implemented in an organization's mail room, where some mail pieces arrive (from internal or external sources) bearing the name of an intended recipient within the organization. A system according to the present invention is used to retrieve that additional destination information to assist in routing and delivery of the mail piece.
- In this embodiment,
mail piece 133 bearsaddress 135, which includes a first and last name.Operator 131 visually examinesmail piece 133 to findaddress block 135, then speaks intoheadset 137 the first three letters each of the first name and last name of the addressee. That speech is captured byvoice capture unit 141 and translated into a digitized audio signal.Voice engine 143, characterset expansion module 145, and regularexpression creation module 147 each operate analogously to the corresponding components (voice engine 43,expansion module 45, and regular expression creation module 47) discussed above in relation to FIG. 1. - Like
analogous module 49, inexact string matching look-upmodule 149 uses the regular expression output of regularexpression creation module 147 to searchdirectory 161. In this embodiment, the result of the query is returned directly tomenu presentation module 151, which providesoperator 131 with a menu of the most likely matches from thedirectory 161.Selection acceptance module 153 of system 120 accepts the user's selection from the menu and outputs the selected record atpoint 155. - In this embodiment, the department or mail stop associated with the selected addressee is displayed on a video monitor so that
operator 131 can write that information directly onmail piece 133 or manually sortmail piece 133 based on the displayed information. Alternatively,mail piece 133 may be imprinted with a bar code or other suitable designator to facilitate automatic or semi-automatic routing and transport through the organization. - It will be seen by those skilled in the art that systems according to the present invention may be implemented efficiently in conjunction with systems that use optical character recognition. For example, system120 might be applied only to those mail pieces bearing addresses (or addressees) that could not be properly routed solely by the OCR system module.
- Systems 20 and 120 might also be used with identifier-related (e.g., bar coding) systems by using the output record (at points 55 and 155, respectively) as input to those systems.
- It will also occur to one skilled in the art that various forms of menuing and selection may be used by
modules 51 and 151. For example, the user's selection might be captured by voice engine 43 itself, e.g., by the user saying "select 1" or by a similar method.
-
Voice engine
- The workstation used by operators 31 and 131 is designated workstation 240. The software programs and modules described above are encoded on hard disc 242 for execution by processor 244. Workstation 240 may include more than one processor or CPU and more than one type of memory 246, where memory 246 is representative of one or more types. Furthermore, it should be understood that while one workstation 240 is illustrated, more workstations may be utilized in alternative embodiments. Processor 244 may be comprised of one or more components configured as a single unit. Alternatively, when of a multi-component form, processor 244 may have one or more components located remotely relative to the others. One or more components of processor 244 may be of the electronic variety defining digital circuitry, analog circuitry, or both. In one embodiment, processor 244 is of a conventional, integrated circuit microprocessor arrangement, such as one or more PENTIUM II or PENTIUM III processors supplied by INTEL Corporation of 2200 Mission College Boulevard, Santa Clara, Calif., 95052, USA.
-
Memory 246 may include one or more types of solid-state electronic memory, magnetic memory, or optical memory, just to name a few. By way of non-limiting example, memory 246 may include solid-state electronic Random Access Memory (RAM), Sequentially Accessible Memory (SAM) (such as the First-In, First-Out (FIFO) variety or the Last-In, First-Out (LIFO) variety), Programmable Read Only Memory (PROM), Electrically Programmable Read Only Memory (EPROM), or Electrically Erasable Programmable Read Only Memory (EEPROM); an optical disc memory (such as a DVD or CD-ROM); a magnetically encoded hard disc, floppy disc, tape, or cartridge media; or a combination of any of these memory types. Also, memory 246 may be volatile, nonvolatile, or a hybrid combination of volatile and nonvolatile varieties.
-
Audio subsystem 248 provides an interface between workstation 240 and the audio equipment used by operators 31 and 131, such as headsets 37 and 137. Monitor 250 provides visual output from workstation 240 to operators 31 and 131. In addition to audio subsystem 248, headsets 37 and 137, and monitor 250, workstation 240 may include additional and/or alternative components as would occur to one skilled in the art.
- Furthermore, in various embodiments of the invention, the signals acquired by
voice capture units 41 and 141 may be processed in any of a variety of forms as would occur to one skilled in the art.
- In some embodiments, the number of characters to be spoken in a particular context is predetermined. This additional a priori information will often allow the
voice engine 43 or 143 to decode the spoken characters more accurately. In other embodiments, the voice engine is trained to the speech patterns of a particular operator to improve recognition accuracy.
- In yet other embodiments, a process similar to those described above is applied to multiple fields of an address (e.g., ZIP code, street number, street name, directional modifiers, and/or apartment or suite number) to determine a correct, legal address for the recipient. The output record is then used to apply a complete bar code to the mail piece using means and for purposes well known in the art.
- The present invention might also be applied in other directory look-up contexts. For example, accuracy and recognition in an automated telephone directory assistance system might be improved by implementing the present invention therein. In such a system, the user might select a state, then a city, then a listing. At one or more of the selection steps, the user speaks the first few characters of the data item, and the system presents a list of candidate entries. The user selects the desired entry (in response to the list presented by the system) by pressing a key on the telephone keypad.
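The keypad selection step described above could be reduced to something as simple as the following; the function name and argument shapes are assumptions for illustration, not taken from the patent.

```python
def select_entry(candidates, keypress):
    """candidates: directory entries presented to the caller, most likely
    first; keypress: the digit ('1'-'9') pressed on the telephone keypad.
    Returns the chosen entry, or None for an out-of-range key."""
    index = int(keypress) - 1
    return candidates[index] if 0 <= index < len(candidates) else None

listings = ["Smith, Alice - 555-0101", "Smith, Bob - 555-0102"]
print(select_entry(listings, "2"))  # Smith, Bob - 555-0102
```

Mapping the menu onto the existing keypad keeps the confirmation step deterministic even when the speech recognition itself is uncertain.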
- Modifications of the present disclosure and claims, as would occur to one skilled in the art, may be made within the scope of the present invention. While the disclosure above has been made in relation to preferred embodiments, the scope of the invention is defined by the claims appended hereto.
Claims (1)
1. A method, comprising:
capturing an audio signal representative of a plurality of spoken characters, each having a character position;
parsing the audio signal into audio segments, each audio segment representing a character position;
decoding each audio segment into one or more candidate characters for the corresponding character position;
retrieving all directory records that contain, in a predetermined data field:
in at least one character position, either (a) one of the candidate characters, or (b) one or more substitution characters, where each substitution character is selected as a function of at least one of the candidate characters; and
in each remaining character position for which candidate characters were decoded, one of the candidate characters; and
presenting the matching records to an operator.
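A minimal sketch of the retrieval step recited in claim 1: at least one character position may match a substitution character (selected as a function of the candidate characters, e.g. an acoustically confusable letter), while every remaining decoded position must match a candidate character directly. The confusion table, field name, and record layout below are assumptions for demonstration only.

```python
# Hypothetical acoustic-confusion table: substitution characters
# selected as a function of the candidate characters.
SUBSTITUTIONS = {"B": {"P", "D"}, "M": {"N"}, "F": {"S"}}

def with_substitutions(candidates):
    """Candidate characters plus their substitution characters."""
    expanded = set(candidates)
    for c in candidates:
        expanded |= SUBSTITUTIONS.get(c, set())
    return expanded

def retrieve(records, field, positions):
    """positions: one set of candidate characters per decoded position.
    A record matches when some single position is allowed to use a
    substitution character and all remaining positions match candidates."""
    hits = []
    for record in records:
        value = record[field].upper()
        if len(value) < len(positions):
            continue
        for p in range(len(positions)):
            if all(value[i] in (with_substitutions(c) if i == p else c)
                   for i, c in enumerate(positions)):
                hits.append(record)
                break
    return hits

records = [{"name": "PETERS"}, {"name": "BAKER"}, {"name": "METZ"}]
# Spoken "B", "E", "T": "P" is an allowed substitution for "B".
print(retrieve(records, "name", [{"B"}, {"E"}, {"T"}]))  # [{'name': 'PETERS'}]
```

Note how "PETERS" is retrieved even though no candidate character was "P": the substitution in the first position compensates for a likely mis-recognition, exactly the kind of inexact match the claim contemplates.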
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/166,862 US20020193992A1 (en) | 2000-09-09 | 2002-06-11 | Voice-enabled directory look-up |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/659,383 US6405172B1 (en) | 2000-09-09 | 2000-09-09 | Voice-enabled directory look-up based on recognized spoken initial characters |
US10/166,862 US20020193992A1 (en) | 2000-09-09 | 2002-06-11 | Voice-enabled directory look-up |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/659,383 Continuation US6405172B1 (en) | 2000-09-09 | 2000-09-09 | Voice-enabled directory look-up based on recognized spoken initial characters |
Publications (1)
Publication Number | Publication Date |
---|---|
US20020193992A1 true US20020193992A1 (en) | 2002-12-19 |
Family
ID=24645174
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/659,383 Expired - Fee Related US6405172B1 (en) | 2000-09-09 | 2000-09-09 | Voice-enabled directory look-up based on recognized spoken initial characters |
US10/166,862 Abandoned US20020193992A1 (en) | 2000-09-09 | 2002-06-11 | Voice-enabled directory look-up |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/659,383 Expired - Fee Related US6405172B1 (en) | 2000-09-09 | 2000-09-09 | Voice-enabled directory look-up based on recognized spoken initial characters |
Country Status (4)
Country | Link |
---|---|
US (2) | US6405172B1 (en) |
EP (1) | EP1325494A1 (en) |
AU (1) | AU2001289207A1 (en) |
WO (1) | WO2002021511A1 (en) |
Families Citing this family (78)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060121938A1 (en) * | 1999-08-12 | 2006-06-08 | Hawkins Jeffrey C | Integrated handheld computing and telephony device |
US7503016B2 (en) * | 1999-08-12 | 2009-03-10 | Palm, Inc. | Configuration mechanism for organization of addressing elements |
US6781575B1 (en) | 2000-09-21 | 2004-08-24 | Handspring, Inc. | Method and apparatus for organizing addressing elements |
US8064886B2 (en) * | 1999-08-12 | 2011-11-22 | Hewlett-Packard Development Company, L.P. | Control mechanisms for mobile devices |
US7007239B1 (en) * | 2000-09-21 | 2006-02-28 | Palm, Inc. | Method and apparatus for accessing a contacts database and telephone services |
US6394278B1 (en) * | 2000-03-03 | 2002-05-28 | Sort-It, Incorporated | Wireless system and method for sorting letters, parcels and other items |
US8332553B2 (en) * | 2000-09-21 | 2012-12-11 | Hewlett-Packard Development Company, L.P. | Method and apparatus for accessing a contacts database and telephone services |
US6980204B1 (en) * | 2000-09-21 | 2005-12-27 | Jeffrey Charles Hawkins | Charging and communication cable system for a mobile computer apparatus |
US7444284B1 (en) | 2001-01-24 | 2008-10-28 | Bevocal, Inc. | System, method and computer program product for large-scale street name speech recognition |
US20020099545A1 (en) * | 2001-01-24 | 2002-07-25 | Levitt Benjamin J. | System, method and computer program product for damage control during large-scale address speech recognition |
US7010490B2 (en) * | 2001-01-26 | 2006-03-07 | International Business Machines Corporation | Method, system, and apparatus for limiting available selections in a speech recognition system |
US7392287B2 (en) | 2001-03-27 | 2008-06-24 | Hemisphere Ii Investment Lp | Method and apparatus for sharing information using a handheld device |
US7970610B2 (en) * | 2001-04-19 | 2011-06-28 | British Telecommunication Public Limited Company | Speech recognition |
US7692667B2 (en) | 2001-08-17 | 2010-04-06 | Palm, Inc. | Handheld computer having moveable segments that are interactive with an integrated display |
US7376846B2 (en) * | 2001-10-14 | 2008-05-20 | Palm, Inc. | Charging and communication cable system for a mobile computer apparatus |
US7231208B2 (en) * | 2001-10-17 | 2007-06-12 | Palm, Inc. | User interface-technique for managing an active call |
US20030101045A1 (en) * | 2001-11-29 | 2003-05-29 | Peter Moffatt | Method and apparatus for playing recordings of spoken alphanumeric characters |
US7295852B1 (en) | 2003-05-01 | 2007-11-13 | Palm, Inc. | Automated telephone conferencing method and system |
US7865180B2 (en) * | 2003-06-23 | 2011-01-04 | Palm, Inc. | Automated telephone conferencing method and system |
US7363224B2 (en) * | 2003-12-30 | 2008-04-22 | Microsoft Corporation | Method for entering text |
US8949287B2 (en) | 2005-08-23 | 2015-02-03 | Ricoh Co., Ltd. | Embedding hot spots in imaged documents |
US8369655B2 (en) * | 2006-07-31 | 2013-02-05 | Ricoh Co., Ltd. | Mixed media reality recognition using multiple specialized indexes |
US8332401B2 (en) | 2004-10-01 | 2012-12-11 | Ricoh Co., Ltd | Method and system for position-based image matching in a mixed media environment |
US7917554B2 (en) * | 2005-08-23 | 2011-03-29 | Ricoh Co. Ltd. | Visibly-perceptible hot spots in documents |
US9373029B2 (en) * | 2007-07-11 | 2016-06-21 | Ricoh Co., Ltd. | Invisible junction feature recognition for document security or annotation |
US8989431B1 (en) | 2007-07-11 | 2015-03-24 | Ricoh Co., Ltd. | Ad hoc paper-based networking with mixed media reality |
US7885955B2 (en) * | 2005-08-23 | 2011-02-08 | Ricoh Co. Ltd. | Shared document annotation |
US9405751B2 (en) | 2005-08-23 | 2016-08-02 | Ricoh Co., Ltd. | Database for mixed media document system |
US9384619B2 (en) | 2006-07-31 | 2016-07-05 | Ricoh Co., Ltd. | Searching media content for objects specified using identifiers |
US7702673B2 (en) * | 2004-10-01 | 2010-04-20 | Ricoh Co., Ltd. | System and methods for creation and use of a mixed media environment |
US8195659B2 (en) * | 2005-08-23 | 2012-06-05 | Ricoh Co. Ltd. | Integration and use of mixed media documents |
US8176054B2 (en) * | 2007-07-12 | 2012-05-08 | Ricoh Co. Ltd | Retrieving electronic documents by converting them to synthetic text |
US8521737B2 (en) * | 2004-10-01 | 2013-08-27 | Ricoh Co., Ltd. | Method and system for multi-tier image matching in a mixed media environment |
US7991778B2 (en) * | 2005-08-23 | 2011-08-02 | Ricoh Co., Ltd. | Triggering actions with captured input in a mixed media environment |
US9530050B1 (en) | 2007-07-11 | 2016-12-27 | Ricoh Co., Ltd. | Document annotation sharing |
US8868555B2 (en) * | 2006-07-31 | 2014-10-21 | Ricoh Co., Ltd. | Computation of a recongnizability score (quality predictor) for image retrieval |
US9171202B2 (en) | 2005-08-23 | 2015-10-27 | Ricoh Co., Ltd. | Data organization and access for mixed media document system |
US8156116B2 (en) * | 2006-07-31 | 2012-04-10 | Ricoh Co., Ltd | Dynamic presentation of targeted information in a mixed media reality recognition system |
US8086038B2 (en) * | 2007-07-11 | 2011-12-27 | Ricoh Co., Ltd. | Invisible junction features for patch recognition |
US7920759B2 (en) * | 2005-08-23 | 2011-04-05 | Ricoh Co. Ltd. | Triggering applications for distributed action execution and use of mixed media recognition as a control input |
US8838591B2 (en) * | 2005-08-23 | 2014-09-16 | Ricoh Co., Ltd. | Embedding hot spots in electronic documents |
US8184155B2 (en) | 2007-07-11 | 2012-05-22 | Ricoh Co. Ltd. | Recognition and tracking using invisible junctions |
US8856108B2 (en) * | 2006-07-31 | 2014-10-07 | Ricoh Co., Ltd. | Combining results of image retrieval processes |
US8005831B2 (en) * | 2005-08-23 | 2011-08-23 | Ricoh Co., Ltd. | System and methods for creation and use of a mixed media environment with geographic location information |
US8276088B2 (en) * | 2007-07-11 | 2012-09-25 | Ricoh Co., Ltd. | User interface for three-dimensional navigation |
US8156427B2 (en) * | 2005-08-23 | 2012-04-10 | Ricoh Co. Ltd. | User interface for mixed media reality |
US8385589B2 (en) * | 2008-05-15 | 2013-02-26 | Berna Erol | Web-based content detection in images, extraction and recognition |
US7970171B2 (en) * | 2007-01-18 | 2011-06-28 | Ricoh Co., Ltd. | Synthetic image and video generation from ground truth data |
US8600989B2 (en) | 2004-10-01 | 2013-12-03 | Ricoh Co., Ltd. | Method and system for image matching in a mixed media environment |
US7812986B2 (en) * | 2005-08-23 | 2010-10-12 | Ricoh Co. Ltd. | System and methods for use of voice mail and email in a mixed media environment |
US8144921B2 (en) * | 2007-07-11 | 2012-03-27 | Ricoh Co., Ltd. | Information retrieval using invisible junctions and geometric constraints |
US8510283B2 (en) * | 2006-07-31 | 2013-08-13 | Ricoh Co., Ltd. | Automatic adaption of an image recognition system to image capture devices |
US8335789B2 (en) * | 2004-10-01 | 2012-12-18 | Ricoh Co., Ltd. | Method and system for document fingerprint matching in a mixed media environment |
US8825682B2 (en) * | 2006-07-31 | 2014-09-02 | Ricoh Co., Ltd. | Architecture for mixed media reality retrieval of locations and registration of images |
US8024194B2 (en) * | 2004-12-08 | 2011-09-20 | Nuance Communications, Inc. | Dynamic switching between local and remote speech rendering |
US7769772B2 (en) | 2005-08-23 | 2010-08-03 | Ricoh Co., Ltd. | Mixed media reality brokerage network with layout-independent recognition |
US20070088549A1 (en) * | 2005-10-14 | 2007-04-19 | Microsoft Corporation | Natural input of arbitrary text |
US20070100619A1 (en) * | 2005-11-02 | 2007-05-03 | Nokia Corporation | Key usage and text marking in the context of a combined predictive text and speech recognition system |
US7783018B1 (en) | 2006-06-24 | 2010-08-24 | Goldberg Mark S | Directory display and configurable entry system |
US8676810B2 (en) * | 2006-07-31 | 2014-03-18 | Ricoh Co., Ltd. | Multiple index mixed media reality recognition using unequal priority indexes |
US8201076B2 (en) * | 2006-07-31 | 2012-06-12 | Ricoh Co., Ltd. | Capturing symbolic information from documents upon printing |
US9176984B2 (en) * | 2006-07-31 | 2015-11-03 | Ricoh Co., Ltd | Mixed media reality retrieval of differentially-weighted links |
US9063952B2 (en) * | 2006-07-31 | 2015-06-23 | Ricoh Co., Ltd. | Mixed media reality recognition with image tracking |
US9020966B2 (en) * | 2006-07-31 | 2015-04-28 | Ricoh Co., Ltd. | Client device for interacting with a mixed media reality recognition system |
US8489987B2 (en) * | 2006-07-31 | 2013-07-16 | Ricoh Co., Ltd. | Monitoring and analyzing creation and usage of visual content using image and hotspot interaction |
US8073263B2 (en) * | 2006-07-31 | 2011-12-06 | Ricoh Co., Ltd. | Multi-classifier selection and monitoring for MMR-based image recognition |
US20080032728A1 (en) * | 2006-08-03 | 2008-02-07 | Bina Patel | Systems, methods and devices for communicating among multiple users |
US7831431B2 (en) * | 2006-10-31 | 2010-11-09 | Honda Motor Co., Ltd. | Voice recognition updates via remote broadcast signal |
US7877375B1 (en) * | 2007-03-29 | 2011-01-25 | Oclc Online Computer Library Center, Inc. | Name finding system and method |
US9423996B2 (en) * | 2007-05-03 | 2016-08-23 | Ian Cummings | Vehicle navigation user interface customization methods |
US8126519B2 (en) * | 2007-08-31 | 2012-02-28 | Hewlett-Packard Development Company, L.P. | Housing for mobile computing device having construction to slide and pivot into multiple positions |
US8150482B2 (en) | 2008-01-08 | 2012-04-03 | Hewlett-Packard Development Company, L.P. | Mobile computing device with moveable housing segments |
US8233948B2 (en) | 2007-12-11 | 2012-07-31 | Hewlett-Packard Development Company, L.P. | Slider assembly for a housing of a mobile computing device |
US8200298B2 (en) | 2008-01-08 | 2012-06-12 | Hewlett-Packard Development Company, L.P. | Keypad housing configuration for a mobile computing device |
US8385660B2 (en) * | 2009-06-24 | 2013-02-26 | Ricoh Co., Ltd. | Mixed media reality indexing and retrieval for repeated content |
US9058331B2 (en) | 2011-07-27 | 2015-06-16 | Ricoh Co., Ltd. | Generating a conversation in a social network based on visual search results |
US9259765B2 (en) | 2012-12-19 | 2016-02-16 | Pitney Bowes Inc. | Mail run balancing using video capture |
CN112954695A (en) * | 2021-01-26 | 2021-06-11 | 国光电器股份有限公司 | Method and device for distributing network for sound box, computer equipment and storage medium |
Family Cites Families (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4276597A (en) * | 1974-01-17 | 1981-06-30 | Volt Delta Resources, Inc. | Method and apparatus for information storage and retrieval |
US4453217A (en) * | 1982-01-04 | 1984-06-05 | Bell Telephone Laboratories, Incorporated | Directory lookup method and apparatus |
US4556944A (en) | 1983-02-09 | 1985-12-03 | Pitney Bowes Inc. | Voice responsive automated mailing system |
JPS60147887A (en) | 1984-01-12 | 1985-08-03 | Toshiba Corp | Sorter of mail |
US4908864A (en) | 1986-04-05 | 1990-03-13 | Sharp Kabushiki Kaisha | Voice recognition method and apparatus by updating reference patterns |
US4866778A (en) | 1986-08-11 | 1989-09-12 | Dragon Systems, Inc. | Interactive speech recognition apparatus |
US4979206A (en) * | 1987-07-10 | 1990-12-18 | At&T Bell Laboratories | Directory assistance systems |
US4921107A (en) | 1988-07-01 | 1990-05-01 | Pitney Bowes Inc. | Mail sortation system |
US5101375A (en) | 1989-03-31 | 1992-03-31 | Kurzweil Applied Intelligence, Inc. | Method and apparatus for providing binding and capitalization in structured report generation |
US5263118A (en) | 1990-03-13 | 1993-11-16 | Applied Voice Technology, Inc. | Parking ticket enforcement system |
JP2815714B2 (en) | 1991-01-11 | 1998-10-27 | シャープ株式会社 | Translation equipment |
US5212730A (en) | 1991-07-01 | 1993-05-18 | Texas Instruments Incorporated | Voice recognition of proper names using text-derived recognition models |
DE69423838T2 (en) | 1993-09-23 | 2000-08-03 | Xerox Corp., Rochester | Semantic match event filtering for speech recognition and signal translation applications |
US5454063A (en) * | 1993-11-29 | 1995-09-26 | Rossides; Michael T. | Voice input system for data retrieval |
US5581599A (en) * | 1993-12-30 | 1996-12-03 | Northern Telecom Limited | Cordless telephone terminal |
US5677834A (en) | 1995-01-26 | 1997-10-14 | Mooneyham; Martin | Method and apparatus for computer assisted sorting of parcels |
US5677990A (en) * | 1995-05-05 | 1997-10-14 | Panasonic Technologies, Inc. | System and method using N-best strategy for real time recognition of continuously spelled names |
US5905773A (en) * | 1996-03-28 | 1999-05-18 | Northern Telecom Limited | Apparatus and method for reducing speech recognition vocabulary perplexity and dynamically selecting acoustic models |
US6317489B1 (en) * | 1996-07-29 | 2001-11-13 | Elite Access Systems, Inc. | Entry phone apparatus and method with improved alphabetical access |
US5752230A (en) * | 1996-08-20 | 1998-05-12 | Ncr Corporation | Method and apparatus for identifying names with a speech recognition program |
US5995928A (en) | 1996-10-02 | 1999-11-30 | Speechworks International, Inc. | Method and apparatus for continuous spelling speech recognition with early identification |
FI101909B (en) * | 1997-04-01 | 1998-09-15 | Nokia Mobile Phones Ltd | Electronic data retrieval method and device |
US6032164A (en) | 1997-07-23 | 2000-02-29 | Inventec Corporation | Method of phonetic spelling check with rules of English pronunciation |
KR20010023028A (en) * | 1997-08-20 | 2001-03-26 | 맥슨 시스템스 아이엔시. (런던) 엘티디. | A method for locating stored entries in an electronic directory and communication apparatus |
US5987410A (en) | 1997-11-10 | 1999-11-16 | U.S. Philips Corporation | Method and device for recognizing speech in a spelling mode including word qualifiers |
US6052439A (en) | 1997-12-31 | 2000-04-18 | At&T Corp | Network server platform telephone directory white-yellow page services |
US6009392A (en) | 1998-01-15 | 1999-12-28 | International Business Machines Corporation | Training speech recognition by matching audio segment frequency of occurrence with frequency of words and letter combinations in a corpus |
2000
- 2000-09-09 US US09/659,383 patent/US6405172B1/en not_active Expired - Fee Related
2001
- 2001-09-10 WO PCT/US2001/042107 patent/WO2002021511A1/en not_active Application Discontinuation
- 2001-09-10 AU AU2001289207A patent/AU2001289207A1/en not_active Abandoned
- 2001-09-10 EP EP01969009A patent/EP1325494A1/en not_active Withdrawn
2002
- 2002-06-11 US US10/166,862 patent/US20020193992A1/en not_active Abandoned
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050240409A1 (en) * | 2003-06-11 | 2005-10-27 | Gallistel Lorin R | System and method for providing rules-based directory assistance automation |
US20080215336A1 (en) * | 2003-12-17 | 2008-09-04 | General Motors Corporation | Method and system for enabling a device function of a vehicle |
US8751241B2 (en) * | 2003-12-17 | 2014-06-10 | General Motors Llc | Method and system for enabling a device function of a vehicle |
US20080211656A1 (en) * | 2003-12-23 | 2008-09-04 | Valerie Binning | 911 Emergency light |
US8364197B2 (en) | 2003-12-23 | 2013-01-29 | At&T Intellectual Property I, L.P. | Methods, systems, and products for processing emergency communications |
US8983424B2 (en) | 2003-12-23 | 2015-03-17 | At&T Intellectual Property I, L.P. | Methods, systems, and products for processing emergency communications |
US8175226B2 (en) | 2004-01-30 | 2012-05-08 | At&T Intellectual Property I, L.P. | Methods, systems and products for emergency location |
US8666029B2 (en) | 2004-01-30 | 2014-03-04 | At&T Intellectual Property I, L.P. | Methods, systems, and products for emergency location |
US20100017393A1 (en) * | 2008-05-12 | 2010-01-21 | Nuance Communications, Inc. | Entry Selection from Long Entry Lists |
US8484582B2 (en) * | 2008-05-12 | 2013-07-09 | Nuance Communications, Inc. | Entry selection from long entry lists |
Also Published As
Publication number | Publication date |
---|---|
EP1325494A1 (en) | 2003-07-09 |
US6405172B1 (en) | 2002-06-11 |
WO2002021511A1 (en) | 2002-03-14 |
AU2001289207A1 (en) | 2002-03-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6405172B1 (en) | Voice-enabled directory look-up based on recognized spoken initial characters | |
US7574347B2 (en) | Method and apparatus for robust efficient parsing | |
US7124085B2 (en) | Constraint-based speech recognition system and method | |
EP2079038B1 (en) | Block analyser | |
US20100121631A1 (en) | Data detection | |
US7539326B2 (en) | Method for verifying an intended address by OCR percentage address matching | |
WO2006105108A2 (en) | Multigraph optical character reader enhancement systems and methods | |
US20070016420A1 (en) | Dictionary lookup for mobile devices using spelling recognition | |
GB2192077A (en) | A machine translation system | |
EP1058446A2 (en) | Key segment spotting in voice messages | |
JPH09244969A (en) | Personal information extraction method and device | |
US6167367A (en) | Method and device for automatic error detection and correction for computerized text files | |
KR20000073523A (en) | The method to connect a web site using a classical number system. | |
JPH1011434A (en) | Information recognition device | |
KR20010063882A (en) | System and its Method for creating delivery information of mail | |
Hu et al. | On-line handwriting recognition with constrained n-best decoding | |
JP2500680B2 (en) | Data name assignment registration device | |
US6993155B1 (en) | Method for reading document entries and addresses | |
US6970868B2 (en) | Method for ascertaining valid address codes | |
JPH08180066A (en) | Index preparation method, document retrieval method and document retrieval device | |
JP2001211245A (en) | Automatic registration/retrieval/analyzing device for information corresponding to phone call by call voice recognition | |
JP3455924B2 (en) | Message information error detection device and message information error detection method | |
JPH10269205A (en) | Document management device | |
JP2827066B2 (en) | Post-processing method for character recognition of documents with mixed digit strings | |
JP2996823B2 (en) | Character recognition device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |