US20040024582A1 - Systems and methods for aiding human translation - Google Patents
- Publication number: US20040024582A1
- Application number: US10/610,684
- Authority: US (United States)
- Prior art keywords: audio signal, transcription, user, textual representation, providing
- Prior art date: Jul. 3, 2002 (the earliest priority date claimed below)
- Legal status: Abandoned (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G10L25/78—Detection of presence or absence of voice signals (speech or voice analysis techniques)
- G10L15/26—Speech to text systems (speech recognition)
- H04M2201/42—Graphical user interfaces (electronic components, circuits, software, systems or apparatus used in telephone systems)
- H04M2201/60—Medium conversion (telephone systems)
- H04M2203/305—Recording playback features, e.g. increased speed (aspects of automatic or semi-automatic exchanges related to audio recordings)
- Y10S707/99941—Database schema or data structure (data processing: database and file management or data structures)
- Y10S707/99943—Generating database or data structure, e.g. via user interface
Description
- This application claims priority under 35 U.S.C. § 119 based on U.S. Provisional Application Nos. 60/394,064 and 60/394,082, filed Jul. 3, 2002, and Provisional Application No. 60/419,214, filed Oct. 17, 2002, the disclosures of which are incorporated herein by reference.
- This application is related to U.S. patent application, Ser. No. ______ (Docket No. 02-4036), entitled, “Systems and Methods for Facilitating Playback of Media,” filed concurrently herewith and incorporated herein by reference.
- The U.S. Government may have a paid-up license in this invention and the right in limited circumstances to require the patent owner to license others on reasonable terms as provided for by the terms of Contract No. 1999*S018900*000 awarded by the Federal Broadcast Information Service.
- 1. Field of the Invention
- The present invention relates generally to language translation and, more particularly, to systems and methods for aiding a human in translating audio data.
- 2. Description of Related Art
- There are three major tasks when performing translations of an audio signal: selection, translation, and publication. During selection, a human translator chooses a segment of audio to translate. During translation, the translator actually translates the audio segment. During publication, the translator publishes or saves the translation results.
- Human translation is a slow and time-consuming process. As a result, the human translator typically translates only important segments of an audio signal. The translator will often work from a recorded audio signal to skim the complete audio signal, listening for segments that are suitable for translation. The translator then replays selected segments, translating the speech while transcribing them with a word processor. To accurately transcribe the audio segments, the translator will usually go through the audio segments many times, rewinding the audio repeatedly to keep the translation synchronized with the playback. Only after the translator feels that the translated audio segment is accurate and complete will the translator publish the translation results.
- As a result, there is a need for mechanisms that facilitate and expedite the translation of an audio signal.
- Systems and methods consistent with the present invention address this and other needs by providing a transcription of an audio signal, along with the original audio signal, to a translator to assist the translator in translating the audio signal. The systems and methods visually synchronize the playback of the audio signal with the transcription to aid the translation process.
- In one aspect consistent with the principles of the invention, a system aids a user in translating an audio signal that includes speech from one language to another. The system retrieves a textual representation of the audio signal and presents the textual representation to the user. The system receives selection of a segment of the textual representation for translation and obtains a portion of the audio signal corresponding to the segment. The system then provides the segment of the textual representation and the portion of the audio signal to the user to help the user translate the audio signal.
- According to another aspect of the invention, a graphical user interface is provided. The graphical user interface includes a transcription section, a translation section, and a play button. The transcription section includes a transcription of non-text information in a first language. The translation section receives a translation of the non-text information into a second language. The play button, when selected, causes retrieval of the non-text information to be initiated, playing of the non-text information, and the playing of the non-text information to be visually synchronized with the transcription in the transcription section.
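Purely as an illustration of the data these aspects rely on (the specification defines no schema, so every name below is an assumption), the time-coded transcription might be modeled like this:

```typescript
// Hypothetical model of a time-coded transcription. The specification
// describes word-level time codes, speaker identifiers, and topics as
// metadata accompanying the transcription; the field names are invented.
interface WordTimeCode {
  word: string;    // a transcribed word, in the source language
  startMs: number; // when the word is spoken, in ms from the start of the stream
  endMs: number;   // when the word ends
}

interface TranscribedSegment {
  speaker?: string;      // speaker name, when known (e.g., "Elizabeth Vargas")
  topics: string[];      // high-level themes of the episode
  words: WordTimeCode[]; // the transcription, with per-word time codes
  mediaUrl: string;      // where the original audio can be retrieved
}
```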
- The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate the invention and, together with the description, explain the invention. In the drawings,
- FIG. 1 is a diagram of a system in which systems and methods consistent with the present invention may be implemented;
- FIG. 2 is an exemplary diagram of the server of FIG. 1 according to an implementation consistent with the principles of the invention;
- FIG. 3 is an exemplary diagram of the client of FIG. 1 according to an implementation consistent with the principles of the invention;
- FIG. 4 is a flowchart of exemplary processing for presenting information for perusal by a human translator according to an implementation consistent with the principles of the invention;
- FIG. 5 is a diagram of an exemplary graphical user interface that may be presented to a user according to an implementation consistent with the principles of the invention;
- FIG. 6 is a diagram of the graphical user interface of FIG. 5 that illustrates a user's request to play back an original audio signal;
- FIG. 7 is a diagram of the graphical user interface of FIG. 5 that illustrates synchronization of a document to the playback of the original audio signal;
- FIG. 8 is a flowchart of exemplary processing for translating an audio signal according to an implementation consistent with the principles of the invention;
- FIG. 9 is a diagram of an exemplary graphical user interface that may be presented to a user in an implementation consistent with the principles of the invention; and
- FIG. 10 is a diagram of the graphical user interface of FIG. 9 that illustrates a user's translation of an audio signal.
- The following detailed description of the invention refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. Also, the following detailed description does not limit the invention. Instead, the scope of the invention is defined by the appended claims and equivalents.
- Systems and methods consistent with the present invention aid a human translator in translating an audio stream from one language to another. The systems and methods present the human translator with the audio stream, along with a transcription of the audio stream. The systems and methods visually synchronize the playing back of the audio with the words in the transcription. As will be apparent below, such systems and methods help the human translator translate the audio stream more efficiently, quickly, and accurately.
- FIG. 1 is a diagram of an exemplary system 100 in which systems and methods consistent with the present invention may be implemented. System 100 may include server 110, metadata database 120, database of original media 130, and clients 140 interconnected via a network 150. Network 150 may include any type of network, such as a local area network (LAN), a wide area network (WAN), a public telephone network (e.g., the Public Switched Telephone Network (PSTN)), a virtual private network (VPN), or a combination of networks. Server 110, database 130, and clients 140 may connect to network 150 via wired, wireless, and/or optical connections.
- Generally, clients 140 may interact with server 110 to obtain information from metadata database 120. The information may include a textual representation (or transcription) of audio data. A user of one of clients 140 may peruse the information to identify segments to be translated. Client 140 may then obtain the original audio from database of original media 130 either directly or via server 110. Client 140 may present the information and original audio to the user in a manner that facilitates the user's translation of the audio.
- Each of the components of system 100 will now be described in more detail.
- Server 110 may include a computer or another device that is capable of servicing client requests for information and providing such information to a client 140. FIG. 2 is an exemplary diagram of server 110 according to an implementation consistent with the principles of the invention. Server 110 may include bus 210, processor 220, main memory 230, read only memory (ROM) 240, storage device 250, input device 260, output device 270, and communication interface 280. Bus 210 permits communication among the components of server 110.
- Processor 220 may include any type of conventional processor or microprocessor that interprets and executes instructions. Main memory 230 may include a random access memory (RAM) or another type of dynamic storage device that stores information and instructions for execution by processor 220. ROM 240 may include a conventional ROM device or another type of static storage device that stores static information and instructions for use by processor 220. Storage device 250 may include a magnetic and/or optical recording medium and its corresponding drive.
- Input device 260 may include one or more conventional mechanisms that permit an operator to input information to server 110, such as a keyboard, a mouse, a pen, voice recognition and/or biometric mechanisms, etc. Output device 270 may include one or more conventional mechanisms that output information to the operator, including a display, a printer, a pair of speakers, etc. Communication interface 280 may include any transceiver-like mechanism that enables server 110 to communicate with other devices and/or systems. For example, communication interface 280 may include mechanisms for communicating with another device or system via a network, such as network 150.
- As will be described in detail below, server 110, consistent with the present invention, services requests for information and manages access to metadata database 120. Server 110 may perform these tasks in response to processor 220 executing sequences of instructions contained in, for example, memory 230. These instructions may be read into memory 230 from another computer-readable medium, such as storage device 250, or from another device via communication interface 280.
- Execution of the sequences of instructions contained in memory 230 causes processor 220 to perform processes that will be described later. Alternatively, hardwired circuitry may be used in place of or in combination with software instructions to implement processes consistent with the present invention. Thus, processes performed by server 110 are not limited to any specific combination of hardware circuitry and software.
- Metadata database 120 may include a relational database, or another type of database, that stores metadata and other information relating to audio data in any language. An audio processing system (not shown), such as the one described in John Makhoul et al., "Speech and Language Technologies for Audio Indexing and Retrieval," Proceedings of the IEEE, Vol. 88, No. 8, August 2000, pp. 1338-1353, may capture audio data from various sources, process the audio data, and create an automated transcription and metadata relating to the audio data.
- For example, the media processing system may segment an input audio stream by speaker, cluster audio segments from the same speaker, identify speakers known to the system, and transcribe the spoken words. The media processing system may also segment the input stream into stories, based on their topic content, and locate the names of people, places, and organizations. The media processing system may further analyze the input stream to identify when each word is spoken. The media processing system may include any or all of this information in the transcription and metadata relating to the input stream.
- Database of original media 130 may include a conventional database that stores audio (or other types of media) in any language. The audio may be processed by a known audio compression technique, such as MP3 compression, and stored in database 130. The audio stored in database 130 may correspond to the information in metadata database 120. In other words, the original audio may include the data from which the transcription and metadata was created. In other implementations, database 130 may contain additional audio, or another type of media, for which there is no corresponding information in metadata database 120.
- The original audio may be stored in such a way that it is easily retrievable as a whole and in portions. For example, a portion of an audio signal may be retrieved by specifying that the portion of the signal that occurred between 8:05 a.m. and 8:08 a.m. is desired. Database 130 may then provide the desired audio as streaming audio to client 140, for example.
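As a sketch of this retrieval-by-portion idea (the specification names no protocol, so the endpoint shape and parameter names below are assumptions), a client-side helper might request a time slice of the stored audio like so:

```typescript
// Hypothetical helper: request the portion of an audio signal between two
// times as a stream. Assumes a server endpoint that accepts start/end
// parameters; the endpoint and parameter names are invented for illustration.
async function fetchAudioPortion(
  mediaUrl: string,
  startMs: number,
  endMs: number
): Promise<ReadableStream<Uint8Array> | null> {
  const url = `${mediaUrl}?start=${startMs}&end=${endMs}`;
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`audio request failed: ${response.status}`);
  }
  return response.body; // streamed back to the client, as described above
}
```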
- Client 140 may include a personal computer, a laptop, a personal digital assistant, or another type of device that is capable of interacting with server 110 and database of original media 130 to obtain information for translation or perusal. Client 140 may present the information to a user via a graphical user interface (GUI), possibly within a web browser window or a word processing window.
- FIG. 3 is an exemplary diagram of client 140 according to an implementation consistent with the principles of the invention. Client 140 may include a bus 310, a processor 320, a memory 330, one or more input devices 340, one or more output devices 350, and a communication interface 360. Bus 310 may permit communication among the components of client 140.
- Processor 320 may include any type of conventional processor or microprocessor that interprets and executes instructions. Memory 330 may include a RAM or another type of dynamic storage device that stores information and instructions for execution by processor 320; a ROM or another type of static storage device that stores static information and instructions for use by processor 320; and/or some other type of magnetic or optical recording medium and its corresponding drive. For example, memory 330 may include both volatile and non-volatile memory devices.
- Input devices 340 may include one or more conventional mechanisms that permit a user to input information into client 140 or control operation of client 140, such as a keyboard, a mouse, a pen, etc. In one implementation, input devices 340 may include a foot pedal that permits a user to control the playback of an audio signal. Output devices 350 may include one or more conventional mechanisms that output information to the user, including a display, a printer, a pair of speakers, etc. Communication interface 360 may include any transceiver-like mechanism that enables client 140 to communicate with other devices and systems via a network, such as network 150.
- As will be described in detail below, client 140, consistent with the present invention, aids a user in translating an audio signal by, for example, presenting a textual representation of the audio signal in the same window that will be used for the translation, and visually synchronizing the playing back of the audio signal with the textual representation of the audio signal. Client 140 may perform these operations in response to processor 320 executing software instructions contained in a computer-readable medium, such as memory 330.
- The software instructions may be read into memory 330 from another computer-readable medium or from another device via communication interface 360. The software instructions contained in memory 330 cause processor 320 to perform processes that will be described later. Alternatively, hardwired circuitry may be used in place of or in combination with software instructions to implement processes consistent with the present invention. Thus, processes performed by client 140 are not limited to any specific combination of hardware circuitry and software.
- FIG. 4 is a flowchart of exemplary processing for presenting information for perusal by a human translator according to an implementation consistent with the principles of the invention. Processing may begin with the user (i.e., the human translator) inputting, into client 140, a request for information. For example, a typical request might be as specific as "give me audio from Al Jazeera for Jan. 3, 2002 between 9:00 a.m. and 10:00 a.m.," or as general as "show me everything where George Bush was the topic." Other requests may include data regarding the date, time, language, and/or source of the desired information, or relevant words next to each other or within a certain distance of each other (similar to a typical database query).
- Client 140 may process (e.g., convert) the request, if necessary, and issue the request to server 110 (act 405). For example, client 140 may establish communication with server 110 via network 150, using conventional techniques. Once communication has been established, client 140 may transmit the request to server 110.
- Server 110 may formulate a query based on the request from client 140 and use the query to access metadata database 120. Server 110 may retrieve data (e.g., a transcription and metadata) relating to the desired information from metadata database 120 (act 410). Server 110 may then convert the data to an appropriate form, such as a Hyper Text Mark-up Language (HTML) document, and transmit the HTML document to client 140 for display in a standard web browser (acts 415 and 420). The HTML document may contain the transcription and metadata information, such as speaker identifiers, topics, and word time codes. In other implementations, server 110 may convert the data to another form or transmit the data unconverted to client 140.
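As an illustrative aside, the request/response exchange of acts 405 through 420 might look like the following sketch. The endpoint, query fields, and response shape are assumptions; the specification says only that the client issues a request and the server returns a document containing the transcription and metadata.

```typescript
// Hypothetical request/response flow between client 140 and server 110.
// The "/api/transcriptions" endpoint and its query fields are invented.
interface InfoRequest {
  source?: string; // e.g., "Al Jazeera"
  date?: string;   // e.g., "2002-01-03"
  topic?: string;  // e.g., "George Bush"
}

async function requestTranscription(req: InfoRequest): Promise<string> {
  const params = new URLSearchParams(
    Object.entries(req).filter((kv): kv is [string, string] => kv[1] != null)
  );
  const response = await fetch(`/api/transcriptions?${params}`);
  if (!response.ok) throw new Error(`request failed: ${response.status}`);
  return response.text(); // an HTML document with transcription + time codes
}
```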
- Client 140 may present the HTML document to the user via a graphical user interface (GUI) (act 425). FIG. 5 is a diagram of an exemplary GUI 500 that client 140 may present to a user according to an implementation consistent with the principles of the invention. GUI 500 may be part of an interface of a standard Internet browser, such as Internet Explorer or Netscape Navigator, or any other browser that follows World Wide Web Consortium (W3C) specifications for HTML. The information presented by GUI 500 in this example relates to an episode of a television news program (i.e., ABC's World News Tonight from Jan. 31, 1998).
- GUI 500 may include a speaker section 510, a transcription section 520, and a topics section 530. Speaker section 510 may identify boundaries between speakers, the gender of a speaker, and the name of a speaker (when known). In this way, speaker segments are clustered together over the entire episode to group together segments from the same speaker under the same label. In the example of FIG. 5, one speaker, Elizabeth Vargas, has been identified by name.
- Transcription section 520 may include a transcription of the audio data. Transcription section 520 may identify named entities (i.e., people, places, and organizations) by highlighting them in some manner. For example, people, places, and organizations may be identified using different colors. Topics section 530 may include topics relating to the transcription in transcription section 520. Each of the topics may describe a main theme of the episode and may constitute a very high-level summary of the content of the transcription, even though the exact words in the topic may not be included in the transcription.
- GUI 500 may also include a play audio button 540 corresponding to an embedded media player, such as the RealPlayer media player available from RealNetworks, that permits the original audio corresponding to the transcription in transcription section 520 to be played back. As will be described below, the media player may access database of original media 130 to retrieve the original audio and present the audio to the user.
- GUI 500 may also include a product button 550. As will be described below, product button 550 may be used when the user desires to produce a translation of one or more portions of the document in transcription section 520.
- FIG. 6 is a diagram of GUI500 that illustrates a user's request to play back an original media. The user highlights a portion of the HTML document at highlighted
block 610. The user selects play button 540 to initiate the playback process. - Returning to FIG. 4, when the user selects play button540,
client 140 initiates the embedded media player. The media player may determine the portion identified by the user, such as highlighted block 610 (act 435). In particular, the media player may identify the time codes, corresponding to the beginning and ending (if applicable) of the identified portion, using the time codes in the HTML document. - The media player may then retrieve the desired portion of the original audio signal (act440). The media player may use conventional techniques to pull that portion of the original audio from database of
original media 130. For example, the media player may use the beginning and ending time codes (e.g., 7:03 p.m. to 7:05 p.m.) when accessingdatabase 130. The original audio fromdatabase 130 may stream back to the media player. The media player can then play the original audio for the user (act 445). - As the media player plays back the original audio,
client 140 visually synchronizes the playback with the transcription in the HTML document (act 450). To facilitate this, the media player letsclient 140 know as time passes in the playback of the original audio. Because the metadata of the HTML document includes time codes that identify exactly when each word in the transcription of the HTML document was spoken,client 140 knows precisely (possibly down to the millisecond) when to highlight (or otherwise visually distinguish) a word.Client 140 compares the times emitted by the media player with the time codes and highlights the appropriate words. - FIG. 7 is a diagram of GUI500 that illustrates the synchronization of the HTML document to the playback of the original media.
Client 140 visually distinguishes the word “american” in synchronism with the playback of the original audio by the media player, as shown at the highlightedblock 710. - The user may be permitted to stop the playback at any time. The user may also be permitted to control the playback by, for example, fast forwarding, speeding it up, slowing it down, or backing it up so many seconds or so many words. The media player or the graphical user interface may present the user with a set of controls to permit the user to perform these functions. Alternatively, the user may use foot pedals to control the playback of the audio signal.
- The user may also be permitted to alter the HTML document in some manner and save the altered document back in metadata database 120. For example, the user may be permitted to highlight or comment on the document that the user, or another translator, may desire to later translate. Client 140, in this case, may send the altered document back to server 110 for storage in metadata database 120. - At some point, the user may identify this document or another document as containing one or more portions that the user desires to translate. FIG. 8 is a flowchart of exemplary processing for translating an audio signal according to an implementation consistent with the principles of the invention. Processing may begin with the user viewing a document presented by
client 140 that corresponds to an audio signal that the user desires to translate. - In translating the audio signal, the user performs three separate tasks: selection 810, translation 820, and publication 830. During selection task 810, the user selects a portion of the audio signal to translate (act 812). For example, the user may highlight or otherwise identify a portion of the document that the user desires to translate and select product button 550. In one implementation, the user may use a computer mouse to highlight the desired portion. Alternatively, the user may simply identify a starting point from which the user desires to begin the translation. - Upon selection of product button 550,
client 140 may send a message to server 110 to retrieve the portion of the document selected by the user (act 814). The message may include data specific to the portion (or range) selected by the user. In response to the message, server 110 may obtain the text relating to the selected portion (i.e., the transcription of the audio signal relating to the range selected by the user) and send a return message to client 140. The return message may include the text and metadata (e.g., time codes, named entities, etc.) relating to the selected portion, with the Multipurpose Internet Mail Extensions (MIME) type set to inform client 140 that the text is intended for a translation application. The translation application may be a word processing application, such as Microsoft Word or WordPerfect from Corel Corporation, or another type of application, such as an application that operates upon Java or HTML. - Upon receipt of the return message,
client 140 may initiate the translation application and present the text to the user (acts 816 and 818). FIG. 9 is a diagram of an exemplary graphical user interface (GUI) 900 that client 140 may present to a user in an implementation consistent with the principles of the invention. GUI 900 may be associated with the translation application to aid the user in translating an audio signal.
translation section 910 andtranscription section 920.Translation section 910 may include the area in which the user types in, or otherwise provides, a translation of the audio signal.Transcription section 920 may include the area in which the text (or transcription) of the portion of the audio signal to be translated is displayed. GUI 900 may also include several buttons, such as backup button 930, play/pause button 940, save product button 950, and configuration (config) button 960. The functions performed when buttons 930950 are selected will be described below. - Returning to FIG. 8, during
translation task 820, the user translates the selected portion of the audio signal. The user may begin by selecting play/pause button 940. In response, client 140 may initiate an embedded media player, such as the RealPlayer media player available from RealNetworks. The media player may identify the time codes corresponding to the beginning and ending (if applicable) of the selected portion. - The media player may then retrieve the corresponding portion of the original audio signal (act 822). The media player may use conventional techniques to pull that portion of the original audio from the database of original media 130. For example, the media player may use the beginning and ending time codes (e.g., 7:03 p.m. to 7:05 p.m.) when accessing database 130. The original audio from database 130 may stream back to the media player. The media player then plays the original audio for the user (act 824). - As the media player plays back the original audio, client 140 may visually synchronize the playback with the transcription in transcription section 920 (act 826). To facilitate this, the media player lets client 140 know as time passes in the playback of the original audio. Because the time codes identify exactly when each word in transcription section 920 was spoken, client 140 knows precisely (possibly down to the millisecond) when to highlight (or otherwise visually distinguish) a word. Client 140 compares the times emitted by the media player with the time codes and highlights the appropriate words. - As the audio plays, the user may type in, or otherwise provide, a translation of the audio signal (act 828). FIG. 10 is a diagram of GUI 900 that illustrates a user's translation of an audio signal. If the user wishes to stop the playback of the audio, the user may select play/pause button 940. Play/pause button 940 may be toggled to start and stop the playback of the audio signal. Typically during translation, the user will need to replay portions of the audio signal. To do this, the user may select backup button 930. Backup button 930 may cause the playback of the audio to rewind a predetermined amount of time or number of words. The amount of rewind may be user-determined via configuration button 960.
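A rough sketch of how play/pause button 940 and backup button 930 might drive an embedded player follows. The MediaPlayer interface merely stands in for whatever player the client actually embeds; every name here is hypothetical, not part of the disclosure.

```java
final class PlaybackController {
    private final MediaPlayer player; // hypothetical wrapper around the embedded player
    private long rewindMs = 5_000;    // user-configurable via the config window

    PlaybackController(MediaPlayer player) { this.player = player; }

    void setRewindMs(long ms) { this.rewindMs = ms; }

    // Play/pause button 940: toggle playback.
    void togglePlayPause() {
        if (player.isPlaying()) player.pause(); else player.play();
    }

    // Backup button 930: jump back a predetermined amount of time, clamped at 0.
    void backUp() {
        long target = Math.max(0, player.currentTimeMs() - rewindMs);
        player.seekTo(target);
    }
}

// Minimal interface standing in for the embedded media player.
interface MediaPlayer {
    boolean isPlaying();
    void play();
    void pause();
    long currentTimeMs();
    void seekTo(long ms);
}
```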
- When configuration button 960 is selected, the user may be presented with a configuration window, such as window 1010. Window 1010 may present the user with a number of options. For example, the user may be prompted to provide the product (i.e., the translation) with a name. The user may also be prompted to identify a location at which the product is to be published (or saved). The user may further be prompted to identify the amount of rewind for each selection of backup button 930. The amount of rewind may be specified in terms of seconds or the number of words. If a number of words is specified, client 140 may convert the number of words to seconds based on the time codes associated with the text in transcription section 920. - In an implementation consistent with the principles of the invention, the functions of play/pause button 940 and backup button 930 may be initiated via one or more foot pedals. For example, the user may press a foot pedal to start and stop the playback of the audio. The user may press another foot pedal to back up a predetermined amount of time or number of words. The same or other foot pedals may be used to fast forward, speed up, and/or slow down the playback of the audio. The foot pedals may free the user's hands for typing in the translation.
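The words-to-seconds conversion mentioned above might be computed from the per-word time codes along these lines, again reusing the hypothetical TimedWord record from the earlier sketches; the method is illustrative only.

```java
import java.util.List;

final class RewindConfig {
    // Convert a rewind amount expressed as a number of words into milliseconds,
    // using per-word time codes. 'currentWord' is the index of the word being
    // played back when the backup button is pressed.
    static long wordsToMs(List<TimedWord> words, int currentWord, int rewindWords) {
        int target = Math.max(0, currentWord - rewindWords); // clamp at document start
        return words.get(currentWord).beginMs() - words.get(target).beginMs();
    }
}
```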
- Returning to FIG. 8, during
publication task 830, the user may publish the translation results (act 832). The user may select save product button 950, which may cause configuration window 1010 (FIG. 10) to be presented to the user. The user, via configuration window 1010, may be given the option of saving the translation results to any file, directory, or location. The user may save the results to a location where they will be useful and may be easily accessed by people who may be interested in the translation. - Systems and methods consistent with the present invention provide mechanisms that aid a human in translating an audio signal into another language. The systems and methods provide improvements at all three stages of the translation process. For example, the systems and methods provide a transcription of the audio signal to the translator. This helps the translator in selecting a segment of the audio signal to translate because it is faster to skim through text than it is to listen to an entire audio signal. The translator may also use search criteria to find relevant text. This makes it possible to easily monitor a very large number of audio sources.
- The systems and methods also present a transcription of the audio signal on the same screen that the translator uses to provide the translation. The systems and methods visually synchronize the playback of the audio signal with the text in the transcription. This helps the translator in translating the audio signal. For example, this gives the translator two indications (audible and visual) of what a particular word might mean, which increases the speed of translation. More people can read a language and translate it than can translate audio alone.
- The systems and methods also permit the translator to publish the translation results anywhere that is useful. This helps the translator in making the translation results available to those who would be interested in them.
- The foregoing description of preferred embodiments of the present invention provides illustration and description, but is not intended to be exhaustive or to limit the invention to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of the invention.
- For example, it has been disclosed that a media player retrieves the original audio when instructed by a user. In other implementations, the original audio may be transmitted to the user along with the translation of the audio and any associated metadata. In yet other implementations, more than the requested portion of the original media may be transmitted to the user in anticipation of its later request by the user.
- It may also be possible to translate the audio signal or the transcription of the audio signal using automated techniques. In this case, the translation may be presented to a human translator, possibly along with the transcription and/or the original audio signal, to aid the translator in preparing an accurate translation of the audio signal.
- Further, while aspects of the invention have been described as operating upon speech within an audio signal, these aspects may also operate upon speech contained within a video signal. Also, while aspects of the invention have been described with reference to a client-server configuration over a network, systems and methods for translating in a manner consistent with the present invention may also be implemented locally on a single computer.
- No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article “a” is intended to include one or more items. Where only one item is intended, the term “one” or similar language is used. The scope of the invention is defined by the claims and their equivalents.
Claims (45)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/610,684 US20040024582A1 (en) | 2002-07-03 | 2003-07-02 | Systems and methods for aiding human translation |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US39406402P | 2002-07-03 | 2002-07-03 | |
US39408202P | 2002-07-03 | 2002-07-03 | |
US41921402P | 2002-10-17 | 2002-10-17 | |
US10/610,684 US20040024582A1 (en) | 2002-07-03 | 2003-07-02 | Systems and methods for aiding human translation |
Publications (1)
Publication Number | Publication Date |
---|---|
US20040024582A1 true US20040024582A1 (en) | 2004-02-05 |
Family
ID=30003990
Family Applications (11)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/610,696 Abandoned US20040024585A1 (en) | 2002-07-03 | 2003-07-02 | Linguistic segmentation of speech |
US10/611,106 Active 2026-04-11 US7337115B2 (en) | 2002-07-03 | 2003-07-02 | Systems and methods for providing acoustic classification |
US10/610,799 Abandoned US20040199495A1 (en) | 2002-07-03 | 2003-07-02 | Name browsing systems and methods |
US10/610,532 Abandoned US20040006481A1 (en) | 2002-07-03 | 2003-07-02 | Fast transcription of speech |
US10/610,699 Abandoned US20040117188A1 (en) | 2002-07-03 | 2003-07-02 | Speech based personal information manager |
US10/610,697 Expired - Fee Related US7290207B2 (en) | 2002-07-03 | 2003-07-02 | Systems and methods for providing multimedia information management |
US10/610,574 Abandoned US20040006748A1 (en) | 2002-07-03 | 2003-07-02 | Systems and methods for providing online event tracking |
US10/610,679 Abandoned US20040024598A1 (en) | 2002-07-03 | 2003-07-02 | Thematic segmentation of speech |
US10/610,533 Expired - Fee Related US7801838B2 (en) | 2002-07-03 | 2003-07-02 | Multimedia recognition system comprising a plurality of indexers configured to receive and analyze multimedia data based on training data and user augmentation relating to one or more of a plurality of generated documents |
US10/610,684 Abandoned US20040024582A1 (en) | 2002-07-03 | 2003-07-02 | Systems and methods for aiding human translation |
US12/806,465 Expired - Fee Related US8001066B2 (en) | 2002-07-03 | 2010-08-13 | Systems and methods for improving recognition results via user-augmentation of a database |
Family Applications Before (9)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/610,696 Abandoned US20040024585A1 (en) | 2002-07-03 | 2003-07-02 | Linguistic segmentation of speech |
US10/611,106 Active 2026-04-11 US7337115B2 (en) | 2002-07-03 | 2003-07-02 | Systems and methods for providing acoustic classification |
US10/610,799 Abandoned US20040199495A1 (en) | 2002-07-03 | 2003-07-02 | Name browsing systems and methods |
US10/610,532 Abandoned US20040006481A1 (en) | 2002-07-03 | 2003-07-02 | Fast transcription of speech |
US10/610,699 Abandoned US20040117188A1 (en) | 2002-07-03 | 2003-07-02 | Speech based personal information manager |
US10/610,697 Expired - Fee Related US7290207B2 (en) | 2002-07-03 | 2003-07-02 | Systems and methods for providing multimedia information management |
US10/610,574 Abandoned US20040006748A1 (en) | 2002-07-03 | 2003-07-02 | Systems and methods for providing online event tracking |
US10/610,679 Abandoned US20040024598A1 (en) | 2002-07-03 | 2003-07-02 | Thematic segmentation of speech |
US10/610,533 Expired - Fee Related US7801838B2 (en) | 2002-07-03 | 2003-07-02 | Multimedia recognition system comprising a plurality of indexers configured to receive and analyze multimedia data based on training data and user augmentation relating to one or more of a plurality of generated documents |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/806,465 Expired - Fee Related US8001066B2 (en) | 2002-07-03 | 2010-08-13 | Systems and methods for improving recognition results via user-augmentation of a database |
Country Status (1)
Country | Link |
---|---|
US (11) | US20040024585A1 (en) |
US5963940A (en) * | 1995-08-16 | 1999-10-05 | Syracuse University | Natural language information retrieval system and method |
US6026388A (en) | 1995-08-16 | 2000-02-15 | Textwise, Llc | User interface and other enhancements for natural language information retrieval system and method |
EP0856175A4 (en) | 1995-08-16 | 2000-05-24 | Univ Syracuse | Multilingual document retrieval system and method using semantic vector matching |
US5757536A (en) * | 1995-08-30 | 1998-05-26 | Sandia Corporation | Electrically-programmable diffraction grating |
US20020002562A1 (en) * | 1995-11-03 | 2002-01-03 | Thomas P. Moran | Computer controlled display system using a graphical replay device to control playback of temporal data representing collaborative activities |
US5742419A (en) * | 1995-11-07 | 1998-04-21 | The Board Of Trustees Of The Leland Stanford Junior Universtiy | Miniature scanning confocal microscope |
US5960447A (en) | 1995-11-13 | 1999-09-28 | Holt; Douglas | Word tagging and editing system for speech recognition |
US5999306A (en) * | 1995-12-01 | 1999-12-07 | Seiko Epson Corporation | Method of manufacturing spatial light modulator and electronic device employing it |
JPH09269931A (en) | 1996-01-30 | 1997-10-14 | Canon Inc | Cooperative work environment constructing system, its method and medium |
US6067517A (en) | 1996-02-02 | 2000-05-23 | International Business Machines Corporation | Transcription of speech data with segments from acoustically dissimilar environments |
US5862259A (en) | 1996-03-27 | 1999-01-19 | Caere Corporation | Pattern recognition employing arbitrary segmentation and compound probabilistic evaluation |
US6024571A (en) | 1996-04-25 | 2000-02-15 | Renegar; Janet Elaine | Foreign language communication system/device and learning aid |
US5778187A (en) | 1996-05-09 | 1998-07-07 | Netcast Communications Corp. | Multicasting method and apparatus |
US5996022A (en) * | 1996-06-03 | 1999-11-30 | Webtv Networks, Inc. | Transcoding data in a proxy computer prior to transmitting the audio data to a client |
US5806032A (en) | 1996-06-14 | 1998-09-08 | Lucent Technologies Inc. | Compilation of weighted finite-state transducers from decision trees |
US6169789B1 (en) | 1996-12-16 | 2001-01-02 | Sanjay K. Rao | Intelligent keyboard system |
US6732183B1 (en) * | 1996-12-31 | 2004-05-04 | Broadware Technologies, Inc. | Video and audio streaming for multiple users |
US6185531B1 (en) | 1997-01-09 | 2001-02-06 | Gte Internetworking Incorporated | Topic indexing method |
US6088669A (en) | 1997-01-28 | 2000-07-11 | International Business Machines, Corporation | Speech recognition with attempted speaker recognition for speaker model prefetching or alternative speech modeling |
JP2991287B2 (en) * | 1997-01-28 | 1999-12-20 | 日本電気株式会社 | Suppression standard pattern selection type speaker recognition device |
US6029124A (en) | 1997-02-21 | 2000-02-22 | Dragon Systems, Inc. | Sequential, nonparametric speech recognition and speaker identification |
US6024751A (en) * | 1997-04-11 | 2000-02-15 | Coherent Inc. | Method and apparatus for transurethral resection of the prostate |
US6463444B1 (en) * | 1997-08-14 | 2002-10-08 | Virage, Inc. | Video cataloger system with extensibility |
US6567980B1 (en) * | 1997-08-14 | 2003-05-20 | Virage, Inc. | Video cataloger system with hyperlinked output |
US6360234B2 (en) * | 1997-08-14 | 2002-03-19 | Virage, Inc. | Video cataloger system with synchronized encoders |
US6052657A (en) | 1997-09-09 | 2000-04-18 | Dragon Systems, Inc. | Text segmentation and identification of topic using language models |
US6317716B1 (en) | 1997-09-19 | 2001-11-13 | Massachusetts Institute Of Technology | Automatic cueing of speech |
WO1999017235A1 (en) | 1997-10-01 | 1999-04-08 | At & T Corp. | Method and apparatus for storing and retrieving labeled interval data for multimedia recordings |
US6961954B1 (en) | 1997-10-27 | 2005-11-01 | The Mitre Corporation | Automated segmentation, information extraction, summarization, and presentation of broadcast news |
US6064963A (en) | 1997-12-17 | 2000-05-16 | Opus Telecom, L.L.C. | Automatic key word or phrase speech recognition for the corrections industry |
JP4183311B2 (en) * | 1997-12-22 | 2008-11-19 | 株式会社リコー | Document annotation method, annotation device, and recording medium |
US5970473A (en) | 1997-12-31 | 1999-10-19 | At&T Corp. | Video communication device providing in-home catalog services |
SE511584C2 (en) | 1998-01-15 | 1999-10-25 | Ericsson Telefon Ab L M | information Routing |
US6327343B1 (en) | 1998-01-16 | 2001-12-04 | International Business Machines Corporation | System and methods for automatic call and data transfer processing |
JP3181548B2 (en) | 1998-02-03 | 2001-07-03 | 富士通株式会社 | Information retrieval apparatus and information retrieval method |
US6073096A (en) | 1998-02-04 | 2000-06-06 | International Business Machines Corporation | Speaker adaptation system and method based on class-specific pre-clustering training speakers |
US7257528B1 (en) * | 1998-02-13 | 2007-08-14 | Zi Corporation Of Canada, Inc. | Method and apparatus for Chinese character text input |
US6381640B1 (en) * | 1998-09-11 | 2002-04-30 | Genesys Telecommunications Laboratories, Inc. | Method and apparatus for automated personalization and presentation of workload assignments to agents within a multimedia communication center |
US6112172A (en) | 1998-03-31 | 2000-08-29 | Dragon Systems, Inc. | Interactive searching |
CN1159662C (en) * | 1998-05-13 | 2004-07-28 | 国际商业机器公司 | Automatic punctuating for continuous speech recognition |
US6076053A (en) * | 1998-05-21 | 2000-06-13 | Lucent Technologies Inc. | Methods and apparatus for discriminative training and adaptation of pronunciation networks |
US6067514A (en) | 1998-06-23 | 2000-05-23 | International Business Machines Corporation | Method for automatically punctuating a speech utterance in a continuous speech recognition system |
US6246983B1 (en) * | 1998-08-05 | 2001-06-12 | Matsushita Electric Corporation Of America | Text-to-speech e-mail reader with multi-modal reply processor |
US6373985B1 (en) | 1998-08-12 | 2002-04-16 | Lucent Technologies, Inc. | E-mail signature block analysis |
US6161087A (en) | 1998-10-05 | 2000-12-12 | Lernout & Hauspie Speech Products N.V. | Speech-recognition-assisted selective suppression of silent and filled speech pauses during playback of an audio recording |
US6038058A (en) * | 1998-10-15 | 2000-03-14 | Memsolutions, Inc. | Grid-actuated charge controlled mirror and method of addressing the same |
US6347295B1 (en) * | 1998-10-26 | 2002-02-12 | Compaq Computer Corporation | Computer method and apparatus for grapheme-to-phoneme rule-set-generation |
US6332139B1 (en) | 1998-11-09 | 2001-12-18 | Mega Chips Corporation | Information communication system |
JP3252282B2 (en) | 1998-12-17 | 2002-02-04 | 松下電器産業株式会社 | Method and apparatus for searching scene |
US6654735B1 (en) | 1999-01-08 | 2003-11-25 | International Business Machines Corporation | Outbound information analysis for generating user interest profiles and improving user productivity |
US6253179B1 (en) * | 1999-01-29 | 2001-06-26 | International Business Machines Corporation | Method and apparatus for multi-environment speaker verification |
DE19912405A1 (en) * | 1999-03-19 | 2000-09-21 | Philips Corp Intellectual Pty | Determination of a regression class tree structure for speech recognizers |
CN1148965C (en) | 1999-03-30 | 2004-05-05 | 提维股份有限公司 | Data storage management and scheduling system |
US6345252B1 (en) * | 1999-04-09 | 2002-02-05 | International Business Machines Corporation | Methods and apparatus for retrieving audio information using content and speaker information |
US6434520B1 (en) | 1999-04-16 | 2002-08-13 | International Business Machines Corporation | System and method for indexing and querying audio archives |
US6711585B1 (en) * | 1999-06-15 | 2004-03-23 | Kanisa Inc. | System and method for implementing a knowledge management system |
US6219640B1 (en) | 1999-08-06 | 2001-04-17 | International Business Machines Corporation | Methods and apparatus for audio-visual speaker recognition and utterance verification |
EP1079313A3 (en) | 1999-08-20 | 2005-10-19 | Digitake Software Systems Limited | An audio processing system |
JP3232289B2 (en) * | 1999-08-30 | 2001-11-26 | インターナショナル・ビジネス・マシーンズ・コーポレーション | Symbol insertion device and method |
US6480826B2 (en) | 1999-08-31 | 2002-11-12 | Accenture Llp | System and method for a telephonic emotion detection that provides operator feedback |
US6711541B1 (en) | 1999-09-07 | 2004-03-23 | Matsushita Electric Industrial Co., Ltd. | Technique for developing discriminative sound units for speech recognition and allophone modeling |
US6624826B1 (en) * | 1999-09-28 | 2003-09-23 | Ricoh Co., Ltd. | Method and apparatus for generating visual representations for audio documents |
US6396619B1 (en) * | 2000-01-28 | 2002-05-28 | Reflectivity, Inc. | Deflectable spatial light modulator having stopping mechanisms |
US6571208B1 (en) | 1999-11-29 | 2003-05-27 | Matsushita Electric Industrial Co., Ltd. | Context-dependent acoustic models for medium and large vocabulary speech recognition with eigenvoice training |
JP2003518266A (en) | 1999-12-20 | 2003-06-03 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | Speech reproduction for text editing of speech recognition system |
US20020071169A1 (en) * | 2000-02-01 | 2002-06-13 | Bowers John Edward | Micro-electro-mechanical-system (MEMS) mirror device |
JP5105682B2 (en) * | 2000-02-25 | 2012-12-26 | ニュアンス コミュニケーションズ オーストリア ゲーエムベーハー | Speech recognition apparatus with reference conversion means |
US7197694B2 (en) | 2000-03-21 | 2007-03-27 | Oki Electric Industry Co., Ltd. | Image display system, image registration terminal device and image reading terminal device used in the image display system |
US7120575B2 (en) | 2000-04-08 | 2006-10-10 | International Business Machines Corporation | Method and system for the automatic segmentation of an audio stream into semantic or syntactic units |
EP1148505A3 (en) * | 2000-04-21 | 2002-03-27 | Matsushita Electric Industrial Co., Ltd. | Data playback apparatus |
US6388661B1 (en) * | 2000-05-03 | 2002-05-14 | Reflectivity, Inc. | Monochrome and color digital display systems and methods |
US6505153B1 (en) * | 2000-05-22 | 2003-01-07 | Compaq Information Technologies Group, L.P. | Efficient method for producing off-line closed captions |
US6748356B1 (en) * | 2000-06-07 | 2004-06-08 | International Business Machines Corporation | Methods and apparatus for identifying unknown speakers using a hierarchical tree structure |
US7047192B2 (en) | 2000-06-28 | 2006-05-16 | Poirier Darrell A | Simultaneous multi-user real-time speech recognition system |
US6337760B1 (en) * | 2000-07-17 | 2002-01-08 | Reflectivity, Inc. | Encapsulated multi-directional light beam steering device |
US6931376B2 (en) | 2000-07-20 | 2005-08-16 | Microsoft Corporation | Speech-related event notification system |
AU2001271940A1 (en) * | 2000-07-28 | 2002-02-13 | Easyask, Inc. | Distributed search system and method |
EP1176493A3 (en) | 2000-07-28 | 2002-07-10 | Jan Pathuel | Method and system of securing data and systems |
AU2001288469A1 (en) | 2000-08-28 | 2002-03-13 | Emotion, Inc. | Method and apparatus for digital media management, retrieval, and collaboration |
US6604110B1 (en) | 2000-08-31 | 2003-08-05 | Ascential Software, Inc. | Automated software code generation from a metadata-based repository |
US6647383B1 (en) | 2000-09-01 | 2003-11-11 | Lucent Technologies Inc. | System and method for providing interactive dialogue and iterative search functions to find information |
WO2002029614A1 (en) | 2000-09-30 | 2002-04-11 | Intel Corporation | Method and system to scale down a decision tree-based hidden markov model (hmm) for speech recognition |
AU2000276394A1 (en) | 2000-09-30 | 2002-04-15 | Intel Corporation | Method and system for generating and searching an optimal maximum likelihood decision tree for hidden markov model (hmm) based speech recognition |
US6431714B1 (en) * | 2000-10-10 | 2002-08-13 | Nippon Telegraph And Telephone Corporation | Micro-mirror apparatus and production method therefor |
US6934756B2 (en) | 2000-11-01 | 2005-08-23 | International Business Machines Corporation | Conversational networking via transport, coding and control conversational protocols |
US20050060162A1 (en) | 2000-11-10 | 2005-03-17 | Farhad Mohit | Systems and methods for automatic identification and hyperlinking of words or other data items and for information retrieval using hyperlinked words or data items |
US6574026B2 (en) * | 2000-12-07 | 2003-06-03 | Agere Systems Inc. | Magnetically-packaged optical MEMs device |
SG98440A1 (en) * | 2001-01-16 | 2003-09-19 | Reuters Ltd | Method and apparatus for a financial database structure |
US6944272B1 (en) * | 2001-01-16 | 2005-09-13 | Interactive Intelligence, Inc. | Method and system for administering multiple messages over a public switched telephone network |
US6714911B2 (en) | 2001-01-25 | 2004-03-30 | Harcourt Assessment, Inc. | Speech transcription and analysis system and method |
US6429033B1 (en) * | 2001-02-20 | 2002-08-06 | Nayna Networks, Inc. | Process for manufacturing mirror devices using semiconductor technology |
US20020133477A1 (en) * | 2001-03-05 | 2002-09-19 | Glenn Abel | Method for profile-based notice and broadcast of multimedia content |
ATE335195T1 (en) * | 2001-05-10 | 2006-08-15 | Koninkl Philips Electronics Nv | BACKGROUND LEARNING OF SPEAKER VOICES |
US6973428B2 (en) | 2001-05-24 | 2005-12-06 | International Business Machines Corporation | System and method for searching, analyzing and displaying text transcripts of speech after imperfect speech recognition |
US6778979B2 (en) | 2001-08-13 | 2004-08-17 | Xerox Corporation | System for automatically generating queries |
US6748350B2 (en) | 2001-09-27 | 2004-06-08 | Intel Corporation | Method to compensate for stress between heat spreader and thermal interface material |
US6708148B2 (en) | 2001-10-12 | 2004-03-16 | Koninklijke Philips Electronics N.V. | Correction device to mark parts of a recognized text |
US20030093580A1 (en) | 2001-11-09 | 2003-05-15 | Koninklijke Philips Electronics N.V. | Method and system for information alerts |
US7165024B2 (en) | 2002-02-22 | 2007-01-16 | Nec Laboratories America, Inc. | Inferring hierarchical descriptions of a set of documents |
US7522910B2 (en) * | 2002-05-31 | 2009-04-21 | Oracle International Corporation | Method and apparatus for controlling data provided to a mobile device |
US7668816B2 (en) | 2002-06-11 | 2010-02-23 | Microsoft Corporation | Dynamically updated quick searches and strategies |
US7131117B2 (en) | 2002-09-04 | 2006-10-31 | Sbc Properties, L.P. | Method and system for automating the analysis of word frequencies |
US6999918B2 (en) | 2002-09-20 | 2006-02-14 | Motorola, Inc. | Method and apparatus to facilitate correlating symbols to sounds |
EP1422692A3 (en) | 2002-11-22 | 2004-07-14 | ScanSoft, Inc. | Automatic insertion of non-verbalized punctuation in speech recognition |
- 2003
- 2003-07-02 US US10/610,696 patent/US20040024585A1/en not_active Abandoned
- 2003-07-02 US US10/611,106 patent/US7337115B2/en active Active
- 2003-07-02 US US10/610,799 patent/US20040199495A1/en not_active Abandoned
- 2003-07-02 US US10/610,532 patent/US20040006481A1/en not_active Abandoned
- 2003-07-02 US US10/610,699 patent/US20040117188A1/en not_active Abandoned
- 2003-07-02 US US10/610,697 patent/US7290207B2/en not_active Expired - Fee Related
- 2003-07-02 US US10/610,574 patent/US20040006748A1/en not_active Abandoned
- 2003-07-02 US US10/610,679 patent/US20040024598A1/en not_active Abandoned
- 2003-07-02 US US10/610,533 patent/US7801838B2/en not_active Expired - Fee Related
- 2003-07-02 US US10/610,684 patent/US20040024582A1/en not_active Abandoned
- 2010
- 2010-08-13 US US12/806,465 patent/US8001066B2/en not_active Expired - Fee Related
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060212830A1 (en) * | 2003-09-09 | 2006-09-21 | Fogg Brian J | Graphical messaging system |
US20230032115A1 (en) * | 2005-02-14 | 2023-02-02 | Thomas M. Majchrowski & Associates, Inc. | Multipurpose media players |
US20070225973A1 (en) * | 2006-03-23 | 2007-09-27 | Childress Rhonda L | Collective Audio Chunk Processing for Streaming Translated Multi-Speaker Conversations |
US20070225967A1 (en) * | 2006-03-23 | 2007-09-27 | Childress Rhonda L | Cadence management of translated multi-speaker conversations using pause marker relationship models |
US7752031B2 (en) * | 2006-03-23 | 2010-07-06 | International Business Machines Corporation | Cadence management of translated multi-speaker conversations using pause marker relationship models |
US20080172219A1 (en) * | 2007-01-17 | 2008-07-17 | Novell, Inc. | Foreign language translator in a document editor |
US20080229914A1 (en) * | 2007-03-19 | 2008-09-25 | Trevor Nathanial | Foot operated transport controller for digital audio workstations |
US20080288239A1 (en) * | 2007-05-15 | 2008-11-20 | Microsoft Corporation | Localization and internationalization of document resources |
US8498867B2 (en) * | 2009-01-15 | 2013-07-30 | K-Nfb Reading Technology, Inc. | Systems and methods for selection and use of multiple characters for document narration |
US20100324904A1 (en) * | 2009-01-15 | 2010-12-23 | K-Nfb Reading Technology, Inc. | Systems and methods for multiple language document narration |
US8498866B2 (en) * | 2009-01-15 | 2013-07-30 | K-Nfb Reading Technology, Inc. | Systems and methods for multiple language document narration |
US20100318364A1 (en) * | 2009-01-15 | 2010-12-16 | K-Nfb Reading Technology, Inc. | Systems and methods for selection and use of multiple characters for document narration |
US8799408B2 (en) * | 2009-08-10 | 2014-08-05 | Sling Media Pvt Ltd | Localization systems and methods |
US20110035467A1 (en) * | 2009-08-10 | 2011-02-10 | Sling Media Pvt Ltd | Localization systems and methods |
US9129605B2 (en) * | 2012-03-30 | 2015-09-08 | Src, Inc. | Automated voice and speech labeling |
US20130262111A1 (en) * | 2012-03-30 | 2013-10-03 | Src, Inc. | Automated voice and speech labeling |
US20130332165A1 (en) * | 2012-06-06 | 2013-12-12 | Qualcomm Incorporated | Method and systems having improved speech recognition |
US9881616B2 (en) * | 2012-06-06 | 2018-01-30 | Qualcomm Incorporated | Method and systems having improved speech recognition |
US9190055B1 (en) * | 2013-03-14 | 2015-11-17 | Amazon Technologies, Inc. | Named entity recognition with personalized models |
CN106104677A (en) * | 2014-03-17 | 2016-11-09 | 谷歌公司 | Visually indicating of the action that the voice being identified is initiated |
US10671251B2 (en) | 2017-12-22 | 2020-06-02 | Arbordale Publishing, LLC | Interactive eReader interface generation based on synchronization of textual and audial descriptors |
US11443646B2 (en) | 2017-12-22 | 2022-09-13 | Fathom Technologies, LLC | E-Reader interface system with audio and highlighting synchronization for digital books |
US11657725B2 (en) | 2017-12-22 | 2023-05-23 | Fathom Technologies, LLC | E-reader interface system with audio and highlighting synchronization for digital books |
Also Published As
Publication number | Publication date |
---|---|
US20040006576A1 (en) | 2004-01-08 |
US7337115B2 (en) | 2008-02-26 |
US20040006737A1 (en) | 2004-01-08 |
US20040117188A1 (en) | 2004-06-17 |
US7290207B2 (en) | 2007-10-30 |
US20040006748A1 (en) | 2004-01-08 |
US20110004576A1 (en) | 2011-01-06 |
US20040199495A1 (en) | 2004-10-07 |
US20040030550A1 (en) | 2004-02-12 |
US20040006481A1 (en) | 2004-01-08 |
US8001066B2 (en) | 2011-08-16 |
US20040024598A1 (en) | 2004-02-05 |
US20040024585A1 (en) | 2004-02-05 |
US7801838B2 (en) | 2010-09-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20040024582A1 (en) | Systems and methods for aiding human translation | |
US20140289596A1 (en) | Systems and methods for facilitating playback of media | |
US7966184B2 (en) | System and method for audible web site navigation | |
US8676868B2 (en) | Macro programming for resources | |
US7506262B2 (en) | User interface for creating viewing and temporally positioning annotations for media content | |
US7006975B1 (en) | Methods and apparatus for referencing and processing audio information | |
EP1481328B1 (en) | User interface and dynamic grammar in a multi-modal synchronization architecture | |
US7593854B2 (en) | Method and system for collecting user-interest information regarding a picture | |
US20050015254A1 (en) | Voice menu system | |
EP2273754A2 (en) | A conversational portal for providing conversational browsing and multimedia broadcast on demand | |
EP1320043A2 (en) | Multi-modal picture | |
US8930308B1 (en) | Methods and systems of associating metadata with media | |
US20150089368A1 (en) | Searching within audio content | |
JP2001125896A (en) | Natural language interactive system | |
JP2010527494A (en) | Multilingual information search | |
JPH1078952A (en) | Voice synthesizing method and device therefor and hypertext control method and controller | |
US20140324858A1 (en) | Information processing apparatus, keyword registration method, and program | |
US20060271365A1 (en) | Methods and apparatus for processing information signals based on content | |
JP3789614B2 (en) | Browser system, voice proxy server, link item reading method, and storage medium storing link item reading program | |
US7216287B2 (en) | Personal voice portal service | |
JP2009042968A (en) | Information selection system, information selection method, and program for information selection | |
JP2006526207A (en) | Media object search method | |
US7353175B2 (en) | Apparatus, method, and program for speech synthesis with capability of providing word meaning immediately upon request by a user | |
JP3963349B2 (en) | Interactive program presentation apparatus and interactive program presentation program | |
WO2007029204A2 (en) | Method, device and system for providing search results |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: BBNT SOLUTIONS LLC, TEXAS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SHEPARD, SCOTT;KUBALA, FRANCIS;REEL/FRAME:014265/0453;SIGNING DATES FROM 20030617 TO 20030618
|
AS | Assignment |
Owner name: BBNT SOLUTIONS LLC, MASSACHUSETTS
Free format text: JOINT ASSIGNMENT;ASSIGNOR:BBNT SOLUTIONS LLC;REEL/FRAME:014601/0448
Effective date: 20040503
Owner name: VERIZON CORPORATE SERVICES GROUP INC., NEW YORK
Free format text: JOINT ASSIGNMENT;ASSIGNOR:BBNT SOLUTIONS LLC;REEL/FRAME:014601/0448
Effective date: 20040503
|
AS | Assignment |
Owner name: BBNT SOLUTIONS LLC, MASSACHUSETTS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BBNT SOLUTIONS LLC;REEL/FRAME:014634/0525
Effective date: 20040503
Owner name: VERIZON CORPORATE SERVICES GROUP INC., NEW YORK
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BBNT SOLUTIONS LLC;REEL/FRAME:014634/0525
Effective date: 20040503
|
AS | Assignment |
Owner name: FLEET NATIONAL BANK, AS AGENT, MASSACHUSETTS
Free format text: PATENT AND TRADEMARK SECURITY AGREEMENT;ASSIGNOR:BBNT SOLUTIONS LLC;REEL/FRAME:014718/0294
Effective date: 20040326
|
AS | Assignment |
Owner name: BBN TECHNOLOGIES CORP., MASSACHUSETTS
Free format text: MERGER;ASSIGNOR:BBNT SOLUTIONS LLC;REEL/FRAME:017751/0049
Effective date: 20060103
|
AS | Assignment |
Owner name: BBN TECHNOLOGIES CORP. (AS SUCCESSOR BY MERGER TO
Free format text: RELEASE OF SECURITY INTEREST;ASSIGNOR:BANK OF AMERICA, N.A. (SUCCESSOR BY MERGER TO FLEET NATIONAL BANK);REEL/FRAME:023427/0436
Effective date: 20091026
|
AS | Assignment |
Owner name: RAYTHEON BBN TECHNOLOGIES CORP., MASSACHUSETTS
Free format text: CHANGE OF NAME;ASSIGNOR:BBN TECHNOLOGIES CORP.;REEL/FRAME:024523/0625
Effective date: 20091027
|
AS | Assignment |
Owner name: VERIZON PATENT AND LICENSING INC., NEW JERSEY
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:VERIZON CORPORATE SERVICES GROUP INC.;REEL/FRAME:033421/0403
Effective date: 20140409
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |