CN104321812A - Three-dimensional sound compression and over-the-air-transmission during a call - Google Patents
- Publication number
- CN104321812A CN104321812A CN201380026946.9A CN201380026946A CN104321812A CN 104321812 A CN104321812 A CN 104321812A CN 201380026946 A CN201380026946 A CN 201380026946A CN 104321812 A CN104321812 A CN 104321812A
- Authority
- CN
- China
- Prior art keywords
- signal
- radio communication
- circuit
- codec
- communication device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/008—Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/04—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
- G10L19/16—Vocoder architecture
- G10L19/167—Audio streaming, i.e. formatting and decoding of an encoded audio signal representation into a data stream for transmission or storage purposes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04B—TRANSMISSION
- H04B1/00—Details of transmission systems, not covered by a single one of groups H04B3/00 - H04B13/00; Details of transmission systems not characterised by the medium used for transmission
- H04B1/06—Receivers
- H04B1/16—Circuits
- H04B1/20—Circuits for coupling gramophone pick-up, recorder output, or microphone to receiver
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04B—TRANSMISSION
- H04B7/00—Radio transmission systems, i.e. using radiation field
- H04B7/24—Radio transmission systems, i.e. using radiation field for communication between two or more posts
- H04B7/26—Radio transmission systems, i.e. using radiation field for communication between two or more posts at least one of which is mobile
- H04B7/2662—Arrangements for Wireless System Synchronisation
- H04B7/2671—Arrangements for Wireless Time-Division Multiple Access [TDMA] System Synchronisation
- H04B7/2678—Time synchronisation
- H04B7/2687—Inter base stations synchronisation
- H04B7/2696—Over the air autonomous synchronisation, e.g. by monitoring network activity
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S3/00—Systems employing more than two channels, e.g. quadraphonic
- H04S3/006—Systems employing more than two channels, e.g. quadraphonic in which a plurality of audio signals are transformed in a combination of audio signals and modulated signals, e.g. CD-4 systems
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Signal Processing (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Computational Linguistics (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Health & Medical Sciences (AREA)
- Mathematical Physics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Circuit For Audible Band Transducer (AREA)
- Obtaining Desirable Characteristics In Audible-Bandwidth Transducers (AREA)
- Stereophonic Arrangements (AREA)
- Reduction Or Emphasis Of Bandwidth Of Signals (AREA)
Abstract
A method for encoding three-dimensional audio by a wireless communication device is disclosed. The wireless communication device detects an indication of a plurality of localizable audio sources. The wireless communication device also records a plurality of audio signals associated with the plurality of localizable audio sources. The wireless communication device also encodes the plurality of audio signals.
Description
Related Applications
This application claims priority to U.S. Provisional Patent Application No. 61/651,185, filed May 24, 2012, for "THREE-DIMENSIONAL SOUND COMPRESSION AND OVER-THE-AIR TRANSMISSION DURING A CALL."
Technical Field
The present disclosure relates to audio signal processing. More particularly, it relates to three-dimensional sound compression and over-the-air transmission during a call.
Background
As technology has advanced, network speeds and storage capacities have risen appreciably, supporting not only text but also multimedia data. The ability to capture, compress and transmit three-dimensional (3-D) audio is not currently available in real-time cellular communication systems. One of the challenges is capturing the three-dimensional audio signal in the first place. Benefits may therefore be realized by capturing and reproducing three-dimensional audio to provide a more realistic and immersive auditory experience.
Summary of the Invention
A method for encoding three-dimensional audio by a wireless communication device is disclosed. The method includes determining an indication of a spatial direction of a plurality of localizable audio sources. The method also includes recording a plurality of audio signals associated with the plurality of localizable audio sources. The method further includes encoding the plurality of audio signals. The indication of the spatial direction of the localizable audio sources may be based on received input.
The method may include determining a number of localizable audio sources. The method may also include estimating an arrival direction of each localizable audio source. The method may include encoding a multichannel signal according to a three-dimensional audio encoding scheme.
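The arrival-direction estimation above is not tied to a particular algorithm in the text. As one illustrative sketch (all function and parameter names are assumptions), a direction of arrival for a single dominant source can be estimated from a free-field microphone pair by locating the cross-correlation peak between the two channels:

```python
import numpy as np

def estimate_doa(x1, x2, fs, mic_distance, c=343.0):
    """Estimate a direction of arrival (degrees, 0 = broadside) for a
    microphone pair from the cross-correlation peak between channels."""
    corr = np.correlate(x1, x2, mode="full")
    lag = int(np.argmax(corr)) - (len(x2) - 1)  # positive: x1 lags x2
    tdoa = lag / fs                              # time difference (s)
    # Clamp to the physically valid range before taking the arcsine.
    sin_theta = np.clip(tdoa * c / mic_distance, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))
```

For a source at broadside the lag is zero and the estimate is 0 degrees; the sign convention simply reflects which channel leads.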
The method may include applying a beam in a first end-fire direction to obtain a first filtered signal, and applying a beam in a second end-fire direction to obtain a second filtered signal. The method may combine the first filtered signal with a delayed version of the second filtered signal. Each of the first and second filtered signals may have at least two channels, and one filtered signal may be delayed relative to the other. The method may delay a first channel of the first filtered signal relative to a second channel of the first filtered signal, and delay a first channel of the second filtered signal relative to a second channel of the second filtered signal. The method may also delay a first channel of a combined signal relative to a second channel of the combined signal.
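A minimal sketch of the end-fire beam combination described above, assuming simple delay-and-sum beams on a two-microphone pair (the delay value and all names are illustrative, not prescribed by the text):

```python
import numpy as np

def _delay(x, d):
    """Delay a signal by d samples (zero-padded at the start)."""
    if d <= 0:
        return x.copy()
    return np.concatenate([np.zeros(d), x[: len(x) - d]])

def endfire_beams_to_stereo(mic1, mic2, delay_samples):
    """Form two opposing end-fire delay-and-sum beams from a microphone
    pair, then combine each beam with a delayed copy of the other to
    produce a two-channel (stereo) output."""
    beam_a = mic1 + _delay(mic2, delay_samples)  # favors one end-fire direction
    beam_b = mic2 + _delay(mic1, delay_samples)  # favors the opposite direction
    left = beam_a + _delay(beam_b, delay_samples)
    right = _delay(beam_a, delay_samples) + beam_b
    return left, right
```

With a zero delay the two output channels coincide; a nonzero delay separates the two end-fire directions across the stereo image.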
The method may apply a filter having a beam in a first direction to a signal produced by a first pair of microphones to obtain a first spatially filtered signal, and apply a filter having a beam in a second direction to a signal produced by a second pair of microphones to obtain a second spatially filtered signal. The method may then combine the first and second spatially filtered signals to obtain an output signal.
For each of a plurality of microphones in an array, the method may include recording a corresponding input channel. For each of a plurality of look directions, the method may include applying a corresponding multichannel filter to the plurality of recorded input channels to obtain a corresponding output channel. Each of the multichannel filters may apply a beam in its look direction and a null beam in the other look directions. The method may include processing the plurality of output channels to produce a binaural recording. The method may also include applying a beam only to frequencies between a low threshold and a high threshold, at least one of which is based on the distance between microphones.
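The low and high frequency thresholds mentioned above can be interpreted, as one assumption, with the high threshold set at the spatial-aliasing limit c/(2d) for microphones spaced d apart (the low cutoff here is an arbitrary illustrative value). A sketch that keeps the beamformed signal only inside that band:

```python
import numpy as np

def bandlimited_beam(beamformed, fallback, fs, mic_distance,
                     c=343.0, low_hz=200.0):
    """Keep the beamformed signal only between an assumed low threshold
    and the spatial-aliasing frequency c/(2*d) of the microphone pair;
    outside that band, use the unprocessed fallback signal."""
    n = len(beamformed)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    high_hz = c / (2.0 * mic_distance)  # aliasing limit for spacing d
    in_band = (freqs >= low_hz) & (freqs <= high_hz)
    spec = np.where(in_band, np.fft.rfft(beamformed), np.fft.rfft(fallback))
    return np.fft.irfft(spec, n=n)
```

With a 2 cm spacing the aliasing limit works out to roughly 8.6 kHz, so beamforming would be applied over most of a wideband speech signal.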
A method for selecting a codec by a wireless communication device is also disclosed. The method includes determining an energy profile for each of a plurality of audio signals. The method also includes displaying the energy profile of each of the plurality of audio signals. The method also includes detecting an input that selects one of the energy profiles, and associating a codec with the input. The method further includes compressing the plurality of audio signals based on the codec to produce packets. The method may include transmitting the packets over the air. The method may also include transmitting a channel identification.
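A sketch of the codec-selection flow described above, with energy profiles computed as per-frame energies and a hypothetical policy that assigns a higher-quality codec to the signal whose profile the user selected (the codec names and the assignment policy are assumptions, not taken from the text):

```python
import numpy as np

def energy_profiles(signals, frame_len=256):
    """Per-frame energies of each recorded audio signal (the curves
    that the display would render for the user)."""
    profiles = []
    for s in signals:
        n = len(s) // frame_len
        frames = np.asarray(s)[: n * frame_len].reshape(n, frame_len)
        profiles.append(np.sum(frames**2, axis=1))
    return profiles

def associate_codec(selected_index, n_signals):
    """Map the index of the user-selected energy profile to a codec per
    signal: the chosen signal gets a higher-quality codec and the
    others a narrower one (a hypothetical assignment policy)."""
    return ["wideband" if i == selected_index else "narrowband"
            for i in range(n_signals)]
```

The detected touch input would supply `selected_index`; each signal would then be compressed with its associated codec to produce the packets.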
A method for increasing a bit allocation by a wireless communication device is also disclosed. The method includes determining an energy profile for each of a plurality of audio signals. The method also includes displaying the energy profile of each of the plurality of audio signals. The method also includes detecting an input that selects one of the energy profiles, and associating a codec with the input. The method further includes increasing, based on the input, the bit allocation of the codec used to compress the audio signal. Compression of the audio signals may produce four packets that are transmitted over the air.
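The bit-allocation increase described above can be sketched as redistributing a fixed bit budget among four channel codecs, weighting the user-selected channel more heavily (the weighting scheme is an assumption; the text does not fix exact allocations):

```python
def reallocate_bits(total_bits, selected_index, n_channels=4, boost=2.0):
    """Split a fixed bit budget across the channel codecs, giving the
    user-selected channel `boost` times the weight of the others while
    keeping the total constant."""
    weights = [boost if i == selected_index else 1.0
               for i in range(n_channels)]
    scale = sum(weights)
    alloc = [int(round(total_bits * w / scale)) for w in weights]
    alloc[selected_index] += total_bits - sum(alloc)  # absorb rounding
    return alloc
```

Keeping the total constant means the selected channel gains quality at the expense of the others, while the over-the-air packet count stays at four.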
A wireless communication device for encoding three-dimensional audio is also described. The wireless communication device includes spatial-direction circuitry that detects an indication of a spatial direction of a plurality of localizable audio sources. The wireless communication device also includes recording circuitry, coupled to the spatial-direction circuitry, that records a plurality of audio signals associated with the plurality of localizable audio sources. The wireless communication device further includes an encoder, coupled to the recording circuitry, that encodes the plurality of audio signals.
A wireless communication device for selecting a codec is also described. The wireless communication device includes energy-profile circuitry that determines an energy profile for each of a plurality of audio signals, and a display, coupled to the energy-profile circuitry, that displays the energy profile of each of the plurality of audio signals. The wireless communication device includes input-detection circuitry, coupled to the display, that detects an input selecting one of the energy profiles, and association circuitry, coupled to the input-detection circuitry, that associates a codec with the input. The wireless communication device further includes compression circuitry, coupled to the association circuitry, that compresses the plurality of audio signals based on the codec to produce packets.
A wireless communication device for increasing a bit allocation is also described. The wireless communication device includes energy-profile circuitry that determines an energy profile for each of a plurality of audio signals, and a display, coupled to the energy-profile circuitry, that displays the energy profile of each of the plurality of audio signals. The wireless communication device includes input-detection circuitry, coupled to the display, that detects an input selecting one of the energy profiles, and association circuitry, coupled to the input-detection circuitry, that associates a codec with the input. The wireless communication device further includes bit-allocation circuitry, coupled to the association circuitry, that increases, based on the input, the bit allocation of the codec used to compress the audio signal.
A computer program product for encoding three-dimensional audio is also described. The computer program product includes a non-transitory tangible computer-readable medium having instructions. The instructions include code for causing a wireless communication device to detect an indication of a spatial direction of a plurality of localizable audio sources, code for causing the wireless communication device to record a plurality of audio signals associated with the plurality of localizable audio sources, and code for causing the wireless communication device to encode the plurality of audio signals.
A computer program product for selecting a codec is also described. The computer program product includes a non-transitory tangible computer-readable medium having instructions. The instructions include code for causing a wireless communication device to determine an energy profile for each of a plurality of audio signals, code for causing the wireless communication device to display the energy profile of each of the plurality of audio signals, code for causing the wireless communication device to detect an input selecting one of the energy profiles, code for causing the wireless communication device to associate a codec with the input, and code for causing the wireless communication device to compress the plurality of audio signals based on the codec to produce packets.
A computer program product for increasing a bit allocation is also described. The computer program product includes a non-transitory tangible computer-readable medium having instructions. The instructions include code for causing a wireless communication device to determine an energy profile for each of a plurality of audio signals, code for causing the wireless communication device to display the energy profile of each of the plurality of audio signals, code for causing the wireless communication device to detect an input selecting one of the energy profiles, code for causing the wireless communication device to associate a codec with the input, and code for causing the wireless communication device to increase, based on the input, the bit allocation of the codec used to compress the audio signal.
Brief Description of the Drawings
Fig. 1 illustrates microphones placed on a representative cellular telephone handset;
Fig. 2A is a flow diagram of a method for microphone/beamformer selection based on user interface input;
Fig. 2B illustrates regions of spatial selectivity for a microphone pair;
Fig. 3 illustrates a user interface for selecting a recording direction in two dimensions;
Fig. 4 illustrates possible spatial sectors defined around a headset configured to perform active noise cancellation (ANC);
Fig. 5 illustrates a three-microphone arrangement;
Fig. 6 illustrates omnidirectional and first-order capture for spatial coding using a four-microphone arrangement;
Fig. 7 illustrates front and rear views of one example of a portable communication device;
Fig. 8 illustrates a case of recording a source signal arriving from a broadside direction;
Fig. 9 illustrates another case of recording a source signal arriving from a broadside direction;
Fig. 10 illustrates a case of combining end-fire beams;
Fig. 11 illustrates examples of plots of beams in the front-center, front-left, front-right, rear-left and rear-right directions;
Fig. 12 illustrates an example of a process for obtaining a signal for the right-side spatial direction;
Fig. 13 illustrates a null-beamforming approach using two-channel blind source separation with a three-microphone array;
Fig. 14 illustrates an example in which beams in the front and right directions are combined to obtain a result in the front-right direction;
Fig. 15 illustrates examples of null beams for the approach illustrated in Fig. 13;
Fig. 16 illustrates a null-beamforming approach using four-channel blind source separation with a four-microphone array;
Fig. 17 illustrates examples of beam patterns for a set of four filters for the corner directions FL, FR, BL and BR;
Fig. 18 illustrates examples of beam patterns of converged filters obtained by independent vector analysis trained on mobile speaker data;
Fig. 19 illustrates examples of beam patterns of converged filters obtained by independent vector analysis trained on refined mobile speaker data;
Fig. 20 is a flow diagram of a method for combining end-fire beams;
Fig. 21 is a flow diagram of a method for a general dual-pair case;
Fig. 22 illustrates an implementation of the method of Fig. 21 for a three-microphone case;
Fig. 23 is a flow diagram of a method using four-channel blind source separation with a four-microphone array;
Fig. 24 illustrates a partial routing diagram for a blind source separation filter bank;
Fig. 25 illustrates a routing diagram for a 2×2 filter bank;
Fig. 26A is a block diagram of a multi-microphone audio sensing device according to a general configuration;
Fig. 26B is a block diagram of a communication device;
Fig. 27A is a block diagram of a microphone array;
Fig. 27B is a block diagram of another microphone array;
Fig. 28 is a chart of the different frequency ranges and frequency bands over which different speech codecs operate;
Figs. 29A, 29B and 29C illustrate possible schemes for a first configuration using four non-narrowband codecs for each signal type that may be compressed separately, i.e., full-band (FB), super-wideband (SWB) and wideband (WB);
Fig. 30A illustrates a possible scheme for a second configuration in which two of the codecs carry averaged audio signals;
Fig. 30B illustrates a possible scheme for the second configuration in which one or more of the codecs carry averaged audio signals;
Fig. 31A illustrates a possible scheme for a third configuration in which one or more of the codecs may average one or more audio signals;
Fig. 31B illustrates a possible scheme for the third configuration in which one or more of the non-narrowband codecs carry averaged audio signals;
Fig. 32 illustrates four narrowband codecs;
Fig. 33 is a flow diagram of an end-to-end encoder/decoder system using four non-narrowband codecs according to any of the schemes of Fig. 29A, 29B or 29C;
Fig. 34 is a flow diagram of an end-to-end encoder/decoder system using four codecs (e.g., from either Fig. 30A or Fig. 30B);
Fig. 35 is a flow diagram of an end-to-end encoder/decoder system using four codecs (e.g., from either Fig. 31A or Fig. 31B);
Fig. 36 is a flow diagram of another method of receiving audio signal packets in which four non-narrowband codecs used for encoding (e.g., from Fig. 29A, 29B or 29C) are combined with any of four wideband or narrowband codecs used for decoding;
Fig. 37 is a flow diagram of an end-to-end encoder/decoder system in which, based on a user selection associated with a visualization of the energy at the four corners of the sound field, a different bit allocation is used during compression of one or two of the signals, while four packets are still transmitted over the air;
Fig. 38 is a flow diagram of an end-to-end encoder/decoder system in which, based on a user selection associated with a visualization of the energy at the four corners of the sound field, a single audio signal is compressed and transmitted;
Fig. 39 is a block diagram of an implementation of a wireless communication device illustrating four configurations of codec combinations;
Fig. 40 is a block diagram of an implementation of a wireless communication device illustrating a configuration in which the four wideband codecs of Fig. 29 are used for compression;
Fig. 41 is a block diagram of an implementation of a communication device illustrating four configurations of codec combinations in which an optional codec pre-filter may be used;
Fig. 42 is a block diagram of an implementation of a communication device illustrating four configurations of codec combinations in which optional filtering may occur as part of a filter bank array;
Fig. 43 is a block diagram of an implementation of a communication device illustrating four configurations of codec combinations in which sound source data from an auditory scene may be mixed with data from one or more filters before being encoded with one of the codec configurations;
Fig. 44 is a flow diagram of a method for encoding a multi-directional audio signal using an integrated codec;
Fig. 45 is a flow diagram of a method for audio signal processing;
Fig. 46 is a flow diagram of a method for encoding three-dimensional audio;
Fig. 47 is a flow diagram of a method for selecting a codec;
Fig. 48 is a flow diagram of a method for increasing a bit allocation; and
Fig. 49 illustrates certain components that may be included within a wireless communication device.
Detailed Description
Examples of communication devices include cellular telephone base stations or nodes, access points, wireless gateways and wireless routers. A communication device may operate in accordance with certain industry standards, such as the Third Generation Partnership Project (3GPP) Long Term Evolution (LTE) standard. Other examples of standards that a communication device may comply with include Institute of Electrical and Electronics Engineers (IEEE) 802.11a, 802.11b, 802.11g, 802.11n and/or 802.11ac (e.g., Wireless Fidelity or "Wi-Fi") standards, the IEEE 802.16 (e.g., Worldwide Interoperability for Microwave Access or "WiMAX") standard and others. In some standards, a communication device may be referred to as a Node B, evolved Node B, etc. While some of the systems and methods disclosed herein may be described in terms of one or more standards, this should not limit the scope of the disclosure, as the systems and methods may be applicable to many systems and/or standards.
Some communication devices (e.g., access terminals, client devices, client stations, etc.) may communicate wirelessly with other communication devices. Some communication devices (e.g., wireless communication devices) may be referred to as mobile devices, mobile stations, subscriber stations, clients, client stations, user equipment (UEs), remote stations, access terminals, mobile terminals, terminals, user terminals, subscriber units, etc. Additional examples of communication devices include laptop or desktop computers, cellular phones, smartphones, wireless modems, e-readers, tablet devices, gaming systems, etc. Some of these communication devices may operate in accordance with one or more of the industry standards described above. Thus, the general term "communication device" may include communication devices described with varying nomenclature according to industry standards (e.g., access terminal, user equipment, remote terminal, access point, base station, Node B, evolved Node B, etc.).
Some communication devices may provide access to a communication network. Examples of communication networks include, but are not limited to, a telephone network (e.g., a "land-line" network such as the Public Switched Telephone Network (PSTN) or a cellular telephone network), the Internet, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), etc.
Unless expressly limited by its context, the term "signal" is used herein to indicate any of its ordinary meanings, including a state of a memory location (or set of memory locations) as expressed on a wire, bus or other transmission medium. Unless expressly limited by its context, the term "generating" is used herein to indicate any of its ordinary meanings, such as computing or otherwise producing. Unless expressly limited by its context, the term "calculating" is used herein to indicate any of its ordinary meanings, such as computing, evaluating, smoothing and/or selecting from a plurality of values. Unless expressly limited by its context, the term "obtaining" is used to indicate any of its ordinary meanings, such as calculating, deriving, receiving (e.g., from an external device) and/or retrieving (e.g., from an array of storage elements). Unless expressly limited by its context, the term "selecting" is used to indicate any of its ordinary meanings, such as identifying, indicating, applying and/or using at least one, and fewer than all, of a set of two or more. Where the term "comprising" is used in the present description and claims, it does not exclude other elements or operations. The term "based on" (as in "A is based on B") is used to indicate any of its ordinary meanings, including the cases (i) "derived from" (e.g., "B is a precursor of A"), (ii) "based on at least" (e.g., "A is based on at least B") and, if appropriate in the particular context, (iii) "equal to" (e.g., "A is equal to B"). Similarly, the term "in response to" is used to indicate any of its ordinary meanings, including "in response to at least."
References to a "location" of a microphone of a multi-microphone audio sensing device indicate the location of the center of an acoustically sensitive face of the microphone, unless otherwise indicated by the context. The term "channel" is used at times to indicate a signal path and at other times, according to the particular context, to indicate a signal carried by such a path. Unless otherwise indicated, the term "series" is used to indicate a sequence of two or more items. The term "logarithm" is used to indicate the base-ten logarithm, although extensions of such an operation to other bases are within the scope of this disclosure. The term "frequency component" is used to indicate one among a set of frequencies or frequency bands of a signal, such as a sample of a frequency-domain representation of the signal (e.g., as produced by a fast Fourier transform) or a subband of the signal (e.g., a Bark-scale or mel-scale subband).
Unless indicated otherwise, any disclosure of an operation of an apparatus having a particular feature is also expressly intended to disclose a method having an analogous feature (and vice versa), and any disclosure of an operation of an apparatus according to a particular configuration is also expressly intended to disclose a method according to an analogous configuration (and vice versa). The term "configuration" may be used in reference to a method, apparatus and/or system as indicated by its particular context. The terms "method," "process," "procedure" and "technique" are used generically and interchangeably unless otherwise indicated by the particular context. The terms "apparatus" and "device" are also used generically and interchangeably unless otherwise indicated by the particular context. The terms "element" and "module" are typically used to indicate a portion of a greater configuration. Unless expressly limited by its context, the term "system" is used herein to indicate any of its ordinary meanings, including "a group of elements that interact to serve a common purpose." Any incorporation by reference of a portion of a document shall also be understood to incorporate definitions of terms or variables that are referenced within that portion, where such definitions appear elsewhere in the document, as well as any figures referenced in the incorporated portion.
A method as described herein may be configured to process the captured signal as a series of segments. Typical segment lengths range from about five or ten milliseconds to about forty or fifty milliseconds, and the segments may be overlapping (e.g., with adjacent segments overlapping by 25% or 50%) or non-overlapping. In one particular example, the signal is divided into a series of non-overlapping segments or "frames," each having a length of ten milliseconds. A segment as processed by such a method may also be a segment (i.e., a "subframe") of a larger segment as processed by a different operation, or vice versa. We are now experiencing the rapid exchange of individual information via rapidly growing social networking services such as Facebook, Twitter, etc. At the same time, we also see appreciable growth in network speed and storage, which supports not only text but also multimedia data. In this environment, an important need is recognized for capturing and reproducing three-dimensional (3D) audio for more realistic and immersive exchange of individual aural experiences. The ability to capture, compress and transmit 3D audio in a real-time cellular communication system is not currently available. One of the challenges is capturing the 3D audio signal. U.S. patent application Ser. No. 13/280,303, entitled "THREE-DIMENSIONAL SOUND CAPTURING AND REPRODUCING WITH MULTI-MICROPHONES" (attorney docket 102978U2), filed October 24, 2011, which may also be used herein, describes ways in which 3D audio information can be captured and recorded. The present application, however, extends that previously disclosed capability by describing ways in which 3D audio may be combined with the speech codecs found in real-time cellular communication systems.
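As a sketch of the segmentation just described, the helper below splits a signal into fixed-length frames with optional overlap. It is an illustrative utility, not part of the patent; the function name and defaults (10-ms frames, as in the text's example) are assumptions.

```python
import numpy as np

def frame_signal(x, fs, frame_ms=10.0, overlap=0.0):
    """Split a 1-D signal into fixed-length frames.

    frame_ms: frame length in milliseconds (10 ms as in the text's example).
    overlap:  fraction of overlap between adjacent frames (0.0, 0.25, 0.5).
    Assumes len(x) >= one frame; trailing samples that do not fill a whole
    frame are dropped.
    """
    frame_len = int(round(fs * frame_ms / 1000.0))
    hop = max(1, int(round(frame_len * (1.0 - overlap))))
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop:i * hop + frame_len] for i in range(n_frames)])

fs = 48000
x = np.arange(fs)                              # one second of dummy samples
frames = frame_signal(x, fs)                   # 10-ms non-overlapping frames
frames_50 = frame_signal(x, fs, overlap=0.5)   # 50% overlap
```

A 10-ms frame at 48 kHz is 480 samples, so one second of signal yields 100 non-overlapping frames or 199 half-overlapping frames.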
First, the capture of 3D audio is described. In some implementations, audible information may be recorded. The audible information described herein may also be compressed by one or more separate speech codecs and transmitted over one or more over-the-air channels.
Figure 1 illustrates three different views of a wireless communication device 102 having a configurable microphone 104a-e array geometry for different sound source directions. The wireless communication device 102 may include a receiver 108 and one or more loudspeakers 110a-b. Depending on the use case, different combinations (e.g., pairs) of the microphones 104a-e of the device 102 may be selected to support spatially selective audio recording in different source directions. For example, in a camcorder scenario (e.g., where the camera lens 106 is on the back of the wireless communication device 102), front-back microphone 104a-e pairs (e.g., the first microphone 104a and the fourth microphone 104d, the first microphone 104a and the fifth microphone 104e, or the third microphone 104c and the fourth microphone 104d) may be used to record the front and back directions (i.e., to steer beams toward and away from the camera lens 106), with left and right direction preferences configurable manually or automatically. For sound recording in directions orthogonal to the front-back axis, a microphone 104a-e pair (e.g., the first microphone 104a and the second microphone 104b) may be another option. Furthermore, the configurable microphone 104a-e array geometry may also be used to compress and transmit 3D audio.
Different beamformer databanks may be computed offline for various microphone 104a-e combinations for a given range of design methods (i.e., minimum variance distortionless response (MVDR), linearly constrained minimum variance (LCMV), phased arrays). During use, a desired one of these beamformers may be selected via a menu in the user interface, depending on current use-case requirements.
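A minimal sketch of how one entry of such an offline beamformer databank could be computed, shown here only for the MVDR design (one of the three design methods the text names). The identity noise covariance, mic coordinates, and databank keying are assumptions for illustration, not the patent's design.

```python
import numpy as np

def mvdr_weights(freq_hz, mic_pos, look_dir, noise_cov=None, c=343.0):
    """Narrowband MVDR weights: w = R^-1 d / (d^H R^-1 d).

    mic_pos:   (M, 3) microphone coordinates in meters.
    look_dir:  vector pointing toward the desired source.
    noise_cov: (M, M) noise covariance R; identity (delay-and-sum) if None.
    """
    look = np.asarray(look_dir, float)
    look = look / np.linalg.norm(look)
    # steering vector: relative plane-wave phase at each mic for look_dir
    delays = mic_pos @ look / c
    d = np.exp(-2j * np.pi * freq_hz * delays)
    R = np.eye(len(mic_pos)) if noise_cov is None else noise_cov
    Rinv_d = np.linalg.solve(R, d)
    return Rinv_d / (np.conj(d) @ Rinv_d)

# offline databank: weights per (direction label, frequency bin) for one pair
mics = np.array([[0.0, 0.0, 0.0], [0.0, 0.10, -0.01]])   # a front-back pair
freqs = np.arange(1, 9) * 1000.0
databank = {("front", f): mvdr_weights(f, mics, [0.0, -1.0, 0.0]) for f in freqs}
```

At run time, the device would only look up the precomputed weights for the selected direction rather than solve for them online.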
Figure 2A illustrates a conceptual flow diagram of such a method 200. First, the wireless communication device 102 may obtain 201 one or more preferred sound capture directions (e.g., as selected automatically and/or via a user interface). The wireless communication device 102 may then select 203 a combination of a beamformer and a microphone array (e.g., pair) that provides the specified directivity. The specified directivity may also be used in combination with one or more speech codecs.
Figure 2B illustrates regions of spatial selectivity for a pair of microphones 204a-b. For example, a first spatial region 205a may represent the region from which audio is focused by applying endfire beamforming using the first microphone 204a and the second microphone 204b. Similarly, a second spatial region 205b may represent the region from which audio is focused by applying endfire beamforming using the second microphone 204b and the first microphone 204a.
Figure 3 illustrates an example of a user interface 312 of a wireless communication device 302. As noted above, in some implementations a recording direction may be selected via the user interface 312. For example, the user interface 312 may display one or more recording directions, and the user may select a recording direction via the user interface 312. In some examples, the user interface 312 may also be used to select a particular direction whose associated audio information the user wishes to compress with more bits. In some implementations, the wireless communication device 302 may include a receiver 308, one or more loudspeakers 310a-b and one or more microphones 304a-c.
Figure 4 illustrates a related example of a stereo headset 414a-b that may include three microphones 404a-c. For example, the stereo headset 414a-b may include a center microphone 404a, a left microphone 404b and a right microphone 404c. The microphones 404a-c may support applications such as voice capture and/or active noise cancellation (ANC). For such applications, different sectors 416a-d around the head (i.e., a back sector 416a, a left sector 416b, a right sector 416c and a front sector 416d) may be defined for recording using this three-microphone 404a-c configuration (Figure 4 uses omnidirectional microphones). Similarly, this use case may be used to compress and transmit 3D audio.
Three-dimensional audio capture may also be performed using a dedicated microphone setup (e.g., the three-microphone 504a-c arrangement shown in Figure 5). Such an arrangement may be connected to a recording device 520 via a cord 518 or wirelessly. The recording device 520 may include apparatus as described herein for detecting the orientation of the device 520 and selecting a pair among the microphones 504a-c (i.e., among the center microphone 504a, the left microphone 504b and the right microphone 504c) according to a selected audio recording direction. In an alternative arrangement, the center microphone 504a may be located on the recording device 520. Similarly, this use case may be used to compress and transmit 3D audio.
It is generally assumed that the far-end user listens to the recorded spatial sound using a stereo headset (e.g., an adaptive noise cancellation, or ANC, headset). In other applications, however, a multi-loudspeaker array capable of reproducing more than two spatial directions may be available at the far end. To support such use cases, it may be desirable to enable more than one microphone/beamformer combination simultaneously during recording or capture of the 3D audio signal, together with compression and transmission of the 3D audio.
A multi-microphone array may be used together with a spatially selective filter to produce a monophonic sound for each of one or more source directions. Such an array may also be used, however, to support spatial audio encoding in two or three dimensions. Examples of spatial audio encoding methods that may be supported with a multi-microphone array as described herein include: 5.1 surround, 7.1 surround, Dolby Surround, Dolby Pro-Logic or any other phase-amplitude matrix stereo format; Dolby Digital, DTS or any discrete multichannel format; and wavefield synthesis. One example of five-channel encoding includes left, right, center, left surround and right surround channels.
Figure 6 illustrates an arrangement of omnidirectional microphones 604a-d that uses a four-microphone 604a-d setup for an approximate first-order capture for spatial coding. Examples of spatial audio encoding methods that may be supported with a multi-microphone 604a-d array as described herein may also include methods originally intended for use with special microphones 604a-d, such as the Ambisonic B format or a higher-order Ambisonic format. The processed multichannel outputs of an Ambisonic encoding scheme, for example, may include a three-dimensional Taylor expansion about the measurement point, which can be approximated at least up to first order using a three-dimensionally localized microphone array as depicted in Figure 6. With more microphones, the order of the approximation may be increased. In this example, the second microphone 604b may be separated from the first microphone 604a by a distance Δz in the z direction, the third microphone 604c may be separated from the first microphone 604a by a distance Δy in the y direction, and the fourth microphone 604d may be separated from the first microphone 604a by a distance Δx in the x direction.
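The first-order approximation with the Figure 6 geometry can be sketched as follows. This is an illustrative finite-difference construction only: the averaging/differencing scheme and its scaling are assumptions, and the frequency-dependent equalization a real B-format encoder applies to the gradient components is omitted.

```python
import numpy as np

def bformat_first_order(p1, p2, p3, p4):
    """Approximate first-order B-format from four omni mics (Fig. 6 layout).

    p1: omni mic at the reference point; p2, p3, p4: omni mics offset from
    p1 by small distances along z, y and x respectively. The directional
    components are finite-difference pressure gradients (unequalized).
    """
    w = 0.25 * (p1 + p2 + p3 + p4)   # omnidirectional (pressure) component
    x = p4 - p1                      # gradient along x (Δx pair)
    y = p3 - p1                      # gradient along y (Δy pair)
    z = p2 - p1                      # gradient along z (Δz pair)
    return w, x, y, z
```

For a wave propagating along x (identical pressure at p1, p2, p3, with p4 slightly delayed), only the x component is nonzero, matching the figure-of-eight pickup a first-order encoder is meant to approximate.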
To convey an immersive sound experience to the user, surround sound recording may be performed independently or in conjunction with videotaping, and may use a separate microphone setup with omnidirectional microphones 604a-d. In that case, one or more of the omnidirectional microphones 604a-d may be clipped on separately. The present disclosure presents an alternative based on multiple omnidirectional microphones 604a-d combined with spatial filtering. In an example of this configuration, one or more omnidirectional microphones 604a-d embedded on a smartphone or tablet computer can support multiple sound recording applications. For example, two microphones 604a-d may be used for wide stereo, and at least three omnidirectional microphones 604a-d with suitable inter-microphone 604a-d axes may be used for surround sound, to record multiple sound channels on a smartphone or tablet device. These channels may then be processed in pairs, or filtered all at once with filters designed to have a specific spatial pickup pattern in a desired look direction. Because of spatial aliasing, the microphone spacing may be chosen so that the patterns are effective in the most relevant frequency bands. The resulting stereo or 5.1 output channels can be played back in a surround sound setup to produce an immersive sound experience.
Figure 7 illustrates front and back views of an example of a wireless communication device 702 (e.g., a smartphone). The array of the front microphone 704a and the first back microphone 704c may be used to make stereo recordings. Examples of other microphone 704 pairings include the first microphone 704a (front) and the second microphone 704b (front), the third microphone 704c (back) and the fourth microphone 704d (back), and the second microphone 704b (front) and the fourth microphone 704d (back). The different locations of the microphones 704a-d relative to the source, which may depend on the holding position of the device 702, can create stereo effects that may be enhanced using spatial filtering. To create a stereo image between a narrator and a recorded scene (e.g., during videotaping), it may be desirable to use an endfire pairing of the first microphone 704a (front) and the third microphone 704c (back), with the thickness of the device between them (as shown in the side view of Figure 1). It should be noted, however, that the same microphones 704a-d may also be used in a different holding position to create an endfire pairing with a separation toward the z axis between them (e.g., as shown in the back view of Figure 1). In the latter case, a stereo image of the scene may be created (e.g., sound from the left side of the scene is captured as sound coming from the left). In some implementations, the wireless communication device may include a receiver 708, one or more loudspeakers 710a-b and/or a camera lens 706.
Figure 8 illustrates a scenario in which an endfire pairing of the first microphone 704a (front) and the third microphone 704c (back), with the thickness of the device 702 between them, is used to record a source signal arriving from the broadside direction. In this case, the x axis 874 increases to the right, the y axis 876 increases to the left and the z axis 878 increases upward. In this example, the coordinates of the two microphones 704a, 704c may be (x=0, y=0, z=0) and (x=0, y=0.10, z=-0.01). Stereo beamforming may be applied such that the region along the y=0 plane illustrates a beam in the broadside direction, and the region around the point (x=0, y=-0.5, z=0) illustrates a null beam in the endfire direction. When the narrator speaks from the broadside direction (e.g., toward the back of the device 702), it may be difficult to discriminate between the narrator's voice and the scene in front of the device 702, because of the ambiguity with respect to rotation about the axis of the microphone 704a, 704c pair. In this example, the stereo effect separating the narrator's voice from the scene may not be enhanced.
Figure 9 illustrates another scenario in which an endfire pairing of the first microphone 704a (front) and the third microphone 704c (back), with the thickness of the device 702 between them, is used to record a source signal arriving from the broadside direction, where the coordinates of the microphones 704a (front), 704c (back) may be the same as in Figure 8. In this case, the x axis 974 increases to the right, the y axis 976 increases to the left and the z axis 978 increases upward. In this example, a beam may be oriented toward the endfire direction (through the point (x=0, y=-0.5, z=0)), so that the user's (e.g., the narrator's) voice is heightened in one channel. The beam may be formed using a null beamformer or another method. For example, a blind source separation (BSS) method, such as independent component analysis (ICA) or independent vector analysis (IVA), may provide a wider stereo effect than a null beamformer. It should be noted that in order to provide a wider stereo effect for the recorded scene itself, it may be sufficient to use an endfire pairing of the same microphones 704a, 704c with a separation toward the z axis 978 between them (e.g., as shown in the back view of Figure 1).
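The null-beamformer idea can be illustrated with the simplest possible two-microphone delay-and-subtract sketch. This is an idealized toy (integer-sample delay, no scattering from the device body, spacing chosen to make the endfire delay exactly 14 samples at 48 kHz), not the patent's beamformer design.

```python
import numpy as np

fs = 48000
c = 343.0
d = c * 14 / fs          # ~0.10 m spacing -> integer 14-sample endfire delay

def null_endfire(x1, x2, delay=14):
    """Two-mic null beamformer: delay mic 1 and subtract mic 2.

    A plane wave from the endfire direction reaches mic 1 `delay` samples
    before mic 2, so the delayed x1 cancels x2 and the endfire source is
    nulled; broadside sound (equal at both mics) passes through.
    """
    return np.roll(x1, delay) - x2

rng = np.random.default_rng(7)
s = rng.standard_normal(4800)

# endfire source: mic 2 hears mic 1's signal 14 samples later
endfire_out = null_endfire(s, np.roll(s, 14))
# broadside source: both mics hear the same signal
broadside_out = null_endfire(s, s)
```

The endfire arrival cancels exactly while the broadside arrival survives, which is the directional discrimination the figure describes.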
Figure 10 is a graph illustrating a scenario of combining endfire beams. In this case, the x axis 1074 increases to the right, the y axis 1076 increases to the left and the z axis 1078 increases upward. Because the wireless communication device 702 is in a broadside holding position, it may be desirable to combine endfire beams to the left and right (e.g., as shown in Figures 9 and 10) to enhance the stereo effect relative to the raw recording. Such processing may also include adding an inter-channel delay (e.g., to mimic microphone spacing). Such a delay may serve to normalize the output delays of the two beamformers to a common reference point in space. When the stereo channels are played back over a headset, manipulating the delay can also help rotate the spatial image in a preferred direction. The device 702 may include an accelerometer, a magnetometer and/or a gyroscope that indicates the holding position (e.g., as described in U.S. patent application Ser. No. 13/280,211, entitled "SYSTEMS, METHODS, APPARATUS AND COMPUTER-READABLE MEDIA FOR ORIENTATION-SENSITIVE RECORDING CONTROL," attorney docket 102978U1). Figure 20, discussed below, illustrates a flow diagram of such a method.
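Normalizing the two beamformer outputs to a common reference delay can be sketched as below. The integer-sample delays and the helper name are illustrative assumptions; a real implementation might use fractional delays.

```python
import numpy as np

def normalize_to_reference(outputs, delays_samples):
    """Delay each beamformer output so all share the largest reference delay.

    outputs: list of 1-D arrays; delays_samples: each output's existing
    delay (in samples) relative to the common reference point in space
    (e.g., the array center).
    """
    max_d = max(delays_samples)
    return [np.concatenate([np.zeros(max_d - d), x[:len(x) - (max_d - d)]])
            for x, d in zip(outputs, delays_samples)]

rng = np.random.default_rng(1)
s = rng.standard_normal(1000)
left = np.concatenate([np.zeros(3), s[:-3]])    # beam output delayed 3 samples
right = np.concatenate([np.zeros(7), s[:-7]])   # beam output delayed 7 samples
left_n, right_n = normalize_to_reference([left, right], [3, 7])
```

After normalization both outputs carry the source with the same latency, so summing or panning them does not smear the spatial image.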
When the device is in an endfire holding position, the recording can provide a wide stereo effect. In this case, spatial filtering (e.g., using a null beamformer or a BSS solution such as ICA or IVA) may enhance the effect somewhat.
In the dual-microphone case, a stereo-recorded file may be enhanced via spatial filtering as described above (e.g., separating the user's voice from the recorded scene). It may be desirable to generate several different directional channels from the captured stereo signal (e.g., for surround sound), i.e., to upmix the signal to more than two channels. For example, it may be desirable to upmix the signal to five channels (e.g., for a 5.1 surround sound scheme), so that each channel can be played back using a different one of an array of five loudspeakers. Such a method may include applying spatial filtering in corresponding directions to obtain the upmixed channels. Such a method may also include applying a multichannel encoding scheme to the upmixed channels (e.g., a version of Dolby Surround).
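Structurally, the upmix amounts to running the stereo pair through one two-input spatial filter per output direction and summing, as in the sketch below. The random placeholder taps are assumptions standing in for actual designed or learned spatial filters; only the routing is meant to be illustrative.

```python
import numpy as np

def upmix_stereo_to_5(stereo, direction_filters):
    """Upmix a stereo pair to five directional channels.

    stereo: (2, N) captured left/right signals.
    direction_filters: dict mapping a channel name ('FL', 'FC', 'FR',
    'BL', 'BR') to a pair of FIR taps, one filter per input channel.
    Each output channel is the sum of the two filtered inputs.
    """
    out = {}
    for name, (h_left, h_right) in direction_filters.items():
        out[name] = (np.convolve(stereo[0], h_left, mode="same")
                     + np.convolve(stereo[1], h_right, mode="same"))
    return out

rng = np.random.default_rng(3)
stereo = rng.standard_normal((2, 2000))
# placeholder taps only -- real taps would come from a spatial filter design
taps = {name: (rng.standard_normal(32) / 32, rng.standard_normal(32) / 32)
        for name in ("FL", "FC", "FR", "BL", "BR")}
channels = upmix_stereo_to_5(stereo, taps)
```

Each of the five outputs could then be fed to a multichannel encoding scheme as the text describes.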
For cases in which more than two microphones 704a-d are used for recording, it is possible to use spatial filtering and different microphone 704a-d combinations to record in multiple directions (e.g., five directions, according to the 5.1 standard), and then play back the recorded signals (e.g., using five loudspeakers). Such processing may be performed without upmixing.
Figure 11 illustrates examples of plots for such beams in the front center (FC) 1180, front left (FL) 1182, front right (FR) 1184, back left (BL) 1186 and back right (BR) 1188 directions. The x, y and z axes are oriented similarly in these plots (the middle of each range is zero and the extremes are +/-0.5, with the x axis increasing to the right, the y axis increasing toward the left and the z axis increasing toward the top), and the dark areas indicate the direction of the beam or null beam. The beam in each plot is directed through the following point (at z=0): (x=0, y=+0.5) for front center (FC) 1180, (x=+0.5, y=+0.5) for front right (FR) 1184, (x=+0.5, y=-0.5) for back right (BR) 1188, (x=-0.5, y=-0.5) for back left (BL) 1186 and (x=-0.5, y=+0.5) for front left (FL) 1182.
The audio signals associated with the four different directions (FR 1184, BR 1188, BL 1186, FL 1182) may be compressed using speech codecs on the wireless communication device 702. At the receiver side, the center sound of the reconstructed audio signal played back and/or decoded for the user may be produced from a combination of the channels associated with the different directions FR 1184, BR 1188, BL 1186, FL 1182. These audio signals associated with different directions may be compressed and transmitted in real time using the wireless communication device 702. Each of the four separate sources may be compressed from some lower-band (LB) frequency up to some upper-band (UB) frequency and transmitted.
The effectiveness of the spatial filtering technique may be limited to a bandpass range that depends on factors such as the small spacing between microphones, spatial aliasing and high-frequency scattering. In one example, the signal may be low-pass filtered (e.g., with a cutoff frequency of 8 kHz) before spatial filtering.
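A minimal sketch of that pre-filtering step, assuming a 48-kHz capture and an 8-kHz cutoff as in the text; the Butterworth order is an arbitrary illustrative choice.

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000
# 6th-order low-pass at 8 kHz, in second-order sections for stability
sos = butter(6, 8000, btype="low", fs=fs, output="sos")

rng = np.random.default_rng(5)
x = rng.standard_normal(fs)          # one second of wideband noise
x_lp = sosfilt(sos, x)               # band handed to the spatial filter
```

Only `x_lp` would be spatially filtered; the content above the cutoff is handled separately, as discussed later in the text.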
For cases in which sound from a single point source is captured, beamforming supplemented with masking of signals from other directions at the aggressive level needed for the desired masking effect can cause strong attenuation of the non-direct-path signal and/or audible distortion. Such artifacts may be undesirable for high-definition (HD) audio. In one example, HD audio may be recorded at a sampling rate of 48 kHz. To mitigate such artifacts, instead of using the aggressively spatially filtered signal, it may be desirable to use only the energy profile of the processed signal for each channel, and to apply, for each channel, a gain panning rule according to the energy profile to the original input signal or to the spatially processed output before masking. It should be noted that because sound events tend to be sparse in the time-frequency map, such a post-gain panning method may be used even in multi-source cases.
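One way to read that paragraph concretely: derive frame-wise gains from the energies of the spatially filtered low-band channels and apply them to the full-band original. The frame length, the sum-to-one normalization, and the helper names below are assumptions for illustration, not the patent's panning rule.

```python
import numpy as np

def panning_gains(lowband_channels, frame_len=480, floor=1e-12):
    """Per-frame panning gains from the energy profiles of the spatially
    filtered low-band channels (one row per direction).

    Returns gains of shape (n_channels, n_frames); within each frame the
    channel energies are normalized so the gains sum to one.
    """
    n_ch, n = lowband_channels.shape
    n_frames = n // frame_len
    x = lowband_channels[:, :n_frames * frame_len]
    energy = (x.reshape(n_ch, n_frames, frame_len) ** 2).sum(axis=2)
    return energy / np.maximum(energy.sum(axis=0, keepdims=True), floor)

def apply_gains(fullband, gains, frame_len=480):
    """Apply frame-wise gains to the full-band (e.g., 48-kHz) original,
    yielding one panned output per direction without aggressive masking."""
    n_ch, n_frames = gains.shape
    g = np.repeat(gains, frame_len, axis=1)
    return g * fullband[None, :n_frames * frame_len]
```

When a direction dominates the low-band energy in a frame, the full-band signal is routed to that direction's channel for that frame, which relies on the time-frequency sparsity the text mentions.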
Figure 12 illustrates an example of processing to obtain a signal in the spatial front-right direction. Plot A 1290 (amplitude versus time) shows the original microphone recording. Plot B 1292 (amplitude versus time) shows the result of low-pass filtering the microphone signal (with a cutoff frequency of 8 kHz) and performing spatial filtering with masking. Plot C 1294 (magnitude versus time) shows the corresponding spatial energy (e.g., sum of squared sample values) based on the energy of the signal in Plot B 1292. Plot D 1296 (state versus time) shows the panning profile based on the energy differences indicated by the low-frequency spatial filtering, and Plot E 1298 (amplitude versus time) shows the 48-kHz panned output.
For the dual-microphone-pair case, it may be desirable to design at least one beam for one pair and at least two beams in different directions for the other pair. The beams may be designed or learned (e.g., using a blind source separation method such as independent component analysis or independent vector analysis). Each of these beams may be used to obtain a different channel of the recording (e.g., for a surround sound recording).
Figure 13 illustrates a null beamforming approach using blind source separation (e.g., independent component analysis or independent vector analysis) with two microphone pairs of a three-microphone 1304a-c array. For the front and back directions, sources 1380a, 1380b may be localized using the second microphone 1304b and the third microphone 1304c. For the left and right directions, sources 1380c, 1380d may be localized using the first microphone 1304a and the second microphone 1304b. It may be desirable for the axes of the two microphone 1304a-c pairs to be orthogonal or at least substantially orthogonal (e.g., no more than five, ten, fifteen or twenty degrees from orthogonal).
Some of the channels may be produced by combining two or more of the beams. Figure 14 illustrates an example in which the front beam 1422a and the right beam 1422b (i.e., the beams in the front and right directions) may be combined to obtain a result in the front-right direction. The beams may be recorded by one or more microphones 1404a-c (e.g., the first microphone 1404a, the second microphone 1404b and the third microphone 1404c). Results in the front-left, back-right and/or back-left directions may be obtained in the same way. Combining overlapping beams 1422a-d in this way can provide a signal in which the signal from the corresponding corner is 6 dB greater than signals from other locations. In some implementations, a back null beam 1422c and a left null beam 1422d may be formed (i.e., the beams in the left and back directions may be nulls). In some cases, an inter-channel delay may be applied to normalize the output delays of the two beamformers to a common reference point in space. When combining "left-right endfire pairs" and "front-back endfire pairs," it may be desirable to set the reference point to the center of gravity of the microphone 1404a-c array. Such an operation can support maximum beam radiation at the desired corner location, with an adjusted delay between the two pairs.
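The 6-dB figure follows from coherent summation: a corner source appears in both beams and, once their delays are aligned to the common reference, adds in amplitude (+6 dB in power), while a source seen by only one beam passes at unit gain. The sketch below assumes idealized beams and integer-sample delays.

```python
import numpy as np

def combine_beams(front, right, delay_front=0, delay_right=0):
    """Sum two beamformer outputs after normalizing their delays to a
    common reference point (e.g., the array's center of gravity)."""
    max_d = max(delay_front, delay_right)
    f = np.roll(front, max_d - delay_front)
    r = np.roll(right, max_d - delay_right)
    return f + r

rng = np.random.default_rng(11)
s = rng.standard_normal(2000)

# a source at the front-right corner appears coherently in both beams
# (here with different internal delays); after alignment the sum is 2*s
corner = combine_beams(np.roll(s, 5), np.roll(s, 2), delay_front=5, delay_right=2)

# a source picked up by only one of the two beams passes at unit gain
elsewhere = combine_beams(s, np.zeros_like(s))
```

Doubling the amplitude quadruples the power, i.e., 10·log10(4) ≈ 6.02 dB of corner emphasis relative to other locations.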
Figure 15 illustrates examples of null beams in the front 1501, back 1503, left 1505 and right 1507 directions for the approach illustrated in Figure 13. The beams may be designed using minimum variance distortionless response beamformers, or using blind source separation (e.g., independent component analysis or independent vector analysis) filters learned and converged for a scenario in which the relative positions of the device 702 and the sound source (or sources) are fixed. In these examples, the frequency bin range shown corresponds to the band from 0 to 8 kHz. It can be seen that the spatial beam patterns are complementary. It can also be seen that, because of the different spacings between the left-right microphone 1304a-c pair and the front-back microphone 1304a-c pair in these examples, spatial aliasing affects these beam patterns differently.
Because of spatial aliasing, and depending on the microphone spacing, it may be desirable to apply the beams to less than the full frequency range of the captured signal (e.g., the range from 0 to 8 kHz as described above). After spatial filtering of the low-frequency content, the high-frequency content may be added back, with some adjustments for spatial delay, processing delay and/or gain matching. In some cases (e.g., the handheld device form factor), it may also be desirable to filter only a middle range of frequencies (e.g., extending down only to 200 or 500 Hz), because some loss of directivity can always be expected due to the limitations of the microphone spacing.
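The band-splitting can be sketched with a complementary linear-phase FIR pair, which sidesteps the delay-adjustment step because both bands incur the same group delay. The tap count and the identity "spatial filter" in the sanity check are illustrative assumptions; the gain matching the text mentions is omitted.

```python
import numpy as np
from scipy.signal import firwin

fs = 48000
numtaps = 257                        # odd -> linear phase, (numtaps-1)//2 delay
lp = firwin(numtaps, 8000, fs=fs)    # low-pass, 8 kHz cutoff
hp = -lp                             # complementary high-pass: delta - lp
hp[numtaps // 2] += 1.0

def recombine(spatial_lowband_input, original, lp=lp, hp=hp):
    """Keep spatial filtering below 8 kHz and add back the original high
    band; both FIRs share the same linear-phase delay, so no extra delay
    compensation is needed in this sketch."""
    low = np.convolve(spatial_lowband_input, lp, mode="full")
    high = np.convolve(original, hp, mode="full")
    return low + high

rng = np.random.default_rng(9)
x = rng.standard_normal(4000)
# sanity case: if the "spatial filter" is the identity, the two bands
# must add back up to the (delayed) original
y = recombine(x, x)
```

With IIR or non-linear-phase processing in either branch, the explicit spatial/processing delay adjustments described in the text would be required instead.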
If some kind of nonlinear phase distortion is present, standard beam/null forming techniques based on applying the same delay to all frequencies for a given direction of arrival (DOA) may perform poorly, because of the differential delay at some frequencies caused by the nonlinear phase distortion. An independent vector analysis-based method as described herein, however, operates on the basis of source separation, and such a method can therefore be expected to produce good results even in the presence of a differential delay for the same direction of arrival. Such robustness can be a potential advantage of using independent vector analysis to obtain the surround processing coefficients.
For cases in which no spatial filtering is performed above a certain cutoff frequency (e.g., 8 kHz), the final high-definition signal may include the high-pass-filtered original front/back channels with the band from 8 kHz to 24 kHz added back. Such an operation may include adjusting for spatial and high-pass filtering delays. It may also be desirable to adjust the gain of the 8-24 kHz band (e.g., so as not to obscure the spatial separation effect). The example illustrated in Figure 12 may be filtered in the time domain, but application of the methods described herein to filtering in other domains (e.g., the frequency domain) is expressly contemplated and hereby disclosed.
Figure 16 illustrates a null beamforming approach using four-channel blind source separation (e.g., independent component analysis or independent vector analysis) with a four-microphone 1604a-d array. It may be desirable for the axes of at least two of the various pairs of the four microphones 1604a-d to be orthogonal or at least substantially orthogonal (e.g., no more than five, ten, fifteen or twenty degrees from orthogonal). Such a four-microphone 1604a-d filter may be used to produce beam patterns in the corner directions, in addition to dual-microphone pairings. In one example, the filters may be learned using independent vector analysis and training data, and the resulting converged independent vector analysis filters are implemented as fixed filters applied to the four recorded microphone 1604a-d input signals to produce each of the corresponding five channel directions (FL, FC, FR, BR, BL) used in 5.1 surround sound. To make full use of the five loudspeakers, the front center channel FC may be obtained, for example, using the following equation:
Figure 23, discussed below, shows a flowchart for such a method. Figure 25, discussed below, shows a partial routing diagram for such a filter bank, in which microphone n provides the input to the filters in row n (1 <= n <= 4), and each output channel is the sum of the outputs of the corresponding filters.
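The routing just described — each microphone feeding one filter per output channel, with each output channel summing its four filter outputs — can be sketched as follows. This is a minimal illustration of the routing only; the FIR coefficients in a real system would be the converged IVA filters, which are not reproduced here.

```python
import numpy as np

def apply_corner_filterbank(mics, filters):
    """Apply a 4x4 bank of FIR filters to four microphone channels.

    mics:    list of 4 equal-length 1-D arrays (microphone inputs 1..4).
    filters: 4x4 nested list; filters[n][j] is the FIR impulse response
             routing microphone n to output channel j.
    Returns the output channels; output j is the sum over all
    microphones n of (mics[n] convolved with filters[n][j]).
    """
    num_out = len(filters[0])
    outputs = []
    for j in range(num_out):
        acc = None
        for n, x in enumerate(mics):
            y = np.convolve(x, filters[n][j])
            acc = y if acc is None else acc + y
        outputs.append(acc)
    return outputs
```

With identity (unit-impulse) filters on the diagonal, each output channel simply reproduces its corresponding microphone, which makes the routing easy to verify.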
In one example of such a learning process, separate sound sources are positioned at each of four designed positions around the four-microphone 1604a-d array (e.g., the four corner positions FL, FR, BL, and BR), and the array is used to capture a four-channel signal. Note that each of the captured four channels is a mixture of all four sources. A blind source separation technique (e.g., independent vector analysis) may then be applied to separate the four independent sources. After convergence, the four separated independent sources are obtained along with a converged filter set, which beams toward the target corner and is substantially null toward the other three corners.
Figure 17 illustrates examples of the beam patterns of such a set of four filters for the corner directions front left (FL) 1709, front right (FR) 1711, back left (BL) 1713, and back right (BR) 1715. For a landscape recording mode, obtaining and applying the filters may include using two front microphones and two back microphones, performing the four-channel independent vector analysis learning algorithm on sources at fixed positions relative to the array, and applying the converged filters.
The beam patterns may vary depending on the mixture data used for training. Figure 18 illustrates examples of converged IVA filter beam patterns, learned on mobile speaker data, in the back left (BL) 1817, back right (BR) 1819, front left (FL) 1821, and front right (FR) 1823 directions. Figure 19 illustrates examples of converged IVA filter beam patterns, learned on refined mobile speaker data, in the back left (BL) 1917, back right (BR) 1919, front left (FL) 1921, and front right (FR) 1923 directions. These examples are the same as those shown in Figure 18, except for the front right beam pattern.
The process of training the four-microphone filters using independent vector analysis may comprise steering a beam toward the desired direction but not toward the interfering directions. For example, the filter for the front left (FL) direction converges to a solution comprising a beam that is steered toward the front left (FL) direction and is null in the front right (FR), back left (BL), and back right (BR) directions. If the exact microphone array geometry is known, such a training operation can be performed deterministically. Alternatively, the IVA process may be performed with rich training data, in which one or more audio sources (e.g., voices, musical instruments, etc.) are positioned at each corner and captured by the four-microphone array. In this case, the training process can be performed once regardless of the microphone configuration (i.e., no information about the microphone geometry is needed), and the filters can thereafter be fixed for a particular array configuration. As long as the array includes four microphones in the projected two-dimensional (x-y) plane, the result of this learning process can be applied to produce a suitable set of four corner filter banks. If the microphones of the array are arranged on two orthogonal or nearly orthogonal axes (e.g., within 15 degrees of orthogonal), such trained filters can be used to record a surround sound image without the constraint of a particular microphone array configuration. For example, if the two axes are close to orthogonal, a three-microphone array is sufficient, and the ratio between the microphone separations on each axis is unimportant.
As noted above, a high-definition signal may be obtained by spatially processing the low-frequency band and passing the high-frequency band through unprocessed. However, if the increase in computational complexity is not a major issue for a particular design, processing over the entire frequency range may be performed instead. Because the four-microphone IVA approach focuses more on nulling than on beamforming, the aliasing effects in the high-frequency band may be reduced. Spatial aliasing of a null occurs at only a few frequencies in the beamforming direction, such that most of the frequency range in the beamforming direction can remain unaffected by null aliasing, especially for small microphone spacings. For larger microphone spacings, the nulls may in effect become randomized, so that the result approximates the case in which the high-frequency band is simply left unprocessed.
For small form factors (e.g., handheld device 102), it may be desirable to avoid performing spatial filtering at low frequencies, because the microphone spacing may be too small to support good results, and performance at high frequencies may be compromised. Likewise, it may be desirable to avoid performing spatial filtering at high frequencies, because such frequencies are usually already directional, and filtering may be ineffective for frequencies above the spatial aliasing frequency.
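For reference, the spatial aliasing frequency mentioned above is commonly approximated from the microphone spacing d and the speed of sound c as f = c / (2d). A small sketch (the 2 cm spacing used in the check is an assumed example, consistent with the handset spacings discussed later):

```python
def spatial_aliasing_frequency(spacing_m, speed_of_sound=343.0):
    """Frequency (Hz) above which a microphone pair's inter-channel
    phase difference becomes ambiguous: f = c / (2 * d)."""
    return speed_of_sound / (2.0 * spacing_m)
```

For a 2 cm pair this gives about 8.6 kHz, which is why spatial filtering well above that frequency yields little benefit on small devices.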
If fewer than four microphones are used, it may be difficult to form nulls in the three other corner directions (e.g., due to insufficient degrees of freedom). In that case, it may be desirable to use an alternative, such as the end-fire pairing discussed with reference to Figures 14, 21, and 22.
Figure 20 illustrates a flowchart of a method 2000 for combining end-fire beams. In one example, the wireless communication device 102 may apply 2002 a beam in one end-fire direction. The wireless communication device 102 may apply 2004 a beam in the other end-fire direction. In some examples, a microphone 104a-e pair may apply the beam in each end-fire direction. The wireless communication device 102 may then combine 2006 the filtered signals.
Figure 21 illustrates a flowchart of a method 2100 for combining beams in the general dual-microphone-pair case. In one example, a first microphone 104a-e pair applies 2102 a beam in a first direction. A second microphone 104a-e pair applies 2104 a beam in a second direction. The wireless communication device 102 may then combine 2106 the filtered signals.
Figure 22 illustrates a flowchart of a method 2200 for combining beams in the three-microphone landscape case. In this example, the first microphone 104a and the second microphone 104b may apply 2202 a beam in a first direction. The second microphone 104b and the third microphone 104c may apply 2204 a beam in a second direction. The wireless communication device 102 may then combine 2206 the filtered signals. Each end-fire beamforming pair can have focal regions at +90 and -90 degrees. As an example, to obtain front left (the front-back pair at +90) and left (the left-right pair at +90), a combination of the two end-fire beams, each with a +90 degree focal region, may be used.
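The pairwise steps above can be sketched with a simple differential (delay-and-subtract) end-fire beam per microphone pair, followed by averaging of the two beams. This is a minimal sketch under idealized assumptions (integer-sample delay, no equalization or microphone calibration), not the device's actual filter design.

```python
import numpy as np

def endfire_null_beam(front, back, delay_samples):
    """Difference beamformer on one microphone pair: delay the rear
    microphone and subtract it from the front microphone, placing a
    null toward the rear end-fire direction."""
    if delay_samples > 0:
        delayed = np.concatenate([np.zeros(delay_samples),
                                  back[:len(back) - delay_samples]])
    else:
        delayed = back
    return front - delayed

def combine_endfire_beams(beam_a, beam_b):
    """Combine two end-fire beams (e.g., a front-back pair and a
    left-right pair, both steered to +90 degrees) into one channel."""
    n = min(len(beam_a), len(beam_b))
    return 0.5 * (beam_a[:n] + beam_b[:n])
```

A signal arriving exactly from the nulled direction (front channel equal to the delayed back channel) cancels to zero, which is the defining property of the null beam.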
Figure 23 is a block diagram of an array of four microphone channels 2304a-d (e.g., a first microphone channel 2304a, a second microphone channel 2304b, a third microphone channel 2304c, and a fourth microphone channel 2304d) using four-channel blind source separation. Each of the microphone channels 2304a-d may be coupled to each of the four filters 2324a-d. To make full use of all five loudspeakers, the front center channel 2304e may be obtained by combining the front right channel 2304a and the front left channel 2304b, for example, via the outputs of the first filter 2324a and the second filter 2324b.
Figure 24 illustrates a partial routing diagram for a blind source separation filter bank 2426. Four microphones 2404 (e.g., a first microphone 2404a, a second microphone 2404b, a third microphone 2404c, and a fourth microphone 2404d) may be coupled to the filter bank 2426 to produce audio signals in the front left (FL), front right (FR), back left (BL), and back right (BR) directions.
Figure 25 illustrates a routing diagram for a 2x2 filter bank 2526. Four microphones 2504 (e.g., a first microphone 2504a, a second microphone 2504b, a third microphone 2504c, and a fourth microphone 2504d) may be coupled to the filter bank 2526 to produce audio signals in the front left (FL), front right (FR), back left (BL), and back right (BR) directions. Note that at the output of the 2x2 filter bank, the three-dimensional audio signals FL, FR, BR, and BL are output. As illustrated in Figure 23, the center channel can be reproduced from a combination of the outputs of two of the filters (the first and second filters).
This description includes disclosure of using multiple omnidirectional microphones 2504a-d to provide a 5.1-channel recording from the recorded signal. It may also be desirable to use the multiple omnidirectional microphones 2504a-d to produce a binaural recording from the captured signal. For example, if the user does not have a 5.1-channel surround system on the playback side, it may be desirable to downmix the 5.1 channels to a stereo binaural recording, so that the user can have the experience of being in the actual auditory space with a surround system. This capability can also provide the option of monitoring the surround recording live at the scene while it is being recorded, and/or of using a stereo headset in place of a home theater system to play back the recorded video and surround sound on the user's mobile device.
The systems and methods described herein can provide directed sound sources, derived from the array of omnidirectional microphones 2504a-d, that are intended to be played by loudspeakers arranged at designated positions in a living-room space (FL, FR, C, BL (or left surround), and BR (or right surround)). A method of reproducing this situation using headphones may include an offline process of measuring the binaural impulse response (BIR) (e.g., binaural transfer function) from each loudspeaker to microphones 2504a-d located inside each ear in the desired auditory space. The binaural impulse responses encode the auditory path information, including the direct path and the reflected paths from each loudspeaker to each of the two ears, for each source-receiver pairing between the loudspeaker array and the pair of ears. Small microphones 2504a-d may be positioned inside the ears of a person, or a head-and-torso simulator with silicone ears may be used (e.g., HATS, Brüel & Kjær, DK).
For binaural reproduction, each measured binaural impulse response may be convolved with the corresponding directed sound source for the designated loudspeaker position. After all of the directed sources have been convolved with the binaural impulse responses, the results may be summed for each ear's recording. In this case, two channels (e.g., left and right) that reproduce the left-side and right-side signals captured by a person's ears can be played back via headphones. Note that the 5.1 surround output produced from the array of omnidirectional microphones 2504a-d can serve as the point sources for binaural reproduction. This scheme can therefore be generalized, depending on how the point sources are generated. For example, more directed sources may be produced from the signals captured by the array, and these may serve as point sources having approximately measured binaural impulse responses from the desired loudspeaker positions to the ears.
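The convolve-and-sum rendering just described can be sketched as follows. This is a simplified illustration assuming the BIRs have already been measured offline; the channel names and dictionary layout are illustrative, not part of the disclosed apparatus.

```python
import numpy as np

def binaural_downmix(channels, birs_left, birs_right):
    """Render loudspeaker-feed channels to two ears by convolving each
    channel with its binaural impulse response (BIR) per ear and
    summing the results.

    channels:   dict name -> 1-D signal (e.g., 'FL', 'FR', 'C', 'BL', 'BR')
    birs_left:  dict name -> BIR from that loudspeaker to the left ear
    birs_right: dict name -> BIR from that loudspeaker to the right ear
    Returns (left, right) headphone signals.
    """
    length = max(len(x) + len(birs_left[k]) - 1 for k, x in channels.items())
    left = np.zeros(length)
    right = np.zeros(length)
    for name, sig in channels.items():
        l = np.convolve(sig, birs_left[name])
        r = np.convolve(sig, birs_right[name])
        left[:len(l)] += l
        right[:len(r)] += r
    return left, right
```

With a unit-impulse BIR the rendered ear signal is just the source channel itself, which provides a simple sanity check of the summation.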
It may be desirable to perform the methods described herein within a portable audio sensing device that has an array of two or more microphones 2504a-d configured to receive acoustic signals. Examples of a portable audio sensing device that may be implemented to include such an array and may be used for audio recording and/or voice communications applications include: a telephone handset (e.g., a cellular telephone handset); a wired or wireless headset (e.g., a Bluetooth headset); a handheld audio and/or video recorder; a personal media player configured to record audio and/or video content; a personal digital assistant (PDA) or other handheld computing device; and a notebook computer, laptop computer, netbook computer, tablet computer, or other portable computing device. The class of portable computing devices currently includes devices having names such as laptop computers, notebook computers, netbook computers, ultraportable computers, tablet computers, mobile internet devices, smartbooks, and smartphones. Such a device may have a top panel that includes a display screen and a bottom panel that may include a keyboard, where the two panels may be connected in a clamshell or other hinged relationship. Such a device may similarly be implemented as a tablet computer that includes a touchscreen display on its top surface. Other examples of audio sensing devices that may be constructed to perform such methods, include instances of such an array, and be used for audio recording and/or voice communications applications include set-top boxes and audio and/or video conferencing devices.
Figure 26A illustrates a block diagram of a multi-microphone audio sensing device 2628 according to a general configuration. The audio sensing device 2628 may include an instance of any of the implementations of the microphone array 2630 disclosed herein, and any of the audio sensing device examples disclosed herein may be implemented as an instance of the audio sensing device 2628. The audio sensing device 2628 may also include an implementation of an apparatus 2632 that may be configured to process a multichannel audio signal (MCS) by performing one or more of the methods disclosed herein. The apparatus 2632 may be implemented as a combination of hardware (e.g., a processor) with software and/or with firmware.
Figure 26B illustrates a block diagram of a communications device 2602 that may be an implementation of the device 2628. The wireless communication device 2602 may include a chip or chipset 2634 (e.g., a mobile station modem (MSM) chipset) that includes the apparatus 2632. The chip/chipset 2634 may include one or more processors. The chip/chipset 2634 may also include processing elements of the array 2630 (e.g., elements of the audio preprocessing stage discussed below). The chip/chipset 2634 may also include: a receiver, which may be configured to receive a radio-frequency (RF) communications signal and to decode and reproduce an audio signal encoded within the RF signal; and a transmitter, which may be configured to encode an audio signal that is based on a processed signal produced by the apparatus 2632, and to transmit an RF communications signal that describes the encoded audio signal. For example, one or more processors of the chip/chipset 2634 may be configured to perform a noise reduction operation as described above on one or more channels of the multichannel signal, such that the encoded audio signal is based on the noise-reduced signal.
Each microphone of array 2630 can have the response for omnidirectional, two-way or unidirectional (such as, cardiod).The various types of microphones that can be used in array 2630 can comprise (unrestricted) piezoelectric microphones, dynamic microphones and electret microphone.In the device communicated for portable voice (such as mobile phone or headphone), center to center interval between the neighboring microphones of array 2630 can in from about 1.5cm to the scope of about 4.5cm, but comparatively large-spacing (such as, up to 10 or 15cm) be also possible in the device of such as mobile phone or smart phone, and even greater distance (such as, up to 20,25 or 30cm or more than 30cm) is possible in the device of such as flat computer.The microphone of array 2630 can along line (having even or non-homogeneous microphone interval) through arranging, or, make it be centrally located at the summit place of two dimension (such as, triangle) or 3D shape.
It is expressly noted that the microphones may be implemented more generally as transducers sensitive to radiation or emissions other than sound. In one such example, a microphone pair may be implemented as a pair of ultrasonic transducers (e.g., transducers sensitive to acoustic frequencies greater than 15, 20, 25, 30, 40, or 50 kilohertz or more).
During operation of the multi-microphone audio sensing device 2628, the array 2630 may produce a multichannel signal in which each channel is based on the response of a corresponding one of the microphones to the acoustic environment. One microphone may receive a particular sound more directly than another microphone, such that the corresponding channels differ from one another and collectively provide a more complete representation of the acoustic environment than can be captured using a single microphone. In some implementations, the chipset 2634 may be coupled to one or more microphones 2604a-b, a loudspeaker 2610, one or more antennas 2603a-b, a display 2605, and/or a keypad 2607.
Figure 27A is a block diagram of an array 2730 of microphones 2704a-b configured to perform one or more operations. It may be desirable for the array 2730 to perform one or more processing operations on the signals produced by the microphones 2704a-b in order to produce the multichannel signal. The array 2730 may include an audio preprocessing stage 2736 configured to perform one or more such operations, which may include (without limitation) impedance matching, analog-to-digital conversion, gain control, and/or filtering in the analog and/or digital domains.
Figure 27B is another block diagram of a microphone array 2730 configured to perform one or more operations. The array 2730 may include an audio preprocessing stage 2736, which may include analog preprocessing stages 2738a and 2738b. In one example, the stages 2738a and 2738b may each be configured to perform a high-pass filtering operation (e.g., with a cutoff frequency of 50, 100, or 200 Hz) on the corresponding microphone signal.
It may be desirable for the array 2730 to produce the multichannel signal as a digital signal, that is, as a sequence of samples. For example, the array 2730 may include analog-to-digital converters (ADCs) 2740a and 2740b, each configured to sample the corresponding analog channel. Typical sampling rates for acoustic applications include 8 kHz, 12 kHz, 16 kHz, and other frequencies in the range from about 8 kHz to about 16 kHz, although sampling rates as high as about 44 kHz may also be used. In this particular example, the array 2730 may also include digital preprocessing stages 2742a and 2742b, each configured to perform one or more preprocessing operations (e.g., echo cancellation, noise reduction, and/or spectral shaping) on the corresponding digitized channel to produce the respective channels MCS-1 and MCS-2 of the multichannel signal MCS. Although Figures 27A and 27B show two-channel implementations, it will be understood that the same principles may be extended to an arbitrary number of microphones 2704a-b and corresponding channels of the multichannel signal MCS.
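The high-pass behavior of the analog preprocessing stages 2738a-b can be illustrated with a first-order digital high-pass filter. This is only a stand-in sketch (a one-pole digital approximation of an analog stage, with the 100 Hz cutoff and 16 kHz rate as example values), not the device's actual analog circuitry.

```python
import numpy as np

def one_pole_highpass(x, cutoff_hz, fs_hz):
    """First-order high-pass filter (e.g., cutoff of 50, 100, or 200 Hz)
    applied to one microphone channel sampled at fs_hz."""
    rc = 1.0 / (2.0 * np.pi * cutoff_hz)
    dt = 1.0 / fs_hz
    alpha = rc / (rc + dt)
    y = np.zeros_like(x, dtype=float)
    for n in range(1, len(x)):
        # y[n] tracks changes in x and lets constant offsets decay away
        y[n] = alpha * (y[n - 1] + x[n] - x[n - 1])
    return y
```

A step input passes through initially and then decays toward zero, confirming that DC and very-low-frequency content (below the cutoff) is rejected.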
Current formats for immersive audio reproduction include (a) binaural 3D, (b) transaural 3D, and (c) 5.1/7.1 surround sound. For both binaural and transaural 3D, typically only stereo channels/signals are transmitted. For surround sound, more than a stereo signal can be transmitted. The present disclosure proposes a coding scheme for transmitting more than stereo, for surround sound, in mobile devices.
Current systems can transmit "B-format audio" as shown in Figure 1 (from the Journal of the Audio Engineering Society, vol. 57, no. 9, September 2009). B-format audio has one point source representation with four channels, and it requires a special recording setup. Other systems focus on broadcasting rather than voice communications.
The systems and methods of the present disclosure use four point sources in a real-time communication system, where a point source may be present at each of the four corners of a surround sound system (e.g., front left, front right, back left, and back right). The audio transmission for these four corners may be done jointly or independently. In such an arrangement, any number of speech codecs may be used to compress the four audio signals. In some cases, no special recording setup (such as the setup for B-format audio) is needed. The z-axis may be omitted. Doing so does not degrade the signal, because the information can still be distinguished by the human ear.
The new coding scheme can provide compression with distortion that is mainly limited to the distortion inherent to the speech codecs. The final audio output may be suitable for loudspeaker placement and interpolation. Moreover, it can be compatible with other formats, such as the B-format (except for the z-axis and binaural recording). In addition, the new coding scheme can benefit from the use of the echo cancellers that work in tandem with the speech codecs in the audio path of most mobile devices, because the four audio signals may be largely speech.
The systems and methods of the present disclosure can address the problem of real-time communication. In some examples, the bands from some low-band frequency (LB) up to some upper-band frequency (UB) (e.g., [LB, UB]) may be transmitted as respective channels. The bands above the upper-band frequency (UB) up to the Nyquist frequency (e.g., [UB, NF]) may be transmitted on different channels, depending on the available channel capacity. For example, if four channels are available, four audio channels may be transmitted. If two channels are available, the two front channels may be averaged and the two back channels may be averaged, and the resulting front and back channels may be transmitted. If one channel is available, the mean of all of the microphone inputs may be transmitted. In some configurations, a channel is not transmitted, and a technique similar to spectral band replication may be used to generate the high band (e.g., [UB, NF]) from the low band (e.g., [LB, UB]). For those bands below the low-band frequency (LB) (e.g., [0, LB]), the mean of all of the microphone inputs may be transmitted.
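The capacity-dependent channel selection just described can be sketched as a small routing function. This is a minimal sketch of the 4/2/1-channel cases only; the function name and equal-weight averaging are illustrative assumptions.

```python
import numpy as np

def downmix_for_capacity(fl, fr, bl, br, available_channels):
    """Select transmit channels for the [LB, UB] band based on channel
    capacity: 4 -> send all four corners; 2 -> average the front pair
    and the back pair; 1 -> mean of all microphone inputs."""
    if available_channels >= 4:
        return [fl, fr, bl, br]
    if available_channels == 2:
        return [0.5 * (fl + fr), 0.5 * (bl + br)]
    return [0.25 * (fl + fr + bl + br)]
```

The same routing can be applied per frequency band, so that (for example) the [0, LB] band always takes the single-channel mean path regardless of capacity.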
In some examples, the coding of the audio signals may include selective coding. For example, if the user wants to transmit a particular directional source (e.g., the user's voice), the wireless communication device may allocate more coding bit resources to that direction by minimizing the dynamic range of the other channels and reducing the energy in the other directions. Additionally or alternatively, if the user is interested in a particular directional source (e.g., the user's voice), the wireless communication device may transmit only one or two channels.
Figure 28 illustrates a chart of the frequency bands of one or more audio signals 2844a-d. The audio signals 2844a-d may represent audio signals received from different directions. For example, audio signal 2844a may be the audio signal from the front left (FL) direction in a surround sound system, another audio signal 2844b may be the audio signal from the back left (BL) direction, another audio signal 2844c may be the audio signal from the front right (FR) direction, and another audio signal 2844d may be the audio signal from the back right (BR) direction.
According to some configurations, the audio signals 2844a-d may be divided into one or more bands. For example, the front left audio signal 2844a may be divided into band 1A 2846a, band 1B 2876a, band 2A 2878a, band 2B 2880a, and band 2C 2882a. The other audio signals 2844b-d may be divided similarly. As used herein, the term "band 1B" may refer to the frequency band between some low-band frequency (LB) and some upper-band frequency (UB) (e.g., [LB, UB]). The bands of an audio signal 2844a-d may include one or more types of band. For example, the audio signal 2844a may include one or more narrowband signals. In some implementations, a narrowband signal may comprise band 1A 2846a-d and a portion of band 1B 2876a-d (e.g., the portion of band 1B 2876a-d below 4 kHz). In other words, if the upper-band frequency (UB) is greater than 4 kHz, then band 1B 2876a-d extends beyond the narrowband signal. In other implementations, a narrowband signal may comprise band 1A 2846a-d, band 1B 2876a-d, and a portion of band 2A 2878a-d (e.g., the portion of band 2A 2878a-d below 4 kHz). The audio signal 2844a may also include one or more non-narrowband signals (e.g., the portion of band 2A 2878a above 4 kHz, band 2B 2880a, and band 2C 2882a). As used herein, the term "non-narrowband" refers to any signal that is not a narrowband signal (e.g., a wideband signal, a super-wideband signal, or a full-band signal).
The ranges of the bands may be as follows. Band 1A 2846a-d may span from 0 to 200 Hz; in some implementations, the upper limit of band 1A 2846a-d may be as high as about 500 Hz. Band 1B 2876a-d may span from the maximum frequency of band 1A 2846a-d (e.g., 200 Hz or 500 Hz) up to about 6.4 kHz. Band 2A 2878a-d may span from the maximum of band 1B 2876a-d (e.g., 6.4 kHz) to about 8 kHz. Band 2B 2880a-d may span from the maximum of band 2A 2878a-d (e.g., 8 kHz) up to about 16 kHz. Band 2C 2882a-d may span from the maximum of band 2B 2880a-d (e.g., 16 kHz) up to about 24 kHz.
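The example band edges above can be captured in a small lookup, which makes the later codec discussions (which bands a narrowband, wideband, super-wideband, or full-band codec covers) easy to check. The specific edge values follow the example ranges given here and are design-dependent, not fixed by the disclosure.

```python
# Example band edges in Hz, per the ranges described above
# (the band-1A upper edge may alternatively be ~500 Hz).
BAND_EDGES_HZ = {
    '1A': (0, 200),
    '1B': (200, 6400),
    '2A': (6400, 8000),
    '2B': (8000, 16000),
    '2C': (16000, 24000),
}

def band_of(freq_hz):
    """Return the label of the band containing the given frequency,
    or None if it lies above the full-band limit."""
    for name, (lo, hi) in BAND_EDGES_HZ.items():
        if lo <= freq_hz < hi:
            return name
    return None
```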
In some implementations, the upper limit of band 1B 2876a-d may depend on one or more factors, including (but not limited to) the geometric placement of the microphones and the mechanical design of the microphones (e.g., omnidirectional microphone to omnidirectional microphone). For example, the upper limit of band 1B 2876a-d may be different when the microphones are located closer together than when the microphones are located farther apart. In such implementations, the other bands (e.g., bands 2A-C 2878a-d, 2880a-d, 2882a-d) may be derived from band 1B 2876a-d.
The frequency range up to the upper bound of band 1B 2876a-d may correspond to a narrowband signal (e.g., up to 4 kHz) or may extend slightly above the narrowband limit (e.g., to 6.4 kHz). As noted above, if the upper bound of band 1B 2876a-d is less than the narrowband limit (e.g., 4 kHz), then band 2A 2878a-d includes a portion of the narrowband signal. By comparison, if the upper bound of band 1B 2876a-d is greater than the narrowband limit (e.g., 4 kHz), then band 2A 2878a-d does not include any narrowband signal. A portion of the frequency range up to the upper bound of band 2A 2878a-d (e.g., 8 kHz) may be a wideband signal (e.g., the portion above 4 kHz). The frequency range up to the upper bound of band 2B 2880a-d (e.g., 16 kHz) may be a super-wideband signal. The frequency range up to the upper bound of band 2C 2882a-d (e.g., 24 kHz) may be a full-band signal.
Depending on the availability of the network and the availability of the speech codecs in the mobile device 102, different codec configurations may be used. When compression is involved, a distinction is sometimes made between audio codecs and speech codecs. A speech codec may also be referred to as a voice codec. Audio codecs and speech codecs have different compression schemes, and the amount of compression can vary widely between them. An audio codec may have better fidelity, but may require more bits when compressing the audio signals 2844a-d. Consequently, the compression ratio (i.e., the number of bits of the input signal to the codec relative to the number of bits of the output signal of the codec) is lower for an audio codec than for a speech codec. Therefore, because of the over-the-air bandwidth constraints in a cell (the region covered by a base station), audio codecs were not used to transmit voice in legacy 2G (second generation) and 3G (third generation) communication systems, because the number of bits required to transmit the voice packets would be unacceptable. As a result, in 2G and 3G communication systems, speech codecs have been used to transmit compressed speech over the air in the voice channel from one mobile device to another.
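As a rough arithmetic check of the compression-ratio definition above (input bits relative to output bits): narrowband 16-bit, 8 kHz PCM occupies 128 kbps, so a 12.2 kbps speech codec such as GSM-EFR achieves roughly a 10.5:1 ratio. A sketch:

```python
def compression_ratio(input_rate_bps, codec_rate_bps):
    """Ratio of input bits to output bits for a constant-rate codec,
    per the definition of compression ratio given above."""
    return input_rate_bps / codec_rate_bps

# 16-bit samples at 8 kHz -> 128,000 bps of narrowband PCM input.
PCM_NB_BPS = 16 * 8000
```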
Although audio codecs are present in mobile devices, the transmission of audio packets (i.e., compressed descriptions of the audio produced by an audio codec) has been done over the air on data channels. Examples of audio codecs include MPEG-2/AAC stereo, MPEG-4 BSAC stereo, RealAudio, SBC Bluetooth, WMA, and WMA 10 Pro. It should be noted that these audio codecs may be found in mobile devices in 3G systems, but the compressed audio signals are not transmitted over the air in real time on a traffic channel or voice channel. Speech codecs are used to compress audio signals in real time for over-the-air transmission. Examples of speech codecs include the AMR narrowband speech codec (5.15 kbps), the AMR wideband speech codec (8.85 kbps), the G.729AB speech codec (8 kbps), the GSM-EFR speech codec (12.2 kbps), the GSM-FR speech codec (13 kbps), the GSM-HR speech codec (5.6 kbps), EVRC-NB, and EVRC-WB. The compressed speech (or audio) is encapsulated in vocoder packets and sent over the air in the traffic channel. A speech codec is sometimes referred to as a vocoder. Before being sent over the air, the vocoder packets are inserted into larger packets. In 2G and 3G communications, voice is transmitted in voice channels, but VoIP (voice over IP) can also be used to transmit voice in data channels.
Depending on the over-the-air bandwidth, various codec schemes may be used to encode the signal between the upper-band (UB) frequency and the Nyquist frequency (NF). Examples of these schemes are presented in Figures 29-33.
Figure 29A illustrates one possible scheme for a first configuration that uses four full-band codecs 2948a-d. As noted above, the audio signals 2944a-d may represent audio signals received from different positions (e.g., front left audio signal 2944a, back left audio signal 2944b, front right audio signal 2944c, and back right audio signal 2944d). Similarly, as noted above, the audio signals 2944a-d may be divided into one or more bands. With a full-band codec 2948a-d, the audio signal 2944a may comprise band 1A 2946a, band 1B 2976a, and bands 2A-2C 2984a. In some cases, the frequency ranges of the bands may be the ranges described previously.
In this example, each audio signal 2944a-d may use a full-band codec 2948a-d to compress the various bands of the audio signal 2944a-d. For example, those bands of each audio signal 2944a-d within the frequency range defined by some low-band frequency (LB) and some upper-band frequency (UB) (e.g., comprising band 1B 2976a-d) may be spatially filtered. According to this configuration, for the bands comprising frequencies greater than the upper-band frequency (UB) and less than the Nyquist frequency (e.g., bands 2A-2C 2984a-d), the original audio signal captured at the microphone closest to the desired corner position 2944a-d may be encoded. Similarly, for the bands comprising frequencies less than the low-band frequency (LB) (e.g., band 1A 2946a-d), the original audio signal captured at the microphone closest to the desired corner position 2944a-d may be encoded. In some configurations, the original audio signal captured at the microphone closest to the desired corner position 2944a-d can represent the assigned direction for bands 2A-2C 2984a-d, because it naturally captures the delay and gain differences between the microphone channels. In some examples, the difference between the closest-microphone capture and the filtered range is that the directional effect is less pronounced than in the spatially filtered frequency region.
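The per-band routing just described — the spatially filtered signal inside [LB, UB], and the closest-microphone capture below LB and above UB — can be sketched with simple FFT masks. This is an illustrative frequency-domain sketch (block-based, no overlap-add), not the codec's actual band-splitting filters.

```python
import numpy as np

def assemble_channel(spatial_band, closest_mic, fs_hz, lb_hz, ub_hz):
    """Assemble one corner channel before coding: keep the spatially
    filtered signal inside [LB, UB] and the closest-microphone signal
    below LB and above UB."""
    n = len(closest_mic)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs_hz)
    in_band = (freqs >= lb_hz) & (freqs < ub_hz)
    # Pick each FFT bin from the spatially filtered or closest-mic signal
    spec = np.where(in_band,
                    np.fft.rfft(spatial_band[:n]),
                    np.fft.rfft(closest_mic))
    return np.fft.irfft(spec, n)
```

When both inputs are identical the assembly is transparent, which confirms the masks partition the spectrum without gaps or overlap.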
Figure 29B illustrates one possible scheme for the first configuration that uses four super-wideband codecs 2988a-d. With a super-wideband codec 2988a-d, the audio signals 2944a-d may comprise band 1A 2946a-d, band 1B 2976a-d, and bands 2A-2B 2986a-d.

In this example, those bands of each audio signal 2944a-d within the frequency range defined by some low-band frequency (LB) and some upper-band frequency (UB) (e.g., comprising band 1B 2976a-d) may be spatially filtered. According to this configuration, for the bands comprising frequencies greater than the upper-band frequency (UB) and less than the Nyquist frequency (e.g., bands 2A-2B 2986a-d), the original audio signal captured at the microphone closest to the desired corner position 2944a-d may be encoded. Similarly, for the bands comprising frequencies less than the low-band frequency (LB) (e.g., band 1A 2946a-d), the original audio signal captured at the microphone closest to the desired corner position 2944a-d may be encoded.
Figure 29C illustrates a possible scheme for the first configuration using four wideband codecs 2990a-d. By using the wideband codecs 2990a-d, the audio signals 2944a-d may comprise band 1A 2946a-d, band 1B 2976a-d and band 2A 2978a-d.
In this example, the bands of each audio signal 2944a-d within the frequency range defined by a certain lower-band frequency (LB) and a certain upper-band frequency (UB) (e.g., comprising band 1B 2976a-d) may be filtered. According to this configuration, for bands comprising frequencies greater than the upper-band frequency (UB) and less than the Nyquist frequency (e.g., band 2A 2978a-d), the original audio signal captured at the microphone closest to the desired corner location 2944a-d may be encoded. Similarly, for bands comprising frequencies less than the lower-band frequency (LB) (e.g., band 1A 2946a-d), the original audio signal captured at the microphone closest to the desired corner location 2944a-d may be encoded.
Figure 30A illustrates a possible scheme for a second configuration, in which two codecs 3094a-d carry averaged audio signals. In some instances, different codecs 3094a-d may be used for different audio signals 3044a-d. For example, the front left audio signal 3044a and the rear left audio signal 3044b may use full-band codecs 3094a, 3094b, respectively. In addition, the front right audio signal 3044c and the rear right audio signal 3044d may use narrowband codecs 3094c, 3094d. While Figure 30A depicts two full-band codecs 3094a, 3094b and two narrowband codecs 3094c, 3094d, any combination of codecs may be used, and the present systems and methods are not limited to the configuration depicted in Figure 30A. For example, the front right audio signal 3044c and the rear right audio signal 3044d may use wideband or super-wideband codecs in place of the narrowband codecs 3094c-d depicted in Figure 30A. In some instances, if the upper-band frequency (UB) is greater than the narrowband limit (e.g., 4 kHz), then the front right audio signal 3044c and the rear right audio signal 3044d may use wideband codecs to improve spatial coding, or may use narrowband codecs when network resources are limited.
In this configuration, the full-band codecs 3094a, 3094b may average one or more of the audio signals 3044a-d above a certain upper bound of the frequency range of the front right audio signal 3044c and the rear right audio signal 3044d. For example, the full-band codecs 3094a, 3094b may average the audio signal bands comprising frequencies greater than a certain upper-band frequency (UB) (e.g., bands 2A-2C 3092a, 3092b). Audio signals 3044a-d originating from the same general direction may be averaged together. For example, the front left audio signal 3044a and the front right audio signal 3044c may be averaged together, and the rear left audio signal 3044b and the rear right audio signal 3044d may be averaged together.
An example of averaging the audio signals 3044a-d is given as follows. The front left audio signal 3044a and the rear left audio signal 3044b may use the full-band codecs 3094a, 3094b. In this example, the front right audio signal 3044c and the rear right audio signal 3044d may use the narrowband codecs 3094c, 3094d. In this example, the full-band codecs 3094a, 3094b may comprise, for the respective audio signals (e.g., the front left audio signal 3044a and the rear left audio signal 3044b), the filtered bands between a certain lower-band frequency (LB) and a certain upper-band frequency (UB) (e.g., band 1B 3076a-b). The full-band codecs 3094a, 3094b may also average the audio signal bands comprising frequencies above the upper-band frequency (UB) (e.g., bands 2A-2C 3092a-b) of the audio signals with similar orientations (e.g., the front audio signals 3044a, 3044c and the rear audio signals 3044b, 3044d). Similarly, the full-band codecs 3094a, 3094b may comprise the bands below the lower-band frequency (LB) (e.g., band 1A 3046a-b).
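The same-direction averaging described above can be sketched as follows. This is a minimal illustration under the assumption that the above-UB (high-band) samples of each corner signal are already available as sequences; all function and key names are illustrative.

```python
# Hedged sketch: high-band portions of signals from the same general
# direction are averaged sample by sample (front-left with front-right,
# rear-left with rear-right), producing one averaged high band per side.
def average_pair(sig_a, sig_b):
    """Sample-wise average of two equal-length high-band signals."""
    return [(a + b) / 2.0 for a, b in zip(sig_a, sig_b)]

def average_high_bands(front_left, front_right, rear_left, rear_right):
    return {
        "front_avg": average_pair(front_left, front_right),
        "rear_avg": average_pair(rear_left, rear_right),
    }
```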
In addition, in this example, the narrowband codecs 3094c, 3094d may comprise, for the respective audio signals (e.g., the front right audio signal 3044c and the rear right audio signal 3044d), the filtered bands between a certain lower-band frequency (LB) and the lesser of 4 kHz and a certain upper-band frequency (UB) (e.g., bands 1B 3076c, 3076d). The narrowband codecs 3094c, 3094d may also comprise the bands below the lower-band frequency (LB) for the respective audio signals (e.g., the front right audio signal 3044c and the rear right audio signal 3044d). In this example, if the upper-band frequency (UB) is less than 4 kHz, then for the range above the upper-band frequency (UB) up to 4 kHz, the original audio signal captured at the microphone closest to the desired corner location 3044a-d may be encoded.
As mentioned above, while Figure 30A depicts two full-band codecs 3094a, 3094b and two narrowband codecs 3094c, 3094d, any combination of codecs may be used. For example, two super-wideband codecs may replace the two full-band codecs 3094a, 3094b.
Figure 30B illustrates a possible scheme for the second configuration, in which one or more codecs 3094a-b, e-f carry averaged audio signals. In this example, the front left audio signal 3044a and the rear left audio signal 3044b may use the full-band codecs 3094a, 3094b. In this example, the front right audio signal 3044c and the rear right audio signal 3044d may use wideband codecs 3094e, 3094f. In this configuration, the full-band codecs 3094a, 3094b may average one or more of the audio signals 3044a-d over a portion of the frequency range above the upper bound. For example, the full-band codecs 3094a, 3094b may average one or more of the audio signals 3044a-d over a portion (e.g., bands 2B 3092a, 2C 3092b) of the frequency range of the front right audio signal 3044c and the rear right audio signal 3044d. Audio signals 3044a-d originating from the same general direction may be averaged together. For example, the front left audio signal 3044a and the front right audio signal 3044c may be averaged together, and the rear left audio signal 3044b and the rear right audio signal 3044d may be averaged together.
In this example, the full-band codecs 3094a, 3094b may comprise band 1A 3046a-b, band 1B 3076a-b, band 2A 3078a-b and the averaged bands 2B, 2C 3092a-b. The wideband codecs 3094e, 3094f may comprise, for the respective audio signals (e.g., the front right audio signal 3044c and the rear right audio signal 3044d), the filtered bands comprising frequencies between a certain lower-band frequency (LB) and a certain upper-band frequency (UB) (e.g., bands 1B 3076c-d). The wideband codecs 3094e, 3094f may also include, in band 2A 3078c-d, the original audio signal captured at the closest microphone. By encoding the closest-microphone signal, directionality is still encoded through the inherent time and level differences between the microphone channels (although not as pronounced as with the spatial processing applied to the frequencies between the lower-band frequency (LB) and the upper-band frequency (UB)). The wideband codecs 3094e, 3094f may also comprise, for the respective audio signals (e.g., the front right audio signal 3044c and the rear right audio signal 3044d), the bands below the lower-band frequency (LB) (e.g., bands 1A 3046c-d).
Figure 31A illustrates a possible scheme for a third configuration, in which one or more of the codecs may average one or more audio signals. An example of averaging in this configuration is given as follows. The front left audio signal 3144a may use a full-band codec 3198a. The rear left audio signal 3144b, the front right audio signal 3144c and the rear right audio signal 3144d may use narrowband codecs 3198b, 3198c, 3198d.
In this example, the full-band codec 3198a may comprise, for the audio signal 3144a, the filtered band comprising frequencies between a certain lower-band frequency (LB) and a certain upper-band frequency (UB) (band 1B 3176a). The full-band codec 3198a may also average the audio signal bands comprising frequencies above the upper-band frequency (UB) of the audio signals 3144a-d (e.g., bands 2A-2C 3192a). Similarly, the full-band codec 3198a may comprise the band below the lower-band frequency (LB) (e.g., band 1A 3146a).
The narrowband codecs 3198b-d may comprise, for the respective audio signals (e.g., 3144b-d), the filtered bands between a certain lower-band frequency (LB) and the lesser of 4 kHz and a certain upper-band frequency (UB) (e.g., bands 1B 3176b-d). The narrowband codecs 3198b-d may also comprise, for the respective audio signals (e.g., 3144b-d), the bands comprising frequencies below the lower-band frequency (LB) (e.g., bands 1A 3146b-d).
Figure 31B illustrates a possible scheme for the third configuration, in which one or more non-narrowband codecs carry averaged audio signals. In this example, the front left audio signal 3144a may use the full-band codec 3198a. The rear left audio signal 3144b, the front right audio signal 3144c and the rear right audio signal 3144d may use wideband codecs 3198e, 3198f and 3198g. In this configuration, the full-band codec 3198a may average one or more of the audio signals 3144a-d over a portion (e.g., bands 2B-2C 3192a, 3192b) of the frequency range of the audio signals 3144a-d.
In this example, the full-band codec 3198a may comprise band 1A 3146a, band 1B 3176a, band 2A 3178a and bands 2B-2C 3192a. The wideband codecs 3198e-g may comprise, for the respective audio signals (e.g., 3144b-d), the filtered bands comprising frequencies between a certain lower-band frequency (LB) and a certain upper-band frequency (UB) (e.g., bands 1B 3176b-d). For frequencies above the upper-band frequency (UB), the wideband codecs 3198e-g may also comprise the original audio signal captured at the microphone closest to the desired corner location (e.g., bands 2A 3178b-d). The wideband codecs 3198e-g may also comprise, for the respective audio signals (e.g., 3144b-d), the bands comprising frequencies below the lower-band frequency (LB) (e.g., bands 1A 3146b-d).
Figure 32 illustrates four narrowband codecs 3201a-d. In this example, for each audio signal 3244a-d, the bands comprising frequencies between a certain lower-band frequency (LB) and the lesser of 4 kHz and a certain upper-band frequency (UB) are filtered. If the upper-band frequency (UB) is less than 4 kHz, the frequency range above the upper-band frequency (UB) up to 4 kHz may be encoded from the original audio signal of the closest microphone. In this example, four channels may be produced, corresponding to each audio signal 3244a-d. Each channel may comprise the filtered band for that audio signal 3244a-d (e.g., at least comprising a portion of band 1B 3276a-d). The narrowband codecs 3201a-d may also comprise, for the respective audio signals (e.g., 3244a-d), the bands comprising frequencies below the lower-band frequency (LB) (e.g., bands 1A 3246a-d).
Figure 33 is a flow diagram illustrating a method 3300 for producing and receiving audio signal packets 3376 using four non-narrowband codecs according to any scheme of Figure 29A, Figure 29B or Figure 29C. The method 3300 may include recording 3302 four audio signals 2944a-d. In this configuration, the four audio signals 2944a-d may be recorded or captured by a microphone array. As an example, the arrays 2630, 2730 illustrated in Figures 26 and 27 may be used. The recorded audio signals 2944a-d may correspond to directions of audio reception. For example, the wireless communication device 102 may record four audio signals from four directions (e.g., front left 2944a, rear left 2944b, front right 2944c and rear right 2944d).
The wireless communication device 102 may then generate 3304 audio signal packets 3376. In some implementations, generating 3304 the audio signal packets 3376 may include generating one or more audio channels. For example, given the codec configuration of Figure 29A, the band of each audio signal between a certain lower-band frequency (LB) and a certain upper-band frequency (UB) (e.g., [LB, UB]) may be filtered. In some implementations, filtering these bands may include applying a blind source separation (BSS) filter. In other implementations, one or more of the audio signals 2944a-d between the lower-band frequency (LB) and the upper-band frequency (UB) may be combined pairwise. For the bands above the upper-band frequency (UB) up to the Nyquist frequency and for the bands below the lower-band frequency (LB), the original audio signals 2944a-d may be combined with the filtered audio signals into audio channels. In other words, an audio channel (corresponding to an audio signal 2944a-d) may comprise the filtered band between the lower-band frequency (LB) and the upper-band frequency (UB) (e.g., band 1B 2976a-d), the original band above the upper-band frequency (UB) up to the Nyquist frequency (e.g., bands 2A-2C 2984a-d) and the original band below the lower-band frequency (LB) (e.g., band 1A 2946a-d).
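The channel composition just described can be sketched as follows, under the assumption that band splitting and the BSS filtering are performed elsewhere; the dictionary representation and all names are illustrative, not part of the disclosed method.

```python
# Hedged sketch of channel assembly: a channel combines the spatially
# filtered [LB, UB] band with the original closest-microphone content
# below LB and above UB (up to the Nyquist frequency).
def assemble_channel(original_low, filtered_mid, original_high):
    """Bundle per-band content into one audio-channel description."""
    return {
        "band_1A": original_low,   # below LB, original audio
        "band_1B": filtered_mid,   # [LB, UB], e.g. BSS-filtered
        "band_2":  original_high,  # above UB up to Nyquist, original audio
    }
```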
Generating 3304 the audio signal packets 3376 may also include applying one or more non-narrowband codecs to the audio channels. According to some configurations, the wireless communication device 102 may encode the audio channels using one or more of the first configurations of codecs as described in Figures 29A-C. For example, given the codecs described in Figure 29A, the wireless communication device 102 may encode the four audio channels using a full-band codec 2948a-d for each audio channel. Alternatively, the non-narrowband codecs in Figure 33 may be the super-wideband codecs 2988a-d illustrated in Figure 29B or the wideband codecs 2990a-d illustrated in Figure 29C. Any combination of codecs may be used.
After generating the audio signal packets 3376, the wireless communication device 102 may transmit 3306 the audio signal packets 3376 to a decoder. The decoder may be included in an audio output device, such as another wireless communication device 102. In some implementations, the audio signal packets 3376 may be transmitted over the air.
The decoder may receive 3308 the audio signal packets 3376. In some implementations, receiving 3308 the audio signal packets 3376 may include decoding the received audio signal packets 3376. The decoder may do so according to the first configuration. Following the example above, the decoder may decode the audio channels using a full-band codec for each audio channel. Alternatively, the decoder may use the super-wideband codecs 2988a-d or the wideband codecs 2990a-d, depending on how the transmitted packets 3376 were produced.
In some configurations, receiving 3308 the audio signal packets 3376 may include reconstructing a front center channel. For example, the receiving audio output device may combine the front left audio channel and the front right audio channel to produce a front center audio channel.
Receiving 3308 the audio signal packets 3376 may also include reconstructing a subwoofer channel. This may include passing one or more of the audio signals 2944a-d through a low-pass filter.
The received audio signals may then be played 3310 on an audio output device. In some cases, this may include playing the audio signals in a surround sound format. In other cases, the audio signals may be downmixed and played in a stereo format.
Figure 34 is a flow diagram illustrating another method 3400 for producing and receiving audio signal packets 3476 using four codecs (e.g., from either of Figure 30A or Figure 30B). The method 3400 may include recording 3402 one or more audio signals 3044a-d. In some implementations, this may be carried out as described in connection with Figure 33. The wireless communication device 102 may then generate 3404 audio signal packets 3476. In some implementations, generating 3404 the audio signal packets 3476 may include generating one or more audio channels. For example, the band of the audio signals 3044a-d between a certain lower-band frequency (LB) and a certain upper-band frequency (UB) (e.g., [LB, UB]) may be filtered. In some implementations, this may be carried out as described in connection with Figure 33.
In some implementations, four low-band channels (e.g., corresponding to the four audio signals 3044a-d illustrated in Figure 30A or 30B) may be produced. The low-band channels may comprise the frequencies between [0, 8] kHz of the audio signals 3044a-d. These four low-band channels may comprise the filtered signal between the lower-band frequency (LB) and the upper-band frequency (UB) (e.g., band 1B 3076a-d), the original audio signal above the upper-band frequency (UB) up to 8 kHz, and the original audio signal below the lower-band frequency (LB) (e.g., band 1A 3046a-d) of the four audio signals 3044a-d. Similarly, two high-band channels corresponding to the averaged front/rear audio signals may be produced. The high-band channels may comprise frequencies from zero up to 24 kHz. The high-band channels may comprise the filtered signal between the lower-band frequency (LB) and the upper-band frequency (UB) (e.g., band 1B 3076a-d) of the audio signals 3044a-d, the original audio signal above the upper-band frequency (UB) up to 8 kHz, and the original audio signal below the lower-band frequency (LB) (e.g., band 1A 3046a-d) of the four audio signals 3044a-d. The high-band channels may also comprise the averaged audio signal above 8 kHz up to 24 kHz.
Generating 3404 the audio signal packets 3476 may also include applying one or more codecs 3094a-f to the audio channels. According to some configurations, the wireless communication device 102 may encode the audio channels using one or more of the second configurations of codecs 3094a-f as described in Figures 30A and 30B.
For example, given the codecs described in Figure 30B, the wireless communication device 102 may encode the front left audio signal 3044a and the rear left audio signal 3044b using the full-band codecs 3094a, 3094b, respectively, and may encode the front right audio signal 3044c and the rear right audio signal 3044d using the wideband codecs 3094e, 3094f, respectively. In other words, four audio signal packets 3476 may be produced. For the packets 3476 corresponding to audio signals 3044a-d that use a full-band codec 3094a, 3094b (e.g., the front left audio signal 3044a and the rear left audio signal 3044b), the packets 3476 may comprise the low-band channel (e.g., [0, 8] kHz) of those audio signals (e.g., 3044a, 3044b) and the high-band channel of the averaged audio signals in the same general direction (e.g., the front audio signals 3044a, 3044c or the rear audio signals 3044b, 3044d) up to 24 kHz (e.g., the maximum frequency allowed by the full-band codecs 3094a, 3094b). For the audio signal packets 3476 corresponding to audio signals 3044a-d that use a wideband codec 3094e-f (e.g., the front right audio signal 3044c and the rear right audio signal 3044d), the audio signal packets 3476 may comprise the low-band channel (e.g., [0, 8] kHz) of those audio signals (e.g., 3044c, 3044d).
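The packet layout just described can be sketched as follows. This is a hedged illustration: the field names, the front/rear pairing and the 8/24 kHz split are assumptions taken from the example above, not a prescribed format.

```python
# Hedged sketch: full-band packets (front-left, rear-left) carry that
# signal's low-band channel plus the averaged high-band channel of the
# same front/rear pair; wideband packets (front-right, rear-right) carry
# only the low-band channel.
def build_packets(low_bands, high_band_avgs):
    packets = {}
    for name in ("front_left", "rear_left"):            # full-band codec
        pair = "front" if name.startswith("front") else "rear"
        packets[name] = {"low_band": low_bands[name],            # [0, 8] kHz
                         "high_band": high_band_avgs[pair]}      # (8, 24] kHz, averaged
    for name in ("front_right", "rear_right"):          # wideband codec
        packets[name] = {"low_band": low_bands[name]}
    return packets
```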
After generating the audio signal information, the wireless communication device 102 may transmit 3406 the audio signal information. In some implementations, this may be carried out as described in connection with Figure 33.
The decoder may receive 3408 the audio signal information. In some implementations, receiving 3408 the audio signal information may include decoding the received audio signal information. In some implementations, this may be carried out as described in connection with Figure 33. Given the codec scheme of Figure 30B, the decoder may decode the front left audio signal 3044a and the rear left audio signal 3044b using the full-band codecs 3094a, 3094b, and may decode the front right audio signal 3044c and the rear right audio signal 3044d using the wideband codecs 3094e, 3094f. The audio output device may also reconstruct the [8, 24] kHz range of a wideband audio channel using the portion (e.g., the [8, 24] kHz portion) of the averaged high-band channel contained in a full-band audio channel (e.g., using the averaged high-band channel of the front left audio signal for the front right audio channel, and the averaged high-band channel of the rear left audio signal for the rear right audio channel).
In some configurations, receiving 3408 the audio signal information may include reconstructing a front center channel. In some implementations, this may be carried out as described in connection with Figure 33.

Receiving 3408 the audio signal information may also include reconstructing a subwoofer signal. In some implementations, this may be carried out as described in connection with Figure 33.

The received audio signals may then be played 3410 on an audio output device. In some implementations, this may be carried out as described in connection with Figure 33.
Figure 35 is a flow diagram illustrating another method 3500 for producing and receiving audio signal packets 3576 using four codecs (e.g., from either of Figure 31A or Figure 31B). The method 3500 may include recording 3502 one or more audio signals 3144a-d. In some implementations, this may be carried out as described in connection with Figure 33.
The wireless communication device 102 may then generate 3504 audio signal packets 3576. In some implementations, generating 3504 the audio signal packets 3576 may include generating one or more audio channels. For example, the band of the audio signals 3144 between a certain lower-band frequency (LB) and a certain upper-band frequency (UB) (e.g., band 1B 3176a-d) may be filtered. In some implementations, this may be carried out as described in connection with Figure 33.
In some implementations, four low-band channels corresponding to the four audio signals 3144 may be produced. In some implementations, this may be carried out as described in connection with Figure 34. Similarly, a high-band channel corresponding to the averaged audio signals (e.g., the front left audio signal 3144a, the rear left audio signal 3144b, the front right audio signal 3144c and the rear right audio signal 3144d) may be produced. In some implementations, this may be carried out as described in connection with Figure 34.
Generating 3504 the audio signal packets 3576 may also include applying one or more codecs 3198a-g to the audio channels. According to some configurations, the wireless communication device 102 may encode the audio channels using one or more of the third configurations of codecs 3198a-g as described in Figures 31A and 31B. For example, given the codecs described in Figure 31B, the wireless communication device 102 may encode the front left audio signal 3144a using the full-band codec 3198a, and may encode the rear left audio signal 3144b, the front right audio signal 3144c and the rear right audio signal 3144d using the wideband codec 3198e, the wideband codec 3198f and the wideband codec 3198g, respectively. In other words, four audio signal packets 3576 may be produced.
For the packet 3576 corresponding to the audio signal 3144a that uses the full-band codec 3198a, the packet 3576 may comprise the low-band channel of the audio signal 3144a and the high-band channel of the averaged audio signals 3144a-d up to 24 kHz (e.g., the maximum frequency allowed by the full-band codec 3198a). For the audio signal packets 3576 corresponding to audio signals 3144a-d that use a wideband codec 3198e-g (e.g., the audio signals 3144b-d), the audio signal packets 3576 may comprise the low-band channel of those audio signals (e.g., 3144b-d) and the original audio signal above the upper-band frequency (UB) up to 8 kHz.
After generating the audio signal information, the wireless communication device 102 may transmit 3506 the audio signal information. In some implementations, this may be carried out as described in connection with Figure 33.
The decoder may receive 3508 the audio signal information. In some implementations, receiving 3508 the audio signal information may include decoding the received audio signal information. In some implementations, this may be carried out as described in connection with Figure 33. The audio output device may also reconstruct the [8, 24] kHz range of a wideband audio channel using the portion (e.g., the [8, 24] kHz portion) of the averaged high-band channel contained in the full-band audio channel.
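The decoder-side reconstruction just described can be sketched as follows. This assumes the packet layout from the earlier full-band/wideband example (front-left and rear-left packets carrying an averaged high band); the names and the front/rear pairing are illustrative assumptions.

```python
# Hedged sketch: channels that arrive without a high band (wideband
# channels) borrow the averaged high-band channel carried by the
# full-band channel on the same front/rear side.
def reconstruct_high_bands(packets):
    out = {}
    for name, pkt in packets.items():
        if "high_band" in pkt:                  # full-band channel, complete
            out[name] = pkt
        else:                                   # wideband channel, fill in
            donor = "front_left" if name.startswith("front") else "rear_left"
            out[name] = {"low_band": pkt["low_band"],
                         "high_band": packets[donor]["high_band"]}
    return out
```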
In some configurations, receiving 3508 the audio signal information may include reconstructing a front center channel. In some implementations, this may be carried out as described in connection with Figure 33.

Receiving 3508 the audio signal information may also include reconstructing a subwoofer signal. In some implementations, this may be carried out as described in connection with Figure 33.

The received audio signals may then be played 3510 on an audio output device. In some implementations, this may be carried out as described in connection with Figure 33.
Figure 36 is a flow diagram illustrating another method 3600 for producing and receiving audio signal packets 3676 using four non-narrowband codecs (e.g., from Figure 29A, Figure 29B or Figure 29C) for encoding in combination with any of four wideband or narrowband codecs for decoding. The method 3600 may include recording 3602 one or more audio signals 2944. In some implementations, this may be carried out as described in connection with Figure 33.
The wireless communication device 102 may then generate 3604 audio signal packets 3676. Generating 3604 the audio signal packets 3676 may include generating one or more audio channels. In some implementations, this may be carried out as described in connection with Figure 33.
Generating 3604 the audio signal packets 3676 may also include applying one or more non-narrowband codecs (as described in Figures 29A-C) to the audio channels. For example, the wireless communication device 102 may encode the audio channels using the super-wideband codecs 2988a-d described in Figure 29B.
After generating the audio signal packets 3676, the wireless communication device 102 may transmit 3606 the audio signal packets 3676 to a decoder. In some implementations, this may be carried out as described in connection with Figure 33.
The decoder may receive 3608 the audio signal packets 3676. In some implementations, receiving 3608 the audio signal packets 3676 may include decoding the received audio signal packets 3676. The decoder may decode the audio signal packets 3676 using one or more wideband codecs or one or more narrowband codecs. The audio output device may also reconstruct the [8, 24] kHz range of the audio channels using wideband channel bandwidth extension based on the received audio signal packets 3676. In this example, transmission of the range from the upper-band frequency (UB) to the Nyquist frequency is not necessary. This range may be generated from the lower-band frequency (LB) to upper-band frequency (UB) range using a technique similar to spectral band replication (SBR). For example, the band below the lower-band frequency (LB) may be transmitted as an average of the microphone inputs.
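The SBR-like idea mentioned above can be illustrated with a toy sketch. Real spectral band replication transposes the low-band spectrum and transmits envelope side information to shape it; this illustration omits the envelope shaping and simply tiles an attenuated copy of the transmitted band, so the attenuation factor and all names are assumptions for illustration only.

```python
# Toy illustration of bandwidth extension by spectral replication: the
# untransmitted bins above the upper-band frequency are approximated by
# tiling attenuated copies of the transmitted low-band spectrum.
def replicate_band(spectrum, n_total_bins, attenuation=0.5):
    """Extend a short magnitude spectrum to n_total_bins bins."""
    out = list(spectrum)
    while len(out) < n_total_bins:
        out.extend(attenuation * s for s in spectrum)
    return out[:n_total_bins]
```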
In some configurations, receiving 3608 the audio signal packets 3676 may include reconstructing a front center channel. In some implementations, this may be carried out as described in connection with Figure 33.

Receiving 3608 the audio signal packets 3676 may also include reconstructing a subwoofer channel. In some implementations, this may be carried out as described in connection with Figure 33. The received audio signals may then be played 3610 on an audio output device. In some implementations, this may be carried out as described in connection with Figure 33.
Coding bits may be assigned or distributed based on a certain direction. This direction may be selected by a user. For example, the direction from which the user's voice originates may be assigned more bits. This may be performed by reducing the dynamic range of the other channels and reducing the energy in the other directions. In addition, in a different configuration, a visualization of the energy distribution at the four corners of the surround sound field may be produced. The user may select which direction's sound is allocated more bits (i.e., which sound should be of higher quality), or the desired audio direction may be selected based on the visualization of the energy distribution. In this configuration, one or two channels may be encoded with more bits, while one or more channels are still transmitted.
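The direction-weighted bit allocation described above can be sketched as follows. The 50% boost for the selected direction and the equal split of the remainder are illustrative policy assumptions, not values taken from the disclosure.

```python
# Hedged sketch of direction-based bit allocation: the channel for the
# user-selected direction receives a larger share of the total bit
# budget, and the remaining channels share the rest equally.
def allocate_bits(total_bits, channels, selected, boost=0.5):
    alloc = {selected: int(total_bits * boost)}
    rest = [c for c in channels if c != selected]
    share = (total_bits - alloc[selected]) // len(rest)
    for c in rest:
        alloc[c] = share
    return alloc
```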
Figure 37 is a flow diagram illustrating another method 3700 for producing and receiving audio signal packets 3776, in which an unequal bit allocation for one or two audio channels during encoding may be selected based on user input. In some implementations, the unequal bit allocation for one or two audio signals during encoding may be selected based on user interaction with a visualization of the energy distribution of the four directions of the surround sound system. In this implementation, four encoded source channels are transmitted over the air.
The method 3700 may include recording 3702 one or more audio signals 2944. In some implementations, this may be carried out as described in connection with Figure 33. The wireless communication device 102 may then generate 3704 audio signal packets 3776. Generating 3704 the audio signal packets 3776 may include generating one or more audio channels. In some implementations, this may be carried out as described in connection with Figures 33-36.
Generating 3704 the audio signal packets 3776 may also include generating a visualization of the energy distribution at the four corners (e.g., the four audio signals 2944a-d). Based on this visualization, the user may select which direction's sound (e.g., the portion from which the user's voice originates) is allocated more bits. Based on the user selection (e.g., an indication of the spatial direction 3878), the wireless communication device 102 may apply more bits to one or both of the codecs of the first configuration of codecs (e.g., the codecs described in Figures 29A-C). Generating 3704 the audio signal information may also include applying one or more non-narrowband codecs to the audio channels. In some implementations, this may be carried out as described in connection with Figure 33, taking the user selection into account.
After generating the audio signal packets 3776, the wireless communication device 102 may transmit 3706 the audio signal packets 3776 to a decoder. In some implementations, this may be carried out as described in connection with Figure 33. The decoder may receive 3708 the audio signal information. In some implementations, this may be carried out as described in connection with Figure 33.
The received audio signals may then be played 3710 on an audio output device. In some implementations, this may be carried out as described in connection with Figure 33. Similarly, if the user is interested in a source in a certain direction (e.g., the user's voice or some other sound the user is focused on), transmission of only one or two channels may be performed. In such a configuration, a single channel is encoded and transmitted.
Figure 38 is a flow diagram illustrating another method 3800 for producing and receiving audio signal packets 3876, in which a single audio signal is compressed and transmitted based on a user selection. The method 3800 may include recording 3802 one or more audio signals 2944a-d. In some implementations, this may be carried out as described in connection with Figure 33.
The wireless communication device 102 may then generate 3804 audio signal packets 3876. Generating 3804 the audio signal packets 3876 may include generating one or more audio channels. In some implementations, this may be carried out as described in connection with Figures 33-36. Generating 3804 the audio signal packets 3876 may also include generating a visualization of the energy distribution at the four corners (e.g., the four audio signals 2944a-d). Based on this visualization, the user may select which direction's sound (e.g., the portion from which the user's voice originates) should be encoded and transmitted (e.g., via an indication of the spatial direction 3878). Generating 3804 the audio signal information may also include applying one non-narrowband codec (as described in Figures 29A-C) to the selected audio channel. In some implementations, this may be carried out as described in connection with Figure 33, taking the user selection into account.
After generating the audio signal information, the wireless communication device 102 may transmit 3806 the audio signal packet 3876 to a decoder. In some implementations, this may be carried out as described in connection with Figure 33. Along with the audio signal packet 3876, the wireless communication device may transmit 3806 a channel identification.
The decoder may receive 3808 the audio signal information. In some implementations, this may be carried out as described in connection with Figure 33.
The received audio signal may then be played 3810 on an audio output device. In some implementations, the received audio signal may be played 3810 as described in connection with Figure 33. By encoding and decoding the user-defined channel and zeroing the output of the other channels, a multichannel reproduction and/or headphone rendering system may be used to produce an enhanced yet spatialized output.
Figure 39 is a block diagram illustrating an implementation of a wireless communication device 3902 that may be implemented to generate audio signal packets 3376 comprising the four codec combination configurations 3974a-d. The communication device 3902 may include an array 3930 similar to the previously described array 2630. The array 3930 may include one or more microphones 3904a-d similar to the previously described microphones. For example, the array 3930 may include four microphones 3904a-d that receive from four recording directions (e.g., front-left, front-right, back-left and back-right).
The wireless communication device 3902 may include a memory 3950 coupled to the microphone array 3930. The memory 3950 may receive the audio signals provided by the microphone array 3930. For example, the memory 3950 may include one or more data sets for the four recording directions. In other words, the memory 3950 may include data for the front-left microphone 3904a audio signal, the front-right microphone 3904b audio signal, the back-right microphone 3904c audio signal and the back-left microphone 3904d audio signal.
The wireless communication device 3902 may also include a controller 3952 that receives processing information. For example, the controller 3952 may receive user information input at a user interface. More specifically, the user may indicate a desired recording direction. In other examples, the user may indicate that one or more audio channels should be allocated more processing bits, or the user may indicate which audio channels should be encoded and transmitted. The controller 3952 may also receive bandwidth information. For example, the bandwidth information may indicate to the controller 3952 the bandwidth assigned to the wireless communication device 3902 for transmitting the audio signal information (e.g., full band, super wideband, wideband or narrowband).
Based on the information from the controller 3952 (e.g., the user input and the bandwidth information) and the information stored in the memory 3950, the communication device 3902 may select a particular configuration from the one or more codec configurations 3974a-d to apply to the audio channels. In some implementations, the codec configurations 3974a-d present on the wireless communication device may include the first configuration of Figures 29A-C, the second configuration of Figures 30A-B, the third configuration of Figures 31A-B and the configuration of Figure 32. For example, the wireless communication device 3902 may use the first configuration of Figure 29A to encode the audio channels.
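The controller-driven selection among the codec configurations might be sketched as below. The bandwidth labels, configuration names and the rule that a user choice overrides the bandwidth default are illustrative assumptions, not taken from the patent.

```python
# Hypothetical mapping from assigned bandwidth to a codec configuration;
# the names stand in for the configurations of Figures 29A-C through 32.
CODEC_CONFIGS = {
    "full_band": "config_29A",
    "super_wideband": "config_30A",
    "wideband": "config_31A",
    "narrowband": "config_32",
}

def select_codec_config(bandwidth, user_choice=None):
    """Pick a codec configuration from controller information.

    A user-indicated configuration (if any) overrides the
    bandwidth-based default.
    """
    if user_choice is not None:
        return user_choice
    return CODEC_CONFIGS[bandwidth]
```

For example, a device assigned a wideband channel would default to the third configuration, while an explicit user choice takes precedence.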
Figure 40 is a block diagram illustrating an implementation of a wireless communication device 4002 that includes a configuration 4074 of four non-narrowband codecs 4048a-d, similar to the non-narrowband codecs of Figures 29A-C, for compressing audio signals. The wireless communication device 4002 may include a microphone array 4030 of microphones 4004a-d, a memory 4050, a controller 4052, or some combination of these elements (corresponding to the previously described elements). In this implementation, the wireless communication device 4002 may include the configuration 4074 of codecs 4048a-d used to encode the audio signal packets 3376. For example, the wireless communication device 4002 may include one or more wideband codecs 2990a-d, implemented as described in Figure 29B, to encode the audio signal information. Alternatively, full-band codecs 2948a-d or super-wideband codecs 2988a-d may be used. The wireless communication device 4002 may transmit the audio signal packets 4076a-d (e.g., FL, FR, BL and BR packets) to a decoder.
Figure 41 is a block diagram illustrating an implementation of a communication device 4102 that includes four codec combination configurations 4174a-d, in which an optional codec pre-filter 4154 may be used. The wireless communication device 4102 may include a microphone array 4130 of microphones 4104a-d, a memory 4150, a controller 4152, or some combination of these elements (corresponding to the previously described elements). The codec pre-filter 4154 may use information from the controller 4152 to control which audio signal data is stored in memory and, therefore, which data is encoded and transmitted.
Figure 42 is a block diagram illustrating an implementation of a communication device 4202 that includes four codec combination configurations 4274a-d, in which selective filtering may occur as part of a filter bank array 4226. The wireless communication device 4202 may include microphones 4204a-d, a memory 4250, a controller 4252, or some combination of these elements (corresponding to the previously described elements). In this implementation, the selective filtering may occur as part of the filter bank array 4226, which likewise corresponds to a previously described element.
Figure 43 is a block diagram illustrating an implementation of a communication device 4302 that includes four codec combination configurations 4374a-d, in which sound source data from the auditory scene may be mixed with data from one or more filters before being encoded with one of the codec configurations 4374a-d. The wireless communication device 4302 may include a microphone array 4330, a memory 4350 and/or a controller 4352, or some combination of these elements (corresponding to the previously described elements). In some implementations, the wireless communication device 4302 may include one or more mixers 4356a-d. The one or more mixers 4356a-d may mix the audio signals with the data from one or more filters before encoding with one of the codec configurations.
Figure 44 is a flow diagram illustrating a method 4400 for encoding multi-directional audio signals using an integrated codec. The method 4400 may be performed by the wireless communication device 102. The wireless communication device 102 may record 4402 multiple directional audio signals. The multiple directional audio signals may be recorded by multiple microphones. For example, multiple microphones located on the wireless communication device 102 may record directional audio signals from a front-left direction, a back-left direction, a front-right direction, a back-right direction, or some combination thereof. In some cases, the wireless communication device 102 records 4402 the multiple directional audio signals based on user input, for example via the user interface 312.
The wireless communication device 102 may generate 4404 multiple audio signal packets 3376. In some configurations, the audio signal packets 3376 may be based on the multiple audio signals. The multiple audio signal packets 3376 may include an averaged signal. As described above, generating 4404 the audio signal packets 3376 may include generating multiple audio channels. For example, a portion of the multiple directional audio signals may be compressed and transmitted over the air as multiple audio channels. In some cases, the number of compressed directional audio signals may not equal the number of transmitted audio channels. For example, if four directional audio signals are compressed, the number of transmitted audio channels may equal three. An audio channel may correspond to one or more directional audio signals. In other words, the wireless communication device 102 may generate a front-left audio channel corresponding to the front-left audio signal. The multiple audio channels may include filtered frequency ranges (e.g., band 1B) and unfiltered frequency ranges (e.g., bands 1A, 2A, 2B and/or 2C).
Generating 4404 the audio signal packets 3376 may also include applying a codec to the audio channels. For example, the wireless communication device 102 may apply one or more of a full-band codec, a wideband codec, a super-wideband codec or a narrowband codec to the multiple audio signals. More specifically, the wireless communication device 102 may compress at least one directional audio signal in a low band, and may compress a different directional audio signal in a high band.
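The idea of compressing one directional signal in a low band and a different one in a high band can be illustrated as follows. This is a sketch under stated assumptions: the moving-average band split is a stand-in for a proper filter bank, and the codec arguments are placeholders for real codec calls.

```python
def split_bands(samples, window=4):
    """Split a signal into a crude low band (a moving average) and the
    residual high band.  A real system would use a proper filter bank;
    this split is only illustrative.
    """
    low = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        low.append(sum(samples[lo:i + 1]) / (i + 1 - lo))
    high = [s - l for s, l in zip(samples, low)]
    return low, high

def encode_directional(front_left, back_right, low_codec, high_codec):
    """Compress one directional signal in the low band and a different
    directional signal in the high band, as the method describes.
    `low_codec` and `high_codec` are hypothetical codec callables.
    """
    fl_low, _ = split_bands(front_left)
    _, br_high = split_bands(back_right)
    return low_codec(fl_low), high_codec(br_high)
```

With identity "codecs", a constant front-left signal yields its low band unchanged, while the constant back-right signal contributes a zero high band, showing which band each direction occupies.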
In some implementations, generating 4404 the audio signal packets 3376 may be based on received input. For example, the wireless communication device 102 may receive input from a user to determine the codec bit allocation. In some cases, the bit allocation may be based on a visualization of the energy in the direction to be compressed. The wireless communication device 102 may also receive input associated with compressing the directional audio signals. For example, the wireless communication device 102 may receive input from the user indicating which directional audio signals to compress (and transmit over the air). In some cases, the input may indicate which directional audio signals should have better audio quality. In these examples, the input may be based on a gesture of the user's hand, for example by touching the display of the wireless communication device. Similarly, the input may be based on a movement of the wireless communication device.
After generating the audio signal packets 3376, the wireless communication device 102 may transmit 4406 the multiple audio signal packets 3376 to a decoder. The wireless communication device 102 may transmit 4406 the multiple audio signal packets 3376 over the air. In some configurations, the decoder is included in a wireless communication device 102, such as an audio sensing device.
Figure 45 is a flow diagram illustrating a method 4500 for audio signal processing. The method 4500 may be performed by the wireless communication device 102. The wireless communication device 102 may capture 4502 an auditory scene. For example, multiple microphones may capture audio signals from multiple directional sources. The wireless communication device 102 may estimate the direction of arrival of each audio signal. In some implementations, the wireless communication device 102 may select a recording direction. Selecting the recording direction may be based on the orientation of a portable audio sensing device (e.g., the microphones on the wireless communication device). Additionally or alternatively, selecting the recording direction may be based on input. For example, the user may select a direction that should have better audio quality. The wireless communication device 102 may decompose 4504 the auditory scene into at least four audio signals. In some implementations, the audio signals correspond to four independent directions. For example, a first audio signal may correspond to a front-left direction, a second audio signal may correspond to a back-left direction, a third audio signal may correspond to a front-right direction, and a fourth audio signal may correspond to a back-right direction. The wireless communication device 102 may also compress 4506 the at least four audio signals.
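The decomposition of the scene into four directional signals might be sketched as a mapping from estimated arrival directions to quadrants. The angle convention (0 degrees straight ahead, increasing counterclockwise) and the per-quadrant summation are assumptions made for illustration only.

```python
def quadrant(doa_degrees):
    """Map an estimated direction of arrival to one of four recording
    directions.  0 deg = front, angles increase counterclockwise
    (an assumed convention)."""
    a = doa_degrees % 360
    if a < 90:
        return "front_left"
    if a < 180:
        return "back_left"
    if a < 270:
        return "back_right"
    return "front_right"

def decompose(scene):
    """Decompose an auditory scene, given as (doa, samples) pairs, into
    four directional signals by summing the sources in each quadrant."""
    signals = {q: [] for q in
               ("front_left", "back_left", "back_right", "front_right")}
    for doa, samples in scene:
        bucket = signals[quadrant(doa)]
        for i, s in enumerate(samples):
            if i < len(bucket):
                bucket[i] += s
            else:
                bucket.append(s)
    return signals
```

Each of the four resulting signals would then be handed to the compression step 4506.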
In some implementations, decomposing 4504 the auditory scene may include dividing the audio signals into one or more frequency ranges. For example, the wireless communication device may divide an audio signal into a first set of narrowband frequency ranges and a second set of wideband frequency ranges. In addition, the wireless communication device may compress the audio samples associated with a first frequency band in the set of narrowband frequency ranges. After compressing the audio samples, the wireless communication device may transmit the compressed audio samples.
The wireless communication device 102 may also apply a beam in a first end-fire direction to obtain a first filtered signal. Similarly, a second beam in a second end-fire direction may produce a second filtered signal. In some cases, the beams may be applied to frequencies between a low threshold and a high threshold. In these cases, one of the thresholds (e.g., the low threshold or the high threshold) may be based on the distance between the microphones.
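An end-fire beam of the kind described above can be sketched as a delay-and-sum operation on a two-microphone pair. This is a minimal, time-domain illustration; a real implementation would operate per frequency band between the low and high thresholds, and the one-sample delay here is an assumption.

```python
def endfire_beam(mic_a, mic_b, delay):
    """Crude delay-and-sum beam along the axis of a two-microphone pair.

    Delaying `mic_b` by `delay` samples and averaging steers the beam
    toward one end-fire direction; swapping the microphone roles steers
    it toward the opposite end-fire direction.
    """
    out = []
    for i in range(len(mic_a)):
        delayed = mic_b[i - delay] if i >= delay else 0.0
        out.append(0.5 * (mic_a[i] + delayed))
    return out
```

For a source on the pair's axis, the later microphone's signal lines up with the delayed copy of the earlier one, so the impulse passes at full amplitude, while off-axis sources are attenuated.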
The wireless communication device may combine the first filtered signal with a delayed version of the second filtered signal. In some cases, the first and second filtered signals may each have two channels. In some cases, one channel of a filtered signal (e.g., of the first filtered signal or the second filtered signal) may be delayed relative to the other channel. Similarly, the combined signal (e.g., the combination of the first filtered signal and the second filtered signal) may have two channels that may be delayed relative to each other.
The wireless communication device 102 may generate a first spatially filtered signal. For example, the wireless communication device 102 may apply a filter having a beam in a first direction to a signal produced by a first microphone pair. In a similar manner, the wireless communication device 102 may generate a second spatially filtered signal. In some cases, the axis of the first microphone pair (e.g., the microphones used to produce the first spatially filtered signal) may be at least substantially orthogonal to the axis of the second microphone pair (e.g., the microphones used to produce the second spatially filtered signal). The wireless communication device 102 may then combine the first spatially filtered signal and the second spatially filtered signal to produce an output signal. The output signal may correspond to a direction different from the directions of the first and second spatially filtered signals.
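One simple way to obtain an output direction different from either beam's own direction is an equal-weight combination of the two spatially filtered signals, which for orthogonal pairs points roughly midway between them (e.g., toward a front-left diagonal). The equal weighting is an illustrative assumption.

```python
def combine_spatial(first_beam, second_beam):
    """Combine beams from two (assumed orthogonal) microphone pairs to
    approximate a direction between them, different from either beam's
    own direction."""
    return [0.5 * (a + b) for a, b in zip(first_beam, second_beam)]
```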
The wireless communication device may also record input channels. In some implementations, each input channel may correspond to one of multiple microphones in an array. For example, the input channels may correspond to the inputs of four microphones. Multiple multichannel filters may be applied to the input channels to obtain output channels. In some cases, the multichannel filters may correspond to multiple look directions. For example, four multichannel filters may correspond to four look directions. Applying a multichannel filter in one look direction may include applying null beams in the other look directions. In some implementations, the axis of a first pair of the multiple microphones may be no more than 15 degrees from orthogonal to the axis of a second pair of the multiple microphones.
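The smallest instance of a multichannel filter with a null beam in the other look direction is the two-microphone, two-direction case, where the filters are simply the rows of the inverse of the 2x2 mixing matrix. The gain values in the example are hypothetical; a real system would derive them from array geometry and frequency.

```python
def null_beam_filters(a, b, c, d):
    """Build two-channel filters with unit response in one look
    direction and a null in the other, from a 2x2 mixing matrix
    [[a, b], [c, d]] whose columns are the per-microphone gains of the
    two look directions.
    """
    det = a * d - b * c
    # The rows of the inverse matrix are the two filters.
    f0 = (d / det, -b / det)
    f1 = (-c / det, a / det)
    return f0, f1

def apply_filter(f, mics):
    """Apply per-channel gains to one frame of microphone input."""
    return f[0] * mics[0] + f[1] * mics[1]
```

When only the first direction is active, the first filter recovers its amplitude exactly while the second filter outputs zero, which is the null-beam behavior the text describes.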
As mentioned above, applying the multiple multichannel filters may produce output channels. In some cases, the wireless communication device 102 may process the output channels to produce a binaural recording based on a summation of binaural signals. For example, the wireless communication device 102 may apply binaural impulse responses to the output channels. This may produce binaural signals that can be used to produce the binaural recording.
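The binaural summation described above might be sketched as convolving each output channel with a matching impulse response and summing the results for one ear. The impulse responses here are toy placeholders for measured binaural room impulse responses, and only one ear is shown.

```python
def convolve(signal, impulse):
    """Direct-form convolution of a channel with an impulse response."""
    out = [0.0] * (len(signal) + len(impulse) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse):
            out[i + j] += s * h
    return out

def binaural_mix(channels, brirs):
    """Sum per-direction binaural signals into one ear's recording.

    `channels` is a list of output channels and `brirs` the matching
    (assumed) binaural impulse responses for that ear; a real system
    would repeat this for the left and right ears separately.
    """
    length = max(len(c) + len(h) - 1 for c, h in zip(channels, brirs))
    ear = [0.0] * length
    for c, h in zip(channels, brirs):
        for i, v in enumerate(convolve(c, h)):
            ear[i] += v
    return ear
```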
Figure 46 is a flow diagram illustrating a method 4600 for encoding three-dimensional audio. The method 4600 may be performed by the wireless communication device 102. The wireless communication device 102 may detect 4602 an indication of multiple spatial directions of localizable audio sources. As used herein, the term "localizable" refers to an audio source from a particular direction. For example, an audio signal from a front-left direction may be a localizable audio source. The wireless communication device 102 may determine the number of localizable audio sources. This may include estimating the direction of arrival of each localizable audio source. In some cases, the wireless communication device 102 may detect the indication from a user interface 312. For example, the user may select one or more spatial directions based on user input from the user interface 312 of the wireless communication device 302. Examples of user input include gestures of the user's hand (e.g., on the touch screen of the wireless communication device) and movement of the wireless communication device.
The wireless communication device 102 may then record 4604 multiple audio signals associated with the localizable audio sources. For example, one or more microphones located on the wireless communication device 102 may record 4604 audio signals from the front-left, front-right, back-left and/or back-right directions.
The wireless communication device 102 may encode 4606 the multiple audio signals. As described above, the wireless communication device 102 may use any number of codecs to encode the signals. For example, the wireless communication device 102 may use a full-band codec to encode 4606 the front-left and back-left audio signals, and may use a wideband codec to encode 4606 the front-right and back-right audio signals. In some cases, the wireless communication device 102 may encode the multichannel signal according to a three-dimensional audio encoding scheme. For example, the wireless communication device 102 may encode 4606 the multiple audio signals using any of the allocation schemes described in connection with Figures 29-32.
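The per-direction codec assignment in the example above (full-band for the left-side signals, wideband for the rest) can be written down directly. The direction labels and codec names are stand-ins for the actual codecs of Figures 29-32.

```python
def assign_codecs(directions, preferred=("FL", "BL")):
    """Map each directional signal to a codec, giving the preferred
    directions the higher-bandwidth codec (illustrative policy)."""
    return {d: ("full_band" if d in preferred else "wideband")
            for d in directions}
```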
The wireless communication device 102 may also apply a beam in a first end-fire direction to obtain a first filtered signal. Similarly, a second beam in a second end-fire direction may produce a second filtered signal. In some cases, the beams may be applied to frequencies between a low threshold and a high threshold. In these cases, one of the thresholds (e.g., the low threshold or the high threshold) may be based on the distance between the microphones.
The wireless communication device may combine the first filtered signal with a delayed version of the second filtered signal. In some cases, the first and second filtered signals may each have two channels. In some cases, one channel of a filtered signal (e.g., of the first filtered signal or the second filtered signal) may be delayed relative to the other channel. Similarly, the combined signal (e.g., the combination of the first filtered signal and the second filtered signal) may have two channels that may be delayed relative to each other.
The wireless communication device 102 may generate a first spatially filtered signal. For example, the wireless communication device 102 may apply a filter having a beam in a first direction to a signal produced by a first microphone pair. In a similar manner, the wireless communication device 102 may generate a second spatially filtered signal. In some cases, the axis of the first microphone pair (e.g., the microphones used to produce the first spatially filtered signal) may be at least substantially orthogonal to the axis of the second microphone pair (e.g., the microphones used to produce the second spatially filtered signal). The wireless communication device 102 may then combine the first spatially filtered signal and the second spatially filtered signal to produce an output signal. The output signal may correspond to a direction different from the directions of the first and second spatially filtered signals.
The wireless communication device may also record input channels. In some implementations, each input channel may correspond to one of multiple microphones in an array. For example, the input channels may correspond to the inputs of four microphones. Multiple multichannel filters may be applied to the input channels to obtain output channels. In some cases, the multichannel filters may correspond to multiple look directions. For example, four multichannel filters may correspond to four look directions. Applying a multichannel filter in one look direction may include applying null beams in the other look directions. In some implementations, the axis of a first pair of the multiple microphones may be no more than 15 degrees from orthogonal to the axis of a second pair of the multiple microphones.
As mentioned above, applying the multiple multichannel filters may produce output channels. In some cases, the wireless communication device 102 may process the output channels to produce a binaural recording based on a summation of binaural signals. For example, the wireless communication device 102 may apply binaural impulse responses to the output channels. This may produce binaural signals that can be used to produce the binaural recording.
Figure 47 is a flow diagram illustrating a method 4700 for selecting a codec. The method 4700 may be performed by the wireless communication device 102. The wireless communication device 102 may determine 4702 the energy distribution curves of multiple audio signals. The wireless communication device 102 may then display 4704 the energy distribution curve of each of the multiple audio signals. For example, the wireless communication device 102 may display 4704 the energy distribution curves of the front-left, front-right, back-left and back-right audio signals. The wireless communication device 102 may then detect 4706 an input selecting an energy distribution curve. In some implementations, the input may be based on user input. For example, the user may select, based on its graphical representation, an energy distribution curve that should be compressed (e.g., one corresponding to a directional sound). In some examples, the selection may reflect an indication of which directional audio signal should have better sound quality; for example, the selection may reflect the direction from which the user's voice originates.
The wireless communication device 102 may associate 4708 a codec with the input. For example, the wireless communication device 102 may associate 4708 a codec that produces better audio quality for the user-selected directional audio signal. The wireless communication device 102 may then compress 4710 the multiple audio signals based on the codec to produce audio signal packets. As described above, the packets may then be transmitted over the air. In some implementations, the wireless communication device may also transmit a channel identification.
Figure 48 is a flow diagram illustrating a method 4800 for increasing a bit allocation. The method 4800 may be performed by the wireless communication device 102. The wireless communication device 102 may determine 4802 the energy distribution curves of multiple audio signals. The wireless communication device 102 may then display 4804 the energy distribution curve of each of the multiple audio signals. For example, the wireless communication device 102 may display 4804 the energy distribution curves of the front-left, front-right, back-left and back-right audio signals. The wireless communication device 102 may then detect 4806 an input selecting an energy distribution curve. In some implementations, the input may be based on user input. For example, the user may select, based on its graphical representation, an energy distribution curve that should be allocated more bits for compression (e.g., one corresponding to a directional sound). In some examples, the selection may reflect an indication of which directional audio signal should have better sound quality; for example, the selection may reflect the direction from which the user's voice originates.
The wireless communication device 102 may associate 4808 a codec with the input. For example, the wireless communication device 102 may associate 4808 a codec that produces better audio quality for the user-selected directional audio signal. The wireless communication device 102 may then increase 4810 the bit allocation of the codec used to compress the audio signal, based on the input. As described above, the packets may then be transmitted over the air.
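The bit-allocation increase of step 4810 might look like the sketch below. The 50% boost, the equal starting split and the integer rounding are illustrative assumptions; the patent only requires that the selected channel's codec receive more bits within the overall budget.

```python
def reallocate_bits(allocations, selected, total_bits, boost=0.5):
    """Increase the bit allocation of the user-selected channel's codec
    and scale the remaining channels down so the total budget holds."""
    extra = int(allocations[selected] * boost)
    remaining = total_bits - (allocations[selected] + extra)
    others = [k for k in allocations if k != selected]
    new = {selected: allocations[selected] + extra}
    for k in others:
        new[k] = remaining // len(others)
    return new
```

Starting from an even 100-bit split over four channels, selecting FL raises its share to 150 bits while the other three drop to 83 bits each, staying within the 400-bit budget.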
Figure 49 illustrates certain components that may be included in a wireless communication device 4902. One or more of the wireless communication devices described above may be configured similarly to the wireless communication device 4902 shown in Figure 49.
The wireless communication device 4902 includes a processor 4958. The processor 4958 may be a general-purpose single- or multi-chip microprocessor (e.g., an ARM), a special-purpose microprocessor (e.g., a digital signal processor (DSP)), a microcontroller, a programmable gate array, etc. The processor 4958 may be referred to as a central processing unit (CPU). Although only a single processor 4958 is shown in the wireless communication device 4902 of Figure 49, in an alternative configuration a combination of processors 4958 (e.g., an ARM and a DSP) could be used.
The wireless communication device 4902 also includes memory 4956 in electronic communication with the processor 4958 (i.e., the processor 4958 can read information from and/or write information to the memory 4956). The memory 4956 may be any electronic component capable of storing electronic information. The memory 4956 may be random access memory (RAM), read-only memory (ROM), magnetic disk storage media, optical storage media, flash memory devices in RAM, on-board memory included with the processor 4958, programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable PROM (EEPROM), registers, and so forth, including combinations thereof.
Data 4960 and instructions 4962 may be stored in the memory 4956. The instructions 4962 may include one or more programs, routines, subroutines, functions, procedures, code, etc. The instructions 4962 may include a single computer-readable statement or many computer-readable statements. The instructions 4962 may be executable by the processor 4958 to implement one or more of the methods described above. Executing the instructions 4962 may involve the use of the data 4960 stored in the memory 4956. Figure 49 shows some instructions 4962a and data 4960a loaded into the processor 4958 (which may come from the instructions 4962 and data 4960 in the memory 4956).
The wireless communication device 4902 may also include a transmitter 4964 and a receiver 4966 to allow transmission and reception of signals between the wireless communication device 4902 and a remote location (e.g., a communication device, a base station, etc.). The transmitter 4964 and the receiver 4966 may be collectively referred to as a transceiver 4968. An antenna 4970 may be electrically coupled to the transceiver 4968. The wireless communication device 4902 may also include (not shown) multiple transmitters 4964, multiple receivers 4966, multiple transceivers 4968 and/or multiple antennas 4970.
In some configurations, the wireless communication device 4902 may include one or more microphones for capturing acoustic signals. In one configuration, a microphone may be a transducer that converts acoustic signals (e.g., voice, speech) into electrical or electronic signals. Additionally or alternatively, the wireless communication device 4902 may include one or more speakers. In one configuration, a speaker may be a transducer that converts electrical or electronic signals into acoustic signals.
The various components of the wireless communication device 4902 may be coupled together by one or more buses, which may include a power bus, a control signal bus, a status signal bus, a data bus, etc. For simplicity, the various buses are illustrated in Figure 49 as a bus system 4972.
The methods and apparatus disclosed herein may be applied generally in any transceiving and/or audio sensing application, especially mobile or otherwise portable instances of such applications. For example, the range of configurations disclosed herein includes communication devices that reside in a wireless telephony communication system configured to employ a code-division multiple-access (CDMA) over-the-air interface. Nevertheless, those skilled in the art will appreciate that a method and apparatus having features as described herein may reside in any of the various communication systems employing a wide range of technologies known to those of skill in the art, such as systems employing Voice over IP (VoIP) over wired and/or wireless (e.g., CDMA, TDMA, FDMA and/or TD-SCDMA) transmission channels.
It is expressly contemplated and hereby disclosed that the communication devices disclosed herein may be adapted for use in networks that are packet-switched (e.g., wired and/or wireless networks arranged to carry audio transmissions according to protocols such as VoIP) and/or circuit-switched. It is also expressly contemplated and hereby disclosed that the communication devices disclosed herein may be adapted for use in narrowband coding systems (e.g., systems that encode an audio frequency range of about four or five kilohertz) and/or in wideband coding systems (e.g., systems that encode audio frequencies greater than five kilohertz), including whole-band wideband coding systems and split-band wideband coding systems.
The previous description of the disclosed configurations is provided to enable any person skilled in the art to make or use the methods and other structures disclosed herein. The flow diagrams, block diagrams and other structures shown and described herein are examples only, and other variants of these structures are also within the scope of the disclosure. Various modifications to these configurations are possible, and the general principles presented herein may be applied to other configurations as well. Thus, the present disclosure is not intended to be limited to the configurations shown above but rather is to be accorded the widest scope consistent with the principles and novel features disclosed in any fashion herein, including in the appended claims as filed, which form a part of the original disclosure.
Those of skill in the art will understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, the data, instructions, commands, information, signals, bits and symbols that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
Important design requirements for implementation of a configuration as disclosed herein may include minimizing processing delay and/or computational complexity (typically measured in millions of instructions per second, or MIPS), especially for computation-intensive applications, such as the playback of compressed audio or audiovisual information (e.g., a file or stream encoded according to a compression format, such as one of the examples identified herein) or applications for wideband communications (e.g., voice communications at sampling rates higher than eight kilohertz, such as 12, 16 or 44 kHz).
Goals of a multi-microphone processing system may include achieving ten to twelve dB in overall noise reduction, preserving voice level and color during movement of a desired speaker, obtaining a perception that the noise has been moved into the background rather than an aggressive noise removal, dereverberation of speech, and/or enabling the option of post-processing for more aggressive noise reduction.
The various elements of an implementation of an apparatus as disclosed herein may be embodied in any combination of hardware with software and/or with firmware that is deemed suitable for the intended application. For example, such elements may be fabricated as electronic and/or optical devices residing, for example, on the same chip or among two or more chips in a chipset. One example of such a device is a fixed or programmable array of logic elements, such as transistors or logic gates, and any of these elements may be implemented as one or more such arrays. Any two or more, or even all, of these elements may be implemented within the same array or arrays. Such an array or arrays may be implemented within one or more chips (for example, within a chipset including two or more chips).
One or more elements of the various implementations of the apparatus disclosed herein may also be implemented, in whole or in part, as one or more sets of instructions arranged to execute on one or more fixed or programmable arrays of logic elements, such as microprocessors, embedded processors, IP cores, digital signal processors, FPGAs (field-programmable gate arrays), ASSPs (application-specific standard products) and ASICs (application-specific integrated circuits). Any of the various elements of an implementation of an apparatus as disclosed herein may also be embodied as one or more computers (e.g., machines including one or more arrays programmed to execute one or more sets or sequences of instructions, also called "processors"), and any two or more, or even all, of these elements may be implemented within the same such computer or computers.
A processor or other means for processing as disclosed herein may be fabricated as one or more electronic and/or optical devices residing, for example, on the same chip or among two or more chips in a chipset. One example of such a device is a fixed or programmable array of logic elements, such as transistors or logic gates, and any of these elements may be implemented as one or more such arrays. Such an array or arrays may be implemented within one or more chips (for example, within a chipset including two or more chips). Examples of such arrays include fixed or programmable arrays of logic elements, such as microprocessors, embedded processors, IP cores, DSPs, FPGAs, ASSPs and ASICs. A processor or other means for processing as disclosed herein may also be embodied as one or more computers (e.g., machines including one or more arrays programmed to execute one or more sets or sequences of instructions) or other processors. It is possible for a processor as described herein to be used to perform tasks or execute other sets of instructions that are not directly related to a procedure of directional encoding, such as a task relating to another operation of a device or system in which the processor is embedded (e.g., an audio sensing device). It is also possible for part of a method as disclosed herein to be performed by a processor of the audio sensing device and for another part of the method to be performed under the control of one or more other processors.
Those of skill in the art will appreciate that the various illustrative modules, logical blocks, circuits, and tests and other operations described in connection with the configurations disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. Such modules, logical blocks, circuits, and operations may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an ASIC or ASSP, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to produce the configuration as disclosed herein. For example, such a configuration may be implemented at least in part as a hard-wired circuit, as a circuit configuration fabricated into an application-specific integrated circuit, or as a firmware program loaded into nonvolatile storage or a software program loaded from or into a data storage medium as machine-readable code, such code being instructions executable by an array of logic elements such as a general-purpose processor or other digital signal processing unit. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. A software module may reside in RAM (random-access memory), ROM (read-only memory), nonvolatile RAM (NVRAM) such as flash RAM, erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An illustrative storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
It is noted that the various methods disclosed herein may be performed by an array of logic elements such as a processor, and that the various elements of an apparatus as described herein may be implemented as modules designed to execute on such an array. As used herein, the term "module" or "sub-module" can refer to any method, apparatus, device, unit, or computer-readable data storage medium that includes computer instructions (e.g., logical expressions) in software, hardware, or firmware form. It is to be understood that multiple modules or systems can be combined into one module or system, and one module or system can be separated into multiple modules or systems to perform the same functions. When implemented in software or other computer-executable instructions, the elements of a process are essentially the code segments that perform the related tasks, such as with routines, programs, objects, components, data structures, and the like. The term "software" should be understood to include source code, assembly-language code, machine code, binary code, firmware, macrocode, microcode, any one or more sets or sequences of instructions executable by an array of logic elements, and any combination of such examples. The program or code segments can be stored in a processor-readable medium or transmitted by a computer data signal embodied in a carrier wave over a transmission medium or communication link.
The implementations of methods, schemes, and techniques disclosed herein may also be tangibly embodied (for example, in one or more computer-readable media as listed herein) as one or more sets of instructions readable and/or executable by a machine including an array of logic elements (e.g., a processor, microprocessor, microcontroller, or other finite state machine). The term "computer-readable medium" may include any medium that can store or transfer information, including volatile, nonvolatile, removable, and non-removable media. Examples of a computer-readable medium include an electronic circuit, a semiconductor memory device, a ROM, a flash memory, an erasable ROM (EROM), a floppy diskette or other magnetic storage, a CD-ROM/DVD or other optical storage, a hard disk, a fiber-optic medium, a radio-frequency (RF) link, or any other medium which can be used to store the desired information and which can be accessed. The computer data signal may include any signal that can propagate over a transmission medium such as electronic network channels, optical fibers, air, electromagnetic paths, RF links, and the like. The code segments may be downloaded via computer networks such as the Internet or an intranet. In any case, the scope of the present invention should not be construed as limited by such configurations.
Each of the tasks of the methods described herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. In a typical application of an implementation of a method as disclosed herein, an array of logic elements (e.g., logic gates) is configured to perform one, more than one, or even all of the various tasks of the method. One or more (possibly all) of the tasks may also be implemented as code (e.g., one or more sets of instructions) embodied in a computer program product (e.g., one or more data storage media such as disks, flash or other nonvolatile memory cards, semiconductor memory chips, etc.) that is readable and/or executable by a machine (e.g., a computer) including an array of logic elements (e.g., a processor, microprocessor, microcontroller, or other finite state machine). The tasks of an implementation of a method as disclosed herein may also be performed by more than one such array or machine. In these or other implementations, the tasks may be performed within a device for wireless communications, such as a cellular telephone or other device having such communications capability. Such a device may be configured to communicate with circuit-switched and/or packet-switched networks (e.g., using one or more protocols such as VoIP). For example, such a device may include RF circuitry configured to receive and/or transmit encoded frames.
It is expressly disclosed that the various methods disclosed herein may be performed by a portable communications device, such as a handset, headset, or portable digital assistant (PDA), and that the various apparatus described herein may be included within such a device. A typical real-time (e.g., online) application is a telephone conversation conducted using such a mobile device.
In one or more exemplary configurations, the operations described herein may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, such operations may be stored on, or transmitted over, a computer-readable medium as one or more instructions or code. The term "computer-readable media" includes both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise an array of storage elements, such as semiconductor memory (which may include, without limitation, dynamic or static RAM, ROM, EEPROM, and/or flash RAM) or ferroelectric, magnetoresistive, ovonic, polymeric, or phase-change memory; CD-ROM or other optical disk storage; magnetic disk storage or other magnetic storage devices; or any other tangible structure that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber-optic cable, twisted pair, digital subscriber line (DSL), or wireless technology such as infrared, radio, and/or microwave, then the coaxial cable, fiber-optic cable, twisted pair, DSL, or wireless technology such as infrared, radio, and/or microwave is included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray Disc™ (Blu-ray Disc Association, Universal City, CA), where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
An acoustic signal processing apparatus as described herein may be incorporated into an electronic device, such as a communications device, that accepts speech input in order to control certain operations, or that may otherwise benefit from separation of desired sounds from background noise. Many applications may benefit from enhancing or separating clear desired sound from background sounds originating from multiple directions. Such applications may include human-machine interfaces in electronic or computing devices that incorporate capabilities such as voice recognition and detection, speech enhancement and separation, and voice-activated control. It may be desirable to implement such an acoustic signal processing apparatus to be suitable in devices that provide only limited processing capabilities.
The elements of the various implementations of the modules, elements, and devices described herein may be fabricated as electronic and/or optical devices residing, for example, on the same chip or among two or more chips in a chipset. One example of such a device is a fixed or programmable array of logic elements, such as transistors or gates. One or more elements of the various implementations of the apparatus described herein may also be implemented, in whole or in part, as one or more sets of instructions arranged to execute on one or more fixed or programmable arrays of logic elements (e.g., microprocessors, embedded processors, IP cores, digital signal processors, FPGAs, ASSPs, and ASICs).
It is possible for one or more elements of an implementation of an apparatus as described herein to be used to perform tasks or execute other sets of instructions that are not directly related to an operation of the apparatus, such as a task relating to another operation of a device or system in which the apparatus is embedded. It is also possible for one or more elements of an implementation of such an apparatus to have structure in common (e.g., a processor used to execute portions of code corresponding to different elements at different times, a set of instructions executed to perform tasks corresponding to different elements at different times, or an arrangement of electronic and/or optical devices performing operations for different elements at different times).
In the above description, reference numbers have sometimes been used in connection with various terms. Where a term is used in connection with a reference number, this may be meant to refer to a specific element shown in one or more of the Figures. Where a term is used without a reference number, this is meant to refer generally to the term, without limitation to any particular Figure.
In accordance with the present disclosure, a circuit in a mobile device may be adapted to receive signal conversion commands and associated data relating to multiple types of compressed audio bitstreams. A second section of the same circuit, a different circuit, or a second section of the same or a different circuit may be adapted to perform a conversion as part of the signal conversion for the multiple types of compressed audio bitstreams. The second section may advantageously be coupled to the first section, or it may be embodied in the same circuit as the first section. In addition, a third section of the same circuit, a different circuit, or the same or a different circuit may be adapted to perform complementary processing as part of the signal conversion for the multiple types of compressed audio bitstreams. The third section may advantageously be coupled to the first and second sections, or it may be embodied in the same circuit as the first and second sections. In addition, a fourth section of the same circuit, a different circuit, or the same or a different circuit may be adapted to control the configuration of the circuits or circuit sections that provide the functionality described above.
The term "determining" encompasses a wide variety of actions, and therefore "determining" can include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database, or another data structure), ascertaining, and the like. Also, "determining" can include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory), and the like. Also, "determining" can include resolving, selecting, choosing, establishing, and the like.
Claims (50)
1. A method for encoding three-dimensional audio by a wireless communication device, the method comprising:
detecting an indication of a spatial direction of a plurality of localizable audio sources;
recording a plurality of audio signals associated with the plurality of localizable audio sources; and
encoding the plurality of audio signals.
2. The method of claim 1, wherein the indication of the spatial direction of the localizable audio sources is based on a received input.
3. The method of claim 1, further comprising:
determining a number of localizable audio sources; and
estimating a direction of arrival of each localizable audio source.
4. The method of claim 1, further comprising encoding a multichannel signal according to a three-dimensional audio encoding scheme.
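Outside the claim language, the source counting and direction-of-arrival estimation of claim 3 can be illustrated with a minimal cross-correlation sketch for a two-microphone pair. The estimator, names, and parameters below are illustrative assumptions only; the claims do not mandate any particular method:

```python
import numpy as np

C = 343.0  # assumed speed of sound in air, m/s


def estimate_doa(x0, x1, mic_spacing, fs):
    """Estimate one direction of arrival (degrees from broadside) for a
    two-microphone pair: find the inter-channel lag that maximizes the
    cross-correlation and convert it to an angle under a far-field model."""
    n = len(x0)
    corr = np.correlate(x1, x0, mode="full")  # lags -(n-1) .. n-1
    lag = int(np.argmax(corr)) - (n - 1)      # samples by which x1 lags x0
    tdoa = lag / fs
    # Far-field geometry: sin(theta) = c * tdoa / d, clipped to valid range.
    s = np.clip(C * tdoa / mic_spacing, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))
```

For a 0.1 m pair sampled at 48 kHz, a 7-sample inter-channel lag corresponds to a source at roughly 30° from broadside; a multi-source version would repeat this per detected source.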
5. The method of claim 1, further comprising:
applying a beam in a first end-fire direction to obtain a first filtered signal;
applying a beam in a second end-fire direction to obtain a second filtered signal; and
combining the first filtered signal with a delayed version of the second filtered signal.
6. The method of claim 5, wherein each of the first and second filtered signals has at least two channels, and wherein one of the filtered signals is delayed relative to the other filtered signal.
7. The method of claim 6, further comprising:
delaying a first channel of the first filtered signal relative to a second channel of the first filtered signal; and
delaying a first channel of the second filtered signal relative to a second channel of the second filtered signal.
8. The method of claim 6, further comprising delaying a first channel of the combined signal relative to a second channel of the combined signal.
9. The method of claim 1, further comprising:
applying a filter having a beam in a first direction to a signal produced by a first pair of microphones to obtain a first spatially filtered signal;
applying a filter having a beam in a second direction to a signal produced by a second pair of microphones to obtain a second spatially filtered signal; and
combining the first and second spatially filtered signals to obtain an output signal.
10. The method of claim 1, further comprising:
recording a corresponding input channel for each of a plurality of microphones in an array; and
for each of a plurality of look directions, applying a corresponding multichannel filter to the plurality of recorded input channels to obtain a corresponding output channel,
wherein each of the multichannel filters applies a beam in the corresponding look direction and a null beam in the other look directions.
11. The method of claim 10, further comprising processing the plurality of output channels to generate a binaural recording.
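Claims 10 and 11 recite multichannel filters that each place a beam in one look direction and null beams in the other look directions. A narrowband sketch of such a filter bank for a uniform linear array, using a least-squares (pseudo-inverse) construction, follows; this construction is an illustrative assumption, not the patented filter design:

```python
import numpy as np


def null_steering_weights(look_angles_deg, num_mics, spacing, freq, c=343.0):
    """Per look direction, narrowband weights with unit gain toward that
    direction and nulls toward the other look directions (uniform linear
    array, far-field model).  Returns W of shape (num_mics, num_looks),
    where column k is the filter for look direction k."""
    angles = np.deg2rad(np.asarray(look_angles_deg, dtype=float))
    m = np.arange(num_mics)
    # Steering matrix: one column per look direction.
    D = np.exp(-2j * np.pi * freq * spacing * np.outer(m, np.cos(angles)) / c)
    # Solve W^H D = I: beam on the own direction, null beams elsewhere.
    return np.linalg.pinv(D).conj().T
```

Each output channel (the per-look-direction signals of claim 10) would then be obtained by applying the corresponding column of `W` across the recorded input channels; a binaural renderer, as in claim 11, could mix these channels with head-related transfer functions.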
12. The method of claim 5, wherein applying a beam in an end-fire direction comprises applying the beam to frequencies between a low threshold and a high threshold, wherein at least one of the low and high thresholds is based on a distance between microphones.
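The end-fire beams of claims 5 through 8, and the microphone-spacing-dependent frequency thresholds of claim 12, can be sketched as a first-order differential beamformer pair whose upper limit is the spatial-aliasing frequency c/(2d) for spacing d. The specific filter and the threshold choices here are illustrative assumptions, not the patented design:

```python
import numpy as np

C = 343.0  # assumed speed of sound in air, m/s


def endfire_beam_pair(x, mic_spacing, fs, c=C):
    """First-order differential beams toward the two end-fire directions of
    a microphone pair, combined as in claim 5: the first filtered signal
    plus a delayed version of the second filtered signal.
    x: array of shape (2, n) holding the two microphone channels."""
    d = max(1, int(round(mic_spacing / c * fs)))  # inter-mic delay, samples
    front = x[0] - np.roll(x[1], d)   # null toward the rear end-fire direction
    back = x[1] - np.roll(x[0], d)    # null toward the front end-fire direction
    return front + np.roll(back, d)   # combine with the delayed second beam


def beam_band(mic_spacing, low_hz=100.0, c=C):
    """Claim 12: low and high frequency thresholds for applying the beam;
    the high threshold here is the spatial-aliasing limit c / (2 d)."""
    return low_hz, c / (2.0 * mic_spacing)
```

For a 2 cm pair, `beam_band(0.02)` gives an upper threshold of 8575 Hz; above that frequency the pair is spatially ambiguous, which is why the claim ties the threshold to the distance between microphones.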
13. A method for selecting a codec by a wireless communication device, the method comprising:
determining an energy profile of each of a plurality of audio signals;
displaying the energy profile of each of the plurality of audio signals;
detecting an input that selects an energy profile;
associating a codec with the input; and
compressing the plurality of audio signals based on the codec to generate a packet.
14. The method of claim 13, further comprising transmitting the packet over the air.
15. The method of claim 13, further comprising transmitting a channel identification.
16. A method for increasing a bit allocation by a wireless communication device, the method comprising:
determining an energy profile of each of a plurality of audio signals;
displaying the energy profile of each of the plurality of audio signals;
detecting an input that selects an energy profile;
associating a codec with the input; and
increasing, based on the input, a bit allocation of the codec used to compress an audio signal.
17. The method of claim 16, wherein the compression of the audio signals produces four packets that are transmitted over the air.
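Claims 13 through 17 determine and display a per-signal energy profile ("energy distribution curve"), associate a codec with a selection input, and increase the bit allocation of that codec. A minimal sketch, assuming frame-energy profiles and a proportional-to-energy allocation rule (both are illustrative assumptions; the claims leave the rule open):

```python
import numpy as np


def energy_profile(signal, frame_len=160):
    """Per-frame energy of one audio signal -- the profile that claims 13
    and 16 determine and display (frame_len=160 assumes e.g. 20 ms frames
    at 8 kHz)."""
    n = len(signal) // frame_len
    frames = np.reshape(signal[: n * frame_len], (n, frame_len))
    return np.sum(frames ** 2, axis=1)


def allocate_bits(signals, total_bits):
    """Sketch of claim 16: give a larger bit allocation to the codec
    compressing the more energetic signals.  Here the selection input is
    taken to be the energy ranking itself, and bits are split in
    proportion to total energy."""
    totals = np.array([energy_profile(s).sum() for s in signals])
    share = totals / totals.sum()
    return np.round(share * total_bits).astype(int)
```

In the four-packet arrangement of claims 17 and 34, each of four such signals would be compressed with its allocated budget and transmitted over the air as its own packet.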
18. A wireless communication device for encoding three-dimensional audio, comprising:
spatial direction circuitry that detects an indication of a spatial direction of a plurality of localizable audio sources;
recording circuitry, coupled to the spatial direction circuitry, wherein the recording circuitry records a plurality of audio signals associated with the plurality of localizable audio sources; and
an encoder, coupled to the recording circuitry, wherein the encoder encodes the plurality of audio signals.
19. The wireless communication device of claim 18, wherein the indication of the spatial direction of the localizable audio sources is based on a received input.
20. The wireless communication device of claim 18, further comprising:
audio source determination circuitry that determines a number of localizable audio sources; and
estimation circuitry, coupled to the audio source determination circuitry, wherein the estimation circuitry estimates a direction of arrival of each localizable audio source.
21. The wireless communication device of claim 18, further comprising encoding circuitry coupled to the estimation circuitry, wherein the encoding circuitry encodes a multichannel signal according to a three-dimensional audio encoding scheme.
22. The wireless communication device of claim 18, further comprising:
first beam application circuitry coupled to decomposition circuitry, wherein the first beam application circuitry applies a beam in a first end-fire direction to obtain a first filtered signal;
second beam application circuitry coupled to the first beam application circuitry, wherein the second beam application circuitry applies a beam in a second end-fire direction to obtain a second filtered signal; and
combining circuitry coupled to the first and second beam application circuitry, wherein the combining circuitry combines the first filtered signal with a delayed version of the second filtered signal.
23. The wireless communication device of claim 22, wherein each of the first and second filtered signals has at least two channels, and wherein one of the filtered signals is delayed relative to the other filtered signal.
24. The wireless communication device of claim 23, further comprising:
delay circuitry coupled to the decomposition circuitry, wherein the delay circuitry delays a first channel of the first filtered signal relative to a second channel of the first filtered signal, and delays a first channel of the second filtered signal relative to a second channel of the second filtered signal.
25. The wireless communication device of claim 24, wherein the delay circuitry delays a first channel of the combined signal relative to a second channel of the combined signal.
26. The wireless communication device of claim 18, further comprising:
filter circuitry coupled to the decomposition circuitry, wherein the filter circuitry applies a filter having a beam in a first direction to a signal produced by a first pair of microphones to obtain a first spatially filtered signal, and applies a filter having a beam in a second direction to a signal produced by a second pair of microphones to obtain a second spatially filtered signal; and
combining circuitry coupled to the filter circuitry, wherein the combining circuitry combines the first and second spatially filtered signals to obtain an output signal.
27. The wireless communication device of claim 18, further comprising:
recording circuitry coupled to the decomposition circuitry, wherein the recording circuitry records a corresponding input channel for each of a plurality of microphones in an array; and
multichannel filter circuitry coupled to the recording circuitry, wherein, for each of a plurality of look directions, the multichannel filter circuitry applies a corresponding multichannel filter to the plurality of recorded input channels to obtain a corresponding output channel,
wherein each of the multichannel filters applies a beam in the corresponding look direction and a null beam in the other look directions.
28. The wireless communication device of claim 27, further comprising binaural recording circuitry coupled to the multichannel filter circuitry, wherein the binaural recording circuitry processes the plurality of output channels to generate a binaural recording.
29. The wireless communication device of claim 22, wherein applying a beam in an end-fire direction comprises applying the beam to frequencies between a low threshold and a high threshold, wherein at least one of the low and high thresholds is based on a distance between microphones.
30. A wireless communication device for selecting a codec, comprising:
energy profile circuitry that determines an energy profile of each of a plurality of audio signals;
a display coupled to the energy profile circuitry, wherein the display shows the energy profile of each of the plurality of audio signals;
input detection circuitry coupled to the display, wherein the input detection circuitry detects an input that selects an energy profile;
association circuitry coupled to the input detection circuitry, wherein the association circuitry associates a codec with the input; and
compression circuitry coupled to the association circuitry, wherein the compression circuitry compresses the plurality of audio signals based on the codec to generate a packet.
31. The wireless communication device of claim 30, further comprising a transmitter coupled to the compression circuitry, wherein the transmitter transmits the packet over the air.
32. The wireless communication device of claim 31, wherein the transmitter transmits a channel identification.
33. A wireless communication device for increasing a bit allocation, comprising:
energy profile circuitry that determines an energy profile of each of a plurality of audio signals;
a display coupled to the energy profile circuitry, wherein the display shows the energy profile of each of the plurality of audio signals;
input detection circuitry coupled to the display, wherein the input detection circuitry detects an input that selects an energy profile;
association circuitry coupled to the input detection circuitry, wherein the association circuitry associates a codec with the input; and
bit allocation circuitry coupled to the association circuitry, wherein the bit allocation circuitry increases, based on the input, a bit allocation of the codec used to compress an audio signal.
34. The wireless communication device of claim 33, wherein the compression of the audio signals produces four packets that are transmitted over the air.
35. A computer program product for encoding three-dimensional audio, comprising a non-transitory tangible computer-readable medium having instructions thereon, the instructions comprising:
code for causing a wireless communication device to detect an indication of a spatial direction of a plurality of localizable audio sources;
code for causing the wireless communication device to record a plurality of audio signals associated with the plurality of localizable audio sources; and
code for causing the wireless communication device to encode the plurality of audio signals.
36. The computer program product of claim 35, wherein the indication of the spatial direction of the localizable audio sources is based on a received input.
37. The computer program product of claim 35, wherein the instructions further comprise code for causing the wireless communication device to encode a multichannel signal according to a three-dimensional audio encoding scheme.
38. A computer program product for selecting a codec, comprising a non-transitory tangible computer-readable medium having instructions thereon, the instructions comprising:
code for causing a wireless communication device to determine an energy profile of each of a plurality of audio signals;
code for causing the wireless communication device to display the energy profile of each of the plurality of audio signals;
code for causing the wireless communication device to detect an input that selects an energy profile;
code for causing the wireless communication device to associate a codec with the input; and
code for causing the wireless communication device to compress the plurality of audio signals based on the codec to generate a packet.
39. The computer program product of claim 38, wherein the instructions further comprise code for causing the wireless communication device to transmit the packet over the air.
40. The computer program product of claim 38, wherein the instructions further comprise code for causing the wireless communication device to transmit a channel identification.
41. A computer program product for increasing a bit allocation, comprising a non-transitory tangible computer-readable medium having instructions thereon, the instructions comprising:
code for causing a wireless communication device to determine an energy profile of each of a plurality of audio signals;
code for causing the wireless communication device to display the energy profile of each of the plurality of audio signals;
code for causing the wireless communication device to detect an input that selects an energy profile;
code for causing the wireless communication device to associate a codec with the input; and
code for causing the wireless communication device to increase, based on the input, a bit allocation of the codec used to compress an audio signal.
42. The computer program product of claim 41, wherein the compression of the audio signals produces four packets that are transmitted over the air.
43. An apparatus for encoding three-dimensional audio, comprising:
means for detecting an indication of a spatial direction of a plurality of localizable audio sources;
means for recording a plurality of audio signals associated with the plurality of localizable audio sources; and
means for encoding the plurality of audio signals.
44. The apparatus of claim 43, wherein the indication of the spatial direction of the localizable audio sources is based on a received input.
45. The apparatus of claim 43, further comprising means for encoding a multichannel signal according to a three-dimensional audio encoding scheme.
46. An apparatus for selecting a codec by a wireless communication device, comprising:
means for determining an energy profile of each of a plurality of audio signals;
means for displaying the energy profile of each of the plurality of audio signals;
means for detecting an input that selects an energy profile;
means for associating a codec with the input; and
means for compressing the plurality of audio signals based on the codec to generate a packet.
47. The apparatus of claim 46, further comprising means for transmitting the packet over the air.
48. The apparatus of claim 46, further comprising means for transmitting a channel identification.
49. An apparatus for increasing a bit allocation, comprising:
means for determining an energy profile of each of a plurality of audio signals;
means for displaying the energy profile of each of the plurality of audio signals;
means for detecting an input that selects an energy profile;
means for associating a codec with the input; and
means for increasing, based on the input, a bit allocation of the codec used to compress an audio signal.
50. The apparatus of claim 49, wherein the compression of the audio signals produces four packets that are transmitted over the air.
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201261651185P | 2012-05-24 | 2012-05-24 | |
US61/651,185 | 2012-05-24 | ||
US13/664,701 US9161149B2 (en) | 2012-05-24 | 2012-10-31 | Three-dimensional sound compression and over-the-air transmission during a call |
US13/664,701 | 2012-10-31 | ||
PCT/US2013/040137 WO2013176890A2 (en) | 2012-05-24 | 2013-05-08 | Three-dimensional sound compression and over-the-air-transmission during a call |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104321812A true CN104321812A (en) | 2015-01-28 |
CN104321812B CN104321812B (en) | 2016-10-05 |
Family
ID=49621612
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201380026946.9A Expired - Fee Related CN104321812B (en) | 2012-05-24 | 2013-05-08 | Three dimensional sound compression during calling and air-launched |
Country Status (6)
Country | Link |
---|---|
US (3) | US20130315402A1 (en) |
EP (1) | EP2856464B1 (en) |
JP (1) | JP6336968B2 (en) |
KR (1) | KR101705960B1 (en) |
CN (1) | CN104321812B (en) |
WO (2) | WO2013176890A2 (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104637494A (en) * | 2015-02-02 | 2015-05-20 | Harbin Engineering University | Dual-microphone speech signal enhancement method for mobile devices based on blind source separation
CN106356074A (en) * | 2015-07-16 | 2017-01-25 | Chunghwa Picture Tubes, Ltd. | Audio processing system and audio processing method thereof
CN108028977A (en) * | 2015-09-09 | 2018-05-11 | Microsoft Technology Licensing, LLC | Microphone placement for sound source direction estimation
CN110858943A (en) * | 2018-08-24 | 2020-03-03 | Wistron Corporation | Sound reception processing device and sound reception processing method thereof
CN112259110A (en) * | 2020-11-17 | 2021-01-22 | Beijing SoundAI Technology Co., Ltd. | Audio encoding method and device and audio decoding method and device
CN113329138A (en) * | 2021-06-03 | 2021-08-31 | Vivo Mobile Communication Co., Ltd. | Video shooting method, video playing method and electronic equipment
WO2024082181A1 (en) * | 2022-10-19 | 2024-04-25 | Beijing Xiaomi Mobile Software Co., Ltd. | Spatial audio collection method and apparatus
Families Citing this family (51)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11665482B2 (en) | 2011-12-23 | 2023-05-30 | Shenzhen Shokz Co., Ltd. | Bone conduction speaker and compound vibration device thereof |
WO2020051786A1 (en) * | 2018-09-12 | 2020-03-19 | Shenzhen Voxtech Co., Ltd. | Signal processing device having multiple acoustic-electric transducers |
US20130315402A1 (en) | 2012-05-24 | 2013-11-28 | Qualcomm Incorporated | Three-dimensional sound compression and over-the-air transmission during a call |
US9264524B2 (en) | 2012-08-03 | 2016-02-16 | The Penn State Research Foundation | Microphone array transducer for acoustic musical instrument |
US8884150B2 (en) * | 2012-08-03 | 2014-11-11 | The Penn State Research Foundation | Microphone array transducer for acoustical musical instrument |
US9460729B2 (en) * | 2012-09-21 | 2016-10-04 | Dolby Laboratories Licensing Corporation | Layered approach to spatial audio coding |
US10194239B2 (en) * | 2012-11-06 | 2019-01-29 | Nokia Technologies Oy | Multi-resolution audio signals |
KR20140070766A (en) * | 2012-11-27 | 2014-06-11 | 삼성전자주식회사 | Wireless communication method and system of hearing aid apparatus |
WO2014087195A1 (en) | 2012-12-05 | 2014-06-12 | Nokia Corporation | Orientation Based Microphone Selection Apparatus |
US9521486B1 (en) * | 2013-02-04 | 2016-12-13 | Amazon Technologies, Inc. | Frequency based beamforming |
US10750132B2 (en) * | 2013-03-14 | 2020-08-18 | Pelco, Inc. | System and method for audio source localization using multiple audio sensors |
CN105284129A (en) * | 2013-04-10 | 2016-01-27 | 诺基亚技术有限公司 | Audio recording and playback apparatus |
EP2992687B1 (en) * | 2013-04-29 | 2018-06-06 | University Of Surrey | Microphone array for acoustic source separation |
CN103699260B (en) * | 2013-12-13 | 2017-03-08 | Huawei Technologies Co., Ltd. | Method for starting a function module of a terminal, and terminal device
GB2521649B (en) * | 2013-12-27 | 2018-12-12 | Nokia Technologies Oy | Method, apparatus, computer program code and storage medium for processing audio signals |
KR102201027B1 (en) | 2014-03-24 | 2021-01-11 | Dolby International AB | Method and device for applying dynamic range compression to a higher order ambisonics signal
KR102216048B1 (en) * | 2014-05-20 | 2021-02-15 | Samsung Electronics Co., Ltd. | Apparatus and method for recognizing voice command
WO2015181727A2 (en) * | 2014-05-26 | 2015-12-03 | Vladimir Sherman | Methods circuits devices systems and associated computer executable code for acquiring acoustic signals |
EP2960903A1 (en) | 2014-06-27 | 2015-12-30 | Thomson Licensing | Method and apparatus for determining for the compression of an HOA data frame representation a lowest integer number of bits required for representing non-differential gain values |
US10073607B2 (en) | 2014-07-03 | 2018-09-11 | Qualcomm Incorporated | Single-channel or multi-channel audio control interface |
CN105451151B (en) * | 2014-08-29 | 2018-09-21 | Huawei Technologies Co., Ltd. | Method and device for processing a voice signal
US9875745B2 (en) * | 2014-10-07 | 2018-01-23 | Qualcomm Incorporated | Normalization of ambient higher order ambisonic audio data |
KR102008745B1 (en) * | 2014-12-18 | 2019-08-09 | 후아웨이 테크놀러지 컴퍼니 리미티드 | Surround sound recording for mobile devices |
US9712936B2 (en) * | 2015-02-03 | 2017-07-18 | Qualcomm Incorporated | Coding higher-order ambisonic audio data with motion stabilization |
USD768596S1 (en) * | 2015-04-20 | 2016-10-11 | Pietro V. Covello | Media player |
US10187738B2 (en) * | 2015-04-29 | 2019-01-22 | International Business Machines Corporation | System and method for cognitive filtering of audio in noisy environments |
US10327067B2 (en) * | 2015-05-08 | 2019-06-18 | Samsung Electronics Co., Ltd. | Three-dimensional sound reproduction method and device |
GB2540175A (en) | 2015-07-08 | 2017-01-11 | Nokia Technologies Oy | Spatial audio processing apparatus |
WO2017143067A1 (en) * | 2016-02-19 | 2017-08-24 | Dolby Laboratories Licensing Corporation | Sound capture for mobile devices |
US11722821B2 (en) | 2016-02-19 | 2023-08-08 | Dolby Laboratories Licensing Corporation | Sound capture for mobile devices |
GB201607455D0 (en) * | 2016-04-29 | 2016-06-15 | Nokia Technologies Oy | An apparatus, electronic device, system, method and computer program for capturing audio signals |
US9858944B1 (en) * | 2016-07-08 | 2018-01-02 | Apple Inc. | Apparatus and method for linear and nonlinear acoustic echo control using additional microphones collocated with a loudspeaker |
KR102277438B1 (en) | 2016-10-21 | 2021-07-14 | Samsung Electronics Co., Ltd. | Method for transmitting and outputting an audio signal in multimedia communication between terminal devices, and terminal device performing the same
US10362393B2 (en) | 2017-02-08 | 2019-07-23 | Logitech Europe, S.A. | Direction detection device for acquiring and processing audible input |
US10366700B2 (en) | 2017-02-08 | 2019-07-30 | Logitech Europe, S.A. | Device for acquiring and processing audible input |
US10366702B2 (en) | 2017-02-08 | 2019-07-30 | Logitech Europe, S.A. | Direction detection device for acquiring and processing audible input |
US10229667B2 (en) | 2017-02-08 | 2019-03-12 | Logitech Europe S.A. | Multi-directional beamforming device for acquiring and processing audible input |
US9820073B1 (en) | 2017-05-10 | 2017-11-14 | Tls Corp. | Extracting a common signal from multiple audio signals |
US10129648B1 (en) | 2017-05-11 | 2018-11-13 | Microsoft Technology Licensing, Llc | Hinged computing device for binaural recording |
US10789949B2 (en) * | 2017-06-20 | 2020-09-29 | Bose Corporation | Audio device with wakeup word detection |
US10665234B2 (en) * | 2017-10-18 | 2020-05-26 | Motorola Mobility Llc | Detecting audio trigger phrases for a voice recognition session |
WO2020051836A1 (en) * | 2018-09-13 | 2020-03-19 | Alibaba Group Holding Limited | Methods and devices for processing audio input using unidirectional audio input devices |
IL307415B1 (en) | 2018-10-08 | 2024-07-01 | Dolby Laboratories Licensing Corp | Transforming audio signals captured in different formats into a reduced number of formats for simplifying encoding and decoding operations |
US11049509B2 (en) * | 2019-03-06 | 2021-06-29 | Plantronics, Inc. | Voice signal enhancement for head-worn audio devices |
CN111986695B (en) * | 2019-05-24 | 2023-07-25 | Institute of Acoustics, Chinese Academy of Sciences | Fast independent vector analysis method and system for blind speech separation based on non-overlapping sub-band division
US11380312B1 (en) * | 2019-06-20 | 2022-07-05 | Amazon Technologies, Inc. | Residual echo suppression for keyword detection |
US11638111B2 (en) * | 2019-11-01 | 2023-04-25 | Meta Platforms Technologies, Llc | Systems and methods for classifying beamformed signals for binaural audio playback |
TWI740339B (en) * | 2019-12-31 | 2021-09-21 | 宏碁股份有限公司 | Method for automatically adjusting specific sound source and electronic device using same |
US11277689B2 (en) | 2020-02-24 | 2022-03-15 | Logitech Europe S.A. | Apparatus and method for optimizing sound quality of a generated audible signal |
CN111246285A (en) * | 2020-03-24 | 2020-06-05 | Beijing QIYI Century Science & Technology Co., Ltd. | Method for separating sound in a commentary video, and method and device for adjusting volume
US11200908B2 (en) * | 2020-03-27 | 2021-12-14 | Fortemedia, Inc. | Method and device for improving voice quality |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1433355A1 (en) * | 2001-07-19 | 2004-06-30 | Vast Audio Pty Ltd | Recording a three dimensional auditory scene and reproducing it for the individual listener |
US7184559B2 (en) * | 2001-02-23 | 2007-02-27 | Hewlett-Packard Development Company, L.P. | System and method for audio telepresence |
US20090080632A1 (en) * | 2007-09-25 | 2009-03-26 | Microsoft Corporation | Spatial audio conferencing |
WO2012061149A1 (en) * | 2010-10-25 | 2012-05-10 | Qualcomm Incorporated | Three-dimensional sound capturing and reproducing with multi-microphones |
Family Cites Families (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6289308B1 (en) * | 1990-06-01 | 2001-09-11 | U.S. Philips Corporation | Encoded wideband digital transmission signal and record carrier recorded with such a signal |
US6072878A (en) | 1997-09-24 | 2000-06-06 | Sonic Solutions | Multi-channel surround sound mastering and reproduction techniques that preserve spatial harmonics |
US6813360B2 (en) * | 2002-01-22 | 2004-11-02 | Avaya, Inc. | Audio conferencing with three-dimensional audio encoding |
US7099821B2 (en) * | 2003-09-12 | 2006-08-29 | Softmax, Inc. | Separation of target acoustic signals in a multi-transducer arrangement |
US7756713B2 (en) * | 2004-07-02 | 2010-07-13 | Panasonic Corporation | Audio signal decoding device which decodes a downmix channel signal and audio signal encoding device which encodes audio channel signals together with spatial audio information |
US7826624B2 (en) * | 2004-10-15 | 2010-11-02 | Lifesize Communications, Inc. | Speakerphone self calibration and beam forming |
BRPI0607303A2 (en) | 2005-01-26 | 2009-08-25 | Matsushita Electric Ind Co Ltd | Voice coding device and voice coding method
US20080004729A1 (en) | 2006-06-30 | 2008-01-03 | Nokia Corporation | Direct encoding into a directional audio coding format |
TW200849219A (en) * | 2007-02-26 | 2008-12-16 | Qualcomm Inc | Systems, methods, and apparatus for signal separation |
US20080232601A1 (en) | 2007-03-21 | 2008-09-25 | Ville Pulkki | Method and apparatus for enhancement of audio reconstruction |
US8098842B2 (en) * | 2007-03-29 | 2012-01-17 | Microsoft Corp. | Enhanced beamforming for arrays of directional microphones |
US8005237B2 (en) * | 2007-05-17 | 2011-08-23 | Microsoft Corp. | Sensor array beamformer post-processor |
KR101415026B1 (en) | 2007-11-19 | 2014-07-04 | 삼성전자주식회사 | Method and apparatus for acquiring the multi-channel sound with a microphone array |
US8175291B2 (en) * | 2007-12-19 | 2012-05-08 | Qualcomm Incorporated | Systems, methods, and apparatus for multi-microphone based speech enhancement |
US8582783B2 (en) | 2008-04-07 | 2013-11-12 | Dolby Laboratories Licensing Corporation | Surround sound generation from a microphone array |
US8396226B2 (en) * | 2008-06-30 | 2013-03-12 | Costellation Productions, Inc. | Methods and systems for improved acoustic environment characterization |
US9025775B2 (en) | 2008-07-01 | 2015-05-05 | Nokia Corporation | Apparatus and method for adjusting spatial cue information of a multichannel audio signal |
US8279357B2 (en) | 2008-09-02 | 2012-10-02 | Mitsubishi Electric Visual Solutions America, Inc. | System and methods for television with integrated sound projection system |
EP2517486A1 (en) | 2009-12-23 | 2012-10-31 | Nokia Corp. | An apparatus |
KR101423737B1 (en) * | 2010-01-21 | 2014-07-24 | 한국전자통신연구원 | Method and apparatus for decoding audio signal |
US8600737B2 (en) * | 2010-06-01 | 2013-12-03 | Qualcomm Incorporated | Systems, methods, apparatus, and computer program products for wideband speech coding |
US8638951B2 (en) | 2010-07-15 | 2014-01-28 | Motorola Mobility Llc | Electronic apparatus for generating modified wideband audio signals based on two or more wideband microphone signals |
US8433076B2 (en) * | 2010-07-26 | 2013-04-30 | Motorola Mobility Llc | Electronic apparatus for generating beamformed audio signals with steerable nulls |
US9456289B2 (en) * | 2010-11-19 | 2016-09-27 | Nokia Technologies Oy | Converting multi-microphone captured signals to shifted signals useful for binaural signal processing and use thereof |
US8819523B2 (en) * | 2011-05-19 | 2014-08-26 | Cambridge Silicon Radio Limited | Adaptive controller for a configurable audio coding system |
RU2618383C2 (en) * | 2011-11-01 | 2017-05-03 | Конинклейке Филипс Н.В. | Encoding and decoding of audio objects |
US20130315402A1 (en) | 2012-05-24 | 2013-11-28 | Qualcomm Incorporated | Three-dimensional sound compression and over-the-air transmission during a call |
2012
- 2012-10-31 US US13/664,687 patent/US20130315402A1/en not_active Abandoned
- 2012-10-31 US US13/664,701 patent/US9161149B2/en active Active

2013
- 2013-05-08 KR KR1020147035519A patent/KR101705960B1/en active IP Right Grant
- 2013-05-08 EP EP13727680.4A patent/EP2856464B1/en active Active
- 2013-05-08 CN CN201380026946.9A patent/CN104321812B/en not_active Expired - Fee Related
- 2013-05-08 WO PCT/US2013/040137 patent/WO2013176890A2/en active Application Filing
- 2013-05-08 JP JP2015514045A patent/JP6336968B2/en not_active Expired - Fee Related
- 2013-05-16 WO PCT/US2013/041392 patent/WO2013176959A1/en active Application Filing

2015
- 2015-09-10 US US14/850,776 patent/US9361898B2/en not_active Expired - Fee Related
Also Published As
Publication number | Publication date |
---|---|
JP6336968B2 (en) | 2018-06-06 |
US20160005408A1 (en) | 2016-01-07 |
US9161149B2 (en) | 2015-10-13 |
US20130315402A1 (en) | 2013-11-28 |
JP2015523594A (en) | 2015-08-13 |
WO2013176959A1 (en) | 2013-11-28 |
US9361898B2 (en) | 2016-06-07 |
EP2856464A2 (en) | 2015-04-08 |
WO2013176890A2 (en) | 2013-11-28 |
KR101705960B1 (en) | 2017-02-10 |
WO2013176890A3 (en) | 2014-02-27 |
CN104321812B (en) | 2016-10-05 |
US20130317830A1 (en) | 2013-11-28 |
EP2856464B1 (en) | 2019-06-19 |
KR20150021052A (en) | 2015-02-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104321812B (en) | Three-dimensional sound compression and over-the-air transmission during a call | |
CN109644314B (en) | Method of rendering sound program, audio playback system, and article of manufacture | |
JP6121481B2 (en) | 3D sound acquisition and playback using multi-microphone | |
US11128976B2 (en) | Representing occlusion when rendering for computer-mediated reality systems | |
US9219972B2 (en) | Efficient audio coding having reduced bit rate for ambient signals and decoding using same | |
US20220417656A1 (en) | An Apparatus, Method and Computer Program for Audio Signal Processing | |
CN104471960A (en) | Systems, methods, apparatus, and computer-readable media for backward-compatible audio coding | |
US20080004729A1 (en) | Direct encoding into a directional audio coding format | |
CN110537221A (en) | Two-stage audio focusing for spatial audio processing | |
CN109313907A (en) | Combining audio signals and spatial metadata | |
CN110049428B (en) | Method, playing device and system for realizing multi-channel surround sound playing | |
US11140507B2 (en) | Rendering of spatial audio content | |
CN106716526A (en) | Method and apparatus for enhancing sound sources | |
CN114051736A (en) | Timer-based access for audio streaming and rendering | |
WO2010125228A1 (en) | Encoding of multiview audio signals | |
US20240119945A1 (en) | Audio rendering system and method, and electronic device | |
CN116569255A (en) | Vector field interpolation of multiple distributed streams for six degree of freedom applications | |
CN114067810A (en) | Audio signal rendering method and device | |
US20240119946A1 (en) | Audio rendering system and method and electronic device | |
Sun | Immersive audio, capture, transport, and rendering: A review | |
CN115938388A (en) | Three-dimensional audio signal processing method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 2016-10-05; Termination date: 2021-05-08