US20090307565A1 - Method and apparatus for fast encoding of data symbols according to half-weight codes - Google Patents
Method and apparatus for fast encoding of data symbols according to half-weight codes
- Publication number
- US20090307565A1 (U.S. application Ser. No. 12/331,268)
- Authority
- US
- United States
- Prior art keywords
- symbols
- input
- weight
- source
- output
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/37—Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
- H03M13/3761—Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35 using code combining, i.e. using combining of codeword portions which may have been transmitted separately, e.g. Digital Fountain codes, Raptor codes or Luby Transform [LT] codes
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/03—Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/29—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/37—Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
- H03M13/373—Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35 with erasure correction and erasure determination, e.g. for packet loss recovery or setting of erasures for the decoding of Reed-Solomon codes
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/37—Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
- H03M13/3776—Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35 using a re-encoding step during the decoding process
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/47—Error detection, forward error correction or error protection, not provided for in groups H03M13/01 - H03M13/37
- H03M13/51—Constant weight codes; n-out-of-m codes; Berger codes
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/03—Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
- H03M13/05—Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
- H03M13/11—Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits using multiple parity bits
- H03M13/1102—Codes on graphs and decoding on graphs, e.g. low-density parity check [LDPC] codes
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/03—Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
- H03M13/05—Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
- H03M13/13—Linear codes
- H03M13/19—Single error correction without using particular properties of the cyclic codes, e.g. Hamming codes, extended or generalised Hamming codes
Description
- the present invention relates to systems and methods for encoding and/or decoding data, and more particularly to systems and methods for encoding and/or decoding using multi-stage fountain codes, herein referred to collectively as “MS Codes.”
- Fountain codes have been described previously, such as in Luby I, Shokrollahi I and Luby05. As described therein, fountain codes, sometimes referred to as “chain reaction codes” in view of some of the decoding techniques, provide a form of forward error-correction that enables data reconstruction from a received data set of a given size, without regard to which particular data packets are received, and provide for sequences of encoded output that do not need to repeat over normal use, such that a set of data can be encoded with a fountain code to generate as much output sequence data as needed. Unlike many coding techniques that generate a fixed amount of output for a given input and thus have a predetermined “code rate”, fountain codes can be “rateless” codes. Fountain codes are often considered information additive codes in that randomly selected output symbols are likely to contribute (add) information to other randomly selected output symbols, whereas random selections among fixed-rate code symbols are more likely to be duplicative of selections already received.
- a recipient desires to receive an exact copy of data transmitted over a channel by a sender with some level of certainty.
- the channel does not have perfect fidelity (which covers most of all physically realizable systems)
- one concern is how to deal with data lost or garbled in transmission.
- Lost data (erasures) are often easier to deal with than corrupted data (errors) because the recipient cannot always tell when the data it receives has been corrupted.
- Many error-correcting codes have been developed to correct for erasures and/or for errors.
- the particular code used is chosen based on some information about the infidelities of the channel through which the data is being transmitted and the nature of the data being transmitted. For example, where the channel is known to have long periods of infidelity, a burst error code might be best suited for that application. Where only short, infrequent errors are expected, a simple parity code might be best.
- Data transmission between multiple senders and/or multiple receivers over a communications channel has also been the subject of much literature.
- data transmission from multiple senders requires coordination among the multiple senders to allow the senders to minimize duplication of efforts.
- the senders do not coordinate which data they will transmit and when, but instead just send segments of the file, it is likely that a receiver will receive many useless duplicate segments.
- a concern is how to ensure that all data the receivers receive from the sender is useful. For example, suppose the sender wishes to transmit a file, and is continuously transmitting data about the same file.
- Often data to be transmitted over a communications channel is partitioned into equal size input symbols.
- the “size” of an input symbol can be measured in bits, whether or not the input symbol is actually broken into a bit stream, where an input symbol has a size of M bits when the input symbol is selected from an alphabet of 2^M symbols.
- a coding system may produce output symbols from the input symbols.
- Output symbols are elements from an output symbol alphabet.
- the output symbol alphabet may or may not have the same characteristics as the alphabet for the input symbols. Once the output symbols are created, they are transmitted to the receivers.
- the task of transmission may include post-processing of the output symbols so as to produce symbols suitable for the particular type of transmission.
- For example, where transmission constitutes sending the data from a wireless provider to a wireless receiver, several output symbols may be lumped together to form a frame, and each frame may be converted into a wave signal in which the amplitude or the phase is related to the frame.
- the operation of converting a frame into a wave is often called modulation, and the modulation is further referred to as phase or amplitude modulation depending on whether the information of the wave signal is stored in its phase or in its amplitude.
- this type of modulated transmission is used in many applications, such as satellite transmission, cable modems, Digital Subscriber Lines (DSL), and many others.
- a transmission is called reliable if it allows the intended recipient to recover an exact copy of the original data even in the face of errors and/or erasures during the transmission.
- Recovery of erased information has been the subject of much literature and very efficient coding methods have been devised in this case.
- Fountain codes, as described in Luby I, Shokrollahi I, or Luby05, are efficient coding methods for recovery of erasures in a wide variety of settings.
- Forward Error-Correction (FEC) codes, such as Reed-Solomon codes, Tornado codes, or, more generally, LDPC (“low density parity check”) codes, may also be used.
- Traditional error correcting codes, such as Reed-Solomon codes or LDPC codes, generate a fixed number of output symbols for fixed-length content. For example, for K input symbols, N output symbols might be generated. These N output symbols may comprise the K original input symbols and N-K redundant symbols. If storage permits, then the sender can compute the set of output symbols for each piece of data only once and transmit the output symbols using a carousel protocol.
- One problem is that some FEC codes require excessive computing power or memory to operate.
- Another problem is that the number of output symbols must be determined in advance of the coding process. This can lead to inefficiencies if the error rate of the symbols is overestimated, and can lead to failure if the error rate is underestimated.
- traditional FEC schemes often require a mechanism to estimate the reliability of the communications channel on which they operate. For example, in wireless transmission the sender and the receiver need to probe the communications channel to obtain an estimate of the noise and hence of the reliability of the channel. In such a case, this probing has to be repeated quite often, since the actual noise is a moving target due to rapid and transient changes in the quality of the communications channel.
- For such codes, the number of valid output symbols that can be generated is of the same order of magnitude as the number of input symbols the content is partitioned into, and the ratio between the two is often fixed and called the “code rate.”
- Typically, but not exclusively, most or all of these output symbols are generated in a preprocessing step before the sending step.
- These output symbols often have the property that all the input symbols can be regenerated from any subset of the output symbols equal in length to the original content or slightly longer in length than the original content.
- a code rate is selected to match an expected error rate.
- Fountain decoding is a form of forward error-correction that addresses the above issues in cases where a transmission error constitutes an erasure.
- the pool of possible output symbols that can be generated is orders of magnitude larger than the number of the input symbols typically limited only by a resolution of a counter on the encoder rather than by a code rate, and a random output symbol from the pool of possibilities can be generated very quickly.
- the output symbols can be generated on the fly on an as needed basis concurrent with the sending step.
- Fountain codes have a property that all input symbols of the content can be regenerated from any subset of a set of randomly generated and/or selected output symbols slightly longer in length than the original content.
- a fountain coding system comprises an encoder and a decoder.
- Data may be presented to the encoder in the form of a block, or a stream, and the encoder may generate output symbols from the block or the stream on the fly.
- data may be pre-encoded off-line, or concurrently with the process of transmission, using a static encoder, and the output symbols may be generated from the static input symbols, defined as the plurality of the original data symbols, and the output symbols of the static encoder.
- the block or stream of symbols from which the dynamic output symbols are generated is referred to herein as “source symbols.”
- For the MS codes described in Shokrollahi I and Luby05, the source symbols are the static input symbols, while for codes described in Luby I, the source symbols are the input symbols.
- input symbols herein refers to the original symbols presented to the encoder for encoding.
- the source symbols are identical with input symbols.
- Whether for an MS code, for example one of the codes described in Shokrollahi I or Luby05, or for the codes described in Luby I, we will refer herein to the output symbols generated by a coding system employing such a code as the “dynamic output symbols.”
- “Systematic MS” refers herein to the reference entitled “Systematic Encoding and Decoding of Chain Reaction Codes.”
- Various methods for generating source symbols from the input symbols are described in Shokrollahi I and Luby05. Generally, but not exclusively, the source symbols are preferably generated efficiently on a data processing device, and, at the same time, a good erasure correcting capability is required of the multi-stage code.
- One of the teachings in Shokrollahi I and Luby05 is to use a combination of codes to produce the source symbols.
- this combination comprises using an LDPC code to produce a second set of source symbols from the input symbols (the input symbols are the first set of source symbols) and then using a Half-Weight encoder to produce a third set of source symbols from the first two sets of source symbols, and then the dynamic output symbols are calculated based on all three sets of source symbols.
- the encoding for an MS encoder in some embodiments can be partitioned into two stages.
- the first stage computes redundant symbols from the original input symbols
- the second stage generates output symbols from combinations of the original input symbols and redundant symbols.
- the first stage can be further partitioned into two or more steps, where some of these steps compute redundant symbols based on Low Density Parity Check (LDPC) codes or other codes, and where other steps compute redundant symbols based on other codes.
- both multi-stage fountain decoding and some embodiments of decoding described in Inactivation Decoding may make use of a Half-Weight code in these other steps, and thus a Half-Weight code is used in these embodiments in one of the primary stages of static encoding.
- the techniques of multi-stage encoding, multi-stage fountain decoding and Inactivation Decoding may also be applied to other forward error correction codes including LDPC (“low density parity check”) codes, IRA (“Irregular Repeat Accumulate”) code, AIRA (“Accumulate Irregular Repeat Accumulate”) codes, and LDGM (“low density generator matrix”) codes, and many other classes of codes based on graphs, and thus the techniques described herein for Half-Weight code generation may also be applied in those settings.
- a Half-Weight code generates, for a given number k of first source symbols, a number h of redundant symbols (referred to as the “Half-Weight symbols” hereinafter).
- Half-Weight symbols can be calculated in a specific way from the k source symbols.
- each Half-Weight symbol requires, on average, around k/2 XORs of source symbols, and thus, in total, the h Half-Weight symbols require around (k/2)*h XORs of source symbols. Since h is of the order of log(k), this amounts to roughly k*log(k)/2 XORs of input symbols for the calculation of the Half-Weight symbols.
- the calculation of the Half-Weight symbols using the straightforward approach would constitute a computational bottleneck for the design of some embodiments of reliable multi-stage encoders.
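- As a concrete illustration of the straightforward (dense) approach and its cost, the following sketch computes each Half-Weight symbol directly as the XOR of its roughly k/2 associated source symbols. It assumes symbols are equal-length byte strings and that the dependency pattern is supplied as a k x h 0/1 matrix (a construction for that matrix is discussed later in this document); all names are illustrative and not taken from the patent.

```python
# Dense ("straightforward") Half-Weight encoding: each of the h Half-Weight
# symbols is the XOR of the roughly k/2 source symbols selected by its column
# of a k x h 0/1 matrix, for a total of about (k/2)*h symbol XORs.
# Illustrative sketch only; names are not taken from the patent.

def xor_symbols(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length symbols."""
    return bytes(p ^ q for p, q in zip(a, b))

def half_weight_naive(source: list[bytes], hw_matrix: list[list[int]]) -> list[bytes]:
    """source: k equal-length symbols; hw_matrix[i][j] == 1 iff source symbol i
    contributes to Half-Weight symbol j. Returns the h Half-Weight symbols."""
    h = len(hw_matrix[0])
    out = [bytes(len(source[0])) for _ in range(h)]
    for i, x in enumerate(source):
        for j in range(h):
            if hw_matrix[i][j]:
                out[j] = xor_symbols(out[j], x)   # ~k/2 XORs per Half-Weight symbol
    return out
```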
- efficient methods for encoding and decoding Half-Weight codes and similar high density codes are disclosed. Such efficient methods can operate with at most 3*(k-1)+h/2+1 XORs of symbols to calculate h Half-Weight symbols from k source symbols, where h is of the order of log(k).
- a method of encoding data operates on an ordered set of input symbols and includes generating intermediate symbols from the set of input symbols and possibly a set of a plurality of redundant symbols from the intermediate symbols. The method also includes generating a plurality of output symbols from a combined set of symbols including the input symbols, the intermediate symbols and possibly the redundant symbols.
- the intermediate symbols might include a plurality of Half-Weight symbols wherein their number is a function of the number of input symbols.
- a system for receiving data transmitted from a source over a communications channel comprises a receive module coupled to a communications channel for receiving output symbols transmitted over the communications channel, including a decoder for Half-Weight symbols.
- a computer data signal embodied in a carrier wave is provided, wherein Half-Weight symbols are encoded.
- the computational expense of encoding data for transmission over a channel is reduced.
- the computational expense of decoding such data is reduced.
- one or more of these benefits may be achieved.
- FIG. 1 is a block diagram of a communications system according to one embodiment wherein the present invention might be used.
- FIG. 2 is a block diagram of an encoder usable in the communication system of FIG. 1 .
- FIG. 3 is a simplified block diagram of a method of generating redundant symbols usable in the communication system of FIG. 1 .
- FIG. 4 is a simplified block diagram of the basic operation of a static encoder usable in the communication system of FIG. 1 .
- FIG. 5 is a simplified block diagram of a dynamic encoder usable in the communication system of FIG. 1 .
- FIG. 6 is a simplified block diagram of a basic operation of a dynamic encoder usable in the communication system of FIG. 1 .
- FIG. 7 is a simplified block diagram of a static encoder usable in the communication system of FIG. 1 .
- FIG. 8 is a simplified block diagram of the basic operation of a static encoder usable in the communication system of FIG. 1 .
- FIG. 9 is a simplified block diagram of a decoder usable in the communication system of FIG. 1 .
- FIG. 10 is an illustration of an example of the first few elements of a Gray code sequence.
- FIG. 11 is an illustration of an example of a Half-Weight matrix H.
- FIG. 12 is a flowchart of an overall efficient method for encoding a Half-Weight code according to embodiments of the present invention.
- FIG. 13 is a flowchart of a method for implementing the Initialize Computation phase of FIG. 12 .
- FIG. 14 is a flowchart of a method for implementing the Perform Core Computation phase of FIG. 12 .
- FIG. 15 is a flowchart of a method for implementing the Clean Up phase of FIG. 12 .
- A coding scheme denoted as “multi-stage coding” is described herein, embodiments of which are provided in Shokrollahi I.
- Multi-stage encoding encodes the data in a plurality of stages. Typically, but not always, a first stage adds a predetermined amount of redundancy to the data. A second stage then uses a fountain code, or the like, to produce output symbols from the original data and the redundant symbols computed by the first stage of the encoding.
- the first stage's redundancy might be in the form of LDPC symbols, Half-Weight symbols, source symbols, etc.
- input symbols are data that an encoder seeks to encode such that a decoder can recover those input symbols.
- a pre-coder generates a set of intermediate symbols from the input symbols using some control inputs known to the encoder and to the decoder (at least when the decoder is tasked with decoding information that depends on knowing those control inputs).
- a next stage might generate a set of redundant symbols from the intermediate symbols, which might include Half-Weight symbols. The input symbols and the redundant symbols might then collectively form a set of source symbols that form input to a dynamic encoder such as a fountain encoder.
- the source symbols include the input symbols, whereas for nonsystematic coding, the source symbols contain less than all or none of the input symbols and rely more on redundant symbols. In the latter case, the redundant symbols might not be referred to as redundant, but just as being source symbols. Furthermore, in some embodiments the labels “intermediate symbol” and “redundant symbol” are interchangeable and the source symbols comprise input symbols and redundant/intermediate symbols.
- the output symbols can be generated from source symbols as needed or based on a maximum anticipated error rate.
- each output symbol can be generated without regard to how other output symbols are generated, in part because they can be generated statistically with high likelihood of information additivity between independently generated output symbols due to the large key alphabet. Once generated, these output symbols can then be placed into packets and transmitted to their destination, with each packet containing one or more output symbols. Non-packetized transmission techniques can be used instead or as well.
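- The staged structure described above can be summarized with a toy sketch: a static pre-coding stage produces redundant symbols from the input symbols, and the inputs plus redundant symbols together form the source symbols handed to the dynamic (fountain) stage. This is only a schematic illustration under simplifying assumptions; the subset selection below is a uniform pseudo-random stand-in for the LDPC, Hamming and Half-Weight constructions described in this document, and none of the names come from the patent.

```python
import random

def xor_symbols(a: bytes, b: bytes) -> bytes:
    return bytes(p ^ q for p, q in zip(a, b))

def static_precode(input_symbols: list[bytes], num_redundant: int, seed: int = 0) -> list[bytes]:
    """Toy first stage: each redundant symbol is the XOR of a pseudo-randomly
    chosen subset of the input symbols (a stand-in for the LDPC / Hamming /
    Half-Weight constructions of the text)."""
    rng = random.Random(seed)                 # seed is shared by encoder and decoder
    k = len(input_symbols)
    redundant = []
    for _ in range(num_redundant):
        subset = rng.sample(range(k), max(1, k // 2))
        acc = bytes(len(input_symbols[0]))
        for i in subset:
            acc = xor_symbols(acc, input_symbols[i])
        redundant.append(acc)
    return redundant

# The input symbols plus the redundant symbols form the source symbols that feed
# the dynamic (fountain) stage, which then produces output symbols on demand.
inputs = [bytes([v]) * 4 for v in range(10)]
source_symbols = inputs + static_precode(inputs, num_redundant=3)
```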
- file refers to any data that is stored at one or more sources and is to be delivered as a unit to one or more destinations.
- a document, an image, and a file from a file server or computer storage device are all examples of “files” that can be delivered.
- Files can be of known size (such as a one megabyte image stored on a hard disk) or can be of unknown size (such as a file taken from the output of a streaming source). Either way, the file is a sequence of input symbols, where each input symbol has a position in the file and a value.
- the term “stream” refers to any data that is stored or generated at one or more sources and is delivered at a specified rate at each point in time in the order it is generated to one or more destinations.
- Streams can be fixed rate or variable rate.
- An MPEG video stream, AMR audio stream, and a data stream used to control a remote device are all examples of “streams” that can be delivered.
- the rate of the stream at each point in time can be known (such as 4 megabits per second) or unknown (such as a variable rate stream where the rate at each point in time is not known in advance). Either way, the stream is a sequence of input symbols, where each input symbol has a position in the stream and a value.
- Transmission is the process of transmitting data from one or more senders to one or more recipients through a channel in order to deliver a file or stream.
- a sender is also sometimes referred to as the encoder. If one sender is connected to any number of recipients by a perfect channel, the received data can be an exact copy of the input file or stream, as all the data will be received correctly.
- the channel is not perfect, which is the case for most real-world channels.
- two imperfections of interest are data erasure and data incompleteness (which can be treated as a special case of data erasure). Data erasure occurs when the channel loses or drops data.
- Data incompleteness occurs when a recipient does not start receiving data until some of the data has already passed it by, the recipient stops receiving data before transmission ends, the recipient chooses to only receive a portion of the transmitted data, and/or the recipient intermittently stops and starts again receiving data.
- a moving satellite sender might be transmitting data representing an input file or stream and start the transmission before a recipient is in range. Once the recipient is in range, data can be received until the satellite moves out of range, at which point the recipient can redirect its satellite dish (during which time it is not receiving data) to start receiving the data about the same input file or stream being transmitted by another satellite that has moved into range.
- data incompleteness is a special case of data erasure, since the recipient can treat the data incompleteness (and the recipient has the same problems) as if the recipient was in range the entire time, but the channel lost all the data up to the point where the recipient started receiving data. Also, as is well known in communication systems design, detectable errors can be considered equivalent to erasures by simply dropping all data blocks or symbols that have detectable errors.
- a recipient receives data generated by multiple senders, or by one sender using multiple connections. For example, to speed up a download, a recipient might simultaneously connect to more than one sender to transmit data concerning the same file.
- multiple multicast data streams might be transmitted to allow recipients to connect to one or more of these streams to match the aggregate transmission rate with the bandwidth of the channel connecting them to the sender. In all such cases, a concern is to ensure that all transmitted data is of independent use to a recipient, i.e., that the multiple source data is not redundant among the streams, even when the transmission rates are vastly different for the different streams, and when there are arbitrary patterns of loss.
- a communication channel is that which connects the sender and the recipient for data transmission.
- the communication channel could be a real-time channel, where the channel moves data from the sender to the recipient as the channel gets the data, or the communication channel might be a storage channel that stores some or all of the data in its transit from the sender to the recipient.
- An example of the latter is disk storage or other storage device.
- a program or device that generates data can be thought of as the sender, transmitting the data to a storage device.
- the recipient is the program or device that reads the data from the storage device.
- the mechanisms that the sender uses to get the data onto the storage device, the storage device itself and the mechanisms that the recipient uses to get the data from the storage device collectively form the channel. If there is a chance that those mechanisms or the storage device can lose data, then that would be treated as data erasure in the communication channel.
- An encoder is a circuit, device, module or code segment that handles the task of generating output symbols for transmission.
- One way of viewing the operation of the encoder is that the encoder generates output symbols from input symbols, where a sequence of input symbol values represents the input file or a block of the stream. Each input symbol would thus have a position, in the input file or block of the stream, and a value.
- a decoder is a circuit, device, module or code segment that reconstructs the input symbols from the output symbols received by the recipient. In multi-stage coding, the encoder and the decoder are further divided into sub-modules each performing a different task.
- the encoder comprises what is referred to herein as a static encoder and a dynamic encoder.
- a “static encoder” is an encoder that generates a number of redundant symbols from a set of input symbols, wherein the number of redundant symbols is determined prior to encoding. Examples of static encoding codes include Reed-Solomon codes, Tornado codes, Hamming codes, Low Density Parity Check (LDPC) codes, etc.
- static decoder is used herein to refer to a decoder that can decode data that was encoded by a static encoder.
- the input symbols are used to generate intermediate symbols, which are used to generate the source symbols and the intermediate symbols are not transmitted.
- a “dynamic encoder” is an encoder that generates output symbols from a set of input symbols, where the number of possible output symbols is orders of magnitude larger than the number of input symbols, and where the number of output symbols to be generated need not be fixed.
- a dynamic encoder is a chain reaction encoder, such as the encoders described in Luby I and Luby05.
- the term “dynamic decoder” is used herein to refer to a decoder that can decode data that was encoded by a dynamic encoder.
- Embodiments of multi-stage coding need not be limited to any particular type of input symbol.
- the values for the input symbols are selected from an alphabet of 2^M symbols for some positive integer M.
- an input symbol can be represented by a sequence of M bits of data from the input file or stream.
- the value of M is often determined based on, for example, the uses of the application, the communication channel, and/or the size of the output symbols.
- the size of an output symbol is often determined based on the application, the channel, and/or the size of the input symbols.
- the coding process might be simplified if the output symbol values and the input symbol values were the same size (i.e., representable by the same number of bits or selected from the same alphabet).
- the input symbol value size is limited when the output symbol value size is limited. For example, it may be desired to put output symbols in packets of limited size. If some data about a key associated with the output symbols were to be transmitted in order to recover the key at the receiver, the output symbol would preferably be small enough to accommodate, in one packet, the output symbol value and the data about the key.
- For example, if an input file is a multiple megabyte file, the input file might be broken into thousands, tens of thousands, or hundreds of thousands of input symbols with each input symbol encoding thousands, hundreds, or only a few bytes.
- a packet with a payload of size of 1024 bytes might be appropriate (a byte is 8 bits).
- an output symbol size of 8128 bits ((1024-8)*8) would be appropriate.
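- A worked version of the arithmetic above, under the assumption (not stated explicitly in the text) that the 8 bytes subtracted from the 1024-byte payload are reserved for key or other per-packet data:

```python
# Worked version of the sizing arithmetic above (the 8-byte reservation for key
# or other per-packet data is an assumption made for illustration).
payload_bytes = 1024
reserved_bytes = 8
output_symbol_bits = (payload_bytes - reserved_bytes) * 8
assert output_symbol_bits == 8128
```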
- some satellite systems use the MPEG packet standard, where the payload of each packet comprises 188 bytes.
- the application-specific parameters such as the input symbol size (i.e., M, the number of bits encoded by an input symbol), might be variables set by the application.
- the symbol size might be chosen to be rather small so that each source packet can be covered with an integral number of input symbols that have aggregate size at most slightly larger than the source packet.
- Each output symbol has a value.
- each output symbol also has associated therewith an identifier called its “key.”
- the key of each output symbol can be easily determined by the recipient to allow the recipient to distinguish one output symbol from other output symbols.
- the key of an output symbol is distinct from the keys of all other output symbols.
- Various forms of keying are discussed in previous art; for example, Luby I describes forms of keying that can be employed in embodiments of the present invention.
- Multi-stage coding is particularly useful where there is an expectation of data erasure or where the recipient does not begin and end reception exactly when a transmission begins and ends. The latter condition is referred to herein as “data incompleteness.” Regarding erasure events, multi-stage coding shares many of the benefits of fountain coding described in Luby I. In particular, multi-stage output symbols are information additive, so any suitable number of packets can be used to recover an input file or stream to a desired degree of accuracy.
- With multi-stage coding, a receiver is not constrained to pick up any particular set of packets, so it can receive some packets from one transmitter, switch to another transmitter, lose some packets, miss the beginning or end of a given transmission and still recover an input file or block of a stream.
- the ability to join and leave a transmission without receiver-transmitter coordination helps to simplify the communication process. With suitable selection of the stages, it might be that a coding scheme will result in output symbols that can be more quickly decoded, or with less computational effort.
- transmitting a file or stream using multi-stage coding can include generating, forming or extracting input symbols from an input file or block of a stream, computing redundant symbols, encoding input and redundant symbols into one or more output symbols, where each output symbol is generated based on its key independently of all other output symbols, and transmitting the output symbols to one or more recipients over a channel.
- receiving (and reconstructing) a copy of the input file or block of a stream using multi-stage coding can include receiving some set or subset of output symbols from one of more data streams, and decoding the input symbols from the values and keys of the received output symbols.
- Suitable FEC erasure codes as described herein can be used to overcome the above-cited difficulties and would find use in a number of fields including multimedia broadcasting and multicasting systems and services.
- An FEC erasure code hereafter referred to as “a multi-stage fountain code” has properties that meet many of the current and future requirements of such systems and services.
- Some basic properties of multi-stage fountain codes are that, for any packet loss conditions and for delivery of source files of any relevant size or streams of any relevant rate: (a) the reception overhead of each individual receiver device (“RD”) is minimized; (b) the total transmission time needed to deliver source files to any number of RDs can be minimized; and (c) the quality of the delivered stream to any number of RDs can be maximized for the number of output symbols sent relative to the number of input symbols, with suitable selection of transmission schedules.
- the RDs might be handheld devices, embedded into a vehicle, portable (i.e., movable but not typically in motion when in use) or fixed to a location.
- Multi-stage chain reaction codes are fountain codes, i.e., as many encoding packets as needed can be generated on-the-fly, each containing unique encoding symbols that are equally useful for recovering a source file or block of a stream, while decoding often occurs in a chain reaction fashion, i.e., as one symbol is recovered, it might be the last remaining symbol needed to recover other symbols, which in turn might be the last remaining symbols needed to recover other symbols, etc.
- There are many advantages to using fountain codes versus other types of FEC codes.
- One advantage is that, regardless of packet loss conditions and RD availability, fountain codes minimize the number of encoding packets each RD needs to receive to reconstruct a source file or block of a stream. This is true even under harsh packet loss conditions and when, for example, mobile RDs are only intermittently turned-on or available over a long file download session.
- Another advantage is the ability to generate exactly as many encoding packets as needed, making the decision on how many encoding packets to generate on-the-fly while the transmission is in progress. This can be useful if for example there is feedback from RDs indicating whether or not they received enough encoding packets to recover a source file or block of a stream.
- If packet loss conditions are less severe than expected, the transmission can be terminated early.
- If packet loss conditions are more severe than expected, or if RDs are unavailable more often than expected, the transmission can be seamlessly extended.
- Inverse multiplexing is when an RD is able to combine received encoding packets generated at independent senders to reconstruct a source file or block of a stream.
- inverse multiplexing is described below in reference to receiving encoding packets from different senders.
- Multi-stage fountain codes provide a degree of flexibility unmatched by other types of FEC codes.
- FIG. 1 is a block diagram of a communications system 100 that uses multi-stage coding.
- an input file 101 or an input stream 105 , is provided to an input symbol generator 110 .
- Input symbol generator 110 generates a sequence of one or more input symbols (IS(0), IS(1), IS(2), . . . ) from the input file or stream, with each input symbol having a value and a position (denoted in FIG. 1 as a parenthesized integer).
- The set of possible values for the input symbols, i.e., their alphabet, is typically an alphabet of 2^M symbols, so that each input symbol codes for M bits of the input file or stream.
- the value of M is generally determined by the use of communication system 100 , but a general purpose system might include a symbol size input for input symbol generator 110 so that M can be varied from use to use.
- the output of input symbol generator 110 is provided to an encoder 115 .
- Static key generator 130 produces a stream of static keys S 0 , S 1 , . . . .
- the number of the static keys generated is generally limited and depends on the specific embodiment of encoder 115 . The generation of static keys will be subsequently described in more detail.
- Dynamic key generator 120 generates a dynamic key for each output symbol to be generated by the encoder 115 . Each dynamic key is generated so that a large fraction of the dynamic keys for the same input file or block of a stream are unique. For example, Luby I describes embodiments of key generators that can be used.
- the outputs of dynamic key generator 120 and the static key generator 130 are provided to encoder 115 .
- From each key I provided by dynamic key generator 120, encoder 115 generates an output symbol, with a value B(I), from the input symbols provided by the input symbol generator. The operation of encoder 115 will be described in more detail below.
- the value of each output symbol is generated based on its key, on some function of one or more of the input symbols, and possibly on one or more redundant symbols that had been computed from the input symbols.
- the collection of input symbols and redundant symbols that give rise to a specific output symbol is referred to herein as the output symbol's “associated symbols” or just its “associates”.
- the selection of the function (the “value function”) and the associates is done according to a process described in more detail below.
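- A minimal sketch of how one output symbol value B(I) might be computed from its key I, mirroring the value function and associates just described: a key-seeded generator picks a weight and a set of associates, and the value function (a plain XOR here) combines them. The uniform weight choice is an illustrative stand-in for the carefully designed degree distributions of actual fountain codes; none of the names below come from the patent.

```python
import random

def xor_symbols(a: bytes, b: bytes) -> bytes:
    return bytes(p ^ q for p, q in zip(a, b))

def output_symbol_value(source_symbols: list[bytes], key: int) -> bytes:
    """Compute B(I) for key I: a key-seeded weight selector and associator pick
    the associates, and the value function (plain XOR here) combines them."""
    rng = random.Random(key)                                     # key drives all choices
    weight = rng.randint(1, len(source_symbols))                 # weight selector (illustrative)
    associates = rng.sample(range(len(source_symbols)), weight)  # associator
    value = bytes(len(source_symbols[0]))
    for idx in associates:                                       # value function: XOR of associates
        value = xor_symbols(value, source_symbols[idx])
    return value
```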
- In some embodiments, M is the same for input symbols and output symbols, i.e., they both code for the same number of bits.
- the number K of input symbols is used by the encoder 115 to select the associates. If K is not known in advance, such as where the input is a streaming file, K can be just an estimate. The value K might also be used by encoder 115 to allocate storage for input symbols and any intermediate symbols generated by encoder 115 .
- Encoder 115 provides output symbols to a transmit module 140 .
- Transmit module 140 is also provided the key of each such output symbol from the dynamic key generator 120 .
- Transmit module 140 transmits the output symbols, and depending on the keying method used, transmit module 140 might also transmit some data about the keys of the transmitted output symbols, over a channel 145 to a receive module 150 .
- Channel 145 is assumed to be an erasure channel, but that is not a requirement for proper operation of communication system 100 .
- Modules 140 , 145 and 150 can be any suitable hardware components, software components, physical media, or any combination thereof, so long as transmit module 140 is adapted to transmit output symbols and any needed data about their keys to channel 145 and receive module 150 is adapted to receive symbols and potentially some data about their keys from channel 145 .
- the value of K, if used to determine the associates, can be sent over channel 145, or it may be set ahead of time by agreement of encoder 115 and decoder 155.
- channel 145 can be a real-time channel, such as a path through the Internet or a broadcast link from a television transmitter to a television recipient or a telephone connection from one point to another, or channel 145 can be a storage channel, such as a CD-ROM, disk drive, Web site, or the like.
- Channel 145 might even be a combination of a real-time channel and a storage channel, such as a channel formed when one person transmits an input file from a personal computer to an Internet Service Provider (ISP) over a telephone line, the input file is stored on a Web server and is subsequently transmitted to a recipient over the Internet.
- Because channel 145 is assumed to be an erasure channel, communications system 100 does not assume a one-to-one correspondence between the output symbols that exit receive module 150 and the output symbols that go into transmit module 140.
- When channel 145 comprises a packet network, communications system 100 might not even be able to assume that the relative order of any two or more packets is preserved in transit through channel 145. Therefore, the key of the output symbols is determined using one or more of the keying schemes described above, and not necessarily by the order in which the output symbols exit receive module 150.
- Receive module 150 provides the output symbols to a decoder 155 , and any data receive module 150 receives about the keys of these output symbols is provided to a dynamic key regenerator 160 .
- Dynamic key regenerator 160 regenerates the dynamic keys for the received output symbols and provides these dynamic keys to decoder 155 .
- Static key generator 163 regenerates the static keys S 0 , S 1 , . . . and provides them to decoder 155 .
- the static key generator has access to random number generator 135 used both during the encoding and the decoding process. This can be in the form of access to the same physical device if the random numbers are generated on such device, or in the form of access to the same algorithm for the generation of random numbers to achieve identical behavior.
- Decoder 155 uses the keys provided by dynamic key regenerator 160 and static key generator 163 together with the corresponding output symbols, to recover the input symbols (again IS( 0 ), IS( 1 ), IS( 2 ), . . . ). Decoder 155 provides the recovered input symbols to an input file reassembler 165 , which generates a copy 170 of input file 101 or input stream 105 .
- FIG. 2 is a block diagram of one specific embodiment of encoder 115 shown in FIG. 1 .
- Encoder 115 comprises a static encoder 210 , a dynamic encoder 220 , and a redundancy calculator 230 .
- Static encoder 210 receives the following inputs: a) the original input symbols IS(0), IS(1), . . . , IS(K-1) provided by the input symbol generator 110 and stored in an input symbol buffer 205; b) the number K of original input symbols; c) static keys S_0, S_1, . . . provided by the static key generator 130; and d) a number R of redundant symbols.
- Upon receiving these inputs, static encoder 210 computes R redundant symbols RE(0), RE(1), . . . , RE(R-1) as will be described below. Typically, but not always, the redundant symbols have the same size as the input symbols. In one specific embodiment, the redundant symbols generated by static encoder 210 are stored in input symbol buffer 205. Input symbol buffer 205 may be only logical, i.e., the file or block of the stream may be physically stored in one place and the positions of the input symbols within symbol buffer 205 could only be renamings of the positions of these symbols within the original file or block of the stream. Some of the redundant symbols RE(·) might be Half-Weight symbols calculated as described below.
- Dynamic encoder 220 receives the input symbols and the redundant symbols, and generates output symbols as will be described in further detail below.
- dynamic encoder 220 receives the input symbols and redundant symbols from input symbol buffer 205 .
- Redundancy calculator 230 computes the number R of redundant symbols from the number K of input symbols. This computation is described in further detail below.
- FIG. 3 is a simplified flow diagram illustrating one embodiment 300 of a method of statically encoding.
- a variable j, which keeps track of how many redundant symbols have been generated, is set to zero.
- a first redundant symbol RE(0) is computed as a function F_0 of at least some of the input symbols IS(0), . . . , IS(K-1).
- the variable j is incremented.
- In step 320, it is tested whether all of the redundant symbols have been generated (i.e., is j greater than R-1?). If yes, then the flow ends. Otherwise, the flow proceeds to step 325.
- RE(j) is computed as a function F_j of the input symbols IS(0), . . . , IS(K-1) and of the previously generated redundant symbols RE(0), . . . , RE(j-1), where F_j need not be a function that depends on every one of the input symbols or every one of the redundant symbols.
- Steps 315, 320, and 325 are repeated until R redundant symbols have been computed. In some cases F_j might be such that Half-Weight symbols are generated.
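- The loop of FIG. 3 can be sketched as follows, with each F_j supplied as a callable that sees the input symbols and the redundant symbols generated so far. The example function shown (an XOR of all inputs) is purely illustrative; in practice the F_j would implement, e.g., Hamming, LDPC or Half-Weight constraints, possibly parameterized by static keys.

```python
from typing import Callable, Sequence

Symbol = bytes
StaticFn = Callable[[Sequence[Symbol], Sequence[Symbol]], Symbol]

def static_encode(input_symbols: Sequence[Symbol], functions: Sequence[StaticFn]) -> list[Symbol]:
    """RE(j) = F_j(IS(0..K-1), RE(0..j-1)) for j = 0, ..., R-1."""
    redundant: list[Symbol] = []
    for f_j in functions:
        redundant.append(f_j(input_symbols, redundant))
    return redundant

def xor_of_all_inputs(inputs: Sequence[Symbol], _previous: Sequence[Symbol]) -> Symbol:
    """Example F_j: a single parity symbol over all of the input symbols."""
    acc = bytearray(len(inputs[0]))
    for s in inputs:
        for pos, byte in enumerate(s):
            acc[pos] ^= byte
    return bytes(acc)

# Example: three (identical) parity constraints, just to exercise the loop.
redundant = static_encode([b"abcd", b"efgh", b"ijkl"], [xor_of_all_inputs] * 3)
```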
- static encoder 210 receives one or more static keys S_0, S_1, . . . from static key generator 130.
- the static encoder 210 uses the static keys to determine some or all of the functions F_0, F_1, . . . , F_{j-1}.
- For example, static key S_0 can be used to determine function F_0, static key S_1 can be used to determine function F_1, etc.
- Alternatively, one or more of static keys S_0, S_1, . . . can be used to determine function F_0, one or more of static keys S_0, S_1, . . . can be used to determine function F_1, etc.
- no static keys are needed, and thus static key generator 130 is not needed.
- FIG. 4 is a simplified illustration of the operation of one embodiment of static encoder 210 .
- static encoder 210 generates redundant symbol RE(j) as a function F_j of IS(0), . . . , IS(K-1), RE(0), . . . , RE(j-1), received from input symbol buffer 205, and stores it back into input symbol buffer 205.
- the exact form of the functions F_0, F_1, . . . , F_{R-1} depends on the particular application.
- Typically, functions F_0, F_1, . . . , F_{R-1} include an exclusive OR of some or all of their corresponding arguments. As described above, these functions may or may not actually employ static keys generated by static key generator 130 of FIG. 1.
- In one embodiment, the first few functions implement a Hamming code and do not make any use of the static keys S_0, S_1, . . . , whereas the remaining functions implement a Low-Density Parity-Check code and make explicit use of the static keys.
- dynamic encoder 220 receives input symbols IS(0), . . . , IS(K-1), the redundant symbols RE(0), . . . , RE(R-1), and a key I for each output symbol it is to generate.
- the collection comprising the original input symbols and the redundant symbols will be referred to as the collection of “dynamic input symbols” hereafter.
- FIG. 5 is a simplified block diagram of one embodiment of a dynamic encoder, including a weight selector 510 , an associator 515 , a value function selector 520 and a calculator 525 .
- the K+R dynamic input symbols are stored in a dynamic symbol buffer 505 .
- dynamic encoder 500 performs the action illustrated in FIG. 6 , namely, to generate an output symbol value B(I) as some value function of selected input symbols.
- FIG. 6 illustrates the action of a function used by calculator 525 .
- FIG. 7 is a simplified block diagram of one specific embodiment of a static encoder according to the present invention.
- Static encoder 600 comprises a parameter calculator 605 , a Hamming encoder 610 , and a low-density-parity-check (LDPC) encoder 620 .
- Hamming encoder 610 is coupled to receive the input symbols IS(0), . . . , IS(K-1) from an input symbol buffer 625, the number K of input symbols, and the parameter D. In response, Hamming encoder 610 generates D+1 redundant symbols HA(0), HA(1), . . . , HA(D) according to a Hamming code.
- FIG. 8 illustrates the operation of one embodiment of the present invention that employs the static encoder shown in FIG. 7 .
- FIG. 9 is a simplified block diagram illustrating one embodiment of a decoder according to the present invention.
- Decoder 900 can be used, for example, to implement decoder 155 of FIG. 1 .
- Decoder 900 comprises a dynamic decoder 905 and a static decoder 910 .
- Input symbols and redundant symbols recovered by dynamic decoder 905 are stored in a reconstruction buffer 915 .
- static decoder 910 attempts to recover any input symbols not recovered by dynamic decoder 905 , if any.
- static decoder 910 receives input symbols and redundant symbols from reconstruction buffer 915 .
- LDPC decoders and Hamming decoders are well known to those skilled in the art, and can be employed in various embodiments according to the present invention.
- a Hamming decoder is implemented using a Gaussian elimination algorithm.
- Gaussian elimination algorithms are well known to those skilled in the art, and can be employed in various embodiments according to the present invention.
- Multi-stage fountain codes as described above are not systematic codes, i.e., all of the original source symbols of a source block are not necessarily among the encoding symbols that are sent.
- systematic FEC codes are useful for a file download system or service, and very important for a streaming system or service.
- a modified code can be made to be systematic and still maintain the fountain code and other described properties.
- A supplemental service to a file download service can allow receivers that did not receive enough encoding packets to reconstruct a source file from the file download session to request additional encoding packets to be sent from a make-up sender, e.g., via an HTTP session.
- the make-up sender generates encoding symbols from the source file and sends them, for example using HTTP, and all these encoding symbols can be combined with those received from the file download session to recover the source file.
- This approach allows different senders to provide incremental source file delivery services without coordination between the senders, while ensuring that each individual receiver need receive only a minimal number of encoding packets to recover each source file.
- a static encoder may generate L intermediate symbols, wherein L>K, wherein the first K of these are the input symbols (in the case of a systematic encoder) and the remaining L-K intermediate symbols are redundant symbols, including Half-Weight symbols.
- the intermediate symbols are such that the input symbols can be recovered from the intermediate symbols.
- the source symbols used as input to a dynamic encoder can comprise some, all or none of the intermediate symbols (and/or the input symbols if these are not part of the intermediate symbols).
- a dynamic encoder generates output symbols from the source symbols.
- the code rate can be “rateless” and as many output symbols can be generated as needed without having to resort to repeating information or duplicative symbols.
- a source block is sometimes broken into sub-blocks, each of which is sufficiently small to be decoded in working memory.
- each sub-block comprises K sub-symbols, wherein each symbol of the source block comprises one sub-symbol from each sub-block.
- a Half-Weight symbol is a symbol that depends on approximately half of the source symbols, such as being an XOR of approximately half of the source symbols and wherein each symbol lends a dependency to about half of the Half-Weight symbols.
- Where the number of source symbols and the number of Half-Weight symbols are both divisible by two, it might be that exactly half of the Half-Weight symbols depend on any one source symbol and each of the Half-Weight symbols depends on exactly half of the source symbols.
- the graph mapping source symbols on one side to Half-Weight symbols on the other side might be generated using a Gray sequence.
- Values for a set of intermediate symbols are derived from the original input symbols such that knowledge of the intermediate symbols is sufficient to reconstruct the input symbols.
- an encoder might produce redundant symbols that are each a function (such as the exclusive OR, or “XOR”) of a number of the intermediate symbols.
- the redundant symbols are produced in such a way that the intermediate symbols, and therefore also the source symbols, can be recovered from any sufficiently large set of output symbols, typically just the amount of data representing the original data plus some small extra amount.
- the intermediate symbols might comprise source symbols, Half-Weight symbols, or the like.
- Luby-Prov teaches the use of a systematic code encoder wherein a number of decoding algorithms are possible, along with some of those decoding algorithms.
- construction of intermediate and repair symbols is based in part on a pseudo-random number generator based on a fixed set of random numbers that are available to both sender and receiver.
- the construction of the intermediate symbols from the source symbols might be governed by a systematic index.
- a pre-coder might be programmed or constructed to generate L intermediate symbols from K input symbols (L>K where the symbols are the same size).
- the first K intermediate symbols are the K input symbols being pre-coded for and the last L-K intermediate symbols are generated in terms of the first K intermediate symbols.
- the first K intermediate symbols might be generated in some other manner.
- the last L-K intermediate symbols C[K], . . . , C[L-1] might comprise S LDPC symbols and H Half-Weight symbols.
- H is chosen such that it is the smallest integer such that choose(H, ceil(H/2)) ≥ K+S, where the function choose(i,j) denotes the number of ways j objects can be chosen from among i objects without repetition.
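- This condition can be evaluated directly; a small helper (illustrative only, not from the patent) returning the smallest such H:

```python
import math

def num_half_weight_symbols(k: int, s: int) -> int:
    """Smallest H with choose(H, ceil(H/2)) >= K + S."""
    h = 1
    while math.comb(h, math.ceil(h / 2)) < k + s:
        h += 1
    return h

# For example, num_half_weight_symbols(1000, 50) == 13, since choose(13, 7) = 1716 >= 1050
# while choose(12, 6) = 924 < 1050; H grows roughly like log2(K + S).
```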
- the S LDPC symbols are defined to be the values of C[K], . . . , C[K+S−1] at the end of the following process, wherein floor(x) denotes the largest integer that is less than or equal to x, “i % j” denotes i modulo j, and X⊕Y denotes, for equal-length bit strings X and Y, the bitwise exclusive-or of X and Y:
- g[i]=i⊕(floor(i/2)) for all positive integers i, wherein g[i] is a Gray sequence in which each element differs from the previous one in a single bit position.
- the Half-Weight symbols are defined as the values of C[K+S], . . . , C[L ⁇ 1] after the following process:
- An encoder for Half-Weight codes computes, for a sequence of k source symbols, a plurality of h Half-Weight symbols. Each of these Half-Weight symbols is the XOR of some of the source symbols, where generally the lengths of all the symbols are the same. These Half-Weight symbols can be used as intermediate symbols that provide input to a dynamic encoder. We use matrix notation to describe which source symbols are XORed to yield a given Half-Weight symbol.
- a Gray code sequence g[•] is used to describe the relationship between the source symbols and the Half-Weight symbols generated from the source symbols.
- Let g[i] be defined as follows. Let b[i] be the highest-order bit position that is different in the binary representations of i−1 and i. Then, the binary representation of g[i] is obtained by flipping bit b[i] of g[i−1].
- FIG. 10 shows the first few elements of a Gray code sequence generated according to this rule. Note that g[•] has the property that each pair of consecutive elements in the sequence differs in exactly one bit position. Note also that there are other Gray code sequences besides the one described here that would be equally suitable as the basis for a Half-Weight code.
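- Since FIG. 10 itself is not reproduced in this text, the following minimal sketch (our own illustration, with hypothetical function names) generates the first few elements by the bit-flipping rule above and checks that they agree with the closed form g[i]=i⊕(floor(i/2)) given earlier, and that consecutive elements differ in exactly one bit position:

```python
def gray_closed_form(i: int) -> int:
    """g[i] = i XOR floor(i/2)."""
    return i ^ (i >> 1)

def gray_by_flipping(n: int) -> list:
    """Build g[0..n-1] by flipping, at step i, the highest-order bit in which
    the binary representations of i-1 and i differ."""
    g = [0]
    for i in range(1, n):
        b = ((i - 1) ^ i).bit_length() - 1   # highest-order differing bit position
        g.append(g[-1] ^ (1 << b))
    return g

seq = gray_by_flipping(8)
assert seq == [gray_closed_form(i) for i in range(8)]
assert all(bin(seq[i] ^ seq[i - 1]).count("1") == 1 for i in range(1, 8))
print([format(x, "03b") for x in seq])  # 000 001 011 010 110 111 101 100
```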
- Let g[•,j] be the subsequence of g[•] consisting of those elements in which exactly j bits are set to 1.
- g[•,j] can be defined based on g[•] as follows:
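- The definition referenced here is given in the original by an equation that is not reproduced in this text. As an illustrative sketch (our own code, under the assumption that g[•] is the reflected Gray code above), the subsequence can be obtained simply by filtering the Gray code on the number of bits set:

```python
def gray_subsequence(j: int, count: int) -> list:
    """First `count` elements of the Gray code g[.] whose binary representation
    has exactly j bits set to 1, i.e., the first elements of g[., j]."""
    out, i = [], 0
    while len(out) < count:
        g = i ^ (i >> 1)
        if bin(g).count("1") == j:
            out.append(g)
        i += 1
    return out

print([format(x, "04b") for x in gray_subsequence(2, 6)])
# For the reflected Gray code, consecutive elements of g[., j] differ in exactly
# two bit positions, which is the property exploited by the encoder below.
```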
- Let (x 0 , x 1 , . . . , x k-1 ) denote the sequence of k source symbols, and let (y 0 , y 1 , . . . , y h-1 ) denote the sequence of Half-Weight symbols, where h satisfies Equation 1.
- the relationship between the source symbols and the h Half-Weight symbols can be represented as shown in FIG. 11 by a matrix H with k rows and h columns with each entry being 0 or 1 such that the Half-Weight symbols can be calculated from the source symbols using Equation 2.
- Equation 2: if H[i,j] denotes the entry of the matrix H at position (i,j), then y j is equal to H[0,j]·x 0 ⊕H[1,j]·x 1 ⊕ . . . ⊕H[k−1,j]·x k-1 , wherein the symbol “⊕” is used to denote the XOR operation.
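- A minimal sketch of the straightforward (unoptimized) evaluation of Equation 2 follows, with symbols modeled as byte strings. We assume here that row i of H is the binary representation of the i-th element of the Gray subsequence with exactly ceil(h/2) ones, consistent with Equation 1 and with the use of g[i,h′] in FIGS. 14 and 15; the actual matrix of FIG. 11 is not reproduced in this text, and the names below are our own:

```python
from math import ceil

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length symbols."""
    return bytes(p ^ q for p, q in zip(a, b))

def half_weight_naive(x: list, h: int) -> list:
    """Direct evaluation of Equation 2: y[j] is the XOR of every x[i] whose
    matrix row has a 1 in column j."""
    k, sym_len = len(x), len(x[0])
    hp = ceil(h / 2)
    rows, i = [], 0
    while len(rows) < k:             # first k fixed-weight Gray elements
        g = i ^ (i >> 1)
        if bin(g).count("1") == hp:
            rows.append(g)
        i += 1
    y = [bytes(sym_len) for _ in range(h)]
    for i in range(k):
        for j in range(h):
            if (rows[i] >> j) & 1:   # H[i, j] == 1
                y[j] = xor_bytes(y[j], x[i])
    return y
```

- Evaluated this way, each Half-Weight symbol costs roughly k/2 symbol XORs, about k·h/2 in total, which is the cost that the method of FIG. 12 below is designed to avoid.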
- An exemplary embodiment of an efficient method for calculation of Half-Weight symbols is shown in FIG. 12.
- the method comprises three phases: an Initialize Computation phase shown in step 1210 , a Perform Core Computation phase shown in step 1215 , and a Clean Up phase shown in step 1220 .
- the method starts by receiving the source symbols x 0 , x 1 , . . . , x k-1 in step 1205, performs the phases shown in steps 1210, 1215, and 1220 on the source symbols, and outputs the Half-Weight symbols y 0 , y 1 , . . . , y h-1 in step 1225.
- An exemplary embodiment of the Initialize Computation phase 1210 of FIG. 12 is shown in FIG. 13.
- the Initialize Computation phase 1210 shown in FIG. 13 comprises two steps. Step 1305 initializes the values of each of the Half-Weight symbols to zero.
- the second Step 1310 initializes the value of a symbol variable SUM to x 0 .
- the value of SUM will be the XOR of all the source symbols visited during the execution of the process.
- FIG. 14 shows one exemplary embodiment of the Perform Core Computation phase 1215 of FIG. 12 .
- the overall structure of this phase is a loop over the source symbols x 1 , x 2 , . . . , x k-1 .
- a variable i keeps track of the indices of these symbols, and is, accordingly, initialized to one in 1405 .
- In step 1410, the value of SUM is XORed into the two Half-Weight symbols indexed by Pos1[i,h′] and Pos2[i,h′].
- Pos1[i,h′] and Pos2[i,h′] can be calculated in a variety of ways, as is well known to those of skill in the art. For example, one way to do this is to explicitly calculate for consecutive indices of i the g[i,h′] sequence, and then calculate Pos1[i,h′] and Pos2[i,h′] based on the two positions where g[i−1,h′] and g[i,h′] differ in their binary representation. As another example, J. R. Bitner, G. Ehrlich, E. M.
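- One concrete, purely illustrative way to obtain Pos1[i,h′] and Pos2[i,h′] (a sketch with our own names, not a method prescribed by the text) is to compute consecutive elements of g[•,h′] explicitly and take the two bit positions in which they differ:

```python
def diff_positions(prev: int, cur: int) -> tuple:
    """The two bit positions where consecutive fixed-weight Gray elements differ
    (Pos1 and Pos2 in the text; the order of the two is immaterial here)."""
    d = prev ^ cur
    assert bin(d).count("1") == 2, "consecutive fixed-weight elements differ in two bits"
    pos1 = (d & -d).bit_length() - 1   # lowest differing bit position
    pos2 = d.bit_length() - 1          # highest differing bit position
    return pos1, pos2

print(diff_positions(0b0011, 0b0110))  # (0, 2)
```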
- In step 1420, the value of x i is XORed into SUM to update the value of SUM. This update ensures that SUM remains the XOR of the values of all of the x i visited so far.
- The counter i is incremented by one in step 1425.
- In step 1430, the value of i is compared to k, and if i<k, then steps 1410 through 1430 are repeated, whereas if i is equal to k, then the loop terminates at step 1435, and the process continues with the Clean Up phase 1220.
- An exemplary embodiment of the Clean Up phase 1220 is shown in FIG. 15.
- This phase comprises an initialization step 1505 and a computation that loops through values of a variable j ranging from 0 to h ⁇ 1.
- the computation loop encompasses steps 1510 through 1550 in FIG. 15 .
- in step 1505, the value of j is initialized to zero and the value of MASK is initialized to g[k−1,h′].
- In step 1510 of the computation loop, it is checked whether the least significant bit of the binary representation of MASK is equal to 1; if the test is true, then the next step is step 1520, and otherwise processing continues directly with step 1540.
- step 1520 the value of y j is updated by XORing it with the value of SUM, and the process continues with step 1540 .
- In step 1540, the value of j is incremented by one and the least significant bit of MASK is shifted out of MASK, where this shift can be implemented by a DIV operation, i.e., dividing MASK by two and dropping the remainder.
- This second part of step 1540 could be implemented in a variety of ways, including for example applying a shift operator to MASK.
- Step 1550 checks to see if j<h, and if j<h then processing returns to step 1510, but if this condition is not true then processing stops at step 1560. Each of these steps can be performed by digital logic, software or other physical computation device.
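- Putting the three phases together, the following is a compact sketch of the method of FIGS. 12-15 (our own illustrative code, not the patent's reference implementation). Symbols are modeled as byte strings, and, as before, we assume row i of the Half-Weight matrix is the i-th element of the Gray subsequence with exactly ceil(h/2) ones. The self-check at the end compares the result against a direct evaluation of Equation 2:

```python
import os
from math import ceil

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length symbols."""
    return bytes(p ^ q for p, q in zip(a, b))

def gray_rows(k: int, hp: int) -> list:
    """First k elements of the reflected Gray code having exactly hp bits set."""
    rows, i = [], 0
    while len(rows) < k:
        g = i ^ (i >> 1)
        if bin(g).count("1") == hp:
            rows.append(g)
        i += 1
    return rows

def half_weight_fast(x: list, h: int) -> list:
    """Compute h Half-Weight symbols from the source symbols in x using the
    Initialize, Core Computation and Clean Up phases of FIGS. 13-15."""
    k, sym_len = len(x), len(x[0])
    rows = gray_rows(k, ceil(h / 2))

    # Initialize Computation (FIG. 13): all y[j] = 0 and SUM = x[0].
    y = [bytes(sym_len) for _ in range(h)]
    SUM = x[0]

    # Perform Core Computation (FIG. 14): for i = 1..k-1, XOR SUM into the
    # Half-Weight symbols at the positions where rows[i-1] and rows[i] differ
    # (exactly two positions, Pos1 and Pos2, for the reflected Gray code),
    # then fold x[i] into SUM (step 1420).
    for i in range(1, k):
        d = rows[i - 1] ^ rows[i]
        j = 0
        while d:
            if d & 1:
                y[j] = xor_bytes(y[j], SUM)
            d >>= 1
            j += 1
        SUM = xor_bytes(SUM, x[i])

    # Clean Up (FIG. 15): XOR SUM into y[j] for every bit j set in rows[k-1].
    mask, j = rows[k - 1], 0
    while mask:
        if mask & 1:
            y[j] = xor_bytes(y[j], SUM)
        mask >>= 1
        j += 1
    return y

# Self-check against a direct evaluation of Equation 2.
k, h, sym_len = 50, 10, 16
x = [os.urandom(sym_len) for _ in range(k)]
rows = gray_rows(k, ceil(h / 2))
reference = [bytes(sym_len)] * h
for i in range(k):
    for j in range(h):
        if (rows[i] >> j) & 1:
            reference[j] = xor_bytes(reference[j], x[i])
assert half_weight_fast(x, h) == reference
```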
- As was mentioned above, one of the reasons for using the Half-Weight code is the excellent erasure correction capability of this code. In certain applications, however, it is important to protect the data with more redundant symbols than a Half-Weight code is capable of producing. This is particularly important when the redundant symbols are used to decrease the error in a probabilistic decoding algorithm, such as those described in Shokrollahi I. For this reason, a variation of using Half-Weight codes is described herein that is capable of producing several sets of redundant symbols, wherein each set of redundant symbols has the same number of redundant symbols as the Half-Weight code. This encoder is referred to as the “parallel Half-Weight encoder” herein.
- the method starts by calculating s random or pseudorandom permutations of the integers 0, 1, . . . , k ⁇ 1. There are a number of ways to calculate these permutations, as is well known to those of skill in the art.
- one property of these permutations is that they are easy to calculate, i.e., it is easy to determine, for any given j among the integers 0, 1, . . . , k−1, what the image of j under the permutation is.
- k is a prime number
- one of the many possibilities for this permutation can be to independently choose two random integers a and b, where a is chosen uniformly at random in the range 1, 2, . . . , k ⁇ 1, and b is independently chosen at random in the range 0, 1, . . . , k ⁇ 1.
- the image of j under the permutation defined by a and b is defined as a ⁇ j+b modulo the integer k. In other words, this image is the remainder of the division of a ⁇ j+b by k.
- the numbers a and b can be chosen by choosing a uniformly at random from the set of positive integers less than k which do not have a common divisor with k, while b can be chosen uniformly at random in the range 0, 1, . . . , k ⁇ 1 as before.
- Various other methods for generating permutations can be envisioned by those of skill in the art upon studying this application, and the method described below does not depend on the specific choice of the permutations, except that in preferred applications it is desirable to have s different permutations.
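- A minimal illustration of the permutations just described (a sketch with our own names; the method does not depend on this particular construction):

```python
import math
import random

def random_affine_permutation(k: int):
    """Return a map j -> (a*j + b) mod k, with a chosen coprime to k so that
    the map is a permutation of {0, 1, ..., k-1}."""
    while True:
        a = random.randrange(1, k)
        if math.gcd(a, k) == 1:      # always true for prime k
            break
    b = random.randrange(0, k)
    return lambda j: (a * j + b) % k

perm = random_affine_permutation(7)
assert sorted(perm(j) for j in range(7)) == list(range(7))
```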
- the encoder generates s random or pseudorandom permutations, denoted ⁇ 1 , ⁇ 2 , . . . , ⁇ s .
- h is the smallest positive integer that satisfies Equation 1.
- the generation of the redundant symbols of the parallel Half-Weight encoder is done one set at a time.
- the encoder proceeds in a similar manner as the original Half-Weight encoder described above.
- the encoder might comprise three phases, similar to those shown in FIG. 12 , with a modification for parallel operation of step 1420 , when updating the value of SUM.
- a decoder described in Shokrollahi I might be used to recover the original source symbols from the received dynamic symbols. If a Half-Weight code is used by the encoder, one of the steps in the decoding process in some embodiments is to recover some of the symbols involved in a Half-Weight code based on the recovery of the remainder of the symbols involved in the Half-Weight code.
- the Half-Weight code has the property that, with high probability, any combination of r≦h missing symbols can be recovered if the remaining symbols among the k+h symbols y 0 , y 1 , . . . , y h-1 and x 0 , x 1 , . . . , x k-1 have already been recovered.
- When used in conjunction with an Inactivation Decoder or other decoder wherein a Gaussian elimination of the decoding matrix is performed, the Half-Weight code has the property that addition of the Half-Weight rows to the decoding matrix formed from the other decoding stages in a multi-stage decoder will result in a matrix of full rank, with high probability, and hence reduces the error probability of the decoder.
- Such a variant comprises making the following changes to the Half-Weight encoding method described in FIGS. 12 , 13 , 14 and 15 .
- the symbols received are the t symbols among the k+h source and Half-Weight symbols.
- the missing Half-Weight symbols are initialized to zero and the remaining received Half-Weight symbols retain their received values.
- in the Initialize Computation phase, instead of SUM being set to x 0 , SUM is initialized to zero.
- in the core computation loop of FIG. 14, if x i is among the received source symbols, then SUM is updated as described in step 1420, and if x i is among the missing source symbols, then SUM is left unchanged.
- at step 1225 of FIG. 12, the dependency of the h−r′′ received Half-Weight symbols on the k−r′ received source symbols has been removed, and thus the h−r′′ received Half-Weight symbol values depend only on the r′ missing source symbols.
- in step 1410 of FIG. 14, one variant is to skip updates of missing Half-Weight symbol values, thus making the overall computation somewhat more efficient.
- alternatively, if the missing Half-Weight symbol updates are not skipped in step 1410 of FIG. 14, then in step 1225 of FIG. 12 the Half-Weight symbol values corresponding to missing Half-Weight symbols depend only on the missing source symbol values and their own missing value, and these equations may be useful for other parts of an overall decoding process, e.g., if there are other equations that involve the missing Half-Weight symbols and missing source symbols from other parts of an overall code. This is the case, for example, when using an Inactivation Decoder with a multi-stage code.
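- A rough sketch of how these decoding-side changes might be arranged is shown below (our own illustration, reusing the xor_bytes and gray_rows helpers from the encoder sketch above). Because the fragmentary description above leaves the handling of x 0 open, this sketch assumes SUM starts at x 0 when x 0 was received and at the zero symbol otherwise, which is what removes any dependence of the received Half-Weight values on received source symbols:

```python
from math import ceil

def remove_received_dependencies(x_received: list, y_received: list, h: int) -> list:
    """x_received[i] is source symbol i, or None if missing; y_received[j] is
    Half-Weight symbol j, or None if missing.  Returns Half-Weight symbol values
    whose dependence on the received source symbols has been cancelled, so each
    remaining equation involves only missing source symbols."""
    k = len(x_received)
    sym_len = len(next(v for v in x_received + y_received if v is not None))
    zero = bytes(sym_len)
    rows = gray_rows(k, ceil(h / 2))        # helper from the encoder sketch
    # missing Half-Weight symbols start at zero, received ones keep their values
    y = [v if v is not None else zero for v in y_received]
    # assumption: SUM starts at x[0] only if x[0] was received
    SUM = x_received[0] if x_received[0] is not None else zero
    for i in range(1, k):
        d = rows[i - 1] ^ rows[i]
        j = 0
        while d:
            if d & 1:
                y[j] = xor_bytes(y[j], SUM)
            d >>= 1
            j += 1
        if x_received[i] is not None:       # step 1420 only for received symbols
            SUM = xor_bytes(SUM, x_received[i])
    mask, j = rows[k - 1], 0
    while mask:
        if mask & 1:
            y[j] = xor_bytes(y[j], SUM)
        mask >>= 1
        j += 1
    return y
```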
- the input and output symbols encode for the same number of bits and each output symbol is placed in one packet (a packet being a unit of transport that is either received in its entirety or lost in its entirety).
- the communications system is modified so that each packet contains several output symbols.
- the size of an output symbol value is then set to a size determined by the size of the input symbol values in the initial splitting of the file or blocks of the stream into input symbols, based on a number of factors.
- the decoding process remains essentially unchanged, except that output symbols arrive in bunches as each packet is received.
- the setting of input symbol and output symbol sizes is usually dictated by the size of the file or block of the stream and the communication system over which the output symbols are to be transmitted. For example, if a communication system groups bits of data into packets of a defined size or groups bits in other ways, the design of symbol sizes begins with the packet or grouping size. From there, a designer would determine how many output symbols will be carried in one packet or group and that determines the output symbol size. For simplicity, the designer would likely set the input symbol size equal to the output symbol size, but if the input data makes a different input symbol size more convenient, it can be used.
- the above described encoding process produces a stream of packets containing output symbols based on the original file or block of the stream.
- Each output symbol in the stream is generated independently of all other output symbols, and there is no lower or upper bound on the number of output symbols that can be created.
- a key is associated with each output symbol. That key, and some contents of the input file or block of the stream, determines the value of the output symbol. Consecutively generated output symbols need not have consecutive keys, and in some applications it would be preferable to randomly generate the sequence of keys, or pseudorandomly generate the sequence.
- Multi-stage decoding has a property that if the original file or block of the stream can be split into K equal sized input symbols and each output symbol value is the same length as an input symbol value, then the file or block can be recovered from K+A output symbols on average, with very high probability, where A is small compared to K. For example, for the weight distributions introduced above, the probability that the value of A exceeds a small fixed fraction of K is at most 10^-12 if K is larger than 19,681, and it is at most 10^-10 for any value of K. Since the particular output symbols are generated in a random or pseudorandom order, and the loss of particular output symbols in transit is assumed random, some small variance exists in the actual number of output symbols needed to recover the input file or block. In some cases, where a particular collection of K+A packets is not enough to decode the entire input file or block, the input file or block is still recoverable if the receiver can gather more packets from one or more sources of output packets.
- a receiver can stop attempting to decode all of the input symbols after receiving K+A output symbols. Or, the receiver can stop receiving output symbols after receiving fewer than K+A output symbols. In some applications, the receiver may even only receive K or fewer output symbols. Thus, it is to be understood that in some embodiments of the present invention, the desired degree of accuracy need not be complete recovery of all the input symbols.
- the data can be encoded such that all of the input symbols cannot be recovered, or such that complete recovery of the input symbols would require reception of many more output symbols than the number of input symbols.
- Such an encoding would generally require less computational expense, and may thus be an acceptable way to decrease the computational expense of encoding.
Description
- This application is a divisional patent application of U.S. Non-provisional patent application Ser. No. 11/202,933 filed Aug. 11, 2005, which claims priority from co-pending U.S. Provisional Patent Application No. 60/600,932 filed Aug. 11, 2004 for Luby et al. “File Download System Using Fountain Codes” (hereinafter “Luby-Prov”), which is hereby incorporated by reference, as if set forth in full in this document, for all purposes.
- Related applications with common assignment include U.S. Pat. No. 6,307,487 to Luby entitled “Information Additive Code Generator and Decoder for Communication Systems” (hereinafter “Luby I”), U.S. patent application Ser. No. 10/032,156 filed Dec. 21, 2001 for Shokrollahi et al., entitled “Multi-Stage Code Generator and Decoder for Communication Systems” (hereinafter “Shokrollahi I”) and U.S. patent application Ser. No. 11/125,818 filed May 9, 2005 for Luby et al., entitled “File Download System” (hereinafter “Luby05”), each of which is hereby incorporated by reference, as if set forth in full in this document, for all purposes.
- The present invention relates to systems and methods for encoding and/or decoding data, and more particularly to systems and methods for encoding and/or decoding using multi-stage fountain codes, herein referred to collectively as “MS Codes.”
- Fountain codes have been described previously, such as in Luby I, Shokrollahi I and Luby05. As described therein, fountain codes, sometimes referred to as “chain reaction codes” in view of some of the decoding techniques, provide a form of forward error-correction that enables data reconstruction from a received data set of a given size, without regard to the particular data packets received, and provide for sequences of encoded symbols that do not need to repeat over normal use, such that a set of data can be encoded with a fountain code to generate as much output sequence data as needed. Unlike many coding techniques that generate a fixed amount of output for a given input and thus have a predetermined “code rate”, fountain codes can be “rateless” codes. Fountain codes are often considered information additive codes in that randomly selected output symbols are likely to contribute (add) information to other randomly selected output symbols, whereas random selections from among fixed-rate code symbols are more likely to be duplicative of selections already received.
- As a result, communication systems employing fountain codes are able to communicate information much more efficiently compared to traditional FEC codes transmitted via data carousel or acknowledgement-based protocols, as described in Luby I, or Shokrollahi I or Luby05.
- Transmission of data between a sender and a recipient over a communications channel has been the subject of much literature. Preferably, but not exclusively, a recipient desires to receive an exact copy of data transmitted over a channel by a sender with some level of certainty. Where the channel does not have perfect fidelity (which covers most of all physically realizable systems), one concern is how to deal with data lost or garbled in transmission. Lost data (erasures) are often easier to deal with than corrupted data (errors) because the recipient cannot always tell when corrupted data is data received in error. Many error-correcting codes have been developed to correct for erasures and/or for errors. Typically, the particular code used is chosen based on some information about the infidelities of the channel through which the data is being transmitted and the nature of the data being transmitted. For example, where the channel is known to have long periods of infidelity, a burst error code might be best suited for that application. Where only short, infrequent errors are expected, a simple parity code might be best.
- Data transmission between multiple senders and/or multiple receivers over a communications channel has also been the subject of much literature. Typically, data transmission from multiple senders requires coordination among the multiple senders to allow the senders to minimize duplication of efforts. In a typical multiple sender system sending data to a receiver, if the senders do not coordinate which data they will transmit and when, but instead just send segments of the file, it is likely that a receiver will receive many useless duplicate segments. Similarly, where different receivers join a transmission from a sender at different points in time, a concern is how to ensure that all data the receivers receive from the sender is useful. For example, suppose the sender wishes to transmit a file, and is continuously transmitting data about the same file. If the sender just sends segments of the original file and some segments are lost, it is likely that a receiver will receive many useless duplicate segments before receiving one copy of each segment in the file. Similarly, if a segment is received in error multiple times, then the amount of information conveyed to the receiver is much less than the cumulative information of the received garbled data. Often this leads to undesirable inefficiencies of the transmission system, or may require transmitter and/or receiver coordination.
- Often data to be transmitted over a communications channel is partitioned into equal size input symbols. The “size” of an input symbol can be measured in bits, whether or not the input symbol is actually broken into a bit stream, where an input symbol has a size of M bits when the input symbol is selected from an alphabet of 2^M symbols.
- A coding system may produce output symbols from the input symbols. Output symbols are elements from an output symbol alphabet. The output symbol alphabet may or may not have the same characteristics as the alphabet for the input symbols. Once the output symbols are created, they are transmitted to the receivers.
- The task of transmission may include post-processing of the output symbols so as to produce symbols suitable for the particular type of transmission. For example, where transmission constitutes sending the data from a wireless provider to a wireless receiver, several output symbols may be lumped together to form a frame, and each frame may be converted into a wave signal in which the amplitude or the phase is related to the frame. The operation of converting a frame into a wave is often called modulation, and the modulation is further referred to as phase or amplitude modulation depending on whether the information of the wave signal is stored in its phase or in its amplitude. Nowadays this type of modulated transmission is used in many applications, such as satellite transmission, cable modems, Digital Subscriber Lines (DSL), and many others.
- A transmission is called reliable if it allows the intended recipient to recover an exact copy of the original data even in the face of errors and/or erasures during the transmission. Recovery of erased information has been the subject of much literature and very efficient coding methods have been devised in this case. Fountain codes, as described in Luby I or Shokrollahi I or Luby05, are efficient coding methods for recovery of erasures in a wide variety of settings.
- One solution that has been proposed to increase reliability of transmission is to use Forward Error-Correction (FEC) codes, such as Reed-Solomon codes, Tornado codes, or, more generally, LDPC (“low density parity check”) codes. With such codes, one sends output symbols generated from the content instead of just sending the input symbols that constitute the content. Traditional error correcting codes, such as Reed-Solomon or other LDPC codes, generate a fixed number of output symbols for a fixed length content. For example, for K input symbols, N output symbols might be generated. These N output symbols may comprise the K original input symbols and N-K redundant symbols. If storage permits, then the sender can compute the set of output symbols for each piece of data only once and transmit the output symbols using a carousel protocol.
- One problem with some FEC codes is that they require excessive computing power or memory to operate. Another problem is that the number of output symbols must be determined in advance of the coding process. This can lead to inefficiencies if the error rate of the symbols is overestimated, and can lead to failure if the error rate is underestimated. Moreover, traditional FEC schemes often require a mechanism to estimate the reliability of the communications channel on which they operate. For example, in wireless transmission the sender and the receiver are in need of probing the communications channel so as to obtain an estimate of the noise and hence of the reliability of the channel. In such a case, this probing has to be repeated quite often, since the actual noise is a moving target due to rapid and transient changes in the quality of the communications channel.
- For traditional FEC codes, the number of valid output symbols that can be generated is of the same order of magnitude as the number of input symbols the content is partitioned into and is often a fixed ratio called the “code rate.” Typically, but not exclusively, most or all of these output symbols are generated in a preprocessing step before the sending step. These output symbols often have the property that all the input symbols can be regenerated from any subset of the output symbols equal in length to the original content or slightly longer in length than the original content. Typically, a code rate is selected to match an expected error rate.
- Fountain decoding is a form of forward error-correction that addresses the above issues in cases where a transmission error constitutes an erasure. For fountain codes, the pool of possible output symbols that can be generated is orders of magnitude larger than the number of the input symbols, typically limited only by the resolution of a counter on the encoder rather than by a code rate, and a random output symbol from the pool of possibilities can be generated very quickly. For fountain codes, the output symbols can be generated on the fly on an as-needed basis concurrent with the sending step. Fountain codes have a property that all input symbols of the content can be regenerated from any subset of a set of randomly generated and/or selected output symbols slightly longer in length than the original content.
- Other descriptions of various fountain coding systems can be found in documents such as U.S. Pat. No. 6,486,803, entitled “On Demand Encoding with a Window” and U.S. Pat. No. 6,411,223 entitled “Generating High Weight Output Symbols using a Basis,” assigned to the assignee of the present application.
- Some embodiments of a fountain coding system comprise an encoder and a decoder. Data may be presented to the encoder in the form of a block, or a stream, and the encoder may generate output symbols from the block or the stream on the fly. In some embodiments, for example those described in Shokrollahi I and Luby05, data may be pre-encoded off-line, or concurrently with the process of transmission, using a static encoder, and the output symbols may be generated from the static input symbols, defined as the plurality of the original data symbols, and the output symbols of the static encoder.
- In general, the block or stream of symbols from which the dynamic output symbols are generated is referred to herein as “source symbols.” Thus, in the case of the codes described in Shokrollahi I and Luby05, the source symbols are the static input symbols, while for codes described in Luby I, the source symbols are the input symbols. The term “input symbols” herein refers to the original symbols presented to the encoder for encoding. Thus, for chain reaction codes described in Luby I, the source symbols are identical with input symbols. Sometimes, to distinguish between an MS code, as for example one of the codes described in Shokrollahi I or Luby05, and the codes described in Luby I, we will refer herein to the output symbols generated by a coding system employing an MS code as the “dynamic output symbols.”
- In certain applications, it may be preferable to transmit the input symbols first, and then continue transmission by sending generated output symbols. Such a coding system is called a systematic coding system and systematic coding systems for codes such as those described in Luby I and Shokrollahi I are disclosed in U.S. Pat. No. 6,909,383 entitled, “Systematic Encoding and Decoding of Chain Reaction Codes,” assigned to the assignee of the present application (hereinafter “Systematic MS”), whereas Luby05 is itself a systematic MS code that is designed using the elements described in “Systematic MS”.
- Various methods for generating source symbols from the input symbols are described in Shokrollahi I and Luby05. Generally, but not exclusively, the source symbols are preferably generated efficiently on a data processing device, and, at the same time, a good erasure correcting capability is required of the multi-stage code. One of the teachings in Shokrollahi I and Luby05 is to use a combination of codes to produce the source symbols. In one particular embodiment described in Luby05, this combination comprises using an LDPC code to produce a second set of source symbols from the input symbols (the input symbols are the first set of source symbols) and then using a Half-Weight encoder to produce a third set of source symbols from the first two sets of source symbols, and then the dynamic output symbols are calculated based on all three sets of source symbols.
- Other methods and processes for both the generation of source symbols and dynamic output symbols, and the decoding of these output symbols, have been described in U.S. Pat. No. 6,856,263 entitled “Systems and Processes for Decoding a Chain Reaction Code Through Inactivation,” assigned to the assignee of the present application (hereinafter “Inactivation Decoding”). One advantage of a decoder according to the teachings of Inactivation Decoding over a multi-stage chain reaction decoder described in Shokrollahi I is that an inactivation decoding decoder typically has a lower probability of error.
- The encoding for an MS encoder in some embodiments can be partitioned into two stages. The first stage computes redundant symbols from the original input symbols, and the second stage generates output symbols from combinations of the original input symbols and redundant symbols. In some embodiments of an MS encoder, the first stage can be further partitioned into two or more steps, where some of these steps compute redundant symbols based on Low Density Parity Check (LDPC) codes or other codes, and where other steps compute redundant symbols based on other codes. To lower the probability of error of the decoder, both multi-stage fountain decoding and some embodiments of decoding described in Inactivation Decoding may make use of a Half-Weight code in these other steps, and thus a Half-Weight code is used in these embodiments in one of the primary stages of static encoding.
- The techniques of multi-stage encoding, multi-stage fountain decoding and Inactivation Decoding may also be applied to other forward error correction codes including LDPC (“low density parity check”) codes, IRA (“Irregular Repeat Accumulate”) code, AIRA (“Accumulate Irregular Repeat Accumulate”) codes, and LDGM (“low density generator matrix”) codes, and many other classes of codes based on graphs, and thus the techniques described herein for Half-Weight code generation may also be applied in those settings.
- As described in Luby05, a Half-Weight code generates, for a given number k of first source symbols, a number h of redundant symbols (referred to as the “Half-Weight symbols” hereinafter). The number h is the smallest integer with the property illustrated in Equation 1, where for positive integers i and j with i>j, the function choose(i,j) is equal to i!/((i−j)!·j!) with n!=1·2· . . . ·(n−1)·n, and where for real-valued x the function ceil(x) is the smallest integer greater than or equal to x.
- choose(h, ceil(h/2))≧k   (Equ. 1)
- As described in Luby05, Half-Weight symbols can be calculated in a specific way from the k source symbols. Using the straightforward method for the computation of these symbols, each Half-Weight symbol requires, on average, around k/2 XORs of source symbols, and thus, in total, the h Half-Weight symbols require around (k/2)·h XORs of source symbols. Since h is of the order log(k), this amounts to roughly k·log(k)/2 XORs of input symbols for the calculation of the Half-Weight symbols. Taking into account that other redundant symbols calculated via, for example, LDPC encoding, require much less computational time, the calculation of the Half-Weight symbols using the straightforward approach would constitute a computational bottleneck for the design of some embodiments of reliable multi-stage encoders.
- What is therefore needed is an apparatus or process for calculating the Half-Weight symbols that is much more efficient than the straightforward one, and can be implemented easily on various computing devices.
- In embodiments of encoders and/or decoders according to the present invention, efficient methods for encoding and decoding Half-Weight codes and similar high density codes are disclosed. Such efficient methods can operate with at most 3·(k−1)+h/2+1 XORs of symbols to calculate h Half-Weight symbols from k source symbols, where h is of the order of log(k).
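- To make the comparison concrete, a small illustrative calculation (our own sketch) evaluates both operation counts for a few values of k, using Equation 1 above to determine h:

```python
from math import comb, ceil

def h_from_equation_1(k: int) -> int:
    """Smallest h with choose(h, ceil(h/2)) >= k."""
    h = 1
    while comb(h, ceil(h / 2)) < k:
        h += 1
    return h

for k in (1_000, 10_000, 100_000):
    h = h_from_equation_1(k)
    straightforward = k / 2 * h            # about k/2 symbol XORs per Half-Weight symbol
    efficient = 3 * (k - 1) + h / 2 + 1    # bound stated above
    print(k, h, int(straightforward), int(efficient))
```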
- According to one embodiment of the invention, a method of encoding data operates on an ordered set of input symbols and includes generating intermediate symbols from the set of input symbols and possibly a set of a plurality of redundant symbols from the intermediate symbols. The method also includes generating a plurality of output symbols from a combined set of symbols including the input symbols, the intermediate symbols and possibly the redundant symbols. The intermediate symbols might include a plurality of Half-Weight symbols wherein their number is a function of the number of input symbols.
- According to still another embodiment of the invention, a system for receiving data transmitted from a source over a communications channel is provided using similar techniques. The system comprises a receive module coupled to a communications channel for receiving output symbols transmitted over the communications channel, including a decoder for Half-Weight symbols.
- According to yet another embodiment of the invention, a computer data signal embodied in a carrier wave is provided, wherein Half-Weight symbols are encoded.
- Numerous benefits are achieved by way of the present invention. For example, in a specific embodiment, the computational expense of encoding data for transmission over a channel is reduced. In another specific embodiment, the computational expense of decoding such data is reduced. Depending upon the embodiment, one or more of these benefits may be achieved. These and other benefits are provided in more detail throughout the present specification and more particularly below.
- A further understanding of the nature and the advantages of the inventions disclosed herein may be realized by reference to the remaining portions of the specification and the attached drawings.
- FIG. 1 is a block diagram of a communications system according to one embodiment wherein the present invention might be used.
- FIG. 2 is a block diagram of an encoder usable in the communication system of FIG. 1.
- FIG. 3 is a simplified block diagram of a method of generating redundant symbols usable in the communication system of FIG. 1.
- FIG. 4 is a simplified block diagram of the basic operation of a static encoder usable in the communication system of FIG. 1.
- FIG. 5 is a simplified block diagram of a dynamic encoder usable in the communication system of FIG. 1.
- FIG. 6 is a simplified block diagram of a basic operation of a dynamic encoder usable in the communication system of FIG. 1.
- FIG. 7 is a simplified block diagram of a static encoder usable in the communication system of FIG. 1.
- FIG. 8 is a simplified block diagram of the basic operation of a static encoder usable in the communication system of FIG. 1.
- FIG. 9 is a simplified block diagram of a decoder usable in the communication system of FIG. 1.
- FIG. 10 is an illustration of an example of the first few elements of a Gray code sequence.
- FIG. 11 is an illustration of an example of a Half-Weight matrix H.
- FIG. 12 is a flowchart of an overall efficient method for encoding a Half-Weight code according to embodiments of the present invention.
- FIG. 13 is a flowchart of a method for implementing the Initialize Computation phase of FIG. 12.
- FIG. 14 is a flowchart of a method for implementing the Perform Core Computation phase of FIG. 12.
- FIG. 15 is a flowchart of a method for implementing the Clean Up phase of FIG. 12.
- In the specific embodiments described herein, a coding scheme denoted as “multi-stage coding” is described, embodiments of which are provided in Shokrollahi I.
- Multi-stage encoding encodes the data in a plurality of stages. Typically, but not always, a first stage adds a predetermined amount of redundancy to the data. A second stage then uses a fountain code, or the like, to produce output symbols from the original data and the redundant symbols computed by the first stage of the encoding. The first stage's redundancy might be in the form of LDPC symbols, Half-Weight symbols, source symbols, etc.
- As used herein, input symbols are data that an encoder seeks to encode such that a decoder can recover those input symbols. In some multi-stage encoders described herein, a pre-coder generates a set of intermediate symbols from the input symbols using some control inputs known to the encoder and to the decoder (at least when the decoder is tasked with decoding information that depends on knowing those control inputs). A next stage might generate a set of redundant symbols from the intermediate symbols, which might include Half-Weight symbols. The input symbols and the redundant symbols might then collectively form a set of source symbols that form input to a dynamic encoder such as a fountain encoder. For systematic coding, the source symbols include the input symbols, whereas for nonsystematic coding, the source symbols contain less than all or none of the input symbols and rely more on redundant symbols. In the latter case, the redundant symbols might not be referred to as redundant, but just as being source symbols. Furthermore, in some embodiments the labels “intermediate symbol” and “redundant symbol” are interchangeable and the source symbols comprise input symbols and redundant/intermediate symbols.
- In some embodiments, the output symbols can be generated from source symbols as needed or based on a maximum anticipated error rate. In embodiments in which the second stage comprises fountain encoding, each output symbol can be generated without regard to how other output symbols are generated, in part because they can be generated statistically with high likelihood of information additivity between independently generated output symbols due to the large key alphabet. Once generated, these output symbols can then be placed into packets and transmitted to their destination, with each packet containing one or more output symbols. Non-packetized transmission techniques can be used instead or as well.
- As used herein, the term “file” refers to any data that is stored at one or more sources and is to be delivered as a unit to one or more destinations. Thus, a document, an image, and a file from a file server or computer storage device, are all examples of “files” that can be delivered. Files can be of known size (such as a one megabyte image stored on a hard disk) or can be of unknown size (such as a file taken from the output of a streaming source). Either way, the file is a sequence of input symbols, where each input symbol has a position in the file and a value.
- As used herein, the term “stream” refers to any data that is stored or generated at one or more sources and is delivered at a specified rate at each point in time in the order it is generated to one or more destinations. Streams can be fixed rate or variable rate. Thus, an MPEG video stream, AMR audio stream, and a data stream used to control a remote device, are all examples of “streams” that can be delivered. The rate of the stream at each point in time can be known (such as 4 megabits per second) or unknown (such as a variable rate stream where the rate at each point in time is not known in advance). Either way, the stream is a sequence of input symbols, where each input symbol has a position in the stream and a value.
- Transmission is the process of transmitting data from one or more senders to one or more recipients through a channel in order to deliver a file or stream. A sender is also sometimes referred to as the encoder. If one sender is connected to any number of recipients by a perfect channel, the received data can be an exact copy of the input file or stream, as all the data will be received correctly. Here, we assume that the channel is not perfect, which is the case for most real-world channels. Of the many channel imperfections, two imperfections of interest are data erasure and data incompleteness (which can be treated as a special case of data erasure). Data erasure occurs when the channel loses or drops data. Data incompleteness occurs when a recipient does not start receiving data until some of the data has already passed it by, the recipient stops receiving data before transmission ends, the recipient chooses to only receive a portion of the transmitted data, and/or the recipient intermittently stops and starts again receiving data. As an example of data incompleteness, a moving satellite sender might be transmitting data representing an input file or stream and start the transmission before a recipient is in range. Once the recipient is in range, data can be received until the satellite moves out of range, at which point the recipient can redirect its satellite dish (during which time it is not receiving data) to start receiving the data about the same input file or stream being transmitted by another satellite that has moved into range. As should be apparent from reading this description, data incompleteness is a special case of data erasure, since the recipient can treat the data incompleteness (and the recipient has the same problems) as if the recipient was in range the entire time, but the channel lost all the data up to the point where the recipient started receiving data. Also, as is well known in communication systems design, detectable errors can be considered equivalent to erasures by simply dropping all data blocks or symbols that have detectable errors.
- In some communication systems, a recipient receives data generated by multiple senders, or by one sender using multiple connections. For example, to speed up a download, a recipient might simultaneously connect to more than one sender to transmit data concerning the same file. As another example, in a multicast transmission, multiple multicast data streams might be transmitted to allow recipients to connect to one or more of these streams to match the aggregate transmission rate with the bandwidth of the channel connecting them to the sender. In all such cases, a concern is to ensure that all transmitted data is of independent use to a recipient, i.e., that the multiple source data is not redundant among the streams, even when the transmission rates are vastly different for the different streams, and when there are arbitrary patterns of loss.
- In general, a communication channel is that which connects the sender and the recipient for data transmission. The communication channel could be a real-time channel, where the channel moves data from the sender to the recipient as the channel gets the data, or the communication channel might be a storage channel that stores some or all of the data in its transit from the sender to the recipient. An example of the latter is disk storage or other storage device. In that example, a program or device that generates data can be thought of as the sender, transmitting the data to a storage device. The recipient is the program or device that reads the data from the storage device. The mechanisms that the sender uses to get the data onto the storage device, the storage device itself and the mechanisms that the recipient uses to get the data from the storage device collectively form the channel. If there is a chance that those mechanisms or the storage device can lose data, then that would be treated as data erasure in the communication channel.
- When the sender and recipient are separated by a communication channel in which symbols can be erased, it is preferable not to transmit an exact copy of an input file or stream, but instead to transmit data generated from the input file or stream (which could include all or parts of the input file or stream itself) that assists with recovery of erasures. An encoder is a circuit, device, module or code segment that handles that task. One way of viewing the operation of the encoder is that the encoder generates output symbols from input symbols, where a sequence of input symbol values represents the input file or a block of the stream. Each input symbol would thus have a position, in the input file or block of the stream, and a value. A decoder is a circuit, device, module or code segment that reconstructs the input symbols from the output symbols received by the recipient. In multi-stage coding, the encoder and the decoder are further divided into sub-modules each performing a different task.
- In embodiments of multi-stage coding systems, the encoder and the decoder can be further divided into sub-modules, each performing a different task. For instance, in some embodiments, the encoder comprises what is referred to herein as a static encoder and a dynamic encoder. As used herein, a “static encoder” is an encoder that generates a number of redundant symbols from a set of input symbols, wherein the number of redundant symbols is determined prior to encoding. Examples of static encoding codes include Reed-Solomon codes, Tornado codes, Hamming codes, Low Density Parity Check (LDPC) codes, etc. The term “static decoder” is used herein to refer to a decoder that can decode data that was encoded by a static encoder. For some encoders, the input symbols are used to generate intermediate symbols, which are used to generate the source symbols and the intermediate symbols are not transmitted.
- As used herein, a “dynamic encoder” is an encoder that generates output symbols from a set of input symbols, where the number of possible output symbols is orders of magnitude larger than the number of input symbols, and where the number of output symbols to be generated need not be fixed. One example of a dynamic encoder is a chain reaction encoder, such as the encoders described in Luby I and Luby05. The term “dynamic decoder” is used herein to refer to a decoder that can decode data that was encoded by a dynamic encoder.
- Embodiments of multi-stage coding need not be limited to any particular type of input symbol. Typically, the values for the input symbols are selected from an alphabet of 2^M symbols for some positive integer M. In such cases, an input symbol can be represented by a sequence of M bits of data from the input file or stream. The value of M is often determined based on, for example, the uses of the application, the communication channel, and/or the size of the output symbols. Additionally, the size of an output symbol is often determined based on the application, the channel, and/or the size of the input symbols. In some cases, the coding process might be simplified if the output symbol values and the input symbol values were the same size (i.e., representable by the same number of bits or selected from the same alphabet). If that is the case, then the input symbol value size is limited when the output symbol value size is limited. For example, it may be desired to put output symbols in packets of limited size. If some data about a key associated with the output symbols were to be transmitted in order to recover the key at the receiver, the output symbol would preferably be small enough to accommodate, in one packet, the output symbol value and the data about the key.
- As an example, if an input file is a multiple megabyte file, the input file might be broken into thousands, tens of thousands, or hundreds of thousands of input symbols with each input symbol encoding thousands, hundreds, or only a few bytes. As another example, for a packet-based Internet channel, a packet with a payload of size of 1024 bytes might be appropriate (a byte is 8 bits). In this example, assuming each packet contains one output symbol and 8 bytes of auxiliary information, an output symbol size of 8128 bits ((1024−8)*8) would be appropriate. Thus, the input symbol size could be chosen as M=(1024−8)*8, or 8128 bits. As another example, some satellite systems use the MPEG packet standard, where the payload of each packet comprises 188 bytes. In that example, assuming each packet contains one output symbol and 4 bytes of auxiliary information, an output symbol size of 1472 bits ((188−4)*8) would be appropriate. Thus, the input symbol size could be chosen as M=(188−4)*8, or 1472 bits. In a general-purpose communication system using multi-stage coding, the application-specific parameters, such as the input symbol size (i.e., M, the number of bits encoded by an input symbol), might be variables set by the application.
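- The arithmetic in these examples can be written out directly (purely illustrative; the packet payload sizes and per-packet auxiliary data sizes are the ones assumed in the text above):

```python
def symbol_size_bits(payload_bytes: int, aux_bytes: int) -> int:
    """One output symbol per packet: symbol size = (payload - auxiliary data) in bits."""
    return (payload_bytes - aux_bytes) * 8

print(symbol_size_bits(1024, 8))   # 8128 bits, the Internet packet example
print(symbol_size_bits(188, 4))    # 1472 bits, the MPEG packet example
```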
- As another example, for a stream that is sent using variable size source packets, the symbol size might be chosen to be rather small so that each source packet can be covered with an integral number of input symbols that have aggregate size at most slightly larger than the source packet.
- Each output symbol has a value. In one preferred embodiment, which we consider below, each output symbol also has associated therewith an identifier called its “key.” Preferably, the key of each output symbol can be easily determined by the recipient to allow the recipient to distinguish one output symbol from other output symbols. Preferably, the key of an output symbol is distinct from the keys of all other output symbols. There are various forms of keying discussed in previous art. For example, Luby I describes various forms of keying that can be employed in embodiments of the present invention.
- Multi-stage coding is particularly useful where there is an expectation of data erasure or where the recipient does not begin and end reception exactly when a transmission begins and ends. The latter condition is referred to herein as “data incompleteness.” Regarding erasure events, multi-stage coding shares many of the benefits of fountain coding described in Luby I. In particular, multi-stage output symbols are information additive, so any suitable number of packets can be used to recover an input file or stream to a desired degree of accuracy.
- These conditions do not adversely affect the communication process when multi-stage coding is used, because the output symbols generated with multi-stage coding are information additive. For example, if a hundred packets are lost due to a burst of noise causing data erasure, an extra hundred packets can be picked up after the burst to replace the loss of the erased packets. If thousands of packets are lost because a receiver did not tune into a transmitter when it began transmitting, the receiver could just pick up those thousands of packets from any other period of transmission, or even from another transmitter.
- With multi-stage coding, a receiver is not constrained to pick up any particular set of packets, so it can receive some packets from one transmitter, switch to another transmitter, lose some packets, miss the beginning or end of a given transmission and still recover an input file or block of a stream. The ability to join and leave a transmission without receiver-transmitter coordination helps to simplify the communication process. With suitable selection of the stages, it might be that a coding scheme will result in output symbols that can be more quickly decoded, or with less computational effort.
- In some embodiments, transmitting a file or stream using multi-stage coding can include generating, forming or extracting input symbols from an input file or block of a stream, computing redundant symbols, encoding input and redundant symbols into one or more output symbols, where each output symbol is generated based on its key independently of all other output symbols, and transmitting the output symbols to one or more recipients over a channel. Additionally, in some embodiments, receiving (and reconstructing) a copy of the input file or block of a stream using multi-stage coding can include receiving some set or subset of output symbols from one of more data streams, and decoding the input symbols from the values and keys of the received output symbols.
- Suitable FEC erasure codes as described herein can be used to overcome the above-cited difficulties and would find use in a number of fields including multimedia broadcasting and multicasting systems and services. An FEC erasure code hereafter referred to as “a multi-stage fountain code” has properties that meet many of the current and future requirements of such systems and services.
- Some basic properties of multi-stage fountain codes are that, for any packet loss conditions and for delivery of source files of any relevant size or streams of any relevant rate: (a) reception overhead of each individual receiver device (“RD”) is minimized; (b) the total transmission time needed to deliver source files to any number of RDs can be minimized; and (c) the quality of the delivered stream to any number of RDs can be maximized for the number of output symbols sent relative to the number of input symbols, with suitable selection of transmission schedules. The RDs might be handheld devices, embedded into a vehicle, portable (i.e., movable but not typically in motion when in use) or fixed to a location.
- The amount of working memory needed for decoding is low and can still provide the above properties, and the amount of computation needed to encode and decode is minimal. In this document, we provide a simple and easy to implement description of some variations of multi-stage fountain codes.
- Multi-stage chain reaction codes are fountain codes, i.e., as many encoding packets as needed can be generated on-the-fly, each containing unique encoding symbols that are equally useful for recovering a source file or block of a stream, while decoding often occurs in a chain reaction fashion, i.e., as one symbol is recovered, it might be the last remaining symbol needed to recover other symbols, which in turn might be the last remaining symbols needed to recover other symbols, etc.
- There are many advantages to using fountain codes versus other types of FEC codes. One advantage is that, regardless of packet loss conditions and RD availability, fountain codes minimize the number of encoding packets each RD needs to receive to reconstruct a source file or block of a stream. This is true even under harsh packet loss conditions and when, for example, mobile RDs are only intermittently turned-on or available over a long file download session.
- Another advantage is the ability to generate exactly as many encoding packets as needed, making the decision on how many encoding packets to generate on-the-fly while the transmission is in progress. This can be useful if for example there is feedback from RDs indicating whether or not they received enough encoding packets to recover a source file or block of a stream. When packet loss conditions are less severe than expected the transmission can be terminated early. When packet loss conditions are more severe than expected or RDs are unavailable more often than expected the transmission can be seamlessly extended.
- Another advantage is the ability to inverse multiplex. Inverse multiplexing is when an RD is able to combine received encoding packets generated at independent senders to reconstruct a source file or block of a stream. One practical use of inverse multiplexing is described below in reference to receiving encoding packets from different senders.
- Where future packet loss, RD availability and application conditions are hard to predict, it is important to choose an FEC solution that is as flexible as possible to work well under unpredictable conditions. Multi-stage fountain codes provide a degree of flexibility unmatched by other types of FEC codes.
- Aspects of the invention will now be described with reference to the figures.
-
FIG. 1 is a block diagram of acommunications system 100 that uses multi-stage coding. Incommunications system 100, aninput file 101, or aninput stream 105, is provided to aninput symbol generator 110.Input symbol generator 110 generates a sequence of one or more input symbols (IS(0), IS(1), IS(2), . . . ) from the input file or stream, with each input symbol having a value and a position (denoted inFIG. 1 as a parenthesized integer). As explained above, the possible values for input symbols, i.e., its alphabet, is typically an alphabet of 2M symbols, so that each input symbol codes for M bits of the input file or stream. The value of M is generally determined by the use ofcommunication system 100, but a general purpose system might include a symbol size input forinput symbol generator 110 so that M can be varied from use to use. The output ofinput symbol generator 110 is provided to anencoder 115. - Static
key generator 130 produces a stream of static keys S0, S1, . . . . The number of the static keys generated is generally limited and depends on the specific embodiment ofencoder 115. The generation of static keys will be subsequently described in more detail. Dynamickey generator 120 generates a dynamic key for each output symbol to be generated by theencoder 115. Each dynamic key is generated so that a large fraction of the dynamic keys for the same input file or block of a stream are unique. For example, Luby I describes embodiments of key generators that can be used. The outputs of dynamickey generator 120 and the statickey generator 130 are provided toencoder 115. - From each key I provided by dynamic
key generator 120, encoder 115 generates an output symbol, with a value B(I), from the input symbols provided by the input symbol generator. The operation of encoder 115 will be described in more detail below. The value of each output symbol is generated based on its key, on some function of one or more of the input symbols, and possibly on one or more redundant symbols that had been computed from the input symbols. The collection of input symbols and redundant symbols that give rise to a specific output symbol is referred to herein as the output symbol's "associated symbols" or just its "associates". The selection of the function (the "value function") and the associates is done according to a process described in more detail below. Typically, but not always, M is the same for input symbols and output symbols, i.e., they both code for the same number of bits. - In some embodiments, the number K of input symbols is used by the
encoder 115 to select the associates. If K is not known in advance, such as where the input is a streaming file, K can be just an estimate. The value K might also be used byencoder 115 to allocate storage for input symbols and any intermediate symbols generated byencoder 115. -
Encoder 115 provides output symbols to a transmit module 140. Transmit module 140 is also provided the key of each such output symbol from the dynamic key generator 120. Transmit module 140 transmits the output symbols, and depending on the keying method used, transmit module 140 might also transmit some data about the keys of the transmitted output symbols, over a channel 145 to a receive module 150. Channel 145 is assumed to be an erasure channel, but that is not a requirement for proper operation of communication system 100. Modules 140 and 150 can be any suitable hardware components, software components, physical media, or any combination thereof, so long as transmit module 140 is adapted to transmit output symbols and any needed data about their keys to channel 145 and receive module 150 is adapted to receive symbols and potentially some data about their keys from channel 145. The value of K, if used to determine the associates, can be sent over channel 145, or it may be set ahead of time by agreement of encoder 115 and decoder 155. - As explained above,
channel 145 can be a real-time channel, such as a path through the Internet or a broadcast link from a television transmitter to a television recipient or a telephone connection from one point to another, orchannel 145 can be a storage channel, such as a CD-ROM, disk drive, Web site, or the like.Channel 145 might even be a combination of a real-time channel and a storage channel, such as a channel formed when one person transmits an input file from a personal computer to an Internet Service Provider (ISP) over a telephone line, the input file is stored on a Web server and is subsequently transmitted to a recipient over the Internet. - Because
channel 145 is assumed to be an erasure channel,communications system 100 does not assume a one-to-one correspondence between the output symbols that exit receivemodule 150 and the output symbols that go into transmitmodule 140. In fact, wherechannel 145 comprises a packet network,communications system 100 might not even be able to assume that the relative order of any two or more packets is preserved in transit throughchannel 145. Therefore, the key of the output symbols is determined using one or more of the keying schemes described above, and not necessarily determined by the order in which the output symbols exit receivemodule 150. - Receive
module 150 provides the output symbols to adecoder 155, and any data receivemodule 150 receives about the keys of these output symbols is provided to a dynamickey regenerator 160. Dynamickey regenerator 160 regenerates the dynamic keys for the received output symbols and provides these dynamic keys todecoder 155. Statickey generator 163 regenerates the static keys S0, S1, . . . and provides them todecoder 155. The static key generator has access torandom number generator 135 used both during the encoding and the decoding process. This can be in the form of access to the same physical device if the random numbers are generated on such device, or in the form of access to the same algorithm for the generation of random numbers to achieve identical behavior.Decoder 155 uses the keys provided by dynamickey regenerator 160 and statickey generator 163 together with the corresponding output symbols, to recover the input symbols (again IS(0), IS(1), IS(2), . . . ).Decoder 155 provides the recovered input symbols to aninput file reassembler 165, which generates acopy 170 ofinput file 101 orinput stream 105. -
FIG. 2 is a block diagram of one specific embodiment of encoder 115 shown in FIG. 1. Encoder 115 comprises a static encoder 210, a dynamic encoder 220, and a redundancy calculator 230. Static encoder 210 receives the following inputs: a) original input symbols IS(0), IS(1), . . . , IS(K−1) provided by the input symbol generator 110 and stored in an input symbol buffer 205; b) the number K of original input symbols; c) static keys S0, S1, . . . provided by the static key generator 130; and d) a number R of redundant symbols. Upon receiving these inputs, static encoder 210 computes R redundant symbols RE(0), RE(1), . . . , RE(R−1) as will be described below. Typically, but not always, the redundant symbols have the same size as the input symbols. In one specific embodiment, the redundant symbols generated by static encoder 210 are stored in input symbol buffer 205. Input symbol buffer 205 may be only logical, i.e., the file or block of the stream may be physically stored in one place and the positions of the input symbols within symbol buffer 205 may simply be renamings of the positions of these symbols within the original file or block of the stream. Some of the redundant symbols RE( ) might be Half-Weight symbols calculated as described below. - Dynamic encoder 220 receives the input symbols and the redundant symbols, and generates output symbols as will be described in further detail below. In one embodiment in which the redundant symbols are stored in the
input symbol buffer 205,dynamic encoder 220 receives the input symbols and redundant symbols frominput symbol buffer 205. -
Redundancy calculator 230 computes the number R of redundant symbols from the number K of input symbols. This computation is described in further detail below. - The general operation of
static encoder 210 is shown with reference to FIGS. 3 and 4. FIG. 3 is a simplified flow diagram illustrating one embodiment 300 of a method of statically encoding. In a step 305, a variable j, which keeps track of how many redundant symbols have been generated, is set to zero. Then, in a step 310, a first redundant symbol RE(0) is computed as a function F0 of at least some of the input symbols IS(0), . . . , IS(K−1). Then, in a step 315, the variable j is incremented. Next, in a step 320, it is tested whether all of the redundant symbols have been generated (i.e., is j greater than R−1?). If yes, then the flow ends. Otherwise, the flow proceeds to step 325. In step 325, RE(j) is computed as a function Fj of the input symbols IS(0), . . . , IS(K−1) and of the previously generated redundant symbols RE(0), . . . , RE(j−1), where Fj need not be a function that depends on every one of the input symbols or every one of the redundant symbols. Steps 315, 320, and 325 are repeated until all R redundant symbols have been computed.
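- The loop of FIG. 3 can be summarized in a short sketch. The Python fragment below is only an illustration; the list F of callables stands in for whatever functions F0, . . . , FR-1 a particular embodiment uses (for example, XORs of selected symbols), and the function name is ours, not the patent's.

```python
# Illustrative sketch of the static encoding flow of FIG. 3 (not reference code).
def static_encode(input_symbols, R, F):
    """Compute the R redundant symbols RE(0), ..., RE(R-1)."""
    redundant = []
    j = 0                                         # step 305: j = 0
    redundant.append(F[0](input_symbols, []))     # step 310: RE(0) = F0(IS(0..K-1))
    j += 1                                        # step 315: increment j
    while j <= R - 1:                             # step 320: more symbols to generate?
        # step 325: RE(j) = Fj(IS(0..K-1), RE(0..j-1))
        redundant.append(F[j](input_symbols, list(redundant)))
        j += 1                                    # step 315 again
    return redundant
```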
- Referring again to FIGS. 1 and 2, in some embodiments, static encoder 210 receives one or more static keys S0, S1, . . . from static key generator 130. In these embodiments, static encoder 210 uses the static keys to determine some or all of the functions F0, F1, . . . , FR-1. For example, static key S0 can be used to determine function F0, static key S1 can be used to determine function F1, etc. Or, one or more of static keys S0, S1, . . . can be used to determine function F0, one or more of static keys S0, S1, . . . can be used to determine function F1, etc. In other embodiments, no static keys are needed, and thus static key generator 130 is not needed. - Referring now to
FIGS. 2 and 3 , in some embodiments, the redundant symbols generated bystatic encoder 210 can be stored ininput symbol buffer 205.FIG. 4 is a simplified illustration of the operation of one embodiment ofstatic encoder 210. Particularly,static encoder 210 generates redundant symbol RE(j) as a function Fj of input symbols IS(0), . . . , IS(K−1), RE(0), . . . , RE(j−1), received frominput symbol buffer 205, and stores it back intoinput symbol buffer 205. The exact form of the functions F0, F1, . . . , FR-1 depends on the particular application. Typically, but not always, functions F0, F1, . . . , FR-1 include an exclusive OR of some or all of their corresponding arguments. As described above, these functions may or may not actually employ static keys generated by statickey generator 130 ofFIG. 1 . For example, in one specific embodiment described below, the first few functions implement a Hamming code and do not make any use of the static keys S0, S1, . . . , whereas the remaining functions implement a Low-Density Parity-Check code and make explicit use of the static keys. - Referring again to
FIG. 2 ,dynamic encoder 220 receives input symbols IS(0), . . . , IS(K−1) and the redundant symbols RE(0), . . . , RE(R−1) and a key I for each output symbol it is to generate. The collection comprising the original input symbols and the redundant symbols will be referred to as the collection of “dynamic input symbols” hereafter. -
FIG. 5 is a simplified block diagram of one embodiment of a dynamic encoder, including aweight selector 510, anassociator 515, avalue function selector 520 and acalculator 525. As shown inFIG. 5 , the K+R dynamic input symbols are stored in adynamic symbol buffer 505. In effect,dynamic encoder 500 performs the action illustrated inFIG. 6 , namely, to generate an output symbol value B(I) as some value function of selected input symbols. -
FIG. 6 illustrates the action of a function used bycalculator 525. -
FIG. 7 is a simplified block diagram of one specific embodiment of a static encoder according to the present invention.Static encoder 600 comprises aparameter calculator 605, aHamming encoder 610, and a low-density-parity-check (LDPC)encoder 620.Hamming encoder 610 is coupled to receive the input symbols IS(0), . . . , IS(K−1) from aninput symbol buffer 625, the number K of input symbols, and the parameter D. In response, Hamming encoder 610 generates D+1 redundant symbols HA(0), HA(1), . . . , HA(D) according to a Hamming code. -
FIG. 8 illustrates the operation of one embodiment of the present invention that employs the static encoder shown inFIG. 7 . -
FIG. 9 is a simplified block diagram illustrating one embodiment of a decoder according to the present invention.Decoder 900 can be used, for example, to implementdecoder 155 ofFIG. 1 .Decoder 900 comprises adynamic decoder 905 and astatic decoder 910. Input symbols and redundant symbols recovered bydynamic decoder 905 are stored in areconstruction buffer 915. Upon completion of dynamic decoding,static decoder 910 attempts to recover any input symbols not recovered bydynamic decoder 905, if any. In particular,static decoder 910 receives input symbols and redundant symbols fromreconstruction buffer 915. - Many variations of LDPC decoders and Hamming decoders are well known to those skilled in the art, and can be employed in various embodiments according to the present invention. In one specific embodiment, a Hamming decoder is implemented using a Gaussian elimination algorithm. Many variations of Gaussian elimination algorithms are well known to those skilled in the art, and can be employed in various embodiments according to the present invention.
- Multi-stage fountain codes as described above are not systematic codes, i.e., all of the original source symbols of a source block are not necessarily among the encoding symbols that are sent. However, systematic FEC codes are useful for a file download system or service, and very important for a streaming system or service. As shown in the implementation below, a modified code can be made to be systematic and still maintain the fountain code and other described properties.
- One reason why it is easy to architect a variety of supplemental services using multi-stage codes is that a receiver can combine received encoding symbols from multiple senders to reconstruct a source file or stream without coordination among the senders. The only requirement is that the senders use differing sets of keys to generate the encoding symbols that they send in encoding packets. Ways to achieve this include designating different ranges of the key space to be used by each such sender, or generating keys randomly at each sender.
- As an example of the use of this capability, consider providing a supplemental service to a file download service that allows receivers that did not obtain enough encoding packets to reconstruct a source file from the file download session to request additional encoding packets from a make-up sender, e.g., via an HTTP session. The make-up sender generates encoding symbols from the source file and sends them, for example using HTTP, and all these encoding symbols can be combined with those received from the file download session to recover the source file. Using this approach allows different senders to provide incremental source file delivery services without coordination between the senders, while ensuring that each individual receiver need receive only a minimal number of encoding packets to recover each source file.
- For a given set of K input symbols, a static encoder may generate L intermediate symbols wherein L>K, wherein the first K of these are the input symbols (in the case of a systematic encoder) and the remaining L−K intermediate symbols are redundant symbols including Half-Weight symbols. The intermediate symbols are such that the input symbols can be recovered from the intermediate symbols. The source symbols used as input to a dynamic encoder can comprise some, all or none of the intermediate symbols (and/or the input symbols if these are not part of the intermediate symbols).
- A dynamic encoder generates output symbols from the source symbols. Where the dynamic encoder is a fountain encoder, the code can be "rateless" and as many output symbols can be generated as needed without having to resort to repeating information in duplicative symbols.
- Intermediate symbols are generated using an inverse encoding process. The output symbols do not need to include the intermediate symbols, i.e., intermediate symbols need not be included in data packets. A source block is sometimes broken into sub-blocks, each of which is sufficiently small to be decoded in working memory. For a source block comprising K source symbols, each sub-block comprises K sub-symbols, wherein each symbol of the source block comprises one sub-symbol from each sub-block.
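- The sub-blocking just described can be illustrated with a minimal sketch. This is only an illustration of the partitioning idea (the function and parameter names are ours); it assumes each source symbol's length is divisible by the number of sub-blocks.

```python
def split_into_sub_blocks(source_block, num_sub_blocks):
    """Split a source block of K symbols into sub-blocks: each source symbol
    contributes one equal-size sub-symbol to each sub-block, so every
    sub-block holds K sub-symbols and can be decoded independently in memory."""
    sub_blocks = [[] for _ in range(num_sub_blocks)]
    for symbol in source_block:                   # symbol: a bytes object
        sub_len = len(symbol) // num_sub_blocks
        for t in range(num_sub_blocks):
            sub_blocks[t].append(symbol[t * sub_len:(t + 1) * sub_len])
    return sub_blocks
```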
- A method and apparatus for generating intermediate symbols including Half-Weight symbols will now be described. A Half-Weight symbol is a symbol that depends on approximately half of the source symbols, such as being an XOR of approximately half of the source symbols and wherein each symbol lends a dependency to about half of the Half-Weight symbols. Where the number of source symbols and the number of Half-Weight symbols are both divisible by two, it might be such that exactly half of the Half-Weight symbols depend from any one source symbol and each of the Half-Weight symbols depend on exactly half of the source symbols. The graph mapping source symbols on one side to Half-Weight symbols on the other side might be generated using a Gray sequence.
- Values for a set of intermediate symbols are derived from the original input symbols such that knowledge of the intermediate symbols is sufficient to reconstruct the input symbols. With multiple stages, an encoder might produce redundant symbols that are each a function (such as the exclusive OR, or “XOR”) of a number of the intermediate symbols. The redundant symbols are produced in such a way that the intermediate symbols, and therefore also the source symbols, can be recovered from any sufficiently large set of output symbols, typically just the amount of data representing the original data plus some small extra amount. The intermediate symbols might comprise source symbols, Half-Weight symbols, or the like.
- Luby-Prov teaches the use of a systematic code encoder, along with a number of possible decoding algorithms. As taught there, construction of intermediate and repair symbols is based in part on a pseudo-random number generator based on a fixed set of random numbers that are available to both sender and receiver. The construction of the intermediate symbols from the source symbols might be governed by a systematic index.
- A pre-coder might be programmed or constructed to generate L intermediate symbols from K input symbols (L>K where the symbols are the same size). In this example, the first K intermediate symbols are the K input symbols being pre-coded for and the last L-K intermediate symbols are generated in terms of the first K intermediate symbols. The first K intermediate symbols might be generated in some other manner.
- The last L−K intermediate symbols C[K], . . . , C[L−1] might comprise S LDPC symbols and H Half-Weight symbols. The values of S and H are determined from K as described below, wherein L=K+S+H. Let X be the smallest positive integer such that X·(X−1)≧2·K and let S be the smallest prime integer such that S≧ceil(0.01·K)+X, where the function ceil(x) denotes the smallest integer which is greater than or equal to x. Then, determine H such that it is the smallest integer such that choose(H, ceil(H/2))≧K+S, where the function choose(i,j) denotes the number of ways j objects can be chosen from among i objects without repetition.
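- A short sketch of this parameter derivation follows. It is only an illustration (function names are ours), and it assumes the reading X·(X−1)≧2·K for the first condition, consistent with the other inequalities in the paragraph above.

```python
from math import ceil, comb

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def half_weight_parameters(K):
    """Derive S (number of LDPC symbols) and H (number of Half-Weight symbols)."""
    X = 1
    while X * (X - 1) < 2 * K:            # smallest X with X*(X-1) >= 2*K
        X += 1
    S = ceil(0.01 * K) + X                # smallest prime S >= ceil(0.01*K) + X
    while not is_prime(S):
        S += 1
    H = 1
    while comb(H, ceil(H / 2)) < K + S:   # smallest H with choose(H, ceil(H/2)) >= K+S
        H += 1
    return S, H
```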
- Next, determine the Half-Weight symbols using a process wherein H′=ceil(H/2). Let:
-
- C[0], . . . , C[K−1] denote the first K intermediate symbols
- C[K], . . . , C[K+S−1] denote the S LDPC symbols, initialized to zero.
- C[K+S], . . . , C[L−1] denote the H Half-Weight symbols, initialized to zero.
- The S LDPC symbols are defined to be the values of C[K], . . . , C[K+S−1] at the end of the following process, wherein floor(x) denotes the largest integer which is less than or equal to x, "i % j" denotes i modulo j, and X^Y denotes, for equal-length bit strings X and Y, the bitwise exclusive-or of X and Y:
-
- For i=0, . . . , K−1 do
  - a = 1 + (floor(i/S) % (S−1))
  - b = i % S
  - C[K+b] = C[K+b] ^ C[i]
  - b = (b+a) % S
  - C[K+b] = C[K+b] ^ C[i]
  - b = (b+a) % S
  - C[K+b] = C[K+b] ^ C[i]
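- The following sketch restates the loop above in Python for concreteness; it is an illustration only (the helper and function names are ours), with symbols represented as equal-length byte strings and C[K..K+S−1] assumed to start as all-zero symbols.

```python
def xor_bytes(a, b):
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def compute_ldpc_symbols(C, K, S):
    """Fill C[K..K+S-1] with the S LDPC symbols, following the loop above."""
    for i in range(K):
        a = 1 + (i // S) % (S - 1)
        b = i % S
        C[K + b] = xor_bytes(C[K + b], C[i])
        b = (b + a) % S
        C[K + b] = xor_bytes(C[K + b], C[i])
        b = (b + a) % S
        C[K + b] = xor_bytes(C[K + b], C[i])
```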
- The H Half-Weight symbols are defined as follows:
- Let g[i] = i ^ (floor(i/2)) for all positive integers i, wherein g[i] is a Gray sequence in which each element differs from the previous one in a single bit position. Let g[j, k] denote the jth element, j=0, 1, 2, . . . , etc., of the subsequence of g[i] whose elements have exactly k non-zero bits in their binary representation. Then, the Half-Weight symbols are defined as the values of C[K+S], . . . , C[L−1] after the following process:
-
- For h=0, . . . , H−1 do
  - For j=0, . . . , K+S−1 do
    - If bit h of g[j,H′] is equal to 1, then C[h+K+S] = C[h+K+S] ^ C[j].
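- A direct transcription of this double loop is sketched below. It is illustrative only (helper names are ours): it reuses xor_bytes() from the earlier sketch, assumes C[K+S..L−1] start as all-zero symbols, and builds the Gray subsequence g[j,H′] with a simple filter.

```python
def gray(i):
    """The Gray sequence g[i] = i XOR floor(i/2)."""
    return i ^ (i >> 1)

def gray_subseq(weight, count):
    """First `count` elements of g[., weight]: Gray values with exactly `weight` one-bits."""
    vals, c = [], 0
    while len(vals) < count:
        v = gray(c)
        if bin(v).count("1") == weight:
            vals.append(v)
        c += 1
    return vals

def half_weight_symbols_direct(C, K, S, H):
    """Straightforward computation of the H Half-Weight symbols C[K+S..L-1]
    from C[0..K+S-1], following the double loop above."""
    H_prime = -(-H // 2)                     # H' = ceil(H/2)
    g_sub = gray_subseq(H_prime, K + S)      # g[0,H'], ..., g[K+S-1,H']
    for h in range(H):
        for j in range(K + S):
            if (g_sub[j] >> h) & 1:          # bit h of g[j,H'] set?
                C[K + S + h] = xor_bytes(C[K + S + h], C[j])
```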
- A more efficient method and apparatus will now be described for quickly generating the Half-Weight symbols.
- An encoder for Half-Weight codes computes, for a sequence of k source symbols, a plurality of h Half-Weight symbols. Each of these Half-Weight symbols is the XOR of some of the source symbols, where generally the lengths of all the symbols are the same. These Half-Weight symbols can be used as intermediate symbols that provide input to a dynamic encoder. We use matrix notation to describe which source symbols are XORed to yield a given Half-Weight symbol.
- A Gray code sequence g[•] is used to describe the relationship between the source symbols and the Half-Weight symbols generated from the source symbols. To illustrate the concept, we use the following particular Gray code. For all positive integers i, let g[i] be defined as follows. Let b[i] be the highest order bit that is different in the binary representation of i−1 and i. Then, the binary representation of g[i] is obtained by flipping bit b[i] of g[i−1].
FIG. 10 shows the first few elements of a Gray code sequence generated according to this rule. Note that g[·] has the property that each pair of consecutive elements in the sequence differ in exactly one bit position. Note also that there are other Gray code sequences besides the ones described here that would be equally suitable as the basis for a Half-Weight code. - We also use the function cnt[i], where cnt[i] returns the number of bits that are set to one in the binary representation of i. For example, if i=25 in decimal, then its binary representation is 11001, and thus cnt[25]=3.
- For any fixed positive integer j, let g[·j] be the subsequence of g[·] where for each element in the sequence exactly j bits are set to 1. Thus, g[·j] can be defined based on g[·] as follows:
- Initialize c=0, i=0
- Do forever:
  - While (cnt(g[c]) ≠ j) c = c+1
  - g[i,j] = g[c]
  - c = c+1
  - i = i+1
- Thus, for example, for j=3, g[0,3]=g[5]=7, g[1,3]=g[9]=13, g[2,3]=g[11]=14, g[3,3]=g[13]=11, g[4,3]=g[17]=25, etc. Note that g[·,j] has the property that each pair of consecutive elements in the sequence differ in exactly two bit positions in their binary representation. For all i≧1, let Pos1[i,j] be one of the two bit positions in which g[i,j] and g[i−1,j] differ and let Pos2[i,j] be the other bit position in which they differ. Thus, for example, Pos1[4,3]=1 and Pos2[4,3]=4.
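- The worked example above can be checked with a few lines of Python. This sketch is illustrative only (function names are ours) and reuses gray() and gray_subseq() from the earlier sketch; it extracts the two differing bit positions of consecutive subsequence elements.

```python
def pos_pair(i, j):
    """The two bit positions in which g[i,j] and g[i-1,j] differ
    (i.e., Pos1[i,j] and Pos2[i,j]), using gray_subseq() from above."""
    seq = gray_subseq(j, i + 1)
    diff = seq[i] ^ seq[i - 1]
    return [b for b in range(diff.bit_length()) if (diff >> b) & 1]

# Reproduce the example from the text:
assert gray_subseq(3, 5) == [7, 13, 14, 11, 25]
assert pos_pair(4, 3) == [1, 4]
```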
- Let (x0, x1, . . . , xk-1) denote the sequence of k source symbols, and let (y0, y1, . . . , yh-1) denote the sequence of Half-Weight symbols, where h satisfies
Equation 1. The relationship between the source symbols and the h Half-Weight symbols can be represented as shown inFIG. 11 by a matrix H with k rows and h columns with each entry being 0 or 1 such that the Half-Weight symbols can be calculated from the sourcesymbols using Equation 2. -
(x0, x1, . . . , xk-1)·H = (y0, y1, . . . , yh-1)  (Equ. 2)
Equation 2, if H[l,k] denotes the entry of the matrix H at position (l, k), then yj is equal to H[0,j]·X0⊕H[1,j]·x1⊕ . . . ⊕H[k−1,j]·Xk-1, wherein the symbol “⊕+” is used to denote the XOR operation. Herein, H[l,j]·x1 is defined as 0 if H[l,j]=0, and as X1 if H[l,j]=1. - The matrix H has a special form that can be described as follows. Let h′=ceil(h/2), i.e., h′ is equal to h/2 rounded up to the nearest integer. The matrix H is the k by h matrix in which the rows are indexed from 0 to k−1, where row i is equal to the binary representation of g[i,h′]. For example, when k=7, we have h=5 (from Equ. 1) and h′=3, the corresponding matrix H is as shown in
FIG. 111 . In this example, the seven rows of the matrix are indexed 0 through 6 and the entries in the rows correspond to the binary representations of theintegers - In a straightforward calculation of the product in
Equation 2, the number of XORs of source symbols performed for calculating each Half-Weight symbol yi for i=0, 1, . . . , h−1 is one less than the number of ones in the column corresponding to i in the matrix H. Since H has exactly k·h′ ones, the number of XORs of source symbols performed by the straightforward method is k·(h′−1), where h′ is of the order of log(k). - We present a method by which the Half-Weight symbols can be calculated with at most 3·(k−1)+h′+1 XORs of symbols.
- An exemplary embodiment of an efficient method for calculation of Half-Weight symbols is shown in
FIG. 12 . The method comprises three phases: an Initialize Computation phase shown instep 1210, a Perform Core Computation phase shown instep 1215, and a Clean Up phase shown instep 1220. Other variations are possible. The method starts by receiving the source symbols x0, x1, . . . , xk-1 instep 1205, performs the phases shown insteps step 1225. - An exemplary embodiment of the
Initialize Computation phase 1210 ofFIG. 12 is shown inFIG. 13 . TheInitialize Computation phase 1210 shown inFIG. 13 comprises two steps.Step 1305 initializes the values of each of the Half-Weight symbols to zero. Thesecond Step 1310 initializes the value of a symbol variable SUM to x0. During the PerformCore Computation phase 1215 ofFIG. 12 , the value of SUM will be the XOR of all the source symbols visited during the execution of the process. -
FIG. 14 shows one exemplary embodiment of the PerformCore Computation phase 1215 ofFIG. 12 . The overall structure of this phase is a loop over the source symbols x1, x2, . . . , xk-1. A variable i keeps track of the indices of these symbols, and is, accordingly, initialized to one in 1405. InStep 1410, the value of SUM is XORed into the two Half-Weight symbols indexed by Pos1[i,h′] and Pos2[i,h′]. - The functions Pos1[i,h′] and Pos2[i,h′] can be calculated in a variety of ways, as is well known to those of skill in the art. For example, one way to do this is to explicitly calculate for consecutive indices of i the g[i,h′] sequence, and then calculate Pos1[i,h′] and Pos2[i,h′] based on the two positions where g[i−1,h] and g[i,h′] differ in their binary representation. As another example, J. R. Bitner, G. Ehrlich, E. M. Reingold, “Efficient generation of the binary reflected Gray code”, Communications of the ACM (1976) describes an efficient process that requires only a constant number of operations on standard central processing units to calculate each consecutive value of Pos1[i,h′] and Pos2[i,h′] for a fixed value of h′. As another example, a lookup table might be used to hold pre-computed values of Pos1[i,h′] and Pos2[i,h′] for indices i=1, . . . , K−1 and h′=1, . . . , H′, where K′ is an upper-bound on values of k to be used and H′ is an upper-bound on values of h′ to be used.
- In
step 1420, the value of xi is XORed into SUM to update the value of SUM. This update ensures that SUM remains the XOR of the values of all of the xi visited so far. - The counter i is incremented by one in
step 1425. Instep 1430, the value of i is compared to k, and if i<k, then steps 1410 through 1430 are repeated, whereas if i is equal to k, then the loop terminates atstep 1435, and the process continues with theClean Up phase 1220. - An exemplary embodiment of the
Clean Up phase 1220 is shown inFIG. 15 . This phase comprises aninitialization step 1505 and a computation that loops through values of a variable j ranging from 0 to h−1. The computation loop encompassessteps 1510 through 1550 inFIG. 15 . In theinitialization step 1505 the value of j is initialized to zero and the value of MASK is initialized to g[k−1,h′]. - In
Step 1510 of the computation loop, it is checked whether the least significant bit of the binary representation of MASK is equal to 1, and if the test is true, then the next isstep 1520. Instep 1520, the value of yj is updated by XORing it with the value of SUM, and the process continues withstep 1540. - If the result of the test in
Step 1510 is false, then the process jumps directly to step 1540. Instep 1540, the value of j is incremented by one and the least significant bit of MASK is shifted out of MASK, where this shift is implemented by the DIV operator comprising dividing MASK by two and dropping the remainder. This second part ofstep 1540 could be implemented in a variety of ways, including for example applying a shift operator to MASK.Step 1550 checks to see if j<h, and if j<h then processing returns to step 1510, but if this condition is not true then processing stops atstep 1560. Each of these steps can be performed by digital logic, software or other physical computation device. - One skilled in the art will recognize after reading this disclosure that there are other alternative methods to those shown in
FIGS. 12 , 13, 14 and 15 for efficient computation of the Half-Weight symbols. - As was mentioned above, one of the reasons for using the Half-Weight code is the excellent erasure correction capability of this code. In certain applications, however, it is important to protect the data with more redundant symbols than a Half-Weight code is capable of producing. This is particularly important when the redundant symbols are used to decrease the error in a probabilistic decoding algorithm, such as those described in Shokrollahi I. For this reason, a variation of using Half-Weight codes is described herein that is capable of producing several sets of redundant symbols, wherein each set of redundant symbols has the same number of redundant symbols as the Half-Weight code. This encoder is referred to as the “parallel Half-Weight encoder” herein.
- Denoting again by k the number of source symbols to the parallel Half-Weight encoder, and by s the number of sets of independent Half-Weight symbols that are to be generated, the method starts by calculating s random or pseudorandom permutations of the
integers - In some embodiments, one property of these permutations is that they are easy to calculate, i.e., it is easy to determine for any given j in among the
integers range range - If k is not a prime number, then the numbers a and b can be chosen by choosing a uniformly at random from the set of positive integers less than k which do not have a common divisor with k, while b can be chosen uniformly at random in the
range - Returning to the description of the parallel Half-Weight encoder, the encoder generates s random or pseudorandom permutations, denoted π1, π2, . . . , πs. There will be s sets of redundant Half-Weight symbols denoted y0,0, y0,1, . . . , y0,h-1; y1,0, y1,1, . . . , y1,h-1; . . . ; ys-1,0, ys-1,1, . . . , ys-1,h-1. As before, h is the smallest positive integer that satisfies
Equation 1. The generation of the redundant symbols of the parallel Half-Weight encoder is done one set at a time. - To generate the jth set, where j=0, 1, . . . , s−1, the encoder proceeds in a similar manner as the original Half-Weight encoder described above. The encoder might comprise three phases, similar to those shown in
FIG. 12 , with a modification for parallel operation ofstep 1420, when updating the value of SUM. In the parallel encoder, the value of SUM is updated as SUM=SUM ⊕x(πj(i)) for the jth set. - Using the Fast Encoder for Recovering from Erasures
- A decoder described in Shokrollahi I might be used to recover the original source symbols from the received dynamic symbols. If a Half-Weight code is used by the encoder, one of the steps in the decoding process in some embodiments is to recover some of the symbols involved in a Half-Weight code based on the recovery of the remainder of the symbols involved in the Half-Weight code. The Half-Weight code has the property that with high probability any combination of r≦h missing symbols can be recovered if the remaining symbols among the k+h symbols y0, y1, . . . , yh-1 and x0, x1, . . . , xk-1 have already been recovered. When used in conjunction with an Inactivation Decoder or other decoder wherein a Gaussian Elimination of the decoding matrix is performed, then the Half-Weight code has the property that addition of the Half-Weight rows to the decoding matrix formed from the other decoding stages in a multi-stage decoder will result in a matrix of full rank, with high probability, and hence reduce the error probability of the decoder.
- The relationship between a missing set of r symbols z0 can be efficiently calculated using at most 3·(k−1)+h/2+1 XORs of symbols given the values of the remaining t=k+h−r symbols, using a variant of the efficient method for calculating the Half-Weight code disclosed above. This can dramatically speed up the overall decoding time, since this avoids using k·log(k) XORs of symbols to calculate the relationships between the r missing symbols based on a variant of the straightforward method of calculating the Half-Weight code.
- Such a variant comprises making the following changes to the Half-Weight encoding method described in
FIGS. 12 , 13, 14 and 15. Instep 1205 ofFIG. 12 , the symbols received are the t symbols among the k+h source and Half-Weight symbols. Instep 1305 ofFIG. 13 , the missing Half-Weight symbols are initialized to zero and the remaining received Half-Weight symbols retain their received values. Instep 1310 ofFIG. 13 , if source symbol x0 is among the received source symbols, then SUM is set to x0, and if x0 is among the missing source symbols, then SUM is initialized to zero. Instep 1420 ofFIG. 14 , if xi is among the received source symbols, then SUM is updated as described instep 1420, and if xi is among the missing source symbols, then SUM is left unchanged. Suppose r′ of the source symbols are missing and r″ of the Half-Weight symbols are missing, and thus r′+r″=r. Then, instep 1225 ofFIG. 12 , the dependency on the k−r′ received source symbol of the h−r″ received Half-Weight symbols has been removed, and thus the h−r″ received Half-Weight symbol values depend only on the missing r′ source symbols. Thereafter, a process that can solve the small system of equations involving the h−r″ received Half-Weight symbols and the missing r′ source symbols can be used to recover the missing r′ source symbol values. Examples of such systems of equations should be apparent after review of this disclosure. - As one skilled in the art will recognize, there are many other variants of the above efficient decoding method. For example, in
step 1410 ofFIG. 14 , one variant is to skip updates of missing Half-Symbol values, thus making the overall computation somewhat more efficient. As another variant, the missing Half-Symbol updates are not skipped instep 1410 ofFIG. 14 , and then instep 1225 ofFIG. 12 , the Half-Symbol values corresponding to missing Half-Symbols only depend on the missing source symbol values and their own missing value, and these equations may be useful for other part of an overall decoding process, e.g., if there are other equations that involve the missing Half-Symbols and missing source symbols from other parts of an overall code. This is the case, for example, when using an Inactivation Decoder with a multi-stage code. - In most of the examples described above, the input and output symbols encode for the same number of bits and each output symbol is placed in one packet (a packet being a unit of transport that is either received in its entirety or lost in its entirety). In some embodiments, the communications system is modified so that each packet contains several output symbols. The size of an output symbol value is then set to a size determined by the size of the input symbol values in the initial splitting of the file or blocks of the stream into input symbols, based on a number of factors. The decoding process remains essentially unchanged, except that output symbols arrive in bunches as each packet is received.
- The setting of input symbol and output symbol sizes is usually dictated by the size of the file or block of the stream and the communication system over which the output symbols are to be transmitted. For example, if a communication system groups bits of data into packets of a defined size or groups bits in other ways, the design of symbol sizes begins with the packet or grouping size. From there, a designer would determine how many output symbols will be carried in one packet or group and that determines the output symbol size. For simplicity, the designer would likely set the input symbol size equal to the output symbol size, but if the input data makes a different input symbol size more convenient, it can be used.
- The above described encoding process produces a stream of packets containing output symbols based on the original file or block of the stream. Each output symbol in the stream is generated independently of all other output symbols, and there is no lower or upper bound on the number of output symbols that can be created. A key is associated with each output symbol. That key, and some contents of the input file or block of the stream, determines the value of the output symbol. Consecutively generated output symbols need not have consecutive keys, and in some applications it would be preferable to randomly generate the sequence of keys, or pseudorandomly generate the sequence.
- Multi-stage decoding has a property that if the original file or block of the stream can be split into K equal sized input symbols and each output symbol value is the same length as an input symbol value, then the file or block can be recovered from K+A output symbols on average, with very high probability, where A is small compared to K. For example, for the weight distributions introduced above, the probability that the value of A exceeds α·K is at most 10^−12 if K is larger than 19,681, and it is at most 10^−10 for any value of K. Since the particular output symbols are generated in a random or pseudorandom order, and the loss of particular output symbols in transit is assumed random, some small variance exists in the actual number of output symbols needed to recover the input file or block. In some cases, where a particular collection of K+A packets are not enough to decode the entire input file or block, the input file or block is still recoverable if the receiver can gather more packets from one or more sources of output packets.
- Because the number of output symbols is only limited by the resolution of I, well more than K+A output symbols can be generated. For example, if I is a 32 bit number, 4 billion different output symbols could be generated, whereas the file or block of the stream could include K=50,000 input symbols. In some applications, only a small number of those 4 billion output symbols may be generated and transmitted and it is a near certainty that an input file or block of a stream can be recovered with a very small fraction of the possible output symbols and an excellent probability that the input file or block can be recovered with slightly more than K output symbols (assuming that the input symbol size is the same as the output symbol size).
- In some applications, it may be acceptable to not be able to decode all of the input symbols, or to be able to decode all of input symbols, but with a relatively low probability. In such applications, a receiver can stop attempting to decode all of the input symbols after receiving K+A output symbols. Or, the receiver can stop receiving output symbols after receiving less than K+A output symbols. In some applications, the receiver may even only receive K or less output symbols. Thus, it is to be understood that in some embodiments of the present invention, the desired degree of accuracy need not be complete recovery of all the input symbols.
- Further, in some applications where incomplete recovery is acceptable, the data can be encoded such that all of the input symbols cannot be recovered, or such that complete recovery of the input symbols would require reception of many more output symbols than the number of input symbols. Such an encoding would generally require less computational expense, and may thus be an acceptable way to decrease the computational expense of encoding.
- It is to be understood that the various functional blocks in the above described figures may be implemented by a combination of hardware and/or software, and that in specific implementations some or all of the functionality of some of the blocks may be combined. Similarly, it is also to be understood that the various methods described herein may be implemented by a combination of hardware and/or software.
- The above description is illustrative and not restrictive. Many variations of the invention will become apparent to those of skill in the art upon review of this disclosure. The scope of the invention should, therefore, be determined not with reference to the above description, but instead should be determined with reference to the appended claims along with their full scope of equivalents.
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/331,268 US20090307565A1 (en) | 2004-08-11 | 2008-12-09 | Method and apparatus for fast encoding of data symbols according to half-weight codes |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US60093204P | 2004-08-11 | 2004-08-11 | |
US11/202,933 US7721184B2 (en) | 2004-08-11 | 2005-08-11 | Method and apparatus for fast encoding of data symbols according to half-weight codes |
US12/331,268 US20090307565A1 (en) | 2004-08-11 | 2008-12-09 | Method and apparatus for fast encoding of data symbols according to half-weight codes |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/202,933 Division US7721184B2 (en) | 2004-08-11 | 2005-08-11 | Method and apparatus for fast encoding of data symbols according to half-weight codes |
Publications (1)
Publication Number | Publication Date |
---|---|
US20090307565A1 true US20090307565A1 (en) | 2009-12-10 |
Family
ID=35908165
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/202,933 Active 2028-01-27 US7721184B2 (en) | 2004-08-11 | 2005-08-11 | Method and apparatus for fast encoding of data symbols according to half-weight codes |
US12/331,268 Abandoned US20090307565A1 (en) | 2004-08-11 | 2008-12-09 | Method and apparatus for fast encoding of data symbols according to half-weight codes |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/202,933 Active 2028-01-27 US7721184B2 (en) | 2004-08-11 | 2005-08-11 | Method and apparatus for fast encoding of data symbols according to half-weight codes |
Country Status (2)
Country | Link |
---|---|
US (2) | US7721184B2 (en) |
WO (1) | WO2006020826A2 (en) |
Cited By (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060062243A1 (en) * | 2004-09-23 | 2006-03-23 | Dacosta Behram M | Reliable audio-video transmission system using multi-media diversity |
US20060062242A1 (en) * | 2004-09-23 | 2006-03-23 | Sony Corporation | Reliable audio-video transmission system using multi-media diversity |
US20080152061A1 (en) * | 2006-12-22 | 2008-06-26 | Kozat Ulas C | Method and apparatus for opportunistic multicasting with coded scheduling in wireless networks |
KR101130591B1 (en) * | 2006-12-22 | 2012-04-02 | 가부시키가이샤 엔티티 도코모 | A method and apparatus for opportunistic multicasting with coded scheduling in wireless networks |
USRE43741E1 (en) | 2002-10-05 | 2012-10-16 | Qualcomm Incorporated | Systematic encoding and decoding of chain reaction codes |
US20130254616A1 (en) * | 2012-03-22 | 2013-09-26 | Lsi Corporation | Systems and Methods for Variable Rate Coding in a Data Processing System |
US8887020B2 (en) | 2003-10-06 | 2014-11-11 | Digital Fountain, Inc. | Error-correcting multi-stage code generator and decoder for communication systems having single transmitters or multiple transmitters |
US8958375B2 (en) | 2011-02-11 | 2015-02-17 | Qualcomm Incorporated | Framing for an improved radio link protocol including FEC |
US9136878B2 (en) | 2004-05-07 | 2015-09-15 | Digital Fountain, Inc. | File download and streaming system |
US9136983B2 (en) | 2006-02-13 | 2015-09-15 | Digital Fountain, Inc. | Streaming and buffering using variable FEC overhead and protection periods |
US9178535B2 (en) | 2006-06-09 | 2015-11-03 | Digital Fountain, Inc. | Dynamic stream interleaving and sub-stream based delivery |
US9191151B2 (en) | 2006-06-09 | 2015-11-17 | Qualcomm Incorporated | Enhanced block-request streaming using cooperative parallel HTTP and forward error correction |
US9237101B2 (en) | 2007-09-12 | 2016-01-12 | Digital Fountain, Inc. | Generating and communicating source identification information to enable reliable communications |
US9236976B2 (en) | 2001-12-21 | 2016-01-12 | Digital Fountain, Inc. | Multi stage code generator and decoder for communication systems |
US9240810B2 (en) | 2002-06-11 | 2016-01-19 | Digital Fountain, Inc. | Systems and processes for decoding chain reaction codes through inactivation |
US9246633B2 (en) | 1998-09-23 | 2016-01-26 | Digital Fountain, Inc. | Information additive code generator and decoder for communication systems |
US9253233B2 (en) | 2011-08-31 | 2016-02-02 | Qualcomm Incorporated | Switch signaling methods providing improved switching between representations for adaptive HTTP streaming |
US9264069B2 (en) | 2006-05-10 | 2016-02-16 | Digital Fountain, Inc. | Code generator and decoder for communications systems operating using hybrid codes to allow for multiple efficient uses of the communications systems |
US9270299B2 (en) | 2011-02-11 | 2016-02-23 | Qualcomm Incorporated | Encoding and decoding using elastic codes with flexible source block mapping |
US9270414B2 (en) | 2006-02-21 | 2016-02-23 | Digital Fountain, Inc. | Multiple-field based code generator and decoder for communications systems |
US9281847B2 (en) | 2009-02-27 | 2016-03-08 | Qualcomm Incorporated | Mobile reception of digital video broadcasting—terrestrial services |
US9288010B2 (en) | 2009-08-19 | 2016-03-15 | Qualcomm Incorporated | Universal file delivery methods for providing unequal error protection and bundled file delivery services |
US9294226B2 (en) | 2012-03-26 | 2016-03-22 | Qualcomm Incorporated | Universal object delivery and template-based file delivery |
US9374552B2 (en) | 2013-11-11 | 2016-06-21 | Amazon Technologies, Inc. | Streaming game server video recorder |
US9380096B2 (en) | 2006-06-09 | 2016-06-28 | Qualcomm Incorporated | Enhanced block-request streaming system for handling low-latency streaming |
US9386064B2 (en) | 2006-06-09 | 2016-07-05 | Qualcomm Incorporated | Enhanced block-request streaming using URL templates and construction rules |
US9419749B2 (en) | 2009-08-19 | 2016-08-16 | Qualcomm Incorporated | Methods and apparatus employing FEC codes with permanent inactivation of symbols for encoding and decoding processes |
US9432433B2 (en) | 2006-06-09 | 2016-08-30 | Qualcomm Incorporated | Enhanced block-request streaming system using signaling or block creation |
US9578074B2 (en) | 2013-11-11 | 2017-02-21 | Amazon Technologies, Inc. | Adaptive content transmission |
US9582904B2 (en) | 2013-11-11 | 2017-02-28 | Amazon Technologies, Inc. | Image composition based on remote object data |
US9602802B2 (en) | 2010-07-21 | 2017-03-21 | Qualcomm Incorporated | Providing frame packing type information for video coding |
US9604139B2 (en) | 2013-11-11 | 2017-03-28 | Amazon Technologies, Inc. | Service for generating graphics object data |
US9634942B2 (en) | 2013-11-11 | 2017-04-25 | Amazon Technologies, Inc. | Adaptive scene complexity based on service quality |
US9641592B2 (en) | 2013-11-11 | 2017-05-02 | Amazon Technologies, Inc. | Location of actor resources |
US9805479B2 (en) | 2013-11-11 | 2017-10-31 | Amazon Technologies, Inc. | Session idle optimization for streaming server |
US9843844B2 (en) | 2011-10-05 | 2017-12-12 | Qualcomm Incorporated | Network streaming of media data |
US9917874B2 (en) | 2009-09-22 | 2018-03-13 | Qualcomm Incorporated | Enhanced block-request streaming using block partitioning or request controls for improved client-side handling |
Families Citing this family (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020129159A1 (en) * | 2001-03-09 | 2002-09-12 | Michael Luby | Multi-output packet server with independent streams |
EP2278719B1 (en) * | 2002-06-11 | 2013-12-18 | Digital Fountain, Inc. | Decoding of chain reaction codes through inactivation |
EP1900105A1 (en) * | 2005-04-15 | 2008-03-19 | Trellisware Technologies, Inc. | Clash-free irregular-repeat-accumulate code |
US7793190B1 (en) | 2005-08-10 | 2010-09-07 | Trellisware Technologies, Inc. | Reduced clash GRA interleavers |
WO2008003094A2 (en) | 2006-06-29 | 2008-01-03 | Digital Fountain, Inc. | Efficient representation of symbol-based transformations with application to encoding and decoding of forward error correction codes |
WO2008082572A1 (en) * | 2006-12-29 | 2008-07-10 | Interdigital Technology Corporation | Method and apparatus for transmitting and receiving multimedia broadcast multicast services via a dedicated downlink carrier |
EP2066038A1 (en) * | 2007-11-30 | 2009-06-03 | Deutsche Telekom AG | Method and system for constructing and decoding rateless codes with partial information |
CN101272150B (en) * | 2008-05-14 | 2010-09-29 | 中兴通讯股份有限公司 | Decoding method and device for low-density generating matrix code |
US9100153B2 (en) * | 2008-09-25 | 2015-08-04 | The Royal Institution For The Advancement Of Learning/Mcgill University | Methods and systems for improving iterative signal processing |
US7990290B1 (en) * | 2010-02-03 | 2011-08-02 | International Business Machines Corporation | Efficient rateless distributed compression of non-binary sources |
US9225961B2 (en) | 2010-05-13 | 2015-12-29 | Qualcomm Incorporated | Frame packing for asymmetric stereo video |
US9049497B2 (en) | 2010-06-29 | 2015-06-02 | Qualcomm Incorporated | Signaling random access points for streaming video data |
US9185439B2 (en) | 2010-07-15 | 2015-11-10 | Qualcomm Incorporated | Signaling data for multiplexing video components |
US9456015B2 (en) | 2010-08-10 | 2016-09-27 | Qualcomm Incorporated | Representation groups for network streaming of coded multimedia data |
WO2011124167A2 (en) * | 2011-05-11 | 2011-10-13 | 华为技术有限公司 | Encoding method and device thereof, decoding method and device thereof, encoding and decoding system |
CN103152124B (en) * | 2011-12-07 | 2017-06-20 | 华为技术有限公司 | A kind of unicast communication method, apparatus and system |
US9311640B2 (en) * | 2014-02-11 | 2016-04-12 | Digimarc Corporation | Methods and arrangements for smartphone payments and transactions |
CN103944678B (en) * | 2014-04-21 | 2017-04-12 | 浙江大学 | Fountain-code encoding method with feedback and unequal error protection capacity |
US9227579B1 (en) * | 2014-07-02 | 2016-01-05 | GM Global Technology Operations LLC | Hybrid wireless-wired architecture based on power lines for intra-vehicular communication |
US10903858B2 (en) * | 2015-05-27 | 2021-01-26 | Quantum Corporation | Dynamically variable error correcting code (ECC) system with hybrid rateless reed-solomon ECCs |
US10594689B1 (en) | 2015-12-04 | 2020-03-17 | Digimarc Corporation | Robust encoding of machine readable information in host objects and biometrics, and associated decoding and authentication |
WO2018204758A1 (en) * | 2017-05-04 | 2018-11-08 | Interdigital Patent Holdings, Inc. | Low complexity encryption with fountain codes |
US10355821B2 (en) * | 2017-06-14 | 2019-07-16 | Nokia Solutions And Networks Oy | Probabilistic signal shaping using a self-referencing sequence |
CN108123780B (en) * | 2018-01-31 | 2021-01-08 | 南京航空航天大学 | LT coding modulation method of 16QAM system |
US11223372B2 (en) | 2019-11-27 | 2022-01-11 | Hughes Network Systems, Llc | Communication throughput despite periodic blockages |
US11838127B2 (en) | 2022-03-11 | 2023-12-05 | Hughes Network Systems, Llc | Adaptive satellite communications |
Citations (93)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5844636A (en) * | 1997-05-13 | 1998-12-01 | Hughes Electronics Corporation | Method and apparatus for receiving and recording digital packet data |
US6079041A (en) * | 1995-08-04 | 2000-06-20 | Sanyo Electric Co., Ltd. | Digital modulation circuit and digital demodulation circuit |
US6141787A (en) * | 1997-05-19 | 2000-10-31 | Sanyo Electric Co., Ltd. | Digital modulation and demodulation |
US6166544A (en) * | 1998-11-25 | 2000-12-26 | General Electric Company | MR imaging system with interactive image contrast control |
US20010015944A1 (en) * | 1997-05-19 | 2001-08-23 | Sony Corporation | Recording method and apparatus for continuous playback of fragmented signals |
US20010033586A1 (en) * | 1996-12-17 | 2001-10-25 | Satoru Takashimizu | Receiving apparatus for digital broadcasting signal and receving/recording/reproducing apparatus thereof |
US20020009137A1 (en) * | 2000-02-01 | 2002-01-24 | Nelson John E. | Three-dimensional video broadcasting system |
US20020083345A1 (en) * | 2000-08-16 | 2002-06-27 | Halliday David C. | Method and system for secure communication over unstable public connections |
US20020143953A1 (en) * | 2001-04-03 | 2002-10-03 | International Business Machines Corporation | Automatic affinity within networks performing workload balancing |
US6510177B1 (en) * | 2000-03-24 | 2003-01-21 | Microsoft Corporation | System and method for layered video coding enhancement |
US20030138043A1 (en) * | 2002-01-23 | 2003-07-24 | Miska Hannuksela | Grouping of image frames in video coding |
US6631172B1 (en) * | 2000-05-01 | 2003-10-07 | Lucent Technologies Inc. | Efficient list decoding of Reed-Solomon codes for message recovery in the presence of high noise levels |
US20030207696A1 (en) * | 2002-05-06 | 2003-11-06 | Serge Willenegger | Multi-media broadcast and multicast service (MBMS) in a wireless communications system |
US20040015768A1 (en) * | 2002-03-15 | 2004-01-22 | Philippe Bordes | Device and method for inserting error correcting codes and for reconstructing data streams, and corresponding products |
US20040081106A1 (en) * | 2002-10-25 | 2004-04-29 | Stefan Bruhn | Delay trading between communication links |
US20040240382A1 (en) * | 2002-04-19 | 2004-12-02 | Daiji Ido | Data reception apparatus and data distribution system |
US20050018635A1 (en) * | 1999-11-22 | 2005-01-27 | Ipr Licensing, Inc. | Variable rate coding for forward link |
US6876623B1 (en) * | 1998-12-02 | 2005-04-05 | Agere Systems Inc. | Tuning scheme for code division multiplex broadcasting system |
US20050084006A1 (en) * | 2003-10-16 | 2005-04-21 | Shawmin Lei | System and method for three-dimensional video coding |
US20050091697A1 (en) * | 2003-10-27 | 2005-04-28 | Matsushita Electric Industrial Co., Ltd. | Apparatus for receiving broadcast signal |
US20050123058A1 (en) * | 1999-04-27 | 2005-06-09 | Greenbaum Gary S. | System and method for generating multiple synchronized encoded representations of media data |
US20050180415A1 (en) * | 2002-03-06 | 2005-08-18 | Gene Cheung | Medium streaming distribution system |
US6937618B1 (en) * | 1998-05-20 | 2005-08-30 | Sony Corporation | Separating device and method and signal receiving device and method |
US20050193309A1 (en) * | 2003-08-21 | 2005-09-01 | Francesco Grilli | Methods for forward error correction coding above a radio link control layer and related apparatus |
US20050195752A1 (en) * | 2004-03-08 | 2005-09-08 | Microsoft Corporation | Resolving partial media topologies |
US20050216472A1 (en) * | 2004-03-29 | 2005-09-29 | David Leon | Efficient multicast/broadcast distribution of formatted data |
US6985459B2 (en) * | 2002-08-21 | 2006-01-10 | Qualcomm Incorporated | Early transmission and playout of packets in wireless communication systems |
US20060015568A1 (en) * | 2004-07-14 | 2006-01-19 | Rod Walsh | Grouping of session objects |
US20060031738A1 (en) * | 2002-10-30 | 2006-02-09 | Koninklijke Philips Electronics, N.V. | Adaptative forward error control scheme |
US20060107174A1 (en) * | 2004-11-16 | 2006-05-18 | Bernd Heise | Seamless change of depth of a general convolutional interleaver during transmission without loss of data |
US20060120464A1 (en) * | 2002-01-23 | 2006-06-08 | Nokia Corporation | Grouping of image frames in video coding |
US7068681B2 (en) * | 1999-05-10 | 2006-06-27 | Samsung Electronics Co., Ltd. | Apparatus and method for exchanging variable-length data according to radio link protocol in mobile communication system |
US7073191B2 (en) * | 2000-04-08 | 2006-07-04 | Sun Microsystems, Inc | Streaming a single media track to multiple clients |
US7100188B2 (en) * | 1999-05-26 | 2006-08-29 | Enounce, Inc. | Method and apparatus for controlling time-scale modification during multi-media broadcasts |
US20060212444A1 (en) * | 2001-05-16 | 2006-09-21 | Pandora Media, Inc. | Methods and systems for utilizing contextual feedback to generate and modify playlists |
US20060229075A1 (en) * | 2005-04-09 | 2006-10-12 | Lg Electronics Inc. | Supporting handover of mobile terminal |
US20060248195A1 (en) * | 2005-04-27 | 2006-11-02 | Kunihiko Toumura | Computer system with a packet transfer device using a hash value for transferring a content request |
US20060244865A1 (en) * | 2005-03-02 | 2006-11-02 | Rohde & Schwarz, Inc. | Apparatus, systems, methods and computer products for providing a virtual enhanced training sequence |
US20070002953A1 (en) * | 2005-06-29 | 2007-01-04 | Kabushiki Kaisha Toshiba | Encoded stream reproducing apparatus |
US20070006274A1 (en) * | 2005-06-30 | 2007-01-04 | Toni Paila | Transmission and reception of session packets |
US20070016594A1 (en) * | 2005-07-15 | 2007-01-18 | Sony Corporation | Scalable video coding (SVC) file format |
US7168030B2 (en) * | 2003-10-17 | 2007-01-23 | Telefonaktiebolaget Lm Ericsson (Publ) | Turbo code decoder with parity information update |
US20070140369A1 (en) * | 2003-07-07 | 2007-06-21 | Limberg Allen L | System of robust DTV signal transmissions that legacy DTV receivers will disregard |
US7240358B2 (en) * | 2000-12-08 | 2007-07-03 | Digital Fountain, Inc. | Methods and apparatus for scheduling, serving, receiving media-on-demand for clients, servers arranged according to constraints on resources |
US20070162611A1 (en) * | 2006-01-06 | 2007-07-12 | Google Inc. | Discontinuous Download of Media Files |
US20070185973A1 (en) * | 2006-02-07 | 2007-08-09 | Dot Hill Systems, Corp. | Pull data replication model |
US20070195894A1 (en) * | 2006-02-21 | 2007-08-23 | Digital Fountain, Inc. | Multiple-field based code generator and decoder for communications systems |
US7265688B2 (en) * | 2002-06-11 | 2007-09-04 | Digital Fountain, Inc. | Systems and processes for decoding a chain reaction code through inactivation |
US20070255844A1 (en) * | 2006-04-27 | 2007-11-01 | Microsoft Corporation | Guided random seek support for media streaming |
US20070277209A1 (en) * | 2006-05-24 | 2007-11-29 | Newport Media, Inc. | Robust transmission system and method for mobile television applications |
US7304990B2 (en) * | 2000-02-03 | 2007-12-04 | Bandwiz Inc. | Method of encoding and transmitting data over a communication medium through division and segmentation |
US20070300127A1 (en) * | 2006-05-10 | 2007-12-27 | Digital Fountain, Inc. | Code generator and decoder for communications systems operating using hybrid codes to allow for multiple efficient users of the communications systems |
US20080034273A1 (en) * | 1998-09-23 | 2008-02-07 | Digital Fountain, Inc. | Information additive code generator and decoder for communication systems |
US20080058958A1 (en) * | 2006-06-09 | 2008-03-06 | Chia Pao Cheng | Knee joint with retention and cushion structures |
US20080066136A1 (en) * | 2006-08-24 | 2008-03-13 | International Business Machines Corporation | System and method for detecting topic shift boundaries in multimedia streams using joint audio, visual and text cues |
US7363048B2 (en) * | 2002-04-15 | 2008-04-22 | Nokia Corporation | Apparatus, and associated method, for operating upon data at RLP logical layer of a communication station |
US20080101478A1 (en) * | 2006-10-31 | 2008-05-01 | Kabushiki Kaisha Toshiba | Decoding device and decoding method |
US20080134005A1 (en) * | 2004-12-02 | 2008-06-05 | Izzat Hekmat Izzat | Adaptive Forward Error Correction |
US7391717B2 (en) * | 2003-06-30 | 2008-06-24 | Microsoft Corporation | Streaming of variable bit rate multimedia content |
US7394407B2 (en) * | 2002-10-05 | 2008-07-01 | Digital Fountain, Inc. | Systematic encoding and decoding of chain reaction codes |
US20080170564A1 (en) * | 2006-11-14 | 2008-07-17 | Qualcomm Incorporated | Systems and methods for channel switching |
US20080172712A1 (en) * | 2007-01-11 | 2008-07-17 | Matsushita Electric Industrial Co., Ltd. | Multimedia data transmitting apparatus, multimedia data receiving apparatus, multimedia data transmitting method, and multimedia data receiving method |
US7418651B2 (en) * | 2004-05-07 | 2008-08-26 | Digital Fountain, Inc. | File download and streaming system |
US20080281943A1 (en) * | 2001-11-09 | 2008-11-13 | Jody Shapiro | System, method, and computer program product for remotely determining the configuration of a multi-media content user |
US20080285556A1 (en) * | 2007-05-14 | 2008-11-20 | Samsung Electronics Co., Ltd. | Broadcasting service transmitting apparatus and method and broadcasting service receiving apparatus and method for effectively accessing broadcasting service |
US20090019229A1 (en) * | 2007-07-10 | 2009-01-15 | Qualcomm Incorporated | Data Prefetch Throttle |
US20090089445A1 (en) * | 2007-09-28 | 2009-04-02 | Deshpande Sachin G | Client-Controlled Adaptive Streaming |
US7529806B1 (en) * | 1999-11-04 | 2009-05-05 | Koninklijke Philips Electronics N.V. | Partitioning of MP3 content file for emulating streaming |
US20090158114A1 (en) * | 2003-10-06 | 2009-06-18 | Digital Fountain, Inc. | Error-correcting multi-stage code generator and decoder for communication systems having single transmitters or multiple transmitters |
US20090164653A1 (en) * | 2007-12-24 | 2009-06-25 | Mandyam Giridhar D | Adaptive streaming for on demand wireless services |
US7555006B2 (en) * | 2003-09-15 | 2009-06-30 | The Directv Group, Inc. | Method and system for adaptive transcoding and transrating in a video network |
US20090222873A1 (en) * | 2005-03-07 | 2009-09-03 | Einarsson Torbjoern | Multimedia Channel Switching |
US20090257508A1 (en) * | 2008-04-10 | 2009-10-15 | Gaurav Aggarwal | Method and system for enabling video trick modes |
US20090300203A1 (en) * | 2008-05-30 | 2009-12-03 | Microsoft Corporation | Stream selection for enhanced media streaming |
US20090319563A1 (en) * | 2008-06-21 | 2009-12-24 | Microsoft Corporation | File format for media distribution and presentation |
US20090328228A1 (en) * | 2008-06-27 | 2009-12-31 | Microsoft Corporation | Segmented Media Content Rights Management |
US20100011274A1 (en) * | 2008-06-12 | 2010-01-14 | Qualcomm Incorporated | Hypothetical fec decoder and signalling for decoding control |
US20100023525A1 (en) * | 2006-01-05 | 2010-01-28 | Magnus Westerlund | Media container file management |
US20100049865A1 (en) * | 2008-04-16 | 2010-02-25 | Nokia Corporation | Decoding Order Recovery in Session Multiplexing |
US20100061444A1 (en) * | 2008-09-11 | 2010-03-11 | On2 Technologies Inc. | System and method for video encoding using adaptive segmentation |
US20100067495A1 (en) * | 2006-10-30 | 2010-03-18 | Young Dae Lee | Method of performing random access in a wireless communication system |
US7720174B2 (en) * | 2001-12-21 | 2010-05-18 | Digital Fountain, Inc. | Multi-stage code generator and decoder for communication systems |
US7720096B2 (en) * | 2005-10-13 | 2010-05-18 | Microsoft Corporation | RTP payload format for VC-1 |
US20100153578A1 (en) * | 2008-07-16 | 2010-06-17 | Nokia Corporation | Method and Apparatus for Peer to Peer Streaming |
US20100174823A1 (en) * | 2006-07-31 | 2010-07-08 | Juniper Networks, Inc. | Optimizing batch size for prefetching data over wide area networks |
US8027328B2 (en) * | 2006-12-26 | 2011-09-27 | Alcatel Lucent | Header compression in a wireless communication network |
US20110268178A1 (en) * | 2009-08-18 | 2011-11-03 | Anthony Neal Park | Encoding video streams for adaptive video streaming |
US8081716B2 (en) * | 2006-01-25 | 2011-12-20 | Lg Electronics Inc. | Digital broadcasting receiving system and method of processing data |
US20120317305A1 (en) * | 2010-02-19 | 2012-12-13 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and Arrangement for Representation Switching in HTTP Streaming |
US8340133B2 (en) * | 2005-10-05 | 2012-12-25 | Lg Electronics Inc. | Method of processing traffic information and digital broadcast system |
US20130246643A1 (en) * | 2011-08-31 | 2013-09-19 | Qualcomm Incorporated | Switch signaling methods providing improved switching between representations for adaptive http streaming |
US20130254634A1 (en) * | 2012-03-26 | 2013-09-26 | Qualcomm Incorporated | Universal object delivery and template-based file delivery |
US20140009578A1 (en) * | 2010-07-21 | 2014-01-09 | Qualcomm Incorporated | Providing frame packing type information for video coding |
Family Cites Families (35)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US854650A (en) * | 1906-11-19 | 1907-05-21 | Warren S Johnson | Condenser. |
US3909721A (en) | 1972-01-31 | 1975-09-30 | Signatron | Signal processing system |
US4365338A (en) | 1980-06-27 | 1982-12-21 | Harris Corporation | Technique for high rate digital transmission over a dynamic dispersive channel |
US4589112A (en) | 1984-01-26 | 1986-05-13 | International Business Machines Corporation | System for multiple error detection with single and double bit error correction |
US5455823A (en) | 1990-11-06 | 1995-10-03 | Radio Satellite Corporation | Integrated communications terminal |
EP0543070A1 (en) | 1991-11-21 | 1993-05-26 | International Business Machines Corporation | Coding system and method using quaternary codes |
EP0613249A1 (en) | 1993-02-12 | 1994-08-31 | Altera Corporation | Custom look-up table with reduced number of architecture bits |
JP2576776B2 (en) | 1993-11-10 | 1997-01-29 | 日本電気株式会社 | Packet transmission method and packet transmission device |
US5432787A (en) | 1994-03-24 | 1995-07-11 | Loral Aerospace Corporation | Packet data transmission system with adaptive data recovery method |
US5617541A (en) | 1994-12-21 | 1997-04-01 | International Computer Science Institute | System for packetizing data encoded corresponding to priority levels where reconstructed data corresponds to fractionalized priority level and received fractionalized packets |
JPH11505685A (en) | 1995-04-27 | 1999-05-21 | トラスティーズ・オブ・ザ・スティーブンス・インスティテュート・オブ・テクノロジー | High integrity transmission for time-limited multimedia network applications |
US5805825A (en) | 1995-07-26 | 1998-09-08 | Intel Corporation | Method for semi-reliable, unidirectional broadcast information services |
US6044485A (en) | 1997-01-03 | 2000-03-28 | Ericsson Inc. | Transmitter method and transmission system using adaptive coding based on channel characteristics |
US5983383A (en) | 1997-01-17 | 1999-11-09 | Qualcomm Incorporated | Method and apparatus for transmitting and receiving concatenated code data |
EP0854650A3 (en) | 1997-01-17 | 2001-05-02 | NOKIA TECHNOLOGY GmbH | Method for addressing a service in digital video broadcasting |
US5970098A (en) | 1997-05-02 | 1999-10-19 | Globespan Technologies, Inc. | Multilevel encoder |
US6178536B1 (en) | 1997-08-14 | 2001-01-23 | International Business Machines Corporation | Coding scheme for file backup and systems based thereon |
US6073250A (en) | 1997-11-06 | 2000-06-06 | Luby; Michael G. | Loss resilient decoding technique |
US6081918A (en) | 1997-11-06 | 2000-06-27 | Spielman; Daniel A. | Loss resilient code with cascading series of redundant layers |
US6195777B1 (en) | 1997-11-06 | 2001-02-27 | Compaq Computer Corporation | Loss resilient code with double heavy tailed series of redundant layers |
US6163870A (en) | 1997-11-06 | 2000-12-19 | Compaq Computer Corporation | Message encoding with irregular graphing |
US6081909A (en) | 1997-11-06 | 2000-06-27 | Digital Equipment Corporation | Irregularly graphed encoding technique |
US6849803B1 (en) | 1998-01-15 | 2005-02-01 | Arlington Industries, Inc. | Electrical connector |
US6097320A (en) | 1998-01-20 | 2000-08-01 | Silicon Systems, Inc. | Encoder/decoder system with suppressed error propagation |
US6333926B1 (en) | 1998-08-11 | 2001-12-25 | Nortel Networks Limited | Multiple user CDMA basestation modem |
US6320520B1 (en) | 1998-09-23 | 2001-11-20 | Digital Fountain | Information additive group code generator and decoder for communications systems |
US6223324B1 (en) | 1999-01-05 | 2001-04-24 | Agere Systems Guardian Corp. | Multiple program unequal error protection for digital audio broadcasting and other applications |
US6643332B1 (en) | 1999-07-09 | 2003-11-04 | Lsi Logic Corporation | Method and apparatus for multi-level coding of digital signals |
US6430233B1 (en) | 1999-08-30 | 2002-08-06 | Hughes Electronics Corporation | Single-LNB satellite data receiver |
US6384750B1 (en) | 2000-03-23 | 2002-05-07 | Mosaid Technologies, Inc. | Multi-stage lookup for translating between signals of different bit lengths |
US6473010B1 (en) | 2000-04-04 | 2002-10-29 | Marvell International, Ltd. | Method and apparatus for determining error correction code failure rate for iterative decoding algorithms |
US6742154B1 (en) * | 2000-05-25 | 2004-05-25 | Ciena Corporation | Forward error correction codes for digital optical network optimization |
US6486803B1 (en) | 2000-09-22 | 2002-11-26 | Digital Fountain, Inc. | On demand encoding with a window |
US6411223B1 (en) | 2000-10-18 | 2002-06-25 | Digital Fountain, Inc. | Generating high weight encoding symbols using a basis |
US6850736B2 (en) | 2000-12-21 | 2005-02-01 | Tropian, Inc. | Method and apparatus for reception quality indication in wireless communication |
- 2005
  - 2005-08-11 WO PCT/US2005/028668 patent/WO2006020826A2/en active Application Filing
  - 2005-08-11 US US11/202,933 patent/US7721184B2/en active Active
- 2008
  - 2008-12-09 US US12/331,268 patent/US20090307565A1/en not_active Abandoned
Patent Citations (99)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6079041A (en) * | 1995-08-04 | 2000-06-20 | Sanyo Electric Co., Ltd. | Digital modulation circuit and digital demodulation circuit |
US20010033586A1 (en) * | 1996-12-17 | 2001-10-25 | Satoru Takashimizu | Receiving apparatus for digital broadcasting signal and receiving/recording/reproducing apparatus thereof |
US5844636A (en) * | 1997-05-13 | 1998-12-01 | Hughes Electronics Corporation | Method and apparatus for receiving and recording digital packet data |
US20010015944A1 (en) * | 1997-05-19 | 2001-08-23 | Sony Corporation | Recording method and apparatus for continuous playback of fragmented signals |
US6141787A (en) * | 1997-05-19 | 2000-10-31 | Sanyo Electric Co., Ltd. | Digital modulation and demodulation |
US20050163468A1 (en) * | 1997-05-19 | 2005-07-28 | Takao Takahashi | Signal recording method & apparatus, signal recording / reproducing method & apparatus and signal recording medium |
US6937618B1 (en) * | 1998-05-20 | 2005-08-30 | Sony Corporation | Separating device and method and signal receiving device and method |
US20080034273A1 (en) * | 1998-09-23 | 2008-02-07 | Digital Fountain, Inc. | Information additive code generator and decoder for communication systems |
US6166544A (en) * | 1998-11-25 | 2000-12-26 | General Electric Company | MR imaging system with interactive image contrast control |
US6876623B1 (en) * | 1998-12-02 | 2005-04-05 | Agere Systems Inc. | Tuning scheme for code division multiplex broadcasting system |
US20050123058A1 (en) * | 1999-04-27 | 2005-06-09 | Greenbaum Gary S. | System and method for generating multiple synchronized encoded representations of media data |
US7068681B2 (en) * | 1999-05-10 | 2006-06-27 | Samsung Electronics Co., Ltd. | Apparatus and method for exchanging variable-length data according to radio link protocol in mobile communication system |
US7483447B2 (en) * | 1999-05-10 | 2009-01-27 | Samsung Electronics Co., Ltd | Apparatus and method for exchanging variable-length data according to radio link protocol in mobile communication system |
US7100188B2 (en) * | 1999-05-26 | 2006-08-29 | Enounce, Inc. | Method and apparatus for controlling time-scale modification during multi-media broadcasts |
US7529806B1 (en) * | 1999-11-04 | 2009-05-05 | Koninklijke Philips Electronics N.V. | Partitioning of MP3 content file for emulating streaming |
US20050018635A1 (en) * | 1999-11-22 | 2005-01-27 | Ipr Licensing, Inc. | Variable rate coding for forward link |
US20020009137A1 (en) * | 2000-02-01 | 2002-01-24 | Nelson John E. | Three-dimensional video broadcasting system |
US7304990B2 (en) * | 2000-02-03 | 2007-12-04 | Bandwiz Inc. | Method of encoding and transmitting data over a communication medium through division and segmentation |
US6510177B1 (en) * | 2000-03-24 | 2003-01-21 | Microsoft Corporation | System and method for layered video coding enhancement |
US7073191B2 (en) * | 2000-04-08 | 2006-07-04 | Sun Microsystems, Inc. | Streaming a single media track to multiple clients |
US6631172B1 (en) * | 2000-05-01 | 2003-10-07 | Lucent Technologies Inc. | Efficient list decoding of Reed-Solomon codes for message recovery in the presence of high noise levels |
US20020083345A1 (en) * | 2000-08-16 | 2002-06-27 | Halliday David C. | Method and system for secure communication over unstable public connections |
US7240358B2 (en) * | 2000-12-08 | 2007-07-03 | Digital Fountain, Inc. | Methods and apparatus for scheduling, serving, receiving media-on-demand for clients, servers arranged according to constraints on resources |
US20080086751A1 (en) * | 2000-12-08 | 2008-04-10 | Digital Fountain, Inc. | Methods and apparatus for scheduling, serving, receiving media-on-demand for clients, servers arranged according to constraints on resources |
US20020143953A1 (en) * | 2001-04-03 | 2002-10-03 | International Business Machines Corporation | Automatic affinity within networks performing workload balancing |
US20060212444A1 (en) * | 2001-05-16 | 2006-09-21 | Pandora Media, Inc. | Methods and systems for utilizing contextual feedback to generate and modify playlists |
US20080281943A1 (en) * | 2001-11-09 | 2008-11-13 | Jody Shapiro | System, method, and computer program product for remotely determining the configuration of a multi-media content user |
US7720174B2 (en) * | 2001-12-21 | 2010-05-18 | Digital Fountain, Inc. | Multi-stage code generator and decoder for communication systems |
US20060120464A1 (en) * | 2002-01-23 | 2006-06-08 | Nokia Corporation | Grouping of image frames in video coding |
US20030138043A1 (en) * | 2002-01-23 | 2003-07-24 | Miska Hannuksela | Grouping of image frames in video coding |
US20050180415A1 (en) * | 2002-03-06 | 2005-08-18 | Gene Cheung | Medium streaming distribution system |
US20040015768A1 (en) * | 2002-03-15 | 2004-01-22 | Philippe Bordes | Device and method for inserting error correcting codes and for reconstructing data streams, and corresponding products |
US7363048B2 (en) * | 2002-04-15 | 2008-04-22 | Nokia Corporation | Apparatus, and associated method, for operating upon data at RLP logical layer of a communication station |
US20040240382A1 (en) * | 2002-04-19 | 2004-12-02 | Daiji Ido | Data reception apparatus and data distribution system |
US20030207696A1 (en) * | 2002-05-06 | 2003-11-06 | Serge Willenegger | Multi-media broadcast and multicast service (MBMS) in a wireless communications system |
US7265688B2 (en) * | 2002-06-11 | 2007-09-04 | Digital Fountain, Inc. | Systems and processes for decoding a chain reaction code through inactivation |
US6985459B2 (en) * | 2002-08-21 | 2006-01-10 | Qualcomm Incorporated | Early transmission and playout of packets in wireless communication systems |
US20090189792A1 (en) * | 2002-10-05 | 2009-07-30 | Shokrollahi M Amin | Systematic encoding and decoding of chain reaction codes |
US7394407B2 (en) * | 2002-10-05 | 2008-07-01 | Digital Fountain, Inc. | Systematic encoding and decoding of chain reaction codes |
US20040081106A1 (en) * | 2002-10-25 | 2004-04-29 | Stefan Bruhn | Delay trading between communication links |
US20060031738A1 (en) * | 2002-10-30 | 2006-02-09 | Koninklijke Philips Electronics, N.V. | Adaptative forward error control scheme |
US7391717B2 (en) * | 2003-06-30 | 2008-06-24 | Microsoft Corporation | Streaming of variable bit rate multimedia content |
US20070140369A1 (en) * | 2003-07-07 | 2007-06-21 | Limberg Allen L | System of robust DTV signal transmissions that legacy DTV receivers will disregard |
US20050193309A1 (en) * | 2003-08-21 | 2005-09-01 | Francesco Grilli | Methods for forward error correction coding above a radio link control layer and related apparatus |
US7555006B2 (en) * | 2003-09-15 | 2009-06-30 | The Directv Group, Inc. | Method and system for adaptive transcoding and transrating in a video network |
US20090158114A1 (en) * | 2003-10-06 | 2009-06-18 | Digital Fountain, Inc. | Error-correcting multi-stage code generator and decoder for communication systems having single transmitters or multiple transmitters |
US20050084006A1 (en) * | 2003-10-16 | 2005-04-21 | Shawmin Lei | System and method for three-dimensional video coding |
US7168030B2 (en) * | 2003-10-17 | 2007-01-23 | Telefonaktiebolaget Lm Ericsson (Publ) | Turbo code decoder with parity information update |
US20050091697A1 (en) * | 2003-10-27 | 2005-04-28 | Matsushita Electric Industrial Co., Ltd. | Apparatus for receiving broadcast signal |
US20050195752A1 (en) * | 2004-03-08 | 2005-09-08 | Microsoft Corporation | Resolving partial media topologies |
US20050216472A1 (en) * | 2004-03-29 | 2005-09-29 | David Leon | Efficient multicast/broadcast distribution of formatted data |
US20090031199A1 (en) * | 2004-05-07 | 2009-01-29 | Digital Fountain, Inc. | File download and streaming system |
US7418651B2 (en) * | 2004-05-07 | 2008-08-26 | Digital Fountain, Inc. | File download and streaming system |
US20060015568A1 (en) * | 2004-07-14 | 2006-01-19 | Rod Walsh | Grouping of session objects |
US20060107174A1 (en) * | 2004-11-16 | 2006-05-18 | Bernd Heise | Seamless change of depth of a general convolutional interleaver during transmission without loss of data |
US20080134005A1 (en) * | 2004-12-02 | 2008-06-05 | Izzat Hekmat Izzat | Adaptive Forward Error Correction |
US20060244865A1 (en) * | 2005-03-02 | 2006-11-02 | Rohde & Schwarz, Inc. | Apparatus, systems, methods and computer products for providing a virtual enhanced training sequence |
US20090222873A1 (en) * | 2005-03-07 | 2009-09-03 | Einarsson Torbjoern | Multimedia Channel Switching |
US20060229075A1 (en) * | 2005-04-09 | 2006-10-12 | Lg Electronics Inc. | Supporting handover of mobile terminal |
US20060248195A1 (en) * | 2005-04-27 | 2006-11-02 | Kunihiko Toumura | Computer system with a packet transfer device using a hash value for transferring a content request |
US20070002953A1 (en) * | 2005-06-29 | 2007-01-04 | Kabushiki Kaisha Toshiba | Encoded stream reproducing apparatus |
US20070006274A1 (en) * | 2005-06-30 | 2007-01-04 | Toni Paila | Transmission and reception of session packets |
US20070016594A1 (en) * | 2005-07-15 | 2007-01-18 | Sony Corporation | Scalable video coding (SVC) file format |
US8340133B2 (en) * | 2005-10-05 | 2012-12-25 | Lg Electronics Inc. | Method of processing traffic information and digital broadcast system |
US7720096B2 (en) * | 2005-10-13 | 2010-05-18 | Microsoft Corporation | RTP payload format for VC-1 |
US20100023525A1 (en) * | 2006-01-05 | 2010-01-28 | Magnus Westerlund | Media container file management |
US20070162568A1 (en) * | 2006-01-06 | 2007-07-12 | Manish Gupta | Dynamic media serving infrastructure |
US20070162611A1 (en) * | 2006-01-06 | 2007-07-12 | Google Inc. | Discontinuous Download of Media Files |
US8081716B2 (en) * | 2006-01-25 | 2011-12-20 | Lg Electronics Inc. | Digital broadcasting receiving system and method of processing data |
US20070185973A1 (en) * | 2006-02-07 | 2007-08-09 | Dot Hill Systems, Corp. | Pull data replication model |
US20070195894A1 (en) * | 2006-02-21 | 2007-08-23 | Digital Fountain, Inc. | Multiple-field based code generator and decoder for communications systems |
US20070255844A1 (en) * | 2006-04-27 | 2007-11-01 | Microsoft Corporation | Guided random seek support for media streaming |
US20070300127A1 (en) * | 2006-05-10 | 2007-12-27 | Digital Fountain, Inc. | Code generator and decoder for communications systems operating using hybrid codes to allow for multiple efficient users of the communications systems |
US20070277209A1 (en) * | 2006-05-24 | 2007-11-29 | Newport Media, Inc. | Robust transmission system and method for mobile television applications |
US20080058958A1 (en) * | 2006-06-09 | 2008-03-06 | Chia Pao Cheng | Knee joint with retention and cushion structures |
US20100174823A1 (en) * | 2006-07-31 | 2010-07-08 | Juniper Networks, Inc. | Optimizing batch size for prefetching data over wide area networks |
US20080066136A1 (en) * | 2006-08-24 | 2008-03-13 | International Business Machines Corporation | System and method for detecting topic shift boundaries in multimedia streams using joint audio, visual and text cues |
US20100067495A1 (en) * | 2006-10-30 | 2010-03-18 | Young Dae Lee | Method of performing random access in a wireless communication system |
US20080101478A1 (en) * | 2006-10-31 | 2008-05-01 | Kabushiki Kaisha Toshiba | Decoding device and decoding method |
US20080170564A1 (en) * | 2006-11-14 | 2008-07-17 | Qualcomm Incorporated | Systems and methods for channel switching |
US8027328B2 (en) * | 2006-12-26 | 2011-09-27 | Alcatel Lucent | Header compression in a wireless communication network |
US20080172712A1 (en) * | 2007-01-11 | 2008-07-17 | Matsushita Electric Industrial Co., Ltd. | Multimedia data transmitting apparatus, multimedia data receiving apparatus, multimedia data transmitting method, and multimedia data receiving method |
US20080285556A1 (en) * | 2007-05-14 | 2008-11-20 | Samsung Electronics Co., Ltd. | Broadcasting service transmitting apparatus and method and broadcasting service receiving apparatus and method for effectively accessing broadcasting service |
US20090019229A1 (en) * | 2007-07-10 | 2009-01-15 | Qualcomm Incorporated | Data Prefetch Throttle |
US20090089445A1 (en) * | 2007-09-28 | 2009-04-02 | Deshpande Sachin G | Client-Controlled Adaptive Streaming |
US20090164653A1 (en) * | 2007-12-24 | 2009-06-25 | Mandyam Giridhar D | Adaptive streaming for on demand wireless services |
US20090257508A1 (en) * | 2008-04-10 | 2009-10-15 | Gaurav Aggarwal | Method and system for enabling video trick modes |
US20100049865A1 (en) * | 2008-04-16 | 2010-02-25 | Nokia Corporation | Decoding Order Recovery in Session Multiplexing |
US20090300203A1 (en) * | 2008-05-30 | 2009-12-03 | Microsoft Corporation | Stream selection for enhanced media streaming |
US20100011274A1 (en) * | 2008-06-12 | 2010-01-14 | Qualcomm Incorporated | Hypothetical fec decoder and signalling for decoding control |
US20090319563A1 (en) * | 2008-06-21 | 2009-12-24 | Microsoft Corporation | File format for media distribution and presentation |
US20090328228A1 (en) * | 2008-06-27 | 2009-12-31 | Microsoft Corporation | Segmented Media Content Rights Management |
US20100153578A1 (en) * | 2008-07-16 | 2010-06-17 | Nokia Corporation | Method and Apparatus for Peer to Peer Streaming |
US20100061444A1 (en) * | 2008-09-11 | 2010-03-11 | On2 Technologies Inc. | System and method for video encoding using adaptive segmentation |
US20110268178A1 (en) * | 2009-08-18 | 2011-11-03 | Anthony Neal Park | Encoding video streams for adaptive video streaming |
US20120317305A1 (en) * | 2010-02-19 | 2012-12-13 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and Arrangement for Representation Switching in HTTP Streaming |
US20140009578A1 (en) * | 2010-07-21 | 2014-01-09 | Qualcomm Incorporated | Providing frame packing type information for video coding |
US20130246643A1 (en) * | 2011-08-31 | 2013-09-19 | Qualcomm Incorporated | Switch signaling methods providing improved switching between representations for adaptive http streaming |
US20130254634A1 (en) * | 2012-03-26 | 2013-09-26 | Qualcomm Incorporated | Universal object delivery and template-based file delivery |
Cited By (60)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9246633B2 (en) | 1998-09-23 | 2016-01-26 | Digital Fountain, Inc. | Information additive code generator and decoder for communication systems |
US9236976B2 (en) | 2001-12-21 | 2016-01-12 | Digital Fountain, Inc. | Multi stage code generator and decoder for communication systems |
US9240810B2 (en) | 2002-06-11 | 2016-01-19 | Digital Fountain, Inc. | Systems and processes for decoding chain reaction codes through inactivation |
USRE43741E1 (en) | 2002-10-05 | 2012-10-16 | Qualcomm Incorporated | Systematic encoding and decoding of chain reaction codes |
US9236885B2 (en) | 2002-10-05 | 2016-01-12 | Digital Fountain, Inc. | Systematic encoding and decoding of chain reaction codes |
US8887020B2 (en) | 2003-10-06 | 2014-11-11 | Digital Fountain, Inc. | Error-correcting multi-stage code generator and decoder for communication systems having single transmitters or multiple transmitters |
US9236887B2 (en) | 2004-05-07 | 2016-01-12 | Digital Fountain, Inc. | File download and streaming system |
US9136878B2 (en) | 2004-05-07 | 2015-09-15 | Digital Fountain, Inc. | File download and streaming system |
US8374087B2 (en) * | 2004-09-23 | 2013-02-12 | Sony Corporation | Reliable audio-video transmission system using multi-media diversity |
US20060062243A1 (en) * | 2004-09-23 | 2006-03-23 | Dacosta Behram M | Reliable audio-video transmission system using multi-media diversity |
US20060062242A1 (en) * | 2004-09-23 | 2006-03-23 | Sony Corporation | Reliable audio-video transmission system using multi-media diversity |
US8184657B2 (en) | 2004-09-23 | 2012-05-22 | Sony Corporation | Reliable audio-video transmission system using multi-media diversity |
US9136983B2 (en) | 2006-02-13 | 2015-09-15 | Digital Fountain, Inc. | Streaming and buffering using variable FEC overhead and protection periods |
US9270414B2 (en) | 2006-02-21 | 2016-02-23 | Digital Fountain, Inc. | Multiple-field based code generator and decoder for communications systems |
US9264069B2 (en) | 2006-05-10 | 2016-02-16 | Digital Fountain, Inc. | Code generator and decoder for communications systems operating using hybrid codes to allow for multiple efficient uses of the communications systems |
US9191151B2 (en) | 2006-06-09 | 2015-11-17 | Qualcomm Incorporated | Enhanced block-request streaming using cooperative parallel HTTP and forward error correction |
US9178535B2 (en) | 2006-06-09 | 2015-11-03 | Digital Fountain, Inc. | Dynamic stream interleaving and sub-stream based delivery |
US9432433B2 (en) | 2006-06-09 | 2016-08-30 | Qualcomm Incorporated | Enhanced block-request streaming system using signaling or block creation |
US9386064B2 (en) | 2006-06-09 | 2016-07-05 | Qualcomm Incorporated | Enhanced block-request streaming using URL templates and construction rules |
US9380096B2 (en) | 2006-06-09 | 2016-06-28 | Qualcomm Incorporated | Enhanced block-request streaming system for handling low-latency streaming |
US9209934B2 (en) | 2006-06-09 | 2015-12-08 | Qualcomm Incorporated | Enhanced block-request streaming using cooperative parallel HTTP and forward error correction |
US11477253B2 (en) | 2006-06-09 | 2022-10-18 | Qualcomm Incorporated | Enhanced block-request streaming system using signaling or block creation |
US20080152061A1 (en) * | 2006-12-22 | 2008-06-26 | Kozat Ulas C | Method and apparatus for opportunistic multicasting with coded scheduling in wireless networks |
US7839851B2 (en) * | 2006-12-22 | 2010-11-23 | Ntt Docomo, Inc. | Method and apparatus for opportunistic multicasting with coded scheduling in wireless networks |
KR101130591B1 (en) * | 2006-12-22 | 2012-04-02 | 가부시키가이샤 엔티티 도코모 | A method and apparatus for opportunistic multicasting with coded scheduling in wireless networks |
US9237101B2 (en) | 2007-09-12 | 2016-01-12 | Digital Fountain, Inc. | Generating and communicating source identification information to enable reliable communications |
US9281847B2 (en) | 2009-02-27 | 2016-03-08 | Qualcomm Incorporated | Mobile reception of digital video broadcasting—terrestrial services |
US9876607B2 (en) | 2009-08-19 | 2018-01-23 | Qualcomm Incorporated | Methods and apparatus employing FEC codes with permanent inactivation of symbols for encoding and decoding processes |
US9288010B2 (en) | 2009-08-19 | 2016-03-15 | Qualcomm Incorporated | Universal file delivery methods for providing unequal error protection and bundled file delivery services |
US9660763B2 (en) | 2009-08-19 | 2017-05-23 | Qualcomm Incorporated | Methods and apparatus employing FEC codes with permanent inactivation of symbols for encoding and decoding processes |
US9419749B2 (en) | 2009-08-19 | 2016-08-16 | Qualcomm Incorporated | Methods and apparatus employing FEC codes with permanent inactivation of symbols for encoding and decoding processes |
US11743317B2 (en) | 2009-09-22 | 2023-08-29 | Qualcomm Incorporated | Enhanced block-request streaming using block partitioning or request controls for improved client-side handling |
US10855736B2 (en) | 2009-09-22 | 2020-12-01 | Qualcomm Incorporated | Enhanced block-request streaming using block partitioning or request controls for improved client-side handling |
US9917874B2 (en) | 2009-09-22 | 2018-03-13 | Qualcomm Incorporated | Enhanced block-request streaming using block partitioning or request controls for improved client-side handling |
US11770432B2 (en) | 2009-09-22 | 2023-09-26 | Qualcomm Incorporated | Enhanced block-request streaming system for handling low-latency streaming |
US9602802B2 (en) | 2010-07-21 | 2017-03-21 | Qualcomm Incorporated | Providing frame packing type information for video coding |
US8958375B2 (en) | 2011-02-11 | 2015-02-17 | Qualcomm Incorporated | Framing for an improved radio link protocol including FEC |
US9270299B2 (en) | 2011-02-11 | 2016-02-23 | Qualcomm Incorporated | Encoding and decoding using elastic codes with flexible source block mapping |
US9253233B2 (en) | 2011-08-31 | 2016-02-02 | Qualcomm Incorporated | Switch signaling methods providing improved switching between representations for adaptive HTTP streaming |
US9843844B2 (en) | 2011-10-05 | 2017-12-12 | Qualcomm Incorporated | Network streaming of media data |
US20130254616A1 (en) * | 2012-03-22 | 2013-09-26 | Lsi Corporation | Systems and Methods for Variable Rate Coding in a Data Processing System |
US9230596B2 (en) * | 2012-03-22 | 2016-01-05 | Avago Technologies General Ip (Singapore) Pte. Ltd. | Systems and methods for variable rate coding in a data processing system |
US9294226B2 (en) | 2012-03-26 | 2016-03-22 | Qualcomm Incorporated | Universal object delivery and template-based file delivery |
US10257266B2 (en) | 2013-11-11 | 2019-04-09 | Amazon Technologies, Inc. | Location of actor resources |
US10347013B2 (en) | 2013-11-11 | 2019-07-09 | Amazon Technologies, Inc. | Session idle optimization for streaming server |
US9805479B2 (en) | 2013-11-11 | 2017-10-31 | Amazon Technologies, Inc. | Session idle optimization for streaming server |
US9582904B2 (en) | 2013-11-11 | 2017-02-28 | Amazon Technologies, Inc. | Image composition based on remote object data |
US9578074B2 (en) | 2013-11-11 | 2017-02-21 | Amazon Technologies, Inc. | Adaptive content transmission |
US9413830B2 (en) | 2013-11-11 | 2016-08-09 | Amazon Technologies, Inc. | Application streaming service |
US10097596B2 (en) | 2013-11-11 | 2018-10-09 | Amazon Technologies, Inc. | Multiple stream content presentation |
US9604139B2 (en) | 2013-11-11 | 2017-03-28 | Amazon Technologies, Inc. | Service for generating graphics object data |
US10315110B2 (en) | 2013-11-11 | 2019-06-11 | Amazon Technologies, Inc. | Service for generating graphics object data |
US9596280B2 (en) | 2013-11-11 | 2017-03-14 | Amazon Technologies, Inc. | Multiple stream content presentation |
US10374928B1 (en) | 2013-11-11 | 2019-08-06 | Amazon Technologies, Inc. | Efficient bandwidth estimation |
US10601885B2 (en) | 2013-11-11 | 2020-03-24 | Amazon Technologies, Inc. | Adaptive scene complexity based on service quality |
US10778756B2 (en) | 2013-11-11 | 2020-09-15 | Amazon Technologies, Inc. | Location of actor resources |
US9374552B2 (en) | 2013-11-11 | 2016-06-21 | Amazon Technologies, Inc. | Streaming game server video recorder |
US9641592B2 (en) | 2013-11-11 | 2017-05-02 | Amazon Technologies, Inc. | Location of actor resources |
US9634942B2 (en) | 2013-11-11 | 2017-04-25 | Amazon Technologies, Inc. | Adaptive scene complexity based on service quality |
US9608934B1 (en) | 2013-11-11 | 2017-03-28 | Amazon Technologies, Inc. | Efficient bandwidth estimation |
Also Published As
Publication number | Publication date |
---|---|
US7721184B2 (en) | 2010-05-18 |
WO2006020826A2 (en) | 2006-02-23 |
WO2006020826A3 (en) | 2007-08-02 |
US20060036930A1 (en) | 2006-02-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7721184B2 (en) | Method and apparatus for fast encoding of data symbols according to half-weight codes | |
EP1665539B1 (en) | Soft-Decision Decoding of Multi-Stage Chain Reaction Codes | |
US9136878B2 (en) | File download and streaming system | |
US7711068B2 (en) | Multi-stage code generator and decoder for communication systems | |
KR101355761B1 (en) | Multiple-field based code generator and decoder for communications systems | |
WO2004068715A2 (en) | Systems and processes for fast encoding of hamming codes |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: DIGITAL FOUNTAIN, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LUBY, MICHAEL;SHOKROLLAHI, M. AMIN;REEL/FRAME:022304/0722 Effective date: 20051005 |
| AS | Assignment | Owner name: DIGITAL FOUNTAIN, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LUBY, MICHAEL;SHOKROLLAHI, M. AMIN;REEL/FRAME:022651/0890 Effective date: 20051005 |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE |
| AS | Assignment | Owner name: QUALCOMM INCORPORATED, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:DIGITAL FOUNTAIN, INC.;REEL/FRAME:045641/0207 Effective date: 20180315 |