US20070201563A1 - Method and apparatus for processing a data series including processing priority data - Google Patents
- Publication number
- US20070201563A1 (U.S. application Ser. No. 11/742,810)
- Authority
- US
- United States
- Prior art keywords
- information
- data
- priority
- picture
- time
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04L1/1854—Automatic repetition systems using a return channel; Scheduling and prioritising arrangements at the receiver end
- H04L1/1887—Automatic repetition systems using a return channel; Scheduling and prioritising arrangements at the transmitter end
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/102—Adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/132—Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
- H04N19/142—Detection of scene cut or scene change
- H04N19/156—Availability of hardware or computational resources, e.g. encoding based on power-saving criteria
- H04N19/164—Feedback from the receiver or from the transmission channel
- H04N19/176—Adaptive coding in which the coding unit is an image region, the region being a block, e.g. a macroblock
- H04N19/18—Adaptive coding in which the coding unit is a set of transform coefficients
- H04N19/42—Implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
- H04N19/46—Embedding additional information in the video signal during the compression process
- H04N19/50—Predictive coding
- H04N19/51—Motion estimation or motion compensation
- H04N19/523—Motion estimation or motion compensation with sub-pixel accuracy
- H04N19/61—Transform coding in combination with predictive coding
- H04N19/87—Pre-processing or post-processing involving scene cut or scene change detection in combination with video compression
- H04N21/234327—Reformatting operations of video signals by decomposing into layers, e.g. base layer and one or more enhancement layers
- H04N21/235—Processing of additional data, e.g. scrambling of additional data or processing content descriptors
- H04N21/236—Assembling of a multiplex stream, e.g. transport stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream
- H04N21/2368—Multiplexing of audio and video streams
- H04N21/238—Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
- H04N21/2381—Adapting the multiplex stream to a specific network, e.g. an Internet Protocol [IP] network
- H04N21/4305—Synchronising client clock from received content stream, e.g. locking decoder clock with encoder clock, extraction of the PCR packets
- H04N21/4341—Demultiplexing of audio and video streams
- H04N21/435—Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
- H04N21/4363—Adapting the video stream to a specific local network, e.g. a Bluetooth® network
- H04N21/4381—Recovering the multiplex stream from a specific network, e.g. recovering MPEG packets from ATM cells
- H04N21/44008—Analysing video streams, e.g. detecting features or characteristics in the video stream
- H04N21/6137—Signal processing specially adapted to the downstream path, involving transmission via a telephone network, e.g. POTS
- H04N21/643—Communication protocols
- H04N21/6437—Real-time Transport Protocol [RTP]
- H04N21/6543—Transmission by server directed to the client for forcing some client operations, e.g. recording
- H04N21/6547—Transmission by server directed to the client comprising parameters, e.g. for client setup
- H04N21/6587—Control parameters, e.g. trick play commands, viewpoint selection
- H04N21/84—Generation or processing of descriptive data, e.g. content descriptors
- H04N21/854—Content authoring
- H04N21/85406—Content authoring involving a specific file format, e.g. MP4 format
- H04N21/8547—Content authoring involving timestamps for synchronizing content
- H04N7/08—Systems for the simultaneous or sequential transmission of more than one television signal, e.g. additional information signals
- H04N7/147—Communication arrangements for two-way working between two video terminals, e.g. videophone
- H04N7/152—Conference systems; Multipoint control units therefor
- H04N7/24—Systems for the transmission of television signals using pulse code modulation
- H04N7/52—Systems for transmission of a pulse code modulated video signal with one or more other pulse code modulated signals, e.g. an audio signal or a synchronizing signal
Definitions
- The present invention relates to an audio-video transmitter and an audio-video receiver, a data-processing apparatus and method, a waveform-data transmitting method and apparatus, a waveform-data receiving method and apparatus, and a video transmitting method and apparatus and video receiving method and apparatus.
- (A1) a method for transmitting (communicating and broadcasting) and controlling pictures and audio in an environment in which data and control information (information transmitted in a packet different from that of the data, to control the processing on the terminal side) are independently transmitted over a plurality of logical transmission lines constructed by software on one or more real transmission lines;
- (A2) a method for dynamically changing the header information (corresponding to the data control information of the present invention) added to the data of a picture or audio to be transmitted;
- (A3) a method for dynamically changing the header information (corresponding to the transmission control information of the present invention) added for transmission;
- (A4) a method for transmitting information by dynamically multiplexing and separating a plurality of logical transmission lines. A sketch of how these headers could be modeled follows this list.
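To make (A1)-(A4) concrete, the following Python sketch models the data control information, the transmission control information, and per-channel multiplexing as plain data structures. It is an illustration only: every name (DataControlInfo, Multiplexer, the field layouts) is a hypothetical rendering, not something prescribed by the patent.

```python
# Hypothetical sketch of (A1)-(A4); field names and layouts are assumptions.
from dataclasses import dataclass

@dataclass
class DataControlInfo:              # header added to the data itself (A2)
    sequence_number: int            # position of this access unit in the stream
    timestamp_ms: int               # intended reproducing time
    use_timestamp: bool             # identifier: whether timestamp_ms is meaningful
    priority: int                   # processing priority under overload

@dataclass
class TransmissionControlInfo:      # header added for transmission (A3)
    channel_id: int                 # which logical transmission line (A1)
    transmission_serial: int        # serial number identifying transmission order

@dataclass
class Packet:
    tc: TransmissionControlInfo
    dc: DataControlInfo
    payload: bytes

class Multiplexer:
    """Carries several logical transmission lines on one real line (A4)."""
    def __init__(self) -> None:
        self.serial = 0

    def send(self, channel_id: int, dc: DataControlInfo, payload: bytes) -> Packet:
        self.serial += 1
        return Packet(TransmissionControlInfo(channel_id, self.serial), dc, payload)
```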
- a dynamic throughput scalable algorithm capable of providing a high-quality video under a restricted processing time is proposed as a method for adjusting throughput at the encoder side (T. Osako, Y. Yajima, H. Kodera, H. Watanabe, K. Shimamura: Encoding of software video using a dynamic throughput scalable algorithm, Thesis Journal of IEICE, D-2, Vol. 80-D-2, No. 2, pp. 444-458 (1997)).
- The MPEG1/MPEG2 system is an example of realizing synchronous reproduction of video and audio.
- ITU-T Recommendation H.261—Video codec for audiovisual services at p×64 kbit/s
- The above-designated time denotes the time required to transmit the bit stream obtained by coding one frame of video. If decoding is not completed within that time, the excess becomes a delay. If such delays accumulate, the delay from the transmitting side to the receiving side increases and the system cannot be used as a video telephone. This state must be avoided.
- The frame rate of a picture is determined by the time required for one coding pass. Therefore, when the frame rate designated by a user exceeds the throughput of the computer, the designation cannot be met (a numeric sketch follows).
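The following Python sketch uses invented timings to show how the per-frame decoding time bounds the achievable frame rate and how the excess accumulates as end-to-end delay:

```python
# Hypothetical numbers: a terminal that needs 80 ms to decode one frame cannot
# honor a user-designated 15 fps, and the excess per frame accumulates as delay.
frame_rate_requested = 15.0                      # frames/s designated by the user
decode_time_per_frame = 0.080                    # seconds needed on this computer

achievable_rate = 1.0 / decode_time_per_frame    # 12.5 fps
frame_budget = 1.0 / frame_rate_requested        # ~66.7 ms per frame
excess = max(0.0, decode_time_per_frame - frame_budget)

print(f"achievable rate: {achievable_rate:.1f} fps")
print(f"delay accumulated over 100 frames: {100 * excess * 1000:.0f} ms")
```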
- The present invention is an audio-video transmitting apparatus comprising transmitting means for transmitting, as transmission format information, the content concerned with a transmitting method and/or the structure of the data to be transmitted, or an identifier showing that content, through either the same transmission line as that of the data to be transmitted or a transmission line different from the data transmission line;
- said data to be transmitted is video data and/or audio data.
- One aspect of the present invention is the audio-video transmitting apparatus, wherein said transmission format information is included in at least one of data control information added to said data to control said data, transmission control information added to said data to transmit said data, and information for controlling the processing of the terminal side.
- Another aspect of the present invention is the audio-video transmitting apparatus, wherein at least one of said data control information, transmission control information, and information for controlling the processing of said terminal side is dynamically changed.
- Still another aspect of the present invention is the audio-video transmitting apparatus, wherein said data is divided into a plurality of packets, and said data control information or said transmission control information is added not only to the head packet of said divided packets but also to a middle packet of them.
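One reading of this aspect is that, when a frame is divided into packets, the control header is repeated on middle packets so that a receiver that lost the head packet can still resynchronize. A minimal sketch (the fragmentation rule is an assumption; the patent does not fix one):

```python
def fragment(frame: bytes, mtu: int, control_header: bytes) -> list[bytes]:
    """Split one frame into packets, prepending the data control (or
    transmission control) header to the head packet and to every middle
    packet, not only to the first one."""
    chunks = [frame[i:i + mtu] for i in range(0, len(frame), mtu)]
    return [control_header + chunk for chunk in chunks]
```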
- Yet another aspect of the present invention is the audio-video transmitting apparatus, wherein an identifier showing whether to use timing information concerned with said data as information showing the reproducing time of said data is included in said transmission format information.
- Still yet another aspect of the present invention is the audio-video transmitting apparatus, wherein said transmission format information is the structural information of said data and a signal which is output from a receiving apparatus receiving the transmitted structural information of said data and which can be received is confirmed and thereafter, said transmitting means transmits corresponding data to said receiving apparatus.
- A further aspect of the present invention is the audio-video transmitting apparatus, wherein said transmission format information includes (1) an identifier for identifying a program or data to be used by a receiving apparatus later and (2) at least one of a flag, counter, and timer as information for knowing the point of time at which said program or data is used, or the term of validity for using said program or data.
- Still a further aspect of the present invention is the audio-video transmitting apparatus, wherein said point of time in which said program or data is used is transmitted as transmission control information by using a transmission serial number for identifying a transmission sequence or as information to be transmitted by a packet different from that of data to control terminal-side processing.
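The identifier plus flag/counter/timer combination could be carried as a small setup record; the sketch below is a hypothetical rendering (all field names are assumptions):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SetupNotice:
    program_id: int                     # identifies the program or data used later
    flag: bool                          # notification that use is imminent
    counter: Optional[int] = None       # used after this many access units, if set
    timer_ms: Optional[int] = None      # used after this many milliseconds, if set
    valid_for_ms: Optional[int] = None  # term of validity of the program or data
```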
- Still yet a further aspect of the present invention is the audio-video transmitting apparatus, wherein storing means for storing a plurality of contents concerned with said transmitting method and/or said structure of data to be transmitted and a plurality of its identifiers are included, and said identifier is included in at least one of said data control information, transmission control information, and information for controlling terminal-side processing as said transmission format information.
- Another aspect of the present invention is the audio-video transmitting apparatus, wherein storing means for storing a plurality of contents concerned with said transmitting method and/or said structure of data to be transmitted are included, and said contents are included in at least one of said data control information, transmission control information, and information for controlling terminal-side processing as said transmission format information.
- Still another aspect of the present invention is the audio-video transmitting apparatus, wherein a default identifier showing whether to change the contents concerned with said transmitting method and/or structure of data to be transmitted is added.
- Still yet another aspect of the present invention is the audio-video transmitting apparatus, wherein said identifier or said default identifier is added to a predetermined fixed-length region of information to be transmitted or said predetermined position.
- a further aspect of the present invention is an audio-video receiving apparatus comprising: receiving means for receiving said transmission format information transmitted from the audio-video transmitting apparatus; and transmitted-information interpreting means for interpreting said received transmission-format information.
- a still further aspect of the present invention is the audio-video receiving apparatus, wherein storing means for storing a plurality of contents concerned with said transmitting method and/or said structure of data to be transmitted and a plurality of its identifiers are included, and the contents stored in said storing means are used to interpret said transmission format information.
- a still yet further aspect of the present invention is an audio-video transmitting apparatus comprising: information multiplexing means for controlling start and end of multiplexing the information for a plurality of logical transmission lines for transmitting data and/or control information is included; wherein, not only said data and/or control information multiplexed by said information multiplexing means but also control contents concerned with start and end of said multiplexing by said information multiplexing means are transmitted as multiplexing control information, and said data includes video data and/or audio data.
- Another aspect of the present invention is the audio-video transmitting apparatus wherein it is possible to select whether to transmit said multiplexing control information by arranging said information without multiplexing it before said data and/or control information or transmit said multiplexing control information through a transmission line different from the transmission line for transmitting said data and/or control information.
- Yet another aspect of the present invention is an audio-video receiving apparatus comprising: main looking-listening means for looking at and listening to a broadcast program; and auxiliary looking-listening means for cyclically detecting the state of a broadcast program other than the broadcast program looked and listened through said main looking-listening means; wherein said detection is performed so that a program and/or data necessary when said broadcast program looked and listened through said main looking-listening means is switched to other broadcast program can be smoothly processed, and said data includes video data and/or audio data.
- Still yet another aspect of the present invention is the audio-video transmitting apparatus, wherein priority values can be changed in accordance with the situation by transmitting the offset value of information showing the priority for processing of said data.
- a further aspect of the present invention is an audio-video receiving apparatus comprising: receiving means for receiving encoded information to which the information concerned with the priority for processing under an overload state is previously added; and priority deciding means for deciding a threshold serving as a criterion for selecting whether to process an object in said information received by said receiving means;
- wherein the timing for outputting said received information, or the timing for decoding it, is compared with the elapsed time after the start of processing, and said threshold is changed in accordance with the comparison result; video data and/or audio data are included as said encoding object.
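A sketch of the receiving-side threshold adjustment follows. The convention that a larger priority value means more important, and the specific adjustment step of one level, are assumptions for illustration:

```python
def adjust_threshold(threshold: int, due_ms: float, elapsed_ms: float) -> int:
    """Compare the scheduled output (or decode) timing with the elapsed time
    since the start of processing; an object is processed only if its priority
    is at or above the threshold."""
    if elapsed_ms > due_ms:        # running late: discard more low-priority objects
        return threshold + 1
    if elapsed_ms < 0.5 * due_ms:  # comfortably early: process more objects
        return max(0, threshold - 1)
    return threshold
```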
- a still further aspect of the present invention is the audio-video transmitting apparatus, wherein retransmission-request-priority deciding means for deciding a threshold serving as a criterion for selecting whether to request retransmission of some of said information not received because it is lost under transmission when it is necessary to retransmit said information is included, and
- said decided threshold is decided in accordance with at least one of the priority controlled by said priority deciding means, retransmission frequency, lost factor of information, insertion interval between in-frame-encoded frames, and grading of priority.
- a yet further aspect of the present invention is an audio-video transmitting apparatus comprising: retransmission-priority deciding means for deciding a threshold serving as a criterion for selecting whether to request retransmission of some of said information not received because it is lost under transmission when retransmission of said unreceived information is requested is included, wherein said decided threshold is decided in accordance with at least one of the priority controlled by the priority deciding means of said audio-video receiving apparatus, retransmission frequency, lost factor of information, insertion interval between in-frame-encoded frames, and grading of priority.
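Both the receiver-side and transmitter-side retransmission decisions weigh the same inputs (priority, retransmission frequency, loss factor, interval between in-frame-encoded frames). A hypothetical sketch of how those criteria might combine; the cutoffs are invented for illustration:

```python
def should_request_retransmission(priority: int, threshold: int,
                                  retry_count: int, max_retries: int,
                                  loss_rate: float,
                                  ms_to_next_intra: float) -> bool:
    """Request retransmission of lost data only when it is important enough,
    retries remain, losses are not overwhelming, and the next in-frame-encoded
    frame (which refreshes the picture anyway) is not imminent."""
    if priority < threshold:
        return False                 # below the priority cutoff
    if retry_count >= max_retries:
        return False                 # retransmission frequency exhausted
    if loss_rate > 0.5:
        return False                 # channel too lossy for retransmission to help
    return ms_to_next_intra > 100.0  # skip if an intra frame will arrive soon
```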
- a still yet further aspect of the present invention is an audio-video transmitting apparatus for transmitting said encoded information by using the priority added to said encoded information and thereby thinning it when (1) an actual transfer rate exceeds the target transfer rate of information for a video or audio or (2) it is decided that writing of said encoded information into a transmitting buffer is delayed as the result of comparing the elapsed time after start of transmission with a period to be decoded or output added to said encoded information.
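A transmitter-side sketch of this thinning rule, mirroring conditions (1) and (2) above (the threshold comparison is an assumed convention):

```python
def should_transmit(priority: int, threshold: int,
                    actual_rate: float, target_rate: float,
                    elapsed_ms: float, due_ms: float) -> bool:
    """Thin low-priority encoded information when (1) the actual transfer rate
    exceeds the target rate, or (2) writing into the transmitting buffer lags
    behind the decode/output time added to the encoded information."""
    overloaded = (actual_rate > target_rate) or (elapsed_ms > due_ms)
    return priority >= threshold if overloaded else True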
- Another aspect of the present invention is a data processing apparatus comprising: receiving means for receiving a data series including (1) time-series data for audio or video, (2) an inter-time-series-data priority showing the priority of the processing between said time-series-data values, and (3) a plurality of in-time-series-data priorities for dividing said time-series data value to show the processing priority between divided data values; and data processing means for performing processing by using said inter-time-series-data priority and said in-time-series-data priority together when pluralities of said time-series-data values are simultaneously present.
- Still another aspect of the present invention is a data processing apparatus comprising: receiving means for receiving a data series including (1) time-series data for audio or video, (2) an inter-time-series-data priority showing the priority of the processing between said time-series-data values, and (3) a plurality of in-time-series-data priorities for dividing said time-series data value to show the processing priority between divided data values; and data processing means for distributing throughput to each of said time-series-data values in accordance with said inter-time-series-data priority and moreover, adaptively deteriorating the processing quality of the divided data in said time-series data in accordance with said in-time-series-data priority so that each of said time-series-data values is kept within said distributed throughput.
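A sketch of this two-level scheme, assuming the processing cost of each divided data value is known (all names are hypothetical): throughput is first divided between streams by their inter-time-series-data priority, then each stream degrades its own divided data by in-time-series-data priority to fit its share.

```python
def distribute_throughput(streams: list[dict], budget_ms: float) -> None:
    """streams: [{'stream_priority': int,
                  'objects': [{'obj_priority': int, 'cost_ms': float}, ...]}]"""
    total_weight = sum(s["stream_priority"] for s in streams)
    for s in streams:
        share = budget_ms * s["stream_priority"] / total_weight
        # Keep the most important divided data first until the share is spent.
        kept, used = [], 0.0
        for obj in sorted(s["objects"], key=lambda o: o["obj_priority"], reverse=True):
            if used + obj["cost_ms"] <= share:
                kept.append(obj)
                used += obj["cost_ms"]
        s["kept"] = kept            # adaptively degraded set, within the share
```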
- Yet another aspect of the present invention is a data processing apparatus characterized by, when an in-time-series-data priority for a video is added every frame of said video and said video for each frame is divided into a plurality of packets, adding said in-time-series-data priority only to the header portion of a packet for transmitting the head portion of a frame of said video accessible as independent information.
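A sketch of this packetization rule (hypothetical packet structure): the in-time-series-data priority appears only in the header of the packet carrying the independently accessible head of the frame.

```python
def packetize_frame(frame: bytes, mtu: int, frame_priority: int) -> list[dict]:
    """Attach the in-time-series-data priority only to the head packet of the
    frame; continuation packets carry no priority field of their own."""
    chunks = [frame[i:i + mtu] for i in range(0, len(frame), mtu)]
    return [{"priority": frame_priority if i == 0 else None, "payload": chunk}
            for i, chunk in enumerate(chunks)]
```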
- Still yet another aspect of the present invention is the data processing apparatus, wherein said in-time-series-data priority is described in the header of a packet to perform priority processing.
- a further aspect of the present invention is the data processing apparatus, wherein the range of a value capable of expressing said in-time-series-data priority is made variable to perform priority processing.
- a still further aspect of the present invention is a data processing method comprising the steps of: inputting a data series including time-series data for audio or video and an inter-time-series-data priority showing the processing priority between said time-series data values; and
- processing priorities by using said inter-time-series-data priority as the value of a relative or absolute priority.
- a yet further aspect of the present invention is a data processing method comprising the steps of: classifying time-series data values for audio or video; inputting a data series including said time-series data and a plurality of in-time-series-data priorities showing the processing priority between said classified data values; and processing priorities by using said in-time-series-data priority as the value of a relative or absolute priority.
- the present invention is characterized by:
- the present invention has the above structure to obtain the execution frequency of indispensable processing and that of dispensable processing, transmit the execution frequencies to the receiving side, and estimate the time required for each processing in accordance with the execution frequencies and the decoding time.
- Moreover, it is possible to keep the decoding execution time at or below a designated time by transmitting the execution times of indispensable processing and dispensable processing estimated by the receiving side to the transmitting side, and determining each execution frequency at the transmitting side in accordance with each execution time.
- The estimated encoding time is kept at or below a user-designated time by estimating the execution times of indispensable processing and dispensable processing and determining each execution frequency in accordance with those execution times and the user-designated time determined from the frame rate designated by the user.
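One plausible way to estimate the per-execution times from the execution frequencies and observed decoding times is a linear least-squares fit; the sketch below assumes such a linear cost model, which the patent text here does not prescribe in detail:

```python
import numpy as np

def fit_costs(freq_indisp, freq_disp, measured_times):
    """Least-squares fit of time ≈ a*freq_indisp + b*freq_disp, where a and b
    are the per-execution times of indispensable and dispensable processing."""
    design = np.column_stack([freq_indisp, freq_disp])
    (a, b), *_ = np.linalg.lstsq(design, np.asarray(measured_times), rcond=None)
    return a, b

def max_dispensable_frequency(a: float, b: float,
                              freq_indisp: float, designated_time: float) -> int:
    """Keep all indispensable processing and scale dispensable processing so
    the estimated execution time does not exceed the designated time."""
    slack = designated_time - a * freq_indisp
    return max(0, int(slack / b)) if b > 0 else 0
```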
- FIG. 1 is a schematic block diagram of the audio-video transceiver of an embodiment of the present invention
- FIG. 2 is an illustration showing a reception control section and a separating section
- FIG. 3 is an illustration showing a method for transmitting and controlling video and audio by using a plurality of logical transmission lines
- FIG. 4 is an illustration showing a method for dynamically changing header information added to the data for a video or audio to be transmitted;
- FIGS. 5 ( a ) and 5 ( b ) are illustrations showing a method for adding AL information
- FIGS. 6 ( a ) to 6 ( d ) are illustrations showing examples of a method for adding AL information
- FIG. 7 is an illustration showing a method for transmitting information by dynamically multiplexing and separating a plurality of logical transmission lines
- FIG. 8 is an illustration showing a procedure for transmitting a broadcasting program
- FIG. 9 ( a ) is an illustration showing a method for transmitting a video or audio considering the read and rise time of program or data when the program or data is present at a receiving terminal;
- FIG. 9 ( b ) is an illustration showing a method for transmitting a video or audio considering the read and rise time of program or data when the program or data is transmitted;
- FIG. 10 ( a ) is an illustration showing a method for corresponding to zapping
- FIG. 10 ( b ) is an illustration showing a method for corresponding to zapping
- FIG. 11 ( a ) is an illustration showing a specific example of the protocol to be actually transferred between terminals
- FIG. 11 ( b ) is an illustration showing a specific example of the protocol to be actually transferred between terminals
- FIG. 12 is an illustration showing a specific example of the protocol to be actually transferred between terminals
- FIG. 13 ( a ) is an illustration showing a specific example of the protocol to be actually transferred between terminals
- FIG. 13 ( b ) is an illustration showing a specific example of the protocol to be actually transferred between terminals
- FIG. 13 ( c ) is an illustration showing a specific example of the protocol to be actually transferred between terminals
- FIG. 14 is an illustration showing a specific example of the protocol to be actually transferred between terminals
- FIG. 15 is an illustration showing a specific example of the protocol to be actually transferred between terminals
- FIG. 16 ( a ) is an illustration showing a specific example of the protocol to be actually transferred between terminals
- FIG. 16 ( b ) is an illustration showing a specific example of the protocol to be actually transferred between terminals
- FIG. 17 is an illustration showing a specific example of the protocol to be actually transferred between terminals
- FIG. 18 is an illustration showing a specific example of the protocol to be actually transferred between terminals
- FIG. 19 ( a ) is an illustration showing a specific example of the protocol to be actually transferred between terminals
- FIG. 19 ( b ) is an illustration showing a specific example of the protocol to be actually transferred between terminals
- FIGS. 20 ( a ) to 20 ( c ) are block diagrams of demonstration systems of CGD of the present invention.
- FIG. 21 is an illustration showing a method for adding a priority under overload at an encoder
- FIG. 22 is an illustration describing a method for deciding a priority at a receiving terminal under overload
- FIG. 23 is an illustration showing temporal change of priorities
- FIG. 24 is an illustration showing stream priority and object priority
- FIG. 25 is a schematic block diagram of a video encoder and a video decoder of an embodiment of the present invention.
- FIG. 26 is a schematic block diagram of an audio encoder and an audio decoder of an embodiment of the present invention.
- FIGS. 27 ( a ) and 27 ( b ) are illustrations showing a priority adding section and a priority deciding section for controlling the priority of processing under overload;
- FIGS. 28 ( a ) to 28 ( c ) are illustrations showing the grading for adding a priority
- FIG. 29 is an illustration showing a method for assigning a priority to multi-resolution video data
- FIG. 30 is an illustration showing a method for constituting a communication payload
- FIG. 31 is an illustration showing a method for making data correspond to a communication payload
- FIG. 32 is an illustration showing the relation between object priority, stream priority, and communication packet priority
- FIG. 33 is a block diagram of a transmitter of the first embodiment of the present invention.
- FIG. 34 is an illustration of the first embodiment
- FIG. 35 is a block diagram of the receiver of the third embodiment of the present invention.
- FIG. 36 is a block diagram of the receiver of the fifth embodiment of the present invention.
- FIG. 37 is an illustration of the fifth embodiment
- FIG. 38 is a block diagram of the transmitter of the sixth embodiment of the present invention.
- FIG. 39 is a block diagram of the transmitter of the eighth embodiment of the present invention.
- FIG. 40 is a flowchart of the transmission method of the second embodiment of the present invention.
- FIG. 41 is a flowchart of the reception method of the fourth embodiment of the present invention.
- FIG. 42 is a flowchart of the transmission method of the seventh embodiment of the present invention.
- FIG. 43 is a flowchart of the transmission method of the ninth embodiment of the present invention.
- FIG. 44 is a block diagram showing an audio-video transmitter of the present invention.
- FIG. 45 is a block diagram showing an audio-video receiver of the present invention.
- FIG. 46 is an illustration for explaining priority adding means for adding a priority to a video and audio of an audio-video transmitter of the present invention.
- FIG. 47 is an illustration for explaining priority deciding means for deciding whether to perform decoding by interpreting the priority added to a video and audio of an audio-video receiver of the present invention.
- A “picture (or video)” as used in the present invention includes both a static picture and a moving picture.
- An intended picture can be a two-dimensional picture like computer graphics (CG) or three-dimensional picture data constituted with a wire-frame model.
- FIG. 1 is a schematic block diagram of the audio-video transceiver of an embodiment of the present invention.
- A reception control section 11 for receiving information and a transmitting section 13 for transmitting information are information transmitting means such as a coaxial cable, CATV, LAN, or modem.
- The communication environment can be one in which a plurality of logical transmission lines can be used without considering multiplexing means, such as the Internet, or one in which multiplexing means must be considered, such as an analog telephone line or satellite broadcasting.
- As terminal connection systems, a system for bidirectionally transferring video and audio between terminals, such as a picture telephone or teleconference system, and a system for broadcasting broadcast-type video and audio through satellite broadcasting, CATV, or the Internet are listed.
- The present invention takes such terminal connection systems into consideration.
- A separating section 12 shown in FIG. 1 is means for analyzing received information and separating data from control information. Specifically, the section 12 is means for separating the transmission header information added to data from the data itself, or separating the data-control header added to the data from the contents of the data.
- a picture extending section 14 is means for extending a received video.
- A video to be extended may or may not be a picture compressed by a standardized moving- (dynamic-) or static-picture method such as H.261, H.263, MPEG1/2, or JPEG.
- The picture-extension control section 15 shown in FIG. 1 is means for monitoring the extended state of a video. For example, by monitoring the extended state of a picture, it is possible to empty-read a receiving buffer without extending the picture when the receiving buffer is about to overflow, and to restart extension of the picture after the picture is ready for extension.
- A picture synthesizing section 16 is means for synthesizing an extended picture.
- a picture synthesizing method can be defined by describing a picture and its structural information (display position and display time (moreover, a display period can be included)), a method for grouping pictures, a picture display layer (depth), an object ID (SSRC to be described later), and the relation between attributes of them with a script language such as JAVA, VRML, or MHEG.
- the script describing the synthesizing method is input or output through a network or a local memory.
- an output section 17 is a display or printer for outputting a picture synthesized result.
- a terminal control section 18 is means for controlling each section. Furthermore, it is possible to use a structure for extending an audio instead of a picture (it is possible to constitute the structure by changing a picture extending section to an audio extending section, a picture extension control section to an audio extension control section, and a picture synthesizing section to an audio synthesizing section) or a structure for extending a picture and an audio and synthesizing and displaying them while keeping temporal synchronization.
- Moreover, a transmitting terminal can be provided with a picture compressing section for compressing a picture, a picture compression control section for controlling the picture compressing section, an audio compressing section for compressing an audio, and an audio compression control section for controlling the audio compressing section.
- FIG. 2 is an illustration showing a reception control section and a separating section.
- By constituting the reception control section 11 shown in FIG. 1 with a data receiving section 101 for receiving data and a control information receiving section 102 for receiving the control information for controlling data, and the separating section 12 with a transmission format storing section 103 for storing a transmission structure (to be described later in detail) for interpreting transmission contents and a transmission information interpreting section 104 for interpreting transmission contents in accordance with the transmission structure stored in the transmission format storing section 103, it is possible to independently receive data and control information. Therefore, for example, it is easy to delete or move a received video or audio while receiving it.
- The communication environment intended for the reception control section 11 can be a communication environment (Internet profile) in which a plurality of logical transmission lines can be used without considering multiplexing means, like the Internet, or a communication environment (Raw profile) in which multiplexing means must be considered, like an analog telephone line or satellite broadcasting.
- The present invention assumes a communication environment in which a plurality of logical transmission lines (logical channels) are prepared (for example, in a communication environment in which TCP/IP can be used, the expression “communication port” is generally used).
- The reception control section 11 receives one or more data transmission lines and one or more control logical transmission lines for controlling the data to be transmitted. It is also possible to prepare a plurality of transmission lines for transmitting data and only one transmission line for controlling data. Moreover, it is possible to prepare a transmission line for controlling data for every data transmission, like the RTP/RTCP also used for H.323. Furthermore, when considering broadcasting using UDP, it is possible to use a communication system using a single communication port (multicast address).
- FIG. 3 is an illustration for explaining a method for transmitting and controlling video and audio by using a plurality of logical transmission lines.
- the data to be transmitted is referred to as ES (Elementary Stream), which can be picture information for one frame or picture information in GOBs or macroblocks smaller than one frame in the case of a picture.
- the data-control header information added to the data to be transmitted is referred to as AL (Adaptation Layer information).
- the information showing whether it is a start position capable of processing data, information showing data-reproducing time, and information showing the priority of data processing are listed as the AL information.
- Data control information of the present invention corresponds to the AL information.
- the information showing whether it is a start position capable of processing data specifically includes two types of information.
- The first is a random access flag, that is, information showing that the data can be individually read and reproduced independently of preceding or following data, such as an intra-frame (I picture) in the case of a picture.
- The second is an access flag, a flag showing that the data can be individually read, that is, information showing that it is the head of a picture in GOBs or macroblocks in the case of a picture. Therefore, the absence of an access flag shows the middle of data. Both a random access flag and an access flag are not always necessary as information showing a start position capable of processing data.
- The information indicating a data-reproducing time shows information for time synchronization when a picture and an audio are reproduced, which is referred to as a PTS (Presentation Time Stamp) in the case of MPEG1/2. Because time synchronization is not normally considered in the case of real-time communication such as a teleconference system, the information representing a reproducing time is not always necessary.
- the time interval between encoded frames may be necessary information.
- By making the receiving side adjust the time interval, it is possible to prevent a large fluctuation of frame intervals. However, by making the receiving side adjust the reproducing interval, a delay may occur. Therefore, it may be decided that the time information showing the interval between encoded frames is unnecessary.
- With the information showing the data-processing priority, the receiving terminal is able to process the data with the picture-extension control section 15, and the network is able to process the data with a relay terminal or router.
- the priority can be expressed by a numerical value or a flag.
- By transmitting the offset value of the information showing the data-processing priority as control information or as data control information (AL information) together with the data, and adding the offset value to the priority previously assigned to a video or audio when the load of a receiving terminal or network suddenly fluctuates, it is possible to set a dynamic priority corresponding to the operation state of the system.
- the information showing the data processing priority can be added every stream constituted with the aggregation of frames of a plurality of pictures or audios or every frame of video or audio.
- Priority adding means for deciding the processing priority of encoded information under overload in accordance with rules predetermined by the encoding method, such as H.263 or G.723, and for making the encoded information correspond to the decided priority, is provided for a transmitting terminal unit (see FIG. 46 ).
- FIG. 46 is an illustration for explaining priority adding means 5201 for adding a priority to a picture and an audio.
- a priority is added to encoded-video data (to be processed by video encoding means 5202 ) and encoded-audio data (to be processed by audio encoding means 5203 ) in accordance with predetermined rules.
- the rules for adding priorities are stored in priority adding rules 5204 .
- the rules include rules for adding a priority higher than that of a P-frame (inter-frame encoded picture frame) to an I-frame (intra-frame encoded picture frame) and rules for adding a priority lower than that of an audio to a picture. Moreover, it is possible to change the rules in accordance with the designation of a user.
- Priority-adding objects are a scene change in the case of a picture, an audio block and an audioless block in the case of an audio, and a picture frame or stream designated by an editor or user.
- Two methods are considered for adding a priority: a method of adding a priority to a communication header, and a method of embedding a priority, during encoding, in the header of the bit stream in which a video or audio is encoded.
- the former makes it possible to obtain the information for priority without decoding it and the latter makes it possible to independently handle a single bit stream without depending on a system.
- In the case of a picture, a priority is added only to the communication header for transmitting the head of a picture frame accessible as independent information (when priorities are equal in the same picture frame, it is possible to assume that the priorities are not changed before the head of the next accessible picture frame appears).
- Priority deciding means for deciding a processing method in accordance with the priorities of various pieces of received encoded information under overload is provided for a receiving terminal unit (see FIG. 47 ).
- FIG. 47 is an illustration for explaining priority deciding means 5301 for interpreting the priorities added to a picture and an audio and deciding whether to perform decoding.
- the priorities include a priority added to each stream of each picture or audio and a priority added to each frame of a picture or audio. It is possible to use these priorities independently or by making a frame priority correspond to a stream priority.
- the priority deciding means 5301 decides a stream or frame to be decoded in accordance with these priorities.
- Decoding is performed by using two types of priorities for deciding a processing priority under overload at a terminal:
- a stream priority for defining a relative priority between bit streams such as a picture and an audio, and
- a frame priority for defining a relative priority between decoding units, such as picture frames, within the same stream.
- the former stream priority makes it possible to handle a plurality of videos or audios.
- The latter frame priority makes it possible to add different priorities to scene changes, or even to the same intra-frame encoded picture frames (I-frames), in accordance with the intention of an editor.
- By making a stream priority correspond to the time or processing priority assigned by an operating system (OS) for encoding or decoding a picture or audio, and thereby controlling the stream priority, it is possible to control the processing time at the OS level.
- For example, a priority can be defined at five OS levels.
- It is possible to decide a priority by using a floppy disk or optical disk as a data-recording medium. Furthermore, it is possible to decide a priority by using not only a recording medium but also an object capable of recording a program, such as an IC card or ROM cassette. Furthermore, it is possible to use a repeater for a picture or audio, such as a router or gateway for relaying data.
- Priority deciding means for deciding the threshold of the priority of encoded information to be processed is provided in a picture-extension control section or audio-extension control section, and the time to be displayed (PTS) is compared with the elapsed time after the start of processing, or the time to be decoded (DTS) is compared with the elapsed time after the start of processing, to change the threshold of the priority of encoded information to be processed in accordance with the comparison result (it is also possible to refer to the insertion interval of I-frames or the grading of a priority as information for changing thresholds).
- A captured picture with QCIF or CIF size is encoded by an encoder (H.263), which outputs, together with the encoded information, a time stamp (PTS) showing the time for decoding (DTS) or the time for displaying the picture, priority information (CGD, Computational Graceful Degradation) showing the processing sequence under overload, the frame type, and a sequence number (SN).
- An audio is also recorded through a microphone and encoded by an encoder (G.721), which outputs, together with the encoded information, a time stamp (PTS) showing the time for decoding (DTS) or the time for reproducing the audio, priority information (CGD), and a sequence number (SN).
- A picture and an audio are supplied to separate buffers, and their respective DTSs (decoding times) are compared with the elapsed time after the start of processing.
- the picture and the audio are supplied to their corresponding decoders (H.263 and G.721).
- the example in FIG. 21 describes a method for adding a priority by an encoder under overload.
- An I-frame (intra-frame encoded picture frame) is assigned priorities of “0” and “1”, and a P-frame has a priority of “2”, which is lower than that of an I-frame. Because two levels of priority are assigned to I-frames, it is possible to reproduce only the I-frames having a priority of “0” when a decoding terminal has a large load. Moreover, it is necessary to adjust the insertion interval of I-frames in accordance with the priority adding method.
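- For illustration only, the following Python sketch mimics the FIG. 21 assignment: I-frames receive the two high priority levels “0” and “1”, and P-frames receive “2”. This is a minimal sketch; the frame record and the rule of alternating the two I-frame levels are assumptions, not taken from the specification.

```python
from dataclasses import dataclass

@dataclass
class EncodedFrame:
    frame_type: str    # "I" (intra-frame encoded) or "P" (inter-frame encoded)
    priority: int = 0  # CGD priority; a smaller value means a higher priority

def add_priority(frames):
    """Assign priorities under the I/P rule described above."""
    i_count = 0
    for f in frames:
        if f.frame_type == "I":
            f.priority = i_count % 2  # alternate the two I-frame levels 0 and 1
            i_count += 1
        else:
            f.priority = 2            # P-frames rank below all I-frames
    return frames

frames = [EncodedFrame(t) for t in ("I", "P", "P", "I", "P")]
print([(f.frame_type, f.priority) for f in add_priority(frames)])
```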
- FIG. 22 shows a method for deciding a priority at a receiving terminal under overload.
- The priority of every frame to be disused is set to a value larger than the cutOffPriority. That is, every picture frame is initially assumed to be an object to be processed. It is possible to know in advance the maximum value of the priorities added to picture frames by communicating it from the transmitting side to the receiving side (step 101).
- When processing is delayed, the threshold of the priority of a picture or audio to be processed is decreased to thin out processings (step 102).
- When processing keeps up, the threshold of the priority is increased in order to increase the number of pictures or audios that can be processed (step 103).
- A priority offset value is added to the priority of a picture frame (or audio frame), and the result is compared with the threshold of the priority.
- The data to be decoded is supplied to a decoder (step 104).
- A priority offset can be used, for example, by checking the performance of a machine in advance and communicating the offset to a receiving terminal (a user can also issue the designation at the receiving terminal), or by changing the priorities of a plurality of video and audio streams per stream (for example, thinning out processings by increasing the offset value of the rearmost background).
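- The decision loop of FIG. 22 (steps 101 to 104) can be pictured with the following sketch. It is one possible reading of the steps: the lateness test, the clamping of the threshold, and the frame layout are assumptions.

```python
def decide_frames(frames, max_priority, offset=0):
    cut_off = max_priority  # step 101: at first, every frame is an object to process
    decoded = []
    for f in frames:
        if f["late"]:
            cut_off = max(cut_off - 1, 0)             # step 102: thin out processing
        else:
            cut_off = min(cut_off + 1, max_priority)  # step 103: admit more frames
        if f["priority"] + offset <= cut_off:
            decoded.append(f)                         # step 104: pass to the decoder
    return decoded

frames = [{"priority": p, "late": l}
          for p, l in ((0, False), (2, False), (3, True), (1, False))]
print([f["priority"] for f in decide_frames(frames, max_priority=3)])
```

- A usage note: raising the offset of a low-importance stream (such as the rearmost background mentioned above) makes its frames fail the threshold test earlier, which is exactly the thinning-out effect described.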
- FIG. 23 is an illustration showing temporal change of priorities by using the above algorithm.
- FIG. 23 shows the change of a priority to be added to a picture frame.
- This priority is a priority for deciding whether to perform decoding when a terminal is overloaded, and it is added to every frame. The smaller the value of a priority, the higher the priority. In the example in FIG. 23 , 0 has the highest priority.
- When the threshold of a priority is 3, a frame to which a priority value larger than 3 is added is disused without being decoded, and a frame to which a value of 3 or less is added is decoded.
- When the retransmission frequency or loss rate of information is too large, it is necessary to raise the priority of the information to be retransmitted and lower the retransmission or loss rate. Moreover, by knowing the priority used by the priority deciding section, it is possible to avoid transmitting information that would not be processed.
- When the actual transfer rate exceeds the target transfer rate of the transmitting terminal, or when writing of the encoded information into a transmitting buffer is delayed (as found by comparing the elapsed time after the start of transfer processing with the time added to the encoded information to be decoded or displayed), it is possible to transmit a picture or audio matching the target rate by thinning out transmissions of information, using the priority that is added to the encoded information and used by the priority deciding section of the receiving terminal under overload. Moreover, by introducing into the transmitting-side terminal the processing-skipping function performed under overload at the receiving-side terminal, it is possible to control failures due to overload of the transmitting-side terminal.
- FIG. 4 is an illustration for explaining a method for dynamically changing header information added to the data for a picture or audio to be transmitted.
- The data (ES) to be transmitted is decomposed into data pieces, and identifying information (a sequence number) showing the sequence of data, information (a marker bit) showing whether it is a start position capable of processing data pieces, and time information (a time stamp) concerned with the transfer of data pieces are added to the data pieces in the form of communication headers; these pieces of information correspond to transmission control information of the present invention.
- In the case of RTP (Realtime Transfer Protocol; RFC1889), the above sequence number, marker bit, time stamp, object ID (referred to as SSRC), and version number are used as communication headers.
- Though a header-information item can be extended, the above items are always added as fixed items.
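- As a rough model, the fixed items can be written down as follows. The bit widths in the comments follow RFC 1889; the example payload type and the omission of wire-format packing are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class RtpHeader:
    version: int          # 2 bits in RFC 1889
    payload_type: int     # 7 bits
    marker: bool          # marker bit: start position capable of processing
    sequence_number: int  # 16 bits: sequence of the data pieces
    timestamp: int        # 32 bits: PTS for MPEG1/2, frame interval for H.263
    ssrc: int             # 32 bits: object ID (synchronization source)

print(RtpHeader(version=2, payload_type=34, marker=True,
                sequence_number=0, timestamp=0, ssrc=0x1234))
```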
- identifying means is necessary because meanings of communication headers are different from each other.
- time-stamp information shows PTS that is a reproducing time as previously described in the case of MPEG1/2.
- the time-stamp information shows a time interval when the information is encoded.
- However, the time-stamp information shows the time interval between encoded frames in the case of H.263, and it is defined by RTP that the time stamp of the first frame is random.
- Therefore, a flag showing whether a time stamp is a PTS must be added, either as (a) communication header information (it is necessary to extend the communication header) or as (b) header information for the payload of H.263 or H.261 (that is, AL information; in this case, it is necessary to extend the payload information).
- a marker bit serving as the information showing whether it is a start position capable of processing data pieces is added as RTP header information. Moreover, as described above, there is a case in which it is necessary to provide an access flag showing that it is a start position capable of accessing data and a random access flag showing that it is possible to access data at random for AL information. Because doubly providing flags for a communication header lowers the efficiency, a method of substituting an AL flag by a flag prepared for the communication header is also considered.
- It is also possible to provide a timer or counter for showing the effective period of data processing.
- FIGS. 5 ( a ) and 5 ( b ) and FIGS. 6 ( a ) to 6 ( d ) are illustrations for explaining a method for adding AL information.
- Receiving-terminal correspondence can be smoothly performed by expressing the effective period as a flag, counter, or timer, preparing the expression as AL information or as a communication header, and communicating it to the receiving terminal.
- The header information of RTP or the AL information is corrected and extended so that the header information already assigned to RTP and that already assigned to AL do not overlap (in particular, the time-stamp information overlaps, and the priority information for a timer, counter, or data processing becomes extension information). Alternatively, it is possible not to extend the header of RTP, or not to consider duplication of AL information with the information of RTP. These correspond to the contents shown so far. Because a part of RTP is already practically used for H.323, it is effective to extend RTP while keeping compatibility. (See FIG. 6 ( a ).)
- The communication header is simplified (for example, using only a sequence number) and the remainder is provided as multifunctional AL control information. Moreover, by making the items used for AL information variably settable before communication, it is possible to specify a flexible transmission format. (See FIG. 6 ( b ).)
- The AL information is simplified (in an extreme example, no information is added to AL) and all control information is provided in the communication header.
- A sequence number, time stamp, marker bit, payload type, and object ID frequently used as communication headers are kept as fixed information, and data-processing priority information and timer information are each provided with an identifier showing whether extended information is present, to refer to the extended information if it is defined. (See FIG. 6 ( c ).)
- The communication header and AL information are simplified, and a format is defined as a packet separate from the communication header or AL information and transmitted.
- For example, a method is considered in which only a marker bit, time stamp, and object ID are defined for AL information, only a sequence number is defined for the communication header, and payload information, data-processing priority information, and timer information are defined as a transmission packet (second packet) separate from the above information and transmitted. (See FIG. 6 ( d ).)
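- The four configurations above differ only in which carrier holds each item. The following sketch of such an assignment table is hypothetical; the carrier names and the item list are assumptions made for illustration.

```python
CARRIERS = ("comm_header", "al_info", "second_packet", "unused")

def make_format(assignment):
    """Validate that every item is mapped to a known carrier."""
    for item, carrier in assignment.items():
        if carrier not in CARRIERS:
            raise ValueError(f"unknown carrier for {item}: {carrier}")
    return assignment

# Roughly configuration (d): minimal communication header and AL information,
# with the remaining items carried in a separate second packet.
fmt_d = make_format({
    "sequence_number": "comm_header",
    "marker_bit": "al_info",
    "time_stamp": "al_info",
    "object_id": "al_info",
    "payload_type": "second_packet",
    "priority": "second_packet",
    "timer": "second_packet",
})
print(fmt_d)
```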
- FIG. 7 is an illustration for explaining a method for transmitting information by dynamically multiplexing and separating a plurality of logical transmission lines.
- The number of logical transmission lines can be decreased by providing, for a transmitting section, an information multiplexing section capable of starting or ending multiplexing of the information of logical transmission lines for transmitting a plurality of pieces of data or control information in accordance with the designation by a user or the number of logical transmission lines, and by providing, for a reception control section, an information separating section for separating multiplexed information.
- the information multiplexing section is referred to as “Group MUX” and specifically, it is possible to use a multiplexing system such as H.223. It is possible to provide the Group MUX for a transmitting/receiving terminal. By providing the Group MUX for a relay router or terminal, it is possible to correspond to a narrow-band communication channel. Moreover, by realizing Group MUX with H.223, it is possible to interconnect H.223 and H.324.
- The multiplexing control information concerned with the information multiplexing section is information showing the content of multiplexing, that is, how the information multiplexing section performs multiplexing for each piece of data.
- the control information includes a multiplexing pattern.
- Instead of deciding an identifier of an information multiplexing section or information separating section between terminals, it is possible to generate an identifier of the information multiplexing section. For example, it is possible to generate random numbers in a range determined between the transmitting and receiving terminals and use the largest value for the identifier (identification number) of the information multiplexing section.
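- A minimal sketch of this identifier generation, assuming two terminals and an arbitrary 16-bit range:

```python
import random

def negotiate_mux_id(n_terminals=2, lo=0, hi=2**16 - 1):
    """Each terminal draws a random number in the agreed range;
    the largest drawn value becomes the Group Mux identification number."""
    draws = [random.randint(lo, hi) for _ in range(n_terminals)]
    return max(draws), draws

mux_id, draws = negotiate_mux_id()
print("draws:", draws, "-> Group Mux ID:", mux_id)
```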
- Because the data multiplexed by the information multiplexing section is different from the media type conventionally defined in RTP, it is necessary to define, for the payload type of RTP, information showing that the data is multiplexed by the information multiplexing section (a new media type H.223 is defined).
- the information multiplexing section By arranging the information to be transmitted by or recorded in the information multiplexing section in the sequence of control information and data information so as to improve the access speed to multiplexed data, it is expected to quickly analyze multiplexed information. Moreover, it is possible to quickly analyze header information by fixing an item which is described in accordance with the data control information added to control information and adding and multiplexing an identifier (unique pattern) different from data.
- FIG. 8 is an illustration for explaining the transmission procedure of a broadcasting program.
- In the case of communication with no back channel, it is necessary to transmit control information sufficiently before transmitting data so that the receiving terminal can know the structural information of the data. Moreover, control information should be transmitted through a transmission channel free from packet loss and having high reliability. However, when using a transmission channel having low reliability, it is necessary to cyclically transmit the control information with the same transmission sequence number. This is not restricted to the case of transmitting the control information concerned with a setup time.
- FIGS. 9 ( a ) and 9 ( b ) are illustrations showing a picture or audio transmission method considering the read time and rise time of a necessary program (e.g. H.263, MPEG1/2, or audio-decoder software) or data (e.g. video data or audio data) stored in a memory.
- When a program or data is transmitted, by transmitting it from the transmitting side together with the information showing its storage destination (e.g. hard disk or memory) at the receiving terminal, the time required for start or read, the relation between the type or storage destination of a terminal and the time required for start or read (e.g. the relation between CPU power, storage device, and average response time), and the utilization sequence, it is possible to schedule the storage destination and read time of the program or data when it is actually required at the receiving terminal.
- FIGS. 10 ( a ) and 10 ( b ) are illustrations for explaining a method for corresponding to zapping (channel change of TV).
- When the program or data necessary for a program other than the one the user is watching resides in a memory requiring a long read time, the setup time at the receiving-side terminal can be decreased by (a) using a main looking-listening section, through which the user looks and listens, together with an auxiliary looking-listening section, with which the receiving terminal cyclically monitors programs other than the one looked at and listened to by the user, (b) receiving, as control information (information transmitted by a packet different from that of the data, to control terminal processing) or as data control information (AL information), the relation between an identifier for identifying the program or data required in advance, the information of a flag, counter, or timer for estimating the point of time at which the receiving terminal will need it, and the program, and (c) preparing the read of the program or data together with the data.
- the above heading pictures include broadcasted pictures obtained by cyclically sampling programs broadcasted through a plurality of channels.
- A timer is a time expression showing the point of time at which the program necessary to decode the data stream sent from the transmitting side will be needed.
- A counter can be information simply showing an ordinal count in the basic time unit determined between the transmitting and receiving terminals.
- A flag is transmitted and communicated together with the data transmitted before the time necessary for setup, or as control information (information transmitted through a packet different from that of the data, to control terminal processing). It is possible to transmit the timer and counter either by embedding them in the data or as control information.
- When using a transmission line operating on a clock base, such as ISDN, the time at which setup is performed can be estimated by using a transmission serial number identifying the transmission sequence as transmission control information, communicating from the transmitting terminal to the receiving terminal the point of time when the program or data is required, and communicating the serial number to the receiving terminal together with the data as data control information or as control information.
- When the transmission time fluctuates due to jitter or delay, as on the Internet, it is necessary to add the propagation delay of transmission to the setup time in accordance with the jitter or delay time, using the means for realizing RTCP (a media transmission protocol used on the Internet).
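- As a back-of-the-envelope sketch, the latest safe transmission time then follows from the required point of time minus the read/start time, the propagation delay, and a jitter allowance. The numbers below are invented for illustration.

```python
def send_deadline(needed_at, read_time, propagation_delay, jitter):
    """Latest transmission time so the program or data is ready when needed."""
    return needed_at - (read_time + propagation_delay + jitter)

# e.g. needed at t = 10.0 s, 1.5 s read, 0.2 s delay, 0.3 s jitter allowance
print(send_deadline(10.0, 1.5, 0.2, 0.3))  # -> 8.0
```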
- FIGS. 11 ( a ) to 19 ( b ) are illustrations showing specific examples of protocols actually transferred between terminals.
- a transmission format and a transmission procedure are described in ASN.1. Moreover, the transmission format is extended on the basis of H.245 of ITU.
- objects of a picture and audio can have a hierarchical structure.
- Each object ID has the attributes of a broadcasting-program identifier (program ID) and an object ID (SSRC), and the structural information and synthesizing method between pictures are described by a script language such as Java or VRML.
- FIG. 11 ( a ) is an illustration showing examples of the relation between objects.
- objects are media such as an audio-video, CG, and text.
- objects constitute a hierarchical structure.
- Each object has a program number (“Program ID”, corresponding to a TV channel) and an object identifier (“Object ID”) for identifying the object.
- One is the broadcasting type in which the objects are unilaterally transmitted from a transmitting-side terminal.
- the other is the type (communication type) for transferring the objects between transmitting and receiving terminals (terminals A and B).
- Control information is transmitted by using a transmission channel referred to as LCNO in the case of the standard for video telephones.
- a plurality of transmission channels are used for transmission.
- the same program channel (program ID) is assigned to these channels.
- FIG. 11 ( b ) is an illustration for explaining how to realize a protocol for realizing the functions described for the present invention.
- the transmission protocol (H.245) used for the video-telephone standards (H.324 and H.323) is described below.
- the functions described for the present invention are realized by extending H.245.
- the description method shown by the example in FIG. 11 ( b ) is the protocol description method referred to as ASN.1.
- “Terminal Capability Set” expresses the performance of a terminal.
- the function described as “mpeg4 Capability” is extended for the conventional H.245.
- mpeg4 Capability describes the maximum number of pictures “Max Number Of pictures” and the maximum number of audio (“Max Number Of Audio”) which can be simultaneously processed by a terminal and the maximum number of multiplexing functions (“MaxNumberOfMux”) which can be realized by a terminal.
- these are expressed as the maximum number of objects (“Number Of Process Object”) which can be processed.
- a flag showing whether a communication header (expressed as AL in FIG. 12 ) can be changed is described. When the value of the flag is true, the communication header can be changed.
- The communicated side returns “MPEG4 Capability Ack” to the terminal from which “MPEG4 Capability” is transmitted if the communicated side can accept (process) the objects, but returns “MPEG4 Capability Reject” if not.
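- A hedged sketch of this exchange follows. The field names mirror the extended capability items above, while the message strings and the acceptance test are placeholders, not the H.245 syntax itself.

```python
from dataclasses import dataclass

@dataclass
class Mpeg4Capability:
    max_number_of_pictures: int
    max_number_of_audio: int
    max_number_of_mux: int
    al_changeable: bool  # flag: the communication header (AL) may be changed

def respond(own_object_limit: int, cap: Mpeg4Capability) -> str:
    """Return Ack when the requested object count is processable."""
    requested = cap.max_number_of_pictures + cap.max_number_of_audio
    return ("MPEG4CapabilityAck" if requested <= own_object_limit
            else "MPEG4CapabilityReject")

print(respond(8, Mpeg4Capability(4, 2, 1, True)))
```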
- FIG. 13 ( a ) shows how to describe a protocol for using the above Group MUX for multiplexing a plurality of logical channels to one transmission channel (transmission channel of LAN in the case of this example) in order to share the transmission channel by logical channels.
- multiplexing means (Group MUX) is made to correspond to the transmission channel (“LAN Port Number”) of LAN (Local Area Network).
- Group Mux ID is an identifier for identifying the multiplexing means.
- the communicated side returns “Create Group Mux Ack” to a terminal from which “Create Group Mux” is transmitted if the side can accept (use) the multiplexing means but returns “Create Group Mux Reject” to the terminal if not.
- Separating means serving as means for performing an operation reverse to that of the multiplexing means can be realized by the same method.
- In FIG. 13 ( b ), a case of deleting already-generated multiplexing means is described.
- In FIG. 13 ( c ), the relation between the transmission channel of LAN and a plurality of logical channels is described.
- the transmission channel of LAN is described in accordance with “LAN Port Number” and the logical channels are described in accordance with “Logical Port Number”.
- When only one Mux is used, Group Mux ID is unnecessary. Moreover, to use a plurality of Muxes, a Group Mux ID is necessary for each command of H.223. Furthermore, it is possible to use a flag for communicating the relation between the ports used by the multiplexing means and the separating means. Furthermore, it is possible to use a command making it possible to select whether to multiplex control information or to transmit it through another logical transmission line.
- the transmission channel uses LAN.
- Open Logical Channel shows the protocol description for defining the attribute of a transmission channel.
- MPEG4 Logical Channel Parameters is extended and defined for the protocol of H.245.
- FIG. 15 shows that a program number (corresponding to a TV channel) and a program name are made to correspond to the transmission channel of LAN (“MPEG4 Logical Channel Parameters”).
- “Broadcast Channel Program” denotes a description method for transmitting the correspondence between LAN transmission channel and program number in accordance with the broadcasting type.
- the example in FIG. 15 makes it possible to transmit the correspondence between up to 1,023 transmission channels and program numbers. Because transmission is unilaterally performed from the transmitting side to the receiving side in the case of broadcasting, it is necessary to cyclically transmit these pieces of information by considering the loss during transmission.
- In FIG. 16 ( a ), the attribute of an object (e.g. picture or audio) to be transmitted as a program is described (“MPEG4 Object Classdefinition”).
- Object information (“Object Structure Element”) is made to correspond to a program identifier (“Program ID”). It is possible to make up to 1,023 objects correspond to program identifiers.
- As object information, a LAN transmission channel (“LAN Port Number”), a flag showing whether scrambling is used (“Scramble Flag”), a field for defining an offset value for changing the processing priority when a terminal is overloaded (“CGD Offset”), and an identifier (“Media Type”) for identifying the type of the media (picture or audio) to be transmitted are described.
- Here, AL is defined as the additional information necessary to decode pictures for one frame, and an ES is defined as a data string corresponding to pictures for one frame.
- As AL information, the following are defined.
- Random Access Flag (flag showing whether to be independently reproducible, true for an intra-frame encoded picture frame)
- the example shows a case of transmitting the data string for one frame by using RTP (protocol for transmitting continuous media through internet, Realtime Transfer Protocol).
- AL Reconfiguration is a transmission expression for changing the maximum value that can be expressed by the above AL.
- The example in FIG. 16 ( b ) makes it possible to express up to 2 bits as “Random Access Flag Max Bit”. For example, when there is no bit, the Random Access Flag is not used; when there are two bits, the maximum value is equal to 3 (see the worked example below).
- The expression with a real number part and an exponent part is allowed (e.g. 3 × 10^6).
- an operation under the state decided by default is allowed.
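- As a worked example of “Random Access Flag Max Bit”: with b bits the largest expressible value is 2^b - 1, so zero bits mean the flag is unused and two bits give a maximum value of 3, as stated above.

```python
for bits in (0, 1, 2):
    print(bits, "bit(s) ->", "flag unused" if bits == 0 else (1 << bits) - 1)
```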
- “Setup Request” shows a transmission expression for transmitting a setup time.
- When “Setup Request” is transmitted before a program is transmitted, a transmission channel number (“Logical Channel Number”) to be used, a program ID (“execute Program Number”) to be executed, a data ID (“data Number”) to be used, and the ID of a command (“execute Command Number”) to be executed are made to correspond to each other and transmitted to a receiving terminal.
- An execution authorizing flag (“flag”), a counter (“counter”) describing after how many receptions of Setup Request execution should start, and a timer value (“timer”) showing after how much time execution should start can be used as other expression methods by making them correspond to transmission channel numbers.
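- The fields of “Setup Request” listed above can be pictured as follows; the Python types and the optionality of the flag, counter, and timer expressions are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SetupRequest:
    logical_channel_number: int    # transmission channel to be used
    execute_program_number: int    # program ID to be executed
    data_number: int               # data ID to be used
    execute_command_number: int    # command ID to be executed
    flag: Optional[bool] = None    # execution authorizing flag
    counter: Optional[int] = None  # start after this many receptions
    timer: Optional[float] = None  # start after this much time has passed

print(SetupRequest(1, 42, 7, 3, counter=2))
```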
- FIG. 18 is an illustration for explaining a transmission expression for communicating whether to use the AL described for FIG. 16 ( b ) from a transmitting terminal to a receiving terminal (“Control AL definition”).
- If “Random Access Flag Use” is true, the Random Access Flag is used; if not, it is not used. It is possible to transmit the AL change notification as control information through a transmission channel separate from that of the data, or to transmit it through the same transmission channel as the data, together with the data.
- A decoder program is listed as a program to be executed. Moreover, a setup request can be used for broadcasting and communication. Furthermore, which item serving as control information is used as AL information is designated to a receiving terminal in accordance with the above request. Furthermore, it is possible to designate to a receiving terminal which item is used as a communication header, which item is used as AL information, and which item is used as control information.
- FIG. 19 ( a ) shows the example of a transmission expression for changing the structure of header information (data control information, transmission control information, and control information) to be transmitted by using an information frame identifier (“header ID”) between transmitting and receiving terminals in accordance with the purpose.
- “class ES header” separates, in accordance with an information frame identifier, the structure of the data control information transmitted through the same transmission channel as the data from the structure of the information with which transmission control information is transmitted between transmitting and receiving terminals.
- In FIG. 19 ( b ), “AL configuration” shows an example of changing the structure of control information to be transmitted through a transmission channel different from that of the data between transmitting and receiving terminals in accordance with the purpose.
- the usage of an information frame identifier and that of a default identifier are the same as the case of FIG. 19 ( a ).
- (2) A method for dynamically changing header information (AL information) added to the data for a picture or audio to be transmitted.
- the present invention is not restricted to only synthesis of two-dimensional pictures. It is also possible to use an expression method of combining a two-dimensional picture with a three-dimensional picture or include a picture synthesizing method for synthesizing a plurality of pictures so that they are adjacent to each other like a wide-visual-field picture (panoramic picture).
- The present invention is not intended only for such communication systems as bidirectional CATV and B-ISDN.
- For example, radio waves (e.g. the VHF or UHF band) or a broadcasting satellite can be used for the transmission of pictures and audio from a center-side terminal to a home-side terminal, and an analog telephone line or N-ISDN can be used for the transmission of information from a home-side terminal to a center-side terminal (it is not always necessary that pictures, audio, and data be multiplexed).
- Moreover, it is possible to use a communication system using radio, such as IrDA, PHS (Personal Handy Phone), or a radio LAN.
- An intended terminal can be a portable terminal such as a portable information terminal, or a desktop terminal such as a set-top box or personal computer.
- a video telephone, multipoint monitoring system, multimedia database retrieval system, and game are listed as application fields.
- the present invention includes not only a receiving terminal but also a server and a repeater to be connected to a receiving terminal.
- The frame of information is determined between the transmitting and receiving terminals (e.g. an information frame including the sequence of information to be added and the number of bits: firstly assigning a random access flag as 1-bit flag information and secondly assigning 16 bits in the form of a sequence number).
- the frame of each piece of information can be any one of the frames already shown in FIGS. 6 ( a ) to 6 ( d ) and in the case of RTP, the data control information (AL) can be the header information for each medium (e.g. in the case of H.263, the header information of the video or that of the payload intrinsic to H.263), transmission control information can be the header information of RTP, and control information can be the information for controlling RTP such as RTCP.
- the following two methods are considered to change information frames of data control information.
- In the first method, a default identifier to be written in a fixed region or position of the data control information is set, and the information frame change contents are described.
- In the other method, only the information frames of the data are changed by describing the change method in the control information (information frame control information): a default identifier provided for the control information is set, the contents of the information frames of the data control information to be changed are described and communicated to a receiving terminal, and after it is confirmed through ACK/Reject that the information frames of the data control information have been changed, the data with the changed information frames is transmitted.
- Information frames of transmission control information and control information can be also changed in accordance with the above two methods ( FIG. 19 ).
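- A sketch of the second change method (describe the new frame in control information, wait for ACK/Reject, then switch) is given below, under assumed message shapes and a stubbed transport.

```python
def change_information_frame(send, recv, new_frame_description):
    """Propose a new information frame; switch only after an ACK."""
    send({"default_identifier": True, "frame": new_frame_description})
    reply = recv()          # expected: "ACK" or "Reject" from the receiver
    return reply == "ACK"   # True: transmit data in the new frame from now on

# Minimal stand-ins for the transport:
outbox = []
ok = change_information_frame(
    outbox.append, lambda: "ACK",
    {"random_access_flag_bits": 1, "sequence_number_bits": 16})
print(ok, outbox)
```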
- Though the header information of MPEG2 is fixed, by providing a default identifier for a program map table (defined by PSI) that relates the video stream of an MPEG2-TS (transport stream) to its audio stream, and by defining a configuration stream in which a method for changing the information frames of the video and audio streams is described, it is possible, when the default identifier is set, to first interpret the configuration stream and then interpret the headers of the video and audio streams in accordance with its content. The configuration stream can have the contents shown in FIG. 19.
- the contents (transmitted-format information) of the present invention about a transmission method and/or a structure of the data to be transmitted correspond to, for example, an information frame in the case of the above embodiment.
- an audio-video transmitter provided with (1) transmitting means 5001 for transmitting the content concerned with a transmission method and/or the structure of the data to be transmitted or an identifier showing the content as the transmitted-format information through the transmission line same as that of the data to be transmitted or a transmission line different from the former transmission line and (2) storing means 5002 for storing a plurality of types of the contents concerned with the transmission method and/or the structure of the data to be transmitted and a plurality of types of identifiers for the contents, in which the identifiers are included in at least one of the data control information, transmission control information, and information for controlling terminal-side processing.
- an audio-video receiver provided with receiving means 5101 for receiving the transmission format information transmitted from the audio-video transmitter and transmission information interpreting means 5102 for interpreting the received transmission format information.
- the audio-video receiver can be constituted with storing means 5103 for storing a plurality of types of contents concerned with the transmission method and/or the structure of the data to be transmitted and a plurality of types of identifiers for the contents to use the contents stored in the storing means to interpret the contents of the identifiers when receiving the identifiers as the transmission format information.
- Identifiers of the present invention correspond to the above information frame identifiers.
- the present invention makes it possible to change frames of the information corresponding to the situation in accordance with the purpose or transmission line by dynamically determining the frame of data control information, transmission control information, or control information used by transmitting and receiving terminals.
- a “picture (or video)” used for the present invention includes both a static picture and a moving picture.
- An intended picture can be a two-dimensional picture such as computer graphics (CG) or three-dimensional picture data constituted with a wire-frame model.
- FIG. 25 is a schematic block diagram of a picture encoder and a picture decoder of an embodiment of the present invention.
- a transmission control section 4011 for transmitting or recording various pieces of encoded information is means for transmitting the information for coaxial cable, CATV, LAN, or modem.
- a picture encoder 4101 has a picture encoding section 4012 for encoding picture information such as H.263, MPEG1/2, JPEG, or Huffman encoding and the transmission control section 4011 .
- A picture decoder 4102 is constituted with a reception control section 4013 for receiving various pieces of encoded information, a picture decoding section 4014 for decoding various pieces of received picture information, a picture synthesizing section 4015 for synthesizing one or more decoded pictures, and an output section 4016 constituted with a display and a printer for outputting pictures.
- FIG. 26 is a schematic block diagram of the audio encoder and an audio decoder of an embodiment of the present invention.
- An audio encoder (sound encoder) 4201 is constituted with a transmission control section 4021 for transmitting or recording various pieces of encoded information and an audio encoding section 4022 for encoding audio information such as G.721 or MPEG1 audio.
- an audio decoder (a sound decoder) 4202 is constituted with a reception control section 4023 for receiving various pieces of encoded information, an audio decoding section 4024 for decoding the above pieces of audio information, an audio synthesizing section (a sound synthesizing section) 4025 for synthesizing one decoded audio or more, and output means 4026 for outputting audio.
- Time-series data for audio or picture is specifically encoded or decoded by the above encoder or decoder.
- the communication environments in FIGS. 25 and 26 can be a communication environment in which a plurality of logical transmission lines can be used without considering multiplexing means like the case of internet or a communication environment in which multiplexing means must be considered like the case of an analog telephone or satellite broadcasting.
- a system for bilaterally transferring a picture or audio between terminals like a video telephone or video conference or a system for broadcasting a broadcasting-type picture or audio on satellite broadcasting, CATV, or internet is listed as a terminal connection system.
- a method for synthesizing a picture and audio can be defined by describing a picture and an audio, structural information for a picture and an audio (display position and display time), an audio-video grouping method, a picture display layer (depth), and an object ID (ID for identifying each object such as a picture or audio) and the relation between the attributes of them with a script language such as JAVA, VRML, or MHEG.
- a script describing a synthesizing method is obtained from a network or local memory.
- A transmitting or receiving terminal can be constituted by optionally combining an optional number of picture encoders, picture decoders, audio encoders, and audio decoders.
- FIG. 27 ( a ) is an illustration for explaining a priority adding section and a priority deciding section for controlling the priority for processing under overload.
- A priority adding section 31 for deciding the priority for processing encoded information under overload in accordance with predetermined criteria by an encoding method such as H.263 or G.723, and for relating the encoded information to the decided priority, is provided for the picture encoder 4101 and the audio encoder 4201.
- The criteria for adding a priority are a scene change in the case of a picture, an audio block and an audioless block in the case of an audio, and a picture frame or stream designated by an editor or user.
- a method for adding a priority to a communication header and a method for embedding a priority in the header of a bit stream to be encoded of a video or audio under encoding are considered as priority adding methods for defining a priority under overload.
- the former method makes it possible to obtain the information concerned with a priority without decoding the information and the latter method makes it possible to independently handle a single bit stream without depending on a system.
- a priority is added only to a communication header for transmitting the head of a picture frame accessible as single information in the case of a picture (when priorities are equal in the same picture frame, it is possible to assume that the priorities are not changed until the head of the next accessible picture frame appears).
- a priority deciding section 32 for deciding a processing method is provided for the picture decoder 4102 and audio decoder 4202 in accordance with the priorities of various pieces of encoded information received under overload.
- FIGS. 28 ( a ) to 28 ( c ) are illustrations for explaining the grading for adding a priority. Decoding is performed by using two types of priorities for deciding the priority for processing under overload at a terminal.
- a stream priority (Stream Priority; inter-time-series-data priority) for defining the priority for processing under overload in bit streams such as picture and audio and a frame priority (Frame Priority; intra-time-series-data priority) for defining the priority for processing under overload in frames such as picture frames in the same stream are defined (see FIG. 28 ( a )).
- the former stream priority makes it possible to handle a plurality of videos or audios.
- the latter frame priority makes it possible to add a different priority to a picture scene change or the same intra-frame encoded picture frame (I-frame) in accordance with the intention of an editor.
- The value expressed by the stream priority can be handled either as a relative value or as an absolute value (see FIGS. 28 ( b ) and 28 ( c )).
- the stream and frame priorities are handled by a repeating terminal such as a router or gateway on a network and by transmitting and receiving terminals in the case of a terminal.
- FIG. 28 ( b ) Two types of methods for expressing an absolute value or relative value are considered. One of them is the method shown in FIG. 28 ( b ) and the other of them is the method shown in FIG. 28 ( c ).
- the priority of an absolute value is a value showing the sequence in which picture streams (video streams) or audio streams added by an editor or mechanically added are processed (or to be processed) under overload (but not a value considering the load fluctuation of an actual network or terminal).
- the priority of a relative value is a value for changing the value of an absolute priority in accordance with the load of a terminal or network.
- In FIG. 28 ( b ), a frame priority, which has a finer grading than a stream priority and defines the priority for frame processing under overload, can be handled either as a relative priority value or as an absolute priority value.
- In FIG. 28(b), when data is reproduced at a receiving terminal while being transmitted through a network, and is not recorded at the receiving terminal, the values of the absolute and relative priorities at the frame and stream levels can be computed at the transmitting side and only absolute values transmitted thereafter, because the receiving terminal does not need to manage absolute and relative values separately.
- the priority of an absolute value is a value uniquely determined between frames obtained from the relation between Stream Priority and Frame Priority.
- the priority of a relative value is a value showing the sequence in which picture streams or audio streams added by an editor or mechanically added are processed (or to be processed) under overload.
- That is, the relative frame priority of each picture or audio stream and the stream priority of each stream are added.
- To derive the absolute frame priority, a subtracting method or a constant-multiplying method may also be used instead of addition.
- An absolute frame priority is used mainly on a network. The expression using an absolute value removes the need for a repeater such as a router or gateway to decide a priority for each frame from the Stream Priority and the Frame Priority. Using the absolute frame priority simplifies processing such as the disuse of a frame by a repeater.
- A relative frame priority applies mainly to an accumulation system that performs recording or editing.
- a plurality of picture and audio streams may be handled at the same time.
- the number of picture streams or the number of frames that can be reproduced may be limited depending on the load of a terminal or network.
- Whether the value expressed by the stream priority is a relative or an absolute value is expressed by using a flag or identifier.
- Alternatively, a flag or identifier is unnecessary when a relative value is described in a communication header and an absolute value is described in an encoded frame.
- a flag or identifier for identifying whether a frame priority is an absolute value or relative value is used.
- When the frame priority is an absolute priority calculated in accordance with a stream priority and a relative frame priority, the calculation need not be performed by a repeater or terminal.
- When the calculation formula is already known at a terminal, it is possible to inversely calculate the relative frame priority from the absolute frame priority and the stream priority.
- It is also possible to obtain the absolute priority (Access Unit Priority) of a packet to be transmitted from the relational expression: Access Unit Priority = Stream Priority − Frame Priority.
- The frame priority here can be called a degradation priority, because it is subtracted from the stream priority; a sketch of this calculation follows.
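- A minimal sketch of this relational expression (the numeric values are only examples):

```python
def access_unit_priority(stream_priority: int, frame_priority: int) -> int:
    """Absolute packet priority per the relational expression above; the frame
    priority acts as a degradation value subtracted from the stream priority."""
    return stream_priority - frame_priority

# Example: a stream priority of 10 with a degradation (frame) priority of 3
# yields an access-unit priority of 7.
assert access_unit_priority(10, 3) == 7
```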
- FIG. 29 is an illustration for explaining a method for assigning a priority to multi-resolution video data.
- one stream is constituted with a plurality of substreams
- The relation between picture streams is described with AND (logical product) and OR (logical sum).
- By describing the relation between streams, the handling under disuse is defined: ordinarily, stream data is disused depending on its priority; in the case of AND, however, a related stream B is transmitted and processed without being disused even if the priority of stream B is lower than the threshold.
- Thereby, relevant streams can be processed without being disused.
- In the case of OR, relevant streams can be disused independently. As before, disuse processing can be performed at a transmitting or receiving terminal or at a repeating terminal; a sketch of this decision follows.
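- One plausible reading of these AND/OR rules, sketched in Python (the function and parameter names are assumptions):

```python
def disuse(priority: int, threshold: int, relation: str = "OR",
           partner_disused: bool = False) -> bool:
    """One plausible reading of the AND/OR stream relations above: with AND,
    a stream is kept (not disused) as long as its partner survives, even when
    its own priority is below the threshold, and is disused together with the
    partner; with OR, it is judged on its own priority alone. Here a smaller
    number is taken to mean a lower priority."""
    if relation == "AND":
        return partner_disused
    return priority < threshold
```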
- FIG. 30 is an illustration for explaining a communication payload constituting method.
- Disuse at the transmission-packet level becomes easy by, for example, constructing transmission packets starting with the one having the highest priority, in accordance with the stream priority added to each substream.
- Disuse at the communication-packet level becomes easy by making the grading finer and uniting the information of objects having a high frame priority when constituting a communication packet.
- the sliced structure of a picture represents the unit of collected picture information such as GOB or MB.
- FIG. 31 is an illustration for explaining a method for relating data to communication payload.
- By using a method that relates a stream or object to a communication packet together with control information or data, it is possible to generate an arbitrary data format in accordance with the communication state or purpose.
- RTP (Real-time Transport Protocol) is one example.
- the payload of RTP is defined for each encoding to be handled.
- the format of the existing RTP is fixed.
- In the case of H.263, as shown in FIG. 31, three data formats, Mode A to Mode C, are defined.
- However, a communication payload intended for a multi-resolution picture format is not defined.
- FIG. 32 is an illustration for explaining the relation between frame priority, stream priority, and communication packet priority.
- FIG. 32 shows an example of using a priority added to a communication packet on a transmission line as a communication packet priority and relating a stream priority and a frame priority to the communication packet priority.
- priorities (4 bits) from 0 to 7 are reserved for congestion-controlled traffic.
- Priorities from 8 to 15 are reserved for real-time communication traffic or not-congestion-controlled traffic.
- Priority 15 is the highest and priority 8 the lowest of these; they represent the priority at the packet level of IP. A mapping sketch follows.
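- A hedged sketch of how an access-unit priority might be mapped onto this 4-bit IP priority field; the offset and clamping choices are assumptions, not part of the text:

```python
def ip_priority(stream_priority: int, frame_priority: int) -> int:
    """Illustrative mapping of an access-unit priority onto the 4-bit IP
    priority field described above: values 8..15 are for real-time
    (non-congestion-controlled) traffic, with 15 the highest."""
    p = stream_priority - frame_priority  # access-unit priority
    return max(8, min(15, 8 + p))
```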
- The transmitting means is not restricted to IP. It is possible to use a transmission packet having a flag showing whether it can be disused, such as ATM or the TS (transport stream) of MPEG2.
- the frame priority and stream priority having been described so far can be applied to a transmitting medium or data-recording medium. It is possible to use a floppy disk or optical disk as a data-recording medium.
- preferential retransmission is realized by deciding time-series data to be retransmitted in accordance with the information of Stream Priority (inter-time-series-data priority) or Frame Priority (intra-time-series-data priority). For example, when decoding is performed at a receiving terminal in accordance with priority information, it is possible to prevent a stream or frame that is not an object for processing from being retransmitted.
- Preferential transmission is realized by deciding the time-series data to be transmitted in accordance with the information of Stream Priority (inter-time-series-data priority) or Frame Priority (intra-time-series-data priority). For example, by deciding the priority of the stream or frame to be transmitted in accordance with an average transfer rate or retransmission frequency, pictures or audio can be transmitted adaptively even when the network is overloaded; a selection sketch follows.
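- A sketch of the selection step (field names and the convention that larger numbers mean higher priority are illustrative):

```python
def worth_retransmitting(units, stream_threshold, frame_threshold):
    """Select lost access units still worth retransmitting: anything whose
    stream or frame priority the receiver will no longer process is skipped,
    so data the decoder would ignore is never resent."""
    return [u for u in units
            if u["stream_priority"] >= stream_threshold
            and u["frame_priority"] >= frame_threshold]
```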
- the above embodiment is not restricted to two-dimensional-picture synthesis. It is also possible to use an expression method obtained by combining a two-dimensional picture with a three-dimensional picture or include a picture-synthesizing method for synthesizing a plurality of pictures so as to be adjacent to each other like a wide-visual-field picture (panorama picture).
- Communication systems addressed by the present invention are not restricted to bidirectional CATV or B-ISDN. For example, transmission of pictures and audio from a center-side terminal to a house-side terminal can use radio waves (e.g. the VHF or UHF band) or satellite broadcasting, and information origination from the house-side terminal to the center-side terminal can use an analog telephone line or N-ISDN (it is not always necessary that pictures, audio, or data be multiplexed).
- It is also possible to use radio such as IrDA, PHS (Personal Handy Phone), or a radio LAN.
- The intended terminal can be a portable terminal such as a portable information terminal, or a desktop terminal such as a set-top box or personal computer.
- As described above, the present invention makes it easy to handle a plurality of video streams and a plurality of audio streams, and to preferentially reproduce an important scene cut synchronously with audio, reflecting the intention of an editor.
- FIG. 33 shows the structure of the transmitter of the first embodiment.
- Symbol 2101 denotes a picture-input terminal; the size of one picture is 144 by 176 pixels.
- Symbol 2102 denotes a video encoder that is constituted with four components 1021 , 1022 , 1023 , and 1024 (see Recommendation H.261).
- Symbol 1021 denotes a switching unit for dividing an input picture into macroblocks (square regions of 16 by 16 pixels) and deciding whether to intra-encode or inter-encode each block. Symbol 1022 denotes movement compensating means for generating a movement-compensating picture in accordance with the local decoded picture (which can be calculated from the last encoding result), calculating the difference between the movement-compensating picture and the input picture, and outputting the result in macroblocks. Movement compensation includes halfpixel prediction, which has a long processing time, and fullpixel prediction, which has a short processing time.
- Symbol 1023 denotes orthogonal transforming means for applying DCT transformation to each macroblock and 1024 denotes variable-length-encoding means for applying entropy encoding to the DCT transformation result and other encoded information.
- Symbol 2103 denotes counting means for counting the execution frequencies of the four components of the video encoder 2102 and outputting the counting result to the transforming means for every input picture. Here, the execution frequency of halfpixel prediction and that of fullpixel prediction are counted from the movement compensating means 1022.
- Symbol 2104 denotes transforming means for outputting the data string shown in FIG. 34 .
- Symbol 2105 denotes transmitting means for multiplexing the variable-length code sent from the video encoder 2102 and the data string sent from the transforming means 2104 into one data string and outputting it to a data output terminal 2109; a counting sketch follows.
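- A sketch of what the counting means 2103 and transforming means 2104 together produce for one picture, assuming a simple macroblock representation:

```python
from collections import Counter

def count_prediction_frequencies(macroblocks):
    """Sketch of what the counting means 2103 tallies for one input picture:
    how many macroblocks used the long halfpixel prediction and how many used
    the short fullpixel prediction. The macroblock representation (a dict with
    a 'halfpixel' flag) is an assumption for illustration."""
    counts = Counter(("halfpixel" if mb["halfpixel"] else "fullpixel")
                     for mb in macroblocks)
    # The transforming means 2104 would serialize these counts into the data
    # string that is multiplexed with the variable-length code.
    return counts["halfpixel"], counts["fullpixel"]
```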
- FIG. 40 is a flowchart of the transmitting method of the second embodiment.
- a picture is input in step 801 (picture input terminal 2101 ) and the picture is divided into macroblocks in step 802 .
- processings from step 803 to step 806 are repeated until the processing corresponding to every macroblock is completed in accordance with the conditional branch in step 807 .
- Each time one of these processings is executed, a corresponding variable is incremented by 1.
- In step 803, it is decided whether to intra-encode or inter-encode the macroblock (switching unit 1021).
- movement compensation is performed in step 804 (movement compensating means 1022 ).
- DCT transformation and variable-length encoding are performed in steps 805 and 806 (orthogonal transforming means 1023 and variable-length encoding means 1024 ).
- In step 808, the variable showing the execution frequency corresponding to each processing is read, the data string shown in FIG. 2 is generated, and the data string and the code are multiplexed and output.
- the processings from step 801 to step 808 are repeatedly executed as long as input pictures are continued.
- the above structure makes it possible to transmit the execution frequency of each processing.
- FIG. 35 shows the structure of the receiver of the third embodiment.
- Symbol 307 denotes an input terminal for inputting the output of the transmitter of the first embodiment, and 301 denotes receiving means for fetching a variable-length code and a data string from that output through inverse multiplexing and outputting them.
- the time required to receive the data for one sheet is measured and also output.
- Symbol 303 denotes a decoder for a video using a variable-length code as an input, which is constituted with five components.
- Symbol 3031 denotes variable-length decoding means for fetching a DCT coefficient and other encoded information from a variable-length code
- 3032 denotes inverse orthogonal transforming means for applying inverse DCT transformation to a DCT coefficient
- 3033 denotes a switching unit for switching an output to upside or downside every macroblock in accordance with the encoded information showing whether the macroblock is intra-encoded or inter-encoded.
- Symbol 3034 denotes movement compensating means for generating a movement-compensating picture by using the last decoded picture and the encoded movement information, adding it to the output of the inverse orthogonal transforming means 3032, and outputting the result.
- Symbol 3035 denotes execution-time measuring means for measuring and outputting the execution time until decoding and outputting of a picture is completed after a variable-length code is input to the decoder 303 .
- Symbol 302 denotes estimating means for receiving the execution frequency of each element (variable-length decoding means 3031 , inverse orthogonal transforming means 3032 , switching unit 3033 , or movement compensating means 3034 ) from a data string sent from the receiving means 301 and execution time from the execution-time measuring means 3035 to estimate the execution time of each element.
- Symbol 304 denotes frequency reducing means for changing the execution frequency of each element so as to reduce the execution frequency of halfpixel prediction and increase that of fullpixel prediction by a corresponding amount. The method for calculating that amount is shown below.
- the execution frequency and estimated execution time of each element are received from the estimating means 302 to estimate an execution time.
- When the estimated execution time exceeds the time required to receive the data, obtained from the receiving means 301, the execution frequency of fullpixel prediction is increased and that of halfpixel prediction is decreased until the former time no longer exceeds the latter; a sketch follows.
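- A sketch of this reduction, assuming a simple linear time model over the estimated per-element times:

```python
def reduce_halfpixel(n_half, n_full, t_half, t_full, t_other, deadline):
    """Sketch of the frequency reducing means 304: demote halfpixel
    predictions to fullpixel until the estimated decoding time
    (n_half*t_half + n_full*t_full + t_other) fits the time needed to
    receive the data for one sheet. All per-element times are estimates."""
    while n_half > 0 and n_half * t_half + n_full * t_full + t_other > deadline:
        n_half -= 1  # round one halfpixel movement to a fullpixel movement
        n_full += 1
    return n_half, n_full
```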
- Symbol 306 denotes an output terminal for a decoded picture.
- the movement compensating means 3034 is designated so as to perform halfpixel prediction in accordance with encoded information.
- When the predetermined execution frequency of halfpixel prediction is exceeded, a halfpixel movement is rounded to a fullpixel movement and fullpixel prediction is executed.
- the execution time of decoding is estimated in accordance with the estimated execution time of each element and, when the decoding execution time may exceed the time (designated time) required to receive the data for one sheet, halfpixel prediction having a long execution time is replaced with fullpixel prediction. Thereby, it is possible to prevent an execution time from exceeding a designated time and solve the problem (C1).
- FIG. 41 is a flowchart of the receiving method of the fourth embodiment.
- In step 901, the variable a_i expressing the execution time of each element is initialized (estimating means 302).
- In step 902, multiplexed data is input and the time required to receive the data is measured (receiving means 301).
- In step 903, the multiplexed data is divided into a variable-length code and a data string and output (receiving means 301).
- Each execution frequency is fetched from the data string (FIG. 2) and set to x_i.
- In step 905, an actual execution frequency is calculated in accordance with the execution time a_i of each element and each execution frequency x_i (frequency reducing means 304).
- In step 906, measurement of the execution time for decoding is started.
- In step 907, a decoding routine described later is started.
- In step 908, measurement of the decoding execution time is ended (video decoder 303 and execution-time measuring means 3035).
- In step 909, the execution time of each element is estimated in accordance with the decoding execution time measured in step 908 and the actual execution frequency of each element from step 905, and a_i is updated (estimating means 302); one plausible update rule is sketched below. The above processing is executed for every input of multiplexed data.
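- One plausible update rule for the estimates a_i (the normalized-gradient form is an assumption; regression over several pictures would also fit the description):

```python
def update_times(a, x, measured, rate=0.1):
    """Nudge the per-element time estimates a_i so that the predicted
    decoding time sum(a_i * x_i) moves toward the measured time."""
    predicted = sum(ai * xi for ai, xi in zip(a, x))
    norm = sum(xi * xi for xi in x) or 1.0
    err = measured - predicted
    return [ai + rate * err * xi / norm for ai, xi in zip(a, x)]
```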
- In the decoding routine started in step 907, variable-length decoding is performed in step 910 (variable-length decoding means 3031), inverse orthogonal transformation is performed in step 911 (inverse orthogonal transforming means 3032), and processing branches in step 912 in accordance with the intra-/inter-information fetched in step 910 (switching unit 3033).
- movement compensation is performed in step 913 (movement compensating means 3034 ).
- In step 913, the execution frequency of halfpixel prediction is counted. When the counted frequency exceeds the actual execution frequency obtained in step 905, halfpixel prediction is replaced with fullpixel prediction for execution.
- the execution time of decoding is estimated in accordance with the estimated execution time of each element and, when the execution time may exceed the time required to receive the data for one sheet (designated time), halfpixel prediction having a long execution time is replaced with fullpixel prediction. Thereby, it is possible to prevent an execution time from exceeding a designated time and solve the problem (C1).
- FIG. 36 shows the structure of the receiver of the fifth embodiment.
- Symbol 402 denotes estimating means obtained by modifying the estimating means 302 described above so as to output the execution time of each element obtained as the result of estimation, separately from the output to the frequency reducing means 304.
- Symbol 408 denotes transmitting means for generating the data string shown in FIG. 37 from the execution time of each element and outputting it. When an execution time is expressed with 16 bits in units of microseconds, values up to approximately 65 msec can be expressed, which is sufficient.
- Symbol 409 denotes an output terminal for transmitting the data string to transmitting means.
- A receiving method corresponding to the fifth embodiment can be obtained simply by adding a step for generating the data string shown in FIG. 37 immediately after step 808 in FIG. 40.
- FIG. 38 shows the structure of the transmitter of the sixth embodiment.
- Symbol 606 denotes an input terminal for receiving a data string output by the receiver of the third embodiment and 607 denotes receiving means for receiving the data string and outputting the execution time of each element.
- Symbol 608 denotes deciding means for obtaining the execution frequency of each element and its obtaining procedure is described below.
- Every macroblock in a picture is processed by the switching unit 1021 to obtain the execution frequency of the switching unit 1021 at this point of time.
- the execution time required for decoding at the receiver side is estimated by using these execution frequencies and the execution time sent from the receiving means 607 .
- the estimated decoding time is obtained as the total sum of the product between the execution time and execution frequency of each element every element.
- When the estimated decoding time is equal to or more than the time required to transmit the number of codes to be generated for this picture as designated by a rate controller or the like (e.g. 250 msec for 16 Kbits at a transmission rate of 64 Kbits/sec), the execution frequency of fullpixel prediction is increased and that of halfpixel prediction is decreased so that the estimated decoding time does not exceed the time required for transmission.
- Because fullpixel prediction has a shorter execution time than halfpixel prediction, increasing the proportion of fullpixel prediction reduces the total execution time.
- the video encoder 2102 performs various processings in accordance with the execution frequency designated by the deciding means 608 . For example, after the movement compensating means 1022 executes halfpixel prediction by the predetermined execution frequency of halfpixel prediction, it executes only fullpixel prediction.
- In this case, halfpixel prediction is dispersed uniformly in the picture; one way to do so is sketched below.
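- One simple way to disperse the halfpixel quota uniformly (the slotting scheme is an assumption):

```python
def halfpixel_slots(n_macroblocks, quota):
    """Spread the allowed halfpixel predictions uniformly over the picture's
    macroblocks (quota <= n_macroblocks assumed); the remaining macroblocks
    fall back to fullpixel prediction."""
    if quota <= 0:
        return set()
    step = n_macroblocks / quota
    return {int(i * step) for i in range(quota)}
```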
- the execution time of each estimated element is transmitted to the transmitting side, the execution time of decoding is estimated at the transmitting side, and halfpixel prediction having a long execution time is replaced with fullpixel prediction so that the estimated decoding execution time does not exceed the time (designated time) probably required to receive the data for one sheet.
- the information for halfpixel prediction among the sent encoded information is not disused and thereby, it is possible to prevent an execution time from exceeding a designated time and solve the problem (C2).
- It is also possible to divide inter-macroblock encoding into three kinds of movement compensation: normal movement compensation, 8×8 movement compensation, and overlap movement compensation.
- FIG. 42 is a flowchart of the transmitting method of the seventh embodiment.
- In step 1001, the initial value of the execution time of each processing is set.
- a picture is input (input terminal 2101 ) in step 801 and it is divided into macroblocks in step 802 .
- In step 1002, it is decided whether to intra-encode or inter-encode every macroblock (switching unit 1021). As a result, the execution frequency of each processing from step 1005 to step 806 is known. Therefore, in step 1003, an actual execution frequency is calculated in accordance with that execution frequency and the execution time of each processing (deciding means 608).
- Steps 1005 to 806 are repeated until the processing for every macroblock is completed, in accordance with the conditional branch in step 807.
- In step 1005, branching is performed in accordance with the decision result of step 1002 (switching unit 1021).
- movement compensation is performed in step 804 (movement compensating means 1022 ).
- In step 804, the frequency of halfpixel prediction is counted; when it exceeds the actual execution frequency calculated in step 1003, fullpixel prediction is executed instead of halfpixel prediction.
- In steps 805 and 806, DCT transformation and variable-length encoding are performed (orthogonal transforming means 1023 and variable-length encoding means 1024).
- the variable showing the execution frequency corresponding to each processing is read in step 808 , the data string shown in FIG. 2 is generated, and the data string and a code are multiplexed and output.
- In step 1004, the data string is received, and the execution time of each processing is fetched from the data string and set.
- Processings from step 801 to step 1004 are repeatedly executed as long as pictures are input.
- the estimated execution time of each element is transmitted to the transmitting side, the execution time of decoding is estimated at the transmitting side, and halfpixel prediction having a long execution time is replaced with fullpixel prediction so that the estimated decoding execution time does not exceed the time (designated time) probably required to receive the data for one sheet.
- the information for halfpixel prediction among the sent encoded information is not disused and it is possible to prevent the execution time from exceeding the designated time and solve the problem (C2).
- FIG. 39 shows the structure of the transmitting apparatus of the eighth embodiment of the present invention.
- Symbol 7010 denotes execution-time measuring means for measuring the execution time until encoding and outputting of a picture are completed after the picture is input to an encoder 2102 and outputting the measured execution time.
- Symbol 706 denotes estimating means for receiving the execution frequencies of the elements (switching unit 1021, movement compensating means 1022, orthogonal transforming means 1023, and variable-length encoding means 1024) in a data string from the counting means 2103, and the execution time from the execution-time measuring means 7010, and estimating the execution time of each element. The same estimating method as described for the estimating means 302 can be used.
- Symbol 707 denotes an input terminal for inputting a frame rate value sent from a user and 708 denotes deciding means for obtaining the execution frequency of each element. The obtaining procedure is described below.
- every macroblock in a picture is processed by the switching unit 1021 to obtain the execution frequency of the switching unit 1021 at this point of time. Thereafter, it is possible to uniquely decide execution frequencies by the movement compensating means 1022 , orthogonal transforming means 1023 , and variable-length encoding means 1024 in accordance with the processing result up to this point of time. Then, the total sum of products between the execution frequency and the estimated execution time of each element sent from the estimating means 706 is obtained every element to calculate an estimated encoding time.
- When the estimated encoding time exceeds the time usable for encoding one picture as determined by the frame rate, the execution frequency of fullpixel prediction is increased and that of halfpixel prediction is decreased.
- Each execution frequency is decided in this way; a sketch follows.
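- A sketch of this decision, assuming a per-picture time budget of 1/frame-rate and a linear time model:

```python
def decide_frequencies(n_half, n_full, a_half, a_full, a_other, frame_rate):
    """Sketch for the deciding means: fit the estimated encoding time into
    the per-picture budget implied by the requested frame rate. The budget
    of 1/frame_rate seconds and the three-term time model are assumptions."""
    budget = 1.0 / frame_rate
    while n_half > 0 and n_half * a_half + n_full * a_full + a_other > budget:
        n_half -= 1
        n_full += 1
    return n_half, n_full
```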
- The video encoder 2102 performs the various processings in accordance with the execution frequencies designated by the deciding means 708. For example, after the movement compensating means 1022 executes halfpixel prediction up to the predetermined execution frequency of halfpixel prediction, it executes only fullpixel prediction.
- halfpixel prediction is uniformly dispersed in a picture.
- the above eighth embodiment makes it possible to solve the problem (C3) by estimating the execution time of each processing, estimating an execution time required for encoding in accordance with the estimated execution time, and deciding an execution frequency so that the estimated encoding time becomes equal to or shorter than the time usable for encoding of a picture determined in accordance with a frame rate.
- When the movement compensating means 1022 detects a movement vector, one option is a full-search movement-vector detecting method, which detects the vector minimizing the SAD (the sum of the absolute values of pixel-by-pixel differences) among the vectors in a range of 15 horizontal and vertical pixels.
- Another option is a three-step movement-vector detecting method (described in an annex of H.261). This method selects nine points uniformly arranged in the above retrieval range and picks the point with the minimum SAD, then selects nine points again in a narrower range close to that point and picks the minimum-SAD point one more time; a sketch follows.
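- A compact sketch of the three-step search (step sizes are the classic 4, 2, 1 choice and are assumptions here):

```python
def three_step_search(sad, step=4):
    """Evaluate nine points around the current best and halve the step
    (4, 2, 1 covers roughly +/-7; the text's range of 15 would start from a
    larger step). `sad(dx, dy)` returns the SAD of a candidate displacement."""
    best = (0, 0)
    while step >= 1:
        candidates = [(best[0] + i * step, best[1] + j * step)
                      for i in (-1, 0, 1) for j in (-1, 0, 1)]
        best = min(candidates, key=lambda v: sad(*v))
        step //= 2
    return best
```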
- FIG. 43 is a flowchart of the transmitting method of the ninth embodiment.
- In step 1101, the initial value of the execution time of each processing is set in a variable a_i.
- In step 1102, a frame rate is input (input terminal 707).
- In step 1103, an actual execution frequency is decided in accordance with the frame rate input in step 1102, the execution time a_i of each processing, and the execution frequency of each processing obtained from the intra-/inter-decision result in step 1002 (deciding means 708).
- In steps 1105 and 1106, the execution time of encoding is measured.
- In step 1104, the execution time of each processing is estimated in accordance with the execution time obtained in step 1106 and the actual execution frequency of each processing, and the variable a_i is updated (estimating means 706).
- In this way, the execution time of each processing is estimated, and the execution time required for encoding is estimated in advance from the result.
- Problem (C3) can thus be solved by deciding an actual execution frequency so that the estimated encoding time becomes equal to or shorter than the time usable for encoding one picture as determined by the frame rate.
- In the case of the fourth embodiment, it is also possible to extract a code length from the two-byte region when multiplexed data is input in step 902, and to use the code transmission time obtained from the code length and the code transmission rate for the execution frequency calculation in step 905 (the execution frequency of halfpixel prediction is decreased so as not to exceed the code transmission time).
- In the case of the third embodiment, it is likewise possible to extract a code length from the two-byte region when multiplexed data is input to the receiving means 301, and to use the code transmission time obtained from the code length and the code transmission rate for the execution frequency calculation in the frequency reducing means 304 (the execution frequency of halfpixel prediction is decreased so as not to exceed the code transmission time).
- It is also possible to record the actual execution frequency of halfpixel prediction immediately after step 909 and calculate its maximum value; when the maximum value is equal to or less than a small-enough value (e.g. 2 or 3), a data string showing that halfpixel prediction is not used can be generated and transmitted.
- The above concept can be applied to cases other than movement compensation. For example, the DCT calculation time can be reduced by using no high-frequency components in the DCT calculation. That is, in the case of a receiving method, when the ratio of the IDCT-calculation execution time to the entire execution time exceeds a certain value, a data string showing that the ratio exceeds that value is transmitted to the transmitting side. When the transmitting side receives the data string, it calculates only low-frequency components in the DCT and sets all high-frequency components to zero; a sketch follows.
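- A sketch of the masking described above (a real implementation would skip the high-frequency arithmetic rather than compute and zero it):

```python
import numpy as np

def dct8_lowfreq(block: np.ndarray, keep: int = 4) -> np.ndarray:
    """Compute an 8x8 orthonormal DCT-II and zero every coefficient outside
    the top-left keep x keep corner, illustrating the low-frequency-only
    DCT calculation; `keep` is an assumed cutoff."""
    n = 8
    u = np.arange(n)
    basis = np.cos((2 * u[None, :] + 1) * u[:, None] * np.pi / (2 * n))
    scale = np.full(n, np.sqrt(2.0 / n)); scale[0] = np.sqrt(1.0 / n)
    d = basis * scale[:, None]          # rows are DCT basis vectors
    coeffs = d @ block @ d.T
    coeffs[keep:, :] = 0.0
    coeffs[:, keep:] = 0.0
    return coeffs
```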
- Similarly, the actual execution frequency of halfpixel prediction can be recorded in the movement compensating means 3034 to calculate a maximum execution frequency. When the maximum value is equal to or less than a small-enough value (e.g. 2 or 3), it is possible to generate and transmit a data string showing that halfpixel prediction is not used (a data string comprising a specific bit pattern). Furthermore, in the case of the first embodiment, when such a data string is received, the movement compensation processing of the movement compensating means 1022 can always be made to serve as fullpixel prediction.
- This concept, too, can be applied to cases other than movement compensation: when the ratio of the IDCT-calculation execution time to the entire execution time exceeds a certain value, a data string showing this is transmitted to the transmitting side, and when the transmitting side receives it, only low-frequency components are calculated in the DCT and all high-frequency components are reduced to zero.
- the execution time of decoding is estimated in accordance with the estimated execution time of each element and, when the estimated decoding execution time may exceed the time (designated time) required to receive the data for one sheet, halfpixel prediction having a long execution time is replaced with fullpixel prediction. Thereby, it is possible to prevent the execution time from exceeding the designated time and solve the problem (C1).
- the estimated execution time of each element is transmitted to the transmitting side, the execution time of decoding is estimated at the transmitting side, and halfpixel prediction having a long execution time is replaced with fullpixel prediction so that the estimated decoding time does not exceed the time (designated time) probably required to receive the data for one sheet.
- the information for halfpixel prediction in the sent encoded information is not disused and it is possible to prevent the execution time from exceeding the designated time and solve the problem (C2).
- In the case of the ninth embodiment, problem (C3) can be solved by estimating the execution time of each processing, estimating from it the execution time required for encoding, and deciding an execution frequency so that the estimated encoding time becomes equal to or less than the time usable for encoding one picture as decided by the frame rate.
- As described above, the present invention makes it possible to realize a function (CGD: Computational Graceful Degradation) for gracefully degrading quality even if the computational load increases, which is a very large advantage.
- the present invention makes it possible to change information frames correspondingly to the situation, purpose, or transmission line by dynamically deciding the frames of data control information, transmission control information, and control information used for transmitting and receiving terminals. Moreover, it is easy to handle a plurality of video streams or a plurality of audio streams and mainly reproducing an important scene cut synchronously with audio by reflecting the intention of an editor. Furthermore, it is possible to prevent an execution time from exceeding a designated time by estimating the execution time of decoding in accordance with the execution time of each estimated element and replacing halfpixel prediction having a long execution time with fullpixel prediction when the estimated decoding execution time may exceed the time (designated time) required to receive the data for one sheet.
Abstract
A video synthesizing apparatus comprises a reception control section 11 for receiving information including data and its transmission format information from a memory or communication channel, a separating section 12 for analyzing and separating the received information, a transmitting section 13 for transmitting information to a memory or transmission channel, a video extending section 14 for extending at least one video, a video-extension control section 15 for controlling the processing state of the video extending section 14, a video synthesizing section 16 for synthesizing videos in accordance with the extended information, an output section 17 for outputting the synthesized result, and a terminal control section 18 for controlling the above means. This structure makes it possible to synthesize a plurality of videos at the same time and to respond to dynamic changes of transmission format information.
Description
- This application is a continuation of U.S. patent application Ser. No. 10/626,271 filed Jul. 24, 2003 which is a divisional of U.S. patent application Ser. No. 09/194,008, filed Mar. 17, 1999, which is a U.S. National Phase Application of PCT International Application PCT/JP98/01084, now U.S. Pat. No. 6,674,477, the contents of which are incorporated by reference.
- The present invention relates to audio-video transmitter and audio-video receiver, data-processing apparatus and method, waveform-data-transmitting method and apparatus and waveform-data-receiving method and apparatus, and video-transmitting method and apparatus and video-receiving method and apparatus.
- There has been an apparatus which satisfies the sense of real existence that a counterpart is present in front of you and aims at realistic picture communication by extracting, for example, a person's picture out of the scenery picture of a space in which you are present and superimposing the person's picture, a person's picture sent from the counterpart, and the picture of a virtual space to be displayed commonly with a previously-stored counterpart on each other and displaying them (Japanese Patent Publication No. 4-24914).
- In particular, the prior art includes inventions concerned with accelerating picture synthesis and with reducing the memory required (e.g. official gazette of Japanese Patent Publication No. 5-46592: Picture synthesizer).
- Though communication systems using picture synthesis to combine two-dimensional static pictures or three-dimensional CG data have been proposed in the prior art, a method for realizing a system that simultaneously synthesizes and displays a plurality of videos (pictures) and a plurality of audios has not been specifically discussed from the following viewpoints.
- That is, there has been a problem that no specific discussion has been performed from the following viewpoints:
- (A1) a method for transmitting (communicating and broadcasting) and controlling pictures and audio under the environment in which data and control information (information transmitted by a packet different from that of data to control the processing of terminal side) are independently transmitted by using a plurality of logical transmission lines constructed by software on one real transmission line or more;
- (A2) a method for dynamically changing header information (corresponding to data control information of the present invention) to be added to data for a picture or audio to be transmitted;
- (A3) a method for dynamically changing header information (corresponding to transmission control information of the present invention) to be added for transmission;
- (A4) a method for transmitting information by dynamically multiplexing and separating a plurality of logical transmission lines;
- (A5) a method for transmitting pictures and audio considering the read and rise periods of program or data; and
- (A6) a method for transmitting pictures and audio considering zapping.
- However, a method for changing encoding systems and a method for discarding data in frames in accordance with the frame type of a picture have been proposed as methods for dynamically adjusting the amount of data transmitted to a network (H. Jinzenji and T. Tajiri, A study of distributive-adaptive-type VOD system, D-81, System Society of Institute of Electronics, Information and Communication Engineers (IEICE) (1995)).
- A dynamic throughput scalable algorithm capable of providing a high-quality video under a restricted processing time is proposed as a method for adjusting throughput at the encoder side (T. Osako, Y. Yajima, H. Kodera, H. Watanabe, K. Shimamura: Encoding of software video using a dynamic throughput scalable algorithm, Thesis Journal of IEICE, D-2, Vol. 80-D-2, No. 2, pp. 444-458 (1997)).
- Moreover, there is an MPEG1/MPEG2 system as an example of realizing synchronous reproduction of video and audio.
- (B1) The conventional method of discarding a picture according to the frame type of the video has a problem: because the grading of the information that can be handled exists only within a single stream, it is difficult to preferentially reproduce an important scene cut synchronously with audio by handling a plurality of video streams or a plurality of audio streams and reflecting the intention of an editor.
- (B2) Moreover, because it is a prerequisite that MPEG1/MPEG2 is realized by hardware, a decoder must be able to decode every supplied bit stream. Therefore, how to cope with a bit stream exceeding the throughput of the decoder is a problem.
- Moreover, to transmit video there have been systems such as H.261 (ITU-T Recommendation H.261, Video codec for audiovisual services at p×64 kbit/s), and they have been implemented in hardware. Therefore, because the upper limit of the necessary performance is considered when designing the hardware, the case that decoding is not completed within a designated time has not occurred.
- The above-designated time denotes a time required to transmit a bit stream obtained by coding a sheet of video. If decoding is not completed within the time, an extra time becomes a delay. If the delay is accumulated, the delay from the transmitting side to the receiving side increases and the system cannot be used as a video telephone. This state must be avoided.
- Moreover, when decoding cannot be completed within a designated time because a communication counterpart generates an out-of-spec bit stream, a problem occurs that a video cannot be transmitted.
- The above problem occurs not only for a video but also for audio data.
- In recent years, however, as the network environment formed by personal computers (PCs) has been put in place with the spread of the Internet and ISDN, transmission rates have improved and it has become possible to transmit video by using PCs and a network. Moreover, user demand for video transmission has increased rapidly. Furthermore, a video can now be completely decoded by software because CPU performance has improved.
- However, because the same software can be executed by personal computers different in structure such as a CPU, bus width, or accelerator, it is difficult to previously consider the upper limit of a necessary performance and therefore, a problem occurs that a picture cannot be decoded within a designated time.
- Moreover, when coded data for a video whose length exceeds the throughput of a receiver is transmitted, decoding cannot be completed within a designated time.
- Problem (C1): Decreasing a delay by decoding a picture within a designated time.
- When inputting or outputting a video as the waveform data of the present invention as means for solving problem (C1), a problem may remain that the substantial working efficiency of a transmission line is lowered because a part of the transmitted bit stream is not used. Moreover, some coding systems generate the present decoded picture in accordance with the last decoded picture (e.g. a P-picture). Because the last decoded picture is not completely restored by the means for solving problem (C1), deterioration of the picture quality increases progressively as time passes.
- Problem (C2): In the case of the means for solving problem (C1), the substantial working efficiency of a transmission line is lowered, and picture-quality deterioration spreads.
- Furthermore, in the case of implementation by software, the frame rate of a picture is determined by the time required for one round of coding. Therefore, when the frame rate designated by a user exceeds the throughput of the computer, the designation cannot be met.
- Problem (C3): When the frame rate designated by a user exceeds the throughput of a computer, the designation cannot be met.
- When considering the problems (A1) to (A6), it is an object of the present invention to provide an audio-video transmitter and audio-video receiver and data-processing apparatus and method in order to solve at least any one of the problems.
- Moreover, when considering the problems (B1) and (B2), it is another object of the present invention to provide data-processing apparatus and method in order to solve at least one of the problems.
- Furthermore, when considering the problems (C1) to (C3), it is still another object of the present invention to provide waveform-data-receiving method and apparatus and waveform-data-transmitting method and apparatus, and video-transmitting method and apparatus and video-receiving method and apparatus in order to solve at least one of the problems.
- The present invention is an audio-video transmitting apparatus comprising transmitting means for transmitting the content concerned with a transmitting method and/or the structure of data to be transmitted or an identifier showing the content as transmission format information through a transmission line same as that of the data to be transmitted or a transmission line different from the data transmission line; wherein
- said data to be transmitted is video data and/or audio data.
- One aspect of the present invention is the audio-video transmitting apparatus, wherein said transmission format information is included in at least one of data control information added to said data to control said data, transmission control information added to said data to transmit said data, and information for controlling the processing of the terminal side.
- Another aspect of the present invention is the audio-video transmitting apparatus, wherein at least one of said data control information, transmission control information, and information for controlling the processing of said terminal side is dynamically changed.
- Still another aspect of the present invention is the audio-video transmitting apparatus, wherein said data is divided into a plurality of packets, and said data control information or said transmission control information is added not only to the head packet of said divided packets but also to a middle packet of them.
- Yet another aspect of the present invention is the audio-video transmitting apparatus, wherein an identifier showing whether to use timing information concerned with said data as information showing the reproducing time of said data is included in said transmission format information.
- Still yet another aspect of the present invention is the audio-video transmitting apparatus, wherein said transmission format information is the structural information of said data and a signal which is output from a receiving apparatus receiving the transmitted structural information of said data and which can be received is confirmed and thereafter, said transmitting means transmits corresponding data to said receiving apparatus.
- A further aspect of the present invention is the audio-video transmitting apparatus, wherein said transmission format information includes (1) an identifier for identifying a program or data to be used by a receiving apparatus later and (2) at least one of a flag, counter, and timer as information for knowing the point of time at which said program or data is used or the term of validity for using said program or data.
- Still a further aspect of the present invention is the audio-video transmitting apparatus, wherein said point of time in which said program or data is used is transmitted as transmission control information by using a transmission serial number for identifying a transmission sequence or as information to be transmitted by a packet different from that of data to control terminal-side processing.
- Still yet a further aspect of the present invention is the audio-video transmitting apparatus, wherein storing means for storing a plurality of contents concerned with said transmitting method and/or said structure of data to be transmitted and a plurality of its identifiers are included, and said identifier is included in at least one of said data control information, transmission control information, and information for controlling terminal-side processing as said transmission format information.
- Another aspect of the present invention is the audio-video transmitting apparatus, wherein storing means for storing a plurality of contents concerned with said transmitting method and/or said structure of data to be transmitted are included, and said contents are included in at least one of said data control information, transmission control information, and information for controlling terminal-side processing as said transmission format information.
- Still another aspect of the present invention is the audio-video transmitting apparatus, wherein a default identifier showing whether to change the contents concerned with said transmitting method and/or structure of data to be transmitted is added.
- Still yet another aspect of the present invention is the audio-video transmitting apparatus, wherein said identifier or said default identifier is added to a predetermined fixed-length region of information to be transmitted or said predetermined position.
- A further aspect of the present invention is an audio-video receiving apparatus comprising: receiving means for receiving said transmission format information transmitted from the audio-video transmitting apparatus; and transmitted-information interpreting means for interpreting said received transmission-format information.
- A still further aspect of the present invention is the audio-video receiving apparatus, wherein storing means for storing a plurality of contents concerned with said transmitting method and/or said structure of data to be transmitted and a plurality of its identifiers are included, and the contents stored in said storing means are used to interpret said transmission format information.
- A still yet further aspect of the present invention is an audio-video transmitting apparatus comprising: information multiplexing means for controlling start and end of multiplexing the information for a plurality of logical transmission lines for transmitting data and/or control information is included; wherein, not only said data and/or control information multiplexed by said information multiplexing means but also control contents concerned with start and end of said multiplexing by said information multiplexing means are transmitted as multiplexing control information, and said data includes video data and/or audio data.
- Another aspect of the present invention is the audio-video transmitting apparatus wherein it is possible to select whether to transmit said multiplexing control information by arranging said information without multiplexing it before said data and/or control information or transmit said multiplexing control information through a transmission line different from the transmission line for transmitting said data and/or control information.
- Yet another aspect of the present invention is an audio-video receiving apparatus comprising: main looking-listening means for looking at and listening to a broadcast program; and auxiliary looking-listening means for cyclically detecting the state of a broadcast program other than the broadcast program looked and listened through said main looking-listening means; wherein said detection is performed so that a program and/or data necessary when said broadcast program looked and listened through said main looking-listening means is switched to other broadcast program can be smoothly processed, and said data includes video data and/or audio data.
- Still yet another aspect of the present invention is the audio-video transmitting apparatus, wherein priority values can be changed in accordance with the situation by transmitting the offset value of information showing the priority for processing of said data.
- A further aspect of the present invention is an audio-video receiving apparatus comprising: receiving means for receiving encoded information to which the information concerned with the priority for processing under an overload state is previously added; and priority deciding means for deciding a threshold serving as a criterion for selecting whether to process an object in said information received by said receiving means; wherein
- the timing for outputting said received information is compared with the elapsed time after start of processing or the timing for decoding said received information is compared with the elapsed time after start of processing to change said threshold in accordance with the comparison result, and video data and/or audio data are or is included as said encoding object.
- A still further aspect of the present invention is the audio-video transmitting apparatus, wherein retransmission-request-priority deciding means for deciding a threshold serving as a criterion for selecting whether to request retransmission of some of said information not received because it is lost under transmission when it is necessary to retransmit said information is included, and
- said decided threshold is decided in accordance with at least one of the priority controlled by said priority deciding means, retransmission frequency, lost factor of information, insertion interval between in-frame-encoded frames, and grading of priority.
- A yet further aspect of the present invention is an audio-video transmitting apparatus comprising: retransmission-priority deciding means for deciding a threshold serving as a criterion for selecting whether to request retransmission of some of said information not received because it is lost under transmission when retransmission of said unreceived information is requested is included, wherein said decided threshold is decided in accordance with at least one of the priority controlled by the priority deciding means of said audio-video receiving apparatus, retransmission frequency, lost factor of information, insertion interval between in-frame-encoded frames, and grading of priority.
- A still yet further aspect of the present invention is an audio-video transmitting apparatus for transmitting said encoded information by using the priority added to said encoded information and thereby thinning it when (1) an actual transfer rate exceeds the target transfer rate of information for a video or audio or (2) it is decided that writing of said encoded information into a transmitting buffer is delayed as the result of comparing the elapsed time after start of transmission with a period to be decoded or output added to said encoded information.
- Another aspect of the present invention is a data processing apparatus comprising: receiving means for receiving a data series including (1) time-series data for audio or video, (2) an inter-time-series-data priority showing the priority of the processing between said time-series-data values, and (3) a plurality of in-time-series-data priorities for dividing said time-series data value to show the processing priority between divided data values; and data processing means for performing processing by using said inter-time-series-data priority and said in-time-series-data priority together when pluralities of said time-series-data values are simultaneously present.
- Still another aspect of the present invention is a data processing apparatus comprising: receiving means for receiving a data series including (1) time-series data for audio or video, (2) an inter-time-series-data priority showing the priority of the processing between said time-series-data values, and (3) a plurality of in-time-series-data priorities for dividing said time-series data value to show the processing priority between divided data values; and data processing means for distributing throughput to each of said time-series-data values in accordance with said inter-time-series-data priority and moreover, adaptively deteriorating the processing quality of the divided data in said time-series data in accordance with said in-time-series-data priority so that each of said time-series-data values is kept within said distributed throughput.
- Yet another aspect of the present invention is a data processing apparatus characterized by, when an in-time-series-data priority for a video is added every frame of said video and said video for each frame is divided into a plurality of packets, adding said in-time-series-data priority only to the header portion of a packet for transmitting the head portion of a frame of said video accessible as independent information.
- Still yet another aspect of the present invention is the data processing apparatus, wherein said in-time-series-data priority is described in the header of a packet to perform priority processing.
- A further aspect of the present invention is the data processing apparatus, wherein the range of a value capable of expressing said in-time-series-data priority is made variable to perform priority processing.
- A still further aspect of the present invention is a data processing method comprising the steps of: inputting a data series including time-series data for audio or video and an inter-time-series-data priority showing the processing priority between said time-series data values; and
- processing priorities by using said inter-time-series-data priority as the value of a relative or absolute priority.
- A yet further aspect of the present invention is a data processing method comprising the steps of: classifying time-series data values for audio or video; inputting a data series including said time-series data and a plurality of in-time-series-data priorities showing the processing priority between said classified data values; and processing priorities by using said in-time-series-data priority as the value of a relative or absolute priority.
- Moreover, the present invention is characterized by:
- inputting, for example, a video as waveform data in accordance with the waveform-data-transmitting method; or
- outputting, for example, a video as waveform data in accordance with the waveform-data-receiving method.
- Moreover, the present invention is characterized by:
- (d) outputting the execution time of each group obtained through estimation in accordance with the waveform-data-receiving method; or
- (d) inputting a data string constituted with the execution time of each group; and
- (e) computing, in accordance with the waveform-data-transmitting method, the execution frequency of each group for completing decoding within the time required to transmit a code length determined by the designation of a rate controller or the like, in accordance with each execution time of the receiving means.
- Furthermore, the present invention is characterized by:
- (d) estimating the execution time of each group in accordance with the processing time required to encode a video and each execution frequency output by counting means; and
- (e) estimating, in accordance with the waveform-data-transmitting method, the processing time required to encode a video by using the above execution times, and computing the execution frequency of each group such that the processing time does not exceed the time usable to process one picture, which is determined by a frame rate given as the designation of a user.
- The present invention has the above structure to obtain the execution frequency of indispensable processing and that of dispensable processing, transmit the execution frequencies to the receiving side, and estimate the time required for each processing in accordance with the execution frequencies and the decoding time.
- By reducing each execution frequency of dispensable processing so that the time required for decoding becomes shorter than a designated time in accordance with the estimated time of each processing, it is possible to keep the decoding time at or below the designated time and thereby keep the delay small.
- Moreover, it is possible to keep the decoding execution time at or below a designated time by transmitting the execution times of indispensable processing and dispensable processing estimated by the receiving side to the transmitting side, and determining each execution frequency at the transmitting side in accordance with each execution time.
- Moreover, it is possible to keep the estimated encoding time at or below a user-designated time by estimating the execution times of indispensable processing and dispensable processing and determining each execution frequency in accordance with each execution time and the user-designated time determined by a frame rate designated by a user.
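- In outline, the frequency control described above can be expressed as a small loop. The following Python sketch is illustrative only and assumes a simple two-class model of indispensable and dispensable processing; the function and parameter names do not come from the specification.

```python
def limit_decoding_time(t_core, t_optional, freq_optional, t_designated):
    """Reduce the execution frequency of dispensable processing until the
    estimated decoding time fits within the designated time.

    t_core        -- estimated time for the indispensable processing
    t_optional    -- estimated time for one execution of dispensable processing
    freq_optional -- current execution frequency of dispensable processing
    t_designated  -- decoding-time budget designated in advance
    """
    while freq_optional > 0 and t_core + t_optional * freq_optional > t_designated:
        freq_optional -= 1  # thin out dispensable processing
    return freq_optional
```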
- FIG. 1 is a schematic block diagram of the audio-video transceiver of an embodiment of the present invention;
- FIG. 2 is an illustration showing a reception control section and a separating section;
- FIG. 3 is an illustration showing a method for transmitting and controlling video and audio by using a plurality of logical transmission lines;
- FIG. 4 is an illustration showing a method for dynamically changing header information added to the data for a video or audio to be transmitted;
- FIGS. 5(a) and 5(b) are illustrations showing a method for adding AL information;
- FIGS. 6(a) to 6(d) are illustrations showing examples of a method for adding AL information;
- FIG. 7 is an illustration showing a method for transmitting information by dynamically multiplexing and separating a plurality of logical transmission lines;
- FIG. 8 is an illustration showing a procedure for transmitting a broadcasting program;
- FIG. 9(a) is an illustration showing a method for transmitting a video or audio considering the read and rise time of a program or data when the program or data is present at a receiving terminal;
- FIG. 9(b) is an illustration showing a method for transmitting a video or audio considering the read and rise time of a program or data when the program or data is transmitted;
- FIG. 10(a) is an illustration showing a method for corresponding to zapping;
- FIG. 10(b) is an illustration showing a method for corresponding to zapping;
- FIG. 11(a) is an illustration showing a specific example of the protocol to be actually transferred between terminals;
- FIG. 11(b) is an illustration showing a specific example of the protocol to be actually transferred between terminals;
- FIG. 12 is an illustration showing a specific example of the protocol to be actually transferred between terminals;
- FIG. 13(a) is an illustration showing a specific example of the protocol to be actually transferred between terminals;
- FIG. 13(b) is an illustration showing a specific example of the protocol to be actually transferred between terminals;
- FIG. 13(c) is an illustration showing a specific example of the protocol to be actually transferred between terminals;
- FIG. 14 is an illustration showing a specific example of the protocol to be actually transferred between terminals;
- FIG. 15 is an illustration showing a specific example of the protocol to be actually transferred between terminals;
- FIG. 16(a) is an illustration showing a specific example of the protocol to be actually transferred between terminals;
- FIG. 16(b) is an illustration showing a specific example of the protocol to be actually transferred between terminals;
- FIG. 17 is an illustration showing a specific example of the protocol to be actually transferred between terminals;
- FIG. 18 is an illustration showing a specific example of the protocol to be actually transferred between terminals;
- FIG. 19(a) is an illustration showing a specific example of the protocol to be actually transferred between terminals;
- FIG. 19(b) is an illustration showing a specific example of the protocol to be actually transferred between terminals;
- FIGS. 20(a) to 20(c) are block diagrams of demonstration systems of CGD of the present invention;
- FIG. 21 is an illustration showing a method for adding a priority under overload at an encoder;
- FIG. 22 is an illustration describing a method for deciding a priority at a receiving terminal under overload;
- FIG. 23 is an illustration showing temporal change of priorities;
- FIG. 24 is an illustration showing stream priority and object priority;
- FIG. 25 is a schematic block diagram of a video encoder and a video decoder of an embodiment of the present invention;
- FIG. 26 is a schematic block diagram of an audio encoder and an audio decoder of an embodiment of the present invention;
- FIGS. 27(a) and 27(b) are illustrations showing a priority adding section and a priority deciding section for controlling the priority of processing under overload;
- FIGS. 28(a) to 28(c) are illustrations showing the grading for adding a priority;
- FIG. 29 is an illustration showing a method for assigning a priority to multi-resolution video data;
- FIG. 30 is an illustration showing a method for constituting a communication payload;
- FIG. 31 is an illustration showing a method for making data correspond to a communication payload;
- FIG. 32 is an illustration showing the relation between object priority, stream priority, and communication packet priority;
- FIG. 33 is a block diagram of a transmitter of the first embodiment of the present invention;
- FIG. 34 is an illustration of the first embodiment;
- FIG. 35 is a block diagram of the receiver of the third embodiment of the present invention;
- FIG. 36 is a block diagram of the receiver of the fifth embodiment of the present invention;
- FIG. 37 is an illustration of the fifth embodiment;
- FIG. 38 is a block diagram of the transmitter of the sixth embodiment of the present invention;
- FIG. 39 is a block diagram of the transmitter of the eighth embodiment of the present invention;
- FIG. 40 is a flowchart of the transmission method of the second embodiment of the present invention;
- FIG. 41 is a flowchart of the reception method of the fourth embodiment of the present invention;
- FIG. 42 is a flowchart of the transmission method of the seventh embodiment of the present invention;
- FIG. 43 is a flowchart of the transmission method of the ninth embodiment of the present invention;
- FIG. 44 is a block diagram showing an audio-video transmitter of the present invention;
- FIG. 45 is a block diagram showing an audio-video receiver of the present invention;
- FIG. 46 is an illustration for explaining priority adding means for adding a priority to a video and audio of an audio-video transmitter of the present invention; and
- FIG. 47 is an illustration for explaining priority deciding means for deciding whether to perform decoding by interpreting the priority added to a video and audio of an audio-video receiver of the present invention.
- 11 Reception control section
- 12 Separating section
- 13 Transmitting section
- 14 Video extending section (Picture extending section)
- 15 Video-extension control section (Picture-extension control section)
- 16 Video synthesizing section (Picture synthesizing section)
- 17 Output section
- 18 Terminal control section
- 4011 Transmission control section
- 4012 Video encoding section (Picture encoding section)
- 4013 Reception control section
- 4014 Video decoding section (Picture decoding section)
- 4015 Video synthesizing section (Picture synthesizing section)
- 4016 Output section
- 4101 Video encoder (Picture encoder)
- 4102 Video decoder (Picture decoder)
- 301 Receiving means
- 302 Estimating means
- 303 Video decoder (i.e. dynamic-picture or moving-picture decoder)
- 304 Frequency reducing means
- 306 Output terminal
- 307 Input terminal
- 3031 Variable-length decoding means
- 3032 Inverse orthogonal transforming means
- 3033 Switching unit
- 3034 Movement compensating means
- 3035 Execution-time measuring means
- The entire disclosure of U.S. patent application Ser. No. 09/194,008, filed Mar. 17, 1999, is expressly incorporated by reference herein.
- Embodiments of the present invention are described below by referring to the accompanying drawings.
- The embodiments described below mainly solve any one of the above problems (A1) to (A6).
- A “picture (or video)” used for the present invention includes both static pictures and moving pictures. Moreover, the picture in question can be a two-dimensional picture like computer graphics (CG) or three-dimensional picture data constituted with a wire-frame model.
- FIG. 1 is a schematic block diagram of the audio-video transceiver of an embodiment of the present invention.
- In FIG. 1, a reception control section 11 for receiving information and a transmitting section 13 for transmitting information are information transmitting means such as a coaxial cable, CATV, LAN, and modem. The communication environment can be one in which a plurality of logical transmission lines can be used without considering multiplexing means, such as the internet, or one in which multiplexing means must be considered, such as analog telephone or satellite broadcast.
- Moreover, systems for bidirectionally transferring video and audio between terminals, such as a picture telephone or teleconference system, and systems for broadcasting broadcast-type video and audio through satellite broadcast, CATV, or the internet are listed as terminal connection systems; the present invention takes such terminal connection systems into consideration.
- A separating section 12 shown in FIG. 1 is means for analyzing received information and separating data from control information. Specifically, the section 12 is means for decomposing data into the header information for transmission added to the data and the data itself, or decomposing data into the header for data control added to the data and the contents of the data. A picture extending section 14 is means for extending (decompressing) a received video. For example, a video to be extended can be a compressed picture of a standardized moving-picture or static-picture format such as H.261, H.263, MPEG1/2, or JPEG, or a non-standardized one.
- The picture-extension control section 15 shown in FIG. 1 is means for monitoring the extended state of a video. For example, by monitoring the extended state of a picture, it is possible to empty-read a receiving buffer without extending the picture when the receiving buffer is about to overflow, and to restart the extension of the picture once extension is possible again.
- Moreover, in FIG. 1, a picture synthesizing section 16 is means for synthesizing an extended picture. A picture synthesizing method can be defined by describing, with a script language such as JAVA, VRML, or MHEG, a picture and its structural information (display position and display time, optionally including a display period), a method for grouping pictures, a picture display layer (depth), an object ID (SSRC, to be described later), and the relation between these attributes. The script describing the synthesizing method is input or output through a network or a local memory.
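- As an illustration of the structural information such a script can carry, the sketch below models one synthesizing instruction in Python. All field names are hypothetical; the specification only requires that display position, display time (optionally a display period), grouping, display layer (depth), and object ID (SSRC) be expressible.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SynthesisInstruction:
    """One entry of a picture-synthesizing script (illustrative only)."""
    object_id: int            # corresponds to the SSRC of the stream
    group: str                # method for grouping pictures
    display_x: int            # display position (horizontal)
    display_y: int            # display position (vertical)
    display_time: float       # when to display, in seconds
    display_period: Optional[float] = None  # optional display duration
    layer: int = 0            # picture display layer (depth)
```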
- Moreover, an output section 17 is a display or printer for outputting the picture-synthesis result. A terminal control section 18 is means for controlling each section. Furthermore, it is possible to use a structure for extending an audio instead of a picture (such a structure can be constituted by changing the picture extending section to an audio extending section, the picture extension control section to an audio extension control section, and the picture synthesizing section to an audio synthesizing section), or a structure for extending a picture and an audio and synthesizing and displaying them while keeping temporal synchronization.
-
FIG. 2 is an illustration showing a reception control section and a separating section. - By constituting the
reception control section 11 shown inFIG. 1 with adata receiving section 101 for receiving data and a controlinformation receiving section 102 for receiving the control information for controlling data and the separatingsection 12 with a transmissionformat storing section 103 for storing a transmission structure (to be described later in detail) for interpreting transmission contents and a transmissioninformation interpreting section 104 for interpreting transmission contents in accordance with the transmission structure stored in the transmissionformat storing section 103, it is possible to independently receive data and control information. Therefore, for example, it is easy to delete or move a received video or audio while receiving it. - As described above, it is possible for the communication environment purposed by the
reception control section 11 to use a communication environment (internet profile) in which a plurality of logical transmission lines can be used without considering multiplexing means like internet or a communication environment (Raw profile) in which multiplexing means must be considered like analog telephone or satellite broadcast. However, a user premises a communication environment in which a plurality of logical transmission lines (logical channels) are prepared (for example, in the case of a communication environment in which TCP/IP can be used, the expression referred to as “communication port” is generally used). - Moreover, as shown in
FIG. 2 , it is assumed that thereception control section 11 receives one type of data transmission line or more and one type of control logical transmission line for controlling data to be transmitted or more. It is also possible to prepare a plurality of transmission lines for transmitting data and only one transmission line for controlling data. Moreover, it is possible to prepare a transmission line for controlling data every data transmission like the RTP/RTCP also used for H.323. Furthermore, when considering the broadcast using UDP, it is possible to use a communication system using a single communication port (multicast address). -
FIG. 3 is an illustration for explaining a method for transmitting and controlling video and audio by using a plurality of logical transmission lines. The data to be transmitted is referred to as ES (Elementary Stream), which can be picture information for one frame or picture information in GOBs or macroblocks smaller than one frame in the case of a picture. - In the case of an audio, it is possible to use a fixed length decided by a user. Moreover, the data-control header information added to the data to be transmitted is referred to as AL (Adaptation Layer information). The information showing whether it is a start position capable of processing data, information showing data-reproducing time, and information showing the priority of data processing are listed as the AL information. Data control information of the present invention corresponds to the AL information. Moreover, it is not always necessary for the ES and AL used for the present invention to coincide with the contents defined by MPEG1/2.
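- The three kinds of AL information listed above can be pictured as a small record attached to each ES fragment. The following Python sketch is one possible rendering, with invented field names:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ALInformation:
    """Data control information (AL information) attached to an ES fragment."""
    random_access: bool              # reproducible independently (e.g. I-picture)
    access_flag: bool                # head of a processable unit (e.g. GOB)
    reproducing_time: Optional[int]  # PTS or frame interval; None if not sent
    priority: int                    # processing priority under overload
```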
- The information showing whether it is a start position capable of processing data specifically includes two types. The first is a random access flag, that is, information showing that the data can be read and reproduced independently of preceding or following data, such as an intra-frame (I-picture) in the case of a picture. The second is an access flag, showing that the data can be individually read, that is, that it is the head of a picture in GOBs or macroblocks in the case of a picture. Therefore, the absence of an access flag shows the middle of data. Both the random access flag and the access flag are not always necessary as information showing a start position capable of processing data.
- There are cases in which no problem occurs even if neither flag is added, such as real-time communication in a teleconference system. However, to perform editing simply, a random access flag is necessary. It is also possible to decide whether a flag is necessary, and which flag, through a communication channel before transferring data.
- The information indicating a data-reproducing time expresses time synchronization when a picture and an audio are reproduced, and is referred to as a PTS (Presentation Time Stamp) in the case of MPEG1/2. Because time synchronization is not normally considered in real-time communication such as a teleconference system, the information representing a reproducing time is not always necessary; the time interval between encoded frames may instead be the necessary information.
- By making the receiving side adjust a time interval, it is possible to prevent a large fluctuation of frame intervals. However, by making the receiving side adjust the reproducing interval, a delay may occur. Therefore, it may be decided that the time information showing the frame interval between encoded frames is unnecessary.
- Whether the information showing a data-reproducing time represents a PTS or a frame interval, or whether no reproducing time is added to the data at all, can be decided before transmitting the data; the decision is communicated to a receiving terminal through the communication channel, and the data is transmitted together with the decided data control information.
- When data cannot be processed or transmitted due to the load of a receiving terminal or of a network, the information showing the priority for processing the data makes it possible to reduce the load of the receiving terminal or network by stopping the processing or transmission of low-priority data.
- The receiving terminal is able to process the data with the picture-extension control section 15, and the network is able to process the data with a relay terminal or router. The priority can be expressed by a numerical value or a flag. Moreover, by transmitting an offset value for the information showing the data-processing priority as control information or as data control information (AL information) together with the data, and adding the offset value to the priority previously assigned to a video or audio in the case of a sudden fluctuation of the load of a receiving terminal or network, it is possible to set a dynamic priority corresponding to the operation state of a system.
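- A minimal sketch of this offset mechanism, assuming the convention used later in this description that a smaller numerical value means a higher priority (the names are illustrative):

```python
def effective_priority(base_priority: int, offset: int) -> int:
    """Combine the priority assigned at encoding time with a dynamically
    transmitted offset; a smaller value means a higher priority."""
    return base_priority + offset

# Example: under a sudden load increase, the sender raises the offset of a
# background stream so the receiver thins it out first.
background = effective_priority(base_priority=2, offset=3)  # -> 5 (thinned first)
foreground = effective_priority(base_priority=2, offset=0)  # -> 2 (kept)
```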
- Moreover, the information showing the data processing priority can be added every stream constituted with the aggregation of frames of a plurality of pictures or audios or every frame of video or audio.
- Priority adding means for deciding the encoded-information processing priority under overload in accordance with the predetermined rules by the encoding method such as H.263 or G.723 and making the encoded information correspond to the decided priority is provided for a transmitting terminal unit (see
FIG. 46 ). -
FIG. 46 is an illustration for explaining priority adding means 5201 for adding a priority to a picture and an audio. - That is, as shown in
FIG. 46 , a priority is added to encoded-video data (to be processed by video encoding means 5202) and encoded-audio data (to be processed by audio encoding means 5203) in accordance with predetermined rules. The rules for adding priorities are stored inpriority adding rules 5204. The rules include rules for adding a priority higher than that of a P-frame (inter-frame encoded picture frame) to an I-frame (intra-frame encoded picture frame) and rules for adding a priority lower than that of an audio to a picture. Moreover, it is possible to change the rules in accordance with the designation of a user. - Priority-adding objects are scene changes in the case of a picture or an audio block and audioless block in the case of a picture frame, stream, or audio designated by an editor or user.
- To add a priority in picture or audio frames for defining the processing priority under overload, the following methods are considered: a method for adding a priority to a communication header and a method for embedding a priority in the header of a bit stream in which a video or audio is encoded under encoding. The former makes it possible to obtain the information for priority without decoding it and the latter makes it possible to independently handle a single bit stream without depending on a system.
- When one picture frame (e.g. intra-frame encoded I-frame or inter-frame encoded P- or B-frame) is divided into a plurality of transmission packets, a priority is added only to a communication header for transmitting the head of a picture frame accessible as independent information in the case of a picture (when priorities are equal in the same picture frame, it is possible to assume that the priorities are not changed before the head of the next accessible picture frame appears).
- Moreover, it is possible to realize configuration in accordance with control information by making the range of a value capable of expressing a priority variable (for example, expressing time information with 16 bits or 32 bits depending on the purpose).
- Furthermore, in the case of a decoder, priority deciding means for deciding a processing method is provided for a receiving terminal unit in accordance with the priority under overload of received various encoded pieces of information (see
FIG. 47 ). -
FIG. 47 is an illustration for interpreting priorities added to a picture and an audio and explaining priority deciding means 5301 for deciding whether to perform decoding. - That is, as shown in
FIG. 47 , the priorities include a priority added to each stream of each picture or audio and a priority added to each frame of a picture or audio. It is possible to use these priorities independently or by making a frame priority correspond to a stream priority. The priority deciding means 5301 decides a stream or frame to be decoded in accordance with these priorities. - Decoding is performed by using two types of priorities for deciding a processing priority under overload at a terminal.
- That is, a stream priority (inter-time-series priority) for defining a relative priority between bit streams such as a picture and audio and a frame priority (intra-time-series priority) for defining a relative priority between decoding units such as picture frames in the same stream are defined (
FIG. 24 ). - The former stream priority makes it possible to handle a plurality of videos or audios. The latter frame priority makes it possible to change scenes or add different priorities even to the same intra-frame encoded picture frames (I-frame) in accordance with the intention of an editor.
- By making a stream priority correspond to a time assigned to an operating system (OS) for encoding or decoding a picture or audio or a processing priority and thereby controlling the stream priority, it is possible to control a processing time at an OS level. For example, in the case of Windows95/NT of Microsoft Corporation, a priority can be defined at five OS levels. By realizing encoding or decoding means by software in threads, it is possible to decide a priority at an OS level to be assigned to each thread in accordance with the stream priority of a purposed stream.
- The frame priority and stream priority described above can be applied to a transmission medium or data-recording medium. For example, by defining the priority of a packet to be transmitted as an access unit priority, it is possible to decide a priority concerned with packet transmission or a priority for processing by a terminal under overload in accordance with the relation between frame priority and stream priority such as the relation of Access Unit Priority=Stream Priority−Frame Priority.
- Moreover, it is possible to decide a priority by using a floppy disk or optical disk as a data-recording medium. Furthermore, it is possible to decide a priority by using not only a recording medium but also an object capable of recording a program such as an IC card or ROM cassette. Furthermore, it is possible to use a repeater for a picture or audio such as a router or gateway for relaying data.
- As a specific method for using a priority, when a receiving terminal is overloaded, priority deciding means for deciding the threshold of the priority of encoded information to be processed is set to a picture-extension control section or audio-extension control section and the time to be displayed (PTS) is compared with the elapsed time after start of processing or the time to be decoded (DTS) is compared with the time elapsed time after start of processing to change thresholds of the priority of encoded information to be processed in accordance with the comparison result (it is also possible to refer to the insertion interval of I-frame or the grading of a priority as the information for changing thresholds).
- In the case of the example shown in
FIG. 20 (a), a picture with the size of captured QCIF or CIF is encoded by an encoder (H.263) under encoding to output a time stamp (PTS) showing the time for decoding (DTS) or the time for displaying the picture, priority information showing processing sequence under overload (CGD, Computational Graceful Degradation), frame type (SN), and sequence number together with encoded information. - Moreover, in the case of the example shown in
FIG. 20 (b), an audio is also recorded through a microphone and encoded by an encoder (G.721) to output a time stamp (PTS) showing the time for decoding (DTS) or the time for reproducing an audio, priority information (CGD), and sequence number (SN) together with encoded information. - Under decoding, as shown in
FIG. 20 (c), a picture and an audio are supplied to separate buffers to compare their respective DTS (decoding time) with the elapsed time after start of processing. When DTS is not delayed, the picture and the audio are supplied to their corresponding decoders (H.263 and G.721). - The example in
FIG. 21 describes a method for adding a priority by an encoder under overload. For a picture, high priorities of “0” and “1” are assigned to I-frame (intra-frame encoded picture frame) (the smaller a numerical becomes, the lower a priority becomes). P-frame has a priority of “2” which is lower than that of I-frame. Because two levels of priorities are assigned to I-frame, it is possible to reproduce only I-frame having a priority of “0” when a terminal for decoding has a large load. Moreover, it is necessary to adjust the insertion interval of I-frame in accordance with a priority adding method. - The example in
FIG. 22 shows an illustration showing a method for deciding a priority at a receiving terminal under overload. The priority of a frame to be disused is set to a value larger than a cutOffPriority. That is, every picture frame is assumed as an object to be processed. It is possible to previously know the maximum value of priorities added to picture frames by communicating it from the transmitting side to the receiving side (step 101). - When DTS is compared with the elapsed time after start of processing and resultantly, the elapsed time is larger than DTS (when decoding is not in time), the threshold of the priority of a picture or audio to be processed is decreased to thin out processings (step 102). However, when the elapsed time after start of processing is smaller than DTS (decoding is in time), the threshold of a priority is increased in order to increase the number of pictures or audio which can be processed (step 103).
- If the image from one before is skipped by P-frame, no processing is performed. If not, a priority offset value is added to the priority of a picture frame (or audio frame) to compare the priority offset value with the threshold of the priority. When the offset value does not exceed the threshold, data to be decoded is supplied to a decoder (step 104).
- A priority offset allows the usage of previously checking the performance of a machine and communicating the offset to a receiving terminal (it is also possible that a user issues designation at the receiving terminal) and the usage of changing priorities of a plurality of video and audio streams in streams (for example, thinning out processings by increasing the offset value of the rearmost background).
- When a multi-stream is purposed, it is also possible to add a priority for each stream and decide the skip of decoding of a picture or audio. Moreover, in the case of real time communication, it is possible to decide whether decoding is advanced or delayed at the terminal by handling the TR (Temporary Reference) of H.263 similarly to DTS and realize the skipping same as described above.
-
FIG. 23 is an illustration showing temporal change of priorities by using the above algorithm. -
FIG. 23 shows the change of a priority to be added to a picture frame. This priority is a priority for deciding whether to perform decoding when a terminal is overloaded, which is added every frame. The smaller the value of a priority becomes, the higher the priority becomes. In the case of the example inFIG. 23 , 0 has the highest priority. When the threshold of a priority is 3, a frame having a priority to which a value larger than 3 is added is disused without being decoded and a frame having a priority to which a value of 3 or less is added is decoded. By selectively discussing frames in accordance with priorities, it is possible to control the load of a terminal. It is also possible to dynamically decide the priority threshold in accordance with the relation between the present processing time and the decoding time (DTS) to be added to each frame. This technique can be applied not only to a picture frame but also to an audio in accordance with the same procedure. - In the case of a transmission line such as internet, when it is necessary to retransmit encoded information lost under transmission, it is possible to retransmit only a picture or audio required by the receiving side by providing a retransmission request priority deciding section for deciding the threshold of the priority of the encoded information to be retransmitted for a reception control section and deciding the threshold of the priority added to the encoded information whose retransmission should be requested in accordance with the information for priority, retransmission frequency, loss rate of information, insertion interval of intra-frame encoded frame, grading of priority (e.g. five-level priority) which are controlled by the priority deciding section. If the retransmission frequency or loss rate of information is too large, it is necessary to raise the priority of the information to be retransmitted and lower the retransmission or loss rate. Moreover, by knowing the priority used for the priority deciding section, it is possible to prevent the information to be processed from being transmitted.
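- Steps 101 to 104, together with the dynamic threshold adjustment illustrated in FIG. 23, can be condensed into a short control loop. The following Python sketch is one reading of the algorithm, using the smaller-value-is-higher-priority convention of FIG. 23; the names (cutoff, offset, and so on) are illustrative, not taken from the specification.

```python
def decide_frames(frames, max_priority, dts_of, elapsed):
    """Thin out decoding under overload (one reading of steps 101-104).

    frames       -- iterable of (priority, frame) pairs; smaller = more important
    max_priority -- largest priority value in use, told by the sender (step 101)
    dts_of       -- function returning the DTS of a frame
    elapsed      -- function returning time elapsed since processing started
    """
    cutoff = max_priority            # initially, every frame is processed
    offset = 0                       # machine- or stream-dependent priority offset
    for priority, frame in frames:
        if elapsed() > dts_of(frame):
            cutoff -= 1              # decoding is late: thin out (step 102)
        else:
            cutoff += 1              # decoding is in time: accept more (step 103)
        cutoff = max(0, min(cutoff, max_priority))
        if priority + offset <= cutoff:
            yield frame              # within threshold: supply to decoder (step 104)
```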
- In the case of a transmitting terminal, when an actual transfer rate exceeds the target transfer rate of the information of the transmitting terminal or when writing of the encoded information into a transmitting buffer is delayed as the result of comparing the elapsed time after start of transfer processing with the time added to the encoded information to be decoded or displayed, it is possible to transmit a picture or audio matching with the target rate by using a priority added to encoded information and used by the priority deciding section of the receiving terminal when the terminal is overloaded and thereby thinning out transmissions of information. Moreover, by introducing the processing skipping function under overload performed at the receiving-side terminal into the transmitting-side terminal, it is possible to control a failure due to overload of the transmitting-side terminal.
- By making it possible to transmit only necessary information out of the above-described AL information according to necessity, it is possible to adjust the amount of information to be transmitted to a narrow-band communication channel such as an analog telephone line. It is possible to recombine the AL information (data control information) used for the transmitting side by deciding the data control information to be added to data at a transmitting-side terminal before transmitting the data, communicating the data control information to be used to a receiving terminal as control information (for example, using only a random access flag), and rewriting at the receiving-side terminal based on the obtained control information the information about a transmission structure (showing which AL information is used) stored in the transmission format storing section 103 (see
FIG. 16 ). -
- FIG. 4 is an illustration for explaining a method for dynamically changing the header information added to the data of a picture or audio to be transmitted. In the example of FIG. 4, the data to be transmitted (ES) is decomposed into data pieces, and identifying information showing the sequence of the data (a sequence number), information showing whether a data piece is a start position capable of processing (a marker bit), and time information concerned with the transfer of the data pieces (a time stamp) are added to the data pieces in the form of communication headers; these items correspond to the transmission control information of the present invention.
- For example, time-stamp information shows PTS that is a reproducing time as previously described in the case of MPEG1/2. In the case of H.261 or H.263, however, the time-stamp information shows a time interval when the information is encoded. However, to process H.263 synchronously with an audio, it is necessary to show that a time stamp is PTS information. This is because time-stamp information shows the time interval between encoded frames in the case of H. 263 and it is defined by RTP that the time stamp of the first frame is random.
- Therefore, it is necessary to add a flag showing whether a time stamp is PTS as (a) communication header information (it is necessary to extend a communication header) or (b) header information for payload of H.263 or H.261 (that is, AL information) (in this case, it is necessary to extend payload information).
- A marker bit serving as the information showing whether it is a start position capable of processing data pieces is added as RTP header information. Moreover, as described above, there is a case in which it is necessary to provide an access flag showing that it is a start position capable of accessing data and a random access flag showing that it is possible to access data at random for AL information. Because doubly providing flags for a communication header lowers the efficiency, a method of substituting an AL flag by a flag prepared for the communication header is also considered.
- (c) The problem is solved by newly providing a flag showing that an AL flag is substituted by the header added to a communication header without adding a flag to AL for the communication header or defining that the marker bit of the communication header is the same as that of AL (it is expected that interpretation can be quickly performed compared to the case of providing a flag for AL). That is, a flag is used which shows whether the marker bit has the same meaning as the flag of AL. In this case, it is considered to improve the communication header or describe it in an extension region.
- However, (d) it is also possible to interpret the meaning of the marker bit of the communication header so as to mean that at least either of a random access flag and an access flag is present in AL. In this case, it is possible to know that the meaning of interpretation is changed from the conventional case by the version number of the communication header. Moreover, processing is simplified by providing an access flag or random access flag only for the communication header or the header of AL (for the former, a case of providing the flag for both the headers is considered but it is necessary to newly extend the communication header).
- It is already described to add the information showing the priority of data processing as the information for AL. By adding the data-processing priority to the communication header, it is possible to decide the processing of the data-processing priority without interpreting the contents of data also on a network. Moreover, in the case of IPv6, it is possible to add the priority at a layer lower than the level of RTP.
- By adding a timer or counter for showing the effective period of data processing to the communication header of RTP, it is possible to decide how the state of a transmitted packet changes. For example, when necessary decoder software is stored in a memory having a low access speed, it is possible to decide the information required by a decoder and when the information is required by a timer or counter. In this case, the information for the priority of a timer or counter or the information for the priority of data processing is unnecessary for AL information depending on the purpose.
- FIGS. 5(a) and 5(b) and FIGS. 6(a) to 6(d) are illustrations for explaining a method for adding AL information.
- By sending the control information for communicating whether to add AL to only the head of the data to be transmitted as shown in
FIG. 5 (a) or whether to add AL to each data piece after decomposing the data to be transmitted (ES) into one data piece or more to a receiving terminal as shown inFIG. 5 (b), it is possible to select the grading for handling transmission information. Adding AL to subdivided data is effective when access delay is a problem. - As described above, to previously communicate recombination of data control information at the receiving side or change of methods for arranging data control information to data to a receiving-side terminal, receiving-terminal correspondence can be smoothly performed by using the expression of a flag, counter, or timer and thereby, preparing the expression as AL information or as a communication header to communicate it to the receiving terminal.
- In the case of the above examples, a method for avoiding duplication of the header of RTP (or communication header) with AL information and a method for extending the communication header of RTP or AL information are described. However, it is not always necessary for the present invention to use RTP. For example, it is possible to newly define an original communication header or AL information by using UDP or TCP. Though the internet profile uses RTP sometimes, a multifunctional header such as RTP is not defined in the Raw profile. The following four types of concepts are considered for AL information and communication header (see FIGS. 6(a) to 6(d)).
- (1) The header information of RTP or AL information is corrected and extended so that the header information already assigned to RTP and that already assigned to AL are not overlapped (particularly, the information for a time stamp is overlapped and the priority information for a timer, counter, or data processing becomes extension information). Or, it is possible to use a method of not extending the header of RTP or not considering duplication of AL information with information of RTP. They correspond to the contents having been shown so far. Because a part of RTP is already practically used for H.323, it is effective to extend RTP having compatibility. (See
FIG. 6 (a).) - (2) Independently of RTP, a communication header is simplified (for example, using only a sequence number) and remainder is provided for AL information as multifunctional control information. Moreover, by making it possible to variably set items used for AL information before communication, it is possible to specify a flexible transmission format. (See
FIG. 6 (b).) - (3) Independently of RTP, AL information is simplified (for an extreme example, no information is added to AL) and every control information is provided for a communication header. A sequence number, time stamp, marker bit, payload type, and object ID frequently used as communication headers are kept as fixed information and data-processing priority information and timer information are respectively provided with an identifier showing whether extended information is present as extended information to refer to the extended information if the information is defined. (See
FIG. 6 (c).) - (4) Independently of RTP, a communication header and AL information are simplified and a format is defined as a packet separate from the communication header or AL information to transmit the format. For example, a method is also considered in which only a marker bit, time stamp, and object ID are defined for AL information, only a sequence number is defined for a communication header, and payload information, data-processing priority information, and timer information are defined as a transmission packet (second packet) separate from the above information and transmitted. (See
FIG. 6 (d).) - As described above, when considering a purpose and header information already added to a picture or audio, it is preferable so as to be able to freely define (customize) a packet (second packet) to be transmitted separately from a communication header, AL information, or data in accordance with the purpose.
-
- FIG. 7 is an illustration for explaining a method for transmitting information by dynamically multiplexing and separating a plurality of logical transmission lines. The number of logical transmission lines can be decreased by providing, in the transmitting section, an information multiplexing section capable of starting or ending multiplexing of the information of logical transmission lines that transmit a plurality of pieces of data or control information, in accordance with the designation of a user or the number of logical transmission lines, and by providing, in the reception control section, an information separating section for separating the multiplexed information.
- In FIG. 7, the information multiplexing section is referred to as a “Group MUX”; specifically, a multiplexing system such as H.223 can be used. The Group MUX can be provided in the transmitting and receiving terminals, and by providing it in a relay router or terminal, a narrow-band communication channel can be accommodated. Moreover, by realizing the Group MUX with H.223, H.223 and H.324 can be interconnected.
- As described above, similarly, it is possible to transmit the notification of a method for transmitting at least the information for communicating the start and end of multiplexing, information for communicating the combination of logical transmission lines to be multiplexed, and control information concerned with multiplexing (multiplexing control information) as control information in accordance with an expression method such as a flag, counter, or timer or reduce the setup time at the receiving side by transmitting data control information to a receiving-side terminal together with data. Moreover, as previously described, it is possible to provide an item for expressing a flag, counter, or timer for the transmission header of RTP.
- When a plurality of information multiplexing sections or a plurality of information separating sections are present, it is possible to identify to which information multiplexing section the control information (multiplexing control information) belongs by transmitting the control information (multiplexing control information) together with an identifier for identifying an information multiplexing section or information separating section. The control information (multiplexing control information) includes a multiplexing pattern. Moreover, by using a table of random number and thereby, deciding an identifier of an information multiplexing section or information separating section between terminals, it is possible to generate an identifier of the information multiplexing section. For example, it is possible to generate random numbers in a range determined between transmitting and receiving terminals and use the largest value for the identifier (identification number) of the information multiplexing section.
- Because the data multiplexed by the information multiplexing section is conventionally different from the media type defined in RTP, it is necessary to define the information showing that it is information multiplexed by the information multiplexing section (new media type H.223 is defined) for the payload type of RTP.
- By arranging the information to be transmitted by or recorded in the information multiplexing section in the sequence of control information and data information so as to improve the access speed to multiplexed data, it is expected to quickly analyze multiplexed information. Moreover, it is possible to quickly analyze header information by fixing an item which is described in accordance with the data control information added to control information and adding and multiplexing an identifier (unique pattern) different from data.
-
- FIG. 8 is an illustration for explaining the transmission procedure of a broadcasting program. By transmitting the relation between the identifier of a logical transmission line and the identifier of a broadcasting program as control information for the broadcasting program, or by adding the identifier of the broadcasting program to the data as data control information (AL information), it is possible to identify for which program the data transmitted through a plurality of transmission lines is broadcast. Moreover, by transmitting the relation between the identifier of the data (SSRC in the case of RTP) and the identifier of the logical transmission line (e.g. a LAN port number) to the receiving-side terminal as control information, and transmitting the corresponding data after it is confirmed (Ack/Reject) that the control information was received by the receiving-side terminal, the correspondence between data pieces can be established even if the control information and the data are transmitted through independent transmission lines.
- In the case of communication with no back channel, it is necessary to transmit control information sufficiently before transmitting data so as to enable the receiving terminal to know a structural information of data. Moreover, control information should be transmitted through a transmission channel free from packet loss and having a high reliability. However, when using a transmission channel having a low reliability, it is necessary to cyclically transmit the control information having the same transmission sequence number. This is not restricted to the case of transmitting the control information concerned with a setup time.
- Moreover, it is possible to flexibly control and transmit data by selecting an item which can be added as data control information (e.g. access flag, random access flag, data reproducing time (PTS), or data-processing-priority information), deciding whether to transmit the data control information together with the identifier (SSRC) of data as control information through a logical transmission line different from that of the data or transmit the data control information as data control information (information for AL) together with the data at the transmitting side before transmitting the data, and communicating and transmitting the data to the receiving side as control information.
- Thereby, it is possible to transmit data information without adding information to AL. Therefore, to transmit the data for a picture or audio by using RTP, it is unnecessary to extend the definition of the payload having been defined so far.
- FIGS. 9(a) and 9(b) are illustrations showing a picture or audio transmission method considering the read time and rise time of program or data. Particularly, when the resources of a terminal are limited like the case of satellite broadcasting or a portable terminal having no return channel and being unidirectional, program or data is present and used at a receiving-side terminal, a necessary program (e.g. H.263, MPEG1/2, or software of audio decoder) or data (e.g. video data or audio data) is present in a memory (e.g. DVD, hard disk, or file server on network) requiring a lot of read time, it is possible to reduce the setup time of program or data required in advance by previously receiving it as control information or receiving it together with data as data control information in accordance with the expression method such as the identifier for identifying the program or data, identifier (e.g. SSRC, or Logical Channel Number) of a stream to be transmitted, or a flag, counter (count-up/down), or timer for estimating the point of time necessary for a receiving terminal (
FIG. 18 ). - When program or data is transmitted, by transmitting the program or data from the transmitting side together with the information showing the storage destination (e.g. hard disk or memory) of the program or data at a receiving terminal, time required for start or read, relation between the type or storage destination of a terminal and the time required for start or read (e.g. relation between CPU power, storage device, and average response time), and utilization sequence, it is possible to schedule the storage destination and read time of the program or data if the program or data necessary for the receiving terminal is actually required.
- FIGS. 10(a) and 10(b) are illustrations for explaining a method for corresponding to zapping (channel change of TV).
- When it is necessary to execute a program at a receiving terminal differently from the case of conventional satellite broadcasting for receiving only pictures, the setup time until the program is read and started is a large problem. The same is true for the case in which available resources are limited like the case of a portable terminal.
- It is expected that the setup time at a receiving-side terminal can be decreased by (a) using a main looking-listening section by which the user looks at and listens to, and an auxiliary looking-listening section in which a receiving terminal cyclically monitors programs other than the program looked and listened by a user and receiving the relation between identifier for identifying program or data required in advance, information for a flag, counter, or timer for estimating the point of time necessary for the receiving terminal, and program as control information (information transmitted by a packet different from that of data to control terminal processing) or as data control information (information for AL), and preparing read of the program or data together with data as one of the settlement measures when to program or data necessary for a program other than the program looked and listened by the user is present in a memory requiring a lot of time for read.
- It is possible to prevent a screen from stopping under setup by setting a broadcasting channel for broadcasting only heading pictures of the pictures broadcasted through a plurality of channels and switching programs by a user, and thereby, when necessary program or data is present in a memory requiring a lot of time for read, temporarily selecting the heading picture of a program required by the user and showing it for the user or showing that program or data is currently read, and restarting the program required by the user after necessary program or data is read by the memory as the second one of the settlement measures. The above heading pictures include broadcasted pictures obtained by cyclically sampling programs broadcasted through a plurality of channels.
- Moreover, a timer is a time expression and shows the point of time when a program necessary to decode a data stream sent from the transmitting side is necessary. A counter is the basic time unit determined between transmitting and receiving terminals, which can be information showing what-th time. A flag is transmitted and communicated together with the data transmitted before the time necessary for setup or control information (information transmitted through a packet different from that of data to control terminal processing). It is possible to transmit the timer and counter by embedding them in data or transmit them as control information.
- Furthermore, to decide a setup time when a transmission line operating on a clock base, such as ISDN, is used, the time at which setup will be completed can be estimated by using a transmission serial number identifying the transmission sequence as transmission control information: to communicate from the transmitting terminal to the receiving terminal the point of time when the program or data is required, the serial number is sent to the receiving terminal together with the data as data control information, or as control information. Furthermore, when the transmission time fluctuates because of jitter or delay, as on the internet, the propagation delay of transmission must be added to the setup time in accordance with the jitter or delay time, by means such as those of RTCP (the control protocol accompanying the RTP media transmission protocol used on the internet).
- FIGS. 11(a) to 19(b) are illustrations showing specific examples of protocols actually transferred between terminals.
- A transmission format and a transmission procedure are described in ASN.1. Moreover, the transmission format is extended on the basis of H.245 of ITU. As shown in
FIG. 11 (a), objects of a picture and audio can have a hierarchical structure. In this example, each object ID has the attributes of a broadcasting-program identifier (program ID) and an object ID (SSRC), and the structural information and the synthesizing method between pictures are described by a script language such as Java or VRML. -
FIG. 11 (a) is an illustration showing examples of the relation between objects. - In
FIG. 11 (a), objects are media such as audio-video, CG, and text. In the examples in FIG. 11 (a), the objects constitute a hierarchical structure. Each object has a program number "Program ID" (corresponding to a TV channel) and an object identifier "Object ID" for identifying the object. When each object is transmitted in accordance with RTP (Real-time Transport Protocol, the media transmission protocol used on the internet), the object can be identified easily by making the object identifier correspond to the SSRC (synchronization source identifier). Moreover, the structure between objects can be described with a description language such as JAVA or VRML. - Two types of methods for transmitting the objects are considered. One is the broadcasting type, in which the objects are unilaterally transmitted from a transmitting-side terminal. The other is the type (communication type) for transferring the objects between transmitting and receiving terminals (terminals A and B). A minimal data-structure sketch of the object hierarchy is given below.
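- The following Python sketch is for illustration only; the class and field names are assumptions, while the Program ID/Object ID attributes and the Object ID-to-SSRC correspondence come from the text.

```python
from dataclasses import dataclass, field

@dataclass
class MediaObject:
    program_id: int                    # "Program ID": corresponds to a TV channel
    object_id: int                     # "Object ID": identifies the object
    media_type: str                    # e.g. "video", "audio", "CG", "text"
    children: list = field(default_factory=list)

# A scene object with a video child and an audio child, all in program 1.
scene = MediaObject(1, 0, "CG", children=[MediaObject(1, 1, "video"),
                                          MediaObject(1, 2, "audio")])

def ssrc_of(obj: MediaObject) -> int:
    # Making the object identifier correspond to the RTP SSRC lets a
    # receiver identify the object directly from the transport header.
    return obj.object_id
```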
- For example, it is possible to use RTP as a transmission method in the case of internet. Control information is transmitted by using a transmission channel referred to as LCNO in the case of the standard for video telephones. In the case of the example in
FIG. 11 (a), a plurality of transmission channels are used for transmission. The same program channel (program ID) is assigned to these channels. -
FIG. 11 (b) is an illustration for explaining a protocol that realizes the functions described for the present invention. The transmission protocol (H.245) used for the video-telephone standards (H.324 and H.323) is described below. The functions described for the present invention are realized by extending H.245. - The description method shown by the example in
FIG. 11 (b) is the protocol description method referred to as ASN.1. “Terminal Capability Set” expresses the performance of a terminal. In the case of the example inFIG. 11 (b), the function described as “mpeg4 Capability” is extended for the conventional H.245. - In
FIG. 12 , “mpeg4 Capability” describes the maximum number of pictures “Max Number Of pictures” and the maximum number of audio (“Max Number Of Audio”) which can be simultaneously processed by a terminal and the maximum number of multiplexing functions (“MaxNumberOfMux”) which can be realized by a terminal. - In
FIG. 12 , these are expressed as the maximum number of objects (“Number Of Process Object”) which can be processed. Moreover, a flag showing whether a communication header (expressed as AL inFIG. 12 ) can be changed is described. When the value of the flag is true, the communication header can be changed. To communicate the number of objects which can be processed between terminals to each other by using “MPEG4 Capability”, the communicated side returns “MEPG4 Capability Ack” to a terminal from which “MEPG4 Capability” is transmitted if the communicated side can accept (process) the objects but returns “MEPG4 Capability Reject” to the terminal if not. -
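- A minimal sketch of this capability exchange follows; the dictionary keys and the dispatch logic are assumptions, only the message names and the Ack/Reject behavior follow the text.

```python
def on_mpeg4_capability(local_limits: dict, offered: dict) -> str:
    """Return an Ack when the offered maxima fit the local limits, else a Reject."""
    acceptable = (offered["maxNumberOfPictures"] <= local_limits["pictures"]
                  and offered["maxNumberOfAudio"] <= local_limits["audio"]
                  and offered["maxNumberOfMux"] <= local_limits["mux"])
    return "MPEG4CapabilityAck" if acceptable else "MPEG4CapabilityReject"

limits = {"pictures": 4, "audio": 2, "mux": 1}
print(on_mpeg4_capability(limits, {"maxNumberOfPictures": 2,
                                   "maxNumberOfAudio": 1,
                                   "maxNumberOfMux": 1}))  # -> MPEG4CapabilityAck
```
-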
FIG. 13 (a) shows how to describe a protocol for using the above Group MUX to multiplex a plurality of logical channels onto one transmission channel (in this example, a LAN transmission channel) so that the logical channels share the transmission channel. In the example in FIG. 13 (a), the multiplexing means (Group MUX) is made to correspond to the transmission channel ("LAN Port Number") of the LAN (Local Area Network). "Group Mux ID" is an identifier for identifying the multiplexing means. When terminals share the multiplexing means by using "Create Group Mux" and communicate with each other, the communicated side returns "Create Group Mux Ack" to the terminal from which "Create Group Mux" was transmitted if it can accept (use) the multiplexing means, but returns "Create Group Mux Reject" if not. Separating means, which performs the operation reverse to that of the multiplexing means, can be realized by the same method. - In
FIG. 13 (b), a case of deleting already-generated multiplexing means is described. - In
FIG. 13 (c), the relation between the transmission channel of LAN and a plurality of logical channels is described. - The transmission channel of LAN is described in accordance with “LAN Port Number” and the logical channels are described in accordance with “Logical Port Number”.
- In the case of the examples in
FIG. 13 (c), it is possible to make the transmission channel of one LAN correspond to up to 15 logical channels. - In
FIG. 13 , when only one MUX can be used, the Group Mux ID is unnecessary. Moreover, to use a plurality of Muxes, a Group Mux ID is necessary for each command of H.223. Furthermore, a flag can be used for communicating the relation between the ports used by the multiplexing means and the separating means. Furthermore, a command can be used for selecting whether to multiplex control information or to transmit it through another logical transmission line. - In the explanation of FIGS. 13(a) to 13(c), the transmission channel is a LAN. However, a system using no internet protocol, such as H.223 or MPEG2, can also be used. A bookkeeping sketch of this mapping follows.
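- The following Python sketch illustrates the bookkeeping implied by FIGS. 13(a) to 13(c); the table layout is an assumption, while the names and the 15-channel limit come from the text.

```python
class GroupMuxTable:
    MAX_LOGICAL_CHANNELS = 15  # one LAN transmission channel carries up to 15 logical channels

    def __init__(self):
        self.muxes = {}        # Group Mux ID -> (LAN Port Number, [Logical Port Numbers])

    def create_group_mux(self, group_mux_id: int, lan_port: int) -> str:
        if group_mux_id in self.muxes:
            return "CreateGroupMuxReject"      # cannot accept (use) the multiplexing means
        self.muxes[group_mux_id] = (lan_port, [])
        return "CreateGroupMuxAck"

    def add_logical_channel(self, group_mux_id: int, logical_port: int) -> None:
        lan_port, channels = self.muxes[group_mux_id]
        if len(channels) >= self.MAX_LOGICAL_CHANNELS:
            raise ValueError("at most 15 logical channels per LAN transmission channel")
        channels.append(logical_port)

    def delete_group_mux(self, group_mux_id: int) -> None:   # FIG. 13(b)
        self.muxes.pop(group_mux_id, None)
```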
- In
FIG. 14 , “Open Logical Channel” shows the protocol description for defining the attribute of a transmission channel. In the case of the example inFIG. 14 , “MPEG4 Logical Channel Parameters” is extended and defined for the protocol of H.245. -
FIG. 15 shows that a program number (corresponding to a TV channel) and a program name are made to correspond to the transmission channel of LAN (“MPEG4 Logical Channel Parameters”). - Moreover, in
FIG. 15 , “Broadcast Channel Program” denotes a description method for transmitting the correspondence between LAN transmission channel and program number in accordance with the broadcasting type. The example inFIG. 15 makes it possible to transmit the correspondence between up to 1,023 transmission channels and program numbers. Because transmission is unilaterally performed from the transmitting side to the receiving side in the case of broadcasting, it is necessary to cyclically transmit these pieces of information by considering the loss during transmission. - In
FIG. 16 (a), the attribute of an object (e.g. picture or audio) to be transmitted as a program is described ("MPEG4 Object Classdefinition"). Object information ("Object Structure Element") is made to correspond to a program identifier ("Program ID"). Up to 1,023 objects can be made to correspond to program identifiers. As the object information, a LAN transmission channel ("LAN Port Number"), a flag showing whether scrambling is used ("Scramble Flag"), a field defining an offset value for changing the processing priority when a terminal is overloaded ("CGD Offset"), and an identifier ("Media Type") for identifying the type of the media (picture or audio) to be transmitted are described. - In the case of the example in
FIG. 16 (b), AL (in this case, defined as additional information necessary to decode pictures for one frame) is added to control decoding of ES (in this case, defined as a data string corresponding to pictures for one frame). As AL information, the following are defined. - (1) Random Access Flag (flag showing whether to be independently reproducible, true for an intra-frame encoded picture frame)
- (2) Presentation Time Stamp (time displayed by frame)
- (3) CGD Priority (Value of priority for deciding processing priority when terminal is overloaded)
- The example shows a case of transmitting the data string for one frame by using RTP (protocol for transmitting continuous media through internet, Realtime Transfer Protocol). “AL Reconfiguration” is a transmission expression for changing the maximum value that can be expressed by the above AL.
- The example in
FIG. 16 (b) makes it possible to express up to 2 bits as “Random Access Flag Max Bit”. For example, when there is no bit, Random Access Flag is not used. When there are two bits, the maximum value is equal to 3. - Moreover, the expression with a real number part and a mantissa part is allowed (e.g. 3ˆ6). When no data is set, an operation under the state decided by default is allowed.
- In
FIG. 17 , "Setup Request" shows a transmission expression for transmitting a setup time. "Setup Request" is transmitted before a program is transmitted; the transmission channel number ("Logical Channel Number"), the program ID to be executed ("execute Program Number"), the data ID to be used ("data Number"), and the ID of the command to be executed ("execute Command Number") are made to correspond to each other and transmitted to the receiving terminal. Moreover, as other expression methods, an execution-authorizing flag ("flag"), a counter ("counter") specifying after how many receptions of Setup Request execution should start, and a timer value ("timer") specifying after how much time execution should start can be used by making them correspond to transmission channel numbers.
-
FIG. 18 is an illustration for explaining a transmission expression for communicating whether to use the AL described forFIG. 16 (b) from a transmitting terminal to a receiving terminal (“Control AL definition”). - In
FIG. 18 , if "Random Access Flag Use" is true, Random Access Flag is used; if not, it is not used. The AL change notification can be transmitted as control information through a transmission channel separate from that of the data, or through the same transmission channel together with the data. - A decoder program is an example of a program to be executed. Moreover, a setup request can be used for both broadcasting and communication. Furthermore, which items of the control information are used as AL information is designated to the receiving terminal in accordance with the above request. Furthermore, it is possible to designate to a receiving terminal which item is used as the communication header, which item is used as AL information, and which item is used as control information.
-
FIG. 19 (a) shows the example of a transmission expression for changing the structure of header information (data control information, transmission control information, and control information) to be transmitted by using an information frame identifier (“header ID”) between transmitting and receiving terminals in accordance with the purpose. - In
FIG. 19 (a), "class ES header" switches, in accordance with an information frame identifier, between the structure of the data control information transmitted through the same transmission channel as the data and the structure with which transmission control information is transmitted between transmitting and receiving terminals.
- Moreover, by using a default identifier (“use Header Extension”), it is decided whether to use a default-type information frame. When “use Header Extension” is true, an item in an if-statement is used. It is assumed that these pieces of structural information are previously decided between transmitting and receiving terminals. Furthermore, it is possible to use a structure for using either of an information frame identifier and a default identifier.
- In
FIG. 19 (b), “AL configuration” shows an example for changing the structure of control information to be transmitted through a transmission channel different from that of data between transmitting and receiving terminals in accordance with the purpose. The usage of an information frame identifier and that of a default identifier are the same as the case ofFIG. 19 (a). - In the case of the present invention, methods for realizing a system for simultaneously synthesizing and displaying a plurality of pictures and a plurality of audio are specifically described from the following viewpoints.
- (1) A method for transmitting (communicating and broadcasting) a picture and an audio through a plurality of logical transmission lines and controlling them. Particularly, a method for respectively transmitting control information and data through an independent logical transmission line is described.
- (2) A method for dynamically changing header information (AL information) added to the data for a picture or audio to be transmitted.
- (3) A method for dynamically changing communication header information added for transmission.
- Specifically, for Items (2) and (3), a method for uniting and controlling the information overlapping between the AL information and the communication header, and a method for transmitting AL information as control information, are described.
- (4) A method for dynamically multiplexing and separating a plurality of logical transmission lines and transmitting information.
- A method for economizing the number of channels of transmission lines and a method for realizing efficient multiplexing are described.
- (5) A method for reading a program or data and transmitting pictures and audio considering a rise time. Moreover, a method for reducing an apparent setup time for various functions and purposes is described.
- (6) A method for transmitting a picture or audio for zapping.
- The present invention is not restricted to synthesis of two-dimensional pictures only. An expression method combining a two-dimensional picture with a three-dimensional picture can also be used, and a picture synthesizing method for synthesizing a plurality of pictures so that they are adjacent to each other, like a wide-visual-field (panoramic) picture, can be included.
- Moreover, the present invention is not intended only for such communication systems as bidirectional CATV and B-ISDN. For example, radio waves (e.g. the VHF or UHF band) or a broadcasting satellite can be used for transmission of pictures and audio from a center-side terminal to a home-side terminal, and an analog telephone line or N-ISDN can be used for transmission of information from the home-side terminal to the center-side terminal (it is not always necessary that pictures, audio, and data be multiplexed).
- Moreover, a communication system using radio, such as IrDA, PHS (Personal Handy Phone), or a radio LAN, can be used. Furthermore, the target terminal can be a portable terminal such as a portable information terminal, or a desktop terminal such as a set-top box or a personal computer. Video telephones, multipoint monitoring systems, multimedia database retrieval systems, and games are among the application fields. The present invention includes not only a receiving terminal but also servers and repeaters connected to a receiving terminal.
- Furthermore, in the above examples, a method for avoiding the overlap of the (communication) header of RTP with AL information and a method for extending the communication header of RTP or the AL information are described. However, the present invention does not always need to use RTP. For example, an original communication header or original AL information can be newly defined by using UDP or TCP. Though an internet profile sometimes uses RTP, a multifunctional header such as that of RTP is not defined for a raw profile. There are four types of concepts about AL information and the communication header, as described above.
- Thus, by dynamically deciding the information frame of the data control information, transmission control information, or control information used by the transmitting and receiving terminals (e.g. an information frame defining the sequence of the information to be added and the numbers of bits, such as firstly assigning a random access flag as 1-bit flag information and secondly assigning 16 bits as a sequence number), only the information frame corresponding to the situation can be changed in accordance with the purpose or the transmission line; a worked packing example is given below.
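- A worked example of such an information frame (a 1-bit random access flag followed by a 16-bit sequence number; padding the 17 bits into three whole bytes is an assumption for illustration):

```python
def pack_frame(random_access: bool, seq: int) -> bytes:
    value = (int(random_access) << 16) | (seq & 0xFFFF)  # 1-bit flag + 16-bit sequence number
    return value.to_bytes(3, "big")

def unpack_frame(buf: bytes):
    value = int.from_bytes(buf, "big")
    return bool(value >> 16), value & 0xFFFF

assert unpack_frame(pack_frame(True, 1234)) == (True, 1234)
```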
- The frame of each piece of information can be any one of the frames already shown in FIGS. 6(a) to 6(d) and in the case of RTP, the data control information (AL) can be the header information for each medium (e.g. in the case of H.263, the header information of the video or that of the payload intrinsic to H.263), transmission control information can be the header information of RTP, and control information can be the information for controlling RTP such as RTCP.
- Moreover, in the case of a publicly-known information frame previously set between transmitting and receiving terminals, by providing, for each of the data control information, transmission control information, and control information (information transmitted through a packet different from that of data to control terminal processing), a default identifier showing whether the information is to be transmitted, received, and processed with that frame, it is possible to know whether the information frames have been changed. By setting the default identifier and communicating the changed content (such as a change of the time stamp information from 32 to 16 bits) only when a change is performed in accordance with the method shown in
FIG. 16 , transmission of unnecessary configuration information is prevented when the information frames are not changed. - For example, the following two methods are considered for changing the information frames of data control information. In the first method, which describes the change in the data itself, the default identifier (written in a fixed region or position) of the information described by the information frame of the data control information is set, and then the contents of the information frame change are described.
- In the second method, which describes only the change of the information frames of the data in the control information (information frame control information), the default identifier provided for the control information is set, the contents of the information frame change of the data control information are described and communicated to the receiving terminal, the change of the information frames of the data control information is confirmed by ACK/Reject, and thereafter the data with the changed information frames is transmitted. The information frames of the transmission control information and of the control information can also be changed by the above two methods (
FIG. 19 ). - More specifically, though the header information of MPEG2 is fixed, a default identifier can be provided in the program map table (defined by PSI) that relates the video stream of an MPEG2-TS (transport stream) to its audio stream, and a configuration stream describing how the information frames of the video and audio streams are changed can be defined. When the default identifier is set, the configuration stream is interpreted first, and the headers of the video and audio streams are then interpreted in accordance with its content. The configuration stream can have the contents shown in
FIG. 19 . - The contents (transmitted-format information) of the present invention about a transmission method and/or a structure of the data to be transmitted correspond to, for example, an information frame in the case of the above embodiment.
- Moreover, the above embodiments mainly describe a case of transmitting the contents to be changed concerning the transmission method and/or the structure of the data to be transmitted. However, a structure transmitting only an identifier for those contents can also be used. In this case, as shown in
FIG. 44 , it is possible to use an audio-video transmitter provided with (1) transmitting means 5001 for transmitting, as the transmitted-format information, the contents concerning the transmission method and/or the structure of the data to be transmitted, or an identifier showing those contents, through the same transmission line as the data to be transmitted or through a different transmission line, and (2) storing means 5002 for storing a plurality of types of such contents and a plurality of types of identifiers for the contents, in which the identifiers are included in at least one of the data control information, the transmission control information, and the information for controlling terminal-side processing. Moreover, as shown in FIG. 45 , it is possible to use an audio-video receiver provided with receiving means 5101 for receiving the transmitted-format information sent from the audio-video transmitter and transmission-information interpreting means 5102 for interpreting the received transmitted-format information. Furthermore, the audio-video receiver can be provided with storing means 5103 for storing a plurality of types of contents concerning the transmission method and/or the structure of the data to be transmitted and a plurality of types of identifiers for those contents, so that the stored contents are used to interpret an identifier when the identifier is received as the transmitted-format information. - More specifically, by preparing a plurality of types of information frames previously determined between transmitting and receiving terminals, and by transmitting, together with data or as control information, information frame identifiers for a plurality of types of data control information, a plurality of types of transmission control information, and a plurality of types of control information (information-frame control information), it is possible to identify these types of information and to select the information frame of each type of information in accordance with the type of the medium to be transmitted or the capacity of the transmission line. The identifiers of the present invention correspond to the above information frame identifiers.
- It is possible to read and interpret these information identifiers and default identifiers even if information frames are changed at a receiving-side terminal by adding the identifiers to a predetermined fixed-length region or predetermined position of the information to be transmitted.
- Moreover, in addition to the structures described in the above embodiments, a structure can be used in which, when setting up a necessary program or data takes a long time, the heading picture of the program the user wants to watch is temporarily selected and shown to the user, by using a broadcasting channel that broadcasts only the heading pictures of the pictures broadcast through a plurality of channels and by switching the programs watched by the user.
- As described above, the present invention makes it possible to change the information frame to suit the situation, in accordance with the purpose or the transmission line, by dynamically determining the frame of the data control information, transmission control information, or control information used by the transmitting and receiving terminals.
- Moreover, by providing, for each of the data control information, transmission control information, and control information, a default identifier showing whether the information is transmitted, received, and processed with a publicly-known information frame previously set between transmitting and receiving terminals, it is possible to know whether the information frames have been changed; and by setting the default identifier and communicating the changed contents only when a change is performed, transmission of unnecessary configuration information is prevented when the information frames are not changed.
- Furthermore, by preparing a plurality of information frames previously determined between transmitting and receiving terminals and transmitting, together with data or as control information, information frame identifiers for identifying a plurality of types of data control information, a plurality of types of transmission control information, and a plurality of types of control information, it is possible to identify these types of information and to select the information frame of each type of information in accordance with the type of the medium to be transmitted or the capacity of the transmission line.
- It is possible to read and interpret these information identifiers and default identifiers even if information frames are changed at a receiving-side terminal by adding the identifiers to a predetermined fixed-length region or predetermined position of the information to be transmitted.
- Embodiments of the present invention are described below by referring to the accompanying drawings.
- In this case, any one of the above-described problems (B1) to (B3) is solved.
- A "picture (or video)" used for the present invention includes both static pictures and moving pictures. Moreover, the target picture can be a two-dimensional picture such as computer graphics (CG), or three-dimensional picture data constituted with a wire-frame model.
-
FIG. 25 is a schematic block diagram of a picture encoder and a picture decoder of an embodiment of the present invention. - A
transmission control section 4011 for transmitting or recording various pieces of encoded information is means for transmitting the information over coaxial cable, CATV, LAN, or a modem. A picture encoder 4101 has a picture encoding section 4012 for encoding picture information by a method such as H.263, MPEG1/2, JPEG, or Huffman encoding, and the transmission control section 4011. Moreover, a picture decoder 4102 has a reception control section 4013 for receiving various pieces of encoded information, a picture decoding section 4014 for decoding various pieces of received picture information, a picture synthesizing section 4015 for synthesizing one or more decoded pictures, and an output section 4016 constituted with a display and a printer for outputting pictures. -
FIG. 26 is a schematic block diagram of an audio encoder and an audio decoder of an embodiment of the present invention. - An audio encoder (sound encoder) 4201 is constituted with a
transmission control section 4021 for transmitting or recording various pieces of encoded information and an audio encoding section 4022 for encoding audio information by a method such as G.721 or MPEG1 audio. Moreover, an audio decoder (sound decoder) 4202 is constituted with a reception control section 4023 for receiving various pieces of encoded information, an audio decoding section 4024 for decoding the above pieces of audio information, an audio synthesizing section (sound synthesizing section) 4025 for synthesizing one or more decoded audio streams, and output means 4026 for outputting audio.
- The communication environments in
FIGS. 25 and 26 can be an environment in which a plurality of logical transmission lines can be used without considering multiplexing means, as on the internet, or an environment in which multiplexing means must be considered, as with an analog telephone or satellite broadcasting. Moreover, the terminal connection systems include a system for bilaterally transferring pictures or audio between terminals, like a video telephone or video conference, and a system for broadcasting pictures or audio of the broadcasting type, on satellite broadcasting, CATV, or the internet.
- Moreover, it is possible to constitute a transmitting or receiving terminal by optionally combining an optional number of picture encoders, picture decoders, audio encoders, and audio decoders.
-
FIG. 27 (a) is an illustration for explaining a priority adding section and a priority deciding section for controlling the priority of processing under overload. A priority adding section 31, which decides the priority of processing encoded information under overload in accordance with predetermined criteria for an encoding method such as H.263 or G.723 and relates the encoded information to the decided priority, is provided for the picture encoder 4101 and the audio encoder 4201.
- A method for adding a priority to a communication header and a method for embedding a priority in the header of a bit stream to be encoded of a video or audio under encoding are considered as priority adding methods for defining a priority under overload. The former method makes it possible to obtain the information concerned with a priority without decoding the information and the latter method makes it possible to independently handle a single bit stream without depending on a system.
- As shown in
FIG. 27 (b), when priority information is added to a communication header and one picture frame (e.g. an intra-frame encoded I-frame or an inter-frame encoded P- or B-frame) is divided into a plurality of transmission packets, the priority is added, in the case of a picture, only to the communication header of the packet transmitting the head of the picture frame, which is accessible as independent information (when priorities are equal within the same picture frame, it can be assumed that the priority does not change until the head of the next accessible picture frame appears). - Moreover, in the case of a decoder, a
priority deciding section 32 for deciding a processing method is provided for the picture decoder 4102 and the audio decoder 4202 in accordance with the priorities of the various pieces of encoded information received under overload.
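- The packetization rule described above for FIG. 27(b) can be sketched as follows (the header fields and the MTU value are assumptions for illustration):

```python
def packetize_frame(frame_data: bytes, priority: int, mtu: int = 1400):
    """Split one picture frame into packets; only the head packet carries the priority."""
    packets = []
    for offset in range(0, len(frame_data), mtu):
        header = {
            "frame_head": offset == 0,
            # The priority is added only to the communication header of the head
            # packet; it is assumed unchanged until the next accessible frame head.
            "priority": priority if offset == 0 else None,
        }
        packets.append((header, frame_data[offset:offset + mtu]))
    return packets
```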
- That is, a stream priority (Stream Priority; inter-time-series-data priority) for defining the priority for processing under overload in bit streams such as picture and audio and a frame priority (Frame Priority; intra-time-series-data priority) for defining the priority for processing under overload in frames such as picture frames in the same stream are defined (see
FIG. 28 (a)). - The former stream priority makes it possible to handle a plurality of videos or audios. The latter frame priority makes it possible to add a different priority to a picture scene change or the same intra-frame encoded picture frame (I-frame) in accordance with the intention of an editor.
- A value expressed by the stream priority represents a case of handling it as a relative value and a case of handling it as an absolute value (see FIGS. 28(b) and 28(c)).
- The stream and frame priorities are handled by a repeating terminal such as a router or gateway on a network and by transmitting and receiving terminals in the case of a terminal.
- Two types of methods for expressing an absolute value or relative value are considered. One of them is the method shown in
FIG. 28 (b) and the other of them is the method shown inFIG. 28 (c). - In
FIG. 28 (b), the absolute priority is a value, added by an editor or mechanically, showing the sequence in which picture streams (video streams) or audio streams are processed (or are to be processed) under overload (it is not a value considering the load fluctuation of an actual network or terminal). The relative priority is a value for changing the value of an absolute priority in accordance with the load of a terminal or network.
- Moreover, in
FIG. 28 (b), the granularity can also be made finer than the stream priority: the frame priority, which defines the priority of frame processing under overload, can be handled as a relative value or as an absolute value. For example, by describing an absolute frame priority in the encoded picture information and describing, in the communication header of the communication packet transmitting the encoded information, a relative frame priority corresponding to the absolute priority added to the picture frame in order to reflect the load fluctuation of a network or terminal, a priority corresponding to the load of a network or terminal can be added even at the frame level while the original priority is left intact.
- Furthermore, in
FIG. 28 (b), when data is reproduced at a receiving terminal while being transmitted through a network, without being recorded at the receiving terminal, the absolute and relative values need not be managed separately at the receiving terminal; therefore, the absolute priority values at the frame and stream levels can be computed at the transmitting side and only the absolute values transmitted. - In
FIG. 28 (c), the absolute priority is a value uniquely determined between frames, obtained from the relation between Stream Priority and Frame Priority. The relative priority is a value, added by an editor or mechanically, showing the sequence in which picture streams or audio streams are processed (or are to be processed) under overload. In the example in FIG. 28 (c), a frame priority (relative; a relative value) is added to each frame of a picture or audio stream, and a stream priority is added to each stream.
- An absolute frame priority mainly uses a network. This is because the expression using an absolute value does not require the necessity for deciding a priority for each frame through a repeater such as a router or gateway by considering Stream Priority and Frame Priority. By using the absolute frame priority, such processing as disuse of a frame by a repeater is simplified.
- Moreover, it can be expected to apply a relative frame priority mainly to an accumulation system for performing recording or editing. In the case of an editing operation, a plurality of picture and audio streams may be handled at the same time. In this case, the number of picture streams or the number of frames that can be reproduced may be limited depending on the load of a terminal or network.
- In the above case, it is unnecessary to recalculate every Frame Priority differently from the case in which an absolute value is expressed only by separating Stream Priority from Frame Priority, that is, only by changing Stream Priority of a stream which an editor wants to preferentially display or a user wants to see. Thus, it is necessary to use an absolute expression or a relative expression in accordance with the purpose.
- By describing whether to use a stream priority as a relative value or absolute value, it is possible to effectively express a priority for transmission and accumulation.
- In the case of the example in
FIG. 28 (b), whether the value expressed by the stream priority is an absolute value or a relative value is differentiated by a flag or identifier accompanying the stream priority. For the frame priority, a flag or identifier is unnecessary because the relative value is described in the communication header and the absolute value is described in the encoded frame. - In the case of the example in
FIG. 28 (c), a flag or identifier for identifying whether a frame priority is an absolute value or relative value is used. In the case of an absolute value, the frame priority is a priority calculated in accordance with a stream priority and a relative frame priority and therefore, the calculation is not performed by a repeater or terminal. Moreover, when the calculation formula is already known at a terminal, it is possible to inversely calculate a relative frame priority from an absolute frame priority and a stream priority. For example, it is also possible to obtain the absolute priority (Access Unit Priority) of a packet to be transmitted from the relational expression
“Access Unit Priority=stream priority−frame priority”.
In this case, it is also possible to express the frame priority as a degradation priority because it is obtained after being subtracted from the stream priority. - Moreover, it is also possible to control data processing by relating one stream priority or more to the priority for processing of the data passing through the logical channel of TCP/IP (port No. of LAN).
- Furthermore, it is expected that the necessity for retransmission can be reduced by assigning a stream priority or frame priority lower than that of a character or control information to a picture or audio. This is because no problem occurs in most cases even if a part of a picture or audio is lost.
-
FIG. 29 is an illustration for explaining a method for assigning a priority to multi-resolution video data. - When one stream is constituted with a plurality of substreams, it is possible to define a substream processing method by adding a stream priority to the substreams and describing a logical sum or logical product under accumulation or transmission.
- In the case of a wavelet, it is possible to decompose one picture frame into a plurality of different-resolution picture frames. Moreover, even in the case of a DCT-base encoding method, it is possible to decompose one picture frame into a plurality of different-resolution picture frames by dividing the picture frame into a high-frequency component and a low-frequency component and encoding them.
- In addition to stream priorities added to a plurality of picture streams constituted with a series of decomposed picture frames, the relation between picture streams is defined with AND (logical product) and OR (logical sum) in order to describe the relation. Specifically, when the stream priority of a stream A is 5 and that of a stream B is 10 (the smaller a numerical value gets, the higher a priority becomes), the relation between picture streams is defined that the stream B is disused in the case of disuse of stream data depending on the priority but the stream B is transmitted and processed without being disused even if the priority of the stream B is lower than the priority of a threshold in the case of AND by describing the relation between streams.
- Thereby, relevant streams can be processed without being disused. In the case of OR, it is defined that relevant streams can be disused. It is possible to perform disuse processing at a transmitting or receiving terminal or a repeating terminal as ever.
- Moreover, when the same video clip is encoded to 24 Kbps and 48 Kbps respectively as an operator for relational description, there is a case in which either 24 or 48 Kbps may be reproduced (exclusive logical sum EX-OR as relational description).
- When the priority of the former is set to 10 and that of the latter is set to 5, a user can reproduce the latter in accordance with a priority or select the latter without following the priority.
-
FIG. 30 is an illustration for explaining a communication payload constituting method. - When constituted with a plurality of substreams, disuse at a transmission packet level becomes easy by, for example, constituting transmission packets starting with, for example, one having the highest priority in accordance with a stream priority added to a substream. Moreover, disuse at a communication packet level becomes easy by fining grading and uniting the information for objects respectively having a high frame priority and thereby constituting a communication packet.
- By relating the sliced structure of a picture to a communication packet, recovery from a missing packet becomes easy. That is, by matching the sliced structure of a video with the packet structure, a re-sync marker for resynchronization becomes unnecessary. Unless the sliced structure coincides with the structure of the communication packet, a re-sync marker (a marker making the return position known) must be added so that resynchronization can be performed when information is damaged by a missing packet.
- In accordance with the above, applying strong error protection to a communication packet having a high priority is also considered. Moreover, the sliced structure of a picture refers to a unit of collected picture information such as a GOB or MB.
-
FIG. 31 is an illustration for explaining a method for relating data to a communication payload. By transmitting the method for relating a stream or object to a communication packet together with control information or data, an optional data format can be generated in accordance with the communication state or purpose. For example, in the case of RTP (Real-time Transport Protocol), the payload of RTP is defined for each encoding to be handled. The format of the existing RTP is fixed. In the case of H.263, as shown in FIG. 31 , three data formats from Mode A to Mode C are defined. For H.263, a communication payload intended for a multi-resolution picture format is not defined. - In the case of the example in
FIG. 31 , Layer No. and the above relational description (AND, OR) are added to the data format of Mode A and defined. -
FIG. 32 is an illustration for explaining the relation between frame priority, stream priority, and communication packet priority. - Moreover,
FIG. 32 shows an example of using a priority added to a communication packet on a transmission line as a communication packet priority and relating a stream priority and a frame priority to the communication packet priority. - Generally, in the case of communication using IP, it is necessary to transmit data by relating a frame priority or stream priority added to picture or audio data to the priority of a low-order IP packet. Because the picture or audio data is divided into IP packets and transmitted, it is necessary to relate priorities to each other. In the case of the example in
FIG. 32 , because the stream priority takes values from 0 to 3 and the frame priority takes values from 0 to 5, high-order data can take priorities from 0 to 15. - In the case of IPv6, priorities (4 bits) from 0 to 7 are reserved for congestion-controlled traffic. Priorities from 8 to 15 are reserved for real-time communication traffic or not-congestion-controlled traffic.
Priority 15 is the highest priority and priority 8 is the lowest priority. This represents the priority at the packet level of IP.
- Transmitting means is not restricted to only IP. It is possible to use a transmission packet having a flag showing whether it can be disused like TS (transport stream) of ATM or MPEG2.
- The frame priority and stream priority having been described so far can be applied to a transmitting medium or data-recording medium. It is possible to use a floppy disk or optical disk as a data-recording medium.
- Moreover, it is possible to use not only the floppy disk or optical disk but also a medium such as an IC card or ROM cassette as long as a program can be recorded in the medium. Furthermore, it is possible to use an audio-video repeater such as a router or gateway for relaying data.
- Furthermore, preferential retransmission is realized by deciding time-series data to be retransmitted in accordance with the information of Stream Priority (inter-time-series-data priority) or Frame Priority (intra-time-series-data priority). For example, when decoding is performed at a receiving terminal in accordance with priority information, it is possible to prevent a stream or frame that is not an object for processing from being retransmitted.
- Furthermore, separately from the priority currently used for processing, the stream or frame whose priority warrants retransmission can be decided in accordance with the relation between the retransmission frequency and the successful transmission frequency.
- Furthermore, in the case of a transmitting-side terminal, preferential transmission is realized by deciding time-series data to be transmitted in accordance with the information of Stream Priority (inter-time-series-data priority) or Frame Priority (intra-time-series-data priority). For example, by deciding the priority of a stream or frame to be transmitted in accordance with an average transfer rate or retransmission frequency, it is possible to transmit an adaptive picture or audio even when a network is overloaded.
- The above embodiment is not restricted to two-dimensional-picture synthesis. An expression method combining a two-dimensional picture with a three-dimensional picture can also be used, and a picture-synthesizing method for synthesizing a plurality of pictures so as to be adjacent to each other, like a wide-visual-field (panoramic) picture, can be included. Moreover, the communication systems targeted by the present invention are not restricted to bidirectional CATV or B-ISDN. For example, transmission of pictures and audio from the center-side terminal to the home-side terminal can use radio waves (e.g. the VHF or UHF band) or satellite broadcasting, and information origination from the home-side terminal to the center-side terminal can use an analog telephone line or N-ISDN (it is not always necessary that pictures, audio, or data be multiplexed). Moreover, a communication system using radio, such as IrDA, PHS (Personal Handy Phone), or a radio LAN, can be used.
- Furthermore, the target terminal can be a portable terminal such as a portable information terminal, or a desktop terminal such as a set-top box or a personal computer.
- As described above, the present invention makes it easy to handle a plurality of video streams and a plurality of audio streams and, above all, to synchronize and reproduce important scene cuts together with audio in a manner reflecting the intention of an editor.
- An embodiment of the present invention is described below by referring to the accompanying drawings.
- The embodiment described below solves any one of the above problems (C1) to (C3).
-
FIG. 33 shows the structure of the transmitter of the first embodiment. Symbol 2101 denotes a picture-input terminal; the size of one input picture is 144 pixels by 176 pixels. Symbol 2102 denotes a video encoder that is constituted with the four components described below. -
Symbol 1021 denotes a switching unit for dividing an input picture into macroblocks (square regions of 16 pixels by 16 pixels) and deciding whether to intra-encode or inter-encode each block; 1022 denotes movement compensating means for generating a movement-compensated picture in accordance with the locally decoded picture, which can be calculated from the previous encoding result, calculating the difference between the movement-compensated picture and the input picture, and outputting the result in macroblocks. Movement compensation includes halfpixel prediction, which has a long processing time, and fullpixel prediction, which has a short processing time. Symbol 1023 denotes orthogonal transforming means for applying DCT transformation to each macroblock, and 1024 denotes variable-length encoding means for applying entropy encoding to the DCT result and the other encoded information. -
Symbol 2103 denotes counting means for counting the execution frequencies of the four components of the video encoder 2102 and outputting the counting result to the transforming means for every input picture. In this case, the execution frequencies of halfpixel prediction and of fullpixel prediction are counted from the movement compensating means 1022. -
Symbol 2104 denotes transforming means for outputting the data string shown in FIG. 34 . Symbol 2105 denotes transmitting means for multiplexing the variable-length code sent from the video encoder 2102 and the data string sent from the transforming means 2104 into one data string and outputting it to a data output terminal 2109. - According to the above structure, it is possible to transmit the execution frequencies of the indispensable processings (
switching unit 1021, orthogonal transforming means 1023, and variable-length encoding means 1024) and of the dispensable processing (movement compensating means 1022) to a receiver. -
FIG. 40 is a flowchart of the transmitting method of the second embodiment. - Because operations of this embodiment are similar to those of the first embodiment, corresponding elements are added. A picture is input in step 801 (picture input terminal 2101) and the picture is divided into macroblocks in
step 802. Hereafter, processings fromstep 803 to step 806 are repeated until the processing corresponding to every macroblock is completed in accordance with the conditional branch instep 807. Moreover, when each processing is executed so that frequencies of the processings fromstep 803 to step 806 can be recorded in specific variables, a corresponding variable is incremented by 1. - First, it is decided whether to intra-encode or inter-encode a macroblock to be processed in step 803 (switching unit 1021). When inter-encoding the macroblock, movement compensation is performed in step 804 (movement compensating means 1022). Thereafter, DCT transformation and variable-length encoding are performed in
steps 805 and 806 (orthogonal transforming means 1023 and variable-length encoding means 1024). When the processing for every macroblock is completed (Yes in step 807), the variables showing the execution frequency of each processing are read in step 808, the data string shown in FIG. 2 is generated, and the data string and the code are multiplexed and output. The processings from step 801 to step 808 are repeatedly executed as long as pictures are input.
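- A schematic version of this loop (an assumed structure, not the patented implementation) shows how the frequency counters accompany the code:

```python
from collections import Counter

def encode_picture(macroblocks, inter_decider, motion_comp, dct, vlc):
    counts = Counter()
    code = []
    for mb in macroblocks:                  # steps 803-807
        counts["switching"] += 1
        if inter_decider(mb):               # inter-encoded macroblock
            mb = motion_comp(mb)            # step 804 (dispensable processing)
            counts["motion_compensation"] += 1
        coeffs = dct(mb)                    # step 805 (indispensable)
        counts["dct"] += 1
        code.append(vlc(coeffs))            # step 806 (indispensable)
        counts["vlc"] += 1
    return code, dict(counts)               # step 808: code plus execution frequencies

_, freq = encode_picture([b"mb0", b"mb1", b"mb2"],
                         inter_decider=lambda mb: mb != b"mb0",
                         motion_comp=lambda mb: mb,
                         dct=lambda mb: mb, vlc=lambda c: c)
print(freq)  # -> {'switching': 3, 'dct': 3, 'vlc': 3, 'motion_compensation': 2}
```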
-
FIG. 35 shows the structure of the receiver of the third embodiment. - In
FIG. 35 , symbol 307 denotes an input terminal for inputting the output of the transmitter of the first embodiment, and 301 denotes receiving means for fetching a variable-length code and a data string from that output through inverse multiplexing and outputting them. In this case, it is assumed that the time required to receive the data for one picture is measured and also output. -
Symbol 303 denotes a video decoder that uses a variable-length code as input and is constituted with five components. Symbol 3031 denotes variable-length decoding means for fetching a DCT coefficient and other encoded information from the variable-length code, 3032 denotes inverse orthogonal transforming means for applying inverse DCT transformation to the DCT coefficient, and 3033 denotes a switching unit for switching the output to the upper or lower side for every macroblock in accordance with the encoded information showing whether the macroblock is intra-encoded or inter-encoded. Symbol 3034 denotes movement compensating means for generating a movement-compensated picture by using the previously decoded picture and the encoded movement information, adding it to the output of the inverse orthogonal transforming means 3032, and outputting the result. Symbol 3035 denotes execution-time measuring means for measuring and outputting the execution time from the input of a variable-length code to the decoder 303 until decoding and output of the picture are completed. -
Symbol 302 denotes estimating means for receiving the execution frequency of each element (variable-length decoding means 3031, inverse orthogonal transforming means 3032, switching unit 3033, or movement compensating means 3034) from the data string sent from the receiving means 301, and the execution time from the execution-time measuring means 3035, and estimating the execution time of each element.
Symbol 304 denotes frequency reducing means for changing the execution frequency of each element so as to reduce the execution frequency of halfpixel prediction and increase the execution frequency of fullpixel prediction by a corresponding amount. The method for calculating that amount is as follows. - First, the execution frequency and the estimated execution time of each element are received from the estimating means 302 and an overall execution time is estimated. When this estimated execution time exceeds the time required to receive the data, as reported by the receiving means 301, the execution frequency of fullpixel prediction is increased and that of halfpixel prediction is decreased until the estimated time no longer exceeds the receive time.
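A minimal sketch of this reduction, assuming per-element times a_full and a_half from the estimating means and a fixed_time term that covers the remaining elements, might read:

```python
# Shift executions from halfpixel to fullpixel prediction, one at a time,
# until the estimated decoding time fits within the receive-time budget.
# All names are illustrative, not taken from the patent.
def reduce_frequencies(a_full, a_half, n_full, n_half, fixed_time, budget):
    estimated = fixed_time + a_full * n_full + a_half * n_half
    while estimated > budget and n_half > 0:
        n_half -= 1                      # one fewer slow halfpixel prediction
        n_full += 1                      # one more fast fullpixel prediction
        estimated = fixed_time + a_full * n_full + a_half * n_half
    return n_full, n_half                # the actual execution frequencies
```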
Symbol 306 denotes an output terminal for a decoded picture. - Moreover, there is a case in which the
movement compensating means 3034 is designated by the encoded information to perform halfpixel prediction. In this case, once the predetermined execution frequency of halfpixel prediction has been exceeded, the halfpixel movement is rounded to a fullpixel movement and fullpixel prediction is executed instead. - According to the above-described first and third embodiments, the execution time of decoding is estimated from the estimated execution time of each element and, when the decoding execution time may exceed the time (designated time) required to receive the data for one sheet, halfpixel prediction, which has a long execution time, is replaced with fullpixel prediction. Thereby, it is possible to prevent the execution time from exceeding the designated time and to solve the problem (C1).
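The rounding mentioned above can be sketched as follows, assuming movement vectors are held in half-pel units; the tie-breaking convention is an assumption.

```python
# Round a halfpixel movement vector (in half-pel units) to the nearest
# fullpixel movement so that fullpixel prediction can be executed instead.
def round_to_fullpixel(mv_half_pel):
    dx, dy = mv_half_pel
    # int(v + 0.5) rounds half-pel ties upward; a real codec would fix
    # its own convention here.
    return (int(dx / 2 + 0.5) * 2, int(dy / 2 + 0.5) * 2)
```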
-
- Moreover, cases are also possible in which the indispensable and dispensable parts of the processing are regarded as two groups, and in which part of the video is regarded as waveform data.
-
- Furthermore, by using no high-frequency components in the IDCT calculation, a receiver can reduce the processing time of the IDCT calculation. That is, by regarding the calculation of low-frequency components as indispensable processing and the calculation of high-frequency components as dispensable processing, it is possible to reduce the calculation frequency of the high-frequency components in the IDCT calculation.
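For the usual 8×8 blocks, the dispensable high-frequency part can simply be skipped, as in this sketch; the block size and the keep parameter are assumptions.

```python
import numpy as np

# Orthonormal 8x8 DCT basis: C[k, n] = alpha(k) * cos((2n+1) * k * pi / 16).
C = np.array([[np.sqrt((1.0 if k == 0 else 2.0) / 8) *
               np.cos((2 * n + 1) * k * np.pi / 16)
               for n in range(8)] for k in range(8)])

def idct_low_freq(coeffs, keep=4):
    # Indispensable processing: the top-left keep x keep (low-frequency)
    # coefficients; dispensable processing: the rest, set to zero here.
    low = np.zeros_like(coeffs)
    low[:keep, :keep] = coeffs[:keep, :keep]
    return C.T @ low @ C          # separable 2-D inverse DCT
```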
-
FIG. 41 is a flowchart of the receiving method of the fourth embodiment. - Because the operations of this embodiment are similar to those of the third embodiment, the corresponding elements are indicated in parentheses. In
step 901, the variable a_i expressing the execution time of each element is initialized (estimating means 302). In step 902, multiplexed data is input and the time required to receive the data is measured (receiving means 301). In step 903, the multiplexed data is divided into a variable-length code and a data string and output (receiving means 301). In step 904, each execution frequency is fetched from the data string (FIG. 2) and set to x_i. In step 905, an actual execution frequency is calculated from the execution time a_i of each element and each execution frequency x_i (frequency reducing means 304). In step 906, measurement of the execution time of decoding is started. In step 907, the decoding routine described later is run. Thereafter, in step 908, measurement of the decoding execution time is ended (video decoder 303 and execution-time measuring means 3035). In step 909, the execution time of each element is estimated from the decoding execution time of step 908 and the actual execution frequency of each element from step 905, and a_i is updated (estimating means 302). The above processing is executed for each input of multiplexed data.
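Gathered into one pass, the steps of FIG. 41 might be sketched as follows; receive, parse_frequencies, decide_allowed_frequencies, decode, and update_estimates are assumed stand-ins for the means described above (decode is sketched after the next paragraph), not names taken from the patent.

```python
import time

# A minimal sketch of one pass of the receiving method of FIG. 41
# (steps 902 to 909); every helper is a placeholder for the
# corresponding means described in the text.
def receive_one_picture(state):
    data, receive_time = receive()                          # steps 902-903
    x = parse_frequencies(data)                             # step 904
    allowed = decide_allowed_frequencies(state.a, x,
                                         receive_time)      # step 905
    start = time.perf_counter()                             # step 906
    picture = decode(data, allowed)                         # step 907
    decode_time = time.perf_counter() - start               # step 908
    state.a = update_estimates(state.a, x, decode_time)     # step 909
    return picture
```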
- Moreover, in the decoding routine of step 907, variable-length decoding is performed in step 910 (variable-length decoding means 3031), inverse orthogonal transformation is performed in step 911 (inverse orthogonal transforming means 3032), and processing branches in step 912 in accordance with the intra-/inter-processing information fetched in step 910 (switching unit 3033). In the case of inter-processing, movement compensation is performed in step 913 (movement compensating means 3034). In step 913, the execution frequency of halfpixel prediction is also counted; when the counted execution frequency exceeds the actual execution frequency obtained in step 905, halfpixel prediction is replaced with fullpixel prediction for execution. After the above processing has been applied to every macroblock (step 914), the routine is ended. - According to the above-described second and fourth embodiments, the execution time of decoding is estimated from the estimated execution time of each element and, when the execution time may exceed the time required to receive the data for one sheet (designated time), halfpixel prediction, which has a long execution time, is replaced with fullpixel prediction. Thereby, it is possible to prevent the execution time from exceeding the designated time and to solve the problem (C1).
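A sketch of the decoding routine of step 907, with the halfpixel count capped at the actual execution frequency from step 905, might read as follows; macroblocks, variable_length_decode, inverse_dct, is_half_pel, motion_compensate, and output_block are assumed stand-ins for the means of the decoder 303, and round_to_fullpixel is the rounding sketched earlier.

```python
# A minimal sketch of the decoding routine (steps 910 to 914): count the
# halfpixel predictions and, beyond the allowed frequency, fall back to
# fullpixel prediction.
def decode(data, max_half_pel):
    half_pel_used = 0
    for mb in macroblocks(data):
        info = variable_length_decode(mb)          # step 910
        block = inverse_dct(info.coefficients)     # step 911
        if info.inter_coded:                       # step 912: inter branch
            mv = info.movement_vector              # step 913
            if is_half_pel(mv):
                if half_pel_used < max_half_pel:
                    half_pel_used += 1
                else:
                    mv = round_to_fullpixel(mv)    # replaced with fullpixel
            block = block + motion_compensate(mv)
        output_block(block)
    # step 914: the routine ends once every macroblock has been processed
```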
-
FIG. 36 shows the structure of the receiver of the fifth embodiment. - Most components of this embodiment are the same as those described for the second embodiment. However, two added components and one corrected component are described below.
-
Symbol 402 denotes estimating means obtained by modifying the estimating means 302 described for the second embodiment so that the execution time of each element obtained as the result of estimation is output separately from the output to the frequency reducing means 304. Symbol 408 denotes transmitting means for generating the data string shown in FIG. 37 from the execution time of each element and outputting it. When an execution time is expressed with 16 bits using microseconds as the unit, values up to approximately 65 msec can be expressed, which is sufficient. Symbol 409 denotes an output terminal for passing the data string to the transmitting means. - Moreover, a receiving method corresponding to the fifth embodiment can be obtained simply by adding a step for generating the data string shown in
FIG. 37 immediately after step 808 in FIG. 40. -
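By way of illustration, packing the execution times as 16-bit microsecond fields could be sketched as follows; the start-code value is a placeholder, not taken from the patent.

```python
import struct

# Pack per-element execution times (in seconds) into 16-bit microsecond
# fields: 16 bits with microseconds as the unit covers up to about 65 msec.
START_CODE = b"\x00\x00\x01\xb9"      # placeholder start code

def pack_execution_times(times_sec):
    body = b"".join(struct.pack(">H", min(int(t * 1_000_000), 0xFFFF))
                    for t in times_sec)
    return START_CODE + body

data = pack_execution_times([0.00016, 0.00056])  # e.g. fullpixel, halfpixel
```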
FIG. 38 shows the structure of the transmitter of the sixth embodiment. - Most components of this embodiment are the same as those described for the first embodiment. However, two added components are described below.
Symbol 606 denotes an input terminal for receiving the data string output by the receiver of the third embodiment, and 607 denotes receiving means for receiving that data string and outputting the execution time of each element. Symbol 608 denotes deciding means for obtaining the execution frequency of each element; its procedure is as follows. First, every macroblock in a picture is processed by the switching unit 1021 to obtain the execution frequency of the switching unit 1021 at that point of time. Moreover, the execution frequencies of the movement compensating means 1022, the orthogonal transforming means 1023, and the variable-length encoding means 1024 can be uniquely decided from the processing result up to that point. Therefore, the execution time required for decoding at the receiver side is estimated by using these execution frequencies and the execution times sent from the receiving means 607. The estimated decoding time is obtained as the total sum, over all elements, of the product of the execution time and the execution frequency of each element. Moreover, when the estimated decoding time is equal to or more than the time required to transmit the number of codes to be generated for this picture as designated by a rate controller or the like (e.g. 250 msec for 16 Kbits at a transmission rate of 64 Kbits/sec), the execution frequency of fullpixel prediction is increased and that of halfpixel prediction is decreased so that the estimated decoding time does not exceed the time required for transmission. (Because fullpixel prediction has a shorter execution time than halfpixel prediction, replacing halfpixel prediction with fullpixel prediction reduces the estimated decoding time.) - Moreover, the
video encoder 2102 performs its processings in accordance with the execution frequencies designated by the deciding means 608. For example, after the movement compensating means 1022 has executed halfpixel prediction up to the predetermined execution frequency of halfpixel prediction, it executes only fullpixel prediction. - Furthermore, it is possible to improve the selecting method so that halfpixel prediction is dispersed uniformly in a picture, as in the sketch below. For example, it is possible to first obtain every macroblock requiring halfpixel prediction, calculate the quotient (3) obtained by dividing the number of those macroblocks (e.g. 12) by the execution frequency of halfpixel prediction (e.g. 4), and apply halfpixel prediction only to the macroblocks whose sequence number from the beginning of the macroblocks requiring halfpixel prediction is divisible by that quotient without a remainder (0, 3, 6, or 9).
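A sketch of this dispersion rule, using the 12-macroblock, frequency-4 example from the text (the function name is an assumption):

```python
# Apply halfpixel prediction only to every interval-th macroblock among
# those that require it, so the allowed executions spread over the picture.
def select_half_pel(requiring, allowed):
    interval = max(1, len(requiring) // allowed)   # e.g. 12 // 4 = 3
    return [mb for seq, mb in enumerate(requiring)
            if seq % interval == 0][:allowed]

print(select_half_pel(list(range(12)), 4))         # -> [0, 3, 6, 9]
```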
-
- According to the above-described fifth and sixth embodiments, the estimated execution time of each element is transmitted to the transmitting side, the execution time of decoding is estimated at the transmitting side, and halfpixel prediction, which has a long execution time, is replaced with fullpixel prediction so that the estimated decoding time does not exceed the time (designated time) probably required to receive the data for one sheet. Thereby, the information for halfpixel prediction among the sent encoded information is not wasted, and it is possible to prevent the execution time from exceeding the designated time and to solve the problem (C2).
-
- Moreover, as dispensable processing, it is possible to divide inter-macroblock encoding into three kinds of movement compensation: normal movement compensation, 8×8 movement compensation, and overlap movement compensation.
-
FIG. 42 is a flowchart of the transmitting method of the seventh embodiment. - Because the operations of this embodiment are similar to those of the sixth embodiment, the corresponding elements are indicated in parentheses. In
step 1001, the initial value of the execution time of each process is set. A picture is input (input terminal 2101) in step 801 and divided into macroblocks in step 802. In step 1002, it is decided whether to intra-encode or inter-encode every macroblock (switching unit 1021). As a result, the execution frequency of each process from step 1005 to step 806 is known. Therefore, in step 1003, an actual execution frequency is calculated from those execution frequencies and the execution time of each process (deciding means 608). - Hereafter, the processes from
step 1005 to step 806 are repeated until the processing for every macroblock is completed, in accordance with the conditional branch in step 807. - Moreover, when each process is executed, a corresponding variable is incremented by 1 so that the execution frequencies of the processes from
step 1005 to step 806 can be recorded in specific variables. First, in step 1005, branching is performed in accordance with the decision result of step 1002 (switching unit 1021). In the case of inter-encoding, movement compensation is performed in step 804 (movement compensating means 1022). Here the frequency of halfpixel prediction is counted; when the counted frequency exceeds the actual frequency obtained in step 1003, fullpixel prediction is executed instead of halfpixel prediction. Thereafter, DCT transformation and variable-length encoding are performed in steps 805 and 806 (orthogonal transforming means 1023 and variable-length encoding means 1024). When the processing for every macroblock is completed (Yes in step 807), the variable showing the execution frequency of each process is read in step 808, the data string shown in FIG. 2 is generated, and the data string and a code are multiplexed and output. In step 1004, the data string is received and the execution time of each process is fetched from it and set. - Processes from
step 801 to step 1004 are executed repeatedly as long as pictures are input. - According to the seventh embodiment and the paragraph beginning with the final "Moreover" in the description of the fifth embodiment, the estimated execution time of each element is transmitted to the transmitting side, the execution time of decoding is estimated at the transmitting side, and halfpixel prediction, which has a long execution time, is replaced with fullpixel prediction so that the estimated decoding time does not exceed the time (designated time) probably required to receive the data for one sheet. Thereby, the information for halfpixel prediction among the sent encoded information is not wasted, and it is possible to prevent the execution time from exceeding the designated time and to solve the problem (C2).
-
FIG. 39 shows the structure of the transmitting apparatus of the eighth embodiment of the present invention. - Most components of this embodiment are the same as those described for the first embodiment. Therefore, four added components are described below.
-
Symbol 7010 denotes execution-time measuring means for measuring the execution time from when a picture is input to the encoder 2102 until encoding and output of the picture are completed, and for outputting the measured execution time. Symbol 706 denotes estimating means for receiving the execution frequencies of the elements (switching unit 1021, movement compensating means 1022, orthogonal transforming means 1023, and variable-length encoding means 1024) from the data string of the counting means 2103 and the execution time from the execution-time measuring means 7010, and estimating the execution time of each element. An estimating method same as that described for the estimating means 302 of the second embodiment can be used. Symbol 707 denotes an input terminal for inputting a frame rate value specified by a user, and 708 denotes deciding means for obtaining the execution frequency of each element. The procedure is as follows. - First, every macroblock in a picture is processed by the
switching unit 1021 to obtain the execution frequency of the switching unit 1021 at that point of time. Thereafter, the execution frequencies of the movement compensating means 1022, the orthogonal transforming means 1023, and the variable-length encoding means 1024 can be uniquely decided from the processing result up to that point. Then, the total sum, over all elements, of the product of the execution frequency and the estimated execution time of each element sent from the estimating means 706 is obtained to calculate an estimated encoding time. When the estimated encoding time is equal to or longer than the time usable for encoding one picture, obtained as the reciprocal of the frame rate input at terminal 707, the execution frequency of fullpixel prediction is increased and that of halfpixel prediction is decreased. - By repeating the above change of execution frequencies and recalculation of the estimated encoding time until the estimated encoding time becomes equal to or shorter than the usable time, each execution frequency is decided, as in the sketch below.
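This decision is the same trade as on the receiving side, with the budget now taken from the frame rate; a minimal sketch under the same assumptions as before:

```python
# Decide execution frequencies so the estimated encoding time fits within
# one picture interval; other_time covers the remaining elements, and all
# names are illustrative.
def decide_frequencies(a_full, a_half, n_full, n_half, other_time,
                       frame_rate):
    budget = 1.0 / frame_rate        # e.g. 100 msec at 10 pictures/sec
    estimated = other_time + a_full * n_full + a_half * n_half
    while estimated > budget and n_half > 0:
        n_half -= 1
        n_full += 1
        estimated = other_time + a_full * n_full + a_half * n_half
    return n_full, n_half
```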
- Moreover, the
video encoder 2102 performs its processings in accordance with the execution frequencies designated by the deciding means 708. For example, after the movement compensating means 1022 has executed halfpixel prediction up to the predetermined execution frequency of halfpixel prediction, it executes only fullpixel prediction. - Furthermore, it is also possible to improve the selecting method so that halfpixel prediction is dispersed uniformly in a picture, in the same manner as the sketch given for the sixth embodiment. For example, it is possible to obtain every macroblock requiring halfpixel prediction, calculate the quotient (3) obtained by dividing the number of macroblocks requiring halfpixel prediction (e.g. 12) by the execution frequency of halfpixel prediction (e.g. 4), and apply halfpixel prediction only to the macroblocks whose sequence number from the beginning of the macroblocks requiring halfpixel prediction is divisible by that quotient without a remainder (0, 3, 6, or 9).
- The above eighth embodiment makes it possible to solve the problem (C3) by estimating the execution time of each processing, estimating an execution time required for encoding in accordance with the estimated execution time, and deciding an execution frequency so that the estimated encoding time becomes equal to or shorter than the time usable for encoding of a picture determined in accordance with a frame rate.
-
- Moreover, for the method by which the
movement compensating means 1022 detects a movement vector, there is a full-search movement-vector detecting method, which detects the vector minimizing the SAD (the sum of absolute pixel differences) among the vectors in a range of 15 pixels horizontally and vertically. Furthermore, there is a three-step movement-vector detecting method (described in an annex of H.261). The three-step method first selects nine points uniformly arranged in the above retrieval range and picks the point with the minimum SAD; it then selects nine points again in a narrower range around that point and picks the point with the minimum SAD once more, as in the sketch below. - It is also possible to properly decrease the execution frequency of the full-search movement-vector detecting method and properly increase that of the three-step method by regarding the two methods as dispensable processing, estimating the execution time of each of them, and estimating the execution time required for encoding from those estimates so that it becomes equal to or shorter than the time designated by a user.
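A sketch of the three-step method, with a SAD helper that a full search would also use; the classical step sizes 4, 2, 1 (covering a ±7 range) are assumed here, and a wider range such as ±15 would simply start from a larger step.

```python
import numpy as np

def sad(cur_block, ref, cx, cy, dx, dy, bs=16):
    # Sum of absolute pixel differences against the displaced reference block.
    h, w = ref.shape
    x, y = cx + dx, cy + dy
    if x < 0 or y < 0 or x + bs > w or y + bs > h:
        return float("inf")                # candidate leaves the picture
    diff = (cur_block.astype(np.int32)
            - ref[y:y + bs, x:x + bs].astype(np.int32))
    return int(np.abs(diff).sum())

def three_step_search(cur_block, ref, cx, cy, bs=16):
    best = (0, 0)
    step = 4                               # then 2, then 1: three steps
    while step >= 1:
        candidates = [(best[0] + i * step, best[1] + j * step)
                      for i in (-1, 0, 1) for j in (-1, 0, 1)]
        best = min(candidates,
                   key=lambda v: sad(cur_block, ref, cx, cy, v[0], v[1], bs))
        step //= 2
    return best                            # movement vector with minimal SAD
```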
-
- Moreover, together with the three-step movement-vector detecting method, it is possible to use a movement-vector detecting method that uses a fixed retrieval frequency to simplify the processing further, or one that returns only the movement vector (0, 0) as its result.
-
FIG. 43 is a flowchart of the transmitting method of the ninth embodiment. - Because the operations of this embodiment are similar to those of the eighth embodiment, the corresponding elements are indicated in parentheses; refer to their descriptions for the detailed operation of each step.
- Moreover, because this embodiment is almost the same as the second embodiment, only different points are explained below.
- In
step 1101, the initial value of the execution time of each process is set to the variable a_i. Moreover, in step 1102, a frame rate is input (input terminal 707). In step 1103, an actual execution frequency is decided in accordance with the frame rate and the execution time a_i of each process from step 1102, together with the execution frequency of each process obtained from the intra-/inter-processing decision result of step 1002 (deciding means 708). In step 1104, the execution time of each process is estimated in accordance with the execution time obtained in step 1106 and the actual execution frequency of each process, and the variable a_i is updated (estimating means 706). - According to the above-described ninth embodiment, the execution time of each process is estimated, and the execution time required for encoding is estimated in advance from it. Thus, it is possible to solve the problem (C3) by deciding an actual execution frequency so that the estimated encoding time becomes equal to or shorter than the time usable for encoding one picture, determined by the frame rate.
- In the case of the second embodiment, it is also possible to add a two-byte region immediately after the start code shown in
FIG. 2 when the data string is generated in step 808 and to write the code length in binary notation into that region. - Moreover, in the case of the fourth embodiment, it is also possible to extract the code length from the two-byte region when multiplexed data is input in
step 902 and use the code transmission time obtained from the code length and the code transmission rate for the execution frequency calculation in step 905 (the execution frequency of halfpixel prediction is decreased so as not to exceed the code transmission time). - Furthermore, in the case of the first embodiment, it is also possible to add a two-byte region immediately after the start code shown in
FIG. 2 when a data string is generated in step 2104 and to write the code length in binary notation into that region. - Furthermore, in the case of the third embodiment, it is also possible to extract the code length from the two-byte region when multiplexed data is input in
step 301 and use a code transmission time obtained from the code length and the code transmission rate for the execution frequency calculation in step 304 (the execution frequency of halfpixel prediction is decreased so as not to exceed the code transmission time). - Furthermore, in the case of the fourth embodiment, an actual execution frequency of halfpixel prediction is recorded immediately after
step 909, and its maximum value is calculated. When the maximum value is equal to or less than a sufficiently small value (e.g. 2 or 3), it is also possible to generate a data string showing that halfpixel prediction is not used (a data string comprising a specific bit pattern) and transmit it. Furthermore, in the case of the second embodiment, whether that data string has been received is confirmed immediately after step 808, and when the data string showing that halfpixel prediction is not used has been received, it is also possible to make the movement compensation processing of step 804 always use fullpixel prediction. - Furthermore, the above concept can be applied to cases other than movement compensation. For example, it is possible to reduce the DCT calculation time by using no high-frequency components in the DCT calculation. That is, in the case of a receiving method, when the ratio of the IDCT-calculation execution time to the entire execution time exceeds a certain value, a data string indicating that fact is transmitted to the transmitting side. When the transmitting side receives the data string, it is also possible to calculate only low-frequency components in the DCT calculation and set all high-frequency components to zero.
-
- Furthermore, though the embodiments are described above using video, it is possible to apply each of the above methods to audio instead of video.
- Furthermore, in the case of the third embodiment, an actual execution frequency of halfpixel prediction is recorded in
the movement compensating means 3034, and its maximum value is calculated. Then, when the maximum value is equal to or less than a sufficiently small value (e.g. 2 or 3), it is possible to generate and transmit a data string showing that halfpixel prediction is not used (a data string comprising a specific bit pattern). Furthermore, in the case of the first embodiment, when the data string showing that halfpixel prediction is not used is received, it is possible to make the movement compensation processing of the movement compensating means 1022 always use fullpixel prediction. - Furthermore, the above concept can be applied to cases other than movement compensation. For example, by using no high-frequency components in the DCT calculation, it is possible to reduce the DCT calculation processing time. That is, in the case of a receiving method, when the ratio of the IDCT-calculation execution time to the entire execution time exceeds a certain value, a data string indicating that fact is transmitted to the transmitting side.
- When the transmitting side receives the data string, it is possible to calculate only low-frequency components through the DCT calculation and reduce all high-frequency components to zero.
-
- Furthermore, though the embodiments are described above using pictures, it is also possible to apply the above method to audio instead of pictures.
- As described above, according to the first and third embodiments, the execution time of decoding is estimated in accordance with the estimated execution time of each element and, when the estimated decoding execution time may exceed the time (designated time) required to receive the data for one sheet, halfpixel prediction having a long execution time is replaced with fullpixel prediction. Thereby, it is possible to prevent the execution time from exceeding the designated time and solve the problem (C1).
-
- Furthermore, according to the fifth and seventh embodiments, the estimated execution time of each element is transmitted to the transmitting side, the execution time of decoding is estimated at the transmitting side, and halfpixel prediction, which has a long execution time, is replaced with fullpixel prediction so that the estimated decoding time does not exceed the time (designated time) probably required to receive the data for one sheet. Thereby, the information for halfpixel prediction in the sent encoded information is not wasted, and it is possible to prevent the execution time from exceeding the designated time and to solve the problem (C2).
-
- Furthermore, according to the ninth embodiment, it is possible to solve the problem (C3) by estimating the execution time of each process, then estimating the execution time required for encoding from those estimates, and deciding an execution frequency so that the estimated encoding time becomes equal to or less than the time usable for encoding one picture, decided in accordance with a frame rate.
-
- Thus, the present invention makes it possible to realize a function (CGD: Computational Graceful Degradation) that degrades quality gracefully even when the computational load increases, which is a very large advantage.
-
- Moreover, the same operations as described above can be performed by a computer by using a recording medium, such as a magnetic or optical recording medium, that stores a program for making the computer execute all or part of the steps (or the operations of each means) described in any one of the above-described embodiments.
-
- As described above, the present invention makes it possible to adapt information frames to the situation, purpose, or transmission line by dynamically deciding the frames of the data control information, the transmission control information, and the control information used by the transmitting and receiving terminals. Moreover, it becomes easy to handle a plurality of video streams or a plurality of audio streams, and to mainly reproduce an important scene cut synchronously with audio in a way that reflects the intention of an editor. Furthermore, it is possible to prevent the execution time from exceeding a designated time by estimating the execution time of decoding from the estimated execution time of each element and replacing halfpixel prediction, which has a long execution time, with fullpixel prediction when the estimated decoding execution time may exceed the time (designated time) required to receive the data for one sheet.
Claims (7)
1. A receiving terminal comprising:
a sub-program receiving section for receiving a sub-program relating to a program; and
a looking-listening section for selecting and representing, when switching from a first program to a second program, a sub-program relating to the second program, the sub-program being received by the sub-program receiving section.
2. The receiving terminal of claim 1 , wherein the looking-listening section represents the sub-program relating to the second program and then the second program.
3. The receiving terminal of claim 1 , wherein the program and the sub-program are received on different channels.
4. The receiving terminal of claim 1 , wherein the sub-program is obtained by sampling a program.
5. The receiving terminal of claim 4 , wherein the sub-program is obtained by cyclically sampling a program.
6. The receiving terminal of claim 4 , wherein the sub-program is obtained by sampling a plurality of programs.
7. A receiving method comprising the steps of:
receiving a sub-program relating to a program; and
selecting and representing, when switching from a first program to a second program, a sub-program relating to the second program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/742,810 US20070201563A1 (en) | 1997-03-17 | 2007-05-01 | Method and apparatus for processing a data series including processing priority data |
Applications Claiming Priority (16)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP6266797 | 1997-03-17 | ||
JP09-062667 | 1997-03-17 | ||
JP09-090640 | 1997-04-09 | ||
JP9064097 | 1997-04-09 | ||
JP17934297 | 1997-07-04 | ||
JP09-179342 | 1997-07-04 | ||
JP22604597 | 1997-08-22 | ||
JP22602797 | 1997-08-22 | ||
JP09-226027 | 1997-08-22 | ||
JP09-226045 | 1997-08-22 | ||
JP09-332101 | 1997-12-02 | ||
JP33210197 | 1997-12-02 | ||
PCT/JP1998/001084 WO1998042132A1 (en) | 1997-03-17 | 1998-03-13 | Method of processing, transmitting and receiving dynamic image data and apparatus therefor |
US09/194,008 US6674477B1 (en) | 1997-03-17 | 1998-03-13 | Method and apparatus for processing a data series including processing priority data |
US10/626,271 US7502070B2 (en) | 1997-03-17 | 2003-07-24 | Method and apparatus for processing a data series including processing priority data |
US11/742,810 US20070201563A1 (en) | 1997-03-17 | 2007-05-01 | Method and apparatus for processing a data series including processing priority data |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/626,271 Continuation US7502070B2 (en) | 1997-03-17 | 2003-07-24 | Method and apparatus for processing a data series including processing priority data |
Publications (1)
Publication Number | Publication Date |
---|---|
US20070201563A1 true US20070201563A1 (en) | 2007-08-30 |
Family
ID=27550897
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/194,008 Expired - Lifetime US6674477B1 (en) | 1997-03-17 | 1998-03-13 | Method and apparatus for processing a data series including processing priority data |
US10/626,271 Expired - Fee Related US7502070B2 (en) | 1997-03-17 | 2003-07-24 | Method and apparatus for processing a data series including processing priority data |
US10/626,075 Expired - Fee Related US7436454B2 (en) | 1997-03-17 | 2003-07-24 | Method and apparatus for transmitting encoded information based upon priority data in the encoded information
US10/626,060 Abandoned US20040212729A1 (en) | 1997-03-17 | 2003-07-24 | Method and apparatus for processing a data series including processing priority data |
US11/742,810 Abandoned US20070201563A1 (en) | 1997-03-17 | 2007-05-01 | Method and apparatus for processing a data series including processing priority data |
Family Applications Before (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/194,008 Expired - Lifetime US6674477B1 (en) | 1997-03-17 | 1998-03-13 | Method and apparatus for processing a data series including processing priority data |
US10/626,271 Expired - Fee Related US7502070B2 (en) | 1997-03-17 | 2003-07-24 | Method and apparatus for processing a data series including processing priority data |
US10/626,075 Expired - Fee Related US7436454B2 (en) | 1997-03-17 | 2003-07-24 | Method and apparatus for transmitting encoded information based upon piority data in the encoded information |
US10/626,060 Abandoned US20040212729A1 (en) | 1997-03-17 | 2003-07-24 | Method and apparatus for processing a data series including processing priority data |
Country Status (5)
Country | Link |
---|---|
US (5) | US6674477B1 (en) |
EP (4) | EP1439705A3 (en) |
KR (2) | KR100557103B1 (en) |
CN (3) | CN1190081C (en) |
WO (1) | WO1998042132A1 (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060098689A1 (en) * | 2004-11-08 | 2006-05-11 | Harris Corporation | Adaptive bandwidth utilization for telemetered data |
US20070120958A1 (en) * | 2005-11-29 | 2007-05-31 | Sei Sunahara | Communication system, terminal apparatus and computer program |
US20120219062A1 (en) * | 2011-02-28 | 2012-08-30 | Cisco Technology, Inc. | System and method for managing video processing in a network environment |
US20140007172A1 (en) * | 2012-06-29 | 2014-01-02 | Samsung Electronics Co. Ltd. | Method and apparatus for transmitting/receiving adaptive media in a multimedia system |
US8798135B2 (en) * | 2004-12-22 | 2014-08-05 | Entropic Communications, Inc. | Video stream modifier |
US20150120882A1 (en) * | 2012-05-30 | 2015-04-30 | Canon Kabushiki Kaisha | Information processing apparatus, program, and control method |
US20150170326A1 (en) * | 2010-05-06 | 2015-06-18 | Kenji Tanaka | Transmission terminal, transmission method, and computer-readable recording medium storing transmission program |
US9722808B2 (en) | 2014-01-22 | 2017-08-01 | Ricoh Company, Limited | Data transmission system, a terminal device, and a recording medium |
JPWO2016199587A1 (en) * | 2015-06-12 | 2018-04-05 | 日本電気株式会社 | Relay device, terminal device, communication system, PDU relay method, PDU reception method, and program |
Families Citing this family (139)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1998057273A1 (en) * | 1997-06-13 | 1998-12-17 | Koninklijke Philips Electronics N.V. | Cyclic transmission of a plurality of mutually related objects |
EP0986267A3 (en) * | 1998-09-07 | 2003-11-19 | Robert Bosch Gmbh | Method and terminals for including audiovisual coded information in a given transmission standard |
US6587985B1 (en) | 1998-11-30 | 2003-07-01 | Matsushita Electric Industrial Co., Ltd. | Data transmission method, data transmission apparatus, data receiving apparatus, and packet data structure |
US11109114B2 (en) * | 2001-04-18 | 2021-08-31 | Grass Valley Canada | Advertisement management method, system, and computer program product |
US9123380B2 (en) | 1998-12-18 | 2015-09-01 | Gvbb Holdings S.A.R.L. | Systems, methods, and computer program products for automated real-time execution of live inserts of repurposed stored content distribution, and multiple aspect ratio automated simulcast production |
US20030001880A1 (en) * | 2001-04-18 | 2003-01-02 | Parkervision, Inc. | Method, system, and computer program product for producing and distributing enhanced media |
CN1153427C (en) | 1999-01-26 | 2004-06-09 | 松下电器产业株式会社 | Method and device for data trunking processing and information discarding and program recording medium |
US7120167B1 (en) * | 1999-06-03 | 2006-10-10 | Matsushita Electric Industrial Co., Ltd. | Broadcasting system and its method |
US7869462B2 (en) * | 1999-06-03 | 2011-01-11 | Panasonic Corporation | Broadcast system and method therefor |
JP3938824B2 (en) * | 1999-10-29 | 2007-06-27 | 松下電器産業株式会社 | Communication apparatus and communication method |
DE60020672T2 (en) | 2000-03-02 | 2005-11-10 | Matsushita Electric Industrial Co., Ltd., Kadoma | Method and apparatus for repeating the video data frames with priority levels |
IT1319973B1 (en) * | 2000-03-17 | 2003-11-12 | Cselt Centro Studi Lab Telecom | PROCEDURE AND SYSTEM FOR TIMING THE TRANSMISSION OF INFORMATION FLOWS, FOR EXAMPLE AUDIOVISUAL OR MULTIMEDIA FLOWS, RELATED |
JP2001292164A (en) * | 2000-04-06 | 2001-10-19 | Nec Corp | Packet switch and its switching method |
US6823324B2 (en) * | 2000-04-21 | 2004-11-23 | Matsushita Electric Industrial Co., Ltd. | Data broadcast program producing apparatus, a computer program for producing data broadcast programs, and a computer-readable recording medium storing the computer program |
US7191242B1 (en) * | 2000-06-22 | 2007-03-13 | Apple, Inc. | Methods and apparatuses for transferring data |
US7062557B1 (en) * | 2000-07-10 | 2006-06-13 | Hewlett-Packard Development Company, L.P. | Web server request classification system that classifies requests based on user's behaviors and expectations |
US7111163B1 (en) | 2000-07-10 | 2006-09-19 | Alterwan, Inc. | Wide area network using internet with quality of service |
JP3590949B2 (en) * | 2000-08-17 | 2004-11-17 | 松下電器産業株式会社 | Data transmission device and data transmission method |
JP2002074853A (en) * | 2000-08-31 | 2002-03-15 | Toshiba Corp | Information recording device, information recording method, information reproducing device, information reproducing method, information recording medium and electronic distribution system |
WO2002030067A1 (en) * | 2000-10-05 | 2002-04-11 | Mitsubishi Denki Kabushiki Kaisha | Packet retransmission system, packet transmission device, packet reception device, packet retransmission method, packet transmission method and packet reception method |
JP2002141945A (en) * | 2000-11-06 | 2002-05-17 | Sony Corp | Data transmission system and data transmission method, and program storage medium |
US20030012287A1 (en) * | 2001-03-05 | 2003-01-16 | Ioannis Katsavounidis | Systems and methods for decoding of systematic forward error correction (FEC) codes of selected data in a video bitstream |
US20070053428A1 (en) * | 2001-03-30 | 2007-03-08 | Vixs Systems, Inc. | Managed degradation of a video stream |
US8107524B2 (en) * | 2001-03-30 | 2012-01-31 | Vixs Systems, Inc. | Adaptive bandwidth footprint matching for multiple compressed video streams in a fixed bandwidth network |
JP3516929B2 (en) * | 2001-04-11 | 2004-04-05 | シャープ株式会社 | Transmitting device, receiving device, and communication system including the same |
US7958532B2 (en) * | 2001-06-18 | 2011-06-07 | At&T Intellectual Property Ii, L.P. | Method of transmitting layered video-coded information |
JP2003152544A (en) * | 2001-11-12 | 2003-05-23 | Sony Corp | Data communication system, data transmitter, data receiver, data-receiving method and computer program |
GB2384932B (en) * | 2002-01-30 | 2004-02-25 | Motorola Inc | Video conferencing system and method of operation |
ATE487327T1 (en) * | 2002-03-08 | 2010-11-15 | France Telecom | METHOD FOR TRANSMITTING DEPENDENT DATA STREAMS |
US7404001B2 (en) * | 2002-03-27 | 2008-07-22 | Ericsson Ab | Videophone and method for a video call |
JP4000904B2 (en) | 2002-05-21 | 2007-10-31 | ソニー株式会社 | Information processing apparatus and method, recording medium, and program |
US20030233464A1 (en) * | 2002-06-10 | 2003-12-18 | Jonathan Walpole | Priority progress streaming for quality-adaptive transmission of data |
JP2004021996A (en) | 2002-06-12 | 2004-01-22 | Sony Corp | Recording device, server, recording method, program, and storage medium |
US7533398B2 (en) * | 2002-07-26 | 2009-05-12 | The Associated Press | Automatic selection of encoding parameters for transmission of media objects |
JP3971984B2 (en) * | 2002-10-15 | 2007-09-05 | 松下電器産業株式会社 | Communication apparatus and communication method |
FR2846179B1 (en) | 2002-10-21 | 2005-02-04 | Medialive | ADAPTIVE AND PROGRESSIVE STRIP OF AUDIO STREAMS |
WO2004051906A2 (en) * | 2002-11-27 | 2004-06-17 | Rgb Media, Inc. | Apparatus and method for dynamic channel mapping and optimized scheduling of data packets |
EP1432196A1 (en) * | 2002-12-20 | 2004-06-23 | Matsushita Electric Industrial Co., Ltd. | Control traffic compression method in media data transmission |
FR2849980B1 (en) * | 2003-01-15 | 2005-04-08 | Medialive | METHOD FOR THE DISTRIBUTION OF VIDEO SEQUENCES, DECODER AND SYSTEM FOR THE IMPLEMENTATION OF THIS PRODUCT |
JP3888307B2 (en) * | 2003-01-15 | 2007-02-28 | 船井電機株式会社 | Optical disk playback device |
FR2853786B1 (en) * | 2003-04-11 | 2005-08-05 | Medialive | METHOD AND EQUIPMENT FOR DISTRIBUTING DIGITAL VIDEO PRODUCTS WITH A RESTRICTION OF CERTAIN AT LEAST REPRESENTATION AND REPRODUCTION RIGHTS |
KR100586101B1 (en) * | 2003-05-12 | 2006-06-07 | 엘지전자 주식회사 | Moving picture coding method |
US7374079B2 (en) * | 2003-06-24 | 2008-05-20 | Lg Telecom, Ltd. | Method for providing banking services by use of mobile communication system |
CN1830164A (en) * | 2003-10-30 | 2006-09-06 | 松下电器产业株式会社 | Mobile-terminal-oriented transmission method and apparatus |
WO2005043784A1 (en) * | 2003-10-30 | 2005-05-12 | Matsushita Electric Industrial Co., Ltd. | Device and method for receiving broadcast wave in which a plurality of services are multiplexed |
US7379608B2 (en) * | 2003-12-04 | 2008-05-27 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung, E.V. | Arithmetic coding for transforming video and picture data units |
JP3891174B2 (en) * | 2003-12-08 | 2007-03-14 | 株式会社日立製作所 | Control method |
US7599435B2 (en) | 2004-01-30 | 2009-10-06 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Video frame encoding and decoding |
US7675939B2 (en) * | 2004-01-30 | 2010-03-09 | Sony Corporation | Transmission apparatus and method, reception apparatus and method, communication system, recording medium, and program |
US7586924B2 (en) * | 2004-02-27 | 2009-09-08 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Apparatus and method for coding an information signal into a data stream, converting the data stream and decoding the data stream |
CN1564595A (en) * | 2004-04-14 | 2005-01-12 | 冯彦 | Dynamic displaying method for receiving images of telephone display panel |
CN100345164C (en) | 2004-05-14 | 2007-10-24 | 腾讯科技(深圳)有限公司 | Method for synthesizing dynamic virtual images |
US8195744B2 (en) | 2004-07-09 | 2012-06-05 | Orb Networks, Inc. | File sharing system for use with a network |
US7937484B2 (en) | 2004-07-09 | 2011-05-03 | Orb Networks, Inc. | System and method for remotely controlling network resources |
US9077766B2 (en) * | 2004-07-09 | 2015-07-07 | Qualcomm Incorporated | System and method for combining memory resources for use on a personal network |
US8787164B2 (en) | 2004-07-09 | 2014-07-22 | Qualcomm Incorporated | Media delivery system and method for transporting media to desired target devices |
US8738693B2 (en) | 2004-07-09 | 2014-05-27 | Qualcomm Incorporated | System and method for managing distribution of media files |
JP2006025281A (en) * | 2004-07-09 | 2006-01-26 | Hitachi Ltd | Information source selection system, and method |
US8819140B2 (en) | 2004-07-09 | 2014-08-26 | Qualcomm Incorporated | System and method for enabling the establishment and use of a personal network |
US7706262B2 (en) * | 2005-09-30 | 2010-04-27 | Alcatel-Lucent Usa Inc. | Identifying data and/or control packets in wireless communication |
JP3928807B2 (en) * | 2005-01-14 | 2007-06-13 | 船井電機株式会社 | Optical disk playback device |
ATE512550T1 (en) * | 2005-01-17 | 2011-06-15 | Koninkl Philips Electronics Nv | SYSTEM, TRANSMITTER, RECEIVER, METHOD AND SOFTWARE FOR SENDING AND RECEIVING ORDERED QUANTITIES OF VIDEO FRAME |
DE102005012668B4 (en) * | 2005-03-17 | 2012-02-16 | Bernhard Blöchl | Frame error detection and correction method for digital video |
JP5105458B2 (en) * | 2005-10-04 | 2012-12-26 | 任天堂株式会社 | Game system and game program |
GB2432985A (en) * | 2005-12-05 | 2007-06-06 | Univ Robert Gordon | Encoder control system based on a target encoding value |
US7852853B1 (en) * | 2006-02-07 | 2010-12-14 | Nextel Communications Inc. | System and method for transmitting video information |
US20080019398A1 (en) * | 2006-07-20 | 2008-01-24 | Adimos Systems Ltd. | Clock recovery in wireless media streaming |
JP4707623B2 (en) * | 2006-07-21 | 2011-06-22 | 富士通東芝モバイルコミュニケーションズ株式会社 | Information processing device |
WO2008026896A1 (en) * | 2006-08-31 | 2008-03-06 | Samsung Electronics Co., Ltd. | Video encoding apparatus and method and video decoding apparatus and method |
US8973072B2 (en) | 2006-10-19 | 2015-03-03 | Qualcomm Connected Experiences, Inc. | System and method for programmatic link generation with media delivery |
EP1936908A1 (en) * | 2006-12-19 | 2008-06-25 | Deutsche Thomson OHG | Method, apparatus and data container for transferring high resolution audio/video data in a high speed IP network |
WO2008075663A1 (en) * | 2006-12-21 | 2008-06-26 | Ajinomoto Co., Inc. | Method for evaluation of colorectal cancer, colorectal cancer evaluation apparatus, colorectal cancer evaluation method, colorectal cancer evaluation system, colorectal cancer evaluation program, and recording medium |
EP2506530B1 (en) * | 2007-01-15 | 2018-09-05 | BlackBerry Limited | Fragmenting large packets in the presence of high priority packets |
US20080187291A1 (en) * | 2007-02-05 | 2008-08-07 | Microsoft Corporation | Prioritization for video acquisition |
US8176386B1 (en) | 2007-04-10 | 2012-05-08 | Marvell International Ltd. | Systems and methods for processing streaming data |
JP2009021837A (en) * | 2007-07-12 | 2009-01-29 | Panasonic Corp | Decoding system |
WO2009016723A1 (en) * | 2007-07-30 | 2009-02-05 | Fujitsu Limited | Electronic device, information processing system, electronic device failure notification method, and failure notification program |
US9203445B2 (en) | 2007-08-31 | 2015-12-01 | Iheartmedia Management Services, Inc. | Mitigating media station interruptions |
WO2009029889A1 (en) * | 2007-08-31 | 2009-03-05 | Clear Channel Management Services, L.P. | Radio receiver and method for receiving and playing signals from multiple broadcast channels |
MY162861A (en) * | 2007-09-24 | 2017-07-31 | Koninl Philips Electronics Nv | Method and system for encoding a video data signal, encoded video data signal, method and system for decoding a video data signal |
JP2009124510A (en) * | 2007-11-15 | 2009-06-04 | Canon Inc | Display control apparatus and method thereof, program, and recording medium |
JP5092897B2 (en) * | 2008-05-26 | 2012-12-05 | 富士通株式会社 | Data migration processing program, data migration processing device, and data migration processing method |
KR101066830B1 (en) * | 2008-08-20 | 2011-09-26 | 삼성전자주식회사 | Method For Transmitting and Receiving, and Apparatus using the same |
JP2010056964A (en) * | 2008-08-28 | 2010-03-11 | Canon Inc | Receiving apparatus and control method thereof, program, and recording medium |
JP2010103969A (en) * | 2008-09-25 | 2010-05-06 | Renesas Technology Corp | Image-decoding method, image decoder, image encoding method, and image encoder |
US9154942B2 (en) * | 2008-11-26 | 2015-10-06 | Free Stream Media Corp. | Zero configuration communication between a browser and a networked media device |
JP5004309B2 (en) * | 2009-02-18 | 2012-08-22 | ソニーモバイルコミュニケーションズ, エービー | Movie output method and movie output device |
CN101510299B (en) * | 2009-03-04 | 2011-07-20 | 上海大学 | Image self-adapting method based on vision significance |
JP5340296B2 (en) * | 2009-03-26 | 2013-11-13 | パナソニック株式会社 | Decoding device, encoding / decoding device, and decoding method |
US8271106B2 (en) | 2009-04-17 | 2012-09-18 | Hospira, Inc. | System and method for configuring a rule set for medical event management and responses |
JP5332854B2 (en) * | 2009-04-20 | 2013-11-06 | ソニー株式会社 | Wireless transmitter, wireless transmission method, wireless receiver, and wireless reception method |
JP5141633B2 (en) * | 2009-04-24 | 2013-02-13 | ソニー株式会社 | Image processing method and image information encoding apparatus using the same |
CN102439974B (en) * | 2009-05-22 | 2015-01-28 | 株式会社巨晶片 | Video playback system and video playback method |
US8837453B2 (en) * | 2009-05-28 | 2014-09-16 | Symbol Technologies, Inc. | Methods and apparatus for transmitting data based on interframe dependencies |
JP5764140B2 (en) | 2009-12-16 | 2015-08-12 | ダウ グローバル テクノロジーズ エルエルシー | Production of epoxy resins using improved ion exchange resin catalysts. |
CN102754445A (en) * | 2010-02-15 | 2012-10-24 | 松下电器产业株式会社 | Content communication device, content processing device and content communication system |
US20110255631A1 (en) * | 2010-04-20 | 2011-10-20 | Samsung Electronics Co., Ltd. | Methods and apparatus for fast synchronization using tail biting convolutional codes |
US8356109B2 (en) * | 2010-05-13 | 2013-01-15 | Canon Kabushiki Kaisha | Network streaming of a video stream over multiple communication channels |
US8843803B2 (en) * | 2011-04-01 | 2014-09-23 | Cleversafe, Inc. | Utilizing local memory and dispersed storage memory to access encoded data slices |
JP5801614B2 (en) * | 2011-06-09 | 2015-10-28 | キヤノン株式会社 | Image processing apparatus and image processing method |
JP5839848B2 (en) | 2011-06-13 | 2016-01-06 | キヤノン株式会社 | Image processing apparatus and image processing method |
EP2547062B1 (en) * | 2011-07-14 | 2016-03-16 | Nxp B.V. | Media streaming with adaptation |
US9237356B2 (en) | 2011-09-23 | 2016-01-12 | Qualcomm Incorporated | Reference picture list construction for video coding |
US20130094518A1 (en) * | 2011-10-13 | 2013-04-18 | Electronics And Telecommunications Research Institute | Method for configuring and transmitting mmt payload |
CA2852271A1 (en) | 2011-10-21 | 2013-04-25 | Hospira, Inc. | Medical device update system |
GB2496414A (en) * | 2011-11-10 | 2013-05-15 | Sony Corp | Prioritising audio and/or video content for transmission over an IP network |
PL3917140T3 (en) * | 2012-01-19 | 2023-12-04 | Vid Scale, Inc. | Method and apparatus for signaling and construction of video coding reference picture lists |
WO2013108954A1 (en) * | 2012-01-20 | 2013-07-25 | 전자부품연구원 | Method for transmitting and receiving program configuration information for scalable ultra high definition video service in hybrid transmission environment, and method and apparatus for effectively transmitting scalar layer information |
US9979958B2 (en) | 2012-04-20 | 2018-05-22 | Qualcomm Incorporated | Decoded picture buffer processing for random access point pictures in video sequences |
CN103379360B (en) * | 2012-04-23 | 2015-05-27 | 华为技术有限公司 | Assessment method and device for video quality |
US9479776B2 (en) | 2012-07-02 | 2016-10-25 | Qualcomm Incorporated | Signaling of long-term reference pictures for video coding |
US9794143B1 (en) * | 2012-09-14 | 2017-10-17 | Arris Enterprises Llc | Video delivery over IP packet networks |
US10341047B2 (en) | 2013-10-31 | 2019-07-02 | Hewlett Packard Enterprise Development Lp | Method and system for controlling the forwarding of error correction data |
US9571404B2 (en) * | 2012-11-09 | 2017-02-14 | Aruba Networks, Inc. | Method and system for prioritizing network packets |
US9515941B2 (en) | 2012-11-09 | 2016-12-06 | Aruba Networks, Inc. | Dynamic determination of transmission parameters based on packet priority and network conditions |
WO2014138446A1 (en) | 2013-03-06 | 2014-09-12 | Hospira,Inc. | Medical device communication method |
CN103338103A (en) * | 2013-06-04 | 2013-10-02 | 中联重科股份有限公司 | Data encryption method and system and handheld device |
US20150066531A1 (en) | 2013-08-30 | 2015-03-05 | James D. Jacobson | System and method of monitoring and managing a remote infusion regimen |
US9662436B2 (en) | 2013-09-20 | 2017-05-30 | Icu Medical, Inc. | Fail-safe drug infusion therapy system |
EP3703379B1 (en) * | 2013-12-16 | 2022-06-22 | Panasonic Intellectual Property Corporation of America | Transmission method, reception method, transmitting device, and receiving device |
CN106031187B (en) * | 2014-03-03 | 2019-12-24 | 索尼公司 | Transmitting apparatus, transmitting method, receiving apparatus and receiving method |
CA2945647C (en) | 2014-04-30 | 2023-08-08 | Hospira, Inc. | Patient care system with conditional alarm forwarding |
US9724470B2 (en) | 2014-06-16 | 2017-08-08 | Icu Medical, Inc. | System for monitoring and delivering medication to a patient and method of using the same to minimize the risks associated with automated therapy |
MX368827B (en) * | 2014-08-07 | 2019-10-18 | Sony Corp | Transmission device, transmission method and reception device. |
US9539383B2 (en) | 2014-09-15 | 2017-01-10 | Hospira, Inc. | System and method that matches delayed infusion auto-programs with manually entered infusion programs and analyzes differences therein |
US9774650B2 (en) * | 2014-09-23 | 2017-09-26 | Cisco Technology, Inc. | Frame priority system |
US9838571B2 (en) | 2015-04-10 | 2017-12-05 | Gvbb Holdings S.A.R.L. | Precision timing for broadcast network |
US10750217B2 (en) * | 2016-03-21 | 2020-08-18 | Lg Electronics Inc. | Broadcast signal transmitting/receiving device and method |
US20180109469A1 (en) * | 2016-10-17 | 2018-04-19 | International Business Machines Corporation | Systems and methods for controlling process priority for efficient resource allocation |
FR3070566B1 (en) * | 2017-08-30 | 2020-09-04 | Sagemcom Broadband Sas | PROCESS FOR RECOVERING A TARGET FILE OF AN OPERATING SOFTWARE AND DEVICE FOR USE |
US11606528B2 (en) * | 2018-01-03 | 2023-03-14 | Saturn Licensing Llc | Advanced television systems committee (ATSC) 3.0 latency-free display of content attribute |
NZ771914A (en) | 2018-07-17 | 2023-04-28 | Icu Medical Inc | Updating infusion pump drug libraries and operational software in a networked environment |
WO2020018389A1 (en) | 2018-07-17 | 2020-01-23 | Icu Medical, Inc. | Systems and methods for facilitating clinical messaging in a network environment |
CN109089067A (en) * | 2018-09-12 | 2018-12-25 | 深圳市沃特沃德股份有限公司 | Videophone and its image capture method, device and computer readable storage medium |
EP4055742A1 (en) * | 2019-11-07 | 2022-09-14 | Telefonaktiebolaget LM Ericsson (publ) | Method and device for determining transmission priority |
JP7440065B2 (en) * | 2020-04-03 | 2024-02-28 | 国立大学法人京都大学 | blockchain network system |
GB2598701B (en) * | 2020-05-25 | 2023-01-25 | V Nova Int Ltd | Wireless data communication system and method |
CN112233606B (en) * | 2020-12-15 | 2021-06-01 | 武汉华星光电技术有限公司 | Display device, display system and distributed function system |
CN113709510A (en) * | 2021-08-06 | 2021-11-26 | 联想(北京)有限公司 | High-speed data real-time transmission method and device, equipment and storage medium |
Family Cites Families (67)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS6036145B2 (en) | 1978-06-23 | 1985-08-19 | 株式会社東芝 | Signal transmission method |
US4593194A (en) * | 1983-10-05 | 1986-06-03 | Quantum Corporation | Optical encoder with digital gain compensation controlling source intensity |
NL8402364A (en) * | 1984-07-27 | 1986-02-17 | Philips Nv | Method, station and system for transmission of messages containing data packets |
CA1256984A (en) * | 1985-12-28 | 1989-07-04 | Kunio Hakamada | Television receiver |
JPH088685B2 (en) * | 1986-09-16 | 1996-01-29 | 日本電信電話株式会社 | Hierarchical burst communication method |
JPH01101718A (en) * | 1987-10-14 | 1989-04-19 | Clarion Co Ltd | Surface acoustic wave device |
JPH0286241A (en) | 1988-09-21 | 1990-03-27 | Nippon Telegr & Teleph Corp <Ntt> | Variable rate image hierarchy coding transmission system |
US4918531A (en) * | 1988-10-25 | 1990-04-17 | Thomson Consumer Electronics, Inc. | Commercial message timer |
JP2736092B2 (en) * | 1989-01-10 | 1998-04-02 | 株式会社東芝 | Buffer device |
JPH03234139A (en) | 1990-02-08 | 1991-10-18 | Oki Electric Ind Co Ltd | Picture packet multiplex system |
JPH03276941A (en) | 1990-03-27 | 1991-12-09 | Mitsubishi Electric Corp | Packet communication terminal equipment |
JPH043684A (en) | 1990-04-20 | 1992-01-08 | Matsushita Electric Ind Co Ltd | Variable rate moving image encoder |
JPH0424914A (en) | 1990-05-16 | 1992-01-28 | Kawasaki Steel Corp | Evaluation of light intensity distribution in resist film |
JP2690180B2 (en) | 1990-08-08 | 1997-12-10 | 沖電気工業株式会社 | Packet interpolation method |
JPH0530138A (en) | 1991-07-25 | 1993-02-05 | Nippon Telegr & Teleph Corp <Ntt> | Multi-media transfer system |
JP2515643B2 (en) | 1991-08-15 | 1996-07-10 | 新日本製鐵株式会社 | Method for analyzing meandering behavior of strip |
JP2960803B2 (en) * | 1991-08-28 | 1999-10-12 | 株式会社日立製作所 | Digital broadcast signal receiving apparatus and digital broadcast signal receiving television receiver |
US5404505A (en) * | 1991-11-01 | 1995-04-04 | Finisar Corporation | System for scheduling transmission of indexed and requested database tiers on demand at varying repetition rates |
JPH05137125A (en) | 1991-11-13 | 1993-06-01 | Victor Co Of Japan Ltd | Teletext receiver |
JPH05244435A (en) | 1992-02-28 | 1993-09-21 | Fujitsu Ltd | Hierarchical coding method for picture and picture coder |
JPH0614049A (en) | 1992-03-19 | 1994-01-21 | Fujitsu Ltd | Cell abort controller in atm and its method |
JP3110137B2 (en) | 1992-04-03 | 2000-11-20 | 積水化学工業株式会社 | Communication method in home wireless communication network |
JPH05327758A (en) | 1992-05-25 | 1993-12-10 | Fujitsu Ltd | Packet formation system for image coding data |
US5289276A (en) * | 1992-06-19 | 1994-02-22 | General Electric Company | Method and apparatus for conveying compressed video data over a noisy communication channel |
US5287178A (en) * | 1992-07-06 | 1994-02-15 | General Electric Company | Reset control network for a video signal encoder |
US5412416A (en) * | 1992-08-07 | 1995-05-02 | Nbl Communications, Inc. | Video media distribution network apparatus and method |
JP2943516B2 (en) | 1992-08-17 | 1999-08-30 | 日本電気株式会社 | Video encoding / decoding device |
US5420801A (en) * | 1992-11-13 | 1995-05-30 | International Business Machines Corporation | System and method for synchronization of multimedia streams |
DE69320458D1 (en) * | 1992-12-17 | 1998-09-24 | Samsung Electronics Co Ltd | Disk recording medium and method and device for playback therefor |
JP3240017B2 (en) * | 1993-01-11 | 2001-12-17 | ソニー株式会社 | MPEG signal recording method and MPEG signal reproducing method |
JP3521436B2 (en) | 1993-03-18 | 2004-04-19 | 日本電気株式会社 | Image compression device and image reproduction device |
JPH06339137A (en) | 1993-05-31 | 1994-12-06 | Nippon Telegr & Teleph Corp <Ntt> | Video packet communication system |
JP3438259B2 (en) | 1993-06-02 | 2003-08-18 | ソニー株式会社 | Block data transmission method and block data transmission device |
CA2173355A1 (en) * | 1993-06-09 | 1994-12-22 | Andreas Richter | Method and apparatus for multiple media digital communication system |
US5703908A (en) * | 1993-10-08 | 1997-12-30 | Rutgers University | Fixed reference shift keying modulation for mobile radio telecommunications |
US5461415A (en) | 1994-03-15 | 1995-10-24 | International Business Machines Corporation | Look-ahead scheduling to support video-on-demand applications |
JP3244399B2 (en) * | 1994-03-25 | 2002-01-07 | 三洋電機株式会社 | Circuit and method for converting information amount of compressed moving image code signal |
JPH07322248A (en) * | 1994-05-30 | 1995-12-08 | Matsushita Electric Ind Co Ltd | Motion image data transmission method and transmitter |
JPH0818524A (en) | 1994-06-28 | 1996-01-19 | Sofuitsuku:Kk | Scrambler for data |
US5487072A (en) * | 1994-06-30 | 1996-01-23 | Bell Communications Research Inc. | Error monitoring algorithm for broadband signaling |
US6004028A (en) * | 1994-08-18 | 1999-12-21 | Ericsson Ge Mobile Communications Inc. | Device and method for receiving and reconstructing signals with improved perceived signal quality |
JP3603364B2 (en) * | 1994-11-14 | 2004-12-22 | ソニー株式会社 | Digital data recording / reproducing apparatus and method |
US5510844A (en) | 1994-11-18 | 1996-04-23 | At&T Corp. | Video bitstream regeneration using previously agreed to high priority segments |
DE19501517C1 (en) * | 1995-01-19 | 1996-05-02 | Siemens Ag | Speech information transmission method |
US5689439A (en) * | 1995-03-31 | 1997-11-18 | Lucent Technologies, Inc. | Switched antenna diversity transmission method and system |
JPH08294123A (en) | 1995-04-24 | 1996-11-05 | Kokusai Electric Co Ltd | Moving image data transmitter |
US5959980A (en) * | 1995-06-05 | 1999-09-28 | Omnipoint Corporation | Timing adjustment control for efficient time division duplex communication |
WO1997002683A1 (en) * | 1995-07-05 | 1997-01-23 | Siemens Aktiengesellschaft | Arrangement (IWF) for the bidirectional connection of an ELAN and a CLS wide-area network |
JP3597267B2 (en) * | 1995-09-26 | 2004-12-02 | 富士通株式会社 | Optical repeater with redundancy |
JP3330797B2 (en) | 1995-10-02 | 2002-09-30 | 富士通株式会社 | Moving image data storage method and moving image data decoding method |
JPH09191453A (en) | 1995-11-07 | 1997-07-22 | Sony Corp | Device for data transmission reception and data recording reproduction, its method and recording medium |
JPH09139937A (en) * | 1995-11-14 | 1997-05-27 | Fujitsu Ltd | Moving image stream converter |
FI956360A (en) * | 1995-12-29 | 1997-06-30 | Nokia Telecommunications Oy | Method for detecting a connection setup burst, and a receiver |
JP3165635B2 (en) | 1996-02-07 | 2001-05-14 | 三洋電機株式会社 | Multiplex broadcast receiver |
DE19614737A1 (en) * | 1996-04-15 | 1997-10-16 | Bosch Gmbh Robert | Error-proof multiplex process with possible retransmission |
US5752166A (en) * | 1996-09-04 | 1998-05-12 | Motorola, Inc. | Method and apparatus for controlling how a receiver responds to a message |
US6324694B1 (en) * | 1996-09-06 | 2001-11-27 | Intel Corporation | Method and apparatus for providing subsidiary data synchronous to primary content data |
JP3431465B2 (en) | 1996-09-11 | 2003-07-28 | 松下電器産業株式会社 | Data presentation control device for controlling data presentation, data transmission device for transmitting information used for controlling data presentation |
JPH10232658A (en) | 1996-12-20 | 1998-09-02 | Fujitsu Ltd | Display changeover system and recording medium |
US5931908A (en) * | 1996-12-23 | 1999-08-03 | The Walt Disney Corporation | Visual object present within live programming as an actionable event for user selection of alternate programming wherein the actionable event is selected by human operator at a head end for distributed data and programming |
US5850304A (en) * | 1997-01-08 | 1998-12-15 | Scottsdale Technologies, Inc. | Optically programmable controller |
JPH10243374A (en) | 1997-02-27 | 1998-09-11 | Hitachi Ltd | System for distributing picture voice information |
US5918002A (en) * | 1997-03-14 | 1999-06-29 | Microsoft Corporation | Selective retransmission for efficient and reliable streaming of multimedia packets in a computer network |
US6292834B1 (en) * | 1997-03-14 | 2001-09-18 | Microsoft Corporation | Dynamic bandwidth selection for efficient transmission of multimedia streams in a computer network |
US6215762B1 (en) * | 1997-07-22 | 2001-04-10 | Ericsson Inc. | Communication system and method with orthogonal block encoding |
US6351467B1 (en) * | 1997-10-27 | 2002-02-26 | Hughes Electronics Corporation | System and method for multicasting multimedia content |
JP3276941B2 (en) | 1999-06-30 | 2002-04-22 | 三洋電機株式会社 | Air conditioner |
- 1998
- 1998-03-13 EP EP04007704A patent/EP1439705A3/en not_active Withdrawn
- 1998-03-13 KR KR1020057003554A patent/KR100557103B1/en not_active IP Right Cessation
- 1998-03-13 EP EP04007703A patent/EP1439704A3/en not_active Withdrawn
- 1998-03-13 CN CNB98800657XA patent/CN1190081C/en not_active Expired - Lifetime
- 1998-03-13 KR KR1020057003555A patent/KR20050052484A/en not_active Application Discontinuation
- 1998-03-13 US US09/194,008 patent/US6674477B1/en not_active Expired - Lifetime
- 1998-03-13 EP EP07107939A patent/EP1835745A3/en not_active Withdrawn
- 1998-03-13 CN CNB2004100323659A patent/CN100525443C/en not_active Expired - Lifetime
- 1998-03-13 EP EP98907236A patent/EP0905976A4/en not_active Withdrawn
- 1998-03-13 WO PCT/JP1998/001084 patent/WO1998042132A1/en active IP Right Grant
- 1998-03-13 CN CNB2004100323610A patent/CN100334880C/en not_active Expired - Lifetime
- 2003
- 2003-07-24 US US10/626,271 patent/US7502070B2/en not_active Expired - Fee Related
- 2003-07-24 US US10/626,075 patent/US7436454B2/en not_active Expired - Fee Related
- 2003-07-24 US US10/626,060 patent/US20040212729A1/en not_active Abandoned
- 2007
- 2007-05-01 US US11/742,810 patent/US20070201563A1/en not_active Abandoned
Patent Citations (48)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5659653A (en) * | 1978-09-11 | 1997-08-19 | Thomson Consumer Electronics, S.A. | Method for programming a recording device and programming device |
US4706121B1 (en) * | 1985-07-12 | 1993-12-14 | Insight Telecast, Inc. | TV schedule system and process |
US4706121A (en) * | 1985-07-12 | 1987-11-10 | Patrick Young | TV schedule system and process |
US4777531A (en) * | 1986-01-06 | 1988-10-11 | Sony Corporation | Still sub-picture-in-picture television receiver |
US4879611A (en) * | 1986-08-01 | 1989-11-07 | Sanyo Electric Co., Ltd. | Record mode setting apparatus responsive to transmitted code containing time-start information |
US4845564A (en) * | 1987-04-16 | 1989-07-04 | Sony Corp. | Television receiver incorporating a video cassette recorder and capable of displaying a sub-channel picture within a main-channel picture |
US4908707A (en) * | 1987-07-20 | 1990-03-13 | U.S. Philips Corp. | Video cassette recorder programming via teletext transmissions |
US4959719A (en) * | 1988-12-21 | 1990-09-25 | North American Philips Corporation | Picture-in-picture television receiver control |
US5231492A (en) * | 1989-03-16 | 1993-07-27 | Fujitsu Limited | Video and audio multiplex transmission system |
US4903129A (en) * | 1989-04-06 | 1990-02-20 | Thomson Consumer Electronics, Inc. | Audio signal section apparatus |
US5047867A (en) * | 1989-06-08 | 1991-09-10 | North American Philips Corporation | Interface for a TV-VCR system |
US5038211A (en) * | 1989-07-05 | 1991-08-06 | The Superguide Corporation | Method and apparatus for transmitting and receiving television program information |
US5151789A (en) * | 1989-10-30 | 1992-09-29 | Insight Telecast, Inc. | System and method for automatic, unattended recording of cable television programs |
US5532754A (en) * | 1989-10-30 | 1996-07-02 | Starsight Telecast Inc. | Background television schedule system |
US5111292A (en) * | 1991-02-27 | 1992-05-05 | General Electric Company | Priority selection apparatus as for a video signal processor |
US5247365A (en) * | 1991-04-12 | 1993-09-21 | Sony Corporation | Channel-scanning picture-in-picture-in-picture television receiver |
US5311317A (en) * | 1991-04-19 | 1994-05-10 | Sony Corporation | Video signal processing apparatus for displaying stored video signal during channel selection |
US6498625B1 (en) * | 1991-08-13 | 2002-12-24 | Canon Kabushiki Kaisha | Dynamic image transmission method and apparatus for enhancing spatial resolution of image data |
US5465385A (en) * | 1991-10-28 | 1995-11-07 | Pioneer Electronic Corporation | CATV system with an easy program reservation |
US5432561A (en) * | 1992-03-27 | 1995-07-11 | North American Philips Corporation | System for automatically activating picture-in-picture when an auxiliary signal is detected |
US5223924A (en) * | 1992-05-27 | 1993-06-29 | North American Philips Corporation | System and method for automatically correlating user preferences with a T.V. program information database |
US5657414A (en) * | 1992-12-01 | 1997-08-12 | Scientific-Atlanta, Inc. | Auxiliary device control for a subscriber terminal |
US5524195A (en) * | 1993-05-24 | 1996-06-04 | Sun Microsystems, Inc. | Graphical user interface for interactive television with an animated agent |
US5594509A (en) * | 1993-06-22 | 1997-01-14 | Apple Computer, Inc. | Method and apparatus for audio-visual interface for the display of multiple levels of information on a display |
US5699474A (en) * | 1993-07-12 | 1997-12-16 | Sony Corporation | Method and apparatus for decoding MPEG-type data reproduced from a recording medium during a high-speed reproduction operation |
US5610841A (en) * | 1993-09-30 | 1997-03-11 | Matsushita Electric Industrial Co., Ltd. | Video server |
US5617145A (en) * | 1993-12-28 | 1997-04-01 | Matsushita Electric Industrial Co., Ltd. | Adaptive bit allocation for video and audio coding |
US5699362A (en) * | 1993-12-30 | 1997-12-16 | Lucent Technologies Inc. | System and method for direct output of constant rate high bandwidth packets streams from long term memory devices |
US5512939A (en) * | 1994-04-06 | 1996-04-30 | At&T Corp. | Low bit rate audio-visual communication system having integrated perceptual speech and video coding |
US5633683A (en) * | 1994-04-15 | 1997-05-27 | U.S. Philips Corporation | Arrangement and method for transmitting and receiving mosaic video signals including sub-pictures for easy selection of a program to be viewed |
US5686954A (en) * | 1994-09-29 | 1997-11-11 | Sony Corporation | Program information broadcasting method, program information display method, and receiving device |
US6021432A (en) * | 1994-10-31 | 2000-02-01 | Lucent Technologies Inc. | System for processing broadcast stream comprises a human-perceptible broadcast program embedded with a plurality of human-imperceptible sets of information |
US5535216A (en) * | 1995-01-17 | 1996-07-09 | Digital Equipment Corporation | Multiplexed gapped constant bit rate data transmission |
US5784522A (en) * | 1995-04-07 | 1998-07-21 | Sony Corporation | Information signal transmitting system |
US6732369B1 (en) * | 1995-10-02 | 2004-05-04 | Starsight Telecast, Inc. | Systems and methods for contextually linking television program information |
US6025837A (en) * | 1996-03-29 | 2000-02-15 | Microsoft Corporation | Electronic program guide with hyperlinks to target resources |
US6469753B1 (en) * | 1996-05-03 | 2002-10-22 | Starsight Telecast, Inc. | Information system |
US6088360A (en) * | 1996-05-31 | 2000-07-11 | Broadband Networks Corporation | Dynamic rate control technique for video multiplexer |
US5928330A (en) * | 1996-09-06 | 1999-07-27 | Motorola, Inc. | System, device, and method for streaming a multimedia file |
US6674958B2 (en) * | 1996-12-16 | 2004-01-06 | Thomson Licensing S.A. | Television apparatus control system |
US6088064A (en) * | 1996-12-19 | 2000-07-11 | Thomson Licensing S.A. | Method and apparatus for positioning auxiliary information proximate an auxiliary image in a multi-image display |
US6094457A (en) * | 1996-12-31 | 2000-07-25 | C-Cube Microsystems, Inc. | Statistical multiplexed video encoding using pre-encoding a priori statistics and a priori and a posteriori statistics |
US6097878A (en) * | 1997-02-25 | 2000-08-01 | Sony Corporation | Automatic timer event entry |
US6038545A (en) * | 1997-03-17 | 2000-03-14 | Frankel & Company | Systems, methods and computer program products for generating digital multimedia store displays and menu boards |
US6208799B1 (en) * | 1997-04-29 | 2001-03-27 | Time Warner Entertainment Company L.P. | VCR recording timeslot adjustment |
US6108706A (en) * | 1997-06-09 | 2000-08-22 | Microsoft Corporation | Transmission announcement system and method for announcing upcoming data transmissions over a broadcast network |
US6008802A (en) * | 1998-01-05 | 1999-12-28 | Intel Corporation | Method and apparatus for automatically performing a function based on the reception of information corresponding to broadcast data |
US6990680B1 (en) * | 1998-01-05 | 2006-01-24 | Gateway Inc. | System for scheduled caching of in-band data services |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7620068B2 (en) * | 2004-11-08 | 2009-11-17 | Harris Corporation | Adaptive bandwidth utilization for telemetered data |
US20060098689A1 (en) * | 2004-11-08 | 2006-05-11 | Harris Corporation | Adaptive bandwidth utilization for telemetered data |
US8798135B2 (en) * | 2004-12-22 | 2014-08-05 | Entropic Communications, Inc. | Video stream modifier |
US8681197B2 (en) * | 2005-11-29 | 2014-03-25 | Sony Corporation | Communication system, terminal apparatus and computer program |
US20070120958A1 (en) * | 2005-11-29 | 2007-05-31 | Sei Sunahara | Communication system, terminal apparatus and computer program |
US10178349B2 (en) | 2010-05-06 | 2019-01-08 | Ricoh Company, Ltd. | Transmission terminal, transmission method, and computer-readable recording medium storing transmission program |
US12058477B2 (en) | 2010-05-06 | 2024-08-06 | Ricoh Company, Ltd. | Transmission terminal, transmission method, and computer-readable recording medium storing transmission program |
US20150170326A1 (en) * | 2010-05-06 | 2015-06-18 | Kenji Tanaka | Transmission terminal, transmission method, and computer-readable recording medium storing transmission program |
US9412148B2 (en) * | 2010-05-06 | 2016-08-09 | Ricoh Company, Ltd. | Transmission terminal, transmission method, and computer-readable recording medium storing transmission program |
US11563917B2 (en) | 2010-05-06 | 2023-01-24 | Ricoh Company, Ltd. | Transmission terminal, transmission method, and computer-readable recording medium storing transmission program |
US9787944B2 (en) | 2010-05-06 | 2017-10-10 | Ricoh Company, Ltd. | Transmission terminal, transmission method, and computer-readable recording medium storing transmission program |
US10931917B2 (en) | 2010-05-06 | 2021-02-23 | Ricoh Company, Ltd. | Transmission terminal, transmission method, and computer-readable recording medium storing transmission program |
US10477147B2 (en) | 2010-05-06 | 2019-11-12 | Ricoh Company, Ltd. | Transmission terminal, transmission method, and computer-readable recording medium storing transmission program |
US20120219062A1 (en) * | 2011-02-28 | 2012-08-30 | Cisco Technology, Inc. | System and method for managing video processing in a network environment |
US9538128B2 (en) * | 2011-02-28 | 2017-01-03 | Cisco Technology, Inc. | System and method for managing video processing in a network environment |
US10791202B2 (en) * | 2012-05-30 | 2020-09-29 | Canon Kabushiki Kaisha | Information processing apparatus, program, and control method for determining priority of logical channel |
US20150120882A1 (en) * | 2012-05-30 | 2015-04-30 | Canon Kabushiki Kaisha | Information processing apparatus, program, and control method |
US20140007172A1 (en) * | 2012-06-29 | 2014-01-02 | Samsung Electronics Co. Ltd. | Method and apparatus for transmitting/receiving adaptive media in a multimedia system |
US9722808B2 (en) | 2014-01-22 | 2017-08-01 | Ricoh Company, Limited | Data transmission system, a terminal device, and a recording medium |
EP3310063A4 (en) * | 2015-06-12 | 2018-12-05 | Nec Corporation | Relay device, terminal device, communication system, pdu relay method, pdu reception method, and program |
US10555033B2 (en) | 2015-06-12 | 2020-02-04 | Nec Corporation | Relay device, terminal device, communication system, PDU relay method, PDU reception method, and program |
JPWO2016199587A1 (en) * | 2015-06-12 | 2018-04-05 | 日本電気株式会社 | Relay device, terminal device, communication system, PDU relay method, PDU reception method, and program |
Also Published As
Publication number | Publication date |
---|---|
US20040212729A1 (en) | 2004-10-28 |
CN1533176A (en) | 2004-09-29 |
CN100525443C (en) | 2009-08-05 |
EP1439704A2 (en) | 2004-07-21 |
US7436454B2 (en) | 2008-10-14 |
CN100334880C (en) | 2007-08-29 |
CN1545322A (en) | 2004-11-10 |
CN1227031A (en) | 1999-08-25 |
EP0905976A4 (en) | 2010-09-29 |
KR20050052484A (en) | 2005-06-02 |
US7502070B2 (en) | 2009-03-10 |
EP1439704A3 (en) | 2011-08-10 |
EP1439705A3 (en) | 2011-09-14 |
US20040237122A1 (en) | 2004-11-25 |
US20040120345A1 (en) | 2004-06-24 |
CN1190081C (en) | 2005-02-16 |
KR100557103B1 (en) | 2006-03-03 |
EP1835745A3 (en) | 2010-09-29 |
WO1998042132A1 (en) | 1998-09-24 |
EP0905976A1 (en) | 1999-03-31 |
US6674477B1 (en) | 2004-01-06 |
EP1439705A2 (en) | 2004-07-21 |
KR20050052483A (en) | 2005-06-02 |
EP1835745A2 (en) | 2007-09-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6674477B1 (en) | Method and apparatus for processing a data series including processing priority data | |
JP3516585B2 (en) | Data processing device and data processing method | |
KR100711635B1 (en) | Picture coding method | |
US8144764B2 (en) | Video coding | |
US7010032B1 (en) | Moving image coding apparatus and decoding apparatus | |
US6580756B1 (en) | Data transmission method, data transmission system, data receiving method, and data receiving apparatus | |
US9819955B2 (en) | Carriage systems encoding or decoding JPEG 2000 video | |
JP2006524948A (en) | A method for encoding a picture into a bitstream, a method for decoding a picture from a bitstream, an encoder for encoding a picture into a bitstream, a transmission apparatus and system including an encoder for encoding a picture into a bitstream, a decoder for decoding a picture from a bitstream, and a receiving apparatus and client comprising a decoder for decoding a picture from a bitstream | |
ZA200208713B (en) | Video error resilience. | |
JP4102223B2 (en) | Data processing apparatus and data processing method | |
KR100530919B1 (en) | Data processing method and data processing apparatus | |
JP3448047B2 (en) | Transmitting device and receiving device | |
JP2007221826A (en) | Receiving terminal and reception method | |
JP3519722B2 (en) | Data processing method and data processing device | |
KR100530920B1 (en) | Image and voice transmitting apparatus and receiving apparatus | |
CN100473158C (en) | Method and apparatus for processing, transmitting and receiving dynamic image data | |
JP2006304309A (en) | Transmitter, receiver, and communication system | |
JP2004048657A (en) | Image/audio receiving apparatus | |
Burlacu et al. | MPEG-4 Technology Strategy Analysis | |
Murugan | Multiplexing H.264/AVC Video with MPEG-AAC Audio
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |