
US9100716B2 - Augmenting client-server architectures and methods with personal computers to support media applications - Google Patents

Augmenting client-server architectures and methods with personal computers to support media applications

Info

Publication number
US9100716B2
US9100716B2 (U.S. application Ser. No. 12/349,913; US34991309A)
Authority
US
United States
Prior art keywords
image
client device
output
scene
client
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related, expires
Application number
US12/349,913
Other versions
US20090183200A1 (en)
Inventor
Charles W. K. Gritton
Frank A. Hunleth
Stephen Scheirey
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ceva Technologies Inc
Original Assignee
Hillcrest Laboratories Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hillcrest Laboratories Inc filed Critical Hillcrest Laboratories Inc
Priority to US12/349,913 priority Critical patent/US9100716B2/en
Assigned to HILLCREST LABORATORIES, INC. reassignment HILLCREST LABORATORIES, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: GRITTON, CHARLES W.K., HUNLETH, FRANK A., SCHEIREY, STEPHEN
Publication of US20090183200A1 publication Critical patent/US20090183200A1/en
Application granted granted Critical
Publication of US9100716B2 publication Critical patent/US9100716B2/en
Assigned to MULTIPLIER CAPITAL, LP reassignment MULTIPLIER CAPITAL, LP SECURITY AGREEMENT Assignors: HILLCREST LABORATORIES, INC.
Assigned to IDHL HOLDINGS, INC. reassignment IDHL HOLDINGS, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HILLCREST LABORATORIES, INC.
Assigned to HILLCREST LABORATORIES, INC. reassignment HILLCREST LABORATORIES, INC. RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: MULTIPLIER CAPITAL, LP
Assigned to CEVA TECHNOLOGIES, INC. reassignment CEVA TECHNOLOGIES, INC. ASSIGNMENT OF PATENTS Assignors: IDHL HOLDINGS, INC.
Expired - Fee Related legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/65Transmission of management data between client and server
    • H04N21/654Transmission by server directed to the client
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234309Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4 or from Quicktime to Realvideo
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/435Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N21/4355Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream involving reformatting operations of additional data, e.g. HTML pages on a television screen
    • H04N21/4356Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream involving reformatting operations of additional data, e.g. HTML pages on a television screen by altering the spatial resolution, e.g. to reformat additional data on a handheld device, attached to the STB
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440245Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display the reformatting operation being performed only on part of the stream, e.g. a region of the image or a time segment
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4782Web browsing, e.g. WebTV
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/63Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/633Control signals issued by server directed to the network components or client
    • H04N21/6332Control signals issued by server directed to the network components or client directed to client
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/63Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/633Control signals issued by server directed to the network components or client
    • H04N21/6332Control signals issued by server directed to the network components or client directed to client
    • H04N21/6336Control signals issued by server directed to the network components or client directed to client directed to decoder
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/16Analogue secrecy systems; Analogue subscription systems
    • H04N7/162Authorising the user terminal, e.g. by paying; Registering the use of a subscription channel, e.g. billing
    • H04N7/165Centralised control of user terminal ; Registering at central
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/435Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N21/4355Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream involving reformatting operations of additional data, e.g. HTML pages on a television screen

Definitions

  • the present invention describes systems and methods for processing and transferring multimedia data between nodes in a communication system, e.g., an interactive television system, usable to create, for example, sophisticated entertainment user interfaces in the home.
  • the television was tuned to the desired channel by adjusting a tuner knob and the viewer watched the selected program. Later, remote control devices were introduced that permitted viewers to tune the television from a distance. This addition to the user-television interface created the phenomenon known as “channel surfing” whereby a viewer could rapidly view short segments being broadcast on a number of channels to quickly learn what programs were available at any given time.
  • this data includes so-called metadata that describes the media content.
  • metadata refers to all of the supplementary information that describes the particular content of interest associated with media items available for selection by a user.
  • the metadata could include, e.g., the title, description, genre, cast, DVD cover art, price/availability, cast bios and filmographies, links to similar movies, critical reviews, user reviews, the rights associated with the metadata itself, rights associated with the content, advertising metadata linked to the content of interest, etc.
  • An exemplary system for capturing, processing, synthesizing and forwarding metadata suitable for such advanced user interfaces is described in U.S. patent application Ser. No. 11/037,897 entitled “A Metadata Brokering Server and Method”, filed on Jan. 18, 2005, the disclosure of which is incorporated here by reference.
  • the data needs to be communicated from, for example, a head-end portion of the system to, for example, a set-top box in a manner which enables sufficient data to be supplied to render rich user interfaces, while at the same time being sensitive to time delay and operating within the constraints imposed by legacy hardware. Accordingly, it would be desirable to provide architectures and methods which resolve these conflicting parameters and enable advanced user interfaces to be generated.
  • Systems and methods according to exemplary embodiments can improve service within the telecommunications field.
  • a zoomable user interface system includes: a display device for displaying the zoomable user interface; a client device connected to the display device for receiving a command to zoom into the zoomable user interface and for transmitting a request to perform a function associated with the command; and a second device connected to the client device for receiving the request, performing the function and returning a result to the client device, wherein the client device uses the result to perform the zoom into the zoomable user interface on the display device.
  • a method for augmenting a client device includes: receiving a request to perform at least one function; processing the request to perform the at least one function; performing the at least one function which results in a first output; selectively translating the first output into a format usable by the client device into a second output; and transmitting either the first output or the second output to the client device.
  • a communications node for augmenting a client device includes: a processor in conjunction with at least one software application for processing a request to perform at least one function, wherein the processor performs the steps of: performing the at least one function which results in a first output; and selectively translating the first output into a format usable by the client device into a second output; a memory for storing the at least one software application, the first output and the second output; and a communications interface for receiving the request to perform at least one function and for transmitting either the first output or the second output to the client device.
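  • As a rough, non-authoritative sketch of the selective translation described in the preceding paragraphs (all names, formats and fields below are illustrative assumptions, not taken from the patent), the augmenting node can be thought of as performing the requested function, checking whether the client can consume the raw output, and translating only when necessary:

      # Minimal sketch (hypothetical names): a node performs a requested function
      # and selectively translates the result into a format the client can use.
      from dataclasses import dataclass

      @dataclass
      class ClientProfile:
          supported_formats: set  # formats the client device can consume directly

      def augment_request(request, perform, translate, client: ClientProfile):
          """Perform the requested function, then decide whether translation is needed."""
          first_output = perform(request)                  # e.g., a rendered web page
          if first_output["format"] in client.supported_formats:
              return first_output                          # client can use it as-is
          second_output = translate(first_output)          # e.g., re-encode for the client
          return second_output

      # Hypothetical usage: a set-top box that only understands MPEG-2.
      stb = ClientProfile(supported_formats={"mpeg2"})
      result = augment_request(
          {"function": "browse", "url": "http://example.com"},
          perform=lambda req: {"format": "html", "data": "<html>...</html>"},
          translate=lambda out: {"format": "mpeg2", "data": b"...encoded frames..."},
          client=stb,
      )
      print(result["format"])  # -> mpeg2
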
  • FIGS. 1( a ) and 1 ( b ) depict screens of a user interface showing a hoverzoom feature which can be generated using data processed in accordance with the present invention
  • FIG. 2 depicts another screen of a user interface which can be generated using data processed in accordance with the present invention
  • FIG. 3 is a table showing exemplary metadata types and sources
  • FIG. 4 shows a client-server architecture according to exemplary embodiments of the present invention
  • FIG. 5 illustrates the MPEG-2 transition and scene encoder of FIG. 4 in more detail in accordance with an exemplary embodiment of the present invention
  • FIG. 6 illustrates the scene request processor of FIG. 4 in more detail in accordance with an exemplary embodiment of the present invention
  • FIG. 7 illustrates the client UI state machine of FIG. 4 in more detail in accordance with an exemplary embodiment of the present invention
  • FIG. 8 depicts an exemplary messaging interaction between an event processor, scene loader, exclusive scene and overlay scene in accordance with an exemplary embodiment of the present invention
  • FIG. 9 shows another exemplary messaging interaction associated with architecture and methods in accordance with the present invention.
  • FIG. 10 depicts a technique for encoding data associated with a hoverzoom effect according to an exemplary embodiment of the present invention
  • FIG. 11 illustrates selective encoding of data for transmission to a client device according to an exemplary embodiment of the present invention
  • FIG. 12 shows an exemplary embodiment wherein a home PC augments a client device according to an exemplary embodiment of the present invention
  • FIG. 13 shows an exemplary embodiment wherein a home PC augments a client device according to an exemplary embodiment of the present invention
  • FIG. 14 shows a communications node according to exemplary embodiments.
  • FIG. 15 shows a method flow diagram for augmenting a client device according to exemplary embodiments.
  • Exemplary user interface screens which can be created using data and instructions forwarded from a server to a client in accordance with exemplary embodiments of the present invention are shown in FIGS. 1( a ) and 1 ( b ).
  • A portion of an exemplary user interface screen can be generated based on information transferred to an end user's system, e.g., a set-top box/television or personal computer.
  • For more details, the interested reader is directed to the above-incorporated-by-reference U.S. patent application Ser. No. 10/768,432 as well as to U.S. patent application Ser. No.
  • FIG. 1( a ) shows a user interface screen having a plurality of media objects available for selection as images, e.g., DVD cover art.
  • the image associated with the movie “Apollo 13” has been magnified as a result of a preliminary selection activity, e.g., a user passing a cursor (not shown) over this image on the display screen.
  • This feature, referred to as a hoverzoom effect and described in more detail below under the heading "Hoverzoom", can be achieved by transmitting data (e.g., metadata) and instructions between nodes, e.g., a headend and a set-top box, according to exemplary embodiments of the present invention.
  • Additional data, e.g., metadata delivered from content providers, can be used to generate the user interface screen.
  • Upon user selection of this magnified image, e.g., by depressing a button on an input device (not shown), information about the movie "Apollo 13" including, among other things, the movie's runtime, price and actor information is shown.
  • this GUI screen includes GUI control objects including, for example, button control objects for buying the movie, watching a trailer or returning to the previous GUI screen (which could also be accomplished by depressing the ZOOM OUT button on the input device).
  • Hyperlinks generated from metadata processed in a manner described below can also be used to allow the user to jump to, for example, GUI screens associated with the related movies identified in the lower right hand corner of the GUI screen of FIG. 2 or information associated with the actors in this movie.
  • some or all of the film titles under the heading “Filmography” can be implemented as hyperlinks which, when actuated by the user via the input device, will cause the GUI to display a GUI screen corresponding to that of FIG. 2 for the indicated movie.
  • Some or all of the information used to generate the interface screens of FIGS. 1( a ), 1 ( b ) and 2 comes from metadata provided by one or more metadata providers and processed in accordance with exemplary embodiments of the present invention as will now be described.
  • FIGS. 1( a ), 1 ( b ) and 2 are purely exemplary and metadata (and other data) transferred and processed in accordance with the present invention can be used to support other interfaces or for purposes other than interface generation.
  • many different types of information can be received and processed in accordance with the present invention.
  • Examples of metadata types, sources and associated uses, e.g., for a TV browser interface, a video-on-demand (VOD) interface or a music browser, are shown in the table of FIG. 3 .
  • Of particular interest here are the zooming features associated with user interfaces generated in accordance with these exemplary embodiments of the present invention.
  • Zoom refers to techniques wherein a user interface action results in changes to the displayed portion of the user interface that create a change of perspective which is consistent and informative to the user.
  • Zooming will typically include changes in object magnification (e.g., camera-style zooming), but is expressly not limited thereto.
  • One example is semantic zooming, which includes the modification of a zoomed object in a manner which is independent of magnification, e.g., the addition of text or a graphic to an object which was not present as part of the object (at any level of magnification) prior to the semantic zoom.
  • One example of a zooming transition in accordance with exemplary embodiments of the present invention is the zooming transition between the user interface screens of FIGS. 1( a ) and 1 ( b ), which involves a magnification change of a hoverzoomed object and, optionally, semantic zooming to that object as well.
  • Another example is found in the transition between the user interface screen of FIG. 1( b ) and FIG. 2 , wherein the image associated with “Apollo 13” has its magnification changed (e.g., enlarged in FIG. 2 relative to the similar image shown in FIG. 1( b )) and translated for use in FIG. 2 .
  • Panning effects can also be used to animate the zooming transition.
  • a general client-server architecture 40 for providing data processing and transport according to an exemplary embodiment of the present invention is shown in FIG. 4 .
  • a user interface server 42 communicates with a client device 44 to generate a user interface on a display device 46 in conjunction with inputs from, for example, a pointing device 48 .
  • Communication of data, e.g., metadata and content data, between the user interface server 42 and the client device 44 can involve any number of intermediate nodes (not shown) between the user interface server 42 and the client device 44 including hubs, distribution servers, and the like.
  • some or all of the functional elements illustrated as being part of the user interface server 42 can be located within one or more of these intermediate nodes or reside at the headend of the system 40 .
  • the display device 46 can, for example, be a television, a computer monitor/display, or any other display device.
  • the client device 44 can be embodied as a set-top box, a personal computer, or any other device including a processing unit.
  • the pointer 48 can, for example, be a free space pointing device, a mouse, a remote control device, a track ball, a joystick, or any other device capable of providing a pointing capability and can be connected to the client device 44 either via wireline or wirelessly.
  • The server 42 includes a transition and screen capturer 50 , an MPEG-2 transition and scene encoder 52 , an MPEG and ZSD cache 54 , a scene request processor 56 and an MPEG stream transmitter 58 , which components operate to generate and manage the streaming of MPEG-2 data to client devices 44 , and to receive and respond to upstream requests from clients 44 .
  • the transition and screen capturer 50 automates the gathering of scene data used to generate the user interface.
  • Scene description files and formats are also referred to herein as "ZSD data". Detailed information associated with scene description files and formats according to exemplary embodiments of the present invention is provided below under the header "Scene Description Data Format".
  • a “scene” as that term is used herein generally refers to the framework associated with any user interface screen which can be generated by the user interface which, despite the sophisticated and dynamic nature of user interfaces in accordance with the present invention, are all known a priori albeit at least some of the data used to populate the scenes will vary, e.g., over time as content providers change, for example, metadata associated with their offerings.
  • Although FIGS. 1( a ), 1 ( b ) and 2 show only portions of user interface screens, each of those complete screens would be considered to be a scene.
  • Table 1 below lists exemplary data which can be collected for each transition and Table 2 lists exemplary data for each scene:
  • Scene ID: the scene ID of this scene
  • Location: the interface location instance for the starting scene
  • Scene Description: the user-supplied description or an automatically generated description
  • The transition and scene capturer 50 is thus able to acquire all of the information necessary to simulate all desired transitions in the user interface from, for example, a database (not shown in FIG. 4 ) which contains the complete user interface "universe".
  • the transition and scene capturer 50 includes navigator controller and capture controller components which become active as a user generates inputs to the interface which command scene transitions.
  • the navigation controller has the responsibility of navigation to and from every transition and scene.
  • An exemplary navigation controller performs the following operations: (1) obtain the next transition, (2) navigate to the "from" scene, (3) execute a focus command for this transition, (4) notify the capture controller with the scene and transition information, (5) execute the activation command, (6) notify the capture controller when the animation completes, (7) notify the capture controller with the scene and transition information reversed (for the back transition), (8) invoke a goBack( ) routine, and (9) notify the capture controller when the animation completes.
  • the capture controller integrates with the MPEG-2 transition and scene encoder 52 to create the MPEG-2 clips and ZSD files.
  • the capture controller receives notifications from the navigation controller when the transition begins and ends and invokes routines on the MPEG-2 transition and scene encoder at every animation step. To provide a visual indication of the progress to the user, the capture controller ensures that the canvas still paints the visible scene graph to the scene and adds a text overlay that indicates the percent of transitions executed.
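  • The following is a minimal sketch, under assumed names, of how a navigation controller might drive a capture controller through the forward and back transitions described above; the real capturer hands each animation step to the MPEG-2 transition and scene encoder 52 rather than printing a progress message:

      # Rough sketch (assumed structure, not the actual capturer code): a navigation
      # controller walks every transition and notifies a capture controller, which
      # would hand frames to the MPEG-2 encoder at each animation step.
      class CaptureController:
          def begin(self, from_scene, to_scene):
              print(f"capturing transition {from_scene} -> {to_scene}")
          def step(self, percent_done):
              print(f"  animation {percent_done}% complete")   # overlay progress text
          def end(self):
              print("  transition captured")

      def capture_all_transitions(transitions, capture: CaptureController):
          for t in transitions:
              # steps (1)-(3): navigation and focus would happen here (omitted in this sketch)
              # steps (4)-(6): notify the capture controller, run the animation, notify on completion
              capture.begin(t["from"], t["to"])
              for pct in (25, 50, 75, 100):
                  capture.step(pct)
              capture.end()
              # steps (7)-(9): reverse capture for the back transition (goBack() omitted)
              capture.begin(t["to"], t["from"])
              for pct in (25, 50, 75, 100):
                  capture.step(pct)
              capture.end()

      capture_all_transitions(
          [{"from": "BrowseMovies", "to": "Apollo13Detail"}], CaptureController()
      )
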
  • Raw scene data, e.g., images, text, metadata, etc., is provided as input to the object extraction unit 502 and the client-rendered feature extraction unit 504 .
  • the object extraction unit 502 (handling user-interactable objects on the user interface screens) and client-rendered feature extraction unit 504 (handling, e.g., hoverzoom and text, features to be rendered by the client device 44 ) operate, under the control of the render-location controller 508 , to extract information from the raw data stream and provide it to the ZSD encoder 507 , which encodes the extracted information using the scene description format described in detail below. None, some or all of the ZSD encoded data can be sent within the MPEG data stream, for example as part of the private data fields within MPEG frames, using MPEG-2 data encapsulator 509 , while other ZSD encoded data can be transmitted using the OOB link described above with respect to FIG. 4 .
  • the video information extraction unit 506 operates to extract video information suitable for MPEG-2 encoding, again under the control of the render location controller 508 .
  • the ability of render location controller 508 to selectively determine which type of encoding to apply to particular data, in this example MPEG or ZSD encoding, and the benefits associated therewith are described in more detail below with respect to FIG. 11 .
  • MPEG encoding is generic to MPEG-1, MPEG-2 and similar encodings, although some exemplary embodiments of the present invention do specifically refer to MPEG-2 encoding.
  • General details associated with MPEG encoding per se will be known to those skilled in the art and are further available in the form of draft standards (e.g., ISO CD 11172).
  • An exemplary MPEG-2 encoder 500 includes a plurality of unnumbered blocks which operate in accordance with the standard to perform MPEG-2 encoding (an exception being motion estimation unit 510 described in more detail below).
  • A more detailed description of the unnumbered blocks of MPEG encoder 500 can be found in the various MPEG-2 standards documents, for example, the Test Model 5 documents which evolved as a joint effort between ITU-T SG15.1 (known then as CCITT SG XV, Working Party XV/1, Experts Group on ATM Video Coding) and ISO/IEC JTC1/SC29 WG11 (MPEG).
  • the MPEG version of Test Model 5 is known as MPEG 93/225b
  • the ITU version of Test Model 5 is known as AVC-445b, the disclosures of which are incorporated here by reference.
  • MPEG encoded data is stored in the MPEG/ZSD cache unit 54 for subsequent transmission to the client device 44 .
  • A significant feature of the MPEG encoder 500 in the MPEG-2 transition and scene encoder 52 is its ability to quickly and efficiently provide a high level of compression of the MPEG data being encoded. Among other things, this can be achieved by using knowledge of where each of the scenes is "located" relative to the others in the user interface, which is defined a priori in exemplary user interfaces according to the present invention. This enables selective simplification of the standard MPEG motion estimation algorithm, which in turn speeds up the MPEG encoding process and/or reduces the amount of processing power that needs to be dedicated thereto.
  • part of the information that is used to perform the encoding is information regarding where blocks of pixels have moved from one MPEG frame to the next MPEG frame (and/or backwards from a previous MPEG frame to a current MPEG frame). For example, if a block of pixels in a first MPEG frame has simply moved to a new screen location in a second MPEG frame, it is generally more efficient to determine and transmit a motion vector associated with that block of pixels than to re-encode that entire block of pixels again and resend them. Similarly, if that block of pixels has experienced a relatively uniform color difference (e.g., by transiting through a lighting effect), it is still efficient to provide a motion vector and some color difference information rather than retransmit the entire block of pixels.
  • Standard MPEG motion estimation algorithms perform a search over blocks of pixel data to determine which blocks of pixels have moved (and in which direction) from frame to frame. For example, some searches, called full-pel searches, use 16×16 blocks, while others, called half-pel searches, use 16×8 blocks. These searches can become computationally expensive, particularly for high definition video data, and have been estimated to require up to 80% of the processing time/power associated with the operations performed by a standard MPEG encoder 500 (e.g., without the modifications introduced by the encoder hint collector 512 ).
  • motion estimation associated with MPEG encoding is simplified using the fact that the user interface being generated by these client/server architectures does not involve random movement of objects.
  • the image associated with “Apollo 13” moves from a first position on a display screen to a second position on a display screen (optionally with some magnification), both positions being known a priori to the encoder hint collector 512 , which can calculate an MPEG motion vector therefrom.
  • the encoder hint collector 512 can pass the MPEG motion vector to motion estimation unit 510 with a command to use the passed motion vector for performing MPEG compression rather than performing a search in accordance with standard MPEG techniques.
  • this use of knowledge of interrelated user interface screens to generate MPEG motion vectors may not always be able to generate a valid MPEG motion vector (e.g., due to limitations on the number of bits assigned for expressing MPEG motion vectors).
  • encoder hint collector 512 also has the capability to command motion estimation unit 510 to employ the standard MPEG search algorithm to determine motion vectors on a frame-by-frame (or other) basis.
  • A third category of motion vectors which can be determined in accordance with the present invention are those calculated by the standard MPEG search algorithm using a search range which is limited based on the information available to the encoder hint collector 512 .
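  • As an illustration of the motion-estimation shortcut described above (the vector-range limit and the search parameters here are assumptions, not values from the patent), the encoder hint collector could derive a motion vector from the a priori known start and end positions of a UI element and fall back to a search only when that vector is not valid:

      # Illustrative sketch only: derive an MPEG motion-vector hint from the a priori
      # known start and end positions of a UI element, falling back to a (bounded)
      # search when the displacement cannot be expressed as a valid motion vector.
      MAX_MV = 127  # assumed limit on motion-vector magnitude, in pixels

      def motion_vector_hint(block_start, block_end):
          """Return (dx, dy) if the known UI motion fits in a legal motion vector."""
          dx = block_end[0] - block_start[0]
          dy = block_end[1] - block_start[1]
          if abs(dx) <= MAX_MV and abs(dy) <= MAX_MV:
              return dx, dy
          return None  # out of range: the encoder must fall back to a search

      def estimate_motion(block_start, block_end, search):
          hint = motion_vector_hint(block_start, block_end)
          if hint is not None:
              return hint  # skip the expensive block search entirely
          # Fall back to the standard search, optionally restricted to a window
          # centred on the predicted displacement.
          return search(center=block_start, window=64)

      # Hypothetical example: the "Apollo 13" cover art slides right and up slightly.
      print(estimate_motion((320, 240), (352, 224), search=lambda **kw: (0, 0)))
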
  • MPEG data and scene description data generated by blocks 50 and 52 can be cached in memory device 54 for retrieval as needed by the scene request processor 56 .
  • the scene request processor 56 processes requests for scenes from client 44 , e.g., if the client user interface state machine 62 receives an indication that the cursor associated with pointer 48 has paused over the image associated with “Apollo 13” ( FIG. 1 ), then a request is sent back to scene request processor 56 to initiate a hoverzoom scene (described below) or if the client user interface state machine 62 receives an indication that the user wants to view a more detailed scene associated with “Apollo 13” ( FIG. 2 ), then a request is sent back to scene request processor 56 to initiate that scene.
  • the scene request processor 56 returns MPEG-2 transitions and scene description data back to the client 44 in response to the upstream requests. According to exemplary embodiments described in more detail below, for certain upstream requests the scene request processor 56 may dynamically determine whether MPEG data, scene description data or some combination of both is appropriate to service the requests. A detailed example of the scene request processor 56 is illustrated in FIG. 6 .
  • the client request processor 600 coordinates all client interaction, e.g., by interpreting client requests and dispatching those requests to the appropriate components within scene request processor 56 .
  • the client request processor tracks states and statistics on a per-client basis and stores such information in database 602 .
  • An out-of-band (OOB) client communication component 604 handles all communication with clients over OOB channels, including responding to connection requests and extracting protocol requests.
  • the video playback control function 606 coordinates the operation of the MPEG-2 stream generation components, e.g., the scene loop generator 608 and the transition playback function 610 .
  • the scene loop generator 608 component generates loops of the user interface scenes and transmits them when no transitions occur.
  • the transition playback function 610 loads MPEG-2 transition streams that were previously generated by the MPEG-2 transition and scene encoder 52 (e.g., via cache 54 ) and streams them to the requested client.
  • the transition playback function 610 may serve multiple streams simultaneously.
  • The MPEG-2 transport stream encapsulation unit 612 updates the MPEG-2 transport stream as appropriate and forwards the stream to the UDP encapsulation unit 614 , which groups MPEG-2 transport stream packets together and sends them over UDP to an IP-to-QAM gateway (not shown) in the MPEG stream transmitter 58 .
  • MPEG stream transmitter 58 on the server side, and MPEG stream receiver 64 and MPEG decoder 66 , on the client side, enable the communication of both metadata, e.g., data used to populate the text fields shown in the user interface screen of FIG. 2 , and content via a video streaming protocol link.
  • The MPEG transmitter 58 , receiver 64 and decoder 66 can be implemented using off-the-shelf components and, accordingly, are not described in detail herein. However, readers interested in more details relating to these elements, as well as other exemplary interactive television system architectures in which the present invention can be implemented, are referred to U.S. Pat. No.
  • The on-screen display (OSD) graphics controller 68 receives scene data from the client state machine 62 and input from the cursor controller 69 to generate overlay graphics and local animations, e.g., zooming transitions, for the user interface.
  • the MPEG video data and the OSD video data output from decoder 66 and OSD graphics controller 68 , respectively, are combined by video combiner 70 and forwarded to display device 46 to generate the user interface.
  • the DVD cover art images shown in FIG. 1( a ) are examples of user interface elements created using MPEG video data
  • the zoomed version of the “Apollo 13” image in FIG. 1( b ) and the circular icons in the upper right hand corner of the user interface screen of FIG. 1( a ) are examples of user interface elements generated using scene description data.
  • the client user interface state machine 62 interprets scene data and/or scripts received from the scene request processor 56 to present user interface scenes (e.g., as shown in FIGS. 1( a ), 1 ( b ) and 2 ) on client devices 44 .
  • the client user interface state machine 62 can also retrieve scene data and MPEG-2 transition clips from either the headend 42 (as represented by block 700 ) or from a local hard disk drive 702 .
  • Out-of-band (OOB) communications 704 can be used to provide signaling and commands to the client user interface state machine 62 via an operating system (OS) 706 , e.g., PowerTV, Linux, Win32, etc., and operating system portal layer 708 .
  • the OS and OS porting layer 706 , 708 can also track the user's activities with respect to the user interface and provide data to an event mapper function 710 .
  • Event mapper 710 translates user interface data, e.g., cursor movement, voice command input, motion of free space pointer, etc., into events which may require some change in the user interface, e.g., display change, audio change, zooming transition, etc. For example, when the user's cursor hovers over or passes over the image of “Apollo 13” in FIG. 1( a ), the event mapper 710 would receive raw cursor data from the OS and map that into, for example, a hoverzoom event which results in that image being slightly magnified as illustrated in FIG. 1( b ) and described in more detail below.
  • the event mapper 710 could identify a “transition to detailed view event” associated therewith, leading to a transition to the user interface screen of FIG. 2 .
  • Events detected by event mapper 710 are queued in the event queue 712 for processing by event processor 714 .
  • the event processor 714 coordinates the activities of the client user interface state machine 62 by receiving events from the event queue 712 and dispatching them to the action library 716 based on, for example, the currently active scene data and/or script.
  • the action library 716 in conjunction with a scene data loader 720 and various storage units 718 , 722 , operates to generate the change(s) to the currently displayed user interface screen based on the detected event as will be described in more detail below with respect to the discussion of scene data.
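  • A simplified sketch of this event path (raw input to event mapper, to event queue, to event processor, to action dispatch), with hypothetical event and object names that are not taken from the patent:

      # Simplified sketch (names are illustrative) of the client UI state machine's
      # event path: raw input -> event mapper -> event queue -> event processor,
      # which dispatches to actions registered for the currently active scene.
      from collections import deque

      def event_mapper(raw):
          # Map raw OS input into UI-level events, e.g., a pause over an image
          # becomes a hoverzoom-style event.
          if raw["type"] == "cursor_pause":
              return {"event": "OnHover", "object": raw["object"]}
          if raw["type"] == "button_press":
              return {"event": "OnActivate", "object": raw["object"]}
          return None

      def run_event_loop(raw_inputs, triple_table):
          queue = deque()
          for raw in raw_inputs:
              evt = event_mapper(raw)
              if evt:
                  queue.append(evt)
          while queue:
              evt = queue.popleft()
              action = triple_table.get((evt["object"], evt["event"]))
              if action:
                  action()

      run_event_loop(
          [{"type": "cursor_pause", "object": "Apollo13Cover"}],
          {("Apollo13Cover", "OnHover"): lambda: print("StartHoverZoom Apollo13Cover")},
      )
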
  • this exemplary data format effectively creates a state machine that enables the client device 44 to respond to user interactions and system events.
  • This data format is arbitrarily extensible to support both very low powered client devices 44 and high end client devices 44 , e.g., PCs.
  • Other goals of this exemplary scene data format include theme support, future language support, demo scripting, and automated test support.
  • the ZSD format supports two types of scenes: the exclusive scene and overlay scenes.
  • the exclusive scene is referred to simply as the scene, since it occupies the full screen and contains the primary user interaction elements.
  • Overlay scenes describe full or partial scenes that the client user interface state machine 62 logically overlays on top of the exclusive scene. While the exclusive scene changes as the user navigates, the overlay scenes may or may not change. This enables them to support features such as music controls, global navigation, bookmarks, etc., that follow the user as they navigate from exclusive scene to scene.
  • Exclusive scenes launch overlay scenes initially, but overlay scenes may launch other overlays. Although it is possible to terminate all overlay scenes, the overlay scenes control their own lifetime based on interaction from the user or based on the current exclusive scene.
  • Inter-scene communication is useful for operations such as notifying overlay scenes what is in the exclusive scene.
  • the sender triggers actions to generate events. These events are then dispatched by the event processor 714 to each scene. When the event contains a Resource ID, that ID is mapped to an equivalent resource in the destination scene. If the destination scene does not contain an equivalent resource, the event processor 714 moves on to test dispatching the event to the next scene.
  • Every exclusive scene passes through the following states sequentially on the client: (1) Entered, (2) Loaded, (3) Steady State, (4) Unloading and (5) Exited.
  • When the exclusive scene's ZSD data is initially decoded, the scene enters the Entered state.
  • The event processor 714 then fires the OnLoad event so that the exclusive scene can perform any initial actions.
  • Once the event processor 714 completes the OnLoad event dispatch process, the exclusive scene enters the Loaded state.
  • the event processor 714 may have pending events in its queue 712 .
  • the event processor 714 clears out this queue 712 and then transitions the exclusive scene to its Steady State.
  • FIG. 8 illustrates an exemplary exclusive scene life cycle using scene membership messaging to show event processing in all states.
  • the process for unloading an exclusive scene is essentially the reverse of the load process. For this case, a GoToScene or other scene-changing action initiates the unload process. At this point, the exclusive scene changes to the Unloading state. Once all ZSD unload processing completes, the process transitions to the Exited state, wherein the client may optionally retain some or all of the exclusive scene's ZSD data. The changes in the exclusive scene's state are communicated to all currently loaded overlay scenes so the overlay scene can take action (if needed).
  • Overlay scenes exist independent and on top of the exclusive scene.
  • the three icons depicted in the upper righthand corner can be implemented as overlay scenes on the exclusive scene (the images of various DVD covers, implemented in the MPEG layer).
  • Another example, not shown in FIGS. 1 and 2 , is the provision of volume control and/or channel selection user interface objects as overlay scenes. Termination of an overlay scene can be accomplished from within the scene itself, or by request from the exclusive scene. Additionally, SceneMembershipNotification events can be used to limit the lifetime of an overlay scene to a particular set of exclusive scenes as shown, for example, in FIG. 9 .
  • Each of the exclusive scenes that belong to this scene group would send a SceneMembershipNotification message when they are loaded.
  • The overlay scene associated with this scene group would use the ExclusiveSceneChange events and the SceneMembershipNotification message to tell if the overlay scene should stay loaded or should terminate itself. As long as it receives a SceneMembershipNotification that matches its Scene Group, the overlay scene can stay loaded.
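  • The following toy sketch, with assumed scene-group names, illustrates how an overlay scene could use ExclusiveSceneChange events together with SceneMembershipNotification information to decide whether to stay loaded or terminate:

      # Sketch under stated assumptions (not the ZSD interpreter itself): an overlay
      # scene that stays loaded only while the exclusive scene belongs to its group,
      # driven by exclusive-scene state changes and scene-membership information.
      class OverlayScene:
          def __init__(self, scene_group):
              self.scene_group = scene_group
              self.loaded = True

          def on_exclusive_scene_change(self, state, membership):
              # Terminate unless the newly loaded exclusive scene is in our group.
              if state == "Loaded" and membership != self.scene_group:
                  self.loaded = False
              return self.loaded

      music_controls = OverlayScene(scene_group="MusicBrowser")
      print(music_controls.on_exclusive_scene_change("Loaded", "MusicBrowser"))  # True
      print(music_controls.on_exclusive_scene_change("Loaded", "MovieBrowser"))  # False
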
  • Triple tables (mentioned in FIG. 9) are described in more detail below.
  • each scene contains the following descriptive information:
  • a client device 44 may optionally implement a ZSD cache 722 .
  • ZSD-encoded scenes specify caching properties to direct clients when the caching behavior is no longer useful. For example, temporally important information such as sports scores should not be cached for a long period of time. Table 4 lists exemplary caching properties types and describes their use.
  • An exemplary scene data format has four fundamental data types (sometimes referred to herein as “elements”), specifically objects, events, actions, and resources.
  • objects describe scene components such as the bounds for buttons and icons in the MPEG layer, overlay text, and overlay images.
  • Events describe the notifications that are pertinent to the scene. These include mouse (pointer) move events, keyboard events, application state change events, etc.
  • Actions describe responses to events such as going to another scene, and finally, resources contain the raw data used by objects, events, and actions, e.g., image data.
  • each event is assigned a globally unique value.
  • Some event types employ filters to constrain the actions that they would trigger. For example, the OnKeyPress event uses the key of interest.
  • events can push resources onto the action stack, described below. Actions may use the information on the stack to modify their behavior.
  • OnFocusOut | 10 | Both | None | None | This event gets sent when the associated object loses focus.
  • OnSceneMembershipNotification | 11 | Scenes with Resource ID | SceneMembership Resource | SceneMembership Resource ID | This event gets sent when a NotifySceneMembership action gets fired.
  • OnScrollUp | 12 | Both | Wheel | Wheel | This event gets fired for every notch that the specified scroll wheel moves up.
  • OnScrollDown | 13 | Both | Wheel | Wheel | This event gets fired for every notch that the specified scroll wheel moves down.
  • OnTimeout | 14 | Both | Timer | Timer | This event gets fired when a timer expires.
  • OnActivate | 15 | Both | None | None | This event gets fired when an object gets activated.
  • OnExclusiveSceneChange | 16 | Overlays Only | Entered, Loaded, Unloading, Exited | None | This event gets fired when the exclusive scene changes. The argument specifies the exact moment in the scene change; see the scene life cycle sequence diagram.
  • OnUnload | 17 | Both | None | None | This event gets fired when an object gets unloaded as the result of a scene change.
  • Actions may be linked together in a ZSD Action Table to form programs.
  • a ZSD interpreter maintains an action stack. The action stack is initialized before dispatching the first action in an action list with the following items in order:
  • … this command bundles the context with the scene navigation request.
  • NavigateHome | 4 | None | None | None | 0 | Navigate to the home scene. If the server context buffer has information, this command bundles the context with the scene navigation request.
  • NavigateUp | 5 | Count, Duration | Parameters | None | −2 | Navigate to the scene that is geographically up n times in the specified time. If the server context buffer has information, this command bundles the context with the scene navigation request.
  • StartTimer | 6 | Timer, Duration | Parameters | None | −2 | Start a timer that sends a timeout event in the specified duration. Timers are global to the scene.
  • StopTimer | 7 | Timer | Parameters | None | −1 | Stop the specified timer.
  • StartHoverZoom | 8 | X, Y, Width, Height, Resource ID, Duration | Parameters | None | −7 | Hoverzoom to the end coordinates (x, y, width, height) over the specified duration, using the Resource ID associated with a HoverZoomPixelData resource to create the HoverZoom.
  • StopHoverZoom | 9 | Duration | Parameters | None | −1 | Stop the hoverzoom over the specified number of milliseconds.
  • Focus | 10 | Object ID | Parameters | None | −1 | Force the focus to change to the specified object.
  • ChangePointer | 11 | Resource ID, Object ID | Parameters | None | −2 | Change the pointer to that specified by the Resource ID when over the object specified by the Object ID.
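  • A toy interpreter sketch (not the ZSD format itself) of the action-chaining and action-stack behavior described above, using made-up action entries:

      # Toy sketch: actions chained through a "next action" link share an action
      # stack that the triggering event pre-populates with resources.
      def run_action_list(action_table, first_action_id, initial_stack):
          stack = list(initial_stack)         # e.g., resources pushed by the event
          action_id = first_action_id
          while action_id is not None:
              action = action_table[action_id]
              action["run"](stack)
              action_id = action.get("next")  # linked actions form small programs
          return stack

      actions = {
          1: {"run": lambda s: s.append("hoverzoom:Apollo13"), "next": 2},
          2: {"run": lambda s: print("StartHoverZoom using", s.pop()), "next": None},
      }
      run_action_list(actions, first_action_id=1, initial_stack=["ResourceID:42"])
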
  • UTF8String | 0 | UTF8String | This resource type holds string characters from the UTF8 character set. The string may not exceed 256 characters.
  • UnicodeString | 1 | UnicodeString | This resource type holds Unicode characters. The string may not exceed 256 characters.
  • MPEG2TransitionClip | 2 | Scene ID, Scene ID | This resource type points to an MPEG-2 clip file for the transition between the two scenes. Scenes list all of the MPEG-2 clips for clients with hard disk support or for servers. These clips may change based on the current theme.
  • Cursor | 3 | Image | This resource holds the cursor image.
  • Image | 4 | Image | This resource holds an image.
  • HoverZoom | 5 | PixMask, FGTransPix, FGOpaquePix, BGPix | This resource holds the image data for creating a hoverzoom.
  • SceneMembership | 6 | UTF8String | This resource identifies a scene's membership, such as belonging to an application.
  • OverlayScene | 7 | Scene | This resource holds an embedded ZSD description for an overlay scene.
  • the scene description format groups all scene interaction information into five tables: the object table, the event table, the action table, the resource table and one or more triple tables as described below in Tables 12-17. This division into tables eliminates most redundant information and enables quick lookup of interaction behavior on low end clients 44 .
  • Object Table: This table lists all of the objects in the scene. Objects may be high level entities such as PNodes or just regions on the scene.
  • Event Table: This table lists all events that need processing on this scene. A client may ignore any event not listed in this table.
  • Action Table: This table lists all actions that can be invoked on objects on this scene.
  • Resource Table: This table contains strings and images. Its main use is to decouple the string and image data from the above tables so that it is trivial for the server to switch themes and languages.
  • Triple Table: This table associates objects, events, and actions. A ZSD encoding may include more than one triple table and use actions to switch between the active one. This enables the creation of state machines within a scene.
  • Object Table fields: Object ID (a unique ID for this object; OID number 0 represents the whole scene); Object Type (the type of the object); Description (an optional string description to make the XML clearer); Parameters (additional parameters that describe the object).
  • Event Table fields: Event ID (a unique ID for this event); Event Type (the type of the event); Description.
  • Action Table fields: Action ID (a unique ID for this action); Action Type (the type of the action); Next Action; Description (an optional string description to make the XML clearer); Parameters (additional parameters that describe the action).
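  • A minimal sketch of how these five tables might fit together on a low-end client, with invented IDs; the triple table resolves an (object, event) pair to an action in a single lookup:

      # Minimal sketch of the five-table idea (field names and IDs assumed): the
      # triple table ties an (object, event) pair to an action, so a low-end client
      # can resolve interaction behavior with one dictionary lookup.
      scene = {
          "objects":   {0: "whole scene", 7: "Apollo13Cover region"},
          "events":    {3: "OnHover", 15: "OnActivate"},
          "actions":   {1: "StartHoverZoom", 2: "GoToScene Apollo13Detail"},
          "resources": {42: "hoverzoom pixel data"},
          "triples":   {(7, 3): 1, (7, 15): 2},   # (object ID, event ID) -> action ID
      }

      def resolve(scene, object_id, event_id):
          action_id = scene["triples"].get((object_id, event_id))
          return scene["actions"].get(action_id, "ignore event")

      print(resolve(scene, 7, 3))    # StartHoverZoom
      print(resolve(scene, 0, 15))   # ignore event (not listed for this object)
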
  • Client devices 44 without local storage request scenes and transitions from the server 42 . An exemplary set of messages which can be used to perform this function is provided below in Table 18.
  • the client/server link can, for example, be made over an Ethernet connection, QPSK channels (used by cable networks currently for OOB communications) or any other protocol or type of connection.
  • It will be appreciated that this message set is purely exemplary and that messages can be added or deleted therefrom.
  • NotifyTriple 7 Client Notify that a triple table entry matched. The client only sends this message in debug mode.
  • GenerateEvent 8 Server Generate and fire the specified event on the client. These events will be fired even in lockout mode. The client only accepts this message in debug mode.
  • Lockout 9 Server Lockout/unlock all user-generated events on the client. Example events include mouse and keyboard events. The client only accepts this message in debug mode.
  • Identity 10 Client The client sends this message every time that it establishes a connection with the server to identify itself.
  • NotifyServerContext 11 Client The client sends this message when its server context buffer is not empty and an action command invokes a server notification or request.
  • RequestScreenCapture 12 Server The server sends this message to request that the client take a snapshot of the screen and send it back to the server in a ScreenCapture message.
  • ScreenCapture 13 Client This is the response message to RequestScreenCapture. It contains the snapshot.

Hoverzoom
  • one feature of exemplary client-server architectures and methods according to the present invention is to provide the capability for sophisticated user interfaces to be generated at the client-side, while taking into account the relatively small amount of available memory and/or processing power associated with some existing client devices.
  • One such feature is a hoverzoom, e.g., the process whereby, when a user rolls a cursor over and/or pauses an indicator relative to a media item that can be selected, the image associated therewith is magnified so that the user can easily see which object is poised for selection, an example of which is illustrated in FIGS. 1( a ) and 1 ( b ).
  • One way to implement the hoverzoom effect is to have the user interface server 42 transmit complete sets of MPEG data corresponding to both the background layer and the foreground layer to the client 44 .
  • the amount of data needed to be transmitted by the user interface server 42 could be quite large to implement this exemplary embodiment of the present invention, resulting in additional delay in rendering the screen transitions on the client device 44 .
  • the relationship between pixels in the background layer and the foreground layer can be determined and used to reduce the amount of data that needs to be transmitted to the client device 44 to generate a hoverzoom effect.
  • This relationship can be relatively simple or somewhat more complex. For example, enlarging the size of the rectangular DVD cover art images of FIG. 1( a ) primarily involves enlarging a rectangular image to occlude neighboring images as part of the transition.
  • For more complex shapes, e.g., a doughnut-shaped object with a hole in the center, the hole in the middle will expand such that background layer pixels that were previously hidden become revealed after the hoverzoom effect has occurred.
  • each pixel in the foreground version of the image is categorized as being one of: (1) completely opaque (can extract pixel color from background layer, so do not need to resend for foreground layer generation), (2) transparent (irrelevant, so do not need to resend for foreground layer), (3) translucent (e.g., pixels around edges of image can have anti-aliasing applied thereto, need to send foreground layer data for these pixels) and (4) null (e.g., doughnut "hole" pixels which reveal background pixels, need to send background layer pixels since those cannot necessarily be extracted from the background layer that was originally sent to create the unzoomed interface screen).
  • This categorization can be done a priori using any desired technique, including manual observation and/or using the pseudocode processing techniques described below, and a foreground/background map is generated wherein each pixel in the foreground layer is categorized (an illustrative sketch of such a categorization appears at the end of this list).
  • a hoverzoom map can be stored for each image for which a hoverzoom effect can be triggered in the user interface.
  • Hoverzoom processing in accordance with this exemplary embodiment of the present invention is generally illustrated in FIG. 10.
  • an MPEG (background) version of the image 1000 and an unzoomed version 1002 of the image to be magnified are provided.
  • the background image 1000 is combined with the unzoomed version 1002 of the image and transmitted to the client device 44 in the MPEG data stream, after compression at step 1006 .
  • the foreground/background map described above is retrieved from storage at step 1008 , and used to determine which pixel data associated with the foreground layer and the background layer needs to be transmitted.
  • That data is encoded (compressed) at steps 1010 and 1012 , saved as a ZSD image file and transmitted to the client device 44 .
  • although this exemplary embodiment of the present invention transmits this information as scene data (ZSD data) outside of the MPEG data stream, it can alternatively be embedded in the MPEG data stream.
  • exemplary embodiments of the present invention contemplate that other server/client data transfers may benefit from selectively deciding, at one of the upstream nodes which is supplying data to the client device 44 , which type of encoding/data stream is appropriate for data to be transmitted, in particular for data associated with zooming user interfaces.
  • data is evaluated at block 1100 to determine whether it is first data or second data, and a type of encoding (and associated transmit data stream) for handling that data is selectively determined.
  • First and second data can be different types of data or the same type of data having different characteristics.
  • An example of the former is the hoverzoom data (background data being first data and foreground data being second data).
  • An example of the latter is text.
  • MPEG encoding is not particularly efficient for encoding text and, accordingly, it may be desirable to encode text under certain circumstances using another type of encoding, e.g., if the text to be transmitted is less than a predetermined font size (e.g., 16 point).
  • such client devices will continue to have difficulties rendering screens associated with zoomable user interfaces (ZUIs), as well as other applications, such as Internet browsing.
  • embedded platforms which typically run on such “thin” client devices, e.g., set-top boxes and the like, have access to limited memory/processing power and, therefore, cannot handle certain content and application support.
  • Another challenge which arises with such client devices is their lack of support for certain types of media and associated codecs.
  • a home personal computer 1200 can be inserted into the processing stream between the client device 1202 and the headend system 1204 to assist the thin client device 1202 in rendering content provided from the headend 1204 .
  • the content can be rendered in accordance with one or more software applications (SAs) 1208 running on the home computer 1200 .
  • software application 1208 can be a zoomable user interface which provides access to media as described above.
  • software application 1208 can be an Internet browser (described below in more detail with respect to FIG. 13 ).
  • software application 1208 can be both an Internet browser and a ZUI, and/or other applications, e.g., office applications, media applications, phone and communications applications, drawing applications, etc.
  • more of the processing can be performed by the (relatively) local home personal computer 1200 , which will typically have more memory and/or more processing bandwidth than the thin client device 1202 .
  • the home personal computer 1200 operates an Internet browser 1208 (acting as a software application 1208 ) which is remoted to the TV 1206 as follows.
  • a user, e.g., pointing toward the TV 1206 using a 3D pointing device 1300 and providing keystroke inputs via a virtual keyboard 1302 displayed on the TV 1206, requests access to a particular web page 1304.
  • This user input information is relayed to user input function 1306 in the client device 1202 , which passes the information on to a corresponding function of the home PC 1200 .
  • Home PC 1200 uses the, e.g., input address, and the browser application 1208 to access web page 1304 .
  • other types of devices e.g., gaming consoles, network attached storage (NAS) devices, cell phones, PDAs, etc., which have enough processing capability as well as access to the desired interface and other support features, could be used in place of home PC 1200 .
  • the web page 1304 typically has one or more objects (also sometimes referred to as “rectangles”) associated therewith.
  • web page 1304 has a video rectangle and an audio rectangle associated therewith.
  • the PC 1200 's processor (not shown) scans the web page 1304 , and more precisely the HTML code associated therewith, to identify how many, and what type, of rectangles are present on the web page 1304 .
  • the PC 1200 matches the identified information with the known capabilities of the client 1202 to determine what type of subsequent processing, if any, is needed before it sends information about the web page over to the client device 1202 for display on TV 1206 .
  • the client device 1202 supports MPEG encoded video (i.e., has an MPEG codec) but does not support Flash encoded video content.
  • if the home PC 1200 scans a web page and determines that the web page has a Flash encoded rectangle, it will first re-encode (block 1310) that particular rectangle to MPEG so that the thin client 1202 can fully display the web page 1304 on the television 1206. Once selected video rectangles are re-encoded at block 1310, they are passed through to the client device 1206 via video transmit function 1312 (which may perform other coding operations associated with transmission of the video data) to video replay function 1314 which, e.g., decodes the received video data for handling by the client's graphics chip 1316.
  • static graphics and audio rectangles associated with the web page 1304 can be identified as part of the HTML scanning process and coded directly for transmission from the home PC 1200 via screen transmit 1318 and audio transmit 1320 functions, respectively.
  • the resulting data streams from blocks 1318 and 1320 are received by corresponding functions 1322 and 1324 on the client side and used to recreate the web page 1304 on the television 1206 .
  • the home PC 1200 has the capability to re-encode video content into a format useable by the client device 1206 . Such a transcoding operation may, for example, be performed at either the signal level or the rendering level of the processing. According to alternative exemplary embodiments, the home PC 1200 is able to transmit new codecs as well as codec updates to the client device 1206 for its use. Initially, the home PC 1200 and client device 1206 communicate such that home PC 1200 understands which codecs the client device 1206 has.
  • the home PC 1200 can either translate the video into a format known by the client device 1206 , or transmit the new codec to the client device for its use, followed by the desired video content.
  • a plurality of re-encoding functions 1310 can be provided as video plug-ins for home PC 1200 to adapt various content which may be found on web pages to the known capabilities of the client device 1202 , which capabilities (such as the types and/or versions of video codecs provided in the client 1202 ) can be stored by the home computer 1200 , e.g., in a memory associated therewith.
  • the interface 1312 , 1318 and 1320 via which it provides data to the client 1202 can be the same, i.e., a standardized interface for remoting a home PC 1200 to the television 1206 via a client device 1202 such as a wireless home network, e.g., a LAN.
  • Systems and methods for processing data according to exemplary embodiments of the present invention can be performed by processors executing sequences of instructions contained in a memory device (not shown). Such instructions may be read into the memory device from other computer-readable mediums such as secondary data storage device(s). Execution of the sequences of instructions contained in the memory device causes the processor to operate, for example, as described above. In alternative embodiments, hard-wire circuitry may be used in place of or in combination with software instructions to implement the present invention.
  • Communications node 1400 can contain a processor 1402 (or multiple processor cores), memory 1404 , one or more secondary storage devices 1406 , software application (SA) and a communications interface 1408 .
  • processor 1402 is capable of processing instructions, e.g., software instructions 1408, in support of a client device to increase the client device's capabilities.
  • processor 1402 can receive media desired by the client device and translate it into a format usable by the client device prior to transmitting the translated media.
  • communications node 1400 is capable of performing the tasks of a home PC 1200 (or other device) as described in the exemplary embodiments herein to augment the capabilities of a client device 1206 .
  • a method for augmenting a client-server architecture includes the steps of: receiving a request to perform at least one function in step 1502; processing the request to perform the at least one function in step 1504; performing the at least one function which results in a first output in step 1506; selectively translating the first output into a format usable by the client device into a second output in step 1508; and transmitting either the first output or the second output to the client device in step 1510.
  • although (MPEG) image and/or video data is described as being transmitted through all or part of a cable network, the present invention is equally applicable to systems wherein the image and/or video data is available locally, e.g., on a home disk or from a local server. All such variations and modifications are considered to be within the scope and spirit of the present invention as defined by the following claims. No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article "a" is intended to include one or more items.
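As a rough illustration of the hoverzoom pixel categorization referenced in the bullets above, the following Python sketch builds a foreground/background map from a simple (r, g, b, a) pixel model. The function names, the alpha-based tests and the background-availability helper are assumptions made for illustration only; they are not taken from the patent's own pseudocode.

    # Illustrative sketch only: categorize foreground pixels for a hoverzoom map.
    OPAQUE, TRANSPARENT, TRANSLUCENT, NULL = range(4)

    def build_hoverzoom_map(foreground, background_available):
        """foreground: 2-D list of (r, g, b, a) tuples for the zoomed image.
        background_available: 2-D list of booleans, True where the corresponding
        background pixel was already sent as part of the unzoomed MPEG scene."""
        hz_map = []
        for y, row in enumerate(foreground):
            map_row = []
            for x, (_r, _g, _b, a) in enumerate(row):
                if a == 255:
                    category = OPAQUE        # color recoverable from data already sent
                elif 0 < a < 255:
                    category = TRANSLUCENT   # anti-aliased edge: resend foreground data
                elif background_available[y][x]:
                    category = TRANSPARENT   # irrelevant: nothing to resend
                else:
                    category = NULL          # "doughnut hole": resend background pixels
                map_row.append(category)
            hz_map.append(map_row)
        return hz_map

    def pixels_to_resend(hz_map):
        # Only translucent and null entries require extra data in the hoverzoom image.
        return [(x, y) for y, row in enumerate(hz_map)
                for x, c in enumerate(row) if c in (TRANSLUCENT, NULL)]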

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

Systems and methods according to exemplary embodiments augment the capabilities of a client device. The client device can be augmented by a device which has additional processing and memory capability to perform additional functions such as, for example, the translation of desired media into a format usable by the client device.

Description

RELATED APPLICATIONS
This application is related to U.S. patent application Ser. No. 11/144,880, filed on Jun. 3, 2005, entitled “Client-server Architectures and Methods for Zoomable User Interfaces”, the disclosure of which is incorporated here by reference. This application is related to, and claims priority from, U.S. Provisional Patent Application Ser. No. 61/010,226 filed on Jan. 7, 2008, entitled “Augmenting Client-Server Architectures and Methods with Personal Computers to Support Media Applications”, the disclosure of which is incorporated here by reference.
BACKGROUND
The present invention describes systems and methods for processing and transferring multimedia data between nodes in a communication system, e.g., an interactive television system, usable to create, for example, sophisticated entertainment user interfaces in the home.
Technologies associated with the communication of information have evolved rapidly over the last several decades. Television, cellular telephony, the Internet and optical communication techniques (to name just a few things) combine to inundate consumers with available information and entertainment options. Taking television as an example, the last three decades have seen the introduction of cable television service, satellite television service, pay-per-view movies and video-on-demand. Whereas television viewers of the 1960s could typically receive perhaps four or five over-the-air TV channels on their television sets, today's TV watchers have the opportunity to select from hundreds and potentially thousands of channels of shows and information. Video-on-demand technology, currently used primarily in hotels and the like, provides the potential for in-home entertainment selection from among thousands of movie titles. Digital video recording (DVR) equipment such as offered by TiVo, Inc., 2160 Gold Street, Alviso, Calif. 95002, further expand the available choices.
The technological ability to provide so much information and content to end users provides both opportunities and challenges to system designers and service providers. One challenge is that while end users typically prefer having more choices rather than fewer, this preference is counterweighted by their desire that the selection process be both fast and simple. Unfortunately, the development of the systems and interfaces by which end users access media items has resulted in selection processes which are neither fast nor simple. Consider again the example of television programs. When television was in its infancy, determining which program to watch was a relatively simple process primarily due to the small number of choices. One would consult a printed guide which was formatted, for example, as series of columns and rows which showed the correspondence between (1) nearby television channels, (2) programs being transmitted on those channels and (3) date and time. The television was tuned to the desired channel by adjusting a tuner knob and the viewer watched the selected program. Later, remote control devices were introduced that permitted viewers to tune the television from a distance. This addition to the user-television interface created the phenomenon known as “channel surfing” whereby a viewer could rapidly view short segments being broadcast on a number of channels to quickly learn what programs were available at any given time.
Despite the fact that the number of channels and amount of viewable content has dramatically increased, the generally available user interface and control device options and frameworks for televisions have not changed much over the last 30 years. Printed guides are still the most prevalent mechanism for conveying programming information. The multiple button remote control with simple up and down arrows is still the most prevalent channel/content selection mechanism. The reaction of those who design and implement the TV user interface to the increase in available media content has been a straightforward extension of the existing selection procedures and interface objects. Thus, the number of rows and columns in the printed guides has been increased to accommodate more channels. The number of buttons on the remote control devices has been increased to support additional functionality and content handling. However, this approach has significantly increased both the time required for a viewer to review the available information and the complexity of actions required to implement a selection. Arguably, the cumbersome nature of the existing interface has hampered commercial implementation of some services, e.g., video-on-demand, since consumers are resistant to new services that will add complexity to an interface that they view as already too slow and complex.
An exemplary control framework having a zoomable graphical user interface for organizing, selecting and launching media items is described in U.S. patent application Ser. No. 10/768,432, filed on Jan. 30, 2004 to Frank A. Hunleth, the disclosure of which is incorporated here by reference. This framework provides exemplary solutions to the afore-described problems of conventional interfaces. Among other things, such exemplary frameworks provide mechanisms which display metadata associated with media items available for selection by a user in a manner which is easy-to-use, but allows a large number of different media items to be accessible. One feature of exemplary frameworks described in this patent application is the use of zooming to provide, among other things, visually informative transitions between different semantic levels of media objects displayed by the interface and as a mechanism for highlighting objects currently being considered by a user.
The implementation of these types of advanced user interfaces is complicated by the system architectures and communication nodes involved in the processing and transport of data used to generate these interfaces from various sources to an end user's device, e.g., a television. As will be described in more detail below, this data includes so-called metadata that describes the media content. The term “metadata” as it is used herein refers to all of the supplementary information that describes the particular content of interest associated with media items available for selection by a user. As an example for movie objects, the metadata could include, e.g., the title, description, genre, cast, DVD cover art, price/availability, cast bios and filmographies, links to similar movies, critical reviews, user reviews, the rights associated with the metadata itself, rights associated with the content, advertising metadata linked to the content of interest, etc. An exemplary system for capturing, processing, synthesizing and forwarding metadata suitable for such advanced user interfaces is described in U.S. patent application Ser. No. 11/037,897 entitled “A Metadata Brokering Server and Method”, filed on Jan. 18, 2005, the disclosure of which is incorporated here by reference.
Once captured and processed, however, the data needs to be communicated from, for example, a head-end portion of the system to, for example, a set-top box in a manner which enables sufficient data to be supplied to render rich user interfaces, while at the same time being sensitive to time delay and operating within the constraints imposed by legacy hardware. Accordingly, it would be desirable to provide architectures and methods which resolve these conflicting parameters and enable advanced user interfaces to be generated.
SUMMARY
Systems and methods according to exemplary embodiments can improve service within the telecommunications field.
According to one exemplary embodiment a zoomable user interface system includes: a display device for displaying the zoomable user interface; a client device connected to the display device for receiving a command to zoom into the zoomable user interface and for transmitting a request to perform a function associated with the command; and a second device connected to the client device for receiving the request, performing the function and returning a result to the client device, wherein the client device uses the result to perform the zoom into the zoomable user interface on the display device.
According to another exemplary embodiment a method for augmenting a client device includes: receiving a request to perform at least one function; processing the request to perform the at least one function; performing the at least one function which results in a first output; selectively translating the first output into a format usable by the client device into a second output; and transmitting either the first output or the second output to the client device.
According to yet another exemplary embodiment a communications node for augmenting a client device includes: a processor in conjunction with at least one software application for processing a request to perform at least one function, wherein the processor performs the steps of: performing the at least one function which results in a first output; and selectively translating the first output into a format usable by the client device into a second output; a memory for storing the at least one software application, the first output and the second output; and a communications interface for receiving the request to perform at least one function and for transmitting either the first output or the second output to the client device.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings illustrate exemplary embodiments of the present invention, wherein:
FIGS. 1( a) and 1(b) depict screens of a user interface showing a hoverzoom feature which can be generated using data processed in accordance with the present invention;
FIG. 2 depicts another screen of a user interface which can be generated using data processed in accordance with the present invention;
FIG. 3 is a table showing exemplary metadata types and sources;
FIG. 4 shows a client-server architecture according to exemplary embodiments of the present invention;
FIG. 5 illustrates the MPEG-2 transition and scene encoder of FIG. 4 in more detail in accordance with an exemplary embodiment of the present invention;
FIG. 6 illustrates the scene request processor of FIG. 4 in more detail in accordance with an exemplary embodiment of the present invention;
FIG. 7 illustrates the client UI state machine of FIG. 4 in more detail in accordance with an exemplary embodiment of the present invention;
FIG. 8 depicts an exemplary messaging interaction between an event processor, scene loader, exclusive scene and overlay scene in accordance with an exemplary embodiment of the present invention;
FIG. 9 shows another exemplary messaging interaction associated with architecture and methods in accordance with the present invention.
FIG. 10 depicts a technique for encoding data associated with a hoverzoom effect according to an exemplary embodiment of the present invention;
FIG. 11 illustrates selective encoding of data for transmission to a client device according to an exemplary embodiment of the present invention;
FIG. 12 shows an exemplary embodiment wherein a home PC augments a client device according to an exemplary embodiment of the present invention;
FIG. 13 shows an exemplary embodiment wherein a home PC augments a client device according to an exemplary embodiment of the present invention;
FIG. 14 shows a communications node according to exemplary embodiments; and
FIG. 15 shows a method flow diagram for augmenting a client device according to exemplary embodiments.
DETAILED DESCRIPTION
The following detailed description of the invention refers to the accompanying drawings. The same reference numbers in different drawings identify the same or similar elements. Also, the following detailed description does not limit the invention. Instead, the scope of the invention is defined by the appended claims.
In order to provide some context for this discussion, exemplary user interface screens which can be created using data and instructions forwarded from a server to a client in accordance with exemplary embodiments of the present invention are shown in FIGS. 1(a) and 1(b). Therein, a portion of an exemplary user interface screen which can be generated based on information transferred to an end user's system (e.g., set-top box/television or personal computer) shows ten media selection items. For more information regarding this purely exemplary interface, including previous screens and navigation techniques, the interested reader is directed to the above-incorporated by reference U.S. patent application Ser. No. 10/768,432 as well as to U.S. patent application Ser. No. 11/437,215, entitled "Global Navigation Objects in User Interfaces", the disclosure of which is also incorporated here by reference. It will be appreciated that such user interfaces are purely exemplary and that architectures and methods in accordance with the present invention can be implemented to support other interfaces.
FIG. 1( a) shows a user interface screen having a plurality of media objects available for selection as images, e.g., DVD cover art. In FIG. 1( b), the image associated with the movie “Apollo 13” has been magnified as a result of a preliminary selection activity, e.g., a user passing a cursor (not shown) over this image on the display screen. This feature, referred to as a hoverzoom effect and described in more detail below under the heading “Hoverzoom”, can be achieved by transmitting data (e.g., metadata) and instructions between nodes, e.g., a headend and a set-top box according to exemplary embodiments of the present invention. At lower levels of the user interface, additional data, e.g., metadata delivered from content providers, can be used to generate the user interface screen. For example, as shown in FIG. 2, user selection of this magnified image, e.g., by depressing a button on an input device (not shown), can result in a further zoom to display additional details. For example, information about the movie “Apollo 13” including, among other things, the movie's runtime, price and actor information is shown. Those skilled in the art will appreciate that other types of information could be provided here. Additionally, this GUI screen includes GUI control objects including, for example, button control objects for buying the movie, watching a trailer or returning to the previous GUI screen (which could also be accomplished by depressing the ZOOM OUT button on the input device). Hyperlinks generated from metadata processed in a manner described below can also be used to allow the user to jump to, for example, GUI screens associated with the related movies identified in the lower right hand corner of the GUI screen of FIG. 2 or information associated with the actors in this movie. In this example, some or all of the film titles under the heading “Filmography” can be implemented as hyperlinks which, when actuated by the user via the input device, will cause the GUI to display a GUI screen corresponding to that of FIG. 2 for the indicated movie. Some or all of the information used to generate the interface screens of FIGS. 1( a), 1(b) and 2 comes from metadata provided by one or more metadata providers and processed in accordance with exemplary embodiments of the present invention as will now be described.
The interface screens shown in FIGS. 1(a), 1(b) and 2 are purely exemplary and metadata (and other data) transferred and processed in accordance with the present invention can be used to support other interfaces or for purposes other than interface generation. Likewise, many different types of information can be received and processed in accordance with the present invention. Examples of metadata types, sources and associated uses, e.g., for a TV browser interface, a video-on-demand (VOD) interface or a music browser, are shown in the table of FIG. 3. Of particular interest for this detailed discussion are the zooming features associated with user interfaces generated in accordance with these exemplary embodiments of the present invention. Although the present invention is not limited to techniques or systems for generating zoomable user interfaces, and in fact one exemplary embodiment described below supports other applications, such as an Internet browser, some of the client/server features discussed herein are particularly beneficial for use in conjunction with user interfaces which include zooming transitions between user interface screens. For the purpose of this detailed description, the terms "zoom", "zoomable" and "zooming" refer to techniques wherein a user interface action results in changes to the displayed portion of the user interface that create a change of perspective which is consistent and informative to the user. Zooming will typically include changes in object magnification (e.g., camera-style zooming), but is expressly not limited thereto. For example, another aspect of zooming in accordance with user interfaces is semantic zooming which includes the modification of a zoomed object in a manner which is independent of magnification, e.g., the addition of text or a graphic to an object which was not present as part of the object (at any level of magnification) prior to the semantic zoom. For more information related to zoomable user interfaces, the interested reader is referred to the above-identified, incorporated by reference patent application.
For context, one example of a zooming transition in accordance with exemplary embodiments of the present invention is the zooming transition between the user interface screens of FIGS. 1(a) and 1(b), which involves a magnification change of a hoverzoomed object and, optionally, semantic zooming to that object as well. Another example is found in the transition between the user interface screens of FIG. 1(b) and FIG. 2, wherein the image associated with "Apollo 13" has its magnification changed (e.g., enlarged in FIG. 2 relative to the similar image shown in FIG. 1(b)) and translated for use in FIG. 2. Panning effects can also be used to animate the zooming transition.
A general client-server architecture 40 for providing data processing and transport according to an exemplary embodiment of the present invention is shown in FIG. 4. Therein, a user interface server 42 communicates with a client device 44 to generate a user interface on a display device 46 in conjunction with inputs from, for example, a pointing device 48. Communication of data, e.g., metadata and content data, between the user interface server 42 and the client device 44 can involve any number of intermediate nodes (not shown) between the user interface server 42 and the client device 44 including hubs, distribution servers, and the like. Moreover, some or all of the functional elements illustrated as being part of the user interface server 42 can be located within one or more of these intermediate nodes or reside at the headend of the system 40. The display device 46 can, for example, be a television, a computer monitor/display, or any other display device. The client device 44 can be embodied as a set-top box, a personal computer, or any other device including a processing unit. The pointer 48 can, for example, be a free space pointing device, a mouse, a remote control device, a track ball, a joystick, or any other device capable of providing a pointing capability and can be connected to the client device 44 either via wireline or wirelessly.
According to this exemplary embodiment of the present invention, the server 42 includes a transition and screen capturer 50, an MPEG-2 transition and scene encoder, an MPEG and ZSD cache 54, a scene request processor 56 and an MPEG stream transmitter 58, which components operate to generate and manage the streaming of MPEG-2 data to client devices 44, and to receive and respond to upstream requests from clients 44. The transition and screen capturer 50 automates the gathering of scene data used to generate the user interface. At a high level, this can be accomplished by navigating through, e.g., a scene graph provided as input to the transition and screen capturer 50, along with metadata and content, and calling the MPEG-2 transition and scene encoder 52 to generate MPEG-2 clips and scene description files associated with selected scenes to be displayed on display device 46. Detailed information associated with scene description files and formats (also referred to herein as “ZSD data”) according to exemplary embodiments of the present invention is provided below under the header “Scene Description Data Format”.
Navigation through the scene graph involves capturing and processing data associated with the various scenes which can be generated by the user interface. A “scene” as that term is used herein generally refers to the framework associated with any user interface screen which can be generated by the user interface which, despite the sophisticated and dynamic nature of user interfaces in accordance with the present invention, are all known a priori albeit at least some of the data used to populate the scenes will vary, e.g., over time as content providers change, for example, metadata associated with their offerings. Thus, although FIGS. 1( a), 1(b) and 2 show only portions of user interface screens, each of those complete screens would be considered to be a scene. Table 1 below lists exemplary data which can be collected for each transition and Table 2 lists exemplary data for each scene:
TABLE 1
Per-Transition Information
  From Scene ID: The scene ID of the starting scene
  To Scene ID: The scene ID of the destination scene
  Focus Command: The command to move the focus in the interface to the icon, button, etc. that causes the transition when selected. An example of a focus command is to move the mouse pointer over an icon to cause it to focus. Another focus command could directly activate a hoverzoom effect.
  Activation Command: This command activates the icon, button, etc. to start the transition from the "From Location" to the "To Location".
TABLE 2
Scene Information
  Scene ID: The scene ID of this scene
  Location: The interface location instance for the starting scene
  Scene Description: The user supplied description or an automatically generated description.
The transition and scene capturer 50 is thus able to acquire all of the information necessary to simulate all desired transitions in the user interface from, for example, a database not shown in FIG. 4 which contains the complete user interface "universe". The transition and scene capturer 50 includes navigator controller and capture controller components which become active as a user generates inputs to the interface which command scene transitions. At a high level, the navigation controller has the responsibility of navigation to and from every transition and scene. An exemplary navigation controller performs the following operations: (1) obtain the next transition, (2) navigate to the "from" scene, (3) execute a focus command for this transition, (4) notify the capture controller with the scene and transition information, (5) execute the activation command, (6) notify the capture controller when the animation completes, (7) notify the capture controller with the scene and transition information reversed (for the back transition), (8) invoke a goBack() routine, and (9) notify the capture controller when the animation completes.
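The nine operations of the navigation controller can be pictured as a simple capture loop, sketched below in Python. All of the class and method names (get_next_transition, go_to_scene, notify_transition and so on) are assumptions for illustration; only the ordering of the operations follows the description above.

    def capture_all_transitions(navigator, capture_controller):
        """Walks every transition in the interface and drives the capture controller."""
        while True:
            transition = navigator.get_next_transition()                  # (1)
            if transition is None:
                break
            navigator.go_to_scene(transition.from_scene_id)               # (2)
            navigator.execute(transition.focus_command)                   # (3)
            capture_controller.notify_transition(transition)              # (4)
            navigator.execute(transition.activation_command)              # (5)
            navigator.wait_for_animation()
            capture_controller.notify_animation_complete()                # (6)
            capture_controller.notify_transition(transition.reversed())   # (7) back transition
            navigator.go_back()                                           # (8)
            navigator.wait_for_animation()
            capture_controller.notify_animation_complete()                # (9)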
The capture controller integrates with the MPEG-2 transition and scene encoder 52 to create the MPEG-2 clips and ZSD files. The capture controller receives notifications from the navigation controller when the transition begins and ends and invokes routines on the MPEG-2 transition and scene encoder at every animation step. To provide a visual indication of the progress to the user, the capture controller ensures that the canvas still paints the visible scene graph to the scene and adds a text overlay that indicates the percent of transitions executed.
A detailed example of an MPEG-2 transition and scene encoder 52 according to an exemplary embodiment of the present invention is shown in FIG. 5. Raw scene data, e.g., images, text, metadata, etc., is delivered from the transition and screen capturer 50 and provided to an object extraction unit 502, a client-rendered feature extraction unit 504 and a video information extraction unit 506. The object extraction unit 502 (handling user-interactable objects on the user interface screens) and client-rendered feature extraction unit 504 (handling, e.g., hoverzoom and text, features to be rendered by the client device 44) operate, under the control of the render-location controller 508, to extract information from the raw data stream and provide it to the ZSD encoder 507, which encodes the extracted information using the scene description format described in detail below. None, some or all of the ZSD encoded data can be sent within the MPEG data stream, for example as part of the private data fields within MPEG frames, using MPEG-2 data encapsulator 509, while other ZSD encoded data can be transmitted using the OOB link described above with respect to FIG. 4.
The video information extraction unit 506 operates to extract video information suitable for MPEG-2 encoding, again under the control of the render location controller 508. The ability of render location controller 508 to selectively determine which type of encoding to apply to particular data, in this example MPEG or ZSD encoding, and the benefits associated therewith are described in more detail below with respect to FIG. 11.
As used herein, the term “MPEG encoding” is generic to MPEG-1, MPEG-2 and similar encodings, although some exemplary embodiments of the present invention do specifically refer to MPEG-2 encoding. General details associated with MPEG encoding per se will be known to those skilled in the art and are further available in the form of draft standards (e.g., ISO CD 11172). An exemplary MPEG-2 encoder 500 includes a plurality of unnumbered blocks which operate in accordance with the standard to perform MPEG-2 encoding (an exception being motion estimation unit 510 described in more detail below). One example of an MPEG encoder which provides a more detailed description of the unnumbered blocks of MPEG encoder 500 can be found in the various MPEG-2 standards documents, for example, Test Model 5 documents which evolved as a joint effort between ITU-T SG15.1 (known then as CCITT SG XV, Working Party XV/1, Experts Group on ATM Video Coding) and ISO/IEC JTC1/SC29 WG11 (MPEG). Specifically, the MPEG version of Test Model 5 is known as MPEG 93/225b and the ITU version of Test Model 5 is known as AVC-445b, the disclosures of which are incorporated here by reference. MPEG encoded data is stored in the MPEG/ZSD cache unit 54 for subsequent transmission to the client device 44.
Of particular interest with respect to the exemplary MPEG-2 transition and scene encoder 52 illustrated in FIG. 5 is the encoder hint collector 512 and motion estimator 510. One aspect of MPEG-encoder 500 in the MPEG-2 transition and scene encoder 52 is its ability to quickly and efficiently provide a high level of compression of the MPEG data being encoded. Among other things, this can be achieved by using knowledge of where each of the scenes are “located” relative to one another in the user interface, which is defined a priori in exemplary user interfaces according to the present invention. This enables selective simplification of the standard MPEG motion estimation algorithm, which in turn speeds up the MPEG encoding process and/or reduces the amount of processing power that needs to be dedicated thereto. More specifically, when encoding sequential MPEG frames in an MPEG data stream, part of the information that is used to perform the encoding is information regarding where blocks of pixels have moved from one MPEG frame to the next MPEG frame (and/or backwards from a previous MPEG frame to a current MPEG frame). For example, if a block of pixels in a first MPEG frame has simply moved to a new screen location in a second MPEG frame, it is generally more efficient to determine and transmit a motion vector associated with that block of pixels than to re-encode that entire block of pixels again and resend them. Similarly, if that block of pixels has experienced a relatively uniform color difference (e.g., by transiting through a lighting effect), it is still efficient to provide a motion vector and some color difference information rather than retransmit the entire block of pixels.
In order to accommodate random object movement to support all types of, e.g., video data compression, standard MPEG motion estimation algorithms perform a search for blocks of pixel data to determine which blocks of pixels have moved (and in which direction) from frame to frame. For example, some searches, called full-pel searches, use 16×16 blocks, while others, called half-pel searches, use 16×8 blocks. These searches can become computationally expensive, particularly for high definition video data, and have been estimated to require up to 80% of the processing time/power associated with the operations performed by a standard MPEG encoder 500 (e.g., without the modifications introduced by the encoder hint collector 512). Thus, according to exemplary embodiments of the present invention, motion estimation associated with MPEG encoding is simplified using the fact that the user interface being generated by these client/server architectures does not involve random movement of objects. For example, in transitioning between the exemplary user interface screens of FIGS. 1(b) and 2, the image associated with "Apollo 13" moves from a first position on a display screen to a second position on a display screen (optionally with some magnification), both positions being known a priori to the encoder hint collector 512, which can calculate an MPEG motion vector therefrom.
Thus, the encoder hint collector 512 can pass the MPEG motion vector to motion estimation unit 510 with a command to use the passed motion vector for performing MPEG compression rather than performing a search in accordance with standard MPEG techniques. However, this use of knowledge of interrelated user interface screens to generate MPEG motion vectors may not always be able to generate a valid MPEG motion vector (e.g., due to limitations on the number of bits assigned for expressing MPEG motion vectors). Accordingly, encoder hint collector 512 also has the capability to command motion estimation unit 510 to employ the standard MPEG search algorithm to determine motion vectors on a frame-by-frame (or other) basis. In addition to either (1) using motion vectors which are generated entirely using the standard MPEG search algorithm or (2) using motion vectors which are generated entirely by the encoder hint generator 512 without use of the standard MPEG search algorithm, a third category of motion vectors which can be determined in accordance with the present invention are those which are calculated by the standard MPEG search algorithm having a search range which is limited in range based on the information available to the encoder hint collector 512.
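A simplified picture of how such an encoder hint might be derived is sketched below. The linear interpolation of the element's position and the symmetric MAX_MV bound are assumptions chosen for illustration; the real limit on MPEG-2 motion vectors depends on the f_code range selected by the encoder.

    MAX_MV = 64  # assumed symmetric motion-vector bound, in pixels, for illustration

    def motion_hint(element_start, element_end, frame_index, total_frames):
        """Return a (dx, dy) hint for the motion estimator, or None to request a
        conventional (possibly range-limited) search instead."""
        t_prev = (frame_index - 1) / total_frames
        t_cur = frame_index / total_frames
        # Element position interpolated at the previous and current frames.
        prev_x = element_start[0] + (element_end[0] - element_start[0]) * t_prev
        prev_y = element_start[1] + (element_end[1] - element_start[1]) * t_prev
        cur_x = element_start[0] + (element_end[0] - element_start[0]) * t_cur
        cur_y = element_start[1] + (element_end[1] - element_start[1]) * t_cur
        dx, dy = round(cur_x - prev_x), round(cur_y - prev_y)
        if abs(dx) > MAX_MV or abs(dy) > MAX_MV:
            return None   # fall back to the standard (or range-limited) search
        return dx, dy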
Referring back again to FIG. 4, MPEG data and scene description data generated by blocks 50 and 52 can be cached in memory device 54 for retrieval as needed by the scene request processor 56. The scene request processor 56 processes requests for scenes from client 44, e.g., if the client user interface state machine 62 receives an indication that the cursor associated with pointer 48 has paused over the image associated with “Apollo 13” (FIG. 1), then a request is sent back to scene request processor 56 to initiate a hoverzoom scene (described below) or if the client user interface state machine 62 receives an indication that the user wants to view a more detailed scene associated with “Apollo 13” (FIG. 2), then a request is sent back to scene request processor 56 to initiate that scene. The scene request processor 56 returns MPEG-2 transitions and scene description data back to the client 44 in response to the upstream requests. According to exemplary embodiments described in more detail below, for certain upstream requests the scene request processor 56 may dynamically determine whether MPEG data, scene description data or some combination of both is appropriate to service the requests. A detailed example of the scene request processor 56 is illustrated in FIG. 6.
Therein, the client request processor 600 coordinates all client interaction, e.g., by interpreting client requests and dispatching those requests to the appropriate components within scene request processor 56. For example, the client request processor tracks states and statistics on a per-client basis and stores such information in database 602. An out-of-band (OOB) client communication component 604 handles all communication with clients over OOB channels, including responding to connection requests and extracting protocol requests. The video playback control function 606 coordinates the operation of the MPEG-2 stream generation components, e.g., the scene loop generator 608 and the transition playback function 610. The scene loop generator 608 component generates loops of the user interface scenes and transmits them when no transitions occur. The transition playback function 610 loads MPEG-2 transition streams that were previously generated by the MPEG-2 transition and scene encoder 52 (e.g., via cache 54) and streams them to the requested client. The transition playback function 610 may serve multiple streams simultaneously. The MPEG-2 transport stream encapsulation unit 612 updates the MPEG-2 transport stream as appropriate and forwards the stream to the UDP encapsulation unit 614 which groups MPEG-2 transport stream packets together and sends them over UDP to a IP to QAM gateway (not shown) in the MPEG stream transmitter 58.
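As a small illustration of the final step in that chain, the sketch below groups transport stream packets into UDP datagrams bound for an IP-to-QAM gateway. The seven-packets-per-datagram grouping (7 × 188 = 1316 bytes, which fits a typical 1500-byte Ethernet MTU) is a common convention assumed here for illustration; the description above does not specify a grouping factor.

    import socket

    TS_PACKET_SIZE = 188          # fixed MPEG-2 transport stream packet size
    PACKETS_PER_DATAGRAM = 7      # assumed grouping: 7 * 188 = 1316 bytes per datagram

    def send_transport_stream(ts_bytes, gateway_addr):
        """Group transport stream packets and send them over UDP toward the gateway."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            chunk = TS_PACKET_SIZE * PACKETS_PER_DATAGRAM
            for offset in range(0, len(ts_bytes), chunk):
                sock.sendto(ts_bytes[offset:offset + chunk], gateway_addr)
        finally:
            sock.close()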
Referring again to FIG. 4, MPEG stream transmitter 58, on the server side, and MPEG stream receiver 64 and MPEG decoder 66, on the client side, enable the communication of both metadata, e.g., data used to populate the text fields shown in the user interface screen of FIG. 2, and content via a video streaming protocol link. The MPEG transmitter 58, receiver 64 and decoder 66 can be implemented using off-the-shelf components and, accordingly, are not described in detail herein. However, readers interested in more details relating to these elements, as well as other exemplary interactive television system architectures in which the present invention can be implemented, are referred to U.S. Pat. No. 6,804,708 to Jerding et al., the disclosure of which is incorporated here by reference. The on-screen display (OSD) graphics controller 68 receives scene data from the client state machine 62 and input from the cursor controller 69 to generate overlay graphics and local animations, e.g., zooming transitions, for the user interface. The MPEG video data and the OSD video data output from decoder 66 and OSD graphics controller 68, respectively, are combined by video combiner 70 and forwarded to display device 46 to generate the user interface. As mentioned above, the DVD cover art images shown in FIG. 1(a) are examples of user interface elements created using MPEG video data, while the zoomed version of the "Apollo 13" image in FIG. 1(b) and the circular icons in the upper right hand corner of the user interface screen of FIG. 1(a) are examples of user interface elements generated using scene description data.
Of particular interest for exemplary embodiments of the present invention is the client user interface state machine 62, a more detailed example of which is provided in FIG. 7. The client user interface state machine 62 interprets scene data and/or scripts received from the scene request processor 56 to present user interface scenes (e.g., as shown in FIGS. 1( a), 1(b) and 2) on client devices 44. The client user interface state machine 62 can also retrieve scene data and MPEG-2 transition clips from either the headend 42 (as represented by block 700) or from a local hard disk drive 702. Those skilled in the art will appreciate that, depending upon the system and/or type of client device involved, that only one data source 700, 702 may be present in a particular implementation of the present invention or that some other type of data source can be used. Out-of-band (OOB) communications 704 can be used to provide signaling and commands to the client user interface state machine 62 via an operating system (OS) 706, e.g., PowerTV, Linux, Win32, etc., and operating system portal layer 708. The OS and OS porting layer 706, 708 can also track the user's activities with respect to the user interface and provide data to an event mapper function 710. Event mapper 710 translates user interface data, e.g., cursor movement, voice command input, motion of free space pointer, etc., into events which may require some change in the user interface, e.g., display change, audio change, zooming transition, etc. For example, when the user's cursor hovers over or passes over the image of “Apollo 13” in FIG. 1( a), the event mapper 710 would receive raw cursor data from the OS and map that into, for example, a hoverzoom event which results in that image being slightly magnified as illustrated in FIG. 1( b) and described in more detail below. As another example, if the OS 706, 708 passed a button click through to the event mapper 710 while the cursor was positioned over the magnified version of the “Apollo 13” image in FIG. 1( b), indicating that the user wanted more detail regarding this movie, then the event mapper 710 could identify a “transition to detailed view event” associated therewith, leading to a transition to the user interface screen of FIG. 2.
Events detected by event mapper 710 are queued in the event queue 712 for processing by event processor 714. The event processor 714 coordinates the activities of the client user interface state machine 62 by receiving events from the event queue 712 and dispatching them to the action library 716 based on, for example, the currently active scene data and/or script. The action library 716, in conjunction with a scene data loader 720 and various storage units 718, 722, operates to generate the change(s) to the currently displayed user interface screen based on the detected event as will be described in more detail below with respect to the discussion of scene data.
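The path from a raw input to a dispatched action can be sketched as a small pipeline, as below. The event names, the hit test and the queue-based dispatch rule are assumptions made only to illustrate the flow from the event mapper 710 through the event queue 712 and event processor 714 to the action library 716.

    from collections import deque

    class EventMapper:
        """Maps raw OS input into user interface events (names are assumed)."""

        def __init__(self, scene):
            self.scene = scene

        def map_cursor(self, x, y):
            obj = self.scene.hit_test(x, y)          # e.g. the bounds of a DVD cover image
            if obj is not None and obj.has_hoverzoom:
                return ("OnHoverZoom", obj.object_id)
            return ("OnPointerMove", (x, y))

    class EventProcessor:
        """Drains the event queue and dispatches to the action library."""

        def __init__(self, action_library):
            self.queue = deque()                     # stands in for event queue 712
            self.actions = action_library            # stands in for action library 716

        def post(self, event):
            self.queue.append(event)

        def run_once(self):
            while self.queue:
                event = self.queue.popleft()
                for action in self.actions.lookup(event):   # based on active scene data
                    action.execute()                         # e.g. magnify image, request scene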
Scene Description Data Format
Having described some exemplary server/client architecture for generating user interfaces according to exemplary embodiments of the present invention, a second exemplary data format (in addition to MPEG/MPEG-2) which can be used in conjunction with this architecture will now be described. Although other data formats can be used in conjunction with the present invention, this exemplary data format effectively creates a state machine that enables the client device 44 to respond to user interactions and system events. This data format is arbitrarily extensible to support both very low powered client devices 44 and high end client devices 44, e.g., PCs. Other goals of this exemplary scene data format (also referred to as “ZSD”) include theme support, future language support, demo scripting, and automated test support.
The ZSD format supports two types of scenes: the exclusive scene and overlay scenes. Herein, the exclusive scene is referred to simply as the scene, since it occupies the full screen and contains the primary user interaction elements. Overlay scenes describe full or partial scenes that the client user interface state machine 62 logically overlays on top of the exclusive scene. While the exclusive scene changes as the user navigates, the overlay scenes may or may not change. This enables them to support features such as music controls, global navigation, bookmarks, etc., that follow the user as they navigate from exclusive scene to scene. Exclusive scenes launch overlay scenes initially, but overlay scenes may launch other overlays. Although it is possible to terminate all overlay scenes, the overlay scenes control their own lifetime based on interaction from the user or based on the current exclusive scene.
The exclusive scene and all overlay scenes logically exist in their own namespaces. In order for ZSD elements to refer to elements in other scenes, ZSD references as described herein could be modified to include a field to specify the namespace. Inter-scene communication is useful for operations such as notifying overlay scenes what is in the exclusive scene. To support inter-scene communication, the sender triggers actions to generate events. These events are then dispatched by the event processor 714 to each scene. When the event contains a Resource ID, that ID is mapped to an equivalent resource in the destination scene. If the destination scene does not contain an equivalent resource, the event processor 714 moves on to test dispatching the event to the next scene.
Every exclusive scene passes through the following states sequentially on the client, (1) Entered, (2) Loaded, (3) Steady State, (4) Unloading and (5) Exited. When the exclusive scene's ZSD data is initially decoded, the scene enters the Entered state. At this point, the event processor 714 fires the OnLoad event so that the exclusive scene can perform any initial actions. Once the event processor 714 completes the OnLoad event dispatch process, the exclusive scene enters the Loaded state. At this point, the event processor 714 may have pending events in its queue 712. The event processor 714 clears out this queue 712 and then transitions the exclusive scene to its Steady State. FIG. 8 illustrates an exemplary exclusive scene life cycle using scene membership messaging to show event processing in all states. The process for unloading an exclusive scene is essentially the reverse of the load process. For this case, a GoToScene or other scene-changing action initiates the unload process. At this point, the exclusive scene changes to the Unloading state. Once all ZSD unload processing completes, the process transitions to the Exited state, wherein the client may optionally retain some or all of the exclusive scene's ZSD data. The changes in the exclusive scene's state are communicated to all currently loaded overlay scenes so the overlay scene can take action (if needed).
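The exclusive scene life cycle described above can be summarized as a small state machine. In the Python sketch below the state names mirror the five states listed above, while the method names and the injected event processor are assumptions for illustration.

    from enum import Enum, auto

    class SceneState(Enum):
        ENTERED = auto()
        LOADED = auto()
        STEADY_STATE = auto()
        UNLOADING = auto()
        EXITED = auto()

    class ExclusiveScene:
        """Illustrative sketch; event_processor stands in for event processor 714
        and is assumed to provide fire() and drain_pending()."""

        def __init__(self, event_processor):
            self.events = event_processor
            self.zsd_data = None
            self.state = None

        def load(self, zsd_data):
            self.zsd_data = zsd_data                 # stand-in for real ZSD decoding
            self.state = SceneState.ENTERED
            self.events.fire("OnLoad", scene=self)   # scene performs its initial actions
            self.state = SceneState.LOADED
            self.events.drain_pending()              # clear events queued during loading
            self.state = SceneState.STEADY_STATE

        def unload(self, keep_cached=False):
            # Initiated by GoToScene or another scene-changing action.
            self.state = SceneState.UNLOADING
            # ... ZSD unload processing would run here ...
            self.state = SceneState.EXITED
            if not keep_cached:
                self.zsd_data = None                 # client may optionally retain ZSD data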
Overlay scenes exist independently of and on top of the exclusive scene. For example, in FIG. 1(a) the three icons depicted in the upper righthand corner (home, up arrow and TV) can be implemented as overlay scenes on the exclusive scene (the images of various DVD covers, implemented in the MPEG layer). Another example, not shown in FIGS. 1 and 2, is the provision of volume control and/or channel selection user interface objects as overlay scenes. Termination of an overlay scene can be accomplished from within the scene itself, or by request from the exclusive scene. Additionally, SceneMembershipNotification events can be used to limit the lifetime of an overlay scene to a particular set of exclusive scenes as shown, for example, in FIG. 9. Each of the exclusive scenes that belong to this scene group would send a SceneMembershipNotification message when they are loaded. The overlay scene associated with this scene group would use the ExclusiveSceneChange events and the SceneMembershipNotification message to tell if the overlay scene should stay loaded or should terminate itself. As long as it receives a SceneMembershipNotification that matches its Scene Group, the overlay scene can stay loaded. Triple tables (mentioned in FIG. 9) are described in more detail below.
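An overlay scene that limits its lifetime to a scene group can be sketched as a simple check on the notifications it receives, as below. The handler and field names are assumptions; only the stay-loaded/terminate decision follows the description above.

    class OverlayScene:
        """Illustrative sketch of scene-group based overlay lifetime management."""

        def __init__(self, scene_group_id):
            self.scene_group_id = scene_group_id
            self.loaded = True
            self.member_seen = False

        def on_scene_membership_notification(self, group_id):
            # Sent by each exclusive scene in a scene group when it loads.
            if group_id == self.scene_group_id:
                self.member_seen = True

        def on_exclusive_scene_change(self):
            # A new exclusive scene finished loading: stay only if it claimed membership.
            if not self.member_seen:
                self.loaded = False      # terminate this overlay
            self.member_seen = False     # reset for the next exclusive scene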
According to one exemplary embodiment of the present invention, each scene contains the following descriptive information:
TABLE 3
Scene Information Fields

Field | Description
Scene ID | A globally unique ID for this scene
Description | An optional string description to help identify this scene to a developer
SceneDimension | The dimensions used to lay out the scene
ZSD Format Version | This field has the integer value one.
ZSD Profile | This field is the name of the minimally supported profile. Currently it can take on the values "Simple" or "Advanced".
Maximum Action Stack Size | This field specifies the maximum number of elements that may be pushed onto the Action Stack for this scene.
Cache Property Type | This field specifies how a ZSD interpreter may cache this scene.
Cache Property Value | This field can be used to specify a 32-bit integer value based on the Cache Property Type. It should be set to 0 if unused.

In order to improve ZSD load time performance, a client device 44 may optionally implement a ZSD cache 722. ZSD-encoded scenes specify caching properties that tell clients when cached scene data is no longer useful. For example, temporally important information, such as sports scores, should not be cached for a long period of time. Table 4 lists exemplary cache property types and describes their use.
TABLE 4
Cache Properties

Cache Property Type | Description | Property Value Units
Timeout | Time out this scene after the specified number of seconds. (0 seconds implies no caching) | Seconds
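As an illustration only, a client-side ZSD cache honoring the Timeout property in Table 4 above might look like the following Python sketch (the class and method names are assumptions):

    import time

    class ZsdCache:
        def __init__(self):
            self._entries = {}   # scene ID -> (ZSD data, expiry time)

        def store(self, scene_id, zsd_data, timeout_seconds):
            if timeout_seconds == 0:
                return                                 # 0 seconds implies no caching
            self._entries[scene_id] = (zsd_data, time.time() + timeout_seconds)

        def load(self, scene_id):
            entry = self._entries.get(scene_id)
            if entry is None:
                return None
            zsd_data, expires = entry
            if time.time() >= expires:
                del self._entries[scene_id]            # timed out, e.g., stale sports scores
                return None
            return zsd_data

    cache = ZsdCache()
    cache.store("sports-scores", b"...zsd bytes...", timeout_seconds=30)
    print(cache.load("sports-scores") is not None)     # True within the 30-second window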
An exemplary scene data format according to the present invention has four fundamental data types (sometimes referred to herein as "elements"), specifically objects, events, actions, and resources. At a high level, objects describe scene components such as the bounds for buttons and icons in the MPEG layer, overlay text, and overlay images. Events describe the notifications that are pertinent to the scene. These include mouse (pointer) move events, keyboard events, application state change events, etc. Actions describe responses to events, such as going to another scene, and finally, resources contain the raw data used by objects, events, and actions, e.g., image data. Each of these data types is described in more detail below.
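Purely as a sketch of how these four element types relate, they could be modeled as simple records; the Python field names below are assumptions loosely based on the tables that follow:

    from dataclasses import dataclass, field

    @dataclass
    class ZsdObject:
        object_id: int
        object_type: str                                # e.g., "Bounds"
        parameters: dict = field(default_factory=dict)  # e.g., X, Y, Width, Height

    @dataclass
    class Event:
        event_id: int
        event_type: str                                 # e.g., "OnKeyPress"
        event_filter: object = None                     # e.g., the key of interest

    @dataclass
    class Action:
        action_id: int
        action_type: str                                # e.g., "GoToScene"
        parameters: dict = field(default_factory=dict)

    @dataclass
    class Resource:
        resource_id: int
        resource_type: str                              # e.g., "Image", "UTF8String"
        data: bytes = b""

    # A scene is then, at its core, collections of these four element types.
    button = ZsdObject(1, "Bounds", {"X": 100, "Y": 50, "Width": 120, "Height": 80})
    select_key = Event(1, "OnKeyPress", event_filter="OK")
    go_home = Action(1, "GoToScene", {"Scene ID": 0, "Duration": 500})
    label = Resource(1, "UTF8String", b"Apollo 13")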
Exemplary object types and parameters associated therewith (including an optional set of properties) according to an exemplary embodiment of the present invention are described in tables 5-8.
TABLE 5
Object Types

Object Type | Value | Parameters | Description
WholeScene | 0 | None | The whole scene object, OID 0, has this type.
Bounds | 1 | X, Y, Width, Height | This object specifies a rectangular bound in the scene coordinate system.
PNode | 2 | X, Y, Width, Height, Parent Object | This object specifies a PNode with the specified bounds.
TABLE 6
Reserved Object IDs

Name | Object ID | Type | Description
WholeScene | 0 | WholeScene | The whole scene
Reserved | 1-63 | N/A | Reserved
TABLE 7
Object Type Support
Object Type Simple Profile Advanced Profile
WholeScene
Bounds
PNode x
TABLE 8
Object Properties

Property Type | Parameters | Required For | Optional For
Cursor | Cursor Resource ID | WholeScene | Bounds, PNode
Like the other scene description format elements, each event is assigned a globally unique value. Some event types employ filters to constrain the actions that they trigger. For example, the OnKeyPress event uses the key of interest as its filter. In addition to filters, events can push resources onto the action stack, described below. Actions may use the information on the stack to modify their behavior.
Exemplary event types are listed in Table 9 below. Overlay scenes affect the propagation of events by the dispatcher. Dispatch semantics are abbreviated in the table as follows:
1. Active—the dispatcher sends the event only to the active scene. For example, when a scene is loaded, the OnLoad event only gets sent to that scene.
2. Scenes with Resource Filters—the dispatcher only sends these events to scenes that contain Resource Table entries for the event. Before iterating through a scene's triple table, the event dispatcher remaps the Resource IDs in the event to their equivalents in the scene.
3. Overlays Only—the dispatcher only sends these events to overlay scenes.
4. Both—the dispatcher first sends this event to the overlay scenes and then to the exclusive scene.
TABLE 9
Event Types

Event Type | Value | Semantics | Filter | Action Stack | Description
OnLoad | 0 | Active | None | None | This event gets sent when the object gets loaded.
OnKeyPress | 1 | Both | Key | Key | This event gets sent when the user presses a key or remote control button.
OnKeyRelease | 2 | Both | Key | Key | This event gets sent when the user releases a key or remote control button.
OnKeyTyped | 3 | Both | Key | Key | This event gets sent when the user types a key. If the key supports auto-repeat, the system sends this event repeatedly while the key is down.
OnMouseEnter | 4 | Both | None | None | This event gets sent when the mouse pointer goes over the object.
OnMouseExit | 5 | Both | None | None | This event gets sent when the mouse pointer exits the bounds of the object.
OnMousePress | 6 | Both | Button | X, Y, Button | This event gets sent when the user presses a mouse button.
OnMouseRelease | 7 | Both | Button | X, Y, Button | This event gets sent when the user releases a mouse button.
OnMouseClick | 8 | Both | Button | X, Y, Button | This event gets sent when the user presses and releases a mouse button.
OnFocusIn | 9 | Both | None | None | This event gets sent when the associated object receives focus. Other events, such as key presses and mouse enter, generally cause focus.
OnFocusOut | 10 | Both | None | None | This event gets sent when the associated object loses focus.
OnSceneMembershipNotification | 11 | Scenes with Resource Arguments | SceneMembership Resource ID | SceneMembership Resource ID | This event gets sent when a NotifySceneMembership action gets fired.
OnScrollUp | 12 | Both | Wheel | Wheel | This event gets fired for every notch that the specified scroll wheel moves up.
OnScrollDown | 13 | Both | Wheel | Wheel | This event gets fired for every notch that the specified scroll wheel moves down.
OnTimeout | 14 | Both | Timer | Timer | This event gets fired when a timer expires.
OnActivate | 15 | Both | None | None | This event gets fired when an object gets activated.
OnExclusiveSceneChange | 16 | Overlays Only | Entered, Loaded, Unloading, Exited | None | This event gets fired when the exclusive scene changes. The argument specifies the exact moment in the scene change. See the scene life cycle sequence diagram.
OnUnload | 17 | Both | None | None | This event gets fired when an object gets unloaded as the result of a scene change.
In operation of the architectures and methods described herein, the result of an event on an object is an action. Actions may be linked together in a ZSD Action Table to form programs. To facilitate parameter passing to actions from events and to linked actions, a ZSD interpreter maintains an action stack. The action stack is initialized before dispatching the first action in an action list with the following items in order:
1. The object in the triple table entry that triggered the action
2. The event in the triple table entry that triggered the action
3. Elements pushed onto the action stack from the event

Before dispatching each action, the ZSD interpreter logically pushes the parameters of the action onto the stack. Implementations may short-circuit this behavior on built-in actions for simplicity. Each action type specifies its use of the stack. In general, a ZSD interpreter will only be able to allocate a small action stack (e.g., 16-32 elements), so stack usage should be kept to a minimum. To ensure that the ZSD interpreter always has a sufficient stack, the ZSD encoder must specify the maximum stack size in the header. All action types should avoid recursion to simplify the maximum stack size calculation. Exemplary action types are listed below in Table 10.
TABLE 10
Action Types

Action Type | Value | Parameters | Action Stack Inputs | Action Stack Outputs | Post Stack Delta | Description
NoAction | 0 | None | None | None | 0 | This action is a NOP.
GoToScene | 1 | Scene ID, Duration | Parameters | None | −2 | This action causes the client to animate to a new location in the specified time. If the server context buffer has information, this command bundles the context with the scene navigation request.
NavigateBack | 2 | Count | Parameters | None | −1 | Navigate the specified number of scenes back in history. If the history does not contain that many scenes, it navigates back as far as possible. If the server context buffer has information, this command bundles the context with the scene navigation request.
NavigateForward | 3 | Count | Parameters | None | −1 | Navigate the specified number of scenes forward in history. If the history does not contain that many scenes, it navigates forward as far as possible. If the server context buffer has information, this command bundles the context with the scene navigation request.
NavigateHome | 4 | None | None | None | 0 | Navigate to the home scene. If the server context buffer has information, this command bundles the context with the scene navigation request.
NavigateUp | 5 | Count, Duration | Parameters | None | −2 | Navigate to the scene that is geographically up n times in the specified time. If the server context buffer has information, this command bundles the context with the scene navigation request.
StartTimer | 6 | Timer, Duration | Parameters | None | −2 | Start a timer that sends a timeout event in the specified duration. Timers are global to the scene.
StopTimer | 7 | Timer | Parameters | None | −1 | Stop the specified timer.
StartHoverZoom | 8 | X, Y, Width, Height, Resource ID, Duration | Parameters | None | −7 | Hoverzoom to the end coordinates (x, y, width, height) over the specified duration, using the Resource ID associated with a HoverZoomPixelData resource to create the HoverZoom.
StopHoverZoom | 9 | Duration | Parameters | None | −1 | Stop the hoverzoom over the specified number of milliseconds.
Focus | 10 | Object ID | Parameters | None | −1 | Force the focus to change to the specified object.
ChangePointer | 11 | Resource ID, Object ID | Parameters | None | −2 | Change the pointer to that specified by the Resource ID when over the object specified by the Object ID.
ChangePointerVisibility | 12 | Visible, Duration | Parameters | None | −2 | True to show the pointer; false to hide it. Animate for the specified duration.
MovePointer | 13 | X, Y, Duration | Parameters | None | −3 | Move the pointer to the specified location over the specified duration.
Activate | 14 | Object ID | Parameters | None | −1 | Activate the specified object.
PushServerContext | 15 | Resource ID | Parameters | None | −1 | Push the specified resource for transmission back to the server.
ReportServerContext | 16 | None | None | None | 0 | Report the gathered context to the server. If there is no pending context, then this action is ignored. After the report, this command clears the context buffer.
CreateTextObject | 17 | Object ID, Resource ID | Parameters | None | −2 | Show the text object specified by the Resource ID using the Object specified by the Object ID.
CreateImageObject | 18 | Object ID, Resource ID | Parameters | None | −2 | Show the image specified by the Resource ID using the Object specified by the Object ID.
NotifySceneMembership | 19 | SceneMembership Resource ID | Parameters | None | −2 | Notify scene membership. This is usually done in response to an OnLoad event.
StartOverlayScene | 20 | Overlay Scene Resource ID | Parameters | None | −2 | Load and start the specified overlay scene.
TerminateOverlayScene | 21 | None | None | None | 0 | Terminate the current overlay scene. Triggering this action from the main scene does nothing.
TerminateAllOverlayScenes | 22 | None | None | None | 0 | Terminate all overlay scenes. This action is useful for resyncing client and server state.
SetActiveTripleTable | 23 | Triple Table Index | Parameters | None | −1 | Set the active Triple Table. Index 0 is the default.
RunScript | 24 | Resource ID | Parameters | 0+ | Arbitrary | Interpret the specified script.
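To make the stack discipline described before Table 10 concrete, the following Python sketch is one illustrative (and simplified) way an interpreter might execute a linked action list; the data layout and function names are assumptions, not part of the described format:

    def run_action_list(triple, first_action, actions, max_stack):
        obj, event, pushed = triple
        stack = [obj, event, *pushed]            # seed: triggering object, event, event pushes
        action_id = first_action
        while action_id is not None:
            action = actions[action_id]
            for param in action["parameters"]:   # logically push the action's parameters
                stack.append(param)
                if len(stack) > max_stack:
                    raise RuntimeError("encoder-declared maximum stack size exceeded")
            dispatch_action(action, stack)
            action_id = action.get("next")       # NoAction/None terminates the list

    def dispatch_action(action, stack):
        consumed = [stack.pop() for _ in action["parameters"]]
        print(action["type"], "with", list(reversed(consumed)))

    actions = {
        10: {"type": "StartTimer", "parameters": [3, 5000], "next": 11},
        11: {"type": "NoAction", "parameters": [], "next": None},
    }
    run_action_list(("OID 1", "OnLoad", []), 10, actions, max_stack=16)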
Exemplary resources which can be used in conjunction with the present invention are listed below in Table 11.
TABLE 11
Resource Types

Resource Type | Value | Parameters | Description
UTF8String | 0 | UTF8String | This resource type holds string characters from the UTF8 character set. The string may not exceed 256 characters.
UnicodeString | 1 | UnicodeString | This resource type holds Unicode characters. The string may not exceed 256 characters.
MPEG2TransitionClip | 2 | Scene ID, Scene ID, MPEG-2 clip | This resource type points to an MPEG-2 clip file for the transition between the two scenes. Scenes list all of the MPEG-2 clips for clients with hard disk support or for servers. These clips may change based on the current theme.
Cursor | 3 | Image | This resource holds the cursor image.
Image | 4 | Image | This resource holds an image.
HoverZoom | 5 | PixMask, FGTransPix, FGOpaquePix, BGPix | This resource holds the image data for creating a hoverzoom.
SceneMembership | 6 | UTF8String | This resource identifies a scene's membership, such as belonging to an application.
OverlayScene | 7 | Scene | This resource holds an embedded ZSD description for an overlay scene.
According to an exemplary embodiment of the present invention, the scene description format groups all scene interaction information into five tables: the object table, the event table, the action table, the resource table and one or more triple tables as described below in Tables 12-17. This division into tables eliminates most redundant information and enables quick lookup of interaction behavior on low end clients 44.
TABLE 12
ZSD Tables

Table | Description
Object Table | This table lists all of the objects in the scene. Objects may be high level entities such as PNodes or just regions on the scene.
Event Table | This table lists all events that need processing on this scene. A client may ignore any event not listed in this table.
Action Table | This table lists all actions that can be invoked on objects on this scene.
Resource Table | This table contains strings and images. Its main use is to decouple the string and image data from the above tables so that it is trivial for the server to switch themes and languages.
Triple Table | This table associates objects, events, and actions. A ZSD encoding may include more than one triple table and use actions to switch which one is active. This enables the creation of state machines within a scene.
TABLE 13
Object Table Fields

Field | Description
Object ID | A unique ID for this object. OID number 0 represents the whole scene.
Object Type | The type of the object
Description | An optional string description to make the XML clearer
Parameters | Additional parameters that describe the object

TABLE 14
Event Table Fields

Field | Description
Event ID | A unique ID for this event
Event Type | The type of the event
Description | An optional string description to make the XML clearer
Parameters | Additional parameters that describe the event

TABLE 15
Action Table Fields

Field | Description
Action ID | A unique ID for this action
Action Type | The type of the action
Next Action | The Action ID of the next action to run. Specify the NoAction instance to stop executing actions. It is illegal to specify a loop of actions.
Description | An optional string description to make the XML clearer
Parameters | Additional parameters that describe the action

TABLE 16
Resource Table Fields

Field | Description
Resource ID | A unique ID for this resource
Theme ID | The theme ID for this resource
Language ID | The language ID for this resource
Resource Type | The type of the resource
Description | An optional string description to make the XML clearer
Parameters | Additional parameters that describe the resource
TABLE 17
Triple Table Fields

Field Type | Description
Object ID | The triple's object
Event ID | The event to monitor
Action ID | The action to invoke upon receiving the event
Boolean | True to terminate event processing if this triple matches an event
Description | An optional string description to make the XML clearer

Various additional information regarding an exemplary scene data format according to the present invention can be found in the above-incorporated by reference priority application.
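For illustration, the table-driven dispatch could be wired together roughly as in the Python sketch below; the dictionary layout is an assumption based on Tables 12-17 above, not a normative encoding:

    def handle_event(scene, object_id, event_type, event_args):
        """Find matching entries in the event table, then scan the active triple
        table for (object, event) matches and run the associated actions."""
        matches = [e for e in scene["events"]
                   if e["type"] == event_type
                   and (e.get("filter") is None or e["filter"] == event_args.get("filter"))]
        triples = scene["triple_tables"][scene["active_triple_table"]]
        for event in matches:
            for obj_id, event_id, action_id, terminate in triples:
                if obj_id == object_id and event_id == event["id"]:
                    print("run action:", scene["actions"][action_id]["type"])
                    if terminate:
                        return

    scene = {
        "events": [{"id": 1, "type": "OnKeyPress", "filter": "OK"}],
        "actions": {5: {"type": "Activate"}},
        "triple_tables": [[(2, 1, 5, True)]],   # (Object ID, Event ID, Action ID, terminate)
        "active_triple_table": 0,
    }
    handle_event(scene, object_id=2, event_type="OnKeyPress", event_args={"filter": "OK"})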
Client devices 44 without local storage request scenes and transitions from the server 42. An exemplary set of messages which can be used to perform this function is provided below in Table 18. The client/server link can, for example, be made over an Ethernet connection, QPSK channels (used by cable networks currently for OOB communications) or any other protocol or type of connection. Those skilled in the art will appreciate that this message set is purely exemplary and that messages can be added or deleted therefrom.
TABLE 18
Client-Server Messages

Message Name | ID | Source | Description
RequestScene | 0 | Client | Request the specified scene.
RequestSceneAck | 1 | Server | Acknowledgment that the server is sending the requested scene.
SceneDetails | 2 | Server | The server may send this to the client if it does not send scene details in-band with the MPEG-2 scene transitions.
DebugControl | 3 | Server | The server sends this message to enable/disable debug logging and remote control support on the client.
LogMessage | 4 | Client | Log a text message. The client only sends this message in debug mode.
NotifyEvent | 5 | Client | Notify that an event has occurred. The client only sends this message in debug mode.
NotifyAction | 6 | Client | Notify that an action has been fired. The client only sends this message in debug mode.
NotifyTriple | 7 | Client | Notify that a triple table entry matched. The client only sends this message in debug mode.
GenerateEvent | 8 | Server | Generate and fire the specified event on the client. These events will be fired even in lockout mode. The client only accepts this message in debug mode.
Lockout | 9 | Server | Lock out/unlock all user-generated events on the client. Example events include mouse and keyboard events. The client only accepts this message in debug mode.
Identity | 10 | Client | The client sends this message every time that it establishes a connection with the server to identify itself.
NotifyServerContext | 11 | Client | The client sends this message when its server context buffer is not empty and an action command invokes a server notification or request.
RequestScreenCapture | 12 | Server | The server sends this message to request that the client take a snapshot of the screen and send it back to the server in a ScreenCapture message.
ScreenCapture | 13 | Client | This is the response message to RequestScreenCapture. It contains the snapshot.
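By way of illustration only, a RequestScene/RequestSceneAck exchange could be framed as in the Python sketch below; the message IDs come from Table 18, but the byte layout shown here is an assumption and not part of the described message set:

    import struct

    REQUEST_SCENE, REQUEST_SCENE_ACK = 0, 1      # message IDs from Table 18

    def encode_request_scene(scene_id):
        # Hypothetical framing: a 1-byte message ID followed by a 32-bit scene ID.
        return struct.pack(">BI", REQUEST_SCENE, scene_id)

    def decode_message(data):
        message_id = data[0]
        if message_id == REQUEST_SCENE:
            (scene_id,) = struct.unpack_from(">I", data, 1)
            return "RequestScene", scene_id
        if message_id == REQUEST_SCENE_ACK:
            return "RequestSceneAck", None
        return "Unknown", None

    wire = encode_request_scene(42)
    print(decode_message(wire))                              # ('RequestScene', 42)
    print(decode_message(bytes([REQUEST_SCENE_ACK])))        # ('RequestSceneAck', None)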

Hoverzoom
As mentioned above, one feature of exemplary client-server architectures and methods according to the present invention is to provide the capability for sophisticated user interfaces to be generated at the client-side, while taking into account the relatively small amount of available memory and/or processing power associated with some existing client devices. One example of the ways in which the above-described systems and methods address this issue can be seen with respect to the user interface interaction referred to herein as a “hoverzoom”, e.g., the process whereby when a user rolls a cursor over and/or pauses an indicator relative to a media item that can be selected, the image associated therewith is magnified so that the user can easily see which object is poised for selection, an example of which is illustrated in FIGS. 1( a) and 1(b).
There are a number of challenges associated with implementing a hoverzoom feature in bandwidth-limited systems, such as interactive television systems wherein the client devices have limited memory and/or processing power. Consider the example wherein the user interface screen illustrated in FIG. 1(a) is rendered using MPEG data streams transmitted from the user interface server 42 to the client 44 containing the cover art images associated with various movies. This visual portion of the user interface screen will be referred to herein as the background layer. When the event mapper 710 and event processor 714 recognize that the user has triggered a hoverzoom response, a foreground layer (e.g., the magnified version of the "Apollo 13" image) is generated and used to modify the user interface screen of FIG. 1(a). There are several possibilities for providing the data used to transition from the user interface screen shown in FIG. 1(a) to the user interface screen shown in FIG. 1(b). One way to implement the hoverzoom effect is to have the user interface server 42 transmit complete sets of MPEG data corresponding to both the background layer and the foreground layer to the client 44. However, when one considers that the user can roll the cursor over a potentially very large number of screen objects in the user interface, e.g., dozens or hundreds, quite rapidly, the amount of data that would need to be transmitted by the user interface server 42 to implement this exemplary embodiment of the present invention could be quite large, resulting in additional delay in rendering the screen transitions on the client device 44.
Moreover, it can be seen from comparing FIG. 1(a) with FIG. 1(b) that a significant portion of the pixel data associated with the unzoomed version of FIG. 1(a) is reused in creating the hoverzoomed version of FIG. 1(b). Thus, according to another exemplary embodiment of the present invention, the relationship between pixels in the background layer and the foreground layer can be determined and used to reduce the amount of data that needs to be transmitted to the client device 44 to generate a hoverzoom effect. Depending upon the object to be magnified as part of the hoverzoom effect, this relationship can be relatively simple or somewhat more complex. For example, enlarging the size of the rectangular DVD cover art images of FIG. 1(a) primarily involves enlarging a rectangular image to occlude neighboring images as part of the transition. On the other hand, more complex shapes, e.g., a doughnut-shaped object with a hole in the center, present more complex situations for generating a hoverzoom effect. Consider that as the doughnut-shaped object is enlarged, the hole in the middle will expand such that background layer pixels that were previously hidden become revealed after the hoverzoom effect has occurred.
According to one exemplary embodiment of the present invention, each pixel in the foreground version of the image is categorized as being one of: (1) completely opaque (the pixel color can be extracted from the background layer, so it does not need to be resent for foreground layer generation), (2) transparent (irrelevant, so it does not need to be resent for the foreground layer), (3) translucent (e.g., pixels around the edges of the image can have anti-aliasing applied thereto; foreground layer data needs to be sent for these pixels) and (4) null (e.g., doughnut "hole" pixels which reveal background pixels; background layer pixels need to be sent since those cannot necessarily be extracted from the background layer that was originally sent to create the unzoomed interface screen). This categorization can be done a priori using any desired technique, including manual observation and/or the pseudocode processing techniques described below, and a foreground/background map is generated wherein each pixel in the foreground layer is categorized. A hoverzoom map can be stored for each image for which a hoverzoom effect can be triggered in the user interface.
To Capture Background

for (node = scenegraph.root(); node != foreground node; node = next node)
    if (node bounds within foreground bounds)
        paint node to background image

To Capture Foreground

Draw the foreground node to an image with the foreground's original size (low-res foreground)
Draw the foreground node to an image with the foreground's maximum size (high-res foreground)

After mapping, this data is encoded to reduce the amount of data to be saved and transferred at steps 1010 and 1012 using, for example, the following pseudocode to evaluate the relevance of the background pixels based on alpha information.

To Capture Alpha Information

Calculate Foreground Node starting bounds
Calculate Foreground Node ending bounds
Create an alpha image the size of the foreground starting bounds which only contains alpha values, initialized to opaque
Set the image's alpha composite rule to keep the minimum value of either its current value or the value of the pixel being drawn to it
while (foreground.size() < ending bounds)
    draw foreground to alpha image
    increase foreground size

To Calculate Which Pixels Are Needed For The Background Image

Any pixels in the original background image which are transparent are irrelevant
For all remaining relevant background pixels:
    If (low-res foreground pixel is transparent)
        Background pixel is irrelevant
    Else if (low-res foreground pixel is opaque and captured alpha pixel is opaque)
        Background pixel is irrelevant
    Else
        Background pixel is relevant

Depending upon the particular image to be encoded in this way, most of the foreground layer pixels will be designated as opaque and need not be resent to the client device 44 to generate the hoverzoom effect.
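The per-pixel categorization itself can be illustrated with the following Python sketch; this is not the encoder described above, and the alpha thresholds and argument names are assumptions:

    def categorize_foreground_pixel(fg_rgba, reveals_background):
        """Classify a foreground-layer pixel into the four categories described
        above. fg_rgba is an (r, g, b, a) tuple with alpha in 0-255;
        reveals_background indicates, e.g., a doughnut-hole pixel that exposes
        background pixels when the image is enlarged."""
        r, g, b, a = fg_rgba
        if reveals_background:
            return "null"          # background pixels must be sent for this location
        if a == 0:
            return "transparent"   # irrelevant, nothing to resend
        if a == 255:
            return "opaque"        # color can be extracted from the background layer
        return "translucent"       # anti-aliased edge, foreground data must be sent

    print(categorize_foreground_pixel((10, 20, 30, 255), reveals_background=False))  # opaque
    print(categorize_foreground_pixel((10, 20, 30, 128), reveals_background=False))  # translucent
    print(categorize_foreground_pixel((0, 0, 0, 0), reveals_background=True))        # null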
Hoverzoom processing in accordance with this exemplary embodiment of the present invention is generally illustrated in FIG. 10. Therein, an MPEG (background) version of the image 1000 and an unzoomed version 1002 of the image to be magnified (for example, the "Apollo 13" image in FIG. 1(a)), e.g., in PNG or JPEG format, are provided. The background image 1000 is combined with the unzoomed version 1002 of the image and transmitted to the client device 44 in the MPEG data stream, after compression at step 1006. The foreground/background map described above is retrieved from storage at step 1008, and used to determine which pixel data associated with the foreground layer and the background layer needs to be transmitted. That data is encoded (compressed) at steps 1010 and 1012, saved as a ZSD image file and transmitted to the client device 44. Although this exemplary embodiment of the present invention transmits this information as scene data (ZSD data) outside of the MPEG data stream, it can alternatively be embedded in the MPEG data stream.
As will be appreciated by reading the foregoing discussion of hoverzoom techniques in accordance with an exemplary embodiment of the present invention, some of the challenges associated with generating sophisticated user interfaces (e.g., which employ zooming) at client devices connected to, for example, a cable network, can be addressed by intelligent selection of an encoding stream for particular data to be transmitted. In the foregoing hoverzoom example, background data was sent using the MPEG encoding stream available in such networks, while the foreground information was sent using a different type of encoding (described above), handled for presentation through the OSD layer. However, exemplary embodiments of the present invention contemplate that other server/client data transfers may benefit from selectively deciding, at one of the upstream nodes which is supplying data to the client device 44, which type of encoding/data stream is appropriate for data to be transmitted, in particular for data associated with zooming user interfaces.
This general concept is illustrated in FIG. 11. Therein, data is evaluated at block 1100 to determine whether it is first data or second data, and a type of encoding (and associated transmit data stream) for handling that data is selectively determined. First and second data can be different types of data or the same type of data having different characteristics. An example of the former is the hoverzoom data (background data being first data and foreground data being second data). An example of the latter is text. MPEG encoding is not particularly efficient for encoding text and, accordingly, it may be desirable to encode text under certain circumstances using another type of encoding, e.g., if the text to be transmitted is less than a predetermined font size (e.g., 16 point).
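For illustration, the selection logic of FIG. 11 could be sketched as follows in Python; the stream labels and the 16-point threshold are taken loosely from the example above and are otherwise assumptions:

    MIN_MPEG_TEXT_SIZE = 16   # predetermined font size from the example above

    def choose_encoding(item):
        """Pick an encoding/data stream based on what the item is and its characteristics."""
        if item["kind"] == "background":
            return "mpeg"            # background layers ride the MPEG data stream
        if item["kind"] == "foreground":
            return "zsd"             # hoverzoom foreground data is sent as scene data
        if item["kind"] == "text" and item["font_size"] < MIN_MPEG_TEXT_SIZE:
            return "alternate"       # small text encodes poorly as MPEG, use another encoding
        return "mpeg"

    print(choose_encoding({"kind": "text", "font_size": 12}))   # alternate
    print(choose_encoding({"kind": "background"}))              # mpeg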
Augmenting Client Devices Via a PC
In some cases, such client devices will continue to have difficulties rendering screens associated with zoomable user interfaces (ZUIs), as well as other applications, such as Internet browsing. For example, embedded platforms, which typically run on such "thin" client devices, e.g., set-top boxes and the like, have access to limited memory and processing power and, therefore, cannot handle certain content and application support. For example, it would be desirable to provide a full-featured Internet browsing capability, in addition to or as an alternative to the afore-described ZUIs, on a user's television(s), e.g., in the living room. Another challenge which arises with such client devices is their lack of support for certain types of media and associated codecs. For example, frequent updates and new versions of Flash codecs are typically made available on an ongoing basis. However, embedded platforms which operate on thin client devices may only have access to out-of-date codecs, in some cases several versions out-of-date due to OEM practices associated with the provision of such software. Accordingly, it becomes difficult or impossible to render certain types of content on the television through such thin client devices.
According to exemplary embodiments, this challenge is addressed by adding a personal computer (PC) to the processing chain in, e.g., the afore-described systems. As generally shown in FIG. 12, a home personal computer 1200 can be inserted into the processing stream between the client device 1202 and the headend system 1204 to assist the thin client device 1202 in rendering content provided from the headend 1204. The content can be rendered in accordance with one or more software applications (SAs) 1208 running on the home computer 1200. According to one exemplary embodiment, software application 1208 can be a zoomable user interface which provides access to media as described above. According to another exemplary embodiment, software application 1208 can be an Internet browser (described below in more detail with respect to FIG. 13). According to still another exemplary embodiment, software application 1208 can be both an Internet browser and a ZUI, and/or other applications, e.g., office applications, media applications, phone and communications applications, drawing applications, etc.
In such a combination, more of the processing can be performed by the (relatively) local home personal computer 1200, which will typically have more memory and/or more processing bandwidth than the thin client device 1202. Consider the example shown in FIG. 13 wherein the home personal computer 1200 operates an Internet browser 1208 (acting as a software application 1208) which is remoted to the TV 1206 as follows. Suppose that a user, e.g., pointing toward the TV 1206 using a 3D pointing device 1300 and providing keystroke inputs via a virtual keyboard 1302 displayed on the TV 1206, requests access to a particular web page 1304. This user input information is relayed to user input function 1306 in the client device 1202, which passes the information on to a corresponding function of the home PC 1200. Home PC 1200 uses, e.g., the input address and the browser application 1208 to access web page 1304. Alternatively, other types of devices, e.g., gaming consoles, network attached storage (NAS) devices, cell phones, PDAs, etc., which have sufficient processing capability as well as access to the desired interface and other support features, could be used in place of home PC 1200.
The web page 1304 typically has one or more objects (also sometimes referred to as “rectangles”) associated therewith. In this purely illustrative example, web page 1304 has a video rectangle and an audio rectangle associated therewith. The PC 1200's processor (not shown) scans the web page 1304, and more precisely the HTML code associated therewith, to identify how many, and what type, of rectangles are present on the web page 1304. The PC 1200 then matches the identified information with the known capabilities of the client 1202 to determine what type of subsequent processing, if any, is needed before it sends information about the web page over to the client device 1202 for display on TV 1206. For example, suppose that the client device 1202 supports MPEG encoded video (i.e., has an MPEG codec) but does not support Flash encoded video content.
If the home PC 1200 scans a web page and determines that the web page has a Flash encoded rectangle, it will first re-encode (block 1310) that particular rectangle to MPEG so that the thin client 1202 can fully display the web page 1304 on the television 1206. Once selected video rectangles are re-encoded at block 1310, they are passed through to the client device 1202 via video transmit function 1312 (which may perform other coding operations associated with transmission of the video data) to video replay function 1314 which, e.g., decodes the received video data for handling by the client's graphics chip 1316. Similarly, static graphics and audio rectangles associated with the web page 1304 can be identified as part of the HTML scanning process and coded directly for transmission from the home PC 1200 via screen transmit 1318 and audio transmit 1320 functions, respectively. The resulting data streams from blocks 1318 and 1320 are received by corresponding functions 1322 and 1324 on the client side and used to recreate the web page 1304 on the television 1206.
As described above, according to exemplary embodiments, the home PC 1200 has the capability to re-encode video content into a format usable by the client device 1202. Such a transcoding operation may, for example, be performed at either the signal level or the rendering level of the processing. According to alternative exemplary embodiments, the home PC 1200 is able to transmit new codecs as well as codec updates to the client device 1202 for its use. Initially, the home PC 1200 and client device 1202 communicate such that the home PC 1200 understands which codecs the client device 1202 has. When a request comes from the client device 1202 which results in video media that the client device does not support, the home PC 1200 can either translate the video into a format known by the client device 1202, or transmit the new codec to the client device for its use, followed by the desired video content.
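The per-rectangle decision described in the last few paragraphs can be summarized by the following illustrative Python sketch; the function name, codec labels, and return values are assumptions:

    def prepare_rectangle(rect_format, client_codecs, codecs_available_to_send):
        """Pass a rectangle through if the client supports its format; otherwise
        either transcode it (e.g., Flash to MPEG) or send the missing codec first."""
        if rect_format in client_codecs:
            return ("pass-through", rect_format)
        if "mpeg2" in client_codecs:
            return ("transcode", "mpeg2")                  # e.g., via a video plug-in
        if rect_format in codecs_available_to_send:
            return ("send-codec-then-content", rect_format)
        raise ValueError(f"cannot deliver {rect_format} content to this client")

    client_codecs = {"mpeg2"}                              # capabilities stored by the home PC
    print(prepare_rectangle("flash", client_codecs, codecs_available_to_send={"flash"}))
    # ('transcode', 'mpeg2')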
A plurality of re-encoding functions 1310 can be provided as video plug-ins for home PC 1200 to adapt various content which may be found on web pages to the known capabilities of the client device 1202, which capabilities (such as the types and/or versions of video codecs provided in the client 1202) can be stored by the home computer 1200, e.g., in a memory associated therewith. According to one exemplary embodiment, although the type of application or applications 1208 running on the home PC 1200 may vary, the interface 1312, 1318 and 1320 via which it provides data to the client 1202 can be the same, i.e., a standardized interface for remoting a home PC 1200 to the television 1206 via a client device 1202 over, e.g., a wireless home network such as a LAN.
Systems and methods for processing data according to exemplary embodiments of the present invention can be performed by processors executing sequences of instructions contained in a memory device (not shown). Such instructions may be read into the memory device from other computer-readable media such as secondary data storage device(s). Execution of the sequences of instructions contained in the memory device causes the processor to operate, for example, as described above. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement the present invention.
The exemplary embodiments described above provide methods and systems for augmenting the capabilities of a client device 1202, e.g., a thin client device such as a set-top box, with a personal computer 1200. A communications node 1400 can contain a processor 1402 (or multiple processor cores), memory 1404, one or more secondary storage devices 1406, a software application (SA) and a communications interface 1408. Processor 1402 is capable of processing instructions, e.g., software instructions 1408, in support of a client device to increase the client device's capabilities. For example, processor 1402 can receive media desired by the client device and translate it into a format usable by the client device prior to transmitting the translated media. As such, communications node 1400 is capable of performing the tasks of a home PC 1200 (or other device) as described in the exemplary embodiments herein to augment the capabilities of a client device 1202.
Utilizing the above-described exemplary systems according to exemplary embodiments, a method for augmenting a client-server architecture is shown in the flowchart of FIG. 15. The method for augmenting a client device includes the steps of: receiving a request to perform at least one function in step 1502; processing the request to perform the at least one function in step 1504; performing the at least one function, which results in a first output, in step 1506; selectively translating the first output into a format usable by the client device to produce a second output in step 1508; and transmitting either the first output or the second output to the client device in step 1510.
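As a final illustration (not a normative implementation of the claimed method), the FIG. 15 flow can be sketched in a few lines of Python; the callables and dictionary keys here are placeholders:

    def augment_request(request, perform, client_formats, translate):
        """Receive/process a request, perform the function, and selectively
        translate the first output before transmitting (steps 1502-1510)."""
        first_output = perform(request)                    # steps 1502-1506
        if first_output["format"] in client_formats:
            return first_output                            # transmit the first output
        return translate(first_output)                     # step 1508, transmit second output

    result = augment_request(
        {"function": "hoverzoom", "target": "Apollo 13"},
        perform=lambda req: {"format": "flash", "payload": b"..."},
        client_formats={"mpeg2"},
        translate=lambda out: {"format": "mpeg2", "payload": out["payload"]},
    )
    print(result["format"])   # mpeg2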
The above-described exemplary embodiments are intended to be illustrative in all respects, rather than restrictive, of the present invention. Thus the present invention is capable of many variations in detailed implementation that can be derived from the description contained herein by a person skilled in the art. For example, although MPEG encoding and MPEG data streams have been described in the foregoing exemplary embodiments, it will be appreciated that different types of encodings and data streams can be substituted therefor in part or in whole, e.g., video encodings used in Windows Media-based content and the like. Moreover, although (MPEG) image and/or video data is described as being transmitted through all or part of a cable network, the present invention is equally applicable to systems wherein the image and/or video data is available locally, e.g., on a home disk or from a local server. All such variations and modifications are considered to be within the scope and spirit of the present invention as defined by the following claims. No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article "a" is intended to include one or more items.

Claims (16)

What is claimed:
1. A method for augmenting a client device connected to a display device for displaying a zoomable user interface, the augmenting being performed by another device and comprising:
receiving at said another device a request by said client device to perform a hoverzoom function with respect to said zoomable user interface;
processing by said another device said request by said client device to perform said hoverzoom function with respect to said zoomable user interface;
determining by said another device, a format usable by said client device;
performing by said another device said hoverzoom function with respect to said zoomable user interface which results in a first output, wherein said first output contains at least a first image and a second image, wherein said first image is in a different format than said second image;
selectively translating by said another device either said first image or said second image of said first output into said format usable by said client device into a second output containing either said first image or said second image; and
transmitting by said another device either said first image or said second image of said first output to said client device and transmitting said second output containing either said first image or said second image to said client device,
wherein said another device is a home personal computer, and
wherein said step of selectively translating either said first image or said second image of said first output into a format usable by said client device into said second output containing either said first image or said second image only occurs when said another device determines either said first image or said second image of said first output is in a format that is not usable by said client device.
2. The method of claim 1 further comprising:
providing additional processing capability by said another device for performing said hoverzoom function with respect to said zoomable user interface.
3. The method of claim 1, wherein said first output is in a first codec and said second output is in a second codec which is different from said first codec.
4. The method of claim 1, wherein said first codec is a FLASH codec.
5. The method of claim 1, wherein said second codec is a Moving Picture Experts Group (MPEG) codec.
6. The method of claim 1, wherein said client device is a set-top box.
7. The method of claim 1, wherein when it is determined by said another device that said client device has no format usable for said first output, said another device transmits a new codec to said client device prior to transmitting said first output to said client device.
8. The method of claim 1, wherein said request by said client device to perform said at least one function is a request to execute a web browsing application.
9. A communications node for augmenting a client device connected to a display device for displaying a zoomable user interface, comprising:
a processor in conjunction with at least one software application for processing a request by said client device to perform a hoverzoom function with respect to said zoomable user interface, wherein said processor performs the steps of:
performing said hoverzoom function with respect to said zoomable user interface which results in a first output, wherein said first output contains at least a first image and a second image, wherein said first image is in a different format than said second image;
determining a format usable by said client device; and
selectively translating either said first image or said second image of said first output into a format usable by said client device into a second output containing either said first image or said second image;
a memory for storing said at least one software application, said first output and said second output; and
a communications interface for receiving said request by said client device to perform said hoverzoom function with respect to said zoomable user interface and for transmitting either said first image or said second image of said first output and transmitting said second output containing either said first image or said second image to said client device,
wherein said communications node is a home personal computer, and
wherein selectively translating either said first image or said second image of said first output into a format usable by said client device into said second output containing either said first image or said second image only occurs when said communications node determines either said first image or said second image of said first output is in a format that is not usable by said client device.
10. The communications node of claim 9, wherein said communications node provides additional processing capability for said client device.
11. The communications node of claim 9, wherein said first output is in a first codec and said second output is in a second codec which is different from said first codec.
12. The communications node of claim 9, wherein said first codec is a FLASH codec.
13. The communications node of claim 9, wherein said second codec is a Moving Picture Experts Group (MPEG) codec.
14. The communications node of claim 9, wherein said client device is a set-top box.
15. The communications node of claim 9, wherein when it is determined by said communications node that said client device has no format usable for said first output, said communications node transmits a new codec to said client device prior to transmitting said first output to said client device.
16. The communications node of claim 9, wherein said request by said client device to perform said at least one function is a request to execute a web browsing application.
US12/349,913 2008-01-07 2009-01-07 Augmenting client-server architectures and methods with personal computers to support media applications Expired - Fee Related US9100716B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/349,913 US9100716B2 (en) 2008-01-07 2009-01-07 Augmenting client-server architectures and methods with personal computers to support media applications

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US1022608P 2008-01-07 2008-01-07
US12/349,913 US9100716B2 (en) 2008-01-07 2009-01-07 Augmenting client-server architectures and methods with personal computers to support media applications

Publications (2)

Publication Number Publication Date
US20090183200A1 US20090183200A1 (en) 2009-07-16
US9100716B2 true US9100716B2 (en) 2015-08-04

Family

ID=40851850

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/349,913 Expired - Fee Related US9100716B2 (en) 2008-01-07 2009-01-07 Augmenting client-server architectures and methods with personal computers to support media applications

Country Status (1)

Country Link
US (1) US9100716B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170347537A1 (en) * 2015-02-24 2017-12-07 Gaïa Écosystèmes Inc. Multilevel closed ecosystem greenhouse

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9189124B2 (en) 2009-04-15 2015-11-17 Wyse Technology L.L.C. Custom pointer features for touch-screen on remote client devices
US9106696B2 (en) 2009-04-15 2015-08-11 Wyse Technology L.L.C. Method and apparatus for portability of a remote session
US9448815B2 (en) 2009-04-15 2016-09-20 Wyse Technology L.L.C. Server-side computing from a remote client device
US20120054618A1 (en) * 2010-08-25 2012-03-01 Ames Jean A Interactive Trailers
US9400585B2 (en) 2010-10-05 2016-07-26 Citrix Systems, Inc. Display management for native user experiences
US9612724B2 (en) 2011-11-29 2017-04-04 Citrix Systems, Inc. Integrating native user interface components on a mobile device
US9215266B2 (en) * 2012-01-26 2015-12-15 Zoom International S.R.O. System and method for zero-footprint screen capture
US10025550B2 (en) * 2016-03-15 2018-07-17 Intel Corporation Fast keyboard for screen mirroring
US10354394B2 (en) * 2016-09-16 2019-07-16 Dolby Laboratories Licensing Corporation Dynamic adjustment of frame rate conversion settings
US10860649B2 (en) * 2018-03-14 2020-12-08 TCL Research America Inc. Zoomable user interface for TV
US10848819B2 (en) 2018-09-25 2020-11-24 Rovi Guides, Inc. Systems and methods for adjusting buffer size
US11265597B2 (en) * 2018-10-23 2022-03-01 Rovi Guides, Inc. Methods and systems for predictive buffering of related content segments
CN114401445B (en) * 2021-12-31 2024-03-22 深圳云天励飞技术股份有限公司 Video processing control method, device, monitoring equipment, client and storage medium

Citations (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5745710A (en) 1993-05-24 1998-04-28 Sun Microsystems, Inc. Graphical user interface for selection of audiovisual programming
US5845083A (en) 1996-03-07 1998-12-01 Mitsubishi Semiconductor America, Inc. MPEG encoding and decoding system for multimedia applications
US5907323A (en) 1995-05-05 1999-05-25 Microsoft Corporation Interactive program summary panel
US5991800A (en) * 1997-05-22 1999-11-23 Motorola, Inc. Method and apparatus for providing internet service at a subscriber premise
WO2000001154A1 (en) 1998-06-26 2000-01-06 General Instrument Corporation Terminal for composing and presenting mpeg-4 video programs
WO2000079797A1 (en) 1999-06-18 2000-12-28 Intel Corporation Systems and methods for enhanced visual presentation using interactive video streams
EP1126701A1 (en) 1998-10-02 2001-08-22 Matsushita Electric Industrial Co., Ltd. Epg information display method, epg information display device, video recording/reproducing device, and program
US20010039658A1 (en) * 1999-12-23 2001-11-08 Walton Anthony S. Television receiving and internet browsing apparatus
US6381748B1 (en) * 1997-05-02 2002-04-30 Gte Main Street Incorporated Apparatus and methods for network access using a set top box and television
WO2002047393A1 (en) 2000-12-07 2002-06-13 Thomson Licensing S.A. Coding process and device for the displaying of a zoomed mpeg2 coded image
US20030011636A1 (en) 2001-06-14 2003-01-16 Gene Feroglia Method for magnifying images on a display screen and an interactive television guide system implementing the method
US20030046691A1 (en) 2001-08-31 2003-03-06 Toshiyuki Nakagawa Data processing apparatus and method
US20030066084A1 (en) * 2001-09-28 2003-04-03 Koninklijke Philips Electronics N. V. Apparatus and method for transcoding data received by a recording device
US20030070181A1 (en) * 2001-10-04 2003-04-10 Russell Holm Interactive TV client device with integrated removable storage system
US6804708B1 (en) 2000-06-29 2004-10-12 Scientific-Atlanta, Inc. Media-on-demand flexible and adaptive architecture
US20040268393A1 (en) 2003-05-08 2004-12-30 Hunleth Frank A. Control framework with a zoomable graphical user interface for organizing, selecting and launching media items
US20050182792A1 (en) 2004-01-16 2005-08-18 Bruce Israel Metadata brokering server and methods
US20050283798A1 (en) * 2004-06-03 2005-12-22 Hillcrest Laboratories, Inc. Client-server architectures and methods for zoomable user interfaces
US20060143657A1 (en) * 2004-12-29 2006-06-29 Samsung Electronics Co., Ltd. Method for displaying broadcasting program guide information and remote controller for the same
US7103906B1 (en) * 2000-09-29 2006-09-05 International Business Machines Corporation User controlled multi-device media-on-demand system
US20060262116A1 (en) 2005-05-19 2006-11-23 Hillcrest Laboratories, Inc. Global navigation objects in user interfaces
US20070078948A1 (en) * 2004-07-09 2007-04-05 Luc Julia Media delivery system and method for transporting media to desired target devices
US20070183493A1 (en) * 2005-02-04 2007-08-09 Tom Kimpe Method and device for image and video transmission over low-bandwidth and high-latency transmission channels
US7634795B2 (en) * 2002-01-11 2009-12-15 Opentv, Inc. Next generation television receiver
US20090320082A1 (en) * 2008-06-19 2009-12-24 Iwatchnow Inc. Method of delivering content data
US7950041B2 (en) * 2000-07-31 2011-05-24 International Business Machines Corporation Broadcasting for browsing the web

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6473998B1 (en) * 1999-04-30 2002-11-05 Superba (Societe Anonyme) Process for pre-drying textile filaments after wet treatment and device for practicing this method

Patent Citations (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5745710A (en) 1993-05-24 1998-04-28 Sun Microsystems, Inc. Graphical user interface for selection of audiovisual programming
US5907323A (en) 1995-05-05 1999-05-25 Microsoft Corporation Interactive program summary panel
US5845083A (en) 1996-03-07 1998-12-01 Mitsubishi Semiconductor America, Inc. MPEG encoding and decoding system for multimedia applications
US6381748B1 (en) * 1997-05-02 2002-04-30 Gte Main Street Incorporated Apparatus and methods for network access using a set top box and television
US5991800A (en) * 1997-05-22 1999-11-23 Motorola, Inc. Method and apparatus for providing internet service at a subscriber premise
CN1313008A (en) 1998-06-26 2001-09-12 通用仪器公司 Terminal for composing and presenting MPEG-4 video programs
WO2000001154A1 (en) 1998-06-26 2000-01-06 General Instrument Corporation Terminal for composing and presenting mpeg-4 video programs
EP1126701A1 (en) 1998-10-02 2001-08-22 Matsushita Electric Industrial Co., Ltd. Epg information display method, epg information display device, video recording/reproducing device, and program
CN1329795A (en) 1998-10-02 2002-01-02 松下电器产业株式会社 EPG information display method, display device, video recording/reproducing device, and program recording medium
WO2000079797A1 (en) 1999-06-18 2000-12-28 Intel Corporation Systems and methods for enhanced visual presentation using interactive video streams
US20010039658A1 (en) * 1999-12-23 2001-11-08 Walton Anthony S. Television receiving and internet browsing apparatus
US6804708B1 (en) 2000-06-29 2004-10-12 Scientific-Atlanta, Inc. Media-on-demand flexible and adaptive architecture
US7950041B2 (en) * 2000-07-31 2011-05-24 International Business Machines Corporation Broadcasting for browsing the web
US7103906B1 (en) * 2000-09-29 2006-09-05 International Business Machines Corporation User controlled multi-device media-on-demand system
WO2002047393A1 (en) 2000-12-07 2002-06-13 Thomson Licensing S.A. Coding process and device for the displaying of a zoomed mpeg2 coded image
US20030011636A1 (en) 2001-06-14 2003-01-16 Gene Feroglia Method for magnifying images on a display screen and an interactive television guide system implementing the method
US20030046691A1 (en) 2001-08-31 2003-03-06 Toshiyuki Nakagawa Data processing apparatus and method
US20030066084A1 (en) * 2001-09-28 2003-04-03 Koninklijke Philips Electronics N. V. Apparatus and method for transcoding data received by a recording device
US20030070181A1 (en) * 2001-10-04 2003-04-10 Russell Holm Interactive TV client device with integrated removable storage system
US7634795B2 (en) * 2002-01-11 2009-12-15 Opentv, Inc. Next generation television receiver
US20040268393A1 (en) 2003-05-08 2004-12-30 Hunleth Frank A. Control framework with a zoomable graphical user interface for organizing, selecting and launching media items
US20050182792A1 (en) 2004-01-16 2005-08-18 Bruce Israel Metadata brokering server and methods
US20050283798A1 (en) * 2004-06-03 2005-12-22 Hillcrest Laboratories, Inc. Client-server architectures and methods for zoomable user interfaces
US20070078948A1 (en) * 2004-07-09 2007-04-05 Luc Julia Media delivery system and method for transporting media to desired target devices
US20060143657A1 (en) * 2004-12-29 2006-06-29 Samsung Electronics Co., Ltd. Method for displaying broadcasting program guide information and remote controller for the same
US20070183493A1 (en) * 2005-02-04 2007-08-09 Tom Kimpe Method and device for image and video transmission over low-bandwidth and high-latency transmission channels
US20060262116A1 (en) 2005-05-19 2006-11-23 Hillcrest Laboratories, Inc. Global navigation objects in user interfaces
US20090320082A1 (en) * 2008-06-19 2009-12-24 Iwatchnow Inc. Method of delivering content data

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
International Search Report for PCT/US2005/019705 mailed Aug. 17, 2006.
Office Action for Chinese Patent Application No. 200580017859.2, mailed Apr. 25, 2008.
Office Action for Chinese Patent Application No. 200580017859.2, mailed Nov. 2, 2007.
Office Action for European Patent Application No. 05 757 363.6, mailed Jan. 22, 2008.
Office Action for European Patent Application No. 05 757 363.6, mailed Sep. 10, 2007.
Supplementary European Search Report for EP 05 757 363.6 mailed Jun. 7, 2007.
Written Opinion for PCT/US2005/019705 mailed Aug. 17, 2006.

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170347537A1 (en) * 2015-02-24 2017-12-07 Gaïa Écosystèmes Inc. Multilevel closed ecosystem greenhouse

Also Published As

Publication number Publication date
US20090183200A1 (en) 2009-07-16

Similar Documents

Publication Publication Date Title
US7634793B2 (en) Client-server architectures and methods for zoomable user interfaces
US9100716B2 (en) Augmenting client-server architectures and methods with personal computers to support media applications
US9084020B2 (en) Method and apparatus for providing and receiving user interface
US7664813B2 (en) Dynamic data presentation
US20080201736A1 (en) Using Triggers with Video for Interactive Content Identification
US20170053673A1 (en) MPEG objects and systems and methods for using MPEG objects
US8438492B2 (en) Apparatus and method for providing user interface service in a multimedia system
US20100064332A1 (en) Systems and methods for presenting media content obtained from multiple sources
US10271105B2 (en) Method for playing video, client, and computer storage medium
US9389881B2 (en) Method and apparatus for generating combined user interface from a plurality of servers to enable user device control
AU2010260683A1 (en) Apparatus and method for transmitting and receiving a user interface in a communication system
EP2704397B1 (en) Presenting media content obtained from multiple sources
US8332897B1 (en) Remote rendering of webpages on television
US7692562B1 (en) System and method for representing digital media
CN115550740A (en) Display device, server and language version switching method
KR20090110200A (en) Method and apparatus for providing/receiving user interface

Legal Events

Date Code Title Description
AS Assignment

Owner name: HILLCREST LABORATORIES, INC., MARYLAND

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GRITTON, CHARLES W.K.;HUNLETH, FRANK A.;SCHEIREY, STEPHEN;REEL/FRAME:022480/0849;SIGNING DATES FROM 20090109 TO 20090112

Owner name: HILLCREST LABORATORIES, INC., MARYLAND

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GRITTON, CHARLES W.K.;HUNLETH, FRANK A.;SCHEIREY, STEPHEN;SIGNING DATES FROM 20090109 TO 20090112;REEL/FRAME:022480/0849

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: MULTIPLIER CAPITAL, LP, MARYLAND

Free format text: SECURITY AGREEMENT;ASSIGNOR:HILLCREST LABORATORIES, INC.;REEL/FRAME:037963/0405

Effective date: 20141002

AS Assignment

Owner name: IDHL HOLDINGS, INC., DELAWARE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HILLCREST LABORATORIES, INC.;REEL/FRAME:042747/0445

Effective date: 20161222

AS Assignment

Owner name: HILLCREST LABORATORIES, INC., DELAWARE

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:MULTIPLIER CAPITAL, LP;REEL/FRAME:043339/0214

Effective date: 20170606

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Expired due to failure to pay maintenance fee

Effective date: 20190804

AS Assignment

Owner name: CEVA TECHNOLOGIES, INC., CALIFORNIA

Free format text: ASSIGNMENT OF PATENTS;ASSIGNOR:IDHL HOLDINGS, INC.;REEL/FRAME:050966/0756

Effective date: 20190719