US20030120748A1 - Alternate delivery mechanisms of customized video streaming content to devices not meant for receiving video - Google Patents
- Publication number
- US20030120748A1 (U.S. application Ser. No. 10/303,045)
- Authority
- US
- United States
- Prior art keywords
- user
- video stream
- source video
- destination
- delivering
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/266—Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
- H04N21/26603—Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel for automatically generating descriptors from content, e.g. when it is not made available by its provider, using content analysis techniques
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/258—Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
- H04N21/25866—Management of end-user data
- H04N21/25891—Management of end-user data being end-user preferences
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/482—End-user interface for program selection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/482—End-user interface for program selection
- H04N21/4828—End-user interface for program selection for searching program descriptors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/63—Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
- H04N21/632—Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing using a connection between clients on a wide area network, e.g. setting up a peer-to-peer communication via Internet for retrieving video segments from the hard-disk of other client devices
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/812—Monomedia components thereof involving advertisement data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/16—Analogue secrecy systems; Analogue subscription systems
- H04N7/173—Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
- H04N7/17309—Transmission or handling of upstream communications
- H04N7/17318—Direct or substantially direct transmission and handling of requests
Definitions
- FIG. 1 demonstrates an exemplary methodology for media processing according to one embodiment of the invention.
- FIG. 2 illustrates an architecture for implementing an exemplary embodiment of the invention.
- FIG. 3 demonstrates a more specific hardware architecture according to another exemplary embodiment of the invention.
- FIG. 4 is an exemplary page view of a page viewed by a user utilizing a client according to one embodiment of the invention.
- FIG. 5 demonstrates a page view showing a content retrieval page according to the exemplary embodiment shown in FIG. 4.
- FIG. 6 illustrates a table of representative destination devices according to one embodiment of the invention.
- FIG. 7 is a flow diagram illustrating transformation of video source data according to one embodiment of the invention.
- FIG. 8A is a system diagram illustrating a functional architecture according to one embodiment of the invention.
- FIG. 8B is a flow diagram illustrating a method for delivering video source content according to one embodiment of the invention.
- FIG. 9A is a system diagram illustrating a functional architecture according to one embodiment of the invention.
- FIG. 9B is a flow diagram illustrating a method for delivering video source content according to one embodiment of the invention.
- The service works by recording all of the video streams from sources appropriate to, and of interest to, a target audience.
- the service may record content from a collection of (or a particular one of) sports or news channels on television.
- the service may record content related to training videos, presentations or executive meetings in a business, school or other particularized environment. Recording may occur as the content is originally being broadcast (i.e., live), afterwards from recorded media, or even before the content is broadcast to its intended audience.
- the content can be segmented, analyzed and/or classified, and thereafter stored on a platform.
- the content can be broken down into its component parts, such as video, audio and/or text.
- the text can include, for example, closed caption text associated with the original transmission, text generated from an audio portion by speech recognition software, or a transcription of the audio portion created before or after the transmission. In the latter case, it becomes possible to utilize the invention in conjunction with executive speeches, conferences, corporate training, business TV, advertising, and many other sources of video which do not typically have available an associated textual basis for searching the video.
- the text provides the basis for an exemplary methodology for overcoming the above-identified problems associated with searching video in the prior art. That is, if a user wishes to search the stored content for video segments relevant to the President of the United States discussing a particular topic, then the President's name and the associated topic can be searched for within the text associated with the video segments. Whenever the President's name and the associated topic are located, an algorithm can be used to determine which portion of an entire video file actually pertains to the desired content and should therefore be extracted for delivery to the user.
- Where a video file comprises an entire news broadcast about a number of subjects, the user will receive only those portions of the broadcast, if any, that pertain to the President and the particular topic desired. For example, this could include segments in which the President talks about the topic, or segments in which another speaker discusses the topic and the President's position.
- Once the pertinent segments of the broadcast have been appropriately extracted for a given user, they can be stitched together for continuous delivery to that user.
- The segments can be streamed to the user, providing an easy-to-use delivery methodology while conserving bandwidth.
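- As a concrete illustration of this search-and-stitch step, the sketch below scans caption-indexed segments for a set of search terms (e.g., the President's name plus a topic) and merges adjacent hits from the same program into continuous clips. It is a minimal Python sketch; the segment structure, function names, and merge threshold are illustrative assumptions rather than details taken from the specification.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CaptionSegment:
    """A span of a recorded program together with its closed-caption text."""
    program_id: str
    start_sec: float
    end_sec: float
    text: str

def find_clips(segments: List[CaptionSegment], terms: List[str],
               merge_gap_sec: float = 5.0) -> List[CaptionSegment]:
    """Return merged spans whose caption text mentions every search term."""
    lowered = [t.lower() for t in terms]
    hits = [s for s in segments if all(t in s.text.lower() for t in lowered)]
    hits.sort(key=lambda s: (s.program_id, s.start_sec))

    stitched: List[CaptionSegment] = []
    for seg in hits:
        prev = stitched[-1] if stitched else None
        # Stitch adjacent hits from the same program into one continuous clip.
        if prev and prev.program_id == seg.program_id \
                and seg.start_sec - prev.end_sec <= merge_gap_sec:
            prev.end_sec = max(prev.end_sec, seg.end_sec)
            prev.text += " " + seg.text
        else:
            stitched.append(CaptionSegment(seg.program_id, seg.start_sec,
                                           seg.end_sec, seg.text))
    return stitched

# Usage: find_clips(indexed_segments, ["president", "budget"]) would yield the
# portions of each recorded program whose captions mention both terms.
```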
- Users can view the delivered multimedia asset in its entirety, skip between the assets, or view only portions of the assets, as they desire.
- a user can have access to portions of the original video file that occurred immediately before or after the extracted segments; for example, the user could choose to watch the entire original video file. Such access can be granted by including a “more” or “complete” button in a user interface.
- a profile of the user is stored which specifies criteria for searching available multimedia assets.
- the criteria may include, for example, key words and/or phrases, a source(s) of the content, etc.
- the profile can be set directly by the user via interaction with an appropriately designed graphical user interface (GUI).
- the invention is capable of automatically searching the available assets on a periodic basis, and thereafter extracting, combining and delivering the compiled assets (or segments thereof, regardless of their original source) to the user.
- the invention can be utilized such that a service platform assisting in implementing the invention notifies the user whenever new multimedia assets consistent with the user's profile have been prepared.
- the invention may automatically deliver multimedia assets in accordance with a user's profile according to a predetermined schedule, such as hourly or daily. Alternatively, the invention may notify the user of the presence of desired video clips, rather than actually deliver those clips.
- the assets can be classified and indexed on-the-fly as they are received. In this way, the assets can be compared against the user's profile virtually in real-time, so that results can be provided to the user (and the user can be notified) whenever they become available. Furthermore, a user can provide criteria for a search or searches beyond those set in the user's profile.
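- The sketch below shows one possible shape for the stored profile and for matching it against newly indexed assets on-the-fly. The field names, OR-style keyword matching, and print-based notification are illustrative assumptions only; a deployed service might notify by e-mail or an on-screen alert instead.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set

@dataclass
class UserProfile:
    """Illustrative profile record: topics map to keyword strings, plus allowed sources."""
    user_id: str
    topics: Dict[str, List[str]]                      # e.g. {"sports": ["football", "baseball"]}
    sources: Set[str] = field(default_factory=set)    # empty set = search all sources

def matches_profile(profile: UserProfile, source: str, caption_text: str) -> List[str]:
    """Return the profile topics that a newly indexed asset satisfies."""
    if profile.sources and source not in profile.sources:
        return []
    text = caption_text.lower()
    # A topic matches if any of its keywords appears (OR semantics, as in the
    # "football, baseball, hockey" example given later in the specification).
    return [topic for topic, keywords in profile.topics.items()
            if any(k.lower() in text for k in keywords)]

def on_asset_indexed(profiles: List[UserProfile], source: str, caption_text: str) -> None:
    """Called as assets are classified on-the-fly; notifies matching users."""
    for p in profiles:
        hit_topics = matches_profile(p, source, caption_text)
        if hit_topics:
            print(f"notify {p.user_id}: new clips for {', '.join(hit_topics)}")
```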
- the identified assets can be delivered to the user in a variety of manners. For example, delivery may occur via cable or satellite television, or directly to a personal computer.
- the invention can be practiced via a plurality of platforms and networks. For example, the invention may be practiced over the Internet to reach a large consumer audience, or it may be practiced over an Intranet to reach a highly targeted business or industry target.
- The invention allows video streaming of identified video clips. Video streaming (i.e., allowing a video clip to be viewed as it is downloaded rather than only after it is downloaded) speeds the viewing process and largely obviates the need for video storage at the user location. It is a communications technique that is growing in popularity with the increasing availability of both video players (especially for use with personal computers) and bandwidth to the average consumer.
- However, no conventional service allows users to accurately and quickly find desired clips for playing, and none provides a ready means for providers to profit from the video streams that are provided.
- users may receive only those video clips identified by a search executed on the user's behalf. However, if a user desires, he or she may also choose to view an entire program from which the clip(s) was extracted. A user may also be allowed to choose some or all of the video clips for long-term storage, whereby the clip(s) can be archived for later use. In one embodiment, the user may store the clips at a local computer, and thereafter make the clips available to other users connected via a peer-to-peer network.
- the invention allows improved video-on-demand (VOD).
- VOD is typically defined in the cable/satellite television arena as the ability to request programming at any time and to have VCR-like controls over the content being streamed to the TV.
- the invention adds value to conventional VOD by allowing the user to demand video more accurately and completely.
- Current personal video recorder (PVR) implementations are offered by TiVo and ReplayTV, and allow users great flexibility in storing programs for later viewing and/or manipulation in viewing (e.g., skipping over commercials in a television program).
- the invention provides a searching tool for allowing users to find interesting programs, even from a variety of channel sources, to thereafter be recorded and viewed using PVR technology.
- the invention permits the recording of only those portions of programs that the user desires.
- the invention contemplates recording the desired portions either by doing so directly from the program, or by recording the entire program locally and then utilizing only those portions of the program desired by the user.
- video file refers generically to any analog or digital video information, including any content associated therewith, such as multimedia content, closed caption text, etc.
- clip refers to any subsection of a video program that is selected based on a user search criterion.
- extracting refers to the use of a selected portion of the video file. Such use may include literal removal (permanent or temporary) from the context of a larger file, copying of the selected portion for external use, or any other method for utilizing the selected portion.
- a user may accurately, completely and promptly receive multimedia assets that he or she finds interesting, and may conveniently exploit the received assets in a manner best-suited to that user.
- FIG. 1 demonstrates an exemplary methodology for media processing in a digital video library (DVL) according to one embodiment of the invention.
- Such media processing is used in implementing the invention at a user level, by capturing, segmenting and classifying multimedia assets for later use and manipulation.
- FIG. 1 and discussion of associated concepts are provided in greater detail in the following documents, which are hereby incorporated herein by reference: Shahraray B., “Scene Change Detection and Content-Based Sampling of Video Sequences,” Proc. SPIE 2419, Digital Video Compression: Algorithms and Technologies, pp.
- multimedia assets including video 105 , associated text captions 110 and corresponding audio portions 115 are imported into the system for processing.
- Content-based sampling engine 135 receives the video 105 and segments it into individual shots or video frames; this information will be combined with information extracted from the other components of the video program to enable the extraction of individual stories (i.e., video segments related to a particular topic or topics), as will be described. Additionally, this process allows a representative image for a particular story, segment or clip to be selected by engine 160, and allows boundaries around the story, segment or clip to be set by engine 155.
- a database 120 of linguistic rules is used by linguistic analysis engine 140 to combine the caption information 110 with the segmented video within engines 155 and 160 , to thereby assist in the functionality of those two engines.
- information within model databases 125 and 130 is used by acoustic classification engine 145 and program identification engine 150 to provide segmentation/identification of commercials and programs, respectively.
- All of the information from engines 135 - 150 is utilized in engines 155 and 160 to discern a length of a particular video story or clip that will be associated with each topic.
- multimodal story segmentation algorithms such as those described in “Automated Generation of News Content Hierarchy By Integrating Audio, Video, and Text Information” (above) can be used to determine an appropriate length of a video clip to be associated with a particular topic.
- the algorithm can be used in conjunction with the user profile to either compare the profile information to newly-acquired content on-the-fly, or to similarly determine an appropriate length for a video clip to be associated with a particular portion of the user profile.
- textual information used to identify clips of interest can be derived, for example, from closed caption text that accompanies most television programs.
- Real-time closed captioning typically lags behind the audio and video by a variable amount of time from about 1 to 10 seconds.
- the embodiment of FIG. 1 is capable of using speech processing to generate very accurate word timestamps.
- a large vocabulary automatic speech recognition system can be used to generate a transcript of the audio track. While the accuracy of the automatically generated transcripts is below that of closed captions, they provide a reasonable alternative for identifying clips of interest with reduced, but acceptable, accuracy.
- a parallel text alignment algorithm can be used to import high quality off-line transcripts of the program when they are or become available.
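- One simple way to realize the timestamp correction described above is to align the lagging caption words against the recognizer's word/time pairs and carry the recognizer's times back onto the captions. The sketch below does this with Python's standard difflib; it is illustrative only, and the specification points to parallel text alignment algorithms for the full problem.

```python
import difflib
from typing import List, Optional, Tuple

def align_captions(caption_words: List[str],
                   asr_words: List[Tuple[str, float]]) -> List[Tuple[str, float]]:
    """Re-timestamp caption words using ASR word times.

    caption_words: words from the (lagging) closed-caption feed.
    asr_words:     (word, start_seconds) pairs from a speech recognizer.
    Returns caption words paired with the recognizer's more accurate times;
    words that cannot be matched inherit the previous matched time.
    """
    matcher = difflib.SequenceMatcher(
        a=[w.lower() for w in caption_words],
        b=[w.lower() for w, _ in asr_words])
    times: List[Optional[float]] = [None] * len(caption_words)
    for block in matcher.get_matching_blocks():
        for k in range(block.size):
            times[block.a + k] = asr_words[block.b + k][1]

    result, last = [], 0.0
    for word, t in zip(caption_words, times):
        last = t if t is not None else last
        result.append((word, last))
    return result
```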
- FIG. 2 illustrates an architecture for implementing an exemplary embodiment of the invention. It should be noted that the architectural elements discussed below can be deployed to a user and/or provider of multimedia assets in whole or in part, and therefore the elements interface with one another and with external components using standard, conventional interfaces.
- Video Capture/Media Analysis component 205 records and compresses broadcast TV programming. Also at component 205 , various functions can be performed on the content such as scene change detection, audio analysis, and compression. These video files are shipped to the Video Storage database 210 from which they will be served when the video is streamed to the client 250 .
- Associated metadata is shipped to the Metadata database 215 .
- thumbnail images are included as part of the metadata, as well as terms and/or phrases associated with a clip(s) for categorizing the clip(s) within a topical subset.
- this video capture/media analysis process need not occur in real time. However, there is no reason why it could not occur in real time if an operator so desires and wishes to devote sufficient computational resources. In any case, it is not necessary to wait until a show is completed before indexing and searching that show.
- Video Server 220 responds to clip requests and makes the video content available to the client 250 .
- the video server 220 may download the video clips in whole or in part, stream the clips (e.g., via MPEG4 ASF or MPEG2) to the client 250 or generate the clip metadata discussed above (such as terms and/or phrases associated with a clip for categorizing the clip within a topical subset).
- DVL Server 225 handles query requests (such as how many clips are available, which shows have clips, etc.) and/or clip content requests (metadata that describes clip content including “clip pointer” to video content). Thus, it handles multimedia search (such as closed caption text) and determines the start and stop times of the clips, which are designated with “clip pointers,” as just mentioned.
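- As an illustration of the clip metadata ("clip pointers") the DVL server returns, the sketch below defines a pointer record and answers a "how many clips per topic" query. The field names and the keyword-overlap test are assumptions for illustration, not a schema defined in the specification.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ClipPointer:
    """Metadata the DVL server returns instead of the video itself (illustrative)."""
    show_id: str
    start_sec: float        # clip start within the stored video file
    stop_sec: float         # clip stop within the stored video file
    thumbnail_url: str
    caption_excerpt: str
    terms: List[str]        # terms/phrases used to categorize the clip

def count_clips_by_topic(clips: List[ClipPointer],
                         topics: Dict[str, List[str]]) -> Dict[str, int]:
    """Answer a 'how many clips are available per topic' query."""
    counts = {topic: 0 for topic in topics}
    for clip in clips:
        clip_terms = {t.lower() for t in clip.terms}
        for topic, keywords in topics.items():
            if clip_terms & {k.lower() for k in keywords}:
                counts[topic] += 1
    return counts
```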
- eClips server 230 handles client requests for web pages related to a service for providing eClips.
- eClips server 230 utilizes Perl Common Gateway Interface (CGI) scripts that the client navigates in order to perform the functions of the eClips service. For example, the scripts deal with login/registration related pages, home page, profile related pages, archive related pages, player pages, and administration related pages. Player scripts can be launched in a separate window.
- A CGI request from the client 250 returns HTML with HTML DIVs, JavaScript, and CSS style sheets. The DIVs and CSS style sheets are used to position the various elements of the page.
- DHTML is used to dynamically load DIV content on the fly (for instance, a list of shows in an instant search pulldown performed by a user).
- three databases 235 , 240 and 245 are shown as Extensible Markup Language (XML) databases.
- Perl scripts can be utilized to access (i.e., read from and/or write to) these databases via XML.
- these three databases include show database 235 , which contains information about recorded broadcasts, Profile database 245 , which contains personal search terms and/or phrases, and Archive database 240 , which contains saved clip information (e.g., entire clips or simply clip pointers).
- eClips Client 250 includes a JavaScript that each Perl script includes in the HTML that is returned from the eClips server 230 . It is through the JavaScript that the client 250 interacts with the DVL server 225 to determine the desired content and through JavaScript that the client initiates the streaming content with the video server 220 . The JavaScript also accesses (reads) the Show and Profile XML files in those databases.
- the Video Server 220 may have a separate IP host name, and should support HTTP streaming.
- the DVL and eClips servers 225 and 230 may have the same IP host name, and may be collocated within a single machine.
- In FIG. 2, the key interactions that cause video to be streamed to the client 250 are demonstrated.
- Assume a user has already logged in and sees a list of topics determined by his or her profile, as well as the number of clips for each topic.
- An example of a topic could be “sports” and the keyword string associated with this topic could be football, baseball, hockey.
- the keyword string is used to search the CC text (in this case, clips that have any of these terms will be valid).
- JavaScript will send a CGI query to DVL server 225 , which generates an XML response.
- the XML is parsed into JavaScript variables on the client using the XML document object model (DOM).
- the CGI query and XML response is implemented as part of the DVL system and acts as a layer above an Index Server, which, as part of the DVL server 225 , performs text indexing of the video clips (as discussed above) that allows the user to locate a desired clip.
- the XML response will include the number of clips found for each topic. It is with these query responses that the home page knows which topics have hits and can activate the links to play the content.
- JavaScript links when clicked, can launch the player page in a separate window.
- the JavaScript may also run a query to get the list of shows with clips for a particular topic.
- the JavaScript then loops through all the shows with hits and queries the DVL server via the separate CGI script to get the clip information needed to play the clip. This information is also returned via XML and parsed via the JavaScript.
- the JavaScript loads various DIVs that depend on this information, such as hit search term found in CC text, CC text, and thumbnail.
- the player page JavaScript starts the media player with the first clip using a pointer (start time) to the video.
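- The specification describes this query/parse flow in client-side JavaScript; purely as an illustration of the same idea, the sketch below issues the CGI query and walks the XML response in Python. The endpoint URL and the element/attribute names are hypothetical, since the response schema is not published in the description above.

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET
from typing import Dict, List

DVL_QUERY_URL = "http://dvl.example.com/cgi-bin/clipquery"   # hypothetical endpoint

def fetch_clip_list(topic_keywords: str) -> List[Dict[str, object]]:
    """Send a CGI query to the DVL server and parse its XML response."""
    query = urllib.parse.urlencode({"keywords": topic_keywords})
    with urllib.request.urlopen(f"{DVL_QUERY_URL}?{query}") as resp:
        root = ET.fromstring(resp.read())

    clips: List[Dict[str, object]] = []
    for clip in root.iter("clip"):            # <clip> elements are assumed, not documented
        clips.append({
            "show": clip.get("show"),
            "start": float(clip.get("start", "0")),   # clip pointer: start time
            "stop": float(clip.get("stop", "0")),
            "thumbnail": clip.get("thumb"),
            "caption": (clip.findtext("caption") or "").strip(),
        })
    return clips
```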
- eClips client 250 may reside on, for example, a user's home or business computer, a personal digital assistant (PDA), or a set-top box on a user's television set.
- Client 250 interacts with eClips server 230 as discussed above to provide the user with an interface for viewing and utilizing the video clips.
- Client 250 can be written to contain, for example, a JavaScript object that contains profile results (eClips object).
- a user using eClips client 250 running on a PC may access stored clips through a network, such as the Internet or a locally defined Intranet.
- the user defines a search criterion, either through an “instant search” feature or within a user profile. When multiple clips are found matching the user search, the clips can be stitched together and streamed to the user as one continuous program.
- eClips server periodically searches for clips matching a given user's profile, and makes the clips available to the user, perhaps by notifying the user via email of the availability of the clips.
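- A minimal notification sketch follows, assuming delivery by e-mail over a local SMTP relay; the sender address and message wording are invented for illustration, since the description only suggests that notification "perhaps" occurs via e-mail.

```python
import smtplib
from email.message import EmailMessage
from typing import Dict

def notify_clips_ready(user_email: str, topic_counts: Dict[str, int],
                       smtp_host: str = "localhost") -> None:
    """Send a 'your clips are ready' notice over a local SMTP relay."""
    msg = EmailMessage()
    msg["Subject"] = "New eClips matching your profile"
    msg["From"] = "eclips-service@example.com"   # hypothetical sender address
    msg["To"] = user_email
    body = "\n".join(f"{topic}: {count} new clip(s)"
                     for topic, count in topic_counts.items())
    msg.set_content(body + "\n\nLog in to view or play your clips.")
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)
```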
- the architecture shown in FIG. 2 allows for video to be stored and displayed in several formats including MPEG2 (e.g., for digital television and video on demand) and MPEG4 (e.g., for streaming video on the Internet).
- the video may be stored for later use by the user; in particular, a user may archive some or all of the received video and thereafter permit searching and uploading of the video from storage by other members of a peer-to-peer computer network.
- FIG. 3 demonstrates a more specific hardware architecture according to another exemplary embodiment of the invention.
- video feeds 310 are received through various sources (such as television channels CNN, ESPN and CNBC) at Video Capture/Media Analysis component 205 within Video Distribution Center 305 .
- Component 205 receives the feeds and forwards captured/analyzed results to video server 220 and/or DVL/eClips server 225 / 230 within cable Headend 325 .
- video analysis portion 315 is illustrated within component 205 , although it should be understood from FIG. 2 and the associated discussion above that component 205 may perform other media analysis such as audio analysis.
- The DVL/eClips servers 225/230 operate as described above in conjunction with FIG. 2; their output is carried from the cable Headend 325 to subscribers over a Hybrid Fiber Coaxial (HFC) network.
- The feed is received at cable modem 350 and delivered via a high speed data (HSD) connection to a PC 360 running eClips client 250.
- the feed could be sent to Set top box 370 atop TV 380 , where Set top box 370 runs eClips client 250 .
- the service can be streamed as high speed data (HSD) through a cable modem as MPEG4 video.
- FIG. 4 is an exemplary page view of a page viewed by a user utilizing an eClips client according to one embodiment of the invention.
- the user might see page view 400 just after logging in to a system implementing the invention.
- section 405 demonstrates the results of a profile search performed for the user on a given day, or over some other pre-defined period, according to the previously stored profile of that user.
- clips are listed both by topic and by number of clips related to that topic.
- the user therefore has the option of viewing one or more of the clips related to a particular topic.
- Section 405 also identifies a source for the criteria used to select the various topical clips. More specifically, on a profile page, a user can select default sources (shows) which will be searched based on the user's profile; this is referred to as a “Main” list, and would restrict any profile topic that has the Main option to search only those shows selected on the profile page. On a topic editor page, where a user is allowed to add or modify topics for searching, the user can specify this Main list, or can make Custom selections that are only valid for a particular search topic. In section 405 , the user has selected the latter option, and so a “source” is shown as Custom.
- In section 410, the user additionally has the option of entering new search terms and/or phrases not related to his or her current profile, whereby the invention searches a clips database via the DVL server as described above with respect to FIG. 2.
- Section 415 indicates the media sources which will be searched for the terms or phrases entered in section 410 .
- button 420 “Play all clips,” allows a user to view all currently available clips with one click.
- the user can add a new topic using button 425 .
- the user can return to a home page by clicking on button 430 (although this option is only valid when the user is on a page different from the home page 400 itself), access his profile via button 435 and access an archive of previously saved clips via button 440 .
- a user can log out of the service using button 445 .
- FIG. 5 demonstrates a page view 500 showing a content retrieval page according to the exemplary embodiment shown in FIG. 4.
- In section 505, still frames of the beginning of each clip (i.e., thumbnails) within a topic can be viewed by the user.
- Section 505 can be controlled by section 515 , which allows the user to select a topic of clips to be shown, as well as section 520 , which allows a user to select a portion of the clips from that topic that will be played.
- Using buttons 560 and 565, a user may clear or select all of the clips being shown within a particular topic.
- Section 510 can be controlled by buttons 525 - 550 , which allow a user to skip to a previous clip with button 525 , stop the clip with button 530 , play the clip with button 535 , skip the clip with button 540 , switch to a new topic of clips with button 545 or view footage after the selected clip(s) with button 550 .
- section 510 may also include advertisements 555 , and may display a time remaining for a currently playing clip, a source of the clip, and a date and time the clip was originally broadcast.
- page 500 will play all of the clips currently available in a predetermined order (e.g., reverse chronological order, by source of content, etc.) if the user does not choose a specific topic or clip.
- Button 570 is activated when a user wants to view the clip(s) available; i.e., as shown in view 500 .
- Button 575 allows the user to send (e.g., email) the clip(s) to another user, and button 580 allows the user to save the clip(s) to an archive (i.e., the archive accessed by button 440 in FIG. 4).
- Because the invention can capture content from nearly any multimedia source and then use standard streaming media to deliver the appropriate associated clips, it is nearly limitless in the markets and industries that it can support.
- the invention can be packaged to address different market segments. Therefore, it should be assumed that the target markets and applications supported could fall into, for example, any or all of the Consumer, Business-to-Consumer or Business-to-Business Marketplaces. The following discussion summarizes some exemplary application categories.
- the invention can be provided as an extension to standard television programming.
- an ISP may allow consumers to sign up for this service, or the set of features provided by the invention can be provided as a premium subscription.
- a consumer would enter a set of keywords and/or phrases in the profile.
- the user may determine that only specific content sources should be monitored.
- As the user profile is created or changed, it would be updated in the user profile database.
- The user profile database is then matched against the closed caption text.
- a consumer may be interested in sports but only want to see the specific “play of the day.”
- the consumer would enter the key words “play of the day” and then identify in the profile the specific content sources (channels or programs) that should be recorded/analyzed by the invention. For example, the consumer could choose channels that play sports games or report on sports news.
- the invention in a Business-to-Consumer offering, can be provided as an extension to standard television programming.
- both the programming and its sponsorship would be different from the consumer model above.
- a corporate sponsor or numerous corporate sponsors may offer specific types of content, or may offer an assemblage of content overlaid with advertising sponsorship.
- The sponsorship would be evident in the advertising that would be embedded in the player or in the content, since the invention is modular in design and allows for customization.
- In the Business-to-Consumer service model, a consumer would enter a set of keywords in the profile. As the user profile is created or changed, it would be updated in the user profile database. Because this model and the content provided would be underwritten by corporate sponsorship, the content provided may be limited to a proprietary set of content. As an example, if CNN were the sponsor of the service, all of the content provided may be limited to CNN's own broadcasts. In addition, it may be very evident to the consumer that the service is brought to them by CNN in that the CNN logo may be embedded in the user interface, or may be embedded in the content itself.
- the invention can be used in intra-company applications as well as extra-company applications.
- the applications supported include, as just a few examples: Business TV, Advertising, Executive Announcements, Financial News, Training, Competitive Information Services, Industry Conferences, etc.
- the invention can be used as a tool to assist employees in retrieving and viewing specific portions of content on demand.
- the user may wish to combine sources from within the business and sources outside of the business.
- a user may wish to see all clips dealing with the category “Virtual Private Networks.”
- a business may have planned a new advertising campaign talking about “Virtual Private Networks” and have an advertisement available to its internal personnel.
- there may be an internal training class that has been recorded and is available internally in which a section talks about “Virtual Private Networks.” Again, this could be another content option captured by the invention.
- one of this company's competitors may have provided a talk at an industry conference the day before about their solution for the “Virtual Private Network” area.
- the invention can provide businesses, their suppliers, their best customers, and all other members of communities of interests with specific targeted content clips that strengthen the relationships. These may include (but not be limited to) product details, new announcements, public relations messages, etc.
- financial information can be available for both professionals and potential clients to receive late-breaking information on stocks, companies and the global markets.
- the information can be from a variety of sources such as Financial News Network, Bloomberg, CNN, etc. and allow users to identify key areas of interest and to continually be up to date.
- the movie industry can use the invention to easily scan through archives of old and new movie footage that can be digitized and stored in a central repository. Sports highlights can be made available for particular games or events. Networks could maintain a library of indexed TV shows (e.g., PBS) where users can search for a particular episode/topic.
- the invention can be an information dissemination tool for finding the latest information quickly when videos are captured of talks and demonstrations in key events.
- One embodiment of the invention relates to current bandwidth shortages and limitations, which sometimes limit the prompt and effective provisioning of streaming video and other media.
- Internet users, particularly home Internet users, often do not have access to high-speed data rates such as those found in cable and/or fiber-optic transmissions.
- Such users often experience a significant delay between the time a video stream is selected and the time the stream actually begins to play. This delay time may be additionally and/or further exacerbated by the need to buffer an initial portion of the video stream locally, so that the video stream will play smoothly once it does begin to play.
- These shortcomings of conventional streaming techniques may therefore also affect the provisioning of eClips servers according to the invention, as has already been described.
- the invention provides relevant information to the user during such a potential wait time, thereby providing entertainment, advertising or other services and reducing the apparent wait time until playing begins.
- a user receiving a customized media presentation might have information relevant to the subject matter of the presentation automatically downloaded from a DVL/eClips server during an off-time (such as late at night).
- the relevant information can be determined based on, for example, a user profile set up as part of the eClips service and in a manner similar to that described above for formulating the customized media presentation itself.
- the information might also be information previously obtained and stored locally by the user for viewing which has simply not yet been viewed by the user. This way, the information can be made available on the user's local hard drive, and can therefore be played immediately upon selection of a particular media stream, during the time when the media stream is being delivered and/or buffered for viewing. While the information is being displayed, the viewer may choose to see the information in its entirety before viewing the particular video stream selected for viewing. In another embodiment, however, the user may discontinue viewing the local information as soon as the primary stream becomes available.
- Relevant information that might be embedded into a media stream being delivered as just described might include, for example, information about the subject matter of the stream or information related thereto, such as advertising for related products or services. Additional possibilities for embedding into the media stream include graphics, games, text, pictures and other types of known media assets.
- the invention might operate through the use of multiple media players, perhaps displaying only one instance of a particular video stream. For example, if the particular video stream is selected for viewing on a certain media player, the invention might automatically (or optionally) open a second media player for playing the locally stored information to be displayed prior to the playing of the primary video stream. Moreover, the invention might display multiple pieces of relevant information, so that the user may choose what to view during the wait time for the primary stream.
- a number of video thumbscreens or video shots might be displayed from which the user can choose for viewing.
- Software at the user's local system may be operable to set forth a criterion and/or timing according to which information to be embedded is located, stored and/or displayed.
- this functionality may be enabled at a server location, for example, using the DVL/eClips server discussed above.
- the invention embeds locally stored media into a video or other media stream to be presented to the viewer, so that the user avoids any wait time in viewing the selected stream that may occur due to bandwidth shortages or other system considerations.
- the locally stored media may be relevant to the content of the primary stream, so that the user does not have to wait an undue amount of time to view information about a desired topic.
- the content can include video clips as discussed primarily above, or can be limited to still frames and text (or just text) if bandwidth/storage does not permit full motion video with audio.
- Hybrid schemes are also contemplated in which some of the content includes video, but other (e.g., perhaps older, or repeated similar stories from multiple sources) clips only include audio, or include only still images and/or text.
- multimedia analysis techniques can be used to determine if stories are about the same topic, or contain the same video material. Because the invention is capable of using standard access and delivery methods, it can be employed in virtually any home or industry application where delivery of multimedia assets is desired.
- FIG. 7 is a flow diagram illustrating transformation of video source data 710 according to one embodiment of the invention.
- FIG. 7 illustrates several alternative transformation paths to deliver content of video source 710 to destination devices 775 .
- video source data 710 may be live streaming video, delayed streaming video, or stored video data.
- Sampling function 715 processes video source data 710 to produce static images 720 .
- Where video source data 710 is streaming video, capture process 723 produces a video file 725 from video source data 710.
- Static images 720 or video files 725 are then delivered to destination devices 775 .
- FIG. 7 illustrates that demultiplexing process 745 processes video source data 710 to obtain or produce audio stream 750.
- the flowchart shows that there are at least four options for the delivery of audio stream 750 .
- audio stream 750 can be delivered to destination devices 775 directly.
- capture process 753 can create sound file 755 from audio stream 750 for eventual delivery to destination devices 775 via link 780 .
- speech recognition process 760 can process audio stream 750 to produce text 765 . Text 765 can then be delivered to destination devices 775 .
- process 768 can further process text 765 to provide for correction of errors generated by the speech recognition process 760 , or may, either in the alternative or in combination, translate text 765 to another language to produce processed text 770 .
- Processed text 770 can then be delivered to destination devices 775 .
- FIG. 7 illustrates that extraction process 728 generates Closed Caption Text (CCT) 730 from video source data 710 .
- Process 733 corrects for errors in CCT 730 , provides language translation, and/or performs other translations to generate processed CCT 735 .
- Processed CCT 735 may be delivered directly to destination devices 775 .
- text-to-speech process 740 operates on either CCT 730 or processed CCT 735 to produce audio stream 750 , with at least all transformation paths available as described above with regard to audio stream 750 for eventual delivery to destination devices 775 .
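- A sketch of the audio leg of these transformation paths follows, assuming the ffmpeg command-line tool is available for demultiplexing; the speech-recognition and text-to-speech steps are left as clearly marked placeholders because the specification does not name particular engines for processes 760 and 740.

```python
import subprocess
from pathlib import Path

def demultiplex_audio(video_path: Path, out_wav: Path) -> Path:
    """Roughly processes 745/753: pull the audio track out of a stored video file.

    Assumes the ffmpeg command-line tool is installed; the specification does
    not name a particular demultiplexer.
    """
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(video_path),
         "-vn",                        # drop the video stream
         "-ac", "1", "-ar", "16000",   # mono, 16 kHz, convenient for recognition
         str(out_wav)],
        check=True)
    return out_wav

def speech_to_text(wav_path: Path) -> str:
    """Placeholder for process 760; a real system would call a large-vocabulary
    speech recognizer here. Hypothetical, not specified above."""
    raise NotImplementedError

def text_to_speech(text: str, out_wav: Path) -> Path:
    """Placeholder for process 740 (e.g., synthesizing corrected closed-caption
    text into an audio stream). Hypothetical, not specified above."""
    raise NotImplementedError
```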
- Destination devices 775 may be or include, for example, any of the representative devices referred to in FIG. 6 and described in the background section of this specification. A user's choice of destination device will affect the manner in which the user selects and navigates delivered content.
- a user may use a single destination device, or the user may use multiple devices in combination to receive delivered content. For instance, a particular user may utilize a facsimile to receive an image 720 and a wireless telephone to receive an audio stream 750 . Where multiple destination devices are used, and where the media delivered to the multiple destination devices are related, the delivered content may be associated using tags or other identifiers that allow a user to align the content received on multiple devices. For example, audio stream 750 received on a wireless telephone may be associated to images 720 sent to a facsimile with reference to a page number of the facsimile transmission.
- Content may be tailored to a target destination device 775 according to known alerting or notification utilities that communicate a class of destination device to a content provider at or near a time of delivery.
- content may be tailored according to a predetermined user profile.
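- The routing decision can be driven by a capability table in the spirit of FIG. 6. The sketch below picks the richest media type a destination device can present; the table entries and fallback order are illustrative assumptions, since actual capabilities vary by model, as the background notes.

```python
from typing import Dict, Set

# Capability table in the spirit of FIG. 6 (entries are illustrative).
DEVICE_MEDIA: Dict[str, Set[str]] = {
    "wired_phone":   {"audio"},
    "smart_phone":   {"audio", "text", "image"},
    "pager":         {"text"},
    "fax":           {"image", "text"},
    "picture_frame": {"image"},
    "pc":            {"text", "audio", "image", "video"},
}

# Preferred fallback order when the device cannot render video.
FALLBACK_ORDER = ["video", "audio", "image", "text"]

def choose_destination_media(device_class: str) -> str:
    """Pick the richest media type the target device can present."""
    capabilities = DEVICE_MEDIA.get(device_class, {"text"})
    for media in FALLBACK_ORDER:
        if media in capabilities:
            return media
    return "text"

# e.g. choose_destination_media("wired_phone") -> "audio"
```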
- Transformed content may be delivered to destination devices 775 according to alternative timing schemes.
- CCT 730, processed CCT 735, audio stream 750, text 765, and processed text 770 may be delivered in near real-time (e.g., where content delivery is delayed only by processing and communication overhead).
- transformed content is stored for later delivery.
- the timing for delivery of stored content may be according to a predetermined schedule, such as a set time of day.
- content can be delivered according to a set interval of time, such as every hour or other fixed period of time.
- the predetermined schedule may be specified in a user's profile data.
- the delivery of near real-time and/or stored content may be event-triggered.
- a user profile may specify that breaking headline news, special reports, and/or severe weather warnings trigger near real-time delivery of content separate from, or together with, related stored content.
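- The sketch below combines the three timing schemes (near real-time delivery on an event trigger, a fixed interval, and a set time of day) into a single delivery decision; the policy fields and trigger phrases are illustrative stand-ins for values that would come from the user's profile data.

```python
from dataclasses import dataclass
from datetime import datetime, time
from typing import Optional, Tuple

@dataclass
class DeliveryPolicy:
    """Illustrative per-user delivery settings drawn from profile data."""
    daily_at: Optional[time] = None        # e.g. time(7, 0) for 7:00 every day
    every_minutes: Optional[int] = None    # e.g. 60 for hourly delivery
    urgent_triggers: Tuple[str, ...] = ("breaking news", "severe weather")

def should_deliver_now(policy: DeliveryPolicy, now: datetime,
                       last_delivery: Optional[datetime],
                       content_tags: Tuple[str, ...] = ()) -> bool:
    """Decide between near real-time (event-triggered) and scheduled delivery."""
    # Event trigger: matching tags bypass the schedule entirely.
    if any(tag in policy.urgent_triggers for tag in content_tags):
        return True
    # Fixed-interval delivery (e.g., every hour).
    if policy.every_minutes is not None:
        if last_delivery is None:
            return True
        return (now - last_delivery).total_seconds() >= policy.every_minutes * 60
    # Set time-of-day delivery.
    if policy.daily_at is not None:
        due_today = (now.hour, now.minute) >= (policy.daily_at.hour, policy.daily_at.minute)
        delivered_today = last_delivery is not None and last_delivery.date() == now.date()
        return due_today and not delivered_today
    return False
```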
- Sample process 715 , demultiplexing process 745 , extraction process 728 , text-to-speech process 740 , speech recognition process 760 , capture processes 723 and 753 , and processes 733 and 768 may be performed on a server or other network based host computer having access to video source data 710 .
- Specific embodiments for delivering audio stream 750 or sound file 755 to destination devices 775 are provided with reference to FIGS. 8 A- 9 B below.
- FIG. 8A is a system diagram illustrating a functional architecture according to one embodiment of the invention.
- server 810 is coupled to user profile data 825 and is further coupled to voice mailbox 815 via automatic load path 830 .
- voice mailbox 815 may optionally be hosted on the same computer.
- One or more destination devices 820 are coupled to the voice mailbox 815 via data retrieval path 835 , and may optionally be coupled to server 810 via link 840 .
- the user profile data 825 may be or include, for example, user identifiers, topics of interest to the user, and other information.
- the user profile data 825 is loaded and periodically updated from destination device 820 or another device to a database accessible by server 810 .
- The system of FIG. 8A is configured to perform the functions described with reference to FIG. 8B.
- FIG. 8B is a flow diagram illustrating a method for delivering video source content according to one embodiment of the invention.
- server 810 reads previously stored profile data 825 in step 850 .
- Server 810 identifies information in video source data 710 relevant to topics in the user profile data 825 in step 855 .
- Server 810 transforms the relevant video source data 710 into an audio stream 750 or a sound file 755 in step 860 .
- demultiplexing process 745 can create audio stream 750 from video source data 710
- capture process 753 can create sound file 755 from the audio stream 750 .
- server 810 streams audio stream 750 to voice mailbox 815 in step 865 .
- server 810 loads sound file 755 to voice mailbox 815 in step 865 .
- Server 810 plays the transformed information (i.e., audio stream 750 or sound file 755 ) from voice mailbox 815 in step 870 .
- delivery of information to destination device 820 is under the local control of voice mailbox 815 .
- voice mailbox 815 may receive Dual-Tone Multi-Frequency (DTMF) signals and/or voice commands to effect the delivery of audio stream 750 or sound file 755 according to user input.
- server 810 may also send media to destination device 820 via link 840 .
- Where destination device 820 is a smart phone, a user may simultaneously receive an audio stream 750 via voice mailbox 815 and an image 720 via link 840.
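- A compact sketch of the mailbox-loading leg (steps 865-870) follows, with the voice-mail platform's loader represented by a callable because the specification does not define an API for automatic load path 830; once loaded, playback proceeds under the mailbox's local control in response to DTMF or voice commands.

```python
from dataclasses import dataclass
from pathlib import Path
from typing import Callable, List

@dataclass
class TransformedClip:
    """An audio rendering of a relevant video portion (illustrative record)."""
    user_id: str
    topic: str
    sound_file: Path

def deliver_to_voice_mailbox(clips: List[TransformedClip],
                             load_into_mailbox: Callable[[str, Path], None]) -> None:
    """Push transformed audio into each user's voice mailbox (step 865).

    Playback (step 870) then happens under the mailbox's own control, driven
    by the user's DTMF or voice commands, so the server's role ends here.
    """
    for clip in clips:
        load_into_mailbox(clip.user_id, clip.sound_file)

# Example wiring with a stand-in loader:
def demo_loader(user_id: str, sound_file: Path) -> None:
    print(f"loading {sound_file} into mailbox of {user_id}")

deliver_to_voice_mailbox(
    [TransformedClip("alice", "sports", Path("/tmp/alice_sports.wav"))], demo_loader)
```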
- FIG. 9A is a system diagram illustrating a functional architecture according to one embodiment of the invention.
- the architecture in FIG. 9A is an alternative approach to the architecture in FIG. 8A for delivery of audio stream 750 and/or sound file 755 .
- a Web server 910 having access to user profile data 905 , and including Voice Extensible Mark-up Language (VXML) generator 915 , is coupled to VXML gateway 920 .
- VXML gateway 920 includes Interactive Voice Response (IVR) system 925 and is coupled to client 930 .
- Client 930 may be a wired, wireless, or smart telephone, for example, having DTMF and speech input capability 935 .
- the description of profile data 825 above is applicable to profile data 905 .
- the system of FIG. 9A is configured to perform the process shown in FIG. 9B.
- FIG. 9B is a flow diagram illustrating a method for delivering video source content according to one embodiment of the invention.
- Web server 910 reads previously stored profile data 905 in step 940 .
- Web server 910 identifies information in video source data 710 relevant to the user profile data 905 in step 945 , and transforms the relevant info in step 950 .
- Server 910 optionally stores the transformed information in step 955 .
- Gateway 920 receives a call from client 930 , for example at a toll free number, in step 960 .
- the VXML gateway 920 has a table that correlates the toll free number with a particular Uniform Resource Locator (URL) related to a particular application.
- VXML gateway 920 fetches the corresponding URL on Web Server 910 in step 965 , and Web server 910 generates VXML based on code derived from the corresponding URL in step 970 . Accordingly, when the VXML gateway 920 runs IVR system 925 to deliver the transformed information to client 930 in step 980 , the application, greeting, and content of the IVR session may be tailored according to the incoming call in step 960 .
- In one embodiment, step 980 is, or includes, the delivery of an audio stream 750; in another embodiment, step 980 is, or includes, delivery of a sound file 755 to be played by client 930.
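- As an illustration of step 970, the sketch below generates a minimal VoiceXML 2.0 document that greets the caller and plays the transformed audio; the dialog structure and any URLs are assumptions, since the description says only that VXML generator 915 produces VXML based on code behind the fetched URL.

```python
from typing import List
from xml.sax.saxutils import escape

def _audio_tag(url: str) -> str:
    # Escape &, <, and double quotes so the URL is safe inside an attribute.
    return '<audio src="%s"/>' % escape(url, {'"': "&quot;"})

def generate_vxml(greeting: str, audio_urls: List[str]) -> str:
    """Build a minimal VoiceXML 2.0 document for the IVR session (step 980)."""
    body = "\n      ".join(_audio_tag(u) for u in audio_urls)
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form id="play_clips">
    <block>
      <prompt>{escape(greeting)}</prompt>
      {body}
    </block>
  </form>
</vxml>
"""

# Step 965 happens on the gateway: its table maps the dialed toll-free number to
# a URL on Web server 910, and the handler behind that URL would return, e.g.,
# generate_vxml("Here are today's clips for your profile.",
#               ["http://media.example.com/alice/sports_0800.wav"])   # hypothetical URL
```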
- The invention thus provides a service for delivering personalized multimedia assets, such as electronic clips from video programs, based upon personal profiles.
- It uses text to ascertain the appropriate clips to extract and then assembles these clips into a single session.
- Users see only the specific portions of videos that they desire. Therefore, users do not have to undertake the arduous task of manually finding desired video segments, and further do not have to manually select the specified videos one at a time. Rather, the invention generates all of the desired content automatically.
- one embodiment of the invention provides an improved system and method for delivering video content to destination devices not adapted to receive streaming video.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Human Computer Interaction (AREA)
- Computer Graphics (AREA)
- Business, Economics & Management (AREA)
- Marketing (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Television Signal Processing For Recording (AREA)
Abstract
In one exemplary embodiment, the invention relates to a system and method for delivering content, including: reading profile data related to a user; automatically identifying a portion of at least one source video stream based on relevance to the profile data; and transforming the identified portion of the at least one source video stream into a destination media, wherein the destination media does not comprise a video stream.
Description
- This application is a continuation-in-part of nonprovisional application Ser. No. 10/034,679, which was filed on Dec. 28, 2001, and claims priority to provisional application No. 60/282,204, which was filed Apr. 6, 2001, and to provisional application 60/296,436, which was filed Jun. 6, 2001, all of which are hereby incorporated by reference in their entireties.
- 1. Field of the Invention
- The invention relates to the delivery of multimedia assets to a user. More specifically, the invention relates to a method and system for transforming streaming video content for delivery to devices not equipped to receive data in video format.
- 2. Description of the Related Art
- FIG. 6 illustrates a table of representative portable and non-portable destination devices according to one embodiment of the invention. For each representative device, FIG. 6 indicates whether the device typically provides the capability for a user to be presented with text, audio, image, or video media.
- For instance, a typical wired telephone is configured only to receive audio; however, it is common for cellular or other wireless telephones to also be equipped for receipt of textual information, subject to subscription agreements between the user and a network service provider. A smart phone, as referred to in FIG. 6, may be a more capable device such as a PalmPhone™ or PocketPC Phone (hybrid devices functioning both as a personal digital assistant and a telephone), a Web-enabled phone having Wireless Application Protocol (WAP), an I-mode phone (a phone having protocols tailored for access to compatible Web sites), or another hybrid telephone.
- Facsimile machines, pagers, one or two-way radios, and personal computers are well-known destination devices.
- Personal Digital Assistants (PDAs) have evolved into a range of products; FIG. 6 contemplates PDAs with network communication capabilities. An electronic picture frame, as used herein, refers to a special class of computers having a network communication capability, and adapted, typically with a large high-resolution display, to function as a digital photo display device. Ceiva's Digital Photo Receiver is an example of an electronic picture frame. A tablet PC is a type of notebook-sized personal computer to which a user makes inputs via a digital pen and input panel.
- FIG. 6 thus refers to a wide range of potential destination devices, many of which are adapted to mobile application environments. The destination devices of FIG. 6 present various disadvantages in terms of the type of media that they are capable of presenting to a user. For example, only selected device types are capable of receiving and presenting video streams to a user. Moreover, even within those few device types, only selected models have such capability. Video broadcasts, however, represent an abundant source of information. Thus, even where advances are made in searching video stream sources, there exists a need for systems and methods to transform video stream content for delivery to destination devices not equipped to receive streaming video media.
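- Purely as an illustration of the kind of capability table FIG. 6 represents, the sketch below maps representative device classes to assumed media capabilities and picks a deliverable, non-video media type. The specific entries and the preference order are assumptions for this example, not the content of FIG. 6.

```python
# Illustrative sketch (not part of the original disclosure): a capability
# table in the spirit of FIG. 6. The entries are assumptions only.
DEVICE_CAPABILITIES = {
    "wired_telephone": {"audio"},
    "wireless_telephone": {"audio", "text"},
    "smart_phone": {"audio", "text", "image"},
    "facsimile": {"text", "image"},
    "pager": {"text"},
    "personal_computer": {"text", "audio", "image", "video"},
    "electronic_picture_frame": {"image"},
}

# Assumed preference order when choosing a non-video destination media.
PREFERENCE = ["audio", "image", "text"]

def best_destination_media(device_class: str) -> str:
    """Pick the richest media type a device can present, favoring audio."""
    capabilities = DEVICE_CAPABILITIES.get(device_class, {"text"})
    for media in PREFERENCE:
        if media in capabilities:
            return media
    return "text"

if __name__ == "__main__":
    print(best_destination_media("wired_telephone"))  # audio
    print(best_destination_media("pager"))            # text
```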
- The foregoing description of the known art is hereby applied to the detailed description of the invention to the extent that such disclosure enables one to practice the invention, or for other reasons.
- In one exemplary embodiment, the invention relates to a method for delivering content, including: reading profile data related to a user; automatically identifying a portion of at least one source video stream based on relevance to the profile data; and transforming the identified portion of the at least one source video stream into a destination media, wherein the destination media does not comprise a video stream.
- In another embodiment, the invention provides a system for delivering content, having: a server configured to read profile data related to a user, automatically identify a portion of at least one source video stream based on relevance to the profile data, and transform the identified portion of the at least one source video stream into a destination media, wherein the destination media does not comprise a video stream; and an interface to a destination device coupled to the server and configured to receive the destination media.
- The features and advantages of the invention will become apparent from the following drawings and description.
- The invention is described with reference to the accompanying drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Additionally, the left-most digit of a reference number identifies the drawing in which the reference number first appears.
- FIG. 1 demonstrates an exemplary methodology for media processing according to one embodiment of the invention.
- FIG. 2 illustrates an architecture for implementing an exemplary embodiment of the invention.
- FIG. 3 demonstrates a more specific hardware architecture according to another exemplary embodiment of the invention.
- FIG. 4 is an exemplary page view of a page viewed by a user utilizing a client according to one embodiment of the invention.
- FIG. 5 demonstrates a page view showing a content retrieval page according to the exemplary embodiment shown in FIG. 4.
- FIG. 6 illustrates a table of representative destination devices according to one embodiment of the invention.
- FIG. 7 is a flow diagram illustrating transformation of video source data according to one embodiment of the invention.
- FIG. 8A is a system diagram illustrating a functional architecture according to one embodiment of the invention.
- FIG. 8B is a flow diagram illustrating a method for delivering video source content according to one embodiment of the invention.
- FIG. 9A is a system diagram illustrating a functional architecture according to one embodiment of the invention.
- FIG. 9B is a flow diagram illustrating a method for delivering video source content according to one embodiment of the invention.
- While the invention is described below with respect to various exemplary embodiments, the invention is not limited to only those embodiments that are disclosed. Other embodiments can be implemented by those skilled in the art without departing from the spirit and scope of the invention.
- The invention solves the above-discussed problems and provides a personalized, customizable multimedia delivery service that is convenient and easy to use. In one embodiment of the invention, the service works by recording all of the video streams of appropriate source and interest to a target audience. For example, the service may record content from a collection of (or a particular one of) sports or news channels on television. In another example, the service may record content related to training videos, presentations or executive meetings in a business, school or other particularized environment. Recording may occur as the content is originally being broadcast (i.e., live), afterwards from recorded media, or even before the content is broadcast to its intended audience.
- Once the content is captured and recorded, it can be segmented, analyzed and/or classified, and thereafter stored on a platform. For example, the content can be broken down into its component parts, such as video, audio and/or text. The text can include, for example, closed caption text associated with the original transmission, text generated from an audio portion by speech recognition software, or a transcription of the audio portion created before or after the transmission. In the latter case, it becomes possible to utilize the invention in conjunction with executive speeches, conferences, corporate training, business TV, advertising, and many other sources of video which do not typically have available an associated textual basis for searching the video.
- Having obtained or generated the text, it can then be used as a basis for searching the multimedia content. In particular, the text provides the basis for an exemplary methodology for overcoming the above-identified problems associated with searching video in the prior art. That is, if a user wishes to search the stored content for video segments relevant to the President of the United States discussing a particular topic, then the President's name and the associated topic can be searched for within the text associated with the video segments. Whenever the President's name and the associated topic are located, an algorithm can be used to determine which portion of an entire video file actually pertains to the desired content and should therefore be extracted for delivery to the user. Thus, if a video file comprises an entire news broadcast about a number of subjects, the user will receive only those portions of the broadcast, if any, that pertain to the President and the particular topic desired. For example, this could include segments in which the President talks about the topic, or segments in which another talks about the topic and the President's position.
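- For illustration only, the following minimal sketch (not part of the original disclosure) shows how time-stamped caption text might be searched for a set of keywords and expanded into candidate clip windows. The CaptionLine structure, the fixed padding, and the merge rule are assumptions rather than the extraction algorithm actually contemplated by the invention.

```python
# Minimal sketch of keyword search over time-stamped caption text.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class CaptionLine:
    start: float   # seconds into the program
    end: float
    text: str

def find_segments(captions: List[CaptionLine],
                  keywords: List[str],
                  pad: float = 15.0) -> List[Tuple[float, float]]:
    """Return (start, end) windows around caption lines mentioning all keywords."""
    hits = [c for c in captions
            if all(k.lower() in c.text.lower() for k in keywords)]
    segments: List[Tuple[float, float]] = []
    for c in hits:
        window = (max(0.0, c.start - pad), c.end + pad)
        # Merge with the previous window if the two overlap.
        if segments and window[0] <= segments[-1][1]:
            segments[-1] = (segments[-1][0], window[1])
        else:
            segments.append(window)
    return segments

if __name__ == "__main__":
    demo = [CaptionLine(0, 5, "Top stories this hour"),
            CaptionLine(120, 126, "The President spoke about the budget today"),
            CaptionLine(126, 131, "calling the budget deal essential")]
    print(find_segments(demo, ["president", "budget"]))
```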
- Once the pertinent segments of the broadcast have been appropriately extracted, for a given user, they can be stitched together for continuous delivery to that user. In this way, for example, the segments can be streamed to the user as a means of providing an easy-to-use delivery methodology for the user, and as a means of conserving bandwidth. Users can view the delivered multimedia asset in its entirety, skip between the assets, or view only portions of the assets, as they desire. Moreover, a user can have access to portions of the original video file that occurred immediately before or after the extracted segments; for example, the user could choose to watch the entire original video file. Such access can be granted by including a “more” or “complete” button in a user interface.
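- A continuous session of the kind described above might, purely as a sketch, be assembled by ordering the extracted clips before streaming. The Clip structure and the ordering rule below are illustrative assumptions, not a prescribed implementation.

```python
# Illustrative sketch only: stitching extracted clips into one session.
from dataclasses import dataclass
from typing import List

@dataclass
class Clip:
    program_id: str
    start: float
    end: float

def stitch(clips: List[Clip]) -> List[Clip]:
    """Order clips for back-to-back streaming (here: by program, then time)."""
    return sorted(clips, key=lambda c: (c.program_id, c.start))

def total_duration(clips: List[Clip]) -> float:
    return sum(c.end - c.start for c in clips)

if __name__ == "__main__":
    session = stitch([Clip("news-1800", 120.0, 180.0),
                      Clip("news-0600", 840.0, 905.0)])
    print(len(session), "clips,", total_duration(session), "seconds")
```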
- In one embodiment of the invention, a profile of the user is stored which specifies criteria for searching available multimedia assets. The criteria may include, for example, key words and/or phrases, a source(s) of the content, etc. The profile can be set directly by the user via interaction with an appropriately designed graphical user interface (GUI). When such a profile is available, the invention is capable of automatically searching the available assets on a periodic basis, and thereafter extracting, combining and delivering the compiled assets (or segments thereof, regardless of their original source) to the user. In one embodiment, the invention can be utilized such that a service platform assisting in implementing the invention notifies the user whenever new multimedia assets consistent with the user's profile have been prepared. In another embodiment, the invention may automatically deliver multimedia assets in accordance with a user's profile according to a predetermined schedule, such as hourly or daily. Alternatively, the invention may notify the user of the presence of desired video clips, rather than actually deliver those clips.
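- The following sketch, again not drawn from the disclosure itself, illustrates one way a stored profile of topics and keyword strings might be matched against newly indexed caption text; notify_user() and the field names are hypothetical placeholders.

```python
# A minimal sketch, not the actual service implementation.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Profile:
    user_id: str
    topics: Dict[str, List[str]] = field(default_factory=dict)  # topic -> keywords
    sources: List[str] = field(default_factory=list)            # e.g. channels
    delivery: str = "daily"                                      # or "hourly", "on_new"

def match_topics(profile: Profile, caption_text: str) -> List[str]:
    """Return the profile topics whose keywords appear in the caption text."""
    text = caption_text.lower()
    return [topic for topic, words in profile.topics.items()
            if any(w.lower() in text for w in words)]

def notify_user(user_id: str, topic: str) -> None:
    print(f"notify {user_id}: new clips for topic '{topic}'")  # placeholder

if __name__ == "__main__":
    p = Profile("u1", topics={"sports": ["play of the day"]}, sources=["ESPN"])
    for topic in match_topics(p, "...and now for the play of the day..."):
        notify_user(p.user_id, topic)
```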
- The assets can be classified and indexed on-the-fly as they are received. In this way, the assets can be compared against the user's profile virtually in real-time, so that results can be provided to the user (and the user can be notified) whenever they become available. Furthermore, a user can provide criteria for a search or searches beyond those set in the user's profile.
- The identified assets can be delivered to the user in a variety of manners. For example, delivery may occur via cable or satellite television, or directly to a personal computer. The invention can be practiced via a plurality of platforms and networks. For example, the invention may be practiced over the Internet to reach a large consumer audience, or it may be practiced over an Intranet to reach a highly targeted business or industry target.
- In one embodiment, the invention allows video streaming of identified video clips. Video streaming (i.e., allowing the viewing of a video clip as it is downloaded rather than only after it is downloaded, which speeds the viewing process and largely obviates the need for video storage at the user location) is a communications technique that is growing in popularity with the increasing availability of both video players (especially for use with personal computers) and bandwidth to the average consumer. However, no conventional service allows users to accurately and quickly find desired clips for playing, nor does any such service provide a ready means for providers to profit from the video streams that are provided.
- When streaming the identified video clips, users may receive only those video clips identified by a search executed on the user's behalf. However, if a user desires, he or she may also choose to view an entire program from which the clip(s) was extracted. A user may also be allowed to choose some or all of the video clips for long-term storage, whereby the clip(s) can be archived for later use. In one embodiment, the user may store the clips at a local computer, and thereafter make the clips available to other users connected via a peer-to-peer network.
- In another embodiment, the invention allows improved video-on-demand (VOD). VOD is typically defined in the cable/satellite television arena as the ability to request programming at any time and to have VCR-like controls over the content being streamed to the TV. The invention adds value to conventional VOD by allowing the user to demand video more accurately and completely.
- An extension to VOD is personal video recorder (PVR) technology, which allows even more control over TV programs being viewed. Current PVR implementations are offered by TiVo and ReplayTV, and allow users great flexibility in storing programs for later viewing and/or manipulation in viewing (e.g., skipping over commercials in a television program). The invention provides a searching tool for allowing users to find interesting programs, even from a variety of channel sources, to thereafter be recorded and viewed using PVR technology.
- Moreover, whereas conventional PVR records only entire programs based on a user's directions, the invention permits the recording of only those portions of programs that the user desires. In this regard, the invention contemplates recording the desired portions either by doing so directly from the program, or by recording the entire program locally and then utilizing only those portions of the program desired by the user.
- Having described various exemplary embodiments of the invention, it should be noted that the terms “video file,” “video input,” “video,” “video program” or any similar term refers generically to any analog or digital video information, including any content associated therewith, such as multimedia content, closed caption text, etc. The terms “clip,” “video clip,” “electronic clip” or “eClip” should be understood to refer to any subsection of a video program that is selected based on a user search criterion. Also, the terms “extracting,” “parsing,” “removing,” “accessing” or any similar term with respect to a video file refers to the use of a selected portion of the video file. Such use may include literal removal (permanent or temporary) from the context of a larger file, copying of the selected portion for external use, or any other method for utilizing the selected portion.
- Based on the above-described features of the invention, a user may accurately, completely and promptly receive multimedia assets that he or she finds interesting, and may conveniently exploit the received assets in a manner best-suited to that user.
- FIG. 1 demonstrates an exemplary methodology for media processing in a digital video library (DVL) according to one embodiment of the invention. Such media processing is used in implementing the invention at a user level, by capturing, segmenting and classifying multimedia assets for later use and manipulation. It should be noted that the media processing implementation of FIG. 1 and discussion of associated concepts are provided in greater detail in the following documents, which are hereby incorporated herein by reference: Shahraray B., "Scene Change Detection and Content-Based Sampling of Video Sequences," Proc. SPIE 2419, Digital Video Compression: Algorithms and Technologies, pp. 2-13, February 1995; Shahraray B., Cox R., Haskell B., LeCun Y., Rabiner L., "Multimedia Processing for Advanced Communications Services," in Multimedia Communications, F. De Natale and S. Pupolin, Editors, pp. 510-523, Springer-Verlag, 1999; Gibbon D., "Generating Hypermedia Documents from Transcriptions of Television Programs Using Parallel Text Alignment," in Handbook of Internet and Multimedia Systems and Applications, Borko Furht, Editor, CRC Press, 1998; Shahraray B., "Multimedia Information Retrieval Using Pictorial Transcripts," in Handbook of Multimedia Computing, Borko Furht, Editor, CRC Press, 1998; and Huang Q., Liu Z., Rosenberg A., Gibbon D., Shahraray B., "Automated Generation of News Content Hierarchy By Integrating Audio, Video, and Text Information," Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing ICASSP'99, pp. 3025-3028, Phoenix, Ariz., May 1999.
- In FIG. 1, multimedia assets including video 105, associated text captions 110 and corresponding audio portions 115 are imported into the system for processing. Content-based sampling engine 135 receives the video 105 and segments it into individual shots or video frames; this information will be combined with information extracted from the other components of the video program to enable the extraction of individual stories (i.e., video segments related to a particular topic or topics), as will be described. Additionally, this process allows a representative image for a particular story, segment or clip to be selected by engine 160; and second, the process allows boundaries around the story, segment or clip to be set by engine 155.
- A database 120 of linguistic rules is used by linguistic analysis engine 140 to combine the caption information 110 with the segmented video within engines 155 and 160. Model databases are used by acoustic classification engine 145 and program identification engine 150 to provide segmentation/identification of commercials and programs, respectively. Once the multimedia asset(s) have been captured, segmented and classified as described above, they can be stored thereafter in DVL database 165.
- All of the information from engines 135-150 is utilized in engines 155 and 160.
- As referred to above, textual information used to identify clips of interest can be derived, for example, from closed caption text that accompanies most television programs. Real-time closed captioning typically lags behind the audio and video by a variable amount of time from about 1 to 10 seconds. To take this factor into account, the embodiment of FIG. 1 is capable of using speech processing to generate very accurate word timestamps.
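- The FIG. 1 stages described above might be summarized, at a very schematic level, by the following sketch; the stand-in functions only mimic the hand-off between engines 135-160 and do not reflect their actual algorithms.

```python
# Schematic sketch of the FIG. 1 processing order with stand-in functions.
from typing import Dict, List

def sample_shots(video_frames: List[str]) -> List[int]:
    """Stand-in for content-based sampling: assume every 100th frame starts a shot."""
    return list(range(0, len(video_frames), 100))

def align_captions(shots: List[int], captions: List[str]) -> Dict[int, str]:
    """Stand-in linguistic analysis: attach caption text to each shot index."""
    return {shot: captions[i % len(captions)] for i, shot in enumerate(shots)}

def find_story_boundaries(shot_captions: Dict[int, str]) -> List[Dict]:
    """Stand-in boundary engine: one 'story' per shot with a representative image."""
    return [{"start_shot": s, "caption": c, "thumbnail": f"frame_{s}.jpg"}
            for s, c in shot_captions.items()]

def store_in_dvl(stories: List[Dict]) -> None:
    for story in stories:
        print("stored:", story["thumbnail"], "-", story["caption"][:40])

if __name__ == "__main__":
    frames = [f"f{i}" for i in range(300)]
    captions = ["budget talks continue", "weather update", "sports highlights"]
    store_in_dvl(find_story_boundaries(align_captions(sample_shots(frames), captions)))
```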
- When closed caption text is not available, a large vocabulary automatic speech recognition system can be used to generate a transcript of the audio track. While the accuracy of the automatically generated transcripts is below that of closed captions, they provide a reasonable alternative for identifying clips of interest with reduced, but acceptable, accuracy. Alternatively, a parallel text alignment algorithm can be used to import high quality off-line transcripts of the program when they are or become available.
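- As a small illustration of the fallback order just described, a sketch such as the following could select the text source used for indexing; the helper names and return shape are assumptions.

```python
# Sketch of the assumed fallback order: closed captions, then an aligned
# off-line transcript, then automatic speech recognition output.
from typing import Optional, Tuple

def choose_transcript(closed_captions: Optional[str],
                      offline_transcript: Optional[str],
                      asr_transcript: Optional[str]) -> Tuple[str, str]:
    if closed_captions:
        return ("closed_captions", closed_captions)
    if offline_transcript:
        return ("aligned_offline_transcript", offline_transcript)
    if asr_transcript:
        return ("speech_recognition", asr_transcript)
    raise ValueError("no text available for indexing")

if __name__ == "__main__":
    print(choose_transcript(None, None, "recognized words ...")[0])
```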
- FIG. 2 illustrates an architecture for implementing an exemplary embodiment of the invention. It should be noted that the architectural elements discussed below can be deployed to a user and/or provider of multimedia assets in whole or in part, and therefore the elements interface with one another and with external components using standard, conventional interfaces.
- In FIG. 2, Video Capture/Media Analysis component 205 records and compresses broadcast TV programming. Also at component 205, various functions can be performed on the content such as scene change detection, audio analysis, and compression. These video files are shipped to the Video Storage database 210 from which they will be served when the video is streamed to the client 250.
- Associated metadata is shipped to the Metadata database 215. Note that thumbnail images are included as part of the metadata, as well as terms and/or phrases associated with a clip(s) for categorizing the clip(s) within a topical subset. Typically, this video capture/media analysis process need not occur in real time. However, there is no reason why it could not occur in real time if an operator so desires and wishes to devote sufficient computational resources. In any case, it is not necessary to wait until a show is completed before indexing and searching that show.
- Video Server 220 responds to clip requests and makes the video content available to the client 250. For example, the video server 220 may download the video clips in whole or in part, stream the clips (e.g., via MPEG4 ASF or MPEG2) to the client 250 or generate the clip metadata discussed above (such as terms and/or phrases associated with a clip for categorizing the clip within a topical subset).
- DVL Server 225 handles query requests (such as how many clips are available, which shows have clips, etc.) and/or clip content requests (metadata that describes clip content including a "clip pointer" to video content). Thus, it handles multimedia search (such as closed caption text) and determines the start and stop times of the clips, which are designated with "clip pointers," as just mentioned.
- eClips server 230 handles client requests for web pages related to a service for providing eClips. eClips server 230 utilizes Perl Common Gateway Interface (CGI) scripts that the client navigates in order to perform the functions of the eClips service. For example, the scripts deal with login/registration related pages, home page, profile related pages, archive related pages, player pages, and administration related pages. Player scripts can be launched in a separate window. Each CGI request from the client 250 will return HTML with HTML DIVs, JavaScript, and CSS style sheets. The DIVs and CSS style sheets are used to position the various elements of the page. DHTML is used to dynamically load DIV content on the fly (for instance, a list of shows in an instant search pulldown performed by a user).
- In FIG. 2, three databases are shown: Show database 235, which contains information about recorded broadcasts, Profile database 245, which contains personal search terms and/or phrases, and Archive database 240, which contains saved clip information (e.g., entire clips or simply clip pointers).
- eClips Client 250, in one embodiment, includes a JavaScript that each Perl script includes in the HTML that is returned from the eClips server 230. It is through the JavaScript that the client 250 interacts with the DVL server 225 to determine the desired content and through JavaScript that the client initiates the streaming content with the video server 220. The JavaScript also accesses (reads) the Show and Profile XML files in those databases.
- The Video Server 220 may have a separate IP host name, and should support HTTP streaming. The DVL and eClips servers
- In FIG. 2, the key interactions that cause video to be streamed to the client 250 are demonstrated. In a home page view, a user has logged in already and should see a list of topics determined by their profile, as well as the number of clips for each topic. An example of a topic could be "sports" and the keyword string associated with this topic could be football, baseball, hockey. The keyword string is used to search the CC text (in this case, clips that have any of these terms will be valid).
- When the home page is loaded, JavaScript will send a CGI query to
DVL server 225, which generates an XML response. The XML is parsed into JavaScript variables on the client using the XML document object model (DOM). The CGI query and XML response are implemented as part of the DVL system and act as a layer above an Index Server, which, as part of the DVL server 225, performs text indexing of the video clips (as discussed above) that allows the user to locate a desired clip. The XML response will include the number of clips found for each topic. It is with these query responses that the home page knows which topics have hits and can activate the links to play the content.
- These JavaScript links, when clicked, can launch the player page in a separate window. When the player page is loaded, essentially the same JavaScript can be used to recalculate the number of clips for each topic. In principle, this calculation could be performed only once and the result passed on to the player script thereafter. The JavaScript may also run a query to get the list of shows with clips for a particular topic. The JavaScript then loops through all the shows with hits and queries the DVL server via the separate CGI script to get the clip information needed to play the clip. This information is also returned via XML and parsed via the JavaScript. The JavaScript loads various DIVs that depend on this information, such as the hit search term found in the CC text, the CC text itself, and a thumbnail. Finally, the player page JavaScript starts the media player with the first clip using a pointer (start time) to the video. It should be noted that, in one embodiment of the invention, the just-described process is almost completely automated, so that dynamic clip extraction occurs when a clip is selected, and a show automatically starts and will play completely through if not interrupted by the user.
- In the architecture shown in FIG. 2, eClips client 250 may reside on, for example, a user's home or business computer, a personal digital assistant (PDA), or a set-top box on a user's television set. Client 250 interacts with eClips server 230 as discussed above to provide the user with an interface for viewing and utilizing the video clips. Client 250 can be written to contain, for example, a JavaScript object that contains profile results (eClips object). A user using eClips client 250 running on a PC may access stored clips through a network, such as the Internet or a locally defined Intranet.
- In one embodiment, the user defines a search criterion, either through an "instant search" feature or within a user profile. When multiple clips are found matching the user search, the clips can be stitched together and streamed to the user as one continuous program. In another embodiment, the eClips server periodically searches for clips matching a given user's profile, and makes the clips available to the user, perhaps by notifying the user via email of the availability of the clips.
- The architecture shown in FIG. 2 allows for video to be stored and displayed in several formats including MPEG2 (e.g., for digital television and video on demand) and MPEG4 (e.g., for streaming video on the Internet). As mentioned above, the video may be stored for later use by the user; in particular, a user may archive some or all of the received video and thereafter permit searching and uploading of the video from storage by other members of a peer-to-peer computer network.
- FIG. 3 demonstrates a more specific hardware architecture according to another exemplary embodiment of the invention. In FIG. 3, video feeds 310 are received through various sources (such as television channels CNN, ESPN and CNBC) at Video Capture/Media Analysis component 205 within Video Distribution Center 305. Component 205 receives the feeds and forwards captured/analyzed results to video server 220 and/or DVL/eClips server 225/230 within cable Headend 325. In FIG. 3, video analysis portion 315 is illustrated within component 205, although it should be understood from FIG. 2 and the associated discussion above that component 205 may perform other media analysis such as audio analysis. The DVL/eClips servers 225/230 operate as described above in conjunction with FIG. 2 to deliver, using, for example, Hybrid Fiber Coaxial (HFC) connections, all or part of the video feeds to routing hub 330, and then through fiber node 340 to cable modem 350 located within user home 355. Additional marketing and advertising (such as a commercial placed between every third clip stitched together) could be tied into the video stream in one embodiment of the invention at the Headend from providers 320 such as Double Click.
- Within user home 355 the feed is received at cable modem 350 via high speed data line (HSD) to a PC 360 running eClips client 250. Alternatively, the feed could be sent to Set top box 370 atop TV 380, where Set top box 370 runs eClips client 250. In the example where the video clips are received via cable modem 350, the service can be streamed as high speed data (HSD) through a cable modem as MPEG4 video. When the video is received via Set top box 370, it can be delivered as MPEG2 over video on demand (VOD) channels that could be set up in advance for a service providing the invention.
- FIG. 4 is an exemplary page view of a page viewed by a user utilizing an eClips client according to one embodiment of the invention. In FIG. 4, for example, the user might see page view 400 just after logging in to a system implementing the invention. In page view 400, section 405 demonstrates the results of a profile search performed for the user on a given day, or over some other pre-defined period, according to the previously stored profile of that user. In section 405, clips are listed both by topic and by number of clips related to that topic. In section 405, the user therefore has the option of viewing one or more of the clips related to a particular topic.
- Section 405 also identifies a source for the criteria used to select the various topical clips. More specifically, on a profile page, a user can select default sources (shows) which will be searched based on the user's profile; this is referred to as a "Main" list, and would restrict any profile topic that has the Main option to search only those shows selected on the profile page. On a topic editor page, where a user is allowed to add or modify topics for searching, the user can specify this Main list, or can make Custom selections that are only valid for a particular search topic. In section 405, the user has selected the latter option, and so a "source" is shown as Custom.
- In section 410, the user additionally has the option of entering new search terms and/or phrases not related to his or her current profile, whereby the invention searches a clips database via the DVL server as described above with respect to FIG. 2. Section 415 indicates the media sources which will be searched for the terms or phrases entered in section 410.
- Also, in page view 400, button 420, "Play all clips," allows a user to view all currently available clips with one click. The user can add a new topic using button 425. The user can return to a home page by clicking on button 430 (although this option is only valid when the user is on a page different from the home page 400 itself), access his profile via button 435 and access an archive of previously saved clips via button 440. Finally, a user can log out of the service using button 445.
- FIG. 5 demonstrates a page view 500 showing a content retrieval page according to the exemplary embodiment shown in FIG. 4. In section 505, still frames of the beginning of each clip (i.e., thumbnails) within a topic can be viewed by the user. Section 505 can be controlled by section 515, which allows the user to select a topic of clips to be shown, as well as section 520, which allows a user to select a portion of the clips from that topic that will be played. With buttons
- When one or more of these clips is chosen for viewing by the user, that clip is shown in section 510. Section 510 can be controlled by buttons 525-550, which allow a user to skip to a previous clip with button 525, stop the clip with button 530, play the clip with button 535, skip the clip with button 540, switch to a new topic of clips with button 545 or view footage after the selected clip(s) with button 550. Note that section 510 may also include advertisements 555, and may display a time remaining for a currently playing clip, a source of the clip, and a date and time the clip was originally broadcast.
- In one exemplary embodiment of the invention, page 500 will play all of the clips currently available in a predetermined order (e.g., reverse chronological order, by source of content, etc.) if the user does not choose a specific topic or clip. Button 570 is activated when a user wants to view the clip(s) available; i.e., as shown in view 500. Button 575 allows the user to send (e.g., email) the clip(s) to another user, and button 580 allows the user to save the clip(s) to an archive (i.e., the archive accessed by button 440 in FIG. 4).
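- The CGI query and XML response exchange described above in connection with FIG. 2 might, for illustration only, resemble the following sketch; the XML shape and field names are invented for the example and are not the actual eClips protocol.

```python
# Not the actual eClips protocol: a simplified, invented response format.
import xml.etree.ElementTree as ET

SAMPLE_RESPONSE = """
<response>
  <topic name="sports" hits="2">
    <clip show="news-1800" start="120.0" stop="180.0" thumb="t1.jpg"/>
    <clip show="game-2000" start="45.5" stop="90.0" thumb="t2.jpg"/>
  </topic>
</response>
"""

def parse_clip_list(xml_text: str):
    """Return (topic name, hit count, list of clip pointers) from a response."""
    root = ET.fromstring(xml_text)
    topic = root.find("topic")
    clips = [{"show": c.get("show"),
              "start": float(c.get("start")),
              "stop": float(c.get("stop")),
              "thumbnail": c.get("thumb")} for c in topic.findall("clip")]
    return topic.get("name"), int(topic.get("hits")), clips

if __name__ == "__main__":
    name, hits, clips = parse_clip_list(SAMPLE_RESPONSE)
    print(name, hits, clips[0]["start"])
```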
- Generally speaking, because the invention can capture content from nearly any multimedia source and then use standard streaming media to deliver the appropriate associated clips, it is nearly limitless in the markets and industries that it can support.
- As a practical matter, the invention can be packaged to address different market segments. Therefore, it should be assumed that the target markets and applications supported could fall into, for example, any or all of the Consumer, Business-to-Consumer or Business-to-Business Marketplaces. The following discussion summarizes some exemplary application categories.
- First, as a consumer offering, the invention can be provided as an extension to standard television programming. In this model, an ISP, Cable Programming Provider, Web Portal Provider, etc., may allow consumers to sign up for this service, or the set of features provided by the invention can be provided as a premium subscription.
- In the consumer service model, a consumer would enter a set of keywords and/or phrases in the profile. In addition, as part of the preferences selected in the profile the user may determine that only specific content sources should be monitored. As the user profile is created or changed it would be updated in the user profile database. As video content is captured in the system, the user profile database is matched against the closed caption text. As an example, a consumer may be interested in sports but only want to see the specific “play of the day.” In this scenario, the consumer would enter the key words “play of the day” and then identify in the profile the specific content sources (channels or programs) that should be recorded/analyzed by the invention. For example, the consumer could choose channels that play sports games or report on sports news. When the consumer returns from work that evening, a site or channel for accessing the invention would be accessed. This consumer would then see all of the clips of programs that matched the keywords “play of the day,” meaning that this consumer would see in one session all of the content and clips matching that set of words.
- As another example, in a Business-to-Consumer offering, the invention can be provided as an extension to standard television programming. In this case, both the programming and its sponsorship would be different from the consumer model above. For example, a corporate sponsor or numerous corporate sponsors may offer specific types of content, or may offer an assemblage of content overlaid with advertising sponsorship. The sponsorship would be evident in the advertising that would be embedded in the player or in the content, since the design of the invention is modular in design and allows for customization.
- In the Business-to-Consumer service model, a consumer would enter a set of keywords in the profile. As the user profile is created or changed it would be updated in the user profile database. Because this model and the content provided would be underwritten by corporate sponsorship, the content provided may be limited to a proprietary set of content. As an example, if CNN were the sponsor of the service, all of the content provided may be limited to CNN's own broadcasts. In addition, it may be very evident to the consumer that the service is brought to them by CNN in that the CNN logo may be embedded in the user interface, or may be embedded in the content itself.
- Next, as a Business-to-Business offering, the invention can be used in intra-company applications as well as extra-company applications. The applications supported include, as just a few examples: Business TV, Advertising, Executive Announcements, Financial News, Training, Competitive Information Services, Industry Conferences, etc. In essence, the invention can be used as a tool to assist employees in retrieving and viewing specific portions of content on demand.
- In this Business-to-Business service model, a user would enter a set of keywords in the profile that would be updated in the user profile database. In this case, the content captured will be dependent upon the business audience using the service.
- In an intra-business application, the user may wish to combine sources from within the business and sources outside of the business. As an example a user may wish to see all clips dealing with the category “Virtual Private Networks.” In this example, a business may have planned a new advertising campaign talking about “Virtual Private Networks” and have an advertisement available to its internal personnel. At the same time, there may be an internal training class that has been recorded and is available internally in which a section talks about “Virtual Private Networks.” Again, this could be another content option captured by the invention. Also, one of this company's competitors may have provided a talk at an industry conference the day before about their solution for the “Virtual Private Network” area. As with the other content options, this too could be captured and available as a content option through the invention. Therefore, when our user begins a session using the invention and looks under the term “Virtual Private Networks,” there could be numerous clips available from multiple sources (internal and external) to provide this user with a complete multimedia view of “Virtual Private Networks”.
- As an extra-business tool, the invention can provide businesses, their suppliers, their best customers, and all other members of communities of interests with specific targeted content clips that strengthen the relationships. These may include (but not be limited to) product details, new announcements, public relations messages, etc.
- As further examples of applications of the invention, the following represent industry applications which may benefit from use of the invention.
- In the financial industry, financial information can be available for both professionals and potential clients to receive late-breaking information on stocks, companies and the global markets. The information can be from a variety of sources such as Financial News Network, Bloomberg, CNN, etc. and allow users to identify key areas of interest and to continually be up to date.
- In the advertising/announcements industry, advertisers would be able to target their ads to consumers based on peoples' preferences as expressed in their profiles. This is potentially a win/win situation because people would not be getting any more ads but they would be seeing more things that interest them. Advertisers could charge more for this targeted approach and thereby pay for any costs associated with the invention.
- Similarly, large companies run TV advertisements for a multitude of products, services, target markets, etc. These companies could benefit by housing these commercials on an on-line database that can be accessible to their marketing staff, the advertising agencies, and clients interested in seeing particular commercials that used specific words or product names. The invention can then allow these commercials to be easily searched and accessed.
- In the entertainment industry, the movie industry can use the invention to easily scan through archives of old and new movie footage that can be digitized and stored in a central repository. Sports highlights can be made available for particular games or events. Networks could maintain a library of indexed TV shows (e.g., PBS) where users can search for a particular episode/topic.
- In the travel industry, searches can be done on new information in the travel industry such as airlines, causes of delays, etc. In addition, the invention can be used to provide key clips from specific resorts and other potential vacation destinations.
- In the distance learning/education industry, a large variety of courses could be stored on-line. In many circumstances, a user may want to only see the salient points on a specific topic of interest. The invention can then play a key role in providing support to the user for access and retrieval of the key needed information.
- For conferences and trade events, the invention can be an information dissemination tool for finding the latest information quickly when videos are captured of talks and demonstrations in key events.
- One embodiment of the invention relates to current bandwidth shortages and limitations which sometimes limit the-prompt and effective provisioning of streaming video and other media. For example, Internet users, particularly home Internet users, often do not have access to high-speed data rates such as those found in cable and/or fiber-optic transmissions. As a result, such users often experience a significant delay between the time a video stream is selected and the time the stream actually begins to play. This delay time may be additionally and/or further exacerbated by the need to buffer an initial portion of the video stream locally, so that the video stream will play smoothly once it does begin to play. These shortcomings of conventional streaming techniques may therefore also affect the provisioning of eClips servers according to the invention, as has already been described.
- In order to alleviate the need for a user of the eClips service or other media streaming service to wait in front of a blank screen while the media prepares to play, the invention provides relevant information to the user during such a potential wait time, thereby providing entertainment, advertising or other services and reducing the apparent wait time until playing begins. For example, with respect to the eClips service described above, a user receiving a customized media presentation might have information relevant to the subject matter of the presentation automatically downloaded from a DVL/eClips server during an off-time (such as late at night). The relevant information can be determined based on, for example, a user profile set up as part of the eClips service and in a manner similar to that described above for formulating the customized media presentation itself. The information might also be information previously obtained and stored locally by the user for viewing which has simply not yet been viewed by the user. This way, the information can be made available on the user's local hard drive, and can therefore be played immediately upon selection of a particular media stream, during the time when the media stream is being delivered and/or buffered for viewing. While the information is being displayed, the viewer may choose to see the information in its entirety before viewing the particular video stream selected for viewing. In another embodiment, however, the user may discontinue viewing the local information as soon as the primary stream becomes available.
- Relevant information that might be embedded into a media stream being delivered as just described might include, for example, information about the subject matter of the stream or information related thereto, such as advertising for related products or services. Additional possibilities for embedding into the media stream include graphics, games, text, pictures and other types of known media assets. The invention might operate through the use of multiple media players, perhaps displaying only one instance of a particular video stream. For example, if the particular video stream is selected for viewing on a certain media player, the invention might automatically (or optionally) open a second media player for playing the locally stored information to be displayed prior to the playing of the primary video stream. Moreover, the invention might display multiple pieces of relevant information, so that the user may choose what to view during the wait time for the primary stream. For example, a number of video thumbscreens or video shots might be displayed from which the user can choose for viewing. Software at the user's local system may be operable to set forth a criterion and/or timing according to which information to be embedded is located, stored and/or displayed. Alternatively, this functionality may be enabled at a server location, for example, using the DVL/eClips server discussed above.
- In another embodiment, the invention embeds locally stored media into a video or other media stream to be presented to the viewer, so that the user avoids any wait time in viewing the selected stream that may occur due to bandwidth shortages or other system considerations. The locally stored media may be relevant to the content of the primary stream, so that the user does not have to wait an undue amount of time to view information about a desired topic.
- Although large multimedia files often must be delivered via broadband communication links, the fact that the invention extracts exactly what the user is interested in makes it possible to deliver downloadable content to portable devices efficiently. The content can include video clips as discussed primarily above, or can be limited to still frames and text (or just text) if bandwidth/storage does not permit full motion video with audio. Hybrid schemes are also contemplated in which some of the content includes video, but other (e.g., perhaps older, or repeated similar stories from multiple sources) clips only include audio, or include only still images and/or text. In this regard, multimedia analysis techniques can be used to determine if stories are about the same topic, or contain the same video material. Because the invention is capable of using standard access and delivery methods, it can be employed in virtually any home or industry application where delivery of multimedia assets is desired.
- FIG. 7 is a flow diagram illustrating transformation of
video source data 710 according to one embodiment of the invention. FIG. 7 illustrates several alternative transformation paths to deliver content ofvideo source 710 todestination devices 775. As used herein,video source data 710 may be live streaming video, delayed streaming video, or stored video data. -
- Sampling function 715 processes video source data 710 to produce static images 720. In an embodiment where video source data 710 is streaming video, capture process 723 produces a video file 725 from video source data 710. Static images 720 or video files 725 are then delivered to destination devices 775.
- FIG. 7 illustrates that demultiplexing process 745 processes video source file 710 to obtain or produce audio stream 750. The flowchart shows that there are at least four options for the delivery of audio stream 750. First, audio stream 750 can be delivered to destination devices 775 directly. Second, capture process 753 can create sound file 755 from audio stream 750 for eventual delivery to destination devices 775 via link 780. Third, speech recognition process 760 can process audio stream 750 to produce text 765. Text 765 can then be delivered to destination devices 775. Fourth, process 768 can further process text 765 to provide for correction of errors generated by the speech recognition process 760, or may, either in the alternative or in combination, translate text 765 to another language to produce processed text 770. Processed text 770 can then be delivered to destination devices 775.
- In addition, FIG. 7 illustrates that extraction process 728 generates Closed Caption Text (CCT) 730 from video source data 710. Process 733 corrects for errors in CCT 730, provides language translation, and/or performs other translations to generate processed CCT 735. Processed CCT 735 may be delivered directly to destination devices 775. In the alternative, text-to-speech process 740 operates on either CCT 730 or processed CCT 735 to produce audio stream 750, with at least all transformation paths available as described above with regard to audio stream 750 for eventual delivery to destination devices 775.
- Destination devices 775 may be or include, for example, any of the representative devices referred to in FIG. 6 and described in the background section of this specification. A user's choice of destination device will affect the manner in which the user will select and navigate delivered content.
- A user may use a single destination device, or the user may use multiple devices in combination to receive delivered content. For instance, a particular user may utilize a facsimile to receive an image 720 and a wireless telephone to receive an audio stream 750. Where multiple destination devices are used, and where the media delivered to the multiple destination devices are related, the delivered content may be associated using tags or other identifiers that allow a user to align the content received on multiple devices. For example, audio stream 750 received on a wireless telephone may be associated to images 720 sent to a facsimile with reference to a page number of the facsimile transmission.
- As a general matter, not all transformations described with reference to FIG. 7 will need to be performed in delivering content from a source video to a user. Content may be tailored to a target destination device 775 according to known alerting or notification utilities that communicate a class of destination device to a content provider at or near a time of delivery. In the alternative, or in combination, content may be tailored according to a predetermined user profile.
- Transformed content may be delivered to destination devices 775 according to alternative timing schemes. For example, CCT 730, processed CCT 735, audio stream 750, text 765, and processed text 770 may be delivered in near real-time (e.g., where content delivery is delayed only by processing and communication overhead). In other embodiments, transformed content is stored for later delivery. Moreover, the timing for delivery of stored content may be according to a predetermined schedule, such as a set time of day. In addition, or in the alternative, content can be delivered according to a set interval of time, such as every hour or other fixed period of time. The predetermined schedule may be specified in a user's profile data. In addition, or in the alternative, the delivery of near real-time and/or stored content may be event-triggered. For instance, a user profile may specify that breaking headline news, special reports, and/or severe weather warnings trigger near real-time delivery of content separate from, or together with, related stored content.
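- As an illustration of how one of the FIG. 7 transformation paths might be chosen for a given destination media type, consider the following sketch; the transform functions are placeholders standing in for processes 715-768, not implementations of them.

```python
# Sketch only: path selection in the spirit of FIG. 7, with placeholder transforms.
def to_images(video): return ["frame.jpg"]          # cf. sampling 715
def to_audio(video): return "audio.pcm"             # cf. demultiplexing 745
def to_sound_file(audio): return "clip.wav"         # cf. capture 753
def to_text_from_audio(audio): return "transcript"  # cf. speech recognition 760
def to_cct(video): return "closed caption text"     # cf. extraction 728

def transform_for(device_media: str, video) -> object:
    """Choose a FIG. 7-style path based on the media the destination accepts."""
    if device_media == "audio":
        return to_audio(video)
    if device_media == "sound_file":
        return to_sound_file(to_audio(video))
    if device_media == "image":
        return to_images(video)
    if device_media == "text":
        return to_cct(video) or to_text_from_audio(to_audio(video))
    raise ValueError(f"unsupported destination media: {device_media}")

if __name__ == "__main__":
    print(transform_for("text", video=b"..."))
```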
- Sample process 715, demultiplexing process 745, extraction process 728, text-to-speech process 740, speech recognition process 760, capture processes 723 and 753, and processes 733 and 768 may be performed on a server or other network-based host computer having access to video source data 710. Specific embodiments for delivering audio stream 750 or sound file 755 to destination devices 775 are provided with reference to FIGS. 8A-9B below.
- FIG. 8A is a system diagram illustrating a functional architecture according to one embodiment of the invention. As shown therein, server 810 is coupled to user profile data 825 and is further coupled to voice mailbox 815 via automatic load path 830. Although the functions of server 810 and voice mailbox 815 are distinct, persons skilled in the art will appreciate that server 810 and voice mailbox 815 may optionally be hosted on the same computer. One or more destination devices 820 are coupled to the voice mailbox 815 via data retrieval path 835, and may optionally be coupled to server 810 via link 840.
- The user profile data 825 may be or include, for example, user identifiers, topics of interest to the user, and other information. The user profile data 825 is loaded and periodically updated from destination device 820 or another device to a database accessible by server 810.
- In one embodiment, the system of FIG. 8A is configured to perform the functions described with reference to FIG. 8B.
- FIG. 8B is a flow diagram illustrating a method for delivering video source content according to one embodiment of the invention. As shown therein,
server 810 reads previously stored profile data 825 in step 850. Server 810 then identifies information in video source data 710 relevant to topics in the user profile data 825 in step 855. Server 810 transforms the relevant video source data 710 into an audio stream 750 or a sound file 755 in step 860. As described above, demultiplexing process 745 can create audio stream 750 from video source data 710, and capture process 753 can create sound file 755 from the audio stream 750. In one embodiment, server 810 streams audio stream 750 to voice mailbox 815 in step 865. In an alternative embodiment, server 810 loads sound file 755 to voice mailbox 815 in step 865. Server 810 plays the transformed information (i.e., audio stream 750 or sound file 755) from voice mailbox 815 in step 870.
- In an alternative embodiment, delivery of information to destination device 820 is under the local control of voice mailbox 815. Where destination device 820 is a wireless phone, voice mailbox 815 may receive Dual-Tone Multi-Frequency (DTMF) signals and/or voice commands to effect the delivery of audio stream 750 or sound file 755 according to user input.
- Where multiple media formats are delivered, server 810 may also send media to destination device 820 via link 840. For example, where destination device 820 is a smart phone, a user may simultaneously receive an audio stream 750 via voice mailbox 815 and an image 720 via link 840.
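- The FIG. 8B sequence (steps 850-870) might be outlined, purely as a sketch with invented helper names, as follows.

```python
# Sketch of the FIG. 8B flow; the helpers below are placeholders, not the
# actual processing performed by server 810 and voice mailbox 815.
def read_profile(user_id): return {"topics": ["weather"]}                 # step 850
def identify_relevant(video_source, profile): return ["clip-1"]           # step 855
def transform_to_audio(clips): return b"PCM-ENCODED-AUDIO"                # step 860
def load_voice_mailbox(mailbox_id, audio): print("loaded", mailbox_id)    # step 865
def play_from_mailbox(mailbox_id): print("playing from", mailbox_id)      # step 870

def deliver_via_voicemail(user_id, mailbox_id, video_source):
    profile = read_profile(user_id)
    clips = identify_relevant(video_source, profile)
    audio = transform_to_audio(clips)
    load_voice_mailbox(mailbox_id, audio)
    play_from_mailbox(mailbox_id)

if __name__ == "__main__":
    deliver_via_voicemail("u1", "mb-555-0100", video_source="news feed")
```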
- FIG. 9A is a system diagram illustrating a functional architecture according to one embodiment of the invention. The architecture in FIG. 9A is an alternative approach to the architecture in FIG. 8A for delivery of audio stream 750 and/or sound file 755. As shown in FIG. 9A, a Web server 910, having access to user profile data 905, and including Voice Extensible Mark-up Language (VXML) generator 915, is coupled to VXML gateway 920. VXML gateway 920 includes Interactive Voice Response (IVR) system 925 and is coupled to client 930. Client 930 may be a wired, wireless, or smart telephone, for example, having DTMF and speech input capability 935. The description of profile data 825 above is applicable to profile data 905. In one embodiment, the system of FIG. 9A is configured to perform the process shown in FIG. 9B.
- FIG. 9B is a flow diagram illustrating a method for delivering video source content according to one embodiment of the invention. As shown therein, Web server 910 reads previously stored profile data 905 in step 940. Web server 910 then identifies information in video source data 710 relevant to the user profile data 905 in step 945, and transforms the relevant information in step 950. Server 910 optionally stores the transformed information in step 955.
- Gateway 920 receives a call from client 930, for example at a toll free number, in step 960. The VXML gateway 920 has a table that correlates the toll free number with a particular Uniform Resource Locator (URL) related to a particular application. VXML gateway 920 fetches the corresponding URL on Web Server 910 in step 965, and Web server 910 generates VXML based on code derived from the corresponding URL in step 970. Accordingly, when the VXML gateway 920 runs IVR system 925 to deliver the transformed information to client 930 in step 980, the application, greeting, and content of the IVR session may be tailored according to the incoming call in step 960.
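- The number-to-URL lookup and VXML generation described above might be sketched as follows; the mapping, the URL, and the generated document are illustrative assumptions rather than the output of generator 915.

```python
# Sketch only: a dialed-number table and a hand-written VoiceXML document.
NUMBER_TO_URL = {
    "18005550123": "/apps/eclips-briefing",   # hypothetical mapping
}

def url_for_call(dialed_number: str) -> str:
    return NUMBER_TO_URL.get(dialed_number, "/apps/default-greeting")

def generate_vxml(greeting: str, audio_url: str) -> str:
    return (
        '<?xml version="1.0"?>\n'
        '<vxml version="2.0">\n'
        '  <form>\n'
        f'    <block><prompt>{greeting}</prompt>'
        f'<audio src="{audio_url}"/></block>\n'
        '  </form>\n'
        '</vxml>\n'
    )

if __name__ == "__main__":
    app = url_for_call("18005550123")
    print(generate_vxml("Here are the clips matching your profile.",
                        f"https://example.invalid{app}/briefing.wav"))
```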
- In one embodiment, step 980 is, or includes, the delivery of an audio stream 750; in another embodiment, step 980 is, or includes, delivery of a sound file 755 to be played by client 930.
- In conclusion, a service for providing personalized multimedia assets, such as electronic clips from video programs, based upon personal profiles has been presented. In one embodiment, it uses text to ascertain the appropriate clips to extract and then assembles these clips into a single session. Thus, users only see the specific portions of videos that they desire. Therefore, users do not have to undertake the arduous task of manually finding desired video segments, and further do not have to manually select the specified videos one at a time. Rather, the invention generates all of the desired content automatically. Moreover, one embodiment of the invention provides an improved system and method for delivering video content to destination devices not adapted to receive streaming video.
- While this invention has been described in various explanatory embodiments, other embodiments and variations can be effected by a person of ordinary skill in the art without departing from the scope of the invention.
Claims (38)
1. A method for delivering content, comprising:
reading profile data related to a user;
automatically identifying a portion of at least one source video stream based on relevance to the profile data; and
transforming the identified portion of the at least one source video stream into a destination media, wherein the destination media does not comprise a video stream.
2. The method of claim 1, wherein the profile data is updated.
3. The method of claim 1, wherein the profile data comprises topical information.
4. The method of claim 1, wherein automatically identifying comprises determining a start of the identified portion of the at least one source video stream and an end of the identified portion of the at least one source video stream.
5. The method of claim 1, wherein transforming comprises sampling the identified portion of the at least one source video stream, and the destination media comprises at least one image.
6. The method of claim 1, wherein transforming comprises extracting information from the identified portion of the at least one source video stream to yield closed caption text.
7. The method of claim 6, wherein transforming further comprises processing the closed caption text for at least one of error correction and language translation.
8. The method of claim 6, wherein transforming further comprises a text-to-speech conversion of the closed caption text into an audio stream.
9. The method of claim 8, wherein transforming comprises storing the audio stream as a sound file.
10. The method of claim 1, wherein transforming comprises demultiplexing the at least one source video stream to yield an audio stream.
11. The method of claim 10, wherein transforming further comprises speech recognition processing of the audio stream to yield a text file.
12. The method of claim 11, wherein transforming further comprises processing the text file for at least one of error correction and language translation.
13. The method of claim 11, wherein transforming is tailored to a class of destination device.
14. The method of claim 1, further comprising delivering the destination media to at least one destination device.
15. The method of claim 14, wherein delivering the destination media comprises running an interactive voice response system in response to instructions from the user.
16. The method of claim 15, wherein delivering the destination media comprises loading a voice mailbox.
17. The method of claim 16, wherein delivering the destination media comprises playing the destination media to the user in response to at least one of DTMF and voice instruction.
18. The method of claim 15, wherein delivering the destination media includes generating VXML and storing the VXML on a server.
19. The method of claim 18, wherein delivering the destination media further includes receiving a call from the destination device in a VXML gateway.
20. The method of claim 19, wherein delivering the destination media further includes fetching a URL from the server and receiving the generated VXML in the VXML gateway.
21. The method of claim 14, wherein the at least one destination device comprises at least one of a wired telephone, a wireless telephone, a smart phone, a facsimile machine, a personal digital assistant, a pager, a radio, and an electronic picture frame.
22. The method of claim 14, wherein the at least one destination media is delivered in near real-time.
23. The method of claim 14, wherein delivering the destination media comprises storing the destination media in a server prior to delivering the destination media to the at least one destination device.
24. The method of claim 14, wherein delivering the destination media comprises storing the destination media to the at least one destination device.
25. The method of claim 14, wherein delivering the destination media is performed according to at least one of a predetermined time and a predetermined time interval.
26. The method of claim 14, wherein delivering the destination media is event-triggered.
27. A method for delivering content, comprising:
reading profile data related to a user;
step for automatically identifying a portion of at least one source video stream based on relevance to the profile data; and
step for transforming the identified portion of the at least one source video stream into a destination media, wherein the destination media does not comprise a video stream.
28. The method of claim 27, further comprising step for delivering the destination media to at least one destination device.
29. A system for delivering content, comprising:
means for reading profile data related to a user;
means for automatically identifying a portion of at least one source video stream based on relevance to the profile data; and
means for transforming the identified portion of the at least one source video stream into a destination media, wherein the destination media does not comprise a video stream.
30. The system of claim 29, further comprising means for delivering the destination media to at least one destination device.
31. A system for delivering content, comprising:
a server configured to read profile data related to a user, automatically identify a portion of at least one source video stream based on relevance to the profile data, and transform the identified portion of the at least one source video stream into a destination media, wherein the destination media does not comprise a video stream; and
an interface to a destination device coupled to the server and configured to receive the destination media.
32. A system for delivering content, comprising:
a server configured to read profile data related to a user, automatically identify a portion of at least one source video stream based on relevance to the profile data, and transform the identified portion of the at least one source video stream into an audio file; and
an interface to a voice mailbox, wherein the voice mailbox is configured to receive the audio file from the server and play the audio file in response to at least one of DTMF and voice instruction from the user.
33. A system for delivering content, comprising:
a server configured to read profile data related to a user, automatically identify a portion of at least one source video stream based on relevance to the profile data, transform the identified portion of the at least one source video stream into an audio file, store the audio file, and generate VXML related to the stored audio file; and
a VXML gateway, wherein the VXML gateway is coupled to the server and configured to receive the generated VXML.
34. The system of claim 33, further comprising an interface to at least one destination device coupled to the VXML gateway, and wherein the VXML gateway is configured to deliver the audio file to the at least one destination device via an interactive voice response system.
35. A system for delivering content, comprising:
a server configured to read profile data related to a user, automatically identify a portion of at least one source video stream based on relevance to the profile data, transform the identified portion of the at least one source video stream into an audio stream, store the audio stream, and generate VXML related to the stored audio stream; and
a VXML gateway, wherein the VXML gateway is coupled to the server and configured to receive the generated VXML.
36. The system of claim 35, further comprising an interface to at least one destination device coupled to the VXML gateway, and wherein the VXML gateway is configured to deliver the audio stream to the at least one destination device via an interactive voice response system.
37. A method for conveying information derived from a source video stream to a user comprising:
searching for at least one portion of the source video stream based on preferences of the user;
selecting at least one delivery medium based on at least one of the user's destination devices; and
transforming the at least one portion of the source video stream into the at least one delivery medium.
38. The method of claim 37, further comprising transmitting the at least one transformed portion of the source video stream to the at least one of the user's destination devices.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/303,045 US20030120748A1 (en) | 2001-04-06 | 2002-11-25 | Alternate delivery mechanisms of customized video streaming content to devices not meant for receiving video |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US28220401P | 2001-04-06 | 2001-04-06 | |
US29643601P | 2001-06-06 | 2001-06-06 | |
US10/034,679 US20030163815A1 (en) | 2001-04-06 | 2001-12-28 | Method and system for personalized multimedia delivery service |
US10/303,045 US20030120748A1 (en) | 2001-04-06 | 2002-11-25 | Alternate delivery mechanisms of customized video streaming content to devices not meant for receiving video |
Related Parent Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/034,679 Continuation-In-Part US20030163815A1 (en) | 2001-04-06 | 2001-12-28 | Method and system for personalized multimedia delivery service |
Publications (1)
Publication Number | Publication Date |
---|---|
US20030120748A1 true US20030120748A1 (en) | 2003-06-26 |
Family
ID=27364714
Family Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/034,679 Abandoned US20030163815A1 (en) | 2001-04-06 | 2001-12-28 | Method and system for personalized multimedia delivery service |
US10/163,091 Expired - Fee Related US8151298B2 (en) | 2001-04-06 | 2002-06-06 | Method and system for embedding information into streaming media |
US10/303,045 Abandoned US20030120748A1 (en) | 2001-04-06 | 2002-11-25 | Alternate delivery mechanisms of customized video streaming content to devices not meant for receiving video |
Family Applications Before (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/034,679 Abandoned US20030163815A1 (en) | 2001-04-06 | 2001-12-28 | Method and system for personalized multimedia delivery service |
US10/163,091 Expired - Fee Related US8151298B2 (en) | 2001-04-06 | 2002-06-06 | Method and system for embedding information into streaming media |
Country Status (2)
Country | Link |
---|---|
US (3) | US20030163815A1 (en) |
CA (1) | CA2380898A1 (en) |
Cited By (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030005076A1 (en) * | 2001-06-28 | 2003-01-02 | Bellsouth Intellectual Property Corporation | Simultaneous visual and telephonic access to interactive information delivery |
US20030101267A1 (en) * | 2001-11-28 | 2003-05-29 | Thompson Mark R. | Peer-to-peer caching network |
US20030126126A1 (en) * | 2001-12-29 | 2003-07-03 | Lee Jin Soo | Apparatus and method for searching multimedia object |
US20030202504A1 (en) * | 2002-04-30 | 2003-10-30 | Avaya Technology Corp. | Method of implementing a VXML application into an IP device and an IP device having VXML capability |
US20040177317A1 (en) * | 2003-03-07 | 2004-09-09 | John Bradstreet | Closed caption navigation |
US20050044105A1 (en) * | 2003-08-19 | 2005-02-24 | Kelly Terrell | System and method for delivery of content-specific video clips |
US20050138183A1 (en) * | 2003-12-19 | 2005-06-23 | O'rourke Thomas | Computer telephone integration over a network |
US20050229048A1 (en) * | 2004-03-30 | 2005-10-13 | International Business Machines Corporation | Caching operational code in a voice markup interpreter |
US20060010467A1 (en) * | 2004-07-12 | 2006-01-12 | Alcatel | Personalized video entertainment system |
WO2006004844A3 (en) * | 2004-06-30 | 2006-07-27 | Glenayre Electronics Inc | System and method for outbound calling from a distributed telecommunications platform |
US20060218226A1 (en) * | 2005-03-23 | 2006-09-28 | Matsushita Electric Industrial Co., Ltd. | Automatic recording based on preferences |
US20070204285A1 (en) * | 2006-02-28 | 2007-08-30 | Gert Hercules Louw | Method for integrated media monitoring, purchase, and display |
US20070203945A1 (en) * | 2006-02-28 | 2007-08-30 | Gert Hercules Louw | Method for integrated media preview, analysis, purchase, and display |
US20080005266A1 (en) * | 2006-06-30 | 2008-01-03 | Gene Fein | Multimedia delivery system |
WO2008033454A2 (en) * | 2006-09-13 | 2008-03-20 | Video Monitoring Services Of America, L.P. | System and method for assessing marketing data |
US20080086754A1 (en) * | 2006-09-14 | 2008-04-10 | Sbc Knowledge Ventures, Lp | Peer to peer media distribution system and method |
US20080177864A1 (en) * | 2007-01-22 | 2008-07-24 | Minborg Invent I Goeteborg Ab | Method and Apparatus For Obtaining Digital Objects In A Communication Network |
US20080207233A1 (en) * | 2007-02-28 | 2008-08-28 | Waytena William L | Method and System For Centralized Storage of Media and for Communication of Such Media Activated By Real-Time Messaging |
US20080270913A1 (en) * | 2007-04-26 | 2008-10-30 | Howard Singer | Methods, Media, and Devices for Providing a Package of Assets |
US20080281974A1 (en) * | 2007-05-07 | 2008-11-13 | Biap, Inc. | Providing personalized resources on-demand over a broadband network to consumer device applications |
US20090182712A1 (en) * | 2008-01-15 | 2009-07-16 | Kamal Faiza H | Systems and methods for rapid delivery of media content |
US20090319365A1 (en) * | 2006-09-13 | 2009-12-24 | James Hallowell Waggoner | System and method for assessing marketing data |
US20100077298A1 (en) * | 2008-09-25 | 2010-03-25 | Microsoft Corporation | Multi-platform presentation system |
US20110106536A1 (en) * | 2009-10-29 | 2011-05-05 | Rovi Technologies Corporation | Systems and methods for simulating dialog between a user and media equipment device |
US20110107215A1 (en) * | 2009-10-29 | 2011-05-05 | Rovi Technologies Corporation | Systems and methods for presenting media asset clips on a media equipment device |
US8169916B1 (en) * | 2007-11-23 | 2012-05-01 | Media Melon, Inc. | Multi-platform video delivery configuration |
CN105407384A (en) * | 2014-09-15 | 2016-03-16 | 上海天脉聚源文化传媒有限公司 | Method, device and system for identifying media player content by using two-dimensional code |
US20180359537A1 (en) * | 2017-06-07 | 2018-12-13 | Naver Corporation | Content providing server, content providing terminal, and content providing method |
US10277953B2 (en) * | 2016-12-06 | 2019-04-30 | The Directv Group, Inc. | Search for content data in content |
US20200059696A1 (en) * | 2004-04-07 | 2020-02-20 | Visible World, Llc | System And Method For Enhanced Video Selection |
CN116233472A (en) * | 2023-05-08 | 2023-06-06 | 湖南马栏山视频先进技术研究院有限公司 | Audio and video synchronization method and cloud processing system |
Families Citing this family (161)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6263503B1 (en) | 1999-05-26 | 2001-07-17 | Neal Margulis | Method for effectively implementing a wireless television system |
US8266657B2 (en) | 2001-03-15 | 2012-09-11 | Sling Media Inc. | Method for effectively implementing a multi-room television system |
US7490092B2 (en) | 2000-07-06 | 2009-02-10 | Streamsage, Inc. | Method and system for indexing and searching timed media information based upon relevance intervals |
US20060015904A1 (en) | 2000-09-08 | 2006-01-19 | Dwight Marcus | Method and apparatus for creation, distribution, assembly and verification of media |
US9419844B2 (en) | 2001-09-11 | 2016-08-16 | Ntech Properties, Inc. | Method and system for generation of media |
US20030088687A1 (en) | 2001-12-28 | 2003-05-08 | Lee Begeja | Method and apparatus for automatically converting source video into electronic mail messages |
US8924383B2 (en) * | 2001-04-06 | 2014-12-30 | At&T Intellectual Property Ii, L.P. | Broadcast video monitoring and alerting system |
JP2003037834A (en) * | 2001-05-16 | 2003-02-07 | Sony Corp | Content distribution system, content distribution controller, content distribution control method, content distribution control program and content distribution control program-stored medium |
US20030093814A1 (en) * | 2001-11-09 | 2003-05-15 | Birmingham Blair B.A. | System and method for generating user-specific television content based on closed captioning content |
US20030122966A1 (en) * | 2001-12-06 | 2003-07-03 | Digeo, Inc. | System and method for meta data distribution to customize media content playback |
GB0205410D0 (en) * | 2002-03-07 | 2002-04-24 | Nokia Corp | Method of digital recording |
US20030192045A1 (en) * | 2002-04-04 | 2003-10-09 | International Business Machines Corporation | Apparatus and method for blocking television commercials and displaying alternative programming |
US7403990B2 (en) * | 2002-05-08 | 2008-07-22 | Ricoh Company, Ltd. | Information distribution system |
US7231607B2 (en) * | 2002-07-09 | 2007-06-12 | Kaleidescope, Inc. | Mosaic-like user interface for video selection and display |
US7246322B2 (en) | 2002-07-09 | 2007-07-17 | Kaleidescope, Inc. | Grid-like guided user interface for video selection and display |
US20070245247A1 (en) * | 2002-05-14 | 2007-10-18 | Kaleidescape, Inc. | Grid-like guided user interface for video selection and display |
JP3966503B2 (en) * | 2002-05-30 | 2007-08-29 | インターナショナル・ビジネス・マシーンズ・コーポレーション | Content reproduction control device, data management device, storage-type content distribution system, content distribution method, control data transmission server, program |
WO2004006579A1 (en) | 2002-07-09 | 2004-01-15 | Kaleidescape, Inc. | Content and key distribution system for digital content representing media streams |
US7111171B2 (en) * | 2002-07-09 | 2006-09-19 | Kaleidescope, Inc. | Parallel distribution and fingerprinting of digital content |
US7003131B2 (en) * | 2002-07-09 | 2006-02-21 | Kaleidescape, Inc. | Watermarking and fingerprinting digital content using alternative blocks to embed information |
US20040083487A1 (en) * | 2002-07-09 | 2004-04-29 | Kaleidescape, A Corporation | Content and key distribution system for digital content representing media streams |
US7454772B2 (en) | 2002-07-25 | 2008-11-18 | International Business Machines Corporation | Apparatus and method for blocking television commercials and providing an archive interrogation program |
US20040025191A1 (en) * | 2002-07-31 | 2004-02-05 | B. Popular, Inc. | System and method for creating and presenting content packages |
CN1682224B (en) * | 2002-09-09 | 2012-08-15 | 皇家飞利浦电子股份有限公司 | A data network, user terminal and method for providing recommendations |
US8225194B2 (en) * | 2003-01-09 | 2012-07-17 | Kaleidescape, Inc. | Bookmarks and watchpoints for selection and presentation of media streams |
WO2004070998A2 (en) | 2003-01-31 | 2004-08-19 | Kaleidescape, Inc. | Recovering from de-synchronization attacks against watermarking and fingerprinting |
US20040172650A1 (en) * | 2003-02-28 | 2004-09-02 | Hawkins William J. | Targeted content delivery system in an interactive television network |
JP2004287699A (en) * | 2003-03-20 | 2004-10-14 | Tama Tlo Kk | Image composition device and method |
US8572104B2 (en) | 2003-04-18 | 2013-10-29 | Kaleidescape, Inc. | Sales of collections excluding those already purchased |
US20050086069A1 (en) * | 2003-07-15 | 2005-04-21 | Kaleidescape, Inc. | Separable presentation control rules with distinct control effects |
JP2004355069A (en) * | 2003-05-27 | 2004-12-16 | Sony Corp | Information processor, information processing method, program, and recording medium |
US20040243627A1 (en) * | 2003-05-28 | 2004-12-02 | Integrated Data Control, Inc. | Chat stream information capturing and indexing system |
AU2004254950A1 (en) * | 2003-06-24 | 2005-01-13 | Ntech Properties, Inc. | Method, system and apparatus for information delivery |
US9615061B2 (en) * | 2003-07-11 | 2017-04-04 | Tvworks, Llc | System and method for creating and presenting composite video-on-demand content |
US20050010950A1 (en) * | 2003-07-11 | 2005-01-13 | John Carney | System and method for automatically generating a composite video-on-demand content |
US20050144305A1 (en) * | 2003-10-21 | 2005-06-30 | The Board Of Trustees Operating Michigan State University | Systems and methods for identifying, segmenting, collecting, annotating, and publishing multimedia materials |
US20050128192A1 (en) * | 2003-12-12 | 2005-06-16 | International Business Machines Corporation | Modifying visual presentations based on environmental context and user preferences |
US20050129252A1 (en) * | 2003-12-12 | 2005-06-16 | International Business Machines Corporation | Audio presentations based on environmental context and user preferences |
US20050198014A1 (en) * | 2004-02-06 | 2005-09-08 | Barbara De Lury | Systems, methods and apparatus of a whole/part search engine |
US7533081B2 (en) * | 2004-02-06 | 2009-05-12 | General Electric Company | Systems, methods and apparatus to determine relevance of search results in whole/part search |
US8495089B2 (en) * | 2004-05-14 | 2013-07-23 | Google Inc. | System and method for optimizing media play transactions |
WO2007001247A2 (en) * | 2004-06-02 | 2007-01-04 | Yahoo! Inc. | Content-management system for user behavior targeting |
KR101011134B1 (en) | 2004-06-07 | 2011-01-26 | 슬링 미디어 인코퍼레이티드 | Personal media broadcasting system |
US7917932B2 (en) | 2005-06-07 | 2011-03-29 | Sling Media, Inc. | Personal video recorder functionality for placeshifting systems |
US7769756B2 (en) * | 2004-06-07 | 2010-08-03 | Sling Media, Inc. | Selection and presentation of context-relevant supplemental content and advertising |
US9998802B2 (en) | 2004-06-07 | 2018-06-12 | Sling Media LLC | Systems and methods for creating variable length clips from a media stream |
US7975062B2 (en) | 2004-06-07 | 2011-07-05 | Sling Media, Inc. | Capturing and sharing media content |
US20060010472A1 (en) * | 2004-07-06 | 2006-01-12 | Balazs Godeny | System, method, and apparatus for creating searchable media files from streamed media |
US8112548B2 (en) * | 2004-09-28 | 2012-02-07 | Yahoo! Inc. | Method for providing a clip for viewing at a remote device |
US7970020B2 (en) * | 2004-10-27 | 2011-06-28 | Telefonaktiebolaget Lm Ericsson (Publ) | Terminal having plural playback pointers for jitter buffer |
KR100664181B1 (en) * | 2004-11-22 | 2007-01-03 | 엘지전자 주식회사 | Method for searching program in wireless terminal with digital multimedia broadcasting |
US8065604B2 (en) * | 2004-12-30 | 2011-11-22 | Massachusetts Institute Of Technology | Techniques for relating arbitrary metadata to media files |
US20070016930A1 (en) * | 2005-03-08 | 2007-01-18 | Podfitness, Inc. | Creation and navigation of media content with chaptering elements |
US20060224757A1 (en) * | 2005-03-15 | 2006-10-05 | Han Fang | System and method for streaming service replication a in peer-to-peer network |
US20070088844A1 (en) * | 2005-06-07 | 2007-04-19 | Meta Interfaces, Llc | System for and method of extracting a time-based portion of media and serving it over the Web |
US20070027844A1 (en) * | 2005-07-28 | 2007-02-01 | Microsoft Corporation | Navigating recorded multimedia content using keywords or phrases |
US8316301B2 (en) * | 2005-08-04 | 2012-11-20 | Samsung Electronics Co., Ltd. | Apparatus, medium, and method segmenting video sequences based on topic |
US7382933B2 (en) * | 2005-08-24 | 2008-06-03 | International Business Machines Corporation | System and method for semantic video segmentation based on joint audiovisual and text analysis |
US20070130585A1 (en) * | 2005-12-05 | 2007-06-07 | Perret Pierre A | Virtual Store Management Method and System for Operating an Interactive Audio/Video Entertainment System According to Viewers Tastes and Preferences |
US20070157228A1 (en) | 2005-12-30 | 2007-07-05 | Jason Bayer | Advertising with video ad creatives |
US20080036917A1 (en) * | 2006-04-07 | 2008-02-14 | Mark Pascarella | Methods and systems for generating and delivering navigatable composite videos |
US11678026B1 (en) | 2006-05-19 | 2023-06-13 | Universal Innovation Council, LLC | Creating customized programming content |
US9602884B1 (en) | 2006-05-19 | 2017-03-21 | Universal Innovation Counsel, Inc. | Creating customized programming content |
US8170584B2 (en) * | 2006-06-06 | 2012-05-01 | Yahoo! Inc. | Providing an actionable event in an intercepted text message for a mobile device based on customized user information |
US8261300B2 (en) * | 2006-06-23 | 2012-09-04 | Tivo Inc. | Method and apparatus for advertisement placement in a user dialog on a set-top box |
US9015172B2 (en) | 2006-09-22 | 2015-04-21 | Limelight Networks, Inc. | Method and subsystem for searching media content within a content-search service system |
US8396878B2 (en) | 2006-09-22 | 2013-03-12 | Limelight Networks, Inc. | Methods and systems for generating automated tags for video files |
US8966389B2 (en) | 2006-09-22 | 2015-02-24 | Limelight Networks, Inc. | Visual interface for identifying positions of interest within a sequentially ordered information encoding |
US8214374B1 (en) * | 2011-09-26 | 2012-07-03 | Limelight Networks, Inc. | Methods and systems for abridging video files |
WO2008058259A2 (en) * | 2006-11-08 | 2008-05-15 | Mywaves, Inc. | An apparatus and method for dynamically providing web-based multimedia to a mobile phone |
US8081958B2 (en) | 2006-12-01 | 2011-12-20 | Yahoo! Inc. | User initiated invite for automatic conference participation by invitee |
US20080155627A1 (en) * | 2006-12-04 | 2008-06-26 | O'connor Daniel | Systems and methods of searching for and presenting video and audio |
US20080147636A1 (en) * | 2006-12-14 | 2008-06-19 | Yahoo! Inc. | Video distribution systems and methods |
US8046803B1 (en) | 2006-12-28 | 2011-10-25 | Sprint Communications Company L.P. | Contextual multimedia metatagging |
JP2008167363A (en) * | 2007-01-05 | 2008-07-17 | Sony Corp | Information processor and information processing method, and program |
US20080178219A1 (en) * | 2007-01-23 | 2008-07-24 | At&T Knowledge Ventures, Lp | System and method for providing video content |
US8843989B2 (en) * | 2007-02-09 | 2014-09-23 | At&T Intellectual Property I, L.P. | Method and system to provide interactive television content |
US20090024049A1 (en) | 2007-03-29 | 2009-01-22 | Neurofocus, Inc. | Cross-modality synthesis of central nervous system, autonomic nervous system, and effector data |
WO2008137581A1 (en) | 2007-05-01 | 2008-11-13 | Neurofocus, Inc. | Neuro-feedback based stimulus compression device |
US8392253B2 (en) | 2007-05-16 | 2013-03-05 | The Nielsen Company (Us), Llc | Neuro-physiology and neuro-behavioral based stimulus targeting system |
US8145704B2 (en) | 2007-06-13 | 2012-03-27 | Ntech Properties, Inc. | Method and system for providing media programming |
US8503523B2 (en) * | 2007-06-29 | 2013-08-06 | Microsoft Corporation | Forming a representation of a video item and use thereof |
EP2170161B1 (en) | 2007-07-30 | 2018-12-05 | The Nielsen Company (US), LLC. | Neuro-response stimulus and stimulus attribute resonance estimator |
US8744118B2 (en) | 2007-08-03 | 2014-06-03 | At&T Intellectual Property I, L.P. | Methods, systems, and products for indexing scenes in digital media |
US8386313B2 (en) | 2007-08-28 | 2013-02-26 | The Nielsen Company (Us), Llc | Stimulus placement system using subject neuro-response measurements |
US8392255B2 (en) | 2007-08-29 | 2013-03-05 | The Nielsen Company (Us), Llc | Content based selection and meta tagging of advertisement breaks |
US8060407B1 (en) | 2007-09-04 | 2011-11-15 | Sprint Communications Company L.P. | Method for providing personalized, targeted advertisements during playback of media |
US8015167B1 (en) * | 2007-09-05 | 2011-09-06 | Adobe Systems Incorporated | Media players and download manager functionality |
US20090083129A1 (en) | 2007-09-20 | 2009-03-26 | Neurofocus, Inc. | Personalized content delivery using neuro-response priming data |
US8327395B2 (en) | 2007-10-02 | 2012-12-04 | The Nielsen Company (Us), Llc | System providing actionable insights based on physiological responses from viewers of media |
US8739200B2 (en) | 2007-10-11 | 2014-05-27 | At&T Intellectual Property I, L.P. | Methods, systems, and products for distributing digital media |
US20090133047A1 (en) | 2007-10-31 | 2009-05-21 | Lee Hans C | Systems and Methods Providing Distributed Collection and Centralized Processing of Physiological Responses from Viewers |
US20090150784A1 (en) * | 2007-12-07 | 2009-06-11 | Microsoft Corporation | User interface for previewing video items |
US20090198580A1 (en) * | 2008-01-31 | 2009-08-06 | Horizon Capital Securities Limited | Distribution and Targeting of Advertising for Mobile Devices |
US8806530B1 (en) | 2008-04-22 | 2014-08-12 | Sprint Communications Company L.P. | Dual channel presence detection and content delivery system and method |
US20100153848A1 (en) * | 2008-10-09 | 2010-06-17 | Pinaki Saha | Integrated branding, social bookmarking, and aggregation system for media content |
US9519772B2 (en) | 2008-11-26 | 2016-12-13 | Free Stream Media Corp. | Relevancy improvement through targeting of information based on data gathered from a networked device associated with a security sandbox of a client device |
US10977693B2 (en) | 2008-11-26 | 2021-04-13 | Free Stream Media Corp. | Association of content identifier of audio-visual data with additional data through capture infrastructure |
US10419541B2 (en) | 2008-11-26 | 2019-09-17 | Free Stream Media Corp. | Remotely control devices over a network without authentication or registration |
US9961388B2 (en) | 2008-11-26 | 2018-05-01 | David Harrison | Exposure of public internet protocol addresses in an advertising exchange server to improve relevancy of advertisements |
US9026668B2 (en) | 2012-05-26 | 2015-05-05 | Free Stream Media Corp. | Real-time and retargeted advertising on multiple screens of a user watching television |
US9386356B2 (en) | 2008-11-26 | 2016-07-05 | Free Stream Media Corp. | Targeting with television audience data across multiple screens |
US10880340B2 (en) | 2008-11-26 | 2020-12-29 | Free Stream Media Corp. | Relevancy improvement through targeting of information based on data gathered from a networked device associated with a security sandbox of a client device |
US9154942B2 (en) | 2008-11-26 | 2015-10-06 | Free Stream Media Corp. | Zero configuration communication between a browser and a networked media device |
US10631068B2 (en) | 2008-11-26 | 2020-04-21 | Free Stream Media Corp. | Content exposure attribution based on renderings of related content across multiple devices |
US10567823B2 (en) | 2008-11-26 | 2020-02-18 | Free Stream Media Corp. | Relevant advertisement generation based on a user operating a client device communicatively coupled with a networked media device |
US9986279B2 (en) | 2008-11-26 | 2018-05-29 | Free Stream Media Corp. | Discovery, access control, and communication with networked services |
US10334324B2 (en) | 2008-11-26 | 2019-06-25 | Free Stream Media Corp. | Relevant advertisement generation based on a user operating a client device communicatively coupled with a networked media device |
US8180891B1 (en) | 2008-11-26 | 2012-05-15 | Free Stream Media Corp. | Discovery, access control, and communication with networked services from within a security sandbox |
US7996566B1 (en) * | 2008-12-23 | 2011-08-09 | Genband Us Llc | Media sharing |
US8713016B2 (en) * | 2008-12-24 | 2014-04-29 | Comcast Interactive Media, Llc | Method and apparatus for organizing segments of media assets and determining relevance of segments to a query |
US9442933B2 (en) * | 2008-12-24 | 2016-09-13 | Comcast Interactive Media, Llc | Identification of segments within audio, video, and multimedia items |
US11531668B2 (en) | 2008-12-29 | 2022-12-20 | Comcast Interactive Media, Llc | Merging of multiple data sets |
US8176043B2 (en) | 2009-03-12 | 2012-05-08 | Comcast Interactive Media, Llc | Ranking search results |
US20100250325A1 (en) | 2009-03-24 | 2010-09-30 | Neurofocus, Inc. | Neurological profiles for market matching and stimulus presentation |
US8533223B2 (en) | 2009-05-12 | 2013-09-10 | Comcast Interactive Media, LLC. | Disambiguation and tagging of entities |
US8799253B2 (en) * | 2009-06-26 | 2014-08-05 | Microsoft Corporation | Presenting an assembled sequence of preview videos |
US9892730B2 (en) | 2009-07-01 | 2018-02-13 | Comcast Interactive Media, Llc | Generating topic-specific language models |
US8655437B2 (en) * | 2009-08-21 | 2014-02-18 | The Nielsen Company (Us), Llc | Analysis of the mirror neuron system for evaluation of stimulus |
US10987015B2 (en) | 2009-08-24 | 2021-04-27 | Nielsen Consumer Llc | Dry electrodes for electroencephalography |
US8990104B1 (en) | 2009-10-27 | 2015-03-24 | Sprint Communications Company L.P. | Multimedia product placement marketplace |
US8209224B2 (en) | 2009-10-29 | 2012-06-26 | The Nielsen Company (Us), Llc | Intracluster content management using neuro-response priming data |
US20110106750A1 (en) | 2009-10-29 | 2011-05-05 | Neurofocus, Inc. | Generating ratings predictions using neuro-response data |
US9560984B2 (en) | 2009-10-29 | 2017-02-07 | The Nielsen Company (Us), Llc | Analysis of controlled and automatic attention for introduction of stimulus material |
US8572488B2 (en) * | 2010-03-29 | 2013-10-29 | Avid Technology, Inc. | Spot dialog editor |
US8684742B2 (en) | 2010-04-19 | 2014-04-01 | Innerscope Research, Inc. | Short imagery task (SIT) research method |
US8655428B2 (en) | 2010-05-12 | 2014-02-18 | The Nielsen Company (Us), Llc | Neuro-response data synchronization |
WO2011146898A2 (en) | 2010-05-21 | 2011-11-24 | Bologh Mark J | Internet system for ultra high video quality |
US8423555B2 (en) | 2010-07-09 | 2013-04-16 | Comcast Cable Communications, Llc | Automatic segmentation of video |
US9544528B2 (en) * | 2010-08-17 | 2017-01-10 | Verizon Patent And Licensing Inc. | Matrix search of video using closed caption information |
US8396744B2 (en) | 2010-08-25 | 2013-03-12 | The Nielsen Company (Us), Llc | Effective virtual reality environments for presentation of marketing materials |
US9001886B2 (en) | 2010-11-22 | 2015-04-07 | Cisco Technology, Inc. | Dynamic time synchronization |
US8689269B2 (en) * | 2011-01-27 | 2014-04-01 | Netflix, Inc. | Insertion points for streaming video autoplay |
US20120246240A1 (en) * | 2011-03-24 | 2012-09-27 | Apple Inc. | Providing Context Information Relating To Media Content That Is Being Presented |
US8683013B2 (en) | 2011-04-18 | 2014-03-25 | Cisco Technology, Inc. | System and method for data streaming in a computer network |
EP2697980B1 (en) * | 2011-05-10 | 2017-12-20 | NDS Limited | Customized zapping |
US9226034B1 (en) | 2011-05-10 | 2015-12-29 | Google Inc. | Apparatus and methods for generating clips using recipes with slice definitions |
US20120290409A1 (en) * | 2011-05-11 | 2012-11-15 | Neurofocus, Inc. | Marketing material enhanced wait states |
US8832729B2 (en) * | 2011-07-05 | 2014-09-09 | Yahoo! Inc. | Methods and systems for grabbing video surfers' attention while awaiting download |
US10467289B2 (en) | 2011-08-02 | 2019-11-05 | Comcast Cable Communications, Llc | Segmentation of video according to narrative theme |
US8898717B1 (en) * | 2012-01-11 | 2014-11-25 | Cisco Technology, Inc. | System and method for obfuscating start-up delay in a linear media service environment |
US9591098B2 (en) | 2012-02-01 | 2017-03-07 | Cisco Technology, Inc. | System and method to reduce stream start-up delay for adaptive streaming |
US9569986B2 (en) | 2012-02-27 | 2017-02-14 | The Nielsen Company (Us), Llc | System and method for gathering and analyzing biometric user feedback for use in social media and advertising applications |
US20130226983A1 (en) * | 2012-02-29 | 2013-08-29 | Jeffrey Martin Beining | Collaborative Video Highlights |
US9113203B2 (en) | 2012-06-28 | 2015-08-18 | Google Inc. | Generating a sequence of audio fingerprints at a set top box |
US8989835B2 (en) | 2012-08-17 | 2015-03-24 | The Nielsen Company (Us), Llc | Systems and methods to gather and analyze electroencephalographic data |
US9661361B2 (en) | 2012-09-19 | 2017-05-23 | Google Inc. | Systems and methods for live media content matching |
US20140101551A1 (en) * | 2012-10-05 | 2014-04-10 | Google Inc. | Stitching videos into an aggregate video |
US9320450B2 (en) | 2013-03-14 | 2016-04-26 | The Nielsen Company (Us), Llc | Methods and apparatus to gather and analyze electroencephalographic data |
US9560103B2 (en) | 2013-06-26 | 2017-01-31 | Echostar Technologies L.L.C. | Custom video content |
US9923945B2 (en) | 2013-10-10 | 2018-03-20 | Cisco Technology, Inc. | Virtual assets for on-demand content generation |
US10331661B2 (en) | 2013-10-23 | 2019-06-25 | At&T Intellectual Property I, L.P. | Video content search using captioning data |
US9622702B2 (en) | 2014-04-03 | 2017-04-18 | The Nielsen Company (Us), Llc | Methods and apparatus to gather and analyze electroencephalographic data |
US9936250B2 (en) | 2015-05-19 | 2018-04-03 | The Nielsen Company (Us), Llc | Methods and apparatus to adjust content presented to an individual |
US9858943B1 (en) | 2017-05-09 | 2018-01-02 | Sony Corporation | Accessibility for the hearing impaired using measurement and object based audio |
US10805676B2 (en) | 2017-07-10 | 2020-10-13 | Sony Corporation | Modifying display region for people with macular degeneration |
US10650702B2 (en) | 2017-07-10 | 2020-05-12 | Sony Corporation | Modifying display region for people with loss of peripheral vision |
US10845954B2 (en) | 2017-07-11 | 2020-11-24 | Sony Corporation | Presenting audio video display options as list or matrix |
US10303427B2 (en) | 2017-07-11 | 2019-05-28 | Sony Corporation | Moving audio from center speaker to peripheral speaker of display device for macular degeneration accessibility |
US10051331B1 (en) * | 2017-07-11 | 2018-08-14 | Sony Corporation | Quick accessibility profiles |
US11558650B2 (en) * | 2020-07-30 | 2023-01-17 | At&T Intellectual Property I, L.P. | Automated, user-driven, and personalized curation of short-form media segments |
Citations (51)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5481296A (en) * | 1993-08-06 | 1996-01-02 | International Business Machines Corporation | Apparatus and method for selectively viewing video information |
US5532735A (en) * | 1994-04-29 | 1996-07-02 | At&T Corp. | Method of advertisement selection for interactive service |
US5614940A (en) * | 1994-10-21 | 1997-03-25 | Intel Corporation | Method and apparatus for providing broadcast information with indexing |
US5734893A (en) * | 1995-09-28 | 1998-03-31 | Ibm Corporation | Progressive content-based retrieval of image and video with adaptive and iterative refinement |
US5805763A (en) * | 1995-05-05 | 1998-09-08 | Microsoft Corporation | System and method for automatically recording programs in an interactive viewing system |
US5835087A (en) * | 1994-11-29 | 1998-11-10 | Herz; Frederick S. M. | System for generation of object profiles for a system for customized electronic identification of desirable objects |
US5924105A (en) * | 1997-01-27 | 1999-07-13 | Michigan State University | Method and product for determining salient features for use in information searching |
US5996007A (en) * | 1997-06-16 | 1999-11-30 | John Klug | Method for providing selected content during waiting time of an internet session |
US6061056A (en) * | 1996-03-04 | 2000-05-09 | Telexis Corporation | Television monitoring system with automatic selection of program material of interest and subsequent display under user control |
US6188398B1 (en) * | 1999-06-02 | 2001-02-13 | Mark Collins-Rector | Targeting advertising using web pages with video |
US6229524B1 (en) * | 1998-07-17 | 2001-05-08 | International Business Machines Corporation | User interface for interaction with video |
US20010013123A1 (en) * | 1991-11-25 | 2001-08-09 | Freeman Michael J. | Customized program creation by splicing server based video, audio, or graphical segments |
US6289346B1 (en) * | 1998-03-12 | 2001-09-11 | At&T Corp. | Apparatus and method for a bookmarking system |
US6298482B1 (en) * | 1997-11-12 | 2001-10-02 | International Business Machines Corporation | System for two-way digital multimedia broadcast and interactive services |
US20010049826A1 (en) * | 2000-01-19 | 2001-12-06 | Itzhak Wilf | Method of searching video channels by content |
US6345279B1 (en) * | 1999-04-23 | 2002-02-05 | International Business Machines Corporation | Methods and apparatus for adapting multimedia content for client devices |
US6353825B1 (en) * | 1999-07-30 | 2002-03-05 | Verizon Laboratories Inc. | Method and device for classification using iterative information retrieval techniques |
US6363380B1 (en) * | 1998-01-13 | 2002-03-26 | U.S. Philips Corporation | Multimedia computer system with story segmentation capability and operating program therefor including finite automation video parser |
US20020052747A1 (en) * | 2000-08-21 | 2002-05-02 | Sarukkai Ramesh R. | Method and system of interpreting and presenting web content using a voice browser |
US6385619B1 (en) * | 1999-01-08 | 2002-05-07 | International Business Machines Corporation | Automatic user interest profile generation from structured document access information |
US6411952B1 (en) * | 1998-06-24 | 2002-06-25 | Compaq Information Technologies Group, Lp | Method for learning character patterns to interactively control the scope of a web crawler |
US20020093591A1 (en) * | 2000-12-12 | 2002-07-18 | Nec Usa, Inc. | Creating audio-centric, imagecentric, and integrated audio visual summaries |
US20020100046A1 (en) * | 2000-11-16 | 2002-07-25 | Dudkiewicz Gil Gavriel | System and method for determining the desirability of video programming events |
US6434550B1 (en) * | 2000-04-14 | 2002-08-13 | Rightnow Technologies, Inc. | Temporal updates of relevancy rating of retrieved information in an information search system |
US20020138843A1 (en) * | 2000-05-19 | 2002-09-26 | Andrew Samaan | Video distribution method and system |
US20020152464A1 (en) * | 2001-04-13 | 2002-10-17 | Sony Corporation | System and method for pushing internet content onto interactive television |
US20020152477A1 (en) * | 1998-05-29 | 2002-10-17 | Opentv, Inc. | Module manager for interactive television system |
US6477565B1 (en) * | 1999-06-01 | 2002-11-05 | Yodlee.Com, Inc. | Method and apparatus for restructuring of personalized data for transmission from a data network to connected and portable network appliances |
US6477707B1 (en) * | 1998-03-24 | 2002-11-05 | Fantastic Corporation | Method and system for broadcast transmission of media objects |
US20020173964A1 (en) * | 2001-03-30 | 2002-11-21 | International Business Machines Corporation | Speech driven data selection in a voice-enabled program |
US6496857B1 (en) * | 2000-02-08 | 2002-12-17 | Mirror Worlds Technologies, Inc. | Delivering targeted, enhanced advertisements across electronic networks |
US6507941B1 (en) * | 1999-04-28 | 2003-01-14 | Magma Design Automation, Inc. | Subgrid detailed routing |
US6526580B2 (en) * | 1999-04-16 | 2003-02-25 | Digeo, Inc. | Broadband data broadcasting service |
US6564263B1 (en) * | 1998-12-04 | 2003-05-13 | International Business Machines Corporation | Multimedia content description framework |
US6671715B1 (en) * | 2000-01-21 | 2003-12-30 | Microstrategy, Inc. | System and method for automatic, real-time delivery of personalized informational and transactional data to users via high throughput content delivery device |
US6678890B1 (en) * | 1999-03-10 | 2004-01-13 | Sony Corporation | Bidirectional transmission/reception system and method and transmission apparatus |
US6751776B1 (en) * | 1999-08-06 | 2004-06-15 | Nec Corporation | Method and apparatus for personalized multimedia summarization based upon user specified theme |
US20040117831A1 (en) * | 1999-06-28 | 2004-06-17 | United Video Properties, Inc. | Interactive television program guide system and method with niche hubs |
US6810526B1 (en) * | 1996-08-14 | 2004-10-26 | March Networks Corporation | Centralized broadcast channel real-time search system |
US20050028194A1 (en) * | 1998-01-13 | 2005-02-03 | Elenbaas Jan Hermanus | Personalized news retrieval system |
US20050076357A1 (en) * | 1999-10-28 | 2005-04-07 | Fenne Adam Michael | Dynamic insertion of targeted sponsored video messages into Internet multimedia broadcasts |
US20050076378A1 (en) * | 1999-12-16 | 2005-04-07 | Microsoft Corporation | Live presentation searching |
US20050223408A1 (en) * | 1999-09-13 | 2005-10-06 | Microstrategy, Incorporated | System and method for real-time, personalized, dynamic, interactive voice services for entertainment-related information |
US6956573B1 (en) * | 1996-11-15 | 2005-10-18 | Sarnoff Corporation | Method and apparatus for efficiently representing storing and accessing video information |
US6961954B1 (en) * | 1997-10-27 | 2005-11-01 | The Mitre Corporation | Automated segmentation, information extraction, summarization, and presentation of broadcast news |
US6970915B1 (en) * | 1999-11-01 | 2005-11-29 | Tellme Networks, Inc. | Streaming content over a telephone interface |
US20050278741A1 (en) * | 1997-03-31 | 2005-12-15 | Microsoft Corporation | Query-based electronic program guide |
US7000242B1 (en) * | 2000-07-31 | 2006-02-14 | Jeff Haber | Directing internet shopping traffic and tracking revenues generated as a result thereof |
US7130790B1 (en) * | 2000-10-24 | 2006-10-31 | Global Translations, Inc. | System and method for closed caption data translation |
US7178107B2 (en) * | 1999-09-16 | 2007-02-13 | Sharp Laboratories Of America, Inc. | Audiovisual information management system with identification prescriptions |
US20070079327A1 (en) * | 2000-01-19 | 2007-04-05 | Individual Networks, Llc | System for providing a customized media list |
Family Cites Families (42)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5664227A (en) * | 1994-10-14 | 1997-09-02 | Carnegie Mellon University | System and method for skimming digital audio/video data |
US5835667A (en) * | 1994-10-14 | 1998-11-10 | Carnegie Mellon University | Method and apparatus for creating a searchable digital video library and a system and method of using such a library |
US5821945A (en) * | 1995-02-03 | 1998-10-13 | The Trustees Of Princeton University | Method and apparatus for video browsing based on content and structure |
US5708767A (en) * | 1995-02-03 | 1998-01-13 | The Trustees Of Princeton University | Method and apparatus for video browsing based on content and structure |
GB9504376D0 (en) | 1995-03-04 | 1995-04-26 | Televitesse Systems Inc | Automatic broadcast monitoring system |
EP0820677B1 (en) * | 1995-04-13 | 2002-01-09 | Siemens Aktiengesellschaft | Method and device for storing, searching for and playing back data in a multimedia e-mail system |
US5710591A (en) * | 1995-06-27 | 1998-01-20 | At&T | Method and apparatus for recording and indexing an audio and multimedia conference |
WO1997012486A1 (en) * | 1995-09-29 | 1997-04-03 | Boston Technology, Inc. | Multimedia architecture for interactive advertising |
US5903892A (en) * | 1996-05-24 | 1999-05-11 | Magnifi, Inc. | Indexing of media content on a network |
US5874986A (en) * | 1996-06-26 | 1999-02-23 | At&T Corp | Method for communicating audiovisual programs over a communications network |
US6098082A (en) * | 1996-07-15 | 2000-08-01 | At&T Corp | Method for automatically providing a compressed rendition of a video program in a format suitable for electronic searching and retrieval |
US6637032B1 (en) * | 1997-01-06 | 2003-10-21 | Microsoft Corporation | System and method for synchronizing enhancing content with a video program using closed captioning |
US5864366A (en) * | 1997-02-05 | 1999-01-26 | International Business Machines Corporation | System and method for selecting video information with intensity difference |
CA2257577C (en) * | 1997-04-07 | 2002-03-19 | At&T Corp. | System and method for interfacing mpeg-coded audiovisual objects permitting adaptive control |
US6038296A (en) * | 1997-10-07 | 2000-03-14 | Lucent Technologies Inc. | Internet/intranet user interface to a multimedia messaging system |
US6166735A (en) * | 1997-12-03 | 2000-12-26 | International Business Machines Corporation | Video story board user interface for selective downloading and displaying of desired portions of remote-stored video data objects |
US5956026A (en) * | 1997-12-19 | 1999-09-21 | Sharp Laboratories Of America, Inc. | Method for hierarchical summarization and browsing of digital video |
US6453355B1 (en) * | 1998-01-15 | 2002-09-17 | Apple Computer, Inc. | Method and apparatus for media data transmission |
US6029200A (en) * | 1998-03-09 | 2000-02-22 | Microsoft Corporation | Automatic protocol rollover in streaming multimedia data delivery system |
US6698020B1 (en) * | 1998-06-15 | 2004-02-24 | Webtv Networks, Inc. | Techniques for intelligent video ad insertion |
US6233389B1 (en) * | 1998-07-30 | 2001-05-15 | Tivo, Inc. | Multimedia time warping system |
US6223213B1 (en) * | 1998-07-31 | 2001-04-24 | Webtv Networks, Inc. | Browser-based email system with user interface for audio/video capture |
US6324338B1 (en) * | 1998-08-07 | 2001-11-27 | Replaytv, Inc. | Video data recorder with integrated channel guides |
GB2341502B (en) * | 1998-09-08 | 2003-01-22 | Mitel Semiconductor Ltd | Image reject mixer circuit arrangements |
US6338094B1 (en) * | 1998-09-08 | 2002-01-08 | Webtv Networks, Inc. | Method, device and system for playing a video file in response to selecting a web page link |
US20020087973A1 (en) * | 2000-12-28 | 2002-07-04 | Hamilton Jeffrey S. | Inserting local signals during MPEG channel changes |
US20020026638A1 (en) * | 2000-08-31 | 2002-02-28 | Eldering Charles A. | Internet-based electronic program guide advertisement insertion method and apparatus |
US6615039B1 (en) * | 1999-05-10 | 2003-09-02 | Expanse Networks, Inc | Advertisement subgroups for digital streams |
US6748421B1 (en) * | 1998-12-23 | 2004-06-08 | Canon Kabushiki Kaisha | Method and system for conveying video messages |
US6243676B1 (en) * | 1998-12-23 | 2001-06-05 | Openwave Systems Inc. | Searching and retrieving multimedia information |
US6236395B1 (en) * | 1999-02-01 | 2001-05-22 | Sharp Laboratories Of America, Inc. | Audiovisual information management system |
US7051351B2 (en) * | 1999-03-08 | 2006-05-23 | Microsoft Corporation | System and method of inserting advertisements into an information retrieval system display |
US6604239B1 (en) * | 1999-06-25 | 2003-08-05 | Eyescene Inc. | System and method for virtual television program rating |
US6415438B1 (en) * | 1999-10-05 | 2002-07-02 | Webtv Networks, Inc. | Trigger having a time attribute |
US6349410B1 (en) * | 1999-08-04 | 2002-02-19 | Intel Corporation | Integrating broadcast television pause and web browsing |
US6324512B1 (en) * | 1999-08-26 | 2001-11-27 | Matsushita Electric Industrial Co., Ltd. | System and method for allowing family members to access TV contents and program media recorder over telephone or internet |
KR100350787B1 (en) | 1999-09-22 | 2002-08-28 | 엘지전자 주식회사 | Multimedia browser based on user profile having ordering preference of searching item of multimedia data |
US6304898B1 (en) * | 1999-10-13 | 2001-10-16 | Datahouse, Inc. | Method and system for creating and sending graphical email |
US7159232B1 (en) * | 1999-11-16 | 2007-01-02 | Microsoft Corporation | Scheduling the recording of television programs |
US20010052019A1 (en) * | 2000-02-04 | 2001-12-13 | Ovt, Inc. | Video mail delivery system |
US6385306B1 (en) * | 2000-03-02 | 2002-05-07 | John Francis Baxter, Jr. | Audio file transmission method |
JP3810268B2 (en) * | 2000-04-07 | 2006-08-16 | シャープ株式会社 | Audio visual system |
2001
- 2001-12-28 US US10/034,679 patent/US20030163815A1/en not_active Abandoned
2002
- 2002-04-05 CA CA002380898A patent/CA2380898A1/en not_active Abandoned
- 2002-06-06 US US10/163,091 patent/US8151298B2/en not_active Expired - Fee Related
- 2002-11-25 US US10/303,045 patent/US20030120748A1/en not_active Abandoned
Patent Citations (51)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20010013123A1 (en) * | 1991-11-25 | 2001-08-09 | Freeman Michael J. | Customized program creation by splicing server based video, audio, or graphical segments |
US5481296A (en) * | 1993-08-06 | 1996-01-02 | International Business Machines Corporation | Apparatus and method for selectively viewing video information |
US5532735A (en) * | 1994-04-29 | 1996-07-02 | At&T Corp. | Method of advertisement selection for interactive service |
US5614940A (en) * | 1994-10-21 | 1997-03-25 | Intel Corporation | Method and apparatus for providing broadcast information with indexing |
US5835087A (en) * | 1994-11-29 | 1998-11-10 | Herz; Frederick S. M. | System for generation of object profiles for a system for customized electronic identification of desirable objects |
US5805763A (en) * | 1995-05-05 | 1998-09-08 | Microsoft Corporation | System and method for automatically recording programs in an interactive viewing system |
US5734893A (en) * | 1995-09-28 | 1998-03-31 | Ibm Corporation | Progressive content-based retrieval of image and video with adaptive and iterative refinement |
US6061056A (en) * | 1996-03-04 | 2000-05-09 | Telexis Corporation | Television monitoring system with automatic selection of program material of interest and subsequent display under user control |
US6810526B1 (en) * | 1996-08-14 | 2004-10-26 | March Networks Corporation | Centralized broadcast channel real-time search system |
US6956573B1 (en) * | 1996-11-15 | 2005-10-18 | Sarnoff Corporation | Method and apparatus for efficiently representing storing and accessing video information |
US5924105A (en) * | 1997-01-27 | 1999-07-13 | Michigan State University | Method and product for determining salient features for use in information searching |
US20050278741A1 (en) * | 1997-03-31 | 2005-12-15 | Microsoft Corporation | Query-based electronic program guide |
US5996007A (en) * | 1997-06-16 | 1999-11-30 | John Klug | Method for providing selected content during waiting time of an internet session |
US6961954B1 (en) * | 1997-10-27 | 2005-11-01 | The Mitre Corporation | Automated segmentation, information extraction, summarization, and presentation of broadcast news |
US6298482B1 (en) * | 1997-11-12 | 2001-10-02 | International Business Machines Corporation | System for two-way digital multimedia broadcast and interactive services |
US20050028194A1 (en) * | 1998-01-13 | 2005-02-03 | Elenbaas Jan Hermanus | Personalized news retrieval system |
US6363380B1 (en) * | 1998-01-13 | 2002-03-26 | U.S. Philips Corporation | Multimedia computer system with story segmentation capability and operating program therefor including finite automation video parser |
US6289346B1 (en) * | 1998-03-12 | 2001-09-11 | At&T Corp. | Apparatus and method for a bookmarking system |
US6477707B1 (en) * | 1998-03-24 | 2002-11-05 | Fantastic Corporation | Method and system for broadcast transmission of media objects |
US20020152477A1 (en) * | 1998-05-29 | 2002-10-17 | Opentv, Inc. | Module manager for interactive television system |
US6411952B1 (en) * | 1998-06-24 | 2002-06-25 | Compaq Information Technologies Group, Lp | Method for learning character patterns to interactively control the scope of a web crawler |
US6229524B1 (en) * | 1998-07-17 | 2001-05-08 | International Business Machines Corporation | User interface for interaction with video |
US6564263B1 (en) * | 1998-12-04 | 2003-05-13 | International Business Machines Corporation | Multimedia content description framework |
US6385619B1 (en) * | 1999-01-08 | 2002-05-07 | International Business Machines Corporation | Automatic user interest profile generation from structured document access information |
US6678890B1 (en) * | 1999-03-10 | 2004-01-13 | Sony Corporation | Bidirectional transmission/reception system and method and transmission apparatus |
US6526580B2 (en) * | 1999-04-16 | 2003-02-25 | Digeo, Inc. | Broadband data broadcasting service |
US6345279B1 (en) * | 1999-04-23 | 2002-02-05 | International Business Machines Corporation | Methods and apparatus for adapting multimedia content for client devices |
US6507941B1 (en) * | 1999-04-28 | 2003-01-14 | Magma Design Automation, Inc. | Subgrid detailed routing |
US6477565B1 (en) * | 1999-06-01 | 2002-11-05 | Yodlee.Com, Inc. | Method and apparatus for restructuring of personalized data for transmission from a data network to connected and portable network appliances |
US6188398B1 (en) * | 1999-06-02 | 2001-02-13 | Mark Collins-Rector | Targeting advertising using web pages with video |
US20040117831A1 (en) * | 1999-06-28 | 2004-06-17 | United Video Properties, Inc. | Interactive television program guide system and method with niche hubs |
US6353825B1 (en) * | 1999-07-30 | 2002-03-05 | Verizon Laboratories Inc. | Method and device for classification using iterative information retrieval techniques |
US6751776B1 (en) * | 1999-08-06 | 2004-06-15 | Nec Corporation | Method and apparatus for personalized multimedia summarization based upon user specified theme |
US20050223408A1 (en) * | 1999-09-13 | 2005-10-06 | Microstrategy, Incorporated | System and method for real-time, personalized, dynamic, interactive voice services for entertainment-related information |
US7178107B2 (en) * | 1999-09-16 | 2007-02-13 | Sharp Laboratories Of America, Inc. | Audiovisual information management system with identification prescriptions |
US20050076357A1 (en) * | 1999-10-28 | 2005-04-07 | Fenne Adam Michael | Dynamic insertion of targeted sponsored video messages into Internet multimedia broadcasts |
US6970915B1 (en) * | 1999-11-01 | 2005-11-29 | Tellme Networks, Inc. | Streaming content over a telephone interface |
US20050076378A1 (en) * | 1999-12-16 | 2005-04-07 | Microsoft Corporation | Live presentation searching |
US20070079327A1 (en) * | 2000-01-19 | 2007-04-05 | Individual Networks, Llc | System for providing a customized media list |
US20010049826A1 (en) * | 2000-01-19 | 2001-12-06 | Itzhak Wilf | Method of searching video channels by content |
US6671715B1 (en) * | 2000-01-21 | 2003-12-30 | Microstrategy, Inc. | System and method for automatic, real-time delivery of personalized informational and transactional data to users via high throughput content delivery device |
US6496857B1 (en) * | 2000-02-08 | 2002-12-17 | Mirror Worlds Technologies, Inc. | Delivering targeted, enhanced advertisements across electronic networks |
US6434550B1 (en) * | 2000-04-14 | 2002-08-13 | Rightnow Technologies, Inc. | Temporal updates of relevancy rating of retrieved information in an information search system |
US20020138843A1 (en) * | 2000-05-19 | 2002-09-26 | Andrew Samaan | Video distribution method and system |
US7000242B1 (en) * | 2000-07-31 | 2006-02-14 | Jeff Haber | Directing internet shopping traffic and tracking revenues generated as a result thereof |
US20020052747A1 (en) * | 2000-08-21 | 2002-05-02 | Sarukkai Ramesh R. | Method and system of interpreting and presenting web content using a voice browser |
US7130790B1 (en) * | 2000-10-24 | 2006-10-31 | Global Translations, Inc. | System and method for closed caption data translation |
US20020100046A1 (en) * | 2000-11-16 | 2002-07-25 | Dudkiewicz Gil Gavriel | System and method for determining the desirability of video programming events |
US20020093591A1 (en) * | 2000-12-12 | 2002-07-18 | Nec Usa, Inc. | Creating audio-centric, image-centric, and integrated audio-visual summaries
US20020173964A1 (en) * | 2001-03-30 | 2002-11-21 | International Business Machines Corporation | Speech driven data selection in a voice-enabled program |
US20020152464A1 (en) * | 2001-04-13 | 2002-10-17 | Sony Corporation | System and method for pushing internet content onto interactive television |
Cited By (56)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10123186B2 (en) | 2001-06-28 | 2018-11-06 | At&T Intellectual Property I, L.P. | Simultaneous visual and telephonic access to interactive information delivery |
US7908381B2 (en) | 2001-06-28 | 2011-03-15 | At&T Intellectual Property I, L.P. | Simultaneous visual and telephonic access to interactive information delivery |
US20060200569A1 (en) * | 2001-06-28 | 2006-09-07 | Bellsouth Intellectual Property Corporation | Simultaneous visual and telephonic access to interactive information delivery |
US20030005076A1 (en) * | 2001-06-28 | 2003-01-02 | Bellsouth Intellectual Property Corporation | Simultaneous visual and telephonic access to interactive information delivery |
US8775635B2 (en) | 2001-06-28 | 2014-07-08 | At&T Intellectual Property I, L.P. | Simultaneous visual and telephonic access to interactive information delivery |
US7054939B2 (en) * | 2001-06-28 | 2006-05-30 | Bellsouth Intellectual Property Corporation | Simultaneous visual and telephonic access to interactive information delivery
US20110125911A1 (en) * | 2001-06-28 | 2011-05-26 | At&T Intellectual Property I, L.P. | Simultaneous visual and telephonic access to interactive information delivery |
US20030101267A1 (en) * | 2001-11-28 | 2003-05-29 | Thompson Mark R. | Peer-to-peer caching network |
US20030126126A1 (en) * | 2001-12-29 | 2003-07-03 | Lee Jin Soo | Apparatus and method for searching multimedia object |
US20030202504A1 (en) * | 2002-04-30 | 2003-10-30 | Avaya Technology Corp. | Method of implementing a VXML application into an IP device and an IP device having VXML capability |
US20040177317A1 (en) * | 2003-03-07 | 2004-09-09 | John Bradstreet | Closed caption navigation |
US20050044105A1 (en) * | 2003-08-19 | 2005-02-24 | Kelly Terrell | System and method for delivery of content-specific video clips |
US20050138183A1 (en) * | 2003-12-19 | 2005-06-23 | O'rourke Thomas | Computer telephone integration over a network |
US7571235B2 (en) * | 2003-12-19 | 2009-08-04 | Nortel Networks Limited | Computer telephone integration over a network |
US20050229048A1 (en) * | 2004-03-30 | 2005-10-13 | International Business Machines Corporation | Caching operational code in a voice markup interpreter |
US12015829B2 (en) * | 2004-04-07 | 2024-06-18 | Tivo Corporation | System and method for enhanced video selection |
US11936956B2 (en) | 2004-04-07 | 2024-03-19 | Tivo Corporation | System and method for enhanced video selection |
US20200059696A1 (en) * | 2004-04-07 | 2020-02-20 | Visible World, Llc | System And Method For Enhanced Video Selection |
WO2006004844A3 (en) * | 2004-06-30 | 2006-07-27 | Glenayre Electronics Inc | System and method for outbound calling from a distributed telecommunications platform |
US20090100473A1 (en) * | 2004-07-12 | 2009-04-16 | Alcatel Lucent | Personalized video entertainment system |
US9554182B2 (en) | 2004-07-12 | 2017-01-24 | Alcatel Lucent | Personalized video entertainment system |
US20110231764A1 (en) * | 2004-07-12 | 2011-09-22 | Alcatel Lucent | Personalized video entertainment system |
EP1617669A2 (en) * | 2004-07-12 | 2006-01-18 | Alcatel | Personalized video entertainment system |
US20060010467A1 (en) * | 2004-07-12 | 2006-01-12 | Alcatel | Personalized video entertainment system |
US7627824B2 (en) * | 2004-07-12 | 2009-12-01 | Alcatel Lucent | Personalized video entertainment system |
US20060218226A1 (en) * | 2005-03-23 | 2006-09-28 | Matsushita Electric Industrial Co., Ltd. | Automatic recording based on preferences |
US20070204285A1 (en) * | 2006-02-28 | 2007-08-30 | Gert Hercules Louw | Method for integrated media monitoring, purchase, and display |
US20070203945A1 (en) * | 2006-02-28 | 2007-08-30 | Gert Hercules Louw | Method for integrated media preview, analysis, purchase, and display |
US20080005266A1 (en) * | 2006-06-30 | 2008-01-03 | Gene Fein | Multimedia delivery system |
WO2008033454A2 (en) * | 2006-09-13 | 2008-03-20 | Video Monitoring Services Of America, L.P. | System and method for assessing marketing data |
US20090319365A1 (en) * | 2006-09-13 | 2009-12-24 | James Hallowell Waggoner | System and method for assessing marketing data |
US20080091513A1 (en) * | 2006-09-13 | 2008-04-17 | Video Monitoring Services Of America, L.P. | System and method for assessing marketing data |
WO2008033454A3 (en) * | 2006-09-13 | 2008-06-19 | Video Monitoring Services Of America, L.P. | System and method for assessing marketing data
US20080086754A1 (en) * | 2006-09-14 | 2008-04-10 | Sbc Knowledge Ventures, Lp | Peer to peer media distribution system and method |
US8589973B2 (en) | 2006-09-14 | 2013-11-19 | At&T Intellectual Property I, L.P. | Peer to peer media distribution system and method |
US20080177864A1 (en) * | 2007-01-22 | 2008-07-24 | Minborg Invent I Goeteborg Ab | Method and Apparatus For Obtaining Digital Objects In A Communication Network |
US20110153785A1 (en) * | 2007-01-22 | 2011-06-23 | Minborg Invent I Goteborg Ab | Method and Apparatus for Obtaining Digital Objects in a Communication Network
US20120226778A1 (en) * | 2007-01-22 | 2012-09-06 | Minborg Invent I Goteborg Ab | Method and Apparatus for Obtaining Digital Objects in a Communication Network
US7921221B2 (en) * | 2007-01-22 | 2011-04-05 | Minborg Invent I Goteborg Ab | Method and apparatus for obtaining digital objects in a communication network |
US20080207233A1 (en) * | 2007-02-28 | 2008-08-28 | Waytena William L | Method and System For Centralized Storage of Media and for Communication of Such Media Activated By Real-Time Messaging |
US20080270913A1 (en) * | 2007-04-26 | 2008-10-30 | Howard Singer | Methods, Media, and Devices for Providing a Package of Assets |
US8639826B2 (en) * | 2007-05-07 | 2014-01-28 | Fourthwall Media, Inc. | Providing personalized resources on-demand over a broadband network to consumer device applications |
US20080281974A1 (en) * | 2007-05-07 | 2008-11-13 | Biap, Inc. | Providing personalized resources on-demand over a broadband network to consumer device applications |
US8169916B1 (en) * | 2007-11-23 | 2012-05-01 | Media Melon, Inc. | Multi-platform video delivery configuration |
US8654684B1 (en) | 2007-11-23 | 2014-02-18 | Media Melon, Inc. | Multi-platform video delivery configuration |
US20090182712A1 (en) * | 2008-01-15 | 2009-07-16 | Kamal Faiza H | Systems and methods for rapid delivery of media content |
US8645822B2 (en) * | 2008-09-25 | 2014-02-04 | Microsoft Corporation | Multi-platform presentation system |
US20100077298A1 (en) * | 2008-09-25 | 2010-03-25 | Microsoft Corporation | Multi-platform presentation system |
US20110107215A1 (en) * | 2009-10-29 | 2011-05-05 | Rovi Technologies Corporation | Systems and methods for presenting media asset clips on a media equipment device |
US20110106536A1 (en) * | 2009-10-29 | 2011-05-05 | Rovi Technologies Corporation | Systems and methods for simulating dialog between a user and media equipment device |
CN105407384A (en) * | 2014-09-15 | 2016-03-16 | 上海天脉聚源文化传媒有限公司 | Method, device and system for identifying media player content by using two-dimensional code |
US10277953B2 (en) * | 2016-12-06 | 2019-04-30 | The Directv Group, Inc. | Search for content data in content |
CN109005444A (en) * | 2017-06-07 | 2018-12-14 | 纳宝株式会社 | Content providing server, content providing terminal and content providing method
US20180359537A1 (en) * | 2017-06-07 | 2018-12-13 | Naver Corporation | Content providing server, content providing terminal, and content providing method |
US11128927B2 (en) * | 2017-06-07 | 2021-09-21 | Naver Corporation | Content providing server, content providing terminal, and content providing method |
CN116233472A (en) * | 2023-05-08 | 2023-06-06 | 湖南马栏山视频先进技术研究院有限公司 | Audio and video synchronization method and cloud processing system |
Also Published As
Publication number | Publication date |
---|---|
US20030163815A1 (en) | 2003-08-28 |
US20030030752A1 (en) | 2003-02-13 |
US8151298B2 (en) | 2012-04-03 |
CA2380898A1 (en) | 2002-10-06 |
Similar Documents
Publication | Title
---|---|
US10462510B2 (en) | Method and apparatus for automatically converting source video into electronic mail messages
US20030120748A1 (en) | Alternate delivery mechanisms of customized video streaming content to devices not meant for receiving video
US8060906B2 (en) | Method and apparatus for interactively retrieving content related to previous query results
US20240007696A1 (en) | Systems and methods for using video metadata to associate advertisements therewith
US8589973B2 (en) | Peer to peer media distribution system and method
US9454775B2 (en) | Systems and methods for rendering content
US9595050B2 (en) | Method of disseminating advertisements using an embedded media player page
US7281260B2 (en) | Streaming media publishing system and method
US8453189B2 (en) | Method and system for retrieving information about television programs
US20030097301A1 (en) | Method for exchange information based on computer network
US20060053470A1 (en) | Management and non-linear presentation of augmented broadcasted or streamed multimedia content
WO2003014949A9 (en) | Method, system, and computer program product for producing and distributing enhanced media
CN101395627A (en) | Improved advertising with video ad creatives
Wales et al. | IPTV - The revolution is here
Begeja et al. | eClips: A new personalized multimedia delivery service
IE20030840U1 (en) | Multimedia management
Legal Events
Date | Code | Title | Description
---|---|---|---|
| AS | Assignment | Owner name: AT&T CORP., NEW YORK; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: BEGEJA, LEE; GIBBON, DAVID CRAWFORD; LIU, ZHU; AND OTHERS; REEL/FRAME: 013526/0142; SIGNING DATES FROM 20021111 TO 20021119
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION