
WO2013138370A1 - Interactive overlay object layer for online media - Google Patents


Info

Publication number
WO2013138370A1
WO2013138370A1 (PCT/US2013/030584)
Authority
WO
WIPO (PCT)
Prior art keywords
video
information
layer
product
timeline
Prior art date
Application number
PCT/US2013/030584
Other languages
French (fr)
Inventor
Teemu Airamo
Original Assignee
Mini Broadcasting
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mini Broadcasting
Publication of WO2013138370A1


Classifications

    • H ELECTRICITY
      • H04 ELECTRIC COMMUNICATION TECHNIQUE
        • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
            • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
              • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
                • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
                  • H04N21/4312 ... involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
                    • H04N21/4314 ... for fitting data in a restricted space on the screen, e.g. EPG data in a rectangular grid
                    • H04N21/4316 ... for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
              • H04N21/47 End-user applications
                • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
                  • H04N21/47205 ... for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
                  • H04N21/4722 ... for requesting additional data associated with the content
                    • H04N21/4725 ... using interactive regions of the image, e.g. hot spots
                • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
                  • H04N21/47815 Electronic shopping
            • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
              • H04N21/85 Assembly of content; Generation of multimedia applications
                • H04N21/854 Content authoring
                  • H04N21/8543 ... using a description language, e.g. Multimedia and Hypermedia information coding Expert Group [MHEG], eXtensible Markup Language [XML]
                  • H04N21/8545 ... for generating interactive applications
                • H04N21/858 Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot
                  • H04N21/8583 ... by creating hot-spots

Definitions

  • the technology disclosed relates to appending an interactive object layer to online media, such as an online video played in a tablet or PC media player, for providing information and actions, including data storage objects that are associated with the unique ID number of the playing file.
  • the object layer is structured as one or more virtual grids of cells. Each grid is associated with a portion of the video, referred to as a "dropzone." Actions are associated with individual cells in the grids.
  • the actions can be associated with portions of images being displayed by the player, such as images of products, places, people, etc.
  • the actions can further initiate activities such as accessing information about the underlying portion of the image, initiating a purchase transaction for an item, initiating a query for information, and the like.
  • the technology disclosed further relates to displaying a separate media layer object as an overlay of the video player, which indicates the presence of the object layer.
  • a web browser in a web page can display the media layer, its objects, and the video player.
  • the interactive object layer, its objects and the video player can be executed separately.
  • a user can interact with the object layer, e.g., by clicking on ones of the cells for which actions are defined. The actions are automatically executed at that point.
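The cell-to-action behavior described in the bullets above can be sketched in JavaScript. All names here (createObjectLayer, handleCellClick, the "r2c3" cell-id format) are illustrative assumptions, not taken from the disclosure:

```javascript
// Sketch of the object layer's cell-to-action mapping.
function createObjectLayer() {
  // cellId -> action descriptor
  return { actions: new Map() };
}

function defineAction(layer, cellId, action) {
  layer.actions.set(cellId, action);
}

// Called when the user clicks a cell; the action executes automatically
// only if one is defined for that cell.
function handleCellClick(layer, cellId, runAction) {
  const action = layer.actions.get(cellId);
  if (action) runAction(action);
  return Boolean(action);
}
```

Clicks on cells without a defined action simply fall through, which matches the bullet's "cells for which actions are defined" qualification.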
  • advertising for online media currently uses pre-roll, mid-roll, and post-roll ads, as well as banner ads that appear over a small portion of a video.
  • These types of advertisements are not directly and interactively coupled to the images of products, services, or other content visually appearing in the videos for which an advertiser wishes to advertise.
  • FIG. 001 illustrates one implementation of a server dataflow.
  • FIG. 002 shows one implementation of a detailed dataflow.
  • FIG. 003 illustrates one implementation of an individual producer scoring.
  • FIG. 004 is one implementation of object services.
  • FIG. 005 illustrates one implementation of producer services and overall object distribution.
  • FIG. 006 shows one implementation of dropzones.
  • FIG. 007 illustrates one implementation of a virtual grid.
  • FIG. 008a shows one implementation of XML buttons.
  • FIG. 008b illustrates one implementation of a dynamic user interface.
  • FIG. 009 shows one implementation of overlay services.
  • FIG. 009.1 is another implementation of overlay services.
  • FIG. 010 illustrates one implementation of an event detector.
  • FIG. 011 shows one implementation of a web services layer.
  • FIG. 012 illustrates one implementation of social services.
  • FIG. 013a shows one implementation of seller services.
  • FIG. 013b is another implementation of seller services.
  • FIG. 013c illustrates another implementation of seller services.
  • FIG. 014 shows one implementation of seller tracking services.
  • FIG. 015 illustrates one implementation of a seller account.
  • FIG. 016 shows one implementation of an end user application.
  • FIG. 17 is one implementation of a purchase action.
  • FIG. 18 illustrates one implementation of a location action.
  • FIG. 19 shows one implementation of a dropzone paused screen.
  • FIG. 20 illustrates one implementation of an indication of a dropzone in a video.
  • the technology disclosed relates to creating interactive videos for use in a computer-implemented system.
  • the described subject matter can be implemented in the context of any computer-implemented system, such as a software-based system, a database system, a multi-tenant environment, or the like.
  • the described subject matter can be implemented in connection with two or more separate and distinct computer-implemented systems that cooperate and communicate with one another.
  • One or more implementations may be implemented in numerous ways, including as a process, an apparatus, a system, a device, a method, a computer readable medium such as a computer readable storage medium containing computer readable instructions or computer program code, or as a computer program product comprising a computer usable medium having a computer readable program code embodied therein.
  • the technology disclosed can select media based on desired relationships, such as user preferences and identities of media files.
  • the technology disclosed can be implemented on portable devices, web browsers and other software capable of presenting media files.
  • Other implementations to match objects to videos and insert the objects into a video can include preference matching, on-the-fly insertion of predefined applications such as object-specific video players, and the like. Additional objects, such as commercial advertisements, can be introduced into the video to enable interactivity, improve user experience and provide value to video owners.
  • an interactive object layer icon can be displayed in the upper right-hand corner as an overlay to indicate that an interactive object layer action on a product is available.
  • the video can then be paused to view additional details and actions of the interactive object layer item.
  • the interactive object layer icon can disappear until a next interactive object layer item is recognized.
  • a list of available interactive object layer products linked to the video timecode can be displayed at the top of the screen.
  • five (5) images relevant to the timecode can be displayed at any time. The image sequence can move along with the timecode.
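Keeping the image strip in sync with the timecode, as described above, amounts to an interval filter plus a size cap. A minimal sketch, with an assumed {name, inPoint, outPoint} record shape:

```javascript
// Show at most five products whose time window covers the current
// playback position; the strip advances as the timecode advances.
function visibleProducts(products, timecode, max = 5) {
  return products
    .filter(p => timecode >= p.inPoint && timecode <= p.outPoint)
    .sort((a, b) => a.inPoint - b.inPoint)
    .slice(0, max);
}
```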
  • a list of all interactive object layer items can be made available, along with various actions from the entire video such as social media, purchase information and location services.
  • the screenshots displayed at the bottom of the screen can display close up shots of the products, enabling viewers to recognize where the product is from.
  • the video can be paused and an enlarged image of a new item can be displayed along with different actions such as purchase, social media, location or information. Actions can then be performed directly from the screen. Exiting the screen can automatically resume the video.
  • the full overlay can include a background image, image of the product, overlay text and/or a separate action panel that includes social media information, location maps, purchasing tools or other links.
  • the technology disclosed can use real time tracking of objects to enable personalization. Referring to FIG. 009, the technology disclosed can enable an interaction between a video player 025 running a video including images of various items of interest to an advertiser and an object 033.
  • the items of interest can include products (e.g., clothing, household items, industrial items, digital media), services (personal, professional, travel, etc.), or anything else that an advertiser or company may wish to sell or promote to a consumer.
  • the object 033 can be a representation that is external to the video image and video player 025. It can be directly and programmatically associated with the underlying image of the item of interest, allowing users to obtain additional information about the item of interest, make a query about the item, initiate a purchase of the item, promote the item in a social network, and/or engage in other behaviors with respect thereto.
  • the video player 025 can be embedded, enhanced or otherwise extended with other objects such as interactive actions 020.
  • object 033 and actions 020 are shown in an interactive object layer 034 that is external to the player 025.
  • layer 034 and its content can be displayed as an overlay on top of the video player 025.
  • the technology disclosed can use an event detector 026 to enable interaction between an object 033 and a video 025 matched on the fly.
  • the event detector 026 can identify an event associated with the video being displayed by the video player 025 or an object 033 presented by the video player 025.
  • the event detector 026 can also dispatch the event to an external object that is responsive to the event.
  • the technology disclosed can identify an event. It can provide synchronized or otherwise coordinated action between two components, such as the video player and the external object. The components can be executed separately and still be coordinated.
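The coordination pattern described above (separately executed components that still stay synchronized) can be sketched as a small event dispatcher. The class name, method names, and event strings below are illustrative assumptions:

```javascript
// Minimal event-detector sketch: the video player side and the external
// object side run separately and coordinate only through dispatched events.
class EventDetector {
  constructor() {
    this.listeners = new Map(); // eventType -> array of handlers
  }
  // An external object registers interest in an event type.
  on(eventType, handler) {
    if (!this.listeners.has(eventType)) this.listeners.set(eventType, []);
    this.listeners.get(eventType).push(handler);
  }
  // Dispatch the event to every registered external object.
  dispatch(eventType, payload) {
    for (const handler of this.listeners.get(eventType) || []) {
      handler(payload);
    }
  }
}
```

Because neither side calls the other directly, the player and the object layer can be served, loaded, and executed independently.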
  • FIG. 001 is an illustration of cloud-based storage and delivery of interactive advertisements in relation to video service and the user. It can include a content database server 007 that stores content such as videos. Edge servers 005 can be used to deliver the content to client devices (computers, mobile phones, set-top boxes, receivers) of end users 004. Object storage 003 stores data pertaining to objects in the interactive object layer information for the videos. Social services 002 include social networking systems. Video service 001 can be a video player, shown in its paused state.
  • user 004 can utilize a player 025 to execute a widget 042 retrieved from the Internet 006.
  • the user 004 can use the widget 042 when viewing the video in order to interact with the interactive object layer.
  • the player 025 can include a web browser 041 configured to execute the widget 042 as a browser extension.
  • the user 004 can view a web page served by a web browser 041.
  • the web page can include a video player 025, such as an HTML5 video player, that can be provided to a client 004 by the web browser 041.
  • the web page may include a video player configured to play a video preselected by a content database server 007 or to play any video selected by the user 004 from the database.
  • the content database server 007 can select objects from content storage 003 based on various parameters, rules and configurations such as input from the user 004, IP address of the user 004, preferences of the user 004 stored in profile database 058, etc.
  • FIG. 005 illustrates the user interface of a software tool that can be used by a producer.
  • Producer can be a user 004, who uses the software tool to access a video and create the interactive object layer for the video.
  • the video can be automatically segmented into a number of scenes 022 using a scene detection algorithm.
  • the scenes 022 can be arranged with respect to a time line with time code 021.
  • the producer can select any scene or group of scenes 022 to control the playback of the video, which appears in the preview window of the video player 025.
  • FIG. 006 is an illustration of "dropzones" created by marking the in and out points in relation to the timecode of a video. Any actions can then be embedded within these dropzones.
  • Each dropzone is associated with a virtual grid which overlays the video for the duration of the video between the dropzone's in and out points.
  • a producer can create one or more dropzones 035 by placing a mark-in point with button 018 and a mark-out point with button 019 on the timecode 021.
  • the dropzone 035 can be associated with the temporal segment of the video bounded by the mark-in 018 and mark-out 019 points.
  • the length of a dropzone 035 can be anything between zero and the length of the video. As shown in FIG. 006, dropzone 035 can extend over several scenes like scene 022 and a single scene 022 can be associated with several dropzones like dropzone 035 as well.
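Since dropzones are temporal segments that may overlap scenes and each other, looking up the active dropzones at a given timecode is a simple interval check. A sketch, with assumed field names:

```javascript
// Each dropzone is the temporal segment [markIn, markOut] on the
// timecode; several dropzones can be active at once.
function activeDropzones(dropzones, timecode) {
  return dropzones.filter(
    dz => timecode >= dz.markIn && timecode <= dz.markOut
  );
}
```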
  • FIG. 007 is a frontal and lateral illustration of the virtual grid of a dropzone overlaying a video service. Actions are associated with various ones of the cells of the virtual grid.
  • FIG. 007 also shows a virtual grid 023 with a plurality of cells (equivalently, "boxes") in a column/row arrangement.
  • the number of cells in the vertical (column) and horizontal (row) directions can be proportional to the aspect ratio (height in pixels to width in pixels) of the video.
  • if the video has a 3:4 aspect ratio in pixel terms, then it can have a 3:4 ratio of rows to columns.
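One way to realize the proportionality above is to fix the column count and derive the row count from the pixel aspect ratio. The cellsPerWidth parameter is an assumed knob for grid density, not part of the disclosure:

```javascript
// Pick grid dimensions whose rows:columns ratio matches the video's
// height:width pixel ratio.
function gridDimensions(widthPx, heightPx, cellsPerWidth = 16) {
  const cols = cellsPerWidth;
  const rows = Math.round(cellsPerWidth * (heightPx / widthPx));
  return { rows, cols };
}
```

For a 640x480 video this yields a 12x16 grid, i.e., a 3:4 rows-to-columns ratio.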
  • when grid 023 overlays an image from a video, it places the cells over various parts of the image, including over any particular item of interest in the video (e.g., a cellphone being held by a user).
  • the producer can then decide what actions to associate with the item of interest.
  • a dropzone 035 can be a data holder that stores information about one or more actions 020.
  • Each action 020 can be associated with one or more cells in the virtual grid.
  • the producer can select an action and place it on the grid, covering one or more of the cells.
  • the action can be graphically represented as a rectangle.
  • the producer can resize the rectangle after placing it on the grid, thereby covering the desired cells.
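Resizing the action's rectangle amounts to recomputing the set of covered cells. A sketch, where the {row, col, rowSpan, colSpan} shape and "rNcM" cell ids are assumed representations:

```javascript
// Expand an action's rectangle (in cell coordinates) into the set of
// grid cells it covers.
function coveredCells(rect) {
  const cells = [];
  for (let r = rect.row; r < rect.row + rect.rowSpan; r++) {
    for (let c = rect.col; c < rect.col + rect.colSpan; c++) {
      cells.push(`r${r}c${c}`);
    }
  }
  return cells;
}
```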
  • Appendix provides an example of the XML code for dropzones for a video.
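The appendix itself is not reproduced in this excerpt. Purely as a hypothetical sketch, a dropzone definition in XML might look like the following; every element and attribute name here is an assumption, not the actual appendix code:

```xml
<!-- Hypothetical dropzone XML sketch; names are illustrative only. -->
<dropzones video-id="12345">
  <dropzone in="00:01:12.000" out="00:01:27.500">
    <action type="purchase" row="3" col="5" row-span="2" col-span="2"
            href="https://example.com/product/cellphone"/>
    <action type="location" row="8" col="1" row-span="1" col-span="1"
            href="https://maps.example.com/?q=store"/>
  </dropzone>
</dropzones>
```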
  • FIG. 005 is also an illustration of detailed structure of producer services enabling embedding actions onto a layer on a video, which functions on a website, smartphone, tablet or any other device that has a connection to the world wide web. Shown here is a preview window with a grid 023 in a video player 025, along with scenes 022 of the video, and action buttons 020.
  • the technology disclosed can provide a variety of predefined types of actions 020.
  • the "location" action 020 can be associated with a hyperlink to an online mapping service, which can then display a map of some location of interest, as defined by the producer.
  • the producer can place the location action 020 into the cell covering the cellphone to define a hyperlink to a location map of a store selling the cellphone.
  • the location action 020 can be triggered to cause a browser to display the map.
  • FIG. 18 shows the result of the user selecting a location action when viewing a video. This action can retrieve a location map and related information for the selected location.
  • a music action 020 can be associated with a particular audio recording file, which can be played in response to the user selection.
  • the producer can place a music action 020 into the cell covering the cellphone to define a hyperlink to an audio recording (musical or otherwise) about the cellphone.
  • when the end user viewing the video clicks on the cellphone in the video, the browser and underlying audio player can play the audio recording back.
  • a social action 020 can provide user-editable links that create postings on social media networks such as Facebook, Twitter, Pheed, Google+, or on any other network related to the item of interest.
  • the producer can define specific content for the posting (e.g., a short textual/graphical/video message) as well as allow the end user to further augment the posting.
  • the producer can place a social action 020 into the cell covering the cellphone.
  • the social action 020 can be triggered on one or more of the social networks, including posting about the cellphone, like "I'm buying this cellphone...will be chatting with you soon!"
  • a purchase action 020 can provide a link to an ecommerce website where the item of interest can be purchased.
  • the action 020 can also retrieve metadata about the item, such as current availability, price, shipping costs, etc.
  • the producer can define the particular workflow for the transaction as well.
  • the producer can place a purchase action 020 into the cell with the cellphone to define a hyperlink to an ecommerce site selling the cellphone.
  • the browser can access the ecommerce site and bring up a product page from which the user can purchase the cellphone.
  • all of the purchase transaction steps can be embedded into the playback experience, so that the user never has to leave the video player.
  • a search action 020 can provide a link to a search engine with a predefined query (e.g., one or more keywords, browser context) established by the producer, which can be dynamically modified with user-specific information (e.g., demographics, web or purchase history) to make a user-specific, customized query, which is then transmitted to a search engine.
  • other implementations may not have the same actions 020 as those listed above and/or can have other/different actions 020 instead of, or in addition to, those listed above, implemented using JavaScript, HTML5, or any other language.
  • when an action 020 is associated with a dropzone 035, it can be added to a series of action previews 024, displayed next to the video player 025 window for the producer to see.
  • the producer can scroll through these to see which actions have been defined, as well as sort, filter, and search them.
  • the producer can define for each action a monetary value associated with the end user selecting the action.
  • the monetary value represents a type of conversion value. For example, a user selecting a location action 020 can have a value of $1.00, while a user selecting a purchase action can have a value of $100.
  • the monetary value can be predefined, or it can be a variable and connected to an external system that determines the value at run time.
  • the value can be value to the producer, to the seller of the product, or to any other party; multiple different monetary values can be assigned as well, for example, different ones for different parties.
  • the producer can also access a summary page that lists the number of each type of action, along with details thereof. For actions with monetary value, one or more summary total monetary values can be displayed, for example by action type.
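The per-type totals on the summary page described above reduce to grouping selected actions by type and summing their values. A sketch, with an assumed {type, value} record shape:

```javascript
// Total the conversion values of selected actions by action type, as a
// summary page might display them.
function totalsByType(selectedActions) {
  const totals = {};
  for (const action of selectedActions) {
    totals[action.type] = (totals[action.type] || 0) + action.value;
  }
  return totals;
}
```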
  • a predefined dropzone can be defined as a background or default layer for the entire video, which provides actions such as general information about the producer, the video, the advertiser, or the like.
  • a specific example can include a video created by a retail store showing its various products; the predefined dropzone can include actions to provide general information about the retail store, such as location actions and information actions (store hours and telephone numbers).
  • the producer can then combine such a predefined dropzone, with dropzones for individual scenes in the video.
  • a scene in the video can show a group of products (e.g., a furnished living room with products such as a couch, table, chairs, lamp, and rug) with individual purchase actions associated with the cells overlaying various ones of the products.
  • FIG. 011 is an illustration of the manner by which a web page containing an enabled widget ("Tapvert Widget") can be used to display video content.
  • once the producer has defined all of the dropzones for a video, they are compiled into a data layer object, which is then associated with the video file. When the video is served to a browser, the associated data layer object containing the dropzone information is served as well.
  • the video information can be handled by the video player in the normal manner, while the data layer object can be processed by the widget 042, which can be a browser 041 add-on.
  • Appendix B provides an example of the services interface that can be registered by the widget 042.
  • Appendix C provides an example of how the widget 042 can be mounted by the browser 041.
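A mounted widget of this kind would consume the served data layer object and drive the overlay from playback time. The sketch below is an assumption about the shape of that interaction; the playerApi interface (onTimeUpdate, setOverlayVisible) and field names are hypothetical, and the actual interfaces are the ones in Appendices B and C:

```javascript
// Consume the data layer object served alongside the video and toggle
// the overlay indicator whenever a dropzone is active.
function mountWidget(dataLayer, playerApi) {
  const zones = dataLayer.dropzones || [];
  playerApi.onTimeUpdate(timecode => {
    const active = zones.filter(
      z => timecode >= z.markIn && timecode <= z.markOut
    );
    // Show the overlay indicator only while a dropzone is active.
    playerApi.setOverlayVisible(active.length > 0);
  });
  return { zoneCount: zones.length };
}
```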
  • the video file as obtained from a video hosting service can include additional content, such as objects selected from object storage 003.
  • the object storage 003 can retain objects, such as interactive objects, displayable objects or the like.
  • Some of the objects 033 are associated with a video 025, such as, for example, an image, action 020 or a button 061, in accordance with the predefined virtual grid 023 of the video 025, which is overlaid onto the video. Examples of grids at different scales are shown in FIG. 007.
  • Some of the objects can be displayed outside a video 025 or companion to the video 025.
  • an object can display additional information in response to an event associated with a video.
  • an object can display additional information in response to an interaction of the user 004 with the video or with an object 033 displayed in the video 025, such as an object dynamically inserted to the video as an overlay on the video 034.
  • FIG. 12 shows a connection between embedded social services and interactive object layer API ("Tapvert API").
  • FIG. 009 also illustrates detection of objects in the object layer, while the video is being played back, followed by presentation of the objects in an overlay along with embedded actions and information.
  • FIG. 009.1 shows the virtual grid within the context of the video player window.
  • the object 033 is displayed and its content is a picture of overlay 034, which here shows the virtual grid.
  • the overlay 034 can be displayed as part of an object composition.
  • video player 025 displays a video of various objects.
  • a user such as 004 of FIG. 001, can utilize a device, such as a computer, touchpad, a touch screen or the like, to interact with the video, such as by pointing to an object 033.
  • the user 004 can interact with an object 033, such as a jacket.
  • the user can click on the jacket, hover above it, or otherwise interact with the jacket.
  • the jacket can be an object 033, such as a dynamically inserted object; it can be associated with a hot spot, or otherwise defined for the interaction disclosed in "Overlay services."
  • a hot spot is an area within predefined grid 023 of a media file, such as a video, that enables an interaction by the user 004.
  • a hot spot can be defined as an area of the jacket 033 in the video 025, and upon a user interaction, such as touching the hot spot, event detector 026 can trigger an event.
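Triggering on a hot spot requires mapping the click or touch position inside the player to a grid cell and checking it against the hot-spot cells. A sketch, where the coordinate handling and "rNcM" cell ids are assumptions:

```javascript
// Map a click inside the player to a grid cell and test it against the
// dropzone's hot cells.
function hitTest(clickX, clickY, width, height, rows, cols, hotCells) {
  const col = Math.min(cols - 1, Math.floor((clickX / width) * cols));
  const row = Math.min(rows - 1, Math.floor((clickY / height) * rows));
  const cellId = `r${row}c${col}`;
  return { cellId, hit: hotCells.has(cellId) };
}
```

A hit result would then be handed to the event detector 026 to dispatch the corresponding event.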
  • the video player 025 can pause the video being displayed in response to an interaction of the user 004 via device. In some implementations, the video player 025 can resume the video upon predefined action. In other implementations, the video player 025 can continue to show the video while the object composition is displayed.
  • an animation can be associated with the object 033, without any additional animation related to the video 025, such as the action of FIG. 009.
  • the object 033 which can be referred to as a companion object, or a video object, can be described by identification and layout parameters, such as position, size and the like.
  • the object 033 can be responsive to events associated with the identification. For example, in some cases multiple objects similar to the object 033 can be presented, each associated with a different identification. Some events can induce a functionality of some of the multiple objects.
  • the event can include an identification, which can be used to determine which objects the event can be dispatched or which objects can be responsive to the event.
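The identification-based dispatch described above can be sketched as follows. This is a hypothetical illustration, not the specification's implementation; the function and field names (`dispatchEvent`, `targetId`, `onEvent`) are assumptions:

```javascript
// Hypothetical sketch: dispatch an event only to the overlay objects whose
// identification matches the event's target identification.
function dispatchEvent(event, objects) {
  return objects
    .filter(obj => obj.id === event.targetId) // only matching objects respond
    .map(obj => obj.onEvent(event));
}

const overlayObjects = [
  { id: 'jacket-033', onEvent: e => `jacket handles ${e.type}` },
  { id: 'shoes-040', onEvent: e => `shoes handle ${e.type}` },
];

const results = dispatchEvent({ type: 'click', targetId: 'jacket-033' }, overlayObjects);
```

In this manner, several similar objects can coexist in the layer while an event induces functionality in only some of them.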
  • FIG. 009 also shows a block diagram of an interactive object layer system.
  • a video service provided by an external operator, such as YouTube, Vimeo or the like, can be configured to serve a video to a user, such as 004 of FIG. 001.
  • the interactive object layer 034, inserted on top of the video 025 service, can include an event detector 026, a coordinating module 039 or an Input/Output (I/O) module 038.
  • the media layer can run via an Input/Output (I/O) module 038.
  • the input/output (I/O) module 038 can provide an interface to object storage 003.
  • the object storage 003 can provide an object composition from content inventory 009.
  • the object storage 003 can be a database, external storage server or the like.
  • the object 033 can be a displayable object, such as the jacket 033 of FIG. 009.
  • the object 033 can be an interactive object.
  • the object 033 can be associated with social interactions 020, purchase actions 020, a set of external actions 020, location services 020 or similar videos having a common characteristic or the like.
  • the input/output (I/O) module 038 can provide an interface to a web browser 041.
  • the input/output (I/O) module 038 can enable serving a web page to a user 004 via the internet 006 illustrated in FIG. 011.
  • the event detector 026 can be configured to identify an event associated with the video player 025.
  • the event can be an interactive event, such as social network action 020, location service 020 or purchase action 020, as initiated by an end user viewing the video, and clicking on the cell overlaying the video which is associated with the event.
  • the event can be a tracking activity of user 004 within the interactive object layer.
  • the event can be a keyword event, such as a keyword associated with a grid 023 and video being played.
  • the keyword can be dynamically determined.
  • the keyword can then be passed to a search action 020.
  • the events can utilize other characteristics of the metadata associated with the grid 023, user inputs and the like, which are then passed to the corresponding actions 020.
  • the coordinating module 039 can be configured to coordinate action of two or more elements.
  • the coordinating module 039 can coordinate action of an object, such as jacket 033 of FIG. 009, and the video being played by the video player 025.
  • the coordinating module 039 can synchronize the object such that the object 033 can be assigned an action in accordance with associated user preferences.
  • the widget can determine which dropzone(s) are associated with the current playback time using the video time codes. The widget can then retrieve the grid and cell information for such dropzones, and then render the interactive object layer over the video player 025. In this manner, the player can be completely independent of, and agnostic to, the object layer. As the video continues to play, the widget can update the object layer with the corresponding dropzones depending on the current time code for the playback.
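The time-code lookup described above can be sketched as follows. This is an illustrative assumption about the data shape (`inTime`/`outTime` fields are not taken from the specification):

```javascript
// Hypothetical sketch: find which dropzones cover the current playback
// time, using each dropzone's in/out time codes on the video timeline.
const dropzones = [
  { id: 'intro',  inTime: 0, outTime: 10 },
  { id: 'jacket', inTime: 8, outTime: 25 },
];

function activeDropzones(timeCode, zones) {
  // A dropzone is active while the playback time falls inside its window.
  return zones.filter(z => timeCode >= z.inTime && timeCode < z.outTime);
}
```

Calling `activeDropzones` on every playback-time update keeps the overlay synchronized with the video without the player knowing about the layer.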
  • buttons or other graphic indicia can be displayed to indicate the various actions 020 defined for the current dropzone(s). These indicia can be shown at their assigned grid locations. In some implementations, the grid can be shown as well.
  • a key can be assigned that causes the indicia to be shown over the playing video. This results in the user not needing to pause the video to see the indicia of the actions.
  • when a user clicks on a playing or paused video, the click can be passed by the browser to the event detector 026 to detect an event in the dropzone 035.
  • a predefined action can be coordinated in dropzone 035.
  • a coordinating module 039 of FIG. 011 can coordinate the action. The coordination can impose an ordering among elements of the predefined action, between elements from different actions, between elements of actions and the grid 023 of the video 025, and the like.
  • Actions 020 can be links to product, social action, location, music store integration, etc.
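A click-to-cell-to-action flow consistent with the grid 023 description can be sketched as follows. The grid dimensions and the cell-to-action bindings here are illustrative assumptions:

```javascript
// Hypothetical sketch: map a click inside the player window to a cell of
// the virtual grid, then look up the action 020 bound to that cell.
function cellAt(x, y, playerWidth, playerHeight, rows, cols) {
  return {
    row: Math.floor((y / playerHeight) * rows),
    col: Math.floor((x / playerWidth) * cols),
  };
}

// Illustrative bindings: cell "row,col" -> action name.
const cellActions = { '1,2': 'purchase', '0,0': 'social-share' };

function actionFor(cell) {
  return cellActions[`${cell.row},${cell.col}`] || null;
}
```

Cells with no binding return `null`, so clicks outside defined cells fall through without triggering anything.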
  • Content sequencing web services 016, 010 can support content refreshing for clients that cache content, such as browser-based players.
  • Content database server 007 can be used to catalog the available content.
  • Profile database 058 and content server 003 can be used to serve content in the form of digital media files.
  • Content sequencer 010 can decide which objects should be sent across the interactive object layer to be displayed. Collectively, the web services are available across the Internet 006.
  • FIG. 008b is an illustration of dynamic user interface generation and user interface seeding process.
  • the technology disclosed can include generating XML based user interfaces that perform an auto-run function at stage 081.
  • the auto-run function can detect whether the XML based user interface is to be used on portable devices or distributed as a web widget 042.
  • FIG. 002 is an illustration of how content gets delivered to web services and how web services function between content database server, web services and profile database.
  • a web browser 041 including a widget 042 (e.g., a browser extension) can access video content from the content database 007, user profiles from the profile database 058, and interactive object information from the content storage 003.
  • Web browser 041 and web services 083 can serve HTML, JavaScript, Images, HTML5 and other objects utilized by users 004.
  • Profile web services 016 and a profile database 058 can store end-user specific profile information including personal customizations, content preferences, and history of recent end-user actions and other events.
  • a Profile database 058 can store end-user preferences, producer preferences, and action history. In some implementations, this data can be expressed as one or more XML documents or trees (nodes).
  • the profile database 058 can also store object definitions.
  • producer customization information such as rate score 052 and preferences, action history, and/or other user customization criteria can be stored in a common user 004 profile in the profile database 058.
  • user customization information can be stored as one or more separate user profiles.
  • user profiles can be synchronized between two or more players used by users 004, facilitating user profile updating and synchronization across the multiple players used by a particular end-user.
  • user profiles can be synchronized between a web based player 025 and a portable player such as a tablet, so that the user profile information on all synchronized players is updated to the most recent profile.
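One simple way to realize this synchronization is last-writer-wins, sketched below under the assumption that each profile copy carries an update timestamp (the `updatedAt` field is illustrative, not from the specification):

```javascript
// Hypothetical sketch: synchronize profiles between a web player and a
// portable player by propagating the most recently updated copy to both.
function syncProfiles(webProfile, tabletProfile) {
  const latest =
    webProfile.updatedAt >= tabletProfile.updatedAt ? webProfile : tabletProfile;
  return { web: { ...latest }, tablet: { ...latest } };
}

const synced = syncProfiles(
  { shoeSize: 42, updatedAt: 100 },
  { shoeSize: 43, updatedAt: 250 },
);
```

After the call, both players carry the tablet's newer profile; more elaborate schemes could merge field by field.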
  • FIG. 003 is an illustration of scoring based on sales of products through individual producers in relation to all producers selling any particular product.
  • producers can be scored based on the value of their ability to effectively promote the sale of products related to the content of a video.
  • scoring can include scanning the user preferences with execution proceeding to profile database 058, where the actions and purchase history are examined.
  • a producer score can be calculated at stage 055. It can also be compared to an average score 057 and adjusted accordingly. The new score can then be returned at stage 056 to profile database 058, where the process can be repeated.
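The score-versus-average comparison described above can be sketched as follows. The formula is an illustrative assumption; the specification does not fix a particular scoring function:

```javascript
// Hypothetical sketch: score a producer by its sales of a product relative
// to the average sales across all producers of that product (FIG. 003).
function producerScore(producerSales, allProducersSales) {
  const total = allProducersSales.reduce((sum, s) => sum + s, 0);
  const average = total / allProducersSales.length;
  return average === 0 ? 0 : producerSales / average;
}
```

A score above 1 means the producer sells the product more effectively than the average producer; the adjusted score would then be written back to the profile database 058.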
  • FIG. 008a illustrates an implementation of a state diagram showing some of the states used in an implementation of a dynamic user interface.
  • FIG. 010 shows an associated event-processing table that detects interactions with objects in the grid.
  • the application can register event handlers and other events required by the object grid 023 after UI transfers to an initialization phase between states 066 and 067.
  • the handler can be dispatched at state 068.
  • the event handler can create a new thread to handle the event or place the event in the queue of an existing thread.
  • FIG. 011 further illustrates state and event processing of one implementation of the invention.
  • Interactive object layer user details can be tracked as the customer uses interactive object layer services.
  • details regarding purchases, likes, shares on social media, information requests and location details can be saved in the interactive object layer 'memory'.
  • the technology disclosed can also provide a learning platform in which the more the users interact with interactive object layer, the more details can be saved and more personal services can be provided.
  • interactive object layer can learn the user's shoe or dress size, location, and brand preferences.
  • Other details such as age, sex and location can also be drawn from person-related data sources such as access-controlled APIs, the public Internet and social networking sites.
  • FIG. 013a is an illustration of actions relating to addition of objects to interactive object layer database, along with information regarding previous objects.
  • FIG. 013b shows details required to enter a new object into the database by the seller through web services.
  • FIG. 013c is an illustration of tracking services available for sellers.
  • FIG. 014 is also an illustration of tracking services available to the seller.
  • FIG. 015 illustrates actions related to account details of any particular user of Interactive object layer Seller services.
  • FIG. 016 shows the home screen for a user application for watching videos, with buttons for selecting TV shows, movies, music videos, and advertisements.
  • FIG. 17 shows the purchase action, after the user has paused the playback of a video, and selected an action button associated with a purchase.
  • the user can be shown an image of the product to purchase, along with its price, and optionally its price as compared to the recommended retail price (RRP). The user can login and complete the purchase transaction.
  • Seller Login is separate from regular Interactive object layer Login.
  • the seller section is for brands selling their products via Interactive object layer.
  • the seller can request approval to become 'interactive object layer seller', and can be given unique passwords to access the product section.
  • the seller can then add products to the interactive object layer database.
  • An image of the product can be in a file format such as PNG, SVG, MP4, M4V, BMP, TIFF, PSD, GIF, TGA, AVI, MOV, or any other image format that enables an adjustable separate alpha channel.
  • Product information can be added as text overlay.
  • Products can be categorized by brand, and sub-categorized by model, which in turn can be sub-categorized by size or color.
  • the recommended retail price (RRP) for the product can be inserted.
  • the seller can then have a choice to apply Interactive object layer discount of 10%, 20% or 'other%', which can be automatically calculated from the RRP.
  • the seller can then add the number of products in 'interactive object layer seller', which the seller can guarantee to be available in stock.
  • the seller can then monitor the number of products sold live, and add more items to the 'interactive object layer stock', provided that the product is ready and available in the brand's stock storage.
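The automatic discount calculation from the RRP mentioned above is simple arithmetic, sketched here for illustration:

```javascript
// Hypothetical sketch: compute the discounted price automatically from the
// recommended retail price (RRP) and the chosen discount percentage
// (10%, 20%, or an 'other %' value entered by the seller).
function discountedPrice(rrp, discountPercent) {
  // Round to two decimals for currency display.
  return Math.round(rrp * (1 - discountPercent / 100) * 100) / 100;
}
```

For example, a 10% interactive object layer discount on an RRP of 50 yields a price of 45.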
  • FIG. 19 shows what is seen by the user after pausing playback, whereby an image/button can be shown for purchase action (the T-shirt), as well as buttons for social actions (buttons with "F” and "T”).
  • FIG. 20 illustrates how an icon (here the stylized "M”) can be shown to signal to the viewer that a dropzone is present in the current portion of the video.
  • Studio Extra
  • the interactive object layer can be extended for use in large production companies, studios, and commercial agencies.
  • the extended versions can allow addition of pre-rolls and other video features to the original content.
  • the technology disclosed can be used to promote television shows on social media.
  • a producer can create dropzones in the shows' videos.
  • the dropzones can be created in the video timeline to specify interesting events such as funny jokes, etc. that are likely to generate greater user response.
  • the technology disclosed can invoke the appropriate social action and further create a posting on social networking sites like Facebook, Twitter, etc. that includes a caption along with an anchor point or timeline marker to at least the beginning point of the video segments defined by the dropzones.
  • Other users connected to the first user on the social networking sites can select these postings and conveniently view the linked video segments.
  • Producers of television shows can provision real-time information about the shows' content for viewers to expose while they are watching the shows, using the technology disclosed. For instance, producers of travel shows can create dropzones to provide viewers with content including information, reviews and photos about the destinations shown in the shows, trip planning tips, etc. that they can expose while viewing the travel show videos.
  • producers of cooking shows can create dropzones that define onscreen user playback controls and share with viewers recipes, scaling of recipes, and information about stores that carry the ingredients of the recipes.
  • the dropzones can define on screen controls timed to match the instant events in the cooking shows. For example, at the bread baking segments of the cooking shows, instructions on how to bake the bread can appear on the screen. After listing ingredients, a pause or resume control can give a viewer an opportunity to assemble the ingredients before proceeding.
  • broadcasters of sporting events can provide viewers real-time information such as scores, statistics, player and team information, etc. while they are watching the events. Dynamic controls can be defined that allow viewers to choose the type of information they see. In some implementations, producers or viewers can embed markers like "penalty moment" in the event videos and facilitate dynamic replay.
  • broadcasters of dance shows can use the technology disclosed to help viewers learn the dance sequences shown in the shows in real time, i.e., while they are watching the shows.
  • dropzones can be created to include information related to the dancing sequences including number and types of steps, feet positions, song, etc.
  • the technology disclosed can be used to efficiently present education videos.
  • different events within a video can be categorized as compact video segments that can be independently accessed and easily shared on social media.
  • the video segments can be appended with searchable tags to provide video indices to viewers.
  • the technology disclosed may be practiced as a method or system adapted to practice the method.
  • the technology disclosed can include a media player 025 that is configured to display an embedded widget 042.
  • the media player 025 can have a display layout, an event detector 026 configured to identify an event associated with the widget 042 displayed by the media player 025, and an object 033.
  • the object 033 can be configured to be displayed by the online player 025 and is also responsive to the event identified by the event detector 026.
  • the media player 025 and object 033 can be executed by an online XML file.
  • object 033 can be configured to perform a predefined action in response to the event associated to grid 023.
  • the media player 025 can include a second object that is configured to perform a second predefined action in response to the event associated to grid 023.
  • an event can be selected from the group consisting of:
  • interactions with a second object, tracking events of entities in the video, placement events of entities in the video, frame events, keyword events, and ambient events of the video.
  • the event can include frame object identification and target object identification.
  • the target object identification can be associated with object 033.
  • the technology disclosed can generate media layer objects 033 by maintaining a plurality of templates for different displays, including background images and regions where information specific to an entity can be inserted for display.
  • objects 033 can be incorporated into grid 023 and one of the plurality of templates can be automatically selected.
  • the information about the entity can be automatically inserted into one or more regions of the grid to automatically create the display ad for later online display.
  • a template can be a background image on a specified area of the grid 023 for object 033. Template information identifying what information is to be provided into the specified areas of the grid 023 can be stored when an object layer is created.
  • FIG. 008a is an illustration of XML buttons and their scaling for various screen sizes based on x and y coordinates.
  • the object 033 can be defined in extensible markup language (XML) as illustrated in FIG. 008a.
  • the technology disclosed can provide the information associated with each template region within the grid 023. This information can identify the information provided in the region.
  • the technology disclosed can automatically provide graphical images in the interactive object layer by scaling the size of the graphical images.
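The automatic scaling of graphical images for the layer can be sketched as follows; the aspect-ratio-preserving fit is an illustrative assumption about how scaling is done:

```javascript
// Hypothetical sketch: scale a graphical image to fit a grid cell while
// preserving its aspect ratio.
function scaleToFit(imgWidth, imgHeight, cellWidth, cellHeight) {
  // Use the tighter of the two constraints so the image never overflows.
  const scale = Math.min(cellWidth / imgWidth, cellHeight / imgHeight);
  return { width: imgWidth * scale, height: imgHeight * scale };
}
```

A 200×100 image placed into a 100×100 cell is scaled to 100×50, leaving the aspect ratio intact.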
  • a web browser 041 can display the media player 025 and the object 033.
  • a video-associated object 033 can be utilized in an embedded online environment by displaying a widget 042 in a display layout by a media player 025, identifying an event associated with the data stream, and displaying an object 033 in the predefined grid 023 in response to the event.
  • a second object can be displayed in the predefined grid 023 in response to the event by performing a predefined action that includes: playing a media content, displaying an object, displaying an animated image, and displaying a text or predefined action 020.
  • the identity of a first media file can be recognized from a plurality of media files. Furthermore, a user input indicative of a desired relationship measure and a desire to select a second media file of plurality can be received by accessing user preference data that indicates a relationship measure among the media files. The second media file can then be selected in accordance with desired user preference measure, user input, user preference data, and identity of said first media file.
  • the media device can include a portable media player, a personal computer-based media player, online enabled DVR streaming appliance, a personal computer-based media player, and/or smartphone or a cellular telephone.
  • the second media file can be streamed on a media device.
  • the identity of a second media file can be recognized from a plurality of media files.
  • a user input indicative of a desired relationship measure and a desire to select a third media file of plurality can be received by accessing user preference data that indicates a relationship measure among the media files.
  • the third media file can then be selected in accordance with desired user preference measure, user input, user preference data, and identity of said first media file.
  • the identity of the first media file can include a brand name associated with the media file and product details associated with the media file.
  • the user preference data can be based upon a statistical measure of co-occurrence of media files in a particular set of media files.
  • the set of media files can be a saved list or a dynamic media library in the content database server 007.
  • the user preference data can be established by analyzing a play history 013 of users 004 of media files.
  • the user preference data can be established by analyzing user preferences in profile database 058 constructed by users 004 of media files.
  • the user preference data can be based on observed user 004 behavior.
  • the user preference measure can correlate to a degree of similarity between the first media file and the second media file to present more relevant media files to the user 004.
  • the user preference measure can correlate to a degree of similarity between the first media file and other media files.
  • the user preference data can be established by analyzing a statistical measure of co-occurrence of said plurality of media files in published objects.
  • the user interface can be represented using geometric vectors (as FIG. 008a illustrates), and the UI template generator 082 can determine user preference.
  • the UI template can be generated based on the distance between vectors.
  • the user preference data can be stored as a UI template with the media files associated to vertexes in a graph and edges representing relationships between the media files.
  • interactive content can be generated by: selecting a media file to be displayed online as object overlay 033 and defining one or more items over rendered grid 023 or background image for holding text, images or embedded web services 083.
  • the technology disclosed can include receiving information about an entity and incorporating it into a display ad. It can further include automatically selecting a background image based on a category associated with the entity, and automatically inserting the information about the entity into the one or more bounding boxes to create a display ad that is later displayed online or on another internet enabled device.
  • In one implementation, the technology disclosed can further include electronically receiving the information to be incorporated into a display ad such that a display ad with a background image and portions of text and/or overlaid images is automatically created without human intervention.
  • a textual description of each bounding box can be stored in a graphical file format that allows embedded comments.
  • an image of the product can be in a file format including PNG, SVG, MP4, M4V, BMP, TIFF, PSD, GIF, TGA, AVI, MOV, or any other image format that enables an adjustable separate alpha channel.
  • the technology disclosed can determine whether a first display ad can be shown on a screen with another display ad having the same background image as the first display ad.
  • the technology disclosed can regenerate in an automated manner a different background image before displaying that display ad.
  • the technology disclosed can automatically determine whether another display ad for the same category and in a similar geographic area has the same background image as the selected background image prior to creating the display ad, and can further select a different background image if another display ad for the same category has the same background image as the selected background image in the same geographic area.
  • the technology disclosed can automatically provide a best fit process for incorporating text into bounding boxes by iteratively adjusting the size of the text.
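The iterative best-fit process can be sketched as follows. The width estimate (character count times font size times an average glyph factor) is a simplifying assumption; a real implementation would measure rendered text:

```javascript
// Hypothetical sketch of the iterative best-fit: shrink the font size until
// the estimated text width fits the bounding box width.
function bestFitSize(text, boxWidth, startSize, minSize) {
  const glyphFactor = 0.6; // assumed average glyph width relative to font size
  let size = startSize;
  while (size > minSize && text.length * size * glyphFactor > boxWidth) {
    size -= 1; // iteratively adjust the size downward
  }
  return size;
}
```

Text that already fits keeps its starting size; otherwise the size decreases until the text fits or the minimum readable size is reached.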
  • the interactive object layer can have pre-defined size and location and can be associated with a background image.
  • the technology disclosed can insert in an interactive object layer a graphical image of a coupon, or any other image.
  • the technology disclosed can automatically regenerate interactive object layer by receiving information about an object 033 in response to a query and automatically compare the received information about the object 033 in object storage 003 to information in a file identifying what information about the object 033 is to be displayed in video canvas 032. If there is a difference between the displayed information and the received information from content preference XML 008, the technology disclosed can automatically replace the displayed information with the received information, and further automatically regenerate the media layer object 033 to include the received information. It can then store the media layer object 033 in object storage 003.
  • the technology disclosed can update information in a database about an entity with the received information without also regenerating media layer object 033 for that entity if it is determined that the difference in the information is not in information that is displayed in media layer object 033.
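The displayed-field comparison behind this regeneration decision can be sketched as follows; the field names are illustrative assumptions:

```javascript
// Hypothetical sketch: regenerate the media layer object only when a field
// that is actually displayed differs between stored and received data.
function needsRegeneration(displayed, received, displayedFields) {
  return displayedFields.some(field => displayed[field] !== received[field]);
}
```

A change to a non-displayed field (e.g., internal stock count when only the price is shown) updates the database without triggering regeneration of the object 033.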
  • the technology disclosed can prevent multiple object layers with the same background image from being displayed.
  • the technology disclosed can compare a changed data field to tagged information indicating information that is displayed in the interactive object layer.
  • the technology disclosed can include tagged information as extensible markup language (XML) embedded in a graphical file associated with the interactive object layer.
  • a method of augmenting a video with product information for content seen on the video includes augmenting a video file with text encoded layers of content objects with text encoded supplemental information or external links that are tied to anchor points on a timeline of video display and that have a duration or end time on the timeline.
  • the method further includes that the augmented file provides information that a video player that recognizes the augmentation can use to display the supplemental information on request and to retrieve information from the external links on request. It further includes that the user can control flow of the video and access to information available via the content objects.
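A text-encoded augmentation record of the kind described above can be sketched as follows. The field names and URL are illustrative assumptions, not the specification's encoding:

```javascript
// Hypothetical sketch of a text-encoded augmentation: content objects tied
// to anchor points on the video timeline, each with a duration.
const augmentation = {
  videoId: 'example-video',
  objects: [
    {
      anchor: 12.5,   // seconds into the timeline
      duration: 8.0,  // how long the object stays active
      info: 'Leather jacket, product details',
      externalLink: 'https://example.com/jacket', // hypothetical link
    },
  ],
};

// A player that recognizes the augmentation can test whether an object is
// active at a given time code before displaying its supplemental information.
function isActive(obj, timeCode) {
  return timeCode >= obj.anchor && timeCode < obj.anchor + obj.duration;
}
```

The same structure could equally be carried in a header, a file package component, or a separate file, as noted below for the video file case.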
  • This method and other implementations of the technology disclosed can each optionally include one or more of the following features and/or features described in connection with additional methods disclosed.
  • the combinations of features disclosed in this application are not individually enumerated and are not repeated with each base set of features. The reader will understand how features identified in this section can readily be combined with sets of base features identified as implementations.
  • the method further includes one or more of a location layer, music layer, social layer, purchase layer, and search layer.
  • the location layer identifies a location appearing in the video, wherein the location is automatically viewed on a map upon user selection.
  • the music layer provides a link to the audio recording, wherein the audio recording is automatically played back upon user selection, and information about the audio recording.
  • the social layer provides links that create postings to social media networks.
  • the method further includes the purchase layer provides information about a product appearing in the video, wherein the information includes links to websites that sell the product. It further includes that the search layer provides query links to search engines, wherein the resulting queries can be dynamically modified with user specific information.
  • implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform any of the methods described above.
  • implementations may include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform any of the methods described above.
  • a method of augmenting a video with product information for content seen on the video includes augmenting a video file with text encoded content objects, content external links, and social interaction launching links that are tied to anchor points on a timeline of video display and that have a duration or end time on the timeline.
  • the augmentation can be in the video file, such as in a header or as a component of a file package or in a separate file.
  • the method further includes a particular content object that includes text encoded supplemental information related to the content. It further includes a particular content external link that supplements information in the particular content object with a link to a picture or other non-text encoded information about the particular content. It further includes a particular social interaction-launching link that specifies data or references to data to be transmitted upon a user selection of the particular social interaction link, wherein the data identifies a video segment and links to the video at a particular anchor point.
  • implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform any of the methods described above.
  • implementations may include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform any of the methods described above.
  • a method of augmenting a video with product information for objects seen in the video includes augmenting a video file with text encoded product objects, product external links, and social interaction launching links that are tied to anchor points on a timeline of video display and that have a duration or end time on the timeline.
  • the method further includes that a particular product object includes a time code for an anchor point on the timeline from which appearance of the product can be replayed upon user selection. It further includes a product external link that supplements information in the particular product object with a link to a picture or other non-text encoded information about the particular product. It further includes a social interaction launching link that specifies data or references to data to be transmitted upon a user selection of a particular social interaction link, wherein the data identifies and links to the video at a particular anchor point.
  • implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform any of the methods described above.
  • implementations may include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform any of the methods described above.
  • a method of giving a video watcher access to object information for objects seen in the video includes receiving an augmented video file that links object instances and object information for the instances to a timeline for the video. It further includes running the video in a player that supports opening a supplemental interface, separate from the video, wherein the interface includes currently relevant object information consistent with the timeline. The method can be enhanced by the player pausing the video and simultaneously displaying the currently relevant object information.
  • the method further includes linking active screen regions on the timeline to the object instances and object information, wherein the active screen regions are polygon overlays of the video. It further includes retrieving visual information using references in the augmented video file. It further includes using the player to visually signal an availability of the object information in time segments.
  • the method further includes that the supplemental interface is a dynamic XML driven interface including elements that are updatable during video streaming.
  • implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform any of the methods described above.
  • implementations may include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform any of the methods described above.
  • a method of embedding a layer in a video that triggers actions associated with content seen on the video includes appending an interactive object layer onto a video player that runs the video, wherein the object layer includes product instances, product information and social interactions for the instances. It further includes structuring the object layer on one or more predefined virtual grids of the video, placing data holders in the object layer by marking in and out points on a timeline of the video, storing information of one or more actions in the data holders, associating the actions with one or more cells on the virtual grids, and executing the actions in response to a user selection across the cells.
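The method steps in the bullet above (marking in and out points on the timeline, storing action information in data holders, associating actions with grid cells, and executing actions on selection) can be sketched in JavaScript. This is an illustrative model only; the names (createDropzone, addAction, actionsAt) are assumptions, not part of the specification.

```javascript
// A dropzone is a data holder bounded by in/out points on the video timeline.
function createDropzone(inTime, outTime) {
  return { inTime, outTime, actions: [] };
}

// Store an action in the data holder and associate it with one or more
// cells of the virtual grid.
function addAction(dropzone, action, cells) {
  dropzone.actions.push({ action, cells });
}

// Return the actions to execute for a user selection at a given cell,
// provided the playhead is inside the dropzone's temporal segment.
function actionsAt(dropzone, time, cell) {
  if (time < dropzone.inTime || time > dropzone.outTime) return [];
  return dropzone.actions
    .filter(a => a.cells.some(c => c.row === cell.row && c.col === cell.col))
    .map(a => a.action);
}
```

A selection outside the dropzone's in/out segment, or on a cell with no associated action, yields no actions.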

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The technology disclosed relates to appending an interactive object layer to an online media, such as an online video, tablet or PC media player, for providing information and actions including data storage objects that are associated with the unique ID number of the playing file. The object layer is structured as one or more virtual grids of cells. Each grid is associated with a portion of the video, referred to as "dropzone." Actions are associated with individual cells in the grids. The actions can be associated with portions of images being displayed by the player, such as images of products, places, people, etc. The actions can further initiate activities such as accessing information about the underlying portion of the image, initiating a purchase transaction for an item, initiating a query for information, and the like. The technology disclosed further relates to displaying a separate media layer object as an overlay of the video player, which indicates the presence of the object layer. The media layer, its objects, and the video player can be displayed by a web browser in a web page. The interactive object layer, its objects and video player can be executed in a separated manner. A user can interact with the object layer, e.g., by clicking on ones of the cells for which actions are defined. The actions are automatically executed at that point.

Description

INTERACTIVE OVERLAY OBJECT LAYER FOR ONLINE MEDIA
Inventor: Teemu Matti Olavi Airamo
RELATED APPLICATION
[0001] This application claims the benefit of US Provisional Patent Application No.
61/609,869, entitled, "Interactive Overlay Object Layer for Online Media," filed on 12 March
2012. The provisional application is hereby incorporated by reference for all purposes.
BACKGROUND
[0002] The subject matter discussed in the background section should not be assumed to be prior art merely as a result of its mention in the background section. Similarly, a problem mentioned in the background section or associated with the subject matter of the background section should not be assumed to have been previously recognized in the prior art. The subject matter in the background section merely represents different approaches, which in and of themselves may also correspond to implementations of the claimed inventions.
[0003] The technology disclosed relates to appending an interactive object layer to an online media, such as an online video, tablet or PC media player, for providing information and actions including data storage objects that are associated with the unique ID number of the playing file. The object layer is structured as one or more virtual grids of cells. Each grid is associated with a portion of the video, referred to as "dropzone." Actions are associated with individual cells in the grids. The actions can be associated with portions of images being displayed by the player, such as images of products, places, people, etc. The actions can further initiate activities such as accessing information about the underlying portion of the image, initiating a purchase transaction for an item, initiating a query for information, and the like.
[0004] The technology disclosed further relates to displaying a separate media layer object as an overlay of the video player, which indicates the presence of the object layer. A web browser in a web page can display the media layer, its objects, and the video player. The interactive object layer, its objects and video player can be executed in a separated manner. A user can interact with the object layer, e.g., by clicking on ones of the cells for which actions are defined. The actions are automatically executed at that point.
[0005] Generally, advertising for online media currently uses pre-roll, mid-roll, and post-roll ads, as well as banner ads that appear on a small portion of a video. These types of advertisements are not directly and interactively coupled to the images of products, services, or other content visually appearing in the videos for which an advertiser wishes to advertise.
[0006] An opportunity arises to enable interactive advertising by augmenting videos with information about content seen on the videos. Enhanced user experience, more effective user interactions, and higher overall user satisfaction and retention may result.
SUMMARY
[0007] The technology disclosed relates to appending an interactive object layer to an online media, such as an online video, tablet or PC media player, for providing information and actions including data storage objects that are associated with the unique ID number of the playing file. The object layer is structured as one or more virtual grids of cells. Each grid is associated with a portion of the video, referred to as "dropzone." Actions are associated with individual cells in the grids. The actions can be associated with portions of images being displayed by the player, such as images of products, places, people, etc. The actions can further initiate activities such as accessing information about the underlying portion of the image, initiating a purchase transaction for an item, initiating a query for information, and the like.
[0008] The technology disclosed further relates to displaying a separate media layer object as an overlay of the video player, which indicates the presence of the object layer. A web browser in a web page can display the media layer, its objects, and the video player. The interactive object layer, its objects and video player can be executed in a separated manner. A user can interact with the object layer, e.g., by clicking on ones of the cells for which actions are defined. The actions are automatically executed at that point.
[0009] Other aspects and advantages of the technology disclosed can be seen on review of the drawings, the detailed description and the claims, which follow.
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] The following drawing figures, which form a part of this application, are illustrative of implementations of the present invention and are not meant to limit the scope of the invention in any manner, which scope shall be based on the claims appended hereto.
[0011] FIG. 001 illustrates one implementation of a server dataflow.
[0012] FIG. 002 shows one implementation of a detailed dataflow.
[0013] FIG. 003 illustrates one implementation of an individual producer scoring.
[0014] FIG. 004 is one implementation of object services.
[0015] FIG. 005 illustrates one implementation of producer services and overall object distribution.
[0016] FIG. 006 shows one implementation of dropzones.
[0017] FIG. 007 illustrates one implementation of a virtual grid.
[0018] FIG. 008a shows one implementation of XML buttons.
[0019] FIG. 008b illustrates one implementation of a dynamic user interface.
[0020] FIG. 009 shows one implementation of overlay services.
[0021] FIG. 009.1 is another implementation of overlay services.
[0022] FIG. 010 illustrates one implementation of an event detector.
[0023] FIG. 011 shows one implementation of a web services layer.
[0024] FIG. 012 illustrates one implementation of social services.
[0025] FIG. 013a shows one implementation of seller services.
[0026] FIG. 013b is another implementation of seller services.
[0027] FIG. 013c illustrates another implementation of seller services.
[0028] FIG. 014 shows one implementation of seller tracking services.
[0029] FIG. 015 illustrates one implementation of a seller account.
[0030] FIG. 016 shows one implementation of an end user application.
[0031] FIG. 17 is one implementation of a purchase action.
[0032] FIG. 18 illustrates one implementation of a location action.
[0033] FIG. 19 shows one implementation of a dropzone paused screen.
[0034] FIG. 20 illustrates one implementation of an indication of a dropzone in a video.
DETAILED DESCRIPTION
[0035] The following detailed description is made with reference to the figures. Sample implementations are described to illustrate the technology disclosed, not to limit its scope, which is defined by the claims. Those of ordinary skill in the art can recognize a variety of equivalent variations on the description that follows.
[0036] The technology disclosed relates to creating interactive videos for use in a computer-implemented system. The described subject matter can be implemented in the context of any computer-implemented system, such as a software-based system, a database system, a multi-tenant environment, or the like. Moreover, the described subject matter can be implemented in connection with two or more separate and distinct computer-implemented systems that cooperate and communicate with one another. One or more implementations may be implemented in numerous ways, including as a process, an apparatus, a system, a device, a method, a computer readable medium such as a computer readable storage medium containing computer readable instructions or computer program code, or as a computer program product comprising a computer usable medium having a computer readable program code embodied therein.
[0037] In one implementation, the technology disclosed can select a media based on desired relationships such as user preferences and identities of media files. In another implementation, the technology disclosed can be implemented on portable devices, web browsers and other software capable of presenting media files.
[0038] Other implementations to match objects to videos and insert the objects to a video can include preference matching, on-fly insertion of predefined applications such as object-specific video players, and the like. Additional objects such as serving commercial advertisements, and the like can be introduced to the video to enable interactivity, improve user experience and provide value to video owners.
[0039] During a video, when an interactive object layer enabled item is recognized in the layer, an interactive object layer icon can be displayed in the upper right-hand corner as an overlay to indicate that an interactive object layer action on a product is available. The video can then be paused to view additional details and actions of the interactive object layer item. When the interactive object layer item is no longer available, the interactive object layer icon can disappear until a next interactive object layer item is recognized.
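The icon behavior of paragraph [0039] amounts to testing the current timecode against the segments during which enabled items are recognized. A minimal sketch (the function and field names are assumptions):

```javascript
// Hypothetical check: the interactive object layer icon is visible
// whenever the current timecode falls inside a segment (in seconds)
// during which an enabled item is recognized in the layer.
function iconVisible(timecode, itemSegments) {
  return itemSegments.some(seg => timecode >= seg.in && timecode <= seg.out);
}
```

A player would re-evaluate this on each timecode update and show or hide the corner overlay accordingly.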
[0040] In some implementations, a list of available interactive object layer products linked to the video timecode can be displayed at the top of the screen. In some implementations, five (5) images relevant to the timecode can be displayed at any time. The image sequence can move along with the timecode. At the end of the program, a list of all interactive object layer items can be made available, along with various actions from the entire video such as social media, purchase information and location services.
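One plausible reading of the moving image sequence in paragraph [0040] is a window of up to five thumbnails covering the items that have most recently appeared relative to the timecode. The sketch below is an assumption about that behavior, not a prescribed implementation:

```javascript
// Hypothetical sketch: pick up to five product images relevant to the
// current timecode, so the thumbnail strip moves along with playback.
// Each item carries the timecode at which it appears in the video.
function visibleThumbnails(items, timecode, max = 5) {
  return items
    .filter(item => item.timecode <= timecode)  // items that have appeared
    .sort((a, b) => b.timecode - a.timecode)    // most recent first
    .slice(0, max)                              // keep at most five
    .reverse();                                 // display in timeline order
}
```

At the end of the program, the same list without the `slice` limit would give all interactive object layer items from the entire video.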
[0041] The screenshots displayed at the bottom of the screen can display close up shots of the products, enabling viewers to recognize where the product is from. When an item from the sequence is clicked, the video can be paused and an enlarged image of a new item can be displayed along with different actions such as purchase, social media, location or information. Actions can then be performed directly from the screen. Exiting the screen can automatically resume the video.
[0042] The interactive object layer can be operated as an alpha layer, in which enabled items have Alpha=0 when there is no action. When a playing video is paused, Alpha=0 can be converted to Alpha=1, and the screen can be covered with a full overlay including item details and various actions. The full overlay can include a background image, an image of the product, overlay text and/or a separate action panel that includes social media information, location maps, purchasing tools or other links.
[0043] In some implementations, the technology disclosed can use real time tracking of objects to enable personalization. Referring to FIG. 009, the technology disclosed can enable an interaction between a video player 025 running a video including images of various items of interest to an advertiser and an object 033. The items of interest can include products (e.g., clothing, household items, industrial items, digital media), services (personal, professional, travel, etc.), or anything else that an advertiser or company may wish to sell or promote to a consumer.
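The alpha-layer toggle described in paragraph [0042] can be modeled as a simple state flip tied to the player's pause and resume events. This sketch is illustrative; the names are assumptions:

```javascript
// Hypothetical model of paragraph [0042]: enabled items sit at alpha 0
// while the video plays, and flip to alpha 1 (a full overlay with item
// details and actions) when playback is paused.
function createOverlayItem(details) {
  return { alpha: 0, details };
}

function onPause(items) {
  items.forEach(item => { item.alpha = 1; });  // reveal the full overlay
}

function onResume(items) {
  items.forEach(item => { item.alpha = 0; });  // hide the overlay again
}
```

In a browser, `onPause` and `onResume` would be wired to the media element's `pause` and `play` events.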
[0044] The object 033 can be a representation that is external to the video image and video player 025. It can be directly and programmatically associated with the underlying image of the item of interest, allowing users to obtain additional information about the item of interest, make a query about the item, initiate a purchase for the item, promote the item in a social network, and/or engage in other behaviors with respect thereto.
[0045] The video player 025 can be embedded, enhanced or otherwise extended with other objects such as interactive actions 020. In FIG. 009, object 033 and actions 020 are shown in an interactive object layer 034 that is external to the player 025. In other implementations, layer 034 and its content can be displayed as an overlay on top of the video player 025.
[0046] In some implementations, such as with live broadcast, the technology disclosed can use an event detector 026 to enable interaction between an object 033 and a video 025 matched on the fly. The event detector 026 can identify an event associated with the video being displayed by the video player 025 or an object 033 presented by the video player 025. The event detector 026 can also dispatch the event to an external object that is responsive to the event.
[0047] In some implementations, the technology disclosed can identify an event. It can provide synchronized or otherwise coordinated action between two components, such as the video player and the external object. The components can be executed separately and still be coordinated.
[0048] FIG. 001 is an illustration of cloud-based storage and delivery of interactive advertisements in relation to video service and the user. It can include a content database server 007 that stores content such as videos. Edge servers 005 can be used to deliver the content to client devices (computers, mobile phones, set-top boxes, receivers) of end users 004. Object storage 003 stores data pertaining to objects in the interactive object layer for the videos. Social services 002 include social networking systems. Video service 001 can be a video player, shown in its paused state.
[0049] In some implementations, user 004 can utilize a player 025 to execute a widget 042 retrieved from the Internet 006. For example, the user 004 can use the widget 042 when viewing the video in order to interact with the interactive object layer. In some implementations, the player 025 can include a web browser 041 configured to execute the widget 042 as a browser extension.
[0050] In some implementations, the user 004 can view a web page served by a web browser 041. The web page can include a video player 025, such as an HTML5 video player, that can be provided to a client 004 by the web browser 041. The web page may include a video player configured to play a video preselected by a content database server 007 or to play any video selected by the user 004 from the database. The content database server 007 can select objects from object storage 003 based on various parameters, rules and configurations such as input from the user 004, IP address of the user 004, preferences of the user 004 stored in profile database 058, etc.
[0051] FIG. 005 illustrates the user interface of a software tool that can be used by a producer. The producer can be a user 004 who uses the software tool to access a video and create the interactive object layer for the video. The video can be automatically segmented into a number of scenes 022 using a scene detection algorithm. The scenes 022 can be arranged with respect to a timeline with timecode 021. The producer can select any scene or group of scenes 022 to control the playback of the video, which appears in the preview window of the video player 025.
[0052] FIG. 006 is an illustration of "dropzones" created by marking the in and out points in relation to the timecode of a video. Any actions can then be embedded within these dropzones. Each dropzone is associated with a virtual grid which overlays the video for the duration of the video between the dropzone's in and out points.
[0053] In some implementations, a producer can create one or more dropzones 035 by placing a mark in button 018 and a mark out button 019 on the timecode 021. The dropzone 035 can be associated with the temporal segment of the video bounded by mark in 018 and mark out 019. The length of a dropzone 035 can be anything between zero and the length of the video. As shown in FIG. 006, a dropzone 035 can extend over several scenes like scene 022, and a single scene 022 can be associated with several dropzones like dropzone 035 as well.
[0054] FIG. 007 is a frontal and lateral illustration of the virtual grid of a dropzone overlaying a video service. Actions are associated with various ones of the cells of the virtual grid. FIG. 007 also shows a virtual grid 023 with a plurality of cells (equivalently, "boxes") in a column/row arrangement. In some implementations, the number of cells in the vertical (column) and horizontal (row) directions can be proportional to the aspect ratio (height in pixels to width in pixels) of the video. Thus, if the video has a 3:4 aspect ratio in pixel terms, then it can have a 3:4 ratio of rows to columns.
[0055] As grid 023 overlays an image from a video, it can place the cells over various parts of the image. In some implementations, any particular item of interest in the video (e.g., a cellphone being held by a user) can be made visually apparent to the producer. The producer can then decide what actions to associate with the item of interest.
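The row/column proportionality of paragraph [0054] can be computed from the video's pixel dimensions. The sketch below, including the scale parameter for choosing grid granularity, is an assumption layered on the stated 3:4 example:

```javascript
// Hypothetical sketch of paragraph [0054]: the virtual grid keeps its
// rows and columns proportional to the video's height:width aspect
// ratio, scaled by a chosen granularity factor.
function gridDimensions(heightPx, widthPx, scale) {
  const divisor = gcd(heightPx, widthPx);  // reduce to the aspect ratio
  return {
    rows: (heightPx / divisor) * scale,
    cols: (widthPx / divisor) * scale,
  };
}

// Greatest common divisor, used to reduce the pixel dimensions.
function gcd(a, b) {
  return b === 0 ? a : gcd(b, a % b);
}
```

For a 480x640 video, the reduced ratio is 3:4, so a scale of 2 yields a 6-row by 8-column grid.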
[0056] A dropzone 035 can be a data holder that stores information about one or more actions 020. Each action 020 can be associated with one or more cells in the virtual grid. The producer can select an action and place it on the grid, covering one or more of the cells. The action can be graphically represented as a rectangle. The producer can resize the rectangle after placing it on the grid, thereby covering the desired cells. Thus, if the producer is interested in promoting the aforementioned cellphone appearing in the video, he can drag and drop an action 020 onto the cell in which the cellphone appears. Appendix A provides an example of the XML code for dropzones for a video.
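The rectangle-to-cells association in paragraph [0056] reduces to enumerating every cell the resized rectangle spans. A minimal sketch (the `rect` field names are assumptions):

```javascript
// Hypothetical sketch: an action placed on the grid as a rectangle
// covers every cell the rectangle spans, per paragraph [0056].
// rect uses inclusive grid coordinates: { top, bottom, left, right }.
function coveredCells(rect) {
  const cells = [];
  for (let row = rect.top; row <= rect.bottom; row++) {
    for (let col = rect.left; col <= rect.right; col++) {
      cells.push({ row, col });
    }
  }
  return cells;
}
```

Resizing the rectangle simply changes which cells this enumeration yields, and hence which cells trigger the action on user selection.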
[0057] FIG. 005 is also an illustration of detailed structure of producer services enabling embedding actions onto a layer on a video, which functions on a website, smartphone, tablet or any other device that has a connection to the world wide web. Shown here is a preview window with a grid 023 in a video player 025, along with scenes 022 of the video, and action buttons 020.
[0058] In some implementations, the technology disclosed can provide a variety of predefined types of actions 020. The "location" action 020 can be associated with a hyperlink to an online mapping service, which can then display a map of some location of interest, as defined by the producer. In one example, the producer can place the location action 020 into the cell as a cellphone icon to define a hyperlink to a location map of a store selling the cellphone. When the end user viewing the video clicks on the cellphone icon in the video, the location action 020 can be triggered to cause a browser to display the map. FIG. 18 shows the result of the user selecting a location action when viewing a video. This action can retrieve a location map and related information for the selected location.
[0059] A music action 020 can be associated with a particular audio recording file, which can be played in response to the user selection. In one example, the producer can place a music action 020 into the cell as a cellphone icon to define a hyperlink to an audio recording (musical or otherwise) about the cellphone. When the end user viewing the video clicks on the cellphone icon in the video, the browser and underlying audio player can play the audio recording back.
[0060] A social action 020 can provide user-editable links that create postings on social media networks such as Facebook, Twitter, Pheed, Google+ or on any other network related to the item of interest. The producer can define specific content for the posting (e.g., a short textual/graphical/video message) as well as allow the end user to further augment the posting. In one example, the producer can place a social action 020 into the cell as a cellphone icon. When the end user viewing the video clicks on the cellphone icon in the video, the social action 020 can be triggered on one or more of the social networks, including a posting about the cellphone such as "I'm buying this cellphone...will be chatting with you soon!"
[0061] A purchase action 020 can provide a link to an ecommerce website where the item of interest can be purchased. The action 020 can also retrieve metadata about the item, such as current availability, price, shipping costs, etc. The producer can define the particular workflow for the transaction as well. In some implementations, the producer can place a purchase action 020 into the cell with the cellphone to define a hyperlink to an ecommerce site selling the cellphone. When a user clicks on the cellphone icon in the video, the browser can access the ecommerce site and bring up a product page from which the user can purchase the cellphone. In some implementations, all of the purchase transaction steps can be embedded into the playback experience, so that the user never has to leave the video player.
[0062] A search action 020 can provide a link to a search engine with a predefined query (e.g., one or more keywords, browser context) established by the producer, which can be dynamically modified with user specific information (e.g., demographics, web or purchase history) to make a user specific, customized query which is then transmitted to a search engine. The results can be provided directly within the context of the video playback web page, or the user can be directed to a separate web page to obtain the results.
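The query customization of paragraph [0062] can be sketched as merging the producer's predefined keywords with user-specific terms before building the search URL. The endpoint URL and field names here are illustrative placeholders, not real services:

```javascript
// Hypothetical sketch of the search action of paragraph [0062]: a
// producer-defined base query is merged with user-specific terms to
// form the customized query sent to a search engine.
function buildSearchQuery(baseKeywords, userContext) {
  const terms = baseKeywords.concat(userContext.extraKeywords || []);
  const query = terms.join(' ');
  // search.example.com is a placeholder for the actual search engine.
  return 'https://search.example.com/?q=' + encodeURIComponent(query);
}
```

The `userContext` would be populated at run time from the dynamic signals the paragraph mentions, such as demographics or purchase history.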
[0063] In other implementations, actions 020 may not have the same elements as those listed above and/or can have other/different actions 020 instead of, or in addition to, those listed above using JavaScript, HTML5, or any other language.
[0064] In some implementations, when an action 020 is associated with a dropzone 035, it can be added to a series of action previews 024, displayed next to the video player 025 window for the producer to see. The producer can scroll through these to see which actions have been defined, as well as sort, filter, and search them.
[0065] The producer can define for each action a monetary value associated with the end user selecting the action. The monetary value represents a type of conversion value. For example, a user selecting a location action 020 can have a value of $1.00, while a user selecting a purchase action can have a value of $100. The monetary value can be predefined, or it can be a variable and connected to an external system that determines the value at run time. The value can be value to the producer, to the seller of the product, or to any other party; multiple different monetary values can be assigned as well, for example, different ones for different parties.
[0066] The producer can also access a summary page that lists the number of each type of action, along with details thereof. For actions with monetary value, one or more summary total monetary values can be displayed, for example by action type.
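The per-type totals of paragraphs [0065] and [0066] amount to grouping conversion values by action type. A minimal sketch under that assumption:

```javascript
// Hypothetical sketch of the summary of paragraphs [0065]-[0066]:
// total the conversion values of selected actions, grouped by type.
// Each selection is { type, value }, e.g. { type: 'purchase', value: 100 }.
function summarizeConversions(selections) {
  const totals = {};
  for (const s of selections) {
    totals[s.type] = (totals[s.type] || 0) + s.value;
  }
  return totals;
}
```

When values are determined by an external system at run time, `value` would be resolved before the selection is recorded.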
[0067] As noted, there can be multiple dropzones 035 for a video. The producer can combine predefined dropzones with individual ones created by the producer. For example, a predefined dropzone can be defined as a background or default layer for the entire video, which provides actions such as general information about the producer, the video, the advertiser, or the like. A specific example can include a video created by a retail store showing its various products; the predefined dropzone can include actions to provide general information about the retail store, such as location actions and information (store hours and telephone numbers). The producer can then combine such a predefined dropzone with dropzones for individual scenes in the video.
Continuing this example, a scene in the video can show a group of products (e.g., a furnished living room with products such as a couch, table, chairs, lamp, and rug) with individual purchase actions associated with the cells overlaying various ones of the products.
[0068] FIG. 011 is an illustration of the manner by which a web page containing an enabled widget ("Tapvert Widget") can be used to display video content. Once the producer has defined all of the dropzones for a video, they are compiled into a data layer object, which is then associated with the video file. When the video is served to a browser, the associated data layer object containing the dropzone information is served as well. As shown in FIG. 011, the video information can be handled by the video player in the normal manner, while the data layer object can be processed by the widget 042, which can be a browser 041 add-on. Appendix B provides an example of the services interface that can be registered by the widget 042. Appendix C provides an example of how the widget 042 can be mounted by the browser 041.
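The compile-and-serve flow of paragraph [0068] can be sketched as bundling the producer's dropzones into one object keyed to the video, which the widget later queries during playback. All names here are assumptions:

```javascript
// Hypothetical sketch of paragraph [0068]: the producer's dropzones are
// compiled into a single data layer object associated with the video's
// ID, so it can be served to the browser alongside the video file.
function compileDataLayer(videoId, dropzones) {
  return { videoId, dropzones: dropzones.slice() };
}

// On the widget side, look up the dropzones active at the current
// playback time; each dropzone is bounded by its in/out points.
function activeDropzones(dataLayer, time) {
  return dataLayer.dropzones.filter(dz => time >= dz.in && time <= dz.out);
}
```

Overlapping dropzones (e.g. a video-wide default layer plus a scene-specific one) simply both appear in the result for times inside both segments.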
[0069] In some implementations, the video file as obtained from a video hosting service can include additional content, such as objects selected from object storage 003. The object storage 003 can retain objects, such as interactive objects, displayable objects or the like. Some of the objects 033 are associated with a video 025, such as for example, as an image, action 020 or a button 061, in accordance with a perspective of predefined virtual grid 023 of the video 025, which is overlaid onto the video. Examples of grids at different scales are shown in FIG. 007. Some of the objects can be displayed outside a video 025 or companion to the video 025.
[0070] In other implementations, there can be many different objects identified for a video, with corresponding actions associated therewith. Some objects can provide information additional to that detailed in the video 025. Some objects can be associated with a product, person, object or anything else appearing within the video. Some objects can be associated with metadata of a video, such as, for example, a keyword for use in a search action, a user preference or the like. Some objects can be associated with a video on-the-fly. In some implementations, an object can display additional information in response to an event associated with a video. In some implementations, an object can display additional information in response to an interaction of the user 004 with the video or with an object 033 displayed in the video 025, such as an object dynamically inserted into the video as an overlay on the video 034. FIG. 012 shows a connection between embedded social services and the interactive object layer API ("Tapvert API").
[0071] FIG. 009 also illustrates detection of objects in the object layer, while the video is being played back, followed by presentation of the objects in an overlay along with embedded actions and information.
[0072] Web browser 041 can display a web page that includes a video player 025 and an object 033. The video player 025 can be an HTML5-based video player, a proprietary video player, an open-source video player or the like. In some implementations, the video player 025 can be a standard embedded video player to provide the functionality in accordance with the disclosed subject matter. The object 033 can be displayed side-by-side to the video player 025, above or below it, or on top of the video player 025. In some implementations, the object 033 is displayed in an external position relative to the video player 025. The object 033 can be displayed or can be in an undisplayed state (alpha=1, alpha=0).
[0073] FIG. 009.1 shows the virtual grid within the context of the video player window. In FIG. 009.1, the object 033 is displayed and its content is a picture of overlay 034, which here shows the virtual grid. The overlay 034 can be displayed as part of an object composition. In FIG. 009.1, video player 025 displays a video of various objects. A user, such as 004 of FIG. 001, can utilize a device, such as a computer, touchpad, a touch screen or the like, to interact with the video, such as by pointing to an object 033. The user 004 can interact with an object 033 such as a jacket. The user can click on the jacket, hover above it, or otherwise interact with the jacket. It can be noted that the jacket can be an object 033, such as a dynamically inserted object, can be associated with a hot spot, or can otherwise be defined for the interaction disclosed in the overlay services. It can be noted that a hot spot is an area within the predefined grid 023 of a media file, such as a video, that enables an interaction by the user 004. In an implementation, a hot spot can be defined as an area of the jacket 033 in the video 025, and upon a user interaction, such as touching the hot spot, event detector 026 can trigger an event.
[0074] In some implementations, the video player 025 can pause the video being displayed in response to an interaction of the user 004 via device. In some implementations, the video player 025 can resume the video upon predefined action. In other implementations, the video player 025 can continue to show the video while the object composition is displayed.
[0075] In some implementations, an animation can be associated with the object 033, without any additional animation related to the video 025, such as the action of FIG. 009.
[0076] In some implementations, the object 033, which can be referred to as a companion object, or a video object, can be described by identification and layout parameters, such as position, size and the like. The object 033 can be responsive to events associated with the identification. For example, in some cases multiple objects similar to the object 033 can be presented, each associated with a different identification. Some events can induce a functionality of some of the multiple objects.
[0077] In some implementations, the event can include an identification, which can be used to determine which objects the event can be dispatched or which objects can be responsive to the event.
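The identification-based dispatch of paragraphs [0076] and [0077] resembles a registry that routes each event only to the objects registered under the event's identification. A minimal sketch (the dispatcher API is an assumption):

```javascript
// Hypothetical sketch of paragraph [0077]: an event carries an
// identification, and only the objects registered under that
// identification are responsive to it.
function createDispatcher() {
  const handlers = {};  // identification -> list of callbacks
  return {
    register(id, callback) {
      (handlers[id] = handlers[id] || []).push(callback);
    },
    dispatch(event) {
      (handlers[event.id] || []).forEach(cb => cb(event));
    },
  };
}
```

An event detector such as 026 would call `dispatch` when it identifies an event, and each companion object would `register` under its own identification.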
[0078] FIG. 009 also shows a block diagram of an interactive object layer system. A video service provided by an external operator, such as YouTube, Vimeo or the like, can be configured to serve a video to a user, such as 004 of FIG. 001. The interactive object layer 034 inserted on top of the video 025 service can include an event detector 026, a coordinating module 039 or an Input/Output (I/O) module 038.
[0079] In some implementations, such as with live broadcast, the media layer can run via an Input/Output (I/O) module 038. In some implementations, the input/output (I/O) module 038 can provide an interface to object storage 003. The object storage 003 can provide an object composition from content inventory 009.
[0080] In some implementations, the Input/Output (I/O) module 038 can provide an interface to an object storage 003. The object storage 003 can be a database, an external storage server or the like. The object 033 can be a displayable object, such as the jacket 033 of FIG. 009. The object 033 can be an interactive object. The object 033 can be associated with social interactions 020, purchase actions 020, a set of external actions 020, location services 020, similar videos having a common characteristic, or the like.
[0081] In some implementations, the input/output (I/O) module 038 can provide an interface to a web browser 041. The input/output (I/O) module 038 can enable serving a web page to a user 004 via the internet 006 illustrated on FIG. 011.
[0082] In some implementations, the event detector 026 can be configured to identify an event associated with the video player 025. The event can be an interactive event, such as a social network action 020, location service 020 or purchase action 020, as initiated by an end user viewing the video and clicking on the cell overlaying the video that is associated with the event.
[0083] The event can be a tracking activity of user 004 within the interactive object layer. The event can be a keyword event, such as a keyword associated with a grid 023 and the video being played. The keyword can be dynamically determined. The keyword can then be passed to a search action 020. The events can utilize other characteristics of the metadata associated with grid 023, user inputs and the like, which can then be passed to the corresponding actions 020.
[0084] In some implementations, the coordinating module 039 can be configured to coordinate action of two or more elements. The coordinating module 039 can coordinate action of an object, such as jacket 033 of FIG. 009, and the video being played by the video player 025. For example, the coordinating module 039 can synchronize the object such that the object 033 can be assigned an action in accordance with associated user preferences.
[0085] In some implementations, as the video is played, the widget can determine which dropzone(s) are associated with the current playback time using the video time codes. The widget can then retrieve the grid and cell information for such dropzones, and then render the interactive object layer over the video player 025. In this manner, the player can be completely independent of, and agnostic to, the object layer. As the video continues to play, the widget can update the object layer with the corresponding dropzones depending on the current time code of the playback.
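The dropzone lookup by time code described above can be sketched as a simple interval filter, which the widget could run on every playback time update. The `start`/`end` fields (in seconds) and the `activeDropzones` name are assumptions for illustration.

```javascript
// Return the dropzones active at the current playback time, given
// each dropzone's start and end time codes in seconds. The widget
// can call this on each time update and re-render the overlay.
function activeDropzones(dropzones, currentTime) {
  return dropzones.filter(
    (dz) => currentTime >= dz.start && currentTime < dz.end
  );
}
```

Because the lookup uses only the current time code, the player itself stays agnostic of the overlay, as the paragraph above notes.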
[0086] When the user pauses the video, icons, buttons or other graphic indicia can be displayed to indicate the various actions 020 defined for the current dropzone(s). These indicia can be shown at their assigned grid locations. In some implementations, the grid can be shown as well.
[0087] In some implementations, a key can be assigned that causes the indicia to be shown over the playing video. As a result, the user does not need to pause the video to see the indicia of the actions.
[0088] In some implementations, when a user clicks on a playing or paused video, the click can be passed by the browser to the event detector 026 to detect an event in the dropzone 035.
[0089] In some implementations, a predefined action can be coordinated in dropzone 035. A coordinating module 039 of FIG. 011 can coordinate the action. The coordination can induce an order of elements within the predefined action, between elements from different actions, between elements of actions and grid 023 of the video 025, and the like. Actions 020 can be links to a product, social action, location, music store integration, etc.

[0090] Content sequencing web services 016, 010 can support content refreshing for clients that cache content, such as browser based players. Content database server 007 can be used to catalog the available content. Profile database 058 and content server 003 can be used to serve content in the form of digital media files. Content sequencer 010 can decide which objects should be sent across the interactive object layer to be displayed. Collectively, the web services are available across the Internet 006.
[0091] FIG. 008b is an illustration of the dynamic user interface generation and user interface seeding process. As shown in FIG. 008b, the technology disclosed can include generating XML based user interfaces that perform an auto-run function at stage 081. The auto-run function can detect whether the XML based user interface can be used in portable devices or distributed as web widget 042.
[0092] FIG. 002 is an illustration of how content is delivered to web services and how web services function between the content database server, web services and profile database. A web browser 041, including a widget 042 (e.g., a browser extension), can access video content from the content database 007, user profiles from the profile database 058, and interactive object information from the content storage 003.
[0093] Web browser 041 and web services 083 can serve HTML, JavaScript, Images, HTML5 and other objects utilized by users 004. Profile web services 016 and a profile database 058 can store end-user specific profile information including personal customizations, content preferences, and history of recent end-user actions and other events.
[0094] A profile database 058 can store an end-user's preferences, producer preferences and action history. In some implementations, this data can be expressed as one or more XML documents or trees (nodes).
[0095] The profile database 058 can also store object definitions. In some implementations, producer customization information, such as rate score 052, preferences, action history, and/or other user customization criteria, can be stored in a common user 004 profile in profile database 058. In other implementations, user customization information can be stored as one or more separate user profiles.
[0096] In some implementations, user profiles can be synchronized between two or more types of players used by a particular end-user, facilitating user profile updating and synchronization across multiple player types. For example, in some implementations user profiles can be synchronized between a web based player 025 and a portable player, such as a tablet, so that the user profile information on all synchronized players can be updated to the most recent profile.

[0097] FIG. 003 is an illustration of scoring based on sales of products through individual producers in relation to all producers selling any particular product. In some implementations, producers can be scored based on their ability to effectively promote the sale of products related to the content of a video. In other implementations, scoring can include scanning the user preferences, with execution proceeding to profile database 058, where the action and purchase history is examined.
[0098] A producer score can be calculated at stage 055. It can also be compared to an average score 057 and adjusted accordingly. The new score can then be returned at stage 056 to profile database 058, where the process can be repeated.
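A minimal sketch of the scoring flow of stages 055-057 follows. The share-of-sales score and the blending weight are assumed formulas for illustration; the disclosure does not specify how the score is computed or adjusted.

```javascript
// Score a producer by their share of total sales for a product
// (stage 055), then adjust the stored score relative to the
// average score 057 before returning it (stage 056).
function producerScore(producerSales, totalSales) {
  return totalSales === 0 ? 0 : producerSales / totalSales;
}

function adjustScore(currentScore, newScore, averageScore, weight = 0.5) {
  // Producers above the average move up faster; below, slower.
  const delta = (newScore - currentScore) + (newScore - averageScore) * weight;
  return currentScore + delta;
}
```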
[0099] FIG. 008a illustrates an implementation of a state diagram showing some of the states used in an implementation of a dynamic user interface. FIG. 010 shows an associated event-processing table that detects interactions with objects in the grid.
[00100] The application can register event handlers and other events required by the object grid 023 after UI transfers to an initialization phase between states 066 and 067. When an event is received, the handler can be dispatched at state 068. The event handler can create a new thread to handle the event or place the event in the queue of an existing thread. FIG. 011 further illustrates state and event processing of one implementation of the invention.
Tracking
[00101] Interactive object layer user details can be tracked as the customer uses interactive object layer services. In some implementations, details regarding purchases, likes, shares on social media, information requests and location details can be saved in the interactive object layer 'memory'. The technology disclosed can also provide a learning platform in which the more the users interact with the interactive object layer, the more details can be saved and the more personal the services that can be provided. For example, the interactive object layer can learn the user's shoe or dress size, location, and brand preferences. Other details such as age, sex and location can also be drawn from person-related data sources such as access controlled APIs, the public Internet and social networking sites.
Seller Log In and Product Inserting
[00102] FIG. 013a is an illustration of actions relating to the addition of objects to the interactive object layer database, along with information regarding previous objects. FIG. 013b shows the details required for the seller to enter a new object into the database through web services. FIG. 013c is an illustration of tracking services available for sellers.

[00103] FIG. 014 is also an illustration of tracking services available to the seller. FIG. 015 illustrates actions related to account details of any particular user of Interactive object layer Seller services.
[00104] FIG. 016 shows the home screen for a user application for watching videos, with buttons for selecting TV shows, movies, music videos, and advertisements. FIG. 17 shows the purchase action, after the user has paused the playback of a video, and selected an action button associated with a purchase. The user can be shown an image of the product to purchase, along with its price, and optionally its price as compared to RRP. The user can login and complete the purchase transaction.
[00105] Seller Login is separate from regular Interactive object layer Login. The seller section is for brands selling their products via Interactive object layer. Using the technology disclosed, the seller can request approval to become 'interactive object layer seller', and can be given unique passwords to access the product section. The seller can then add products to the interactive object layer database. In some implementations, there can be a fixed fee of $29.99 per product for 6 months in 'interactive object layer database'.
[00106] An image of the product can be in a file format such as PNG, SVG, MP4, M4V, BMP, TIFF, PSD, GIF, TGA, AVI, MOV, a scaled vector format, or any other image format that enables an adjustable separate alpha channel. Product information can be added as a text overlay. Products can be categorized by brand, and sub-categorized by model, which can be further sub-categorized by size or color. The recommended retail price (RRP) for the product can be inserted. The seller can then have the choice to apply an Interactive object layer discount of 10%, 20% or 'other %', which can be automatically calculated from the RRP. The seller can then add the number of products in 'interactive object layer seller' that the seller can guarantee to be available in stock. The seller can then monitor the number of products sold live, and add more items to the 'interactive object layer stock', provided that the product is ready and available in the brand's stock storage.
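The automatic discount calculation from the RRP described above can be sketched as follows; rounding to cents is an assumption, as the disclosure does not specify rounding behavior.

```javascript
// Compute the interactive object layer price from the recommended
// retail price (RRP) and the seller's chosen discount percentage.
function discountedPrice(rrp, discountPercent) {
  const price = rrp * (1 - discountPercent / 100);
  return Math.round(price * 100) / 100; // round to cents
}
```

For example, a 10% discount on an RRP of 49.99 yields 44.99, and a 20% discount on 100 yields 80.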
[00107] FIG. 19 shows what is seen by the user after pausing playback, whereby an image/button can be shown for purchase action (the T-shirt), as well as buttons for social actions (buttons with "F" and "T").
[00108] FIG. 20 illustrates how an icon (here the stylized "M") can be shown to signal to the viewer that a dropzone is present in the current portion of the video.

Studio Extra
[00109] In some implementations, the interactive object layer can be extended for use in large production companies, studios, and commercial agencies. The extended versions can allow addition of pre-rolls and other video features to the original content.
Use Cases
[00110] The technology disclosed can be used to promote television shows on social media. Using the technology disclosed, a producer can create dropzones in the shows' videos. In some implementations, the dropzones can be created in the video timeline to mark interesting events, such as funny jokes, that are likely to generate a greater user response. When a first user selects a screen icon referencing these dropzones, the technology disclosed can invoke the appropriate social action and create a posting on social networking sites like Facebook, Twitter, etc. that includes a caption along with an anchor point or timeline marker to at least the beginning point of the video segments defined by the dropzones. Other users connected to the first user on the social networking sites can select these postings and conveniently view the linked video segments.
[00111] Producers of television shows can provision real-time information about the shows' content for viewers to expose while they are watching the shows, using the technology disclosed. For instance, producers of travel shows can create dropzones to provide viewers with content including information, reviews and photos about the destinations shown in the shows, trip planning tips, etc. that they can expose while viewing the travel show videos.
[00112] In another example, producers of cooking shows can create dropzones that define on-screen user playback controls and share with viewers recipes, scaling of recipes, and information about stores that carry the ingredients of the recipes. In some implementations, the dropzones can define on-screen controls timed to match the instant events in the cooking shows. For example, at the bread baking segments of the cooking shows, instructions on how to bake the bread can appear on the screen. After listing ingredients, a pause or resume control can give a viewer an opportunity to assemble the ingredients before proceeding.
[00113] In another example, broadcasters of sporting events can provide viewers real-time information such as scores, statistics, player and team information, etc. while they are watching the events. Dynamic controls can be defined that allow viewers to choose the type of information they see. In some implementations, producers or viewers can embed markers like "penalty moment" in the event videos and facilitate dynamic replay.

[00114] In another example, broadcasters of dance shows can use the technology disclosed to help viewers learn the dance sequences shown in the shows in real time, i.e. while they are watching the shows. In some implementations, dropzones can be created to include information related to the dance sequences, including the number and types of steps, feet positions, song, etc.
[00115] In another example, the technology disclosed can be used to efficiently present education videos. In some implementations, different events within a video can be categorized as compact video segments that can be independently accessed and easily shared on social media. In some implementations, the video segments can be appended with searchable tags to provide video indices to viewers.
Some Particular Implementations
[00116] The technology disclosed may be practiced as a method or system adapted to practice the method.
[00117] In one implementation, the technology disclosed can include a media player 025 that is configured to display an embedded widget 042. The media player 025 can have a display layout, an event detector 026 configured to identify an event associated with the widget 042 displayed by the media player 025, and an object 033. The object 033 can be configured to be displayed by the online player 025 and to be responsive to the event identified by event detector 026.
[00118] In one implementation, the media player 025 and object 033 can be executed via an online XML file. In other implementations, object 033 can be configured to perform a predefined action in response to the event associated with grid 023.
[00119] In one implementation, the media player 025 can include a second object that is configured to perform a second predefined action in response to the event associated with grid 023.
[00120] In one implementation, an event can be selected from the group consisting of: interactions with a second object, tracking events of entities in the video, placement events of entities in the video, frame events, keyword events, and ambient events of the video.
[00121] In one implementation, the event can include frame object identification and target object identification. The target object identification can be associated with object 033.
[00122] In one implementation, the technology disclosed generates media layer objects 033 by maintaining a plurality of display templates, including background images and regions, where information specific to an entity can be inserted for display. On receiving information from object storage 003, objects 033 can be incorporated into grid 023 and one of the plurality of templates can be automatically selected. Furthermore, the information about the entity can be automatically inserted into one or more regions of the grid to automatically create the display ad for a later online display.
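The template selection and region fill described above can be sketched as below. The category-based selection rule, the fallback to the first template, and all field names are illustrative assumptions.

```javascript
// Select a template matching the entity's category and insert the
// entity's information into the template's regions, producing a
// display ad description for later rendering.
function buildDisplayAd(entity, templates) {
  const template =
    templates.find((t) => t.category === entity.category) || templates[0];
  const regions = {};
  for (const region of template.regions) {
    regions[region] = entity[region] ?? ''; // blank if field missing
  }
  return { background: template.background, regions };
}
```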
[00123] In one implementation, a template can be a background image on a specified area of the grid 023 for object 033. Template information identifying what information is to be provided into the specified areas of the grid 023 can be stored when an object layer is created.
[00124] FIG. 008a is an illustration of XML buttons and their scaling for various screen sizes based on x and y coordinates. In one implementation, the specified area of the grid 023 for object 033 can be defined in extensible markup language (XML), as illustrated in FIG. 008a.
[00125] In another implementation, the technology disclosed can provide the information associated with each template region within the grid 023. This information can identify the information provided in the region.
[00126] In one implementation, the technology disclosed can automatically provide graphical images in the interactive object layer by scaling the size of the graphical images.
[00127] In one implementation, a web browser 041 can display media player 025 and object
033.
[00128] In one implementation, a video-associated object 033 can be utilized in an embedded online environment by displaying a widget 042 in a display layout by a media player 025, identifying an event associated with the data stream, and displaying an object 033 in the predefined grid 023 in response to the event.
[00129] In one implementation, a second object can be displayed in the predefined grid 023 in response to the event by performing a predefined action that includes: playing a media content, displaying an object, displaying an animated image, and displaying a text or predefined action 020.
[00130] In one implementation, the identity of a first media file can be recognized from a plurality of media files. Furthermore, a user input indicative of a desired relationship measure and a desire to select a second media file of the plurality can be received by accessing user preference data that indicates a relationship measure among the media files. The second media file can then be selected in accordance with the desired user preference measure, user input, user preference data, and identity of said first media file.
[00131] In one implementation, the media device can include a portable media player, a personal computer-based media player, an online enabled DVR streaming appliance, and/or a smartphone or cellular telephone. The second media file can be streamed on a media device.

[00132] In another implementation, the identity of a second media file can be recognized from a plurality of media files. Furthermore, a user input indicative of a desired relationship measure and a desire to select a third media file of the plurality can be received by accessing user preference data that indicates a relationship measure among the media files. The third media file can then be selected in accordance with the desired user preference measure, user input, user preference data, and identity of said first media file. The identity of the first media file can include a brand name associated with the media file and product details associated with the media file.
[00133] In one implementation, the user preference data can be based upon a statistical measure of co-occurrence of media files in a particular set of media files. The set of media files can be a saved list or a dynamic media library in the content database server 007.
[00134] In another implementation, the user preference data can be established by analyzing a play history 013 of users 004 of media files. The user preference data can be established by analyzing user preferences in profile database 058 constructed by users 004 of media files.
[00135] In one implementation, the user preference data can be based on observed user 004 behavior. The user preference measure can correlate to a degree of similarity between the first media file and the second media file to present more relevant media files to the user 004.
[00136] In one implementation, the user preference measure can correlate to a degree of similarity between the first media file and other media files. The user preference data can be established by analyzing a statistical measure of co-occurrence of said plurality of media files in published objects.
[00137] In another implementation, the user interface can be represented using geometric vectors (as FIG. 008a illustrates) and UI template generator 082 can determine the user preference.
[00138] In one implementation, the UI template can be generated based on the distance between vectors. The user preference data can be stored as a UI template, with the media files associated with vertexes in a graph and edges representing relationships between the media files.
[00139] In one implementation, interactive content can be generated by: selecting a media file to be displayed online as object overlay 033 and defining one or more items over rendered grid 023 or background image for holding text, images or embedded web services 083.
[00140] In one implementation, the technology disclosed can include receiving information about an entity and incorporating it into a display ad. It can further include automatically selecting a background image based on a category associated with the entity, and automatically inserting the information about the entity into the one or more bounding boxes to create a display ad that is later displayed online or on another internet enabled device.

[00141] In one implementation, the technology disclosed can further include electronically receiving the information to be incorporated into a display ad such that a display ad with a background image and portions of text and/or overlaid images is automatically created without human intervention.
[00142] In one implementation, a textual description of each bounding box can be stored in a graphical file format that allows embedded comments.
[00143] In another implementation, an image of the product can be in a file format including PNG, SVG, MP4, M4V, BMP, TIFF, PSD, GIF, TGA, AVI, MOV, or any other image format that enables an adjustable separate alpha channel.
[00144] In another implementation, the technology disclosed can determine whether a first display ad can be shown on a screen with another display ad having the same background image as the first display ad.
[00145] In another implementation, responsive to a determination that a display ad is similar to another display ad to be displayed on the same canvas, the technology disclosed can regenerate in an automated manner a different background image before displaying that display ad.
[00146] In another implementation, the technology disclosed can automatically determine whether another display ad for the same category and in a similar geographic area has the same background image as the selected background image prior to creating the display ad, and can further select a different background image if another display ad for the same category has the same background image as the selected background image in the same geographic area.
[00147] In another implementation, the technology disclosed can automatically provide a best fit process for incorporating text into bounding boxes by iteratively adjusting the size of the text.
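The iterative best-fit process of [00147] can be sketched as below. The per-character width model stands in for real text measurement and, along with the size bounds, is purely an assumption.

```javascript
// Iteratively shrink the font size until the text fits the bounding
// box width, stopping at a minimum readable size.
function bestFitFontSize(text, boxWidth, maxSize = 32, minSize = 8) {
  const widthOf = (size) => text.length * size * 0.6; // assumed metric
  let size = maxSize;
  while (size > minSize && widthOf(size) > boxWidth) {
    size -= 1;
  }
  return size;
}
```

In a browser, the assumed `widthOf` metric would be replaced by actual text measurement against the rendered font.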
[00148] In another implementation, the interactive object layer can have pre-defined size and location and can be associated with a background image.
[00149] In another implementation, the technology disclosed can insert in an interactive object layer a graphical image of a coupon, or any other image.
[00150] In another implementation, the technology disclosed can automatically regenerate interactive object layer by receiving information about an object 033 in response to a query and automatically compare the received information about the object 033 in object storage 003 to information in a file identifying what information about the object 033 is to be displayed in video canvas 032. If there is a difference between the displayed information and the received information from content preference XML 008, the technology disclosed can automatically replace the displayed information with the received information, and further automatically regenerate the media layer object 033 to include the received information. It can then store the media layer object 033 in object storage 003.
[00151] In yet another implementation, the technology disclosed can update information in a database about an entity with the received information without also regenerating media layer object 033 for that entity if it is determined that the difference in the information is not in information that is displayed in media layer object 033.
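The comparison driving [00150] and [00151] — regenerate the media layer object only when a field that is actually displayed has changed — can be sketched as follows; the field-list representation is an illustrative assumption.

```javascript
// Decide whether a media layer object needs regeneration: true only
// if one of the fields currently displayed differs from the
// received information; changes to non-displayed fields can update
// the database without regeneration.
function needsRegeneration(displayed, received, displayedFields) {
  return displayedFields.some((f) => displayed[f] !== received[f]);
}
```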
[00152] In yet another implementation, the technology disclosed can prevent multiple object layers with the same background image from being displayed.
[00153] In yet another implementation, the technology disclosed can compare a changed data field to tagged information indicating information that is displayed in the interactive object layer.
[00154] In yet another implementation, the technology disclosed can include tagged information as extensible markup language (XML) embedded in a graphical file associated with the interactive object layer.
[00155] In one implementation, a method of augmenting a video with product information for content seen on the video is described. The method includes augmenting a video file with text encoded layers of content objects with text encoded supplemental information or external links that are tied to anchor points on a timeline of video display and that have a duration or end time on the timeline.
[00156] The method further includes that the augmented file provides information that a video player that recognizes the augmentation can use to display the supplemental information on request and to retrieve information from the external links on request. It further includes that the user can control flow of the video and access to information available via the content objects.
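One possible text encoded shape for the augmentation described above, tying content objects to anchor points with a duration on the video timeline, is sketched below. All field names and values are illustrative assumptions, not a format specified by the disclosure.

```javascript
// A sketch of an augmentation record a player could parse to show
// supplemental information and external links at the right times.
const augmentation = {
  video: 'episode-12.mp4',
  contentObjects: [
    {
      id: 'jacket',
      anchor: 95.0,   // seconds into the timeline
      duration: 12.5, // object is active for this many seconds
      info: { brand: 'ExampleBrand', price: '$44.99' },
      externalLinks: ['https://example.com/jacket'],
    },
  ],
};
```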
[00157] This method and other implementations of the technology disclosed can each optionally include one or more of the following features and/or features described in connection with additional methods disclosed. In the interest of conciseness, the combinations of features disclosed in this application are not individually enumerated and are not repeated with each base set of features. The reader will understand how features identified in this section can readily be combined with sets of base features identified as implementations.
[00158] The method further includes one or more of a location layer, music layer, social layer, purchase layer, and search layer. The location layer identifies a location appearing in the video, wherein the location is automatically viewed on a map upon user selection. The music layer provides a link to the audio recording, wherein the audio recording is automatically played back upon user selection, and information about the audio recording. The social layer provides links that create postings to social media networks.

[00159] The method further includes that the purchase layer provides information about a product appearing in the video, wherein the information includes links to websites that sell the product. It further includes that the search layer provides query links to search engines, wherein the resulting queries can be dynamically modified with user specific information.
[00160] Other implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform any of the methods described above. Yet another implementation may include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform any of the methods described above.
[00161] In another implementation, a method of augmenting a video with product information for content seen on the video is described. The method includes augmenting a video file with text encoded content objects, content external links, and social interaction launching links that are tied to anchor points on a timeline of video display and that have a duration or end time on the timeline. The augmentation can be in the video file, such as in a header or as a component of a file package or in a separate file.
[00162] The method further includes a particular content object that includes text encoded supplemental information related to the content. It further includes a particular content external link that supplements information in the particular content object with a link to a picture or other non-text encoded information about the particular content. It further includes a particular social interaction-launching link that specifies data or references to data to be transmitted upon a user selection of the particular social interaction link, wherein the data identifies a video segment and links to the video at a particular anchor point.
[00163] Other implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform any of the methods described above. Yet another implementation may include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform any of the methods described above.
[00164] In yet another implementation, a method of augmenting a video with product information for objects seen in the video is described. The method includes augmenting a video file with text encoded product objects, product external links, and social interaction launching links that are tied to anchor points on a timeline of video display and that have a duration or end time on the timeline.
[00165] The method further includes that a particular product object includes a time code for an anchor point on the timeline from which appearance of the product can be replayed upon user selection. It further includes a product external link that supplements information in the particular product object with a link to a picture or other non-text encoded information about the particular product. It further includes a social interaction launching link that specifies data or references to data to be transmitted upon a user selection of a particular social interaction link, wherein the data identifies and links to the video at a particular anchor point.
[00166] Other implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform any of the methods described above. Yet another implementation may include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform any of the methods described above.
[00167] In yet another implementation, a method of giving a video watcher access to object information for objects seen in the video is described. The method includes receiving an augmented video file that links object instances and object information for the instances to a timeline for the video. It further includes running the video in a player that supports opening a supplemental interface, separate from the video, wherein the interface includes currently relevant object information consistent with the timeline. The method is enhanced by the player pausing the video and simultaneously displaying the currently relevant object information.
[00168] This method and other implementations of the technology disclosed can each optionally include one or more of the following features and/or features described in connection with additional methods disclosed. In the interest of conciseness, the combinations of features disclosed in this application are not individually enumerated and are not repeated with each base set of features. The reader will understand how features identified in this section can readily be combined with sets of base features identified as implementations.
[00169] The method further includes linking active screen regions on the timeline to the object instances and object information, wherein the active screen regions are polygon overlays of the video. It further includes retrieving visual information using references in the augmented video file. It further includes using the player to visually signal an availability of the object information in time segments.
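Because the active screen regions above are polygon overlays of the video, resolving a user selection amounts to a point-in-polygon test. A standard ray-casting check can serve as a sketch; the coordinate representation is an illustrative assumption:

```python
# Sketch of hit-testing a user selection against an active screen region,
# assuming the region is a polygon overlay given as (x, y) vertices.
def point_in_polygon(x, y, polygon):
    """Ray-casting test: True if point (x, y) lies inside the polygon."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the horizontal ray cast from (x, y)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```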
[00170] The method further includes that the supplemental interface is a dynamic XML driven interface including elements that are updatable during video streaming.
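As one hypothetical shape such a dynamic XML driven interface could take (the schema below is an assumption, not the patent's actual format), object elements can carry timeline attributes that the player evaluates as the video streams:

```python
# Sketch of XML that could drive the supplemental interface, with object
# elements keyed to the timeline. The schema is an illustrative assumption.
import xml.etree.ElementTree as ET

fragment = """
<interface video="demo">
  <object anchor="12.0" end="18.0">
    <text>Blue jacket worn by the host</text>
    <link href="https://example.com/jacket.jpg"/>
  </object>
</interface>
"""

def current_texts(xml_str, now_sec):
    """Supplemental text of objects active at the given playback time."""
    root = ET.fromstring(xml_str)
    return [obj.findtext("text")
            for obj in root.iter("object")
            if float(obj.get("anchor")) <= now_sec < float(obj.get("end"))]
```

Because the interface is data-driven, elements of this kind could be replaced or appended while streaming, which is what "updatable during video streaming" suggests.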
[00171] Other implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform any of the methods described above. Yet another implementation may include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform any of the methods described above.
[00172] In yet another implementation, a method of embedding a layer in a video that triggers actions associated with content seen on the video is described. The method includes appending an interactive object layer onto a video player that runs the video, wherein the object layer includes product instances, product information and social interactions for the instances. It further includes structuring the object layer on one or more predefined virtual grids of the video, placing data holders in the object layer by marking in and out points on a timeline of the video, storing information of one or more actions in the data holders, associating the actions with one or more cells on the virtual grids, and executing the actions in response to a user selection across the cells.
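The grid scheme in paragraph [00172] can be sketched as follows; the data holder structure and the (row, column) cell addressing are illustrative assumptions:

```python
# Sketch of the virtual-grid scheme: data holders marked with in/out
# points on the timeline, with actions keyed to grid cells. All
# structures here are illustrative assumptions.
class DataHolder:
    def __init__(self, in_sec, out_sec, cells, action):
        self.in_sec = in_sec     # in point on the video timeline
        self.out_sec = out_sec   # out point on the video timeline
        self.cells = set(cells)  # (row, col) cells on the virtual grid
        self.action = action     # callable executed on user selection

def handle_selection(holders, now_sec, cell):
    """Execute actions of holders active at now_sec that cover the cell."""
    results = []
    for h in holders:
        if h.in_sec <= now_sec < h.out_sec and cell in h.cells:
            results.append(h.action())
    return results
```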
[00173] What is claimed is:

Claims

1. A method of augmenting a video with product information for content seen on the video, the method including:
augmenting a video file with text encoded layers of content objects with text encoded supplemental information or external links that are tied to anchor points on a timeline of video display and that have a duration or end time on the timeline;
wherein the augmented file provides information that a video player that recognizes the augmentation can use to display the supplemental information on request and to retrieve information from the external links on request; and
whereby the user can control flow of the video and access to information available via the content objects.
2. The method of claim 1, wherein the layers of content objects include a location layer, music layer, social layer, purchase layer, and search layer.
3. The method of claim 2, wherein the location layer identifies a location appearing in the video, wherein the location is automatically viewed on a map upon user selection.
4. The method of claim 2, wherein the music layer provides a link to an audio recording and information about the audio recording, wherein the audio recording is automatically played back upon user selection.
5. The method of claim 2, wherein the social layer provides user-editable links that create postings to social media networks.
6. The method of claim 2, wherein the purchase layer provides information about a product appearing in the video, wherein the information includes links to websites that sell the product.
7. The method of claim 2, wherein the search layer provides query links to search engines, wherein resulting queries can be dynamically modified with user specific information.
8. A method of augmenting a video with product information for content seen on the video, the method including:
augmenting a video file with text encoded content objects, content external links, and social interaction launching links that are tied to anchor points on a timeline of video display and that have a duration or end time on the timeline;
wherein a particular content object includes text encoded supplemental information related to the content;
wherein a particular content external link supplements information in the particular content object with a link to a picture or other non-text encoded information about the particular content; and
wherein a particular social interaction launching link specifies data or references to data to be transmitted upon a user selection of the particular social interaction link, wherein the data identifies a video segment and links to the video at a particular anchor point.
9. A method of augmenting a video with product information for objects seen in the video, the method including:
augmenting a video file with text encoded product objects, product external links, and social interaction launching links that are tied to anchor points on a timeline of video display and that have a duration or end time on the timeline;
wherein a particular product object includes a time code for an anchor point on the timeline from which appearance of the product can be replayed upon user selection;
wherein a particular product external link supplements information in the particular product object with a link to a picture or other non-text encoded information about the particular product; and
wherein a particular social interaction launching link specifies data or references to data to be transmitted upon a user selection of a particular social interaction link, wherein the data identifies and links to the video at a particular anchor point.
10. A method of giving a video watcher access to object information for objects seen in the video, the method including:
receiving an augmented video file that links object instances and object information for the instances to a timeline for the video;
running the video in a player that supports opening a supplemental interface, separate from the video, wherein the interface includes currently relevant object information consistent with the timeline; and
enhanced by the player pausing the video and simultaneously displaying the currently relevant object information.
11. The method of claim 10, further including:
linking active screen regions on the timeline to the object instances and object information, wherein the active screen regions are polygon overlays of the video.
12. The method of claim 10, further including:
retrieving visual information using references in the augmented video file.
13. The method of claim 10, further including:
using the player to visually signal an availability of the object information in time segments.
14. The method of claim 10, wherein the supplemental interface is a dynamic XML driven interface including elements that are updatable during video streaming.
15. A method of embedding a layer in a video that triggers actions associated with content seen on the video, the method including:
appending an interactive object layer onto a video player that runs the video, wherein the object layer includes product instances, product information and social interactions for the instances;
structuring the object layer on one or more predefined virtual grids of the video;
placing data holders in the object layer by marking in and out points on a timeline of the video;
storing information of one or more actions in the data holders;
associating the actions with one or more cells on the virtual grids; and
executing the actions in response to a user selection across the cells.
16. A system for augmenting a video with product information for content seen on the video, the system including:
a processor and a computer readable storage medium storing computer instructions configured to cause the processor to augment a video file with text encoded layers of content objects with text encoded supplemental information or external links that are tied to anchor points on a timeline of video display and that have a duration or end time on the timeline;
wherein the augmented file provides information that a video player that recognizes the augmentation can use to display the supplemental information on request and to retrieve information from the external links on request; and
whereby the user can control flow of the video and access to information available via the content objects.
17. The system of claim 16, wherein the layers of content objects include a location layer, music layer, social layer, purchase layer, and search layer.
18. The system of claim 17, wherein the social layer provides user-editable links that create postings on social media networks.
19. The system of claim 17, wherein the purchase layer provides information about a product appearing in the video, wherein the information includes links to websites that sell the product.
20. A system for augmenting a video with product information for content seen on the video, the system including:
a processor and a computer readable storage medium storing computer instructions configured to cause the processor to augment a video file with text encoded content objects, content external links, and social interaction launching links that are tied to anchor points on a timeline of video display and that have a duration or end time on the timeline;
wherein a particular content object includes text encoded supplemental information related to the content;
wherein a particular content external link supplements information in the particular content object with a link to a picture or other non-text encoded information about the particular content; and
wherein a particular social interaction launching link specifies data or references to data to be transmitted upon a user selection of the particular social interaction link, wherein the data identifies a video segment and links to the video at a particular anchor point.
21. A system for augmenting a video with product information for objects seen in the video, the system including:
a processor and a computer readable storage medium storing computer instructions configured to cause the processor to augment a video file with text encoded product objects, product external links, and social interaction launching links that are tied to anchor points on a timeline of video display and that have a duration or end time on the timeline;
wherein a particular product object includes a time code for an anchor point on the timeline from which appearance of the product can be replayed upon user selection;
wherein a particular product external link supplements information in the particular product object with a link to a picture or other non-text encoded information about the particular product; and
wherein a particular social interaction launching link specifies data or references to data to be transmitted upon a user selection of a particular social interaction link, wherein the data identifies and links to the video at a particular anchor point.
22. A system for giving a video watcher access to object information for objects seen in the video, the system including:
a processor and a computer readable storage medium storing computer instructions configured to cause the processor to:
receive an augmented video file that links object instances and object information for the instances to a timeline for the video;
run the video in a player that supports opening a supplemental interface, separate from the video, wherein the interface includes currently relevant object information consistent with the timeline; and
use the player for pausing the video and simultaneously displaying the currently relevant object information.
23. The system of claim 22, further configured to cause the processor to:
link active screen regions on the timeline to the object instances and object information, wherein the active screen regions are polygon overlays of the video.
24. The system of claim 22, wherein the supplemental interface is a dynamic XML driven interface including elements that are updatable during video streaming.
25. A system for embedding a layer in a video that triggers actions associated with content seen on the video, the system including:
a processor and a computer readable storage medium storing computer instructions configured to cause the processor to:
append an interactive object layer onto a video player that runs the video, wherein the object layer includes product instances, product information and social interactions for the instances;
structure the object layer on one or more predefined virtual grids of the video;
place data holders in the object layer by marking in and out points on a timeline of the video;
store information of one or more actions in the data holders;
associate the actions with one or more cells on the virtual grids; and
execute the actions in response to a user selection across the cells.
PCT/US2013/030584 2012-03-12 2013-03-12 Interactive overlay object layer for online media WO2013138370A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201261609869P 2012-03-12 2012-03-12
US61/609,869 2012-03-12

Publications (1)

Publication Number Publication Date
WO2013138370A1 true WO2013138370A1 (en) 2013-09-19

Family

ID=49161735

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2013/030584 WO2013138370A1 (en) 2012-03-12 2013-03-12 Interactive overlay object layer for online media

Country Status (1)

Country Link
WO (1) WO2013138370A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008017228A (en) * 2006-07-06 2008-01-24 Toshiba Corp Commodity information provision method, and video playback unit
KR20110025261A (en) * 2009-09-04 2011-03-10 브로드밴드미디어주식회사 Interactive broadcasting service system and method for providing interactive service hidden in content screen
US20110138416A1 (en) * 2009-12-04 2011-06-09 Lg Electronics Inc. Augmented remote controller and method for operating the same
KR20110073211A (en) * 2009-12-21 2011-06-29 주식회사 케이티 Method and apparatus for providing iptv object information service
US20110199479A1 (en) * 2010-02-12 2011-08-18 Apple Inc. Augmented reality maps

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9767854B2 (en) 2013-01-23 2017-09-19 Steven Schoenwald Video content distribution package
US9380324B2 (en) 2013-01-23 2016-06-28 Steven Schoenwald Video content distribution package
US9830621B2 (en) 2013-03-14 2017-11-28 Vdopia Inc. Systems and methods for layering content
WO2015171287A1 (en) * 2014-05-06 2015-11-12 At&T Intellectual Property I, L.P. Embedding interactive objects into a video session
US10452704B2 (en) * 2014-11-25 2019-10-22 International Business Machines Corporation Media content search based on a relationship type and a relationship strength
US10417271B2 (en) * 2014-11-25 2019-09-17 International Business Machines Corporation Media content search based on a relationship type and a relationship strength
US20160147868A1 (en) * 2014-11-25 2016-05-26 International Business Machines Corporation Media content search based on a relationship type and a relationship strength
US20160147735A1 (en) * 2014-11-25 2016-05-26 International Business Machines Corporation Media content search based on a relationship type and a relationship strength
US9837124B2 (en) 2015-06-30 2017-12-05 Microsoft Technology Licensing, Llc Layered interactive video platform for interactive video experiences
CN107736033A (en) * 2015-06-30 2018-02-23 微软技术许可有限责任公司 layered interactive video platform for interactive video experience
CN107736033B (en) * 2015-06-30 2020-08-11 微软技术许可有限责任公司 Layered interactive video platform for interactive video experience
WO2017004059A1 (en) * 2015-06-30 2017-01-05 Microsoft Technology Licensing, Llc Layered interactive video platform for interactive video experiences
US10185468B2 (en) 2015-09-23 2019-01-22 Microsoft Technology Licensing, Llc Animation editor
US11856043B2 (en) 2016-05-11 2023-12-26 Ebay Inc. Managing data transmissions over a network connection
US11388213B2 (en) 2016-05-11 2022-07-12 Ebay Inc. Managing data transmissions over a network connection
US10999341B2 (en) 2016-05-11 2021-05-04 Ebay Inc. Managing data transmissions over a network connection
WO2017197135A1 (en) * 2016-05-11 2017-11-16 Ebay Inc. Managing data transmissions over a network connection
US10554714B2 (en) 2016-05-11 2020-02-04 Ebay Inc. Managing data transmissions over a network connection
US20210350131A1 (en) * 2017-01-18 2021-11-11 Snap Inc. Media overlay selection system
US11836185B2 (en) * 2017-01-18 2023-12-05 Snap Inc. Media overlay selection system
US10814230B2 (en) 2017-10-12 2020-10-27 Microsoft Technology Licensing, Llc Interactive event broadcasting
WO2019074773A1 (en) * 2017-10-12 2019-04-18 Microsoft Technology Licensing, Llc Interactive event broadcasting
EP3704662A4 (en) * 2017-12-29 2020-09-16 Facebook Inc. Systems and methods for enhancing content
CN108961848A (en) * 2018-07-06 2018-12-07 深圳点猫科技有限公司 A kind of method and electronic equipment of the generation DOM element for intelligent tutoring
US11663218B2 (en) 2019-09-18 2023-05-30 Cgip Holdco, Llc Systems and methods for associating dual-path resource locators with streaming content
EP4197194A4 (en) * 2020-08-14 2024-05-08 Global Sports & Entertainment Marketing, LLC Interactive video overlay
WO2023148295A1 (en) * 2022-02-04 2023-08-10 Gokou Arnaud Method and system for accessing remote resources and services from audiovisual content
FR3132579A1 (en) * 2022-02-04 2023-08-11 Arnaud GOKOU Method and system for accessing remote resources and services from audiovisual content

Similar Documents

Publication Publication Date Title
WO2013138370A1 (en) Interactive overlay object layer for online media
US11902614B2 (en) Interactive video distribution system and video player utilizing a client server architecture
US20190174191A1 (en) System and Method for Integrating Interactive Call-To-Action, Contextual Applications with Videos
US9342212B2 (en) Systems, devices and methods for streaming multiple different media content in a digital container
US20130166382A1 (en) System For Selling Products Based On Product Collections Represented In Video
US20190268650A1 (en) Interactive video distribution system and video player utilizing a client server architecture
US20080163283A1 (en) Broadband video with synchronized highlight signals
US11388483B2 (en) Interaction overlay on video content
US20100312596A1 (en) Ecosystem for smart content tagging and interaction
US20190325474A1 (en) Shape-based advertising for electronic visual media
JP2016539589A (en) Dynamic binding of live video content
US20140344070A1 (en) Context-aware video platform systems and methods
US20180348972A1 (en) Lithe clip survey facilitation systems and methods
US11768648B2 (en) System and method for simultaneously displaying multiple GUIs via the same display
US11432046B1 (en) Interactive, personalized objects in content creator's media with e-commerce link associated therewith
US20110208583A1 (en) Advertising control system and method for motion media content
JP2017130033A (en) Generation apparatus, generation method, and generation program
WO2015118563A1 (en) A method and system for providing information on one or more frames selected from a video by a user

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13761446

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 26/02/2015)

122 Ep: pct application non-entry in european phase

Ref document number: 13761446

Country of ref document: EP

Kind code of ref document: A1