US20060036756A1 - Scalable, multi-user server and method for rendering images from interactively customizable scene information
- Publication number
- US20060036756A1 (application US 09/844,511)
- Authority
- US
- United States
- Prior art keywords
- scene
- module
- job
- image
- server
- Prior art date
- Legal status
- Abandoned
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/21—Server components or server architectures
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/958—Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
Definitions
- the invention relates generally to the fields of computer graphics and distribution of information in graphical form, generally in the form of rendered images, over networks such as the Internet.
- the invention provides a new and improved scalable, multi-user server and method for rendering images from interactively customizable three-dimensional scene information.
- networks include local area networks (LAN's) and wide area networks (WAN's).
- Some networks are private, maintained by an organization such as a corporation, government agency and the like, and may be accessed only by, for example, employees and other authorized people.
- Some networks, such as the Internet or World Wide Web are public and typically may be accessed by anyone who has access to a suitable digital device and network connection.
- a number of types of paradigms and protocols exist for transferring information over a network such as a WAN, for example the Internet and World Wide Web (generally, “Internet”), or a LAN (“Intranet”).
- One paradigm is the so-called client/server paradigm, in which some devices, which are referred to as servers, store digital information that may be retrieved by other devices, which are referred to as clients.
- information is typically transferred using the HyperText Transfer Protocol (HTTP), and Web pages are typically described in the HyperText Markup Language (HTML).
- Web pages include textual and graphical information that is to be displayed on a display provided by the client device.
- a browser provides a convenient mechanism by which a user can identify the particular item of information that is to be downloaded, by providing a “URL,” or “universal resource locator.”
- a URL identifies a computer, network domain or Web site (generally, “web site”) from which the item of information is to be retrieved, and may also specify a particular item of information that is to be retrieved.
- URL's are in relatively user-friendly form, typically identifying at least the Web site by name or a mnemonic of the name of the person or organization that maintains the Web site.
- the browser will convert at least the portion of the URL that identifies the web site to a network address, which is typically in numerical form, and will use that address to contact the Web site and establish a “connection” therewith.
- a browser may need to contact another device, referred to as a name server, that maintains a concordance between URL's and network addresses, to obtain the network address.
- after the browser has the web site's network address, it can use the network address, the identification of the particular item of information that is to be retrieved, and possibly other parameters to establish a connection with the Web site and initiate retrieval of the information item.
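To make the retrieval flow just described concrete, the following is a minimal, illustrative Python sketch (not code from the patent): a browser-like client resolves the URL's host name to a numerical network address via a name-server lookup, then requests the identified item. The function name and wire details are assumptions.

```python
# Illustrative sketch only (not from the patent): resolve a URL's host name to
# a numerical network address, connect, and request the identified item.
import socket
from urllib.parse import urlparse

def fetch(url: str) -> bytes:
    parts = urlparse(url)
    # Name-server lookup: convert the user-friendly host name to an address.
    address = socket.gethostbyname(parts.hostname)
    with socket.create_connection((address, parts.port or 80)) as conn:
        request = f"GET {parts.path or '/'} HTTP/1.0\r\nHost: {parts.hostname}\r\n\r\n"
        conn.sendall(request.encode("ascii"))
        chunks = []
        while chunk := conn.recv(4096):  # read until the server closes
            chunks.append(chunk)
    return b"".join(chunks)
```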
- a browser typically retrieves information in the form of documents or “Web pages,” which may include text and graphical images, and may also include streaming video and audio information.
- the textual information is specified in one of a number of document description languages, typically in the well-known HyperText Markup Language (HTML).
- HTML HyperText Markup Language
- the HTML description identifies the locations on the Web page at which the images or streaming video information are to be displayed and the sizes of regions of the Web page on which the respective images or video information are to be displayed.
- the HTML description will provide URL's for the respective images and streaming video information.
- if the Web page is to be displayed along with audio information, the HTML description will specify the audio information that is to be played.
- as the browser displays the Web page on the computer's video display screen, it will display the text as specified in the HTML description, in the process reserving regions of the displayed Web page on which the respective images are to be displayed. In addition, the browser will retrieve the graphical images, using the URL's provided in the HTML description in a manner similar to that described above, and display them in the regions on the video display screen that were reserved therefor. Furthermore, if streaming video information is to be displayed, the browser can initiate retrieval of the streaming video information either while displaying the other elements of the Web page or at some point after the Web page has been displayed. To initiate that retrieval, the user may need to perform some action, such as actuating a pushbutton displayed on the Web page.
- a pushbutton can be actuated in any of a number of ways, including clicking on it using a pointing device such as, for example, a mouse, pressing on the region of a touch screen on which the pushbutton is displayed by, for example, a stylus, or any other mechanism for actuating a pushbutton displayed on a video display screen as will be appreciated by those skilled in the art.
- Audio information may be retrieved in a manner similar to the streaming video information and played through an audio reproduction device, such as a speaker, provided with the computer.
- a Web page may also be associated with programs, termed “applets,” that may be retrieved with the other types of information and executed under control of the browser.
- the Web pages that are currently displayed by browsers are static documents. That is, a user, through the browser, requests a Web page, and the browser retrieves the information associated with the Web page and displays it. When the Web site has provided the information associated with the Web page, that essentially ends the transaction between the browser and the Web site in relation to that Web page. If the user wishes to retrieve another Web page from the same Web site, he or she may do so by, for example, entering another URL or actuating a link on the Web page that is currently being displayed, which will initiate another transaction.
- a user cannot modify or customize the way a Web page is displayed, unless an image depicts a scene that is to be displayed in three-dimensional form, in, for example, VRML or X3D format.
- by actuating controls that may be provided on the Web page, a user can enable the three-dimensional scene to be displayed from a number of orientations. While this can be useful in some situations, there are a number of limitations that make it less than optimum.
- the amount of information required to define objects in a three-dimensional scene in any significant degree of detail can be quite large, and, given bandwidth limitations that are typical in many connections to the World Wide Web, it would require an inordinate amount of time to retrieve the information required to display a three-dimensional scene of any significant degree of detail. Accordingly, the amount of image information for three-dimensional scenes is typically limited so that the scenes have only a few relatively small objects and textures, with an extremely limited range of illumination and surface property effects.
- although a user can change the viewpoint from which the scene is displayed, he or she cannot change the orientation or a number of other characteristics of the objects in the scene. This can be a significant limitation if, for example, the manufacturer is an automobile manufacturer and a user wishes to, for example, open a door of a depicted automobile.
- the amount of information that would be necessary to allow a user to perform such operations may require a significant amount of time to transfer.
- the amount of information that may be required may constitute a significant amount of the design information for the object(s) in the scene, which may be confidential.
- the invention provides a new and improved scalable, multi-user server and method for rendering images from interactively customizable three-dimensional scene information.
- the invention provides a server for use in connection with a network including at least one client and a communication link interconnecting the client and server.
- the server comprises an image rendering module and an interface.
- the image rendering module is configured to render, from three-dimensional scene data representing a scene, a two-dimensional image.
- the interface is configured to transmit the two-dimensional image over the communication link to the client.
- the server is also provided with a user interaction control module that regulates interactions between the server, in particular the image rendering module, and the respective clients that may be using the server concurrently, so as to control the images in which the customizations requested by the respective clients are rendered.
- FIG. 1 is a functional block diagram of an arrangement including a scalable multi-user server that provides for rendering of images based on scenes that can be interactively customized by clients, constructed in accordance with the invention;
- FIG. 2 is a functional block diagram of the server depicted in FIG. 1 ;
- FIG. 3 is a flow diagram useful in understanding operations performed by a user manager in the server depicted in FIG. 2 .
- FIG. 1 is a functional block diagram of an arrangement 10 including a scalable, multi-user server 11 that provides for rendering of images based on scenes that can be interactively customized by clients, constructed in accordance with the invention.
- the server 11 provides web pages, including text and images, individual images, sequences of images, streams of images that provide for the perception of continuous motion of rendered scene elements (generally, “streaming video”), and the like to the respective clients.
- arrangement 10 includes a plurality of client devices 12 A, . . . , 12 N (generally identified by reference numeral 12 n ) that can access the server 11 over a network 13 .
- the network 13 is a wide area network (WAN) such as the Internet and World Wide Web, but it will be appreciated that an arrangement in accordance with the invention can include any form of network, including local area networks (LAN's).
- the client devices 12 n may be any kind of information utilization devices that may receive, utilize and display information in digital form, including computers such as personal computers, workstations, personal digital assistants (PDA's), cellular telephones and the like.
- the server 11 can be implemented using, for example, a suitably-programmed computer.
- the client devices 12 n and server 11 communicate over the network 13 according to a client/server communication model.
- a client, such as a client device 12 n , generates an information retrieval request that requests retrieval of a particular item or items of information from a server, such as the server 11 , and transmits it to the server over the network 13 .
- the information retrieval request may be generated in response to input provided by an operator, in response to a request generated by a program, or in response to other occurrences as will be apparent to those skilled in the art.
- when the server receives an information retrieval request from a client, depending on the information whose retrieval is being requested, it may obtain the information item(s) from, for example, a database that it maintains, and transmit them to the client over the network 13 .
- when the client receives the information item(s), it can make use of the item(s) in any of a number of ways. If, for example, the information item(s) comprise a Web page, the client device can display the Web page on a display device, store the Web page in a storage device, provide the web page through a suitable editor to a user to allow him or her to edit the Web page and so forth.
- the uses to which the client device 12 n may put other types of information item(s) will be apparent to those skilled in the art.
- a client device 12 n may need to generate multiple information retrieval requests each requesting retrieval of one or more of the Web page's components. All of the information retrieval requests for the various components of the Web page may be transmitted to the same server, such as server 11 , for response. On the other hand, one or more of the information retrieval requests for various ones of the components of the Web page may be transmitted to other servers (not shown), which can provide the respective components. As the respective client 12 n that issues the information retrieval request(s) receives the requested information, or at some point thereafter, it can make use of the information, and, if the requested information is a Web page, display the Web page.
- the invention provides an arrangement whereby a server, such as the server 11 , can provide Web pages to client devices 12 n , which Web pages contain two-dimensional images that are rendered from three-dimensional scenes.
- the invention further provides an arrangement whereby a server, such as the server 11 , can provide images rendered from three-dimensional scenes, which scenes can, in turn, be interactively modified or customized (generally “customized”) during a session in response to customization input from a user who is using a respective client device 12 n ′, or a user who is using another client device 12 n ′′.
- a client device 12 n ′ can request customizations to the scene, and the server 11 can selectively enable the requested customizations to be depicted only in images, sequences of images, or images comprising streaming video, that are rendered for that client device 12 n ′.
- the server 11 can selectively enable the customizations requested by one client device 12 n ′ to be depicted in images, sequences of images, or images comprising streaming video, that are rendered for selected ones of the client devices 12 n ′, 12 n ′′, . . .
- the server 11 can enable the customizations requested by one client device 12 n ′ to be depicted in images, sequences of images, or images comprising streaming video, that are rendered for all of the client devices 12 n ′, 12 n ′′, . . . that are contemporaneously engaged in sessions involving the same scene.
- the images provided by the server 11 may be still images, sequences of images, streaming video, or any other form or arrangement by which images can be provided to a client device.
- the user will initially initiate retrieval of a Web page from the server 11 , which Web page will include an image.
- the Web page also provides a set of tools, which can be displayed as, for example, push buttons, dial objects, radio buttons, dialog boxes and the like as part of the Web page.
- the user, using user input devices provided by his or her client device 12 n ′, can manipulate the tools to enable customizations to be made to the scene, customizations to be made to the viewing direction, and/or other types of customizations, as will be apparent to those skilled in the art, to be made to the scene and to how an image of the scene is rendered.
- the client device 12 n ′ can transmit indicia indicating the customizations that were requested by the user to the server 11 , which, in turn, can generate a new image reflecting the customizations, and transmit the new image to the user's client device. As the client device 12 n ′ receives the new image, or at some point thereafter, it can substitute the new image for the previous image in the Web page. These operations can be repeated during a session in response to user customization requests. Similarly, if the customizations requested by a user using one client device 12 n ′ are to be depicted in images rendered for other client devices 12 n ′′, . . .
- the server 11 can render new images depicting the customizations and transmit them to the other client devices 12 n ′′, . . . for display.
- the server 11 can provide sequences of images or streaming video to the respective clients without requiring user customization requests or other requests therefrom, using a so-called “push” methodology.
- the server 11 can be used in a number of environments.
- the server 11 can be used as a server maintained by a marketer or seller of a product, and can provide Web pages containing images of the product.
- the user may wish to request customizations to the image, such as the orientation from which the product is displayed in the image, the position of the light source, the color of the product within, for example, a set of colors in which the product is offered, and/or other types of customizations, and be provided with an image with the customizations.
- types of customizations may also include changes in the configuration in which the automobile is displayed, including, for example, the positions of one or more of the doors (illustratively, open or closed) and the positions of the hood or trunk. If the automobile has a sun- or moon-roof or a convertible top, customizations may also include displaying the roof or top in a number of orientations. Since the server 11 has the database of the scene from which the image is rendered, it can readily provide rendered images with the requested customizations without needing to provide any information from the database to the user's client device 12 n ′.
- the scene database used by the server 11 can include information from the product design database maintained by the manufacturer, and, since neither the scene database nor the information from the product design database is provided to the user's client device 12 n ′, trade secret information that may be present in the product design database will not be transferred to a device that is external to the server 11 . This also reduces the amount of effort required to provide the scene database for the server 11 , since information from the product design database can generally be used directly, or with few modifications, in the scene database.
- the server 11 can be advantageously used in connection with sessions with multiple client devices 12 n ′, 12 n ′′, . . . , contemporaneously, in connection with requests for images generated for the same product and using the same scene database.
- the server 11 can selectively provide that customizations requested by a user using one client system 12 n ′ not be visible in images rendered using the same scene database for a user using another client system 12 n ′′ who has not requested similar customizations.
- the server 11 will provide an image in which the driver's side door is open only to the client system 12 n ′, reflecting the customization requested only by him or her, and not to the other client systems 12 n ′′, . . . , even in images transmitted to the other client systems 12 n ′′, . . . , subsequent to the customization requested by the user of client system 12 n ′.
- the server 11 will provide an image in which the color of the automobile is red only to the client system 12 n ′′, reflecting the customization requested only by him or her, and not to the other client systems 12 n ′, 12 n ′′′, . . . , even in images transmitted to the other client systems 12 n ′, 12 n ′′′, . . . subsequent to the customization requested by the user of client system 12 n ′′.
- the server 11 keeps track of changes to the scene database resulting from the requests from the individual client devices 12 n ′, 12 n ′′, . . . , on a client-by-client basis.
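One way to realize this client-by-client tracking, sketched here as an assumption rather than the patent's actual data structures, is to key customizations on the session UID and overlay them on the shared base scene at render time:

```python
# Sketch (assumed data structures): customizations are keyed by session UID,
# so one client's open door is not rendered into another client's images.
from collections import defaultdict

class SceneCustomizations:
    def __init__(self):
        self._per_uid = defaultdict(list)  # UID -> private customizations
        self._shared = []                  # customizations visible to everyone

    def add(self, uid, customization, private=True):
        (self._per_uid[uid] if private else self._shared).append(customization)

    def effective_edits(self, uid):
        # edits applied over the base scene when rendering for this UID
        return self._shared + self._per_uid[uid]
```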
- the server 11 can selectively provide that customizations requested by a user using one client system 12 n ′ are visible in images rendered using the same scene database for users using other client systems 12 n ′′, 12 n ′′′, . . . , concurrently engaged in sessions with the server 11 in connection with the same scene database, regardless of whether the latter users have requested similar customizations.
- the customizations may be visible in images rendered for all or a subset of the other client systems 12 n ′′, 12 n ′′′, . . . .
- server 11 will provide an image in which the driver's side door is open to all or a selected subset of the client systems 12 n ′, 12 n ′′, . . . .
- the server 11 can provide the image with the customization to the other client systems 12 n ′′, 12 n ′′′, . . .
- the server 11 can, after or contemporaneous with providing the image with the customization to the client system 12 n ′ that requested the customization, also transmit the image to all or a selected subset of the other client devices 12 n ′′, 12 n ′′′, . . . that are engaged in a session involving the same scene database, without a request therefor from the other respective client devices. In that case, after a client device 12 n ′, 12 n ′′, 12 n ′′′, . . . receives the image with the customization, it can substitute the image in the Web page.
- the server 11 can also find utility in, for example, managing cooperative or competitive efforts by a plurality of users using respective ones of the client devices 12 n ′, 12 n ′′, . . . .
- the server 11 can be advantageously used to allow a plurality of users, in diverse locations using respective client devices to cooperatively design a product.
- as the users design the product's components, the server 11 can enter information defining the components in the scene database. At some point, some or all of the information in the scene database may be converted to a product design database, which may be used in fabricating the product.
- the server 11 can be used in connection with playing of video games over the network 13 .
- the server 11 can render images of a scene used in the game from the same orientation for all of the users who are playing the game, or from unique orientations for respective ones of the users.
- the server 11 can render successive images for the various users and transmit them to their respective client devices 12 n ′, 12 n ′′, . . . for display.
- FIG. 2 depicts a functional block diagram of server 11 constructed in accordance with the invention.
- the server 11 includes a number of components including a multiplexer module 20 , a web server module 21 , a script execution module 22 , a user interaction control module 23 , a rendering control module 24 , a script store 25 , a model store 26 and a rendering engine 27 .
- the multiplexer module 20 connects to the network 13 and receives information retrieval requests from a user's client device 12 n .
- a user using client device 12 n will provide input entered through, for example, a user input device to input request information to a browser 14 , and the browser will generate one or more information retrieval requests for transmission to the server 11 requesting retrieval of a Web page.
- the multiplexer 20 receives information retrieval requests, which may be a Web page retrieval request or, as will be described below, an image retrieval request, from the network 13 .
- when the multiplexer 20 receives an information retrieval request from the network 13 , it will either respond to the request itself, or it will transfer the request to one of the web server 21 or the user interaction control module 23 .
- the multiplexer 20 will generate a response that includes a user identification (UID) for the session, and transmit the response to the client device 12 n for use by the browser 14 .
- Subsequent information retrieval requests generated by the browser 14 for transmission to the image rendering device 11 for the session will include the UID, and the image rendering device 11 will use the UID to identify the session and keep track of the particular user for which images and customizations have been requested during the session.
- after the browser 14 receives the response, including the UID, generated by the multiplexer 20 and transmitted thereby over the network 13 , the browser 14 will generate a new Web page retrieval request for transmission by the client device 12 n to the server 11 .
- the new Web page retrieval request will generally correspond to the previous Web page retrieval request, except that it will also include the UID received in the response that had previously been received from the multiplexer 20 .
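The UID handshake just described can be sketched as follows; this is an illustrative reconstruction in Python, not the patent's wire format, and the names (SESSIONS, handle_page_request) are hypothetical.

```python
# Illustrative reconstruction: the first request of a session is answered with
# a UID, and the browser reissues the request with the UID attached so the
# server can track the session.
import uuid

SESSIONS: dict[str, dict] = {}

def handle_page_request(params: dict) -> dict:
    uid = params.get("uid")
    if uid is None:
        # First request: issue a UID and have the browser retry with it.
        uid = uuid.uuid4().hex
        SESSIONS[uid] = {"customizations": []}
        return {"uid": uid, "retry_with_uid": True}
    # Subsequent requests carry the UID, so requested images and
    # customizations can be associated with this particular session.
    SESSIONS[uid]["customizations"].extend(params.get("customizations", []))
    return {"uid": uid, "page": "<html>...</html>"}
```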
- when the multiplexer 20 receives a Web page retrieval request from the client device 12 n that includes a UID, it will provide the Web page retrieval request to the web server 21 .
- the web server 21 will provide information from the request to the script execution module 22 , which, using one or more scripts from a script store 25 and information provided by the user interaction control module 23 , will generate at least a portion of a Web page for transmission by the multiplexer 20 to the client 12 n.
- the portion of the Web page that is generated by the web server 21 will include at least the textual portion of the Web page requested in the Web page retrieval request, and in one embodiment will be generated in the well-known HyperText Markup Language (HTML).
- the Web page that is generated may include links identifying, for example, one or more images that are to be displayed as part of the Web page. The links are augmented to identify the UID that was included with the Web page retrieval request.
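The augmentation of image links with the session UID might look like the following illustrative sketch; the URL layout and names are assumptions, not the patent's format.

```python
# Illustrative sketch: augment each image link with the session's UID so the
# later image retrieval request identifies its session. URL layout is assumed.
from urllib.parse import urlencode

def image_link(image_id: str, uid: str) -> str:
    return f'<img src="/render?{urlencode({"image": image_id, "uid": uid})}">'

# image_link("car_front", "3f2a") -> '<img src="/render?image=car_front&uid=3f2a">'
```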
- the script execution module 22 will also provide information to the user interaction control module 23 , including the UID that was received in the Web page retrieval request, as well as identification of any customizations to one or more images that are to be displayed as part of the Web page that were requested in the Web page retrieval request. Since the Web page retrieval request is the first retrieval request for the particular Web page for the session, the Web page retrieval request generally will not include any customizations.
- the user interaction control module 23 can perform some preliminary processing operations to prepare to render the image(s) when it receives a request therefor from the multiplexer 20 , as will be described below.
- when the client device 12 n receives the portion of the Web page as generated by the web server 21 and script execution module 22 from the server, it can provide it to the browser 14 .
- the browser 14 can display the received portion, and use the links, as augmented to identify the UID for the session, to generate one or more requests, which will typically be image retrieval requests, that are associated with respective ones of the links, to initiate retrieval of respective images for display as part of the Web page.
- Image retrieval request(s) generated by the browser 14 for the respective images will include both the image identification information from the respective links, as well as the UID with which the links were augmented, to identify the session for which the server 11 is to render the images.
- the client device 12 n transmits the image retrieval requests to the server 11 .
- the image retrieval requests will be received by the multiplexer 20 and forwarded directly to the user interaction control module 23 for processing, bypassing the web server 21 and script execution module 22 .
- the user interaction control module 23 , rendering control module 24 and the rendering engine 27 will render the respective images and provide them to the multiplexer 20 for forwarding over the network 13 to the client device 12 n .
- the rendering control module 24 will control the rendering operations in connection with information in a scene database that it maintains.
- the scene database contains information useful in connection with rendering of an image, including
- the Web page displayed by the browser 14 may provide tools or other controls that would allow the user to request customizations of the scene represented by the image.
- the browser 14 can generate a Web page retrieval request for the same Web page, but with customization information specified for at least one of the images rendered with the scene.
- the Web page retrieval request will identify, for each image for which customization information is provided, the particular image, as well as the customizations that are to be performed in connection with the scene from which the particular image is to be rendered.
- a number of types of customizations may be specified, including, for example
- the multiplexer 20 will receive the Web page retrieval request and provide it to the web server 21 .
- the web server 21 and script execution module 22 will generate the HTML portion of the requested Web page and provide it to the multiplexer 20 for transmission to the client 12 n , which will include links to the image or images that are to be displayed on the Web page, which, as before, have been augmented with the UID identifying the session for the user.
- the web server 21 and script execution module 22 will provide information to the user interaction control module 23 as to
- after the client device 12 n receives the HTML portion of the Web page from the server 11 , it will provide it to the browser 14 , which can display that portion of the Web page in the same manner as before. In addition, as before, the browser 14 will generate one or more image retrieval requests to initiate retrieval of the image(s) from the server 11 . Further, as before, the image retrieval requests will include information from the link(s) that identify the particular image(s) that are to be retrieved, as well as the UID of the session as provided in the augmented links. The image retrieval request(s) will be provided to the client device 12 n , which, in turn, will transmit them to the server 11 .
- the image retrieval request(s) will be received by the multiplexer 20 , which will provide them to the user interaction control module 23 .
- the user interaction control module 23 will enable the rendering control module 24 to render the image(s) and provide the rendered image(s) to the multiplexer 20 for transmission to the client device 12 n .
- the user interaction control module 23 will enable the rendering control module 24 to render the respective image with the customization(s), if any, that the user requested in the Web page retrieval request. In that operation, if a customization to an image is such as would require customization of the scene as stored in the scene database, the user interaction control module 23 will provide appropriate customizations to the scene database.
- the customization(s) may be such as to provide that they will be used only in connection with the image rendered for the session associated with the particular UID, or, alternatively, in connection with images rendered for all or a subset of the sessions that are contemporaneously engaged in sessions with the same scene in the scene database.
- the user interaction control module 23 can provide information thereof to the rendering control module 24 for use in rendering.
- the user interaction control module 23 can, for example, enable the rendering control module 24 to retrieve information from the model database 26 describing the object to be added to the scene. Thereafter, the user interaction control module 23 can enable the rendering control module 24 to render an image, which the rendering control module 24 will provide to the user interaction control module 23 . The user interaction control module 23 will, in turn, provide the image to the multiplexer 20 , for transmission to the client device 12 n , for display by the browser. These operations can be performed for each of the images for which image retrieval requests have been received.
- the user interaction control module 23 includes several components, including a user manager 30 , a connection manager 31 , an event manager 32 , a model manager 33 and a plurality of operators.
- the user interaction control module 23 makes use of operators to perform operations.
- One operator is a socket gateway operator 34 , which receives UID and customization information from the script execution module and image retrieval requests from the multiplexer 20 , and provides rendered images to the multiplexer 20 for transfer to respective ones of clients 12 n .
- the operators can also be linked together into a graph 35 , with the particular operators and sequence thereof in the operator graph being selected to facilitate generation of the image having the desired characteristics.
- the operator graph may be a default graph for use in rendering images using the particular model.
- operators that may be advantageously used in connection with the user interaction control module 23 may include operators of an object translation operator type, operators of an object rotation operator type, operators of a color operator type, operators of a timekeeper operator type, and operators of a render operator type.
- An operator of the translation operator type can be used to facilitate updating of the model for a scene in the scene database to translate an object in the scene by a selected distance in a selected direction.
- an operator of the object translation operator type can be used to translate an object in the scene along a path in the scene.
- the particular object that is moved, as well as the distance and direction that the object is moved, can be specified as parameters whose values are determined by the image customization information. It will be appreciated that the extent to which an object can actually be moved may be constrained by other features of the scene, including, for example other objects that may be present in the scene.
- An operator of the rotation operator type can be used to facilitate updating of the scene in the scene database to rotate an object in the scene by a selected angle around a selected axis.
- the particular object that is rotated, as well as the angle and direction that the object is rotated, as well as possibly the axis of rotation can be specified as parameters whose values are determined by the image customization information.
- the axis of rotation will comprise the axis specified by the door's hinges, which may be determined by the model of the automobile as stored in the scene database.
- the angle that the door is rotated around the axis, and the direction of rotation can be specified as parameters whose values are determined by the image customization. It will be appreciated that the angle that the object may be rotated around any particular direction may be constrained by other objects in the scene, including other components of the automobile.
- An operator of the color operator type can be used to change the color of at least a portion of the surface of an object in the scene.
- a color operator can operate by editing parameters of shaders that are provided in the scene database in response to image customization information.
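Assumed shapes for the operator types described above might look like the sketch below; the scene-database methods invoked (translate, rotate, set_shader_color) are hypothetical stand-ins for whatever scene-editing interface the server actually provides.

```python
# Sketch (assumed class shapes, not the patent's API): each operator carries
# parameters taken from the image customization information and applies an
# update to the scene database.
from dataclasses import dataclass

@dataclass
class TranslateOperator:
    object_id: str
    direction: tuple[float, float, float]
    distance: float
    def apply(self, scene):
        scene.translate(self.object_id, self.direction, self.distance)

@dataclass
class RotateOperator:
    object_id: str
    axis: tuple[float, float, float]
    angle_degrees: float
    def apply(self, scene):
        scene.rotate(self.object_id, self.axis, self.angle_degrees)

@dataclass
class ColorOperator:
    object_id: str
    color: str
    def apply(self, scene):
        # edits the parameters of the object's shaders in the scene database
        scene.set_shader_color(self.object_id, self.color)
```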
- An operator of the timekeeping operator type can be used to provide a time stamp or value to respective client devices 12 n .
- if the server 11 is being used in connection with, for example, a game, the time stamp or value as generated by the timekeeping operator can be transmitted to the client devices 12 n of all of the users who are playing the game to provide them with a common time reference.
- the time stamp or value can identify the particular time as determined by the server 11 , and can be provided to, for example, the script execution module 22 for use in the HTML portion of the Web page when it generates that portion for transmission to a client device 12 n.
- An operator of the rendering operator type can be used to initiate and control rendering of an image by the rendering engine 27 .
- an operator type can be provided to control the position, angular orientation, zoom/focal length, aperture setting, and so forth of a camera.
- an operator type can be provided to control the position, angular orientation, color, brightness, and so forth of a light source.
- a high-level operator type can build on and utilize operators of other operator types to perform compound operations such as moving an object on a motion path controlled by gravity or moving a car door between the “open” and “closed” states automatically.
- At least some of the types of operators may also be of one of two subtypes, including a private subtype and a public subtype.
- An operator of the private subtype is used to provide a customization that is only visible in the image(s) that are to be subsequently rendered of the particular scene for the particular session identified by the particular UID that is associated with the image customization information that requested the customization, or for a selected subset of UID's that are contemporaneously using the scene.
- an operator of the public subtype is used to provide a customization that is visible in the image(s) that are to be subsequently rendered of the particular scene for all or possibly a larger subset of UID's that are contemporaneously using the scene.
- the server 11 provides for several privatization levels, so that, for a lower level, a customization will be visible in images that are subsequently rendered of the particular scene for the sessions identified by a selected subset of the UID's that are contemporaneously using the scene, and, for a higher level, a customization will be visible only in the image(s) that are to be subsequently rendered of the particular scene for the particular session identified by the UID that is associated with the image customization information that requested the customization.
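A simple way to model the private/subset/public privatization levels, again as an illustrative assumption rather than the patent's representation, is a visibility test over a tagged customization:

```python
# Sketch (assumed representation): a customization tagged with a privatization
# level is visible to its owner, to a selected subset of UID's, or to all.
from dataclasses import dataclass
from enum import Enum

class Privacy(Enum):
    PRIVATE = 0   # visible only to the requesting UID
    SUBSET = 1    # visible to a selected subset of UID's
    PUBLIC = 2    # visible to all UID's using the scene

@dataclass
class Customization:
    owner_uid: str
    privacy: Privacy
    allowed_uids: frozenset = frozenset()

def visible_to(c: Customization, uid: str) -> bool:
    if c.privacy is Privacy.PUBLIC:
        return True
    if c.privacy is Privacy.SUBSET:
        return uid == c.owner_uid or uid in c.allowed_uids
    return uid == c.owner_uid
```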
- the user interaction control module 23 , in generating an operator graph 35 to facilitate rendering of an image of a scene, will instantiate operators of the required types and link them together to form the operator graph 35 .
- the user interaction control module 23 can perform these operations after it receives the image customization information and UID associated with a Web page retrieval request from the script execution module. This will allow the operator graph to be ready when an image retrieval request is received for the particular image.
- the user interaction control module 23 can initiate execution of the various operators in the graph to, in turn, initiate rendering of the image.
- the user manager 30 is provided to keep track of UID's and to convert between UID's and identifiers, referred to as RID's (rendering session identifiers), that are used to keep track of rendering sessions by the rendering control module 24 .
- the model manager 33 and connection manager 31 cooperate to create operator graphs 35 in response to image customization information provided by the script execution module 22 in response to Web page retrieval requests from the respective client devices 12 n .
- when the model manager 33 receives image customization information from the script execution module 22 relating to an image that is to be rendered, it will determine the operators of the respective types that are to be used in the operator graph 35 , instantiate the operators and determine the topology of the operator graph, that is, how the operators are to be linked together to form an operator graph 35 .
- the connection manager 31 will perform the actual linking.
- Each operator has at least one input and an output, and the model manager 33 will determine, for each input of each operator, the respective operator that is to provide a value or status information for the respective input.
- the operator that is to provide a value or status information for an input of an operator is upstream of that operator, and the operator whose input is to receive the value or status information is downstream of the operator that is to provide the value or status information.
- the connection manager 31 will perform the actual linking of the operators that have been instantiated by the model manager to form the operator graph 35 .
- the model manager 33 can determine the types of operators that are to be used from the image customization information that is provided by the script execution module 22 . For example, if the image customization information is to enable an object to be translated a predetermined distance in a particular direction, the model manager 33 can instantiate an operator of the translation operator type, and provide as parameters such information as, for example, the identification of the object that is to be translated, the direction that the object is to be translated and the displacement along the particular direction.
- the model manager 33 can instantiate an operator of the rotation operator type, and provide as parameters such information as the identification of the object that is to be rotated, the position and orientation of the axis around which the object is to be rotated, the direction around the axis that the rotation is to take place and the angle through which the object is to be rotated.
- the model manager 33 can instantiate an operator of the color operator type, and provide as a parameter the color to which the object is to be customized.
- the model manager 33 and connection manager 31 can instantiate and link corresponding operators for every object that is to be translated, rotated and whose color is to be customized, and link them into the operator graph that is to be used to control the rendering of the image associated with the image customization information.
- the particular order in which the operators are connected in an operator graph 35 may be determined by several factors, including whether the operators commute, that is, whether, if image customization information requires usage of operators of two types, the two operators can be applied in any order and provide the same result. Generally, for example, if an image customization requires operators of the translation operator, rotation operator and color operator type, operators of the respective type can be applied in any order.
- the operator of the rendering operator type will be expected to be one of the last operators in the operator graph 35 , after all of the operators instantiated to update the model of the scene in the model database 26 .
- the socket gateway 34 which, as noted above, is also an operator, will be both the first and the last operator in the operator graph 35 .
- when the multiplexer 20 receives an image retrieval request from a client 12 n , the multiplexer 20 will notify the socket gateway 34 and provide it with information identifying the UID and an identifier identifying the image that is to be rendered, to initiate execution of the operator graph to facilitate updating of the model of the scene in the model database 26 , if necessary, and rendering of the image. After the image is rendered, it will be provided to the socket gateway 34 for provision to the multiplexer 20 and transmission to the particular client device 12 n that issued the image retrieval request.
- the event manager 32 controls execution of the operators that comprise an operator graph 35 and, in doing so, manages events that occur during execution.
- the event manager 32 controls execution of the operator graph 35 according to a “data flow” paradigm, in which an operator in an operator graph 35 is executed when all of its inputs, which include both values of parameters provided by the image customization information provided in the Web page retrieval request and values and/or status information that are provided by operators that are upstream of the respective operator in the operator graph 35 , have been provided with respective values and/or status information.
- the status information may merely indicate that an upstream operator in the graph has finished execution.
- if an operator graph 35 includes operators of the translation operator, rotation operator and color operator type, to translate, rotate and change the color of the same object, followed by an operator of the rendering operator type, the event manager may enable the operators of the translation operator, rotation operator and color operator type in any order.
- the operators of the translation operator, rotation operator and color operator types will need to provide status information that indicates that they have successfully finished execution.
- as each respective operator of the translation operator, rotation operator and color operator type is executed, it will update the scene in the scene database and will generate status information to indicate when it is finished, which is provided to the rendering operator as an input.
- after all of the operators of the translation operator, rotation operator and color operator type, as well as operators of other types that may be provided, have finished execution, the event manager 32 will note that all of the inputs to the rendering operator have received status information indicating that the operators upstream thereof have successfully completed, and can then enable the operator of the rendering operator type to be executed.
- the event manager 32 can enable operator graphs comprising instantiated operators of any combination of operator types, connected in any of a number of topologies, to be executed in a similar manner.
- the event manager 32 can initiate execution of an operator graph 35 at any operator in the operator graph 35 .
- if, during execution of one operator, an input value is needed from another operator to allow for continued execution, execution of the one operator can be suspended, and the other operator executed to allow for generation of the input value that is needed by the one operator.
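The data-flow execution rule, that an operator runs only once all of its upstream inputs have been provided, can be sketched as follows; the scheduling details are an assumption, not the patent's algorithm.

```python
# Sketch of the "data flow" rule (scheduling details assumed): run an operator
# only once every operator upstream of it has finished.
def execute_graph(operators, upstream):
    """operators: objects with a run() method; upstream: maps an operator to
    the set of operators whose outputs feed its inputs."""
    done, pending = set(), list(operators)
    while pending:
        for op in pending:
            if upstream.get(op, set()) <= done:  # all inputs provided
                op.run()                         # update the scene, or render
                done.add(op)
                pending.remove(op)
                break
        else:
            raise RuntimeError("cycle or missing upstream operator")
```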
- the model manager 33 , in response to image customization information from the script execution module 22 , initiates creation of an operator graph 35 to facilitate rendering of an image of a scene with the customizations, if any, requested in a Web page retrieval request.
- the model manager 33 can enable models of objects that are stored in the model database 26 , and that may be needed in the scene database associated with a scene, to be loaded into the respective scene database.
- while an operator graph 35 is being executed to facilitate rendering of an image, the rendering control module 24 controls updating of the scene from which the image is to be rendered, as represented in the scene database, in response to execution of operators of the respective types. In addition, the rendering control module 24 controls rendering of the image during execution of a respective operator of the rendering operator type.
- the rendering control module 24 comprises an API control module 40 , a job manager 41 , a world/session/transaction manager 42 , and one or more scene databases 43 .
- the rendering engine 27 performs actual tessellating and rendering operations during execution of an operator of the rendering operator type.
- the world/session/transaction manager 42 manages “worlds,” “sessions” and “transactions.”
- a transaction bounds a set of consistent database operations in connection with the scene database 43 .
- rendering is considered a transaction because rendering requires a consistent view of the scene in the scene database 43 that must not be changed during the rendering operation.
- a modification to a scene is also a transaction, since typically a modification to the scene requires incremental changes to many scene data elements in the scene database all of which need to be performed as a unit to ensure that the scene in the scene database 43 remains consistent.
- an object that is actually present in the scene is represented by an object type and an instance, so that, if a scene contains two objects of the same object type, that can be represented in the scene database 43 by one object type and two instances, all of which are scene data elements. If it is necessary to delete an object, represented by object type and instance scene data elements, both the object type and instance scene data elements will need to be deleted as a unit. This will ensure that problems do not arise in connection with the rendering engine 27 , which can occur if, for example, during rendering of the scene, the object type scene data element has been deleted but not the instance scene data element, since at some point the rendering engine 27 will need to attempt to access the deleted object type scene data element.
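A sketch of deleting the object type and instance scene data elements as one transactional unit follows; the lock-based transaction API is an assumption, chosen only to illustrate that rendering never sees a half-applied modification.

```python
# Sketch (assumed API): scene edits run inside a transaction so that, e.g., an
# object's type element and instance element are deleted as a single unit.
from contextlib import contextmanager
from threading import Lock

class SceneDatabase:
    def __init__(self):
        self.elements = {}      # element id -> scene data element
        self._lock = Lock()

    @contextmanager
    def transaction(self):
        with self._lock:        # rendering sees no half-applied modification
            yield self

def delete_object(db: SceneDatabase, type_id: str, instance_id: str):
    with db.transaction():
        db.elements.pop(type_id, None)
        db.elements.pop(instance_id, None)  # removed together, as a unit
```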
- a session as managed by the world/session/transaction manager 42 generally corresponds to a session with a client device 12 n .
- Worlds are used to disambiguate multiple scenes that may be used for client devices 12 n that are concurrently engaged in sessions with the server 11 , and particularly may be used to disambiguate scene data elements of the respective scenes that may have similar names. For example, if two different scenes both call their cameras “cam,” if a customization to the camera is requested for a scene being used for one session, that customization should only be made in that scene, and not in the other scene.
- the API control module 40 controls updating of the scene and rendering of an image in one or more jobs, and the job manager 41 schedules the jobs based on selected criteria.
- the criteria include job age, whether a job is a prerequisite for a number of other jobs, job “cost,” and other criteria as will be described in more detail below.
- the age of a job can be a desirable criterion, since delaying processing of a job based on other criteria can delay completion of rendering of the image(s) that depend on the delayed job.
- the job “cost” criteria may be a function of other criteria including, for example, an estimate of the processing time required to execute the job, or to finish execution of the job if execution is suspended, an estimate of the amount of various processing resources, such as memory, that may be required, and the like.
- a job for which the estimate of the processing time and/or required processing resources is higher will generally have a higher cost associated therewith than a job for which the estimate is lower.
- jobs associated with higher costs will be processed on a preferential basis over jobs with lower costs, which can increase the likelihood that the jobs relating to rendering of an image, which may be processed in parallel, will be completed in less time than otherwise.
- the rendering engine 27 performs the actual rendering operations required to render the image.
- the rendering engine 27 comprises Mental Ray Version 3.0, available from Mental Images GmbH & Co. KG, Berlin, Germany, although other rendering engines, such as OpenGL, can be used.
- the rendering engine 27 performs tessellation of the scene data elements of a scene in the scene database 43 as necessary prior to rendering an image of the scene, and thereafter renders an image.
- the rendering engine 27 need not tessellate all of the scene data elements of a scene before it begins rendering an image; the rendering engine 27 can instead tessellate a portion of the scene data elements before rendering an image of the portion that has been tessellated, and repeat these operations as necessary to render the image as desired. It will be appreciated that the rendering engine 27 need only tessellate portions of those scene data elements that will be depicted in an image that is to be rendered.
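The interleaving of tessellation and rendering can be sketched as below; the callables are placeholders for the rendering engine's actual interfaces, which the patent does not spell out.

```python
# Sketch (interfaces assumed): tessellate only the scene data elements that
# will be depicted, rendering each tessellated portion before moving on.
def render_incrementally(elements, in_view, tessellate, render_portion):
    for element in elements:
        if not in_view(element):     # skip elements outside the image
            continue
        render_portion(tessellate(element))
```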
- the API control module 40 operates as the interface between the user interaction control module 23 , on the one hand, and the other elements of the rendering control module 24 and the rendering engine 27 , on the other.
- the API control module 40 receives calls from the user interaction control module 23 when an operator graph 35 is executed, and provides status information and, while an image is being rendered or after the image has been rendered, provides either the rendered portions or the entire image to the user interaction control module 23 .
- the status information provided by the API control module 40 may be used by elements of the user interaction control module 23 during execution of an operator graph 35 .
- operators comprising an operator graph 35 may be executed in any order and, if, in executing an operator, the rendering control module 24 determines that it needs an input value from an operator that is upstream of the operator being executed, the API control module 40 will provide status information to so notify the user interaction control module 23 .
- the user interaction control module 23 , in particular the event manager 32 , can enable the operator that is to provide the needed value to be executed. Any number of such sequences, one per user, may be in progress simultaneously.
- Execution of an operator generally entails performing one or more jobs.
- loading an object from the model store 26 into the scene database 43 can entail several jobs, including, for example, retrieval of data describing the object from the model store, converting the data from, for example, a form that might be used by a computer-assisted design (“CAD”) system to a form that would be useful to the rendering engine 27 , and loading the converted data into the scene database 43 and linking it to the respective scene.
- Each of these operations can be performed as a respective job, or multiple jobs.
- rendering an image can entail several jobs, including rendering a rectangle, loading a texture, tessellating a surface, and so forth.
- the job manager 41 manages the jobs that are concurrently being executed so they will be executed in an efficient manner. This will allow the server 11 to provide Web pages and rendered images to a number of clients 12 n concurrently with minimal delay.
- the job manager 41 maintains a dependency graph 44 of jobs that are to be executed, with each of the jobs being annotated with a job cost value as described above.
- when a job of a particular type is first linked into the dependency graph 44 of jobs to be executed, it can be accompanied by a job cost value that is an initial estimate.
- As the job manager 41 executes jobs of the respective job type, it can keep track of the resources that are used and update the job cost value for use when a job of the same type is subsequently executed.
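- The mechanism by which the job cost value is tracked and updated is not spelled out; the following is a minimal sketch, assuming a simple exponential moving average over the measured resource usage of completed jobs of each type (all class, method and field names here are illustrative, not taken from the specification):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical per-type job cost estimator: starts from an initial estimate
// and refines it as jobs of the same type finish executing.
public class JobCostEstimator {
    private static final double ALPHA = 0.2; // weight given to the newest measurement
    private final Map<String, Double> costByType = new ConcurrentHashMap<>();

    // Record the initial estimate supplied when a job type is first seen.
    public void seed(String jobType, double initialEstimate) {
        costByType.putIfAbsent(jobType, initialEstimate);
    }

    // Fold the measured cost of a completed job into the running estimate,
    // so the next job of the same type is scheduled with a better value.
    public void recordMeasuredCost(String jobType, double measuredCost) {
        costByType.merge(jobType, measuredCost,
                (old, neu) -> (1.0 - ALPHA) * old + ALPHA * neu);
    }

    public double estimate(String jobType) {
        return costByType.getOrDefault(jobType, 1.0);
    }
}
```

- Under this sketch, a newly created job of a known type would be annotated with estimate(jobType) when it is linked into the dependency graph 44.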
- operators can be public or private, with several levels of privacy.
- Scene elements in the scene database 43 can also be public or private, with corresponding levels of privacy. If, for example, a private translation operator is provided in an operator graph associated with a particular UID to facilitate translation of an object in a scene, the scene element(s) in the scene database 43 that represent the object as translated will also be private.
- Jobs are likewise of public and private subtypes, with corresponding levels of privacy, and the job(s) that are executed during execution of an operator will correspond to the public/private subtype of the operator for which they are executed.
- the job manager 41 enables other modules, such as the world/session/transaction manager 42 and, in particular, the rendering engine 27 , to perform the individual jobs, and in addition controls access to the scene information in the scene database 43 .
- the scene database 43 is in the form of a cache, in which design information for objects that are to be in a scene can be loaded using model information from the model store 26 .
- the job manager 41 can select data to be removed from the scene database 43 .
- a number of selection criteria can be used to determine which data is to be removed from the scene database 43 .
- One embodiment makes use of a pin counter (not shown) associated with each element of scene data.
- Each element of scene data is also associated with an access sequence value, and, when a module issues an access request for an element, an integer is incremented and provided as the element's access sequence value.
- Each scene data element whose associated pin counter has the value “zero” is a scene data element for which all of the modules that needed to use the scene data element concurrently have finished using the scene data element. In that case, the scene data element can be deleted from the scene database 43 . It will be appreciated, however, that, although no modules are then using the scene data element at that point in time, a module may subsequently need to use the scene data element.
- the job manager 41 can sort the scene data elements whose pin counters have the value “zero” by access sequence values; it will be appreciated that the least recently accessed scene data elements will be those for which access sequence values are relatively low and the most recently accessed scene data elements will be those for which access sequence values are relatively high. Preferably, the job manager 41 will select one or more scene data elements for deletion for which access sequence values are relatively low on the sorted list.
- the job manager 41 will select one or more scene data elements for deletion, from the largest to the smallest according to their scaled size values, as may be necessary to accommodate scene data elements that are to be loaded into the scene database 43 .
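- The text describes the pin counter, the access sequence sort, and the scaled size selection as criteria the job manager 41 can use; a minimal sketch combining them follows (the combination, and all names below, are assumptions made for illustration):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical scene data element bookkeeping used for cache eviction.
class SceneElement {
    int pinCount;        // number of modules currently using the element
    long accessSequence; // global counter value at the most recent access
    double scaledSize;   // element size, scaled as described in the text
    String name;
    SceneElement(String name, int pin, long seq, double size) {
        this.name = name; this.pinCount = pin;
        this.accessSequence = seq; this.scaledSize = size;
    }
}

class EvictionPolicy {
    // Candidates are elements no module is using (pin counter == 0),
    // preferring least recently accessed, then largest scaled size first.
    static List<SceneElement> selectForDeletion(List<SceneElement> all,
                                                double spaceNeeded) {
        List<SceneElement> candidates = all.stream()
                .filter(e -> e.pinCount == 0)
                .sorted(Comparator
                        .comparingLong((SceneElement e) -> e.accessSequence)
                        .thenComparing(e -> -e.scaledSize))
                .collect(Collectors.toList());
        List<SceneElement> chosen = new ArrayList<>();
        double reclaimed = 0.0;
        for (SceneElement e : candidates) {
            if (reclaimed >= spaceNeeded) break;
            chosen.add(e);
            reclaimed += e.scaledSize;
        }
        return chosen;
    }
}
```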
- the job manager 41 creates a job for linkage into the dependency graph 44 in response to a request therefor during execution of operators in an operator graph 35 , or from a module, such as rendering engine 27 or world/session/transaction manager 42 , while those modules are executing other jobs.
- a job is a request to create a result data set based on values of one or more parameters.
- the result data set will be stored in the scene database 43 .
- the parameters describe the operation to be performed during execution of the job and possibly other scene data elements that may be present in the scene database 43 .
- In response to a job creation request, the job manager 41 generates a job description data structure that includes such information as:
- Items (i), (iv), and (v) can generally be provided as parameters in the job creation request.
- the job cost value estimate (item (vi) above) can be based on a number of criteria, including, for example, the resources used by jobs of the same type that have previously been executed. Additional factors can also be used in connection with a job cost value estimate, including the number of jobs of the same type that are suspended on the thread that is expected to execute the job or that is executing the job, and other factors as will be apparent to those skilled in the art. It will be appreciated that the job manager 41 may update a job's job cost value estimate in view of changes in, for example, the additional factors noted above.
- the job manager 41 uses a number of criteria in selecting a job for execution, including the number of known unresolved prerequisites (that is, jobs that need to be executed before the respective job to provide a value therefor, but that have not yet been executed), as well as the job's job cost value estimate.
- the job manager 41 will select a job with no or relatively few unresolved prerequisites.
- If a job is a direct or indirect prerequisite for a number of other jobs, it will preferably be executed before jobs that are prerequisites for fewer jobs.
- the job manager 41 will preferably select a job with a relatively high job cost value estimate for execution, which can improve finalization parallelism, as long as the respective threads do not exceed a particular concurrent aggregate job cost value for jobs assigned thereto.
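- A minimal sketch of the selection policy described in the preceding paragraphs, combining unresolved prerequisite counts, dependent counts, job cost value estimates and a per-thread aggregate cost cap (the field and method names are illustrative, not from the specification):

```java
import java.util.List;
import java.util.Optional;

// Hypothetical runnable job record; fields mirror the selection criteria
// described in the text rather than any specified data structure.
class RunnableJob {
    int unresolvedPrerequisites; // jobs that must run first but have not
    int dependentCount;          // jobs for which this job is a prerequisite
    double costEstimate;         // current job cost value estimate
    String id;
}

class JobSelector {
    // Pick a job with no unresolved prerequisites, preferring jobs that
    // unblock many others and jobs with high cost estimates, provided the
    // target thread's aggregate cost cap would not be exceeded.
    static Optional<RunnableJob> select(List<RunnableJob> runnable,
                                        double threadAggregateCost,
                                        double threadCostCap) {
        return runnable.stream()
                .filter(j -> j.unresolvedPrerequisites == 0)
                .filter(j -> threadAggregateCost + j.costEstimate <= threadCostCap)
                .max((a, b) -> {
                    if (a.dependentCount != b.dependentCount)
                        return Integer.compare(a.dependentCount, b.dependentCount);
                    return Double.compare(a.costEstimate, b.costEstimate);
                });
    }
}
```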
- the job manager 41 will assign jobs to threads using a number of criteria, including maximizing data locality, minimizing and balancing the number of suspended jobs on a thread to keep stacks small, and other criteria as will be apparent to those skilled in the art.
- When a new prerequisite is discovered for a job that has been assigned to a thread, the job manager 41 will preferably assign the newly-discovered prerequisite to the same thread as the job for which it is a prerequisite, for similar reasons.
- the job manager 41 executes jobs as they become necessary to provide data for other jobs.
- texture image elements are created as placeholders in the scene database 43 , but they are initially empty and have no associated pixel array. When a texture image element is accessed for the first time, filling it necessitates executing an associated texture load job, and a job that is accessing the texture image element will be suspended until the texture load job is executed.
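- The lazy, suspend-until-loaded behavior just described might be sketched as follows, here using a future to stand in for the suspension of the accessing job (the class and its members are illustrative, not from the specification):

```java
import java.util.concurrent.CompletableFuture;

// Hypothetical texture placeholder: created empty, filled by a texture load
// job the first time any job tries to read its pixels. Callers block
// (conceptually, "suspend") until the load job has stored the pixel array.
class TexturePlaceholder {
    private volatile CompletableFuture<int[]> pixels; // null until first access
    private final String textureName;

    TexturePlaceholder(String textureName) { this.textureName = textureName; }

    int[] getPixels() {
        CompletableFuture<int[]> f = pixels;
        if (f == null) {
            synchronized (this) {
                if (pixels == null) {
                    // First access: schedule the associated texture load job.
                    pixels = CompletableFuture.supplyAsync(this::loadFromStore);
                }
                f = pixels;
            }
        }
        return f.join(); // the accessing job is suspended until the load finishes
    }

    private int[] loadFromStore() {
        // Stand-in for retrieving and decoding the texture from the model store.
        return new int[256 * 256];
    }
}
```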
- an object that has not yet been tessellated may initially be coarsely represented in the scene database 43 by a placeholder in the form of a bounding box.
- a bounding box may be in the form of, for example, a geometric object having surfaces in the form of, for example, triangles, quadrilaterals or the like, that coarsely bounds the actual object associated therewith.
- During rendering, a ray traced through the scene may hit the bounding box.
- a placeholder for the bounding box will already exist in the scene database 43 , but the placeholder will not contain the scene data for the actual object represented by the bounding box. Instead, the placeholder contains a pointer to the job description for the job that is provided to generate the scene data.
- In that case, a ray generated to represent the illumination would hit one of the triangles, quadrilaterals, and so forth, that comprise the bounding box in the scene.
- the scene data representing at least a portion of the actual object will need to be retrieved from the model store 26 , tessellated and stored in the scene database 43 ; the job manager 41 will enable respective jobs therefor to be executed, and a job that might be accessing the object will need to be suspended until those jobs are executed.
- the bounding box essentially operates as a placeholder for the object. The bounding box will not be deleted from the scene database 43 when the object is stored therein, since, as noted above, the object might be deleted from the scene database 43 .
- the data stored by a job in the placeholder may be actual scene data, or it may merely be status information indicating that the job has finished executing.
- the job manager 41 makes use of a state machine to control execution of each job.
- the current state of each job may be stored in the respective job's job description described above.
- the job control state machine 50 used by the job manager 41 will be described in connection with FIG. 3 .
- In response to a request to create a job, the job control state machine 50 enters a job creation state 51 . In the job creation state 51 , the job manager 41 creates a job description, as described above, and, in addition, creates a placeholder for the scene data element(s) to be associated with the job in the scene database 43 .
- a module that issues a job creation request may notify the job manager 41 of prerequisite jobs of which it is aware, either as part of the job creation request or subsequent thereto, and the job manager 41 can use that information to link the job in the dependency graph 44 .
- If a prerequisite job has not itself been created, the job manager 41 can initiate creation of the prerequisite job by issuing a job creation request therefor, in which case the prerequisite job will also be linked into the dependency graph 44 .
- After the module that issued the job creation request has provided all of the information required for the creation of the job initiated by the job creation request, including the prerequisites of which it is aware, the module can issue a job creation end request, which marks the end of the job creation state. At that point, the job enters a dormant state 52 .
- a job is not executed until a request for the scene data element that would be generated by the job is issued (reference item (ii) directly above). Accordingly, a job will remain in the dormant state 52 until such a request is made. At that point, since the scene data element is represented by a placeholder, a job execute request will be generated by the scene database 43 for the job associated with the placeholder, at which point the job will sequence to a pending state 53 . In the pending state 53 , the job manager 41 will determine whether the job's prerequisite jobs have finished execution, and, if so, the job will enter a runnable state 54 .
- In the runnable state 54 , the job manager 41 can schedule the job for execution, based on the job's estimated job cost value as described above. If, while the job is in the runnable state 54 , the job manager 41 identifies a new prerequisite for the job, the job can return to the pending state 53 until the new prerequisite has been executed.
- When the job manager 41 assigns the job to a thread, the job enters a running state 55 . In the running state, the job is actually being executed. While a job is being executed, a new prerequisite may be discovered, in which case the job manager 41 will sequence the job to a suspended state 56 until the prerequisite can be executed. If, for some reason, an error occurs during execution of the job, the job manager 41 will sequence the job to a failed state 57 . If a subsequent execution request is issued for the job, it will return to the pending state 53 .
- When the module that is executing the thread in which the job is being executed determines that the job has been successfully executed and is ready to store results in the scene database 43 , it will issue a job finished notification to the job manager 41 .
- the job manager 41 sequences the job to a finished state 58 and issues a notification to the module that it can initiate a storage operation to store the results in the scene database 43 .
- the job manager 41 can control issuance of the notification allowing the module to initiate the storage operation so as to control the timing with which the module can initiate the storage operation. This may be desirable to maintain consistency of scene data for a scene in the scene database 43 if, for example, other jobs are using the scene data for rendering.
- After receiving that notification, the module can initiate the storage operation.
- the module will initiate a storage operation, not to the placeholder(s) for the job, but to respective temporary storage location(s) in the scene database 43 .
- After the storage operation has been completed, the module will issue a job storage done request to the job manager 41 .
- the job manager 41 will sequence the job to a done state 59 .
- the job manager 41 will enable the information in the temporary storage location(s) to be transferred to the placeholder(s). In one embodiment, the transfer is accomplished by updating the pointers associated with the placeholder(s) so that, instead of pointing to the placeholder(s)' original storage locations, they point to the temporary storage location(s).
- the job manager 41 After the job manager 41 has sequenced the job to the done state 59 and updated the pointers, it can sequence the job to a paging state 60 , in which it will enable the job descriptor to be transferred from the scene database 43 to, for example, a permanent storage arrangement (not shown) such as a disk storage device. Following the paging state 60 , or following the done state 59 if the job is not sequenced to the paging state, a garbage collector maintained by the job manager 41 can delete the job description from the scene database 43 in the manner described above, at which point the job manager 41 sequences the job to a flushed state 61 . If the job manager 41 subsequently receives a job execution request requesting subsequent execution of the job, the job manager 41 can return the job to the pending state, at which point the operations described above can be repeated.
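- The lifecycle described above can be summarized as a state machine; the following sketch encodes states corresponding to reference numerals 51 through 61 and the transitions named in the text, with illustrative event names (the resumption of a suspended job to the running state once its prerequisite completes is an assumption consistent with, but not stated in, the text):

```java
// Hypothetical encoding of the job control state machine of FIG. 3; state
// names follow the reference numerals 51-61 used in the text.
enum JobState {
    CREATION, DORMANT, PENDING, RUNNABLE, RUNNING,
    SUSPENDED, FAILED, FINISHED, DONE, PAGING, FLUSHED
}

class JobStateMachine {
    private JobState state = JobState.CREATION;

    void onEvent(String event) {
        switch (state) {
            case CREATION:  if (event.equals("creationEnd")) state = JobState.DORMANT; break;
            case DORMANT:   if (event.equals("dataRequested")) state = JobState.PENDING; break;
            case PENDING:   if (event.equals("prerequisitesDone")) state = JobState.RUNNABLE; break;
            case RUNNABLE:
                if (event.equals("newPrerequisite")) state = JobState.PENDING;
                else if (event.equals("assignedToThread")) state = JobState.RUNNING;
                break;
            case RUNNING:
                if (event.equals("newPrerequisite")) state = JobState.SUSPENDED;
                else if (event.equals("error")) state = JobState.FAILED;
                else if (event.equals("jobFinished")) state = JobState.FINISHED;
                break;
            case SUSPENDED: if (event.equals("prerequisitesDone")) state = JobState.RUNNING; break;
            case FAILED:    if (event.equals("executeRequest")) state = JobState.PENDING; break;
            case FINISHED:  if (event.equals("storageDone")) state = JobState.DONE; break;
            case DONE:
                if (event.equals("pageOut")) state = JobState.PAGING;
                else if (event.equals("garbageCollected")) state = JobState.FLUSHED;
                break;
            case PAGING:    if (event.equals("garbageCollected")) state = JobState.FLUSHED; break;
            case FLUSHED:   if (event.equals("executeRequest")) state = JobState.PENDING; break;
        }
    }
}
```

- Keeping the current state in the job description, as the text suggests, would make each transition a single field update guarded by the job manager 41.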
- the invention provides a number of advantages.
- the invention provides a server, for use in connection with one or more clients, that can render images of scenes for delivery to the respective clients.
- the server can render images of the same scene for a plurality of the clients, and can allow the clients to request customizations of the scene and control those clients for which the customizations will be visible in images rendered therefor.
- the server can selectively allow customizations requested by one client to be made in a scene for which images are rendered for, for example, only that client, for a selected group of contemporaneously-connected clients, or for all contemporaneously-connected clients for which images are to be rendered of the scene.
- By rendering the images itself, instead of providing three-dimensional scene information to the respective clients, the server ensures that confidential three-dimensional scene information concerning objects in a scene is not provided to the respective clients.
- the server also manages cooperative or competitive efforts by a plurality of users using respective ones of the clients, allowing, for example, a plurality of users in diverse locations using respective client devices to cooperatively design a product, a plurality of users in diverse locations to play games, and other efforts as will be apparent to those skilled in the art.
- since rendering is performed by the server, and not by the clients, the arrangement allows clients to be used that have relatively limited storage and processing capacities, such as Web pads, personal digital assistants (PDA's), cellular telephones and the like, while still allowing for arbitrarily complex manipulation of three-dimensional scene data.
- although the server has been described as rendering images in response to image retrieval requests issued by respective clients, it will be appreciated that the server can render images in response to other events.
- the server can transmit an updated image to one client in response to an event initiated by another client. This can be useful if, for example, users of the respective clients are engaged in a collaborative design effort, playing a game, or the like.
- the server can stream updated images to one or more clients if the images are to represent, for example, video, using any convenient image transfer or streaming video transfer protocol.
- although the server has been described as providing images rendered in response to requests issued by browsers in connection with links associated with Web pages, it will be appreciated that the image may be rendered and provided in response to requests issued by other types of programs executed by the client devices, in connection with other types of request initiation mechanisms, which need not be associated with a Web page.
- tools through which a user may request customizations to a scene may be implemented using any type of program. If customizations are requested through Web pages, as described above, the tools may be efficiently implemented by means of, for example, applets provided with the Web pages, which applets may be in, for example, the well-known Java programming language.
- a server may be implemented on a single processing platform, or on multiple processing platforms.
- the scene database 43 may be implemented as a virtual shared database that is distributed across a plurality of processing platforms, and various components of the server may also be distributed across the various processing platforms.
- a system in accordance with the invention can be constructed in whole or in part from special purpose hardware or a general purpose computer system, or any combination thereof, any portion of which may be controlled by a suitable program.
- Any program may in whole or in part comprise part of or be stored on the system in a conventional manner, or it may in whole or in part be provided to the system over a network or other mechanism for transferring information in a conventional manner.
- the system may be operated and/or otherwise controlled by means of information provided by a user using user input elements (not shown) which may be connected directly to the system or which may transfer the information to the system over a network or other mechanism for transferring information in a conventional manner.
Abstract
A server for use in connection with a network including at least one client and a communication link interconnecting the client and server. The server comprises a user interaction control module, an image rendering module and an interface. The image rendering module is configured to render, from three-dimensional scene data representing a scene, a two-dimensional image. The interface is configured to transmit the two-dimensional image over the communication link to the client. The user interaction control module is configured to regulate interactions between the server, in particular the image rendering module, and respective clients who may be using the server concurrently, to control the images in which customizations requested by, for example, respective clients are rendered.
Description
- The invention relates generally to the fields of computer graphics and distribution of information in graphical form, generally in the form of rendered images, over networks such as the Internet. The invention provides a new and improved scalable, multi-user server and method for rendering images from interactively customizable three-dimensional scene information.
- Devices, such as computers, personal digital assistants (PDA's), cellular telephones, and the like, that can generate, process, display and otherwise make use of information in digital form, are often connected into networks to facilitate sharing of information thereamong. In some networks, so-called local area networks (LAN's), the networks extend over a relatively small geographic region, such as a building or group of buildings. In other networks, so-called wide area networks (WAN's), the networks extend over larger geographical regions, and may include LAN's as parts thereof. Some networks are private, maintained by an organization such as a corporation, government agency and the like, and may be accessed only by, for example, employees and other authorized people. On the other hand, some networks, such as the Internet or World Wide Web, are public and typically may be accessed by anyone who has access to a suitable digital device and network connection.
- A number of types of paradigms and protocols exist for transferring information over a network, such as a WAN such as the Internet and World Wide Web (generally, “Internet”), or a LAN (“Intranet”). One paradigm is the so-called client/server paradigm, in which some devices, which are referred to as servers, store digital information that may be retrieved by other devices, which are referred to as clients. Several protocols exist for retrieving information, including the so-called file-transfer protocol (“FTP”) for facilitating the retrieval of individual information files for, for example, later processing, and the HyperText transfer protocol (“HTTP”) for facilitating the retrieval of one or more information files, at least one of which will be in the so-called HyperText Markup Language (“HTML”), all of which constitute a Web page. Typically, Web pages include textual and graphical information that is to be displayed on a display provided by the client device.
- One popular type of program that is often used for retrieving and using information files comprising a Web page is referred to as a browser. A browser provides a convenient mechanism by which a user can identify the particular item of information that is to be downloaded, by providing a “URL,” or “universal resource locator.” A URL identifies a computer, network domain or Web site (generally, “web site”) from which the item of information is to be retrieved, and may also specify a particular item of information that is to be retrieved. Typically, URL's are in relatively user-friendly form, identifying at least the Web site by name or a mnemonic of the name of the person or organization that maintains the Web site. The browser will convert at least the portion of the URL that identifies the web site to a network address, which is typically in numerical form, which it uses to contact the Web site and establish a “connection” therewith. A browser may need to contact another device, referred to as a name server, that maintains a concordance between URL's and network addresses, to obtain the network address. After the browser has the web site's network address, it can use the network address, the identification of the particular item of information that is to be retrieved, and possibly other parameters to establish a connection with the Web site and initiate retrieval of the information item.
- A browser typically retrieves information in the form of documents or “Web pages,” which may include text and graphical images, and may also include streaming video and audio information. The textual information is specified in one of a number of document description languages, typically in the well-known HyperText Markup Language (HTML). If a Web page is to have one or more graphical images and/or video information displayed therewith, the HTML description identifies the locations on the Web page at which the images or streaming video information are to be displayed and the sizes of regions of the Web page on which the respective images or video information are to be displayed. In addition, the HTML description will provide URL's for the respective images and streaming video information. Similarly, if the Web page is to be displayed along with audio information, the HTML description will specify the audio information that is to be played.
- As the browser displays the Web page on the computer's video display screen, it will display the text as specified in the HTML description, in the process reserving regions of the displayed Web page on which the respective images are to be displayed. In addition, the browser will retrieve the graphical images, using the URL's provided in the HTML description in a manner similar to that described above, and display them in the regions on the video display screen that were reserved therefor. Furthermore, if streaming video information is to be displayed, the browser can initiate retrieval of the streaming video information either while displaying the other elements of the Web page or at some point after the Web page has been displayed. To initiate such retrieval, the user may need to perform some action, such as actuating a pushbutton displayed on the Web page. A pushbutton can be actuated in any of a number of ways, including clicking on it using a pointing device such as, for example, a mouse, pressing on the region of a touch screen on which the pushbutton is displayed by, for example, a stylus, or any other mechanism for actuating a pushbutton displayed on a video display screen as will be appreciated by those skilled in the art. Audio information may be retrieved in a manner similar to the streaming video information and played through an audio reproduction device, such as a speaker, provided with the computer.
- In addition to text, image, streaming video and audio information, a Web page may also be associated with programs, termed “applets,” that may be retrieved with the other types of information and executed under control of the browser.
- Generally, the Web pages that are currently displayed by browsers are static documents. That is, a user, through the browser, requests a Web page, and the browser retrieves the information associated with the Web page and displays it. When the Web site has provided the information associated with the Web page, that essentially ends the transaction between the browser and the Web site in relation to that Web page. If the user wishes to retrieve another Web page from the same Web site, he or she may do so by, for example, entering another URL or actuating a link on the Web page that is currently being displayed, which will initiate another transaction.
- Typically, a user cannot modify or customize the way a Web page is displayed, unless an image depicts a scene that is to be displayed in three-dimensional form, in, for example, VRML or X3D format. For such images, by actuating controls that may be provided on the Web page, a user can enable the three-dimensional scene to be displayed from a number of orientations. While this can be useful in some situations, there are a number of limitations that make it less than optimum. For example, the amount of information required to define objects in a three-dimensional scene in any significant degree of detail can be quite large, and, given bandwidth limitations that are typical in many connections to the World Wide Web, it would require an inordinate amount of time to retrieve the information required to display the three-dimensional scene if the scene has any significant degree of detail. Accordingly, typically for three-dimensional scenes, the amount of image information will be limited sufficiently so that the three-dimensional scenes have only a few relatively small objects and textures, with an extremely limited range of illumination and surface property effects. In addition, although a user can change the viewpoint from which the scene is displayed, he or she cannot change the orientation or a number of other characteristics of the objects in the scene.
- Even if the bandwidth were sufficient to enable sufficient three-dimensional scene information to be retrieved within a reasonable amount of time to facilitate display of the scene with a more photo-realistic quality, in a number of situations it may be undesirable to transfer the information to the user. For example, if a manufacturer uses the Web site to provide information about its products for, for example, potential customers, it may not wish to make information sufficient to provide photo-realistic three-dimensional images available for retrieval, since information that is sufficiently detailed to generate such images may also be sufficiently detailed to provide a significant amount of design information that may be of interest to competitors. This is particularly the case if the information is sufficiently detailed to allow a user to modify or customize the scene. For example, if the manufacturer is an automobile manufacturer, it may be desirable to allow a user not only to view the automobile from a user-selectable orientation, but also to modify or customize the scene by, for example, changing the color and texture of various surfaces, changing the positions of light sources, enabling the automobile to be displayed with doors, hood and/or trunk in an open position, and the like. The amount of information that would be necessary to allow a user to perform such operations may require a significant amount of time to transfer. In addition, the amount of information that may be required may constitute a significant amount of the design information for the object(s) in the scene, which may be confidential.
- Accordingly, it would be desirable to maintain the three-dimensional scene information on the Web site and have the Web site render two-dimensional images in orientations and with modifications and customizations of the scene as specified by the user, and transmit the two-dimensional image information to the user's browser for display. However, problems arise since the Web site will need not only to retrieve the information from the databases on which it is stored for transmission to the user's browser, but also to render the two-dimensional images from orientations and with modifications and customizations specified by the user. For example, if a number of users are accessing the Web site concurrently, the amount of processing power required to render the images in a reasonable amount of time can become quite large. In addition, problems can arise if a group of users are making use of the same scene, for whom customizations made by any of the users in the group are to be incorporated into the scene as used by all of the members of the group, since all of the customizations would need to be transmitted to all of the users and incorporated into their respective three-dimensional scenes.
- The invention provides a new and improved scalable, multi-user server and method for rendering images from interactively customizable three-dimensional scene information.
- In brief summary, the invention provides a server for use in connection with a network including at least one client and a communication link interconnecting the client and server. The server comprises an image rendering module and an interface. The image rendering module is configured to render, from three-dimensional scene data representing a scene, a two-dimensional image. The interface is configured to transmit the two-dimensional image over the communication link to the client.
- The server is also provided with a user interaction control module that regulates interactions between the server, in particular the image rendering module, and respective clients who may be using the server concurrently to control images in which customizations requested by, for example, respective clients are rendered.
- This invention is pointed out with particularity in the appended claims. The above and further advantages of this invention may be better understood by referring to the following description taken in conjunction with the accompanying drawings, in which:
- FIG. 1 is a functional block diagram of an arrangement including a scalable multi-user server that provides for rendering of images based on scenes that can be interactively customized by clients, constructed in accordance with the invention;
- FIG. 2 is a functional block diagram of the server depicted in FIG. 1; and
- FIG. 3 is a flow diagram useful in understanding operations performed by the job manager in the server depicted in FIG. 2.
- FIG. 1 is a functional block diagram of an arrangement 10 including a scalable, multi-user server 11 that provides for rendering of images based on scenes that can be interactively customized by clients, constructed in accordance with the invention. The server 11 provides web pages, including text and images, individual images, sequences of images, streams of images that provide for the perception of continuous motion of rendered scene elements (generally, “streaming video”), and the like to the respective clients. With reference to FIG. 1, in addition to the server 11, arrangement 10 includes a plurality of client devices 12A, . . . , 12N (generally identified by reference numeral 12 n) that can access the server 11 over a network 13. In one embodiment, the network 13 is a wide area network (WAN) such as the Internet and World Wide Web, but it will be appreciated that an arrangement in accordance with the invention can include any form of network, including local area networks (LAN's). The client devices 12 n may be any kind of information utilization devices that may receive, utilize and display information in digital form, including computers such as personal computers, workstations, personal digital assistants (PDA's), cellular telephones and the like. The server 11 can be implemented using, for example, a suitably-programmed computer.
- Generally, the client devices 12 n and server 11 communicate over the network 13 according to a client/server communication model. According to that model, a client, such as a client device 12 n, generates an information retrieval request that requests retrieval of a particular item or items of information from the server 11, and transmits it to a server, such as the server 11, over the network 13. The information retrieval request may be generated in response to input provided by an operator, in response to a request generated by a program, or in response to other occurrences as will be apparent to those skilled in the art. When the server receives an information retrieval request from a client, depending on the information whose retrieval is being requested, it may obtain the information item(s) from, for example, a database that it maintains, and transmit them to the client over the network 13. When the client receives the information item(s), it can make use of the item(s) in any of a number of ways. If, for example, the information item(s) comprise a Web page, the client device can display the Web page on a display device, store the Web page in a storage device, provide the web page through a suitable editor to a user to allow him or her to edit the Web page and so forth. The uses to which a client device 12 n may put other types of information item(s) will be apparent to those skilled in the art. If a Web page comprises several components, including one or more textual components, images and the like, a client device 12 n may need to generate multiple information retrieval requests each requesting retrieval of one or more of the Web page's components. All of the information retrieval requests for the various components of the Web page may be transmitted to the same server, such as server 11, for response. On the other hand, one or more of the information retrieval requests for various ones of the components of the Web page may be transmitted to other servers (not shown), which can provide the respective components. As the respective client 12 n that issues the information retrieval request(s) receives the requested information, or at some point thereafter, it can make use of the information, and, if the requested information is a Web page, display the Web page.
- The invention provides an arrangement whereby a server, such as the server 11, can provide Web pages to client devices 12 n, which Web pages contain two-dimensional images that are rendered from three-dimensional scenes. The invention further provides an arrangement whereby a server, such as the server 11, can provide images rendered from three-dimensional scenes, which scenes can, in turn, be interactively modified or customized (generally “customized”) during a session in response to customization input from a user who is using a respective client device 12 n′, or a user who is using another client device 12 n″. A number of clients 12 n′, 12 n″, . . . may request images of, for example, the same scene contemporaneously, from the same or different viewing directions, and the server 11 can efficiently render the images and transfer them to the respective clients 12 n′, 12 n″ for use thereby. A client device 12 n′ can request customizations to the scene, and the server 11 can selectively enable the requested customizations to be depicted only in images, sequences of images, or images comprising streaming video, that are rendered for that client device 12 n′. Alternatively, the server 11 can selectively enable the customizations requested by one client device 12 n′ to be depicted in images, sequences of images, or images comprising streaming video, that are rendered for selected ones of the client devices 12 n′, 12 n″, . . . that are contemporaneously engaged in sessions involving the same scene. As a further alternative, the server 11 can enable the customizations requested by one client device 12 n′ to be depicted in images, sequences of images, or images comprising streaming video, that are rendered for all of the client devices 12 n′, 12 n″, . . . that are contemporaneously engaged in sessions involving the same scene. In all cases, the images provided by the server 11 may be still images, sequences of images, streaming video, or any other form or arrangement by which images can be provided to a client device.
- Generally, during a session, the user will initially initiate retrieval of a Web page from the server 11, which Web page will include an image. The Web page also provides a set of tools, which can be displayed as, for example, push buttons, dial objects, radio buttons, dialog boxes and the like as part of the Web page. During the session, the user, using user input devices provided by his or her client device 12 n′, can manipulate the tools to request customizations to the scene, customizations to the viewing direction, and/or other types of customizations as will be apparent to those skilled in the art, which can be made to a scene and to how an image of the scene is rendered. The client device 12 n′ can transmit indicia indicating the customizations that were requested by the user to the server 11, which, in turn, can generate a new image reflecting the customizations, and transmit the new image to the user's client device. As the client device 12 n′ receives the new image, or at some point thereafter, it can substitute the new image for the previous image in the Web page. These operations can be repeated during a session in response to user customization requests. Similarly, if the customizations requested by a user using one client device 12 n′ are to be depicted in images rendered for other client devices 12 n″, . . . contemporaneously engaged in a session involving the same scene, the server 11 can render new images depicting the customizations and transmit them to the other client devices 12 n″, . . . for display. In addition, particularly in the case of image sequences, streaming video, or the like, the server 11 can provide sequences of images or streaming video to respective clients without requiring user customization requests or other requests therefrom, using a so-called “push” methodology.
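- From the client's side, the round trip just described amounts to issuing a request that carries the session UID and the customization indicia, and substituting the returned image; a minimal sketch follows, assuming an illustrative URL layout and parameter names that the patent does not define:

```java
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

// Hypothetical client-side round trip: tool manipulation produces
// customization indicia that are sent with the session UID, and the returned
// bytes replace the previously displayed image.
public class CustomizationClient {
    public static byte[] requestCustomizedImage(String host, String uid,
                                                String customization) throws Exception {
        String query = "uid=" + URLEncoder.encode(uid, "UTF-8")
                + "&customize=" + URLEncoder.encode(customization, "UTF-8");
        URL url = new URL("http://" + host + "/image?" + query);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (InputStream in = conn.getInputStream();
             ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) out.write(buf, 0, n);
            return out.toByteArray(); // new image reflecting the customization
        } finally {
            conn.disconnect();
        }
    }
}
```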
- A server, such as the server 11, can be used in a number of environments. For example, the server 11 can be used as a server maintained by a marketer or seller of a product, and can provide Web pages containing images of the product. During a session during which the user may, for example, wish to receive information concerning the product, the user may wish to request customizations to the image, such as the orientation from which the product is displayed in the image, the position of the light source, the color of the product within, for example, a set of colors in which the product is offered, and/or other types of customizations, and be provided with an image with the customizations. If the product is, for example, an automobile, types of customizations may also include changes in the manner in which the automobile is displayed, including, for example, the positions of one or more of the doors, illustratively, open or closed, and the positions of the hood or trunk. If the automobile has a sun- or moon-roof or a convertible top, customizations may also include displaying the roof or top in a number of orientations. Since the server 11 has the database of the scene from which the image is rendered, it can readily provide rendered images with the requested customizations without needing to provide any information from the database to the user's client device 12 n′. Accordingly, if the product is, for example, an automobile, the scene database used by the server 11 can include information from the product design database maintained by the manufacturer, and, since neither the scene database nor the information from the product design database is provided to the user's client device 12 n′, trade secret information that may be present in the product design database will not be transferred to a device that is external to the server 11. This also reduces the amount of effort required to provide the scene database for the server 11, since information from the product design database can generally be used directly or with few modifications in the scene database.
- As noted above, the server 11 can be advantageously used in connection with sessions with multiple client devices 12 n′, 12 n″, . . . contemporaneously, in connection with requests for images generated for the same product and using the same scene database. The server 11 can selectively provide that customizations requested by a user using one client system 12 n′ not be visible in images rendered using the same scene database for a user using another client system 12 n″ who has not requested similar customizations. Thus, if, for example, only the user using client system 12 n′ has requested a customization in which, with reference to the preceding example in which server 11 is used to provide images of an automobile, the position of the driver's side door is changed from being closed to being open by a selected amount, the server 11 will provide an image in which the driver's side door is open only to the client system 12 n′, reflecting the customization requested only by him or her, and not to the other client systems 12 n″, . . . , even in images transmitted to the other client systems 12 n″, . . . subsequent to the customization requested by the user of client system 12 n′. Similarly, if the user using client system 12 n″ has requested a customization in which the color of the automobile is changed from white to red, the server 11 will provide an image in which the color of the automobile is red, reflecting the customization requested only by him or her, and not to the other client systems 12 n′, 12 n′″, . . . , even in images transmitted to the other client systems 12 n′, 12 n′″, . . . subsequent to the customization requested by the user of client system 12 n″. In that case, the server 11 keeps track of changes to the scene database resulting from the requests from the individual client devices 12 n′, 12 n″, . . . , on a client-by-client basis.
- On the other hand, as also noted above, the server 11 can selectively provide that customizations requested by a user using one client system 12 n′ are visible in images rendered using the same scene database for users using other client systems 12 n″, 12 n′″, . . . , concurrently engaged in sessions with the server 11 in connection with the same scene database, regardless of whether the latter users have requested similar customizations. The customizations may be visible in images rendered for all or a subset of the other client systems 12 n″, 12 n′″, . . . . Thus, in that case, if a user using client system 12 n′ has requested a customization in which, with continued reference to the preceding example in which server 11 is used to provide images of an automobile, the position of the driver's side door is changed from being closed to being open by a selected amount, the server 11 will provide an image in which the driver's side door is open to all or a selected subset of the client systems 12 n′, 12 n″, . . . . The server 11 can provide the image with the customization to the other client systems 12 n″, 12 n′″, . . . that are contemporaneously engaged in a session with the server 11 in connection with the same scene database in response to a request therefor from each respective other client system 12 n″, 12 n′″, . . . , which request may also include a request for a further customization. Alternatively, the server 11 can, after or contemporaneously with providing the image with the customization to the client system 12 n′ that requested the customization, also transmit the image to all or a selected subset of the other client devices 12 n″, 12 n′″, . . . that are engaged in a session involving the same scene database, without a request therefor from the other respective client devices. In that case, after a client device 12 n′, 12 n″, 12 n′″, . . . receives the image with the customization, it can substitute the image in the Web page.
- If the server 11 provides images with a customization, not only to the client system 12 n′ that requested the customization, but also to all or a selected subset of the other client devices 12 n″, 12 n′″, . . . that are engaged in a session involving the same scene database, without a request therefor from the other respective client devices, the server 11 can also find utility in, for example, managing cooperative or competitive efforts by a plurality of users using respective ones of the client devices 12 n′, 12 n″, . . . . For example, the server 11 can be advantageously used to allow a plurality of users in diverse locations using respective client devices to cooperatively design a product. In that case, as the users enter customizations to a product, which can include, for example, providing an initial design for one or more components of the product, the server 11 can enter information defining the components in the scene database. At some point, some or all of the information in the scene database may be converted to a product design database, which may be used in fabricating the product.
- In addition, the server 11 can be used in connection with playing of video games over the network 13. In that case, the server 11 can render images of a scene used in the game from the same orientation for all of the users who are playing the game, or from unique orientations for respective ones of the users. As the users play the game over time, the server 11 can render successive images for the various users and transmit them to their respective client devices 12 n′, 12 n″, . . . for display.
- FIG. 2 depicts a functional block diagram of server 11 constructed in accordance with the invention. With reference to FIG. 2, the server 11 includes a number of components including a multiplexer module 20, a web server module 21, a script execution module 22, a user interaction control module 23, a rendering control module 24, a script store 25, a model store 26 and a rendering engine 27. The multiplexer module 20 connects to the network 13 and receives information retrieval requests from a user's client device 12 n. Generally, a user using client device 12 n will provide input entered through, for example, a user input device to input request information to a browser 14, and the browser will generate one or more information retrieval requests for transmission to the server 11 requesting retrieval of a Web page. The multiplexer 20, in turn, receives information retrieval requests, which may be a Web page retrieval request or, as will be described below, an image retrieval request, from the network 13. When the multiplexer 20 receives an information retrieval request from the network 13, it will either respond to the Web page retrieval request itself, or it will transfer the request to one of the web server 21 or the user interaction control module 23. If the information retrieval request is the first request from the browser for a session, the multiplexer 20 will generate a response that includes a user identification (UID) for the session, and transmit the response to the client device 12 n for use by the browser 14. Subsequent information retrieval requests generated by the browser 14 for transmission to the image rendering device 11 for the session will include the UID, and the image rendering device 11 will use the UID to identify the session and keep track of the particular user for which images and customizations have been requested during the session.
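- The UID handling described for the multiplexer 20 could be kept in a simple session table; the following sketch is illustrative only, and the UID format is an assumption:

```java
import java.security.SecureRandom;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical multiplexer-side session table: the first request from a
// browser is answered with a freshly generated UID; later requests carry the
// UID so the server can associate them with the session's state.
public class SessionTable {
    private final SecureRandom random = new SecureRandom();
    private final Map<String, SessionState> sessions = new ConcurrentHashMap<>();

    public String openSession() {
        String uid = Long.toHexString(random.nextLong());
        sessions.put(uid, new SessionState());
        return uid; // returned to the browser in the response
    }

    public SessionState lookup(String uid) {
        return sessions.get(uid); // null if the UID is unknown
    }

    // Per-session state: requested customizations, scene bindings, and the like.
    public static class SessionState { }
}
```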
- After the browser 14 receives the response, including the UID, generated by the multiplexer 20 and transmitted thereby over the network 13, the browser 14 will generate a new Web page retrieval request for transmission by the client device 12 n to the server 11. The new Web page retrieval request will generally correspond to the previous Web page retrieval request, except that it will also include the UID received in the response that had previously been received from the multiplexer 20. When the multiplexer 20 receives a Web page retrieval request from the client device 12 n that includes a UID, it will provide the Web page retrieval request to the web server 21. The web server 21, in turn, will provide information from the request to the script execution module 22, which, using one or more scripts from a script store 25 and information provided by the user interaction control module 23, will generate at least a portion of a Web page for transmission by the multiplexer 20 to the client 12 n.
- The portion of the Web page that is generated by the web server 21 will include at least the textual portion of the Web page requested in the Web page retrieval request, and in one embodiment will be generated in the well-known HyperText Markup Language (HTML). The Web page that is generated may include links identifying, for example, one or more images that are to be displayed as part of the Web page. The links are augmented to identify the UID that was included with the Web page retrieval request. After the web server 21 and script execution module 22 have generated the portion of the Web page to be provided thereby, they can provide the Web page to the multiplexer 20 for transmission over the network 13 to the client device 12 n.
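- Augmenting the image links with the UID might look like the following sketch; the URL shape and the helper names are illustrative, not from the specification:

```java
// Hypothetical script-execution step that augments each image link on the
// generated page with the session's UID, so the browser's subsequent image
// retrieval requests identify the session.
public class PageBuilder {
    public static String imageLink(String imageId, String uid) {
        return "<img src=\"/image?id=" + imageId + "&amp;uid=" + uid + "\">";
    }

    public static String buildPage(String bodyHtml, String[] imageIds, String uid) {
        StringBuilder page = new StringBuilder("<html><body>").append(bodyHtml);
        for (String id : imageIds) {
            page.append(imageLink(id, uid)); // one augmented link per image
        }
        return page.append("</body></html>").toString();
    }
}
```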
- The script execution module 22 will also provide information to the user interaction control module 23, including the UID that was received in the Web page retrieval request, as well as identification of any customizations to one or more images that are to be displayed as part of the Web page that were requested in the Web page retrieval request. Since the Web page retrieval request is the first retrieval request for the particular Web page for the session, the Web page retrieval request generally will not include any customizations. The user interaction control module 23 can perform some preliminary processing operations to prepare to render the image(s) when it receives a request therefor from the multiplexer 20, as will be described below.
- When the client device 12 n receives the portion of the Web page as generated by the web server 21 and script execution module 22 from the server, it can provide it to the browser 14. The browser 14 can display the received portion, and use the links, as augmented to identify the UID for the session, to generate one or more requests, which will typically be image retrieval requests, that are associated with respective ones of the links, to initiate retrieval of respective images for display as part of the Web page. Image retrieval request(s) generated by the browser 14 for the respective images will include both the image identification information from the respective links, as well as the UID with which the links were augmented, to identify the session for which the server 11 is to render the images. The client device 12 n transmits the image retrieval requests to the server 11.
- The image retrieval requests will be received by the multiplexer 20 and forwarded directly to the user interaction control module 23 for processing, bypassing the web server 21 and script execution module 22. The user interaction control module 23, rendering control module 24 and the rendering engine 27 will render the respective images and provide them to the multiplexer 20 for forwarding over the network 13 to the client device 12 n. Generally, the rendering control module 24 will control the rendering operations in connection with information in a scene database that it maintains. The scene database contains information useful in connection with rendering of an image, including:
- (i) a three-dimensional representation of at least a portion of a scene from which a two-dimensional image is to be generated,
- (ii) information as to the positions and orientations of light source(s) that are used to illuminate the object(s) in the scene, and
- (iii) the position(s) and orientation(s) of camera(s) that are to be simulated in rendering of the image, relative to the object(s) in the scene, as well as information as to the optical characteristics of the camera(s), such as, for example, the camera(s) magnification or zoom settings.
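- Items (i) through (iii) suggest a scene database record along the following lines; this sketch only mirrors the enumerated contents, since the patent does not give a concrete layout, and all names below are illustrative:

```java
import java.util.List;

// Hypothetical shape of a scene database entry, mirroring items (i)-(iii) above.
class Vector3 { double x, y, z; }

class LightSource {
    Vector3 position;    // item (ii): where the light sits
    Vector3 orientation; // item (ii): where it points
}

class Camera {
    Vector3 position;    // item (iii): simulated camera placement
    Vector3 orientation;
    double zoom;         // item (iii): optical characteristics
}

class SceneRecord {
    List<Object> geometry;     // item (i): three-dimensional scene portion
    List<LightSource> lights;  // item (ii)
    List<Camera> cameras;      // item (iii)
}
```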
If a Web page retrieval request requesting an image of a scene did not include any requests for customizations, as will generally be the case for the first image retrieval request for a particular session, the user interaction control module 23 will enable the rendering control module 24 to render the image with selected characteristics, including, for example, particular portions of the scene, light source(s) in particular position(s) relative to the scene, a respective camera in a particular position and orientation and with particular optical characteristics, and the like. The characteristics can be default characteristics, the last valid characteristics of the scene as may be stored in and retrieved from a user database, or other characteristics as will be apparent to those skilled in the art. The image(s) can be provided in any convenient form including, for example, as a bitmap or compressed using any of the well-known compression methodologies. The client device 12 n will provide the image(s) to the browser 14, which can display them in regions of the displayed web page reserved therefor.
- The Web page displayed by the browser 14 may provide tools or other controls that would allow the user to request customizations of the scene represented by the image. After the user has manipulated one or more of the tools to request one or more customizations, the browser 14 can generate a Web page retrieval request for the same Web page, but with customization information specified for at least one of the images rendered with the scene. The Web page retrieval request will identify, for each image for which customization information is provided, the particular image, as well as the customizations that are to be performed in connection with the scene from which the particular image is to be rendered. A number of types of customizations may be specified, including, for example:
- (i) translation and/or rotation of one or more of the objects in the scene relative to each other or to a coordinate system;
- (ii) addition of objects to, or deletion of objects in, the scene;
- (iii) changes to the forms of the objects in the scene;
- (iv) changes of the material characteristics of objects in the scene;
- (v) the merging of a plurality of scenes to form a new scene;
- (vi) changes to the positions and orientations of light source(s) that are used to illuminate the object(s) in the scene;
- (vii) changes to the position(s) and orientation(s) of camera(s) that are to be simulated in rendering of the image, relative to the object(s) in the scene, as well as changes to the optical characteristics of the camera(s), such as, for example, the camera(s) magnification or zoom settings,
- (viii) high-level compound commands, such as “begin driving this car through a city,” and other types of customizations as will be apparent to those skilled in the art. In addition, the Web page retrieval request will include the UID provided by the
multiplexer 20 at the beginning of the session. After the browser 14 has generated the Web page retrieval request, the client device 12n will transmit the Web page retrieval request to the server 11 over the network 13.
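By way of illustration, the following sketch shows one way a client-side tool might encode the session UID and a requested customization into a Web page retrieval request. The patent does not specify a wire format, so the query-string scheme, parameter names and values below are purely hypothetical.

```python
from urllib.parse import urlencode

def build_page_request(base_url: str, uid: str, customizations: dict) -> str:
    """Append the session UID and customization parameters to a page URL.

    All parameter names here are illustrative assumptions, not a format
    taken from the patent.
    """
    params = {"uid": uid}
    # Flatten each customization, e.g. rotating object "door" by 30 degrees.
    for target, spec in customizations.items():
        for key, value in spec.items():
            params[f"{target}.{key}"] = value
    return f"{base_url}?{urlencode(params)}"

# Example: request the same page with the car door rotated open.
url = build_page_request(
    "http://server/showroom.html",
    uid="A1B2C3",
    customizations={"door": {"op": "rotate", "angle": 30}},
)
print(url)  # http://server/showroom.html?uid=A1B2C3&door.op=rotate&door.angle=30
```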
- As before, the
multiplexer 20 will receive the Web page retrieval request and provide it to the web server 21. The web server 21 and script execution module 22 will generate the HTML portion of the requested Web page, which will include links to the image or images that are to be displayed on the Web page, augmented, as before, with the UID identifying the session for the user, and provide it to the multiplexer 20 for transmission to the client 12n. In addition, the web server 21 and script execution module 22 will provide information to the user interaction control module 23 as to:
- (i) the scene(s) for which customization(s) have been requested,
- (ii) the particular customizations that were requested for the respective scenes, and
- (iii) the UID identifying the session for which the customizations have been requested. It will be appreciated that the user interaction control module 23 can use the UID to associate the requested customizations with the particular session. After the user interaction control module 23 receives the customizations that have been requested and the session for which they have been requested, it can perform some preliminary processing operations to prepare to render the image(s) with the respective customizations, as will be described below.
- After the
client device 12n receives the HTML portion of the Web page from the server 11, it will provide it to the browser 14, which can display that portion of the Web page in the same manner as before. In addition, as before, the browser 14 will generate one or more image retrieval requests to initiate retrieval of the image(s) from the server 11. Further, as before, the image retrieval requests will include information from the link(s) that identify the particular image(s) that are to be retrieved, as well as the UID of the session as provided in the augmented links. The image retrieval request(s) will be provided to the client device 12n, which, in turn, will transmit the image retrieval request(s) to the server 11. - As before, the image retrieval request(s) will be received by the
multiplexer 20, which will provide them to the user interaction control module 23. The user interaction control module 23 will enable the rendering control module 24 to render the image(s) and provide the rendered image(s) to the multiplexer 20 for transmission to the client device 12n. In rendering each image, the user interaction control module 23 will enable the rendering control module 24 to render the respective image with the customization(s), if any, that the user requested in the Web page retrieval request. In that operation, if a customization to an image would require customization of the scene as stored in the scene database, the user interaction control module 23 will provide appropriate customizations to the scene database. Depending on the environment in which the server 11 is used, the customization(s) may be applied only in connection with the image rendered for the session associated with the particular UID, or, alternatively, in connection with images rendered for all or a subset of the users that are contemporaneously engaged in sessions with the same scene in the scene database. In addition, if a customization provides a change to the scene database, including, for example, a change to the viewing orientation or to the position of the light source(s) illuminating the scene, or the addition of objects to or deletion of objects from the scene, the user interaction control module 23 can provide information thereof to the rendering control module 24 for use in rendering. If a customization requires the addition of an object to the scene, the user interaction control module 23 can, for example, enable the rendering control module 24 to retrieve information from the model database 26 describing the object to be added to the scene. Thereafter, the user interaction control module 23 can enable the rendering control module 24 to render an image, which the rendering control module 24 will provide to the user interaction control module 23. The user interaction control module 23 will, in turn, provide the image to the multiplexer 20 for transmission to the client device 12n, for display by the browser. These operations can be performed for each of the images for which image retrieval requests have been received. - With this background, the structure and operation of the user interaction control module 23 and the rendering control module 24 will be described in more detail in connection with
FIG. 2. With continued reference to FIG. 2, the user interaction control module 23 includes several components, including a user manager 30, a connection manager 31, an event manager 32, a model manager 33 and a plurality of operators. Generally, in one embodiment, the user interaction control module 23 makes use of operators to perform operations. One operator is a socket gateway operator 34, which receives UID and customization information from the script execution module 22 and image retrieval requests from the multiplexer 20, and provides rendered images to the multiplexer 20 for transfer to respective ones of the clients 12n. To facilitate rendering of an image, the operators can also be linked together into a graph 35, with the particular operators, and the sequence thereof in the operator graph, being selected to facilitate generation of an image having the desired characteristics. If, for example, an image of a scene is to be generated and the user has provided a Web page retrieval request in which no customizations of the scene have been requested, which may be the case if, for example, the Web page retrieval request was the first request for the Web page during the respective session, the operator graph may be a default graph for use in rendering images using the particular model.
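The following minimal sketch illustrates the kind of structure an operator graph 35 might have, with each operator's inputs fed by upstream operators and its outputs consumed by downstream ones. The class and attribute names are assumptions; the patent describes operators only abstractly.

```python
class Operator:
    """One node of an operator graph (names are illustrative assumptions)."""
    def __init__(self, name: str, params: dict | None = None):
        self.name = name
        self.params = params or {}           # values from customization info
        self.inputs: list["Operator"] = []   # upstream operators
        self.outputs: list["Operator"] = []  # downstream operators

    def link_to(self, downstream: "Operator") -> None:
        """Feed this operator's output to a downstream operator's input."""
        self.outputs.append(downstream)
        downstream.inputs.append(self)

# A default graph for an uncustomized scene: the socket gateway feeds a
# render operator whose output returns to the gateway for transmission,
# the gateway being both the first and the last operator in the graph.
gateway = Operator("socket_gateway")
render = Operator("render", {"scene": "showroom"})
gateway.link_to(render)
render.link_to(gateway)
```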
server 11 is used. In addition to a socket operator for use as the socket gateway, operators that may advantageously be used in connection with the user interaction control module 23 include operators of an object translation operator type, operators of an object rotation operator type, operators of a color operator type, operators of a timekeeper operator type, and operators of a render operator type. An operator of the translation operator type can be used to facilitate updating of the model for a scene in the scene database to translate an object in the scene by a selected distance in a selected direction. For example, if the server 11 is to be used in connection with a game in which objects in the scene are to be moved along a path, an operator of the object translation operator type can be used to translate an object in the scene along a path in the scene. The particular object that is moved, as well as the distance and direction that the object is moved, can be specified as parameters whose values are determined by the image customization information. It will be appreciated that the extent to which an object can actually be moved may be constrained by other features of the scene, including, for example, other objects that may be present in the scene. - An operator of the rotation operator type can be used to facilitate updating of the scene in the scene database to rotate an object in the scene by a selected angle around a selected axis. The particular object that is rotated, the angle and direction of rotation, and possibly the axis of rotation, can be specified as parameters whose values are determined by the image customization information. For example, if the particular object that is to be rotated is an automobile door, the axis of rotation will comprise the axis specified by the door's hinges, which may be determined by the model of the automobile as stored in the scene database. The angle that the door is rotated around the axis, and the direction of rotation, can be specified as parameters whose values are determined by the image customization information. It will be appreciated that the angle through which the object may be rotated in any particular direction may be constrained by other objects in the scene, including other components of the automobile.
- An operator of the color operator type can be used to change the color of at least a portion of the surface of an object in the scene. A color operator can operate by editing parameters of shaders that are provided in the scene database in response to image customization information.
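Continuing the sketch above, the translation, rotation and color operator types might be modeled as parameterized specializations of the basic operator, with their parameter values drawn from the image customization information. The class names and parameter shapes are illustrative assumptions.

```python
class TranslateOperator(Operator):
    def __init__(self, obj: str, direction: tuple, distance: float):
        super().__init__("translate", {"object": obj,
                                       "direction": direction,
                                       "distance": distance})

class RotateOperator(Operator):
    def __init__(self, obj: str, axis: tuple, angle_deg: float):
        # For an automobile door the axis would come from the hinge line
        # stored with the model in the scene database.
        super().__init__("rotate", {"object": obj,
                                    "axis": axis,
                                    "angle": angle_deg})

class ColorOperator(Operator):
    def __init__(self, obj: str, rgb: tuple):
        # Would operate by editing shader parameters in the scene database.
        super().__init__("color", {"object": obj, "rgb": rgb})
```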
- An operator of the timekeeping operator type can be used to provide a time stamp or value to
respective client devices 12n. If the server 11 is being used in connection with, for example, a game, the time stamp or value as generated by the timekeeping operator can be transmitted to the client devices 12n of all of the users who are playing the game to provide them with a common time reference. The time stamp or value can identify the particular time as determined by the server 11, and can be provided to, for example, the script execution module 22 for use in the HTML portion of the Web page when it generates that portion for transmission to a client device 12n. - An operator of the rendering operator type can be used to initiate and control rendering of an image by the
rendering engine 27. - Other types of operators useful in the
server 11 will be appreciated by those skilled in the art. For example, an operator type can be provided to control the position, angular orientation, zoom/focal length, aperture setting, and so forth of a camera. In addition, an operator type can be provided to control the position, angular orientation, color, brightness, and so forth of a light source. In addition, a high-level operator type can build on and utilize operators of other operator types to perform compound operations such as moving an object on a motion path controlled by gravity or moving a car door between the “open” and “closed” states automatically. - At least some of the types of operators may also be of one of two subtypes, including a private subtype and a public subtype. An operator of the private subtype is used to provide a customization that is only visible in the image(s) that are to be subsequently rendered of the particular scene for the particular session identified by the particular UID that is associated with the image customization information that requested the customization, or for a selected subset of UID's that are contemporaneously using the scene. On the other hand, an operator of the public subtype is used to provide a customization that is visible in the image(s) that are to be subsequently rendered of the particular scene for all or possibly a larger subset of UID's that are contemporaneously using the scene. The
server 11 provides for several privacy levels, so that, at a lower level, a customization will be visible in images that are subsequently rendered of the particular scene for the sessions identified by a selected subset of the UID's that are contemporaneously using the scene, and, at a higher level, a customization will be visible only in the image(s) that are to be subsequently rendered of the particular scene for the particular session identified by the particular UID that is associated with the image customization information that requested the customization. - The user interaction control module 23, in generating an
operator graph 35 to facilitate rendering of an image of a scene, will instantiate operators of the required types and link them together to form the operator graph 35. The user interaction control module 23 can perform these operations after it receives the image customization information and UID associated with a Web page retrieval request from the script execution module 22. This will allow the operator graph 35 to be ready when an image retrieval request is received for the particular image. After the image retrieval request is received, the user interaction control module 23 can initiate execution of the various operators in the graph to, in turn, initiate rendering of the image. - The user manager 30 keeps track of UID's and converts between UID's and identifiers, referred to as RID's (rendering session identifiers), that are used to keep track of rendering sessions by the rendering control module 24.
- The model manager 33 and connection manager 31 cooperate to create operator graphs 35 in response to image customization information provided by the script execution module 22 in connection with Web page retrieval requests from the respective client devices 12n. When the model manager 33 receives image customization information from the script execution module 22 relating to an image that is to be rendered, it will determine the operators of the respective types that are to be used in the operator graph 35, instantiate the operators and determine the topology of the operator graph, that is, how the operators are to be linked together to form an operator graph 35. Each operator has at least one input and an output, and the model manager 33 will determine, for each input of each operator, the respective operator that is to provide a value or status information for that input. The operator that is to provide a value or status information for an input of an operator is upstream of that operator, and the operator whose input is to receive the value or status information is downstream of the operator that provides it. The connection manager 31 will perform the actual linking of the operators that have been instantiated by the model manager 33 to form the operator graph 35.
- The model manager 33 can determine the types of operators that are to be used from the image customization information that is provided by the script execution module 22. For example, if the image customization information is to enable an object to be translated a predetermined distance in a particular direction, the model manager 33 can instantiate an operator of the translation operator type, and provide as parameters such information as, for example, the identification of the object that is to be translated, the direction in which the object is to be translated and the displacement along the particular direction. Similarly, if the object is to be rotated around an axis, the model manager 33 can instantiate an operator of the rotation operator type, and provide as parameters such information as the identification of the object that is to be rotated, the position and orientation of the axis around which the object is to be rotated, the direction around the axis in which the rotation is to take place and the angle through which the object is to be rotated. In addition, if the color of the object is to be customized from a default color, the model manager 33 can instantiate an operator of the color operator type, and provide as a parameter the color to which the object is to be customized.
- The model manager 33 and connection manager 31 can instantiate and link corresponding operators for every object that is to be translated, rotated or whose color is to be customized, and link them into the operator graph that is to be used to control the rendering of the image associated with the image customization information. The particular order in which the operators are connected in an operator graph 35 may be determined by several factors, including whether the operators commute, that is, whether, if the image customization information requires usage of operators of two types, the two operators can be applied in any order and provide the same result. Generally, for example, if an image customization requires operators of the translation operator, rotation operator and color operator types, operators of the respective types can be applied in any order. Generally, the operator of the rendering operator type will be expected to be one of the last operators in the operator graph 35, after all of the operators instantiated to update the model of the scene in the model database 26. The socket gateway 34, which, as noted above, is also an operator, will be both the first and the last operator in the operator graph 35. When the multiplexer 20 receives an image retrieval request from a client device 12n, the multiplexer 20 will notify the socket gateway 34 and provide information identifying the UID and an identifier identifying the image that is to be rendered to the socket gateway 34 to initiate execution of the operator graph, to facilitate updating of the model of the scene in the model database 26, if necessary, and rendering of the image. After the image is rendered, it will be provided to the socket gateway 34 for provision to the multiplexer 20 and transmission to the particular client device 12n that issued the image retrieval request.
- The event manager 32 controls execution of the operators that comprise an operator graph 35 and, in doing so, manages events that occur during execution. Generally, the event manager 32 controls execution of the operator graph 35 according to a "data flow" paradigm, in which an operator in an operator graph 35 is executed when all of its inputs, which include both values of parameters provided by the image customization information provided in the Web page retrieval request and values and/or status information that are provided by operators that are upstream of the respective operator in the operator graph 35, have been provided with respective values and/or status information. The status information may merely indicate that an upstream operator in the graph has finished execution. Accordingly, if, for example, an operator graph 35 includes operators of the translation operator, rotation operator and color operator types, to translate, rotate and change the color of the same object, followed by an operator of the rendering operator type, the event manager 32 may enable the operators of the translation operator, rotation operator and color operator types to execute in any order. Before the operator of the rendering operator type can be executed, the operators of the translation operator, rotation operator and color operator types will need to provide status information indicating that they have successfully finished execution. As each respective operator of the translation operator, rotation operator and color operator types is executed, it will update the scene in the scene database and will generate status information to indicate when it is finished, which is provided to the rendering operator as an input. After all of the operators of the translation operator, rotation operator and color operator types, as well as operators of other types that may be provided, have finished execution, the event manager 32 will note that all of the inputs to the rendering operator have received status information indicating that the operators upstream thereof have successfully completed, and can enable the operator of the rendering operator type to be executed. The event manager 32 can enable operator graphs comprising instantiated operators of any combination of operator types, connected in any of a number of topologies, to be executed in a similar manner.
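A sketch of the event manager's data-flow discipline follows, using the Operator class from the earlier sketch: an operator runs only after every operator upstream of it has posted finished status. The scheduling loop itself is an assumed mechanism; the patent specifies only the rule.

```python
def execute_graph(operators: list[Operator]) -> None:
    """Run each operator once every operator upstream of it has finished."""
    finished: set[Operator] = set()
    pending = list(operators)
    while pending:
        ready = [op for op in pending
                 if all(up in finished for up in op.inputs)]
        if not ready:
            raise RuntimeError("cycle or missing input in operator graph")
        for op in ready:                   # ready operators may run in any order
            print(f"executing {op.name}")  # placeholder for scene-database work
            finished.add(op)
            pending.remove(op)

# Translate, rotate and color commute and may run in any order; the render
# operator waits until all three have posted their finished status.
translate, rotate, color, render = (Operator("translate"), Operator("rotate"),
                                    Operator("color"), Operator("render"))
for op in (translate, rotate, color):
    op.link_to(render)
execute_graph([translate, rotate, color, render])
```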
- The event manager 32 can initiate execution of an operator graph 35 at any operator in the operator graph 35. As will be described below in more detail, if, during execution of an operator, an input value is needed from another operator to allow for continued execution, execution of the one operator can be suspended, and the other operator executed to allow for generation of the input value that is needed. By allowing for execution of operators in this manner, work can be avoided if it turns out that a value generated during execution of one operator is not needed by another operator, as may occur if, for example, rendering of an image is aborted.
- As noted above, the model manager 33, in response to image customization information from the script execution module 22, initiates creation of an operator graph 35 to facilitate rendering of an image of a scene with the customizations, if any, requested in a Web page retrieval request. In addition, the model manager 33 can enable models of objects that are stored in the model database 26, and that may be needed in the scene database associated with a scene, to be loaded into the respective scene database.
- The rendering control module 24, while an operator graph 35 is being executed to facilitate rendering of an image, controls updating of the scene from which the image is to be rendered, as represented in the scene database, in response to execution of operators of the respective types. In addition, the rendering control module 24 controls rendering of the image during execution of a respective operator of the rendering operator type. The rendering control module 24 comprises an API control module 40, a job manager 41, a world/session/transaction manager 42, and one or more scene databases 43. The rendering engine 27 performs the actual tessellating and rendering operations during execution of operators of the respective operator type. - The world/session/transaction manager 42 manages "worlds," "sessions" and "transactions." Generally, a transaction bounds a set of consistent database operations in connection with the
scene database 43. For example, rendering is considered a transaction because rendering requires a consistent view of the scene in the scene database 43 that must not be changed during the rendering operation. Similarly, a modification to a scene is also a transaction, since typically a modification to the scene requires incremental changes to many scene data elements in the scene database, all of which need to be performed as a unit to ensure that the scene in the scene database 43 remains consistent. For example, generally, an object that is actually present in the scene is represented by an object type and an instance, so that, if a scene contains two objects of the same object type, that can be represented in the scene database 43 by one object type and two instances, all of which are scene data elements. If it is necessary to delete an object, represented by object type and instance scene data elements, both the object type and instance scene data elements will need to be deleted as a unit. This ensures that problems do not arise in connection with the rendering engine 27, as they can if, for example, during rendering of the scene, the object type scene data element has been deleted but not the instance scene data element, since at some point the rendering engine 27 will attempt to access the deleted object type scene data element. - A session as managed by the world/session/transaction manager 42 generally corresponds to a session with a
client device 12n. Worlds are used to disambiguate multiple scenes that may be used for client devices 12n that are concurrently engaged in sessions with the server 11, and particularly may be used to disambiguate scene data elements of the respective scenes that may have similar names. For example, if two different scenes both call their cameras "cam," and a customization to the camera is requested for a scene being used for one session, that customization should be made only in that scene, and not in the other scene.
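The transactional behavior described above might be approximated as follows, assuming a simple reentrant lock per scene database; the world/session/transaction manager's actual mechanism is not disclosed in this form. The point illustrated is that an instance and, when it is the last one, its object type are deleted as one unit, so the rendering engine 27 never observes an instance whose object type has vanished.

```python
import threading
from contextlib import contextmanager

class SceneDatabase:
    """Toy scene store: object types plus per-scene instances of them."""
    def __init__(self) -> None:
        self.types: dict[str, dict] = {}     # object type scene data elements
        self.instances: dict[str, str] = {}  # instance name -> type name
        self._lock = threading.RLock()

    @contextmanager
    def transaction(self):
        """Bound a set of edits so renders never see a half-modified scene."""
        with self._lock:
            yield self

    def delete_object(self, instance: str) -> None:
        with self.transaction():
            type_name = self.instances.pop(instance)
            # Drop the object type together with its last instance, as a unit.
            if type_name not in self.instances.values():
                del self.types[type_name]
```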
- The API control module 40 controls updating of the scene and rendering of an image in one or more jobs, and the job manager 41 schedules the jobs based on selected criteria. In one embodiment, the criteria include such factors as job age, whether a job is a prerequisite for a number of other jobs, job "cost," and other criteria as will be described in more detail below. The age of a job can be a desirable criterion, since delaying processing of a job in favor of other criteria can delay completion of rendering of the image(s) that depend on the delayed job. The job "cost" criterion may be a function of other criteria including, for example, an estimate of the processing time required to execute the job, or to finish execution of the job if execution is suspended, an estimate of the amount of various processing resources, such as memory, that may be required, and the like. A job for which the estimate of the processing time and/or required processing resources is higher will generally have a higher cost associated therewith than a job for which the estimate is lower. In one embodiment, jobs associated with higher costs will be processed on a preferential basis over jobs with lower costs, which can increase the likelihood that the jobs relating to rendering of an image, which may be processed in parallel, can be completed in less time than otherwise.
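As a sketch of the scheduling policy just described, a job's priority might blend the number of jobs that depend on it, its age and its cost estimate. The weights below are illustrative assumptions, since the patent names the criteria but not how they are combined.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SchedulableJob:
    name: str
    age: float            # how long the job has been waiting
    dependents: int       # jobs for which this job is a prerequisite
    cost_estimate: float  # estimated job cost value

def job_priority(job: SchedulableJob) -> float:
    # Illustrative weights; the patent names the criteria, not the blend.
    return 2.0 * job.dependents + 1.0 * job.age + 0.5 * job.cost_estimate

def pick_next(runnable: list[SchedulableJob]) -> SchedulableJob:
    """Choose among jobs whose prerequisites have all been resolved."""
    return max(runnable, key=job_priority)
```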
- As noted above, the rendering engine 27 performs the actual rendering operations required to render the image. In one embodiment, the rendering engine 27 comprises Mental Ray Version 3.0, available from Mental Images G.m.b.H. & Co. KG, Berlin, Germany, although other rendering engines, such as OpenGL, can be used. Generally, the rendering engine 27 performs tessellation of the scene data elements of a scene in the scene database 43 as necessary prior to rendering an image of the scene, and thereafter renders the image. The rendering engine 27 need not tessellate all of the scene data elements of a scene before it begins rendering an image; the rendering engine 27 can instead tessellate a portion of the scene data elements, render an image of the portion that has been tessellated, and repeat these operations as necessary to render the image as desired. It will be appreciated that the rendering engine 27 need only tessellate portions of those scene data elements that will be depicted in the image that is to be rendered. - The elements of the rendering control module 24 will be described in detail in connection with
FIG. 2. As noted above, the rendering control module 24 comprises the API control module 40, the job manager 41, the world/session/transaction manager 42 and the scene database 43. The API control module 40 operates as the interface between the user interaction control module 23 and the other elements of the rendering control module 24 and the rendering engine 27. In that function, the API control module 40 receives calls from the user interaction control module 23 when an operator graph 35 is executed, provides status information and, while an image is being rendered or after the image has been rendered, provides either the portions rendered or the entire image to the user interaction control module 23. The status information provided by the API control module 40 may be used by elements of the user interaction control module 23 during execution of an operator graph 35. For example, as noted above, operators comprising an operator graph 35 may be executed in any order, and, if, in executing an operator, the rendering control module 24 determines that it needs an input value from an operator that is upstream of the operator being executed, the API control module 40 will provide status information to so notify the user interaction control module 23. After receiving the status information, the user interaction control module 23, in particular the event manager 32, can enable the operator that is to provide the needed value to be executed. Any number of such sequences, one per user, may be in progress simultaneously. - Execution of an operator generally entails performing one or more jobs. For example, loading an object from the
model store 26 into the scene database 43 can entail several jobs, including, for example, retrieval of data describing the object from the model store 26, converting the data from, for example, a form that might be used by a computer-assisted design ("CAD") system to a form that is useful to the rendering engine 27, and loading the converted data into the scene database 43 and linking it to the respective scene. Each of these operations can be performed as a respective job, or as multiple jobs. Similarly, rendering an image can entail several jobs, including rendering a rectangle, loading a texture, tessellating a surface, and so forth. The job manager 41 manages the jobs that are concurrently being executed so that they will be executed in an efficient manner. This allows the server 11 to provide Web pages and rendered images to a number of clients 12n concurrently with minimal delay. The job manager 41 maintains a dependency graph 44 of jobs that are to be executed, with each of the jobs being annotated with a job cost value as described above. When a job of a particular type is first linked into the dependency graph 44 of jobs to be executed, it can be accompanied by a job cost value that is an initial estimate. As the job manager 41 executes jobs of the respective job type, it can keep track of the resources that are used and update the job cost value for use when a job of the same type is subsequently executed. - As noted above, operators can be public or private, with several levels of privacy. Scene elements in the
scene database 43 can also be public or private, with corresponding levels of privacy. If, for example, a private translation operator is provided in an operator graph associated with a particular UID to facilitate translation of an object in a scene, the scene element(s) in the scene database 43 that represent the object as translated will also be private. In addition, jobs are of public and private subtypes, with corresponding levels of privacy, and the job(s) that are executed during execution of an operator will correspond to the public/private subtype of the operator for which they are executed.
- The job manager 41 enables other modules, such as the world/session/transaction manager 42 and, in particular, the rendering engine 27, to perform the individual jobs, and in addition controls access to the scene information in the scene database 43. In one embodiment, the scene database 43 is in the form of a cache, into which design information for objects that are to be in a scene can be loaded using model information from the model store 26. As the amount of data in the scene database 43 increases, the job manager 41 can select data to be removed from the scene database 43. A number of selection criteria can be used to determine which data is to be removed from the scene database 43. One embodiment makes use of a pin counter (not shown) associated with each element of scene data. When a module that needs to make use of an element of scene data issues an access request for the element, it increments the pin counter, and when the module is finished with the element, it decrements the pin counter. Each element of scene data is also associated with an access sequence value, and, when a module issues an access request for an element, an integer is incremented and provided as the element's access sequence value. Each scene data element whose associated pin counter has the value "zero" is a scene data element for which all of the modules that needed to use it concurrently have finished using it. In that case, the scene data element can be deleted from the scene database 43. It will be appreciated, however, that, although no modules are using the scene data element at that point in time, a module may subsequently need to use it. - Since generally the likelihood that a module will subsequently need to use a scene data element will decrease the longer it has been since it was last used, the
job manager 41 can sort the scene data elements whose pin counters have the value "zero" by access sequence value; it will be appreciated that the least recently accessed scene data elements will be those for which the access sequence values are relatively low, and the most recently accessed scene data elements will be those for which the access sequence values are relatively high. Preferably, the job manager 41 will select for deletion one or more scene data elements whose access sequence values are relatively low on the sorted list. However, since it will generally not be efficient to simply delete the oldest scene data element, which may be relatively small, the job manager 41 also takes the sizes of the respective scene data elements into account when selecting scene data elements for deletion, by scaling the scene data elements' sizes in relation to their relative positions on the sorted list and selecting for deletion the scene data elements in order of their scaled sizes. That is, if there are "n" scene data elements in the sorted list, indexed i=0, 1, 2, . . . , n−1, where i=0 is the last scene data element on the list (that is, the most recently accessed, the scene data element having the highest sequence value) and i=n−1 is the first scene data element on the list (that is, the least recently accessed, the scene data element having the lowest sequence value), the job manager 41 will generate scaled size values according to

    size_scaled,i = size_i / (n − i), for i = 0, 1, . . . , n−1.

It will be appreciated that the scaled size value size_scaled,n−1 for the first (that is, the oldest) scene data element, i=n−1, will correspond to its size size_n−1, the scaled size value size_scaled,n−2 of the second scene data element, i=n−2, will correspond to one-half its size size_n−2, and so forth, and the scaled size value size_scaled,0 of the last scene data element, i=0, will correspond to 1/n its size size_0. After generating the scaled size values, the job manager 41 will select one or more scene data elements for deletion, from the largest to the smallest according to their scaled size values size_scaled,i, as may be necessary to accommodate scene data elements that are to be loaded into the scene database 43.
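The eviction policy just described can be stated compactly in code: among scene data elements whose pin counters are zero, sort by access sequence value, scale each element's size by its position on the sorted list, and evict the largest scaled sizes first. The element attribute names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Element:
    key: str
    size: int        # bytes occupied in the scene database cache
    pin_count: int   # incremented on access, decremented when done
    access_seq: int  # global counter value at the most recent access

def select_for_eviction(elements: list[Element],
                        bytes_needed: int) -> list[Element]:
    # Only elements no module is currently using may be evicted.
    candidates = [e for e in elements if e.pin_count == 0]
    # Sort most recently accessed first, so index i = 0 gets scale 1/n and
    # the oldest element, i = n-1, keeps its full size, per the text.
    candidates.sort(key=lambda e: e.access_seq, reverse=True)
    n = len(candidates)
    scaled = sorted(((e.size / (n - i), e) for i, e in enumerate(candidates)),
                    key=lambda pair: pair[0], reverse=True)
    victims, freed = [], 0
    for _, e in scaled:
        if freed >= bytes_needed:
            break
        victims.append(e)
        freed += e.size
    return victims
```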
- The job manager 41 creates a job for linkage into the dependency graph 44 in response to a request therefor during execution of operators in an operator graph 35, or from a module, such as the rendering engine 27 or the world/session/transaction manager 42, while those modules are executing other jobs. In either case, the request is provided to the job manager 41 through the API control module 40. Generally, a job is a request to create a result data set based on the values of one or more parameters. The result data set will be storeded in the scene database 43. The parameters describe the operation to be performed during execution of the job and possibly other scene data elements that may be present in the scene database 43. In response to a job creation request, the job manager 41 generates a job description data structure that includes such information as the following (a minimal sketch in code follows the list):
- (i) an operation code that identifies the type of operation to be performed during execution of the job;
- (ii) a module identifier identifying the particular module that is to execute the job;
- (iii) a status identifier that identifies the status of the job; in one embodiment, possible status identifiers include pending, running, suspended, finished, done, flushed and failed;
- (iv) one or more identifiers that identify the scene data element(s) in the scene database that are to be used in executing the job;
- (v) one or more identifiers that identify storage locations in the scene database in which results are to be stored during execution of the job;
- (vi) an estimate of the job cost value;
- (vii) an actual job cost value if the job has finished executing and the status is “done”;
- (viii) the identification of any prerequisite jobs, that is, jobs that need to be executed before this job begins execution; and
- (ix) control information such as the identification of the particular thread, on a processing platform that executes programs in threads, or similar structure (generally, “thread”), that is expected to execute the job if the status is pending, or that is executing the job if the job is being executed, caching control information, and so forth.
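A minimal sketch of a job description carrying items (i) through (ix) might look as follows; the field names and types are assumptions, while the status values follow item (iii).

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class JobStatus(Enum):
    PENDING = auto()
    RUNNING = auto()
    SUSPENDED = auto()
    FINISHED = auto()
    DONE = auto()
    FLUSHED = auto()
    FAILED = auto()

@dataclass
class JobDescription:
    opcode: str                        # (i) operation to perform
    module: str                        # (ii) module that executes the job
    status: JobStatus                  # (iii) current status
    inputs: list[str]                  # (iv) scene data elements used
    result_slots: list[str]            # (v) storage locations for results
    cost_estimate: float               # (vi) estimated job cost value
    actual_cost: float | None = None   # (vii) known once status is DONE
    prerequisites: list[str] = field(default_factory=list)  # (viii)
    thread_id: int | None = None       # (ix) thread expected to run the job
```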
- Items (i), (iv), and (v) can generally be provided as parameters in the job creation request. Generally, the job cost value estimate (item (vi) above) can be based on a number of criteria, including, for example,
- (a) the apparent complexity of the job, which may be based on, for example, the number of vertices to tessellate if the job is one to tessellate, the number of pixels to render if the job is one to render, the number of photons to cast if the job is one to cast photons from a light source, and the like, which may be adjusted by options such as sampling densities;
- (b) the estimated memory requirements for the job; and
- (c) data transfer delays that may be incurred in obtaining data to be used in executing the job if the prerequisite jobs are executed by different host computers.
- Additional factors can also be used in connection with a job cost value estimate, including, for example, the number of jobs of the same type that are suspended on the thread that is expected to execute the job or that is executing the job, and other factors as will be apparent to those skilled in the art. It will be appreciated that the job manager 41 may update a job's job cost value estimate in view of changes in, for example, the additional factors noted above.
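As a sketch of feeding observed costs back into the estimate, an exponential moving average is one plausible policy; the patent says only that observed resource usage updates the job cost value used for later jobs of the same type, so the smoothing scheme below is an assumption.

```python
def update_cost_estimate(current: float, observed: float,
                         smoothing: float = 0.25) -> float:
    """Blend the observed cost of a finished job into the running estimate
    for its job type; the smoothing factor is an assumed policy choice."""
    return (1.0 - smoothing) * current + smoothing * observed

# Example: a tessellation job was estimated at 40 cost units but took 100.
estimate = update_cost_estimate(40.0, 100.0)   # -> 55.0
```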
- The job manager 41 uses a number of criteria in selecting a job for execution, including the number of known unresolved prerequisites, that is, jobs that need to be executed before the respective job to provide a value therefor but that have not yet been executed, as well as the job's job cost value estimate. Preferably, the job manager 41 will select a job with no or relatively few unresolved prerequisites. On the other hand, if a job is a direct or indirect prerequisite for a number of other jobs, it will preferably be executed before jobs that are prerequisites for fewer jobs. In addition, the job manager 41 will preferably select a job with a relatively high job cost value estimate for execution, which can improve finalization parallelism, as long as the respective threads do not exceed a particular concurrent aggregate job cost value for the jobs assigned thereto. Generally, the job manager 41 will assign jobs to threads using a number of criteria, including maximizing data locality and minimizing and balancing the number of suspended jobs on a thread to keep stacks small, as well as other criteria as will be apparent to those skilled in the art. In addition, if, during execution of a job, it is discovered that another job, which was not previously known to be a prerequisite, is in fact a prerequisite, the job manager 41 will preferably assign the newly-discovered prerequisite to the same thread as the job for which it is a prerequisite, for similar reasons.
- Generally, the job manager 41 executes jobs as they become necessary to provide data for other jobs. For example, texture image elements are created as placeholders in the scene database 43, but they are empty and have no associated pixel array. When such an element is accessed for the first time, filling the texture image element necessitates executing an associated texture load job, and a job that is accessing the texture image element will be suspended until the texture load job is executed. As another example, generally, an object that has not yet been tessellated may initially be coarsely represented in the scene database 43 by a placeholder in the form of a bounding box. A bounding box may be in the form of, for example, a geometric object having surfaces in the form of, for example, triangles, quadrilaterals or the like, that coarsely bounds the actual object associated therewith. When a ray is sent into a scene, the ray may hit the bounding box. A placeholder for the bounding box will already exist in the scene database 43, but the placeholder will not contain the scene data for the actual object represented by the bounding box. Instead, the placeholder contains a pointer to the job description for the job that is provided to generate the scene data. When the object represented by the bounding box is illuminated, a ray generated to represent the illumination will hit one of the triangles, quadrilaterals, and so forth, that comprise the bounding box in the scene. When that occurs, the scene data representing at least a portion of the actual object will need to be retrieved from the model store 26, tessellated and stored in the scene database 43, and the job manager 41 will enable respective jobs therefor to be executed; a job that might be accessing the object will need to be suspended until those jobs are executed. The bounding box essentially operates as a placeholder for the object, and it will not be deleted from the scene database 43 when the object is stored therein, since, as noted above, the object might later be deleted from the scene database 43. - More generally, when a job being executed makes an access to a scene data element in the
scene database 43, the following operations are performed (a sketch in code follows the list):
- (i) a module initiates an access to a scene data element in the
scene database 43; - (ii) if the
scene database 43 contains - (a) a valid scene data element, that is, a scene data element that is not a placeholder, it will provide the scene data element to the module that initiated the access, after which the module can make use of the scene data element; but
- (b) an invalid scene data element, that is, a scene data element that is a placeholder for data that has not yet been created but that is associated with a job description, it will provide a notification thereof to the
job manager 41; - (iii) the
job manager 41 will update its job dependency graph 44 to reflect the job for which it received notification from the scene database 43, thereby to enable the job to be executed at some point in the future; - (iv) the
job manager 41 also suspends execution of the thread containing the module that initiated the scene data element access; - (v) the
job manager 41 selects a new job and assigns it to the thread whose execution was suspended; the selection of the job is based on the estimated job cost values of the jobs in the dependency graph 44, and, accordingly, the selected job may, but need not, be the job that was added to the dependency graph 44 (reference item (iii) above); and - (vi) when a job is finished executing, it will enable data to be stored in the placeholder for the scene data element in the
scene database 43 associated therewith.
- It will be appreciated that the data stored by a job in the placeholder may be actual scene data, or it may merely be status information indicating that the job has finished executing.
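Steps (i) through (vi) might be realized as follows, with a placeholder that records its producer job and an exception used to signal the job manager; the exception-based suspension and all names are implementation assumptions.

```python
class Placeholder(Exception):
    """Marks an element whose data a recorded producer job must still create."""
    def __init__(self, producer_job: str):
        super().__init__(producer_job)
        self.producer_job = producer_job

class SceneDB:
    def __init__(self) -> None:
        self.elements: dict[str, object] = {}

    def access(self, key: str):
        value = self.elements[key]
        if isinstance(value, Placeholder):   # step (ii)(b): invalid element
            raise value                      # notify the job manager
        return value                         # step (ii)(a): valid element

class JobManager:
    def __init__(self) -> None:
        self.dependency_graph: set[str] = set()
        self.suspended: dict[str, str] = {}  # job -> prerequisite it awaits

    def on_placeholder(self, job: str, ph: Placeholder) -> None:
        self.dependency_graph.add(ph.producer_job)   # step (iii)
        self.suspended[job] = ph.producer_job        # step (iv)
        # Step (v) would pick another runnable job for the freed thread here.

db = SceneDB()
db.elements["texture:wood"] = Placeholder("load_texture:wood")
jm = JobManager()
try:
    db.access("texture:wood")
except Placeholder as ph:
    jm.on_placeholder("render_rect_7", ph)
# Step (vi): when load_texture:wood finishes, it fills the placeholder slot.
db.elements["texture:wood"] = b"...pixel array..."
```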
- The job manager 41 makes use of a state machine to control execution of each job. The current state of each job may be stored in the respective job's job description described above. The job control state machine 50 used by the job manager 41 will be described in connection with FIG. 3. With reference to FIG. 3, in response to a request to create a job, the job control state machine 50 enters a job creation state 51. In the job creation state 51, the job manager 41 creates a job description, as described above, and, in addition, creates a placeholder for the scene data element(s) to be associated with the job in the scene database 43. A module that issues a job creation request may notify the job manager 41 of prerequisite jobs of which it is aware, either as part of the job creation request or subsequently, and the job manager 41 can use that information to link the job into the dependency graph 44. In addition, if a job identified as a prerequisite is not already in the dependency graph 44, the job manager 41 can initiate creation of that job by issuing a job creation request for it, in which case the prerequisite job will also be linked into the dependency graph 44. After the module that issued the job creation request has provided all of the information required for the creation of the job, including the prerequisites of which it is aware, the module can issue a job creation end request, which marks the end of the job creation state. At that point, the job enters a dormant state 52. - As noted above, a job is not executed until a request for the scene data element that would be generated by the job is issued (reference item (ii) above). Accordingly, a job will remain in the
dormant state 52 until a request is made for the scene data element that would be generated by the job. At that point, since the scene data element is represented by a placeholder, a job execute request will be generated by the scene database 43 for the job associated with the placeholder, at which point the job will sequence to the pending state 53. In the pending state 53, the job manager 41 will determine whether the job's prerequisite jobs have finished execution, and, if so, the job will enter a runnable state 54. While a job is in the runnable state 54, the job manager 41 can schedule the job for execution, based on the job's estimated job cost value as described above. If, while the job is in the runnable state 54, the job manager 41 identifies a new prerequisite for the job, the job can return to the pending state 53 until the new prerequisite has been executed.
- When the job manager 41 assigns the job to a thread, the job enters a running state 55. In the running state 55, the job is actually being executed. While a job is being executed, a new prerequisite may be discovered, in which case the job manager 41 will sequence the job to a suspended state 56 until the prerequisite can be executed. If, for some reason, an error occurs during execution of the job, the job manager 41 will sequence the job to a failed state 57. If a subsequent execution request is issued for the job, it will return to the pending state 53. - On the other hand, if the module that is executing the thread in which the job is being executed determines that the job has been successfully executed and is ready to store results in the
scene database 43, it will issue a job finished notification to the job manager 41. In response to the job finished notification, the job manager 41 sequences the job to a finished state 58 and issues a notification to the module that it can initiate a storage operation to store the results in the scene database 43. The job manager 41 can control issuance of that notification so as to control the timing with which the module can initiate the storage operation. This may be desirable to maintain consistency of the scene data for a scene in the scene database 43 if, for example, other jobs are using the scene data for rendering. After the job manager 41 has issued the notification to the module indicating that it can initiate the storage operation, the module can do so. In one embodiment, the module will initiate the storage operation not to the placeholder(s) for the job, but to respective temporary storage location(s) in the scene database 43. After the storage operation to the temporary storage location(s) has completed, the module will issue a job storage done request to the job manager 41. In response to the job storage done request, the job manager 41 will sequence the job to a done state 59. In addition, the job manager 41 will enable the information in the temporary storage location(s) to be transferred to the placeholder(s). In one embodiment, the transfer is accomplished by updating the pointers associated with the placeholder(s) to point to the temporary storage location(s).
- After the job manager 41 has sequenced the job to the done state 59 and updated the pointers, it can sequence the job to a paging state 60, in which it will enable the job descriptor to be transferred from the scene database 43 to, for example, a permanent storage arrangement (not shown) such as a disk storage device. Following the paging state 60, or following the done state 59 if the job is not sequenced to the paging state, a garbage collector maintained by the job manager 41 can delete the job description from the scene database 43 in the manner described above, at which point the job manager 41 sequences the job to a flushed state 61. If the job manager 41 subsequently receives a job execution request for the job, the job manager 41 can return the job to the pending state 53, at which point the operations described above can be repeated.
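The state machine of FIG. 3 can be summarized as a transition table. The state and event names below follow the text; the table form itself is an implementation convenience, and the suspended-to-runnable transition is inferred from the description rather than stated explicitly.

```python
# (state, event) -> next state, following the job control state machine 50.
TRANSITIONS = {
    ("created",   "creation_end"):    "dormant",    # state 51 -> 52
    ("dormant",   "execute_request"): "pending",    # placeholder accessed
    ("pending",   "prereqs_done"):    "runnable",   # state 53 -> 54
    ("runnable",  "new_prereq"):      "pending",
    ("runnable",  "assigned_thread"): "running",    # state 54 -> 55
    ("running",   "new_prereq"):      "suspended",  # state 55 -> 56
    ("suspended", "prereqs_done"):    "runnable",   # inferred resumption path
    ("running",   "error"):           "failed",     # state 55 -> 57
    ("failed",    "execute_request"): "pending",
    ("running",   "job_finished"):    "finished",   # state 55 -> 58
    ("finished",  "storage_done"):    "done",       # state 58 -> 59
    ("done",      "page_out"):        "paging",     # state 59 -> 60
    ("done",      "garbage_collect"): "flushed",    # state 59 -> 61
    ("paging",    "garbage_collect"): "flushed",    # state 60 -> 61
    ("flushed",   "execute_request"): "pending",    # re-execution
}

def next_state(state: str, event: str) -> str:
    return TRANSITIONS[(state, event)]

print(next_state("dormant", "execute_request"))  # -> "pending"
```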
- The invention provides a number of advantages. In particular, it provides a server for use in connection with one or more clients that can render images of scenes for delivery to the respective clients. The server can render images of the same scene for a plurality of the clients, and can allow the clients to request customizations of the scene and control the clients for which the customizations will be visible in images rendered therefor. In particular, the server can selectively allow customizations requested by one client to be made in a scene for which images are rendered, for example, only for that client, for a selected group of contemporaneously-connected clients, or for all contemporaneously-connected clients for which images of the scene are to be rendered.
- By rendering the images itself, instead of providing three-dimensional scene information to the respective clients, the server ensures that confidential three-dimensional scene information concerning objects in a scene is not provided to the respective clients.
- In addition, by allowing images of a customized scene to be provided not only to the client that requested the customization, but also to all or a selected subset of the other clients that are engaged in a session involving the same scene, the server also supports cooperative or competitive efforts by a plurality of users using respective ones of the clients, allowing, for example, a plurality of users in diverse locations using respective client devices to cooperatively design a product, a plurality of users in diverse locations to play games, and other efforts as will be apparent to those skilled in the art.
- In addition, since rendering is performed by the server, and not by the clients, the arrangement allows the use of clients that have relatively limited storage and processing capacities, such as Web pads, personal digital assistants (PDA's), cellular telephones and the like, while still allowing for arbitrarily complex manipulation of three-dimensional scene data.
- It will be appreciated that numerous changes and modifications may be made to the arrangement as described herein. For example, although the server has been described as rendering images in response to image retrieval requests issued by respective clients, it will be appreciated that the server can render images in response to other events. For example, the server can transmit an updated image to one client in response to an event initiated by another client. This can be useful if, for example, users of the respective clients are engaged in a collaborative design effort, playing a game, or the like. In addition, the server can stream updated images to one or more clients if the images are to represent, for example, video, using any convenient image transfer or streaming video transfer protocol. Furthermore, although the server has been described as providing images rendered in response to requests issued by browsers in connection with links associated with Web pages, it will be appreciated that the image may be rendered and provided in response to requests issued by other types of programs executed by the client devices, in connection with other types of request initiation mechanisms, which need not be associated with a Web page.
- It will be appreciated that tools through which a user may request customizations to a scene may be implemented using any type of program. If customizations are requested through Web pages, as described above, the tools may be efficiently implemented by means of, for example, applets provided with the Web pages, which applets may be in, for example, the well-known Java programming language.
- It will further be appreciated that a server may be implemented on a single processing platform, or on multiple processing platforms. The
scene database 43, for example, may be implemented as a virtual shared database that is distributed across a plurality of processing platforms, and various components of the server may also be distributed across the various processing platforms. - In addition, although specific arrangements have been described by which the
job manager 41 determines which jobs are to be executed, determines job cost values and deletes scene data elements from the scene database 43, it will be appreciated that other arrangements can be used. Furthermore, although a specific state machine has been described by which the job manager 41 controls execution of a job (reference FIG. 3), other state machines, and arrangements other than state machines, may be used instead. - It will be appreciated that a system in accordance with the invention can be constructed in whole or in part from special purpose hardware or a general purpose computer system, or any combination thereof, any portion of which may be controlled by a suitable program. Any program may in whole or in part comprise part of or be stored on the system in a conventional manner, or it may in whole or in part be provided to the system over a network or other mechanism for transferring information in a conventional manner. In addition, it will be appreciated that the system may be operated and/or otherwise controlled by means of information provided by a user using user input elements (not shown) which may be connected directly to the system or which may transfer the information to the system over a network or other mechanism for transferring information in a conventional manner.
- The foregoing description has been limited to a specific embodiment of this invention. It will be apparent, however, that various variations and customizations may be made to the invention, with the attainment of some or all of the advantages of the invention. It is the object of the appended claims to cover these and such other variations and customizations as come within the true spirit and scope of the invention.
Claims (30)
1. A server for use in connection with a network including at least one client and a communication link interconnecting the client and server, the server comprising:
A. an image rendering module configured to render, from three-dimensional scene data representing a scene, a two-dimensional image; and
B. an interface configured to transmit the two-dimensional image over the communication link to the client.
2. A server as defined in claim 1 further comprising a user interaction control module configured to control interactions with said at least one client in connection with rendering of the image from the scene data.
3. A server as defined in claim 2 in which the image rendering module is configured to render images from scene data representing a plurality of scenes, the user interaction control module being configured to select scenes for which images are to be rendered.
4. A server as defined in claim 3 in which the user interaction control module is configured to select scenes for which images are to be rendered in response to requests therefor.
5. A server as defined in claim 4 in which the requests are received from the at least one client.
6. A server as defined in claim 4 in which a request can contain scene customization information requesting at least one customization to the scene, the user interaction control module being configured to enable the image rendering module to render an image of the scene as customized in relation to the customization information.
7. A server as defined in claim 6 in which the at least one customization to the scene can be represented in images rendered for selected ones of the clients, the user interaction control module being configured to enable the image rendering module to control the ones of the clients for whom images depicting the customization are rendered.
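One plausible, non-limiting way to realize the per-client visibility recited in claims 6 and 7 is to tag each customization with the set of clients permitted to see it; the data layout and names below are assumptions for illustration.

```python
# Sketch only: "visible_to" as a per-customization client set is an assumption.
class Customization:
    """A requested change to one scene element, visible only to some clients."""
    def __init__(self, element_id, new_value, visible_to):
        self.element_id = element_id
        self.new_value = new_value
        self.visible_to = set(visible_to)   # client ids that may see this change

def effective_scene(base_scene, customizations, client_id):
    """Return the scene as it should be rendered for one particular client."""
    scene = dict(base_scene)
    for c in customizations:
        if client_id in c.visible_to:
            scene[c.element_id] = c.new_value
    return scene

base = {"chair.color": "oak"}
custom = [Customization("chair.color", "cherry", visible_to={"client-7"})]
assert effective_scene(base, custom, "client-7")["chair.color"] == "cherry"
assert effective_scene(base, custom, "client-9")["chair.color"] == "oak"
```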
8. A server as defined in claim 2 in which the user interaction control module includes:
A. an operator graph generation module configured to generate, when the server is to render said image, an operator graph comprising at least one operator, said at least one operator being configured to enable said image rendering module to perform at least one operation in connection with rendering of the image; and
B. an event manager configured to control execution of said at least one operator in response to the occurrence of at least one event.
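A minimal sketch of the event-driven operator execution of claim 8 follows; the operator types, event names, and dispatch loop are illustrative assumptions, and a real operator graph would also encode connections between operators, as claim 9 recites.

```python
# All operator types and event names below are assumptions for illustration.
class Operator:
    def __init__(self, name, action, triggers):
        self.name = name
        self.action = action        # performs one rendering-related operation
        self.triggers = triggers    # events that cause this operator to run

class EventManager:
    """Controls execution of operators in response to the occurrence of events."""
    def __init__(self, operators):
        self.operators = operators

    def post(self, event):
        for op in self.operators:
            if event in op.triggers:
                op.action()

ops = [
    Operator("load_scene", lambda: print("scene loaded"), {"client_request"}),
    Operator("render_image", lambda: print("image rendered"), {"scene_ready"}),
]
manager = EventManager(ops)
manager.post("client_request")   # -> scene loaded
manager.post("scene_ready")      # -> image rendered
```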
9. A server as defined in claim 8 in which the operator graph generation module comprises:
A. a user manager module configured to select operators of selected operator types for use in the operator graph, and
B. a connection manager module configured to connect the selected operators into the operator graph.
10. A server as defined in claim 9 in which scenes for which images are to be rendered are selected in response to requests therefor, and in which a request can include scene customization information requesting at least one customization to the scene, the user manager module being configured to select operators for use in the operator graph in response to the image requested by, and the scene customization information contained in, a request.
11. A server as defined in claim 8 in which the image rendering module comprises:
A. a scene database configured to store scene data representing at least a portion of the scene for which an image is to be rendered;
B. a customization module configured to customize the scene data contained in the scene database;
C. a rendering engine module configured to utilize the scene data in the scene database in connection with rendering at least a portion of an image; and
D. a job manager module configured to control the customization module and the rendering engine module in connection with execution of said at least one operator in the operator graph.
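The four sub-modules of claim 11 might cooperate as sketched below. The wiring shown, in which the job manager applies customizations before invoking the rendering engine, is one plausible reading, and every identifier is an assumption.

```python
# Every identifier below is an assumption; stubs stand in for real behavior.
class SceneDatabase:
    def __init__(self):
        self.elements = {}                       # scene element id -> data

class CustomizationModule:
    def apply(self, db, element_id, value):
        db.elements[element_id] = value          # customize stored scene data

class RenderingEngineModule:
    def render(self, db):
        return f"image rendered from {len(db.elements)} scene element(s)"

class JobManagerModule:
    """Drives the customization and rendering engine modules for one operator."""
    def __init__(self, db):
        self.db = db
        self.customizer = CustomizationModule()
        self.engine = RenderingEngineModule()

    def run_operator(self, customizations):
        for element_id, value in customizations.items():
            self.customizer.apply(self.db, element_id, value)
        return self.engine.render(self.db)

mgr = JobManagerModule(SceneDatabase())
print(mgr.run_operator({"sofa.fabric": "linen"}))
```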
12. A server as defined in claim 11 in which, in response to execution of said at least one operator, the job manager module is configured to establish at least one job, the at least one job being executable by at least one of said customization module or the rendering engine module.
13. A server as defined in claim 12 in which, in response to execution of said at least one operator, the job manager module is configured to establish a plurality of jobs in a job dependency graph, each job being executable by at least one of said customization module or the rendering engine module, and select ones of the jobs in the graph for execution.
14. A server as defined in claim 13 in which the job manager module is configured to select ones of the jobs for execution in relation to respective job cost values associated with the respective jobs.
15. A server as defined in claim 14 in which the job manager module is configured to assign respective job cost values in relation to an estimate of server resources used during execution of the associated jobs.
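Claims 12 through 15 describe jobs arranged in a dependency graph and selected in relation to job cost values reflecting estimated server resources. The sketch below shows one such selection policy, cheapest ready job first, under assumed data structures; the greedy ordering is an illustrative choice, not the claimed method.

```python
# Assumed data structures: deps maps a job to its prerequisites, cost maps a
# job to its estimated server-resource use; cheapest-ready-first is one policy.
import heapq

def select_jobs(jobs, deps, cost):
    """Yield jobs in an order that respects the dependency graph, always
    picking the ready job with the lowest cost value next."""
    done, queued, ready = set(), set(), []
    def refresh():
        for j in jobs:
            if j not in done and j not in queued and deps.get(j, set()) <= done:
                heapq.heappush(ready, (cost[j], j))
                queued.add(j)
    refresh()
    while ready:
        _, j = heapq.heappop(ready)
        done.add(j)
        yield j
        refresh()

order = list(select_jobs(
    {"load", "customize", "render"},
    {"customize": {"load"}, "render": {"customize"}},
    {"load": 2, "customize": 1, "render": 5},
))
print(order)   # ['load', 'customize', 'render']
```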
16. A computer program product for use in connection with a computer to form a server for use in a network, the network including at least one client and a communication link interconnecting the client and server, the computer program product comprising a computer-readable medium having encoded thereon:
A. an image rendering module configured to enable the computer to render, from three-dimensional scene data representing a scene, a two-dimensional image; and
B. an interface module configured to enable the computer to transmit the two-dimensional image over the communication link to the client.
17. A computer program product as defined in claim 16 further comprising a user interaction control module configured to enable the computer to control interactions with said at least one client in connection with rendering of the image from the scene data.
18. A computer program product as defined in claim 17 in which the image rendering module is configured to enable the computer to render images from scene data representing a plurality of scenes, the user interaction control module being configured to enable the computer to select scenes for which images are to be rendered.
19. A computer program product as defined in claim 18 in which the user interaction control module is configured to enable the computer to select scenes for which images are to be rendered in response to requests therefor.
20. A computer program product as defined in claim 19 in which the requests are received from the at least one client.
21. A computer program product as defined in claim 19 in which a request can contain scene customization information requesting at least one customization to the scene, the user interaction control module being configured to enable the computer to enable the image rendering module to render an image of the scene as customized in relation to the customization information.
22. A computer program product as defined in claim 21 in which the at least one customization to the scene can be represented in images rendered for selected ones of the clients, the user interaction control module being configured to enable the computer to enable the image rendering module to control the ones of the clients for whom images depicting the customization are rendered.
23. A computer program product as defined in claim 17 in which the user interaction control module includes:
A. an operator graph generation module configured to enable the computer to generate, when the server is to render said image, an operator graph comprising at least one operator, said at least one operator being configured to enable the computer to enable said image rendering module to perform at least one operation in connection with rendering of the image; and
B. an event manager configured to enable the computer to control execution of said at least one operator in response to the occurrence of at least one event.
24. A computer program product as defined in claim 23 in which the operator graph generation module comprises:
A. a user manager module configured to enable the computer to select operators of selected operator types for use in the operator graph, and
B. a connection manager module configured to enable the computer to connect the selected operators into the operator graph.
25. A computer program product as defined in claim 24 in which scenes for which images are to be rendered are selected in response to requests therefor, and in which a request can include scene customization information requesting at least one customization to the scene, the user manager module being configured to enable the computer to select operators for use in the operator graph in response to the image requested by, and the scene customization information contained in, a request.
26. A computer program product as defined in claim 23 in which the image rendering module comprises:
A. a scene database configured to enable the computer to store scene data representing at least a portion of the scene for which an image is to be rendered;
B. a customization module configured to enable the computer to customize the scene data contained in the scene database;
C. a rendering engine module configured to enable the computer to utilize the scene data in the scene database in connection with rendering at least a portion of an image; and
D. a job manager module configured to enable the computer to control the customization module and the rendering engine module in connection with execution of said at least one operator in the operator graph.
27. A computer program product as defined in claim 26 in which, in response to execution of said at least one operator, the job manager module is configured to enable the computer to establish at least one job, the at least one job being executable by at least one of said customization module or the rendering engine module.
28. A computer program product as defined in claim 27 in which, in response to execution of said at least one operator, the job manager module is configured to enable the computer to establish a plurality of jobs in a job dependency graph, each job being executable by at least one of said customization module or the rendering engine module, and select ones of the jobs in the graph for execution.
29. A computer program product as defined in claim 28 in which the job manager module is configured to enable the computer to select ones of the jobs for execution in relation to respective job cost values associated with the respective jobs.
30. A computer program product as defined in claim 29 in which the job manager module is configured to enable the computer to assign respective job cost values in relation to an estimate of server resources used during execution of the associated jobs.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/844,511 US20060036756A1 (en) | 2000-04-28 | 2001-04-28 | Scalable, multi-user server and method for rendering images from interactively customizable scene information |
US12/398,638 US8583724B2 (en) | 2000-04-28 | 2009-03-05 | Scalable, multi-user server and methods for rendering images from interactively customizable scene information |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US20056400P | 2000-04-28 | 2000-04-28 | |
US09/844,511 US20060036756A1 (en) | 2000-04-28 | 2001-04-28 | Scalable, multi-user server and method for rendering images from interactively customizable scene information |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/398,638 Continuation US8583724B2 (en) | 2000-04-28 | 2009-03-05 | Scalable, multi-user server and methods for rendering images from interactively customizable scene information |
Publications (1)
Publication Number | Publication Date |
---|---|
US20060036756A1 (en) | 2006-02-16
Family
ID=22742238
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/844,511 Abandoned US20060036756A1 (en) | 2000-04-28 | 2001-04-28 | Scalable, multi-user server and method for rendering images from interactively customizable scene information |
US12/398,638 Expired - Lifetime US8583724B2 (en) | 2000-04-28 | 2009-03-05 | Scalable, multi-user server and methods for rendering images from interactively customizable scene information |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/398,638 Expired - Lifetime US8583724B2 (en) | 2000-04-28 | 2009-03-05 | Scalable, multi-user server and methods for rendering images from interactively customizable scene information |
Country Status (6)
Country | Link |
---|---|
US (2) | US20060036756A1 (en) |
EP (1) | EP1290642B1 (en) |
AT (1) | ATE481697T1 (en) |
AU (1) | AU2001256604A1 (en) |
DE (1) | DE60143082D1 (en) |
WO (1) | WO2001084501A2 (en) |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040080533A1 (en) * | 2002-10-23 | 2004-04-29 | Sun Microsystems, Inc. | Accessing rendered graphics over the internet |
US8291009B2 (en) * | 2003-04-30 | 2012-10-16 | Silicon Graphics International Corp. | System, method, and computer program product for applying different transport mechanisms for user interface and image portions of a remotely rendered image |
FR2870374B1 (en) * | 2004-05-12 | 2006-08-11 | Jean Marc Krattli | CLIENT-SERVER ARCHITECTURE FOR VISUALIZATION OF A THREE-DIMENSIONAL DIGITAL MODEL |
US7996756B2 (en) | 2007-09-12 | 2011-08-09 | Vistaprint Technologies Limited | System and methods for displaying user modifiable server-rendered images |
EP2098994A1 (en) * | 2008-03-04 | 2009-09-09 | Agfa HealthCare NV | System for real-time volume rendering on thin clients via a render server |
US8605863B1 (en) | 2008-03-18 | 2013-12-10 | Avaya Inc. | Method and apparatus for providing state indication on a telephone call |
US8266289B2 (en) * | 2009-04-23 | 2012-09-11 | Microsoft Corporation | Concurrent data processing in a distributed system |
US8754884B2 (en) * | 2009-09-21 | 2014-06-17 | Xerox Corporation | 3D virtual environment for generating variable data images |
US20110202845A1 (en) * | 2010-02-17 | 2011-08-18 | Anthony Jon Mountjoy | System and method for generating and distributing three dimensional interactive content |
US8930954B2 (en) * | 2010-08-10 | 2015-01-06 | International Business Machines Corporation | Scheduling parallel data tasks |
US8699801B2 (en) | 2010-11-26 | 2014-04-15 | Agfa Healthcare Inc. | Systems and methods for transmitting high dynamic range images |
US11978031B2 (en) | 2010-12-14 | 2024-05-07 | E2Interactive, Inc. | Systems and methods that create a pseudo prescription from transaction data generated during a point of sale purchase at a front of a store |
CN102957748A (en) * | 2012-11-07 | 2013-03-06 | 广东威创视讯科技股份有限公司 | Dynamic update method and system for three-dimensional scene |
JP2017504986A (en) * | 2013-11-11 | 2017-02-09 | アマゾン テクノロジーズ インコーポレイテッド | Data collection for multiple display generation |
EP3092622A4 (en) * | 2014-01-09 | 2017-08-30 | Square Enix Holdings Co., Ltd. | Methods and systems for efficient rendering of game screens for multi-player video game |
FR3038995B1 (en) | 2015-07-15 | 2018-05-11 | F4 | INTERACTIVE DEVICE WITH CUSTOMIZABLE DISPLAY |
FR3042620B1 (en) | 2015-10-16 | 2017-12-08 | F4 | INTERACTIVE WEB DEVICE WITH CUSTOMIZABLE DISPLAY |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6035305A (en) * | 1997-08-29 | 2000-03-07 | The Boeing Company | Computer-based method of structuring product configuration information and configuring a product |
US6205582B1 (en) * | 1997-12-09 | 2001-03-20 | Ictv, Inc. | Interactive cable television system with frame server |
US6222549B1 (en) * | 1997-12-31 | 2001-04-24 | Apple Computer, Inc. | Methods and apparatuses for transmitting data representing multiple views of an object |
US6310627B1 (en) | 1998-01-20 | 2001-10-30 | Toyo Boseki Kabushiki Kaisha | Method and system for generating a stereoscopic image of a garment |
US6205482B1 (en) * | 1998-02-19 | 2001-03-20 | Ameritech Corporation | System and method for executing a request from a client application |
US6798417B1 (en) * | 1999-09-23 | 2004-09-28 | International Business Machines Corporation | Just in time graphics dispatching |
2001
- 2001-04-28 US US09/844,511 patent/US20060036756A1/en not_active Abandoned
- 2001-04-30 WO PCT/IB2001/000922 patent/WO2001084501A2/en active Application Filing
- 2001-04-30 EP EP01929928A patent/EP1290642B1/en not_active Expired - Lifetime
- 2001-04-30 AU AU2001256604A patent/AU2001256604A1/en not_active Abandoned
- 2001-04-30 AT AT01929928T patent/ATE481697T1/en not_active IP Right Cessation
- 2001-04-30 DE DE60143082T patent/DE60143082D1/en not_active Expired - Lifetime
2009
- 2009-03-05 US US12/398,638 patent/US8583724B2/en not_active Expired - Lifetime
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5202987A (en) * | 1990-02-01 | 1993-04-13 | Nimrod Bayer | High flow-rate synchronizer/scheduler apparatus and method for multiprocessors |
US5742289A (en) * | 1994-04-01 | 1998-04-21 | Lucent Technologies Inc. | System and method of generating compressed video graphics images |
US5761633A (en) * | 1994-08-30 | 1998-06-02 | Samsung Electronics Co., Ltd. | Method of encoding and decoding speech signals |
US5761663A (en) * | 1995-06-07 | 1998-06-02 | International Business Machines Corporation | Method for distributed task fulfillment of web browser requests |
US5968167A (en) * | 1996-04-04 | 1999-10-19 | Videologic Limited | Multi-threaded data processing management system |
US6232974B1 (en) * | 1997-07-30 | 2001-05-15 | Microsoft Corporation | Decision-theoretic regulation for allocating computational resources among components of multimedia content to improve fidelity |
US6091422A (en) * | 1998-04-03 | 2000-07-18 | Avid Technology, Inc. | System for editing complex visual data providing a continuously updated rendering |
US6266053B1 (en) * | 1998-04-03 | 2001-07-24 | Synapix, Inc. | Time inheritance scene graph for representation of media content |
US6570578B1 (en) * | 1998-04-03 | 2003-05-27 | Avid Technology, Inc. | System for automatic generation of selective partial renderings of complex scenes |
US6538654B1 (en) * | 1998-12-24 | 2003-03-25 | B3D Inc. | System and method for optimizing 3D animation and textures |
US6487565B1 (en) * | 1998-12-29 | 2002-11-26 | Microsoft Corporation | Updating animated images represented by scene graphs |
US6525731B1 (en) * | 1999-11-09 | 2003-02-25 | Ibm Corporation | Dynamic view-dependent texture mapping |
Cited By (84)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040080625A1 (en) * | 1997-01-07 | 2004-04-29 | Takahiro Kurosawa | Video-image control apparatus and method and storage medium |
US7355633B2 (en) * | 1997-01-07 | 2008-04-08 | Canon Kabushiki Kaisha | Video-image control apparatus and method with image generating mechanism, and storage medium containing the video-image control program |
US10869102B2 (en) * | 1999-10-29 | 2020-12-15 | Opentv, Inc. | Systems and methods for providing a multi-perspective video display |
US20200186892A1 (en) * | 1999-10-29 | 2020-06-11 | Opentv, Inc. | Systems and methods for providing a multi-perspective video display |
US10986161B2 (en) | 2000-11-29 | 2021-04-20 | Dov Koren | Mechanism for effective sharing of application content |
US9813481B2 (en) | 2000-11-29 | 2017-11-07 | Dov Koren | Mechanism for sharing of information associated with events |
US9208469B2 (en) | 2000-11-29 | 2015-12-08 | Dov Koren | Sharing of information associated with events |
US20110145727A1 (en) * | 2000-11-29 | 2011-06-16 | Dov Koren | Sharing of Information Associated with Events |
US9098829B2 (en) | 2000-11-29 | 2015-08-04 | Dov Koren | Sharing of information associated with events |
US9098828B2 (en) * | 2000-11-29 | 2015-08-04 | Dov Koren | Sharing of information associated with events |
US10805378B2 (en) | 2000-11-29 | 2020-10-13 | Dov Koren | Mechanism for sharing of information associated with events |
US8984387B2 (en) | 2000-11-29 | 2015-03-17 | Dov Koren | Real time sharing of user updates |
US9535582B2 (en) | 2000-11-29 | 2017-01-03 | Dov Koren | Sharing of information associated with user application events |
US8984386B2 (en) | 2000-11-29 | 2015-03-17 | Dov Koren | Providing alerts in an information-sharing computer-based service |
US10033792B2 (en) | 2000-11-29 | 2018-07-24 | Dov Koren | Mechanism for sharing information associated with application events |
US10270838B2 (en) | 2000-11-29 | 2019-04-23 | Dov Koren | Mechanism for sharing of information associated with events |
US10476932B2 (en) | 2000-11-29 | 2019-11-12 | Dov Koren | Mechanism for sharing of information associated with application events |
US9105010B2 (en) | 2000-11-29 | 2015-08-11 | Dov Koren | Effective sharing of content with a group of users |
US7519449B2 (en) * | 2002-03-11 | 2009-04-14 | Samsung Electronics Co., Ltd. | Rendering system and method and recording medium therefor |
US20030172366A1 (en) * | 2002-03-11 | 2003-09-11 | Samsung Electronics Co., Ltd. | Rendering system and method and recording medium therefor |
US7236960B2 (en) * | 2002-06-25 | 2007-06-26 | Eastman Kodak Company | Software and system for customizing a presentation of digital images |
US20030236716A1 (en) * | 2002-06-25 | 2003-12-25 | Manico Joseph A. | Software and system for customizing a presentation of digital images |
US20070116433A1 (en) * | 2002-06-25 | 2007-05-24 | Manico Joseph A | Software and system for customizing a presentation of digital images |
US10726450B2 (en) | 2004-08-03 | 2020-07-28 | Nextpat Limited | Commercial shape search engine |
US8126907B2 (en) * | 2004-08-03 | 2012-02-28 | Nextengine, Inc. | Commercial shape search engine |
US20060036577A1 (en) * | 2004-08-03 | 2006-02-16 | Knighton Mark S | Commercial shape search engine |
US20060079983A1 (en) * | 2004-10-13 | 2006-04-13 | Tokyo Electron Limited | R2R controller to automate the data collection during a DOE |
US20060263133A1 (en) * | 2005-05-17 | 2006-11-23 | Engle Jesse C | Network based method and apparatus for collaborative design |
US20070204213A1 (en) * | 2006-02-24 | 2007-08-30 | International Business Machines Corporation | Form multiplexer for a portal environment |
US9087034B2 (en) * | 2006-02-24 | 2015-07-21 | International Business Machines Corporation | Form multiplexer for a portal environment |
US7760743B2 (en) * | 2006-03-06 | 2010-07-20 | Oracle America, Inc. | Effective high availability cluster management and effective state propagation for failure recovery in high availability clusters |
US20070206611A1 (en) * | 2006-03-06 | 2007-09-06 | Sun Microsystems, Inc. | Effective high availability cluster management and effective state propagation for failure recovery in high availability clusters |
US20080082629A1 (en) * | 2006-10-03 | 2008-04-03 | Oracle International Corporation | Enabling Users to Repeatedly Perform a Sequence of User Actions When Interacting With a Web Server |
US20120307308A1 (en) * | 2007-03-05 | 2012-12-06 | Morales Javier A | Automated imposition for print jobs with exception pages |
US20080220862A1 (en) * | 2007-03-06 | 2008-09-11 | Aiseek Ltd. | System and method for the generation of navigation graphs in real-time |
US8111257B2 (en) | 2007-03-06 | 2012-02-07 | Aiseek Ltd. | System and method for the generation of navigation graphs in real-time |
WO2008118065A1 (en) * | 2007-03-28 | 2008-10-02 | Agency 9 Ab | Graphics rendering system |
US20080313618A1 (en) * | 2007-06-13 | 2008-12-18 | Microsoft Corporation | Detaching Profilers |
US20090033978A1 (en) * | 2007-07-31 | 2009-02-05 | Xerox Corporation | Method and system for aggregating print jobs |
US8443057B1 (en) * | 2008-02-20 | 2013-05-14 | Adobe Systems Incorporated | System, method, and/or apparatus for establishing peer-to-peer communication |
US8171147B1 (en) * | 2008-02-20 | 2012-05-01 | Adobe Systems Incorporated | System, method, and/or apparatus for establishing peer-to-peer communication |
US20110138408A1 (en) * | 2009-12-07 | 2011-06-09 | Verizon Patent And Licensing, Inc. | Television interaction information and related iconography |
US9236965B2 (en) * | 2009-12-07 | 2016-01-12 | Verizon Patent And Licensing Inc. | Television interaction information and related iconography |
US20110161410A1 (en) * | 2009-12-31 | 2011-06-30 | Centrifuge Systems, Inc. | Massive-scale interactive visualization of data spaces |
US20120011518A1 (en) * | 2010-07-08 | 2012-01-12 | International Business Machines Corporation | Sharing with performance isolation between tenants in a software-as-a service system |
US8539078B2 (en) * | 2010-07-08 | 2013-09-17 | International Business Machines Corporation | Isolating resources between tenants in a software-as-a-service system using the estimated costs of service requests |
US8843350B2 (en) * | 2011-06-03 | 2014-09-23 | Walter P. Moore and Associates, Inc. | Facilities management system |
US20120310602A1 (en) * | 2011-06-03 | 2012-12-06 | Walter P. Moore and Associates, Inc. | Facilities Management System |
US9215151B1 (en) * | 2011-12-14 | 2015-12-15 | Google Inc. | Dynamic sampling rate adjustment for rate-limited statistical data collection |
CN102855653A (en) * | 2012-08-23 | 2013-01-02 | 上海创图网络科技发展有限公司 | Large-scale three-dimensional animation figure rendering system and application thereof |
US9600940B2 (en) * | 2013-04-08 | 2017-03-21 | Kalloc Studios Asia Limited | Method and systems for processing 3D graphic objects at a content processor after identifying a change of the object |
US20140304662A1 (en) * | 2013-04-08 | 2014-10-09 | Kalloc Studios Asia Limited | Methods and Systems for Processing 3D Graphic Objects |
US10097596B2 (en) * | 2013-11-11 | 2018-10-09 | Amazon Technologies, Inc. | Multiple stream content presentation |
US9374552B2 (en) | 2013-11-11 | 2016-06-21 | Amazon Technologies, Inc. | Streaming game server video recorder |
US9604139B2 (en) | 2013-11-11 | 2017-03-28 | Amazon Technologies, Inc. | Service for generating graphics object data |
US9634942B2 (en) | 2013-11-11 | 2017-04-25 | Amazon Technologies, Inc. | Adaptive scene complexity based on service quality |
US9641592B2 (en) | 2013-11-11 | 2017-05-02 | Amazon Technologies, Inc. | Location of actor resources |
US20170134450A1 (en) * | 2013-11-11 | 2017-05-11 | Amazon Technologies, Inc. | Multiple stream content presentation |
US10315110B2 (en) | 2013-11-11 | 2019-06-11 | Amazon Technologies, Inc. | Service for generating graphics object data |
US9805479B2 (en) | 2013-11-11 | 2017-10-31 | Amazon Technologies, Inc. | Session idle optimization for streaming server |
US9608934B1 (en) | 2013-11-11 | 2017-03-28 | Amazon Technologies, Inc. | Efficient bandwidth estimation |
US10257266B2 (en) | 2013-11-11 | 2019-04-09 | Amazon Technologies, Inc. | Location of actor resources |
US9413830B2 (en) | 2013-11-11 | 2016-08-09 | Amazon Technologies, Inc. | Application streaming service |
WO2015070221A3 (en) * | 2013-11-11 | 2015-11-05 | Heinz Gerard Joseph | Service for generating graphics object data |
US9582904B2 (en) | 2013-11-11 | 2017-02-28 | Amazon Technologies, Inc. | Image composition based on remote object data |
US9578074B2 (en) | 2013-11-11 | 2017-02-21 | Amazon Technologies, Inc. | Adaptive content transmission |
US9596280B2 (en) * | 2013-11-11 | 2017-03-14 | Amazon Technologies, Inc. | Multiple stream content presentation |
US10601885B2 (en) | 2013-11-11 | 2020-03-24 | Amazon Technologies, Inc. | Adaptive scene complexity based on service quality |
US20150134772A1 (en) * | 2013-11-11 | 2015-05-14 | Amazon Technologies, Inc. | Multiple stream content presentation |
US10347013B2 (en) | 2013-11-11 | 2019-07-09 | Amazon Technologies, Inc. | Session idle optimization for streaming server |
US10374928B1 (en) | 2013-11-11 | 2019-08-06 | Amazon Technologies, Inc. | Efficient bandwidth estimation |
US10778756B2 (en) | 2013-11-11 | 2020-09-15 | Amazon Technologies, Inc. | Location of actor resources |
WO2016066056A1 (en) * | 2014-10-31 | 2016-05-06 | 腾讯科技(深圳)有限公司 | Image remote projection method, server and client |
US11144041B2 (en) * | 2014-11-05 | 2021-10-12 | The Boeing Company | 3D visualizations of in-process products based on machine tool input |
US20160124424A1 (en) * | 2014-11-05 | 2016-05-05 | The Boeing Company | 3d visualizations of in-process products based on machine tool input |
US20160232710A1 (en) * | 2015-02-10 | 2016-08-11 | Dreamworks Animation Llc | Generation of three-dimensional imagery from a two-dimensional image using a depth map |
US9721385B2 (en) * | 2015-02-10 | 2017-08-01 | Dreamworks Animation Llc | Generation of three-dimensional imagery from a two-dimensional image using a depth map |
US10096157B2 (en) | 2015-02-10 | 2018-10-09 | Dreamworks Animation L.L.C. | Generation of three-dimensional imagery from a two-dimensional image using a depth map |
US9897806B2 (en) | 2015-02-10 | 2018-02-20 | Dreamworks Animation L.L.C. | Generation of three-dimensional imagery to supplement existing content |
US10417057B2 (en) * | 2017-01-30 | 2019-09-17 | Oracle International Corporation | Mutex profiling based on waiting analytics |
US20180217878A1 (en) * | 2017-01-30 | 2018-08-02 | Oracle International Corporation | Mutex profiling based on waiting analytics |
CN111177627A (en) * | 2019-12-27 | 2020-05-19 | 小船出海教育科技(北京)有限公司 | Method and device for dynamically configuring response scene |
WO2023109930A1 (en) * | 2021-12-16 | 2023-06-22 | 华为云计算技术有限公司 | Public cloud-based three-dimensional graphics data sharing method and cloud management platform |
CN115564903A (en) * | 2022-12-05 | 2023-01-03 | 阿里巴巴(中国)有限公司 | Three-dimensional scene asset data processing method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
WO2001084501A2 (en) | 2001-11-08 |
US20090172561A1 (en) | 2009-07-02 |
WO2001084501A3 (en) | 2002-06-27 |
AU2001256604A1 (en) | 2001-11-12 |
ATE481697T1 (en) | 2010-10-15 |
EP1290642A2 (en) | 2003-03-12 |
DE60143082D1 (en) | 2010-10-28 |
EP1290642B1 (en) | 2010-09-15 |
US8583724B2 (en) | 2013-11-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8583724B2 (en) | Scalable, multi-user server and methods for rendering images from interactively customizable scene information | |
JP4051484B2 (en) | Web3D image display system | |
US5781189A (en) | Embedding internet browser/buttons within components of a network component system | |
RU2509341C2 (en) | Image processing device and image processing method | |
EP1188125B1 (en) | Method for integrating into an application objects that are provided over a network | |
EP0777943B1 (en) | Extensible, replaceable network component system | |
US20100045662A1 (en) | Method and system for delivering and interactively displaying three-dimensional graphics | |
EP1391848A1 (en) | Information distribution system and information distribution method | |
CN112449707A (en) | Computer-implemented method for creating content including composite images | |
CN110930325B (en) | Image processing method and device based on artificial intelligence and storage medium | |
US20220254114A1 (en) | Shared mixed reality and platform-agnostic format | |
KR20230153469A (en) | Method, device, and program for interpreting user input regarding 3D objects | |
WO2001080098A2 (en) | Web browser plug-in providing 3d visualization | |
Behr et al. | Beyond the web browser-x3d and immersive vr | |
JP2005165873A (en) | Web 3d-image display system | |
Schönhage et al. | A flexible architecture for user-adaptable visualization | |
JP4140333B2 (en) | Web3D file editing system | |
Luo et al. | Real time multi-user interaction with 3D graphics via communication networks | |
Luo et al. | Cooperative design for 3d virtual scenes | |
Huang et al. | Interactive visualization for 3D pipelines using Ajax3D | |
Jehaes et al. | Hybrid representations to improve both streaming and rendering of dynamic networked virtual environments | |
KR20010044191A (en) | The method of supplying for electronic menual on the internet | |
EP0769169B1 (en) | A network component system | |
Gobbetti et al. | Virtual Sardinia: A large-scale hypermedia regional information system | |
Steed et al. | Construction of collaborative virtual environments |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: MENTAL IMAGES G.M.B.H. & CO. KG, GERMANY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DRIEMEYER, THOMAS;HERKEN, ROLF;REEL/FRAME:017432/0594;SIGNING DATES FROM 20010516 TO 20010517 |
AS | Assignment |
Owner name: MENTAL IMAGES GMBH, GERMANY Free format text: MERGER;ASSIGNOR:MENTAL IMAGES G.M.B.H. & CO. KG;REEL/FRAME:017441/0862 Effective date: 20031001 |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |