
US20060028479A1 - Architecture for rendering graphics on output devices over diverse connections


Info

Publication number
US20060028479A1
US20060028479A1
Authority
US
United States
Prior art keywords
rendering
server
display
asset
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/176,482
Inventor
Won-Suk Chun
Joshua Napoli
Thomas Purtell
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Stragent LLC
Original Assignee
Actuality Systems Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Actuality Systems Inc filed Critical Actuality Systems Inc
Priority to US11/176,482
Assigned to ACTUALITY SYSTEMS, INC. reassignment ACTUALITY SYSTEMS, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CHUN, WON-SUK, FAVALORA, GREGG E., NAPOLI, JOSHUA, PURTELL II, THOMAS J.
Publication of US20060028479A1
Assigned to STRAGENT, LLC reassignment STRAGENT, LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ACTUALITY SYSTEMS, INC.
Priority to US13/292,070 (published as US20120050301A1)
Priority to US13/292,066 (published as US20120050300A1)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 - Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B33 - ADDITIVE MANUFACTURING TECHNOLOGY
    • B33Y - ADDITIVE MANUFACTURING, i.e. MANUFACTURING OF THREE-DIMENSIONAL [3-D] OBJECTS BY ADDITIVE DEPOSITION, ADDITIVE AGGLOMERATION OR ADDITIVE LAYERING, e.g. BY 3-D PRINTING, STEREOLITHOGRAPHY OR SELECTIVE LASER SINTERING
    • B33Y 50/00 - Data acquisition or data processing for additive manufacturing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G 3/001 - Control arrangements or circuits using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background
    • G09G 3/003 - Control arrangements or circuits using specific devices not provided for in groups G09G3/02 - G09G3/36, to produce spatial visual effects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2210/00 - Indexing scheme for image generation or computer graphics
    • G06T 2210/32 - Image data format

Definitions

  • the present disclosure relates generally to imaging and visualization, and, more particularly, to an architecture for rendering graphics on output devices over diverse connections.
  • Example output devices are two-dimensional displays, three-dimensional displays such as volumetric, multi-view, and holographic displays, and two- and three-dimensional printers.
  • Three-dimensional (3-D) information is used in a variety of tasks, such as radiation treatment planning, mechanical computer-aided design, computational fluid dynamics, and battlefield visualization.
  • the user is forced to comprehend more information in less time.
  • a rescue team has limited time to discover a catastrophic event, map the structure of the context (e.g., a skyscraper), and deliver accurate instructions to team members.
  • a spatial 3-D display offers rescue planners the ability to see the entire scenario at once.
  • the 3-D locations of the injured are more intuitively known from a spatial display than from a flat screen, which would require rotating the “perspective view” in order to build a mental model of the situation.
  • Spatial 3-D displays (e.g., Actuality Systems Inc.'s Perspecta® Spatial 3-D Display) produce imagery that fills a volume of space, such as the inside of a transparent dome, and that appears 3-D without any cumbersome headwear.
  • Chromium architecture is a prior attempt to solve this problem. Chromium abstracts a graphical execution environment. However, the binding between an application, rendering resource and display is statically determined by a configuration file. Therefore, applications cannot address specific rendering resources. Current 3-D display architectures and applications cannot address remote or distributed resources. Such resources are necessary for displays where ready-made rendering hardware is not available for PCs.
  • FIG. 1 depicts an overview of an architecture that may be implemented by exemplary embodiments of the present invention
  • FIG. 2 depicts a more detailed view of an architecture that may be implemented by exemplary embodiments of the present invention
  • FIG. 3 is a block diagram of an exemplary spatial graphics language implementation
  • FIG. 4 is a block diagram of an exemplary compatibility module structure
  • FIG. 5 is a block diagram of an exemplary rendering module
  • FIG. 6 is an exemplary process flow diagram for processing a command from a ported application
  • FIG. 7 depicts a system that may be implemented by exemplary embodiments of the present invention.
  • FIG. 8 depicts a system that may be implemented by exemplary embodiments of the present invention.
  • FIG. 9 depicts a system that may be implemented by exemplary embodiments of the present invention.
  • FIG. 10 depicts a system that may be implemented by exemplary embodiments of the present invention.
  • Exemplary embodiments of the present invention are directed to a system for displaying graphical information.
  • the system includes an asset server for storing information and a rendering server in communication with the asset server.
  • the rendering server receives a graphics command and renders graphic display data in response to the graphics command and to the information.
  • the rendering server is independently addressable from the asset server.
  • exemplary embodiments of the present invention are directed to a method for displaying graphical information.
  • the method includes receiving a graphics command at a rendering server.
  • Information responsive to the graphics command is accessed.
  • the information is located in an asset server that is separately addressable from the rendering server.
  • Graphic display data is rendered in response to the graphics command and the information.
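  • For illustration only (all class and method names below are hypothetical and not part of the disclosure), the method above, in which a rendering server receives a graphics command and accesses a separately addressable asset server, can be sketched as:

```python
# Illustrative sketch: a rendering server addressable separately from
# the asset server that stores the information it renders with.
class AssetServer:
    """Stores information (e.g., textures) under independent handles."""
    def __init__(self):
        self._assets = {}

    def store(self, handle, data):
        self._assets[handle] = data

    def fetch(self, handle):
        return self._assets[handle]


class RenderingServer:
    """Receives graphics commands; accesses the asset server on demand."""
    def __init__(self, asset_server):
        self.assets = asset_server  # separately addressable resource

    def render(self, command):
        # Access information responsive to the graphics command, then
        # render graphic display data from the command plus that data.
        data = self.assets.fetch(command["asset"])
        return (command["op"], data)


assets = AssetServer()
assets.store("tex5", "brick-texture")
renderer = RenderingServer(assets)
result = renderer.render({"op": "draw_triangle", "asset": "tex5"})
```

In this toy sketch the two servers are ordinary objects; in the disclosed architecture they may be separate networked machines addressed independently.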
  • Further exemplary embodiments of the present invention are directed to an architecture for displaying graphical information.
  • the architecture includes an asset resource layer for storing information and a rendering layer.
  • the rendering layer receives a graphics command and renders graphic display data in response to the graphics command and to the information.
  • the rendering layer is independently addressable from the asset resource layer.
  • Still further exemplary embodiments of the present invention include a computer program product for displaying graphical information.
  • the computer program product includes a storage medium readable by a processing circuit for performing a method.
  • the method includes receiving a graphics command at a rendering server.
  • Information responsive to the graphics command is accessed.
  • the information is located in an asset server that is separately addressable from the rendering server.
  • Graphic display data is rendered in response to the graphics command and the information.
  • Exemplary embodiments of the present invention include a spatial 3-D architecture to support separate asset servers and rendering servers in a graphics environment.
  • the architecture also has a spatial visualization environment (SVE), that includes a 3-D rendering API and a display virtualization layer that enables application developers to universally exploit the unique benefits (such as true volumetric rendering) of 3-D displays.
  • SVE supports the cooperative execution of multiple software applications.
  • a new API is defined, referred to herein as the spatial graphics language (SpatialGL), to provide an optional, display-agnostic interface for 3-D rendering.
  • SpatialGL is a graphical language that facilitates access to remote displays and graphical data (e.g., rendering modules and assets).
  • the architecture further has core rendering software which includes a collection of high-performance rendering algorithms for a variety of 3-D displays.
  • the architecture also includes core rendering electronics including a motherboard that combines a graphics processing unit (GPU) with a 64-bit processor and double-buffered video memory to accelerate 3-D rendering for a variety of high-resolution, color, multiplanar and/or multiview displays.
  • Many of today's 3-D software applications use the well-known OpenGL API.
  • exemplary embodiments of the present invention include an OpenGL driver for the Actuality Systems, Incorporated Perspecta Spatial 3-D Display product. Embodiments of the Perspecta Spatial 3-D Display product are described in U.S. Pat. No. 6,554,430 to Dorval et al., of common assignment herewith.
  • volume manager is available to manage cooperative access to display resources from one or more simultaneous software applications (see for example, U.S. Patent Application No. 2004/0135974 A1 to Favalora et al., of common assignment herewith).
  • Current implementations of the volume manager have asset and rendering resources that are not abstracted separately from the display.
  • the display rendering and storage system are considered as a single concept. Therefore, the display and rendering system must be designed together. Effectively, the display must be designed with the maximum image complexity in mind.
  • These resources may be independently addressed, and therefore may be located in one or more servers and accessed via one or more networks.
  • these resources (e.g., two or more computation resources) may also be located in different geographic locations (e.g., different rooms in the same building, different cities, different countries) and in communication via a network.
  • FIG. 1 depicts an overview of an architecture that may be implemented by exemplary embodiments of the present invention.
  • One or more means is also provided for interfacing one or more central applications with local, remote, or distributed rendering or display systems, and for interfacing external databases with a rendering system.
  • the architecture depicted in FIG. 1 includes four layers, an application software layer 102 , an SVE layer 104 , a rendering architecture layer 106 and a display-specific rendering module layer 108 .
  • the application software layer 102 includes legacy applications 110 , ported applications 112 and native applications 116 .
  • the legacy applications 110 and the ported applications 112 are written to the OpenGL API and converted into the SpatialGL API 118 by the OpenGL compatibility module 114 in the SVE layer 104 .
  • OpenGL and SpatialGL are examples of API types. Exemplary embodiments are not limited to these two types of APIs and may be extended to support any graphics APIs such as the Direct3D API.
  • the native applications 116 are written to the SpatialGL API 118 which is in communication with the volume manager 120 .
  • the rendering architecture layer 106 depicted in FIG. 1 includes core rendering software (CRS) 122 , which is a device independent management layer for performing computations/renderings based on commands received from the SpatialGL API 118 and data in the volume manager 120 .
  • the display-specific rendering module layer 108 includes a Perspecta rendering module 124 for converting data from the CRS 122 for output to a Perspecta Spatial 3-D Display and a multiview rendering module 126 for converting data from the CRS 122 into output to other 3-D and 2-D display devices.
  • the architecture depicted in FIG. 1 transforms commands (e.g., graphics commands) from several API types into a single graphical language, SpatialGL. This permits the architecture to provide consistent access to display and rendering resources to both legacy and native application software. This is contrasted with the currently utilized device-specific rendering drivers. Each driver manages rendering hardware, visual assets (display lists, textures, vertex buffers, etc.), and display devices.
  • the architecture depicted in FIG. 1 includes a rendering architecture layer 106, a device-independent management layer containing the core rendering software 122.
  • This rendering architecture layer 106 gives the graphics language (SpatialGL 118 ) access to diverse, high-level resources, such as multiple display geometries, rendering clusters and image databases.
  • FIG. 2 depicts a more detailed view of an architecture that may be implemented by exemplary embodiments of the present invention.
  • the SVE layer 104 includes a collection of compatibility strategies between emerging displays and application software.
  • One aspect of SVE provides compatibility with software applications to diverse display types through SpatialGL APIs and OpenGL APIs.
  • the SVE concept extends in three additional directions: application software development can be accelerated by producing higher-level graphical programming toolkits; a spatial user interface (UI) library can provide applications with a consistent and intuitive UI that works well with 3-D displays; and a streaming content library allows the SVE to work with stored or transmitted content. This may be utilized to enable “appliance” applications and “dumb terminals.”
  • the SVE is a display-agnostic and potentially remote-rendering architecture.
  • the SVE can communicate with 2-D and very different 3-D displays (multiplanar, view-sequential, lenticular, stereoscopic, holographic).
  • the rendering server does not need to be local to the display(s).
  • the CRS 122 is a collection of rendering strategies. The cost of implementing a rendering engine for a new display geometry breaks down into a system integration effort and an algorithm implementation effort. CRS 122 eliminates the system integration effort by providing a portable communication framework to bridge the client and server domains and by abstracting computation assets. The CRS 122 creates output for a Perspecta rendering module 124 , a multiview rendering module 126 and can be tailored to create output for future rendering modules 206 . In addition, the architecture depicted in FIG. 2 may be utilized to support future graphics display architectures and third-party architectures 210 .
  • the spatial transport protocol describes the interaction between the Spatial Visualization Environment and Core Rendering Software.
  • the spatial transport protocol comprises a set of commands.
  • the STP may optionally comprise a physical definition of the bus used to communicate STP-formatted information.
  • the STP commands are divided into several groups. One group of commands is for operating the rendering hardware and frame buffer associated with the display. Another group of commands is for synchronizing the STP command stream with events on the host device, rendering hardware and frame buffer. Another group of commands is for operating features specific to the display hardware, such as changing to a low power mode or reading back diagnostic information.
  • Different streams of graphics commands from different applications may proceed through the architecture to be merged into a single STP stream. Due to multitasking, the STP is able to coherently communicate overlapping streams of graphics commands.
  • STP supports synchronization objects between the applications (or any layer below the application) and the display hardware.
  • the application level of the system typically generates sequential operations for the display drivers to process.
  • Graphics commands may be communicated with a commutative language.
  • the display hardware completes the commands out of order. Occasionally, order is important; one graphics operation may refer to the output of a previous graphics operation, or an application may read information back from the hardware, expecting to receive a result from a sequence of graphics operations.
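  • As an illustrative sketch (the batching function and command names are assumptions, not the actual STP definition), this ordering model can be pictured as commands grouped between synchronization barriers: commands within a batch may complete in any order, while a "sync" command forces earlier work to finish before later work begins.

```python
# Hypothetical sketch: commands between sync points form a batch the
# hardware may complete out of order; "sync" is an ordering barrier.
def split_into_batches(stream):
    batches, pending = [], []
    for cmd in stream:
        if cmd == "sync":
            batches.append(pending)  # everything before the barrier
            pending = []
        else:
            pending.append(cmd)
    if pending:
        batches.append(pending)
    return batches


batches = split_into_batches(
    ["draw_a", "draw_b", "sync", "read_back", "draw_c"])
```

Here a read-back placed after a sync is guaranteed to observe the results of the draws in the preceding batch, matching the case where one operation refers to the output of a previous one.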
  • Exemplary embodiments of the SVE include a 3-D rendering API and display virtualization layer that enables application developers to universally exploit the unique benefits (such as true volumetric rendering) of 3-D displays. It consists of several subsystems: SpatialGL 118 , OpenGL compatibility module 114 , streaming content library and volume manager 120 . Future development may expand SVE to include scene-graph, rendering engine and application-specific plug-in subsystems.
  • implementations of the SpatialGL API 118 are display, or output-device-specific.
  • Examples of “targets” for SpatialGL implementations are: 2-D displays, volumetric displays, view-sequential displays, and lenticular multi-view displays.
  • Exemplary embodiments of the SVE can communicate with a broad range of output devices whose underlying physics are quite different.
  • FIG. 3 is a block diagram of an exemplary SpatialGL implementation that may be utilized by exemplary embodiments of the present invention.
  • the blocks include a NativeApp block 116 which is written to take full advantage of spatial displays by using the SpatialGL API.
  • the NativeApp block 116 may transmit data to the client 308 , SpatialEngine 304 , SceneGraph 306 and the volume manager 310 .
  • applications can also take advantage of higher level APIs such as SceneGraph 306 and SpatialEngine 304 from Actuality Systems, Incorporated.
  • SceneGraph 306 provides an interface for encoding scene graphs in SpatialGL.
  • SceneGraph 306 implements features such as: assembling shapes into objects; transforming the positions of objects; and animation nodes.
  • the SpatialEngine 304 implements high-level functions such as drawing volumes and overlaying scene-graphs. SpatialEngine 304 is extensible. For example, an OilToolkit can be added, which adds functions such as: draw porosity volume, overlay drill path, and animate path plan.
  • SpatialGL is input to the client 308 .
  • the native API, or SpatialGL API, provides an object-oriented front-end to the STP byte code.
  • the SpatialGL API exposes features such as, but not limited to: define fragment program, define vertex program, bind geometry source, bind texture source, swap buffer and synchronize.
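  • For illustration only, such an object-oriented front end can be sketched as a recorder that turns API calls into byte-code-like tokens. The method names follow the features listed above, but the actual SpatialGL signatures are not specified here and these are assumptions:

```python
# Hypothetical sketch: a front end that records SpatialGL-style calls
# as serialized tokens standing in for STP byte code.
class SpatialGLFrontEnd:
    def __init__(self):
        self.byte_code = []

    def bind_texture_source(self, handle):
        self.byte_code.append(("BIND_TEXTURE", handle))

    def bind_geometry_source(self, handle):
        self.byte_code.append(("BIND_GEOMETRY", handle))

    def swap_buffer(self):
        self.byte_code.append(("SWAP",))


gl = SpatialGLFrontEnd()
gl.bind_texture_source(5)
gl.bind_geometry_source(2)
gl.swap_buffer()
```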
  • the client 308 sends SpatialGL commands to the volume manager 310 .
  • the SpatialGL commands may include commands for retrieving persistent objects to be displayed on a graphical display device.
  • the persistent objects include, but are not limited to, 2-D and 3-D textures and vertex buffers.
  • the persistent objects may be stored on one or more of a database, a storage medium and a memory buffer.
  • the SpatialGL commands may include commands for retrieving display nodes to be displayed on a graphical display device.
  • Display nodes refer to an instance of any display that can be individually referenced (e.g., a Perspecta display, a 2-D display).
  • STP commands from the volume manager 310 are sent to the core rendering client 312 .
  • the core rendering client 312 is the first computation resource available to the STP execution environment. Early data reducing filter stages can also execute here. Stream compression and volume overlay are processes that may be assigned computation resources at this point.
  • the core rendering client 312 formats the remainder of the filter graph to take into account the physical transport 314 layer between the core rendering client 312 and the core rendering server.
  • API calls are converted into STP.
  • Each STP node is a computation resource. STP procedures get bound to STP nodes as the program is processed. The node executes any procedure that has been bound to it by a previous node.
  • Spatial Transport Protocol may be converted for persistent storage and written to a disk. This can be accomplished by storing the serialized Spatial Transport Protocol byte code to disk, along with a global context table.
  • the global context table allows context-specific assets to be resolved when the STP file is later read back from disk.
  • the global context table establishes correspondences between local context handles referenced by the STP byte code and persistent forms of the referenced data.
  • an STP byte code may reference texture image number 5.
  • the texture image number is associated with specific data in the original local context of the byte code.
  • texture image number 5 is associated with a particular texture image by the local context table. This can be accomplished by storing, in table position 5, a copy of the texture image, or by storing a GUID or URL that identifies a persistent source for the texture image.
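  • As an illustrative sketch (function names and the table layout are assumptions), persisting byte code together with a global context table, and later resolving a local handle to its persistent source, can look like:

```python
# Hypothetical sketch: serialize STP byte code with a global context
# table so a local handle (e.g., texture image number 5) can later be
# resolved to a persistent source such as a GUID or URL.
def persist(byte_code, local_context):
    # local_context maps handle -> persistent source (copy, GUID, or URL)
    return {"byte_code": list(byte_code),
            "context_table": dict(local_context)}


def resolve(persisted, handle):
    return persisted["context_table"][handle]


stored = persist([("BIND_TEXTURE", 5), ("SWAP",)],
                 {5: "file://assets/brick.png"})
source = resolve(stored, 5)
```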
  • FIG. 4 is a block diagram of an exemplary compatibility module structure.
  • Ported applications 112 and/or legacy applications 110 can provide input to the compatibility module structure depicted in FIG. 4 .
  • Ported applications 112 are applications originally written using OpenGL that have been extended by programmers to interface with spatial displays.
  • Legacy applications 110 are applications written with no knowledge of spatial displays or vendor APIs (e.g., Actuality Systems, Incorporated APIs).
  • OpenGL support is provided through two dynamic link libraries.
  • the main library is called the GLAS library 412 . It provides drawing methods similar to the industry-standard OpenGL API, and also contains specialized initialization and state management routines for spatial displays.
  • the GLAS library 412 converts OpenGL API calls into SpatialGL 118 calls.
  • SpatialGL 118 is a low level graphics language utilized by exemplary embodiments of the present invention.
  • the OGLStub library 414 exports an interface similar to the OpenGL32.dll system library 408 .
  • the behavior of the library can be customized on a per-application basis.
  • the OGLStub library 414 intercepts and redirects OpenGL API calls in a customizable manner. Calls are optionally forwarded to the OpenGL32.dll system library 408 and/or the GLAS library 412 for translation.
  • OpenGL is an industry-standard low-level 3-D graphics API for scientific and computer-aided design (CAD) applications. OpenGL supplies a language that expresses static information. The application must explicitly break down dynamic scenes into discrete frames and render each one. OpenGL expresses commands such as: input a vertex; draw a triangle; apply a texture; engage a lighting model; and show the new rendering.
  • OpenGL calls are duplicated for both the system library 408 (to render on the 2-D monitor) and for the GLAS library 412 .
  • the first scene is analyzed to determine the depth center of the application's implied coordinate system. Since the depth center is not known until the first swap-buffers call, it may take until the second scene for the image in Perspecta to render properly.
  • the first scene is analyzed to determine the depth center of the application's coordinate system. Once the depth center is calculated, a fix-up transform is calculated. This transform is applied consistently to the projection specified by the application, so that the application's further transformations of the projection (such as scaling and zooming) are reflected properly in the spatial rendering. After the depth center is determined, the Stub library 414 issues a redraw call to the application to ensure that the first scene is drawn properly in Perspecta.
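  • A minimal sketch of the depth-center step, under the assumption that the fix-up reduces to a translation along z (the actual fix-up transform would be composed with the application's full projection matrix):

```python
# Hypothetical sketch: estimate the depth center of the first scene
# from the vertices seen before the first swap-buffers call, then
# derive a z translation that centers the scene in the display.
def depth_center(vertices):
    zs = [z for (_, _, z) in vertices]
    return (min(zs) + max(zs)) / 2.0


def fixup_z_offset(vertices):
    # Translate so the scene's depth center maps to display center z=0.
    return -depth_center(vertices)


scene = [(0.0, 0.0, 1.0), (1.0, 0.0, 3.0), (0.0, 1.0, 5.0)]
offset = fixup_z_offset(scene)
```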
  • Ghost mode 406 automatically duplicates OpenGL calls for both the system library 408 and for the GLAS library 412 .
  • depth centering is based on the x and y scale and is chosen so that the majority of the vertices fall within the display.
  • ghost mode 406 provides an unextended OpenGL interface and attempts to make a spatial display appear as a 2-D display to the application.
  • Extended mode 410 allows the application to control the call forwarding behavior. Extended mode 410 exposes an extended OpenGL interface.
  • a few commands are added to help the application control a spatial display separately from a 2-D display.
  • Example commands include: create a context for a spatial display and draw to a spatial display context.
  • Output from the GLAS library 412 in SpatialGL, is sent to the client 308 and then to the volume manager 310 .
  • the volume manager 310 assigns display resources. It filters the STP stream to reformat the data according to the display resource assigned to the given context.
  • the core rendering block 312 , which contains the mechanisms for decoding and executing procedures in the STP language, receives STP commands.
  • the configuration is controllable for each application, based on a central control repository.
  • Parameters that may be configured include, but are not limited to: context selection strategy (allows the controller to change the context selection while the application is running); projection fix-up strategy, which overrides the projection that the application specifies in order to fit the image in the actual display geometry; texture processing strategy; context STP preamble (e.g., resolution hints); and scene STP preamble.
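  • Such a central control repository can be sketched as per-application overrides on top of defaults; the parameter names mirror the examples above but the keys and values here are assumptions:

```python
# Hypothetical sketch: per-application configuration drawn from a
# central control repository, falling back to defaults.
DEFAULTS = {
    "context_selection": "auto",
    "projection_fixup": "fit-display",
    "texture_processing": "default",
    "context_preamble": [],  # e.g., resolution hints
}


def config_for(app_name, repository):
    cfg = dict(DEFAULTS)
    cfg.update(repository.get(app_name, {}))
    return cfg


repo = {"cad_viewer": {"projection_fixup": "none"}}
cfg = config_for("cad_viewer", repo)
```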
  • Some spatial displays physically realize view-dependent lighting effects. In this case, lighting is calculated based on the actual view directions, rather than the master direction given by the projection matrix.
  • the partial ordering of the color value of the fragments must agree with the partial ordering of the intersection (area or length) between the fragment and pixels of the display's native coordinate system, when normalized for variation in area or volume of the pixels.
  • the streaming content library permits spatial stream assets.
  • a spatial stream asset is a time-varying source of spatial imagery.
  • the spatial stream may be synchronized with one or more audio streams.
  • a spatial stream may either consist of a real-time stream, a recorded stream, or a dynamically generated stream.
  • An example of a real-time spatial stream is a multi-view stream that is fed from an array of cameras.
  • An example of a recorded stream is a spatial movie stored on a removable disk.
  • An example of a dynamically generated stream is a sequence of dynamically rendered 3-D reconstructions from a PACS database.
  • Each stream is associated with a spatial codec.
  • the intended interpretation of the stream is determined by the associated spatial codec.
  • the spatial codec comprises a stream encoding specification and a reconstruction specification.
  • the stream encoding specification determines the mapping from the raw binary stream to a time-varying series of pixel arrays.
  • the stream encoding specification may also identify an audio stream, synchronized with the pixel arrays.
  • the reconstruction specification determines the intended mapping from pixel arrays to physical light fields. Examples of stream encoding specifications include MPEG coded representations.
  • the reconstruction specification can be defined using the persistent form of Spatial Transport Protocol.
  • a client of the streaming content library receives the raw binary stream and the spatial codec.
  • the client proceeds to reconstruct an approximation of the intended physical light field, by calculating pixel arrays using the stream encoding specification.
  • the client consumes one or more pixel arrays and interprets an intended light field, using the reconstruction specification.
  • the intended light field is rendered into the local display's specific geometry, using Core Rendering Software.
  • Core Rendering Software moves the rendered image into the spatial frame buffer, causing the display to generate a physical manifestation of a light field.
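  • The client pipeline described above can be sketched as two stages; the codec structure and the toy two-pixel "frames" here are illustrative assumptions, not a real stream encoding such as MPEG:

```python
# Hypothetical sketch: the stream encoding specification decodes the
# raw binary stream into pixel arrays; the reconstruction specification
# maps each pixel array to an intended light field.
def reconstruct_stream(raw_stream, codec):
    pixel_arrays = codec["decode"](raw_stream)      # encoding spec
    return [codec["to_light_field"](p) for p in pixel_arrays]


toy_codec = {
    # Chop the raw stream into two-sample "frames".
    "decode": lambda raw: [raw[i:i + 2] for i in range(0, len(raw), 2)],
    # Tag each frame as an intended light field.
    "to_light_field": lambda pixels: ("light_field", tuple(pixels)),
}
fields = reconstruct_stream([1, 2, 3, 4], toy_codec)
```

Each resulting light field would then be rendered into the local display's specific geometry by Core Rendering Software.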
  • the streaming content library includes spatial stream asset servers. Each spatial stream asset is published by a spatial asset server.
  • An asset server may publish one or more streams, each with a unique URL.
  • a software application using SpatialGL (such as a spatial media player) can call up a particular spatial stream asset using its associated URL.
  • Spatial stream assets may be transmitted with unidirectional signaling: for example, several TV channels may be jointly used to transmit a multi-view recording.
  • the spatial codec can be continuously or periodically transmitted.
  • Spatial content may also be broadcast with bidirectional signaling: for example, a spatial movie may be downloaded from an Internet-based asset server and viewed using a spatial media player using SpatialGL.
  • the client could potentially negotiate an optimal spatial codec to match the client's display geometry.
  • Bidirectional signaling can also be used to allow a client to remotely control a dynamically generated stream. For example, a client may continuously send updates to a server about the desired view direction and region of interest, while the server continuously returns rendered images to the client through the streaming library.
  • a client may receive notifications from the spatial stream asset server when new data is available. Based on the notifications, the client may choose to download and render the new data or else the client may skip the new data. When receiving a notification, the client may decide whether to download or skip the new data, based on factors such as the currently available buffer space, communication bandwidth, processing power, or desired level of image quality.
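  • The download-or-skip decision can be sketched as a simple policy; the factors follow the list above, but the function and its thresholds are illustrative assumptions:

```python
# Hypothetical sketch: on each new-data notification, decide whether to
# download and render the new data or skip it, based on available
# buffer space and communication bandwidth.
def on_notification(frame_bytes, buffer_free, bandwidth_ok):
    if frame_bytes <= buffer_free and bandwidth_ok:
        return "download"
    return "skip"


decision = on_notification(frame_bytes=1024,
                           buffer_free=4096,
                           bandwidth_ok=True)
```

A fuller policy could also weigh processing power and the desired level of image quality before skipping.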
  • the CRS 122 has the character of a slave peripheral and communication to the CRS 122 is limited to proprietary channels. Alternate exemplary embodiments of the CRS 122 have an expanded role as a network device. In addition, it can communicate with a host over a network, and it supports standard protocols for network configuration.
  • the CRS 122 has both a client part and a server part.
  • a host PC runs an application and is in communication with a single multiplanar 3-D display which contains an embedded core rendering electronics system.
  • the client part is embodied on the host PC, while the server part is embodied on the core rendering electronics.
  • CRS 122 is distinct from the SVE because the CRS 122 is meant primarily to provide a rendering engine for specific display types that is compatible with the display-independent graphical commands generated in the SVE.
  • the client side of the CRS 122 interfaces to the SVE using the STP language.
  • STP is used to package and transport SpatialGL API calls.
  • a core rendering client connects the volume manager 120 to the physical transport by acting as an STP interpreter.
  • the core rendering client interpreter exposes procedures (with STP linkage) that allow an STP program to address specific servers. Exemplary embodiments of the present invention only function when a single server is present. Alternate exemplary embodiments of the core rendering client communicate with servers over a network, and are able to list and address the set of available servers.
  • the client also provides a boot service.
  • This provides the boot-image used by the net-boot feature of the servers.
  • the boot-image is stored in a file that can be updated by Perspecta Software Suite upgrade disks (or via web upgrade).
  • the boot service can be enabled and disabled by the SVE. After the boot-image file is upgraded, the installer must enable the boot service to allow the display to update.
  • the embedded system acts as a normal Internet Protocol (IP) device.
  • the embedded system acts as a server, while the host PC acts as a client.
  • the server acts as a normal IP device.
  • the client and server must be directly connected.
  • clients and servers are connected through a gigabit switch. This configuration removes the requirement that the client PC contains two Ethernet controllers, and it allows multiple clients to connect to a single server.
  • the server obtains an IP address using dynamic host configuration protocol (DHCP) (unless it has been configured to use a static address).
  • the client and the CRS 122 server must be made aware of each other's identity. This is done by a symmetric scheme in which a node (client or server) broadcasts a datagram when it starts. The node that started first thereby obtains the identity of the later node. If the server is started first and encounters a client datagram broadcast, it opens a connection to the client to communicate the server's identity.
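The symmetric discovery scheme above can be sketched as follows, with the datagram exchange simulated in-process. The class name and message format are illustrative assumptions; a real implementation would carry these datagrams over UDP broadcast.

```python
class Node:
    """Symmetric discovery: each node broadcasts a datagram at startup.

    The node that started first hears the later node's broadcast and
    records its identity; if a server hears a client broadcast, it
    replies directly so the client learns the server's identity too.
    """
    def __init__(self, name, role):
        self.name, self.role = name, role
        self.peer = None          # identity of the discovered peer

    def startup_datagram(self):
        return {"name": self.name, "role": self.role}

    def on_broadcast(self, datagram):
        # A running node learns the identity of the node that started later.
        self.peer = datagram["name"]
        if self.role == "server" and datagram["role"] == "client":
            # Server opens a connection back to announce itself.
            return self.startup_datagram()
        return None
```

In the server-started-first case, the server hears the client's startup broadcast, records the client's identity, and returns a reply from which the client learns the server's identity.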
  • a client may simultaneously communicate with multiple servers. Each server may only communicate with a single client at a time.
  • the servers have a user interface and policy for attaching to specific clients when more than one client is available.
  • the CRS 122 provides a simple network management protocol (SNMP) interface to manage the network settings of the server.
  • the SNMP interface configures the IP address, broadcast settings and security settings.
  • the security settings include client allow and deny lists.
  • the host and client support a single gigabit Ethernet connection.
  • the host and client employ an additional protocol to support two gigabit Ethernet connections.
  • the client may open the server.
  • the client and server communicate through datagrams.
  • the server is single-threaded; the client may only open a single connection to the server and it is guaranteed exclusive access to the entire display resource.
  • the client may begin transacting rendering commands. Rendering commands are moved between the client and server using a command stream and a remote memory protocol.
  • because the network graphics service is meant to communicate only over a local network segment, a very low level of packet loss is expected.
  • the details of the communication scheme can be arranged to ensure that the system degrades gracefully under packet loss.
  • Device allocation and context creation must be guaranteed to operate correctly under packet loss.
  • the bulk graphics data transfer is not protected, except that a frame that is rendered without packet loss must not be degraded by packet loss in previous frames.
  • Persistent texture map data is protected against packet loss by a checksum and a failure/retry scheme.
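The checksum and failure/retry protection for persistent texture data might look like the following sketch. The use of CRC-32 and a fixed retry count are assumptions for illustration; `transmit` stands in for the unreliable bulk channel.

```python
import zlib

def send_texture(data, transmit, max_retries=3):
    """Protect persistent texture data with a checksum and failure/retry.

    `transmit` takes bytes and returns the bytes the receiver saw
    (possibly corrupted by packet loss). The transfer is retried until
    the receiver's copy matches the sender's checksum.
    """
    checksum = zlib.crc32(data)
    for _ in range(max_retries):
        received = transmit(data)
        if zlib.crc32(received) == checksum:
            return received          # receiver holds a verified copy
    raise IOError("texture transfer failed after retries")
```

A transfer that is corrupted once succeeds on the second attempt, which matches the graceful-degradation goal: a frame rendered without packet loss is not affected by losses in earlier transfers.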
  • CRS 122 uses the STP language as a form for communicating graphics commands and procedures.
  • STP allows the interfaces between the major components of the Core Rendering Software system to be uniform.
  • STP serves as the inter-process-communication mechanism.
  • STP is used to communicate a sequence of graphics commands from the client to the server.
  • the initial version of STP will include conditional execution and parallel branching prototype features.
  • modules will be written within the STP language, thus flattening the hardware-native part of the architecture.
  • Conditional execution and parallel branching features will be optimized in later versions of Core Rendering Software.
  • FIG. 5 is a block diagram of an exemplary rendering module.
  • the pipeline system structure, or pipeline framework, subsystem provides the generic structure that is common to rendering pipelines for the CRS 122 .
  • a rendering pipeline is implemented through a pipeline system class 502 .
  • a pipeline system class 502 is composed of a rendering pipeline and a fixed set of active objects.
  • An active object models a device that can trade time for data movement or transformation, such as a bus, a GPU or a CPU.
  • the pipeline system class 502 binds stages to scheduler threads 510 (i.e., to active objects).
  • the scheduler thread 510 is the binding between stages and active objects.
  • An instance of a pipeline 504 operates on a single input stream of homogeneous elements.
  • An exemplary pipeline constructor initializes the first-in-first-out (FIFO) length and the stage connections. As depicted in FIG. 5, fixed-length FIFOs 506 constrain the resource usage of the system.
  • Rendering pipelines are implemented as a series of stages 508 that communicate with tasks.
  • a stage 508 is an algorithmic unit that transforms one stream of tasks into another stream of tasks.
  • although a stage 508 may be designed to be compatible with a specific active object 512, the binding to the active object 512 is external to the stage 508.
  • a stage 508 may implicitly require a binding with a GPU by making OpenGL calls, but it must not own or manipulate an OpenGL context.
  • Stage objects have an unusually complicated life cycle. They are typically created in one thread but work in a second thread. The lifetime of a stage 508 consists of these distinct transitions: construction, initialization, execution, de-initialization, and destruction.
  • a stage 508 transforms a stream of homogeneous elements.
  • a stage 508 utilizes the resources of a single active object and executes several hundred times per second.
  • the binding between a stage 508 and an active object 512 is external to the stage class. Therefore, a pipeline 504 may be considered a stage 508, but a pipeline system 502 may not be considered a stage 508.
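The stage/FIFO structure described above can be sketched as follows: each stage transforms a stream of homogeneous tasks, its binding to a worker thread (the active object) is external to the stage itself, and fixed-length FIFOs join the stages. The task representation and the `None` end-of-stream sentinel are illustrative assumptions.

```python
from queue import Queue
import threading

def make_stage(transform, inbox, outbox):
    """A stage transforms one stream of homogeneous tasks into another.

    The stage owns no thread; binding it to an active object (here, a
    plain thread) happens externally, mirroring the text above.
    """
    def run():
        while True:
            task = inbox.get()
            if task is None:          # end-of-stream sentinel
                outbox.put(None)
                break
            outbox.put(transform(task))
    return run

# Two stages bound to two scheduler threads, joined by bounded FIFOs;
# the fixed maxsize constrains the resource usage of the system.
q0, q1, q2 = Queue(maxsize=4), Queue(maxsize=4), Queue(maxsize=4)
threads = [threading.Thread(target=make_stage(lambda t: t * 2, q0, q1)),
           threading.Thread(target=make_stage(lambda t: t + 1, q1, q2))]
```

Feeding tasks into `q0` and reading results from `q2` exercises the two-stage pipeline; because the FIFOs are bounded, a slow downstream stage backpressures the upstream one.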
  • the remote active object 512 depicted in FIG. 5 models a thread of execution that exists outside of the CPU. Input to the active object 512 includes data from the GL context block 514 and the voxel engine context block 516 .
  • the pipeline framework includes a Fence class, which is utilized to provide a main stream synchronization pattern.
  • a pipeline system 502 operates asynchronously from its enclosing system.
  • the enclosing system can insert a fence into the command stream of a pipeline 504 .
  • a pipeline passes a fence when all processing due to tasks issued before the fence has completed.
  • the enclosing system can query whether the pipeline 504 has passed the fence, or it can block until the fence has been passed.
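The fence pattern above might be sketched as follows; the method names (`signal`, `passed`, `wait`) are illustrative assumptions, not the actual Fence class interface.

```python
import threading

class Fence:
    """Main-stream synchronization pattern: the enclosing system inserts
    a fence into a pipeline's command stream, then either polls whether
    it has been passed or blocks until it is."""
    def __init__(self):
        self._passed = threading.Event()

    def signal(self):
        # Called by the pipeline once all work issued before the
        # fence has completed.
        self._passed.set()

    def passed(self):
        # Non-blocking query by the enclosing system.
        return self._passed.is_set()

    def wait(self, timeout=None):
        # Blocking query; returns True once the fence has been passed.
        return self._passed.wait(timeout)
```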
  • As described above, a key feature of the SVE is display-independence (or “display-agnosticism”). Implementations of the SpatialGL API can be made for a variety of 2-D and 3-D displays.
  • the SpatialGL API may be utilized with a Perspecta multiplanar volumetric display.
  • the SpatialGL API may be utilized with other types of displays.
  • in this case, the SpatialGL implementation is substantially simpler than the SpatialGL implementation for multiplanar rendering.
  • a slice volume could be created as part of the rendering process.
  • a slice volume contains a slice for each rendered view direction.
  • Rendered views use sheared versions of standard projection matrices, corresponding to the viewing angles.
  • “Final views” correspond to the views that are physically generated by the display hardware.
  • Final views are sampled from the slice volume (for example, using texture mapping). The number of final views may be different than the number of rendered views.
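The sheared projection matrices mentioned above can be illustrated with a simple orthographic shear: each rendered view direction offsets x in proportion to depth. The row-major 4x4 layout and the shear-by-tangent-of-angle convention are assumptions for illustration, not the display hardware's actual matrices.

```python
import math

def sheared_projection(view_angle_deg):
    """Orthographic projection sheared along x by the tangent of the
    view angle; one such matrix is built per rendered view direction."""
    s = math.tan(math.radians(view_angle_deg))
    return [[1.0, 0.0,   s, 0.0],   # x' = x + s*z (shear toward the view)
            [0.0, 1.0, 0.0, 0.0],
            [0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 0.0, 1.0]]

def project(matrix, point):
    """Apply the upper three rows of a 4x4 matrix to a 3-D point."""
    x, y, z = point
    v = (x, y, z, 1.0)
    return tuple(sum(m * c for m, c in zip(row, v)) for row in matrix[:3])
```

At a zero view angle the matrix is the identity; at 45 degrees a point one unit deep is displaced one unit in x, which is the shearing that distinguishes one rendered view from the next.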
  • Rendering tetrahedra requires special treatment because, at the time of writing, GPUs lack native volumetric rendering support.
  • SpatialGL wraps an efficient volume rendering implementation such as ray casting.
  • image formatting can be different. Because Stereographics' lenticular display interfaces via digital visual interface (DVI), it does not require special formatting hardware (such as the Voxel Engine in Actuality Systems, Incorporated's Core Rendering Electronics). However, the distribution of pixels and views is somewhat irregular, and requires a reformatting process known as “interzigging.” Additionally, view anti-aliasing can occur during this step. On the other hand, Actuality Systems' holovideo display was designed to use the same Core Rendering Electronics as Perspecta, and can share the same implementation.
  • SpatialGL is display-agnostic, and can also be used for non-3-D displays. Examples include tiled display walls, displays with heterogeneous interfaces (e.g., the Sunnybrook HDR LCD and foveated-resolution displays), and displays with unusual geometries (e.g., dome-, sphere- or cube-shaped displays).
  • SpatialGL may also target a standard 2-D display such as a desktop cathode ray tube (CRT) or liquid crystal display (LCD). This would allow the use of SpatialGL programs on standard computer hardware without an exotic display configuration. For the most part, rendering for these displays only requires changes in the image reformatting stage, and minor changes elsewhere.
  • FIG. 6 is an exemplary process flow diagram of a command from a ported application.
  • a ported application 112 renders a scene by issuing function calls via the compatibility module 114 to the GLAS Extended API (similar to OpenGL). These API calls specify features of the scene, such as texture images, the position and shape of primitive elements (such as lines, points, and triangles), and the mappings between elements, textures, and colors.
  • the GLAS extended stub library 612 receives the API calls and issues them to the GLAS translation library 412 .
  • the GLAS translation library manages the OpenGL state machine to interpret the GLAS Extended API calls. Interpreted calls are translated into SpatialGL API calls.
  • Legacy applications invoke a similar command flow.
  • a legacy application 110 renders a scene by issuing function calls via the compatibility module 114 to the GLAS ghost API (similar to OpenGL).
  • the GLAS ghost stub library 613 receives the API calls and reformats the scene in preparation for translation to SpatialGL.
  • the stub library may apply a heuristic that inspects the ghost API calls to estimate the intended depth center of the scene. This additional information is passed to the GLAS translation library 412 , along with the API calls generated by the legacy application. Interpreted calls are translated into SpatialGL API calls.
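One such heuristic might average the depths of vertices submitted through the ghost API, as in this sketch. The `(name, args)` call representation and the choice of a simple average are illustrative assumptions, not the actual GLAS stub-library heuristic.

```python
def estimate_depth_center(ghost_calls):
    """Estimate the intended depth center of a legacy scene by
    averaging the z coordinates of submitted vertices; non-vertex
    calls (colors, textures, etc.) are ignored."""
    depths = [args[2] for name, args in ghost_calls if name == "Vertex3f"]
    if not depths:
        return 0.0          # no geometry yet: assume the display center
    return sum(depths) / len(depths)
```

The estimate is passed alongside the translated API calls so the translation library can center the legacy scene within the display volume.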
  • the SpatialGL client library 308 directs the API calls to the volume manager 310 , along with association information to identify the software application instance that generated the commands (the ported application).
  • the volume manager 310 modifies the API call stream. For example, it may map the application's rendering output to a specific portion of the graphics volume generated by the display device. It may also apply an overlay to the rendering output, such as a sprite that marks the location of a 3-D pointer.
  • the core rendering client library 312 marshals the API calls and transmits them (for example, using Spatial Transport Protocol 314 ) to the server execution environment.
  • the core rendering software instantiated in the server execution environment receives and unmarshals API calls.
  • the core rendering server 604 operates rendering algorithms, based on the API calls.
  • the rendering algorithms are defined by a renderer module 606 .
  • the rendering algorithms cause rendered image bitmaps to be moved into the spatial frame-buffer 611 , using the voxel engine driver 610 .
  • graphics libraries allow multiple client applications to share access to a single rendering server (via, for example, the Windows Graphics Library [WGL] for OpenGL on Windows).
  • the SVE volume manager provides this meta-service for the SpatialGL API.
  • the SpatialGL API is also designed to allow a single client application to access multiple servers. Often, this will be to provide multiple views to an application (e.g., standard 2-D view with a Perspecta volumetric view).
  • SpatialGL objects can be referenced by Uniform Resource Locators (URLs). These URLs may represent local resources or shared resources. Exemplary implementations of the present invention can distribute different parts of the rendering pipeline to different servers that may specialize in various tasks.
  • a simple configuration that may be implemented by the network layer 202 of the architecture described previously herein includes having a host computer attached directly to a display (e.g., a spatial display).
  • the client application on the host computer may open multiple contexts (virtual servers) that are shared on the display.
  • Another configuration that may be implemented by exemplary embodiments of the present invention is a typical client/server configuration. In this case, multiple host computers are attached to a display over a network and client applications on different host computers can each open multiple contexts (e.g., via the SpatialGL API) that are shared on the display.
  • Other exemplary configurations include a buffered display configuration with a host computer that is attached to a SpatialGL rendering server. In this configuration the rendering server can only perform rendering, and does not actually display images. When the client application sends scenes to the rendering server, they are rendered and stored. The stored results can be played back on a display at a later time.
  • FIGS. 7-10 depict further example configurations that may be implemented by the network layer 202 of the architecture described previously herein.
  • multiple host computers 702 are attached to various displays 704 over one or more networks 706 .
  • the displays 704 may be a mixture of various types (e.g., 2-D and 3-D).
  • the host computers 702 discover displays 704 through the Domain Name System (DNS).
  • the host computer systems 702 include one or more graphics applications that communicate with one or more displays 704 via the network using an API (e.g., the SpatialGL API).
  • the network 706 may be implemented by any type of known network including, but not limited to, a wide area network (WAN), a local area network (LAN), storage area network (SAN), a global network (e.g. Internet, cellular), a virtual private network (VPN), and an intranet.
  • the network 706 may be implemented using a wireless network or any kind of physical network implementation.
  • FIG. 8 depicts a system that may be implemented by exemplary embodiments of the present invention.
  • the configuration depicted in FIG. 8 includes a dedicated asset server 802 .
  • a host computer is attached to a SpatialGL asset server 802 , a SpatialGL rendering server 804 and a display on a thin client 806 .
  • the client application accesses assets from the asset server 802 by URL. These assets are forwarded to the rendering server 804 (which may have the asset locally cached).
  • the client application sends SpatialGL commands to the rendering server 804 , which renders the SpatialGL scenes (also referred to herein as graphic display data) and sends the results to the display.
  • the assets are very large compared to the bandwidth between the host computer and the SpatialGL server (e.g., the host/display is a doctor's home PC connected over the Internet, and the asset server 802 is a SpatialGL/DICOM bridge that creates SpatialGL volumetric textures from a hospital's picture archiving and communication system (PACS)).
  • FIG. 9 depicts a system that may be implemented by exemplary embodiments of the present invention.
  • the configuration depicted in FIG. 9 may be referred to as a remote rendering implementation.
  • a host computer is in communication with a SpatialGL rendering server 804 and a display server through a network.
  • the host computer and display server are located on the same machine (e.g., the host/display is a thin client 806 such as a tablet PC or cellular telephone) and the rendering algorithms are performed remotely by the rendering server 804.
  • the rendering server may access the asset server 802 via the network 706 .
  • the client application, located on the thin client 806 sends SpatialGL commands to the rendering server 804 .
  • the rendering server 804 then renders the SpatialGL scenes and sends the results to the SpatialGL display on the thin client 806.
  • FIG. 10 depicts a system that may be implemented by exemplary embodiments of the present invention to provide distributed rendering.
  • Multiple host computers 702 are attached to various SpatialGL rendering servers 1004 and displays 1002 over a network 706.
  • Client applications from the host computers 702 send SpatialGL commands to the pool of rendering servers 1004 that load-balances and distributes the rendering tasks amongst themselves, possibly through an arbiter.
  • Scenes are rendered in parallel across available and easily accessible rendering servers 1004 and sent to the appropriate displays 1002 .
  • Useful (though not mutually exclusive) distributions of rendering servers 1004 include workstations that perform SpatialGL rendering during idle cycles and centrally deployed clusters of dedicated rendering servers 1004. The latter case is particularly powerful when combined with dedicated asset servers 802.
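The load balancing among pooled rendering servers might follow a greedy least-loaded policy, as in this sketch. The per-task cost model, the tie-breaking order, and all names are illustrative assumptions; a real arbiter would use live load reports from the servers.

```python
def assign_rendering_tasks(tasks, servers):
    """Distribute rendering tasks across a pool of rendering servers.

    Tasks are (name, cost) pairs; each task goes to whichever server
    currently carries the least total cost, largest tasks first.
    """
    load = {s: 0.0 for s in servers}
    assignment = {s: [] for s in servers}
    for task, cost in sorted(tasks, key=lambda tc: -tc[1]):
        target = min(servers, key=lambda s: load[s])   # least-loaded server
        assignment[target].append(task)
        load[target] += cost
    return assignment
```

With three scenes of cost 3, 2, and 2 spread over two servers, the heaviest scene lands on one server and the two lighter scenes on the other, keeping the pool roughly balanced.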
  • asset resources (e.g., asset server 802) and/or computation resources (e.g., rendering server 804) may create input to one or more display resources (e.g., displays 704).
  • Particular use-cases and specific examples that may be implemented using the configurations described above include, but are not limited to:
  • a customer's PC connected directly to a 3-D display: for example, a mechanical engineer running SolidWorks application software on a desktop IBM PC workstation, containing elements of the Spatial Visualization Environment, in physical connection to a multiplanar three-dimensional display over gigabit Ethernet.
  • One or more mobile devices with rendered 3-D graphics on a 2-D display (i.e., 3-D-on-2-D) that call upon remote rendering computational horsepower to draw animated 3-D graphics for games.
  • One or more mobile devices with rendered 3-D graphics on a 3-D display (i.e., a cellphone with a lenticular 9-view autostereoscopic 3-D display) that call upon remote rendering computational horsepower.
  • Exemplary embodiments of the present invention allow applications to have access to a wider variety of assets, rendering algorithms and displays.
  • the assets, rendering algorithms and displays may be located in the same geographic location or located in separate geographic locations and in communication via a network.
  • the embodiments of the invention may be embodied in the form of hardware, software, firmware, or any processes and/or apparatuses for practicing the embodiments.
  • Embodiments of the invention may also be embodied in the form of computer program code containing instructions embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other computer-readable storage medium, wherein, when the computer program code is loaded into and executed by a computer, the computer becomes an apparatus for practicing the invention.
  • the present invention can also be embodied in the form of computer program code, for example, whether stored in a storage medium, loaded into and/or executed by a computer, or transmitted over some transmission medium, such as over electrical wiring or cabling, through fiber optics, or via electromagnetic radiation, wherein, when the computer program code is loaded into and executed by a computer, the computer becomes an apparatus for practicing the invention.
  • computer program code segments configure the microprocessor to create specific logic circuits.

Abstract

A system for displaying graphical information. The system includes an asset server for storing information and a rendering server in communication with the asset server. The rendering server receives a graphics command and renders graphic display data in response to the graphics command and to the information. The rendering server is independently addressable from the asset server.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application claims the benefit of the filing date of U.S. provisional patent application Ser. No. 60/586,327, filed Jul. 8, 2004, the contents of which are herein incorporated by reference.
  • GOVERNMENT RIGHTS
  • The U.S. Government may have certain rights in this invention pursuant to Grant No. 70NANB3H3028 awarded by the National Institute of Standards and Technology (NIST).
  • BACKGROUND
  • The present disclosure relates generally to imaging and visualization, and, more particularly, to an architecture for rendering graphics on output devices over diverse connections. Example output devices are two-dimensional displays, three-dimensional displays such as volumetric, multi-view, and holographic displays, and two- and three-dimensional printers.
  • Three-dimensional (3-D) information is used in a variety of tasks, such as radiation treatment planning, mechanical computer-aided design, computational fluid dynamics, and battlefield visualization. As computational power and the capability of sensors improve, the user is forced to comprehend more information in less time. For example, a rescue team has limited time to discover a catastrophic event, map the structure of the context (i.e., a skyscraper), and deliver accurate instructions to team members. Just as an interactive computer screen is better than a paper map, a spatial 3-D display offers rescue planners the ability to see the entire scenario at once. The 3-D locations of the injured are more intuitively known from a spatial display than from a flat screen, which would require rotating the “perspective view” in order to build a mental model of the situation.
  • Display technologies now exist which are designed to cope with these large datasets. Spatial 3-D displays (e.g., Actuality Systems Inc.'s Perspecta® Spatial 3-D Display) create imagery that fills a volume of space—such as inside a transparent dome—and that appears 3-D without any cumbersome headwear.
  • It is expected that a variety of spatial displays will come into existence in the near future. Furthermore, software applications will emerge that will exploit the unique properties of spatial displays. In order to allow every type of display to be compatible with every application, a standard is needed which dictates how (electronically and with what protocol) spatial 3-D information is transmitted to the display device. In addition, software applications and display devices that are not specialized for spatial 3-D rendering will continue to be utilized. Many customer computer environments will contain a mix of 3-D and non-3-D display devices and software applications. It would be desirable for application programmers to be able to write and execute a single application program to produce graphics on a variety of 3-D and non-3-D displays.
  • Further, modern graphics environments must solve the problem that the application software generally runs on separate hardware from the rendering algorithms. Since off-the-shelf personal computers (PCs) are not yet specialized for spatial 3-D rendering, the process separation is generally more complicated than sending the data across the peripheral component interconnect (PCI)-express bus. The Chromium architecture is a prior attempt to solve this problem. Chromium abstracts a graphical execution environment. However, the binding between an application, rendering resource and display is statically determined by a configuration file. Therefore, applications cannot address specific rendering resources. Current 3-D display architectures and applications cannot address remote or distributed resources. Such resources are necessary for displays where ready-made rendering hardware is not available for PCs.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Referring now to the figures, which are exemplary embodiments and wherein like elements are numbered alike:
  • FIG. 1 depicts an overview of an architecture that may be implemented by exemplary embodiments of the present invention;
  • FIG. 2 depicts a more detailed view of an architecture that may be implemented by exemplary embodiments of the present invention;
  • FIG. 3 is a block diagram of an exemplary spatial graphics language implementation;
  • FIG. 4 is a block diagram of an exemplary compatibility module structure;
  • FIG. 5 is a block diagram of an exemplary rendering module;
  • FIG. 6 is an exemplary process flow diagram for processing a command from a ported application;
  • FIG. 7 depicts a system that may be implemented by exemplary embodiments of the present invention;
  • FIG. 8 depicts a system that may be implemented by exemplary embodiments of the present invention;
  • FIG. 9 depicts a system that may be implemented by exemplary embodiments of the present invention; and
  • FIG. 10 depicts a system that may be implemented by exemplary embodiments of the present invention.
  • SUMMARY OF THE INVENTION
  • Exemplary embodiments of the present invention are directed to a system for displaying graphical information. The system includes an asset server for storing information and a rendering server in communication with the asset server. The rendering server receives a graphics command and renders graphic display data in response to the graphics command and to the information. The rendering server is independently addressable from the asset server.
  • Other exemplary embodiments of the present invention are directed to a method for displaying graphical information. The method includes receiving a graphics command at a rendering server. Information responsive to the graphics command is accessed. The information is located in an asset server that is separately addressable from the rendering server. Graphic display data is rendered in response to the graphics command and the information.
  • Further exemplary embodiments of the present invention are directed to an architecture for displaying graphical information. The architecture includes an asset resource layer for storing information and a rendering layer. The rendering layer receives a graphics command and renders graphic display data in response to the graphics command and to the information. The rendering layer is independently addressable from the asset resource layer.
  • Still further exemplary embodiments of the present invention include a computer program product for displaying graphical information. The computer program product includes a storage medium readable by a processing circuit for performing a method. The method includes receiving a graphics command at a rendering server. Information responsive to the graphics command is accessed. The information is located in an asset server that is separately addressable from the rendering server. Graphic display data is rendered in response to the graphics command and the information.
  • DETAILED DESCRIPTION
  • Exemplary embodiments of the present invention include a spatial 3-D architecture to support separate asset servers and rendering servers in a graphics environment. The architecture also has a spatial visualization environment (SVE), which includes a 3-D rendering API and a display virtualization layer that enables application developers to universally exploit the unique benefits (such as true volumetric rendering) of 3-D displays. The SVE supports the cooperative execution of multiple software applications. As part of the SVE, a new API is defined, referred to herein as the spatial graphics language (SpatialGL), to provide an optional, display-agnostic interface for 3-D rendering. SpatialGL is a graphical language that facilitates access to remote displays and graphical data (e.g., rendering modules and assets). The architecture further has core rendering software, which includes a collection of high-performance rendering algorithms for a variety of 3-D displays. The architecture also includes core rendering electronics, including a motherboard that combines a graphics processing unit (GPU) with a 64-bit processor and double-buffered video memory to accelerate 3-D rendering for a variety of high-resolution, color, multiplanar and/or multiview displays. Many of today's 3-D software applications use the well-known OpenGL API. To provide compatibility with those applications, exemplary embodiments of the present invention include an OpenGL driver for the Actuality Systems, Incorporated Perspecta Spatial 3-D Display product. Embodiments of the Perspecta Spatial 3-D Display product are described in U.S. Pat. No. 6,554,430 to Dorval et al., of common assignment herewith.
  • Currently, a volume manager is available to manage cooperative access to display resources from one or more simultaneous software applications (see for example, U.S. Patent Application No. 2004/0135974 A1 to Favalora et al., of common assignment herewith). Current implementations of the volume manager have asset and rendering resources that are not abstracted separately from the display. The display rendering and storage system are considered as a single concept. Therefore, the display and rendering system must be designed together. Effectively, the display must be designed with the maximum image complexity in mind. Exemplary embodiments of the SVE, as described herein, remove this restriction by providing separately named asset, computation (rendering), and display resources. Unlike other rendering systems, the application has the flexibility to combine these resources by addressing each one independently. These resources may be independently addressed, and therefore may be located in one or more servers and accessed via one or more networks. In addition, these resources (e.g., two or more computation resources) may be combined to create output for a single graphics display. The resources may also be located in different geographic locations (e.g., different rooms in the same building, different cities, different countries) and in communication via a network.
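By way of illustration, the independent addressing of asset, computation (rendering), and display resources described above may be sketched as follows. The class and method names (ResourceAddress, Session, bind_asset, etc.) and the host names are hypothetical and are not part of the SVE itself; the sketch only shows an application combining separately named resources for a single display.

```python
class ResourceAddress:
    """One separately addressable resource: asset, rendering, or display."""
    def __init__(self, kind, host, name):
        self.kind, self.host, self.name = kind, host, name

    def __repr__(self):
        return f"{self.kind}://{self.host}/{self.name}"


class Session:
    """Combines independently addressed resources for a single display."""
    def __init__(self):
        self.assets = []
        self.renderers = []
        self.display = None

    def bind_asset(self, addr):
        self.assets.append(addr)

    def bind_renderer(self, addr):
        # Two or more computation resources may be combined to
        # create output for one graphics display.
        self.renderers.append(addr)

    def set_display(self, addr):
        self.display = addr


session = Session()
session.bind_asset(ResourceAddress("asset", "assets.example.net", "heart-ct"))
session.bind_renderer(ResourceAddress("render", "cluster-a.example.net", "node1"))
session.bind_renderer(ResourceAddress("render", "cluster-b.example.net", "node7"))
session.set_display(ResourceAddress("display", "lab.example.net", "perspecta0"))
```

Because each resource is named independently, the two rendering nodes above may reside in different geographic locations and still feed the single bound display.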
• FIG. 1 depicts an overview of an architecture that may be implemented by exemplary embodiments of the present invention. One or more means is also provided for interfacing one or more central applications with local, remote or distributed rendering or display systems, and for interfacing external databases with a rendering system.
  • The architecture depicted in FIG. 1 includes four layers, an application software layer 102, an SVE layer 104, a rendering architecture layer 106 and a display-specific rendering module layer 108. The application software layer 102 includes legacy applications 110, ported applications 112 and native applications 116. The legacy applications 110 and the ported applications 112 are written to the OpenGL API and converted into the SpatialGL API 118 by the OpenGL compatibility module 114 in the SVE layer 104. OpenGL and SpatialGL are examples of API types. Exemplary embodiments are not limited to these two types of APIs and may be extended to support any graphics APIs such as the Direct3D API. The native applications 116 are written to the SpatialGL API 118 which is in communication with the volume manager 120. The rendering architecture layer 106 depicted in FIG. 1 includes core rendering software (CRS) 122, which is a device independent management layer for performing computations/renderings based on commands received from the SpatialGL API 118 and data in the volume manager 120. The display-specific rendering module layer 108 includes a Perspecta rendering module 124 for converting data from the CRS 122 for output to a Perspecta Spatial 3-D Display and a multiview rendering module 126 for converting data from the CRS 122 into output to other 3-D and 2-D display devices.
• Unlike prior architectures, the architecture depicted in FIG. 1 transforms commands (e.g., graphics commands) from several API types into a single graphical language, SpatialGL. This permits the architecture to provide consistent access to display and rendering resources to both legacy and native application software. This is contrasted with the currently utilized device-specific rendering drivers, where each driver manages rendering hardware, visual assets (display lists, textures, vertex buffers, etc.), and display devices. The architecture depicted in FIG. 1 includes a rendering architecture layer 106, a device-independent management layer that contains the core rendering software 122. This rendering architecture layer 106 gives the graphics language (SpatialGL 118) access to diverse, high-level resources, such as multiple display geometries, rendering clusters and image databases. Each class of resources is enabled by an independent module: asset (e.g., volume manager 120), computational (e.g., core rendering software 122), and display (e.g., Perspecta rendering 124 and multiview rendering 126).
• FIG. 2 depicts a more detailed view of an architecture that may be implemented by exemplary embodiments of the present invention. The SVE layer 104 includes a collection of compatibility strategies between emerging displays and application software. One aspect of the SVE provides software applications with compatibility across diverse display types through the SpatialGL and OpenGL APIs. The SVE concept extends in three additional directions: application software development can be accelerated by producing higher-level graphical programming toolkits; a spatial user interface (UI) library can provide applications with a consistent and intuitive UI that works well with 3-D displays; and a streaming content library allows the SVE to work with stored or transmitted content. This may be utilized to enable “appliance” applications and “dumb terminals.”
  • In addition, the SVE is a display-agnostic and potentially remote-rendering architecture. The SVE can communicate with 2-D and very different 3-D displays (multiplanar, view-sequential, lenticular, stereoscopic, holographic). The rendering server does not need to be local to the display(s).
  • The CRS 122 is a collection of rendering strategies. The cost of implementing a rendering engine for a new display geometry breaks down into a system integration effort and an algorithm implementation effort. CRS 122 eliminates the system integration effort by providing a portable communication framework to bridge the client and server domains and by abstracting computation assets. The CRS 122 creates output for a Perspecta rendering module 124, a multiview rendering module 126 and can be tailored to create output for future rendering modules 206. In addition, the architecture depicted in FIG. 2 may be utilized to support future graphics display architectures and third-party architectures 210.
• The spatial transport protocol (STP) describes the interaction between the Spatial Visualization Environment and Core Rendering Software. The spatial transport protocol comprises a set of commands. The STP may optionally comprise a physical definition of the bus used to communicate STP-formatted information. The STP commands are divided into several groups. One group of commands is for operating the rendering hardware and frame buffer associated with the display. Another group of commands is for synchronizing the STP command stream with events on the host device, rendering hardware and frame buffer. Another group of commands is for operating features specific to the display hardware, such as changing to a low-power mode or reading back diagnostic information.
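The three command groups above can be sketched as a simple classification table. The individual command names below are illustrative assumptions; the actual STP opcode set is not enumerated in this description, only its grouping.

```python
from enum import Enum


class StpGroup(Enum):
    RENDER = 1    # operate the rendering hardware and frame buffer
    SYNC = 2      # synchronize the STP stream with host/hardware events
    DEVICE = 3    # display-specific features (power modes, diagnostics)


# Hypothetical opcode-to-group table; command names are assumptions.
STP_COMMANDS = {
    "draw_elements":    StpGroup.RENDER,
    "swap_buffer":      StpGroup.RENDER,
    "fence":            StpGroup.SYNC,
    "wait_host_event":  StpGroup.SYNC,
    "set_low_power":    StpGroup.DEVICE,
    "read_diagnostics": StpGroup.DEVICE,
}


def group_of(command):
    """Classify an STP command into one of the three groups."""
    return STP_COMMANDS[command]
```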
  • Different streams of graphics commands from different applications may proceed through the architecture to be merged into a single STP stream. Due to multitasking, the STP is able to coherently communicate overlapping streams of graphics commands. STP supports synchronization objects between the applications (or any layer below the application) and the display hardware. The application level of the system typically generates sequential operations for the display drivers to process. Graphics commands may be communicated with a commutative language. For efficiency, the display hardware completes the commands out of order. Occasionally, order is important; one graphics operation may refer to the output of a previous graphics operation, or an application may read information back from the hardware, expecting to receive a result from a sequence of graphics operations.
  • Application Layer
  • Exemplary embodiments of the SVE include a 3-D rendering API and display virtualization layer that enables application developers to universally exploit the unique benefits (such as true volumetric rendering) of 3-D displays. It consists of several subsystems: SpatialGL 118, OpenGL compatibility module 114, streaming content library and volume manager 120. Future development may expand SVE to include scene-graph, rendering engine and application-specific plug-in subsystems.
  • SpatialGL
• Just as OpenGL API implementations are video-card-specific, implementations of the SpatialGL API 118 are display- or output-device-specific. Examples of “targets” for SpatialGL implementations are: 2-D displays, volumetric displays, view-sequential displays, and lenticular multi-view displays. Exemplary embodiments of the SVE can communicate with a broad range of output devices whose underlying physics are quite different.
• FIG. 3 is a block diagram of an exemplary SpatialGL implementation that may be utilized by exemplary embodiments of the present invention. The blocks include a NativeApp block 116, which is written to take full advantage of spatial displays by using the SpatialGL API. The NativeApp block 116 may transmit data to the client 308, SpatialEngine 304, SceneGraph 306 and the volume manager 310. In alternate exemplary embodiments, applications can also take advantage of higher-level APIs such as SceneGraph 306 and SpatialEngine 304 from Actuality Systems, Incorporated. SceneGraph 306 provides an interface for encoding scene graphs in SpatialGL. SceneGraph 306 implements features such as assembling shapes into objects, transforming the positions of objects, and animation nodes. The SpatialEngine 304 implements high-level functions such as drawing volumes and overlaying scene-graphs. SpatialEngine 304 is extensible. For example, an OilToolkit can be added, which adds functions such as: draw porosity volume, overlay drill path and animate path plan.
• As depicted in FIG. 3, SpatialGL is input to the client 308. In exemplary embodiments of the present invention, the native API, or SpatialGL API, provides an object-oriented front-end to the STP byte code. The SpatialGL API exposes features such as, but not limited to: define fragment program, define vertex program, bind geometry source, bind texture source, swap buffer and synchronize. The client 308 sends SpatialGL commands to the volume manager 310. The SpatialGL commands may include commands for retrieving persistent objects to be displayed on a graphical display device. The persistent objects include, but are not limited to, 2-D and 3-D textures and vertex buffers. The persistent objects may be stored on one or more of a database, a storage medium and a memory buffer. In addition, the SpatialGL commands may include commands for retrieving display nodes to be displayed on a graphical display device. Display nodes refer to an instance of any display that can be individually referenced (e.g., a Perspecta display, a 2-D display). STP commands from the volume manager 310 are sent to the core rendering client 312. The core rendering client 312 is the first computation resource available to the STP execution environment. Early data-reducing filter stages can also execute here. Stream compression and volume overlay are processes that may be assigned computation resources at this point. The core rendering client 312 formats the remainder of the filter graph to take into account the physical transport layer 314 between the core rendering client 312 and the core rendering server. At the STP interpreter block 316, API calls are converted into STP. Each STP node is a computation resource. STP procedures get bound to STP nodes as the program is processed. The node executes any procedure that has been bound to it by a previous node.
  • Spatial Transport Protocol may be converted for persistent storage and written to a disk. This can be accomplished by storing the serialized Spatial Transport Protocol byte code to disk, along with a global context table. The global context table allows context-specific assets to be resolved when the STP file is later read back from disk. The global context table establishes correspondences between local context handles referenced by the STP byte code and persistent forms of the referenced data. For example, a STP byte code may reference texture image number 5. The texture image number is associated with specific data in the original local context of the byte code. When saved to disk, texture image number 5 is associated with a particular texture image by the local context table. This can be accomplished by storing in table position 5, a copy of the texture image, or by storing a GUID or URL that identifies a persistent source for the texture image.
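The persistent-storage scheme above (serialized STP byte code plus a global context table resolving local handles such as texture image number 5 to a GUID/URL or an inline copy) can be sketched as follows. The on-disk layout, field names and URN scheme are assumptions for illustration only; the actual file format is not specified here.

```python
import json
import os
import struct
import tempfile


def save_stp(path, byte_code, context_table):
    """Write STP byte code preceded by a global context table that resolves
    local context handles (e.g. texture image number 5) to a persistent
    form: a GUID/URL naming the data, or an inline copy of it."""
    header = json.dumps({str(k): v for k, v in context_table.items()})
    header = header.encode("utf-8")
    with open(path, "wb") as f:
        f.write(struct.pack("<I", len(header)))   # header length prefix
        f.write(header)
        f.write(byte_code)


def load_stp(path):
    """Read back the byte code and the context table used to resolve
    context-specific assets."""
    with open(path, "rb") as f:
        (hlen,) = struct.unpack("<I", f.read(4))
        table = {int(k): v for k, v in json.loads(f.read(hlen)).items()}
        return f.read(), table


path = os.path.join(tempfile.gettempdir(), "scene.stp")
save_stp(path, b"\x01\x05\x00", {5: "urn:texture:2fd6c1"})
code, table = load_stp(path)
```

When the file is later read back, handle 5 in the byte code is resolved through the table entry rather than through the original local context.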
  • Compatibility Module Structure
  • FIG. 4 is a block diagram of an exemplary compatibility module structure. Ported applications 112 and/or legacy applications 110 can provide input to the compatibility module structure depicted in FIG. 4. Ported applications 112 are applications originally written using OpenGL, but have been extended by programmers to interface with spatial displays. Legacy applications 110 are applications written with no knowledge of spatial displays or vendor APIs (e.g., Actuality Systems, Incorporated APIs). OpenGL support is provided through two dynamic link libraries. The main library is called the GLAS library 412. It provides drawing methods similar to the industry-standard OpenGL API, and also contains specialized initialization and state management routines for spatial displays. The GLAS library 412 converts OpenGL API calls into SpatialGL 118 calls. SpatialGL 118 is a low level graphics language utilized by exemplary embodiments of the present invention. The OGLStub library 414 exports an interface similar to the OpenGL32.dll system library 408. The behavior of the library can be customized on a per-application basis. The OGLStub library 414 intercepts and redirects OpenGL API calls in a customizable manner. Calls are optionally forwarded to the OpenGL32.dll system library 408 and/or the GLAS library 412 for translation.
• OpenGL is an industry-standard low-level 3-D graphics API for scientific and computer-aided design (CAD) applications. OpenGL supplies a language that expresses static information. The application must explicitly break down dynamic scenes into discrete frames and render each one. OpenGL expresses commands such as: input a vertex; draw a triangle; apply a texture; engage a lighting model; and show the new rendering.
• Referring to FIG. 4, OpenGL calls are duplicated for both the system library 408 (to render on the 2-D monitor) and for the GLAS library 412. By default, the first scene is analyzed to determine the depth center of the application's implied coordinate system. Since the depth center is not known until the first swap-buffers call, it may take until the second scene for the image in Perspecta to render properly.
• The first scene is analyzed to determine the depth center of the application's coordinate system. Once the depth center is calculated, a fix-up transform is calculated. This transform is applied consistently to the projection specified by the application, so that the application's further transformations of the projection (such as scaling and zooming) are reflected properly in the spatial rendering. After the depth center is determined, the Stub library 414 issues a redraw call to the application to ensure that the first scene is drawn properly in Perspecta.
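The depth-centering fix-up above can be illustrated with a minimal sketch. The particular strategy shown (taking the midpoint of the first scene's post-projection z range and translating it to the display's mid-depth) is an assumption; the description above does not fix the exact formula, only that a fix-up transform is derived from the first scene and applied consistently thereafter.

```python
def depth_center(depths):
    """Depth center implied by the first scene's post-projection z values."""
    return (min(depths) + max(depths)) / 2.0


def fixup_translation(center, display_mid=0.5):
    """z-translation moving the implied center to the display's mid-depth.
    Applied consistently to each later projection, so application-driven
    scaling and zooming remain correct."""
    return display_mid - center


# First scene: gather depths, derive the fix-up once, reuse it afterwards.
scene_depths = [0.2, 0.4, 0.9]
dz = fixup_translation(depth_center(scene_depths))
recentred = [z + dz for z in scene_depths]
```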
  • The two main configurations are “ghost mode” 406 and “extended mode” 410. Ghost mode 406 automatically duplicates OpenGL calls for both the system library 408 and for the GLAS library 412. In ghost mode 406, depth centering is based on x and y scale and centered to get the majority of the vertices within the display. Ghost mode 406 provides an unextended OpenGL interface and attempts to make a spatial display appear as a 2-D display to the application. Extended mode 410 allows the application to control the call forwarding behavior. Extended mode 410 exposes an extended OpenGL interface. A few commands are added to help the application control a spatial display separately from a 2-D display. Example commands include: create a context for a spatial display and draw to a spatial display context. Output from the GLAS library 412, in SpatialGL, is sent to the client 308 and then to the volume manager 310. The volume manager 310 assigns display resources. It filters the STP stream to reformat the data according to the display resource assigned to the given context. The core rendering block 312, which contains the mechanisms for decoding and executing procedures in the STP language, receives STP commands.
• The configuration is controllable for each application, based on a central control repository. Parameters that may be configured include, but are not limited to: context selection strategy (allows the controller to change the context selection while the application is running); projection fix-up strategy that overrides the projection that the application specifies, in order to fit the image in the actual display geometry; texture processing strategy; context STP preamble (e.g., resolution hints); and scene STP preamble.
  • Some spatial displays physically realize view-dependent lighting effects. In this case, lighting is calculated based on the actual view directions, rather than the master direction given by the projection matrix.
  • Specific rasterization constraints and rules can only be specified relative to the unique geometry of each display type. In general, only fragments that intersect the projection of an element into the display's native coordinate system may be lit. When rendering polygons, elements must not contain holes. When rendering connected polygons where exact vertex positions are shared, the rendered figure must not contain holes.
  • When anti-aliasing is used, the partial ordering of the color value of the fragments must agree with the partial ordering of the intersection (area or length) between the fragment and pixels of the display's native coordinate system, when normalized for variation in area or volume of the pixels.
  • Streaming Content Library
• The streaming content library permits spatial stream assets. A spatial stream asset is a time-varying source of spatial imagery. Optionally, the spatial stream may be synchronized with one or more audio streams. A spatial stream may consist of a real-time stream, a recorded stream, or a dynamically generated stream. An example of a real-time spatial stream is a multi-view stream that is fed from an array of cameras. An example of a recorded stream is a spatial movie stored on a removable disk. An example of a dynamically generated stream is a sequence of dynamically rendered 3-D reconstructions from a PACS database.
  • Each stream is associated with a spatial codec. The intended interpretation of the stream is determined by the associated spatial codec. The spatial codec is comprised of a stream encoding specification and a reconstruction specification. The stream encoding specification determines the mapping from the raw binary stream to a time-varying series of pixel arrays. The stream encoding specification may also identify an audio stream, synchronized with the pixel arrays. The reconstruction specification determines the intended mapping from pixel arrays to physical light fields. Examples of stream encoding specifications include MPEG coded representations. The reconstruction specification can be defined using the persistent form of Spatial Transport Protocol.
  • A client of the streaming content library receives the raw binary stream and the spatial codec. The client proceeds to reconstruct an approximation of the intended physical light field, by calculating pixel arrays using the stream encoding specification. At each time step, the client consumes one or more pixel arrays and interprets an intended light field, using the reconstruction specification. The intended light field is rendered into the local display's specific geometry, using Core Rendering Software. Finally, Core Rendering Software moves the rendered image into the spatial frame buffer, causing the display to generate a physical manifestation of a light field.
  • The streaming content library includes spatial stream asset servers. Each spatial stream asset is published by a spatial asset server. An asset server may publish one or more streams, each with a unique URL. A software application using SpatialGL (such as a spatial media player) can call up a particular spatial stream asset using its associated URL.
  • Spatial stream assets may be transmitted with unidirectional signaling: for example several TV channels may be jointly used to transmit a multi-view recording. In this case, the spatial codec can be continuously or periodically transmitted. Spatial content may also be broadcast with bidirectional signaling: for example, a spatial movie may be downloaded from an Internet-based asset server and viewed using a spatial media player using SpatialGL. In this case, the client could potentially negotiate an optimal spatial codec to match the client's display geometry. Bidirectional signaling can also be used to allow a client to remotely control a dynamically generated stream. For example, a client may continuously send updates to a server about the desired view direction and region of interest, while the server continuously returns rendered images to the client through the streaming library. Alternately, a client may receive notifications from the spatial stream asset server when new data is available. Based on the notifications, the client may choose to download and render the new data or else the client may skip the new data. When receiving a notification, the client may decide whether to download or skip the new data, based on factors such as the currently available buffer space, communication bandwidth, processing power, or desired level of image quality.
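The notification-driven download-or-skip decision described above can be sketched as a simple policy function. The factor names, thresholds and units below are illustrative assumptions; the description only states that the client may weigh buffer space, bandwidth, processing power and desired image quality.

```python
def should_download(buffer_free_frames, bandwidth_mbps, frame_size_mb,
                    frame_interval_s, want_high_quality=True):
    """Decide whether to download newly announced data or skip it."""
    if buffer_free_frames < 1:
        return False                      # no room to buffer the new data
    transfer_time = frame_size_mb * 8.0 / bandwidth_mbps
    if transfer_time > frame_interval_s:  # cannot keep up at full rate
        # Only fall behind deliberately when quality is preferred and
        # there is buffer headroom to absorb the delay.
        return want_high_quality and buffer_free_frames > 2
    return True
```

For example, a client with four free frame buffers on a gigabit link easily keeps up with 2 MB frames at 30 Hz, while the same client on a 10 Mb/s link would skip unless it prefers quality over latency and has spare buffers.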
  • Pseudo-Code Reconstruction Specification for a Multi-View Stream
    • Define n views V1, . . . Vn, each comprised of a projection Pi and an aperture Qi
    • For each time step t
      • For each view Vi,
        • Render a plane, textured with pixel array t*n+i, using projection Pi
        • Render aperture Qi
      • Swap the rendered image into the active frame buffer
• Pseudo-Code Reconstruction Specification for a Volumetric Stream
    • Define a local 3-D texture asset T
    • For each time step t
      • For each pixel array i in 1 . . . n
        • Load pixel array i into slice i of texture T
        • Render a solid cube, textured with T
      • Swap the rendered image into the active frame buffer
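The volumetric reconstruction specification above can also be expressed as an executable sketch. Here pixel arrays are stood in by plain lists, and the texture upload, cube render and buffer swap are collapsed into recording one completed frame; a real implementation would issue these steps through the display's rendering pipeline.

```python
def reconstruct_volumetric(stream, n_slices):
    """Consume n_slices pixel arrays per time step into a local 3-D
    texture T, then 'render' and 'swap' by recording the frame."""
    texture = [None] * n_slices          # local 3-D texture asset T
    frames = []
    it = iter(stream)
    while True:
        try:
            for i in range(n_slices):    # load pixel array i into slice i
                texture[i] = next(it)
        except StopIteration:
            break                        # discard any partial time step
        # Render a solid cube textured with T, then swap the rendered
        # image into the active frame buffer (modeled as recording it).
        frames.append(list(texture))
    return frames


frames = reconstruct_volumetric([[1], [2], [3], [4]], n_slices=2)
```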
• Network Layer
• In an exemplary embodiment of the present invention, the CRS 122 has the character of a slave peripheral and communication to the CRS 122 is limited to proprietary channels. Alternate exemplary embodiments of the CRS 122 have an expanded role as a network device: the CRS 122 can communicate with a host over a network, and it supports standard protocols for network configuration. The CRS 122 has both a client part and a server part.
  • In exemplary embodiments of the present invention, a host PC runs an application and is in communication with a single multiplanar 3-D display which contains an embedded core rendering electronics system. The client part is embodied on the host PC, while the server part is embodied on the core rendering electronics. CRS 122 is distinct from the SVE because the CRS 122 is meant primarily to provide a rendering engine for specific display types that is compatible with the display-independent graphical commands generated in the SVE.
  • The client side of the CRS 122 interfaces to the SVE using the STP language. STP is used to package and transport SpatialGL API calls. A core rendering client connects the volume manager 120 to the physical transport by acting as an STP interpreter. The core rendering client interpreter exposes procedures (with STP linkage) that allow an STP program to address specific servers. Exemplary embodiments of the present invention only function when a single server is present. Alternate exemplary embodiments of the core rendering client communicate with servers over a network, and are able to list and address the set of available servers.
  • The client also provides a boot service. This provides the boot-image used by the net-boot feature of the servers. The boot-image is stored in a file that can be updated by Perspecta Software Suite upgrade disks (or via web upgrade). The boot service can be enabled and disabled by the SVE. After the boot-image file is upgraded, the installer must enable the boot service to allow the display to update.
• In the current example, in which there is one host PC and one Perspecta display, all input to the system arrives through the gigabit Ethernet connections. The embedded system acts as a normal Internet Protocol (IP) device; it acts as a server, while the host PC acts as a client. In exemplary embodiments of the present invention, the client and server must be directly connected. In alternate exemplary embodiments of the present invention, clients and servers are connected through a gigabit switch. This configuration removes the requirement that the client PC contain two Ethernet controllers, and it allows multiple clients to connect to a single server. The server obtains an IP address using dynamic host configuration protocol (DHCP) (unless it has been configured to use a static address). Once an IP address has been obtained, the CRS 122 and the client must be made aware of the identity of the server. This is done by a symmetric system where a node (client or server) broadcasts a datagram when it starts. The node that starts first obtains the identity of the later node. If the server is started first and encounters a client datagram broadcast, it opens a connection to the client to communicate the server's identity. A client may simultaneously communicate with multiple servers. Each server may only communicate with a single client at a time.
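The symmetric discovery scheme above can be modeled without real sockets. In this logic-only sketch, starting a node broadcasts its identity to every node that started earlier; the class and attribute names are assumptions, and real datagram transport is deliberately omitted.

```python
class Node:
    """A client or server participating in broadcast discovery."""
    def __init__(self, name, role):
        self.name = name
        self.role = role            # "client" or "server"
        self.known_peer = None

    def start(self, network):
        # Broadcast a datagram at startup: every node that started
        # earlier hears it, so the earlier node learns the later one.
        for earlier in network:
            earlier.on_broadcast(self)
        network.append(self)

    def on_broadcast(self, later):
        # The node that starts first obtains the identity of the later node.
        if self.known_peer is None and later.role != self.role:
            self.known_peer = later.name
            if self.role == "server":
                # A server that hears a client broadcast opens a connection
                # back to communicate the server's own identity.
                later.known_peer = self.name


network = []
server = Node("perspecta0", "server")
server.start(network)
client = Node("host-pc", "client")
client.start(network)
```

After this exchange the client knows which server to open, and the server has learned its client, matching the server-started-first case described above.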
  • In alternate exemplary embodiments of the present invention, the servers have a user interface and policy for attaching to specific clients when more than one client is available. The CRS 122 provides a simple network management protocol (SNMP) interface to manage the network settings of the server. The SNMP interface configures the IP address, broadcast settings and security settings. The security settings include client allow and deny lists.
  • In exemplary embodiments of the present invention the host and client support a single gigabit Ethernet connection. In alternate exemplary embodiments, the host and client employ an additional protocol to support two gigabit Ethernet connections.
  • Once a client knows the identity of a server, the client may open the server. The client and server communicate through datagrams. The server is single-threaded; the client may only open a single connection to the server and it is guaranteed exclusive access to the entire display resource. Once the client has opened the server, it may begin transacting rendering commands. Rendering commands are moved between the client and server using a command stream and a remote memory protocol.
  • Since the network graphics service is meant to communicate only over a local network segment, a very low level of packet loss is expected. The details of the communication scheme can be arranged to ensure that the system degrades gracefully under packet loss. Device allocation and context creation must be guaranteed to operate correctly under packet loss. The bulk graphics data transfer is not protected, except that a frame that is rendered without packet loss must not be degraded by packet loss in previous frames. Persistent texture map data is protected against packet loss by a checksum and a failure/retry scheme.
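The checksum-with-retry protection for persistent texture data can be sketched as below. CRC-32 as the checksum, the transfer-callback shape and the retry count are assumptions; the description specifies only a checksum and a failure/retry scheme.

```python
import zlib


def send_texture(data, transfer, max_retries=3):
    """Send persistent texture data over a lossy channel.

    transfer(payload, checksum) returns the bytes the receiver got,
    possibly corrupted by packet loss; retry until the checksum matches.
    """
    checksum = zlib.crc32(data)
    for _ in range(max_retries):
        received = transfer(data, checksum)
        if zlib.crc32(received) == checksum:
            return received
    raise IOError("persistent texture transfer failed after retries")


# Usage: a channel that corrupts the first attempt, then succeeds.
attempts = []
def lossy_channel(payload, checksum):
    attempts.append(1)
    return b"\xff" + payload[1:] if len(attempts) == 1 else payload

result = send_texture(b"voxel-data", lossy_channel)
```

Note that, per the description, only persistent data gets this protection; bulk per-frame graphics data is left unprotected so the system degrades gracefully.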
  • Core Rendering Software (CRS)
  • CRS 122 uses the STP language as a form for communicating graphics commands and procedures. STP allows the interfaces between the major components of the Core Rendering Software system to be uniform. In the initial version of Core Rendering Software, STP serves as the inter-process-communication mechanism. STP is used to communicate a sequence of graphics commands from the client to the server. The initial version of STP will include conditional execution and parallel branching prototype features. In later versions of Core Rendering Software, modules will be written within the STP language, thus flattening the hardware-native part of the architecture. Conditional execution and parallel branching features will be optimized in later versions of Core Rendering Software.
  • Rendering Modules
  • FIG. 5 is a block diagram of an exemplary rendering module. The pipeline system structure, or pipeline framework, subsystem provides the generic structure that is common to rendering pipelines for the CRS 122. A rendering pipeline is implemented through a pipeline system class 502. A pipeline system class 502 is composed of a rendering pipeline and a fixed set of active objects. An active object models a device that can trade time for data movement or transformation, such as a bus, a GPU or a CPU. The pipeline system class 502 binds stages to scheduler threads 510 (i.e., to active objects). The scheduler thread 510 is the binding between stages and active objects.
• An instance of a pipeline 504 operates on a single input stream of homogeneous elements. An exemplary pipeline constructor includes: initialize first-in-first-out (FIFO) length; and initialize stage connections. As depicted in FIG. 5, fixed-length FIFOs 506 constrain the resource usage of the system.
  • Rendering pipelines are implemented as a series of stages 508 that communicate with tasks. A stage 508 is an algorithmic unit that transforms one stream of tasks into another stream of tasks. Although a stage 508 may be designed to be compatible with a specific active object 512, the binding to the active object 512 is external to the stage 508. For example, a stage 508 may implicitly require a binding with a GPU by making OpenGL calls, but it must not own or manipulate an OpenGL context.
• Stage objects have an unusually complicated life cycle. They are typically created in one thread but work in a second thread. The lifetime of a stage 508 consists of these distinct transitions: construction, initialization, execution, de-initialization, and destruction. A stage 508 transforms a stream of homogeneous elements. A stage 508 utilizes the resources of a single active object and executes several hundreds of times a second. The binding between a stage 508 and an active object 512 is external to the stage class. Therefore, a pipeline 504 may be considered a stage 508, but a pipeline system 502 may not be considered a stage 508. The remote active object 512 depicted in FIG. 5 models a thread of execution that exists outside of the CPU. Input to the active object 512 includes data from the GL context block 514 and the voxel engine context block 516.
• Task objects are not strongly structured, outside of their specific implementation domain. In exemplary embodiments of the present invention, the pipeline framework includes a Fence class, which is utilized to provide a main-stream synchronization pattern. A pipeline system 502 operates asynchronously from its enclosing system. The enclosing system can insert a Fence into the command stream of a pipeline 504. A pipeline passes a fence when all processing due to tasks issued before the fence has completed. The enclosing system can query whether the pipeline 504 has passed the fence, or it can block until the fence has been passed.
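A minimal sketch of the Fence pattern described above follows. The method names (signal, passed, wait) and the single-worker stand-in for a pipeline are assumptions; the actual Fence class in the pipeline framework is not specified in this detail.

```python
import queue
import threading


class Fence:
    """Inserted into a pipeline's command stream; 'passed' once every
    task issued before it has completed."""
    def __init__(self):
        self._event = threading.Event()

    def signal(self):               # called when the pipeline reaches it
        self._event.set()

    def passed(self):               # non-blocking query by the enclosing system
        return self._event.is_set()

    def wait(self, timeout=None):   # block until the fence is passed
        return self._event.wait(timeout)


# Usage: one worker thread stands in for a pipeline draining its FIFO.
commands = queue.Queue()

def pipeline_worker():
    while True:
        task = commands.get()
        if task is None:
            break
        if isinstance(task, Fence):
            task.signal()   # all earlier tasks in the stream are done

commands.put("draw-slice")
fence = Fence()
commands.put(fence)
commands.put(None)
threading.Thread(target=pipeline_worker).start()
fence.wait()
```

Because the pipeline runs asynchronously from its enclosing system, the enclosing system chooses between the non-blocking passed() query and the blocking wait().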
  • SpatialGL Graphics Pipeline for Other Displays
  • As described above, a key feature of the SVE is display-independence (or “display-agnosticism”). Implementations of the SpatialGL API can be made for a variety of 2-D and 3-D displays. The SpatialGL API may be utilized with a Perspecta multiplanar volumetric display. In addition, the SpatialGL API may be utilized with other types of displays.
  • Because multi-view rendering is very similar to single-view rendering, the SpatialGL implementation is substantially simpler than the SpatialGL implementation for multiplanar rendering. For example, on flat, horizontal-parallax multi-view displays, such as the Stereographics 9-view lenticular display or Actuality Systems' quasi-holographic video display, a slice volume could be created as part of the rendering process. A slice volume contains a slice for each rendered view direction. Rendered views use sheared versions of standard projection matrices, corresponding to the viewing angles. "Final views" correspond to the views that are physically generated by the display hardware. Final views are sampled from the slice volume (for example, using texture mapping). The number of final views may be different than the number of rendered views.
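  • A minimal sketch of the sheared projection matrices mentioned above, under assumed conventions (row-major 4×4 matrices, a display plane at z = -focal); the actual projection setup used by these displays is not specified here. The shear is chosen so that geometry on the display plane renders identically in every view, while geometry off the plane acquires view-dependent parallax.

```python
# Sketch: sheared versions of a standard perspective projection for
# horizontal-parallax multi-view rendering. The matrix layout and the
# shear formulation are assumptions for illustration.
import math

def shear_projection(view_angle_deg, near=1.0, far=100.0, focal=10.0):
    """Perspective projection sheared along x so that the plane
    z = -focal (the display plane) projects identically in every view."""
    s = math.tan(math.radians(view_angle_deg))   # horizontal shear factor
    a = (far + near) / (near - far)
    b = 2 * far * near / (near - far)
    # row-major 4x4: standard projection with an x <- x + s*(z + focal) shear
    return [
        [1.0, 0.0,    s, s * focal],
        [0.0, 1.0,  0.0, 0.0],
        [0.0, 0.0,    a, b],
        [0.0, 0.0, -1.0, 0.0],
    ]

def project(m, p):
    x, y, z = p
    v = [m[r][0] * x + m[r][1] * y + m[r][2] * z + m[r][3] for r in range(4)]
    return [c / v[3] for c in v[:3]]             # perspective divide

views = [-10, 0, 10]                             # rendered view directions, degrees
on_plane = [1.0, 0.0, -10.0]                     # point on the display plane
off_plane = [1.0, 0.0, -20.0]                    # point behind the display plane
xs_on = [round(project(shear_projection(a), on_plane)[0], 6) for a in views]
xs_off = [round(project(shear_projection(a), off_plane)[0], 6) for a in views]
print(xs_on)    # identical screen x across views: no parallax on the plane
print(xs_off)   # differs across views: parallax off the plane
```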
  • Rendering tetrahedra requires special treatment because, at the time of writing, GPUs lack native volumetric rendering support. In this case, SpatialGL wraps an efficient volume rendering implementation such as ray casting.
  • Depending on the multiview display, image formatting can be different. Because Stereographics' lenticular display interfaces via digital visual interface (DVI), it does not require special formatting hardware (such as the Voxel Engine in Actuality Systems, Incorporated's Core Rendering Electronics). However, the distribution of pixels and views is somewhat irregular, and requires a reformatting process known as “interzigging.” Additionally, view anti-aliasing can occur during this step. On the other hand, Actuality Systems' holovideo display was designed to use the same Core Rendering Electronics as Perspecta, and can share the same implementation.
  • Because SpatialGL is display-agnostic, SpatialGL can also be used for non-3D displays. Examples include tiled display walls, displays with heterogeneous interfaces (e.g. the Sunnybrook HDR LCD, foveated resolution displays), and displays with unusual geometries (e.g. dome, sphere or cube shaped displays). Finally, an obvious example would be a standard 2-D display such as a desktop cathode ray tube (CRT) or liquid crystal display (LCD). This would allow the use of SpatialGL programs on standard computer hardware without an exotic display configuration. For the most part, the rendering of these displays only requires changes in the image reformatting stage, and minor changes elsewhere.
  • FIG. 6 is an exemplary process flow diagram of a command from a ported application. A ported application 112 renders a scene by issuing function calls via the compatibility module 114 to the GLAS Extended API (similar to OpenGL). These API calls specify features of the scene, such as texture images, the position and shape of primitive elements (such as lines, points, and triangles), and the mappings between elements, textures, and colors. The GLAS extended stub library 612 receives the API calls and issues them to the GLAS translation library 412. The GLAS translation library manages the OpenGL state machine to interpret the GLAS Extended API calls. Interpreted calls are translated into SpatialGL API calls.
  • Legacy applications invoke a similar command flow. In this case, a legacy application 110 renders a scene by issuing function calls via the compatibility module 114 to the GLAS Ghost API (similar to OpenGL). The GLAS ghost stub library 613 receives the API calls and reformats the scene in preparation for translation to SpatialGL. For example, the stub library may apply a heuristic that inspects the Ghost API calls to estimate the intended depth center of the scene. This additional information is passed to the GLAS translation library 412, along with the API calls generated by the legacy application. Interpreted calls are translated into SpatialGL API calls.
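  • A depth-center heuristic of the kind described above might look like the following sketch. The intercepted call format and the mid-range rule are assumptions for illustration; the stub library's actual inspection of the Ghost API stream is not specified here.

```python
# Sketch of a heuristic that inspects an intercepted legacy API call
# stream and estimates the intended depth center of the scene.
# The (name, args) call format is an assumption for illustration.
def estimate_depth_center(calls):
    """calls: intercepted (name, args) pairs from the legacy API stream.
    Returns the midpoint of the observed z range, or None if no
    geometry was seen."""
    zs = [args[2] for name, args in calls if name == "vertex3"]
    if not zs:
        return None
    return (min(zs) + max(zs)) / 2.0

# A toy intercepted stream: one color call and three vertices.
stream = [
    ("color3",  (1.0, 0.0, 0.0)),
    ("vertex3", (0.0, 0.0, -4.0)),
    ("vertex3", (1.0, 0.0, -6.0)),
    ("vertex3", (0.5, 1.0, -8.0)),
]
center = estimate_depth_center(stream)
print(center)   # midpoint of the z range, passed along with the API calls
```

The estimate would then accompany the translated calls into the GLAS translation library, giving the translator a depth reference the legacy application never supplied explicitly.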
  • The SpatialGL client library 308 directs the API calls to the volume manager 310, along with association information to identify the software application instance that generated the commands (the ported application).
  • The volume manager 310 modifies the API call stream. For example, it may map the application's rendering output to a specific portion of the graphics volume generated by the display device. It may also apply an overlay to the rendering output, such as a sprite that marks the location of a 3-D pointer.
  • After the volume manager 310, the core rendering client library 312 marshals the API calls and transmits them (for example, using Spatial Transport Protocol 314) to the server execution environment. The core rendering software (instantiated in the server execution environment) receives and unmarshals API calls.
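  • Marshaling of the API call stream might be sketched as below. The wire format (an opcode byte, a payload length, packed little-endian floats) is invented for illustration and is not the actual Spatial Transport Protocol framing.

```python
# Sketch: marshaling API calls on the client and unmarshaling them in
# the server execution environment. The framing here is hypothetical.
import struct

OPCODES = {"clear": 1, "vertex3": 2}
NAMES = {v: k for k, v in OPCODES.items()}

def marshal(name, *floats):
    """Client side: encode one command as opcode + payload length + floats."""
    payload = struct.pack(f"<{len(floats)}f", *floats)
    return struct.pack("<BH", OPCODES[name], len(payload)) + payload

def unmarshal(buf):
    """Server side: split a received byte stream back into commands."""
    cmds, off = [], 0
    while off < len(buf):
        op, n = struct.unpack_from("<BH", buf, off)
        off += 3
        args = struct.unpack_from(f"<{n // 4}f", buf, off)
        off += n
        cmds.append((NAMES[op], args))
    return cmds

# Two commands marshaled into one transmission, then recovered intact.
stream = marshal("clear") + marshal("vertex3", 0.0, 1.0, -5.0)
print(unmarshal(stream))
```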
  • The core rendering server 604 operates rendering algorithms, based on the API calls. The rendering algorithms are defined by a renderer module 606. In general, there is a specialized renderer module for each distinct class of display geometry. The rendering algorithms cause rendered image bitmaps to be moved into the spatial frame-buffer 611, using the voxel engine driver 610.
  • Distributed Configurations
  • Typically, graphics libraries allow multiple client applications to share access to a single rendering server (via, for example, the Windows Graphics Library [WGL] for OpenGL on Windows). In exemplary embodiments of the present invention, the SVE volume manager provides this meta-service for the SpatialGL API. The SpatialGL API is also designed to allow a single client application to access multiple servers. Often, this will be to provide multiple views to an application (e.g., a standard 2-D view with a Perspecta volumetric view).
  • However, for the SpatialGL API, servers do not necessarily represent access to rendering/display resources; instead they may also represent access to graphical assets. This includes geometry data, images, shader programs, and their combinations. Like web pages, SpatialGL objects can be referenced by Uniform Resource Locators (URLs). These URLs may represent local resources or shared resources. Exemplary implementations of the present invention can distribute different parts of the rendering pipeline to different servers that may specialize in various tasks.
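  • The asset-by-URL arrangement can be sketched as follows, with the asset server and the rendering server's local cache modeled as plain objects; the class names, URL scheme, and fetch interface are assumptions for illustration.

```python
# Sketch: URL-addressed graphical assets with a local cache on the
# rendering server. A real deployment would fetch over a network;
# here the link is modeled as a counted method call.
fetch_count = {"n": 0}

class AssetServer:
    """Holds assets (geometry, images, shader programs) keyed by URL."""
    def __init__(self, assets):
        self._assets = assets

    def fetch(self, url):
        fetch_count["n"] += 1          # each fetch crosses the slow link
        return self._assets[url]

class RenderingServer:
    """Resolves asset URLs, forwarding to the asset server on a miss."""
    def __init__(self, asset_server):
        self._assets = asset_server
        self._cache = {}

    def get_asset(self, url):
        if url not in self._cache:     # forward to the asset server once
            self._cache[url] = self._assets.fetch(url)
        return self._cache[url]        # later uses hit the local cache

# Hypothetical URL naming a volumetric texture on a shared asset server.
assets = AssetServer({"asset://archive/ct-volume": b"...voxels..."})
renderer = RenderingServer(assets)
renderer.get_asset("asset://archive/ct-volume")
renderer.get_asset("asset://archive/ct-volume")
print(fetch_count["n"])   # the large asset crossed the link only once
```

This is why co-locating the rendering server with the asset server pays off when assets are large relative to the client's bandwidth: only rendered results, not raw assets, travel the slow link.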
  • A simple configuration that may be implemented by the network layer 202 of the architecture described previously herein includes having a host computer attached directly to a display (e.g., a spatial display). In this configuration, the client application on the host computer may open multiple contexts (virtual servers) that are shared on the display. Another configuration that may be implemented by exemplary embodiments of the present invention is a typical client/server configuration. In this case, multiple host computers are attached to a display over a network and client applications on different host computers can each open multiple contexts (e.g., via the SpatialGL API) that are shared on the display. Other exemplary configurations include a buffered display configuration with a host computer that is attached to a SpatialGL rendering server. In this configuration the rendering server can only perform rendering, and does not actually display images. When the client application sends scenes to the rendering server, they are rendered and stored. The stored results can be played back on a display at a later time.
  • FIGS. 7-10 depict further example configurations that may be implemented by the network layer 202 of the architecture described previously herein. In FIG. 7, multiple host computers 702 are attached to various displays 704 over one or more networks 706. The displays 704 may be a mixture of various types (e.g., 2-D and 3-D). In exemplary embodiments of the present invention, the host computers 702 discover displays 704 through the Domain Name System (DNS). The host computer systems 702 include one or more graphics applications that communicate with one or more displays 704 via the network using an API (e.g., the SpatialGL API).
  • The network 706 may be implemented by any type of known network including, but not limited to, a wide area network (WAN), a local area network (LAN), storage area network (SAN), a global network (e.g. Internet, cellular), a virtual private network (VPN), and an intranet. The network 706 may be implemented using a wireless network or any kind of physical network implementation. SpatialGL servers (e.g., asset server, rendering server) may be attached to the network 706 in a wireless fashion.
  • FIG. 8 depicts a system that may be implemented by exemplary embodiments of the present invention. The configuration depicted in FIG. 8 includes a dedicated asset server 802. In FIG. 8 a host computer is attached to a SpatialGL asset server 802, a SpatialGL rendering server 804 and a display on a thin client 806. The client application accesses assets from the asset server 802 by URL. These assets are forwarded to the rendering server 804 (which may have the asset locally cached). The client application sends SpatialGL commands to the rendering server 804, which renders the SpatialGL scenes (also referred to herein as graphic display data) and sends the results to the display. In exemplary embodiments of the present invention, the assets are very large compared to the bandwidth between the host computer and the SpatialGL server (e.g., the host/display is a doctor's home PC connected over the Internet and the asset server 802 is a SpatialGL/DICOM bridge that creates SpatialGL volumetric textures from a hospital's picture archiving and communication system (PACS)).
  • FIG. 9 depicts a system that may be implemented by exemplary embodiments of the present invention. The configuration depicted in FIG. 9 may be referred to as a remote rendering implementation. In FIG. 9, a host computer is in communication with a SpatialGL rendering server 804 and a display server through a network. In the system depicted in FIG. 9, the host computer and display server are located on the same machine (e.g., the host/display is a thin client 806 such as a tablet PC or cellular telephone) and the rendering algorithms are performed remotely by the rendering server 804. The rendering server may access the asset server 802 via the network 706. The client application, located on the thin client 806, sends SpatialGL commands to the rendering server 804. The rendering server 804 then renders the SpatialGL scenes and sends the results to the SpatialGL display on the thin client 806.
  • FIG. 10 depicts a system that may be implemented by exemplary embodiments of the present invention to provide distributed rendering. Multiple host computers 702 are attached to various SpatialGL rendering servers 1004 and displays 1002 over a network 706. Client applications from the host computers 702 send SpatialGL commands to the pool of rendering servers 1004, which load-balance and distribute the rendering tasks amongst themselves, possibly through an arbiter. Scenes are rendered in parallel across available rendering servers 1004 and sent to the appropriate displays 1002. Interesting (though not mutually exclusive) distributions of rendering servers 1004 include workstations that perform SpatialGL rendering during idle cycles or a centrally deployed cluster of dedicated rendering servers 1004. The latter case is particularly interesting when combined with dedicated asset servers 802.
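  • The pooled-renderer arrangement might be sketched as follows; the arbiter and its least-loaded policy are illustrative assumptions, since the text leaves the distribution strategy open.

```python
# Sketch: an arbiter distributing scenes to the least-loaded rendering
# server in a pool. The load metric (outstanding scenes) and the class
# names are assumptions for illustration.
class RenderingServerStub:
    def __init__(self, name):
        self.name = name
        self.load = 0                  # outstanding rendering tasks

    def render(self, scene):
        self.load += 1
        return f"{self.name}:{scene}"  # stand-in for a rendered result

class Arbiter:
    def __init__(self, pool):
        self.pool = pool

    def submit(self, scene):
        # Route each scene to the currently least-loaded server.
        server = min(self.pool, key=lambda s: s.load)
        return server.render(server and scene)

pool = [RenderingServerStub(f"render{i}") for i in range(3)]
arbiter = Arbiter(pool)
results = [arbiter.submit(f"scene{i}") for i in range(6)]
print([s.load for s in pool])          # tasks spread evenly across the pool
```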
  • The system configurations described above and depicted in FIGS. 7-10 are exemplary in nature. In general, the asset resources (e.g., asset server 802), computation resources (e.g., rendering server 804) and/or display resources (e.g., displays 704) may be distributed across one or more networks 706 or co-located. In addition, one or more assets resources and/or computation resources may create input to one or more displays. Particular use-cases and specific examples that may be implemented using the configurations described above include, but are not limited to:
  • A customer's PC connected directly to a 3-D display (for example, a mechanical engineer running SolidWorks application software on a desktop IBM PC workstation, containing elements of Spatial Visualization Environment, in physical connection to a multiplanar three-dimensional display over gigabit Ethernet.)
  • One or more mobile devices with rendered 3-D graphics on their 2-D displays (i.e. 3-D-on-2-D) that call upon remote rendering computational horsepower to draw animated 3-D graphics for games.
  • One or more mobile devices with rendered 3-D graphics on their 3-D displays (e.g. a cellphone with a lenticular 9-view autostereoscopic 3-D display) that call upon remote rendering computational horsepower.
  • Several displays of various types distributed through an oil exploration enterprise, such as 17″ desktop LCDs, a 9-panel 2-D video wall, ten stereoscopic displays, and a volumetric 3-D display, all rendering projections of a 3-D application's graphical output, where the 3-D application resides in a computer not necessarily local to the displays.
  • Several displays of various types, such as 2-D and panoramagram 3-D, throughout one or many movie retail locations, which, in response to a potentially remote source of data, provide content in a synchronized manner. For example, when a new product is offered, all of the displays can show up-to-date advertising content.
  • Exemplary embodiments of the present invention allow applications to have access to a wider variety of assets, rendering algorithms and displays. The assets, rendering algorithms and displays may be located in the same geographic location or located in separate geographic locations and in communication via a network.
  • As described above, the embodiments of the invention may be embodied in the form of hardware, software, firmware, or any processes and/or apparatuses for practicing the embodiments. Embodiments of the invention may also be embodied in the form of computer program code containing instructions embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other computer-readable storage medium, wherein, when the computer program code is loaded into and executed by a computer, the computer becomes an apparatus for practicing the invention. The present invention can also be embodied in the form of computer program code, for example, whether stored in a storage medium, loaded into and/or executed by a computer, or transmitted over some transmission medium, such as over electrical wiring or cabling, through fiber optics, or via electromagnetic radiation, wherein, when the computer program code is loaded into and executed by a computer, the computer becomes an apparatus for practicing the invention. When implemented on a general-purpose microprocessor, the computer program code segments configure the microprocessor to create specific logic circuits.
  • While the invention has been described with reference to exemplary embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed as the best mode contemplated for carrying out this invention, but that the invention will include all embodiments falling within the scope of the appended claims. Moreover, the use of the terms first, second, etc. do not denote any order or importance, but rather the terms first, second, etc. are used to distinguish one element from another.

Claims (21)

1. A system for displaying graphical information, the system comprising:
an asset server for storing information; and
a rendering server in communication with the asset server for receiving a graphics command and for rendering graphic display data in response to the graphics command and to the information, wherein the rendering server is independently addressable from the asset server.
2. The system of claim 1 wherein the asset server and the rendering server are in communication via a network.
3. The system of claim 1 wherein the asset server is located in a different geographic location than the rendering server.
4. The system of claim 1 wherein the information includes one or more of rendering resources, geometry data, images and shader programs.
5. The system of claim 1 wherein the rendering server further transmits the graphic display data to a display device that is independently addressable from the asset server and the rendering server.
6. The system of claim 5 wherein the graphic display data is transmitted to the display device via a network.
7. The system of claim 5 wherein the display device is located in a different geographic location than one or more of the rendering server and the asset server.
8. The system of claim 5 wherein the display device includes one or more of a two-dimensional (2-D) display and a three-dimensional (3-D) display.
9. The system of claim 5 wherein the display device includes one or more of a 2-D printer and a 3-D printer.
10. The system of claim 1 wherein the graphics command is generated by a client application.
11. The system of claim 10 wherein the client application transmits the graphics command to the rendering server via a network.
12. The system of claim 10 wherein the client application is located in a different geographic location than the rendering server.
13. The system of claim 1 wherein the graphics command is generated by a plurality of client applications.
14. The system of claim 1 wherein the rendering server further transmits the graphic display data to a plurality of independently addressable display devices.
15. A method for displaying graphical information, the method comprising:
receiving a graphics command at a rendering server;
accessing information responsive to the graphics command, wherein the information is located in an asset server that is separately addressable from the rendering server; and
rendering graphic display data in response to the graphics command and to the information.
16. The method of claim 15 wherein the asset server is located in a different geographic location than the rendering server and the asset server and the rendering server communicate via a network.
17. The method of claim 15 further comprising transmitting the graphic display data to a display device.
18. The method of claim 17 wherein the display device is located in a different geographic location from one or more of the asset server and the rendering server.
19. An architecture for displaying graphical information, the architecture comprising:
an asset resource layer for storing information; and
a rendering layer for receiving a graphics command and rendering graphic display data in response to the graphics command and to the information, wherein the rendering layer is independently addressable from the asset resource layer.
20. The architecture of claim 19 further comprising a display layer for receiving the graphic display data and displaying the graphic display data on a display device, wherein the display layer is independently addressable from the asset layer and the rendering layer.
21. A computer program product for displaying graphical information, the computer program product comprising:
a storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method comprising:
receiving a graphics command at a rendering server;
accessing information responsive to the graphics command, wherein the information is located in an asset server that is separately addressable from the rendering server; and
rendering graphic display data in response to the graphics command and to the information.
US11/176,482 2004-07-08 2005-07-07 Architecture for rendering graphics on output devices over diverse connections Abandoned US20060028479A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US11/176,482 US20060028479A1 (en) 2004-07-08 2005-07-07 Architecture for rendering graphics on output devices over diverse connections
US13/292,070 US20120050301A1 (en) 2004-07-08 2011-11-08 Architecture For Rendering Graphics On Output Devices Over Diverse Connections
US13/292,066 US20120050300A1 (en) 2004-07-08 2011-11-08 Architecture For Rendering Graphics On Output Devices Over Diverse Connections

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US58632704P 2004-07-08 2004-07-08
US11/176,482 US20060028479A1 (en) 2004-07-08 2005-07-07 Architecture for rendering graphics on output devices over diverse connections

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US13/292,066 Continuation US20120050300A1 (en) 2004-07-08 2011-11-08 Architecture For Rendering Graphics On Output Devices Over Diverse Connections
US13/292,070 Continuation US20120050301A1 (en) 2004-07-08 2011-11-08 Architecture For Rendering Graphics On Output Devices Over Diverse Connections

Publications (1)

Publication Number Publication Date
US20060028479A1 true US20060028479A1 (en) 2006-02-09

Family

ID=35787617

Family Applications (4)

Application Number Title Priority Date Filing Date
US11/176,057 Expired - Fee Related US8042094B2 (en) 2004-07-08 2005-07-07 Architecture for rendering graphics on output devices
US11/176,482 Abandoned US20060028479A1 (en) 2004-07-08 2005-07-07 Architecture for rendering graphics on output devices over diverse connections
US13/292,070 Abandoned US20120050301A1 (en) 2004-07-08 2011-11-08 Architecture For Rendering Graphics On Output Devices Over Diverse Connections
US13/292,066 Abandoned US20120050300A1 (en) 2004-07-08 2011-11-08 Architecture For Rendering Graphics On Output Devices Over Diverse Connections

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/176,057 Expired - Fee Related US8042094B2 (en) 2004-07-08 2005-07-07 Architecture for rendering graphics on output devices

Family Applications After (2)

Application Number Title Priority Date Filing Date
US13/292,070 Abandoned US20120050301A1 (en) 2004-07-08 2011-11-08 Architecture For Rendering Graphics On Output Devices Over Diverse Connections
US13/292,066 Abandoned US20120050300A1 (en) 2004-07-08 2011-11-08 Architecture For Rendering Graphics On Output Devices Over Diverse Connections

Country Status (3)

Country Link
US (4) US8042094B2 (en)
TW (2) TW200622930A (en)
WO (2) WO2006017198A2 (en)

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060129632A1 (en) * 2004-12-14 2006-06-15 Blume Leo R Remote content rendering for mobile viewing
US20060155800A1 (en) * 2004-12-14 2006-07-13 Ziosoft, Inc. Image processing system for volume rendering
US20070061733A1 (en) * 2005-08-30 2007-03-15 Microsoft Corporation Pluggable window manager architecture using a scene graph system
US20070120865A1 (en) * 2005-11-29 2007-05-31 Ng Kam L Applying rendering context in a multi-threaded environment
US20070236502A1 (en) * 2006-04-07 2007-10-11 Huang Paul C Generic visualization system
US20080007559A1 (en) * 2006-06-30 2008-01-10 Nokia Corporation Apparatus, method and a computer program product for providing a unified graphics pipeline for stereoscopic rendering
US20080051160A1 (en) * 2004-09-08 2008-02-28 Seil Oliver D Holder, Electrical Supply, and RF Transmitter Unit for Electronic Devices
US20080068289A1 (en) * 2006-09-14 2008-03-20 Citrix Systems, Inc. System and method for multiple display support in remote access software
US20080068290A1 (en) * 2006-09-14 2008-03-20 Shadi Muklashy Systems and methods for multiple display support in remote access software
US20080194930A1 (en) * 2007-02-09 2008-08-14 Harris Melvyn L Infrared-visible needle
WO2008118065A1 (en) * 2007-03-28 2008-10-02 Agency 9 Ab Graphics rendering system
US20090002368A1 (en) * 2007-06-26 2009-01-01 Nokia Corporation Method, apparatus and a computer program product for utilizing a graphical processing unit to provide depth information for autostereoscopic display
US20090138544A1 (en) * 2006-11-22 2009-05-28 Rainer Wegenkittl Method and System for Dynamic Image Processing
US20090201303A1 (en) * 2007-11-23 2009-08-13 Mercury Computer Systems, Inc. Multi-user multi-gpu render server apparatus and methods
US20090284583A1 (en) * 2008-05-19 2009-11-19 Samsung Electronics Co., Ltd. Apparatus and method for creatihng and displaying media file
US7647129B1 (en) * 2005-11-23 2010-01-12 Griffin Technology, Inc. Digital music player accessory interface
US20100220098A1 (en) * 2008-10-26 2010-09-02 Zebra Imaging, Inc. Converting 3D Data to Hogel Data
US20110141113A1 (en) * 2006-03-07 2011-06-16 Graphics Properties Holdings, Inc. Integration of graphical application content into the graphical scene of another application
US20110227934A1 (en) * 2010-03-19 2011-09-22 Microsoft Corporation Architecture for Volume Rendering
US20120306899A1 (en) * 2011-06-03 2012-12-06 Jeremy Sandmel Serialization of Asynchronous Command Streams
US8775510B2 (en) 2007-08-27 2014-07-08 Pme Ip Australia Pty Ltd Fast file server methods and system
US8976190B1 (en) 2013-03-15 2015-03-10 Pme Ip Australia Pty Ltd Method and system for rule based display of sets of images
US9019287B2 (en) 2007-11-23 2015-04-28 Pme Ip Australia Pty Ltd Client-server visualization system with hybrid data processing
US9454813B2 (en) 2007-11-23 2016-09-27 PME IP Pty Ltd Image segmentation assignment of a volume by comparing and correlating slice histograms with an anatomic atlas of average histograms
US9509802B1 (en) 2013-03-15 2016-11-29 PME IP Pty Ltd Method and system FPOR transferring data to improve responsiveness when sending large data sets
US20170199763A1 (en) * 2016-01-08 2017-07-13 Electronics And Telecommunications Research Institute Method and apparatus for visualizing scheduling result in multicore system
US9904969B1 (en) 2007-11-23 2018-02-27 PME IP Pty Ltd Multi-user multi-GPU render server apparatus and methods
US9984478B2 (en) 2015-07-28 2018-05-29 PME IP Pty Ltd Apparatus and method for visualizing digital breast tomosynthesis and other volumetric images
US10070839B2 (en) 2013-03-15 2018-09-11 PME IP Pty Ltd Apparatus and system for rule based visualization of digital breast tomosynthesis and other volumetric images
US10311541B2 (en) 2007-11-23 2019-06-04 PME IP Pty Ltd Multi-user multi-GPU render server apparatus and methods
US10540803B2 (en) 2013-03-15 2020-01-21 PME IP Pty Ltd Method and system for rule-based display of sets of images
US10909679B2 (en) 2017-09-24 2021-02-02 PME IP Pty Ltd Method and system for rule based display of sets of images using image content derived parameters
US11183292B2 (en) 2013-03-15 2021-11-23 PME IP Pty Ltd Method and system for rule-based anonymized display and data export
US11244495B2 (en) 2013-03-15 2022-02-08 PME IP Pty Ltd Method and system for rule based display of sets of images using image content derived parameters
US11599672B2 (en) 2015-07-31 2023-03-07 PME IP Pty Ltd Method and apparatus for anonymized display and data export

Families Citing this family (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102006018689A1 (en) * 2006-04-13 2007-10-25 Seereal Technologies S.A. Method for rendering and generating computer-generated video holograms in real time
CN101059760B (en) * 2006-04-20 2014-05-07 意法半导体研发(上海)有限公司 OPENGL to OPENGLIES translator and OPENGLIES emulator
DE102006025096B4 (en) * 2006-05-23 2012-03-29 Seereal Technologies S.A. Method and device for rendering and generating computer-generated video holograms
WO2008156785A2 (en) * 2007-06-18 2008-12-24 Pano Logic, Inc. Remote graphics rendering across a network
US20090089453A1 (en) * 2007-09-27 2009-04-02 International Business Machines Corporation Remote visualization of a graphics application
GB2459335B (en) * 2008-04-25 2013-01-09 Tenomichi Ltd Temporary modification for extending functionality of computer games and software applications
DE102008046643A1 (en) * 2008-09-09 2010-03-11 Visumotion Gmbh Method for displaying raster images
WO2012034113A2 (en) * 2010-09-10 2012-03-15 Stereonics, Inc. Stereoscopic three dimensional projection and display
WO2013049835A2 (en) * 2011-09-30 2013-04-04 Owens Corning Intellectual Capital, Llc Method of forming a web from fibrous materails
JP2015505972A (en) 2011-11-09 2015-02-26 コーニンクレッカ フィリップス エヌ ヴェ Display apparatus and method
US9183663B1 (en) 2011-12-30 2015-11-10 Graphon Corporation System for and method of classifying and translating graphics commands in client-server computing systems
EP3000232A4 (en) 2013-05-23 2017-01-25 Kabushiki Kaisha Square Enix Holdings (also trading as Square Enix Holdings Co., Ltd) Dynamic allocation of rendering resources in a cloud gaming system
TWI493371B (en) * 2013-05-27 2015-07-21 Drawing the method of building mold
US9821517B2 (en) * 2013-06-26 2017-11-21 Microsoft Technology Licensing, Llc 3D manufacturing platform
US9280401B2 (en) * 2014-01-09 2016-03-08 Theplatform, Llc Type agnostic data engine
EP3183653A4 (en) * 2014-08-20 2018-07-04 Landmark Graphics Corporation Optimizing computer hardware resource utilization when processing variable precision data
US20160094837A1 (en) * 2014-09-30 2016-03-31 3DOO, Inc. Distributed stereoscopic rendering for stereoscopic projecton and display
US9329858B2 (en) * 2014-09-30 2016-05-03 Linkedin Corporation Managing access to resource versions in shared computing environments
US10261985B2 (en) 2015-07-02 2019-04-16 Microsoft Technology Licensing, Llc Output rendering in dynamic redefining application
US9733993B2 (en) 2015-07-02 2017-08-15 Microsoft Technology Licensing, Llc Application sharing using endpoint interface entities
US9712472B2 (en) 2015-07-02 2017-07-18 Microsoft Technology Licensing, Llc Application spawning responsive to communication
US9658836B2 (en) * 2015-07-02 2017-05-23 Microsoft Technology Licensing, Llc Automated generation of transformation chain compatible class
US10198252B2 (en) 2015-07-02 2019-02-05 Microsoft Technology Licensing, Llc Transformation chain application splitting
US9733915B2 (en) 2015-07-02 2017-08-15 Microsoft Technology Licensing, Llc Building of compound application chain applications
US9860145B2 (en) 2015-07-02 2018-01-02 Microsoft Technology Licensing, Llc Recording of inter-application data flow
US9785484B2 (en) 2015-07-02 2017-10-10 Microsoft Technology Licensing, Llc Distributed application interfacing across different hardware
US10031724B2 (en) 2015-07-08 2018-07-24 Microsoft Technology Licensing, Llc Application operation responsive to object spatial status
US10198405B2 (en) 2015-07-08 2019-02-05 Microsoft Technology Licensing, Llc Rule-based layout of changing information
US10277582B2 (en) 2015-08-27 2019-04-30 Microsoft Technology Licensing, Llc Application service architecture
CA2946074C (en) 2015-10-21 2024-02-13 Stephen Viggers Systems and methods for using an opengl api with a vulkan graphics driver
JP2018055185A (en) * 2016-09-26 2018-04-05 富士ゼロックス株式会社 Image forming apparatus and program
WO2018144315A1 (en) 2017-02-01 2018-08-09 Pcms Holdings, Inc. System and method for augmented reality content delivery in pre-captured environments
EP3441877A3 (en) 2017-08-09 2019-03-20 Daniel Herring Systems and methods for using egl with an opengl api and a vulkan graphics driver
US10981059B2 (en) * 2019-07-03 2021-04-20 Sony Interactive Entertainment LLC Asset aware computing architecture for graphics processing
US11283982B2 (en) 2019-07-07 2022-03-22 Selfie Snapper, Inc. Selfie camera
TWI734232B (en) * 2019-10-25 2021-07-21 東培工業股份有限公司 Automatic drawing system for bearing design
WO2021138566A1 (en) 2019-12-31 2021-07-08 Selfie Snapper, Inc. Electroadhesion device with voltage control module
WO2021252980A1 (en) * 2020-06-12 2021-12-16 Selfie Snapper, Inc. Digital mirror
USD939607S1 (en) 2020-07-10 2021-12-28 Selfie Snapper, Inc. Selfie camera
TWI825435B (en) * 2021-06-23 2023-12-11 中興工程顧問股份有限公司 The method and the system of engineering automation design and management
CN113538705B (en) * 2021-07-19 2022-09-09 中国人民解放军66350部队 Vulkan-based visual engine for flight simulation
KR102635694B1 (en) * 2021-07-29 2024-02-13 (주)그래피카 3D Data Transformation and Using Method for 3D Express Rendering

Citations (93)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3140415A (en) * 1960-06-16 1964-07-07 Hughes Aircraft Co Three-dimensional display cathode ray tube
US4574364A (en) * 1982-11-23 1986-03-04 Hitachi, Ltd. Method and apparatus for controlling image display
US5132839A (en) * 1987-07-10 1992-07-21 Travis Adrian R L Three dimensional display device
US5227771A (en) * 1991-07-10 1993-07-13 International Business Machines Corporation Method and system for incrementally changing window size on a display
US5544318A (en) * 1993-04-16 1996-08-06 Accom, Inc. Asynchronous media server request processing system for servicing reprioritizing request from a client determines whether or not to delay executing said reprioritizing request
US5913032A (en) * 1994-04-04 1999-06-15 Inprise Corporation System and methods for automatically distributing a particular shared data object through electronic mail
US5933778A (en) * 1996-06-04 1999-08-03 At&T Wireless Services Inc. Method and apparatus for providing telecommunication services based on a subscriber profile updated by a personal information manager
US5990959A (en) * 1996-12-20 1999-11-23 U S West, Inc. Method, system and product for direct rendering of video images to a video data stream
US6101445A (en) * 1996-12-23 2000-08-08 Schlumberger Technology Corporation Apparatus, system and method to transmit and display acquired well data in near real time at a remote location
US6163749A (en) * 1998-06-05 2000-12-19 Navigation Technologies Corp. Method and system for scrolling a map display in a navigation application
US6181338B1 (en) * 1998-10-05 2001-01-30 International Business Machines Corporation Apparatus and method for managing windows in graphical user interface environment
US6201611B1 (en) * 1997-11-19 2001-03-13 International Business Machines Corporation Providing local printing on a thin client
US6249294B1 (en) * 1998-07-20 2001-06-19 Hewlett-Packard Company 3D graphics in a single logical screen display using multiple computer systems
US6263365B1 (en) * 1996-10-04 2001-07-17 Raindance Communications, Inc. Browser controller
US6281893B1 (en) * 1996-04-04 2001-08-28 Sun Microsystems, Inc. Method and apparatus for providing an object oriented approach to a device independent graphics control system
US6337689B1 (en) * 1999-04-03 2002-01-08 Hewlett-Packard Company Adaptive buffering of computer graphics vertex commands
US20020015042A1 (en) * 2000-08-07 2002-02-07 Robotham John S. Visual content browsing using rasterized representations
US6373488B1 (en) * 1999-10-18 2002-04-16 Sierra On-Line Three-dimensional tree-structured data display
US20020141405A1 (en) * 2000-12-22 2002-10-03 Stephane Bouet Transferring objects within an ongoing file transfer operation
US20020154214A1 (en) * 2000-11-02 2002-10-24 Laurent Scallie Virtual reality game system using pseudo 3D display driver
US20020158865A1 (en) * 1998-04-27 2002-10-31 Dye Thomas A. Graphics system and method for rendering independent 2D and 3D objects using pointer based display list video refresh operations
US20020158873A1 (en) * 2001-01-26 2002-10-31 Todd Williamson Real-time virtual viewpoint in simulated reality environment
US20020173857A1 (en) * 2001-05-07 2002-11-21 Ecritical, Inc. Method and apparatus for measurement, analysis, and optimization of content delivery
US6490626B1 (en) * 1997-11-19 2002-12-03 Hewlett Packard Company Browser system
US20020191018A1 (en) * 2001-05-31 2002-12-19 International Business Machines Corporation System and method for implementing a graphical user interface across dissimilar platforms yet retaining similar look and feel
US6501487B1 (en) * 1999-02-02 2002-12-31 Casio Computer Co., Ltd. Window display controller and its program storage medium
US20030009343A1 (en) * 2001-07-06 2003-01-09 Snowshore Networks, Inc. System and method for constructing phrases for a media server
US20030046432A1 (en) * 2000-05-26 2003-03-06 Paul Coleman Reducing the amount of graphical line data transmitted via a low bandwidth transport protocol mechanism
US20030055896A1 (en) * 2001-08-31 2003-03-20 Hui Hu On-line image processing and communication system
US20030079030A1 (en) * 2001-08-22 2003-04-24 Cocotis Thomas A. Output management system and method for enabling access to private network resources
US6554430B2 (en) * 2000-09-07 2003-04-29 Actuality Systems, Inc. Volumetric three-dimensional display system
US6608628B1 (en) * 1998-11-06 2003-08-19 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration (Nasa) Method and apparatus for virtual interactive medical imaging by multiple remotely-located users
US6611264B1 (en) * 1999-06-18 2003-08-26 Interval Research Corporation Deferred scanline conversion architecture
US20030164832A1 (en) * 2002-03-04 2003-09-04 Alcorn Byron A. Graphical display system and method
US6621918B1 (en) * 1999-11-05 2003-09-16 H Innovation, Inc. Teleradiology systems for rendering and visualizing remotely-located volume data sets
US6621500B1 (en) * 2000-11-17 2003-09-16 Hewlett-Packard Development Company, L.P. Systems and methods for rendering graphical data
US20030189574A1 (en) * 2002-04-05 2003-10-09 Ramsey Paul R. Acceleration of graphics for remote display using redirection of rendering and compression
US20030234790A1 (en) * 2002-06-24 2003-12-25 Hochmuth Roland M. System and method for grabbing frames of graphical data
US20040001095A1 (en) * 2002-07-01 2004-01-01 Todd Marques Method and apparatus for universal device management
US20040003117A1 (en) * 2001-01-26 2004-01-01 Mccoy Bill Method and apparatus for dynamic optimization and network delivery of multimedia content
US20040019628A1 (en) * 2002-07-09 2004-01-29 Puri Anish N. System for remotely rendering content for output by a printer
US20040024846A1 (en) * 2000-08-22 2004-02-05 Stephen Randall Method of enabling a wireless information device to access data services
US20040031052A1 (en) * 2002-08-12 2004-02-12 Liberate Technologies Information platform
US20040073626A1 (en) * 2000-12-22 2004-04-15 Major Harry R. Information browser system and method for a wireless communication device
US20040080533A1 (en) * 2002-10-23 2004-04-29 Sun Microsystems, Inc. Accessing rendered graphics over the internet
US20040094632A1 (en) * 2001-12-17 2004-05-20 Alleshouse Bruce N. Xml printer system
US6742161B1 (en) * 2000-03-07 2004-05-25 Scansoft, Inc. Distributed computing document recognition and processing
US20040100651A1 (en) * 2002-11-22 2004-05-27 Xerox Corporation Printing to a client site from an application running on a remote server
US20040104935A1 (en) * 2001-01-26 2004-06-03 Todd Williamson Virtual reality immersion system
US20040125130A1 (en) * 2001-02-26 2004-07-01 Andrea Flamini Techniques for embedding custom user interface controls inside internet content
US6762763B1 (en) * 1999-07-01 2004-07-13 Microsoft Corporation Computer system having a distributed texture memory architecture
US20040137921A1 (en) * 2002-11-08 2004-07-15 Vinod Valloppillil Asynchronous messaging based system for publishing and accessing content and accessing applications on a network with mobile devices
US20040135974A1 (en) * 2002-10-18 2004-07-15 Favalora Gregg E. System and architecture for displaying three dimensional data
US6771991B1 (en) * 2002-03-28 2004-08-03 Motorola, Inc. Graphics and variable presence architectures in wireless communication networks, mobile handsets and methods therefor
US6782431B1 (en) * 1998-09-30 2004-08-24 International Business Machines Corporation System and method for dynamic selection of database application code execution on the internet with heterogenous clients
US6799318B1 (en) * 2000-04-24 2004-09-28 Microsoft Corporation Method having multiple interfaces with distinguished functions and commands for providing services to a device through a transport
US6798417B1 (en) * 1999-09-23 2004-09-28 International Business Machines Corporation Just in time graphics dispatching
US6803912B1 (en) * 2001-08-02 2004-10-12 Mark Resources, Llc Real time three-dimensional multiple display imaging system
US20040221004A1 (en) * 2003-04-30 2004-11-04 Alexander Chalfin System, method, and computer program product for applying different transport mechanisms for user interface and image portions of a remotely rendered image
US20040226048A1 (en) * 2003-02-05 2004-11-11 Israel Alpert System and method for assembling and distributing multi-media output
US20040255005A1 (en) * 2001-08-24 2004-12-16 David Spooner Web server resident on a mobile computing device
US6847366B2 (en) * 2002-03-01 2005-01-25 Hewlett-Packard Development Company, L.P. System and method utilizing multiple processes to render graphical data
US20050021656A1 (en) * 2003-07-21 2005-01-27 Callegari Andres C. System and method for network transmission of graphical data through a distributed application
US20050022139A1 (en) * 2003-07-25 2005-01-27 David Gettman Information display
US20050050216A1 (en) * 2002-01-08 2005-03-03 John Stauffer Virtualization of graphics resources
US6867766B1 (en) * 1999-05-25 2005-03-15 Sony Computer Entertainment Inc. Image generating apparatus, image generating method, entertainment system, and recording medium
US20050081161A1 (en) * 2003-10-10 2005-04-14 Macinnes Cathryn Three-dimensional interior design system
US20050080929A1 (en) * 2003-10-13 2005-04-14 Lg Electronics Inc. Server system for performing communication over wireless network
US20050080871A1 (en) * 2003-10-09 2005-04-14 International Business Machines Corporation Image distribution for dynamic server pages
US20050108364A1 (en) * 2003-11-14 2005-05-19 Callaghan David M. Systems and methods that utilize scalable vector graphics to provide web-based visualization of a device
US20050110953A1 (en) * 2001-12-26 2005-05-26 Joseph Castaldi Projector device user interface system
US20050138193A1 (en) * 2003-12-19 2005-06-23 Microsoft Corporation Routing of resource information in a network
US20050166214A1 (en) * 2002-07-29 2005-07-28 Silicon Graphics, Inc. System and method for managing graphics applications
US20050172009A1 (en) * 2004-01-29 2005-08-04 Lg Electronics Inc. Server system for performing communication over wireless network
US20050229118A1 (en) * 2004-03-31 2005-10-13 Fuji Xerox Co., Ltd. Systems and methods for browsing multimedia content on small mobile devices
US20050243094A1 (en) * 2004-05-03 2005-11-03 Microsoft Corporation Systems and methods for providing an enhanced graphics pipeline
US20060007400A1 (en) * 2001-12-26 2006-01-12 Joseph Castaldi System and method for updating an image display device from a remote location
US7016949B1 (en) * 2000-11-20 2006-03-21 Colorado Computer Training Institute Network training system with a remote, shared classroom laboratory
US20060082583A1 (en) * 2004-10-14 2006-04-20 Microsoft Corporation Remote client graphics rendering
US7062527B1 (en) * 2000-04-19 2006-06-13 Silicon Graphics, Inc. Management and scheduling of a distributed rendering method and system
US7092983B1 (en) * 2000-04-19 2006-08-15 Silicon Graphics, Inc. Method and system for secure remote distributed rendering
US7136042B2 (en) * 2002-10-29 2006-11-14 Microsoft Corporation Display controller permitting connection of multiple displays with a single video cable
US7162528B1 (en) * 1998-11-23 2007-01-09 The United States Of America As Represented By The Secretary Of The Navy Collaborative environment implemented on a distributed computer network and software therefor
US7170521B2 (en) * 2001-04-03 2007-01-30 Ultravisual Medical Systems Corporation Method of and system for storing, communicating, and displaying image data
US20070033634A1 (en) * 2003-08-29 2007-02-08 Koninklijke Philips Electronics N.V. User-profile controls rendering of content information
US7188347B2 (en) * 2002-05-24 2007-03-06 Nokia Corporation Method, apparatus and system for connecting system-level functionality of domestic OS of a mobile phone to any application operating system
US7266616B1 (en) * 2001-08-08 2007-09-04 Pasternak Solutions Llc Method and system for digital rendering over a network
US7274368B1 (en) * 2000-07-31 2007-09-25 Silicon Graphics, Inc. System method and computer program product for remote graphics processing
US7339939B2 (en) * 2001-06-29 2008-03-04 Nokia Corporation Apparatus, method and system for an object exchange bridge
US7346689B1 (en) * 1998-04-20 2008-03-18 Sun Microsystems, Inc. Computer architecture having a stateless human interface device and methods of use
US7475419B1 (en) * 2003-09-19 2009-01-06 Hewlett-Packard Development Company, L.P. System and method for controlling access in an interactive grid environment
US7580986B2 (en) * 2004-05-17 2009-08-25 Pixar Dependency graph-based aggregate asset status reporting methods and apparatus
US7783695B1 (en) * 2000-04-19 2010-08-24 Graphics Properties Holdings, Inc. Method and system for distributed rendering

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE2622802C2 (en) 1976-05-21 1983-11-03 Ibm Deutschland Gmbh, 7000 Stuttgart Device for three-dimensional imaging in a cylinder-symmetrical imaging space
FR2532267B1 (en) 1982-08-31 1988-05-27 Lely Nv C Van Der TRACTOR COMPRISING A PLURALITY OF DRIVE WHEELS
US5544291A (en) * 1993-11-10 1996-08-06 Adobe Systems, Inc. Resolution-independent method for displaying a three dimensional model in two-dimensional display space
US6466185B2 (en) 1998-04-20 2002-10-15 Alan Sullivan Multi-planar volumetric display system and method of operation using psychological vision cues
US6377229B1 (en) * 1998-04-20 2002-04-23 Dimensional Media Associates, Inc. Multi-planar volumetric display system and method of operation using three-dimensional anti-aliasing
EP1088448B1 (en) * 1998-06-18 2003-01-15 Sony Electronics Inc. A method of and apparatus for partitioning, scaling and displaying video and/or graphics across several display devices
US6747642B1 (en) * 1999-01-29 2004-06-08 Nintendo Co., Ltd. Method and apparatus for providing non-photorealistic cartoon outlining within a 3D videographics system
US6346938B1 (en) * 1999-04-27 2002-02-12 Harris Corporation Computer-resident mechanism for manipulating, navigating through and mensurating displayed image of three-dimensional geometric model
US6529201B1 (en) * 1999-08-19 2003-03-04 International Business Machines Corporation Method and apparatus for storing and accessing texture maps
WO2001080180A2 (en) * 2000-04-14 2001-10-25 Smyleventures, Inc. Method and apparatus for displaying assets, goods and services
US7523411B2 (en) * 2000-08-22 2009-04-21 Bruce Carlin Network-linked interactive three-dimensional composition and display of saleable objects in situ in viewer-selected scenes for purposes of object promotion and procurement, and generation of object advertisements
AU2002952872A0 (en) * 2002-11-25 2002-12-12 Dynamic Digital Depth Research Pty Ltd Image generation
US7532230B2 (en) * 2004-01-29 2009-05-12 Hewlett-Packard Development Company, L.P. Method and system for communicating gaze in an immersive virtual environment

Patent Citations (99)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3140415A (en) * 1960-06-16 1964-07-07 Hughes Aircraft Co Three-dimensional display cathode ray tube
US4574364A (en) * 1982-11-23 1986-03-04 Hitachi, Ltd. Method and apparatus for controlling image display
US5132839A (en) * 1987-07-10 1992-07-21 Travis Adrian R L Three dimensional display device
US5227771A (en) * 1991-07-10 1993-07-13 International Business Machines Corporation Method and system for incrementally changing window size on a display
US5544318A (en) * 1993-04-16 1996-08-06 Accom, Inc. Asynchronous media server request processing system for servicing reprioritizing request from a client determines whether or not to delay executing said reprioritizing request
US5913032A (en) * 1994-04-04 1999-06-15 Inprise Corporation System and methods for automatically distributing a particular shared data object through electronic mail
US6281893B1 (en) * 1996-04-04 2001-08-28 Sun Microsystems, Inc. Method and apparatus for providing an object oriented approach to a device independent graphics control system
US5933778A (en) * 1996-06-04 1999-08-03 At&T Wireless Services Inc. Method and apparatus for providing telecommunication services based on a subscriber profile updated by a personal information manager
US6263365B1 (en) * 1996-10-04 2001-07-17 Raindance Communications, Inc. Browser controller
US5990959A (en) * 1996-12-20 1999-11-23 U S West, Inc. Method, system and product for direct rendering of video images to a video data stream
US6101445A (en) * 1996-12-23 2000-08-08 Schlumberger Technology Corporation Apparatus, system and method to transmit and display acquired well data in near real time at a remote location
US6201611B1 (en) * 1997-11-19 2001-03-13 International Business Machines Corporation Providing local printing on a thin client
US6490626B1 (en) * 1997-11-19 2002-12-03 Hewlett Packard Company Browser system
US7346689B1 (en) * 1998-04-20 2008-03-18 Sun Microsystems, Inc. Computer architecture having a stateless human interface device and methods of use
US20020158865A1 (en) * 1998-04-27 2002-10-31 Dye Thomas A. Graphics system and method for rendering independent 2D and 3D objects using pointer based display list video refresh operations
US6330858B1 (en) * 1998-06-05 2001-12-18 Navigation Technologies Corporation Method and system for scrolling a map display in a navigation application
US6163749A (en) * 1998-06-05 2000-12-19 Navigation Technologies Corp. Method and system for scrolling a map display in a navigation application
US6249294B1 (en) * 1998-07-20 2001-06-19 Hewlett-Packard Company 3D graphics in a single logical screen display using multiple computer systems
US6782431B1 (en) * 1998-09-30 2004-08-24 International Business Machines Corporation System and method for dynamic selection of database application code execution on the internet with heterogenous clients
US6181338B1 (en) * 1998-10-05 2001-01-30 International Business Machines Corporation Apparatus and method for managing windows in graphical user interface environment
US6608628B1 (en) * 1998-11-06 2003-08-19 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration (Nasa) Method and apparatus for virtual interactive medical imaging by multiple remotely-located users
US7162528B1 (en) * 1998-11-23 2007-01-09 The United States Of America As Represented By The Secretary Of The Navy Collaborative environment implemented on a distributed computer network and software therefor
US6501487B1 (en) * 1999-02-02 2002-12-31 Casio Computer Co., Ltd. Window display controller and its program storage medium
US6337689B1 (en) * 1999-04-03 2002-01-08 Hewlett-Packard Company Adaptive buffering of computer graphics vertex commands
US6867766B1 (en) * 1999-05-25 2005-03-15 Sony Computer Entertainment Inc. Image generating apparatus, image generating method, entertainment system, and recording medium
US6611264B1 (en) * 1999-06-18 2003-08-26 Interval Research Corporation Deferred scanline conversion architecture
US6762763B1 (en) * 1999-07-01 2004-07-13 Microsoft Corporation Computer system having a distributed texture memory architecture
US6798417B1 (en) * 1999-09-23 2004-09-28 International Business Machines Corporation Just in time graphics dispatching
US6373488B1 (en) * 1999-10-18 2002-04-16 Sierra On-Line Three-dimensional tree-structured data display
US6621918B1 (en) * 1999-11-05 2003-09-16 H Innovation, Inc. Teleradiology systems for rendering and visualizing remotely-located volume data sets
US6742161B1 (en) * 2000-03-07 2004-05-25 Scansoft, Inc. Distributed computing document recognition and processing
US7062527B1 (en) * 2000-04-19 2006-06-13 Silicon Graphics, Inc. Management and scheduling of a distributed rendering method and system
US7783695B1 (en) * 2000-04-19 2010-08-24 Graphics Properties Holdings, Inc. Method and system for distributed rendering
US7092983B1 (en) * 2000-04-19 2006-08-15 Silicon Graphics, Inc. Method and system for secure remote distributed rendering
US6799318B1 (en) * 2000-04-24 2004-09-28 Microsoft Corporation Method having multiple interfaces with distinguished functions and commands for providing services to a device through a transport
US20030046432A1 (en) * 2000-05-26 2003-03-06 Paul Coleman Reducing the amount of graphical line data transmitted via a low bandwidth transport protocol mechanism
US7274368B1 (en) * 2000-07-31 2007-09-25 Silicon Graphics, Inc. System method and computer program product for remote graphics processing
US20020015042A1 (en) * 2000-08-07 2002-02-07 Robotham John S. Visual content browsing using rasterized representations
US20040239681A1 (en) * 2000-08-07 2004-12-02 Zframe, Inc. Visual content browsing using rasterized representations
US6704024B2 (en) * 2000-08-07 2004-03-09 Zframe, Inc. Visual content browsing using rasterized representations
US20040024846A1 (en) * 2000-08-22 2004-02-05 Stephen Randall Method of enabling a wireless information device to access data services
US6554430B2 (en) * 2000-09-07 2003-04-29 Actuality Systems, Inc. Volumetric three-dimensional display system
US20020154214A1 (en) * 2000-11-02 2002-10-24 Laurent Scallie Virtual reality game system using pseudo 3D display driver
US20030189578A1 (en) * 2000-11-17 2003-10-09 Alcorn Byron A. Systems and methods for rendering graphical data
US6621500B1 (en) * 2000-11-17 2003-09-16 Hewlett-Packard Development Company, L.P. Systems and methods for rendering graphical data
US7016949B1 (en) * 2000-11-20 2006-03-21 Colorado Computer Training Institute Network training system with a remote, shared classroom laboratory
US20020141405A1 (en) * 2000-12-22 2002-10-03 Stephane Bouet Transferring objects within an ongoing file transfer operation
US20040073626A1 (en) * 2000-12-22 2004-04-15 Major Harry R. Information browser system and method for a wireless communication device
US20020158873A1 (en) * 2001-01-26 2002-10-31 Todd Williamson Real-time virtual viewpoint in simulated reality environment
US20040003117A1 (en) * 2001-01-26 2004-01-01 Mccoy Bill Method and apparatus for dynamic optimization and network delivery of multimedia content
US20040104935A1 (en) * 2001-01-26 2004-06-03 Todd Williamson Virtual reality immersion system
US20040125130A1 (en) * 2001-02-26 2004-07-01 Andrea Flamini Techniques for embedding custom user interface controls inside internet content
US7170521B2 (en) * 2001-04-03 2007-01-30 Ultravisual Medical Systems Corporation Method of and system for storing, communicating, and displaying image data
US20020173857A1 (en) * 2001-05-07 2002-11-21 Ecritical, Inc. Method and apparatus for measurement, analysis, and optimization of content delivery
US20020191018A1 (en) * 2001-05-31 2002-12-19 International Business Machines Corporation System and method for implementing a graphical user interface across dissimilar platforms yet retaining similar look and feel
US7339939B2 (en) * 2001-06-29 2008-03-04 Nokia Corporation Apparatus, method and system for an object exchange bridge
US20030009343A1 (en) * 2001-07-06 2003-01-09 Snowshore Networks, Inc. System and method for constructing phrases for a media server
US6803912B1 (en) * 2001-08-02 2004-10-12 Mark Resources, Llc Real time three-dimensional multiple display imaging system
US20050062678A1 (en) * 2001-08-02 2005-03-24 Mark Resources, Llc Autostereoscopic display system
US7266616B1 (en) * 2001-08-08 2007-09-04 Pasternak Solutions Llc Method and system for digital rendering over a network
US20030079030A1 (en) * 2001-08-22 2003-04-24 Cocotis Thomas A. Output management system and method for enabling access to private network resources
US20040255005A1 (en) * 2001-08-24 2004-12-16 David Spooner Web server resident on a mobile computing device
US7039723B2 (en) * 2001-08-31 2006-05-02 Hinnovation, Inc. On-line image processing and communication system
US20030055896A1 (en) * 2001-08-31 2003-03-20 Hui Hu On-line image processing and communication system
US20040094632A1 (en) * 2001-12-17 2004-05-20 Alleshouse Bruce N. Xml printer system
US20050110953A1 (en) * 2001-12-26 2005-05-26 Joseph Castaldi Projector device user interface system
US20060007400A1 (en) * 2001-12-26 2006-01-12 Joseph Castaldi System and method for updating an image display device from a remote location
US20050050216A1 (en) * 2002-01-08 2005-03-03 John Stauffer Virtualization of graphics resources
US6847366B2 (en) * 2002-03-01 2005-01-25 Hewlett-Packard Development Company, L.P. System and method utilizing multiple processes to render graphical data
US20030164832A1 (en) * 2002-03-04 2003-09-04 Alcorn Byron A. Graphical display system and method
US6771991B1 (en) * 2002-03-28 2004-08-03 Motorola, Inc. Graphics and variable presence architectures in wireless communication networks, mobile handsets and methods therefor
US20030189574A1 (en) * 2002-04-05 2003-10-09 Ramsey Paul R. Acceleration of graphics for remote display using redirection of rendering and compression
US7188347B2 (en) * 2002-05-24 2007-03-06 Nokia Corporation Method, apparatus and system for connecting system-level functionality of domestic OS of a mobile phone to any application operating system
US20030234790A1 (en) * 2002-06-24 2003-12-25 Hochmuth Roland M. System and method for grabbing frames of graphical data
US20040001095A1 (en) * 2002-07-01 2004-01-01 Todd Marques Method and apparatus for universal device management
US20040019628A1 (en) * 2002-07-09 2004-01-29 Puri Anish N. System for remotely rendering content for output by a printer
US20050166214A1 (en) * 2002-07-29 2005-07-28 Silicon Graphics, Inc. System and method for managing graphics applications
US20040031052A1 (en) * 2002-08-12 2004-02-12 Liberate Technologies Information platform
US20040135974A1 (en) * 2002-10-18 2004-07-15 Favalora Gregg E. System and architecture for displaying three dimensional data
US20040080533A1 (en) * 2002-10-23 2004-04-29 Sun Microsystems, Inc. Accessing rendered graphics over the internet
US7136042B2 (en) * 2002-10-29 2006-11-14 Microsoft Corporation Display controller permitting connection of multiple displays with a single video cable
US20040137921A1 (en) * 2002-11-08 2004-07-15 Vinod Valloppillil Asynchronous messaging based system for publishing and accessing content and accessing applications on a network with mobile devices
US20040100651A1 (en) * 2002-11-22 2004-05-27 Xerox Corporation Printing to a client site from an application running on a remote server
US20040226048A1 (en) * 2003-02-05 2004-11-11 Israel Alpert System and method for assembling and distributing multi-media output
US20040221004A1 (en) * 2003-04-30 2004-11-04 Alexander Chalfin System, method, and computer program product for applying different transport mechanisms for user interface and image portions of a remotely rendered image
US20050021656A1 (en) * 2003-07-21 2005-01-27 Callegari Andres C. System and method for network transmission of graphical data through a distributed application
US20050022139A1 (en) * 2003-07-25 2005-01-27 David Gettman Information display
US20070033634A1 (en) * 2003-08-29 2007-02-08 Koninklijke Philips Electronics N.V. User-profile controls rendering of content information
US7475419B1 (en) * 2003-09-19 2009-01-06 Hewlett-Packard Development Company, L.P. System and method for controlling access in an interactive grid environment
US20050080871A1 (en) * 2003-10-09 2005-04-14 International Business Machines Corporation Image distribution for dynamic server pages
US20050081161A1 (en) * 2003-10-10 2005-04-14 Macinnes Cathryn Three-dimensional interior design system
US20050080929A1 (en) * 2003-10-13 2005-04-14 Lg Electronics Inc. Server system for performing communication over wireless network
US20050108364A1 (en) * 2003-11-14 2005-05-19 Callaghan David M. Systems and methods that utilize scalable vector graphics to provide web-based visualization of a device
US20050138193A1 (en) * 2003-12-19 2005-06-23 Microsoft Corporation Routing of resource information in a network
US20050172009A1 (en) * 2004-01-29 2005-08-04 Lg Electronics Inc. Server system for performing communication over wireless network
US20050229118A1 (en) * 2004-03-31 2005-10-13 Fuji Xerox Co., Ltd. Systems and methods for browsing multimedia content on small mobile devices
US20050243094A1 (en) * 2004-05-03 2005-11-03 Microsoft Corporation Systems and methods for providing an enhanced graphics pipeline
US7580986B2 (en) * 2004-05-17 2009-08-25 Pixar Dependency graph-based aggregate asset status reporting methods and apparatus
US20060082583A1 (en) * 2004-10-14 2006-04-20 Microsoft Corporation Remote client graphics rendering

Cited By (101)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7930004B2 (en) * 2004-09-08 2011-04-19 Belkin International, Inc. Holder, electrical supply, and RF transmitter unit for electronic devices
US20080051160A1 (en) * 2004-09-08 2008-02-28 Seil Oliver D Holder, Electrical Supply, and RF Transmitter Unit for Electronic Devices
US8004516B2 (en) * 2004-12-14 2011-08-23 Ziosoft, Inc. Image processing system for volume rendering
US20060155800A1 (en) * 2004-12-14 2006-07-13 Ziosoft, Inc. Image processing system for volume rendering
US20060129632A1 (en) * 2004-12-14 2006-06-15 Blume Leo R Remote content rendering for mobile viewing
US20070061733A1 (en) * 2005-08-30 2007-03-15 Microsoft Corporation Pluggable window manager architecture using a scene graph system
US7716685B2 (en) * 2005-08-30 2010-05-11 Microsoft Corporation Pluggable window manager architecture using a scene graph system
US7647129B1 (en) * 2005-11-23 2010-01-12 Griffin Technology, Inc. Digital music player accessory interface
US20070120865A1 (en) * 2005-11-29 2007-05-31 Ng Kam L Applying rendering context in a multi-threaded environment
US8624892B2 (en) 2006-03-07 2014-01-07 Rpx Corporation Integration of graphical application content into the graphical scene of another application
US20110141113A1 (en) * 2006-03-07 2011-06-16 Graphics Properties Holdings, Inc. Integration of graphical application content into the graphical scene of another application
US8314804B2 (en) * 2006-03-07 2012-11-20 Graphics Properties Holdings, Inc. Integration of graphical application content into the graphical scene of another application
US20070236502A1 (en) * 2006-04-07 2007-10-11 Huang Paul C Generic visualization system
US20080007559A1 (en) * 2006-06-30 2008-01-10 Nokia Corporation Apparatus, method and a computer program product for providing a unified graphics pipeline for stereoscopic rendering
US8284204B2 (en) * 2006-06-30 2012-10-09 Nokia Corporation Apparatus, method and a computer program product for providing a unified graphics pipeline for stereoscopic rendering
US7791559B2 (en) * 2006-09-14 2010-09-07 Citrix Systems, Inc. System and method for multiple display support in remote access software
US20080068289A1 (en) * 2006-09-14 2008-03-20 Citrix Systems, Inc. System and method for multiple display support in remote access software
US20080068290A1 (en) * 2006-09-14 2008-03-20 Shadi Muklashy Systems and methods for multiple display support in remote access software
US8054241B2 (en) 2006-09-14 2011-11-08 Citrix Systems, Inc. Systems and methods for multiple display support in remote access software
US8471782B2 (en) 2006-09-14 2013-06-25 Citrix Systems, Inc. Systems and methods for multiple display support in remote access software
US8793301B2 (en) * 2006-11-22 2014-07-29 Agfa Healthcare Method and system for dynamic image processing
US20090138544A1 (en) * 2006-11-22 2009-05-28 Rainer Wegenkittl Method and System for Dynamic Image Processing
US20080194930A1 (en) * 2007-02-09 2008-08-14 Harris Melvyn L Infrared-visible needle
US20100060652A1 (en) * 2007-03-28 2010-03-11 Tomas Karlsson Graphics rendering system
WO2008118065A1 (en) * 2007-03-28 2008-10-02 Agency 9 Ab Graphics rendering system
US20090002368A1 (en) * 2007-06-26 2009-01-01 Nokia Corporation Method, apparatus and a computer program product for utilizing a graphical processing unit to provide depth information for autostereoscopic display
US10038739B2 (en) 2007-08-27 2018-07-31 PME IP Pty Ltd Fast file server methods and systems
US10686868B2 (en) 2007-08-27 2020-06-16 PME IP Pty Ltd Fast file server methods and systems
US11075978B2 (en) 2007-08-27 2021-07-27 PME IP Pty Ltd Fast file server methods and systems
US11516282B2 (en) 2007-08-27 2022-11-29 PME IP Pty Ltd Fast file server methods and systems
US11902357B2 (en) 2007-08-27 2024-02-13 PME IP Pty Ltd Fast file server methods and systems
US9860300B2 (en) 2007-08-27 2018-01-02 PME IP Pty Ltd Fast file server methods and systems
US9531789B2 (en) 2007-08-27 2016-12-27 PME IP Pty Ltd Fast file server methods and systems
US9167027B2 (en) 2007-08-27 2015-10-20 PME IP Pty Ltd Fast file server methods and systems
US8775510B2 (en) 2007-08-27 2014-07-08 Pme Ip Australia Pty Ltd Fast file server methods and system
US9904969B1 (en) 2007-11-23 2018-02-27 PME IP Pty Ltd Multi-user multi-GPU render server apparatus and methods
US11328381B2 (en) 2007-11-23 2022-05-10 PME IP Pty Ltd Multi-user multi-GPU render server apparatus and methods
US9019287B2 (en) 2007-11-23 2015-04-28 Pme Ip Australia Pty Ltd Client-server visualization system with hybrid data processing
US12062111B2 (en) 2007-11-23 2024-08-13 PME IP Pty Ltd Multi-user multi-GPU render server apparatus and methods
US11900501B2 (en) 2007-11-23 2024-02-13 PME IP Pty Ltd Multi-user multi-GPU render server apparatus and methods
US9355616B2 (en) 2007-11-23 2016-05-31 PME IP Pty Ltd Multi-user multi-GPU render server apparatus and methods
US9454813B2 (en) 2007-11-23 2016-09-27 PME IP Pty Ltd Image segmentation assignment of a volume by comparing and correlating slice histograms with an anatomic atlas of average histograms
US20090201303A1 (en) * 2007-11-23 2009-08-13 Mercury Computer Systems, Inc. Multi-user multi-gpu render server apparatus and methods
US11900608B2 (en) 2007-11-23 2024-02-13 PME IP Pty Ltd Automatic image segmentation methods and analysis
US11640809B2 (en) 2007-11-23 2023-05-02 PME IP Pty Ltd Client-server visualization system with hybrid data processing
US9595242B1 (en) 2007-11-23 2017-03-14 PME IP Pty Ltd Client-server visualization system with hybrid data processing
US11514572B2 (en) 2007-11-23 2022-11-29 PME IP Pty Ltd Automatic image segmentation methods and analysis
US9728165B1 (en) 2007-11-23 2017-08-08 PME IP Pty Ltd Multi-user/multi-GPU render server apparatus and methods
US11315210B2 (en) 2007-11-23 2022-04-26 PME IP Pty Ltd Multi-user multi-GPU render server apparatus and methods
US11244650B2 (en) 2007-11-23 2022-02-08 PME IP Pty Ltd Client-server visualization system with hybrid data processing
US10825126B2 (en) 2007-11-23 2020-11-03 PME IP Pty Ltd Multi-user multi-GPU render server apparatus and methods
US8319781B2 (en) 2007-11-23 2012-11-27 Pme Ip Australia Pty Ltd Multi-user multi-GPU render server apparatus and methods
US9984460B2 (en) 2007-11-23 2018-05-29 PME IP Pty Ltd Automatic image segmentation methods and analysis
US10762872B2 (en) 2007-11-23 2020-09-01 PME IP Pty Ltd Client-server visualization system with hybrid data processing
US10706538B2 (en) 2007-11-23 2020-07-07 PME IP Pty Ltd Automatic image segmentation methods and analysis
US10043482B2 (en) 2007-11-23 2018-08-07 PME IP Pty Ltd Client-server visualization system with hybrid data processing
WO2011065929A1 (en) * 2007-11-23 2011-06-03 Mercury Computer Systems, Inc. Multi-user multi-gpu render server apparatus and methods
US10311541B2 (en) 2007-11-23 2019-06-04 PME IP Pty Ltd Multi-user multi-GPU render server apparatus and methods
US10614543B2 (en) 2007-11-23 2020-04-07 PME IP Pty Ltd Multi-user multi-GPU render server apparatus and methods
US10430914B2 (en) 2007-11-23 2019-10-01 PME IP Pty Ltd Multi-user multi-GPU render server apparatus and methods
US10380970B2 (en) 2007-11-23 2019-08-13 PME IP Pty Ltd Client-server visualization system with hybrid data processing
US8749616B2 (en) * 2008-05-19 2014-06-10 Samsung Electronics Co., Ltd. Apparatus and method for creating and displaying media file
US20090284583A1 (en) * 2008-05-19 2009-11-19 Samsung Electronics Co., Ltd. Apparatus and method for creating and displaying media file
US20100220098A1 (en) * 2008-10-26 2010-09-02 Zebra Imaging, Inc. Converting 3D Data to Hogel Data
US8605081B2 (en) * 2008-10-26 2013-12-10 Zebra Imaging, Inc. Converting 3D data to hogel data
US20110227934A1 (en) * 2010-03-19 2011-09-22 Microsoft Corporation Architecture for Volume Rendering
US9058224B2 (en) * 2011-06-03 2015-06-16 Apple Inc. Serialization of asynchronous command streams
US20120306899A1 (en) * 2011-06-03 2012-12-06 Jeremy Sandmel Serialization of Asynchronous Command Streams
US11244495B2 (en) 2013-03-15 2022-02-08 PME IP Pty Ltd Method and system for rule based display of sets of images using image content derived parameters
US11763516B2 (en) 2013-03-15 2023-09-19 PME IP Pty Ltd Method and system for rule based display of sets of images using image content derived parameters
US10373368B2 (en) 2013-03-15 2019-08-06 PME IP Pty Ltd Method and system for rule-based display of sets of images
US10820877B2 (en) 2013-03-15 2020-11-03 PME IP Pty Ltd Apparatus and system for rule based visualization of digital breast tomosynthesis and other volumetric images
US9898855B2 (en) 2013-03-15 2018-02-20 PME IP Pty Ltd Method and system for rule based display of sets of images
US10832467B2 (en) 2013-03-15 2020-11-10 PME IP Pty Ltd Method and system for rule based display of sets of images using image content derived parameters
US11916794B2 (en) 2024-02-27 PME IP Pty Ltd Method and system for transferring data to improve responsiveness when sending large data sets
US10540803B2 (en) 2013-03-15 2020-01-21 PME IP Pty Ltd Method and system for rule-based display of sets of images
US10764190B2 (en) 2013-03-15 2020-09-01 PME IP Pty Ltd Method and system for transferring data to improve responsiveness when sending large data sets
US11129583B2 (en) 2013-03-15 2021-09-28 PME IP Pty Ltd Apparatus and system for rule based visualization of digital breast tomosynthesis and other volumetric images
US11129578B2 (en) 2013-03-15 2021-09-28 PME IP Pty Ltd Method and system for rule based display of sets of images
US11183292B2 (en) 2013-03-15 2021-11-23 PME IP Pty Ltd Method and system for rule-based anonymized display and data export
US10070839B2 (en) 2013-03-15 2018-09-11 PME IP Pty Ltd Apparatus and system for rule based visualization of digital breast tomosynthesis and other volumetric images
US9509802B1 (en) 2013-03-15 2016-11-29 PME IP Pty Ltd Method and system for transferring data to improve responsiveness when sending large data sets
US11296989B2 (en) 2013-03-15 2022-04-05 PME IP Pty Ltd Method and system for transferring data to improve responsiveness when sending large data sets
US9749245B2 (en) 2013-03-15 2017-08-29 PME IP Pty Ltd Method and system for transferring data to improve responsiveness when sending large data sets
US8976190B1 (en) 2013-03-15 2015-03-10 Pme Ip Australia Pty Ltd Method and system for rule based display of sets of images
US9524577B1 (en) 2013-03-15 2016-12-20 PME IP Pty Ltd Method and system for rule based display of sets of images
US10631812B2 (en) 2013-03-15 2020-04-28 PME IP Pty Ltd Apparatus and system for rule based visualization of digital breast tomosynthesis and other volumetric images
US11810660B2 (en) 2013-03-15 2023-11-07 PME IP Pty Ltd Method and system for rule-based anonymized display and data export
US10762687B2 (en) 2013-03-15 2020-09-01 PME IP Pty Ltd Method and system for rule based display of sets of images
US10320684B2 (en) 2013-03-15 2019-06-11 PME IP Pty Ltd Method and system for transferring data to improve responsiveness when sending large data sets
US11666298B2 (en) 2013-03-15 2023-06-06 PME IP Pty Ltd Apparatus and system for rule based visualization of digital breast tomosynthesis and other volumetric images
US11701064B2 (en) 2013-03-15 2023-07-18 PME IP Pty Ltd Method and system for rule based display of sets of images
US11620773B2 (en) 2015-07-28 2023-04-04 PME IP Pty Ltd Apparatus and method for visualizing digital breast tomosynthesis and other volumetric images
US10395398B2 (en) 2015-07-28 2019-08-27 PME IP Pty Ltd Apparatus and method for visualizing digital breast tomosynthesis and other volumetric images
US11017568B2 (en) 2015-07-28 2021-05-25 PME IP Pty Ltd Apparatus and method for visualizing digital breast tomosynthesis and other volumetric images
US9984478B2 (en) 2015-07-28 2018-05-29 PME IP Pty Ltd Apparatus and method for visualizing digital breast tomosynthesis and other volumetric images
US11599672B2 (en) 2015-07-31 2023-03-07 PME IP Pty Ltd Method and apparatus for anonymized display and data export
US11972024B2 (en) 2015-07-31 2024-04-30 PME IP Pty Ltd Method and apparatus for anonymized display and data export
US20170199763A1 (en) * 2016-01-08 2017-07-13 Electronics And Telecommunications Research Institute Method and apparatus for visualizing scheduling result in multicore system
US11669969B2 (en) 2017-09-24 2023-06-06 PME IP Pty Ltd Method and system for rule based display of sets of images using image content derived parameters
US10909679B2 (en) 2017-09-24 2021-02-02 PME IP Pty Ltd Method and system for rule based display of sets of images using image content derived parameters

Also Published As

Publication number Publication date
US8042094B2 (en) 2011-10-18
US20060010454A1 (en) 2006-01-12
US20120050301A1 (en) 2012-03-01
WO2006017198A9 (en) 2006-05-11
TW200622930A (en) 2006-07-01
WO2006014480A3 (en) 2006-05-04
WO2006017198A3 (en) 2006-07-06
US20120050300A1 (en) 2012-03-01
WO2006014480A2 (en) 2006-02-09
TW200606693A (en) 2006-02-16
WO2006017198A2 (en) 2006-02-16

Similar Documents

Publication Publication Date Title
US20060028479A1 (en) Architecture for rendering graphics on output devices over diverse connections
US7899864B2 (en) Multi-user terminal services accelerator
CN112085658B (en) Apparatus and method for non-uniform frame buffer rasterization
US8112513B2 (en) Multi-user display proxy server
US6917362B2 (en) System and method for managing context data in a single logical screen graphics environment
US7076735B2 (en) System and method for network transmission of graphical data through a distributed application
EP2068279B1 (en) System and method for using a secondary processor in a graphics system
EP2962191B1 (en) System and method for virtual displays
JP4901261B2 (en) Efficient remote display system with high-quality user interface
US10776997B2 (en) Rendering an image from computer graphics using two rendering computing devices
US20100289804A1 (en) System, mechanism, and apparatus for a customizable and extensible distributed rendering api
US20070070067A1 (en) Scene splitting for perspective presentations
JP2011129153A (en) System and method for unified composition engine in graphics processing system
CN114741081B (en) Cross-operation environment display output sharing method based on heterogeneous cache access
US10733689B2 (en) Data processing
Jeong et al. High-performance scalable graphics architecture for high-resolution displays
Bundulis et al. Conclusions from the evaluation of virtual machine based high resolution display wall system
Venkataraman Volume Rendering of Large Data for Scalable Displays Using Photonic Switching
Argue Advanced multi-display configuration and connectivity
JP2005181637A (en) Synchronous display system, client, server, and synchronous display method
Ritger An Overview of the NVIDIA UNIX Graphics Driver

Legal Events

Date Code Title Description
AS Assignment

Owner name: ACTUALITY SYSTEMS, INC., MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:NAPOLI, JOSHUA;CHUN, WON-SUK;PURTELL II, THOMAS J.;AND OTHERS;REEL/FRAME:016548/0097

Effective date: 20050719

AS Assignment

Owner name: STRAGENT, LLC, TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ACTUALITY SYSTEMS, INC.;REEL/FRAME:022176/0935

Effective date: 20090129

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION