US20080215972A1 - Mapping user emotional state to avatar in a virtual world - Google Patents
- Publication number
- US20080215972A1 (application Ser. No. 11/682,292)
- Authority
- US
- United States
- Prior art keywords
- user
- avatar
- virtual world
- virtual
- processor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- A63F13/31—Communication aspects specific to video games, e.g. between several handheld game devices at close range
- H04L67/104—Peer-to-peer [P2P] networks
- A63F13/335—Interconnection arrangements between game servers and game devices using wide area network [WAN] connections using Internet
- A63F13/34—Interconnection arrangements between game servers and game devices using peer-to-peer connections
- A63F13/35—Details of game servers
- A63F13/75—Enforcing rules, e.g. detecting foul play or generating lists of cheating players
- A63F13/77—Game security or game management aspects involving data related to game devices or game servers, e.g. configuration data, software version or amount of memory
- H04L12/1854—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast with non-centralised forwarding system, e.g. chaincast
- H04L65/1101—Session protocols for real-time applications in data packet communication
- A63F13/5375—Controlling the output signals based on the game progress using indicators for graphically or textually suggesting an action, e.g. by displaying an arrow indicating a turn in a driving game
- A63F13/792—Game security or game management aspects involving player-related data for payment purposes, e.g. monthly subscriptions
- A63F2300/407—Data transfer via internet
- A63F2300/408—Peer to peer connection
- A63F2300/50—Features of games characterized by details of game servers
- A63F2300/534—Details of basic data processing for network load management, e.g. bandwidth optimization, latency reduction
- A63F2300/8082—Virtual reality
Definitions
- This invention is related to interactive computer entertainment and more specifically to communication among users of a virtual world.
- a virtual world is a simulated environment in which users may interact with each other via one or more computer processors. Users may appear on a video screen in the form of representations referred to as avatars. The degree of interaction between the avatars and the simulated environment is implemented by one or more computer applications that govern such interactions as simulated physics, exchange of information between users, and the like. The nature of interactions among users of the virtual world is often limited by the constraints of the system implementing the virtual world.
- FIG. 1A is a screen shot illustrating an example of a world map representing a virtual world that may be used in conjunction with embodiments of the present invention.
- FIG. 1B is a screen shot illustrating an example of a public space in a virtual world that may be used in conjunction with embodiments of the present invention.
- FIG. 1C is a screen shot illustrating an example of a private space in a virtual world that may be used in conjunction with embodiments of the present invention.
- FIG. 1D is a screen shot illustrating an example of a virtual communication device according to an embodiment of the present invention.
- FIG. 1E is a schematic diagram of a virtual world system according to an embodiment of the present invention.
- FIG. 1F is a functional block diagram showing one implementation of a multimedia processing apparatus by which a user may perceive and interact with a virtual world according to an embodiment of the present invention.
- FIG. 2A is a functional block diagram showing one implementation of the multimedia processing apparatus that may be used in conjunction with embodiments of the invention.
- FIG. 2B shows an implementation of a multimedia processing system that may be used in conjunction with embodiments of the invention.
- FIGS. 2C-2D illustrate an image capture device including an array of microphones for use with embodiments of the invention.
- FIG. 2E is a block diagram illustrating examples of call routing between real and virtual communication devices according to an embodiment of the present invention.
- FIG. 2F diagrammatically illustrates an example of communication between real and virtual communication devices in accordance with an embodiment of the present invention.
- FIG. 3 is a block diagram illustrating a video game apparatus that may be used to interface with a virtual world according to an embodiment of the present invention.
- FIG. 4 is a block diagram of a cell processor implementation of a video game apparatus according to an embodiment of the present invention.
- users may interact with a virtual world.
- virtual world means a representation of a real or fictitious environment having rules of interaction simulated by means of one or more processors that a real user may perceive via one or more display devices and/or may interact with via one or more user interfaces.
- user interface refers to a real device by which a user may send inputs to or receive outputs from the virtual world.
- the virtual world may be simulated by one or more processor modules. Multiple processor modules may be linked together via a network.
- the user may interact with the virtual world via a user interface device that can communicate with the processor modules and other user interface devices via a network.
- Certain aspects of the virtual world may be presented to the user in graphical form on a graphical display such as a computer monitor, television monitor or similar display. Certain other aspects of the virtual world may be presented to the user in audible form on a speaker, which may be associated with the graphical display.
- the virtual world may comprise a simulated public space and one or more simulated private spaces.
- public and private spaces may be presented to the user via a graphic display that presents a schematic representation or map of the virtual world.
- a world map 10 may indicate a “home” location 11 .
- the home location 11 may be a private space within the virtual world that is exclusive to a particular user. Other users may “visit” the home location 11 only at the invitation of the user associated with that location.
- the world map 10 may also show various other locations 12 that the user may visit, e.g., by selecting them with a cursor or similar graphical user interface.
- locations may be sponsored by vendors and may be represented on the map by their respective corporate logos or other well-recognized symbols. Such locations may be visited by a user of the virtual world.
- the virtual world may or may not have a fixed amount of virtual “real estate”. In preferred embodiments, the amount of virtual real estate is not fixed.
- the virtual world may have multiple public spaces referred to herein as “lobbies”. Each lobby may have associated with it a separate chat channel so that users in the lobby may interact with one another. Each lobby may have the appearance of a lobby for a public building such as a hotel, office building, apartment building, theater or other public building.
- FIG. 1B depicts a screen shot of such a lobby.
- the lobby may contain items with which users may interact. Examples of such items include games.
- portions of the virtual world may be presented graphically to the user in three-dimensional (3D) form.
- the term three-dimensional (3D) form refers to a representation having the dimensions of length, width and depth (or at least the illusion of depth).
- the lobby may contain “screens” 13 , which are areas in spaces that can be used to show photos or canned or streaming video.
- users may be represented by avatars 14 .
- Each avatar within the virtual world may be uniquely associated with a different user.
- the name or pseudonym of a user may be displayed next to the avatar so that users may readily identify each other.
- a particular user's interactions with the virtual world may be represented by one or more corresponding actions of the avatar.
- Different users may interact with each other in the public space via their avatars.
- An avatar representing a user could have an appearance similar to that of a person, an animal or an object.
- An avatar in the form of a person may have the same gender as the user or a different gender.
- the avatar may be shown on the display so that the user can see the avatar along with other objects in the virtual world. Alternatively, the display may show the world from the point of view of the avatar without showing the avatar itself.
- the user's (or avatar's) perspective on the virtual world may be thought of as being the view of a virtual camera.
- a virtual camera refers to a point of view within the virtual world that may be used for rendering two-dimensional images of a 3D scene within the virtual world.
- Users may interact with each other through their avatars by means of the chat channels associated with each lobby.
- Users may enter text for chat with other users via their user interface.
- the text may then appear over or next to the user's avatar, e.g., in the form of comic-book style dialogue bubbles, sometimes referred to as chat bubbles.
- chat may be facilitated by the use of a canned phrase chat system sometimes referred to as quick chat.
- in quick chat, a user may select one or more chat phrases from a menu.
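- The chat mechanism described above lends itself to a simple sketch. The following Python snippet is illustrative only; the names Lobby, Avatar and quick_chat are assumptions, not part of the patent. It shows canned quick-chat phrases selected by index and broadcast as chat bubbles to every avatar in the same lobby.

```python
# Illustrative sketch only: quick-chat phrases chosen from a menu and shown as
# chat bubbles to everyone in the same lobby's chat channel.

QUICK_CHAT_PHRASES = ["Hello!", "Good game.", "Follow me.", "Thanks!", "Goodbye."]

class Avatar:
    def __init__(self, name):
        self.name = name

    def show_chat_bubble(self, speaker, text):
        # Stand-in for rendering a comic-book style dialogue bubble next to the speaker.
        print(f"[{speaker}] {text}")

class Lobby:
    def __init__(self):
        self.members = []            # avatars currently in this lobby

    def broadcast(self, sender, text):
        for avatar in self.members:  # every user in the lobby sees the bubble
            avatar.show_chat_bubble(sender.name, text)

def quick_chat(lobby, sender, phrase_index):
    """Send a canned phrase selected from the quick-chat menu."""
    lobby.broadcast(sender, QUICK_CHAT_PHRASES[phrase_index])

lobby = Lobby()
alice, bob = Avatar("Alice"), Avatar("Bob")
lobby.members.extend([alice, bob])
quick_chat(lobby, alice, 0)          # prints "[Alice] Hello!" for each member
```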
- the public spaces are public in the sense that they are not uniquely associated with any particular user or group of users and no user or group of users can exclude another user from the public space.
- Each private space by contrast, is associated with a particular user from among a plurality of users.
- a private space is private in the sense that the particular user associated with the private space may restrict access to the private space by other users.
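- A minimal sketch of the access restriction just described, under the assumption that entry is controlled by an owner-maintained invitation list; the PrivateSpace class and its method names are illustrative, not from the patent.

```python
# Illustrative sketch: invitation-based access control for a private space.
class PrivateSpace:
    def __init__(self, owner):
        self.owner = owner
        self.invited = set()         # users the owner has invited

    def invite(self, user):
        self.invited.add(user)

    def revoke(self, user):
        self.invited.discard(user)

    def may_enter(self, user):
        # The owner always has access; other users need an invitation.
        return user == self.owner or user in self.invited

home = PrivateSpace(owner="alice")
home.invite("bob")
print(home.may_enter("bob"))    # True
print(home.may_enter("carol"))  # False
```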
- the private spaces may take on the appearance of familiar private real estate. For example, as seen in FIG. 1C a private space may be configured to resemble an apartment or private home.
- Virtual items may be included within the private space. Examples of virtual items include, but are not limited to, furniture 15 , decorations 16 and virtual communication devices 17 , such as a virtual radio or video screen.
- users of the virtual world may communicate by means of virtual communication devices.
- the term virtual communication device generally refers to a virtual world simulation of a real world device using assets of the system that generates the virtual world.
- a virtual communication device 18 may be presented on a display in a form that facilitates operation of the device by the user.
- the virtual communication device has the appearance of a portable game console, e.g., a Sony Playstation Portable (PSP).
- Buttons on a real controller that the user uses to interact with the virtual world may be mapped to corresponding buttons 19 or other controls on the virtual communication device to facilitate interaction between the user and the virtual communication device.
- a virtual communication device may have associated with it a position within the virtual world that may be fixed or movable.
- the communication device may be simulated by simulating an interface for the simulated communication device in the virtual world and presenting the simulated interface to a user for interaction therewith.
- the virtual device may have a form or appearance in the virtual world by which it can be recognized by a user. This form or appearance may be configured to mimic that of a corresponding real world device in a way that facilitates user interaction.
- a virtual phone may be shown as having buttons which the user may operate by using the controller.
- the virtual phone may further be shown as having a speaker, a mouthpiece and perhaps a graphic display screen.
- the simulated communication device may be a simulated hand-held communication device, such as a telephone, mobile telephone (e.g., cell phone or cordless phone), voice-over-internet-protocol (VoIP) phone, portable text message device, portable email device, portable game device, two-way radio or other hand-held device.
- a virtual communication device may be simulated in the virtual world and communication may take place between the simulated communication device and a real communication device.
- the real communication device may be a real hand-held communication device, such as a telephone, mobile telephone (e.g., cell phone or cordless phone), voice-over-internet protocol (VoIP) phone, portable text message device, portable email device, portable game device, two-way radio or other hand-held device.
- the real communication device is configured to communicate with other real communication devices via one or more communication channels that are independent of the virtual world.
- communication channel independent of the virtual world means a channel of communication that does not require the existence of the virtual world in order for communication to take place over that channel.
- a virtual telephone may be used to make a telephone call to a real cellular phone (or vice versa) via communication assets provided by the virtual world.
- the real cellular phone could still make calls to other real cellular phones or telephones even if the virtual world did not exist.
- the real phone may produce a distinctive ringtone when receiving calls from a virtual phone.
- the simulated and real communication devices may communicate with each other by means of text messages and/or video images.
- FIG. 1E is a block diagram illustrating an example of a system 20 that may be used to simulate a virtual world.
- the system 20 includes simulation servers 22 and view servers 24 .
- Each simulation server 22 may include one or more processor modules that execute coded instructions that simulate some part of the virtual world.
- each simulation server may include one or more multi-core processors, e.g., dual-core, quad-core or Cell processors.
- the configuration shown in FIG. 1E may be arbitrarily extended to any number of servers.
- the numbers of simulation servers 22 and view servers 24 may both be scaled.
- one simulation server 22 may accommodate many view servers 24, or many simulation servers 22 may accommodate one view server 24.
- Adding more simulation servers 22 may allow for a bigger and/or better simulation of the virtual world. Adding more view servers 24 allows the system 20 to handle more users. Of course, the system 20 may accommodate both a bigger and better simulation and more users by adding more of both simulation servers 22 and view servers 24.
- Theoretically the number of simulation servers 22 may be infinitely scalable. However, given a finite level of network bandwidth, the number of view servers 24 may reasonably be expected to reach a finite limit after a certain number of users due to computation and network bandwidth limitations.
- Cell processors are described in detail, e.g., in Cell Broadband Engine Architecture, copyright International Business Machines Corporation, Sony Computer Entertainment Incorporated, Toshiba Corporation, Aug. 8, 2005, a copy of which may be downloaded at http://cell.scei.co.jp/, the entire contents of which are incorporated herein by reference.
- a typical Cell processor has a power processor unit (PPU) and up to 8 additional processors referred to as synergistic processing units (SPU). Each SPU is typically a single chip or part of a single chip containing a main processor and a co-processor.
- All of the SPUs and the PPU can access a main memory, e.g., through a memory flow controller (MFC).
- the SPUs can perform parallel processing of operations in conjunction with a program running on the main processor.
- the SPUs have small local memories (typically about 256 kilobytes) that must be managed by software; code and data must be manually transferred to/from the local SPU memories. For high performance, this code and data must be managed from SPU software (PPU software involvement must be minimized).
- There are many techniques for managing code and data from the SPU. Examples of such techniques are described, e.g., in U.S. patent application Ser. No. 11/238,077 to John P.
- the simulation servers 22 may communicate with each other and with the view servers 24 via high speed data transfer links 26 .
- the data transfer links may be 10 gigabit per second Ethernet connections.
- the simulation servers 22 may be either remotely located with respect to each other or they may be located proximate each other. To optimize data transfer it may be desirable to locate the simulation servers 22 in fairly close physical proximity, e.g., within the same room or on the same server rack.
- the view servers 24 receive simulation data from the simulation servers 22 and send view data to remotely distributed client devices 28 over a wide area network 30 , such as the Internet or other wide area network.
- the client devices 28 may be any suitable device that can communicate over the network 30 . Communication over the network 30 may be slower than over the fast data links 26 .
- the client devices 28 may be video game console devices, such as the Sony PlayStation 3.
- the client devices 28 may be any computer device from handheld to workstation, etc.
- a handheld video game device such as a PlayStation Portable from Sony Computer Entertainment of Tokyo, Japan is one example among others of a handheld device that may be used as a client device 28 in embodiments of the present invention.
- the client devices 28 may send the view servers 24 instructions relating to their desired interaction with other clients' avatars and with the simulated environment. For example, a client user may wish to move his or her avatar to a different portion of the simulated environment.
- Each client device 28 sends instructions to one of the view servers 24 . These instructions are relayed by the view servers to the simulation servers that perform the necessary computations to simulate the interactions.
- Other devices 29 may also communicate with each other over the network 30 .
- Examples of such other devices include telephones, cellular phones, voice over internet protocol (VoIP) phones, personal computers, portable web browsers, portable email devices, text messaging devices, portable game devices and the like. Communication between such other devices 29 may be independent of the simulation servers 22 and view servers 24 that generate the virtual world. Although the other devices 29 are not considered part of the system 20 , they may interact with it via the network 30 .
- the users of the client devices 28 are often interested in things around them.
- the view servers 24 make sure that each client 28 receives relevant data about its surroundings in the proper order.
- the view servers 24 determine what the client needs based on its avatar's location, orientation, motion, etc.
- each view server may generate the code and/or data that the client devices use to present views of the public spaces or private spaces.
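- One plausible way a view server might decide what a given client needs is an interest-management filter keyed to the avatar's position. The sketch below is an assumption for illustration and not the patent's algorithm: entities within a radius of the avatar are selected and sent nearest-first.

```python
# Illustrative interest-management sketch (not the patent's algorithm):
# select entity updates near the client's avatar and order them nearest-first.
import math

def relevant_updates(avatar_pos, entities, max_radius=100.0):
    """Return entities within max_radius of the avatar, sorted by distance."""
    def distance(entity):
        return math.dist(avatar_pos, entity["pos"])
    nearby = [e for e in entities if distance(e) <= max_radius]
    return sorted(nearby, key=distance)

entities = [
    {"id": 1, "pos": (5.0, 0.0, 2.0)},
    {"id": 2, "pos": (400.0, 0.0, 0.0)},   # too far away to matter right now
    {"id": 3, "pos": (1.0, 1.0, 1.0)},
]
print([e["id"] for e in relevant_updates((0.0, 0.0, 0.0), entities)])  # [3, 1]
```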
- Embodiments of the invention may make use of Peerlib to traverse network address translators (NATs) to establish peer-to-peer connections among users in the same public space.
- NAT traversal is described e.g., in U.S. patent application Ser. No.
- FIG. 1F shows one implementation of a multimedia processing system 100 that may be used as a client device 28 and a user interface with the virtual world generated by the system 20 .
- the processing system 100 may include a composite apparatus capable of processing a plurality of contents, such as still images, moving images, music, broadcasts, and games, spread over a plurality of media.
- the processing of a plurality of contents includes presentation, recording, and other related tasks performed by the multimedia processing system 100 .
- the multimedia processing system 100 includes a multimedia processing apparatus 102 , a display 104 (e.g., a monitor or television), and a controller 114 . Buttons on the controller 114 may be mapped to corresponding buttons 19 on the virtual communication device 18 shown in FIG. 1D and described above.
- the multimedia processing apparatus 102 may receive multimedia contents from various media sources, such as broadcast media, the Internet (or other network) media, an optical disk 110 , and a memory card 112 .
- Contents from the broadcast media may be received through a broadcast data channel 106
- contents from the Internet media can be received through a network data channel 108 .
- the broadcast and network data channels 106 , 108 may be either wireless or wired channels.
- the contents from the broadcast media and the Internet media can be recorded and stored by the multimedia processing apparatus 102 .
- the received contents can also be used by various functions (e.g., a game) of the multimedia processing apparatus 102 in addition to interaction with the virtual world.
- the received multimedia contents may be displayed on the display 104 .
- the display may include a video monitor, such as a cathode ray tube (CRT) or flat screen for display of still or moving visual images.
- the display 104 may further include one or more audio speakers for presenting sounds to the user.
- the controller 114 allows the user to input various instructions related to multimedia processing, and to control functions of the multimedia processing apparatus 102 .
- the system 100 may include audio and video inputs to facilitate user interaction with visual images and/or audible sounds presented by the display 104 .
- Such inputs may include a video image capture device 116 , such as a camera, and an audio signal capture device 118 , such as a microphone.
- the video image capture device 116 may be placed on top of or integrated into the display 104 and coupled to the multimedia processing apparatus 102 , e.g., by cables, or over-the-air connections, such as optical (e.g., infrared) or radiofrequency (e.g., Bluetooth) data links. It should be understood that the image capture device 116 may be placed in any other proximate location that will allow it to capture images that are located about in front of the display 104 .
- the image capture device 116 may be a digital camera, e.g., a USB 2.0 type camera. Such a camera may have a field of view of about 75 degrees and an f-stop of about 1.5, and may be capable of capturing images at a frame rate of up to about 120 frames per second.
- the video image capture device may be an EyeToy Camera available from Logitech of Fremont, Calif.
- the media processing apparatus 102 may be a game console, television, digital video recorder (DVR), cable set-top-box, home media server or other consumer electronic device, including any device capable of rendering itself subject to control of a user.
- the image capture device may be a three-dimensional (3D) camera.
- a 3D camera or zed camera refers to an image capture device configured to facilitate determining the depth of objects in an image.
- depth refers to a location of an object relative to a direction perpendicular to a plane of the image.
- FIG. 2A is a functional block diagram showing one implementation of the multimedia processing apparatus 102 .
- the multimedia processing apparatus 102 includes the controller 114 , video image capture device 116 , audio signal capture device 118 , a data input/output (I/O) unit 200 , a display output unit 202 , a display control unit 204 , a storage unit 208 , and a game/virtual world processor 206 .
- the game/virtual world processor 206 may be or may include a parallel processor such as a cell processor having a power processing unit (PPU) coupled to one or more synergistic processing units (SPU). Cell processors are described, e.g., in U.S. patent application Ser. No. 11/238,077, which is incorporated herein by reference.
- the multimedia processing apparatus 102 further includes programs and instructions for performing various functions, such as a data input function, a data retaining function, an image processing function, a rendering function, and other related functions.
- the controller 114 may include a direction-determining unit 222 for determining one or a combination of four directions (i.e., an upward direction, a downward direction, a left direction, and a right direction) from the user input; and an instruction-determining unit 224 for determining an instruction from the user input.
- the instruction may include a command to present a multimedia content, to terminate the presentation, to invoke a menu screen, and to issue other related commands and/or instructions.
- Output of the controller 114 , video image capture device 116 and audio signal capture device 118 is directed to the display output unit 202 , the display control unit 204 , and the game/virtual world processor 206 .
- the direction-determining unit 222 and the instruction-determining unit 224 may be configured with a combination of buttons, circuits, and programs to actuate, sense, and determine the direction and the instruction.
- the buttons can include cross-shaped keys or joysticks.
- the button associated with an instruction for invoking a menu screen can be set in a toggle manner so that the menu screen can be toggled between a display mode and a non-display mode each time the button is pressed.
- the direction-determining unit 222 may determine the diagonal movements of the button as a binary command in which the movement is ascertained to be in one of two directions. Thus, a diagonal movement between the up direction and the right direction can be ascertained to be in either the up or the right direction. In another implementation, the direction-determining unit 222 may determine the diagonal movements of the button as an analog command in which the movement is ascertained to be in a particular direction up to the accuracy of the measurement. Thus, a diagonal movement between the up direction and the right direction can be ascertained to be in a northeasterly direction. Directional movements may also be determined through interaction between the user, the video image capture device 116 and the display control 204 as described below.
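- As a hedged illustration of the binary versus analog interpretations above (function names are assumptions, not the patent's implementation):

```python
# Illustrative sketch: interpreting a directional movement either as a binary
# four-way command or as an analog angle.
import math

def binary_direction(dx, dy):
    """Snap a movement vector to one of up, down, left or right."""
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "up" if dy > 0 else "down"

def analog_direction(dx, dy):
    """Return the movement angle in degrees (0 = right, 90 = up)."""
    return math.degrees(math.atan2(dy, dx))

print(binary_direction(0.7, 0.5))                 # 'right' (diagonal snapped)
print(round(analog_direction(0.7, 0.7)))          # 45, i.e. between up and right
```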
- the data I/O unit 200 may include a broadcast input unit 212 for inputting broadcast contents via the broadcast channel 106 ; a network communication unit 214 for inputting and outputting data such as web contents via the network channel 108 ; a disk reading unit 216 for inputting data stored on a disk 110 ; and a memory card reading unit 218 for inputting and outputting data to/from a memory card 112 .
- Output of the data I/O unit 200 may be directed to the display output unit 202 , the display control unit 204 , the game/virtual world processor 206 , and the storage unit 208 .
- the display output unit 202 may include a decoder 232 , a synthesizer 234 , an output buffer 236 , and an on-screen buffer 238 .
- the decoder 232 decodes input data received from the data I/O unit 200 or the storage unit 208 .
- the input data may include broadcast contents, movies, and music.
- the synthesizer 234 processes the decoded input data based on user direction/instruction received from the controller 114 .
- the output of the synthesizer 234 is stored in the output buffer 236 .
- the on-screen buffer 238 may store image data of a menu screen generated by the display control unit 204 .
- the output of the display output unit 202 is transmitted to the display 104 .
- the display control unit 204 may include a menu manager 242 , an effects processor 244 , a contents controller 246 , and an image generator 248 .
- the menu manager 242 manages media items and multimedia contents received from the storage unit 208 and the data I/O unit 200 , and shown on the menu screen.
- the effects processor 244 processes operation of icons and icon arrays on the menu screen.
- the effects processor 244 also manages various actions and effects to be displayed on the menu screen.
- the contents controller 246 controls processing of media items and multimedia contents, and handling of data from the data I/O unit, the storage unit 208 , and the game/virtual world processor 206 .
- the image generator 248 operates to generate a menu screen including a medium icon array and a contents icon array.
- the game/virtual world processor 206 executes game and/or virtual world programs using data read from the data I/O unit 200 or from the storage unit 208 .
- the game/virtual world processor 206 executes a game program or facilitates user interaction with the virtual world based on user instructions received from the controller 114 .
- the display data of the executed game program or virtual world interaction is transmitted to the display output unit 202 .
- signals from the video image capture device 116 and audio signal capture device 118 allow a user to interact with and manipulate images shown on the display 104 .
- embodiments of the invention may allow a user to “grab” and “drag” objects from one location to another on the display 104 .
- the video image capture device 116 points at and captures an image I U of a user U.
- the image I U may then be shown on the display 104 in the background of other images through a technique known as alpha blending.
- alpha blending refers generally to a convex combination of two colors allowing for transparency effects in computer graphics.
- the value alpha in the color code may range from 0.0 to 1.0, where 0.0 represents a fully transparent color, and 1.0 represents a fully opaque color.
- the value of the resulting color when color Value1 is drawn over a background of color Value0 may be given by: Value = (1.0 - alpha)*Value0 + alpha*Value1.
- the alpha component may be used to blend the red, green and blue components equally, as in 32-bit RGBA, or, alternatively, three alpha values may be specified, one for each of the primary colors, for spectral color filtering.
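- The blend stated above can be written directly in code. The sketch below assumes normalized color components and a single alpha applied equally to red, green and blue; per-channel alpha would simply use three alpha values.

```python
# Illustrative sketch of the "over" blend: Value = (1 - alpha)*Value0 + alpha*Value1.
def alpha_blend(value0, value1, alpha):
    """Blend foreground value1 over background value0 with opacity alpha (0..1)."""
    return tuple((1.0 - alpha) * c0 + alpha * c1 for c0, c1 in zip(value0, value1))

background = (0.0, 0.0, 1.0)   # opaque blue
foreground = (1.0, 0.0, 0.0)   # red drawn over it
print(alpha_blend(background, foreground, 0.25))   # (0.25, 0.0, 0.75), mostly blue
```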
- the effects processor may correlate the directional displacement of the user's hand to directional input such as would normally be received from the controller 114 .
- a magnitude of the displacement can control the input speed.
- the image I U may include the user's head H and hand h. It is noted that to facilitate user interaction with the image I U the user's image I U may be presented on the screen as a mirror image of the user U. Thus, when the user U moves his hand h to the user's left, an image I h of the hand also moves to the user's left.
- the effects processor 244 may be configured to recognize the user's hand h and to recognize changes in the aspect ratio (ratio of height to width) of the hand image I h . These changes in aspect ratio may be used to signal the controller 114 that the user has “grabbed” or “clicked” on an object 140 presented on the display.
- the effects processor 244 can then move the selected object with the motion of the image I h of the user's hand h.
- the user may hold a deformable “C”-shaped object 142 that is colored to be more readily recognizable to the effects processor 244 when interpreting the image from the video image capture device 116 .
- Deformation of the object 142 , referred to herein as a “clam”, can provide a change in aspect ratio that is recognized as a command to “grab” or “click” an object in the display 104 .
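- A hedged sketch of how a drop in the tracked hand (or “clam”) image's aspect ratio could be turned into a grab-and-drag command. The threshold, the GrabTracker class and the object interface (contains, move_to) are assumptions for illustration, not the patent's implementation.

```python
# Illustrative sketch: a drop in the hand image's height/width ratio is treated
# as a "grab"; while grabbed, the selected object follows the hand position.
GRAB_RATIO_DROP = 0.7          # assumed: closed hand is noticeably "shorter"

class GrabTracker:
    def __init__(self, open_aspect_ratio):
        self.open_ratio = open_aspect_ratio
        self.grabbed = None

    def update(self, aspect_ratio, hand_pos, objects):
        if self.grabbed is None:
            if aspect_ratio < GRAB_RATIO_DROP * self.open_ratio:
                # Hand just closed: grab whichever on-screen object is under it.
                self.grabbed = next((o for o in objects if o.contains(hand_pos)), None)
        elif aspect_ratio >= self.open_ratio:
            self.grabbed = None    # hand opened again: release the object
        if self.grabbed is not None:
            self.grabbed.move_to(hand_pos)   # drag the object with the hand image
```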
- it is often desirable for the effects processor 244 to be able to recognize whether the user U is using his left or right hand to manipulate the object 140 on the display 104 .
- the controller may also include software that recognizes the user's hand h, head H, his arm A and his chest C by their corresponding images I h , I H , I A , and I C . With this information, the controller 114 can determine whether the user U is using his left or right hand.
- if the user's hand h is on the left side of his head H and his arm A is not across his chest, it can be determined that the user U is using his left hand.
- if the user's hand h is on the left side of his head and his arm is across his chest, it can be determined that the user U is using his right hand.
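- The left/right-hand rule in the two preceding paragraphs can be written as a small decision function; this is a sketch, and the mirrored image-space coordinates are an assumption.

```python
# Illustrative sketch of the handedness rule described above, using the
# mirrored user image: hand position relative to the head plus whether the
# arm crosses the chest.
def which_hand(hand_x, head_x, arm_crosses_chest):
    hand_left_of_head = hand_x < head_x     # "left" in the mirrored image
    if hand_left_of_head:
        return "right" if arm_crosses_chest else "left"
    # Mirror of the same rule when the hand is to the right of the head.
    return "left" if arm_crosses_chest else "right"

print(which_hand(hand_x=100, head_x=200, arm_crosses_chest=False))  # 'left'
print(which_hand(hand_x=100, head_x=200, arm_crosses_chest=True))   # 'right'
```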
- FIGS. 2C-2D depict an image capture device 120 that may be used with the multimedia processing system 100 .
- the device 120 includes an optical image capture device 122 , e.g., a digital camera (or 3D camera) and one or more microphones 124 .
- the microphones 124 may be arranged in an array and spaced apart from each other at known distances.
- the microphones 124 may be spaced in a linear array with adjacent microphones spaced about 2 centimeters apart center-to-center. Each microphone may have a resonant frequency of about 16 kilohertz.
- Such microphone arrays may be used to locate and track one or more sources of sound in conjunction with operation of the apparatus 102 and interaction with a virtual world.
- the use of such microphone arrays for sound source location and tracking is described, e.g., in U.S. patent application Ser. Nos. 11/381,724, 11/381,725 and 11/381,729 filed May 4, 2006, the entire disclosures of all of which are incorporated herein by reference.
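- The referenced applications describe more sophisticated array processing; as a much simpler hedged illustration of the underlying geometry, the sketch below estimates a sound source's bearing from the arrival-time difference at two microphones a known distance apart (the constants are assumptions).

```python
# Illustrative sketch: bearing of a sound source from the time-difference-of-
# arrival between two microphones spaced a known distance apart.
import math

SPEED_OF_SOUND = 343.0   # metres per second (assumed, room-temperature air)
MIC_SPACING = 0.02       # 2 cm between adjacent microphones, as described above

def bearing_degrees(delay_seconds):
    """Angle of the source relative to broadside of the microphone pair."""
    path_difference = SPEED_OF_SOUND * delay_seconds
    ratio = max(-1.0, min(1.0, path_difference / MIC_SPACING))  # clamp before asin
    return math.degrees(math.asin(ratio))

print(round(bearing_degrees(2.9e-5), 1))   # a ~29 microsecond delay -> ~30 degrees
```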
- the microphones 124 may move with the image capture device 122 .
- the microphones 124 may be mounted to a frame 126 that keeps the microphones in a fixed positional relationship with respect to the image capture device, e.g., with respect to a lens 128 .
- although the microphones are depicted as being arrayed in a horizontal linear arrangement, they may alternatively be oriented vertically or diagonally or arrayed in a two-dimensional arrangement.
- the device 120 may include a visible LED 130 and an infrared LED 132 . These may be used to illuminate objects in a field of view of the image capture device 122 .
- the lens 128 may include a so-called “day-night” coating that transmits visible light and selected frequencies of the infrared (e.g., frequencies at around 940 nm).
- elements of the system 20 and apparatus 102 may be set up so that a user may direct his or her avatar to pick up a virtual cell phone, dial a number and make a real call to a real or virtual phone. If the intended recipient of the call is another user of the virtual world, the system 20 and apparatus 102 may be suitably programmed to connect to that user's virtual phone, e.g., via VoIP, if that user happens to be online interacting with the virtual world at the time of the call. Elements of the system 20 and apparatus 102 may be configured to route the call by default to the intended recipient's virtual phone (if any). If the intended recipient is not online, the call may be re-routed to the recipient's real communication device.
- real communication devices may include, but are not limited to, phones (e.g., land line, cellular, or VoIP phones), voice mail (which may be associated with a real or virtual phone), or any network device with VoIP capability, including portable game devices and the like.
- alternatively, the call may be routed by default to the user's real communication device.
- elements of the system 20 and apparatus 102 may be used to enable intelligent two-way routing between the virtual world and real communication devices.
- communication between real and virtual devices may be understood with respect to FIG. 2E and FIG. 2F .
- two or more users 251 , 252 may interact with a virtual world, e.g., over network 30 via the system 20 , described above with respect to FIG. 1E .
- Each user may interface with the system 20 over the network 30 via client devices 253 , 254 , which may be suitably configured, e.g., as described above with respect to FIGS. 1F and 2A .
- Each client device 253 , 254 may include suitably configured hardware and/or software that generates virtual communication devices 255 , 256 .
- Device avatars may represent the virtual communication devices in the virtual world.
- a device avatar may take on the appearance of a real device, e.g., as described above. Alternatively, the user may customize the device avatar so that it takes on an entirely arbitrary and/or fanciful appearance.
- each user 251 , 252 may also have access to real communication devices, such as land line telephones 257 , 258 and cell phones 259 , 260 .
- Each client device 253 , 254 may be provided with a configurable router 261 , 262 to facilitate routing of calls among real devices and virtual devices.
- the routers 261 , 262 may reside in software or hardware, on a server, on a peer-to-peer network, or some combination of these. In the example depicted in FIG. 2E , the routers 261 , 262 are located on the client devices 253 , 254 ; however, this is not the only possible configuration.
- the routers 261 , 262 may alternatively be located anywhere, e.g., on the simulation servers 22 , view servers 24 or other devices connected to the network 30 .
- the routers 261 , 262 may be accessed in a plurality of ways and from various devices, including, but not limited to, the virtual phone or communication device, real communication device, network web pages, and the like. Each router 261 , 262 may be configured with one or more routing preferences to control the routing function. The routers 261 , 262 may implement routing preferences for either the source or the target of a communication.
- the “source” of a communication generally refers to the party originating a communication (e.g., the “caller” for telephone call or the “sender” for text message or email).
- the “target” of a communication generally refers to the intended recipient of the communication. It is noted that the source or target need not be a user of the virtual world.
- the first user's router 261 may be configured to preferentially attempt to contact the second user 252 at virtual communication device 256 . If the second user is not online and using the virtual world, the first user's router 261 may attempt to contact the second user at land line 258 and, failing that, the router 261 may attempt to contact the second user 252 at his or her cell phone 260 .
- the second user's router 262 may implement its own routing preference for reception of communications from the first user 251 .
- the second user's router 262 may preferentially route calls from the first user 251 to the second user's cell phone 260 , then to the second user's land line 258 and then to the second user's virtual device 256 .
- each user 251 , 252 may have one corresponding telephone number that is associated with each of that user's real and virtual communication devices.
- the routers 261 , 262 may route calls to a particular user's number among the user's different devices automatically. Such routing may be based on the user's preferences, the user's activity, or some combination of both. It is noted that the routers 261 , 262 may be programmed with different preferences depending on the identity of the source of the communication and/or on the identity of the target of the communication.
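- A hedged sketch of a configurable router of the kind described above follows. The device names, the preference order and the online check are assumptions for illustration; the patent leaves the exact policy to each router 261 , 262 .

```python
# Illustrative sketch: route a call for one user among that user's devices,
# honoring a per-source preference override and whether the target is online.
DEFAULT_PREFERENCES = ["virtual_phone", "land_line", "cell_phone"]

class CallRouter:
    def __init__(self, preferences=None, per_source=None):
        self.preferences = preferences or list(DEFAULT_PREFERENCES)
        self.per_source = per_source or {}    # preference overrides keyed by caller

    def route(self, source, target_devices, target_online):
        """Return the first usable device in preference order, else voicemail."""
        order = self.per_source.get(source, self.preferences)
        for device in order:
            if device == "virtual_phone" and not target_online:
                continue                      # virtual device needs the user online
            if device in target_devices:
                return device
        return "voicemail"

router = CallRouter(per_source={"alice": ["cell_phone", "land_line", "virtual_phone"]})
print(router.route("alice", {"cell_phone", "virtual_phone"}, target_online=True))  # cell_phone
print(router.route("bob", {"virtual_phone", "cell_phone"}, target_online=False))   # cell_phone
```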
- the first user's router 261 may receive a call from a source who is calling a first user's number.
- the first user 251 may provide the router 261 with information indicating that the first user 251 is online. Such information may be programmed into the multimedia processing apparatus 102 , e.g., using the controller 114 .
- the router 261 may check to see if the first user 251 is online. If so, the router 261 may route the “call” to the first user's virtual communication device 255 , which may be configured to “ring” even if the first user 251 is online via the second user's client device 254 .
- the router 261 may be provided with information or may check to determine that the first user 251 is online and the target (e.g., the second user 252 ) is offline. In such a case, the first user's router 261 may route the “call” to the second user's real communication device, e.g., land line 258 or cell phone 260 .
- the first user's router 261 may be provided information or determine that the first user 251 is online and the second user 252 is online. In such a case, a “text message” may be routed within the virtual world, e.g., to the second user's avatar or to the second user's virtual device 256 .
- the router 261 may be provided information or may check to determine if the second user 252 is online. If the second user 252 is offline the “text message” may be routed to a real world device associated with the second user, e.g., land line 258 or cell phone 260 .
- the source may place the call from within the virtual world, e.g., through an avatar's virtual device, through any VoIP device or service, or through any real telephone line.
- the above intelligent routing may take action based on user preferences; for example, a user may want his real cell phone to ring when he is online, rather than his avatar's phone. In other configurations the above intelligent routing may take action based on state controls, so that only in certain circumstances does the call route to the avatar or the real phone, depending on the application configuration. For example, if a target is involved in an online game and does not wish to be interrupted, the call may be routed to the target's real or virtual voicemail. In yet another configuration, a call may be routed to a virtual device, but if the device does not ring, the call may be re-routed to a real device, such as a real phone, instead of going to virtual voicemail.
- embodiments of the present invention allow for a situation where the first user 251 calls the second user 252 using a real phone, e.g., land line 257 , speaks into the phone, and the first user's avatar 263 appears on the second user's virtual communication device 256 , which is shown on the display 104 connected to the multimedia processing apparatus 102 belonging to the second user 252 .
- the first user's name 266 may also be shown on the display 104 proximate the first user's avatar 263 .
- the first user's spoken speech 265 may be translated to text through use of speech recognition software and/or hardware, which may be implemented on the apparatus 102 , the simulation servers 22 , view server 24 or other device.
- the resulting text may appear on the display 104 as text bubbles 264 proximate the first user's avatar 263 .
- An audio speaker 267 may play audible sounds 268 of the first user's speech 265 during communication between the first and second users.
- A similar routing procedure may be used for other types of messaging, e.g., text messaging or email.
- An advantage of this system is that real calls and/or text messages may be routed from one user to another in a way that can avoid long distance or other phone charges associated with real communication devices.
- the recipient's (or user's) real telephone or text message device may be equipped with middleware to facilitate interaction with the virtual world supported by the system 20 .
- a user may be able to use a real communication device to access virtual world content.
- a cellular phone, portable internet device, etc. may be used to make changes to the user's avatar, public space or private space.
- the real communication device may be used to remotely access virtual communication device content.
- the real communication device may be used as an interface between the simulated communication device and a user.
- the virtual communication device is a virtual digital video recorder (DVR) located within the user's private space.
- a user may access the virtual DVR to record a real or virtual event by way of a real cellular phone and electronic programming guide.
- communicating between the real and virtual communication devices may involve video communication.
- an image of the avatar may be displayed with the real communication device during the video communication.
- the system that generates the virtual world may facilitate lip-synching of the avatar image to real or synthesized speech generated by the user associated with the avatar.
- the user may record a voice message to be sent to the real device as part of a video message.
- the system may generate a video message of the avatar speaking the voice message in which the avatar's lip movements are synchronized to the user's speech within the message.
- the user may enter text of the message into a virtual device.
- the system may then synthesize speech for the avatar from the text and then generate a video image of the avatar in which the avatar's lip movements are synchronized to the synthesized speech.
- the user may record a sound and video message, e.g., using the video image capture device 116 and audio signal capture device 118 .
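- By way of illustration only, the following sketch shows one simple way lip-synching of the kind described above might be approximated, by driving the avatar's mouth openness from the short-time energy of the recorded speech. The frame rate, window size and normalization are assumptions; a production system might instead map recognized phonemes to visemes.

```python
# Sketch: derive per-frame mouth-openness keyframes for an avatar from the
# short-time energy of a recorded speech signal. Window size, frame rate and
# scaling are illustrative assumptions.

import math

def mouth_keyframes(samples, sample_rate=16000, fps=30):
    """Return one mouth-openness value in [0.0, 1.0] per video frame."""
    window = max(1, sample_rate // fps)          # audio samples per video frame
    frames = []
    for start in range(0, len(samples), window):
        chunk = samples[start:start + window]
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk))
        frames.append(rms)
    peak = max(frames) or 1.0
    return [min(1.0, f / peak) for f in frames]  # normalize to [0, 1]


if __name__ == "__main__":
    # A fake 1-second "voice": a 200 Hz tone with a rising-falling envelope.
    sr = 16000
    fake = [math.sin(2 * math.pi * 200 * t / sr) * min(t, sr - t) / sr
            for t in range(sr)]
    keys = mouth_keyframes(fake, sr)
    print(len(keys), round(max(keys), 2))   # about 30 keyframes, peak of 1.0
```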
- the avatars 14 may express emotion through animation, facial change, sound, particle or chat bubble change to communicate a specific emotion.
- emotion may be pre-programmed and may be triggered by user commands.
- emotions expressed by the user during interaction with the virtual world may be mapped to emotion exhibited by the user's avatar.
- the user may select an emotional state that can be projected by the avatar.
- avatar emotes may be selected from a menu presented to the user by the apparatus 102 . If, for example, the user selects “happy”, the user's avatar may be shown with a smile on its face.
- the apparatus 102 may be configured to detect an emotional state of the user in real time and then appropriately change the features of the user's avatar to reflect that state.
- Such real time tracking of user emotional state can be particularly useful, e.g., for mapping user emotional state onto an avatar during video communication in which an image of the user's avatar is presented to a real device.
- the apparatus 102 may track user emotional state in real time by capturing one or more visual images of the user U and analyzing one or more facial features of the user using the image capture device 116 .
- the game/virtual world processor 206 may be programmed to analyze these images, e.g., using facial features such as the user's lips, eyes, eyelids and eyebrows, cheeks, teeth or nostrils, or body language features, e.g., stance, placement of arms or hands, to determine the user's emotional state.
- facial and/or body language analysis may be enhanced through the use of a 3D camera to generate the images.
- user emotional state may be tracked in real time through analysis of the user's voice stress as exhibited in user speech or other vocalizations detected by the audio signal capture device 118.
- the user's emotional state may be tracked by analysis of text entered by the user for certain words, phrases or language patterns that are indicative of emotional state.
- the user's emotional state may be tracked using other biometrics, such as electrocardiographic (EKG), electroencephalographic (EEG), galvanic skin response, or thermal imaging data.
- Such data may be obtained through appropriate sensors incorporated into the controller 114 and analyzed by appropriately configured software, hardware, or firmware incorporated into the processor 206.
- Thermal imaging data may also be obtained if the image capture device 116 includes an infrared imaging capability.
- various combinations of body language and facial features indicative of the emotional state may be reflected in emotes exhibited by animation of the avatar (e.g., a raised fist combined with bared teeth to indicate anger).
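- By way of illustration only, the following sketch shows how detected facial and vocal cues might be reduced to an emotional state and then mapped to an avatar emote. The cue names, thresholds and emote table are assumptions introduced for this example.

```python
# Toy illustration: reduce detected facial/body cues to an emotional state and
# map that state to an avatar emote. Cue names, thresholds and the emote table
# are illustrative assumptions, not part of the original disclosure.

EMOTES = {
    "happy": {"mouth": "smile", "animation": "bounce", "particles": None},
    "angry": {"mouth": "bared_teeth", "animation": "raised_fist", "particles": "steam"},
    "calm":  {"mouth": "neutral", "animation": "idle", "particles": None},
}

def classify_emotion(cues):
    """`cues` is a dict of normalized feature scores in [0, 1],
    e.g. {"smile": 0.8, "brow_furrow": 0.1, "voice_stress": 0.2}."""
    if cues.get("brow_furrow", 0) > 0.6 or cues.get("voice_stress", 0) > 0.7:
        return "angry"
    if cues.get("smile", 0) > 0.5:
        return "happy"
    return "calm"

def avatar_emote(cues):
    """Return the emote parameters the avatar should exhibit for these cues."""
    return EMOTES[classify_emotion(cues)]


if __name__ == "__main__":
    print(avatar_emote({"smile": 0.9, "brow_furrow": 0.05, "voice_stress": 0.1}))
```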
- users may wish to use customized gestures or “emotes” for their avatars.
- one or more custom gestures may be generated for the avatar.
- These custom gestures may then be associated with one or more user interface signals so that the user's avatar can perform the gesture on command.
- the custom gesture may be generated through use of motion capture or performance capture techniques to record and digitize the user's bodily movements or mapping of the user's facial expression as the user performs the gesture.
- the image capture device 116 may be used for this purpose.
- a commercial motion capture studio or performance capture studio may be used for this purpose.
- the user or some other performer may wear markers near each joint to identify the motion by the positions or angles between the markers.
- Acoustic, inertial, LED, magnetic or reflective markers, or combinations of any of these, are tracked, optimally at least two times the rate of the desired motion, to submillimeter positions.
- the motion capture computer software records the positions, angles, velocities, accelerations and impulses, providing an accurate digital representation of the motion.
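- By way of illustration only, the following sketch derives per-frame velocities and accelerations from digitized marker positions by finite differences, assuming uniform sampling at the capture rate; real pipelines also filter the data and fill gaps.

```python
# Sketch: derive per-frame velocities and accelerations for one tracked marker
# from its digitized positions using finite differences. Uniform sampling at
# the capture rate is assumed.

def derivatives(positions, capture_rate_hz):
    """`positions` is a list of (x, y, z) tuples, one per capture frame."""
    dt = 1.0 / capture_rate_hz
    velocities = [tuple((b - a) / dt for a, b in zip(p0, p1))
                  for p0, p1 in zip(positions, positions[1:])]
    accelerations = [tuple((b - a) / dt for a, b in zip(v0, v1))
                     for v0, v1 in zip(velocities, velocities[1:])]
    return velocities, accelerations


if __name__ == "__main__":
    path = [(0.0, 0.0, 0.0), (0.001, 0.0, 0.0), (0.003, 0.0, 0.0)]  # meters
    vel, acc = derivatives(path, capture_rate_hz=120)
    print(round(vel[0][0], 3), round(acc[0][0], 3))   # 0.12 m/s and 14.4 m/s^2
```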
- an optical motion capture system may triangulate the 3D position of a marker between two or more cameras calibrated to provide overlapping projections.
- a passive optical system may use markers coated with a retroreflective material to reflect back light that is generated near the camera's lens.
- the camera's sensitivity can be adjusted, taking advantage of most cameras' narrow range of sensitivity to light, so that only the bright markers will be sampled, ignoring skin and fabric.
- an active optical system may be used in which the markers themselves are powered to emit their own light. Power may be provided sequentially to each marker in phase with the capture system, providing a unique identification of each marker for a given capture frame at a cost to the resultant frame rate.
- Performance capture differs from standard motion capture due to the interactive nature of the performance, capturing the body, the hands and facial expression all at the same time, as opposed to capturing data for reference motion and editing the motions together later.
- the digitized gesture may be used to generate coded instructions or other user interface signals for animation of the avatar so that it performs the gesture.
- the code or other user interface signals may be distributed to one or more other users, so that they can customize their avatars to perform the custom gesture.
- Customized avatar gestures may be combined with customized avatar clothing, footwear, hairstyles, ethnic characteristics and other custom avatar features as a means of social identification with a particular group.
- moderating or moderation refers to enforcement of some degree of rules for acceptable behavior in the virtual world.
- Such moderation may be implemented by the view servers 24 , which may analyze the custom gestures for rudeness or other indications of inappropriateness. Moderating the display of the custom gesture may include restricting an ability of a particular user to make an avatar perform the custom gesture or an ability of the particular user to perceive the avatar performing the custom gesture based on predetermined criteria.
- predetermined criteria may include the age of the user or viewer of the gesture or a sensitivity of the viewer to offense based on religious, ethnic or other affiliation of the viewer.
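- By way of illustration only, the following sketch shows how such predetermined criteria might gate whether a viewer perceives an avatar performing a custom gesture. The field names, rating scheme and age threshold are assumptions introduced for this example.

```python
# Sketch: gate whether a viewer is shown an avatar performing a custom gesture.
# Field names, the rating scheme and the age threshold are illustrative
# assumptions about how "predetermined criteria" might be encoded.

def may_view_gesture(gesture, viewer):
    """`gesture` e.g. {"rating": "mature", "flagged_rude": False};
    `viewer` e.g. {"age": 15, "blocked_ratings": {"mature"}}."""
    if gesture.get("flagged_rude"):
        return False
    if gesture.get("rating") == "mature" and viewer.get("age", 0) < 18:
        return False
    if gesture.get("rating") in viewer.get("blocked_ratings", set()):
        return False
    return True


if __name__ == "__main__":
    g = {"rating": "mature", "flagged_rude": False}
    print(may_view_gesture(g, {"age": 15}))   # False
    print(may_view_gesture(g, {"age": 30}))   # True
```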
- an avatar may be associated with a source of an email.
- a user may generate an email within the virtual world and associate one or more images of his or her avatar with the email.
- the email may be sent from the virtual world to a real device.
- the avatar images may then be presented at the email's destination, e.g., by a self-extracting email attachment.
- the email may be generated, e.g., using a virtual communication device within the virtual world.
- the destination of the email may be a real communication device, e.g., any real device configured to receive email messages.
- the real communication device may be configured to communicate with other real communication devices via one or more communication channels that are independent of the virtual world.
- the virtual world may optionally comprise a simulated public space configured to facilitate interaction among a plurality of users and one or more private spaces.
- Each private space is associated with a particular user of the plurality of users, e.g., as described above.
- Recorded or synthesized speech may be associated with the email and presented with the one or more images at the destination.
- the avatar images may comprise an animation of the avatar generated specifically for the email.
- the animation may be presented at the destination, e.g., by self-extracting email attachment.
- one or more gestures may be mapped to the animation of the avatar, e.g., as described above.
- the gestures may be mapped by recording audio and/or video of a source of the email message and mapping one or more features of the audio and/or video to one or more features of the avatar in the animation.
- a theme may be associated with virtual camera movements in the animation.
- the theme may involve choice of virtual camera angle, tracking, panning, tilting, zoom, close-up, simulated lighting, and the like.
- the virtual camera position may be fixed or moving.
- the theme may involve a choice of background scenery for the avatar.
- generating the email may involve tracking an emotional state of the source, e.g., as described above, and mapping the emotional state to the theme. For example, a serene or calm emotional state may be mapped to a theme characterized by fixed camera position or relatively slow virtual camera movement. An agitated or excited emotional state may be mapped to a theme characterized by jarring camera movement, extreme close-ups, harsh camera angles, and the like.
- Avatar email communications of the type described above may be implemented, e.g., by appropriate configuration of the system 20 of FIG. 1E and/or the multimedia apparatus 102 of FIG. 1F and FIG. 2A .
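- By way of illustration only, the following sketch maps a tracked emotional state of the email's source to a virtual camera theme for the avatar animation, as described above. The state names and camera parameters are assumptions introduced for this example.

```python
# Sketch: map a tracked emotional state of the email's source to a virtual
# camera "theme" for the avatar animation. State names and camera parameters
# are illustrative assumptions.

THEMES = {
    "calm": {"camera": "fixed", "shot": "medium", "cut_rate_s": 8.0,
             "lighting": "soft", "background": "beach_sunset"},
    "excited": {"camera": "handheld_shake", "shot": "extreme_closeup",
                "cut_rate_s": 1.5, "lighting": "harsh", "background": "city_night"},
}

def theme_for_state(emotional_state):
    """Fall back to the calm theme for states without a dedicated entry."""
    return THEMES.get(emotional_state, THEMES["calm"])


if __name__ == "__main__":
    print(theme_for_state("excited")["camera"])   # handheld_shake
```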
- a console video game apparatus 300 may include a processor 301 and a memory 302 (e.g., RAM, DRAM, ROM, and the like).
- the video game apparatus 300 may have multiple processors 301 if parallel processing is to be implemented.
- the memory 302 includes data and game program code 304 , which may include portions that facilitate user interaction with a virtual world as described above.
- the memory 302 may include inertial signal data 306 which may include stored controller path information as described above.
- the memory 302 may also contain stored gesture data 308 , e.g., data representing one or more gestures relevant to the game program 304 .
- Coded instructions executed on the processor 301 may implement a multi-input mixer 305, which may be configured and function as described above.
- the apparatus 300 may also include well-known support functions 310 , such as input/output (I/O) elements 311 , power supplies (P/S) 312 , a clock (CLK) 313 and cache 314 .
- the apparatus 300 may optionally include a mass storage device 315 such as a disk drive, CD-ROM drive, tape drive, or the like to store programs and/or data.
- the apparatus 300 may also optionally include a display unit 316 and input unit 318 to facilitate interaction between the apparatus 300 and a user.
- the display unit 316 may be in the form of a cathode ray tube (CRT) or flat panel screen that displays text, numerals, graphical symbols or images.
- the user interface 318 may include a keyboard, mouse, joystick, light pen or other device.
- the user input 318 may include a microphone, video camera or other signal transducing device to provide for direct capture of a signal to be analyzed.
- the apparatus 300 may also include a network interface 319 to enable the device to communicate with virtual world servers and other similarly configured devices over a network, such as the internet.
- the processor 301 , memory 302 , user input 318 , network interface 319 and other components of the apparatus 300 may exchange signals (e.g., code instructions and data) with each other via a system bus 320 as shown in FIG. 3 .
- a microphone array 322 may be coupled to the system 300 through the I/O functions 311 .
- the microphone array may include between about 2 and about 8 microphones, preferably about 4 microphones with neighboring microphones separated by a distance of less than about 4 centimeters, preferably between about 1 centimeter and about 2 centimeters.
- the microphones in the array 322 are omni-directional microphones.
- An optional image capture unit 323, e.g., a digital camera, may also be coupled to the apparatus 300, e.g., through the I/O functions 311.
- One or more pointing actuators 325 may be mechanically coupled to the camera to control pointing of the image capture unit. These actuators 325 may exchange signals with the processor 301 via the I/O functions 311 .
- I/O generally refers to any program, operation or device that transfers data to or from the apparatus 300 and to or from a peripheral device. Every data transfer may be regarded as an output from one device and an input into another.
- Peripheral devices include input-only devices, such as keyboards and mice, output-only devices, such as printers, as well as devices such as a writable CD-ROM that can act as both an input and an output device.
- The term peripheral device includes external devices, such as a mouse, keyboard, printer, monitor, microphone, game controller, camera, external Zip drive or scanner, as well as internal devices, such as a CD-ROM drive, CD-R drive or internal modem, or other peripherals such as a flash memory reader/writer or hard drive.
- the apparatus 300 may include a controller 330 coupled to the processor via the I/O functions 311 either through wires (e.g., a USB cable) or wirelessly, e.g., using infrared or radiofrequency (such as Bluetooth) connections.
- the controller 330 may have analog joystick controls 331 and conventional buttons 333 that provide control signals commonly used during playing of video games.
- Such video games may be implemented as processor readable data and/or instructions from the program 304 which may be stored in the memory 302 or other processor readable medium such as one associated with the mass storage device 315 .
- the joystick controls 331 may generally be configured so that moving a control stick left or right signals movement along the X axis, and moving it forward (up) or back (down) signals movement along the Y axis. In joysticks that are configured for three-dimensional movement, twisting the stick left (counter-clockwise) or right (clockwise) may signal movement along the Z axis.
- The X, Y and Z axes are often referred to as the roll, pitch, and yaw axes, respectively, particularly in relation to an aircraft.
- the controller 330 may include one or more inertial sensors 332 , which may provide position and/or orientation information to the processor 301 via an inertial signal. Orientation information may include angular information such as a tilt, roll or yaw of the controller 330 .
- the inertial sensors 332 may include any number and/or combination of accelerometers, gyroscopes or tilt sensors.
- the inertial sensors 332 include tilt sensors adapted to sense orientation of the joystick controller with respect to tilt and roll axes, a first accelerometer adapted to sense acceleration along a yaw axis and a second accelerometer adapted to sense angular acceleration with respect to the yaw axis.
- An accelerometer may be implemented, e.g., as a MEMS device including a mass mounted by one or more springs with sensors for sensing displacement of the mass relative to one or more directions. Signals from the sensors that are dependent on the displacement of the mass may be used to determine an acceleration of the joystick controller 330 .
- Such techniques may be implemented by instructions from the game program 304 which may be stored in the memory 302 and executed by the processor 301 .
- an accelerometer suitable as the inertial sensor 332 may be a simple mass elastically coupled at three or four points to a frame, e.g., by springs.
- Pitch and roll axes lie in a plane that intersects the frame, which is mounted to the joystick controller 330 .
- the mass will displace under the influence of gravity and the springs will elongate or compress in a way that depends on the angle of pitch and/or roll.
- the displacement of the mass can be sensed and converted to a signal that is dependent on the amount of pitch and/or roll.
- Angular acceleration about the yaw axis or linear acceleration along the yaw axis may also produce characteristic patterns of compression and/or elongation of the springs or motion of the mass that can be sensed and converted to signals that are dependent on the amount of angular or linear acceleration.
- Such an accelerometer device can measure tilt, roll angular acceleration about the yaw axis and linear acceleration along the yaw axis by tracking movement of the mass or compression and expansion forces of the springs.
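- By way of illustration only, the following sketch shows a common way tilt may be estimated from a static three-axis accelerometer reading by treating that reading as the gravity vector. The axis convention used here is an assumption for this example.

```python
# Sketch: estimate pitch and roll from a static three-axis accelerometer
# reading by treating it as the gravity vector. The axis convention (x forward,
# y to the right, z down) is an assumption for illustration.

import math

def tilt_from_accel(ax, ay, az):
    """Return (pitch, roll) in degrees from accelerations in g units."""
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll


if __name__ == "__main__":
    print(tilt_from_accel(0.0, 0.0, 1.0))     # level: approximately (0.0, 0.0)
    print(tilt_from_accel(-0.5, 0.0, 0.866))  # pitched up about 30 degrees
```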
- The displacement of the mass, or the forces exerted on it, may be sensed in a number of different ways, including through the use of resistive strain gauge material, photonic sensors, magnetic sensors, hall-effect devices, piezoelectric devices, capacitive sensors, and the like.
- the joystick controller 330 may include one or more light sources 334 , such as light emitting diodes (LEDs).
- the light sources 334 may be used to distinguish one controller from the other.
- one or more LEDs can accomplish this by flashing or holding an LED pattern code.
- 5 LEDs can be provided on the joystick controller 330 in a linear or two-dimensional pattern.
- the LEDs may alternatively, be arranged in a rectangular pattern or an arcuate pattern to facilitate determination of an image plane of the LED array when analyzing an image of the LED pattern obtained by the image capture unit 323 .
- the LED pattern codes may also be used to determine the positioning of the joystick controller 330 during game play.
- the LEDs can assist in identifying the tilt, yaw and roll of the controllers. This detection pattern can assist in providing a better user feel in games, such as aircraft flying games, etc.
- the image capture unit 323 may capture images containing the joystick controller 330 and light sources 334 . Analysis of such images can determine the location and/or orientation of the joystick controller. Such analysis may be implemented by program code instructions 304 stored in the memory 302 and executed by the processor 301 . To facilitate capture of images of the light sources 334 by the image capture unit 323 , the light sources 334 may be placed on two or more different sides of the joystick controller 330 , e.g., on the front and on the back (as shown in phantom). Such placement allows the image capture unit 323 to obtain images of the light sources 334 for different orientations of the joystick controller 330 depending on how the joystick controller 330 is held by a user.
- the light sources 334 may provide telemetry signals to the processor 301 , e.g., in pulse code, amplitude modulation or frequency modulation format. Such telemetry signals may indicate which joystick buttons are being pressed and/or how hard such buttons are being pressed. Telemetry signals may be encoded into the optical signal, e.g., by pulse coding, pulse width modulation, frequency modulation or light intensity (amplitude) modulation. The processor 301 may decode the telemetry signal from the optical signal and execute a game command in response to the decoded telemetry signal. Telemetry signals may be decoded from analysis of images of the joystick controller 330 obtained by the image capture unit 323 .
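- By way of illustration only, the following sketch shows one simple way a pulse-coded telemetry word (e.g., a bitmask of pressed buttons) might be recovered from per-frame LED brightness samples. The threshold, one-bit-per-frame coding and bit order are assumptions introduced for this example.

```python
# Sketch: recover a pulse-coded telemetry word (e.g., a bitmask of pressed
# buttons) from per-frame LED brightness samples captured by the image capture
# unit. The threshold, one-bit-per-frame coding and bit order are assumptions.

def decode_telemetry(brightness_per_frame, threshold=0.5):
    """Interpret each frame's normalized LED brightness as one bit (MSB first)
    and return the resulting integer."""
    word = 0
    for level in brightness_per_frame:
        word = (word << 1) | (1 if level >= threshold else 0)
    return word


if __name__ == "__main__":
    frames = [0.9, 0.1, 0.8, 0.9]          # bright, dark, bright, bright
    print(bin(decode_telemetry(frames)))   # 0b1011 -> e.g. buttons 0, 1 and 3
```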
- the apparatus 300 may include a separate optical sensor dedicated to receiving telemetry signals from the light sources 334.
- the use of LEDs in conjunction with determining an intensity amount in interfacing with a computer program is described, e.g., in U.S. patent application Ser. No. 11/429,414, to Richard L. Marks et al., entitled “USE OF COMPUTER IMAGE AND AUDIO PROCESSING IN DETERMINING AN INTENSITY AMOUNT WHEN INTERFACING WITH A COMPUTER PROGRAM” (Attorney Docket No. SONYP052), filed May 4, 2006, which is incorporated herein by reference in its entirety.
- analysis of images containing the light sources 334 may be used for both telemetry and determining the position and/or orientation of the joystick controller 330 .
- Such techniques may be implemented by instructions of the program 304 which may be stored in the memory 302 and executed by the processor 301 .
- the processor 301 may use the inertial signals from the inertial sensor 332 in conjunction with optical signals from light sources 334 detected by the image capture unit 323 and/or sound source location and characterization information from acoustic signals detected by the microphone array 322 to deduce information on the location and/or orientation of the controller 330 and/or its user.
- "acoustic radar" sound source location and characterization may be used in conjunction with the microphone array 322 to track a moving voice while motion of the joystick controller is independently tracked (through the inertial sensor 332 and/or light sources 334).
- a pre-calibrated listening zone is selected at runtime and sounds originating from sources outside the pre-calibrated listening zone are filtered out.
- the pre-calibrated listening zones may include a listening zone that corresponds to a volume of focus or field of view of the image capture unit 323 .
- Examples of acoustic radar are described in detail in U.S. patent application Ser. No. 11/381,724, to Xiadong Mao entitled “METHODS AND APPARATUS FOR TARGETED SOUND DETECTION AND CHARACTERIZATION”, filed May 4, 2006, which is incorporated herein by reference.
- any number of different combinations of different modes of providing control signals to the processor 301 may be used in conjunction with embodiments of the present invention.
- Such techniques may be implemented by program code instructions 304 which may be stored in the memory 302 and executed by the processor 301 and may optionally include one or more instructions that direct the one or more processors to select a pre-calibrated listening zone at runtime and filter out sounds originating from sources outside the pre-calibrated listening zone.
- the pre-calibrated listening zones may include a listening zone that corresponds to a volume of focus or field of view of the image capture unit 323 .
- the program 304 may optionally include one or more instructions that direct the one or more processors to produce a discrete time domain input signal x m (t) from microphones M 0 . . . M M , of the microphone array 322 , determine a listening sector, and use the listening sector in a semi-blind source separation to select the finite impulse response filter coefficients to separate out different sound sources from input signal x m (t).
- the program 304 may also include instructions to apply one or more fractional delays to selected input signals x m (t) other than an input signal x 0 (t) from a reference microphone M 0 . Each fractional delay may be selected to optimize a signal to noise ratio of a discrete time domain output signal y(t) from the microphone array.
- the fractional delays may be selected such that a signal from the reference microphone M 0 is first in time relative to signals from the other microphone(s) of the array.
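- By way of illustration only, the following sketch applies per-microphone fractional delays by linear interpolation and averages the aligned signals (delay-and-sum). It is a stand-in that is far simpler than the semi-blind source separation and filter-coefficient selection described above.

```python
# Highly simplified delay-and-sum sketch: apply per-microphone fractional
# delays (by linear interpolation) and average the aligned signals. This is a
# stand-in illustration only, not the separation method described above.

def fractional_delay(signal, delay):
    """Delay `signal` (a list of floats) by a possibly fractional number of
    samples using linear interpolation; samples before t=0 are taken as 0."""
    out = []
    for n in range(len(signal)):
        t = n - delay
        i = int(t // 1)
        frac = t - i
        s0 = signal[i] if 0 <= i < len(signal) else 0.0
        s1 = signal[i + 1] if 0 <= i + 1 < len(signal) else 0.0
        out.append((1.0 - frac) * s0 + frac * s1)
    return out

def delay_and_sum(channels, delays):
    """Align each channel by its delay and average the results."""
    aligned = [fractional_delay(ch, d) for ch, d in zip(channels, delays)]
    n = len(aligned[0])
    return [sum(ch[t] for ch in aligned) / len(aligned) for t in range(n)]


if __name__ == "__main__":
    m0 = [0, 0, 1, 0, 0, 0]          # reference microphone M0
    m1 = [0, 0, 0, 1, 0, 0]          # same impulse arriving 1 sample later
    y = delay_and_sum([m0, m1], delays=[1.5, 0.5])
    print([round(v, 2) for v in y])  # impulse energy re-aligned and summed
```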
- the program 304 may include one or more instructions which, when executed, cause the system 300 to select a pre-calibrated listening sector that contains a source of sound. Such instructions may cause the apparatus to determine whether a source of sound lies within an initial sector or on a particular side of the initial sector. If the source of sound does not lie within the default sector, the instructions may, when executed, select a different sector on the particular side of the default sector. The different sector may be characterized by an attenuation of the input signals that is closest to an optimum value. These instructions may, when executed, calculate an attenuation of input signals from the microphone array 322 and compare the attenuation to an optimum value.
- the instructions may, when executed, cause the apparatus 300 to determine a value of an attenuation of the input signals for one or more sectors and select a sector for which the attenuation is closest to an optimum value. Examples of such a technique are described, e.g., in U.S. patent application Ser. No. 11/381,725, to Xiadong Mao, entitled “METHODS AND APPARATUS FOR TARGETED SOUND DETECTION” filed May 4, 2006, the disclosures of which are incorporated herein by reference.
- Signals from the inertial sensor 332 may provide part of a tracking information input and signals generated from the image capture unit 323 from tracking the one or more light sources 334 may provide another part of the tracking information input.
- such "mixed mode" signals may be used in a football-type video game in which a quarterback pitches the ball to the right after a head fake to the left.
- a game player holding the controller 330 may turn his head to the left and make a sound while making a pitching motion, swinging the controller out to the right as if it were the football.
- the microphone array 322 in conjunction with "acoustic radar" program code can track the user's voice.
- the image capture unit 323 can track the motion of the user's head or track other commands that do not require sound or use of the controller.
- the sensor 332 may track the motion of the joystick controller (representing the football).
- the image capture unit 323 may also track the light sources 334 on the controller 330 .
- the user may release the "ball" upon reaching a certain amount and/or direction of acceleration of the joystick controller 330 or upon a key command triggered by pressing a button on the controller 330.
- an inertial signal e.g., from an accelerometer or gyroscope may be used to determine a location of the controller 330 .
- an acceleration signal from an accelerometer may be integrated once with respect to time to determine a change in velocity and the velocity may be integrated with respect to time to determine a change in position. If values of the initial position and velocity at some time are known then the absolute position may be determined using these values and the changes in velocity and position.
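- By way of illustration only, the following sketch integrates acceleration once to update velocity and again to update position, given known initial values, using simple Euler integration with a fixed sample interval. A real tracker would also correct for gravity and sensor bias.

```python
# Sketch: integrate controller acceleration once to update velocity and again
# to update position, given known initial values. A fixed sample interval and
# simple Euler integration are assumed.

def integrate(accels, dt, v0=0.0, p0=0.0):
    """`accels` is a list of accelerations (m/s^2) along one axis, sampled
    every `dt` seconds. Returns lists of velocities and positions."""
    velocities, positions = [], []
    v, p = v0, p0
    for a in accels:
        v += a * dt           # change in velocity
        p += v * dt           # change in position
        velocities.append(v)
        positions.append(p)
    return velocities, positions


if __name__ == "__main__":
    # 1 m/s^2 for half a second at 100 Hz, starting from rest.
    vel, pos = integrate([1.0] * 50, dt=0.01)
    print(round(vel[-1], 3), round(pos[-1], 3))   # about 0.5 m/s and 0.128 m
```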
- Although position determination using an inertial sensor may be made more quickly than using the image capture unit 323 and light sources 334, the inertial sensor 332 may be subject to a type of error known as "drift", in which errors that accumulate over time can lead to a discrepancy D between the position of the joystick 330 calculated from the inertial signal (shown in phantom) and the actual position of the joystick controller 330.
- Embodiments of the present invention allow a number of ways to deal with such errors.
- the drift may be cancelled out manually by re-setting the initial position of the controller 330 to be equal to the current calculated position.
- a user may use one or more of the buttons on the controller 330 to trigger a command to re-set the initial position.
- image-based drift compensation may be implemented by re-setting the current position to a position determined from an image obtained from the image capture unit 323 as a reference.
- image-based drift compensation may be implemented manually, e.g., when the user triggers one or more of the buttons on the joystick controller 330 .
- image-based drift compensation may be implemented automatically, e.g., at regular intervals of time or in response to game play.
- Such techniques may be implemented by program code instructions 304 which may be stored in the memory 302 and executed by the processor 301 .
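- By way of illustration only, the following sketch combines fast inertial position updates with occasional image-based fixes that cancel the accumulated drift. The reset interval and data layout are assumptions introduced for this example.

```python
# Sketch: combine fast inertial position estimates with occasional image-based
# fixes that reset accumulated drift. The reset interval and data layout are
# illustrative assumptions.

def track_position(inertial_deltas, camera_fixes, reset_every=30):
    """`inertial_deltas[i]` is the position change estimated from the inertial
    signal for frame i; `camera_fixes[i]` is an image-based absolute position
    (or None if unavailable). Returns the per-frame position estimate."""
    position = 0.0
    out = []
    for i, delta in enumerate(inertial_deltas):
        position += delta
        fix = camera_fixes[i]
        if fix is not None and i % reset_every == 0:
            position = fix          # cancel accumulated drift
        out.append(position)
    return out


if __name__ == "__main__":
    deltas = [0.011] * 60                     # slightly biased inertial steps
    fixes = [0.01 * i if i % 30 == 0 else None for i in range(60)]
    print(round(track_position(deltas, fixes)[-1], 3))
```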
- the signal from the inertial sensor 332 may be oversampled and a sliding average may be computed from the oversampled signal to remove spurious data from the inertial sensor signal.
- other data sampling and manipulation techniques may be used to adjust the signal from the inertial sensor to remove or reduce the significance of spurious data. The choice of technique may depend on the nature of the signal, computations to be performed with the signal, the nature of game play or some combination of two or more of these.
- Such techniques may be implemented by instructions of the program 304 which may be stored in the memory 302 and executed by the processor 301 .
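- By way of illustration only, the following sketch smooths an oversampled inertial signal with a sliding (trailing window) average to suppress a spurious sample. The window length is an illustrative choice.

```python
# Sketch: smooth an oversampled inertial signal with a sliding average to
# suppress spurious samples. The window length is an illustrative choice.

def sliding_average(samples, window=8):
    """Return a smoothed copy of `samples` using a trailing window average."""
    out = []
    for i in range(len(samples)):
        start = max(0, i - window + 1)
        chunk = samples[start:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out


if __name__ == "__main__":
    raw = [0.0] * 10 + [5.0] + [0.0] * 10      # a single spurious spike
    smooth = sliding_average(raw)
    print(max(raw), round(max(smooth), 3))     # 5.0 vs 0.625 after smoothing
```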
- the processor 301 may perform analysis of inertial signal data 306 as described above in response to the data 306 and program code instructions of a program 304 stored and retrieved by the memory 302 and executed by the processor module 301 .
- the processor may implement certain virtual world simulation functions described above as part of the program 304 .
- the program 304 may implement all or part of various methods for communicating with a virtual world and/or methods for interaction with a three-dimensional virtual world and/or avatar email communication as described above. Code portions of the program 304 may conform to any one of a number of different programming languages such as Assembly, C++, JAVA or a number of other languages.
- the processor module 301 forms a general-purpose computer that becomes a specific purpose computer when executing programs such as the program code 304 .
- Although the program code 304 is described herein as being implemented in software and executed upon a general purpose computer, those skilled in the art will realize that the method of task management could alternatively be implemented using hardware such as an application specific integrated circuit (ASIC) or other hardware circuitry. As such, it should be understood that embodiments of the invention can be implemented, in whole or in part, in software, hardware or some combination of both.
- the program code 304 may include a set of processor readable instructions that direct the one or more processors to analyze signals from the inertial sensor 332 to generate position and/or orientation information and utilize the information during play of a video game, during communication with a virtual world or during interaction with a three-dimensional virtual world.
- the program code 304 may optionally include processor executable instructions including one or more instructions which, when executed, cause the image capture unit 323 to monitor a field of view in front of the image capture unit 323, identify one or more of the light sources 334 within the field of view, detect a change in light emitted from the light source(s) 334, and, in response to detecting the change, trigger an input command to the processor 301.
- the program code 304 may optionally include processor executable instructions including one or more instructions which, when executed, use signals from the inertial sensor and signals generated from the image capture unit from tracking the one or more light sources as inputs to a game system, e.g., as described above.
- the program code 304 may optionally include processor executable instructions including one or more instructions which, when executed, compensate for drift in the inertial sensor 332.
- Although embodiments of the present invention are described in terms of examples related to a video game controller 330, embodiments of the invention, including the system 300, may be used with any user-manipulated body, molded object, knob, structure, etc., with inertial sensing capability and inertial sensor signal transmission capability, wireless or otherwise.
- FIG. 4 illustrates a type of cell processor 400 according to an embodiment of the present invention.
- the cell processor 400 may be used as the processor 301 of FIG. 3 or in the simulation servers 22 or view servers 24 of FIG. 1E .
- the cell processor 400 includes a main memory 402 , power processor element (PPE) 404 , and a number of synergistic processor elements (SPEs) 406 .
- the cell processor 400 includes a single PPE 404 and eight SPEs 406.
- seven of the SPEs 406 may be used for parallel processing and one may be reserved as a back-up in case one of the other seven fails.
- a cell processor may alternatively include multiple groups of PPEs (PPE groups) and multiple groups of SPEs (SPE groups).
- hardware resources can be shared between units within a group.
- the SPEs and PPEs must appear to software as independent elements.
- embodiments of the present invention are not limited to use with the configuration shown in FIG. 4 .
- the main memory 402 typically includes both general-purpose and nonvolatile storage, as well as special-purpose hardware registers or arrays used for functions such as system configuration, data-transfer synchronization, memory-mapped I/O, and I/O subsystems.
- a video game program 403 may be resident in main memory 402 .
- the video game program 403 may include inertial, image and acoustic analyzers and a mixer configured as described with respect to FIGS. 4, 5A, 5B or 5C above, or some combination of these.
- the program 403 may run on the PPE.
- the program 403 may be divided up into multiple signal processing tasks that can be executed on the SPEs and/or PPE.
- the PPE 404 may be a 64-bit PowerPC Processor Unit (PPU) with associated caches L1 and L2.
- the PPE 404 is a general-purpose processing unit, which can access system management resources (such as the memory-protection tables, for example). Hardware resources may be mapped explicitly to a real address space as seen by the PPE. Therefore, the PPE can address any of these resources directly by using an appropriate effective address value.
- a primary function of the PPE 404 is the management and allocation of tasks for the SPEs 406 in the cell processor 400 .
- the cell processor 400 may have multiple PPEs organized into PPE groups, of which there may be more than one. These PPE groups may share access to the main memory 402. Furthermore, the cell processor 400 may include two or more groups of SPEs. The SPE groups may also share access to the main memory 402. Such configurations are within the scope of the present invention.
- Each SPE 406 includes a synergistic processor unit (SPU) and its own local storage area LS.
- the local storage LS may include one or more separate areas of memory storage, each one associated with a specific SPU.
- Each SPU may be configured to only execute instructions (including data load and data store operations) from within its own associated local storage domain. In such a configuration, data transfers between the local storage LS and elsewhere in the system 400 may be performed by issuing direct memory access (DMA) commands from the memory flow controller (MFC) to transfer data to or from the local storage domain (of the individual SPE).
- the SPUs are less complex computational units than the PPE 404 in that they do not perform any system management functions.
- the SPUs generally have a single instruction, multiple data (SIMD) capability and typically process data and initiate any required data transfers (subject to access properties set up by the PPE) in order to perform their allocated tasks.
- the purpose of the SPU is to enable applications that require a higher computational unit density and can effectively use the provided instruction set.
- a significant number of SPEs in a system managed by the PPE 404 allow for cost-effective processing over a wide range of applications.
- Each SPE 406 may include a dedicated memory flow controller (MFC) that includes an associated memory management unit that can hold and process memory-protection and access-permission information.
- MFC provides the primary method for data transfer, protection, and synchronization between main storage of the cell processor and the local storage of an SPE.
- An MFC command describes the transfer to be performed. Commands for transferring data are sometimes referred to as MFC direct memory access (DMA) commands (or MFC DMA commands).
- Each MFC may support multiple DMA transfers at the same time and can maintain and process multiple MFC commands.
- Each MFC DMA data transfer command request may involve both a local storage address (LSA) and an effective address (EA).
- the local storage address may directly address only the local storage area of its associated SPE.
- the effective address may have a more general application, e.g., it may be able to reference main storage, including all the SPE local storage areas, if they are aliased into the real address space.
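- By way of illustration only, the following is a conceptual toy model, not the real Cell SDK interface, of an MFC DMA command that moves data between main storage, addressed by an effective address (EA), and an SPE's local store, addressed by a local storage address (LSA).

```python
# Conceptual toy model (not the real Cell SDK API) of an MFC DMA command that
# moves data between main storage, addressed by an effective address (EA), and
# an SPE's local store, addressed by a local storage address (LSA).

class ToySPE:
    def __init__(self, local_store_size=256 * 1024):
        self.local_store = bytearray(local_store_size)

    def mfc_get(self, main_memory, lsa, ea, size):
        """DMA 'get': copy `size` bytes from main storage at EA into the
        local store at LSA."""
        self.local_store[lsa:lsa + size] = main_memory[ea:ea + size]

    def mfc_put(self, main_memory, lsa, ea, size):
        """DMA 'put': copy `size` bytes from the local store at LSA back to
        main storage at EA."""
        main_memory[ea:ea + size] = self.local_store[lsa:lsa + size]


if __name__ == "__main__":
    main_memory = bytearray(b"virtual world task data" + bytes(1000))
    spe = ToySPE()
    spe.mfc_get(main_memory, lsa=0, ea=0, size=23)      # pull task data in
    spe.local_store[0:7] = b"updated"                   # SPU works on it locally
    spe.mfc_put(main_memory, lsa=0, ea=0, size=23)      # write results back
    print(main_memory[:23].decode())
```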
- the SPEs 406 and PPE 404 may include signal notification registers that are tied to signaling events.
- the PPE 404 and SPEs 406 may be coupled by a star topology in which the PPE 404 acts as a router to transmit messages to the SPEs 406 .
- each SPE 406 and the PPE 404 may have a one-way signal notification register referred to as a mailbox.
- the mailbox can be used by an SPE 406 for host operating system (OS) synchronization.
- the cell processor 400 may include an input/output (I/O) function 408 through which the cell processor 400 may interface with peripheral devices, such as a microphone array 412 and optional image capture unit 413 and a game/virtual world controller 730 .
- the controller unit 730 may include an inertial sensor 732 , and light sources 734 .
- an Element Interconnect Bus 410 may connect the various components listed above. Each SPE and the PPE can access the bus 410 through a bus interface unit (BIU).
- the cell processor 400 may also include two controllers typically found in a processor: a Memory Interface Controller MIC that controls the flow of data between the bus 410 and the main memory 402, and a Bus Interface Controller BIC, which controls the flow of data between the I/O 408 and the bus 410.
- Although the requirements for the MIC, BIC, BIUs and bus 410 may vary widely for different implementations, those of skill in the art will be familiar with their functions and with circuits for implementing them.
- the cell processor 400 may also include an internal interrupt controller IIC.
- the IIC component manages the priority of the interrupts presented to the PPE.
- the IIC allows interrupts from the other components of the cell processor 400 to be handled without using a main system interrupt controller.
- the IIC may be regarded as a second level controller.
- the main system interrupt controller may handle interrupts originating external to the cell processor.
- certain computations that facilitate interaction with the virtual world may be performed in parallel using the PPE 404 and/or one or more of the SPE 406 . Such computations may be run as one or more separate tasks that different SPE 406 may take as they become available.
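- By way of illustration only, the following conceptual sketch uses ordinary threads as a stand-in for SPEs taking virtual-world tasks from a shared queue as they become available. It models the scheduling idea only, not the actual Cell programming model.

```python
# Conceptual sketch only: ordinary Python threads stand in for SPEs taking
# virtual-world tasks from a shared queue as they become available.

import queue
import threading

task_queue = queue.Queue()
results = []
results_lock = threading.Lock()

def spe_worker(worker_id):
    while True:
        task = task_queue.get()
        if task is None:                    # sentinel: no more work
            task_queue.task_done()
            return
        outcome = f"task '{task}' handled by SPE-{worker_id}"
        with results_lock:
            results.append(outcome)
        task_queue.task_done()


if __name__ == "__main__":
    workers = [threading.Thread(target=spe_worker, args=(i,)) for i in range(4)]
    for w in workers:
        w.start()
    for task in ["physics", "avatar animation", "audio mix", "chat routing"]:
        task_queue.put(task)
    for _ in workers:
        task_queue.put(None)                # one sentinel per worker
    for w in workers:
        w.join()
    print(len(results), "tasks completed")
```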
Description
- This application claims the benefit of priority of commonly-assigned, co-pending United Kingdom patent application no. ______, entitled “ENTERTAINMENT DEVICE”, filed Mar. 1, 2007, the entire disclosures of which are incorporated herein by reference.
- This application claims the benefit of priority of co-pending U.S. Provisional patent application No. 60/892,397, entitled “VIRTUAL WORLD COMMUNICATION SYSTEMS AND METHODS”, filed Mar. 1, 2007, the entire disclosures of which are incorporated herein by reference.
- This application is related to commonly-assigned, co-pending U.S. patent application Ser. No. ______, to Tomas Gillo et al., entitled "SYSTEM AND METHOD FOR COMMUNICATING WITH A VIRTUAL WORLD", attorney docket no. SCEA0701JDI01 filed Mar. 5, 2007, the entire disclosures of which are incorporated herein by reference.
- This application is related to commonly-assigned, co-pending U.S. patent application Ser. No. ______, to Tomas Gillo et al., entitled “SYSTEM AND METHOD FOR ROUTING COMMUNICATIONS AMONG REAL AND VIRTUAL COMMUNICATION DEVICES”, attorney docket no. SCEA0701JDI02 filed Mar. 5, 2007, the entire disclosures of which are incorporated herein by reference.
- This application is related to commonly-assigned, co-pending U.S. patent application Ser. No. ______, to Tomas Gillo et al., entitled “SYSTEM AND METHOD FOR COMMUNICATING WITH AN AVATAR”, attorney docket no. SCEA0701JDI03 filed Mar. 5, 2007, the entire disclosures of which are incorporated herein by reference.
- This application is related to commonly-assigned, co-pending U.S. patent application Ser. No. ______, to Tomas Gillo et al., entitled “AVATAR CUSTOMIZATION”, attorney docket no. SCEA0701JDI05 filed Mar. 5, 2007, the entire disclosures of which are incorporated herein by reference.
- This application is related to commonly-assigned, co-pending U.S. patent application Ser. No. ______, entitled "AVATAR EMAIL AND METHODS FOR COMMUNICATING BETWEEN REAL AND VIRTUAL WORLDS", attorney docket no. SCEA0701JDI06 filed Mar. 5, 2007, the entire disclosures of which are incorporated herein by reference.
- This application is related to commonly-assigned, co-pending United Kingdom patent application no. ______, entitled “ENTERTAINMENT DEVICE AND METHOD”, (attorney docket no P028337GB) filed Mar. 5, 2007, the entire disclosures of which are incorporated herein by reference.
- This application is related to commonly-assigned, co-pending United Kingdom patent application no. ______, entitled “ENTERTAINMENT DEVICE AND METHOD”, (attorney docket no. P028338 GB) filed Mar. 5, 2007, the entire disclosures of which are incorporated herein by reference.
- This application is related to commonly-assigned, co-pending United Kingdom patent application no. ______, entitled “ENTERTAINMENT DEVICE AND METHOD”, (attorney docket no. P028379 GB) filed Mar. 5, 2007, the entire disclosures of which are incorporated herein by reference.
- This invention is related to interactive computer entertainment and more specifically to communication among users of a virtual world.
- A virtual world is a simulated environment in which users may interact with each other via one or more computer processors. Users may appear on a video screen in the form of representations referred to as avatars. The degree of interaction between the avatars and the simulated environment is implemented by one or more computer applications that govern such interactions as simulated physics, exchange of information between users, and the like. The nature of interactions among users of the virtual world is often limited by the constraints of the system implementing the virtual world.
- It is within this context that embodiments of the invention arise.
- The teachings of the present invention can be readily understood by considering the following detailed description in conjunction with the accompanying drawings, in which:
-
FIG. 1A is a screen shot illustrating an example of a world map representing a virtual world that may be used in conjunction with embodiments of the present invention. -
FIG. 1B is a screen shot illustrating an example of a public space in a virtual world that may be used in conjunction with embodiments of the present invention. -
FIG. 1C is a screen shot illustrating an example of a private space in a virtual world that may be used in conjunction with embodiments of the present invention. -
FIG. 1D is a screen shot illustrating an example of a virtual communication device according to an embodiment of the present invention. -
FIG. 1E is a schematic diagram of a virtual world system according to an embodiment of the present invention. -
FIG. 1F is a functional block diagram showing one implementation of a multimedia processing apparatus by which a user may perceive and interact with a virtual world according to an embodiment of the present invention. -
FIG. 2A is a functional block diagram showing one implementation of the multimedia processing apparatus that may be used in conjunction with embodiments of the invention. -
FIG. 2B shows an implementation of a multimedia processing system that may be used in conjunction with embodiments of the invention. -
FIGS. 2C-2D illustrate an image capture device including an array of microphones for use with embodiments of the invention. -
FIG. 2E is a block diagram illustrating examples of call routing between real and virtual communication devices according to an embodiment of the present invention. -
FIG. 2F is diagrammatically illustrates an example of communication between real and virtual communication devices in accordance with an embodiment of the present invention. -
FIG. 3 is a block diagram illustrating a video game apparatus that may be used to interface with a virtual world according to an embodiment of the present invention. -
FIG. 4 is a block diagram of a cell processor implementation of a video game apparatus according to an embodiment of the present invention. - Although the following detailed description contains many specific details for the purposes of illustration, anyone of ordinary skill in the art will appreciate that many variations and alterations to the following details are within the scope of the invention. Accordingly, the exemplary embodiments of the invention described below are set forth without any loss of generality to, and without imposing limitations upon, the claimed invention.
- According to an embodiment of the present invention users may interact with a virtual world. As used herein the term virtual world means a representation of a real or fictitious environment having rules of interaction simulated by means of one or more processors that a real user may perceive via one or more display devices and/or may interact with via one or more user interfaces. As used herein, the term user interface refers to a real device by which a user may send inputs to or receive outputs from the virtual world. The virtual world may be simulated by one or more processor modules. Multiple processor modules may be linked together via a network. The user may interact with the virtual world via a user interface device that can communicate with the processor modules and other user interface devices via a network. Certain aspects of the virtual world may be presented to the user in graphical form on a graphical display such as a computer monitor, television monitor or similar display. Certain other aspects of the virtual world may be presented to the user in audible form on a speaker, which may be associated with the graphical display.
- By way of example, the virtual world may comprise a simulated public space and one or more simulated private spaces. In some embodiments, such public and private spaces may be presented to the user via a graphic display that presents a schematic representation or map of the virtual world. By way of example, as shown in
FIG. 1A , a world map 10 may indicate a "home" location 11. The home location 11 may be a private space within the virtual world that is exclusive to a particular user. Other users may "visit" the home location 11 only at the invitation of the user associated with that location. The world map 10 may also show various other locations 12 that the user may visit, e.g., by selecting them with a cursor or similar graphical user interface. These locations may be sponsored by vendors and may be represented on the map by their respective corporate logos or other well-recognized symbols. Such locations may be visited by a user of the virtual world. The virtual world may or may not have a fixed amount of virtual "real estate". In preferred embodiments, the amount of virtual real estate is not fixed. - In certain embodiments of the present invention, the virtual world may have multiple public spaces referred to herein as "lobbies". Each lobby may have associated with it a separate chat channel so that users in the lobby may interact with one another. Each lobby may have the appearance of a lobby for a public building such as a hotel, office building, apartment building, theater or other public building.
FIG. 1B depicts a screen shot of such a lobby. The lobby may contain items with which users may interact. Examples of such items include games. As may be seen from FIG. 1B , portions of the virtual world may be presented graphically to the user in three-dimensional (3D) form. As used herein, the term three-dimensional (3D) form refers to a representation having the dimensions of length, width and depth (or at least the illusion of depth). The lobby may contain "screens" 13, which are areas in spaces that can be used to show photos or canned or streaming video. - Within the virtual world, users may be represented by
avatars 14. Each avatar within the virtual world may be uniquely associated with a different user. The name or pseudonym of a user may be displayed next to the avatar so that users may readily identify each other. A particular user's interactions with the virtual world may be represented by one or more corresponding actions of the avatar. Different users may interact with each other in the public space via their avatars. An avatar representing a user could have an appearance similar to that of a person, an animal or an object. An avatar in the form of a person may have the same gender as the user or a different gender. The avatar may be shown on the display so that the user can see the avatar along with other objects in the virtual world. Alternatively, the display may show the world from the point of view of the avatar without showing itself. The user's (or avatar's) perspective on the virtual world may be thought of as being the view of a virtual camera. As used herein, a virtual camera refers to a point of view within the virtual world that may be used for rendering two-dimensional images of a 3D scene within the virtual world. Users may interact with each other through their avatars by means of the chat channels associated with each lobby. Users may enter text for chat with other users via their user interface. The text may then appear over or next to the user's avatar, e.g., in the form of comic-book style dialogue bubbles, sometimes referred to as chat bubbles. Such chat may be facilitated by the use of a canned phrase chat system sometimes referred to as quick chat. With quick chat, a user may select one or more chat phrases from a menu. - In embodiments of the present invention, the public spaces are public in the sense that they are not uniquely associated with any particular user or group of users and no user or group of users can exclude another user from the public space. Each private space, by contrast, is associated with a particular user from among a plurality of users. A private space is private in the sense that the particular user associated with the private space may restrict access to the private space by other users. The private spaces may take on the appearance of familiar private real estate. For example, as seen in
FIG. 1C , a private space may be configured to resemble an apartment or private home. Virtual items may be included within the private space. Examples of virtual items include, but are not limited to, furniture 15, decorations 16 and virtual communication devices 17, such as a virtual radio or video screen. - In certain embodiments of the present invention, users of the virtual world may communicate by means of virtual communication devices. As used herein, the term virtual communication device generally refers to a virtual world simulation of a real world device using assets of the system that generates the virtual world. By way of example, as shown in
FIG. 1D , a virtual communication device 18 may be presented on a display in a form that facilitates operation of the device by the user. In the example depicted in FIG. 1D , for instance, the virtual communication device has the appearance of a portable game console, e.g., a Sony Playstation Portable (PSP). Buttons on a real controller that the user uses to interact with the virtual world may be mapped to corresponding buttons 19 or other controls on the virtual communication device to facilitate interaction between the user and the virtual communication device. - A virtual communication device may have associated with it a position within the virtual world that may be fixed or movable. The communication device may be simulated by simulating an interface for the simulated communication device in the virtual world and presenting the simulated interface to a user for interaction therewith. By way of example, the virtual device may have a form or appearance in the virtual world by which it can be recognized by a user. This form or appearance may be configured to mimic that of a corresponding real world device in a way that facilitates user interaction. For example, a virtual phone may be shown as having buttons which the user may operate by using the controller. The virtual phone may further be shown as having a speaker, a mouthpiece and perhaps a graphic display screen. The simulated communication device may be a simulated hand-held communication device, such as a telephone, mobile telephone (e.g., cell phone or cordless phone), voice-over-internet-protocol (VoIP) phone, portable text message device, portable email device, portable game device, two-way radio or other hand-held device.
- According to an embodiment of the present invention a virtual communication device may be simulated in the virtual world and communication may take place between the simulated communication device and a real communication device. The real communication device may be a real hand-held communication device, such as a telephone, mobile telephone (e.g., cell phone or cordless phone), voice-over-internet protocol (VoIP) phone, portable text message device, portable email device, portable game device, two-way radio or other hand-held device. Preferably, the real communication device is configured to communicate with other real communication devices via one or more communication channels that are independent of the virtual world. As used herein, the term “communication channel independent of the virtual world” means a channel of communication that does not require the existence of the virtual world in order for communication to take place over that channel. For example, a virtual telephone may be used to make a telephone call to a real cellular phone (or vice versa) via communication assets provided by the virtual world. The real cellular phone, however, could still make calls to other real cellular phones or telephones even if the virtual world did not exist. In some embodiments, the real phone may produce a distinctive ringtone when receiving calls from a virtual phone. In alternative embodiments, the simulated and real communication devices may communicate with each other by means of text messages and/or video images.
-
FIG. 1E is a block diagram illustrating an example of a system 20 that may be used to simulate a virtual world. The system 20 includes simulation servers 22 and view servers 24. Each simulation server 22 may include one or more processor modules that execute coded instructions that simulate some part of the virtual world. By way of example, each simulation server may include one or more multiple-core processors, e.g., dual-core, quad-core or Cell processors. Although a limited number of simulation servers 22 and a single view server 24 are depicted in FIG. 1E, this configuration may be arbitrarily extended to any number of servers. The numbers of simulation servers 22 and view servers 24 may both be scaled. For example, one simulation server 22 may accommodate many view servers 24, or many simulation servers 22 may accommodate one view server 24. Adding more simulation servers 22 may allow for a bigger and/or better simulation of the virtual world. Adding more view servers 24 allows the system 20 to handle more users. Of course, the system 20 may accommodate both a bigger and better simulation and more users by adding more of both simulation servers 22 and view servers 24. Theoretically, the number of simulation servers 22 may be infinitely scalable. However, given a finite level of network bandwidth, the number of view servers 24 may be reasonably expected to reach a finite limit after a certain number of users due to computation and network bandwidth limitations. - For the purpose of example and without limitation of embodiments of the invention, examples will be described herein with respect to Cell processors. Cell processors are described in detail, e.g., in Cell Broadband Engine Architecture, copyright International Business Machines Corporation, Sony Computer Entertainment Incorporated, Toshiba Corporation Aug. 8, 2005, a copy of which may be downloaded at http://cell.scei.co.jp/, the entire contents of which are incorporated herein by reference. A typical Cell processor has a power processor unit (PPU) and up to 8 additional processors referred to as synergistic processing units (SPU). Each SPU is typically a single chip or part of a single chip containing a main processor and a co-processor. All of the SPUs and the PPU can access a main memory, e.g., through a memory flow controller (MFC). The SPUs can perform parallel processing of operations in conjunction with a program running on the main processor. The SPUs have small local memories (typically about 256 kilobytes) that must be managed by software; code and data must be manually transferred to/from the local SPU memories. For high performance, this code and data must be managed from SPU software (PPU software involvement must be minimized). There are many techniques for managing code and data from the SPU. Examples of such techniques are described, e.g., in U.S. patent application Ser. No. 11/238,077 to John P. Bates, Payton White and Attila Vass entitled “CELL PROCESSOR APPARATUS AND METHODS”, filed Sep. 27, 2005, U.S. patent application Ser. No. 11/238,095 to Richard B. Stenson and John P. Bates entitled “CELL PROCESSOR TASK AND DATA MANAGEMENT” filed Sep. 27, 2005, U.S. patent application Ser. No. 11/238,086 to Tatsuya Iwamoto entitled “CELL PROCESSOR TASK AND DATA MANAGEMENT” filed Sep. 27, 2005, U.S. patent application Ser. No. 11/238,087 to John P. Bates, Payton R. White, Richard B. Stenson, Howard Berkey, Attila Vass, Mark Cerny and John Morgan entitled “SPU TASK MANAGER FOR CELL PROCESSOR” filed Sep. 27, 2005, U.S. patent application Ser. No.
11/257,761 to Tatsuya Iwamoto entitled “SECURE OPERATION OF CELL PROCESSORS” filed Oct. 24, 2005, U.S. patent application Ser. No. 11/461,390 to John P. Bates, Keisuke Inoue and Mark Cerny entitled CELL PROCESSOR METHODS AND APPARATUS, filed Jul. 31, 2006, the entire contents of all of which are incorporated herein by reference. The
simulation servers 22 may communicate with each other and with theview servers 24 via high speed data transfer links 26. By way of example, the data transfer links may be 10 gigabit per second Ethernet connections. Thesimulation servers 22 may be either remotely located with respect to each other or they may be located proximate each other. To optimize data transfer it may be desirable to locate thesimulation servers 22 in fairly close physical proximity, e.g., within the same room or on the same server rack. Theview servers 24 receive simulation data from thesimulation servers 22 and send view data to remotely distributedclient devices 28 over awide area network 30, such as the Internet or other wide area network. Theclient devices 28 may be any suitable device that can communicate over thenetwork 30. Communication over thenetwork 30 may be slower than over thefast data links 26. - By way of example, the
client devices 28 may be video game console devices, such as the Sony PlayStation 3. Alternatively, theclient devices 28 may be any computer device from handheld to workstation, etc. A handheld video game device, such as a PlayStation Portable from Sony Computer Entertainment of Tokyo, Japan is one example among others of a handheld device that may be used as aclient device 28 in embodiments of the present invention. Theclient devices 28 may send theview servers 24 instructions relating to their desired interaction with other clients' avatars and with the simulated environment. For example, a client user may wish to move his or her avatar to a different portion of the simulated environment. Eachclient device 28 sends instructions to one of theview servers 24. These instructions are relayed by the view servers to the simulation servers that perform the necessary computations to simulate the interactions. -
Other devices 29 may also communicate with each other over the network 30. Examples of such other devices include telephones, cellular phones, voice over internet protocol (VoIP) phones, personal computers, portable web browsers, portable email devices, text messaging devices, portable game devices and the like. Communication between such other devices 29 may be independent of the simulation servers 22 and view servers 24 that generate the virtual world. Although the other devices 29 are not considered part of the system 20, they may interact with it via the network 30. - The users of the
client devices 28 are often interested in things around them. Theview servers 24 make sure that eachclient 28 receives relevant data about its surroundings in the proper order. Theview servers 24 determine what the client needs based on its avatar's location, orientation, motion, etc. By way of example, each view server may generate the code and/or data that the client devices use to present views of the public spaces or private spaces. - To implement a complex simulated world, it may be desirable to establish peer-to-peer communication between clients and servers or between client devices and other client devices. For example, audio/video (A/V) chat among users in the same public space may be implemented by direct peer-to-peer communication among the users. Such peer-to-peer communication may reduce the load on the servers. Embodiments of the invention may make use of Peerlib to traverse network address translators (NATs) to establish peer-to-peer connections among users in the same public space. NAT traversal is described e.g., in U.S. patent application Ser. No. 11/245,853 to Yutaka Takeda, entitled “METHOD FOR PEER-TO-PEER COMMUNICATION TRAVERSING NETWORK ADDRESS TRANSLATORS OF TYPE SYMMETRIC” filed Oct. 4, 2005, which is incorporated herein by reference.
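- The relevance filtering performed by the view servers 24 may be illustrated with a brief sketch. The following Python fragment is illustrative only and not part of the described system; the function name, the two-dimensional coordinates and the distance-based relevance test are assumptions chosen for clarity.

    import math

    def relevant_updates(avatar_pos, entities, radius=50.0):
        # Keep only entities within 'radius' of the client's avatar and
        # return the nearest first, so the client receives what matters most.
        nearby = []
        for entity_id, (ex, ey), payload in entities:
            d = math.hypot(ex - avatar_pos[0], ey - avatar_pos[1])
            if d <= radius:
                nearby.append((d, entity_id, payload))
        nearby.sort()  # nearest entities first
        return [(entity_id, payload) for _, entity_id, payload in nearby]

    # Example: a view server deciding what one client needs to render its surroundings.
    entities = [("tree", (10.0, 5.0), "static"),
                ("avatar42", (200.0, 9.0), "moving")]
    print(relevant_updates((12.0, 4.0), entities))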
-
FIG. 1F shows one implementation of a multimedia processing system 100 that may be used as a client device 28 and a user interface with the virtual world generated by the system 20. - The
processing system 100 may include a composite apparatus capable of processing a plurality of contents, such as still images, moving images, music, broadcasts, and games, spread over a plurality of media. The processing of a plurality of contents includes presentation, recording, and other related tasks performed by themultimedia processing system 100. By way of example, themultimedia processing system 100 includes amultimedia processing apparatus 102, a display 104 (e.g., a monitor or television), and acontroller 114. Buttons on thecontroller 114 may be mapped tocorresponding buttons 19 on thevirtual controller 18 shown inFIG. 1D and described above. - The
multimedia processing apparatus 102 may receive multimedia contents from various media sources, such as broadcast media, the Internet (or other network) media, an optical disk 110, and a memory card 112. Contents from the broadcast media may be received through a broadcast data channel 106, while contents from the Internet media can be received through a network data channel 108. The broadcast and network data channels 106, 108 may be connected to the multimedia processing apparatus 102. The received contents can also be used by various functions (e.g., a game) of the multimedia processing apparatus 102 in addition to interaction with the virtual world. - The received multimedia contents may be displayed on the
display 104. The display may include a video monitor, such as a cathode ray tube (CRT) or flat screen for display of still or moving visual images. Thedisplay 104 may further include one or more audio speakers for presenting sounds to the user. Thecontroller 114 allows the user to input various instructions related to multimedia processing, and to control functions of themultimedia processing apparatus 102. - The
system 100 may include audio and video inputs to facilitate user interaction with visual images and/or audible sounds presented by the display 104. Such inputs may include a video image capture device 116, such as a camera, and an audio signal capture device 118, such as a microphone. The video image capture device 116 may be placed on top of or integrated into the display 104 and coupled to the multimedia processing apparatus 102, e.g., by cables or over-the-air connections, such as optical (e.g., infrared) or radiofrequency (e.g., Bluetooth) data links. It should be understood that the image capture device 116 may be placed in any other proximate location that will allow it to capture images of objects located generally in front of the display 104. Techniques for capturing these movements and interactions can vary, but examples of such techniques are described in United Kingdom Applications GB 0304024.3 (PCT/GB2004/000693) and GB 0304022.7 (PCT/GB2004/000703), each filed on Feb. 21, 2003, and each of which is hereby incorporated by reference. The image capture device 116 may be a digital camera, e.g., a USB 2.0 type camera. Such a camera may have a field of view of about 75 degrees, an f-stop of about 1.5, and be capable of capturing images at a frame rate of up to about 120 frames per second. By way of example, the video image capture device may be an EyeToy Camera available from Logitech of Fremont, Calif. The media processing apparatus 102 may be a game console, television, digital video recorder (DVR), cable set-top-box, home media server or consumer electronic device, including any device capable of rendering itself subject to control of a user. In alternative embodiments, the image capture device may be a three-dimensional (3D) camera. As used herein, a 3D camera (or zed camera) refers to an image capture device configured to facilitate determining the depth of objects in an image. In this context, the term “depth” refers to a location of an object relative to a direction perpendicular to a plane of the image. -
FIG. 2A is a functional block diagram showing one implementation of themultimedia processing apparatus 102. In the illustrated implementation, themultimedia processing apparatus 102 includes thecontroller 114, videoimage capture device 116, audiosignal capture device 118, a data input/output (I/O)unit 200, adisplay output unit 202, adisplay control unit 204, astorage unit 208, and a game/virtual world processor 206. By way of example, the game/virtual world processor 206 may be or may include a parallel processor such as a cell processor having a power processing unit (PPU) coupled to one or more synergistic processing units (SPU). Cell processors are described, e.g., in U.S. patent application Ser. No. 11/238,077, which is incorporated herein by reference. Themultimedia processing apparatus 102 further includes programs and instructions for performing various functions, such as a data input function, a data retaining function, an image processing function, a rendering function, and other related functions. - The
controller 114 may include a direction-determining unit 222 for determining one or a combination of four directions (i.e., an upward direction, a downward direction, a left direction, and a right direction) from the user input; and an instruction-determiningunit 224 for determining an instruction from the user input. The instruction may include a command to present a multimedia content, to terminate the presentation, to invoke a menu screen, and to issue other related commands and/or instructions. Output of thecontroller 114, videoimage capture device 116 and audiosignal capture device 118 is directed to thedisplay output unit 202, thedisplay control unit 204, and the game/virtual world processor 206. - In the illustrated implementations of
FIGS. 1B and 2A, the direction-determining unit 222 and the instruction-determining unit 224 may be configured with a combination of buttons, circuits, and programs to actuate, sense, and determine the direction and the instruction. The buttons can include cross-shaped keys or joysticks. The button associated with an instruction for invoking a menu screen can be set in a toggle manner so that the menu screen can be toggled between a display mode and a non-display mode each time the button is pressed. - In one implementation, the direction-determining unit 222 may determine the diagonal movements of the button as a binary command in which the movement is ascertained to be in one of two directions. Thus, a diagonal movement between the up direction and the right direction can be ascertained to be in either the up or the right direction. In another implementation, the direction-determining unit 222 may determine the diagonal movements of the button as an analog command in which the movement is ascertained to be in a particular direction up to the accuracy of the measurement. Thus, a diagonal movement between the up direction and the right direction can be ascertained to be in a northeasterly direction. Directional movements may also be determined through interaction between the user, the video
image capture device 116 and thedisplay control 204 as described below. - The data I/
O unit 200 may include abroadcast input unit 212 for inputting broadcast contents via thebroadcast channel 106; anetwork communication unit 214 for inputting and outputting data such as web contents via thenetwork channel 108; adisk reading unit 216 for inputting data stored on adisk 110; and a memorycard reading unit 218 for inputting and outputting data to/from amemory card 112. Output of the data I/O unit 200 may be directed to thedisplay output unit 202, thedisplay control unit 204, thegame processor 206, and thestorage unit 208. - The
display output unit 202 may include adecoder 232, asynthesizer 234, anoutput buffer 236, and an on-screen buffer 238. Thedecoder 232 decodes input data received from the data I/O unit 200 or thestorage unit 208. Thus, the input data may include broadcast contents, movies, and music. Thesynthesizer 234 processes the decoded input data based on user direction/instruction received from thecontroller 114. The output of thesynthesizer 234 is stored in theoutput buffer 236. The on-screen buffer 238 may store image data of a menu screen generated by thedisplay control unit 204. The output of thedisplay output unit 202 is transmitted to thedisplay 104. - The
display control unit 204 may include amenu manager 242, aneffects processor 244, a contents controller 246, and animage generator 248. Themenu manager 242 manages media items and multimedia contents received from thestorage unit 208 and the data I/O unit 200, and shown on the menu screen. Theeffects processor 244 processes operation of icons and icon arrays on the menu screen. Theeffects processor 244 also manages various actions and effects to be displayed on the menu screen. The contents controller 246 controls processing of media items and multimedia contents, and handling of data from the data I/O unit, thestorage unit 208, and the game/virtual world processor 206. Theimage generator 248 operates to generate a menu screen including a medium icon array and a contents icon array. - The game/
virtual world processor 206 executes game and/or virtual world programs using data read from the data I/O unit 200 or from thestorage unit 208. The game/virtual world processor 206 executes a game program or facilitates user interaction with the virtual world based on user instructions received from thecontroller 114. The display data of the executed game program or virtual world interaction is transmitted to thedisplay output unit 202. - In embodiments of the present invention, signals from the video
image capture device 116 and audiosignal capture device 118 allow a user to interact with and manipulate images shown on thedisplay 104. Specifically, embodiments of the invention may allow a user to “grab” and “drag” objects from one location to another on thedisplay 104. As shown inFIG. 2B , the videoimage capture device 116 points at and captures an image IU of a user U. The image IU may then be shown on thedisplay 104 in the background of other images through a technique known as alpha blending. - The term “alpha blending” refers generally to a convex combination of two colors allowing for transparency effects in computer graphics. The value alpha in the color code may range from 0.0 to 1.0, where 0.0 represents a fully transparent color, and 1.0 represents a fully opaque color. By way of example, the value of the resulting color when color Value1 is drawn over a background of color Value0 may be given by:
-
Value=Value0(1.0−alpha)+Value1(alpha) - The alpha component is used to blend to red, green and blue components equally, as in 32-bit RGBA, or, alternatively, there are three alpha values specified corresponding to each of the primary colors for spectral color filtering.
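- As a purely illustrative sketch (not part of the described system), the blend above may be computed per color channel as follows; the helper name and the 8-bit channel representation are assumptions.

    def alpha_blend(background, foreground, alpha):
        # Convex combination of two RGB colors; alpha ranges from 0.0 (fully
        # transparent) to 1.0 (fully opaque), applied equally to each channel.
        return tuple(round(b * (1.0 - alpha) + f * alpha)
                     for b, f in zip(background, foreground))

    # Example: draw a half-transparent red user image over a gray background.
    print(alpha_blend((128, 128, 128), (255, 0, 0), 0.5))  # -> (192, 64, 64)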
- Once the user's hand h is recognized, the effects processor may correlate the directional displacement of the user's hand to directional input such as would normally be received from the
controller 114. Optionally a magnitude of the displacement can control the input speed. - In particular embodiments, the image IU may include the user's head H and hand h. It is noted that to facilitate user interaction with the image IU the user's image IU may be presented on the screen as a mirror image of the user U. Thus, when the user U moves his hand h to the user's left, an image Ih of the hand also moves to the user's left. The
effects processor 244 may be configured to recognize the user's hand h and to recognize changes in the aspect ratio (ratio of height to width) of the hand image Ih. These changes in aspect ratio may be used to signal the controller 114 that the user has “grabbed” or “clicked” on an object 140 presented on the display. The effects processor 244 can then move the selected object with the motion of the image Ih of the user's hand h. In some embodiments, the user may hold a deformable “C”-shaped object 142 that is colored to be more readily recognizable to the effects processor 244 when interpreting the image from the video image capture device 116. Deformation of the object 142, referred to herein as a “clam”, can provide a change in aspect ratio that is recognized as a command to “grab” or “click” an object in the display 104. - It is often desirable for the
effects processor 244 to be able to recognize whether the user U is using his left or right hand to manipulate the object 140 on the display 104. For example, when manipulating an object on the display 104 with the left hand it is often desirable for the object to appear to the left of the user's head H. In such a case the controller may also include software that recognizes the user's hand h, head H, arm A and chest C by their corresponding images Ih, IH, IA, and IC. With this information, the controller 114 can determine whether the user U is using his left or right hand. For example, if the user's hand h is on the left side of his head H and his arm A is not across his chest, it can be determined that the user U is using his left hand. Similarly, if the user's hand h is on the left side of his head and his arm is across his chest, it can be determined that the user U is using his right hand.
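- The aspect-ratio and handedness heuristics described above may be sketched as follows. This fragment is illustrative only; the threshold value and function names are assumptions rather than part of the described system.

    def is_grab(open_aspect, current_aspect, threshold=0.6):
        # A closing hand (or a squeezed "clam") becomes shorter relative to its
        # width, so a large drop in height/width aspect ratio is read as a "grab".
        return current_aspect < open_aspect * threshold

    def likely_hand(hand_x, head_x, arm_crosses_chest):
        # Hand to the left of the head with the arm not across the chest -> left
        # hand; hand to the left of the head with the arm across the chest -> right.
        if hand_x < head_x:
            return "right" if arm_crosses_chest else "left"
        return "left" if arm_crosses_chest else "right"

    print(is_grab(open_aspect=1.8, current_aspect=0.9))                  # True
    print(likely_hand(hand_x=80, head_x=120, arm_crosses_chest=False))  # left

- In certain embodiments of the invention the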
image capture device 116 and audiosignal capture device 118 may be combined into the same piece of equipment. For example,FIGS. 2C-2D depict animage capture device 120 that may be used with themultimedia processing system 100. Thedevice 120 includes an opticalimage capture device 122, e.g., a digital camera (or 3D camera) and one ormore microphones 124. Themicrophones 124 may be arranged in an array and spaced apart from each other at known distances. By way of example and without loss of generality, themicrophones 124 may be spaced in a linear array with adjacent microphones spaced about 2 centimeters apart center-to-center. Each microphone may have a resonant frequency of about 16 kilohertz. Such microphone arrays may be used to locate and track one or more sources of sound in conjunction with operation of theapparatus 102 and interaction with a virtual world. The use of such microphone arrays for sound source location and tracking is described, e.g., in U.S. patent application Ser. Nos. 11/381,724, 11/381,725 and 11/381,729 filed May 4, 2006, the entire disclosures of all of which are incorporated herein by reference. - In certain embodiments of the invention it is desirable for the
microphones 124 to move with the image capture device 122. For example, the microphones 124 may be mounted to a frame 126 that keeps the microphones in a fixed positional relationship with respect to the image capture device, e.g., with respect to a lens 128. Although the microphones are depicted as being arrayed in a horizontal linear arrangement, they may alternatively be oriented vertically or diagonally or arrayed in a two-dimensional arrangement.
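- One conventional way such a microphone array can locate a sound source is from the difference in arrival time between neighboring microphones. The sketch below is illustrative only and is not the algorithm of the referenced applications; the sample delay, the 2 cm spacing and the function name are assumptions.

    import math

    SPEED_OF_SOUND = 343.0  # meters per second in air

    def direction_from_delay(delay_seconds, mic_spacing=0.02):
        # For two microphones 'mic_spacing' meters apart, an arrival-time
        # difference corresponds to an angle off the broadside of the array.
        ratio = max(-1.0, min(1.0, delay_seconds * SPEED_OF_SOUND / mic_spacing))
        return math.degrees(math.asin(ratio))

    # Example: a 29 microsecond lag between microphones spaced 2 cm apart.
    print(round(direction_from_delay(29e-6), 1))  # roughly 29.8 degrees off broadside

- In some embodiments, the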
device 120 may include avisible LED 130 and aninfrared LED 132. These may be used to illuminate objects in a field of view of theimage capture device 122. To facilitate capture of infrared images, thelens 128 may include a so-called “day-night” coating that transmits visible light and selected frequencies of the infrared (e.g., frequencies at around 940 nm). - By way of example, elements of the
system 20 and apparatus 102 may be set up so that a user may direct his or her avatar to pick up a virtual cell phone, dial a number and make a real call to a real or virtual phone. If the intended recipient of the call is another user of the virtual world, the system 20 and apparatus 102 may be suitably programmed to connect to that user's virtual phone, e.g., via VoIP, if that user happens to be online interacting with the virtual world at the time of the call. Elements of the system 20 and apparatus 102 may be configured to route the call by default to the intended recipient's virtual phone (if any). If the intended recipient is not online, the call may be re-routed to the recipient's real communication device. Examples of real communication devices may include, but are not limited to, phones (e.g., land line, cellular phone, or VoIP phone) or voice mail (which may be associated with a real or virtual phone) or any network device with VoIP capability, including portable game devices and the like. Alternatively, the call may be routed by default to the user's real communication device. - In such embodiments, elements of the
system 20 andapparatus 102 may be used to enable intelligent two-way routing between the virtual world and real communication devices. - By way of example and without loss of generality, communication between real and virtual devices may be understood with respect to
FIG. 2E and FIG. 2F. As shown in FIG. 2E, two or more users 251, 252 may communicate with one another over the network 30 via the system 20, described above with respect to FIG. 1E. Each user may interface with the system 20 over the network 30 via client devices 253, 254, e.g., of the type described above with respect to FIGS. 1F and 2A. Each client device 253, 254 may simulate one or more virtual communication devices 255, 256 for its user 251, 252. In addition, each user 251, 252 may have one or more real communication devices, such as land line telephones 257, 258 and cell phones 259, 260. - Each
client device 253, 254 may include a configurable router 261, 262. Although the routers 261, 262 are shown in FIG. 2E as being incorporated into the client devices 253, 254, the routers may alternatively be implemented at the simulation servers 22, view servers 24 or other devices connected to the network 30. The routers 261, 262 determine where a communication involving a given user is delivered, e.g., to a virtual communication device or to a real communication device. Each router may be configured according to the preferences of its user, and the routers 261, 262 may be implemented in hardware, software, firmware or some combination of these. - By way of example, suppose a
first user 251 wishes to communicate with a second user 252, e.g., using virtual communication device 255. In this case, the first user 251 is the source of the call and the second user 252 is the target of the call. The first user's router 261 may be configured to preferentially attempt to contact the second user 252 at virtual communication device 256. If the second user 252 is not online and using the virtual world, the first user's router 261 may attempt to contact the second user at land line 258 and, failing that, the router 261 may attempt to contact the second user 252 at his or her cell phone 260. As an alternative example, it is noted that the second user's router 262 may implement its own routing preference for reception of communications from the first user 251. For example, the second user's router 262 may preferentially route calls from the first user 251 to the second user's cell phone 260, then to the second user's land line 258 and then to the second user's virtual device 256. - In a preferred embodiment each
user 251, 252 may configure his or her respective router 261, 262 according to his or her own preferences, and the routers 261, 262 may then route communications automatically according to those preferences. - For example, in one mode the first user's
router 261 may receive a call from a source who is calling the first user's number. In one mode, the first user 251 may provide the router 261 with information indicating that the first user 251 is online. Such information may be programmed into the multimedia processing apparatus 102, e.g., using the controller 114. Alternatively, the router 261 may check to see if the first user 251 is online. If so, the router 261 may route the “call” to the first user's virtual communication device 255, which may be configured to “ring” even if the first user 251 is online via the second user's client device 254. - In another mode, the
router 261 may be provided with information or may check to determine that thefirst user 251 is online and the target (e.g., the second user 252) is offline. In such a case, the first user'srouter 261 may route the “call” to the second user's real communication device, e.g.,land line 258 orcell phone 260. - In another mode, the first user's
router 261 may be provided information or determine that thefirst user 251 is online and thesecond user 252 is online. In such a case, a “text message” may be routed within the virtual world, e.g., to the second user's avatar OR the target second user'svirtual device 256. - In another mode, the
router 261 may be provided information or may check to determine if thesecond user 252 is online. If thesecond user 252 is offline the “text message” may be routed to a real world device associated with the second user, e.g.,land line 258 orcell phone 260. - Many other permutations on the above examples are also possible. For example, in the above examples the source may place the call from within the virtual world OR within the virtual world through an avatar virtual device OR through any VOIP device or service or through any real telephone line and source.
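- The routing modes described above amount to choosing a destination from an ordered preference list based on who is online. The following is a minimal sketch, illustrative only; the preference order, device labels and function name are assumptions rather than a definitive implementation.

    def route_call(target_online, preferences):
        # 'preferences' is an ordered list of (destination, requires_online) pairs;
        # the first destination whose requirement is satisfied receives the call.
        for destination, requires_online in preferences:
            if not requires_online or target_online:
                return destination
        return "voicemail"

    second_user_preferences = [
        ("virtual phone 256", True),   # ring the avatar's phone only when online
        ("land line 258", False),
        ("cell phone 260", False),
    ]
    print(route_call(target_online=False, preferences=second_user_preferences))
    # -> "land line 258" when the second user is not online in the virtual world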
- In other configurations the above intelligent routing may take action based on user preferences so a user may want his real cell phone to ring when online not his avatar phone. In other configurations the above intelligent routing may take action based on STATE CONTROLS so only in certain circumstances does the call route to the avatar or the real phone depending on the application configuration. For example, if a target is involved in an online game and does not wish to be interrupted the call may be routed to the target's real or virtual voicemail. In yet another configuration, a call may be routed to virtual device but if the device does not ring instead of going to virtual voicemail call may be re-routed to a real device, such as a real phone.
- As shown in
FIG. 2F , embodiments of the present invention allow for a situation where thefirst user 251 calls thesecond user 252 using a real phone, e.g.,land line 257 speaks into the phone and the first user's avatar 263 appears on the second user'svirtual communication device 256 which is shown on thedisplay 104 connected to themultimedia processing apparatus 102 belong to thesecond user 252. The first user'sname 266 may also be shown on thedisplay 104 proximate the first user's avatar 263. The first user's spokenspeech 265 may be translated to text through use of speech recognition software and/or hardware, which may be implemented on theapparatus 102, thesimulation servers 22,view server 24 or other device. The resulting text may appear on thedisplay 104 as text bubbles 264 proximate the first user's avatar 263. Anaudio speaker 267 may playaudible sounds 268 of the first user'sspeech 265 during communication between the first and second users. - It is noted that the same routing procedure may be used for other types of messaging, e.g. text messaging or email. An advantage of this system is that real calls and/or text messages may be routed from one user to another in a way that can avoid long distance or other phone charges associated with real communication devices. The recipient's (or user's) real telephone or text message device may be equipped with middleware to facilitate interaction with the virtual world supported by the
system 20. - In some embodiments, a user may be able to use a real communication device to access virtual world content. For example, a cellular phone, portable internet device, etc. may be used to make changes to the user's avatar, public space or private space. Alternatively, the real communication device may be used to remotely access virtual communication device content. In particular, the real communication device may be used as an interface between the simulated communication device and a user. For example, suppose the virtual communication device is a virtual digital video recorder (DVR) located within the user's private space. A user may access the virtual DVR to record a real or virtual event by way of a real cellular phone and electronic programming guide.
- As mentioned above, communicating between the real and virtual communication devices may involve video communication. According to a particular embodiment, an image of the avatar may be displayed with the real communication device during the video communication. The system that generates the virtual world may facilitate lip-synching of the avatar image to real or synthesized speech generated by the user associated with the avatar. For example, the user may record a voice message to be sent to the real device as part of a video message. The system may generate a video message of the avatar speaking the voice message in which the avatar's lip movements are synchronized to the user's speech within the message. Alternatively, the user may enter text of the message into a virtual device. The system may then synthesize speech for the avatar from the text and then generate a video image of the avatar in which the avatar's lip movements are synchronized to the synthesized speech. In other embodiments, the user may record a sound and video message, e.g., using the video
image capture device 116 and audiosignal capture device 118. - In some embodiments, the
avatars 14 may express emotion through animation, facial change, sound, particle or chat bubble change to communicate a specific emotion. Such expressions of emotion by the avatar (sometimes called “emotes”) may be pre-programmed and may be triggered by user commands. In particular embodiments of the invention, emotions expressed by the user during interaction with the virtual world may be mapped to emotion exhibited by the user's avatar. In certain embodiments, the user may select an emotional state that can be projected by the avatar. By way of example avatar emotes may be selected from a menu presented to the user by theapparatus 102. If, for example, the user selects “happy”, the user's avatar may be shown with a smile on its face. If the user selects “sad”, the avatar may be shown with a frown. Such menu-drive emotions may be somewhat awkward for a user to implement quickly. Therefore, in certain embodiments of theapparatus 102 may be configured to detect an emotional state of the user in real time and then appropriately change the features of the user's avatar to reflect that state. Such real time tracking of user emotional state can be particularly useful, e.g., for mapping user emotional state onto an avatar during video communication in which an image of the user's avatar is presented to a real device. - By way of non-limiting example, the
apparatus 102 may track user emotional state in real time by capturing one or more visual images of the user U and analyzing one or more facial features of the user using the image capture device 116. The game/virtual world processor 206 may be programmed to analyze these images, e.g., using facial features such as the user's lips, eyes, eyelids and eyebrows, cheeks, teeth or nostrils, or body language features, e.g., stance, placement of arms or hands, to determine the user's emotional state. Such facial and/or body language analysis may be enhanced through the use of a 3D camera to generate the images. - Alternatively, user emotional state may be tracked in real time through analysis of the user's voice stress as exhibited in user speech or other vocalizations detected by the audio
signal capture device 118. Where the user communicates via text, emotional state may be tracked by analysis of the text for certain words, phrases or language patterns that are indicative of emotional state. In addition, the user's emotional state may be tracked using other biometrics, such as electrocardiographic (EKG), electroencephalographic (EEG), galvanic skin response, or thermal imaging data. Such data may be obtained through appropriate sensors incorporated into the controller 114 and analyzed by appropriately configured software, hardware, or firmware incorporated into the processor 206. Thermal imaging data may also be obtained if the image capture device 116 includes an infrared imaging capability.
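- Once an emotional state label has been obtained from any of these signals, it can be mapped onto avatar features, as discussed below. The sketch that follows is illustrative only; the keyword lists, state names and emote parameters are assumptions rather than part of the described system.

    EMOTION_KEYWORDS = {
        "happy": ["great", "awesome", ":)", "lol"],
        "angry": ["furious", "hate", "!!"],
        "sad":   ["unhappy", "miss you", ":("],
    }

    AVATAR_EMOTES = {
        "happy":   {"mouth": "smile", "eyebrows": "raised", "gesture": "wave"},
        "angry":   {"mouth": "bared teeth", "eyebrows": "furrowed", "gesture": "raised fist"},
        "sad":     {"mouth": "frown", "eyebrows": "drooping", "gesture": "slumped shoulders"},
        "neutral": {"mouth": "relaxed", "eyebrows": "neutral", "gesture": "idle"},
    }

    def emotional_state_from_text(text):
        # Very rough text analysis: pick the state whose keywords appear most often.
        text = text.lower()
        scores = {state: sum(text.count(k) for k in words)
                  for state, words in EMOTION_KEYWORDS.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else "neutral"

    def emote_for(state):
        return AVATAR_EMOTES.get(state, AVATAR_EMOTES["neutral"])

    state = emotional_state_from_text("That race was awesome, lol")
    print(state, emote_for(state))  # happy, with smiling-avatar parameters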
- In some embodiments, users may wish to use customized gestures or “emotes” for their avatars. To facilitate this one or more custom gestures may be generated for the avatar. These custom gestures may then be associated with one or more user interface signals so that the user's avatar can perform the gesture on command. By way of example, the custom gesture may be generated through use of motion capture or performance capture techniques to record and digitize the user's bodily movements or mapping of the user's facial expression as the user performs the gesture. In some embodiments, the
image capture device 116 may be used for this purpose. Alternatively, a commercial motion capture studio or performance capture studio may be used for this purpose. - In motion capture, the user or some other performer may wear markers near each joint to identify the motion by the positions or angles between the markers. Acoustic, inertial, LED, magnetic or reflective markers, or combinations of any of these, are tracked, optimally at least two times the rate of the desired motion, to submillimeter positions. The motion capture computer software records the positions, angles, velocities, accelerations and impulses, providing an accurate digital representation of the motion. By way of example, an optical motion capture system may triangulate the 3D position of a marker between one or more cameras calibrated to provide overlapping projections. A passive optical system may use markers coated with a retroreflective material to reflect light back that is generated near the cameras lens. The cameras sensitivity can be adjusted taking advantage of most cameras narrow range of sensitivity to light so only the bright markers will be sampled ignoring skin and fabric. Alternatively, an active optical system may be used in which the markers themselves are powered to emit their own light. Power may be sequentially provided to each marker may in phase with the capture system providing a unique identification of each marker for a given capture frame at a cost to the resultant frame rate.
- Performance capture differs from standard motion capture due to the interactive nature of the performance, capturing the body, the hands and facial expression all at the same time, as opposed to capturing data for reference motion and editing the motions together later.
- Once the user's body movements and/or facial expression for the gesture have been digitized, the digitized gesture may be used to generate coded instructions or other user interface signals for animation of the avatar so that it performs the gesture. The code or other user interface signals may be distributed to one or more other users, so that they can customize their avatars to perform the custom gesture. Customized avatar gestures may be combined with customized avatar clothing, footwear, hairstyles, ethnic characteristics and other custom avatar features as a means of social identification with a particular group. In some embodiments it may be desirable to moderate the use of custom gestures, e.g., to avoid unnecessarily offending other users or breaking the law. As used herein moderating or moderation refers to enforcement of some degree of rules for acceptable behavior in the virtual world. Such moderation may be implemented by the
view servers 24, which may analyze the custom gestures for rudeness or other indications of inappropriateness. Moderating the display of the custom gesture may include restricting an ability of a particular user to make an avatar perform the custom gesture or an ability of the particular user to perceive the avatar performing the custom gesture based on predetermined criteria. Such predetermined criteria may include the age of the user or viewer of the gesture or a sensitivity of the viewer to offense based on religious, ethnic or other affiliation of the viewer. - The systems and methods described above may be modified to implement communication using a virtual world according to an alternative embodiment of the invention. Specifically, an avatar may be associated with a source of an email. A user may generate an email within the virtual world and associate one or more images of his or her avatar with the email. The email may be sent from the virtual world to a real a real device. The avatar images may be then be presented at email's destination, e.g., by self-extracting email attachment. The email may be generated, e.g., using a virtual communication device within the virtual world. By way of example, and without limitation, the destination of the email may be a real communication device, e.g., any real device configured to receive email messages. The real communication device may be configured to communicate with other real communication devices via one or more communication channels that are independent of the virtual world.
- By way of example, and without loss of generality, the virtual world may optionally comprise a simulated public space configured to facilitate interaction among a plurality of users and one or more private spaces. Each private space is associated with a particular user of the plurality of users, e.g., as described above.
- Recorded or synthesized speech may be associated with the email and presented with the one or more images at the destination. The avatar images may comprise an animation of the avatar generated specifically for the email. The animation may be presented at the destination, e.g., by self-extracting email attachment. In addition, one or more gestures may be mapped to the animation of the avatar, e.g., as described above. The gestures may be mapped by recording audio and/or video of a source of the email message and mapping one or more features of the audio and/or video to one or more features of the avatar in the animation.
- In some embodiments a theme may be associated with virtual camera movements in the animation. By way of example, and without limitation, the theme may involve choice of virtual camera angle, tracking, panning, tilting, zoom, close-up, simulated lighting, and the like. The virtual camera position may be fixed or moving. In addition, the theme may involve a choice of background scenery for the avatar.
- In some embodiments, generating the email may involve tracking an emotional state of the source, e.g., as described above, and mapping the emotional state to the theme. For example, a serene or calm emotional state may be mapped to a theme characterized by fixed camera position or relatively slow virtual camera movement. An agitated or excited emotional state may be mapped to a theme characterized by jarring camera movement, extreme close-ups, harsh camera angles, and the like.
- Avatar email communications of the type described above may be implemented, e.g., by appropriate configuration of the
system 20 ofFIG. 1E and/or themultimedia apparatus 102 ofFIG. 1F andFIG. 2A . - According to embodiments of the present invention, virtual world systems and methods of the type described above may be implemented using a console video game apparatus as a
client device 28 and a user interface for interacting with the virtual world, e.g., as generated by elements of the system 20. As depicted in FIG. 3, a console video game apparatus 300 may include a processor 301 and a memory 302 (e.g., RAM, DRAM, ROM, and the like). In addition, the video game apparatus 300 may have multiple processors 301 if parallel processing is to be implemented. The memory 302 includes data and game program code 304, which may include portions that facilitate user interaction with a virtual world as described above. Specifically, the memory 302 may include inertial signal data 306 which may include stored controller path information as described above. The memory 302 may also contain stored gesture data 308, e.g., data representing one or more gestures relevant to the game program 304. Coded instructions executed on the processor 301 may implement a multi-input mixer 305, which may be configured and function as described above. - The
apparatus 300 may also include well-known support functions 310, such as input/output (I/O)elements 311, power supplies (P/S) 312, a clock (CLK) 313 andcache 314. Theapparatus 300 may optionally include amass storage device 315 such as a disk drive, CD-ROM drive, tape drive, or the like to store programs and/or data. The controller may also optionally include adisplay unit 316 and input unit 318 to facilitate interaction between theapparatus 300 and a user. Thedisplay unit 316 may be in the form of a cathode ray tube (CRT) or flat panel screen that displays text, numerals, graphical symbols or images. The user interface 318 may include a keyboard, mouse, joystick, light pen or other device. In addition, the user input 318 may include a microphone, video camera or other signal transducing device to provide for direct capture of a signal to be analyzed. Theapparatus 300 may also include a network interface 319 to enable the device to communicate with virtual world servers and other similarly configured devices over a network, such as the internet. Theprocessor 301,memory 302, user input 318, network interface 319 and other components of theapparatus 300 may exchange signals (e.g., code instructions and data) with each other via asystem bus 320 as shown inFIG. 3 . Amicrophone array 322 may be coupled to thesystem 300 through the I/O functions 311. - The microphone array may include between about 2 and about 8 microphones, preferably about 4 microphones with neighboring microphones separated by a distance of less than about 4 centimeters, preferably between about 1 centimeter and about 2 centimeters. Preferably, the microphones in the
array 322 are omni-directional microphones. An optional image capture unit 323 (e.g., a digital camera) may be coupled to theapparatus 300 through the I/O functions 311. One or more pointing actuators 325 may be mechanically coupled to the camera to control pointing of the image capture unit. These actuators 325 may exchange signals with theprocessor 301 via the I/O functions 311. - As used herein, the term I/O generally refers to any program, operation or device that transfers data to or from the
apparatus 300 and to or from a peripheral device. Every data transfer may be regarded as an output from one device and an input into another. Peripheral devices include input-only devices, such as keyboards and mice; output-only devices, such as printers; and devices, such as a writable CD-ROM, that can act as both an input and an output device. The term “peripheral device” includes external devices, such as a mouse, keyboard, printer, monitor, microphone, game controller, camera, external Zip drive or scanner, as well as internal devices, such as a CD-ROM drive, CD-R drive or internal modem, or other peripherals such as a flash memory reader/writer or hard drive. - In certain embodiments of the invention, the
apparatus 300 may include acontroller 330 coupled to the processor via the I/O functions 311 either through wires (e.g., a USB cable) or wirelessly, e.g., using infrared or radiofrequency (such as Bluetooth) connections. Thecontroller 330 may have analog joystick controls 331 andconventional buttons 333 that provide control signals commonly used during playing of video games. Such video games may be implemented as processor readable data and/or instructions from theprogram 304 which may be stored in thememory 302 or other processor readable medium such as one associated with themass storage device 315. - The joystick controls 331 may generally be configured so that moving a control stick left or right signals movement along the X axis, and moving it forward (up) or back (down) signals movement along the Y axis. In joysticks that are configured for three-dimensional movement, twisting the stick left (counter-clockwise) or right (clockwise) may signal movement along the Z axis. These three axis—X Y and Z—are often referred to as roll, pitch, and yaw, respectively, particularly in relation to an aircraft.
- In addition to conventional features, the
controller 330 may include one or moreinertial sensors 332, which may provide position and/or orientation information to theprocessor 301 via an inertial signal. Orientation information may include angular information such as a tilt, roll or yaw of thecontroller 330. By way of example, theinertial sensors 332 may include any number and/or combination of accelerometers, gyroscopes or tilt sensors. In a preferred embodiment, theinertial sensors 332 include tilt sensors adapted to sense orientation of the joystick controller with respect to tilt and roll axes, a first accelerometer adapted to sense acceleration along a yaw axis and a second accelerometer adapted to sense angular acceleration with respect to the yaw axis. An accelerometer may be implemented, e.g., as a MEMS device including a mass mounted by one or more springs with sensors for sensing displacement of the mass relative to one or more directions. Signals from the sensors that are dependent on the displacement of the mass may be used to determine an acceleration of thejoystick controller 330. Such techniques may be implemented by instructions from thegame program 304 which may be stored in thememory 302 and executed by theprocessor 301. - By way of example an accelerometer suitable as the
inertial sensor 332 may be a simple mass elastically coupled at three or four points to a frame, e.g., by springs. Pitch and roll axes lie in a plane that intersects the frame, which is mounted to the joystick controller 330. As the frame (and the joystick controller 330) rotates about pitch and roll axes the mass will displace under the influence of gravity and the springs will elongate or compress in a way that depends on the angle of pitch and/or roll. The displacement of the mass can be sensed and converted to a signal that is dependent on the amount of pitch and/or roll. Angular acceleration about the yaw axis or linear acceleration along the yaw axis may also produce characteristic patterns of compression and/or elongation of the springs or motion of the mass that can be sensed and converted to signals that are dependent on the amount of angular or linear acceleration. Such an accelerometer device can measure tilt, roll, angular acceleration about the yaw axis and linear acceleration along the yaw axis by tracking movement of the mass or the compression and expansion forces of the springs. There are a number of different ways to track the position of the mass and/or the forces exerted on it, including resistive strain gauge material, photonic sensors, magnetic sensors, hall-effect devices, piezoelectric devices, capacitive sensors, and the like.
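- The relationship between the sensed displacement and tilt can be sketched as follows, assuming a conventional three-axis accelerometer reading rather than the spring-mass details above; the function and axis names are assumptions and the fragment is illustrative only.

    import math

    def pitch_and_roll(ax, ay, az):
        # With the controller at rest, gravity dominates the accelerometer reading,
        # so pitch and roll can be recovered from its components (expressed in g).
        pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))
        roll = math.degrees(math.atan2(ay, math.hypot(ax, az)))
        return pitch, roll

    # Example: controller tilted forward so part of gravity appears on the x axis.
    print([round(a, 1) for a in pitch_and_roll(0.26, 0.0, 0.97)])  # about [15.0, 0.0]

- In addition, the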
joystick controller 330 may include one or morelight sources 334, such as light emitting diodes (LEDs). Thelight sources 334 may be used to distinguish one controller from the other. For example one or more LEDs can accomplish this by flashing or holding an LED pattern code. By way of example, 5 LEDs can be provided on thejoystick controller 330 in a linear or two-dimensional pattern. Although a linear array of LEDs is preferred, the LEDs may alternatively, be arranged in a rectangular pattern or an arcuate pattern to facilitate determination of an image plane of the LED array when analyzing an image of the LED pattern obtained by theimage capture unit 323. Furthermore, the LED pattern codes may also be used to determine the positioning of thejoystick controller 330 during game play. For instance, the LEDs can assist in identifying tilt, yaw and roll of the controllers. This detection pattern can assist in providing a better user/feel in games, such as aircraft flying games, etc. Theimage capture unit 323 may capture images containing thejoystick controller 330 andlight sources 334. Analysis of such images can determine the location and/or orientation of the joystick controller. Such analysis may be implemented byprogram code instructions 304 stored in thememory 302 and executed by theprocessor 301. To facilitate capture of images of thelight sources 334 by theimage capture unit 323, thelight sources 334 may be placed on two or more different sides of thejoystick controller 330, e.g., on the front and on the back (as shown in phantom). Such placement allows theimage capture unit 323 to obtain images of thelight sources 334 for different orientations of thejoystick controller 330 depending on how thejoystick controller 330 is held by a user. - In addition the
light sources 334 may provide telemetry signals to theprocessor 301, e.g., in pulse code, amplitude modulation or frequency modulation format. Such telemetry signals may indicate which joystick buttons are being pressed and/or how hard such buttons are being pressed. Telemetry signals may be encoded into the optical signal, e.g., by pulse coding, pulse width modulation, frequency modulation or light intensity (amplitude) modulation. Theprocessor 301 may decode the telemetry signal from the optical signal and execute a game command in response to the decoded telemetry signal. Telemetry signals may be decoded from analysis of images of thejoystick controller 330 obtained by theimage capture unit 323. Alternatively, theapparatus 301 may include a separate optical sensor dedicated to receiving telemetry signals from the lights sources 334. The use of LEDs in conjunction with determining an intensity amount in interfacing with a computer program is described, e.g., in U.S. patent application Ser. No. 11/429,414, to Richard L. Marks et al., entitled “USE OF COMPUTER IMAGE AND AUDIO PROCESSING IN DETERMINING AN INTENSITY AMOUNT WHEN INTERFACING WITH A COMPUTER PROGRAM” (Attorney Docket No. SONYP052), filed May 4, 2006, which is incorporated herein by reference in its entirety. In addition, analysis of images containing thelight sources 334 may be used for both telemetry and determining the position and/or orientation of thejoystick controller 330. Such techniques may be implemented by instructions of theprogram 304 which may be stored in thememory 302 and executed by theprocessor 301. - The
processor 301 may use the inertial signals from theinertial sensor 332 in conjunction with optical signals fromlight sources 334 detected by theimage capture unit 323 and/or sound source location and characterization information from acoustic signals detected by themicrophone array 322 to deduce information on the location and/or orientation of thecontroller 330 and/or its user. For example, “acoustic radar” sound source location and characterization may be used in conjunction with themicrophone array 322 to track a moving voice while motion of the joystick controller is independently tracked (through theinertial sensor 332 and or light sources 334). In acoustic radar a pre-calibrated listening zone is selected at runtime and sounds originating from sources outside the pre-calibrated listening zone are filtered out. The pre-calibrated listening zones may include a listening zone that corresponds to a volume of focus or field of view of theimage capture unit 323. Examples of acoustic radar are described in detail in U.S. patent application Ser. No. 11/381,724, to Xiadong Mao entitled “METHODS AND APPARATUS FOR TARGETED SOUND DETECTION AND CHARACTERIZATION”, filed May 4, 2006, which is incorporated herein by reference. - Any number of different combinations of different modes of providing control signals to the
processor 301 may be used in conjunction with embodiments of the present invention. Such techniques may be implemented byprogram code instructions 304 which may be stored in thememory 302 and executed by theprocessor 301 and may optionally include one or more instructions that direct the one or more processors to select a pre-calibrated listening zone at runtime and filter out sounds originating from sources outside the pre-calibrated listening zone. The pre-calibrated listening zones may include a listening zone that corresponds to a volume of focus or field of view of theimage capture unit 323. - The
program 304 may optionally include one or more instructions that direct the one or more processors to produce a discrete time domain input signal xm(t) from microphones M0 . . . MM of the microphone array 322, determine a listening sector, and use the listening sector in a semi-blind source separation to select the finite impulse response filter coefficients to separate out different sound sources from input signal xm(t). The program 304 may also include instructions to apply one or more fractional delays to selected input signals xm(t) other than an input signal x0(t) from a reference microphone M0. Each fractional delay may be selected to optimize a signal to noise ratio of a discrete time domain output signal y(t) from the microphone array. The fractional delays may be selected such that a signal from the reference microphone M0 is first in time relative to signals from the other microphone(s) of the array. The program 304 may also include instructions to introduce a fractional time delay Δ into an output signal y(t) of the microphone array so that: y(t+Δ)=x(t+Δ)*b0+x(t−1+Δ)*b1+x(t−2+Δ)*b2+ . . . +x(t−N+Δ)*bN, where Δ is between zero and ±1. Examples of such techniques are described in detail in U.S. patent application Ser. No. 11/381,729, to Xiadong Mao, entitled “ULTRA SMALL MICROPHONE ARRAY” filed May 4, 2006, the entire disclosures of which are incorporated by reference.
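- The fractional delay in the expression above can be approximated by interpolating between neighboring samples before applying the filter coefficients. The sketch below is illustrative only and is not the referenced implementation; the linear interpolation and the names are assumptions.

    def sample_at(x, t, delta):
        # Linearly interpolate the sequence x at the fractional index t + delta.
        i = int(t)
        frac = (t - i) + delta
        base, frac = i + int(frac), frac % 1.0
        nxt = min(base + 1, len(x) - 1)
        return x[base] * (1.0 - frac) + x[nxt] * frac

    def filtered_output(x, b, t, delta):
        # y(t + delta) = sum over k of x(t - k + delta) * b_k, as in the expression above.
        return sum(sample_at(x, t - k, delta) * bk for k, bk in enumerate(b))

    x = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0]
    print(round(filtered_output(x, b=[0.5, 0.3, 0.2], t=3, delta=0.5), 3))

- The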
program 304 may include one or more instructions which, when executed, cause the system 300 to select a pre-calibrated listening sector that contains a source of sound. Such instructions may cause the apparatus to determine whether a source of sound lies within an initial sector or on a particular side of the initial sector. If the source of sound does not lie within the default sector, the instructions may, when executed, select a different sector on the particular side of the default sector. The different sector may be characterized by an attenuation of the input signals that is closest to an optimum value. These instructions may, when executed, calculate an attenuation of input signals from the microphone array 322 and compare the attenuation to an optimum value. The instructions may, when executed, cause the apparatus 300 to determine a value of an attenuation of the input signals for one or more sectors and select a sector for which the attenuation is closest to an optimum value. Examples of such a technique are described, e.g., in U.S. patent application Ser. No. 11/381,725, to Xiadong Mao, entitled “METHODS AND APPARATUS FOR TARGETED SOUND DETECTION” filed May 4, 2006, the disclosures of which are incorporated herein by reference. - Signals from the
- Signals from the inertial sensor 332 may provide part of a tracking information input, and signals generated from the image capture unit 323 from tracking the one or more light sources 334 may provide another part of the tracking information input. By way of example, and without limitation, such “mixed mode” signals may be used in a football-type video game in which a quarterback pitches the ball to the right after a head fake to the left. Specifically, a game player holding the controller 330 may turn his head to the left and make a sound while making a pitch movement, swinging the controller out to the right as if it were the football. The microphone array 320 in conjunction with “acoustic radar” program code can track the user's voice. The image capture unit 323 can track the motion of the user's head or track other commands that do not require sound or use of the controller. The inertial sensor 332 may track the motion of the joystick controller (representing the football). The image capture unit 323 may also track the light sources 334 on the controller 330. The user may release the “ball” upon reaching a certain amount and/or direction of acceleration of the joystick controller 330 or upon a key command triggered by pressing a button on the controller 330.
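The following sketch illustrates, under simplifying assumptions, how such mixed-mode inputs might be combined into a single game decision such as releasing the “ball”: the MixedModeFrame structure, its field names and the thresholds are hypothetical, and a real mixer would weight the inertial, image and acoustic channels far more carefully.

```cpp
#include <cmath>

// One frame of "mixed mode" tracking data.  All field names are
// illustrative; the application does not define this structure.
struct MixedModeFrame {
    float accelX, accelY, accelZ;   // inertial sensor 332 (controller motion)
    float headYawDeg;               // head direction from the image capture unit 323
    float voiceAzimuthDeg;          // voice direction from acoustic-radar tracking
    bool  releaseButton;            // key command from a controller button
};

// Decide whether the "ball" should be released this frame: either the
// controller's acceleration exceeds a threshold during the pitch motion or
// the user pressed the release button.  The threshold is an assumption.
bool shouldReleaseBall(const MixedModeFrame& f, float accelThreshold) {
    float accelMag = std::sqrt(f.accelX * f.accelX +
                               f.accelY * f.accelY +
                               f.accelZ * f.accelZ);
    return f.releaseButton || accelMag > accelThreshold;
}

int main() {
    MixedModeFrame frame{2.0f, 9.5f, 0.5f, -20.0f, -18.0f, false};
    return shouldReleaseBall(frame, 8.0f) ? 0 : 1;   // acceleration exceeds threshold
}
```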
- In certain embodiments of the present invention, an inertial signal, e.g., from an accelerometer or gyroscope, may be used to determine a location of the controller 330.
- Specifically, an acceleration signal from an accelerometer may be integrated once with respect to time to determine a change in velocity, and the velocity may be integrated with respect to time to determine a change in position. If values of the initial position and velocity at some time are known, then the absolute position may be determined using these values and the changes in velocity and position. Although position determination using an inertial sensor may be made more quickly than using the
image capture unit 323 and the light sources 334, the inertial sensor 332 may be subject to a type of error known as “drift”, in which errors that accumulate over time can lead to a discrepancy D between the position of the joystick controller 330 calculated from the inertial signal (shown in phantom) and the actual position of the joystick controller 330. Embodiments of the present invention allow a number of ways to deal with such errors.
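The double integration described above can be sketched with a simple Euler update, assuming a fixed sample period; the 100 Hz rate and constant acceleration are assumptions for illustration, and the accumulating error of such a scheme is closely related to the drift just discussed.

```cpp
// Minimal sketch of dead reckoning from an accelerometer, assuming a fixed
// sample period dt.  Integrating acceleration once gives the change in
// velocity; integrating velocity gives the change in position.
struct State {
    float position;
    float velocity;
};

State integrateStep(State s, float acceleration, float dt) {
    s.velocity += acceleration * dt;       // Δv = a·dt
    s.position += s.velocity * dt;         // Δx = v·dt
    return s;
}

int main() {
    State s{0.0f, 0.0f};                   // known initial position and velocity
    const float dt = 0.01f;                // assumed 100 Hz inertial samples
    for (int i = 0; i < 100; ++i)
        s = integrateStep(s, 1.0f, dt);    // constant 1 m/s² for 1 second
    // The ideal answer is 0.5 m; the simple Euler scheme overshoots slightly,
    // and real sensor noise and bias add further error, producing the
    // discrepancy D between computed and actual position described above.
    return s.position > 0.0f ? 0 : 1;
}
```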
- For example, the drift may be cancelled out manually by re-setting the initial position of the controller 330 to be equal to the current calculated position. A user may use one or more of the buttons on the controller 330 to trigger a command to re-set the initial position. Alternatively, image-based drift compensation may be implemented by re-setting the current position to a position determined from an image obtained from the image capture unit 323 as a reference. Such image-based drift compensation may be implemented manually, e.g., when the user triggers one or more of the buttons on the joystick controller 330. Alternatively, image-based drift compensation may be implemented automatically, e.g., at regular intervals of time or in response to game play. Such techniques may be implemented by program code instructions 304 which may be stored in the memory 302 and executed by the processor 301.
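A minimal sketch of this re-setting scheme, assuming the camera-derived reference position is already available each frame; the Tracker structure, the per-frame drift value and the 60-frame reset interval are illustrative assumptions rather than values taken from this application.

```cpp
// Sketch of image-based drift compensation: the inertially computed position
// is periodically re-set to a reference position taken from the image capture
// unit 323.  The camera-derived position here is a stand-in value; a real
// system would obtain it from image analysis.
struct Tracker {
    float inertialPosition = 0.0f;   // position integrated from the inertial sensor
    int   framesSinceReset = 0;

    // Called every frame; resetInterval frames between automatic corrections.
    void update(float inertialDelta, float cameraPosition,
                bool userRequestedReset, int resetInterval) {
        inertialPosition += inertialDelta;
        ++framesSinceReset;
        if (userRequestedReset || framesSinceReset >= resetInterval) {
            inertialPosition = cameraPosition;   // cancel the accumulated drift
            framesSinceReset = 0;
        }
    }
};

int main() {
    Tracker t;
    for (int frame = 0; frame < 300; ++frame) {
        float drift = 0.001f;                        // small per-frame error
        float cameraPosition = 0.0f;                 // reference position from the camera
        t.update(drift, cameraPosition, false, 60);  // automatic reset every 60 frames
    }
    return 0;
}
```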
- In certain embodiments it may be desirable to compensate for spurious data in the inertial sensor signal. For example, the signal from the inertial sensor 332 may be oversampled, and a sliding average may be computed from the oversampled signal to remove spurious data from the inertial sensor signal. In some situations it may be desirable to oversample the signal, reject a high and/or low value from some subset of data points, and compute the sliding average from the remaining data points. Furthermore, other data sampling and manipulation techniques may be used to adjust the signal from the inertial sensor to remove or reduce the significance of spurious data. The choice of technique may depend on the nature of the signal, computations to be performed with the signal, the nature of game play, or some combination of two or more of these. Such techniques may be implemented by instructions of the program 304 which may be stored in the memory 302 and executed by the processor 301.
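A rough sketch of the oversample, reject and average idea, assuming fixed block and window sizes; the four-sample blocks, the four-value sliding window and the injected spikes are illustrative assumptions.

```cpp
#include <algorithm>
#include <cstddef>
#include <deque>
#include <numeric>
#include <vector>

// Within each oversampled block, drop the highest and lowest readings and
// average the rest; then smooth successive block averages with a sliding window.
float trimmedMean(std::vector<float> block) {
    std::sort(block.begin(), block.end());
    if (block.size() > 2)
        block = std::vector<float>(block.begin() + 1, block.end() - 1);
    return std::accumulate(block.begin(), block.end(), 0.0f) /
           static_cast<float>(block.size());
}

class SlidingAverage {
public:
    explicit SlidingAverage(std::size_t window) : window_(window) {}
    float push(float v) {
        values_.push_back(v);
        if (values_.size() > window_) values_.pop_front();
        return std::accumulate(values_.begin(), values_.end(), 0.0f) /
               static_cast<float>(values_.size());
    }
private:
    std::size_t window_;
    std::deque<float> values_;
};

int main() {
    SlidingAverage smoother(4);
    std::vector<std::vector<float>> blocks{
        {1.0f, 1.1f, 9.0f, 0.9f},    // 9.0 is a spurious spike, rejected
        {1.2f, 1.0f, 1.1f, -5.0f}};  // -5.0 is a spurious dip, rejected
    float smoothed = 0.0f;
    for (const auto& b : blocks) smoothed = smoother.push(trimmedMean(b));
    return smoothed > 0.0f ? 0 : 1;
}
```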
- The processor 301 may perform analysis of inertial signal data 306 as described above in response to the data 306 and program code instructions of a program 304 stored and retrieved by the memory 302 and executed by the processor module 301. In addition, the processor may implement certain virtual world simulation functions described above as part of the program 304. Specifically, the program 304 may include all or part of various methods for communicating with a virtual world and/or methods for interaction with a three-dimensional virtual world and/or avatar email communication as described above. Code portions of the program 304 may conform to any one of a number of different programming languages such as Assembly, C++, JAVA or a number of other languages. The processor module 301 forms a general-purpose computer that becomes a specific-purpose computer when executing programs such as the program code 304. Although the program code 304 is described herein as being implemented in software and executed upon a general-purpose computer, those skilled in the art will realize that the method of task management could alternatively be implemented using hardware such as an application specific integrated circuit (ASIC) or other hardware circuitry. As such, it should be understood that embodiments of the invention can be implemented, in whole or in part, in software, hardware or some combination of both.
- In one embodiment, among others, the
program code 304 may include a set of processor readable instructions that direct the one or more processors to analyze signals from the inertial sensor 332 to generate position and/or orientation information and utilize the information during play of a video game, during communication with a virtual world, or during interaction with a three-dimensional virtual world. The program code 304 may optionally include processor executable instructions including one or more instructions which, when executed, cause the image capture unit 323 to monitor a field of view in front of the image capture unit 323, identify one or more of the light sources 334 within the field of view, detect a change in light emitted from the light source(s) 334, and, in response to detecting the change, trigger an input command to the processor 301. The use of LEDs in conjunction with an image capture device to trigger actions in a game controller is described, e.g., in U.S. patent application Ser. No. 10/759,782 to Richard L. Marks, filed Jan. 16, 2004 and entitled METHOD AND APPARATUS FOR LIGHT INPUT DEVICE, which is incorporated herein by reference in its entirety.
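The light-change trigger might be sketched as follows, assuming a grayscale frame and a fixed pixel region where the tracked light source appears; the frame layout, region coordinates and brightness threshold are assumptions of the sketch, not details taken from the referenced application.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Simplified grayscale frame from the image capture unit 323.
struct GrayFrame {
    int width;
    int height;
    std::vector<std::uint8_t> pixels;   // row-major, 8-bit brightness
};

// Average brightness over the pixel region [x0, x1) x [y0, y1).
float regionBrightness(const GrayFrame& f, int x0, int y0, int x1, int y1) {
    long sum = 0;
    int count = 0;
    for (int y = y0; y < y1; ++y)
        for (int x = x0; x < x1; ++x) {
            sum += f.pixels[static_cast<std::size_t>(y) * f.width + x];
            ++count;
        }
    return count ? static_cast<float>(sum) / count : 0.0f;
}

// Returns true when the tracked light source's brightness changed by more
// than the threshold between frames, i.e. when an input command should fire.
bool lightChangeTriggered(const GrayFrame& prev, const GrayFrame& curr,
                          int x0, int y0, int x1, int y1, float threshold) {
    float before = regionBrightness(prev, x0, y0, x1, y1);
    float after  = regionBrightness(curr, x0, y0, x1, y1);
    return (after - before) > threshold || (before - after) > threshold;
}

int main() {
    GrayFrame a{2, 2, {10, 10, 10, 10}};     // LED off in the watched region
    GrayFrame b{2, 2, {200, 200, 10, 10}};   // LED brightens in the watched region
    return lightChangeTriggered(a, b, 0, 0, 2, 1, 50.0f) ? 0 : 1;
}
```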
- The program code 304 may optionally include processor executable instructions including one or more instructions which, when executed, use signals from the inertial sensor and signals generated from the image capture unit from tracking the one or more light sources as inputs to a game system, e.g., as described above. The program code 304 may optionally include processor executable instructions including one or more instructions which, when executed, compensate for drift in the inertial sensor 332.
- Although embodiments of the present invention are described in terms of examples related to a
video game controller 330 for games, embodiments of the invention, including the system 300, may be used on any user-manipulated body, molded object, knob, structure, etc., with inertial sensing capability and inertial sensor signal transmission capability, wireless or otherwise.
- By way of example, embodiments of the present invention may be implemented on parallel processing systems. Such parallel processing systems typically include two or more processor elements that are configured to execute parts of a program in parallel using separate processors. By way of example, and without limitation,
FIG. 4 illustrates a type of cell processor 400 according to an embodiment of the present invention. The cell processor 400 may be used as the processor 301 of FIG. 3 or in the simulation servers 22 or view servers 24 of FIG. 1E. In the example depicted in FIG. 4, the cell processor 400 includes a main memory 402, a power processor element (PPE) 404, and a number of synergistic processor elements (SPEs) 406. In the example depicted in FIG. 4, the cell processor 400 includes a single PPE 404 and eight SPEs 406. In such a configuration, seven of the SPEs 406 may be used for parallel processing and one may be reserved as a back-up in case one of the other seven fails. A cell processor may alternatively include multiple groups of PPEs (PPE groups) and multiple groups of SPEs (SPE groups). In such a case, hardware resources can be shared between units within a group. However, the SPEs and PPEs must appear to software as independent elements. As such, embodiments of the present invention are not limited to use with the configuration shown in FIG. 4.
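Programming the SPEs themselves requires the Cell SDK, which is beyond the scope of this text, but the task-allocation pattern, one managing processor handing independent work items to several processing elements that take tasks as they become available, can be sketched with portable C++ threads as an analogy; the TaskPool class and the seven-worker figure (mirroring the seven usable SPEs) are assumptions of the sketch, not Cell-specific code.

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Portable analogy (standard C++ threads, not Cell SDK code): the "PPE"
// pushes independent simulation tasks into a queue, and worker threads
// standing in for SPEs pull tasks as they become available.
class TaskPool {
public:
    explicit TaskPool(unsigned workers) {
        for (unsigned i = 0; i < workers; ++i)
            threads_.emplace_back([this] { workerLoop(); });
    }
    ~TaskPool() {
        {
            std::lock_guard<std::mutex> lock(m_);
            done_ = true;
        }
        cv_.notify_all();
        for (auto& t : threads_) t.join();
    }
    void submit(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(m_);
            tasks_.push(std::move(task));
        }
        cv_.notify_one();
    }
private:
    void workerLoop() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return done_ || !tasks_.empty(); });
                if (tasks_.empty()) return;   // shutting down and nothing left to do
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task();
        }
    }
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> tasks_;
    std::vector<std::thread> threads_;
    bool done_ = false;
};

int main() {
    TaskPool pool(7);   // seven workers, mirroring the seven usable SPEs
    for (int i = 0; i < 32; ++i)
        pool.submit([] { /* e.g. one slice of the virtual-world physics update */ });
    return 0;           // destructor drains the remaining tasks and joins the workers
}
```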
- The main memory 402 typically includes both general-purpose and nonvolatile storage, as well as special-purpose hardware registers or arrays used for functions such as system configuration, data-transfer synchronization, memory-mapped I/O, and I/O subsystems. In embodiments of the present invention, a video game program 403 may be resident in main memory 402. The video program 403 may include inertial, image and acoustic analyzers and a mixer configured as described with respect to FIGS. 4, 5A, 5B or 5C above, or some combination of these. The program 403 may run on the PPE. The program 403 may be divided up into multiple signal processing tasks that can be executed on the SPEs and/or PPE.
- By way of example, the
PPE 404 may be a 64-bit PowerPC Processor Unit (PPU) with associated caches L1 and L2. The PPE 404 is a general-purpose processing unit, which can access system management resources (such as the memory-protection tables, for example). Hardware resources may be mapped explicitly to a real address space as seen by the PPE. Therefore, the PPE can address any of these resources directly by using an appropriate effective address value. A primary function of the PPE 404 is the management and allocation of tasks for the SPEs 406 in the cell processor 400.
- Although only a single PPE is shown in
FIG. 4, in some cell processor implementations, such as the cell broadband engine architecture (CBEA), the cell processor 400 may have multiple PPEs organized into PPE groups, of which there may be more than one. These PPE groups may share access to the main memory 402. Furthermore, the cell processor 400 may include two or more groups of SPEs. The SPE groups may also share access to the main memory 402. Such configurations are within the scope of the present invention.
- Each
SPE 406 includes a synergistic processor unit (SPU) and its own local storage area LS. The local storage LS may include one or more separate areas of memory storage, each one associated with a specific SPU. Each SPU may be configured to only execute instructions (including data load and data store operations) from within its own associated local storage domain. In such a configuration, data transfers between the local storage LS and elsewhere in the system 400 may be performed by issuing direct memory access (DMA) commands from the memory flow controller (MFC) to transfer data to or from the local storage domain (of the individual SPE). The SPUs are less complex computational units than the PPE 404 in that they do not perform any system management functions. The SPUs generally have a single instruction, multiple data (SIMD) capability and typically process data and initiate any required data transfers (subject to access properties set up by the PPE) in order to perform their allocated tasks. The purpose of the SPU is to enable applications that require a higher computational unit density and can effectively use the provided instruction set. A significant number of SPEs in a system managed by the PPE 404 allows for cost-effective processing over a wide range of applications.
- Each
SPE 406 may include a dedicated memory flow controller (MFC) that includes an associated memory management unit that can hold and process memory-protection and access-permission information. The MFC provides the primary method for data transfer, protection, and synchronization between main storage of the cell processor and the local storage of an SPE. An MFC command describes the transfer to be performed. Commands for transferring data are sometimes referred to as MFC direct memory access (DMA) commands (or MFC DMA commands). - Each MFC may support multiple DMA transfers at the same time and can maintain and process multiple MFC commands. Each MFC DMA data transfer command request may involve both a local storage address (LSA) and an effective address (EA). The local storage address may directly address only the local storage area of its associated SPE. The effective address may have a more general application, e.g., it may be able to reference main storage, including all the SPE local storage areas, if they are aliased into the real address space.
- To facilitate communication between the
SPEs 406 and/or between the SPEs 406 and the PPE 404, the SPEs 406 and PPE 404 may include signal notification registers that are tied to signaling events. The PPE 404 and SPEs 406 may be coupled by a star topology in which the PPE 404 acts as a router to transmit messages to the SPEs 406. Alternatively, each SPE 406 and the PPE 404 may have a one-way signal notification register referred to as a mailbox. The mailbox can be used by an SPE 406 to host operating system (OS) synchronization.
- The
cell processor 400 may include an input/output (I/O) function 408 through which the cell processor 400 may interface with peripheral devices, such as a microphone array 412, an optional image capture unit 413, and a game/virtual world controller 730. The controller unit 730 may include an inertial sensor 732 and light sources 734. In addition, an Element Interconnect Bus 410 may connect the various components listed above. Each SPE and the PPE can access the bus 410 through a bus interface unit BIU. The cell processor 400 may also include two controllers typically found in a processor: a Memory Interface Controller MIC that controls the flow of data between the bus 410 and the main memory 402, and a Bus Interface Controller BIC, which controls the flow of data between the I/O 408 and the bus 410. Although the requirements for the MIC, BIC, BIUs and bus 410 may vary widely for different implementations, those of skill in the art will be familiar with their functions and with circuits for implementing them.
- The
cell processor 400 may also include an internal interrupt controller IIC. The IIC component manages the priority of the interrupts presented to the PPE. The IIC allows interrupts from the other components of the cell processor 400 to be handled without using a main system interrupt controller. The IIC may be regarded as a second-level controller. The main system interrupt controller may handle interrupts originating external to the cell processor.
- In embodiments of the present invention, certain computations that facilitate interaction with the virtual world may be performed in parallel using the
PPE 404 and/or one or more of the SPEs 406. Such computations may be run as one or more separate tasks that different SPEs 406 may take as they become available.
- While the above is a complete description of the preferred embodiment of the present invention, it is possible to use various alternatives, modifications and equivalents. Therefore, the scope of the present invention should be determined not with reference to the above description but should, instead, be determined with reference to the appended claims, along with their full scope of equivalents. Any feature described herein, whether preferred or not, may be combined with any other feature described herein, whether preferred or not. In the claims that follow, the indefinite article “A” or “An” refers to a quantity of one or more of the item following the article, except where expressly stated otherwise. The appended claims are not to be interpreted as including means-plus-function limitations, unless such a limitation is explicitly recited in a given claim using the phrase “means for.”
Claims (11)
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/682,292 US20080215972A1 (en) | 2007-03-01 | 2007-03-05 | Mapping user emotional state to avatar in a virtual world |
PCT/US2008/055037 WO2008109299A2 (en) | 2007-03-01 | 2008-02-26 | System and method for communicating with a virtual world |
EP08730776A EP2132650A4 (en) | 2007-03-01 | 2008-02-26 | System and method for communicating with a virtual world |
JP2009551806A JP2010533006A (en) | 2007-03-01 | 2008-02-26 | System and method for communicating with a virtual world |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US89239707P | 2007-03-01 | 2007-03-01 | |
GBGB0703974.6A GB0703974D0 (en) | 2007-03-01 | 2007-03-01 | Entertainment device |
GB0703974.6 | 2007-03-01 | ||
US11/682,292 US20080215972A1 (en) | 2007-03-01 | 2007-03-05 | Mapping user emotional state to avatar in a virtual world |
Publications (1)
Publication Number | Publication Date |
---|---|
US20080215972A1 true US20080215972A1 (en) | 2008-09-04 |
Family
ID=37965740
Family Applications (8)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/682,287 Abandoned US20080215971A1 (en) | 2007-03-01 | 2007-03-05 | System and method for communicating with an avatar |
US11/682,298 Active 2028-08-26 US8788951B2 (en) | 2007-03-01 | 2007-03-05 | Avatar customization |
US11/682,299 Active 2030-10-20 US8502825B2 (en) | 2007-03-01 | 2007-03-05 | Avatar email and methods for communicating between real and virtual worlds |
US11/682,292 Abandoned US20080215972A1 (en) | 2007-03-01 | 2007-03-05 | Mapping user emotional state to avatar in a virtual world |
US11/682,284 Active 2028-01-29 US7979574B2 (en) | 2007-03-01 | 2007-03-05 | System and method for routing communications among real and virtual communication devices |
US11/682,281 Active 2030-01-20 US8425322B2 (en) | 2007-03-01 | 2007-03-05 | System and method for communicating with a virtual world |
US12/528,956 Abandoned US20120166969A1 (en) | 2007-03-01 | 2008-02-29 | Apparatus and method of data transfer |
US12/528,920 Active 2031-08-07 US8951123B2 (en) | 2007-03-01 | 2008-03-03 | Apparatus and method of modifying an online environment |
Family Applications Before (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/682,287 Abandoned US20080215971A1 (en) | 2007-03-01 | 2007-03-05 | System and method for communicating with an avatar |
US11/682,298 Active 2028-08-26 US8788951B2 (en) | 2007-03-01 | 2007-03-05 | Avatar customization |
US11/682,299 Active 2030-10-20 US8502825B2 (en) | 2007-03-01 | 2007-03-05 | Avatar email and methods for communicating between real and virtual worlds |
Family Applications After (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/682,284 Active 2028-01-29 US7979574B2 (en) | 2007-03-01 | 2007-03-05 | System and method for routing communications among real and virtual communication devices |
US11/682,281 Active 2030-01-20 US8425322B2 (en) | 2007-03-01 | 2007-03-05 | System and method for communicating with a virtual world |
US12/528,956 Abandoned US20120166969A1 (en) | 2007-03-01 | 2008-02-29 | Apparatus and method of data transfer |
US12/528,920 Active 2031-08-07 US8951123B2 (en) | 2007-03-01 | 2008-03-03 | Apparatus and method of modifying an online environment |
Country Status (8)
Country | Link |
---|---|
US (8) | US20080215971A1 (en) |
EP (2) | EP1964597B1 (en) |
JP (2) | JP5026531B2 (en) |
AT (1) | ATE497813T1 (en) |
DE (1) | DE602008004893D1 (en) |
ES (1) | ES2408680T3 (en) |
GB (2) | GB0703974D0 (en) |
WO (2) | WO2008104786A2 (en) |
Cited By (99)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080020361A1 (en) * | 2006-07-12 | 2008-01-24 | Kron Frederick W | Computerized medical training system |
US20080093814A1 (en) * | 2004-09-09 | 2008-04-24 | Massimo Filippi | Wheel Assembly with Internal Pressure Reservoir and Pressure Fluctuation Warning System |
US20080215971A1 (en) * | 2007-03-01 | 2008-09-04 | Sony Computer Entertainment America Inc. | System and method for communicating with an avatar |
US20090106672A1 (en) * | 2007-10-18 | 2009-04-23 | Sony Ericsson Mobile Communications Ab | Virtual world avatar activity governed by person's real life activity |
US20090128567A1 (en) * | 2007-11-15 | 2009-05-21 | Brian Mark Shuster | Multi-instance, multi-user animation with coordinated chat |
US20090164916A1 (en) * | 2007-12-21 | 2009-06-25 | Samsung Electronics Co., Ltd. | Method and system for creating mixed world that reflects real state |
US20090222255A1 (en) * | 2008-02-28 | 2009-09-03 | International Business Machines Corporation | Using gender analysis of names to assign avatars in instant messaging applications |
US20090254843A1 (en) * | 2008-04-05 | 2009-10-08 | Social Communications Company | Shared virtual area communication environment based apparatus and methods |
US20090292658A1 (en) * | 2008-05-23 | 2009-11-26 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Acquisition and particular association of inference data indicative of inferred mental states of authoring users |
US20090290767A1 (en) * | 2008-05-23 | 2009-11-26 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Determination of extent of congruity between observation of authoring user and observation of receiving user |
US20090292713A1 (en) * | 2008-05-23 | 2009-11-26 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Acquisition and particular association of data indicative of an inferred mental state of an authoring user |
US20090292928A1 (en) * | 2008-05-23 | 2009-11-26 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Acquisition and particular association of inference data indicative of an inferred mental state of an authoring user and source identity data |
US20100013828A1 (en) * | 2008-07-17 | 2010-01-21 | International Business Machines Corporation | System and method for enabling multiple-state avatars |
US20100020100A1 (en) * | 2008-07-25 | 2010-01-28 | International Business Machines Corporation | Method for extending a virtual environment through registration |
US20100031164A1 (en) * | 2008-08-01 | 2010-02-04 | International Business Machines Corporation | Method for providing a virtual world layer |
US20100081507A1 (en) * | 2008-10-01 | 2010-04-01 | Microsoft Corporation | Adaptation for Alternate Gaming Input Devices |
US20100114668A1 (en) * | 2007-04-23 | 2010-05-06 | Integrated Media Measurement, Inc. | Determining Relative Effectiveness Of Media Content Items |
US20100146052A1 (en) * | 2007-06-22 | 2010-06-10 | France Telecom | method and a system for setting up encounters between persons in a telecommunications system |
US20100199221A1 (en) * | 2009-01-30 | 2010-08-05 | Microsoft Corporation | Navigation of a virtual plane using depth |
US20100194872A1 (en) * | 2009-01-30 | 2010-08-05 | Microsoft Corporation | Body scan |
US20100194741A1 (en) * | 2009-01-30 | 2010-08-05 | Microsoft Corporation | Depth map movement tracking via optical flow and velocity prediction |
US20100231512A1 (en) * | 2009-03-16 | 2010-09-16 | Microsoft Corporation | Adaptive cursor sizing |
US20100238182A1 (en) * | 2009-03-20 | 2010-09-23 | Microsoft Corporation | Chaining animations |
US20100281437A1 (en) * | 2009-05-01 | 2010-11-04 | Microsoft Corporation | Managing virtual ports |
US20100281436A1 (en) * | 2009-05-01 | 2010-11-04 | Microsoft Corporation | Binding users to a gesture based system and providing feedback to the users |
US20100278384A1 (en) * | 2009-05-01 | 2010-11-04 | Microsoft Corporation | Human body pose estimation |
US20100281438A1 (en) * | 2009-05-01 | 2010-11-04 | Microsoft Corporation | Altering a view perspective within a display environment |
US20100277470A1 (en) * | 2009-05-01 | 2010-11-04 | Microsoft Corporation | Systems And Methods For Applying Model Tracking To Motion Capture |
US20100277489A1 (en) * | 2009-05-01 | 2010-11-04 | Microsoft Corporation | Determine intended motions |
US20100295771A1 (en) * | 2009-05-20 | 2010-11-25 | Microsoft Corporation | Control of display objects |
US20100302257A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Systems and Methods For Applying Animations or Motions to a Character |
US20100306715A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Gestures Beyond Skeletal |
US20100303289A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Device for identifying and tracking multiple humans over time |
US20100302395A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Environment And/Or Target Segmentation |
US20100306713A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Gesture Tool |
US20100306710A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Living cursor control mechanics |
US20100306685A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | User movement feedback via on-screen avatars |
US20100303302A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Systems And Methods For Estimating An Occluded Body Part |
US20100304813A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Protocol And Format For Communicating An Image From A Camera To A Computing Environment |
US20100306716A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Extending standard gestures |
US20100306655A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Avatar Integrated Shared Media Experience |
US20100306712A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Gesture Coach |
US20100306261A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Localized Gesture Aggregation |
US20100303290A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Systems And Methods For Tracking A Model |
US20100302138A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Methods and systems for defining or modifying a visual representation |
US20100311280A1 (en) * | 2009-06-03 | 2010-12-09 | Microsoft Corporation | Dual-barrel, connector jack and plug assemblies |
US20110007142A1 (en) * | 2009-07-09 | 2011-01-13 | Microsoft Corporation | Visual representation expression based on player expression |
US20110007079A1 (en) * | 2009-07-13 | 2011-01-13 | Microsoft Corporation | Bringing a visual representation to life via learned input from the user |
US20110025689A1 (en) * | 2009-07-29 | 2011-02-03 | Microsoft Corporation | Auto-Generating A Visual Representation |
US20110055846A1 (en) * | 2009-08-31 | 2011-03-03 | Microsoft Corporation | Techniques for using human gestures to control gesture unaware programs |
US20110087540A1 (en) * | 2007-06-08 | 2011-04-14 | Gopal Krishnan | Web Pages and Methods for Displaying Targeted On-Line Advertisements in a Social Networking Media Space |
US20110109617A1 (en) * | 2009-11-12 | 2011-05-12 | Microsoft Corporation | Visualizing Depth |
US20110208014A1 (en) * | 2008-05-23 | 2011-08-25 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Determination of extent of congruity between observation of authoring user and observation of receiving user |
US8010474B1 (en) * | 2006-09-05 | 2011-08-30 | Aol Inc. | Translating paralinguisitic indicators |
US20110298827A1 (en) * | 2010-06-02 | 2011-12-08 | Microsoft Corporation | Limiting avatar gesture display |
US20120011453A1 (en) * | 2010-07-08 | 2012-01-12 | Namco Bandai Games Inc. | Method, storage medium, and user terminal |
US8290249B2 (en) | 2009-05-01 | 2012-10-16 | Microsoft Corporation | Systems and methods for detecting a tilt angle from a depth image |
US8416247B2 (en) | 2007-10-09 | 2013-04-09 | Sony Computer Entertaiment America Inc. | Increasing the number of advertising impressions in an interactive environment |
US8429225B2 (en) | 2008-05-21 | 2013-04-23 | The Invention Science Fund I, Llc | Acquisition and presentation of data indicative of an extent of congruence between inferred mental states of authoring users |
US8509479B2 (en) | 2009-05-29 | 2013-08-13 | Microsoft Corporation | Virtual object |
US8542252B2 (en) | 2009-05-29 | 2013-09-24 | Microsoft Corporation | Target digitization, extraction, and tracking |
US8620113B2 (en) | 2011-04-25 | 2013-12-31 | Microsoft Corporation | Laser diode modes |
US8635637B2 (en) | 2011-12-02 | 2014-01-21 | Microsoft Corporation | User interface presenting an animated avatar performing a media reaction |
US8638985B2 (en) | 2009-05-01 | 2014-01-28 | Microsoft Corporation | Human body pose estimation |
US8649554B2 (en) | 2009-05-01 | 2014-02-11 | Microsoft Corporation | Method to control perspective for a camera-controlled computer |
US8667519B2 (en) | 2010-11-12 | 2014-03-04 | Microsoft Corporation | Automatic passive and anonymous feedback system |
US8726195B2 (en) | 2006-09-05 | 2014-05-13 | Aol Inc. | Enabling an IM user to navigate a virtual world |
US8760395B2 (en) | 2011-05-31 | 2014-06-24 | Microsoft Corporation | Gesture recognition techniques |
US20140303778A1 (en) * | 2010-06-07 | 2014-10-09 | Gary Stephen Shuster | Creation and use of virtual places |
US8898687B2 (en) | 2012-04-04 | 2014-11-25 | Microsoft Corporation | Controlling a media program based on a media reaction |
US8930472B2 (en) | 2007-10-24 | 2015-01-06 | Social Communications Company | Promoting communicant interactions in a network communications environment |
US8942917B2 (en) | 2011-02-14 | 2015-01-27 | Microsoft Corporation | Change invariant scene recognition by an agent |
US8942428B2 (en) | 2009-05-01 | 2015-01-27 | Microsoft Corporation | Isolate extraneous motions |
US8959541B2 (en) | 2012-05-04 | 2015-02-17 | Microsoft Technology Licensing, Llc | Determining a future portion of a currently presented media program |
US20150169832A1 (en) * | 2013-12-18 | 2015-06-18 | Lenovo (Singapore) Pte, Ltd. | Systems and methods to determine user emotions and moods based on acceleration data and biometric data |
US9065874B2 (en) | 2009-01-15 | 2015-06-23 | Social Communications Company | Persistent network resource and virtual area associations for realtime collaboration |
US9100685B2 (en) | 2011-12-09 | 2015-08-04 | Microsoft Technology Licensing, Llc | Determining audience state or interest using passive sensor data |
US9101263B2 (en) | 2008-05-23 | 2015-08-11 | The Invention Science Fund I, Llc | Acquisition and association of data indicative of an inferred mental state of an authoring user |
US20150254887A1 (en) * | 2014-03-07 | 2015-09-10 | Yu-Hsien Li | Method and system for modeling emotion |
US9256282B2 (en) | 2009-03-20 | 2016-02-09 | Microsoft Technology Licensing, Llc | Virtual object manipulation |
US9298263B2 (en) | 2009-05-01 | 2016-03-29 | Microsoft Technology Licensing, Llc | Show body position |
US9400559B2 (en) | 2009-05-29 | 2016-07-26 | Microsoft Technology Licensing, Llc | Gesture shortcuts |
US20160226813A1 (en) * | 2015-01-29 | 2016-08-04 | International Business Machines Corporation | Smartphone indicator for conversation nonproductivity |
US9465980B2 (en) | 2009-01-30 | 2016-10-11 | Microsoft Technology Licensing, Llc | Pose tracking pipeline |
US9483157B2 (en) | 2007-10-24 | 2016-11-01 | Sococo, Inc. | Interfacing with a spatial virtual communication environment |
US9635195B1 (en) * | 2008-12-24 | 2017-04-25 | The Directv Group, Inc. | Customizable graphical elements for use in association with a user interface |
US9898675B2 (en) | 2009-05-01 | 2018-02-20 | Microsoft Technology Licensing, Llc | User movement tracking feedback to improve tracking |
GB2556347A (en) * | 2016-03-11 | 2018-05-30 | Sony Interactive Entertainment Europe Ltd | Virtual reality |
US10609332B1 (en) | 2018-12-21 | 2020-03-31 | Microsoft Technology Licensing, Llc | Video conferencing supporting a composite video stream |
US20200167002A1 (en) * | 2018-11-28 | 2020-05-28 | International Business Machines Corporation | Non-verbal communication tracking and classification |
US11158102B2 (en) * | 2019-01-22 | 2021-10-26 | Beijing Baidu Netcom Science And Technology Co., Ltd. | Method and apparatus for processing information |
US11215711B2 (en) | 2012-12-28 | 2022-01-04 | Microsoft Technology Licensing, Llc | Using photometric stereo for 3D environment modeling |
US11223800B1 (en) | 2020-11-03 | 2022-01-11 | International Business Machines Corporation | Selective reaction obfuscation |
US11322171B1 (en) | 2007-12-17 | 2022-05-03 | Wai Wu | Parallel signal processing system and method |
US11651562B2 (en) * | 2019-12-30 | 2023-05-16 | Tmrw Foundation Ip S. À R.L. | Method and system for enabling enhanced user-to-user communication in digital realities |
US11657438B2 (en) | 2012-10-19 | 2023-05-23 | Sococo, Inc. | Bridging physical and virtual spaces |
US11710309B2 (en) | 2013-02-22 | 2023-07-25 | Microsoft Technology Licensing, Llc | Camera/object pose from predicted coordinates |
US20230252709A1 (en) * | 2013-08-09 | 2023-08-10 | Implementation Apps Llc | Generating a background that allows a first avatar to take part in an activity with a second avatar |
KR102724091B1 (en) * | 2023-02-02 | 2024-10-31 | 주식회사 스콘 | An avatar interaction method and a computing devices on which such method is implemented |
Families Citing this family (387)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8233592B2 (en) | 2003-11-10 | 2012-07-31 | Nuance Communications, Inc. | Personal home voice portal |
EP2131731B1 (en) * | 2007-02-16 | 2014-04-09 | Galvanic Limited | Biosensor system |
US20080204448A1 (en) * | 2007-02-27 | 2008-08-28 | Dawson Christopher J | Unsolicited advertisements in a virtual universe through avatar transport offers |
US20080215975A1 (en) * | 2007-03-01 | 2008-09-04 | Phil Harrison | Virtual world user opinion & response monitoring |
US20080263460A1 (en) * | 2007-04-20 | 2008-10-23 | Utbk, Inc. | Methods and Systems to Connect People for Virtual Meeting in Virtual Reality |
US8601386B2 (en) | 2007-04-20 | 2013-12-03 | Ingenio Llc | Methods and systems to facilitate real time communications in virtual reality |
US20080301556A1 (en) * | 2007-05-30 | 2008-12-04 | Motorola, Inc. | Method and apparatus for displaying operational information about an electronic device |
US20080297515A1 (en) * | 2007-05-30 | 2008-12-04 | Motorola, Inc. | Method and apparatus for determining the appearance of a character display by an electronic device |
US20090055484A1 (en) * | 2007-08-20 | 2009-02-26 | Thanh Vuong | System and method for representation of electronic mail users using avatars |
US20090055187A1 (en) * | 2007-08-21 | 2009-02-26 | Howard Leventhal | Conversion of text email or SMS message to speech spoken by animated avatar for hands-free reception of email and SMS messages while driving a vehicle |
US8281240B2 (en) * | 2007-08-23 | 2012-10-02 | International Business Machines Corporation | Avatar aggregation in a virtual universe |
US20090063970A1 (en) * | 2007-08-30 | 2009-03-05 | Comverse Ltd. | System, method and program product for exchanging information between a virtual environment to a non-virtual environment |
KR20230156158A (en) * | 2007-09-26 | 2023-11-13 | 에이큐 미디어 인크 | Audio-visual navigation and communication |
US20090089685A1 (en) * | 2007-09-28 | 2009-04-02 | Mordecai Nicole Y | System and Method of Communicating Between A Virtual World and Real World |
US8407605B2 (en) | 2009-04-03 | 2013-03-26 | Social Communications Company | Application sharing |
US20090288007A1 (en) * | 2008-04-05 | 2009-11-19 | Social Communications Company | Spatial interfaces for realtime networked communications |
US7769806B2 (en) | 2007-10-24 | 2010-08-03 | Social Communications Company | Automated real-time data stream switching in a shared virtual area communication environment |
US8375397B1 (en) | 2007-11-06 | 2013-02-12 | Google Inc. | Snapshot view of multi-dimensional virtual environment |
US8595299B1 (en) * | 2007-11-07 | 2013-11-26 | Google Inc. | Portals between multi-dimensional virtual environments |
US8732591B1 (en) | 2007-11-08 | 2014-05-20 | Google Inc. | Annotations of objects in multi-dimensional virtual environments |
US9245041B2 (en) * | 2007-11-10 | 2016-01-26 | Geomonkey, Inc. | Creation and use of digital maps |
JP2009122776A (en) * | 2007-11-12 | 2009-06-04 | Internatl Business Mach Corp <Ibm> | Information control method and device in virtual world |
US8062130B2 (en) * | 2007-11-16 | 2011-11-22 | International Business Machines Corporation | Allowing an alternative action in a virtual world |
US20090141047A1 (en) * | 2007-11-29 | 2009-06-04 | International Business Machines Corporation | Virtual world communication display method |
US8149241B2 (en) * | 2007-12-10 | 2012-04-03 | International Business Machines Corporation | Arrangements for controlling activities of an avatar |
US20090157660A1 (en) * | 2007-12-13 | 2009-06-18 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Methods and systems employing a cohort-linked avatar |
US20090157625A1 (en) * | 2007-12-13 | 2009-06-18 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Methods and systems for identifying an avatar-linked population cohort |
US20090157751A1 (en) * | 2007-12-13 | 2009-06-18 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Methods and systems for specifying an avatar |
US9211077B2 (en) * | 2007-12-13 | 2015-12-15 | The Invention Science Fund I, Llc | Methods and systems for specifying an avatar |
US20090157813A1 (en) * | 2007-12-17 | 2009-06-18 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Methods and systems for identifying an avatar-linked population cohort |
US8615479B2 (en) * | 2007-12-13 | 2013-12-24 | The Invention Science Fund I, Llc | Methods and systems for indicating behavior in a population cohort |
US20090171164A1 (en) * | 2007-12-17 | 2009-07-02 | Jung Edward K Y | Methods and systems for identifying an avatar-linked population cohort |
US20090164458A1 (en) * | 2007-12-20 | 2009-06-25 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Methods and systems employing a cohort-linked avatar |
US20090157481A1 (en) * | 2007-12-13 | 2009-06-18 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Methods and systems for specifying a cohort-linked avatar attribute |
US9418368B2 (en) * | 2007-12-20 | 2016-08-16 | Invention Science Fund I, Llc | Methods and systems for determining interest in a cohort-linked avatar |
US20090164503A1 (en) * | 2007-12-20 | 2009-06-25 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Methods and systems for specifying a media content-linked population cohort |
US20090164131A1 (en) * | 2007-12-20 | 2009-06-25 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Methods and systems for specifying a media content-linked population cohort |
US9775554B2 (en) * | 2007-12-31 | 2017-10-03 | Invention Science Fund I, Llc | Population cohort-linked avatar |
US9483750B2 (en) * | 2007-12-31 | 2016-11-01 | International Business Machines Corporation | Location independent communication in a virtual world |
US7970911B2 (en) * | 2008-01-04 | 2011-06-28 | Mitel Networks Corporation | Method, apparatus and system for modulating an application based on proximity |
US20090228355A1 (en) * | 2008-03-07 | 2009-09-10 | Dawson Christopher J | Amelioration of unsolicited advertisements in a virtual universe through avatar transport offers |
US8605863B1 (en) * | 2008-03-18 | 2013-12-10 | Avaya Inc. | Method and apparatus for providing state indication on a telephone call |
US8006182B2 (en) * | 2008-03-18 | 2011-08-23 | International Business Machines Corporation | Method and computer program product for implementing automatic avatar status indicators |
US20090254855A1 (en) * | 2008-04-08 | 2009-10-08 | Sony Ericsson Mobile Communications, Ab | Communication terminals with superimposed user interface |
US20090282075A1 (en) * | 2008-05-06 | 2009-11-12 | Dawson Christopher J | System and method for identifying and blocking avatar-based unsolicited advertising in a virtual universe |
US8244805B2 (en) * | 2008-06-24 | 2012-08-14 | International Business Machines Corporation | Communication integration between a virtual universe and an external device |
US7970840B2 (en) * | 2008-07-02 | 2011-06-28 | International Business Machines Corporation | Method to continue instant messaging exchange when exiting a virtual world |
US20120246585A9 (en) * | 2008-07-14 | 2012-09-27 | Microsoft Corporation | System for editing an avatar |
US9223469B2 (en) * | 2008-08-22 | 2015-12-29 | Intellectual Ventures Fund 83 Llc | Configuring a virtual world user-interface |
GB2463122A (en) * | 2008-09-09 | 2010-03-10 | Skype Ltd | Establishing a communication event in response to an interaction with an electronic game object |
US8347235B2 (en) | 2008-09-26 | 2013-01-01 | International Business Machines Corporation | Method and system of providing information during content breakpoints in a virtual universe |
US8866809B2 (en) * | 2008-09-30 | 2014-10-21 | Apple Inc. | System and method for rendering dynamic three-dimensional appearing imagery on a two-dimensional user interface |
US9902109B2 (en) | 2008-10-07 | 2018-02-27 | Tripetals, Llc | Internet-enabled apparatus, system and methods for physically and virtually rendering three-dimensional objects |
US20100088650A1 (en) * | 2008-10-07 | 2010-04-08 | Christopher Kaltenbach | Internet-enabled apparatus, system and methods for physically and virtually rendering three-dimensional objects |
JP2012506088A (en) | 2008-10-14 | 2012-03-08 | ブランド・アフィニティー・テクノロジーズ・インコーポレイテッド | Apparatus, system and method for brand affinity engine using positive and negative descriptions and indexes |
US9747371B2 (en) * | 2008-10-14 | 2017-08-29 | Disney Enterprises, Inc. | Method and system for producing customized content |
US20100093439A1 (en) * | 2008-10-15 | 2010-04-15 | Nc Interactive, Inc. | Interactive network game and methods thereof |
US8683354B2 (en) * | 2008-10-16 | 2014-03-25 | At&T Intellectual Property I, L.P. | System and method for distributing an avatar |
US20100099495A1 (en) * | 2008-10-16 | 2010-04-22 | Nc Interactive, Inc. | Interactive network game and methods thereof |
US8589803B2 (en) * | 2008-11-05 | 2013-11-19 | At&T Intellectual Property I, L.P. | System and method for conducting a communication exchange |
US9412126B2 (en) * | 2008-11-06 | 2016-08-09 | At&T Intellectual Property I, Lp | System and method for commercializing avatars |
US8898565B2 (en) * | 2008-11-06 | 2014-11-25 | At&T Intellectual Property I, Lp | System and method for sharing avatars |
US20100131843A1 (en) * | 2008-11-26 | 2010-05-27 | International Business Machines Corporation | Transforming Business Process Data to Generate a Virtual World Client for Human Interactions |
US20110143839A1 (en) * | 2008-12-01 | 2011-06-16 | Mclaughlin Thomas | Exercise control device for video games |
US20100137105A1 (en) * | 2008-12-01 | 2010-06-03 | Mclaughlin Thomas | Riding the joystick system to health and fitness |
US8751927B2 (en) * | 2008-12-02 | 2014-06-10 | International Business Machines Corporation | System and method for dynamic multi-content cards |
US8988421B2 (en) * | 2008-12-02 | 2015-03-24 | International Business Machines Corporation | Rendering avatar details |
US9529423B2 (en) * | 2008-12-10 | 2016-12-27 | International Business Machines Corporation | System and method to modify audio components in an online environment |
US20100153858A1 (en) * | 2008-12-11 | 2010-06-17 | Paul Gausman | Uniform virtual environments |
US8214433B2 (en) * | 2008-12-15 | 2012-07-03 | International Business Machines Corporation | System and method to provide context for an automated agent to service multiple avatars within a virtual universe |
US8581838B2 (en) * | 2008-12-19 | 2013-11-12 | Samsung Electronics Co., Ltd. | Eye gaze control during avatar-based communication |
US9697535B2 (en) | 2008-12-23 | 2017-07-04 | International Business Machines Corporation | System and method in a virtual universe for identifying spam avatars based upon avatar multimedia characteristics |
US9704177B2 (en) | 2008-12-23 | 2017-07-11 | International Business Machines Corporation | Identifying spam avatars in a virtual universe (VU) based upon turing tests |
US8255807B2 (en) * | 2008-12-23 | 2012-08-28 | Ganz | Item customization and website customization |
US20100162149A1 (en) * | 2008-12-24 | 2010-06-24 | At&T Intellectual Property I, L.P. | Systems and Methods to Provide Location Information |
US8762861B2 (en) * | 2008-12-28 | 2014-06-24 | Avaya, Inc. | Method and apparatus for interrelating virtual environment and web content |
US9853922B2 (en) | 2012-02-24 | 2017-12-26 | Sococo, Inc. | Virtual area communications |
US9319357B2 (en) | 2009-01-15 | 2016-04-19 | Social Communications Company | Context based virtual area creation |
US9600306B2 (en) * | 2009-01-31 | 2017-03-21 | International Business Machines Corporation | Client-side simulated virtual universe environment |
US9105014B2 (en) | 2009-02-03 | 2015-08-11 | International Business Machines Corporation | Interactive avatar in messaging environment |
US8700477B2 (en) * | 2009-05-26 | 2014-04-15 | Embodee Corp. | Garment fit portrayal system and method |
US20130124156A1 (en) * | 2009-05-26 | 2013-05-16 | Embodee Corp | Footwear digitization system and method |
US8656476B2 (en) | 2009-05-28 | 2014-02-18 | International Business Machines Corporation | Providing notification of spam avatars |
KR20100138700A (en) * | 2009-06-25 | 2010-12-31 | 삼성전자주식회사 | Method and apparatus for processing virtual world |
US8417649B2 (en) * | 2009-07-13 | 2013-04-09 | International Business Machines Corporation | Providing a seamless conversation service between interacting environments |
TW201102847A (en) * | 2009-07-14 | 2011-01-16 | Tzu-Ling Liang | Game blog platform system |
JP5227910B2 (en) * | 2009-07-21 | 2013-07-03 | 株式会社コナミデジタルエンタテインメント | Video game apparatus, game image display method, and game image display program |
JP5438412B2 (en) * | 2009-07-22 | 2014-03-12 | 株式会社コナミデジタルエンタテインメント | Video game device, game information display control method, and game information display control program |
US20110029889A1 (en) * | 2009-07-31 | 2011-02-03 | International Business Machines Corporation | Selective and on-demand representation in a virtual world |
US9393488B2 (en) * | 2009-09-03 | 2016-07-19 | International Business Machines Corporation | Dynamically depicting interactions in a virtual world based on varied user rights |
GB0915588D0 (en) * | 2009-09-07 | 2009-10-07 | Sony Comp Entertainment Europe | Image processing method, apparatus and system |
US9542010B2 (en) * | 2009-09-15 | 2017-01-10 | Palo Alto Research Center Incorporated | System for interacting with objects in a virtual environment |
JP2011081480A (en) * | 2009-10-05 | 2011-04-21 | Seiko Epson Corp | Image input system |
US20110086711A1 (en) * | 2009-10-08 | 2011-04-14 | Sony Ericsson Mobile Communications Ab | Game Environment to Interact with Telephony Modem |
EP2494432B1 (en) * | 2009-10-27 | 2019-05-29 | Harmonix Music Systems, Inc. | Gesture-based user interface |
US9981193B2 (en) | 2009-10-27 | 2018-05-29 | Harmonix Music Systems, Inc. | Movement based recognition and evaluation |
US8365075B2 (en) * | 2009-11-19 | 2013-01-29 | International Business Machines Corporation | Recording events in a virtual world |
US8401848B2 (en) * | 2009-12-15 | 2013-03-19 | At&T Intellectual Property I, L.P. | System and method for audible text center subsystem |
US9031221B2 (en) * | 2009-12-22 | 2015-05-12 | Cyara Solutions Pty Ltd | System and method for automated voice quality testing |
US10659402B2 (en) * | 2009-12-22 | 2020-05-19 | Cyara Solutions Pty Ltd | System and method for automated end-to-end web interaction testing |
US8284157B2 (en) | 2010-01-15 | 2012-10-09 | Microsoft Corporation | Directed performance in motion capture system |
US20120327091A1 (en) * | 2010-03-08 | 2012-12-27 | Nokia Corporation | Gestural Messages in Social Phonebook |
US9264785B2 (en) | 2010-04-01 | 2016-02-16 | Sony Computer Entertainment Inc. | Media fingerprinting for content determination and retrieval |
US8560583B2 (en) | 2010-04-01 | 2013-10-15 | Sony Computer Entertainment Inc. | Media fingerprinting for social networking |
TWI439960B (en) | 2010-04-07 | 2014-06-01 | Apple Inc | Avatar editing environment |
US9542038B2 (en) | 2010-04-07 | 2017-01-10 | Apple Inc. | Personalizing colors of user interfaces |
US8719730B2 (en) | 2010-04-23 | 2014-05-06 | Ganz | Radial user interface and system for a virtual world game |
US8323068B2 (en) | 2010-04-23 | 2012-12-04 | Ganz | Villagers in a virtual world with upgrading via codes |
US11117033B2 (en) | 2010-04-26 | 2021-09-14 | Wilbert Quinc Murdock | Smart system for display of dynamic movement parameters in sports and training |
US20110276883A1 (en) * | 2010-05-07 | 2011-11-10 | Mark Cabble | Online Multiplayer Virtual Game and Virtual Social Environment Interaction Using Integrated Mobile Services Technologies |
US9990429B2 (en) | 2010-05-14 | 2018-06-05 | Microsoft Technology Licensing, Llc | Automated social networking graph mining and visualization |
US20110289432A1 (en) * | 2010-05-21 | 2011-11-24 | Lucas Keith V | Community-Based Moderator System for Online Content |
US8771064B2 (en) | 2010-05-26 | 2014-07-08 | Aristocrat Technologies Australia Pty Limited | Gaming system and a method of gaming |
US8949725B1 (en) * | 2010-05-27 | 2015-02-03 | Speaktoit, Inc. | Chat information system for portable electronic devices |
US8694899B2 (en) | 2010-06-01 | 2014-04-08 | Apple Inc. | Avatars reflecting user states |
US8384770B2 (en) * | 2010-06-02 | 2013-02-26 | Nintendo Co., Ltd. | Image display system, image display apparatus, and image display method |
US9358456B1 (en) | 2010-06-11 | 2016-06-07 | Harmonix Music Systems, Inc. | Dance competition game |
CA2802348A1 (en) | 2010-06-11 | 2011-12-15 | Harmonix Music Systems, Inc. | Dance game and tutorial |
EP2395768B1 (en) | 2010-06-11 | 2015-02-25 | Nintendo Co., Ltd. | Image display program, image display system, and image display method |
US10332176B2 (en) | 2014-08-28 | 2019-06-25 | Ebay Inc. | Methods and systems for virtual fitting rooms or hybrid stores |
JP5700963B2 (en) * | 2010-06-29 | 2015-04-15 | キヤノン株式会社 | Information processing apparatus and control method thereof |
US8839118B2 (en) * | 2010-06-30 | 2014-09-16 | Verizon Patent And Licensing Inc. | Users as actors in content |
US10398366B2 (en) * | 2010-07-01 | 2019-09-03 | Nokia Technologies Oy | Responding to changes in emotional condition of a user |
US8843585B1 (en) | 2010-07-06 | 2014-09-23 | Midnight Studios, Inc. | Methods and apparatus for generating a unique virtual item |
US9832441B2 (en) | 2010-07-13 | 2017-11-28 | Sony Interactive Entertainment Inc. | Supplemental content on a mobile device |
US9159165B2 (en) | 2010-07-13 | 2015-10-13 | Sony Computer Entertainment Inc. | Position-dependent gaming, 3-D controller, and handheld as a remote |
US9814977B2 (en) * | 2010-07-13 | 2017-11-14 | Sony Interactive Entertainment Inc. | Supplemental video content on a mobile device |
US9143699B2 (en) | 2010-07-13 | 2015-09-22 | Sony Computer Entertainment Inc. | Overlay non-video content on a mobile device |
US8730354B2 (en) * | 2010-07-13 | 2014-05-20 | Sony Computer Entertainment Inc | Overlay video content on a mobile device |
KR102000618B1 (en) * | 2010-09-13 | 2019-10-21 | 소니 인터랙티브 엔터테인먼트 아메리카 엘엘씨 | Add-on Management |
JP5739674B2 (en) | 2010-09-27 | 2015-06-24 | 任天堂株式会社 | Information processing program, information processing apparatus, information processing system, and information processing method |
JP5363449B2 (en) * | 2010-10-28 | 2013-12-11 | 株式会社スクウェア・エニックス | Game system, game system program, information recording medium |
US9871907B2 (en) * | 2010-11-02 | 2018-01-16 | Facebook, Inc. | Avatar-based communications launching system |
US20120130717A1 (en) * | 2010-11-19 | 2012-05-24 | Microsoft Corporation | Real-time Animation for an Expressive Avatar |
US9022868B2 (en) | 2011-02-10 | 2015-05-05 | Ganz | Method and system for creating a virtual world where user-controlled characters interact with non-player characters |
JP2012190183A (en) * | 2011-03-09 | 2012-10-04 | Sony Corp | Image processing device, method, and program |
WO2012135231A2 (en) * | 2011-04-01 | 2012-10-04 | Social Communications Company | Creating virtual areas for realtime communications |
US8756061B2 (en) | 2011-04-01 | 2014-06-17 | Sony Computer Entertainment Inc. | Speech syllable/vowel/phone boundary detection using auditory attention cues |
US8825643B2 (en) * | 2011-04-02 | 2014-09-02 | Open Invention Network, Llc | System and method for filtering content based on gestures |
US20120290948A1 (en) * | 2011-05-09 | 2012-11-15 | Idle Games | System and method for providing a virtual space with individualized maps |
US9180378B2 (en) | 2011-05-17 | 2015-11-10 | Activision Publishing, Inc. | Conditional access to areas in a video game |
JP6045777B2 (en) * | 2011-05-23 | 2016-12-14 | 任天堂株式会社 | Direction control system, direction control device, direction control program, and direction control method |
US8814693B2 (en) | 2011-05-27 | 2014-08-26 | Microsoft Corporation | Avatars of friends as non-player-characters |
US9369543B2 (en) | 2011-05-27 | 2016-06-14 | Microsoft Technology Licensing, Llc | Communication between avatars in different games |
US9727227B2 (en) * | 2011-07-28 | 2017-08-08 | Microsoft Technology Licensing, Llc | Multi-touch remoting |
US8666052B2 (en) | 2011-09-15 | 2014-03-04 | Microsoft Corporation | Universal phone number for contacting group members |
US20130117704A1 (en) * | 2011-11-09 | 2013-05-09 | Darius Lahoutifard | Browser-Accessible 3D Immersive Virtual Events |
US9628843B2 (en) * | 2011-11-21 | 2017-04-18 | Microsoft Technology Licensing, Llc | Methods for controlling electronic devices using gestures |
CN106961621A (en) * | 2011-12-29 | 2017-07-18 | 英特尔公司 | Use the communication of incarnation |
KR101951761B1 (en) * | 2012-01-27 | 2019-02-25 | 라인 가부시키가이샤 | System and method for providing avatar in service provided in mobile environment |
US20130265240A1 (en) * | 2012-04-06 | 2013-10-10 | At&T Intellectual Property I, Lp | Method and apparatus for presenting a virtual touchscreen |
GB2511668A (en) * | 2012-04-12 | 2014-09-10 | Supercell Oy | System and method for controlling technical processes |
US9293016B2 (en) | 2012-04-24 | 2016-03-22 | At&T Intellectual Property I, Lp | Method and apparatus for processing sensor data of detected objects |
US10155168B2 (en) | 2012-05-08 | 2018-12-18 | Snap Inc. | System and method for adaptable avatars |
WO2013181026A1 (en) | 2012-06-02 | 2013-12-05 | Social Communications Company | Interfacing with a spatial virtual communications environment |
US20130346875A1 (en) * | 2012-06-20 | 2013-12-26 | Microsoft Corporation | Personalized Interactive Entertainment Profile |
KR101923723B1 (en) * | 2012-09-17 | 2018-11-29 | 한국전자통신연구원 | Metaverse client terminal and method for providing metaverse space for user interaction |
EP2722085A1 (en) * | 2012-10-18 | 2014-04-23 | Bigpoint Inc. | Online game system, method, and computer-readable medium |
US9031293B2 (en) * | 2012-10-19 | 2015-05-12 | Sony Computer Entertainment Inc. | Multi-modal sensor based emotion recognition and emotional interface |
US9020822B2 (en) | 2012-10-19 | 2015-04-28 | Sony Computer Entertainment Inc. | Emotion recognition using auditory attention cues extracted from users voice |
JP5998861B2 (en) * | 2012-11-08 | 2016-09-28 | ソニー株式会社 | Information processing apparatus, information processing method, and program |
US9672811B2 (en) | 2012-11-29 | 2017-06-06 | Sony Interactive Entertainment Inc. | Combining auditory attention cues with phoneme posterior scores for phone/vowel/syllable boundary detection |
US20140219056A1 (en) * | 2013-02-04 | 2014-08-07 | Halliburton Energy Services, Inc. ("HESI") | Fiberoptic systems and methods for acoustic telemetry |
US9325943B2 (en) | 2013-02-20 | 2016-04-26 | Microsoft Technology Licensing, Llc | Providing a tele-immersive experience using a mirror metaphor |
US10220303B1 (en) | 2013-03-15 | 2019-03-05 | Harmonix Music Systems, Inc. | Gesture-based music game |
EP2809031B1 (en) * | 2013-05-31 | 2023-09-27 | Dassault Systèmes | Communication middleware for managing multicast channels |
US10176621B2 (en) | 2013-06-10 | 2019-01-08 | Sony Interactive Entertainment Inc. | Using compute shaders as front end for vertex shaders |
US9044682B1 (en) * | 2013-09-26 | 2015-06-02 | Matthew B. Rappaport | Methods and apparatus for electronic commerce initiated through use of video games and fulfilled by delivery of physical goods |
GB201319333D0 (en) | 2013-11-01 | 2013-12-18 | Microsoft Corp | Controlling display of video data |
US10074080B2 (en) * | 2013-11-06 | 2018-09-11 | Capital One Services, Llc | Wearable transaction devices |
US9397972B2 (en) | 2014-01-24 | 2016-07-19 | Mitii, Inc. | Animated delivery of electronic messages |
US10116604B2 (en) | 2014-01-24 | 2018-10-30 | Mitii, Inc. | Animated delivery of electronic messages |
US10586570B2 (en) | 2014-02-05 | 2020-03-10 | Snap Inc. | Real time video processing for changing proportions of an object in the video |
US9883138B2 (en) | 2014-02-26 | 2018-01-30 | Microsoft Technology Licensing, Llc | Telepresence experience |
CA2891742C (en) * | 2014-05-15 | 2023-11-28 | Tyco Safety Products Canada Ltd. | System and method for processing control commands in a voice interactive system |
US10529009B2 (en) * | 2014-06-25 | 2020-01-07 | Ebay Inc. | Digital avatars in online marketplaces |
US20160015328A1 (en) * | 2014-07-18 | 2016-01-21 | Sony Corporation | Physical properties converter |
US9560050B2 (en) | 2014-09-08 | 2017-01-31 | At&T Intellectual Property I, L.P | System and method to share a resource or a capability of a device |
US10726625B2 (en) * | 2015-01-28 | 2020-07-28 | CCP hf. | Method and system for improving the transmission and processing of data regarding a multi-user virtual environment |
US10725297B2 (en) | 2015-01-28 | 2020-07-28 | CCP hf. | Method and system for implementing a virtual representation of a physical environment using a virtual reality environment |
US20170228929A1 (en) * | 2015-09-01 | 2017-08-10 | Patrick Dengler | System and Method by which combining computer hardware device sensor readings and a camera, provides the best, unencumbered Augmented Reality experience that enables real world objects to be transferred into any digital space, with context, and with contextual relationships. |
CN105214309B (en) * | 2015-10-10 | 2017-07-11 | Tencent Technology (Shenzhen) Co., Ltd. | Information processing method, terminal, and computer-readable storage medium |
US10695663B2 (en) * | 2015-12-22 | 2020-06-30 | Intel Corporation | Ambient awareness in virtual reality |
GB201523166D0 (en) | 2015-12-31 | 2016-02-17 | Jones Maria F | Direct integration system |
US20170302709A1 (en) * | 2015-12-31 | 2017-10-19 | Maria Francisca Jones | Virtual meeting participant response indication method and system |
CA2920914C (en) | 2016-02-17 | 2017-07-18 | Cae Inc | Portable computing device and method for transmitting instructor operating station (ios) filtered information |
CA2920913C (en) * | 2016-02-17 | 2018-04-10 | Cae Inc | Simulation server capable of interacting with a plurality of simulators to perform a plurality of simulations |
CA2920981C (en) | 2016-02-17 | 2018-05-01 | Cae Inc | A simulation server capable of creating events of a lesson plan based on simulation data statistics |
US10339365B2 (en) | 2016-03-31 | 2019-07-02 | Snap Inc. | Automated avatar generation |
US10118696B1 (en) | 2016-03-31 | 2018-11-06 | Steven M. Hoffberg | Steerable rotating projectile |
US10203751B2 (en) | 2016-05-11 | 2019-02-12 | Microsoft Technology Licensing, Llc | Continuous motion controls operable using neurological data |
US9864431B2 (en) | 2016-05-11 | 2018-01-09 | Microsoft Technology Licensing, Llc | Changing an application state using neurological data |
US10474353B2 (en) | 2016-05-31 | 2019-11-12 | Snap Inc. | Application control using a gesture based trigger |
US10360708B2 (en) | 2016-06-30 | 2019-07-23 | Snap Inc. | Avatar based ideogram generation |
US10855632B2 (en) | 2016-07-19 | 2020-12-01 | Snap Inc. | Displaying customized electronic messaging graphics |
US20180052512A1 (en) * | 2016-08-16 | 2018-02-22 | Thomas J. Overly | Behavioral rehearsal system and supporting software |
US10124250B2 (en) * | 2016-09-14 | 2018-11-13 | Tao Xu | Gaming system, kit, and method for enabling interactive play |
US20180071621A1 (en) * | 2016-09-14 | 2018-03-15 | Tao Xu | Gaming System, Kit, and Method for Enabling Interactive Play |
US10338193B2 (en) * | 2016-10-07 | 2019-07-02 | Marko Beko | Apparatus and method for RSS/AoA target 3-D localization in wireless networks |
US10609036B1 (en) | 2016-10-10 | 2020-03-31 | Snap Inc. | Social media post subscribe requests for buffer user accounts |
US10198626B2 (en) | 2016-10-19 | 2019-02-05 | Snap Inc. | Neural networks for facial modeling |
WO2018073832A1 (en) * | 2016-10-20 | 2018-04-26 | Rn Chidakashi Technologies Pvt. Ltd. | Emotionally intelligent companion device |
US10432559B2 (en) | 2016-10-24 | 2019-10-01 | Snap Inc. | Generating and displaying customized avatars in electronic messages |
US10593116B2 (en) | 2016-10-24 | 2020-03-17 | Snap Inc. | Augmented reality object manipulation |
US10242503B2 (en) | 2017-01-09 | 2019-03-26 | Snap Inc. | Surface aware lens |
US11616745B2 (en) | 2017-01-09 | 2023-03-28 | Snap Inc. | Contextual generation and selection of customized media content |
US10242477B1 (en) | 2017-01-16 | 2019-03-26 | Snap Inc. | Coded vision system |
US10951562B2 (en) | 2017-01-18 | 2021-03-16 | Snap Inc. | Customized contextual media content item generation |
US10454857B1 (en) | 2017-01-23 | 2019-10-22 | Snap Inc. | Customized digital avatar accessories |
US11069103B1 (en) | 2017-04-20 | 2021-07-20 | Snap Inc. | Customized user interface for electronic communications |
US11893647B2 (en) | 2017-04-27 | 2024-02-06 | Snap Inc. | Location-based virtual avatars |
US10212541B1 (en) | 2017-04-27 | 2019-02-19 | Snap Inc. | Selective location-based identity communication |
EP4451197A2 (en) | 2017-04-27 | 2024-10-23 | Snap Inc. | Map-based graphical user interface indicating geospatial activity metrics |
US10967255B2 (en) * | 2017-05-26 | 2021-04-06 | Brandon Rosado | Virtual reality system for facilitating participation in events |
US10679428B1 (en) | 2017-05-26 | 2020-06-09 | Snap Inc. | Neural network-based image stream modification |
US11122094B2 (en) | 2017-07-28 | 2021-09-14 | Snap Inc. | Software application manager for messaging applications |
US10682086B2 (en) * | 2017-09-12 | 2020-06-16 | AebeZe Labs | Delivery of a digital therapeutic method and system |
US11157700B2 (en) * | 2017-09-12 | 2021-10-26 | AebeZe Labs | Mood map for assessing a dynamic emotional or mental state (dEMS) of a user |
US10586368B2 (en) | 2017-10-26 | 2020-03-10 | Snap Inc. | Joint audio-video facial animation system |
US10657695B2 (en) | 2017-10-30 | 2020-05-19 | Snap Inc. | Animated chat presence |
US11460974B1 (en) | 2017-11-28 | 2022-10-04 | Snap Inc. | Content discovery refresh |
WO2019108702A1 (en) | 2017-11-29 | 2019-06-06 | Snap Inc. | Graphic rendering for electronic messaging applications |
CN111434078B (en) | 2017-11-29 | 2022-06-10 | Snap Inc. | Method and system for aggregating media content in electronic messaging applications |
US10838587B2 (en) | 2018-01-02 | 2020-11-17 | Microsoft Technology Licensing, Llc | Augmented and virtual reality for traversing group messaging constructs |
US10949648B1 (en) | 2018-01-23 | 2021-03-16 | Snap Inc. | Region-based stabilized face tracking |
US10979752B1 (en) | 2018-02-28 | 2021-04-13 | Snap Inc. | Generating media content items based on location information |
US10726603B1 (en) | 2018-02-28 | 2020-07-28 | Snap Inc. | Animated expressive icon |
US11712637B1 (en) | 2018-03-23 | 2023-08-01 | Steven M. Hoffberg | Steerable disk or ball |
US10843089B2 (en) * | 2018-04-06 | 2020-11-24 | Rovi Guides, Inc. | Methods and systems for facilitating intra-game communications in a video game environment |
US11310176B2 (en) | 2018-04-13 | 2022-04-19 | Snap Inc. | Content suggestion system |
KR20240027845A (en) | 2018-04-18 | 2024-03-04 | Snap Inc. | Augmented expression system |
CN108854069B (en) * | 2018-05-29 | 2020-02-07 | Tencent Technology (Shenzhen) Co., Ltd. | Sound source determination method and device, storage medium and electronic device |
JP6526879B1 (en) * | 2018-06-25 | 2019-06-05 | Virtual Cast, Inc. | Data transmission device and program |
US20200019242A1 (en) * | 2018-07-12 | 2020-01-16 | Microsoft Technology Licensing, Llc | Digital personal expression via wearable device |
US11074675B2 (en) | 2018-07-31 | 2021-07-27 | Snap Inc. | Eye texture inpainting |
US11030813B2 (en) | 2018-08-30 | 2021-06-08 | Snap Inc. | Video clip object tracking |
US11090567B2 (en) * | 2018-09-11 | 2021-08-17 | Activision Publishing, Inc. | Individualized game data augmented displays |
US10902659B2 (en) | 2018-09-19 | 2021-01-26 | International Business Machines Corporation | Intelligent photograph overlay in an internet of things (IoT) computing environment |
US10896534B1 (en) | 2018-09-19 | 2021-01-19 | Snap Inc. | Avatar style transformation using neural networks |
US10895964B1 (en) | 2018-09-25 | 2021-01-19 | Snap Inc. | Interface to display shared user groups |
US10904181B2 (en) | 2018-09-28 | 2021-01-26 | Snap Inc. | Generating customized graphics having reactions to electronic message content |
US10698583B2 (en) | 2018-09-28 | 2020-06-30 | Snap Inc. | Collaborative achievement interface |
US11189070B2 (en) | 2018-09-28 | 2021-11-30 | Snap Inc. | System and method of generating targeted user lists using customizable avatar characteristics |
US11245658B2 (en) | 2018-09-28 | 2022-02-08 | Snap Inc. | System and method of generating private notifications between users in a communication session |
US11103795B1 (en) | 2018-10-31 | 2021-08-31 | Snap Inc. | Game drawer |
US10872451B2 (en) | 2018-10-31 | 2020-12-22 | Snap Inc. | 3D avatar rendering |
US11176737B2 (en) | 2018-11-27 | 2021-11-16 | Snap Inc. | Textured mesh building |
US10902661B1 (en) | 2018-11-28 | 2021-01-26 | Snap Inc. | Dynamic composite user identifier |
US10861170B1 (en) | 2018-11-30 | 2020-12-08 | Snap Inc. | Efficient human pose tracking in videos |
US11199957B1 (en) | 2018-11-30 | 2021-12-14 | Snap Inc. | Generating customized avatars based on location information |
US11110350B2 (en) | 2018-12-03 | 2021-09-07 | Intuitive Research And Technology Corporation | Multiplayer teleportation and summoning |
US11055514B1 (en) | 2018-12-14 | 2021-07-06 | Snap Inc. | Image face manipulation |
US11516173B1 (en) | 2018-12-26 | 2022-11-29 | Snap Inc. | Message composition interface |
US11032670B1 (en) | 2019-01-14 | 2021-06-08 | Snap Inc. | Destination sharing in location sharing system |
US10939246B1 (en) | 2019-01-16 | 2021-03-02 | Snap Inc. | Location-based context information sharing in a messaging system |
WO2020150598A1 (en) * | 2019-01-18 | 2020-07-23 | University Of Washington | Systems, apparatuses, and methods for acoustic motion tracking |
US11294936B1 (en) | 2019-01-30 | 2022-04-05 | Snap Inc. | Adaptive spatial density based clustering |
US10984575B2 (en) | 2019-02-06 | 2021-04-20 | Snap Inc. | Body pose estimation |
US10656797B1 (en) | 2019-02-06 | 2020-05-19 | Snap Inc. | Global event-based avatar |
US10936066B1 (en) | 2019-02-13 | 2021-03-02 | Snap Inc. | Sleep detection in a location sharing system |
US10964082B2 (en) | 2019-02-26 | 2021-03-30 | Snap Inc. | Avatar based on weather |
WO2020180283A1 (en) * | 2019-03-01 | 2020-09-10 | Hewlett-Packard Development Company, L.P. | Control adjusted multimedia presentation devices |
US10852918B1 (en) | 2019-03-08 | 2020-12-01 | Snap Inc. | Contextual information in chat |
US11868414B1 (en) | 2019-03-14 | 2024-01-09 | Snap Inc. | Graph-based prediction for contact suggestion in a location sharing system |
US11852554B1 (en) | 2019-03-21 | 2023-12-26 | Snap Inc. | Barometer calibration in a location sharing system |
US11166123B1 (en) | 2019-03-28 | 2021-11-02 | Snap Inc. | Grouped transmission of location data in a location sharing system |
US10674311B1 (en) | 2019-03-28 | 2020-06-02 | Snap Inc. | Points of interest in a location sharing system |
US12070682B2 (en) | 2019-03-29 | 2024-08-27 | Snap Inc. | 3D avatar plugin for third-party games |
US10992619B2 (en) | 2019-04-30 | 2021-04-27 | Snap Inc. | Messaging system with avatar generation |
US11438455B2 (en) * | 2019-05-17 | 2022-09-06 | Alberto Patron | Method and system for providing captioned telephone services |
US11601548B2 (en) * | 2019-05-17 | 2023-03-07 | Beryl Burcher | Captioned telephone services improvement |
USD916809S1 (en) | 2019-05-28 | 2021-04-20 | Snap Inc. | Display screen or portion thereof with a transitional graphical user interface |
USD916871S1 (en) | 2019-05-28 | 2021-04-20 | Snap Inc. | Display screen or portion thereof with a transitional graphical user interface |
USD916811S1 (en) | 2019-05-28 | 2021-04-20 | Snap Inc. | Display screen or portion thereof with a transitional graphical user interface |
USD916872S1 (en) | 2019-05-28 | 2021-04-20 | Snap Inc. | Display screen or portion thereof with a graphical user interface |
USD916810S1 (en) | 2019-05-28 | 2021-04-20 | Snap Inc. | Display screen or portion thereof with a graphical user interface |
US10893385B1 (en) | 2019-06-07 | 2021-01-12 | Snap Inc. | Detection of a physical collision between two client devices in a location sharing system |
US11676199B2 (en) | 2019-06-28 | 2023-06-13 | Snap Inc. | Generating customizable avatar outfits |
US11188190B2 (en) | 2019-06-28 | 2021-11-30 | Snap Inc. | Generating animation overlays in a communication session |
US11189098B2 (en) | 2019-06-28 | 2021-11-30 | Snap Inc. | 3D object camera customization system |
US11307747B2 (en) | 2019-07-11 | 2022-04-19 | Snap Inc. | Edge gesture interface with smart interactions |
US11455081B2 (en) | 2019-08-05 | 2022-09-27 | Snap Inc. | Message thread prioritization interface |
US10911387B1 (en) | 2019-08-12 | 2021-02-02 | Snap Inc. | Message reminder interface |
US11320969B2 (en) | 2019-09-16 | 2022-05-03 | Snap Inc. | Messaging system with battery level sharing |
US11425062B2 (en) | 2019-09-27 | 2022-08-23 | Snap Inc. | Recommended content viewed by friends |
US11080917B2 (en) | 2019-09-30 | 2021-08-03 | Snap Inc. | Dynamic parameterized user avatar stories |
US11218838B2 (en) | 2019-10-31 | 2022-01-04 | Snap Inc. | Focused map-based context information surfacing |
US11417042B2 (en) * | 2019-11-21 | 2022-08-16 | Sony Interactive Entertainment Inc. | Animating body language for avatars |
US11544921B1 (en) | 2019-11-22 | 2023-01-03 | Snap Inc. | Augmented reality items based on scan |
US11063891B2 (en) | 2019-12-03 | 2021-07-13 | Snap Inc. | Personalized avatar notification |
US11128586B2 (en) | 2019-12-09 | 2021-09-21 | Snap Inc. | Context sensitive avatar captions |
US11036989B1 (en) | 2019-12-11 | 2021-06-15 | Snap Inc. | Skeletal tracking using previous frames |
US11227442B1 (en) | 2019-12-19 | 2022-01-18 | Snap Inc. | 3D captions with semantic graphical elements |
US11263817B1 (en) | 2019-12-19 | 2022-03-01 | Snap Inc. | 3D captions with face tracking |
US11128715B1 (en) | 2019-12-30 | 2021-09-21 | Snap Inc. | Physical friend proximity in chat |
US11140515B1 (en) | 2019-12-30 | 2021-10-05 | Snap Inc. | Interfaces for relative device positioning |
US11169658B2 (en) | 2019-12-31 | 2021-11-09 | Snap Inc. | Combined map icon with action indicator |
US11036781B1 (en) | 2020-01-30 | 2021-06-15 | Snap Inc. | Video generation system to render frames on demand using a fleet of servers |
US11356720B2 (en) | 2020-01-30 | 2022-06-07 | Snap Inc. | Video generation system to render frames on demand |
US11284144B2 (en) | 2020-01-30 | 2022-03-22 | Snap Inc. | Video generation system to render frames on demand using a fleet of GPUs |
US11991419B2 (en) | 2020-01-30 | 2024-05-21 | Snap Inc. | Selecting avatars to be included in the video being generated on demand |
EP4096798A1 (en) | 2020-01-30 | 2022-12-07 | Snap Inc. | System for generating media content items on demand |
US11619501B2 (en) | 2020-03-11 | 2023-04-04 | Snap Inc. | Avatar based on trip |
US11217020B2 (en) | 2020-03-16 | 2022-01-04 | Snap Inc. | 3D cutout image modification |
US11625873B2 (en) | 2020-03-30 | 2023-04-11 | Snap Inc. | Personalized media overlay recommendation |
US11818286B2 (en) | 2020-03-30 | 2023-11-14 | Snap Inc. | Avatar recommendation and reply |
EP4128194A1 (en) | 2020-03-31 | 2023-02-08 | Snap Inc. | Augmented reality beauty product tutorials |
US11956190B2 (en) | 2020-05-08 | 2024-04-09 | Snap Inc. | Messaging system with a carousel of related entities |
US11543939B2 (en) | 2020-06-08 | 2023-01-03 | Snap Inc. | Encoded image based messaging system |
US11922010B2 (en) | 2020-06-08 | 2024-03-05 | Snap Inc. | Providing contextual information with keyboard interface for messaging system |
US11356392B2 (en) | 2020-06-10 | 2022-06-07 | Snap Inc. | Messaging system including an external-resource dock and drawer |
US11423652B2 (en) | 2020-06-10 | 2022-08-23 | Snap Inc. | Adding beauty products to augmented reality tutorials |
CN115735229A (en) | 2020-06-25 | 2023-03-03 | Snap Inc. | Updating avatar garments in messaging systems |
US11580682B1 (en) | 2020-06-30 | 2023-02-14 | Snap Inc. | Messaging system with augmented reality makeup |
US11863513B2 (en) | 2020-08-31 | 2024-01-02 | Snap Inc. | Media content playback and comments management |
US11360733B2 (en) | 2020-09-10 | 2022-06-14 | Snap Inc. | Colocated shared augmented reality without shared backend |
US11470025B2 (en) | 2020-09-21 | 2022-10-11 | Snap Inc. | Chats with micro sound clips |
US11452939B2 (en) | 2020-09-21 | 2022-09-27 | Snap Inc. | Graphical marker generation system for synchronizing users |
US11910269B2 (en) | 2020-09-25 | 2024-02-20 | Snap Inc. | Augmented reality content items including user avatar to share location |
US11660022B2 (en) | 2020-10-27 | 2023-05-30 | Snap Inc. | Adaptive skeletal joint smoothing |
US11615592B2 (en) | 2020-10-27 | 2023-03-28 | Snap Inc. | Side-by-side character animation from realtime 3D body motion capture |
US11748931B2 (en) | 2020-11-18 | 2023-09-05 | Snap Inc. | Body animation sharing and remixing |
US11450051B2 (en) | 2020-11-18 | 2022-09-20 | Snap Inc. | Personalized avatar real-time motion capture |
US11734894B2 (en) | 2020-11-18 | 2023-08-22 | Snap Inc. | Real-time motion transfer for prosthetic limbs |
EP4272173A1 (en) | 2020-12-30 | 2023-11-08 | Snap Inc. | Flow-guided motion retargeting |
US12008811B2 (en) | 2020-12-30 | 2024-06-11 | Snap Inc. | Machine learning-based selection of a representative video frame within a messaging application |
US12106486B2 (en) | 2021-02-24 | 2024-10-01 | Snap Inc. | Whole body visual effects |
US11790531B2 (en) | 2021-02-24 | 2023-10-17 | Snap Inc. | Whole body segmentation |
US11798201B2 (en) | 2021-03-16 | 2023-10-24 | Snap Inc. | Mirroring device with whole-body outfits |
US11978283B2 (en) | 2021-03-16 | 2024-05-07 | Snap Inc. | Mirroring device with a hands-free mode |
US11809633B2 (en) | 2021-03-16 | 2023-11-07 | Snap Inc. | Mirroring device with pointing based navigation |
US11734959B2 (en) | 2021-03-16 | 2023-08-22 | Snap Inc. | Activating hands-free mode on mirroring device |
US11908243B2 (en) | 2021-03-16 | 2024-02-20 | Snap Inc. | Menu hierarchy navigation on electronic mirroring devices |
US11544885B2 (en) | 2021-03-19 | 2023-01-03 | Snap Inc. | Augmented reality experience based on physical items |
US11562548B2 (en) | 2021-03-22 | 2023-01-24 | Snap Inc. | True size eyewear in real time |
US12067804B2 (en) | 2021-03-22 | 2024-08-20 | Snap Inc. | True size eyewear experience in real time |
US12034680B2 (en) | 2021-03-31 | 2024-07-09 | Snap Inc. | User presence indication data management |
US12100156B2 (en) | 2021-04-12 | 2024-09-24 | Snap Inc. | Garment segmentation |
US11636654B2 (en) | 2021-05-19 | 2023-04-25 | Snap Inc. | AR-based connected portal shopping |
US11941227B2 (en) | 2021-06-30 | 2024-03-26 | Snap Inc. | Hybrid search system for customizable media |
US11854069B2 (en) | 2021-07-16 | 2023-12-26 | Snap Inc. | Personalized try-on ads |
US11908083B2 (en) | 2021-08-31 | 2024-02-20 | Snap Inc. | Deforming custom mesh based on body mesh |
US11983462B2 (en) | 2021-08-31 | 2024-05-14 | Snap Inc. | Conversation guided augmented reality experience |
US11670059B2 (en) | 2021-09-01 | 2023-06-06 | Snap Inc. | Controlling interactive fashion based on body gestures |
US20230066179A1 (en) * | 2021-09-02 | 2023-03-02 | Snap Inc. | Interactive fashion with music ar |
US11673054B2 (en) | 2021-09-07 | 2023-06-13 | Snap Inc. | Controlling AR games on fashion items |
US11663792B2 (en) | 2021-09-08 | 2023-05-30 | Snap Inc. | Body fitted accessory with physics simulation |
US11900506B2 (en) | 2021-09-09 | 2024-02-13 | Snap Inc. | Controlling interactive fashion based on facial expressions |
US11734866B2 (en) | 2021-09-13 | 2023-08-22 | Snap Inc. | Controlling interactive fashion based on voice |
US11798238B2 (en) | 2021-09-14 | 2023-10-24 | Snap Inc. | Blending body mesh into external mesh |
US11836866B2 (en) | 2021-09-20 | 2023-12-05 | Snap Inc. | Deforming real-world object using an external mesh |
US11636662B2 (en) | 2021-09-30 | 2023-04-25 | Snap Inc. | Body normal network light and rendering control |
US11983826B2 (en) | 2021-09-30 | 2024-05-14 | Snap Inc. | 3D upper garment tracking |
US11651572B2 (en) | 2021-10-11 | 2023-05-16 | Snap Inc. | Light and rendering of garments |
US11836862B2 (en) | 2021-10-11 | 2023-12-05 | Snap Inc. | External mesh with vertex attributes |
US11790614B2 (en) | 2021-10-11 | 2023-10-17 | Snap Inc. | Inferring intent from pose and speech input |
US11763481B2 (en) | 2021-10-20 | 2023-09-19 | Snap Inc. | Mirror-based augmented reality experience |
US12086916B2 (en) | 2021-10-22 | 2024-09-10 | Snap Inc. | Voice note with face tracking |
US12020358B2 (en) | 2021-10-29 | 2024-06-25 | Snap Inc. | Animated custom sticker creation |
US11995757B2 (en) | 2021-10-29 | 2024-05-28 | Snap Inc. | Customized animation from video |
US11996113B2 (en) | 2021-10-29 | 2024-05-28 | Snap Inc. | Voice notes with changing effects |
US11925866B1 (en) * | 2021-11-18 | 2024-03-12 | Amazon Technologies, Inc. | Techniques for modifying network applications |
US11960784B2 (en) | 2021-12-07 | 2024-04-16 | Snap Inc. | Shared augmented reality unboxing experience |
US11748958B2 (en) | 2021-12-07 | 2023-09-05 | Snap Inc. | Augmented reality unboxing experience |
US11880947B2 (en) | 2021-12-21 | 2024-01-23 | Snap Inc. | Real-time upper-body garment exchange |
US12096153B2 (en) | 2021-12-21 | 2024-09-17 | Snap Inc. | Avatar call platform |
US11928783B2 (en) | 2021-12-30 | 2024-03-12 | Snap Inc. | AR position and orientation along a plane |
US11887260B2 (en) | 2021-12-30 | 2024-01-30 | Snap Inc. | AR position indicator |
US11823346B2 (en) | 2022-01-17 | 2023-11-21 | Snap Inc. | AR body part tracking system |
US11954762B2 (en) | 2022-01-19 | 2024-04-09 | Snap Inc. | Object replacement system |
US12039627B2 (en) * | 2022-01-24 | 2024-07-16 | Zoom Video Communications, Inc. | Expo floor layout |
US11909778B2 (en) | 2022-01-24 | 2024-02-20 | Zoom Video Communications, Inc. | Creating video conference expos |
US20230239434A1 (en) * | 2022-01-24 | 2023-07-27 | Zoom Video Communications, Inc. | Virtual expo booth previews |
US11875471B1 (en) * | 2022-03-16 | 2024-01-16 | Build a Rocket Boy Games Ltd. | Three-dimensional environment linear content viewing and transition |
US12002146B2 (en) | 2022-03-28 | 2024-06-04 | Snap Inc. | 3D modeling based on neural light field |
US12086375B2 (en) * | 2022-05-10 | 2024-09-10 | Tmrw Foundation Ip S. À R.L. | Layer-partitioned virtual world system |
US12062144B2 (en) | 2022-05-27 | 2024-08-13 | Snap Inc. | Automated augmented reality experience creation based on sample source and target images |
US11954404B2 (en) * | 2022-06-10 | 2024-04-09 | Qualcomm Incorporated | Verbal communication in a virtual world |
US12020384B2 (en) | 2022-06-21 | 2024-06-25 | Snap Inc. | Integrating augmented reality experiences with other components |
US12020386B2 (en) | 2022-06-23 | 2024-06-25 | Snap Inc. | Applying pregenerated virtual experiences in new location |
US11870745B1 (en) | 2022-06-28 | 2024-01-09 | Snap Inc. | Media gallery sharing and management |
US12062146B2 (en) | 2022-07-28 | 2024-08-13 | Snap Inc. | Virtual wardrobe AR experience |
US12051163B2 (en) | 2022-08-25 | 2024-07-30 | Snap Inc. | External computer vision for an eyewear device |
US12100081B2 (en) * | 2022-09-26 | 2024-09-24 | Sony Interactive Entertainment Inc. | Customized digital humans and pets for metaverse |
US20240115957A1 (en) * | 2022-10-06 | 2024-04-11 | Sony Interactive Entertainment Inc. | Systems and methods for applying a modification microservice to a game instance |
US11893166B1 (en) | 2022-11-08 | 2024-02-06 | Snap Inc. | User avatar movement control using an augmented reality eyewear device |
US12047337B1 (en) | 2023-07-03 | 2024-07-23 | Snap Inc. | Generating media content items during user interaction |
Citations (81)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5720619A (en) * | 1995-04-24 | 1998-02-24 | Fisslinger; Johannes | Interactive computer assisted multi-media biofeedback system |
US5880731A (en) * | 1995-12-14 | 1999-03-09 | Microsoft Corporation | Use of avatars with automatic gesturing and bounded interaction in on-line chat session |
US5926179A (en) * | 1996-09-30 | 1999-07-20 | Sony Corporation | Three-dimensional virtual reality space display processing apparatus, a three-dimensional virtual reality space display processing method, and an information providing medium |
US5977968A (en) * | 1997-03-14 | 1999-11-02 | Mindmeld Multimedia Inc. | Graphical user interface to communicate attitude or emotion to a computer program |
US6121953A (en) * | 1997-02-06 | 2000-09-19 | Modern Cartoons, Ltd. | Virtual reality system for sensing facial movements |
US6212502B1 (en) * | 1998-03-23 | 2001-04-03 | Microsoft Corporation | Modeling and projecting emotion and personality from a computer user interface |
US6215498B1 (en) * | 1998-09-10 | 2001-04-10 | Lionhearth Technologies, Inc. | Virtual command post |
US20010025261A1 (en) * | 1996-06-14 | 2001-09-27 | Shari Olefson | Method and apparatus for providing a virtual tour of a dormatory or other institution to a prospective resident |
US20010046228A1 (en) * | 1996-03-20 | 2001-11-29 | Jyri Tahtinen | Method and arrangement for interconnecting a virtual-reality world and the real world for the purpose of establishing a real-time communications connection such as a telephone call connection |
US20020005865A1 (en) * | 1999-12-17 | 2002-01-17 | Barbara Hayes-Roth | System, method, and device for authoring content for interactive agents |
US6366285B1 (en) * | 1997-11-21 | 2002-04-02 | International Business Machines Corporation | Selection by proximity with inner and outer sensitivity ranges |
US20020064149A1 (en) * | 1996-11-18 | 2002-05-30 | Elliott Isaac K. | System and method for providing requested quality of service in a hybrid network |
US20020090985A1 (en) * | 2000-09-07 | 2002-07-11 | Ilan Tochner | Coexistent interaction between a virtual character and the real world |
US20020113820A1 (en) * | 2000-10-10 | 2002-08-22 | Robinson Jack D. | System and method to configure and provide a network-enabled three-dimensional computing environment |
US20020128952A1 (en) * | 2000-07-06 | 2002-09-12 | Raymond Melkomian | Virtual interactive global exchange |
US20020154174A1 (en) * | 2001-04-23 | 2002-10-24 | Redlich Arthur Norman | Method and system for providing a service in a photorealistic, 3-D environment |
US6476830B1 (en) * | 1996-08-02 | 2002-11-05 | Fujitsu Software Corporation | Virtual objects for building a community in a virtual world |
US20030057884A1 (en) * | 1997-12-17 | 2003-03-27 | Dowling Kevin J. | Systems and methods for digital entertainment |
US6545682B1 (en) * | 2000-05-24 | 2003-04-08 | There, Inc. | Method and apparatus for creating and customizing avatars using genetic paradigm |
US20030117485A1 (en) * | 2001-12-20 | 2003-06-26 | Yoshiyuki Mochizuki | Virtual television phone apparatus |
US20030156134A1 (en) * | 2000-12-08 | 2003-08-21 | Kyunam Kim | Graphic chatting with organizational avatars |
US20030235341A1 (en) * | 2002-04-11 | 2003-12-25 | Gokturk Salih Burak | Subject segmentation and tracking using 3D sensing technology for video compression in multimedia applications |
US20040030787A1 (en) * | 2000-10-27 | 2004-02-12 | Magnus Jandel | Communication infrastructure arrangement for multiuser |
US20040060067A1 (en) * | 2002-09-24 | 2004-03-25 | Lg Electronics Inc. | System and method for multiplexing media information over a network using reduced communications resources and prior knowledge/experience of a called or calling party |
US20040105004A1 (en) * | 2002-11-30 | 2004-06-03 | Yong Rui | Automated camera management system and method for capturing presentations using videography rules |
US20040153557A1 (en) * | 2002-10-02 | 2004-08-05 | Joe Shochet | Multi-user interactive communication network environment |
US6784901B1 (en) * | 2000-05-09 | 2004-08-31 | There | Method, system and computer program product for the delivery of a chat message in a 3D multi-user environment |
US20040179038A1 (en) * | 2003-03-03 | 2004-09-16 | Blattner Patrick D. | Reactive avatars |
US20040189720A1 (en) * | 2003-03-25 | 2004-09-30 | Wilson Andrew D. | Architecture for controlling a computer using hand gestures |
US20040207597A1 (en) * | 2002-07-27 | 2004-10-21 | Sony Computer Entertainment Inc. | Method and apparatus for light input device |
US20040255027A1 (en) * | 2003-06-04 | 2004-12-16 | Sony Computer Entertainment Inc. | Virtual/real world dynamic intercommunication methods and systems |
US20040255015A1 (en) * | 2003-06-16 | 2004-12-16 | International Business Machines Corporation | Communications management using weights and thresholds |
US6834389B1 (en) * | 1997-12-01 | 2004-12-21 | Recursion Software, Inc. | Method of forwarding messages to mobile objects in a computer network |
US6836515B1 (en) * | 1998-07-24 | 2004-12-28 | Hughes Electronics Corporation | Multi-modulation radio communications |
US20050054381A1 (en) * | 2003-09-05 | 2005-03-10 | Samsung Electronics Co., Ltd. | Proactive user interface |
US20050083333A1 (en) * | 2003-05-01 | 2005-04-21 | Sony Corporation | System and method for capturing facial and body motion |
US6904408B1 (en) * | 2000-10-19 | 2005-06-07 | Mccarthy John | Bionet method, system and personalized web content manager responsive to browser viewers' psychological preferences, behavioral responses and physiological stress indicators |
US20050261032A1 (en) * | 2004-04-23 | 2005-11-24 | Jeong-Wook Seo | Device and method for displaying a status of a portable terminal by using a character image |
US6976846B2 (en) * | 2002-05-08 | 2005-12-20 | Accenture Global Services Gmbh | Telecommunications virtual simulator |
US20060003814A1 (en) * | 2004-06-30 | 2006-01-05 | Taryn Moody | Intelligent ringtone service |
US7036082B1 (en) * | 2000-09-21 | 2006-04-25 | Nortel Networks Limited | Controlling communications through a virtual reality environment |
US20060092270A1 (en) * | 2004-11-04 | 2006-05-04 | Sony Corporation | Kinesiological model-based gestural augmentation of voice communication |
US7073129B1 (en) * | 1998-12-18 | 2006-07-04 | Tangis Corporation | Automated selection of appropriate information based on a computer user's context |
US20060153425A1 (en) * | 2005-01-07 | 2006-07-13 | Lg Electronics Inc. | Method of processing three-dimensional image in mobile device |
US7086005B1 (en) * | 1999-11-29 | 2006-08-01 | Sony Corporation | Shared virtual space conversation support system using virtual telephones |
US20060178968A1 (en) * | 2005-02-04 | 2006-08-10 | Jung Edward K | Virtual world interconnection technique |
US20060224546A1 (en) * | 2003-03-25 | 2006-10-05 | Daniel Ballin | Aparatus and method for generating behaviour in an object |
US20060233389A1 (en) * | 2003-08-27 | 2006-10-19 | Sony Computer Entertainment Inc. | Methods and apparatus for targeted sound detection and characterization |
US20060264258A1 (en) * | 2002-07-27 | 2006-11-23 | Zalewski Gary M | Multi-input game control mixer |
US20060264259A1 (en) * | 2002-07-27 | 2006-11-23 | Zalewski Gary M | System for tracking user manipulations within an environment |
US20060277571A1 (en) * | 2002-07-27 | 2006-12-07 | Sony Computer Entertainment Inc. | Computer image and audio processing of intensity and input devices for interfacing with a computer program |
US20070002057A1 (en) * | 2004-10-12 | 2007-01-04 | Matt Danzig | Computer-implemented system and method for home page customization and e-commerce support |
US20070013515A1 (en) * | 2005-07-15 | 2007-01-18 | Microsoft Corporation | Parental controls for a media console |
US20070025562A1 (en) * | 2003-08-27 | 2007-02-01 | Sony Computer Entertainment Inc. | Methods and apparatus for targeted sound detection |
US20070037561A1 (en) * | 2005-08-10 | 2007-02-15 | Bowen Blake A | Method for intelligently dialing contact numbers for a person using user-defined smart rules |
US20070074206A1 (en) * | 2005-09-27 | 2007-03-29 | Sony Computer Entertainment Inc. | Operating cell processors over a network |
US20070074212A1 (en) * | 2005-09-27 | 2007-03-29 | Sony Computer Entertainment Inc. | Cell processor methods and apparatus |
US20070074207A1 (en) * | 2005-09-27 | 2007-03-29 | Sony Computer Entertainment Inc. | SPU task manager for cell processor |
US20070074114A1 (en) * | 2005-09-29 | 2007-03-29 | Conopco, Inc., D/B/A Unilever | Automated dialogue interface |
US20070074221A1 (en) * | 2005-09-27 | 2007-03-29 | Sony Computer Entertainment Inc. | Cell processor task and data management |
US20070083755A1 (en) * | 2005-09-27 | 2007-04-12 | Sony Computer Entertainment Inc. | Operating cell processors over a network |
US20070082309A1 (en) * | 2005-10-07 | 2007-04-12 | Carrier Corporation | Inshot burner flame retainer |
US20070113181A1 (en) * | 2003-03-03 | 2007-05-17 | Blattner Patrick D | Using avatars to communicate real-time information |
US20070149282A1 (en) * | 2005-12-27 | 2007-06-28 | Industrial Technology Research Institute | Interactive gaming method and apparatus with emotion perception ability |
US20070192727A1 (en) * | 2006-01-26 | 2007-08-16 | Finley William D | Three dimensional graphical user interface representative of a physical work space |
US20070198178A1 (en) * | 2004-03-31 | 2007-08-23 | Trimby Martin W | Pathfinding system |
US20070198628A1 (en) * | 2005-09-27 | 2007-08-23 | Sony Computer Entertainment Inc. | Cell processor methods and apparatus |
US7272662B2 (en) * | 2000-11-30 | 2007-09-18 | Nms Communications Corporation | Systems and methods for routing messages to communications devices over a communications network |
US20070260340A1 (en) * | 2006-05-04 | 2007-11-08 | Sony Computer Entertainment Inc. | Ultra small microphone array |
US20080059578A1 (en) * | 2006-09-06 | 2008-03-06 | Jacob C Albertson | Informing a user of gestures made by others out of the user's line of sight |
US20080059570A1 (en) * | 2006-09-05 | 2008-03-06 | Aol Llc | Enabling an im user to navigate a virtual world |
US20080096665A1 (en) * | 2006-10-18 | 2008-04-24 | Ariel Cohen | System and a method for a reality role playing game genre |
US20080120558A1 (en) * | 2006-11-16 | 2008-05-22 | Paco Xander Nathan | Systems and methods for managing a persistent virtual avatar with migrational ability |
US7412077B2 (en) * | 2006-12-29 | 2008-08-12 | Motorola, Inc. | Apparatus and methods for head pose estimation and head gesture detection |
US20080215973A1 (en) * | 2007-03-01 | 2008-09-04 | Sony Computer Entertainment America Inc | Avatar customization |
US20080246759A1 (en) * | 2005-02-23 | 2008-10-09 | Craig Summers | Automatic Scene Modeling for the 3D Camera and 3D Video |
US7468729B1 (en) * | 2004-12-21 | 2008-12-23 | Aol Llc, A Delaware Limited Liability Company | Using an avatar to generate user profile information |
US20090013052A1 (en) * | 1998-12-18 | 2009-01-08 | Microsoft Corporation | Automated selection of appropriate information based on a computer user's context |
US7647560B2 (en) * | 2004-05-11 | 2010-01-12 | Microsoft Corporation | User interface for multi-sensory emoticons in a communication system |
US7676237B2 (en) * | 2006-04-11 | 2010-03-09 | At&T Intellectual Property I, L.P. | Routing communication based on urgency priority level |
US7908554B1 (en) * | 2003-03-03 | 2011-03-15 | Aol Inc. | Modifying avatar behavior based on user action or mood |
Family Cites Families (59)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH04795A (en) * | 1990-04-17 | 1992-01-06 | Nkk Corp | Magnetic shielding plate |
US5491743A (en) * | 1994-05-24 | 1996-02-13 | International Business Machines Corporation | Virtual conference system and terminal apparatus therefor |
CA2180891C (en) * | 1995-07-12 | 2010-01-12 | Junichi Rekimoto | Notification of updates in a three-dimensional virtual reality space sharing system |
US6167432A (en) * | 1996-02-29 | 2000-12-26 | Webex Communications, Inc., | Method for creating peer-to-peer connections over an interconnected network to facilitate conferencing among users |
US5850396A (en) * | 1996-06-24 | 1998-12-15 | Gateway 2000, Inc. | Multicast message distribution in a polynomial expansion manner |
JP3679526B2 (en) * | 1996-10-31 | 2005-08-03 | Canon Inc. | Image sharing apparatus, screen control method, and computer-readable memory |
JPH11177628A (en) * | 1997-12-15 | 1999-07-02 | Mitsubishi Electric Corp | Three-dimension virtual space common share system for broad area environment |
US6466213B2 (en) * | 1998-02-13 | 2002-10-15 | Xerox Corporation | Method and apparatus for creating personal autonomous avatars |
US6570555B1 (en) * | 1998-12-30 | 2003-05-27 | Fuji Xerox Co., Ltd. | Method and apparatus for embodied conversational characters with multimodal input/output in an interface device |
US7036128B1 (en) * | 1999-01-05 | 2006-04-25 | Sri International Offices | Using a community of distributed electronic agents to support a highly mobile, ambient computing environment |
US6249810B1 (en) * | 1999-02-19 | 2001-06-19 | Chaincast, Inc. | Method and system for implementing an internet radio device for receiving and/or transmitting media information |
US6370565B1 (en) * | 1999-03-01 | 2002-04-09 | Sony Corporation Of Japan | Method of sharing computation load within a distributed virtual environment system |
AU5012300A (en) | 1999-05-14 | 2000-12-05 | Graphic Gems | Method and apparatus for registering lots in a shared virtual world |
JP4283397B2 (en) * | 1999-08-26 | 2009-06-24 | Nintendo Co., Ltd. | Communication game system, game machine, and information storage medium |
US6772195B1 (en) * | 1999-10-29 | 2004-08-03 | Electronic Arts, Inc. | Chat clusters for a virtual world application |
US6672961B1 (en) * | 2000-03-16 | 2004-01-06 | Sony Computer Entertainment America Inc. | Computer system and method of displaying images |
US7523114B2 (en) * | 2000-04-24 | 2009-04-21 | Ebay Inc. | Method and system for categorizing items in both actual and virtual categories |
WO2002042921A1 (en) | 2000-11-27 | 2002-05-30 | Butterfly.Net, Inc. | System and method for synthesizing environments to facilitate distributed, context-sensitive, multi-user interactive applications |
US7377852B2 (en) * | 2000-12-20 | 2008-05-27 | Aruze Co., Ltd. | Server providing competitive game service, program storage medium for use in the server, and method of providing competitive game service using the server |
US7035653B2 (en) * | 2001-04-13 | 2006-04-25 | Leap Wireless International, Inc. | Method and system to facilitate interaction between and content delivery to users of a wireless communications network |
US8108509B2 (en) * | 2001-04-30 | 2012-01-31 | Sony Computer Entertainment America Llc | Altering network transmitted content data based upon user specified characteristics |
US7671861B1 (en) * | 2001-11-02 | 2010-03-02 | At&T Intellectual Property Ii, L.P. | Apparatus and method of customizing animated entities for use in a multi-media communication application |
US20030135569A1 (en) * | 2002-01-15 | 2003-07-17 | Khakoo Shabbir A. | Method and apparatus for delivering messages based on user presence, preference or location |
US20040107242A1 (en) * | 2002-12-02 | 2004-06-03 | Microsoft Corporation | Peer-to-peer content broadcast transfer mechanism |
GB2398691B (en) | 2003-02-21 | 2006-05-31 | Sony Comp Entertainment Europe | Control of data processing |
GB2398690B (en) | 2003-02-21 | 2006-05-10 | Sony Comp Entertainment Europe | Control of data processing |
US7358972B2 (en) * | 2003-05-01 | 2008-04-15 | Sony Corporation | System and method for capturing facial and body motion |
JP4073885B2 (en) * | 2003-06-17 | 2008-04-09 | Nintendo Co., Ltd. | Game system, game device, and game program |
US7171190B2 (en) * | 2003-06-25 | 2007-01-30 | Oracle International Corporation | Intelligent messaging |
US7713116B2 (en) * | 2003-06-30 | 2010-05-11 | Microsoft Corporation | Inventory management of virtual items in computer games |
US20050137015A1 (en) * | 2003-08-19 | 2005-06-23 | Lawrence Rogers | Systems and methods for a role-playing game having a customizable avatar and differentiated instant messaging environment |
JP3793213B2 (en) * | 2003-09-01 | 2006-07-05 | Sony Computer Entertainment Inc. | Network game terminal, game server, method executed on network game terminal, and recording medium |
US20050135317A1 (en) * | 2003-12-22 | 2005-06-23 | Christopher Ware | Method and system for multicast scheduling in a WLAN |
CN1961333A (en) * | 2004-02-12 | 2007-05-09 | 贝斯简·阿利万迪 | System and method for producing merchandise from a virtual environment |
KR100456601B1 (en) * | 2004-03-18 | 2004-11-10 | NHN Corporation | A registration system for game item sale and a method thereof |
EP1738251A2 (en) * | 2004-04-16 | 2007-01-03 | Cascade Basic Research Corp. | Modelling relationships within an on-line connectivity universe |
US20050251531A1 (en) * | 2004-05-10 | 2005-11-10 | Microsoft Corporation | Data management for a networked multimedia console |
US20050256985A1 (en) * | 2004-05-13 | 2005-11-17 | Wildtangent, Inc. | Sending progress information of other users for transmitted shared content |
KR100557130B1 (en) * | 2004-05-14 | 2006-03-03 | Samsung Electronics Co., Ltd. | Terminal equipment capable of editing movement of avatar and method therefor |
CN1319008C (en) * | 2004-06-18 | 2007-05-30 | Huawei Technologies Co., Ltd. | Game virtual-article data processing method, game platform system and game system |
US7491123B2 (en) * | 2004-07-29 | 2009-02-17 | Nintendo Co., Ltd. | Video game voice chat with amplitude-based virtual ranging |
US20090005167A1 (en) * | 2004-11-29 | 2009-01-01 | Juha Arrasvuori | Mobile Gaming with External Devices in Single and Multiplayer Games |
JP2005149529A (en) * | 2005-01-06 | 2005-06-09 | Fujitsu Ltd | Voice interactive system |
US7679640B2 (en) * | 2005-01-27 | 2010-03-16 | Polycom, Inc. | Method and system for conducting a sub-videoconference from a main videoconference |
US8358762B1 (en) * | 2005-03-21 | 2013-01-22 | Aol Inc. | Conference calls and meetings via electronic messaging interface |
US20060221857A1 (en) * | 2005-03-31 | 2006-10-05 | Bushnell William J | Method and apparatus for providing enhanced features to multicast content services and multiplayer gaming services |
US20070094325A1 (en) * | 2005-10-21 | 2007-04-26 | Nucleoid Corp. | Hybrid peer-to-peer data communication and management |
WO2007076721A2 (en) * | 2005-12-31 | 2007-07-12 | Tencent Technology (Shenzhen) Company Limited | A display and presentation method and a display system and presentation apparatus for 3d virtual image |
US7789758B2 (en) * | 2006-03-10 | 2010-09-07 | Electronic Arts, Inc. | Video game with simulated evolution |
US8504605B2 (en) * | 2006-05-30 | 2013-08-06 | Microsoft Corporation | Proximity filtering of multiparty VoIP communications |
US8075404B2 (en) * | 2006-07-03 | 2011-12-13 | Microsoft Corporation | Multi-player gaming |
US8888598B2 (en) * | 2006-10-17 | 2014-11-18 | Playspan, Inc. | Transaction systems and methods for virtual items of massively multiplayer online games and virtual worlds |
KR101201695B1 (en) * | 2006-11-08 | 2012-11-15 | Dolby Laboratories Licensing Corporation | Apparatuses and methods for use in creating an audio scene |
US8026918B1 (en) * | 2006-11-22 | 2011-09-27 | Aol Inc. | Controlling communications with proximate avatars in virtual world environment |
GB2447095B (en) | 2007-03-01 | 2010-07-28 | Sony Comp Entertainment Europe | Entertainment device and method |
GB2447020A (en) | 2007-03-01 | 2008-09-03 | Sony Comp Entertainment Europe | Transmitting game data from an entertainment device and rendering that data in a virtual environment of a second entertainment device |
GB2447094B (en) | 2007-03-01 | 2010-03-10 | Sony Comp Entertainment Europe | Entertainment device and method |
GB2447096B (en) | 2007-03-01 | 2011-10-12 | Sony Comp Entertainment Europe | Entertainment device and method |
US7769806B2 (en) * | 2007-10-24 | 2010-08-03 | Social Communications Company | Automated real-time data stream switching in a shared virtual area communication environment |
- 2007
- 2007-03-01 GB GBGB0703974.6A patent/GB0703974D0/en not_active Ceased
- 2007-03-05 US US11/682,287 patent/US20080215971A1/en not_active Abandoned
- 2007-03-05 US US11/682,298 patent/US8788951B2/en active Active
- 2007-03-05 US US11/682,299 patent/US8502825B2/en active Active
- 2007-03-05 US US11/682,292 patent/US20080215972A1/en not_active Abandoned
- 2007-03-05 US US11/682,284 patent/US7979574B2/en active Active
- 2007-03-05 US US11/682,281 patent/US8425322B2/en active Active
- 2007-08-10 GB GB0715650A patent/GB2447100B/en active Active
- 2007-10-03 EP EP07253928A patent/EP1964597B1/en active Active
- 2007-10-03 ES ES07253928T patent/ES2408680T3/en active Active
- 2008
- 2008-02-29 DE DE602008004893T patent/DE602008004893D1/en active Active
- 2008-02-29 JP JP2009551940A patent/JP5026531B2/en active Active
- 2008-02-29 US US12/528,956 patent/US20120166969A1/en not_active Abandoned
- 2008-02-29 WO PCT/GB2008/000683 patent/WO2008104786A2/en active Application Filing
- 2008-02-29 AT AT08709557T patent/ATE497813T1/en not_active IP Right Cessation
- 2008-02-29 EP EP08709557A patent/EP2131935B1/en active Active
- 2008-03-03 JP JP2009551941A patent/JP5032594B2/en active Active
- 2008-03-03 WO PCT/GB2008/000714 patent/WO2008104795A1/en active Application Filing
- 2008-03-03 US US12/528,920 patent/US8951123B2/en active Active
Patent Citations (89)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5720619A (en) * | 1995-04-24 | 1998-02-24 | Fisslinger; Johannes | Interactive computer assisted multi-media biofeedback system |
US5880731A (en) * | 1995-12-14 | 1999-03-09 | Microsoft Corporation | Use of avatars with automatic gesturing and bounded interaction in on-line chat session |
US20010046228A1 (en) * | 1996-03-20 | 2001-11-29 | Jyri Tahtinen | Method and arrangement for interconnecting a virtual-reality world and the real world for the purpose of establishing a real-time communications connection such as a telephone call connection |
US20010025261A1 (en) * | 1996-06-14 | 2001-09-27 | Shari Olefson | Method and apparatus for providing a virtual tour of a dormatory or other institution to a prospective resident |
US6476830B1 (en) * | 1996-08-02 | 2002-11-05 | Fujitsu Software Corporation | Virtual objects for building a community in a virtual world |
US5926179A (en) * | 1996-09-30 | 1999-07-20 | Sony Corporation | Three-dimensional virtual reality space display processing apparatus, a three-dimensional virtual reality space display processing method, and an information providing medium |
US20020064149A1 (en) * | 1996-11-18 | 2002-05-30 | Elliott Isaac K. | System and method for providing requested quality of service in a hybrid network |
US6121953A (en) * | 1997-02-06 | 2000-09-19 | Modern Cartoons, Ltd. | Virtual reality system for sensing facial movements |
US5977968A (en) * | 1997-03-14 | 1999-11-02 | Mindmeld Multimedia Inc. | Graphical user interface to communicate attitude or emotion to a computer program |
US6366285B1 (en) * | 1997-11-21 | 2002-04-02 | International Business Machines Corporation | Selection by proximity with inner and outer sensitivity ranges |
US6834389B1 (en) * | 1997-12-01 | 2004-12-21 | Recursion Software, Inc. | Method of forwarding messages to mobile objects in a computer network |
US20030057884A1 (en) * | 1997-12-17 | 2003-03-27 | Dowling Kevin J. | Systems and methods for digital entertainment |
US6212502B1 (en) * | 1998-03-23 | 2001-04-03 | Microsoft Corporation | Modeling and projecting emotion and personality from a computer user interface |
US6836515B1 (en) * | 1998-07-24 | 2004-12-28 | Hughes Electronics Corporation | Multi-modulation radio communications |
US6215498B1 (en) * | 1998-09-10 | 2001-04-10 | Lionhearth Technologies, Inc. | Virtual command post |
US7073129B1 (en) * | 1998-12-18 | 2006-07-04 | Tangis Corporation | Automated selection of appropriate information based on a computer user's context |
US20090013052A1 (en) * | 1998-12-18 | 2009-01-08 | Microsoft Corporation | Automated selection of appropriate information based on a computer user's context |
US7086005B1 (en) * | 1999-11-29 | 2006-08-01 | Sony Corporation | Shared virtual space conversation support system using virtual telephones |
US20020005865A1 (en) * | 1999-12-17 | 2002-01-17 | Barbara Hayes-Roth | System, method, and device for authoring content for interactive agents |
US6784901B1 (en) * | 2000-05-09 | 2004-08-31 | There | Method, system and computer program product for the delivery of a chat message in a 3D multi-user environment |
US6545682B1 (en) * | 2000-05-24 | 2003-04-08 | There, Inc. | Method and apparatus for creating and customizing avatars using genetic paradigm |
US20020128952A1 (en) * | 2000-07-06 | 2002-09-12 | Raymond Melkomian | Virtual interactive global exchange |
US20020090985A1 (en) * | 2000-09-07 | 2002-07-11 | Ilan Tochner | Coexistent interaction between a virtual character and the real world |
US7036082B1 (en) * | 2000-09-21 | 2006-04-25 | Nortel Networks Limited | Controlling communications through a virtual reality environment |
US20020113820A1 (en) * | 2000-10-10 | 2002-08-22 | Robinson Jack D. | System and method to configure and provide a network-enabled three-dimensional computing environment |
US6904408B1 (en) * | 2000-10-19 | 2005-06-07 | Mccarthy John | Bionet method, system and personalized web content manager responsive to browser viewers' psychological preferences, behavioral responses and physiological stress indicators |
US20040030787A1 (en) * | 2000-10-27 | 2004-02-12 | Magnus Jandel | Communication infrastructure arrangement for multiuser |
US7272662B2 (en) * | 2000-11-30 | 2007-09-18 | Nms Communications Corporation | Systems and methods for routing messages to communications devices over a communications network |
US20030156134A1 (en) * | 2000-12-08 | 2003-08-21 | Kyunam Kim | Graphic chatting with organizational avatars |
US6910186B2 (en) * | 2000-12-08 | 2005-06-21 | Kyunam Kim | Graphic chatting with organizational avatars |
US20020154174A1 (en) * | 2001-04-23 | 2002-10-24 | Redlich Arthur Norman | Method and system for providing a service in a photorealistic, 3-D environment |
US20030117485A1 (en) * | 2001-12-20 | 2003-06-26 | Yoshiyuki Mochizuki | Virtual television phone apparatus |
US20030235341A1 (en) * | 2002-04-11 | 2003-12-25 | Gokturk Salih Burak | Subject segmentation and tracking using 3D sensing technology for video compression in multimedia applications |
US7203356B2 (en) * | 2002-04-11 | 2007-04-10 | Canesta, Inc. | Subject segmentation and tracking using 3D sensing technology for video compression in multimedia applications |
US6976846B2 (en) * | 2002-05-08 | 2005-12-20 | Accenture Global Services Gmbh | Telecommunications virtual simulator |
US20040207597A1 (en) * | 2002-07-27 | 2004-10-21 | Sony Computer Entertainment Inc. | Method and apparatus for light input device |
US20060277571A1 (en) * | 2002-07-27 | 2006-12-07 | Sony Computer Entertainment Inc. | Computer image and audio processing of intensity and input devices for interfacing with a computer program |
US20060264259A1 (en) * | 2002-07-27 | 2006-11-23 | Zalewski Gary M | System for tracking user manipulations within an environment |
US20060264258A1 (en) * | 2002-07-27 | 2006-11-23 | Zalewski Gary M | Multi-input game control mixer |
US20040060067A1 (en) * | 2002-09-24 | 2004-03-25 | Lg Electronics Inc. | System and method for multiplexing media information over a network using reduced communications resources and prior knowledge/experience of a called or calling party |
US20040153557A1 (en) * | 2002-10-02 | 2004-08-05 | Joe Shochet | Multi-user interactive communication network environment |
US20040105004A1 (en) * | 2002-11-30 | 2004-06-03 | Yong Rui | Automated camera management system and method for capturing presentations using videography rules |
US20070113181A1 (en) * | 2003-03-03 | 2007-05-17 | Blattner Patrick D | Using avatars to communicate real-time information |
US7908554B1 (en) * | 2003-03-03 | 2011-03-15 | Aol Inc. | Modifying avatar behavior based on user action or mood |
US20040179038A1 (en) * | 2003-03-03 | 2004-09-16 | Blattner Patrick D. | Reactive avatars |
US20040189720A1 (en) * | 2003-03-25 | 2004-09-30 | Wilson Andrew D. | Architecture for controlling a computer using hand gestures |
US20060224546A1 (en) * | 2003-03-25 | 2006-10-05 | Daniel Ballin | Aparatus and method for generating behaviour in an object |
US7574332B2 (en) * | 2003-03-25 | 2009-08-11 | British Telecommunications Plc | Apparatus and method for generating behaviour in an object |
US20050083333A1 (en) * | 2003-05-01 | 2005-04-21 | Sony Corporation | System and method for capturing facial and body motion |
US20040255027A1 (en) * | 2003-06-04 | 2004-12-16 | Sony Computer Entertainment Inc. | Virtual/real world dynamic intercommunication methods and systems |
US20040255015A1 (en) * | 2003-06-16 | 2004-12-16 | International Business Machines Corporation | Communications management using weights and thresholds |
US20070025562A1 (en) * | 2003-08-27 | 2007-02-01 | Sony Computer Entertainment Inc. | Methods and apparatus for targeted sound detection |
US20060233389A1 (en) * | 2003-08-27 | 2006-10-19 | Sony Computer Entertainment Inc. | Methods and apparatus for targeted sound detection and characterization |
US20050054381A1 (en) * | 2003-09-05 | 2005-03-10 | Samsung Electronics Co., Ltd. | Proactive user interface |
US20070198178A1 (en) * | 2004-03-31 | 2007-08-23 | Trimby Martin W | Pathfinding system |
US20050261032A1 (en) * | 2004-04-23 | 2005-11-24 | Jeong-Wook Seo | Device and method for displaying a status of a portable terminal by using a character image |
US7647560B2 (en) * | 2004-05-11 | 2010-01-12 | Microsoft Corporation | User interface for multi-sensory emoticons in a communication system |
US20060003814A1 (en) * | 2004-06-30 | 2006-01-05 | Taryn Moody | Intelligent ringtone service |
US20070002057A1 (en) * | 2004-10-12 | 2007-01-04 | Matt Danzig | Computer-implemented system and method for home page customization and e-commerce support |
US20060092270A1 (en) * | 2004-11-04 | 2006-05-04 | Sony Corporation | Kinesiological model-based gestural augmentation of voice communication |
US7468729B1 (en) * | 2004-12-21 | 2008-12-23 | Aol Llc, A Delaware Limited Liability Company | Using an avatar to generate user profile information |
US20060153425A1 (en) * | 2005-01-07 | 2006-07-13 | Lg Electronics Inc. | Method of processing three-dimensional image in mobile device |
US20060178968A1 (en) * | 2005-02-04 | 2006-08-10 | Jung Edward K | Virtual world interconnection technique |
US20080246759A1 (en) * | 2005-02-23 | 2008-10-09 | Craig Summers | Automatic Scene Modeling for the 3D Camera and 3D Video |
US20070013515A1 (en) * | 2005-07-15 | 2007-01-18 | Microsoft Corporation | Parental controls for a media console |
US20070037561A1 (en) * | 2005-08-10 | 2007-02-15 | Bowen Blake A | Method for intelligently dialing contact numbers for a person using user-defined smart rules |
US20070074221A1 (en) * | 2005-09-27 | 2007-03-29 | Sony Computer Entertainment Inc. | Cell processor task and data management |
US20070198628A1 (en) * | 2005-09-27 | 2007-08-23 | Sony Computer Entertainment Inc. | Cell processor methods and apparatus |
US20070074206A1 (en) * | 2005-09-27 | 2007-03-29 | Sony Computer Entertainment Inc. | Operating cell processors over a network |
US20070074212A1 (en) * | 2005-09-27 | 2007-03-29 | Sony Computer Entertainment Inc. | Cell processor methods and apparatus |
US20070074207A1 (en) * | 2005-09-27 | 2007-03-29 | Sony Computer Entertainment Inc. | SPU task manager for cell processor |
US20070083755A1 (en) * | 2005-09-27 | 2007-04-12 | Sony Computer Entertainment Inc. | Operating cell processors over a network |
US20070074114A1 (en) * | 2005-09-29 | 2007-03-29 | Conopco, Inc., D/B/A Unilever | Automated dialogue interface |
US20070082309A1 (en) * | 2005-10-07 | 2007-04-12 | Carrier Corporation | Inshot burner flame retainer |
US20070149282A1 (en) * | 2005-12-27 | 2007-06-28 | Industrial Technology Research Institute | Interactive gaming method and apparatus with emotion perception ability |
US20070192727A1 (en) * | 2006-01-26 | 2007-08-16 | Finley William D | Three dimensional graphical user interface representative of a physical work space |
US7676237B2 (en) * | 2006-04-11 | 2010-03-09 | At&T Intellectual Property I, L.P. | Routing communication based on urgency priority level |
US20070260340A1 (en) * | 2006-05-04 | 2007-11-08 | Sony Computer Entertainment Inc. | Ultra small microphone array |
US20080059570A1 (en) * | 2006-09-05 | 2008-03-06 | Aol Llc | Enabling an im user to navigate a virtual world |
US20080059578A1 (en) * | 2006-09-06 | 2008-03-06 | Jacob C Albertson | Informing a user of gestures made by others out of the user's line of sight |
US20080096665A1 (en) * | 2006-10-18 | 2008-04-24 | Ariel Cohen | System and a method for a reality role playing game genre |
US20080120558A1 (en) * | 2006-11-16 | 2008-05-22 | Paco Xander Nathan | Systems and methods for managing a persistent virtual avatar with migrational ability |
US7412077B2 (en) * | 2006-12-29 | 2008-08-12 | Motorola, Inc. | Apparatus and methods for head pose estimation and head gesture detection |
US20080214253A1 (en) * | 2007-03-01 | 2008-09-04 | Sony Computer Entertainment America Inc. | System and method for communicating with a virtual world |
US20080235582A1 (en) * | 2007-03-01 | 2008-09-25 | Sony Computer Entertainment America Inc. | Avatar email and methods for communicating between real and virtual worlds |
US20080215679A1 (en) * | 2007-03-01 | 2008-09-04 | Sony Computer Entertainment America Inc. | System and method for routing communications among real and virtual communication devices |
US20080215971A1 (en) * | 2007-03-01 | 2008-09-04 | Sony Computer Entertainment America Inc. | System and method for communicating with an avatar |
US20080215973A1 (en) * | 2007-03-01 | 2008-09-04 | Sony Computer Entertainment America Inc | Avatar customization |
US8502825B2 (en) * | 2007-03-01 | 2013-08-06 | Sony Computer Entertainment Europe Limited | Avatar email and methods for communicating between real and virtual worlds |
Cited By (208)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080093814A1 (en) * | 2004-09-09 | 2008-04-24 | Massimo Filippi | Wheel Assembly with Internal Pressure Reservoir and Pressure Fluctuation Warning System |
US20080020361A1 (en) * | 2006-07-12 | 2008-01-24 | Kron Frederick W | Computerized medical training system |
US8469713B2 (en) | 2006-07-12 | 2013-06-25 | Medical Cyberworlds, Inc. | Computerized medical training system |
US8954368B2 (en) | 2006-09-05 | 2015-02-10 | Microsoft Corporation | Translating paralinguistic indicators |
US8010474B1 (en) * | 2006-09-05 | 2011-08-30 | Aol Inc. | Translating paralinguistic indicators |
US8726195B2 (en) | 2006-09-05 | 2014-05-13 | Aol Inc. | Enabling an IM user to navigate a virtual world |
US8688611B2 (en) | 2006-09-05 | 2014-04-01 | Microsoft Corporation | Translating paralinguistic indicators |
US8473441B2 (en) | 2006-09-05 | 2013-06-25 | Microsoft Corporation | Translating paralinguistic indicators |
US9760568B2 (en) | 2006-09-05 | 2017-09-12 | Oath Inc. | Enabling an IM user to navigate a virtual world |
US20080215679A1 (en) * | 2007-03-01 | 2008-09-04 | Sony Computer Entertainment America Inc. | System and method for routing communications among real and virtual communication devices |
US8788951B2 (en) | 2007-03-01 | 2014-07-22 | Sony Computer Entertainment America Llc | Avatar customization |
US20080215971A1 (en) * | 2007-03-01 | 2008-09-04 | Sony Computer Entertainment America Inc. | System and method for communicating with an avatar |
US8425322B2 (en) | 2007-03-01 | 2013-04-23 | Sony Computer Entertainment America Inc. | System and method for communicating with a virtual world |
US8502825B2 (en) | 2007-03-01 | 2013-08-06 | Sony Computer Entertainment Europe Limited | Avatar email and methods for communicating between real and virtual worlds |
US20080235582A1 (en) * | 2007-03-01 | 2008-09-25 | Sony Computer Entertainment America Inc. | Avatar email and methods for communicating between real and virtual worlds |
US7979574B2 (en) | 2007-03-01 | 2011-07-12 | Sony Computer Entertainment America Llc | System and method for routing communications among real and virtual communication devices |
US20080215973A1 (en) * | 2007-03-01 | 2008-09-04 | Sony Computer Entertainment America Inc | Avatar customization |
US20100114668A1 (en) * | 2007-04-23 | 2010-05-06 | Integrated Media Measurement, Inc. | Determining Relative Effectiveness Of Media Content Items |
US11222344B2 (en) | 2007-04-23 | 2022-01-11 | The Nielsen Company (Us), Llc | Determining relative effectiveness of media content items |
US10489795B2 (en) | 2007-04-23 | 2019-11-26 | The Nielsen Company (Us), Llc | Determining relative effectiveness of media content items |
US20110087540A1 (en) * | 2007-06-08 | 2011-04-14 | Gopal Krishnan | Web Pages and Methods for Displaying Targeted On-Line Advertisements in a Social Networking Media Space |
US20100146052A1 (en) * | 2007-06-22 | 2010-06-10 | France Telecom | A method and a system for setting up encounters between persons in a telecommunications system |
US10974137B2 (en) | 2007-10-09 | 2021-04-13 | Sony Interactive Entertainment LLC | Increasing the number of advertising impressions in an interactive environment |
US10343060B2 (en) | 2007-10-09 | 2019-07-09 | Sony Interactive Entertainment LLC | Increasing the number of advertising impressions in an interactive environment |
US11660529B2 (en) | 2007-10-09 | 2023-05-30 | Sony Interactive Entertainment LLC | Increasing the number of advertising impressions in an interactive environment |
US8416247B2 (en) | 2007-10-09 | 2013-04-09 | Sony Computer Entertainment America Inc. | Increasing the number of advertising impressions in an interactive environment |
US9795875B2 (en) | 2007-10-09 | 2017-10-24 | Sony Interactive Entertainment America Llc | Increasing the number of advertising impressions in an interactive environment |
US9272203B2 (en) | 2007-10-09 | 2016-03-01 | Sony Computer Entertainment America, LLC | Increasing the number of advertising impressions in an interactive environment |
US20090106672A1 (en) * | 2007-10-18 | 2009-04-23 | Sony Ericsson Mobile Communications Ab | Virtual world avatar activity governed by person's real life activity |
US8930472B2 (en) | 2007-10-24 | 2015-01-06 | Social Communications Company | Promoting communicant interactions in a network communications environment |
US9483157B2 (en) | 2007-10-24 | 2016-11-01 | Sococo, Inc. | Interfacing with a spatial virtual communication environment |
US20090128567A1 (en) * | 2007-11-15 | 2009-05-21 | Brian Mark Shuster | Multi-instance, multi-user animation with coordinated chat |
US11322171B1 (en) | 2007-12-17 | 2022-05-03 | Wai Wu | Parallel signal processing system and method |
US20090164916A1 (en) * | 2007-12-21 | 2009-06-25 | Samsung Electronics Co., Ltd. | Method and system for creating mixed world that reflects real state |
US20090222255A1 (en) * | 2008-02-28 | 2009-09-03 | International Business Machines Corporation | Using gender analysis of names to assign avatars in instant messaging applications |
US8191001B2 (en) | 2008-04-05 | 2012-05-29 | Social Communications Company | Shared virtual area communication environment based apparatus and methods |
US8732593B2 (en) | 2008-04-05 | 2014-05-20 | Social Communications Company | Shared virtual area communication environment based apparatus and methods |
US20090254843A1 (en) * | 2008-04-05 | 2009-10-08 | Social Communications Company | Shared virtual area communication environment based apparatus and methods |
US8429225B2 (en) | 2008-05-21 | 2013-04-23 | The Invention Science Fund I, Llc | Acquisition and presentation of data indicative of an extent of congruence between inferred mental states of authoring users |
US20110208014A1 (en) * | 2008-05-23 | 2011-08-25 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Determination of extent of congruity between observation of authoring user and observation of receiving user |
US9192300B2 (en) * | 2008-05-23 | 2015-11-24 | Invention Science Fund I, Llc | Acquisition and particular association of data indicative of an inferred mental state of an authoring user |
US20090292658A1 (en) * | 2008-05-23 | 2009-11-26 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Acquisition and particular association of inference data indicative of inferred mental states of authoring users |
US20090290767A1 (en) * | 2008-05-23 | 2009-11-26 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Determination of extent of congruity between observation of authoring user and observation of receiving user |
US20090292713A1 (en) * | 2008-05-23 | 2009-11-26 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Acquisition and particular association of data indicative of an inferred mental state of an authoring user |
US20090292928A1 (en) * | 2008-05-23 | 2009-11-26 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Acquisition and particular association of inference data indicative of an inferred mental state of an authoring user and source identity data |
US8380658B2 (en) | 2008-05-23 | 2013-02-19 | The Invention Science Fund I, Llc | Determination of extent of congruity between observation of authoring user and observation of receiving user |
US9101263B2 (en) | 2008-05-23 | 2015-08-11 | The Invention Science Fund I, Llc | Acquisition and association of data indicative of an inferred mental state of an authoring user |
US9161715B2 (en) * | 2008-05-23 | 2015-10-20 | Invention Science Fund I, Llc | Determination of extent of congruity between observation of authoring user and observation of receiving user |
US8615664B2 (en) | 2008-05-23 | 2013-12-24 | The Invention Science Fund I, Llc | Acquisition and particular association of inference data indicative of an inferred mental state of an authoring user and source identity data |
US20100013828A1 (en) * | 2008-07-17 | 2010-01-21 | International Business Machines Corporation | System and method for enabling multiple-state avatars |
US9324173B2 (en) | 2008-07-17 | 2016-04-26 | International Business Machines Corporation | System and method for enabling multiple-state avatars |
US10424101B2 (en) | 2008-07-17 | 2019-09-24 | International Business Machines Corporation | System and method for enabling multiple-state avatars |
US10369473B2 (en) * | 2008-07-25 | 2019-08-06 | International Business Machines Corporation | Method for extending a virtual environment through registration |
US20150160825A1 (en) * | 2008-07-25 | 2015-06-11 | International Business Machines Corporation | Method for extending a virtual environment through registration |
US20100020100A1 (en) * | 2008-07-25 | 2010-01-28 | International Business Machines Corporation | Method for extending a virtual environment through registration |
US8957914B2 (en) * | 2008-07-25 | 2015-02-17 | International Business Machines Corporation | Method for extending a virtual environment through registration |
US10166470B2 (en) | 2008-08-01 | 2019-01-01 | International Business Machines Corporation | Method for providing a virtual world layer |
US20100031164A1 (en) * | 2008-08-01 | 2010-02-04 | International Business Machines Corporation | Method for providing a virtual world layer |
US8133119B2 (en) | 2008-10-01 | 2012-03-13 | Microsoft Corporation | Adaptation for alternate gaming input devices |
US20100081507A1 (en) * | 2008-10-01 | 2010-04-01 | Microsoft Corporation | Adaptation for Alternate Gaming Input Devices |
US9635195B1 (en) * | 2008-12-24 | 2017-04-25 | The Directv Group, Inc. | Customizable graphical elements for use in association with a user interface |
US9124662B2 (en) | 2009-01-15 | 2015-09-01 | Social Communications Company | Persistent network resource and virtual area associations for realtime collaboration |
US9065874B2 (en) | 2009-01-15 | 2015-06-23 | Social Communications Company | Persistent network resource and virtual area associations for realtime collaboration |
US8294767B2 (en) | 2009-01-30 | 2012-10-23 | Microsoft Corporation | Body scan |
US8467574B2 (en) | 2009-01-30 | 2013-06-18 | Microsoft Corporation | Body scan |
US20100194741A1 (en) * | 2009-01-30 | 2010-08-05 | Microsoft Corporation | Depth map movement tracking via optical flow and velocity prediction |
US8897493B2 (en) | 2009-01-30 | 2014-11-25 | Microsoft Corporation | Body scan |
US20100194872A1 (en) * | 2009-01-30 | 2010-08-05 | Microsoft Corporation | Body scan |
US8866821B2 (en) | 2009-01-30 | 2014-10-21 | Microsoft Corporation | Depth map movement tracking via optical flow and velocity prediction |
US20100199221A1 (en) * | 2009-01-30 | 2010-08-05 | Microsoft Corporation | Navigation of a virtual plane using depth |
US9652030B2 (en) | 2009-01-30 | 2017-05-16 | Microsoft Technology Licensing, Llc | Navigation of a virtual plane using a zone of restriction for canceling noise |
US9465980B2 (en) | 2009-01-30 | 2016-10-11 | Microsoft Technology Licensing, Llc | Pose tracking pipeline |
US9607213B2 (en) | 2009-01-30 | 2017-03-28 | Microsoft Technology Licensing, Llc | Body scan |
US10599212B2 (en) | 2009-01-30 | 2020-03-24 | Microsoft Technology Licensing, Llc | Navigation of a virtual plane using a zone of restriction for canceling noise |
US9007417B2 (en) | 2009-01-30 | 2015-04-14 | Microsoft Technology Licensing, Llc | Body scan |
US9153035B2 (en) | 2009-01-30 | 2015-10-06 | Microsoft Technology Licensing, Llc | Depth map movement tracking via optical flow and velocity prediction |
US20100231512A1 (en) * | 2009-03-16 | 2010-09-16 | Microsoft Corporation | Adaptive cursor sizing |
US8773355B2 (en) | 2009-03-16 | 2014-07-08 | Microsoft Corporation | Adaptive cursor sizing |
US20100238182A1 (en) * | 2009-03-20 | 2010-09-23 | Microsoft Corporation | Chaining animations |
US9824480B2 (en) | 2009-03-20 | 2017-11-21 | Microsoft Technology Licensing, Llc | Chaining animations |
US9478057B2 (en) | 2009-03-20 | 2016-10-25 | Microsoft Technology Licensing, Llc | Chaining animations |
US9256282B2 (en) | 2009-03-20 | 2016-02-09 | Microsoft Technology Licensing, Llc | Virtual object manipulation |
US8988437B2 (en) | 2009-03-20 | 2015-03-24 | Microsoft Technology Licensing, Llc | Chaining animations |
US9519970B2 (en) | 2009-05-01 | 2016-12-13 | Microsoft Technology Licensing, Llc | Systems and methods for detecting a tilt angle from a depth image |
US9898675B2 (en) | 2009-05-01 | 2018-02-20 | Microsoft Technology Licensing, Llc | User movement tracking feedback to improve tracking |
US8503766B2 (en) | 2009-05-01 | 2013-08-06 | Microsoft Corporation | Systems and methods for detecting a tilt angle from a depth image |
US8503720B2 (en) | 2009-05-01 | 2013-08-06 | Microsoft Corporation | Human body pose estimation |
US20100281437A1 (en) * | 2009-05-01 | 2010-11-04 | Microsoft Corporation | Managing virtual ports |
US20100281436A1 (en) * | 2009-05-01 | 2010-11-04 | Microsoft Corporation | Binding users to a gesture based system and providing feedback to the users |
US20100278384A1 (en) * | 2009-05-01 | 2010-11-04 | Microsoft Corporation | Human body pose estimation |
US20100281438A1 (en) * | 2009-05-01 | 2010-11-04 | Microsoft Corporation | Altering a view perspective within a display environment |
US20100277470A1 (en) * | 2009-05-01 | 2010-11-04 | Microsoft Corporation | Systems And Methods For Applying Model Tracking To Motion Capture |
US20100277489A1 (en) * | 2009-05-01 | 2010-11-04 | Microsoft Corporation | Determine intended motions |
US9015638B2 (en) | 2009-05-01 | 2015-04-21 | Microsoft Technology Licensing, Llc | Binding users to a gesture based system and providing feedback to the users |
US8638985B2 (en) | 2009-05-01 | 2014-01-28 | Microsoft Corporation | Human body pose estimation |
US8649554B2 (en) | 2009-05-01 | 2014-02-11 | Microsoft Corporation | Method to control perspective for a camera-controlled computer |
US10210382B2 (en) | 2009-05-01 | 2019-02-19 | Microsoft Technology Licensing, Llc | Human body pose estimation |
US9910509B2 (en) | 2009-05-01 | 2018-03-06 | Microsoft Technology Licensing, Llc | Method to control perspective for a camera-controlled computer |
US8942428B2 (en) | 2009-05-01 | 2015-01-27 | Microsoft Corporation | Isolate extraneous motions |
US9524024B2 (en) | 2009-05-01 | 2016-12-20 | Microsoft Technology Licensing, Llc | Method to control perspective for a camera-controlled computer |
US9519828B2 (en) | 2009-05-01 | 2016-12-13 | Microsoft Technology Licensing, Llc | Isolate extraneous motions |
US8340432B2 (en) | 2009-05-01 | 2012-12-25 | Microsoft Corporation | Systems and methods for detecting a tilt angle from a depth image |
US9498718B2 (en) | 2009-05-01 | 2016-11-22 | Microsoft Technology Licensing, Llc | Altering a view perspective within a display environment |
US9377857B2 (en) | 2009-05-01 | 2016-06-28 | Microsoft Technology Licensing, Llc | Show body position |
US8762894B2 (en) | 2009-05-01 | 2014-06-24 | Microsoft Corporation | Managing virtual ports |
US8451278B2 (en) | 2009-05-01 | 2013-05-28 | Microsoft Corporation | Determine intended motions |
US8290249B2 (en) | 2009-05-01 | 2012-10-16 | Microsoft Corporation | Systems and methods for detecting a tilt angle from a depth image |
US9298263B2 (en) | 2009-05-01 | 2016-03-29 | Microsoft Technology Licensing, Llc | Show body position |
US9262673B2 (en) | 2009-05-01 | 2016-02-16 | Microsoft Technology Licensing, Llc | Human body pose estimation |
US9191570B2 (en) | 2009-05-01 | 2015-11-17 | Microsoft Technology Licensing, Llc | Systems and methods for detecting a tilt angle from a depth image |
US8253746B2 (en) | 2009-05-01 | 2012-08-28 | Microsoft Corporation | Determine intended motions |
US8181123B2 (en) | 2009-05-01 | 2012-05-15 | Microsoft Corporation | Managing virtual port associations to users in a gesture-based computing environment |
US20100295771A1 (en) * | 2009-05-20 | 2010-11-25 | Microsoft Corporation | Control of display objects |
US20100306671A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Avatar Integrated Shared Media Selection |
US8351652B2 (en) | 2009-05-29 | 2013-01-08 | Microsoft Corporation | Systems and methods for tracking a model |
US8418085B2 (en) | 2009-05-29 | 2013-04-09 | Microsoft Corporation | Gesture coach |
US8896721B2 (en) | 2009-05-29 | 2014-11-25 | Microsoft Corporation | Environment and/or target segmentation |
US8145594B2 (en) | 2009-05-29 | 2012-03-27 | Microsoft Corporation | Localized gesture aggregation |
US8509479B2 (en) | 2009-05-29 | 2013-08-13 | Microsoft Corporation | Virtual object |
US8542252B2 (en) | 2009-05-29 | 2013-09-24 | Microsoft Corporation | Target digitization, extraction, and tracking |
US10691216B2 (en) | 2009-05-29 | 2020-06-23 | Microsoft Technology Licensing, Llc | Combining gestures beyond skeletal |
US8625837B2 (en) | 2009-05-29 | 2014-01-07 | Microsoft Corporation | Protocol and format for communicating an image from a camera to a computing environment |
US20100302257A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Systems and Methods For Applying Animations or Motions to a Character |
US10368120B2 (en) | 2009-05-29 | 2019-07-30 | Microsoft Technology Licensing, Llc | Avatar integrated shared media experience |
US20100306715A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Gestures Beyond Skeletal |
WO2010138734A3 (en) * | 2009-05-29 | 2011-03-03 | Microsoft Corporation | Avatar integrated shared media experience |
US8661353B2 (en) | 2009-05-29 | 2014-02-25 | Microsoft Corporation | Avatar integrated shared media experience |
US20100303289A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Device for identifying and tracking multiple humans over time |
US9118737B2 (en) | 2009-05-29 | 2015-08-25 | Microsoft Technology Licensing, Llc | Avatar integrated shared media experience |
US9943755B2 (en) | 2009-05-29 | 2018-04-17 | Microsoft Technology Licensing, Llc | Device for identifying and tracking multiple humans over time |
US8660310B2 (en) | 2009-05-29 | 2014-02-25 | Microsoft Corporation | Systems and methods for tracking a model |
US9861886B2 (en) | 2009-05-29 | 2018-01-09 | Microsoft Technology Licensing, Llc | Systems and methods for applying animations or motions to a character |
US20100302395A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Environment And/Or Target Segmentation |
US20100306713A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Gesture Tool |
US20100306710A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Living cursor control mechanics |
US9656162B2 (en) | 2009-05-29 | 2017-05-23 | Microsoft Technology Licensing, Llc | Device for identifying and tracking multiple humans over time |
US9182814B2 (en) | 2009-05-29 | 2015-11-10 | Microsoft Technology Licensing, Llc | Systems and methods for estimating a non-visible or occluded body part |
US20100306685A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | User movement feedback via on-screen avatars |
US8176442B2 (en) | 2009-05-29 | 2012-05-08 | Microsoft Corporation | Living cursor control mechanics |
US9215478B2 (en) | 2009-05-29 | 2015-12-15 | Microsoft Technology Licensing, Llc | Protocol and format for communicating an image from a camera to a computing environment |
US20100303302A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Systems And Methods For Estimating An Occluded Body Part |
US20100302138A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Methods and systems for defining or modifying a visual representation |
US8856691B2 (en) | 2009-05-29 | 2014-10-07 | Microsoft Corporation | Gesture tool |
US20100303290A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Systems And Methods For Tracking A Model |
US8803889B2 (en) | 2009-05-29 | 2014-08-12 | Microsoft Corporation | Systems and methods for applying animations or motions to a character |
US8320619B2 (en) | 2009-05-29 | 2012-11-27 | Microsoft Corporation | Systems and methods for tracking a model |
US20170095738A1 (en) * | 2009-05-29 | 2017-04-06 | Microsoft Technology Licensing, Llc | User movement feedback via on-screen avatars |
US20100304813A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Protocol And Format For Communicating An Image From A Camera To A Computing Environment |
US9383823B2 (en) | 2009-05-29 | 2016-07-05 | Microsoft Technology Licensing, Llc | Combining gestures beyond skeletal |
US9400559B2 (en) | 2009-05-29 | 2016-07-26 | Microsoft Technology Licensing, Llc | Gesture shortcuts |
US8379101B2 (en) | 2009-05-29 | 2013-02-19 | Microsoft Corporation | Environment and/or target segmentation |
US9423945B2 (en) | 2009-05-29 | 2016-08-23 | Microsoft Technology Licensing, Llc | Avatar integrated shared media experience |
US20100306261A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Localized Gesture Aggregation |
US20100306712A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Gesture Coach |
US20100306655A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Avatar Integrated Shared Media Experience |
US8744121B2 (en) | 2009-05-29 | 2014-06-03 | Microsoft Corporation | Device for identifying and tracking multiple humans over time |
US20100306716A1 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Extending standard gestures |
US20100311280A1 (en) * | 2009-06-03 | 2010-12-09 | Microsoft Corporation | Dual-barrel, connector jack and plug assemblies |
US7914344B2 (en) | 2009-06-03 | 2011-03-29 | Microsoft Corporation | Dual-barrel, connector jack and plug assemblies |
US9519989B2 (en) | 2009-07-09 | 2016-12-13 | Microsoft Technology Licensing, Llc | Visual representation expression based on player expression |
US8390680B2 (en) | 2009-07-09 | 2013-03-05 | Microsoft Corporation | Visual representation expression based on player expression |
US20110007142A1 (en) * | 2009-07-09 | 2011-01-13 | Microsoft Corporation | Visual representation expression based on player expression |
US20110007079A1 (en) * | 2009-07-13 | 2011-01-13 | Microsoft Corporation | Bringing a visual representation to life via learned input from the user |
US9159151B2 (en) | 2009-07-13 | 2015-10-13 | Microsoft Technology Licensing, Llc | Bringing a visual representation to life via learned input from the user |
US20110025689A1 (en) * | 2009-07-29 | 2011-02-03 | Microsoft Corporation | Auto-Generating A Visual Representation |
US20110055846A1 (en) * | 2009-08-31 | 2011-03-03 | Microsoft Corporation | Techniques for using human gestures to control gesture unaware programs |
US9141193B2 (en) | 2009-08-31 | 2015-09-22 | Microsoft Technology Licensing, Llc | Techniques for using human gestures to control gesture unaware programs |
US20110109617A1 (en) * | 2009-11-12 | 2011-05-12 | Microsoft Corporation | Visualizing Depth |
US9245177B2 (en) * | 2010-06-02 | 2016-01-26 | Microsoft Technology Licensing, Llc | Limiting avatar gesture display |
US20110298827A1 (en) * | 2010-06-02 | 2011-12-08 | Microsoft Corporation | Limiting avatar gesture display |
US20140303778A1 (en) * | 2010-06-07 | 2014-10-09 | Gary Stephen Shuster | Creation and use of virtual places |
US9595136B2 (en) * | 2010-06-07 | 2017-03-14 | Gary Stephen Shuster | Creation and use of virtual places |
US11605203B2 (en) | 2010-06-07 | 2023-03-14 | Pfaqutruma Research Llc | Creation and use of virtual places |
US10984594B2 (en) * | 2010-06-07 | 2021-04-20 | Pfaqutruma Research Llc | Creation and use of virtual places |
US20120011453A1 (en) * | 2010-07-08 | 2012-01-12 | Namco Bandai Games Inc. | Method, storage medium, and user terminal |
US8667519B2 (en) | 2010-11-12 | 2014-03-04 | Microsoft Corporation | Automatic passive and anonymous feedback system |
US8942917B2 (en) | 2011-02-14 | 2015-01-27 | Microsoft Corporation | Change invariant scene recognition by an agent |
US11271805B2 (en) | 2011-02-21 | 2022-03-08 | Knapp Investment Company Limited | Persistent network resource and virtual area associations for realtime collaboration |
US8620113B2 (en) | 2011-04-25 | 2013-12-31 | Microsoft Corporation | Laser diode modes |
US10331222B2 (en) | 2011-05-31 | 2019-06-25 | Microsoft Technology Licensing, Llc | Gesture recognition techniques |
US9372544B2 (en) | 2011-05-31 | 2016-06-21 | Microsoft Technology Licensing, Llc | Gesture recognition techniques |
US8760395B2 (en) | 2011-05-31 | 2014-06-24 | Microsoft Corporation | Gesture recognition techniques |
US9154837B2 (en) | 2011-12-02 | 2015-10-06 | Microsoft Technology Licensing, Llc | User interface presenting an animated avatar performing a media reaction |
US8635637B2 (en) | 2011-12-02 | 2014-01-21 | Microsoft Corporation | User interface presenting an animated avatar performing a media reaction |
US10798438B2 (en) | 2011-12-09 | 2020-10-06 | Microsoft Technology Licensing, Llc | Determining audience state or interest using passive sensor data |
US9100685B2 (en) | 2011-12-09 | 2015-08-04 | Microsoft Technology Licensing, Llc | Determining audience state or interest using passive sensor data |
US9628844B2 (en) | 2011-12-09 | 2017-04-18 | Microsoft Technology Licensing, Llc | Determining audience state or interest using passive sensor data |
US8898687B2 (en) | 2012-04-04 | 2014-11-25 | Microsoft Corporation | Controlling a media program based on a media reaction |
US9788032B2 (en) | 2012-05-04 | 2017-10-10 | Microsoft Technology Licensing, Llc | Determining a future portion of a currently presented media program |
US8959541B2 (en) | 2012-05-04 | 2015-02-17 | Microsoft Technology Licensing, Llc | Determining a future portion of a currently presented media program |
US11657438B2 (en) | 2012-10-19 | 2023-05-23 | Sococo, Inc. | Bridging physical and virtual spaces |
US11215711B2 (en) | 2012-12-28 | 2022-01-04 | Microsoft Technology Licensing, Llc | Using photometric stereo for 3D environment modeling |
US11710309B2 (en) | 2013-02-22 | 2023-07-25 | Microsoft Technology Licensing, Llc | Camera/object pose from predicted coordinates |
US12094045B2 (en) * | 2013-08-09 | 2024-09-17 | Implementation Apps Llc | Generating a background that allows a first avatar to take part in an activity with a second avatar |
US20230252709A1 (en) * | 2013-08-09 | 2023-08-10 | Implementation Apps Llc | Generating a background that allows a first avatar to take part in an activity with a second avatar |
US20150169832A1 (en) * | 2013-12-18 | 2015-06-18 | Lenovo (Singapore) Pte, Ltd. | Systems and methods to determine user emotions and moods based on acceleration data and biometric data |
US20150254887A1 (en) * | 2014-03-07 | 2015-09-10 | Yu-Hsien Li | Method and system for modeling emotion |
US20160226813A1 (en) * | 2015-01-29 | 2016-08-04 | International Business Machines Corporation | Smartphone indicator for conversation nonproductivity |
US9722965B2 (en) * | 2015-01-29 | 2017-08-01 | International Business Machines Corporation | Smartphone indicator for conversation nonproductivity |
GB2556347A (en) * | 2016-03-11 | 2018-05-30 | Sony Interactive Entertainment Europe Ltd | Virtual reality |
GB2556347B (en) * | 2016-03-11 | 2019-08-28 | Sony Interactive Entertainment Europe Ltd | Virtual Reality |
US11068065B2 (en) * | 2018-11-28 | 2021-07-20 | International Business Machines Corporation | Non-verbal communication tracking and classification |
US20200167002A1 (en) * | 2018-11-28 | 2020-05-28 | International Business Machines Corporation | Non-verbal communication tracking and classification |
US10609332B1 (en) | 2018-12-21 | 2020-03-31 | Microsoft Technology Licensing, Llc | Video conferencing supporting a composite video stream |
US11158102B2 (en) * | 2019-01-22 | 2021-10-26 | Beijing Baidu Netcom Science And Technology Co., Ltd. | Method and apparatus for processing information |
US11651562B2 (en) * | 2019-12-30 | 2023-05-16 | Tmrw Foundation Ip S. À R.L. | Method and system for enabling enhanced user-to-user communication in digital realities |
US11223800B1 (en) | 2020-11-03 | 2022-01-11 | International Business Machines Corporation | Selective reaction obfuscation |
KR102724091B1 (en) * | 2023-02-02 | 2024-10-31 | 주식회사 스콘 | An avatar interaction method and a computing device on which such method is implemented |
Also Published As
Publication number | Publication date |
---|---|
EP2131935B1 (en) | 2011-02-09 |
GB2447100A (en) | 2008-09-03 |
US20080215973A1 (en) | 2008-09-04 |
US8951123B2 (en) | 2015-02-10 |
US7979574B2 (en) | 2011-07-12 |
US20080214253A1 (en) | 2008-09-04 |
US20080215971A1 (en) | 2008-09-04 |
US8788951B2 (en) | 2014-07-22 |
US20080215679A1 (en) | 2008-09-04 |
EP1964597A1 (en) | 2008-09-03 |
EP2131935A2 (en) | 2009-12-16 |
ES2408680T3 (en) | 2013-06-21 |
WO2008104795A1 (en) | 2008-09-04 |
US20120115597A1 (en) | 2012-05-10 |
GB2447100B (en) | 2010-03-24 |
WO2008104786A3 (en) | 2008-10-23 |
JP2010522909A (en) | 2010-07-08 |
JP2010520539A (en) | 2010-06-10 |
WO2008104786A2 (en) | 2008-09-04 |
GB0715650D0 (en) | 2007-09-19 |
EP1964597B1 (en) | 2013-02-20 |
US8425322B2 (en) | 2013-04-23 |
JP5026531B2 (en) | 2012-09-12 |
GB0703974D0 (en) | 2007-04-11 |
JP5032594B2 (en) | 2012-09-26 |
DE602008004893D1 (en) | 2011-03-24 |
ATE497813T1 (en) | 2011-02-15 |
US8502825B2 (en) | 2013-08-06 |
US20080235582A1 (en) | 2008-09-25 |
US20120166969A1 (en) | 2012-06-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8502825B2 (en) | Avatar email and methods for communicating between real and virtual worlds | |
EP2132650A2 (en) | System and method for communicating with a virtual world | |
WO2008109299A2 (en) | System and method for communicating with a virtual world | |
JP6263252B1 (en) | Information processing method, apparatus, and program for causing computer to execute information processing method | |
US11826636B2 (en) | Depth sensing module and mobile device including the same | |
US9071808B2 (en) | Storage medium having stored information processing program therein, information processing apparatus, information processing method, and information processing system | |
JP5806469B2 (en) | Image processing program, image processing apparatus, image processing system, and image processing method | |
CN101548547B (en) | Object detection using video input combined with tilt angle information | |
US8384770B2 (en) | Image display system, image display apparatus, and image display method | |
JP5622609B2 (en) | Image display system, image display apparatus, and image display method | |
CN104010706B (en) | The direction input of video-game | |
US20190026950A1 (en) | Program executed on a computer for providing virtual space, method and information processing apparatus for executing the program | |
US20080001951A1 (en) | System and method for providing affective characteristics to computer generated avatar during gameplay | |
JP6514376B1 (en) | Game program, method, and information processing apparatus | |
CN117085322B (en) | Interactive observation method, device, equipment and medium based on virtual scene | |
CN106873760A (en) | Portable virtual reality system | |
JP2019106220A (en) | Program executed by computer to provide virtual space via head mount device, method, and information processing device | |
JP6495399B2 (en) | Program and method executed by computer to provide virtual space, and information processing apparatus for executing the program | |
CN112115398A (en) | Virtual space providing system, virtual space providing method, and program | |
CN112717409B (en) | Virtual vehicle control method, device, computer equipment and storage medium | |
JP2018092635A (en) | Information processing method, device, and program for implementing that information processing method on computer | |
JP7356827B2 (en) | Program, information processing method, and information processing device | |
JP7016438B1 (en) | Information processing systems, information processing methods and computer programs | |
JP7317322B2 (en) | Information processing system, information processing method and computer program | |
JPH1165814A (en) | Interactive system and image display method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: SONY COMPUTER ENTERTAINMENT EUROPE LIMITED, UNITED KINGDOM
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ZALEWSKI, GARY;GILLO, TOMAS;GOODWIN, MITCHELL;AND OTHERS;SIGNING DATES FROM 20070516 TO 20070523;REEL/FRAME:019343/0833
Owner name: SONY COMPUTER ENTERTAINMENT AMERICA INC., CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ZALEWSKI, GARY;GILLO, TOMAS;GOODWIN, MITCHELL;AND OTHERS;SIGNING DATES FROM 20070516 TO 20070523;REEL/FRAME:019343/0833
AS | Assignment |
Owner name: SONY COMPUTER ENTERTAINMENT AMERICA LLC, CALIFORNIA
Free format text: MERGER;ASSIGNOR:SONY COMPUTER ENTERTAINMENT AMERICA INC.;REEL/FRAME:025373/0698
Effective date: 20100401
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION
AS | Assignment |
Owner name: SONY INTERACTIVE ENTERTAINMENT AMERICA LLC, CALIFORNIA
Free format text: CHANGE OF NAME;ASSIGNOR:SONY COMPUTER ENTERTAINMENT AMERICA LLC;REEL/FRAME:038626/0637
Effective date: 20160331