Virtual Character Or Avatar (e.g., Animated Person) Patents (Class 715/706)
  • Patent number: 10002452
    Abstract: An image editing device is configured to automatically apply special effects to a digital image. In the image editing device, a digital image is obtained, and a selection is retrieved from a user, where the user selection specifies at least one criterion. At least one attribute of the digital image is analyzed, and a determination is made as to whether the at least one attribute coincides with a target attribute associated with the at least one criterion. Responsive to the at least one attribute coinciding with the target attribute, a special effect is obtained from a data store, and the obtained special effect is applied to the digital image.
    Type: Grant
    Filed: June 18, 2015
    Date of Patent: June 19, 2018
    Assignee: CYBERLINK CORP.
    Inventors: Chieh-Chung Wu, Chin-Yu Hsu
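    The abstract above describes a simple conditional pipeline: analyze image attributes, compare them against a target tied to a user-selected criterion, and apply a stored effect on a match. A minimal Python sketch of that flow, using illustrative attribute and effect names that are not taken from the patent:

```python
# Hypothetical sketch of criterion-driven special-effect application; the
# attribute names, criteria, and effects below are illustrative only.
def auto_apply_effects(image, attributes, user_criteria, effect_store):
    applied = []
    for criterion, target in user_criteria.items():
        if attributes.get(criterion) == target:       # attribute coincides with the target
            applied.append(effect_store[criterion])   # special effect obtained from the data store
    return image, applied                             # applied effects would be rendered onto the image

attributes = {"scene": "sunset", "faces": 2}          # result of analyzing the digital image
print(auto_apply_effects("IMG_0001.jpg", attributes,
                         user_criteria={"scene": "sunset"},
                         effect_store={"scene": "warm_glow"}))
```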
  • Patent number: 9972116
    Abstract: An information processing apparatus includes a bio-information obtaining unit configured to obtain bio-information of a subject; a kinetic-information obtaining unit configured to obtain kinetic information of the subject; and a control unit configured to determine an expression or movement of an avatar on the basis of the bio-information obtained by the bio-information obtaining unit and the kinetic information obtained by the kinetic-information obtaining unit and to perform a control operation so that the avatar with the determined expression or movement is displayed.
    Type: Grant
    Filed: December 23, 2016
    Date of Patent: May 15, 2018
    Assignee: Sony Corporation
    Inventors: Akane Sano, Masamichi Asukai, Taiji Ito, Yoichiro Sako
  • Patent number: 9965553
    Abstract: A method builds a personality of a user agent that is based on verbal and nonverbal communication preferences of a person. The user agent executes a search request from a user and displays results to the search request based on the personality of the user agent as opposed to a personality of the user.
    Type: Grant
    Filed: May 29, 2013
    Date of Patent: May 8, 2018
    Inventor: Philip Scott Lyren
  • Patent number: 9956479
    Abstract: An information processing apparatus capable of near field wireless communication with an information storage medium includes a reading/writing module which reads and/or writes data from and/or into first and second information storage media by establishing near field wireless communication with the first and second information storage media and a data processing module which processes data read as a result of reading of data from the first and second information storage media by a first application program executed by the information processing apparatus. The data processing module performs first processing affecting progress of a game based on the data read from the first information storage medium and performs second processing not affecting progress of the game based on the data read from the second information storage medium.
    Type: Grant
    Filed: September 2, 2015
    Date of Patent: May 1, 2018
    Assignee: NINTENDO CO., LTD.
    Inventor: Aya Kyogoku
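    The core distinction in the abstract above is routing: data read from the first storage medium affects game progress, while data from the second medium triggers only non-progress processing. A hedged Python sketch of that routing, with toy tag formats invented for illustration:

```python
# Hypothetical sketch: data read from a "first" medium drives game progress,
# while data from a "second" medium only triggers cosmetic processing.
def process_tag(game_state, tag):
    if tag["kind"] == "first":
        game_state["progress"] += tag["data"].get("progress_points", 0)      # affects progress
    else:
        game_state.setdefault("cosmetics", []).append(tag["data"]["item"])   # no progress change
    return game_state

state = {"progress": 0}
state = process_tag(state, {"kind": "first", "data": {"progress_points": 10}})
state = process_tag(state, {"kind": "second", "data": {"item": "hat"}})
print(state)  # {'progress': 10, 'cosmetics': ['hat']}
```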
  • Patent number: 9955006
    Abstract: The mobile phone comprises a voice communication implementer, an audio playback implementer, and a mobile phone updating implementer that updates a mobile phone battery controller, a mobile phone camera unit controller, a mobile phone microphone controller, a mobile phone speaker controller, or a mobile phone vibrator controller.
    Type: Grant
    Filed: June 5, 2017
    Date of Patent: April 24, 2018
    Inventor: Iwao Fujisaki
  • Patent number: 9940750
    Abstract: Provided herein are methods and systems for role negotiation with multiple sources. A method for role negotiation can comprise rendering a common field of interest that reflects a presence of a plurality of elements, wherein at least one of the elements is a remote element located remotely from another of the elements. A plurality of role designations can be received, each role designation associated with one of a plurality of devices, wherein at least one of the plurality of devices is a remote device located remotely from another of the plurality of devices. The common field of interest can be updated based upon the plurality of role designations, wherein each of the plurality of role designations defines an interactive functionality associated with the respective device of the plurality of devices.
    Type: Grant
    Filed: June 27, 2013
    Date of Patent: April 10, 2018
    Assignee: Help Lighting, Inc.
    Inventors: Marcus W. Dillavou, Matthew Benton May
  • Patent number: 9934253
    Abstract: An apparatus and method for displaying an image in a mobile terminal are provided. One or more images are selected from among a plurality of pre-stored images based on phone-book information about a party other than the user of the mobile terminal and on information about an event currently being executed in the mobile terminal, and the selected images are preferentially displayed.
    Type: Grant
    Filed: June 6, 2014
    Date of Patent: April 3, 2018
    Assignee: Samsung Electronics Co., Ltd
    Inventors: Sae-Mee Yim, Doo-Suk Kang, Geon-Soo Kim, Chang-Ho Lee, Bo-Kun Choi
  • Patent number: 9881504
    Abstract: In one embodiment, an aerospace system is provided. The aerospace system comprises at least one display unit configured to display flight data and a memory configured to store one or more flight plan associations. Each flight plan association is an association between a data link message and a respective waypoint in a flight plan. The aerospace system also comprises a processing unit configured to determine when each respective waypoint in the flight plan is reached based on a comparison of current location data to the flight plan. When each respective waypoint is reached, the processing unit is configured to identify any data link messages associated with the respective waypoint based on the flight plan associations and to direct the at least one display unit to display a respective notification for each identified data link message associated with the respective waypoint.
    Type: Grant
    Filed: July 17, 2014
    Date of Patent: January 30, 2018
    Assignee: Honeywell International Inc.
    Inventors: Maria John Paul Dominic, Leonard Pereira, Siva Kommuri, Anil Kumar Pendyala, Rakesh Kumar, Xiaozhong He, Thomas D. Judd, David Pepitone
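    The abstract above amounts to a lookup keyed on waypoint arrival: when the current position reaches a flight plan waypoint, any data link messages associated with that waypoint are surfaced as notifications. A minimal Python sketch, assuming a simple coordinate-tolerance test for arrival (the abstract does not specify how arrival is detected):

```python
# Hypothetical sketch: when the aircraft reaches a waypoint, surface any
# data link messages that were associated with that waypoint.
def check_waypoints(flight_plan, associations, current_position, reached, tolerance=0.01):
    notifications = []
    for waypoint, (lat, lon) in flight_plan.items():
        if waypoint in reached:
            continue
        if abs(lat - current_position[0]) < tolerance and abs(lon - current_position[1]) < tolerance:
            reached.add(waypoint)                        # waypoint is now reached
            notifications += [f"Message for {waypoint}: {m}" for m in associations.get(waypoint, [])]
    return notifications

plan = {"WPT1": (48.0, 11.0)}                            # illustrative flight plan
assoc = {"WPT1": ["DESCEND FL240"]}                      # flight plan associations
print(check_waypoints(plan, assoc, (48.001, 11.001), set()))
```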
  • Patent number: 9880704
    Abstract: Exemplary methods, apparatuses, and systems receive a reply or forward command for a selected email message in an email user interface, determine that the email user interface is in full screen mode and that the selected email message is at least partially outside of a visible area of the email user interface, and display a composition window in response to the command, wherein the displaying includes a sequence of images to create the illusion of the composition window sliding up from the bottom of the screen. In an example, a user interface includes a collapsible panel of mailboxes and folders, and a separate and independent favorites bar including a plurality of the mailboxes and folders. One of the mailboxes and folders in the favorites bar may be a hierarchical folder that includes a subfolder that is accessible via a drop down menu from the hierarchical folder in the favorites bar.
    Type: Grant
    Filed: February 13, 2014
    Date of Patent: January 30, 2018
    Assignee: Apple Inc.
    Inventors: Angela Guzman, Bas Ording, Brendan Langoulant, Daniel Shteremberg, Patrick Lee Coffman, Stephen Decker, Stephen O. Lemay
  • Patent number: 9882859
    Abstract: A message-browsing system includes an image storage unit storing user images to be displayed on a screen, each user image corresponding to respective ones of the multiple users, an input instruction unit providing a response in multiple response forms to a post, the response forms corresponding to multiple fixed response sentences that are different from one another, a motion storage unit storing pieces of motion information defining motions of the user images, each in association with respective ones of the multiple response forms, and a setting unit selecting a piece of motion information that corresponds to a response form of a response provided to a post, and setting a motion of a user image corresponding to a user who has contributed the post, the user image being displayed on a screen of another user who has provided the response.
    Type: Grant
    Filed: December 24, 2014
    Date of Patent: January 30, 2018
    Assignee: KONAMI DIGITAL ENTERTAINMENT CO., LTD.
    Inventors: Koki Kimura, Erika Nakamura, Takashi Suenaga, Takashi Hamano, Atsushi Takeda, Masaki Shimizu, Chiaki Nakanishi
  • Patent number: 9866504
    Abstract: A method for identifying at least one participant involved in an electronic communication who is in need of technical assistance is provided. The method may include monitoring the electronic communication according to a plurality of predetermined conditions. The method may also include determining if the plurality of predetermined conditions is satisfied. The method may further include identifying the at least one participant who is in need of technical assistance based on the plurality of predetermined conditions being satisfied. The method may also include flagging the identified at least one participant for at least one follow-up action.
    Type: Grant
    Filed: April 20, 2015
    Date of Patent: January 9, 2018
    Assignee: International Business Machines Corporation
    Inventors: Nnaemeka I. Emejulu, Ye Liu, Mario A. Maldari
  • Patent number: 9833698
    Abstract: Techniques for providing an immersive storytelling experience using a plurality of storytelling devices. Each of the storytelling devices may be configured to perform one or more actions based on a current context of a story and in response to a stimulus event. The actions include at least one of an auditory and a visual effect. Embodiments also provide a controller device configured to manage a playback of the story, by adjusting the stimulus event and the effect of each of the plurality of storytelling devices, based on the current context of the story, thereby creating an interactive and immersive storytelling experience.
    Type: Grant
    Filed: September 13, 2013
    Date of Patent: December 5, 2017
    Assignee: Disney Enterprises, Inc.
    Inventors: Eric Charles Haseltine, Joseph O'Brien Garlington, Theodore Wei-Yun Leung, Eric Wilson Muhlheim, Gabriel Sebastian Schlumberger
  • Patent number: 9836984
    Abstract: Embodiments provide techniques for dynamically creating a story for playback using a plurality of storytelling devices. Embodiments identify a plurality of storytelling devices available to participate in a storytelling experience. User input associated with the storytelling experience is received. Embodiments further include retrieving a story template based at least in part on the identified plurality of storytelling devices. Additionally, embodiments dynamically create a first story by mapping actions from the retrieved story template to storytelling devices in the plurality of storytelling devices, based at least in part on the received user input, such that the plurality of storytelling devices will perform a respective one or more actions during playback of the first story based on the mapped actions from the retrieved story template.
    Type: Grant
    Filed: September 15, 2014
    Date of Patent: December 5, 2017
    Assignee: Disney Enterprises, Inc.
    Inventors: Eric C. Haseltine, Gary K.-W. Lau, Theodore W.-Y. Leung, Jason E. Lewis, Guy A. Molinari, Deva D. Visamsetty, William D. Watts
  • Patent number: 9830184
    Abstract: Systems and methods described herein facilitate determining desktop readiness using interactive measures. A host is in communication with a server and the host includes a virtual desktop and a virtual desktop agent. The virtual desktop agent is configured to perform one or more injecting events via one or more monitoring agents, wherein each of the injecting events is a simulated input device event. The desktop agent is further configured to receive, via a display module, a response to the injecting event(s), wherein the response is a display update causing pixel color values for the display module to alter. The desktop agent is also configured to identify, via the monitoring agent(s), whether the response to the injecting event(s) is an expected response. The desktop agent is also configured to determine, via the monitoring agent(s), a readiness of the virtual desktop based on the expected response.
    Type: Grant
    Filed: March 8, 2016
    Date of Patent: November 28, 2017
    Assignee: VMware, Inc.
    Inventors: Banit Agrawal, Lawrence Andrew Spracklen, Rishi Bidarkar
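    The readiness check described above is essentially probe-and-observe: inject a simulated input event, then watch for the expected display (pixel) change. A small Python sketch of that loop, with stand-in capture and injection callables since the real monitoring agents are not specified here:

```python
# Hypothetical sketch: inject a simulated input event, then decide readiness by
# checking whether the display pixels changed in the expected way.
import time

def probe_readiness(inject_event, capture_pixels, expected_change, timeout_s=5.0, poll_s=0.5):
    before = capture_pixels()
    inject_event()                                  # simulated keyboard/mouse event
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        after = capture_pixels()
        if expected_change(before, after):          # e.g. a screen region changed colour
            return True                             # desktop responded as expected -> ready
        time.sleep(poll_s)
    return False

# Toy stand-ins so the sketch runs without a real virtual desktop.
frame = [0]
print(probe_readiness(lambda: frame.__setitem__(0, 255),
                      lambda: list(frame),
                      lambda b, a: b != a))
```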
  • Patent number: 9818024
    Abstract: A face is detected and identified within an acquired digital image. One or more features of the face is/are extracted from the digital image, including two independent eyes or subsets of features of each of the two eyes, or lips or partial lips or one or more other mouth features and one or both eyes, or both. A model including multiple shape parameters is applied to the two independent eyes or subsets of features of each of the two eyes, and/or to the lips or partial lips or one or more other mouth features and one or both eyes. One or more similarities between the one or more features of the face and a library of reference feature sets is/are determined. A probable facial expression is identified based on the determining of the one or more similarities.
    Type: Grant
    Filed: April 7, 2015
    Date of Patent: November 14, 2017
    Assignee: FotoNation Limited
    Inventors: Ioana Bacivarov, Peter Corcoran
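    The final step of the abstract above is a nearest-match search over a library of reference feature sets. A minimal Python sketch using Euclidean distance over illustrative shape parameters (the actual model and similarity measure are not given in the abstract):

```python
# Hypothetical sketch: score extracted shape parameters against a library of
# reference feature sets and report the most similar labelled expression.
import math

def classify_expression(shape_params, reference_library):
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # Probable facial expression = label of the closest reference feature set.
    return min(reference_library, key=lambda label: distance(shape_params, reference_library[label]))

# Illustrative 3-parameter vectors (eye openness, mouth curve, brow raise).
library = {"neutral": [0.5, 0.0, 0.0], "smile": [0.5, 0.8, 0.1], "surprise": [0.9, 0.2, 0.9]}
print(classify_expression([0.55, 0.75, 0.05], library))  # -> "smile"
```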
  • Patent number: 9805493
    Abstract: One or more social interactive goals for an automated entity such as an avatar may be determined during a social interaction between the automated entity and a selected entity such as a human. Identity attributes of identity images from an identity model of the automated entity may be used to determine a set of behavioral actions the automated entity is to take for the determined goals. Paralanguage elements expressed for the automated entity via a user interface may be altered based on the determined set of behavioral actions. The automated entity may refer to a computer implemented automaton that simulates a human in the user interface of an interactive computing environment. By way of example, an avatar cybernetic goal seeking behavior may be implemented in accordance with an identity theory model.
    Type: Grant
    Filed: August 25, 2015
    Date of Patent: October 31, 2017
    Assignee: Lockheed Martin Corporation
    Inventors: James H. Crutchfield, Jr., Hien Q. Pham, Reginald H. Price, Steven J. Tourville
  • Patent number: 9774655
    Abstract: A server includes a first storage module for storing possessed objects of a first user and a second user, a communication module for receiving from a device of the first user a request for transfer of an object from the first user to the second user, a second storage module for storing an object transfer relationship between the first user and the second user in response to the request for transfer, and a benefit granting module for granting a predetermined benefit to the second user if a condition for granting a benefit in relation to an object transfer relationship of the second user with other users is satisfied when an object is transferred in response to the request for transfer.
    Type: Grant
    Filed: September 17, 2013
    Date of Patent: September 26, 2017
    Assignee: GREE, Inc.
    Inventor: Masaru Takeuchi
  • Patent number: 9740677
    Abstract: Provided is a method of recommending a sticker through a dialog act analysis. The method includes: by a server, performing a surface analysis on the last utterance between the first user terminal and the second user terminal; performing a dialog act analysis on the last utterance using a result of the surface analysis; extracting a dialog context factor including a surface analysis result and a dialog act analysis result on a certain number of continuous utterances including the last utterance between the first user terminal and the second user terminal; selecting a sticker to be recommended to the first user using the dialog context factor; and providing the selected sticker for the first user terminal.
    Type: Grant
    Filed: July 12, 2015
    Date of Patent: August 22, 2017
    Assignee: NCsoft Corporation
    Inventors: Taek Jin Kim, Jay June Lee, Jungsun Jang, Sehee Chung, Kyeong Jong Lee, Yeonsoo Lee
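    The recommendation pipeline above runs a surface and dialog act analysis on recent utterances and maps the resulting dialog context factor to a sticker. A toy Python sketch that substitutes keyword rules for the real dialog act analysis, purely to illustrate the data flow:

```python
# Hypothetical sketch: assign a crude dialog-act label to the last utterance,
# build a small context factor, and look up a recommended sticker.
def recommend_sticker(utterances, act_rules, sticker_table):
    last = utterances[-1].lower()
    act = next((label for label, keywords in act_rules.items()
                if any(k in last for k in keywords)), "statement")
    context = {"act": act, "turns": len(utterances)}          # dialog context factor
    return sticker_table.get(act, "thumbs_up"), context

rules = {"question": ["?", "why", "how"], "thanks": ["thanks", "thank you"]}
stickers = {"question": "thinking_cat", "thanks": "bowing_bear"}
print(recommend_sticker(["Got the files?", "Yes, thanks!"], rules, stickers))
```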
  • Patent number: 9679581
    Abstract: The present invention relates to implementing a system and method for enhancing the recording of a sign-language video by automatically associating the prompter text with the segment(s) of the sign-language video recording. The segment(s) of the sign-language video recording is automatically determined based on the phrases identified within the video recording. Further, the system and method implements a plurality of features to manage the sign-language video and facilitates a means to actively collaborate, upload, and store the sign-language video within the network.
    Type: Grant
    Filed: September 29, 2015
    Date of Patent: June 13, 2017
    Inventor: Trausti Thor Kristjansson
  • Patent number: 9646037
    Abstract: A system for facilitating content creation includes collecting profiles, which are analyzed to build a profile parameter index. A dummy profile is created based on the profile parameter index. The dummy profile is a fictitious character having profile parameters based on input from a user of the profile parameter index. Control of the dummy profile remains with the user.
    Type: Grant
    Filed: December 14, 2015
    Date of Patent: May 9, 2017
    Assignee: SAP SE
    Inventors: Kaushik Nath, Suresh Venkatasubramaniyan
  • Patent number: 9626152
    Abstract: Provided are a method and a computer program of recommending a responsive sticker. The method includes: generating dialog situation information by analyzing pairs of the last utterance of a second user and previous utterances of a first user as an utterance of the second user terminal is input into the server; determining a similar situation from a dialog situation information database that is already collected and constructed, using the generated dialog situation information; determining whether it is a turn for the first user terminal to input a response; selecting a responsive sticker candidate group from the determined similar situation when it is a turn for the first user terminal to input the response; and providing information on at least one responsive sticker of the responsive sticker candidate group for the first user terminal.
    Type: Grant
    Filed: July 14, 2015
    Date of Patent: April 18, 2017
    Assignee: NCSOFT CORPORATION
    Inventors: Taek Jin Kim, Jay June Lee, Jungsun Jang, Sehee Chung, Kyeong Jong Lee, Yeonsoo Lee
  • Patent number: 9584455
    Abstract: A method provides an expression picture in an instant communication conversation window; acquires information of a user operation activity from a sending user with respect to the expression picture; searches a first expression database based on the expression picture and the acquired information of the user operation activity; generates a first response message corresponding to the expression picture under the user operation activity; and sends found information related to the expression picture and the acquired information of the user operation activity to a receiving client corresponding to a receiving user, to facilitate the receiving client generating a second response message corresponding to the expression picture under the user operation activity.
    Type: Grant
    Filed: January 13, 2015
    Date of Patent: February 28, 2017
    Assignee: Alibaba Group Holding Limited
    Inventor: Yuanlong Zheng
  • Patent number: 9529423
    Abstract: A system and method to modify audio components in an online environment based on avatar characteristics and/or inventory items. The system includes a component to allow one or more audio modification algorithms to be selected. The system also includes a component to identify one or more avatar characteristics and a component to identify one or more avatar inventory items. The system further comprises a component to modify an audio communication in a virtual universe based on at least one of the one or more audio modification algorithms, the one or more avatar characteristics, and the one or more avatar inventory items.
    Type: Grant
    Filed: December 10, 2008
    Date of Patent: December 27, 2016
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jeffrey D. Amsterdam, Rick A. Hamilton, II, Brian M. O'Connell, Keith R. Walker
  • Patent number: 9466142
    Abstract: Avatars are animated using predetermined avatar images that are selected based on facial features of a user extracted from video of the user. A user's facial features are tracked in a live video, facial feature parameters are determined from the tracked features, and avatar images are selected based on the facial feature parameters. The selected images are then displayed or sent to another device for display. Selecting and displaying different avatar images as a user's facial movements change animates the avatar. An avatar image can be selected from a series of avatar images representing a particular facial movement, such as blinking. An avatar image can also be generated from multiple avatar feature images selected from multiple avatar feature image series associated with different regions of a user's face (eyes, mouth, nose, eyebrows), which allows different regions of the avatar to be animated independently.
    Type: Grant
    Filed: December 17, 2012
    Date of Patent: October 11, 2016
    Assignee: Intel Corporation
    Inventors: Yangzhou Du, Wenlong Li, Xiaofeng Tong, Wei Hu, Yimin Zhang
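    The animation scheme above selects a predetermined avatar image per facial region from tracked feature parameters, so each region animates independently. A minimal Python sketch, assuming normalized feature parameters and illustrative image series:

```python
# Hypothetical sketch: pick a predetermined avatar image per facial region from
# tracked feature parameters, so regions animate independently.
def select_avatar_frames(feature_params, image_series):
    frames = {}
    for region, value in feature_params.items():             # e.g. eye openness in [0, 1]
        series = image_series[region]                         # ordered images for that region
        index = min(int(value * len(series)), len(series) - 1)
        frames[region] = series[index]
    return frames

series = {"eyes": ["eyes_closed.png", "eyes_half.png", "eyes_open.png"],
          "mouth": ["mouth_closed.png", "mouth_open.png"]}
print(select_avatar_frames({"eyes": 0.9, "mouth": 0.2}, series))
# {'eyes': 'eyes_open.png', 'mouth': 'mouth_closed.png'}
```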
  • Patent number: 9405503
    Abstract: According to one embodiment, a plurality of spatial publishing objects (SPOs) is provided in a multidimensional space in a user interface. Each of the plurality of spatial publishing objects is associated with digital media data from at least one digital media source. The user interface has a field for the digital media data. A user is provided via the user interface with a user presence that is optionally capable of being represented in the user interface relative to the plurality of spatial publishing objects. The digital media data associated with the at least one spatial publishing object are combined to generate a media output corresponding to the combined digital media data.
    Type: Grant
    Filed: September 7, 2012
    Date of Patent: August 2, 2016
    Assignee: AQ Media, Inc.
    Inventor: Jan Peter Roos
  • Patent number: 9401095
    Abstract: Provided is an information processing system including a position management unit configured to manage positions of avatars operated by users in a virtual space, a schedule management unit configured to manage a schedule of communication of the users, and a control unit configured to cause communication between terminals used by the users who are scheduled to have the communication to be established when the avatars of the users who are scheduled to have the communication gather in a predefined place in the virtual space and to cause at least one of an image and a voice to be transmitted and received.
    Type: Grant
    Filed: October 5, 2012
    Date of Patent: July 26, 2016
    Assignee: Sony Corporation
    Inventors: Ichiro Kubota, Hiromitsu Aikawa
  • Patent number: 9384579
    Abstract: In embodiments of stop-motion video creation from full-motion video, a video of an animation sequence is filmed with a video camera that captures an animation object and manipulations to interact with the animation object. Motion frames of the video are determined, where the motion frames depict motion as the manipulations to interact with the animation object. The motion frames may also depict other motion, other than the manipulations to interact with the animation object, where the other motion is also captured when the video is filmed. The motion frames that depict the motion in the video are discarded, leaving static frames that depict the animation object without any detectable motion. A frame sequence of the static frames can then be generated as a stop-motion video that depicts the animation object to appear moving or created without the manipulations.
    Type: Grant
    Filed: September 3, 2014
    Date of Patent: July 5, 2016
    Assignee: Adobe Systems Incorporated
    Inventors: Akshat Bhargava, Rinky Gupta
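    The conversion above keeps only frames without detectable motion. A hedged Python sketch using a mean absolute pixel difference against the last kept frame as the (assumed) motion test:

```python
# Hypothetical sketch: discard frames whose difference from the last kept frame
# exceeds a threshold (detected motion), leaving only static frames.
def to_stop_motion(frames, threshold=10):
    def diff(a, b):
        return sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    kept = [frames[0]]
    for cur in frames[1:]:
        if diff(kept[-1], cur) <= threshold:   # no detectable motion relative to last kept frame
            kept.append(cur)
    return kept

# Toy 4-pixel "frames"; the large jump simulates a hand manipulating the object.
video = [[10, 10, 10, 10], [12, 11, 10, 10], [200, 180, 90, 40], [13, 11, 10, 10]]
print(len(to_stop_motion(video)))  # 3 of 4 frames kept; the motion frame is dropped
```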
  • Patent number: 9386052
    Abstract: A method, system, and/or apparatus for automatically tracking a mobile user using the user's mobile device. This invention is particularly useful in the field of social media, such as for detecting and tracking the location and activity of a user and her community. The method or implementing software application uses or relies upon location information available on the mobile device from any source, such as cell phone usage and/or other device applications. The social media system automatically determines a location type and/or user activity from context information. The context information can include current and past location and user and/or community information, time-dependent information, and third party information. The location activities can be presented to the community using pictograms selected to represent user activities.
    Type: Grant
    Filed: August 8, 2014
    Date of Patent: July 5, 2016
    Assignee: PUSHD, INC.
    Inventors: Ian Miles Ownbey, Ben Cherry, Abdur Chowdhury, Ophir Frieder, Tyler Howarth, Eric Jensen, Andrew Lorek, Matt Sanford
  • Patent number: 9357025
    Abstract: A persistent virtual area is maintained that supports establishment of respective presences of communicants operating respective network nodes connected to the virtual area, even after all network nodes have disconnected from the virtual area. A presence in the virtual area is established for a user of a Public Switched Telephone Network (PSTN) terminal device, and data associated with the virtual area is transmitted to the PSTN terminal device.
    Type: Grant
    Filed: June 21, 2011
    Date of Patent: May 31, 2016
    Assignee: Social Communications Company
    Inventors: Eric Cozzi, David Van Wie, Paul J. Brody, Matthew Leacock
  • Patent number: 9313045
    Abstract: The present invention relates to a system and method for providing an avatar with variable appearance. When a user is connected through a network, information on the avatar provided to the user is collected. The avatar has at least two exposed units. The user's location on the network is determined, at least one exposed unit is selected from among the plurality of exposed units according to the determined location, and an avatar displaying the selected exposed unit is generated for the user. The exposed units configuring the avatar are respectively modified according to the user's selection. Therefore, the avatar can be easily displayed anywhere on the Internet by controlling the exposed units of the avatar depending on the user's location on the network. Further, the user can combine and modify the avatar in various ways by respectively controlling the exposed units.
    Type: Grant
    Filed: April 7, 2006
    Date of Patent: April 12, 2016
    Assignee: NHN Corporation
    Inventors: Young Joo Min, Young Hoon Jung
  • Patent number: 9301069
    Abstract: Systems, methods, and computer-readable storage media for generating an immersive three-dimensional sound space for searching audio. The system generates a three-dimensional sound space having a plurality of sound sources playing at a same time, wherein each of the plurality of sound sources is assigned a respective location in the three-dimensional sound space relative to one another, and wherein a user is assigned a current location in the three-dimensional sound space relative to each respective location. Next, the system receives input from the user to navigate to a new location in the three-dimensional sound space. Based on the input, the system then changes each respective location of the plurality of sound sources relative to the new location in the three-dimensional sound space.
    Type: Grant
    Filed: December 27, 2012
    Date of Patent: March 29, 2016
    Assignee: Avaya Inc.
    Inventors: Doree Duncan Seligmann, Ajita John, Michael J. Sammon
  • Patent number: 9268454
    Abstract: Business owners in a virtual universe may want to create a data source that transmits a data feed when certain parameters are met. Functionality can be implemented within a virtual universe to create a data feed when a trigger event occurs. The data feed may include data about the trigger event itself and/or about an avatar that caused the trigger. Triggered data feeds can be used to derive statistics, monitor use of objects and space and bill for such use, etc. In such cases, the user may be notified, such as via email, of the new data feed, as well as instructions on how to subscribe to it in their RSS reader or other client. Alternatively, users may be given individual feeds wherein additional feeds are distributed as separate categories of the user's feed as opposed to creating a separate feed.
    Type: Grant
    Filed: May 14, 2008
    Date of Patent: February 23, 2016
    Assignee: International Business Machines Corporation
    Inventors: Rick A. Hamilton, II, James R. Kozloski, Brian M. O'Connell, Clifford A. Pickover, Keith R. Walker
  • Patent number: 9245368
    Abstract: A device includes one or more processors, and memory storing programs. The programs include a respective application and an application service module. The application service module includes instructions for, in response to a triggering event from the respective application, initializing an animation object with one or more respective initialization values corresponding to the triggering event. The animation object includes an instance of a predefined animation software class. At each of a series of successive times, the device updates the animation object so as to produce a respective animation value in accordance with a predefined animation function based on a primary function of an initial velocity and a deceleration rate and one or more secondary functions. The device updates a state of one or more user interface objects in accordance with the respective animation value, and renders on a display a user interface in accordance with the updated state.
    Type: Grant
    Filed: September 28, 2011
    Date of Patent: January 26, 2016
    Assignee: Apple Inc.
    Inventor: Joshua H. Shaffer
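    The animation object above produces values at successive times from a primary function of an initial velocity and a deceleration rate. One common closed form for such a decaying-velocity animation is a geometric series; the sketch below uses that form as an assumption, not as the patent's actual function:

```python
# Hypothetical sketch: a decaying-velocity animation value, evaluated at
# successive frame times from an initial velocity and a deceleration rate.
def animation_value(t, initial_value=0.0, initial_velocity=100.0, deceleration=0.998):
    # v(t) = v0 * d**t ; value(t) = value0 + v0 * (1 - d**t) / (1 - d)
    return initial_value + initial_velocity * (1 - deceleration ** t) / (1 - deceleration)

for frame in (0, 30, 60, 600):
    # Each value would be used to update UI object state before rendering.
    print(frame, round(animation_value(frame), 1))
```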
  • Patent number: 9239728
    Abstract: In an input/output virtualization-enabled computing environment, a device, method, and system for securely handling virtual function driver communications with a physical function driver of a computing device includes maintaining communication profiles for virtual function drivers and applying the communication profiles to communications from the virtual function drivers to the physical function driver, to determine whether the communications present a security and/or performance condition. The device, method and system may disable a virtual function driver if a security and/or performance condition is detected.
    Type: Grant
    Filed: June 17, 2014
    Date of Patent: January 19, 2016
    Assignee: Intel Corporation
    Inventors: Nrupal R. Jani, Shannon L. Nelson, Gregory D. Cummings
  • Patent number: 9233304
    Abstract: Technologies are generally described for load balancing for a game in a cloud computing environment hosting a game service. In some examples, a method may include analyzing a status of a player character located in a virtual space, the virtual space being configured to have a plurality of areas and the player character being located in a first area among the plurality of areas, calculating a probability of movement of the player character from the first area to a second area among the plurality of areas based at least in part on the analyzed status, and calculating an amount of cached data to copy from a second cache server corresponding to the second area to a first cache server corresponding to the first area based at least in part on the calculated probability.
    Type: Grant
    Filed: March 22, 2012
    Date of Patent: January 12, 2016
    Assignee: EMPIRE TECHNOLOGY DEVELOPMENT LLC
    Inventors: Shuichi Kurabayashi, Naofumi Yoshida, Kosuke Takano
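    The load-balancing idea above is to size the cache copy by the estimated probability that the player character moves from the first area to the second area. A toy Python sketch with an invented movement heuristic standing in for the patent's status analysis:

```python
# Hypothetical sketch: estimate the probability that a player moves from area 1
# to area 2, then size the cache copy from the second cache server proportionally.
def cache_copy_bytes(player_status, second_area_cache_bytes):
    # Toy heuristic only: a player heading toward the border at speed is more
    # likely to cross; the patent bases this on the analyzed player status.
    p_move = min(1.0, player_status["speed"] * player_status["border_proximity"])
    return p_move, int(p_move * second_area_cache_bytes)

status = {"speed": 0.6, "border_proximity": 0.9}   # normalized, illustrative values
print(cache_copy_bytes(status, 50_000_000))        # (0.54, 27000000)
```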
  • Patent number: 9220981
    Abstract: A computer-implemented method of controlling attribute expression for an avatar within a virtual environment can include defining a rule that determines expression of an attribute of a first avatar conditioned upon an attribute of at least one other avatar within a virtual environment and, responsive to the first avatar contacting a second avatar within the virtual environment, determining an attribute of the second avatar. The method can include determining whether to express the attribute of the first avatar according to the attribute of the second avatar as determined by the rule and outputting a state of the first avatar specifying each attribute selected for expression.
    Type: Grant
    Filed: December 30, 2008
    Date of Patent: December 29, 2015
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: John M. Lance, Josef Scherpa
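    The method above gates expression of each attribute of a first avatar on a rule evaluated against a contacted avatar's attributes. A minimal Python sketch with an illustrative rule:

```python
# Hypothetical sketch: decide whether a first avatar expresses each attribute
# based on a rule over a contacted (second) avatar's attributes.
def express_attributes(first_avatar, second_avatar, rules):
    expressed = {}
    for attribute, value in first_avatar.items():
        rule = rules.get(attribute, lambda other: True)    # default: always express
        if rule(second_avatar):
            expressed[attribute] = value
    return expressed                                       # state output for rendering

rules = {"guild_badge": lambda other: other.get("guild") == "red"}   # illustrative rule
me = {"guild_badge": "red_crest", "hat": "wizard"}
print(express_attributes(me, {"guild": "blue"}, rules))    # only the hat is expressed
```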
  • Patent number: 9214038
    Abstract: Methods, apparatuses and systems directed to efficiently circumventing the limitations of client side rendering of virtual worlds. In a particular implementation, a proposed system renders each client viewport remotely, removing the burden of rendering a 3D scene from the local client device. 3D viewports, rather than being rasterized on the local client, are instead generated on a remote render device which then transmits a visual representation of the viewport to the client device in a format (including, but not limited to a video stream) which the client can use to display the scene without requiring complex 3D rasterization. This process eliminates the need for the client to have any specialized 3D rendering software or hardware, or to install or download any persistent render assets on the local system. The hardware requirements for the client are therefore roughly equivalent to those needed to play a continuous video stream.
    Type: Grant
    Filed: August 25, 2014
    Date of Patent: December 15, 2015
    Inventor: Julian Michael Urbach
  • Patent number: 9192860
    Abstract: A method for providing a single user multiple presence implementation may include providing access for a user identified by a user account to a virtual environment hosted by a computer. The method may further include generating multiple avatars for the user account to concurrently coexist and be operative within the virtual environment. The method may further include controlling the multiple avatars at least partly in response to input from the user. The method may further include communicating virtual environment data regarding more than one of the multiple avatars to at least one client operated by the user. An apparatus for performing the method may include a processor coupled to a memory holding encoded instructions for performing operations of the method on a computer configured as a network entity.
    Type: Grant
    Filed: November 8, 2011
    Date of Patent: November 24, 2015
    Inventor: Gary S. Shuster
  • Patent number: 9195363
    Abstract: A system for perspective based tagging and visualization of avatars in a virtual world may include determining if another avatar has moved within a predetermined proximity range of a user's avatar in a virtual world. The system may also include allowing the user to tag the other avatar with information in response to the other avatar being within the predetermined proximity range of the user's avatar.
    Type: Grant
    Filed: May 22, 2013
    Date of Patent: November 24, 2015
    Assignee: International Business Machines Corporation
    Inventors: Andrew Bryan Smith, Brian Ronald Bokor, Daniel Edward House, William Bruce Nicol, II
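    The tagging behaviour above hinges on a proximity test between avatars. A minimal Python sketch, assuming 3D positions and a fixed predetermined range:

```python
# Hypothetical sketch: allow tagging only when another avatar is within a
# predetermined proximity range of the user's avatar.
import math

def try_tag(tags, my_pos, other_id, other_pos, note, max_range=10.0):
    distance = math.dist(my_pos, other_pos)
    if distance <= max_range:                 # other avatar moved within range
        tags.setdefault(other_id, []).append(note)
        return True
    return False

tags = {}
print(try_tag(tags, (0, 0, 0), "avatar42", (3, 4, 0), "met at the plaza"))  # True
print(tags)   # {'avatar42': ['met at the plaza']}
```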
  • Patent number: 9137318
    Abstract: A method of detecting events in a computer-implemented online community includes providing a computer-implemented event processor, providing the computer-implemented event processor with a description of at least one event to be detected, automatically analyzing messages of the online community with the computer-implemented event processor to detect the at least one event, and issuing a notification of a detected event. Also an apparatus for carrying out the method.
    Type: Grant
    Filed: January 16, 2009
    Date of Patent: September 15, 2015
    Assignee: Avaya Inc.
    Inventor: Jack L. Hong
  • Patent number: 9123157
    Abstract: A multi-instance, multi-user animation platform includes a plurality of modeled parallel dimensions in a computer memory. Each of the parallel dimensions may be an independent model of a physical, three-dimensional space having corresponding features such that the parallel dimensions are recognizable as counterparts to each other. Avatars are located within corresponding ones of the parallel dimensions so as to prevent over-population of any one of the parallel dimensions by avatars. Avatars are animated within different ones of the parallel dimensions using input from respective users to provide virtual-reality data. A common space is modeled in the computer memory configured in relation to the plurality of parallel instances so that an object located inside the common space is visible from viewpoints located inside each of the plurality of parallel instances. Remote clients may output an animated display of a corresponding one of the parallel dimensions and avatars therein.
    Type: Grant
    Filed: August 15, 2014
    Date of Patent: September 1, 2015
    Inventors: Brian Mark Shuster, Gary Stephen Shuster
  • Patent number: 9116827
    Abstract: A system, method, and computer program product to optimize Luby Transform (LT) codes to facilitate transmission of data over a communication network are disclosed. Demands from various sinks are received and a demand vector is calculated. Various sources are employed with LT codes to encode the data. A Generalized LT (GLT) code is generated for an objective function determined for a given demand vector, irrespective of the LT codes employed at the sources. Morphing rules are designed by optimizing a degree distribution of the data and mapping LT codes to the generalized LT codes. The GLT is optimized by using a linear transformation to obtain optimal morphing rules. The LT codes are retargeted by re-encoding LT encoded data to further obtain LT re-encoded data. The LT re-encoded data is then transmitted by a relay device to a plurality of sinks.
    Type: Grant
    Filed: August 30, 2013
    Date of Patent: August 25, 2015
    Assignee: Tata Consultancy Services Limited
    Inventors: Shirish Subhash Karande, Mariswamy Girish Chandra, Sachin P. Lodha
  • Patent number: 9111440
    Abstract: Provided is a refrigerator, which includes a display part and a control part. The display part displays information. The control part controls the display part. The display part includes a first display part for displaying information related to an additional function except for a cooling function, and a second display part for displaying temperature information related to the cooling function.
    Type: Grant
    Filed: January 6, 2012
    Date of Patent: August 18, 2015
    Assignee: LG ELECTRONICS INC.
    Inventors: Hyoungjun Park, Yanghwan Kim, Museung Kim, Jongmi Choi
  • Patent number: 9108109
    Abstract: A method is provided for implementing and controlling virtual environments, for example "virtual worlds", through which users can meet one another virtually and communicate through their respective terminals. For example, the method relates to the display, rendering and/or deletion of the representations of users acting in these virtual environments, at the different terminals in which these representations can be viewed. The method includes the following steps for a given user: determination of a destination zone in the virtual environment in which the representation of the user is to be rendered; determination of an appearance zone in the virtual environment; displaying of the representation in the appearance zone; and automatic movement of the representation from the appearance zone to the destination zone.
    Type: Grant
    Filed: December 11, 2008
    Date of Patent: August 18, 2015
    Assignee: ORANGE
    Inventors: Louis Pare, Yves Scotto D'Apollonia
  • Patent number: 9098167
    Abstract: A system and method for representing content available from a hosting user are provided. In general, content representations that are descriptive of content made accessible by the hosting user are presented in association with a content representation host representing the hosting user according to a layering scheme. The content representation host may be, for example, an avatar in a virtual environment, a custom webpage of the hosting user, an identifier of the hosting user in a peer-to-peer (P2P) network, an identifier of the hosting user in a mobile network, or the like. Based on the content representations, other users are enabled to quickly and easily determine whether content of interest is accessible from the hosting user.
    Type: Grant
    Filed: February 26, 2007
    Date of Patent: August 4, 2015
    Assignee: Qurio Holdings, Inc.
    Inventors: Alfredo C. Issa, Richard J. Walsh, Christopher M. Amidon
  • Patent number: 9092437
    Abstract: “Experience Streams” (ESs) are used by a “rich interactive narrative” (RIN) data model as basic building blocks that are combined in a variety of ways to enable or construct a large number of RIN scenarios for presenting interactive narratives to the user. In general various ES types contain all the information required to define and populate a particular RIN, as well as the information (in the form of a series of navigable states) that charts an animated and interactive course through each RIN. In other words, combinations of various ES provide a scripted path through a RIN environment, as well as various UI controls and/or toolbars that enable user interaction with the interactive narrative provided by each RIN. Example ESs include, but are not limited, content browser experience streams, zoomable media experience streams, relationship graph experience streams, player-controls/toolbar experience streams, etc.
    Type: Grant
    Filed: January 18, 2011
    Date of Patent: July 28, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Joseph M. Joy, Narendranath Datha, Eric J. Stollnitz, Aditya Sankar, Vinay Krishnaswamy, Sujith Radhakrishnan Warrier, Kanchen Rajanna, Tanuja Abhay Joshi
  • Patent number: 9092600
    Abstract: Embodiments are disclosed that relate to authenticating a user of a display device. For example, one disclosed embodiment includes displaying one or more virtual images on the display device, wherein the one or more virtual images include a set of augmented reality features. The method further includes identifying one or more movements of the user via data received from a sensor of the display device, and comparing the identified movements of the user to a predefined set of authentication information for the user that links user authentication to a predefined order of the augmented reality features. If the identified movements indicate that the user selected the augmented reality features in the predefined order, then the user is authenticated, and if the identified movements indicate that the user did not select the augmented reality features in the predefined order, then the user is not authenticated.
    Type: Grant
    Filed: November 5, 2012
    Date of Patent: July 28, 2015
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Mike Scavezze, Jason Scott, Jonathan Steed, Ian McIntyre, Aaron Krauss, Daniel McCulloch, Stephen Latta, Kevin Geisner, Brian Mount
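    Authentication above succeeds only if the user's movements select the augmented reality features in the enrolled, predefined order. A deliberately simple Python sketch of that comparison (gesture tracking and feature selection are abstracted away):

```python
# Hypothetical sketch: authenticate only if the user's tracked selections hit
# the augmented-reality features in the predefined order.
def authenticate(selected_features, enrolled_order):
    return selected_features == enrolled_order      # any deviation fails authentication

enrolled = ["floating_cube", "red_sphere", "door_glyph"]          # illustrative feature set
print(authenticate(["floating_cube", "red_sphere", "door_glyph"], enrolled))  # True
print(authenticate(["red_sphere", "floating_cube", "door_glyph"], enrolled))  # False
```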
  • Patent number: 9088425
    Abstract: Methods and systems for establishing a collaboration among individuals and an imaginary character in a virtual world are disclosed. An exemplary method may include establishing an internet protocol channel between a first individual and a program represented by the imaginary character. The exemplary method may also include establishing a non-internet protocol channel between a second individual and the program represented by the imaginary character.
    Type: Grant
    Filed: August 17, 2006
    Date of Patent: July 21, 2015
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Alok S. Raghunath Rao, Ashok K. Meena, Raju T. Ramakrishnan, Ramakrishnan R. Sankaranarayan
  • Patent number: 9088426
    Abstract: Embodiments generally relate to processing media streams during a multi-user video conference. In one embodiment, a method includes obtaining at least one frame from a media stream, and determining a plurality of coordinates within the at least one frame. The method also includes obtaining at least one media content item, obtaining one or more parameters from a remote user, and adding the at least one media content item to the at least one frame based on the plurality of coordinates and the one or more parameters.
    Type: Grant
    Filed: December 13, 2011
    Date of Patent: July 21, 2015
    Assignee: Google Inc.
    Inventors: Janahan Vivekanandan, Frank Petterson, Thor Carpenter, John David Salazar
  • Patent number: 9071808
    Abstract: A storage medium having an information processing program stored therein, an information processing apparatus, an information processing method, and an information processing system are provided, which enable highly entertaining display in accordance with a user who performs an operation input. A characteristic of the face of a user currently using a game apparatus 10 is detected, and data indicating the detected characteristic is compared to already stored data indicating characteristics of the faces of users. If there is data indicating a closest characteristic among the already stored data, that data is selected. When the data is selected, a motion of an object is set and controlled in accordance with the number of times the data has been selected.
    Type: Grant
    Filed: November 18, 2010
    Date of Patent: June 30, 2015
    Assignee: Nintendo Co., Ltd.
    Inventors: Yasuyuki Oyagi, Tatsuya Takadera