Motion Planning Or Control Patents (Class 345/474)
-
Patent number: 8130226
Abstract: A framework for performing graphics animation and compositing operations has a layer tree for interfacing with the application and a render tree for interfacing with a render engine. Layers in the layer tree can be content, windows, views, video, images, text, media, or any other type of object for a user interface of an application. The application commits changes to the state of the layers of the layer tree. The application does not need to include explicit code for animating the changes to the layers. Instead, an animation is determined for animating the change in state. In determining the animation, the framework can define a set of predetermined animations based on motion, visibility, and transition. The determined animation is explicitly applied to the affected layers in the render tree. A render engine renders from the render tree into a frame buffer for display on the processing device.
Type: Grant
Filed: May 31, 2007
Date of Patent: March 6, 2012
Assignee: Apple Inc.
Inventors: Ralph Brunner, John Harper, Peter N Graffagnino
-
Patent number: 8130225
Abstract: A computer-implemented method includes comparing one or more surface features to a motion model. The surface feature or surface features represent a portion of an object in an image. The method also includes identifying a representation of the object from the motion model, based upon the comparison.
Type: Grant
Filed: April 13, 2007
Date of Patent: March 6, 2012
Assignee: Lucasfilm Entertainment Company Ltd.
Inventors: Steve Sullivan, Francesco G. Callari
-
Patent number: 8125485
Abstract: Animating speech of an avatar representing a participant in a mobile communication including selecting one or more images; selecting a generic animation template; fitting the one or more images with the generic animation template; texture wrapping the one or more images over the generic animation template; and displaying the one or more images texture wrapped over the generic animation template. Receiving an audio speech signal; identifying a series of phonemes; and for each phoneme: identifying a new mouth position for the mouth of the generic animation template; altering the mouth position to the new mouth position; texture wrapping a portion of the one or more images corresponding to the altered mouth position; displaying the texture wrapped portion of the one or more images corresponding to the altered mouth position of the mouth of the generic animation template; and playing the portion of the audio speech signal represented by the phoneme.
Type: Grant
Filed: November 20, 2009
Date of Patent: February 28, 2012
Assignee: International Business Machines Corporation
Inventors: William A. Brown, Richard W. Muirhead, Francis X. Reddington, Martin A. Wolfe
-
Publication number: 20120044251
Abstract: Methods and devices enable rendering of graphic images at a minimum frame rate even when processing resource limitations and rendering processing may not support the minimum frame rate presentation. While graphics are being rendered, a processor of a computing device may monitor the achieved frame rate. If the frame rate falls below a minimum threshold, the processor may note a current speed or rate of movement of the image and begin rendering less computationally complex graphic items. Rendering of less computationally complex items continues until the processor notes that the speed of rendered items is less than the noted speed. At this point, normal graphical rendering may be recommenced. The aspects may be applied to more than one type of less computationally complex item or rendering format. The various aspects may be applied to a wide variety of animations and moving graphics, as well as scrolling text, webpages, etc.
Type: Application
Filed: August 20, 2010
Publication date: February 23, 2012
Inventors: John Liam Mark, Michael U. Schwartz, Sean S. Rogers
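The fallback loop described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the patented implementation; the class, threshold value, and method names are assumptions:

```python
# Minimal sketch of the frame-rate fallback: simplify rendering when the
# achieved frame rate drops, resume once rendered items slow down.
MIN_FPS = 30.0  # illustrative minimum frame-rate threshold


class Renderer:
    def __init__(self):
        self.simplified = False   # currently rendering less complex items?
        self.noted_speed = None   # image speed noted when fallback began

    def update(self, fps, item_speed):
        """Return True while simplified rendering is active.

        Entering fallback notes the current item speed; normal rendering
        recommences once items move slower than that noted speed.
        """
        if not self.simplified:
            if fps < MIN_FPS:
                self.simplified = True
                self.noted_speed = item_speed
        elif item_speed < self.noted_speed:
            self.simplified = False
        return self.simplified
```

In this sketch the hysteresis comes from comparing against the speed noted at the moment of the frame-rate drop, mirroring the abstract's "until the speed of rendered items is less than the noted speed."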
-
Patent number: 8122158
Abstract: A method, system, and a computer program product for improving IO (input/output) performance of host systems using external storage systems. An aspect of the present invention predicts policies to be applied in the host system based on historical information. Several characteristics of a set of IO requests sent by a host system are collected and analyzed to determine a usage/IO pattern. A suitable policy is then determined based on the pattern and applied on the host system when a similar pattern of IO requests is sought to be sent again, thereby improving the IO performance of the host system.
Type: Grant
Filed: September 25, 2009
Date of Patent: February 21, 2012
Assignee: EMC Corporation
Inventors: Santhosh Venkatesh Kudva, Ajith Balakrishnan
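The pattern-to-policy idea can be illustrated with a toy classifier. The classification rule, threshold, and policy names below are invented for illustration; the patent does not specify them:

```python
def classify_pattern(requests, threshold=0.7):
    """Classify a window of IO request block addresses as 'sequential'
    or 'random' by the fraction of consecutive addresses (illustrative)."""
    if len(requests) < 2:
        return "random"
    seq = sum(1 for a, b in zip(requests, requests[1:]) if b == a + 1)
    return "sequential" if seq / (len(requests) - 1) >= threshold else "random"


# Hypothetical policy table: detected pattern -> policy applied on the host.
POLICY = {"sequential": "aggressive-prefetch", "random": "no-prefetch"}


def choose_policy(requests):
    return POLICY[classify_pattern(requests)]
```

A real implementation would match richer historical signatures, but the shape is the same: detect a recurring pattern, then apply the policy previously found suitable for it.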
-
Patent number: 8115771
Abstract: A system for multilevel simulation of an animation cloth is provided. The system includes a multilevel area generation module, a curvature calculation module, a curvature comparison module, and a dynamic simulation module. The multilevel area generation module divides a plurality of grid units of the animation cloth into a plurality of level sub-areas based on a multilevel technique, wherein each of the level sub-areas is generated by dividing an upper level sub-area. The curvature calculation module calculates the curvatures of the level sub-areas according to the plane vectors of the grid units in a frame. The curvature comparison module compares the curvatures of the level sub-areas with a flatness threshold. The dynamic simulation module calculates the plane vector of each grid unit in a next frame through different methods according to the comparison result of the curvature comparison module.
Type: Grant
Filed: December 27, 2007
Date of Patent: February 14, 2012
Assignee: Institute for Information Industry
Inventors: Chia-Ying Chi, Chi Chu, Zen-Chung Shih, Wei-Te Lin
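One plausible reading of the curvature comparison is sketched below: a sub-area's curvature is estimated from the spread of its grid-unit plane (normal) vectors, and flat areas are routed to a cheaper update. The curvature measure and method names are assumptions, not the patent's definition:

```python
import math


def area_curvature(normals):
    """Approximate a sub-area's curvature as the largest angle (radians)
    between any grid-unit normal and the area's mean normal (illustrative)."""
    mx = sum(n[0] for n in normals)
    my = sum(n[1] for n in normals)
    mz = sum(n[2] for n in normals)
    length = math.sqrt(mx * mx + my * my + mz * mz)
    mean = (mx / length, my / length, mz / length)
    dots = (n[0] * mean[0] + n[1] * mean[1] + n[2] * mean[2] for n in normals)
    return max(math.acos(max(-1.0, min(1.0, d))) for d in dots)


def simulation_method(normals, flatness_threshold=0.1):
    """Flat sub-areas can be advanced cheaply; curved ones need the full
    dynamic simulation, per the comparison against a flatness threshold."""
    return "interpolate" if area_curvature(normals) < flatness_threshold else "full"
```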
-
System and method of customizing animated entities for use in a multimedia communication application
Patent number: 8115772
Abstract: In an embodiment, a method is provided for creating a personal animated entity for delivering a multi-media message from a sender to a recipient. An image file from the sender may be received by a server. The image file may include an image of an entity. The sender may be requested to provide input with respect to facial features of the image of the entity in preparation for animating the image of the entity. After the sender provides the input with respect to the facial features of the image of the entity, the image of the entity may be presented as a personal animated entity to the sender to preview. Upon approval of the preview from the sender, the image of the entity may be presented as a sender-selectable personal animated entity for delivering the multi-media message to the recipient.
Type: Grant
Filed: April 8, 2011
Date of Patent: February 14, 2012
Assignee: AT&T Intellectual Property II, L.P.
Inventors: Joern Ostermann, Mehmet Reha Civanlar, Ana Cristina Andres del Valle, Patrick Haffner
-
Patent number: 8111284
Abstract: An apparatus for 3D representation of image data, comprising: a structure identifier for identifying structures in motion within image data, and a skeleton insertion unit, which associates three-dimensional skeleton elements with the identified structures. The skeleton elements are able to move with the structures to provide a three-dimensional motion and structural understanding of said image data which can be projected back onto the input data. As well as individual elements, complex bodies can be modeled by complex skeletons having multiple elements. The skeleton elements themselves can be used to identify the complex objects.
Type: Grant
Filed: May 1, 2007
Date of Patent: February 7, 2012
Assignee: Extreme Reality Ltd.
Inventor: Dor Givon
-
Patent number: 8111258
Abstract: As a virtual plane used when converting a designation point, which is a touched position on a touch panel, into a control point, in a virtual three dimensional space, used for controlling a movement of an object, a first virtual plane is used when a ball is an object to be controlled; a second virtual plane is used when an object to be operated is a dog and an action mode thereof is an attention mode; a third virtual plane is used when the object to be operated is the dog and the action mode thereof is a lick mode; a fourth virtual plane is used when the object to be operated is the dog and the action mode thereof is a rope shake mode; and a fifth virtual plane is used when the object to be operated is the dog and the action mode thereof is a circle mode.
Type: Grant
Filed: February 14, 2011
Date of Patent: February 7, 2012
Assignee: Nintendo Co., Ltd.
Inventors: Yoshitaka Ajioka, Yasushi Ebisawa, Kiyoshi Mizuki
-
Patent number: 8111261
Abstract: A method for creating an appearance of texture in a computer image having the steps of introducing information into a computer from which the image is produced for each point of the image in 3D geometric space. There is the step of computing a pseudo-random hash value at each vertex of a unit cube surrounding the point of the image using six + modules and seven L modules, where the L module is implemented as a look-up table having 64 six-bit entries. There is the step of mapping the lower six bits from the last-stage L modules of a plurality of stages of modules to a fixed set of 64 gradient vectors, where the set is chosen such that a length of each component of every vector of the 64 vectors is a power of two. There is the step of, based on the gradient vectors, combining with the computer the contribution from each vertex into a single interpolated result to produce the point of the image with noise-interpolated texture that does not have visible grid artifacts.
Type: Grant
Filed: November 21, 2000
Date of Patent: February 7, 2012
Inventor: Kenneth Perlin
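The gradient set can be illustrated concretely: 64 vectors whose component magnitudes are all powers of two, so the gradient dot product reduces to shifts and adds in hardware. The particular component magnitudes chosen below are an assumption for illustration; the patent only requires that each component's length be a power of two:

```python
import itertools

# 64 gradient vectors: component magnitudes drawn from powers of two,
# signs covering all octants. 2 magnitudes^3 axes * 2 signs^3 axes = 64.
POWERS = (1, 2)  # illustrative power-of-two component magnitudes
GRADIENTS = [
    (sx * a, sy * b, sz * c)
    for (a, b, c) in itertools.product(POWERS, repeat=3)
    for (sx, sy, sz) in itertools.product((1, -1), repeat=3)
]


def gradient_for(hash_value):
    """Map the lower six bits of a per-vertex hash to one of the 64 gradients."""
    return GRADIENTS[hash_value & 0x3F]
```

Since multiplying by a power of two is a bit shift, evaluating the dot product of a gradient with the fractional position needs no general multiplier.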
-
Patent number: 8106911
Abstract: Capturing motion, including: coupling at least one body marker on at least one body point of at least one actor; coupling at least one facial marker on at least one facial point of the at least one actor; arranging a plurality of motion capture cameras around a periphery of a motion capture volume, the plurality of motion capture cameras is arranged such that substantially all laterally exposed surfaces of the at least one actor while in motion within the motion capture volume are within a field of view of at least one of the plurality of motion capture cameras at substantially all times; attaching at least one wearable motion capture camera to the at least one actor, wherein substantially all of the at least one facial marker and the at least one body marker on the at least one actor are within a field of view of the at least one wearable motion capture camera.
Type: Grant
Filed: September 13, 2010
Date of Patent: January 31, 2012
Assignees: Sony Corporation, Sony Pictures Entertainment Inc.
Inventor: Demian Gordon
-
Patent number: 8106910
Abstract: A method for correct reproduction of moving three-dimensional (3D) scenes observed by a viewer on displays showing moving three-dimensional scenes relating to video games, animated cartoons, simulators for drivers or pilots, etc. The main concept of the invention is the reproduction of a moving scene that rotates around a selected center, which is the point of the viewer's gaze fixation. Thus, all objects that are stationary with respect to each other on the illustrated scene rotate at the same angular speed but move on the screen at different linear speeds, which are inversely proportional to preselected distances from the viewer to the respective objects. Movements of objects relative to each other are presented in a coordinate system rotating with the scene. For reproduction on the display, distances to the objects, number of objects, and other selected data are entered into a conventional 3D-animation computer program.
Type: Grant
Filed: March 28, 2008
Date of Patent: January 31, 2012
Inventors: Vldimir Pugach, Stanislav Klimenko, Polina Danilicheva
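The speed relationship stated in the abstract is simple to express: at a shared angular speed, an object's on-screen linear speed is inversely proportional to its preselected distance from the viewer. The function and constant below are illustrative:

```python
def screen_speed(angular_speed, distance_to_viewer, k=1.0):
    """On-screen linear speed of an object in the rotating scene.

    All stationary objects share the same angular speed; linear speed
    falls off inversely with distance from the viewer (k is an
    illustrative screen-scaling constant).
    """
    return k * angular_speed / distance_to_viewer
```

So of two objects rotating together, the nearer one sweeps across the screen faster, which is what produces the depth impression the patent describes.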
-
Patent number: 8103128
Abstract: A video game device calculates an angle of an object 71, etc., with respect to a reference direction in a three-dimensional space. Then, the video game device defines a virtual cylindrical model for each of a plurality of objects so that a bottom surface of each cylindrical model is perpendicular to the reference direction. The video game device defines a plurality of cylindrical models for a predetermined object if the inclination of the predetermined object with respect to the reference direction is greater than a predetermined angle. Then, the video game device determines whether or not the defined cylindrical models have an overlap therebetween. Then, if two cylindrical models are determined to be overlapping with each other, the video game device gives a predetermined change to the two objects in response to the collision.
Type: Grant
Filed: November 25, 2009
Date of Patent: January 24, 2012
Assignee: Nintendo Co., Ltd.
Inventors: Eiji Aonuma, Yoichi Yamada, Hajime Nakamura, Hiroshi Umemiya
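Because every cylinder's bottom surface is perpendicular to the same reference direction, all axes are parallel, and the overlap test factors into a 2D circle test plus a 1D interval test. The sketch below assumes the reference direction is the y axis; names and layout are illustrative:

```python
import math


class Cylinder:
    """Cylinder whose axis is parallel to the reference (y) direction:
    (x, y, z) is the center of the bottom face."""

    def __init__(self, x, y, z, radius, height):
        self.x, self.y, self.z = x, y, z
        self.radius, self.height = radius, height


def cylinders_overlap(a, b):
    """Parallel-axis cylinders overlap iff their footprint circles
    intersect in the horizontal plane AND their vertical extents overlap."""
    horiz = math.hypot(a.x - b.x, a.z - b.z)
    if horiz >= a.radius + b.radius:
        return False
    return a.y < b.y + b.height and b.y < a.y + a.height
```

This factorization is what makes cylinder bounds cheap for games; the patent's refinement of using several cylinders for a strongly inclined object keeps the parallel-axis assumption usable.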
-
Patent number: 8102406
Abstract: A computer-implemented method and system transforms a first sequence of video frames of a first dynamic scene to a second sequence of at least two video frames depicting a second dynamic scene. A subset of video frames in the first sequence is obtained that show movement of at least one object having a plurality of pixels located at respective x, y coordinates and portions from the subset are selected that show non-spatially overlapping appearances of the at least one object in the first dynamic scene. The portions are copied from at least three different input frames to at least two successive frames of the second sequence without changing the respective x, y coordinates of the pixels in the object and such that at least one of the frames of the second sequence contains at least two portions that appear at different frames in the first sequence.
Type: Grant
Filed: November 15, 2006
Date of Patent: January 24, 2012
Assignee: Yissum Research Development Company of the Hebrew University of Jerusalem
Inventors: Shmuel Peleg, Alexander Rav-Acha
-
Publication number: 20110304632
Abstract: Embodiments are disclosed that relate to interacting with a user interface via feedback provided by an avatar. One embodiment provides a method comprising receiving depth data, locating a person in the depth data, and mapping a physical space in front of the person to a screen space of a display device. The method further comprises forming an image of an avatar representing the person, outputting to a display an image of a user interface comprising an interactive user interface control, and outputting to the display device the image of the avatar such that the avatar faces the user interface control. The method further comprises detecting a motion of the person via the depth data, forming an animated representation of the avatar interacting with the user interface control based upon the motion of the person, and outputting the animated representation of the avatar interacting with the control.
Type: Application
Filed: June 11, 2010
Publication date: December 15, 2011
Applicant: MICROSOFT CORPORATION
Inventors: Jeffrey Evertt, Joel Deaguero, Darren Bennett, Dylan Vance, David Galloway, Relja Markovic, Stephen Latta, Oscar Omar Garza Santos, Kevin Geisner
-
Publication number: 20110304633
Abstract: Techniques are disclosed for controlling robot pixels to display a visual representation of an input. The input to the system could be an image of a face, and the robot pixels deploy in a physical arrangement to display a visual representation of the face, and would change their physical arrangement over time to represent changing facial expressions. The robot pixels function as a display device for a given allocation of robot pixels. Techniques are also disclosed for distributed collision avoidance among multiple non-holonomic robots to guarantee smooth and collision-free motions. The collision avoidance technique works for multiple robots by decoupling path planning and coordination.
Type: Application
Filed: June 8, 2011
Publication date: December 15, 2011
Inventors: Paul Beardsley, Javier Alonso Mora, Andreas Breitenmoser, Martin Rufli, Roland Siegwart, Iain Matthews, Katsu Yamane
-
Patent number: 8077179
Abstract: Described herein are systems and methods for cartoonizing an image and incorporating the image into an animated video based upon a predefined animated story. In alternate embodiments of the invention, multiple images may be incorporated into the animated video. In further embodiments, the animated video may be output from the system in printed, hard-copy form.
Type: Grant
Filed: July 11, 2006
Date of Patent: December 13, 2011
Assignee: Pandoodle Corp.
Inventor: David M. Ludwigsen
-
Publication number: 20110298810
Abstract: A moving-subject control device controls a motion of a moving subject based on motion data indicating the motion of the moving subject. It includes an input unit which receives an input of attribute information indicating an attribute of the moving subject; a generation unit which, based on the attribute information received by the input unit, generates motion data for a user for controlling a motion of a moving subject for that user; and a control unit which varies the motion of the moving subject for the user based on the motion data generated by the generation unit.
Type: Application
Filed: February 8, 2010
Publication date: December 8, 2011
Applicant: NEC Corporation
Inventor: Tetsuya Fuyuno
-
Patent number: 8060255
Abstract: A system of distributed control of an interactive animatronic show includes a plurality of animatronic actors, at least one of the actors having a processor and one or more motors controlled by the processor. The system also includes a network interconnecting each of the actors, and a plurality of sensors providing messages to the network, where the messages are indicative of processed information. Each processor executes software that schedules and/or coordinates an action of the actor corresponding to the processor in accordance with the sensor messages representative of attributes of an audience viewing the show and the readiness of the corresponding actor. Actions of the corresponding actor can include animation movements of the actor, responding to another actor and/or responding to a member of the audience. The actions can result in movement of at least a component of the actor caused by control of the motor.
Type: Grant
Filed: September 12, 2007
Date of Patent: November 15, 2011
Assignee: Disney Enterprises, Inc.
Inventor: Alexis Paul Wieland
-
Publication number: 20110273457
Abstract: Techniques are disclosed for providing a learning-based clothing model that enables the simultaneous animation of multiple detailed garments in real-time. A simple conditional model learns and preserves key dynamic properties of cloth motions and folding details. Such a conditional model may be generated for each garment worn by a given character. Once generated, the conditional model may be used to determine complex body/cloth interactions in order to render the character and garment from frame-to-frame. The clothing model may be used for a variety of garments worn by male and female human characters (as well as non-human characters) while performing a varied set of motions typically used in video games (e.g., walking, running, jumping, turning, etc.).
Type: Application
Filed: December 21, 2010
Publication date: November 10, 2011
Inventors: Edilson de Aguiar, Leonid Sigal, Adrien Treuille, Jessica K. Hodgins
-
Patent number: 8054312
Abstract: Capturing motion using motion capture cameras comprises: coupling a plurality of markers to an actor; allowing a material to be positioned between the actor and the motion capture cameras, wherein the material is selected to allow the motion capture cameras to capture motion of the plurality of markers while the actor interacts with the material; and capturing the motion of the markers.
Type: Grant
Filed: August 25, 2006
Date of Patent: November 8, 2011
Assignees: Sony Corporation, Sony Pictures Entertainment Inc.
Inventor: Demian Gordon
-
Patent number: 8054310
Abstract: Computer-implemented methods, systems, and computer program products are provided for recasting a legacy web page as a motion picture with audio.
Type: Grant
Filed: June 18, 2007
Date of Patent: November 8, 2011
Assignee: International Business Machines Corporation
Inventors: William K. Bodin, David Jaramillo, Jesse W. Redman, Derral C. Thorson
-
Publication number: 20110267358
Abstract: A method of animating a virtual object within a virtual world, wherein the virtual object comprises a plurality of object parts, wherein for a first object part there is one or more associated second object parts, the method comprising: at an animation update step: specifying a target frame in the virtual world; and applying control to the first object part, wherein the control is arranged such that the application of the control in isolation to the first object part would cause a movement of the first object part in the virtual world that reduces a difference between a control frame and the target frame, the control frame being a frame at a specified position and orientation in the virtual world relative to the first object part, wherein applying control to the first object part comprises moving the one or more second object parts within the virtual world to compensate for the movement of the first object part in the virtual world caused by applying the control to the first object part.
Type: Application
Filed: April 29, 2010
Publication date: November 3, 2011
Inventors: Antoine Félix Robert Rennuit, Thomas Daniel Lowe
-
Patent number: 8049757
Abstract: To control screen display using moving pictures for a plurality of users. Respective moving pictures for a plurality of users are acquired, and each of the acquired moving pictures are displayed on a screen. Display content for the screen is controlled in response to content of each moving picture. At this time, it is also possible to move a specified movement image on the screen according to display position of each moving picture on the screen and content of each moving picture, or to change a display region of the moving picture on the screen according to content of each moving picture.
Type: Grant
Filed: May 11, 2006
Date of Patent: November 1, 2011
Assignee: Sony Computer Entertainment Inc.
Inventors: Tomokazu Kake, Yasushi Okumura
-
Patent number: 8049758
Abstract: A sensing baseball game apparatus (10) has a game machine (12) connected to a television monitor (18). A bat input device (32) is provided with an acceleration sensor. An acceleration signal is transmitted by an infrared-ray LED (34) to an infrared-ray receiving part of the game machine (12) whereby the game machine (12) determines a moving speed of the bat input device (32) to calculate a moving parameter of a ball to be batted. Accordingly, a batted ball is moved in the game scene according to the parameter.
Type: Grant
Filed: March 15, 2011
Date of Patent: November 1, 2011
Assignee: SSD Company Limited
Inventors: Hiromu Ueshima, Shuhei Kato
-
Patent number: 8044962
Abstract: A computer-implemented method includes identifying a representation of a feature of an animated character by inverting a skinned representation of the feature in one position. The inversion includes a non-linear inversion of the skinned representation of the feature. The method also includes skinning the identified feature representation to produce the animated character in another position.
Type: Grant
Filed: August 31, 2007
Date of Patent: October 25, 2011
Assignee: Lucasfilm Entertainment Company Ltd.
Inventors: Frederic P. Pighin, Cary Phillips
-
Patent number: 8044975
Abstract: Disclosed are an apparatus and a method for providing a wallpaper. To this end, a current state of a mobile terminal is detected, particle images and a fluid image reflecting the detected current state of the mobile terminal are generated, and the generated particles and the generated fluid are displayed on a preset wallpaper so as to generate a wallpaper. Then, user motion is detected, acceleration and movement amount are extracted from the detected user motion, and movement amounts and movement directions of the particles and the fluid are determined based on the extracted acceleration and movement amount. Then, the particles and fluid are displayed with gravity reflected in their movement amounts and movement directions. Accordingly, it is possible to provide a wallpaper having an effect as if an actual snow globe were being moved.
Type: Grant
Filed: January 3, 2008
Date of Patent: October 25, 2011
Assignee: Samsung Electronics Co., Ltd.
Inventors: Soon-Ok Kim, Chan-Woo Park, Kyu-Ok Choi, Do-Hwan Choi, Jong-Hyun An
-
Patent number: 8035643
Abstract: Systems and methods are described, which create a mapping from a space of a source object (e.g., source facial expressions) to a space of a target object (e.g., target facial expressions). In certain implementations, the mapping is learned based on a training set composed of corresponding shapes (e.g., facial expressions) in each space. The user can create the training set by selecting expressions from, for example, captured source performance data, and by sculpting corresponding target expressions. Additional target shapes (e.g., target facial expressions) can be interpolated and extrapolated from the shapes in the training set to generate corresponding shapes for potential source shapes (e.g., facial expressions).
Type: Grant
Filed: March 19, 2007
Date of Patent: October 11, 2011
Assignee: Lucasfilm Entertainment Company Ltd.
Inventors: Frederic P. Pighin, Cary Phillips, Steve Sullivan
-
Patent number: 8035644
Abstract: Provided is a method for providing animation in electronic communications. An image is generated by capturing multiple photographs from a camera or video camera typically fixed in one position. The first photograph is called the "naked photo." Using a graphics program, photos subsequent to the naked photo are edited to cut an element common to the subsequent photos. The cut images are pasted into the naked photo as layers. The modified naked photo, including the layers, is stored as a web-enabled graphics file, which is then transmitted in conjunction with electronic communication. When the electronic communication is received, the naked photo is displayed and each of the layers is displayed and removed in the order that each was taken with a short delay between photos. In this manner, a movie is generated with much smaller files than is currently possible.
Type: Grant
Filed: June 16, 2008
Date of Patent: October 11, 2011
Inventor: Douglas G. Richardson
-
Patent number: 8026918
Abstract: A first user's avatar in a virtual world environment may be controlled by using a virtual world application enabling access to a virtual world environment within which the first user has an avatar associated with the first user. A virtual world location corresponding to the first avatar is identified. At least one second avatar proximate to the virtual world location is identified. Filtering information is accessed. Profile information related to the second avatar is accessed. Filtering information is compared to the accessed profile information. Based on the comparison results, a metric related to the second avatar is determined. The metric is related to a threshold. It is determined whether relating the metric to the threshold supports enabling communications between the first and second avatars, and if so, communications to be exchanged between the first and second avatars are enabled.
Type: Grant
Filed: November 21, 2007
Date of Patent: September 27, 2011
Assignee: AOL Inc.
Inventor: Stephen Vaughan Murphy
-
Patent number: 8026917
Abstract: Computer-implemented methods and computer program products for automatically transferring expressions between rigs with consistent joint structure, and for automatically transferring skin weights between different skin meshes based on joint positioning. A method is provided for transferring an expression between a source rig and a target rig, where each rig characterizes an animated character, and each rig, in turn, is characterized by a set of joints and a skin mesh having a plurality of vertices, with each vertex characterized by a matrix of weights relating a response of the vertex to movement of associated joints. A set of offsets is calculated of joint positions of a goal expression of the source rig relative to a neutral expression of the source rig. A scaling transformation is then applied to the set of offsets to produce a scaled set of offsets, which are added, in turn, to a neutral expression of the target rig.
Type: Grant
Filed: April 30, 2007
Date of Patent: September 27, 2011
Assignee: Image Metrics Ltd
Inventors: Michael Rogers, Kevin Walker, Steve Caulkin, Gareth Edwards
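The offset-transfer step described above reduces to a per-joint computation. The sketch below uses a single uniform scale factor in place of the patent's scaling transformation, and the function signature is an illustrative assumption:

```python
def transfer_expression(src_neutral, src_goal, tgt_neutral, scale=1.0):
    """Offset-based expression transfer between rigs with consistent joint
    structure: offsets of the source goal expression relative to the source
    neutral are scaled and added to the target neutral expression.

    Each expression is a list of (x, y, z) joint positions; `scale` stands
    in for the patent's more general scaling transformation."""
    result = []
    for (nx, ny, nz), (gx, gy, gz), (tx, ty, tz) in zip(
        src_neutral, src_goal, tgt_neutral
    ):
        result.append((tx + scale * (gx - nx),
                       ty + scale * (gy - ny),
                       tz + scale * (gz - nz)))
    return result
```

Because both rigs share a joint structure, the offsets align joint-for-joint; the scaling accounts for differences in character proportions.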
-
Patent number: 8022955
Abstract: An operability verification apparatus includes a work plane generation section that generates a work plane on a virtual space where a three-dimensional model of an equipment to be verified is disposed; a plane display section where a two-dimensional image on a work plane generated in the work plane generation section of the three-dimensional model is displayed on the display screen; and a mark display update section in which a mark representative of the pointing device is displayed on the display screen, and upon receipt of the notification of direction of movement and migration length of the pointing device, the mark on the display screen is moved in the direction of the movement corresponding to the direction of the movement of the pointing device by only a migration length in which a ratio of a real size of the equipment to a display size of the two-dimensional image on the display screen is considered.
Type: Grant
Filed: January 23, 2009
Date of Patent: September 20, 2011
Assignee: Fujitsu Limited
Inventors: Koji Demizu, Hidekatsu Sasaki, Masayuki Kidera, Wataru Nishiyama
-
Patent number: 8022954
Abstract: Three digital micromirror devices ("DMDs") are used to alter the shape of light that is projected onto a stage. The DMDs each receive a primary color and selectively reflect some light of that color, thereby shaping the light that is projected onto the stage. The alteration is controlled by an image. That image can be processed, thereby carrying out image processing effects on the shape of the light that is displayed. One preferred application follows the shape of the performer and illuminates the performer using a shape that adaptively follows the performer's image. This results in a shadowless follow spot.
Type: Grant
Filed: February 22, 2001
Date of Patent: September 20, 2011
Assignee: Production Resource Group, L.L.C
Inventor: William E. Hewlett
-
Publication number: 20110221755
Abstract: A camera that can sense motion of a user is connected to a computing system (e.g., video game apparatus or other type of computer). The computing system determines an action corresponding to the sensed motion of the user and determines a magnitude of the sensed motion of the user. The computing system creates and displays an animation of an object (e.g., an avatar in a video game) performing the action in a manner that is amplified in comparison to the sensed motion by a factor that is proportional to the determined magnitude. The computing system also creates and outputs audio/visual feedback in proportion to a magnitude of the sensed motion of the user.
Type: Application
Filed: March 12, 2010
Publication date: September 15, 2011
Inventors: Kevin Geisner, Relja Markovic, Stephen G. Latta, Brian James Mount, Zachary T. Middleton, Joel Deaguero, Christopher Willoughby, Dan Osborn, Darren Bennett, Gregory N. Snook
-
Patent number: 8016659
Abstract: A gaming apparatus and methods which include multiple virtual stacks of symbols, such as cards, numbers, picture symbols or other symbols. In one form each player has an individual virtual stack and the dealer has one stack. In another form each player has one stack and the dealer has individual virtual stacks for each player. In some preferred forms, the symbol sets for each participant are stripped or reduced during the game as various symbols are assigned thereto. Other alternatives are also described.
Type: Grant
Filed: February 22, 2008
Date of Patent: September 13, 2011
Assignee: DigiDeal Corporation
Inventors: Michael J Kuhn, Donald L. Evans
-
Patent number: 8018455Abstract: A multi-user animation process receives input from multiple remote clients to manipulate avatars through a modeled 3-D environment. Each user is represented by an avatar. The 3-D environment and avatar position/location data is provided to client workstations, which display a simulated environment visible to all participants. A text or speech-based bulletin board application is coupled to the animation process. The bulletin board application receives text or speech input from the multiple remote users and publishes the input in a public forum. The bulletin board application maintains multiple forums organized by topic. Access to and participation in particular forums is coordinated with the animation process, such that each user may be permitted access to a forum only when the user's avatar is located within a designated room or region of the modeled 3-D environment.Type: GrantFiled: October 4, 2010Date of Patent: September 13, 2011Inventor: Brian Mark Shuster
-
Patent number: 8004529Abstract: A method for processing an animation file to provide an animated icon to an instant messaging environment is presented. An animation file is reformatted to generate the animated icon to satisfy a pre-defined size requirement of the instant messaging environment. The animated icon is stored for distribution to the instant messaging environment.Type: GrantFiled: October 1, 2007Date of Patent: August 23, 2011Assignee: Apple Inc.Inventors: Justin Wood, Thomas Goossens
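The reformatting step can be sketched as a simple size-fitting computation (the function name and size convention are illustrative assumptions, not from the patent):

```python
def fit_icon(frame_size, max_size):
    """Compute output dimensions that fit an animation frame within an
    instant-messaging environment's pre-defined size limit, preserving
    aspect ratio and never upscaling."""
    scale = min(max_size[0] / frame_size[0],
                max_size[1] / frame_size[1],
                1.0)  # cap at 1.0: never enlarge small icons
    return (int(frame_size[0] * scale), int(frame_size[1] * scale))
```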
-
Publication number: 20110199318Abstract: A user interface (UI) system calculates movements in a multi-layer graphical user interface. The UI system receives user input corresponding to gestures on a touchscreen. The UI system calculates a movement of a first layer in a first direction (e.g., a horizontal direction) at a first movement rate. For example, the first movement rate can be substantially equal to the movement rate of a gesture made by a user's finger or other object on the touchscreen. The UI system calculates movements of other layers substantially parallel to the movement of the first layer, at movement rates that differ from the first movement rate.Type: ApplicationFiled: June 25, 2010Publication date: August 18, 2011Applicant: Microsoft CorporationInventors: Jeffrey Cheng-Yao Fong, Eric J. Hull, Sergey Chub
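A minimal sketch of the per-layer movement calculation (the rates and names are illustrative assumptions): the front layer tracks the gesture one-to-one, while deeper layers move in parallel at reduced rates.

```python
def layer_offsets(gesture_dx, rates=(1.0, 0.5, 0.25)):
    """Horizontal offsets for each UI layer given a touchscreen gesture:
    every layer moves parallel to the gesture, at its own rate."""
    return [gesture_dx * r for r in rates]
```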
-
Patent number: 7999827Abstract: Tools and techniques for creating and editing a master block definition for a block, and manipulating a block instantiation of the block are described. User input can be received defining a block and a master block definition can be generated for the block. The block can include one or more graphical entities and receiving a user input defining a block can include receiving one or more user inputs manipulating a graphical representation of the block. The master block definition can include at least one allowable manipulation to a geometry of the block when instantiated. One or more block instantiations of the block can be manipulated differently according to the allowable manipulation.Type: GrantFiled: June 15, 2006Date of Patent: August 16, 2011Assignee: Autodesk, Inc.Inventors: John G. Ford, John Beltran
-
Patent number: 7999811Abstract: An image processing device that models, based on a plurality of frame images being results of time-sequential imaging of an object in motion, a motion of the object using a three-dimensional (3D) body configured by a plurality of parts is disclosed. The device includes: acquisition means for acquiring the frame images being the imaging results; estimation means for computing a first matrix of coordinates of a joint of the 3D body and a second matrix of coordinates of each of the parts of the 3D body, and generating a first motion vector; computing means for computing a second motion vector; and determination means for determining the 3D body.Type: GrantFiled: January 14, 2008Date of Patent: August 16, 2011Assignee: Sony CorporationInventors: Yuyu Liu, Weiguo Wu, Takayuki Yoshigahara
-
Publication number: 20110193867Abstract: A method for producing motion effects of a character capable of interacting with a background image in accordance with the characteristics of the background image is provided, including extracting the characteristics of the background image; determining a character to be provided with the motion effects in the background in accordance with the extracted characteristics of the background image; recognizing external signals including a user input; determining the motion of the character in accordance with the characteristics of the background image and the recognized external signals; and reproducing an animation for executing the motion of the character in the background image.Type: ApplicationFiled: February 11, 2011Publication date: August 11, 2011Applicant: Samsung Electronics Co., Ltd.Inventors: Hee-Bum AHN, Hyun-Soo Kim, Mu-Sik Kwon, Sang-Wook Oh, Dong-Hyuk Lee, Seong-Taek Hwang, An-Na Park
-
Patent number: 7995064Abstract: A computer-implemented chat system having dual channel communications and self-defining product structures is disclosed. The system and method includes providing a first logical communication channel between a first chat client and a second chat client, the first logical channel conveying text chat messages between the first chat client and the second chat client; providing a second logical communication channel between the first chat client and the second chat client, the second logical channel conveying text chat command/control information between the first chat client and the second chat client; and using the chat command/control information to modify a 3D chat scene displayed on each of the first chat client and the second chat client systems.Type: GrantFiled: August 26, 2005Date of Patent: August 9, 2011Assignee: IMVU, Inc.Inventors: Vernon Melvin Guymon, III, Eric Nathan Ries, William David Harvey, Matt Danzig, Marcus Gosling
-
Patent number: 7995065Abstract: An animation reproducing apparatus and method reproduce an animation of a predetermined model. The animation reproducing apparatus includes a motion blending unit that blends one or more previously prepared animations according to a desired motion to be reproduced, and a rendering unit that renders the result of the blending.Type: GrantFiled: September 22, 2006Date of Patent: August 9, 2011Assignee: Samsung Electronics Co., Ltd.Inventors: Seyoon Tak, Dokyoon Kim, Keechang Lee, Jeonghwan Ahn
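The motion blending unit's core operation can be sketched as a weighted combination of prepared poses (representing a pose as a flat list of joint angles is an assumption for illustration):

```python
def blend_poses(poses, weights):
    """Blend previously prepared animation poses (lists of joint angles)
    using normalized weights, yielding the pose handed to the renderer."""
    total = sum(weights)
    norm = [w / total for w in weights]
    return [sum(pose[j] * w for pose, w in zip(poses, norm))
            for j in range(len(poses[0]))]
```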
-
Publication number: 20110191666Abstract: Animation control methods and systems. In one embodiment, a method to control animations includes receiving data representing content of a page (e.g. a web page), detecting, from the data, whether the page includes animated content, determining whether to halt execution of the detected animated content, and halting execution of the animated content if a determination to halt was made. In one implementation, the content can be configured into a document object model (DOM) and decisions to halt or not to halt can be made on a node-by-node basis within the DOM. In one implementation, the animated content can be allowed to execute for a shortened duration (e.g. in order to allow a user to see it) and then is halted.Type: ApplicationFiled: February 2, 2010Publication date: August 4, 2011Inventors: Kevin Decker, Jing Jin
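The node-by-node decision over a DOM can be sketched as a recursive walk (plain dicts stand in for DOM nodes here; the field names are hypothetical):

```python
def halt_animations(node, allow_duration=0.25):
    """Visit each node of a DOM-like tree; animated nodes are allowed to
    run for a short duration (so the user sees them) and then halted."""
    if node.get("animated"):
        node["play_for"] = allow_duration  # seconds to run before halting
        node["halted"] = True
    for child in node.get("children", []):
        halt_animations(child, allow_duration)
    return node
```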
-
Publication number: 20110187728Abstract: An opto-mechanical motion capture system for indirectly measuring the movement of bodies and objects, mainly focused on joints made of flexible materials or parts that deform, which make instrumentation with rigid sensors such as potentiometers difficult. The invention consists of an image acquisition device, or camera, and a visualization bed to which a series of transmission cables convey the movements generated in the flexible parts to be sensed. The camera is set so that it can capture the image of the transmission cables, enabling determination of their displacement and thus of the movement of the sensed objects. The main object of this invention is to enable measurement of the movements of the flexible parts of the human body in a way that is simple, cheap, and comfortable for the user of the device.Type: ApplicationFiled: August 21, 2008Publication date: August 4, 2011Applicant: Universidad Nacional Autonoma De MexicoInventor: Hernando Ortega-Carrillo
-
Patent number: 7990387Abstract: The present invention provides a computer implemented method and apparatus to project a projected avatar associated with an avatar in a virtual universe. A computer receives a command to project the avatar, the command having a projection point. The computer transmits a request to place a projected avatar at the projection point to a virtual universe host. The computer renders a tab associated with the projected avatar.Type: GrantFiled: August 16, 2007Date of Patent: August 2, 2011Assignee: International Business Machines CorporationInventors: Rick Allen Hamilton, II, Brian Marshall O'Connell, Clifford Alan Pickover, Keith Raymond Walker
-
Graphic system comprising a pipelined graphic engine, pipelining method and computer program product
Patent number: 7990389Abstract: A graphic system includes a pipelined graphic engine for generating image frames for display. The pipelined graphic engine includes a geometric processing stage for performing motion extraction, and a rendering stage for generating full image frames at a first frame rate for display at a second frame rate. The second frame rate is higher than the first frame rate. A motion encoder stage receives motion information from the geometric processing stage, and produces an interpolated frame signal representative of interpolated frames. A motion compensation stage receives the interpolated frame signal from the motion encoder stage, and the full image frames from the rendering stage for generating the interpolated frames. A preferred application is in graphic systems that operate in association with smart displays through a wireless connection, such as in mobile phones.Type: GrantFiled: May 10, 2007Date of Patent: August 2, 2011Assignee: STMicroelectronics S.R.L.Inventor: Massimiliano Barone
-
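The motion compensation stage of patent 7990389 generates intermediate frames between fully rendered ones; its per-object position computation can be sketched as follows (linear interpolation is an illustrative simplification, not the patent's actual method):

```python
def interpolate_position(p0, p1, t):
    """Screen position of an object in an interpolated frame, a fraction
    t (0..1) of the way between two fully rendered frames."""
    return tuple(a + (b - a) * t for a, b in zip(p0, p1))
```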
Publication number: 20110181606Abstract: In an animation processing system, a computer generates images viewable on a display based on scene geometry obtained from computer-readable storage and on animation data representing changes over time of scene geometry elements. The images can also be modified to include shading that is a function of the positions of objects at times other than the current instantaneous time of a frame render, such that the motion-effect shading suggests motion of at least one of the elements to a viewer of the generated images. Motion effects provide, based on depiction parameters and/or artist inputs, shading that varies with the received animation data and motion depiction parameters; for at least one pixel, a pixel color is rendered based on motion-effect program output and at least some of the received scene geometry, such that the output contributes to features that suggest the motion.Type: ApplicationFiled: July 26, 2010Publication date: July 28, 2011Applicant: Disney Enterprises, Inc.Inventors: Robert Sumner, Markus Gross, Nils Thuerey, Thomas Oskam
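The idea of shading from positions at times other than the current frame can be sketched as averaging colors sampled along an object's recent trajectory, a smear-style motion effect (an illustrative reduction, not the patent's actual motion-effect programs):

```python
def smear_color(colors_along_trajectory):
    """Average RGB colors sampled at an object's positions over recent
    frame times, so the rendered pixel suggests the object's motion."""
    n = len(colors_along_trajectory)
    return tuple(sum(c[i] for c in colors_along_trajectory) / n
                 for i in range(3))
```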
-
Publication number: 20110181607Abstract: A game developer can “tag” an item in the game environment. When an animated character walks near the “tagged” item, the animation engine can cause the character's head to turn toward the item, mathematically computing what needs to be done in order to make the action look real and normal. The tag can also be modified to elicit an emotional response from the character. For example, a tagged enemy can cause fear, while a tagged inanimate object may elicit only indifference or mild interest.Type: ApplicationFiled: March 30, 2011Publication date: July 28, 2011Applicant: Nintendo of AmericaInventors: Henry Sterchi, Jeff Kalles, Shigeru Miyamoto, Denis Dyack, Carey Murray
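The head-turn computation toward a tagged item can be sketched as a clamped yaw toward the item's position (the angle convention and neck-range clamp are illustrative assumptions):

```python
import math

def head_yaw_toward(char_pos, char_facing, item_pos,
                    max_turn=math.radians(60)):
    """Yaw (radians) that turns a character's head toward a tagged item,
    clamped to a plausible neck range so the action looks natural."""
    desired = math.atan2(item_pos[1] - char_pos[1],
                         item_pos[0] - char_pos[0])
    # shortest signed angular difference, wrapped to (-pi, pi]
    delta = (desired - char_facing + math.pi) % (2 * math.pi) - math.pi
    return max(-max_turn, min(max_turn, delta))
```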
-
Patent number: RE43216Abstract: A smooth, stable, and high-quality game image is provided by accurately pre-reading the background data required for image processing each time. The game device reads background data required for a game, which displays the condition of a moving vehicle within a virtual three-dimensional space together with a background, into main memory from a CD-ROM (recording medium) prior to image processing. The device comprises a pre-reading unit for pre-reading background data from the recording medium when a start line (reference line), set at a specified distance beyond the limit line of the display's visual field, crosses a new area. The recording medium records the background data divided into a plurality of areas in advance, and the pre-reading unit comprises a unit for judging which of the areas the reference line is crossing and a reading unit for reading into memory the background data of the area judged as being crossed by the reference line.Type: GrantFiled: February 7, 2008Date of Patent: February 28, 2012Assignee: Kabushiki Kaisha SegaInventor: Masaaki Ito
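The pre-reading trigger can be sketched as follows (the area size and lead distance are illustrative assumptions): the reference line sits a fixed distance beyond the visual-field limit, and whichever area it crosses is loaded before rendering needs it.

```python
def area_index(x, area_size=100.0):
    """Index of the background-data area containing coordinate x."""
    return int(x // area_size)

def area_to_preload(view_limit_x, lead_distance=150.0, area_size=100.0):
    """Area crossed by the reference line, set a specified distance
    beyond the visual-field limit; this area's background data is read
    from the recording medium before rendering reaches it."""
    return area_index(view_limit_x + lead_distance, area_size)
```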