Patents Assigned to Electric Planet
-
Patent number: 7184048
Abstract: A system and method are disclosed for generating an animatable object. A skeleton of the desired character is constructed by the user from various predetermined components. These components include a varied selection of rods and joints: the rods are static components which remain rigid during motion, while the joints are moveable components. A static digitized image, for example an image of the user, is used, and the constructed skeleton is superimposed onto it. The desired object, such as the image of the user, can then be extracted from the background of the digital image, and the resulting personal character can be animated, for instance by selecting and dragging one of the hands with a mouse.
Type: Grant
Filed: October 18, 2001
Date of Patent: February 27, 2007
Assignee: Electric Planet, Inc.
Inventor: Kevin L. Hunter
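The rod-and-joint skeleton described in this abstract can be sketched as a small data structure: rods keep a fixed length, and dragging one joint (a hand, say) pulls connected joints along so each rod stays rigid. This is a minimal illustration, not the patent's implementation; all class and method names are invented.

```python
import math

class Joint:
    """A moveable point of the skeleton."""
    def __init__(self, name, x, y):
        self.name, self.x, self.y = name, x, y

class Rod:
    """A rigid link between two joints; its length must not change."""
    def __init__(self, a, b):
        self.a, self.b = a, b
        self.length = math.hypot(b.x - a.x, b.y - a.y)

class Skeleton:
    def __init__(self):
        self.joints, self.rods = {}, []

    def add_joint(self, name, x, y):
        self.joints[name] = Joint(name, x, y)

    def connect(self, a, b):
        self.rods.append(Rod(self.joints[a], self.joints[b]))

    def drag_joint(self, name, x, y):
        # Move one joint (e.g. a hand dragged with the mouse), then pull
        # each joint connected to it back onto its rod so rod lengths
        # stay fixed -- a one-step rigidity constraint.
        j = self.joints[name]
        j.x, j.y = x, y
        for rod in self.rods:
            if rod.a is j or rod.b is j:
                other = rod.b if rod.a is j else rod.a
                dx, dy = other.x - j.x, other.y - j.y
                d = math.hypot(dx, dy) or 1.0
                other.x = j.x + dx / d * rod.length
                other.y = j.y + dy / d * rod.length
```

For example, dragging a "hand" joint that sits at the end of a length-5 rod leaves the connected "shoulder" joint exactly 5 units away from the hand's new position.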
-
Patent number: 7167577
Abstract: Disclosed is a method for providing visual chat. A character image is read into memory, representing a character the user wishes to be for the duration of the visual chat. Continuous frames of video images that include image data of a person are then received, typically from a video camera. The head image of the person is tracked by the system, and portions of the head image, preferably features of the person, are extracted from the video images. Finally, the extracted portions of the head image are blended into corresponding areas of the character image, such that the features of the blended character image match the features of the person and change as the features of the person change.
Type: Grant
Filed: February 23, 2005
Date of Patent: January 23, 2007
Assignee: Electric Planet, Inc.
Inventor: Charles Kellner
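The blending step above can be sketched as a simple alpha blend of an extracted feature patch into the matching region of the character image. Pure-Python grayscale "images" (lists of rows) keep this self-contained; a real system would work on camera frames, and the blend weight here is an assumption, not a value from the patent.

```python
def blend_region(character, feature, top, left, alpha=0.7):
    """Return a copy of `character` with `feature` blended in at (top, left)."""
    out = [row[:] for row in character]
    for i, row in enumerate(feature):
        for j, v in enumerate(row):
            c = out[top + i][left + j]
            # Weighted mix: extracted live pixels dominate, but some of
            # the character image shows through.
            out[top + i][left + j] = round(alpha * v + (1 - alpha) * c)
    return out

character = [[100] * 4 for _ in range(4)]  # flat gray character face
mouth = [[0, 0], [0, 0]]                   # extracted dark mouth pixels
frame1 = blend_region(character, mouth, 2, 1)
```

Re-running the blend with each new video frame is what makes the character's features change as the person's features change.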
-
Patent number: 7162082
Abstract: A background subtraction apparatus of the present invention includes a key point locator for locating key points on a known object type, a boundary point locator for locating boundary points of the known object that make up the edges of the known object, and an edge processor for processing the edges to provide a clean-edged extraction of the known object from a background image. Preferably, the key point locator includes an alignment detector for detecting alignment of an image of the known object type with a skeleton image. Still more preferably, the skeleton image is an exoskeleton image and the known object type is a human being.
Type: Grant
Filed: April 18, 2002
Date of Patent: January 9, 2007
Assignee: Electric Planet, Inc.
Inventor: Jeffrey L. Edwards
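The apparatus above refines edges with key points and boundary points, but its starting point is separating the known object from a background image. A minimal sketch of that subtraction step (the threshold value is an illustrative assumption; the patent's key-point and edge-processing stages are beyond this sketch):

```python
def subtract_background(frame, background, threshold=25):
    """Mark pixels that differ enough from the background model as foreground (1)."""
    return [
        [1 if abs(f - b) > threshold else 0 for f, b in zip(frow, brow)]
        for frow, brow in zip(frame, background)
    ]

background = [[50, 50, 50], [50, 50, 50]]   # per-pixel background model
frame      = [[50, 200, 50], [50, 210, 55]]  # object pixels stand out
mask = subtract_background(frame, background)
# mask -> [[0, 1, 0], [0, 1, 0]]
```

The patent's contribution is then cleaning the ragged edges of such a mask into a clean-edged extraction of the object.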
-
Patent number: 7113918
Abstract: A method is provided for conducting commerce over a network via vision-enabled content. First, content is encoded to convert it into vision-enabled content. Payment is received for vision-enabling the content. Also, a program to decode the vision-enabled content is provided. Finally, the vision-enabled content is sent to a user over a network. The program decodes the vision-enabled content and receives an image of the user. The vision-enabled content may include advertising content, entertainment content, and educational or instructional content. In one embodiment, the program combines the image of the user with the vision-enabled content. In another embodiment, the program utilizes the image of the user to control the vision-enabled content.
Type: Grant
Filed: August 1, 1999
Date of Patent: September 26, 2006
Assignee: Electric Planet, Inc.
Inventors: Subutai Ahmad, G. Scott France
-
Patent number: 7091993
Abstract: A system and method are disclosed for digitally compositing an object from an input image onto a destination image. The object is composited from an image having an arbitrary or non-uniform colored background containing some non-static elements onto a destination image, with reduced effects from shadows cast by the object and with reduced gaps or holes within the object. Various improvements in the compositing procedure, such as shadow reduction and hole filling, and less restrictive requirements regarding the object's surroundings are disclosed. A background model is created and a frame of an input image containing the object is obtained. An alpha image is created in which each pixel is either a zero, indicating it is not part of the object, or a one, indicating that it is part of the object.
Type: Grant
Filed: January 24, 2003
Date of Patent: August 15, 2006
Assignee: Electric Planet, Inc.
Inventor: Subutai Ahmad
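The binary alpha image described above acts as a per-pixel blending coefficient: where alpha is one, the object pixel from the input frame is used; where it is zero, the destination image shows through. A minimal sketch with pure-Python grayscale rows standing in for image buffers:

```python
def composite(input_img, dest_img, alpha_img):
    """Blend input onto destination using the alpha image as coefficient."""
    return [
        [a * i + (1 - a) * d for i, d, a in zip(irow, drow, arow)]
        for irow, drow, arow in zip(input_img, dest_img, alpha_img)
    ]

alpha = [[0, 1], [1, 0]]   # 1 = object pixel, 0 = background
inp   = [[9, 9], [9, 9]]   # input frame containing the object
dest  = [[1, 1], [1, 1]]   # destination image
out = composite(inp, dest, alpha)
# out -> [[1, 9], [9, 1]]
```

The patent's shadow-reduction and hole-filling steps would refine the alpha image before this final blend.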
-
Patent number: 6971882
Abstract: An interactive karaoke system includes a microphone developing an audio input from at least one karaoke performer; a camera producing a series of video frames including the at least one karaoke performer; and a karaoke processor system including a video environment and a related audio environment for the karaoke performer. The karaoke processor system is coupled to the camera to create extracted images of the at least one karaoke performer from the series of video frames and to composite the extracted images with a background derived from the video environment. The video environment is affected by at least one of a position and a movement of the at least one karaoke performer. A karaoke network includes a local area network; a local karaoke server coupled to the local area network and storing local karaoke content; and a number of karaoke systems coupled to the local area network, each of which can request karaoke content from the local karaoke server.
Type: Grant
Filed: December 30, 2003
Date of Patent: December 6, 2005
Assignee: Electric Planet, Inc.
Inventors: David Kumar, Subutai Ahmad
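One way the performer's position could affect the video environment, as the abstract describes, is to let the centroid of the performer's extracted mask choose which background of the environment is shown. A toy sketch; the scene names and the centroid-to-scene rule are invented for illustration, not taken from the patent:

```python
def pick_scene(mask, scenes):
    """Choose a scene based on the horizontal centroid of the performer's mask."""
    xs = [x for row in mask for x, v in enumerate(row) if v]
    if not xs:
        return scenes[0]  # no performer found: default scene
    centroid = sum(xs) / len(xs)
    width = len(mask[0])
    # Divide the frame into len(scenes) vertical bands.
    return scenes[int(centroid / width * len(scenes))]

scenes = ["stage-left", "center-stage", "stage-right"]
mask = [[0, 0, 0, 1], [0, 0, 0, 1]]   # performer on the right side
```

Calling `pick_scene(mask, scenes)` with this mask selects `"stage-right"`; each new frame's mask would be evaluated the same way, so the environment follows the performer's movement.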
-
Patent number: 6909455
Abstract: A system, method and article of manufacture are provided for tracking a head portion of a person image in video images. Upon receiving video images, a first head tracking operation is executed for generating a first confidence value. Such first confidence value is representative of a confidence that a head portion of a person image in the video images is correctly located. Also executed is a second head tracking operation for generating a second confidence value representative of a confidence that the head portion of the person image in the video images is correctly located. The first confidence value and the second confidence value are then outputted. Subsequently, the depiction of the head portion of the person image in the video images is based on the first confidence value and the second confidence value.
Type: Grant
Filed: January 28, 2003
Date of Patent: June 21, 2005
Assignee: Electric Planet, Inc.
Inventors: Jeffrey L. Edwards, Katerina H. Nguyen
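The idea of basing the final depiction on both confidence values can be sketched with the simplest possible combination rule: take whichever tracker is more confident. The patent does not mandate this particular rule; it is one illustrative way to consume two (location, confidence) pairs.

```python
def combine_trackers(loc1, conf1, loc2, conf2):
    """Return the (location, confidence) of the more confident head tracker."""
    return (loc1, conf1) if conf1 >= conf2 else (loc2, conf2)

# Tracker 1 (e.g. color-based) vs. tracker 2 (e.g. shape-based):
loc, conf = combine_trackers((120, 80), 0.4, (118, 83), 0.9)
# -> loc == (118, 83), conf == 0.9
```

Richer rules, such as a confidence-weighted average of the two locations, fit the same interface.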
-
Patent number: 6876754
Abstract: Disclosed is a method for providing visual chat. A character image is read into memory, representing a character the user wishes to be for the duration of the visual chat. Continuous frames of video images that include image data of a person are then received, typically from a video camera. The head image of the person is tracked by the system, and portions of the head image, preferably features of the person, are extracted from the video images. Finally, the extracted portions of the head image are blended into corresponding areas of the character image, such that the features of the blended character image match the features of the person and change as the features of the person change.
Type: Grant
Filed: February 7, 2003
Date of Patent: April 5, 2005
Assignee: Electric Planet
Inventor: Charles Kellner
-
Patent number: 6775835
Abstract: Presented herein is a video enhancer plug-in for a web browser that may interface either directly with the browser or indirectly, as a plug-in for another, more general multimedia browser plug-in. The video enhancer consists primarily of video-enhanced scripts and a library of specialized routines. A script is intended to provide a visually interesting display within a web browser environment and calls upon routines in the library to do so. These routines may, for example, take live video input from a video camera and perform specific functions such as locating the head of an individual within the camera's field of view. Because the script runs within a web browser, components of the system can be widely separated from one another, so long as they are connected across a wide area network such as the Internet.
Type: Grant
Filed: July 30, 1999
Date of Patent: August 10, 2004
Assignee: Electric Planet
Inventors: Subutai Ahmad, Jonathan Cohen
-
Patent number: 6692259
Abstract: An interactive karaoke system includes a microphone developing an audio input from at least one karaoke performer; a camera producing a series of video frames including the at least one karaoke performer; and a karaoke processor system including a video environment and a related audio environment for the karaoke performer. The karaoke processor system is coupled to the camera to create extracted images of the at least one karaoke performer from the series of video frames and to composite the extracted images with a background derived from the video environment. The video environment is affected by at least one of a position and a movement of the at least one karaoke performer. A karaoke network includes a local area network; a local karaoke server coupled to the local area network and storing local karaoke content; and a number of karaoke systems coupled to the local area network, each of which can request karaoke content from the local karaoke server.
Type: Grant
Filed: December 11, 2002
Date of Patent: February 17, 2004
Assignee: Electric Planet
Inventors: David Kumar, Subutai Ahmad
-
Patent number: 6545706
Abstract: A system, method and article of manufacture are provided for tracking a head portion of a person image in video images. Upon receiving video images, a first head tracking operation is executed for generating a first confidence value. Such first confidence value is representative of a confidence that a head portion of a person image in the video images is correctly located. Also executed is a second head tracking operation for generating a second confidence value representative of a confidence that the head portion of the person image in the video images is correctly located. The first confidence value and the second confidence value are then outputted. Subsequently, the depiction of the head portion of the person image in the video images is based on the first confidence value and the second confidence value.
Type: Grant
Filed: July 30, 1999
Date of Patent: April 8, 2003
Assignee: Electric Planet, Inc.
Inventors: Jeffrey L. Edwards, Katerina H. Nguyen
-
Patent number: 6539099
Abstract: Disclosed is a method for providing visual chat. A character image is read into memory, representing a character the user wishes to be for the duration of the visual chat. Continuous frames of video images that include image data of a person are then received, typically from a video camera. The head image of the person is tracked by the system, and portions of the head image, preferably features of the person, are extracted from the video images. Finally, the extracted portions of the head image are blended into corresponding areas of the character image, such that the features of the blended character image match the features of the person and change as the features of the person change.
Type: Grant
Filed: August 30, 1999
Date of Patent: March 25, 2003
Assignee: Electric Planet
Inventor: Charles Kellner
-
Patent number: 6532022
Abstract: An object is composited digitally from an input image onto a destination image. A background model is created and a frame of an input image containing the object is obtained. An alpha image is created in which each pixel is either a zero, indicating it is not part of the object, or a one, indicating that it is part of the object. The effect of shadows emanating from the object is reduced. A set of templates is then derived to allow holes or gaps in the object created during the compositing process to be filled to a large extent. The object is blended onto a destination image using the alpha image as a blending coefficient.
Type: Grant
Filed: October 15, 1997
Date of Patent: March 11, 2003
Assignee: Electric Planet, Inc.
Inventor: Subutai Ahmad
-
Patent number: 6514083
Abstract: An interactive karaoke system includes a microphone developing an audio input from at least one karaoke performer; a camera producing a series of video frames including the at least one performer; and a karaoke processor system including a video environment and a related audio environment for the karaoke performer. The karaoke processor system is coupled to the camera to create extracted images of the at least one karaoke performer from the series of video frames and to composite the extracted images with a background derived from the video environment. The video environment is affected by at least one of a position and a movement of the at least one karaoke performer. A karaoke network includes a local area network; a local karaoke server coupled to the local area network and storing local karaoke content; and a number of karaoke systems coupled to the local area network, each of which can request karaoke content from the local karaoke server.
Type: Grant
Filed: January 6, 1999
Date of Patent: February 4, 2003
Assignee: Electric Planet, Inc.
Inventors: David Kumar, Subutai Ahmad
-
Patent number: 6489989
Abstract: A system, method and article of manufacture are provided for executing a setup protocol. A camera and a visual display device are coupled to a computer. Images are generated by the camera upon activation of the camera. A series of setup tests are then conducted on the images generated by the camera to determine whether the camera and surrounding environmental elements satisfy predetermined criteria of an intended computer vision application, for optimal running of the intended computer vision application on the computer. The series of setup tests comprises at least one setup test selected from a library of setup tests.
Type: Grant
Filed: September 15, 1999
Date of Patent: December 3, 2002
Assignee: Electric Planet, Inc.
Inventors: Nella Shapiro, Jeffrey L. Edwards
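The setup protocol above, selecting tests from a library and running them on camera frames, can be sketched as follows. The specific tests (brightness and contrast checks) and their thresholds are illustrative assumptions; the patent only requires that tests be chosen from a library and evaluated against the application's criteria.

```python
def brightness_ok(frame, lo=40, hi=220):
    """Mean pixel level must fall in a usable range."""
    pixels = [v for row in frame for v in row]
    return lo <= sum(pixels) / len(pixels) <= hi

def contrast_ok(frame, min_range=30):
    """The frame must have enough dynamic range to be useful."""
    pixels = [v for row in frame for v in row]
    return max(pixels) - min(pixels) >= min_range

# The "library of setup tests" the application selects from.
TEST_LIBRARY = {"brightness": brightness_ok, "contrast": contrast_ok}

def run_setup(frame, selected):
    """Run the chosen setup tests; all must pass for the app to proceed."""
    results = {name: TEST_LIBRARY[name](frame) for name in selected}
    return all(results.values()), results

frame = [[60, 200], [90, 150]]   # one captured grayscale frame
ok, results = run_setup(frame, ["brightness", "contrast"])
```

A failing test would prompt the user to adjust lighting or camera placement before the vision application starts.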
-
Publication number: 20020172433
Abstract: A background subtraction apparatus of the present invention includes a key point locator for locating key points on a known object type, a boundary point locator for locating boundary points of the known object that make up the edges of the known object, and an edge processor for processing the edges to provide a clean-edged extraction of the known object from a background image. Preferably, the key point locator includes an alignment detector for detecting alignment of an image of the known object type with a skeleton image. Still more preferably, the skeleton image is an exoskeleton image and the known object type is a human being.
Type: Application
Filed: April 18, 2002
Publication date: November 21, 2002
Applicant: Electric Planet, Inc.
Inventor: Jeffrey L. Edwards
-
Patent number: 6411744
Abstract: A background subtraction apparatus of the present invention includes a key point locator for locating key points on a known object type, a boundary point locator for locating boundary points of the known object that make up the edges of the known object, and an edge processor for processing the edges to provide a clean-edged extraction of the known object from a background image. Preferably, the key point locator includes an alignment detector for detecting alignment of an image of the known object type with a skeleton image. Still more preferably, the skeleton image is an exoskeleton image and the known object type is a human being.
Type: Grant
Filed: October 15, 1998
Date of Patent: June 25, 2002
Assignee: Electric Planet, Inc.
Inventor: Jeffrey L. Edwards
-
Patent number: 6384819
Abstract: A system and method are disclosed for generating an animatable object. A skeleton of the desired character is constructed by the user from various predetermined components. These components include a varied selection of rods and joints: the rods are static components which remain rigid during motion, while the joints are moveable components. A static digitized image, for example an image of the user, is used, and the constructed skeleton is superimposed onto it. The desired object, such as the image of the user, can then be extracted from the background of the digital image, and the resulting personal character can be animated, for instance by selecting and dragging one of the hands with a mouse.
Type: Grant
Filed: October 15, 1998
Date of Patent: May 7, 2002
Assignee: Electric Planet, Inc.
Inventor: Kevin L. Hunter
-
Patent number: 6256033
Abstract: A system and method are disclosed for providing a gesture recognition system for recognizing gestures made by a moving subject within an image and performing an operation based on the semantic meaning of the gesture. A subject, such as a human being, enters the viewing field of a camera connected to a computer and performs a gesture, such as flapping of the arms. The gesture is then examined by the system one image frame at a time. Positional data is derived from the input frames and compared to data representing gestures already known to the system. The comparisons are done in real-time and the system can be trained to better recognize known gestures or to recognize new gestures. A frame of the input image containing the subject is obtained after a background image model has been created. An input frame is used to derive a frame data set that contains particular coordinates of the subject at a given moment in time.
Type: Grant
Filed: August 10, 1999
Date of Patent: July 3, 2001
Assignee: Electric Planet
Inventor: Katerina H. Nguyen
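The comparison step described above, matching per-frame positional data against gestures already known to the system, can be sketched with nearest-template matching. Summed absolute distance is a deliberately simple stand-in for the patent's comparison method, and the gesture names and coordinate sequences are invented for illustration.

```python
def gesture_distance(seq, template):
    """Summed per-frame distance between an observed sequence and a template."""
    return sum(abs(a - b) for a, b in zip(seq, template))

def recognize(seq, known_gestures):
    """Return the name of the closest known gesture."""
    return min(known_gestures,
               key=lambda name: gesture_distance(seq, known_gestures[name]))

# Toy "positional data": one vertical hand coordinate per input frame.
known = {
    "flap":  [0, 10, 0, 10, 0, 10],   # arms flapping up and down
    "raise": [0, 2, 4, 6, 8, 10],     # arm rising steadily
}
observed = [1, 9, 1, 10, 0, 9]
# recognize(observed, known) -> "flap"
```

Training, in this picture, amounts to adding or refining the stored template sequences.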
-
Patent number: 6141463
Abstract: To estimate the configuration of a figure in a captured image, a silhouette image of the figure is scanned to create a signed distance image. This image identifies the distance of each pixel in the image to the closest edge of the silhouette, and indicates whether the pixel is inside or outside of the silhouette. Multiple distance images of this type are employed to generate an eigen-points model, which provides an affine mapping from the signed distance images to the limb parameters of an authored skeleton. When a new input image is received, it is first processed to create the signed distance image, and this image is applied to the eigen-points model to estimate limb parameters, such as the locations of various joints in the figure. From this information, each foreground pixel in the captured image can be assigned to one of the limbs.
Type: Grant
Filed: December 3, 1997
Date of Patent: October 31, 2000
Assignee: Electric Planet Interactive
Inventors: Michele Covell, Subutai Ahmad
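The signed distance image described above can be sketched directly: each pixel's value is its distance to the nearest silhouette edge (Manhattan distance here, for brevity), negative outside the silhouette, positive inside, and zero on the edge itself. This brute-force version is illustrative only; the eigen-points mapping to limb parameters is beyond the sketch.

```python
def signed_distance_image(silhouette):
    """Compute a signed (Manhattan) distance image from a binary silhouette."""
    h, w = len(silhouette), len(silhouette[0])

    def is_edge(y, x):
        # An edge pixel has at least one 4-neighbor with a different value.
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and silhouette[ny][nx] != silhouette[y][x]:
                return True
        return False

    edges = [(y, x) for y in range(h) for x in range(w) if is_edge(y, x)]
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            d = min(abs(y - ey) + abs(x - ex) for ey, ex in edges)
            # Positive inside the silhouette, negative outside.
            row.append(d if silhouette[y][x] else -d)
        out.append(row)
    return out

sil = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
sdi = signed_distance_image(sil)
```

For the corner pixel of this tiny silhouette, `sdi[0][0]` is `-1` (one step outside the nearest edge), while the interior pixels sit on the edge and get `0`.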