Patents by Inventor Craig Peeper

Craig Peeper has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20120146902
    Abstract: Techniques are provided for re-orienting a field of view of a depth camera having one or more sensors. The depth camera may have one or more sensors for generating a depth image and may also have an RGB camera. In some embodiments, the field of view is re-oriented based on the depth image. The position of the sensor(s) may be altered to change the field of view automatically based on an analysis of objects in the depth image. The re-orientation process may be repeated until a desired orientation of the sensor is determined. Input from the RGB camera might be used to validate a final orientation of the depth camera, but is not required during the process of determining a new possible orientation of the field of view.
    Type: Application
    Filed: December 8, 2010
    Publication date: June 14, 2012
    Applicant: Microsoft Corporation
    Inventors: Stanley W. Adermann, Mark Plagge, Craig Peeper, Szymon Stachniak, David C. Kline
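
A minimal Python sketch of the re-orientation loop described in the abstract above. It assumes a hypothetical get_depth_frame() helper returning a depth image as a NumPy array and a hypothetical set_tilt_degrees() motor control; the depth thresholds, the centering criterion, and the tilt sign convention are illustrative assumptions, not details from the patent.

```python
import numpy as np

def reorient_tilt(get_depth_frame, set_tilt_degrees, max_iters=5,
                  near_mm=500, far_mm=4000, step_deg=5.0, tolerance=0.1):
    """Iteratively nudge a depth sensor's tilt until foreground objects are
    vertically centered in the depth image."""
    for _ in range(max_iters):
        depth = get_depth_frame()
        mask = (depth > near_mm) & (depth < far_mm)      # plausible object pixels
        if not mask.any():
            set_tilt_degrees(step_deg)                   # nothing in view: keep sweeping
            continue
        rows = np.nonzero(mask)[0]
        offset = rows.mean() / depth.shape[0] - 0.5      # -0.5 (top) .. +0.5 (bottom)
        if abs(offset) < tolerance:
            return True                                  # objects roughly centered
        set_tilt_degrees(-np.sign(offset) * step_deg)    # sign convention depends on hardware
    return False
```
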
  • Publication number: 20120128208
    Abstract: An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A background included in the grid of voxels may also be removed to isolate one or more voxels associated with a foreground object such as a human target. A location or position of one or more extremities of the isolated human target may be determined and a model may be adjusted based on the location or position of the one or more extremities.
    Type: Application
    Filed: February 2, 2012
    Publication date: May 24, 2012
    Applicant: Microsoft Corporation
    Inventors: Tommer Leyvand, Johnny Lee, Szymon Stachniak, Craig Peeper, Shao Liu
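
As a rough illustration of the processing the abstract above describes, this Python/NumPy sketch downsamples a depth image into a coarse grid by block averaging, removes background with a simple depth cutoff, and picks one extremity as the topmost occupied cell. The block size, the fixed-depth background test, and the head heuristic are assumptions for illustration only.

```python
import numpy as np

def depth_to_voxels(depth_mm, block=4):
    """Downsample a depth image into a coarse grid by averaging non-zero pixels per block."""
    h, w = depth_mm.shape
    h, w = h - h % block, w - w % block
    blocks = depth_mm[:h, :w].reshape(h // block, block, w // block, block)
    valid = blocks > 0
    counts = valid.sum(axis=(1, 3))
    sums = np.where(valid, blocks, 0).sum(axis=(1, 3))
    return np.where(counts > 0, sums / np.maximum(counts, 1), 0)   # grid of voxel depths

def isolate_foreground(voxels, max_depth_mm=3000):
    """Crude background removal: keep only voxels closer than a fixed depth."""
    return np.where((voxels > 0) & (voxels < max_depth_mm), voxels, 0)

def head_extremity(fg):
    """Estimate one extremity (the head) as the centroid column of the highest occupied row."""
    rows, cols = np.nonzero(fg)
    if rows.size == 0:
        return None
    top = rows.min()
    return top, int(cols[rows == top].mean())
```
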
  • Publication number: 20120057753
    Abstract: An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A model may be adjusted based on a location or position of one or more extremities estimated or determined for a human target in the grid of voxels. The model may also be adjusted based on a default location or position of the model in a default pose such as a T-pose, a DaVinci pose, and/or a natural pose.
    Type: Application
    Filed: November 4, 2011
    Publication date: March 8, 2012
    Applicant: Microsoft Corporation
    Inventors: Johnny Chung Lee, Tommer Leyvand, Simon Piotr Stachniak, Craig Peeper
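
A minimal sketch of the model-adjustment idea in the abstract above, assuming a hypothetical joint dictionary, per-joint confidence values, and a hard-coded T-pose; the blend weights are arbitrary and only illustrate pulling joints toward observed extremities when they are reliable and relaxing them toward the default pose otherwise.

```python
import numpy as np

# Hypothetical default T-pose joint positions (x, y, z) in model space.
T_POSE = {"head": (0.0, 1.7, 0.0), "left_hand": (-0.9, 1.4, 0.0), "right_hand": (0.9, 1.4, 0.0)}

def adjust_model(model, observed, confidence):
    """Pull each joint toward its observed extremity when confidence is high,
    otherwise relax it toward the default T-pose; then smooth against the prior model."""
    adjusted = {}
    for joint, default_pos in T_POSE.items():
        default = np.asarray(default_pos, dtype=float)
        obs = observed.get(joint)
        c = confidence.get(joint, 0.0)
        target = np.asarray(obs, dtype=float) if obs is not None else default
        blend = c * target + (1.0 - c) * default          # observation vs. default pose
        adjusted[joint] = 0.5 * np.asarray(model[joint], dtype=float) + 0.5 * blend
    return adjusted
```
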
  • Publication number: 20120038657
    Abstract: Systems and associated methods for processing textures in a graphical processing unit (GPU) are disclosed. Textures may be managed on a per region (e.g., tile) basis, which allows efficient use of texture memory. Moreover, very large textures may be used. Techniques provide for both texture streaming and sparse textures. A GPU texture unit may be used to intelligently clamp LOD based on a shader specified value. The texture unit may provide feedback to the shader to allow the shader to react conditionally based on whether clamping was used. Per region (e.g., per-tile) independent mipmap stacks may be used to allow very large textures.
    Type: Application
    Filed: August 16, 2010
    Publication date: February 16, 2012
    Applicant: Microsoft Corporation
    Inventors: Mark S. Grossman, Charles N. Boyd, Allison W. Klein, Craig Peeper
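
The LOD clamping and shader feedback mechanism mentioned in the abstract can be illustrated with a few lines of Python; resident_levels is a hypothetical set of mip levels currently streamed in for a tile, and the return value stands in for the feedback a texture unit might report back to the shader.

```python
def sample_with_lod_clamp(resident_levels, requested_lod):
    """Clamp the requested mip level to the finest level actually resident for this
    tile. Returns (lod_used, was_clamped) so the caller can react, for example by
    asking the streaming system to page in finer tiles."""
    finest_resident = min(resident_levels)          # smaller LOD index = finer mip
    lod_used = max(requested_lod, finest_resident)  # never sample a non-resident level
    return lod_used, lod_used != requested_lod

# Example: only mips 2..4 are resident, so a request for mip 0 is clamped to 2.
print(sample_with_lod_clamp({2, 3, 4}, requested_lod=0))   # -> (2, True)
```
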
  • Patent number: 8081181
    Abstract: The architecture implements A-buffer in hardware by extending hardware to efficiently store a variable amount of data for each pixel. In operation, a prepass is performed to generate the counts of the fragments per pixel in a count buffer, followed by a prefix sum pass on the generated count buffer to calculate locations in a fragment buffer in which to store all the fragments linearly. An index is generated for a given pixel in the prefix sum pass and stored in a location buffer. Access to the pixel fragments is then accomplished using the index. Linear storage of the data allows for a fast rendering pass that stores all the fragments to a memory buffer without needing to look at the contents of the fragments. This is then followed by a resolve pass on the fragment buffer to generate the final image.
    Type: Grant
    Filed: June 20, 2007
    Date of Patent: December 20, 2011
    Assignee: Microsoft Corporation
    Inventor: Craig Peeper
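
The count / prefix-sum / resolve pipeline in this abstract maps naturally onto a short NumPy sketch. Here fragments_per_pixel stands in for the count buffer from the prepass and the per-pixel Python lists stand in for the fragments produced by the rendering pass, purely for illustration.

```python
import numpy as np

def build_a_buffer(fragments_per_pixel, fragment_lists):
    """Lay out per-pixel fragment lists linearly using a prefix sum over the count buffer."""
    counts = np.asarray(fragments_per_pixel)
    starts = np.concatenate(([0], np.cumsum(counts)[:-1]))   # the location buffer
    fragment_buffer = np.empty(counts.sum(), dtype=float)
    for pixel, frags in enumerate(fragment_lists):
        s = starts[pixel]
        fragment_buffer[s:s + counts[pixel]] = frags          # store fragments linearly
    return starts, fragment_buffer

def resolve(starts, counts, fragment_buffer, pixel):
    """Resolve pass for one pixel: fetch its fragments via the stored index."""
    s, n = starts[pixel], counts[pixel]
    return sorted(fragment_buffer[s:s + n])                   # e.g. sort by depth before blending

# Example: three pixels with 2, 0 and 1 fragments respectively.
starts, buf = build_a_buffer([2, 0, 1], [[0.7, 0.2], [], [0.5]])
print(resolve(starts, [2, 0, 1], buf, pixel=0))               # -> [0.2, 0.7]
```
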
  • Publication number: 20110234589
    Abstract: An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A model may be adjusted based on a location or position of one or more extremities estimated or determined for a human target in the grid of voxels. The model may also be adjusted based on a default location or position of the model in a default pose such as a T-pose, a DaVinci pose, and/or a natural pose.
    Type: Application
    Filed: June 9, 2011
    Publication date: September 29, 2011
    Applicant: Microsoft Corporation
    Inventors: Johnny Chung Lee, Tommer Leyvand, Simon Piotr Stachniak, Craig Peeper
  • Publication number: 20110150271
    Abstract: A sensor system creates a sequence of depth images that are used to detect and track motion of objects within range of the sensor system. A reference image is created and updated based on a moving average (or other function) of a set of depth images. A new depth image is compared to the reference image to create a motion image, which is an image file (or other data structure) with data representing motion. The new depth image is also used to update the reference image. The data in the motion image is grouped and associated with one or more objects being tracked. The tracking of the objects is updated by the grouped data in the motion image. The new positions of the objects are used to update an application. For example, a video game system will update the position of images displayed in the video based on the new positions of the objects. In one implementation, avatars can be moved based on movement of the user in front of a camera.
    Type: Application
    Filed: December 18, 2009
    Publication date: June 23, 2011
    Applicant: Microsoft Corporation
    Inventors: Johnny Lee, Tommer Leyvand, Craig Peeper
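
A compact Python sketch of the moving-average reference image and motion image described above; the blending weight and motion threshold are illustrative values, and the boolean array returned by update() plays the role of the motion image whose grouped data would drive object tracking.

```python
import numpy as np

class MotionTracker:
    """Keep a running-average reference depth image and flag pixels that moved."""
    def __init__(self, alpha=0.05, threshold_mm=50):
        self.alpha = alpha                  # weight of the newest frame in the average
        self.threshold_mm = threshold_mm
        self.reference = None

    def update(self, depth):
        depth = depth.astype(float)
        if self.reference is None:
            self.reference = depth.copy()
            return np.zeros_like(depth, dtype=bool)
        motion = np.abs(depth - self.reference) > self.threshold_mm   # the motion image
        self.reference = (1 - self.alpha) * self.reference + self.alpha * depth
        return motion
```
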
  • Patent number: 7961910
    Abstract: An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A model may be adjusted based on a location or position of one or more extremities estimated or determined for a human target in the grid of voxels. The model may also be adjusted based on a default location or position of the model in a default pose such as a T-pose, a DaVinci pose, and/or a natural pose.
    Type: Grant
    Filed: November 18, 2009
    Date of Patent: June 14, 2011
    Assignee: Microsoft Corporation
    Inventors: Johnny Chung Lee, Tommer Leyvand, Simon Piotr Stachniak, Craig Peeper
  • Publication number: 20110102438
    Abstract: An image such as a depth image of a scene may be received, observed, or captured by a device. The image may then be processed. For example, the image may be downsampled; a shadow, noise, and/or a missing portion in the image may be determined; pixels in the image that may be outside a range defined by a capture device associated with the image may be determined; and a portion of the image associated with a floor may be detected. Additionally, a target in the image may be determined and scanned. A refined image may then be rendered based on the processed image. The refined image may then be processed to, for example, track a user.
    Type: Application
    Filed: November 5, 2009
    Publication date: May 5, 2011
    Applicant: Microsoft Corporation
    Inventors: Zsolt Mathe, Charles Claudius Marais, Craig Peeper, Joe Bertolami, Ryan Michael Geiss
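
A simplified Python sketch of the kind of per-pixel clean-up the abstract above describes: it flags missing (shadow/noise) readings and values outside the capture device's range, then fills them from a per-row median. The device limits and the fill strategy are assumptions, and floor detection is omitted.

```python
import numpy as np

def refine_depth(depth_mm, device_min=400, device_max=8000):
    """Flag missing / out-of-range pixels and fill them from the row median."""
    depth = depth_mm.astype(float)
    missing = depth == 0                                       # shadow / no-reading pixels
    out_of_range = (depth < device_min) | (depth > device_max)
    bad = missing | out_of_range
    masked = np.where(bad, np.nan, depth)
    row_median = np.nanmedian(masked, axis=1, keepdims=True)   # per-row fill value
    refined = np.where(bad, np.broadcast_to(row_median, depth.shape), depth)
    return np.nan_to_num(refined), bad                         # refined image + bad-pixel mask
```
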
  • Publication number: 20110081045
    Abstract: An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A model may be adjusted based on a location or position of one or more extremities estimated or determined for a human target in the grid of voxels. The model may also be adjusted based on a default location or position of the model in a default pose such as a T-pose, a DaVinci pose, and/or a natural pose.
    Type: Application
    Filed: November 18, 2009
    Publication date: April 7, 2011
    Applicant: Microsoft Corporation
    Inventors: Johnny Chung Lee, Tommer Leyvand, Simon Piotr Stachniak, Craig Peeper
  • Publication number: 20110080336
    Abstract: An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A background included in the grid of voxels may also be removed to isolate one or more voxels associated with a foreground object such as a human target. A location or position of one or more extremities of the isolated human target may be determined and a model may be adjusted based on the location or position of the one or more extremities.
    Type: Application
    Filed: October 7, 2009
    Publication date: April 7, 2011
    Applicant: Microsoft Corporation
    Inventors: Tommer Leyvand, Johnny Lee, Simon Stachniak, Craig Peeper, Shao Liu
  • Publication number: 20110081044
    Abstract: An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A background included in the grid of voxels may then be discarded to isolate one or more voxels associated with a foreground object such as a human target and the isolated voxels associated with the foreground object may be processed.
    Type: Application
    Filed: October 7, 2009
    Publication date: April 7, 2011
    Applicant: Microsoft Corporation
    Inventors: Craig Peeper, Johnny Lee, Tommer Leyvand, Simon Stachniak
  • Publication number: 20110080475
    Abstract: An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A background included in the grid of voxels may also be removed to isolate one or more voxels associated with a foreground object such as a human target. A location or position of one or more extremities of the isolated human target may then be determined.
    Type: Application
    Filed: November 11, 2009
    Publication date: April 7, 2011
    Applicant: Microsoft Corporation
    Inventors: Johnny Lee, Tommer Leyvand, Simon Piotr Stachniak, Craig Peeper, Shao Liu
  • Publication number: 20100197390
    Abstract: A method of tracking a target includes receiving from a source an observed depth image of a scene including the target. Each pixel of the observed depth image is labeled as either a foreground pixel belonging to the target or a background pixel not belonging to the target. Each foreground pixel is labeled with body part information indicating a likelihood that that foreground pixel belongs to one or more body parts of the target. The target is modeled with a skeleton including a plurality of skeletal points, each skeletal point including a three dimensional position derived from body part information of one or more foreground pixels.
    Type: Application
    Filed: October 21, 2009
    Publication date: August 5, 2010
    Applicant: Microsoft Corporation
    Inventors: Robert Matthew Craig, Tommer Leyvand, Craig Peeper, Momim M. Al-Ghosien, Matt Bronder, Oliver Williams, Ryan M. Geiss, Jamie Daniel Joseph Shotton, Johnny Lee, Mark Finocchio
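
The last step of the abstract above, deriving skeletal points from per-pixel body part information, can be sketched as a likelihood-weighted centroid. The input layout (an (N, 3) array of foreground pixel positions and a dict of per-part likelihood vectors) is an assumption made for the example.

```python
import numpy as np

def skeletal_points(positions_3d, part_likelihoods):
    """Derive one 3-D skeletal point per body part as the likelihood-weighted
    centroid of the foreground pixels.

    positions_3d: (N, 3) world-space positions of foreground pixels
    part_likelihoods: dict mapping part name -> (N,) per-pixel likelihood of that part
    """
    skeleton = {}
    pts = np.asarray(positions_3d, dtype=float)
    for part, w in part_likelihoods.items():
        w = np.asarray(w, dtype=float)
        if w.sum() > 0:
            skeleton[part] = (pts * w[:, None]).sum(axis=0) / w.sum()
    return skeleton
```
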
  • Publication number: 20090322751
    Abstract: Allocation of memory registers for shaders by a processor is described herein. For each shader, registers are allocated based on the shader's level of complexity. Simpler shader instances are restricted to a smaller number of memory registers. More complex shader instances are allotted more registers. To do so, the developers' high level shading language (HLSL) code includes template classes of shaders that can later be replaced by complex or simple versions of the shader. The HLSL is converted to bytecode that can be used to rasterize pixels on a computing device.
    Type: Application
    Filed: June 27, 2008
    Publication date: December 31, 2009
    Applicant: Microsoft Corporation
    Inventors: Michael V. Oneppo, Craig Peeper, Andrew L. Bliss, John L. Rapp, Mark M. Lacey
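
A deliberately tiny Python sketch of the allocation policy the abstract describes, choosing a register budget from a crude complexity measure; the instruction-count cutoff and the two budgets are invented numbers, not values from the patent.

```python
def register_budget(instruction_count, simple_limit=8, complex_limit=32, cutoff=64):
    """Give short shader instances a small register budget (so more instances can
    run concurrently) and long, complex ones a larger budget."""
    return simple_limit if instruction_count <= cutoff else complex_limit

# Example: a 20-instruction shader gets 8 registers, a 200-instruction one gets 32.
assert register_budget(20) == 8 and register_budget(200) == 32
```
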
  • Publication number: 20090217252
    Abstract: A high level shader language compiler incorporates transforms to optimize shader code for graphics processing hardware. An instruction reordering transform determines instruction encapsulations of dependent instructions that reduce concurrent register usage by the shader. A phase pulling transform re-organizes the shader's instructions into phases that reduce a measure of depth of texture loads. A register assigning transform assigns registers to lower register usage by the shader.
    Type: Application
    Filed: May 5, 2009
    Publication date: August 27, 2009
    Applicant: Microsoft Corporation
    Inventors: David Floyd Aronson, Anuj Bharat Gosalia, Craig Peeper, Daniel Kurt Baker, Loren McQuade
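
Of the three transforms listed in the abstract above, the register-assigning one is the easiest to sketch: the following Python function performs a greedy assignment over value live intervals so that peak register usage equals the maximum number of simultaneously live values. The interval representation and the greedy policy are illustrative choices, not the compiler's actual algorithm.

```python
def assign_registers(live_intervals):
    """Greedy (linear-scan style) register assignment over (start, end) live
    intervals: a register is recycled as soon as the value it held is dead."""
    order = sorted(range(len(live_intervals)), key=lambda i: live_intervals[i][0])
    free, active, assignment = [], [], {}           # active holds (end, register) pairs
    next_reg = 0
    for i in order:
        start, end = live_intervals[i]
        for e, r in [a for a in active if a[0] <= start]:
            free.append(r)                          # value is dead: recycle its register
        active = [a for a in active if a[0] > start]
        if free:
            reg = free.pop()
        else:
            reg, next_reg = next_reg, next_reg + 1  # need a brand-new register
        assignment[i] = reg
        active.append((end, reg))
    return assignment, next_reg                     # next_reg == peak register count

# Values 0 and 1 overlap so they need two registers; value 2 reuses one of them.
# assign_registers([(0, 3), (1, 2), (4, 6)]) -> ({0: 0, 1: 1, 2: 1}, 2)
```
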
  • Patent number: 7530062
    Abstract: A high level shader language compiler incorporates transforms to optimize shader code for graphics processing hardware. An instruction reordering transform determines instruction encapsulations of dependent instructions that reduce concurrent register usage by the shader. A phase pulling transform re-organizes the shader's instructions into phases that reduce a measure of depth of texture loads. A register assigning transform assigns registers to lower register usage by the shader.
    Type: Grant
    Filed: May 23, 2003
    Date of Patent: May 5, 2009
    Assignee: Microsoft Corporation
    Inventors: David Floyd Aronson, Anuj Bharat Gosalia, Craig Peeper, Daniel Kurt Baker, Loren McQuade
  • Publication number: 20080316214
    Abstract: The architecture implements A-buffer in hardware by extending hardware to efficiently store a variable amount of data for each pixel. In operation, a prepass is performed to generate the counts of the fragments per pixel in a count buffer, followed by a prefix sum pass on the generated count buffer to calculate locations in a fragment buffer in which to store all the fragments linearly. An index is generated for a given pixel in the prefix sum pass and stored in a location buffer. Access to the pixel fragments is then accomplished using the index. Linear storage of the data allows for a fast rendering pass that stores all the fragments to a memory buffer without needing to look at the contents of the fragments. This is then followed by a resolve pass on the fragment buffer to generate the final image.
    Type: Application
    Filed: June 20, 2007
    Publication date: December 25, 2008
    Applicant: Microsoft Corporation
    Inventor: Craig Peeper
  • Publication number: 20060170680
    Abstract: A shader program capable of execution on a GPU is analyzed for constant expressions. These constant expressions are replaced with references to registers or memory addresses on the GPU. A preshader is created that comprises two executable files. The first executable file contains the shader program with each constant expression removed and replaced with a unique reference accessible by the GPU. The first file is executable at the GPU. A second file contains the removed constant expressions along with instructions to place the generated values at the associated references. The second executable file is executable at a CPU. When the preshader is executed, an instance of the first file is executed at the GPU for each vertex or pixel that is displayed. One instance of the second file is executed at the CPU. As the preshader is executed, the constant expressions in the second file are evaluated and the resulting intermediate values are passed to each instance of the first file on the GPU.
    Type: Application
    Filed: January 28, 2005
    Publication date: August 3, 2006
    Applicant: Microsoft Corporation
    Inventors: Craig Peeper, Daniel Baker, David Aronson, Loren McQuade
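
A small Python illustration of the split the abstract above describes: a CPU-side "preshader" evaluates expressions that are constant across all pixels once per frame, and the simulated per-pixel shader receives only the resulting values. The specific expressions and the dictionary hand-off are invented for the example.

```python
import math

def make_preshader(time, light_intensity):
    """CPU side: evaluate expressions that are constant for the whole frame once."""
    pulse = 0.5 + 0.5 * math.sin(time)          # constant subexpression hoisted off the GPU
    return {"scale": light_intensity * pulse}   # values handed to the GPU as shader constants

def pixel_shader(pixel_color, constants):
    """GPU side (simulated): only the per-pixel work remains."""
    return tuple(c * constants["scale"] for c in pixel_color)

frame_constants = make_preshader(time=1.2, light_intensity=0.8)   # once per frame, on the CPU
shaded = pixel_shader((0.9, 0.4, 0.1), frame_constants)           # once per pixel, on the GPU
```
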
  • Publication number: 20050226520
    Abstract: The discrete cosine transform (DCT) is mapped to a graphics processing unit (GPU) instead of a central processing unit (CPU). The DCT can be implemented using a shader-based process or a host-based process. A matrix is applied to a set of pixel samples. The samples are processed in either rows or columns first, and then the processing is performed in the opposite direction. The number of times a shader program is changed is minimized by processing all samples that use a particular shader (e.g., the first shader) at the same time (e.g., in sequence).
    Type: Application
    Filed: April 13, 2004
    Publication date: October 13, 2005
    Applicant: Microsoft Corporation
    Inventors: Channing Verbeck, Craig Peeper
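
The row-then-column structure described in the abstract above corresponds to a separable DCT, which the following NumPy sketch expresses as two matrix multiplies (the same shape of work a GPU shader pass would perform); the 8x8 block size and orthonormal DCT-II normalization are standard choices, not details taken from the patent.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * x + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] = np.sqrt(1.0 / n)                  # DC row normalization
    return m

def dct2(block):
    """Separable 2-D DCT: transform rows, then columns, each pass one matrix multiply."""
    d = dct_matrix(block.shape[0])
    return d @ block @ d.T

samples = np.random.rand(8, 8)                  # an 8x8 block of pixel samples
coeffs = dct2(samples)
```
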