Patents by Inventor Charles Claudius Marais

Charles Claudius Marais has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
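Hedged Python sketches illustrating several of the methods summarized in the abstracts below appear after the listing; they are interpretive examples under stated assumptions, not the patented implementations.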

  • Patent number: 11302287
    Abstract: Color correction technology for computing and gaming systems is discussed herein which compensates for color vision deficiency among individuals. In one example, a method includes receiving a video frame having a first non-linear transfer function and processing the video frame to have a linear transfer function. The method also includes applying a color transform to the video frame having the linear transfer function to produce at least altered color appearance parameters on selected colors that increase color perceptibility of the video frame for a colorblindness condition, and processing the video frame after the color transform to have a second non-linear transfer function and produce an output video frame. The method also includes transferring the output video frame for display on a display device.
    Type: Grant
    Filed: November 10, 2020
    Date of Patent: April 12, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ashley Nicole Allard, Paul John Olczak, Charles Claudius Marais, Ioana Monica Preda, Kevin Hampton Cogger, Michael Paul Erich Von Hippel, Aresh Mishra
  • Patent number: 11049224
    Abstract: Methods, systems and computer program products are described herein that enable the identification and correction of incorrect and/or inconsistent tones in the bright regions of an HDR image. A bright region is identified in an image. The bright region is classified into an assigned classification. A luminance value of the bright region is determined and compared to a predefined luminance value corresponding to the classification. The luminance value of the bright region is adjusted to match the predefined luminance value where there is a mismatch. Bright regions including mismatched or incorrect luminance values may be rendered on a display to include a visual indicator that such regions include mismatched luminance values.
    Type: Grant
    Filed: December 5, 2018
    Date of Patent: June 29, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Charles Claudius Marais
  • Publication number: 20200184612
    Abstract: Methods, systems and computer program products are described herein that enable the identification and correction of incorrect and/or inconsistent tones in the bright regions of an HDR image. A bright region is identified in an image. The bright region is classified into an assigned classification. A luminance value of the bright region is determined and compared to a predefined luminance value corresponding to the classification. The luminance value of the bright region is adjusted to match the predefined luminance value where there is a mismatch. Bright regions including mismatched or incorrect luminance values may be rendered on a display to include a visual indicator that such regions include mismatched luminance values.
    Type: Application
    Filed: December 5, 2018
    Publication date: June 11, 2020
    Inventor: Charles Claudius Marais
  • Patent number: 10369462
    Abstract: Embodiments of the present invention enable rich control input data to control video games that are remotely executed. Rich control input includes three-dimensional image data, color video, audio, device orientation data, and touch input. A remotely-executed video game is one executed on a server or other computing device that is networked to a client device receiving the rich control input. Rich control input includes more data than can be uploaded to a game server without degrading game performance. Embodiments of the present invention preprocess the rich control data on the client into data that may be uploaded to the game server. The rich input stream may be processed in a general way or in a game-specific way.
    Type: Grant
    Filed: October 3, 2016
    Date of Patent: August 6, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Krassimir Emilov Karamfilov, Emad Barsoum, Charles Claudius Marais, John Raymond Justice, David James Quinn, Roderick Michael Toll
  • Patent number: 9785228
    Abstract: An NUI system provides user input to a computer system. The NUI system includes a logic machine and an instruction-storage machine. The instruction-storage machine holds instructions that, when executed by the logic machine, cause the logic machine to detect an engagement gesture from a human subject or to compute an engagement metric reflecting the degree of the subject's engagement. The instructions also cause the logic machine to direct gesture-based user input from the subject to the computer system as soon as the engagement gesture is detected or the engagement metric exceeds a threshold.
    Type: Grant
    Filed: February 11, 2013
    Date of Patent: October 10, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mark Schwesinger, Eduardo Escardo Raffo, Oscar Murillo, David Bastien, Matthew H. Ahn, Mauro Giusti, Kevin Endres, Christian Klein, Julia Schwarz, Charles Claudius Marais
  • Patent number: 9717982
    Abstract: Embodiments of the present invention split game processing and rendering between a client and a game server. A rendered video game image is received from a game server and combined with a rendered image generated by the game client to form a single video game image that is presented to a user. Game play may be controlled using rich sensory input, such as three-dimensional image data and audio data. The three-dimensional image data describes the shape, size and orientation of objects present in a play space. The rich sensory input is communicated to a game server, potentially with some preprocessing, and is also consumed locally on the client, at least in part. In one embodiment, latency-sensitive features are the only features processed and rendered on the client.
    Type: Grant
    Filed: December 21, 2012
    Date of Patent: August 1, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: David James Quinn, Emad Barsoum, Charles Claudius Marais, John Raymond Justice, Krassimir Emilov Karamfilov, Roderick Michael Toll
  • Patent number: 9607213
    Abstract: A depth image of a scene may be received, observed, or captured by a device. The depth image may then be analyzed to determine whether the depth image includes a human target. For example, the depth image may include one or more targets including a human target and non-human targets. Each of the targets may be flood filled and compared to a pattern to determine whether the target may be a human target. If one or more of the targets in the depth image includes a human target, the human target may be scanned. A skeletal model of the human target may then be generated based on the scan.
    Type: Grant
    Filed: March 16, 2015
    Date of Patent: March 28, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Zsolt Mathe, Charles Claudius Marais, Ryan Michael Geiss
  • Publication number: 20170021269
    Abstract: Embodiments of the present invention enable rich control input data to control video games that are remotely executed. Rich control input includes three-dimensional image data, color video, audio, device orientation data, and touch input. A remotely-executed video game is one executed on a server or other computing device that is networked to a client device receiving the rich control input. Rich control input includes more data than can be uploaded to a game server without degrading game performance. Embodiments of the present invention preprocess the rich control data on the client into data that may be uploaded to the game server. The rich input stream may be processed in a general way or in a game-specific way.
    Type: Application
    Filed: October 3, 2016
    Publication date: January 26, 2017
    Inventors: Krassimir Emilov Karamfilov, Emad Barsoum, Charles Claudius Marais, John Raymond Justice, David James Quinn, Roderick Michael Toll
  • Patent number: 9526980
    Abstract: Embodiments of the present invention enable rich control input data to control video games that are remotely executed. Rich control input includes three-dimensional image data, color video, audio, device orientation data, and touch input. A remotely-executed video game is one executed on a server or other computing device that is networked to a client device receiving the rich control input. Rich control input includes more data than can be uploaded to a game server without degrading game performance. Embodiments of the present invention preprocess the rich control data on the client into data that may be uploaded to the game server. The rich input stream may be processed in a general way or in a game-specific way.
    Type: Grant
    Filed: December 21, 2012
    Date of Patent: December 27, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Krassimir Emilov Karamfilov, Emad Barsoum, Charles Claudius Marais, John Raymond Justice, David James Quinn, Roderick Michael Toll
  • Patent number: 9519970
    Abstract: A depth image of a scene may be received, observed, or captured by a device. A human target in the depth image may then be scanned for one or more body parts such as shoulders, hips, knees, or the like. A tilt angle may then be calculated based on the body parts. For example, a first portion of pixels associated with an upper body part such as the shoulders and a second portion of pixels associated with a lower body part such as a midpoint between the hips and knees may be selected. The tilt angle may then be calculated using the first and second portions of pixels.
    Type: Grant
    Filed: October 9, 2015
    Date of Patent: December 13, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Zsolt Mathe, Charles Claudius Marais
  • Publication number: 20160035095
    Abstract: A depth image of a scene may be received, observed, or captured by a device. A human target in the depth image may then be scanned for one or more body parts such as shoulders, hips, knees, or the like. A tilt angle may then be calculated based on the body parts. For example, a first portion of pixels associated with an upper body part such as the shoulders and a second portion of pixels associated with a lower body part such as a midpoint between the hips and knees may be selected. The tilt angle may then be calculated using the first and second portions of pixels.
    Type: Application
    Filed: October 9, 2015
    Publication date: February 4, 2016
    Inventors: Zsolt Mathe, Charles Claudius Marais
  • Patent number: 9191570
    Abstract: A depth image of a scene may be received, observed, or captured by a device. A human target in the depth image may then be scanned for one or more body parts such as shoulders, hips, knees, or the like. A tilt angle may then be calculated based on the body parts. For example, a first portion of pixels associated with an upper body part such as the shoulders and a second portion of pixels associated with a lower body part such as a midpoint between the hips and knees may be selected. The tilt angle may then be calculated using the first and second portions of pixels.
    Type: Grant
    Filed: August 5, 2013
    Date of Patent: November 17, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Zsolt Mathe, Charles Claudius Marais
  • Publication number: 20150262001
    Abstract: A depth image of a scene may be received, observed, or captured by a device. The depth image may then be analyzed to determine whether the depth image includes a human target. For example, the depth image may include one or more targets including a human target and non-human targets. Each of the targets may be flood filled and compared to a pattern to determine whether the target may be a human target. If one or more of the targets in the depth image includes a human target, the human target may be scanned. A skeletal model of the human target may then be generated based on the scan.
    Type: Application
    Filed: March 16, 2015
    Publication date: September 17, 2015
    Inventors: Zsolt Mathe, Charles Claudius Marais, Ryan Michael Geiss
  • Patent number: 9007417
    Abstract: A depth image of a scene may be received, observed, or captured by a device. The depth image may then be analyzed to determine whether the depth image includes a human target. For example, the depth image may include one or more targets including a human target and non-human targets. Each of the targets may be flood filled and compared to a pattern to determine whether the target may be a human target. If one or more of the targets in the depth image includes a human target, the human target may be scanned. A skeletal model of the human target may then be generated based on the scan.
    Type: Grant
    Filed: July 18, 2012
    Date of Patent: April 14, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Zsolt Mathe, Charles Claudius Marais, Ryan Michael Geiss
  • Patent number: 8988432
    Abstract: An image such as a depth image of a scene may be received, observed, or captured by a device. The image may then be processed. For example, the image may be downsampled; a shadow, noise, and/or a missing portion in the image may be determined; pixels in the image that may be outside a range defined by a capture device associated with the image may be determined; and a portion of the image associated with a floor may be detected. Additionally, a target in the image may be determined and scanned. A refined image may then be rendered based on the processed image. The refined image may then be processed to, for example, track a user.
    Type: Grant
    Filed: November 5, 2009
    Date of Patent: March 24, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Zsolt Mathe, Charles Claudius Marais, Craig Peeper, Joe Bertolami, Ryan Michael Geiss
  • Patent number: 8897493
    Abstract: A depth image of a scene may be received, observed, or captured by a device. The depth image may then be analyzed to determine whether the depth image includes a human target. For example, the depth image may include one or more targets including a human target and non-human targets. Each of the targets may be flood filled and compared to a pattern to determine whether the target may be a human target. If one or more of the targets in the depth image includes a human target, the human target may be scanned. A skeletal model of the human target may then be generated based on the scan.
    Type: Grant
    Filed: January 4, 2013
    Date of Patent: November 25, 2014
    Assignee: Microsoft Corporation
    Inventors: Zsolt Mathe, Charles Claudius Marais, Ryan Michael Geiss
  • Patent number: 8896721
    Abstract: A depth image of a scene may be observed or captured by a capture device. The depth image may include a human target and an environment. One or more pixels of the depth image may be analyzed to determine whether the pixels in the depth image are associated with the environment of the depth image. The one or more pixels associated with the environment may then be discarded to isolate the human target and the depth image with the isolated human target may be processed.
    Type: Grant
    Filed: January 11, 2013
    Date of Patent: November 25, 2014
    Assignee: Microsoft Corporation
    Inventors: Zsolt Mathe, Charles Claudius Marais
  • Publication number: 20140225820
    Abstract: An NUI system provides user input to a computer system. The NUI system includes a logic machine and an instruction-storage machine. The instruction-storage machine holds instructions that, when executed by the logic machine, cause the logic machine to detect an engagement gesture from a human subject or to compute an engagement metric reflecting the degree of the subject's engagement. The instructions also cause the logic machine to direct gesture-based user input from the subject to the computer system as soon as the engagement gesture is detected or the engagement metric exceeds a threshold.
    Type: Application
    Filed: February 11, 2013
    Publication date: August 14, 2014
    Applicant: Microsoft Corporation
    Inventors: Mark Schwesinger, Eduardo Escardo Raffo, Oscar Murillo, David Bastien, Matthew H. Ahn, Mauro Giusti, Kevin Endres, Christian Klein, Julia Schwarz, Charles Claudius Marais
  • Patent number: 8775916
    Abstract: Technology for testing a target recognition, analysis, and tracking system is provided. A searchable repository of recorded and synthesized depth clips and associated ground truth tracking data is provided. Data in the repository is used by one or more processing devices, each including at least one instance of a target recognition, analysis, and tracking pipeline, to analyze performance of the tracking pipeline. An analysis engine provides at least a subset of the searchable set responsive to a request to test the pipeline and receives tracking data output from the pipeline on that subset. A report generator outputs an analysis of the tracking data relative to the ground truth in that subset to provide an output of the error relative to the ground truth.
    Type: Grant
    Filed: May 17, 2013
    Date of Patent: July 8, 2014
    Assignee: Microsoft Corporation
    Inventors: Jon D. Pulsipher, Parham Mohadjer, Nazeeh Amin ElDirghami, Shao Liu, Patrick Orville Cook, James Chadon Foster, Ronald Forbes, Szymon P. Stachniak, Tommer Leyvand, Joseph Bertolami, Michael Taylor Janney, Kien Toan Huynh, Charles Claudius Marais, Spencer Dean Perreault, Robert John Fitzgerald, Wayne Richard Bisson, Craig Carroll Peeper, Michael Johnson
  • Publication number: 20140179436
    Abstract: Embodiments of the present invention enable rich control input data to control video games that are remotely executed. Rich control input includes three-dimensional image data, color video, audio, device orientation data, and touch input. A remotely-executed video game is one executed on a server or other computing device that is networked to a client device receiving the rich control input. Rich control input includes more data than can be uploaded to a game server without degrading game performance. Embodiments of the present invention preprocess the rich control data on the client into data that may be uploaded to the game server. The rich input stream may be processed in a general way or in a game-specific way.
    Type: Application
    Filed: December 21, 2012
    Publication date: June 26, 2014
    Applicant: Microsoft Corporation
    Inventors: Krassimir Emilov Karamfilov, Emad Barsoum, Charles Claudius Marais, John Raymond Justice, David James Quinn, Roderick Michael Toll
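
Illustrative code sketches

The abstract of patent 11302287 describes a three-step pipeline: decode a video frame with a first non-linear transfer function into linear light, apply a color transform that alters selected colors for a colorblindness condition, and re-encode with a second non-linear transfer function. The sketch below shows that shape in Python/NumPy, assuming a simple gamma-2.2 encoding and an illustrative, made-up 3x3 transform; the actual transfer functions and transform are not specified by the abstract.

    import numpy as np

    # Hypothetical 3x3 color transform intended to increase perceptibility for a
    # deuteranopia-like condition; the values here are illustrative only.
    COLOR_TRANSFORM = np.array([
        [1.00, 0.00, 0.00],
        [0.70, 0.30, 0.00],
        [-0.70, 0.70, 1.00],
    ])

    def correct_frame(frame_srgb):
        """frame_srgb: HxWx3 float array in [0, 1] with a non-linear (gamma) encoding."""
        linear = np.power(frame_srgb, 2.2)        # undo the first non-linear transfer function
        shifted = linear @ COLOR_TRANSFORM.T      # apply the color transform in linear light
        shifted = np.clip(shifted, 0.0, 1.0)
        return np.power(shifted, 1.0 / 2.2)       # apply a second non-linear transfer function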
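
Patent 11049224 describes classifying a bright region of an HDR image, comparing its luminance to a predefined value for that classification, and adjusting it when there is a mismatch. A minimal sketch, assuming a hypothetical class-to-luminance table in nits and a hypothetical tolerance:

    # Hypothetical reference luminances (nits) per bright-region classification.
    REFERENCE_NITS = {"sun": 10000.0, "specular_highlight": 1500.0, "lamp": 800.0}

    def check_bright_region(region_nits, classification, tolerance=0.1):
        """region_nits: measured luminance of the region; classification: assigned class.
        Returns (adjusted_nits, mismatched) so a caller can also flag the region visually."""
        expected = REFERENCE_NITS[classification]
        mismatched = abs(region_nits - expected) > tolerance * expected
        return (expected if mismatched else region_nits), mismatched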
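
Patents 10369462 and 9526980 (and the related applications) describe preprocessing rich control input on the client into data small enough to upload to a game server without degrading performance. A sketch of that idea, with a made-up reduction of a depth frame, an audio chunk, and device orientation into a compact packet; the real reduction could be game-specific, as the abstracts note:

    import json
    import numpy as np

    def preprocess_rich_input(depth_frame, audio_chunk, orientation):
        """Reduce heavy sensor data to a compact packet for upload to a game server.
        depth_frame: HxW depth array in millimeters; audio_chunk: 1-D NumPy array of
        samples; orientation: (pitch, yaw, roll) tuple."""
        # Hypothetical reduction: summarize the depth frame by the centroid of near pixels.
        near = np.argwhere(depth_frame < 2000)           # pixels closer than 2 m
        centroid = near.mean(axis=0).tolist() if len(near) else None
        packet = {
            "near_centroid": centroid,                   # a few numbers instead of a full frame
            "audio_rms": float(np.sqrt(np.mean(audio_chunk ** 2))),
            "orientation": list(orientation),
        }
        return json.dumps(packet).encode("utf-8")        # small payload for the game server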
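
Patent 9785228 describes directing gesture-based input to the computer system once an engagement gesture is detected or an engagement metric exceeds a threshold. A sketch of that gating logic, with the gesture detector and the metric passed in as hypothetical callables standing in for the instruction-storage machine's logic:

    def route_user_input(gesture_stream, detect_engagement_gesture, engagement_metric,
                         threshold=0.8):
        """Yield gestures to the computer system only once the subject is engaged."""
        engaged = False
        for gesture in gesture_stream:
            if not engaged:
                engaged = (detect_engagement_gesture(gesture)
                           or engagement_metric(gesture) >= threshold)
                continue            # ignore input until engagement is established
            yield gesture           # direct gesture-based user input onward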
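
Patent 9717982 describes combining a server-rendered frame with a client-rendered frame into a single image, with latency-sensitive features rendered locally. A sketch using plain alpha compositing, which is one way such a combination could be done; the abstract does not specify the blend:

    import numpy as np

    def composite_frames(server_frame, client_frame, client_alpha):
        """Combine a server-rendered frame with a locally rendered overlay.
        server_frame, client_frame: HxWx3 floats in [0, 1]; client_alpha: HxWx1 in [0, 1]."""
        return client_alpha * client_frame + (1.0 - client_alpha) * server_frame

Here client_alpha would be 1 where the client drew a latency-sensitive element (for example a crosshair) and 0 elsewhere.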
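
Patents 9607213, 9007417, and 8897493 describe flood filling each target in a depth image and comparing it to a pattern to decide whether the target is a human target. A sketch with a simple 4-connected flood fill and a deliberately crude stand-in for the pattern comparison:

    from collections import deque
    import numpy as np

    def flood_fill(depth, seed, max_delta=50):
        """Grow a region from seed=(row, col) over neighboring pixels whose depth
        differs by at most max_delta (millimeters)."""
        h, w = depth.shape
        mask = np.zeros((h, w), dtype=bool)
        mask[seed] = True
        queue = deque([seed])
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                        and abs(int(depth[ny, nx]) - int(depth[y, x])) <= max_delta):
                    mask[ny, nx] = True
                    queue.append((ny, nx))
        return mask

    def looks_human(mask, min_aspect=1.2):
        """Hypothetical pattern test: a standing human target is taller than it is wide."""
        ys, xs = np.nonzero(mask)
        if len(ys) == 0:
            return False
        height = ys.max() - ys.min() + 1
        width = xs.max() - xs.min() + 1
        return height / width >= min_aspect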
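
Patents 9519970 and 9191570 describe calculating a tilt angle from a first portion of pixels for an upper body part (such as the shoulders) and a second portion for a lower body part (such as a midpoint between the hips and knees). A sketch that averages each portion and takes the angle between the resulting segment and the vertical:

    import math
    import numpy as np

    def tilt_angle_degrees(shoulder_pixels, lower_body_pixels):
        """Tilt of the torso from vertical, given two lists of (row, col) pixel
        coordinates: one for an upper body part and one for a lower body part."""
        upper = np.asarray(shoulder_pixels, dtype=float).mean(axis=0)
        lower = np.asarray(lower_body_pixels, dtype=float).mean(axis=0)
        d_row, d_col = lower - upper                   # image rows grow downward
        return math.degrees(math.atan2(d_col, d_row))  # 0 degrees = perfectly upright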
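
Patent 8988432 describes processing a depth image by downsampling it, determining shadow, noise, missing portions, and out-of-range pixels, and detecting a floor portion before rendering a refined image. A sketch of that preprocessing, with made-up range thresholds and a deliberately simple floor heuristic:

    import numpy as np

    def refine_depth(depth, near_mm=400, far_mm=4000, floor_rows=20):
        """Return a refined (downsampled, cleaned) depth image and an estimated floor depth."""
        small = depth[::2, ::2].astype(np.int32)          # simple 2x downsample
        invalid = (small < near_mm) | (small > far_mm)    # outside the capture device's range
        small[invalid] = 0                                # treat as missing / shadow / noise
        bottom = small[-floor_rows:]                      # crude floor estimate from bottom rows
        floor_depth = int(np.median(bottom[bottom > 0])) if np.any(bottom > 0) else 0
        return small, floor_depth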
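
Patent 8896721 describes determining which depth-image pixels are associated with the environment and discarding them to isolate the human target. A sketch, assuming the human-target mask has already been produced (for example by a flood fill like the one sketched above):

    import numpy as np

    def isolate_human(depth, human_mask):
        """Zero out environment pixels so only the human target remains for processing.
        depth: HxW depth image; human_mask: HxW boolean mask of human-target pixels."""
        return np.where(human_mask, depth, 0)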
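
Patent 8775916 describes running a tracking pipeline over a repository of depth clips with associated ground truth tracking data and reporting the error relative to that ground truth. A sketch of the reporting step, assuming (hypothetically) that both the pipeline output and the ground truth are per-clip arrays of joint positions:

    import numpy as np

    def report_tracking_error(pipeline_output, ground_truth):
        """Mean per-joint error of pipeline output against ground truth, per clip.
        Both inputs: dict of clip_id -> (frames x joints x 3) arrays of positions."""
        errors = {}
        for clip_id, truth in ground_truth.items():
            predicted = pipeline_output[clip_id]
            per_joint = np.linalg.norm(predicted - truth, axis=-1)  # Euclidean error per joint
            errors[clip_id] = float(per_joint.mean())
        return errors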