Patents by Inventor Christian F. Huitema

Christian F. Huitema has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10936900
    Abstract: Embodiments are disclosed that relate to color identification. In one example, an image processing method comprises receiving an infrared (IR) image including a plurality of IR pixels, each IR pixel specifying one or more IR parameters of that IR pixel, identifying, in the IR image, IR-skin pixels that image human skin, identifying a skin tone of identified human skin based at least in part on the IR-skin pixels, the skin tone having one or more expected visible light (VL) parameters, receiving a VL image including a plurality of VL pixels, each VL pixel specifying one or more VL parameters of that VL pixel, identifying, in the VL image, VL-skin pixels that image identified human skin, and adjusting the VL image to increase a correspondence between the one or more VL parameters of the VL-skin pixels and the one or more expected VL parameters of the identified skin tone.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: March 2, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Lin Liang, Christian F. Huitema
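For context, here is a minimal sketch of the color-adjustment step this abstract describes, assuming the skin pixels have already been identified in both the IR and VL images. The NumPy pipeline, the per-channel gain correction, and the EXPECTED_RGB table of "expected VL parameters" are illustrative assumptions, not the patented implementation.

```python
import numpy as np

# Hypothetical lookup of expected visible-light (VL) colors per skin tone,
# standing in for the patent's "expected VL parameters" (values are illustrative).
EXPECTED_RGB = {
    "tone_a": np.array([225.0, 190.0, 160.0]),
    "tone_b": np.array([180.0, 140.0, 110.0]),
}

def adjust_vl_image(vl_image, vl_skin_mask, skin_tone):
    """Scale the VL image so skin pixels move toward the expected skin color.

    vl_image     : H x W x 3 float array of visible-light pixel values
    vl_skin_mask : H x W boolean array marking VL-skin pixels
    skin_tone    : key into EXPECTED_RGB identified from the IR image
    """
    observed = vl_image[vl_skin_mask].mean(axis=0)     # mean RGB of skin pixels
    expected = EXPECTED_RGB[skin_tone]
    gains = expected / np.maximum(observed, 1e-6)      # per-channel correction
    return np.clip(vl_image * gains, 0.0, 255.0)

# Example: a synthetic image with a color cast over the skin region.
if __name__ == "__main__":
    img = np.full((4, 4, 3), [170.0, 150.0, 150.0])
    mask = np.zeros((4, 4), dtype=bool)
    mask[1:3, 1:3] = True
    corrected = adjust_vl_image(img, mask, "tone_a")
    print(corrected[1, 1])  # skin pixels pulled toward EXPECTED_RGB["tone_a"]
```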
  • Patent number: 10535181
    Abstract: The systems and methods generate geometric proxies for participants in an online communication session, where each geometric proxy is a geometric representation of a participant and each geometric proxy is generated from acquired depth information and is associated with a particular virtual box. The systems and methods also include generating a scene geometry that visually simulates an in-person meeting of the participants where the scene geometry includes the geometric proxies, and the virtual boxes of the geometric proxies are aligned within the scene geometry based on a number of the participants and a reference object to which the virtual boxes are aligned. In addition, the systems and methods cause a display of the scene geometry with the geometric proxies, where the display of a particular geometric proxy includes a video of the corresponding participant painted onto that geometric proxy.
    Type: Grant
    Filed: April 21, 2019
    Date of Patent: January 14, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yancey Christopher Smith, Eric G. Lang, Zhengyou Zhang, Christian F. Huitema
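The geometric proxy in this family of claims is built from acquired depth data. Below is a minimal sketch of one way such a proxy could be produced, using a pinhole back-projection of a depth map into a point cloud; the camera intrinsics and the point-cloud representation are assumptions for illustration, not the patent's construction.

```python
import numpy as np

def geometric_proxy_from_depth(depth_m, fx, fy, cx, cy):
    """Back-project a depth map into a 3D point cloud that serves as the
    participant's geometric proxy (valid pixels only)."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    valid = z > 0
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x[valid], y[valid], z[valid]], axis=1)
    return points  # N x 3; video frames are later texture-mapped ("painted") onto this

if __name__ == "__main__":
    depth = np.zeros((4, 4)); depth[1:3, 1:3] = 1.2        # a small patch of valid depth
    proxy = geometric_proxy_from_depth(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
    print(proxy.shape, proxy[0])
```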
  • Patent number: 10390273
    Abstract: The electronic devices described herein are configured to enhance user experience associated with a network connection when transitioning the network connection between access points. Determinations to scan for available access points and transfer the network connection to an alternative access point are based on connection attributes and/or access point attributes that are compared to scan criteria and transfer criteria. Further, the scan criteria and transfer criteria are updated, or adjusted, according to machine learning techniques such that the determinations to scan for access points and transfer between access points are tuned on a per-device and/or per-user level to fit patterns of use of a particular device and/or user. Over time, the updates to the scan criteria and transfer criteria based on machine learning provide an increasingly consistent, high quality user experience while roaming efficiently between access points.
    Type: Grant
    Filed: February 10, 2017
    Date of Patent: August 20, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mukund Sankaranarayan, Mitesh Desai, Christian F. Huitema, Paul Rosswurm
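A toy sketch of the scan/transfer control loop this abstract outlines, using a single signal-strength threshold nudged by a running average as a stand-in for the per-device machine-learning update. The threshold names, dBm values, and update rule are illustrative assumptions.

```python
class RoamingPolicy:
    """Toy roaming policy: scan when signal drops below a learned threshold,
    transfer when a candidate AP beats the current one by a learned margin."""

    def __init__(self, scan_threshold_dbm=-70.0, transfer_margin_db=6.0, rate=0.1):
        self.scan_threshold_dbm = scan_threshold_dbm
        self.transfer_margin_db = transfer_margin_db
        self.rate = rate  # learning rate for per-device tuning

    def should_scan(self, current_rssi_dbm):
        return current_rssi_dbm < self.scan_threshold_dbm

    def should_transfer(self, current_rssi_dbm, candidate_rssi_dbm):
        return candidate_rssi_dbm >= current_rssi_dbm + self.transfer_margin_db

    def update_from_outcome(self, rssi_at_transfer_dbm, transfer_improved_quality):
        """Nudge the scan threshold toward RSSI levels where transfers actually helped."""
        if transfer_improved_quality:
            target = rssi_at_transfer_dbm
        else:
            target = rssi_at_transfer_dbm - 5.0  # wait for a weaker signal next time
        self.scan_threshold_dbm += self.rate * (target - self.scan_threshold_dbm)

if __name__ == "__main__":
    policy = RoamingPolicy()
    print(policy.should_scan(-75.0))             # True: weak signal triggers a scan
    print(policy.should_transfer(-75.0, -60.0))  # True: candidate is clearly better
    policy.update_from_outcome(-75.0, transfer_improved_quality=True)
    print(round(policy.scan_threshold_dbm, 1))
```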
  • Publication number: 20190244413
    Abstract: The systems and methods generate geometric proxies for participants in an online communication session, where each geometric proxy is a geometric representation of a participant and each geometric proxy is generated from acquired depth information and is associated with a particular virtual box. The systems and methods also include generating a scene geometry that visually simulates an in-person meeting of the participants where the scene geometry includes the geometric proxies, and the virtual boxes of the geometric proxies are aligned within the scene geometry based on a number of the participants and a reference object to which the virtual boxes are aligned. In addition, the systems and methods cause a display of the scene geometry with the geometric proxies, where the display of a particular geometric proxy includes a video of the corresponding participant painted onto that geometric proxy.
    Type: Application
    Filed: April 21, 2019
    Publication date: August 8, 2019
    Inventors: Yancey Christopher Smith, Eric G. Lang, Zhengyou Zhang, Christian F. Huitema
  • Publication number: 20190213436
    Abstract: Embodiments are disclosed that relate to color identification. In one example, an image processing method comprises receiving an infrared (IR) image including a plurality of IR pixels, each IR pixel specifying one or more IR parameters of that IR pixel, identifying, in the IR image, IR-skin pixels that image human skin, identifying a skin tone of identified human skin based at least in part on the IR-skin pixels, the skin tone having one or more expected visible light (VL) parameters, receiving a VL image including a plurality of VL pixels, each VL pixel specifying one or more VL parameters of that VL pixel, identifying, in the VL image, VL-skin pixels that image identified human skin, and adjusting the VL image to increase a correspondence between the one or more VL parameters of the VL-skin pixels and the one or more expected VL parameters of the identified skin tone.
    Type: Application
    Filed: December 21, 2018
    Publication date: July 11, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Lin Liang, Christian F. Huitema
  • Patent number: 10325400
    Abstract: Implementations provide an in-person communication experience by generating a changeable virtual viewpoint for a participant in an online communication. For instance, techniques described herein capture visual data about participants in an online communication, and create a realistic geometric proxy from the visual data. A virtual scene geometry is generated that mimics an arrangement of an in-person meeting for the online communication. According to various implementations, a changing virtual viewpoint is displayed, such as based on a change in the position of a participant's face.
    Type: Grant
    Filed: December 4, 2017
    Date of Patent: June 18, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yancey Christopher Smith, Eric G. Lang, Zhengyou Zhang, Christian F. Huitema
  • Patent number: 10192134
    Abstract: Embodiments are disclosed that relate to color identification. In one example, an image processing method comprises receiving an infrared (IR) image including a plurality of IR pixels, each IR pixel specifying one or more IR parameters of that IR pixel, identifying, in the IR image, IR-skin pixels that image human skin, identifying a skin tone of identified human skin based at least in part on the IR-skin pixels, the skin tone having one or more expected visible light (VL) parameters, receiving a VL image including a plurality of VL pixels, each VL pixel specifying one or more VL parameters of that VL pixel, identifying, in the VL image, VL-skin pixels that image identified human skin, and adjusting the VL image to increase a correspondence between the one or more VL parameters of the VL-skin pixels and the one or more expected VL parameters of the identified skin tone.
    Type: Grant
    Filed: February 6, 2015
    Date of Patent: January 29, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Lin Liang, Christian F. Huitema
  • Patent number: 10128987
    Abstract: Examples of the disclosure dynamically scale receive window auto-tuning. Tuning data is obtained, including the number of bytes in a receive buffer and the distribution of receive packets over time. Aspects of the disclosure use this tuning data to determine rates at which one or more applications on the receiving computer are consuming data and adjust or maintain the receive buffer accordingly in a dynamic manner to scale a receive window to current conditions.
    Type: Grant
    Filed: October 7, 2016
    Date of Patent: November 13, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Christian F. Huitema, Praveen Balasubramanian
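A minimal sketch of the kind of adjustment rule the abstract describes, growing or shrinking the receive buffer from buffer occupancy and the application's consumption rate. The growth and shrink factors and the buffer bounds are illustrative assumptions, not Windows TCP internals.

```python
def tune_receive_buffer(buffer_size, bytes_queued, drain_rate_bps, arrival_rate_bps,
                        min_size=64 * 1024, max_size=16 * 1024 * 1024):
    """Grow the receive buffer when the application keeps up with arrivals and the
    buffer is nearly full; shrink it when received data sits unconsumed."""
    occupancy = bytes_queued / buffer_size
    if drain_rate_bps >= arrival_rate_bps and occupancy > 0.75:
        buffer_size *= 2       # app consumes quickly: allow a larger receive window
    elif drain_rate_bps < 0.5 * arrival_rate_bps and occupancy > 0.75:
        buffer_size //= 2      # app is the bottleneck: stop advertising more window
    return max(min_size, min(int(buffer_size), max_size))

if __name__ == "__main__":
    size = 256 * 1024
    size = tune_receive_buffer(size, bytes_queued=220 * 1024,
                               drain_rate_bps=80e6, arrival_rate_bps=75e6)
    print(size)  # doubled: the application is keeping up with arrivals
```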
  • Publication number: 20180234900
    Abstract: The electronic devices described herein are configured to enhance user experience associated with a network connection when transitioning the network connection between access points. Determinations to scan for available access points and transfer the network connection to an alternative access point are based on connection attributes and/or access point attributes that are compared to scan criteria and transfer criteria. Further, the scan criteria and transfer criteria are updated, or adjusted, according to machine learning techniques such that the determinations to scan for access points and transfer between access points are tuned on a per-device and/or per-user level to fit patterns of use of a particular device and/or user. Over time, the updates to the scan criteria and transfer criteria based on machine learning provide an increasingly consistent, high quality user experience while roaming efficiently between access points.
    Type: Application
    Filed: February 10, 2017
    Publication date: August 16, 2018
    Inventors: Mukund Sankaranarayan, Mitesh Desai, Christian F. Huitema, Paul Rosswurm
  • Patent number: 9959627
    Abstract: A three-dimensional shape parameter computation system and method for computing three-dimensional human head shape parameters from two-dimensional facial feature points. A series of images containing a user's face is captured. Embodiments of the system and method deduce the 3D parameters of the user's head by examining a series of captured images of the user over time and in a variety of head poses and facial expressions, and then computing an average. An energy function is constructed over a batch of frames containing 2D face feature points obtained from the captured images, and the energy function is minimized to solve for the head shape parameters valid for the batch of frames. Head pose parameters and facial expression and animation parameters can vary over each captured image in the batch of frames. In some embodiments this minimization is performed using a modified Gauss-Newton minimization technique with a single iteration.
    Type: Grant
    Filed: May 6, 2015
    Date of Patent: May 1, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Nikolay Smolyanskiy, Christian F. Huitema, Cha Zhang, Lin Liang, Sean Eron Anderson, Zhengyou Zhang
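The patent minimizes a batch energy over 2D feature points with a modified single-iteration Gauss-Newton step. The sketch below shows a generic single Gauss-Newton update for a least-squares residual, with a finite-difference Jacobian and a toy residual standing in for the face-fitting energy; none of it reflects the patent's specific formulation.

```python
import numpy as np

def gauss_newton_step(residual_fn, params, eps=1e-5, damping=1e-6):
    """One Gauss-Newton update for params minimizing ||residual_fn(params)||^2.

    The Jacobian is estimated by finite differences for simplicity; a real
    face-fitting pipeline would use analytic derivatives of the projection model.
    """
    r = residual_fn(params)
    J = np.zeros((r.size, params.size))
    for j in range(params.size):
        step = np.zeros_like(params)
        step[j] = eps
        J[:, j] = (residual_fn(params + step) - r) / eps
    # Solve (J^T J + damping * I) delta = -J^T r
    H = J.T @ J + damping * np.eye(params.size)
    delta = np.linalg.solve(H, -J.T @ r)
    return params + delta

if __name__ == "__main__":
    # Toy batch "energy": fit a 2-parameter shape to noisy observations from 10 frames.
    target = np.array([1.5, -0.5])
    observations = np.concatenate([target + 0.01 * np.random.randn(2) for _ in range(10)])
    def residuals(p):
        return observations - np.tile(p, 10)
    print(gauss_newton_step(residuals, np.zeros(2)))  # close to target after one step
```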
  • Publication number: 20180089884
    Abstract: Implementations provide an in-person communication experience by generating a changeable virtual viewpoint for a participant in an online communication. For instance, techniques described herein capture visual data about participants in an online communication, and create a realistic geometric proxy from the visual data. A virtual scene geometry is generated that mimics an arrangement of an in-person meeting for the online communication. According to various implementations, a changing virtual viewpoint is displayed, such as based on a change in the position of a participant's face.
    Type: Application
    Filed: December 4, 2017
    Publication date: March 29, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Yancey Christopher Smith, Eric G. Lang, Zhengyou Zhang, Christian F. Huitema
  • Publication number: 20180076934
    Abstract: Examples of the disclosure dynamically scale receive window auto-tuning. Tuning data is obtained, including the number of bytes in a receive buffer and the distribution of receive packets over time. Aspects of the disclosure use this tuning data to determine rates at which one or more applications on the receiving computer are consuming data and adjust or maintain the receive buffer accordingly in a dynamic manner to scale a receive window to current conditions.
    Type: Application
    Filed: October 7, 2016
    Publication date: March 15, 2018
    Inventors: Christian F. Huitema, Praveen Balasubramanian
  • Patent number: 9836870
    Abstract: A perspective-correct communication window system and method for communicating between participants in an online meeting, where the participants are not in the same physical locations. Embodiments of the system and method provide an in-person communications experience by changing the virtual viewpoint for the participants when they are viewing the online meeting. The participant sees a different perspective displayed on a monitor based on the location of the participant's eyes. Embodiments of the system and method include a capture and creation component that is used to capture visual data about each participant and create a realistic geometric proxy from the data. A scene geometry component is used to create a virtual scene geometry that mimics the arrangement of an in-person meeting. A virtual viewpoint component displays the changing virtual viewpoint to the viewer and can add perceived depth using motion parallax.
    Type: Grant
    Filed: April 13, 2016
    Date of Patent: December 5, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yancey Christopher Smith, Eric G. Lang, Zhengyou Zhang, Christian F. Huitema
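A minimal sketch of the viewpoint mapping behind the motion-parallax effect described above: the tracked eye position drives a virtual camera that keeps the remote scene framed. The coordinate convention, the parallax gain, and the VirtualCamera structure are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class VirtualCamera:
    x: float
    y: float
    z: float
    look_at: tuple  # point in the remote scene the camera keeps framed

def viewpoint_from_eyes(eye_x_m, eye_y_m, eye_z_m, scene_center=(0.0, 0.0, -2.0),
                        parallax_gain=1.0):
    """Map the viewer's tracked eye position (relative to the display center, in meters)
    to a virtual camera pose; as the head moves, the rendered scene shifts the other
    way, producing motion parallax as if looking through a window."""
    return VirtualCamera(
        x=parallax_gain * eye_x_m,
        y=parallax_gain * eye_y_m,
        z=max(eye_z_m, 0.2),       # keep the camera in front of the display plane
        look_at=scene_center,
    )

if __name__ == "__main__":
    print(viewpoint_from_eyes(0.05, -0.02, 0.6))  # head moved slightly right and down
```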
  • Patent number: 9767586
    Abstract: A method for operating an image processing device coupled to a color camera and a depth camera is provided. The method includes receiving a color image of a 3-dimensional scene from a color camera, receiving a depth map of the 3-dimensional scene from a depth camera, generating an aligned 3-dimensional face mesh from a plurality of color images received from the color camera indicating movement of a subject's head within the 3-dimensional scene and from the depth map, determining a head region based on the depth map, segmenting the head region into a plurality of facial sections based on the color image, the depth map, and the aligned 3-dimensional face mesh, and overlaying the plurality of facial sections on the color image.
    Type: Grant
    Filed: November 25, 2014
    Date of Patent: September 19, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Lin Liang, Christian F. Huitema, Matthew Adam Simari, Sean Eron Anderson
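A minimal sketch of two pieces of the pipeline this abstract describes: extracting a head region from the depth map and alpha-blending facial-section masks over the color image. The depth band, the section colors, and the single "skin" mask (which would really come from the aligned face mesh) are illustrative assumptions.

```python
import numpy as np

# Illustrative overlay colors for hypothetical facial sections.
SECTION_COLORS = {
    "skin":  np.array([255.0, 224.0, 189.0]),
    "hair":  np.array([80.0, 50.0, 30.0]),
    "brows": np.array([40.0, 40.0, 40.0]),
}

def head_region_from_depth(depth_m, max_head_depth_m=0.4):
    """Keep pixels within a band behind the nearest valid depth sample."""
    nearest = np.min(depth_m[depth_m > 0])
    return (depth_m > 0) & (depth_m < nearest + max_head_depth_m)

def overlay_sections(color_image, section_masks, alpha=0.5):
    """Alpha-blend each facial section's color over the color image."""
    out = color_image.astype(float).copy()
    for name, mask in section_masks.items():
        out[mask] = (1 - alpha) * out[mask] + alpha * SECTION_COLORS[name]
    return out.astype(np.uint8)

if __name__ == "__main__":
    depth = np.full((4, 4), 2.0); depth[1:3, 1:3] = 0.9   # head closer to the camera
    color = np.zeros((4, 4, 3), dtype=np.uint8)
    head = head_region_from_depth(depth)
    masks = {"skin": head}                                 # sections would come from the mesh
    print(overlay_sections(color, masks)[1, 1])
```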
  • Patent number: 9697635
    Abstract: Technology is disclosed for automatically generating a facial avatar resembling a user in a defined art style. One or more processors generate a user 3D head model for the user based on captured 3D image data from a communicatively coupled 3D image capture device. A set of user transferable head features from the user 3D head model is automatically represented by the one or more processors in the facial avatar in accordance with rules governing transferable user 3D head features. In some embodiments, a base or reference head model of the avatar is remapped to include the set of user head features. In other embodiments, an avatar head shape model is selected based on the user 3D head model, and the transferable user 3D head features are represented in the avatar head shape model.
    Type: Grant
    Filed: October 17, 2016
    Date of Patent: July 4, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: David J. Quinn, Peter Alan Ridgway, Nicholas David Burton, Carol Clark, David T. Hill, Christian F. Huitema, Yancey C. Smith, Royal D. Winchester, Iain A. McFadzen, Andrew John Bastable
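One simple way the "rules governing transferable user 3D head features" could be applied is to clamp each measured feature into the range the art style allows, sketched below. The feature names, measurement ranges, and mapping rule are hypothetical; the patent does not specify this mapping.

```python
# Hypothetical transfer rules: each user-measured feature is normalized and then
# clamped into the range the art style allows on the avatar head model.
STYLE_RULES = {
    "nose_width":   {"user_range": (0.028, 0.045), "avatar_range": (0.30, 0.70)},
    "jaw_width":    {"user_range": (0.10, 0.16),   "avatar_range": (0.25, 0.75)},
    "eye_distance": {"user_range": (0.055, 0.075), "avatar_range": (0.35, 0.65)},
}

def transfer_features(user_measurements_m):
    """Map user 3D head measurements (meters) to avatar shape weights in [0, 1]."""
    weights = {}
    for name, meters in user_measurements_m.items():
        lo, hi = STYLE_RULES[name]["user_range"]
        a_lo, a_hi = STYLE_RULES[name]["avatar_range"]
        t = min(max((meters - lo) / (hi - lo), 0.0), 1.0)  # normalize and clamp
        weights[name] = a_lo + t * (a_hi - a_lo)            # stay within the art style's bounds
    return weights

if __name__ == "__main__":
    print(transfer_features({"nose_width": 0.039, "jaw_width": 0.13, "eye_distance": 0.062}))
```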
  • Publication number: 20170039752
    Abstract: Technology is disclosed for automatically generating a facial avatar resembling a user in a defined art style. One or more processors generate a user 3D head model for the user based on captured 3D image data from a communicatively coupled 3D image capture device. A set of user transferable head features from the user 3D head model is automatically represented by the one or more processors in the facial avatar in accordance with rules governing transferable user 3D head features. In some embodiments, a base or reference head model of the avatar is remapped to include the set of user head features. In other embodiments, an avatar head shape model is selected based on the user 3D head model, and the transferable user 3D head features are represented in the avatar head shape model.
    Type: Application
    Filed: October 17, 2016
    Publication date: February 9, 2017
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: David J. Quinn, Peter Alan Ridgway, Nicholas David Burton, Carol Clark, David T. Hill, Christian F. Huitema, Yancey C. Smith, Royal D. Winchester, Iain A. McFadzen, Andrew John Bastable
  • Patent number: 9508197
    Abstract: Technology is disclosed for automatically generating a facial avatar resembling a user in a defined art style. One or more processors generate a user 3D head model for the user based on captured 3D image data from a communicatively coupled 3D image capture device. A set of user transferable head features from the user 3D head model is automatically represented by the one or more processors in the facial avatar in accordance with rules governing transferable user 3D head features. In some embodiments, a base or reference head model of the avatar is remapped to include the set of user head features. In other embodiments, an avatar head shape model is selected based on the user 3D head model, and the transferable user 3D head features are represented in the avatar head shape model.
    Type: Grant
    Filed: November 1, 2013
    Date of Patent: November 29, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: David J. Quinn, Peter Alan Ridgway, Nicholas David Burton, Carol Clark, David T. Hill, Christian F. Huitema, Yancey C. Smith, Royal D. Winchester, Iain A. McFadzen, Andrew John Bastable
  • Publication number: 20160316170
    Abstract: A perspective-correct communication window system and method for communicating between participants in an online meeting, where the participants are not in the same physical locations. Embodiments of the system and method provide an in-person communications experience by changing the virtual viewpoint for the participants when they are viewing the online meeting. The participant sees a different perspective displayed on a monitor based on the location of the participant's eyes. Embodiments of the system and method include a capture and creation component that is used to capture visual data about each participant and create a realistic geometric proxy from the data. A scene geometry component is used to create a virtual scene geometry that mimics the arrangement of an in-person meeting. A virtual viewpoint component displays the changing virtual viewpoint to the viewer and can add perceived depth using motion parallax.
    Type: Application
    Filed: April 13, 2016
    Publication date: October 27, 2016
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Yancey Christopher Smith, Eric G. Lang, Zhengyou Zhang, Christian F. Huitema
  • Patent number: 9332218
    Abstract: A perspective-correct communication window system and method for communicating between participants in an online meeting, where the participants are not in the same physical locations. Embodiments of the system and method provide an in-person communications experience by changing the virtual viewpoint for the participants when they are viewing the online meeting. The participant sees a different perspective displayed on a monitor based on the location of the participant's eyes. Embodiments of the system and method include a capture and creation component that is used to capture visual data about each participant and create a realistic geometric proxy from the data. A scene geometry component is used to create a virtual scene geometry that mimics the arrangement of an in-person meeting. A virtual viewpoint component displays the changing virtual viewpoint to the viewer and can add perceived depth using motion parallax.
    Type: Grant
    Filed: October 1, 2015
    Date of Patent: May 3, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yancey Christopher Smith, Eric G. Lang, Zhengyou Zhang, Christian F. Huitema
  • Patent number: 9332222
    Abstract: A controlled three-dimensional (3D) communication endpoint system and method for simulating an in-person communication between participants in an online meeting or conference and providing easy scaling of a virtual environment when additional participants join. This gives the participants the illusion that the other participants are in the same room and sitting around the same table with the viewer. The controlled communication endpoint includes a plurality of camera pods that capture video of a participant from 360 degrees around the participant. The controlled communication endpoint also includes a display device configuration containing display devices placed at least 180 degrees around the participant and display the virtual environment containing geometric proxies of the other participants. Placing the participants at a round virtual table and increasing the diameter of the virtual table as additional participants are added easily achieves scalability.
    Type: Grant
    Filed: February 13, 2015
    Date of Patent: May 3, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yancey Christopher Smith, Eric G. Lang, Christian F. Huitema, Zhengyou Zhang
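A minimal sketch of the round-table scaling the abstract describes: the table radius grows with the participant count so each geometric proxy keeps roughly the same arc of space, and seats stay evenly spaced. The per-seat arc length and minimum radius are illustrative assumptions.

```python
import math

def table_radius(num_participants, seat_arc_m=0.9, min_radius_m=0.75):
    """Grow the round table so each participant keeps roughly seat_arc_m of circumference."""
    return max(min_radius_m, num_participants * seat_arc_m / (2.0 * math.pi))

def seat_positions(num_participants):
    """Evenly spaced (x, z) seats around the table for the current participant count."""
    r = table_radius(num_participants)
    return [(r * math.cos(2 * math.pi * i / num_participants),
             r * math.sin(2 * math.pi * i / num_participants))
            for i in range(num_participants)]

if __name__ == "__main__":
    for n in (3, 6, 12):
        print(n, round(table_radius(n), 2), seat_positions(n)[0])
```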