Patents by Inventor Cem Keskin

Cem Keskin has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11954899
    Abstract: Systems and methods for training models to predict dense correspondences across images such as human images. A model may be trained using synthetic training data created from one or more 3D computer models of a subject. In addition, one or more geodesic distances derived from the surfaces of one or more of the 3D models may be used to generate one or more loss values, which may in turn be used in modifying the model's parameters during training.
    Type: Grant
    Filed: March 11, 2021
    Date of Patent: April 9, 2024
    Assignee: Google LLC
    Inventors: Yinda Zhang, Feitong Tan, Danhang Tang, Mingsong Dou, Kaiwen Guo, Sean Ryan Francesco Fanello, Sofien Bouaziz, Cem Keskin, Ruofei Du, Rohit Kumar Pandey, Deqing Sun
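    Illustrative sketch: a minimal Python example of the geodesic-loss idea in the abstract above, assuming geodesic distances between surface vertices of the 3D model have been precomputed into a lookup table; the function and variable names are invented for illustration, not taken from the patent.
    ```python
    import numpy as np

    def geodesic_loss(pred_vertex_ids, gt_vertex_ids, geodesic_matrix):
        # Mean geodesic (on-surface) distance between the predicted and the
        # ground-truth surface vertex for each sampled pixel.
        return float(np.mean(geodesic_matrix[pred_vertex_ids, gt_vertex_ids]))

    # Toy example: 4 surface vertices with a symmetric geodesic distance table.
    rng = np.random.default_rng(0)
    d = rng.random((4, 4))
    geo = (d + d.T) / 2.0
    np.fill_diagonal(geo, 0.0)

    pred = np.array([0, 2, 3])   # predicted vertex index per sampled pixel
    gt = np.array([0, 1, 3])     # ground-truth index from the synthetic render
    print(geodesic_loss(pred, gt, geo))
    ```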
  • Publication number: 20240046618
    Abstract: Systems and methods for training models to predict dense correspondences across images such as human images. A model may be trained using synthetic training data created from one or more 3D computer models of a subject. In addition, one or more geodesic distances derived from the surfaces of one or more of the 3D models may be used to generate one or more loss values, which may in turn be used in modifying the model's parameters during training.
    Type: Application
    Filed: March 11, 2021
    Publication date: February 8, 2024
    Inventors: Yinda Zhang, Feitong Tan, Danhang Tang, Mingsong Dou, Kaiwen Guo, Sean Ryan Francesco Fanello, Sofien Bouaziz, Cem Keskin, Ruofei Du, Rohit Kumar Pandey, Deqing Sun
  • Publication number: 20230154051
    Abstract: Systems and methods are directed to encoding and/or decoding of the textures/geometry of a three-dimensional volumetric representation. An encoding computing system can obtain voxel blocks from a three-dimensional volumetric representation of an object. The encoding computing system can encode voxel blocks with a machine-learned voxel encoding model to obtain encoded voxel blocks. The encoding computing system can decode the encoded voxel blocks with a machine-learned voxel decoding model to obtain reconstructed voxel blocks. The encoding computing system can generate a reconstructed mesh representation of the object based at least in part on the reconstructed voxel blocks. The encoding computing system can encode textures associated with the voxel blocks according to an encoding scheme and based at least in part on the reconstructed mesh representation of the object to obtain encoded textures.
    Type: Application
    Filed: April 17, 2020
    Publication date: May 18, 2023
    Inventors: Danhang Tang, Saurabh Singh, Cem Keskin, Phillip Andrew Chou, Christian Haene, Mingsong Dou, Sean Ryan Francesco Fanello, Jonathan Taylor, Andrea Tagliasacchi, Philip Lindsley Davidson, Yinda Zhang, Onur Gonen Guleryuz, Shahram Izadi, Sofien Bouaziz
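    Illustrative sketch: a toy Python version of the encode/decode flow in the abstract above, with a random linear projection standing in for the machine-learned voxel encoding and decoding models; the block size, code size, and all names are assumptions made for this example.
    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    BLOCK, CODE = 8, 32                  # 8x8x8 voxel blocks, 32-D latent codes
    # Hypothetical stand-ins for the learned models: a random linear projection
    # as the "encoder" and its pseudo-inverse as the "decoder".
    W_enc = rng.normal(size=(BLOCK ** 3, CODE))
    W_dec = np.linalg.pinv(W_enc)

    def encode_block(block):
        return block.reshape(-1) @ W_enc                        # block -> code

    def decode_block(code):
        return (code @ W_dec).reshape(BLOCK, BLOCK, BLOCK)      # code -> block

    blocks = rng.normal(size=(10, BLOCK, BLOCK, BLOCK))         # blocks from the volume
    codes = np.stack([encode_block(b) for b in blocks])
    recon = np.stack([decode_block(c) for c in codes])

    # Downstream (not shown): extract a reconstructed mesh from the decoded
    # volume, then encode textures against that reconstructed mesh so the
    # texture codec and the geometry codec agree on the same surface.
    print(codes.shape, recon.shape)                             # (10, 32) (10, 8, 8, 8)
    ```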
  • Publication number: 20220350997
    Abstract: A head-mounted device (HMD) can be configured to determine a request for recognizing at least one content item included within content framed within a display of the HMD. The HMD can be configured to initiate a head-tracking process that maintains a coordinate system with respect to the content, and a pointer-tracking process that tracks a pointer that is visible together with the content within the display. The HMD can be configured to capture a first image of the content and a second image of the content, the second image including the pointer. The HMD can be configured to map a location of the pointer within the second image to a corresponding image location within the first image, using the coordinate system, and provide the at least one content item from the corresponding image location.
    Type: Application
    Filed: April 29, 2021
    Publication date: November 3, 2022
    Inventors: Qinge Wu, Grant Yoshida, Catherine Boulanger, Erik Hubert Dolly Goossens, Cem Keskin, Sofien Bouaziz, Jonathan James Taylor, Nidhi Rathi, Seth Raphael
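    Illustrative sketch: a hypothetical Python example of mapping a pointer location from the second image to the first image through a shared content coordinate system, modeled here with planar homographies; the homography values and function names are invented for illustration.
    ```python
    import numpy as np

    def to_homogeneous(p):
        return np.array([p[0], p[1], 1.0])

    def map_pointer(H_second_to_content, H_content_to_first, pointer_xy):
        # Route the pointer pixel through the head-tracked content coordinate
        # system: second image -> content plane -> first image.
        q = H_content_to_first @ (H_second_to_content @ to_homogeneous(pointer_xy))
        return q[:2] / q[2]

    # Toy homographies relating each captured image to the content plane.
    H_sc = np.array([[1.0, 0.0, -5.0], [0.0, 1.0, -3.0], [0.0, 0.0, 1.0]])
    H_cf = np.array([[2.0, 0.0, 10.0], [0.0, 2.0, 4.0], [0.0, 0.0, 1.0]])
    print(map_pointer(H_sc, H_cf, (120.0, 80.0)))   # pointer pixel in the second image
    ```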
  • Patent number: 11335023
    Abstract: According to an aspect, a method for pose estimation using a convolutional neural network includes extracting features from an image, downsampling the features to a lower resolution, arranging the features into sets of features, where each set of features corresponds to a separate keypoint of a pose of a subject, updating, by at least one convolutional block, each set of features based on features of one or more neighboring keypoints using a kinematic structure, and predicting the pose of the subject using the updated sets of features.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: May 17, 2022
    Assignee: Google LLC
    Inventors: Sameh Khamis, Christian Haene, Hossam Isack, Cem Keskin, Sofien Bouaziz, Shahram Izadi
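    Illustrative sketch: a small NumPy example of refining per-keypoint feature sets using the features of kinematically neighboring keypoints before predicting keypoint positions; the 5-keypoint tree, matrix shapes, and single refinement step are simplifications assumed for this example.
    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    K, C = 5, 16                       # keypoints and channels per keypoint
    # Hypothetical kinematic tree over 5 keypoints (parent-child edges).
    edges = [(0, 1), (1, 2), (0, 3), (3, 4)]
    A = np.zeros((K, K))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0        # neighbors exchange features both ways

    feats = rng.normal(size=(K, C))    # per-keypoint feature sets from the backbone
    W = rng.normal(size=(C, C)) / np.sqrt(C)

    # One refinement step: each keypoint's features are updated using the
    # features of its kinematic neighbors (a stand-in for the conv block).
    updated = np.maximum(feats + (A @ feats) @ W, 0.0)

    W_out = rng.normal(size=(C, 2)) / np.sqrt(C)
    keypoints_xy = updated @ W_out     # predicted 2D keypoint coordinates
    print(keypoints_xy.shape)          # (5, 2)
    ```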
  • Patent number: 11328486
    Abstract: A method includes receiving a first image including color data and depth data, determining a viewpoint associated with an augmented reality (AR) and/or virtual reality (VR) display displaying a second image, receiving at least one calibration image that includes an object from the first image in a different pose than it has in the first image, and generating the second image based on the first image, the viewpoint, and the at least one calibration image.
    Type: Grant
    Filed: April 29, 2020
    Date of Patent: May 10, 2022
    Assignee: Google LLC
    Inventors: Anastasia Tkach, Ricardo Martin Brualla, Shahram Izadi, Shuoran Yang, Cem Keskin, Sean Ryan Francesco Fanello, Philip Davidson, Jonathan Taylor, Rohit Pandey, Andrea Tagliasacchi, Pavlo Pidlypenskyi
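    Illustrative sketch: a hypothetical Python example of one ingredient of the abstract above, reprojecting the depth data of the first image into a requested AR/VR viewpoint; calibration-image fill-in and rendering of the final second image are omitted, and the intrinsics and viewpoint transform are toy values.
    ```python
    import numpy as np

    def reproject_depth(depth, K, T_view):
        # Unproject the first image's depth to 3D points, move them into the
        # requested viewpoint, and project back to pixel coordinates.
        h, w = depth.shape
        ys, xs = np.mgrid[0:h, 0:w]
        pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3)
        pts = (np.linalg.inv(K) @ pix.T) * depth.reshape(1, -1)
        pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])
        moved = (T_view @ pts_h)[:3]
        proj = K @ moved
        return (proj[:2] / np.clip(proj[2:], 1e-6, None)).T.reshape(h, w, 2)

    K = np.array([[100.0, 0.0, 32.0], [0.0, 100.0, 24.0], [0.0, 0.0, 1.0]])
    T = np.eye(4)
    T[0, 3] = 0.05                     # small viewpoint shift to the side
    print(reproject_depth(np.full((48, 64), 2.0), K, T).shape)   # (48, 64, 2)
    ```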
  • Publication number: 20210366146
    Abstract: According to an aspect, a method for pose estimation using a convolutional neural network includes extracting features from an image, downsampling the features to a lower resolution, arranging the features into sets of features, where each set of features corresponds to a separate keypoint of a pose of a subject, updating, by at least one convolutional block, each set of features based on features of one or more neighboring keypoints using a kinematic structure, and predicting the pose of the subject using the updated sets of features.
    Type: Application
    Filed: May 22, 2020
    Publication date: November 25, 2021
    Inventors: Sameh Khamis, Christian Haene, Hossam Isack, Cem Keskin, Sofien Bouaziz, Shahram Izadi
  • Publication number: 20210365777
    Abstract: Methods, systems, and apparatus for more efficiently and accurately generating neural network outputs, for instance, for use in classifying image or audio data. In one aspect, a method includes processing a network input using a neural network including multiple neural network layers to generate a network output. One or more of the neural network layers is a conditional neural network layer. Processing a layer input using a conditional neural network layer to generate a layer output includes obtaining values of one or more decision parameters of the conditional neural network layer. The neural network processes the layer input and the decision parameters of the conditional neural network layer to determine values of one or more latent parameters of the conditional neural network layer from a continuous set of possible latent parameter values. The values of the latent parameters specify the values of the conditional layer weights.
    Type: Application
    Filed: July 23, 2019
    Publication date: November 25, 2021
    Inventors: Shahram Izadi, Cem Keskin
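    Illustrative sketch: a minimal NumPy version of a conditional layer in which the layer input and decision parameters produce continuous latent parameters that specify the effective layer weights; the softmax mixing over a small weight bank is an assumption for this example, not the patented formulation.
    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    IN, OUT, EXPERTS = 8, 4, 3
    weight_bank = rng.normal(size=(EXPERTS, IN, OUT))   # candidate weight sets
    decision_params = rng.normal(size=(IN, EXPERTS))    # decision parameters

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def conditional_layer(x):
        # The layer input and the decision parameters yield continuous latent
        # parameters; the latent parameters specify the effective weights.
        latent = softmax(x @ decision_params)
        W = np.tensordot(latent, weight_bank, axes=1)   # (IN, OUT), input-dependent
        return x @ W

    x = rng.normal(size=(IN,))
    print(conditional_layer(x))
    ```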
  • Patent number: 11030773
    Abstract: An electronic device estimates a pose of a hand by volumetrically deforming a signed distance field using a skinned tetrahedral mesh to locate a local minimum of an energy function, wherein the local minimum corresponds to the hand pose. The electronic device identifies a pose of the hand by fitting an implicit surface model of a hand to the pixels of a depth image that correspond to the hand. The electronic device uses a skinned tetrahedral mesh to warp space from a base pose to a deformed pose to define an articulated signed distance field from which the hand tracking module derives candidate poses of the hand. The electronic device then minimizes an energy function based on the distance of each corresponding pixel to identify the candidate pose that most closely approximates the pose of the hand.
    Type: Grant
    Filed: February 24, 2020
    Date of Patent: June 8, 2021
    Assignee: Google LLC
    Inventors: Jonathan James Taylor, Vladimir Tankovich, Danhang Tang, Cem Keskin, Adarsh Prakash Murthy Kowdle, Philip L. Davidson, Shahram Izadi, David Kim
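    Illustrative sketch: a toy Python example of the energy-minimization idea in the abstract above, with a single sphere standing in for the articulated signed distance field defined by the skinned tetrahedral mesh, and a crude candidate search standing in for the local minimization.
    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def articulated_sdf(points, pose_center, radius=0.05):
        # Stand-in for the articulated signed distance field: signed distance
        # to a sphere whose center plays the role of the "pose".
        return np.linalg.norm(points - pose_center, axis=1) - radius

    def energy(points, pose_center):
        # Sum of squared distances from observed hand pixels (back-projected
        # to 3D) to the model surface.
        return float(np.sum(articulated_sdf(points, pose_center) ** 2))

    true_center = np.array([0.1, 0.0, 0.4])
    obs = true_center + 0.03 * rng.normal(size=(200, 3))   # synthetic depth points

    # Crude candidate search standing in for the local energy minimization.
    candidates = [true_center + 0.02 * rng.normal(size=3) for _ in range(50)]
    best = min(candidates, key=lambda c: energy(obs, c))
    print(best, energy(obs, best))
    ```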
  • Patent number: 10839539
    Abstract: An electronic device estimates a depth map of an environment based on stereo depth images captured by depth cameras having exposure times that are offset from each other in conjunction with illuminators pulsing illumination patterns into the environment. A processor of the electronic device matches small sections of the depth images from the cameras to each other and to corresponding patches of immediately preceding depth images (e.g., a spatio-temporal image patch “cube”). The processor computes a matching cost for each spatio-temporal image patch cube by converting each spatio-temporal image patch into binary codes and defining a cost function between two stereo image patches as the difference between the binary codes. The processor minimizes the matching cost to generate a disparity map, and optimizes the disparity map by rejecting outliers using a decision tree with learned pixel offsets and refining subpixels to generate a depth map of the environment.
    Type: Grant
    Filed: May 31, 2018
    Date of Patent: November 17, 2020
    Assignee: Google LLC
    Inventors: Adarsh Prakash Murthy Kowdle, Vladimir Tankovich, Danhang Tang, Cem Keskin, Jonathan James Taylor, Philip L. Davidson, Shahram Izadi, Sean Ryan Fanello, Julien Pascal Christophe Valentin, Christoph Rhemann, Mingsong Dou, Sameh Khamis, David Kim
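    Illustrative sketch: a small NumPy example of the binary-code matching cost described above, where patches are binarized, the cost between two stereo patches is the Hamming distance between their codes, and the disparity with the lowest cost is selected; the temporal (previous-frame) part of the patch cube and the decision-tree refinement are omitted.
    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def binarize(patch, thresholds):
        # Stand-in for the learned binarization: one bit per pixel comparison.
        return (patch.reshape(-1) > thresholds).astype(np.uint8)

    def matching_cost(code_a, code_b):
        # Cost between two stereo patches = Hamming distance of their codes.
        return int(np.count_nonzero(code_a != code_b))

    H, W, P, MAX_D = 32, 48, 5, 8
    left = rng.random((H, W))
    right = np.roll(left, -3, axis=1)          # simulate a true disparity of 3
    thresholds = rng.random(P * P)

    def best_disparity(y, x):
        ref = binarize(left[y:y + P, x:x + P], thresholds)
        costs = [matching_cost(ref, binarize(right[y:y + P, x - d:x - d + P], thresholds))
                 for d in range(MAX_D)]
        return int(np.argmin(costs))

    print(best_disparity(10, 20))              # expected: 3
    ```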
  • Publication number: 20200349772
    Abstract: A method includes receiving a first image including color data and depth data, determining a viewpoint associated with an augmented reality (AR) and/or virtual reality (VR) display displaying a second image, receiving at least one calibration image that includes an object from the first image in a different pose than it has in the first image, and generating the second image based on the first image, the viewpoint, and the at least one calibration image.
    Type: Application
    Filed: April 29, 2020
    Publication date: November 5, 2020
    Inventors: Anastasia Tkach, Ricardo Martin Brualla, Shahram Izadi, Shuoran Yang, Cem Keskin, Sean Ryan Francesco Fanello, Philip Davidson, Jonathan Taylor, Rohit Pandey, Andrea Tagliasacchi, Pavlo Pidlypenskyi
  • Patent number: 10762429
    Abstract: Emotional/cognitive state presentation is described. When two or more users, each using a device configured to present emotional/cognitive state data, are in proximity to one another, each device communicates the emotional/cognitive state of its user to the other devices. Upon receiving data indicating an emotional/cognitive state of another user, an indication of that user's emotional/cognitive state is presented.
    Type: Grant
    Filed: May 18, 2016
    Date of Patent: September 1, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: John C. Gordon, Cem Keskin
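    Illustrative sketch: a hypothetical Python example of the exchange described above, in which devices in proximity share their users' emotional/cognitive states and each device presents what it receives; the class and method names are invented for illustration.
    ```python
    from dataclasses import dataclass, field

    @dataclass
    class StateSharingDevice:
        user: str
        state: str = "neutral"
        received: dict = field(default_factory=dict)

        def share_with(self, other: "StateSharingDevice") -> None:
            # Proximity detected: send this user's current state to the peer.
            other.received[self.user] = self.state

        def present(self) -> None:
            # Present an indication of each nearby user's state.
            for user, state in self.received.items():
                print(f"[{self.user}'s device] {user} appears {state}")

    a = StateSharingDevice("Alice", "focused")
    b = StateSharingDevice("Bob", "stressed")
    a.share_with(b)
    b.share_with(a)
    a.present()
    b.present()
    ```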
  • Publication number: 20200193638
    Abstract: An electronic device estimates a pose of a hand by volumetrically deforming a signed distance field using a skinned tetrahedral mesh to locate a local minimum of an energy function, wherein the local minimum corresponds to the hand pose. The electronic device identifies a pose of the hand by fitting an implicit surface model of a hand to the pixels of a depth image that correspond to the hand. The electronic device uses a skinned tetrahedral mesh to warp space from a base pose to a deformed pose to define an articulated signed distance field from which the hand tracking module derives candidate poses of the hand. The electronic device then minimizes an energy function based on the distance of each corresponding pixel to identify the candidate pose that most closely approximates the pose of the hand.
    Type: Application
    Filed: February 24, 2020
    Publication date: June 18, 2020
    Inventors: Jonathan James Taylor, Vladimir Tankovich, Danhang Tang, Cem Keskin, Adarsh Prakash Murthy Kowdle, Philip L. Davidson, Shahram Izadi, David Kim
  • Patent number: 10614591
    Abstract: An electronic device estimates a pose of a hand by volumetrically deforming a signed distance field using a skinned tetrahedral mesh to locate a local minimum of an energy function, wherein the local minimum corresponds to the hand pose. The electronic device identifies a pose of the hand by fitting an implicit surface model of a hand to the pixels of a depth image that correspond to the hand. The electronic device uses a skinned tetrahedral mesh to warp space from a base pose to a deformed pose to define an articulated signed distance field from which the hand tracking module derives candidate poses of the hand. The electronic device then minimizes an energy function based on the distance of each corresponding pixel to identify the candidate pose that most closely approximates the pose of the hand.
    Type: Grant
    Filed: May 31, 2018
    Date of Patent: April 7, 2020
    Assignee: Google LLC
    Inventors: Jonathan James Taylor, Vladimir Tankovich, Danhang Tang, Cem Keskin, Adarsh Prakash Murthy Kowdle, Philip L. Davidson, Shahram Izadi, David Kim
  • Patent number: 10564713
    Abstract: Computer systems, methods, and storage media for generating a continuous motion control using neurological data and for associating the continuous motion control with a continuous user interface control to enable analog control of the user interface control. The user interface control is modulated through a user's physical movements within a continuous range of motion associated with the continuous motion control. The continuous motion control enables fine-tuned and continuous control of the corresponding user interface control as opposed to control limited to a small number of discrete settings.
    Type: Grant
    Filed: January 9, 2019
    Date of Patent: February 18, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Cem Keskin, Khuram Shahid, Bill Chau, Jaeyoun Kim, Kazuhito Koishida
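    Illustrative sketch: a minimal Python example of mapping a continuously decoded motion amplitude onto a continuous user interface control; the ranges and names are assumptions for this example, and the neurological decoding itself is not shown.
    ```python
    def to_control_value(motion_amplitude, motion_range=(0.0, 1.0), control_range=(0, 100)):
        # Map a continuously decoded motion amplitude (e.g. inferred from
        # neurological signals) onto a continuous UI control such as a slider.
        lo, hi = motion_range
        frac = (min(max(motion_amplitude, lo), hi) - lo) / (hi - lo)
        c_lo, c_hi = control_range
        return c_lo + frac * (c_hi - c_lo)

    # A stream of decoded amplitudes drives the control smoothly rather than
    # snapping between a few discrete settings.
    for amplitude in (0.05, 0.32, 0.78, 1.4):
        print(round(to_control_value(amplitude), 1))
    ```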
  • Patent number: 10484597
    Abstract: Emotional/cognitive state-triggered recording is described. A buffer is used to store captured video content until a change in an emotional or cognitive state of a user is detected. Sensor data indicating a change in an emotional or cognitive state of a user triggers the creation of a video segment based on the current contents of the buffer.
    Type: Grant
    Filed: November 7, 2018
    Date of Patent: November 19, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: John C. Gordon, Cem Keskin
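    Illustrative sketch: a small Python example of the buffering behavior described above, where frames are continuously captured into a rolling buffer and a detected change in emotional/cognitive state snapshots the buffer into a video segment; the buffer length, frame rate, and names are invented here.
    ```python
    from collections import deque

    BUFFER_SECONDS, FPS = 5, 2
    buffer = deque(maxlen=BUFFER_SECONDS * FPS)   # rolling buffer of recent frames
    segments = []

    def on_frame(frame):
        buffer.append(frame)                      # capture continuously into the buffer

    def on_state_change(previous_state, current_state):
        # A detected change in emotional/cognitive state triggers a segment
        # built from whatever the buffer currently holds.
        if current_state != previous_state:
            segments.append(list(buffer))

    for t in range(20):
        on_frame(f"frame-{t}")
    on_state_change("calm", "excited")            # e.g. sensors report a spike
    print(len(segments), len(segments[0]))        # 1 segment, last 10 frames
    ```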
  • Publication number: 20190212810
    Abstract: Computer systems, methods, and storage media for generating a continuous motion control using neurological data and for associating the continuous motion control with a continuous user interface control to enable analog control of the user interface control. The user interface control is modulated through a user's physical movements within a continuous range of motion associated with the continuous motion control. The continuous motion control enables fine-tuned and continuous control of the corresponding user interface control as opposed to control limited to a small number of discrete settings.
    Type: Application
    Filed: January 9, 2019
    Publication date: July 11, 2019
    Inventors: Cem Keskin, Khuram Shahid, Bill Chau, Jaeyoun Kim, Kazuhito Koishida
  • Patent number: 10311282
    Abstract: Region of interest detection in raw time of flight images is described. For example, a computing device receives at least one raw image captured for a single frame by a time of flight camera. The raw image depicts one or more objects in an environment of the time of flight camera (such as human hands, bodies or any other objects). The raw image is input to a trained region detector and in response one or more regions of interest in the raw image are received. A received region of interest comprises image elements of the raw image which are predicted to depict at least part of one of the objects. A depth computation logic computes depth from the one or more regions of interest of the raw image.
    Type: Grant
    Filed: September 11, 2017
    Date of Patent: June 4, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jamie Daniel Joseph Shotton, Cem Keskin, Christoph Rhemann, Toby Sharp, Duncan Paul Robertson, Pushmeet Kohli, Andrew William Fitzgibbon, Shahram Izadi
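    Illustrative sketch: a toy NumPy example in which a stand-in region detector finds a region of interest in a raw time-of-flight frame and a placeholder depth (phase) computation runs only inside that region; the threshold detector and 4-tap phase formula are simplifying assumptions, not the trained detector from the patent.
    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def detect_regions(raw_frame):
        # Stand-in for the trained region detector: one bounding box around
        # pixels whose raw response exceeds a fixed threshold.
        ys, xs = np.nonzero(raw_frame > 0.8)
        if ys.size == 0:
            return []
        return [(ys.min(), xs.min(), ys.max() + 1, xs.max() + 1)]

    def depth_from_roi(raw_frames, box):
        # Placeholder phase estimate computed only inside the region of
        # interest rather than over the whole raw frame.
        y0, x0, y1, x1 = box
        roi = raw_frames[:, y0:y1, x0:x1]
        return np.arctan2(roi[1] - roi[3], roi[0] - roi[2])

    raw = rng.random((4, 60, 80)) * 0.5           # four raw ToF measurements
    raw[:, 20:30, 30:45] += 0.6                   # a bright hand-like blob
    boxes = detect_regions(raw[0])
    print(boxes, depth_from_roi(raw, boxes[0]).shape)
    ```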
  • Publication number: 20190075239
    Abstract: Emotional/cognitive state-triggered recording is described. A buffer is used to store captured video content until a change in an emotional or cognitive state of a user is detected. Sensor data indicating a change in an emotional or cognitive state of a user triggers the creation of a video segment based on the current contents of the buffer.
    Type: Application
    Filed: November 7, 2018
    Publication date: March 7, 2019
    Inventors: John C. Gordon, Cem Keskin
  • Patent number: 10203751
    Abstract: Computer systems, methods, and storage media for generating a continuous motion control using neurological data and for associating the continuous motion control with a continuous user interface control to enable analog control of the user interface control. The user interface control is modulated through a user's physical movements within a continuous range of motion associated with the continuous motion control. The continuous motion control enables fine-tuned and continuous control of the corresponding user interface control as opposed to control limited to a small number of discrete settings.
    Type: Grant
    Filed: May 11, 2016
    Date of Patent: February 12, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Cem Keskin, Khuram Shahid, Bill Chau, Jaeyoun Kim, Kazuhito Koishida