Patents by Inventor Cem Keskin

Cem Keskin has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10154191
    Abstract: Emotional/cognitive state-triggered recording is described. A buffer is used to temporarily store captured video content until a change in an emotional or cognitive state of a user is detected. Sensor data indicating a change in an emotional or cognitive state of a user triggers the creation of a video segment based on the current contents of the buffer.
    Type: Grant
    Filed: May 18, 2016
    Date of Patent: December 11, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: John C. Gordon, Cem Keskin
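The buffering scheme this abstract describes can be sketched as a fixed-size ring buffer that is snapshotted into a saved segment when a sensor flags a state change. A minimal illustration, not the patented implementation; all names here are invented for the sketch:

```python
from collections import deque

class TriggeredRecorder:
    """Keep the last `capacity` frames; emit a segment on a state change."""

    def __init__(self, capacity=300):
        self.buffer = deque(maxlen=capacity)  # oldest frames drop automatically
        self.segments = []

    def add_frame(self, frame):
        self.buffer.append(frame)

    def on_state_change(self):
        # Snapshot the current buffer contents as a saved video segment.
        self.segments.append(list(self.buffer))

recorder = TriggeredRecorder(capacity=3)
for f in ["f1", "f2", "f3", "f4"]:
    recorder.add_frame(f)
recorder.on_state_change()
print(recorder.segments[0])  # ['f2', 'f3', 'f4']
```

Because the deque has a fixed `maxlen`, the oldest frame ("f1") is discarded before the trigger fires, so only the most recent window is preserved.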
  • Publication number: 20180350105
Abstract: An electronic device estimates a pose of a hand by volumetrically deforming a signed distance field using a skinned tetrahedral mesh to locate a local minimum of an energy function, wherein the local minimum corresponds to the hand pose. The electronic device identifies a pose of the hand by fitting an implicit surface model of a hand to the pixels of a depth image that correspond to the hand. The electronic device uses a skinned tetrahedral mesh to warp space from a base pose to a deformed pose to define an articulated signed distance field from which a hand tracking module derives candidate poses of the hand. The electronic device then minimizes an energy function based on the distance of each corresponding pixel to identify the candidate pose that most closely approximates the pose of the hand.
    Type: Application
    Filed: May 31, 2018
    Publication date: December 6, 2018
Inventors: Jonathan James Taylor, Vladimir Tankovich, Danhang Tang, Cem Keskin, Adarsh Prakash Murthy Kowdle, Philip L. Davidson, Shahram Izadi, David Kim
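The core idea above, scoring candidate poses by summing each depth pixel's distance to the model surface and keeping the minimizer, can be sketched with a toy signed distance function. The articulated SDF and tetrahedral-mesh warp are far more involved; here a sphere stands in for the hand model, and the "poses" are just candidate sphere centers:

```python
import math

def sphere_sdf(point, center, radius):
    """Toy signed distance: negative inside, positive outside the sphere."""
    return math.dist(point, center) - radius

def energy(pixels, center, radius):
    """Sum of squared distances from observed 3D points to the model surface."""
    return sum(sphere_sdf(p, center, radius) ** 2 for p in pixels)

# Observed points lying on a unit sphere centered at (1, 0, 0).
pixels = [(2.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 0.0, 0.0)]

# Candidate "poses" (sphere centers); keep the energy minimizer.
candidates = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
best = min(candidates, key=lambda c: energy(pixels, c, 1.0))
print(best)  # (1.0, 0.0, 0.0)
```

The real system minimizes a comparable energy with gradient-based optimization over articulated hand parameters rather than exhaustively scoring a handful of candidates.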
  • Publication number: 20180350087
    Abstract: An electronic device estimates a depth map of an environment based on stereo depth images captured by depth cameras having exposure times that are offset from each other in conjunction with illuminators pulsing illumination patterns into the environment. A processor of the electronic device matches small sections of the depth images from the cameras to each other and to corresponding patches of immediately preceding depth images (e.g., a spatio-temporal image patch “cube”). The processor computes a matching cost for each spatio-temporal image patch cube by converting each spatio-temporal image patch into binary codes and defining a cost function between two stereo image patches as the difference between the binary codes. The processor minimizes the matching cost to generate a disparity map, and optimizes the disparity map by rejecting outliers using a decision tree with learned pixel offsets and refining subpixels to generate a depth map of the environment.
    Type: Application
    Filed: May 31, 2018
    Publication date: December 6, 2018
Inventors: Adarsh Prakash Murthy Kowdle, Vladimir Tankovich, Danhang Tang, Cem Keskin, Jonathan James Taylor, Philip L. Davidson, Shahram Izadi, Sean Ryan Fanello, Julien Pascal Christophe Valentin, Christoph Rhemann, Mingsong Dou, Sameh Khamis, David Kim
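The matching-cost step described above, binarizing image patches and comparing the codes, can be illustrated with a census-style binary code and a Hamming-distance cost. This is a simplified 1-D sketch with invented data, not the patented spatio-temporal formulation:

```python
def binary_code(patch):
    """Binarize a patch against its mean intensity (a census-like code)."""
    mean = sum(patch) / len(patch)
    return [1 if v > mean else 0 for v in patch]

def matching_cost(patch_a, patch_b):
    """Hamming distance between the two patches' binary codes."""
    a, b = binary_code(patch_a), binary_code(patch_b)
    return sum(x != y for x, y in zip(a, b))

left = [10, 50, 12, 48]
# Candidate right-image patches, one per disparity hypothesis.
candidates = {0: [50, 10, 48, 12], 1: [11, 52, 13, 47], 2: [30, 30, 30, 30]}
best_disparity = min(candidates, key=lambda d: matching_cost(left, candidates[d]))
print(best_disparity)  # 1
```

Minimizing this cost per pixel yields the disparity map, which the patent then refines with a learned decision tree and subpixel interpolation before converting disparity to depth.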
  • Patent number: 10063560
    Abstract: A user may be authenticated to access an account, computing device, or other resource using gaze tracking. A gaze-based password may be established by prompting a user to identify multiple gaze targets within a scene. The gaze-based password may be used to authenticate the user to access the resource. In some examples, when the user attempts to access the resource, the scene may be presented on a display. In some examples, the scene may be a real-world scene including the user's real-world surroundings, or a mixed reality scene. The user's gaze may be tracked while the user is viewing the scene to generate login gaze tracking data. The login gaze tracking data may be compared to the gaze-based password and, if the login gaze tracking data satisfies the gaze-based password, the user may be authenticated to access the resource.
    Type: Grant
    Filed: April 29, 2016
    Date of Patent: August 28, 2018
Assignee: Microsoft Technology Licensing, LLC
    Inventors: John C. Gordon, Cem Keskin
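The verification step described above, comparing login gaze data against enrolled gaze targets, can be sketched as an ordered hit test against target regions. The geometry, tolerances, and function names below are illustrative assumptions, not details from the patent:

```python
import math

def satisfies_password(fixations, targets):
    """True if the login fixations hit each enrolled gaze target in order."""
    if len(fixations) != len(targets):
        return False
    for (fx, fy), (tx, ty, radius) in zip(fixations, targets):
        if math.hypot(fx - tx, fy - ty) > radius:
            return False
    return True

# Enrolled password: three gaze targets in the scene, each with a tolerance radius.
password = [(100, 120, 25), (300, 80, 25), (220, 240, 25)]

print(satisfies_password([(105, 118), (296, 85), (225, 238)], password))  # True
print(satisfies_password([(105, 118), (180, 85), (225, 238)], password))  # False
```

A tolerance radius per target is one plausible way to absorb gaze-tracker noise; a production system would also need to handle fixation detection and scenes where the targets are real-world objects.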
  • Patent number: 10044712
    Abstract: A user may be authenticated to access an account, computing device, or other resource based on the user's gaze pattern and neural or other physiological response(s) to one or more images or other stimuli. When the user attempts to access the resource, a computing device may obtain login gaze tracking data and measurement of a physiological condition of the user at the time that the user is viewing an image or other stimulus. Based on comparison of the login gaze tracking data and the measurement of the physiological condition to a model, the computing device can determine whether to authenticate the user to access the resource.
    Type: Grant
    Filed: May 31, 2016
    Date of Patent: August 7, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: John C. Gordon, Cem Keskin, Michael Betser
  • Patent number: 9911032
Abstract: Tracking hand or body pose from image data is described, for example, to control a game system, natural user interface or for augmented reality. In various examples a prediction engine takes a single frame of image data and predicts a distribution over a pose of a hand or body depicted in the image data. In examples, a stochastic optimizer has a pool of candidate poses of the hand or body which it iteratively refines, and samples from the predicted distribution are used to replace some candidate poses in the pool. In some examples a best candidate pose from the pool is selected as the current tracked pose and the selection process uses a 3D model of the hand or body.
    Type: Grant
    Filed: January 4, 2017
    Date of Patent: March 6, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jamie Daniel Joseph Shotton, Cem Keskin, Jonathan Taylor, Toby Sharp, Shahram Izadi, Andrew William Fitzgibbon, Pushmeet Kohli, Duncan Paul Robertson
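The optimizer loop described above, a pool of candidates refined by perturbation with the worst members replaced by samples from the predicted distribution, can be sketched in one dimension. A real pose is high-dimensional and scored against a 3D model; here the "pose" is a single number and the fitness is closeness to an observed value, all of which is invented for illustration:

```python
import random

random.seed(0)

def score(pose, observed=0.7):
    """Toy fitness: how close a 1-D 'pose' is to the observed value."""
    return -abs(pose - observed)

def track(predicted_samples, iterations=20, pool_size=8):
    # Seed the pool from samples of the predicted pose distribution.
    pool = [random.choice(predicted_samples) for _ in range(pool_size)]
    for _ in range(iterations):
        # Refine: perturb each candidate, keep the better variant.
        pool = [max(p, p + random.gauss(0, 0.05), key=score) for p in pool]
        # Replace the worst candidates with fresh predicted samples.
        pool.sort(key=score, reverse=True)
        pool[-2:] = random.sample(predicted_samples, 2)
    # Select the best candidate as the current tracked pose.
    return max(pool, key=score)

pose = track(predicted_samples=[0.1, 0.4, 0.6, 0.9])
print(round(pose, 2))
```

The re-injection step is what lets the tracker recover from failure: even if every refined candidate has drifted, fresh samples from the per-frame prediction keep plausible poses in the pool.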
  • Patent number: 9886621
    Abstract: Computer vision systems for segmenting scenes into semantic components identify a differential within the physiological readings from the user. The differential corresponds to a semantic boundary associated with the user's gaze. Based upon data gathered by a gaze tracking device, the computer vision system identifies a relative location of the user's gaze at the time of the identified differential. The computer vision system then associates the relative location of the user's gaze with a semantic boundary.
    Type: Grant
    Filed: May 11, 2016
    Date of Patent: February 6, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: John C. Gordon, Cem Keskin
  • Patent number: 9864431
    Abstract: Computer systems, methods, and storage media for changing the state of an application by detecting neurological user intent data associated with a particular operation of a particular application state, and changing the application state so as to enable execution of the particular operation as intended by the user. The application state is automatically changed to align with the intended operation, as determined by received neurological user intent data, so that the intended operation is performed. Some embodiments relate to a computer system creating or updating a state machine, through a training process, to change the state of an application according to detected neurological data.
    Type: Grant
    Filed: May 11, 2016
    Date of Patent: January 9, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Cem Keskin, David Kim, Bill Chau, Jaeyoun Kim, Kazuhito Koishida, Khuram Shahid
  • Publication number: 20170372126
    Abstract: Region of interest detection in raw time of flight images is described. For example, a computing device receives at least one raw image captured for a single frame by a time of flight camera. The raw image depicts one or more objects in an environment of the time of flight camera (such as human hands, bodies or any other objects). The raw image is input to a trained region detector and in response one or more regions of interest in the raw image are received. A received region of interest comprises image elements of the raw image which are predicted to depict at least part of one of the objects. A depth computation logic computes depth from the one or more regions of interest of the raw image.
    Type: Application
    Filed: September 11, 2017
    Publication date: December 28, 2017
    Applicant: Microsoft Technology Licensing, LLC
Inventors: Jamie Daniel Joseph Shotton, Cem Keskin, Christoph Rhemann, Toby Sharp, Duncan Paul Robertson, Pushmeet Kohli, Andrew William Fitzgibbon, Shahram Izadi
  • Publication number: 20170346817
    Abstract: A user may be authenticated to access an account, computing device, or other resource based on the user's gaze pattern and neural or other physiological response(s) to one or more images or other stimuli. When the user attempts to access the resource, a computing device may obtain login gaze tracking data and measurement of a physiological condition of the user at the time that the user is viewing an image or other stimulus. Based on comparison of the login gaze tracking data and the measurement of the physiological condition to a model, the computing device can determine whether to authenticate the user to access the resource.
    Type: Application
    Filed: May 31, 2016
    Publication date: November 30, 2017
    Inventors: John C. Gordon, Cem Keskin, Michael Betser
  • Publication number: 20170337476
    Abstract: Emotional/cognitive state presentation is described. When two or more users, each using a device configured to present emotional/cognitive state data, are in proximity to one another, each device communicates an emotional/cognitive state of the user of the device to another device. Upon receiving data indicating an emotional/cognitive state of another user, an indication of the emotional/cognitive state of the user is presented.
    Type: Application
    Filed: May 18, 2016
    Publication date: November 23, 2017
    Inventors: John C. Gordon, Cem Keskin
  • Publication number: 20170339338
    Abstract: Emotional/cognitive state-triggered recording is described. A buffer is used to temporarily store captured video content until a change in an emotional or cognitive state of a user is detected. Sensor data indicating a change in an emotional or cognitive state of a user triggers the creation of a video segment based on the current contents of the buffer.
    Type: Application
    Filed: May 18, 2016
    Publication date: November 23, 2017
    Inventors: John C. Gordon, Cem Keskin
  • Publication number: 20170329404
    Abstract: Computer systems, methods, and storage media for changing the state of an application by detecting neurological user intent data associated with a particular operation of a particular application state, and changing the application state so as to enable execution of the particular operation as intended by the user. The application state is automatically changed to align with the intended operation, as determined by received neurological user intent data, so that the intended operation is performed. Some embodiments relate to a computer system creating or updating a state machine, through a training process, to change the state of an application according to detected neurological data.
    Type: Application
    Filed: May 11, 2016
    Publication date: November 16, 2017
    Inventors: Cem Keskin, David Kim, Bill Chau, Jaeyoun Kim, Kazuhito Koishida, Khuram Shahid
  • Publication number: 20170329392
    Abstract: Computer systems, methods, and storage media for generating a continuous motion control using neurological data and for associating the continuous motion control with a continuous user interface control to enable analog control of the user interface control. The user interface control is modulated through a user's physical movements within a continuous range of motion associated with the continuous motion control. The continuous motion control enables fine-tuned and continuous control of the corresponding user interface control as opposed to control limited to a small number of discrete settings.
    Type: Application
    Filed: May 11, 2016
    Publication date: November 16, 2017
    Inventors: Cem Keskin, Khuram Shahid, Bill Chau, Jaeyoun Kim, Kazuhito Koishida
  • Publication number: 20170330023
    Abstract: Computer vision systems for segmenting scenes into semantic components identify a differential within the physiological readings from the user. The differential corresponds to a semantic boundary associated with the user's gaze. Based upon data gathered by a gaze tracking device, the computer vision system identifies a relative location of the user's gaze at the time of the identified differential. The computer vision system then associates the relative location of the user's gaze with a semantic boundary.
    Type: Application
    Filed: May 11, 2016
    Publication date: November 16, 2017
    Inventors: John C. Gordon, Cem Keskin
  • Publication number: 20170318019
    Abstract: A user may be authenticated to access an account, computing device, or other resource using gaze tracking. A gaze-based password may be established by prompting a user to identify multiple gaze targets within a scene. The gaze-based password may be used to authenticate the user to access the resource. In some examples, when the user attempts to access the resource, the scene may be presented on a display. In some examples, the scene may be a real-world scene including the user's real-world surroundings, or a mixed reality scene. The user's gaze may be tracked while the user is viewing the scene to generate login gaze tracking data. The login gaze tracking data may be compared to the gaze-based password and, if the login gaze tracking data satisfies the gaze-based password, the user may be authenticated to access the resource.
    Type: Application
    Filed: April 29, 2016
    Publication date: November 2, 2017
    Inventors: John C. Gordon, Cem Keskin
  • Patent number: 9773155
    Abstract: Region of interest detection in raw time of flight images is described. For example, a computing device receives at least one raw image captured for a single frame by a time of flight camera. The raw image depicts one or more objects in an environment of the time of flight camera (such as human hands, bodies or any other objects). The raw image is input to a trained region detector and in response one or more regions of interest in the raw image are received. A received region of interest comprises image elements of the raw image which are predicted to depict at least part of one of the objects. A depth computation logic computes depth from the one or more regions of interest of the raw image.
    Type: Grant
    Filed: October 14, 2014
    Date of Patent: September 26, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jamie Daniel Joseph Shotton, Cem Keskin, Christoph Rhemann, Toby Sharp, Duncan Paul Robertson, Pushmeet Kohli, Andrew William Fitzgibbon, Shahram Izadi
  • Patent number: 9734424
    Abstract: Filtering sensor data is described, for example, where filters conditioned on a local appearance of the signal are predicted by a machine learning system, and used to filter the sensor data. In various examples the sensor data is a stream of noisy video image data and the filtering process denoises the video stream. In various examples the sensor data is a depth image and the filtering process refines the depth image which may then be used for gesture recognition or other purposes. In various examples the sensor data is one dimensional measurement data from an electric motor and the filtering process denoises the measurements. In examples the machine learning system comprises a random decision forest where trees of the forest store filters at their leaves. In examples, the random decision forest is trained using a training objective with a data dependent regularization term.
    Type: Grant
    Filed: April 14, 2014
    Date of Patent: August 15, 2017
Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sean Ryan Francesco Fanello, Cem Keskin, Pushmeet Kohli, Shahram Izadi, Jamie Daniel Joseph Shotton, Antonio Criminisi
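The idea of filters conditioned on local appearance, with a tree routing each window to a filter stored at a leaf, can be sketched with a one-split "tree" that picks between a smoothing kernel and an identity kernel based on local variance. The threshold, kernels, and variance test are illustrative stand-ins for what the patent learns from data:

```python
def local_variance(window):
    mean = sum(window) / len(window)
    return sum((v - mean) ** 2 for v in window) / len(window)

def choose_filter(window, threshold=1.0):
    """A one-split 'tree': flat regions get smoothing, edges get identity."""
    if local_variance(window) < threshold:
        return [1 / 3, 1 / 3, 1 / 3]  # kernel stored at the 'flat' leaf
    return [0.0, 1.0, 0.0]            # kernel stored at the 'edge' leaf

def denoise(signal):
    out = list(signal)
    for i in range(1, len(signal) - 1):
        window = signal[i - 1 : i + 2]
        kernel = choose_filter(window)
        out[i] = sum(w * v for w, v in zip(kernel, window))
    return out

noisy = [1.0, 1.2, 0.9, 1.1, 5.0, 5.2, 4.9]
smoothed = denoise(noisy)
print(smoothed)
```

Conditioning the filter on the window is what preserves the step between 1.1 and 5.0 while still averaging out the small fluctuations on either side; a real forest would have many splits on learned features and many learned filters at its leaves.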
  • Patent number: 9690984
    Abstract: A signal encoding an infrared (IR) image including a plurality of IR pixels is received from an IR camera. Each IR pixel specifies one or more IR parameters of that IR pixel. IR-skin pixels that image a human hand are identified in the IR image. For each IR-skin pixel, a depth of a human hand portion imaged by that IR-skin pixel is estimated based on the IR parameters of that IR-skin pixel. A skeletal hand model including a plurality of hand joints is derived. Each hand joint is defined with three independent position coordinates inferred from the estimated depths of each human hand portion.
    Type: Grant
    Filed: April 14, 2015
    Date of Patent: June 27, 2017
Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ben Butler, Vladimir Tankovich, Cem Keskin, Sean Ryan Francesco Fanello, Shahram Izadi, Emad Barsoum, Simon P. Stachniak, Yichen Wei
  • Publication number: 20170116471
Abstract: Tracking hand or body pose from image data is described, for example, to control a game system, natural user interface or for augmented reality. In various examples a prediction engine takes a single frame of image data and predicts a distribution over a pose of a hand or body depicted in the image data. In examples, a stochastic optimizer has a pool of candidate poses of the hand or body which it iteratively refines, and samples from the predicted distribution are used to replace some candidate poses in the pool. In some examples a best candidate pose from the pool is selected as the current tracked pose and the selection process uses a 3D model of the hand or body.
    Type: Application
    Filed: January 4, 2017
    Publication date: April 27, 2017
    Inventors: Jamie Daniel Joseph Shotton, Cem Keskin, Jonathan Taylor, Toby Sharp, Shahram Izadi, Andrew William Fitzgibbon, Pushmeet Kohli, Duncan Paul Robertson