Patents by Inventor Cem Keskin
Cem Keskin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10154191
Abstract: Emotional/cognitive state-triggered recording is described. A buffer is used to temporarily store captured video content until a change in an emotional or cognitive state of a user is detected. Sensor data indicating such a change triggers the creation of a video segment based on the current contents of the buffer.
Type: Grant
Filed: May 18, 2016
Date of Patent: December 11, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: John C. Gordon, Cem Keskin
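The buffering scheme this abstract describes can be sketched as a small ring buffer that snapshots its contents when the detected state changes. This is an illustrative sketch, not the patented implementation; the `TriggeredRecorder` class, its method names, and the string-valued states are all hypothetical.

```python
from collections import deque

class TriggeredRecorder:
    """Keep the last `capacity` frames in a ring buffer; when a change in
    the user's emotional/cognitive state is detected, snapshot the buffer
    contents as a video segment. (Hypothetical names, toy data.)"""

    def __init__(self, capacity=3):
        self.buffer = deque(maxlen=capacity)
        self.segments = []
        self.last_state = None

    def on_frame(self, frame, state):
        self.buffer.append(frame)
        if self.last_state is not None and state != self.last_state:
            # State change detected: create a segment from buffer contents.
            self.segments.append(list(self.buffer))
        self.last_state = state

recorder = TriggeredRecorder(capacity=3)
for i, state in enumerate(["calm", "calm", "calm", "excited", "excited"]):
    recorder.on_frame(f"frame{i}", state)
```

Because the buffer is bounded, the snapshot taken on the state change holds only the most recent frames leading up to the detected change.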
-
Publication number: 20180350105
Abstract: An electronic device estimates a pose of a hand by volumetrically deforming a signed distance field using a skinned tetrahedral mesh to locate a local minimum of an energy function, wherein the local minimum corresponds to the hand pose. The electronic device identifies a pose of the hand by fitting an implicit surface model of a hand to the pixels of a depth image that correspond to the hand. The electronic device uses a skinned tetrahedral mesh to warp space from a base pose to a deformed pose to define an articulated signed distance field from which it derives candidate poses of the hand. The electronic device then minimizes an energy function based on the distance of each corresponding pixel to identify the candidate pose that most closely approximates the pose of the hand.
Type: Application
Filed: May 31, 2018
Publication date: December 6, 2018
Inventors: Jonathan James Taylor, Vladimir Tankovich, Danhang Tang, Cem Keskin, Adarsh Prakash Murthy Kowdle, Philip L. Davidson, Shahram Izadi, David Kim
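The core idea of minimizing an energy over a signed distance field can be illustrated in miniature: a toy model surface (a circle, via its SDF) is fitted to observed points by searching for the translation that minimizes the summed squared signed distances. This is a drastically simplified sketch under assumed shapes; the patent's articulated SDF, skinned tetrahedral warping, and actual optimizer are not reproduced here.

```python
def sphere_sdf(p, center, r):
    # Signed distance from 2-D point p to the surface of a circle.
    return ((p[0] - center[0]) ** 2 + (p[1] - center[1]) ** 2) ** 0.5 - r

def fit_translation(points, radius, candidates):
    """Grid-search the translation ('pose') minimizing the energy
    sum of squared signed distances of observed points to the model
    surface -- a stand-in for SDF-based energy minimization."""
    def energy(c):
        return sum(sphere_sdf(p, c, radius) ** 2 for p in points)
    return min(candidates, key=energy)
```

With observed points lying exactly on a unit circle centered at (2, 0), the search recovers that center because its energy is zero.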
-
Publication number: 20180350087
Abstract: An electronic device estimates a depth map of an environment based on stereo depth images captured by depth cameras having exposure times that are offset from each other in conjunction with illuminators pulsing illumination patterns into the environment. A processor of the electronic device matches small sections of the depth images from the cameras to each other and to corresponding patches of immediately preceding depth images (e.g., a spatio-temporal image patch “cube”). The processor computes a matching cost for each spatio-temporal image patch cube by converting each spatio-temporal image patch into binary codes and defining a cost function between two stereo image patches as the difference between the binary codes. The processor minimizes the matching cost to generate a disparity map, and optimizes the disparity map by rejecting outliers using a decision tree with learned pixel offsets and refining subpixels to generate a depth map of the environment.
Type: Application
Filed: May 31, 2018
Publication date: December 6, 2018
Inventors: Adarsh Prakash Murthy Kowdle, Vladimir Tankovich, Danhang Tang, Cem Keskin, Jonathan James Taylor, Philip L. Davidson, Shahram Izadi, Sean Ryan Fanello, Julien Pascal Christophe Valentin, Christoph Rhemann, Mingsong Dou, Sameh Khamis, David Kim
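A minimal sketch of the binary-code matching cost the abstract mentions: patches are thresholded into binary codes, and the disparity whose right-image patch has the smallest Hamming distance to the left patch's code wins. The function names, the fixed threshold, and the 1-D row layout are assumptions for illustration; the patented system builds spatio-temporal patch cubes and learns its codes.

```python
def binarize(patch, threshold):
    # Convert an image patch to a binary code by thresholding each pixel.
    return tuple(1 if p > threshold else 0 for p in patch)

def hamming(a, b):
    # Cost between two binary codes = number of differing bits.
    return sum(x != y for x, y in zip(a, b))

def best_disparity(left_patch, right_row, patch_w, max_disp, threshold=128):
    """Pick the disparity whose right-image patch minimizes the
    Hamming distance to the left patch's binary code."""
    left_code = binarize(left_patch, threshold)
    costs = [hamming(left_code, binarize(right_row[d:d + patch_w], threshold))
             for d in range(max_disp)]
    return min(range(max_disp), key=costs.__getitem__)
```

On a toy row where the left patch's pattern reappears at offset 2, the minimum-cost disparity is 2.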
-
Patent number: 10063560
Abstract: A user may be authenticated to access an account, computing device, or other resource using gaze tracking. A gaze-based password may be established by prompting a user to identify multiple gaze targets within a scene. The gaze-based password may be used to authenticate the user to access the resource. In some examples, when the user attempts to access the resource, the scene may be presented on a display. In some examples, the scene may be a real-world scene including the user's real-world surroundings, or a mixed reality scene. The user's gaze may be tracked while the user is viewing the scene to generate login gaze tracking data. The login gaze tracking data may be compared to the gaze-based password and, if the login gaze tracking data satisfies the gaze-based password, the user may be authenticated to access the resource.
Type: Grant
Filed: April 29, 2016
Date of Patent: August 28, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: John C. Gordon, Cem Keskin
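The password check this abstract describes can be sketched as comparing an ordered sequence of login fixations against the enrolled gaze targets within a spatial tolerance. This is a hedged sketch: `gaze_matches`, the normalized screen coordinates, and the tolerance value are all hypothetical, and a real system would be far more robust to noise and ordering.

```python
def gaze_matches(login_fixations, password_targets, tolerance=0.1):
    """Return True if the login fixations hit the enrolled gaze targets
    in order, each within `tolerance` (normalized screen units)."""
    if len(login_fixations) != len(password_targets):
        return False
    for (fx, fy), (tx, ty) in zip(login_fixations, password_targets):
        # Euclidean distance between fixation and its enrolled target.
        if ((fx - tx) ** 2 + (fy - ty) ** 2) ** 0.5 > tolerance:
            return False
    return True
```

Fixations near every target (in order) authenticate; a fixation far from its target, or a missing fixation, fails.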
-
Patent number: 10044712
Abstract: A user may be authenticated to access an account, computing device, or other resource based on the user's gaze pattern and neural or other physiological response(s) to one or more images or other stimuli. When the user attempts to access the resource, a computing device may obtain login gaze tracking data and measurement of a physiological condition of the user at the time that the user is viewing an image or other stimulus. Based on comparison of the login gaze tracking data and the measurement of the physiological condition to a model, the computing device can determine whether to authenticate the user to access the resource.
Type: Grant
Filed: May 31, 2016
Date of Patent: August 7, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: John C. Gordon, Cem Keskin, Michael Betser
-
Patent number: 9911032
Abstract: Tracking hand or body pose from image data is described, for example, to control a game system or natural user interface, or for augmented reality. In various examples a prediction engine takes a single frame of image data and predicts a distribution over a pose of a hand or body depicted in the image data. In examples, a stochastic optimizer has a pool of candidate poses of the hand or body which it iteratively refines, and samples from the predicted distribution are used to replace some candidate poses in the pool. In some examples a best candidate pose from the pool is selected as the current tracked pose, and the selection process uses a 3D model of the hand or body.
Type: Grant
Filed: January 4, 2017
Date of Patent: March 6, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jamie Daniel Joseph Shotton, Cem Keskin, Jonathan Taylor, Toby Sharp, Shahram Izadi, Andrew William Fitzgibbon, Pushmeet Kohli, Duncan Paul Robertson
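The pool-based stochastic optimizer the abstract outlines can be shown in one dimension: keep a pool of candidate (scalar) poses, perturb them locally, replace the worst half with fresh samples from the per-frame predicted distribution, and return the candidate with the lowest energy. The energy here is a trivial stand-in for the 3D-model fit, and every constant is an assumption for illustration.

```python
import random

def track_pose(observation, predicted_mean, pool_size=30, iters=20, seed=0):
    """Toy pool-based stochastic optimizer over a 1-D 'pose'."""
    rng = random.Random(seed)
    energy = lambda pose: abs(pose - observation)  # stand-in for model fit
    # Initialize the pool from the prediction engine's distribution.
    pool = [rng.gauss(predicted_mean, 1.0) for _ in range(pool_size)]
    for _ in range(iters):
        pool = [p + rng.gauss(0, 0.1) for p in pool]  # local refinement
        pool.sort(key=energy)
        keep = pool[: pool_size // 2]                 # keep the best half
        # Replace the rest with fresh samples from the predicted distribution.
        fresh = [rng.gauss(predicted_mean, 1.0)
                 for _ in range(pool_size - len(keep))]
        pool = keep + fresh
    return min(pool, key=energy)
```

Even when the prediction is slightly off (mean 2.5 for a true pose of 3.0), refinement plus resampling pulls the best candidate close to the observation.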
-
Patent number: 9886621
Abstract: Computer vision systems for segmenting scenes into semantic components identify a differential within the physiological readings from the user. The differential corresponds to a semantic boundary associated with the user's gaze. Based upon data gathered by a gaze tracking device, the computer vision system identifies a relative location of the user's gaze at the time of the identified differential. The computer vision system then associates the relative location of the user's gaze with a semantic boundary.
Type: Grant
Filed: May 11, 2016
Date of Patent: February 6, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: John C. Gordon, Cem Keskin
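The "differential" detection can be sketched as flagging the gaze location whenever successive physiological readings jump by more than a threshold. The data shapes, threshold, and function name are hypothetical; real physiological signals would need filtering and calibration.

```python
def find_semantic_boundaries(readings, gaze_positions, threshold=5.0):
    """Return the gaze positions recorded at times when the physiological
    reading changed by more than `threshold` between samples -- a toy
    stand-in for the 'differential' marking a semantic boundary."""
    boundaries = []
    for i in range(1, len(readings)):
        if abs(readings[i] - readings[i - 1]) > threshold:
            boundaries.append(gaze_positions[i])
    return boundaries
```

A jump from 11 to 20 between the second and third samples associates the third gaze position with a semantic boundary.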
-
Patent number: 9864431
Abstract: Computer systems, methods, and storage media for changing the state of an application by detecting neurological user intent data associated with a particular operation of a particular application state, and changing the application state so as to enable execution of the particular operation as intended by the user. The application state is automatically changed to align with the intended operation, as determined by received neurological user intent data, so that the intended operation is performed. Some embodiments relate to a computer system creating or updating a state machine, through a training process, to change the state of an application according to detected neurological data.
Type: Grant
Filed: May 11, 2016
Date of Patent: January 9, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Cem Keskin, David Kim, Bill Chau, Jaeyoun Kim, Kazuhito Koishida, Khuram Shahid
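The state machine the abstract mentions can be sketched as a transition table keyed on (current state, decoded intent). The states, intents, and transitions below are illustrative placeholders, not the patent's trained machine.

```python
class IntentStateMachine:
    """Minimal state machine: (state, decoded intent) -> next state.
    Unknown intents leave the application state unchanged."""

    def __init__(self, transitions, initial):
        self.transitions = transitions  # {(state, intent): next_state}
        self.state = initial

    def on_intent(self, intent):
        self.state = self.transitions.get((self.state, intent), self.state)
        return self.state

# Hypothetical application states driven by decoded neurological intents.
sm = IntentStateMachine(
    {("menu", "select"): "game",
     ("game", "pause"): "paused",
     ("paused", "select"): "game"},
    initial="menu")
```

Each decoded intent advances the application state; an intent with no defined transition is a no-op rather than an error.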
-
Publication number: 20170372126
Abstract: Region of interest detection in raw time of flight images is described. For example, a computing device receives at least one raw image captured for a single frame by a time of flight camera. The raw image depicts one or more objects in an environment of the time of flight camera (such as human hands, bodies or any other objects). The raw image is input to a trained region detector and in response one or more regions of interest in the raw image are received. A received region of interest comprises image elements of the raw image which are predicted to depict at least part of one of the objects. A depth computation logic computes depth from the one or more regions of interest of the raw image.
Type: Application
Filed: September 11, 2017
Publication date: December 28, 2017
Applicant: Microsoft Technology Licensing, LLC
Inventors: Jamie Daniel Joseph Shotton, Cem Keskin, Christoph Rhemann, Toby Sharp, Duncan Paul Robertson, Pushmeet Kohli, Andrew William Fitzgibbon, Shahram Izadi
-
Publication number: 20170346817
Abstract: A user may be authenticated to access an account, computing device, or other resource based on the user's gaze pattern and neural or other physiological response(s) to one or more images or other stimuli. When the user attempts to access the resource, a computing device may obtain login gaze tracking data and measurement of a physiological condition of the user at the time that the user is viewing an image or other stimulus. Based on comparison of the login gaze tracking data and the measurement of the physiological condition to a model, the computing device can determine whether to authenticate the user to access the resource.
Type: Application
Filed: May 31, 2016
Publication date: November 30, 2017
Inventors: John C. Gordon, Cem Keskin, Michael Betser
-
Publication number: 20170337476
Abstract: Emotional/cognitive state presentation is described. When two or more users, each using a device configured to present emotional/cognitive state data, are in proximity to one another, each device communicates an emotional/cognitive state of the user of the device to another device. Upon receiving data indicating an emotional/cognitive state of another user, an indication of the emotional/cognitive state of the user is presented.
Type: Application
Filed: May 18, 2016
Publication date: November 23, 2017
Inventors: John C. Gordon, Cem Keskin
-
Publication number: 20170339338
Abstract: Emotional/cognitive state-triggered recording is described. A buffer is used to temporarily store captured video content until a change in an emotional or cognitive state of a user is detected. Sensor data indicating such a change triggers the creation of a video segment based on the current contents of the buffer.
Type: Application
Filed: May 18, 2016
Publication date: November 23, 2017
Inventors: John C. Gordon, Cem Keskin
-
Publication number: 20170329404
Abstract: Computer systems, methods, and storage media for changing the state of an application by detecting neurological user intent data associated with a particular operation of a particular application state, and changing the application state so as to enable execution of the particular operation as intended by the user. The application state is automatically changed to align with the intended operation, as determined by received neurological user intent data, so that the intended operation is performed. Some embodiments relate to a computer system creating or updating a state machine, through a training process, to change the state of an application according to detected neurological data.
Type: Application
Filed: May 11, 2016
Publication date: November 16, 2017
Inventors: Cem Keskin, David Kim, Bill Chau, Jaeyoun Kim, Kazuhito Koishida, Khuram Shahid
-
Publication number: 20170329392
Abstract: Computer systems, methods, and storage media for generating a continuous motion control using neurological data and for associating the continuous motion control with a continuous user interface control to enable analog control of the user interface control. The user interface control is modulated through a user's physical movements within a continuous range of motion associated with the continuous motion control. The continuous motion control enables fine-tuned and continuous control of the corresponding user interface control as opposed to control limited to a small number of discrete settings.
Type: Application
Filed: May 11, 2016
Publication date: November 16, 2017
Inventors: Cem Keskin, Khuram Shahid, Bill Chau, Jaeyoun Kim, Kazuhito Koishida
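The analog mapping this abstract describes can be sketched as a clamped linear map from the user's calibrated range of motion onto a continuous UI control range (e.g. a volume slider). The function name and (low, high) tuple ranges are assumptions for illustration.

```python
def map_motion_to_control(value, motion_range, control_range):
    """Linearly map a measured position within the user's continuous
    range of motion onto a continuous UI control range, clamping at
    the ends. Ranges are (low, high) tuples."""
    m_lo, m_hi = motion_range
    c_lo, c_hi = control_range
    t = (value - m_lo) / (m_hi - m_lo)
    t = max(0.0, min(1.0, t))  # clamp to the calibrated motion range
    return c_lo + t * (c_hi - c_lo)
```

Mid-range motion yields a mid-range control value, and motion beyond the calibrated range saturates at the control's endpoints rather than overshooting.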
-
Publication number: 20170330023
Abstract: Computer vision systems for segmenting scenes into semantic components identify a differential within the physiological readings from the user. The differential corresponds to a semantic boundary associated with the user's gaze. Based upon data gathered by a gaze tracking device, the computer vision system identifies a relative location of the user's gaze at the time of the identified differential. The computer vision system then associates the relative location of the user's gaze with a semantic boundary.
Type: Application
Filed: May 11, 2016
Publication date: November 16, 2017
Inventors: John C. Gordon, Cem Keskin
-
Publication number: 20170318019
Abstract: A user may be authenticated to access an account, computing device, or other resource using gaze tracking. A gaze-based password may be established by prompting a user to identify multiple gaze targets within a scene. The gaze-based password may be used to authenticate the user to access the resource. In some examples, when the user attempts to access the resource, the scene may be presented on a display. In some examples, the scene may be a real-world scene including the user's real-world surroundings, or a mixed reality scene. The user's gaze may be tracked while the user is viewing the scene to generate login gaze tracking data. The login gaze tracking data may be compared to the gaze-based password and, if the login gaze tracking data satisfies the gaze-based password, the user may be authenticated to access the resource.
Type: Application
Filed: April 29, 2016
Publication date: November 2, 2017
Inventors: John C. Gordon, Cem Keskin
-
Patent number: 9773155
Abstract: Region of interest detection in raw time of flight images is described. For example, a computing device receives at least one raw image captured for a single frame by a time of flight camera. The raw image depicts one or more objects in an environment of the time of flight camera (such as human hands, bodies or any other objects). The raw image is input to a trained region detector and in response one or more regions of interest in the raw image are received. A received region of interest comprises image elements of the raw image which are predicted to depict at least part of one of the objects. A depth computation logic computes depth from the one or more regions of interest of the raw image.
Type: Grant
Filed: October 14, 2014
Date of Patent: September 26, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jamie Daniel Joseph Shotton, Cem Keskin, Christoph Rhemann, Toby Sharp, Duncan Paul Robertson, Pushmeet Kohli, Andrew William Fitzgibbon, Shahram Izadi
-
Patent number: 9734424
Abstract: Filtering sensor data is described, for example, where filters conditioned on a local appearance of the signal are predicted by a machine learning system, and used to filter the sensor data. In various examples the sensor data is a stream of noisy video image data and the filtering process denoises the video stream. In various examples the sensor data is a depth image and the filtering process refines the depth image which may then be used for gesture recognition or other purposes. In various examples the sensor data is one dimensional measurement data from an electric motor and the filtering process denoises the measurements. In examples the machine learning system comprises a random decision forest where trees of the forest store filters at their leaves. In examples, the random decision forest is trained using a training objective with a data dependent regularization term.
Type: Grant
Filed: April 14, 2014
Date of Patent: August 15, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sean Ryan Francesco Fanello, Cem Keskin, Pushmeet Kohli, Shahram Izadi, Jamie Daniel Joseph Shotton, Antonio Criminisi
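The idea of filters conditioned on local appearance can be shown with a toy one-node "tree": based on the spread of a 3-sample window, it selects either a median filter (to preserve edge-like structure) or a mean filter (to smooth flat regions). A trained random forest would learn these split tests and store learned filters at its leaves; everything below is an illustrative stand-in.

```python
def choose_filter(window):
    """One decision node: pick a filter based on local appearance
    (here, the spread of a 3-sample window)."""
    spread = max(window) - min(window)
    if spread > 10:
        return lambda w: sorted(w)[1]     # edge-like: median preserves it
    return lambda w: sum(w) / 3.0         # smooth region: mean denoises

def filter_signal(signal):
    # Apply the locally selected filter at each interior sample.
    out = [signal[0]]
    for i in range(1, len(signal) - 1):
        window = signal[i - 1 : i + 2]
        out.append(choose_filter(window)(window))
    out.append(signal[-1])
    return out
```

On a signal with a sharp step, flat regions are averaged while the step itself is kept by the median branch instead of being blurred.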
-
Patent number: 9690984
Abstract: A signal encoding an infrared (IR) image including a plurality of IR pixels is received from an IR camera. Each IR pixel specifies one or more IR parameters of that IR pixel. IR-skin pixels that image a human hand are identified in the IR image. For each IR-skin pixel, a depth of a human hand portion imaged by that IR-skin pixel is estimated based on the IR parameters of that IR-skin pixel. A skeletal hand model including a plurality of hand joints is derived. Each hand joint is defined with three independent position coordinates inferred from the estimated depths of each human hand portion.
Type: Grant
Filed: April 14, 2015
Date of Patent: June 27, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Ben Butler, Vladimir Tankovich, Cem Keskin, Sean Ryan Francesco Fanello, Shahram Izadi, Emad Barsoum, Simon P. Stachniak, Yichen Wei
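As a very rough illustration of estimating depth from an IR parameter: under an idealized inverse-square falloff of active illumination, intensity scales as 1/depth², so depth can be recovered relative to a calibrated reference. The patent's actual per-pixel mapping is calibrated/learned and accounts for skin reflectance; this formula and its parameters are purely an assumed toy model.

```python
import math

def estimate_depth(ir_intensity, reference_intensity=1.0, reference_depth=1.0):
    """Toy depth-from-IR-brightness: with intensity ~ 1/depth**2,
    depth = reference_depth * sqrt(reference_intensity / ir_intensity)."""
    return reference_depth * math.sqrt(reference_intensity / ir_intensity)
```

A pixel returning a quarter of the reference intensity is estimated at twice the reference depth.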
-
Publication number: 20170116471
Abstract: Tracking hand or body pose from image data is described, for example, to control a game system or natural user interface, or for augmented reality. In various examples a prediction engine takes a single frame of image data and predicts a distribution over a pose of a hand or body depicted in the image data. In examples, a stochastic optimizer has a pool of candidate poses of the hand or body which it iteratively refines, and samples from the predicted distribution are used to replace some candidate poses in the pool. In some examples a best candidate pose from the pool is selected as the current tracked pose, and the selection process uses a 3D model of the hand or body.
Type: Application
Filed: January 4, 2017
Publication date: April 27, 2017
Inventors: Jamie Daniel Joseph Shotton, Cem Keskin, Jonathan Taylor, Toby Sharp, Shahram Izadi, Andrew William Fitzgibbon, Pushmeet Kohli, Duncan Paul Robertson