Patents by Inventor Siddharth Khullar

Siddharth Khullar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11937938
    Abstract: Sleep conditions such as moderate-to-severe sleep apnea can be assessed using multi-night assessments. A respiration signal (e.g., acquired from a sensor strip) can be processed via a computing device. The respiration signal can be segmented and the segments can be classified to identify one or more apnea/hypopnea events. In some examples, some of the segments can be normalized such that each segment input for classification can be of the same size. The identified one or more apnea/hypopnea events can be used to estimate a nightly parameter indicative of a severity of (or presence of) sleep apnea. The nightly parameters from a multi-night period can be used to estimate a multi-night parameter indicative of the severity of (or presence of) sleep apnea. In some examples, quality checks can be performed to filter out some data (e.g., to exclude data from entire nights or exclude a portion of data from individual nights).
    Type: Grant
    Filed: June 25, 2020
    Date of Patent: March 26, 2024
    Assignee: Apple Inc.
    Inventors: Matt Travis Bianchi, Alexander Mark Chan, Fredrik J. Sannholm, Lifeng Miao, Siddharth Khullar
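    Illustrative sketch: The abstract above outlines a pipeline of segmenting a respiration signal, classifying segments as apnea/hypopnea events, estimating an events-per-hour nightly parameter, and aggregating nights that pass a quality check. A minimal Python sketch of that flow follows; the sampling rate, segment length, threshold-based event rule, and minimum-hours check are assumptions for illustration, not the patented implementation.

      import numpy as np

      FS = 10          # assumed respiration sampling rate (Hz)
      SEGMENT_S = 30   # assumed segment length (seconds)

      def segment(signal):
          """Split one night's respiration signal into equal-length segments."""
          n = (len(signal) // (FS * SEGMENT_S)) * FS * SEGMENT_S
          return signal[:n].reshape(-1, FS * SEGMENT_S)

      def nightly_event_index(signal):
          """Events per hour for one night (an AHI-like nightly parameter).
          The amplitude-collapse rule is a crude stand-in for the segment
          classifier described in the abstract."""
          segs = segment(signal)
          baseline = np.median([np.ptp(s) for s in segs])
          events = sum(np.ptp(s) < 0.3 * baseline for s in segs)
          return events / (len(segs) * SEGMENT_S / 3600.0)

      def multi_night_index(nights, min_hours=4.0):
          """Quality check: drop nights with too little data, then take the
          median nightly parameter as the multi-night estimate."""
          kept = [nightly_event_index(s) for s in nights
                  if len(s) / (FS * 3600.0) >= min_hours]
          return float(np.median(kept)) if kept else float("nan")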
  • Patent number: 11500937
    Abstract: A system for selecting different aspects of data objects to be matched with similar aspects of other data objects. A user inputs a search data object and a value. A neural network computes features for the search object at multiple layers that correspond to different aspects of the object. A descriptor is generated for the search object from features output at a layer position of the neural network determined from the value. The descriptor is compared to corresponding descriptors for objects in a collection to select objects that include aspects similar to an aspect of the search object. The user can change the value to view different objects that include aspects similar to other aspects of the search object. Thus, the user can explore different aspects of an object to find other objects that share the aspect the user is interested in.
    Type: Grant
    Filed: July 23, 2018
    Date of Patent: November 15, 2022
    Assignee: Apple Inc.
    Inventors: Luca Zappella, Siddharth Khullar, Till M. Quack, Xavier Suau Cuadros
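    Illustrative sketch: The abstract above ties a user-supplied value to a layer position in the network and builds the search descriptor from that layer's features. The NumPy sketch below shows the idea with a tiny random stand-in network; the function names (layer_features, descriptor, search) and the cosine-similarity ranking are illustrative assumptions, not the patented code.

      import numpy as np

      rng = np.random.default_rng(0)
      # stand-in for a trained network: three weight matrices (layers)
      LAYERS = [rng.standard_normal((64, 32)),
                rng.standard_normal((32, 16)),
                rng.standard_normal((16, 8))]

      def layer_features(x):
          """Forward pass that keeps the activations of every layer."""
          feats = []
          for w in LAYERS:
              x = np.tanh(x @ w)
              feats.append(x)
          return feats

      def descriptor(x, value):
          """Map the user value in [0, 1] to a layer position and use that
          layer's L2-normalized activations as the descriptor."""
          feats = layer_features(x)
          idx = min(int(value * len(feats)), len(feats) - 1)
          d = feats[idx]
          return d / (np.linalg.norm(d) + 1e-12)

      def search(query, collection, value, k=5):
          """Rank collection objects by cosine similarity of same-layer descriptors."""
          q = descriptor(query, value)
          sims = [float(q @ descriptor(obj, value)) for obj in collection]
          return sorted(range(len(collection)), key=lambda i: -sims[i])[:k]

    Changing the value re-runs the search against a different layer's descriptor, surfacing objects that match a different aspect of the query.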
  • Patent number: 11437039
    Abstract: Modifying operation of an intelligent agent in response to facial expressions and/or emotions.
    Type: Grant
    Filed: December 23, 2020
    Date of Patent: September 6, 2022
    Inventors: Siddharth Khullar, Abhishek Sharma, Jerremy Holland, Nicholas E. Apostoloff, Russell Y. Webb, Tai-Peng Tian, Tomas J. Pfister
  • Publication number: 20210166691
    Abstract: Modifying operation of an intelligent agent in response to facial expressions and/or emotions.
    Type: Application
    Filed: December 23, 2020
    Publication date: June 3, 2021
    Inventors: Siddharth Khullar, Abhishek Sharma, Jerremy Holland, Nicholas E. Apostoloff, Russell Y. Webb, Tai-Peng Tian, Tomas J. Pfister
  • Publication number: 20210117782
    Abstract: In some examples, an individually-pruned neural network can estimate blood pressure from a seismocardiogram (SCG). In some examples, a baseline model can be constructed by training the model with SCG data and blood pressure measurements from a plurality of subjects. One or more filters (e.g., the filters in the top layer of the network) can be ranked by separability, which can be used to prune the model for each unseen user that uses the model thereafter, for example. In some examples, individuals can use individually-pruned models to calculate blood pressure using SCG data without corresponding blood pressure measurements.
    Type: Application
    Filed: July 31, 2020
    Publication date: April 22, 2021
    Inventors: Siddharth Khullar, Nicholas E. Apostoloff, Amruta Pai
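    Illustrative sketch: The abstract above ranks top-layer filters by separability and prunes the model per user. Below is a rough NumPy sketch in which separability is approximated by a Fisher-style score of each filter's response between two blood-pressure groups; the score, the keep fraction, and the function names are assumptions for illustration only.

      import numpy as np

      def separability(acts, labels):
          """Fisher-like score per filter: between-group distance over
          within-group spread of the filter activations."""
          a, b = acts[labels == 0], acts[labels == 1]
          return (a.mean(0) - b.mean(0)) ** 2 / (a.var(0) + b.var(0) + 1e-9)

      def prune_mask(acts, labels, keep_frac=0.5):
          """Keep only the most separable fraction of top-layer filters."""
          scores = separability(acts, labels)
          k = max(1, int(keep_frac * scores.size))
          mask = np.zeros(scores.size, dtype=bool)
          mask[np.argsort(scores)[-k:]] = True
          return mask

      # toy demo: 200 training windows, 32 top-layer filters, binary BP label
      rng = np.random.default_rng(1)
      acts = rng.standard_normal((200, 32))
      labels = rng.integers(0, 2, size=200)
      kept = acts[:, prune_mask(acts, labels)]  # features the pruned model keeps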
  • Patent number: 10885915
    Abstract: Modifying operation of an intelligent agent in response to facial expressions and/or emotions.
    Type: Grant
    Filed: June 30, 2017
    Date of Patent: January 5, 2021
    Inventors: Siddharth Khullar, Abhishek Sharma, Jerremy Holland, Nicholas E. Apostoloff, Russell Y. Webb, Tai-Peng Tian, Tomas J. Pfister
  • Publication number: 20190348037
    Abstract: Modifying operation of an intelligent agent in response to facial expressions and/or emotions.
    Type: Application
    Filed: June 30, 2017
    Publication date: November 14, 2019
    Inventors: Siddharth Khullar, Abhishek Sharma, Jerremy Holland, Nicholas E. Apostoloff, Russell Y. Webb, Tai-Peng Tian, Tomas J. Pfister
  • Patent number: 9977980
    Abstract: A “Food Logger” provides various approaches for learning or training one or more image-based models (referred to herein as “meal models”) of nutritional content of meals. This training is based on one or more datasets of images of meals in combination with “meal features” that describe various parameters of the meal. Examples of meal features include, but are not limited to, food type, meal contents, portion size, nutritional content (e.g., calories, vitamins, minerals, carbohydrates, protein, salt, etc.), food source (e.g., specific restaurants or restaurant chains, grocery stores, particular pre-packaged foods, school meals, meals prepared at home, etc.). Given the trained models, the Food Logger automatically provides estimates of nutritional information based on automated recognition of new images of meals provided by (or for) the user. This nutritional information is then used to enable a wide range of user-centric interactions relating to food consumed by individual users.
    Type: Grant
    Filed: April 17, 2017
    Date of Patent: May 22, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Neel Suresh Joshi, Siddharth Khullar, T Scott Saponas, Daniel Morris, Oscar Beijbom
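    Illustrative sketch: The abstract above pairs meal images with "meal features" at training time and then estimates nutrition for new images. The Python sketch below stands in for that with a nearest-neighbour lookup over a toy intensity-histogram embedding; embed(), MealModel, and the record keys are hypothetical names, not the Food Logger's actual recogniser.

      import numpy as np

      def embed(image):
          """Stand-in image embedding: a coarse intensity histogram."""
          hist, _ = np.histogram(image, bins=32, range=(0, 255), density=True)
          return hist

      class MealModel:
          def __init__(self):
              self.feats, self.records = [], []   # parallel lists

          def add_meal(self, image, record):
              """record: dict of meal features, e.g. {"calories": 640, "protein_g": 22}."""
              self.feats.append(embed(image))
              self.records.append(record)

          def estimate(self, image, k=3):
              """Average the nutrition of the k most similar training meals."""
              q = embed(image)
              dists = np.linalg.norm(np.array(self.feats) - q, axis=1)
              nearest = np.argsort(dists)[:k]
              return {key: float(np.mean([self.records[i][key] for i in nearest]))
                      for key in self.records[0]}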
  • Publication number: 20170323174
    Abstract: A “Food Logger” provides various approaches for learning or training one or more image-based models (referred to herein as “meal models”) of nutritional content of meals. This training is based on one or more datasets of images of meals in combination with “meal features” that describe various parameters of the meal. Examples of meal features include, but are not limited to, food type, meal contents, portion size, nutritional content (e.g., calories, vitamins, minerals, carbohydrates, protein, salt, etc.), food source (e.g., specific restaurants or restaurant chains, grocery stores, particular pre-packaged foods, school meals, meals prepared at home, etc.). Given the trained models, the Food Logger automatically provides estimates of nutritional information based on automated recognition of new images of meals provided by (or for) the user. This nutritional information is then used to enable a wide range of user-centric interactions relating to food consumed by individual users.
    Type: Application
    Filed: April 17, 2017
    Publication date: November 9, 2017
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Neel Suresh Joshi, Siddharth Khullar, T Scott Saponas, Daniel Morris, Oscar Beijbom
  • Patent number: 9659225
    Abstract: A “Food Logger” provides various approaches for learning or training one or more image-based models (referred to herein as “meal models”) of nutritional content of meals. This training is based on one or more datasets of images of meals in combination with “meal features” that describe various parameters of the meal. Examples of meal features include, but are not limited to, food type, meal contents, portion size, nutritional content (e.g., calories, vitamins, minerals, carbohydrates, protein, salt, etc.), food source (e.g., specific restaurants or restaurant chains, grocery stores, particular pre-packaged foods, school meals, meals prepared at home, etc.). Given the trained models, the Food Logger automatically provides estimates of nutritional information based on automated recognition of new images of meals provided by (or for) the user. This nutritional information is then used to enable a wide range of user-centric interactions relating to food consumed by individual users.
    Type: Grant
    Filed: February 12, 2014
    Date of Patent: May 23, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Neel Suresh Joshi, Siddharth Khullar, T Scott Saponas, Daniel Morris, Oscar Beijbom
  • Patent number: 9504391
    Abstract: A system and method to determine pulse transit time (PTT) using a handheld device. The method includes generating an electrocardiogram (EKG) for a user of the handheld device. Two portions of the user's body are in contact with two contact points of the handheld device. The method also includes de-noising the EKG to identify a start time when a blood pulse leaves a heart of the user. The method further includes de-noising a plurality of video images of the user to identify a pressure wave indicating an arterial site and a time when the pressure wave appears. Additionally, the method includes determining the PTT based on the de-noised EKG and the de-noised video images.
    Type: Grant
    Filed: March 4, 2013
    Date of Patent: November 29, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Daniel Morris, T. Scott Saponas, Desney S. Tan, Morgan Dixon, Siddharth Khullar, Harshvardhan Vathsangam
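    Illustrative sketch: The abstract above times the interval between a heartbeat seen in the EKG and the arrival of the pressure wave seen in video of an arterial site. A simplified Python sketch follows; it assumes the two signals are already de-noised, share a clock, and are sampled at a common rate FS, and it uses SciPy's generic peak finder rather than whatever detection the patent describes.

      import numpy as np
      from scipy.signal import find_peaks

      FS = 100  # assumed common sampling rate (Hz)

      def pulse_transit_time(ekg, video_wave):
          """Mean PTT (seconds): first pulse-wave arrival after each R-peak."""
          r_peaks, _ = find_peaks(ekg, height=0.6 * ekg.max(), distance=FS // 2)
          p_peaks, _ = find_peaks(video_wave, distance=FS // 2)
          ptts = []
          for r in r_peaks:
              later = p_peaks[p_peaks > r]
              if later.size:
                  ptts.append((later[0] - r) / FS)
          return float(np.mean(ptts)) if ptts else float("nan")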
  • Publication number: 20150302158
    Abstract: Aspects of the subject disclosure are directed towards a video-based pulse/heart rate system that may use motion data to reduce or eliminate the effects of motion on pulse detection. Signal quality may be computed from (e.g., transformed) video signal data, such as by providing video signal feature data to a trained classifier that provides a measure of the quality of pulse information in each signal. Based upon the signal quality data, corresponding waveforms may be processed to select one for extracting pulse information therefrom. Heart rate data may be computed from the extracted pulse information, which may be smoothed into a heart rate value for a time window based upon confidence and/or prior heart rate data.
    Type: Application
    Filed: April 21, 2014
    Publication date: October 22, 2015
    Applicant: Microsoft Corporation
    Inventors: Daniel Scott Morris, Siddharth Khullar, Neel Suresh Joshi, Timothy Scott Saponas, Desney S. Tan
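    Illustrative sketch: The abstract above scores candidate video-derived waveforms for pulse quality, extracts heart rate from the best one, and smooths the estimate by confidence. The Python sketch below substitutes a spectral-concentration score for the trained signal-quality classifier; the frame rate, frequency band, and blending rule are assumptions for illustration.

      import numpy as np

      FS = 30.0  # assumed video frame rate (Hz)

      def band_spectrum(wave):
          """Magnitude spectrum restricted to a plausible heart-rate band (0.7-4 Hz)."""
          spec = np.abs(np.fft.rfft(wave - wave.mean()))
          freqs = np.fft.rfftfreq(len(wave), 1 / FS)
          band = (freqs >= 0.7) & (freqs <= 4.0)
          return freqs[band], spec[band]

      def quality_score(wave):
          """Stand-in for the trained classifier: how strongly energy
          concentrates at a single in-band frequency."""
          _, spec = band_spectrum(wave)
          return float(spec.max() / (spec.sum() + 1e-9))

      def update_heart_rate(waveforms, prev_hr=None):
          """Pick the best-quality waveform and confidence-weight the new estimate."""
          scores = [quality_score(w) for w in waveforms]
          best = int(np.argmax(scores))
          freqs, spec = band_spectrum(waveforms[best])
          hr = 60.0 * freqs[np.argmax(spec)]
          if prev_hr is None:
              return hr
          conf = scores[best]
          return conf * hr + (1 - conf) * prev_hr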
  • Publication number: 20150228062
    Abstract: A “Food Logger” provides various approaches for learning or training one or more image-based models (referred to herein as “meal models”) of nutritional content of meals. This training is based on one or more datasets of images of meals in combination with “meal features” that describe various parameters of the meal. Examples of meal features include, but are not limited to, food type, meal contents, portion size, nutritional content (e.g., calories, vitamins, minerals, carbohydrates, protein, salt, etc.), food source (e.g., specific restaurants or restaurant chains, grocery stores, particular pre-packaged foods, school meals, meals prepared at home, etc.). Given the trained models, the Food Logger automatically provides estimates of nutritional information based on automated recognition of new images of meals provided by (or for) the user. This nutritional information is then used to enable a wide range of user-centric interactions relating to food consumed by individual users.
    Type: Application
    Filed: February 12, 2014
    Publication date: August 13, 2015
    Applicant: Microsoft Corporation
    Inventors: Neel Suresh Joshi, Siddharth Khullar, T Scott Saponas, Daniel Morris, Oscar Beijbom
  • Patent number: 9060718
    Abstract: In exemplary implementations, this invention comprises apparatus for retinal self-imaging. Visual stimuli help the user self-align his eye with a camera. Bi-ocular coupling induces the test eye to rotate into different positions. As the test eye rotates, a video is captured of different areas of the retina. Computational photography methods process this video into a mosaiced image of a large area of the retina. An LED is pressed against the skin near the eye, to provide indirect, diffuse illumination of the retina. The camera has a wide field of view, and can image part of the retina even when the eye is off-axis (when the eye's pupillary axis and camera's optical axis are not aligned). Alternately, the retina is illuminated directly through the pupil, and different parts of a large lens are used to image different parts of the retina. Alternately, a plenoptic camera is used for retinal imaging.
    Type: Grant
    Filed: February 13, 2013
    Date of Patent: June 23, 2015
    Assignee: Massachusetts Institute of Technology
    Inventors: Matthew Everett Lawson, Ramesh Raskar, Jason Boggess, Siddharth Khullar
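    Illustrative sketch: The abstract above mosaics a video of different retinal areas into one wide-field image. The OpenCV sketch below covers only that computational-photography step (optics, illumination, and eye guidance are out of scope): each frame is registered to the first with an ORB-feature homography and pasted onto a larger canvas. The use of OpenCV, the ORB/RANSAC choices, and the max-blending are assumptions, not the patented method.

      import cv2
      import numpy as np

      def gray(img):
          return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img.ndim == 3 else img

      def mosaic(frames, canvas_scale=3):
          """frames: list of BGR retinal video frames covering different areas."""
          h, w = frames[0].shape[:2]
          canvas = np.zeros((h * canvas_scale, w * canvas_scale, 3), np.uint8)
          # shift registered frames toward the middle of the canvas
          offset = np.array([[1, 0, w], [0, 1, h], [0, 0, 1]], dtype=np.float64)
          orb = cv2.ORB_create(1000)
          bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
          kp0, des0 = orb.detectAndCompute(gray(frames[0]), None)
          for frame in frames:
              kp, des = orb.detectAndCompute(gray(frame), None)
              matches = sorted(bf.match(des, des0), key=lambda m: m.distance)[:100]
              src = np.float32([kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
              dst = np.float32([kp0[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
              H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
              warped = cv2.warpPerspective(frame, offset @ H,
                                           (canvas.shape[1], canvas.shape[0]))
              canvas = np.maximum(canvas, warped)  # crude blend of overlapping retina
          return canvas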
  • Publication number: 20140249398
    Abstract: A system and method to determine pulse transit time (PTT) using a handheld device. The method includes generating an electrocardiogram (EKG) for a user of the handheld device. Two portions of the user's body are in contact with two contact points of the handheld device. The method also includes de-noising the EKG to identify a start time when a blood pulse leaves a heart of the user. The method further includes de-noising a plurality of video images of the user to identify a pressure wave indicating an arterial site and a time when the pressure wave appears. Additionally, the method includes determining the PTT based on the de-noised EKG and the de-noised video images.
    Type: Application
    Filed: March 4, 2013
    Publication date: September 4, 2014
    Applicant: Microsoft Corporation
    Inventors: Daniel Morris, T. Scott Saponas, Desney S. Tan, Morgan Dixon, Siddharth Khullar, Harshvardhan Vathsangam
  • Publication number: 20130208241
    Abstract: In exemplary implementations, this invention comprises apparatus for retinal self-imaging. Visual stimuli help the user self-align his eye with a camera. Bi-ocular coupling induces the test eye to rotate into different positions. As the test eye rotates, a video is captured of different areas of the retina. Computational photography methods process this video into a mosaiced image of a large area of the retina. An LED is pressed against the skin near the eye, to provide indirect, diffuse illumination of the retina. The camera has a wide field of view, and can image part of the retina even when the eye is off-axis (when the eye's pupillary axis and camera's optical axis are not aligned). Alternately, the retina is illuminated directly through the pupil, and different parts of a large lens are used to image different parts of the retina. Alternately, a plenoptic camera is used for retinal imaging.
    Type: Application
    Filed: February 13, 2013
    Publication date: August 15, 2013
    Inventors: Matthew Everett Lawson, Jason Boggess, Siddharth Khullar, Ramesh Raskar