Patents by Inventor Siddharth Khullar
Siddharth Khullar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11937938
Abstract: Sleep conditions such as moderate-to-severe sleep apnea can be assessed using multi-night assessments. A respiration signal (e.g., acquired from a sensor strip) can be processed via a computing device. The respiration signal can be segmented and the segments can be classified to identify one or more apnea/hypopnea events. In some examples, some of the segments can be normalized such that each segment input for classification can be of the same size. The identified one or more apnea/hypopnea events can be used to estimate a nightly parameter indicative of a severity of (or presence of) sleep apnea. The nightly parameters from a multi-night period can be used to estimate a multi-night parameter indicative of the severity of (or presence of) sleep apnea. In some examples, quality checks can be performed to filter out some data (e.g., to exclude data from entire nights or exclude a portion of data from individual nights).
Type: Grant
Filed: June 25, 2020
Date of Patent: March 26, 2024
Assignee: Apple Inc.
Inventors: Matt Travis Bianchi, Alexander Mark Chan, Fredrik J. Sannholm, Lifeng Miao, Siddharth Khullar
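The abstract describes a pipeline of segmentation, segment normalization, classification, and nightly/multi-night aggregation with quality checks. A minimal sketch of that flow is below; the sampling rate, segment length, and the events-per-hour aggregation are illustrative assumptions, not details from the patent.

```python
import numpy as np

def segment_signal(resp, fs=16, seg_seconds=30):
    """Split a respiration signal into fixed-length segments (sizes are assumed)."""
    seg_len = fs * seg_seconds
    n = len(resp) // seg_len
    return resp[: n * seg_len].reshape(n, seg_len)

def normalize_segment(seg, target_len=256):
    """Resample a segment to a fixed size so every classifier input matches."""
    x_old = np.linspace(0.0, 1.0, num=len(seg))
    x_new = np.linspace(0.0, 1.0, num=target_len)
    return np.interp(x_new, x_old, seg)

def nightly_parameter(event_flags, hours_slept):
    """Detected events per hour, analogous to an apnea-hypopnea index."""
    return float(np.sum(event_flags)) / hours_slept

def multi_night_parameter(nightly_values, quality_ok):
    """Average nightly estimates, excluding nights that fail a quality check."""
    kept = [v for v, ok in zip(nightly_values, quality_ok) if ok]
    return float(np.mean(kept)) if kept else None
```

A segment classifier (not shown) would sit between `normalize_segment` and `nightly_parameter`, emitting the per-segment event flags.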
-
Patent number: 11500937
Abstract: A system for selecting different aspects of data objects to be matched with similar aspects of other data objects. A user inputs a search data object and a value. A neural network computes features for the search object at multiple layers that correspond to different aspects of the object. A descriptor is generated for the search object from features output at a layer position of the neural network determined from the value. The descriptor is compared to corresponding descriptors for objects in a collection to select objects that include aspects similar to an aspect of the search object. The user can change the value to view different objects that include aspects similar to other aspects of the search object. Thus, the user can explore different aspects of an object to find objects that include aspects similar to the aspect of the object that the user is interested in.
Type: Grant
Filed: July 23, 2018
Date of Patent: November 15, 2022
Assignee: Apple Inc.
Inventors: Luca Zappella, Siddharth Khullar, Till M. Quack, Xavier Suau Cuadros
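The key idea here is mapping a user-supplied value to a layer position, then using that layer's features as the search descriptor. A toy sketch under stated assumptions (a tanh MLP and cosine similarity, both illustrative choices rather than the patented implementation):

```python
import numpy as np

def layer_activations(x, weights):
    """Forward pass through a toy MLP, keeping activations at every layer."""
    acts, h = [], x
    for W in weights:
        h = np.tanh(W @ h)
        acts.append(h)
    return acts

def descriptor_for_value(acts, value):
    """Map a user value in [0, 1] to a layer position; that layer's
    activations become the search descriptor."""
    idx = min(int(value * len(acts)), len(acts) - 1)
    return acts[idx]

def rank_by_similarity(query, collection):
    """Rank collection descriptors by cosine similarity to the query."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    sims = [cos(query, d) for d in collection]
    return np.argsort(sims)[::-1]
```

Changing the value re-selects the layer, so the same collection can be re-ranked against a different "aspect" of the query object without recomputing the network.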
-
Patent number: 11437039
Abstract: Modifying operation of an intelligent agent in response to facial expressions and/or emotions.
Type: Grant
Filed: December 23, 2020
Date of Patent: September 6, 2022
Inventors: Siddharth Khullar, Abhishek Sharma, Jerremy Holland, Nicholas E. Apostoloff, Russell Y. Webb, Tai-Peng Tian, Tomas J. Pfister
-
Publication number: 20210166691
Abstract: Modifying operation of an intelligent agent in response to facial expressions and/or emotions.
Type: Application
Filed: December 23, 2020
Publication date: June 3, 2021
Inventors: Siddharth Khullar, Abhishek Sharma, Jerremy Holland, Nicholas E. Apostoloff, Russell Y. Webb, Tai-Peng Tian, Tomas J. Pfister
-
Publication number: 20210117782
Abstract: In some examples, an individually-pruned neural network can estimate blood pressure from a seismocardiogram (SMG). In some examples, a baseline model can be constructed by training the model with SMG data and blood pressure measurements from a plurality of subjects. One or more filters (e.g., the filters in the top layer of the network) can be ranked by separability, which can be used to prune the model for each unseen user that uses the model thereafter, for example. In some examples, individuals can use individually-pruned models to calculate blood pressure using SMG data without corresponding blood pressure measurements.
Type: Application
Filed: July 31, 2020
Publication date: April 22, 2021
Inventors: Siddharth KHULLAR, Nicholas E. APOSTOLOFF, Amruta PAI
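The per-user pruning step ranks filters by separability and keeps only the best ones. A minimal sketch using a Fisher-style separability score (the score itself and the keep fraction are assumptions for illustration; the application does not specify them here):

```python
import numpy as np

def fisher_separability(responses, labels):
    """Per-filter separability: squared between-class mean distance over
    the summed within-class variances (a Fisher-style score)."""
    responses = np.asarray(responses)  # shape: (n_samples, n_filters)
    labels = np.asarray(labels)
    a = responses[labels == 0]
    b = responses[labels == 1]
    num = (a.mean(axis=0) - b.mean(axis=0)) ** 2
    den = a.var(axis=0) + b.var(axis=0) + 1e-9
    return num / den

def prune_filters(scores, keep_fraction=0.5):
    """Keep the indices of the most separable filters for a given user."""
    k = max(1, int(len(scores) * keep_fraction))
    return np.argsort(scores)[::-1][:k]
```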
-
Patent number: 10885915
Abstract: Modifying operation of an intelligent agent in response to facial expressions and/or emotions.
Type: Grant
Filed: June 30, 2017
Date of Patent: January 5, 2021
Inventors: Siddharth Khullar, Abhishek Sharma, Jerremy Holland, Nicholas E. Apostoloff, Russell Y. Webb, Tai-Peng Tian, Tomas J. Pfister
-
Publication number: 20190348037
Abstract: Modifying operation of an intelligent agent in response to facial expressions and/or emotions.
Type: Application
Filed: June 30, 2017
Publication date: November 14, 2019
Inventors: Siddharth Khullar, Abhishek Sharma, Jerremy Holland, Nicholas E. Apostoloff, Russell Y. Webb, Tai-Peng Tian, Tomas J. Pfister
-
Patent number: 9977980
Abstract: A “Food Logger” provides various approaches for learning or training one or more image-based models (referred to herein as “meal models”) of nutritional content of meals. This training is based on one or more datasets of images of meals in combination with “meal features” that describe various parameters of the meal. Examples of meal features include, but are not limited to, food type, meal contents, portion size, nutritional content (e.g., calories, vitamins, minerals, carbohydrates, protein, salt, etc.), food source (e.g., specific restaurants or restaurant chains, grocery stores, particular pre-packaged foods, school meals, meals prepared at home, etc.). Given the trained models, the Food Logger automatically provides estimates of nutritional information based on automated recognition of new images of meals provided by (or for) the user. This nutritional information is then used to enable a wide range of user-centric interactions relating to food consumed by individual users.
Type: Grant
Filed: April 17, 2017
Date of Patent: May 22, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Neel Suresh Joshi, Siddharth Khullar, T Scott Saponas, Daniel Morris, Oscar Beijbom
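At its simplest, recognizing a new meal image against trained meal models can be viewed as matching an image embedding to the closest stored meal and returning that meal's features. This nearest-neighbor sketch is only one plausible reading of the abstract, with made-up embeddings and features:

```python
import numpy as np

def nearest_meal(query_embedding, meal_embeddings, meal_features):
    """Match a new meal image embedding to the closest trained meal and
    return its stored meal features (portion size, calories, ...)."""
    dists = np.linalg.norm(meal_embeddings - query_embedding, axis=1)
    return meal_features[int(np.argmin(dists))]
```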
-
Publication number: 20170323174
Abstract: A “Food Logger” provides various approaches for learning or training one or more image-based models (referred to herein as “meal models”) of nutritional content of meals. This training is based on one or more datasets of images of meals in combination with “meal features” that describe various parameters of the meal. Examples of meal features include, but are not limited to, food type, meal contents, portion size, nutritional content (e.g., calories, vitamins, minerals, carbohydrates, protein, salt, etc.), food source (e.g., specific restaurants or restaurant chains, grocery stores, particular pre-packaged foods, school meals, meals prepared at home, etc.). Given the trained models, the Food Logger automatically provides estimates of nutritional information based on automated recognition of new images of meals provided by (or for) the user. This nutritional information is then used to enable a wide range of user-centric interactions relating to food consumed by individual users.
Type: Application
Filed: April 17, 2017
Publication date: November 9, 2017
Applicant: Microsoft Technology Licensing, LLC
Inventors: Neel Suresh Joshi, Siddharth Khullar, T Scott Saponas, Daniel Morris, Oscar Beijbom
-
Patent number: 9659225
Abstract: A “Food Logger” provides various approaches for learning or training one or more image-based models (referred to herein as “meal models”) of nutritional content of meals. This training is based on one or more datasets of images of meals in combination with “meal features” that describe various parameters of the meal. Examples of meal features include, but are not limited to, food type, meal contents, portion size, nutritional content (e.g., calories, vitamins, minerals, carbohydrates, protein, salt, etc.), food source (e.g., specific restaurants or restaurant chains, grocery stores, particular pre-packaged foods, school meals, meals prepared at home, etc.). Given the trained models, the Food Logger automatically provides estimates of nutritional information based on automated recognition of new images of meals provided by (or for) the user. This nutritional information is then used to enable a wide range of user-centric interactions relating to food consumed by individual users.
Type: Grant
Filed: February 12, 2014
Date of Patent: May 23, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Neel Suresh Joshi, Siddharth Khullar, T Scott Saponas, Daniel Morris, Oscar Beijbom
-
Patent number: 9504391
Abstract: A system and method to determine pulse transit time (PTT) using a handheld device. The method includes generating an electrocardiogram (EKG) for a user of the handheld device. Two portions of the user's body are in contact with two contact points of the handheld device. The method also includes de-noising the EKG to identify a start time when a blood pulse leaves a heart of the user. The method further includes de-noising a plurality of video images of the user to identify a pressure wave indicating an arterial site and a time when the pressure wave appears. Additionally, the method includes determining the PTT based on the de-noised EKG and the de-noised video images.
Type: Grant
Filed: March 4, 2013
Date of Patent: November 29, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Daniel Morris, T. Scott Saponas, Desney S. Tan, Morgan Dixon, Siddharth Khullar, Harshvardhan Vathsangam
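Once the two timestamps are extracted (pulse departure from the EKG, pressure-wave arrival from the video), the PTT itself is just their difference. A minimal sketch; the function names and the frame-rate conversion helper are illustrative, not from the patent:

```python
def pulse_transit_time(ekg_r_peak_time, pulse_arrival_time):
    """PTT: delay between the blood pulse leaving the heart (de-noised EKG)
    and the pressure wave appearing at an arterial site (de-noised video)."""
    ptt = pulse_arrival_time - ekg_r_peak_time
    if ptt <= 0:
        raise ValueError("pressure-wave arrival must follow the EKG start time")
    return ptt

def frame_time(frame_index, fps):
    """Convert a video frame index into seconds for the arrival timestamp."""
    return frame_index / fps
```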
-
Publication number: 20150302158
Abstract: Aspects of the subject disclosure are directed towards a video-based pulse/heart rate system that may use motion data to reduce or eliminate the effects of motion on pulse detection. Signal quality may be computed from (e.g., transformed) video signal data, such as by providing video signal feature data to a trained classifier that provides a measure of the quality of pulse information in each signal. Based upon the signal quality data, corresponding waveforms may be processed to select one for extracting pulse information therefrom. Heart rate data may be computed from the extracted pulse information, which may be smoothed into a heart rate value for a time window based upon confidence and/or prior heart rate data.
Type: Application
Filed: April 21, 2014
Publication date: October 22, 2015
Applicant: Microsoft Corporation
Inventors: Daniel Scott Morris, Siddharth Khullar, Neel Suresh Joshi, Timothy Scott Saponas, Desney S. Tan
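The selection and smoothing steps can be sketched compactly: pick the waveform with the highest classifier quality score, then blend each new heart-rate estimate with the prior value by confidence. The linear blend is an assumed smoothing rule for illustration; the application only says smoothing uses confidence and/or prior heart rate.

```python
def select_best_waveform(waveforms, qualities):
    """Pick the waveform whose classifier quality score is highest."""
    best = max(range(len(waveforms)), key=lambda i: qualities[i])
    return waveforms[best]

def smooth_heart_rate(prev_hr, new_hr, confidence):
    """Blend the new estimate with the prior heart rate, weighted by
    confidence (0 = keep prior, 1 = trust new estimate fully)."""
    return confidence * new_hr + (1.0 - confidence) * prev_hr
```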
-
Publication number: 20150228062
Abstract: A “Food Logger” provides various approaches for learning or training one or more image-based models (referred to herein as “meal models”) of nutritional content of meals. This training is based on one or more datasets of images of meals in combination with “meal features” that describe various parameters of the meal. Examples of meal features include, but are not limited to, food type, meal contents, portion size, nutritional content (e.g., calories, vitamins, minerals, carbohydrates, protein, salt, etc.), food source (e.g., specific restaurants or restaurant chains, grocery stores, particular pre-packaged foods, school meals, meals prepared at home, etc.). Given the trained models, the Food Logger automatically provides estimates of nutritional information based on automated recognition of new images of meals provided by (or for) the user. This nutritional information is then used to enable a wide range of user-centric interactions relating to food consumed by individual users.
Type: Application
Filed: February 12, 2014
Publication date: August 13, 2015
Applicant: Microsoft Corporation
Inventors: Neel Suresh Joshi, Siddharth Khullar, T Scott Saponas, Daniel Morris, Oscar Beijbom
-
Patent number: 9060718
Abstract: In exemplary implementations, this invention comprises apparatus for retinal self-imaging. Visual stimuli help the user self-align his eye with a camera. Bi-ocular coupling induces the test eye to rotate into different positions. As the test eye rotates, a video is captured of different areas of the retina. Computational photography methods process this video into a mosaiced image of a large area of the retina. An LED is pressed against the skin near the eye, to provide indirect, diffuse illumination of the retina. The camera has a wide field of view, and can image part of the retina even when the eye is off-axis (when the eye's pupillary axis and camera's optical axis are not aligned). Alternatively, the retina is illuminated directly through the pupil, and different parts of a large lens are used to image different parts of the retina. Alternatively, a plenoptic camera is used for retinal imaging.
Type: Grant
Filed: February 13, 2013
Date of Patent: June 23, 2015
Assignee: Massachusetts Institute of Technology
Inventors: Matthew Everett Lawson, Ramesh Raskar, Jason Boggess, Siddharth Khullar
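The mosaicing step combines video frames of different retinal areas into one large image. A toy sketch of that accumulation, assuming frame offsets already come from a prior registration step (the averaging scheme is an illustrative simplification of the computational photography methods the abstract mentions):

```python
import numpy as np

def mosaic(frames, offsets, canvas_shape):
    """Average registered frames into one mosaic: paste each frame at its
    (row, col) offset and divide by the per-pixel frame count."""
    canvas = np.zeros(canvas_shape)
    count = np.zeros(canvas_shape)
    for frame, (r, c) in zip(frames, offsets):
        h, w = frame.shape
        canvas[r:r + h, c:c + w] += frame
        count[r:r + h, c:c + w] += 1
    return canvas / np.maximum(count, 1)  # avoid dividing uncovered pixels by 0
```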
-
Publication number: 20140249398
Abstract: A system and method to determine pulse transit time (PTT) using a handheld device. The method includes generating an electrocardiogram (EKG) for a user of the handheld device. Two portions of the user's body are in contact with two contact points of the handheld device. The method also includes de-noising the EKG to identify a start time when a blood pulse leaves a heart of the user. The method further includes de-noising a plurality of video images of the user to identify a pressure wave indicating an arterial site and a time when the pressure wave appears. Additionally, the method includes determining the PTT based on the de-noised EKG and the de-noised video images.
Type: Application
Filed: March 4, 2013
Publication date: September 4, 2014
Applicant: Microsoft Corporation
Inventors: Daniel Morris, T. Scott Saponas, Desney S. Tan, Morgan Dixon, Siddharth Khullar, Harshvardhan Vathsangam
-
Publication number: 20130208241
Abstract: In exemplary implementations, this invention comprises apparatus for retinal self-imaging. Visual stimuli help the user self-align his eye with a camera. Bi-ocular coupling induces the test eye to rotate into different positions. As the test eye rotates, a video is captured of different areas of the retina. Computational photography methods process this video into a mosaiced image of a large area of the retina. An LED is pressed against the skin near the eye, to provide indirect, diffuse illumination of the retina. The camera has a wide field of view, and can image part of the retina even when the eye is off-axis (when the eye's pupillary axis and camera's optical axis are not aligned). Alternatively, the retina is illuminated directly through the pupil, and different parts of a large lens are used to image different parts of the retina. Alternatively, a plenoptic camera is used for retinal imaging.
Type: Application
Filed: February 13, 2013
Publication date: August 15, 2013
Inventors: Matthew Everett Lawson, Jason Boggess, Siddharth Khullar, Ramesh Raskar