Patents by Inventor Nafiul Rashid

Nafiul Rashid has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250087016
    Abstract: In one embodiment, a method includes recording, by a camera of a client device, a video of a region of a first user's skin and estimating, from the recorded video, a current ratio of ratios (RoR) for the first user corresponding to the first user's current blood-oxygen saturation. The method further includes converting the first user's determined RoR to a transformed RoR for the first user based at least in part on a baseline user's RoR determined while generating a trained SpO2 prediction model. The trained SpO2 prediction model is trained to estimate the baseline user's SpO2 value based on an input RoR value from the baseline user. The method further includes determining, by the trained SpO2 prediction model and based on the transformed RoR for the first user, the first user's current estimated blood-oxygen saturation.
    Type: Application
    Filed: June 28, 2024
    Publication date: March 13, 2025
    Inventors: Li Zhu, Qijia Shao, Mohsin Ahmed, Korosh Vatanparvar, Migyeong Gwak, Nafiul Rashid, Jungmok Bae, Jilong Kuang, Jun Gao
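    The abstract above maps a new user's ratio of ratios (RoR) into the baseline user's RoR space before applying a trained SpO2 model. Below is a minimal sketch of that idea, assuming a simple multiplicative calibration and a linear SpO2 predictor; the function names, calibration constants, and model coefficients are illustrative, not taken from the publication.

    ```python
    def transform_ror(user_ror, user_baseline_ror, model_baseline_ror):
        """Map a new user's RoR into the baseline user's RoR space.

        Assumes a simple multiplicative calibration; the actual transform
        used in the publication is not specified here.
        """
        return user_ror * (model_baseline_ror / user_baseline_ror)

    def predict_spo2(ror, slope=-25.0, intercept=110.0):
        """Hypothetical linear SpO2 model trained on the baseline user's RoR."""
        return slope * ror + intercept

    # Example: a new user's current RoR estimated from a skin video.
    current_ror = 0.52
    calibrated = transform_ror(current_ror,
                               user_baseline_ror=0.55,
                               model_baseline_ror=0.50)
    print(f"Estimated SpO2: {predict_spo2(calibrated):.1f}%")
    ```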
  • Publication number: 20250079016
    Abstract: A method performed by at least one processor includes obtaining an image of a subject; preprocessing the image of the subject; inputting the preprocessed image into a machine learning model trained in accordance with a first frequency distribution corresponding to a first ground truth obtained from one or more sensors performing a vital measurement on one or more test subjects; and obtaining, from the machine learning model, an estimate of a signal corresponding to the vital measurement of the subject.
    Type: Application
    Filed: August 23, 2024
    Publication date: March 6, 2025
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Korosh Vatanparvar, Jeremy Speth, Nafiul Rashid, Li Zhu, Migyeong Gwak, Jilong Kuang, Jun Gao
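    A minimal sketch of the pipeline described in the abstract above (preprocess an image, run a trained model, obtain a vital-signal estimate). The preprocessing steps and the dummy model are placeholders, not the patented method or its training procedure.

    ```python
    import numpy as np

    def preprocess(image, size=(128, 128)):
        """Hypothetical preprocessing: crop to a fixed size and normalize."""
        image = image[:size[0], :size[1]].astype(np.float32)
        return (image - image.mean()) / (image.std() + 1e-8)

    def estimate_vital(model, image):
        """Run the trained model on a preprocessed image and return the signal estimate."""
        return model(preprocess(image))

    # Example with a dummy "model" standing in for one trained against
    # sensor-derived ground truth with a matched frequency distribution.
    dummy_model = lambda x: float(np.mean(np.abs(np.fft.rfft(x.ravel()))[1:5]))
    frame = np.random.rand(160, 160, 3)
    print("Estimated signal value:", estimate_vital(dummy_model, frame))
    ```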
  • Publication number: 20240423498
    Abstract: In one embodiment, a method includes detecting, by a motion sensor of a mobile device worn by a user, multiple motion signals, each representing a motion of the user about one of a number of mobile-device axes defined by an orientation of the mobile device. The method further includes determining, for each of the multiple mobile-device axes, a ballistocardiogram (BCG) signal based on the motion signal corresponding to that mobile-device axis; selecting, based on a strength of the determined BCG signals, one or more particular mobile-device axes and corresponding motion signals for estimating a user's tidal volume; determining, based on the one or more selected motion signals, one or more breathing features; and estimating, by providing the one or more breathing features to a trained machine-learning model, the user's current tidal volume.
    Type: Application
    Filed: June 24, 2024
    Publication date: December 26, 2024
    Inventors: Md Mahbubur Rahman, Yincheng Jin, Mehrab Bin Morshed, Nafiul Rashid, Jilong Kuang, Jun Gao
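    The entry above selects the mobile-device axis with the strongest ballistocardiogram (BCG) signal before extracting breathing features. A minimal sketch of axis selection by signal power and of two illustrative breathing features; the feature set and the synthetic data are placeholders.

    ```python
    import numpy as np

    def select_strongest_axis(motion_signals):
        """Pick the axis whose BCG signal has the highest power.

        motion_signals: dict mapping axis name -> 1-D numpy array.
        """
        powers = {axis: float(np.mean(sig ** 2)) for axis, sig in motion_signals.items()}
        return max(powers, key=powers.get), powers

    def breathing_features(signal, fs=50.0):
        """Illustrative breathing features: dominant frequency and peak-to-peak amplitude."""
        spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        dominant = freqs[np.argmax(spectrum[1:]) + 1]
        return {"dominant_freq_hz": dominant, "amplitude": float(np.ptp(signal))}

    # Example with synthetic x/y/z motion data; the y axis carries the strongest signal.
    t = np.arange(0, 30, 1 / 50.0)
    signals = {"x": 0.1 * np.sin(2 * np.pi * 0.25 * t),
               "y": 0.5 * np.sin(2 * np.pi * 0.25 * t),
               "z": 0.05 * np.random.randn(len(t))}
    axis, _ = select_strongest_axis(signals)
    print(axis, breathing_features(signals[axis]))
    ```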
  • Publication number: 20240420290
    Abstract: In one embodiment, a method includes accessing a video of a user's face. The method further includes accessing, for each image frame in the video, (1) one or more facial landmarks determined by a facial landmark detection (FLD) model and (2) a corresponding determined position in the image for each facial landmark. The method further includes determining, based on the one or more facial landmarks and corresponding positions, a motion of the user's face in the captured video; extracting, from the determined motion of the user's face, a corrected motion signal of the user's face; adjusting, based on the extracted corrected motion signal of the user's face, the positions of one or more facial landmarks in the image frames; and determining, based at least in part on the adjusted positions of the facial landmarks in the sequential images of the video, one or more vital signs of the user.
    Type: Application
    Filed: May 29, 2024
    Publication date: December 19, 2024
    Inventors: Korosh Vatanparvar, Jeremy Speth, Jicheng Li, Li Zhu, Nafiul Rashid, Migyeong Gwak, Jilong Kuang, Jun Gao
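    The abstract above adjusts facial-landmark positions using a corrected motion signal before extracting vital signs. A minimal sketch of one way to perform such a correction, assuming the global face motion can be approximated by the mean landmark trajectory and subtracted from each landmark track; the approach shown is illustrative, not the patented method.

    ```python
    import numpy as np

    def correct_landmark_positions(landmarks):
        """Remove shared (global) face motion from per-frame landmark positions.

        landmarks: array of shape (frames, num_landmarks, 2) with (x, y) positions.
        Returns motion-corrected landmark positions of the same shape.
        """
        # Global motion per frame: mean displacement of all landmarks relative to frame 0.
        global_motion = landmarks.mean(axis=1) - landmarks[0].mean(axis=0)
        return landmarks - global_motion[:, None, :]

    # Example: 68 landmarks drifting together because of head motion.
    rng = np.random.default_rng(0)
    base = rng.uniform(100, 200, size=(1, 68, 2))
    drift = np.cumsum(rng.normal(0, 0.5, size=(120, 1, 2)), axis=0)
    corrected = correct_landmark_positions(base + drift)
    print("Residual drift:", float(np.abs(corrected - base).max()))
    ```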
  • Publication number: 20240269513
    Abstract: A method includes collecting motion data of a user using a head-worn device while the user is performing a breathing exercise. The method also includes, for a window of the motion data, generating breathing depth features based on the motion data. The method further includes determining, using a first machine learning model that receives the breathing depth features as inputs, whether the motion data corresponds to a non-breathing motion. In addition, the method includes, responsive to determining that the motion data corresponds to the non-breathing motion, presenting a first notification to the user to adjust head motion.
    Type: Application
    Filed: July 25, 2023
    Publication date: August 15, 2024
    Inventors: Md Mahbubur Rahman, Tousif Ahmed, Nafiul Rashid, Jilong Kuang, Jun Gao
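    A minimal sketch of the gating step described above: compute breathing-depth features over a window of head-motion data, decide whether the window reflects non-breathing motion, and notify the user if so. The feature set and the threshold-based stand-in for the first machine learning model are assumptions.

    ```python
    import numpy as np

    def depth_features(window, fs=50.0):
        """Illustrative breathing-depth features for one window of motion data."""
        centered = window - window.mean()
        spectrum = np.abs(np.fft.rfft(centered))
        freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
        band = (freqs > 0.1) & (freqs < 0.7)            # typical breathing band
        return np.array([np.ptp(window),                # peak-to-peak depth proxy
                         centered.std(),
                         spectrum[band].sum() / (spectrum.sum() + 1e-8)])

    def is_non_breathing_motion(features, breathing_energy_min=0.3):
        """Stand-in for the first ML model: flag windows with little breathing-band energy."""
        return features[2] < breathing_energy_min

    fs = 50.0
    t = np.arange(0, 10, 1 / fs)
    for name, win in [("breathing", np.sin(2 * np.pi * 0.25 * t)),
                      ("head shake", np.sin(2 * np.pi * 3.0 * t))]:
        if is_non_breathing_motion(depth_features(win, fs)):
            print(f"{name}: please keep your head still")
        else:
            print(f"{name}: breathing motion detected")
    ```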
  • Publication number: 20240268711
    Abstract: A method includes capturing a video of a person's face using a camera. The method also includes determining a motion-based respiratory rate (RR) and a motion-based respiratory signal based on the video of the person's face. The method further includes determining a remote photoplethysmography (rPPG)-based RR and an rPPG-based respiratory signal based on the video of the person's face. The method also includes predicting whether the motion-based RR or the rPPG-based RR is more likely to be accurate using a trained machine learning model that receives the motion-based respiratory signal and the rPPG-based respiratory signal as input. In addition, the method includes presenting one of the motion-based RR or the rPPG-based RR based on the prediction.
    Type: Application
    Filed: February 5, 2024
    Publication date: August 15, 2024
    Inventors: Migyeong Gwak, Korosh Vatanparvar, Li Zhu, Michael Chan, Nafiul Rashid, Jungmok Bae, Jilong Kuang, Jun Gao
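    A minimal sketch of the selection step in the entry above: given a motion-based and an rPPG-based respiratory signal, predict which respiratory rate is more likely accurate and present that one. A simple spectral-quality heuristic stands in for the trained machine learning model.

    ```python
    import numpy as np

    def signal_quality(resp_signal, fs=30.0):
        """Quality proxy: fraction of spectral energy in the breathing band (0.1-0.5 Hz)."""
        spectrum = np.abs(np.fft.rfft(resp_signal - resp_signal.mean())) ** 2
        freqs = np.fft.rfftfreq(len(resp_signal), d=1.0 / fs)
        band = (freqs >= 0.1) & (freqs <= 0.5)
        return spectrum[band].sum() / (spectrum.sum() + 1e-8)

    def select_rr(motion_rr, motion_sig, rppg_rr, rppg_sig, fs=30.0):
        """Stand-in for the trained selector: prefer the RR from the cleaner signal."""
        if signal_quality(motion_sig, fs) >= signal_quality(rppg_sig, fs):
            return "motion", motion_rr
        return "rppg", rppg_rr

    # Example: a clean motion signal versus a noisy rPPG signal, both near 15 breaths/min.
    fs = 30.0
    t = np.arange(0, 30, 1 / fs)
    motion_sig = np.sin(2 * np.pi * 0.25 * t)
    rppg_sig = np.sin(2 * np.pi * 0.25 * t) + 2.0 * np.random.randn(len(t))
    print(select_rr(15.0, motion_sig, 14.0, rppg_sig, fs))
    ```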
  • Publication number: 20240197246
    Abstract: In one embodiment, a method includes accessing first sensor data from a first sensor worn on a first portion of a user's body and accessing second sensor data from a second sensor worn on a second portion of the user's body. The method includes determining, based on both the first sensor data and the second sensor data, one or more first features related to the user's activity and determining, based on the first features, an initial classification of the user's activity. When the initial classification indicates a class that includes one or more subclasses that are more distinguishable by one of the sensors, then a specific subclassification may be determined based on sensor data from only that one sensor. Otherwise, the classification of the user's activity may be based on the one or more first features that use data from both sensors.
    Type: Application
    Filed: March 7, 2023
    Publication date: June 20, 2024
    Inventors: Ebrahim Nematihosseinabadi, Mohsin Ahmed, Nafiul Rashid, Jilong Kuang, Jun Gao
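    A minimal sketch of the two-stage logic described above: an initial classification from features fused across both sensors, then, for classes whose subclasses are better separated by one sensor, a subclassification from that sensor alone. The class names, sensor mapping, and toy models are all assumptions.

    ```python
    def classify_activity(wrist_data, ankle_data,
                          coarse_model, wrist_model, ankle_model,
                          single_sensor_subclasses=None):
        """Two-stage classification: a fused coarse class, then a per-sensor subclass."""
        if single_sensor_subclasses is None:
            # Hypothetical mapping from coarse class to the sensor that best separates its subclasses.
            single_sensor_subclasses = {"stationary": "wrist"}
        fused_features = list(wrist_data) + list(ankle_data)   # stand-in feature fusion
        coarse = coarse_model(fused_features)
        sensor = single_sensor_subclasses.get(coarse)
        if sensor == "wrist":
            return coarse, wrist_model(wrist_data)
        if sensor == "ankle":
            return coarse, ankle_model(ankle_data)
        return coarse, None                                    # fused classification is final

    # Toy rule-based "models".
    coarse = lambda f: "stationary" if sum(f) < 1.0 else "locomotion"
    wrist = lambda f: "typing" if max(f) > 0.1 else "resting"
    ankle = lambda f: "walking"
    print(classify_activity([0.2, 0.1], [0.05, 0.02], coarse, wrist, ankle))
    ```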
  • Publication number: 20230380774
    Abstract: In one embodiment, a method includes accessing, from a first sensor of a wearable device, motion data representing motion of a user and determining, from the motion data, an activity level of the user. The method includes selecting, based on the activity level, an activity-based technique for estimating the breathing rate of the user, which is used to determine a breathing rate of the user. The method further includes determining a quality associated with the breathing rate and comparing the determined quality with a threshold. If the determined quality is not less than the threshold, then the method includes using the determined breathing rate as a final breathing-rate determination for the user. If the determined quality is less than the threshold, then the method includes activating a second sensor of the wearable device and determining, based on data from the second sensor, a breathing rate for the user.
    Type: Application
    Filed: May 18, 2023
    Publication date: November 30, 2023
    Inventors: Tousif Ahmed, Md Mahbubur Rahman, Yincheng Jin, Ebrahim Nematihosseinabadi, Mohsin Ahmed, Nafiul Rashid, Jilong Kuang, Jun Gao
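    A minimal sketch of the fallback logic in the entry above: choose a breathing-rate technique based on the user's activity level, check the estimate's quality against a threshold, and activate a second sensor only when the quality falls short. All callables are placeholders for the components named in the abstract.

    ```python
    def estimate_breathing_rate(motion_data, activity_level_fn, techniques,
                                quality_fn, second_sensor_fn, quality_threshold=0.6):
        """Activity-aware breathing-rate estimation with a second-sensor fallback."""
        level = activity_level_fn(motion_data)           # e.g. "rest", "walk", "run"
        rate = techniques[level](motion_data)            # activity-specific estimator
        if quality_fn(motion_data, rate) >= quality_threshold:
            return rate                                  # first sensor is good enough
        return second_sensor_fn()                        # fall back to the second sensor

    # Toy example with hypothetical components.
    rate = estimate_breathing_rate(
        motion_data=[0.1, 0.2, 0.15],
        activity_level_fn=lambda d: "rest",
        techniques={"rest": lambda d: 14.0, "walk": lambda d: 18.0},
        quality_fn=lambda d, r: 0.4,                     # low quality -> fallback triggered
        second_sensor_fn=lambda: 15.0,
    )
    print("Breathing rate:", rate)
    ```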
  • Publication number: 20230380793
    Abstract: A method includes obtaining at least one breathing audio sample of a user captured using earbuds worn by the user. The method also includes converting the at least one breathing audio sample to a breathing spectrogram configured as an image. The method further includes processing the breathing spectrogram using a trained multi-task convolutional neural network (CNN) to identify a breathing rate and a breathing depth of the user. In addition, the method includes outputting the breathing rate and the breathing depth of the user.
    Type: Application
    Filed: May 9, 2023
    Publication date: November 30, 2023
    Inventors: Mohsin Yusuf Ahmed, Tousif Ahmed, Md Mahbubur Rahman, Ebrahim Nematihosseinabadi, Nafiul Rashid, Jilong Kuang, Jun Gao
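    A minimal sketch of the multi-task idea in the entry above: a breathing spectrogram treated as an image is passed through a shared convolutional trunk with separate heads for breathing rate and breathing depth. The PyTorch architecture shown is illustrative, not the patented network.

    ```python
    import torch
    import torch.nn as nn

    class MultiTaskBreathingCNN(nn.Module):
        """Illustrative multi-task CNN: shared trunk, separate rate and depth heads."""
        def __init__(self):
            super().__init__()
            self.trunk = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.rate_head = nn.Linear(16, 1)    # breaths per minute (regression)
            self.depth_head = nn.Linear(16, 3)   # e.g. shallow / normal / deep

        def forward(self, spectrogram):
            features = self.trunk(spectrogram)
            return self.rate_head(features), self.depth_head(features)

    # Example: a batch of one spectrogram "image" (1 channel, 64 x 64 bins).
    model = MultiTaskBreathingCNN()
    rate, depth_logits = model(torch.rand(1, 1, 64, 64))
    print(rate.shape, depth_logits.shape)   # torch.Size([1, 1]) torch.Size([1, 3])
    ```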