Patents by Inventor Shabnam Ghaffarzadegan

Shabnam Ghaffarzadegan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12271815
    Abstract: A method of data augmentation includes receiving audio stream data associated with at least one impulse event, receiving a label associated with the audio stream data, and detecting, using an onset detector, at least one peak of the at least one impulse event. The method also includes extracting at least one positive sample of the audio stream data associated with the at least one impulse event. The method also includes applying, to the at least one positive sample, the label associated with the audio stream data and extracting at least one negative sample of the audio stream data associated with the at least one impulse event. The method also includes augmenting training data based on the at least one positive sample and the at least one negative sample and training at least one machine-learning model using the augmented training data.
    Type: Grant
    Filed: July 13, 2022
    Date of Patent: April 8, 2025
    Assignee: Robert Bosch GmbH
    Inventors: Luca Bondi, Samarjit Das, Shabnam Ghaffarzadegan
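    The augmentation pipeline in this abstract can be sketched in a few lines: detect the peaks of impulse events, cut labeled positive windows around each peak, and cut negative windows from the quiet regions. The toy threshold-crossing onset detector and window size below are illustrative assumptions, not the patented implementation.

```python
# Sketch of onset-based positive/negative sample extraction for augmentation.
# The threshold detector and window length are assumptions for illustration.

def detect_onsets(signal, threshold):
    """Toy onset detector: indices where the signal first rises above threshold."""
    onsets = []
    for i in range(1, len(signal)):
        if signal[i] >= threshold and signal[i - 1] < threshold:
            onsets.append(i)
    return onsets

def extract_samples(signal, onsets, win=4):
    """Cut positive windows around each onset and negative windows elsewhere."""
    positives = [signal[max(0, p - win // 2): p + win // 2] for p in onsets]
    covered = {i for p in onsets for i in range(max(0, p - win // 2), p + win // 2)}
    negatives = []
    i = 0
    while i + win <= len(signal):
        if not any(j in covered for j in range(i, i + win)):
            negatives.append(signal[i:i + win])
            i += win
        else:
            i += 1
    return positives, negatives

stream = [0.0, 0.1, 0.9, 0.4, 0.1, 0.0, 0.0, 0.0, 0.8, 0.3, 0.1, 0.0, 0.0, 0.0]
onsets = detect_onsets(stream, threshold=0.5)
pos, neg = extract_samples(stream, onsets)
# Positive windows inherit the stream's label; negatives augment the background class.
```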
  • Publication number: 20250099067
    Abstract: Methods and systems for training an audio-based machine learning model to predict a health condition based on biological sounds emitted by a person. Audio data corresponding to biological sounds produced by the person is generated from a microphone. The audio data is segmented into a plurality of segments, each segment associated with a respective sound event. An audio-based machine learning model is executed on the plurality of segments. The audio-based machine learning model is configured to output, for each segment, a label of a medical condition and an associated confidence score. The model is trained via active learning, in which a subset of the plurality of segments is selected based on their confidence scores being below a threshold, and provided to a human for annotation.
    Type: Application
    Filed: September 26, 2023
    Publication date: March 27, 2025
    Inventors: Shabnam Ghaffarzadegan, Samarjit Das, Luca Bondi, Ho-Hsiang Wu, Joseph Aracri, Kelly J. Shields, Sirajum Munir
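    The active-learning selection rule in this abstract reduces to routing low-confidence segments to a human annotator. A minimal sketch, with made-up segment names, labels, and scores:

```python
# Toy illustration of confidence-thresholded active learning: segments whose
# model confidence falls below a threshold are queued for human annotation.
segments = [
    ("seg-1", "cough",  0.93),
    ("seg-2", "wheeze", 0.41),
    ("seg-3", "cough",  0.88),
    ("seg-4", "snore",  0.27),
]

THRESHOLD = 0.5

def select_for_annotation(predictions, threshold):
    """Return the low-confidence segment ids a human should label."""
    return [seg for seg, _, conf in predictions if conf < threshold]

to_annotate = select_for_annotation(segments, THRESHOLD)
# These segments go back to a human; their labels re-enter the training set.
```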
  • Publication number: 20250085708
    Abstract: A method of training a prototypical network for sound event detection includes receiving samples of an audio signal that include positive samples corresponding to sound events and negative samples that do not correspond to sound events, determining, based on the positive samples, respective positive prototypes of a plurality of classes of sound events, determining, based on the negative samples, respective negative prototypes for respective groups of the negative samples, each of the negative prototypes corresponding to a combination of a plurality of the negative samples, and generating, based on comparisons between a first sample and the respective positive prototypes and each of the negative prototypes, an output signal that indicates whether the first sample belongs to one of the plurality of classes of sound events.
    Type: Application
    Filed: September 8, 2023
    Publication date: March 13, 2025
    Inventors: Md Ehsanul Haque Nirjhar, Luca Bondi, Shabnam Ghaffarzadegan, Samarjit Das
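    The core mechanism in this abstract, prototypes as mean embeddings plus grouped negative prototypes, can be illustrated with tiny hand-written 2-D "embeddings" standing in for learned features; a query is assigned to whichever prototype is nearest.

```python
# Minimal prototypical-network sketch: class prototypes are means of positive
# support embeddings, negative prototypes are means over groups of negatives,
# and classification is nearest-prototype. Embedding values are illustrative.

def mean(vectors):
    n = len(vectors)
    return [sum(v[d] for v in vectors) / n for d in range(len(vectors[0]))]

def sqdist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Positive support embeddings per sound-event class (hypothetical classes).
positives = {"siren": [[0.9, 0.1], [1.1, 0.0]], "glass": [[0.0, 1.0], [0.1, 0.9]]}
# Negative samples split into groups; each group yields one negative prototype.
negative_groups = [[[5.0, 5.0], [5.2, 4.8]], [[-4.0, -4.0], [-4.2, -3.8]]]

prototypes = {cls: mean(vs) for cls, vs in positives.items()}
prototypes.update({f"negative-{i}": mean(g) for i, g in enumerate(negative_groups)})

def classify(query):
    """Nearest prototype wins; a negative prototype means 'no sound event'."""
    return min(prototypes, key=lambda c: sqdist(prototypes[c], query))
```

A query near a class prototype gets that class; a query near a negative prototype is rejected as background.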
  • Publication number: 20250021770
    Abstract: In some implementations, the device may receive a first and a second audio dataset. In addition, the device may generate a first, a second, a third, and a fourth audio sample. Moreover, the device may determine a level of similarity between the first and second audio samples. Also, the device may combine the first and second audio samples into an audio pair. Further, the device may train a machine learning model to map audio samples to a latent space visualization in view of time and the similarities between the first and second audio samples to yield a trained machine learning model. In addition, the device may map, via the trained machine learning model, the third and fourth audio samples in the latent space visualization, where placement of the third and fourth audio samples depends on the level of similarity between the third and fourth audio samples.
    Type: Application
    Filed: July 14, 2023
    Publication date: January 16, 2025
    Inventors: Alessandro Ilic Mezza, Luca Bondi, Shabnam Ghaffarzadegan, Pongtep Angkititrakul
  • Publication number: 20250017496
    Abstract: A system includes a first and second transceiver configured to emit and receive a wireless signal, a controller in communication with the first and second transceiver, the controller configured to send instructions to the transceivers to initiate a Fine Time Measurement (FTM) on a wireless channel to a body part associated with a user during a healthy state of the user, store an FTM measurement associated with the healthy state of the user, send instructions to the transceiver to initiate the FTM on the wireless channel during an infected state of the user, store an FTM measurement associated with the infected state of the user, compare the FTM measurement associated with the healthy state of the user with the FTM measurement associated with the infected state of the user, and in response to the comparison exceeding a threshold value, output a notification.
    Type: Application
    Filed: July 14, 2023
    Publication date: January 16, 2025
    Inventors: Sirajum Munir, Wenpeng Wang, Samarjit Das, Shabnam Ghaffarzadegan
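    The decision logic in this abstract is a baseline-versus-current comparison: store a Fine Time Measurement taken in the healthy state, compare later readings against it, and notify when the change exceeds a threshold. The values and threshold below are illustrative assumptions.

```python
# Sketch of the FTM comparison step: a stored healthy-state measurement is
# compared against a later reading; a notification fires past the threshold.

def should_notify(healthy_ftm_ns, current_ftm_ns, threshold_ns):
    """True when the FTM reading has drifted beyond the threshold."""
    return abs(current_ftm_ns - healthy_ftm_ns) > threshold_ns

baseline = 1250.0   # stored during the user's healthy state (ns, hypothetical)
reading = 1390.0    # measured during a suspected infected state
should_notify(baseline, reading, threshold_ns=100.0)
```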
  • Publication number: 20250005426
    Abstract: A method of training a machine learning (ML) model includes obtaining a dataset that includes first training data obtained using two or more ground truth sensing systems and second training data obtained using a prediction sensing system configured to implement the ML model, determining a loss function based on the first training data, the loss function defining a region of zero loss based on a minimum and a maximum of the first training data, calculating, using the ML model, a prediction output based on the second training data, calculating, using the loss function, a loss of the ML model based on the prediction output, and updating the ML model based on the calculated loss.
    Type: Application
    Filed: June 27, 2023
    Publication date: January 2, 2025
    Inventors: Luca Bondi, Shabnam Ghaffarzadegan, Samarjit Das
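    The loss described in this abstract can be sketched directly: readings from two or more ground-truth sensing systems define a [min, max] band, predictions inside the band incur zero loss, and predictions outside are penalized by their distance to the band's edge. The squared penalty is an assumption for illustration; the abstract does not fix one.

```python
# Sketch of a loss with a zero-loss region between the min and max of the
# ground-truth sensors; the quadratic penalty outside the band is assumed.

def band_loss(prediction, ground_truths):
    lo, hi = min(ground_truths), max(ground_truths)
    if lo <= prediction <= hi:
        return 0.0            # region of zero loss between the sensors
    edge = lo if prediction < lo else hi
    return (prediction - edge) ** 2

band_loss(21.0, [20.5, 21.5])  # inside the band, zero loss
band_loss(23.0, [20.5, 21.5])  # 1.5 above the band, penalized
```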
  • Patent number: 12020156
    Abstract: A method includes receiving audio stream data associated with a data capture environment, and receiving sensor data associated with the data capture environment. The method also includes identifying at least some events in the sensor data, and calculating at least one offset value for at least a portion of the audio stream data that corresponds to at least one event of the sensor data. The method also includes synchronizing at least a portion of the sensor data associated with the portion of the audio stream data that corresponds to the at least one event of the sensor data, and labeling at least the portion of the audio stream data that corresponds to the at least one event of the sensor data. The method also includes generating training data using at least some of the labeled portion of the audio stream data, and training a machine learning model using the training data.
    Type: Grant
    Filed: July 13, 2022
    Date of Patent: June 25, 2024
    Assignee: Robert Bosch GmbH
    Inventors: Luca Bondi, Shabnam Ghaffarzadegan, Samarjit Das
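    The offset-and-synchronize step in this abstract can be illustrated by matching event timestamps across the two streams and estimating a single clock offset; labels from the sensor stream are then transferred onto the shifted audio timeline. Using the median pairwise gap as the offset estimator is an illustrative choice, not the patented method.

```python
# Sketch of offset estimation between sensor events and the same events as
# they appear in the audio stream; the median gap gives a robust clock offset.
import statistics

sensor_event_times = [2.00, 5.10, 9.30]    # seconds, sensor clock (hypothetical)
audio_event_times = [2.85, 5.95, 10.15]    # the same events heard in the audio

def estimate_offset(sensor_times, audio_times):
    """Median gap between matched event pairs."""
    return statistics.median(a - s for s, a in zip(sensor_times, audio_times))

offset = estimate_offset(sensor_event_times, audio_event_times)
# Shift sensor timestamps onto the audio clock, then copy labels across.
synced = [round(t + offset, 2) for t in sensor_event_times]
```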
  • Publication number: 20240020525
    Abstract: A method includes receiving audio stream data associated with a data capture environment, and receiving sensor data associated with the data capture environment. The method also includes identifying at least some events in the sensor data, and calculating at least one offset value for at least a portion of the audio stream data that corresponds to at least one event of the sensor data. The method also includes synchronizing at least a portion of the sensor data associated with the portion of the audio stream data that corresponds to the at least one event of the sensor data, and labeling at least the portion of the audio stream data that corresponds to the at least one event of the sensor data. The method also includes generating training data using at least some of the labeled portion of the audio stream data, and training a machine learning model using the training data.
    Type: Application
    Filed: July 13, 2022
    Publication date: January 18, 2024
    Inventors: Luca Bondi, Shabnam Ghaffarzadegan, Samarjit Das
  • Publication number: 20240020526
    Abstract: A method of data augmentation includes receiving audio stream data associated with at least one impulse event, receiving a label associated with the audio stream data, and detecting, using an onset detector, at least one peak of the at least one impulse event. The method also includes extracting at least one positive sample of the audio stream data associated with the at least one impulse event. The method also includes applying, to the at least one positive sample, the label associated with the audio stream data and extracting at least one negative sample of the audio stream data associated with the at least one impulse event. The method also includes augmenting training data based on the at least one positive sample and the at least one negative sample and training at least one machine-learning model using the augmented training data.
    Type: Application
    Filed: July 13, 2022
    Publication date: January 18, 2024
    Inventors: Luca Bondi, Samarjit Das, Shabnam Ghaffarzadegan
  • Patent number: 11830239
    Abstract: A method for labeling audio data includes receiving video stream data and audio stream data that corresponds to at least a portion of the video stream data. The method also includes labeling, at least some objects of the video stream data. The method also includes calculating at least one offset value for at least a portion of the audio stream data that corresponds to at least one labeled object of the video stream data. The method also includes synchronizing at least a portion of the video stream data with the portion of the audio stream data. The method also includes labeling at least the portion of the audio stream data that corresponds to the at least one labeled object of the video stream data and generating training data using at least some of the labeled portion of the audio stream data.
    Type: Grant
    Filed: July 13, 2022
    Date of Patent: November 28, 2023
    Assignee: Robert Bosch GmbH
    Inventors: Shabnam Ghaffarzadegan, Samarjit Das, Luca Bondi
  • Patent number: 11810435
    Abstract: A method and system for detecting and localizing a target audio event in an audio clip is disclosed. The method and system utilize a hierarchical approach in which a dilated convolutional neural network detects the presence of the target audio event anywhere in an audio clip based on high-level audio features. If the target audio event is detected somewhere in the audio clip, the method and system further utilize a robust audio vector representation that encodes the inherent state of the audio as well as a learned relationship between the state of the audio and the particular target audio event that was detected in the audio clip. A bi-directional long short-term memory classifier is used to model long-term dependencies and determine the boundaries in time of the target audio event within the audio clip based on the audio vector representations.
    Type: Grant
    Filed: February 20, 2019
    Date of Patent: November 7, 2023
    Assignee: Robert Bosch GmbH
    Inventors: Asif Salekin, Zhe Feng, Shabnam Ghaffarzadegan
  • Patent number: 11631394
    Abstract: A method of detecting occupancy in an area includes obtaining, with a processor, an audio sample from an audio sensor and determining, with the processor, feature functional values of a set of selected feature functionals from the audio sample. The determining of the feature functional values includes extracting features in the set of selected feature functionals from the audio sample, and determining the feature functional values of the set of selected features from the extracted features. The method further includes determining, with the processor, occupancy in the area using a classifier based on the determined feature functional values.
    Type: Grant
    Filed: December 14, 2018
    Date of Patent: April 18, 2023
    Assignee: Robert Bosch GmbH
    Inventors: Zhe Feng, Attila Reiss, Shabnam Ghaffarzadegan, Mirko Ruhs, Robert Duerichen
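    The occupancy pipeline in this abstract, extract frame-level features, collapse them with feature functionals, classify the functional values, can be rendered as a toy example. The energy feature and the threshold rule below stand in for the patent's selected functionals and trained classifier.

```python
# Toy occupancy-from-audio pipeline: per-frame energy features, summarized by
# feature functionals (mean, max), fed to a placeholder threshold classifier.

def frame_energy(frames):
    """Per-frame feature: mean squared amplitude."""
    return [sum(x * x for x in f) / len(f) for f in frames]

def functionals(values):
    """Feature functionals summarizing the feature trajectory over the sample."""
    return {"mean": sum(values) / len(values), "max": max(values)}

def occupied(frames, mean_thresh=0.05):
    feats = functionals(frame_energy(frames))
    return feats["mean"] > mean_thresh   # placeholder for a trained classifier

quiet = [[0.01, -0.02, 0.01], [0.0, 0.01, -0.01]]
speech = [[0.4, -0.5, 0.3], [0.6, -0.2, 0.5]]
```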
  • Patent number: 11500470
    Abstract: A helmet includes a transceiver configured to receive vehicle data from one or more sensors located on a vehicle, an inertial measurement unit (IMU) configured to collect helmet motion data of the helmet associated with a rider of the vehicle, and a processor in communication with the transceiver and IMU, and programmed to receive, via the transceiver, vehicle data from the one or more sensors located on the vehicle, determine a gesture in response to the vehicle data from the one or more sensors located on the vehicle and the helmet motion data from the IMU, and output on a display of the helmet a status interface related to the vehicle, in response to the gesture.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: November 15, 2022
    Inventors: Benzun Pious Wisely Babu, Zeng Dai, Shabnam Ghaffarzadegan, Liu Ren
  • Patent number: 11295756
    Abstract: A system for ontology-aware sound classification. The system includes an electronic processor that is configured to create a first graph based on relationships between fine audio classification labels and create a second graph based on relationships between coarse audio classification labels. The electronic processor is also configured to receive an audio clip including one or more sounds, execute a first graph convolutional network with the first graph as input, and execute a second graph convolutional network with the second graph as input. Using the outputs of the first graph convolutional network and the second graph convolutional network, the electronic processor is configured to determine one or more coarse labels, one or more fine labels, or both to classify the one or more sounds in the audio clip.
    Type: Grant
    Filed: December 27, 2019
    Date of Patent: April 5, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Shabnam Ghaffarzadegan, Zhe Feng, Yiwei Sun
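    Underlying the two graphs in this abstract is an ontology in which fine labels roll up to coarse labels, so a fine prediction implies its coarse parent. A minimal sketch of that relationship, with a hypothetical taxonomy (the patent's actual label set is not given here):

```python
# Sketch of the fine-to-coarse label ontology behind the two graph inputs;
# the taxonomy below is a made-up example, not the patent's label set.
ONTOLOGY = {
    "jackhammer": "machinery", "drill": "machinery",
    "car-horn": "traffic", "engine-idling": "traffic",
}

def coarse_from_fine(fine_labels):
    """Derive the coarse labels implied by a set of fine labels."""
    return sorted({ONTOLOGY[f] for f in fine_labels})

coarse_from_fine(["drill", "car-horn"])
```

In the patented system this consistency is learned by the paired graph convolutional networks rather than enforced by a lookup table.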
  • Patent number: 11176422
    Abstract: A computer-program product storing instructions which, when executed by a computer, cause the computer to receive an input data, encode the input via an encoder, during a first sequence, obtain a first latent variable defining an attribute of the input data, generate a sequential reconstruction of the input data utilizing the decoder and at least the first latent variable, obtain a residual between the input data and the reconstruction utilizing a comparison of at least the first latent variable, and output a final reconstruction of the input data utilizing a plurality of residuals from a plurality of sequences.
    Type: Grant
    Filed: August 8, 2019
    Date of Patent: November 16, 2021
    Assignee: Robert Bosch GmbH
    Inventors: Shabnam Ghaffarzadegan, Nanxiang Li, Liu Ren
  • Publication number: 20210201889
    Abstract: A method of detecting occupancy in an area includes obtaining, with a processor, an audio sample from an audio sensor and determining, with the processor, feature functional values of a set of selected feature functionals from the audio sample. The determining of the feature functional values includes extracting features in the set of selected feature functionals from the audio sample, and determining the feature functional values of the set of selected features from the extracted features. The method further includes determining, with the processor, occupancy in the area using a classifier based on the determined feature functional values.
    Type: Application
    Filed: December 14, 2018
    Publication date: July 1, 2021
    Inventors: Zhe Feng, Attila Reiss, Shabnam Ghaffarzadegan, Mirko Ruhs, Robert Duerichen
  • Publication number: 20210201930
    Abstract: A system for ontology-aware sound classification. The system includes an electronic processor that is configured to create a first graph based on relationships between fine audio classification labels and create a second graph based on relationships between coarse audio classification labels. The electronic processor is also configured to receive an audio clip including one or more sounds, execute a first graph convolutional network with the first graph as input, and execute a second graph convolutional network with the second graph as input. Using the outputs of the first graph convolutional network and the second graph convolutional network, the electronic processor is configured to determine one or more coarse labels, one or more fine labels, or both to classify the one or more sounds in the audio clip.
    Type: Application
    Filed: December 27, 2019
    Publication date: July 1, 2021
    Inventors: Shabnam Ghaffarzadegan, Zhe Feng, Yiwei Sun
  • Publication number: 20210195981
    Abstract: A helmet includes one or more sensors located in the helmet and configured to obtain cognitive-load data indicating a cognitive load of a rider of a vehicle, a wireless transceiver in communication with the vehicle, and a controller in communication with the one or more sensors and the wireless transceiver, wherein the controller is configured to determine the cognitive load of the rider utilizing at least the cognitive-load data and send a wireless command to the vehicle, utilizing the wireless transceiver, to execute commands to adjust a driver assistance function when the cognitive load is above a threshold.
    Type: Application
    Filed: December 27, 2019
    Publication date: July 1, 2021
    Inventors: Shabnam Ghaffarzadegan, Benzun Pious Wisely Babu, Zeng Dai, Liu Ren
  • Publication number: 20210201854
    Abstract: A smart helmet includes a heads-up display (HUD) configured to output graphical images within a virtual field of view on a visor of the smart helmet. A transceiver is configured to communicate with a mobile device of a user. A processor is programmed to receive, via the transceiver, calibration data from the mobile device that relates to one or more captured images from a camera on the mobile device, and alter the virtual field of view of the HUD based on the calibration data. This allows a user to calibrate his/her HUD of the smart helmet based on images received from the user's mobile device.
    Type: Application
    Filed: December 27, 2019
    Publication date: July 1, 2021
    Inventors: Benzun Pious Wisely Babu, Zeng Dai, Shabnam Ghaffarzadegan, Liu Ren
  • Publication number: 20210191518
    Abstract: A helmet includes a transceiver configured to receive vehicle data from one or more sensors located on a vehicle, an inertial measurement unit (IMU) configured to collect helmet motion data of the helmet associated with a rider of the vehicle, and a processor in communication with the transceiver and IMU, and programmed to receive, via the transceiver, vehicle data from the one or more sensors located on the vehicle, determine a gesture in response to the vehicle data from the one or more sensors located on the vehicle and the helmet motion data from the IMU, and output on a display of the helmet a status interface related to the vehicle, in response to the gesture.
    Type: Application
    Filed: December 23, 2019
    Publication date: June 24, 2021
    Inventors: Benzun Pious Wisely Babu, Zeng Dai, Shabnam Ghaffarzadegan, Liu Ren