Patents by Inventor Shabnam Ghaffarzadegan

Shabnam Ghaffarzadegan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240020525
    Abstract: A method includes receiving audio stream data associated with a data capture environment, and receiving sensor data associated with the data capture environment. The method also includes identifying at least some events in the sensor data, and calculating at least one offset value for at least a portion of the audio stream data that corresponds to at least one event of the sensor data. The method also includes synchronizing at least a portion of the sensor data associated with the portion of the audio stream data that corresponds to the at least one event of the sensor data, and labeling at least the portion of the audio stream data that corresponds to the at least one event of the sensor data. The method also includes generating training data using at least some of the labeled portion of the audio stream data, and training a machine learning model using the training data.
    Type: Application
    Filed: July 13, 2022
    Publication date: January 18, 2024
    Inventors: Luca Bondi, Shabnam Ghaffarzadegan, Samarjit Das
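
The offset-and-label step in the abstract above can be pictured with a short sketch. This is only an illustration under assumed inputs (sensor events as timestamps, a known audio sample rate, a single estimated clock offset); none of the names below come from the patent.

```python
import numpy as np

def label_audio_from_sensor_events(audio, sample_rate, event_times, clock_offset_s, window_s=1.0):
    """Label audio windows that coincide with sensor events.

    audio          : 1-D numpy array of audio samples
    sample_rate    : samples per second of the audio stream
    event_times    : event timestamps (seconds) on the sensor clock
    clock_offset_s : estimated offset between the sensor clock and the audio clock
    window_s       : length of the labeled window around each event
    """
    labeled_segments = []
    half = int(window_s * sample_rate / 2)
    for t in event_times:
        # Shift the sensor timestamp onto the audio timeline.
        center = int((t + clock_offset_s) * sample_rate)
        start, stop = max(0, center - half), min(len(audio), center + half)
        labeled_segments.append((start, stop, "event"))
    return labeled_segments

# Example: two sensor events in a 10-second, 16 kHz recording with 250 ms clock skew.
audio = np.random.randn(16000 * 10)
print(label_audio_from_sensor_events(audio, 16000, [2.0, 7.5], clock_offset_s=0.25))
```
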
  • Publication number: 20240020526
    Abstract: A method for data augmentation includes receiving audio stream data associated with at least one impulse event, receiving a label associated with the audio stream data, and detecting, using an onset detector, at least one peak of the at least one impulse event. The method also includes extracting at least one positive sample of the audio stream data associated with the at least one impulse event. The method also includes applying, to the at least one positive sample, the label associated with the audio stream data and extracting at least one negative sample of the audio stream data associated with the at least one impulse event. The method also includes augmenting training data based on the at least one positive sample and the at least one negative sample and training at least one machine-learning model using the augmented training data.
    Type: Application
    Filed: July 13, 2022
    Publication date: January 18, 2024
    Inventors: Luca Bondi, Samarjit Das, Shabnam Ghaffarzadegan
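
A rough sketch of the onset-driven positive/negative extraction described above, assuming a simple energy-envelope peak picker in place of whatever onset detector the patent actually uses; window lengths and thresholds are made up for illustration.

```python
import numpy as np
from scipy.signal import find_peaks

def extract_impulse_samples(audio, sample_rate, label, win_s=0.5):
    """Split a labeled clip into onset-centered positives and off-onset negatives."""
    # Short-time energy envelope used as a stand-in onset function.
    frame = int(0.01 * sample_rate)
    energy = np.array([np.sum(audio[i:i + frame] ** 2)
                       for i in range(0, len(audio) - frame, frame)])
    peaks, _ = find_peaks(energy, height=energy.mean() + 2 * energy.std())
    onset_samples = peaks * frame

    half = int(win_s * sample_rate / 2)
    positives, negatives = [], []
    for p in onset_samples:
        # Positive sample: window centered on the detected impulse peak, keeps the clip label.
        positives.append((audio[max(0, p - half):p + half], label))
    for start in range(0, len(audio) - 2 * half, 2 * half):
        # Negative sample: window far from every detected onset.
        if all(abs(start + half - p) > 2 * half for p in onset_samples):
            negatives.append((audio[start:start + 2 * half], "background"))
    return positives, negatives

# Example: mine positives and negatives from a 4-second clip labeled "glass_break".
pos, neg = extract_impulse_samples(np.random.randn(16000 * 4), 16000, "glass_break")
```
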
  • Patent number: 11830239
    Abstract: A method for labeling audio data includes receiving video stream data and audio stream data that corresponds to at least a portion of the video stream data. The method also includes labeling at least some objects of the video stream data. The method also includes calculating at least one offset value for at least a portion of the audio stream data that corresponds to at least one labeled object of the video stream data. The method also includes synchronizing at least a portion of the video stream data with the portion of the audio stream data. The method also includes labeling at least the portion of the audio stream data that corresponds to the at least one labeled object of the video stream data and generating training data using at least some of the labeled portion of the audio stream data.
    Type: Grant
    Filed: July 13, 2022
    Date of Patent: November 28, 2023
    Assignee: Robert Bosch GmbH
    Inventors: Shabnam Ghaffarzadegan, Samarjit Das, Luca Bondi
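
To make the video-to-audio label transfer concrete, here is a minimal sketch that maps per-frame video labels onto audio sample ranges given a frame rate and an estimated clock offset. The function and its arguments are hypothetical, not taken from the patent.

```python
def audio_labels_from_video(frame_labels, fps, sample_rate, offset_s):
    """Project per-frame video labels onto the audio timeline.

    frame_labels : list of (frame_index, label) for frames containing a labeled object
    fps          : video frame rate
    sample_rate  : audio sample rate
    offset_s     : estimated audio/video clock offset in seconds
    Returns a list of (start_sample, stop_sample, label) audio annotations.
    """
    samples_per_frame = sample_rate / fps
    spans = []
    for frame_index, label in frame_labels:
        start = int((frame_index / fps + offset_s) * sample_rate)
        spans.append((start, start + int(samples_per_frame), label))
    return spans

# Example: a "door_slam" labeled on frames 120-121 of 30 fps video, 16 kHz audio.
print(audio_labels_from_video([(120, "door_slam"), (121, "door_slam")], 30, 16000, offset_s=-0.04))
```
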
  • Patent number: 11810435
    Abstract: A method and system for detecting and localizing a target audio event in an audio clip is disclosed. The method and system utilize a hierarchical approach in which a dilated convolutional neural network detects the presence of the target audio event anywhere in an audio clip based on high-level audio features. If the target audio event is detected somewhere in the audio clip, the method and system further utilize a robust audio vector representation that encodes the inherent state of the audio as well as a learned relationship between the state of the audio and the particular target audio event that was detected in the audio clip. A bi-directional long short-term memory classifier is used to model long-term dependencies and determine the boundaries in time of the target audio event within the audio clip based on the audio vector representations.
    Type: Grant
    Filed: February 20, 2019
    Date of Patent: November 7, 2023
    Assignee: Robert Bosch GmbH
    Inventors: Asif Salekin, Zhe Feng, Shabnam Ghaffarzadegan
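
A loose PyTorch sketch of the two-stage idea in the entry above: a dilated convolutional network scores the whole clip, and only if it fires does a bi-directional LSTM produce per-frame probabilities from which event boundaries could be read off. Feature dimensions, layer sizes, and the 0.5 threshold are assumptions.

```python
import torch
import torch.nn as nn

class ClipDetector(nn.Module):
    """Stage 1: dilated 1-D convolutions over frame features, one clip-level score."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(feat_dim, 64, kernel_size=3, dilation=1, padding=1), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, dilation=2, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, dilation=4, padding=4), nn.ReLU(),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, x):              # x: (batch, feat_dim, frames)
        h = self.conv(x).mean(dim=2)   # global average pooling over time
        return torch.sigmoid(self.head(h))

class BoundaryLocalizer(nn.Module):
    """Stage 2: BiLSTM over per-frame audio vectors, per-frame event probability."""
    def __init__(self, feat_dim=64, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):              # x: (batch, frames, feat_dim)
        h, _ = self.lstm(x)
        return torch.sigmoid(self.head(h)).squeeze(-1)

feats = torch.randn(1, 64, 100)        # one clip, 64-dim features, 100 frames
if ClipDetector()(feats)[0] > 0.5:     # only localize when the clip-level detector fires
    frame_probs = BoundaryLocalizer()(feats.transpose(1, 2))
```
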
  • Patent number: 11631394
    Abstract: A method of detecting occupancy in an area includes obtaining, with a processor, an audio sample from an audio sensor and determining, with the processor, feature functional values of a set of selected feature functionals from the audio sample. The determining of the feature functional values includes extracting features in the set of selected feature functionals from the audio sample, and determining the feature functional values of the set of selected features from the extracted features. The method further includes determining, with the processor, occupancy in the area using a classifier based on the determined feature functional values.
    Type: Grant
    Filed: December 14, 2018
    Date of Patent: April 18, 2023
    Assignee: Robert Bosch GmbH
    Inventors: Zhe Feng, Attila Reiss, Shabnam Ghaffarzadegan, Mirko Ruhs, Robert Duerichen
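
A minimal sketch of the feature-functional pipeline described above, assuming two illustrative frame-level features (energy and zero-crossing rate), mean and standard-deviation functionals, and an off-the-shelf scikit-learn classifier; the patent does not specify these particular choices.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def feature_functionals(audio, sample_rate, frame_s=0.025):
    """Frame-level features (energy, zero-crossing rate) summarized by functionals."""
    frame = int(frame_s * sample_rate)
    frames = [audio[i:i + frame] for i in range(0, len(audio) - frame, frame)]
    energy = np.array([np.mean(f ** 2) for f in frames])
    zcr = np.array([np.mean(np.abs(np.diff(np.sign(f)))) / 2 for f in frames])
    # Functionals: mean and standard deviation of each frame-level feature.
    return np.array([energy.mean(), energy.std(), zcr.mean(), zcr.std()])

# Train on labeled samples (1 = occupied, 0 = empty), then classify a new sample.
rng = np.random.default_rng(0)
X = np.vstack([feature_functionals(rng.normal(scale=s, size=16000), 16000)
               for s in (0.1, 0.1, 1.0, 1.0)])
y = [0, 0, 1, 1]
clf = RandomForestClassifier(n_estimators=50).fit(X, y)
print(clf.predict([feature_functionals(rng.normal(scale=0.9, size=16000), 16000)]))
```
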
  • Patent number: 11500470
    Abstract: A helmet includes a transceiver configured to receive vehicle data from one or more sensors located on a vehicle, an inertial measurement unit (IMU) configured to collect helmet motion data of the helmet associated with a rider of the vehicle, and a processor in communication with the transceiver and IMU, and programmed to receive, via the transceiver, vehicle data from the one or more sensors located on the vehicle, determine a gesture in response to the vehicle data from the one or more sensors located on the vehicle and the helmet motion data from the IMU, and output on a display of the helmet a status interface related to the vehicle, in response to the gesture.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: November 15, 2022
    Inventors: Benzun Pious Wisely Babu, Zeng Dai, Shabnam Ghaffarzadegan, Liu Ren
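
The gesture step could look roughly like the sketch below, which assumes a head-nod gesture detected from the helmet IMU's pitch-rate signal and gated on vehicle speed; the thresholds, the specific gesture, and the returned status fields are all illustrative.

```python
import numpy as np

def detect_nod(gyro_pitch_rate, sample_rate, rate_thresh=1.5, min_duration_s=0.15):
    """Flag a head-nod gesture when the pitch angular rate stays above a threshold.

    gyro_pitch_rate : pitch-axis angular rate from the helmet IMU, rad/s
    rate_thresh     : angular-rate threshold for the nod
    min_duration_s  : minimum time the threshold must be exceeded
    """
    above = np.abs(gyro_pitch_rate) > rate_thresh
    min_samples = int(min_duration_s * sample_rate)
    run = 0
    for flag in above:
        run = run + 1 if flag else 0
        if run >= min_samples:
            return True
    return False

def handle_gesture(vehicle_data, gyro_pitch_rate, sample_rate=100):
    # Combine vehicle state with the IMU gesture before updating the helmet display.
    if vehicle_data.get("speed_kmh", 0) < 5 and detect_nod(gyro_pitch_rate, sample_rate):
        return {"show_status_interface": True, "fuel": vehicle_data.get("fuel_pct")}
    return {"show_status_interface": False}
```
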
  • Patent number: 11295756
    Abstract: A system for ontology-aware sound classification. The system includes an electronic processor that is configured to create a first graph based on relationships between fine audio classification labels and create a second graph based on relationships between coarse audio classification labels. The electronic processor is also configured to receive an audio clip including one or more sounds, execute a first graph convolutional network with the first graph as input, and execute a second graph convolutional network with the second graph as input. Using the outputs of the first graph convolutional network and the second graph convolutional network, the electronic processor is configured to determine one or more coarse labels, one or more fine labels, or both to classify the one or more sounds in the audio clip.
    Type: Grant
    Filed: December 27, 2019
    Date of Patent: April 5, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Shabnam Ghaffarzadegan, Zhe Feng, Yiwei Sun
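
A compact PyTorch sketch of the ontology-aware setup: one graph-convolution branch over the coarse-label graph and one over the fine-label graph, with an audio embedding scored against both sets of refined label embeddings. The single-layer GCN, the embedding sizes, and the identity adjacency matrices in the example are assumptions.

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """One graph-convolution layer: normalized adjacency times node features times weights."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, adj, x):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.linear((adj / deg) @ x))

class OntologyAwareClassifier(nn.Module):
    """Audio embedding scored against GCN-refined coarse and fine label embeddings."""
    def __init__(self, audio_dim, label_dim, n_coarse, n_fine):
        super().__init__()
        self.coarse_gcn = GraphConv(label_dim, audio_dim)
        self.fine_gcn = GraphConv(label_dim, audio_dim)
        self.coarse_emb = nn.Parameter(torch.randn(n_coarse, label_dim))
        self.fine_emb = nn.Parameter(torch.randn(n_fine, label_dim))

    def forward(self, audio_emb, coarse_adj, fine_adj):
        coarse = self.coarse_gcn(coarse_adj, self.coarse_emb)   # (n_coarse, audio_dim)
        fine = self.fine_gcn(fine_adj, self.fine_emb)           # (n_fine, audio_dim)
        return audio_emb @ coarse.T, audio_emb @ fine.T         # per-label scores

model = OntologyAwareClassifier(audio_dim=128, label_dim=32, n_coarse=3, n_fine=10)
coarse_scores, fine_scores = model(torch.randn(1, 128), torch.eye(3), torch.eye(10))
```
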
  • Patent number: 11176422
    Abstract: A computer-program product storing instructions which, when executed by a computer, cause the computer to receive an input data, encode the input via an encoder, during a first sequence, obtain a first latent variable defining an attribute of the input data, generate a sequential reconstruction of the input data utilizing the decoder and at least the first latent variable, obtain a residual between the input data and the reconstruction utilizing a comparison of at least the first latent variable, and output a final reconstruction of the input data utilizing a plurality of residuals from a plurality of sequences.
    Type: Grant
    Filed: August 8, 2019
    Date of Patent: November 16, 2021
    Assignee: Robert Bosch GmbH
    Inventors: Shabnam Ghaffarzadegan, Nanxiang Li, Liu Ren
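
A small PyTorch sketch of the sequential residual idea in the abstract above: each pass encodes the remaining residual into a latent variable, decodes a partial reconstruction, and the final output is the accumulated sum over passes. Layer shapes and the number of passes are illustrative.

```python
import torch
import torch.nn as nn

class ResidualSequenceAutoencoder(nn.Module):
    """Reconstruct the input over several passes, each pass encoding the current residual."""
    def __init__(self, in_dim=32, latent_dim=8, steps=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, latent_dim), nn.ReLU())
        self.decoder = nn.Linear(latent_dim, in_dim)
        self.steps = steps

    def forward(self, x):
        residual = x
        reconstruction = torch.zeros_like(x)
        for _ in range(self.steps):
            z = self.encoder(residual)                 # latent variable for this pass
            partial = self.decoder(z)                  # partial reconstruction from z
            reconstruction = reconstruction + partial  # accumulate over passes
            residual = x - reconstruction              # what is still unexplained
        return reconstruction

model = ResidualSequenceAutoencoder()
x = torch.randn(4, 32)
loss = torch.mean((model(x) - x) ** 2)   # train by minimizing the final residual
```
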
  • Publication number: 20210201854
    Abstract: A smart helmet includes a heads-up display (HUD) configured to output graphical images within a virtual field of view on a visor of the smart helmet. A transceiver is configured to communicate with a mobile device of a user. A processor is programmed to receive, via the transceiver, calibration data from the mobile device that relates to one or more captured images from a camera on the mobile device, and alter the virtual field of view of the HUD based on the calibration data. This allows a user to calibrate his/her HUD of the smart helmet based on images received from the user's mobile device.
    Type: Application
    Filed: December 27, 2019
    Publication date: July 1, 2021
    Inventors: Benzun Pious Wisely BABU, Zeng DAI, Shabnam GHAFFARZADEGAN, Liu REN
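
The abstract does not spell out the calibration math, so the sketch below is just one plausible reading: compare where HUD markers appear in the phone's captured image with where they should appear, and derive a shift and scale for the virtual field of view. Everything here is hypothetical.

```python
def fov_adjustment(detected_corners, expected_corners):
    """Compute a shift and scale for the virtual field of view from marker corners.

    detected_corners : (x, y) pixel positions of HUD markers seen in the phone image
    expected_corners : where those markers should appear for a well-aligned HUD
    """
    n = len(detected_corners)
    dx = sum(e[0] - d[0] for d, e in zip(detected_corners, expected_corners)) / n
    dy = sum(e[1] - d[1] for d, e in zip(detected_corners, expected_corners)) / n
    det_w = max(d[0] for d in detected_corners) - min(d[0] for d in detected_corners)
    exp_w = max(e[0] for e in expected_corners) - min(e[0] for e in expected_corners)
    return {"shift_px": (dx, dy), "scale": exp_w / det_w if det_w else 1.0}

# Example: two markers detected slightly left and low of their expected positions.
print(fov_adjustment([(90, 110), (290, 110)], [(100, 100), (300, 100)]))
```
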
  • Publication number: 20210201889
    Abstract: A method of detecting occupancy in an area includes obtaining, with a processor, an audio sample from an audio sensor and determining, with the processor, feature functional values of a set of selected feature functionals from the audio sample. The determining of the feature functional values includes extracting features in the set of selected feature functionals from the audio sample, and determining the feature functional values of the set of selected features from the extracted features. The method further includes determining, with the processor, occupancy in the area using a classifier based on the determined feature functional values.
    Type: Application
    Filed: December 14, 2018
    Publication date: July 1, 2021
    Inventors: Zhe Feng, Attila Reiss, Shabnam Ghaffarzadegan, Mirko Ruhs, Robert Duerichen
  • Publication number: 20210195981
    Abstract: A helmet includes one or more sensors located in the helmet and configured to obtain cognitive-load data indicating a cognitive load of a rider of a vehicle, a wireless transceiver in communication with the vehicle, a controller in communication with the one or more sensors and the wireless transceiver, wherein the controller is configured to determine a cognitive load of the rider utilizing at least the cognitive-load data and send a wireless command to the vehicle utilizing the wireless transceiver to execute commands to adjust a driver assistance function when the cognitive load is above a threshold.
    Type: Application
    Filed: December 27, 2019
    Publication date: July 1, 2021
    Inventors: Shabnam GHAFFARZADEGAN, Benzun Pious Wisely BABU, Zeng DAI, Liu REN
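
A toy sketch of the threshold logic described above, assuming two made-up cognitive-load cues (heart rate and eye-closure ratio) fused into a single score between 0 and 1; the real sensors, weights, and threshold are not specified in the abstract.

```python
def cognitive_load_update(heart_rate_bpm, eye_closure_ratio, threshold=0.7):
    """Fuse two illustrative helmet-sensor cues into a cognitive-load score and
    decide whether to ask the vehicle to raise its driver-assistance level."""
    hr_component = min(max((heart_rate_bpm - 60) / 60, 0.0), 1.0)
    load = 0.5 * hr_component + 0.5 * min(eye_closure_ratio, 1.0)
    if load > threshold:
        return {"command": "increase_assistance", "load": round(load, 2)}
    return {"command": None, "load": round(load, 2)}

# Example: an elevated heart rate and frequent eye closure trigger the command.
print(cognitive_load_update(heart_rate_bpm=140, eye_closure_ratio=0.6))
```
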
  • Publication number: 20210201930
    Abstract: A system for ontology-aware sound classification. The system includes an electronic processor that is configured to create a first graph based on relationships between fine audio classification labels and create a second graph based on relationships between coarse audio classification labels. The electronic processor is also configured to receive an audio clip including one or more sounds, execute a first graph convolutional network with the first graph as input, and execute a second graph convolutional network with the second graph as input. Using the outputs of the first graph convolutional network and the second graph convolutional network, the electronic processor is configured to determine one or more coarse labels, one or more fine labels, or both to classify the one or more sounds in the audio clip.
    Type: Application
    Filed: December 27, 2019
    Publication date: July 1, 2021
    Inventors: Shabnam Ghaffarzadegan, Zhe Feng, Yiwei Sun
  • Publication number: 20210191518
    Abstract: A helmet includes a transceiver configured to receive vehicle data from one or more sensors located on a vehicle, an inertial measurement unit (IMU) configured to collect helmet motion data of the helmet associated with a rider of the vehicle, and a processor in communication with the transceiver and IMU, and programmed to receive, via the transceiver, vehicle data from the one or more sensors located on the vehicle, determine a gesture in response to the vehicle data from the one or more sensors located on the vehicle and the helmet motion data from the IMU, and output on a display of the helmet a status interface related to the vehicle, in response to the gesture.
    Type: Application
    Filed: December 23, 2019
    Publication date: June 24, 2021
    Inventors: Benzun Pious Wisely BABU, Zeng DAI, Shabnam GHAFFARZADEGAN, Liu REN
  • Patent number: 10959479
    Abstract: A system for providing a rider of a saddle-ride vehicle, such as a motorcycle, with information about helmet usage is provided. A camera is mounted to the saddle-ride vehicle, faces the rider, and monitors the rider of the vehicle to collect rider image data. A GPS system is configured to detect a location of the saddle-ride vehicle. A controller is in communication with the camera and the GPS system. The controller is configured to receive an image of the rider from the camera, determine if the rider is wearing a helmet based on the rider image data, and output a helmet-worn indicator to the rider, in which the helmet-worn indicator varies based on the determined location of the saddle-ride vehicle.
    Type: Grant
    Filed: December 27, 2019
    Date of Patent: March 30, 2021
    Assignee: Robert Bosch GmbH
    Inventors: Benzun Pious Wisely Babu, Zeng Dai, Shabnam Ghaffarzadegan, Liu Ren
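
A minimal sketch of the location-dependent indicator logic, assuming the helmet-detection result is already available and that helmet-required jurisdictions are approximated by bounding boxes; a real system would use a proper geographic lookup.

```python
def helmet_indicator(helmet_detected, latitude, longitude, helmet_required_zones):
    """Pick the indicator to show the rider based on helmet use and local rules.

    helmet_required_zones : list of (lat_min, lat_max, lon_min, lon_max) boxes where
                            helmets are mandatory (an illustrative stand-in for a
                            real jurisdiction lookup).
    """
    in_required_zone = any(lat_min <= latitude <= lat_max and lon_min <= longitude <= lon_max
                           for lat_min, lat_max, lon_min, lon_max in helmet_required_zones)
    if helmet_detected:
        return "helmet_ok"
    return "helmet_required_warning" if in_required_zone else "helmet_recommended"

# Example: no helmet detected inside a zone where helmets are mandatory.
print(helmet_indicator(False, 37.77, -122.42, [(32.0, 42.0, -125.0, -114.0)]))
```
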
  • Patent number: 10937444
    Abstract: A system for end-to-end automated scoring is disclosed. The system includes a word embedding layer for converting a plurality of ASR outputs into input tensors; a neural network lexical model encoder receiving the input tensors; a neural network acoustic model encoder implementing AM posterior probability, word duration, mean value of pitch and mean value of intensity based on a plurality of cues; and a linear regression module, for receiving concatenated encoded features from the neural network lexical model encoder and the neural network acoustic model encoder.
    Type: Grant
    Filed: November 20, 2018
    Date of Patent: March 2, 2021
    Assignee: Educational Testing Service
    Inventors: David Suendermann-Oeft, Lei Chen, Jidong Tao, Shabnam Ghaffarzadegan, Yao Qian
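
A rough PyTorch sketch of the architecture described above: a word-embedding layer feeding a lexical encoder, an acoustic encoder over per-word cues (AM posterior, duration, mean pitch, mean intensity), and a linear regression head over the concatenated encodings. The LSTM encoders and all dimensions are assumptions.

```python
import torch
import torch.nn as nn

class AutomatedScorer(nn.Module):
    """Concatenate a lexical encoding of ASR word embeddings with an acoustic
    encoding of per-word cues, then regress a single score."""
    def __init__(self, vocab_size=5000, emb_dim=64, acoustic_dim=4, hidden=32):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)           # word-embedding layer
        self.lexical_enc = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.acoustic_enc = nn.LSTM(acoustic_dim, hidden, batch_first=True)
        self.regressor = nn.Linear(2 * hidden, 1)                    # linear regression head

    def forward(self, word_ids, acoustic_cues):
        _, (lex, _) = self.lexical_enc(self.embedding(word_ids))
        _, (aco, _) = self.acoustic_enc(acoustic_cues)
        return self.regressor(torch.cat([lex[-1], aco[-1]], dim=-1))

# acoustic_cues per word: AM posterior, duration, mean pitch, mean intensity.
scorer = AutomatedScorer()
score = scorer(torch.randint(0, 5000, (1, 20)), torch.randn(1, 20, 4))
```
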
  • Publication number: 20210042583
    Abstract: A computer-program product storing instructions which, when executed by a computer, cause the computer to receive an input data, encode the input via an encoder, during a first sequence, obtain a first latent variable defining an attribute of the input data, generate a sequential reconstruction of the input data utilizing the decoder and at least the first latent variable, obtain a residual between the input data and the reconstruction utilizing a comparison of at least the first latent variable, and output a final reconstruction of the input data utilizing a plurality of residuals from a plurality of sequences.
    Type: Application
    Filed: August 8, 2019
    Publication date: February 11, 2021
    Inventors: Shabnam GHAFFARZADEGAN, Nanxiang LI, Liu REN
  • Publication number: 20210005067
    Abstract: A method and system for detecting and localizing a target audio event in an audio clip is disclosed. The method and system utilize a hierarchical approach in which a dilated convolutional neural network detects the presence of the target audio event anywhere in an audio clip based on high-level audio features. If the target audio event is detected somewhere in the audio clip, the method and system further utilize a robust audio vector representation that encodes the inherent state of the audio as well as a learned relationship between the state of the audio and the particular target audio event that was detected in the audio clip. A bi-directional long short-term memory classifier is used to model long-term dependencies and determine the boundaries in time of the target audio event within the audio clip based on the audio vector representations.
    Type: Application
    Filed: February 20, 2019
    Publication date: January 7, 2021
    Inventors: Asif Salekin, Zhe Feng, Shabnam Ghaffarzadegan
  • Patent number: 10636437
    Abstract: A system for monitoring dietary activity of a user includes a wearable device having at least one audio input unit configured to record an audio sample corresponding to audio from a user's neck. The system further includes a processor configured to execute programmed instructions stored in a memory to obtain an audio sample from the audio input unit of a wearable device, determine segmental feature values of a set of selected features from the audio sample by extracting short-term features in the set of selected features from the audio sample and determining the segmental feature values of the set of selected features from the extracted short-term features. The processor is further configured to, using a classifier, classify a dietary activity based on the determined segmental feature values of the audio sample and generate an output corresponding to the classified dietary activity.
    Type: Grant
    Filed: February 25, 2019
    Date of Patent: April 28, 2020
    Assignee: Robert Bosch GmbH
    Inventors: Taufiq Hasan, Shabnam Ghaffarzadegan, Zhe Feng
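
A small sketch of the segmental-feature pipeline in the entry above, assuming two illustrative short-term features (energy and spectral centroid) summarized by mean and maximum, and a scikit-learn SVM as the classifier; the patent does not commit to these particular features or classifier.

```python
import numpy as np
from sklearn.svm import SVC

def segmental_features(audio, sample_rate, frame_s=0.02):
    """Short-term frame features summarized over a longer segment (here: energy
    and spectral centroid, each reduced to its mean and maximum)."""
    frame = int(frame_s * sample_rate)
    freqs = np.fft.rfftfreq(frame, d=1.0 / sample_rate)
    energies, centroids = [], []
    for i in range(0, len(audio) - frame, frame):
        f = audio[i:i + frame]
        mag = np.abs(np.fft.rfft(f))
        energies.append(np.mean(f ** 2))
        centroids.append(np.sum(freqs * mag) / (np.sum(mag) + 1e-9))
    e, c = np.array(energies), np.array(centroids)
    return np.array([e.mean(), e.max(), c.mean(), c.max()])

# Train on labeled neck-audio segments, then classify a new one.
rng = np.random.default_rng(1)
X = [segmental_features(rng.normal(scale=s, size=8000), 8000) for s in (0.2, 0.2, 1.0, 1.0)]
y = ["swallowing", "swallowing", "chewing", "chewing"]
clf = SVC().fit(X, y)
print(clf.predict([segmental_features(rng.normal(scale=0.9, size=8000), 8000)]))
```
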
  • Publication number: 20190365342
    Abstract: A method and system for detecting abnormal heart sounds in a phonocardiogram of a person are disclosed. At least one segmented cardiac cycle of the phonocardiogram is received at a processor. The processor decomposes the segmented cardiac cycle into a plurality of frequency sub-bands using a first convolutional neural network having, in particular a plurality of time-convolution layers (tConv). The kernel weights of each time-convolution layer are learned in a training process such that the time-convolution layers identify pathologically significant frequency sub-bands. The processor determines a probability that the segmented cardiac cycle contains an abnormal heart sound based on the plurality of frequency sub-band segments using at least one further neural network. In some embodiments, the time-convolution layers are configured to have a linear phase response (LP-tConv) or a zero phase response (ZP-tConv).
    Type: Application
    Filed: July 12, 2018
    Publication date: December 5, 2019
    Inventors: Shabnam Ghaffarzadegan, Zhe Feng, Ahmed Imtiaz Humayun, Taufiq Hasan
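
A brief PyTorch sketch of the front-end idea above: a learnable time-convolution (tConv) layer acting as a filter bank that splits the segmented cardiac cycle into sub-band signals, followed by a small classifier that outputs an abnormality probability. The kernel size, sub-band count, and backbone are assumptions, and the linear-phase and zero-phase constraints mentioned in the abstract are not modeled here.

```python
import torch
import torch.nn as nn

class HeartSoundNet(nn.Module):
    """Learnable time-convolution (tConv) front end over the raw cardiac-cycle
    waveform, followed by a small CNN classifier."""
    def __init__(self, n_subbands=4, kernel=61):
        super().__init__()
        # Each output channel behaves like a learned FIR filter / frequency sub-band.
        self.tconv = nn.Conv1d(1, n_subbands, kernel_size=kernel, padding=kernel // 2)
        self.backbone = nn.Sequential(
            nn.Conv1d(n_subbands, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(16, 1)

    def forward(self, x):                      # x: (batch, 1, samples), one segmented cardiac cycle
        subbands = self.tconv(x)               # (batch, n_subbands, samples)
        h = self.backbone(subbands).squeeze(-1)
        return torch.sigmoid(self.head(h))     # probability of an abnormal heart sound

model = HeartSoundNet()
prob_abnormal = model(torch.randn(2, 1, 2000))   # two one-second cycles at 2 kHz
```
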
  • Publication number: 20190272845
    Abstract: A system for monitoring dietary activity of a user includes a wearable device having at least one audio input unit configured to record an audio sample corresponding to audio from a user's neck. The system further includes a processor configured to execute programmed instructions stored in a memory to obtain an audio sample from the audio input unit of a wearable device, determine segmental feature values of a set of selected features from the audio sample by extracting short-term features in the set of selected features from the audio sample and determining the segmental feature values of the set of selected features from the extracted short-term features. The processor is further configured to, using a classifier, classify a dietary activity based on the determined segmental feature values of the audio sample and generate an output corresponding to the classified dietary activity.
    Type: Application
    Filed: February 25, 2019
    Publication date: September 5, 2019
    Inventors: Taufiq Hasan, Shabnam Ghaffarzadegan, Zhe Feng