Patents by Inventor Raunaq Bose

Raunaq Bose has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240104926
    Abstract: A system and a method are disclosed for detecting a sentiment based in part on visual data. The system receives visual data of an environment generated by one or more sensors and accesses a sequence of AI models. Outputs of earlier models in the sequence act as inputs to one or more later models in the sequence. The sequence of AI models includes one or more frame-based models and one or more temporal-based models. The frame-based model is configured to receive the visual data as input and extract multiple sets of frame-based features associated with one or more persons in the environment based in part on the visual data. The temporal-based model is configured to receive the multiple sets of frame-based features as input and determine a sentiment of the one or more persons based in part on the multiple sets of frame-based features.
    Type: Application
    Filed: September 23, 2022
    Publication date: March 28, 2024
    Inventors: Dominic Noy, Carlos Serra Magalhães Coelho, Raunaq Bose, Leslie Nooteboom, Maya Audrey Lara Pindeus
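
The pipeline described in this abstract stages a frame-based model ahead of a temporal-based model, with the first model's per-frame features feeding the second. The sketch below illustrates only that staging; the FrameModel and TemporalModel classes, the feature fields, and the averaging rule are placeholder assumptions, not the models claimed in the application.

```python
from dataclasses import dataclass
from typing import List, Sequence

@dataclass
class FrameFeatures:
    """Frame-based features for one person in one frame (hypothetical schema)."""
    person_id: int
    gaze: float          # e.g. head-pose angle
    expression: float    # e.g. smile-intensity score

class FrameModel:
    """Stand-in for a frame-based model: image -> per-person features."""
    def extract(self, frame) -> List[FrameFeatures]:
        # A real model would run detection and feature heads here.
        return [FrameFeatures(person_id=0, gaze=0.1, expression=0.7)]

class TemporalModel:
    """Stand-in for a temporal-based model: feature sequence -> sentiment label."""
    def predict(self, feature_seq: Sequence[List[FrameFeatures]]) -> str:
        total_people = sum(len(frame) for frame in feature_seq)
        avg_expr = sum(f.expression for frame in feature_seq for f in frame) / max(total_people, 1)
        return "positive" if avg_expr > 0.5 else "neutral"

def run_pipeline(frames, frame_model: FrameModel, temporal_model: TemporalModel) -> str:
    # Outputs of the earlier (frame-based) model feed the later (temporal) model.
    per_frame_features = [frame_model.extract(frame) for frame in frames]
    return temporal_model.predict(per_frame_features)

if __name__ == "__main__":
    print(run_pipeline(frames=[None] * 8, frame_model=FrameModel(),
                       temporal_model=TemporalModel()))
```
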
  • Publication number: 20240029467
    Abstract: A device performs operations including determining a probability that a vulnerable road user (VRU) will continue on a current path (e.g., in connection with controlling an autonomous vehicle). The device receives an image depicting a VRU. The device inputs at least a portion of the image into a model, and receives, as output from the model, a plurality of probabilities describing the VRU, each of the probabilities corresponding to a probability that the VRU is in a given state. The device determines, based on at least some of the plurality of probabilities, a probability that the VRU will exhibit a behavior, and outputs the probability that the VRU will exhibit the behavior to a control system.
    Type: Application
    Filed: October 2, 2023
    Publication date: January 25, 2024
    Inventors: Dominic Noy, Matthew Cameron Angus, James Over Everard, Wassim El Youssoufi, Raunaq Bose, Leslie Cees Nooteboom, Maya Audrey Lara Pindeus
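
The abstract above describes a model that emits per-state probabilities for a VRU, which are then combined into a probability that the VRU will exhibit a behavior and passed to a control system. A minimal sketch of that data flow follows; the state names, the combination rule, and the control-system interface are illustrative assumptions rather than the claimed method.

```python
from typing import Dict

# Hypothetical per-state probabilities as a model might output them for one VRU.
STATE_PROBS: Dict[str, float] = {
    "walking": 0.55,
    "standing": 0.25,
    "looking_at_phone": 0.15,
    "running": 0.05,
}

def probability_of_behavior(state_probs: Dict[str, float],
                            contributing_states=("walking", "running")) -> float:
    """Combine the states assumed to imply 'will continue on current path'."""
    return sum(state_probs.get(state, 0.0) for state in contributing_states)

def send_to_control_system(p_continue: float) -> None:
    # Stand-in for the interface to a vehicle control system.
    print(f"P(VRU continues on current path) = {p_continue:.2f}")

if __name__ == "__main__":
    send_to_control_system(probability_of_behavior(STATE_PROBS))
```
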
  • Publication number: 20230419839
    Abstract: The systems and methods disclosed herein provide a risk prediction system that uses trained machine learning models to make predictions that a vulnerable road user (VRU) will take a particular action. The system first receives, in a video stream, an image depicting a VRU operating a micro-mobility vehicle and extracts the depictions from the image. The extraction process may be determined by bounding box classifiers trained to identify various VRUs and micro-mobility vehicles. The system feeds the extracted depictions to machine learning models and receives, as an output, risk profiles for the VRU and the micro-mobility vehicle. The risk profile may include data associated with the VRU/micro-mobility vehicle determined based on classifications of the VRU and the micro-mobility vehicle. The system may then generate a prediction that the VRU operating the micro-mobility vehicle will take a particular action based on the risk profile.
    Type: Application
    Filed: September 7, 2023
    Publication date: December 28, 2023
    Inventors: Raunaq Bose, Leslie Cees Nooteboom, Maya Audrey Lara Pindeus
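
This entry describes a chain of bounding-box extraction, per-detection risk profiling, and an action prediction over the combined profiles. The sketch below mirrors that chain with stub functions; the labels, attributes, and threshold are hypothetical stand-ins for the trained models referenced in the abstract.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

BoundingBox = Tuple[int, int, int, int]  # x, y, width, height

@dataclass
class RiskProfile:
    """Hypothetical risk profile built from classifications of one detection."""
    label: str
    attributes: Dict[str, float] = field(default_factory=dict)

def detect(frame) -> List[Tuple[str, BoundingBox]]:
    # Stand-in for bounding-box classifiers trained on VRUs and micro-mobility vehicles.
    return [("cyclist", (40, 60, 80, 160)), ("e_scooter", (42, 150, 60, 90))]

def risk_profile(label: str, box: BoundingBox) -> RiskProfile:
    # Stand-in for the per-detection machine learning model.
    speed_hint = 1.5 if label in ("cyclist", "e_scooter") else 1.0
    return RiskProfile(label=label, attributes={"speed_factor": speed_hint})

def predict_action(profiles: List[RiskProfile]) -> str:
    # Illustrative combination rule, not the claimed prediction model.
    combined = sum(p.attributes.get("speed_factor", 1.0) for p in profiles)
    return "likely_to_cross" if combined > 2.5 else "likely_to_yield"

if __name__ == "__main__":
    detections = detect(frame=None)
    profiles = [risk_profile(label, box) for label, box in detections]
    print(predict_action(profiles))
```
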
  • Patent number: 11816914
    Abstract: A device performs operations including determining a probability that a vulnerable road user (VRU) will continue on a current path (e.g., in connection with controlling an autonomous vehicle). The device receives an image depicting a VRU. The device inputs at least a portion of the image into a model, and receives, as output from the model, a plurality of probabilities describing the VRU, each of the probabilities corresponding to a probability that the VRU is in a given state. The device determines, based on at least some of the plurality of probabilities, a probability that the VRU will exhibit a behavior, and outputs the probability that the VRU will exhibit the behavior to a control system.
    Type: Grant
    Filed: September 3, 2020
    Date of Patent: November 14, 2023
    Assignee: Humanising Autonomy Limited
    Inventors: Dominic Noy, Matthew Cameron Angus, James Over Everard, Wassim El Youssoufi, Raunaq Bose, Leslie Cees Nooteboom, Maya Audrey Lara Pindeus
  • Publication number: 20230343062
    Abstract: Systems and methods are disclosed herein for tracking a vulnerable road user (VRU) regardless of occlusion. In an embodiment, the system captures a series of images including the VRU, and inputs each of the images into a detection model. The system receives a bounding box for each of the series of images of the VRU as output from the detection model. The system inputs each bounding box into a multi-task model, and receives as output from the multi-task model an embedding for each bounding box. The system determines, using the embeddings for each bounding box across the series of images, an indication of which of the embeddings correspond to the VRU.
    Type: Application
    Filed: June 27, 2023
    Publication date: October 26, 2023
    Inventors: Yazhini Chitra Pradeep, Wassim El Youssoufi, Dominic Noy, James Over Everard, Raunaq Bose, Maya Audrey Lara Pindeus, Leslie Cees Nooteboom
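
The tracking approach in this entry runs a detection model to get bounding boxes, a multi-task model to get one embedding per box, and then associates embeddings across frames to follow the same VRU through occlusion. A rough sketch of that loop follows, assuming cosine similarity as the association metric and using random vectors in place of the real models.

```python
import numpy as np

def detect_boxes(frame) -> list:
    """Stand-in for the detection model: returns one box per candidate VRU."""
    return [(10, 20, 50, 120), (200, 30, 48, 118)]

def embed(box) -> np.ndarray:
    """Stand-in for the multi-task model: bounding-box crop -> embedding vector."""
    rng = np.random.default_rng(abs(hash(box)) % (2 ** 32))
    return rng.normal(size=16)

def associate(track_embedding: np.ndarray, embeddings: list) -> int:
    """Pick the embedding closest to the tracked VRU (cosine similarity, assumed metric)."""
    sims = [float(np.dot(track_embedding, e) /
                  (np.linalg.norm(track_embedding) * np.linalg.norm(e)))
            for e in embeddings]
    return int(np.argmax(sims))

if __name__ == "__main__":
    frames = [None, None, None]
    track = None
    for frame in frames:
        embeddings = [embed(box) for box in detect_boxes(frame)]
        if track is None:
            track = embeddings[0]  # initialise the track on the first frame
        else:
            track = embeddings[associate(track, embeddings)]
    print("track embedding norm:", round(float(np.linalg.norm(track)), 3))
```
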
  • Patent number: 11783710
    Abstract: The systems and methods disclosed herein provide a risk prediction system that uses trained machine learning models to make predictions that a vulnerable road user (VRU) will take a particular action. The system first receives, in a video stream, an image depicting a VRU operating a micro-mobility vehicle and extracts the depictions from the image. The extraction process may be determined by bounding box classifiers trained to identify various VRUs and micro-mobility vehicles. The system feeds the extracted depictions to machine learning models and receives, as an output, risk profiles for the VRU and the micro-mobility vehicle. The risk profile may include data associated with the VRU/micro-mobility vehicle determined based on classifications of the VRU and the micro-mobility vehicle. The system may then generate a prediction that the VRU operating the micro-mobility vehicle will take a particular action based on the risk profile.
    Type: Grant
    Filed: June 24, 2021
    Date of Patent: October 10, 2023
    Assignee: Humanising Autonomy Limited
    Inventors: Raunaq Bose, Leslie Cees Nooteboom, Maya Audrey Lara Pindeus
  • Patent number: 11734907
    Abstract: Systems and methods are disclosed herein for tracking a vulnerable road user (VRU) regardless of occlusion. In an embodiment, the system captures a series of images including the VRU, and inputs each of the images into a detection model. The system receives a bounding box for each of the series of images of the VRU as output from the detection model. The system inputs each bounding box into a multi-task model, and receives as output from the multi-task model an embedding for each bounding box. The system determines, using the embeddings for each bounding box across the series of images, an indication of which of the embeddings correspond to the VRU.
    Type: Grant
    Filed: April 24, 2020
    Date of Patent: August 22, 2023
    Assignee: Humanising Autonomy Limited
    Inventors: Yazhini Chitra Pradeep, Wassim El Youssoufi, Dominic Noy, James Over Everard, Raunaq Bose, Maya Audrey Lara Pindeus, Leslie Cees Nooteboom
  • Publication number: 20230048304
    Abstract: A behavior prediction system predicts human behaviors based on environment-aware information such as camera movement data and geospatial data. The system receives sensor data of a vehicle reflecting a state of the vehicle at a given time and a given location. The system determines a field of concern in images of a video stream and determines one or more portions of images of the video stream that correspond to the field of concern. The system may apply different levels of processing power to objects in the images based on whether an object is in the field of concern. The system then generates features of objects and identifies vulnerable road users (VRUs) from the objects of the video stream. For the identified VRUs, the system inputs a representation of the VRUs and the features into a machine learning model, and receives, as output from the machine learning model, a behavioral risk assessment of the VRUs.
    Type: Application
    Filed: August 13, 2021
    Publication date: February 16, 2023
    Inventors: Leslie Cees Nooteboom, Raunaq Bose, Maya Audrey Lara Pindeus, Dominic Noy, James Over Everard, Yazhini Chitra Pradeep
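
The abstract above ties vehicle state (e.g., speed and heading) to a "field of concern" in the image and spends more processing on objects inside it. The sketch below shows one way such a policy could look; the widening rule and the two processing tiers are assumptions for illustration, not the patented method.

```python
from typing import Tuple

Box = Tuple[int, int, int, int]  # x, y, width, height in image coordinates

def field_of_concern(speed_kmh: float, heading_deg: float,
                     image_width: int = 1280) -> Tuple[int, int]:
    """Hypothetical rule: a faster vehicle gets a wider horizontal band of concern."""
    half_width = int(min(image_width, 300 + speed_kmh * 10)) // 2
    centre = image_width // 2 + int(heading_deg * 4)  # shift the band towards the turn
    return max(0, centre - half_width), min(image_width, centre + half_width)

def processing_level(box: Box, concern: Tuple[int, int]) -> str:
    """Objects inside the field of concern get the heavier model (assumed policy)."""
    x, _, w, _ = box
    lo, hi = concern
    return "full_model" if lo <= x + w // 2 <= hi else "lightweight_model"

if __name__ == "__main__":
    concern = field_of_concern(speed_kmh=40.0, heading_deg=-5.0)
    for box in [(100, 300, 60, 140), (640, 280, 55, 150), (1200, 310, 50, 130)]:
        print(box, "->", processing_level(box, concern))
```
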
  • Publication number: 20220313109
    Abstract: There is provided a method of differentiating between a strain and a contraction of the pelvic floor muscles (PFM) of a subject. The method comprises receiving data generated by an orientation sensor provided within a vaginal probe device that is located within the vaginal canal of the subject, and utilizing a processor to process data generated by the orientation sensor to determine a direction of rotation of the vaginal probe device during a measurement period. When the processor determines that the vaginal probe device has rotated in the cranial-ventral direction relative to the subject, an output is generated indicating that there has been a contraction of the PFM during the measurement period; and when the processor determines that the vaginal probe device has rotated in the caudal-dorsal direction relative to the subject, an output is generated indicating that there has been a strain of the PFM during the measurement period.
    Type: Application
    Filed: June 21, 2022
    Publication date: October 6, 2022
    Inventors: Ben Levy, Jeroen Bergmann, Raunaq Bose, Kay Crotty
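
The core decision in this application is a direction-of-rotation test: cranial-ventral rotation of the probe indicates a contraction, caudal-dorsal rotation indicates a strain. A minimal sketch of that classification follows, with the sign convention chosen arbitrarily for illustration.

```python
def classify_pfm_event(rotation_deg: float) -> str:
    """
    Classify a pelvic-floor event from the probe's net rotation over a measurement
    period. Sign convention is an assumption for illustration: positive means
    rotation in the cranial-ventral direction, negative means caudal-dorsal.
    """
    if rotation_deg > 0:
        return "contraction"  # cranial-ventral rotation -> PFM contraction
    if rotation_deg < 0:
        return "strain"       # caudal-dorsal rotation -> PFM strain
    return "no_event"

if __name__ == "__main__":
    for angle in (4.2, -3.1, 0.0):
        print(f"rotation {angle:+.1f} deg -> {classify_pfm_event(angle)}")
```
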
  • Patent number: 11406279
    Abstract: A method of differentiating between a strain and a contraction of the pelvic floor muscles (PFM) of a subject includes receiving data generated by an orientation sensor provided within a vaginal probe device that is located within the vaginal canal of the subject and utilizing a processor to process data generated by the orientation sensor to determine a direction of rotation of the vaginal probe device during a measurement period. When the processor determines that the vaginal probe device has rotated in the cranial-ventral direction relative to the subject, an output is generated indicating that there has been a contraction of the PFM during the measurement period; and when the processor determines that the vaginal probe device has rotated in the caudal-dorsal direction relative to the subject, an output is generated indicating that there has been a strain of the PFM during the measurement period.
    Type: Grant
    Filed: October 28, 2015
    Date of Patent: August 9, 2022
    Assignee: Chiaro Technology Limited
    Inventors: Ben Levy, Jeroen Bergmann, Raunaq Bose, Kay Crotty
  • Publication number: 20220189210
    Abstract: An occlusion analysis system improves accuracy of behavior prediction models by generating occlusion parameters that may inform mathematical models to generate more accurate predictions. The occlusion analysis system trains and applies models for generating occlusion parameters, such as the manner in which a person is occluded, the occlusion percentage, and the occlusion type. A behavior prediction system may input the occlusion parameters, as well as other parameters relating to the activity of the person, into a second machine learning model for behavior prediction. The second machine learning model is a higher-level model trained to output a prediction that the person will exhibit a future behavior and a confidence level associated with the prediction. The confidence level is at least partially determined based on the occlusion parameters.
    Type: Application
    Filed: December 13, 2021
    Publication date: June 16, 2022
    Inventors: Wassim El Youssoufi, Dominic Noy, Yazhini Chitra Pradeep, James Over Everard, Leslie Cees Nooteboom, Raunaq Bose, Maya Audrey Lara Pindeus
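
This entry feeds occlusion parameters (the manner, percentage, and type of occlusion) alongside activity features into a second model whose confidence depends on the occlusion. The sketch below shows that two-input structure; the parameter schema and the confidence formula are invented for illustration and are not taken from the application.

```python
from dataclasses import dataclass

@dataclass
class OcclusionParameters:
    """Hypothetical occlusion descriptors for one detected person."""
    occlusion_percentage: float  # 0.0 .. 1.0
    occlusion_type: str          # e.g. "static_object", "other_road_user"
    occluded_region: str         # e.g. "lower_body"

def predict_behavior(activity_score: float, occ: OcclusionParameters) -> tuple:
    """Illustrative second-stage model: prediction plus occlusion-aware confidence."""
    will_cross = activity_score > 0.5
    # Confidence degrades with how much of the person is hidden (assumed rule).
    confidence = max(0.0, 1.0 - occ.occlusion_percentage) * 0.9 + 0.1
    return will_cross, round(confidence, 2)

if __name__ == "__main__":
    occ = OcclusionParameters(occlusion_percentage=0.4,
                              occlusion_type="static_object",
                              occluded_region="lower_body")
    print(predict_behavior(activity_score=0.7, occ=occ))
```
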
  • Publication number: 20210403003
    Abstract: The systems and methods disclosed herein provide a risk prediction system that uses trained machine learning models to make predictions that a vulnerable road user (VRU) will take a particular action. The system first receives, in a video stream, an image depicting a VRU operating a micro-mobility vehicle and extracts the depictions from the image. The extraction process may be determined by bounding box classifiers trained to identify various VRUs and micro-mobility vehicles. The system feeds the extracted depictions to machine learning models and receives, as an output, risk profiles for the VRU and the micro-mobility vehicle. The risk profile may include data associated with the VRU/micro-mobility vehicle determined based on classifications of the VRU and the micro-mobility vehicle. The system may then generate a prediction that the VRU operating the micro-mobility vehicle will take a particular action based on the risk profile.
    Type: Application
    Filed: June 24, 2021
    Publication date: December 30, 2021
    Inventors: Raunaq Bose, Leslie Cees Nooteboom, Maya Audrey Lara Pindeus
  • Publication number: 20210334982
    Abstract: Systems and methods are disclosed herein for tracking a vulnerable road user (VRU) regardless of occlusion. In an embodiment, the system captures a series of images including the VRU, and inputs each of the images into a detection model. The system receives a bounding box for each of the series of images of the VRU as output from the detection model. The system inputs each bounding box into a multi-task model, and receives as output from the multi-task model an embedding for each bounding box. The system determines, using the embeddings for each bounding box across the series of images, an indication of which of the embeddings correspond to the VRU.
    Type: Application
    Filed: April 24, 2020
    Publication date: October 28, 2021
    Inventors: Yazhini Chitra Pradeep, Wassim El Youssoufi, Dominic Noy, James Over Everard, Raunaq Bose, Maya Audrey Lara Pindeus, Leslie Cees Nooteboom
  • Publication number: 20210070322
    Abstract: A device performs operations including determining a probability that a vulnerable road user (VRU) will continue on a current path (e.g., in connection with controlling an autonomous vehicle). The device receives an image depicting a VRU. The device inputs at least a portion of the image into a model, and receives, as output from the model, a plurality of probabilities describing the VRU, each of the probabilities corresponding to a probability that the VRU is in a given state. The device determines, based on at least some of the plurality of probabilities, a probability that the VRU will exhibit a behavior, and outputs the probability that the VRU will exhibit the behavior to a control system.
    Type: Application
    Filed: September 3, 2020
    Publication date: March 11, 2021
    Inventors: Dominic Noy, Matthew Cameron Angus, James Over Everard, Wassim El Youssoufi, Raunaq Bose, Leslie Cees Nooteboom, Maya Audrey Lara Pindeus
  • Patent number: 10913454
    Abstract: A system and a method are disclosed for determining intent of a human based on human pose. In some embodiments, a processor obtains a plurality of sequential images from a video feed, and determines respective keypoints corresponding to a human in each respective image of the plurality of sequential images. The processor aggregates the respective keypoints for each respective image into a pose of the human and transmits a query to a database to find a template that matches the pose by comparing the pose to a plurality of template poses that translate candidate poses to intent, each template corresponding to an associated intent. The processor receives a reply message from the database that either indicates an intent of the human based on a matching template, or an inability to locate the matching template, and, in response to the reply message indicating the intent of the human, outputs the intent.
    Type: Grant
    Filed: December 13, 2018
    Date of Patent: February 9, 2021
    Assignee: Humanising Autonomy Limited
    Inventors: Maya Audrey Lara Pindeus, Raunaq Bose, Leslie Cees Nooteboom, Adam Joshua Bernstein
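
The granted patent above matches an aggregated keypoint pose against a database of template poses, each mapped to an intent, and returns either the matched intent or a no-match indication. A small sketch of that lookup follows; the keypoint names, the template store, the distance metric, and the threshold are all hypothetical.

```python
import math
from typing import Dict, Optional

Pose = Dict[str, tuple]  # keypoint name -> (x, y), normalised coordinates

# Hypothetical template store mapping candidate poses to intents.
TEMPLATES: Dict[str, Pose] = {
    "intends_to_cross": {"head": (0.5, 0.2), "torso": (0.5, 0.5), "lead_foot": (0.6, 0.9)},
    "waiting":          {"head": (0.5, 0.2), "torso": (0.5, 0.5), "lead_foot": (0.5, 0.9)},
}

def pose_distance(a: Pose, b: Pose) -> float:
    """Mean Euclidean distance over the keypoints both poses share."""
    shared = set(a) & set(b)
    return sum(math.dist(a[k], b[k]) for k in shared) / len(shared)

def match_intent(pose: Pose, threshold: float = 0.08) -> Optional[str]:
    """Return the intent of the closest template, or None if nothing matches."""
    intent, template = min(TEMPLATES.items(),
                           key=lambda item: pose_distance(pose, item[1]))
    return intent if pose_distance(pose, template) <= threshold else None

if __name__ == "__main__":
    observed = {"head": (0.5, 0.21), "torso": (0.5, 0.5), "lead_foot": (0.58, 0.9)}
    print(match_intent(observed))
```
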
  • Publication number: 20190176820
    Abstract: A system and a method are disclosed for determining intent of a human based on human pose. In some embodiments, a processor obtains a plurality of sequential images from a video feed, and determines respective keypoints corresponding to a human in each respective image of the plurality of sequential images. The processor aggregates the respective keypoints for each respective image into a pose of the human and transmits a query to a database to find a template that matches the pose by comparing the pose to a plurality of template poses that translate candidate poses to intent, each template corresponding to an associated intent. The processor receives a reply message from the database that either indicates an intent of the human based on a matching template, or an inability to locate the matching template, and, in response to the reply message indicating the intent of the human, outputs the intent.
    Type: Application
    Filed: December 13, 2018
    Publication date: June 13, 2019
    Inventors: Maya Audrey Lara Pindeus, Raunaq Bose, Leslie Cees Nooteboom
  • Publication number: 20170319103
    Abstract: A method of differentiating between a strain and a contraction of the pelvic floor muscles (PFM) of a subject includes receiving data generated by an orientation sensor provided within a vaginal probe device that is located within the vaginal canal of the subject and utilizing a processor to process data generated by the orientation sensor to determine a direction of rotation of the vaginal probe device during a measurement period. When the processor determines that the vaginal probe device has rotated in the cranial-ventral direction relative to the subject, an output is generated indicating that there has been a contraction of the PFM during the measurement period; and when the processor determines that the vaginal probe device has rotated in the caudal-dorsal direction relative to the subject, an output is generated indicating that there has been a strain of the PFM during the measurement period.
    Type: Application
    Filed: October 28, 2015
    Publication date: November 9, 2017
    Inventors: Ben Levy, Jeroen Bergmann, Raunaq Bose, Kay Crotty