Patents by Inventor Manash PRATIM

Manash PRATIM has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250060450
    Abstract: The present disclosure describes a system and a method for object detection and countermeasures. A signal reflected by an object in an environment is conditioned to improve various parameters such as signal-to-noise ratio, spectral resolution, color mapping, or the like. A determination of whether the object is an unmanned aerial vehicle is made based on an output of a trained AI model. The trained AI model classifies the detected object into a category based on the conditioned signal. Additionally, a jammer and a spoofer are orchestrated based on a determination that the object is an unmanned aerial vehicle. Control of the object is achieved through this orchestration to perform countermeasures such as jamming and spoofing. (See the illustrative sketch following this listing.)
    Type: Application
    Filed: August 14, 2024
    Publication date: February 20, 2025
    Inventors: Nilutpal Choudhury, Manash Pratim Bhuyan, Nihar Kanta Sahoo, Pournamy S, Stephin George, Ajay R, Ravikumar G, Raghavendra Murgod
  • Publication number: 20250008193
    Abstract: Example approaches for generating a target audio track and a target video track based on a source audio-video track are described. In an example, an audio generation model is used to generate target audio for replacing a specific portion of a source audio track, producing a seamless target audio track. Further, a video generation model is used to generate target video for replacing a specific portion of a source video track, producing a seamless target video track. Once generated, the target audio track and the target video track are merged to generate a target audio-visual track. (See the illustrative sketch following this listing.)
    Type: Application
    Filed: November 16, 2022
    Publication date: January 2, 2025
    Inventors: Soma SIDDHARTHA, Ankur BHATIA, Amogh GULATI, Manash Pratim BARMAN, Suvrat BHOOSHAN
  • Patent number: 12175336
    Abstract: A computer-implemented method for training a machine learning network. The method may include receiving input data, selecting one or more batch samples from the input data, applying a perturbation object onto the one or more batch samples to create a perturbed sample, running the perturbed sample through the machine learning network, updating the perturbation object in response to a function evaluated on the perturbed sample, and outputting the perturbation object in response to exceeding a convergence threshold. (See the illustrative sketch following this listing.)
    Type: Grant
    Filed: September 20, 2020
    Date of Patent: December 24, 2024
    Assignee: Robert Bosch GmbH
    Inventors: Filipe J. Cabrita Condessa, Wan-Yi Lin, Karren Yang, Manash Pratim
  • Patent number: 11893087
    Abstract: A multimodal perception system for an autonomous vehicle includes a first sensor that is one of a video, RADAR, LIDAR, or ultrasound sensor, and a controller. The controller may be configured to receive a first signal from the first sensor, a second signal from a second sensor, and a third signal from a third sensor; extract a first, second, and third feature vector from the respective signals; determine an odd-one-out vector from the three feature vectors via an odd-one-out network of a machine learning network, based on inconsistent-modality prediction; fuse the three feature vectors and the odd-one-out vector into a fused feature vector; output the fused feature vector; and control the autonomous vehicle based on the fused feature vector. (See the illustrative sketch following this listing.)
    Type: Grant
    Filed: June 16, 2021
    Date of Patent: February 6, 2024
    Inventors: Karren Yang, Wan-Yi Lin, Manash Pratim, Filipe J. Cabrita Condessa, Jeremy Kolter
  • Publication number: 20220405537
    Abstract: A multimodal perception system for an autonomous vehicle includes a first sensor that is one of a video, RADAR, LIDAR, or ultrasound sensor, and a controller. The controller may be configured to receive a first signal from the first sensor, a second signal from a second sensor, and a third signal from a third sensor; extract a first, second, and third feature vector from the respective signals; determine an odd-one-out vector from the three feature vectors via an odd-one-out network of a machine learning network, based on inconsistent-modality prediction; fuse the three feature vectors and the odd-one-out vector into a fused feature vector; output the fused feature vector; and control the autonomous vehicle based on the fused feature vector.
    Type: Application
    Filed: June 16, 2021
    Publication date: December 22, 2022
    Inventors: Karren YANG, Wan-Yi LIN, Manash PRATIM, Filipe J. CABRITA CONDESSA, Jeremy KOLTER
  • Publication number: 20220092466
    Abstract: A computer-implemented method for training a machine learning network. The method may include receiving input data, selecting one or more batch samples from the input data, applying a perturbation object onto the one or more batch samples to create a perturbed sample, running the perturbed sample through the machine learning network, updating the perturbation object in response to a function evaluated on the perturbed sample, and outputting the perturbation object in response to exceeding a convergence threshold.
    Type: Application
    Filed: September 20, 2020
    Publication date: March 24, 2022
    Inventors: Filipe J. CABRITA CONDESSA, Wan-Yi LIN, Karren YANG, Manash PRATIM
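
Illustrative sketch for publication 20250060450: the flow described in the abstract — condition the reflected signal, classify the object with a trained model, and orchestrate countermeasures when it is a UAV — could look roughly like the Python below. All names here (condition_signal, DummyClassifier, Jammer, Spoofer, handle_detection) are hypothetical stand-ins assumed for illustration, not the patented implementation.

```python
import numpy as np

def condition_signal(raw: np.ndarray) -> np.ndarray:
    """Toy conditioning: remove the DC offset and normalize the amplitude."""
    centered = raw - raw.mean()
    peak = float(np.max(np.abs(centered))) or 1.0
    return centered / peak

class DummyClassifier:
    """Stand-in for the trained AI model that categorizes detected objects."""
    def predict(self, signal: np.ndarray) -> str:
        # A real model would be trained on conditioned radar/RF returns.
        return "uav" if signal.std() > 0.1 else "other"

class Jammer:
    def enable(self) -> None:
        print("jammer enabled")

class Spoofer:
    def enable(self) -> None:
        print("spoofer enabled")

def handle_detection(raw: np.ndarray, model, jammer, spoofer) -> str:
    """Condition the signal, classify the object, and trigger countermeasures."""
    category = model.predict(condition_signal(raw))
    if category == "uav":          # orchestrate jamming/spoofing only for UAVs
        jammer.enable()
        spoofer.enable()
    return category

# Toy usage with a random "reflected signal".
print(handle_detection(np.random.randn(1024), DummyClassifier(), Jammer(), Spoofer()))
```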
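Illustrative sketch for publication 20250008193: a minimal splice-and-merge flow, assuming the generated audio and video segments are already available as arrays. The functions replace_segment and merge_av are hypothetical illustrations, not the patented generation models.

```python
import numpy as np

def replace_segment(track: np.ndarray, start: int, end: int,
                    generated: np.ndarray) -> np.ndarray:
    """Swap track[start:end] with newly generated content of the same length."""
    assert generated.shape[0] == end - start
    return np.concatenate([track[:start], generated, track[end:]])

def merge_av(audio: np.ndarray, video: np.ndarray) -> dict:
    """Pair the target audio and video tracks into one audio-visual container."""
    return {"audio": audio, "video": video}

# Toy usage: replace samples 100..200 of the source audio and frames 10..20 of
# the source video with generated content, then merge the two target tracks.
source_audio = np.zeros(1000)
source_video = np.zeros((100, 64, 64))              # frames x height x width
target_audio = replace_segment(source_audio, 100, 200, np.random.randn(100))
target_video = replace_segment(source_video, 10, 20, np.random.randn(10, 64, 64))
clip = merge_av(target_audio, target_video)
```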
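Illustrative sketch for patent 12175336 (and related publication 20220092466): a minimal version of the described loop — sample a batch, apply a perturbation object, run it through the network, update the perturbation, and output it once a threshold is exceeded — using a toy linear network and a squared-output objective. These modeling choices are assumptions for illustration, not the patented method.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 1))                  # toy "network": y = x @ W
data = rng.normal(size=(256, 8))             # input data
delta = np.zeros(8)                          # perturbation object
lr, threshold = 0.05, 1.0

for step in range(100):
    batch = data[rng.choice(len(data), size=32, replace=False)]  # batch samples
    perturbed = batch + delta                # apply the perturbation object
    out = perturbed @ W                      # run through the network
    grad = (2 * out * W.T).mean(axis=0)      # gradient of mean squared output w.r.t. delta
    delta += lr * grad                       # update the perturbation object
    if np.linalg.norm(delta) > threshold:    # stop once the threshold is exceeded
        break

print("output perturbation:", delta)
```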
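Illustrative sketch for patent 11893087 (and related publication 20220405537): a small PyTorch module with one encoder per modality, an odd-one-out head that scores which modality looks inconsistent, and a fusion layer over the features plus that score vector. The layer sizes, linear encoders, and softmax head are assumptions for illustration, not the patented architecture.

```python
import torch
import torch.nn as nn

class OddOneOutFusion(nn.Module):
    def __init__(self, in_dims=(16, 16, 16), feat_dim=32):
        super().__init__()
        # One encoder per modality (e.g., video, RADAR/LIDAR, ultrasound).
        self.encoders = nn.ModuleList(nn.Linear(d, feat_dim) for d in in_dims)
        # Odd-one-out head: scores each of the 3 modalities plus "none inconsistent".
        self.odd_one_out = nn.Linear(feat_dim * 3, 4)
        # Fusion layer over the three features and the odd-one-out vector.
        self.fuse = nn.Linear(feat_dim * 3 + 4, feat_dim)

    def forward(self, x1, x2, x3):
        feats = [enc(x) for enc, x in zip(self.encoders, (x1, x2, x3))]
        concat = torch.cat(feats, dim=-1)
        odd = torch.softmax(self.odd_one_out(concat), dim=-1)  # odd-one-out vector
        fused = self.fuse(torch.cat([concat, odd], dim=-1))    # fused feature vector
        return fused, odd

# Toy usage: three sensor signals already reduced to fixed-length vectors; the
# fused feature vector would then drive the downstream vehicle controller.
model = OddOneOutFusion()
fused, odd = model(torch.randn(1, 16), torch.randn(1, 16), torch.randn(1, 16))
```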