Patents by Inventor Praveen Narayanan

Praveen Narayanan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11975738
    Abstract: A first image can be acquired from a first sensor included in a vehicle and input to a deep neural network to determine a first bounding box for a first object. A second image can be acquired from the first sensor. Latitudinal and longitudinal motion data can be input from second sensors included in the vehicle, corresponding to the time between inputting the first image and inputting the second image. A second bounding box can be determined by translating the first bounding box based on the latitudinal and longitudinal motion data. The second image can be cropped based on the second bounding box. The cropped second image can be input to the deep neural network to detect a second object. The first image, the first bounding box, the second image, and the second bounding box can be output.
    Type: Grant
    Filed: June 3, 2021
    Date of Patent: May 7, 2024
    Assignee: Ford Global Technologies, LLC
    Inventors: Gurjeet Singh, Apurbaa Mallik, Rohun Atluri, Vijay Nagasamy, Praveen Narayanan
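The entry above describes translating a detection's bounding box by the vehicle's own motion before re-running detection on a cropped region. Below is a minimal sketch of that idea, not the patented implementation: the pixels-per-meter scale factors, the (x1, y1, x2, y2) box format, and the helper names translate_box and crop_to_box are assumptions made for illustration.

```python
# Minimal sketch (not the patented implementation): shift a bounding box by the
# vehicle's lateral/longitudinal motion between frames, then crop the new frame.
import numpy as np

def translate_box(box, dx_m, dz_m, px_per_m_x=50.0, px_per_m_y=20.0):
    """Translate a (x1, y1, x2, y2) box by lateral dx_m and longitudinal dz_m meters."""
    shift_x = dx_m * px_per_m_x          # lateral motion moves the box sideways
    shift_y = dz_m * px_per_m_y          # forward motion moves the box toward the horizon
    x1, y1, x2, y2 = box
    return (x1 + shift_x, y1 - shift_y, x2 + shift_x, y2 - shift_y)

def crop_to_box(image, box, margin=16):
    """Crop the image to the translated box plus a margin, clamped to image bounds."""
    h, w = image.shape[:2]
    x1, y1, x2, y2 = box
    x1 = int(max(0, x1 - margin)); y1 = int(max(0, y1 - margin))
    x2 = int(min(w, x2 + margin)); y2 = int(min(h, y2 + margin))
    return image[y1:y2, x1:x2]

# Usage: the cropped second image would then be fed back into the object detector.
frame2 = np.zeros((720, 1280, 3), dtype=np.uint8)          # placeholder second image
box2 = translate_box((600, 400, 700, 480), dx_m=0.2, dz_m=1.5)
second_crop = crop_to_box(frame2, box2)
```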
  • Patent number: 11772656
    Abstract: A system includes a computer including a processor and a memory, the memory storing instructions executable by the processor to generate a synthetic image by adjusting respective color values of one or more pixels of a reference image based on a specified meteorological optical range from a vehicle sensor to simulated fog, and input the synthetic image to a machine learning program to train the machine learning program to identify a meteorological optical range from the vehicle sensor to actual fog.
    Type: Grant
    Filed: July 8, 2020
    Date of Patent: October 3, 2023
    Assignee: Ford Global Technologies, LLC
    Inventors: Apurbaa Mallik, Kaushik Balakrishnan, Vijay Nagasamy, Praveen Narayanan, Sowndarya Sundar
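The fog-synthesis entry above adjusts pixel colors according to a specified meteorological optical range (MOR). The sketch below assumes a standard atmospheric-scattering (Koschmieder) attenuation model, which the abstract does not confirm; the per-pixel depth map, airlight value, and 5% contrast threshold are illustrative assumptions.

```python
# Minimal sketch (assumption, not the patent's method): synthesize fog at a given
# meteorological optical range (MOR) using the Koschmieder attenuation model.
import numpy as np

def add_fog(image, depth_m, mor_m, airlight=255.0, contrast_threshold=0.05):
    """Blend each pixel toward the airlight color according to its depth and the MOR."""
    beta = -np.log(contrast_threshold) / mor_m          # extinction coefficient from MOR
    transmission = np.exp(-beta * depth_m)[..., None]   # per-pixel transmission, shape (H, W, 1)
    foggy = image.astype(np.float32) * transmission + airlight * (1.0 - transmission)
    return foggy.clip(0, 255).astype(np.uint8)

# Usage: synthetic images at several MOR values could form a training set.
rgb = np.full((480, 640, 3), 128, dtype=np.uint8)
depth = np.full((480, 640), 40.0)                        # placeholder depth map in meters
foggy_40m = add_fog(rgb, depth, mor_m=40.0)
```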
  • Patent number: 11720995
    Abstract: A computer includes a processor and a memory, the memory including instructions to be executed by the processor to input a fisheye image to a vector quantized variational autoencoder. The vector quantized variational autoencoder can encode the fisheye image to first latent variables based on an encoder. The vector quantized variational autoencoder can quantize the first latent variables to generate second latent variables based on a dictionary of embeddings. The vector quantized variational autoencoder can decode the second latent variables to a rectified rectilinear image using a decoder and output the rectified rectilinear image.
    Type: Grant
    Filed: June 4, 2021
    Date of Patent: August 8, 2023
    Assignee: Ford Global Technologies, LLC
    Inventors: Praveen Narayanan, Ramchandra Ganesh Karandikar, Nikita Jaipuria, Punarjay Chakravarty, Ganesh Kumar
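The abstract above centers on a vector quantized variational autoencoder (VQ-VAE). The sketch below shows only the quantization step, snapping encoder latents to their nearest dictionary embeddings; the shapes, codebook size, and NumPy implementation are illustrative assumptions, not the patented network.

```python
# Minimal sketch of the vector-quantization stage of a VQ-VAE: each encoder latent
# is replaced by its nearest entry in a learned dictionary of embeddings.
import numpy as np

def quantize(latents, codebook):
    """Map each latent vector (N, D) to the closest codebook embedding (K, D)."""
    # Squared distances between every latent and every codebook entry.
    d2 = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    indices = d2.argmin(axis=1)                    # discrete code assigned to each latent
    return codebook[indices], indices

rng = np.random.default_rng(0)
first_latents = rng.normal(size=(64, 32))          # encoder output for one fisheye image
codebook = rng.normal(size=(512, 32))              # dictionary of embeddings
second_latents, codes = quantize(first_latents, codebook)
# second_latents would then be decoded into a rectified rectilinear image.
```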
  • Publication number: 20230139013
    Abstract: An image including a vehicle seat and a seatbelt webbing for the vehicle seat is obtained. The image is input to a neural network trained to, upon determining a presence of an occupant in the vehicle seat, output a physical state of the occupant and a seatbelt webbing state. Respective classifications for the physical state and the seatbelt webbing state are determined. The classifications are one of preferred or nonpreferred. A vehicle component is actuated based on the classification for at least one of the physical state of the occupant or the seatbelt webbing state being nonpreferred.
    Type: Application
    Filed: November 4, 2021
    Publication date: May 4, 2023
    Applicant: Ford Global Technologies, LLC
    Inventors: Kaushik Balakrishnan, Praveen Narayanan, Justin Miller, Devesh Upadhyay
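The application above maps the network's outputs for occupant physical state and seatbelt webbing state to preferred/nonpreferred classifications and actuates a vehicle component when either is nonpreferred. The sketch below illustrates only that decision logic; the class labels and the actuate hook are hypothetical.

```python
# Minimal sketch (illustrative only): classify the two output states and actuate a
# component when at least one classification is nonpreferred.
from typing import Callable

PREFERRED_PHYSICAL = {"seated_upright"}             # assumed label set
PREFERRED_WEBBING = {"properly_routed"}             # assumed label set

def evaluate_and_actuate(physical_state: str, webbing_state: str,
                         actuate: Callable[[str], None]) -> None:
    physical_ok = physical_state in PREFERRED_PHYSICAL
    webbing_ok = webbing_state in PREFERRED_WEBBING
    if not (physical_ok and webbing_ok):
        actuate("seatbelt_chime")                   # e.g. a warning indicator or chime

evaluate_and_actuate("leaning_forward", "properly_routed", actuate=print)
```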
  • Patent number: 11625856
    Abstract: Example localization systems and methods are described. In one implementation, a method receives a camera image from a vehicle camera and cleans the camera image using a VAE-GAN (variational autoencoder combined with a generative adversarial network) algorithm. The method further receives a vector map related to an area proximate the vehicle and generates a synthetic image based on the vector map. The method then localizes the vehicle based on the cleaned camera image and the synthetic image.
    Type: Grant
    Filed: January 27, 2021
    Date of Patent: April 11, 2023
    Assignee: Ford Global Technologies, LLC
    Inventors: Sarah Houts, Praveen Narayanan, Punarjay Chakravarty, Gaurav Pandey, Graham Mills, Tyler Reid
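The localization entry above cleans the camera image with a VAE-GAN and compares it against a synthetic image generated from a vector map. The sketch below assumes a simple candidate-pose matching step; clean_image and render_from_vector_map are stand-ins for the VAE-GAN and the renderer, and the scoring rule is made up for illustration.

```python
# Minimal sketch (an assumption about the matching step, not the patented pipeline):
# score the cleaned camera image against synthetic images rendered at candidate poses
# and take the best-scoring pose as the localization estimate.
import numpy as np

def localize(camera_image, candidate_poses, clean_image, render_from_vector_map):
    cleaned = clean_image(camera_image)
    scores = []
    for pose in candidate_poses:
        synthetic = render_from_vector_map(pose)
        # Negative mean absolute difference as a simple similarity score.
        scores.append(-np.abs(cleaned.astype(np.float32) - synthetic).mean())
    return candidate_poses[int(np.argmax(scores))]

# Usage with trivial stand-ins:
poses = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.1)]
img = np.zeros((64, 64), dtype=np.uint8)
best = localize(img, poses, clean_image=lambda x: x,
                render_from_vector_map=lambda p: np.zeros((64, 64), dtype=np.uint8))
```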
  • Patent number: 11620475
    Abstract: The present disclosure discloses a system and a method that includes receiving, at a decoder, a latent representation of an image having a first domain, and generating a reconstructed image having a second domain, wherein the reconstructed image is generated based on the latent representation.
    Type: Grant
    Filed: March 25, 2020
    Date of Patent: April 4, 2023
    Assignee: Ford Global Technologies, LLC
    Inventors: Praveen Narayanan, Nikita Jaipuria, Punarjay Chakravarty, Vidya Nariyambut murali
  • Patent number: 11613249
    Abstract: A method for training an autonomous vehicle to reach a target location is disclosed. The method includes detecting the state of the autonomous vehicle in a simulated environment and using a first neural network to navigate the vehicle from an initial location to a target destination. During the training phase, a second neural network may reward the first neural network for a desired action taken by the autonomous vehicle and may penalize it for an undesired action. A corresponding system and computer program product are also disclosed and claimed herein.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: March 28, 2023
    Assignee: Ford Global Technologies, LLC
    Inventors: Kaushik Balakrishnan, Praveen Narayanan, Mohsen Lakehal-ayat
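The training scheme above uses a second neural network to reward or penalize the first (policy) network during simulation. The sketch below replaces both networks with trivial stand-ins to show only the reward/penalty loop structure; the reward logic, action set, and 1-D "environment" are made up for illustration.

```python
# Minimal sketch (illustrative, not the patented method): a reward model rewards
# actions that move the simulated vehicle toward the target and penalizes the rest.
import random

def reward_model(state, action, target):
    """Stand-in for the second neural network: +1 if the action reduces the distance
    to the target, -1 otherwise."""
    new_state = state + (1 if action == "forward" else -1)
    reward = 1.0 if abs(target - new_state) < abs(target - state) else -1.0
    return reward, new_state

def train_episode(target=10, steps=20):
    state, total = 0, 0.0
    for _ in range(steps):
        action = random.choice(["forward", "reverse"])   # placeholder policy network
        r, state = reward_model(state, action, target)
        total += r                                       # would drive a policy update
    return total

print(train_episode())
```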
  • Patent number: 11574622
    Abstract: An end-to-end deep-learning-based system that can solve both ASR and TTS problems jointly using unpaired text and audio samples is disclosed herein. An adversarially-trained approach is used to generate a more robust independent TTS neural network and an ASR neural network that can be deployed individually or simultaneously. The process for training the neural networks includes generating an audio sample from a text sample using the TTS neural network, then feeding the generated audio sample into the ASR neural network to regenerate the text. The difference between the regenerated text and the original text is used as a first loss for training the neural networks. A similar process is used for an audio sample. The difference between the regenerated audio and the original audio is used as a second loss. Text and audio discriminators are similarly used on the output of the neural network to generate additional losses for training.
    Type: Grant
    Filed: July 2, 2020
    Date of Patent: February 7, 2023
    Assignee: Ford Global Technologies, LLC
    Inventors: Kaushik Balakrishnan, Praveen Narayanan, Francois Charette
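The joint TTS/ASR entry above derives its first loss from the text → audio → text cycle. The sketch below shows only that loss's structure: tts and asr are placeholder callables and the token-mismatch distance is a crude stand-in for the actual loss functions.

```python
# Minimal sketch (structure only, with stand-in models): the text->audio->text cycle
# loss described in the abstract.
import numpy as np

def text_cycle_loss(text_ids, tts, asr):
    """First loss: distance between the original text and the text recovered after
    synthesizing audio with TTS and transcribing it back with ASR."""
    audio = tts(text_ids)
    recovered = asr(audio)
    n = min(len(text_ids), len(recovered))
    return float(np.mean(np.array(text_ids[:n]) != np.array(recovered[:n])))

# Trivial stand-ins so the sketch runs; the real models are neural networks.
tts = lambda ids: np.array(ids, dtype=np.float32)        # "audio" = identity encoding
asr = lambda audio: audio.astype(int).tolist()
print(text_cycle_loss([3, 1, 4, 1, 5], tts, asr))        # 0.0 for the identity pair
```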
  • Patent number: 11574463
    Abstract: The present disclosure discloses a system and a method. In an example implementation, the system and the method generate, at a first encoder neural network, an encoded representation of image features of an image received from a vehicle sensor of a vehicle. The system and method can also generate, at a second encoder neural network, an encoded representation of map tile features and generate, at the decoder neural network, a semantically segmented map tile based on the encoded representation of image features, the encoded representation of map tile features, and Global Positioning System (GPS) coordinates of the vehicle. The semantically segmented map tile includes a location of the vehicle and detected objects depicted within the image with respect to the vehicle.
    Type: Grant
    Filed: February 24, 2020
    Date of Patent: February 7, 2023
    Assignee: Ford Global Technologies, LLC
    Inventors: Gaurav Pandey, Nikita Jaipuria, Praveen Narayanan, Punarjay Chakravarty
  • Publication number: 20230023347
    Abstract: A computer includes a processor and a memory storing instructions executable by the processor to receive radar data including a radar pixel having a radial velocity from a radar; receive camera data including an image frame including camera pixels from a camera; map the radar pixel to the image frame; generate a region of the image frame surrounding the radar pixel; determine association scores for the respective camera pixels in the region; select a first camera pixel of the camera pixels from the region, the first camera pixel having a greatest association score of the association scores; and calculate a full velocity of the radar pixel using the radial velocity of the radar pixel and a first optical flow at the first camera pixel. The association scores indicate a likelihood that the respective camera pixels correspond to a same point in an environment as the radar pixel.
    Type: Application
    Filed: July 23, 2021
    Publication date: January 26, 2023
    Applicants: Ford Global Technologies, LLC, Board of Trustees of Michigan State University
    Inventors: Xiaoming Liu, Daniel Morris, Yunfei Long, Marcos Paul Gerardo Castro, Punarjay Chakravarty, Praveen Narayanan
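The abstract above selects the camera pixel with the greatest association score in the region and combines its optical flow with the radar pixel's radial velocity. The sketch below uses a deliberately simplified pinhole approximation to convert pixel flow into transverse speed; the geometry, the scale factors, and the function name full_velocity are assumptions, not the patent's derivation.

```python
# Minimal sketch (simplified 2D geometry, not the full derivation): pick the camera
# pixel with the highest association score, then combine the radar's radial velocity
# with the transverse motion implied by that pixel's optical flow.
import numpy as np

def full_velocity(radial_velocity, region_scores, region_flow, range_m, dt, focal_px):
    """region_scores: (H, W) association scores; region_flow: (H, W, 2) flow in pixels."""
    iy, ix = np.unravel_index(np.argmax(region_scores), region_scores.shape)
    flow = region_flow[iy, ix]                     # optical flow at the best-associated pixel
    # Transverse speed (m/s) from pixel flow via a pinhole approximation (assumption).
    transverse = flow * range_m / (focal_px * dt)
    return np.array([radial_velocity, transverse[0], transverse[1]])

scores = np.random.default_rng(1).random((16, 16))
flow = np.zeros((16, 16, 2)); flow[..., 0] = 2.0   # 2 px/frame horizontal flow everywhere
print(full_velocity(radial_velocity=-5.0, region_scores=scores, region_flow=flow,
                    range_m=30.0, dt=0.05, focal_px=1000.0))
```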
  • Publication number: 20220388535
    Abstract: A first image can be acquired from a first sensor included in a vehicle and input to a deep neural network to determine a first bounding box for a first object. A second image can be acquired from the first sensor. Latitudinal and longitudinal motion data can be input from second sensors included in the vehicle, corresponding to the time between inputting the first image and inputting the second image. A second bounding box can be determined by translating the first bounding box based on the latitudinal and longitudinal motion data. The second image can be cropped based on the second bounding box. The cropped second image can be input to the deep neural network to detect a second object. The first image, the first bounding box, the second image, and the second bounding box can be output.
    Type: Application
    Filed: June 3, 2021
    Publication date: December 8, 2022
    Applicant: Ford Global Technologies, LLC
    Inventors: Gurjeet Singh, Apurbaa Mallik, Rohun Atluri, Vijay Nagasamy, Praveen Narayanan
  • Publication number: 20220392014
    Abstract: A computer includes a processor and a memory, the memory including instructions to be executed by the processor to input a fisheye image to a vector quantized variational autoencoder. The vector quantized variational autoencoder can encode the fisheye image to first latent variables based on an encoder. The vector quantized variational autoencoder can quantize the first latent variables to generate second latent variables based on a dictionary of embeddings. The vector quantized variational autoencoder can decode the second latent variables to a rectified rectilinear image using a decoder and output the rectified rectilinear image.
    Type: Application
    Filed: June 4, 2021
    Publication date: December 8, 2022
    Applicant: Ford Global Technologies, LLC
    Inventors: Praveen Narayanan, Ramchandra Ganesh Karandikar, Nikita Jaipuria, Punarjay Chakravarty, Ganesh Kumar
  • Publication number: 20220390591
    Abstract: A computer includes a processor and a memory storing instructions executable by the processor to receive radar data from a radar, the radar data including radar pixels having respective measured depths; receive camera data from a camera, the camera data including an image frame including camera pixels; map the radar pixels to the image frame; generate respective regions of the image frame surrounding the respective radar pixels; for each region, determine confidence scores for the respective camera pixels in that region; output a depth map of projected depths for the respective camera pixels based on the confidence scores; and operate a vehicle including the radar and the camera based on the depth map. The confidence scores indicate confidence in applying the measured depth of the radar pixel for that region to the respective camera pixels.
    Type: Application
    Filed: June 3, 2021
    Publication date: December 8, 2022
    Applicants: Ford Global Technologies, LLC, Board of Trustees of Michigan State University
    Inventors: Yunfei Long, Daniel Morris, Xiaoming Liu, Marcos Paul Gerardo Castro, Praveen Narayanan, Punarjay Chakravarty
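The depth-map entry above assigns radar-measured depths to camera pixels according to confidence scores. The sketch below assumes a simple keep-the-highest-confidence rule with one scalar confidence per radar point (the abstract describes per-pixel scores within each region); the region size and shapes are illustrative.

```python
# Minimal sketch (an assumed aggregation rule): build a dense depth map by giving each
# camera pixel the radar depth whose region covers it with the highest confidence.
import numpy as np

def depth_map_from_radar(radar_pixels, image_shape, half=8):
    """radar_pixels: iterable of (row, col, depth_m, confidence); a scalar confidence
    per radar point is used here for simplicity."""
    depth = np.zeros(image_shape, dtype=np.float32)
    best_conf = np.full(image_shape, -np.inf, dtype=np.float32)
    for row, col, d, conf in radar_pixels:
        r0, r1 = max(0, row - half), min(image_shape[0], row + half)
        c0, c1 = max(0, col - half), min(image_shape[1], col + half)
        patch = best_conf[r0:r1, c0:c1]
        mask = conf > patch                      # keep the highest-confidence depth per pixel
        depth[r0:r1, c0:c1][mask] = d
        patch[mask] = conf
    return depth

dense = depth_map_from_radar([(60, 80, 12.5, 0.9), (62, 90, 30.0, 0.4)], (120, 160))
```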
  • Patent number: 11475591
    Abstract: Various examples of hybrid metric-topological camera-based localization are described. A single image sensor captures an input image of an environment. The input image is localized to one of a plurality of topological nodes of a hybrid simultaneous localization and mapping (SLAM) metric-topological map which describes the environment as the plurality of topological nodes at a plurality of discrete locations in the environment. A metric pose of the image sensor can be determined using a Perspective-n-Point (PnP) projection algorithm. A convolutional neural network (CNN) can be trained to localize the input image to one of the plurality of topological nodes and a direction of traversal through the environment.
    Type: Grant
    Filed: November 24, 2020
    Date of Patent: October 18, 2022
    Assignee: Ford Global Technologies, LLC
    Inventors: Punarjay Chakravarty, Tom Roussel, Praveen Narayanan, Gaurav Pandey
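The hybrid localization entry above refines the topological estimate with a Perspective-n-Point (PnP) solve. The sketch below shows a generic PnP call with OpenCV's cv2.solvePnP on made-up 2D-3D correspondences; the landmark coordinates and camera intrinsics are placeholders, not data from the patent.

```python
# Minimal sketch (assumed correspondences): estimate a metric camera pose from known
# 3D landmarks of the localized topological node and their 2D image observations.
import numpy as np
import cv2

object_points = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=np.float32)
image_points = np.array([[320, 240], [420, 238], [424, 330], [318, 334]], dtype=np.float32)
camera_matrix = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, None)
# rvec/tvec give the camera's metric pose relative to the node's landmark frame.
```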
  • Patent number: 11410667
    Abstract: A speech conversion system is described that includes a hierarchical encoder and a decoder. The system may comprise a processor and memory storing instructions executable by the processor. The instructions may comprise to: using a second recurrent neural network (RNN) (GRU1) and a first set of encoder vectors derived from a spectrogram as input to the second RNN, determine a second concatenated sequence; determine a second set of encoder vectors by doubling a stack height and halving a length of the second concatenated sequence; using the second set of encoder vectors, determine a third set of encoder vectors; and decode the third set of encoder vectors using an attention block.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: August 9, 2022
    Assignee: Ford Global Technologies, LLC
    Inventors: Punarjay Chakravarty, Lisa Scaria, Ryan Burke, Francois Charette, Praveen Narayanan
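The hierarchical encoder above repeatedly doubles the stack height while halving the sequence length between recurrent levels. The sketch below shows only that reshaping step on a NumPy array standing in for GRU outputs; the sizes are illustrative and the surrounding recurrent layers and attention block are omitted.

```python
# Minimal sketch (the reshaping step only): halve a sequence's length while doubling
# its feature depth, as between levels of a hierarchical (pyramidal) encoder.
import numpy as np

def halve_and_stack(seq):
    """(T, D) -> (T // 2, 2 * D): concatenate each pair of adjacent time steps."""
    t, d = seq.shape
    t = t - (t % 2)                          # drop a trailing step if T is odd
    return seq[:t].reshape(t // 2, 2 * d)

encoder_vectors = np.random.default_rng(2).normal(size=(80, 64))   # e.g. GRU outputs
second_level_input = halve_and_stack(encoder_vectors)              # shape (40, 128)
```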
  • Publication number: 20220188621
    Abstract: A system comprises a computer including a processor and a memory. The memory stores instructions executable by the processor to cause the processor to generate a low-level representation of the input source domain data; generate an embedding of the input source domain data; generate a high-level feature representation of features of the input source domain data; generate output target domain data in the target domain that includes semantics corresponding to the input source domain data by processing the high-level feature representation of the features of the input source domain data using a domain low-level decoder neural network layer that generates data from the target domain; and modify a loss function such that latent attributes corresponding to the embedding are selected from a same probability distribution.
    Type: Application
    Filed: December 10, 2020
    Publication date: June 16, 2022
    Applicant: Ford Global Technologies, LLC
    Inventors: Praveen Narayanan, Nikita Jaipuria, Apurbaa Mallik, Punarjay Chakravarty, Ganesh Kumar
  • Publication number: 20220009498
    Abstract: A system includes a computer including a processor and a memory, the memory storing instructions executable by the processor to generate a synthetic image by adjusting respective color values of one or more pixels of a reference image based on a specified meteorological optical range from a vehicle sensor to simulated fog, and input the synthetic image to a machine learning program to train the machine learning program to identify a meteorological optical range from the vehicle sensor to actual fog.
    Type: Application
    Filed: July 8, 2020
    Publication date: January 13, 2022
    Applicant: Ford Global Technologies, LLC
    Inventors: Apurbaa Mallik, Kaushik Balakrishnan, Vijay Nagasamy, Praveen Narayanan, Sowndarya Sundar
  • Publication number: 20220005457
    Abstract: An end-to-end deep-learning-based system that can solve both ASR and TTS problems jointly using unpaired text and audio samples is disclosed herein. An adversarially-trained approach is used to generate a more robust independent TTS neural network and an ASR neural network that can be deployed individually or simultaneously. The process for training the neural networks includes generating an audio sample from a text sample using the TTS neural network, then feeding the generated audio sample into the ASR neural network to regenerate the text. The difference between the regenerated text and the original text is used as a first loss for training the neural networks. A similar process is used for an audio sample. The difference between the regenerated audio and the original audio is used as a second loss. Text and audio discriminators are similarly used on the output of the neural network to generate additional losses for training.
    Type: Application
    Filed: July 2, 2020
    Publication date: January 6, 2022
    Applicant: Ford Global Technologies, LLC
    Inventors: Kaushik Balakrishnan, Praveen Narayanan, Francois Charette
  • Patent number: 11210535
    Abstract: A system comprises a computer that includes a processor and a memory.
    Type: Grant
    Filed: August 17, 2020
    Date of Patent: December 28, 2021
    Assignee: Ford Global Technologies, LLC
    Inventors: Gaurav Pandey, Ganesh Kumar, Praveen Narayanan, Jinesh Jain
  • Publication number: 20210397198
    Abstract: A computer includes a processor and a memory storing instructions executable by the processor to receive an image including a physical landmark, output a plurality of synthetic images, wherein each synthetic image is generated by simulating at least one ambient feature in the received image, generate respective feature vectors for each of the plurality of synthetic images, and actuate one or more vehicle components upon identifying the physical landmark in a second received image based on a similarity measure between the feature vectors of the synthetic images and a feature vector of the second received image, the similarity measure being one of a probability distribution difference or a statistical distance.
    Type: Application
    Filed: June 18, 2020
    Publication date: December 23, 2021
    Applicant: Ford Global Technologies, LLC
    Inventors: Iman Soltani Bozchalooi, Francois Charette, Praveen Narayanan, Ryan Burke, Devesh Upadhyay, Dimitar Petrov Filev
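The final entry compares feature vectors of synthetic landmark images against the feature vector of a newly received image using a probability distribution difference or a statistical distance. The sketch below assumes a Jensen-Shannon distance and a made-up threshold; in the described system a match would trigger actuation of one or more vehicle components.

```python
# Minimal sketch (an assumed statistical distance): compare a query feature vector
# against those of the synthetic landmark images and declare a match when the
# smallest distance falls below a threshold.
import numpy as np

def js_distance(p, q, eps=1e-12):
    """Jensen-Shannon distance between two non-negative feature vectors treated as
    probability distributions."""
    p = p / (p.sum() + eps); q = q / (q.sum() + eps)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

def landmark_detected(query_vec, synthetic_vecs, threshold=0.2):
    return min(js_distance(query_vec, v) for v in synthetic_vecs) < threshold

rng = np.random.default_rng(3)
synthetic = [rng.random(256) for _ in range(5)]     # feature vectors of synthetic images
print(landmark_detected(rng.random(256), synthetic))
```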