Patents by Inventor Praveen Narayanan
Praveen Narayanan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12061253
Abstract: A computer includes a processor and a memory storing instructions executable by the processor to receive radar data from a radar, the radar data including radar pixels having respective measured depths; receive camera data from a camera, the camera data including an image frame including camera pixels; map the radar pixels to the image frame; generate respective regions of the image frame surrounding the respective radar pixels; for each region, determine confidence scores for the respective camera pixels in that region; output a depth map of projected depths for the respective camera pixels based on the confidence scores; and operate a vehicle including the radar and the camera based on the depth map. The confidence scores indicate confidence in applying the measured depth of the radar pixel for that region to the respective camera pixels.
Type: Grant
Filed: June 3, 2021
Date of Patent: August 13, 2024
Assignees: Ford Global Technologies, LLC; Board of Trustees of Michigan State University
Inventors: Yunfei Long, Daniel Morris, Xiaoming Liu, Marcos Paul Gerardo Castro, Praveen Narayanan, Punarjay Chakravarty
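The confidence-weighted depth projection described in the abstract can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the patented method: all names (`project_depths`, the region and score containers) are hypothetical, and the real system would operate on dense image frames with learned confidence scores.

```python
def project_depths(radar_pixels, regions, confidences, image_shape):
    """Spread each radar pixel's measured depth to nearby camera pixels.

    radar_pixels: list of (row, col, depth) for mapped radar pixels.
    regions: per-radar-pixel list of camera-pixel (row, col) coords.
    confidences: per-region dict {(row, col): score} (hypothetical format).
    """
    rows, cols = image_shape
    depth = [[0.0] * cols for _ in range(rows)]
    best = [[0.0] * cols for _ in range(rows)]
    for (_, _, d), region, conf in zip(radar_pixels, regions, confidences):
        for (r, c) in region:
            s = conf[(r, c)]
            if s > best[r][c]:  # keep the depth with the highest confidence
                best[r][c] = s
                depth[r][c] = d
    return depth
```

Where two regions overlap, a camera pixel inherits the depth of whichever radar pixel scored it more confidently.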
-
Publication number: 20240264276
Abstract: A computer that includes a processor and a memory, the memory including instructions executable by the processor to generate radar data by projecting radar returns of objects within a scene onto an image plane of camera data of the scene, based on extrinsic and intrinsic parameters of a camera and extrinsic parameters of a radar sensor. The image data can be received at an image channel of an image/radar convolutional neural network (CNN), and the radar data can be received at a radar channel of the image/radar CNN, wherein features are transferred from the image channel to the radar channel at multiple stages. Image object features and image confidence scores can be determined by the image channel, and radar object features and radar confidence scores by the radar channel. The image object features can be combined with the radar object features using a weighted sum.
Type: Application
Filed: January 26, 2024
Publication date: August 8, 2024
Applicants: Ford Global Technologies, LLC; Board of Trustees of Michigan State University
Inventors: Yunfei Long, Daniel Morris, Abhinav Kumar, Xiaoming Liu, Marcos Paul Gerardo Castro, Punarjay Chakravarty, Praveen Narayanan
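The weighted-sum combination of image and radar object features can be sketched as a confidence-normalized blend. This is an illustrative sketch only; the function name and the assumption that weights are the per-object confidence scores are mine, not from the publication.

```python
def fuse(image_feats, radar_feats, image_conf, radar_conf):
    """Blend per-object feature vectors by normalized confidence weights.

    image_feats / radar_feats: lists of feature vectors, one per object.
    image_conf / radar_conf: scalar confidence per object (assumed weighting).
    """
    fused = []
    for fi, fr, ci, cr in zip(image_feats, radar_feats, image_conf, radar_conf):
        w = ci + cr  # normalizer so the weights sum to one
        fused.append([(ci * a + cr * b) / w for a, b in zip(fi, fr)])
    return fused
```

With equal confidences this reduces to a plain average; a zero radar confidence passes the image features through unchanged.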
-
Patent number: 11975738
Abstract: A first image can be acquired from a first sensor included in a vehicle and input to a deep neural network to determine a first bounding box for a first object. A second image can be acquired from the first sensor. Latitudinal and longitudinal motion data can be input from second sensors included in the vehicle, corresponding to the time between inputting the first image and inputting the second image. A second bounding box can be determined by translating the first bounding box based on the latitudinal and longitudinal motion data. The second image can be cropped based on the second bounding box. The cropped second image can be input to the deep neural network to detect a second object. The first image, the first bounding box, the second image, and the second bounding box can be output.
Type: Grant
Filed: June 3, 2021
Date of Patent: May 7, 2024
Assignee: Ford Global Technologies, LLC
Inventors: Gurjeet Singh, Apurbaa Mallik, Rohun Atluri, Vijay Nagasamy, Praveen Narayanan
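The translate-then-crop step can be sketched as below, assuming the motion data has already been converted to a pixel shift (that conversion, and the names `translate_box`/`crop`, are assumptions for illustration).

```python
def translate_box(box, dx, dy):
    """Shift a bounding box (x1, y1, x2, y2) by a pixel offset derived
    from the vehicle's latitudinal/longitudinal motion."""
    x1, y1, x2, y2 = box
    return (x1 + dx, y1 + dy, x2 + dx, y2 + dy)

def crop(image, box):
    """Crop a row-major nested-list image to the translated box."""
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in image[y1:y2]]
```

The cropped region would then be fed back to the detector, narrowing the search to where the tracked object is expected.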
-
Patent number: 11772656
Abstract: A system includes a computer including a processor and a memory, the memory storing instructions executable by the processor to generate a synthetic image by adjusting respective color values of one or more pixels of a reference image based on a specified meteorological optical range from a vehicle sensor to simulated fog, and input the synthetic image to a machine learning program to train the machine learning program to identify a meteorological optical range from the vehicle sensor to actual fog.
Type: Grant
Filed: July 8, 2020
Date of Patent: October 3, 2023
Assignee: Ford Global Technologies, LLC
Inventors: Apurbaa Mallik, Kaushik Balakrishnan, Vijay Nagasamy, Praveen Narayanan, Sowndarya Sundar
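One common way to adjust pixel colors from a meteorological optical range (MOR) is the Koschmieder scattering model, where the extinction coefficient is approximately 3.912 / MOR. The patent's exact adjustment may differ; this is a sketch of that standard model, with hypothetical names and a per-pixel depth assumed known.

```python
import math

def add_fog(pixel, depth_m, mor_m, airlight=255.0):
    """Blend a pixel toward the fog airlight based on scene depth and MOR.

    Koschmieder approximation (an assumption, not necessarily the
    patent's formula): transmission t = exp(-(3.912 / MOR) * depth).
    """
    beta = 3.912 / mor_m           # extinction coefficient
    t = math.exp(-beta * depth_m)  # fraction of the original color kept
    return tuple(c * t + airlight * (1.0 - t) for c in pixel)
```

At zero depth the pixel is unchanged; far beyond the MOR it converges to the airlight color, which is the visual definition of fog density.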
-
Patent number: 11720995
Abstract: A computer, including a processor and a memory, the memory including instructions to be executed by the processor to input a fisheye image to a vector quantized variational autoencoder. The vector quantized variational autoencoder can encode the fisheye image to first latent variables based on an encoder. The vector quantized variational autoencoder can quantize the first latent variables to generate second latent variables based on a dictionary of embeddings. The vector quantized variational autoencoder can decode the second latent variables to a rectified rectilinear image using a decoder and output the rectified rectilinear image.
Type: Grant
Filed: June 4, 2021
Date of Patent: August 8, 2023
Assignee: Ford Global Technologies, LLC
Inventors: Praveen Narayanan, Ramchandra Ganesh Karandikar, Nikita Jaipuria, Punarjay Chakravarty, Ganesh Kumar
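The quantization step of a VQ-VAE, where first latent variables are snapped to a dictionary of embeddings, can be sketched as a nearest-neighbor lookup. A minimal sketch with toy list-based vectors; the function and variable names are illustrative.

```python
def quantize(latents, codebook):
    """Replace each latent vector with its nearest codebook embedding
    (squared Euclidean distance), yielding the second latent variables."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(codebook, key=lambda e: dist2(z, e)) for z in latents]
```

The decoder then sees only dictionary entries, which is what makes the latent space discrete.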
-
Publication number: 20230139013
Abstract: An image including a vehicle seat and a seatbelt webbing for the vehicle seat is obtained. The image is input to a neural network trained to, upon determining a presence of an occupant in the vehicle seat, output a physical state of the occupant and a seatbelt webbing state. Respective classifications for the physical state and the seatbelt webbing state are determined. The classifications are one of preferred or nonpreferred. A vehicle component is actuated based on the classification for at least one of the physical state of the occupant or the seatbelt webbing state being nonpreferred.
Type: Application
Filed: November 4, 2021
Publication date: May 4, 2023
Applicant: Ford Global Technologies, LLC
Inventors: Kaushik Balakrishnan, Praveen Narayanan, Justin Miller, Devesh Upadhyay
-
Patent number: 11625856
Abstract: Example localization systems and methods are described. In one implementation, a method receives a camera image from a vehicle camera and cleans the camera image using a VAE-GAN (variational autoencoder combined with a generative adversarial network) algorithm. The method further receives a vector map related to an area proximate the vehicle and generates a synthetic image based on the vector map. The method then localizes the vehicle based on the cleaned camera image and the synthetic image.
Type: Grant
Filed: January 27, 2021
Date of Patent: April 11, 2023
Assignee: Ford Global Technologies, LLC
Inventors: Sarah Houts, Praveen Narayanan, Punarjay Chakravarty, Gaurav Pandey, Graham Mills, Tyler Reid
-
Patent number: 11620475
Abstract: The present disclosure describes a system and a method that include receiving, at a decoder, a latent representation of an image having a first domain, and generating a reconstructed image having a second domain, wherein the reconstructed image is generated based on the latent representation.
Type: Grant
Filed: March 25, 2020
Date of Patent: April 4, 2023
Assignee: Ford Global Technologies, LLC
Inventors: Praveen Narayanan, Nikita Jaipuria, Punarjay Chakravarty, Vidya Nariyambut murali
-
Patent number: 11613249
Abstract: A method for training an autonomous vehicle to reach a target location. The method includes detecting the state of an autonomous vehicle in a simulated environment and using a first neural network to navigate the vehicle from an initial location to a target destination. During the training phase, a second neural network may reward the first neural network for a desired action taken by the autonomous vehicle, and may penalize the first neural network for an undesired action taken by the autonomous vehicle. A corresponding system and computer program product are also disclosed and claimed herein.
Type: Grant
Filed: April 3, 2018
Date of Patent: March 28, 2023
Assignee: Ford Global Technologies, LLC
Inventors: Kaushik Balakrishnan, Praveen Narayanan, Mohsen Lakehal-ayat
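The reward/penalty signal from the second network can be sketched with a fixed rule standing in for the learned network: reward progress toward the target, penalize regress or collisions. The rule and its magnitudes are illustrative assumptions, not the patent's mechanism.

```python
def reward(prev_dist, new_dist, collided):
    """Toy stand-in for the second network's training signal.

    prev_dist / new_dist: distance to the target before and after the
    action; collided: whether the action caused a collision (assumed
    to be an undesired action).
    """
    if collided:
        return -10.0  # strong penalty for an undesired action
    return 1.0 if new_dist < prev_dist else -1.0  # progress vs. regress
```

In the patent this signal shapes the first (navigation) network's updates during simulated training.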
-
Patent number: 11574622
Abstract: An end-to-end deep-learning-based system that can solve both ASR and TTS problems jointly using unpaired text and audio samples is disclosed herein. An adversarially-trained approach is used to generate a more robust independent TTS neural network and an ASR neural network that can be deployed individually or simultaneously. The process for training the neural networks includes generating an audio sample from a text sample using the TTS neural network, then feeding the generated audio sample into the ASR neural network to regenerate the text. The difference between the regenerated text and the original text is used as a first loss for training the neural networks. A similar process is used for an audio sample. The difference between the regenerated audio and the original audio is used as a second loss. Text and audio discriminators are similarly used on the output of the neural network to generate additional losses for training.
Type: Grant
Filed: July 2, 2020
Date of Patent: February 7, 2023
Assignee: Ford Global Technologies, LLC
Inventors: Kaushik Balakrishnan, Praveen Narayanan, Francois Charette
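The two cycle-consistency losses described in the abstract (text → TTS → ASR → text, and audio → ASR → TTS → audio) can be sketched generically, with the models and distance function passed in as callables. The function name and signature are assumptions for illustration.

```python
def cycle_losses(text, audio, tts, asr, dist):
    """Compute the two unpaired-training losses from the abstract.

    tts: text -> audio model; asr: audio -> text model;
    dist: distance between an original and a regenerated sample.
    """
    text_loss = dist(asr(tts(text)), text)    # first loss: text round-trip
    audio_loss = dist(tts(asr(audio)), audio)  # second loss: audio round-trip
    return text_loss, audio_loss
```

Because each loss only compares a sample with its own round-trip reconstruction, no paired text/audio data is required, which is the point of the approach.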
-
Patent number: 11574463
Abstract: The present disclosure describes a system and a method. In an example implementation, the system and the method generate, at a first encoder neural network, an encoded representation of image features of an image received from a vehicle sensor of a vehicle. The system and method can also generate, at a second encoder neural network, an encoded representation of map tile features and generate, at a decoder neural network, a semantically segmented map tile based on the encoded representation of image features, the encoded representation of map tile features, and Global Positioning System (GPS) coordinates of the vehicle. The semantically segmented map tile includes a location of the vehicle and detected objects depicted within the image with respect to the vehicle.
Type: Grant
Filed: February 24, 2020
Date of Patent: February 7, 2023
Assignee: Ford Global Technologies, LLC
Inventors: Gaurav Pandey, Nikita Jaipuria, Praveen Narayanan, Punarjay Chakravarty
-
Publication number: 20230023347
Abstract: A computer includes a processor and a memory storing instructions executable by the processor to receive radar data including a radar pixel having a radial velocity from a radar; receive camera data including an image frame including camera pixels from a camera; map the radar pixel to the image frame; generate a region of the image frame surrounding the radar pixel; determine association scores for the respective camera pixels in the region; select a first camera pixel of the camera pixels from the region, the first camera pixel having a greatest association score of the association scores; and calculate a full velocity of the radar pixel using the radial velocity of the radar pixel and a first optical flow at the first camera pixel. The association scores indicate a likelihood that the respective camera pixels correspond to a same point in an environment as the radar pixel.
Type: Application
Filed: July 23, 2021
Publication date: January 26, 2023
Applicants: Ford Global Technologies, LLC; Board of Trustees of Michigan State University
Inventors: Xiaoming Liu, Daniel Morris, Yunfei Long, Marcos Paul Gerardo Castro, Punarjay Chakravarty, Praveen Narayanan
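The core geometry, recovering a full velocity from the radar's radial component plus a tangential component inferred from optical flow, can be sketched in 2-D. The real formulation is in 3-D with camera projection; this reduced version and its names are assumptions for illustration.

```python
def full_velocity(radial_speed, unit_radial, tangential_speed):
    """Recombine velocity components in 2-D.

    radial_speed: measured by the radar along unit_radial.
    tangential_speed: the perpendicular component, assumed already
    recovered from the optical flow at the associated camera pixel.
    """
    ux, uy = unit_radial
    tx, ty = -uy, ux  # unit vector perpendicular to the radial direction
    return (radial_speed * ux + tangential_speed * tx,
            radial_speed * uy + tangential_speed * ty)
```

Radar alone gives only the radial projection of motion; the camera's optical flow supplies the missing perpendicular component, which is why the pixel association step matters.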
-
Publication number: 20220392014
Abstract: A computer, including a processor and a memory, the memory including instructions to be executed by the processor to input a fisheye image to a vector quantized variational autoencoder. The vector quantized variational autoencoder can encode the fisheye image to first latent variables based on an encoder. The vector quantized variational autoencoder can quantize the first latent variables to generate second latent variables based on a dictionary of embeddings. The vector quantized variational autoencoder can decode the second latent variables to a rectified rectilinear image using a decoder and output the rectified rectilinear image.
Type: Application
Filed: June 4, 2021
Publication date: December 8, 2022
Applicant: Ford Global Technologies, LLC
Inventors: Praveen Narayanan, Ramchandra Ganesh Karandikar, Nikita Jaipuria, Punarjay Chakravarty, Ganesh Kumar
-
Publication number: 20220388535
Abstract: A first image can be acquired from a first sensor included in a vehicle and input to a deep neural network to determine a first bounding box for a first object. A second image can be acquired from the first sensor. Latitudinal and longitudinal motion data can be input from second sensors included in the vehicle, corresponding to the time between inputting the first image and inputting the second image. A second bounding box can be determined by translating the first bounding box based on the latitudinal and longitudinal motion data. The second image can be cropped based on the second bounding box. The cropped second image can be input to the deep neural network to detect a second object. The first image, the first bounding box, the second image, and the second bounding box can be output.
Type: Application
Filed: June 3, 2021
Publication date: December 8, 2022
Applicant: Ford Global Technologies, LLC
Inventors: Gurjeet Singh, Apurbaa Mallik, Rohun Atluri, Vijay Nagasamy, Praveen Narayanan
-
Publication number: 20220390591
Abstract: A computer includes a processor and a memory storing instructions executable by the processor to receive radar data from a radar, the radar data including radar pixels having respective measured depths; receive camera data from a camera, the camera data including an image frame including camera pixels; map the radar pixels to the image frame; generate respective regions of the image frame surrounding the respective radar pixels; for each region, determine confidence scores for the respective camera pixels in that region; output a depth map of projected depths for the respective camera pixels based on the confidence scores; and operate a vehicle including the radar and the camera based on the depth map. The confidence scores indicate confidence in applying the measured depth of the radar pixel for that region to the respective camera pixels.
Type: Application
Filed: June 3, 2021
Publication date: December 8, 2022
Applicants: Ford Global Technologies, LLC; Board of Trustees of Michigan State University
Inventors: Yunfei Long, Daniel Morris, Xiaoming Liu, Marcos Paul Gerardo Castro, Praveen Narayanan, Punarjay Chakravarty
-
Patent number: 11475591
Abstract: Various examples of hybrid metric-topological camera-based localization are described. A single image sensor captures an input image of an environment. The input image is localized to one of a plurality of topological nodes of a hybrid simultaneous localization and mapping (SLAM) metric-topological map which describes the environment as the plurality of topological nodes at a plurality of discrete locations in the environment. A metric pose of the image sensor can be determined using a Perspective-n-Point (PnP) projection algorithm. A convolutional neural network (CNN) can be trained to localize the input image to one of the plurality of topological nodes and a direction of traversal through the environment.
Type: Grant
Filed: November 24, 2020
Date of Patent: October 18, 2022
Assignee: Ford Global Technologies, LLC
Inventors: Punarjay Chakravarty, Tom Roussel, Praveen Narayanan, Gaurav Pandey
-
Patent number: 11410667
Abstract: A speech conversion system is described that includes a hierarchical encoder and a decoder. The system may comprise a processor and memory storing instructions executable by the processor. The instructions may be executable to: using a second recurrent neural network (RNN) (GRU1) and a first set of encoder vectors derived from a spectrogram as input to the second RNN, determine a second concatenated sequence; determine a second set of encoder vectors by doubling a stack height and halving a length of the second concatenated sequence; using the second set of encoder vectors, determine a third set of encoder vectors; and decode the third set of encoder vectors using an attention block.
Type: Grant
Filed: June 28, 2019
Date of Patent: August 9, 2022
Assignee: Ford Global Technologies, LLC
Inventors: Punarjay Chakravarty, Lisa Scaria, Ryan Burke, Francois Charette, Praveen Narayanan
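The "double a stack height and halve a length" step of a hierarchical encoder is typically implemented by concatenating pairs of adjacent timesteps, so the sequence gets shorter while each vector gets wider. A toy sketch with list-based vectors; the function name is an assumption.

```python
def halve_and_stack(seq):
    """Concatenate adjacent timestep vectors: the sequence length halves
    while the per-step feature dimension (stack height) doubles."""
    return [seq[i] + seq[i + 1] for i in range(0, len(seq) - 1, 2)]
```

Applying this between RNN stages lets later layers attend over progressively coarser time scales, which is the usual motivation for hierarchical speech encoders.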
-
Publication number: 20220188621
Abstract: A system comprises a computer including a processor and a memory. The memory stores instructions executable by the processor to cause the processor to: generate a low-level representation of the input source domain data; generate an embedding of the input source domain data; generate a high-level feature representation of features of the input source domain data; generate output target domain data in the target domain that includes semantics corresponding to the input source domain data, by processing the high-level feature representation of the features of the input source domain data using a domain low-level decoder neural network layer that generates data from the target domain; and modify a loss function such that latent attributes corresponding to the embedding are selected from a same probability distribution.
Type: Application
Filed: December 10, 2020
Publication date: June 16, 2022
Applicant: Ford Global Technologies, LLC
Inventors: Praveen Narayanan, Nikita Jaipuria, Apurbaa Mallik, Punarjay Chakravarty, Ganesh Kumar
-
Publication number: 20220009498
Abstract: A system includes a computer including a processor and a memory, the memory storing instructions executable by the processor to generate a synthetic image by adjusting respective color values of one or more pixels of a reference image based on a specified meteorological optical range from a vehicle sensor to simulated fog, and input the synthetic image to a machine learning program to train the machine learning program to identify a meteorological optical range from the vehicle sensor to actual fog.
Type: Application
Filed: July 8, 2020
Publication date: January 13, 2022
Applicant: Ford Global Technologies, LLC
Inventors: Apurbaa Mallik, Kaushik Balakrishnan, Vijay Nagasamy, Praveen Narayanan, Sowndarya Sundar
-
Publication number: 20220005457
Abstract: An end-to-end deep-learning-based system that can solve both ASR and TTS problems jointly using unpaired text and audio samples is disclosed herein. An adversarially-trained approach is used to generate a more robust independent TTS neural network and an ASR neural network that can be deployed individually or simultaneously. The process for training the neural networks includes generating an audio sample from a text sample using the TTS neural network, then feeding the generated audio sample into the ASR neural network to regenerate the text. The difference between the regenerated text and the original text is used as a first loss for training the neural networks. A similar process is used for an audio sample. The difference between the regenerated audio and the original audio is used as a second loss. Text and audio discriminators are similarly used on the output of the neural network to generate additional losses for training.
Type: Application
Filed: July 2, 2020
Publication date: January 6, 2022
Applicant: Ford Global Technologies, LLC
Inventors: Kaushik Balakrishnan, Praveen Narayanan, Francois Charette