Patents by Inventor Bala Siva Sashank JUJJAVARAPU

Bala Siva Sashank JUJJAVARAPU has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240135173
    Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
    Type: Application
    Filed: June 27, 2023
    Publication date: April 25, 2024
    Inventors: Yilin Yang, Bala Siva Sashank Jujjavarapu, Pekka Janis, Zhaoting Ye, Sangmin Oh, Minwoo Park, Daniel Herrera Castro, Tommi Koivisto, David Nister
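
The "safety bounds" post-processing step in the abstract above lends itself to a short illustration. The following is a minimal sketch under stated assumptions, not the patented method: it assumes per-pixel lower and upper distance bounds (here simple constants; in practice they might be derived from camera geometry), and all function names and values are hypothetical.

```python
# Illustrative sketch only: one plausible form of a "safety bounds"
# post-processing step on DNN depth predictions. Bounds are hypothetical.
import numpy as np

def clamp_to_safety_bounds(depth_map: np.ndarray,
                           lower_bound: np.ndarray,
                           upper_bound: np.ndarray) -> np.ndarray:
    """Clamp per-pixel predicted distances into a safety-permissible range.

    depth_map:   (H, W) predicted distances in meters.
    lower_bound: (H, W) minimum physically plausible distance per pixel.
    upper_bound: (H, W) maximum distance the system is allowed to report.
    """
    return np.clip(depth_map, lower_bound, upper_bound)

# Example: predictions outside [2 m, 120 m] are pulled back into range.
pred = np.array([[0.5, 35.0, 400.0]])
lo = np.full_like(pred, 2.0)
hi = np.full_like(pred, 120.0)
print(clamp_to_safety_bounds(pred, lo, hi))  # [[  2.  35. 120.]]
```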
  • Publication number: 20240020953
    Abstract: In various examples, feature values corresponding to a plurality of views are transformed into feature values of a shared orientation or perspective to generate a feature map—such as a Bird's-Eye-View (BEV), top-down, orthogonally projected, and/or other shared perspective feature map type. Feature values corresponding to a region of a view may be transformed into feature values of the shared orientation or perspective using a neural network. The feature values may be assigned to bins of a grid, and values assigned to at least one same bin may be combined to generate one or more feature values for the feature map. To assign the transformed features to the bins, one or more portions of a view may be projected into one or more bins using polynomial curves. Radial and/or angular bins may be used to represent the environment for the feature map.
    Type: Application
    Filed: July 17, 2023
    Publication date: January 18, 2024
    Inventors: Minwoo Park, Trung Pham, Junghyun Kwon, Sayed Mehdi Sajjadi Mohammadabadi, Bor-Jeng Chen, Xin Liu, Bala Siva Sashank Jujjavarapu, Mehran Maghoumi
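
As an illustration of the binning scheme this abstract describes, the sketch below assigns transformed per-view feature vectors to radial and angular bins of a shared BEV grid and mean-combines features that land in the same bin. The bin counts, the mean-combine rule, and all names are assumptions made for the example; the patent's polynomial-curve projection is not reproduced here.

```python
# Illustrative sketch only: scattering transformed features into
# radial/angular BEV bins and combining features in the same bin.
import numpy as np

def scatter_to_bev_bins(features, ranges, azimuths,
                        n_radial=4, n_angular=8, max_range=100.0):
    """Average feature vectors that fall into the same (radial, angular) bin."""
    r_bin = np.clip((ranges / max_range * n_radial).astype(int), 0, n_radial - 1)
    a_bin = ((azimuths % (2 * np.pi)) / (2 * np.pi) * n_angular).astype(int)

    dim = features.shape[1]
    bev = np.zeros((n_radial, n_angular, dim))
    counts = np.zeros((n_radial, n_angular, 1))
    for f, r, a in zip(features, r_bin, a_bin):
        bev[r, a] += f          # accumulate features assigned to the same bin
        counts[r, a] += 1
    return bev / np.maximum(counts, 1)  # mean-combine occupied bins

feats = np.random.rand(5, 16)                  # 5 transformed feature vectors
rng = np.array([3.0, 15.0, 15.5, 60.0, 99.0])  # meters from ego
az = np.array([0.1, 1.2, 1.25, 3.0, 5.5])      # radians
print(scatter_to_bev_bins(feats, rng, az).shape)  # (4, 8, 16)
```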
  • Publication number: 20230334317
    Abstract: In various examples, a deep neural network (DNN) is trained—using image data alone—to accurately predict distances to objects, obstacles, and/or a detected free-space boundary. The DNN may be trained with ground truth data that is generated using sensor data representative of motion of an ego-vehicle and/or sensor data from any number of depth predicting sensors—such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. The DNN may be trained using two or more loss functions each corresponding to a particular portion of the environment that depth is predicted for, such that—in deployment—more accurate depth estimates for objects, obstacles, and/or the detected free-space boundary are computed by the DNN.
    Type: Application
    Filed: June 20, 2023
    Publication date: October 19, 2023
    Inventors: Junghyun Kwon, Yilin Yang, Bala Siva Sashank Jujjavarapu, Zhaoting Ye, Sangmin Oh, Minwoo Park, David Nister
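
The abstract above describes training with two or more loss functions, each tied to a particular portion of the environment. The sketch below is one plausible reading under stated assumptions: separate L1 terms over an object mask and a free-space-boundary mask, combined with per-region weights. The masks, loss choices, and weights are hypothetical, not the patented scheme.

```python
# Illustrative sketch only: combining region-specific depth loss terms.
import numpy as np

def region_weighted_depth_loss(pred, target, object_mask, boundary_mask,
                               w_obj=1.0, w_bnd=2.0):
    """L1 loss on object pixels plus a separately weighted L1 loss on
    free-space-boundary pixels; other pixels contribute nothing."""
    err = np.abs(pred - target)
    obj_loss = (err * object_mask).sum() / max(object_mask.sum(), 1)
    bnd_loss = (err * boundary_mask).sum() / max(boundary_mask.sum(), 1)
    return w_obj * obj_loss + w_bnd * bnd_loss

pred = np.array([[10.0, 20.0], [30.0, 40.0]])
target = np.array([[12.0, 20.0], [33.0, 40.0]])
obj = np.array([[1, 0], [0, 0]])   # pixel belongs to an object
bnd = np.array([[0, 0], [1, 0]])   # pixel lies on the free-space boundary
print(region_weighted_depth_loss(pred, target, obj, bnd))  # 1*2.0 + 2*3.0 = 8.0
```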
  • Patent number: 11790230
    Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
    Type: Grant
    Filed: April 18, 2022
    Date of Patent: October 17, 2023
    Assignee: NVIDIA Corporation
    Inventors: Yilin Yang, Bala Siva Sashank Jujjavarapu, Pekka Janis, Zhaoting Ye, Sangmin Oh, Minwoo Park, Daniel Herrera Castro, Tommi Koivisto, David Nister
  • Patent number: 11769052
    Abstract: In various examples, a deep neural network (DNN) is trained—using image data alone—to accurately predict distances to objects, obstacles, and/or a detected free-space boundary. The DNN may be trained with ground truth data that is generated using sensor data representative of motion of an ego-vehicle and/or sensor data from any number of depth predicting sensors—such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. The DNN may be trained using two or more loss functions each corresponding to a particular portion of the environment that depth is predicted for, such that—in deployment—more accurate depth estimates for objects, obstacles, and/or the detected free-space boundary are computed by the DNN. In some embodiments, a sampling algorithm may be used to sample depth values corresponding to an input resolution of the DNN from a predicted depth map of the DNN at an output resolution of the DNN.
    Type: Grant
    Filed: September 29, 2021
    Date of Patent: September 26, 2023
    Assignee: NVIDIA Corporation
    Inventors: Junghyun Kwon, Yilin Yang, Bala Siva Sashank Jujjavarapu, Zhaoting Ye, Sangmin Oh, Minwoo Park, David Nister
  • Publication number: 20220253706
    Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
    Type: Application
    Filed: April 18, 2022
    Publication date: August 11, 2022
    Inventors: Yilin Yang, Bala Siva Sashank Jujjavarapu, Pekka Janis, Zhaoting Ye, Sangmin Oh, Minwoo Park, Daniel Herrera Castro, Tommi Koivisto, David Nister
  • Patent number: 11308338
    Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
    Type: Grant
    Filed: December 27, 2019
    Date of Patent: April 19, 2022
    Assignee: NVIDIA Corporation
    Inventors: Yilin Yang, Bala Siva Sashank Jujjavarapu, Pekka Janis, Zhaoting Ye, Sangmin Oh, Minwoo Park, Daniel Herrera Castro, Tommi Koivisto, David Nister
  • Publication number: 20220019893
    Abstract: In various examples, a deep neural network (DNN) is trained—using image data alone—to accurately predict distances to objects, obstacles, and/or a detected free-space boundary. The DNN may be trained with ground truth data that is generated using sensor data representative of motion of an ego-vehicle and/or sensor data from any number of depth predicting sensors—such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. The DNN may be trained using two or more loss functions each corresponding to a particular portion of the environment that depth is predicted for, such that—in deployment—more accurate depth estimates for objects, obstacles, and/or the detected free-space boundary are computed by the DNN. In some embodiments, a sampling algorithm may be used to sample depth values corresponding to an input resolution of the DNN from a predicted depth map of the DNN at an output resolution of the DNN.
    Type: Application
    Filed: September 29, 2021
    Publication date: January 20, 2022
    Inventors: Junghyun Kwon, Yilin Yang, Bala Siva Sashank Jujjavarapu, Zhaoting Ye, Sangmin Oh, Minwoo Park, David Nister
  • Patent number: 11182916
    Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
    Type: Grant
    Filed: December 27, 2019
    Date of Patent: November 23, 2021
    Assignee: NVIDIA Corporation
    Inventors: Yilin Yang, Bala Siva Sashank Jujjavarapu, Pekka Janis, Zhaoting Ye, Sangmin Oh, Minwoo Park, Daniel Herrera Castro, Tommi Koivisto, David Nister
  • Patent number: 11170299
    Abstract: In various examples, a deep neural network (DNN) is trained—using image data alone—to accurately predict distances to objects, obstacles, and/or a detected free-space boundary. The DNN may be trained with ground truth data that is generated using sensor data representative of motion of an ego-vehicle and/or sensor data from any number of depth predicting sensors—such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. The DNN may be trained using two or more loss functions each corresponding to a particular portion of the environment that depth is predicted for, such that—in deployment—more accurate depth estimates for objects, obstacles, and/or the detected free-space boundary are computed by the DNN. In some embodiments, a sampling algorithm may be used to sample depth values corresponding to an input resolution of the DNN from a predicted depth map of the DNN at an output resolution of the DNN.
    Type: Grant
    Filed: March 9, 2020
    Date of Patent: November 9, 2021
    Assignee: NVIDIA Corporation
    Inventors: Junghyun Kwon, Yilin Yang, Bala Siva Sashank Jujjavarapu, Zhaoting Ye, Sangmin Oh, Minwoo Park, David Nister
  • Publication number: 20210272304
    Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
    Type: Application
    Filed: December 27, 2019
    Publication date: September 2, 2021
    Inventors: Yilin Yang, Bala Siva Sashank Jujjavarapu, Pekka Janis, Zhaoting Ye, Sangmin Oh, Minwoo Park, Daniel Herrera Castro, Tommi Koivisto, David Nister
  • Publication number: 20200218979
    Abstract: In various examples, a deep neural network (DNN) is trained—using image data alone—to accurately predict distances to objects, obstacles, and/or a detected free-space boundary. The DNN may be trained with ground truth data that is generated using sensor data representative of motion of an ego-vehicle and/or sensor data from any number of depth predicting sensors—such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. The DNN may be trained using two or more loss functions each corresponding to a particular portion of the environment that depth is predicted for, such that—in deployment—more accurate depth estimates for objects, obstacles, and/or the detected free-space boundary are computed by the DNN. In some embodiments, a sampling algorithm may be used to sample depth values corresponding to an input resolution of the DNN from a predicted depth map of the DNN at an output resolution of the DNN.
    Type: Application
    Filed: March 9, 2020
    Publication date: July 9, 2020
    Inventors: Junghyun Kwon, Yilin Yang, Bala Siva Sashank Jujjavarapu, Zhaoting Ye, Sangmin Oh, Minwoo Park, David Nister
  • Publication number: 20200210726
    Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
    Type: Application
    Filed: December 27, 2019
    Publication date: July 2, 2020
    Inventors: Yilin Yang, Bala Siva Sashank Jujjavarapu, Pekka Janis, Zhaoting Ye, Sangmin Oh, Minwoo Park, Daniel Herrera Castro, Tommi Koivisto, David Nister
  • Patent number: 10567723
    Abstract: A system and a method for detecting light sources in a multi-illuminated environment using a composite red-green-blue-infrared (RGB-IR) sensor are provided. The method comprises detecting, by the composite RGB-IR sensor, a multi-illuminant area using a visible raw image and a near-infrared (NIR) raw image of a composite RGB-IR image, dividing each of the visible raw image and the NIR raw image into a plurality of grid samples, extracting a plurality of illuminant features based on a green/NIR pixel ratio and a blue/NIR pixel ratio, estimating at least one illuminant feature for each grid sample by passing each grid sample through a convolutional neural network (CNN) module using the extracted plurality of illuminant features, and smoothing each grid sample based on the estimated at least one illuminant feature.
    Type: Grant
    Filed: August 13, 2018
    Date of Patent: February 18, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Prajit Sivasankaran Nair, Bala Siva Sashank Jujjavarapu, Narasimha Gopalakrishna Pai, Akshay Kumar
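
The per-grid illuminant features named in this abstract (green/NIR and blue/NIR pixel ratios) can be shown concretely. The sketch below divides the image planes into grid samples and computes the two ratios per sample, as a pre-step before a CNN. The grid size and the use of a per-sample mean are assumptions made for this example.

```python
# Illustrative sketch only: per-grid-sample green/NIR and blue/NIR ratios.
import numpy as np

def grid_illuminant_features(green, blue, nir, grid=4):
    """Split (H, W) planes into grid x grid samples and return per-sample
    mean green/NIR and blue/NIR ratios, shape (grid, grid, 2)."""
    h, w = green.shape
    gh, gw = h // grid, w // grid
    feats = np.zeros((grid, grid, 2))
    for i in range(grid):
        for j in range(grid):
            sl = (slice(i * gh, (i + 1) * gh), slice(j * gw, (j + 1) * gw))
            nir_mean = max(nir[sl].mean(), 1e-6)  # avoid divide-by-zero
            feats[i, j, 0] = green[sl].mean() / nir_mean
            feats[i, j, 1] = blue[sl].mean() / nir_mean
    return feats

rng = np.random.default_rng(0)
g, b, n = (rng.uniform(0, 1, (64, 64)) for _ in range(3))
print(grid_illuminant_features(g, b, n).shape)  # (4, 4, 2)
```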
  • Publication number: 20190052855
    Abstract: A system and a method for detecting light sources in a multi-illuminated environment using a composite red-green-blue-infrared (RGB-IR) sensor are provided. The method comprises detecting, by the composite RGB-IR sensor, a multi-illuminant area using a visible raw image and a near-infrared (NIR) raw image of a composite RGB-IR image, dividing each of the visible raw image and the NIR raw image into a plurality of grid samples, extracting a plurality of illuminant features based on a green/NIR pixel ratio and a blue/NIR pixel ratio, estimating at least one illuminant feature for each grid sample by passing each grid sample through a convolutional neural network (CNN) module using the extracted plurality of illuminant features, and smoothing each grid sample based on the estimated at least one illuminant feature.
    Type: Application
    Filed: August 13, 2018
    Publication date: February 14, 2019
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Prajit Sivasankaran Nair, Bala Siva Sashank Jujjavarapu, Narasimha Gopalakrishna Pai, Akshay Kumar