Patents by Inventor Bala Siva Sashank Jujjavarapu
Bala Siva Sashank Jujjavarapu has filed for patents to protect the following inventions. The listing below includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240135173
Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
Type: Application
Filed: June 27, 2023
Publication date: April 25, 2024
Inventors: Yilin Yang, Bala Siva Sashank Jujjavarapu, Pekka Janis, Zhaoting Ye, Sangmin Oh, Minwoo Park, Daniel Herrera Castro, Tommi Koivisto, David Nister
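The post-processing safety bounds operation mentioned in this abstract can be pictured with a minimal sketch. The function name, the NumPy-based clamping, and the example bounds below are illustrative assumptions, not the claimed implementation:

```python
import numpy as np

def apply_safety_bounds(pred_distances, lower_bounds, upper_bounds):
    """Clamp predicted distances into a safety-permissible range.

    All three arguments are arrays of the same shape; in practice the
    bounds might be derived per prediction (e.g., from scene geometry).
    """
    return np.clip(pred_distances, lower_bounds, upper_bounds)

# Toy example: three predicted distances with a permissible range of 2-120 m.
preds = np.array([5.0, 50.0, 500.0])
bounded = apply_safety_bounds(preds,
                              np.full_like(preds, 2.0),
                              np.full_like(preds, 120.0))
# The out-of-range 500 m prediction is clamped to the 120 m upper bound.
```

In-range predictions pass through unchanged, so the step only intervenes when the DNN's output leaves the permissible range.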
-
Publication number: 20240020953
Abstract: In various examples, feature values corresponding to a plurality of views are transformed into feature values of a shared orientation or perspective to generate a feature map—such as a Bird's-Eye-View (BEV), top-down, orthogonally projected, and/or other shared perspective feature map type. Feature values corresponding to a region of a view may be transformed into feature values using a neural network. The feature values may be assigned to bins of a grid and values assigned to at least one same bin may be combined to generate one or more feature values for the feature map. To assign the transformed features to the bins, one or more portions of a view may be projected into one or more bins using polynomial curves. Radial and/or angular bins may be used to represent the environment for the feature map.
Type: Application
Filed: July 17, 2023
Publication date: January 18, 2024
Inventors: Minwoo Park, Trung Pham, Junghyun Kwon, Sayed Mehdi Sajjadi Mohammadabadi, Bor-Jeng Chen, Xin Liu, Bala Siva Sashank Jujjavarapu, Mehran Maghoumi
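The radial/angular binning and per-bin combining described above can be sketched as follows. The grid sizes, averaging as the combine rule, and all names are assumptions for illustration rather than the method claimed in the application:

```python
import numpy as np

def bin_features_bev(points_xy, features, num_radial=4, num_angular=8,
                     max_range=40.0):
    """Assign per-point feature values to radial/angular BEV bins and
    average the values that land in the same bin (one simple combine rule).

    points_xy: (N, 2) ego-centric ground-plane coordinates.
    features:  (N,) one feature value per point.
    Returns a (num_radial, num_angular) feature map.
    """
    r = np.linalg.norm(points_xy, axis=1)
    theta = np.arctan2(points_xy[:, 1], points_xy[:, 0])  # in [-pi, pi]
    r_idx = np.minimum((r / max_range * num_radial).astype(int), num_radial - 1)
    a_idx = ((theta + np.pi) / (2 * np.pi) * num_angular).astype(int) % num_angular

    fmap = np.zeros((num_radial, num_angular))
    counts = np.zeros((num_radial, num_angular))
    for ri, ai, f in zip(r_idx, a_idx, features):
        fmap[ri, ai] += f
        counts[ri, ai] += 1
    # Average only where a bin received at least one value.
    return np.divide(fmap, counts, out=np.zeros_like(fmap), where=counts > 0)

# Two points directly ahead of the ego-vehicle fall in the same bin
# and their feature values are averaged.
fmap = bin_features_bev(np.array([[5.0, 0.0], [6.0, 0.0]]),
                        np.array([1.0, 3.0]))
```

Averaging is only one possible combine rule; max- or sum-pooling over a bin would fit the same structure.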
-
Publication number: 20230334317
Abstract: In various examples, a deep neural network (DNN) is trained—using image data alone—to accurately predict distances to objects, obstacles, and/or a detected free-space boundary. The DNN may be trained with ground truth data that is generated using sensor data representative of motion of an ego-vehicle and/or sensor data from any number of depth predicting sensors—such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. The DNN may be trained using two or more loss functions each corresponding to a particular portion of the environment that depth is predicted for, such that—in deployment—more accurate depth estimates for objects, obstacles, and/or the detected free-space boundary are computed by the DNN.
Type: Application
Filed: June 20, 2023
Publication date: October 19, 2023
Inventors: Junghyun Kwon, Yilin Yang, Bala Siva Sashank Jujjavarapu, Zhaoting Ye, Sangmin Oh, Minwoo Park, David Nister
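One simple way to picture "two or more loss functions each corresponding to a particular portion of the environment" is a weighted sum of loss terms, each restricted by a binary mask to its region. The L1 choice, the mask semantics, and the weights below are illustrative assumptions only:

```python
import numpy as np

def masked_l1_losses(pred, target, object_mask, boundary_mask,
                     w_obj=1.0, w_bnd=2.0):
    """Weighted sum of two L1 loss terms, each restricted (via a binary
    mask) to a different portion of the scene, e.g. object pixels vs.
    free-space-boundary pixels."""
    err = np.abs(pred - target)
    obj_loss = err[object_mask].mean() if object_mask.any() else 0.0
    bnd_loss = err[boundary_mask].mean() if boundary_mask.any() else 0.0
    return w_obj * obj_loss + w_bnd * bnd_loss

# Toy example: one "object" pixel with error 1 and one "boundary" pixel
# with error 3, the latter weighted twice as heavily.
loss = masked_l1_losses(np.array([2.0, 4.0]), np.array([1.0, 1.0]),
                        np.array([True, False]), np.array([False, True]))
```

Weighting regions separately lets training emphasize the portions of the scene where accurate depth matters most.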
-
Patent number: 11790230
Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
Type: Grant
Filed: April 18, 2022
Date of Patent: October 17, 2023
Assignee: NVIDIA Corporation
Inventors: Yilin Yang, Bala Siva Sashank Jujjavarapu, Pekka Janis, Zhaoting Ye, Sangmin Oh, Minwoo Park, Daniel Herrera Castro, Tommi Koivisto, David Nister
-
Patent number: 11769052
Abstract: In various examples, a deep neural network (DNN) is trained—using image data alone—to accurately predict distances to objects, obstacles, and/or a detected free-space boundary. The DNN may be trained with ground truth data that is generated using sensor data representative of motion of an ego-vehicle and/or sensor data from any number of depth predicting sensors—such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. The DNN may be trained using two or more loss functions each corresponding to a particular portion of the environment that depth is predicted for, such that—in deployment—more accurate depth estimates for objects, obstacles, and/or the detected free-space boundary are computed by the DNN. In some embodiments, a sampling algorithm may be used to sample depth values corresponding to an input resolution of the DNN from a predicted depth map of the DNN at an output resolution of the DNN.
Type: Grant
Filed: September 29, 2021
Date of Patent: September 26, 2023
Assignee: NVIDIA Corporation
Inventors: Junghyun Kwon, Yilin Yang, Bala Siva Sashank Jujjavarapu, Zhaoting Ye, Sangmin Oh, Minwoo Park, David Nister
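The sampling step mentioned at the end of this abstract (depth values at the DNN's input resolution sampled from a depth map at its output resolution) could look like the nearest-neighbor sketch below. The patent does not specify this scheme, so treat it purely as an illustration:

```python
import numpy as np

def sample_depth_to_input_resolution(depth_map, in_h, in_w):
    """Nearest-neighbor sampling (one plausible choice) from a depth map
    at the DNN's output resolution up to its input resolution."""
    out_h, out_w = depth_map.shape
    rows = np.arange(in_h) * out_h // in_h  # source row for each target row
    cols = np.arange(in_w) * out_w // in_w  # source column for each target column
    return depth_map[np.ix_(rows, cols)]

# Toy example: a 2x2 output-resolution depth map sampled at 4x4,
# replicating each source value into a 2x2 block.
up = sample_depth_to_input_resolution(np.array([[1.0, 2.0],
                                                [3.0, 4.0]]), 4, 4)
```

Bilinear interpolation would fit the same interface if smoother depth values were preferred over exact replication.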
-
Publication number: 20220253706
Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
Type: Application
Filed: April 18, 2022
Publication date: August 11, 2022
Inventors: Yilin Yang, Bala Siva Sashank Jujjavarapu, Pekka Janis, Zhaoting Ye, Sangmin Oh, Minwoo Park, Daniel Herrera Castro, Tommi Koivisto, David Nister
-
Patent number: 11308338
Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
Type: Grant
Filed: December 27, 2019
Date of Patent: April 19, 2022
Assignee: NVIDIA Corporation
Inventors: Yilin Yang, Bala Siva Sashank Jujjavarapu, Pekka Janis, Zhaoting Ye, Sangmin Oh, Minwoo Park, Daniel Herrera Castro, Tommi Koivisto, David Nister
-
Publication number: 20220019893
Abstract: In various examples, a deep neural network (DNN) is trained—using image data alone—to accurately predict distances to objects, obstacles, and/or a detected free-space boundary. The DNN may be trained with ground truth data that is generated using sensor data representative of motion of an ego-vehicle and/or sensor data from any number of depth predicting sensors—such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. The DNN may be trained using two or more loss functions each corresponding to a particular portion of the environment that depth is predicted for, such that—in deployment—more accurate depth estimates for objects, obstacles, and/or the detected free-space boundary are computed by the DNN. In some embodiments, a sampling algorithm may be used to sample depth values corresponding to an input resolution of the DNN from a predicted depth map of the DNN at an output resolution of the DNN.
Type: Application
Filed: September 29, 2021
Publication date: January 20, 2022
Inventors: Junghyun Kwon, Yilin Yang, Bala Siva Sashank Jujjavarapu, Zhaoting Ye, Sangmin Oh, Minwoo Park, David Nister
-
Patent number: 11182916
Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
Type: Grant
Filed: December 27, 2019
Date of Patent: November 23, 2021
Assignee: NVIDIA Corporation
Inventors: Yilin Yang, Bala Siva Sashank Jujjavarapu, Pekka Janis, Zhaoting Ye, Sangmin Oh, Minwoo Park, Daniel Herrera Castro, Tommi Koivisto, David Nister
-
Patent number: 11170299
Abstract: In various examples, a deep neural network (DNN) is trained—using image data alone—to accurately predict distances to objects, obstacles, and/or a detected free-space boundary. The DNN may be trained with ground truth data that is generated using sensor data representative of motion of an ego-vehicle and/or sensor data from any number of depth predicting sensors—such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. The DNN may be trained using two or more loss functions each corresponding to a particular portion of the environment that depth is predicted for, such that—in deployment—more accurate depth estimates for objects, obstacles, and/or the detected free-space boundary are computed by the DNN. In some embodiments, a sampling algorithm may be used to sample depth values corresponding to an input resolution of the DNN from a predicted depth map of the DNN at an output resolution of the DNN.
Type: Grant
Filed: March 9, 2020
Date of Patent: November 9, 2021
Assignee: NVIDIA Corporation
Inventors: Junghyun Kwon, Yilin Yang, Bala Siva Sashank Jujjavarapu, Zhaoting Ye, Sangmin Oh, Minwoo Park, David Nister
-
Publication number: 20210272304
Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
Type: Application
Filed: December 27, 2019
Publication date: September 2, 2021
Inventors: Yilin Yang, Bala Siva Sashank Jujjavarapu, Pekka Janis, Zhaoting Ye, Sangmin Oh, Minwoo Park, Daniel Herrera Castro, Tommi Koivisto, David Nister
-
Publication number: 20200218979
Abstract: In various examples, a deep neural network (DNN) is trained—using image data alone—to accurately predict distances to objects, obstacles, and/or a detected free-space boundary. The DNN may be trained with ground truth data that is generated using sensor data representative of motion of an ego-vehicle and/or sensor data from any number of depth predicting sensors—such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. The DNN may be trained using two or more loss functions each corresponding to a particular portion of the environment that depth is predicted for, such that—in deployment—more accurate depth estimates for objects, obstacles, and/or the detected free-space boundary are computed by the DNN. In some embodiments, a sampling algorithm may be used to sample depth values corresponding to an input resolution of the DNN from a predicted depth map of the DNN at an output resolution of the DNN.
Type: Application
Filed: March 9, 2020
Publication date: July 9, 2020
Inventors: Junghyun Kwon, Yilin Yang, Bala Siva Sashank Jujjavarapu, Zhaoting Ye, Sangmin Oh, Minwoo Park, David Nister
-
Publication number: 20200210726
Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
Type: Application
Filed: December 27, 2019
Publication date: July 2, 2020
Inventors: Yilin Yang, Bala Siva Sashank Jujjavarapu, Pekka Janis, Zhaoting Ye, Sangmin Oh, Minwoo Park, Daniel Herrera Castro, Tommi Koivisto, David Nister
-
Patent number: 10567723
Abstract: A system and a method for detecting light sources in a multi-illuminated environment using a composite red-green-blue-infrared (RGB-IR) sensor are provided. The method comprises detecting, by the composite RGB-IR sensor, a multi-illuminant area using a visible raw image and a near-infrared (NIR) raw image of a composite RGB-IR image, dividing each of the visible raw image and the NIR raw image into a plurality of grid samples, extracting a plurality of illuminant features based on a green/NIR pixel ratio and a blue/NIR pixel ratio, estimating at least one illuminant feature for each grid sample by passing each grid sample through a convolutional neural network (CNN) module using the extracted plurality of illuminant features, and smoothing each grid sample based on the estimated at least one illuminant feature.
Type: Grant
Filed: August 13, 2018
Date of Patent: February 18, 2020
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Prajit Sivasankaran Nair, Bala Siva Sashank Jujjavarapu, Narasimha Gopalakrishna Pai, Akshay Kumar
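The per-grid-sample green/NIR and blue/NIR ratio features described in this abstract can be sketched as follows. The grid size, mean-based ratios, and epsilon guard are illustrative assumptions, not the patented procedure:

```python
import numpy as np

def grid_illuminant_features(green, blue, nir, grid=4):
    """Compute mean green/NIR and blue/NIR ratios per grid sample.

    green, blue, nir: (H, W) planes from a composite RGB-IR capture.
    Returns a (grid, grid, 2) feature array: channel 0 is green/NIR,
    channel 1 is blue/NIR.
    """
    h, w = green.shape
    gh, gw = h // grid, w // grid
    feats = np.zeros((grid, grid, 2))
    eps = 1e-6  # guard against division by zero in dark NIR regions
    for i in range(grid):
        for j in range(grid):
            sl = (slice(i * gh, (i + 1) * gh), slice(j * gw, (j + 1) * gw))
            nir_mean = nir[sl].mean() + eps
            feats[i, j, 0] = green[sl].mean() / nir_mean
            feats[i, j, 1] = blue[sl].mean() / nir_mean
    return feats

# Toy example: uniform planes give the same ratios in every grid sample.
f = grid_illuminant_features(np.full((8, 8), 2.0),
                             np.ones((8, 8)),
                             np.ones((8, 8)), grid=2)
```

Such a per-grid feature array is a natural input shape for a CNN stage that estimates an illuminant per grid sample.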
-
Publication number: 20190052855
Abstract: A system and a method for detecting light sources in a multi-illuminated environment using a composite red-green-blue-infrared (RGB-IR) sensor are provided. The method comprises detecting, by the composite RGB-IR sensor, a multi-illuminant area using a visible raw image and a near-infrared (NIR) raw image of a composite RGB-IR image, dividing each of the visible raw image and the NIR raw image into a plurality of grid samples, extracting a plurality of illuminant features based on a green/NIR pixel ratio and a blue/NIR pixel ratio, estimating at least one illuminant feature for each grid sample by passing each grid sample through a convolutional neural network (CNN) module using the extracted plurality of illuminant features, and smoothing each grid sample based on the estimated at least one illuminant feature.
Type: Application
Filed: August 13, 2018
Publication date: February 14, 2019
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Prajit Sivasankaran Nair, Bala Siva Sashank Jujjavarapu, Narasimha Gopalakrishna Pai, Akshay Kumar