Patents by Inventor Tommi Koivisto

Tommi Koivisto has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250078532
    Abstract: In various examples, multimodal image data may be used to generate a set of top-down tile images, which are applied to a deep neural network generator architecture model to produce lane marking-specific heatmap images corresponding to the set of top-down tile images. The multimodal sensor data may include LIDAR-captured intensity channel data, LIDAR-captured feature height channel data, and optical color image channel data. The set of top-down tile images may be processed by the generator model to automatically detect lane boundaries and navigation boundaries to generate pixel-level heatmap images that may classify lane markings by marking characteristics such as line type and/or color. The generator model may comprise an encoder-decoder architecture, with multiscale feature extraction and/or context extraction functional layers intervening between the encoder model and the decoder model.
    Type: Application
    Filed: September 1, 2023
    Publication date: March 6, 2025
    Inventors: Ruiqi Zhao, Jonathan Edward Barker, Tommi Koivisto, Yu Zhang, Shuang Wu, Yixuan Lin, Ge Cong, Andrew Tao, Kezhao Chen
  • Publication number: 20240403640
    Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
    Type: Application
    Filed: August 9, 2024
    Publication date: December 5, 2024
    Inventors: Yilin Yang, Bala Siva Sashank Jujjavarapu, Pekka Janis, Zhaoting Ye, Sangmin Oh, Minwoo Park, Daniel Herrera Castro, Tommi Koivisto, David Nister
  • Patent number: 12093824
    Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
    Type: Grant
    Filed: June 28, 2023
    Date of Patent: September 17, 2024
    Assignee: NVIDIA Corporation
    Inventors: Yilin Yang, Bala Siva Sashank Jujjavarapu, Pekka Janis, Zhaoting Ye, Sangmin Oh, Minwoo Park, Daniel Herrera Castro, Tommi Koivisto, David Nister
  • Patent number: 12072442
    Abstract: In various examples, detected object data representative of locations of detected objects in a field of view may be determined. One or more clusters of the detected objects may be generated based at least in part on the locations, and features of the cluster may be determined for use as inputs to a machine learning model(s). A confidence score, computed by the machine learning model(s) based at least in part on the inputs, may be received, where the confidence score may be representative of a probability that the cluster corresponds to an object depicted at least partially in the field of view. Further examples provide approaches for determining ground truth data for training object detectors, such as for determining coverage values for ground truth objects using associated shapes, and for determining soft coverage values for ground truth objects.
    Type: Grant
    Filed: November 22, 2021
    Date of Patent: August 27, 2024
    Assignee: NVIDIA Corporation
    Inventors: Tommi Koivisto, Pekka Janis, Tero Kuosmanen, Timo Roman, Sriya Sarathy, William Zhang, Nizar Assaf, Colin Tracey
  • Publication number: 20240232616
    Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
    Type: Application
    Filed: June 28, 2023
    Publication date: July 11, 2024
    Inventors: Yilin Yang, Bala Siva Sashank Jujjavarapu, Pekka Janis, Zhaoting Ye, Sangmin Oh, Minwoo Park, Daniel Herrera Castro, Tommi Koivisto, David Nister
  • Publication number: 20240192320
    Abstract: In various examples, detected object data representative of locations of detected objects in a field of view may be determined. One or more clusters of the detected objects may be generated based at least in part on the locations, and features of the cluster may be determined for use as inputs to a machine learning model(s). A confidence score, computed by the machine learning model(s) based at least in part on the inputs, may be received, where the confidence score may be representative of a probability that the cluster corresponds to an object depicted at least partially in the field of view. Further examples provide approaches for determining ground truth data for training object detectors, such as for determining coverage values for ground truth objects using associated shapes, and for determining soft coverage values for ground truth objects.
    Type: Application
    Filed: February 20, 2024
    Publication date: June 13, 2024
    Inventors: Tommi Koivisto, Pekka Janis, Tero Kuosmanen, Timo Roman, Sriya Sarathy, William Zhang, Nizar Assaf, Colin Tracey
  • Patent number: 11995551
    Abstract: A neural network includes at least a first network layer that includes a first set of filters and a second network layer that includes a second set of filters. Notably, a filter was removed from the first network layer. A bias associated with a different filter included in the second set of filters compensates for a different bias associated with the filter that was removed from the first network layer.
    Type: Grant
    Filed: January 11, 2019
    Date of Patent: May 28, 2024
    Assignee: NVIDIA Corporation
    Inventors: Tommi Koivisto, Pekka Jänis
  • Publication number: 20240135173
    Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
    Type: Application
    Filed: June 27, 2023
    Publication date: April 25, 2024
    Inventors: Yilin Yang, Bala Siva Sashank Jujjavarapu, Pekka Janis, Zhaoting Ye, Sangmin Oh, Minwoo Park, Daniel Herrera Castro, Tommi Koivisto, David Nister
  • Patent number: 11790230
    Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
    Type: Grant
    Filed: April 18, 2022
    Date of Patent: October 17, 2023
    Assignee: NVIDIA Corporation
    Inventors: Yilin Yang, Bala Siva Sashank Jujjavarapu, Pekka Janis, Zhaoting Ye, Sangmin Oh, Minwoo Park, Daniel Herrera Castro, Tommi Koivisto, David Nister
  • Patent number: 11704890
    Abstract: In various examples, a deep neural network (DNN) is trained—using image data alone—to accurately predict distances to objects, obstacles, and/or a detected free-space boundary. The DNN may be trained with ground truth data that is generated using sensor data representative of motion of an ego-vehicle and/or sensor data from any number of depth predicting sensors—such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. The DNN may be trained using two or more loss functions each corresponding to a particular portion of the environment that depth is predicted for, such that—in deployment—more accurate depth estimates for objects, obstacles, and/or the detected free-space boundary are computed by the DNN. In some embodiments, a sampling algorithm may be used to sample depth values corresponding to an input resolution of the DNN from a predicted depth map of the DNN at an output resolution of the DNN.
    Type: Grant
    Filed: November 9, 2021
    Date of Patent: July 18, 2023
    Assignee: NVIDIA Corporation
    Inventors: Yilin Yang, Bala Siva Jujjavarapu, Pekka Janis, Zhaoting Ye, Sangmin Oh, Minwoo Park, Daniel Herrera Castro, Tommi Koivisto, David Nister
  • Publication number: 20220253706
    Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
    Type: Application
    Filed: April 18, 2022
    Publication date: August 11, 2022
    Inventors: Yilin Yang, Bala Siva Sashank Jujjavarapu, Pekka Janis, Zhaoting Ye, Sangmin Oh, Minwoo Park, Daniel Herrera Castro, Tommi Koivisto, David Nister
  • Patent number: 11308338
    Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
    Type: Grant
    Filed: December 27, 2019
    Date of Patent: April 19, 2022
    Assignee: NVIDIA Corporation
    Inventors: Yilin Yang, Bala Siva Sashank Jujjavarapu, Pekka Janis, Zhaoting Ye, Sangmin Oh, Minwoo Park, Daniel Herrera Castro, Tommi Koivisto, David Nister
  • Publication number: 20220108465
    Abstract: In various examples, a deep neural network (DNN) is trained—using image data alone—to accurately predict distances to objects, obstacles, and/or a detected free-space boundary. The DNN may be trained with ground truth data that is generated using sensor data representative of motion of an ego-vehicle and/or sensor data from any number of depth predicting sensors—such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. The DNN may be trained using two or more loss functions each corresponding to a particular portion of the environment that depth is predicted for, such that—in deployment—more accurate depth estimates for objects, obstacles, and/or the detected free-space boundary are computed by the DNN. In some embodiments, a sampling algorithm may be used to sample depth values corresponding to an input resolution of the DNN from a predicted depth map of the DNN at an output resolution of the DNN.
    Type: Application
    Filed: November 9, 2021
    Publication date: April 7, 2022
    Inventors: Yilin Yang, Bala Siva Jujjavarapu, Pekka Janis, Zhaoting Ye, Sangmin Oh, Minwoo Park, Daniel Herrera Castro, Tommi Koivisto, David Nister
  • Publication number: 20220101635
    Abstract: In various examples, detected object data representative of locations of detected objects in a field of view may be determined. One or more clusters of the detected objects may be generated based at least in part on the locations, and features of the cluster may be determined for use as inputs to a machine learning model(s). A confidence score, computed by the machine learning model(s) based at least in part on the inputs, may be received, where the confidence score may be representative of a probability that the cluster corresponds to an object depicted at least partially in the field of view. Further examples provide approaches for determining ground truth data for training object detectors, such as for determining coverage values for ground truth objects using associated shapes, and for determining soft coverage values for ground truth objects.
    Type: Application
    Filed: November 22, 2021
    Publication date: March 31, 2022
    Inventors: Tommi Koivisto, Pekka Janis, Tero Kuosmanen, Timo Roman, Sriya Sarathy, William Zhang, Nizar Assaf, Colin Tracey
  • Patent number: 11210537
    Abstract: In various examples, detected object data representative of locations of detected objects in a field of view may be determined. One or more clusters of the detected objects may be generated based at least in part on the locations, and features of the cluster may be determined for use as inputs to a machine learning model(s). A confidence score, computed by the machine learning model(s) based at least in part on the inputs, may be received, where the confidence score may be representative of a probability that the cluster corresponds to an object depicted at least partially in the field of view. Further examples provide approaches for determining ground truth data for training object detectors, such as for determining coverage values for ground truth objects using associated shapes, and for determining soft coverage values for ground truth objects.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: December 28, 2021
    Assignee: NVIDIA Corporation
    Inventors: Tommi Koivisto, Pekka Janis, Tero Kuosmanen, Timo Roman, Sriya Sarathy, William Zhang, Nizar Assaf, Colin Tracey
  • Patent number: 11182916
    Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
    Type: Grant
    Filed: December 27, 2019
    Date of Patent: November 23, 2021
    Assignee: NVIDIA Corporation
    Inventors: Yilin Yang, Bala Siva Sashank Jujjavarapu, Pekka Janis, Zhaoting Ye, Sangmin Oh, Minwoo Park, Daniel Herrera Castro, Tommi Koivisto, David Nister
  • Publication number: 20210272304
    Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
    Type: Application
    Filed: December 27, 2019
    Publication date: September 2, 2021
    Inventors: Yilin Yang, Bala Siva Sashank Jujjavarapu, Pekka Janis, Zhaoting Ye, Sangmin Oh, Minwoo Park, Daniel Herrera Castro, Tommi Koivisto, David Nister
  • Publication number: 20200210726
    Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
    Type: Application
    Filed: December 27, 2019
    Publication date: July 2, 2020
    Inventors: Yilin Yang, Bala Siva Sashank Jujjavarapu, Pekka Janis, Zhaoting Ye, Sangmin Oh, Minwoo Park, Daniel Herrera Castro, Tommi Koivisto, David Nister
  • Publication number: 20190258878
    Abstract: In various examples, detected object data representative of locations of detected objects in a field of view may be determined. One or more clusters of the detected objects may be generated based at least in part on the locations, and features of the cluster may be determined for use as inputs to a machine learning model(s). A confidence score, computed by the machine learning model(s) based at least in part on the inputs, may be received, where the confidence score may be representative of a probability that the cluster corresponds to an object depicted at least partially in the field of view. Further examples provide approaches for determining ground truth data for training object detectors, such as for determining coverage values for ground truth objects using associated shapes, and for determining soft coverage values for ground truth objects.
    Type: Application
    Filed: February 15, 2019
    Publication date: August 22, 2019
    Inventors: Tommi Koivisto, Pekka Janis, Tero Kuosmanen, Timo Roman, Sriya Sarathy, William Zhang, Nizar Assaf, Colin Tracey
  • Publication number: 20190251442
    Abstract: A neural network includes at least a first network layer that includes a first set of filters and a second network layer that includes a second set of filters. Notably, a filter was removed from the first network layer. A bias associated with a different filter included in the second set of filters compensates for a different bias associated with the filter that was removed from the first network layer.
    Type: Application
    Filed: January 11, 2019
    Publication date: August 15, 2019
    Inventors: Tommi Koivisto, Pekka Jänis
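Several of the abstracts above are concrete enough to sketch in code. Publication 20250078532 describes combining a LIDAR intensity channel, a LIDAR feature-height channel, and optical color image channels into top-down tile images that feed the generator model. A minimal sketch of that channel stacking, assuming a five-channel layout (the exact layout is not specified in the abstract):

```python
import numpy as np

def build_topdown_tile(lidar_intensity, lidar_height, rgb):
    """Stack a LIDAR intensity channel, a LIDAR feature-height channel,
    and an RGB color image into one five-channel top-down tile tensor."""
    assert lidar_intensity.shape == lidar_height.shape == rgb.shape[:2]
    return np.dstack([lidar_intensity, lidar_height, rgb]).astype(np.float32)

# A 4x4 tile: intensity + height + three color channels = 5 channels
tile = build_topdown_tile(np.zeros((4, 4)), np.ones((4, 4)),
                          np.zeros((4, 4, 3)))
# tile.shape -> (4, 4, 5)
```

Tiles of this form would then be batched and passed to the encoder-decoder generator, which outputs per-pixel lane-marking heatmaps.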
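The distance-estimation abstracts (e.g., patent 12093824) mention a post-processing safety-bounds operation that keeps DNN distance predictions within a safety-permissible range. The simplest reading of that operation is a clamp; the function name and bound values below are illustrative, not taken from the patents:

```python
import numpy as np

def apply_safety_bounds(predicted_dist, lower, upper):
    """Clamp per-object distance predictions into a
    safety-permissible range [lower, upper]."""
    return np.clip(np.asarray(predicted_dist, dtype=float), lower, upper)

# Example: raw predictions in meters, bounds chosen for illustration
raw = np.array([-2.0, 5.5, 310.0])
safe = apply_safety_bounds(raw, lower=0.0, upper=250.0)
# safe -> [0.0, 5.5, 250.0]
```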
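Patent 11210537 and its related filings describe clustering detected-object locations, computing per-cluster features, and feeding those features to a machine learning model that outputs a confidence score. A sketch under loose assumptions: greedy proximity clustering stands in for the (unspecified) clustering step, and a logistic unit stands in for the trained model:

```python
import numpy as np

def cluster_detections(points, radius=1.0):
    """Greedily group detected-object locations: a point joins the first
    cluster whose centroid lies within `radius`, else starts a new one."""
    clusters = []
    for p in points:
        for c in clusters:
            if np.linalg.norm(np.mean(c, axis=0) - p) < radius:
                c.append(p)
                break
        else:
            clusters.append([p])
    return [np.array(c) for c in clusters]

def cluster_features(cluster):
    """Illustrative per-cluster features: detection count, centroid,
    and spatial spread."""
    return np.concatenate([[len(cluster)],
                           cluster.mean(axis=0),
                           cluster.std(axis=0)])

def confidence(features, w, b):
    """Logistic unit standing in for the trained model that scores
    whether a cluster corresponds to a real object."""
    return 1.0 / (1.0 + np.exp(-(features @ w + b)))

# Two nearby detections form one cluster; the far one is its own
dets = np.array([[0.0, 0.1], [0.1, 0.0], [5.0, 5.0]])
clusters = cluster_detections(dets)
w, b = np.zeros(5), 0.0        # untrained placeholder parameters
scores = [confidence(cluster_features(c), w, b) for c in clusters]
```

With zeroed placeholder parameters every score is 0.5; a trained model would separate real objects from spurious clusters.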
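Patent 11995551 describes compensating for a filter removed from one layer by adjusting a bias in the following layer. For a unit whose incoming weights are negligible, its output is approximately the constant relu(b); that constant, scaled by the next layer's weights for the removed unit, can be folded into the next layer's biases. A dense-layer sketch of that idea (the patent covers filters generally; this is not the patented method itself):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def prune_with_bias_compensation(W1, b1, W2, b2, k):
    """Remove unit k from the first layer and fold its constant
    contribution into the second layer's biases.  Assumes unit k's
    incoming weights are negligible, so its output is roughly the
    constant relu(b1[k])."""
    const_out = relu(b1[k])            # the removed unit's constant output
    b2 = b2 + W2[:, k] * const_out     # absorb it into downstream biases
    W1 = np.delete(W1, k, axis=0)      # drop the unit's weights and bias
    b1 = np.delete(b1, k)
    W2 = np.delete(W2, k, axis=1)
    return W1, b1, W2, b2

# With unit 2's incoming weights zeroed, pruning plus compensation
# leaves the two-layer network's output unchanged:
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3)); b1 = rng.normal(size=4)
W1[2] = 0.0                            # unit 2 contributes only its bias
W2 = rng.normal(size=(2, 4)); b2 = rng.normal(size=2)
x = rng.normal(size=3)

full = W2 @ relu(W1 @ x + b1) + b2
W1p, b1p, W2p, b2p = prune_with_bias_compensation(W1, b1, W2, b2, k=2)
pruned = W2p @ relu(W1p @ x + b1p) + b2p
assert np.allclose(full, pruned)
```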
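Patent 11704890 mentions a sampling algorithm that produces depth values at the DNN's input resolution from a predicted depth map at the DNN's output resolution. Bilinear interpolation is one plausible choice for that upsampling; the abstract does not pin the algorithm down:

```python
import numpy as np

def sample_depth(depth_map, out_h, out_w):
    """Bilinearly sample a coarse predicted depth map up to the
    network's input resolution."""
    h, w = depth_map.shape
    ys = np.linspace(0.0, h - 1, out_h)
    xs = np.linspace(0.0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]            # vertical interpolation weights
    wx = (xs - x0)[None, :]            # horizontal interpolation weights
    top = depth_map[np.ix_(y0, x0)] * (1 - wx) + depth_map[np.ix_(y0, x1)] * wx
    bot = depth_map[np.ix_(y1, x0)] * (1 - wx) + depth_map[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

# A 2x2 predicted depth map upsampled to 3x3:
coarse = np.array([[1.0, 2.0],
                   [3.0, 4.0]])
fine = sample_depth(coarse, 3, 3)
# fine -> [[1. 1.5 2.], [2. 2.5 3.], [3. 3.5 4.]]
```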