Patents by Inventor Innfarn Yoo

Innfarn Yoo has filed patent applications to protect the following inventions. This listing includes both pending applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240112036
    Abstract: In various examples, a two-dimensional (2D) and three-dimensional (3D) deep neural network (DNN) is implemented to fuse 2D and 3D object detection results for classifying objects. For example, regions of interest (ROIs) and/or bounding shapes corresponding thereto may be determined using one or more region proposal networks (RPNs)—such as an image-based RPN and/or a depth-based RPN. Each ROI may be extended into a frustum in 3D world-space, and a point cloud may be filtered to include only points from within the frustum. The remaining points may be voxelated to generate a volume in 3D world space, and the volume may be applied to a 3D DNN to generate one or more vectors. The one or more vectors, in addition to one or more additional vectors generated using a 2D DNN processing image data, may be applied to a classifier network to generate a classification for an object.
    Type: Application
    Filed: December 6, 2023
    Publication date: April 4, 2024
    Inventors: Innfarn Yoo, Rohit Taneja
  • Patent number: 11915145
    Abstract: In various examples, a two-dimensional (2D) and three-dimensional (3D) deep neural network (DNN) is implemented to fuse 2D and 3D object detection results for classifying objects. For example, regions of interest (ROIs) and/or bounding shapes corresponding thereto may be determined using one or more region proposal networks (RPNs)—such as an image-based RPN and/or a depth-based RPN. Each ROI may be extended into a frustum in 3D world-space, and a point cloud may be filtered to include only points from within the frustum. The remaining points may be voxelated to generate a volume in 3D world space, and the volume may be applied to a 3D DNN to generate one or more vectors. The one or more vectors, in addition to one or more additional vectors generated using a 2D DNN processing image data, may be applied to a classifier network to generate a classification for an object.
    Type: Grant
    Filed: August 8, 2022
    Date of Patent: February 27, 2024
    Assignee: NVIDIA Corporation
    Inventors: Innfarn Yoo, Rohit Taneja
  • Publication number: 20240005563
    Abstract: Example embodiments allow for training of encoders (e.g., artificial neural networks (ANNs)) to facilitate dithering of images that have been subject to quantization in order to reduce the number of colors and/or size of the images. Such a trained encoder generates a dithering image from an input quantized image that can be combined, by addition or by some other process, with the quantized image to result in a dithered output image that exhibits reduced banding or is otherwise aesthetically improved relative to the un-dithered quantized image. The use of a trained encoder to facilitate dithering of quantized images allows the dithering to be performed in a known period of time using a known amount of memory, in contrast to alternative iterative dithering methods. Additionally, the trained encoder can be differentiable, allowing it to be part of a deep learning image processing pipeline or other machine learning pipeline.
    Type: Application
    Filed: September 12, 2023
    Publication date: January 4, 2024
    Inventors: Innfarn Yoo, Xiyang Luo, Feng Yang
  • Patent number: 11790564
    Abstract: Example embodiments allow for training of encoders (e.g., artificial neural networks (ANNs)) to facilitate dithering of images that have been subject to quantization in order to reduce the number of colors and/or size of the images. Such a trained encoder generates a dithering image from an input quantized image that can be combined, by addition or by some other process, with the quantized image to result in a dithered output image that exhibits reduced banding or is otherwise aesthetically improved relative to the un-dithered quantized image. The use of a trained encoder to facilitate dithering of quantized images allows the dithering to be performed in a known period of time using a known amount of memory, in contrast to alternative iterative dithering methods. Additionally, the trained encoder can be differentiable, allowing it to be part of a deep learning image processing pipeline or other machine learning pipeline.
    Type: Grant
    Filed: March 30, 2020
    Date of Patent: October 17, 2023
    Assignee: Google LLC
    Inventors: Innfarn Yoo, Xiyang Luo, Feng Yang
  • Publication number: 20230214953
    Abstract: Systems and methods are directed to a computing system. The computing system can include one or more processors, a message embedding model, a message extraction model, and a first set of instructions that cause the computing system to perform operations including obtaining the three-dimensional image data and the message vector. The operations can include inputting three-dimensional image data and a message vector into the message embedding model to obtain encoded three-dimensional image data. The operations can include using the message extraction model to extract an embedded message from the encoded three-dimensional image data to obtain a reconstructed message vector. The operations can include evaluating a loss function for a difference between the reconstructed message vector and the message vector and modifying values for parameters of at least the message embedding model based on the loss function.
    Type: Application
    Filed: June 5, 2020
    Publication date: July 6, 2023
    Inventors: Innfarn Yoo, Xiyang Luo, Feng Yang, Ondrej Stava
  • Publication number: 20230053317
    Abstract: Example embodiments allow for training of encoders (e.g., artificial neural networks (ANNs)) to generate a color palette based on an input image. The color palette can then be used to generate, using the input image, a quantized, reduced color depth image that corresponds to the input image. Differences between a plurality of such input images and corresponding quantized images are used to train the encoder. Encoders trained in this manner are especially suited for generating color palettes used to convert images into different reduced color depth image file formats. Such an encoder also has benefits, with respect to memory use and computational time or cost, relative to the median-cut algorithm or other methods for producing reduced color depth color palettes for images.
    Type: Application
    Filed: January 8, 2020
    Publication date: February 16, 2023
Inventors: Xiyang Luo, Innfarn Yoo, Feng Yang
  • Publication number: 20220383620
    Abstract: In various examples, a two-dimensional (2D) and three-dimensional (3D) deep neural network (DNN) is implemented to fuse 2D and 3D object detection results for classifying objects. For example, regions of interest (ROIs) and/or bounding shapes corresponding thereto may be determined using one or more region proposal networks (RPNs)—such as an image-based RPN and/or a depth-based RPN. Each ROI may be extended into a frustum in 3D world-space, and a point cloud may be filtered to include only points from within the frustum. The remaining points may be voxelated to generate a volume in 3D world space, and the volume may be applied to a 3D DNN to generate one or more vectors. The one or more vectors, in addition to one or more additional vectors generated using a 2D DNN processing image data, may be applied to a classifier network to generate a classification for an object.
    Type: Application
    Filed: August 8, 2022
    Publication date: December 1, 2022
    Inventors: Innfarn Yoo, Rohit Taneja
  • Publication number: 20220335560
    Abstract: A computer-implemented method that provides watermark-based image reconstruction to compensate for lossy encoding schemes. The method can generate a difference image describing the data loss associated with encoding an image using a lossy encoding scheme. The difference image can be encoded as a message and embedded in the encoded image using a watermark and later extracted from the encoded image. The difference image can be added to the encoded image to reconstruct the original image. As an example, an input image encoded using a lossy JPEG compression scheme can be embedded with the lost data and later reconstructed, using the embedded data, to a fidelity level that is identical or substantially similar to the original.
    Type: Application
    Filed: May 12, 2019
    Publication date: October 20, 2022
    Inventors: Innfarn Yoo, Feng Yang, Xiyang Luo
  • Patent number: 11468582
    Abstract: In various examples, a two-dimensional (2D) and three-dimensional (3D) deep neural network (DNN) is implemented to fuse 2D and 3D object detection results for classifying objects. For example, regions of interest (ROIs) and/or bounding shapes corresponding thereto may be determined using one or more region proposal networks (RPNs)—such as an image-based RPN and/or a depth-based RPN. Each ROI may be extended into a frustum in 3D world-space, and a point cloud may be filtered to include only points from within the frustum. The remaining points may be voxelated to generate a volume in 3D world space, and the volume may be applied to a 3D DNN to generate one or more vectors. The one or more vectors, in addition to one or more additional vectors generated using a 2D DNN processing image data, may be applied to a classifier network to generate a classification for an object.
    Type: Grant
    Filed: March 13, 2020
    Date of Patent: October 11, 2022
    Assignee: NVIDIA Corporation
    Inventors: Innfarn Yoo, Rohit Taneja
  • Publication number: 20210304445
    Abstract: Example embodiments allow for training of encoders (e.g., artificial neural networks (ANNs)) to facilitate dithering of images that have been subject to quantization in order to reduce the number of colors and/or size of the images. Such a trained encoder generates a dithering image from an input quantized image that can be combined, by addition or by some other process, with the quantized image to result in a dithered output image that exhibits reduced banding or is otherwise aesthetically improved relative to the un-dithered quantized image. The use of a trained encoder to facilitate dithering of quantized images allows the dithering to be performed in a known period of time using a known amount of memory, in contrast to alternative iterative dithering methods. Additionally, the trained encoder can be differentiable, allowing it to be part of a deep learning image processing pipeline or other machine learning pipeline.
    Type: Application
    Filed: March 30, 2020
    Publication date: September 30, 2021
    Inventors: Innfarn Yoo, Xiyang Luo, Feng Yang
  • Publication number: 20200294257
    Abstract: In various examples, a two-dimensional (2D) and three-dimensional (3D) deep neural network (DNN) is implemented to fuse 2D and 3D object detection results for classifying objects. For example, regions of interest (ROIs) and/or bounding shapes corresponding thereto may be determined using one or more region proposal networks (RPNs)—such as an image-based RPN and/or a depth-based RPN. Each ROI may be extended into a frustum in 3D world-space, and a point cloud may be filtered to include only points from within the frustum. The remaining points may be voxelated to generate a volume in 3D world space, and the volume may be applied to a 3D DNN to generate one or more vectors. The one or more vectors, in addition to one or more additional vectors generated using a 2D DNN processing image data, may be applied to a classifier network to generate a classification for an object.
    Type: Application
    Filed: March 13, 2020
    Publication date: September 17, 2020
    Inventors: Innfarn Yoo, Rohit Taneja
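The ROI-to-frustum filtering step described in the 2D/3D fusion filings above (patent 11915145 and its related applications) can be illustrated with a minimal sketch. Everything here is an illustrative assumption rather than text from the patent: a simple pinhole camera model stands in for the region proposal networks, the function names are invented, and the voxel "volume" is a plain occupancy count rather than a DNN input tensor.

```python
def roi_to_frustum_filter(points, roi, fx, fy, cx, cy, z_min=0.5, z_max=80.0):
    """Keep only 3D points whose pinhole projection lands inside the 2D ROI.

    points: iterable of (x, y, z) in camera space (z pointing forward).
    roi: (u_min, v_min, u_max, v_max) pixel bounds of the region of interest.
    fx, fy, cx, cy: pinhole camera intrinsics (assumed, for illustration).
    """
    u_min, v_min, u_max, v_max = roi
    kept = []
    for x, y, z in points:
        if not (z_min <= z <= z_max):
            continue                      # outside the frustum's depth range
        u = fx * x / z + cx               # perspective projection to pixels
        v = fy * y / z + cy
        if u_min <= u <= u_max and v_min <= v <= v_max:
            kept.append((x, y, z))
    return kept

def voxelize(points, origin, voxel_size, dims):
    """Count points per cell of a fixed 3D grid: a toy occupancy volume."""
    ox, oy, oz = origin
    nx, ny, nz = dims
    volume = {}
    for x, y, z in points:
        i = int((x - ox) / voxel_size)
        j = int((y - oy) / voxel_size)
        k = int((z - oz) / voxel_size)
        if 0 <= i < nx and 0 <= j < ny and 0 <= k < nz:
            volume[(i, j, k)] = volume.get((i, j, k), 0) + 1
    return volume
```

In the patents, the resulting volume would be fed to a 3D DNN whose output vectors are fused with 2D-DNN vectors in a classifier network; this sketch only covers the geometric filtering and voxelization.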
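The learned-dithering filings above (patent 11790564 and publications 20240005563 and 20210304445) describe an encoder that produces a dithering image which is combined with a quantized image. As a rough sketch of that combine step, the trained encoder is replaced here by a fixed, tiled Bayer pattern purely for illustration; the real invention would predict the dither signal per image with a neural network.

```python
def quantize(img, levels):
    """Uniformly quantize grayscale values in [0, 255] down to `levels` values."""
    step = 255 / (levels - 1)
    return [[round(round(p / step) * step) for p in row] for row in img]

# Stand-in for the trained encoder's output: a 2x2 Bayer threshold pattern,
# centered on zero. This is an assumption for the sketch, not the patent's method.
BAYER_2x2 = [[0, 2], [3, 1]]

def dither_image(h, w, amplitude):
    """Tile the Bayer pattern into a zero-mean dither image of size h x w."""
    return [[(BAYER_2x2[y % 2][x % 2] / 3 - 0.5) * amplitude
             for x in range(w)] for y in range(h)]

def combine(quantized, dither):
    """Combine quantized image and dither image by addition (the abstracts
    also allow 'some other process'), clipping back to the valid range."""
    return [[min(255, max(0, q + round(d))) for q, d in zip(qr, dr)]
            for qr, dr in zip(quantized, dither)]
```

Unlike iterative error-diffusion dithering, both the (hypothetical) encoder pass and this combine step run in a fixed amount of time and memory, which is the property the abstracts emphasize.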
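Publication 20230214953 describes training a message embedding model and a message extraction model jointly against a loss on the reconstructed message vector. The sketch below shrinks that loop to a single learnable scalar so it stays self-contained: the "embedding model" is an additive bit shift, the "extraction model" is a sigmoid (kept soft so the pipeline is differentiable), and the gradient is taken by finite differences. All of this is an illustrative toy, not the patent's architecture.

```python
import math

def embed(cover, bits, strength):
    """Toy embedding model: shift each value up for a 1 bit, down for a 0."""
    return [v + (strength if b else -strength) for v, b in zip(cover, bits)]

def extract(encoded, cover, sharpness=4.0):
    """Toy extraction model: a soft bit estimate via a sigmoid."""
    return [1 / (1 + math.exp(-sharpness * (e - v)))
            for e, v in zip(encoded, cover)]

def mse(bits, recovered):
    return sum((b - r) ** 2 for b, r in zip(bits, recovered)) / len(bits)

def train_strength(cover, bits, strength=0.05, lr=0.5, steps=20, eps=1e-4):
    """One-parameter 'training': gradient descent on the embedding strength
    against the message-reconstruction loss, as in the patent's loop."""
    for _ in range(steps):
        def loss_at(s):
            return mse(bits, extract(embed(cover, bits, s), cover))
        grad = (loss_at(strength + eps) - loss_at(strength - eps)) / (2 * eps)
        strength -= lr * grad
    return strength
```

The patent's loop does the same thing at scale: evaluate the loss between the message vector and its reconstruction, then update the embedding model's parameters to drive that loss down.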
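Publication 20230053317 trains an encoder to produce a color palette, which is then used to build a reduced-color-depth image from the input. The step the palette feeds into is ordinary nearest-color quantization, sketched below; the palette here is supplied by hand, standing in for the trained encoder's output.

```python
def nearest(color, palette):
    """Index of the palette entry closest to `color` (squared Euclidean)."""
    return min(range(len(palette)),
               key=lambda i: sum((c - p) ** 2
                                 for c, p in zip(color, palette[i])))

def quantize_to_palette(pixels, palette):
    """Map each RGB pixel to its nearest palette color, producing the
    reduced-color-depth image the trained encoder targets."""
    return [palette[nearest(px, palette)] for px in pixels]
```

The abstract contrasts the learned encoder with the median-cut algorithm; either way, once a palette exists, this mapping is how the quantized image is produced.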
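Publication 20220335560 embeds a difference image, describing the data a lossy codec discarded, as a watermark, then adds it back to reconstruct the original. The arithmetic of that round trip is simple to sketch; here coarse uniform quantization of a 1D signal stands in for JPEG, and the watermark embedding/extraction itself is omitted.

```python
def lossy_encode(img, step=16):
    """Stand-in for a lossy codec such as JPEG: coarse uniform quantization."""
    return [p // step * step for p in img]

def difference_image(original, encoded):
    """The data lost by the codec; in the patent this is what gets embedded
    in the encoded image as a watermark message."""
    return [o - e for o, e in zip(original, encoded)]

def reconstruct(encoded, diff):
    """Add the extracted difference image back onto the encoded image."""
    return [e + d for e, d in zip(encoded, diff)]
```

Because the difference image captures exactly what the codec dropped, adding it back recovers the original signal, which is the fidelity claim in the abstract (in practice, limited by how losslessly the watermark itself survives).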