Patents by Inventor Alejandro Jose Troccoli

Alejandro Jose Troccoli has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250124593
Abstract: Techniques include a calibration assembly for a telepresence system that includes a stereoscopic display and a set of cameras. The calibration assembly may include at least one chart having chart markers, a mirror having mirror markers, and a processor. An example calibration assembly has three charts and the mirror attached to one of the charts. During calibration, the display is configured to display a set of display markers that are imaged in the mirror. Each camera forms a respective image of the set of chart markers, the set of mirror markers, and the set of display markers. The processor then determines the poses of the cameras with respect to the display based on the images of the set of chart markers, the set of mirror markers, and the set of display markers.
    Type: Application
    Filed: October 11, 2024
    Publication date: April 17, 2025
    Inventors: Alejandro Jose Troccoli, Andrew Block, Vineet Vijay Bhatawadekar, Alexander William Hake
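The core of the calibration above is recovering each camera's pose from imaged marker points. As a simplified, hypothetical sketch (the real system solves full 6-DoF poses in 3D, typically via a PnP-style solver; this reduces the idea to a planar 2D rigid fit), the least-squares pose-from-correspondences step looks like:

```python
import math

def rigid_pose_2d(chart_pts, image_pts):
    """Recover the rotation angle and translation mapping chart_pts -> image_pts.

    2D analogue of pose-from-markers: compute the cross-covariance of the
    centered correspondences, extract the rotation, then the translation.
    """
    n = len(chart_pts)
    cx = sum(p[0] for p in chart_pts) / n
    cy = sum(p[1] for p in chart_pts) / n
    qx = sum(q[0] for q in image_pts) / n
    qy = sum(q[1] for q in image_pts) / n
    # Cross-covariance terms of the centered point sets.
    sxx = sxy = syx = syy = 0.0
    for (px, py), (ix, iy) in zip(chart_pts, image_pts):
        ax, ay = px - cx, py - cy
        bx, by = ix - qx, iy - qy
        sxx += ax * bx; sxy += ax * by
        syx += ay * bx; syy += ay * by
    theta = math.atan2(sxy - syx, sxx + syy)
    c, s = math.cos(theta), math.sin(theta)
    tx = qx - (c * cx - s * cy)
    ty = qy - (s * cx + c * cy)
    return theta, (tx, ty)
```

With noise-free correspondences this recovers the transform exactly; with noisy marker detections it returns the least-squares rigid fit, which is why calibration targets carry many markers rather than the minimum.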
  • Publication number: 20240354990
Abstract: A telepresence system may include a display configured to present three-dimensional images. The 3D images may be rendered from multiple images captured by multiple cameras that image an area from different viewpoints. Misalignment of any of the multiple cameras may negatively affect the rendering. Accordingly, the telepresence system may calibrate the cameras to compensate for any misalignment as part of the rendering. This calibration may include capturing an image, or images, of a calibration target to determine the relative positions of the cameras.
    Type: Application
    Filed: April 24, 2023
    Publication date: October 24, 2024
    Inventors: Guillermo Fabian Díaz Lankenau, Alejandro Jose Troccoli, Antonio Yamil Layon Halun
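A common way such a shared calibration target yields relative camera positions: if each camera independently estimates the target's pose, the camera-to-camera transform follows by composing one pose with the inverse of the other. A minimal sketch using 3x3 homogeneous 2D transforms (a real system would use 4x4 matrices in 3D, but the composition is identical):

```python
import math

def se2(theta, tx, ty):
    """3x3 homogeneous 2D rigid transform (2D stand-in for a camera pose)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_inv(m):
    """Inverse of a rigid transform: transpose the rotation, rotate-negate t."""
    c, s, tx, ty = m[0][0], m[1][0], m[0][2], m[1][2]
    return [[c, s, -(c * tx + s * ty)],
            [-s, c, -(-s * tx + c * ty)],
            [0.0, 0.0, 1.0]]

def relative_pose(cam_a_from_target, cam_b_from_target):
    """Transform taking camera-B coordinates to camera-A coordinates.

    T_a_b = T_a_target * inv(T_b_target): because both cameras observe the
    same target, their relative pose follows without the cameras ever
    needing to see each other.
    """
    return mat_mul(cam_a_from_target, mat_inv(cam_b_from_target))
```

This is why a single target image per camera can re-establish the extrinsic chain after a camera is bumped out of alignment.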
  • Patent number: 11508076
    Abstract: A neural network model receives color data for a sequence of images corresponding to a dynamic scene in three-dimensional (3D) space. Motion of objects in the image sequence results from a combination of a dynamic camera orientation and motion or a change in the shape of an object in the 3D space. The neural network model generates two components that are used to produce a 3D motion field representing the dynamic (non-rigid) part of the scene. The two components are information identifying dynamic and static portions of each image and the camera orientation. The dynamic portions of each image contain motion in the 3D space that is independent of the camera orientation. In other words, the motion in the 3D space (estimated 3D scene flow data) is separated from the motion of the camera.
    Type: Grant
    Filed: January 22, 2021
    Date of Patent: November 22, 2022
    Assignee: NVIDIA Corporation
    Inventors: Zhaoyang Lv, Kihwan Kim, Deqing Sun, Alejandro Jose Troccoli, Jan Kautz
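The decomposition described in the abstract (and in the related family entries below) has a simple arithmetic core: subtract the displacement that camera ego-motion alone would induce at each 3D point from the total observed displacement, keeping the residual only where the dynamic/static segmentation marks a point as moving. The toy sketch below shows only that arithmetic; in the patented approach a neural network predicts the mask and camera orientation, which are given here as inputs.

```python
import math

def rotate_z(p, theta):
    """Rotate a 3D point about the z-axis (stand-in for camera rotation)."""
    c, s = math.cos(theta), math.sin(theta)
    x, y, z = p
    return (c * x - s * y, s * x + c * y, z)

def scene_flow(points, total_disp, cam_theta, cam_t, dynamic_mask):
    """Residual (non-rigid) 3D motion after removing camera-induced motion.

    For each point: induced = (R p + t) - p, flow = total - induced,
    zeroed where the mask says the point is static.
    """
    flow = []
    for p, d, dyn in zip(points, total_disp, dynamic_mask):
        moved = rotate_z(p, cam_theta)
        induced = tuple(moved[i] + cam_t[i] - p[i] for i in range(3))
        if dyn:
            flow.append(tuple(d[i] - induced[i] for i in range(3)))
        else:
            # Static point: all apparent motion is explained by the camera.
            flow.append((0.0, 0.0, 0.0))
    return flow
```

The static-point branch is what makes the estimate well-posed: points the mask labels static anchor the ego-motion, so the residual at dynamic points can be read as true object motion.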
  • Publication number: 20210150736
    Abstract: A neural network model receives color data for a sequence of images corresponding to a dynamic scene in three-dimensional (3D) space. Motion of objects in the image sequence results from a combination of a dynamic camera orientation and motion or a change in the shape of an object in the 3D space. The neural network model generates two components that are used to produce a 3D motion field representing the dynamic (non-rigid) part of the scene. The two components are information identifying dynamic and static portions of each image and the camera orientation. The dynamic portions of each image contain motion in the 3D space that is independent of the camera orientation. In other words, the motion in the 3D space (estimated 3D scene flow data) is separated from the motion of the camera.
    Type: Application
    Filed: January 22, 2021
    Publication date: May 20, 2021
    Inventors: Zhaoyang Lv, Kihwan Kim, Deqing Sun, Alejandro Jose Troccoli, Jan Kautz
  • Patent number: 10929987
    Abstract: A neural network model receives color data for a sequence of images corresponding to a dynamic scene in three-dimensional (3D) space. Motion of objects in the image sequence results from a combination of a dynamic camera orientation and motion or a change in the shape of an object in the 3D space. The neural network model generates two components that are used to produce a 3D motion field representing the dynamic (non-rigid) part of the scene. The two components are information identifying dynamic and static portions of each image and the camera orientation. The dynamic portions of each image contain motion in the 3D space that is independent of the camera orientation. In other words, the motion in the 3D space (estimated 3D scene flow data) is separated from the motion of the camera.
    Type: Grant
    Filed: August 1, 2018
    Date of Patent: February 23, 2021
    Assignee: NVIDIA Corporation
    Inventors: Zhaoyang Lv, Kihwan Kim, Deqing Sun, Alejandro Jose Troccoli, Jan Kautz
  • Patent number: 10922793
    Abstract: Missing image content is generated using a neural network. In an embodiment, a high resolution image and associated high resolution semantic label map are generated from a low resolution image and associated low resolution semantic label map. The input image/map pair (low resolution image and associated low resolution semantic label map) lacks detail and is therefore missing content. Rather than simply enhancing the input image/map pair, data missing in the input image/map pair is improvised or hallucinated by a neural network, creating plausible content while maintaining spatio-temporal consistency. Missing content is hallucinated to generate a detailed zoomed in portion of an image. Missing content is hallucinated to generate different variations of an image, such as different seasons or weather conditions for a driving video.
    Type: Grant
    Filed: March 14, 2019
    Date of Patent: February 16, 2021
    Assignee: NVIDIA Corporation
    Inventors: Seung-Hwan Baek, Kihwan Kim, Jinwei Gu, Orazio Gallo, Alejandro Jose Troccoli, Ming-Yu Liu, Jan Kautz
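One practical detail behind the image/label-map pairs in this abstract: when preparing the low-resolution semantic label map for a higher-resolution output, the labels must be upsampled with nearest-neighbor rather than interpolation, since averaging class IDs would produce meaningless in-between labels. A minimal sketch of that input-side step (the hallucination of new detail itself is the learned part and is not reproduced here):

```python
def upsample_nearest(grid, factor):
    """Nearest-neighbor upsampling of a 2D grid of class labels.

    Each output cell copies the label of the source cell it falls inside;
    no new label values are ever invented, unlike bilinear interpolation.
    """
    rows, cols = len(grid), len(grid[0])
    return [[grid[i // factor][j // factor]
             for j in range(cols * factor)]
            for i in range(rows * factor)]
```

The color image, by contrast, can be smoothly interpolated before the network hallucinates the missing high-frequency content.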
  • Publication number: 20190355103
    Abstract: Missing image content is generated using a neural network. In an embodiment, a high resolution image and associated high resolution semantic label map are generated from a low resolution image and associated low resolution semantic label map. The input image/map pair (low resolution image and associated low resolution semantic label map) lacks detail and is therefore missing content. Rather than simply enhancing the input image/map pair, data missing in the input image/map pair is improvised or hallucinated by a neural network, creating plausible content while maintaining spatio-temporal consistency. Missing content is hallucinated to generate a detailed zoomed in portion of an image. Missing content is hallucinated to generate different variations of an image, such as different seasons or weather conditions for a driving video.
    Type: Application
    Filed: March 14, 2019
    Publication date: November 21, 2019
    Inventors: Seung-Hwan Baek, Kihwan Kim, Jinwei Gu, Orazio Gallo, Alejandro Jose Troccoli, Ming-Yu Liu, Jan Kautz
  • Patent number: 10482196
Abstract: A method, computer readable medium, and system are disclosed for generating a Gaussian mixture model hierarchy. The method includes the steps of receiving point cloud data defining a plurality of points; defining a Gaussian Mixture Model (GMM) hierarchy that includes a number of mixels, each mixel encoding parameters for a probabilistic occupancy map; and adjusting the parameters for one or more probabilistic occupancy maps based on the point cloud data utilizing a number of iterations of an Expectation-Maximization (EM) algorithm.
    Type: Grant
    Filed: February 26, 2016
    Date of Patent: November 19, 2019
    Assignee: NVIDIA Corporation
    Inventors: Benjamin David Eckart, Kihwan Kim, Alejandro Jose Troccoli, Jan Kautz
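The EM iterations referenced in this abstract follow the classic alternate-and-refit pattern. As a toy, hypothetical analogue (a 1D mixture rather than the patent's hierarchy of mixels over a 3D point cloud), each component's weight, mean, and variance plays the role of one mixel's parameters:

```python
import math

def fit_gmm_1d(data, k=2, iters=50):
    """Fit a 1D Gaussian mixture with Expectation-Maximization.

    E-step: compute each component's responsibility for each point.
    M-step: re-estimate weights, means, and variances from responsibilities.
    """
    lo, hi = min(data), max(data)
    means = [lo + (hi - lo) * (i + 0.5) / k for i in range(k)]
    variances = [1.0] * k
    weights = [1.0 / k] * k
    n = len(data)
    for _ in range(iters):
        # E-step: posterior probability of each component given each point.
        resp = []
        for x in data:
            dens = [w * math.exp(-(x - m) ** 2 / (2 * v))
                    / math.sqrt(2 * math.pi * v)
                    for w, m, v in zip(weights, means, variances)]
            total = sum(dens) or 1e-300  # guard against underflow to zero
            resp.append([d / total for d in dens])
        # M-step: weighted maximum-likelihood updates per component.
        for j in range(k):
            nj = sum(r[j] for r in resp)
            weights[j] = nj / n
            means[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            variances[j] = max(
                sum(r[j] * (x - means[j]) ** 2 for r, x in zip(resp, data)) / nj,
                1e-6)  # floor keeps components from collapsing onto one point
    return weights, means, variances
```

The hierarchical version in the patent nests such fits coarse-to-fine, so each level's mixels summarize the point cloud at a different resolution.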
  • Publication number: 20190057509
    Abstract: A neural network model receives color data for a sequence of images corresponding to a dynamic scene in three-dimensional (3D) space. Motion of objects in the image sequence results from a combination of a dynamic camera orientation and motion or a change in the shape of an object in the 3D space. The neural network model generates two components that are used to produce a 3D motion field representing the dynamic (non-rigid) part of the scene. The two components are information identifying dynamic and static portions of each image and the camera orientation. The dynamic portions of each image contain motion in the 3D space that is independent of the camera orientation. In other words, the motion in the 3D space (estimated 3D scene flow data) is separated from the motion of the camera.
    Type: Application
    Filed: August 1, 2018
    Publication date: February 21, 2019
    Inventors: Zhaoyang Lv, Kihwan Kim, Deqing Sun, Alejandro Jose Troccoli, Jan Kautz
  • Publication number: 20170249401
Abstract: A method, computer readable medium, and system are disclosed for generating a Gaussian mixture model hierarchy. The method includes the steps of receiving point cloud data defining a plurality of points; defining a Gaussian Mixture Model (GMM) hierarchy that includes a number of mixels, each mixel encoding parameters for a probabilistic occupancy map; and adjusting the parameters for one or more probabilistic occupancy maps based on the point cloud data utilizing a number of iterations of an Expectation-Maximization (EM) algorithm.
    Type: Application
    Filed: February 26, 2016
    Publication date: August 31, 2017
    Inventors: Benjamin David Eckart, Kihwan Kim, Alejandro Jose Troccoli, Jan Kautz