Patents by Inventor Rodrigo Ortiz Cayon

Rodrigo Ortiz Cayon has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240096094
    Abstract: Images of an object may be captured via a camera at a mobile computing device at different viewpoints. The images may be used to identify components of the object and to estimate damage to some or all of the components. Capture coverage levels corresponding with the components may be determined, and then recording guidance may be provided for capturing additional images to increase the capture coverage levels.
    Type: Application
    Filed: November 16, 2023
    Publication date: March 21, 2024
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Pavel Hanchar, Rodrigo Ortiz-Cayon, Aidas Liaudanskas
  • Patent number: 11887256
    Abstract: Novel images may be generated using an image generator implemented on a processor. The image generator may receive as input neural features selected from a neural texture atlas. The image generator may also receive as input one or more position guides identifying position information for a plurality of input image pixels. The novel images may be evaluated using an image discriminator to determine a plurality of optimization values by comparing each of the plurality of novel images with a respective one of a corresponding plurality of input images. Each of the novel images may be generated from a respective camera pose relative to an object identical to that of the respective one of the corresponding plurality of input images. The image generator and the neural features may be updated based on the optimization values and stored on a storage device.
    Type: Grant
    Filed: November 1, 2021
    Date of Patent: January 30, 2024
    Assignee: Fyusion, Inc.
    Inventors: Tobias Bertel, Yusuke Tomoto, Srinivas Rao, Krunal Ketan Chande, Rodrigo Ortiz-Cayon, Stefan Johannes Josef Holzer, Christian Richardt
  • Patent number: 11861900
    Abstract: Images of an object may be captured via a camera at a mobile computing device at different viewpoints. The images may be used to identify components of the object and to estimate damage to some or all of the components. Capture coverage levels corresponding with the components may be determined, and then recording guidance may be provided for capturing additional images to increase the capture coverage levels.
    Type: Grant
    Filed: November 12, 2021
    Date of Patent: January 2, 2024
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Pavel Hanchar, Rodrigo Ortiz-Cayon, Aidas Liaudanskas
  • Publication number: 20230196658
    Abstract: Images may be captured at an image capture device mounted on an image capture device gimbal capable of rotating the image capture device around a nodal point in one or more dimensions. Each of the plurality of images may be captured from a respective rotational position. The images may be captured by a designated camera that is not located at the nodal point in one or more of the respective rotational positions. A designated three-dimensional point cloud may be determined based on the plurality of images. The designated three-dimensional point cloud may include a plurality of points each having a respective position in a virtual three-dimensional space.
    Type: Application
    Filed: February 23, 2023
    Publication date: June 22, 2023
    Applicant: Fyusion, Inc.
    Inventors: Nico Gregor Sebastian Blodow, Martin Saelzle, Matteo Munaro, Krunal Ketan Chande, Rodrigo Ortiz Cayon, Stefan Johannes Josef Holzer
  • Patent number: 11615582
    Abstract: Images may be captured at an image capture device mounted on an image capture device gimbal capable of rotating the image capture device around a nodal point in one or more dimensions. Each of the plurality of images may be captured from a respective rotational position. The images may be captured by a designated camera that is not located at the nodal point in one or more of the respective rotational positions. A designated three-dimensional point cloud may be determined based on the plurality of images. The designated three-dimensional point cloud may include a plurality of points each having a respective position in a virtual three-dimensional space.
    Type: Grant
    Filed: June 8, 2021
    Date of Patent: March 28, 2023
    Assignee: Fyusion, Inc.
    Inventors: Nico Gregor Sebastian Blodow, Martin Saelzle, Matteo Munaro, Krunal Ketan Chande, Rodrigo Ortiz Cayon, Stefan Johannes Josef Holzer
  • Publication number: 20220406003
    Abstract: Three-dimensional points may be projected onto first locations in a first image of an object captured from a first position in three-dimensional space relative to the object, and onto second locations in a second image as viewed from a virtual camera position located at a second position in three-dimensional space relative to the object. First transformations linking the first and second locations may then be determined. Second transformations transforming first coordinates for the first image to second coordinates for the second image may be determined based on the first transformations. Based on the second transformations and on the first image, a second image of the object may be generated from the virtual camera position.
    Type: Application
    Filed: October 15, 2021
    Publication date: December 22, 2022
    Applicant: Fyusion, Inc.
    Inventors: Rodrigo Ortiz Cayon, Krunal Ketan Chande, Stefan Johannes Josef Holzer, Wook Yeon Hwang, Alexander Jay Bruen Trevor, Shane Griffith
  • Publication number: 20220392151
    Abstract: Images may be captured at an image capture device mounted on an image capture device gimbal capable of rotating the image capture device around a nodal point in one or more dimensions. Each of the plurality of images may be captured from a respective rotational position. The images may be captured by a designated camera that is not located at the nodal point in one or more of the respective rotational positions. A designated three-dimensional point cloud may be determined based on the plurality of images. The designated three-dimensional point cloud may include a plurality of points each having a respective position in a virtual three-dimensional space.
    Type: Application
    Filed: June 8, 2021
    Publication date: December 8, 2022
    Applicant: Fyusion, Inc.
    Inventors: Nico Gregor Sebastian Blodow, Martin Saelzle, Matteo Munaro, Krunal Ketan Chande, Rodrigo Ortiz Cayon, Stefan Johannes Josef Holzer
  • Publication number: 20220343601
    Abstract: One or more two-dimensional images of a three-dimensional object may be analyzed to estimate a three-dimensional mesh representing the object and a mapping of the two-dimensional images to the three-dimensional mesh. Initially, a correspondence may be determined between the images and a UV representation of a three-dimensional template mesh by training a neural network. Then, the three-dimensional template mesh may be deformed to determine the representation of the object. The process may involve a reprojection loss cycle in which points from the images are mapped onto the UV representation, then onto the three-dimensional template mesh, and then back onto the two-dimensional images.
    Type: Application
    Filed: April 15, 2022
    Publication date: October 27, 2022
    Applicant: Fyusion, Inc.
    Inventors: Aidas Liaudanskas, Nishant Rai, Srinivas Rao, Rodrigo Ortiz-Cayon, Matteo Munaro, Stefan Johannes Josef Holzer
  • Publication number: 20220254007
    Abstract: Damage to an object such as a vehicle may be detected and presented based at least in part on image data. In some configurations, image data may be detected by causing the object to pass through a gate or portal on which cameras are located. Alternatively, or additionally, image data may be collected by a user operating a camera and moving around the object. The cameras may capture image data, which may be combined and analyzed to detect damage. Some or all of the image data and/or analysis of the image data may be presented in a viewer, which may allow a user to perform actions such as navigating around the object in a virtual environment, identifying and viewing areas of the object where damage has been detected, and accessing the results of the analysis.
    Type: Application
    Filed: February 2, 2022
    Publication date: August 11, 2022
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Krunal Ketan Chande, Julius Santiago, Pantelis Kalogiros, Raul Dronca, Ioannis Spanos, Pavel Hanchar, Aidas Liaudanskas, Santi Arano, Rodrigo Ortiz-Cayon
  • Publication number: 20220155945
    Abstract: Images of an object moving along a path may be captured from a plurality of cameras. Each of the cameras may be positioned at a respective identified location in three-dimensional space. Correspondence information for the plurality of images linking locations on different ones of the images may be determined. Linked locations may correspond to similar portions of the object captured by the cameras. A portion of the plurality of images may be presented on a display screen via a graphical user interface. The plurality of images may be grouped based on the correspondence information.
    Type: Application
    Filed: November 12, 2021
    Publication date: May 19, 2022
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Pavel Hanchar, Aidas Liaudanskas, Krunal Ketan Chande, Wook Yeon Hwang, Blake McConnell, Johan Nordin, Rodrigo Ortiz-Cayon, Ioannis Spanos, Nick Stetco, Milos Vlaski, Martin Markus Hubert Wawro, Endre Ajandi, Santi Arano, Mehjabeen Alim
  • Publication number: 20220156497
    Abstract: Images of an object may be captured via a camera at a mobile computing device at different viewpoints. The images may be used to identify components of the object and to estimate damage to some or all of the components. Capture coverage levels corresponding with the components may be determined, and then recording guidance may be provided for capturing additional images to increase the capture coverage levels.
    Type: Application
    Filed: November 12, 2021
    Publication date: May 19, 2022
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Pavel Hanchar, Rodrigo Ortiz-Cayon, Aidas Liaudanskas
  • Publication number: 20220139036
    Abstract: Novel images may be generated using an image generator implemented on a processor. The image generator may receive as input neural features selected from a neural texture atlas. The image generator may also receive as input one or more position guides identifying position information for a plurality of input image pixels. The novel images may be evaluated using an image discriminator to determine a plurality of optimization values by comparing each of the plurality of novel images with a respective one of a corresponding plurality of input images. Each of the novel images may be generated from a respective camera pose relative to an object identical to that of the respective one of the corresponding plurality of input images. The image generator and the neural features may be updated based on the optimization values and stored on a storage device.
    Type: Application
    Filed: November 1, 2021
    Publication date: May 5, 2022
    Applicant: Fyusion, Inc.
    Inventors: Tobias Bertel, Yusuke Tomoto, Srinivas Rao, Krunal Ketan Chande, Rodrigo Ortiz-Cayon, Stefan Johannes Josef Holzer, Christian Richardt
  • Patent number: 11252398
    Abstract: Configuration parameters associated with the generation of a cinematic video may identify object components, an order in which to display the object components, and an object type. A multi-view representation that is navigable in one or more dimensions and that includes images of an object captured from different viewpoints may be identified. A cinematic video of the object may be generated based on a subset of the images, arranged in an order based on the configuration parameters.
    Type: Grant
    Filed: January 12, 2021
    Date of Patent: February 15, 2022
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Rodrigo Ortiz Cayon, Julius Santiago, Milos Vlaski
  • Publication number: 20210312702
    Abstract: Images of an object may be captured at a computing device. Each of the images may be captured from a respective viewpoint based on image capture configuration information identifying one or more parameter values. A multiview image digital media representation of the object may be generated that includes some or all of the images of the object and that is navigable in one or more dimensions.
    Type: Application
    Filed: June 17, 2021
    Publication date: October 7, 2021
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Santiago Arano Perez, Abhishek Kar, Matteo Munaro, Pavel Hanchar, Radu Bogdan Rusu, Martin Markus Hubert Wawro, Ashley Wakefield, Rodrigo Ortiz-Cayon, Josh Faust, Jai Chaudhry, Nico Gregor Sebastian Blodow, Mike Penz
  • Publication number: 20210227195
    Abstract: Configuration parameters associated with the generation of a cinematic video may identify object components, an order in which to display the object components, and an object type. A multi-view representation that is navigable in one or more dimensions and that includes images of an object captured from different viewpoints may be identified. A cinematic video of the object may be generated based on a subset of the images, arranged in an order based on the configuration parameters.
    Type: Application
    Filed: January 12, 2021
    Publication date: July 22, 2021
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Rodrigo Ortiz Cayon, Julius Santiago, Milos Vlaski
  • Patent number: 10958887
    Abstract: A sampling density for capturing a plurality of two-dimensional images of a three-dimensional scene may be determined. The sampling density may be below the Nyquist rate. However, the sampling density may be sufficiently high such that captured images may be promoted to multiplane images and used to generate novel viewpoints in a light field reconstruction framework. Recording guidance may be provided at a display screen on a mobile computing device based on the determined sampling density. The recording guidance may identify a plurality of camera poses at which to position a camera to capture images of the three-dimensional scene. A plurality of images captured via the camera based on the recording guidance may be stored on a storage device.
    Type: Grant
    Filed: September 18, 2019
    Date of Patent: March 23, 2021
    Assignee: Fyusion, Inc.
    Inventors: Abhishek Kar, Rodrigo Ortiz Cayon, Ben Mildenhall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
  • Patent number: 10911732
    Abstract: An estimated camera pose may be determined for each of a plurality of single plane images of a designated three-dimensional scene. The sampling density of the single plane images may be below the Nyquist rate. However, the sampling density may be sufficiently high such that the single plane images may be promoted to multiplane images and used to generate novel viewpoints in a light field reconstruction framework. Scene depth information identifying a respective depth value for each of a respective plurality of pixels may be determined for each single plane image. A respective multiplane image including a respective plurality of depth planes may be determined for each single plane image. Each of the depth planes may include a respective plurality of pixels from the respective single plane image.
    Type: Grant
    Filed: September 18, 2019
    Date of Patent: February 2, 2021
    Assignee: Fyusion, Inc.
    Inventors: Abhishek Kar, Rodrigo Ortiz Cayon, Ben Mildenhall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
  • Patent number: 10893250
    Abstract: A respective target viewpoint may be rendered for each of a plurality of multiplane images of a three-dimensional scene. Each of the multiplane images may be associated with a respective single plane image of the three-dimensional scene captured from a respective viewpoint. Each of the multiplane images may include a respective plurality of depth planes. Each of the depth planes may include a respective plurality of pixels from the respective single plane image. Each of the pixels in the depth plane may be positioned at approximately the same distance from the respective viewpoint. A weighted combination of the target viewpoint renderings may be determined, where the sampling density of the single plane images is sufficiently high that the weighted combination satisfies the inequality in Equation (7). The weighted combination of the target viewpoint renderings may be transmitted as a novel viewpoint image.
    Type: Grant
    Filed: September 18, 2019
    Date of Patent: January 12, 2021
    Assignee: Fyusion, Inc.
    Inventors: Abhishek Kar, Rodrigo Ortiz Cayon, Ben Mildenhall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
  • Publication number: 20200226816
    Abstract: A respective target viewpoint may be rendered for each of a plurality of multiplane images of a three-dimensional scene. Each of the multiplane images may be associated with a respective single plane image of the three-dimensional scene captured from a respective viewpoint. Each of the multiplane images may include a respective plurality of depth planes. Each of the depth planes may include a respective plurality of pixels from the respective single plane image. Each of the pixels in the depth plane may be positioned at approximately the same distance from the respective viewpoint. A weighted combination of the target viewpoint renderings may be determined, where the sampling density of the single plane images is sufficiently high that the weighted combination satisfies the inequality in Equation (7). The weighted combination of the target viewpoint renderings may be transmitted as a novel viewpoint image.
    Type: Application
    Filed: September 18, 2019
    Publication date: July 16, 2020
    Applicant: Fyusion, Inc.
    Inventors: Abhishek Kar, Rodrigo Ortiz Cayon, Ben Mildenhall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
  • Publication number: 20200226736
    Abstract: A sampling density for capturing a plurality of two-dimensional images of a three-dimensional scene may be determined. The sampling density may be below the Nyquist rate. However, the sampling density may be sufficiently high such that captured images may be promoted to multiplane images and used to generate novel viewpoints in a light field reconstruction framework. Recording guidance may be provided at a display screen on a mobile computing device based on the determined sampling density. The recording guidance may identify a plurality of camera poses at which to position a camera to capture images of the three-dimensional scene. A plurality of images captured via the camera based on the recording guidance may be stored on a storage device.
    Type: Application
    Filed: September 18, 2019
    Publication date: July 16, 2020
    Applicant: Fyusion, Inc.
    Inventors: Abhishek Kar, Rodrigo Ortiz Cayon, Ben Mildenhall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
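The reprojection idea in publication number 20220406003 can be illustrated with a minimal sketch: known three-dimensional points are projected into a source camera and into a virtual camera, and the resulting pairs of projected locations determine a transformation that warps source-image coordinates toward the virtual view. Everything below (the pinhole camera parameters, the random point cloud, and the single least-squares affine fit standing in for the per-location transformations) is a hypothetical simplification for illustration, not the patented method.

```python
import numpy as np

def project(points_3d, K, R, t):
    """Project Nx3 world points into pixel coordinates with a pinhole camera."""
    cam = points_3d @ R.T + t          # world frame -> camera frame
    uvw = cam @ K.T                    # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]    # perspective divide

# Hypothetical intrinsics shared by both cameras.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

# A small cloud of 3D points on the object, in front of the cameras.
rng = np.random.default_rng(0)
points = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 6.0], size=(20, 3))

# Source camera at the origin; virtual camera translated to the right.
first = project(points, K, np.eye(3), np.zeros(3))
second = project(points, K, np.eye(3), np.array([0.5, 0.0, 0.0]))

# Fit one affine transform linking the first locations to the second
# locations (a crude stand-in for the transformations in the abstract):
# [u2, v2] ~= [u1, v1, 1] @ A, solved in the least-squares sense.
ones = np.ones((len(first), 1))
A, *_ = np.linalg.lstsq(np.hstack([first, ones]), second, rcond=None)

# Warp the first image's coordinates toward the virtual viewpoint.
warped = np.hstack([first, ones]) @ A
```

A single affine fit cannot model depth-dependent parallax exactly, which is why the actual claims describe determining transformations per linked location rather than one global warp.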