Patents by Inventor Stefan Johannes Josef HOLZER

Stefan Johannes Josef HOLZER has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220406003
    Abstract: Three-dimensional points may be projected onto first locations in a first image of an object captured from a first position in three-dimensional space relative to the object, and onto second locations in a second image associated with a virtual camera position located at a second position in three-dimensional space relative to the object. First transformations linking the first and second locations may then be determined. Second transformations transforming first coordinates for the first image to second coordinates for the second image may be determined based on the first transformations. Based on these second transformations and on the first image, a second image of the object may be generated from the virtual camera position.
    Type: Application
    Filed: October 15, 2021
    Publication date: December 22, 2022
    Applicant: Fyusion, Inc.
    Inventors: Rodrigo Ortiz Cayon, Krunal Ketan Chande, Stefan Johannes Josef Holzer, Wook Yeon Hwang, Alexander Jay Bruen Trevor, Shane Griffith
  • Publication number: 20220392151
    Abstract: Images may be captured at an image capture device mounted on an image capture device gimbal capable of rotating the image capture device around a nodal point in one or more dimensions. Each of the plurality of images may be captured from a respective rotational position. The images may be captured by a designated camera that is not located at the nodal point in one or more of the respective rotational positions. A designated three-dimensional point cloud may be determined based on the plurality of images. The designated three-dimensional point cloud may include a plurality of points each having a respective position in a virtual three-dimensional space.
    Type: Application
    Filed: June 8, 2021
    Publication date: December 8, 2022
    Applicant: Fyusion, Inc.
    Inventors: Nico Gregor Sebastian Blodow, Martin Saelzle, Matteo Munaro, Krunal Ketan Chande, Rodrigo Ortiz Cayon, Stefan Johannes Josef Holzer
  • Patent number: 11488380
    Abstract: A multi-view interactive digital media representation (MVIDMR) of an object can be generated from live images of an object captured from a camera. Selectable tags can be placed at locations on the object in the MVIDMR. When the selectable tags are selected, media content can be output which shows details of the object at the location where the selectable tag is placed. A machine learning algorithm can be used to automatically recognize landmarks on the object in the frames of the MVIDMR and a structure from motion calculation can be used to determine 3-D positions associated with the landmarks. A 3-D skeleton associated with the object can be assembled from the 3-D positions and projected into the frames associated with the MVIDMR. The 3-D skeleton can be used to determine the selectable tag locations in the frames of the MVIDMR of the object.
    Type: Grant
    Filed: June 3, 2021
    Date of Patent: November 1, 2022
    Assignee: Fyusion, Inc.
    Inventors: Chris Beall, Abhishek Kar, Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Pavel Hanchar
  • Publication number: 20220343601
    Abstract: One or more two-dimensional images of a three-dimensional object may be analyzed to estimate a three-dimensional mesh representing the object and a mapping of the two-dimensional images to the three-dimensional mesh. Initially, a correspondence may be determined between the images and a UV representation of a three-dimensional template mesh by training a neural network. Then, the three-dimensional template mesh may be deformed to determine the representation of the object. The process may involve a reprojection loss cycle in which points from the images are mapped onto the UV representation, then onto the three-dimensional template mesh, and then back onto the two-dimensional images.
    Type: Application
    Filed: April 15, 2022
    Publication date: October 27, 2022
    Applicant: Fyusion, Inc.
    Inventors: Aidas Liaudanskas, Nishant Rai, Srinivas Rao, Rodrigo Ortiz-Cayon, Matteo Munaro, Stefan Johannes Josef Holzer
  • Patent number: 11475626
    Abstract: One or more images of an object, each from a respective viewpoint, may be captured at a camera at a mobile computing device. The images may be compared to reference data to identify a difference between the images and the reference data. Image capture guidance may be provided on a display screen for capturing another one or more images of the object that includes the identified difference.
    Type: Grant
    Filed: November 22, 2019
    Date of Patent: October 18, 2022
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Matteo Munaro, Pavel Hanchar, Radu Bogdan Rusu
  • Publication number: 20220284544
    Abstract: Images of an undercarriage of a vehicle may be captured via one or more cameras. A point cloud may be determined based on the images. The point cloud may include points positioned in a virtual three-dimensional space. A stitched image may be determined based on the point cloud by projecting the point cloud onto a virtual camera view.
    Type: Application
    Filed: March 2, 2021
    Publication date: September 8, 2022
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Krunal Ketan Chande, Matteo Munaro, Pavel Hanchar, Aidas Liaudanskas, Wook Yeon Hwang, Blake McConnell, Johan Nordin, Milos Vlaski, Martin Markus Hubert Wawro, Nick Stetco, Martin Saelzle
  • Patent number: 11436275
    Abstract: Provided are mechanisms and processes for performing visual search using multi-view digital media representations, such as surround views. In one example, a process includes receiving a visual search query that includes a surround view of an object to be searched, where the surround view includes spatial information, scale information, and different viewpoint images of the object. The surround view is compared to stored surround views by comparing spatial information and scale information of the surround view to spatial information and scale information of the stored surround views. A correspondence measure is then generated indicating the degree of similarity between the surround view and a possible match. At least one search result is then transmitted with a corresponding image in response to the visual search query.
    Type: Grant
    Filed: August 29, 2019
    Date of Patent: September 6, 2022
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Alexander Jay Bruen Trevor, Pantelis Kalogiros, Ioannis Spanos, Radu Bogdan Rusu
  • Patent number: 11435869
    Abstract: Various embodiments of the present disclosure relate generally to systems and methods for generating multi-view interactive digital media representations in a virtual reality environment. According to particular embodiments, a plurality of images is fused into a first content model and a first context model, both of which include multi-view interactive digital media representations of objects. Next, a virtual reality environment is generated using the first content model and the first context model. The virtual reality environment includes a first layer and a second layer. The user can navigate through and within the virtual reality environment to switch between multiple viewpoints of the content model via corresponding physical movements. The first layer includes the first content model, the second layer includes a second content model, and selection of the first layer provides access to the second layer with the second content model.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: September 6, 2022
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu, Alexander Jay Bruen Trevor, Krunal Ketan Chande
  • Patent number: 11438565
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The determined angular views can be used to select from among the live images. The multi-view interactive digital media representation can include a plurality of images where each of the plurality of images includes the object from a different camera view. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
    Type: Grant
    Filed: April 19, 2019
    Date of Patent: September 6, 2022
    Assignee: Fyusion, Inc.
    Inventors: Alexander Jay Bruen Trevor, Chris Beall, Vladimir Glavtchev, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
  • Publication number: 20220254008
    Abstract: Images of an object may be captured by cameras located at fixed locations in space as the object travels through the cameras' fields of view. A three-dimensional model of the object may be determined using the images. A portion of the object that has been damaged may be identified based on the three-dimensional model and the images. A damage map of the object illustrating the portion of the object that has been damaged may be generated.
    Type: Application
    Filed: February 2, 2022
    Publication date: August 11, 2022
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Krunal Ketan Chande, Pavel Hanchar, Aidas Liaudanskas, Wook Yeon Hwang, Blake McConnell, Johan Nordin, Milos Vlaski
  • Publication number: 20220254007
    Abstract: Damage to an object such as a vehicle may be detected and presented based at least in part on image data. In some configurations, image data may be detected by causing the object to pass through a gate or portal on which cameras are located. Alternatively, or additionally, image data may be selected by a user operating a camera and moving around the object. The cameras may capture image data, which may be combined and analyzed to detect damage. Some or all of the image data and/or analysis of the image data may be presented in a viewer, which may allow a user to perform actions such as navigating around the object in a virtual environment, identifying and viewing areas of the object where damage has been detected, and accessing the results of the analysis.
    Type: Application
    Filed: February 2, 2022
    Publication date: August 11, 2022
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Krunal Ketan Chande, Julius Santiago, Pantelis Kalogiros, Raul Dronca, Ioannis Spanos, Pavel Hanchar, Aidas Liaudanskas, Santi Arano, Rodrigo Ortiz-Cayon
  • Patent number: 11354851
    Abstract: Reference images of an object may be mapped to an object model to create a reference object model representation. Evaluation images of the object may also be mapped to the object model via the processor to create an evaluation object model representation. Object condition information may be determined by comparing the reference object model representation with the evaluation object model representation. The object condition information may indicate one or more differences between the reference object model representation and the evaluation object model representation. A graphical representation of the object model that includes the object condition information may be displayed on a display screen.
    Type: Grant
    Filed: March 29, 2021
    Date of Patent: June 7, 2022
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Matteo Munaro, Pavel Hanchar, Radu Bogdan Rusu, Santi Arano
  • Publication number: 20220155945
    Abstract: Images may be captured from a plurality of cameras of an object moving along a path. Each of the cameras may be positioned at a respective identified location in three-dimensional space. Correspondence information for the plurality of images linking locations on different ones of the images may be determined. Linked locations may correspond to similar portions of the object captured by the cameras. A portion of the plurality of images may be presented on a display screen via a graphical user interface. The plurality of images may be grouped based on the correspondence information.
    Type: Application
    Filed: November 12, 2021
    Publication date: May 19, 2022
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Pavel Hanchar, Aidas Liaudanskas, Krunal Ketan Chande, Wook Yeon Hwang, Blake McConnell, Johan Nordin, Rodrigo Ortiz-Cayon, Ioannis Spanos, Nick Stetco, Milos Vlaski, Martin Markus Hubert Wawro, Endre Ajandi, Santi Arano, Mehjabeen Alim
  • Publication number: 20220156497
    Abstract: Images of an object may be captured via a camera at a mobile computing device at different viewpoints. The images may be used to identify components of the object and to generate estimates of damage to some or all of the components. Capture coverage levels corresponding with the components may be determined, and then recording guidance may be provided for capturing additional images to increase the capture coverage levels.
    Type: Application
    Filed: November 12, 2021
    Publication date: May 19, 2022
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Pavel Hanchar, Rodrigo Ortiz-Cayon, Aidas Liaudanskas
  • Publication number: 20220139036
    Abstract: Novel images may be generated using an image generator implemented on a processor. The image generator may receive as input neural features selected from a neural texture atlas. The image generator may also receive as input one or more position guides identifying position information for a plurality of input image pixels. The novel images may be evaluated using an image discriminator to determine a plurality of optimization values by comparing each of the plurality of novel images with a respective one of a corresponding plurality of input images. Each of the novel images may be generated from a respective camera pose relative to an object identical to that of the respective one of the corresponding plurality of input images. The image generator and the neural features may be updated based on the optimization values and stored on a storage device.
    Type: Application
    Filed: November 1, 2021
    Publication date: May 5, 2022
    Applicant: Fyusion, Inc.
    Inventors: Tobias Bertel, Yusuke Tomoto, Srinivas Rao, Krunal Ketan Chande, Rodrigo Ortiz-Cayon, Stefan Johannes Josef Holzer, Christian Richardt
  • Publication number: 20220108472
    Abstract: The pose of an object may be estimated based on fiducial points identified in a visual representation of the object. Each fiducial point may correspond with a component of the object, and may be associated with a first location in an image of the object and a second location in a 3D coordinate space. A 3D skeleton of the object may be determined by connecting the locations in the 3D space, and the object's pose may be determined based on the 3D skeleton.
    Type: Application
    Filed: October 15, 2021
    Publication date: April 7, 2022
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Pavel Hanchar, Abhishek Kar, Matteo Munaro, Krunal Ketan Chande, Radu Bogdan Rusu
  • Publication number: 20220058846
    Abstract: Various embodiments of the present invention relate generally to systems and methods for artificially rendering images using viewpoint interpolation and extrapolation. According to particular embodiments, a method includes moving a set of control points perpendicular to a trajectory between a first frame and a second frame, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location. The set of control points is associated with a layer and each control point is moved based on an associated depth of the control point. The method also includes generating an artificially rendered image corresponding to a third location outside of the trajectory by extrapolating individual control points using the set of control points for the third location and extrapolating pixel locations using the individual control points.
    Type: Application
    Filed: November 4, 2021
    Publication date: February 24, 2022
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Martin Saelzle, Radu Bogdan Rusu
  • Publication number: 20220060639
    Abstract: Various embodiments of the present invention relate generally to systems and processes for transforming a style of video data. In one embodiment, a neural network is used to interpolate native video data received from a camera system on a mobile device in real-time. The interpolation converts the live native video data into a particular style. For example, the style can be associated with a particular artist or a particular theme. The stylized video data can viewed on a display of the mobile device in a manner similar to which native live video data is output to the display. Thus, the stylized video data, which is viewed on the display, is consistent with a current position and orientation of the camera system on the display.
    Type: Application
    Filed: November 4, 2021
    Publication date: February 24, 2022
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Pavel Hanchar, Radu Bogdan Rusu, Martin Saelzle, Shuichi Tsutsumi, Stephen David Miller, George Haber
  • Patent number: 11252398
    Abstract: Configuration parameters associated with the generation of a cinematic video may identify object components, an order in which to display the object components, and an object type. A multi-view representation that is navigable in one or more dimensions and that includes images of an object captured from different viewpoints may be identified. A cinematic video of the object may be generated based on a subset of the images, arranged in an order based on the configuration parameters.
    Type: Grant
    Filed: January 12, 2021
    Date of Patent: February 15, 2022
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Rodrigo Ortiz Cayon, Julius Santiago, Milos Vlaski
  • Publication number: 20220012495
    Abstract: Provided are mechanisms and processes for visual feature tagging in multi-view interactive digital media representations (MIDMRs). In one example, a process includes receiving a visual feature tagging request that includes an MIDMR of an object to be searched, where the MIDMR includes spatial information, scale information, and different viewpoint images of the object. A visual feature in the MIDMR is identified, and visual feature correspondence information is created that links information identifying the visual feature with locations in the viewpoint images. At least one image associated with the MIDMR is transmitted in response to the feature tagging request.
    Type: Application
    Filed: September 23, 2021
    Publication date: January 13, 2022
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Pantelis Kalogiros, Radu Bogdan Rusu
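The core step described in publication 20220406003 — projecting 3D points into two views and determining transformations that link the corresponding image locations — can be illustrated with a minimal sketch. This is not code from the patent; all function names, camera parameters, and the choice of a homography fit via the direct linear transform (DLT) are illustrative assumptions.

```python
import numpy as np

def project(points_3d, K, R, t):
    """Project Nx3 world points into an image with intrinsics K and pose (R, t)."""
    cam = points_3d @ R.T + t          # world -> camera coordinates
    uv = cam @ K.T                     # camera -> homogeneous pixel coordinates
    return uv[:, :2] / uv[:, 2:3]      # perspective divide

def fit_homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst points via DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A, i.e. the last right-singular vector.
    _, _, vt = np.linalg.svd(np.asarray(A))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

For a set of coplanar points seen by two cameras, the fitted homography maps each first-image location exactly onto its second-image location, which corresponds to the "first transformations linking the first and second locations" in the abstract.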
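Similarly, the stitched-image step in publication 20220284544 (determining an image by projecting a point cloud onto a virtual camera view) can be sketched as a z-buffered point splat. This is a generic rendering sketch under assumed pinhole-camera conventions, not the patented method.

```python
import numpy as np

def render_point_cloud(points, colors, K, R, t, height, width):
    """Splat colored 3D points onto a virtual camera image using a z-buffer."""
    cam = points @ R.T + t             # world -> virtual camera coordinates
    in_front = cam[:, 2] > 0           # discard points behind the camera
    cam, colors = cam[in_front], colors[in_front]
    uv = cam @ K.T
    px = np.round(uv[:, :2] / uv[:, 2:3]).astype(int)
    image = np.zeros((height, width, 3), dtype=np.uint8)
    zbuf = np.full((height, width), np.inf)
    for (u, v), z, c in zip(px, cam[:, 2], colors):
        if 0 <= u < width and 0 <= v < height and z < zbuf[v, u]:
            zbuf[v, u] = z             # keep the nearest point per pixel
            image[v, u] = c
    return image
```

The z-buffer resolves occlusion: when two points project to the same pixel, only the one closer to the virtual camera is drawn.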