Patents by Inventor Josef Holzer
Josef Holzer has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20220284544
Abstract: Images of an undercarriage of a vehicle may be captured via one or more cameras. A point cloud may be determined based on the images. The point cloud may include points positioned in a virtual three-dimensional space. A stitched image may be determined based on the point cloud by projecting the point cloud onto a virtual camera view.
Type: Application
Filed: March 2, 2021
Publication date: September 8, 2022
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Krunal Ketan Chande, Matteo Munaro, Pavel Hanchar, Aidas Liaudanskas, Wook Yeon Hwang, Blake McConnell, Johan Nordin, Milos Vlaski, Martin Markus Hubert Wawro, Nick Stetco, Martin Saelzle
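As a rough illustration of the projection step described in this abstract, the sketch below (not the patented implementation) splats a colored point cloud onto the image plane of a virtual pinhole camera; the intrinsics, camera pose, and toy point data are invented for the example.

```python
import numpy as np

def project_point_cloud(points_xyz, colors, K, R, t, height, width):
    """Project 3D points (with per-point colors) onto a virtual pinhole camera.

    points_xyz: (N, 3) points in world coordinates.
    colors:     (N, 3) RGB values per point.
    K:          (3, 3) intrinsics of the virtual view.
    R, t:       world-to-camera rotation (3, 3) and translation (3,).
    Returns a (height, width, 3) stitched image.
    """
    cam = points_xyz @ R.T + t           # transform points into the camera frame
    in_front = cam[:, 2] > 1e-6          # keep only points in front of the camera
    cam, colors = cam[in_front], colors[in_front]

    uv = cam @ K.T                       # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)

    valid = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    image = np.zeros((height, width, 3), dtype=np.uint8)
    image[v[valid], u[valid]] = colors[valid]   # last-write splat (no z-buffer or hole filling)
    return image

# Toy usage with made-up data: 1000 random points on a plane below a "vehicle".
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-2, 2, 1000), rng.uniform(-1, 1, 1000), np.zeros(1000)])
cols = rng.integers(0, 255, (1000, 3))
K = np.array([[400.0, 0, 320], [0, 400.0, 240], [0, 0, 1]])
R = np.eye(3)                            # virtual camera looking straight down the +z axis
t = np.array([0.0, 0.0, 3.0])            # placed 3 units away from the plane
stitched = project_point_cloud(pts, cols, K, R, t, 480, 640)
print(stitched.shape)
```

A real stitching pipeline would additionally need depth ordering and hole filling; the sketch keeps only the projection itself.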
-
Patent number: 11435869
Abstract: Various embodiments of the present disclosure relate generally to systems and methods for generating multi-view interactive digital media representations in a virtual reality environment. According to particular embodiments, a plurality of images is fused into a first content model and a first context model, both of which include multi-view interactive digital media representations of objects. Next, a virtual reality environment is generated using the first content model and the first context model. The virtual reality environment includes a first layer and a second layer. The user can navigate through and within the virtual reality environment to switch between multiple viewpoints of the content model via corresponding physical movements. The first layer includes the first content model, and the second layer includes a second content model; selection of the first layer provides access to the second layer with the second content model.
Type: Grant
Filed: December 23, 2019
Date of Patent: September 6, 2022
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu, Alexander Jay Bruen Trevor, Krunal Ketan Chande
-
Patent number: 11438565
Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The determined angular views can be used to select from among the live images. The multi-view interactive digital media representation can include a plurality of images where each of the plurality of images includes the object from a different camera view. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
Type: Grant
Filed: April 19, 2019
Date of Patent: September 6, 2022
Assignee: Fyusion, Inc.
Inventors: Alexander Jay Bruen Trevor, Chris Beall, Vladimir Glavtchev, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
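The frame-selection idea, estimating angular views from inertial data and picking images at roughly even angular increments, can be sketched as follows. This is a simplified, hypothetical example assuming per-frame gyroscope yaw rates and timestamps are available; it is not the patented method itself.

```python
import numpy as np

def select_frames_by_angle(yaw_rates_dps, timestamps_s, spacing_deg=5.0):
    """Pick frame indices spaced roughly evenly in viewing angle.

    yaw_rates_dps: per-frame yaw rate from the gyroscope, in degrees/second.
    timestamps_s:  per-frame capture times, in seconds.
    Returns (selected_indices, cumulative_angles_deg).
    """
    dt = np.diff(timestamps_s, prepend=timestamps_s[0])
    angles = np.cumsum(yaw_rates_dps * dt)      # integrated rotation per frame

    selected, next_angle = [0], spacing_deg
    for i, a in enumerate(angles):
        if abs(a) >= next_angle:                # crossed the next angular bin
            selected.append(i)
            next_angle += spacing_deg
    return selected, angles

# Toy usage: 120 frames at 30 fps while the camera sweeps ~72 degrees around a car.
ts = np.arange(120) / 30.0
rates = np.full(120, 18.0)                      # constant 18 deg/s pan (made up)
idx, angles = select_frames_by_angle(rates, ts, spacing_deg=5.0)
print(len(idx), "frames selected spanning", round(float(angles[-1]), 1), "degrees")
```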
-
Patent number: 11436275
Abstract: Provided are mechanisms and processes for performing visual search using multi-view digital media representations, such as surround views. In one example, a process includes receiving a visual search query that includes a surround view of an object to be searched, where the surround view includes spatial information, scale information, and different viewpoint images of the object. The surround view is compared to stored surround views by comparing spatial information and scale information of the surround view to spatial information and scale information of the stored surround views. A correspondence measure is then generated indicating the degree of similarity between the surround view and a possible match. At least one search result is then transmitted with a corresponding image in response to the visual search query.
Type: Grant
Filed: August 29, 2019
Date of Patent: September 6, 2022
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Alexander Jay Bruen Trevor, Pantelis Kalogiros, Ioannis Spanos, Radu Bogdan Rusu
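A toy version of the comparison step might combine a spatial-feature similarity with a scale-agreement term into a single correspondence measure. The descriptor layout, weights, and scoring below are illustrative assumptions, not the actual search system.

```python
import numpy as np

def correspondence_measure(query, candidate):
    """Score how well a stored surround view matches a query surround view.

    Each view is a dict with a 'spatial' feature vector (fixed length) and a
    scalar 'scale' (e.g., estimated object size in meters).
    Returns a similarity in [0, 1]; higher means a closer match.
    """
    a = np.asarray(query["spatial"], float)
    b = np.asarray(candidate["spatial"], float)
    cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    spatial_sim = 0.5 * (cosine + 1.0)                  # map [-1, 1] -> [0, 1]

    s1, s2 = query["scale"], candidate["scale"]
    scale_sim = min(s1, s2) / max(s1, s2)               # 1.0 when scales agree
    return 0.7 * spatial_sim + 0.3 * scale_sim          # arbitrary illustrative weights

def search(query, stored_views, top_k=3):
    """Rank stored surround views against the query and return the best matches."""
    scored = [(correspondence_measure(query, v), v["id"]) for v in stored_views]
    return sorted(scored, key=lambda x: x[0], reverse=True)[:top_k]

# Toy usage with made-up descriptors.
rng = np.random.default_rng(1)
db = [{"id": f"item-{i}", "spatial": rng.normal(size=64), "scale": rng.uniform(0.5, 3.0)}
      for i in range(10)]
q = {"spatial": db[4]["spatial"] + 0.05 * rng.normal(size=64), "scale": db[4]["scale"]}
print(search(q, db))
```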
-
Publication number: 20220254007
Abstract: Damage to an object such as a vehicle may be detected and presented based at least in part on image data. In some configurations, image data may be captured by causing the object to pass through a gate or portal on which cameras are located. Alternatively, or additionally, image data may be collected by a user operating a camera and moving around the object. The cameras may capture image data, which may be combined and analyzed to detect damage. Some or all of the image data and/or analysis of the image data may be presented in a viewer, which may allow a user to perform actions such as navigating around the object in a virtual environment, identifying and viewing areas of the object where damage has been detected, and accessing the results of the analysis.
Type: Application
Filed: February 2, 2022
Publication date: August 11, 2022
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Krunal Ketan Chande, Julius Santiago, Pantelis Kalogiros, Raul Dronca, Ioannis Spanos, Pavel Hanchar, Aidas Liaudanskas, Santi Arano, Rodrigo Ortiz-Cayon
-
Publication number: 20220254008
Abstract: Images of an object may be captured by cameras located at fixed locations in space as the object travels through the cameras' fields of view. A three-dimensional model of the object may be determined using the images. A portion of the object that has been damaged may be identified based on the three-dimensional model and the images. A damage map of the object illustrating the portion of the object that has been damaged may be generated.
Type: Application
Filed: February 2, 2022
Publication date: August 11, 2022
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Krunal Ketan Chande, Pavel Hanchar, Aidas Liaudanskas, Wook Yeon Hwang, Blake McConnell, Johan Nordin, Milos Vlaski
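One way to picture the damage-map step: project the model's vertices into each calibrated camera and count how many views place a vertex inside a 2D damage detection. The sketch below does exactly that with made-up camera parameters and detection boxes; it is an illustration, not the patented pipeline.

```python
import numpy as np

def damage_map(vertices, cameras, damage_boxes):
    """Accumulate per-vertex damage evidence from 2D detections in calibrated views.

    vertices:     (V, 3) vertices of the object's 3D model.
    cameras:      list of dicts with 'K' (3x3), 'R' (3x3), 't' (3,) per fixed camera.
    damage_boxes: list (one entry per camera) of [u_min, v_min, u_max, v_max] boxes.
    Returns a (V,) array counting how many views flagged each vertex as damaged.
    """
    votes = np.zeros(len(vertices))
    for cam, boxes in zip(cameras, damage_boxes):
        pts = vertices @ cam["R"].T + cam["t"]          # model vertices in camera frame
        uv = pts @ cam["K"].T
        uv = uv[:, :2] / uv[:, 2:3]                     # pinhole projection
        for u0, v0, u1, v1 in boxes:
            inside = (uv[:, 0] >= u0) & (uv[:, 0] <= u1) & (uv[:, 1] >= v0) & (uv[:, 1] <= v1)
            votes += inside & (pts[:, 2] > 0)           # only count points in front of the camera
    return votes

# Toy usage: a cube of vertices, one camera, one detection box covering the image center.
verts = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)], float)
cam = {"K": np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]]),
       "R": np.eye(3), "t": np.array([0.0, 0.0, 5.0])}
print(damage_map(verts, [cam], [[[200, 150, 440, 330]]]))
```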
-
Patent number: 11354851
Abstract: Reference images of an object may be mapped to an object model to create a reference object model representation. Evaluation images of the object may also be mapped to the object model via a processor to create an evaluation object model representation. Object condition information may be determined by comparing the reference object model representation with the evaluation object model representation. The object condition information may indicate one or more differences between the reference object model representation and the evaluation object model representation. A graphical representation of the object model that includes the object condition information may be displayed on a display screen.
Type: Grant
Filed: March 29, 2021
Date of Patent: June 7, 2022
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Matteo Munaro, Pavel Hanchar, Radu Bogdan Rusu, Santi Arano
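The comparison of reference and evaluation representations can be illustrated with a small sketch. Here both representations are assumed, purely for illustration, to be per-region feature grids mapped onto the same object model; cells whose difference exceeds a threshold are reported as condition changes.

```python
import numpy as np

def condition_differences(reference, evaluation, threshold=0.25):
    """Compare reference and evaluation representations mapped onto the same object model.

    Both arguments are dicts mapping a model region name (e.g. "front_door_left")
    to a feature grid of identical shape for that region.  Returns, per region,
    the fraction of grid cells whose difference exceeds `threshold` -- a simple
    stand-in for "object condition information".
    """
    report = {}
    for region, ref_grid in reference.items():
        eval_grid = evaluation[region]
        diff = np.abs(np.asarray(eval_grid, float) - np.asarray(ref_grid, float))
        report[region] = float((diff > threshold).mean())
    return report

# Toy usage: two regions, one untouched and one with a simulated dent/scratch patch.
rng = np.random.default_rng(2)
ref = {"hood": rng.uniform(0, 1, (16, 16)), "front_door_left": rng.uniform(0, 1, (16, 16))}
ev = {"hood": ref["hood"] + rng.normal(0, 0.01, (16, 16)),
      "front_door_left": ref["front_door_left"].copy()}
ev["front_door_left"][4:8, 4:8] += 0.6                    # simulated damage on one patch
print(condition_differences(ref, ev))
```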
-
Publication number: 20220156497
Abstract: Images of an object may be captured via a camera at a mobile computing device at different viewpoints. The images may be used to identify components of the object and to estimate damage to some or all of the components. Capture coverage levels corresponding with the components may be determined, and then recording guidance may be provided for capturing additional images to increase the capture coverage levels.
Type: Application
Filed: November 12, 2021
Publication date: May 19, 2022
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Pavel Hanchar, Rodrigo Ortiz-Cayon, Aidas Liaudanskas
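A minimal sketch of capture coverage tracking, under the assumption that each captured image yields (component, viewing angle) observations: coverage is the fraction of angular bins in which a component has been seen, and guidance points at an uncovered bin. The bin count and target threshold are arbitrary illustrative choices.

```python
def coverage_and_guidance(detections, n_bins=8, target=0.75):
    """Track per-component capture coverage over viewing-angle bins.

    detections: list of (component_name, viewing_angle_deg) pairs, one per
                component recognized in a captured image.
    Returns (coverage, guidance): coverage maps each component to the fraction
    of angular bins in which it has been seen; guidance lists components that
    still need more views, with an angle to capture next.
    """
    bins = {}
    for component, angle in detections:
        b = int(angle % 360 // (360 / n_bins))
        bins.setdefault(component, set()).add(b)

    coverage = {c: len(seen) / n_bins for c, seen in bins.items()}
    guidance = []
    for c, seen in bins.items():
        if coverage[c] < target:
            missing = sorted(set(range(n_bins)) - seen)
            guidance.append((c, f"capture near {missing[0] * 360 // n_bins} degrees"))
    return coverage, guidance

# Toy usage: the bumper was seen all around, the left door only from two directions.
obs = [("front_bumper", a) for a in range(0, 360, 30)] + [("door_left", 80), ("door_left", 100)]
cov, tips = coverage_and_guidance(obs)
print(cov)
print(tips)
```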
-
Publication number: 20220155945
Abstract: Images of an object moving along a path may be captured from a plurality of cameras. Each of the cameras may be positioned at a respective identified location in three-dimensional space. Correspondence information for the plurality of images linking locations on different ones of the images may be determined. Linked locations may correspond to similar portions of the object captured by the cameras. A portion of the plurality of images may be presented on a display screen via a graphical user interface. The plurality of images may be grouped based on the correspondence information.
Type: Application
Filed: November 12, 2021
Publication date: May 19, 2022
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Pavel Hanchar, Aidas Liaudanskas, Krunal Ketan Chande, Wook Yeon Hwang, Blake McConnell, Johan Nordin, Rodrigo Ortiz-Cayon, Ioannis Spanos, Nick Stetco, Milos Vlaski, Martin Markus Hubert Wawro, Endre Ajandi, Santi Arano, Mehjabeen Alim
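The grouping step can be pictured as finding connected components over the correspondence links: images that share linked locations end up in the same group. The union-find sketch below assumes the correspondence information has already been reduced to pairwise image links.

```python
def group_images_by_correspondence(n_images, links):
    """Group images whose correspondence links connect them (union-find).

    links: iterable of (image_a, image_b) pairs meaning "these two images share
           linked locations showing a similar portion of the object".
    Returns a list of groups, each a sorted list of image indices.
    """
    parent = list(range(n_images))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for a, b in links:
        parent[find(a)] = find(b)           # union the two groups

    groups = {}
    for i in range(n_images):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

# Toy usage: 6 images from cameras along a path; links tie overlapping views together.
print(group_images_by_correspondence(6, [(0, 1), (1, 2), (4, 5)]))
# -> [[0, 1, 2], [3], [4, 5]]
```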
-
Publication number: 20220139036
Abstract: Novel images may be generated using an image generator implemented on a processor. The image generator may receive as input neural features selected from a neural texture atlas. The image generator may also receive as input one or more position guides identifying position information for a plurality of input image pixels. The novel images may be evaluated using an image discriminator to determine a plurality of optimization values by comparing each of the plurality of novel images with a respective one of a corresponding plurality of input images. Each of the novel images may be generated from a respective camera pose relative to an object identical to that of the respective one of the corresponding plurality of input images. The image generator and the neural features may be updated based on the optimization values and stored on a storage device.
Type: Application
Filed: November 1, 2021
Publication date: May 5, 2022
Applicant: Fyusion, Inc.
Inventors: Tobias Bertel, Yusuke Tomoto, Srinivas Rao, Krunal Ketan Chande, Rodrigo Ortiz-Cayon, Stefan Johannes Josef Holzer, Christian Richardt
-
Publication number: 20220108472
Abstract: The pose of an object may be estimated based on fiducial points identified in a visual representation of the object. Each fiducial point may correspond with a component of the object, and may be associated with a first location in an image of the object and a second location in a 3D coordinate space. A 3D skeleton of the object may be determined by connecting the locations in the 3D space, and the object's pose may be determined based on the 3D skeleton.
Type: Application
Filed: October 15, 2021
Publication date: April 7, 2022
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Pavel Hanchar, Abhishek Kar, Matteo Munaro, Krunal Ketan Chande, Radu Bogdan Rusu
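Once each fiducial point has both a canonical 3D location and an observed 3D location, a rigid pose can be recovered by aligning the two point sets. The sketch below uses the classic Kabsch algorithm on made-up skeleton points; it illustrates only this final alignment step, not the fiducial detection itself.

```python
import numpy as np

def estimate_pose(canonical_points, observed_points):
    """Estimate a rigid pose (R, t) aligning canonical fiducial points to observed ones.

    canonical_points: (N, 3) component locations in a canonical object frame.
    observed_points:  (N, 3) the same fiducial points reconstructed in 3D space,
                      in corresponding order.
    Uses the Kabsch algorithm (SVD of the cross-covariance matrix).
    """
    ca, ob = np.asarray(canonical_points, float), np.asarray(observed_points, float)
    mu_c, mu_o = ca.mean(axis=0), ob.mean(axis=0)
    H = (ca - mu_c).T @ (ob - mu_o)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_o - R @ mu_c
    return R, t

# Toy usage: rotate a made-up 5-point "skeleton" by 30 degrees about z and recover the pose.
canon = np.array([[0, 0, 0], [2, 0, 0], [2, 1, 0], [0, 1, 0], [1, 0.5, 0.5]], float)
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
obs = canon @ R_true.T + np.array([0.3, -0.2, 1.0])
R_est, t_est = estimate_pose(canon, obs)
print(np.round(R_est, 3))
print(np.round(t_est, 3))
```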
-
Publication number: 20220058846
Abstract: Various embodiments of the present invention relate generally to systems and methods for artificially rendering images using viewpoint interpolation and extrapolation. According to particular embodiments, a method includes moving a set of control points perpendicular to a trajectory between a first frame and a second frame, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location. The set of control points is associated with a layer and each control point is moved based on an associated depth of the control point. The method also includes generating an artificially rendered image corresponding to a third location outside of the trajectory by extrapolating individual control points using the set of control points for the third location and extrapolating pixel locations using the individual control points.
Type: Application
Filed: November 4, 2021
Publication date: February 24, 2022
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Martin Saelzle, Radu Bogdan Rusu
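A toy rendition of the extrapolation idea: each control point is displaced by a parallax inversely proportional to its depth, scaled by how far the requested viewpoint lies beyond the captured trajectory, and pixels then follow their nearest control point. The horizontal trajectory, nearest-neighbor warp, and toy data are simplifying assumptions, not the patented renderer.

```python
import numpy as np

def extrapolate_control_points(points_uv, depths, baseline_px, s):
    """Displace 2D control points for a viewpoint outside the captured trajectory.

    points_uv:   (N, 2) control point locations in the first frame (pixels).
    depths:      (N,) depth associated with each control point.
    baseline_px: parallax (in pixels) that a unit-depth point undergoes between
                 the first and second captured frames.
    s:           position along the trajectory; 0 = first frame, 1 = second frame,
                 values outside [0, 1] extrapolate beyond the captured path.
    """
    shift = s * baseline_px / np.asarray(depths)[:, None]   # closer points move more
    direction = np.array([1.0, 0.0])                         # assume a horizontal trajectory
    return np.asarray(points_uv, float) + shift * direction

def warp_with_nearest_control_point(image, points_uv, new_points_uv):
    """Crude warp: move every pixel by the displacement of its nearest control point."""
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    nearest = np.argmin(((pix[:, None, :] - points_uv[None, :, :]) ** 2).sum(-1), axis=1)
    disp = (new_points_uv - points_uv)[nearest]
    src = np.clip(np.round(pix - disp), 0, [w - 1, h - 1]).astype(int)
    out[ys.ravel(), xs.ravel()] = image[src[:, 1], src[:, 0]]
    return out

# Toy usage: a tiny 60x80 image, three control points at different depths, and
# s = 1.5, i.e. an artificial view beyond the second captured frame.
rng = np.random.default_rng(3)
img = rng.integers(0, 255, (60, 80, 3)).astype(np.uint8)
cps = np.array([[20.0, 30.0], [40.0, 15.0], [60.0, 45.0]])
depths = np.array([1.0, 2.0, 4.0])
new_cps = extrapolate_control_points(cps, depths, baseline_px=8.0, s=1.5)
novel = warp_with_nearest_control_point(img, cps, new_cps)
print(new_cps)
print(novel.shape)
```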
-
Publication number: 20220060639
Abstract: Various embodiments of the present invention relate generally to systems and processes for transforming a style of video data. In one embodiment, a neural network is used to interpolate native video data received from a camera system on a mobile device in real-time. The interpolation converts the live native video data into a particular style. For example, the style can be associated with a particular artist or a particular theme. The stylized video data can be viewed on a display of the mobile device in a manner similar to that in which native live video data is output to the display. Thus, the stylized video data viewed on the display is consistent with the current position and orientation of the camera system.
Type: Application
Filed: November 4, 2021
Publication date: February 24, 2022
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Pavel Hanchar, Radu Bogdan Rusu, Martin Saelzle, Shuichi Tsutsumi, Stephen David Miller, George Haber
-
Patent number: 11252398
Abstract: Configuration parameters associated with the generation of a cinematic video may identify object components, an order in which to display the object components, and an object type. A multi-view representation that is navigable in one or more dimensions and that includes images of an object captured from different viewpoints may be identified. A cinematic video of the object may be generated based on a subset of the images, arranged in an order based on the configuration parameters.
Type: Grant
Filed: January 12, 2021
Date of Patent: February 15, 2022
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Rodrigo Ortiz Cayon, Julius Santiago, Milos Vlaski
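The planning step can be sketched as a simple selection and ordering problem: given the configured components and their order, pick a few images per component from the multi-view representation and concatenate them. The configuration keys and image-id index below are hypothetical.

```python
def plan_cinematic_sequence(config, multiview_index):
    """Choose and order images for a cinematic video from configuration parameters.

    config:          dict with 'object_type', 'components' (which parts to show),
                     and 'component_order' (the order in which to show them).
    multiview_index: dict mapping a component name to the list of image ids (from
                     the multi-view representation) in which that component is visible.
    Returns the ordered list of image ids to render into the video.
    """
    sequence = []
    wanted = [c for c in config["component_order"] if c in config["components"]]
    for component in wanted:
        frames = multiview_index.get(component, [])
        sequence.extend(frames[:3])          # take up to a few frames per component
    return sequence

# Toy usage with made-up configuration and image ids.
cfg = {"object_type": "sedan",
       "components": ["front_bumper", "wheel_front_left", "rear_bumper"],
       "component_order": ["front_bumper", "wheel_front_left", "rear_bumper"]}
index = {"front_bumper": ["img_02", "img_03", "img_04", "img_05"],
         "wheel_front_left": ["img_10", "img_11"],
         "rear_bumper": ["img_20", "img_21", "img_22"]}
print(plan_cinematic_sequence(cfg, index))
```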
-
Publication number: 20220012495
Abstract: Provided are mechanisms and processes for visual feature tagging in multi-view interactive digital media representations (MIDMRs). In one example, a process includes receiving a visual feature tagging request that includes an MIDMR of an object to be searched, where the MIDMR includes spatial information, scale information, and different viewpoint images of the object. A visual feature in the MIDMR is identified, and visual feature correspondence information is created that links information identifying the visual feature with locations in the viewpoint images. At least one image associated with the MIDMR is transmitted in response to the feature tagging request.
Type: Application
Filed: September 23, 2021
Publication date: January 13, 2022
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Pantelis Kalogiros, Radu Bogdan Rusu
-
Patent number: 11202017
Abstract: Various embodiments of the present invention relate generally to systems and processes for transforming a style of video data. In one embodiment, a neural network is used to interpolate native video data received from a camera system on a mobile device in real-time. The interpolation converts the live native video data into a particular style. For example, the style can be associated with a particular artist or a particular theme. The stylized video data can be viewed on a display of the mobile device in a manner similar to that in which native live video data is output to the display. Thus, the stylized video data viewed on the display is consistent with the current position and orientation of the camera system.
Type: Grant
Filed: September 27, 2017
Date of Patent: December 14, 2021
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Pavel Hanchar, Radu Bogdan Rusu, Martin Saelzle, Shuichi Tsutsumi, Stephen David Miller, George Haber
-
Patent number: 11195314
Abstract: Various embodiments of the present invention relate generally to systems and methods for artificially rendering images using viewpoint interpolation and extrapolation. According to particular embodiments, a method includes moving a set of control points perpendicular to a trajectory between a first frame and a second frame, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location. The set of control points is associated with a layer and each control point is moved based on an associated depth of the control point. The method also includes generating an artificially rendered image corresponding to a third location outside of the trajectory by extrapolating individual control points using the set of control points for the third location and extrapolating pixel locations using the individual control points.
Type: Grant
Filed: November 2, 2018
Date of Patent: December 7, 2021
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Martin Saelzle, Radu Bogdan Rusu
-
Patent number: 11176704
Abstract: The pose of an object may be estimated based on fiducial points identified in a visual representation of the object. Each fiducial point may correspond with a component of the object, and may be associated with a first location in an image of the object and a second location in a 3D space. A 3D skeleton of the object may be determined by connecting the locations in the 3D space, and the object's pose may be determined based on the 3D skeleton.
Type: Grant
Filed: July 22, 2019
Date of Patent: November 16, 2021
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Pavel Hanchar, Abhishek Kar, Matteo Munaro, Krunal Ketan Chande, Radu Bogdan Rusu
-
Publication number: 20210344891
Abstract: Various embodiments describe systems and processes for capturing and generating multi-view interactive digital media representations (MIDMRs). In one aspect, a method for automatically generating a MIDMR comprises obtaining a first MIDMR and a second MIDMR. The first MIDMR includes a convex or concave motion capture using a recording device and is a general object MIDMR. The second MIDMR is a specific feature MIDMR. The first and second MIDMRs may be obtained using different capture motions. A third MIDMR is generated from the first and second MIDMRs, and is a combined embedded MIDMR. The combined embedded MIDMR may comprise the second MIDMR being embedded in the first MIDMR, forming an embedded second MIDMR. The third MIDMR may include a general view in which the first MIDMR is displayed for interactive viewing by a user on a user device. The embedded second MIDMR may not be viewable in the general view.
Type: Application
Filed: July 12, 2021
Publication date: November 4, 2021
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Dave Morrison, Radu Bogdan Rusu, George Haber, Keith Martin
-
Publication number: 20210312719
Abstract: Provided are mechanisms and processes for inserting a visual element into a multi-view interactive digital media representation (MVIDMR). In one example, a process includes analyzing an MVIDMR to determine if there is an appropriate location to insert a visual element. Once a location is found, the type of visual element appropriate for the location is determined, where the type of visual element includes either a three-dimensional object to be inserted in the MVIDMR or a two-dimensional image to be inserted as or projected onto a background or object in the MVIDMR. A visual element that is appropriate for the location is then retrieved and inserted into the MVIDMR, such that the visual element is integrated into the MVIDMR and navigable by a user.
Type: Application
Filed: June 21, 2021
Publication date: October 7, 2021
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Pantelis Kalogiros, George Haber, Radu Bogdan Rusu
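For the two-dimensional case (an image projected onto a background or surface in the MVIDMR), the insertion for a single frame can be sketched as a homography warp onto a destination quadrilateral. The direct-linear-transform solve and the toy banner below are illustrative only; per-frame destination corners would in practice come from the MVIDMR's geometry.

```python
import numpy as np

def overlay_with_homography(frame, element, corners_uv):
    """Paste a 2D visual element onto a quadrilateral region of one MVIDMR frame.

    frame:      (H, W, 3) destination image for one viewpoint.
    element:    (h, w, 3) image of the visual element to insert.
    corners_uv: (4, 2) destination corners (in frame pixels) of the element's
                corners, ordered top-left, top-right, bottom-right, bottom-left.
    Computes the homography with a direct linear transform and fills the region
    by inverse-mapping each destination pixel back into the element.
    """
    h, w = element.shape[:2]
    src = np.array([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]], float)
    dst = np.asarray(corners_uv, float)

    # Direct linear transform: build the 8x9 system for the 4 correspondences.
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    H = np.linalg.svd(np.array(rows))[2][-1].reshape(3, 3)   # null-space solution
    H_inv = np.linalg.inv(H)

    out = frame.copy()
    u0, v0 = dst.min(axis=0).astype(int)
    u1, v1 = dst.max(axis=0).astype(int) + 1
    for v in range(max(v0, 0), min(v1, frame.shape[0])):
        for u in range(max(u0, 0), min(u1, frame.shape[1])):
            p = H_inv @ np.array([u, v, 1.0])
            x, y = p[0] / p[2], p[1] / p[2]
            if 0 <= x <= w - 1 and 0 <= y <= h - 1:
                out[v, u] = element[int(round(y)), int(round(x))]
    return out

# Toy usage: insert a small flat-colored "banner" onto a tilted quad in one frame.
frame = np.zeros((120, 160, 3), np.uint8)
banner = np.full((20, 40, 3), (0, 200, 255), np.uint8)
quad = [(50, 30), (120, 40), (115, 80), (45, 70)]
print(overlay_with_homography(frame, banner, quad).sum() > 0)
```

Repeating this per viewpoint, with the destination corners tracked across frames, is what would keep the inserted element navigable along with the rest of the representation.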