Patents by Inventor Radu Bogdan Rusu
Radu Bogdan Rusu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12203872
Abstract: Images of an object may be captured at a computing device. Each of the images may be captured from a respective viewpoint based on image capture configuration information identifying one or more parameter values. A multiview image digital media representation of the object may be generated that includes some or all of the images of the object and that is navigable in one or more dimensions.
Type: Grant
Filed: June 17, 2021
Date of Patent: January 21, 2025
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Santiago Arano Perez, Abhishek Kar, Matteo Munaro, Pavel Hanchar, Radu Bogdan Rusu, Martin Markus Hubert Wawro, Ashley Wakefield, Rodrigo Ortiz-Cayon, Josh Faust, Jai Chaudhry, Nico Gregor Sebastian Blodow, Mike Penz
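The abstract above outlines capturing images from configured viewpoints and assembling them into a navigable multiview representation. Below is a minimal Python sketch of that idea, assuming a simple one-image-per-viewpoint container; the CaptureConfig and MultiviewImage names and the wrap-around navigation are illustrative assumptions, not details from the patent.

```python
# Minimal sketch (not Fyusion's implementation): a multiview container built
# from per-viewpoint captures, navigable along one dimension by index.
from dataclasses import dataclass, field
from typing import Callable, Dict, Tuple

@dataclass
class CaptureConfig:
    num_viewpoints: int = 36                 # e.g. one image every 10 degrees
    resolution: Tuple[int, int] = (1920, 1080)

@dataclass
class MultiviewImage:
    # Maps a viewpoint index (0..num_viewpoints-1) to raw image bytes.
    frames: Dict[int, bytes] = field(default_factory=dict)

    def add_frame(self, viewpoint: int, image: bytes) -> None:
        self.frames[viewpoint] = image

    def navigate(self, current: int, step: int) -> int:
        # Wrap around so the object can be orbited continuously.
        return (current + step) % max(len(self.frames), 1)

def capture_object(config: CaptureConfig,
                   capture_fn: Callable[[int, Tuple[int, int]], bytes]) -> MultiviewImage:
    """Capture one image per viewpoint using the supplied camera callback."""
    mv = MultiviewImage()
    for viewpoint in range(config.num_viewpoints):
        mv.add_frame(viewpoint, capture_fn(viewpoint, config.resolution))
    return mv
```

Navigation here is a single index step along one orbit dimension; the representation described in the patent is navigable in one or more dimensions.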
-
Patent number: 12204869
Abstract: A tag characterizing a portion of a multi-view interactive digital media representation (MVIDMR) may be determined by applying a grammar to natural language data. The MVIDMR may include images of an object and may be navigable in one or more dimensions. An object model location for the tag identifying a location within a three-dimensional object model may be determined by applying the grammar to the natural language data. The tag may then be applied to the MVIDMR by associating it with two or more of the images at positions determined based on the object model location.
Type: Grant
Filed: April 28, 2020
Date of Patent: January 21, 2025
Assignee: Fyusion, Inc.
Inventors: Abhishek Kar, Martin Markus Hubert Wawro, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
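As a rough illustration of the idea in this abstract, the sketch below uses a toy keyword "grammar" to map a natural-language note to a part name, looks up that part's location on a 3D object model, and attaches the tag to each image through a caller-supplied projection function. The part list, locations, and function names are all hypothetical.

```python
# Illustrative sketch only: a toy keyword "grammar" maps a natural-language
# note to a named part, looks up that part's location on a 3D object model,
# and attaches the tag to every image via a caller-supplied projection.
from typing import Callable, Dict, List, Tuple

Vec3 = Tuple[float, float, float]
Vec2 = Tuple[float, float]

# Hypothetical object-model locations for a handful of parts.
PART_LOCATIONS: Dict[str, Vec3] = {
    "front left door": (-0.9, 1.2, 0.7),
    "rear bumper": (0.0, -2.1, 0.4),
    "windshield": (0.0, 0.8, 1.3),
}

def parse_tag(text: str) -> Tuple[str, Vec3]:
    """Return (part name, object-model location) for the first part mentioned."""
    lowered = text.lower()
    for part, location in PART_LOCATIONS.items():
        if part in lowered:
            return part, location
    raise ValueError(f"no known part found in: {text!r}")

def apply_tag(text: str, image_ids: List[str],
              project: Callable[[str, Vec3], Vec2]) -> Dict[str, Vec2]:
    """Associate the tag with each image at the projected 2-D position."""
    _, location = parse_tag(text)
    return {image_id: project(image_id, location) for image_id in image_ids}
```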
-
Patent number: 12190916
Abstract: According to particular embodiments, one process includes retrieving a multi-view interactive digital media representation that includes numerous images fused together into content and context models. The process next includes retrieving and processing audio data to be integrated into the multi-view interactive digital media representation. A first segment of audio data may be associated with a first position in the multi-view interactive digital media representation. In other examples, a first segment of audio data may be associated with a visual position or the location of a camera in the multi-view interactive digital media representation. The audio data may be played in coordination with the multi-view interactive digital media representation based on a user's navigation through the multi-view interactive digital media representation, where the first segment is played when the first position or first visual position is reached.
Type: Grant
Filed: August 29, 2023
Date of Patent: January 7, 2025
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Vladimir Roumenov Glavtchev, Alexander Jay Bruen Trevor
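A minimal sketch of position-triggered audio as described above: each segment is keyed to a position in the representation, and reaching that position during navigation starts playback. The AudioTrack class and play callback are placeholders, not an actual audio API.

```python
# Placeholder sketch: audio segments keyed to positions in the representation;
# reaching a position during navigation triggers playback via a callback.
from typing import Callable, Dict, Set

class AudioTrack:
    def __init__(self) -> None:
        self._segments: Dict[int, bytes] = {}
        self._played: Set[int] = set()

    def attach(self, position: int, segment: bytes) -> None:
        self._segments[position] = segment

    def on_navigate(self, position: int, play: Callable[[bytes], None]) -> None:
        # Play a segment the first time its position is reached.
        if position in self._segments and position not in self._played:
            play(self._segments[position])
            self._played.add(position)
```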
-
Publication number: 20240412411
Abstract: The pose of an object may be estimated based on fiducial points identified in a visual representation of the object. Each fiducial point may correspond with a component of the object, and may be associated with a first location in an image of the object and a second location in a 3D coordinate space. A 3D skeleton of the object may be determined by connecting the locations in the 3D space, and the object's pose may be determined based on the 3D skeleton.
Type: Application
Filed: August 22, 2024
Publication date: December 12, 2024
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef HOLZER, Pavel HANCHAR, Abhishek KAR, Matteo MUNARO, Krunal Ketan CHANDE, Radu Bogdan RUSU
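To make the fiducial-point idea concrete, here is a hedged Python sketch: each fiducial carries an image location and a 3D location, the 3D locations are connected into a skeleton, and a coarse yaw is read off the front-to-rear axis. The component names, skeleton edges, and yaw-only pose are simplifying assumptions, not details from the application.

```python
# Hedged sketch: fiducial points carry an image location and a 3D location;
# the 3D locations are connected into a skeleton, and a coarse yaw (heading)
# is estimated from the front-to-rear axis. Names are illustrative.
import math
from dataclasses import dataclass
from typing import Dict, List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class Fiducial:
    component: str                      # e.g. "front_left_wheel"
    image_xy: Tuple[float, float]       # first location: in the image
    space_xyz: Vec3                     # second location: in 3D space

SKELETON_EDGES: List[Tuple[str, str]] = [
    ("front_left_wheel", "front_right_wheel"),
    ("front_left_wheel", "rear_left_wheel"),
    ("front_right_wheel", "rear_right_wheel"),
]

def build_skeleton(fiducials: List[Fiducial]) -> List[Tuple[Vec3, Vec3]]:
    """Connect 3D fiducial locations along the predefined skeleton edges."""
    by_name: Dict[str, Fiducial] = {f.component: f for f in fiducials}
    return [(by_name[a].space_xyz, by_name[b].space_xyz)
            for a, b in SKELETON_EDGES if a in by_name and b in by_name]

def estimate_yaw(fiducials: List[Fiducial]) -> float:
    """Yaw of the front-to-rear axis in degrees, read off the skeleton."""
    by_name = {f.component: f for f in fiducials}
    front = by_name["front_left_wheel"].space_xyz
    rear = by_name["rear_left_wheel"].space_xyz
    return math.degrees(math.atan2(front[1] - rear[1], front[0] - rear[0]))
```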
-
Publication number: 20240386585
Abstract: Mappings are determined between viewpoints of an object and an object model representing the object. Each mapping identifies a location on the object model corresponding with a portion of the object captured in one of the viewpoints. Defect identifiers for the object model are created based on the mappings, where each defect identifier links one of the viewpoints to one of the locations on the object model. A user interface that includes the object model and the defect identifiers is provided for presentation on a display screen. One of the viewpoints is presented in the user interface when the corresponding defect identifier is selected in the object model.
Type: Application
Filed: July 22, 2024
Publication date: November 21, 2024
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Radu Bogdan Rusu
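Below is an illustrative sketch of the linkage described above: each defect identifier ties a location on the object model to the viewpoint in which that portion of the object appears, so selecting the identifier in a UI can retrieve the matching viewpoint. Class and method names are assumptions for illustration.

```python
# Illustrative sketch: a defect identifier links a location on the object
# model to the viewpoint image that shows that portion of the object, so a
# UI can pull up the viewpoint when the identifier is selected.
from dataclasses import dataclass
from typing import Dict, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class DefectIdentifier:
    model_location: Vec3   # where the defect sits on the object model
    viewpoint_id: str      # which captured viewpoint shows it

class DefectIndex:
    def __init__(self) -> None:
        self._defects: Dict[str, DefectIdentifier] = {}

    def add(self, defect_id: str, viewpoint_id: str, location: Vec3) -> None:
        self._defects[defect_id] = DefectIdentifier(location, viewpoint_id)

    def viewpoint_for(self, defect_id: str) -> str:
        # Called when the identifier is selected on the rendered object model.
        return self._defects[defect_id].viewpoint_id
```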
-
Publication number: 20240378736
Abstract: Mappings are determined between viewpoints of an object and an object model representing the object. Each mapping identifies a location on the object model corresponding with a portion of the object captured in one of the viewpoints. Tags for the object model are created based on the mappings, where each tag links one of the viewpoints to one of the locations on the object model. A user interface that includes the object model and the tags is provided for presentation on a display screen. One of the viewpoints is presented in the user interface when the corresponding tag is selected in the object model.
Type: Application
Filed: July 22, 2024
Publication date: November 14, 2024
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Radu Bogdan Rusu
-
Patent number: 12131502
Abstract: The pose of an object may be estimated based on fiducial points identified in a visual representation of the object. Each fiducial point may correspond with a component of the object, and may be associated with a first location in an image of the object and a second location in a 3D coordinate space. A 3D skeleton of the object may be determined by connecting the locations in the 3D space, and the object's pose may be determined based on the 3D skeleton.
Type: Grant
Filed: July 17, 2023
Date of Patent: October 29, 2024
Assignee: FYUSION, INC.
Inventors: Stefan Johannes Josef Holzer, Pavel Hanchar, Abhishek Kar, Matteo Munaro, Krunal Ketan Chande, Radu Bogdan Rusu
-
Patent number: 12073574
Abstract: Mappings are determined between viewpoints of an object and an object model representing the object. Each mapping identifies a location on the object model corresponding with a portion of the object captured in one of the viewpoints. Tags for the object model are created based on the mappings, where each tag links one of the viewpoints to one of the locations on the object model. A user interface that includes the object model and the tags is provided for presentation on a display screen. One of the viewpoints is presented in the user interface when the corresponding tag is selected in the object model.
Type: Grant
Filed: August 28, 2023
Date of Patent: August 27, 2024
Assignee: FYUSION, INC.
Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Radu Bogdan Rusu
-
Publication number: 20240267481
Abstract: Provided are mechanisms and processes for scene-aware selection of filters and effects for visual digital media content. In one example, a digital media item is analyzed with a processor to identify one or more characteristics associated with the digital media item, where the characteristics include a physical object represented in the digital media item. Based on the identified characteristics, a digital media modification is selected from a plurality of digital media modifications for application to the digital media item. The digital media modification may then be provided for presentation in a user interface for selection by a user.
Type: Application
Filed: April 14, 2024
Publication date: August 8, 2024
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef HOLZER, Matteo MUNARO, Abhishek KAR, Alexander Jay Bruen TREVOR, Krunal Ketan CHANDE, Michelle Jung-Ah HO, Radu Bogdan RUSU
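A rough sketch of scene-aware selection under stated assumptions: a stubbed analysis step returns characteristics such as the kind of object depicted, and a catalog keyed by characteristic decides which modifications to offer. The catalog contents and the analyze stub are invented for illustration.

```python
# Rough sketch: a stubbed analysis step returns characteristics of the media
# item, and a catalog keyed by characteristic picks the modifications to offer.
from typing import Dict, List

# Hypothetical catalog; the real set of filters and effects is not specified here.
FILTER_CATALOG: Dict[str, List[str]] = {
    "car": ["showroom_lighting", "license_plate_blur"],
    "person": ["portrait_bokeh", "background_blur"],
    "landscape": ["hdr_boost", "sky_enhance"],
}

def analyze(media_item: bytes) -> List[str]:
    """Placeholder for a real classifier; returns detected characteristics."""
    return ["car"]

def select_modifications(media_item: bytes) -> List[str]:
    mods: List[str] = []
    for characteristic in analyze(media_item):
        mods.extend(FILTER_CATALOG.get(characteristic, []))
    return mods  # offered in the UI for the user to choose from
```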
-
Publication number: 20240257445
Abstract: A plurality of images may be analyzed to determine an object model. The object model may have a plurality of components, and each of the images may correspond with one or more of the components. Component condition information may be determined for one or more of the components based on the images. The component condition information may indicate damage incurred by the object portion corresponding with the component.
Type: Application
Filed: April 11, 2024
Publication date: August 1, 2024
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef HOLZER, Abhishek KAR, Matteo MUNARO, Pavel HANCHAR, Radu Bogdan RUSU, Santi ARANO
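The sketch below illustrates one way the described aggregation could look: each image is associated with the components it shows, a stubbed detector scores damage per image, and the worst score per component becomes its condition information. The detector and the max aggregation are assumptions, not the patented method.

```python
# Minimal sketch: each image is associated with the object-model components it
# shows, a stubbed detector scores damage per image, and scores are aggregated
# into per-component condition information (worst observation wins).
from collections import defaultdict
from typing import Dict, Iterable, List, Tuple

def detect_damage(image: bytes) -> float:
    """Placeholder for a learned damage detector; returns a 0..1 severity."""
    return 0.0

def component_condition(
    images: Iterable[Tuple[bytes, List[str]]]  # (image, components it shows)
) -> Dict[str, float]:
    scores: Dict[str, List[float]] = defaultdict(list)
    for image, components in images:
        severity = detect_damage(image)
        for component in components:
            scores[component].append(severity)
    return {component: max(values) for component, values in scores.items()}
```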
-
Publication number: 20240242337
Abstract: A background scenery portion may be identified in each of a plurality of image sets of an object, where each image set includes images captured simultaneously from different cameras. A correspondence between the image sets may be determined, where the correspondence tracks control points associated with the object and present in multiple images. A multi-view interactive digital media representation of the object that is navigable in one or more dimensions and that includes the image sets may be generated and stored.
Type: Application
Filed: March 28, 2024
Publication date: July 18, 2024
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Matteo Munaro, Pavel Hanchar, Radu Bogdan Rusu
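As a simplified illustration of the correspondence step described above, the sketch below matches control-point feature descriptors between two simultaneously captured image sets by nearest-neighbour distance. Descriptor matching and the distance threshold are assumptions standing in for whatever tracking the actual system uses.

```python
# Simplified sketch: match control-point descriptors between two image sets by
# nearest-neighbour distance, producing the correspondences a later step can
# use to track the object and separate it from background scenery.
from typing import List, Tuple
import numpy as np

def match_control_points(
    descriptors_a: np.ndarray,  # (N, D) descriptors from image set A
    descriptors_b: np.ndarray,  # (M, D) descriptors from image set B
    max_distance: float = 0.7,
) -> List[Tuple[int, int]]:
    """Return (index_in_a, index_in_b) pairs whose distance is under the threshold."""
    matches: List[Tuple[int, int]] = []
    for i, descriptor in enumerate(descriptors_a):
        distances = np.linalg.norm(descriptors_b - descriptor, axis=1)
        j = int(np.argmin(distances))
        if distances[j] < max_distance:
            matches.append((i, j))
    return matches
```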
-
Publication number: 20240221402
Abstract: A multi-view interactive digital media representation (MVIDMR) of an object can be generated from live images of an object captured from a camera. Selectable tags can be placed at locations on the object in the MVIDMR. When the selectable tags are selected, media content can be output which shows details of the object at the location where the selectable tag is placed. A machine learning algorithm can be used to automatically recognize landmarks on the object in the frames of the MVIDMR, and a structure from motion calculation can be used to determine 3-D positions associated with the landmarks. A 3-D skeleton associated with the object can be assembled from the 3-D positions and projected into the frames associated with the MVIDMR. The 3-D skeleton can be used to determine the selectable tag locations in the frames of the MVIDMR of the object.
Type: Application
Filed: March 17, 2024
Publication date: July 4, 2024
Applicant: Fyusion, Inc.
Inventors: Chris Beall, Abhishek Kar, Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Pavel Hanchar
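A hedged sketch of the projection step: given 3-D landmark positions (such as those recovered by structure from motion) and per-frame camera matrices, each landmark is projected into each frame to get the 2-D location where a selectable tag could be drawn. The pinhole projection and data layout are assumptions.

```python
# Hedged sketch: project 3-D landmark positions into each frame using that
# frame's 3x4 camera matrix, yielding the 2-D locations for selectable tags.
from typing import Dict, List
import numpy as np

def project_landmarks(
    landmarks_3d: Dict[str, np.ndarray],   # name -> (3,) point in world space
    cameras: List[np.ndarray],             # per-frame 3x4 projection matrices
) -> List[Dict[str, np.ndarray]]:
    per_frame_tags: List[Dict[str, np.ndarray]] = []
    for P in cameras:
        tags = {}
        for name, X in landmarks_3d.items():
            x = P @ np.append(X, 1.0)      # homogeneous pinhole projection
            tags[name] = x[:2] / x[2]      # pixel coordinates in this frame
        per_frame_tags.append(tags)
    return per_frame_tags
```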
-
Publication number: 20240211511
Abstract: Provided are mechanisms and processes for performing visual search using multi-view digital media representations, such as surround views. In one example, a process includes receiving a visual search query that includes a surround view of an object to be searched, where the surround view includes spatial information, scale information, and different viewpoint images of the object. The surround view is compared to stored surround views by comparing spatial information and scale information of the surround view to spatial information and scale information of the stored surround views. A correspondence measure is then generated indicating the degree of similarity between the surround view and a possible match. At least one search result is then transmitted with a corresponding image in response to the visual search query.
Type: Application
Filed: March 11, 2024
Publication date: June 27, 2024
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Alexander Jay Bruen Trevor, Pantelis Kalogiros, Ioannis Spanos, Radu Bogdan Rusu
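To illustrate the matching step, the sketch below assumes each surround view's spatial and scale information has been reduced to a fixed-length feature vector, scores stored views by cosine similarity as the correspondence measure, and returns the top matches. The featurization itself is outside the sketch and is an assumption.

```python
# Minimal sketch of the matching step: each surround view's spatial and scale
# information is assumed to be summarized as a feature vector; stored views are
# scored by cosine similarity as the correspondence measure.
from typing import Dict, List, Tuple
import numpy as np

def correspondence_measure(query: np.ndarray, candidate: np.ndarray) -> float:
    """Cosine similarity between two surround-view feature vectors."""
    denom = np.linalg.norm(query) * np.linalg.norm(candidate) + 1e-9
    return float(np.dot(query, candidate) / denom)

def visual_search(query: np.ndarray,
                  stored: Dict[str, np.ndarray],
                  top_k: int = 5) -> List[Tuple[str, float]]:
    scored = [(view_id, correspondence_measure(query, features))
              for view_id, features in stored.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]
```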
-
Publication number: 20240214544
Abstract: Various embodiments of the present disclosure relate generally to drone-based systems and methods for capturing a multi-media representation of an entity. In some embodiments, the multi-media representation is digital, multi-view, interactive, or a combination thereof. According to particular embodiments, a drone having a camera is controlled or operated to obtain a plurality of images having location information. The plurality of images, including at least a portion of overlapping subject matter, are fused to form multi-view interactive digital media representations.
Type: Application
Filed: March 2, 2024
Publication date: June 27, 2024
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu
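As a simplified illustration of the capture flow, the sketch below plans an orbit of waypoints around the entity, captures an image tagged with its location at each waypoint, and leaves fusion to a later step. The waypoint geometry and the fly_and_capture callback are stand-ins, not a real drone SDK.

```python
# Stand-in sketch: plan an orbit of waypoints around the entity, capture an
# image tagged with its location at each waypoint, and leave image fusion to a
# later step. The fly_and_capture callback is hypothetical, not a drone SDK.
import math
from typing import Callable, Dict, List, Tuple

def orbit_waypoints(center: Tuple[float, float], radius_m: float,
                    count: int = 24) -> List[Tuple[float, float]]:
    """Evenly spaced positions on a circle around the entity (local coordinates)."""
    return [(center[0] + radius_m * math.cos(2 * math.pi * i / count),
             center[1] + radius_m * math.sin(2 * math.pi * i / count))
            for i in range(count)]

def capture_orbit(center: Tuple[float, float], radius_m: float,
                  fly_and_capture: Callable[[Tuple[float, float]], bytes]) -> List[Dict]:
    captures: List[Dict] = []
    for position in orbit_waypoints(center, radius_m):
        captures.append({"location": position,          # location information
                         "image": fly_and_capture(position)})
    return captures  # consecutive captures overlap and can be fused downstream
```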
-
Patent number: 12020355
Abstract: Various embodiments of the present invention relate generally to systems and methods for artificially rendering images using viewpoint interpolation and extrapolation. According to particular embodiments, a method includes moving a set of control points perpendicular to a trajectory between a first frame and a second frame, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location. The set of control points is associated with a layer and each control point is moved based on an associated depth of the control point. The method also includes generating an artificially rendered image corresponding to a third location outside of the trajectory by extrapolating individual control points using the set of control points for the third location and extrapolating pixel locations using the individual control points.
Type: Grant
Filed: November 4, 2021
Date of Patent: June 25, 2024
Assignee: FYUSION, INC.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Martin Saelzle, Radu Bogdan Rusu
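Here is a rough sketch of the control-point move described above: each 2-D control point is shifted perpendicular to the capture trajectory by an amount that shrinks with depth, approximating the parallax expected at a viewpoint outside the trajectory. The inverse-depth scaling is an assumption chosen for the illustration.

```python
# Rough sketch: shift each 2-D control point perpendicular to the capture
# trajectory by an amount that shrinks with depth (nearer points move more),
# approximating the parallax expected at a viewpoint outside the trajectory.
from typing import Tuple
import numpy as np

def extrapolate_control_points(
    points: np.ndarray,                   # (N, 2) control-point pixel positions
    depths: np.ndarray,                   # (N,) depth of each control point
    trajectory_dir: Tuple[float, float],  # unit direction of camera motion
    offset: float,                        # how far past the trajectory to go
) -> np.ndarray:
    dx, dy = trajectory_dir
    perpendicular = np.array([-dy, dx])   # rotate the motion direction 90 degrees
    # Parallax is taken to be inversely proportional to depth.
    shifts = (offset / np.maximum(depths, 1e-6))[:, None] * perpendicular
    return points + shifts
```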
-
Patent number: 11989822
Abstract: A plurality of images may be analyzed to determine an object model. The object model may have a plurality of components, and each of the images may correspond with one or more of the components. Component condition information may be determined for one or more of the components based on the images. The component condition information may indicate damage incurred by the object portion corresponding with the component.
Type: Grant
Filed: June 20, 2023
Date of Patent: May 21, 2024
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Matteo Munaro, Pavel Hanchar, Radu Bogdan Rusu, Santi Arano
-
Patent number: 11972556
Abstract: A background scenery portion may be identified in each of a plurality of image sets of an object, where each image set includes images captured simultaneously from different cameras. A correspondence between the image sets may be determined, where the correspondence tracks control points associated with the object and present in multiple images. A multi-view interactive digital media representation of the object that is navigable in one or more dimensions and that includes the image sets may be generated and stored.
Type: Grant
Filed: December 19, 2022
Date of Patent: April 30, 2024
Assignee: FYUSION, INC.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Matteo Munaro, Pavel Hanchar, Radu Bogdan Rusu
-
Patent number: 11967162
Abstract: A multi-view interactive digital media representation (MVIDMR) of an object can be generated from live images of an object captured from a camera. Selectable tags can be placed at locations on the object in the MVIDMR. When the selectable tags are selected, media content can be output which shows details of the object at the location where the selectable tag is placed. A machine learning algorithm can be used to automatically recognize landmarks on the object in the frames of the MVIDMR, and a structure from motion calculation can be used to determine 3-D positions associated with the landmarks. A 3-D skeleton associated with the object can be assembled from the 3-D positions and projected into the frames associated with the MVIDMR. The 3-D skeleton can be used to determine the selectable tag locations in the frames of the MVIDMR of the object.
Type: Grant
Filed: September 26, 2022
Date of Patent: April 23, 2024
Assignee: FYUSION, INC.
Inventors: Chris Beall, Abhishek Kar, Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Pavel Hanchar
-
Patent number: 11960533
Abstract: Provided are mechanisms and processes for performing visual search using multi-view digital media representations, such as surround views. In one example, a process includes receiving a visual search query that includes a surround view of an object to be searched, where the surround view includes spatial information, scale information, and different viewpoint images of the object. The surround view is compared to stored surround views by comparing spatial information and scale information of the surround view to spatial information and scale information of the stored surround views. A correspondence measure is then generated indicating the degree of similarity between the surround view and a possible match. At least one search result is then transmitted with a corresponding image in response to the visual search query.
Type: Grant
Filed: July 25, 2022
Date of Patent: April 16, 2024
Assignee: FYUSION, INC.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Alexander Jay Bruen Trevor, Pantelis Kalogiros, Ioannis Spanos, Radu Bogdan Rusu
-
Patent number: 11956412
Abstract: Various embodiments of the present disclosure relate generally to drone-based systems and methods for capturing a multi-media representation of an entity. In some embodiments, the multi-media representation is digital, multi-view, interactive, or a combination thereof. According to particular embodiments, a drone having a camera is controlled or operated to obtain a plurality of images having location information. The plurality of images, including at least a portion of overlapping subject matter, are fused to form multi-view interactive digital media representations.
Type: Grant
Filed: March 9, 2020
Date of Patent: April 9, 2024
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu