Patents by Inventor Aidas Liaudanskas
Aidas Liaudanskas has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240112301
Abstract: Images of an undercarriage of a vehicle may be captured via one or more cameras. A point cloud may be determined based on the images. The point cloud may include points positioned in a virtual three-dimensional space. A stitched image may be determined based on the point cloud by projecting the point cloud onto a virtual camera view.
Type: Application
Filed: December 6, 2023
Publication date: April 4, 2024
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Krunal Ketan Chande, Matteo Munaro, Pavel Hanchar, Aidas Liaudanskas, Wook Yeon Hwang, Johan Nordin, Milos Vlaski, Martin Markus Hubert Wawro, Nick Stetco, Martin Saelzle
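The core step of this family of filings — projecting a 3D point cloud onto a virtual camera view to form a stitched image — can be sketched roughly as below. This is an illustrative pinhole-camera model in numpy, not Fyusion's actual implementation; all names and parameters are assumptions.

```python
import numpy as np

def project_point_cloud(points, intrinsics, extrinsics, width, height):
    """Project 3D points (N, 3) in world coordinates into a virtual
    pinhole camera image of the given size.

    intrinsics: 3x3 camera matrix K
    extrinsics: 4x4 world-to-camera transform
    Returns integer pixel coordinates for points that land inside the image.
    """
    # Lift to homogeneous coordinates and transform into the camera frame
    homo = np.hstack([points, np.ones((points.shape[0], 1))])
    cam = (extrinsics @ homo.T).T[:, :3]
    # Discard points behind the camera
    cam = cam[cam[:, 2] > 0]
    # Apply intrinsics, then perform the perspective divide
    proj = (intrinsics @ cam.T).T
    px = proj[:, :2] / proj[:, 2:3]
    # Keep only projections that fall within the image bounds
    inside = (
        (px[:, 0] >= 0) & (px[:, 0] < width)
        & (px[:, 1] >= 0) & (px[:, 1] < height)
    )
    return px[inside].astype(int)
```

A full stitching pipeline would additionally resolve depth ordering and blend pixel colors at each projected location; the sketch covers only the geometric projection.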
-
Publication number: 20240096094
Abstract: Images of an object may be captured via a camera at a mobile computing device at different viewpoints. The images may be used to identify components of the object and to estimate damage to some or all of the components. Capture coverage levels corresponding with the components may be determined, and then recording guidance may be provided for capturing additional images to increase the capture coverage levels.
Type: Application
Filed: November 16, 2023
Publication date: March 21, 2024
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Pavel Hanchar, Rodrigo Ortiz-Cayon, Aidas Liaudanskas
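The coverage-guidance idea above — track per-component capture coverage and direct the user toward under-covered components — could be sketched as follows. The threshold value and the data shapes are hypothetical, chosen only for illustration.

```python
def coverage_guidance(coverage, threshold=0.8):
    """Given per-component capture coverage values in [0, 1], return the
    components that still need additional images, ordered by how much
    coverage they lack (largest deficit first)."""
    deficits = {name: threshold - value
                for name, value in coverage.items()
                if value < threshold}
    return sorted(deficits, key=deficits.get, reverse=True)
```

A capture app could call this after each new image to decide which component to ask the user to photograph next.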
-
Patent number: 11893707
Abstract: Images of an undercarriage of a vehicle may be captured via one or more cameras. A point cloud may be determined based on the images. The point cloud may include points positioned in a virtual three-dimensional space. A stitched image may be determined based on the point cloud by projecting the point cloud onto a virtual camera view. The stitched image may be stored on a storage device.
Type: Grant
Filed: February 2, 2023
Date of Patent: February 6, 2024
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Krunal Ketan Chande, Matteo Munaro, Pavel Hanchar, Aidas Liaudanskas, Wook Yeon Hwang, Johan Nordin, Milos Vlaski, Martin Markus Hubert Wawro, Nick Stetco, Martin Saelzle
-
Patent number: 11861900
Abstract: Images of an object may be captured via a camera at a mobile computing device at different viewpoints. The images may be used to identify components of the object and to estimate damage to some or all of the components. Capture coverage levels corresponding with the components may be determined, and then recording guidance may be provided for capturing additional images to increase the capture coverage levels.
Type: Grant
Filed: November 12, 2021
Date of Patent: January 2, 2024
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Pavel Hanchar, Rodrigo Ortiz-Cayon, Aidas Liaudanskas
-
Publication number: 20230230203
Abstract: Images of an undercarriage of a vehicle may be captured via one or more cameras. A point cloud may be determined based on the images. The point cloud may include points positioned in a virtual three-dimensional space. A stitched image may be determined based on the point cloud by projecting the point cloud onto a virtual camera view.
Type: Application
Filed: February 2, 2023
Publication date: July 20, 2023
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Krunal Ketan Chande, Matteo Munaro, Pavel Hanchar, Aidas Liaudanskas, Wook Yeon Hwang, Johan Nordin, Milos Vlaski, Martin Markus Hubert Wawro, Nick Stetco, Martin Saelzle
-
Patent number: 11605151
Abstract: Images of an undercarriage of a vehicle may be captured via one or more cameras. A point cloud may be determined based on the images. The point cloud may include points positioned in a virtual three-dimensional space. A stitched image may be determined based on the point cloud by projecting the point cloud onto a virtual camera view. The stitched image may be stored on a storage device.
Type: Grant
Filed: March 2, 2021
Date of Patent: March 14, 2023
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Krunal Ketan Chande, Matteo Munaro, Pavel Hanchar, Aidas Liaudanskas, Wook Yeon Hwang, Blake McConnell, Johan Nordin, Milos Vlaski, Martin Markus Hubert Wawro, Nick Stetco, Martin Saelzle
-
Publication number: 20220343601
Abstract: One or more two-dimensional images of a three-dimensional object may be analyzed to estimate a three-dimensional mesh representing the object and a mapping of the two-dimensional images to the three-dimensional mesh. Initially, a correspondence may be determined between the images and a UV representation of a three-dimensional template mesh by training a neural network. Then, the three-dimensional template mesh may be deformed to determine the representation of the object. The process may involve a reprojection loss cycle in which points from the images are mapped onto the UV representation, then onto the three-dimensional template mesh, and then back onto the two-dimensional images.
Type: Application
Filed: April 15, 2022
Publication date: October 27, 2022
Applicant: Fyusion, Inc.
Inventors: Aidas Liaudanskas, Nishant Rai, Srinivas Rao, Rodrigo Ortiz-Cayon, Matteo Munaro, Stefan Johannes Josef Holzer
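The reprojection loss cycle described in this abstract (image points → UV space → 3D template mesh → back to the image) can be sketched as a cycle-consistency loss. The three mapping functions here stand in for learned or geometric components and are purely illustrative assumptions, not the patented method's actual operators.

```python
import numpy as np

def reprojection_cycle_loss(points_2d, image_to_uv, uv_to_mesh, project):
    """Cycle-consistency loss: map 2D image points into the UV
    representation, lift the UV coordinates onto the 3D template mesh,
    reproject the 3D points back to the image plane, and measure the
    mean distance to the original 2D points."""
    uv = image_to_uv(points_2d)   # image -> UV representation
    pts3d = uv_to_mesh(uv)        # UV -> 3D template mesh surface
    back = project(pts3d)         # 3D -> image plane
    return float(np.mean(np.linalg.norm(points_2d - back, axis=1)))
```

During training, this scalar would be minimized alongside other terms so that the learned image-to-UV correspondence and the deformed mesh stay geometrically consistent.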
-
Publication number: 20220284544
Abstract: Images of an undercarriage of a vehicle may be captured via one or more cameras. A point cloud may be determined based on the images. The point cloud may include points positioned in a virtual three-dimensional space. A stitched image may be determined based on the point cloud by projecting the point cloud onto a virtual camera view.
Type: Application
Filed: March 2, 2021
Publication date: September 8, 2022
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Krunal Ketan Chande, Matteo Munaro, Pavel Hanchar, Aidas Liaudanskas, Wook Yeon Hwang, Blake McConnell, Johan Nordin, Milos Vlaski, Martin Markus Hubert Wawro, Nick Stetco, Martin Saelzle
-
Publication number: 20220254008
Abstract: Images of an object may be captured by cameras located at fixed locations in space as the object travels through the cameras' fields of view. A three-dimensional model of the object may be determined using the images. A portion of the object that has been damaged may be identified based on the three-dimensional model and the images. A damage map of the object illustrating the portion of the object that has been damaged may be generated.
Type: Application
Filed: February 2, 2022
Publication date: August 11, 2022
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Krunal Ketan Chande, Pavel Hanchar, Aidas Liaudanskas, Wook Yeon Hwang, Blake McConnell, Johan Nordin, Milos Vlaski
-
Publication number: 20220254007
Abstract: Damage to an object such as a vehicle may be detected and presented based at least in part on image data. In some configurations, image data may be captured by causing the object to pass through a gate or portal on which cameras are located. Alternatively, or additionally, image data may be captured by a user operating a camera and moving around the object. The cameras may capture image data, which may be combined and analyzed to detect damage. Some or all of the image data and/or analysis of the image data may be presented in a viewer, which may allow a user to perform actions such as navigating around the object in a virtual environment, identifying and viewing areas of the object where damage has been detected, and accessing the results of the analysis.
Type: Application
Filed: February 2, 2022
Publication date: August 11, 2022
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Krunal Ketan Chande, Julius Santiago, Pantelis Kalogiros, Raul Dronca, Ioannis Spanos, Pavel Hanchar, Aidas Liaudanskas, Santi Arano, Rodrigo Ortiz-Cayon
-
Publication number: 20220155945
Abstract: Images may be captured from a plurality of cameras of an object moving along a path. Each of the cameras may be positioned at a respective identified location in three-dimensional space. Correspondence information for the plurality of images linking locations on different ones of the images may be determined. Linked locations may correspond to similar portions of the object captured by the cameras. A portion of the plurality of images may be presented on a display screen via a graphical user interface. The plurality of images may be grouped based on the correspondence information.
Type: Application
Filed: November 12, 2021
Publication date: May 19, 2022
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Pavel Hanchar, Aidas Liaudanskas, Krunal Ketan Chande, Wook Yeon Hwang, Blake McConnell, Johan Nordin, Rodrigo Ortiz-Cayon, Ioannis Spanos, Nick Stetco, Milos Vlaski, Martin Markus Hubert Wawro, Endre Ajandi, Santi Arano, Mehjabeen Alim
-
Publication number: 20220156497
Abstract: Images of an object may be captured via a camera at a mobile computing device at different viewpoints. The images may be used to identify components of the object and to estimate damage to some or all of the components. Capture coverage levels corresponding with the components may be determined, and then recording guidance may be provided for capturing additional images to increase the capture coverage levels.
Type: Application
Filed: November 12, 2021
Publication date: May 19, 2022
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Pavel Hanchar, Rodrigo Ortiz-Cayon, Aidas Liaudanskas
-
Patent number: 10950032
Abstract: Pixels in a visual representation of an object that includes one or more perspective view images may be mapped to a standard view of the object. Based on the mapping, a portion of the object captured in the visual representation of the object may be identified. A user interface on a display device may indicate the identified object portion.
Type: Grant
Filed: July 22, 2019
Date of Patent: March 16, 2021
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Matteo Munaro, Aidas Liaudanskas, Matthias Reso, Alexander Jay Bruen Trevor, Radu Bogdan Rusu
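The lookup described here — map a perspective-view pixel into a standard view, then determine which object portion that location belongs to — can be sketched with plain dictionaries. The data structures below (a pixel-to-pixel mapping and per-component masks) are hypothetical simplifications of what would in practice be dense learned correspondences.

```python
def identify_component(pixel, view_to_standard, component_masks):
    """Map a pixel from a perspective view into the standard view and
    look up which object component that standard-view location falls in.

    view_to_standard: dict mapping perspective-view (row, col) pixels to
        standard-view (row, col) pixels
    component_masks: dict mapping component name to a set of
        standard-view pixels belonging to that component
    Returns the component name, or None if no mapping or component matches.
    """
    std = view_to_standard.get(pixel)
    if std is None:
        return None
    for name, mask in component_masks.items():
        if std in mask:
            return name
    return None
```

A UI layer could then highlight or label the returned component name for the pixel the user tapped.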
-
Patent number: 10887582
Abstract: Images of an object may be analyzed to determine individual damage maps of the object. Each damage map may represent damage to an object depicted in one of the images. The damage may be represented in a standard view of the object. An aggregated damage map for the object may be determined based on the individual damage maps.
Type: Grant
Filed: October 8, 2019
Date of Patent: January 5, 2021
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Alexander Jay Bruen Trevor, Pavel Hanchar, Matteo Munaro, Aidas Liaudanskas, Radu Bogdan Rusu
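Once per-image damage maps are expressed in a common standard view, aggregating them can be as simple as a per-pixel reduction. The sketch below assumes each map is an H×W array of damage probabilities and uses a per-pixel maximum as the aggregation rule; the actual patented aggregation may differ.

```python
import numpy as np

def aggregate_damage_maps(damage_maps):
    """Combine per-image damage maps (each an HxW array of damage
    probabilities in the standard view) into a single aggregated map
    by taking the per-pixel maximum across images."""
    stack = np.stack(damage_maps, axis=0)  # shape: (num_images, H, W)
    return stack.max(axis=0)
```

A per-pixel maximum keeps damage visible even if only one viewpoint detected it; a mean would instead favor damage confirmed across many viewpoints.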
-
Publication number: 20200349757
Abstract: Pixels in a visual representation of an object that includes one or more perspective view images may be mapped to a standard view of the object. Based on the mapping, a portion of the object captured in the visual representation of the object may be identified. A user interface on a display device may indicate the identified object portion.
Type: Application
Filed: July 22, 2019
Publication date: November 5, 2020
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Matteo Munaro, Aidas Liaudanskas, Matthias Reso, Alexander Jay Bruen Trevor, Radu Bogdan Rusu
-
Publication number: 20200258309
Abstract: A live camera feed may be analyzed to determine the identity of an object, and augmented reality overlay data may be determined based on that identity. The overlay data may include one or more tags that are each associated with a respective location on the object. The live camera feed may be presented on a display screen with the tags positioned at the respective locations.
Type: Application
Filed: April 28, 2020
Publication date: August 13, 2020
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Alexander Jay Bruen Trevor, Aidas Liaudanskas, Radu Bogdan Rusu
-
Publication number: 20200234397
Abstract: A three-dimensional (3D) skeleton may be determined based on a plurality of vertices and a plurality of faces in a two-dimensional (2D) mesh in a top-down image of an object. A correspondence mapping between a designated perspective view image and the top-down object image may be determined based on the 3D skeleton. The correspondence mapping may link a respective first location in the top-down object image to a respective second location in the designated perspective view image for each of a plurality of points in the designated perspective view image. A top-down mapped image of the object may be created by determining a first respective pixel value for each of the first locations, with each first respective pixel value being determined based on a second respective pixel value for the respective second location linked with the respective first location via the correspondence mapping.
Type: Application
Filed: July 22, 2019
Publication date: July 23, 2020
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Aidas Liaudanskas, Abhishek Kar, Krunal Ketan Chande, Radu Bogdan Rusu
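The final step of this abstract — building a top-down mapped image by copying pixel values through a correspondence mapping — can be sketched as a simple gather operation. Representing the correspondence as a dict of pixel pairs is an illustrative assumption; in practice it would be a dense mapping derived from the 3D skeleton.

```python
import numpy as np

def make_top_down_image(perspective_img, correspondence, out_shape):
    """Build a top-down mapped image: for each top-down location with a
    known correspondence, copy the pixel value from the linked location
    in the perspective view image.

    correspondence: dict mapping top-down (row, col) locations to
        perspective-view (row, col) locations
    Unmapped top-down locations are left at zero.
    """
    out = np.zeros(out_shape, dtype=perspective_img.dtype)
    for (top_r, top_c), (persp_r, persp_c) in correspondence.items():
        out[top_r, top_c] = perspective_img[persp_r, persp_c]
    return out
```

A production version would interpolate between neighboring perspective-view pixels rather than copying nearest values, but the data flow is the same.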
-
Publication number: 20200236343
Abstract: Images of an object may be analyzed to determine individual damage maps of the object. Each damage map may represent damage to an object depicted in one of the images. The damage may be represented in a standard view of the object. An aggregated damage map for the object may be determined based on the individual damage maps.
Type: Application
Filed: October 8, 2019
Publication date: July 23, 2020
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Alexander Jay Bruen Trevor, Pavel Hanchar, Matteo Munaro, Aidas Liaudanskas, Radu Bogdan Rusu