Patents by Inventor Martin Saelzle
Martin Saelzle has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240112301
Abstract: Images of an undercarriage of a vehicle may be captured via one or more cameras. A point cloud may be determined based on the images. The point cloud may include points positioned in a virtual three-dimensional space. A stitched image may be determined based on the point cloud by projecting the point cloud onto a virtual camera view.
Type: Application
Filed: December 6, 2023
Publication date: April 4, 2024
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Krunal Ketan Chande, Matteo Munaro, Pavel Hanchar, Aidas Liaudanskas, Wook Yeon Hwang, Johan Nordin, Milos Vlaski, Martin Markus Hubert Wawro, Nick Stetco, Martin Saelzle
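The stitching step this family of filings describes, projecting a point cloud onto a virtual camera view, can be sketched with a standard pinhole model. This is a minimal illustration, not the claimed method: the focal length and principal point below are arbitrary example values.

```python
def project_point_cloud(points, focal=500.0, cx=320.0, cy=240.0):
    """Project 3-D points (in camera coordinates, z forward) onto a
    virtual camera's image plane using a simple pinhole model."""
    pixels = []
    for x, y, z in points:
        if z <= 0:
            continue  # point is behind the virtual camera
        u = focal * x / z + cx
        v = focal * y / z + cy
        pixels.append((u, v))
    return pixels

# A point on the optical axis lands at the principal point.
print(project_point_cloud([(0.0, 0.0, 2.0)]))  # → [(320.0, 240.0)]
```

In a real pipeline each projected point would carry color from the source undercarriage images, and overlapping projections would be blended into the stitched output.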
-
Patent number: 11893707
Abstract: Images of an undercarriage of a vehicle may be captured via one or more cameras. A point cloud may be determined based on the images. The point cloud may include points positioned in a virtual three-dimensional space. A stitched image may be determined based on the point cloud by projecting the point cloud onto a virtual camera view. The stitched image may be stored on a storage device.
Type: Grant
Filed: February 2, 2023
Date of Patent: February 6, 2024
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Krunal Ketan Chande, Matteo Munaro, Pavel Hanchar, Aidas Liaudanskas, Wook Yeon Hwang, Johan Nordin, Milos Vlaski, Martin Markus Hubert Wawro, Nick Stetco, Martin Saelzle
-
Publication number: 20230230203
Abstract: Images of an undercarriage of a vehicle may be captured via one or more cameras. A point cloud may be determined based on the images. The point cloud may include points positioned in a virtual three-dimensional space. A stitched image may be determined based on the point cloud by projecting the point cloud onto a virtual camera view.
Type: Application
Filed: February 2, 2023
Publication date: July 20, 2023
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Krunal Ketan Chande, Matteo Munaro, Pavel Hanchar, Aidas Liaudanskas, Wook Yeon Hwang, Johan Nordin, Milos Vlaski, Martin Markus Hubert Wawro, Nick Stetco, Martin Saelzle
-
Publication number: 20230196658
Abstract: Images may be captured at an image capture device mounted on an image capture device gimbal capable of rotating the image capture device around a nodal point in one or more dimensions. Each of the plurality of images may be captured from a respective rotational position. The images may be captured by a designated camera that is not located at the nodal point in one or more of the respective rotational positions. A designated three-dimensional point cloud may be determined based on the plurality of images. The designated three-dimensional point cloud may include a plurality of points each having a respective position in a virtual three-dimensional space.
Type: Application
Filed: February 23, 2023
Publication date: June 22, 2023
Applicant: Fyusion, Inc.
Inventors: Nico Gregor Sebastian Blodow, Martin Saelzle, Matteo Munaro, Krunal Ketan Chande, Rodrigo Ortiz Cayon, Stefan Johannes Josef Holzer
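The geometric wrinkle in this family is that the camera is mounted off the gimbal's nodal point, so each rotational position implies a different camera location. A 2-D sketch of that offset geometry (the function name and the planar simplification are illustrative assumptions, not from the filing):

```python
import math

def camera_position(nodal, offset, theta):
    """Position of a camera mounted at `offset` from the gimbal's nodal
    point after the gimbal rotates by `theta` radians (2-D sketch)."""
    nx, ny = nodal
    ox, oy = offset
    # Standard 2-D rotation applied to the mounting offset.
    rx = ox * math.cos(theta) - oy * math.sin(theta)
    ry = ox * math.sin(theta) + oy * math.cos(theta)
    return (nx + rx, ny + ry)
```

For example, a camera offset one unit along x sweeps to one unit along y after a 90° rotation; a point-cloud reconstruction would feed these per-shot camera poses into triangulation.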
-
Patent number: 11636637
Abstract: Various embodiments of the present invention relate generally to mechanisms and processes relating to artificially rendering images using viewpoint interpolation and extrapolation. According to particular embodiments, a method includes applying a transform to estimate a path outside the trajectory between a first frame and a second frame, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location. The process also includes generating an artificially rendered image corresponding to a third location positioned on the path.
Type: Grant
Filed: March 22, 2019
Date of Patent: April 25, 2023
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Martin Saelzle, Radu Bogdan Rusu
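The third-location idea above can be illustrated with the simplest possible path model: a line through the two capture locations, where a parameter outside [0, 1] reaches past the captured trajectory. The patent's transform is more general; this sketch only shows the interpolation/extrapolation distinction.

```python
def extrapolate_location(loc1, loc2, t):
    """Parametric point on the line through two capture locations.
    t in [0, 1] interpolates between them; t outside that range
    extrapolates beyond the captured trajectory."""
    return tuple(a + t * (b - a) for a, b in zip(loc1, loc2))

# t = 1.5 lands past the second capture location.
print(extrapolate_location((0.0, 0.0), (2.0, 0.0), 1.5))  # → (3.0, 0.0)
```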
-
Patent number: 11615582
Abstract: Images may be captured at an image capture device mounted on an image capture device gimbal capable of rotating the image capture device around a nodal point in one or more dimensions. Each of the plurality of images may be captured from a respective rotational position. The images may be captured by a designated camera that is not located at the nodal point in one or more of the respective rotational positions. A designated three-dimensional point cloud may be determined based on the plurality of images. The designated three-dimensional point cloud may include a plurality of points each having a respective position in a virtual three-dimensional space.
Type: Grant
Filed: June 8, 2021
Date of Patent: March 28, 2023
Assignee: Fyusion, Inc.
Inventors: Nico Gregor Sebastian Blodow, Martin Saelzle, Matteo Munaro, Krunal Ketan Chande, Rodrigo Ortiz Cayon, Stefan Johannes Josef Holzer
-
Patent number: 11605151
Abstract: Images of an undercarriage of a vehicle may be captured via one or more cameras. A point cloud may be determined based on the images. The point cloud may include points positioned in a virtual three-dimensional space. A stitched image may be determined based on the point cloud by projecting the point cloud onto a virtual camera view. The stitched image may be stored on a storage device.
Type: Grant
Filed: March 2, 2021
Date of Patent: March 14, 2023
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Krunal Ketan Chande, Matteo Munaro, Pavel Hanchar, Aidas Liaudanskas, Wook Yeon Hwang, Blake McConnell, Johan Nordin, Milos Vlaski, Martin Markus Hubert Wawro, Nick Stetco, Martin Saelzle
-
Publication number: 20220392151
Abstract: Images may be captured at an image capture device mounted on an image capture device gimbal capable of rotating the image capture device around a nodal point in one or more dimensions. Each of the plurality of images may be captured from a respective rotational position. The images may be captured by a designated camera that is not located at the nodal point in one or more of the respective rotational positions. A designated three-dimensional point cloud may be determined based on the plurality of images. The designated three-dimensional point cloud may include a plurality of points each having a respective position in a virtual three-dimensional space.
Type: Application
Filed: June 8, 2021
Publication date: December 8, 2022
Applicant: Fyusion, Inc.
Inventors: Nico Gregor Sebastian Blodow, Martin Saelzle, Matteo Munaro, Krunal Ketan Chande, Rodrigo Ortiz Cayon, Stefan Johannes Josef Holzer
-
Publication number: 20220284544
Abstract: Images of an undercarriage of a vehicle may be captured via one or more cameras. A point cloud may be determined based on the images. The point cloud may include points positioned in a virtual three-dimensional space. A stitched image may be determined based on the point cloud by projecting the point cloud onto a virtual camera view.
Type: Application
Filed: March 2, 2021
Publication date: September 8, 2022
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Krunal Ketan Chande, Matteo Munaro, Pavel Hanchar, Aidas Liaudanskas, Wook Yeon Hwang, Blake McConnell, Johan Nordin, Milos Vlaski, Martin Markus Hubert Wawro, Nick Stetco, Martin Saelzle
-
Publication number: 20220060639
Abstract: Various embodiments of the present invention relate generally to systems and processes for transforming a style of video data. In one embodiment, a neural network is used to interpolate native video data received from a camera system on a mobile device in real-time. The interpolation converts the live native video data into a particular style. For example, the style can be associated with a particular artist or a particular theme. The stylized video data can be viewed on a display of the mobile device in a manner similar to that in which native live video data is output to the display. Thus, the stylized video data, which is viewed on the display, is consistent with a current position and orientation of the camera system on the display.
Type: Application
Filed: November 4, 2021
Publication date: February 24, 2022
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Pavel Hanchar, Radu Bogdan Rusu, Martin Saelzle, Shuichi Tsutsumi, Stephen David Miller, George Haber
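The real-time aspect of this filing is the per-frame loop: each native frame is transformed as it arrives, so the stylized stream tracks the live camera pose. The sketch below uses a trivial channel remix as a stand-in for the patent's trained neural network; only the streaming pattern, not the stylization itself, reflects the abstract.

```python
def stylize_frame(frame):
    """Stand-in for neural stylization: a simple red-channel remix.
    `frame` is a list of rows of (r, g, b) tuples."""
    return [[(min(255, int(0.8 * r + 0.2 * g)), g, b) for r, g, b in row]
            for row in frame]

def stylize_stream(frames):
    """Convert each native frame as it arrives, so the stylized output
    stays consistent with the camera's current position and orientation."""
    for frame in frames:
        yield stylize_frame(frame)
```

On a device, `frames` would be the live camera feed and each yielded frame would go straight to the display.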
-
Publication number: 20220058846
Abstract: Various embodiments of the present invention relate generally to systems and methods for artificially rendering images using viewpoint interpolation and extrapolation. According to particular embodiments, a method includes moving a set of control points perpendicular to a trajectory between a first frame and a second frame, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location. The set of control points is associated with a layer and each control point is moved based on an associated depth of the control point. The method also includes generating an artificially rendered image corresponding to a third location outside of the trajectory by extrapolating individual control points using the set of control points for the third location and extrapolating pixel locations using the individual control points.
Type: Application
Filed: November 4, 2021
Publication date: February 24, 2022
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Martin Saelzle, Radu Bogdan Rusu
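The depth-dependent, perpendicular motion this abstract describes is essentially a parallax effect: nearer control points shift more for the same off-trajectory viewpoint change. A 2-D sketch, assuming an inverse-depth scaling that the filing does not specify:

```python
def move_control_points(points, depths, traj, amount):
    """Shift each 2-D control point perpendicular to the capture
    trajectory, scaled by inverse depth so nearer layers move more."""
    dx, dy = traj
    norm = (dx * dx + dy * dy) ** 0.5
    # Unit vector perpendicular to the trajectory direction.
    px, py = -dy / norm, dx / norm
    return [(x + amount / z * px, y + amount / z * py)
            for (x, y), z in zip(points, depths)]
```

A point at depth 2 moves half as far as a point at depth 1, which is what makes the extrapolated view read as a genuine viewpoint shift rather than a flat pan.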
-
Patent number: 11202017
Abstract: Various embodiments of the present invention relate generally to systems and processes for transforming a style of video data. In one embodiment, a neural network is used to interpolate native video data received from a camera system on a mobile device in real-time. The interpolation converts the live native video data into a particular style. For example, the style can be associated with a particular artist or a particular theme. The stylized video data can be viewed on a display of the mobile device in a manner similar to that in which native live video data is output to the display. Thus, the stylized video data, which is viewed on the display, is consistent with a current position and orientation of the camera system on the display.
Type: Grant
Filed: September 27, 2017
Date of Patent: December 14, 2021
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Pavel Hanchar, Radu Bogdan Rusu, Martin Saelzle, Shuichi Tsutsumi, Stephen David Miller, George Haber
-
Patent number: 11195314
Abstract: Various embodiments of the present invention relate generally to systems and methods for artificially rendering images using viewpoint interpolation and extrapolation. According to particular embodiments, a method includes moving a set of control points perpendicular to a trajectory between a first frame and a second frame, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location. The set of control points is associated with a layer and each control point is moved based on an associated depth of the control point. The method also includes generating an artificially rendered image corresponding to a third location outside of the trajectory by extrapolating individual control points using the set of control points for the third location and extrapolating pixel locations using the individual control points.
Type: Grant
Filed: November 2, 2018
Date of Patent: December 7, 2021
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Martin Saelzle, Radu Bogdan Rusu
-
Patent number: 10733475
Abstract: Various embodiments of the present invention relate generally to systems and processes for artificially rendering images using interpolation of tracked control points. According to particular embodiments, a set of control points is tracked between a first frame and a second frame, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location. An artificially rendered image corresponding to a third location is then generated by interpolating individual control points for the third location using the set of control points and interpolating pixel locations using the individual control points. The individual control points are used to transform image data.
Type: Grant
Filed: March 26, 2018
Date of Patent: August 4, 2020
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Martin Saelzle, Radu Bogdan Rusu
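The two-stage process in this abstract (interpolate the tracked control points for the new viewpoint, then move pixel locations using those control points) can be sketched as follows. The nearest-control-point pixel warp is a crude illustrative stand-in; the patent's pixel-location interpolation is not specified at this level of detail.

```python
def interpolate_control_points(cp_a, cp_b, t):
    """Per-control-point linear interpolation between the first and
    second frame for an intermediate viewpoint t in [0, 1]."""
    return [(ax + t * (bx - ax), ay + t * (by - ay))
            for (ax, ay), (bx, by) in zip(cp_a, cp_b)]

def warp_pixel(pixel, cp_a, cp_interp):
    """Move a pixel by the displacement of its nearest control point
    (a stand-in for interpolating pixel locations)."""
    x, y = pixel
    i = min(range(len(cp_a)),
            key=lambda k: (cp_a[k][0] - x) ** 2 + (cp_a[k][1] - y) ** 2)
    dx = cp_interp[i][0] - cp_a[i][0]
    dy = cp_interp[i][1] - cp_a[i][1]
    return (x + dx, y + dy)
```

A real renderer would blend the displacements of several nearby control points (e.g. via a triangulated mesh) instead of snapping to the nearest one.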
-
Patent number: 10726593
Abstract: Various embodiments of the present invention relate generally to systems and methods for artificially rendering images using viewpoint interpolation and/or extrapolation. According to particular embodiments, a transformation between a first frame and a second frame is estimated, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location.
Type: Grant
Filed: September 22, 2015
Date of Patent: July 28, 2020
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Martin Saelzle, Radu Bogdan Rusu
-
Patent number: 10719733
Abstract: Various embodiments of the present invention relate generally to systems and processes for artificially rendering images using interpolation of tracked control points. According to particular embodiments, a set of control points is tracked between a first frame and a second frame, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location. An artificially rendered image corresponding to a third location is then generated by interpolating individual control points for the third location using the set of control points and interpolating pixel locations using the individual control points. The individual control points are used to transform image data.
Type: Grant
Filed: March 26, 2018
Date of Patent: July 21, 2020
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Martin Saelzle, Radu Bogdan Rusu
-
Patent number: 10719732
Abstract: Various embodiments of the present invention relate generally to systems and processes for artificially rendering images using interpolation of tracked control points. According to particular embodiments, a set of control points is tracked between a first frame and a second frame, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location. An artificially rendered image corresponding to a third location is then generated by interpolating individual control points for the third location using the set of control points and interpolating pixel locations using the individual control points. The individual control points are used to transform image data.
Type: Grant
Filed: March 26, 2018
Date of Patent: July 21, 2020
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Martin Saelzle, Radu Bogdan Rusu
-
Patent number: 10713851
Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view for presentation on a device. A real object can be tracked in the live image data for the purposes of creating a surround view using a number of tracking points. As a camera is moved around the real object, virtual objects can be rendered into live image data to create synthetic images where a position of the tracking points can be used to position the virtual object in the synthetic image. The synthetic images can be output in real-time. Further, virtual objects in the synthetic images can be incorporated into surround views.
Type: Grant
Filed: November 12, 2018
Date of Patent: July 14, 2020
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Alexander Jay Bruen Trevor, Martin Saelzle, Stephen David Miller, Radu Bogdan Rusu
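Using the tracking points on the real object to position a virtual object, as this abstract describes, admits many concrete rules; the centroid below is one simple illustrative choice, not the patented positioning method.

```python
def place_virtual_object(tracking_points):
    """Anchor a virtual object at the centroid of the 2-D tracking
    points detected on the real object in the current frame."""
    n = len(tracking_points)
    return (sum(x for x, _ in tracking_points) / n,
            sum(y for _, y in tracking_points) / n)
```

Because the anchor is recomputed from the tracking points in every frame, the virtual object follows the real object as the camera moves around it.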
-
Patent number: 10504293
Abstract: Provided are mechanisms and processes for augmenting multi-view image data with synthetic objects using inertial measurement unit (IMU) and image data. In one example, a process includes receiving a selection of an anchor location in a reference image for a synthetic object to be placed within a multi-view image. Movements between the reference image and a target image are computed using visual tracking information associated with the multi-view image, device orientation corresponding to the multi-view image, and an estimate of the camera's intrinsic parameters. A first synthetic image is then generated by placing the synthetic object at the anchor location using visual tracking information in the multi-view image, orienting the synthetic object using the inverse of the movements computed between the reference image and the target image, and projecting the synthetic object along a ray into a target view associated with the target image.
Type: Grant
Filed: November 5, 2018
Date of Patent: December 10, 2019
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Alexander Jay Bruen Trevor, Martin Saelzle, Radu Bogdan Rusu
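Two of the steps above, applying the inverse of the computed movement and projecting along a ray into the target view, can be illustrated in 2-D. The single-yaw rotation and the function names are illustrative simplifications; the patent works with full device orientation and camera intrinsics.

```python
import math

def counter_rotate(obj_dir, camera_yaw):
    """Apply the inverse of the camera's rotation (e.g. yaw from the
    IMU) to a 2-D direction so the synthetic object stays fixed in
    the scene while the view rotates."""
    c, s = math.cos(-camera_yaw), math.sin(-camera_yaw)
    x, y = obj_dir
    return (c * x - s * y, s * x + c * y)

def project_along_ray(origin, direction, depth):
    """Place the object at `depth` along a viewing ray into the target view."""
    n = math.hypot(direction[0], direction[1])
    return (origin[0] + depth * direction[0] / n,
            origin[1] + depth * direction[1] / n)
```

If the camera yaws 90° to the left, counter-rotating the anchor direction 90° to the right keeps the synthetic object attached to the same scene location in the target image.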
-
Publication number: 20190096137
Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view for presentation on a device. A real object can be tracked in the live image data for the purposes of creating a surround view using a number of tracking points. As a camera is moved around the real object, virtual objects can be rendered into live image data to create synthetic images where a position of the tracking points can be used to position the virtual object in the synthetic image. The synthetic images can be output in real-time. Further, virtual objects in the synthetic images can be incorporated into surround views.
Type: Application
Filed: November 12, 2018
Publication date: March 28, 2019
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Alexander Jay Bruen Trevor, Martin Saelzle, Stephen David Miller, Radu Bogdan Rusu