Patents by Inventor Martin Saelzle

Martin Saelzle has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240112301
    Abstract: Images of an undercarriage of a vehicle may be captured via one or more cameras. A point cloud may be determined based on the images. The point cloud may include points positioned in a virtual three-dimensional space. A stitched image may be determined based on the point cloud by projecting the point cloud onto a virtual camera view.
    Type: Application
    Filed: December 6, 2023
    Publication date: April 4, 2024
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Krunal Ketan Chande, Matteo Munaro, Pavel Hanchar, Aidas Liaudanskas, Wook Yeon Hwang, Johan Nordin, Milos Vlaski, Martin Markus Hubert Wawro, Nick Stetco, Martin Saelzle
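
    The abstract above describes projecting a colored point cloud onto a virtual camera view to produce a single stitched undercarriage image. As a rough illustration only, the following minimal sketch renders a point cloud into a top-down image; the orthographic projection, array shapes, and nearest-point depth test are assumptions made for the sketch, not details taken from the patent.

    ```python
    # Minimal sketch: project a colored point cloud onto a virtual top-down
    # camera to form a single stitched image. Hypothetical, not the patented
    # implementation; the orthographic camera and shapes are assumptions.
    import numpy as np

    def stitch_from_point_cloud(points, colors, resolution=512):
        """points: (N, 3) xyz in a virtual 3-D space; colors: (N, 3) uint8 RGB."""
        image = np.zeros((resolution, resolution, 3), dtype=np.uint8)
        depth = np.full((resolution, resolution), -np.inf)

        # Orthographic projection onto the x/y plane, looking "up" at the
        # undercarriage along -z: scale x/y into pixel coordinates.
        xy = points[:, :2]
        lo, hi = xy.min(axis=0), xy.max(axis=0)
        px = ((xy - lo) / (hi - lo + 1e-9) * (resolution - 1)).astype(int)

        for (u, v), z, c in zip(px, points[:, 2], colors):
            # Keep the point closest to the virtual camera per pixel.
            if z > depth[v, u]:
                depth[v, u] = z
                image[v, u] = c
        return image

    if __name__ == "__main__":
        pts = np.random.rand(10_000, 3)
        cols = (np.random.rand(10_000, 3) * 255).astype(np.uint8)
        stitched = stitch_from_point_cloud(pts, cols)
        print(stitched.shape)  # (512, 512, 3)
    ```
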
  • Patent number: 11893707
    Abstract: Images of an undercarriage of a vehicle may be captured via one or more cameras. A point cloud may be determined based on the images. The point cloud may include points positioned in a virtual three-dimensional space. A stitched image may be determined based on the point cloud by projecting the point cloud onto a virtual camera view. The stitched image may be stored on a storage device.
    Type: Grant
    Filed: February 2, 2023
    Date of Patent: February 6, 2024
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Krunal Ketan Chande, Matteo Munaro, Pavel Hanchar, Aidas Liaudanskas, Wook Yeon Hwang, Johan Nordin, Milos Vlaski, Martin Markus Hubert Wawro, Nick Stetco, Martin Saelzle
  • Publication number: 20230230203
    Abstract: Images of an undercarriage of a vehicle may be captured via one or more cameras. A point cloud may be determined based on the images. The point cloud may include points positioned in a virtual three-dimensional space. A stitched image may be determined based on the point cloud by projecting the point cloud onto a virtual camera view.
    Type: Application
    Filed: February 2, 2023
    Publication date: July 20, 2023
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Krunal Ketan Chande, Matteo Munaro, Pavel Hanchar, Aidas Liaudanskas, Wook Yeon Hwang, Johan Nordin, Milos Vlaski, Martin Markus Hubert Wawro, Nick Stetco, Martin Saelzle
  • Publication number: 20230196658
    Abstract: Images may be captured at an image capture device mounted on an image capture device gimbal capable of rotating the image capture device around a nodal point in one or more dimensions. Each of the images may be captured from a respective rotational position. The images may be captured by a designated camera that is not located at the nodal point in one or more of the respective rotational positions. A designated three-dimensional point cloud may be determined based on the plurality of images. The designated three-dimensional point cloud may include a plurality of points each having a respective position in a virtual three-dimensional space.
    Type: Application
    Filed: February 23, 2023
    Publication date: June 22, 2023
    Applicant: Fyusion, Inc.
    Inventors: Nico Gregor Sebastian Blodow, Martin Saelzle, Matteo Munaro, Krunal Ketan Chande, Rodrigo Ortiz Cayon, Stefan Johannes Josef Holzer
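
    The abstract above turns on the camera sitting away from the gimbal's nodal point, so each rotation displaces the optical center. As a rough illustration, the following minimal sketch computes one camera pose per rotational position given a fixed lever-arm offset; the yaw-only rotation, offset vector, and pose convention are assumptions for the sketch, not the patented method.

    ```python
    # Minimal sketch: compute camera poses for a gimbal whose camera sits at a
    # fixed offset from the nodal point, one pose per rotational position.
    # Hypothetical geometry; not the patented method.
    import numpy as np

    def rot_z(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    def camera_poses(yaw_angles_deg, camera_offset):
        """Return (R, center) world poses for each gimbal yaw angle.

        The gimbal rotates about the nodal point (origin); the camera center is
        displaced by `camera_offset`, so each rotation moves the optical center.
        """
        poses = []
        for deg in yaw_angles_deg:
            R = rot_z(np.radians(deg))
            center = R @ np.asarray(camera_offset, dtype=float)
            poses.append((R, center))
        return poses

    if __name__ == "__main__":
        offset = [0.05, 0.0, 0.0]  # 5 cm lever arm off the nodal point
        for R, c in camera_poses(range(0, 360, 90), offset):
            print(np.round(c, 3))  # camera centers trace a circle, not a point
    ```
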
  • Patent number: 11636637
    Abstract: Various embodiments of the present invention relate generally to mechanisms and processes relating to artificially rendering images using viewpoint interpolation and extrapolation. According to particular embodiments, a method includes applying a transform to estimate a path outside the trajectory between a first frame and a second frame, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location. The process also includes generating an artificially rendered image corresponding to a third location positioned on the path.
    Type: Grant
    Filed: March 22, 2019
    Date of Patent: April 25, 2023
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Martin Saelzle, Radu Bogdan Rusu
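
    The abstract above describes estimating a transform between two frames and rendering a view at a third location along a path outside the captured trajectory. As a rough illustration, the following minimal sketch fits an affine transform to point correspondences and extrapolates it past the second frame; the correspondences, affine model, and blending factor are assumptions for the sketch, not the patented rendering pipeline.

    ```python
    # Minimal sketch: estimate an affine transform between two frames from point
    # correspondences, then extrapolate it past the second frame to stand in for
    # a view along a path outside the captured trajectory. Toy illustration only.
    import numpy as np

    def fit_affine(src, dst):
        """Least-squares 2x3 affine A such that dst ~= src @ A[:, :2].T + A[:, 2]."""
        n = len(src)
        X = np.hstack([src, np.ones((n, 1))])        # (n, 3)
        W, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (3, 2)
        return W.T                                    # (2, 3)

    def extrapolate_affine(A, alpha):
        """Blend identity -> A by factor alpha; alpha > 1 extrapolates past frame 2."""
        I = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
        return I + alpha * (A - I)

    if __name__ == "__main__":
        src = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
        dst = src @ np.array([[1.1, 0.0], [0.0, 1.1]]).T + np.array([0.2, 0.0])
        A = fit_affine(src, dst)
        A_third = extrapolate_affine(A, 1.5)   # a "third location" past frame 2
        print(np.round(A_third, 3))
    ```
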
  • Patent number: 11615582
    Abstract: Images may be captured at an image capture device mounted on an image capture device gimbal capable of rotating the image capture device around a nodal point in one or more dimensions. Each of the images may be captured from a respective rotational position. The images may be captured by a designated camera that is not located at the nodal point in one or more of the respective rotational positions. A designated three-dimensional point cloud may be determined based on the plurality of images. The designated three-dimensional point cloud may include a plurality of points each having a respective position in a virtual three-dimensional space.
    Type: Grant
    Filed: June 8, 2021
    Date of Patent: March 28, 2023
    Assignee: Fyusion, Inc.
    Inventors: Nico Gregor Sebastian Blodow, Martin Saelzle, Matteo Munaro, Krunal Ketan Chande, Rodrigo Ortiz Cayon, Stefan Johannes Josef Holzer
  • Patent number: 11605151
    Abstract: Images of an undercarriage of a vehicle may be captured via one or more cameras. A point cloud may be determined based on the images. The point cloud may include points positioned in a virtual three-dimensional space. A stitched image may be determined based on the point cloud by projecting the point cloud onto a virtual camera view. The stitched image may be stored on a storage device.
    Type: Grant
    Filed: March 2, 2021
    Date of Patent: March 14, 2023
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Krunal Ketan Chande, Matteo Munaro, Pavel Hanchar, Aidas Liaudanskas, Wook Yeon Hwang, Blake McConnell, Johan Nordin, Milos Vlaski, Martin Markus Hubert Wawro, Nick Stetco, Martin Saelzle
  • Publication number: 20220392151
    Abstract: Images may be captured at an image capture device mounted on an image capture device gimbal capable of rotating the image capture device around a nodal point in one or more dimensions. Each of the images may be captured from a respective rotational position. The images may be captured by a designated camera that is not located at the nodal point in one or more of the respective rotational positions. A designated three-dimensional point cloud may be determined based on the plurality of images. The designated three-dimensional point cloud may include a plurality of points each having a respective position in a virtual three-dimensional space.
    Type: Application
    Filed: June 8, 2021
    Publication date: December 8, 2022
    Applicant: Fyusion, Inc.
    Inventors: Nico Gregor Sebastian Blodow, Martin Saelzle, Matteo Munaro, Krunal Ketan Chande, Rodrigo Ortiz Cayon, Stefan Johannes Josef Holzer
  • Publication number: 20220284544
    Abstract: Images of an undercarriage of a vehicle may be captured via one or more cameras. A point cloud may be determined based on the images. The point cloud may include points positioned in a virtual three-dimensional space. A stitched image may be determined based on the point cloud by projecting the point cloud onto a virtual camera view.
    Type: Application
    Filed: March 2, 2021
    Publication date: September 8, 2022
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Krunal Ketan Chande, Matteo Munaro, Pavel Hanchar, Aidas Liaudanskas, Wook Yeon Hwang, Blake McConnell, Johan Nordin, Milos Vlaski, Martin Markus Hubert Wawro, Nick Stetco, Martin Saelzle
  • Publication number: 20220060639
    Abstract: Various embodiments of the present invention relate generally to systems and processes for transforming a style of video data. In one embodiment, a neural network is used to interpolate native video data received from a camera system on a mobile device in real-time. The interpolation converts the live native video data into a particular style. For example, the style can be associated with a particular artist or a particular theme. The stylized video data can be viewed on a display of the mobile device in a manner similar to that in which native live video data is output to the display. Thus, the stylized video data, which is viewed on the display, is consistent with a current position and orientation of the camera system on the display.
    Type: Application
    Filed: November 4, 2021
    Publication date: February 24, 2022
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Pavel Hanchar, Radu Bogdan Rusu, Martin Saelzle, Shuichi Tsutsumi, Stephen David Miller, George Haber
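
    The abstract above describes stylizing live camera frames with a neural network so the displayed output tracks the camera's current position and orientation. As a rough illustration only, the following minimal sketch runs a per-frame "stylization" loop; the palette-quantization stand-in for the network and the fake frame feed are assumptions for the sketch, not Fyusion's pipeline.

    ```python
    # Minimal sketch: a per-frame stylization loop standing in for a neural
    # style network applied to live camera frames. The "network" below is a
    # placeholder color remap; a real system would run a trained model on-device.
    import numpy as np

    def stylize(frame, palette):
        """Quantize each pixel to the nearest color in a small style palette."""
        flat = frame.reshape(-1, 3).astype(float)
        dists = np.linalg.norm(flat[:, None, :] - palette[None, :, :], axis=2)
        return palette[np.argmin(dists, axis=1)].reshape(frame.shape).astype(np.uint8)

    def live_loop(frames, palette):
        """Yield stylized frames in capture order, as a display loop would."""
        for frame in frames:
            yield stylize(frame, palette)

    if __name__ == "__main__":
        palette = np.array([[30, 30, 60], [220, 200, 120], [180, 60, 50]], float)
        fake_feed = (np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
                     for _ in range(3))
        for styled in live_loop(fake_feed, palette):
            print(styled.shape, styled.dtype)
    ```
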
  • Publication number: 20220058846
    Abstract: Various embodiments of the present invention relate generally to systems and methods for artificially rendering images using viewpoint interpolation and extrapolation. According to particular embodiments, a method includes moving a set of control points perpendicular to a trajectory between a first frame and a second frame, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location. The set of control points is associated with a layer and each control point is moved based on an associated depth of the control point. The method also includes generating an artificially rendered image corresponding to a third location outside of the trajectory by extrapolating individual control points using the set of control points for the third location and extrapolating pixel locations using the individual control points.
    Type: Application
    Filed: November 4, 2021
    Publication date: February 24, 2022
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Martin Saelzle, Radu Bogdan Rusu
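
    The abstract above describes moving control points perpendicular to the capture trajectory, scaled by each point's depth, to extrapolate a view outside that trajectory. As a rough illustration, the following minimal sketch applies a depth-scaled perpendicular shift to a set of 2-D control points; the 2-D camera locations, the inverse-depth parallax model, and the sample values are assumptions for the sketch, not the patented procedure.

    ```python
    # Minimal sketch: shift control points perpendicular to the camera trajectory,
    # scaled by each point's depth, to approximate parallax at an extrapolated
    # viewpoint. Illustrative assumptions throughout, not the patented procedure.
    import numpy as np

    def extrapolate_control_points(points, depths, loc1, loc2, loc3):
        """points: (N, 2) image positions in frame 1; depths: (N,) scene depths.

        The trajectory runs loc1 -> loc2; loc3 lies outside it. Each control
        point moves along the direction perpendicular to the trajectory by an
        amount inversely proportional to its depth (nearer points move more).
        """
        traj = np.asarray(loc2, float) - np.asarray(loc1, float)
        perp = np.array([-traj[1], traj[0]])
        perp = perp / np.linalg.norm(perp)
        # Signed distance of the third location from the trajectory line.
        offset = np.dot(np.asarray(loc3, float) - np.asarray(loc1, float), perp)
        shifts = offset / np.asarray(depths, float)          # parallax ~ 1 / depth
        return np.asarray(points, float) + shifts[:, None] * perp

    if __name__ == "__main__":
        pts = np.array([[10.0, 10.0], [40.0, 25.0]])
        print(extrapolate_control_points(pts, [2.0, 8.0], [0, 0], [1, 0], [0.5, 0.3]))
    ```
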
  • Patent number: 11202017
    Abstract: Various embodiments of the present invention relate generally to systems and processes for transforming a style of video data. In one embodiment, a neural network is used to interpolate native video data received from a camera system on a mobile device in real-time. The interpolation converts the live native video data into a particular style. For example, the style can be associated with a particular artist or a particular theme. The stylized video data can be viewed on a display of the mobile device in a manner similar to that in which native live video data is output to the display. Thus, the stylized video data, which is viewed on the display, is consistent with a current position and orientation of the camera system on the display.
    Type: Grant
    Filed: September 27, 2017
    Date of Patent: December 14, 2021
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Pavel Hanchar, Radu Bogdan Rusu, Martin Saelzle, Shuichi Tsutsumi, Stephen David Miller, George Haber
  • Patent number: 11195314
    Abstract: Various embodiments of the present invention relate generally to systems and methods for artificially rendering images using viewpoint interpolation and extrapolation. According to particular embodiments, a method includes moving a set of control points perpendicular to a trajectory between a first frame and a second frame, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location. The set of control points is associated with a layer and each control point is moved based on an associated depth of the control point. The method also includes generating an artificially rendered image corresponding to a third location outside of the trajectory by extrapolating individual control points using the set of control points for the third location and extrapolating pixel locations using the individual control points.
    Type: Grant
    Filed: November 2, 2018
    Date of Patent: December 7, 2021
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Martin Saelzle, Radu Bogdan Rusu
  • Patent number: 10733475
    Abstract: Various embodiments of the present invention relate generally to systems and processes for artificially rendering images using interpolation of tracked control points. According to particular embodiments, a set of control points is tracked between a first frame and a second frame, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location. An artificially rendered image corresponding to a third location is then generated by interpolating individual control points for the third location using the set of control points and interpolating pixel locations using the individual control points. The individual control points are used to transform image data.
    Type: Grant
    Filed: March 26, 2018
    Date of Patent: August 4, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Martin Saelzle, Radu Bogdan Rusu
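
    The abstract above describes interpolating tracked control points for a third location and then interpolating pixel locations from those control points. As a rough illustration, the following minimal sketch linearly interpolates control points between two frames and warps nearby pixels with inverse-distance weighting of the control-point motion; the weighting scheme and sample coordinates are assumptions for the sketch, not the patented transform.

    ```python
    # Minimal sketch: interpolate tracked control points between two frames, then
    # move nearby pixels with an inverse-distance-weighted blend of the
    # control-point shifts. Toy illustration, not the patented transform.
    import numpy as np

    def interpolate_controls(cp_frame1, cp_frame2, t):
        """Control points at a viewpoint a fraction t of the way along the path."""
        return (1.0 - t) * cp_frame1 + t * cp_frame2

    def warp_pixels(pixels, cp_from, cp_to, eps=1e-6):
        """Shift each pixel by a distance-weighted blend of control-point motions."""
        shifts = cp_to - cp_from                              # (K, 2)
        d = np.linalg.norm(pixels[:, None, :] - cp_from[None, :, :], axis=2)
        w = 1.0 / (d + eps)
        w = w / w.sum(axis=1, keepdims=True)                  # (N, K)
        return pixels + w @ shifts

    if __name__ == "__main__":
        cp1 = np.array([[10.0, 10.0], [50.0, 40.0]])
        cp2 = np.array([[14.0, 10.0], [52.0, 43.0]])
        cp_mid = interpolate_controls(cp1, cp2, 0.5)   # control points at the third location
        px = np.array([[12.0, 12.0], [48.0, 39.0]])
        print(np.round(warp_pixels(px, cp1, cp_mid), 2))
    ```
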
  • Patent number: 10726593
    Abstract: Various embodiments of the present invention relate generally to systems and methods for artificially rendering images using viewpoint interpolation and/or extrapolation. According to particular embodiments, a transformation between a first frame and a second frame is estimated, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location.
    Type: Grant
    Filed: September 22, 2015
    Date of Patent: July 28, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Martin Saelzle, Radu Bogdan Rusu
  • Patent number: 10719733
    Abstract: Various embodiments of the present invention relate generally to systems and processes for artificially rendering images using interpolation of tracked control points. According to particular embodiments, a set of control points is tracked between a first frame and a second frame, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location. An artificially rendered image corresponding to a third location is then generated by interpolating individual control points for the third location using the set of control points and interpolating pixel locations using the individual control points. The individual control points are used to transform image data.
    Type: Grant
    Filed: March 26, 2018
    Date of Patent: July 21, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Martin Saelzle, Radu Bogdan Rusu
  • Patent number: 10719732
    Abstract: Various embodiments of the present invention relate generally to systems and processes for artificially rendering images using interpolation of tracked control points. According to particular embodiments, a set of control points is tracked between a first frame and a second frame, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location. An artificially rendered image corresponding to a third location is then generated by interpolating individual control points for the third location using the set of control points and interpolating pixel locations using the individual control points. The individual control points are used to transform image data.
    Type: Grant
    Filed: March 26, 2018
    Date of Patent: July 21, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Martin Saelzle, Radu Bogdan Rusu
  • Patent number: 10713851
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information, for purposes of creating a representation referred to herein as a surround view for presentation on a device. A real object can be tracked in the live image data for the purposes of creating a surround view using a number of tracking points. As a camera is moved around the real object, virtual objects can be rendered into live image data to create synthetic images where a position of the tracking points can be used to position the virtual object in the synthetic image. The synthetic images can be output in real-time. Further, virtual objects in the synthetic images can be incorporated into surround views.
    Type: Grant
    Filed: November 12, 2018
    Date of Patent: July 14, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Alexander Jay Bruen Trevor, Martin Saelzle, Stephen David Miller, Radu Bogdan Rusu
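
    The abstract above describes using tracking points on a real object to position a virtual object in synthetic images as the camera moves. As a rough illustration, the following minimal sketch anchors a simple overlay to the centroid and spread of tracked points; the square "virtual object", the centroid/spread placement rule, and the simulated point drift are assumptions for the sketch, not the patented renderer.

    ```python
    # Minimal sketch: anchor a virtual object to tracked points so it follows the
    # real object as the camera moves. Illustrative only, not the patented renderer.
    import numpy as np

    def place_virtual_object(track_points):
        """Derive an anchor (centroid) and scale (spread) from tracking points."""
        anchor = track_points.mean(axis=0)
        scale = track_points.std(axis=0).mean()
        return anchor, scale

    def render_overlay(frame, anchor, scale, color=(255, 0, 0)):
        """Draw a filled square of side ~scale at the anchor into the frame."""
        h, w = frame.shape[:2]
        half = max(1, int(scale / 2))
        x, y = int(anchor[0]), int(anchor[1])
        frame[max(0, y - half):min(h, y + half),
              max(0, x - half):min(w, x + half)] = color
        return frame

    if __name__ == "__main__":
        for i in range(3):  # simulate camera motion: tracked points drift each frame
            pts = np.array([[60.0, 40.0], [80.0, 44.0], [70.0, 60.0]]) + i * 5.0
            anchor, scale = place_virtual_object(pts)
            synthetic = render_overlay(np.zeros((120, 160, 3), np.uint8), anchor, scale)
            print(i, np.round(anchor, 1), int(synthetic.sum()))
    ```
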
  • Patent number: 10504293
    Abstract: Provided are mechanisms and processes for augmenting multi-view image data with synthetic objects using inertial measurement unit (IMU) and image data. In one example, a process includes receiving a selection of an anchor location in a reference image for a synthetic object to be placed within a multi-view image. Movements between the reference image and a target image are computed using visual tracking information associated with the multi-view image, device orientation corresponding to the multi-view image, and an estimate of the camera's intrinsic parameters. A first synthetic image is then generated by placing the synthetic object at the anchor location using visual tracking information in the multi-view image, orienting the synthetic object using the inverse of the movements computed between the reference image and the target image, and projecting the synthetic object along a ray into a target view associated with the target image.
    Type: Grant
    Filed: November 5, 2018
    Date of Patent: December 10, 2019
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Alexander Jay Bruen Trevor, Martin Saelzle, Radu Bogdan Rusu
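
    The abstract above describes anchoring a synthetic object in a reference image, computing the movement to a target image from visual tracking, device orientation, and camera intrinsics, and projecting the object along a ray into the target view. As a rough illustration, the following minimal sketch back-projects an anchor pixel, applies the inverse of a device rotation, and re-projects it; the yaw-only rotation, intrinsic matrix, and anchor depth are assumptions for the sketch, not the patented process.

    ```python
    # Minimal sketch: place a synthetic object at an anchor chosen in a reference
    # image, then re-project it into a target view using a device rotation (as an
    # IMU might report) and pinhole intrinsics. Illustrative assumptions only.
    import numpy as np

    def backproject(pixel, depth, K):
        """Ray from the reference camera through `pixel`, scaled to `depth`."""
        uv1 = np.array([pixel[0], pixel[1], 1.0])
        return depth * (np.linalg.inv(K) @ uv1)

    def project(point_cam, K):
        """Pinhole projection of a 3-D camera-space point to pixel coordinates."""
        p = K @ point_cam
        return p[:2] / p[2]

    if __name__ == "__main__":
        K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
        anchor_px, anchor_depth = (300, 260), 2.0       # anchor picked in reference image
        X_ref = backproject(anchor_px, anchor_depth, K)

        yaw = np.radians(5.0)                           # device rotation between views
        R = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                      [0, 1, 0],
                      [-np.sin(yaw), 0, np.cos(yaw)]])
        X_target = R.T @ X_ref                          # inverse of the device motion
        print(np.round(project(X_target, K), 1))        # anchor in the target view
    ```
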
  • Publication number: 20190096137
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information, for purposes of creating a representation referred to herein as a surround view for presentation on a device. A real object can be tracked in the live image data for the purposes of creating a surround view using a number of tracking points. As a camera is moved around the real object, virtual objects can be rendered into live image data to create synthetic images where a position of the tracking points can be used to position the virtual object in the synthetic image. The synthetic images can be output in real-time. Further, virtual objects in the synthetic images can be incorporated into surround views.
    Type: Application
    Filed: November 12, 2018
    Publication date: March 28, 2019
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Alexander Jay Bruen Trevor, Martin Saelzle, Stephen David Miller, Radu Bogdan Rusu