Patents by Inventor Stefan Johannes Josef HOLZER
Stefan Johannes Josef HOLZER has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20180255284
Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a multi-view interactive digital media representation for presentation on a device. Once a multi-view interactive digital media representation is generated, a user can provide navigational inputs, such as tilting of the device, which alter the presentation state of the multi-view interactive digital media representation. The navigational inputs can be analyzed to determine metrics which indicate a user's interest in the multi-view interactive digital media representation.
Type: Application
Filed: March 3, 2017
Publication date: September 6, 2018
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef HOLZER, Radu Bogdan Rusu, Stephen David Miller, Pantelis Kalogiros, George Haber
-
Patent number: 10068316
Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a multi-view interactive digital media representation for presentation on a device. Once a multi-view interactive digital media representation is generated, a user can provide navigational inputs, such as tilting of the device, which alter the presentation state of the multi-view interactive digital media representation. The navigational inputs can be analyzed to determine metrics which indicate a user's interest in the multi-view interactive digital media representation.
Type: Grant
Filed: March 3, 2017
Date of Patent: September 4, 2018
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Stephen David Miller, Pantelis Kalogiros, George Haber
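The two entries above describe deriving interest metrics from navigational inputs such as device tilt. A minimal sketch of one way such a metric could be computed; the event format, the cap value, and the function name are illustrative assumptions, not taken from the patent:

```python
def interest_score(tilt_events, view_duration_s):
    """Combine navigation activity and dwell time into one engagement
    metric for a multi-view representation.

    tilt_events: list of absolute tilt deltas (degrees) the user applied.
    view_duration_s: total time the representation was on screen.
    """
    if view_duration_s <= 0:
        return 0.0
    # Total navigation effort, capped so one wild swing cannot dominate.
    activity = sum(min(abs(d), 45.0) for d in tilt_events)
    # Normalize by viewing time: sustained interaction scores higher
    # than a brief burst followed by abandonment.
    return activity / view_duration_s


# A user who tilted steadily for 10 s shows more interest than one
# who barely interacted over the same period.
engaged = interest_score([5.0] * 20, view_duration_s=10.0)
passive = interest_score([2.0], view_duration_s=10.0)
```

Such per-session scores could then be aggregated across users to rank representations by engagement.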
-
Patent number: 10070154
Abstract: Provided are mechanisms and processes for performing live filtering in a camera view via client-server communication. In one example, a first video frame in a raw video stream is transmitted from a client device to a server. The client device receives a filter processing message associated with the first video frame that includes filter data for applying a filter to the first video frame. A processor at the client device creates a filtered video stream by applying the filter to a second video frame that occurs in the video stream later than the first video frame.
Type: Grant
Filed: February 7, 2017
Date of Patent: September 4, 2018
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Abhishek Kar, Alexander Jay Bruen Trevor, Krunal Ketan Chande, Radu Bogdan Rusu
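The key idea in the abstract above is that the server's filter reply arrives after the analyzed frame has already been displayed, so the client applies the filter to a later frame. A toy sketch of that timing, with frames modeled as plain numbers and a hypothetical two-frame round-trip latency:

```python
def apply_filter_late(frames, analyzed_index, filter_fn):
    """Simulate client-side live filtering with server round-trip delay.

    The server analyzes frame `analyzed_index`; by the time its filter
    parameters arrive, the client is already showing later frames, so
    the filter takes effect from a subsequent frame onward.
    """
    latency = 2  # assumed round-trip delay, in frames
    start = analyzed_index + latency
    return [
        filter_fn(f) if i >= start else f
        for i, f in enumerate(frames)
    ]


# Frames modeled as grayscale values; the "filter" brightens by 10.
frames = [100, 101, 102, 103, 104]
out = apply_filter_late(frames, analyzed_index=0, filter_fn=lambda v: v + 10)
# Frames 0 and 1 pass through unfiltered; frame 2 onward is filtered.
```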
-
Publication number: 20180227569
Abstract: Various embodiments disclosed herein relate to systems and methods for analyzing and manipulating images and video. Methods as disclosed herein may include retrieving, using a processor, a multi-view interactive digital media representation (MIDMR) from a storage location, the MIDMR including a content model and a context model, the content model characterizing an object, and the context model characterizing scenery surrounding the object. The methods may also include receiving, using the processor, at least one dynamic content input associated with the retrieved MIDMR, the dynamic content input being received while a user is interacting with the MIDMR. The methods may further include implementing, using the processor, one or more modifications associated with the MIDMR based, at least in part, on the received at least one dynamic content input, the one or more modifications modifying a presentation and functionality of the MIDMR.
Type: Application
Filed: May 26, 2017
Publication date: August 9, 2018
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef HOLZER, Stephen David Miller, Pantelis Kalogiros, George Haber, Radu Bogdan Rusu
-
Publication number: 20180227601
Abstract: Provided are mechanisms and processes for performing live filtering in a camera view via client-server communication. In one example, a first video frame in a raw video stream is transmitted from a client device to a server. The client device receives a filter processing message associated with the first video frame that includes filter data for applying a filter to the first video frame. A processor at the client device creates a filtered video stream by applying the filter to a second video frame that occurs in the video stream later than the first video frame.
Type: Application
Filed: February 7, 2017
Publication date: August 9, 2018
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef HOLZER, Matteo Munaro, Abhishek Kar, Alexander Jay Bruen Trevor, Krunal Ketan Chande, Radu Bogdan Rusu
-
Publication number: 20180225517
Abstract: Provided are mechanisms and processes for performing skeleton detection and tracking via client-server communication. In one example, a server transmits a skeleton detection message that includes first position data for a skeleton representing the structure of an object depicted in a first video frame in a raw video stream at a client device. Based on the initial position data, a processor identifies intervening position data for the skeleton in one or more intervening video frames that are temporally located after the first video frame in the raw video stream. A filtered video stream is then presented by altering the raw video stream based at least in part on the first position data and the intervening position data.
Type: Application
Filed: February 7, 2017
Publication date: August 9, 2018
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef HOLZER, Matteo Munaro, Abhishek Kar, Alexander Jay Bruen Trevor, Krunal Ketan Chande, Radu Bogdan Rusu
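The abstract above identifies skeleton positions for intervening frames from a server-provided detection. One plausible realization is linear interpolation between two detections; the joint-dictionary format and function name below are assumptions for illustration:

```python
def interpolate_skeleton(first_pose, latest_pose, num_intervening):
    """Estimate skeleton joint positions for the frames between two
    server-provided detections by linear interpolation.

    first_pose / latest_pose: dicts mapping joint name -> (x, y).
    Returns one pose dict per intervening frame.
    """
    poses = []
    for k in range(1, num_intervening + 1):
        t = k / (num_intervening + 1)  # fraction of the way to the latest pose
        pose = {
            joint: (
                (1 - t) * first_pose[joint][0] + t * latest_pose[joint][0],
                (1 - t) * first_pose[joint][1] + t * latest_pose[joint][1],
            )
            for joint in first_pose
        }
        poses.append(pose)
    return poses


a = {"head": (0.0, 0.0), "hand": (10.0, 0.0)}
b = {"head": (4.0, 0.0), "hand": (10.0, 8.0)}
mid = interpolate_skeleton(a, b, num_intervening=1)[0]
# The single intervening frame sits halfway between the two detections.
```

A production tracker would likely smooth with a motion model rather than interpolate linearly, but the data flow (sparse server detections, dense client-side fill-in) is the same.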
-
Publication number: 20180227482
Abstract: Provided are mechanisms and processes for scene-aware selection of filters and effects for visual digital media content. In one example, a digital media item is analyzed with a processor to identify one or more characteristics associated with the digital media item, where the characteristics include a physical object represented in the digital media item. Based on the identified characteristics, a digital media modification is selected from a plurality of digital media modifications for application to the digital media item. The digital media modification may then be provided for presentation in a user interface for selection by a user.
Type: Application
Filed: February 7, 2017
Publication date: August 9, 2018
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef HOLZER, Matteo Munaro, Abhishek Kar, Alexander Jay Bruen Trevor, Krunal Ketan Chande, Michelle Jung-Ah Ho, Radu Bogdan Rusu
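Scene-aware selection as described above amounts to mapping recognized characteristics to candidate modifications. A minimal sketch; the object-to-filter catalog here is entirely hypothetical, since the publication does not disclose one:

```python
# Hypothetical mapping from detected object classes to candidate
# modifications, plus generic fallbacks.
FILTERS_BY_OBJECT = {
    "car": ["chrome_shine", "motion_blur"],
    "face": ["beauty_smooth", "bokeh"],
    "food": ["warm_tone", "saturate"],
}
DEFAULT_FILTERS = ["brightness", "contrast"]


def suggest_filters(detected_objects):
    """Rank filter suggestions for a media item based on which physical
    objects were recognized in it, falling back to generic adjustments."""
    suggestions = []
    for obj in detected_objects:
        for f in FILTERS_BY_OBJECT.get(obj, []):
            if f not in suggestions:
                suggestions.append(f)
    # Generic filters are always offered, after the scene-specific ones.
    return suggestions + [f for f in DEFAULT_FILTERS if f not in suggestions]


result = suggest_filters(["car", "face"])
```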
-
Publication number: 20180218236
Abstract: Various embodiments of the present invention relate generally to systems and processes for artificially rendering images using interpolation of tracked control points. According to particular embodiments, a set of control points is tracked between a first frame and a second frame, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location. An artificially rendered image corresponding to a third location is then generated by interpolating individual control points for the third location using the set of control points and interpolating pixel locations using the individual control points. The individual control points are used to transform image data.
Type: Application
Filed: March 26, 2018
Publication date: August 2, 2018
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Martin Saelzle, Radu Bogdan Rusu
-
Publication number: 20180218235
Abstract: Various embodiments of the present invention relate generally to systems and processes for artificially rendering images using interpolation of tracked control points. According to particular embodiments, a set of control points is tracked between a first frame and a second frame, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location. An artificially rendered image corresponding to a third location is then generated by interpolating individual control points for the third location using the set of control points and interpolating pixel locations using the individual control points. The individual control points are used to transform image data.
Type: Application
Filed: March 26, 2018
Publication date: August 2, 2018
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Martin Saelzle, Radu Bogdan Rusu
-
Publication number: 20180211131
Abstract: Various embodiments of the present invention relate generally to systems and processes for artificially rendering images using interpolation of tracked control points. According to particular embodiments, a set of control points is tracked between a first frame and a second frame, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location. An artificially rendered image corresponding to a third location is then generated by interpolating individual control points for the third location using the set of control points and interpolating pixel locations using the individual control points. The individual control points are used to transform image data.
Type: Application
Filed: March 26, 2018
Publication date: July 26, 2018
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Martin Saelzle, Radu Bogdan Rusu
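The control-point abstracts above (also granted as patent 9940541, listed further down) describe two interpolation steps: first interpolate the control points themselves for the intermediate viewpoint, then move pixels according to those points. A simplified sketch, assuming 2-D points and a nearest-control-point pixel transform that is much cruder than a real warp:

```python
def interpolate_control_points(points_a, points_b, t):
    """Given control points tracked between two captured frames,
    synthesize their positions for a viewpoint a fraction t of the
    way from the first capture location to the second."""
    return [
        ((1 - t) * ax + t * bx, (1 - t) * ay + t * by)
        for (ax, ay), (bx, by) in zip(points_a, points_b)
    ]


def warp_pixel(pixel, nearest_cp_a, nearest_cp_interp):
    """Shift a pixel by the displacement of its nearest control point:
    a crude stand-in for the per-pixel transform the abstracts describe."""
    dx = nearest_cp_interp[0] - nearest_cp_a[0]
    dy = nearest_cp_interp[1] - nearest_cp_a[1]
    return (pixel[0] + dx, pixel[1] + dy)


a = [(0.0, 0.0), (10.0, 10.0)]
b = [(2.0, 0.0), (12.0, 10.0)]
mid = interpolate_control_points(a, b, t=0.5)
# Control points move halfway; pixels near them follow their displacement.
moved = warp_pixel((1.0, 1.0), a[0], mid[0])
```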
-
Publication number: 20180203880
Abstract: Provided are mechanisms and processes for performing live search using multi-view digital media representations. In one example, a process includes receiving a visual search query from a device for an object to be searched, where the visual search query includes a first set of viewpoints of the object obtained during capture of a first surround view of the object during a live search session. Next, additional recommended viewpoints of the object are identified for the device to capture, where the additional recommended viewpoints are chosen to provide more information about the object. A first set of search results based on the first set of viewpoints and additional recommended viewpoints of the object are transmitted to the device. In response, a second set of viewpoints of the object captured using image capture capabilities of the device are received. A second set of search results with enhanced matches for the object based on the first and second sets of viewpoints are then transmitted to the device.
Type: Application
Filed: January 18, 2017
Publication date: July 19, 2018
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Pantelis Kalogiros, Ioannis Spanos, Luke Parham, Radu Bogdan Rusu
-
Publication number: 20180203877
Abstract: Provided are mechanisms and processes for performing visual search using multi-view digital media representations, such as surround views. In one example, a process includes receiving a visual search query that includes a surround view of an object to be searched, where the surround view includes spatial information, scale information, and different viewpoint images of the object. The surround view is compared to stored surround views by comparing spatial information and scale information of the surround view to spatial information and scale information of the stored surround views. A correspondence measure is then generated indicating the degree of similarity between the surround view and a possible match. At least one search result is then transmitted with a corresponding image in response to the visual search query.
Type: Application
Filed: January 18, 2017
Publication date: July 19, 2018
Inventors: Stefan Johannes Josef HOLZER, Abhishek KAR, Alexander Jay Bruen TREVOR, Pantelis KALOGIROS, Ioannis SPANOS, Radu Bogdan RUSU
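The correspondence measure described above compares spatial and scale information between a query surround view and stored ones. A toy version, under the assumption (mine, not the publication's) that each surround view is summarized as a scale value plus a fixed-length spatial descriptor:

```python
import math


def correspondence_measure(query, candidate):
    """Score similarity between two surround views summarized as
    (scale, spatial_descriptor) pairs. Returns a value in (0, 1],
    with 1.0 for an exact match."""
    q_scale, q_desc = query
    c_scale, c_desc = candidate
    # Penalize scale mismatch; ratio-based so units cancel out.
    scale_sim = min(q_scale, c_scale) / max(q_scale, c_scale)
    # Euclidean distance between spatial descriptors, squashed to (0, 1].
    dist = math.dist(q_desc, c_desc)
    spatial_sim = 1.0 / (1.0 + dist)
    return scale_sim * spatial_sim


query = (1.0, (0.0, 0.0, 1.0))
exact = (1.0, (0.0, 0.0, 1.0))
other = (2.0, (5.0, 5.0, 5.0))
# An identical stored view outscores a view at a different scale
# with a distant spatial descriptor.
```

Ranking stored views by this score and returning the top hits mirrors the search-result step of the abstract.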
-
Patent number: 10026219
Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view. In particular embodiments, a surround view can be generated by combining a panoramic view of an object with a panoramic view of a distant scene, such that the object panorama is placed in a foreground position relative to the distant scene panorama. Such combined panoramas can enhance the interactive and immersive viewing experience of the surround view.
Type: Grant
Filed: November 2, 2017
Date of Patent: July 17, 2018
Assignee: FYUSION, INC.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu
-
Publication number: 20180199025
Abstract: Various embodiments of the present disclosure relate generally to drone-based systems and methods for capturing a multi-media representation of an entity. In some embodiments, the multi-media representation is digital, multi-view, interactive, or a combination thereof. According to particular embodiments, a drone having a camera is controlled or operated to obtain a plurality of images having location information. The plurality of images, including at least a portion of overlapping subject matter, are fused to form multi-view interactive digital media representations.
Type: Application
Filed: March 5, 2018
Publication date: July 12, 2018
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef HOLZER, Stephen David MILLER, Radu Bogdan RUSU
-
Publication number: 20180165879
Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view for presentation on a device. A real object can be tracked in the live image data for the purposes of creating a surround view using a number of tracking points. As a camera is moved around the real object, virtual objects can be rendered into live image data to create synthetic images where a position of the tracking points can be used to position the virtual object in the synthetic image. The synthetic images can be output in real-time. Further, virtual objects in the synthetic images can be incorporated into surround views.
Type: Application
Filed: December 9, 2016
Publication date: June 14, 2018
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Alexander Jay Bruen Trevor, Martin Saelzle, Stephen David Miller, Radu Bogdan Rusu
-
Publication number: 20180165827
Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view for presentation on a device. A visual guide can be provided for capturing the multiple images used in the surround view. The visual guide can be a synthetic object that is rendered in real-time into the images output to a display of an image capture device. The visual guide can help a user keep the image capture device moving along a desired trajectory.
Type: Application
Filed: December 12, 2016
Publication date: June 14, 2018
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef HOLZER, Alexander Jay Bruen Trevor, Michelle Jung-Ah Ho, David Klein, Stephen David Miller, Shuichi Tsutsumi, Radu Bogdan Rusu
-
Patent number: 9996945
Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view for presentation on a device. A visual guide can be provided for capturing the multiple images used in the surround view. The visual guide can be a synthetic object that is rendered in real-time into the images output to a display of an image capture device. The visual guide can help a user keep the image capture device moving along a desired trajectory.
Type: Grant
Filed: December 12, 2016
Date of Patent: June 12, 2018
Assignee: FYUSION, INC.
Inventors: Stefan Johannes Josef Holzer, Alexander Jay Bruen Trevor, Michelle Jung-Ah Ho, David Klein, Stephen David Miller, Shuichi Tsutsumi, Radu Bogdan Rusu
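The visual-guide entries above hinge on knowing when the capture device has strayed from the desired trajectory, so the rendered guide can nudge the user back. A minimal 2-D sketch of that check; the waypoint representation and tolerance are assumptions for illustration:

```python
def off_trajectory(device_pos, desired_path, tolerance):
    """Return True when the capture device has drifted farther than
    `tolerance` from the desired capture trajectory, i.e. when a
    visual guide would prompt a course correction.

    desired_path: list of (x, y) waypoints approximating the trajectory.
    """
    nearest = min(
        ((device_pos[0] - wx) ** 2 + (device_pos[1] - wy) ** 2) ** 0.5
        for wx, wy in desired_path
    )
    return nearest > tolerance


# Waypoints on a circular arc around the object being captured;
# the first device position has wandered toward the center.
path = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
drifted = off_trajectory((0.2, 0.2), path, tolerance=0.5)
on_course = off_trajectory((0.9, 0.1), path, tolerance=0.5)
```

In an actual capture flow this check would run per frame against device pose, and the synthetic guide object would be re-rendered accordingly.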
-
Publication number: 20180103213
Abstract: Various embodiments of the present invention relate generally to systems and processes for transforming a style of video data. In one embodiment, a neural network is used to interpolate native video data received from a camera system on a mobile device in real-time. The interpolation converts the live native video data into a particular style. For example, the style can be associated with a particular artist or a particular theme. The stylized video data can be viewed on a display of the mobile device in a manner similar to that in which native live video data is output to the display. Thus, the stylized video data, which is viewed on the display, is consistent with a current position and orientation of the camera system on the display.
Type: Application
Filed: September 27, 2017
Publication date: April 12, 2018
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef HOLZER, Abhishek Kar, Pavel Gonchar, Radu Bogdan Rusu, Martin Saelzle, Shuichi Tsutsumi, Stephen David Miller, George Haber
-
Patent number: 9940541
Abstract: Various embodiments of the present invention relate generally to systems and processes for artificially rendering images using interpolation of tracked control points. According to particular embodiments, a set of control points is tracked between a first frame and a second frame, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location. An artificially rendered image corresponding to a third location is then generated by interpolating individual control points for the third location using the set of control points and interpolating pixel locations using the individual control points. The individual control points are used to transform image data.
Type: Grant
Filed: July 15, 2015
Date of Patent: April 10, 2018
Assignee: FYUSION, INC.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Martin Saelzle, Radu Bogdan Rusu
-
Publication number: 20180068485
Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view. In particular embodiments, a surround view can be generated by combining a panoramic view of an object with a panoramic view of a distant scene, such that the object panorama is placed in a foreground position relative to the distant scene panorama. Such combined panoramas can enhance the interactive and immersive viewing experience of the surround view.
Type: Application
Filed: November 2, 2017
Publication date: March 8, 2018
Applicant: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu