Patents by Inventor Alexander Jais

Alexander Jais has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10222932
    Abstract: Various embodiments of the present disclosure relate generally to systems and methods for generating multi-view interactive digital media representations in a virtual reality environment. According to particular embodiments, a plurality of images is fused into a first content model and a first context model, both of which include multi-view interactive digital media representations of objects. Next, a virtual reality environment is generated using the first content model and the first context model. The virtual reality environment includes a first layer and a second layer. The user can navigate through and within the virtual reality environment to switch between multiple viewpoints of the content model via corresponding physical movements. The first layer includes the first content model, the second layer includes a second content model, and selection of the first layer provides access to the second layer with the second content model.
    Type: Grant
    Filed: August 21, 2017
    Date of Patent: March 5, 2019
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu, Alexander Jay Bruen Trevor, Krunal Ketan Chande
  • Patent number: 10210662
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view for presentation on a device. A real object can be tracked in the live image data for the purposes of creating a surround view using a number of tracking points. As a camera is moved around the real object, virtual objects can be rendered into live image data to create synthetic images where a position of the tracking points can be used to position the virtual object in the synthetic image. The synthetic images can be output in real-time. Further, virtual objects in the synthetic images can be incorporated into surround views.
    Type: Grant
    Filed: December 9, 2016
    Date of Patent: February 19, 2019
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Alexander Jay Bruen Trevor, Martin Saelzle, Stephen David Miller, Radu Bogdan Rusu
  • Patent number: 10200677
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The multi-view interactive digital media representation can include a plurality of images where each of the plurality of images includes the object from a different camera view. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
    Type: Grant
    Filed: May 22, 2017
    Date of Patent: February 5, 2019
    Assignee: Fyusion, Inc.
    Inventors: Alexander Jay Bruen Trevor, Chris Beall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
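The angular-view idea in the abstract above can be illustrated with a minimal sketch. This is not the patented implementation; it assumes a stream of gyroscope yaw-rate samples from an inertial measurement unit, integrates them to estimate the angle swept around the object, and picks capture frames at even angular steps so playback resembles a 3-D rotation without a polygon model. The function names and the fixed sample interval are illustrative assumptions.

```python
# Hypothetical sketch: integrate IMU yaw-rate samples to estimate the angular
# view swept around an object, then select frames at even angular intervals.

def estimate_angles(yaw_rates, dt):
    """Cumulative angle (degrees) swept at each sample, by simple integration."""
    angles, total = [], 0.0
    for rate in yaw_rates:          # rate in degrees/second
        total += rate * dt
        angles.append(total)
    return angles

def select_frames(angles, step=10.0):
    """Indices of the first samples that reach each multiple of `step` degrees."""
    selected, next_angle = [0], step
    for i, angle in enumerate(angles):
        if angle >= next_angle:
            selected.append(i)
            next_angle += step
    return selected
```

For example, 100 samples at 20 degrees/second with a 50 ms interval sweep 100 degrees, yielding eleven evenly spaced frames (every 10 degrees).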
  • Patent number: 10194188
    Abstract: Videos associated with video resolutions may be received. A first bitrate for each of the video resolutions may be identified based on a first bitrate point associated with the videos where a quality of the videos at a first video resolution that is upscaled to a second video resolution is better than a quality of the videos at the second video resolution at bitrates below the first bitrate point. The upscaling of the first video resolution may correspond to converting the videos from the first video resolution to the second video resolution at a client device. The identified corresponding first bitrate may be assigned to each of the video resolutions.
    Type: Grant
    Filed: December 4, 2017
    Date of Patent: January 29, 2019
    Assignee: Google LLC
    Inventors: Sang-Uok Kum, Sam John, Thierry Foucu, Lei Yang, Alexander Jay Converse, Steve Benting
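The crossover logic in the abstract above can be sketched in a few lines. This is an illustrative assumption, not the patented method: given measured quality scores (e.g. VMAF-style, higher is better) for a lower resolution upscaled on the client and for the native higher resolution, across a shared bitrate ladder, find the first bitrate at which the native higher resolution wins; below that point the lower resolution is the better choice.

```python
# Hypothetical sketch: locate the bitrate point below which an upscaled lower
# resolution beats the native higher resolution in measured quality.

def crossover_bitrate(bitrates, upscaled_low_quality, native_high_quality):
    """First bitrate where native high-res quality reaches upscaled low-res quality."""
    for bitrate, low_q, high_q in zip(bitrates, upscaled_low_quality, native_high_quality):
        if high_q >= low_q:
            return bitrate
    return None  # high res never wins in the measured range
```

With made-up scores, 720p upscaled to 1080p might beat native 1080p at 500 and 1000 kbps but lose from 2000 kbps upward, making 2000 kbps the assigned crossover point for that resolution pair.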
  • Patent number: 10152825
    Abstract: Provided are mechanisms and processes for augmenting multi-view image data with synthetic objects using inertial measurement unit (IMU) and image data. In one example, a process includes receiving a selection of an anchor location in a reference image for a synthetic object to be placed within a multi-view image. Movements between the reference image and a target image are computed using visual tracking information associated with the multi-view image, device orientation corresponding to the multi-view image, and an estimate of the camera's intrinsic parameters. A first synthetic image is then generated by placing the synthetic object at the anchor location using visual tracking information in the multi-view image, orienting the synthetic object using the inverse of the movements computed between the reference image and the target image, and projecting the synthetic object along a ray into a target view associated with the target image.
    Type: Grant
    Filed: January 28, 2016
    Date of Patent: December 11, 2018
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Alexander Jay Bruen Trevor, Martin Saelzle, Radu Bogdan Rusu
  • Patent number: 10140293
    Abstract: A computer-implemented technique can include receiving a selection by a user of a single word in a document in a source language, the document being displayed in a viewing application executing at the computing device. The technique can further include obtaining contextual information from the document that is indicative of a context of the selected word, providing the selected word and its contextual information from the viewing application to a different translation application, obtaining potential translated words using the translation application, the selected word, and its contextual information, each potential translated word being a potential translation of the selected word to a different target language that is preferred by the user, and displaying the potential translated words.
    Type: Grant
    Filed: May 18, 2015
    Date of Patent: November 27, 2018
    Assignee: Google LLC
    Inventors: Alexander Jay Cuthbert, Julie Cattiau
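The flow in the abstract above can be sketched as follows. Everything here is a stand-in, not a real API: the viewing app gathers the words surrounding the user's selection as contextual information, then hands the selected word plus that context to a separate translation step, which can use the context to disambiguate the word's meaning.

```python
# Hypothetical sketch: extract context around a selected word and pass both
# to a translation callback (a stand-in for a separate translation app).

def context_window(words, index, radius=3):
    """Words around the selected one, used to disambiguate its meaning."""
    lo = max(0, index - radius)
    return words[lo:index] + words[index + 1:index + 1 + radius]

def translate_selection(words, index, translate):
    """Provide the selected word plus its context to a translation callback."""
    return translate(words[index], context_window(words, index))
```

For "bank" selected in "the bank of the river was steep", the context includes "river", which lets a context-aware translator prefer the riverbank sense over the financial one.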
  • Publication number: 20180338128
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The multi-view interactive digital media representation can include a plurality of images where each of the plurality of images includes the object from a different camera view. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
    Type: Application
    Filed: May 22, 2017
    Publication date: November 22, 2018
    Applicant: Fyusion, Inc.
    Inventors: Alexander Jay Bruen Trevor, Chris Beall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
  • Publication number: 20180338126
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The determined angular views can be used to select from among the live images. The multi-view interactive digital media representation can include a plurality of images where each of the plurality of images includes the object from a different camera view. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
    Type: Application
    Filed: May 22, 2017
    Publication date: November 22, 2018
    Applicant: Fyusion, Inc.
    Inventors: Alexander Jay Bruen Trevor, Chris Beall, Vladimir Glavtchev, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
  • Publication number: 20180338083
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The determined angular views can be used to select from among the live images and determine when a three hundred sixty degree view of the object has been captured. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation, such as a three hundred sixty degree rotation, through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
    Type: Application
    Filed: May 22, 2017
    Publication date: November 22, 2018
    Applicant: Fyusion, Inc.
    Inventors: Alexander Jay Bruen Trevor, Chris Beall, Stefan Johannes Josef Holzer, Radu Bogdan Rusu
  • Publication number: 20180276203
    Abstract: A language translation application on a user device includes a user interface that provides relevant textual and graphical feedback mechanisms associated with various states of voice input and translated speech.
    Type: Application
    Filed: May 8, 2018
    Publication date: September 27, 2018
    Inventors: Alexander Jay Cuthbert, Sunny Goyal, Matthew Morton Gaba, Joshua J. Estelle, Masakazu Seno
  • Publication number: 20180260972
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view for presentation on a device. A visual guide can be provided for capturing the multiple images used in the surround view. The visual guide can be a synthetic object that is rendered in real-time into the images output to a display of an image capture device. The visual guide can help the user keep the image capture device moving along a desired trajectory.
    Type: Application
    Filed: May 14, 2018
    Publication date: September 13, 2018
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Alexander Jay Bruen Trevor, Michelle Jung-Ah Ho, David Klein, Stephen David Miller, Shuichi Tsutsumi, Radu Bogdan Rusu
  • Patent number: 10070154
    Abstract: Provided are mechanisms and processes for performing live filtering in a camera view via client-server communication. In one example, a first video frame in a raw video stream is transmitted from a client device to a server. The client device receives a filter processing message associated with the first video frame that includes filter data for applying a filter to the first video frame. A processor at the client device creates a filtered video stream by applying the filter to a second video frame that occurs in the video stream later than the first video frame.
    Type: Grant
    Filed: February 7, 2017
    Date of Patent: September 4, 2018
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Abhishek Kar, Alexander Jay Bruen Trevor, Krunal Ketan Chande, Radu Bogdan Rusu
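The client-side scheme in the abstract above can be illustrated with a minimal sketch. The filter data here is simplified to a single brightness gain, and `request_filter` is a stand-in for the network round-trip; the key point is that the filter computed from an earlier frame is applied to a later frame, hiding server latency.

```python
# Hypothetical sketch: send a raw frame to a server (stand-in callback), get
# back filter data, and apply it to a later frame in the stream.

def apply_filter(frame, gain):
    """Apply server-provided filter data (here, a brightness gain) to pixel values."""
    return [min(255, int(p * gain)) for p in frame]

def filtered_stream(frames, request_filter, delay=2):
    """Filter each frame with parameters computed from a frame `delay` steps earlier."""
    out = []
    gain = 1.0  # identity filter until the first server response "arrives"
    for i, frame in enumerate(frames):
        if i >= delay:
            gain = request_filter(frames[i - delay])
        out.append(apply_filter(frame, gain))
    return out
```

The `delay` parameter models how many frames elapse between uploading a frame and receiving its filter message.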
  • Publication number: 20180227482
    Abstract: Provided are mechanisms and processes for scene-aware selection of filters and effects for visual digital media content. In one example, a digital media item is analyzed with a processor to identify one or more characteristics associated with the digital media item, where the characteristics include a physical object represented in the digital media item. Based on the identified characteristics, a digital media modification is selected from a plurality of digital media modifications for application to the digital media item. The digital media modification may then be provided for presentation in a user interface for selection by a user.
    Type: Application
    Filed: February 7, 2017
    Publication date: August 9, 2018
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Abhishek Kar, Alexander Jay Bruen Trevor, Krunal Ketan Chande, Michelle Jung-Ah Ho, Radu Bogdan Rusu
  • Publication number: 20180227601
    Abstract: Provided are mechanisms and processes for performing live filtering in a camera view via client-server communication. In one example, a first video frame in a raw video stream is transmitted from a client device to a server. The client device receives a filter processing message associated with the first video frame that includes filter data for applying a filter to the first video frame. A processor at the client device creates a filtered video stream by applying the filter to a second video frame that occurs in the video stream later than the first video frame.
    Type: Application
    Filed: February 7, 2017
    Publication date: August 9, 2018
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Abhishek Kar, Alexander Jay Bruen Trevor, Krunal Ketan Chande, Radu Bogdan Rusu
  • Publication number: 20180225517
    Abstract: Provided are mechanisms and processes for performing skeleton detection and tracking via client-server communication. In one example, a server transmits a skeleton detection message that includes position data for a skeleton representing the structure of an object depicted in a first video frame in a raw video stream at a client device. Based on the initial position data, a processor identifies intervening position data for the skeleton in one or more intervening video frames that are temporally located after the first video frame in the raw video stream. A filtered video stream is then presented by altering the raw video stream based at least in part on the first position data and the intervening position data.
    Type: Application
    Filed: February 7, 2017
    Publication date: August 9, 2018
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Abhishek Kar, Alexander Jay Bruen Trevor, Krunal Ketan Chande, Radu Bogdan Rusu
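The client-side step in the abstract above can be sketched with a simple interpolation. This is an illustrative assumption, not the patented method: given skeleton joint positions the server detected on two key frames, the client estimates joint positions for the intervening frames by linear interpolation, so every frame can be altered without a server round-trip per frame.

```python
# Hypothetical sketch: estimate skeleton joint positions for intervening
# frames by linearly interpolating between two server-provided detections.

def interpolate_skeleton(joints_a, joints_b, n_between):
    """Per-frame (x, y) joint positions for the frames between two detections."""
    frames = []
    for k in range(1, n_between + 1):
        t = k / (n_between + 1)  # fraction of the way from detection A to B
        frames.append([
            (ax + t * (bx - ax), ay + t * (by - ay))
            for (ax, ay), (bx, by) in zip(joints_a, joints_b)
        ])
    return frames
```

A real client would likely blend this prediction with local visual tracking, but the interpolation alone shows how sparse server detections can cover a dense frame stream.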
  • Publication number: 20180203877
    Abstract: Provided are mechanisms and processes for performing visual search using multi-view digital media representations, such as surround views. In one example, a process includes receiving a visual search query that includes a surround view of an object to be searched, where the surround view includes spatial information, scale information, and different viewpoint images of the object. The surround view is compared to stored surround views by comparing spatial information and scale information of the surround view to spatial information and scale information of the stored surround views. A correspondence measure is then generated indicating the degree of similarity between the surround view and a possible match. At least one search result is then transmitted with a corresponding image in response to the visual search query.
    Type: Application
    Filed: January 18, 2017
    Publication date: July 19, 2018
    Inventors: Stefan Johannes Josef Holzer, Abhishek Kar, Alexander Jay Bruen Trevor, Pantelis Kalogiros, Ioannis Spanos, Radu Bogdan Rusu
  • Publication number: 20180165827
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view for presentation on a device. A visual guide can be provided for capturing the multiple images used in the surround view. The visual guide can be a synthetic object that is rendered in real-time into the images output to a display of an image capture device. The visual guide can help the user keep the image capture device moving along a desired trajectory.
    Type: Application
    Filed: December 12, 2016
    Publication date: June 14, 2018
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Alexander Jay Bruen Trevor, Michelle Jung-Ah Ho, David Klein, Stephen David Miller, Shuichi Tsutsumi, Radu Bogdan Rusu
  • Publication number: 20180165879
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view for presentation on a device. A real object can be tracked in the live image data for the purposes of creating a surround view using a number of tracking points. As a camera is moved around the real object, virtual objects can be rendered into live image data to create synthetic images where a position of the tracking points can be used to position the virtual object in the synthetic image. The synthetic images can be output in real-time. Further, virtual objects in the synthetic images can be incorporated into surround views.
    Type: Application
    Filed: December 9, 2016
    Publication date: June 14, 2018
    Applicant: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Alexander Jay Bruen Trevor, Martin Saelzle, Stephen David Miller, Radu Bogdan Rusu
  • Patent number: 9996945
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view for presentation on a device. A visual guide can be provided for capturing the multiple images used in the surround view. The visual guide can be a synthetic object that is rendered in real-time into the images output to a display of an image capture device. The visual guide can help the user keep the image capture device moving along a desired trajectory.
    Type: Grant
    Filed: December 12, 2016
    Date of Patent: June 12, 2018
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Alexander Jay Bruen Trevor, Michelle Jung-Ah Ho, David Klein, Stephen David Miller, Shuichi Tsutsumi, Radu Bogdan Rusu
  • Publication number: 20180121422
    Abstract: Computer-implemented techniques can include receiving a selected word in a source language, obtaining one or more parts of speech for the selected word, and for each of the one or more parts-of-speech, obtaining candidate translations of the selected word to a different target language, each candidate translation corresponding to a particular semantic meaning of the selected word. The techniques can include for each semantic meaning of the selected word: obtaining an image corresponding to the semantic meaning of the selected word, and compiling translation information including (i) the semantic meaning, (ii) a corresponding part-of-speech, (iii) the image, and (iv) at least one corresponding candidate translation. The techniques can also include outputting the translation information.
    Type: Application
    Filed: December 22, 2017
    Publication date: May 3, 2018
    Applicant: Google LLC
    Inventors: Alexander Jay Cuthbert, Barak Turovsky
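The compilation step in the last abstract can be sketched as follows. The lexicon and image lookup here are made-up illustrative data, not a real dictionary API: for each semantic meaning of the selected word, the sketch bundles the meaning, its part of speech, an image reference, and the candidate translations into one record for output.

```python
# Hypothetical sketch: compile per-meaning translation records for a selected
# word from an assumed lexicon {word: [sense, ...]} and an image lookup table.

def compile_translation_info(word, lexicon, images):
    """One display-ready record per semantic meaning of `word`."""
    records = []
    for sense in lexicon.get(word, []):
        records.append({
            "meaning": sense["meaning"],
            "part_of_speech": sense["pos"],
            "image": images.get(sense["meaning"]),  # None if no image is known
            "translations": sense["translations"],
        })
    return records
```

For an ambiguous word like "bank", this would produce one record per sense (financial institution, edge of a river), each pairing an image with its own candidate translations so the user can pick the intended meaning.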