Patents by Inventor Thomas Nonn

Thomas Nonn is named as an inventor on the following patent filings. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11796643
Abstract: A light detection and ranging system includes synchronously scanning transmit and receive mirrors that scan a pulsed fanned laser beam in two dimensions. Imaging optics image a receive aperture onto an arrayed receiver that includes a plurality of light sensitive devices. Adaptive methods dynamically modify the size and location of the field of view as well as laser pulse properties in response to internal and external sensor data.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: October 24, 2023
    Assignee: Microvision, Inc.
    Inventors: Jonathan A. Morarity, Alga Lloyd Nothern, III, Thomas Nonn, Sumit Sharma
  • Patent number: 11754682
Abstract: A light detection and ranging system includes synchronously scanning transmit and receive mirrors that scan a pulsed fanned laser beam in two dimensions. Imaging optics image a receive aperture onto an arrayed receiver that includes a plurality of light sensitive devices. Scanning mirror offsets may be applied to modify a fan angle of the pulsed fanned laser beam. Adaptive methods dynamically modify the size and location of the field of view, laser pulse properties, and/or fan angle in response to internal and external sensor data.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: September 12, 2023
    Assignee: Microvision, Inc.
    Inventors: Alga Lloyd Nothern, III, Jonathan A. Morarity, Thomas Nonn
  • Patent number: 11579256
    Abstract: A light detection and ranging system includes synchronously scanning transmit and receive mirrors that scan a pulsed fanned laser beam in two dimensions. Imaging optics image a receive aperture onto an arrayed receiver that includes a plurality of light sensitive devices. A phase offset may be injected into a scanning trajectory to mitigate effects of interfering light sources.
    Type: Grant
    Filed: July 11, 2019
    Date of Patent: February 14, 2023
    Assignee: Microvision, Inc.
    Inventors: Jonathan A. Morarity, Alga Lloyd Nothern, III, Thomas Nonn, Sumit Sharma
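The phase-offset idea described in the abstract above can be illustrated with a minimal sketch. The function name, frequency, and amplitude below are illustrative assumptions, not values from the patent; the point is only that injecting a phase offset shifts where a resonant mirror's trajectory lands at any given instant, so two systems running at the same scan frequency stop illuminating the same point at the same time.

```python
import math

def scan_angle(t, freq_hz, amplitude_deg, phase_offset_rad=0.0):
    # Instantaneous angle of a sinusoidally driven scanning mirror,
    # with an optional injected phase offset on the trajectory.
    return amplitude_deg * math.sin(2 * math.pi * freq_hz * t + phase_offset_rad)

# At t = 0.25 ms a 1 kHz mirror with no offset sits at peak deflection;
# the same mirror with a 90-degree phase offset sits at zero deflection,
# decorrelating it from an interfering source on the same trajectory.
peak = scan_angle(0.25e-3, freq_hz=1000, amplitude_deg=20.0)
shifted = scan_angle(0.25e-3, freq_hz=1000, amplitude_deg=20.0,
                     phase_offset_rad=math.pi / 2)
```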
  • Publication number: 20220401838
    Abstract: Embodiments described herein provide a system for analyzing a gameplay of a first video game. During operation, the system can obtain a stream of video frames associated with the gameplay. The system can then analyze the video frames to identify a set of features of the first video game. Here, a respective feature indicates the characteristics of a virtual object in a virtual environment of the first video game supported by a first game engine. Subsequently, the system can derive, based on the set of identified features, a set of parameters indicating one or more physical characteristics of one or more virtual objects in the virtual environment. The system can store the set of derived parameters in a file format readable by a second game engine different from the first game engine. This allows the second game engine to support a second video game that incorporates the physical characteristics.
    Type: Application
    Filed: June 16, 2022
    Publication date: December 22, 2022
    Applicant: The Meta Game, Inc.
    Inventors: Thomas Nonn, Jay Brown, Garrett Krutilla, Duncan Haberly
  • Patent number: 11328446
    Abstract: Depths of one or more objects in a scene may be measured with enhanced accuracy through the use of a light-field camera and a depth sensor. The light-field camera may capture a light-field image of the scene. The depth sensor may capture depth sensor data of the scene. Light-field depth data may be extracted from the light-field image and used, in combination with the sensor depth data, to generate a depth map indicative of distance between the light-field camera and one or more objects in the scene. The depth sensor may be an active depth sensor that transmits electromagnetic energy toward the scene; the electromagnetic energy may be reflected off of the scene and detected by the active depth sensor. The active depth sensor may have a 360° field of view; accordingly, one or more mirrors may be used to direct the electromagnetic energy between the active depth sensor and the scene.
    Type: Grant
    Filed: June 28, 2017
    Date of Patent: May 10, 2022
    Assignee: Google LLC
    Inventors: Jie Tan, Gang Pan, Jon Karafin, Thomas Nonn, Julio C. Hernandez Zaragoza
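The combination of light-field depth data and depth-sensor data described above can be sketched as a per-pixel, confidence-weighted fusion. This is an assumed illustration of the general technique, not the patent's specific method; the function and its parameters are hypothetical.

```python
def fuse_depth(lf_depth, lf_conf, sensor_depth, sensor_conf):
    # Combine per-pixel light-field depth estimates with active-sensor
    # depth estimates, weighting each source by its confidence.
    fused = []
    for d_lf, c_lf, d_s, c_s in zip(lf_depth, lf_conf, sensor_depth, sensor_conf):
        total = c_lf + c_s
        fused.append((d_lf * c_lf + d_s * c_s) / total if total > 0 else 0.0)
    return fused

# Equal confidence averages the two estimates; zero confidence in one
# source defers entirely to the other.
fused = fuse_depth([2.0, 2.0], [1.0, 0.0], [4.0, 4.0], [1.0, 1.0])
```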
  • Publication number: 20210011133
    Abstract: A light detection and ranging system includes synchronously scanning transmit and receive mirrors that scan a pulsed fanned laser beam in two dimensions. Imaging optics image a receive aperture onto an arrayed receiver that includes a plurality of light sensitive devices. A phase offset may be injected into a scanning trajectory to mitigate effects of interfering light sources.
    Type: Application
    Filed: July 11, 2019
    Publication date: January 14, 2021
Inventors: Jonathan A. Morarity, Alga Lloyd Nothern, III, Thomas Nonn, Sumit Sharma
  • Publication number: 20200379090
Abstract: A light detection and ranging system includes synchronously scanning transmit and receive mirrors that scan a pulsed fanned laser beam in two dimensions. Imaging optics image a receive aperture onto an arrayed receiver that includes a plurality of light sensitive devices. Scanning mirror offsets may be applied to modify a fan angle of the pulsed fanned laser beam. Adaptive methods dynamically modify the size and location of the field of view, laser pulse properties, and/or fan angle in response to internal and external sensor data.
    Type: Application
    Filed: May 30, 2019
    Publication date: December 3, 2020
    Inventors: Alga Lloyd Nothern, III, Jonathan A. Morarity, Thomas Nonn
  • Publication number: 20200379089
Abstract: A light detection and ranging system includes synchronously scanning transmit and receive mirrors that scan a pulsed fanned laser beam in two dimensions. Imaging optics image a receive aperture onto an arrayed receiver that includes a plurality of light sensitive devices. Adaptive methods dynamically modify the size and location of the field of view as well as laser pulse properties in response to internal and external sensor data.
    Type: Application
    Filed: May 30, 2019
    Publication date: December 3, 2020
Inventors: Jonathan A. Morarity, Alga Lloyd Nothern, III, Thomas Nonn, Sumit Sharma
  • Patent number: 10545215
    Abstract: A light-field video stream may be processed to modify the camera pathway from which the light-field video stream is projected. A plurality of target pixels may be selected, in a plurality of key frames of the light-field video stream. The target pixels may be used to generate a camera pathway indicative of motion of the camera during generation of the light-field video stream. The camera pathway may be adjusted to generate an adjusted camera pathway. This may be done, for example, to carry out image stabilization. The light-field video stream may be projected to a viewpoint defined by the adjusted camera pathway to generate a projected video stream with the image stabilization.
    Type: Grant
    Filed: September 13, 2017
    Date of Patent: January 28, 2020
Assignee: Google LLC
    Inventors: Jon Karafin, Gang Pan, Thomas Nonn, Jie Tan
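The pathway adjustment described in the abstract above (smoothing a camera's motion to stabilize the projected video) can be sketched with a simple moving-average filter over a sequence of camera positions. This is a minimal illustration under assumed names and a 2-D pathway, not the patent's actual stabilization method.

```python
def smooth_pathway(path, window=3):
    # Smooth a camera pathway, given as a list of (x, y) positions,
    # by replacing each position with the mean of its neighborhood.
    half = window // 2
    smoothed = []
    for i in range(len(path)):
        lo, hi = max(0, i - half), min(len(path), i + half + 1)
        segment = path[lo:hi]
        smoothed.append(tuple(sum(coord) / len(segment)
                              for coord in zip(*segment)))
    return smoothed

# A shaky pathway is flattened toward its local average; the light-field
# video would then be re-projected along the adjusted pathway.
shaky = [(0, 0.0), (1, 0.5), (2, -0.4), (3, 0.6), (4, 0.0)]
steady = smooth_pathway(shaky)
```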
  • Patent number: 10354399
    Abstract: Dense light-field data can be generated from image data that does not include light-field data, or from image data that includes sparse light-field data. In at least one embodiment, the source light-field data may include one or more sub-aperture images that may be used to reconstruct the light-field in denser form. In other embodiments, the source data can take other forms. Examples include data derived from or ancillary to a set of sub-aperture images, synthetic data, or captured image data that does not include full light-field data. Interpolation, back-projection, and/or other techniques are used in connection with source sub-aperture images or their equivalents, to generate dense light-field data.
    Type: Grant
    Filed: May 25, 2017
    Date of Patent: July 16, 2019
Assignee: Google LLC
    Inventors: Zejing Wang, Thomas Nonn
  • Patent number: 10275892
    Abstract: A depth-based effect may be applied to a multi-view video stream to generate a modified multi-view video stream. User input may designate a boundary between a foreground region and a background region, at a different depth from the foreground region, of a reference image of the video stream. Based on the user input, a reference mask may be generated to indicate the foreground region and the background region. The reference mask may be used to generate one or more other masks that indicate the foreground and background regions for one or more different images, from different frames and/or different views from the reference image. The reference mask and other mask(s) may be used to apply the effect to the multi-view video stream to generate the modified multi-view video stream.
    Type: Grant
    Filed: March 17, 2017
    Date of Patent: April 30, 2019
Assignee: Google LLC
    Inventors: Francois Bleibel, Tingfang Du, Thomas Nonn, Jie Tan
  • Publication number: 20190079158
    Abstract: A light-field video stream may be processed to modify the camera pathway from which the light-field video stream is projected. A plurality of target pixels may be selected, in a plurality of key frames of the light-field video stream. The target pixels may be used to generate a camera pathway indicative of motion of the camera during generation of the light-field video stream. The camera pathway may be adjusted to generate an adjusted camera pathway. This may be done, for example, to carry out image stabilization. The light-field video stream may be projected to a viewpoint defined by the adjusted camera pathway to generate a projected video stream with the image stabilization.
    Type: Application
    Filed: September 13, 2017
    Publication date: March 14, 2019
    Inventors: Jon Karafin, Gang Pan, Thomas Nonn, Jie Tan
  • Publication number: 20180342075
    Abstract: Dense light-field data can be generated from image data that does not include light-field data, or from image data that includes sparse light-field data. In at least one embodiment, the source light-field data may include one or more sub-aperture images that may be used to reconstruct the light-field in denser form. In other embodiments, the source data can take other forms. Examples include data derived from or ancillary to a set of sub-aperture images, synthetic data, or captured image data that does not include full light-field data. Interpolation, back-projection, and/or other techniques are used in connection with source sub-aperture images or their equivalents, to generate dense light-field data.
    Type: Application
    Filed: May 25, 2017
    Publication date: November 29, 2018
    Inventors: Zejing Wang, Thomas Nonn
  • Patent number: 9900510
    Abstract: Motion blur may be applied to a light-field image. The light-field image may be captured with a light-field camera having a main lens, an image sensor, and a plurality of microlenses positioned between the main lens and the image sensor. The light-field image may have a plurality of lenslet images, each of which corresponds to one microlens of the microlens array. The light-field image may be used to generate a mosaic of subaperture images, each of which has pixels from the same location on each of the lenslet images. Motion vectors may be computed to indicate motion occurring within at least a primary subaperture image of the mosaic. The motion vectors may be used to carry out shutter reconstruction of the mosaic to generate a mosaic of blurred subaperture images, which may then be used to generate a motion-blurred light-field image.
    Type: Grant
    Filed: December 8, 2016
    Date of Patent: February 20, 2018
    Assignee: Lytro, Inc.
    Inventors: Jon Karafin, Thomas Nonn, Gang Pan, Zejing Wang
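The shutter-reconstruction step described above amounts to blurring image content along its motion vectors. A one-dimensional sketch of that idea, averaging each pixel with shifted copies of itself along an assumed motion direction, looks like the following (the function is hypothetical, not the patented pipeline):

```python
def motion_blur_1d(row, shift_px):
    # Blur a row of pixel values along its motion vector by averaging
    # each pixel with the next shift_px samples in the motion direction,
    # clamping at the image border.
    n = len(row)
    blurred = []
    for i in range(n):
        samples = [row[min(n - 1, i + s)] for s in range(shift_px + 1)]
        blurred.append(sum(samples) / len(samples))
    return blurred

# A single bright pixel smears across neighboring positions, as a moving
# point light would during an open shutter.
trail = motion_blur_1d([0, 0, 10, 0, 0], shift_px=1)
```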
  • Publication number: 20170365068
    Abstract: Depths of one or more objects in a scene may be measured with enhanced accuracy through the use of a light-field camera and a depth sensor. The light-field camera may capture a light-field image of the scene. The depth sensor may capture depth sensor data of the scene. Light-field depth data may be extracted from the light-field image and used, in combination with the sensor depth data, to generate a depth map indicative of distance between the light-field camera and one or more objects in the scene. The depth sensor may be an active depth sensor that transmits electromagnetic energy toward the scene; the electromagnetic energy may be reflected off of the scene and detected by the active depth sensor. The active depth sensor may have a 360° field of view; accordingly, one or more mirrors may be used to direct the electromagnetic energy between the active depth sensor and the scene.
    Type: Application
    Filed: June 28, 2017
    Publication date: December 21, 2017
    Inventors: Jie Tan, Gang Pan, Jon Karafin, Thomas Nonn, Julio C. Hernandez Zaragoza
  • Publication number: 20170358092
    Abstract: A depth-based effect may be applied to a multi-view video stream to generate a modified multi-view video stream. User input may designate a boundary between a foreground region and a background region, at a different depth from the foreground region, of a reference image of the video stream. Based on the user input, a reference mask may be generated to indicate the foreground region and the background region. The reference mask may be used to generate one or more other masks that indicate the foreground and background regions for one or more different images, from different frames and/or different views from the reference image. The reference mask and other mask(s) may be used to apply the effect to the multi-view video stream to generate the modified multi-view video stream.
    Type: Application
    Filed: March 17, 2017
    Publication date: December 14, 2017
    Inventors: Francois Bleibel, Tingfang Du, Thomas Nonn, Jie Tan
  • Publication number: 20170059305
    Abstract: A depth map may be generated in conjunction with generation of a digital image such as a light-field image. A light pattern source may be used to project a light pattern into a scene with one or more objects. A camera may be used to capture first light and second light reflected from the one or more objects. The first light may be a reflection of light originating from one or more other light sources independent of the light pattern source. The second light may be a reflection of the light pattern from the one or more objects. In a processor, at least the first light may be used to generate an image such as a light-field image. Further, in the processor, at least the second light may be used to generate a depth map indicative of distance between the one or more objects and the camera.
    Type: Application
    Filed: August 25, 2015
    Publication date: March 2, 2017
    Inventors: Thomas Nonn, Zejing Wang
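The structured-light depth measurement in the last abstract rests on a standard triangulation relationship: a projected pattern feature shifts in the camera image by an amount inversely proportional to the depth of the surface it lands on. A minimal sketch of that relationship, with illustrative parameter names and values not taken from the patent:

```python
def depth_from_disparity(disparity_px, baseline_m, focal_px):
    # Triangulated depth of a projected-pattern feature: the observed
    # pixel shift (disparity) between the expected and actual feature
    # position shrinks as the surface moves farther from the camera.
    return baseline_m * focal_px / disparity_px

# With a 10 cm projector-camera baseline and a 500 px focal length,
# a 10 px disparity corresponds to a surface 5 m away.
depth_m = depth_from_disparity(disparity_px=10.0, baseline_m=0.1, focal_px=500.0)
```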