Patents by Inventor Thomas Nonn
Thomas Nonn has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Patent number: 11796643
  Abstract: A light detection and ranging system includes synchronously scanning transmit and receive mirrors that scan a pulsed fanned laser beam in two dimensions. Imaging optics image a receive aperture onto an arrayed receiver that includes a plurality of light sensitive devices. Adaptive methods dynamically modify the size and location of the field of view as well as laser pulse properties in response to internal and external sensor data.
  Type: Grant
  Filed: May 30, 2019
  Date of Patent: October 24, 2023
  Assignee: Microvision, Inc.
  Inventors: Jonathan A. Morarity, Alga Lloyd Nothern III, Thomas Nonn, Sumit Sharma
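As context for the adaptive-methods claim, here is a minimal sketch of one way a field of view could be re-centered and resized around a region of interest reported by sensor data. The function name, the clamping policy, and the degree limits are all illustrative assumptions, not details from the patent.

```python
# Illustrative sketch only: shrink and re-center the scan window around a
# region of interest; all names and policies here are assumptions.

def adapt_field_of_view(roi_center, roi_half_width, limits=(-30.0, 30.0)):
    """Return a new (start, end) scan window in degrees, clamped to the
    assumed mechanical limits of the scanning mirrors."""
    lo, hi = limits
    start = max(lo, roi_center - roi_half_width)
    end = min(hi, roi_center + roi_half_width)
    # Keep a minimum window so some scene context is always scanned.
    if end - start < 2.0:
        mid = (start + end) / 2.0
        start, end = mid - 1.0, mid + 1.0
    return (start, end)
```

For example, a region of interest centered at 10° with a 5° half-width yields a 5°–15° window, while a region near the edge of the mirror's range is clamped to the limit.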
- Patent number: 11754682
  Abstract: A light detection and ranging system includes synchronously scanning transmit and receive mirrors that scan a pulsed fanned laser beam in two dimensions. Imaging optics image a receive aperture onto an arrayed receiver that includes a plurality of light sensitive devices. Scanning mirror offsets may be applied to modify a fan angle of the pulsed fanned laser beam. Adaptive methods dynamically modify the size and location of the field of view, laser pulse properties, and/or fan angle in response to internal and external sensor data.
  Type: Grant
  Filed: May 30, 2019
  Date of Patent: September 12, 2023
  Assignee: Microvision, Inc.
  Inventors: Alga Lloyd Nothern III, Jonathan A. Morarity, Thomas Nonn
- Patent number: 11579256
  Abstract: A light detection and ranging system includes synchronously scanning transmit and receive mirrors that scan a pulsed fanned laser beam in two dimensions. Imaging optics image a receive aperture onto an arrayed receiver that includes a plurality of light sensitive devices. A phase offset may be injected into a scanning trajectory to mitigate effects of interfering light sources.
  Type: Grant
  Filed: July 11, 2019
  Date of Patent: February 14, 2023
  Assignee: Microvision, Inc.
  Inventors: Jonathan A. Morarity, Alga Lloyd Nothern III, Thomas Nonn, Sumit Sharma
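To illustrate the phase-offset idea, the sketch below injects a fresh random phase into a sinusoidal scan trajectory each frame, so an interfering source locked to the nominal scan period no longer lands on the same pixels. The function names and the per-frame dithering policy are assumptions for illustration, not claim language.

```python
import math
import random

# Illustrative sketch only; names and dithering policy are assumptions.

def scan_angle(t, frequency_hz, phase_offset):
    """Normalized mirror deflection on a sinusoidal trajectory."""
    return math.sin(2.0 * math.pi * frequency_hz * t + phase_offset)

def frame_trajectory(times, frequency_hz, rng):
    """One frame's trajectory with a freshly drawn phase offset, which
    decorrelates interferers synchronized to the nominal scan period."""
    phase = rng.uniform(0.0, 2.0 * math.pi)
    return [scan_angle(t, frequency_hz, phase) for t in times]
```

With a seeded `random.Random`, the trajectory is reproducible for testing while still varying from frame to frame in operation.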
- Publication number: 20220401838
  Abstract: Embodiments described herein provide a system for analyzing a gameplay of a first video game. During operation, the system can obtain a stream of video frames associated with the gameplay. The system can then analyze the video frames to identify a set of features of the first video game. Here, a respective feature indicates the characteristics of a virtual object in a virtual environment of the first video game supported by a first game engine. Subsequently, the system can derive, based on the set of identified features, a set of parameters indicating one or more physical characteristics of one or more virtual objects in the virtual environment. The system can store the set of derived parameters in a file format readable by a second game engine different from the first game engine. This allows the second game engine to support a second video game that incorporates the physical characteristics.
  Type: Application
  Filed: June 16, 2022
  Publication date: December 22, 2022
  Applicant: The Meta Game, Inc.
  Inventors: Thomas Nonn, Jay Brown, Garrett Krutilla, Duncan Haberly
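The last step of the abstract, storing derived parameters in a format a second engine can read, can be sketched as a round trip through an engine-neutral serialization. The parameter names below are invented for illustration; the application does not specify a file format.

```python
import json

# Hedged sketch: JSON as a stand-in for an engine-neutral parameter file.

def export_parameters(params):
    """Serialize derived physical parameters to portable JSON text."""
    return json.dumps(params, indent=2, sort_keys=True)

def import_parameters(text):
    """Load the parameters back in the second engine's tooling."""
    return json.loads(text)
```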
- Patent number: 11328446
  Abstract: Depths of one or more objects in a scene may be measured with enhanced accuracy through the use of a light-field camera and a depth sensor. The light-field camera may capture a light-field image of the scene. The depth sensor may capture depth sensor data of the scene. Light-field depth data may be extracted from the light-field image and used, in combination with the sensor depth data, to generate a depth map indicative of distance between the light-field camera and one or more objects in the scene. The depth sensor may be an active depth sensor that transmits electromagnetic energy toward the scene; the electromagnetic energy may be reflected off of the scene and detected by the active depth sensor. The active depth sensor may have a 360° field of view; accordingly, one or more mirrors may be used to direct the electromagnetic energy between the active depth sensor and the scene.
  Type: Grant
  Filed: June 28, 2017
  Date of Patent: May 10, 2022
  Assignee: Google LLC
  Inventors: Jie Tan, Gang Pan, Jon Karafin, Thomas Nonn, Julio C. Hernandez Zaragoza
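As a toy illustration of combining the two depth sources, the sketch below fuses them per pixel: trust the active sensor where its confidence is high, and fall back to the light-field depth elsewhere. The confidence-threshold policy is an assumption for illustration; the patent does not prescribe this exact rule.

```python
# Illustrative fusion policy only; not the patent's specific method.

def fuse_depth(lightfield_depth, sensor_depth, sensor_confidence,
               threshold=0.5):
    """Per-pixel fusion: prefer the active depth sensor where it is
    confident, else fall back to light-field-derived depth."""
    return [
        s if c >= threshold else lf
        for lf, s, c in zip(lightfield_depth, sensor_depth, sensor_confidence)
    ]
```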
- Publication number: 20210011133
  Abstract: A light detection and ranging system includes synchronously scanning transmit and receive mirrors that scan a pulsed fanned laser beam in two dimensions. Imaging optics image a receive aperture onto an arrayed receiver that includes a plurality of light sensitive devices. A phase offset may be injected into a scanning trajectory to mitigate effects of interfering light sources.
  Type: Application
  Filed: July 11, 2019
  Publication date: January 14, 2021
  Inventors: Jonathan A. Morarity, Alga Lloyd Nothern III, Thomas Nonn, Sumit Sharma
- Publication number: 20200379090
  Abstract: A light detection and ranging system includes synchronously scanning transmit and receive mirrors that scan a pulsed fanned laser beam in two dimensions. Imaging optics image a receive aperture onto an arrayed receiver that includes a plurality of light sensitive devices. Scanning mirror offsets may be applied to modify a fan angle of the pulsed fanned laser beam. Adaptive methods dynamically modify the size and location of the field of view, laser pulse properties, and/or fan angle in response to internal and external sensor data.
  Type: Application
  Filed: May 30, 2019
  Publication date: December 3, 2020
  Inventors: Alga Lloyd Nothern III, Jonathan A. Morarity, Thomas Nonn
- Publication number: 20200379089
  Abstract: A light detection and ranging system includes synchronously scanning transmit and receive mirrors that scan a pulsed fanned laser beam in two dimensions. Imaging optics image a receive aperture onto an arrayed receiver that includes a plurality of light sensitive devices. Adaptive methods dynamically modify the size and location of the field of view as well as laser pulse properties in response to internal and external sensor data.
  Type: Application
  Filed: May 30, 2019
  Publication date: December 3, 2020
  Inventors: Jonathan A. Morarity, Alga Lloyd Nothern III, Thomas Nonn, Sumit Sharma
- Patent number: 10545215
  Abstract: A light-field video stream may be processed to modify the camera pathway from which the light-field video stream is projected. A plurality of target pixels may be selected, in a plurality of key frames of the light-field video stream. The target pixels may be used to generate a camera pathway indicative of motion of the camera during generation of the light-field video stream. The camera pathway may be adjusted to generate an adjusted camera pathway. This may be done, for example, to carry out image stabilization. The light-field video stream may be projected to a viewpoint defined by the adjusted camera pathway to generate a projected video stream with the image stabilization.
  Type: Grant
  Filed: September 13, 2017
  Date of Patent: January 28, 2020
  Assignee: Google LLC
  Inventors: Jon Karafin, Gang Pan, Thomas Nonn, Jie Tan
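The pathway-adjustment step can be illustrated with the simplest possible stabilizer: a centered moving average over one tracked camera coordinate. The light-field stream would then be re-projected along the smoothed path. This is a generic stabilization sketch under stated assumptions, not the patented method.

```python
# Generic smoothing sketch; not the specific adjustment the patent claims.

def smooth_path(path, window=3):
    """Centered moving average over one camera coordinate track,
    shrinking the window at the ends of the sequence."""
    half = window // 2
    smoothed = []
    for i in range(len(path)):
        segment = path[max(0, i - half):min(len(path), i + half + 1)]
        smoothed.append(sum(segment) / len(segment))
    return smoothed
```

A jittery track like `[0, 3, 0, 3, 0]` flattens toward its local means, which is the qualitative effect stabilization aims for.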
- Patent number: 10354399
  Abstract: Dense light-field data can be generated from image data that does not include light-field data, or from image data that includes sparse light-field data. In at least one embodiment, the source light-field data may include one or more sub-aperture images that may be used to reconstruct the light-field in denser form. In other embodiments, the source data can take other forms. Examples include data derived from or ancillary to a set of sub-aperture images, synthetic data, or captured image data that does not include full light-field data. Interpolation, back-projection, and/or other techniques are used in connection with source sub-aperture images or their equivalents, to generate dense light-field data.
  Type: Grant
  Filed: May 25, 2017
  Date of Patent: July 16, 2019
  Assignee: Google LLC
  Inventors: Zejing Wang, Thomas Nonn
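As a deliberately simple stand-in for the interpolation step, the sketch below blends two neighboring sub-aperture images to synthesize an intermediate view. Real densification pipelines warp by per-pixel disparity; a straight linear blend is only an illustrative assumption.

```python
# Toy interpolation sketch; real systems warp by disparity rather than
# blending pixel values directly.

def interpolate_view(view_a, view_b, t=0.5):
    """Linear blend of two flattened sub-aperture images at position t
    between them (t=0 gives view_a, t=1 gives view_b)."""
    return [(1.0 - t) * a + t * b for a, b in zip(view_a, view_b)]
```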
- Patent number: 10275892
  Abstract: A depth-based effect may be applied to a multi-view video stream to generate a modified multi-view video stream. User input may designate a boundary between a foreground region and a background region, at a different depth from the foreground region, of a reference image of the video stream. Based on the user input, a reference mask may be generated to indicate the foreground region and the background region. The reference mask may be used to generate one or more other masks that indicate the foreground and background regions for one or more different images, from different frames and/or different views from the reference image. The reference mask and other mask(s) may be used to apply the effect to the multi-view video stream to generate the modified multi-view video stream.
  Type: Grant
  Filed: March 17, 2017
  Date of Patent: April 30, 2019
  Assignee: Google LLC
  Inventors: Francois Bleibel, Tingfang Du, Thomas Nonn, Jie Tan
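The reference-mask step can be reduced to its essence: given a per-pixel depth map and a user-chosen boundary depth, mark pixels nearer than the boundary as foreground. Names and the thresholding rule are illustrative assumptions.

```python
# Minimal sketch of a depth-boundary mask; illustrative only.

def reference_mask(depth_map, boundary_depth):
    """True marks foreground pixels (nearer than the boundary depth)."""
    return [[d < boundary_depth for d in row] for row in depth_map]
```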
- Publication number: 20190079158
  Abstract: A light-field video stream may be processed to modify the camera pathway from which the light-field video stream is projected. A plurality of target pixels may be selected, in a plurality of key frames of the light-field video stream. The target pixels may be used to generate a camera pathway indicative of motion of the camera during generation of the light-field video stream. The camera pathway may be adjusted to generate an adjusted camera pathway. This may be done, for example, to carry out image stabilization. The light-field video stream may be projected to a viewpoint defined by the adjusted camera pathway to generate a projected video stream with the image stabilization.
  Type: Application
  Filed: September 13, 2017
  Publication date: March 14, 2019
  Inventors: Jon Karafin, Gang Pan, Thomas Nonn, Jie Tan
- Publication number: 20180342075
  Abstract: Dense light-field data can be generated from image data that does not include light-field data, or from image data that includes sparse light-field data. In at least one embodiment, the source light-field data may include one or more sub-aperture images that may be used to reconstruct the light-field in denser form. In other embodiments, the source data can take other forms. Examples include data derived from or ancillary to a set of sub-aperture images, synthetic data, or captured image data that does not include full light-field data. Interpolation, back-projection, and/or other techniques are used in connection with source sub-aperture images or their equivalents, to generate dense light-field data.
  Type: Application
  Filed: May 25, 2017
  Publication date: November 29, 2018
  Inventors: Zejing Wang, Thomas Nonn
- Patent number: 9900510
  Abstract: Motion blur may be applied to a light-field image. The light-field image may be captured with a light-field camera having a main lens, an image sensor, and a plurality of microlenses positioned between the main lens and the image sensor. The light-field image may have a plurality of lenslet images, each of which corresponds to one microlens of the microlens array. The light-field image may be used to generate a mosaic of subaperture images, each of which has pixels from the same location on each of the lenslet images. Motion vectors may be computed to indicate motion occurring within at least a primary subaperture image of the mosaic. The motion vectors may be used to carry out shutter reconstruction of the mosaic to generate a mosaic of blurred subaperture images, which may then be used to generate a motion-blurred light-field image.
  Type: Grant
  Filed: December 8, 2016
  Date of Patent: February 20, 2018
  Assignee: Lytro, Inc.
  Inventors: Jon Karafin, Thomas Nonn, Gang Pan, Zejing Wang
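The blur-along-a-motion-vector idea can be sketched in one dimension: average several copies of a scanline displaced along the estimated motion, roughly simulating an open shutter. A full per-pixel vector field and 2-D kernels are assumed away; this is not the patented shutter-reconstruction method.

```python
# 1-D blur sketch; illustrative simplification of blur along motion.

def motion_blur_1d(row, shift_px, steps=4):
    """Average `steps` copies of one scanline, displaced from 0 up to
    `shift_px` pixels along the motion direction (edges clamped)."""
    n = len(row)
    acc = [0.0] * n
    for k in range(steps):
        offset = round(shift_px * k / max(1, steps - 1))
        for i in range(n):
            acc[i] += row[min(n - 1, max(0, i - offset))]
    return [v / steps for v in acc]
```

With zero shift the scanline is unchanged; with a positive shift a bright pixel smears into a decaying trail, the expected open-shutter look.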
- Publication number: 20170365068
  Abstract: Depths of one or more objects in a scene may be measured with enhanced accuracy through the use of a light-field camera and a depth sensor. The light-field camera may capture a light-field image of the scene. The depth sensor may capture depth sensor data of the scene. Light-field depth data may be extracted from the light-field image and used, in combination with the sensor depth data, to generate a depth map indicative of distance between the light-field camera and one or more objects in the scene. The depth sensor may be an active depth sensor that transmits electromagnetic energy toward the scene; the electromagnetic energy may be reflected off of the scene and detected by the active depth sensor. The active depth sensor may have a 360° field of view; accordingly, one or more mirrors may be used to direct the electromagnetic energy between the active depth sensor and the scene.
  Type: Application
  Filed: June 28, 2017
  Publication date: December 21, 2017
  Inventors: Jie Tan, Gang Pan, Jon Karafin, Thomas Nonn, Julio C. Hernandez Zaragoza
- Publication number: 20170358092
  Abstract: A depth-based effect may be applied to a multi-view video stream to generate a modified multi-view video stream. User input may designate a boundary between a foreground region and a background region, at a different depth from the foreground region, of a reference image of the video stream. Based on the user input, a reference mask may be generated to indicate the foreground region and the background region. The reference mask may be used to generate one or more other masks that indicate the foreground and background regions for one or more different images, from different frames and/or different views from the reference image. The reference mask and other mask(s) may be used to apply the effect to the multi-view video stream to generate the modified multi-view video stream.
  Type: Application
  Filed: March 17, 2017
  Publication date: December 14, 2017
  Inventors: Francois Bleibel, Tingfang Du, Thomas Nonn, Jie Tan
- Publication number: 20170059305
  Abstract: A depth map may be generated in conjunction with generation of a digital image such as a light-field image. A light pattern source may be used to project a light pattern into a scene with one or more objects. A camera may be used to capture first light and second light reflected from the one or more objects. The first light may be a reflection of light originating from one or more other light sources independent of the light pattern source. The second light may be a reflection of the light pattern from the one or more objects. In a processor, at least the first light may be used to generate an image such as a light-field image. Further, in the processor, at least the second light may be used to generate a depth map indicative of distance between the one or more objects and the camera.
  Type: Application
  Filed: August 25, 2015
  Publication date: March 2, 2017
  Inventors: Thomas Nonn, Zejing Wang
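Recovering depth from a projected light pattern is conventionally done by triangulation: the pattern's observed shift (disparity) between projector and camera gives depth via Z = f·B/d. The abstract does not spell out this formula; it is stated here as standard structured-light context, with illustrative names.

```python
# Standard structured-light triangulation relation, shown as context;
# not a formula quoted from the application.

def depth_from_disparity(disparity_px, baseline_m, focal_px):
    """Depth in meters from pattern disparity (pixels), the
    camera-projector baseline (meters), and focal length (pixels)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For example, a 10-pixel disparity with a 0.1 m baseline and a 500-pixel focal length corresponds to a depth of 5 m.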