Patents Examined by Jason A Pringle-Parker
  • Patent number: 10785469
    Abstract: A generation apparatus is configured to generate a virtual viewpoint image based on a plurality of images captured by a plurality of cameras that image a field from a plurality of different directions. The generation apparatus includes an acquisition unit configured to acquire, based on a three-dimensional model of at least a portion of the field, correspondence information indicating correspondence between a coordinate of an image captured by at least one of the plurality of cameras and a coordinate related to a simple three-dimensional model that is less accurate than the three-dimensional model, and a generation unit configured to generate a virtual viewpoint image, according to a designated position and orientation of a virtual viewpoint, by using an image captured by one or more of the plurality of cameras and the correspondence information acquired by the acquisition unit.
    Type: Grant
    Filed: August 28, 2018
    Date of Patent: September 22, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventor: Yusuke Nakazato
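
The abstract above describes precomputing correspondences between camera-image coordinates and a simplified 3D model, then using them to form a virtual viewpoint image. The sketch below is a minimal illustration of that idea under strong assumptions (the simple model is a flat ground plane, pinhole cameras with hypothetical intrinsics, nearest-neighbour pixel copying); it is not the patented method.

```python
import numpy as np

def project(points_3d, K, R, t):
    """Project Nx3 world points with a pinhole camera (K intrinsics, R|t pose)."""
    cam = (R @ points_3d.T + t.reshape(3, 1))          # world -> camera coordinates
    uv = K @ cam
    return (uv[:2] / uv[2]).T                           # Nx2 pixel coordinates

# Hypothetical intrinsics shared by the real and virtual cameras.
K = np.array([[500.0, 0, 160], [0, 500.0, 120], [0, 0, 1]])

# Real camera looking down at the ground plane; virtual camera shifted sideways.
R_real, t_real = np.eye(3), np.array([0.0, 0.0, 5.0])
R_virt, t_virt = np.eye(3), np.array([1.0, 0.0, 5.0])

# Simple 3D model: a grid of points on the ground plane z = 0.
xs, ys = np.meshgrid(np.linspace(-2, 2, 80), np.linspace(-2, 2, 60))
plane = np.stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)], axis=1)

# Correspondence information: for every model point, where it appears in the
# real camera image and where it should appear in the virtual viewpoint image.
uv_real = project(plane, K, R_real, t_real)
uv_virt = project(plane, K, R_virt, t_virt)

# Resample a (synthetic) captured frame into the virtual view by copying
# pixels along the correspondences (nearest neighbour, no blending).
captured = np.random.rand(240, 320)                     # stand-in camera frame
virtual = np.zeros_like(captured)
src = np.round(uv_real).astype(int)
dst = np.round(uv_virt).astype(int)
ok = ((0 <= src[:, 0]) & (src[:, 0] < 320) & (0 <= src[:, 1]) & (src[:, 1] < 240) &
      (0 <= dst[:, 0]) & (dst[:, 0] < 320) & (0 <= dst[:, 1]) & (dst[:, 1] < 240))
virtual[dst[ok, 1], dst[ok, 0]] = captured[src[ok, 1], src[ok, 0]]
print("virtual view filled pixels:", int((virtual > 0).sum()))
```
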
  • Patent number: 10783677
    Abstract: A data visualization system creates a visual representation of data. The visual representation of data is provided in a form that enables an end user to adjust variable data upon which one or more determined data elements are based using an input device. The adjustment of the variable data is detected, and the visual representation of the data is refreshed based on the detected adjustment of the variable data.
    Type: Grant
    Filed: April 15, 2016
    Date of Patent: September 22, 2020
    Assignee: New BIS Safe Luxco S.à r.l
    Inventor: Andrew John Cardno
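
As a toy illustration of the adjust-and-refresh loop described in the abstract above, the snippet below recomputes derived data elements and redraws a crude text chart whenever the variable data changes. The class, field names, and text rendering are hypothetical stand-ins for a real visualization system.

```python
class AdjustableVisualization:
    def __init__(self, unit_price, volumes):
        self.unit_price = unit_price        # variable data the user may adjust
        self.volumes = volumes              # fixed underlying data
        self.refresh()

    def adjust(self, new_unit_price):
        """Called when an input device changes the variable data."""
        self.unit_price = new_unit_price
        self.refresh()                      # detected adjustment -> redraw

    def refresh(self):
        revenue = [v * self.unit_price for v in self.volumes]   # derived elements
        bars = "".join("#" * int(r // 10) + "\n" for r in revenue)
        print(f"unit price {self.unit_price}:\n{bars}")

viz = AdjustableVisualization(unit_price=2.0, volumes=[30, 55, 42])
viz.adjust(3.5)    # user drags a slider; the visualization is refreshed
```
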
  • Patent number: 10777148
    Abstract: An image signal processing device of the present disclosure includes a luminance correction section that performs, on a basis of information on a maximum output luminance value in a display section, luminance correction on an image signal to be supplied to the display section, the maximum output luminance value being variable.
    Type: Grant
    Filed: July 22, 2016
    Date of Patent: September 15, 2020
    Assignee: Sony Corporation
    Inventors: Kuninori Miyazawa, Hidetaka Honji
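
A hedged sketch of the luminance-correction idea in patent 10777148 above: an image signal mastered for a reference peak luminance is rescaled for a display whose maximum output luminance is variable. The simple linear gain with clipping is an assumption for illustration, not the patented correction.

```python
import numpy as np

def correct_luminance(signal_nits, display_max_nits, reference_max_nits=1000.0):
    """Map scene luminance (nits) to the display's currently available range."""
    gain = display_max_nits / reference_max_nits
    return np.clip(signal_nits * gain, 0.0, display_max_nits)

frame = np.array([0.0, 100.0, 400.0, 1000.0])              # sample pixel luminances
print(correct_luminance(frame, display_max_nits=600.0))     # -> [0., 60., 240., 600.]
```
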
  • Patent number: 10762350
    Abstract: Video noise reduction for a video augmented reality system is provided. A head mounted display includes a display unit and a camera for generating frames of display data. A frame store stores previous frames of displayed information that were sent to the display unit, and a motion processor is in communication with the camera, the display unit, and the frame store. The motion processor is operable to: identify an area of interest in a current frame of display data; match the area of interest to similar areas in previous frames stored in the frame store; rotate and translate the matched areas of interest from the one or more previous frames stored in the frame store to match the area of interest in the current frame; and average the prior matched areas of interest with the current area of interest to generate a displayed area of interest.
    Type: Grant
    Filed: June 4, 2019
    Date of Patent: September 1, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Brian K. Guenter, Ran Gal
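
A rough sketch of the temporal noise reduction described in patent 10762350 above: previously displayed areas of interest are rigidly re-aligned to the current frame and averaged with it. Here the rotation and translation parameters are assumed to be already known (the patent matches them against the frame store), and synthetic noisy patches stand in for camera frames.

```python
import numpy as np
from scipy.ndimage import rotate, shift

def denoise_roi(current_roi, previous_rois, transforms):
    """Average the current area of interest with re-aligned previous ones.

    transforms: list of (angle_deg, (dy, dx)) mapping each previous ROI onto
    the current frame's coordinates.
    """
    aligned = [current_roi]
    for roi, (angle, offset) in zip(previous_rois, transforms):
        warped = shift(rotate(roi, angle, reshape=False, order=1), offset, order=1)
        aligned.append(warped)
    return np.mean(aligned, axis=0)          # temporal average suppresses noise

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 1, 64), (64, 1))             # synthetic scene patch
noisy_frames = [clean + 0.1 * rng.standard_normal(clean.shape) for _ in range(4)]
out = denoise_roi(noisy_frames[0], noisy_frames[1:], [(0.0, (0, 0))] * 3)
print("noise std before:", np.std(noisy_frames[0] - clean).round(3),
      "after:", np.std(out - clean).round(3))
```
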
  • Patent number: 10762713
    Abstract: A method and system for generating an augmented reality experience without a physical marker. At least two frames from a video stream are collected, and one of the frames is designated as a first frame. The graphical processor of a device prepares the two collected frames for analysis, and features from the two collected frames are selected for comparison. The central processor of the device isolates points on the same plane as a tracked point in the first frame and calculates a position of a virtual object in a second frame in 2D. The next frame from the video stream is collected and the process is repeated until the user navigates away from the URL or webpage, or until the camera is turned off. The central processor renders the virtual object on a display of the device.
    Type: Grant
    Filed: September 18, 2018
    Date of Patent: September 1, 2020
    Assignee: SHOPPAR INC.
    Inventor: Sina Viseh
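
An illustrative sketch of the markerless tracking loop in patent 10762713 above: points assumed to lie on the same plane as the tracked point are used to estimate a homography between two frames, which then moves the virtual object's 2D position. Synthetic correspondences stand in for the feature detection and matching the device's processors would perform, and OpenCV is an assumed tool choice.

```python
import numpy as np
import cv2

# Feature points on the tracked plane as seen in frame 1 (pixel coordinates).
pts_frame1 = np.float32([[100, 100], [300, 100], [300, 250], [100, 250], [200, 180]])

# The same points in frame 2 after the camera moved (here: a known synthetic warp).
H_true = np.array([[1.02, 0.01, 8], [-0.01, 1.03, -5], [0, 0, 1]], dtype=np.float64)
pts_frame2 = cv2.perspectiveTransform(pts_frame1.reshape(-1, 1, 2), H_true)

# Estimate the plane-induced homography from the matched coplanar points.
H_est, _ = cv2.findHomography(pts_frame1, pts_frame2, cv2.RANSAC)

# Propagate the virtual object's anchor from frame 1 into frame 2.
anchor_frame1 = np.float32([[[200, 175]]])
anchor_frame2 = cv2.perspectiveTransform(anchor_frame1, H_est)
print("virtual object position in frame 2:", anchor_frame2.ravel())
```
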
  • Patent number: 10754499
    Abstract: A program for causing a computer to execute: receiving an instruction to change a position and a direction of a viewpoint disposed in a virtual space from a user; controlling a viewpoint to change the position and the direction of the viewpoint in response to the instruction; rendering a spatial image that depicts an aspect of an interior of the virtual space on the basis of the position and the direction of the viewpoint; and switching over between a first mode of changing the direction of the viewpoint about the position of the viewpoint and a second mode of changing the position and the direction of the viewpoint about an object of interest to which the user pays attention in the virtual space in a case of receiving an instruction to change the direction of the viewpoint from the user at a time of controlling the viewpoint.
    Type: Grant
    Filed: April 5, 2019
    Date of Patent: August 25, 2020
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventors: Masaaki Yamagiwa, Teruyuki Toriyama, Hidetaka Miyazaki
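
A small sketch of the two viewpoint-control modes described in patents 10754499 and 10719195 above, reduced to 2D with yaw only for brevity: in the first mode a turn instruction rotates the view direction about the viewpoint itself; in the second it orbits the viewpoint around an object of interest so the object stays in view. All names and the 2D simplification are illustrative assumptions.

```python
import math

def turn_in_place(pos, yaw, delta):
    """Mode 1: change only the direction, pivoting on the viewpoint position."""
    return pos, yaw + delta

def orbit_object(pos, yaw, delta, target):
    """Mode 2: rotate the viewpoint position around the object of interest."""
    dx, dy = pos[0] - target[0], pos[1] - target[1]
    c, s = math.cos(delta), math.sin(delta)
    new_pos = (target[0] + c * dx - s * dy, target[1] + s * dx + c * dy)
    new_yaw = math.atan2(target[1] - new_pos[1], target[0] - new_pos[0])  # face target
    return new_pos, new_yaw

pos, yaw, target = (0.0, -3.0), math.pi / 2, (0.0, 0.0)
print(turn_in_place(pos, yaw, math.radians(30)))   # same position, new direction
print(orbit_object(pos, yaw, math.radians(30), target))  # new position, faces target
```
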
  • Patent number: 10748235
    Abstract: Techniques are provided for optimizing display processing of layers below a dim layer by a display system. Because the dim layer may partially obstruct, conceal, or otherwise impact a user's view of layers below the dim layer, resource-saving techniques may be used in processing the layers below the dim layer. While these techniques may impact visual quality, a user is unlikely to notice visual artifacts or other reductions in quality in the modified layers below the dim layer. For example, when a dim layer is to be displayed, a GPU can render layers below the dim layer at a lower resolution. Furthermore, the GPU can increase a compression ratio for layers below the dim layer. The low-resolution layers can be scaled up to an original resolution and the compressed layers can be uncompressed in the display pipeline for display underneath the dim layer.
    Type: Grant
    Filed: August 8, 2018
    Date of Patent: August 18, 2020
    Assignee: QUALCOMM Incorporated
    Inventors: Raviteja Tamatam, Jayant Shekhar, Kalyan Thota, Venkata Nagarjuna Sravan Kumar Deepala
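
A hedged illustration of the resource-saving idea in patent 10748235 above: when a dim layer will be composited on top, the layer beneath can be rendered at reduced resolution and scaled back up before display, since the dimming hides most of the quality loss. Plain numpy stands in for the GPU and display pipeline, and all names here are hypothetical.

```python
import numpy as np

def render_low_res(render_fn, full_shape, factor):
    """Render at 1/factor resolution, then upscale with nearest neighbour."""
    low = render_fn((full_shape[0] // factor, full_shape[1] // factor))
    return np.repeat(np.repeat(low, factor, axis=0), factor, axis=1)

def composite_with_dim(base, dim_alpha):
    """Apply a uniform dim layer (black with alpha) on top of the base layer."""
    return base * (1.0 - dim_alpha)

scene = lambda shape: np.random.default_rng(1).random(shape)   # stand-in renderer
base = render_low_res(scene, full_shape=(1080, 1920), factor=4)
frame = composite_with_dim(base, dim_alpha=0.6)
print(frame.shape, frame.max().round(3))   # full-resolution frame, dimmed content
```
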
  • Patent number: 10740985
    Abstract: Methods and devices for generating reference data for adjusting a digital representation of a head region, and methods and devices for adjusting the digital representation of a head region are disclosed. In some arrangements, training data are received. A first machine learning algorithm generates first reference data using the training data. A second machine learning algorithm generates second reference data using the same training data and the first reference data generated by the first machine learning algorithm.
    Type: Grant
    Filed: August 7, 2018
    Date of Patent: August 11, 2020
    Assignee: RealD Spark, LLC
    Inventors: Eric Sommerlade, Alexandros Neophytou
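
A deliberately tiny stand-in for the two-stage training scheme described in patent 10740985 above: a first learner produces "first reference data" from the training set, and a second learner is trained on the same training data augmented with that output. Linear least squares replaces whatever models the patent actually uses; the data and shapes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))                      # training inputs
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.standard_normal(200)

# First machine learning algorithm -> first reference data.
w1, *_ = np.linalg.lstsq(X, y, rcond=None)
first_reference = X @ w1

# Second algorithm uses the same training data plus the first reference data.
X2 = np.column_stack([X, first_reference])
w2, *_ = np.linalg.lstsq(X2, y, rcond=None)
second_reference = X2 @ w2
print("second-stage residual:", np.abs(second_reference - y).mean().round(4))
```
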
  • Patent number: 10725609
    Abstract: Various embodiments of the present disclosure relate generally to systems and methods for generating multi-view interactive digital media representations in a virtual reality environment. According to particular embodiments, a plurality of images is fused into a first content model and a first context model, both of which include multi-view interactive digital media representations. Next, a virtual reality environment is generated using the first content model and the first context model. The virtual reality environment includes a first layer and a second layer. The user can navigate through and within the virtual reality environment to switch between multiple viewpoints of the content model via corresponding physical movements. The first layer includes the first content model, and the second layer includes a second content model; selection of the first layer provides access to the second layer with the second content model.
    Type: Grant
    Filed: October 3, 2017
    Date of Patent: July 28, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu, Alexander Jay Bruen Trevor, Krunal Ketan Chande
  • Patent number: 10726616
    Abstract: An image processing system configured to process perceived images of an environment includes a central processing unit (CPU) including a memory storage device having stored thereon a computer model of the environment, at least one sensor configured and disposed to capture a perceived environment including at least one of visual images of the environment and range data to objects in the environment, and a rendering unit (RU) configured and disposed to render the computer model of the environment forming a rendered model of the environment. The image processing system compares the rendered model of the environment to the perceived environment to update the computer model of the environment.
    Type: Grant
    Filed: November 12, 2019
    Date of Patent: July 28, 2020
    Assignee: ROSEMOUNT AEROSPACE INC.
    Inventors: Julian C. Ryde, Xuchu Ding
  • Patent number: 10719195
    Abstract: A program for causing a computer to execute: receiving an instruction to change a position and a direction of a viewpoint disposed in a virtual space from a user; controlling a viewpoint to change the position and the direction of the viewpoint in response to the instruction; rendering a spatial image that depicts an aspect of an interior of the virtual space on the basis of the position and the direction of the viewpoint; and switching over between a first mode of changing the direction of the viewpoint about the position of the viewpoint and a second mode of changing the position and the direction of the viewpoint about an object of interest to which the user pays attention in the virtual space in a case of receiving an instruction to change the direction of the viewpoint from the user at a time of controlling the viewpoint.
    Type: Grant
    Filed: April 5, 2019
    Date of Patent: July 21, 2020
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventors: Masaaki Yamagiwa, Teruyuki Toriyama, Hidetaka Miyazaki
  • Patent number: 10719995
    Abstract: Embodiments provide for distorted views in augmented reality by mapping undistorted distances from a camera to real objects in an environment; producing a distorted video feed of the environment that shows the real objects at apparent distances to the camera; anchoring a virtual object to a position in the environment based on the undistorted distances; determining a scaling factor between the camera and the position based on the undistorted distances and the apparent distances; scaling the virtual object based on the scaling factor; overlaying the virtual object as scaled into the distorted video feed; and outputting the distorted video feed to a display device.
    Type: Grant
    Filed: October 23, 2018
    Date of Patent: July 21, 2020
    Assignee: Disney Enterprises, Inc.
    Inventor: Michael P. Goslin
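
A minimal sketch of the scaling logic in patent 10719995 above: the virtual object is anchored using the real (undistorted) distance, but drawn at a size consistent with the apparent distance shown by the distorted video feed. The inverse-distance size model is an assumed simplification, not the patent's exact formulation.

```python
def scaled_size(base_size, undistorted_dist, apparent_dist):
    """Scale the overlay so it matches the distorted feed's apparent depth."""
    scaling_factor = undistorted_dist / apparent_dist
    return base_size * scaling_factor

# A dolly-zoom-like distortion makes an object 4 m away appear to be 8 m away,
# so the anchored virtual object is drawn at half size before overlaying it.
print(scaled_size(base_size=1.0, undistorted_dist=4.0, apparent_dist=8.0))  # 0.5
```
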
  • Patent number: 10715793
    Abstract: The inventive method involves receiving as input a representation of an ordered set of two-dimensional images. The ordered set of two-dimensional images is analyzed to determine at least one first view of an object in at least two dimensions and at least one motion vector. The next steps are analyzing the combination of the first view of the object in at least two dimensions, the motion vector, and the ordered set of two-dimensional images to determine at least a second view of the object, and generating a three-dimensional representation of the ordered set of two-dimensional images on the basis of at least the first view of the object and the second view of the object. Finally, the method involves providing indicia of the three-dimensional representation as an output.
    Type: Grant
    Filed: July 2, 2018
    Date of Patent: July 14, 2020
    Inventor: Steven M. Hoffberg
  • Patent number: 10699480
    Abstract: A system constructs a positional contour of a variably contoured surface using data from a 3D capture device. A processor captures data from a tangential orientation capture device and produces a set of relative orientations. The set of relative orientations is transformed with a set of orientation manifold transformer parameters to produce a set of low dimensional orientations. The set of low dimensional orientations and a trained mapping function definition are used to produce a low dimensional point cloud. The low dimensional point cloud is transformed with a set of point cloud manifold transformer parameters, producing a reconstructed synchronized rotationally invariant point cloud.
    Type: Grant
    Filed: July 15, 2016
    Date of Patent: June 30, 2020
    Assignee: Massachusetts Institute of Technology
    Inventors: Carlos Sanchez Mendoza, Luca Giancardo, Tobias Hahn, Aurelien Bourquard
  • Patent number: 10699468
    Abstract: The present invention teaches a real-time hybrid ray tracing method for non-planar specular reflections. The high complexity of a non-planar surface is reduced to the low complexity of multiple small planar surfaces. Advantage is taken of the planar nature of the triangles that are the building blocks of a non-planar surface. All secondary rays bouncing from a given surface triangle toward object triangles keep a close direction to each other. A collective control of secondary rays is enabled by this closeness and by decoupling secondary rays from primary rays. The result is high coherence of secondary rays.
    Type: Grant
    Filed: October 24, 2019
    Date of Patent: June 30, 2020
    Assignee: ADSHIR LTD.
    Inventors: Reuven Bakalash, Ron Weitzman, Elad Haviv
  • Patent number: 10692283
    Abstract: Provided is a geometric model establishment method based on medical image data, including: a step of reading medical image data; a step of defining a tissue type by a conversion relationship between the medical image data and the tissue type; a step of deciding the number of tissue clusters; a step of defining a tissue density by a conversion relationship between the medical image data and the density; a step of establishing a 3D encoding matrix with information about the tissue and the density; and a step of generating a geometric model. Because the number of tissue clusters can be determined according to actual requirements through the conversion relationship between the medical image data and the tissue type, the tissue type, the elemental composition, and the density are provided more accurately, and the established geometric model better matches the real situation reflected by the medical image data.
    Type: Grant
    Filed: May 1, 2018
    Date of Patent: June 23, 2020
    Assignee: NEUBORON MEDTECH LTD.
    Inventors: Yuanhao Liu, Peiyi Lee
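
An illustrative sketch of the conversion steps in patent 10692283 above, using a mapping from CT Hounsfield units to tissue cluster and density and packing both into a small 3D encoding matrix. The thresholds and the linear density relation are textbook-style assumptions, not values from the patent.

```python
import numpy as np

TISSUE_CLUSTERS = {0: "air", 1: "lung", 2: "soft tissue", 3: "bone"}

def hu_to_tissue(hu):
    """Conversion relationship between image data (HU) and tissue cluster id."""
    return np.digitize(hu, bins=[-900, -200, 300])       # 4 clusters

def hu_to_density(hu):
    """Approximate conversion between HU and mass density (g/cm^3)."""
    return np.clip(1.0 + hu / 1000.0, 0.0, None)

ct = np.array([[[-1000, -750], [40, 800]]])              # tiny 3D CT volume (HU)
tissue = hu_to_tissue(ct)
encoding = np.stack([tissue, hu_to_density(ct)], axis=-1)
print([TISSUE_CLUSTERS[i] for i in np.unique(tissue)])   # clusters present
print(encoding)   # 3D matrix encoding tissue cluster id and density per voxel
```
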
  • Patent number: 10679317
    Abstract: Examples described herein generally relate to intercepting, from a graphics processing unit (GPU) or a graphics driver, a buffer that specifies one or more shader records of a shader table to use in generating an image using raytracing; determining, based at least in part on an identifier of the one or more shader records, a layout of the one or more shader records; interpreting, based at least in part on the layout, additional data in the buffer to determine one or more parameters corresponding to the one or more shader records; and displaying, via an application, an indication of the one or more parameters on an interface.
    Type: Grant
    Filed: December 7, 2018
    Date of Patent: June 9, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Austin Neil Kinross, Amar Patel
  • Patent number: 10679539
    Abstract: Two-dimensional compositing that preserves the curvatures of non-flat surfaces is disclosed. In some embodiments, a mapping is associated with a two-dimensional rendering that maps a potentially variable portion of the two-dimensional rendering to a canvas. The mapping is generated from a three-dimensional model of the potentially variable portion of the two-dimensional rendering. The potentially variable portion of the two-dimensional rendering is dynamically modified according to the mapping to reflect content comprising the canvas or edits received with respect to the canvas.
    Type: Grant
    Filed: August 10, 2017
    Date of Patent: June 9, 2020
    Assignee: Outward, Inc.
    Inventors: Clarence Chui, Christopher Murphy
  • Patent number: 10678256
    Abstract: Systems and methods for generating an occlusion-aware bird's eye view map of a road scene include identifying foreground objects and background objects in an input image to extract foreground features and background features corresponding to the foreground objects and the background objects, respectively. The foreground objects are masked from the input image with a mask. Occluded objects and depths of the occluded objects are inferred by predicting semantic features and depths in masked areas of the masked image according to contextual information related to the background features visible in the masked image. The foreground objects and the background objects are mapped to a three-dimensional space according to locations of each of the foreground objects, the background objects and occluded objects using the inferred depths. A bird's eye view is generated from the three-dimensional space and displayed with a display device.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: June 9, 2020
    Assignee: NEC Corporation
    Inventors: Samuel Schulter, Paul Vernaza, Manmohan Chandraker, Menghua Zhai
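
A rough sketch of the final mapping step in patents 10678256 and 10678257 above and below: per-pixel semantic labels and (possibly inferred) depths are lifted into 3D with a pinhole camera model and accumulated into a top-down bird's eye view grid. The camera intrinsics, grid parameters, and "last writer wins" accumulation are assumptions for illustration only.

```python
import numpy as np

def to_bev(depth, labels, fx=200.0, cx=64.0, cell=0.5, grid_size=64):
    """Project per-pixel depth and class labels into a BEV semantic grid."""
    h, w = depth.shape
    us = np.arange(w)
    bev = np.zeros((grid_size, grid_size), dtype=np.int32)      # 0 = unknown
    for v in range(h):
        z = depth[v]                                  # forward distance per pixel
        x = (us - cx) * z / fx                        # lateral offset per pixel
        gi = np.clip((z / cell).astype(int), 0, grid_size - 1)
        gj = np.clip((x / cell + grid_size / 2).astype(int), 0, grid_size - 1)
        bev[gi, gj] = labels[v]                       # last writer wins per cell
    return bev

depth = np.full((96, 128), 10.0)                      # flat background at 10 m
depth[40:, 50:80] = 6.0                               # a closer foreground object
labels = np.ones((96, 128), dtype=np.int32)           # class 1 = road (assumed)
labels[40:, 50:80] = 2                                # class 2 = car (assumed)
print(np.unique(to_bev(depth, labels)))               # cells: unknown, road, car
```
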
  • Patent number: 10678257
    Abstract: Systems and methods for generating an occlusion-aware bird's eye view map of a road scene include identifying foreground objects and background objects in an input image to extract foreground features and background features corresponding to the foreground objects and the background objects, respectively. The foreground objects are masked from the input image with a mask. Occluded objects and depths of the occluded objects are inferred by predicting semantic features and depths in masked areas of the masked image according to contextual information related to the background features visible in the masked image. The foreground objects and the background objects are mapped to a three-dimensional space according to locations of each of the foreground objects, the background objects and occluded objects using the inferred depths. A bird's eye view is generated from the three-dimensional space and displayed with a display device.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: June 9, 2020
    Assignee: NEC Corporation
    Inventors: Samuel Schulter, Paul Vernaza, Manmohan Chandraker, Menghua Zhai