Patents Examined by Diane M Wills
  • Patent number: 10748332
    Abstract: Systems and methods that facilitate efficient and effective shadow image generation are presented. In one embodiment, a hard shadow generation system comprises a compute shader, a pixel shader, and a graphics shader. The compute shader is configured to retrieve pixel depth information and generate projection matrix information, wherein the generating includes performing dynamic re-projection from eye space to light space utilizing the pixel depth information. The pixel shader is configured to create light space visibility information. The graphics shader is configured to perform frustum trace operations to produce hard shadow information, wherein the frustum trace operations utilize the light space visibility information. The light space visibility information can be considered irregular z information stored in an irregular z-buffer.
    Type: Grant
    Filed: March 15, 2018
    Date of Patent: August 18, 2020
    Assignee: NVIDIA Corporation
    Inventor: Jon Story
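The re-projection step this abstract describes can be illustrated with a minimal sketch: a pixel's normalized device coordinates and sampled depth are unprojected back to world space, then projected into the light's clip space. The matrix names, the row-major layout, and the identity-matrix example below are assumptions for illustration, not NVIDIA's implementation.

```python
def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major nested lists) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def reproject_to_light_space(x_ndc, y_ndc, depth, inv_eye_proj, light_view_proj):
    """Unproject a pixel (NDC coordinates plus sampled depth) to world space,
    then project the world-space point into the light's clip space."""
    clip = [x_ndc, y_ndc, depth, 1.0]
    world = mat_vec(inv_eye_proj, clip)
    world = [c / world[3] for c in world]            # perspective divide
    light_clip = mat_vec(light_view_proj, world)
    return [c / light_clip[3] for c in light_clip]   # light-space NDC

# With identity matrices the point passes through unchanged.
I = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
print(reproject_to_light_space(0.25, -0.5, 0.75, I, I))
```

In a real pipeline `inv_eye_proj` would be the inverse of the camera's view-projection matrix and `light_view_proj` the light's view-projection matrix.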
  • Patent number: 10747671
    Abstract: An intelligent tile-based prefetching solution executed by a compression address aperture services linearly addressed data requests from a processor to memory stored in a memory component having a tile-based address structure. The aperture monitors tile reads and seeks to match the tile read pattern to a predefined pattern. If a match is determined, the aperture executes a prefetching algorithm uniquely and optimally associated with the predefined tile read pattern. In this way, tile overfetch is mitigated while the latency on first line data reads is reduced.
    Type: Grant
    Filed: February 6, 2019
    Date of Patent: August 18, 2020
    Assignee: QUALCOMM Incorporated
    Inventors: Wesley James Holland, Bohuslav Rychlik, Andrew Edmund Turner, George Patsilaras, Jeffrey Shabel, Simon Peter William Booth
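The monitor-match-prefetch loop in this abstract can be sketched as follows. The pattern names, the 16-tiles-per-row layout, and the two-tile prefetch depth are invented for illustration; the patent only says the aperture matches the read history against predefined patterns and runs the prefetch algorithm associated with the match.

```python
PATTERNS = {
    # pattern name -> function(read history) -> predicted next tile IDs
    "horizontal": lambda seq: [seq[-1] + 1, seq[-1] + 2],
    "vertical":   lambda seq: [seq[-1] + 16, seq[-1] + 32],  # 16 tiles/row assumed
}

def classify(history):
    """Match the stride of the recent tile reads to a predefined pattern."""
    strides = [b - a for a, b in zip(history, history[1:])]
    if all(s == 1 for s in strides):
        return "horizontal"
    if all(s == 16 for s in strides):
        return "vertical"
    return None

def prefetch_candidates(history):
    """Return the tiles the matched pattern's prefetcher would fetch next."""
    name = classify(history)
    return PATTERNS[name](history) if name else []

print(prefetch_candidates([3, 4, 5]))    # horizontal scan
print(prefetch_candidates([2, 18, 34]))  # column walk
```

An unmatched history yields no prefetch, which is how such a scheme avoids the tile overfetch the abstract mentions.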
  • Patent number: 10740987
    Abstract: A method, apparatus, and system for visualizing nonconformance data for a physical object. An augmented reality application in a portable computing device plots, in a defined coordinate cube, points corresponding to nonconformance locations on the physical object. The augmented reality application determines a sub-set of the plotted points that corresponds to a region of the physical object visible in an image acquired by the portable computing device at its current position, where the sub-set excludes nonconformance locations occluded from view by a structure of the physical object in the image. The augmented reality application displays the nonconformance data for the sub-set of the points visible in the image, in association with the corresponding nonconformance locations, on a display system in the portable computing device.
    Type: Grant
    Filed: October 12, 2018
    Date of Patent: August 11, 2020
    Assignee: The Boeing Company
    Inventors: Jeremiah Kent Scott, Robert Stephen Kanematsu Baker, Bryan James Beretta, Michael Louis Bernardoni
  • Patent number: 10739965
    Abstract: The technology disclosed relates to providing simplified manipulation of virtual objects by detected hand motions. In particular, it relates to detecting hand motion and positions of the calculation points relative to a virtual object to be manipulated, dynamically selecting at least one manipulation point proximate to the virtual object based on the detected hand motion and positions of one or more of the calculation points, and manipulating the virtual object by interaction between the detected hand motion and positions of one or more of the calculation points and the dynamically selected manipulation point.
    Type: Grant
    Filed: December 20, 2018
    Date of Patent: August 11, 2020
    Assignee: Ultrahaptics IP Two Limited
    Inventors: David S. Holz, Raffi Bedikian, Adrian Gasinski, Hua Yang, Gabriel A. Hare, Maxwell Sills
  • Patent number: 10719982
    Abstract: A surface extraction method that includes the steps outlined below is provided. (A) Raw input depth data of 3D child nodes are received, including positions and distances of first 3D child nodes and positions of second 3D child nodes. (B) The neighboring 3D child nodes are grouped into 3D parent nodes. (C) For each of the 3D parent nodes, propagated distances of the second 3D child nodes are generated. (D) The 3D parent nodes are treated as the 3D child nodes to perform the steps (B)-(D) to generate a tree including a plurality of levels of nodes such that a surface extraction processing is performed on the tree to extract at least one surface of the scene.
    Type: Grant
    Filed: December 25, 2018
    Date of Patent: July 21, 2020
    Assignee: HTC Corporation
    Inventors: Yi-Chen Lin, Hung-Yi Yang
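The bottom-up grouping loop in steps (B)-(D) can be sketched as repeatedly collapsing groups of child nodes into parents until a single root remains. Grouping 8 children per parent would mirror an octree; the propagation rule used here (a parent takes the minimum of its children's known distances, with `None` marking an unknown distance to be propagated) is an assumption for illustration.

```python
def build_levels(leaf_distances, group=8):
    """Build a tree bottom-up from a flat list of leaf distances.

    leaf_distances: list of distances; None marks an unknown distance.
    Returns the list of levels, leaves first, root level last.
    """
    levels = [leaf_distances]
    nodes = leaf_distances
    while len(nodes) > 1:
        parents = []
        for i in range(0, len(nodes), group):
            children = [d for d in nodes[i:i + group] if d is not None]
            parents.append(min(children) if children else None)
        levels.append(parents)
        nodes = parents
    return levels

# Binary grouping keeps the toy example readable.
tree = build_levels([1.0, None, 0.5, None, 2.0, None, None, 3.0], group=2)
print(tree[-1])  # root level
```

Surface extraction would then walk this multi-level tree rather than the raw leaves.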
  • Patent number: 10719906
    Abstract: A graph processing system may include at least one auxiliary memory configured to store graph data including phase data and attribute data, a main memory configured to store a portion of the graph data, a plurality of graphics processing units (GPUs) configured to process the graph data received from the main memory and perform synchronization and including cores and device memories, and a central processing unit (CPU) configured to manage query processing associated with the graph data performed by the GPUs and store, in the auxiliary memory, updatable attribute data of a result of the query processing.
    Type: Grant
    Filed: December 28, 2016
    Date of Patent: July 21, 2020
    Inventors: Min Soo Kim, Kyu Hyeon An, Him Chan Park, Jin Wook Kim, Se Yeon Oh
  • Patent number: 10713844
    Abstract: A method and image processing apparatus for creating simplified representations of an existing virtual 3D model for use in occlusion culling are provided. A visual hull construction is performed on the existing virtual 3D model using an approximate voxel volume consisting of a plurality of voxels. A set of projections from a plurality of viewing angles provides a visual hull of the existing 3D model. The volumetric size of the visual hull is increased to envelop the existing virtual 3D model, providing the visual hull as an occludee model, and decreased to be enveloped by the existing virtual 3D model, providing the visual hull as an occluder model. The occludee model and the occluder model are used during runtime in a 3D virtual environment for occlusion culling.
    Type: Grant
    Filed: December 18, 2015
    Date of Patent: July 14, 2020
    Inventors: Ulrik Lindahl, Gustaf Johansson
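The grow/shrink operations on the voxel visual hull correspond to morphological dilation and erosion. As a minimal sketch (assuming a 2D slice with 4-connectivity; the 3D case adds the z-neighbours, and the real method works on projections from multiple viewing angles), dilating guarantees the proxy contains the model, so it is safe as an occludee, while eroding guarantees the proxy lies inside the model, so it is safe as an occluder:

```python
def neighbours(x, y, w, h):
    """Yield the in-bounds 4-connected neighbours of a voxel."""
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < w and 0 <= y + dy < h:
            yield x + dx, y + dy

def dilate(hull, w, h):
    """Grow the hull by one voxel: conservative occludee proxy."""
    return {(x, y) for x in range(w) for y in range(h)
            if (x, y) in hull or any(n in hull for n in neighbours(x, y, w, h))}

def erode(hull, w, h):
    """Shrink the hull by one voxel: conservative occluder proxy."""
    return {(x, y) for (x, y) in hull
            if all(n in hull for n in neighbours(x, y, w, h))}

hull = {(x, y) for x in range(1, 4) for y in range(1, 4)}  # 3x3 block in a 5x5 grid
print(len(dilate(hull, 5, 5)), len(erode(hull, 5, 5)))
```

Conservativeness is the point: a dilated occludee can only be culled when the true model would be, and an eroded occluder can only occlude what the true model would.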
  • Patent number: 10713840
    Abstract: A method is provided, including the following method operations: using a robot having a plurality of sensors to acquire sensor data about a local environment; processing the sensor data to generate a spatial model of the local environment, the spatial model defining virtual surfaces that correspond to real surfaces in the local environment; further processing the sensor data to generate texture information that is associated to the virtual surfaces defined by the spatial model; tracking a location and orientation of a head-mounted display (HMD) in the local environment; using the spatial model, the texture information, and the tracked location and orientation of the HMD to render a view of a virtual space that corresponds to the local environment; presenting the view of the virtual environment through the HMD.
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: July 14, 2020
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Michael Taylor, Erik Beran
  • Patent number: 10706630
    Abstract: An augmented reality user interface with a dual representation of a physical location is disclosed. Two views are generated for viewing the augmented reality objects: a first view that includes the video data with the augmented reality objects superimposed thereover in augmented reality locations, and a second view that uses data derived from the physical location to generate a map with the augmented reality objects from the first view visible as objects on the map in the augmented reality locations. The location, the motion data, the video data, and the augmented reality objects are combined into an augmented reality video such that when the computing device is in a first position the first view is visible, and when the computing device is in a second position the second view is visible; the augmented reality video is displayed on a display.
    Type: Grant
    Filed: August 13, 2018
    Date of Patent: July 7, 2020
    Inventor: Iegor Antypov
  • Patent number: 10699488
    Abstract: In one embodiment, the system captures an image using a camera. The image is associated with a user viewpoint. The system identifies a surface in the image using a machine learning model. The surface has associated properties meeting one or more criteria for rendering a three-dimensional virtual space. The system determines relative positions and orientations of three-dimensional display elements to the surface. The system determines the three-dimensional virtual space based at least on the properties of the surface, the user viewpoint, and the relative positions and orientations of the three-dimensional display elements to the surface. The three-dimensional virtual space comprises the three-dimensional display elements, which are positioned behind the surface. The system renders the three-dimensional virtual space on the surface. The three-dimensional virtual space is visible through a display area on the surface as seen from the user viewpoint.
    Type: Grant
    Filed: September 7, 2018
    Date of Patent: June 30, 2020
    Assignee: Facebook Technologies, LLC
    Inventor: Mark Terrano
  • Patent number: 10681321
    Abstract: The present disclosure is directed to a process to partially or fully suppress, or limit, pixel coloration errors. These pixel coloration errors, represented by small noise values, can be introduced during signal processing of high dynamic range (HDR) video signals. Converting visual content to a half precision floating point representation, for example, FP16, can introduce small amounts of signal noise due to value rounding. The noise can be multiplied and accumulated during HDR signal processing resulting in visual artifacts and degraded image quality. The disclosure can detect these noise amounts in pixel color component values, and suppress, or partially suppress, the noise to prevent the noise from accumulating during subsequent HDR signal processing.
    Type: Grant
    Filed: October 10, 2018
    Date of Patent: June 9, 2020
    Assignee: Nvidia Corporation
    Inventors: Yanbo Sun, Gennady Petrov, Lauri Hyvarinen, Weiwan Liu, Dong Han Ryu
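The rounding-noise problem the abstract describes can be demonstrated with Python's built-in half-precision round-trip, together with an illustrative noise gate: a value that differs from a known reference by less than roughly an FP16 quantum is snapped back so the error cannot accumulate. The relative threshold `2**-10` (the FP16 mantissa granularity) and the snapping rule are assumptions for illustration, not NVIDIA's detection logic.

```python
import struct

def to_fp16_and_back(x):
    """Round a float through IEEE 754 half precision ('e' struct format)."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

def suppress_noise(value, reference, rel_eps=2**-10):
    """Snap value to reference if the difference is within FP16 rounding."""
    if abs(value - reference) <= abs(reference) * rel_eps:
        return reference
    return value

original = 0.1
noisy = to_fp16_and_back(original)       # small drift from value rounding
print(noisy)
print(suppress_noise(noisy, original))   # snapped back to the reference
```

Without the gate, repeated HDR processing passes would multiply and accumulate drifts like this one, which is the artifact source the patent targets.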
  • Patent number: 10667659
    Abstract: A robot cleaner includes a display unit, a camera unit configured to capture an image of a cleaning region when cleaning starts, a dust sensor configured to output a sensed signal corresponding to an amount of dust sucked in the cleaning region, and a control unit configured to calculate the amount of dust based on the sensed signal, to start an augmented reality (AR) mode when cleaning ends, to generate an AR image corresponding to the amount of dust, and to control the display unit to superimpose the AR image on the image.
    Type: Grant
    Filed: August 29, 2017
    Date of Patent: June 2, 2020
    Inventor: Jaeduck Jung
  • Patent number: 10672185
    Abstract: One aspect of the disclosure provides a method for rendering an image. The method includes: placing primitives of the image in a screen space; binning the primitives into tiles of the screen space that the primitives touch; and rasterizing the tiles one tile at a time. The rasterizing includes shading a subset of the primitives binned to the one tile at a first shading rate during a first pass and shading the subset at a second, different shading rate during a second pass; the placing is performed only once while the image is rendered.
    Type: Grant
    Filed: July 13, 2018
    Date of Patent: June 2, 2020
    Assignee: Nvidia Corporation
    Inventor: Rahul Sathe
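The bin-once, shade-twice flow can be sketched as below. Binning a primitive by its screen-space bounding box, the 16-pixel tile size, and modelling a shading pass as "primitives times samples per pixel" are all simplifications for illustration; the point is that the placement/binning work is shared by both passes.

```python
TILE = 16  # pixels per tile edge (assumed)

def bin_primitives(prims, tiles_x, tiles_y):
    """prims: list of (x0, y0, x1, y1) screen-space bounding boxes.
    Returns a dict mapping each tile to the primitive indices touching it."""
    bins = {(tx, ty): [] for tx in range(tiles_x) for ty in range(tiles_y)}
    for i, (x0, y0, x1, y1) in enumerate(prims):
        for tx in range(x0 // TILE, x1 // TILE + 1):
            for ty in range(y0 // TILE, y1 // TILE + 1):
                if (tx, ty) in bins:
                    bins[(tx, ty)].append(i)
    return bins

def shade_tile(bin_ids, rate):
    """Stand-in for shading: invocations = primitives * samples-per-pixel."""
    return len(bin_ids) * rate

prims = [(0, 0, 20, 20), (30, 30, 40, 40)]
bins = bin_primitives(prims, tiles_x=4, tiles_y=4)  # binned exactly once
for tile, ids in sorted(bins.items()):
    if ids:
        # Two passes over the same bin at different shading rates.
        print(tile, ids, shade_tile(ids, rate=4), shade_tile(ids, rate=1))
```

Because `bin_primitives` runs once, the per-tile bins can be replayed for any number of passes at differing rates.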
  • Patent number: 10650599
    Abstract: The present disclosure includes methods and systems for rendering digital images of a virtual environment utilizing full path space learning. In particular, one or more embodiments of the disclosed systems and methods estimate a global light transport function based on sampled paths within a virtual environment. Moreover, in one or more embodiments, the disclosed systems and methods utilize the global light transport function to sample additional paths. Accordingly, the disclosed systems and methods can iteratively update an estimated global light transport function and utilize the estimated global light transport function to focus path sampling on regions of a virtual environment most likely to impact rendering a digital image of the virtual environment from a particular camera perspective.
    Type: Grant
    Filed: July 6, 2018
    Date of Patent: May 12, 2020
    Assignee: ADOBE INC.
    Inventors: Xin Sun, Nathan Carr, Hao Qin
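The iterate-estimate-resample loop the abstract describes can be caricatured in a few lines: keep a per-region estimate of light transport, sample regions in proportion to the current estimate, and refine the estimate from each sample. The two-region "scene", its true brightness values, and the exponential-moving-average update are invented for illustration; the actual method learns a global light transport function over full path space.

```python
import random

def pick_region(weights):
    """Sample an index in proportion to the current transport estimates."""
    total = sum(weights)
    r = random.uniform(0, total)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

def learn_transport(true_brightness, iters=2000, lr=0.05, seed=1):
    """Iteratively refine per-region estimates by sampling proportionally."""
    random.seed(seed)
    estimate = [1.0] * len(true_brightness)  # uniform prior over regions
    for _ in range(iters):
        i = pick_region(estimate)
        sample = true_brightness[i]          # "trace" a path into region i
        estimate[i] += lr * (sample - estimate[i])
    return estimate

est = learn_transport([4.0, 0.5])
print(est)  # estimates approach the true per-region brightness
```

As the estimate sharpens, sampling concentrates on the bright region, which mirrors the abstract's goal of focusing path sampling where it most affects the rendered image.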
  • Patent number: 10650587
    Abstract: The present invention relates to a method and system for constructing isosurfaces from 3D data sets, such as 3D image data based on a cubic grid (voxel image data). Specifically, a 3D image is rendered from a voxel image that can be generated by a variety of medical modalities. The present invention is a modification of the marching cubes algorithm (MCA) that allows for constructing an isosurface without holes resulting from some cubes having ambiguous isosurface topology. Specifically, to avoid holes resulting from ambiguities, multiple isosurfaces having different resolution levels are generated for ambiguous cubes to resolve the ambiguity.
    Type: Grant
    Filed: September 7, 2018
    Date of Patent: May 12, 2020
    Assignee: Canon U.S.A., Inc.
    Inventors: Zhimin Lu, Hitoshi Nakamura
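The ambiguity this abstract targets is the classic marching-cubes face ambiguity: a cube face whose four corner samples alternate inside/outside admits two valid isosurface topologies, and an inconsistent choice between neighbouring cubes produces holes. A minimal detector (corner order assumed to go around the face; the patent's resolution, re-meshing ambiguous cubes at finer resolution levels, is only described, not implemented here):

```python
def face_is_ambiguous(corner_values, iso):
    """corner_values: the 4 scalar samples at a cube face's corners,
    in order around the face. Ambiguous iff the inside/outside pattern
    alternates along the face (diagonal configuration)."""
    inside = [v >= iso for v in corner_values]
    return (inside[0] == inside[2] and inside[1] == inside[3]
            and inside[0] != inside[1])

print(face_is_ambiguous([1.0, 0.0, 1.0, 0.0], iso=0.5))  # alternating corners
print(face_is_ambiguous([1.0, 1.0, 0.0, 0.0], iso=0.5))  # clean edge crossing
```

A cube with any ambiguous face would, per the abstract, be re-extracted at a finer resolution until the topology is unambiguous.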
  • Patent number: 10636187
    Abstract: At a first device there is received from a second device (i) a native pixelated image and (ii) interactive filter data associated with the image. The filter data corresponds to an interactive filter applied to the image. A first representation of the image is displayed in accordance with the interactive filter data on the display. All or a first subset of the pixels of the image are obscured in the first representation. Responsive to user input, for a limited period of time, a second representation of the image is displayed in place of the first representation. None or a second subset of the pixels of the image is obscured in the second representation, where the second subset is smaller than the first subset. After the limited period of time has elapsed, the first representation is displayed in place of the second representation.
    Type: Grant
    Filed: December 13, 2017
    Date of Patent: April 28, 2020
    Assignee: Glu Mobile Inc.
    Inventors: Sourabh Ahuja, Liang Wu, Michael Mok, Lian A. Amaris
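The two-representation scheme can be sketched with a toy filter: the persistent view obscures a subset of pixel indices, and the briefly revealed view obscures a strictly smaller subset. The flat pixel list, the `OBSCURED` stand-in value, and the specific index sets are invented for illustration; the timing (reverting after the limited period) is left to the caller.

```python
OBSCURED = -1  # stand-in for a blurred or blacked-out pixel value

def apply_filter(pixels, hidden_indices):
    """First representation: the given subset of pixel indices is obscured."""
    return [OBSCURED if i in hidden_indices else p
            for i, p in enumerate(pixels)]

def reveal(pixels, hidden_indices, still_hidden):
    """Second representation: a smaller subset of the pixels stays obscured."""
    assert still_hidden <= hidden_indices  # must reveal, never hide more
    return apply_filter(pixels, still_hidden)

image = [10, 20, 30, 40]
first = apply_filter(image, {1, 2, 3})   # persistent, mostly obscured view
second = reveal(image, {1, 2, 3}, {3})   # shown only for a limited time
print(first, second)
```

After the reveal window elapses, the caller would simply display `first` again.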
  • Patent number: 10636207
    Abstract: A system and method for generating a three-dimensional (3D) map of a facility is provided. The system has at least one processor and a memory having stored thereon instructions that, upon execution by the at least one processor, cause the system to perform functions comprising: receiving a two-dimensional (2D) map of the facility; converting or importing the 2D map to a base map; generating or editing polygons on the base map, each polygon representative of a facility unit in the base map; generating one or more perspectives at one or more points on the base map to generate the 3D map.
    Type: Grant
    Filed: March 9, 2018
    Date of Patent: April 28, 2020
    Assignee: MAPPEDIN INC.
    Inventors: James Nathan Swidersky, Patrick Paskaris, Mitchell Butler, Erkang Wei, Zachary Sean Cregan
  • Patent number: 10606473
    Abstract: A display method of the present invention includes a stain detecting step for detecting a stain on an input display part, and a stained-point displaying step for displaying a predetermined indication at a first point of the input display part corresponding to the position of the detected stain.
    Type: Grant
    Filed: November 24, 2015
    Date of Patent: March 31, 2020
    Inventor: Yusuke Ogiwara
  • Patent number: 10607387
    Abstract: An animation engine is configured to apply motion amplifiers to sketches received from an end-user in order to create exaggerated, cartoon-style animation. The animation engine receives a sketch input from the end-user as well as a selection of one or more motion amplifiers. The animation engine also receives one or more control sketches that indicate how the selected motion amplifiers are applied to the sketch input. The animation engine projects the sketch input onto a sketch grid to create a sketch element, and then animates the sketch element by deforming the underlying sketch grid based on the control sketches. The animation engine then interpolates the sketch input, based on the deformations of the sketch grid, to animate the sketch. In this manner, the animation engine exposes an intuitive set of tools that allows end-users to easily apply the well-known Principles of Animation.
    Type: Grant
    Filed: April 19, 2016
    Date of Patent: March 31, 2020
    Assignee: Autodesk, Inc.
    Inventors: Rubiait Habib, Tovi Grossman, Nobuyuki Umetani, George Fitzmaurice
  • Patent number: 10607418
    Abstract: A virtual object arranged in a virtual environment is displayed by virtual reality glasses worn by a person. A virtual hand is positioned within the virtual environment in accordance with a hand detected in the real environment. As the virtual hand dips into an area of the virtual object, the representation of the area is changed.
    Type: Grant
    Filed: May 4, 2017
    Date of Patent: March 31, 2020
    Assignee: AUDI AG
    Inventor: Marcus Kuehne