Patents Examined by Kimbinh T. Nguyen
  • Patent number: 10388032
    Abstract: A method and apparatus are provided for compressing depth buffer data in a three-dimensional computer graphics system. The depth buffer data is divided into a plurality of rectangular tiles corresponding to rectangular areas in an associated image. A number of starting point locations in a tile are identified, and a difference in depth value is determined between each starting point and the depth values of each of at least two further locations. Using this information, depth values are predicted at a plurality of other locations in the tile, and where a predicted value substantially matches the actual depth value at a location, that location is assigned to a plane associated with the respective starting point. Starting point locations, depth value difference data, plane assignment data for each tile, and depth values for locations in the tile not assigned to a plane are then stored.
    Type: Grant
    Filed: January 23, 2012
    Date of Patent: August 20, 2019
    Assignee: Imagination Technologies Limited
    Inventor: Donald Fisk
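The plane-prediction scheme the abstract describes can be sketched roughly as follows (a minimal NumPy illustration, not the patented implementation; the single top-left starting point, the tolerance, and the helper name `compress_tile` are all assumptions):

```python
import numpy as np

def compress_tile(tile, tol=1e-3):
    """Sketch of plane-based depth tile compression: take the top-left
    texel as the starting point, derive per-axis depth deltas from its two
    neighbours, predict every other texel by planar extrapolation, and
    record which texels the plane explains to within `tol`."""
    d0 = tile[0, 0]
    dx = tile[0, 1] - d0          # depth delta along x
    dy = tile[1, 0] - d0          # depth delta along y
    ys, xs = np.mgrid[0:tile.shape[0], 0:tile.shape[1]]
    predicted = d0 + dx * xs + dy * ys
    assigned = np.abs(predicted - tile) <= tol   # plane-assignment mask
    residuals = tile[~assigned]                  # raw values for the rest
    return (d0, dx, dy), assigned, residuals
```

For a perfectly planar tile, everything is assigned to the plane and only the starting point and two deltas need storing.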
  • Patent number: 10380789
    Abstract: An apparatus and method are described for performing an efficient depth prepass. For example, in one embodiment a method comprises: performing a first pass through a specified portion of a graphics pipeline with only depth rendering active; initializing a coarse depth buffer within the specified portion of the graphics pipeline during the first pass, the coarse depth buffer storing depth data at a level of granularity less than that stored in a per-pixel depth buffer, which is not initialized during the first pass; and performing a second pass through the graphics pipeline following the first pass, the second pass utilizing the full graphics pipeline and using values in the coarse depth buffer initialized by the first pass.
    Type: Grant
    Filed: September 16, 2016
    Date of Patent: August 13, 2019
    Assignee: Intel Corporation
    Inventors: Magnus Andersson, Tomas G. Akenine-Moller, Jon N. Hasselgren
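A toy illustration of the two-pass idea (assuming an 8×8 coarse granularity and a "greater depth = farther" convention, neither of which the abstract fixes):

```python
import numpy as np

TILE = 8  # coarse-buffer granularity (an assumption)

def depth_prepass(depths):
    """Pass 1 sketch: depth-only rendering initialises a coarse buffer
    holding one conservative (farthest) depth per TILE x TILE block; the
    per-pixel buffer is left untouched."""
    h, w = depths.shape
    return depths.reshape(h // TILE, TILE, w // TILE, TILE).max(axis=(1, 3))

def coarse_test(coarse, px, py, z):
    """Pass 2 sketch: a fragment at (px, py) with depth z survives the
    coarse test only if it is not behind the tile's farthest stored
    depth."""
    return z <= coarse[py // TILE, px // TILE]  # False -> cull early
```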
  • Patent number: 10360687
    Abstract: Techniques are provided for detection and location of active display regions in videos with static borders. A methodology implementing the techniques according to an embodiment includes extracting features from rows and columns of pixels of a video frame. The features are based on horizontal gradient runs (HGRs) and vertical gradient runs (VGRs). The method also includes detecting one or more static regions of the frame, based on a comparison of differences between the features of the current video frame and features extracted from a previous video frame. The method further includes detecting one or more boundaries of the static regions based on a location of a maximum value of one of the features within the static region, if the maximum value is greater than a boundary detection threshold value. Determination of the active region in the current video frame is based on exclusion of the detected static regions.
    Type: Grant
    Filed: July 1, 2016
    Date of Patent: July 23, 2019
    Assignee: INTEL CORPORATION
    Inventors: Yeongseon Lee, Nilesh A. Ahuja, Mahesh Subedar, Jorge E. Caviedes
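As a loose, hypothetical illustration of frame-to-frame static-region detection (using a plain per-row gradient sum rather than the patent's horizontal/vertical gradient-run features):

```python
import numpy as np

def row_gradient_feature(frame):
    """Per-row feature: total absolute horizontal gradient along each row
    (a simplified stand-in for the abstract's gradient runs)."""
    return np.abs(np.diff(frame.astype(float), axis=1)).sum(axis=1)

def static_rows(prev, curr, thresh=1.0):
    """Rows whose gradient feature barely changes between consecutive
    frames are candidates for a static border (e.g. letterbox) region;
    the active display region excludes them."""
    return np.abs(row_gradient_feature(curr) - row_gradient_feature(prev)) < thresh
```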
  • Patent number: 10346504
    Abstract: A system, or components thereof such as software, for 3D modelling of bodies is described. This can be realized by a computer based method for modelling a body by scanning a determined space divided into elementary spaces, each elementary space being assigned a signed distance parameter indicative of the distance of said elementary space to said body and a weight parameter indicative of the importance of the distance parameter; said scanning of said body providing scanned points, and each scanned point thus obtained possibly modifying said signed distance and weight parameters of each elementary space of said determined space; wherein each one of the selected elementary spaces is modified by subdividing said selected elementary space into higher level elementary spaces defined by the same parameters; and wherein, in order to decrease the detail of the model of said body, said elementary spaces are modified by replacing selected elementary spaces with lower level elementary spaces.
    Type: Grant
    Filed: September 17, 2013
    Date of Patent: July 9, 2019
    Assignee: DENTSPLY IMPLANTS NV
    Inventors: Carl Van Lierde, Joris Vanbiervliet, Pieter De Ceuninck, Stefaan Dhondt, Veerle Pattijn, Jan Heyninck
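The per-elementary-space state can be pictured as a signed distance plus a confidence weight, fused scan point by scan point as a weighted average (an illustrative sketch of the common signed-distance fusion update, not the patented subdivision logic; names are invented):

```python
class Voxel:
    """Elementary space sketch: a signed distance to the body and a
    weight giving the confidence of that distance."""
    def __init__(self, dist=0.0, weight=0.0):
        self.dist, self.weight = dist, weight

    def integrate(self, scan_dist, scan_weight=1.0):
        """Fold one scanned point in as a weighted running average, so
        repeated observations refine the stored distance."""
        total = self.weight + scan_weight
        self.dist = (self.dist * self.weight + scan_dist * scan_weight) / total
        self.weight = total
```

Subdividing a voxel would copy these two parameters into its eight children; merging replaces children with a single coarser voxel.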
  • Patent number: 10339820
    Abstract: A system for displaying information related to a flight of an aircraft and an associated method are provided. The display system comprises a display device, a man-machine interface, a module for dynamically generating synthesis images, each comprising a synthetic depiction of the environment and a curve representative of a trajectory, said module being configured to generate a first synthesis image centered around a first central point of interest, to command the display thereof, and to detect an action to modify the central point of interest by an operator via said man-machine interface. The generating module is also configured to determine, as a function of said modification action, a second central point of interest, situated along said curve whatever said modification action is, to generate a second synthesis image centered around said second central point of interest, and to command the display thereof.
    Type: Grant
    Filed: May 19, 2016
    Date of Patent: July 2, 2019
    Assignee: DASSAULT AVIATION
    Inventors: Arnaud Branthomme, Igor Fain, Patrick Darses
  • Patent number: 10331295
    Abstract: A computer-implemented method for visualizing data about an object. A hierarchy of image blocks is generated using an action scheme and a part. Instructions identifying a hierarchy of image blocks and the action scheme are generated. The hierarchy of image blocks is communicated to a graphical user interface. An image area is identified in an image block in the hierarchy of image blocks in the graphical user interface. A query is generated to identify a location of the part within the object. The query is based on a type of search, a spatial region, and the action scheme. An indicator representing the location of the part identified by the query is displayed.
    Type: Grant
    Filed: January 22, 2016
    Date of Patent: June 25, 2019
    Assignee: The Boeing Company
    Inventors: Nikoli E. Prazak, John Carney Gass
  • Patent number: 10332303
    Abstract: A ray tracing unit is implemented in a graphics rendering system. The ray tracing unit comprises: processing logic configured to perform ray tracing operations on rays, a dedicated ray memory coupled to the processing logic and configured to store ray data for rays to be processed by the processing logic, an interface to a memory system, and control logic configured to manage allocation of ray data to either the dedicated ray memory or the memory system. Core ray data for rays to be processed by the processing logic is stored in the dedicated ray memory, and at least some non-core ray data for the rays is stored in the memory system. This allows core ray data for many rays to be stored in the dedicated ray memory without the size of the dedicated ray memory becoming too wasteful when the ray tracing unit is not in use.
    Type: Grant
    Filed: April 26, 2016
    Date of Patent: June 25, 2019
    Assignee: Imagination Technologies Limited
    Inventors: John W. Howson, Steven J. Clohset, Ali Rabbani
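The core/non-core split can be sketched as a tiny allocator with a bounded on-chip store (capacity, field names, and the exact core/non-core partition are all illustrative, not taken from the patent):

```python
class RayStore:
    """Sketch: core ray data lives in a small dedicated ray memory,
    everything else is spilled to the general memory system."""
    def __init__(self, dedicated_capacity):
        self.capacity = dedicated_capacity
        self.core = {}       # ray id -> (origin, direction, t_max)
        self.noncore = {}    # ray id -> e.g. shading payload

    def allocate(self, ray_id, core_data, noncore_data):
        if len(self.core) >= self.capacity:
            raise MemoryError("dedicated ray memory full")
        self.core[ray_id] = core_data        # fast on-chip path
        self.noncore[ray_id] = noncore_data  # memory-system path
```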
  • Patent number: 10319076
    Abstract: In one embodiment, a method includes accessing a plurality of generative adversarial networks (GANs) that are each applied to a particular level k of a Laplacian pyramid. Each GAN may comprise a generative model Gk and a discriminative model Dk. At each level k, the generative model Gk may take as input a noise vector zk and may output a generated image h̃k. At each level k, the discriminative model Dk may take as input either the generated image h̃k or a real image hk, and may output a probability that the input was the real image hk. The method may further include generating a sample image Ĩk from the generated images h̃k, wherein the sample image is based on the probabilities outputted by each of the discriminative models Dk and the generated images h̃k. The method may further include providing the sample image Ĩk for display.
    Type: Grant
    Filed: June 15, 2017
    Date of Patent: June 11, 2019
    Assignee: Facebook, Inc.
    Inventors: Emily Denton, Soumith Chintala, Arthur David Szlam, Robert D. Fergus
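The coarse-to-fine sampling described here can be sketched as pyramid reconstruction with one generator per level (a NumPy toy: the `generators[k](shape)` callables are hypothetical stand-ins for Gk applied to a noise vector zk, and nearest-neighbour upsampling stands in for the pyramid's expand step):

```python
import numpy as np

def upsample(img):
    """Nearest-neighbour 2x upsample (expand step stand-in)."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def sample_from_pyramid(generators, coarsest_shape):
    """Start from the coarsest generated image, then at each finer level
    upsample and add the band-pass detail that level's generator
    produces."""
    image = generators[-1](coarsest_shape)   # coarsest pyramid level
    for g in reversed(generators[:-1]):      # finer levels, coarse to fine
        image = upsample(image)
        image = image + g(image.shape)       # add generated detail h~k
    return image
```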
  • Patent number: 10317689
    Abstract: A 3D display device and a driving method thereof are provided, which include controlling first subpixels, arranged in an electroluminescent display (ELD) panel disposed below a liquid crystal display (LCD) panel, to form luminous areas and black areas alternately arranged in the row direction, so as to form a rear grating; determining a position for displaying a left-eye view and a position for displaying a right-eye view in the LCD panel according to current positions of the eyes of a viewer; and controlling second subpixels corresponding to the same position for displaying the left-eye view in the LCD panel to display the same view, and controlling second subpixels corresponding to the same position for displaying the right-eye view to display the same view.
    Type: Grant
    Filed: July 21, 2016
    Date of Patent: June 11, 2019
    Assignees: BOE Technology Group Co., Ltd., Beijing BOE Optoelectronics Technology Co., Ltd.
    Inventors: Ming Yang, Xiaochuan Chen, Wenqing Zhao, Pengcheng Lu, Qian Wang, Jian Gao, Xiaochen Niu, Rui Xu, Lei Wang, Yingming Liu, Shengji Yang, Haisheng Wang
  • Patent number: 10321126
    Abstract: Systems and methods for capturing a two dimensional (2D) image of a portion of a three dimensional (3D) scene may include a computer rendering a 3D scene on a display from a user's point of view (POV). A camera mode may be activated in response to user input and a POV of a camera may be determined. The POV of the camera may be specified by position and orientation of a user input device coupled to the computer, and may be independent of the user's POV. A 2D frame of the 3D scene based on the POV of the camera may be determined and the 2D image based on the 2D frame may be captured in response to user input. The 2D image may be stored locally or on a server of a network.
    Type: Grant
    Filed: July 7, 2015
    Date of Patent: June 11, 2019
    Assignee: zSpace, Inc.
    Inventors: Jonathan J. Hosenpud, Arthur L. Berman, Jerome C. Tu, Kevin D. Morishige, David A. Chavez
  • Patent number: 10311624
    Abstract: Presented herein are systems and methods configured to generate virtual entities representing real-world users. In some implementations, the systems and/or methods are configured to capture user appearance information with imaging devices and sensors, determine correspondence values conveying correspondences between the appearance of the user's body or head and individual ones of default body models and/or default head models, and modify a set of values defining a base body model and/or base head model based on the determined correspondence values and the sets of base values defining the default body models and/or default head models. The base body model and/or base head model may be modified to model the appearance of the body and/or head of the user.
    Type: Grant
    Filed: June 23, 2017
    Date of Patent: June 4, 2019
    Assignee: Disney Enterprises, Inc.
    Inventors: Kenneth Mitchell, Charles Malleson, Ivan Huerta Casado, Martin Klaudiny, Malgorzata Edyta Kosek
  • Patent number: 10311633
    Abstract: An approach is provided for accurate processing and registering of media content for rendering in 3D maps and other applications. The approach includes determining at least one first pixel of at least one image that geometrically corresponds to at least one second pixel of at least one rendered three-dimensional map. Further, the approach includes processing and/or facilitating a processing of (a) the at least one first pixel; (b) the at least one second pixel; (c) metadata associated with at least one of the at least one first pixel and the second pixel; or (d) a combination thereof to determine at least one confidence value, wherein the at least one confidence value is indicative of an estimated level of geometric distortion resulting from projecting the at least one first pixel onto the at least one second pixel.
    Type: Grant
    Filed: September 30, 2014
    Date of Patent: June 4, 2019
    Assignee: Nokia Technologies Oy
    Inventor: Kimmo Roimela
  • Patent number: 10311646
    Abstract: A device may detect, in a field of view of a camera, one or more components of an automated teller machine (ATM) device using a computer vision technique based on generating a three dimensional model of the one or more components. The device may identify the ATM device as a particular device or as a particular type of device based on the one or more components of the ATM device, or first information related to the ATM device. The device may identify a set of tasks to be performed with respect to the ATM device. The device may provide, for display via a display associated with the device, second information associated with the set of tasks as an augmented reality overlay. The device may perform an action related to the set of tasks, the ATM device, or the augmented reality overlay.
    Type: Grant
    Filed: February 26, 2018
    Date of Patent: June 4, 2019
    Assignee: Capital One Services, LLC
    Inventors: David Kelly Wurmfeld, Kevin Osborn
  • Patent number: 10304419
    Abstract: An electronic device and method are disclosed herein. The electronic device includes a display, a display driver integrated circuit, a memory and a processor, which implements the method. The method includes outputting a first screen to the display in a first resolution; when a resolution change condition is detected by the processor, changing the first screen to display a second screen in a second resolution different from the first resolution; and adjusting performance of a system resource of the electronic device.
    Type: Grant
    Filed: August 1, 2017
    Date of Patent: May 28, 2019
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Ji Sun Park, Won Hee Seo, In Seog Ku, Woo Geun Kim, Il Jung, Dae Hyun Cho, Dae Sik Hwang, Moo Young Kim
  • Patent number: 10296663
    Abstract: Example systems and methods for virtual visualization of a three-dimensional model of an object in a two-dimensional environment. The method may include moving and aligning the three-dimensional model of the object along a plane in the two-dimensional environment.
    Type: Grant
    Filed: May 12, 2015
    Date of Patent: May 21, 2019
    Assignee: Atheer, Inc.
    Inventor: Milos Jovanovic
  • Patent number: 10290149
    Abstract: A system and method of interacting with a virtual object in a virtual environment using physical movement. The virtual scene contains a 3D object that appears to extend forward or above the plane of the display. A sensor array is provided that monitors an area proximate the display. The sensor array can detect the presence and position of an object that enters the area. Action points are programmed in, on, or near the virtual objects. Each action point has virtual coordinates in said virtual scene that correspond to real coordinates within the monitored area. Subroutines are activated when the sensor array detects an object that moves to real coordinates that correspond to the virtual coordinates of the action points.
    Type: Grant
    Filed: April 7, 2017
    Date of Patent: May 14, 2019
    Assignee: Maxx Media Group, LLC
    Inventor: Richard S. Freeman
  • Patent number: 10275929
    Abstract: A method and apparatus for applying a two-dimensional image on a three-dimensional model composed of a polygonal mesh. The method comprises generating an adjacency structure for all triangles within the mesh; identifying the triangle within the membrane containing the desired center point; calculating spatial distances between the three vertices and the desired center; checking each triangle edge to see if the distances show an intersection and, if a collision is detected, adding the triangle to a list; iteratively processing all the triangles in the list by calculating the spatial data of the single unknown vertex, checking the two edges of the triangle to see if the calculated distances show an intersection and, if an intersection occurs, adding this new triangle to the list; transforming into UV-coordinates; and applying the two-dimensional image to the three-dimensional model using the UV-coordinates.
    Type: Grant
    Filed: July 10, 2017
    Date of Patent: April 30, 2019
    Assignee: CREATIVE EDGE SOFTWARE LLC
    Inventors: Amy Jennings, Stuart Jennings
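The adjacency structure in the first step can be sketched as an edge-sharing map (a minimal illustration with invented names; the surface walk and UV unfolding steps are omitted):

```python
from collections import defaultdict

def triangle_adjacency(triangles):
    """Map each shared edge to the triangles on either side of it, then
    derive, for every triangle, the set of its edge-neighbours.
    Triangles are given as vertex-index triples."""
    edge_to_tris = defaultdict(list)
    for ti, (a, b, c) in enumerate(triangles):
        for edge in ((a, b), (b, c), (c, a)):
            edge_to_tris[tuple(sorted(edge))].append(ti)
    adjacency = defaultdict(set)
    for tris in edge_to_tris.values():
        for t in tris:
            adjacency[t].update(u for u in tris if u != t)
    return adjacency
```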
  • Patent number: 10262458
    Abstract: Techniques associated with three-dimensional object modeling are described in various implementations. In one example implementation, a method may include receiving a plurality of two-dimensional images depicting views of an object to be modeled in three dimensions. The method may also include, processing the plurality of two-dimensional images to generate a three-dimensional representation of the object, and analyzing the three-dimensional representation of the object to determine whether sufficient visual information exists in the plurality of two-dimensional images to generate a three-dimensional model of the object. The method may also include, in response to determining that sufficient visual information does not exist for a portion of the object, identifying the portion of the object to a user.
    Type: Grant
    Filed: May 31, 2013
    Date of Patent: April 16, 2019
    Assignee: LONGSAND LIMITED
    Inventors: Sean Blanchflower, George Saklatvala
  • Patent number: 10249094
    Abstract: A method of synthetic representation of elements of interest in a viewing system for aircraft. The viewing system comprises location sensors, a cartographic database and a database of elements of interest, an image sensor, a unit for processing images, a unit for generating three-dimensional digital images representative of the terrain overflown, and a viewing device. When the terrain overflown comprises an element of interest, the method of synthetic representation comprises: a first step of searching for and detecting the element of interest in each image of a sequence of images; and a second step of generating three-dimensional digital images representative of the terrain overflown, the element of interest being represented according to a first representation if it has not been detected in any of the images of the sequence of images and according to a second representation if it has been detected.
    Type: Grant
    Filed: March 26, 2017
    Date of Patent: April 2, 2019
    Assignee: THALES
    Inventors: Thierry Ganille, Bruno Aymeric, Johanna Lux
  • Patent number: 10230905
    Abstract: A system includes a video device for capturing, at a viewing time, a first video image corresponding to a foundation scene at a setting, the foundation scene viewed at the viewing time from a vantage position. A memory stores a library of image data including media generated at a time prior to the viewing time. A vantage position monitor tracks the vantage position and generates vantage position data of a human viewer. A digital video data controller selects from the image data in the library, at the viewing time and based on the vantage position data, a plurality of second images corresponding to a modifying scene at the setting, the modifying scene further corresponding to the vantage position. A combiner combines the first video image and the plurality of second images to create a composite image for display.
    Type: Grant
    Filed: March 24, 2016
    Date of Patent: March 12, 2019
    Assignee: Passewall Research LLC
    Inventor: Stuart Wilkinson