Hidden Line/Surface Determining Patents (Class 345/421)
  • Patent number: 11004252
    Abstract: Real-time ray tracing-based adaptive multi-frequency shading. For example, one embodiment of an apparatus comprising: rasterization hardware logic to process input data for an image in a deferred rendering pass and to responsively update one or more graphics buffers with first data to be used in a subsequent rendering pass; ray tracing hardware logic to perform ray tracing operations using the first data to generate reflection ray data and to store the reflection ray data in a reflection buffer; and image rendering circuitry to perform texture sampling in a texture buffer based on the reflection ray data in the reflection buffer to render an output image.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: May 11, 2021
    Assignee: Intel Corporation
    Inventors: Carson Brownlee, Gabor Liktor, Joshua Barczak, Kai Xiao, Michael Apodaca, Thomas Raoux
  • Patent number: 10979704
    Abstract: Methods and apparatus of generating a refined reference frame for inter-frame encoding by applying blur parameters to allow encoding of image frames having blurred regions are presented herein. The methods and apparatus may identify a blurred region of an image frame by comparing the image frame with a reference frame, generate a refined reference frame by applying the blur parameter indicative of the blurred region to the reference frame, determine whether to use one of the reference frame and refined reference frame to encode the image frame, and encode the image frame using the refined reference frame when determined to use the refined reference frame.
    Type: Grant
    Filed: May 4, 2015
    Date of Patent: April 13, 2021
    Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
    Inventors: Ihab M. A. Amer, Khaled Mammou, Vladyslav S. Zakharchenko, Dmytro U. Elperin
  • Patent number: 10977858
    Abstract: A method is disclosed, the method comprising the steps of receiving, from a first client application, first graphical data comprising a first node; receiving, from a second client application independent of the first client application, second graphical data comprising a second node; and generating a scenegraph, wherein the scenegraph describes a hierarchical relationship between the first node and the second node according to visual occlusion relative to a perspective from a display.
    Type: Grant
    Filed: June 18, 2018
    Date of Patent: April 13, 2021
    Assignee: MAGIC LEAP, INC.
    Inventor: Praveen Babu Jd
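The patent above orders nodes from independent clients into a single scenegraph by visual occlusion from a display's perspective. A minimal sketch of that merging step, assuming a painter's-algorithm depth ordering stands in for the patent's hierarchical occlusion relationship (the `Node` type and `build_scenegraph` name are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    depth: float  # distance from the display's viewpoint

def build_scenegraph(client_a_nodes, client_b_nodes):
    """Combine nodes from two independent client applications into one
    draw order in which nearer nodes occlude farther ones: sort
    back-to-front so closer nodes are drawn last, on top."""
    merged = list(client_a_nodes) + list(client_b_nodes)
    return sorted(merged, key=lambda n: n.depth, reverse=True)
```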
  • Patent number: 10963044
    Abstract: Methods, apparatus, systems are disclosed for altering displayed content on a display device responsive to a user's proximity. In accord with one example, a computing system includes a memory, a sensor to collect data representative of a viewing distance between a display and a user of the display, and a scaler to adjust a size of at least one object displayed by the display based on the viewing distance from the display.
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: March 30, 2021
    Assignee: Intel Corporation
    Inventors: Jiancheng Tao, Hong Wong, Xiaoguo Liang, Yanbing Sun, Jun Liu, Wah Yiu Kwong
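The scaler described above adjusts on-screen object size from a sensed viewing distance. A minimal sketch under assumed parameters (the linear scaling law, reference distance, and minimum size are not specified by the patent and are illustrative only):

```python
def scaled_size(base_size_px: float, viewing_distance_cm: float,
                reference_distance_cm: float = 50.0,
                min_size_px: float = 8.0) -> float:
    """Scale a displayed object so its apparent size tracks viewing
    distance: farther viewers get proportionally larger objects,
    clamped to a minimum legible size."""
    factor = viewing_distance_cm / reference_distance_cm
    return max(min_size_px, base_size_px * factor)
```

For example, doubling the distance from the 50 cm reference doubles the rendered size.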
  • Patent number: 10965913
    Abstract: A person is detected from a moving image of the monitoring area, and position information on the person is acquired. Temporal statistical processing is performed on the position information, statistical information relating to a staying situation of the person is acquired in accordance with setting of a target period of time for the statistical processing, and thus a heat map moving image is generated. Furthermore, a mask image corresponding to a person image area is generated at every predetermined point in time based on the position information on the person. A monitoring moving image that results from superimposing the heat map image and the mask image onto a background image is generated and is output at every predetermined point in time.
    Type: Grant
    Filed: July 30, 2015
    Date of Patent: March 30, 2021
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Yuichi Matsumoto, Kazuma Yoshida
  • Patent number: 10943388
    Abstract: Systems and methods are disclosed for user selection of a virtual object in a virtual scene. A user input may be received via a user input device. The user input may be an attempt to select a virtual object from a plurality of virtual objects rendered in a virtual scene on a display of a display system. A position and orientation of the user input device may be determined in response to the first user input. The probability that the user input selects each virtual object may be calculated via a probability model. Based on the position and orientation of the user input device, a ray-cast procedure and a sphere-cast procedure may be performed to determine the virtual object being selected. The probability of selection may also be considered in determining the virtual object. A virtual beam may be rendered from the user input device to the virtual object.
    Type: Grant
    Filed: September 6, 2019
    Date of Patent: March 9, 2021
    Assignee: ZSPACE, INC.
    Inventors: Jonathan J. Hosenpud, Clifford S. Champion, David A. Chavez, Kevin S. Yamada, Alexandre R. Lelievre
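The entry above combines a ray cast, a sphere-like proximity test, and a probability model to pick the intended object. A sketch of one way those signals could be combined, assuming a normalized pointing ray and a score of probability over ray-to-center distance (the scoring formula and object layout are assumptions, not the patent's method):

```python
import math

def select_object(ray_origin, ray_dir, objects, probabilities):
    """Score each candidate by how close the pointing ray passes to the
    object's center (a sphere-cast-like distance test), weighted by a
    prior selection probability; return the best-scoring object.
    Assumes ray_dir is a unit vector."""
    best, best_score = None, float("-inf")
    for obj, prob in zip(objects, probabilities):
        # vector from ray origin to the object's center
        to_obj = [c - o for c, o in zip(obj["center"], ray_origin)]
        t = sum(a * b for a, b in zip(to_obj, ray_dir))  # projection onto ray
        if t < 0:  # object lies behind the input device
            continue
        closest = [o + t * d for o, d in zip(ray_origin, ray_dir)]
        dist = math.dist(closest, obj["center"])
        if dist > obj["radius"]:  # ray misses the selection sphere
            continue
        score = prob / (1.0 + dist)
        if score > best_score:
            best, best_score = obj, score
    return best
```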
  • Patent number: 10937242
    Abstract: There are provided systems and methods for performing image compensation using image enhancement effects. In one implementation, such a system includes a computing platform having a hardware processor and a memory storing an image compensation software code. The hardware processor is configured to execute the image compensation software code to receive image data corresponding to an event being viewed by a viewer in a venue, the image data obtained by a wearable augmented reality (AR) device worn by the viewer, and to detect a deficiency in an image included in the image data. The hardware processor is further configured to execute the image compensation software code to generate one or more image enhancement effect(s) for compensating for the deficiency in the image and to output the image enhancement effect(s) for rendering on a display of the wearable AR device while the viewer is viewing the event.
    Type: Grant
    Filed: August 3, 2018
    Date of Patent: March 2, 2021
    Assignee: Disney Enterprises, Inc.
    Inventors: Steven M. Chapman, Todd P. Swanson, Joseph Popp, Samy Segura, Mehul Patel
  • Patent number: 10919154
    Abstract: An interference determination method is provided to compute whether or not a robot that operates according to a motion path will interfere with a nearby object. A first orientation, a second orientation, and an intermediate orientation of the robot are set, and a first combined approximated body is generated that is configured by combining a plurality of robot approximated bodies, which are obtained by approximating the shape of the robot in these orientations. If it is determined that the robot will interfere with the nearby object, whether to generate a combined approximated body that is smaller than the first combined approximated body is determined based on the amount indicating the interval between two adjacent robot approximated bodies.
    Type: Grant
    Filed: September 20, 2018
    Date of Patent: February 16, 2021
    Assignee: OMRON Corporation
    Inventors: Haruka Fujii, Akane Nakashima, Takeshi Kojima, Kennosuke Hayashi
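The interference determination above approximates the robot's swept shape with combined approximated bodies and tests them against nearby objects. A minimal sketch of the underlying primitive test, assuming both the robot approximated bodies and the obstacle are spheres (the data layout and `clearance` parameter are illustrative):

```python
import math

def spheres_interfere(robot_spheres, obstacle, clearance=0.0):
    """Test a set of bounding spheres approximating the robot (e.g. one
    per sampled orientation) against a spherical nearby object.
    Sphere-sphere overlap reduces to a center-distance comparison."""
    for center, radius in robot_spheres:
        gap = math.dist(center, obstacle["center"]) - (radius + obstacle["radius"])
        if gap < clearance:
            return True
    return False
```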
  • Patent number: 10919357
    Abstract: A method and a device for determining a trajectory in off-road scenarios having a first sensor system for capturing a 2D image of an environment; a first computing unit for classifying 2D image points into at least three classes, wherein a first class is “intended for being driven on”, wherein a further class is “can be driven on if necessary” and another class is “cannot be driven on”; a second sensor system for determining a height profile of the environment; a second computing unit for projecting the classified image points into the height profile; and a third computing unit having a trajectory planning algorithm, wherein the third computing unit determines a trajectory using the image points of the first class, wherein the image points of the further class are additionally taken into account in response to an incomplete trajectory in the first class.
    Type: Grant
    Filed: January 29, 2018
    Date of Patent: February 16, 2021
    Inventors: Mikael Johansson, Volker Patricio Schomerus
  • Patent number: 10910101
    Abstract: An image diagnosis support apparatus includes: an acquisition unit that acquires a plurality of first image data groups and a plurality of second image data groups to be subjected to comparative interpretation; an association unit that associates each of the plurality of first image data groups and each of the plurality of second image data groups with each other based on a degree of similarity between the image data groups; an image extraction unit that extracts a corresponding image corresponding to at least one target image of the first image data group from a second image data group among the plurality of second image data groups associated with a first image data group among the plurality of first image data groups; and a display controller that displays a set of images of the target image and the corresponding image on a display unit in a contrastable layout.
    Type: Grant
    Filed: August 21, 2019
    Date of Patent: February 2, 2021
    Assignee: FUJIFILM Corporation
    Inventor: Sadato Akahori
  • Patent number: 10891705
    Abstract: Systems, apparatuses and methods may provide for technology that determines a position associated with one or more polygons in unresolved surface data and select an anti-aliasing sample rate based on a state of the one or more polygons with respect to the position. Additionally, the unresolved surface data may be resolved at the position in accordance with the selected anti-aliasing sample rate, wherein the selected anti-aliasing sample rate varies across a plurality of pixels. The position may be a bounding box, a display screen coordinate, and so forth.
    Type: Grant
    Filed: February 6, 2019
    Date of Patent: January 12, 2021
    Assignee: Intel Corporation
    Inventors: Abhishek R. Appu, Joydeep Ray, Peter L. Doyle, Subramaniam Maiyuran, Devan Burke, Philip R. Laws, ElMoustapha Ould-Ahmed-Vall, Altug Koker
  • Patent number: 10861177
    Abstract: A system and method for binocular stereo vision is disclosed. The method includes acquiring a pair of images including a first image and a second image, the pair of images being captured by one or more cameras; obtaining a training model; determining a plurality of feature images according to the first image; determining one or more features of an object in the first image according to the training model and the plurality of feature images; determining a first area in the first image, the first area including at least one of the determined one or more features; obtaining depth information of the determined first area based on the first and second images; and determining a second area in the first image based on the determined first area and depth information, the second area including at least one of the determined first area.
    Type: Grant
    Filed: November 9, 2016
    Date of Patent: December 8, 2020
    Assignee: ZHEJIANG DAHUA TECHNOLOGY CO., LTD.
    Inventors: Wei Lu, Qiankun Li, Shizhu Pan, Miao Cheng, Xiangming Zhou
  • Patent number: 10853994
    Abstract: The disclosure is directed to methods and processes of rendering a complex scene using a combination of raytracing and rasterization. The methods and processes can be implemented in a video driver or software library. A developer of an application can provide information to an application programming interface (API) call as if a conventional raytrace API is being called. The method and processes can analyze the scene using a variety of parameters to determine a grouping of objects within the scene. The rasterization algorithm can use as input primitive cluster data retrieved from raytracing acceleration structures. Each group of objects can be rendered using its own balance of raytracing and rasterization to improve rendering performance while maintaining a visual quality target level.
    Type: Grant
    Filed: May 23, 2019
    Date of Patent: December 1, 2020
    Assignee: Nvidia Corporation
    Inventors: Christoph Kubisch, Ziyad Hakura, Manuel Kraemer
  • Patent number: 10848738
    Abstract: In implementations of trajectory-based viewport prediction for 360-degree videos, a video system obtains trajectories of angles of users who have previously viewed a 360-degree video. The angles are used to determine viewports of the 360-degree video, and may include trajectories for a yaw angle, a pitch angle, and a roll angle of a user recorded as the user views the 360-degree video. The video system clusters the trajectories of angles into trajectory clusters, and for each trajectory cluster determines a trend trajectory. When a new user views the 360-degree video, the video system compares trajectories of angles of the new user to the trend trajectories, and selects trend trajectories for a yaw angle, a pitch angle, and a roll angle for the user. Using the selected trend trajectories, the video system predicts viewports of the 360-degree video for the user for future times.
    Type: Grant
    Filed: May 23, 2019
    Date of Patent: November 24, 2020
    Assignee: Adobe Inc.
    Inventors: Stefano Petrangeli, Viswanathan Swaminathan, Gwendal Brieuc Christian Simon
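The viewport-prediction entry above matches a new user's angle trajectory to per-cluster trend trajectories and reads future viewports off the selected trend. A minimal sketch for a single angle (yaw), assuming summed absolute difference as the matching metric (the metric and function names are assumptions):

```python
def nearest_trend(user_trajectory, trend_trajectories):
    """Pick the trend trajectory (one per cluster) whose early samples
    best match the new user's observed angles so far."""
    def distance(trend):
        n = len(user_trajectory)
        return sum(abs(u - t) for u, t in zip(user_trajectory, trend[:n]))
    return min(trend_trajectories, key=distance)

def predict_viewport(user_trajectory, trend_trajectories, future_index):
    """Predict a future angle by reading it off the selected trend."""
    return nearest_trend(user_trajectory, trend_trajectories)[future_index]
```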
  • Patent number: 10839598
    Abstract: One example includes a non-transitory machine readable storage medium including voxels organized by an octree and at least one quad-tree index. The octree defines an object to be three-dimensionally printed and includes a list of nodes for each depth of the octree where each node includes nodal content representing at least one voxel. The at least one quad-tree index is to index at least one node of the octree and has a depth less than or equal to a maximum resolved depth. The at least one quad-tree index is to be accessed by computer executable instructions to retrieve nodal content from the octree to control a processor to process the object to be three-dimensionally printed.
    Type: Grant
    Filed: July 26, 2016
    Date of Patent: November 17, 2020
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Jun Zeng, Ana Del Angel, Scott White, Sebastia Cortes i Herms, Gary Dispoto
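The storage-medium entry above pairs an octree of voxels with quad-tree indexes so nodal content can be retrieved without walking the full tree. A simplified sketch of that pairing, with nodes keyed by (depth, x, y, z) and a column index keyed by (depth, x, y); the dictionary-based layout is an illustrative stand-in for the patent's node lists:

```python
from collections import defaultdict

class IndexedOctree:
    """Voxels stored as octree nodes plus a quad-tree-style index so
    all nodes in one vertical (x, y) column (e.g. a print slice) can
    be fetched directly."""
    def __init__(self):
        self.nodes = {}                        # (depth, x, y, z) -> content
        self.column_index = defaultdict(list)  # (depth, x, y) -> [z, ...]

    def insert(self, depth, x, y, z, content):
        self.nodes[(depth, x, y, z)] = content
        self.column_index[(depth, x, y)].append(z)

    def column(self, depth, x, y):
        """Retrieve nodal content for every voxel in the (x, y) column."""
        return [self.nodes[(depth, x, y, z)]
                for z in sorted(self.column_index[(depth, x, y)])]
```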
  • Patent number: 10825238
    Abstract: A three-dimensional (3D) object is configured for presentation on a display screen. Object data representing a model of a 3D object is received at a graphics processing unit. The object data includes a plurality of interrelated polygons. Coordinates for one or more clipping boundaries are also received at the graphics processing unit. The clipping boundaries define a presentation region that overlaps at least in part with visible portions of the display screen. Using a geometry shader, per-polygon clipping is performed on each polygon of the object data that intersects with at least one clipping boundary. Only portions of the 3D object that lie within the presentation region are then presented on the display screen.
    Type: Grant
    Filed: May 3, 2017
    Date of Patent: November 3, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jonathan Gustav Paulovich, Nikolai Michael Faaland
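The per-polygon clipping step above is the classic operation of cutting a polygon against a boundary plane. A minimal 2D sketch against one vertical boundary (the Sutherland-Hodgman single-plane step; the patent's geometry shader would apply this per clipping boundary in 3D):

```python
def clip_polygon_halfplane(polygon, x_min):
    """Clip a 2D polygon (list of (x, y) vertices) to the half-plane
    x >= x_min, emitting intersection points where edges cross the
    boundary."""
    out = []
    n = len(polygon)
    for i in range(n):
        cur, nxt = polygon[i], polygon[(i + 1) % n]
        cur_in, nxt_in = cur[0] >= x_min, nxt[0] >= x_min
        if cur_in:
            out.append(cur)
        if cur_in != nxt_in:  # edge crosses the boundary: emit intersection
            t = (x_min - cur[0]) / (nxt[0] - cur[0])
            out.append((x_min, cur[1] + t * (nxt[1] - cur[1])))
    return out
```

Clipping a unit-height rectangle spanning x in [-1, 1] at x_min = 0 keeps only its right half.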
  • Patent number: 10825243
    Abstract: A method, system, and apparatus create a 3D CAD model. Scan data from two or more structured scans of a real-world scene are acquired and each scan processed independently by segmenting the scan data into multiple segments, filtering the scan data, and fitting an initial model that is used as a model candidate. Model candidates are clustered into groups and a refined model is fit onto the model candidates in the same group. A grid of cells representing points is mapped over the refined model. Each of the grid cells is labeled by processing each scan independently, labeling each cell located within the refined model as occupied, utilizing back projection to label remaining cells as occluded or empty. The labels from multiple scans are then combined. Based on the labeling, model details are extracted to further define and complete the refined model.
    Type: Grant
    Filed: August 15, 2019
    Date of Patent: November 3, 2020
    Assignee: AUTODESK, INC.
    Inventors: Oytun Akman, Ronald Poelman, Yan Fu
  • Patent number: 10818068
    Abstract: A gaming device is configured to provide virtual hybrid texture mapping, which provides a type of virtual placeholder that can be filled in using cloud computing resources. The virtual placeholders are filled in to create more detailed textures whose calculations are offloaded to a cluster in a cloud computing service provider's data center.
    Type: Grant
    Filed: May 3, 2016
    Date of Patent: October 27, 2020
    Assignee: VMware, Inc.
    Inventor: Rasheed Babatunde
  • Patent number: 10818028
    Abstract: A computing system is configured to train an object classifier. Monocular image data and ground-truth data are received for a scene. Geometric context is determined including a three-dimensional camera position relative to a fixed plane. Regions of interest (RoI) and a set of potential occluders are identified within the image data. For each potential occluder, an occlusion zone is projected onto the fixed plane in three-dimensions. A set of occluded RoIs on the fixed plane are generated for each occlusion zone. Each occluded RoI is projected back to the image data in two-dimensions. The classifier is trained by minimizing a loss function generated by inputting information regarding the RoIs and the occluded RoIs into the classifier, and by minimizing location errors of each RoI and each occluded RoI of the set on the fixed plane based on the ground-truth data. The trained classifier is then output for object detection.
    Type: Grant
    Filed: December 17, 2018
    Date of Patent: October 27, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ishani Chakraborty, Gang Hua
  • Patent number: 10813734
    Abstract: Feedback data useful in prosthodontic procedures associated with the intra oral cavity is provided. First, a 3D numerical model of the target zone in the intra oral cavity is provided, and this is manipulated so as to extract particular data that may be useful in a particular procedure, for example data relating to the finish line or to the shape and size of a preparation. The relationship between this data and the procedure is then determined, for example the clearance between the preparation and the intended crown. Feedback data, indicative of this relationship, is then generated, for example whether the preparation geometry is adequate for the particular type of prosthesis.
    Type: Grant
    Filed: April 26, 2019
    Date of Patent: October 27, 2020
    Assignee: ALIGN TECHNOLOGY, INC.
    Inventors: Avi Kopelman, Eldad Taub
  • Patent number: 10818078
    Abstract: A virtual reality (VR)-based apparatus that includes a depth sensor that captures a plurality of depth values of a first human subject from a single viewpoint and a modeling circuitry that detects a set of visible vertices and a set of occluded vertices from a plurality of vertices of the first 3D human body model rendered on a display screen. The modeling circuitry determines a set of occluded joints and a set of visible joints from a plurality of joints of a skeleton of the first 3D human body model in the rendered state. The modeling circuitry updates a rotation angle and a rotation axis of the determined set of occluded joints to a defined default value in the skeleton and thereafter, re-renders the first 3D human body model as a reconstructed 3D human model of the first human subject on the display screen.
    Type: Grant
    Filed: September 25, 2018
    Date of Patent: October 27, 2020
    Assignee: SONY CORPORATION
    Inventors: Jie Ni, Mohammad Gharavi-Alkhansari
  • Patent number: 10818264
    Abstract: A content visualization system generates visual content for a visualization device based on visual content of an event. The content visualization system collects visual content and source perspective data from visual content sources. The visualization device requests visual content from the content visualization system by providing device perspective data to the content visualization system. The content visualization system generates visual content for the visualization device based on the visual content from the visual content sources, the source perspective data, and the device perspective data. The content visualization system can determine visual content that is relevant to the device perspective by identifying source perspectives that overlap with the device perspective. The content visualization system generates visual content for the visualization device based on the identified visual content.
    Type: Grant
    Filed: December 26, 2018
    Date of Patent: October 27, 2020
    Assignee: GoPro, Inc.
    Inventors: Scott Patrick Campbell, Gary Fong
  • Patent number: 10810798
    Abstract: Systems and methods for generating a 360 degree mixed virtual reality environment that provides a 360 degree view of an environment in accordance with embodiments of the invention are described. In a number of embodiments, the 360 degree mixed virtual reality environment is obtained by (1) combining one or more real world videos that capture images of an environment with (2) a virtual world environment that includes various synthetic objects that may be placed within the real world clips. Furthermore, the virtual objects embedded within the 360 degree mixed reality environment interact with the real world objects depicted in the real world environment to provide a realistic mixed reality experience.
    Type: Grant
    Filed: July 6, 2017
    Date of Patent: October 20, 2020
    Assignee: Nautilus, Inc.
    Inventors: Tonny Espeset, Andrei Richard Frank, Marc Scott Hardy
  • Patent number: 10789761
    Abstract: A method and a device for processing spatial data are disclosed. The method includes: transforming the received primitive coordinates of the primitive spatial data into view coordinates of the view window according to a preset view control parameter (S11); analyzing or processing the view coordinates in the view window according to a preset processing method corresponding to the processing type (S12); and analyzing or processing the primitive spatial data according to the result of analyzing or processing the view coordinates (S13). By transforming the primitive coordinates of primitive spatial data into view coordinates of the view window, the disclosed method addresses the problem that the volume of spatial-data processing results is too large, and ensures that all the processed spatial data correctly display the spatial relationships among the spatial data.
    Type: Grant
    Filed: December 31, 2010
    Date of Patent: September 29, 2020
    Assignee: SUZHOU SUPERENGINE GRAPHICS SOFTWARE CO., LTD.
    Inventor: Futian Dong
  • Patent number: 10776986
    Abstract: An apparatus and method are described for utilizing volume proxies. For example, one embodiment of an apparatus comprises: a volume subdivision module to subdivide a volume into a plurality of partitions, the apparatus to process a first of the partitions and to distribute data associated with each of the other partitions to each of a plurality of nodes; a proxy generation module to compute a first proxy for the first partition, the first proxy to be transmitted to the plurality of nodes; and a ray tracing engine to perform one or more traversal/intersection operations for a current ray or group of rays using the first proxy; if the ray or group of rays interacts with the first proxy, the ray tracing engine is to send the ray(s) to a second node associated with the first proxy or to retrieve data related to the interaction from the second node.
    Type: Grant
    Filed: June 3, 2019
    Date of Patent: September 15, 2020
    Assignee: Intel Corporation
    Inventor: Ingo Wald
  • Patent number: 10776671
    Abstract: Techniques are disclosed for blur classification. The techniques utilize an image content feature map, a blur map, and an attention map, thereby combining low-level blur estimation with a high-level understanding of important image content in order to perform blur classification. The techniques allow for programmatically determining if blur exists in an image, and determining what type of blur it is (e.g., high blur, low blur, middle or neutral blur, or no blur). According to one example embodiment, if blur is detected, an estimate of spatially-varying blur amounts is performed and blur desirability is categorized in terms of image quality.
    Type: Grant
    Filed: May 25, 2018
    Date of Patent: September 15, 2020
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Xiaohui Shen, Shanghang Zhang, Radomir Mech
  • Patent number: 10767986
    Abstract: A waveguide apparatus includes a planar waveguide and at least one optical diffraction element (DOE) that provides a plurality of optical paths between an exterior and interior of the planar waveguide. A phase profile of the DOE may combine a linear diffraction grating with a circular lens, to shape a wave front and produce beams with desired focus. Waveguide apparatuses may be assembled to create multiple focal planes. The DOE may have a low diffraction efficiency, and planar waveguides may be transparent when viewed normally, allowing passage of light from an ambient environment (e.g., real world) useful in AR systems. Light may be returned for temporally sequential passes through the planar waveguide. The DOE(s) may be fixed or may have dynamically adjustable characteristics. An optical coupler system may couple images to the waveguide apparatus from a projector, for instance a biaxially scanning cantilevered optical fiber tip.
    Type: Grant
    Filed: May 8, 2015
    Date of Patent: September 8, 2020
    Assignee: Magic Leap, Inc.
    Inventors: Rony Abovitz, Brian T. Schowengerdt, Matthew D. Watson
  • Patent number: 10740952
    Abstract: A hardware-based traversal coprocessor provides acceleration of tree traversal operations searching for intersections between primitives represented in a tree data structure and a ray. The primitives may include opaque and alpha triangles used in generating a virtual scene. The hardware-based traversal coprocessor is configured to determine primitives intersected by the ray, and return intersection information to a streaming multiprocessor for further processing. The hardware-based traversal coprocessor is configured to provide a deterministic result of intersected triangles regardless of the order that the memory subsystem returns triangle range blocks for processing, while opportunistically eliminating alpha intersections that lie further along the length of the ray than closer opaque intersections.
    Type: Grant
    Filed: August 10, 2018
    Date of Patent: August 11, 2020
    Assignee: NVIDIA Corporation
    Inventors: Samuli Laine, Tero Karras, Greg Muthler, William Parsons Newhall, Jr., Ronald Charles Babich, Ignacio Llamas, John Burgess
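The traversal coprocessor above returns a deterministic result regardless of the order in which triangle ranges arrive, while discarding alpha intersections that lie beyond the closest opaque hit. A minimal sketch of that resolution rule over a list of (t, kind) intersections (the list-based representation is illustrative, not the hardware's):

```python
def resolve_hits(hits):
    """Given (t, kind) intersections along a ray, where kind is
    "opaque" or "alpha", keep the nearest opaque hit and only those
    alpha hits closer than it; sorting makes the result independent of
    the order hits were reported."""
    opaque_t = min((t for t, kind in hits if kind == "opaque"),
                   default=float("inf"))
    survivors = [(t, kind) for t, kind in hits
                 if kind == "alpha" and t < opaque_t]
    if opaque_t != float("inf"):
        survivors.append((opaque_t, "opaque"))
    return sorted(survivors)
```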
  • Patent number: 10733794
    Abstract: One embodiment of the present invention includes a parallel processing unit (PPU) that performs pixel shading at variable granularities. For effects that vary at a low frequency across a pixel block, a coarse shading unit performs the associated shading operations on a subset of the pixels in the pixel block. By contrast, for effects that vary at a high frequency across the pixel block, fine shading units perform the associated shading operations on each pixel in the pixel block. Because the PPU implements coarse shading units and fine shading units, the PPU may tune the shading rate per-effect based on the frequency of variation across each pixel group. By contrast, conventional PPUs typically compute all effects per-pixel, performing redundant shading operations for low frequency effects. Consequently, to produce similar image quality, the PPU consumes less power and increases the rendering frame rate compared to a conventional PPU.
    Type: Grant
    Filed: December 13, 2013
    Date of Patent: August 4, 2020
    Assignee: NVIDIA Corporation
    Inventors: Yong He, Eric B. Lum, Eric Enderton, Henry Packard Moreton, Kayvon Fatahalian
  • Patent number: 10726626
    Abstract: A method includes: triggering rendering of an augmented reality (AR) environment having a viewer configured for generating views of the AR environment; triggering rendering, in the AR environment, of an object with an outside surface visualized using a mesh having a direction oriented away from the object; performing a first determination that the viewer is inside the object as a result of relative movement between the viewer and the object; and in response to the first determination, increasing a transparency of the outside surface, reversing the direction of at least part of the mesh, and triggering rendering of an inside surface of the object using the part of the mesh having the reversed direction, wherein the inside surface is illuminated by light from outside the object due to the increased transparency.
    Type: Grant
    Filed: November 22, 2017
    Date of Patent: July 28, 2020
    Assignee: GOOGLE LLC
    Inventors: Xavier Benavides Palos, Brett Barros
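Reversing a mesh's direction, as the entry above does when the viewer moves inside the object, amounts to flipping each triangle's winding order so its facing direction (and hence its normal) points the opposite way. A minimal sketch on index-triple triangles:

```python
def reverse_mesh_direction(triangles):
    """Flip the winding order of each triangle (a, b, c) -> (a, c, b),
    reversing the direction the triangle faces so the inside surface
    can be rendered."""
    return [(a, c, b) for a, b, c in triangles]
```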
  • Patent number: 10725152
    Abstract: An illustrative example embodiment of a detector device, which may be useful on an automated vehicle, includes a multiple-dimensional array of detectors including a plurality of first detectors aligned with each other in a first direction and a plurality of second detectors aligned with each other in the first direction. The second detectors are offset relative to the first detectors in a second direction that is different than the first direction. A processor determines an interpolation coefficient related to the offset between the first and second detectors and determines an angle of detection of the device based on the interpolation coefficient.
    Type: Grant
    Filed: September 18, 2017
    Date of Patent: July 28, 2020
    Assignee: APTIV TECHNOLOGIES LIMITED
    Inventors: Carlos Alcalde, Zhengzheng Li
  • Patent number: 10720011
    Abstract: A system has a database that stores a plurality of pre-recorded video clips of real-world sports game events. Further, the system has a game engine processor that determines that a wager time period for a corresponding virtual sports game has completed. The game engine processor also determines an outcome of the virtual sports game subsequent to the determination that the wager time period has completed. Moreover, the game engine processor selects a pre-recorded clip from the plurality of pre-recorded clips that corresponds to the outcome. Finally, the game engine processor links the pre-recorded video clips as a base video layer to one or more overlay video layers. The wager time period is a period of time in which one or more user inputs are permitted to be received at a computing device that renders the virtual sports game.
    Type: Grant
    Filed: May 21, 2019
    Date of Patent: July 21, 2020
    Assignee: Highlight Games Limited
    Inventors: Timothy Patrick Jonathan Green, Stewart James Whittle, Angus Wood, Chris Latter
  • Patent number: 10721491
    Abstract: The invention notably relates to a computer-implemented method for designing a 3D assembly of modeled objects. The method comprises rendering on a second computer a 3D assembly of modeled objects by merging a second 3D modeled object with at least one raster image of a first 3D modeled object, the at least one raster image having been streamed from a first computer to the second computer; sending from the second computer to the first computer first data related to the second 3D modeled object for contact computation between the first and second 3D modeled objects; and computing on the first computer a contact between the first and second 3D modeled objects.
    Type: Grant
    Filed: December 22, 2016
    Date of Patent: July 21, 2020
    Assignee: DASSAULT SYSTEMES
    Inventors: Jean Julien Tuffreau, Malika Boulkenafed
  • Patent number: 10719971
    Abstract: Examples described herein relate to rendering a full resolution image using a reduced resolution depth pre-pass to pre-populate depth data for culling. A computing device may include a memory, a graphics processing unit (GPU) including a hierarchical depth buffer or full resolution depth data, and at least one processor. The computer device may render an occlusion geometry at a reduced resolution. The computer device may be configured to sample a depth value of pixels of the occlusion geometry. The computer device may pre-populate the GPU depth data based on the sampled depth values. The computer device may cull at least one tile or pixel from a full resolution rendering process in response to a depth of the at least one tile or pixel being further than a corresponding depth value in the GPU depth data. The computer device may perform the full resolution rendering process on remaining tiles or pixels.
    Type: Grant
    Filed: May 25, 2018
    Date of Patent: July 21, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Martin Jon Irwin Fuller
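The culling decision described above can be illustrated with a toy per-tile depth test: a coarse depth buffer is pre-populated from a reduced-resolution occluder pass, and full-resolution tiles whose nearest geometry lies behind the stored value are culled. The tile layout, names, and depth convention (larger = farther from the camera) are assumptions for illustration, not the patent's implementation.

```python
def cull_mask(coarse_depth, tile_nearest_depth):
    # A tile survives only if its nearest geometry is not farther
    # than the pre-populated coarse depth for that tile
    # (larger value = farther from the camera).
    return [near <= coarse
            for coarse, near in zip(coarse_depth, tile_nearest_depth)]

coarse = [0.5, 0.9, 0.2, 1.0]   # coarse depth per tile, from the low-res pre-pass
tiles  = [0.4, 0.95, 0.3, 0.6]  # nearest depth of full-res geometry per tile
keep = cull_mask(coarse, tiles)
```

Only the surviving tiles would then go through the full-resolution rendering process.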
  • Patent number: 10701331
    Abstract: Methods and apparatus for supporting the capture of images of surfaces of an environment visible from a default viewing position and capturing images of surfaces not visible from the default viewing position, e.g., occluded surfaces, are described. Occluded and non-occluded image portions are packed into one or more frames and communicated to a playback device for use as textures which can be applied to a model of the environment where the images were captured. An environmental model includes a model of surfaces which are occluded from view from a default viewing position but which maybe viewed is the user shifts the user's viewing location. Occluded image content can be incorporated directly into a frame that also includes non-occluded image data or sent in frames of a separate, e.g., auxiliary content stream that is multiplexed with the main content stream which communicates image data corresponding to non-occluded environmental portions.
    Type: Grant
    Filed: July 19, 2018
    Date of Patent: June 30, 2020
    Assignee: NEXTVR INC.
    Inventors: David Cole, Alan McKay Moss
  • Patent number: 10699465
    Abstract: Cluster of acceleration engines to accelerate intersections.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: June 30, 2020
    Assignee: Intel Corporation
    Inventors: Prasoonkumar Surti, Carsten Benthin, Karthik Vaidyanathan, Philip Laws, Scott Janus, Sven Woop
  • Patent number: 10692279
    Abstract: Various embodiments are directed to making partial selections of multidimensional information while maintaining a multidimensional structure. A server computing device may receive multidimensional data describing a structure of a physical object from one or more sources. The server computing device may then determine a data subset representative of the multidimensional data describing the structure of the physical object. The server computing device may then store the data subset on a storage device. Finally, the server computing device may send the data subset to a client computing device for displaying the structure of the physical object. The data subset may enable the client computing device to utilize fewer hardware computing processing, memory, and bandwidth resources to display the structure of the physical object than are associated with displaying the physical object using the multidimensional data. The client computing device may be incapable of displaying the multidimensional data.
    Type: Grant
    Filed: July 23, 2018
    Date of Patent: June 23, 2020
    Assignee: Quantum Spatial, Inc.
    Inventor: Seth Hill
  • Patent number: 10665010
    Abstract: When rendering a scene for output that includes a light source that could cast shadows in a graphics processing system, the world-space volume for the scene to be rendered is first partitioned into a plurality of sub-volumes, and then a set of geometry to be processed for the scene that could cast a shadow from a light source to be considered for the scene in the sub-volume is determined for any sub-volume that is lit by a light source. The determined sets of geometry for the sub-volumes are then used to determine light source visibility parameters for output samples, such as vertex positions and/or screen space sampling positions, for the scene. The determined light source visibility parameter for an output sample is then used to modulate the effect of the light source at the output sample when rendering an output version of the output sample.
    Type: Grant
    Filed: June 16, 2017
    Date of Patent: May 26, 2020
    Assignee: Arm Limited
    Inventor: Graham Paul Hazel
  • Patent number: 10665015
    Abstract: Objects can be rendered in three-dimensions and viewed and manipulated in an augmented reality environment. Background images are subtracted from object images from multiple viewpoints to provide baseline representations of the object. Morphological operations can be used to remove errors caused by misalignment of an object image and background image. Using two different contrast thresholds, pixels can be identified that can be said at two different confidence levels to be object pixels. An edge detection algorithm can be used to determine object contours. Low confidence pixels can be associated with the object if they can be connected to high confidence pixels without crossing an object contour. Segmentation masks can be created from high confidence pixels and properly associated low confidence pixels. Segmentation masks can be used to create a three-dimensional representation of the object.
    Type: Grant
    Filed: September 17, 2019
    Date of Patent: May 26, 2020
    Assignee: A9.COM, INC.
    Inventors: Arnab Sanat Kumar Dhua, Himanshu Arora, Radek Grzeszczuk
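The two-confidence-level pixel classification in this abstract resembles classic hysteresis thresholding: high-threshold pixels are accepted outright, and low-threshold pixels are kept only when they connect to accepted ones. A minimal 1-D sketch, with connectivity simplified to adjacent pixels and all names hypothetical:

```python
def hysteresis_pixels(values, high, low):
    """Two-threshold classification: pixels at or above `high` are
    object pixels with high confidence; pixels between `low` and
    `high` are kept only if they connect (here: 1-D adjacency) to
    an already-kept pixel."""
    keep = [v >= high for v in values]
    weak = [low <= v < high for v in values]
    changed = True
    while changed:                      # propagate until stable
        changed = False
        for i, w in enumerate(weak):
            if w and not keep[i] and (
                    (i > 0 and keep[i - 1]) or
                    (i + 1 < len(values) and keep[i + 1])):
                keep[i] = True
                changed = True
    return keep
```

In the patented approach the connectivity test is additionally constrained by detected object contours; that refinement is omitted here for brevity.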
  • Patent number: 10656305
    Abstract: A method and apparatus for simulating spectral representation of a region of interest is disclosed. In one embodiment, the method comprises determining a physical characteristic of a geospatial portion of the region of interest, associating the determined physical characteristic with a material of a spectral library, the spectral library having at least one spectral definition of a material, associating the spectral definition of the material with the geospatial portion of the region of interest, wherein the material is at least partially representative of the geospatial portion of the region of interest, and generating the simulated spectral representation of the region of interest at least in part from the associated spectral definition of the at least one material.
    Type: Grant
    Filed: July 1, 2016
    Date of Patent: May 19, 2020
    Assignee: THE BOEING COMPANY
    Inventors: Robert J. Klein, Pamela L. Blake
  • Patent number: 10650577
    Abstract: A tile-based graphics processing pipeline includes a back-facing determination and culling unit that is operable to cull back-facing triangles before the tiling stage. The back-facing determination and culling unit includes a triangle size estimator that estimates the size of a triangle being considered. If the size of the triangle is less than a selected size, then the area of the triangle is calculated using fixed point arithmetic and the result of that area calculation is used by a back-face culling unit to determine whether to cull the triangle or not. On the other hand, if the size estimator determines that the triangle is greater than the selected size, then the triangle bypasses the fixed point area calculation and back-face culling unit and is instead passed directly to the tiler.
    Type: Grant
    Filed: August 25, 2016
    Date of Patent: May 12, 2020
    Assignee: ARM LTD
    Inventors: Andreas Due Engh-Halstvedt, Frank Langtind
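The back-face test at the heart of this pipeline stage is conventionally done with a signed screen-space area, whose sign encodes the triangle's winding order. A minimal floating-point sketch follows; the winding convention chosen (counter-clockwise = front-facing) is an assumption, and the patent's fixed-point variant for small triangles is not reproduced here.

```python
def signed_area_2x(p0, p1, p2):
    # Twice the signed screen-space area of the triangle;
    # positive means counter-clockwise winding in this convention.
    return ((p1[0] - p0[0]) * (p2[1] - p0[1])
            - (p2[0] - p0[0]) * (p1[1] - p0[1]))

def is_back_facing(p0, p1, p2):
    # Clockwise (or degenerate) triangles are treated as back-facing
    # and can be culled before tiling.
    return signed_area_2x(p0, p1, p2) <= 0
```

Culling such triangles before the tiler avoids binning work for geometry that could never contribute to the image.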
  • Patent number: 10649615
    Abstract: Aspects of the technology described herein provide a control interface for manipulating a 3-D graphical object within a virtual drawing space. The control can be activated by selecting a graphical object or objects. When multiple objects are selected, the manipulations can occur as a group. In one aspect, the manipulations occur around the centroid of the 3-D graphical object, or groups of objects. The manipulations can include rotation, size adjustment, and positional adjustment within the virtual drawing space. The control interface comprises controls that rotate the object around an x-axis, a y-axis, or a z-axis.
    Type: Grant
    Filed: February 9, 2017
    Date of Patent: May 12, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Barry John Ptak, David Mondelore, Alexander Charles Cullum
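Rotation about a centroid, as the abstract describes, amounts to translating points so the centroid sits at the origin, rotating, and translating back. A 2-D sketch of the idea (the patent applies the same principle per axis in 3-D); all names are illustrative:

```python
import math

def rotate_about_centroid(points, angle_rad):
    """Rotate 2-D points around their centroid (a z-axis rotation);
    translate to the centroid, rotate, translate back."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return [(cx + c * (x - cx) - s * (y - cy),
             cy + s * (x - cx) + c * (y - cy))
            for x, y in points]
```

Rotating a square by pi about its centroid maps each corner onto the diagonally opposite one.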
  • Patent number: 10652514
    Abstract: A user device can receive and display 360 panoramic content in a 360 depth format. 360 depth content can comprise 360 panoramic image data and corresponding depth information. To display 360 depth content, the user device can generate a 3D environment based on the 360 depth content and the current user viewpoint. A content display module on the user device can render 360 depth content using a standard 3D rendering pipeline modified to render 360 depth content. The content display module can use a vertex shader or fragment shader of the 3D rendering pipeline to interpret the depth information of the 360 depth content into the 3D environment as it is rendered.
    Type: Grant
    Filed: July 19, 2018
    Date of Patent: May 12, 2020
    Assignee: Facebook, Inc.
    Inventors: Forrest Samuel Briggs, Michael John Toksvig
  • Patent number: 10643374
    Abstract: The systems, apparatuses and methods may provide a way to adaptively process and aggressively cull geometry data. Systems, apparatuses and methods may provide for processing, by a positional only shading pipeline (POSH), geometry data including surface triangles for a digital representation of a scene. More particularly, systems, apparatuses and methods may provide a way to identify surface triangles in one or more exclusion zones and non-exclusion zones, and cull surface triangles in one or more exclusion zones.
    Type: Grant
    Filed: April 24, 2017
    Date of Patent: May 5, 2020
    Assignee: Intel Corporation
    Inventors: Prasoonkumar Surti, Karthik Vaidyanathan, Atsuo Kuwahara, Hugues Labbe, Sameer Kp, Jonathan Kennedy, Abhishek R. Appu, Jeffery S. Boles, Balaji Vembu, Michael Apodaca, Slawomir Grajewski, Gabor Liktor, David M. Cimini, Andrew T. Lauritzen, Travis T. Schluessler, Murali Ramadoss, Abhishek Venkatesh, Joydeep Ray, Kai Xiao, Ankur N. Shah, Altug Koker
  • Patent number: 10643375
    Abstract: Techniques and systems are described herein for determining dynamic lighting for objects in images. Using such techniques and systems, a lighting condition of one or more captured images can be adjusted. Techniques and systems are also described herein for determining depth values for one or more objects in an image. In some cases, the depth values (and the lighting values) can be determined using only a single camera and a single image, in which case one or more depth sensors are not needed to produce the depth values.
    Type: Grant
    Filed: August 21, 2018
    Date of Patent: May 5, 2020
    Assignee: QUALCOMM Incorporated
    Inventors: Ramin Rezaiifar, Antwan Gaggi, Donna Roy
  • Patent number: 10643395
    Abstract: A method for spatially authoring data in a data processing system may include constructing one or more input spatial geometry regions, iterating through each input spatial geometry region to create current cumulative result data, and rejecting geometry groups from the current cumulative result data.
    Type: Grant
    Filed: September 24, 2019
    Date of Patent: May 5, 2020
    Assignee: Authanaviz, LLC
    Inventor: Barry L. Byrd, Jr.
  • Patent number: 10628993
    Abstract: A system comprises an obtainment unit that obtains virtual viewpoint information relating to a position and direction of a virtual viewpoint; a designation unit that designates a focus object from a plurality of objects detected based on at least one of the plurality of images captured by the plurality of cameras; a decision unit that decides an object to make transparent from among the plurality of objects based on a position and direction of a virtual viewpoint that the virtual viewpoint information obtained by the obtainment unit indicates, and a position of the focus object designated by the designation unit; and a generation unit that generates, based on the plurality of captured images obtained by the plurality of cameras, a virtual viewpoint image in which the object decided by the decision unit is made to be transparent.
    Type: Grant
    Filed: September 22, 2017
    Date of Patent: April 21, 2020
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Kazuhiro Yoshimura, Kaori Taya, Shugo Higuchi, Tatsuro Koizumi
  • Patent number: 10621775
    Abstract: 3-D rendering systems include a rasterization section that can fetch untransformed geometry, transform geometry and cache data for transformed geometry in a memory. As an example, the rasterization section can transform the geometry into screen space. The geometry can include one or more of static geometry and dynamic geometry. The rasterization section can query the cache for presence of data pertaining to a specific element or elements of geometry, and use that data from the cache, if present, and otherwise perform the transformation again, for actions such as hidden surface removal. The rasterization section can receive, from a geometry processing section, tiled geometry lists and perform the hidden surface removal for pixels within respective tiles to which those lists pertain.
    Type: Grant
    Filed: March 19, 2018
    Date of Patent: April 14, 2020
    Assignee: Imagination Technologies Limited
    Inventor: John W. Howson
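The query-the-cache-then-transform pattern in this abstract can be sketched as a simple memoization of transformed geometry: look up by a geometry identifier, and perform the (potentially expensive) transform only on a miss. All names here (`screen_space`, `to_screen`, `transform_cache`) are hypothetical stand-ins, and the stand-in transform is not a real projection.

```python
transform_cache = {}

def screen_space(geom_id, vertices, transform):
    """Return transformed geometry for `geom_id`, querying the cache
    first and performing the transform only on a cache miss."""
    if geom_id not in transform_cache:
        transform_cache[geom_id] = [transform(v) for v in vertices]
    return transform_cache[geom_id]

calls = []
def to_screen(v):
    calls.append(v)                   # count actual transform work
    return (v[0] * 2.0, v[1] * 2.0)   # stand-in for a real projection

first  = screen_space("tri0", [(1, 2), (3, 4)], to_screen)
second = screen_space("tri0", [(1, 2), (3, 4)], to_screen)  # cache hit
```

The second call returns the cached result, so each vertex is transformed only once, which is the saving the rasterization section exploits for operations such as hidden surface removal.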
  • Patent number: 10592066
    Abstract: In one embodiment, a method for designing an augmented-reality effect may include associating, by a computing device, a first visual object with a first rendering order specified by a user. A second visual object may be associated with a second rendering order specified by the user. The first and second visual objects may be defined in a three-dimensional space. Information associated with the first visual object, the first rendering order, the second visual object, and the second rendering order may be stored in one or more files. The one or more files may be configured to cause the first visual object and the second visual object to be rendered sequentially in an order determined based on the first rendering order and the second rendering order. The first visual object and the second visual object may be rendered to generate a scene in the three-dimensional space.
    Type: Grant
    Filed: March 15, 2017
    Date of Patent: March 17, 2020
    Assignee: Facebook, Inc.
    Inventors: Guilherme Schneider, Stef Marc Smet, Siarhei Hanchar
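The sequential rendering this abstract describes reduces to sorting objects by their user-specified rendering order and drawing them in that sequence, so order (rather than scene depth) decides what appears in front. A hypothetical sketch with illustrative object names:

```python
# Each visual object carries a user-specified rendering order;
# drawing sorts on that key, so later-drawn objects composite
# on top of earlier ones.
objects = [
    {"name": "halo",  "order": 2},
    {"name": "face",  "order": 1},
    {"name": "crown", "order": 3},
]
draw_sequence = [o["name"] for o in sorted(objects, key=lambda o: o["order"])]
```

Here "face" is drawn first and "crown" last, so "crown" would occlude the others wherever they overlap.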
  • Patent number: 10595004
    Abstract: The present disclosure relates to an electronic device for capturing a plurality of images using a plurality of cameras, generating a left-eye-view spherical image and a right-eye-view spherical image by classifying each of the plurality of images as a left-eye-view image or a right-eye-view image, obtaining depth information using the generated left-eye-view spherical image and right-eye-view spherical image, and generating a 360 degree three-dimensional image, wherein the three-dimensional effect thereof is controlled using the obtained depth information, and an image processing method therefor.
    Type: Grant
    Filed: July 28, 2016
    Date of Patent: March 17, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Do-wan Kim, Sung-jin Kim