Patents Examined by Robert Bader
  • Patent number: 10771761
    Abstract: An information processing apparatus includes one or more memories storing instructions and one or more processors executing the instructions to determine, among a plurality of captured images obtained by a plurality of image capturing apparatuses, a display target image related to a virtual viewpoint image, based on a position of a virtual viewpoint and a view direction from the virtual viewpoint. The virtual viewpoint image is generated based on the display target image, the position of the virtual viewpoint, and the view direction from the virtual viewpoint. The processors further cause a displaying unit to display the determined display target image in a displaying mode according to the degree to which the determined display target image contributes to generation of the virtual viewpoint image.
    Type: Grant
    Filed: December 10, 2018
    Date of Patent: September 8, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventor: Shogo Mizuno
  • Patent number: 10769842
    Abstract: Ray tracing systems process rays through a 3D scene to determine intersections between rays and geometry in the scene, for rendering an image of the scene. Ray direction data for a ray can be compressed, e.g. into an octahedral vector format. The compressed ray direction data for a ray may be represented by two parameters (u,v) which indicate a point on the surface of an octahedron. In order to perform intersection testing on the ray, the ray direction data for the ray is unpacked to determine x, y and z components of a vector to a point on the surface of the octahedron. The unpacked ray direction vector is an unnormalised ray direction vector. Rather than normalising the ray direction vector, the intersection testing is performed on the unnormalised ray direction vector. This avoids the processing steps involved in normalising the ray direction vector.
    Type: Grant
    Filed: April 4, 2019
    Date of Patent: September 8, 2020
    Assignee: Imagination Technologies Limited
    Inventors: Luke T. Peterson, Simon Fenney
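As a concrete sketch of the octahedral format this abstract describes, the following packs a ray direction into two parameters (u, v) on the octahedron and unpacks it back to an unnormalised vector. The function names and NumPy usage are illustrative assumptions, not Imagination's implementation:

```python
import numpy as np

def _sgn(t):
    # sign that treats 0 as positive, so the fold is invertible
    return 1.0 if t >= 0 else -1.0

def oct_encode(d):
    """Pack a ray direction into two parameters (u, v) on the octahedron."""
    d = np.asarray(d, dtype=float)
    d = d / np.abs(d).sum()              # project onto the L1 unit octahedron
    u, v = d[0], d[1]
    if d[2] < 0:                         # fold the lower half onto the upper
        u, v = (1 - abs(v)) * _sgn(d[0]), (1 - abs(u)) * _sgn(d[1])
    return u, v

def oct_decode_unnormalised(u, v):
    """Unpack (u, v) to an unnormalised direction vector; per the abstract,
    intersection testing can use this vector without normalising it."""
    z = 1.0 - abs(u) - abs(v)
    if z < 0:                            # undo the lower-hemisphere fold
        u, v = (1 - abs(v)) * _sgn(u), (1 - abs(u)) * _sgn(v)
    return np.array([u, v, z])
```

The decoded vector is parallel to the original direction but not unit length; intersection tests that only need a ray parameter t can consume it directly, which is the normalisation saving the abstract claims.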
  • Patent number: 10733791
    Abstract: The invention discloses a real-time rendering method based on energy consumption-error precomputation, comprising: determining the spatial level structure of the 3D scene to be rendered through adaptive subdivision of the camera positions and look space browsable to the user; during the adaptive subdivision, for each position subspace obtained at the completion of each subdivision, obtaining the error and energy consumption of the camera for rendering the 3D scene using a plurality of sets of preset rendering parameters in each look subspace at each vertex of the bounding volume that bounds the position subspace, and building a Pareto curve for the corresponding vertex and look subspace based on the error and energy consumption; and, based on the current camera viewpoint information, searching the spatial level structure for the target Pareto curve to determine a set of rendering parameters satisfying the precomputation condition as optimum
    Type: Grant
    Filed: April 18, 2017
    Date of Patent: August 4, 2020
    Assignee: ZHEJIANG UNIVERSITY
    Inventors: Rui Wang, Hujun Bao, Tianlei Hu, Bowen Yu
  • Patent number: 10701346
    Abstract: Systems and methods for replacing a 2D image with an equivalent 3D image within a web page. The 2D image displayed within a web page may be identified and a 3D image with substantially equivalent content may also be identified. The 3D image may be integrated into the web page as a replacement to the 2D image. Further, at least one user input manipulating the 3D image within the web page may be received. The at least one user input may include movement of a view point (or point of view) of a user relative to a display displaying the web page and/or detection of a beam projected from an end of a user input device intersecting with the 3D image.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: June 30, 2020
    Assignee: zSpace, Inc.
    Inventors: David A. Chavez, Jonathan J. Hosenpud, Clifford S. Champion, Alexandre R. Lelievre, Arthur L. Berman, Kevin S. Yamada
  • Patent number: 10701347
    Abstract: Systems and methods for replacing a 2D image with an equivalent 3D image within a web page. Content of a 2D image displayed within a web page may be identified and 3D images may be identified as possible replacements of the 2D image. The 3D images may be ranked based on sets of ranking criteria. A 3D image with a highest-ranking value may be selected based on a ranking of the 3D images. The selected 3D image may be integrated into the web page, thereby replacing the 2D image with the selected 3D image. Further, a user input manipulating the 3D image within the web page may be received. The user input may include movement of a view point of a user relative to a display displaying the web page and/or detection of a beam projected from an end of a user input device intersecting with the 3D image.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: June 30, 2020
    Assignee: zSpace, Inc.
    Inventors: David A. Chavez, Jonathan J. Hosenpud, Clifford S. Champion, Alexandre R. Lelievre, Arthur L. Berman, Kevin S. Yamada
  • Patent number: 10679399
    Abstract: An animation generation instruction identifying key frames to use for generating an animation is received. A plurality of tweens corresponding to the animation are obtained, where each tween includes tween objects between a start key frame and an end key frame. One or more timelines are generated when a quantity of tweens is greater than or equal to two, where each timeline corresponds to one or more tweens, and where a quantity of timelines is less than the quantity of tweens. The animation is generated using the plurality of tweens based on the one or more timelines.
    Type: Grant
    Filed: December 14, 2018
    Date of Patent: June 9, 2020
    Assignee: Alibaba Group Holding Limited
    Inventor: Tong Huang
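The timeline consolidation the abstract describes can be sketched by grouping tweens onto shared timelines. The grouping key (identical start and end key frames) is my assumption; the patent only requires that the timeline count end up below the tween count when tweens can be shared:

```python
from collections import defaultdict

def build_timelines(tweens):
    """Group tweens that share a (start, end) key-frame pair onto one
    timeline, so simultaneous tweens are driven by a single timeline."""
    groups = defaultdict(list)
    for tween in tweens:
        groups[(tween["start"], tween["end"])].append(tween)
    return [{"start": s, "end": e, "tweens": g}
            for (s, e), g in groups.items()]
```

With this grouping, animating three tweens where two share key frames needs only two timelines, matching the abstract's "quantity of timelines is less than the quantity of tweens".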
  • Patent number: 10672176
    Abstract: An apparatus and method are described for culling commands in a tile-based renderer. For example, one embodiment of an apparatus comprises: a command buffer to store a plurality of commands to be executed by a render pipeline to render a plurality of tiles; visibility analysis circuitry to determine per-tile visibility information for each of the plurality of tiles and to store the visibility information for a first tile in a first storage, the visibility information specifying either that all of the commands associated with rendering the first tile can be skipped or identifying individual commands associated with rendering the first tile that can be skipped; and a render pipeline to read the visibility information from the first storage to determine whether to execute or skip one or more of the commands from the command buffer to render the first tile.
    Type: Grant
    Filed: August 31, 2017
    Date of Patent: June 2, 2020
    Assignee: Intel Corporation
    Inventors: Hema C. Nalluri, Balaji Vembu, Peter L. Doyle, Michael Apodaca, Jeffery S. Boles
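The two visibility cases the abstract names (skip every command for a tile, or skip only listed commands) can be sketched as a per-tile record consulted by the render loop. The data shapes and names here are illustrative, not Intel's hardware format:

```python
from dataclasses import dataclass, field

@dataclass
class TileVisibility:
    skip_all: bool = False
    skip_commands: set = field(default_factory=set)  # indices into the buffer

def render_tile(command_buffer, visibility, tile_id, execute):
    """Execute only the commands that contribute to this tile, per the
    visibility information produced by the analysis stage."""
    vis = visibility.get(tile_id, TileVisibility())
    if vis.skip_all:
        return 0                      # nothing in this tile is visible
    executed = 0
    for i, cmd in enumerate(command_buffer):
        if i in vis.skip_commands:
            continue                  # command contributes nothing here
        execute(cmd, tile_id)
        executed += 1
    return executed
```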
  • Patent number: 10657375
    Abstract: In certain embodiments, augmented-reality-based currency conversion may be facilitated. In some embodiments, a wearable device (or other device of a user) may capture a live video stream of the user's environment. One or more indicators representing at least one of a currency or units of the currency may be determined from the live video stream, where at least one of the indicators corresponds to a feature in the live video stream. Based on the indicators from the live video stream, a predicted equivalent price corresponding to the units of the currency may be generated for a user-selected currency. In some embodiments, the corresponding feature in the live video stream may be continuously tracked, and, based on the continuous tracking, the corresponding feature may be augmented in the live video stream with the predicted equivalent price.
    Type: Grant
    Filed: November 13, 2019
    Date of Patent: May 19, 2020
    Assignee: Capital One Services, LLC
    Inventors: Joshua Edwards, Abdelkader Benkreira, Michael Mossoba
  • Patent number: 10650574
    Abstract: Various embodiments of the present disclosure relate generally to systems and processes for generating stereo pairs for virtual reality. According to particular embodiments, a method comprises obtaining a monocular sequence of images using a single-lens camera during a capture mode. The sequence of images is captured along a camera translation. Each image in the sequence of images contains at least a portion of overlapping subject matter, which includes an object. The method further comprises generating stereo pairs, for one or more points along the camera translation, for virtual reality using the sequence of images. Generating the stereo pairs may include: selecting frames for each stereo pair based on a spatial baseline; interpolating virtual images in between captured images in the sequence of images; correcting selected frames by rotating the images; and rendering the selected frames by assigning each image in the selected frames to left and right eyes.
    Type: Grant
    Filed: January 17, 2017
    Date of Patent: May 12, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Yuheng Ren
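The frame-selection step ("selecting frames for each stereo pair based on a spatial baseline") can be sketched as picking the frame whose camera position best matches a target interocular baseline. The 0.064 m default and the function shape are my assumptions for illustration:

```python
import numpy as np

def select_stereo_pair(positions, left_index, baseline=0.064):
    """Pick the frame whose camera position lies closest to `baseline`
    metres from the chosen left-eye frame along the capture translation."""
    p = np.asarray(positions, dtype=float)
    dists = np.linalg.norm(p - p[left_index], axis=1)
    dists[left_index] = np.inf        # a frame cannot pair with itself
    right_index = int(np.argmin(np.abs(dists - baseline)))
    return left_index, right_index
```

When no captured frame sits near the target baseline, the abstract's interpolation step would synthesize a virtual image between the two nearest frames instead.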
  • Patent number: 10621788
    Abstract: A virtual reality-based apparatus includes a memory, a depth sensor and circuitry. The depth sensor captures a plurality of depth data points of a human subject from a single viewpoint. The memory stores a deformed three-dimensional (3D) human body model. The circuitry calculates first distances from the depth data points to a plurality of triangular faces. The circuitry calculates second distances from the depth data points to a plurality of edges. The circuitry further calculates third distances from the depth data points to a plurality of vertices. The circuitry further determines minimum distances, among the calculated first distances, the calculated second distances, and the calculated third distances, as point-to-surface distance to reconstruct a 3D human body model with high accuracy.
    Type: Grant
    Filed: September 25, 2018
    Date of Patent: April 14, 2020
    Assignee: SONY CORPORATION
    Inventors: Jie Ni, Mohammad Gharavi-Alkhansari
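The face/edge/vertex minimum the abstract describes amounts to the classic point-to-triangle distance: if the point's projection falls inside a face, the face distance applies; otherwise the nearest edge (whose endpoints cover the vertex case) applies. A NumPy sketch, with names of my choosing:

```python
import numpy as np

def point_segment_dist(p, a, b):
    """Distance from point p to segment ab (covers the vertex cases too)."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def point_triangle_dist(p, a, b, c):
    """Distance to the plane if the projection lies inside the triangle,
    otherwise the distance to the nearest edge."""
    n = np.cross(b - a, c - a)
    n = n / np.linalg.norm(n)
    proj = p - np.dot(p - a, n) * n
    # barycentric coordinates of the projected point
    v0, v1, v2 = b - a, c - a, proj - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    u = (d11 * d20 - d01 * d21) / denom
    v = (d00 * d21 - d01 * d20) / denom
    if u >= 0 and v >= 0 and u + v <= 1:
        return abs(np.dot(p - a, n))
    return min(point_segment_dist(p, a, b),
               point_segment_dist(p, b, c),
               point_segment_dist(p, c, a))

def point_to_surface(p, triangles):
    """Minimum distance from a depth point to a triangle mesh."""
    return min(point_triangle_dist(p, *tri) for tri in triangles)
```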
  • Patent number: 10614619
    Abstract: In a graphics processing system, a bounding volume (20) representative of the volume of all or part of a scene to be rendered is defined. Then, when rendering an at least partially transparent object (21) that is within the bounding volume (20) in the scene, a rendering pass for part or all of the object (21) is performed in which the object (21) is rendered as if it were an opaque object.
    Type: Grant
    Filed: February 26, 2016
    Date of Patent: April 7, 2020
    Assignee: Arm Limited
    Inventors: Roberto Lopez Mendez, Sylwester Krzysztof Bala, Samuel Paul Laynton
  • Patent number: 10600239
    Abstract: Matching an illumination of an embedded virtual object (VO) with current environment illumination conditions provides an enhanced immersive experience to a user. To match the VO and environment illuminations, illumination basis functions are determined based on preprocessing image data, captured as a first combination of intensities of direct illumination sources illuminates the environment. Each basis function corresponds to one of the direct illumination sources. During the capture of runtime image data, a second combination of intensities illuminates the environment. An illumination-weighting vector is determined based on the runtime image data. The determination of the weighting vector accounts for indirect illumination sources, such as surface reflections. The weighting vector encodes a superposition of the basis functions that corresponds to the second combination of intensities. The method illuminates the VO based on the weighting vector.
    Type: Grant
    Filed: January 22, 2018
    Date of Patent: March 24, 2020
    Assignee: Adobe Inc.
    Inventors: Jeong Joon Park, Zhili Chen, Xin Sun, Vladimir Kim, Kalyan Krishna Sunkavalli, Duygu Ceylan Aksit
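The weighting-vector step can be sketched as a least-squares solve: the runtime image is modelled as a superposition of the per-source basis images, and the recovered weights drive relighting of the virtual object. Using plain `numpy.linalg.lstsq` with a non-negativity clip is my simplification, not Adobe's solver:

```python
import numpy as np

def illumination_weights(basis_images, runtime_image):
    """Solve runtime ≈ Σ w_i · basis_i in the least-squares sense.
    Each basis image corresponds to one direct illumination source."""
    A = np.stack([b.ravel() for b in basis_images], axis=1)
    y = runtime_image.ravel()
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.clip(w, 0.0, None)      # source intensities cannot be negative

def relight(basis_images, weights):
    """Illuminate the virtual object as the weighted sum of basis renders."""
    return sum(w * b for w, b in zip(weights, basis_images))
```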
  • Patent number: 10600228
    Abstract: Techniques for performing automatic interactive animation by automatically matching objects between multiple artboards, allowing an animator to link multiple artboards in a temporal sequence using time as a trigger and allowing an animator to preview an animation using intuitive drag controls via an input device such as a mouse or touch screen. An automatic animation process is performed by matching objects/nodes between artboards by determining a ranking of similarity between objects based upon a distance metric computed for a set of one or more attributes associated with each object in the artboards. If a sufficient match is found, the matched objects can be treated as a single entity to be animated. In another embodiment, dominant direction of movement with respect to the matched objects is determined, and receipt of a drag event (mouse input or touch screen gesture input) in said dominant direction causes a preview of animation of that entity.
    Type: Grant
    Filed: October 14, 2018
    Date of Patent: March 24, 2020
    Assignee: ADOBE INC.
    Inventors: Anirudh Sasikumar, Alexander Poterek
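The matching step (rank candidate objects by an attribute distance, accept only a sufficiently close best match) can be sketched as follows. The attribute set, the Euclidean metric, and the threshold are my assumptions; the patent only requires some distance over one or more attributes:

```python
import math

# Hypothetical attribute set for illustration.
ATTRS = ("x", "y", "width", "height")

def attr_distance(a, b):
    """Euclidean distance over the chosen object attributes."""
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in ATTRS))

def match_objects(artboard_a, artboard_b, threshold=50.0):
    """For each object in artboard A, rank B's objects by attribute distance
    and keep the best match if it is close enough to animate as one entity."""
    matches = {}
    for name, obj in artboard_a.items():
        ranked = sorted(artboard_b.items(),
                        key=lambda kv: attr_distance(obj, kv[1]))
        best_name, best = ranked[0]
        if attr_distance(obj, best) <= threshold:
            matches[name] = best_name
    return matches
```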
  • Patent number: 10586378
    Abstract: The present disclosure describes systems and processes for image sequence stabilization. According to particular embodiments, a sequence of images is obtained using a camera which captures the sequence of images along a camera translation. Each image contains at least a portion of overlapping subject matter. A plurality of keypoints is identified on a first image of the sequence of images. Each keypoint from the first image is tracked to a second image. Using a predetermined algorithm, a camera rotation value and a focal length value are calculated from two randomly sampled keypoints on the first image and two corresponding keypoints on the second image. An optimal camera rotation and focal length pair corresponding to an optimal transformation for producing an image warp for image sequence stabilization is determined. The image warp for image sequence stabilization is constructed using the optimal camera rotation and focal length pair.
    Type: Grant
    Filed: January 17, 2017
    Date of Patent: March 10, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Yuheng Ren
  • Patent number: 10572966
    Abstract: Systems, apparatuses and methods may provide for technology that optimizes tiled rendering for workloads in a graphics pipeline including tessellation and use of a geometry shader. More particularly, systems, apparatuses and methods may provide a way to generate, by a write out fixed-function stage, one or more bounding volumes based on geometry data, as inputs to one or more stages of the graphics pipeline. The systems, apparatuses and methods may compute multiple bounding volumes in parallel, improve the gamer experience, and enable photorealistic renderings at full speed (e.g., human skin and facial expressions) that render three-dimensional (3D) action more realistically.
    Type: Grant
    Filed: April 1, 2017
    Date of Patent: February 25, 2020
    Assignee: Intel Corporation
    Inventor: Peter L. Doyle
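A minimal sketch of the bounding-volume generation the abstract mentions, using axis-aligned boxes: per-primitive boxes can be computed independently (hence in parallel) and merged. Axis-aligned boxes are my assumption; the patent does not fix the volume type:

```python
import numpy as np

def bounding_volume(vertices):
    """Axis-aligned bounding box over a batch of vertices."""
    v = np.asarray(vertices, dtype=float)
    return v.min(axis=0), v.max(axis=0)

def merge(boxes):
    """Per-primitive boxes computed independently (e.g., in parallel)
    merge into one box for the whole geometry batch."""
    mins, maxs = zip(*boxes)
    return np.min(mins, axis=0), np.max(maxs, axis=0)
```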
  • Patent number: 10553014
    Abstract: An image generating method, includes: establishing a 3D scene model that includes a virtual 3D object, a virtual display screen and at least one sight point set; determining a plurality of intersection points between a plurality of virtual light paths from each sight point to a plurality of virtual object points on the surface of the virtual 3D object and the virtual display screen, all virtual light paths {l_k1, l_k2, …, l_kS} corresponding to the virtual object point Tk intersect at the virtual object point Tk; wherein, 1≤k≤n, 1≤i≤S, S is the total number of established sight points, and the plurality of virtual object points are all located within the viewing angle range of the virtual display screen; forming S frames of rendered images, including: determining color parameters of a plurality of intersection points on the virtual display screen to obtain the i-th rendered image.
    Type: Grant
    Filed: July 14, 2017
    Date of Patent: February 4, 2020
    Assignee: BOE TECHNOLOGY GROUP CO., LTD.
    Inventors: Wei Wang, Yafeng Yang, Xiaochuan Chen
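The core geometric step here, finding where the virtual light path from a sight point to an object point crosses the virtual display screen, is a line-plane intersection. Modelling the screen as an infinite plane is my simplification for the sketch:

```python
import numpy as np

def screen_intersection(sight_point, object_point, plane_point, plane_normal):
    """Intersection of the virtual light path (sight point -> object point)
    with the virtual display screen, modelled as a plane."""
    s = np.asarray(sight_point, dtype=float)
    d = np.asarray(object_point, dtype=float) - s
    n = np.asarray(plane_normal, dtype=float)
    denom = np.dot(n, d)
    if abs(denom) < 1e-12:
        return None                   # path parallel to the screen
    t = np.dot(n, np.asarray(plane_point, dtype=float) - s) / denom
    return s + t * d
```

The color parameter written at each intersection point would come from shading the object point it corresponds to, yielding one rendered frame per sight point.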
  • Patent number: 10540773
    Abstract: Various embodiments of the present invention relate generally to systems and processes for interpolating images of an object. According to particular embodiments, a sequence of images is obtained using a camera which captures the sequence of images along a camera translation. Each image contains at least a portion of overlapping subject matter. A plurality of keypoints is identified on a first image of the sequence of images. Each keypoint from the first image is tracked to a second image. Using a predetermined algorithm, a plurality of transformations are computed using two randomly sampled keypoint correspondences, each of which includes a keypoint on the first image and a corresponding keypoint on the second image. An optimal subset of transformations is determined from the plurality of transformations based on predetermined criteria, and transformation parameters corresponding to the optimal subset of transformations are calculated and stored for on-the-fly interpolation.
    Type: Grant
    Filed: April 15, 2019
    Date of Patent: January 21, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Stephen David Miller, Radu Bogdan Rusu, Yuheng Ren
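The "two randomly sampled keypoint correspondences" step can be illustrated with a 2D similarity transform, which two correspondences determine exactly, inside a RANSAC-style loop. Note this swaps in a similarity model for illustration; the patent does not specify the transformation family, and the names below are mine:

```python
import numpy as np

def similarity_from_two(z, w):
    """Closed-form 2D similarity (scale, rotation, translation) from two
    correspondences, via complex arithmetic: w = a·z + b."""
    a = (w[1] - w[0]) / (z[1] - z[0])
    b = w[0] - a * z[0]
    return a, b

def ransac_similarity(pts1, pts2, iters=200, tol=1.0, rng=None):
    """Repeatedly sample two correspondences, fit a transformation, and
    keep the one with the most inliers."""
    if rng is None:
        rng = np.random.default_rng(0)
    z = pts1[:, 0] + 1j * pts1[:, 1]
    w = pts2[:, 0] + 1j * pts2[:, 1]
    best, best_inliers = None, -1
    for _ in range(iters):
        i, j = rng.choice(len(z), size=2, replace=False)
        a, b = similarity_from_two(z[[i, j]], w[[i, j]])
        inliers = int(np.sum(np.abs(a * z + b - w) < tol))
        if inliers > best_inliers:
            best, best_inliers = (a, b), inliers
    return best, best_inliers
```

The winning parameters (a, b) are what would be stored for on-the-fly interpolation between the two frames.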
  • Patent number: 10529127
    Abstract: A system generates a clothing deformation model which models one or more of a pose-dependent clothing shape variation which is induced by underlying body pose parameters, a pose-independent clothing shape variation which is induced by clothing size and underlying body shape parameters and a clothing shape variation including a combination of the pose-dependent clothing shape variation and/or the pose-independent clothing shape variation. The system generates, for an input human body, a custom-shaped garment associated with a clothing type by mapping, via the clothing deformation model, body shape parameters of the input human body to clothing shape parameters of the clothing type and dresses the input human body with the custom-shaped garment.
    Type: Grant
    Filed: July 3, 2019
    Date of Patent: January 7, 2020
    Assignee: BROWN UNIVERSITY
    Inventors: Michael J. Black, Peng Guan
  • Patent number: 10523921
    Abstract: Systems and methods for replacing a 2D image with an equivalent 3D image within a web page. The 2D image displayed within a web page may be identified and a 3D image with substantially equivalent content may also be identified. The 3D image may be integrated into the web page as a replacement to the 2D image. Further, at least one user input manipulating the 3D image within the web page may be received. The at least one user input may include movement of a view point (or point of view) of a user relative to a display displaying the web page and/or detection of a beam projected from an end of a user input device intersecting with the 3D image.
    Type: Grant
    Filed: April 6, 2018
    Date of Patent: December 31, 2019
    Assignee: zSpace, Inc.
    Inventors: David A. Chavez, Jonathan J. Hosenpud, Clifford S. Champion, Alexandre R. Lelievre, Arthur L. Berman, Kevin S. Yamada
  • Patent number: 10523922
    Abstract: Systems and methods for replacing a 2D image with an equivalent 3D image within a web page. Content of a 2D image displayed within a web page may be identified and 3D images may be identified as possible replacements of the 2D image. The 3D images may be ranked based on sets of ranking criteria. A 3D image with a highest-ranking value may be selected based on a ranking of the 3D images. The selected 3D image may be integrated into the web page, thereby replacing the 2D image with the selected 3D image. Further, a user input manipulating the 3D image within the web page may be received. The user input may include movement of a view point of a user relative to a display displaying the web page and/or detection of a beam projected from an end of a user input device intersecting with the 3D image.
    Type: Grant
    Filed: April 6, 2018
    Date of Patent: December 31, 2019
    Assignee: zSpace, Inc.
    Inventors: David A. Chavez, Jonathan J. Hosenpud, Clifford S. Champion, Alexandre R. Lelievre, Arthur L. Berman, Kevin S. Yamada