Patents Issued on January 14, 2020
  • Patent number: 10535152
    Abstract: The techniques described herein overcome the limitations of conventional techniques by bridging the gap between user interaction with digital content using a computing device and a user's physical environment through use of augmented reality content. In one example, user interaction with augmented reality digital content as part of a live stream of digital images of a user's environment is used to specify a size of an area that is used to filter search results to find a “best fit”. In another example, a geometric shape is used to represent a size and shape of an object included in a digital image (e.g., a two-dimensional digital image). The geometric shape is displayed as augmented reality digital content as part of a live stream of digital images to “assess fit” of the object in the user's physical environment.
    Type: Grant
    Filed: September 25, 2018
    Date of Patent: January 14, 2020
    Assignee: eBay Inc.
    Inventors: Preeti Patil Anadure, Mukul Arora, Ashwin Ganesh Krishnamurthy
  • Patent number: 10535153
    Abstract: Methods, systems, and computer programs are provided for generating an interactive space. One method includes operations for associating a first device to a reference point in 3D space, and for calculating by the first device a position of the first device in the 3D space based on inertial information captured by the first device and utilizing dead reckoning. Further, the method includes operations for capturing images with a camera of the first device, and for identifying locations of one or more static features in the images. The position of the first device is corrected based on the identified locations of the one or more static features, and a view of an interactive scene is presented in a display of the first device, where the interactive scene is tied to the reference point and includes virtual objects.
    Type: Grant
    Filed: December 6, 2016
    Date of Patent: January 14, 2020
    Assignee: Sony Interactive Entertainment America LLC
    Inventors: George Weising, Thomas Miller
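The tracking loop this abstract describes — dead reckoning from inertial data, periodically corrected by observed static features — can be sketched as follows. This is a minimal illustration under assumed representations (positions as 3-vectors, a fixed blend gain), not the patented method itself:

```python
import numpy as np

def dead_reckon(position, velocity, accel, dt):
    """Advance the estimated position by integrating inertial measurements."""
    velocity = velocity + accel * dt
    position = position + velocity * dt
    return position, velocity

def correct_with_feature(position, predicted_feature, observed_feature, gain=0.5):
    """Blend the dead-reckoned position toward the position implied by a
    static visual feature, countering accumulated drift."""
    drift = observed_feature - predicted_feature
    return position + gain * drift
```

In practice the correction would come from reprojecting tracked image features through the camera model rather than comparing 3D positions directly.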
  • Patent number: 10535154
    Abstract: The system for image analysis that analyzes an image taken by a camera improves the accuracy of detection and identification of an object in image analysis. The system stores a plurality of analyzed past images and their imaging environment data sets, which include setting data of the camera that took the past image and data on an object; includes an acquisition module 211 that acquires an image and a similar image extraction module 212 that extracts a past image similar to the acquired image; and applies the imaging environment data set of the extracted past image to the acquired image and analyzes the acquired image.
    Type: Grant
    Filed: October 31, 2016
    Date of Patent: January 14, 2020
    Assignee: OPTIM CORPORATION
    Inventor: Shunji Sugaya
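The retrieval step — extract the most similar past image and reuse its imaging environment data — can be sketched as a nearest-neighbour lookup. The feature vectors, database layout, and field names below are assumptions for illustration; the patent names modules 211 and 212 but does not specify a similarity metric:

```python
import numpy as np

# Hypothetical database: each analyzed past image is stored as a feature
# vector alongside the imaging-environment data captured with it.
past_db = [
    {"features": np.array([0.9, 0.1, 0.0]), "env": {"exposure": 1 / 500, "label": "car"}},
    {"features": np.array([0.1, 0.8, 0.1]), "env": {"exposure": 1 / 60, "label": "person"}},
]

def most_similar_env(query_features):
    """Return the environment data set of the past image whose features are
    closest (Euclidean distance) to the acquired image's features."""
    best = min(past_db, key=lambda rec: np.linalg.norm(rec["features"] - query_features))
    return best["env"]
```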
  • Patent number: 10535155
    Abstract: Systems and methods for articulated pose estimation are provided. Some embodiments include training a convolutional neural network for object pose estimation, which includes receiving a two-dimensional training image of an articulated object that has a plurality of components and identifying, from the two-dimensional training image, at least one key point for each of the plurality of components. Some embodiments also include testing the accuracy of the object pose estimation, which includes visualizing a three or more dimensional pose of each of the plurality of components of the articulated object from a two-dimensional testing image and providing data related to the visualization for output.
    Type: Grant
    Filed: October 24, 2017
    Date of Patent: January 14, 2020
    Assignees: Toyota Motor Engineering & Manufacturing North America, Inc., Carnegie Mellon University
    Inventors: Zhe Cao, Qi Zhu, Yaser Sheikh, Suhas E. Chelian
  • Patent number: 10535156
    Abstract: Examples of the present disclosure describe systems and methods for scene reconstruction from bursts of image data. In an example, an image capture device may gather information from multiple positions within the scene. At each position, a burst of image data may be captured, such that other images within the burst may be used to identify common image features, anchor points, and geometry, in order to generate a scene reconstruction as observed from the position. Thus, as a result of capturing bursts from multiple positions in a scene, multiple burst reconstructions may be generated. Each burst may be oriented within the scene by identifying a key frame for each burst and using common image features and anchor points among the key frames to determine a camera position for each key frame. The burst reconstructions may then be combined into a unified reconstruction, thereby generating a high-quality reconstruction of the scene.
    Type: Grant
    Filed: April 4, 2017
    Date of Patent: January 14, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Neel Suresh Joshi, Sudipta Narayan Sinha, Minh Phuoc Vo
  • Patent number: 10535157
    Abstract: A positioning and measuring system includes: an image scale supporting an object and having positioning mark sets and encoding pattern sets arranged in a two-dimensional array, each of the positioning mark sets including positioning marks, each of the encoding pattern sets including encoding patterns respectively disposed in gaps between the positioning marks; an image capturing device capturing measurement points of the object and an image scale to obtain composite images; a processor processing the composite images and determining one or multiple position relationships between the measurement points according to the encoding patterns and the positioning marks; and a driving mechanism, which is electrically connected to the processor and mechanically connected to the image capturing device or the image scale, and drives one of the image capturing device and the image scale to move relatively to the other of the image capturing device and the image scale.
    Type: Grant
    Filed: November 22, 2017
    Date of Patent: January 14, 2020
    Assignee: NATIONAL TSING HUA UNIVERSITY
    Inventors: Hung-Yin Tsai, Wei-Cheng Pan, Yu-Chen Wu
  • Patent number: 10535158
    Abstract: Apparatuses and methods for point source image blur mitigation are provided. An example method may include receiving, from an imaging detector, a plurality of pixel signals associated with respective pixels of an image over a stare time, and determining a trajectory of a point source within the image due to relative angular motion of the point source across a plurality of pixels of the image. The example method may further include determining a subset of pixels that intersect with the trajectory, and determining an estimated location of the point source within the image at an end of the stare time based on the pixel signals for each of the pixels within the subset of pixels.
    Type: Grant
    Filed: August 24, 2017
    Date of Patent: January 14, 2020
    Assignee: The Johns Hopkins University
    Inventor: Robert L. Fry
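One way to read the two steps — find the pixels a trajectory intersects, then estimate the end-of-stare location from their signals — is sketched below. The linear trajectory, the sampling density, and the signal-weighted centroid are illustrative assumptions, not the patented estimator:

```python
import numpy as np

def trajectory_pixels(start, end, n=50):
    """Rasterize a linear trajectory into the set of pixel coordinates it crosses."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    points = (1 - t) * np.asarray(start) + t * np.asarray(end)
    return {tuple(p) for p in np.floor(points).astype(int)}

def estimate_end_location(pixels, signals):
    """Signal-weighted centroid over the trajectory's pixel subset, used here
    as a stand-in for the end-of-stare location estimate."""
    coords = np.array(sorted(pixels))
    w = np.array([signals.get(tuple(c), 0.0) for c in coords])
    return (coords * w[:, None]).sum(axis=0) / w.sum()
```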
  • Patent number: 10535159
    Abstract: An in vivo motion tracking device, which tracks an in vivo motion that is a tracking target included in an ultrasonic image, includes an image acquiring unit that is configured to acquire an ultrasonic image, an advance learning unit that is configured to perform advance learning using the ultrasonic image as learning data, and a tracking unit that is configured to track a position of the tracking target in an ultrasonic image including the tracking target after the advance learning performed by the advance learning unit.
    Type: Grant
    Filed: January 10, 2018
    Date of Patent: January 14, 2020
    Assignees: The University of Electro-Communications, PUBLIC UNIVERSITY CORPORATION YOKOHAMA CITY UNIVERSITY
    Inventors: Norihiro Koizumi, Yu Nishiyama, Ryosuke Kondo, Kyohei Tomita, Fumio Eura, Kazushi Numata
  • Patent number: 10535160
    Abstract: A markerless augmented reality (AR) system can track 2D feature points among video frames, generate 2D point clouds and 3D point clouds based thereon, and match a 3D model against the 3D point cloud to obtain proper positional information of the model with respect to a frame. The AR system can use the 3D model with the obtained positional information to render and project AR content to a user's view. Additionally, the AR system can maintain associations between frames and 3D model positional information for search and retrieval.
    Type: Grant
    Filed: July 24, 2017
    Date of Patent: January 14, 2020
    Assignee: Visom Technology, Inc.
    Inventors: Ryan Kellogg, Charles Phillips, Sean Buchanan
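Once correspondences between model vertices and 3D point-cloud points are known, the "positional information of the model with respect to a frame" amounts to a rigid pose, which can be recovered with the standard Kabsch algorithm. This is one conventional way to perform such a match, not necessarily the one this patent claims:

```python
import numpy as np

def estimate_pose(model_points, cloud_points):
    """Rigid pose (R, t) aligning model points to matched point-cloud points,
    so that cloud ~= model @ R.T + t (Kabsch algorithm)."""
    mc = model_points.mean(axis=0)
    cc = cloud_points.mean(axis=0)
    H = (model_points - mc).T @ (cloud_points - cc)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cc - R @ mc
    return R, t
```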
  • Patent number: 10535161
    Abstract: A decoding device, an encoding device, and a method for point cloud decoding are disclosed. The method includes receiving a compressed bitstream. The method also includes decoding the compressed bitstream into 2-D frames that represent a 3-D point cloud. Each of the 2-D frames includes a set of patches, and each patch includes a cluster of points of the 3-D point cloud. The cluster of points corresponds to an attribute associated with the 3-D point cloud. One patch of the set of patches, the set of patches, and the 2-D frames correspond to respective access levels representing the 3-D point cloud. The method also includes identifying a first and a second flag. In response to identifying the first and the second flag, the method includes reading metadata from the bitstream. The method further includes generating, based on the metadata and using the 2-D frames, the 3-D point cloud.
    Type: Grant
    Filed: November 6, 2018
    Date of Patent: January 14, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Madhukar Budagavi, Esmaeil Faramarzi, Rajan Joshi, Hossein Najaf-Zadeh
  • Patent number: 10535162
    Abstract: Encoding and decoding of property data, such as colour values, associated with vertices forming 3D objects. From an analysis of connectivity data, a spiral-like scanning path of the vertices within the 3D model is obtained. The colour values are mapped to a 2D image, each attribute value to a pixel. Next, the mapped 2D image is encoded. To increase redundancies in the 2D image, the spiral-like path is split into path segments, each forming a turn in the spiral; each path segment is assigned to a respective line of the 2D image; and the colour values of each path segment are mapped, in the same order, to the respective line of the 2D image. Successive lines in the 2D image thus contain the colour values of neighbouring vertices in the 3D object, and a better encoding can be achieved.
    Type: Grant
    Filed: December 27, 2017
    Date of Patent: January 14, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventors: Guillaume Laroche, Patrice Onno, Christophe Gisquet
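The mapping step — one spiral turn per image row, colour values kept in path order so that successive rows hold the colours of neighbouring vertices — can be sketched as follows. The fixed image width and zero padding of short rows are assumptions for illustration:

```python
import numpy as np

def map_colours_to_image(colours, turn_lengths, width):
    """Map per-vertex colours, ordered along a spiral scan path, to a 2D image:
    each turn of the spiral becomes one image row, preserving path order, so
    vertically adjacent pixels correspond to neighbouring vertices in 3D."""
    image = np.zeros((len(turn_lengths), width, 3), dtype=np.uint8)
    i = 0
    for row, n in enumerate(turn_lengths):
        image[row, :n] = colours[i:i + n]   # one spiral turn per line
        i += n
    return image
```

Increasing vertical redundancy this way is what lets a conventional 2D image codec compress the attribute data more efficiently.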
  • Patent number: 10535163
    Abstract: A system for generating three-dimensional facial models, including photorealistic hair and facial textures, creates a facial model from a single two-dimensional input image with reliance upon neural networks. The photorealistic hair is created by finding a subset of similar three-dimensional polystrip hairstyles from a large database of polystrip hairstyles, selecting the most-alike polystrip hairstyle, and deforming that polystrip hairstyle to better fit the hair of the two-dimensional image. Then, collisions and bald spots are corrected, and suitable textures are applied. Finally, the facial model and polystrip hairstyle are combined into a final three-dimensional avatar.
    Type: Grant
    Filed: August 31, 2018
    Date of Patent: January 14, 2020
    Assignee: Pinscreen, Inc.
    Inventors: Hao Li, Liwen Hu, Lingyu Wei, Koki Nagano, Jaewoo Seo, Jens Fursund, Shunsuke Saito
  • Patent number: 10535164
    Abstract: A method for applying a style to an input image to generate a stylized image. The method includes maintaining data specifying respective parameter values for each image style in a set of image styles, receiving an input including an input image and data identifying an input style to be applied to the input image to generate a stylized image that is in the input style, determining, from the maintained data, parameter values for the input style, and generating the stylized image by processing the input image using a style transfer neural network that is configured to process the input image to generate the stylized image.
    Type: Grant
    Filed: April 10, 2019
    Date of Patent: January 14, 2020
    Assignee: Google Inc.
    Inventors: Jonathon Shlens, Vincent Dumoulin, Manjunath Kudlur Venkatakrishna
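Maintaining per-style parameter values and applying them in a shared network is reminiscent of conditional instance normalisation; the sketch below illustrates that idea at the level of a single feature map. The parameter names and stored values are invented for illustration and stand in for the actual style transfer neural network:

```python
import numpy as np

# Hypothetical maintained data: one (scale, shift) parameter pair per style,
# with one entry per feature channel.
style_params = {
    "monet":    {"gamma": np.array([1.2, 0.8]), "beta": np.array([0.1, -0.2])},
    "van_gogh": {"gamma": np.array([0.9, 1.5]), "beta": np.array([-0.3, 0.4])},
}

def stylise_features(features, style):
    """Normalise each channel of an (H, W, C) feature map, then apply the
    stored parameters of the requested style."""
    p = style_params[style]
    mu = features.mean(axis=(0, 1))
    sigma = features.std(axis=(0, 1)) + 1e-5
    return p["gamma"] * (features - mu) / sigma + p["beta"]
```

Because only the small parameter sets differ per style, a single network can serve every style in the maintained set.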
  • Patent number: 10535165
    Abstract: Methods, devices, and apparatus, including computer programs encoded on a computer storage medium, for reconstructing an image are provided. In one aspect, a method of reconstructing an image includes obtaining scanning data for a subject in a continuous incremental scanning of medical equipment including real crystals for detection, associating each of the real crystals with one or more virtual crystals in a virtual scanning system, determining delay random coincidence data of two virtual crystals connected by a response line in the virtual scanning system, obtaining random coincidence data by denoising the delay random coincidence data based on crystal receiving efficiency for each of the real crystals, and reconstructing an image with the scanning data by taking the random coincidence data into account.
    Type: Grant
    Filed: May 25, 2017
    Date of Patent: January 14, 2020
    Assignee: Shenyang Neusoft Medical Systems Co., Ltd.
    Inventors: Shaolian Liu, Zhipeng Sun, Yunda Li
  • Patent number: 10535166
    Abstract: The present disclosure provides a system and method for PET image reconstruction. The method may include processes for obtaining physiological information and/or rigid motion information. The image reconstruction may be performed based on the physiological information and/or rigid motion information.
    Type: Grant
    Filed: July 29, 2016
    Date of Patent: January 14, 2020
    Assignee: SHANGHAI UNITED IMAGING HEALTHCARE CO., LTD.
    Inventors: Tao Feng, Wentao Zhu, Hongdi Li, Defu Yang, Yun Dong, Yang Lyu
  • Patent number: 10535167
    Abstract: A method and system for obtaining images of an object of interest using a system comprising an X-ray source facing a detector. The method and system enable the acquiring of a plurality of 2D projection images of the object of interest in a plurality of orientations. A selected 2D projection image such as the zero projection of the plurality of projections can be enhanced by using at least a subset of the plurality of tomosynthesis projection images. The obtained enhanced 2D projection image is displayed for review.
    Type: Grant
    Filed: October 8, 2018
    Date of Patent: January 14, 2020
    Assignee: GENERAL ELECTRIC COMPANY
    Inventor: Sylvain Bernard
  • Patent number: 10535168
    Abstract: A method for generating an image of a subterranean formation includes receiving seismic data that was collected from seismic waves that propagated in the subterranean formation. Partition images are generated using the seismic data. A geological model of the subterranean formation is generated. Dip fields in the partition images are determined. A target dip field in the geological model is determined. A degree of correlation between the respective dip fields and the target dip field is determined. Weights are assigned to the partition images based upon the degrees of correlation to produce weighted partition images. The image of the subterranean formation is generated by stacking the weighted partition images.
    Type: Grant
    Filed: March 2, 2016
    Date of Patent: January 14, 2020
    Assignee: SCHLUMBERGER TECHNOLOGY CORPORATION
    Inventors: Ruoyu Gu, Mohammed Hegazy, Stacey Buzzell, Olga Zdraveva
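The weighting scheme — score each partition image by how well its dip field correlates with the target dip field from the geological model, then stack — can be sketched as follows. Using Pearson correlation and clamping negative weights to zero are illustrative assumptions:

```python
import numpy as np

def weighted_stack(partition_images, dip_fields, target_dip):
    """Weight each partition image by the correlation of its dip field with
    the target dip field, then stack the weighted partition images."""
    weights = []
    for dip in dip_fields:
        c = np.corrcoef(dip.ravel(), target_dip.ravel())[0, 1]
        weights.append(max(c, 0.0))          # ignore anti-correlated partitions
    weights = np.array(weights) / np.sum(weights)
    return sum(w * img for w, img in zip(weights, partition_images))
```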
  • Patent number: 10535169
    Abstract: Computer program products, methods, systems, apparatus, and computing entities are provided for overcoming the technical problem of providing an augmented reality that displays an actual image of the item or a proportionally dimensioned representation of the item to a user. To overcome this challenge, two separate approaches can be used: a beacon/tag/sensor-based approach and a marker-based approach.
    Type: Grant
    Filed: October 31, 2017
    Date of Patent: January 14, 2020
    Assignee: UNITED PARCEL SERVICE OF AMERICA, INC.
    Inventors: Andrew Dotterweich, Christopher T. Schenken, Jeffrey Cooper
  • Patent number: 10535170
    Abstract: A method may include receiving a selection of a first variation of a consumer product from multiple consumer product variations. Each consumer product variation may include a first and a second surface, each including a depth attribute and a texture map with a sync point (first and second of each, respectively). The method may also include generating a single image of the consumer product as implemented in the first variation based on the first depth attribute and the second depth attribute, where each of the first surface and the second surface may include a texture map including a sync point. For the method, the first and the second sync points may be selected so that transitions between the first surface and the second surface in the single image depict a designated manufacturer matching scheme that matches a pattern feature from the first surface with the second surface.
    Type: Grant
    Filed: June 18, 2018
    Date of Patent: January 14, 2020
    Assignee: Micro*D, Inc.
    Inventors: Manoj Nigam, Mark McCuistion, Ron Gordon, Marek Scholaster
  • Patent number: 10535171
    Abstract: One embodiment of the invention disclosed herein provides techniques for processing an evaluation graph associated with a three-dimensional animation scene. An evaluation management system retrieves a first plurality of nodes from a memory. The evaluation management system determines that a first node included in the first plurality of nodes depends on a first output generated by a second node that also is included in the first plurality of nodes. The evaluation management system generates a third node corresponding to the first node and a fourth node corresponding to the second node. The evaluation management system generates an evaluation graph that includes the third node, the fourth node, and an indication that the third node depends on the fourth node. The evaluation management system schedules the third node for evaluation after the fourth node has been evaluated.
    Type: Grant
    Filed: June 25, 2018
    Date of Patent: January 14, 2020
    Assignee: AUTODESK, INC.
    Inventors: Krystian Ligenza, Kevin Picott
  • Patent number: 10535172
    Abstract: Implementations are directed to methods, systems, apparatus, and computer programs for generation of a three-dimensional (3D) animation by receiving a user input defining a two-dimensional (2D) representation of a plurality of elements, processing, by the one or more processors, the 2D representation to classify the plurality of elements into symbolic elements and action elements, generating, by the one or more processors, based on the symbolic elements, the action elements, and a set of rules, a 3D animation corresponding to the 2D representation, and transmitting, by the one or more processors, the 3D animation to an extended reality device for display.
    Type: Grant
    Filed: December 11, 2018
    Date of Patent: January 14, 2020
    Assignee: Accenture Global Solutions Limited
    Inventors: Matthew Thomas Short, Robert Dooley, Grace T. Cheng, Sunny Webb, Mary Elizabeth Hamilton
  • Patent number: 10535174
    Abstract: The present disclosure provides embodiments of a particle-based inverse kinematic analysis system. The inverse kinematic system can utilize a neural network, also referred to as a deep neural network, which utilizes machine learning processes in order to create poses that are more life-like and realistic. The system can generate prediction models using motion capture data. The motion capture data can be aggregated and analyzed in order to train the neural network. The neural network can determine rules and constraints that govern how joints and connectors of a character model move in order to create realistic motion of the character model within the game application.
    Type: Grant
    Filed: September 14, 2017
    Date of Patent: January 14, 2020
    Assignee: ELECTRONIC ARTS INC.
    Inventors: Paolo Rigiroli, Hitoshi Nishimura
  • Patent number: 10535175
    Abstract: A method of creating a computer-generated animation uses a graphical user interface including a two-dimensional array of cells. The array has a plurality of rows associated with computer-generated elements and a plurality of columns associated with frames of the animation. The array includes a first cell associated with a first computer-generated element and a first frame. A first view of the array is displayed in which the first cell has a first width and includes a key frame indicator that indicates that a designated value is associated with the first computer-generated element for the first frame. A second view is displayed in which the first cell has a second width and includes an element value indicator. The second width is greater than the first width, and the element value indicator represents the designated value associated with the first computer-generated element.
    Type: Grant
    Filed: April 17, 2018
    Date of Patent: January 14, 2020
    Assignee: DreamWorks Animation L.L.C.
    Inventors: Michael Babcock, Fredrik Nilsson, Matthew Christopher Gong
  • Patent number: 10535176
    Abstract: Systems, methods, and computer readable media to improve the animation capabilities of a computer system are described. Animation targets may be represented as a combination of a current animation pose and an incremental morph. The incremental morph may be represented as a series of non-zero weights, where each weight alters one of a predetermined number of target poses. Each target pose may be represented as a weighted difference with respect to a reference pose. Target poses may be stored in memory in a unique and beneficial manner. The disclosed manner permits the efficient retrieval of pose vertex data at run-time and may be especially efficient in systems that do not use, or have very little, cache memory.
    Type: Grant
    Filed: November 5, 2018
    Date of Patent: January 14, 2020
    Assignee: Apple Inc.
    Inventors: Aymeric Bard, Thomas Goossens, Amaury Balliet
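The evaluation described — an animation target as the current pose plus a sum of non-zero-weighted differences between stored target poses and a reference pose — can be sketched as follows. Pose arrays and name keys are assumptions for illustration:

```python
import numpy as np

def apply_incremental_morph(current_pose, reference_pose, target_poses, weights):
    """Blend-shape style evaluation: each target pose is stored as a difference
    against a reference pose; the animation target is the current pose plus
    the weighted sum of those deltas (only non-zero weights are stored)."""
    result = current_pose.astype(float)
    for name, w in weights.items():
        result += w * (target_poses[name] - reference_pose)
    return result
```

Storing only deltas and non-zero weights keeps per-frame data small, which matches the abstract's emphasis on efficient retrieval on cache-poor systems.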
  • Patent number: 10535177
    Abstract: Systems, methods, and non-transitory computer-readable media can provide an interface that includes a first region and a second region, wherein a live content stream being accessed is presented in the first region, and wherein one or more feedback options for interacting with the live content stream are presented in the second region. A determination is made that at least one user accessing the live content stream has selected a feedback option in response to the live content stream. At least one visual feature corresponding to the selected feedback option is displayed in the first region in which the live content stream is being presented.
    Type: Grant
    Filed: August 1, 2016
    Date of Patent: January 14, 2020
    Assignee: Facebook, Inc.
    Inventor: Alex Douglas Cornell
  • Patent number: 10535178
    Abstract: Systems, apparatuses, and methods for performing shader writes to compressed surfaces are disclosed. In one embodiment, a processor includes at least a memory and one or more shader units. In one embodiment, a shader unit of the processor is configured to receive a write request targeted to a compressed surface. The shader unit is configured to identify a first block of the compressed surface targeted by the write request. Responsive to determining that the data of the write request targets less than the entirety of the first block, the shader unit reads the first block from the cache and decompresses the first block. Next, the shader unit merges the data of the write request with the decompressed first block. Then, the shader unit compresses the merged data and writes the merged data to the cache.
    Type: Grant
    Filed: December 22, 2016
    Date of Patent: January 14, 2020
    Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
    Inventors: Jimshed Mirza, Christopher J. Brennan, Anthony Chan, Leon Lai
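The partial-write path — read the compressed block, decompress, merge the new data, recompress, write back — can be sketched in software, with a generic compressor standing in for the GPU's surface compression:

```python
import zlib

def shader_write(cache, block_id, offset, data):
    """Write into a compressed block. A partial write triggers a
    read-modify-write (decompress, merge, recompress); a write covering
    the whole block can skip the read and merge entirely."""
    block = bytearray(zlib.decompress(cache[block_id]))
    if len(data) < len(block):                 # partial write: merge required
        block[offset:offset + len(data)] = data
    else:                                      # full overwrite: no merge needed
        block = bytearray(data)
    cache[block_id] = zlib.compress(bytes(block))
```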
  • Patent number: 10535179
    Abstract: A method comprising: causing detection of a modification of a visual appearance of a portion of the visual scene; causing determination that the portion of the visual scene that has been modified is or includes a first portion of the visual scene that has a corresponding first sound object; causing modification of the first sound object to modify a spatial extent of the first sound object; and causing rendering of the visual scene and the corresponding sound scene including rendering of the modified portion of the visual scene in the visual scene and rendering of the modified first sound object with modified spatial extent in the corresponding sound scene.
    Type: Grant
    Filed: December 21, 2017
    Date of Patent: January 14, 2020
    Assignee: Nokia Technologies Oy
    Inventors: Antti Eronen, Miikka Vilermo, Arto Lehtiniemi, Jussi Leppänen
  • Patent number: 10535180
    Abstract: A method for displaying graphics of clouds in a three-dimensional (3D) virtual environment includes generating a filtered texture based on a threshold filter applied to a cloud texture where the filter threshold corresponds to cloud coverage information in weather data of a geographic region. The method further includes mapping the filtered texture to a geometric surface corresponding to a sky dome in the 3D virtual environment, coloring a plurality of texels in the mapped filtered texture on the geometric surface stored in the memory based on an isotropic single-scatter color model, and generating a graphical depiction of the 3D virtual environment including at least a portion of the geometric surface corresponding to the sky dome with clouds based on the plurality of texels of the filtered texture that are colored and mapped to the geometric surface.
    Type: Grant
    Filed: March 28, 2018
    Date of Patent: January 14, 2020
    Assignee: Robert Bosch GmbH
    Inventors: Zeng Dai, Liu Ren, Lincan Zou
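The first step — thresholding a noise-like cloud texture so the surviving texels match the reported cloud coverage — can be sketched as follows. Deriving the threshold from a quantile of the texture is an illustrative choice; the abstract only says the threshold corresponds to the coverage information in the weather data:

```python
import numpy as np

def filter_cloud_texture(cloud_texture, coverage):
    """Threshold a cloud texture so that roughly `coverage` (0..1) of the
    texels remain as cloud; the rest of the sky dome stays clear."""
    threshold = np.quantile(cloud_texture, 1.0 - coverage)
    return np.where(cloud_texture >= threshold, cloud_texture, 0.0)
```

The surviving texels would then be mapped onto the sky-dome geometry and coloured by the scattering model described in the abstract.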
  • Patent number: 10535181
    Abstract: The systems and methods generate geometric proxies for participants in an online communication session, where each geometric proxy is a geometric representation of a participant and each geometric proxy is generated from acquired depth information and is associated with a particular virtual box. The systems and methods also include generating a scene geometry that visually simulates an in-person meeting of the participants, where the scene geometry includes the geometric proxies, and the virtual boxes of the geometric proxies are aligned within the scene geometry based on a number of the participants and a reference object to which the virtual boxes are aligned. In addition, the systems and methods cause a display of the scene geometry with the geometric proxies, where the display of a particular geometric proxy includes a video of the corresponding participant painted onto that geometric proxy.
    Type: Grant
    Filed: April 21, 2019
    Date of Patent: January 14, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yancey Christopher Smith, Eric G. Lang, Zhengyou Zhang, Christian F. Huitema
  • Patent number: 10535182
    Abstract: A system, devices, and methods are disclosed to reduce the volume of data required to represent three-dimensional smooth cylindrical curves and to ray-trace these three-dimensional smooth cylindrical curves using a set of geometric primitives which have implicit curvature.
    Type: Grant
    Filed: February 23, 2018
    Date of Patent: January 14, 2020
    Inventors: Marsel Khadiyev, Nikolay Shtinkov
  • Patent number: 10535183
    Abstract: Graphics processing systems and methods provide soft shadowing effects into rendered images. This is achieved in a simple manner which can be implemented in real-time without incurring high processing costs so it is suitable for implementation in low-cost devices. Rays are cast from positions on visible surfaces corresponding to pixel positions towards the center of a light, and occlusions of the rays are determined. The results of these determinations are used to apply soft shadows to the rendered pixel values.
    Type: Grant
    Filed: July 9, 2018
    Date of Patent: January 14, 2020
    Assignee: Imagination Technologies Limited
    Inventors: Justin P. DeCell, Luke T. Peterson
  • Patent number: 10535184
    Abstract: Disclosed herein are an ultrasonic imaging apparatus and a control method thereof. The ultrasonic imaging apparatus includes: an ultrasonic collector configured to collect ultrasonic waves from an object; a volume data generator configured to generate volume data based on the ultrasonic waves; and an image processor configured to perform volume rendering on the volume data with reference to a texture image, wherein a translucency property and multi-layer tissue of the object are reflected in each texel of the texture image.
    Type: Grant
    Filed: July 4, 2014
    Date of Patent: January 14, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Young Ihn Kho, Hee Sae Lee
  • Patent number: 10535185
    Abstract: Aspects of this disclosure relate to a process for rendering graphics that includes performing, with a hardware unit of a graphics processing unit (GPU) designated for vertex shading, a vertex shading operation to shade input vertices so as to output vertex shaded vertices, wherein the hardware unit adheres to an interface that receives a single vertex as an input and generates a single vertex as an output. The process also includes performing, with the hardware unit of the GPU designated for vertex shading, a hull shading operation to generate one or more control points based on one or more of the vertex shaded vertices, wherein the one or more hull shading operations operate on at least one of the one or more vertex shaded vertices to output the one or more control points.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: January 14, 2020
    Assignee: QUALCOMM Incorporated
    Inventors: Vineet Goel, Andrew Evan Gruber, Donghyun Kim
  • Patent number: 10535186
    Abstract: A mechanism is described for facilitating multi-resolution deferred shading using texel shaders in computing environments. A method of embodiments, as described herein, includes facilitating computation of shading rate in a first pass in a graphics pipeline, where the shading rate relates to a plurality of pixels. The method may further include facilitating texel shading operations in a second pass using the shading rate, where the first pass is performed separate from and prior to the second pass.
    Type: Grant
    Filed: August 30, 2016
    Date of Patent: January 14, 2020
    Assignee: Intel Corporation
    Inventor: Franz Petrik Clarberg
  • Patent number: 10535187
    Abstract: A computer-implemented method for classifying voxels. The method includes rendering a plurality of images associated with a three-dimensional model. The method also includes identifying one or more pixels associated with the plurality of images that correspond to a voxel. The method further includes classifying the voxel as either external to the three-dimensional model or internal to the three-dimensional model based on the one or more pixels.
    Type: Grant
    Filed: April 14, 2014
    Date of Patent: January 14, 2020
    Assignee: AUTODESK, INC.
    Inventors: Olivier Dionne, Martin De Lasa
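The classification step in the abstract above can be sketched as a vote over the pixels that correspond to a voxel across several rendered views. The majority-vote rule and threshold below are illustrative assumptions; the patent claims the render-then-classify idea, not this particular decision function.

```python
def classify_voxel(pixel_samples, threshold=0.5):
    """Classify a voxel as 'internal' or 'external' to the 3-D model from
    the pixels corresponding to it across rendered views. Each sample is
    True if the model covered the voxel's projection in that view."""
    covered = sum(pixel_samples) / len(pixel_samples)
    return "internal" if covered >= threshold else "external"
```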
  • Patent number: 10535188
    Abstract: Systems, methods, and computer readable media to implement tessellation edge shaders. Various embodiments receive tessellation patch information that includes patch information and shared edges for the patches. Based on the received patch information, edge tessellation levels for the shared edges may be determined and used to modify edge tessellation levels initially computed for the shared edges. The various embodiments can then generate vertices for a shared edge with the updated edge tessellation levels to adjoin the shared edges without forming cracks. The vertices may be used to render a surface of an object within a digital image or a sequence of digital images.
    Type: Grant
    Filed: September 23, 2016
    Date of Patent: January 14, 2020
    Assignee: Apple Inc.
    Inventor: David R. Shreiner
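The crack-avoidance idea in the abstract above can be sketched as follows: when two patches share an edge, both are forced to a common tessellation level so the generated edge vertices coincide. Taking the maximum of the two initially computed levels is an illustrative resolution rule, not necessarily the patented one.

```python
def resolve_shared_edges(patch_edge_levels, shared_edges):
    """Force each pair of shared edges to a common tessellation level (the
    max of the two) so generated vertices match and no cracks form.
    Keys of `patch_edge_levels` are (patch_id, edge_id) pairs."""
    levels = dict(patch_edge_levels)
    for (p1, e1), (p2, e2) in shared_edges:
        common = max(levels[(p1, e1)], levels[(p2, e2)])
        levels[(p1, e1)] = levels[(p2, e2)] = common
    return levels

def edge_vertices(level):
    """Generate parametric vertex positions along an edge for a level."""
    return [i / level for i in range(level + 1)]
```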
  • Patent number: 10535189
    Abstract: Systems and methods for determining a centerline of a tubular structure from volumetric data of vessels where a contrast agent was injected into the blood stream to enhance the imagery for centerlining. Given a 3D array of scalar values and a first and second point, the system and methods iteratively find a path from the start position to the end position that lies in the center of a tubular structure. A user interface may be provided to visually present and manipulate a centerline of the tubular structure and the tubular structure itself.
    Type: Grant
    Filed: August 29, 2016
    Date of Patent: January 14, 2020
    Assignee: CALGARY SCIENTIFIC INC.
    Inventors: Torin Arni Taerum, Jonathan Neil Draper, Robert George Newton
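The path-finding idea in the abstract above can be illustrated with a small sketch: over a contrast-enhanced intensity grid, a shortest-cost search where each step costs the inverse of the voxel's intensity is pulled toward the bright center of the vessel. Dijkstra's algorithm on a 2-D grid stands in here for the patented iterative method, which works differently in detail.

```python
import heapq

def centerline_path(intensity, start, end):
    """Minimum-cost path from start to end over a 2-D intensity grid;
    stepping onto a voxel costs the inverse of its intensity, so the path
    follows the bright (contrast-enhanced) center of a tubular structure."""
    rows, cols = len(intensity), len(intensity[0])
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == end:
            break
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and intensity[nr][nc] > 0:
                nd = d + 1.0 / intensity[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [], end
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

On a toy grid whose bright middle row represents the vessel lumen, the recovered path stays on that row.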
  • Patent number: 10535190
    Abstract: Systems and methods are described for a media guidance application (e.g., implemented on a user device) that allows users to select any arbitrary position in a virtual reality environment from which to view the virtual reality content, and that changes a user's perspective based on the selected position.
    Type: Grant
    Filed: December 28, 2017
    Date of Patent: January 14, 2020
    Assignee: ROVI GUIDES, INC.
    Inventors: Jonathan A. Logan, Adam Bates, Hafiza Jameela, Jesse F. Patterson, Mark K. Berner, Eric Dorsey, David W. Chamberlin, Paul Stevens, Herbert A. Waterman
  • Patent number: 10535191
    Abstract: Techniques for identifying and labeling distinct objects within 3-D images of environments in which vehicles operate, to thereby generate training data used to train models that autonomously control and/or operate vehicles, are disclosed. A 3-D image may be presented from various perspective views (in some cases, dynamically), and/or may be presented with a corresponding 2-D environment image in a side-by-side and/or a layered manner, thereby allowing a user to more accurately identify groups/clusters of data points within the 3-D image that represent distinct objects. Automatic identification/delineation of various types of objects depicted within 3-D images, automatic labeling of identified/delineated objects, and automatic tracking of objects across various frames of a 3-D video are disclosed. A user may modify and/or refine any automatically generated information. Further, at least some of the techniques described herein are equally applicable to 2-D images.
    Type: Grant
    Filed: February 27, 2018
    Date of Patent: January 14, 2020
    Assignee: Luminar Technologies, Inc.
    Inventors: Prateek Sachdeva, Dmytro Trofymov
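The automatic identification of groups/clusters of 3-D points mentioned above can be sketched with simple Euclidean clustering: two points belong to the same object candidate if they are within a radius of each other, transitively. This stand-in is an assumption for illustration; the patent does not mandate this particular clustering rule.

```python
def cluster_points(points, radius=1.0):
    """Group 3-D points into clusters by transitive proximity: a flood fill
    over the point set using a fixed distance threshold."""
    def close(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) <= radius ** 2
    clusters, unassigned = [], list(points)
    while unassigned:
        seed = unassigned.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            p = frontier.pop()
            near = [q for q in unassigned if close(p, q)]
            for q in near:
                unassigned.remove(q)
            cluster.extend(near)
            frontier.extend(near)
        clusters.append(cluster)
    return clusters
```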
  • Patent number: 10535192
    Abstract: Systems and methods for generating a customized augmented reality environment. A method includes causing generation of at least one signature for at least one multimedia content; causing, based on the generated at least one signature, identification of at least one matching concept, wherein each concept is a collection of signatures and metadata representing the concept; determining, based on the identified at least one matching concept, a selection of at least one augmented reality character; and generating a customized augmented reality environment, wherein the customized augmented reality environment includes each augmented reality character superimposed on the at least one multimedia content element as an overlay.
    Type: Grant
    Filed: August 31, 2016
    Date of Patent: January 14, 2020
    Assignee: CORTICA LTD.
    Inventors: Igal Raichelgauz, Karina Odinaev, Yehoshua Y Zeevi
  • Patent number: 10535193
    Abstract: There is provided with an image processing apparatus. A first image sensor outputs a first image. The first image sensor has a relatively small amount of image deterioration caused by a motion of an object. A second image sensor outputs a second image. The second image sensor has a relatively large amount of image deterioration caused by the motion of the object. An estimation unit analyzes the first image and generates position and orientation information of the image processing apparatus. A rendering unit renders a CG object on the second image such that the CG object is superimposed at a position determined based on the position and orientation information.
    Type: Grant
    Filed: September 6, 2016
    Date of Patent: January 14, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventor: Masaaki Kobayashi
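The division of labor in the abstract above can be sketched in Python: pose is estimated from the low-blur sensor's image, and the CG object is then superimposed on the second sensor's image at a pose-corrected position. The feature-averaging "pose" and the pixel-dictionary image below are toy assumptions for illustration only.

```python
def estimate_pose(sharp_features, reference_features):
    """Estimate a 2-D camera translation by averaging feature shifts in the
    low-motion-blur sensor's image (toy stand-in for real pose estimation)."""
    shifts = [(a[0] - b[0], a[1] - b[1])
              for a, b in zip(sharp_features, reference_features)]
    n = len(shifts)
    return (sum(s[0] for s in shifts) / n, sum(s[1] for s in shifts) / n)

def superimpose_cg(image_pixels, anchor, pose, label="CG"):
    """Render the CG object into the second (high-blur) sensor's image at
    the anchor position corrected by the estimated pose."""
    x, y = anchor[0] + pose[0], anchor[1] + pose[1]
    image_pixels = dict(image_pixels)   # leave the input image untouched
    image_pixels[(round(x), round(y))] = label
    return image_pixels
```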
  • Patent number: 10535194
    Abstract: Whether a reference map can be changed is determined based on an index for evaluating a user's unlikeliness to notice a change in an appearance of a virtual object in an image captured by a camera, the change occurring when the reference map is changed to another map.
    Type: Grant
    Filed: December 22, 2016
    Date of Patent: January 14, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventors: Yuichiro Hirota, Akihiro Katayama, Masakazu Fujiki, Daisuke Kotake
  • Patent number: 10535195
    Abstract: A virtual reality system includes a drone including a rotor, a display, an audio speaker, a body harness having adjustable straps, and one or more processors in operative communication with the display, the audio speaker, and the drone. The drone may be fixed to the body harness. The one or more processors may be configured to issue audio-visual content to the display and audio speaker and control the rotor based on the issued audio-visual content.
    Type: Grant
    Filed: January 6, 2017
    Date of Patent: January 14, 2020
    Assignee: SonicSensory, Inc.
    Inventor: Brock Maxwell Seiler
  • Patent number: 10535196
    Abstract: Technologies are described for indicating a geographic origin of a digitally-mediated communication relative to a location of a recipient by presenting the indication in an augmented reality scene. For example, an augmented reality scene can be presented to the recipient. The geographic origin of an incoming digital communication may be determined and a relative location of the origin with respect to the recipient's location may be computed. A format for presenting the relative location may be derived from the digital communication and the geographic origin. The augmented reality scene may be updated with the relative location based on the derived format.
    Type: Grant
    Filed: July 17, 2017
    Date of Patent: January 14, 2020
    Assignee: Empire Technology Development LLC
    Inventors: Mark Malamud, Royce Levien
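Computing "a relative location of the origin with respect to the recipient's location", as described above, amounts to a distance and bearing between two geographic coordinates. A standard haversine/bearing computation is sketched below; the patent does not prescribe these formulas, only that a relative location is computed and presented in the AR scene.

```python
import math

def relative_location(recipient, origin):
    """Distance (km) and compass bearing (degrees) from the recipient's
    location to the communication's geographic origin, via the haversine
    formula. Coordinates are (latitude, longitude) in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*recipient, *origin))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    dist_km = 6371.0 * 2 * math.asin(math.sqrt(a))
    y = math.sin(dlon) * math.cos(lat2)
    x = (math.cos(lat1) * math.sin(lat2)
         - math.sin(lat1) * math.cos(lat2) * math.cos(dlon))
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
    return dist_km, bearing
```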
  • Patent number: 10535197
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view for presentation on a device. A visual guide can be provided for capturing the multiple images used in the surround view. The visual guide can be a synthetic object that is rendered in real time into the images output to a display of an image capture device. The visual guide can help a user keep the image capture device moving along a desired trajectory.
    Type: Grant
    Filed: May 14, 2018
    Date of Patent: January 14, 2020
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Alexander Jay Bruen Trevor, Michelle Jung-Ah Ho, David Klein, Stephen David Miller, Shuichi Tsutsumi, Radu Bogdan Rusu
  • Patent number: 10535198
    Abstract: Described in detail herein are systems and methods for an augmented display system. A computing system can determine whether a physical object is absent from a designated location in a facility in response to data retrieved from the database. The computing system can insert a virtual element in a planogram corresponding to the designated location of the facility in response to determining the physical object is absent. The computing system can augment the planogram via interaction with an application executing on a portable electronic device.
    Type: Grant
    Filed: June 8, 2018
    Date of Patent: January 14, 2020
    Assignee: Walmart Apollo, LLC
    Inventors: Benjamin D. Enssle, David Blair Brightwell, Cristy Crane Brooks, Greg A. Bryan, Sonney George, Barrett Freeman, Sachin Padwal
  • Patent number: 10535199
    Abstract: The disclosed method may include (1) sensing, via a depth-sensing subsystem, a plurality of locations in three-dimensional space corresponding to physical surfaces in a real-world environment, (2) determining a dominant plane within the real-world environment, (3) defining a three-dimensional grid that is aligned with the dominant plane, (4) identifying, based on the plurality of locations relative to the dominant plane, a set of grid coordinates within the three-dimensional grid that are indicative of the physical surfaces, and (5) determining, based on the set of grid coordinates, a safety boundary to be employed by a head-mounted display system to notify a user of the head-mounted display system of the user's proximity to the physical surfaces. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: June 18, 2018
    Date of Patent: January 14, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Lars Anders Bond, Niv Kantor, Nadav Grossinger
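Steps (3) and (4) above can be sketched as quantizing sensed surface points into grid coordinates aligned with the dominant (floor) plane, with the occupied cells then serving the safety boundary. The cell size, margin, and proximity rule below are illustrative assumptions.

```python
def occupied_grid_cells(points, plane_height, cell_size=0.25):
    """Snap sensed surface points to coordinates of a 3-D grid aligned with
    the dominant plane; heights are measured relative to that plane. The
    occupied cells indicate the physical surfaces."""
    cells = set()
    for x, y, z in points:
        cells.add((int(x // cell_size),
                   int((y - plane_height) // cell_size),
                   int(z // cell_size)))
    return cells

def too_close(user_pos, cells, cell_size=0.25, margin_cells=1):
    """Warn when the user's cell is within `margin_cells` (Chebyshev
    distance) of any occupied cell."""
    ux, uy, uz = (int(c // cell_size) for c in user_pos)
    return any(max(abs(ux - cx), abs(uy - cy), abs(uz - cz)) <= margin_cells
               for cx, cy, cz in cells)
```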
  • Patent number: 10535200
    Abstract: Augmented reality presentations are provided at respective electronic devices. A first electronic device receives information relating to modification made to an augmented reality presentation at a second electronic device, and the first electronic device modifies the first augmented reality presentation in response to the information.
    Type: Grant
    Filed: August 6, 2018
    Date of Patent: January 14, 2020
    Assignee: OPEN TEXT CORPORATION
    Inventors: Sean Blanchflower, Timothy Halbert
  • Patent number: 10535201
    Abstract: A device receives an image including image data of a scale model of a vehicle, and processes the image data, with a model, to identify a make, a model, and a year represented by the scale model. The device determines augmented reality (AR) vehicle information based on the make, the model, and the year represented by the scale model of the vehicle, and provides the AR vehicle information to enable a user device to associate the AR vehicle information with the image of the scale model of the vehicle. The device receives an input associated with the AR vehicle information, and determines updated AR vehicle information based on the input associated with the AR vehicle information. The device provides the updated AR vehicle information to enable the user device to associate the updated augmented reality vehicle information with the image of the scale model of the vehicle.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: January 14, 2020
    Assignee: Capital One Services, LLC
    Inventors: Micah Price, Qiaochu Tang, Jason Hoover, Stephen Wylie, Geoffrey Dagley, Kristen Przano
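The lookup step described above, from a (make, model, year) classification of the scale model to the AR payload for the full-size vehicle, can be sketched as a simple mapping. The catalog contents and dictionary shape below are hypothetical; the patent's model performs the image classification that produces the prediction.

```python
# Hypothetical catalog keyed by the classifier's (make, model, year) output.
AR_VEHICLE_INFO = {
    ("Ford", "Mustang", 1967): {"horsepower": 320, "body": "fastback"},
}

def ar_info_for(prediction):
    """Map a (make, model, year) classification of the scale model to the
    AR overlay payload associated with the full-size vehicle."""
    key = (prediction["make"], prediction["model"], prediction["year"])
    return AR_VEHICLE_INFO.get(key, {"note": "vehicle not in catalog"})
```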
  • Patent number: 10535202
    Abstract: An industrial visualization system generates and delivers virtual reality (VR) and augmented reality (AR) presentations of industrial facilities to wearable appliances to facilitate remote or enhanced interaction with automation systems within the facility. VR presentations can comprise three-dimensional (3D) holographic views of a plant facility or a location within a plant facility. The system can selectively render a scaled down view that renders the facility as a 3D scale model, or as a first-person view that renders the facility as a full-scale rendition that simulates the user's presence on the plant floor. Camera icons rendered in the VR presentation can be selected to switch to a live video stream generated by 360-degree cameras within the plant. The system can also render workflow presentations that guide users through the process of correcting detected maintenance issues.
    Type: Grant
    Filed: July 3, 2019
    Date of Patent: January 14, 2020
    Assignee: Rockwell Automation Technologies, Inc.
    Inventors: Paul D. Schmirler, Thong T. Nguyen, Alex L. Nicoll, David Vasko