Patents Examined by Said Broome
  • Patent number: 11049273
    Abstract: A processor-implemented method of generating a three-dimensional (3D) volumetric video with an overlay representing visibility counts per pixel of a texture atlas, associated with viewer telemetry data, is provided. The method includes (i) capturing the viewer telemetry data, (ii) determining a visibility of each pixel in the texture atlas associated with 3D content based on the viewer telemetry data, (iii) generating at least one visibility count per pixel of the texture atlas based on the visibility of each pixel in the texture atlas, and (iv) generating one of: the 3D volumetric video with the overlay of at least one heat map associated with the viewer telemetry data, using the at least one visibility count per pixel; or a curated selection of the 3D volumetric content based on the viewer telemetry data, using the visibility counts per pixel.
    Type: Grant
    Filed: May 11, 2020
    Date of Patent: June 29, 2021
    Assignee: Omnivor, Inc.
    Inventors: Adam G. Kirk, Oliver A. Whyte, Amit Mital
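    The bookkeeping this abstract describes, per-pixel visibility counts over a texture atlas normalized into a heat map, can be illustrated with a minimal sketch. This is not Omnivor's implementation; the telemetry format (per-view lists of visible atlas pixel coordinates) is an assumption.
    ```python
    # Sketch only: accumulate how often each texture-atlas pixel was visible to viewers.
    import numpy as np

    def accumulate_visibility(atlas_w, atlas_h, telemetry_views):
        """telemetry_views: iterable of (u, v) index arrays, one per recorded view (assumed format)."""
        counts = np.zeros((atlas_h, atlas_w), dtype=np.uint32)
        for us, vs in telemetry_views:
            np.add.at(counts, (vs, us), 1)  # increment every atlas pixel reported visible in this view
        return counts

    def heatmap_overlay(counts):
        """Normalize counts to [0, 1] so they can be mapped to a color ramp and overlaid."""
        peak = counts.max()
        return counts.astype(np.float32) / peak if peak else counts.astype(np.float32)

    # Example: two synthetic views, each reporting which atlas pixels were visible.
    views = [(np.array([0, 1, 1]), np.array([0, 0, 1])),
             (np.array([1]), np.array([0]))]
    overlay = heatmap_overlay(accumulate_visibility(4, 4, views))
    ```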
  • Patent number: 11042388
    Abstract: Implementations of the subject technology provide a framework to support creating user interfaces (UIs) and animations within the UIs. The subject technology receives first information related to an animation, the first information including an initial state, a destination state, and an animation function. The subject technology generates a copy of the destination state, the copy of the destination state comprising a record for the animation based at least in part on the first information related to the animation and further information related to the animation function. The subject technology updates a value related to an intermediate state of the animation in the copy of the destination state, the intermediate state being between the initial state and the destination state. Further, the subject technology provides the copy of the destination state that includes the value related to the intermediate state for rendering the animation.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: June 22, 2021
    Assignee: Apple Inc.
    Inventors: Jacob A. Xiao, Kyle S. Macomber, Joshua H. Shaffer, John S. Harper
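    The pattern the abstract describes, a copy of the destination state carrying an animation record that is updated to intermediate values for rendering, can be sketched roughly as follows. All names and the easing function are illustrative assumptions, not Apple's API.
    ```python
    # Sketch only: an animated copy of the destination state holds a record of the animation
    # and is updated with intermediate values between the initial and destination states.
    import copy
    import dataclasses

    @dataclasses.dataclass
    class State:
        opacity: float
        animation_record: dict | None = None  # record attached to the copied destination state

    def ease_in_out(t):
        # A simple animation function (assumed; any easing curve could be supplied).
        return t * t * (3.0 - 2.0 * t)

    def make_animated_copy(initial, destination, animation_fn):
        dest_copy = copy.deepcopy(destination)
        dest_copy.animation_record = {"initial": initial, "target": destination, "fn": animation_fn}
        return dest_copy

    def update_intermediate(dest_copy, t):
        rec = dest_copy.animation_record
        a, b = rec["initial"].opacity, rec["target"].opacity
        dest_copy.opacity = a + (b - a) * rec["fn"](t)  # intermediate value provided for rendering
        return dest_copy

    frame = update_intermediate(make_animated_copy(State(0.0), State(1.0), ease_in_out), t=0.5)
    ```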
  • Patent number: 11029754
    Abstract: A tilt adjusting image including a captured image that is obtained by capturing a wearer wearing a spectacle-type electronic device, and a horizontal line image indicating a horizontal direction identified by a tilt identification part, is displayed. The wearer adjusts a position of the spectacle-type electronic device by a wearer's hand or the like while viewing the tilt adjusting image so that the spectacle-type electronic device becomes horizontal. When the wearer judges that the spectacle-type electronic device has become horizontal, the wearer operates an operation part and inputs a calibration instruction. When the calibration instruction is input, the calibration instruction is transmitted to the spectacle-type electronic device.
    Type: Grant
    Filed: March 27, 2018
    Date of Patent: June 8, 2021
    Assignee: ALPS ALPINE CO., LTD.
    Inventor: Yukimitsu Yamada
  • Patent number: 11030793
    Abstract: A photo filter (e.g., artistic/stylized painting) light field effect system includes an eyewear device having a frame, a temple connected to a lateral side of the frame, and a depth-capturing camera. Execution of programming by a processor configures the stylized image painting effect system to apply a photo filter selection to: (i) a left raw image or a left processed image to create a left photo filter image, and (ii) a right raw image or a right processed image to create a right photo filter image. The stylized image painting effect system generates a photo filter stylized painting effect image with an appearance of a spatial rotation or movement, by blending together the left photo filter image and the right photo filter image based on a left image disparity map and a right image disparity map.
    Type: Grant
    Filed: September 29, 2019
    Date of Patent: June 8, 2021
    Assignee: Snap Inc.
    Inventor: Sagi Katz
  • Patent number: 11024267
    Abstract: A gaze tracking display system includes a processor and display circuitry. The processor is configured to perform foveated rendering of image data, and to output foveated image data. The display circuitry is coupled to the processor. The display circuitry includes a display device and a display controller. The display device is configured to produce a viewable image. The display controller is configured to drive the display device. The display controller includes reconstruction circuitry configured to produce an image at a resolution of the display device based on the foveated image data received from the processor.
    Type: Grant
    Filed: May 11, 2020
    Date of Patent: June 1, 2021
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Jeff Kempf, Dan Morgan
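    A hedged sketch of the general foveation and reconstruction idea: the processor sends a full-resolution foveal crop plus a downsampled periphery, and the reconstruction step produces an image at display resolution. The encoding (simple decimation and nearest-neighbor upsampling) is an assumption for illustration, not the TI display pipeline.
    ```python
    # Sketch only: foveated image data = low-res periphery + full-res fovea; reconstruction
    # upsamples the periphery to display resolution and pastes the fovea back in.
    import numpy as np

    def foveate(frame, fovea_box, factor=4):
        x0, y0, x1, y1 = fovea_box
        periphery = frame[::factor, ::factor].copy()   # low-resolution periphery
        fovea = frame[y0:y1, x0:x1].copy()             # full-resolution region at the gaze point
        return periphery, fovea

    def reconstruct(periphery, fovea, fovea_box, shape, factor=4):
        out = np.kron(periphery, np.ones((factor, factor), periphery.dtype))[:shape[0], :shape[1]]
        x0, y0, x1, y1 = fovea_box
        out[y0:y1, x0:x1] = fovea                      # restore full detail where the eye looks
        return out

    frame = np.arange(64 * 64, dtype=np.float32).reshape(64, 64)
    p, f = foveate(frame, (16, 16, 32, 32))
    display_image = reconstruct(p, f, (16, 16, 32, 32), frame.shape)
    ```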
  • Patent number: 11010984
    Abstract: Systems and methods include transforming a digital file into a three-dimensional object that is spatially positioned in a three-dimensional virtual environment to visually organize the digital file relative to the three-dimensional virtual environment. Embodiments of the present disclosure relate to receiving the digital file that includes digital file parameters and is in a file format. The digital file is transformed into the three-dimensional object based on the digital file parameters associated with the digital file. The three-dimensional object is representative of the presentation of the digital file when executed by the computing device. The three-dimensional object is spatially positioned at a spatial location in the three-dimensional environment based on the digital file parameters of the digital file.
    Type: Grant
    Filed: July 11, 2019
    Date of Patent: May 18, 2021
    Assignee: Sagan Works, Inc.
    Inventors: Erika Block, Simon McCluskey, Donald Hicks
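    Purely as an illustration of the described transformation, the sketch below maps assumed digital file parameters (name, type, size) to a simple 3D object description with a spatial position; the actual parameters and layout rules are not specified by the abstract.
    ```python
    # Sketch only: derive a 3D object and a position from file parameters (assumed mapping).
    import hashlib
    import math

    def file_to_object(name, file_type, size_bytes):
        type_angle = {"pdf": 0.0, "image": 2.1, "video": 4.2}.get(file_type, 5.5)  # group by type
        radius = 2.0 + (int(hashlib.md5(name.encode()).hexdigest(), 16) % 100) / 50.0
        position = (radius * math.cos(type_angle), 0.0, radius * math.sin(type_angle))
        scale = max(0.2, min(2.0, math.log10(size_bytes + 1) / 3.0))               # scale by size
        return {"name": name, "shape": "box", "scale": scale, "position": position}

    obj = file_to_object("report.pdf", "pdf", 250_000)  # 3D object spatially organized by type/size
    ```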
  • Patent number: 11010952
    Abstract: A representation of a surface of one or more objects that is positioned in a virtual space is obtained in a computer animation system. Thereafter, a guide curve specification of a guide curve in the virtual space relative to the surface is received. Thereafter, the computer animation system computes a first set of tangent vector values for differentiable locations along the guide curve and computes a second set of tangent vector values for nondifferentiable locations along the guide curve. Using the first set and second set, the computer animation system computes a third set of tangent vector values for locations on the surface other than locations along the guide curve and computes a tangent vector field over the surface from at least the first set of tangent vector values, the second set of tangent vector values, and the third set of tangent vector values.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: May 18, 2021
    Assignee: Weta Digital Limited
    Inventor: Kevin Atkinson
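    The tangent computation can be sketched under simplifying assumptions: a 2D polyline guide curve, averaged one-sided tangents at non-differentiable (corner) points, and nearest-sample propagation of tangents over the rest of the surface. This is a toy stand-in, not Weta Digital's method.
    ```python
    # Sketch only: tangents along a polyline guide curve, spread over surface points.
    import numpy as np

    def curve_tangents(points):
        seg = np.diff(points, axis=0)
        seg = seg / np.linalg.norm(seg, axis=1, keepdims=True)
        tangents = np.empty_like(points)
        tangents[0], tangents[-1] = seg[0], seg[-1]          # endpoints use one-sided tangents
        for i in range(1, len(points) - 1):
            t = seg[i - 1] + seg[i]                          # average of one-sided tangents, also
            tangents[i] = t / np.linalg.norm(t)              # covering corner (non-smooth) points
        return tangents

    def tangent_field(points, tangents, surface_xy):
        d = np.linalg.norm(surface_xy[:, None, :] - points[None, :, :], axis=2)
        return tangents[np.argmin(d, axis=1)]                # nearest-sample assignment off the curve

    curve = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
    field = tangent_field(curve, curve_tangents(curve), np.array([[0.5, 0.5], [2.0, 2.0]]))
    ```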
  • Patent number: 11004240
    Abstract: Disclosed is a hierarchical-division-based point cloud attribute compression method. For point cloud attribute information, a new hierarchical-division-based coding scheme is proposed, wherein a frame of a point cloud is adaptively divided into a "stripe-macroblock-block" hierarchical structure according to the spatial position and color distribution of the point cloud, and stripes are coded independently from one another, increasing coding efficiency, enhancing the fault tolerance of the system, and improving the performance of point cloud attribute compression. The method comprises: (1) inputting a point cloud; (2) division of a k-dimension (KD) tree structure of the point cloud; (3) continuity analysis of point cloud attribute information; (4) stripe division of the point cloud; (5) division of macroblocks and coding blocks of the point cloud; and (6) intra-frame prediction, transformation, quantization, and entropy coding based on a block structure.
    Type: Grant
    Filed: May 15, 2018
    Date of Patent: May 11, 2021
    Inventors: Ge Li, Yi Ting Shao, Qi Zhang, Rong Gang Wang, Tie Jun Huang, Wen Gao
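    Step (2), the KD-tree-style division, is the easiest part to sketch: recursively split the points along the axis of largest extent until blocks are small enough to code. The block-size threshold is an assumption; prediction, transform, quantization, and entropy coding are omitted.
    ```python
    # Sketch only: KD-tree-style recursive division of a point cloud into coding blocks.
    import numpy as np

    def kd_blocks(points, max_points=4):
        if len(points) <= max_points:
            return [points]
        axis = np.argmax(points.max(axis=0) - points.min(axis=0))  # split along widest extent
        order = np.argsort(points[:, axis])
        mid = len(points) // 2
        left, right = points[order[:mid]], points[order[mid:]]
        return kd_blocks(left, max_points) + kd_blocks(right, max_points)

    cloud = np.random.rand(100, 3)   # x, y, z positions
    blocks = kd_blocks(cloud)        # leaf "coding blocks" of the hierarchy
    ```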
  • Patent number: 11002669
    Abstract: A device and a method for analyzing objects, including buildings, built environments and/or environment areas, are proposed. Image data points of an object are gathered and geo-referenced in order to generate object data points. Properties of the object, such as material composition, material state or material properties, are determined based on an evaluation of spectral characteristics. Three-dimensional object models are generated in accordance with the evaluation of spectral characteristics.
    Type: Grant
    Filed: February 9, 2018
    Date of Patent: May 11, 2021
    Assignee: VOXELGRID GMBH
    Inventors: Karl Christian Wetzel, Charoula Andreou
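    A hypothetical sketch of the material-determination step: match each geo-referenced point's measured spectrum against a small library of reference spectra. The library values and the nearest-neighbor criterion are assumptions for illustration, not VoxelGrid's method.
    ```python
    # Sketch only: classify a data point's material by nearest match to reference spectra.
    import numpy as np

    LIBRARY = {
        "concrete":   np.array([0.30, 0.32, 0.35, 0.38]),  # illustrative band reflectances
        "vegetation": np.array([0.05, 0.08, 0.45, 0.50]),
    }

    def classify_material(spectrum):
        return min(LIBRARY, key=lambda name: np.linalg.norm(LIBRARY[name] - spectrum))

    material = classify_material(np.array([0.29, 0.33, 0.36, 0.37]))  # -> "concrete"
    ```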
  • Patent number: 10997785
    Abstract: There is provided a system and method of collecting geospatial object data with mediated reality. The method includes: receiving a determined physical position; receiving a live view of a physical scene; receiving a geospatial object to be collected; presenting a visual representation of the geospatial object to a user with the physical scene; receiving a placement of the visual representation relative to the physical scene; and recording the position of the visual representation anchored into a physical position in the physical scene using the determined physical position.
    Type: Grant
    Filed: September 26, 2019
    Date of Patent: May 4, 2021
    Assignee: VGIS INC.
    Inventor: Alexandre Pestov
  • Patent number: 10997951
    Abstract: Example techniques are described for generating graphics content by assigning a first region of the graphics content to a first tile, assigning a second region of the graphics content to a second tile, determining, at the first tile and at a first resolution, a first set of samples of the graphics content for each pixel of multiple pixels associated with the first region, determining, at the second tile and at a second resolution that is lower than the first resolution, a second set of samples of the graphics content for each pixel of multiple pixels associated with the second region, downsampling the first set of samples into a combined set of samples, preserving samples of the second set of samples to generate a third set of samples with preserved samples, storing the combined set of samples, and storing the third set of samples with preserved samples.
    Type: Grant
    Filed: January 15, 2019
    Date of Patent: May 4, 2021
    Assignee: QUALCOMM Incorporated
    Inventors: Jonathan Wicks, Tate Hornbeck, Robert Vanreenen
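    A loose sketch of the per-tile sampling and storage idea, with MSAA-style sample counts standing in for the two resolutions: the first tile's samples are resolved (downsampled) into a combined set, while the second tile's lower-rate samples are preserved as-is. This is an illustration of the concept, not the Qualcomm implementation.
    ```python
    # Sketch only: two tiles sampled at different rates; one is resolved, one is preserved.
    import numpy as np

    def sample_tile(height, width, samples_per_pixel, value=1.0):
        # Stand-in for rasterizing a tile: one array of samples per pixel.
        return np.full((height, width, samples_per_pixel), value, dtype=np.float32)

    tile1 = sample_tile(4, 4, samples_per_pixel=4)  # first region, higher sampling resolution
    tile2 = sample_tile(4, 4, samples_per_pixel=1)  # second region, lower sampling resolution
    combined = tile1.mean(axis=2)                   # downsample the first set into combined samples
    preserved = tile2                               # preserve the second set of samples
    stored = {"tile1": combined, "tile2": preserved}
    ```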
  • Patent number: 10997764
    Abstract: Embodiments of the present disclosure provide a method and apparatus for generating an animation. A method may include: extracting an audio feature from target speech, segment by segment, to aggregate the audio features into an audio feature sequence composed of the audio feature of each speech segment; inputting the audio feature sequence into a pre-trained mouth-shape information prediction model, to obtain a mouth-shape information sequence corresponding to the audio feature sequence; generating, for mouth-shape information in the mouth-shape information sequence, a face image including a mouth-shape object indicated by the mouth-shape information; and using the generated face image as a key frame of a facial animation, to generate the facial animation.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: May 4, 2021
    Assignee: Beijing Baidu Netcom Science and Technology Co., Ltd.
    Inventors: Jianxiang Wang, Fuqiang Lyu, Xiao Liu, Jianchao Ji
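    The pipeline shape can be sketched as below; the segment length, the frame-energy feature, and the thresholded stand-in for the pre-trained mouth-shape prediction model are all assumptions.
    ```python
    # Sketch only: speech -> per-segment features -> mouth-shape sequence -> animation key frames.
    import numpy as np

    def audio_features(speech, sample_rate, segment_ms=40):
        hop = int(sample_rate * segment_ms / 1000)
        segments = [speech[i:i + hop] for i in range(0, len(speech) - hop + 1, hop)]
        return np.array([[np.sqrt(np.mean(s ** 2))] for s in segments])  # RMS energy per segment

    def predict_mouth_shapes(feature_sequence):
        # Stand-in for the pre-trained mouth-shape information prediction model.
        return ["open" if f[0] > 0.1 else "closed" for f in feature_sequence]

    def keyframes(mouth_shapes):
        # One key frame per mouth-shape entry; a real system would render face images here.
        return [{"frame": i, "mouth": m} for i, m in enumerate(mouth_shapes)]

    speech = np.random.uniform(-1, 1, 16000).astype(np.float32)  # 1 s of placeholder audio
    animation = keyframes(predict_mouth_shapes(audio_features(speech, 16000)))
    ```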
  • Patent number: 10991158
    Abstract: Systems and methods are disclosed for guiding image capture of a subject by determining a location of the subject and presenting on a display graphical guides representative of perspective views of the subject to be captured. Images of the subject may then be captured and additional graphical guides are presented to the user for display for additional images to be captured. Images may be captured in a predetermined sequence of graphical guides or responsive to a user input or camera information. Captured images may be uploaded to a system for additional processing.
    Type: Grant
    Filed: November 13, 2019
    Date of Patent: April 27, 2021
    Assignee: Hover Inc.
    Inventors: William Castillo, Manish Upendran, Ajay Mishra, Adam J. Altman
  • Patent number: 10989542
    Abstract: A method includes retrieving a map of a 3D geometry of an environment, the map including a plurality of non-spatial attribute values, each corresponding to one of a plurality of non-spatial attributes and indicative of a plurality of non-spatial sensor readings acquired throughout the environment; receiving a plurality of sensor readings from a device within the environment, wherein each of the sensor readings corresponds to at least one of the non-spatial attributes; and matching the plurality of received sensor readings to at least one location in the map to produce a determined sensor location.
    Type: Grant
    Filed: January 17, 2020
    Date of Patent: April 27, 2021
    Assignee: Kaarta, Inc.
    Inventors: Ji Zhang, Kevin Joseph Dowling
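    A simplified sketch of the matching step, assuming the map stores one vector of non-spatial attribute values (e.g., radio signal strength, light level) per candidate location and the device's readings are matched by nearest vector. This is not Kaarta's algorithm.
    ```python
    # Sketch only: match received sensor readings to the closest attribute vector in the map.
    import numpy as np

    map_locations  = np.array([[0.0, 0.0], [5.0, 0.0], [5.0, 5.0]])           # candidate positions
    map_attributes = np.array([[-40.0, 52.0], [-60.0, 48.0], [-75.0, 44.0]])  # readings per location

    def locate(readings):
        errors = np.linalg.norm(map_attributes - readings, axis=1)
        return map_locations[np.argmin(errors)]   # determined sensor location

    estimated = locate(np.array([-58.0, 47.5]))   # -> near [5.0, 0.0]
    ```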
  • Patent number: 10987579
    Abstract: A graphics rendering system is disclosed for generating and streaming graphics data of a 3D environment from a server for rendering on a client in 2.5D. 2D textures can be transmitted in advance of frames showing the textures. Data transmitted for each frame can include 2D vertex positions of 2D meshes and depth data. The 2D vertex positions can be positions on a 2D projection as seen from a viewpoint within the 3D environment. Data for each frame can include changes to vertex positions and/or depth data. A prediction system can be used to predict when new objects will be displayed, and textures of those new objects can be transmitted in advance.
    Type: Grant
    Filed: March 28, 2018
    Date of Patent: April 27, 2021
    Assignee: ELECTRONIC ARTS INC.
    Inventors: Igor Borovikov, Mohsen Sardari
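    An illustrative data layout for the per-frame stream, not Electronic Arts' actual protocol: 2D vertex positions and per-vertex depth are sent per frame, and later frames may carry only deltas for vertices that changed (textures having been transmitted in advance).
    ```python
    # Sketch only: per-frame 2.5D updates of projected vertex positions and depths.
    from dataclasses import dataclass, field

    @dataclass
    class FrameUpdate:
        frame_id: int
        vertex_xy: dict[int, tuple[float, float]] = field(default_factory=dict)  # 2D projections
        vertex_depth: dict[int, float] = field(default_factory=dict)             # 2.5D layering

    def apply_update(state_xy, state_depth, update):
        state_xy.update(update.vertex_xy)        # only changed vertices need to be transmitted
        state_depth.update(update.vertex_depth)
        return state_xy, state_depth

    xy, depth = {}, {}
    xy, depth = apply_update(xy, depth, FrameUpdate(0, {0: (0.1, 0.2), 1: (0.3, 0.4)}, {0: 2.0, 1: 3.5}))
    xy, depth = apply_update(xy, depth, FrameUpdate(1, {1: (0.32, 0.41)}, {}))   # delta-only frame
    ```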
  • Patent number: 10984609
    Abstract: Disclosed herein are an apparatus and method for generating a 3D avatar. The method, performed by the apparatus, includes performing a 3D scan of the body of a user using an image sensor and generating a 3D scan model using the result of the 3D scan of the body of the user, matching the 3D scan model and a previously stored template avatar, and generating a 3D avatar based on the result of matching the 3D scan model and the template avatar.
    Type: Grant
    Filed: September 6, 2019
    Date of Patent: April 20, 2021
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Byung-Ok Han, Ho-Won Kim, Ki-Nam Kim, Jae-Hwan Kim, Ji-Hyung Lee, Yu-Gu Jung, Chang-Joon Park, Gil-Haeng Lee
  • Patent number: 10983201
    Abstract: Techniques are disclosed for real-time mapping in a movable object environment. A system for real-time mapping in a movable object environment may include at least one movable object including a computing device, a scanning sensor electronically coupled to the computing device, and a positioning sensor electronically coupled to the computing device. The system may further include a client device in communication with the at least one movable object, the client device including a visualization application which is configured to receive point cloud data from the scanning sensor and position data from the positioning sensor, record the point cloud data and the position data to a storage location, generate a real-time visualization of the point cloud data and the position data as they are received, and display the real-time visualization using a user interface provided by the visualization application.
    Type: Grant
    Filed: October 25, 2019
    Date of Patent: April 20, 2021
    Assignee: DJI Technology, Inc.
    Inventors: Alain Pimentel, Kalyani Premji Nirmal, Comran Morshed, Arjun Sukumar Menon, Weifeng Liu
  • Patent number: 10977862
    Abstract: Visualizing three-dimensional content is complicated when display platforms offer more degrees of freedom for displaying the content than interface tools have for navigating that content. Disclosed are methods and systems for displaying select portions of the content and generating virtual camera positions with associated look angles for the select portions, such as planar geometries of a three-dimensional building, thereby constraining the degrees of freedom for improved navigation through views of the content. Look angles can be associated with axes of the content and fields of view.
    Type: Grant
    Filed: April 15, 2020
    Date of Patent: April 13, 2021
    Assignee: Hover Inc.
    Inventors: Manish Upendran, Adam J. Altman
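    A minimal geometric sketch of generating a virtual camera for one planar geometry, assuming the plane is given by a center point and an outward unit normal; distances and framing rules are not specified by the abstract.
    ```python
    # Sketch only: place a virtual camera back along the plane normal, looking at the plane.
    import numpy as np

    def camera_for_plane(center, normal, distance):
        normal = normal / np.linalg.norm(normal)
        position = center + distance * normal   # step away from the planar geometry
        look_dir = -normal                      # look angle points back at the plane center
        return position, look_dir

    pos, look = camera_for_plane(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]), 5.0)
    ```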
  • Patent number: 10970907
    Abstract: Disclosed herein are a system, a method, and a non-transitory computer readable medium for applying an expression to an avatar. In one aspect, a class of an expression of a face can be determined according to a set of attributes indicating states of portions of the face. In one aspect, a set of blendshapes with respective weights corresponding to the expression of the face can be determined according to the class of the expression of the face. In one aspect, the set of blendshapes with respective weights can be provided as an input to train a machine learning model. In one aspect, the machine learning model can be configured, via training, to generate an output set of blendshapes with respective weights, according to an input image. An image of an avatar may be rendered according to the output set of blendshapes with respective weights.
    Type: Grant
    Filed: July 2, 2019
    Date of Patent: April 6, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Elif Albuz, Melinda Ozel, Tong Xiao, Sidi Fu
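    A toy sketch of the labeling step only: derive an expression class from attribute states and map it to blendshape weights usable as training targets. The attribute names, classes, and weights are invented for illustration and are not Facebook's taxonomy.
    ```python
    # Sketch only: facial attribute states -> expression class -> blendshape weights.
    def expression_class(attributes):
        if attributes.get("mouth_corners") == "up" and attributes.get("eyes") == "open":
            return "smile"
        if attributes.get("brows") == "lowered":
            return "frown"
        return "neutral"

    CLASS_TO_BLENDSHAPES = {
        "smile":   {"mouthSmile": 0.8, "cheekRaise": 0.4},
        "frown":   {"browDown": 0.7, "mouthFrown": 0.5},
        "neutral": {},
    }

    def training_target(attributes):
        # Weights that could be provided as input when training the model.
        return CLASS_TO_BLENDSHAPES[expression_class(attributes)]

    target = training_target({"mouth_corners": "up", "eyes": "open"})  # -> smile blendshapes
    ```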
  • Patent number: 10970919
    Abstract: A method of determining an illumination effect value of a volumetric dataset includes determining, based on the volumetric dataset, one or more parameter values relating to one or more properties of the volumetric dataset at a sample point; and providing the one or more parameter values as inputs to an anisotropic illumination model and thereby determining an illumination effect value relating to an illumination effect at the sample point, the illumination effect value defining a relationship between an amount of incoming light and an amount of outgoing light at the sample point.
    Type: Grant
    Filed: December 4, 2019
    Date of Patent: April 6, 2021
    Assignee: SIEMENS HEALTHCARE GMBH
    Inventor: Felix Dingeldey
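    A hedged sketch using a standard anisotropic (Ward-style) specular term as a stand-in for the anisotropic illumination model: parameters derived at the sample point (shading frame, roughness) yield an illumination effect value relating incoming to outgoing light. This is not the Siemens model.
    ```python
    # Sketch only: evaluate an anisotropic specular term at a volume sample point.
    import numpy as np

    def ward_anisotropic(n, t, b, l, v, ax=0.2, ay=0.6, rho_s=0.5):
        n, t, b, l, v = (u / np.linalg.norm(u) for u in (n, t, b, l, v))
        h = l + v
        h = h / np.linalg.norm(h)
        nl, nv, nh = n @ l, n @ v, n @ h
        if nl <= 0 or nv <= 0:
            return 0.0
        expo = -((h @ t / ax) ** 2 + (h @ b / ay) ** 2) / nh ** 2
        return rho_s * np.exp(expo) / (4 * np.pi * ax * ay * np.sqrt(nl * nv))

    # Outgoing light at the sample = illumination effect value * incoming light * cos(theta).
    n = np.array([0.0, 0.0, 1.0]); t = np.array([1.0, 0.0, 0.0]); b = np.array([0.0, 1.0, 0.0])
    l = np.array([0.3, 0.2, 1.0]); v = np.array([-0.2, 0.1, 1.0])
    effect = ward_anisotropic(n, t, b, l, v)
    outgoing = effect * 1.0 * (n @ (l / np.linalg.norm(l)))
    ```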