Texture Patents (Class 345/582)
  • Patent number: 11221547
    Abstract: A projector including an optical machine, a lens and a casing is provided. The optical machine is configured to form an optical image. The lens is located on a projection path of the optical machine so that the optical image is projected on the lens. The casing is configured to receive the optical machine and the lens. The casing is provided with a positioning mark, which is aligned with an imaging center of the optical image in a vertical direction and configured to align the imaging center of the optical image with a center of a projection screen.
    Type: Grant
    Filed: October 14, 2020
    Date of Patent: January 11, 2022
    Assignees: BenQ Intelligent Technology (Shanghai) Co., Ltd., BENQ CORPORATION
    Inventors: Tung-Chia Chou, Chin-Fu Chiang
  • Patent number: 11222396
    Abstract: In one embodiment, an apparatus, coupled to a computing system, may include a first-level of data bus comprising first-level data lines. The apparatus may include second-level data buses each including second-level data lines. Each second-level data bus may be coupled to a memory unit. The second-level data lines of each second-level data bus may correspond to a subset of the first-level data lines. The apparatus may include third-level data buses each including third-level data lines. Each third-level data bus may be coupled to a sub-level memory unit. The third-level data lines of each third-level data bus may correspond to a subset of the second-level data lines of a second-level data bus along a structural hierarchy. The apparatus may be configured to allow the computing system to load a data block from the first-level data lines to sub-level memory units through the third-level data buses excluding multiplexing operations.
    Type: Grant
    Filed: March 8, 2021
    Date of Patent: January 11, 2022
    Assignee: Facebook Technologies, LLC
    Inventor: Larry Seiler
  • Patent number: 11217037
    Abstract: Disclosed herein is a point cloud data reception device including a receiver configured to receive a bitstream containing geometry data of point cloud data and/or attribute data of the point cloud data, and/or a decoder configured to decode the point cloud data in the bitstream. Disclosed herein is a point cloud data transmission device including an encoder configured to encode point cloud data, and/or a transmitter configured to transmit a bitstream containing the point cloud data and/or signaling information about the point cloud data.
    Type: Grant
    Filed: August 14, 2020
    Date of Patent: January 4, 2022
    Assignee: LG ELECTRONICS INC.
    Inventors: Yousun Park, Sejin Oh, Hyejung Hur
  • Patent number: 11215845
    Abstract: A virtual try-on process for spectacles includes an approximate positioning and a fine positioning of a spectacle frame on a head of a user. Provided for this purpose are 3D models of the head and the spectacle frame, as well as head metadata based on the model of the head and frame metadata based on the model of the frame. The head metadata contains placement information, in particular a placement point, which can be used for the approximate positioning of the spectacle frame on the head, and/or a placement region which describes a region of the earpiece part of the frame for placement on the ears of the head. A rapid and relatively simple computational positioning of the spectacle frame on the head and a more accurate positioning using a subsequent precise adjustment can be achieved with the aid of the metadata.
    Type: Grant
    Filed: November 30, 2019
    Date of Patent: January 4, 2022
    Assignee: Carl Zeiss Vision International GmbH
    Inventors: Oliver Schwarz, Ivo Ihrke
  • Patent number: 11216990
    Abstract: Anisotropic texture filtering applies a texture at a sampling point in screen space. Calculated texture-filter parameters configure a filter to perform filtering of the texture for the sampling point. The texture for the sampling point is filtered using a filtering kernel having a footprint in texture space determined by the texture-filter parameters. Texture-filter parameters are calculated by generating a first and a second pair of screen-space basis vectors being rotated relative to each other. First and second pairs of texture-space basis vectors are calculated that correspond to the first and second pairs of screen-space basis vectors transformed to texture space under a local approximation of a mapping between screen space and texture space. An angular displacement is determined between a selected pair of the first and second pairs of screen-space basis vectors and screen-space principal axes of a local approximation of the mapping that indicate the maximum and minimum scale factors of the mapping.
    Type: Grant
    Filed: October 19, 2020
    Date of Patent: January 4, 2022
    Assignee: Imagination Technologies Limited
    Inventor: Rostam King
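The parameter calculation described in this abstract rests on a local linear approximation (Jacobian) of the screen-to-texture mapping. As a rough sketch of the underlying math only (the SVD route and helper names below are illustrative assumptions, not the patented calculation), the principal axes and scale factors of such a mapping, and the angular displacement of a basis vector from them, can be computed like this:

```python
import numpy as np

def principal_axes(J):
    """For a local linear approximation x_tex ~= J @ x_screen, the SVD
    J = U @ diag(s) @ Vt yields the screen-space principal axes of the
    mapping (rows of Vt) and its maximum/minimum scale factors (s)."""
    U, s, Vt = np.linalg.svd(J)
    return Vt, s

def angular_displacement(u, v):
    """Unsigned angle between two screen-space directions."""
    c = abs(float(u @ v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(c, 0.0, 1.0)))
```

For example, a mapping that rotates screen space by 45 degrees and then scales by (3, 1) has scale factors 3 and 1, and its first principal axis sits at an angular displacement of pi/4 from the screen-space x axis.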
  • Patent number: 11210852
    Abstract: A method of merging 3D meshes includes receiving a first mesh and a second mesh; performing spatial alignment to register the first mesh and the second mesh in a common world coordinate system; performing mesh clipping on the first mesh and the second mesh to remove redundant mesh vertices; performing geometry refinement around a clipping seam to close up mesh concatenation holes created by mesh clipping; and performing texture blending in regions adjacent the clipping seam to obtain a merged mesh.
    Type: Grant
    Filed: June 15, 2020
    Date of Patent: December 28, 2021
    Assignee: Magic Leap, Inc.
    Inventors: Jianyuan Min, Xiaolin Wei
  • Patent number: 11199394
    Abstract: An apparatus for three-dimensional shape measurement is provided, including a projection device, an image capture device, and an image processing device. The projection device sequentially projects a plurality of structured light beams on a scene during a first projection period and a second projection period. The mean level of the structured light beams during the first projection period is the same as the mean level of the structured light beams during the second projection period, and the frequency of the structured light beams during the first projection period is different from the frequency of the structured light beams during the second projection period. The image capture device captures an image of the scene within the projection time of each of the structured light beams. The image processing device obtains a three-dimensional shape of a to-be-measured object in the scene according to the images.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: December 14, 2021
    Assignee: BENANO INC.
    Inventors: Liang-Pin Yu, Yeong-Feng Wang, Chun-Di Chen
  • Patent number: 11170556
    Abstract: A method of transmitting point cloud data in accordance with embodiments may include encoding point cloud data, and/or transmitting a bitstream containing the encoded point cloud data and metadata for the point cloud data. In addition, a method of receiving point cloud data in accordance with embodiments may include receiving a bitstream containing point cloud data and metadata, and/or decoding the point cloud data.
    Type: Grant
    Filed: July 1, 2020
    Date of Patent: November 9, 2021
    Assignee: LG ELECTRONICS INC.
    Inventor: Sejin Oh
  • Patent number: 11164343
    Abstract: Techniques are disclosed for populating a region of an image with a plurality of brush strokes. For instance, the image is displayed, with the region of the image bounded by a boundary. A user input is received that is indicative of a user-defined brush stroke within the region. One or more synthesized brush strokes are generated within the region, based on the user-defined brush stroke. In some examples, the one or more synthesized brush strokes fill at least a part of the region of the image. The image is displayed, along with the user-defined brush stroke and the one or more synthesized brush strokes within the region of the image.
    Type: Grant
    Filed: October 10, 2020
    Date of Patent: November 2, 2021
    Assignee: Adobe Inc.
    Inventors: Vineet Batra, Praveen Kumar Dhanuka, Nathan Carr, Ankit Phogat
  • Patent number: 11157732
    Abstract: A system, method and computer program product for hand-drawing diagrams including text and non-text elements on a computing device are provided. The computing device has a processor and a non-transitory computer readable medium for detecting and recognizing hand-drawing diagram element input under control of the processor. Display of input diagram elements in interactive digital ink is performed on a display device associated with the computing device. One or more of the diagram elements are associated with one or more other of the diagram elements in accordance with a class and type of each diagram element. The diagram elements are re-displayed based on one or more interactions with the digital ink received and in accordance with the one or more associations.
    Type: Grant
    Filed: April 14, 2020
    Date of Patent: October 26, 2021
    Assignee: MYSCRIPT
    Inventors: Romain Bednarowicz, Robin Mélinand, Claire Sidoli, Fabien Ric, Khaoula Elagouni, David Hebert, Fabio Valente, Gregory Cousin, Maël Nagot, Cyril Cerovic, Anne Bonnaud
  • Patent number: 11159742
    Abstract: An apparatus for high-speed video is described herein. The apparatus includes a camera array, wherein each camera of the camera array is to capture an image at a different time offset resulting in a plurality of images. The apparatus also includes a controller to interleave the plurality of images in chronological order and a view synthesis unit to synthesize a camera view from a virtual camera for each image of the plurality of images. Additionally, the apparatus includes a post-processing unit to remove any remaining artifacts from the plurality of images.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: October 26, 2021
    Assignee: Intel Corporation
    Inventors: Maha El Choubassi, Oscar Nestares
  • Patent number: 11158110
    Abstract: When sampling a pair of mipmaps during anisotropic filtering of a texture to provide an output sampled texture value for use when rendering an output in a graphics processing system, more positions along an anisotropy direction are sampled in the more detailed mipmap level than in the less detailed mipmap level. Each position that is sampled may have a single sample taken for it, or may be supersampled.
    Type: Grant
    Filed: January 6, 2021
    Date of Patent: October 26, 2021
    Assignee: Arm Limited
    Inventors: Edvard Fielding, Jorn Nystad
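The idea in this abstract, taking more samples along the anisotropy direction in the more detailed of the two mipmap levels, can be illustrated with a minimal sketch (the nearest-texel sampling, sample counts, and function names here are assumptions for illustration, not the patented scheme):

```python
import numpy as np

def sample_aniso_pair(texture_mips, level, frac, start, aniso_dir, n_samples):
    """Filter between mip `level` and `level + 1`, taking `n_samples`
    positions along the anisotropy line in the detailed level but only
    half as many in the coarser level, then blending by `frac`."""
    def sample_level(mip, n):
        # n evenly spaced positions along the anisotropy line (UV space)
        ts = np.linspace(-0.5, 0.5, n)
        pts = start + ts[:, None] * aniso_dir
        h, w = mip.shape
        xs = np.clip((pts[:, 0] * (w - 1)).astype(int), 0, w - 1)
        ys = np.clip((pts[:, 1] * (h - 1)).astype(int), 0, h - 1)
        return mip[ys, xs].mean()

    detailed = sample_level(texture_mips[level], n_samples)                 # more samples
    coarse = sample_level(texture_mips[level + 1], max(1, n_samples // 2))  # fewer samples
    return (1 - frac) * detailed + frac * coarse
```

Because the detailed level carries higher-frequency content, concentrating samples there is where the extra work pays off; the coarse level is already a low-pass version of the same data.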
  • Patent number: 11153550
    Abstract: Systems, methods, and articles of manufacture are disclosed that enable the compression of depth data and real-time reconstruction of high-quality light fields. In one aspect, spatial compression and decompression of depth images is divided into the following stages: generating a quadtree data structure for each depth image captured by a light field probe and a difference mask associated with the depth image, with each node of the quadtree approximating a corresponding portion of the depth image data using an approximating function; generating, from the quadtree for each depth image, a runtime packed form that is more lightweight and has a desired maximum error; assembling multiple such runtime packed forms into per-probe stream(s); and decoding at runtime the assembled per-probe stream(s). Further, a block compression format is disclosed for approximating depth data by augmenting the block compression format 3DC+(BC4) with a line and two pairs of endpoints.
    Type: Grant
    Filed: September 21, 2018
    Date of Patent: October 19, 2021
    Assignee: Disney Enterprises, Inc.
    Inventors: Kenneth J. Mitchell, Charalampos Koniaris, Malgorzata E. Kosek, David A. Sinclair
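The quadtree stage described above, where each node approximates its portion of the depth image to within a maximum error, can be sketched as follows. This is a minimal illustration using a constant per-node approximation (the abstract allows a general approximating function); the node layout is an assumption, not the patented packed format:

```python
import numpy as np

def build_quadtree(depth, x, y, size, max_err):
    """Approximate the square block at (x, y) with its mean depth;
    subdivide into four children while the max absolute error
    exceeds max_err."""
    block = depth[y:y + size, x:x + size]
    mean = float(block.mean())
    err = float(np.abs(block - mean).max())
    if err <= max_err or size == 1:
        return {"x": x, "y": y, "size": size, "value": mean}  # leaf node
    half = size // 2
    return {"x": x, "y": y, "size": size, "children": [
        build_quadtree(depth, x + dx, y + dy, half, max_err)
        for dy in (0, half) for dx in (0, half)]}

def count_leaves(node):
    return 1 if "value" in node else sum(count_leaves(c) for c in node["children"])
```

A flat depth image compresses to a single leaf, while discontinuities force subdivision only where they occur, which is what makes the packed form lightweight.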
  • Patent number: 11151739
    Abstract: The invention relates to a method for 3D reconstruction of a scene, wherein an event camera (1) is moved on a trajectory (T) along the scene, wherein the event camera (1) comprises a plurality of pixels that are configured to only output events (ek) in presence of brightness changes in the scene at the time (tk) they occur, wherein each event comprises the time (tk) at which it occurred, an address (xk,yk) of the respective pixel that detected the brightness change, as well as a polarity value (pk) indicating the sign of the brightness change, wherein a plurality of successive events generated by the event camera (1) along said trajectory (T) are back-projected according to the viewpoint (P) of the event camera (1) as viewing rays (R) through a discretized volume (DSI) at a reference viewpoint (RV) of a virtual event camera associated to said plurality of events, wherein said discretized volume (DSI) comprises voxels (V?), and wherein a score function ƒ(X) associated to the discretized volume (DSI) is determined.
    Type: Grant
    Filed: August 24, 2017
    Date of Patent: October 19, 2021
    Inventors: Henri Rebecq, Guillermo Gallego Bonet, Davide Scaramuzza
  • Patent number: 11153551
    Abstract: An apparatus and a method are provided. The apparatus includes a light source configured to project light in a changing pattern that reduces the light's noticeability; collection optics through which light passes and forms an epipolar plane with the light source; and an image sensor configured to receive light passed through the collection optics to acquire image information and depth information simultaneously. The method includes projecting light by a light source in a changing pattern that reduces the light's noticeability; passing light through collection optics and forming an epipolar plane between the collection optics and the light source; and receiving in an image sensor light passed through the collection optics to acquire image information and depth information simultaneously.
    Type: Grant
    Filed: January 27, 2020
    Date of Patent: October 19, 2021
    Inventors: Ilia Ovsiannikov, Yibing Michelle Wang, Peter Deane
  • Patent number: 11135517
    Abstract: A plurality of terrain/background objects each including at least a part of a cylindrical curved surface are arranged within a virtual space, and an operation object to be operated on the basis of an operation input on an operation device is controlled within the virtual space. Furthermore, the plurality of terrain/background objects are rotated about rotation center axes on the basis of a movement instruction input to the operation device such that a relative positional relationship between each of the plurality of terrain/background objects and the operation object is changed. Then, an image of the virtual space to be displayed on a display device is generated using a virtual camera.
    Type: Grant
    Filed: January 31, 2020
    Date of Patent: October 5, 2021
    Assignee: NINTENDO CO., LTD.
    Inventors: Atsushi Matsumoto, Tetsuya Satoh
  • Patent number: 11127169
    Abstract: A three-dimensional data encoding method includes: extracting, from first three-dimensional data, second three-dimensional data having an amount of a feature greater than or equal to a threshold; and encoding the second three-dimensional data to generate first encoded three-dimensional data. For example, the three-dimensional data encoding method may further include encoding the first three-dimensional data to generate the second encoded three-dimensional data.
    Type: Grant
    Filed: December 6, 2018
    Date of Patent: September 21, 2021
    Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
    Inventors: Toshiyasu Sugio, Takahiro Nishi, Tadamasa Toma, Toru Matsunobu, Satoshi Yoshikawa, Tatsuya Koyama
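The extraction step in this abstract, keeping only the three-dimensional points whose feature amount meets a threshold to form the "second three-dimensional data", reduces to a simple filter. The sketch below is illustrative only (point and feature arrays are assumptions; the patent's feature definition and encoding are not reproduced):

```python
import numpy as np

def extract_feature_points(points, features, threshold):
    """Return the subset of a point cloud whose per-point feature
    amount is greater than or equal to the threshold; this subset
    is what would be encoded as the first encoded stream."""
    mask = features >= threshold
    return points[mask]
```

The full cloud can still be encoded separately as a second stream, so a decoder may choose between the feature-rich subset and the complete data.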
  • Patent number: 11120607
    Abstract: An information generating apparatus generates a 3D model expressing a three-dimensional object by using shape data indicating a shape of an object and texture data representing a texture to be assigned to a surface of the shape and containing first information describing a first constituent element contained in the shape data, second information describing a combination of the first constituent element and a second constituent element of the texture assigned in association with the first constituent element, third information describing, by using a value indicating the combination, a shape of each face of the three-dimensional object and a texture to be assigned, and fourth information describing a second constituent element contained in the texture data. The third information describes at least the first constituent element by using a value different from a coordinate value of a vertex of the shape.
    Type: Grant
    Filed: April 28, 2020
    Date of Patent: September 14, 2021
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Tomokazu Sato
  • Patent number: 11106853
    Abstract: A method of designing a 3D Integrated Circuit, the method comprising: performing partitioning to at least a logic strata comprising logic and a memory strata comprising memory; then performing a first placement of said logic strata using a 2D placer executed by a computer, wherein said 2D placer is a Computer Aided Design (CAD) tool for two-dimensional devices, wherein said 3D Integrated Circuit comprises through silicon vias for connection between said logic strata and said memory strata; and performing a second placement of said memory strata based on said first placement, wherein said memory comprises a first memory array, wherein said logic comprises a first logic circuit controlling said first memory array, wherein said first placement comprises placement of said first logic circuit, and wherein said second placement comprises placement of said first memory array based on said placement of said first logic circuit.
    Type: Grant
    Filed: May 4, 2021
    Date of Patent: August 31, 2021
    Assignee: MONOLITHIC 3D INC.
    Inventors: Zvi Or-Bach, Zeev Wurman
  • Patent number: 11106902
    Abstract: Certain embodiments detect human-object interactions in image content. For example, human-object interaction metadata is applied to an input image, thereby identifying contact between a part of a depicted human and a part of a depicted object. Applying the human-object interaction metadata involves computing a joint-location heat map by applying a pose estimation subnet to the input image and a contact-point heat map by applying an object contact subnet to the input image. The human-object interaction metadata is generated by applying an interaction-detection subnet to the joint-location heat map and the contact-point heat map. The interaction-detection subnet is trained to identify an interaction based on joint-object contact pairs, where a joint-object contact pair includes a relationship between a human joint location and a contact point. An image search system or other computing system is provided with access to the input image having the human-object interaction metadata.
    Type: Grant
    Filed: March 13, 2018
    Date of Patent: August 31, 2021
    Assignee: ADOBE INC.
    Inventors: Zimo Li, Vladimir Kim, Mehmet Ersin Yumer
  • Patent number: 11100165
    Abstract: A method for making modified content available includes storing an item comprising contents. A modification procedure to be performed on the item to modify the contents is identified. The method includes generating a file identifier to represent the item such that, upon a request to access the item being received, the modification procedure is performed on the item using the file identifier and the modified contents are provided in response to the request. A method for making modified content available includes receiving a request to access a file identifier that represents an item comprising contents. After receiving the request, a modification procedure to modify the contents is performed. The modification procedure is identified using the file identifier. The modified contents are provided in response to the request. A system includes an application program, a repository and a redirector.
    Type: Grant
    Filed: March 14, 2018
    Date of Patent: August 24, 2021
    Assignee: Google LLC
    Inventors: Michael B. Herf, Sigurdur Asgeirsson
  • Patent number: 11094139
    Abstract: A method and device to simulate, visualize, and compare 3D surfaces. The method and device visualize a composition of two matched surface models (M1) and (M2) in order to privilege the visualization of the silhouette of the second representation (R2) of the second surface model (M2), that is, the surface elements which are the most tangent to the viewing direction defined by the optical center (C) of a virtual camera and the point (P) of the surface of the second surface model (M2) considered, and to visualize it by transparency on top of the first surface model (M1). The disclosure is intended in particular to compare anatomical subjects before and after simulated, surgical, or aesthetic procedures.
    Type: Grant
    Filed: June 24, 2019
    Date of Patent: August 17, 2021
    Assignee: QuantifiCare S.A.
    Inventor: Jean Philippe Thirion
  • Patent number: 11093749
    Abstract: A computer system obtains digital video data of at least one physical consumer product (e.g., two or more cosmetics products) in a personal care routine; analyzes the digital video data (e.g., using automated object recognition or gesture recognition techniques); detects at least one physical interaction with the at least one physical consumer product (e.g., two or more applications of cosmetics products) based at least in part on the analysis of the digital video data; and causes customized personal care routine data (e.g., a computer animation simulation or a comparison of the user's routine with routines of other users) to be presented in a user interface. The customized personal care routine data is based at least in part on the at least one physical interaction.
    Type: Grant
    Filed: December 20, 2018
    Date of Patent: August 17, 2021
    Assignee: L'Oreal
    Inventors: Kelsey Norwood, Elisabeth Araujo
  • Patent number: 11087499
    Abstract: A three-dimensional data encoding method includes: extracting, from first three-dimensional data, second three-dimensional data having an amount of a feature greater than or equal to a threshold; and encoding the second three-dimensional data to generate first encoded three-dimensional data. For example, the three-dimensional data encoding method may further include encoding the first three-dimensional data to generate the second encoded three-dimensional data.
    Type: Grant
    Filed: December 6, 2018
    Date of Patent: August 10, 2021
    Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
    Inventors: Toshiyasu Sugio, Takahiro Nishi, Tadamasa Toma, Toru Matsunobu, Satoshi Yoshikawa, Tatsuya Koyama
  • Patent number: 11074746
    Abstract: A system and method for generating a 3D model and/or map of a geographic region is disclosed. A computer designates a geographic region and a number of aircraft, partitions the designated geographic region into sub-regions, creates waypoints within each sub-region, and plans missions for each aircraft to fly to each waypoint and take pictures. The aircraft are configured to accept and perform missions from the computer, and the computer receives images from the aircraft, assigns each image to a sub-region, and transmits each sub-region and its images, as well as instructions, to a computing resource. The computing resource executes the instructions, which perform 3D reconstruction and generate orthophotos and 3D models. The 3D reconstruction comprises trimming distorted portions of the orthophotos and 3D models, and merging the orthophotos and 3D models from each sub-region into a 3D model and/or map of the geographic region.
    Type: Grant
    Filed: December 7, 2018
    Date of Patent: July 27, 2021
    Assignee: Locus Social Inc.
    Inventors: Haowen You, Shaofeng Yang
  • Patent number: 11069117
    Abstract: Systems and methods for generating three-dimensional models having regions of various resolutions are provided. In particular, imagery data can be captured and utilized to generate three-dimensional models. Regions of texture can be mapped to regions of a three-dimensional model when rendered. Resolutions of texture can be selectively altered and regions of texture can be selectively segmented to reduce texture memory cost. Texture can be algorithmically generated based on alternative texturing techniques. Models can be rendered having regions at various resolutions.
    Type: Grant
    Filed: May 6, 2019
    Date of Patent: July 20, 2021
    Assignee: Matterport, Inc.
    Inventors: Daniel Ford, Matthew Tschudy Bell, David Alan Gausebeck, Mykhaylo Kurinnyy
  • Patent number: 11055898
    Abstract: In one embodiment, a method for determining the color for a sample location includes using a computing system to determine a sampling location within a texture that comprises a plurality of texels. Each texel may encode a distance field and a color index. The system may select, based on the sampling location, a set of texels in the plurality of texels to use to determine a color for the sampling location. The system may compute an interpolated distance field based on the distance fields of the set of texels. The system may select, based on the interpolated distance field, a subset of the set of texels. The system may select a texel from the subset of texels based on a distance between the texel and the sampling location. The system may then determine the color for the sampling location using the color index of the selected texel.
    Type: Grant
    Filed: September 26, 2019
    Date of Patent: July 6, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Larry Seiler, Alexander Nankervis, John Adrian Arthur Johnston
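The sampling procedure in this abstract, interpolate the distance fields, keep the texels on the matching side of the edge, then take the color index of the nearest of those texels, can be sketched as below. The inverse-distance weighting stands in for whatever interpolation the filter actually uses, and the record layout is an assumption for illustration:

```python
import numpy as np

def sample_color(texels, positions, sample_pos, palette):
    """texels: list of (distance_field, color_index) records;
    positions: texel centers as an (N, 2) array.
    Interpolate the distance fields, keep only texels on the same side
    of the zero crossing as the interpolated value, then return the
    palette color of the nearest such texel."""
    dists = np.array([t[0] for t in texels])
    d2 = np.linalg.norm(positions - sample_pos, axis=1)
    w = 1.0 / (d2 + 1e-6)            # inverse-distance interpolation weights
    w /= w.sum()
    interp = float(w @ dists)        # interpolated distance field
    same_side = [i for i in range(len(texels))
                 if (dists[i] >= 0) == (interp >= 0)]
    nearest = min(same_side, key=lambda i: d2[i])
    return palette[texels[nearest][1]]
```

Selecting a single texel's color index, rather than blending indices, is what keeps edges between indexed colors sharp.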
  • Patent number: 11049291
    Abstract: Systems and methods configured to determine appearance of woven and knitted textiles at the ply-level are presented herein. Exemplary embodiments may: obtain an input pattern of a textile, the input pattern comprising a two-dimensional weave pattern; obtain appearance information, the appearance information including one or more of color, transparency, or roughness; determine ply curve geometry based on ply-level fiber details making up individual plys; generate an image simulating an appearance of the textile based on the two-dimensional weave pattern, the appearance information, and the ply curve geometry so that the image simulating the appearance of the textile takes into account the ply-level fiber details; and/or perform other operations.
    Type: Grant
    Filed: June 10, 2020
    Date of Patent: June 29, 2021
    Assignee: Luxion, Inc.
    Inventors: Zahra Montazeri, Søren Gammelmark, Shuang Zhao, Henrik Wann Jensen
  • Patent number: 11037520
    Abstract: An embodiment of the invention may include a method, computer program product and system for operating an electronic display device. An embodiment may include displaying, using a first refresh rate, first content on a first partition of a display area of the electronic display device. An embodiment may include displaying, using a second refresh rate, second content on a second partition of the display area of the electronic display device. The first refresh rate is different from the second refresh rate.
    Type: Grant
    Filed: November 7, 2019
    Date of Patent: June 15, 2021
    Assignee: International Business Machines Corporation
    Inventors: Adam Benjamin Childers, Raquel Norel, Natesan Venkateswaran, Carlos Alberto Hoyos, Jayapreetha Natesan, Yuk L. Chan, Susan Shumway
  • Patent number: 11037326
    Abstract: An individual identifying device includes a conversion unit and an alignment unit. The conversion unit performs frequency conversion on an image obtained by imaging an object. The alignment unit performs alignment of an image for extracting a feature amount for identifying an individual of the object, based on a first subregion in the image after the frequency conversion.
    Type: Grant
    Filed: November 30, 2016
    Date of Patent: June 15, 2021
    Assignee: NEC CORPORATION
    Inventors: Toru Takahashi, Rui Ishiyama
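Frequency-domain alignment of two images, as this abstract describes in general terms, is commonly done with phase correlation. The sketch below shows that standard technique as an illustrative stand-in only; the patent's subregion selection and feature extraction are not reproduced:

```python
import numpy as np

def estimate_shift(img_a, img_b):
    """Estimate the (dy, dx) translation that maps img_b onto img_a
    via the normalized cross-power spectrum (phase correlation)."""
    Fa = np.fft.fft2(img_a)
    Fb = np.fft.fft2(img_b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12       # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = img_a.shape
    if dy > h // 2:                      # wrap into signed shift range
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```

Because only the spectral phase is kept, the correlation peak is sharp even when the two images differ in overall brightness or contrast, which is why frequency-domain alignment is robust for identification tasks.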
  • Patent number: 11037356
    Abstract: A system and method for performing non-graphical algorithm calculations on a GPU (graphics processing unit), by adapting the non-graphical algorithm to be executed according to the texture mapping calculation functions of the GPU, for example within the Web Browser environment. The non-graphical algorithm preferably relates to comparison of a plurality of data points. Each data point may relate to any unit of information, including but not limited to a document (for a document comparison algorithm), information about movements of a unit (for a collision detection algorithm), or determination of interactions between two or more nodes on a graph, such as, for example and without limitation, determining such interactions in a social media channel.
    Type: Grant
    Filed: September 24, 2019
    Date of Patent: June 15, 2021
    Assignee: ZIGNAL LABS, INC.
    Inventors: Alex Smith, Andras Benke, Jonathan R. Dodson, Michael Kramer, Fabien Vives, Adam Beaugh, Christopher Miller
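The texture-mapping trick this abstract alludes to is to treat the output as a 2D "texture" in which pixel (i, j) holds the comparison of data points i and j, so a fragment shader evaluates all pairs in parallel. The sketch below emulates that layout on the CPU with NumPy; cosine similarity is an illustrative choice of comparison, not necessarily the one claimed:

```python
import numpy as np

def pairwise_cosine(doc_vectors):
    """Compute the full pairwise cosine-similarity matrix: each entry
    (i, j) corresponds to one 'texel' of the comparison texture that a
    GPU shader would fill in parallel."""
    V = np.asarray(doc_vectors, dtype=float)
    norms = np.linalg.norm(V, axis=1, keepdims=True)
    V = V / np.where(norms == 0, 1, norms)   # guard against zero vectors
    return V @ V.T
```

On an actual GPU, the document vectors would be uploaded as input textures and each output texel computed by a shader sampling the two corresponding rows, which is how a browser (e.g. via WebGL) can run such comparisons without general-purpose compute APIs.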
  • Patent number: 11027201
    Abstract: The invention relates to a computer implemented method for controlling the display of a tile image on a display of the computer device, the method comprising: storing in a computer memory at the computer device, image texture data comprising a plurality of sets of predefined masks, each set of predefined masks for forming a respective tile image; selecting at random a tile image for display from a plurality of tile images; determining a location of each mask in the set of predefined masks for forming the selected tile image, in the image texture data; and supplying an indication of said location to a shader program executed on the computer device to control the shader program to use the set of predefined masks to form the selected tile image on said display.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: June 8, 2021
    Assignee: KING.COM LTD.
    Inventors: Juan Antonio Moya, David Picon, Oriol Canudas
  • Patent number: 11017589
    Abstract: 3-D rendering systems include a rasterization section that can fetch untransformed geometry, transform geometry and cache data for transformed geometry in a memory. As an example, the rasterization section can transform the geometry into screen space. The geometry can include one or more of static geometry and dynamic geometry. The rasterization section can query the cache for presence of data pertaining to a specific element or elements of geometry, and use that data from the cache, if present, and otherwise perform the transformation again, for actions such as hidden surface removal. The rasterization section can receive, from a geometry processing section, tiled geometry lists and perform the hidden surface removal for pixels within respective tiles to which those lists pertain.
    Type: Grant
    Filed: January 31, 2020
    Date of Patent: May 25, 2021
    Assignee: Imagination Technologies Limited
    Inventor: John W. Howson
  • Patent number: 11019363
    Abstract: Method and device for encoding a point cloud that represents a three-dimensional (3D) object. One or more groups of temporally successive pictures are obtained. Each picture of the one or more groups comprises a first set of images, the images being spatially arranged in a same manner in each picture of the one or more groups. A second set of projections is associated with the one or more groups, a unique projection being associated with each image in such a way that a same projection is associated with only one single image and all projections are associated with the images. A first information representative of the projections is encoded. The point cloud may then be encoded according to the obtained pictures. A corresponding method and device for decoding a bitstream that comprises data representative of the encoded picture(s) are also described.
    Type: Grant
    Filed: July 9, 2018
    Date of Patent: May 25, 2021
    Assignee: INTERDIGITAL VC HOLDINGS, INC.
    Inventors: Sebastien Lasserre, Jean-Claude Chevet, Yannick Olivier
  • Patent number: 11010951
    Abstract: In one embodiment, a system may capture one or more images of a user using one or more cameras, the one or more images depicting at least an eye and a face of the user. The system may determine a direction of a gaze of the user based on the eye depicted in the one or more images. The system may generate a facial mesh based on depth measurements of one or more features of the face depicted in the one or more images. The system may generate an eyeball texture for an eyeball mesh by processing the direction of the gaze and the facial mesh using a machine-learning model. The system may render an avatar of the user based on the eyeball mesh, the eyeball texture, the facial mesh, and a facial texture.
    Type: Grant
    Filed: January 9, 2020
    Date of Patent: May 18, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Gabriel Bailowitz Schwartz, Jason Saragih, Tomas Simon Kreuz, Shih-En Wei, Stephen Anthony Lombardi
  • Patent number: 11010958
    Abstract: A method of generating an image of a subject in a scene includes obtaining a first image and a second image of a subject in a scene, each image corresponding to a different respective viewpoint of the subject, each image being captured by a different respective camera, where at least some of the subject is occluded in the first image and not the second image by virtue of the different viewpoints, obtaining camera pose data indicating a pose of a camera for each image, re-projecting, based on the difference in camera poses associated with each image, at least a portion of the second image to correspond to the viewpoint from which the first image was captured, and combining the re-projected portion of the second image with at least some of the first image so as to generate a composite image of the subject from the viewpoint of the first image, the re-projected portion of the second image providing image data for at least some of the occluded part or parts of the subject in the first image.
    Type: Grant
    Filed: March 13, 2020
    Date of Patent: May 18, 2021
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Ian Henry Bickerstaff, Andrew Damian Hosfield, Nicola Orrù
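    The re-projection and compositing steps above can be sketched under strong simplifying assumptions: a fronto-parallel scene plane at known depth and two pinhole cameras separated by a horizontal baseline, so the warp reduces to a per-row shift by the stereo disparity f × baseline / depth. All function names are illustrative, not from the patent.

    ```python
    # Sketch: warp one row of the second image into the first camera's
    # viewpoint, then fill the first image's occluded pixels from it.

    def reproject_row(row, f, baseline, depth, fill=0):
        """Shift one image row from camera 2's view into camera 1's view."""
        disparity = round(f * baseline / depth)     # pixels to shift
        return [fill] * disparity + row[:len(row) - disparity]

    def composite(first, reprojected, occluded_mask):
        """Use re-projected pixels wherever the first view is occluded."""
        return [r if occ else f_px
                for f_px, r, occ in zip(first, reprojected, occluded_mask)]

    row2 = [10, 20, 30, 40, 50, 60]                 # row from the second image
    warped = reproject_row(row2, f=100, baseline=0.1, depth=5.0)  # shift by 2
    first = [1, 2, 0, 0, 5, 6]                      # row from the first image
    mask = [False, False, True, True, False, False] # occluded in first view
    merged = composite(first, warped, mask)
    ```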
  • Patent number: 11010629
    Abstract: A method and an apparatus for automatically extracting image features of electrical imaging well logging, wherein the method comprises the steps of: acquiring historical data of electrical imaging well logging; pre-processing the historical data of the electrical imaging well logging to generate an electrical imaging well logging image covering a full hole; recognizing and marking a typical geological feature in the electrical imaging well logging image covering the full hole, obtaining a processed image, and determining the processed image as a training sample according to types of the geological features; constructing a deep learning model including an input layer, a plurality of hidden layers, and an output layer; training the deep learning model using the training sample; using the trained deep learning model, recognizing the type of a geological feature of an electrical imaging well logging image of a well section to be recognized, and performing morphological optimization processing on the recognition result.
    Type: Grant
    Filed: July 10, 2019
    Date of Patent: May 18, 2021
    Assignee: PetroChina Company Limited
    Inventors: Zhou Feng, Ning Li, Hongliang Wu, Kewen Wang, Peng Liu, Yusheng Li, Huafeng Wang, Chen Wang
  • Patent number: 10991111
    Abstract: A method of refining a depth image includes extracting shading information of color pixels from a color image, and refining a depth image corresponding to the color image based on surface normal information of an object included in the shading information.
    Type: Grant
    Filed: March 6, 2018
    Date of Patent: April 27, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Inwoo Ha, Hyong Euk Lee, Minsu Ahn, Young Hun Sung
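    One way the normal-based refinement above can work is by noting that a surface normal (nx, ny, nz) implies a depth slope dz/dx = -nx/nz, which can be integrated along a scanline and blended with the noisy measured depth. The blending weight and all names here are assumptions for illustration, not the patent's method.

    ```python
    # Illustrative normal-guided depth refinement along one scanline.

    def refine_depth_row(noisy_depth, normals, alpha=0.5):
        refined = [noisy_depth[0]]
        for i in range(1, len(noisy_depth)):
            nx, _ny, nz = normals[i]
            predicted = refined[i - 1] + (-nx / nz)  # depth implied by normal
            # blend the measured depth with the normal-predicted depth
            refined.append(alpha * noisy_depth[i] + (1 - alpha) * predicted)
        return refined

    # A flat surface (normals straight at the camera) smooths a noisy spike:
    row = refine_depth_row([1.0, 3.0, 1.0], [(0.0, 0.0, 1.0)] * 3)
    ```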
  • Patent number: 10984608
    Abstract: In the disclosed systems and methods for competitive scene completion, in conjunction with a scene completion challenge, an image of an initial scene and a plurality of markers are displayed. For each user marker selection, virtual furnishing units corresponding to the unit type are displayed. User unit selection results in display of a three-dimensional graphic of the selected virtual furnishing unit at the corresponding coordinates within the scene, thereby creating an augmented scene that comprises the initial scene with three-dimensional graphics of selected virtual furnishing units. The augmented scene is submitted to a remote server. The user is provided with a reward that consists of credits. Responsive to user selection to access the store, a user interface for the store is displayed within the application. Visual representations of tangible products are displayed. The credits are configured for use towards purchase of the tangible products.
    Type: Grant
    Filed: June 24, 2019
    Date of Patent: April 20, 2021
    Inventors: Scott Cuthbertson, Barlow Gilmore, Martin Robaszewski, Brandon Jones, Jakub Fiedorowicz, Christianne Amodio, Ngan Vu, Chris McGill, Chris Hosking, Jeff Tseng, Jose Estuardo Avila, Kristin Darrow, Clara Soroeta, Judy Chen, Naveed Khan, Ramin Shahab
  • Patent number: 10977862
    Abstract: Visualizing three dimensional content is complicated by display platforms capable of more degrees of freedom to display the content than interface tools have to navigate that content. Disclosed are methods and systems for displaying select portions of the content and generating virtual camera positions with associated look angles for the select portions, such as planar geometries of a three dimensional building, thereby constraining the degrees of freedom for improved navigation through views of the content. Look angles can be associated with axes of the content and fields of view.
    Type: Grant
    Filed: April 15, 2020
    Date of Patent: April 13, 2021
    Assignee: Hover Inc.
    Inventors: Manish Upendran, Adam J. Altman
  • Patent number: 10970809
    Abstract: In one embodiment, a computing system may receive a number of texels organized into a two-dimensional array. The system may generate addresses for the texels based on one or more mapping rules which may map the texels from the two-dimension array into a one-dimensional array of a pre-determined size in a texel order. The system may store the texels organized in the one-dimensional array into a memory block having the pre-determined size. The system may read texels from the memory block onto a data bus including a number of data lines corresponding to different combinations of low order address bits of addresses of the texels within the two-dimension array. The texel order of the one-dimensional array may map texels having same low order address bits into same data lines. The system may load the texels directly into a number of buffer memory blocks through the data bus.
    Type: Grant
    Filed: October 1, 2019
    Date of Patent: April 6, 2021
    Assignee: Facebook Technologies, LLC
    Inventor: Larry Seiler
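    A common concrete instance of the 2D-to-1D texel mapping described above is a Morton (Z-order) interleave of the x/y address bits, which keeps texels that share low-order address bits in predictable linear positions. The patent's exact mapping rules are not given in this listing; the code below is illustrative only.

    ```python
    # Map a 2D texel array into a 1D buffer in Morton (Z-order) order.

    def morton_index(x, y, bits=4):
        """Interleave the low `bits` of x and y into one linear index."""
        idx = 0
        for b in range(bits):
            idx |= ((x >> b) & 1) << (2 * b)        # x bits -> even positions
            idx |= ((y >> b) & 1) << (2 * b + 1)    # y bits -> odd positions
        return idx

    def flatten_texels(texels_2d):
        """Store a 2D texel array into a 1D buffer in Morton order."""
        h, w = len(texels_2d), len(texels_2d[0])
        flat = [None] * (h * w)
        for y in range(h):
            for x in range(w):
                flat[morton_index(x, y)] = texels_2d[y][x]
        return flat
    ```

    For a 2x2 block this yields the familiar Z pattern: (0,0), (1,0), (0,1), (1,1).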
  • Patent number: 10964088
    Abstract: In one embodiment, a method for computing a color value for a sampling pixel region includes using a computing system to determine a sampling pixel region within a texture. The texture is associated with mipmap levels having different resolutions of the texture. The mipmap levels include at least a first mipmap level defined by color texels and a second mipmap level defined by distance-field texels. The system may select one of the mipmap levels based on a size of the sampling pixel region and a size of a texel in the selected mipmap level. The system may then compute a color value for the sampling pixel region using the selected mipmap level.
    Type: Grant
    Filed: September 26, 2019
    Date of Patent: March 30, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Larry Seiler, Alexander Nankervis, John Adrian Arthur Johnston
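    The selection step above — choose a mipmap level from the sampling region's size relative to texel size — is conventionally a log2 of the region's footprint in base-level texels. That rule is a common convention assumed here, not quoted from the patent.

    ```python
    import math

    # Pick the mip level whose texel size best matches the sampling region.

    def select_mip_level(footprint_texels, num_levels):
        """footprint_texels: width of the sampling region in level-0 texels."""
        level = math.log2(max(footprint_texels, 1.0))   # 1 texel -> level 0
        return min(max(round(level), 0), num_levels - 1)

    print(select_mip_level(1.0, 8))   # 0: region covers one base texel
    print(select_mip_level(4.0, 8))   # 2: region covers 4 texels
    print(select_mip_level(1024, 8))  # 7: clamped to the coarsest level
    ```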
  • Patent number: 10956498
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for scanning bindings in a webpage. In one aspect, a method includes obtaining, at a browser of a client device, markup for a webpage, initiating a scan of the markup for the webpage to identify bindings in the markup, in response to a time threshold being satisfied during the scan of the markup for the webpage, pausing the scan of the markup and storing location data corresponding to a location in the markup reached by the scan at pause time, rendering, by the browser, a next frame for the webpage, and in response to completion of the rendering of the next frame for the webpage, resuming the scan of the markup for the webpage at the location in the markup.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: March 23, 2021
    Assignee: Google LLC
    Inventors: William Chou, Malte Ubl
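    The pause-and-resume loop above can be sketched as a time-budgeted scan that records where it stopped so the browser can render a frame and pick up from the same location. The token list and binding syntax (`{{...}}`) are stand-ins for the real markup machinery.

    ```python
    import time

    # Scan markup tokens for bindings within a time budget; on timeout,
    # return the index to resume from after the next frame is rendered.

    def scan_bindings(tokens, budget_s=0.005, resume_at=0):
        """Return (bindings found, resume index; -1 when the scan completes)."""
        deadline = time.monotonic() + budget_s
        found = []
        for i in range(resume_at, len(tokens)):
            if time.monotonic() > deadline:
                return found, i              # pause: remember markup location
            if tokens[i].startswith("{{"):   # a binding such as {{user.name}}
                found.append(tokens[i])
        return found, -1                     # scan complete

    tokens = ["<div>", "{{user.name}}", "<p>", "{{cart.total}}", "</p>", "</div>"]
    bindings, pos = scan_bindings(tokens, budget_s=1.0)
    ```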
  • Patent number: 10933588
    Abstract: Techniques and systems for 3D printing with machines having imperfect light projection are described. A technique includes receiving an intensity map comprising a plurality of pixel values, wherein the pixel values of the intensity map represent variations in intensity of light projection of an additive-manufacturing apparatus; receiving cross-sectional images of a three dimensional (3D) model of an object, each cross-sectional image comprising a plurality of pixel values, each pixel value of each cross-sectional image having an X-location and a Y-location; for each cross-sectional image of the 3D model, applying pixel values of the intensity map to corresponding pixel values of the cross-sectional image of the 3D model, to make one of a plurality of additive-manufacturing images that are calibrated to account for the variations in intensity of the light projection; and providing the additive-manufacturing images to the additive-manufacturing apparatus to build the object.
    Type: Grant
    Filed: November 17, 2017
    Date of Patent: March 2, 2021
    Assignee: Autodesk, Inc.
    Inventor: Brian James Adzima
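    The per-pixel calibration above — applying the intensity map to each cross-sectional image so the projector's bright and dim regions are compensated — might look like the following. Whether the correction is a simple division or some other transfer function is an assumption; this is a sketch, not the patented method.

    ```python
    # Scale each slice pixel by (peak intensity / local intensity) so dim
    # projector regions are driven harder and bright regions softer.

    def calibrate_slice(slice_img, intensity_map, max_val=255):
        peak = max(max(row) for row in intensity_map)
        return [[min(max_val, round(px * peak / inten))
                 for px, inten in zip(s_row, i_row)]
                for s_row, i_row in zip(slice_img, intensity_map)]

    intensity = [[200, 100],
                 [100, 200]]           # projector twice as bright on diagonal
    target    = [[100, 100],
                 [100, 100]]           # desired uniform exposure
    calibrated = calibrate_slice(target, intensity)
    ```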
  • Patent number: 10904508
    Abstract: In a system for 360 degree video capture and playback, 360 degree video may be captured, stitched, encoded, decoded, rendered, and played-back. A device for video coding with adaptive projection format may include at least one processor configured to combine at least two different projection formats into a combined projection format and encode a video stream using the combined projection format. The at least one processor may be further configured to decode a video stream that is encoded with a combined projection format that includes at least two different projection formats.
    Type: Grant
    Filed: March 13, 2018
    Date of Patent: January 26, 2021
    Assignee: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED
    Inventor: Minhua Zhou
  • Patent number: 10891796
    Abstract: Methods, systems and processor-readable storage media for rendering a virtual shadow onto a real-world surface in an image are described. Using an augmented reality application running on a computing device having a camera, the method comprises capturing an image of a scene and detecting a real-world surface therein. A transparent occluding virtual plane is rendered onto the real-world surface. A texture associated with a virtual object is then written to a shadow buffer and projected onto the transparent occluding virtual plane.
    Type: Grant
    Filed: August 26, 2019
    Date of Patent: January 12, 2021
    Assignee: EIDOS INTERACTIVE CORP.
    Inventor: Renaud Bédard
  • Patent number: 10884156
    Abstract: The present disclosure provides an image processing method, device, and computer readable storage medium, relating to the field of image processing technology, the method includes: acquiring a first undersampled image to be processed; and reconstructing, according to a mapping relationship between an undersampled image and a normally sampled original image, the first undersampled image to a corresponding first original image, wherein the mapping relationship is obtained by training a machine learning model with a second undersampled image and a normally sampled second original image corresponding to the second undersampled image as training samples.
    Type: Grant
    Filed: December 26, 2018
    Date of Patent: January 5, 2021
    Inventors: Qi Wang, Bicheng Liu, Guangming Xu
  • Patent number: 10878138
    Abstract: A method of managing proxy objects within CAD Models by attaching Meta Data to each Proxy and HD Object and translating 2D coordinates into 3D coordinates from within a 3D CAD model with additional data being added through a 360 viewer. The method enables the user to programmatically swap one Proxy Object with one or more HD Objects. All Proxy Objects and HD Objects are stored in a secure database structure while providing access by users to the proxy objects and all related product information. Non-technical and non-CAD users can configure objects within a space by selecting an object, browsing a catalog of possible alternative objects, viewing specific product details and then selecting the object to replace the selected object. Once a new object is selected, a photo realistic 360 image of a scene is created in real time.
    Type: Grant
    Filed: March 9, 2020
    Date of Patent: December 29, 2020
    Assignee: MITEK HOLDINGS, INC.
    Inventor: Richard T. Ullom
  • Patent number: 10874196
    Abstract: A makeup part generating apparatus is an apparatus for generating a makeup part image to be overlaid on a facial image, and the apparatus includes: a generation-side information acquiring unit (an information acquiring circuitry) that acquires, from a place on a communication network, common attribute information representing a format of part information defining a makeup part image; a makeup part generator that generates the makeup part image; and a part information generator that generates, according to the format represented by the acquired common attribute information, the part information defining the generated makeup part image.
    Type: Grant
    Filed: March 20, 2018
    Date of Patent: December 29, 2020
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventor: Chie Nishi
  • Patent number: 10878278
    Abstract: Visual data is used to locate a device in the real world. The location and orientation of the device is detected by matching the camera view of the device to a computer-generated view of an area surrounding the location. The computer-generated view of the area is constructed by combining various remotely sensed geospatial data including satellite imagery, aerial LIDAR, map data, GIS inventory data, and/or the like. In various embodiments, the disclosed system can not only locate the device but can also provide measurements regarding where the user is looking and 3D measurement of the scene in the device camera view. In various embodiments, the present invention enables devices to have a system that provides spatial intelligence to any camera.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: December 29, 2020
    Assignee: STURFEE, INC.
    Inventors: Anil Cheriyadat, Brent McFerrin, Harini Sridharan, Dilip Patlolla