Patents Examined by Phi Hoang
  • Patent number: 11366631
    Abstract: Provided is an information processing device provided with an obtaining unit (211) that obtains first physical information of a first user present in a first space, and second physical information of a second user present in a second space, a virtual space generation unit (213) that generates a virtual space on the basis of the first physical information or the second physical information, an operation mode control unit (211) that switches an operation mode in a case where a trigger caused by at least any one of the first user or the second user occurs, and a control unit (210) that interlocks switching of the operation mode with at least one device present in the first space or at least one device present in the second space.
    Type: Grant
    Filed: May 29, 2019
    Date of Patent: June 21, 2022
    Assignee: SONY CORPORATION
    Inventors: Yusuke Sakai, Ryusei Koike, Haruo Oba, Motoki Higashide, Daisuke Miki
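    A minimal sketch of the mode-switching interlock described above: a trigger from either user toggles the operation mode and the new mode is pushed to devices in both spaces. The mode names and device interface are illustrative assumptions, not the patented device.
```python
# Illustrative sketch only; "normal"/"shared" modes and the Device API are assumptions.
class Device:
    def __init__(self, name):
        self.name, self.mode = name, "normal"

    def apply_mode(self, mode):
        self.mode = mode

class OperationModeController:
    def __init__(self, devices_space1, devices_space2):
        self.mode = "normal"
        # Devices in both spaces are interlocked with the mode switch.
        self.devices = list(devices_space1) + list(devices_space2)

    def on_trigger(self, source_user):
        # A trigger from either user toggles the mode and propagates it.
        self.mode = "shared" if self.mode == "normal" else "normal"
        for d in self.devices:
            d.apply_mode(self.mode)
        return self.mode

if __name__ == "__main__":
    ctrl = OperationModeController([Device("display_1")], [Device("speaker_2")])
    print(ctrl.on_trigger("first_user"))
```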
  • Patent number: 11360553
    Abstract: An example disclosed method in accordance with some embodiments includes: receiving head tracking position information from a client device, the head tracking position information associated with a user at the client device; predicting a future head position of the user at a scan-out time for displaying a virtual reality (VR) video frame, wherein the VR video frame is displayed to the user via the client device; determining an overfill factor based on an expected error in the predicted future head position of the user; rendering an overfilled image based on the predicted future head position of the user and the overfill factor; and sending the VR video frame including the overfilled image to the client device for display to the user.
    Type: Grant
    Filed: April 18, 2019
    Date of Patent: June 14, 2022
    Assignee: PCMS Holdings, Inc.
    Inventors: JuHyung Son, Jin Sam Kwak, Hyun Oh Oh, Sanghoon Kim
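    A minimal sketch of the overfill idea in the entry above: the render area is grown just enough to cover the expected error in the predicted head position. The linear yaw predictor, the field-of-view numbers, and all names are illustrative assumptions, not the patented method.
```python
import math

def predict_yaw(yaw_now_deg, yaw_rate_deg_s, latency_s):
    """Linearly extrapolate head yaw to the expected scan-out time (assumption)."""
    return yaw_now_deg + yaw_rate_deg_s * latency_s

def overfill_factor(expected_error_deg, fov_deg):
    """Grow the rendered field of view so the expected prediction error stays inside it."""
    return (fov_deg + 2.0 * expected_error_deg) / fov_deg

def overfilled_resolution(base_w, base_h, factor):
    """Scale the base render target by the overfill factor."""
    return math.ceil(base_w * factor), math.ceil(base_h * factor)

if __name__ == "__main__":
    yaw = predict_yaw(yaw_now_deg=10.0, yaw_rate_deg_s=120.0, latency_s=0.03)
    f = overfill_factor(expected_error_deg=2.5, fov_deg=90.0)
    print(yaw, f, overfilled_resolution(1440, 1600, f))
```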
  • Patent number: 11354807
    Abstract: An apparatus and method for performing multisampling anti-aliasing. For example, one embodiment of an apparatus samples multiple locations within each pixel of an image frame to generate a plurality of image slices. Each image slice comprises a different set of samples for each of the pixels of the image frame. Anti-aliasing is then performed on the image frame using the image slices by first subdividing the plurality of image slices into equal-sized pixel blocks and determining whether each pixel block has one or more different pixel values in different image slices. If so, then edge detection and simple shape detection are performed using pixel data from a pixel block in a single image slice; if not, then edge detection and simple shape detection are performed using the pixel block in multiple image slices.
    Type: Grant
    Filed: January 20, 2021
    Date of Patent: June 7, 2022
    Assignee: Intel Corporation
    Inventor: Filip Strugar
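    A minimal sketch of the per-block comparison across image slices described above. The NumPy representation, the 8x8 block size, and the function names are illustrative assumptions, not Intel's implementation.
```python
import numpy as np

def block_differs_across_slices(slices, y, x, block=8):
    """True if any pixel in this block has different values in different image slices."""
    ref = slices[0][y:y + block, x:x + block]
    return any(not np.array_equal(s[y:y + block, x:x + block], ref)
               for s in slices[1:])

def classify_blocks(slices, block=8):
    """Yield (y, x, differs) for each equal-sized pixel block of the frame."""
    h, w = slices[0].shape[:2]
    for y in range(0, h, block):
        for x in range(0, w, block):
            yield y, x, block_differs_across_slices(slices, y, x, block)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Four hypothetical image slices (one per sample location) for a 32x32 frame.
    slices = [rng.integers(0, 2, (32, 32), dtype=np.uint8) for _ in range(4)]
    print(sum(flag for _, _, flag in classify_blocks(slices)))
```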
  • Patent number: 11356547
    Abstract: A background display method during a call includes: obtaining information on the call; and dynamically displaying a background image on a call interface based on the information.
    Type: Grant
    Filed: September 12, 2020
    Date of Patent: June 7, 2022
    Assignee: BEIJING XIAOMI MOBILE SOFTWARE CO., LTD.
    Inventor: Chunyan Xi
  • Patent number: 11334961
    Abstract: Embodiments relate to circuitry for warping image pyramids for image fusion. An image fusion circuit receives captured images, and generates image pyramids corresponding to the received images to be used for image fusion. A warping circuit warps the first image pyramid based upon one or more warping parameters to align the first image pyramid to the second image pyramid. The warping circuit is a multi-scale warping circuit configured to warp each level of the first image pyramid, using a first warping engine that warps a base level of the image pyramid, and at least one additional warping engine that warps a plurality of scaled levels of the image pyramid in parallel with the first warping engine.
    Type: Grant
    Filed: August 6, 2020
    Date of Patent: May 17, 2022
    Assignee: Apple Inc.
    Inventors: Maxim Smirnov, William T. Warner, David R. Pope, Manching Ko
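    A minimal sketch of warping every level of an image pyramid with one shared alignment parameter, as the abstract above describes. The 2x average-pooling pyramid and the integer-pixel translation warp are deliberate simplifications, not Apple's warping circuit.
```python
import numpy as np

def build_pyramid(img, levels=4):
    """Base level plus successively 2x-downsampled (averaged) levels."""
    pyr = [img.astype(np.float32)]
    for _ in range(levels - 1):
        prev = pyr[-1]
        h, w = (prev.shape[0] // 2) * 2, (prev.shape[1] // 2) * 2
        pyr.append(prev[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return pyr

def warp_pyramid(pyr, shift_xy):
    """Apply the same translation to every level, scaled to that level's resolution."""
    out = []
    for level, img in enumerate(pyr):
        dx, dy = (round(s / (2 ** level)) for s in shift_xy)
        out.append(np.roll(np.roll(img, dy, axis=0), dx, axis=1))
    return out

if __name__ == "__main__":
    pyr = build_pyramid(np.arange(64 * 64, dtype=np.float32).reshape(64, 64))
    warped = warp_pyramid(pyr, shift_xy=(8, 4))
    print([lvl.shape for lvl in warped])
```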
  • Patent number: 11328437
    Abstract: Methods and systems for defocusing a rendered computer-generated image are presented. Pixel values for a pixel array are determined from a scene description. A blur amount for each pixel is determined based on a lens function representing a lens shape and/or effect. A blur amount and blur transparency value are determined for the pixel based on the lens function and pixel depth. A convolution range comprising pixels adjacent to the pixel is determined based on the blur amount. A blend color value is determined for the pixel based on the color value of the pixel, color values of pixels in the convolution range, and the blur transparency value. The blend color value is scaled based on the blend color value and a modified pixel color value is determined from scaled blend color values.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: May 10, 2022
    Assignee: Weta Digital Limited
    Inventor: Peter Hillman
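    A minimal sketch of depth-driven defocus blending in the spirit of the abstract above. The clamped circle-of-confusion-style blur radius and the box gather are illustrative assumptions, not the patented lens function or blend.
```python
import numpy as np

def blur_radius(depth, focus_depth, max_radius=6.0):
    """Blur grows with distance from the focal plane, clamped to a maximum (assumption)."""
    return float(np.clip(abs(depth - focus_depth) / focus_depth * max_radius,
                         0.0, max_radius))

def defocus_pixel(color, depth, x, y, focus_depth):
    """Blend a pixel with its neighbors over a convolution range set by its depth."""
    r = int(round(blur_radius(depth[y, x], focus_depth)))
    if r == 0:
        return color[y, x]
    y0, y1 = max(0, y - r), min(color.shape[0], y + r + 1)
    x0, x1 = max(0, x - r), min(color.shape[1], x + r + 1)
    return color[y0:y1, x0:x1].mean(axis=(0, 1))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    color = rng.random((16, 16, 3)).astype(np.float32)
    depth = np.full((16, 16), 5.0, dtype=np.float32)
    depth[8:, :] = 12.0  # background half is out of focus
    print(defocus_pixel(color, depth, 4, 4, focus_depth=5.0))   # in focus: unchanged
    print(defocus_pixel(color, depth, 4, 12, focus_depth=5.0))  # out of focus: blended
```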
  • Patent number: 11328494
    Abstract: An image processing apparatus in an embodiment of the present disclosure includes: an obtaining unit configured to obtain viewpoint information indicating a change over time in a viewpoint corresponding to a virtual image; and a generation unit configured to generate the virtual image from the viewpoint according to the viewpoint information obtained by the obtaining unit such that among a plurality of objects included in the generated virtual image, an object whose position in the virtual image changes by a first amount according to the change of the viewpoint indicated by the viewpoint information is lower in clearness than an object whose position in the virtual image changes by a second amount smaller than the first amount according to the change of the viewpoint.
    Type: Grant
    Filed: July 12, 2019
    Date of Patent: May 10, 2022
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Daichi Adachi
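    A minimal sketch of the clearness rule described above: objects whose projected position changes more as the virtual viewpoint moves are drawn less clearly. The linear mapping and its threshold are illustrative assumptions.
```python
def clearness(displacement_px, max_displacement_px=40.0):
    """Map an object's per-frame screen displacement to a 0..1 clearness value."""
    d = min(max(displacement_px, 0.0), max_displacement_px)
    return 1.0 - d / max_displacement_px

if __name__ == "__main__":
    # A nearby object sweeping quickly across the image vs. a nearly static distant one.
    print(clearness(35.0))  # large position change -> low clearness (rendered blurrier)
    print(clearness(3.0))   # small position change -> rendered sharply
```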
  • Patent number: 11328393
    Abstract: A method and a device for displaying a set of high dynamic range sonar or radar data are provided. The method makes it possible to visualize scalar data tables having a high dynamic range. The method consists in producing a first image in colors of essentially uniform shade (hue and saturation) for ranges of values exhibiting a low dynamic range, in creating a second image containing only the high-amplitude information that is invisible in the first image, in subjecting the second image to a non-linear low-pass filtering creating a halo that becomes all the greater as the information increases in amplitude, then in rendering this second image according to a color map that is of constant luminance but with strong variations of hue and of saturation. These two images are then combined by a weighted average and renormalized in terms of luminance.
    Type: Grant
    Filed: July 4, 2019
    Date of Patent: May 10, 2022
    Assignee: THALES
    Inventor: Andreas Arnold
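    A minimal sketch of the two-image combination described above: a clipped low-range image and a low-pass-filtered high-amplitude image are averaged. The box filter, the grayscale rendering, and the 0.5/0.5 weights are stand-ins for the patented non-linear filter, colormaps, and luminance renormalization.
```python
import numpy as np

def box_blur(img, k=5):
    """Crude low-pass filter standing in for the halo-creating filter."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def render_hdr(data, low_max=1.0):
    """Split data into a low-range image and a blurred high-amplitude image, then average."""
    low = np.clip(data, 0.0, low_max) / low_max    # first image: low dynamic range
    high = np.clip(data - low_max, 0.0, None)      # high-amplitude residual only
    halo = box_blur(high / (high.max() + 1e-9))    # halo around strong echoes
    return 0.5 * low + 0.5 * halo                  # weighted combination

if __name__ == "__main__":
    data = np.zeros((32, 32))
    data[16, 16] = 50.0                            # one strong echo
    print(render_hdr(data).max())
```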
  • Patent number: 11321812
    Abstract: Disclosed are a display method, a display device, a virtual reality display device, a virtual reality device, and a storage medium. The display method includes: segmenting one frame of image into at least one image region; determining grayscale information of the image region; determining a resolution compression ratio of the image region according to the grayscale information of the image region, wherein a grayscale level of the image region is negatively correlated with the resolution compression ratio; and displaying an image in the image region according to the resolution compression ratio of each image region.
    Type: Grant
    Filed: November 12, 2019
    Date of Patent: May 3, 2022
    Assignees: BEIJING BOE OPTOELECTRONICS TECHNOLOGY CO., LTD., BOE TECHNOLOGY GROUP CO., LTD.
    Inventors: Bingxin Liu, Ziqiang Guo, Jian Sun, Feng Zi, Binhua Sun, Lin Lin, Yakun Wang, Jiyang Shao, Yadong Ding, Qingwen Fan
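    A minimal sketch of choosing a per-region resolution compression ratio from region grayscale, with darker regions compressed more, as the abstract above describes. The region grid and the three-step ratio table are illustrative assumptions.
```python
import numpy as np

def compression_ratio(mean_gray):
    """Higher grayscale level -> lower compression ratio (negative correlation)."""
    if mean_gray < 64:
        return 4   # dark region: compress resolution by 4x
    if mean_gray < 160:
        return 2
    return 1       # bright region: keep full resolution

def per_region_ratios(frame, region=32):
    """Segment the frame into regions and pick a compression ratio for each."""
    h, w = frame.shape
    return [[compression_ratio(frame[y:y + region, x:x + region].mean())
             for x in range(0, w, region)]
            for y in range(0, h, region)]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    frame = rng.integers(0, 256, (128, 128), dtype=np.uint8)
    print(per_region_ratios(frame))
```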
  • Patent number: 11315313
    Abstract: A method of generating a 3D model may include receiving a plurality of 2D images of a physical object captured from a respective plurality of viewpoints in a 3D scan of the physical object in a first process. The method may include receiving a first process 3D mesh representation of the physical object and calculating respective second process estimated position and/or orientation information for each one of the respective plurality of viewpoints of the plurality of 2D images. The method may include generating a second process 3D mesh representation of the physical object using the plurality of 2D images, the second process estimated position and/or orientation information, and the first process 3D mesh representation of the physical object. The method may include generating a 3D model of the physical object by applying surface texture information from the plurality of 2D images to the second process 3D mesh representation of the physical object.
    Type: Grant
    Filed: February 23, 2018
    Date of Patent: April 26, 2022
    Assignee: SONY GROUP CORPORATION
    Inventors: Francesco Michielin, Lars Novak, Fredrik Mattisson
  • Patent number: 11308683
    Abstract: Ray tracing systems and computer-implemented methods perform intersection testing on a bundle of rays with respect to a box. Silhouette edges of the box are identified from the perspective of the bundle of rays. For each of the identified silhouette edges, components of a vector providing a bound to the bundle of rays are obtained and it is determined whether the vector passes inside or outside of the silhouette edge. Results of determining, for each of the identified silhouette edges, whether the vector passes inside or outside of the silhouette edge, are used to determine an intersection testing result for the bundle of rays with respect to the box.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: April 19, 2022
    Assignee: Imagination Technologies Limited
    Inventors: Gregory Clark, Steven J. Clohset, Luke T. Peterson
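    For context, the conventional single-ray slab test against an axis-aligned box; the patented method instead tests a whole bundle of rays against the box's silhouette edges, which this sketch does not implement.
```python
def ray_aabb_intersect(origin, direction, box_min, box_max):
    """Standard slab-method intersection of one ray with an axis-aligned box."""
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            if o < lo or o > hi:   # ray parallel to this slab and outside it
                return False
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
        if t_near > t_far:
            return False
    return t_far >= 0.0

if __name__ == "__main__":
    print(ray_aabb_intersect((0, 0, -5), (0, 0, 1), (-1, -1, -1), (1, 1, 1)))  # True
    print(ray_aabb_intersect((3, 0, -5), (0, 0, 1), (-1, -1, -1), (1, 1, 1)))  # False
```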
  • Patent number: 11288781
    Abstract: A standard dynamic range (SDR) image is received. Composer metadata is generated for mapping the SDR image to an enhanced dynamic range (EDR) image. The composer metadata specifies a backward reshaping mapping that is generated from SDR-EDR image pairs in a training database. The SDR-EDR image pairs comprise SDR images that do not include the SDR image and EDR images that correspond to the SDR images. The SDR image and the composer metadata are encoded in an output SDR video signal. An EDR display operating with a receiver of the output SDR video signal is caused to render an EDR display image. The EDR display image is derived from a composed EDR image composed from the SDR image based on the composer metadata.
    Type: Grant
    Filed: June 13, 2018
    Date of Patent: March 29, 2022
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Neeraj J. Gadgil, Guan-Ming Su, Tao Chen, Yoon Yung Lee
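    A minimal sketch of applying backward-reshaping composer metadata to SDR codewords to compose an EDR image, in the spirit of the abstract above. The second-order polynomial form and its coefficients are illustrative assumptions, not Dolby's metadata format.
```python
import numpy as np

def backward_reshape(sdr, coeffs):
    """Map normalized SDR luma (0..1) to EDR luma with a polynomial whose
    coefficients would travel in the composer metadata (illustrative form)."""
    s = np.clip(sdr, 0.0, 1.0)
    return np.clip(coeffs[0] + coeffs[1] * s + coeffs[2] * s * s, 0.0, 1.0)

if __name__ == "__main__":
    sdr_luma = np.linspace(0.0, 1.0, 5)
    metadata_coeffs = (0.0, 0.35, 0.65)  # hypothetical coefficient values
    print(backward_reshape(sdr_luma, metadata_coeffs))
```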
  • Patent number: 11288953
    Abstract: A method includes accessing a first dataset including aerial imagery data, accessing a second dataset including property boundary data, and identifying property boundaries associated with a geographic area. A plurality of artificial-intelligence (AI) models are applied to the datasets to identify and compute information of interest. Based on the first dataset and constrained by the property boundaries, a building detection model can be applied to identify a building footprint, and a tree detection model can be applied to identify one or more trees. An estimated distance can be determined between each of the trees and a nearest portion of the building footprint as separation data, which can be compared to a defensible space guideline to determine a defensible space adherence score. A wildfire risk map can be generated, including the defensible space adherence score associated with the geographic area.
    Type: Grant
    Filed: March 4, 2021
    Date of Patent: March 29, 2022
    Assignee: THE TRAVELERS INDEMNITY COMPANY
    Inventors: Hoa Ton-That, James Dykstra, John Han, Stefanie M. Walker, Joseph Amuso, George Lee, Kyle J. Kelsey
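    A minimal sketch of the separation and defensible-space scoring step described above: the distance from each detected tree to the nearest edge of the building footprint is compared against a guideline. The footprint geometry, the 30 ft threshold, and the fraction-based score are illustrative assumptions.
```python
import math

def point_to_segment(p, a, b):
    """Distance from point p to segment ab (all 2D tuples, in feet)."""
    ax, ay, bx, by, px, py = *a, *b, *p
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def tree_to_footprint(tree, footprint):
    """Minimum distance from a tree location to the building footprint edges."""
    edges = zip(footprint, footprint[1:] + footprint[:1])
    return min(point_to_segment(tree, a, b) for a, b in edges)

def adherence_score(trees, footprint, guideline_ft=30.0):
    """Fraction of trees that satisfy the defensible-space guideline."""
    if not trees:
        return 1.0
    ok = sum(tree_to_footprint(t, footprint) >= guideline_ft for t in trees)
    return ok / len(trees)

if __name__ == "__main__":
    footprint = [(0, 0), (40, 0), (40, 30), (0, 30)]  # building footprint, feet
    trees = [(45, 15), (100, 100), (10, 32)]
    print(adherence_score(trees, footprint))
```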
  • Patent number: 11270408
    Abstract: A method and apparatus for generating a special deformation effect program file package and a computer readable storage medium are provided. The method includes: acquiring parameter values of deformation effect parameters of at least one deformation region; establishing a correlation between the at least one deformation region and at least one predetermined key point; and according to the at least one deformation region, the parameter values of which have been acquired, and the correlation, generating a special deformation effect program file package.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: March 8, 2022
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Qinqin Xu, Dayu Yue
  • Patent number: 11272153
    Abstract: An information processing apparatus for determining a position of a virtual viewpoint for a virtual viewpoint image generated based on a plurality of images captured by a plurality of imaging apparatuses includes a first acquisition unit configured to acquire position information indicating a position existing within a predetermined range from a field to be captured by the plurality of imaging apparatuses, and a determination unit configured to determine, as the position of the virtual viewpoint for the virtual viewpoint image, a position different from the position indicated by the position information, based on the position information acquired by the first acquisition unit.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: March 8, 2022
    Assignee: Canon Kabushiki Kaisha
    Inventors: Yasushi Shikata, Norihisa Suzuki, Kazuna Maruyama, Tomoaki Arai
  • Patent number: 11270488
    Abstract: An expression animation data processing method is provided for a computer device. The method includes determining a location of a human face in an image and obtaining an avatar model; obtaining current expression data according to the location of the human face in the image and a three-dimensional face model; and obtaining expression change data from the current expression data. The method also includes determining a target split-expression-area that matches the expression change data, the target split-expression-area being selected from split-expression-areas corresponding to the avatar model; and obtaining target basic-avatar-data that matches the target split-expression-area. The method also includes combining the target basic-avatar-data according to the expression change data to generate to-be-loaded expression data; and loading the to-be-loaded expression data into the target split-expression-area to update an expression of an animated avatar corresponding to the avatar model.
    Type: Grant
    Filed: June 8, 2020
    Date of Patent: March 8, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Yifan Guo, Nan Liu, Feng Xue
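    A minimal sketch of picking a split-expression-area from expression change data and blending that area's basic avatar data, loosely following the abstract above. The area names, channels, and linear blend are illustrative assumptions, not Tencent's data format.
```python
import numpy as np

# Hypothetical basic-avatar-data: one delta vector per expression channel,
# grouped by split-expression-area.
BASIC_AVATAR_DATA = {
    "mouth": {"smile": np.array([0.0, 1.0, 0.0]), "open": np.array([0.0, 0.0, 1.0])},
    "eyes":  {"blink": np.array([1.0, 0.0, 0.0])},
}

def target_area(expression_change):
    """Pick the split-expression-area with the largest total expression change."""
    totals = {area: sum(abs(expression_change.get(ch, 0.0)) for ch in chans)
              for area, chans in BASIC_AVATAR_DATA.items()}
    return max(totals, key=totals.get)

def blend(area, expression_change):
    """Weighted sum of the area's basic avatar data = the to-be-loaded expression data."""
    chans = BASIC_AVATAR_DATA[area]
    return sum(expression_change.get(ch, 0.0) * delta for ch, delta in chans.items())

if __name__ == "__main__":
    change = {"smile": 0.7, "open": 0.2, "blink": 0.1}
    area = target_area(change)
    print(area, blend(area, change))
```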
  • Patent number: 11262962
    Abstract: A home appliance and a control method therefor are provided. The home appliance includes a display, a sensor to detect whether a door is opened or closed, and at least one processor configured to control the display to display one or more objects, and based on sensing, by the sensor, at least one of an opening or a closing of the door, provide visual feedback to the one or more objects.
    Type: Grant
    Filed: December 17, 2019
    Date of Patent: March 1, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yongjae Park, Kyuho Jo, Soyoung Yun, Munkeun Lee, Joohwan Hong
  • Patent number: 11263808
    Abstract: Computer systems and methods are described for automatically generating a 3D model, including, with computer processor(s), obtaining geo-referenced images representing the geographic location of a structure containing one or more real façade textures of the structure; locating a geographical position of real façade texture(s) of the structure; selecting base oblique image(s) from the images by analyzing image raster content of the real façade texture depicted in the images with selection logic; analyzing the real façade texture to locate a geographical position of at least one occlusion using pixel pattern recognition of the real façade texture; locating oblique image(s) having an unoccluded image characteristic of the occlusion in the real façade texture; applying the real façade texture to wire-frame data of the structure to create a 3D model of the structure; and applying the unoccluded image characteristic to the real façade texture to remove the occlusion from the real façade texture.
    Type: Grant
    Filed: January 15, 2021
    Date of Patent: March 1, 2022
    Assignee: Pictometry International Corp.
    Inventors: Joseph G. Freund, Ran Gal
  • Patent number: 11263824
    Abstract: Systems and methods for spawning a digital object in an environment are disclosed. Data describing the environment is received. The data includes data describing properties of the environment, a state of the environment, and properties of a plurality of objects within the environment. The data is analyzed to detect and categorize one or more of the plurality of objects, and to detect one or more surfaces related to the plurality of objects. Data is received that describes a placement of the digital object on one of the detected surfaces or detected objects and determines properties of the placement. Conditions are associated with the placed digital object, the conditions including data describing properties of the placement, data describing properties of the detected object, and data describing a state of the detected object. The spawning of the digital object is performed in the environment based on the conditions.
    Type: Grant
    Filed: November 14, 2019
    Date of Patent: March 1, 2022
    Assignee: Unity IPR ApS
    Inventors: Jonathan Manzer Forbes, Hugo van Heuven
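    A minimal sketch of attaching placement conditions to a digital object and spawning it only when the detected surface satisfies them, loosely following the abstract above. The condition fields and surface model are illustrative assumptions, not Unity's runtime API.
```python
from dataclasses import dataclass

@dataclass
class DetectedSurface:
    category: str            # e.g. "table", "floor" (hypothetical categories)
    height_m: float
    state: str = "static"    # e.g. "static", "moving"

@dataclass
class PlacementConditions:
    required_category: str
    min_height_m: float
    required_state: str = "static"

    def satisfied_by(self, surface: DetectedSurface) -> bool:
        return (surface.category == self.required_category
                and surface.height_m >= self.min_height_m
                and surface.state == self.required_state)

def spawn(obj_name: str, surface: DetectedSurface, cond: PlacementConditions):
    """Spawn the digital object only when its placement conditions hold."""
    if cond.satisfied_by(surface):
        return {"object": obj_name, "on": surface.category, "height_m": surface.height_m}
    return None

if __name__ == "__main__":
    table = DetectedSurface(category="table", height_m=0.75)
    cond = PlacementConditions(required_category="table", min_height_m=0.5)
    print(spawn("coffee_cup", table, cond))
```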
  • Patent number: 11250602
    Abstract: Methods, systems, and computer program products for generating concept images of human poses using machine learning models are provided herein. A computer-implemented method includes identifying events from input data by applying a machine learning recognition model to at least a portion of the input data, wherein the identifying comprises (i) detecting multiple entities from the input data and (ii) determining behavioral relationships among at least a portion of the multiple entities; generating, using a machine learning interpretability model and at least a portion of the identified events, images illustrating human poses related to at least a portion of the identified events; outputting at least a portion of the generated images to a user; and updating the machine learning recognition model based at least in part on (i) at least a portion of the generated images and (ii) input from the user.
    Type: Grant
    Filed: December 30, 2020
    Date of Patent: February 15, 2022
    Assignee: International Business Machines Corporation
    Inventors: Samarth Bharadwaj, Saneem Chemmengath, Suranjana Samanta, Karthik Sankaranarayanan