Patents Examined by Hai Tao Sun
  • Patent number: 11373383
    Abstract: An immersive ecosystem is provided comprising a VR headset configured to display a 3D rendering to a user and sensor(s) configured to measure a user response to dynamic 3D asset(s) in the 3D rendering. The immersive ecosystem further comprises a processor, an AI engine, and a first non-transitory computer-readable storage medium encoded with program code executable for providing the 3D rendering to the VR headset. The AI engine is operably coupled to a second non-transitory computer-readable storage medium configured to store predetermined response values and time values for dynamic 3D assets. The AI engine comprises a third non-transitory computer-readable storage medium encoded with program code executable for receiving the measured user response, comparing the received user response to the predetermined response value at the predetermined time value, modifying dynamic 3D asset(s) based on the comparison, and communicating the modified dynamic 3D asset(s) to the processor for providing within the 3D rendering.
    Type: Grant
    Filed: August 5, 2020
    Date of Patent: June 28, 2022
    Inventor: Tyler H. Gates
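As a rough illustration of the compare-and-modify loop this abstract describes (measure a user response, compare it to a stored value for a given time, adjust the asset on mismatch), here is a minimal sketch; the `DynamicAsset` class, the `intensity` property, and the tolerance are invented for illustration and are not from the patent:

```python
# Illustrative sketch of the compare-and-modify loop described in the
# abstract; the data model and the tolerance are invented for illustration.
from dataclasses import dataclass

@dataclass
class DynamicAsset:
    name: str
    intensity: float  # a stand-in for any modifiable asset property

def update_asset(asset: DynamicAsset, measured: float,
                 expected: float, tolerance: float = 0.1) -> DynamicAsset:
    """Compare the measured user response to the expected value at this
    time point and scale the asset's intensity by the deviation."""
    deviation = measured - expected
    if abs(deviation) > tolerance:
        # User under- or over-reacted: adjust the asset accordingly.
        asset.intensity *= 1.0 + deviation
    return asset

asset = update_asset(DynamicAsset("door_creak", 1.0), measured=0.5, expected=1.0)
print(asset.intensity)  # 0.5
```

A real system would feed the modified asset back to the renderer each frame; here the function simply returns it.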
  • Patent number: 11367257
    Abstract: There is provided an information processing apparatus to bring a three-dimensional model generated in accordance with observation information closer to a real object. The information processing apparatus includes: a control section configured to allocate, to a second three-dimensional model being at least a partial three-dimensional model included in a first three-dimensional model, a definite shaped model having a predetermined shape corresponding to a shape of the second three-dimensional model.
    Type: Grant
    Filed: February 27, 2017
    Date of Patent: June 21, 2022
    Assignee: SONY CORPORATION
    Inventors: Akihiko Kaino, Shunichi Homma, Masaki Fukuchi
  • Patent number: 11367259
    Abstract: A preferred method for dynamically displaying virtual and augmented reality scenes can include determining input parameters, calculating virtual photometric parameters, and rendering a VAR scene with a set of simulated photometric parameters.
    Type: Grant
    Filed: September 8, 2020
    Date of Patent: June 21, 2022
    Assignee: Dropbox, Inc.
    Inventors: Terrence Edward McArdle, Benjamin Zeis Newhouse
  • Patent number: 11341655
    Abstract: An image processing method according to some embodiments of the present disclosure includes: obtaining rendering durations of images of M frames; determining whether the rendering durations of the images of the M frames match a motion gesture requirement; if yes, controlling a difference Δt between a rendering start timing of the image of each frame and a warp processing start timing of the image of a corresponding frame to be less than or equal to a preset difference Δtaim; otherwise, setting a system frame rate to make the rendering duration of the image of each frame match the motion gesture requirement.
    Type: Grant
    Filed: August 12, 2019
    Date of Patent: May 24, 2022
    Assignees: BEIJING BOE OPTOELECTRONICS TECHNOLOGY CO., LTD., BOE TECHNOLOGY GROUP CO., LTD.
    Inventors: Qingwen Fan, Bin Zhao, Yukun Sun, Jinghua Miao, Xuefeng Wang, Wenyu Li, Jinbao Peng, Jianwen Suo, Xi Li, Zhifu Li, Lili Chen, Hao Zhang
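The branch the abstract describes, bounding the render-to-warp gap Δt when recent render durations fit the motion-gesture budget and otherwise lowering the system frame rate, can be sketched as follows; the budget check and the frame-rate ladder are illustrative assumptions, not the patented method:

```python
# Sketch of the decision in the abstract: if the last M render durations
# meet the motion-gesture requirement, keep the render-to-warp gap at or
# below the preset aim; otherwise step down the system frame rate.
# The thresholds and the frame-rate ladder are invented for illustration.

def choose_timing(render_durations_ms, budget_ms, dt_aim_ms):
    """Return ('bound_dt', dt) if rendering meets the budget,
    else ('set_frame_rate', fps) with a slower frame rate."""
    if all(d <= budget_ms for d in render_durations_ms):
        # Requirement met: hold the gap between render start and warp
        # start at or below the preset aim.
        return ("bound_dt", dt_aim_ms)
    worst = max(render_durations_ms)
    for fps in (90, 72, 60, 45, 30):   # illustrative frame-rate ladder
        if 1000.0 / fps >= worst:      # per-frame budget at this rate
            return ("set_frame_rate", fps)
    return ("set_frame_rate", 30)

print(choose_timing([9.0, 10.5, 11.0], budget_ms=11.1, dt_aim_ms=2.0))
```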
  • Patent number: 11340380
    Abstract: A raster log digitization system and method are disclosed. The system and method receive a raster log having one or more values of one or more measurements of a well, each value of each measurement being recorded at a plurality of depths of the well. In the raster log, the value of at least one measurement wraps around the raster log. The system and method may generate, from the received raster log, a digital log that resolves the values of the at least one measurement that wrapped around the raster log.
    Type: Grant
    Filed: February 1, 2021
    Date of Patent: May 24, 2022
    Assignee: Enverus, Inc.
    Inventor: John Neave
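A wrapped curve can be resolved in the same spirit as phase unwrapping: when the digitized trace jumps by more than half the track range, assume it fell off one edge of the track and re-entered from the opposite edge, and accumulate an offset. This is only a hedged sketch of the general idea, not Enverus's algorithm; the track range and jump rule are assumptions:

```python
# Illustrative unwrapping of a curve that "wraps around" a raster log
# track. Large jumps between consecutive samples are interpreted as the
# trace re-entering from the opposite track edge.

def unwrap_track(samples, track_min=0.0, track_max=100.0):
    span = track_max - track_min
    out = [samples[0]]
    offset = 0.0
    for prev, cur in zip(samples, samples[1:]):
        jump = cur - prev
        if jump < -span / 2:      # fell off the right edge, re-entered left
            offset += span
        elif jump > span / 2:     # fell off the left edge, re-entered right
            offset -= span
        out.append(cur + offset)
    return out

print(unwrap_track([80.0, 95.0, 5.0, 20.0]))  # [80.0, 95.0, 105.0, 120.0]
```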
  • Patent number: 11327708
    Abstract: A computer-implemented method for processing operations for integrating audience participation content into virtual reality (VR) content presented by a head mounted display (HMD) of an HMD user is provided. The method includes providing a VR scene to the HMD of the HMD user and receiving indications from one or more spectator devices of respective one or more spectators, the indications corresponding to requests for audience participation content for participating in the VR scene. The method includes sending audience participation content to the one or more spectator devices, the audience participation content being configured to be displayed on respective displays associated with the one or more spectator devices. The audience participation content further includes interactive content for obtaining spectator input from the one or more spectators via the one or more spectator devices, respectively.
    Type: Grant
    Filed: October 13, 2020
    Date of Patent: May 10, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Glenn T. Black, Michael G. Taylor, Todd Tokubo
  • Patent number: 11314429
    Abstract: The present disclosure includes apparatuses and methods for operations using compressed and decompressed data. An example method includes receiving compressed data to a processing in memory (PIM) device and decompressing the compressed data on the PIM device.
    Type: Grant
    Filed: August 1, 2019
    Date of Patent: April 26, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Jeremiah J. Willcock, Perry V. Lea, Anton Korzh
  • Patent number: 11308675
    Abstract: Techniques related to capturing 3D faces using image and temporal tracking neural networks and modifying output video using the captured 3D faces are discussed. Such techniques include applying a first neural network to an input vector corresponding to a first video image having a representation of a human face to generate a morphable model parameter vector, applying a second neural network to an input vector corresponding to the first video image and a second, temporally subsequent video image to generate a morphable model parameter delta vector, generating a 3D face model of the human face using the morphable model parameter vector and the morphable model parameter delta vector, and generating output video using the 3D face model.
    Type: Grant
    Filed: June 14, 2018
    Date of Patent: April 19, 2022
    Assignee: Intel Corporation
    Inventors: Shandong Wang, Ming Lu, Anbang Yao, Yurong Chen
  • Patent number: 11301952
    Abstract: Systems and methods for determining a foreground application and at least one background application from multiple graphics applications executing within an execution environment are disclosed. Pixel data rendered by the foreground application may be displayed in the execution environment while a rendering thread of the background application may be paused.
    Type: Grant
    Filed: August 17, 2020
    Date of Patent: April 12, 2022
    Assignee: Intel Corporation
    Inventors: Tao Zhao, John C. Weast, Brett P. Wang
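The foreground/background behaviour the abstract describes, displaying the foreground application's pixels while the background application's render thread is paused, can be modelled with a per-app pause flag; this toy uses Python threads and is not Intel's implementation:

```python
# Toy model of pausing a background application's render thread while
# the foreground application keeps rendering; a sketch of the idea only.
import threading
import time

class App:
    def __init__(self, name):
        self.name = name
        self.active = threading.Event()   # set -> foreground, clear -> paused
        self.frames = 0

    def render_loop(self, stop):
        while not stop.is_set():
            self.active.wait()            # render thread parks while paused
            if stop.is_set():
                break
            self.frames += 1
            time.sleep(0.001)

stop = threading.Event()
fg, bg = App("game"), App("overlay")
fg.active.set()                           # fg has focus; bg starts paused
threads = [threading.Thread(target=a.render_loop, args=(stop,)) for a in (fg, bg)]
for t in threads:
    t.start()
time.sleep(0.05)
stop.set()
fg.active.set(); bg.active.set()          # wake any parked thread so it exits
for t in threads:
    t.join()
print(fg.frames > 0, bg.frames == 0)      # True True
```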
  • Patent number: 11295474
    Abstract: A gaze point determination method and apparatus, an electronic device, and a computer storage medium are provided. The method includes: obtaining two-dimensional coordinates of eye feature points of at least one eye of a face in an image, the eye feature points including an eyeball center area feature point; obtaining, in a preset three-dimensional coordinate system, three-dimensional coordinates of the corresponding eyeball center area feature point in a three-dimensional face model corresponding to the face in the image, based on the obtained two-dimensional coordinates of the eyeball center area feature point; and obtaining a determination result for a position of a gaze point of the eye of the face in the image according to the two-dimensional coordinates of the feature points other than the eyeball center area feature point among the eye feature points and the three-dimensional coordinates of the eyeball center area feature point in the preset three-dimensional coordinate system.
    Type: Grant
    Filed: December 29, 2019
    Date of Patent: April 5, 2022
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Tinghao Liu, Quan Wang, Chen Qian
  • Patent number: 11295504
    Abstract: Systems, methods, and non-transitory computer-readable media can receive render instructions for rendering an animation. The animation comprises a plurality of layers, each layer comprising one or more layer properties, and a first dynamic property to be defined at runtime prior to rendering the animation. The first dynamic property is mapped to a first set of layer properties of the one or more layer properties. A first dynamic property value is received for the first dynamic property. The first set of layer properties are defined based on the first dynamic property value. The animation is rendered on a computing device based on the render instructions and the first dynamic property value.
    Type: Grant
    Filed: August 1, 2019
    Date of Patent: April 5, 2022
    Assignee: Meta Platforms, Inc.
    Inventors: Robert Alexander Allen, Jr., Michael O'Brien, Nicholas J. Kwiatek, Alexander Zats, Jerod Wanner, Emily Dubinsky Gasca, Eduardo de Mello Maia, Christopher Slowik, Renyu Liu, Rajesh Janakiraman, David Graham McDermott
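The runtime binding the abstract describes, one dynamic property fanning out to a mapped set of layer properties before rendering, can be sketched with plain dictionaries; the layer fields and property names below are invented:

```python
# Sketch of defining a dynamic property at runtime and fanning its value
# out to the layer properties it is mapped to; the data model is
# illustrative, not Meta's animation format.

layers = [
    {"name": "bg",   "fill_color": None, "opacity": 1.0},
    {"name": "icon", "fill_color": None, "opacity": 1.0},
]
# The dynamic property "theme_color" maps to a set of layer properties.
dynamic_map = {"theme_color": [("bg", "fill_color"), ("icon", "fill_color")]}

def apply_dynamic(layers, dynamic_map, values):
    """Define the mapped layer properties from runtime-supplied values."""
    by_name = {layer["name"]: layer for layer in layers}
    for prop, value in values.items():
        for layer_name, layer_prop in dynamic_map[prop]:
            by_name[layer_name][layer_prop] = value
    return layers

apply_dynamic(layers, dynamic_map, {"theme_color": "#ff6600"})
print(layers[0]["fill_color"], layers[1]["fill_color"])  # #ff6600 #ff6600
```

Rendering would then proceed with the fully defined layer properties.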
  • Patent number: 11288877
    Abstract: Provided is a method for matching a virtual scene of a remote scene with a real scene for augmented reality and mixed reality. Multiple coordinate systems are established, and the position relationships between the multiple coordinate systems are determined. The position of a point cloud scene and the position of the near-side virtual scene are determined in the near-side environmental space through real marks, so as to realize high-precision matching and positioning for augmented reality and mixed reality. Based on the positions of objects marked in the real space, the method realizes adaptive and accurate positioning of the virtual objects in augmented reality and mixed reality by overcoming spatial barriers. The scene in the virtual space is accurately superimposed onto the near-side environmental space.
    Type: Grant
    Filed: December 31, 2020
    Date of Patent: March 29, 2022
    Assignee: 38TH RESEARCH INSTITUTE, CHINA ELECTRONICS TECHNOLOGY GROUP CORP.
    Inventors: Yixiong Wei, Hongqi Zhang, Yanlong Zhang, Lei Guo, Hongqiao Zhou, Qianhao Wu, Fujun Tian
  • Patent number: 11289049
    Abstract: A colour processor for mapping an image from source to destination colour gamuts includes an input for receiving a source image having a plurality of source colour points expressed according to the source gamut; a colour characterizer configured to, for each source colour point in the source image, determine a position of intersection of a curve with the boundary of the destination gamut; and a gamut mapper configured to, for each source colour point in the source image: if the source colour point lies inside the destination gamut, apply a first translation factor to translate the source colour point to a destination colour point within a first range of values; or if the source colour point lies outside the destination gamut, apply a second translation factor, different than the first translation factor, to translate the source colour point to a destination colour point within a second range of values.
    Type: Grant
    Filed: July 9, 2020
    Date of Patent: March 29, 2022
    Assignee: Imagination Technologies Limited
    Inventor: Paolo Fazzini
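The two-branch mapping the abstract describes can be illustrated in one dimension, treating a colour point as a distance along the intersection curve toward the destination-gamut boundary; the split point and both translation factors below are illustrative values, not Imagination's:

```python
# Toy 1-D version of the two-branch gamut mapping in the abstract:
# points inside the destination gamut get one translation factor
# (gentle compression into an inner band); points outside get a
# different factor that squeezes them into the remaining outer band.

def map_point(src, boundary=1.0, inner_band=0.9):
    """Map a source colour point's distance along the curve into the
    destination gamut, using a different factor inside vs. outside."""
    if src <= boundary:
        # Inside: compress [0, boundary] -> [0, inner_band]
        return src * (inner_band / boundary)
    # Outside: squeeze the overshoot into [inner_band, boundary)
    overshoot = src - boundary
    return inner_band + (boundary - inner_band) * overshoot / (overshoot + 1.0)

print(map_point(0.5))              # 0.45
print(0.9 < map_point(1.5) < 1.0)  # True
```

Reserving the outer band for out-of-gamut points keeps in-gamut and out-of-gamut colours from colliding after mapping, which is the point of using two distinct translation factors.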
  • Patent number: 11263813
    Abstract: Disclosed herein is an information processing device including an acquiring unit that acquires positional information of a flat surface present in a first space around a first user and positional information of a flat surface present in a second space around a second user, and a transformation parameter determining unit that determines a coordinate transformation parameter for transforming position coordinates of the first space and the second space into position coordinates in a virtual space such that a position of the flat surface present in the first space and a position of the flat surface present in the second space coincide with each other. A position of an object present in the first space and a position of another object present in the second space are transformed into positions in the virtual space according to the determined coordinate transformation parameter.
    Type: Grant
    Filed: September 2, 2020
    Date of Patent: March 1, 2022
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventor: Yoshinori Ohashi
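The core idea, one transformation per real space chosen so that both reference flat surfaces land at the same virtual-space position, can be sketched as follows (translation only, no rotation; the common anchor point is an assumption):

```python
# Sketch of the coordinate-transformation idea: per real space, pick a
# translation carrying that space's reference flat surface to a common
# virtual-space anchor, then transform objects with it. Translation
# only; a toy model, not Sony's method.

def make_transform(surface_pos, virtual_anchor=(0.0, 0.0, 0.0)):
    """Translation that moves surface_pos onto the virtual anchor."""
    return tuple(a - s for a, s in zip(virtual_anchor, surface_pos))

def apply(transform, point):
    return tuple(p + t for p, t in zip(point, transform))

t1 = make_transform((1.0, 0.7, 2.0))   # user 1's tabletop position
t2 = make_transform((-3.0, 0.9, 0.5))  # user 2's tabletop position
# Both tabletops now coincide at the virtual anchor:
print(apply(t1, (1.0, 0.7, 2.0)) == apply(t2, (-3.0, 0.9, 0.5)))  # True
```

Any other object in either space is carried into the shared virtual space by the same per-space transform.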
  • Patent number: 11258997
    Abstract: In described examples, structured light elements are projected for display on a projection screen surface. The projected light elements are captured for determining a three-dimensional characterization of the projection screen surface. A three-dimensional characterization of the projection screen surface is generated in response to the displayed structured light elements. An observer perspective characterization of the projection screen surface is generated in response to an observer position and the three-dimensional characterization. A depth for at least one point of the observer perspective characterization is determined in response to depth information of respective neighboring points of the at least one point of the observer perspective characterization. A compensated image can be projected on the projection screen surface in response to the observer perspective characterization and depth information of respective neighboring points of the at least one point of the observer perspective characterization.
    Type: Grant
    Filed: May 1, 2020
    Date of Patent: February 22, 2022
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Jaime Rene De La Cruz, Jeffrey Mathew Kempf
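Determining a depth from its neighbouring points, as the abstract describes for the observer-perspective characterization, can be sketched as a simple neighbour average over a depth grid; the grid layout and averaging rule are illustrative:

```python
# Sketch of filling a missing depth sample from the depth information of
# its neighbouring points; the 4-neighbour mean is an illustrative rule.

def fill_depth(grid, r, c):
    """Replace a missing (None) depth with the mean of known 4-neighbours."""
    rows, cols = len(grid), len(grid[0])
    neighbours = [grid[r + dr][c + dc]
                  for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                  if 0 <= r + dr < rows and 0 <= c + dc < cols
                  and grid[r + dr][c + dc] is not None]
    return sum(neighbours) / len(neighbours) if neighbours else None

screen = [[1.0, 1.2, 1.4],
          [1.1, None, 1.5],
          [1.2, 1.4, 1.6]]
print(round(fill_depth(screen, 1, 1), 6))  # 1.3
```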
  • Patent number: 11238772
    Abstract: The present disclosure relates to methods and apparatus for display processing. The apparatus can determine at least one data parameter corresponding to each of a plurality of layers in a display frame. The apparatus can also calculate a model for the at least one data parameter corresponding to each of the plurality of layers. Additionally, the apparatus can modify the model for the at least one data parameter based on one or more application use cases of the display frame. Moreover, the apparatus can implement the modified model on each of the plurality of layers in the display frame. In some aspects, the apparatus can also determine one or more composition settings for each of the plurality of layers based on the modified model. The apparatus can also apply the one or more composition settings to each of the plurality of layers based on the modified model.
    Type: Grant
    Filed: March 18, 2020
    Date of Patent: February 1, 2022
    Assignee: QUALCOMM Incorporated
    Inventors: Srinivas Pullakavi, Dileep Marchya, Padmanabhan Komanduru V
  • Patent number: 11232765
    Abstract: Examples herein relate to monitor calibration. In some examples, monitor calibration can include a scaler processing resource and a memory resource storing machine readable instructions to cause the scaler processing resource to record, by a sensor included on a monitor, color measurements of the monitor in response to receiving record instructions from an external computing device, transmit the recorded color measurements to the external computing device, receive calibration instructions from the external computing device based on the recorded color measurements, and calibrate the monitor using the calibration instructions received from the external computing device.
    Type: Grant
    Filed: July 13, 2017
    Date of Patent: January 25, 2022
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Syed S Azam, Greg Staten
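The record/transmit/receive/calibrate handshake between the monitor's scaler and the external computing device can be sketched as follows; the measurement and instruction payloads are invented for illustration:

```python
# Sketch of the calibration handshake in the abstract: the monitor's
# sensor records colour measurements, the external device computes
# calibration instructions from them, and the monitor applies them.
# The payload fields and target value are illustrative.

class Monitor:
    def __init__(self):
        self.lut_gain = 1.0

    def record_measurements(self):
        # Stand-in for the built-in colour sensor reading the panel.
        return {"white_luminance": 240.0}

    def calibrate(self, instructions):
        self.lut_gain = instructions["gain"]

class ExternalDevice:
    TARGET_LUMINANCE = 200.0

    def build_instructions(self, measurements):
        return {"gain": self.TARGET_LUMINANCE / measurements["white_luminance"]}

monitor, host = Monitor(), ExternalDevice()
measured = monitor.record_measurements()          # 1) record on the monitor
instructions = host.build_instructions(measured)  # 2) transmit; host computes
monitor.calibrate(instructions)                   # 3) apply on the monitor
print(round(monitor.lut_gain, 4))  # 0.8333
```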
  • Patent number: 11222455
    Abstract: Methods, devices, media, and other embodiments are described for managing and configuring a pseudorandom animation system and associated computer animation models. One embodiment involves generating image modification data with a computer animation model configured to modify frames of a video image to insert and animate the computer animation model within the frames of the video image, where the computer animation model of the image modification data comprises one or more control points. Motion patterns and speed harmonics are automatically associated with the control points, and motion states are generated based on the associated motions and harmonics. A probability value is then assigned to each motion state. The motion state probabilities can then be used when generating a pseudorandom animation.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: January 11, 2022
    Assignee: Snap Inc.
    Inventors: Gurunandan Krishnan Gorumkonda, Shree K. Nayar
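Once each motion state carries a probability value, generating the pseudorandom animation reduces to weighted sampling over the states; the state names and weights below are invented:

```python
# Sketch of drawing the next motion state from per-state probabilities,
# as the abstract describes; states and weights are illustrative.
import random

motion_states = {
    "sway_slow": 0.5,   # motion pattern + speed harmonic combinations
    "sway_fast": 0.3,
    "bounce":    0.2,
}

def next_state(states, rng=random):
    """Weighted pseudorandom draw of the next motion state."""
    names, weights = zip(*states.items())
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(42)  # seeded for reproducibility
sequence = [next_state(motion_states, rng) for _ in range(5)]
print(all(s in motion_states for s in sequence))  # True
```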
  • Patent number: 11216989
    Abstract: The present invention relates to a mobile device and a method for controlling same, and the subject matter of the present invention comprises: classifying a received first texture as a static texture or a dynamic texture on the basis of the attribute of the texture; when the first texture is a static texture, classifying the first texture as a compressed texture or an uncompressed texture on the basis of compression application; when the first texture is a static texture and a compressed texture, classifying the first texture as a mipmapped texture or a non-mipmapped texture on the basis of mipmap application; when the first texture is a static texture and an uncompressed texture, classifying the first texture as a mipmapped texture or a non-mipmapped texture on the basis of mipmap application; when the first texture is a dynamic texture, classifying the first texture as a shadow map or a non-shadow map on the basis of the aspect ratio of a screen; adjusting the size of the first texture on the basis of the c
    Type: Grant
    Filed: August 20, 2018
    Date of Patent: January 4, 2022
    Assignee: LG ELECTRONICS INC.
    Inventors: Jaeho Nah, Yeongkyu Lim, Byeongjun Choi
  • Patent number: 11210853
    Abstract: For a space including an object to be displayed, images of the space viewed from reference points of view are created in advance as reference images, and the reference images are combined according to a position of an actual point of view to draw a display image. In this case, a reference image not displaying reflection is used to determine the color of the object (S50). In a case of expressing reflection of another object (Y in S52), a position of the reflected object is estimated in a three-dimensional space (S54), a position on the reference image corresponding to the position is acquired (S56), and a color of the position is combined with the color of the object (S60).
    Type: Grant
    Filed: December 19, 2017
    Date of Patent: December 28, 2021
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventors: Masakazu Suzuoki, Yuki Karasawa
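The colour combination in S54-S60, sampling the reference image at the reflected object's estimated position and combining that colour with the surface colour, can be sketched as a simple linear blend; the blend weight is an assumption, not the patented formula:

```python
# Sketch of combining a sampled reflection colour with the object's base
# colour (per S60 in the abstract); the linear blend and the
# reflectivity weight are illustrative choices.

def combine_reflection(base_rgb, reflected_rgb, reflectivity=0.3):
    """Blend the colour sampled at the reflected object's position on
    the reference image into the object's base colour."""
    return tuple((1 - reflectivity) * b + reflectivity * r
                 for b, r in zip(base_rgb, reflected_rgb))

surface = combine_reflection((0.8, 0.2, 0.2), (0.1, 0.1, 0.9))
print(tuple(round(c, 4) for c in surface))  # (0.59, 0.17, 0.41)
```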