Patents Examined by Xilin Guo
  • Patent number: 11361524
    Abstract: The quality of digital impressions (not only geometry and color, but also fluorescence, transillumination, reflection, and absorption properties) can be inferior to that of single-frame 2D/3D images, e.g., due to averaging effects, rendering effects, or difficult scanning situations. This can lead to difficulties in the dental workflow, such as the creation of a preparation margin line or the detection of diseases such as caries. By providing a correlated view of high-quality single-frame images and the corresponding 3D models, the dental workflow can be guided so that it is completed more efficiently using the single-frame images.
    Type: Grant
    Filed: March 25, 2021
    Date of Patent: June 14, 2022
    Assignee: DENTSPLY SIRONA INC.
    Inventors: Hong Keun Kim, Anders Adamson, Ole Jakubik, Frederike Franke
  • Patent number: 11361501
    Abstract: In one implementation, a method of semantically labeling a point cloud cluster is performed at a device including one or more processors and non-transitory memory. The method includes obtaining a point cloud of a physical environment including a plurality of points, each of the plurality of points associated with coordinates in a three-dimensional space. The method includes spatially disambiguating portions of the plurality of points into a plurality of clusters. The method includes determining a semantic label based on a volumetric arrangement of the points of a particular cluster of the plurality of clusters. The method includes generating a characterization vector of a particular point of the points of the particular cluster, wherein the characterization vector includes the coordinates of the particular point, a cluster identifier of the particular cluster, and the semantic label.
    Type: Grant
    Filed: January 26, 2021
    Date of Patent: June 14, 2022
    Assignee: APPLE INC.
    Inventor: Payal Jotwani
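As a rough illustration of the pipeline this abstract describes, the Python sketch below clusters points by voxel connectivity, assigns a toy semantic label from each cluster's bounding-box proportions, and builds a per-point characterization vector of coordinates, cluster identifier, and label. The function names, the voxel flood-fill clustering, and the wall/floor labeling rule are all hypothetical stand-ins; the patent does not specify these algorithms.

```python
import math

def voxel_key(p, size=1.0):
    # Quantize a 3-D point to a voxel grid cell for spatial grouping.
    return tuple(int(math.floor(c / size)) for c in p)

def cluster_points(points, size=1.0):
    """Spatially disambiguate points into clusters via voxel connectivity."""
    # Map each occupied voxel to the indices of the points it contains.
    voxels = {}
    for i, p in enumerate(points):
        voxels.setdefault(voxel_key(p, size), []).append(i)
    # Flood-fill over 26-connected voxels to form clusters.
    seen, clusters = set(), []
    for start in voxels:
        if start in seen:
            continue
        stack, members = [start], []
        seen.add(start)
        while stack:
            v = stack.pop()
            members.extend(voxels[v])
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for dz in (-1, 0, 1):
                        n = (v[0] + dx, v[1] + dy, v[2] + dz)
                        if n in voxels and n not in seen:
                            seen.add(n)
                            stack.append(n)
        clusters.append(members)
    return clusters

def semantic_label(points, member_ids):
    """Toy label rule based on the cluster's bounding-box (volumetric) shape."""
    xs, ys, zs = zip(*(points[i] for i in member_ids))
    dx, dy, dz = max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs)
    return "wall" if dz > max(dx, dy) else "floor"

def characterization_vector(point, cluster_id, label):
    # Coordinates + cluster identifier + semantic label, as in the claim.
    return (*point, cluster_id, label)
```

Two points within adjacent voxels form one cluster, while a distant point forms its own; the characterization vector then ties each point back to its cluster and label.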
  • Patent number: 11354842
    Abstract: An animation system includes an animated figure, multiple sensors, and an animation controller that includes a processor and a memory. The memory stores instructions executable by the processor. The instructions cause the animation controller to receive guest detection data from the multiple sensors, receive shiny object detection data from the multiple sensors, determine an animation sequence of the animated figure based on the guest detection data and shiny object detection data, and transmit a control signal indicative of the animation sequence to cause the animated figure to execute the animation sequence. The guest detection data is indicative of a presence of a guest near the animated figure. The animation sequence is responsive to a shiny object detected on or near the guest based on the guest detection data and the shiny object detection data.
    Type: Grant
    Filed: April 29, 2020
    Date of Patent: June 7, 2022
    Assignee: Universal City Studios LLC
    Inventors: David Michael Churchill, Clarisse Vamos, Jeffrey A. Bardt
  • Patent number: 11353700
    Abstract: An orientation predicting method, adapted to a virtual reality headset, comprises obtaining an orientation training data and an adjusted orientation data, wherein the adjusted orientation data is obtained by cutting a data segment off from the orientation training data, wherein the data segment corresponds to a time interval determined by an application latency; training an initial neural network model based on the orientation training data and the adjusted orientation data corresponding to the time interval; retrieving a real-time orientation data by an orientation sensor of the virtual reality headset; and inputting the real-time orientation data to the trained neural network model to output a predicted orientation data. The present disclosure further discloses a virtual reality headset and a non-transitory computer-readable medium.
    Type: Grant
    Filed: October 7, 2020
    Date of Patent: June 7, 2022
    Assignee: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
    Inventors: Tommy Sugiarto, Chi-Tien Sun
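The core trick in this abstract, cutting a latency-length segment off the training sequence so the target leads the input by the application latency, can be sketched in a few lines of Python. The helper name and the sample-indexed latency are illustrative assumptions, not the patent's implementation.

```python
def make_latency_pairs(orientation, latency_samples):
    """Build (input, target) training pairs offset by the application latency.

    The target sequence is the orientation sequence with its first
    `latency_samples` entries cut off, so a model trained on these pairs
    learns to predict the head pose one latency interval into the future.
    """
    inputs = orientation[:len(orientation) - latency_samples]
    targets = orientation[latency_samples:]
    return list(zip(inputs, targets))
```

At inference time the trained model then maps the sensor's real-time orientation reading directly to the pose expected when the frame is actually displayed.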
  • Patent number: 11348305
    Abstract: In one implementation, a non-transitory computer-readable storage medium stores program instructions computer-executable on a computer to perform operations. The operations include obtaining first content representing a physical environment in which an electronic device is located using an image sensor of the electronic device. A physical feature corresponding to a physical object in the physical environment is detected using the first content. A feature descriptor corresponding to a physical parameter of the physical feature is determined using the first content. Second content representing a computer generated reality (CGR) environment is generated based on the feature descriptor and presented on a display of the electronic device.
    Type: Grant
    Filed: January 5, 2021
    Date of Patent: May 31, 2022
    Assignee: Apple Inc.
    Inventors: Earl M. Olson, Nicolai Georg, Omar R. Khan, James M. A. Begole
  • Patent number: 11341703
    Abstract: An aspect provides a computer-implemented method for training controls for an animation control rig using a neural network.
    Type: Grant
    Filed: December 10, 2020
    Date of Patent: May 24, 2022
    Assignee: UNITY TECHNOLOGIES SF
    Inventors: Niall J. Lenihan, Sander van der Steen, Richard Chi Lei, Florian Deconinck
  • Patent number: 11334149
    Abstract: An example process is executed by a processor of a computing device to: communicate first light field data to a user device, wherein the first light field data is limited to a first volume of space and represents a first plurality of views for display on the user device; receive sensor data associated with the user device; predict a behavior based at least in part on the received sensor data; generate second light field data based at least in part on the predicted behavior, wherein the second light field data is limited to a second volume of space that is different from the first volume of space and represents a second plurality of views for display on the user device; and communicate the second light field data to the user device.
    Type: Grant
    Filed: April 30, 2021
    Date of Patent: May 17, 2022
    Assignee: Shopify Inc.
    Inventor: Jonathan Wade
  • Patent number: 11335069
    Abstract: In some embodiments, users' experience of engaging with augmented reality technology is enhanced by providing a process, referred to as face animation synthesis, that replaces an actor's face in the frames of a video with a user's face from the user's portrait image. The resulting face in the frames of the video retains the facial expressions, as well as the color and lighting, of the actor's face but, at the same time, has the likeness of the user's face. An example face animation synthesis experience can be made available to users of a messaging system by providing a face animation synthesis augmented reality component.
    Type: Grant
    Filed: November 30, 2020
    Date of Patent: May 17, 2022
    Assignee: Snap Inc.
    Inventors: Pavel Savchenkov, Yurii Volkov, Jeremy Baker Voss
  • Patent number: 11308699
    Abstract: A computer-implemented method includes generating, using a scene generator, first candidate scene data; obtaining reference scene data corresponding to a predetermined reference scene; processing the first candidate scene data and the reference scene data, using a scene discriminator, to generate first discrimination data for estimating whether each of the first candidate scene data and the reference scene data corresponds to a predetermined reference scene; updating a set of parameter values for the scene discriminator using the first discrimination data; generating, using the scene generator, second candidate scene data; processing the second candidate scene data, using the scene discriminator with the updated set of parameter values for the scene discriminator, to generate second discrimination data for estimating whether the second candidate scene data corresponds to a predetermined reference scene; and updating a set of parameter values for the scene generator using the second discrimination data.
    Type: Grant
    Filed: March 18, 2020
    Date of Patent: April 19, 2022
    Assignee: Arm Limited
    Inventors: Vasileios Laganakos, Irenéus Johannes De Jong, Gary Dale Carpenter
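The alternating update scheme in this abstract is the familiar adversarial (GAN-style) loop: train the discriminator on reference and candidate data, then train the generator against the updated discriminator. The toy below shrinks both networks to single scalar parameters on a 1-D "scene" distribution; it illustrates only the update ordering, not the patent's actual scene models.

```python
import math
import random

def sigmoid(x):
    x = max(-60.0, min(60.0, x))  # numerical guard against overflow
    return 1.0 / (1.0 + math.exp(-x))

def train(steps=2000, lr=0.05, seed=0):
    rng = random.Random(seed)
    g = 0.0          # generator parameter: mean of candidate "scenes"
    w, b = 1.0, 0.0  # discriminator parameters (logistic classifier)

    for _ in range(steps):
        real = 3.0 + rng.gauss(0, 0.1)   # sample from the reference scenes
        fake = g + rng.gauss(0, 0.1)     # candidate scene from the generator

        # 1) Update the discriminator: ascend log D(real) + log(1 - D(fake)).
        d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
        w += lr * ((1 - d_real) * real - d_fake * fake)
        b += lr * ((1 - d_real) - d_fake)

        # 2) Update the generator against the updated discriminator:
        #    ascend log D(fake) so candidates look more like references.
        fake = g + rng.gauss(0, 0.1)
        d_fake = sigmoid(w * fake + b)
        g += lr * (1 - d_fake) * w

    return g
```

With the reference scenes centered at 3.0, the generator parameter drifts from 0 toward the reference distribution as the two updates alternate.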
  • Patent number: 11308708
    Abstract: A method and apparatus for processing a pattern piece receives, as an input from a user, a selection of at least a portion of an outer line of a pattern piece to be processed among one or more pattern pieces included in a two-dimensional (2D) pattern of clothes, generates a template including a plurality of lines that divide an area set by the input, processes the template based on fullness settings set by the user for the pattern piece, processes the pattern piece based on the processed template, and outputs the processed pattern piece.
    Type: Grant
    Filed: November 30, 2020
    Date of Patent: April 19, 2022
    Assignee: CLO VIRTUAL FASHION INC.
    Inventors: Hohyun Lee, Yeji Kim
  • Patent number: 11295499
    Abstract: A switchable rendering system uses both instanced rendering and vector rendering in rendering a raster or vector graphic with a nested repetition. The nested repetition includes multiple levels of repetition and for each level the switchable rendering system selects instanced rendering or vector rendering to render the level. This selection is based on resource availability, such as using instanced rendering for a level when the current resource availability is sufficient to allow instanced rendering for the level, and using vector rendering for a level when the current resource availability is not sufficient to allow instanced rendering for the level.
    Type: Grant
    Filed: December 1, 2020
    Date of Patent: April 5, 2022
    Assignee: Adobe Inc.
    Inventors: Tarun Beri, Gaurav Jain
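The per-level selection rule in this abstract reduces to a budget check walked down the nesting levels: use instanced rendering while resources allow, fall back to vector rendering otherwise. The sketch below models "resource availability" as a single numeric budget, which is a simplifying assumption not taken from the patent.

```python
def choose_renderers(levels, available_budget):
    """Pick instanced or vector rendering per repetition level.

    `levels` lists each level's instanced-rendering resource cost,
    outermost first; a level falls back to vector rendering when the
    remaining budget cannot cover its instanced cost.
    """
    choices, remaining = [], available_budget
    for cost in levels:
        if cost <= remaining:
            choices.append("instanced")
            remaining -= cost
        else:
            choices.append("vector")
    return choices
```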
  • Patent number: 11282275
    Abstract: A method for generating a storybook includes generating metadata including shape information which is a predefined value for specifying a shape that a character model has in each of scenes in which a character of storybook content appears, receiving a facial image of a user, generating a user model based on a user face by applying texture information of the facial image to the character, generating a model image of the user model having a predefined shape in each of the scenes by reflecting shape information predefined in each of the scenes into the user model, and generating a file printable on a certain actual object to include at least one of the model images.
    Type: Grant
    Filed: December 9, 2020
    Date of Patent: March 22, 2022
    Assignee: ILLUNI INC.
    Inventors: Byunghwa Park, Youngjun Kwon, Gabee Jo
  • Patent number: 11276219
    Abstract: Various examples of cross-application systems and methods for authoring, transferring, and evaluating rigging control systems for virtual characters are disclosed. A first application, which implements a first rigging control protocol, can provide an input associated with a request for a behavior from the rig for the virtual character. The input can be converted to be compatible with a second rigging control protocol that is different from the first rigging control protocol. One or more control systems can be evaluated based on the input to determine an output to provide the requested behavior from the virtual character rig. The one or more control systems can be defined according to the second rigging control protocol. The output can be converted to be compatible with the first rigging control protocol and provided to the first application to manipulate the virtual character according to the requested behavior.
    Type: Grant
    Filed: April 12, 2019
    Date of Patent: March 15, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Geoffrey Wedig, James Jonathan Bancroft
  • Patent number: 11276239
    Abstract: A method according to an aspect of the disclosure is performed by a computing device and includes rendering a video frame, displaying an object generated by user manipulation on the rendered video frame, calculating a relative position of the object with respect to a reference point of the video frame, and transmitting object information generated based on the relative position of the object. According to the method, even if the streaming screen continuously changes when streaming video between remote terminals, the shared 3D object can be placed at an accurate position, and objective position information of the shared 3D object can be provided regardless of the surrounding environment or situation.
    Type: Grant
    Filed: December 10, 2020
    Date of Patent: March 15, 2022
    Assignee: SAMSUNG SDS CO., LTD.
    Inventors: Seung Jin Kim, Hee Tae Yoon, Jun Ho Kang
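The position-sharing idea in this abstract, transmitting the object's offset from a frame reference point rather than a screen coordinate, can be sketched as a pair of inverse helpers. Both function names are illustrative.

```python
def object_info(object_pos, reference_point):
    # Sender side: relative position of the object w.r.t. the frame's
    # reference point, independent of the receiver's viewpoint.
    return tuple(o - r for o, r in zip(object_pos, reference_point))

def place_object(relative_pos, reference_point):
    # Receiver side: reconstruct the absolute position from the
    # receiver's own reference point.
    return tuple(r + d for r, d in zip(reference_point, relative_pos))
```

Because only the offset is shared, the receiver can place the object correctly even when its rendered view of the streamed frame differs from the sender's.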
  • Patent number: 11276340
    Abstract: Methods, systems, and devices that support a dynamic screen refresh rate are described. An electronic device may dynamically (e.g., autonomously, while operating) adjust the rate at which a screen is refreshed, such as to balance considerations such as user experience and power consumption by the electronic device. For example, the electronic device may use an increased refresh rate when executing applications for which user experience is enhanced by a higher refresh rate and may use a decreased refresh rate when executing other applications. As another example, the electronic device may use different refresh rates while executing different portions of the same application, as some aspects of an application (e.g., more intense portions of a video game) may benefit more than others from a higher refresh rate. The electronic device may also account for other factors, such as battery level, when setting or adjusting the refresh rate of the screen.
    Type: Grant
    Filed: June 30, 2020
    Date of Patent: March 15, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Ashish Ranjan, Carly M. Wantulok, Prateek Trivedi, Carla L. Christensen, Jun Huang, Avani F. Trivedi
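The selection logic described in this abstract, picking a refresh rate from the application's needs and capping it on other factors such as battery level, can be sketched as a small policy function. The profile keys, rate set, and battery threshold below are illustrative assumptions.

```python
def pick_refresh_rate(app_profile, battery_pct, rates=(30, 60, 90, 120)):
    """Choose a screen refresh rate from the supported `rates`.

    `app_profile` carries the desired rate for the current application
    phase; the choice is capped when the battery is low.
    """
    desired = app_profile.get("desired_hz", 60)
    # Clamp to the highest supported rate not exceeding the desired one.
    candidates = [r for r in rates if r <= desired] or [min(rates)]
    rate = max(candidates)
    if battery_pct < 20:
        rate = min(rate, 60)   # power-saving cap on low battery
    return rate
```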
  • Patent number: 11250607
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for providing angular snapping guides to efficiently, accurately, and flexibly align user interactions and editing operations to existing angular linear segments of digital design objects in a digital design document. In particular, in one or more embodiments, the disclosed systems determine target angular linear segments for presentation of angular snapping guides by generating angular bin data structures based on orientation and signed distances of angular linear segments within the digital design document. Accordingly, in one or more embodiments, the disclosed systems can efficiently search these angular bin data structures based on angles and signed distances corresponding to user interactions.
    Type: Grant
    Filed: October 26, 2020
    Date of Patent: February 15, 2022
    Assignee: Adobe Inc.
    Inventors: Arushi Jain, Praveen Kumar Dhanuka
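The angular bin idea in this abstract, keying segments by quantized orientation and signed distance so candidate snap targets can be looked up rather than scanned, can be sketched in 2-D as follows. The bin sizes and function names are hypothetical choices for illustration.

```python
import math

def segment_bin(p0, p1, angle_step_deg=15.0, dist_step=5.0):
    """Quantize a linear segment by orientation and signed distance to origin."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    # Fold orientation into [0, 180) so opposite directions coincide.
    angle = math.degrees(math.atan2(dy, dx)) % 180.0
    # Signed perpendicular distance of the segment's line from the origin.
    length = math.hypot(dx, dy)
    signed_dist = (dx * p0[1] - dy * p0[0]) / length
    return (int(angle // angle_step_deg), int(signed_dist // dist_step))

def build_bins(segments):
    """Index every segment of the document under its angular/distance bin."""
    bins = {}
    for seg in segments:
        bins.setdefault(segment_bin(*seg), []).append(seg)
    return bins

def snapping_candidates(bins, p0, p1):
    # Segments in the same bin as the user's stroke can donate snap guides.
    return bins.get(segment_bin(p0, p1), [])
```

A near-horizontal user stroke thus retrieves only the document's near-horizontal segments at a similar distance, instead of testing every segment.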
  • Patent number: 11250321
    Abstract: An immersive feedback loop is disclosed for improving artificial intelligence (AI) applications used for virtual reality (VR) environments. Users may iteratively generate synthetic scene training data, train a neural network on the synthetic scene training data, generate synthetic scene evaluation data for an immersive VR experience, indicate additional training data needed to correct neural network errors indicated in the VR experience, and then generate and retrain on the additional training data, until the neural network reaches an acceptable performance level.
    Type: Grant
    Filed: May 8, 2018
    Date of Patent: February 15, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Michael Ebstyne, Pedro Urbina Escos, Yuri Pekelny, Emanuel Shalev
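The iterative loop in this abstract (generate synthetic data, train, evaluate in the immersive experience, request more data where errors appear) can be sketched as a driver function. The callables stand in for the real data generator, trainer, and the human-in-the-loop VR review; all names are illustrative.

```python
def immersive_training_loop(generate_data, train, evaluate_in_vr,
                            acceptable_error, max_rounds=10):
    """Iterate: synthesize data, train, evaluate immersively, refine.

    `evaluate_in_vr` returns (error_rate, hints), where `hints` describes
    the additional training data indicated during the VR review.
    """
    hints = None
    for _ in range(max_rounds):
        data = generate_data(hints)          # incorporate prior hints
        model = train(data)
        error, hints = evaluate_in_vr(model)
        if error <= acceptable_error:        # acceptable performance reached
            return model, error
    return model, error
```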
  • Patent number: 11250636
    Abstract: An information processing device includes: an acquisition unit (110, 140) configured to acquire a captured image including a subject and three-dimensional subject position information indicating a three-dimensional position of the subject; and a content configuration information generation unit (150) configured to generate content configuration information including the captured image, the three-dimensional subject position information, and virtual space association information which is information used for an interaction in which the subject in the captured image displayed in the virtual space is involved and is information for associating the subject in the captured image with the three-dimensional subject position information.
    Type: Grant
    Filed: June 25, 2019
    Date of Patent: February 15, 2022
    Assignee: SONY CORPORATION
    Inventors: Takayoshi Shimizu, Tsuyoshi Ishikawa
  • Patent number: 11228758
    Abstract: An imaging method and device, the method comprising: acquiring, at a specified imaging time, a pulse sequence in a time period before the specified imaging time, with regard to each pixel of a plurality of pixels; calculating a pixel value of the pixel according to the pulse sequence; and obtaining a display image at the specified imaging time according to a spatial arrangement of the pixels, in accordance with pixel values of the plurality of pixels at the specified imaging time.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: January 18, 2022
    Assignee: PEKING UNIVERSITY
    Inventor: Tiejun Huang
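The per-pixel computation in this abstract can be sketched directly: take each pixel's pulses in a window before the imaging time, derive an intensity, and lay the values out spatially. Treating intensity as the pulse rate within the window is an assumption for illustration, not the patent's stated formula.

```python
def pixel_value(pulse_times, imaging_time, window):
    """Pixel intensity from the pulse sequence in a window before `imaging_time`.

    Assumes intensity is proportional to pulse rate (brighter light,
    faster pulses).
    """
    recent = [t for t in pulse_times
              if imaging_time - window <= t <= imaging_time]
    return len(recent) / window

def render_frame(pixel_pulse_trains, imaging_time, window, width):
    # Arrange per-pixel values by their spatial order into rows of `width`.
    values = [pixel_value(p, imaging_time, window) for p in pixel_pulse_trains]
    return [values[i:i + width] for i in range(0, len(values), width)]
```

Because any imaging time can be specified, the same pulse trains can be re-rendered at arbitrary moments, unlike a fixed-exposure frame.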
  • Patent number: 11222387
    Abstract: The present invention is a computer-implemented method comprising: receiving at least one architectural drawing; analyzing each of the at least one architectural drawing, wherein non-structural elements are removed from each of the at least one architectural drawing; generating structural drawings for each of the at least one architectural drawing; marking each element within the structural drawings; generating a 3D model based on the structural drawings; analyzing the 3D model, wherein the 3D model is tested for predetermined characteristics; and generating a report based on the analyzed results of the predetermined characteristics.
    Type: Grant
    Filed: November 26, 2019
    Date of Patent: January 11, 2022
    Assignee: Consulting Engineers, Corp.
    Inventor: Maharaj Jalla