Patents Examined by Yingchun He
  • Patent number: 11955046
    Abstract: The present invention includes systems and methods for a six-primary color system for display. A six-primary color system increases the number of primary colors available in a color system and color system equipment. Increasing the number of primary colors reduces metameric errors from viewer to viewer. The six-primary color system includes Red, Green, Blue, Cyan, Yellow, and Magenta primaries. The systems of the present invention maintain compatibility with existing color systems and equipment and provide systems for backwards compatibility with older color systems.
    Type: Grant
    Filed: January 13, 2023
    Date of Patent: April 9, 2024
    Assignee: BAYLOR UNIVERSITY
    Inventors: James M. DeFilippis, Gary B. Mandle
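    A minimal, hypothetical Python sketch of the six-primary idea shared by the Baylor University entries in this list (11955046 and its siblings): an RGB pixel is spread across Red, Green, Blue, Cyan, Yellow, and Magenta channels. The decomposition rule below is an assumption for illustration, not the patented signal format or conversion.
    ```python
    # Hypothetical illustration only: one naive way to spread an RGB pixel across
    # six primaries (R, G, B, C, Y, M). The patented system defines its own
    # transforms and signal formats; this merely shows the idea of deriving
    # C/Y/M channels from overlapping pairs of R/G/B.
    def rgb_to_rgbcym(r: float, g: float, b: float) -> dict:
        """Map a normalized RGB pixel (0..1) onto six primaries."""
        c = min(g, b)   # cyan carries the green/blue overlap
        y = min(r, g)   # yellow carries the red/green overlap
        m = min(r, b)   # magenta carries the red/blue overlap
        # Subtract the part now carried by the secondary primaries so the
        # pixel's energy is not counted twice.
        return {
            "R": max(r - (y + m) / 2, 0.0),
            "G": max(g - (c + y) / 2, 0.0),
            "B": max(b - (c + m) / 2, 0.0),
            "C": c / 2, "Y": y / 2, "M": m / 2,
        }

    print(rgb_to_rgbcym(1.0, 0.5, 0.25))
    ```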
  • Patent number: 11955044
    Abstract: Systems and methods for a multi-primary color system for display. A multi-primary color system increases the number of primary colors available in a color system and color system equipment. Increasing the number of primary colors reduces metameric errors from viewer to viewer. One embodiment of the multi-primary color system includes Red, Green, Blue, Cyan, Yellow, and Magenta primaries. The systems of the present invention maintain compatibility with existing color systems and equipment and provide systems for backwards compatibility with older color systems.
    Type: Grant
    Filed: November 3, 2022
    Date of Patent: April 9, 2024
    Assignee: Baylor University
    Inventors: Mitchell J. Bogdanowicz, James M. DeFilippis, Gary B. Mandle
  • Patent number: 11954807
    Abstract: A method for generating an at least 2-dimensional virtual representation of a real environment is disclosed, the generation being performed by a mixed virtual reality headset intended to be worn by a user, the mixed virtual reality headset being associated with at least one interface device. The generation method includes acquiring relative coordinates in the real environment, corresponding to a position of the interface device in the real environment; following an interaction of the user with the interface device, determining, as a function of the relative coordinates of the position of the interface device in the real environment, a corresponding point in the virtual representation; and generating the virtual representation from at least the point associated with the relative coordinates of the acquired position of the interface device. The disclosure also relates to a mixed virtual reality headset and a corresponding interface device.
    Type: Grant
    Filed: June 10, 2020
    Date of Patent: April 9, 2024
    Assignee: ORANGE
    Inventors: Christophe Floutier, Maxime Jouin, Valérie Ledunois
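    For illustration, a small Python sketch of the coordinate mapping described in patent 11954807: positions of the interface device in the real environment become points of a 2-D virtual representation. The scale factor, axis choice, and function names are assumptions, not the patented implementation.
    ```python
    from typing import List, Tuple

    def device_position_to_virtual_point(
        device_pos: Tuple[float, float, float],   # metres, relative to the headset origin
        scale: float = 100.0,                     # virtual units per metre (assumed)
    ) -> Tuple[float, float]:
        """Project a real-world device position onto a 2-D virtual floor plan (x, z)."""
        x, _y, z = device_pos
        return (x * scale, z * scale)

    def build_virtual_representation(samples: List[Tuple[float, float, float]]) -> List[Tuple[float, float]]:
        """Each user interaction on the device contributes one point of the representation."""
        return [device_position_to_virtual_point(p) for p in samples]

    print(build_virtual_representation([(0.5, 1.0, 2.0), (1.2, 1.0, 0.4)]))
    ```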
  • Patent number: 11934490
    Abstract: A method for automatically classifying transition motion includes following steps performed by a computing device: obtaining a plurality of transition motions, with each transition motion being associated with a source motion, a destination motion, and a transition mechanism converting the source motion into the destination motion; extracting a property vector from each transition motion and thereby generating a plurality of property vectors, wherein each property vector includes a plurality of transition properties; and performing a clustering algorithm according to the property vectors to generate a plurality of transition types.
    Type: Grant
    Filed: June 23, 2022
    Date of Patent: March 19, 2024
    Assignees: INVENTEC (PUDONG) TECHNOLOGY CORPORATION, INVENTEC CORPORATION
    Inventors: Jonathan Hans Soeseno, Ying-sheng Luo, Trista Pei-Chun Chen
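    A rough Python sketch of the pipeline in patent 11934490: each transition motion yields a property vector, and a clustering algorithm groups the vectors into transition types. The choice of properties and the use of k-means are assumptions for illustration, not the claimed method.
    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def property_vector(transition: dict) -> np.ndarray:
        # Assumed transition properties: blend duration, root displacement, and
        # average joint-angle change between source and destination motions.
        return np.array([
            transition["duration"],
            transition["root_displacement"],
            transition["mean_joint_delta"],
        ])

    transitions = [
        {"duration": 0.20, "root_displacement": 0.1, "mean_joint_delta": 5.0},
        {"duration": 0.80, "root_displacement": 1.5, "mean_joint_delta": 40.0},
        {"duration": 0.25, "root_displacement": 0.2, "mean_joint_delta": 6.0},
    ]
    vectors = np.stack([property_vector(t) for t in transitions])
    transition_types = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
    print(transition_types)   # cluster index = transition type
    ```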
  • Patent number: 11935171
    Abstract: An interactive physical environment providing entertainment to a patron includes a scanner, a sensor within the room, and a control system. The scanner is associated with a room provided by the interactive physical environment. The scanner identifies a patron of the interactive physical environment and transmits patron identification information. The sensor determines patron game performance data and transmits the patron performance data. The control system receives patron identification information from the scanner and patron performance data from the sensor. The control system, responsive to the received patron identification information and patron performance data, modifies an avatar associated with the patron.
    Type: Grant
    Filed: June 10, 2022
    Date of Patent: March 19, 2024
    Assignee: Open World Entertainment LLC
    Inventor: Matthew DuPlessie
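    A minimal sketch, in Python, of the data flow described in patent 11935171: a scanner supplies patron identification, a room sensor supplies game performance, and the control system updates the patron's avatar. The field names and scoring rule are assumptions for illustration only.
    ```python
    class ControlSystem:
        def __init__(self):
            self.avatars = {}   # patron_id -> avatar attributes

        def on_scan(self, patron_id: str) -> None:
            """Handle patron identification information from the scanner."""
            self.avatars.setdefault(patron_id, {"level": 1, "badges": []})

        def on_performance(self, patron_id: str, score: int) -> None:
            """Handle patron game performance data from the room sensor."""
            avatar = self.avatars[patron_id]
            if score > 100:                       # assumed threshold
                avatar["level"] += 1
                avatar["badges"].append("room_cleared")

    cs = ControlSystem()
    cs.on_scan("patron-42")
    cs.on_performance("patron-42", 150)
    print(cs.avatars["patron-42"])
    ```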
  • Patent number: 11935167
    Abstract: Disclosed in embodiments of the present disclosure are a method and apparatus for virtual fitting. A specific implementation of the method comprises: receiving a fitting request comprising a model picture and a user image; performing human body positioning and surface coordinate analysis on the model picture and the user image respectively; performing clothing segmentation on the model picture and the user image respectively; on the basis of the clothing segmentation result and the surface coordinate analysis result, mapping the pixels corresponding to a piece of clothing in the model picture onto corresponding positions in the user image to obtain a synthesized image and information to be completed; and inputting the synthesized image, the positioning result of the user image, and the information to be completed into a pre-trained image completion network to obtain a completed image.
    Type: Grant
    Filed: March 26, 2020
    Date of Patent: March 19, 2024
    Assignees: BEIJING JINGDONG SHANGKE INFORMATION TECHNOLOGY CO., LTD., BEIJING JINGDONG CENTURY TRADING CO., LTD.
    Inventors: Junwei Zhu, Zhidong She, Zhentao Zhang
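    A structural Python sketch of the virtual-fitting pipeline in patent 11935167. A real system would use trained human-parsing/surface-coordinate and image-completion networks; here those stages are stubbed so only the data flow (segment, transfer garment pixels, complete the missing regions) is visible.
    ```python
    import numpy as np

    def clothing_mask(image: np.ndarray) -> np.ndarray:
        """Stub for clothing segmentation: returns a boolean per-pixel mask."""
        return np.zeros(image.shape[:2], dtype=bool)

    def inpaint(image: np.ndarray, missing: np.ndarray) -> np.ndarray:
        """Stub for the pre-trained image completion network."""
        return image

    def virtual_fitting(model_img: np.ndarray, user_img: np.ndarray) -> np.ndarray:
        src_mask = clothing_mask(model_img)
        # The described method aligns garment pixels via body-surface coordinates;
        # this sketch simply copies same-position pixels.
        synthesized = user_img.copy()
        synthesized[src_mask] = model_img[src_mask]
        missing = src_mask & ~clothing_mask(user_img)   # "information to be completed"
        return inpaint(synthesized, missing)

    print(virtual_fitting(np.zeros((4, 4, 3)), np.ones((4, 4, 3))).shape)
    ```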
  • Patent number: 11922563
    Abstract: A system and method for creating, managing, and displaying 3D digital collectibles comprising a virtual, three-dimensional, n-sided structure including a digital media file or set of digital media files representing an event rendered on a representation of at least a first surface thereof, and data relating to the event rendered on at least a second surface thereof and other content on one or more other surfaces, where the digital media file may be a video clip of the event that can be played automatically via a media player associated with the display.
    Type: Grant
    Filed: May 26, 2022
    Date of Patent: March 5, 2024
    Assignee: Dapper Labs, Inc.
    Inventors: Donald Dundas McEroy Flavelle, Catherine Marzi Tedman, Courtney McNeil, Denise Cascelli Schwenck Bismarque, Christopher Patrick Scott, Alan Carr, Eric Yu-Yin Lin
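    One possible data model, sketched in Python, for the n-sided 3D collectible described in patent 11922563: event media on one surface, event data on another, other content elsewhere. The class and field names are illustrative assumptions.
    ```python
    from dataclasses import dataclass, field
    from typing import Dict

    @dataclass
    class Collectible3D:
        sides: int
        surfaces: Dict[int, dict] = field(default_factory=dict)

        def set_surface(self, index: int, content: dict) -> None:
            if not 0 <= index < self.sides:
                raise ValueError("surface index out of range")
            self.surfaces[index] = content

    moment = Collectible3D(sides=6)
    moment.set_surface(0, {"type": "video", "uri": "clip.mp4", "autoplay": True})
    moment.set_surface(1, {"type": "event_data", "event": "example highlight", "date": "2021-06-01"})
    print(moment.surfaces[0]["autoplay"])
    ```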
  • Patent number: 11908094
    Abstract: Systems and methods include transforming a digital file into a three-dimensional object that is spatially positioned in a three-dimensional virtual environment to visually organize the digital file relative to the three-dimensional virtual environment. Embodiments of the present disclosure relate to receiving the digital file that includes digital file parameters and is in a file format. The digital file is transformed into the three-dimensional object based on the digital file parameters associated with the digital file. The three-dimensional object is representative of the presentation of the digital file when executed by the computing device. The three-dimensional object is spatially positioned at a spatial location in the three-dimensional environment based on the digital file parameters of the digital file.
    Type: Grant
    Filed: May 17, 2021
    Date of Patent: February 20, 2024
    Assignee: Sagan Works, Inc.
    Inventors: Erika Block, Simon McCluskey, Donald Hicks
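    A Python sketch of the idea in patent 11908094: a digital file's parameters determine both its 3D representation and where it sits in the virtual environment. The parameter names and the placement rule (file type picks a shelf height, size picks depth) are assumptions for illustration.
    ```python
    from dataclasses import dataclass

    @dataclass
    class Object3D:
        shape: str
        position: tuple   # (x, y, z) in the virtual environment

    SHELF_Y = {"pdf": 0.0, "jpg": 1.0, "mp4": 2.0}   # assumed spatial layout

    def file_to_object(name: str, file_type: str, size_mb: float) -> Object3D:
        shape = "slab" if file_type == "pdf" else "panel"
        x = float(hash(name) % 10)     # spread files along the shelf
        y = SHELF_Y.get(file_type, 3.0)
        z = min(size_mb / 10.0, 5.0)   # larger files sit further back
        return Object3D(shape=shape, position=(x, y, z))

    print(file_to_object("report.pdf", "pdf", 2.4))
    ```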
  • Patent number: 11900845
    Abstract: A system and method for display distortion calibration are configured to capture distortion with image patterns and calibrate distortion with ray tracing for an optical pipeline with lenses. The system includes an image sensor and a processor to perform the method for display distortion calibration. The method includes generating an image pattern to encode display image pixels by encoding display distortion associated with a plurality of image patterns. The method also includes determining a distortion of the image pattern resulting from a lens on a head-mounted display (HMD) and decoding the distorted image patterns to obtain distortion of pixels on a display. A lookup table of angular distortion is created for all pixels on the display. The method further includes providing a compensation factor for the distortion by creating distortion correction based on the lookup table of angular distortion.
    Type: Grant
    Filed: March 16, 2022
    Date of Patent: February 13, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yingen Xiong, Christopher A. Peri
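    A compact Python sketch of the lookup-table correction idea in patent 11900845: per-pixel distortion measured through test patterns is stored in a table, and compensation pre-warps each pixel by its entry. The grid size and the synthetic radial-distortion model used to fill the table are simplifying assumptions.
    ```python
    import numpy as np

    H, W = 8, 8
    ys, xs = np.mgrid[0:H, 0:W].astype(float)

    # Stand-in for the measurement step: a radial model of the HMD lens distortion.
    cx, cy = (W - 1) / 2, (H - 1) / 2
    r2 = (xs - cx) ** 2 + (ys - cy) ** 2
    lut_dx = 0.01 * r2 * (xs - cx)   # per-pixel distortion decoded from the patterns
    lut_dy = 0.01 * r2 * (ys - cy)

    def compensate(x: int, y: int) -> tuple:
        """Pre-distort a pixel so it lands where intended after the lens."""
        return (x - lut_dx[y, x], y - lut_dy[y, x])

    print(compensate(7, 0))
    ```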
  • Patent number: 11893924
    Abstract: Systems and methods for a multi-primary color system for display. A multi-primary color system increases the number of primary colors available in a color system and color system equipment. Increasing the number of primary colors reduces metameric errors from viewer to viewer. One embodiment of the multi-primary color system includes Red, Green, Blue, Cyan, Yellow, and Magenta primaries. The systems of the present invention maintain compatibility with existing color systems and equipment and provide systems for backwards compatibility with older color systems.
    Type: Grant
    Filed: October 19, 2022
    Date of Patent: February 6, 2024
    Assignee: BAYLOR UNIVERSITY
    Inventors: Gary B. Mandle, James M. DeFilippis, Mitchell J. Bogdanowicz, Corey P. Carbonara, Michael F. Korpi
  • Patent number: 11893688
    Abstract: A computer-implemented method of smoothing a transition between two mesh sequences to be rendered successively comprises steps of: (a) providing first and second mesh sequences {1} and {2} formed of mesh frames, respectively, to be fused into a fused sequence; (b) selecting mesh frames gn and gm being candidates for fusing therebetween; calculating geometric rigid and/or non-rigid transformations of candidate frames gn and gm belonging to said first and second mesh sequences {1} and {2}; applying the calculated geometric rigid and/or non-rigid transformations to candidate frames gn and gm belonging to said first and second mesh sequences {1} and {2}; calculating textural transformations of said candidate frames gn and gm belonging to said first and second mesh sequences {1} and {2}; and applying the calculated textural transformations to said candidate frames gn and gm belonging to said first and second mesh sequences {1} and {2}.
    Type: Grant
    Filed: December 15, 2021
    Date of Patent: February 6, 2024
    Assignee: TETAVI LTD.
    Inventors: Sefy Kagarlitsky, Shirley Keinan, Amir Green, Yair Baruch, Roi Lev, Michael Birnboim, Miky Tamir
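    A highly simplified Python sketch of the fusion step in patent 11893688: two candidate frames (reduced here to point sets) from the two mesh sequences are brought into alignment by a rigid transform before blending. Only a centroid translation is computed; the non-rigid and textural transformations the abstract mentions are omitted.
    ```python
    import numpy as np

    def rigid_translation(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
        """Translation that moves frame_b's centroid onto frame_a's centroid."""
        return frame_a.mean(axis=0) - frame_b.mean(axis=0)

    def fuse(frame_a: np.ndarray, frame_b: np.ndarray, alpha: float = 0.5) -> np.ndarray:
        aligned_b = frame_b + rigid_translation(frame_a, frame_b)
        return alpha * frame_a + (1.0 - alpha) * aligned_b   # blended transition frame

    g_n = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])   # candidate frame from sequence 1
    g_m = np.array([[5.0, 0.0, 0.0], [6.0, 0.0, 0.0]])   # candidate frame from sequence 2
    print(fuse(g_n, g_m))
    ```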
  • Patent number: 11887520
    Abstract: The present invention provides a chipset for FRC, wherein the chipset includes a first FRC chip and a second FRC chip. The first FRC chip is configured to receive a first part of input image data, and perform a motion compensation on the first part of the input image data to generate a first part of an output image data, wherein a frame rate of the output image data is greater than or equal to a frame rate of the input image data. The second FRC chip is configured to receive a second part of the input image data, and perform the motion compensation on the second part of the input image data to generate a second part of the output image data; wherein the first part and the second part of the output image data are combined into the complete output image data for displaying on a display panel.
    Type: Grant
    Filed: May 9, 2022
    Date of Patent: January 30, 2024
    Assignee: Realtek Semiconductor Corp.
    Inventors: Tien-Hung Lin, Chia-Wei Yu
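    A Python sketch of the two-chip split described in patent 11887520: each FRC chip receives part of the input frames, performs motion compensation on its part (stubbed here as frame averaging), and the two output parts are recombined for the panel. The top/bottom split and the averaging are illustrative assumptions.
    ```python
    import numpy as np

    def frc_chip(prev_half: np.ndarray, next_half: np.ndarray) -> np.ndarray:
        """Stand-in for per-chip motion compensation: a midpoint frame."""
        return (prev_half + next_half) / 2.0

    def interpolate_frame(prev_frame: np.ndarray, next_frame: np.ndarray) -> np.ndarray:
        h = prev_frame.shape[0] // 2
        top = frc_chip(prev_frame[:h], next_frame[:h])      # first FRC chip
        bottom = frc_chip(prev_frame[h:], next_frame[h:])   # second FRC chip
        return np.vstack([top, bottom])                     # combined output frame

    f0 = np.zeros((4, 4))
    f1 = np.ones((4, 4))
    print(interpolate_frame(f0, f1))
    ```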
  • Patent number: 11883259
    Abstract: System for scanning a subject's teeth. Described herein are intraoral scanning apparatuses for generating a three-dimensional model of a subject's intraoral region (e.g., teeth). These apparatuses may be used for identifying and evaluating lesions, caries and cracks in the teeth.
    Type: Grant
    Filed: April 3, 2023
    Date of Patent: January 30, 2024
    Assignee: Align Technology, Inc.
    Inventors: Gilad Elbaz, Erez Lampert, Yossef Atiya, Avi Kopelman, Ofer Saphier, Maayan Moshe, Shai Ayal, Michael Sabina, Eric Kuo, Assaf Weiss, Doron Malka, Eliahou Franklin Nizard, Ido Tishel
  • Patent number: 11880934
    Abstract: An apparatus and method are described for performing an early depth test on graphics data. For example, one embodiment of a graphics processing apparatus comprises: early depth test circuitry to perform an early depth test on blocks of pixels to determine whether all pixels in the block of pixels can be resolved by the early depth test; a plurality of execution circuits to execute pixel shading operations on the blocks of pixels; and a scheduler circuit to schedule the blocks of pixels for the pixel shading operations, the scheduler circuit to prioritize the blocks of pixels in accordance with the determination as to whether all pixels in the block of pixels can be resolved by the early depth test.
    Type: Grant
    Filed: November 2, 2021
    Date of Patent: January 23, 2024
    Assignee: INTEL CORPORATION
    Inventors: Brent E. Insko, Prasoonkumar Surti
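    A small Python sketch of the scheduling idea in patent 11880934: pixel blocks whose visibility is fully resolved by the early depth test are shaded first. The block representation and the stable FIFO tie-break are assumptions for illustration.
    ```python
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class PixelBlock:
        block_id: int
        all_resolved_by_early_z: bool   # outcome of the early depth test

    def schedule(blocks: List[PixelBlock]) -> List[int]:
        """Return block ids in shading order: early-Z-resolved blocks first."""
        ordered = sorted(blocks, key=lambda b: not b.all_resolved_by_early_z)
        return [b.block_id for b in ordered]

    blocks = [PixelBlock(0, False), PixelBlock(1, True), PixelBlock(2, True)]
    print(schedule(blocks))   # -> [1, 2, 0]
    ```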
  • Patent number: 11869408
    Abstract: The present invention includes systems and methods for a multi-primary color system for display. A multi-primary color system increases the number of primary colors available in a color system and color system equipment. Increasing the number of primary colors reduces metameric errors from viewer to viewer. One embodiment of the multi-primary color system includes Red, Green, Blue, Cyan, Yellow, and Magenta primaries. The systems of the present invention maintain compatibility with existing color systems and equipment and provide systems for backwards compatibility with older color systems.
    Type: Grant
    Filed: December 15, 2022
    Date of Patent: January 9, 2024
    Assignee: BAYLOR UNIVERSITY
    Inventors: Mitchell J. Bogdanowicz, Corey P. Carbonara, Michael F. Korpi, James M. DeFilippis, Gary B. Mandle
  • Patent number: 11860363
    Abstract: A method performed by an XR rendering device (124) having an NCOD (199) for generating, for a first user, XR content for an XR environment with which the first user is interacting. The method includes obtaining first user preference information for the first user, the first user preference information comprising sensory permission information indicating, expressly or implicitly, one or more sensory stimulations to which the first user agrees to be exposed. The method also includes obtaining XR scene configuration information for use in generating XR content for the first user. The method also includes generating XR content for the first user based on the sensory permission information and the XR scene configuration information such that the generated XR content does not comprise XR content for producing any sensory stimulation to which the first user has not agreed to be exposed.
    Type: Grant
    Filed: December 22, 2020
    Date of Patent: January 2, 2024
    Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
    Inventors: Gregoire Phillips, Paul McLachlan, Héctor Caltenco
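    A minimal Python sketch of the permission filter in patent 11860363: XR content is assembled only from scene elements whose sensory stimulation the user has agreed to. The stimulation labels and scene structure are illustrative assumptions.
    ```python
    from typing import Dict, List, Set

    def generate_xr_content(scene_config: List[Dict], permissions: Set[str]) -> List[Dict]:
        """Drop any element producing a stimulation the user has not agreed to."""
        return [el for el in scene_config if el["stimulation"] in permissions]

    scene = [
        {"id": "ambient_music", "stimulation": "audio"},
        {"id": "rumble_floor", "stimulation": "haptic"},
        {"id": "strobe_light", "stimulation": "flashing_visual"},
    ]
    user_permissions = {"audio", "haptic"}   # from the user preference information
    print(generate_xr_content(scene, user_permissions))
    ```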
  • Patent number: 11861797
    Abstract: A method performed by a first terminal is provided. The method includes identifying capabilities of the first terminal connected to at least one component device, establishing, via a server, a session associated with an augmented reality (AR) service based on the capabilities of the first terminal, performing pre-processing on three-dimensional (3D) media data acquired by the at least one component device, and transmitting, to a second terminal, the pre-processed 3D media data.
    Type: Grant
    Filed: July 23, 2021
    Date of Patent: January 2, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Eric Yip, Hyunkoo Yang, Jaeyeon Song
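    A Python sketch of the flow in patent 11861797: the first terminal inventories its capabilities, establishes a session accordingly, pre-processes the captured 3D media, and sends it on. The capability names, the negotiation stub, and the "downsample on weak hardware" rule are assumptions.
    ```python
    def establish_session(capabilities: dict) -> dict:
        """Stand-in for the server-mediated AR session negotiation."""
        return {"media": "pointcloud" if capabilities["depth_camera"] else "2d_video"}

    def preprocess(points: list, capabilities: dict) -> list:
        """Assumed pre-processing: terminals without a GPU send every other point."""
        return points if capabilities["gpu"] else points[::2]

    caps = {"depth_camera": True, "gpu": False}
    session = establish_session(caps)
    payload = preprocess([(0, 0, 0), (1, 1, 1), (2, 2, 2)], caps)
    print(session, payload)   # payload would then be transmitted to the second terminal
    ```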
  • Patent number: 11861794
    Abstract: A system, software application, and process help the user memorize a digital environment through a series of actions the user has to perform. The embodiments provide for generating a digital environment including a first virtual room. A first set of digital objects in pre-defined locations of the first virtual room is generated. The first virtual room and the digital objects in the pre-defined locations are displayed. An acknowledgement from the user that the user has memorized positions of each of the digital objects in the first virtual room is received. The objects are removed. Selection of the digital objects and their placement in the first virtual room is received. The system determines whether every digital object was placed in a correct location and, in response, generates a second virtual room including a second set of digital objects for further memory training.
    Type: Grant
    Filed: December 16, 2021
    Date of Patent: January 2, 2024
    Assignee: ENCODER INC.
    Inventor: Oleksii Ruzhytskyi
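    A Python sketch of the training loop in patent 11861794: show objects in pre-defined locations, hide them, accept the user's placements, and advance to a new room only when every object is back in its correct location. The tolerance and room contents are assumptions.
    ```python
    def placements_correct(expected: dict, placed: dict, tol: float = 0.5) -> bool:
        """True if every object was put back within `tol` units of its original spot."""
        return all(
            obj in placed
            and abs(placed[obj][0] - pos[0]) <= tol
            and abs(placed[obj][1] - pos[1]) <= tol
            for obj, pos in expected.items()
        )

    room_1 = {"vase": (1.0, 2.0), "clock": (3.0, 0.5)}        # pre-defined locations
    user_placement = {"vase": (1.2, 2.1), "clock": (3.0, 0.4)}

    if placements_correct(room_1, user_placement):
        print("generate second virtual room with a new set of digital objects")
    else:
        print("repeat the first virtual room")
    ```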
  • Patent number: 11861799
    Abstract: An apparatus comprising: means for joining to at least an existing participant in extended reality at a location in a virtual space, a joining participant in extended reality to enable a shared extended reality, wherein the joined participants in the shared extended reality are at least co-located in the virtual space and can share at least visually the virtual space; wherein at least part of a content configuration is inherited from the existing participant by the joining participant, wherein the content configuration controls, for participants, what is heard, what is seen and interactivity with the virtual space; wherein at least part of a join configuration is inherited between the existing participant and the joining participant, wherein the join configuration controls joining of other joining participants in the shared extended reality.
    Type: Grant
    Filed: December 9, 2021
    Date of Patent: January 2, 2024
    Assignee: NOKIA TECHNOLOGIES OY
    Inventors: Lasse Juhani Laaksonen, Jussi Leppänen, Arto Lehtiniemi, Sujeet Shyamsundar Mate
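    A Python sketch of the inheritance step in patent 11861799: when a participant joins an existing participant in the shared virtual space, part of the existing participant's content configuration (what is heard, seen, and interactable) carries over. Which keys are inheritable is an assumption here.
    ```python
    INHERITED_KEYS = {"audio_mix", "visible_layers"}   # assumed inheritable subset

    def join_shared_xr(existing_cfg: dict, joining_cfg: dict) -> dict:
        """Merge configs so the joiner inherits part of the existing participant's setup."""
        inherited = {k: v for k, v in existing_cfg.items() if k in INHERITED_KEYS}
        return {**joining_cfg, **inherited}   # inherited parts override the joiner's own

    existing = {"audio_mix": "venue", "visible_layers": ["stage", "crowd"], "interactivity": "full"}
    joining = {"audio_mix": "default", "visible_layers": ["stage"], "interactivity": "view_only"}
    print(join_shared_xr(existing, joining))
    ```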
  • Patent number: 11856328
    Abstract: A method for virtual 3D video conference environment generation may include (a) determining a first optical axis of a first virtual camera, where the first optical axis represents a line of sight of a participant of the 3D video conference environment while the participant looks at a currently displayed version of a virtual 3D video conference environment (V3DVCE), the currently displayed version of the V3DVCE being displayed on a display; (b) determining a second optical axis of a second virtual camera that virtually captures the V3DVCE to provide the currently displayed version of the V3DVCE; and (c) generating a next displayed version of the V3DVCE based on at least one of the first optical axis and the second optical axis.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: December 26, 2023
    Assignee: TRUE MEETING INC.
    Inventors: Yuval Gronau, Ran Oz, Omri Kaduri, Osnat Goren-Peyser
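    A geometric Python sketch of the two-axis idea in patent 11856328: the next displayed view of the virtual conference environment is derived from the viewer's line of sight (first optical axis) and the virtual capture camera's current axis (second optical axis). Blending the two directions is an assumed policy, not the claimed algorithm.
    ```python
    import numpy as np

    def next_view_direction(first_axis: np.ndarray, second_axis: np.ndarray,
                            weight: float = 0.3) -> np.ndarray:
        """Move the capture camera part of the way toward the viewer's gaze."""
        blended = (1.0 - weight) * second_axis + weight * first_axis
        return blended / np.linalg.norm(blended)

    gaze = np.array([0.0, 0.0, -1.0])      # first optical axis (participant's line of sight)
    camera = np.array([0.2, 0.0, -0.98])   # second optical axis (virtual capture camera)
    print(next_view_direction(gaze, camera))
    ```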