Patents by Inventor Shi Hui Gui

Shi Hui Gui has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12277171
    Abstract: A method, computer system, and computer program product are provided for training a neural network to find queried videos. Two pairs of video clips and associated text are obtained from a first dataset and a second dataset. The first dataset is used to train two video encoders by providing the video clips to the encoders as input and feeding their outputs to a cosine similarity calculator. The second dataset is used to train a multi-mentor paradigm with two mentors. A first mentor and a second mentor are each provided with the pair of textual inputs; the first mentor produces a similarity value comparison, and the second mentor produces a word mover's distance. Using the output of the multi-mentor paradigm and the encoders, a contrastive loss is calculated and used to drive contrastive learning of video features by differentiating similarity and dissimilarity among the video clips.
    Type: Grant
    Filed: March 7, 2023
    Date of Patent: April 15, 2025
    Assignee: International Business Machines Corporation
    Inventors: Xiao Xia Mao, Wei Jun Zheng, Shi Hui Gui, Xiao Feng Ji
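    Illustrative sketch: the abstract above centers on comparing video and text encoder outputs with cosine similarity and training with a contrastive loss. The snippet below is a minimal, hypothetical PyTorch sketch of such a cosine-similarity contrastive (InfoNCE-style) loss; the specific encoders, the multi-mentor paradigm, and the word mover's distance supervision described in the abstract are not reproduced, and all names and values are illustrative.

```python
# Hypothetical sketch of a cosine-similarity contrastive loss, standing in for
# the contrastive learning step described in the abstract; not the patented
# training procedure. Encoder outputs are simulated with random tensors.
import torch
import torch.nn.functional as F

def contrastive_loss(video_emb, text_emb, temperature=0.07):
    # L2-normalize so that dot products become cosine similarities.
    v = F.normalize(video_emb, dim=-1)   # (batch, dim) video encoder outputs
    t = F.normalize(text_emb, dim=-1)    # (batch, dim) text encoder outputs
    logits = v @ t.T / temperature       # pairwise cosine similarity matrix
    targets = torch.arange(v.size(0), device=v.device)
    # Matching clip/text pairs lie on the diagonal; all other pairs act as
    # negatives, which differentiates similar from dissimilar clips.
    loss_v2t = F.cross_entropy(logits, targets)
    loss_t2v = F.cross_entropy(logits.T, targets)
    return (loss_v2t + loss_t2v) / 2

video_emb = torch.randn(8, 256)  # placeholder for outputs of a video encoder
text_emb = torch.randn(8, 256)   # placeholder for outputs of a text encoder
print(contrastive_loss(video_emb, text_emb).item())
```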
  • Publication number: 20240303272
    Abstract: A method, computer system, and computer program product are provided for training a neural network to find queried videos. Two pairs of video clips and associated text are obtained from a first dataset and a second dataset. The first dataset is used to train two video encoders by providing the video clips to the encoders as input and feeding their outputs to a cosine similarity calculator. The second dataset is used to train a multi-mentor paradigm with two mentors. A first mentor and a second mentor are each provided with the pair of textual inputs; the first mentor produces a similarity value comparison, and the second mentor produces a word mover's distance. Using the output of the multi-mentor paradigm and the encoders, a contrastive loss is calculated and used to drive contrastive learning of video features by differentiating similarity and dissimilarity among the video clips.
    Type: Application
    Filed: March 7, 2023
    Publication date: September 12, 2024
    Inventors: Xiao Xia Mao, Wei Jun Zheng, Shi Hui Gui, Xiao Feng Ji
  • Publication number: 20240303444
    Abstract: An embodiment of a method for generating and selecting optimal translations for user interfaces by balancing translation integrity against user interface design requirements. The embodiment may receive text to be translated from a first language to a second language within a user interface design context. The embodiment may generate a plurality of translations of the received text, each of the plurality of translations having a respective translation integrity value and text length. The embodiment may calculate a translation deviation range for the received text. The embodiment may determine one or more optimal user interface designs and calculate a guided text length range for the received text. The embodiment may select optimal translation outputs by balancing the respective translation integrity values and the text lengths for each of the generated plurality of translations against the calculated translation deviation range and the calculated guided text length range for the received text.
    Type: Application
    Filed: March 10, 2023
    Publication date: September 12, 2024
    Inventors: Al Chakra, Nathan Montgomery Gurley, Xiao Xia Mao, Shi Hui Gui
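    Illustrative sketch: the selection step in the abstract above balances translation quality against how much text a layout can hold. The snippet below is a minimal, hypothetical Python sketch of one way such a balance could be expressed; the integrity scores, length ranges, and selection rule are assumptions for illustration, not the patented algorithm.

```python
# Hypothetical candidate-selection step: keep translations whose quality stays
# within an allowed deviation of the best score and whose length fits the UI's
# guided length range, then pick the highest-quality survivor. All thresholds
# and field names are illustrative.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Candidate:
    text: str
    integrity: float  # 0..1, higher = closer to the source meaning
    length: int       # rendered character count in the target layout

def select_translation(candidates: List[Candidate],
                       max_integrity_drop: float = 0.05,
                       min_len: int = 10, max_len: int = 40) -> Optional[Candidate]:
    best = max(c.integrity for c in candidates)
    eligible = [c for c in candidates
                if best - c.integrity <= max_integrity_drop
                and min_len <= c.length <= max_len]
    # Among candidates that fit the layout, prefer the most faithful one.
    return max(eligible, key=lambda c: c.integrity) if eligible else None

candidates = [
    Candidate("Save changes", 0.98, 12),
    Candidate("Save your modifications now", 0.97, 27),
    Candidate("Persist the pending modifications to disk", 0.99, 42),
]
print(select_translation(candidates))  # -> the short, high-integrity candidate
```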
  • Publication number: 20240096068
    Abstract: At least one computer processor can replace visual words of an unsupervised machine learning classification model with visual objects of an image. At least two co-occurring single visual objects adjacent to each other in pixels of the image can be combined to obtain a compound visual object. The unsupervised machine learning classification model can be augmented to model the image as a mixture of subjects, where each subject is represented through placements of the visual objects in a mixture of concentric spheres centering on a mixture of intersections on a mixture of horizontal layers. At least one processor can learn latent relationships between the placements of the visual objects in a three-dimensional space depicted in the image and image semantics. Learning the latent relationships trains the unsupervised machine learning classification model to perform image subject classification through the placements of the visual objects in a new image.
    Type: Application
    Filed: September 21, 2022
    Publication date: March 21, 2024
    Inventors: Ying Li, Fang Lu, Yuan Yuan Gong, Wen Ting Li, Shi Hui Gui, Xiao Feng Ji
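    Illustrative sketch: the abstract above treats detected objects as the "visual words" of an unsupervised model and merges adjacent co-occurring objects into compound objects. The snippet below is a hypothetical Python sketch of that tokenization idea, using scikit-learn's generic LDA topic model as a stand-in for the unsupervised subject classifier; the concentric-sphere spatial modeling described in the abstract is not reproduced, and all labels are illustrative.

```python
# Hypothetical illustration: represent each image as a bag of object labels,
# append compound tokens for adjacent co-occurring objects, and fit a generic
# topic model (sklearn LDA) as a stand-in for the unsupervised subject model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def tokens_for_image(objects, adjacency):
    # objects: detected object labels; adjacency: index pairs whose bounding
    # boxes touch in pixel space (the "co-occurring adjacent" objects).
    tokens = list(objects)
    for i, j in adjacency:
        tokens.append(f"{objects[i]}+{objects[j]}")  # compound visual object
    return " ".join(tokens)

images = [
    tokens_for_image(["person", "bicycle", "road"], {(0, 1)}),
    tokens_for_image(["person", "horse", "field"], {(0, 1)}),
    tokens_for_image(["car", "road", "traffic_light"], {(0, 2)}),
]
counts = CountVectorizer(token_pattern=r"[^ ]+").fit_transform(images)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
print(lda.transform(counts))  # per-image mixture over latent "subjects"
```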
  • Patent number: 11645803
    Abstract: According to an embodiment, a source object presented in a source video is identified. Attribute information of the source object in respective frames of a sequence of source frames in the source video is identified. The attribute information represents an animation effect associated with the source object across the sequence of source frames. The attribute information is provided for use in reproducing the animation effect in a target video.
    Type: Grant
    Filed: August 7, 2020
    Date of Patent: May 9, 2023
    Assignee: International Business Machines Corporation
    Inventors: Jian Jun Wang, Ting Chen, Shi Hui Gui, Li Yi Zhou, Jing Xia, Yidan Lei
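    Illustrative sketch: the abstract above describes capturing an object's per-frame attribute information so the animation effect can be reproduced on a target video. The snippet below is a minimal, hypothetical Python sketch of recording per-frame attributes relative to a starting pose and replaying them elsewhere; the attribute set and the tracking step that would produce it are assumptions, not the patented pipeline.

```python
# Hypothetical sketch: record an object's per-frame attributes relative to its
# first frame, then replay the same animation effect from a new starting pose.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class FrameAttributes:
    frame: int
    x: float
    y: float
    scale: float
    opacity: float

def extract_effect(track: List[FrameAttributes]) -> List[Dict[str, float]]:
    # Store attributes relative to the first frame so the effect is reusable.
    base = track[0]
    return [{"frame": f.frame - base.frame,
             "dx": f.x - base.x,
             "dy": f.y - base.y,
             "scale": f.scale / base.scale,
             "opacity": f.opacity} for f in track]

def apply_effect(effect: List[Dict[str, float]], start_x: float, start_y: float,
                 start_scale: float) -> List[FrameAttributes]:
    return [FrameAttributes(int(e["frame"]),
                            start_x + e["dx"],
                            start_y + e["dy"],
                            start_scale * e["scale"],
                            e["opacity"]) for e in effect]

source_track = [FrameAttributes(0, 10, 10, 1.0, 1.0),
                FrameAttributes(1, 20, 12, 1.1, 0.9),
                FrameAttributes(2, 35, 15, 1.3, 0.8)]
effect = extract_effect(source_track)          # captured from the source video
print(apply_effect(effect, 100.0, 50.0, 2.0))  # replayed on a target object
```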
  • Publication number: 20220044464
    Abstract: According to an embodiment, a source object presented in a source video is identified. Attribute information of the source object in respective frames of a sequence of source frames in the source video is identified. The attribute information represents an animation effect associated with the source object across the sequence of source frames. The attribute information is provided for use in reproducing the animation effect in a target video.
    Type: Application
    Filed: August 7, 2020
    Publication date: February 10, 2022
    Inventors: Jian Jun Wang, Ting Chen, Shi Hui Gui, Li Yi Zhou, Jing Xia, Yidan Lei