Patents by Inventor Jialin YUAN

Jialin YUAN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240415089
    Abstract: The present disclosure relates to a device for live animal transport, comprising: a bottom plate, comprising a liquid guide plate and a non-liquid guide plate, wherein the liquid guide plate has a first end and a second end opposite the first end; a plurality of support members, symmetrically provided at predetermined intervals on the bottom plate at the first end and the second end of the liquid guide plate, and extending in a first direction perpendicular to the bottom plate; a plurality of pillars, one end of each pillar being nested in a corresponding one support member, and the plurality of pillars being capable of telescoping in the first direction; a plurality of beams, extending in a second direction from the first end of the liquid guide plate to the second end of the liquid guide plate, each beam being supported by the other end of the pillar at the first end of the liquid guide plate and the other end of the pillar at the second end of the liquid guide plate; side plates, provided on the bottom plate …
    Type: Application
    Filed: May 29, 2024
    Publication date: December 19, 2024
    Applicant: The Boeing Company
    Inventors: Jiao Mo, Ke Ma, Paola Trapani, Saverio Silli, Yiwei Liu, Jialin Yuan, Yuchen Tan, Jiyu Song, Kudilaiti Kuerban, Peizhong Gao, Long Long, Cynthia A. Vandewall
  • Publication number: 20240290081
    Abstract: A computerized method trains and uses a multimodal fusion transformer (MFT) model for content moderation. Language modality data and vision modality data associated with a multimodal media source is received. Language embeddings are generated from the language modality data and vision embeddings are generated from the vision modality data. Both kinds of embeddings are generated using operations and/or processes that are specific to the associated modalities. The language embeddings and vision embeddings are combined into combined embeddings and the MFT model is used with those combined embeddings to generate a language semantic output token, a vision semantic output token, and a combined semantic output token. Contrastive loss data is generated using the three semantic output tokens and the MFT model is adjusted using that contrastive loss data. After the MFT model is trained sufficiently, it is configured to perform content moderation operations using semantic output tokens.
    Type: Application
    Filed: February 28, 2023
    Publication date: August 29, 2024
    Inventors: Ye YU, Gaurav MITTAL, Matthew Brigham HALL, Sandra SAJEEV, Mei CHEN, Jialin YUAN
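    The training loop summarized in this abstract, combining modality-specific embeddings and pulling their semantic tokens together with a contrastive loss, can be illustrated with a toy NumPy sketch. This is not the patented method: the random embeddings, mean-pooled stand-ins for the transformer's semantic output tokens, and the InfoNCE-style loss below are all assumptions chosen to make the structure of the computation concrete.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d = 16  # toy embedding dimension

    # Stand-ins for language and vision embeddings for a batch of 4 media
    # items; in the described method these come from modality-specific
    # operations, here they are just random vectors.
    lang_emb = rng.normal(size=(4, d))
    vis_emb = rng.normal(size=(4, d))

    # One simple way to form "combined embeddings": stack the two
    # modalities along a token axis.
    combined = np.stack([lang_emb, vis_emb], axis=1)  # (batch, 2, d)

    # Hypothetical stand-ins for the three semantic output tokens a fusion
    # transformer would emit (language, vision, combined); mean pooling
    # replaces the transformer itself in this sketch.
    lang_tok = lang_emb
    vis_tok = vis_emb
    comb_tok = combined.mean(axis=1)

    def info_nce(a, b, temperature=0.1):
        """InfoNCE-style contrastive loss between two token sets;
        matching batch rows are treated as positive pairs."""
        a = a / np.linalg.norm(a, axis=1, keepdims=True)
        b = b / np.linalg.norm(b, axis=1, keepdims=True)
        logits = a @ b.T / temperature               # (batch, batch) similarities
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # Pull each modality's token toward the combined token of the same
    # item; a gradient step on this loss would adjust the model.
    loss = info_nce(lang_tok, comb_tok) + info_nce(vis_tok, comb_tok)
    ```

    In a real implementation the three tokens would be produced by the fusion transformer and the loss would drive backpropagation; the sketch only shows how the contrastive objective ties the per-modality and combined representations together.
    
    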
  • Publication number: 20240020854
    Abstract: Example solutions for video object segmentation (VOS) use a bilateral attention transformer in motion-appearance neighboring space, and perform a process that includes: receiving a video stream comprising a plurality of video frames in a sequence; receiving a first object mask for an initial video frame of the plurality of video frames; selecting a video frame of the plurality of video frames as a current query frame, the current query frame following, in the sequence, a reference frame of a reference frame set, wherein each reference frame has a corresponding object mask; using the current query frame and a video frame in the reference frame set, determining a bilateral attention; and using the bilateral attention, generating an object mask for the current query frame.
    Type: Application
    Filed: September 14, 2022
    Publication date: January 18, 2024
    Inventors: Ye YU, Gaurav MITTAL, Mei CHEN, Jialin YUAN
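    The mask-propagation step this abstract describes, where a query frame attends to a reference frame and inherits its object mask, can be sketched with a toy neighborhood-restricted cross-attention. This is a simplified assumption, not the patented bilateral attention in motion-appearance space: a fixed spatial window stands in for the motion-appearance neighborhood, and random per-pixel features stand in for backbone features.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    h = w = 8   # toy frame resolution
    d = 4       # toy per-pixel feature dimension

    # Stand-in features for a reference frame and a similar query frame.
    ref_feat = rng.normal(size=(h * w, d))
    qry_feat = ref_feat + 0.05 * rng.normal(size=(h * w, d))

    # Known object mask for the reference frame: a square region.
    ref_mask = np.zeros((h, w))
    ref_mask[1:4, 1:4] = 1.0

    def local_cross_attention(qry, ref, ref_mask, radius=2):
        """Each query pixel attends only to reference pixels within
        `radius` (a crude stand-in for a motion-appearance neighborhood)
        and the attention weights propagate the reference mask."""
        out = np.zeros(h * w)
        mask_flat = ref_mask.reshape(-1)
        ys, xs = np.divmod(np.arange(h * w), w)
        for i in range(h * w):
            near = (np.abs(ys - ys[i]) <= radius) & (np.abs(xs - xs[i]) <= radius)
            idx = np.nonzero(near)[0]
            logits = ref[idx] @ qry[i] / np.sqrt(d)   # scaled dot-product
            weights = np.exp(logits - logits.max())
            weights /= weights.sum()                  # softmax over the window
            out[i] = weights @ mask_flat[idx]         # soft mask for this pixel
        return out.reshape(h, w)

    # Generate a soft object mask for the query frame from the reference.
    qry_mask = local_cross_attention(qry_feat, ref_feat, ref_mask)
    ```

    Restricting attention to a window keeps each query pixel from matching spurious far-away pixels, which is the intuition behind attending in a neighboring space rather than globally; the actual method determines the neighborhood from both motion and appearance.
    
    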