Patents by Inventor Yunsheng Jiang

Yunsheng Jiang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240092814
    Abstract: Provided are a water-soluble Pd(II) complex, a method of synthesizing it, and its use as a catalytic precursor. The complex, ammonium dinitrooxalato palladium(II), has the molecular formula (NH4)2[Pd(NO2)2(C2O4)]·nH2O (where n is the number of waters of crystallization). The complex is synthesized from PdCl2 or [Pd(NH3)2Cl2] as the starting material, which is first converted into [Pd(NH3)4]Cl2 in ammonium hydroxide; [Pd(NH3)4]Cl2 then reacts with excess NaNO2 to produce trans-[Pd(NH3)2(NO2)2] via a ligand-substitution mechanism, and dissolving trans-[Pd(NH3)2(NO2)2] in an aqueous solution of oxalic acid finally yields the target product (NH4)2[Pd(NO2)2(C2O4)]·2H2O. The complex contains no chlorine or other elements harmful to a catalyst, is readily soluble in water, and has a low thermal decomposition temperature.
    Type: Application
    Filed: November 13, 2023
    Publication date: March 21, 2024
    Inventors: Weiping Liu, Juan Yu, Li Chen, Anli Gao, Yunsheng Dai, Feng Liu, Jing Jiang, Jiyang Xie, Hao Zhou, Qiaowen Chang, Caixian Yan
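
The synthesis route in the abstract can be sketched as three reaction steps. The stoichiometry shown is inferred for illustration; the abstract itself does not give balanced equations.

```latex
% Step 1: starting material converted to the tetraammine complex in ammonium hydroxide
\ce{PdCl2 + 4 NH3 -> [Pd(NH3)4]Cl2}
% Step 2: ligand substitution with excess sodium nitrite
\ce{[Pd(NH3)4]Cl2 + 2 NaNO2 -> trans-[Pd(NH3)2(NO2)2] + 2 NaCl + 2 NH3}
% Step 3: dissolution in aqueous oxalic acid gives the target complex (dihydrate)
\ce{trans-[Pd(NH3)2(NO2)2] + H2C2O4 + 2 H2O -> (NH4)2[Pd(NO2)2(C2O4)] * 2 H2O}
```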
  • Patent number: 11564013
    Abstract: In some embodiments, a method receives, as input to a prediction network, a history of videos that were viewed on a video delivery system as a first sentence in sequential order and a target video as a second sentence. The prediction network analyzes first representations for the history of videos and a second representation of the target video, and generates a session representation by bidirectionally analyzing the sequence of the first representations and the second representation. The method uses the session representation to determine whether to recommend the target video.
    Type: Grant
    Filed: October 19, 2020
    Date of Patent: January 24, 2023
    Assignee: HULU, LLC
    Inventors: Peng Wang, Yunsheng Jiang, Diman Shen, Yaqi Wang, Xiaohui Xie, Brian Thomas Morrison
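
The claim reads like a bidirectional sequence model over watch sessions. This toy sketch, with invented embeddings and a simple recurrent update standing in for the patented prediction network, shows the shape of the computation: the history "sentence" plus the target is scanned in both directions, and the merged state scores the target.

```python
import math

def directional_state(vectors, decay=0.5):
    """One-directional recurrent summary: h_t = tanh(decay * h_{t-1} + x_t)."""
    h = [0.0] * len(vectors[0])
    for x in vectors:
        h = [math.tanh(decay * hi + xi) for hi, xi in zip(h, x)]
    return h

def session_representation(history_embs, target_emb):
    """Bidirectional pass over history ("sentence 1") plus target ("sentence 2")."""
    seq = history_embs + [target_emb]
    fwd = directional_state(seq)        # left-to-right context
    bwd = directional_state(seq[::-1])  # right-to-left context
    return [(f + b) / 2 for f, b in zip(fwd, bwd)]

def should_recommend(history_embs, target_emb, threshold=0.0):
    """Score the target against the session representation (dot product)."""
    s = session_representation(history_embs, target_emb)
    score = sum(si * ti for si, ti in zip(s, target_emb))
    return score > threshold
```

The dot-product scoring rule and the threshold are assumptions for illustration; the patent does not specify how the session representation is turned into a recommendation decision.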
  • Patent number: 11481438
    Abstract: In some embodiments, a method selects a sequence of programs watched by a user account. The method calculates a first set of weights by comparing content of the sequence of programs to content of a target program, and calculates a second set of weights based on the order of the sequence of programs and the first set of weights. The first set of weights and the second set of weights are applied to the sequence of programs to generate a prediction of the similarity of the sequence of programs to the target program. The method then outputs the prediction of the similarity for use in determining a recommendation for the user account.
    Type: Grant
    Filed: October 6, 2020
    Date of Patent: October 25, 2022
    Assignee: HULU, LLC
    Inventors: Kaiwen Deng, Yunsheng Jiang, Xiaohui Xie, Brian Morrison, Jiarui Yang, Christopher Russell Kehler
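
A minimal attention-style sketch of the two weight sets: a first weight per watched program from content similarity to the target, a second weight from the sequence order combined with the first weights, and both applied to the sequence to predict similarity. The cosine features and the exponential recency term are illustrative assumptions, not the patented model.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def predict_similarity(sequence_embs, target_emb, recency=0.8):
    # First set of weights: content similarity of each program to the target.
    w1 = [cosine(p, target_emb) for p in sequence_embs]
    # Second set: sequence order combined with the first weights
    # (later programs in the watch order count more).
    n = len(sequence_embs)
    w2 = [w * (recency ** (n - 1 - i)) for i, w in enumerate(w1)]
    total = sum(w2) or 1.0
    # Apply the weights to the sequence and score the pooled result.
    pooled = [sum(w * p[d] for w, p in zip(w2, sequence_embs)) / total
              for d in range(len(target_emb))]
    return cosine(pooled, target_emb)
```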
  • Patent number: 11416546
    Abstract: In one embodiment, a method receives a set of frames from a video at a first classifier. The first classifier classifies the set of frames with classification scores that indicate a confidence that a frame contains end credit content, using a first model that classifies content from the set of frames. A second classifier then refines the classification scores using scores from neighboring frames in the set, using a second model that classifies the classification scores from the first classifier. Based on the refined classification scores, a boundary point is selected between a frame considered not to include end credit content and a frame considered to include end credit content.
    Type: Grant
    Filed: March 20, 2018
    Date of Patent: August 16, 2022
    Assignee: HULU, LLC
    Inventors: Yunsheng Jiang, Xiaohui Xie, Liangliang Li
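
A sketch of the two-stage idea: a first classifier scores each frame for end credit content, a second stage refines each score from its neighbors' scores, and the boundary is the first frame whose refined score crosses a threshold. The moving-average refinement and the threshold are stand-ins for the patented second classifier.

```python
def refine_scores(scores, window=1):
    """Second stage: smooth each frame score using neighboring frames' scores."""
    refined = []
    for i in range(len(scores)):
        lo, hi = max(0, i - window), min(len(scores), i + window + 1)
        neighborhood = scores[lo:hi]
        refined.append(sum(neighborhood) / len(neighborhood))
    return refined

def credit_boundary(frame_scores, threshold=0.5):
    """Index of the first frame judged to start the end credits, or None."""
    refined = refine_scores(frame_scores)
    for i, s in enumerate(refined):
        if s >= threshold:
            return i
    return None
```

Refining from neighbors suppresses isolated false positives: a single high-scoring frame surrounded by low scores no longer triggers a boundary on its own.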
  • Publication number: 20220124409
    Abstract: In some embodiments, a method receives, as input to a prediction network, a history of videos that were viewed on a video delivery system as a first sentence in sequential order and a target video as a second sentence. The prediction network analyzes first representations for the history of videos and a second representation of the target video, and generates a session representation by bidirectionally analyzing the sequence of the first representations and the second representation. The method uses the session representation to determine whether to recommend the target video.
    Type: Application
    Filed: October 19, 2020
    Publication date: April 21, 2022
    Inventors: Peng Wang, Yunsheng Jiang, Diman Shen, Yaqi Wang, Xiaohui Xie, Brian Thomas Morrison
  • Publication number: 20210374178
    Abstract: In some embodiments, a method selects a sequence of programs watched by a user account. The method calculates a first set of weights by comparing content of the sequence of programs to content of a target program, and calculates a second set of weights based on the order of the sequence of programs and the first set of weights. The first set of weights and the second set of weights are applied to the sequence of programs to generate a prediction of the similarity of the sequence of programs to the target program. The method then outputs the prediction of the similarity for use in determining a recommendation for the user account.
    Type: Application
    Filed: October 6, 2020
    Publication date: December 2, 2021
    Inventors: Kaiwen Deng, Yunsheng Jiang, Xiaohui Xie, Brian Morrison, Jiarui Yang, Christopher Russell Kehler
  • Patent number: 11113537
    Abstract: In some embodiments, a first detector generates a first output based on a first probability that an image was inserted in a video. The first detector is trained with a set of known images to detect that set of known images. A second detector generates a second output based on a second probability that an image was inserted in the video; the second detector detects a set of unknown images without training. The method analyzes the first output and the second output, each based on the probability of the image existing in the video, to generate a combined score from the two outputs. An indication of whether the image is detected in the video is output based on the combined score.
    Type: Grant
    Filed: August 2, 2019
    Date of Patent: September 7, 2021
    Assignee: HULU, LLC
    Inventors: Kaiwen Deng, Yunsheng Jiang, Xiaohui Xie
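
A sketch of the two-detector fusion: a trained detector covers the set of known inserted images, a training-free detector covers unknown ones, and their probability outputs are combined into one score. The weighted-average fusion rule and the threshold are assumptions; the patent only states that the two outputs are analyzed to produce a combined score.

```python
def combined_score(p_known, p_unknown, weight=0.6):
    """Fuse the trained detector's probability with the training-free one's."""
    return weight * p_known + (1 - weight) * p_unknown

def image_detected(p_known, p_unknown, threshold=0.5):
    """Output an indication of whether the image is detected in the video."""
    return combined_score(p_known, p_unknown) >= threshold
```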
  • Publication number: 20210034878
    Abstract: In some embodiments, a first detector generates a first output based on a first probability that an image was inserted in a video. The first detector is trained with a set of known images to detect that set of known images. A second detector generates a second output based on a second probability that an image was inserted in the video; the second detector detects a set of unknown images without training. The method analyzes the first output and the second output, each based on the probability of the image existing in the video, to generate a combined score from the two outputs. An indication of whether the image is detected in the video is output based on the combined score.
    Type: Application
    Filed: August 2, 2019
    Publication date: February 4, 2021
    Inventors: Kaiwen Deng, Yunsheng Jiang, Xiaohui Xie
  • Patent number: 10867204
    Abstract: In some embodiments, a method detects a first set of frames in a video that include lines of text, with detection performed at the frame level on each individual frame. A first representation and a second representation are generated from the first set of frames. The method filters the first representation, based on the number of lines of text within a region in the space dimension, to select a second set of frames, and filters the second representation, based on the number of frames within time intervals in the time dimension, to select a third set of frames. Frames appearing in both the second set and the third set are analyzed to determine whether their lines of text are burned-in subtitles.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: December 15, 2020
    Assignee: HULU, LLC
    Inventors: Yaqi Wang, Xiaohui Xie, Yunsheng Jiang
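
A sketch of the two-filter scheme: frames with detected text are filtered once in the space dimension (enough text lines inside a subtitle-like region) and once in the time dimension (enough such frames inside a time interval), and only frames surviving both filters are treated as burned-in subtitle candidates. The region bounds, interval size, and thresholds are illustrative assumptions.

```python
def filter_space(frames, region=(0.7, 1.0), min_lines=1):
    """Keep frames with >= min_lines text lines inside the vertical region band."""
    kept = set()
    for idx, line_ys in frames.items():  # line_ys: normalized y-positions (0..1)
        count = sum(1 for y in line_ys if region[0] <= y <= region[1])
        if count >= min_lines:
            kept.add(idx)
    return kept

def filter_time(frames, interval=5, min_frames=3):
    """Keep frames whose time interval contains >= min_frames text frames."""
    kept = set()
    indices = sorted(frames)
    for idx in indices:
        bucket = [i for i in indices if i // interval == idx // interval]
        if len(bucket) >= min_frames:
            kept.add(idx)
    return kept

def burned_in_candidates(frames):
    """Frames surviving both the space-dimension and time-dimension filters."""
    return filter_space(frames) & filter_time(frames)
```

Requiring both filters reflects the intuition that burned-in subtitles sit in a consistent screen region and persist across consecutive frames, unlike incidental text.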
  • Publication number: 20200349381
    Abstract: In some embodiments, a method detects a first set of frames in a video that include lines of text, with detection performed at the frame level on each individual frame. A first representation and a second representation are generated from the first set of frames. The method filters the first representation, based on the number of lines of text within a region in the space dimension, to select a second set of frames, and filters the second representation, based on the number of frames within time intervals in the time dimension, to select a third set of frames. Frames appearing in both the second set and the third set are analyzed to determine whether their lines of text are burned-in subtitles.
    Type: Application
    Filed: April 30, 2019
    Publication date: November 5, 2020
    Inventors: Yaqi Wang, Xiaohui Xie, Yunsheng Jiang
  • Patent number: 10721388
    Abstract: In one embodiment, a system detects objects in an image and generates attention regions that are positioned in the image based on first positions of the objects in the image. Focus points for the objects are generated for the attention regions at one or more second positions. Focus boxes are generated using the second positions of the focus points. Then, the system generates information for a motion effect using content of the image based on a number of the focus boxes and third positions of the focus boxes.
    Type: Grant
    Filed: March 23, 2018
    Date of Patent: July 21, 2020
    Assignee: Hulu, LLC
    Inventors: Yunsheng Jiang, Xiaohui Xie, Ran Cao
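
A sketch of the focus-box pipeline: detected object positions yield attention regions, each region gets a focus point, focus boxes are built around those points, and a simple pan path through the box centers stands in for the generated motion effect. The fixed box size and the left-to-right ordering rule are illustrative assumptions.

```python
def focus_boxes(object_centers, box_size=0.3):
    """One fixed-size focus box centered on each object's focus point.

    Coordinates are normalized to [0, 1]; each box is (x0, y0, x1, y1).
    """
    half = box_size / 2
    return [(x - half, y - half, x + half, y + half)
            for x, y in object_centers]

def motion_path(boxes):
    """Pan through the focus boxes' center points, ordered left to right."""
    centers = [((x0 + x1) / 2, (y0 + y1) / 2) for x0, y0, x1, y1 in boxes]
    return sorted(centers)
```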
  • Publication number: 20190294729
    Abstract: In one embodiment, a method receives a set of frames from a video at a first classifier. The first classifier classifies the set of frames with classification scores that indicate a confidence that a frame contains end credit content, using a first model that classifies content from the set of frames. A second classifier then refines the classification scores using scores from neighboring frames in the set, using a second model that classifies the classification scores from the first classifier. Based on the refined classification scores, a boundary point is selected between a frame considered not to include end credit content and a frame considered to include end credit content.
    Type: Application
    Filed: March 20, 2018
    Publication date: September 26, 2019
    Inventors: Yunsheng Jiang, Xiaohui Xie, Liangliang Li
  • Publication number: 20190297248
    Abstract: In one embodiment, a system detects objects in an image and generates attention regions that are positioned in the image based on first positions of the objects in the image. Focus points for the objects are generated for the attention regions at one or more second positions. Focus boxes are generated using the second positions of the focus points. Then, the system generates information for a motion effect using content of the image based on a number of the focus boxes and third positions of the focus boxes.
    Type: Application
    Filed: March 23, 2018
    Publication date: September 26, 2019
    Inventors: Yunsheng Jiang, Xiaohui Xie, Ran Cao