Patents by Inventor Yunsheng Jiang
Yunsheng Jiang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240194134
Abstract: A display substrate, a preparation method thereof, and a display apparatus are provided in the present disclosure. The display substrate includes a drive circuit layer disposed on the substrate. The drive circuit layer includes a plurality of circuit units, each including a pixel drive circuit, a data signal wire providing a data signal to the pixel drive circuit, and an initial signal wire providing an initial signal. The plurality of circuit units includes at least one normal circuit unit and at least one tracing circuit unit, and the normal circuit unit is provided with a first compensation line extending along a first direction and a second compensation line extending along a second direction.
Type: Application
Filed: July 30, 2021
Publication date: June 13, 2024
Inventors: Shilong WANG, Haigang QING, Yunsheng XIAO, Ziyang YU, Zhiliang JIANG, Ming HU
-
Publication number: 20240092814
Abstract: Provided are a water-soluble Pd(II) complex, a synthesis method thereof, and its use as a catalytic precursor. The complex is named ammonium dinitrooxalato palladium(II) and has the molecular formula (NH4)2[Pd(NO2)2(C2O4)]·nH2O, where n is the number of waters of crystallization. The Pd(II) complex is synthesized using PdCl2 or [Pd(NH3)2Cl2] as a starting material, which is first converted into [Pd(NH3)4]Cl2 in ammonium hydroxide; [Pd(NH3)4]Cl2 then reacts with excess NaNO2 to produce trans-[Pd(NH3)2(NO2)2] via a ligand substitution mechanism; finally, dissolving trans-[Pd(NH3)2(NO2)2] in an aqueous solution of oxalic acid yields the target product (NH4)2[Pd(NO2)2(C2O4)]·2H2O. The complex contains no chlorine or other elements harmful to a catalyst, is readily soluble in water, and has a low thermal decomposition temperature.
Type: Application
Filed: November 13, 2023
Publication date: March 21, 2024
Inventors: Weiping Liu, Juan Yu, Li Chen, Anli Gao, Yunsheng Dai, Feng Liu, Jing Jiang, Jiyang Xie, Hao Zhou, Qiaowen Chang, Caixian Yan
-
Patent number: 11564013
Abstract: In some embodiments, a method receives a history of videos that were viewed on a video delivery system as a first sentence in sequential order and a target video as a second sentence, as input to a prediction network. The prediction network analyzes first representations for the history of videos and a second representation of the target video, and generates a session representation based on bidirectionally analyzing the sequence of the first representations and the second representation. The method uses the session representation to determine whether to recommend the target video.
Type: Grant
Filed: October 19, 2020
Date of Patent: January 24, 2023
Assignee: HULU, LLC
Inventors: Peng Wang, Yunsheng Jiang, Diman Shen, Yaqi Wang, Xiaohui Xie, Brian Thomas Morrison
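The bidirectional session encoding described in the abstract above can be sketched in miniature. This is a toy illustration, not the patented prediction network: the `embed` function, vector size, running-average scan, and scoring threshold are all illustrative assumptions.

```python
# Toy sketch: a viewing history as a "sentence" of video tokens plus a
# target video, scanned in both directions to form a session
# representation. Embeddings and scores are illustrative stand-ins.

def embed(video_id, dim=4):
    # Deterministic toy embedding derived from the characters of the id.
    base = sum(ord(c) for c in video_id)
    return [((base * (i + 1)) % 97) / 97.0 for i in range(dim)]

def session_representation(history, target):
    # Input "sentence": history videos in order, then the target video.
    tokens = [embed(v) for v in history] + [embed(target)]
    dim = len(tokens[0])
    # Forward scan: running average left-to-right.
    fwd = [0.0] * dim
    for tok in tokens:
        fwd = [(f + x) / 2.0 for f, x in zip(fwd, tok)]
    # Backward scan: running average right-to-left.
    bwd = [0.0] * dim
    for tok in reversed(tokens):
        bwd = [(b + x) / 2.0 for b, x in zip(bwd, tok)]
    # The session representation combines both directions.
    return fwd + bwd

def recommend(history, target, threshold=0.5):
    # Stand-in for a learned scoring head over the session representation.
    rep = session_representation(history, target)
    return sum(rep) / len(rep) >= threshold

rep = session_representation(["v1", "v2", "v3"], "v9")
decision = recommend(["v1", "v2", "v3"], "v9")
```

Scanning the same token sequence in both directions lets the final representation reflect context before and after each history item, which is the motivation for the bidirectional analysis the abstract describes.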
-
Patent number: 11481438
Abstract: In some embodiments, a method selects a sequence of programs watched by a user account. The method calculates a first set of weights by comparing content of the sequence of programs to content of a target program, and calculates a second set of weights based on the order of the sequence of programs and the first set of weights. The first set of weights and the second set of weights are applied to the sequence of programs to generate a prediction of the similarity of the sequence of programs to the target program. The method then outputs the prediction of the similarity for use in determining a recommendation for the user account.
Type: Grant
Filed: October 6, 2020
Date of Patent: October 25, 2022
Assignee: HULU, LLC
Inventors: Kaiwen Deng, Yunsheng Jiang, Xiaohui Xie, Brian Morrison, Jiarui Yang, Christopher Russell Kehler
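The two-stage weighting described in this abstract can be sketched as follows. This is a toy illustration, not the patented method: the tag-overlap similarity, the exponential recency decay, and the final weighted combination are all illustrative assumptions.

```python
# Toy sketch: content weights compare each watched program to the
# target; order weights combine recency with those content weights;
# both are applied to predict a similarity score.

def content_weights(history, target):
    # First set of weights: Jaccard overlap of program tags vs. target tags.
    weights = []
    for tags in history:
        overlap = len(tags & target) / max(len(tags | target), 1)
        weights.append(overlap)
    return weights

def order_weights(content_w, decay=0.8):
    # Second set of weights: later programs count more, scaled by the
    # first set of weights.
    n = len(content_w)
    return [content_w[i] * (decay ** (n - 1 - i)) for i in range(n)]

def similarity(history, target):
    cw = content_weights(history, target)
    ow = order_weights(cw)
    total = sum(ow)
    if total == 0:
        return 0.0
    # Apply both weight sets to produce a single similarity prediction.
    return sum(c * o for c, o in zip(cw, ow)) / total

history = [{"drama", "crime"}, {"comedy"}, {"drama", "legal"}]
target = {"drama", "legal"}
score = similarity(history, target)
```

Making the second set of weights depend on the first, as the abstract states, means recency only amplifies programs that are already content-relevant to the target.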
-
Patent number: 11416546
Abstract: In one embodiment, a method receives a set of frames from a video at a first classifier. The first classifier assigns the set of frames classification scores that indicate a confidence that a frame contains end credit content, using a first model that classifies content from the set of frames. A second classifier then refines the classification scores using neighboring frames in the set of frames, using a second model that classifies the classification scores from the first classifier. A boundary point is selected, based on the refined classification scores, between a frame considered not to include end credit content and a frame considered to include end credit content.
Type: Grant
Filed: March 20, 2018
Date of Patent: August 16, 2022
Assignee: HULU, LLC
Inventors: Yunsheng Jiang, Xiaohui Xie, Liangliang Li
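The two-stage refinement and boundary selection in this abstract can be sketched with a simple neighborhood average standing in for the second classifier. The window size, threshold, and example scores below are illustrative assumptions, not values from the patent.

```python
# Toy sketch: per-frame "contains end credits" scores (here supplied
# directly instead of by a trained first classifier) are refined from
# neighboring frames, then a boundary point is located.

def refine(scores, window=1):
    # Second stage: replace each score with the mean over its neighbors.
    refined = []
    for i in range(len(scores)):
        lo, hi = max(0, i - window), min(len(scores), i + window + 1)
        refined.append(sum(scores[lo:hi]) / (hi - lo))
    return refined

def boundary_point(scores, threshold=0.5):
    # First index where refined scores cross from non-credit to credit.
    refined = refine(scores)
    for i in range(1, len(refined)):
        if refined[i - 1] < threshold <= refined[i]:
            return i
    return None

# Raw confidences with one noisy spike at index 1; refinement smooths
# the spike so the boundary lands at the sustained rise instead.
raw = [0.1, 0.9, 0.1, 0.2, 0.8, 0.9, 0.95]
b = boundary_point(raw)
```

The point of refining from neighbors is visible in the example: a single spuriously high frame does not trigger the boundary, because its refined score is pulled down by adjacent non-credit frames.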
-
Publication number: 20220124409
Abstract: In some embodiments, a method receives a history of videos that were viewed on a video delivery system as a first sentence in sequential order and a target video as a second sentence, as input to a prediction network. The prediction network analyzes first representations for the history of videos and a second representation of the target video, and generates a session representation based on bidirectionally analyzing the sequence of the first representations and the second representation. The method uses the session representation to determine whether to recommend the target video.
Type: Application
Filed: October 19, 2020
Publication date: April 21, 2022
Inventors: Peng WANG, Yunsheng JIANG, Diman SHEN, Yaqi WANG, Xiaohui XIE, Brian Thomas MORRISON
-
Publication number: 20210374178
Abstract: In some embodiments, a method selects a sequence of programs watched by a user account. The method calculates a first set of weights by comparing content of the sequence of programs to content of a target program, and calculates a second set of weights based on the order of the sequence of programs and the first set of weights. The first set of weights and the second set of weights are applied to the sequence of programs to generate a prediction of the similarity of the sequence of programs to the target program. The method then outputs the prediction of the similarity for use in determining a recommendation for the user account.
Type: Application
Filed: October 6, 2020
Publication date: December 2, 2021
Inventors: Kaiwen Deng, Yunsheng Jiang, Xiaohui Xie, Brian Morrison, Jiarui Yang, Christopher Russell Kehler
-
Patent number: 11113537
Abstract: In some embodiments, a first detector generates a first output based on a first probability that an image was inserted in a video. The first detector is trained with a set of known images to detect those known images. A second detector generates a second output based on a second probability that an image was inserted in the video; the second detector detects a set of unknown images without training. The method analyzes the first output and the second output, each based on the probability of the image existing in the video, to generate a combined score from the first output and the second output. An indication of whether the image is detected in the video is output based on the combined score.
Type: Grant
Filed: August 2, 2019
Date of Patent: September 7, 2021
Assignee: HULU, LLC
Inventors: Kaiwen Deng, Yunsheng Jiang, Xiaohui Xie
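The fusion step in this abstract, combining a trained detector with a training-free one, can be sketched as a weighted blend. The detectors themselves are stubbed out here; the fusion weight and decision threshold are illustrative assumptions, not values from the patent.

```python
# Toy sketch: blend the trained detector's probability (known images)
# with the training-free detector's probability (unknown images) into
# one combined score, then threshold it into a detection decision.

def combined_score(p_known, p_unknown, weight=0.6):
    # Weighted blend of the two detector outputs.
    return weight * p_known + (1 - weight) * p_unknown

def image_detected(p_known, p_unknown, threshold=0.5):
    # Final indication: combined score against a decision threshold.
    return combined_score(p_known, p_unknown) >= threshold

# The trained detector is confident, the training-free one is not;
# the blend still clears the threshold.
score = combined_score(0.9, 0.2)
```

A blend like this lets a strong signal from either detector carry the decision, which is the practical benefit of combining a detector specialized for known images with one that generalizes to unknown ones.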
-
Publication number: 20210034878
Abstract: In some embodiments, a first detector generates a first output based on a first probability that an image was inserted in a video. The first detector is trained with a set of known images to detect those known images. A second detector generates a second output based on a second probability that an image was inserted in the video; the second detector detects a set of unknown images without training. The method analyzes the first output and the second output, each based on the probability of the image existing in the video, to generate a combined score from the first output and the second output. An indication of whether the image is detected in the video is output based on the combined score.
Type: Application
Filed: August 2, 2019
Publication date: February 4, 2021
Inventors: Kaiwen DENG, Yunsheng JIANG, Xiaohui XIE
-
Patent number: 10867204
Abstract: In some embodiments, a method detects a first set of frames in a video that include lines of text, with detection performed at the frame level on each individual frame. A first representation and a second representation are generated from the first set of frames. The method filters the first representation based on the number of lines of text within a space in the space dimension to select a second set of frames, and filters the second representation based on the number of frames within time intervals in the time dimension to select a third set of frames. Frames in both the second set of frames and the third set of frames are analyzed to determine whether their lines of text are burned-in subtitles.
Type: Grant
Filed: April 30, 2019
Date of Patent: December 15, 2020
Assignee: HULU, LLC
Inventors: Yaqi Wang, Xiaohui Xie, Yunsheng Jiang
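The space-dimension and time-dimension filters in this abstract can be sketched as two independent passes whose intersection yields subtitle candidates. The line-count limit, the 1-second interval, and the density threshold are illustrative assumptions, not values from the patent.

```python
# Toy sketch: each detected text frame carries its line count and
# timestamp. A spatial filter keeps subtitle-like line counts; a
# temporal filter keeps frames from densely populated time intervals;
# frames passing both are burned-in subtitle candidates.

def spatial_filter(frames, max_lines=2):
    # Burned-in subtitles are usually one or two lines of text.
    return [f for f in frames if 1 <= f["lines"] <= max_lines]

def temporal_filter(frames, interval=1.0, min_per_interval=2):
    # Keep frames whose time interval contains enough text frames.
    buckets = {}
    for f in frames:
        buckets.setdefault(int(f["time"] // interval), []).append(f)
    kept = []
    for group in buckets.values():
        if len(group) >= min_per_interval:
            kept.extend(group)
    return kept

frames = [
    {"time": 0.1, "lines": 2}, {"time": 0.5, "lines": 1},
    {"time": 3.2, "lines": 5},  # dense on-screen text, not a subtitle
    {"time": 7.0, "lines": 1},  # isolated single detection
]
spatial = spatial_filter(frames)
temporal = temporal_filter(frames)
candidates = [f for f in spatial if f in temporal]
```

In the example, the five-line frame fails the spatial filter and the isolated frame fails the temporal filter, so only the clustered one- and two-line frames survive both passes.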
-
Publication number: 20200349381
Abstract: In some embodiments, a method detects a first set of frames in a video that include lines of text, with detection performed at the frame level on each individual frame. A first representation and a second representation are generated from the first set of frames. The method filters the first representation based on the number of lines of text within a space in the space dimension to select a second set of frames, and filters the second representation based on the number of frames within time intervals in the time dimension to select a third set of frames. Frames in both the second set of frames and the third set of frames are analyzed to determine whether their lines of text are burned-in subtitles.
Type: Application
Filed: April 30, 2019
Publication date: November 5, 2020
Inventors: Yaqi Wang, Xiaohui Xie, Yunsheng Jiang
-
Patent number: 10721388
Abstract: In one embodiment, a system detects objects in an image and generates attention regions positioned in the image based on first positions of the objects. Focus points for the objects are generated for the attention regions at one or more second positions, and focus boxes are generated using the second positions of the focus points. The system then generates information for a motion effect using content of the image, based on the number of focus boxes and the third positions of the focus boxes.
Type: Grant
Filed: March 23, 2018
Date of Patent: July 21, 2020
Assignee: Hulu, LLC
Inventors: Yunsheng Jiang, Xiaohui Xie, Ran Cao
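The object-to-focus-box pipeline in this abstract can be sketched end to end. This is a toy illustration, not the patented system: the fixed focus-box size and the pan/zoom rule chosen from the number of focus boxes are illustrative assumptions.

```python
# Toy sketch: detected object boxes yield focus points (region centers),
# focus boxes are placed around those points, and a simple motion
# effect is chosen from how many focus boxes exist and where they sit.

def focus_point(box):
    # (x, y, w, h) object box -> center of its attention region.
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def focus_box(point, size=100):
    # Fixed-size focus box centered on the focus point.
    cx, cy = point
    return (cx - size / 2.0, cy - size / 2.0, size, size)

def motion_effect(object_boxes):
    points = [focus_point(b) for b in object_boxes]
    boxes = [focus_box(p) for p in points]
    if len(boxes) >= 2:
        # Pan from the leftmost focus box to the rightmost one.
        path = sorted(boxes, key=lambda b: b[0])
        return {"effect": "pan", "from": path[0], "to": path[-1]}
    if len(boxes) == 1:
        return {"effect": "zoom", "target": boxes[0]}
    return {"effect": "none"}

effect = motion_effect([(10, 20, 40, 40), (200, 50, 60, 60)])
```

Branching on the number of focus boxes mirrors the abstract's statement that the motion-effect information depends on both the count and the positions of the focus boxes.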
-
Publication number: 20190294729
Abstract: In one embodiment, a method receives a set of frames from a video at a first classifier. The first classifier assigns the set of frames classification scores that indicate a confidence that a frame contains end credit content, using a first model that classifies content from the set of frames. A second classifier then refines the classification scores using neighboring frames in the set of frames, using a second model that classifies the classification scores from the first classifier. A boundary point is selected, based on the refined classification scores, between a frame considered not to include end credit content and a frame considered to include end credit content.
Type: Application
Filed: March 20, 2018
Publication date: September 26, 2019
Inventors: Yunsheng Jiang, Xiaohui Xie, Liangliang Li
-
Publication number: 20190297248
Abstract: In one embodiment, a system detects objects in an image and generates attention regions positioned in the image based on first positions of the objects. Focus points for the objects are generated for the attention regions at one or more second positions, and focus boxes are generated using the second positions of the focus points. The system then generates information for a motion effect using content of the image, based on the number of focus boxes and the third positions of the focus boxes.
Type: Application
Filed: March 23, 2018
Publication date: September 26, 2019
Inventors: Yunsheng Jiang, Xiaohui Xie, Ran Cao