Patents by Inventor Weilong Yang
Weilong Yang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11958788
Abstract: The present invention discloses a method of preparing an alkali activation material by red mud-based wet grinding and carbon sequestration, and an application thereof. The preparation method includes: (1) adding water, red mud, a crystalline control agent, and a grinding aid into a wet grinding carbon sequestration apparatus to perform wet grinding, while introducing CO2 until the slurry pH reaches 7 to 7.5, then removing the wet grinding balls with a sieve to obtain a slurry A; (2) adding carbide slag, water, and a water reducer to a wet planetary ball grinder tank for wet grinding, then removing the wet grinding balls with a sieve to obtain a slurry B; (3) mixing 50 to 80 parts of the slurry A with 20 to 50 parts of the slurry B to obtain an alkali activation material.
Type: Grant
Filed: May 23, 2023
Date of Patent: April 16, 2024
Assignee: Hubei University of Technology
Inventors: Xingyang He, Weilong Li, Ying Su, Zhengqi Zheng, Jin Yang, Yingbin Wang, Hongbo Tan, Chenghao Li
-
Patent number: 11900517
Abstract: A method is described for generating an output image from an input image and an input text instruction that specifies the location and the modification of an edit applied to the input image, using a neural network. The neural network includes an image encoder, an image decoder, and an instruction attention network. The method includes receiving the input image and the input text instruction; extracting, from the input image, an input image feature that represents features of the input image using the image encoder; generating a spatial feature and a modification feature from the input text instruction using the instruction attention network; generating an edited image feature from the input image feature, the spatial feature, and the modification feature; and generating the output image from the edited image feature using the image decoder.
Type: Grant
Filed: December 20, 2022
Date of Patent: February 13, 2024
Assignee: Google LLC
Inventors: Tianhao Zhang, Weilong Yang, Honglak Lee, Hung-Yu Tseng, Irfan Aziz Essa, Lu Jiang
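The abstract's data flow (image encoder → instruction attention → feature combination → image decoder) can be sketched as a composition of functions. Everything below — the identity encoder/decoder, the keyword-based attention, and the gating arithmetic — is an illustrative assumption, not the patented models:

```python
from typing import Callable, List, Tuple

Feature = List[float]

def edit_image(
    image: List[float],
    instruction: str,
    encoder: Callable[[List[float]], Feature],
    instruction_attention: Callable[[str], Tuple[Feature, Feature]],
    decoder: Callable[[Feature], List[float]],
) -> List[float]:
    """Data flow from the abstract: encode the image, split the text
    instruction into a spatial (where) and a modification (what) feature,
    combine them with the image feature, then decode the edited feature."""
    image_feature = encoder(image)
    spatial, modification = instruction_attention(instruction)
    # Toy combination: the spatial feature gates where the modification applies.
    edited = [
        img * (1.0 - s) + m * s
        for img, s, m in zip(image_feature, spatial, modification)
    ]
    return decoder(edited)

# Toy stand-ins: identity encoder/decoder; attention that targets the
# first half of the feature vector when the instruction says "left".
encoder = lambda img: list(img)
decoder = lambda feat: list(feat)

def toy_attention(text: str) -> Tuple[Feature, Feature]:
    mask = [1.0, 1.0, 0.0, 0.0] if "left" in text else [0.0, 0.0, 1.0, 1.0]
    return mask, [9.0, 9.0, 9.0, 9.0]

out = edit_image([1.0, 2.0, 3.0, 4.0], "brighten the left side",
                 encoder, toy_attention, decoder)
# Only the masked (left) positions were replaced by the modification value.
```

In a real system each stand-in would be a learned network; the sketch only shows how the three generated features are wired together.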
-
Publication number: 20230419997
Abstract: The present disclosure provides systems, methods, and computer program products for performing automated non-linear editing style transfer. A computer-implemented method may include determining one or more shot boundaries in a video; analyzing identified content in each of one or more shots in the video based on performing object detection; determining an editing style for each of the one or more shots in the video based at least in part on measuring motion across frames within the respective shots; determining a content segment to adjust from a set of target content based on analyzing the set of target content in view of the identified content and the determined editing style of a shot from the video; and automatically adjusting the content segment from the set of target content based at least in part on modifying the content segment with the determined editing style of the shot from the video.
Type: Application
Filed: November 13, 2020
Publication date: December 28, 2023
Inventors: Nathan Frey, Weilong Yang
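Shot-boundary determination, the pipeline's first step, is commonly approximated by thresholding the difference between consecutive frames. A minimal sketch under that assumption (the abstract does not specify the detector, and the L1 difference below is a toy choice):

```python
from typing import Callable, List, Sequence

def shot_boundaries(
    frames: Sequence[Sequence[float]],
    diff: Callable[[Sequence[float], Sequence[float]], float],
    threshold: float,
) -> List[int]:
    """Return the indices where a new shot begins.

    Frame 0 always starts a shot; any frame whose difference from its
    predecessor exceeds the threshold starts a new one."""
    boundaries = [0]
    for i in range(1, len(frames)):
        if diff(frames[i - 1], frames[i]) > threshold:
            boundaries.append(i)
    return boundaries

# Toy frames represented as tiny "histograms"; L1 distance as the difference.
l1 = lambda a, b: sum(abs(x - y) for x, y in zip(a, b))
frames = [[1.0, 0.0], [1.0, 0.1], [0.0, 1.0], [0.0, 0.9]]
cuts = shot_boundaries(frames, l1, threshold=0.5)
# The large jump between frames 1 and 2 is detected as a cut.
```

Per-shot object detection and motion measurement would then run on each `frames[cuts[i]:cuts[i+1]]` slice.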
-
Publication number: 20230177754
Abstract: A method is described for generating an output image from an input image and an input text instruction that specifies the location and the modification of an edit applied to the input image, using a neural network. The neural network includes an image encoder, an image decoder, and an instruction attention network. The method includes receiving the input image and the input text instruction; extracting, from the input image, an input image feature that represents features of the input image using the image encoder; generating a spatial feature and a modification feature from the input text instruction using the instruction attention network; generating an edited image feature from the input image feature, the spatial feature, and the modification feature; and generating the output image from the edited image feature using the image decoder.
Type: Application
Filed: December 20, 2022
Publication date: June 8, 2023
Inventors: Tianhao Zhang, Weilong Yang, Honglak Lee, Hung-Yu Tseng, Irfan Aziz Essa, Lu Jiang
-
Patent number: 11562518
Abstract: A method is described for generating an output image from an input image and an input text instruction that specifies the location and the modification of an edit applied to the input image, using a neural network. The neural network includes an image encoder, an image decoder, and an instruction attention network. The method includes receiving the input image and the input text instruction; extracting, from the input image, an input image feature that represents features of the input image using the image encoder; generating a spatial feature and a modification feature from the input text instruction using the instruction attention network; generating an edited image feature from the input image feature, the spatial feature, and the modification feature; and generating the output image from the edited image feature using the image decoder.
Type: Grant
Filed: June 7, 2021
Date of Patent: January 24, 2023
Assignee: Google LLC
Inventors: Tianhao Zhang, Weilong Yang, Honglak Lee, Hung-Yu Tseng, Irfan Aziz Essa, Lu Jiang
-
Publication number: 20220207873
Abstract: A system and methodology provide for annotating videos with entities and associated probabilities of existence of the entities within video frames. A computer-implemented method identifies an entity from a plurality of entities identifying characteristics of video items. The computer-implemented method selects a set of features correlated with the entity based on a value of a feature of a plurality of features, determines a classifier for the entity using the set of features, and determines an aggregation calibration function for the entity based on the set of features. The computer-implemented method selects a video frame from a video item, where the video frame has associated features, and determines a probability of existence of the entity based on the associated features using the classifier and the aggregation calibration function.
Type: Application
Filed: December 13, 2021
Publication date: June 30, 2022
Inventors: Balakrishnan Varadarajan, George Dan Toderici, Apostol Natsev, Nitin Khandelwal, Sudheendra Vijayanarasimhan, Weilong Yang, Sanketh Shetty
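The last step of the abstract — applying the per-entity classifier and then the aggregation calibration function to a frame's features — can be sketched as two composed functions. The logistic classifier and the affine calibration below are illustrative assumptions, not the patented functions:

```python
import math
from typing import Callable, List

def entity_probability(
    frame_features: List[float],
    classifier: Callable[[List[float]], float],
    calibrate: Callable[[float], float],
) -> float:
    """The classifier produces a raw score for the entity from the frame's
    features; the calibration function maps raw scores to probabilities."""
    return calibrate(classifier(frame_features))

# Toy classifier: logistic regression over the entity's selected feature set.
weights, bias = [0.8, -0.5, 1.2], -0.3
classifier = lambda x: 1.0 / (
    1.0 + math.exp(-(sum(w * v for w, v in zip(weights, x)) + bias))
)
# Toy calibration: affine squash keeping outputs inside [0.05, 0.95].
calibrate = lambda p: 0.05 + 0.9 * p

p = entity_probability([1.0, 0.0, 1.0], classifier, calibrate)
# p is a valid probability for annotating this frame with the entity.
```

In practice both the classifier and the calibration function would be fit per entity from labeled video data, as the abstract describes.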
-
Patent number: 11200423
Abstract: A system and methodology provide for annotating videos with entities and associated probabilities of existence of the entities within video frames. A computer-implemented method identifies an entity from a plurality of entities identifying characteristics of video items. The computer-implemented method selects a set of features correlated with the entity based on a value of a feature of a plurality of features, determines a classifier for the entity using the set of features, and determines an aggregation calibration function for the entity based on the set of features. The computer-implemented method selects a video frame from a video item, where the video frame has associated features, and determines a probability of existence of the entity based on the associated features using the classifier and the aggregation calibration function.
Type: Grant
Filed: November 18, 2019
Date of Patent: December 14, 2021
Assignee: Google LLC
Inventors: Balakrishnan Varadarajan, George Dan Toderici, Apostol Natsev, Nitin Khandelwal, Sudheendra Vijayanarasimhan, Weilong Yang, Sanketh Shetty
-
Publication number: 20210383584
Abstract: A method is described for generating an output image from an input image and an input text instruction that specifies the location and the modification of an edit applied to the input image, using a neural network. The neural network includes an image encoder, an image decoder, and an instruction attention network. The method includes receiving the input image and the input text instruction; extracting, from the input image, an input image feature that represents features of the input image using the image encoder; generating a spatial feature and a modification feature from the input text instruction using the instruction attention network; generating an edited image feature from the input image feature, the spatial feature, and the modification feature; and generating the output image from the edited image feature using the image decoder.
Type: Application
Filed: June 7, 2021
Publication date: December 9, 2021
Inventors: Tianhao Zhang, Weilong Yang, Honglak Lee, Hung-Yu Tseng, Irfan Aziz Essa, Lu Jiang
-
Publication number: 20210312186
Abstract: Systems and methods of automatically extracting summaries of video content are described herein. A data processing system can access, from a video database, a first video content element including a first plurality of frames. The data processing system can select an intervallic subset of the first plurality of frames of the first video content element. The data processing system can calculate, for each of a plurality of further subsets comprising a predetermined number of frames from the intervallic subset, a score for the further subset. The data processing system can identify, from the plurality of further subsets, the further subset having the highest score. The data processing system can select a portion of the first video content element comprising the frames of the further subset having the highest score. The data processing system can generate a second video content element comprising the selected portion of the first video content element.
Type: Application
Filed: June 18, 2021
Publication date: October 7, 2021
Applicant: Google LLC
Inventors: Yi Shen, Xiangrong Chen, Min-hsuan Tsai, Yun Shi, Tianpeng Jin, Zheng Sun, Weilong Yang, Jingbin Wang
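The described selection can be sketched end to end: take an intervallic subset of the frames, slide a fixed-size window over it, score each window, and keep the best-scoring one. The scoring function in the example (summing toy per-frame values) is an illustrative assumption; the abstract does not fix it:

```python
from typing import Callable, List, Sequence

def extract_summary(
    frames: Sequence[int],
    interval: int,
    window: int,
    score: Callable[[Sequence[int]], float],
) -> List[int]:
    """Sample every `interval`-th frame, score each contiguous window of
    `window` sampled frames, and return the highest-scoring window."""
    sampled = list(frames[::interval])
    best_start, best_score = 0, float("-inf")
    for start in range(len(sampled) - window + 1):
        s = score(sampled[start:start + window])
        if s > best_score:
            best_start, best_score = start, s
    return sampled[best_start:best_start + window]

# Toy example: each frame is represented by its own quality value,
# and a window is scored by the sum of its frames' values.
frames = [1, 9, 2, 8, 3, 7, 4, 6, 5, 0]
summary = extract_summary(frames, interval=2, window=3, score=sum)
```

The returned frames would correspond to the portion of the first video content element used to generate the second (summary) element.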
-
Patent number: 11042754
Abstract: Systems and methods of automatically extracting summaries of video content are described herein. A data processing system can access, from a video database, a first video content element including a first plurality of frames. The data processing system can select an intervallic subset of the first plurality of frames of the first video content element. The data processing system can calculate, for each of a plurality of further subsets comprising a predetermined number of frames from the intervallic subset, a score for the further subset. The data processing system can identify, from the plurality of further subsets, the further subset having the highest score. The data processing system can select a portion of the first video content element comprising the frames of the further subset having the highest score. The data processing system can generate a second video content element comprising the selected portion of the first video content element.
Type: Grant
Filed: August 3, 2017
Date of Patent: June 22, 2021
Assignee: Google LLC
Inventors: Yi Shen, Xiangrong Chen, Min-hsuan Tsai, Yun Shi, Tianpeng Jin, Zheng Sun, Weilong Yang, Jingbin Wang
-
Patent number: 11042553
Abstract: Facilitation of content entity annotation while maintaining joint quality, coverage, and/or completeness performance conditions is provided. In one example, a non-transitory computer-readable medium comprises computer-readable instructions that, in response to execution, cause a computing system to perform operations. The operations include aggregating information indicative of initial entities for content and initial scores associated with the initial entities received from one or more content annotation sources, and mapping the initial scores to respective values to generate calibrated scores. The operations include applying weights to the calibrated scores to generate weighted scores and combining the weighted scores using a linear aggregation model to generate a final score. The operations include determining whether to annotate the content with at least one of the initial entities based on a comparison of the final score and a defined threshold value.
Type: Grant
Filed: November 21, 2017
Date of Patent: June 22, 2021
Assignee: Google LLC
Inventors: Balakrishnan Varadarajan, George Dan Toderici, Apostol Natsev, Weilong Yang, John Burge, Sanketh Shetty, Omid Madani
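The aggregation step follows directly from the abstract: calibrate each source's score, combine the calibrated scores with a weighted linear aggregation, and compare the final score to a threshold. The specific calibration maps, weights, and threshold below are illustrative assumptions:

```python
from typing import Callable, List, Tuple

def annotate_decision(
    initial_scores: List[float],
    calibrators: List[Callable[[float], float]],
    weights: List[float],
    threshold: float,
) -> Tuple[bool, float]:
    """Map each annotation source's initial score through its calibration
    function, combine the calibrated scores linearly with per-source
    weights, and annotate iff the final score meets the threshold."""
    calibrated = [cal(s) for s, cal in zip(initial_scores, calibrators)]
    final = sum(w * c for w, c in zip(weights, calibrated))
    return final >= threshold, final

# Two toy annotation sources with different calibration maps.
calibrators = [lambda s: s ** 2, lambda s: 0.5 + 0.5 * s]
ok, final = annotate_decision(
    [0.9, 0.6], calibrators, weights=[0.4, 0.6], threshold=0.7
)
# ok tells us whether the entity should be attached to the content.
```

In a deployed system the weights would be learned so that the joint quality/coverage/completeness conditions hold across sources.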
-
Publication number: 20210166035
Abstract: A computer-implemented method for selecting representative frames for videos is provided. The method includes receiving a video and identifying a set of features for each of the frames of the video. The features include frame-based features and semantic features; the semantic features identify likelihoods of semantic concepts being present as content in the frames of the video. A set of video segments for the video is subsequently generated. Each video segment includes a chronological subset of frames from the video, and each frame is associated with at least one of the semantic features. The method generates a score for each frame of the subset of frames for each video segment based at least on the semantic features, and selects a representative frame for each video segment based on the scores of the frames in the video segment. The representative frame represents and summarizes the video segment.
Type: Application
Filed: December 14, 2020
Publication date: June 3, 2021
Inventors: Sanketh Shetty, Tomas Izo, Min-Hsuan Tsai, Sudheendra Vijayanarasimhan, Apostol Natsev, Sami Abu-El-Haija, George Dan Toderici, Susana Ricco, Balakrishnan Varadarajan, Nicola Muscettola, WeiHsin Gu, Weilong Yang, Nitin Khandelwal, Phuong Le
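Once frames within each segment are scored, selecting representatives reduces to a per-segment argmax. A minimal sketch, with toy `(frame_id, score)` pairs standing in for real frames and their semantic-feature-based scores (both are illustrative assumptions):

```python
from typing import Callable, List, Sequence, Tuple

Frame = Tuple[int, float]  # (frame_id, semantic score) — toy representation

def representative_frames(
    segments: Sequence[Sequence[Frame]],
    score: Callable[[Frame], float],
) -> List[Frame]:
    """Return one representative per segment: the frame whose score
    (here, derived from semantic features) is highest in that segment."""
    return [max(segment, key=score) for segment in segments]

# Two chronological segments; the second element of each pair is the score.
segments = [
    [(0, 0.2), (1, 0.9), (2, 0.4)],
    [(3, 0.1), (4, 0.3)],
]
reps = representative_frames(segments, score=lambda f: f[1])
# One frame per segment summarizes that segment.
```

The interesting work in the patent lies in producing the segments and the semantic scores; the selection itself is this simple maximization.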
-
Publication number: 20210117691
Abstract: Systems and methods of automatically extracting summaries of video content are described herein. A data processing system can access, from a video database, a first video content element including a first plurality of frames. The data processing system can select an intervallic subset of the first plurality of frames of the first video content element. The data processing system can calculate, for each of a plurality of further subsets comprising a predetermined number of frames from the intervallic subset, a score for the further subset. The data processing system can identify, from the plurality of further subsets, the further subset having the highest score. The data processing system can select a portion of the first video content element comprising the frames of the further subset having the highest score. The data processing system can generate a second video content element comprising the selected portion of the first video content element.
Type: Application
Filed: August 3, 2017
Publication date: April 22, 2021
Applicant: Google LLC
Inventors: Yi Shen, Xiangrong Chen, Min-hsuan Tsai, Yun Shi, Tianpeng Jin, Zheng Sun, Weilong Yang, Jingbin Wang
-
Patent number: 10867183
Abstract: A computer-implemented method for selecting representative frames for videos is provided. The method includes receiving a video and identifying a set of features for each of the frames of the video. The features include frame-based features and semantic features; the semantic features identify likelihoods of semantic concepts being present as content in the frames of the video. A set of video segments for the video is subsequently generated. Each video segment includes a chronological subset of frames from the video, and each frame is associated with at least one of the semantic features. The method generates a score for each frame of the subset of frames for each video segment based at least on the semantic features, and selects a representative frame for each video segment based on the scores of the frames in the video segment. The representative frame represents and summarizes the video segment.
Type: Grant
Filed: April 23, 2018
Date of Patent: December 15, 2020
Assignee: Google LLC
Inventors: Sanketh Shetty, Tomas Izo, Min-Hsuan Tsai, Sudheendra Vijayanarasimhan, Apostol Natsev, Sami Abu-El-Haija, George Dan Toderici, Susanna Ricco, Balakrishnan Varadarajan, Nicola Muscettola, WeiHsin Gu, Weilong Yang, Nitin Khandelwal, Phuong Le
-
Patent number: 10777229
Abstract: Frame-level quality scores for the video frames of a video item are determined. A sliding window is applied to the video frames to identify a plurality of groups of video frames for scoring at the group level. A plurality of group-level quality scores for the plurality of groups of video frames of the video item is determined using the frame-level quality scores of the video frames. One of the plurality of groups of video frames of the video item is selected based on the plurality of group-level quality scores. A moving thumbnail is created using the selected group of video frames.
Type: Grant
Filed: July 8, 2019
Date of Patent: September 15, 2020
Assignee: Google LLC
Inventors: Weilong Yang, Min-hsuan Tsai, Zheng Sun, Pei Cao, Tomas Izo
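The group-level scoring can be sketched as a sliding window over per-frame quality scores, with each group scored by the mean of its frames' scores. The abstract does not fix how frame-level scores aggregate into a group-level score, so the mean is an assumption:

```python
from typing import List, Sequence

def best_group(frame_scores: Sequence[float], window: int) -> List[int]:
    """Slide a fixed-size window over per-frame quality scores, score each
    group by its mean, and return the frame indices of the best group."""
    best_start, best = 0, float("-inf")
    for start in range(len(frame_scores) - window + 1):
        group_score = sum(frame_scores[start:start + window]) / window
        if group_score > best:
            best_start, best = start, group_score
    return list(range(best_start, best_start + window))

# Toy per-frame quality scores; the moving thumbnail would be built
# from the frames at the returned indices.
indices = best_group([0.1, 0.7, 0.8, 0.9, 0.2, 0.3], window=3)
```

Frame-level quality scores themselves would come from a separate model (e.g. sharpness or aesthetics), which is outside this sketch.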
-
Publication number: 20200082173
Abstract: A system and methodology provide for annotating videos with entities and associated probabilities of existence of the entities within video frames. A computer-implemented method identifies an entity from a plurality of entities identifying characteristics of video items. The computer-implemented method selects a set of features correlated with the entity based on a value of a feature of a plurality of features, determines a classifier for the entity using the set of features, and determines an aggregation calibration function for the entity based on the set of features. The computer-implemented method selects a video frame from a video item, where the video frame has associated features, and determines a probability of existence of the entity based on the associated features using the classifier and the aggregation calibration function.
Type: Application
Filed: November 18, 2019
Publication date: March 12, 2020
Inventors: Balakrishnan Varadarajan, George Dan Toderici, Apostol Natsev, Nitin Khandelwal, Sudheendra Vijayanarasimhan, Weilong Yang, Sanketh Shetty
-
Patent number: 10482328
Abstract: A system and methodology provide for annotating videos with entities and associated probabilities of existence of the entities within video frames. A computer-implemented method identifies an entity from a plurality of entities identifying characteristics of video items. The computer-implemented method selects a set of features correlated with the entity based on a value of a feature of a plurality of features, determines a classifier for the entity using the set of features, and determines an aggregation calibration function for the entity based on the set of features. The computer-implemented method selects a video frame from a video item, where the video frame has associated features, and determines a probability of existence of the entity based on the associated features using the classifier and the aggregation calibration function.
Type: Grant
Filed: October 2, 2017
Date of Patent: November 19, 2019
Assignee: Google LLC
Inventors: Balakrishnan Varadarajan, George Dan Toderici, Apostol Natsev, Nitin Khandelwal, Sudheendra Vijayanarasimhan, Weilong Yang, Sanketh Shetty
-
Publication number: 20190333538
Abstract: Frame-level quality scores for the video frames of a video item are determined. A sliding window is applied to the video frames to identify a plurality of groups of video frames for scoring at the group level. A plurality of group-level quality scores for the plurality of groups of video frames of the video item is determined using the frame-level quality scores of the video frames. One of the plurality of groups of video frames of the video item is selected based on the plurality of group-level quality scores. A moving thumbnail is created using the selected group of video frames.
Type: Application
Filed: July 8, 2019
Publication date: October 31, 2019
Inventors: Weilong Yang, Min-hsuan Tsai, Zheng Sun, Pei Cao, Tomas Izo
-
Patent number: 10347294
Abstract: A method of generating a moving thumbnail is disclosed. The method includes sampling video frames of a video item. The method further includes determining frame-level quality scores for the sampled video frames. The method also includes determining multiple group-level quality scores for multiple groups of the sampled video frames using the frame-level quality scores of the sampled video frames. The method further includes selecting one of the groups of the sampled video frames based on the multiple group-level quality scores. The method includes creating a moving thumbnail using a subset of the video frames that have timestamps within the range spanned by the selected group.
Type: Grant
Filed: June 30, 2016
Date of Patent: July 9, 2019
Assignee: Google LLC
Inventors: Weilong Yang, Min-Hsuan Tsai, Zheng Sun, Pei Cao, Tomas Izo
-
Publication number: 20180239964
Abstract: A computer-implemented method for selecting representative frames for videos is provided. The method includes receiving a video and identifying a set of features for each of the frames of the video. The features include frame-based features and semantic features; the semantic features identify likelihoods of semantic concepts being present as content in the frames of the video. A set of video segments for the video is subsequently generated. Each video segment includes a chronological subset of frames from the video, and each frame is associated with at least one of the semantic features. The method generates a score for each frame of the subset of frames for each video segment based at least on the semantic features, and selects a representative frame for each video segment based on the scores of the frames in the video segment. The representative frame represents and summarizes the video segment.
Type: Application
Filed: April 23, 2018
Publication date: August 23, 2018
Inventors: Sanketh Shetty, Tomas Izo, Min-Hsuan Tsai, Sudheendra Vijayanarasimhan, Apostol Natsev, Sami Abu-El-Haija, George Dan Toderici, Susanna Ricco, Balakrishnan Varadarajan, Nicola Muscettola, WeiHsin Gu, Weilong Yang, Nitin Khandelwal, Phuong Le