Patents by Inventor Cailiang Liu
Cailiang Liu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10755104
Abstract: In some embodiments, a method trains a first prediction network to predict similarity between images in videos. The training uses boundaries detected in the videos to train the prediction network to predict images in a same scene to have similar feature descriptors. The first prediction network generates feature descriptors that describe library images from videos in a video library offered to users of a video delivery service. A search image is received and the prediction network predicts one or more library images for one or more videos that are predicted to be similar to the received image. The one or more library images for the one or more videos are provided as a search result.
Type: Grant
Filed: June 18, 2018
Date of Patent: August 25, 2020
Assignee: HULU, LLC
Inventors: Fanding Li, Xiaohui Xie, Yin Zheng, Cailiang Liu, Bo Liu, Hongxiang Chen
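Once a network has produced feature descriptors, the retrieval step the abstract describes reduces to a nearest-neighbor search over those descriptors. The sketch below is illustrative only — the toy descriptors and the cosine-similarity ranking are assumptions, not the network or metric actually claimed in the patent.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature descriptors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search(query_desc, library_descs, top_k=3):
    """Rank library descriptors by similarity to the search image's
    descriptor and return the indices of the top_k matches."""
    ranked = sorted(range(len(library_descs)),
                    key=lambda i: cosine_similarity(query_desc, library_descs[i]),
                    reverse=True)
    return ranked[:top_k]

# Toy 4-dimensional descriptors for five library images.
library = [
    [1.0, 0.0, 0.0, 0.0],
    [0.9, 0.1, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.5, 0.5, 0.0, 0.0],
]
query = [1.0, 0.05, 0.0, 0.0]
```

With these toy values, `search(query, library, top_k=2)` ranks the first two library images highest, since training images from the same scene are pushed toward similar descriptors.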
-
Patent number: 10681428
Abstract: In one embodiment, a method includes sending videos to users of a video delivery service. The videos include shows that have episodes released sequentially. The method records historical records of video views based on the sending of the videos to the users. For a show, a show-specific model is determined to predict future video views by performing: determining historical records of video views for different episodes of the show; training the show-specific model with the historical records, wherein the show-specific model models a decay curve with a regularizing term to regularize the decay speed; using the show-specific model to predict future video views for a future time range for episodes of the show; and outputting the future video views to an ad system configured to sell ads for the show.
Type: Grant
Filed: July 6, 2015
Date of Patent: June 9, 2020
Assignee: HULU, LLC
Inventors: Cailiang Liu, Zhibing Wang, Dong Guo
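A minimal sketch of the kind of model the abstract describes: views of an episode decay over time, and a regularizing term keeps the fitted decay speed near a prior rate. The exponential functional form, the L2 regularizer, the prior rate, and the grid search are all assumptions for illustration, not the patented model.

```python
import math

def fit_decay(views, k_prior=0.5, lam=0.1):
    """Fit views[t] ~ v0 * exp(-k * t) by grid search over the decay
    rate k, with an L2 term pulling k toward a prior decay speed.
    For a fixed k, the best v0 has a least-squares closed form."""
    best = None
    for i in range(1, 201):                     # k in (0, 2]
        k = i / 100
        basis = [math.exp(-k * t) for t in range(len(views))]
        v0 = sum(b * v for b, v in zip(basis, views)) / sum(b * b for b in basis)
        err = sum((v0 * b - v) ** 2 for b, v in zip(basis, views))
        loss = err + lam * (k - k_prior) ** 2   # data fit + regularizer
        if best is None or loss < best[0]:
            best = (loss, v0, k)
    return best[1], best[2]

def predict(v0, k, t):
    """Predicted views at time t under the fitted decay curve."""
    return v0 * math.exp(-k * t)

# Synthetic view counts for one episode, decaying at rate 0.3.
views = [100 * math.exp(-0.3 * t) for t in range(6)]
v0, k = fit_decay(views)
```

Because the data here are noiseless, the fit recovers the generating curve almost exactly; the regularizer matters more when historical records are short or noisy, which is presumably why the patent regularizes the decay speed at all.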
-
Publication number: 20190384987
Abstract: In some embodiments, a method trains a first prediction network to predict similarity between images in videos. The training uses boundaries detected in the videos to train the prediction network to predict images in a same scene to have similar feature descriptors. The first prediction network generates feature descriptors that describe library images from videos in a video library offered to users of a video delivery service. A search image is received and the prediction network predicts one or more library images for one or more videos that are predicted to be similar to the received image. The one or more library images for the one or more videos are provided as a search result.
Type: Application
Filed: June 18, 2018
Publication date: December 19, 2019
Inventors: Fanding Li, Xiaohui Xie, Yin Zheng, Cailiang Liu, Bo Liu, Hongxiang Chen
-
Patent number: 9852364
Abstract: In one embodiment, a method determines known features for existing face tracks that have identity labels and builds a database using these features. The face tracks may contain multiple different views of a face, and multiple features from these faces may be taken to build the face models. For an unlabeled face track without identity information, the method determines its sampled features and finds labeled nearest-neighbor features with respect to multiple feature spaces from the face models. For each face in the unlabeled face track, the method decomposes the face as a linear combination of its neighbors from the known features in the face models. The method then determines weights for the known features to weight their coefficients. Particular embodiments use a non-linear weighting function to learn weights that provide more accurate labels.
Type: Grant
Filed: March 19, 2015
Date of Patent: December 26, 2017
Assignee: HULU, LLC
Inventors: Cailiang Liu, Zhibing Wang, Chenguang Zhang, Tao Xiong
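The labeling step can be pictured as a weighted nearest-neighbor vote in feature space. The sketch below uses simple inverse-distance weights as a stand-in for the learned non-linear weighting function in the abstract, and operates on a single feature space rather than the multiple feature spaces the patent describes.

```python
import math
from collections import defaultdict

def weighted_knn_label(query, labeled, k=3):
    """Label an unlabeled face feature by combining its k nearest
    labeled neighbors. Each neighbor's vote is weighted by inverse
    distance, so closer known faces count more."""
    nearest = sorted(
        ((math.dist(query, feat), label) for feat, label in labeled)
    )[:k]
    scores = defaultdict(float)
    for d, label in nearest:
        scores[label] += 1.0 / (d + 1e-6)   # avoid division by zero
    return max(scores, key=scores.get)

# Toy 2-D features sampled from two labeled face tracks.
labeled = [
    ((0.0, 0.0), "alice"),
    ((0.1, 0.0), "alice"),
    ((5.0, 5.0), "bob"),
    ((5.1, 5.0), "bob"),
]
```

A query feature near the "alice" cluster gets her label even though one of its k=3 neighbors is a "bob" feature, because the distant neighbor's weight is negligible.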
-
Patent number: 9521470
Abstract: Particular embodiments configure a video delivery system to provide different modes for seeking in a video. The different modes may segment the video on different boundaries based on different characteristics of the video. For example, the modes may seek by scene, by shot, or by dialogue. The boundaries for scenes, shots, and dialogue start the video at logical points that do not break up its flow. In another embodiment, the media player may save a seek history for a user and allow the user to scan previous seek requests and return to their seek times. In one embodiment, the previous seek times are adjusted via the boundary information to show thumbnails for a shot, scene, or dialogue that correspond to the boundaries in the video rather than the original seek time.
Type: Grant
Filed: June 12, 2015
Date of Patent: December 13, 2016
Assignee: HULU, LLC
Inventors: Tao Xiong, Zhibing Wang, Chenyang Cui, Cailiang Liu
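The core adjustment the abstract describes — moving a raw seek time onto a logical boundary — can be sketched as a snap operation over a sorted list of boundary timestamps. The boundaries would come from whatever shot, scene, or dialogue detector is active for the current seek mode; the nearest-boundary rule here is an illustrative assumption.

```python
import bisect

def snap_seek(seek_time, boundaries):
    """Snap a raw seek time (seconds) to the nearest shot/scene/
    dialogue boundary so playback resumes at a logical point.
    `boundaries` must be a sorted list of boundary timestamps."""
    if not boundaries:
        return seek_time
    i = bisect.bisect_left(boundaries, seek_time)
    # The nearest boundary is either just before or just after i.
    candidates = boundaries[max(0, i - 1):i + 1]
    return min(candidates, key=lambda b: abs(b - seek_time))

# Shot boundaries (seconds) for a short clip.
shot_boundaries = [0.0, 12.5, 30.0, 47.2]
```

For example, a seek to 29.0 s lands on the shot boundary at 30.0 s rather than mid-shot, which is also where the thumbnail would be taken from.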
-
Publication number: 20160007093
Abstract: In one embodiment, a method includes sending videos to users of a video delivery service. The videos include shows that have episodes released sequentially. The method records historical records of video views based on the sending of the videos to the users. For a show, a show-specific model is determined to predict future video views by performing: determining historical records of video views for different episodes of the show; training the show-specific model with the historical records, wherein the show-specific model models a decay curve with a regularizing term to regularize the decay speed; using the show-specific model to predict future video views for a future time range for episodes of the show; and outputting the future video views to an ad system configured to sell ads for the show.
Type: Application
Filed: July 6, 2015
Publication date: January 7, 2016
Inventors: Cailiang Liu, Zhibing Wang, Dong Guo
-
Publication number: 20150365736
Abstract: Particular embodiments configure a video delivery system to provide different modes for seeking in a video. The different modes may segment the video on different boundaries based on different characteristics of the video. For example, the modes may seek by scene, by shot, or by dialogue. The boundaries for scenes, shots, and dialogue start the video at logical points that do not break up its flow. In another embodiment, the media player may save a seek history for a user and allow the user to scan previous seek requests and return to their seek times. In one embodiment, the previous seek times are adjusted via the boundary information to show thumbnails for a shot, scene, or dialogue that correspond to the boundaries in the video rather than the original seek time.
Type: Application
Filed: June 12, 2015
Publication date: December 17, 2015
Inventors: Tao Xiong, Zhibing Wang, Chenyang Cui, Cailiang Liu
-
Publication number: 20150269421
Abstract: In one embodiment, a method determines known features for existing face tracks that have identity labels and builds a database using these features. The face tracks may contain multiple different views of a face, and multiple features from these faces may be taken to build the face models. For an unlabeled face track without identity information, the method determines its sampled features and finds labeled nearest-neighbor features with respect to multiple feature spaces from the face models. For each face in the unlabeled face track, the method decomposes the face as a linear combination of its neighbors from the known features in the face models. The method then determines weights for the known features to weight their coefficients. Particular embodiments use a non-linear weighting function to learn weights that provide more accurate labels.
Type: Application
Filed: March 19, 2015
Publication date: September 24, 2015
Inventors: Cailiang Liu, Zhibing Wang, Chenguang Zhang, Tao Xiong
-
Patent number: 9118886
Abstract: A method for annotating general objects contained in video content is provided. The method sends video data to a client device and receives a first annotation from the client device defining a boundary around a portion of a first frame of the video data. The first annotation is then tracked through multiple frames of the video content. Other annotations that match the first annotation within a threshold are determined, where these annotations are received from other client devices and may be located in the first frame or in other frames. The method combines the other annotations and the first annotation into an object track and associates a tag with the object track. The tag is input by at least one of the client devices.
Type: Grant
Filed: July 17, 2013
Date of Patent: August 25, 2015
Assignee: HULU, LLC
Inventors: Zhibing Wang, Dong Wang, Tao Xiong, Cailiang Liu, Joyce Zhang, Heng Su
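The abstract does not say how two annotations "match within a threshold"; a common choice for box-shaped annotations is intersection-over-union (IoU), used here purely as an illustrative assumption. Annotations from other clients whose boxes overlap the first annotation enough are collected into one object track.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def matching_annotations(first, others, threshold=0.5):
    """Keep annotations from other clients whose boxes overlap the
    first annotation by at least the IoU threshold; together with
    the first annotation they form one object track."""
    return [box for box in others if iou(first, box) >= threshold]

first = (0, 0, 10, 10)                      # boundary drawn by first client
others = [(1, 1, 11, 11),                   # near-duplicate from another client
          (20, 20, 30, 30),                 # different object, no overlap
          (0, 0, 10, 10)]                   # exact duplicate
```

Here the near-duplicate and the exact duplicate pass the threshold and would be merged into the object track, while the non-overlapping box is left for a different track.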
-
Patent number: 9047376
Abstract: A video segment including interactive links to information about an actor appearing in the segment may be prepared in an automatic or semi-automatic process. A computer may detect an actor's face appearing in a frame of digital video data by processing the video file with a facial detection algorithm. A user-selectable link may be generated and activated along a track of the face through multiple frames of the video data. The user-selectable link may include a data address for obtaining additional information about an actor identified with the face. The video data may be associated with the user-selectable link and stored in a computer memory. When later viewing the video segment via a media player, a user may select the link to obtain further information about the actor.
Type: Grant
Filed: May 1, 2012
Date of Patent: June 2, 2015
Assignee: HULU, LLC
Inventors: Zhibing Wang, Dong Wang, Betina J. Chan-Martin, Yupeng Liao, Tao Xiong, Cailiang Liu
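One way to "activate a link along a track of the face through multiple frames" is to group per-frame face detections into contiguous runs and convert each run into a time interval during which the link stays selectable. The grouping rule, the `fps` value, and the dropout tolerance below are all hypothetical, not details from the patent.

```python
def link_intervals(detections, fps=24.0, max_gap=2):
    """Group frame indices where a face was detected into contiguous
    runs (tolerating short detection dropouts of up to max_gap
    frames), then convert each run into a (start, end) time interval
    in seconds during which the actor link is active."""
    intervals = []
    start = prev = None
    for f in sorted(detections):
        if start is None:
            start = prev = f
        elif f - prev <= max_gap:               # same run, small gap is fine
            prev = f
        else:                                   # gap too large: close the run
            intervals.append((start / fps, (prev + 1) / fps))
            start = prev = f
    if start is not None:
        intervals.append((start / fps, (prev + 1) / fps))
    return intervals
```

For instance, detections in frames 0-3 and 10-12 at 1 frame per second yield two active-link intervals, with the link hidden during the gap where the face is off-screen.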
-
Publication number: 20140023341
Abstract: A method for annotating general objects contained in video content is provided. The method sends video data to a client device and receives a first annotation from the client device defining a boundary around a portion of a first frame of the video data. The first annotation is then tracked through multiple frames of the video content. Other annotations that match the first annotation within a threshold are determined, where these annotations are received from other client devices and may be located in the first frame or in other frames. The method combines the other annotations and the first annotation into an object track and associates a tag with the object track. The tag is input by at least one of the client devices.
Type: Application
Filed: July 17, 2013
Publication date: January 23, 2014
Applicant: Hulu, LLC
Inventors: Zhibing Wang, Dong Wang, Tao Xiong, Cailiang Liu, Joyce Zhang, Heng Su
-
Publication number: 20130294642
Abstract: A video segment including interactive links to information about an actor appearing in the segment may be prepared in an automatic or semi-automatic process. A computer may detect an actor's face appearing in a frame of digital video data by processing the video file with a facial detection algorithm. A user-selectable link may be generated and activated along a track of the face through multiple frames of the video data. The user-selectable link may include a data address for obtaining additional information about an actor identified with the face. The video data may be associated with the user-selectable link and stored in a computer memory. When later viewing the video segment via a media player, a user may select the link to obtain further information about the actor.
Type: Application
Filed: May 1, 2012
Publication date: November 7, 2013
Applicant: HULU LLC
Inventors: Zhibing Wang, Dong Wang, Betina J. Chan-Martin, Yupeng Liao, Tao Xiong, Cailiang Liu