Patents by Inventor ZHOU SU
ZHOU SU has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12250277
Abstract: Embodiments of this application provide a method for making recommendations to a user and an apparatus, a computing device, and a storage medium. The method includes obtaining user attribute information, reading attribute information, reading history information, and candidate items; performing intra-group information fusion on the reading attribute information according to preset groupings to obtain reading feature information; obtaining a reading history weight according to the reading history information; obtaining history feature information according to the reading history weight and the reading history information; obtaining user feature information according to the user attribute information, the reading feature information, and the history feature information; and selecting a recommendation item from the candidate items according to the user feature information.
Type: Grant
Filed: May 24, 2021
Date of Patent: March 11, 2025
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Zhijie Qiu, Jun Rao, Yi Liu, Zhou Su, Shukai Liu, Zhenlong Sun, Qi Liu, Liangdong Wang, Tiantian Shang, Mingfei Liang, Lei Chen, Bo Zhang, Leyu Lin
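As a rough illustration of the pipeline this abstract describes, the Python sketch below fuses grouped reading attributes, weights the reading history, combines the results with user attributes into a user feature, and scores candidate items. The module names, tensor sizes, mean-pooling of groups, and dot-product scoring are assumptions for illustration and are not taken from the patent.

```python
# Minimal sketch of the recommendation pipeline; all design details are assumed.
import torch
import torch.nn as nn

class ReadingRecommender(nn.Module):
    def __init__(self, attr_dim=16, group_size=4, hist_dim=16, cand_dim=32):
        super().__init__()
        # Intra-group fusion: each preset group of reading attributes -> one vector.
        self.group_fusion = nn.Linear(group_size * attr_dim, attr_dim)
        # Reading-history weight: a learned scalar weight per history item.
        self.hist_weight = nn.Linear(hist_dim, 1)
        # User feature: combine user attributes, reading features, history features.
        self.user_proj = nn.Linear(attr_dim * 2 + hist_dim, cand_dim)

    def forward(self, user_attr, reading_attr_groups, reading_history, candidates):
        # reading_attr_groups: (num_groups, group_size, attr_dim)
        fused = self.group_fusion(reading_attr_groups.flatten(1))    # (num_groups, attr_dim)
        reading_feat = fused.mean(dim=0)                             # (attr_dim,)
        # reading_history: (num_items, hist_dim) -> softmax weights over items
        w = torch.softmax(self.hist_weight(reading_history), dim=0)  # (num_items, 1)
        history_feat = (w * reading_history).sum(dim=0)              # (hist_dim,)
        user_feat = self.user_proj(torch.cat([user_attr, reading_feat, history_feat]))
        # Score each candidate item and select the recommendation.
        scores = candidates @ user_feat
        return scores.argmax()

model = ReadingRecommender()
pick = model(torch.randn(16), torch.randn(3, 4, 16), torch.randn(5, 16), torch.randn(10, 32))
print("recommended candidate index:", int(pick))
```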
-
Patent number: 12242270
Abstract: A method and system for controlling multi-unmanned surface vessel (USV) collaborative search are disclosed, relating to the technical field of marine intelligent USV collaborative operation. The method includes determining a task region of a USV team; determining environmental perception information corresponding to each of the USVs at the current moment according to the task region and the probability graph mode; inputting the environmental perception information corresponding to each of the USVs at the current moment into the corresponding target search strategy output model respectively to obtain an execution action of each of the USVs at the next moment; and sending the execution action of each of the USVs at the next moment to a corresponding USV execution structure to search for underwater targets within the task region. The target search strategy output model is obtained by training based on a training sample and a DDQN network structure.
Type: Grant
Filed: September 8, 2021
Date of Patent: March 4, 2025
Assignees: Shanghai University, Chongqing University
Inventors: Huayan Pu, Yuan Liu, Jun Luo, Zhijiang Xie, Xiaomao Li, Jiajia Xie, Zhou Su, Yan Peng, Shaorong Xie
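The target search strategy output model above is trained with a DDQN (double deep Q-network) structure. The sketch below shows the two pieces that distinguish DDQN: the online network selects the next action and the target network evaluates it. The network sizes, the discrete action set of candidate headings, and the reward and discount values are illustrative assumptions.

```python
# Minimal DDQN sketch; observation size, action set, and hyperparameters are assumed.
import torch
import torch.nn as nn

obs_dim, n_actions, gamma = 10, 8, 0.99     # 8 candidate headings per USV (assumed)
q_online = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
q_target = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
q_target.load_state_dict(q_online.state_dict())

def ddqn_target(reward, next_obs, done):
    """Double DQN: the online net picks the next action, the target net scores it."""
    with torch.no_grad():
        next_action = q_online(next_obs).argmax(dim=1, keepdim=True)
        next_q = q_target(next_obs).gather(1, next_action).squeeze(1)
        return reward + gamma * (1.0 - done) * next_q

def select_action(obs):
    """Each USV maps its environmental perception to an execution action for the next moment."""
    with torch.no_grad():
        return int(q_online(obs).argmax())

obs = torch.randn(4, obs_dim)               # a team of 4 USVs (assumed)
actions = [select_action(o.unsqueeze(0)) for o in obs]
print("next-moment actions:", actions)
print("example TD targets:", ddqn_target(torch.ones(4), obs, torch.zeros(4)))
```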
-
Patent number: 11887485
Abstract: A control method and system for collaborative interception by multiple unmanned surface vessels are provided. The method includes obtaining task environment information of each unmanned surface vessel in an unmanned surface vessel group at a current moment; estimating interception point information of the intruding target at the current moment by using a Kalman filter according to the task environment information of the unmanned surface vessels at the current moment; determining process state information of each unmanned surface vessel at the current moment; and inputting the process state information of each unmanned surface vessel at the current moment into a corresponding intruding target interception policy output model respectively to obtain an execution action of each unmanned surface vessel at a next moment to intercept the intruding target. The application can intercept the intruding target accurately.
Type: Grant
Filed: September 7, 2021
Date of Patent: January 30, 2024
Assignees: Shanghai University, Chongqing University
Inventors: Huayan Pu, Yuan Liu, Jun Luo, Zhijiang Xie, Jiajia Xie, Xiaomao Li, Zhou Su, Yan Peng, Hengyu Li, Shaorong Xie
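To illustrate the Kalman-filter step the abstract relies on, the sketch below tracks an intruding target's position and velocity from noisy position observations and predicts its position at the next moment as a stand-in for the interception point. The constant-velocity motion model and the noise covariances are assumptions, not values from the patent.

```python
# Minimal Kalman-filter sketch for interception-point estimation; model values are assumed.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],          # constant-velocity state transition
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],           # only the (x, y) position is observed
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01                  # process noise (assumed)
R = np.eye(2) * 0.1                   # measurement noise (assumed)

x = np.zeros(4)                       # state: [x, y, vx, vy]
P = np.eye(4)

def kalman_step(x, P, z):
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the new observation z of the intruder's position
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

for z in [np.array([1.0, 0.5]), np.array([2.1, 1.0]), np.array([3.0, 1.6])]:
    x, P = kalman_step(x, P, z)

interception_point = (F @ x)[:2]      # predicted target position at the next moment
print("estimated interception point:", interception_point)
```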
-
Patent number: 11790644
Abstract: Techniques and apparatus for generating dense natural language descriptions for video content are described. In one embodiment, for example, an apparatus may include at least one memory and logic, at least a portion of the logic comprised in hardware coupled to the at least one memory, the logic to receive a source video comprising a plurality of frames, determine a plurality of regions for each of the plurality of frames, generate at least one region-sequence connecting the determined plurality of regions, and apply a language model to the at least one region-sequence to generate description information comprising a description of at least a portion of content of the source video. Other embodiments are described and claimed.
Type: Grant
Filed: January 6, 2022
Date of Patent: October 17, 2023
Assignee: INTEL CORPORATION
Inventors: Yurong Chen, Jianguo Li, Zhou Su, Zhiqiang Shen
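As a rough sketch of the region-sequence idea, the snippet below links per-frame region features across frames by cosine similarity and feeds the resulting sequence to a small LSTM that stands in for the language model. The greedy linking rule, feature sizes, and toy vocabulary are assumptions for illustration.

```python
# Minimal region-sequence sketch; linking rule, sizes, and vocabulary are assumed.
import torch
import torch.nn.functional as F

def link_region_sequence(frame_regions):
    """frame_regions: list of (num_regions, feat_dim) tensors, one per frame."""
    seq = [frame_regions[0][0]]                     # start from a region in frame 0
    for regions in frame_regions[1:]:
        sims = F.cosine_similarity(seq[-1].unsqueeze(0), regions, dim=1)
        seq.append(regions[sims.argmax()])          # connect the most similar region
    return torch.stack(seq)                         # (num_frames, feat_dim)

feat_dim, vocab = 32, ["a", "boat", "sails", "on", "water"]
frames = [torch.randn(5, feat_dim) for _ in range(4)]   # 5 candidate regions per frame
region_seq = link_region_sequence(frames)

# "Language model": an LSTM over the region-sequence followed by a word projection.
lstm = torch.nn.LSTM(feat_dim, 64, batch_first=True)
to_word = torch.nn.Linear(64, len(vocab))
out, _ = lstm(region_seq.unsqueeze(0))
words = [vocab[int(i)] for i in to_word(out).argmax(dim=-1).squeeze(0)]
print("description:", " ".join(words))
```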
-
Patent number: 11663249
Abstract: An example apparatus for visual question answering includes a receiver to receive an input image and a question. The apparatus also includes an encoder to encode the input image and the question into a query representation including visual attention features. The apparatus includes a knowledge spotter to retrieve a knowledge entry from a visual knowledge base pre-built on a set of question-answer pairs. The apparatus further includes a joint embedder to jointly embed the visual attention features and the knowledge entry to generate visual-knowledge features. The apparatus also further includes an answer generator to generate an answer based on the query representation and the visual-knowledge features.
Type: Grant
Filed: January 30, 2018
Date of Patent: May 30, 2023
Assignee: Intel Corporation
Inventors: Zhou Su, Jianguo Li, Yinpeng Dong, Yurong Chen
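A minimal sketch of that flow is given below: a query representation is formed from image and question features, the closest entry in a small knowledge base is retrieved as the "knowledge spotter" step, the two are jointly embedded, and an answer is picked from a toy answer set. All dimensions, the nearest-neighbour retrieval, and the answer set are illustrative assumptions.

```python
# Minimal VQA-with-knowledge sketch; every component here is an illustrative assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

dim, answers = 32, ["red", "a dog", "two", "yes"]
knowledge_base = torch.randn(100, dim)              # pre-built knowledge entries (assumed)

encode = nn.Linear(2 * dim, dim)                    # image + question -> query representation
joint_embed = nn.Linear(2 * dim, dim)               # attention feats + knowledge -> joint feats
answer_head = nn.Linear(2 * dim, len(answers))      # query + joint feats -> answer

image_feat, question_feat = torch.randn(dim), torch.randn(dim)
query = encode(torch.cat([image_feat, question_feat]))

# Knowledge spotter: retrieve the most similar knowledge entry for the query.
scores = F.cosine_similarity(query.unsqueeze(0), knowledge_base, dim=1)
knowledge_entry = knowledge_base[scores.argmax()]

visual_attention = torch.softmax(query, dim=0) * image_feat      # toy attention over image feats
joint = joint_embed(torch.cat([visual_attention, knowledge_entry]))
answer = answers[int(answer_head(torch.cat([query, joint])).argmax())]
print("answer:", answer)
```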
-
Patent number: 11540019
Abstract: A video recommendation method is provided, including: inputting a video to a first feature extraction network, performing feature extraction on at least one consecutive video frame in the video, and outputting a video feature of the video; inputting user data of a user to a second feature extraction network, performing feature extraction on the discrete user data, and outputting a user feature of the user; performing feature fusion based on the video feature and the user feature, and obtaining a recommendation probability of recommending the video to the user; and determining, according to the recommendation probability, whether to recommend the video to the user.
Type: Grant
Filed: May 25, 2021
Date of Patent: December 27, 2022
Assignee: Tencent Technology (Shenzhen) Company Limited
Inventors: Zhou Su, Shukai Liu, Zhenlong Sun, Jun Rao, Zhijie Qiu, Yi Liu, Qi Liu, Liangdong Wang, Tiantian Shang, Mingfei Liang, Lei Chen, Bo Zhang
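The two-tower structure the abstract describes can be sketched as follows: one network pools per-frame features into a video feature, another embeds discrete user data into a user feature, and a fusion head outputs a recommendation probability. The specific layers, sizes, and the 0.5 decision threshold are assumptions.

```python
# Minimal two-tower recommendation sketch; layer choices and sizes are assumed.
import torch
import torch.nn as nn

class VideoTower(nn.Module):
    def __init__(self, frame_dim=128, out_dim=64):
        super().__init__()
        self.frame_encoder = nn.Sequential(nn.Linear(frame_dim, out_dim), nn.ReLU())
    def forward(self, frames):                          # (num_frames, frame_dim)
        return self.frame_encoder(frames).mean(dim=0)   # temporal average pooling

class UserTower(nn.Module):
    def __init__(self, num_ids=1000, out_dim=64):
        super().__init__()
        self.embed = nn.Embedding(num_ids, out_dim)     # discrete user data -> dense feature
    def forward(self, user_ids):                        # (num_fields,) integer ids
        return self.embed(user_ids).mean(dim=0)

fusion = nn.Sequential(nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

video_feat = VideoTower()(torch.randn(16, 128))         # 16 consecutive frames
user_feat = UserTower()(torch.tensor([3, 42, 7]))       # e.g. age bucket, region, device
prob = fusion(torch.cat([video_feat, user_feat])).item()
print(f"recommendation probability: {prob:.3f}, recommend: {prob > 0.5}")
```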
-
Publication number: 20220230268
Abstract: Described herein are advanced artificial intelligence agents for modeling physical interactions. In one embodiment, an apparatus to provide an active artificial intelligence (AI) agent includes at least one database to store physical interaction data and a compute cluster coupled to the at least one database. The compute cluster automatically obtains physical interaction data from a data collection module without manual interaction, stores the physical interaction data in the at least one database, and automatically trains diverse sets of machine learning program units to simulate physical interactions, with each individual program unit having a different model based on the applied physical interaction data.
Type: Application
Filed: November 2, 2021
Publication date: July 21, 2022
Inventors: Anbang YAO, Dongqi CAI, Libin WANG, Lin XU, Ping HU, Shandong WANG, Wenhua CHENG, Yiwen GUO, Liu YANG, Yuqing HOU, Zhou SU
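As a loose illustration of the idea, the snippet below "collects" physical-interaction data automatically from a simulated drop experiment, stores it, and trains a small, diverse set of program units, each with a different model (polynomial regressors of different degrees). Both the simulated interaction and the choice of models are assumptions standing in for the system the publication describes.

```python
# Loose sketch: automatic data collection + a diverse set of per-unit models (all assumed).
import numpy as np

def collect_interaction_data(n=200):
    """Data collection module: simulated drop height vs. time, no manual steps."""
    t = np.random.uniform(0.0, 2.0, n)
    height = 20.0 - 0.5 * 9.81 * t**2 + np.random.normal(0.0, 0.1, n)
    return t, height

database = [collect_interaction_data()]              # "at least one database"

# Train a diverse set of program units, each with a different model (here, a different degree).
program_units = []
for degree in (1, 2, 3):
    t, h = database[0]
    coeffs = np.polyfit(t, h, degree)                # fit one model per program unit
    program_units.append(np.poly1d(coeffs))

for degree, unit in zip((1, 2, 3), program_units):
    print(f"degree-{degree} unit predicts height at t=1.5s:", round(unit(1.5), 2))
```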
-
Publication number: 20220215758
Abstract: A control method and system for collaborative interception by multiple unmanned surface vessels are provided. The method includes obtaining task environment information of each unmanned surface vessel in an unmanned surface vessel group at a current moment; estimating interception point information of the intruding target at the current moment by using a Kalman filter according to the task environment information of the unmanned surface vessels at the current moment; determining process state information of each unmanned surface vessel at the current moment; and inputting the process state information of each unmanned surface vessel at the current moment into a corresponding intruding target interception policy output model respectively to obtain an execution action of each unmanned surface vessel at a next moment to intercept the intruding target. The application can intercept the intruding target accurately.
Type: Application
Filed: September 7, 2021
Publication date: July 7, 2022
Inventors: Huayan PU, Yuan LIU, Jun LUO, Zhijiang XIE, Jiajia XIE, Xiaomao LI, Zhou SU, Yan PENG, Hengyu LI, Shaorong XIE
-
Publication number: 20220214688
Abstract: A method and system for controlling multi-unmanned surface vessel (USV) collaborative search are disclosed, relating to the technical field of marine intelligent USV collaborative operation. The method includes determining a task region of a USV team; determining environmental perception information corresponding to each of the USVs at the current moment according to the task region and the probability graph mode; inputting the environmental perception information corresponding to each of the USVs at the current moment into the corresponding target search strategy output model respectively to obtain an execution action of each of the USVs at the next moment; and sending the execution action of each of the USVs at the next moment to a corresponding USV execution structure to search for underwater targets within the task region. The target search strategy output model is obtained by training based on a training sample and a DDQN network structure.
Type: Application
Filed: September 8, 2021
Publication date: July 7, 2022
Inventors: Huayan PU, Yuan LIU, Jun LUO, Zhijiang XIE, Xiaomao LI, Jiajia XIE, Zhou SU, Yan PENG, Shaorong XIE
-
Publication number: 20220180127
Abstract: Techniques and apparatus for generating dense natural language descriptions for video content are described. In one embodiment, for example, an apparatus may include at least one memory and logic, at least a portion of the logic comprised in hardware coupled to the at least one memory, the logic to receive a source video comprising a plurality of frames, determine a plurality of regions for each of the plurality of frames, generate at least one region-sequence connecting the determined plurality of regions, and apply a language model to the at least one region-sequence to generate description information comprising a description of at least a portion of content of the source video. Other embodiments are described and claimed.
Type: Application
Filed: January 6, 2022
Publication date: June 9, 2022
Applicant: INTEL CORPORATION
Inventors: Yurong CHEN, Jianguo LI, Zhou SU, Zhiqiang SHEN
-
Patent number: 11341368
Abstract: Methods and systems for advanced and augmented training of deep neural networks (DNNs) using synthetic data and innovative generative networks. A method includes training a DNN using synthetic data, training a plurality of DNNs using context data, associating features of the DNNs trained using context data with features of the DNN trained with synthetic data, and generating an augmented DNN using the associated features.
Type: Grant
Filed: April 7, 2017
Date of Patent: May 24, 2022
Assignee: Intel Corporation
Inventors: Anbang Yao, Shandong Wang, Wenhua Cheng, Dongqi Cai, Libin Wang, Lin Xu, Ping Hu, Yiwen Guo, Liu Yang, Yuqing Hou, Zhou Su, Yurong Chen
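The augmentation idea can be sketched roughly as follows: one small DNN is trained on synthetic data, several are trained on context data, their feature-layer outputs are associated (here simply concatenated), and an augmented head is trained on the associated features. The tiny networks, random data, and concatenation rule are illustrative assumptions, not the patent's specific method.

```python
# Rough sketch of feature association between synthetic-data and context-data DNNs (assumed details).
import torch
import torch.nn as nn

def make_dnn(in_dim=16, feat_dim=8, n_classes=4):
    return nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, n_classes))

def train(dnn, x, y, steps=50):
    opt = torch.optim.Adam(dnn.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(dnn(x), y)
        loss.backward()
        opt.step()
    return dnn

x_syn, y_syn = torch.randn(64, 16), torch.randint(0, 4, (64,))       # synthetic data (random stand-in)
x_ctx, y_ctx = torch.randn(64, 16), torch.randint(0, 4, (64,))       # context data (random stand-in)

dnn_syn = train(make_dnn(), x_syn, y_syn)
dnn_ctx = [train(make_dnn(), x_ctx, y_ctx) for _ in range(2)]        # a plurality of context DNNs

def features(dnn, x):
    return dnn[:2](x)                      # output of the penultimate (feature) layer

# Associate context-DNN features with synthetic-DNN features and build an augmented DNN.
assoc = torch.cat([features(dnn_syn, x_ctx)] + [features(d, x_ctx) for d in dnn_ctx], dim=1)
augmented_head = train(make_dnn(in_dim=assoc.shape[1]), assoc.detach(), y_ctx)
print("augmented DNN accuracy on context data:",
      float((augmented_head(assoc.detach()).argmax(1) == y_ctx).float().mean()))
```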
-
Patent number: 11263489
Abstract: Techniques and apparatus for generating dense natural language descriptions for video content are described. In one embodiment, for example, an apparatus may include at least one memory and logic, at least a portion of the logic comprised in hardware coupled to the at least one memory, the logic to receive a source video comprising a plurality of frames, determine a plurality of regions for each of the plurality of frames, generate at least one region-sequence connecting the determined plurality of regions, and apply a language model to the at least one region-sequence to generate description information comprising a description of at least a portion of content of the source video. Other embodiments are described and claimed.
Type: Grant
Filed: June 29, 2017
Date of Patent: March 1, 2022
Assignee: INTEL CORPORATION
Inventors: Yurong Chen, Jianguo Li, Zhou Su, Zhiqiang Shen
-
Patent number: 11176632
Abstract: Described herein are advanced artificial intelligence agents for modeling physical interactions. An apparatus to provide an active artificial intelligence (AI) agent includes at least one database to store physical interaction data and a compute cluster coupled to the at least one database. The compute cluster automatically obtains physical interaction data from a data collection module without manual interaction, stores the physical interaction data in the at least one database, and automatically trains diverse sets of machine learning program units to simulate physical interactions, with each individual program unit having a different model based on the applied physical interaction data.
Type: Grant
Filed: April 7, 2017
Date of Patent: November 16, 2021
Assignee: Intel Corporation
Inventors: Anbang Yao, Dongqi Cai, Libin Wang, Lin Xu, Ping Hu, Shandong Wang, Wenhua Cheng, Yiwen Guo, Liu Yang, Yuqing Hou, Zhou Su
-
Publication number: 20210279552
Abstract: Embodiments of this application provide a method for making recommendations to a user and an apparatus, a computing device, and a storage medium. The method includes obtaining user attribute information, reading attribute information, reading history information, and candidate items; performing intra-group information fusion on the reading attribute information according to preset groupings to obtain reading feature information; obtaining a reading history weight according to the reading history information; obtaining history feature information according to the reading history weight and the reading history information; obtaining user feature information according to the user attribute information, the reading feature information, and the history feature information; and selecting a recommendation item from the candidate items according to the user feature information.
Type: Application
Filed: May 24, 2021
Publication date: September 9, 2021
Inventors: Zhijie QIU, Jun RAO, Yi LIU, Zhou SU, Shukai LIU, Zhenlong SUN, Qi LIU, Liangdong WANG, Tiantian SHANG, Mingfei LIANG, Lei CHEN, Bo ZHANG, Leyu LIN
-
Publication number: 20210281918
Abstract: A video recommendation method is provided, including: inputting a video to a first feature extraction network, performing feature extraction on at least one consecutive video frame in the video, and outputting a video feature of the video; inputting user data of a user to a second feature extraction network, performing feature extraction on the discrete user data, and outputting a user feature of the user; performing feature fusion based on the video feature and the user feature, and obtaining a recommendation probability of recommending the video to the user; and determining, according to the recommendation probability, whether to recommend the video to the user.
Type: Application
Filed: May 25, 2021
Publication date: September 9, 2021
Applicant: Tencent Technology (Shenzhen) Company Limited
Inventors: Zhou SU, Shukai LIU, Zhenlong SUN, Jun RAO, Zhijie QIU, Yi LIU, Qi LIU, Liangdong WANG, Tiantian SHANG, Mingfei LIANG, Lei CHEN, Bo ZHANG
-
Publication number: 20210201078
Abstract: Methods and systems for advanced and augmented training of deep neural networks (DNNs) using synthetic data and innovative generative networks. A method includes training a DNN using synthetic data, training a plurality of DNNs using context data, associating features of the DNNs trained using context data with features of the DNN trained with synthetic data, and generating an augmented DNN using the associated features.
Type: Application
Filed: April 7, 2017
Publication date: July 1, 2021
Inventors: Anbang Yao, Shandong Wang, Wenhua Cheng, Dongqi Cai, Libin Wang, Lin Xu, Ping Hu, Yiwen Guo, Liu Yang, Yuqing Hou, Zhou Su, Yurong Chen
-
Patent number: 11042782
Abstract: Techniques are provided for training and operation of a topic-guided image captioning system. A methodology implementing the techniques according to an embodiment includes generating image feature vectors, for an image to be captioned, based on application of a convolutional neural network (CNN) to the image. The method further includes generating the caption based on application of a recurrent neural network (RNN) to the image feature vectors. The RNN is configured as a long short-term memory (LSTM) RNN. The method further includes training the LSTM RNN with training images and associated training captions. The training is based on a combination of: feature vectors of the training image; feature vectors of the associated training caption; and a multimodal compact bilinear (MCB) pooling of the training caption feature vectors and an estimated topic of the training image. The estimated topic is generated by an application of the CNN to the training image.
Type: Grant
Filed: March 20, 2017
Date of Patent: June 22, 2021
Assignee: INTEL CORPORATION
Inventors: Zhou Su, Jianguo Li, Anbang Yao, Yurong Chen
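A simplified sketch of the training-time signal is shown below: CNN-style image features produce an estimated topic, an LSTM generates a caption conditioned on the image features, and the caption features are pooled with the topic. For brevity, a full outer-product bilinear pooling stands in for multimodal compact bilinear (MCB) pooling; all sizes, the toy vocabulary, and the generation scheme are assumptions.

```python
# Simplified topic-guided captioning sketch; outer-product pooling stands in for MCB (assumed).
import torch
import torch.nn as nn

vocab = ["<start>", "a", "cat", "on", "the", "mat", "<end>"]
img_dim, word_dim, hid, n_topics = 32, 16, 32, 5

cnn_feat = torch.randn(img_dim)                       # stand-in for CNN image feature vectors
topic_head = nn.Linear(img_dim, n_topics)
estimated_topic = torch.softmax(topic_head(cnn_feat), dim=0)   # CNN-estimated image topic

embed = nn.Embedding(len(vocab), word_dim)
lstm = nn.LSTMCell(word_dim + img_dim, hid)
to_word = nn.Linear(hid, len(vocab))

# Greedy caption generation: feed the image feature at every step (assumed scheme).
h, c = torch.zeros(1, hid), torch.zeros(1, hid)
word, caption_feats, caption = torch.tensor([0]), [], []
for _ in range(6):
    x = torch.cat([embed(word), cnn_feat.unsqueeze(0)], dim=1)
    h, c = lstm(x, (h, c))
    caption_feats.append(h.squeeze(0))
    word = to_word(h).argmax(dim=1)
    caption.append(vocab[int(word)])

# Pool caption features with the estimated topic (outer product in place of MCB).
pooled = torch.outer(torch.stack(caption_feats).mean(dim=0), estimated_topic)
print("caption:", " ".join(caption), "| pooled feature shape:", tuple(pooled.shape))
```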
-
Patent number: D1049333
Type: Grant
Filed: May 24, 2023
Date of Patent: October 29, 2024
Inventor: Zhou Su
-
Patent number: D1049334
Type: Grant
Filed: May 24, 2023
Date of Patent: October 29, 2024
Inventor: Zhou Su
-
Patent number: D1065476
Type: Grant
Filed: May 19, 2023
Date of Patent: March 4, 2025
Inventor: Zhou Su