Patents by Inventor Wenmin Wang

Wenmin Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240067414
    Abstract: A mobile cooling box having a box main body and at least one lid for opening the box and providing access to the inside of the box. The at least one lid is pivotally attached to the box main body by at least two hinge modules. Each hinge module comprises a pin module having a hinge pin with a front end, a rear end, a longitudinal axis about which the lid is pivotable, and a predominantly smooth, cylindrical outer surface. Each hinge module further comprises a bearing module having a hinge bearing that accommodates the hinge pin. The hinge pin extends laterally with its front end into the hinge bearing, so that, while the lid is pivoted with respect to the box main body, the axis of the hinge bearing remains collinear with the longitudinal axis of the hinge pin.
    Type: Application
    Filed: November 6, 2023
    Publication date: February 29, 2024
    Inventors: Weixian Guan, Peng Wang, Wenmin Tan, Yuelong Chen
  • Patent number: 11913713
    Abstract: A mobile cooling box has a box main body and at least one lid for providing access to the inside of the mobile cooling box. At one side edge the lid is hinged to the box main body so that the lid is pivotable from a closed position to an open position. The mobile cooling box further has a latch handle module for manually locking and unlocking the lid. The latch handle module is integrated in the lid and located at another side edge of the lid. The latch handle module comprises an actuating element that is manually operable by the user, a locking element that is engageable with a corresponding counterpart located at the box main body in order to prevent the lid from being opened, and a casing; together these provide a mechanism for locking and unlocking the lid.
    Type: Grant
    Filed: December 27, 2019
    Date of Patent: February 27, 2024
    Assignee: Dometic Sweden AB
    Inventors: Weixian Guan, Peng Wang, Wenmin Tan, Yuelong Chen
  • Patent number: 11397890
    Abstract: The present application discloses a cross-media retrieval method based on a deep semantic space, which comprises a feature generation stage and a semantic space learning stage. In the feature generation stage, a CNN visual feature vector and an LSTM language description vector of an image are generated by simulating a person's perception process for the image, and topic information about a text is explored by using an LDA topic model to extract an LDA text topic vector. In the semantic space learning stage, a four-layer Multi-Sensory Fusion Deep Neural Network is trained on the training-set images and a three-layer text semantic network is trained on the training-set texts. Finally, a test image and a test text are mapped into an isomorphic semantic space by the two networks respectively, so as to realize cross-media retrieval. The disclosed method can significantly improve cross-media retrieval performance. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: August 16, 2017
    Date of Patent: July 26, 2022
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Wenmin Wang, Mengdi Fan, Peilei Dong, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao
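    Illustrative sketch (Python/numpy): a minimal, hypothetical rendering of retrieval in a shared semantic space. The trained four-layer Multi-Sensory Fusion Deep Neural Network and three-layer text semantic network are replaced by placeholder random projections, and all dimensions are assumed rather than taken from the patent.

      import numpy as np

      rng = np.random.default_rng(0)
      D_IMG, D_TXT, D_SEM = 4096 + 300, 100, 128     # CNN+LSTM image feature, LDA topics, shared space (assumed sizes)

      W_img = rng.standard_normal((D_IMG, D_SEM))    # stand-in for the trained image branch
      W_txt = rng.standard_normal((D_TXT, D_SEM))    # stand-in for the trained text branch

      def embed(x, W):
          z = x @ W
          return z / (np.linalg.norm(z) + 1e-12)     # L2-normalise so a dot product equals cosine similarity

      image_feat = rng.standard_normal(D_IMG)                 # concatenated CNN visual + LSTM description vector
      text_topics = rng.dirichlet(np.ones(D_TXT), size=5)     # LDA topic vectors of 5 candidate texts

      query = embed(image_feat, W_img)
      candidates = np.stack([embed(t, W_txt) for t in text_topics])
      print("retrieved text order:", np.argsort(-candidates @ query))   # image-to-text retrieval by cosine similarity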
  • Patent number: 11379711
    Abstract: A video action detection method based on a convolutional neural network (CNN) is disclosed in the field of computer vision recognition technologies. A temporal-spatial pyramid pooling layer is added to the network structure, which removes the network's limitation on input size, speeds up training and detection, and improves the performance of video action classification and temporal localization. The disclosed convolutional neural network includes a convolutional layer, a common pooling layer, a temporal-spatial pyramid pooling layer and a fully connected layer. The outputs of the network include a category classification output layer and a temporal localization calculation result output layer. The disclosed method does not require down-sampling to obtain video clips of different durations; instead, the whole video is input directly at once, improving efficiency. (An illustrative sketch of the pooling layer follows this entry.)
    Type: Grant
    Filed: August 16, 2017
    Date of Patent: July 5, 2022
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Wenmin Wang, Zhihao Li, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao
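    Illustrative sketch (Python/numpy): a toy temporal-spatial pyramid pooling function showing how a variable-size feature map (T, H, W, C) is reduced to a fixed-length vector, which is what lets the network accept whole videos of arbitrary duration. The pyramid levels are assumed, not taken from the patent.

      import numpy as np

      def tsp_pool(feat, levels=((1, 1, 1), (2, 2, 2), (4, 2, 2))):
          T, H, W, C = feat.shape
          out = []
          for lt, lh, lw in levels:                      # assumed pyramid levels
              t_edges = np.linspace(0, T, lt + 1, dtype=int)
              h_edges = np.linspace(0, H, lh + 1, dtype=int)
              w_edges = np.linspace(0, W, lw + 1, dtype=int)
              for i in range(lt):
                  for j in range(lh):
                      for k in range(lw):
                          cell = feat[t_edges[i]:t_edges[i+1],
                                      h_edges[j]:h_edges[j+1],
                                      w_edges[k]:w_edges[k+1]]
                          out.append(cell.max(axis=(0, 1, 2)))   # max-pool each 3-D cell per channel
          return np.concatenate(out)                     # fixed length: C * sum(lt*lh*lw)

      short_clip = np.random.rand(8, 14, 14, 64)
      long_clip = np.random.rand(50, 14, 14, 64)
      assert tsp_pool(short_clip).shape == tsp_pool(long_clip).shape   # same output size regardless of duration T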
  • Patent number: 11347979
    Abstract: A method and a device for MCMC framework-based sub-hypergraph matching are provided. Matching of object features is performed by constructing sub-hypergraphs. In a large number of real images and videos, objects vary constantly and contain various noise points as well as other interference factors, which makes image object matching and searching very difficult. Performing object feature matching by representing the appearance and positions of objects with sub-hypergraphs allows for faster and more accurate image matching. Furthermore, a sub-hypergraph has several advantages over a graph or a hypergraph: on one hand, a sub-hypergraph carries more geometric information (e.g. angle transformation, rotation, scale, etc.) than a graph, while being less difficult to handle and more extensible than a hypergraph; on the other hand, the disclosed method and device have stronger resistance to interference and good robustness, and are adaptable to more complex settings, especially those with outliers. (A toy sketch of the matching search follows this entry.)
    Type: Grant
    Filed: March 10, 2016
    Date of Patent: May 31, 2022
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Wenmin Wang, Ruonan Zhang, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao
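    Toy sketch (Python/numpy): a Metropolis-Hastings search over point correspondences. The patent's sub-hypergraph compatibility score, which uses higher-order geometric cues such as angles, is replaced here by a simple pairwise-distance consistency score, so this is only a rough analogue of the MCMC framework.

      import numpy as np

      rng = np.random.default_rng(1)
      src = rng.random((6, 2))                                      # model points
      perm_true = rng.permutation(6)
      dst = src[perm_true] + 0.01 * rng.standard_normal((6, 2))     # observed points: permuted + noise

      def score(assign):
          # higher when pairwise distances are preserved by the assignment src[i] <-> dst[assign[i]]
          d_src = np.linalg.norm(src[:, None] - src[None, :], axis=-1)
          d_dst = np.linalg.norm(dst[assign][:, None] - dst[assign][None, :], axis=-1)
          return -np.sum((d_src - d_dst) ** 2)

      assign = rng.permutation(6)                                   # initial matching
      for _ in range(5000):                                         # MCMC: propose a swap, accept or reject
          i, j = rng.choice(6, 2, replace=False)
          prop = assign.copy(); prop[[i, j]] = prop[[j, i]]
          if rng.random() < np.exp(min(0.0, (score(prop) - score(assign)) / 1e-2)):
              assign = prop

      print("recovered matching:", assign, "expected:", np.argsort(perm_true))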
  • Patent number: 11238274
    Abstract: An image feature extraction method for person re-identification performs person re-identification by means of aligned local descriptor extraction and graded global feature extraction. The aligned local descriptor extraction processes the original image by an affine transformation and performs a summation pooling operation on image block features of the same regions to obtain an aligned local descriptor, preserving the spatial information between the inner blocks of the image. The graded global feature extraction grades the located pedestrian region block and computes the corresponding feature mean values to obtain a global feature. The method can resolve the problem of feature misalignment caused by posture changes of pedestrians and the like, and eliminate the effect of unrelated backgrounds on re-identification, thus improving the precision and robustness of person re-identification. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: December 27, 2017
    Date of Patent: February 1, 2022
    Assignee: Peking University
    Inventors: Wenmin Wang, Yihao Zhang, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Wen Gao
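    Illustrative sketch (Python/numpy): a rough rendering of the two feature types. Block and region sizes are assumed, and the affine alignment of the detected pedestrian that precedes pooling is omitted.

      import numpy as np

      rng = np.random.default_rng(0)
      blocks = rng.random((6, 4, 32))        # 6 horizontal regions x 4 blocks per region x 32-d block feature (assumed)

      local_desc = blocks.sum(axis=1)        # summation pooling inside each region -> aligned local descriptor (6 x 32)

      grades = np.array_split(blocks.reshape(-1, 32), 3)               # grade the pedestrian region into 3 parts
      global_feat = np.concatenate([g.mean(axis=0) for g in grades])   # per-grade feature means -> global feature

      print(local_desc.shape, global_feat.shape)   # (6, 32) and (96,)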
  • Patent number: 11227178
    Abstract: A back-propagation significance (saliency) detection method based on depth map mining comprises: for an input image Io, at a preprocessing phase, obtaining a depth image Id and an image Cb of Io with its four background corners removed; at a first processing phase, carrying out positioning detection on the significant region of the image by means of the obtained image Cb and the depth image Id to obtain a preliminary detection result S1 for the significant object in the image; then carrying out depth mining over a plurality of processing phases of the depth image Id to obtain corresponding significance detection results; and then optimizing the significance detection result mined in each processing phase by means of a back-propagation mechanism to obtain a final significance detection result map. The method can improve the detection accuracy for the significant object. (A simplified sketch of the fusion step follows this entry.)
    Type: Grant
    Filed: November 24, 2017
    Date of Patent: January 18, 2022
    Inventors: Ge Li, Chunbiao Zhu, Wenmin Wang, Ronggang Wang, Tiejun Huang, Wen Gao
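    Simplified sketch (Python/numpy): fusing stage-wise detection results with a backward pass. The per-stage maps would come from depth-map mining of Id; the equal fusion weights and map size here are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      stage_maps = [rng.random((64, 64)) for _ in range(4)]     # S1..S4: stage-wise detection results (fake data)

      refined = stage_maps[-1]
      for s in reversed(stage_maps[:-1]):                       # back-propagate: later results refine earlier ones
          refined = 0.5 * s + 0.5 * refined                     # assumed equal-weight fusion
      refined = (refined - refined.min()) / (np.ptp(refined) + 1e-12)   # normalise the final map to [0, 1]
      print(refined.shape, float(refined.min()), float(refined.max()))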
  • Publication number: 20210287034
    Abstract: A back-propagation significance (saliency) detection method based on depth map mining comprises: for an input image Io, at a preprocessing phase, obtaining a depth image Id and an image Cb of Io with its four background corners removed; at a first processing phase, carrying out positioning detection on the significant region of the image by means of the obtained image Cb and the depth image Id to obtain a preliminary detection result S1 for the significant object in the image; then carrying out depth mining over a plurality of processing phases of the depth image Id to obtain corresponding significance detection results; and then optimizing the significance detection result mined in each processing phase by means of a back-propagation mechanism to obtain a final significance detection result map. The method can improve the detection accuracy for the significant object.
    Type: Application
    Filed: November 24, 2017
    Publication date: September 16, 2021
    Inventors: Ge Li, Chunbiao Zhu, Wenmin Wang, Ronggang Wang, Tiejun Huang
  • Patent number: 11106951
    Abstract: A bidirectional image-text retrieval method based on a multi-view joint embedding space performs retrieval with reference to semantic association relationships at both a global level and a local level. The global-level relationship is obtained in a frame-sentence view and the local-level relationship in a region-phrase view: semantic association information between frames and sentences is obtained in a global-level subspace in the frame-sentence view, and semantic association information between regions and phrases is obtained in a local-level subspace in the region-phrase view. In each view the data are processed by a dual-branch neural network to obtain isomorphic features embedded in a common space, with a constraint condition used during training to preserve the original semantic relationships of the data. Finally, the two semantic association relationships are merged using multi-view merging and sorting to obtain a more accurate semantic similarity between data. (An illustrative sketch of the merging step follows this entry.)
    Type: Grant
    Filed: January 29, 2018
    Date of Patent: August 31, 2021
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Wenmin Wang, Lu Ran, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao
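    Illustrative sketch (Python/numpy): the multi-view merging-and-sorting step. The two similarity matrices would come from the trained frame-sentence and region-phrase branches; the fusion weights and matrix sizes are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      sim_global = rng.random((3, 5))        # image-vs-text similarities from the frame-sentence view (fake data)
      sim_local = rng.random((3, 5))         # similarities from the region-phrase view (fake data)

      sim_merged = 0.6 * sim_global + 0.4 * sim_local      # assumed fixed fusion weights
      for img in range(sim_merged.shape[0]):               # image-to-text direction
          print("image", img, "-> text ranking", np.argsort(-sim_merged[img]))
      print("text 0 -> image ranking", np.argsort(-sim_merged[:, 0]))   # text-to-image direction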
  • Patent number: 11100370
    Abstract: Disclosed is a deep discriminative network for person re-identification in an image or a video. By constructing a deep discriminative network, different input images are concatenated on the color channel, and the resulting spliced tensor is defined as the original difference space of the images. The original difference space is fed into a convolutional network, which outputs the similarity between the two input images by learning the difference information in that space, thereby realizing person re-identification. The features of an individual image are not learned; instead, the input images are concatenated on the color channel at the outset, and difference information is learned on the original space of the images by the designed network. By introducing an Inception module and embedding it into the model, the learning ability of the network can be improved and a better discrimination effect achieved. (An illustrative sketch of the input construction follows this entry.)
    Type: Grant
    Filed: January 23, 2018
    Date of Patent: August 24, 2021
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Wenmin Wang, Yihao Zhang, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao
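    Illustrative sketch (Python/numpy): constructing the 6-channel original difference space by concatenating two RGB images on the colour channel. The trained CNN with Inception modules is replaced by a placeholder scoring function, and the image size is assumed.

      import numpy as np

      rng = np.random.default_rng(0)
      img_a = rng.random((128, 64, 3))                 # probe image, H x W x RGB
      img_b = rng.random((128, 64, 3))                 # gallery image

      pair = np.concatenate([img_a, img_b], axis=-1)   # 128 x 64 x 6 "original difference space" input

      def similarity_net(x):
          # placeholder for the trained discriminative CNN; returns a score in (0, 1)
          return 1.0 / (1.0 + np.exp(np.abs(x[..., :3] - x[..., 3:]).mean()))

      print("same-person score:", similarity_net(pair))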
  • Publication number: 20210256365
    Abstract: The present application discloses a cross-media retrieval method based on a deep semantic space, which comprises a feature generation stage and a semantic space learning stage. In the feature generation stage, a CNN visual feature vector and an LSTM language description vector of an image are generated by simulating a person's perception process for the image, and topic information about a text is explored by using an LDA topic model to extract an LDA text topic vector. In the semantic space learning stage, a four-layer Multi-Sensory Fusion Deep Neural Network is trained on the training-set images and a three-layer text semantic network is trained on the training-set texts. Finally, a test image and a test text are mapped into an isomorphic semantic space by the two networks respectively, so as to realize cross-media retrieval. The disclosed method can significantly improve cross-media retrieval performance.
    Type: Application
    Filed: August 16, 2017
    Publication date: August 19, 2021
    Inventors: Wenmin Wang, Mengdi Fan, Peilei Dong, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao
  • Patent number: 11087439
    Abstract: The present disclosure provides a hybrid framework-based image bit-depth expansion method and device. The invention fuses a traditional de-banding algorithm with a deep network-based learning algorithm, and can remove unnatural effects in flat image areas while more realistically restoring the numerical information of the missing bits. The method comprises extracting image flat areas, flat-area bit-depth expansion based on local adaptive pixel value adjustment, and non-flat-area bit-depth expansion based on a convolutional neural network. A learning-based method is used to train an effective deep network that realistically restores the missing bits, while a simple and robust local adaptive pixel value adjustment method is used in flat areas to effectively suppress unnatural effects such as banding, ringing and flat noise, improving the subjective visual quality of the flat areas. (An illustrative sketch of the flat/non-flat split follows this entry.)
    Type: Grant
    Filed: May 18, 2018
    Date of Patent: August 10, 2021
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Yang Zhao, Ronggang Wang, Wen Gao, Zhenyu Wang, Wenmin Wang
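    Illustrative sketch (Python/numpy): the flat/non-flat split. The gradient-based flat-area mask and 3x3 smoothing are crude stand-ins for the local adaptive pixel value adjustment, and the learned CNN for non-flat areas is replaced here by naive scaling, since the trained network is not available.

      import numpy as np

      def expand_bit_depth(img4, flat_thresh=1):
          base = img4.astype(np.float64) * 16.0                    # naive 4-bit -> 8-bit scaling
          grad = np.abs(np.gradient(img4.astype(np.float64)))      # per-axis gradient magnitudes
          flat = (grad[0] + grad[1]) <= flat_thresh                # crude flat-area mask
          # flat area: locally adaptive smoothing to suppress banding (assumed 3x3 mean filter)
          k = np.ones((3, 3)) / 9.0
          pad = np.pad(base, 1, mode="edge")
          smooth = sum(pad[i:i + base.shape[0], j:j + base.shape[1]] * k[i, j]
                       for i in range(3) for j in range(3))
          out = base.copy()
          out[flat] = smooth[flat]                                 # non-flat pixels keep the naive expansion
          return np.clip(out, 0, 255).astype(np.uint8)

      low = (np.random.default_rng(0).random((32, 32)) * 16).astype(np.uint8)   # fake 4-bit image
      print(expand_bit_depth(low).dtype, expand_bit_depth(low).shape)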
  • Patent number: 11030444
    Abstract: Disclosed is a method for detecting pedestrians in an image by using a Gaussian penalty. Initial pedestrian boundary boxes are screened using a Gaussian penalty to improve pedestrian detection performance, especially for sheltered (occluded) pedestrians. The method includes acquiring a training data set, a test data set and pedestrian labels of a pedestrian detection image; training a detection model on the training data set with a pedestrian detection method, and acquiring initial pedestrian boundary boxes together with their confidence degrees and coordinates; applying a Gaussian penalty to the confidence degrees of the boundary boxes to obtain penalized confidence degrees; and obtaining final pedestrian boundary boxes by screening the penalized boxes. Thus, repeated boundary boxes of a single pedestrian are removed while the boundary boxes of sheltered pedestrians are preserved, thereby realizing the detection of the pedestrians in an image. (An illustrative sketch of the penalty step follows this entry.)
    Type: Grant
    Filed: November 24, 2017
    Date of Patent: June 8, 2021
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Wenmin Wang, Peilei Dong, Mengdi Fan, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao
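    Illustrative sketch (Python/numpy): a soft-NMS-style Gaussian confidence decay. Instead of deleting boxes that overlap a higher-scoring box, their confidences are decayed by exp(-iou^2 / sigma), so boxes of sheltered pedestrians can survive the final threshold. The exact penalty formula, sigma and thresholds of the patent are not reproduced; these values are assumptions.

      import numpy as np

      def iou(a, b):
          x1, y1 = np.maximum(a[:2], b[:2])
          x2, y2 = np.minimum(a[2:], b[2:])
          inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
          area_a = (a[2] - a[0]) * (a[3] - a[1])
          area_b = (b[2] - b[0]) * (b[3] - b[1])
          return inter / (area_a + area_b - inter + 1e-12)

      def gaussian_penalty_filter(boxes, scores, sigma=0.5, keep_thresh=0.3):
          order = np.argsort(-scores)
          boxes, scores = boxes[order].astype(float), scores[order].astype(float)
          for i in range(len(boxes)):
              for j in range(i + 1, len(boxes)):
                  scores[j] *= np.exp(-iou(boxes[i], boxes[j]) ** 2 / sigma)   # Gaussian decay of overlapping boxes
          keep = scores >= keep_thresh
          return boxes[keep], scores[keep]

      boxes = np.array([[10, 10, 50, 120], [12, 12, 52, 122], [40, 15, 80, 125]])   # two near-duplicates + one occluded
      scores = np.array([0.95, 0.90, 0.80])
      print(gaussian_penalty_filter(boxes, scores))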
  • Publication number: 20210150194
    Abstract: An image feature extraction method for person re-identification performs person re-identification by means of aligned local descriptor extraction and graded global feature extraction. The aligned local descriptor extraction processes the original image by an affine transformation and performs a summation pooling operation on image block features of the same regions to obtain an aligned local descriptor, preserving the spatial information between the inner blocks of the image. The graded global feature extraction grades the located pedestrian region block and computes the corresponding feature mean values to obtain a global feature. The method can resolve the problem of feature misalignment caused by posture changes of pedestrians and the like, and eliminate the effect of unrelated backgrounds on re-identification, thus improving the precision and robustness of person re-identification.
    Type: Application
    Filed: December 27, 2017
    Publication date: May 20, 2021
    Inventors: Wenmin Wang, Yihao Zhang, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Wen Gao
  • Publication number: 20210150268
    Abstract: Disclosed is a deep discriminative network for person re-identification in an image or a video. By constructing a deep discriminative network, different input images are concatenated on the color channel, and the resulting spliced tensor is defined as the original difference space of the images. The original difference space is fed into a convolutional network, which outputs the similarity between the two input images by learning the difference information in that space, thereby realizing person re-identification. The features of an individual image are not learned; instead, the input images are concatenated on the color channel at the outset, and difference information is learned on the original space of the images by the designed network. By introducing an Inception module and embedding it into the model, the learning ability of the network can be improved and a better discrimination effect achieved.
    Type: Application
    Filed: January 23, 2018
    Publication date: May 20, 2021
    Inventors: Wenmin Wang, Yihao Zhang, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao
  • Publication number: 20210150255
    Abstract: A bidirectional image-text retrieval method based on a multi-view joint embedding space performs retrieval with reference to semantic association relationships at both a global level and a local level. The global-level relationship is obtained in a frame-sentence view and the local-level relationship in a region-phrase view: semantic association information between frames and sentences is obtained in a global-level subspace in the frame-sentence view, and semantic association information between regions and phrases is obtained in a local-level subspace in the region-phrase view. In each view the data are processed by a dual-branch neural network to obtain isomorphic features embedded in a common space, with a constraint condition used during training to preserve the original semantic relationships of the data. Finally, the two semantic association relationships are merged using multi-view merging and sorting to obtain a more accurate semantic similarity between data.
    Type: Application
    Filed: January 29, 2018
    Publication date: May 20, 2021
    Inventors: Wenmin Wang, Lu Ran, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao
  • Patent number: 10923120
    Abstract: A human-machine interaction method and apparatus based on artificial intelligence. In the method, a user-entered interaction sentence is received and it is determined whether to generate an interaction result corresponding to the interaction sentence; interaction information to be presented to the user is then determined based on the determination result, the interaction information including at least one of the following items: the generated interaction result corresponding to the interaction sentence, or a search result for the interaction sentence from a search engine. (An illustrative sketch of the decision flow follows this entry.)
    Type: Grant
    Filed: August 3, 2018
    Date of Patent: February 16, 2021
    Assignee: Beijing Baidu Netcom Science and Technology Co., Ltd.
    Inventors: Yingzhan Lin, Zeying Xie, Yichuan Liang, Wenmin Wang, Yin Zhang, Guang Ling, Chao Zhou
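    Minimal sketch (Python): the decision flow as described in the abstract. All function names and behaviours here are hypothetical stand-ins; the patent does not specify these interfaces.

      from typing import Optional

      def generate_result(sentence: str) -> Optional[str]:
          # stand-in for the trained generation model; returns None when it declines to answer
          return "It is 42." if "answer" in sentence else None

      def search_engine(sentence: str) -> str:
          return f"Top search results for: {sentence!r}"   # stand-in for a search backend

      def interact(sentence: str) -> str:
          generated = generate_result(sentence)            # decide whether to generate an interaction result
          return generated if generated is not None else search_engine(sentence)

      print(interact("What is the answer?"))
      print(interact("Weather in Shenzhen tomorrow"))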
  • Patent number: D935224
    Type: Grant
    Filed: March 27, 2020
    Date of Patent: November 9, 2021
    Inventors: Wenmin Wang, Mingxing Shao
  • Patent number: D935225
    Type: Grant
    Filed: March 27, 2020
    Date of Patent: November 9, 2021
    Inventors: Wenmin Wang, Mingxing Shao
  • Patent number: D935226
    Type: Grant
    Filed: April 22, 2020
    Date of Patent: November 9, 2021
    Inventors: Wenmin Wang, Mingxing Shao