Patents by Inventor Bo Du
Bo Du has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250014140
Abstract: A video processing system includes a graphics subsystem including a graphics processing unit (GPU) and a frame buffer. The GPU is configured to obtain a physical pixel layout corresponding to a display architecture of an electronic display, wherein the physical pixel layout is characterized by a non-uniform subpixel arrangement; receive image data, including a matrix of logical pixel chroma values; subsample the matrix of logical pixel chroma values according to the physical pixel layout to produce subsampled image data having a subpixel rendered format corresponding to the non-uniform subpixel arrangement; store the subsampled image data in the frame buffer; and enable transfer of the subsampled image data to a display processing unit (DPU) of the electronic display for composition of frames having the non-uniform subpixel arrangement.
Type: Application
Filed: December 17, 2021
Publication date: January 9, 2025
Inventors: Nan Zhang, Bo Du, Yongjun Xu
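The abstract above describes subsampling a matrix of logical chroma values to match a non-uniform physical subpixel layout. A minimal sketch of that idea, where groups of logical pixels share one chroma sample (the group width of 2 and the averaging rule are illustrative assumptions, not the patented method):

```python
import numpy as np

def subsample_chroma(chroma, layout_stride=2):
    """Average logical chroma over groups of `layout_stride` columns,
    mimicking a subpixel-rendered format in which several physical
    subpixels share one chroma sample (illustrative only)."""
    h, w, c = chroma.shape
    w_trim = w - (w % layout_stride)  # drop a ragged tail column group
    grouped = chroma[:, :w_trim].reshape(h, w_trim // layout_stride, layout_stride, c)
    return grouped.mean(axis=2)

chroma = np.random.rand(4, 8, 2).astype(np.float32)  # H x W x (Cb, Cr)
sub = subsample_chroma(chroma)
print(sub.shape)  # (4, 4, 2)
```

The subsampled array is what would be written to the frame buffer for later transfer to the DPU.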
-
Publication number: 20240368878
Abstract: The present invention discloses connection systems and methods for cubic structures. The systems include structural members and connecting members. The structural members are configured to interconnect with other structural members by means of the connecting members and are provided with accommodating portions for accommodating the connecting members. The structural members include various structural members having different types of interconnection, and the connecting members include various connecting members adapted to those types of interconnection. The methods connect the cubic structures according to the systems. The present invention can be adapted to the connection of cubic structures in various scenarios, thereby promoting the development of modular construction.
Type: Application
Filed: July 23, 2021
Publication date: November 7, 2024
Inventors: Wai Ming Goman Ho, Siu Ping Clive Yau, Bo Du, Congyuan Wang
-
Patent number: 11984098
Abstract: Embodiments include methods and devices for per-layer motion-adaptive over-drive strength control for a display panel. Various embodiments may include determining motion information associated with a frame layer, determining an over-drive strength factor for the frame layer based at least in part on that motion information, and determining whether the over-drive strength factor is associated with computing a content difference. In response to determining that it is, various embodiments may perform fragment shading on the framebuffer object for the frame layer to generate an over-drive compensated framebuffer object based at least in part on the over-drive strength factor, and output the compensated framebuffer object to a default framebuffer for rendering on the display panel.
Type: Grant
Filed: April 20, 2021
Date of Patent: May 14, 2024
Assignee: QUALCOMM Incorporated
Inventors: Nan Zhang, Bo Du, Ya Kong, Yongjun Xu
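The core of the abstract above is mapping per-layer motion to an over-drive strength and then overshooting the drive value in proportion to the frame-to-frame difference. A toy sketch of those two steps (the linear ramp, the 8-pixel saturation point, and the 8-bit clamp are illustrative assumptions, not values from the patent):

```python
def overdrive_strength(motion_px_per_frame, max_strength=1.0):
    """Map a layer's motion magnitude to an over-drive strength factor
    in [0, max_strength]. The ramp and saturation point are assumptions."""
    return min(max_strength, max(0.0, motion_px_per_frame / 8.0))

def overdrive(prev, curr, strength):
    """Push the drive value past the target in proportion to the
    frame-to-frame content difference, clamped to the 8-bit range."""
    compensated = curr + strength * (curr - prev)
    return max(0, min(255, round(compensated)))

# A layer moving 4 px/frame gets strength 0.5; a 100 -> 140 transition
# is driven to 160 to speed up the panel's pixel response.
print(overdrive(100, 140, overdrive_strength(4)))  # 160
```

Static layers (strength 0) pass through unchanged, which is the point of making the strength motion-adaptive.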
-
Publication number: 20240127770
Abstract: Embodiments include methods and devices for per-layer motion-adaptive over-drive strength control for a display panel. Various embodiments may include determining motion information associated with a frame layer, determining an over-drive strength factor for the frame layer based at least in part on that motion information, and determining whether the over-drive strength factor is associated with computing a content difference. In response to determining that it is, various embodiments may perform fragment shading on the framebuffer object for the frame layer to generate an over-drive compensated framebuffer object based at least in part on the over-drive strength factor, and output the compensated framebuffer object to a default framebuffer for rendering on the display panel.
Type: Application
Filed: April 20, 2021
Publication date: April 18, 2024
Inventors: Nan ZHANG, Bo DU, Ya KONG, Yongjun XU
-
Patent number: 11941865
Abstract: Disclosed in the present invention is a hyperspectral image classification method based on context-rich networks. The method comprises a training stage and a prediction stage, wherein the training stage comprises image pre-processing, sample selection, and network training. First, normalization is performed on the hyperspectral image; then an appropriate proportion of labeled samples is randomly selected from each category to generate a label map, and training is performed using the designed network. In the prediction stage, the whole image is directly input into the trained network to obtain the final classification result. The whole flow comprehensively considers data pre-processing, feature extraction, context-rich information capturing, and classification, and classification of a hyperspectral image is realized by constructing an end-to-end network.
Type: Grant
Filed: June 20, 2023
Date of Patent: March 26, 2024
Assignee: WUHAN UNIVERSITY
Inventors: Bo Du, Di Wang, Liangpei Zhang
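The sample-selection step in the abstract above, randomly picking a proportion of labeled pixels per category to build a sparse label map, can be sketched as follows (the 10% ratio and the function name are illustrative assumptions, not the proportion used in the patent):

```python
import numpy as np

def make_label_map(labels, fraction=0.1, seed=0):
    """Randomly mark `fraction` of the pixels of each class as training
    samples; unselected pixels are set to 0 (unlabeled). Illustrative
    sketch of per-class sample selection, not the patented method."""
    rng = np.random.default_rng(seed)
    label_map = np.zeros_like(labels)
    for cls in np.unique(labels[labels > 0]):  # class 0 = background
        idx = np.flatnonzero(labels == cls)
        n = max(1, int(len(idx) * fraction))   # keep at least one sample
        chosen = rng.choice(idx, size=n, replace=False)
        label_map.flat[chosen] = cls
    return label_map

labels = np.array([[1, 1, 2, 2],
                   [1, 2, 2, 0]])
print(make_label_map(labels))
```

The resulting sparse map is what supervises the network, while the full image is fed through at prediction time.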
-
Publication number: 20240013336
Abstract: The present disclosure relates to graphics processing. An apparatus of the present disclosure may determine a first visibility stream corresponding to a target and a set of second visibility streams corresponding to a set of bins into which the target is divided. The apparatus may select one of a first rendering mode or a second rendering mode for the target based on the first visibility stream and the set of second visibility streams. When the first rendering mode is selected, the apparatus may configure each of the set of bins into a first subset associated with a first type of rendering pass or a second subset associated with a second type of rendering pass. The apparatus may then render the target based on the selected rendering mode and, if applicable, based on the first or second rendering pass type.
Type: Application
Filed: November 19, 2020
Publication date: January 11, 2024
Inventors: Bo DU, Andrew Evan GRUBER, Yongjun XU
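The mode-selection step described above can be illustrated with a toy heuristic over per-bin visibility streams (the decision rule, the 0.5 threshold, and the mode names are invented for illustration; the patent does not disclose this specific rule):

```python
def choose_mode(first_stream, bin_streams, threshold=0.5):
    """Pick a rendering mode from visibility data: if most bins are
    mostly occluded, binned rendering can skip work per bin; otherwise
    render the target directly. Heuristic and names are assumptions."""
    visible_frac = [sum(s) / len(s) for s in bin_streams]
    mostly_hidden = sum(f < threshold for f in visible_frac)
    return "binned" if mostly_hidden > len(bin_streams) / 2 else "direct"

# 1 = primitive visible in that slot, 0 = culled
bins = [[1, 0, 0, 0], [0, 0, 1, 0], [1, 1, 1, 1]]
print(choose_mode([1, 1, 0, 1], bins))  # binned
```

In the actual apparatus the streams come from a GPU visibility pass; here they are hand-written lists.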
-
Patent number: 11837007
Abstract: This invention proposes a pedestrian re-identification method based on virtual samples, comprising the following steps: s1) obtaining virtual persons generated by a game engine, and generating virtual samples with person labels by fusing the background of a target dataset and the poses of real persons through a multi-factor variational generation network; s2) rendering the generated virtual samples according to lighting conditions; s3) sampling the rendered virtual samples according to person attributes of the target dataset; s4) constructing a training dataset from the sampled virtual samples to train a pedestrian re-identification model, and verifying the identification effect of the trained model.
Type: Grant
Filed: June 20, 2023
Date of Patent: December 5, 2023
Assignee: WUHAN UNIVERSITY
Inventors: Bo Du, Xiaoyang Guo, Yutian Lin, Chao Zhang, Zheng Wang
-
Patent number: 11804036
Abstract: A person re-identification method based on perspective-guided multi-adversarial attention is provided. The method's deep convolutional neural network includes a feature learning module, a multi-adversarial module, and a perspective-guided attention mechanism module. In the multi-adversarial module, each stage of the basic network of the feature learning module is followed by a global pooling layer and a perspective discriminator. The perspective-guided attention mechanism module consists of an attention map generator and the perspective discriminator. Training the deep convolutional neural network comprises learning of the feature learning module, the multi-adversarial module, and the perspective-guided attention mechanism module. The proposed method uses the trained network to extract features of the testing images and a Euclidean distance to perform feature matching between images in a query set and images in a gallery set.
Type: Grant
Filed: May 3, 2023
Date of Patent: October 31, 2023
Assignee: WUHAN UNIVERSITY
Inventors: Bo Du, Fangyi Liu, Mang Ye
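The final matching step in the abstract above, ranking gallery images against each query by Euclidean distance between extracted feature vectors, is a standard re-identification operation and can be sketched directly (the 2-D feature vectors are illustrative; real re-id features come from the trained network):

```python
import numpy as np

def match(query_feats, gallery_feats):
    """Rank gallery images for each query by Euclidean distance.
    Uses the identity ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b to
    compute all pairwise distances at once."""
    d2 = (np.sum(query_feats ** 2, axis=1, keepdims=True)
          + np.sum(gallery_feats ** 2, axis=1)
          - 2.0 * query_feats @ gallery_feats.T)
    return np.argsort(np.maximum(d2, 0.0), axis=1)  # best match first

q = np.array([[1.0, 0.0]])                       # one query feature
g = np.array([[0.9, 0.1], [0.0, 1.0], [0.5, 0.5]])  # three gallery features
print(match(q, g)[0])  # [0 2 1]
```

Ranking on squared distance is equivalent to ranking on distance, so the square root is skipped.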
-
Publication number: 20230344412
Abstract: The present disclosure relates to filters and preparation methods of the filters. One example filter includes a substrate, a series resonator, a parallel resonator, and a series branch. The series resonator includes a first Bragg reflection layer and a first piezoelectric transduction structure that are sequentially stacked on the substrate. The parallel resonator includes a second Bragg reflection layer and a second piezoelectric transduction structure that are sequentially stacked on the substrate, and a structure of the first Bragg reflection layer is different from a structure of the second Bragg reflection layer. The series branch includes the series resonator, and the series branch is coupled between an input end of the filter and an output end of the filter.
Type: Application
Filed: June 28, 2023
Publication date: October 26, 2023
Inventors: Hangtian HOU, Bo DU, Peng LIU, Zongzhi GAO
-
Publication number: 20230343016
Abstract: The present disclosure relates to graphics processing. An apparatus of the present disclosure may determine a first visibility stream corresponding to a target and a set of second visibility streams corresponding to a set of bins into which the target is divided. The apparatus may select one of a first rendering mode or a second rendering mode for the target based on the first visibility stream and the set of second visibility streams. When the first rendering mode is selected, the apparatus may configure each of the set of bins into a first subset associated with a first type of rendering pass or a second subset associated with a second type of rendering pass. The apparatus may then render the target based on the selected rendering mode and, if applicable, based on the first or second rendering pass type.
Type: Application
Filed: November 18, 2020
Publication date: October 26, 2023
Inventors: Srihari Babu ALLA, Jonnala Gadda NAGENDRA KUMAR, Avinash SEETHARAMAIAH, Andrew Evan GRUBER, Thomas Edwin FRISINGER, Richard HAMMERSTONE, Bo DU, Yongjun XU
-
Publication number: 20230334829
Abstract: Disclosed in the present invention is a hyperspectral image classification method based on context-rich networks. The method comprises a training stage and a prediction stage, wherein the training stage comprises image pre-processing, sample selection, and network training. First, normalization is performed on the hyperspectral image; then an appropriate proportion of labeled samples is randomly selected from each category to generate a label map, and training is performed using the designed network. In the prediction stage, the whole image is directly input into the trained network to obtain the final classification result. The whole flow comprehensively considers data pre-processing, feature extraction, context-rich information capturing, and classification, and classification of a hyperspectral image is realized by constructing an end-to-end network.
Type: Application
Filed: June 20, 2023
Publication date: October 19, 2023
Applicant: WUHAN UNIVERSITY
Inventors: Bo DU, Di Wang, Liangpei Zhang
-
Publication number: 20230334895
Abstract: This invention proposes a pedestrian re-identification method based on virtual samples, comprising the following steps: s1) obtaining virtual persons generated by a game engine, and generating virtual samples with person labels by fusing the background of a target dataset and the poses of real persons through a multi-factor variational generation network; s2) rendering the generated virtual samples according to lighting conditions; s3) sampling the rendered virtual samples according to person attributes of the target dataset; s4) constructing a training dataset from the sampled virtual samples to train a pedestrian re-identification model, and verifying the identification effect of the trained model.
Type: Application
Filed: June 20, 2023
Publication date: October 19, 2023
Applicant: WUHAN UNIVERSITY
Inventors: Bo DU, Xiaoyang GUO, Yutian LIN, Chao ZHANG, Zheng WANG
-
Patent number: 11790534
Abstract: The invention discloses an attention-based joint image and feature adaptive semantic segmentation method. First, the image adaptation procedure transforms the source domain image Xs into a target-domain-like image Xs-t with an appearance similar to the target domain image Xt, to reduce the domain gap between the source domain and the target domain at the image appearance level. Then the feature adaptation procedure aligns the features between Xs-t and Xt in the semantic prediction space and the image generation space, respectively, to extract domain-invariant features and reduce the domain difference between Xs-t and Xt. In addition, the present invention introduces an attention module into the feature adaptation procedure to help it pay more attention to image regions worthy of attention. Finally, the image adaptation procedure and the feature adaptation procedure are combined in an end-to-end manner.
Type: Grant
Filed: May 16, 2023
Date of Patent: October 17, 2023
Assignee: WUHAN UNIVERSITY
Inventors: Bo Du, Juhua Liu, Qihuang Zhong, Lifang'an Xiao
-
Patent number: 11783569
Abstract: Disclosed is a method for classifying hyperspectral images on the basis of an adaptive multi-scale feature extraction model. The method comprises: establishing a framework with two parts, a scale reference network and a feature extraction network; introducing a condition gate mechanism into the scale reference network; performing determination step by step by means of three groups of modules; inputting features into the corresponding scale extraction network; deeply mining the rich information contained in a hyperspectral remote sensing image; effectively combining features of different scales; improving the classification effect; and generating a fine classification result map.
Type: Grant
Filed: April 18, 2023
Date of Patent: October 10, 2023
Assignee: WUHAN UNIVERSITY
Inventors: Bo Du, Jiaqi Yang, Liangpei Zhang, Chen Wu
-
Patent number: 11783579
Abstract: A hyperspectral remote sensing image classification method based on a self-attention context network is provided. The method constructs spatial dependencies between pixels in a hyperspectral remote sensing image through self-attention learning and context encoding, and learns global context features. Against adversarial attacks on hyperspectral remote sensing data, the proposed method offers higher security and reliability, better meeting the requirements of safe, reliable, and high-precision object recognition in Earth observation.
Type: Grant
Filed: March 30, 2023
Date of Patent: October 10, 2023
Assignee: WUHAN UNIVERSITY
Inventors: Bo Du, Yonghao Xu, Liangpei Zhang
-
Publication number: 20230281828
Abstract: The invention discloses an attention-based joint image and feature adaptive semantic segmentation method. First, the image adaptation procedure transforms the source domain image Xs into a target-domain-like image Xs-t with an appearance similar to the target domain image Xt, to reduce the domain gap between the source domain and the target domain at the image appearance level. Then the feature adaptation procedure aligns the features between Xs-t and Xt in the semantic prediction space and the image generation space, respectively, to extract domain-invariant features and reduce the domain difference between Xs-t and Xt. In addition, the present invention introduces an attention module into the feature adaptation procedure to help it pay more attention to image regions worthy of attention. Finally, the image adaptation procedure and the feature adaptation procedure are combined in an end-to-end manner.
Type: Application
Filed: May 16, 2023
Publication date: September 7, 2023
Applicant: WUHAN UNIVERSITY
Inventors: Bo DU, Juhua LIU, Qihuang ZHONG, Lifang'an XIAO
-
Publication number: 20230267725
Abstract: A person re-identification method based on perspective-guided multi-adversarial attention is provided. The method's deep convolutional neural network includes a feature learning module, a multi-adversarial module, and a perspective-guided attention mechanism module. In the multi-adversarial module, each stage of the basic network of the feature learning module is followed by a global pooling layer and a perspective discriminator. The perspective-guided attention mechanism module consists of an attention map generator and the perspective discriminator. Training the deep convolutional neural network comprises learning of the feature learning module, the multi-adversarial module, and the perspective-guided attention mechanism module. The proposed method uses the trained network to extract features of the testing images and a Euclidean distance to perform feature matching between images in a query set and images in a gallery set.
Type: Application
Filed: May 3, 2023
Publication date: August 24, 2023
Applicant: WUHAN UNIVERSITY
Inventors: Bo DU, Fangyi LIU, Mang YE
-
Publication number: 20230260279
Abstract: A hyperspectral remote sensing image classification method based on a self-attention context network is provided. The method constructs spatial dependencies between pixels in a hyperspectral remote sensing image through self-attention learning and context encoding, and learns global context features. Against adversarial attacks on hyperspectral remote sensing data, the proposed method offers higher security and reliability, better meeting the requirements of safe, reliable, and high-precision object recognition in Earth observation.
Type: Application
Filed: March 30, 2023
Publication date: August 17, 2023
Applicant: WUHAN UNIVERSITY
Inventors: Bo DU, Yonghao XU, Liangpei ZHANG
-
Publication number: 20230252761
Abstract: Disclosed is a method for classifying hyperspectral images on the basis of an adaptive multi-scale feature extraction model. The method comprises: establishing a framework with two parts, a scale reference network and a feature extraction network; introducing a condition gate mechanism into the scale reference network; performing determination step by step by means of three groups of modules; inputting features into the corresponding scale extraction network; deeply mining the rich information contained in a hyperspectral remote sensing image; effectively combining features of different scales; improving the classification effect; and generating a fine classification result map.
Type: Application
Filed: April 18, 2023
Publication date: August 10, 2023
Applicant: WUHAN UNIVERSITY
Inventors: Bo DU, Jiaqi YANG, Liangpei ZHANG, Chen WU
-
Patent number: D1054391
Type: Grant
Filed: March 14, 2023
Date of Patent: December 17, 2024
Assignee: SHENZHEN FANTTIK TECHNOLOGY
Inventor: Bo Du