Patents by Inventor Hujun Bao
Hujun Bao has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11615247
Abstract: Disclosed are a labeling method and apparatus for named entity recognition of a legal instrument. The method includes the following steps: S1: acquiring a legal text and transforming it into an index table; S2: outputting a sentence feature encoding result; S3: performing training and prediction; S4: obtaining a set; S5: obtaining a multi-head score transfer matrix; S6: obtaining a score transfer matrix corresponding to the legal text; S7: determining a recognized nested entity; and S8: constructing an entity labeling template using the recognized nested entity. According to the present disclosure, recognition of nested entity labels is completed by changing the input of the BERT model, and the multi-head selection matrix labeling approach of the present disclosure largely relieves the difficulty of recognizing long texts and nested entities in an NER task.
Type: Grant
Filed: June 2, 2022
Date of Patent: March 28, 2023
Assignee: ZHEJIANG LAB
Inventors: Hongsheng Wang, Hujun Bao, Guang Chen, Chao Ma, Qing Liao
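The multi-head span-scoring idea above can be illustrated with a minimal sketch (not the patented method itself): treat entry (i, j) of a score matrix as the model's score that tokens i through j form an entity, and keep every span above a threshold, which naturally allows nested entities. The matrix values and threshold below are invented for illustration.

```python
def decode_nested_entities(scores, threshold=0.5):
    """Return all spans (start, end, score) whose score exceeds the
    threshold. Because spans are kept independently, a span nested
    inside another (e.g. (1, 2) inside (0, 2)) can also be emitted."""
    entities = []
    n = len(scores)
    for i in range(n):
        for j in range(i, n):  # only the upper triangle: start <= end
            if scores[i][j] > threshold:
                entities.append((i, j, scores[i][j]))
    return sorted(entities)

# Toy 3-token score matrix: the whole span (0, 2) and the inner
# span (1, 2) both score above the default threshold.
toy = [
    [0.1, 0.2, 0.9],
    [0.0, 0.3, 0.8],
    [0.0, 0.0, 0.2],
]
```

Keeping overlapping spans independently, rather than forcing a single segmentation, is what lets this style of decoder emit nested entities.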
-
Publication number: 20220392201
Abstract: In an image feature matching method, at least two images to be matched are acquired; a feature representation of each image is obtained by performing feature extraction on it, wherein the feature representation comprises a plurality of first local features; the first local features are transformed into first transformation features having a global receptive field over the images to be matched; and a first matching result of the at least two images is obtained by matching the first transformation features across the at least two images.
Type: Application
Filed: August 19, 2022
Publication date: December 8, 2022
Applicant: Zhejiang SenseTime Technology Development Co., Ltd.
Inventors: Xiaowei ZHOU, Hujun BAO, Jiaming SUN, Zehong SHEN, Yuang WANG
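A common final step in this kind of pipeline (assumed here; the abstract does not specify the matching rule) is mutual-nearest-neighbor matching on a feature similarity matrix. The similarity values below are made up.

```python
def mutual_nearest_matches(sim):
    """Given sim[i][j] = similarity between feature i of image A and
    feature j of image B, keep pairs that are each other's best match."""
    matches = []
    for i, row in enumerate(sim):
        j = max(range(len(row)), key=row.__getitem__)       # best j for i
        col = [sim[k][j] for k in range(len(sim))]
        if max(range(len(col)), key=col.__getitem__) == i:  # best i for j
            matches.append((i, j))
    return matches

# Toy 3x3 similarity matrix: features 0 and 1 match mutually,
# feature 2 of image A has no mutual partner and is dropped.
sim = [
    [0.9, 0.1, 0.2],
    [0.2, 0.8, 0.3],
    [0.4, 0.5, 0.1],
]
```

The mutual check is what suppresses one-sided matches: a pair survives only if each feature is the other's single best candidate.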
-
Publication number: 20220270294
Abstract: Methods, apparatus, systems and computer-readable storage media for calibration of image acquisition devices are provided. In one aspect, a calibration method includes: obtaining one or more images collected by an image acquisition device, the one or more images including information of a plurality of calibration plates with different position-orientation information and without being shaded by each other; detecting corner points corresponding to the plurality of calibration plates in each of the one or more images; and calibrating the image acquisition device based on the detected corner points.
Type: Application
Filed: May 10, 2022
Publication date: August 25, 2022
Inventors: Hujun BAO, Guofeng ZHANG, Yuwei WANG, Yuqian LIU
-
Publication number: 20220270293
Abstract: Methods, devices, systems, and computer-readable storage media for calibration of sensors are provided. In one aspect, a calibration method for a sensor includes: obtaining an image acquired by a camera of a sensor and obtaining radar point cloud data acquired by a radar of the sensor, a plurality of calibration plates being located within a common Field Of View (FOV) range of the camera and the radar and having different position-orientation information; for each of the plurality of calibration plates, detecting first coordinate points of the calibration plate in the image and second coordinate points of the calibration plate in the radar point cloud data; and calibrating an external parameter between the camera and the radar according to the first coordinate points and the second coordinate points of each of the plurality of calibration plates.
Type: Application
Filed: May 10, 2022
Publication date: August 25, 2022
Inventors: Hujun BAO, Guofeng ZHANG, Yuwei WANG, Yuqian LIU
-
Publication number: 20220261946
Abstract: The present invention discloses a cloud-client rendering computing method based on an adaptive virtualized rendering pipeline, comprising the following steps: defining a rendering pipeline, including defining a rendering resource, a rendering algorithm, and a read-write relationship between the rendering algorithm and the rendering resource; selecting, in real time, an optimal cloud-client computing distribution solution from a solution set in which each rendering resource is allocated to the cloud or the client for computing, based on the self-defined optimization objectives and optimization budget of a framework user; and executing the corresponding rendering algorithm on the cloud and/or the client according to the chosen distribution solution, thereby obtaining a rendering result.
Type: Application
Filed: January 6, 2021
Publication date: August 18, 2022
Inventors: HUJUN BAO, RUI WANG, WEIJU LAN
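The cloud-client distribution step can be sketched as a small brute-force search: with hypothetical per-resource costs, try every cloud/client assignment and keep the cheapest one for the client that fits a cloud-side budget. The resource names, costs, and budget below are invented.

```python
from itertools import product

def best_distribution(costs, budget):
    """costs[name] = (cloud_cost, client_cost). Cloud computation is
    free for the client but consumes the (e.g. bandwidth) budget.
    Brute-force every cloud/client assignment and return the lowest
    client cost, with its assignment, among those fitting the budget."""
    names = sorted(costs)
    best = None
    for choice in product(("cloud", "client"), repeat=len(names)):
        cloud = sum(costs[n][0] for n, c in zip(names, choice) if c == "cloud")
        client = sum(costs[n][1] for n, c in zip(names, choice) if c == "client")
        if cloud <= budget and (best is None or client < best[0]):
            best = (client, dict(zip(names, choice)))
    return best

# Hypothetical (cloud_cost, client_cost) pairs for three resources.
costs = {"shadow_map": (4, 10), "gi_probe": (6, 8), "post_fx": (1, 2)}
```

A real system would replace the exhaustive enumeration with a solver, since the assignment space grows as 2^n in the number of resources.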
-
Publication number: 20220261969
Abstract: The present invention discloses a color contrast enhanced rendering method, device and system suitable for an optical see-through head-mounted display. The method includes: (1) acquiring a background environment in real time to obtain a background video and performing Gaussian blur and visual field correction on the video; (2) converting an original rendering color and a processed video color from an RGB color space to a CIELAB color space scaled to a unit sphere range; (3) finding an optimal rendering color based on the original rendering color and the processed video color in the scaled CIELAB space according to a set color difference constraint, a chromaticity saturation constraint, a brightness constraint and a just noticeable difference constraint; and (4) after converting the optimal rendering color back to the RGB space, performing real-time rendering by using the optimal rendering color of the RGB space.
Type: Application
Filed: January 6, 2021
Publication date: August 18, 2022
Inventors: RUI WANG, HUJUN BAO, YUNJIN ZHANG
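Step (2), the RGB-to-CIELAB conversion, follows standard colorimetry; a minimal sketch under a D65 white point, without the unit-sphere scaling the abstract describes:

```python
def srgb_to_lab(r, g, b):
    """Convert an sRGB colour (components in [0, 1]) to CIELAB under a
    D65 white point: sRGB -> linear RGB -> XYZ -> L*a*b*."""
    def linearize(c):  # undo the sRGB transfer curve
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    r, g, b = linearize(r), linearize(g), linearize(b)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b

    def f(t):  # cube root with a linear toe below (6/29)^3
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
```

CIELAB is chosen in methods like this because Euclidean distances in it approximate perceived color differences, which makes the abstract's color difference and just noticeable difference constraints easy to express.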
-
Publication number: 20220254089
Abstract: The present invention discloses a shader auto-simplifying method and system based on a rendering instruction flow, the method including: obtaining a rendering instruction flow, extracting a target shader from the rendering instruction flow, and creating a simplifying shader differing from the target shader in code only; intercepting a current frame of a rendering instruction comprising a rendering initiating instruction of the target shader as a particular frame; obtaining time consumed by the simplifying shader by measuring time needed for rendering the particular frame with the simplifying shader; obtaining error(s) of the simplifying shader by measuring a pixel difference value between a rendering frame drawn by the simplifying shader and the particular frame when a rendering instruction corresponding to the particular frame is executed; and screening an optimal simplifying shader according to the time consumed by the simplifying shader and the error of the simplifying shader.
Type: Application
Filed: October 21, 2020
Publication date: August 11, 2022
Inventors: HUJUN BAO, RUI WANG, DEJIN HE, SHI LI
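The final screening step can be sketched as picking the fastest simplified shader whose measured pixel error stays within a tolerance. Candidate names, timings, and errors below are hypothetical.

```python
def screen_shaders(candidates, max_error):
    """candidates: {name: (render_time_ms, pixel_error)}. Return the
    fastest candidate whose measured pixel error stays within budget;
    None if every simplification degrades the image too much."""
    admissible = {n: te for n, te in candidates.items() if te[1] <= max_error}
    if not admissible:
        return None
    return min(admissible, key=lambda n: admissible[n][0])

# Hypothetical measurements: (time in ms, pixel error) per variant.
candidates = {
    "full":      (8.0, 0.000),
    "no_spec":   (5.5, 0.010),
    "half_taps": (4.0, 0.025),
    "flat":      (2.0, 0.090),
}
```

Measuring time and error on the intercepted frame, rather than predicting them statically, is what the abstract's instruction-flow interception enables.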
-
Publication number: 20220218218
Abstract: Provided are a video-based method and system for accurately estimating heart rate and facial blood volume distribution. The method mainly comprises the following steps: first, carrying out face detection on video frames containing a human face, and extracting a face image sequence and a sequence of face key position points in the time dimension; second, compressing these sequences of face images and face key position points to obtain facial signals in the time dimension; third, estimating the facial blood volume distribution from the facial signals obtained in the second step; finally, estimating heart rate values using a model based on deep learning and a spectrum analysis method respectively, then fusing the estimation results with a Kalman filter to improve the accuracy of heart rate estimation.
Type: Application
Filed: March 17, 2022
Publication date: July 14, 2022
Inventors: Hujun BAO, Xiaogang XU, Xiaolong WANG
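The fusion of the two heart-rate estimates can be illustrated, in its simplest static form, as inverse-variance weighting, which is exactly the measurement-update step of a scalar Kalman filter. The variances below are assumed, not taken from the patent.

```python
def fuse_estimates(hr1, var1, hr2, var2):
    """Inverse-variance fusion of two independent heart-rate estimates;
    this is the scalar Kalman measurement update with hr1 as the prior.
    Returns the fused value and its (smaller) variance."""
    k = var1 / (var1 + var2)      # Kalman gain
    fused = hr1 + k * (hr2 - hr1)
    fused_var = (1 - k) * var1
    return fused, fused_var
```

With equal variances the fusion reduces to the average; when one estimator is noisier, the gain automatically shifts weight toward the more confident one, which is why fusing the deep-learning and spectral estimates can beat either alone.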
-
Publication number: 20220148302
Abstract: A visual localization method and related apparatus are disclosed. In the method, a first candidate image sequence is determined from an image library, the image library being configured to construct an electronic map, the image frames in the first candidate image sequence being sequentially arranged according to their degrees of matching with a first image, and the first image being an image collected by a camera; the order of the image frames in the first candidate image sequence is adjusted according to a target window to obtain a second candidate image sequence, the target window being multiple successive image frames, determined from the image library, that include a target image frame, the target image frame being the image in the image library matching a second image collected by the camera before the first image; and the target posture of the camera when the first image is collected is determined according to the second candidate image sequence.
Type: Application
Filed: January 26, 2022
Publication date: May 12, 2022
Inventors: Hujun BAO, Guofeng ZHANG, Hailin YU, Zhichao YE, Chongshan SHENG
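The window-based reordering can be sketched as promoting candidate frames that fall inside a temporal window around the target frame while preserving relative order within each group. This is a simplification of the described method, with invented frame ids.

```python
def reorder_candidates(candidates, target, window):
    """candidates: frame ids sorted by descending match score with the
    current image. Frames within +/- window of the target frame (the
    best match of the *previous* image) are promoted to the front,
    keeping the relative order inside each group."""
    near = [f for f in candidates if abs(f - target) <= window]
    far = [f for f in candidates if abs(f - target) > window]
    return near + far
```

The rationale is temporal coherence: if the previous image localized near frame `target`, the current image is likely to localize nearby, so those candidates should be tried first.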
-
Patent number: 11210805
Abstract: A simultaneous localization and dense three-dimensional reconstruction method, capable of processing rapid motion and frequent loop closure in a robust manner, and of operating at any time in a large-scale scene. The method provides a key frame-based simultaneous localization and map construction framework. First, depth and color information are used simultaneously, and the framework can operate on a central processing unit (CPU) at high speed based on key frame localization, and operate robustly in challenging scenes. To reduce accumulated errors, the method introduces incremental bundle adjustment, which can greatly reduce the amount of computation and enables local and global bundle adjustment to be completed in a unified framework. Second, the method provides a key frame-based fusion method, whereby a model can be generated online and the three-dimensional model can be updated in real time as the key frames' poses are adjusted.
Type: Grant
Filed: January 13, 2017
Date of Patent: December 28, 2021
Assignee: ZHEJIANG UNIVERSITY
Inventors: Hujun Bao, Guofeng Zhang, Haomin Liu, Chen Li
-
Patent number: 11199414
Abstract: A method for simultaneous localization and mapping is provided, which can reliably handle strong rotation and fast motion. The method provides a simultaneous localization and mapping algorithm framework based on key frames, which can support rapid local map extension. Under this framework, a new feature tracking method based on multiple homography matrices is provided; this method is efficient and robust under strong rotation and fast motion. A camera orientation optimization framework based on a sliding window is further provided to increase motion constraints between successive frames using simulated or actual IMU data. Finally, a method for obtaining the real scale of a specific plane and scene is provided, such that a virtual object can be placed on a specific plane at real size.
Type: Grant
Filed: September 14, 2016
Date of Patent: December 14, 2021
Assignee: ZHEJIANG UNIVERSITY
Inventors: Guofeng Zhang, Hujun Bao, Haomin Liu
-
Patent number: 11113878
Abstract: The invention discloses a screen tile-pair based binocular rendering pipeline process and method, comprising: completing space division according to a spatial relationship of two visual angles in stereo rendering, and generating input primitive lists corresponding to the divided space; searching non-full primitive lists of space divisions and obtaining a surface with spatial partition; and dispatching all generated spatial divisions, and simultaneously performing rasterizing and rendering of the two visual angles for primitives in each space division. With this approach, the spatial correlation of the two visual angles in stereo rendering is exploited, and in the rendering process the two visual angles are rasterized and rendered at the same time, thereby reducing the bandwidth required for repeated reading of triangle data.
Type: Grant
Filed: January 2, 2018
Date of Patent: September 7, 2021
Assignee: ZHEJIANG UNIVERSITY
Inventors: Rui Wang, Hujun Bao, Yazhen Yuan
-
Patent number: 11004221
Abstract: A depth recovery method includes: performing feature extraction on a monocular image to obtain a feature image of the monocular image; decoupling the feature image to obtain a scene structure graph of the feature image; performing gradient sensing on the feature image and the scene structure graph to obtain a region-enhanced feature image; and performing depth estimation according to the region-enhanced feature image to obtain a depth image of the monocular image.
Type: Grant
Filed: December 21, 2019
Date of Patent: May 11, 2021
Assignee: ZHEJIANG SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
Inventors: Hujun Bao, Guofeng Zhang, Qinhong Jiang, Jianping Shi
-
Publication number: 20210012523
Abstract: The present disclosure relates to a pose estimation method and device, an electronic apparatus, and a storage medium, the method comprising: performing keypoint detection processing on a target object in an image to be processed to obtain a plurality of keypoints of the target object in the image to be processed and a first covariance matrix corresponding to each keypoint; screening out a target keypoint from the plurality of keypoints in accordance with the first covariance matrix corresponding to each keypoint; and performing pose estimation processing in accordance with the target keypoint to obtain a rotation matrix and a displacement vector.
Type: Application
Filed: September 25, 2020
Publication date: January 14, 2021
Applicant: Zhejiang SenseTime Technology Development Co., Ltd.
Inventors: Xiaowei Zhou, Hujun Bao, Yuan Liu, Sida Peng
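The screening step can be sketched as keeping only keypoints whose covariance trace (a simple scalar summary of detection uncertainty) is small enough before running pose estimation; the threshold and covariance values below are hypothetical.

```python
def screen_keypoints(keypoints, max_trace):
    """keypoints: list of (x, y, cov) with cov a 2x2 covariance matrix.
    The trace of cov summarises the detector's uncertainty in pixels;
    keep only keypoints that are confident enough for pose estimation."""
    return [
        (x, y) for x, y, cov in keypoints
        if cov[0][0] + cov[1][1] <= max_trace
    ]

# Two hypothetical detections: a confident one and a noisy one.
kps = [
    (10.0, 20.0, [[0.2, 0.0], [0.0, 0.3]]),   # trace 0.5 -> keep
    (30.0, 40.0, [[2.0, 0.1], [0.1, 3.0]]),   # trace 5.0 -> drop
]
```

Discarding uncertain keypoints before solving for the rotation matrix and displacement vector prevents a few noisy detections from dominating the least-squares pose fit.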
-
Publication number: 20210012143
Abstract: The present disclosure relates to a key point detection method and apparatus, an electronic device and a storage medium. The method comprises: determining an area in which a plurality of pixels of an image to be processed are located and first direction vectors of the plurality of pixels pointing to a key point of the area; and determining the position of the key point in the area based on the area in which the pixels are located and the first direction vectors of the plurality of pixels in the area.
Type: Application
Filed: September 30, 2020
Publication date: January 14, 2021
Applicant: Zhejiang SenseTime Technology Development Co., Ltd.
Inventors: Hujun Bao, Xiaowei Zhou, Sida Peng, Yuan Liu
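Recovering the keypoint position from per-pixel direction vectors can be sketched as a least-squares intersection of the voting rays, solving the 2x2 normal equations directly. This is one plausible reading of the abstract, not necessarily the patented procedure.

```python
def vote_keypoint(pixels, dirs):
    """Each pixel p with unit direction d says the keypoint lies on the
    line p + t*d. Minimise the summed squared perpendicular distance to
    all such lines by solving the 2x2 normal equations A q = b, where
    A = sum(n n^T) and b = sum(n (n . p)) over the line normals n."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (px, py), (dx, dy) in zip(pixels, dirs):
        nx, ny = -dy, dx                         # normal of the voting line
        a11 += nx * nx; a12 += nx * ny; a22 += ny * ny
        s = nx * px + ny * py
        b1 += nx * s; b2 += ny * s
    det = a11 * a22 - a12 * a12                  # nonzero unless all lines are parallel
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

Aggregating many per-pixel votes this way is what makes direction-vector methods robust: even if the keypoint itself is occluded, visible pixels still constrain its position.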
-
Patent number: 10733791
Abstract: The invention discloses a real-time rendering method based on energy consumption-error precomputation, comprising: determining the spatial level structure of the scene to be rendered through adaptive subdivision of the space positions and look space of the camera browsable to the user in the 3D scene to be rendered; during the adaptive subdivision of the space, for each position subspace obtained at the completion of each subdivision, obtaining the error and energy consumption of the camera when rendering the 3D scene using a plurality of sets of preset rendering parameters in each look subspace at each vertex of the bounding volume that bounds the position subspace, and building the Pareto curve of the corresponding vertex and look subspace from the error and energy consumption; and, based on the current camera viewpoint information, searching the spatial level structure for the target Pareto curve to determine a set of rendering parameters satisfying the precomputation condition as optimum.
Type: Grant
Filed: April 18, 2017
Date of Patent: August 4, 2020
Assignee: ZHEJIANG UNIVERSITY
Inventors: Rui Wang, Hujun Bao, Tianlei Hu, Bowen Yu
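The error/energy trade-off underlying the Pareto curves can be sketched by computing the Pareto front of measured (error, energy) pairs; the measurement values below are invented.

```python
def pareto_front(points):
    """points: list of (error, energy) pairs measured for different
    rendering-parameter sets. Keep only non-dominated points: a point
    is dropped if some other point is at least as good in both
    objectives (and is not the point itself)."""
    front = []
    for p in points:
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p for q in points
        )
        if not dominated:
            front.append(p)
    return sorted(front)

# Hypothetical (error, energy) measurements for four parameter sets;
# (0.10, 7.0) is dominated by (0.10, 5.0) and drops out.
measurements = [(0.10, 5.0), (0.05, 9.0), (0.20, 3.0), (0.10, 7.0)]
```

Storing only the front per subspace vertex keeps the precomputation compact: at runtime the renderer walks one curve and picks the point matching its current error or energy budget.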
-
Publication number: 20200193704
Abstract: The invention discloses a screen tile-pair based binocular rendering pipeline process and method, comprising: completing space division according to a spatial relationship of two visual angles in stereo rendering, and generating input primitive lists corresponding to the divided space; searching non-full primitive lists of space divisions and obtaining a surface with spatial partition; and dispatching all generated spatial divisions, and simultaneously performing rasterizing and rendering of the two visual angles for primitives in each space division. With this approach, the spatial correlation of the two visual angles in stereo rendering is exploited, and in the rendering process the two visual angles are rasterized and rendered at the same time, thereby reducing the bandwidth required for repeated reading of triangle data.
Type: Application
Filed: January 2, 2018
Publication date: June 18, 2020
Inventors: RUI WANG, HUJUN BAO, YAZHEN YUAN
-
Publication number: 20200143552
Abstract: A depth recovery method includes: performing feature extraction on a monocular image to obtain a feature image of the monocular image; decoupling the feature image to obtain a scene structure graph of the feature image; performing gradient sensing on the feature image and the scene structure graph to obtain a region-enhanced feature image; and performing depth estimation according to the region-enhanced feature image to obtain a depth image of the monocular image.
Type: Application
Filed: December 21, 2019
Publication date: May 7, 2020
Applicant: ZHEJIANG SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
Inventors: Hujun BAO, Guofeng ZHANG, Qinhong JIANG, Jianping SHI
-
Publication number: 20200043189
Abstract: A simultaneous localization and dense three-dimensional reconstruction method, capable of processing rapid motion and frequent loop closure in a robust manner, and of operating at any time in a large-scale scene. The method provides a key frame-based simultaneous localization and map construction framework. First, depth and color information are used simultaneously, and the framework can operate on a central processing unit (CPU) at high speed based on key frame localization, and operate robustly in challenging scenes. To reduce accumulated errors, the method introduces incremental bundle adjustment, which can greatly reduce the amount of computation and enables local and global bundle adjustment to be completed in a unified framework. Second, the method provides a key frame-based fusion method, whereby a model can be generated online and the three-dimensional model can be updated in real time as the key frames' poses are adjusted.
Type: Application
Filed: January 13, 2017
Publication date: February 6, 2020
Inventors: Hujun BAO, Guofeng ZHANG, Haomin LIU, Chen LI
-
Publication number: 20190234746
Abstract: The present disclosure discloses a method for simultaneous localization and mapping, which can reliably handle strong rotation and fast motion. The method provides a simultaneous localization and mapping algorithm framework based on key frames, which can support rapid local map extension. Under this framework, a new feature tracking method based on multiple homography matrices is provided; this method is efficient and robust under strong rotation and fast motion. A camera orientation optimization framework based on a sliding window is further provided to increase motion constraints between successive frames using simulated or actual IMU data. Finally, a method for obtaining the real scale of a specific plane and scene is provided, such that a virtual object can be placed on a specific plane at real size.
Type: Application
Filed: September 14, 2016
Publication date: August 1, 2019
Inventors: Guofeng ZHANG, Hujun BAO, Haomin LIU