Patents by Inventor Jiaming XU

Jiaming XU has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11961244
    Abstract: Disclosed is a high-precision dynamic real-time 360-degree omnidirectional point cloud acquisition method based on fringe projection. The method comprises: firstly, by means of the fringe projection technology based on a stereoscopic phase unwrapping method, and with the assistance of an adaptive dynamic depth constraint mechanism, acquiring high-precision three-dimensional (3D) data of an object in real time without any additional auxiliary fringe pattern; and then, after two-dimensional (2D) matching points, optimized by means of the corresponding 3D information, are rapidly acquired, by means of a two-thread parallel mechanism, carrying out coarse registration based on Simultaneous Localization and Mapping (SLAM) technology and fine registration based on Iterative Closest Point (ICP) technology.
    Type: Grant
    Filed: August 27, 2020
    Date of Patent: April 16, 2024
    Assignee: NANJING UNIVERSITY OF SCIENCE AND TECHNOLOGY
    Inventors: Chao Zuo, Jiaming Qian, Qian Chen, Shijie Feng, Tianyang Tao, Yan Hu, Wei Yin, Liang Zhang, Kai Liu, Shuaijie Wu, Mingzhu Xu, Jiaye Wang
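The fine-registration step named in the abstract (ICP) can be sketched as follows. This is a minimal, generic point-to-point ICP with brute-force nearest-neighbour matching and a Kabsch/SVD rigid-transform solve; the patent's actual implementation (two-thread SLAM coarse stage, dynamic depth constraints) is not reproduced here, and all function names are illustrative.

```python
import numpy as np

def icp_step(src, dst):
    """One point-to-point ICP iteration: match each source point to its
    nearest neighbour in dst, then solve the best rigid transform (Kabsch)."""
    # Brute-force nearest-neighbour correspondence (for clarity, not speed)
    dists = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
    matched = dst[np.argmin(dists, axis=1)]
    # Centre both clouds and solve for rotation via SVD
    mu_s, mu_m = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_s
    return R, t

def icp(src, dst, iters=20):
    """Repeatedly re-match and re-solve until src converges onto dst."""
    cur = src.copy()
    for _ in range(iters):
        R, t = icp_step(cur, dst)
        cur = cur @ R.T + t
    return cur
```

In the patent's pipeline this fine registration runs in parallel with the SLAM-based coarse registration; here it stands alone as a sketch of the core iteration.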
  • Publication number: 20240118355
    Abstract: The present disclosure relates to a wide-range perpendicular sensitive magnetic sensor and the method for manufacturing the same. The magnetic sensor includes a substrate, a plurality of magnetic tunnel junctions, a plurality of magnetic flux regulators, a first output port and a second output port.
    Type: Application
    Filed: September 6, 2023
    Publication date: April 11, 2024
    Applicant: DIGITAL GRID RES. INST., CHINA SOUTHERN PWR. GRID
    Inventors: Peng LI, Qiancheng LV, Bing TIAN, Zejie TAN, Zhiming WANG, Jie WEI, Renze CHEN, Xiaopeng FAN, Zhong LIU, Zhenheng XU, Senjing YAO, Licheng LI, Yuehuan LIN, Shengrong LIU, Bofeng LUO, Jiaming ZHANG, Xu YIN
  • Patent number: 11953568
    Abstract: The present disclosure relates to a wide-range perpendicular sensitive magnetic sensor and the method for manufacturing the same. The magnetic sensor includes a substrate, a plurality of magnetic tunnel junctions, a plurality of magnetic flux regulators, a first output port and a second output port.
    Type: Grant
    Filed: September 6, 2023
    Date of Patent: April 9, 2024
    Assignee: DIGITAL GRID RES. INST., CHINA SOUTHERN PWR. GRID
    Inventors: Peng Li, Qiancheng Lv, Bing Tian, Zejie Tan, Zhiming Wang, Jie Wei, Renze Chen, Xiaopeng Fan, Zhong Liu, Zhenheng Xu, Senjing Yao, Licheng Li, Yuehuan Lin, Shengrong Liu, Bofeng Luo, Jiaming Zhang, Xu Yin
  • Patent number: 11945434
    Abstract: In one embodiment, a process is performed during controlling Autonomous Driving Vehicle (ADV). A confidence level associated with a sensed obstacle is determined. If the confidence level is below a confidence threshold, and a distance between the ADV and a potential point of contact with the sensed obstacle is below a distance threshold, then performance of a driving decision is delayed. Otherwise, the driving decision is performed to reduce risk of contact with the sensed obstacle.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: April 2, 2024
    Assignee: BAIDU USA LLC
    Inventors: Jiaming Tao, Jiaxuan Xu, Jiacheng Pan, Jinyun Zhou, Hongyi Sun, Yifei Jiang, Jiangtao Hu
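The decision rule in this abstract is simple enough to state directly in code. The sketch below follows the abstract's wording literally (low confidence AND distance below a threshold defers the decision; every other case acts); the threshold values and function name are illustrative, not from the patent.

```python
def plan_obstacle_response(confidence, distance,
                           conf_threshold=0.5, dist_threshold=30.0):
    """Decide whether to act on a sensed obstacle now or defer.

    Per the abstract: when the perception confidence is below the
    confidence threshold AND the distance to the potential point of
    contact is below the distance threshold, performance of the
    driving decision is delayed; otherwise the decision is performed
    to reduce the risk of contact.
    """
    if confidence < conf_threshold and distance < dist_threshold:
        return "delay"
    return "act"
```

Deferring lets later sensor frames confirm or dismiss a low-confidence detection before the ADV commits to a manoeuvre.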
  • Publication number: 20240071392
    Abstract: This application provides an upgrade method including: An electronic device acquires a first verification voice entered by a user; processes the first verification voice by using a first model stored in the electronic device, to obtain a first voiceprint feature; verifies an identity of the user based on the first voiceprint feature and a first user feature template stored in the electronic device; after the identity of the user is verified, if the electronic device has received a second model, processes the first verification voice by using the second model, to obtain a second voiceprint feature; and updates the first user feature template based on the second voiceprint feature, and updates the first model by using the second model.
    Type: Application
    Filed: November 6, 2023
    Publication date: February 29, 2024
    Inventors: Jiaming XU, Yue LANG, Yunfan DU
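The upgrade flow above can be sketched as a single function. The cosine-similarity verification, dictionary layout, and all names here are illustrative assumptions; the only logic taken from the abstract is the ordering: verify with the old model first, and only after success re-enrol the template and swap in the new model together.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two voiceprint feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def try_upgrade(device, voice, threshold=0.8):
    """Verify the user with the installed model; on success, migrate
    the feature template to a newly received model (if any) and
    replace the model. Returns True iff verification passed."""
    # 1. Extract a voiceprint with the currently installed model.
    feat = device["model"](voice)
    # 2. Verify identity against the stored user feature template.
    if cosine(feat, device["template"]) < threshold:
        return False                      # verification failed: no upgrade
    # 3. Only after verification, and only if a new model has arrived,
    #    re-extract the feature and replace template + model together.
    new_model = device.get("pending_model")
    if new_model is not None:
        device["template"] = new_model(voice)
        device["model"] = new_model
        device["pending_model"] = None
    return True
```

Re-enrolling from the same verification utterance means the user never has to record a fresh enrolment sample when the voiceprint model is upgraded.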
  • Publication number: 20240013789
    Abstract: This application provides a voice control method and apparatus, a wearable device, and a terminal. The method includes: obtaining voice information of a user; obtaining identity information of the user based on a first voiceprint recognition result of a first voice component of the voice information, a second voiceprint recognition result of a second voice component of the voice information, and a third voiceprint recognition result of a third voice component of the voice information, where the first voice component is captured by an in-ear voice sensor of a wearable device, the second voice component is captured by an out-of-ear voice sensor of the wearable device, and the third voice component is captured by a bone vibration sensor of the wearable device; and executing an operation instruction when the identity information of the user matches the preset identity information.
    Type: Application
    Filed: September 21, 2023
    Publication date: January 11, 2024
    Inventors: Jiaming XU, Yue LANG, Churonggui SA
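The abstract says only that all three voiceprint recognition results (in-ear, out-of-ear, and bone-vibration sensors) feed the identity decision; it does not fix the fusion rule. The weighted-average fusion below, its weights, and its threshold are purely illustrative.

```python
import numpy as np

def verify_user(scores, weights=(0.4, 0.4, 0.2), threshold=0.7):
    """Fuse per-sensor voiceprint match scores into one decision.

    scores: (in_ear, out_of_ear, bone_vibration) similarity scores
    in [0, 1]. Returns (accepted, fused_score). The weighted-sum
    fusion is an assumption, not the patented method.
    """
    fused = float(np.dot(scores, weights))
    return fused >= threshold, fused
```

Combining an in-ear microphone and a bone-vibration sensor with the ordinary out-of-ear microphone makes replay of a recorded voice harder, since a loudspeaker cannot excite the bone-conduction channel.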
  • Publication number: 20240005941
    Abstract: Disclosed are a target speaker separation system, an electronic device and a storage medium. The system includes: first, performing joint unified modeling on a plurality of cues based on a masked pre-training strategy, to boost the inference capability of a model for missing cues and enhance the representation accuracy of disturbed cues; and second, constructing a hierarchical cue modulation module. A spatial cue is introduced into a primary cue modulation module for directional enhancement of a speech of a speaker; in an intermediate cue modulation module, the speech of the speaker is enhanced on the basis of temporal coherence of a dynamic cue and an auditory signal component; a steady-state cue is introduced into an advanced cue modulation module for selective filtering; and finally, the supervised learning capability of simulation data and the unsupervised learning effect of real mixed data are sufficiently utilized.
    Type: Application
    Filed: November 3, 2022
    Publication date: January 4, 2024
    Applicant: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Jiaming XU, Jian CUI, Bo XU
  • Publication number: 20230335148
    Abstract: A speech separation method is provided, and relates to the field of speech. The method includes: obtaining, in a speaking process of a user, audio information including a user speech and video information including a user face; coding the audio information to obtain a mixed acoustic feature; extracting a visual semantic feature of the user from the video information; inputting the mixed acoustic feature and the visual semantic feature into a preset visual speech separation network to obtain an acoustic feature of the user; and decoding the acoustic feature of the user to obtain a speech signal of the user. An electronic device, a chip, and a computer-readable storage medium are provided.
    Type: Application
    Filed: August 24, 2021
    Publication date: October 19, 2023
    Inventors: Henghui Lu, Lei Qin, Peng Zhang, Jiaming Xu, Bo Xu
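The separation step in this pipeline is commonly realised as soft masking: the network conditioned on the visual semantic feature predicts a mask over the mixed acoustic feature, and multiplying the two yields the target speaker's acoustic feature. The sketch below shows only that masking step; `mask_net` is a stand-in for the patent's visual speech separation network, and the abstract does not confirm that masking (rather than direct regression) is used.

```python
import numpy as np

def separate(mixed_feat, visual_feat, mask_net):
    """Mask-based separation sketch: mask_net maps (mixture, visual
    semantic feature) to a soft mask in [0, 1] with the mixture's
    shape; the element-wise product keeps the target speaker's
    time-frequency energy and suppresses the rest."""
    mask = mask_net(mixed_feat, visual_feat)
    return mask * mixed_feat
```

The decoded result of this feature is then the target speaker's speech signal, per the abstract's final step.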
  • Patent number: 11580647
    Abstract: A global and local binary pattern image crack segmentation method based on robot vision comprises the following steps: enhancing a contrast of an acquired original image to obtain an enhanced map; using an improved local binary pattern detection algorithm to process the enhanced map and construct a saliency map; using the enhanced map and the saliency map to segment cracks and obtaining a global and local binary pattern automatic crack segmentation method; and evaluating performance of the obtained global and local binary pattern automatic crack segmentation method. The present application uses logarithmic transformation to enhance the contrast of a crack image, so that information of dark parts of the cracks is richer. Texture features of a rotation invariant local binary pattern are improved. Global information of four directions is integrated, and the law of universal gravitation and gray and roundness features are introduced to correct crack segmentation results, thereby improving segmentation accuracy.
    Type: Grant
    Filed: April 21, 2022
    Date of Patent: February 14, 2023
    Assignees: Guangzhou University, Zhongkai University of Agriculture Engineering, Guangzhou Guangjian Construction Engineering Testing Center Co., Ltd., GuangZhou Cheng'an Testing LTD. of Highway & Bridge
    Inventors: Jiyang Fu, Airong Liu, Zhicheng Yang, Jihua Mao, Bingcong Chen, Jiaming Xu, Yongmin Yang, Xiaosheng Wu, Jianting Cheng
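The logarithmic contrast enhancement named in the abstract is the classic transform s = c * log(1 + r): it stretches dark grey levels and compresses bright ones, so faint crack pixels in the dark parts of the image carry more information. The scaling choice (mapping the full 8-bit range back onto [0, 255]) is a standard convention, not taken from the patent.

```python
import numpy as np

def log_enhance(img, c=None):
    """Logarithmic contrast enhancement of an 8-bit grayscale image.

    s = c * log(1 + r); with c = 255 / log(256) the endpoints 0 and
    255 map to themselves while dark values are boosted.
    """
    img = img.astype(np.float64)
    if c is None:
        c = 255.0 / np.log1p(255.0)
    return np.rint(c * np.log1p(img)).clip(0, 255).astype(np.uint8)
```

A dark pixel such as grey level 10 is lifted to roughly 110 under this mapping, which is why the abstract notes that "information of dark parts of the cracks is richer" after enhancement.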
  • Patent number: 11487950
    Abstract: The method of the present disclosure includes: obtaining an image to be processed and a question text corresponding to the image; using an optimized dialogue model to encode the image into an image vector and encode the question text into a question vector; generating a state vector based on the image vector and the question vector; decoding the state vector to obtain and output an answer text. A discriminator needs to be introduced in an optimization process of the optimized dialogue model. The dialogue model and the discriminator are alternately optimized until a value of a hybrid loss function of the dialogue model and a value of a loss function of the discriminator do not decrease or fall below a preset value, thereby accomplishing the optimization process.
    Type: Grant
    Filed: April 19, 2019
    Date of Patent: November 1, 2022
    Assignee: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Jiaming Xu, Yiqun Yao, Bo Xu
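The alternating optimization loop described in the abstract can be written as a small skeleton. Only the control flow comes from the abstract (alternate dialogue-model and discriminator passes; stop when neither loss keeps decreasing or both fall below a preset value); `step_g`, `step_d`, and the tolerance are illustrative stand-ins for the actual training passes.

```python
def alternate_optimize(step_g, step_d, max_rounds=100, eps=1e-4):
    """Alternate one dialogue-model pass (step_g, returns the hybrid
    loss) with one discriminator pass (step_d, returns its loss).
    Stop when neither loss decreased since the previous round, or
    when both losses are below the preset value eps."""
    prev_g = prev_d = float("inf")
    g_loss = d_loss = float("inf")
    for _ in range(max_rounds):
        g_loss = step_g()
        d_loss = step_d()
        no_progress = g_loss >= prev_g - eps and d_loss >= prev_d - eps
        if no_progress or (g_loss < eps and d_loss < eps):
            break
        prev_g, prev_d = g_loss, d_loss
    return g_loss, d_loss
```

This is the standard adversarial schedule: the discriminator's feedback shapes the dialogue model's hybrid loss while the dialogue model's improving answers keep the discriminator's task non-trivial.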
  • Patent number: 11184277
    Abstract: Systems and methods are provided to perform operations on a packet of network traffic based on a routing rule of the packet. A stateful network routing service may include multiple network gateways for receiving packets of network traffic. The stateful network routing service may receive a packet and obtain or generate a routing rule based on the source and destination of the packet based on receiving the packet via a client-facing network gateway. The stateful network routing service may transmit the packet to a network appliance based on the routing rule. The stateful network routing service may further receive a packet via an appliance-facing network gateway. Based on receiving the packet via the appliance-facing network gateway, the stateful network routing service may decapsulate the packet and transmit the packet to a network destination. The stateful network routing service may further validate the packet.
    Type: Grant
    Filed: June 30, 2020
    Date of Patent: November 23, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Dheerendra Talur, Manasa Chandrashekar, Jiaming Xu, Liwen Wu, Meher Aditya Kumar Addepalli
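The two-gateway dispatch in this abstract can be sketched as follows. Every field name, the rule table, and the single-appliance choice are illustrative; the only structure taken from the abstract is that client-facing arrivals are matched to (or given) a flow rule and encapsulated toward a network appliance, while appliance-facing arrivals are decapsulated and forwarded to their destination.

```python
def handle_packet(packet, rules):
    """Stateful routing sketch. `rules` maps (src, dst) flows to an
    appliance; it persists across packets, which is what makes the
    routing stateful (both directions of a flow hit the same box)."""
    if packet["gateway"] == "client":
        flow = (packet["src"], packet["dst"])
        # Obtain the rule for this flow, generating one on first sight.
        appliance = rules.setdefault(flow, "appliance-1")
        return {"encapsulated": True, "next_hop": appliance, "inner": packet}
    # Appliance-facing gateway: strip the encapsulation, deliver inward.
    inner = packet["inner"]
    return {"encapsulated": False, "next_hop": inner["dst"], "inner": inner}
```

Validation of the returned packet, which the abstract also mentions, is omitted from this sketch.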
  • Publication number: 20210150151
    Abstract: The method of the present disclosure includes: obtaining an image to be processed and a question text corresponding to the image; using an optimized dialogue model to encode the image into an image vector and encode the question text into a question vector; generating a state vector based on the image vector and the question vector; decoding the state vector to obtain and output an answer text. A discriminator needs to be introduced in an optimization process of the optimized dialogue model. The dialogue model and the discriminator are alternately optimized until a value of a hybrid loss function of the dialogue model and a value of a loss function of the discriminator do not decrease or fall below a preset value, thereby accomplishing the optimization process.
    Type: Application
    Filed: April 19, 2019
    Publication date: May 20, 2021
    Applicant: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Jiaming XU, Yiqun YAO, Bo XU
  • Patent number: 10923136
    Abstract: A speech extraction method based on the supervised learning auditory attention includes: converting an original overlapping speech signal into a two-dimensional time-frequency signal representation by a short-time Fourier transform to obtain a first overlapping speech signal; performing a first sparsification on the first overlapping speech signal, mapping intensity information of a time-frequency unit of the first overlapping speech signal to preset D intensity levels, and performing a second sparsification on the first overlapping speech signal based on information of the preset D intensity levels to obtain a second overlapping speech signal; converting the second overlapping speech signal into a pulse signal by a time coding method; extracting a target pulse from the pulse signal by a trained target pulse extraction network; converting the target pulse into a time-frequency representation of the target speech to obtain the target speech by an inverse short-time Fourier transform.
    Type: Grant
    Filed: April 19, 2019
    Date of Patent: February 16, 2021
    Assignee: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Jiaming Xu, Yating Huang, Bo Xu
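The two sparsification steps can be sketched on a spectrogram magnitude array. The abstract specifies mapping each time-frequency unit's intensity to D preset levels and then sparsifying again based on those levels; the uniform quantisation over the dB range and the "drop level 0" rule below are assumptions filling in details the abstract leaves open.

```python
import numpy as np

def sparsify(spec_mag, d_levels=8):
    """First sparsification: quantise each time-frequency unit's
    intensity (in dB) into one of d_levels discrete levels.
    Second sparsification: zero out units in the lowest level, so
    near-silent units drop out of the representation entirely.
    Returns (levels, sparsified_magnitudes)."""
    db = 20 * np.log10(spec_mag + 1e-8)
    lo, hi = db.min(), db.max()
    levels = np.floor((db - lo) / (hi - lo + 1e-12) * d_levels).astype(int)
    levels = np.minimum(levels, d_levels - 1)   # fold top edge into last bin
    keep = levels > 0
    return levels, np.where(keep, spec_mag, 0.0)
```

The quantised levels are what the method's time-coding step then converts into a pulse signal for the spiking target-pulse extraction network.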
  • Publication number: 20200402526
    Abstract: A speech extraction method based on the supervised learning auditory attention includes: converting an original overlapping speech signal into a two-dimensional time-frequency signal representation by a short-time Fourier transform to obtain a first overlapping speech signal; performing a first sparsification on the first overlapping speech signal, mapping intensity information of a time-frequency unit of the first overlapping speech signal to preset D intensity levels, and performing a second sparsification on the first overlapping speech signal based on information of the preset D intensity levels to obtain a second overlapping speech signal; converting the second overlapping speech signal into a pulse signal by a time coding method; extracting a target pulse from the pulse signal by a trained target pulse extraction network; converting the target pulse into a time-frequency representation of the target speech to obtain the target speech by an inverse short-time Fourier transform.
    Type: Application
    Filed: April 19, 2019
    Publication date: December 24, 2020
    Applicant: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Jiaming XU, Yating HUANG, Bo XU
  • Patent number: 10818311
    Abstract: An auditory selection method based on a memory and attention model, including: step S1, encoding an original speech signal into a time-frequency matrix; step S2, encoding and transforming the time-frequency matrix to convert the matrix into a speech vector; step S3, using a long-term memory unit to store a speaker and a speech vector corresponding to the speaker; step S4, obtaining a speech vector corresponding to a target speaker, and separating a target speech from the original speech signal through an attention selection model. A storage device includes a plurality of programs stored in the storage device. The plurality of programs are configured to be loaded by a processor and execute the auditory selection method based on the memory and attention model. A processing unit includes the processor and the storage device.
    Type: Grant
    Filed: November 14, 2018
    Date of Patent: October 27, 2020
    Assignee: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Jiaming Xu, Jing Shi, Bo Xu
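Steps S3 and S4 above can be sketched as an attention read over a long-term memory of per-speaker speech vectors. Scaled dot-product scoring is an assumption here; the abstract only says an "attention selection model" retrieves the target speaker's vector, without fixing its form.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def select_speaker(memory, query):
    """Attention read over a long-term memory (step S3/S4 sketch).

    memory: dict mapping speaker id -> stored speech vector.
    query:  vector derived from the target speaker's identity.
    Scores every memory slot by dot product, softmax-normalises,
    and returns the attention-weighted read-out plus the per-speaker
    attention weights."""
    names = list(memory.keys())
    keys = np.stack([memory[n] for n in names])
    weights = softmax(keys @ query)
    return weights @ keys, dict(zip(names, weights))
```

The read-out vector then conditions the separation of the target speech from the original mixed signal.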
  • Publication number: 20200227064
    Abstract: An auditory selection method based on a memory and attention model, including: step S1, encoding an original speech signal into a time-frequency matrix; step S2, encoding and transforming the time-frequency matrix to convert the matrix into a speech vector; step S3, using a long-term memory unit to store a speaker and a speech vector corresponding to the speaker; step S4, obtaining a speech vector corresponding to a target speaker, and separating a target speech from the original speech signal through an attention selection model. A storage device includes a plurality of programs stored in the storage device. The plurality of programs are configured to be loaded by a processor and execute the auditory selection method based on the memory and attention model. A processing unit includes the processor and the storage device.
    Type: Application
    Filed: November 14, 2018
    Publication date: July 16, 2020
    Applicant: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Jiaming XU, Jing SHI, Bo XU