Patents by Inventor Lu Yuan

Lu Yuan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11751512
    Abstract: The present application discloses a woody rootstock for efficient grafting of solanaceous vegetables and an efficient grafting and seedling culture method thereof. According to the present application, a highly consistent woody rootstock clone is provided through tissue culture, efficient grafting is completed through sleeve grafting technology, and the grafting survival rate is improved by regulating the healing environment. The method offers a new approach to efficient industrial grafting of solanaceous vegetables: scions are imparted with new features through distant grafting, and the problems of low grafting efficiency and low survival rate are solved. The method is simple, highly operable, efficient, and low-cost, and provides technical support for the industrial production of grafted seedlings of solanaceous vegetables.
    Type: Grant
    Filed: October 11, 2021
    Date of Patent: September 12, 2023
    Assignee: ZHEJIANG UNIVERSITY
    Inventors: Liping Chen, Tingjin Wang, Lu Yuan, Ke Liu, Aijun Zhang, Yang Yang, Xuan Zhang, Yuzhuo Li, Zhenyu Qi
  • Publication number: 20230275635
    Abstract: Various aspects of the present disclosure generally relate to wireless communication. In some aspects, a network entity may receive a sounding reference signal (SRS) at a multi-panel system of the network entity, where the multi-panel system includes one or more sounded panels and one or more non-sounded panels. The network entity may estimate a channel to obtain channel state information (CSI) for the one or more sounded panels based at least in part on the SRS. The network entity may estimate CSI for the one or more non-sounded panels based at least in part on the CSI for the one or more sounded panels. The network entity may transmit or receive a communication based at least in part on the CSI for the one or more sounded panels and the CSI for the one or more non-sounded panels. Numerous other aspects are described.
    Type: Application
    Filed: February 28, 2022
    Publication date: August 31, 2023
    Inventors: Saeid SAHRAEI, Muhammad Sayed Khairy ABDELGHAFFAR, Renqiu WANG, Lu YUAN, Joseph Patrick BURKE, Tingfang JI, Peter GAAL
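The CSI-extrapolation step in the abstract above can be illustrated with a minimal sketch. All names are hypothetical, and averaging the sounded panels' estimates stands in for whatever interpolation the claim actually covers (a real system would exploit panel geometry and inter-panel correlation):

```python
import numpy as np

def estimate_unsounded_csi(sounded_csi, n_unsounded):
    """Estimate CSI for non-sounded panels from sounded-panel CSI.

    A simple stand-in for the estimation step in the abstract: each
    non-sounded panel reuses the mean of the sounded panels' channel
    estimates.
    """
    mean_csi = sounded_csi.mean(axis=0)          # average over sounded panels
    return np.tile(mean_csi, (n_unsounded, 1))   # one copy per non-sounded panel

# Two sounded panels, four subcarriers of complex channel gains.
sounded = np.array([[1+1j, 2+0j, 0+1j, 1-1j],
                    [1-1j, 2+0j, 0-1j, 1+1j]])
est = estimate_unsounded_csi(sounded, n_unsounded=3)
print(est.shape)  # (3, 4)
```

The network entity would then use both the measured and the extrapolated CSI when transmitting or receiving, as the abstract describes.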
  • Publication number: 20230229960
    Abstract: Some disclosed systems are configured to obtain a knowledge module configured to receive one or more knowledge inputs corresponding to one or more different modalities and generate a set of knowledge embeddings to be integrated with a set of multi-modal embeddings generated by a multi-modal main model. The systems receive a knowledge input at the knowledge module, identify a knowledge type associated with the knowledge input, and extract a knowledge unit from the knowledge input. The systems select a representation model that corresponds to the knowledge type and select a grounding type configured to ground the knowledge unit into the representation model. The systems then ground the knowledge unit into the representation model according to the grounding type.
    Type: Application
    Filed: January 19, 2022
    Publication date: July 20, 2023
    Inventors: Chenguang ZHU, Lu YUAN, Yao QIAN, Yu SHI, Nanshan ZENG, Xuedong David HUANG
  • Patent number: 11700156
    Abstract: An intelligent data and knowledge-driven method for modulation recognition includes the following steps: collecting spectrum data; constructing corresponding attribute vector labels for different modulation schemes; constructing and pre-training an attribute learning model based on the attribute vector labels for different modulation schemes; constructing and pre-training a visual model for modulation recognition; constructing a feature space transformation model, and constructing an intelligent data and knowledge-driven model for modulation recognition based on the attribute learning model and the visual model; transferring parameters of the pre-trained visual model and the pre-trained attribute learning model and retraining the transformation model; and determining whether training on a network is completed and outputting a classification result.
    Type: Grant
    Filed: September 2, 2022
    Date of Patent: July 11, 2023
    Assignee: Nanjing University of Aeronautics and Astronautics
    Inventors: Fuhui Zhou, Rui Ding, Ming Xu, Hao Zhang, Lu Yuan, Qihui Wu, Chao Dong
  • Patent number: 11593615
    Abstract: Image stylization is based on a learning network. A learning network is trained with a plurality of images and a reference image with a particular texture style. A plurality of different sub-networks of the learning network is trained, respectively. Specifically, one of the sub-networks is trained to extract one or more feature maps from the source image and transform the feature maps with the texture style applied thereon to a target image. Each of the feature maps indicates part of feature information of the source image. Another sub-network is trained to apply a specified texture style to the extracted feature maps, such that the target image generated based on the processed feature maps can embody the specified texture style.
    Type: Grant
    Filed: December 12, 2017
    Date of Patent: February 28, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Gang Hua, Lu Yuan, Jing Liao, Dongdong Chen
  • Publication number: 20220391621
    Abstract: A system for tracking a target object across a plurality of image frames. The system comprises a logic machine and a storage machine. The storage machine holds instructions executable by the logic machine to calculate a trajectory for the target object over one or more previous frames occurring before a target frame. Responsive to assessing no detection of the target object in the target frame, the instructions are executable to predict an estimated region for the target object based on the trajectory, predict an occlusion center based on a set of candidate occluding locations for a set of other objects within a threshold distance of the estimated region, each location of the set of candidate occluding locations overlapping with the estimated region, and automatically estimate a bounding box for the target object in the target frame based on the occlusion center.
    Type: Application
    Filed: June 4, 2021
    Publication date: December 8, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Dongdong CHEN, Qiankun LIU, Lu YUAN, Lei ZHANG
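The occlusion fallback in the abstract above can be sketched as follows. The linear motion model, the averaging rule for the occlusion center, and all function names are illustrative assumptions, not the patent's exact procedure:

```python
import numpy as np

def estimate_occluded_box(track_centers, box_size, occluder_centers, thresh):
    """Sketch of the fallback when the detector misses the target.

    track_centers: (T, 2) past box centers of the target. Linear motion
    predicts the next center (the "estimated region"); occluder centers
    within `thresh` of that prediction vote for an occlusion center, and
    the returned bounding box is anchored on it.
    """
    velocity = track_centers[-1] - track_centers[-2]
    predicted = track_centers[-1] + velocity             # estimated region center
    d = np.linalg.norm(occluder_centers - predicted, axis=1)
    candidates = occluder_centers[d < thresh]            # candidate occluding locations
    center = candidates.mean(axis=0) if len(candidates) else predicted
    w, h = box_size
    return np.array([center[0] - w/2, center[1] - h/2,
                     center[0] + w/2, center[1] + h/2])

track = np.array([[10., 10.], [12., 10.], [14., 10.]])   # moving right
occluders = np.array([[17., 11.], [40., 40.]])
box = estimate_occluded_box(track, (4., 4.), occluders, thresh=5.)
print(box)  # box centered on the nearby occluder: [15.  9. 19. 13.]
```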
  • Patent number: 11514261
    Abstract: According to implementations of the subject matter described herein, there is provided an image colorization solution. The solution includes determining a similarity between contents presented in a grayscale source image and a color reference image and determining a color target image corresponding to the source image based on the similarity. Specifically, first and second sets of blocks, similar and dissimilar to the reference image respectively, are determined based on the similarity; a first color for the first set of blocks is determined based on a color of corresponding blocks in the reference image; a second color for the second set of blocks is determined independently of the reference image. Through this solution, it is possible to provide user controllability and customized effects in colorization, and there is no strict requirement on correspondence between the color image and the grayscale image, achieving more robustness to the selection of color reference images.
    Type: Grant
    Filed: June 19, 2019
    Date of Patent: November 29, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jing Liao, Lu Yuan, Dongdong Chen, Mingming He
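The two-path split in the abstract above can be shown with a toy sketch. The threshold, the fixed fallback color, and the per-block representation are all illustrative; the patent determines the second color with a learned prediction rather than a constant:

```python
import numpy as np

def colorize_blocks(similarity, ref_colors, default_color, tau=0.5):
    """Toy version of the two-path colorization in the abstract.

    Blocks whose similarity to the reference exceeds `tau` (the "first
    set") copy the corresponding reference block's color; the rest (the
    "second set") get a color chosen independently of the reference.
    """
    similar = similarity > tau                               # first set of blocks
    out = np.where(similar[:, None], ref_colors, default_color)
    return out, similar

sim = np.array([0.9, 0.2, 0.7])                              # per-block similarity
ref = np.array([[200, 30, 30], [10, 10, 10], [30, 200, 30]])  # reference RGB per block
colors, mask = colorize_blocks(sim, ref, default_color=np.array([128, 128, 128]))
print(colors)  # blocks 0 and 2 take reference colors, block 1 the fallback
```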
  • Patent number: 11481869
    Abstract: Implementations of the present disclosure provide a solution for cross-domain image translation. In this solution, a first learning network for geometric deformation from a first to a second image domain is determined based on first and second images in the first and second domains, images in the two domains having different styles and objects in the images having geometric deformation with respect to each other. Geometric deformation from the second to the first domain is performed on the second image, or geometric deformation from the first to the second domain is performed on the first image, to generate an intermediate image. A second learning network for style transfer from the first to the second domain is determined based on the first and intermediate images or based on the second and intermediate images generated. Accordingly, processing accuracy of learning networks for cross-domain image translation can be improved and complexity is lowered.
    Type: Grant
    Filed: September 5, 2019
    Date of Patent: October 25, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jing Liao, Lu Yuan, Kaidi Cao
  • Patent number: 11467711
    Abstract: Systems and methods for displaying and associating context images with zones or devices of a security system or a home automation system are provided. Such systems and methods may include associating each of a plurality of zones or devices of the security system or the home automation system with a respective context image and displaying the respective context image for one of the plurality of zones or devices in response to a user interface of the security system or the home automation system receiving user input.
    Type: Grant
    Filed: December 21, 2017
    Date of Patent: October 11, 2022
    Assignee: Ademco Inc.
    Inventors: Xinyu Ma, Qingqing Zhang, Lu Yuan
  • Publication number: 20220318541
    Abstract: Systems and methods for object detection generate a feature pyramid corresponding to image data and rescale the feature pyramid to a scale corresponding to a median level of the feature pyramid, wherein the rescaled feature pyramid is a four-dimensional (4D) tensor. The 4D tensor is reshaped into a three-dimensional (3D) tensor having individual perspectives including scale features, spatial features, and task features corresponding to different dimensions of the 3D tensor. The 3D tensor is used with a plurality of attention layers to update a plurality of feature maps associated with the image data. Object detection is performed on the image data using the updated plurality of feature maps.
    Type: Application
    Filed: April 5, 2021
    Publication date: October 6, 2022
    Inventors: Xiyang DAI, Yinpeng CHEN, Bin XIAO, Dongdong CHEN, Mengchen LIU, Lu YUAN, Lei ZHANG
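The tensor manipulation described in the abstract above can be sketched concretely. Nearest-neighbor resizing stands in for whatever interpolation is used in practice, and the function name is hypothetical:

```python
import numpy as np

def pyramid_to_3d(pyramid, target_hw):
    """Rescale pyramid levels to one spatial size, stack, and flatten.

    Each level (C, h, w) is resized (nearest neighbor) to the median
    level's (H, W), stacked into a 4D tensor (levels, C, H, W), then
    reshaped into a 3D tensor (levels, C, H*W) whose three axes expose
    the scale, task (channel), and spatial perspectives of the abstract.
    """
    H, W = target_hw
    resized = []
    for feat in pyramid:                        # feat: (C, h, w)
        C, h, w = feat.shape
        ys = np.arange(H) * h // H
        xs = np.arange(W) * w // W
        resized.append(feat[:, ys][:, :, xs])   # nearest-neighbor resize
    t4 = np.stack(resized)                      # 4D: (L, C, H, W)
    return t4.reshape(t4.shape[0], t4.shape[1], H * W)  # 3D: (L, C, H*W)

# Three pyramid levels with 8 channels each; the median level is 8x8.
pyr = [np.ones((8, 16, 16)), np.ones((8, 8, 8)), np.ones((8, 4, 4))]
t3 = pyramid_to_3d(pyr, target_hw=(8, 8))
print(t3.shape)  # (3, 8, 64)
```

Attention applied along each of the three axes of `t3` would then correspond to the scale-, task-, and spatial-aware updates mentioned in the abstract.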
  • Publication number: 20220301892
    Abstract: The present disclosure provides a method of cleaning a wafer and a wafer cleaning apparatus. The method of cleaning a wafer includes: providing a wafer to be cleaned, the surface of the wafer having contaminants; and spraying a surfactant onto the surface of the wafer, and scrubbing the surface of the wafer with a polishing pad to remove the contaminants from the surface of the wafer.
    Type: Application
    Filed: December 6, 2021
    Publication date: September 22, 2022
    Inventors: Shouzhuang Song, Chang-Yi Tsai, Lu-Yuan Lin
  • Publication number: 20220292828
    Abstract: The disclosure herein enables tracking of multiple objects in a real-time video stream. For each individual frame received from the video stream, a frame type of the frame is determined. Based on the individual frame being an object detection frame type, a set of object proposals is detected in the individual frame, associations between the set of object proposals and a set of object tracks are assigned, and statuses of the set of object tracks are updated based on the assigned associations. Based on the individual frame being an object tracking frame type, single-object tracking is performed on the frame based on each object track of the set of object tracks and the set of object tracks is updated based on the performed single-object tracking. For each frame received, a real-time object location data stream is provided based on the set of object tracks.
    Type: Application
    Filed: May 28, 2022
    Publication date: September 15, 2022
    Inventors: Ishani CHAKRABORTY, Yi-Ling CHEN, Lu YUAN
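The detect-versus-track dispatch in the abstract above can be sketched as a simple loop. The callables, the replace-tracks association, and the 1-D "positions" are placeholder assumptions for the real detector, tracker, and association logic:

```python
def process_stream(frames, is_detection_frame, detect, track_one):
    """Sketch of the per-frame dispatch described in the abstract.

    Frames tagged as detection frames run the (expensive) detector and
    rebuild the track set; the frames in between only run a lightweight
    single-object tracker per existing track. Every frame emits a
    real-time location snapshot.
    """
    tracks = []
    locations = []
    for i, frame in enumerate(frames):
        if is_detection_frame(i):
            proposals = detect(frame)
            tracks = proposals                    # trivial association: replace tracks
        else:
            tracks = [track_one(frame, t) for t in tracks]
        locations.append(list(tracks))            # per-frame location data stream
    return locations

# Toy 1-D example: the detector returns positions; the tracker shifts by one.
locs = process_stream(
    [0, 1, 2, 3],
    is_detection_frame=lambda i: i % 3 == 0,      # detect every 3rd frame
    detect=lambda f: [f * 10],                    # "position" from the detector
    track_one=lambda f, t: t + 1,                 # simple motion update
)
print(locs)  # [[0], [1], [2], [30]]
```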
  • Patent number: 11386662
    Abstract: The disclosure herein enables tracking of multiple objects in a real-time video stream. For each individual frame received from the video stream, a frame type of the frame is determined. Based on the individual frame being an object detection frame type, a set of object proposals is detected in the individual frame, associations between the set of object proposals and a set of object tracks are assigned, and statuses of the set of object tracks are updated based on the assigned associations. Based on the individual frame being an object tracking frame type, single-object tracking is performed on the frame based on each object track of the set of object tracks and the set of object tracks is updated based on the performed single-object tracking. For each frame received, a real-time object location data stream is provided based on the set of object tracks.
    Type: Grant
    Filed: May 28, 2020
    Date of Patent: July 12, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ishani Chakraborty, Yi-Ling Chen, Lu Yuan
  • Publication number: 20220188595
    Abstract: A computer device for automatic feature detection comprises a processor, a communication device, and a memory configured to hold instructions executable by the processor to instantiate a dynamic convolution neural network, receive input data via the communication device, and execute the dynamic convolution neural network to automatically detect features in the input data. The dynamic convolution neural network compresses the input data from an input space having a dimensionality equal to a predetermined number of channels into an intermediate space having a dimensionality less than the number of channels. The dynamic convolution neural network dynamically fuses the channels into an intermediate representation within the intermediate space and expands the intermediate representation from the intermediate space to an expanded representation in an output space having a higher dimensionality than the dimensionality of the intermediate space.
    Type: Application
    Filed: December 16, 2020
    Publication date: June 16, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Yinpeng CHEN, Xiyang DAI, Mengchen LIU, Dongdong CHEN, Lu YUAN, Zicheng LIU, Ye YU, Mei CHEN, Yunsheng LI
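The compress-fuse-expand pipeline in the abstract above can be sketched per channel vector. Fixed matrices keep the sketch runnable; in the patented network these would be learned, and the fusion would be input-dependent ("dynamic"):

```python
import numpy as np

def dynamic_fuse(x, w_down, w_fuse, w_up):
    """Compress -> fuse -> expand, per the abstract.

    x: (C,) channel vector at one spatial position.
    w_down (L, C) squeezes C channels into an L-dim intermediate space
    (L < C); w_fuse (L, L) mixes channels in that small space; w_up
    (C, L) expands back to the higher-dimensional output space.
    """
    z = w_down @ x          # compress: C -> L
    z = w_fuse @ z          # dynamically fuse channels (static here)
    return w_up @ z         # expand: L -> C

C, L = 6, 2                 # intermediate space smaller than channel count
x = np.arange(C, dtype=float)
w_down = np.ones((L, C)) / C
w_fuse = np.eye(L)
w_up = np.ones((C, L))
y = dynamic_fuse(x, w_down, w_fuse, w_up)
print(y.shape)  # (6,)
```

Doing the channel mixing in the L-dimensional space is what makes the operation cheap relative to full C-by-C dynamic convolution.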
  • Publication number: 20220188599
    Abstract: A neural architecture search (NAS) with a weak predictor comprises: receiving network architecture scoring information; iteratively sampling a search space, wherein the sampling comprises: generating a set of candidate architectures within the search space; learning a first predictor; evaluating performance of the candidate architectures; and based on at least the performance of the set of candidate architectures and the network architecture scoring information, refining the search space to a smaller search space; based on at least the network architecture scoring information, thresholding the performance of candidate architectures to determine scored output candidate architectures; and reporting the scored output candidate architectures. In some examples, the candidate architectures each comprise a machine learning (ML) model, for example a neural network (NN).
    Type: Application
    Filed: December 15, 2020
    Publication date: June 16, 2022
    Inventors: Xiyang DAI, Dongdong CHEN, Yinpeng CHEN, Mengchen LIU, Ye YU, Zicheng LIU, Mei CHEN, Lu YUAN, Junru WU
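The iterative search-space refinement in the abstract above can be shown with a toy sketch. Ranking candidates by observed score stands in for learning a weak predictor, and the shrink rule, names, and integer "architectures" are illustrative, not the patent's exact procedure:

```python
import random

def weak_predictor_nas(score, search_space, rounds=4, sample_n=20, keep=0.5):
    """Sketch of NAS with progressive search-space refinement.

    Each round samples candidate architectures, ranks them with a weak
    (here: trivial) predictor, keeps the best-scoring fraction, and
    shrinks the search space to architectures at least as good as the
    worst survivor, so later rounds focus on a promising subregion.
    """
    space = list(search_space)
    for _ in range(rounds):
        candidates = random.sample(space, min(sample_n, len(space)))
        candidates.sort(key=score, reverse=True)            # weak ranking predictor
        survivors = candidates[: max(1, int(len(candidates) * keep))]
        cutoff = min(score(a) for a in survivors)
        space = [a for a in space if score(a) >= cutoff]    # refined search space
    return max(space, key=score)

random.seed(0)
# Toy space: architectures are integers, "accuracy" peaks at 42.
best = weak_predictor_nas(lambda a: -abs(a - 42), range(100))
print(best)  # 42
```

Because each round only needs the predictor to rank a local sample correctly, a weak predictor suffices, which is the point of the claimed method.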
  • Patent number: 11335008
    Abstract: Training a multi-object tracking model includes: generating a plurality of training images based at least on scene generation information, each training image comprising a plurality of objects to be tracked; generating, for each training image, original simulated data based at least on the scene generation information, the original simulated data comprising tag data for a first object; locating, within the original simulated data, tag data for the first object, based on at least an anomaly alert (e.g., occlusion alert, proximity alert, motion alert) associated with the first object in the first training image; based at least on locating the tag data for the first object, modifying at least a portion of the tag data for the first object from the original simulated data, thereby generating preprocessed training data from the original simulated data; and training a multi-object tracking model with the preprocessed training data to produce a trained multi-object tracker.
    Type: Grant
    Filed: September 18, 2020
    Date of Patent: May 17, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ishani Chakraborty, Jonathan C. Hanzelka, Lu Yuan, Pedro Urbina Escos, Thomas M. Soemo
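The anomaly-driven tag preprocessing in the claim above can be sketched with a toy data schema. The dictionary layout, the alert names, and the visibility-flag rule are hypothetical simplifications of the claimed tag modification:

```python
def preprocess_tags(simulated_data, alerts):
    """Sketch of modifying simulated tag data based on anomaly alerts.

    simulated_data: {object_id: {"bbox": [...], "visible": bool}}
    alerts: {object_id: alert_type} from the simulator, e.g. an
    occlusion alert. Flagged objects keep their tag but are marked not
    visible, producing preprocessed training data with labels that
    better match what a tracker would actually observe.
    """
    out = {oid: dict(tag) for oid, tag in simulated_data.items()}
    for oid, alert in alerts.items():
        if alert == "occlusion" and oid in out:
            out[oid]["visible"] = False          # modify tag data for flagged object
    return out

data = {1: {"bbox": [0, 0, 10, 10], "visible": True},
        2: {"bbox": [20, 20, 30, 30], "visible": True}}
clean = preprocess_tags(data, {1: "occlusion"})
print(clean[1]["visible"], clean[2]["visible"])  # False True
```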
  • Publication number: 20220148197
    Abstract: Training a multi-object tracking model includes: generating a plurality of training images based at least on scene generation information, each training image comprising a plurality of objects to be tracked; generating, for each training image, original simulated data based at least on the scene generation information, the original simulated data comprising tag data for a first object; locating, within the original simulated data, tag data for the first object, based on at least an anomaly alert (e.g., occlusion alert, proximity alert, motion alert) associated with the first object in the first training image; based at least on locating the tag data for the first object, modifying at least a portion of the tag data for the first object from the original simulated data, thereby generating preprocessed training data from the original simulated data; and training a multi-object tracking model with the preprocessed training data to produce a trained multi-object tracker.
    Type: Application
    Filed: January 26, 2022
    Publication date: May 12, 2022
    Inventors: Ishani CHAKRABORTY, Jonathan C. HANZELKA, Lu YUAN, Pedro Urbina ESCOS, Thomas M. SOEMO
  • Patent number: 11308576
    Abstract: In accordance with implementations of the subject matter described herein, there is proposed a solution for visual stylization of stereoscopic images. In the solution, a first feature map for a first source image and a second feature map for a second source image are extracted. The first and second source images correspond to first and second views of a stereoscopic image, respectively. A first unidirectional disparity from the first source image to the second source image is determined based on the first and second source images. First and second target images having a specified visual style are generated by processing the first and second feature maps based on the first unidirectional disparity. Through the solution, the disparity between the two source images of a stereoscopic image is taken into account when performing the visual style transfer, thereby maintaining a stereoscopic effect in the stereoscopic image consisting of the target images.
    Type: Grant
    Filed: January 8, 2019
    Date of Patent: April 19, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Lu Yuan, Gang Hua, Jing Liao, Dongdong Chen
  • Patent number: 11290550
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for a virtual object distribution method are provided. One of the methods includes: performing an image scan of a user's local environment; conducting image identification on an acquired image; acquiring an electronic certificate from a server if an image identifier is identified in the image; saving the electronic certificate; and, in response to a determination that a category count of the received electronic certificates reaches a threshold, sending to the server a virtual object distribution request to cause the server to distribute a virtual object to the user. This method significantly increases the interactivity and entertainment value of a virtual object distribution process.
    Type: Grant
    Filed: July 7, 2021
    Date of Patent: March 29, 2022
    Assignee: ADVANCED NEW TECHNOLOGIES CO., LTD.
    Inventors: Qinglong Duan, Guanhua Chen, Jing Ji, Jiahui Cheng, Lu Yuan
  • Publication number: 20220092792
    Abstract: Training a multi-object tracking model includes: generating a plurality of training images based at least on scene generation information, each training image comprising a plurality of objects to be tracked; generating, for each training image, original simulated data based at least on the scene generation information, the original simulated data comprising tag data for a first object; locating, within the original simulated data, tag data for the first object, based on at least an anomaly alert (e.g., occlusion alert, proximity alert, motion alert) associated with the first object in the first training image; based at least on locating the tag data for the first object, modifying at least a portion of the tag data for the first object from the original simulated data, thereby generating preprocessed training data from the original simulated data; and training a multi-object tracking model with the preprocessed training data to produce a trained multi-object tracker.
    Type: Application
    Filed: September 18, 2020
    Publication date: March 24, 2022
    Inventors: Ishani CHAKRABORTY, Jonathan C. HANZELKA, Lu YUAN, Pedro Urbina ESCOS, Thomas M. SOEMO