Patents by Inventor Lu Yuan

Lu Yuan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11481869
    Abstract: Implementations of the present disclosure provide a solution for cross-domain image translation. In this solution, a first learning network for geometric deformation from a first to a second image domain is determined based on first and second images in the first and second domains, where images in the two domains have different styles and objects in the images exhibit geometric deformation with respect to each other. Geometric deformation from the second to the first domain is performed on the second image, or geometric deformation from the first to the second domain is performed on the first image, to generate an intermediate image. A second learning network for style transfer from the first to the second domain is determined based on the first and intermediate images, or based on the second and intermediate images. Accordingly, the processing accuracy of learning networks for cross-domain image translation can be improved and their complexity lowered.
    Type: Grant
    Filed: September 5, 2019
    Date of Patent: October 25, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jing Liao, Lu Yuan, Kaidi Cao
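The two-stage pipeline the abstract describes (a geometry network producing an intermediate image, followed by a style network) can be sketched as follows. This is a minimal illustration, not the patented method: `geometry_network` and `style_network` are hypothetical stand-ins (a flip and an intensity remap) for the two learned networks.

```python
import numpy as np

def geometry_network(image):
    """Stage 1 (hypothetical stand-in): geometric deformation from the
    first domain toward the second, sketched here as a horizontal flip."""
    return image[:, ::-1]

def style_network(image):
    """Stage 2 (hypothetical stand-in): style transfer from the first to
    the second domain, sketched as a simple intensity remapping."""
    return np.clip(image * 1.2, 0.0, 1.0)

def translate(first_domain_image):
    # The two learned networks are applied in sequence: geometry first,
    # producing an intermediate image, then style transfer on top of it.
    intermediate = geometry_network(first_domain_image)
    return style_network(intermediate)

img = np.random.default_rng(0).random((4, 4))
out = translate(img)
```

Decoupling geometry from style is the point of the intermediate image: each network is trained on a simpler sub-problem, which is where the claimed accuracy and complexity gains come from.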
  • Patent number: 11467711
    Abstract: Systems and methods for displaying and associating context images with zones or devices of a security system or a home automation system are provided. Such systems and methods may include associating each of a plurality of zones or devices of the security system or the home automation system with a respective context image and displaying the respective context image for one of the plurality of zones or devices in response to a user interface of the security system or the home automation system receiving user input.
    Type: Grant
    Filed: December 21, 2017
    Date of Patent: October 11, 2022
    Assignee: Ademco Inc.
    Inventors: Xinyu Ma, Qingqing Zhang, Lu Yuan
  • Publication number: 20220318541
    Abstract: Systems and methods for object detection generate a feature pyramid corresponding to image data and rescale the feature pyramid to a scale corresponding to a median level of the feature pyramid, wherein the rescaled feature pyramid is a four-dimensional (4D) tensor. The 4D tensor is reshaped into a three-dimensional (3D) tensor having individual perspectives including scale features, spatial features, and task features corresponding to different dimensions of the 3D tensor. The 3D tensor is used with a plurality of attention layers to update a plurality of feature maps associated with the image data. Object detection is performed on the image data using the updated plurality of feature maps.
    Type: Application
    Filed: April 5, 2021
    Publication date: October 6, 2022
    Inventors: Xiyang DAI, Yinpeng CHEN, Bin XIAO, Dongdong CHEN, Mengchen LIU, Lu YUAN, Lei ZHANG
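The tensor bookkeeping in this abstract (rescale pyramid levels to the median resolution, stack into a 4D tensor, reshape to 3D) can be sketched with NumPy. The sizes, channel count, and nearest-neighbour resize are illustrative assumptions, not taken from the patent; the attention layers themselves are omitted.

```python
import numpy as np

def resize_nearest(fmap, size):
    """Nearest-neighbour resize of a (C, H, W) feature map to (C, size, size)."""
    c, h, w = fmap.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return fmap[:, rows][:, :, cols]

# A toy 3-level pyramid with 8 channels; levels at 16, 8, and 4 pixels.
channels = 8
rng = np.random.default_rng(1)
pyramid = [rng.random((channels, s, s)) for s in (16, 8, 4)]

# Rescale every level to the median level's resolution (8 here) and stack
# into a 4D tensor of shape (levels, C, H, W).
median = 8
tensor4d = np.stack([resize_nearest(f, median) for f in pyramid])

# Reshape into a 3D tensor whose axes expose the scale (levels),
# channel/task (C), and spatial (H*W) views that the attention layers
# would attend over along their respective dimensions.
levels = tensor4d.shape[0]
tensor3d = tensor4d.reshape(levels, channels, median * median)
```

With the three perspectives separated onto distinct axes, a scale-, spatial-, or task-attention layer is just attention applied along the corresponding dimension of `tensor3d`.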
  • Publication number: 20220301892
    Abstract: The present disclosure provides a method of cleaning a wafer and a wafer cleaning apparatus. The method of cleaning a wafer includes: providing a wafer to be cleaned, the surface of the wafer having contaminants; and spraying a surfactant onto the surface of the wafer, and scrubbing the surface of the wafer with a polishing pad to remove the contaminants from the surface of the wafer.
    Type: Application
    Filed: December 6, 2021
    Publication date: September 22, 2022
    Inventors: Shouzhuang Song, Chang-Yi Tsai, Lu-Yuan Lin
  • Publication number: 20220292828
    Abstract: The disclosure herein enables tracking of multiple objects in a real-time video stream. For each individual frame received from the video stream, a frame type of the frame is determined. Based on the individual frame being an object detection frame type, a set of object proposals is detected in the individual frame, associations between the set of object proposals and a set of object tracks are assigned, and statuses of the set of object tracks are updated based on the assigned associations. Based on the individual frame being an object tracking frame type, single-object tracking is performed on the frame based on each object track of the set of object tracks and the set of object tracks is updated based on the performed single-object tracking. For each frame received, a real-time object location data stream is provided based on the set of object tracks.
    Type: Application
    Filed: May 28, 2022
    Publication date: September 15, 2022
    Inventors: Ishani CHAKRABORTY, Yi-Ling CHEN, Lu YUAN
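The per-frame dispatch the abstract describes (detection frames update track associations, tracking frames run cheaper single-object tracking) can be sketched as a loop. Everything here is a toy stand-in under stated assumptions: the frame-type policy (`detect_every`), the naive association by position order, and the no-op single-object tracker are all hypothetical, not the patented logic.

```python
def frame_type(index, detect_every=5):
    """Hypothetical policy: run full object detection every Nth frame and
    the cheaper single-object trackers on the frames in between."""
    return "detect" if index % detect_every == 0 else "track"

def process_stream(frames, detect_every=5):
    tracks = {}          # track id -> last known location
    next_id = 0
    locations = []       # real-time location stream, one entry per frame
    for i, detections in enumerate(frames):
        if frame_type(i, detect_every) == "detect":
            # Assign associations between object proposals and existing
            # tracks (here: naive assignment by position order), starting
            # new tracks for unmatched proposals.
            for j, box in enumerate(detections):
                if j < len(tracks):
                    tracks[j] = box
                else:
                    tracks[next_id] = box
                    next_id += 1
        else:
            # Tracking frame: update each track via single-object tracking
            # (a no-op stand-in here).
            tracks = dict(tracks)
        locations.append(dict(tracks))
    return locations

out = process_stream([[(0, 0), (1, 1)], [], [(5, 5)]], detect_every=2)
```

Running detection only on a subset of frames is what keeps the stream real-time: the expensive detector amortizes over several cheap tracking frames.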
  • Patent number: 11386662
    Abstract: The disclosure herein enables tracking of multiple objects in a real-time video stream. For each individual frame received from the video stream, a frame type of the frame is determined. Based on the individual frame being an object detection frame type, a set of object proposals is detected in the individual frame, associations between the set of object proposals and a set of object tracks are assigned, and statuses of the set of object tracks are updated based on the assigned associations. Based on the individual frame being an object tracking frame type, single-object tracking is performed on the frame based on each object track of the set of object tracks and the set of object tracks is updated based on the performed single-object tracking. For each frame received, a real-time object location data stream is provided based on the set of object tracks.
    Type: Grant
    Filed: May 28, 2020
    Date of Patent: July 12, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ishani Chakraborty, Yi-Ling Chen, Lu Yuan
  • Publication number: 20220188595
    Abstract: A computer device for automatic feature detection comprises a processor, a communication device, and a memory configured to hold instructions executable by the processor to instantiate a dynamic convolution neural network, receive input data via the communication device, and execute the dynamic convolution neural network to automatically detect features in the input data. The dynamic convolution neural network compresses the input data from an input space having a dimensionality equal to a predetermined number of channels into an intermediate space having a dimensionality less than the number of channels. The dynamic convolution neural network dynamically fuses the channels into an intermediate representation within the intermediate space and expands the intermediate representation from the intermediate space to an expanded representation in an output space having a higher dimensionality than the dimensionality of the intermediate space.
    Type: Application
    Filed: December 16, 2020
    Publication date: June 16, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Yinpeng CHEN, Xiyang DAI, Mengchen LIU, Dongdong CHEN, Lu YUAN, Zicheng LIU, Ye YU, Mei CHEN, Yunsheng LI
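The compress / dynamically-fuse / expand structure can be sketched for a single feature vector. This is a minimal sketch under assumptions of my own (random projection matrices, a `tanh` outer product as the input-dependent mixing); it illustrates the shape flow, not the patented network.

```python
import numpy as np

rng = np.random.default_rng(2)
channels, latent = 16, 4            # latent dim < channel count, per the abstract

W_squeeze = rng.standard_normal((latent, channels)) / np.sqrt(channels)
W_expand = rng.standard_normal((channels, latent)) / np.sqrt(latent)

def dynamic_fusion_block(x):
    """Compress the channels into a lower-dimensional intermediate space,
    dynamically fuse them with an input-dependent mixing matrix, then
    expand back to the higher-dimensional output space."""
    z = W_squeeze @ x                  # compress: C -> L (L < C)
    mix = np.tanh(np.outer(z, z))      # input-dependent L x L fusion matrix
    fused = mix @ z                    # dynamic channel fusion
    return W_expand @ fused            # expand: L -> C

x = rng.standard_normal(channels)
y = dynamic_fusion_block(x)
```

The efficiency argument is that the input-dependent ("dynamic") computation happens in the small L-dimensional space, so the adaptivity costs O(L²) rather than O(C²).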
  • Publication number: 20220188599
    Abstract: A neural architecture search (NAS) with a weak predictor comprises: receiving network architecture scoring information; iteratively sampling a search space, wherein the sampling comprises: generating a set of candidate architectures within the search space; learning a first predictor; evaluating performance of the candidate architectures; and based on at least the performance of the set of candidate architectures and the network architecture scoring information, refining the search space to a smaller search space; based on at least the network architecture scoring information, thresholding the performance of candidate architectures to determine scored output candidate architectures; and reporting the scored output candidate architectures. In some examples, the candidate architectures each comprise a machine learning (ML) model, for example a neural network (NN).
    Type: Application
    Filed: December 15, 2020
    Publication date: June 16, 2022
    Inventors: Xiyang DAI, Dongdong CHEN, Yinpeng CHEN, Mengchen LIU, Ye YU, Zicheng LIU, Mei CHEN, Lu YUAN, Junru WU
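The iterative loop in this abstract (sample candidates, learn a weak predictor, refine the search space, threshold the outputs) can be sketched on a toy one-dimensional search space. The performance function, sample counts, and the "predictor" (ranking by evaluated score) are illustrative assumptions, not the patented search.

```python
import random

def true_performance(arch):
    """Hypothetical black-box evaluation of a candidate architecture,
    here a single hyper-parameter in [0, 1] with its optimum at 0.7."""
    return 1.0 - abs(arch - 0.7)

def weak_predictor_nas(iterations=4, samples=20, seed=0):
    rng = random.Random(seed)
    lo, hi = 0.0, 1.0                  # current (shrinking) search space
    scored = []
    for _ in range(iterations):
        # Generate candidate architectures within the current search space.
        candidates = [rng.uniform(lo, hi) for _ in range(samples)]
        # Evaluate performance and rank; a weak predictor only needs to
        # identify the promising half, which refines the search space
        # to a smaller one around the good candidates.
        scored = sorted(candidates, key=true_performance, reverse=True)
        top = scored[: samples // 2]
        lo, hi = min(top), max(top)
    # Threshold the performance to determine the scored output candidates.
    return [a for a in scored if true_performance(a) > 0.9]

best = weak_predictor_nas()
```

The design point is that the predictor never has to rank the whole space accurately; it only has to be weakly right about where to look next, which is much cheaper to learn.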
  • Patent number: 11335008
    Abstract: Training a multi-object tracking model includes: generating a plurality of training images based at least on scene generation information, each training image comprising a plurality of objects to be tracked; generating, for each training image, original simulated data based at least on the scene generation information, the original simulated data comprising tag data for a first object; locating, within the original simulated data, tag data for the first object, based on at least an anomaly alert (e.g., occlusion alert, proximity alert, motion alert) associated with the first object in the first training image; based at least on locating the tag data for the first object, modifying at least a portion of the tag data for the first object from the original simulated data, thereby generating preprocessed training data from the original simulated data; and training a multi-object tracking model with the preprocessed training data to produce a trained multi-object tracker.
    Type: Grant
    Filed: September 18, 2020
    Date of Patent: May 17, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ishani Chakraborty, Jonathan C. Hanzelka, Lu Yuan, Pedro Urbina Escos, Thomas M. Soemo
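The preprocessing step the abstract describes (locate an object's tag data via an anomaly alert, then modify that tag data before training) can be sketched as a pure function over simulated records. The record layout and the choice to modify tags by attaching an `anomaly` flag are assumptions for illustration only.

```python
def preprocess_training_data(simulated, alerts):
    """For each training image's original simulated data, locate the tag
    data of objects that triggered an anomaly alert (e.g. occlusion,
    proximity, motion) and modify it — here by flagging the tag so a
    downstream loss could down-weight or relabel it."""
    preprocessed = []
    for record in simulated:
        tags = dict(record["tags"])    # object id -> tag data
        for obj_id, alert in alerts.get(record["image"], {}).items():
            if obj_id in tags:
                tags[obj_id] = {**tags[obj_id], "anomaly": alert}
        preprocessed.append({"image": record["image"], "tags": tags})
    return preprocessed

data = [{"image": "img0", "tags": {1: {"box": (0, 0, 4, 4)}}}]
alerts = {"img0": {1: "occlusion"}}
clean = preprocess_training_data(data, alerts)
```

Keeping the original simulated data untouched and emitting a preprocessed copy matches the abstract's distinction between "original simulated data" and the "preprocessed training data" the tracker is ultimately trained on.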
  • Publication number: 20220148197
    Abstract: Training a multi-object tracking model includes: generating a plurality of training images based at least on scene generation information, each training image comprising a plurality of objects to be tracked; generating, for each training image, original simulated data based at least on the scene generation information, the original simulated data comprising tag data for a first object; locating, within the original simulated data, tag data for the first object, based on at least an anomaly alert (e.g., occlusion alert, proximity alert, motion alert) associated with the first object in the first training image; based at least on locating the tag data for the first object, modifying at least a portion of the tag data for the first object from the original simulated data, thereby generating preprocessed training data from the original simulated data; and training a multi-object tracking model with the preprocessed training data to produce a trained multi-object tracker.
    Type: Application
    Filed: January 26, 2022
    Publication date: May 12, 2022
    Inventors: Ishani CHAKRABORTY, Jonathan C. HANZELKA, Lu YUAN, Pedro Urbina ESCOS, Thomas M. SOEMO
  • Patent number: 11308576
    Abstract: In accordance with implementations of the subject matter described herein, there is proposed a solution of visual stylization of stereoscopic images. In the solution, a first feature map for a first source image and a second feature map for a second source image are extracted. The first and second source images correspond to first and second views of a stereoscopic image, respectively. A first unidirectional disparity from the first source image to the second source image is determined based on the first and second source images. First and second target images having a specified visual style are generated by processing the first and second feature maps based on the first unidirectional disparity. Through the solution, a disparity between two source images of a stereoscopic image are taken into account when performing the visual style transfer, thereby maintaining a stereoscopic effect in the stereoscopic image consisting of the target images.
    Type: Grant
    Filed: January 8, 2019
    Date of Patent: April 19, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Lu Yuan, Gang Hua, Jing Liao, Dongdong Chen
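The disparity-aware stylization can be sketched in one dimension: stylize both views' feature maps while injecting the left view's disparity-warped features into the right view's processing, so the two stylized outputs stay consistent. The identity features, `tanh` "style", and integer `np.roll` warp are toy stand-ins, not the patented networks.

```python
import numpy as np

def stylize(feature_map):
    """Hypothetical stand-in for applying the specified visual style."""
    return np.tanh(feature_map)

def warp(feature_map, disparity):
    """Shift a 1D feature map by an integer disparity (toy warp)."""
    return np.roll(feature_map, disparity)

def stylize_stereo(left, right, disparity):
    # Extract feature maps for both views (identity features in this toy).
    f_left, f_right = left.astype(float), right.astype(float)
    # Blend the left view's warped features into the right view's features
    # so the unidirectional disparity is respected; this is what keeps the
    # stereoscopic effect intact after style transfer.
    f_right = 0.5 * f_right + 0.5 * warp(f_left, disparity)
    return stylize(f_left), stylize(f_right)

left = np.arange(8)
right = np.roll(np.arange(8), 2)
out_left, out_right = stylize_stereo(left, right, 2)
```

Stylizing each view independently would break the left/right correspondence; conditioning the second view on the warped first view is the one-sentence version of the claimed fix.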
  • Patent number: 11290550
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for a virtual object distribution method are provided. One of the methods includes: scanning a user's local environment for images; performing image identification on an acquired image; acquiring an electronic certificate from a server if an image identifier is identified in the image; saving the electronic certificate; and, in response to a determination that a category count of the received electronic certificates reaches a threshold, sending the server a virtual object distribution request to cause the server to distribute a virtual object to the user. This method significantly increases the interactivity and entertainment value of the virtual object distribution process.
    Type: Grant
    Filed: July 7, 2021
    Date of Patent: March 29, 2022
    Assignee: ADVANCED NEW TECHNOLOGIES CO., LTD.
    Inventors: Qinglong Duan, Guanhua Chen, Jing Ji, Jiahui Cheng, Lu Yuan
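The client-side bookkeeping in this abstract (save certificates, count distinct categories, request distribution once the count reaches a threshold) can be sketched in a few lines. The wallet layout and category names are hypothetical; the scanning, identification, and server protocol are out of scope here.

```python
def add_certificate(wallet, certificate):
    """Save an acquired electronic certificate, keyed by its category."""
    wallet.setdefault(certificate["category"], []).append(certificate)

def should_request_distribution(wallet, threshold):
    """The client sends the server a virtual object distribution request
    once the count of distinct certificate categories reaches the
    threshold."""
    return len(wallet) >= threshold

wallet = {}
for cat in ["fu", "lu", "shou", "fu"]:    # hypothetical category ids
    add_certificate(wallet, {"category": cat})
```

Note the threshold is on the number of distinct categories, not the total number of certificates: a duplicate category (the second "fu" above) does not advance the count.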
  • Publication number: 20220092792
    Abstract: Training a multi-object tracking model includes: generating a plurality of training images based at least on scene generation information, each training image comprising a plurality of objects to be tracked; generating, for each training image, original simulated data based at least on the scene generation information, the original simulated data comprising tag data for a first object; locating, within the original simulated data, tag data for the first object, based on at least an anomaly alert (e.g., occlusion alert, proximity alert, motion alert) associated with the first object in the first training image; based at least on locating the tag data for the first object, modifying at least a portion of the tag data for the first object from the original simulated data, thereby generating preprocessed training data from the original simulated data; and training a multi-object tracking model with the preprocessed training data to produce a trained multi-object tracker.
    Type: Application
    Filed: September 18, 2020
    Publication date: March 24, 2022
    Inventors: Ishani CHAKRABORTY, Jonathan C. HANZELKA, Lu YUAN, Pedro Urbina ESCOS, Thomas M. SOEMO
  • Publication number: 20220044352
    Abstract: Implementations of the present disclosure provide a solution for cross-domain image translation. In this solution, a first learning network for geometric deformation from a first to a second image domain is determined based on first and second images in the first and second domains, where images in the two domains have different styles and objects in the images exhibit geometric deformation with respect to each other. Geometric deformation from the second to the first domain is performed on the second image, or geometric deformation from the first to the second domain is performed on the first image, to generate an intermediate image. A second learning network for style transfer from the first to the second domain is determined based on the first and intermediate images, or based on the second and intermediate images. Accordingly, the processing accuracy of learning networks for cross-domain image translation can be improved and their complexity lowered.
    Type: Application
    Filed: September 5, 2019
    Publication date: February 10, 2022
    Inventors: Jing LIAO, Lu YUAN, Kaidi Cao
  • Publication number: 20220022378
    Abstract: The present application discloses a woody rootstock for efficient grafting of solanaceous vegetables and an efficient grafting and seedling culture method thereof. According to the present application, a highly consistent woody rootstock clone is provided through tissue culture, efficient grafting is completed through a sleeve grafting technique, and the grafting survival rate is improved by regulating the healing environment. The method offers a new approach to efficient industrial grafting of solanaceous vegetables: scions acquire new traits through distant grafting, and the problems of low grafting efficiency and low survival rate are addressed. The method has the advantages of strong operability, simplicity, high efficiency, and low cost, and provides technical support for the industrial production of grafted seedlings of solanaceous vegetables.
    Type: Application
    Filed: October 11, 2021
    Publication date: January 27, 2022
    Inventors: Liping CHEN, Tingjin WANG, Lu YUAN, Ke LIU, Aijun ZHANG, Yang YANG, Xuan ZHANG, Yuzhuo LI, Zhenyu QI
  • Publication number: 20210374421
    Abstract: The disclosure herein enables tracking of multiple objects in a real-time video stream. For each individual frame received from the video stream, a frame type of the frame is determined. Based on the individual frame being an object detection frame type, a set of object proposals is detected in the individual frame, associations between the set of object proposals and a set of object tracks are assigned, and statuses of the set of object tracks are updated based on the assigned associations. Based on the individual frame being an object tracking frame type, single-object tracking is performed on the frame based on each object track of the set of object tracks and the set of object tracks is updated based on the performed single-object tracking. For each frame received, a real-time object location data stream is provided based on the set of object tracks.
    Type: Application
    Filed: May 28, 2020
    Publication date: December 2, 2021
    Inventors: Ishani CHAKRABORTY, Yi-Ling CHEN, Lu YUAN
  • Publication number: 20210337036
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for a virtual object distribution method are provided. One of the methods includes: scanning a user's local environment for images; performing image identification on an acquired image; acquiring an electronic certificate from a server if an image identifier is identified in the image; saving the electronic certificate; and, in response to a determination that a category count of the received electronic certificates reaches a threshold, sending the server a virtual object distribution request to cause the server to distribute a virtual object to the user. This method significantly increases the interactivity and entertainment value of the virtual object distribution process.
    Type: Application
    Filed: July 7, 2021
    Publication date: October 28, 2021
    Inventors: Qinglong DUAN, Guanhua CHEN, Jing JI, Jiahui CHENG, Lu YUAN
  • Patent number: 11128822
    Abstract: An “Adaptive Exposure Corrector” performs automated real-time exposure correction of individual images or image sequences of arbitrary length. “Exposure correction” is defined herein as automated adjustments or corrections to any combination of shadows, highlights, high-frequency features, and color saturation of images. The Adaptive Exposure Corrector outputs perceptually improved images based on image ISO and camera ISO capabilities in combination with camera noise characteristics via exposure corrections by a variety of noise-aware image processing functions. An initial calibration process adapts these noise-aware image processing functions to the noise characteristics of particular camera models and types in combination with particular camera ISO settings. More specifically, this calibration process precomputes a Noise Aware Scaling Function (NASF) and a Color Scalar Function (CSF).
    Type: Grant
    Filed: July 26, 2016
    Date of Patent: September 21, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Lu Yuan, Sing Bing Kang, Chintan A. Shah
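The noise-aware idea (correct exposure less aggressively where the sensor is noisier) can be sketched with a toy shadow-lifting curve. The linear NASF, the gamma-style lift, and the ISO range are assumptions of mine for illustration; the patent's precomputed NASF and CSF come from a per-camera calibration process not reproduced here.

```python
import numpy as np

def noise_aware_scaling(iso, max_iso=6400):
    """Hypothetical stand-in for a precomputed NASF: the higher the ISO
    (i.e. the noisier the image), the smaller the scaling, so corrections
    are attenuated."""
    return 1.0 - 0.5 * (iso / max_iso)

def correct_exposure(image, iso):
    scale = noise_aware_scaling(iso)
    # Lift shadows with a gamma-style curve, attenuated by the noise-aware
    # scale so that noise hiding in high-ISO shadows is not amplified.
    lifted = image ** (1.0 / (1.0 + scale))
    return np.clip(lifted, 0.0, 1.0)

img = np.linspace(0.0, 1.0, 5)
low_iso = correct_exposure(img, iso=100)
high_iso = correct_exposure(img, iso=6400)
```

Because the calibration depends only on camera model and ISO setting, the scaling functions can be precomputed once offline, which is what makes the per-image correction cheap enough to run in real time.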
  • Publication number: 20210250531
    Abstract: An “Adaptive Exposure Corrector” performs automated real-time exposure correction of individual images or image sequences of arbitrary length. “Exposure correction” is defined herein as automated adjustments or corrections to any combination of shadows, highlights, high-frequency features, and color saturation of images. The Adaptive Exposure Corrector outputs perceptually improved images based on image ISO and camera ISO capabilities in combination with camera noise characteristics via exposure corrections by a variety of noise-aware image processing functions. An initial calibration process adapts these noise-aware image processing functions to the noise characteristics of particular camera models and types in combination with particular camera ISO settings. More specifically, this calibration process precomputes a Noise Aware Scaling Function (NASF) and a Color Scalar Function (CSF).
    Type: Application
    Filed: July 26, 2016
    Publication date: August 12, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Lu YUAN, Sing Bing KANG, Chintan A. SHAH
  • Patent number: 11070637
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for a virtual object distribution method are provided. One of the methods includes: scanning a user's local environment for images; performing image identification on an acquired image; acquiring an electronic certificate from a server if an image identifier is identified in the image; saving the electronic certificate; and, in response to a determination that a category count of the received electronic certificates reaches a threshold, sending the server a virtual object distribution request to cause the server to distribute a virtual object to the user. This method significantly increases the interactivity and entertainment value of the virtual object distribution process.
    Type: Grant
    Filed: May 24, 2019
    Date of Patent: July 20, 2021
    Assignee: ADVANCED NEW TECHNOLOGIES CO., LTD.
    Inventors: Qinglong Duan, Guanhua Chen, Jing Ji, Jiahui Cheng, Lu Yuan