Patents by Inventor Zhiqiang YUAN

Zhiqiang YUAN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11562486
    Abstract: Implementations relate to diagnosis of crop yield predictions and/or crop yields at the field- and pixel-level. In various implementations, a first temporal sequence of high-elevation digital images may be obtained that captures a first geographic area over a given time interval through a crop cycle of a first type of crop. Ground truth operational data generated through the given time interval and that influences a final crop yield of the first geographic area after the crop cycle may also be obtained. Based on these data, a ground truth-based crop yield prediction may be generated for the first geographic area at the crop cycle's end. Recommended operational change(s) may be identified based on distinct hypothetical crop yield prediction(s) for the first geographic area. Each distinct hypothetical crop yield prediction may be generated based on hypothetical operational data that includes altered data point(s) of the ground truth operational data.
    Type: Grant
    Filed: January 28, 2021
    Date of Patent: January 24, 2023
    Assignee: X DEVELOPMENT LLC
    Inventors: Cheng-en Guo, Wilson Zhao, Jie Yang, Zhiqiang Yuan, Elliott Grant
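The abstract above describes a diagnosis loop: perturb individual ground-truth operational data points, re-run the yield model, and surface the changes that improve the prediction. The sketch below illustrates that loop only in broad strokes; `predict_yield`, the data-point names, and the coefficients are hypothetical placeholders, not the patent's model.

```python
# Hypothetical diagnosis loop: perturb one operational data point at a time,
# re-predict yield, and rank changes by predicted gain over the ground truth.
def predict_yield(operational_data: dict) -> float:
    # Placeholder for a model trained on imagery plus ground truth operational data.
    return (operational_data["nitrogen_kg_ha"] * 0.02
            + operational_data["irrigation_mm"] * 0.01
            + operational_data["seed_rate_k_ha"] * 0.005)

def recommend_changes(ground_truth: dict, alternatives: dict) -> list:
    baseline = predict_yield(ground_truth)              # ground truth-based prediction
    recommendations = []
    for key, candidate_values in alternatives.items():
        for value in candidate_values:
            hypothetical = dict(ground_truth, **{key: value})   # altered data point
            gain = predict_yield(hypothetical) - baseline
            if gain > 0:
                recommendations.append((key, value, gain))
    return sorted(recommendations, key=lambda r: r[2], reverse=True)

ground_truth = {"nitrogen_kg_ha": 120, "irrigation_mm": 300, "seed_rate_k_ha": 80}
alternatives = {"nitrogen_kg_ha": [100, 140, 160], "irrigation_mm": [250, 350]}
print(recommend_changes(ground_truth, alternatives))
```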
  • Patent number: 11553634
    Abstract: Implementations are described herein for analyzing vision data depicting undesirable plants such as weeds to detect various attribute(s). The detected attribute(s) of a particular undesirable plant may then be used to select, from a plurality of available candidate remediation techniques, the most suitable remediation technique to eradicate or otherwise eliminate the undesirable plants.
    Type: Grant
    Filed: October 1, 2019
    Date of Patent: January 17, 2023
    Assignee: X DEVELOPMENT LLC
    Inventors: Elliott Grant, Hongxiao Liu, Zhiqiang Yuan, Sergey Yaroshenko, Benoit Schillings, Matt VanCleave
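As a rough illustration of the selection step described in the abstract above, the sketch below maps detected weed attributes to one of several candidate remediation techniques with simple rules. The attribute names, thresholds, and technique labels are assumptions for the example, not details from the patent.

```python
# Illustrative rule-based mapping from detected weed attributes to a remediation
# technique; all names and thresholds are placeholders.
def select_remediation(attributes: dict) -> str:
    if attributes.get("height_cm", 0) < 5 and attributes.get("density") == "sparse":
        return "mechanical_pluck"        # small, isolated weeds: pull or cut
    if attributes.get("near_crop_cm", 100) < 3:
        return "targeted_laser"          # too close to the crop for sprays
    if attributes.get("herbicide_resistant", False):
        return "mechanical_till"
    return "spot_spray"                  # default precision herbicide application

detected = {"height_cm": 12, "near_crop_cm": 2, "herbicide_resistant": False}
print(select_remediation(detected))      # -> "targeted_laser"
```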
  • Patent number: 11544920
    Abstract: Implementations are described herein for automatically generating synthetic training images that are usable as training data for training machine learning models to detect, segment, and/or classify various types of plants in digital images. In various implementations, a digital image may be obtained that captures an area. The digital image may depict the area under a lighting condition that existed in the area when a camera captured the digital image. Based at least in part on an agricultural history of the area, a plurality of three-dimensional synthetic plants may be generated. A synthetic training image may then be generated to depict the plurality of three-dimensional synthetic plants in the area. In some implementations, the generating may include graphically incorporating the plurality of three-dimensional synthetic plants with the digital image based on the lighting condition.
    Type: Grant
    Filed: August 31, 2021
    Date of Patent: January 3, 2023
    Assignee: X DEVELOPMENT LLC
    Inventors: Lianghao Li, Kangkang Wang, Zhiqiang Yuan
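The final compositing step described above can be pictured with a small NumPy sketch: a rendered plant (with an alpha channel) is blended into the captured field image, with its brightness scaled toward the scene's lighting condition. Generating the three-dimensional synthetic plants themselves is omitted, and the `lighting_gain` scalar is a simplifying assumption.

```python
# Rough sketch: alpha-composite a rendered synthetic plant into a real field image,
# scaling the plant's brightness to approximate the captured lighting condition.
import numpy as np

def composite(field_img, plant_rgba, top_left, lighting_gain=1.0):
    """field_img: HxWx3 float image; plant_rgba: hxwx4 rendered plant with alpha."""
    h, w = plant_rgba.shape[:2]
    y, x = top_left
    rgb = np.clip(plant_rgba[..., :3] * lighting_gain, 0.0, 1.0)  # match scene lighting
    alpha = plant_rgba[..., 3:4]
    region = field_img[y:y + h, x:x + w]
    field_img[y:y + h, x:x + w] = alpha * rgb + (1.0 - alpha) * region
    return field_img

field = np.full((480, 640, 3), 0.35)                 # stand-in for the captured image
plant = np.random.rand(64, 64, 4)                    # stand-in for a rendered plant
synthetic_training_image = composite(field, plant, (200, 300), lighting_gain=0.8)
```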
  • Publication number: 20220405962
    Abstract: Implementations are described herein for localizing individual plants using high-elevation images at multiple different resolutions. A first set of high-elevation images that capture the plurality of plants at a first resolution may be analyzed to classify a set of pixels as invariant anchor points. High-elevation images of the first set may be aligned with each other based on the invariant anchor points that are common among at least some of the first set of high-elevation images. A mapping may be generated between pixels of the aligned high-elevation images of the first set and spatially-corresponding pixels of a second set of higher-resolution high-elevation images. Based at least in part on the mapping, individual plant(s) of the plurality of plants may be localized within one or more of the second set of high-elevation images for performance of one or more agricultural tasks.
    Type: Application
    Filed: June 22, 2021
    Publication date: December 22, 2022
    Inventors: Zhiqiang Yuan, Jie Yang
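The pixel-to-sub-pixel mapping in this publication can be illustrated with a toy helper: once the two image sets are aligned, each pixel of the coarse-resolution image corresponds to a block of sub-pixels in the higher-resolution image. The integer `scale` factor and the coordinates below are assumptions for the example.

```python
# Minimal sketch, assuming the image sets are aligned and differ by an integer
# resolution ratio: map a coarse-image pixel to its block of high-resolution sub-pixels.
def to_subpixel_window(coarse_row, coarse_col, scale):
    """Return the (row, col) bounds in the high-resolution image that correspond
    to one pixel of the low-resolution image."""
    return ((coarse_row * scale, (coarse_row + 1) * scale),
            (coarse_col * scale, (coarse_col + 1) * scale))

# A plant detected at pixel (42, 17) of a 10 m/px image maps to a 10x10 block
# of a 1 m/px image, within which the individual plant can be localized.
print(to_subpixel_window(42, 17, scale=10))   # ((420, 430), (170, 180))
```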
  • Patent number: 11532080
    Abstract: Implementations are described herein for normalizing counts of plant-parts-of-interest detected in digital imagery to account for differences in spatial dimensions of plants, particularly plant heights. In various implementations, one or more digital images depicting a top of a first plant may be processed. The one or more digital images may have been acquired by a vision sensor carried over top of the first plant by a ground-based vehicle. Based on the processing: a distance of the vision sensor to the first plant may be estimated, and a count of visible plant-parts-of-interest that were captured within a field of view of the vision sensor may be determined. Based on the estimated distance, the count of visible plant-parts-of-interest may be normalized with another count of visible plant-parts-of-interest determined from one or more digital images capturing a second plant.
    Type: Grant
    Filed: November 17, 2020
    Date of Patent: December 20, 2022
    Assignee: X DEVELOPMENT LLC
    Inventors: Zhiqiang Yuan, Bodi Yuan, Ming Zheng
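A minimal sketch of the normalization idea follows: because the vision sensor's footprint grows with its distance to the canopy, raw plant-part counts from plants of different heights are rescaled to a common reference distance before they are compared. The inverse-square area factor is an illustrative simplification, not the patent's exact formula.

```python
# Hedged sketch: rescale raw counts to a common reference distance so plants of
# different heights (and thus different viewed areas) can be compared.
def normalized_count(raw_count, sensor_to_plant_m, reference_distance_m=1.0):
    # Viewed ground area grows roughly with the square of the distance,
    # so divide the raw count by the relative area factor.
    area_factor = (sensor_to_plant_m / reference_distance_m) ** 2
    return raw_count / area_factor

tall_plant = normalized_count(raw_count=38, sensor_to_plant_m=0.8)
short_plant = normalized_count(raw_count=52, sensor_to_plant_m=1.3)
print(tall_plant, short_plant)   # counts now comparable across plant heights
```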
  • Publication number: 20220398415
    Abstract: Implementations are described herein for localizing individual plants by aligning high-elevation images using invariant anchor points while disregarding variant feature points, such as deformable plants. High-elevation images that capture the plurality of plants at a resolution at which wind-triggered deformation of individual plants is perceptible between the high-elevation images may be obtained. First regions of the high-elevation images that depict the plurality of plants may be classified as variant features that are unusable as invariant anchor points. Second regions of the high-elevation images that are disjoint from the first set of regions may be classified as invariant anchor points. The high-elevation images may be aligned based on invariant anchor point(s) that are common among at least some of the high-elevation images. Based on the aligned high-elevation images, individual plant(s) may be localized within one of the high-elevation images for performance of one or more agricultural tasks.
    Type: Application
    Filed: June 10, 2021
    Publication date: December 15, 2022
    Inventor: Zhiqiang Yuan
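The alignment step can be sketched with a brute-force translation search that ignores pixels inside the plant (variant) mask, so wind-deformed canopy does not influence the fit. A production system would match invariant anchor-point features and estimate a full transform; the function below is only a toy illustration.

```python
# Simplified alignment sketch: find the integer shift that best matches the two
# images using only non-plant (invariant) pixels.
import numpy as np

def align_translation(img_a, img_b, plant_mask, max_shift=5):
    """Brute-force search for the integer shift that best matches non-plant pixels."""
    best, best_shift = np.inf, (0, 0)
    keep = ~plant_mask                                   # invariant anchor regions only
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(img_b, dy, axis=0), dx, axis=1)
            err = np.mean((img_a[keep] - shifted[keep]) ** 2)
            if err < best:
                best, best_shift = err, (dy, dx)
    return best_shift

rng = np.random.default_rng(0)
base = rng.random((64, 64))
moved = np.roll(base, 2, axis=0)                         # simulate a 2-pixel offset
mask = np.zeros_like(base, dtype=bool)                   # no plants in this toy example
print(align_translation(base, moved, mask))              # -> (-2, 0) undoes the offset
```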
  • Publication number: 20220391752
    Abstract: Implementations are described herein for automatically generating labeled synthetic images that are usable as training data for training machine learning models to make an agricultural prediction based on digital images. A method includes: generating a plurality of simulated images, each simulated image depicting one or more simulated instances of a plant; for each of the plurality of simulated images, labeling the simulated image with at least one ground truth label that identifies an attribute of the one or more simulated instances of the plant depicted in the simulated image, the attribute describing both a visible portion and an occluded portion of the one or more simulated instances of the plant depicted in the simulated image; and training a machine learning model to make an agricultural prediction using the labeled plurality of simulated images.
    Type: Application
    Filed: June 8, 2021
    Publication date: December 8, 2022
    Inventors: Elliott Grant, Kangkang Wang, Bodi Yuan, Zhiqiang Yuan
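Because the scenes are simulated, a ground truth label can describe both the visible and the occluded portion of each plant instance. The sketch below shows that labeling idea with hypothetical field names; it is not the patent's data format.

```python
# Illustrative labeling step: keep the occlusion-independent attribute of each
# simulated instance as the ground truth label for training.
from dataclasses import dataclass

@dataclass
class SimulatedInstance:
    plant_id: int
    total_leaf_area_cm2: float      # attribute of the full simulated plant
    visible_leaf_area_cm2: float    # what actually shows in the rendered pixels

def ground_truth_label(instances):
    # A model trained on these images learns to predict the full attribute
    # even from partially occluded views.
    return {inst.plant_id: inst.total_leaf_area_cm2 for inst in instances}

scene = [SimulatedInstance(1, 310.0, 190.0), SimulatedInstance(2, 275.0, 275.0)]
print(ground_truth_label(scene))
```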
  • Patent number: 11501443
    Abstract: Implementations relate to detecting/replacing transient obstructions from high-elevation digital images, and/or to fusing data from high-elevation digital images having different spatial, temporal, and/or spectral resolutions. In various implementations, first and second temporal sequences of high-elevation digital images capturing a geographic area may be obtained. These temporal sequences may have different spatial, temporal, and/or spectral resolutions (or frequencies). A mapping may be generated of the pixels of the high-elevation digital images of the second temporal sequence to respective sub-pixels of the first temporal sequence. A point in time may be selected at which to generate a synthetic high-elevation digital image of the geographic area. The synthetic high-elevation digital image may be generated for the point in time based on the mapping and other data described herein.
    Type: Grant
    Filed: December 2, 2020
    Date of Patent: November 15, 2022
    Assignee: X DEVELOPMENT LLC
    Inventors: Jie Yang, Cheng-en Guo, Zhiqiang Yuan, Elliott Grant, Hongxu Ma
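One piece of this approach, generating a synthetic image for a selected point in time between two acquisitions, can be pictured as a temporal blend. The sketch below shows only that interpolation, under the simplifying assumption that the images are already co-registered; the patent's method additionally uses the pixel-to-sub-pixel mapping and the second sequence.

```python
# Very simplified fusion sketch: synthesize an image for a chosen date by
# interpolating in time between the two nearest acquisitions.
import numpy as np

def synthetic_image(img_before, img_after, t_before, t_target, t_after):
    w = (t_target - t_before) / (t_after - t_before)     # temporal blend weight
    return (1.0 - w) * img_before + w * img_after

before = np.zeros((4, 4))          # high-resolution image at day 100
after = np.ones((4, 4))            # next clear high-resolution image at day 120
print(synthetic_image(before, after, 100, 115, 120))     # day-115 synthetic image
```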
  • Publication number: 20220358265
    Abstract: Implementations are described herein for realistic plant growth modeling and various applications thereof. In various implementations, a plurality of two-dimensional (2D) digital images that capture, over time, one or more plants of a particular type may be processed based on one or more machine learning models to generate output. The output may be analyzed to extract temporal features that capture change over time to one or more structural features of the particular type of plant. Based on the captured temporal features, a first parameter subspace of whole plant parameters may be learned, wherein the whole plant parameters are usable to generate a three-dimensional (3D) growth model that realistically simulates growth of the particular type of plant over time. Based on the first parameter subspace, one or more 3D growth models that simulate growth of the particular type of plant may be non-deterministically generated and used for various purposes.
    Type: Application
    Filed: May 4, 2021
    Publication date: November 10, 2022
    Inventors: Kangkang Wang, Bodi Yuan, Zhiqiang Yuan, Hong Wu, Daniel Ribeiro Silva, Zihao Li
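The "learn a parameter subspace, then sample from it" idea can be sketched by fitting a Gaussian over whole-plant growth parameters extracted from the 2D sequences and drawing new samples to instantiate non-deterministic growth models. The parameter names and the logistic growth form below are illustrative assumptions.

```python
# Sketch: fit a distribution over observed whole-plant growth parameters, then
# sample it to generate a new, non-deterministic growth model.
import numpy as np

observed_params = np.array([           # (max_height_cm, growth_rate, midpoint_day)
    [92.0, 0.11, 45.0],
    [88.0, 0.13, 42.0],
    [101.0, 0.10, 48.0],
])
mean, cov = observed_params.mean(axis=0), np.cov(observed_params, rowvar=False)

rng = np.random.default_rng(7)
sampled = rng.multivariate_normal(mean, cov)             # one new simulated plant

def height_at(day, params):
    max_h, rate, midpoint = params
    return max_h / (1.0 + np.exp(-rate * (day - midpoint)))   # logistic growth curve

print([round(height_at(d, sampled), 1) for d in (20, 40, 60, 80)])
```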
  • Publication number: 20220319005
    Abstract: Implementations are described herein for automatically generating synthetic training images that are usable, for instance, as training data for training machine learning models to detect and/or classify various types of plant diseases at various stages in digital images. In various implementations, one or more environmental features associated with an agricultural area may be retrieved. One or more synthetic plant models may be generated to visually simulate one or more stages of a progressive plant disease, taking into account the one or more environmental features associated with the agricultural area. The one or more synthetic plant models may be graphically incorporated into a synthetic training image that depicts the agricultural area.
    Type: Application
    Filed: June 17, 2022
    Publication date: October 6, 2022
    Inventors: Lianghao Li, Kangkang Wang, Zhiqiang Yuan
  • Patent number: 11398028
    Abstract: Implementations are described herein for automatically generating synthetic training images that are usable, for instance, as training data for training machine learning models to detect and/or classify various types of plant diseases at various stages in digital images. In various implementations, one or more environmental features associated with an agricultural area may be retrieved. One or more synthetic plant models may be generated to visually simulate one or more stages of a progressive plant disease, taking into account the one or more environmental features associated with the agricultural area. The one or more synthetic plant models may be graphically incorporated into a synthetic training image that depicts the agricultural area.
    Type: Grant
    Filed: June 8, 2020
    Date of Patent: July 26, 2022
    Assignee: X DEVELOPMENT LLC
    Inventors: Lianghao Li, Kangkang Wang, Zhiqiang Yuan
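One way to picture the environment-conditioned stage selection is a small scoring rule that pushes the simulated infection to a later stage under warmer, more humid conditions; the chosen stage then tags the synthetic plant model before rendering. Feature names, thresholds, and stage labels are assumptions for the sketch.

```python
# Illustrative only: choose a simulated disease stage from environmental features
# of the agricultural area, then attach it to the synthetic plant model.
def disease_stage(env: dict) -> str:
    # Warm, humid conditions push the simulated infection to later stages.
    risk = 0.6 * env.get("humidity", 0.0) + 0.4 * (env.get("temp_c", 0.0) / 40.0)
    if risk < 0.35:
        return "early_lesions"
    if risk < 0.65:
        return "spreading_blight"
    return "late_necrosis"

environment = {"humidity": 0.82, "temp_c": 29.0}
synthetic_plant = {"species": "tomato", "disease": "early blight",
                   "stage": disease_stage(environment)}
print(synthetic_plant)
```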
  • Publication number: 20220217894
    Abstract: Implementations are described herein for predicting soil organic carbon (“SOC”) content for agricultural fields detected in digital imagery. In various implementations, one or more digital images depicting portion(s) of one or more agricultural fields may be processed. The one or more digital images may have been acquired by a vision sensor carried through the field(s) by a ground-based vehicle. Based on the processing, one or more agricultural inferences indicating agricultural practices or conditions predicted to affect SOC content may be determined. Based on the agricultural inferences, one or more predicted SOC measurements for the field(s) may be determined.
    Type: Application
    Filed: January 12, 2021
    Publication date: July 14, 2022
    Inventors: Cheng-en Guo, Jie Yang, Zhiqiang Yuan, Elliott Grant
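The second stage described above, turning inferred practices and conditions into a predicted SOC measurement, is sketched below with a simple additive rule. The coefficients, baseline, and feature names are placeholders, not values from the application.

```python
# Minimal sketch: combine agricultural inferences drawn from ground-level imagery
# into a predicted soil organic carbon (SOC) measurement.
def predict_soc(inferences: dict, baseline_pct=1.2) -> float:
    soc = baseline_pct
    soc += 0.15 if inferences.get("cover_crop_present") else 0.0
    soc += 0.10 if inferences.get("residue_cover_high") else 0.0
    soc -= 0.20 if inferences.get("recent_tillage") else 0.0
    return round(soc, 2)

field_inferences = {"cover_crop_present": True, "residue_cover_high": True,
                    "recent_tillage": False}
print(predict_soc(field_inferences))   # predicted SOC, percent by mass
```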
  • Publication number: 20220219329
    Abstract: Implementations are described herein for coordinating semi-autonomous robots to perform agricultural tasks on a plurality of plants with minimal human intervention. In various implementations, a plurality of robots may be deployed to perform a respective plurality of agricultural tasks. Each agricultural task may be associated with a respective plant of a plurality of plants, and each plant may have been previously designated as a target for one of the agricultural tasks. It may be determined that a given robot has reached an individual plant associated with the respective agricultural task that was assigned to the given robot. Based at least in part on that determination, a manual control interface may be provided at output component(s) of a computing device in network communication with the given robot. The manual control interface may be operable to manually control the given robot to perform the respective agricultural task.
    Type: Application
    Filed: March 1, 2022
    Publication date: July 14, 2022
    Inventors: Zhiqiang Yuan, Elliott Grant
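The hand-off described in the abstract, autonomous navigation up to the target plant followed by a manual control interface for the task itself, is sketched below as a simple control flow. The class and method names are hypothetical; a real system would surface a networked UI rather than a print statement.

```python
# Illustrative control flow: robots drive to assigned plants autonomously, and a
# manual control interface is offered only once a robot reports arrival.
class Robot:
    def __init__(self, robot_id, target_plant):
        self.robot_id, self.target_plant, self.arrived = robot_id, target_plant, False

    def drive_toward_target(self):
        self.arrived = True            # stand-in for autonomous navigation

def coordinate(robots):
    for robot in robots:
        robot.drive_toward_target()
        if robot.arrived:
            # In the described system this opens a UI on a networked device.
            print(f"Robot {robot.robot_id} reached plant {robot.target_plant}: "
                  f"enabling manual control interface")

coordinate([Robot("r1", "plant-0042"), Robot("r2", "plant-0107")])
```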
  • Publication number: 20220156917
    Abstract: Implementations are described herein for normalizing counts of plant-parts-of-interest detected in digital imagery to account for differences in spatial dimensions of plants, particularly plant heights. In various implementations, one or more digital images depicting a top of a first plant may be processed. The one or more digital images may have been acquired by a vision sensor carried over top of the first plant by a ground-based vehicle. Based on the processing: a distance of the vision sensor to the first plant may be estimated, and a count of visible plant-parts-of-interest that were captured within a field of view of the vision sensor may be determined. Based on the estimated distance, the count of visible plant-parts-of-interest may be normalized with another count of visible plant-parts-of-interest determined from one or more digital images capturing a second plant.
    Type: Application
    Filed: November 17, 2020
    Publication date: May 19, 2022
    Inventors: Zhiqiang Yuan, Bodi Yuan, Ming Zheng
  • Publication number: 20220129673
    Abstract: Implementations are disclosed for selectively operating edge-based sensors and/or computational resources under circumstances dictated by observation of targeted plant trait(s) to generate targeted agricultural inferences. In various implementations, triage data may be acquired at a first level of detail from a sensor of an edge computing node carried through an agricultural field. The triage data may be locally processed at the edge using machine learning model(s) to detect targeted plant trait(s) exhibited by plant(s) in the field. Based on the detected plant trait(s), a region of interest (ROI) may be established in the field. Targeted inference data may be acquired at a second, greater level of detail from the sensor while the sensor is carried through the ROI. The targeted inference data may be locally processed at the edge using one or more of the machine learning models to make a targeted inference about plants within the ROI.
    Type: Application
    Filed: October 22, 2020
    Publication date: April 28, 2022
    Inventors: Sergey Yaroshenko, Zhiqiang Yuan
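The two-pass edge pipeline can be sketched as a cheap triage pass that seeds regions of interest, followed by a high-detail pass run only inside those regions. The stand-in sensors and models below are illustrative placeholders.

```python
# Sketch of the two-pass edge pipeline: low-detail triage everywhere, high-detail
# capture and inference only inside regions of interest (ROIs).
def triage_pass(positions, capture_low, detect_trait):
    return [p for p in positions if detect_trait(capture_low(p))]   # ROI seeds

def targeted_pass(roi_positions, capture_high, infer):
    return {p: infer(capture_high(p)) for p in roi_positions}       # detailed inference

# Toy stand-ins for sensors and models on the edge node:
capture_low = lambda p: {"pos": p, "detail": "low"}
capture_high = lambda p: {"pos": p, "detail": "high"}
detect_trait = lambda frame: frame["pos"] % 50 == 0     # pretend a trait shows up here
infer = lambda frame: "rust lesion count: 3"

rois = triage_pass(range(0, 200), capture_low, detect_trait)
print(targeted_pass(rois, capture_high, infer))
```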
  • Publication number: 20220121919
    Abstract: Techniques are disclosed that enable generating a predicted yield for a cereal grain crop based on one or more traits extracted from image(s) of the cereal grain crop. Various implementations include determining a heading trait value based on the number of identified spikelets, where the spikelets are identified by processing the image(s) of the cereal grain crop using a spikelet detection model. Additional or alternative implementations include generating a predicted cereal grain crop yield based on one or more additional or alternative trait values such as one or more heading values, one or more projected leaf area values, one or more stand spacing values, one or more wheat rust values, one or more maturity detection values, one or more intercropping phenotyping values extracted from cereal grains intercropped with other crops, one or more additional or alternative trait output values, and/or combinations thereof.
    Type: Application
    Filed: October 16, 2020
    Publication date: April 21, 2022
    Inventors: Zhiqiang Yuan, Theodore Monyak
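As a worked example of the heading-trait path, the sketch below converts a spikelet count detected in an image of known ground area into a rough per-hectare yield. All agronomic constants are placeholder assumptions, not figures from the application.

```python
# Illustrative arithmetic: detected spikelets per image -> per-area heading value
# -> rough yield estimate in tonnes per hectare.
def predicted_yield_t_ha(spikelet_count, image_area_m2,
                         grains_per_spikelet=25, grain_weight_g=0.04):
    spikelets_per_m2 = spikelet_count / image_area_m2          # heading trait value
    grams_per_m2 = spikelets_per_m2 * grains_per_spikelet * grain_weight_g
    return grams_per_m2 * 10_000 / 1_000_000                   # g/m^2 -> tonnes/ha

print(round(predicted_yield_t_ha(spikelet_count=180, image_area_m2=0.5), 2))  # 3.6
```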
  • Patent number: 11285612
    Abstract: Implementations are described herein for coordinating semi-autonomous robots to perform agricultural tasks on a plurality of plants with minimal human intervention. In various implementations, a plurality of robots may be deployed to perform a respective plurality of agricultural tasks. Each agricultural task may be associated with a respective plant of a plurality of plants, and each plant may have been previously designated as a target for one of the agricultural tasks. It may be determined that a given robot has reached an individual plant associated with the respective agricultural task that was assigned to the given robot. Based at least in part on that determination, a manual control interface may be provided at output component(s) of a computing device in network communication with the given robot. The manual control interface may be operable to manually control the given robot to perform the respective agricultural task.
    Type: Grant
    Filed: August 20, 2019
    Date of Patent: March 29, 2022
    Assignee: X DEVELOPMENT LLC
    Inventors: Zhiqiang Yuan, Elliott Grant
  • Patent number: 11268568
    Abstract: An assembling structure for a ceiling fan has a hanging rod, a hanging ball, and a hanging bracket. A top end of the hanging rod forms a bending edge. The hanging ball has a first hole and a supporting wall. The hanging rod is mounted through the first hole. The supporting wall has a supporting platform abutting a bottom surface of the bending edge. The hanging bracket has a ball mounting segment and a ceiling mounting segment adapted to be mounted to a ceiling. The ball mounting segment has a hanging hole. A diameter of the hanging hole is smaller than a diameter of the hanging ball. The hanging ball abuts downward a periphery of the hanging hole. The hanging rod is mounted through the hanging hole. With a large contact area between the hanging rod and the hanging ball, the structural strength is high.
    Type: Grant
    Filed: November 6, 2019
    Date of Patent: March 8, 2022
    Assignee: Foshan Carro Electrical Co., Ltd.
    Inventors: Jiansheng Zhang, Zhiqiang Yuan, Ruhui Huang
  • Publication number: 20220067451
    Abstract: Implementations are described herein for automatically generating quasi-realistic synthetic training images that are usable as training data for training machine learning models to perceive various types of plant traits in digital images. In various implementations, multiple labeled simulated images may be generated, each depicting simulated and labeled instance(s) of a plant having a targeted plant trait. In some implementations, the generating may include stochastically selecting features of the simulated instances of plants from a collection of plant assets associated with the targeted plant trait. The collection of plant assets may be obtained from ground truth digital image(s). In some implementations, the ground truth digital image(s) may depict real-life instances of plants having the target plant trait.
    Type: Application
    Filed: August 26, 2020
    Publication date: March 3, 2022
    Inventors: Kangkang Wang, Bodi Yuan, Lianghao Li, Zhiqiang Yuan
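The stochastic asset-selection step can be illustrated by drawing each simulated plant's features at random from a collection of assets associated with the targeted trait, so every instance in the training image differs. The asset names and trait label below are assumptions for the example.

```python
# Sketch: stochastically select plant features from an asset collection derived
# from ground truth imagery of plants with the targeted trait.
import random

asset_collection = {                         # derived from ground truth images
    "leaf_texture": ["leaf_tex_017", "leaf_tex_023", "leaf_tex_031"],
    "leaf_count": [8, 10, 12, 14],
    "stem_curvature": [0.05, 0.10, 0.20],
}

def simulate_instance(assets, rng):
    return {feature: rng.choice(options) for feature, options in assets.items()}

rng = random.Random(3)
labeled_image = {
    "instances": [simulate_instance(asset_collection, rng) for _ in range(2)],
    "label": "purple_leaf_trait",            # targeted plant trait for training
}
print(labeled_image)
```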
  • Patent number: 11256915
    Abstract: Implementations are described herein for utilizing various image processing techniques to facilitate tracking and/or counting of plant-parts-of-interest among crops. In various implementations, a sequence of digital images of a plant captured by a vision sensor while the vision sensor is moved relative to the plant may be obtained. A first digital image and a second digital image of the sequence may be analyzed to determine one or more constituent similarity scores between plant-parts-of-interest across the first and second digital images. The constituent similarity scores may be used, e.g., collectively as a composite similarity score, to determine whether a depiction of a plant-part-of-interest in the first digital images matches a depiction of a plant-part-of-interest in the second digital image.
    Type: Grant
    Filed: August 20, 2019
    Date of Patent: February 22, 2022
    Assignee: X DEVELOPMENT LLC
    Inventors: Yueqi Li, Hongxiao Liu, Zhiqiang Yuan
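The matching step can be sketched by computing constituent similarity scores between detections in consecutive frames, here bounding-box overlap and a size ratio, combining them into a composite score, and accepting a match above a threshold so the same plant-part-of-interest is not counted twice. The weights and threshold are illustrative, not the patent's values.

```python
# Minimal matching sketch: two constituent similarity scores combined into a
# composite score that decides whether two detections depict the same plant part.
def iou(a, b):
    ax1, ay1, ax2, ay2 = a; bx1, by1, bx2, by2 = b
    ix1, iy1, ix2, iy2 = max(ax1, bx1), max(ay1, by1), min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def size_ratio(a, b):
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return min(area(a), area(b)) / max(area(a), area(b))

def is_same_plant_part(box_prev, box_next, threshold=0.6):
    composite = 0.7 * iou(box_prev, box_next) + 0.3 * size_ratio(box_prev, box_next)
    return composite >= threshold

print(is_same_plant_part((10, 10, 50, 50), (14, 12, 54, 52)))   # likely a match
```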