Patents by Inventor Zhiqiang YUAN

Zhiqiang YUAN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220036070
    Abstract: Techniques are described herein for using artificial intelligence to predict crop yields based on observational crop data. A method includes: obtaining a first digital image of at least one plant; segmenting the first digital image of the at least one plant to identify at least one seedpod in the first digital image; for each of the at least one seedpod in the first digital image: determining a color of the seedpod; determining a number of seeds in the seedpod; inferring, using one or more machine learning models, a moisture content of the seedpod based on the color of the seedpod; and estimating, based on the moisture content of the seedpod and the number of seeds in the seedpod, a weight of the seedpod; and predicting a crop yield based on the moisture content and the weight of each of the at least one seedpod.
    Type: Application
    Filed: July 30, 2020
    Publication date: February 3, 2022
    Inventors: Bodi Yuan, Zhiqiang Yuan, Ming Zheng
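    Illustrative sketch: The abstract above walks through a per-seedpod pipeline (segment, read color, count seeds, infer moisture, estimate weight, sum into a yield). The minimal Python sketch below mirrors that flow under invented assumptions; the Seedpod fields, the color-to-moisture rule, and the weight formula are placeholders, not the claimed machine learning models.
      from dataclasses import dataclass
      from typing import List

      @dataclass
      class Seedpod:
          mean_rgb: tuple    # average color of the segmented seedpod pixels
          seed_count: int    # number of seeds detected inside the pod

      def infer_moisture(mean_rgb: tuple) -> float:
          """Toy stand-in for the learned color-to-moisture model (returns 0.0-1.0)."""
          r, g, b = mean_rgb
          greenness = g / max(r + g + b, 1)
          return min(1.0, greenness * 2.0)   # greener pods assumed to hold more water

      def estimate_weight_grams(moisture: float, seed_count: int) -> float:
          """Toy weight estimate: dry seed mass scaled up by the water fraction."""
          dry_mass = seed_count * 0.15       # assumed average dry seed weight in grams
          return dry_mass / max(1.0 - moisture, 0.1)

      def predict_yield(seedpods: List[Seedpod]) -> float:
          """Sum the per-pod weight estimates into a single yield figure (grams)."""
          return sum(
              estimate_weight_grams(infer_moisture(p.mean_rgb), p.seed_count)
              for p in seedpods
          )

      pods = [Seedpod((90, 160, 70), 3), Seedpod((150, 140, 60), 2)]   # segmentation output
      print(f"predicted yield: {predict_yield(pods):.1f} g")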
  • Publication number: 20210397836
    Abstract: Implementations are described herein for automatically generating synthetic training images that are usable as training data for training machine learning models to detect, segment, and/or classify various types of plants in digital images. In various implementations, a digital image may be obtained that captures an area. The digital image may depict the area under a lighting condition that existed in the area when a camera captured the digital image. Based at least in part on an agricultural history of the area, a plurality of three-dimensional synthetic plants may be generated. A synthetic training image may then be generated to depict the plurality of three-dimensional synthetic plants in the area. In some implementations, the generating may include graphically incorporating the plurality of three-dimensional synthetic plants with the digital image based on the lighting condition.
    Type: Application
    Filed: August 31, 2021
    Publication date: December 23, 2021
    Inventors: Lianghao Li, Kangkang Wang, Zhiqiang Yuan
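    Illustrative sketch: A rough Python sketch of the compositing step the abstract describes (placing synthetic plants into a captured image while respecting its lighting condition). The render_plant() stand-in, the brightness-based gain, and the plant count are assumptions for illustration, not the patented renderer or agricultural-history model.
      import numpy as np

      def render_plant(rng: np.random.Generator, size: int = 32) -> np.ndarray:
          """Placeholder 'renderer': a random green blob standing in for a 3-D plant model."""
          plant = np.zeros((size, size, 3))
          plant[..., 1] = rng.random((size, size))   # green channel only
          return plant

      def composite(image: np.ndarray, n_plants: int, seed: int = 0) -> np.ndarray:
          """Paste synthetic plants into the captured image, dimmed to match its lighting."""
          rng = np.random.default_rng(seed)
          out = image.astype(float).copy()
          gain = image.mean() / 255.0                # crude proxy for the scene's lighting
          h, w, _ = image.shape
          for _ in range(n_plants):
              plant = render_plant(rng) * 255.0 * gain
              y = int(rng.integers(0, h - 32))
              x = int(rng.integers(0, w - 32))
              mask = plant.sum(axis=-1, keepdims=True) > 10   # crude alpha mask
              out[y:y + 32, x:x + 32] = np.where(mask, plant, out[y:y + 32, x:x + 32])
          return out.clip(0, 255).astype(np.uint8)

      field_photo = np.full((256, 256, 3), 120, dtype=np.uint8)   # stand-in for the real capture
      synthetic_training_image = composite(field_photo, n_plants=5)
      print(synthetic_training_image.shape)   # (256, 256, 3)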
  • Publication number: 20210383535
    Abstract: Implementations are described herein for automatically generating synthetic training images that are usable, for instance, as training data for training machine learning models to detect and/or classify various types of plant diseases at various stages in digital images. In various implementations, one or more environmental features associated with an agricultural area may be retrieved. One or more synthetic plant models may be generated to visually simulate one or more stages of a progressive plant disease, taking into account the one or more environmental features associated with the agricultural area. The one or more synthetic plant models may be graphically incorporated into a synthetic training image that depicts the agricultural area.
    Type: Application
    Filed: June 8, 2020
    Publication date: December 9, 2021
    Inventors: Lianghao Li, Kangkang Wang, Zhiqiang Yuan
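    Illustrative sketch: A hedged Python sketch of staging a progressive disease on a synthetic plant model from environmental features, as the abstract outlines. The Environment fields, the stage-progression rule, and the lesion fractions are invented placeholders rather than the patented simulation.
      from dataclasses import dataclass

      @dataclass
      class Environment:
          humidity: float        # 0.0-1.0, e.g. from the area's weather records
          temperature_c: float

      def disease_stage(env: Environment, days_since_onset: int) -> int:
          """Toy rule: warm, humid fields progress through stages 0-4 faster."""
          rate = 1.0 + env.humidity + max(env.temperature_c - 20.0, 0.0) / 10.0
          return min(4, int(days_since_onset * rate) // 3)

      def lesion_fraction(stage: int) -> float:
          """Fraction of leaf area to mark as diseased on the synthetic plant model."""
          return [0.0, 0.05, 0.15, 0.4, 0.8][stage]

      env = Environment(humidity=0.8, temperature_c=28.0)
      for day in (0, 3, 6, 9):
          s = disease_stage(env, day)
          print(f"day {day}: stage {s}, lesion coverage {lesion_fraction(s):.0%}")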
  • Patent number: 11113525
    Abstract: Implementations are described herein for automatically generating synthetic training images that are usable as training data for training machine learning models to detect, segment, and/or classify various types of plants in digital images. In various implementations, a digital image may be obtained that captures an area. The digital image may depict the area under a lighting condition that existed in the area when a camera captured the digital image. Based at least in part on an agricultural history of the area, a plurality of three-dimensional synthetic plants may be generated. A synthetic training image may then be generated to depict the plurality of three-dimensional synthetic plants in the area. In some implementations, the generating may include graphically incorporating the plurality of three-dimensional synthetic plants with the digital image based on the lighting condition.
    Type: Grant
    Filed: May 18, 2020
    Date of Patent: September 7, 2021
    Assignee: X DEVELOPMENT LLC
    Inventors: Lianghao Li, Kangkang Wang, Zhiqiang Yuan
  • Publication number: 20210256702
    Abstract: Implementations relate to detecting/replacing transient obstructions from high-elevation digital images, and/or to fusing data from high-elevation digital images having different spatial, temporal, and/or spectral resolutions. In various implementations, first and second temporal sequences of high-elevation digital images capturing a geographic area may be obtained. These temporal sequences may have different spatial, temporal, and/or spectral resolutions (or frequencies). A mapping may be generated of the pixels of the high-elevation digital images of the second temporal sequence to respective sub-pixels of the first temporal sequence. A point in time for which a synthetic high-elevation digital image of the geographic area is to be generated may be selected. The synthetic high-elevation digital image may be generated for the point in time based on the mapping and other data described herein.
    Type: Application
    Filed: December 2, 2020
    Publication date: August 19, 2021
    Inventors: Jie Yang, Cheng-en Guo, Zhiqiang Yuan, Elliott Grant, Hongxu Ma
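    Illustrative sketch: A toy Python sketch of the two core ideas in the abstract: mapping each coarse pixel of one temporal sequence onto sub-pixels of the other, and synthesizing an image for a chosen point in time. The nearest-frame selection and 50/50 blend are simplifying assumptions, not the patented fusion method.
      import numpy as np

      def map_to_subpixels(low_res: np.ndarray, factor: int) -> np.ndarray:
          """Spread each low-resolution pixel over a factor x factor block of sub-pixels."""
          return np.kron(low_res, np.ones((factor, factor)))

      def synthesize(high_seq: dict, low_seq: dict, t: float, factor: int) -> np.ndarray:
          """Blend the nearest high-res frame with the nearest low-res frame mapped onto its grid."""
          t_hi = min(high_seq, key=lambda k: abs(k - t))   # nearest high-res timestamp
          t_lo = min(low_seq, key=lambda k: abs(k - t))    # nearest low-res timestamp
          low_on_fine_grid = map_to_subpixels(low_seq[t_lo], factor)
          return 0.5 * high_seq[t_hi] + 0.5 * low_on_fine_grid   # naive 50/50 fusion

      high_seq = {0.0: np.zeros((8, 8)), 10.0: np.ones((8, 8))}   # frequent captures, fine grid
      low_seq = {5.0: np.full((2, 2), 0.6)}                       # sparse captures, coarse grid
      synthetic = synthesize(high_seq, low_seq, t=6.0, factor=4)
      print(synthetic.shape)   # (8, 8)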
  • Publication number: 20210150717
    Abstract: Implementations relate to diagnosis of crop yield predictions and/or crop yields at the field- and pixel-level. In various implementations, a first temporal sequence of high-elevation digital images may be obtained that captures a geographic area over a given time interval through a crop cycle of a first type of crop. Ground truth operational data generated through the given time interval and that influences a final crop yield of the first geographic area after the crop cycle may also be obtained. Based on these data, a ground truth-based crop yield prediction may be generated for the first geographic area at the crop cycle's end. Recommended operational change(s) may be identified based on distinct hypothetical crop yield prediction(s) for the first geographic area. Each distinct hypothetical crop yield prediction may be generated based on hypothetical operational data that includes altered data point(s) of the ground truth operational data.
    Type: Application
    Filed: January 28, 2021
    Publication date: May 20, 2021
    Inventors: Cheng-en Guo, Wilson Zhao, Jie Yang, Zhiqiang Yuan, Elliott Grant
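    Illustrative sketch: A small Python sketch of the diagnosis loop the abstract describes: compare a ground-truth-based yield prediction with hypothetical predictions made from altered operational data, then rank the candidate changes. The predict_yield() model and the operational fields are invented stand-ins.
      def predict_yield(ops: dict) -> float:
          """Stand-in for the learned yield model (arbitrary units)."""
          return 100 + 20 * ops["nitrogen"] + 10 * ops["irrigation"] - 15 * ops["late_planting"]

      ground_truth_ops = {"nitrogen": 0.5, "irrigation": 0.4, "late_planting": 1.0}
      baseline = predict_yield(ground_truth_ops)   # ground truth-based prediction

      hypotheticals = {                            # each alters one ground-truth data point
          "apply more nitrogen": {**ground_truth_ops, "nitrogen": 0.8},
          "increase irrigation": {**ground_truth_ops, "irrigation": 0.7},
          "plant earlier": {**ground_truth_ops, "late_planting": 0.0},
      }

      recommendations = sorted(
          ((name, predict_yield(ops) - baseline) for name, ops in hypotheticals.items()),
          key=lambda pair: pair[1],
          reverse=True,
      )
      for name, lift in recommendations:
          print(f"{name}: predicted yield change {lift:+.1f}")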
  • Publication number: 20210092891
    Abstract: Implementations are described herein for analyzing vision data depicting undesirable plants such as weeds to detect various attribute(s). The detected attribute(s) of a particular undesirable plant may then be used to select, from a plurality of available candidate remediation techniques, the most suitable remediation technique to eradicate or otherwise eliminate the undesirable plants.
    Type: Application
    Filed: October 1, 2019
    Publication date: April 1, 2021
    Inventors: Elliott Grant, Hongxiao Liu, Zhiqiang Yuan, Sergey Yaroshenko, Benoit Schillings, Matt VanCleave
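    Illustrative sketch: A minimal Python sketch of selecting a remediation technique from detected weed attributes, as the abstract describes. The attribute names, candidate techniques, and selection rules are hypothetical examples, not the disclosed decision logic.
      from dataclasses import dataclass

      @dataclass
      class WeedAttributes:
          species: str
          height_cm: float
          near_crop: bool    # whether the weed overlaps a crop plant

      def select_remediation(weed: WeedAttributes) -> str:
          """Pick the least aggressive candidate technique that can still eliminate the weed."""
          if weed.near_crop:
              return "precision mechanical removal"   # avoid collateral damage to the crop
          if weed.height_cm < 5:
              return "targeted micro-dose herbicide"
          if weed.species in {"thistle", "bindweed"}:
              return "root extraction"
          return "broadcast mowing"

      print(select_remediation(WeedAttributes("pigweed", 12.0, near_crop=False)))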
  • Patent number: 10949972
    Abstract: Implementations relate to diagnosis of crop yield predictions and/or crop yields at the field- and pixel-level. In various implementations, a first temporal sequence of high-elevation digital images may be obtained that captures a geographic area over a given time interval through a crop cycle of a first type of crop. Ground truth operational data generated through the given time interval and that influences a final crop yield of the first geographic area after the crop cycle may also be obtained. Based on these data, a ground truth-based crop yield prediction may be generated for the first geographic area at the crop cycle's end. Recommended operational change(s) may be identified based on distinct hypothetical crop yield prediction(s) for the first geographic area. Each distinct hypothetical crop yield prediction may be generated based on hypothetical operational data that includes altered data point(s) of the ground truth operational data.
    Type: Grant
    Filed: December 31, 2018
    Date of Patent: March 16, 2021
    Assignee: X DEVELOPMENT LLC
    Inventors: Cheng-en Guo, Wilson Zhao, Jie Yang, Zhiqiang Yuan, Elliott Grant
  • Publication number: 20210053229
    Abstract: Implementations are described herein for coordinating semi-autonomous robots to perform agricultural tasks on a plurality of plants with minimal human intervention. In various implementations, a plurality of robots may be deployed to perform a respective plurality of agricultural tasks. Each agricultural task may be associated with a respective plant of a plurality of plants, and each plant may have been previously designated as a target for one of the agricultural tasks. It may be determined that a given robot has reached an individual plant associated with the respective agricultural task that was assigned to the given robot. Based at least in part on that determination, a manual control interface may be provided at output component(s) of a computing device in network communication with the given robot. The manual control interface may be operable to manually control the given robot to perform the respective agricultural task.
    Type: Application
    Filed: August 20, 2019
    Publication date: February 25, 2021
    Inventors: Zhiqiang Yuan, Elliott Grant
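    Illustrative sketch: A simplified Python sketch of the autonomy-with-handoff pattern in the abstract: each robot navigates to its target plant autonomously, and a manual control interface is surfaced only once it arrives. The Robot and AgriculturalTask classes and their methods are hypothetical.
      from dataclasses import dataclass

      @dataclass
      class AgriculturalTask:
          plant_id: str
          action: str          # e.g. "prune", "sample", "treat"

      class Robot:
          def __init__(self, name: str):
              self.name = name

          def navigate_to(self, plant_id: str) -> bool:
              print(f"{self.name}: driving autonomously to plant {plant_id}")
              return True      # pretend navigation always succeeds

      def open_manual_control_interface(robot: Robot, task: AgriculturalTask) -> None:
          """Stand-in for pushing a teleoperation UI to an operator's networked device."""
          print(f"operator UI: control {robot.name} to {task.action} plant {task.plant_id}")

      def coordinate(robots, tasks):
          for robot, task in zip(robots, tasks):
              if robot.navigate_to(task.plant_id):            # autonomous phase
                  open_manual_control_interface(robot, task)  # human-in-the-loop phase

      coordinate([Robot("rover-1"), Robot("rover-2")],
                 [AgriculturalTask("A17", "prune"), AgriculturalTask("B02", "sample")])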
  • Publication number: 20210056307
    Abstract: Implementations are described herein for utilizing various image processing techniques to facilitate tracking and/or counting of plant-parts-of-interest among crops. In various implementations, a sequence of digital images of a plant captured by a vision sensor while the vision sensor is moved relative to the plant may be obtained. A first digital image and a second digital image of the sequence may be analyzed to determine one or more constituent similarity scores between plant-parts-of-interest across the first and second digital images. The constituent similarity scores may be used, e.g., collectively as a composite similarity score, to determine whether a depiction of a plant-part-of-interest in the first digital image matches a depiction of a plant-part-of-interest in the second digital image.
    Type: Application
    Filed: August 20, 2019
    Publication date: February 25, 2021
    Inventors: Yueqi Li, Hongxiao Liu, Zhiqiang Yuan
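    Illustrative sketch: A toy Python sketch of matching a plant-part detection across two frames by combining constituent similarity scores into a composite score, as the abstract describes. The features, weights, and threshold are assumptions for illustration only.
      from dataclasses import dataclass
      from math import exp, hypot

      @dataclass
      class Detection:
          x: float       # pixel coordinates of the detected plant part
          y: float
          area: float    # size of the detection in pixels

      def position_similarity(a: Detection, b: Detection) -> float:
          return exp(-hypot(a.x - b.x, a.y - b.y) / 50.0)   # decays with pixel distance

      def size_similarity(a: Detection, b: Detection) -> float:
          return min(a.area, b.area) / max(a.area, b.area)

      def composite_similarity(a: Detection, b: Detection) -> float:
          return 0.6 * position_similarity(a, b) + 0.4 * size_similarity(a, b)

      def same_plant_part(a: Detection, b: Detection, threshold: float = 0.7) -> bool:
          return composite_similarity(a, b) >= threshold

      first_frame = Detection(x=100, y=80, area=400)
      second_frame = Detection(x=112, y=84, area=380)    # same fruit, camera moved slightly
      print(same_plant_part(first_frame, second_frame))  # True in this toy example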
  • Publication number: 20210011694
    Abstract: Techniques are described herein for translating source code in one programming language to source code in another programming language using machine learning. In various implementations, one or more components of one or more generative adversarial networks, such as a generator machine learning model, may be trained to generate “synthetically-naturalistic” source code that can be used as a translation of source code in an unfamiliar language. In some implementations, a discriminator machine learning model may be employed to aid in training the generator machine learning model, e.g., by being trained to discriminate between human-generated (“genuine”) and machine-generated (“synthetic”) source code.
    Type: Application
    Filed: July 9, 2019
    Publication date: January 14, 2021
    Inventors: Bin Ni, Zhiqiang Yuan, Qianyu Zhang
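    Illustrative sketch: A high-level Python sketch of the adversarial setup the abstract describes: a generator proposes translated source code and a discriminator scores how human-written it looks. Both functions and the single "temperature" knob are trivial placeholders, not the claimed networks or training procedure.
      def generator(source_name: str, temperature: float) -> str:
          """Placeholder 'translator': emits target-language text for the named program."""
          noise = "  # TODO tidy up" if temperature >= 0.5 else ""
          return f"def translated():\n    return run_legacy('{source_name}'){noise}"

      def discriminator(code: str) -> float:
          """Placeholder critic: probability that the code is human-written."""
          return 0.4 if "TODO" in code else 0.9

      temperature = 1.0
      for step in range(5):
          candidate = generator("PAYROLL.CBL", temperature)
          realism = discriminator(candidate)
          # A real system would backpropagate this adversarial signal into the
          # generator's weights; here one "temperature" knob stands in for them.
          if realism < 0.5:
              temperature *= 0.8
      print(f"final temperature: {temperature:.2f}")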
  • Patent number: 10891735
    Abstract: Implementations relate to detecting/replacing transient obstructions from high-elevation digital images, and/or to fusing data from high-elevation digital images having different spatial, temporal, and/or spectral resolutions. In various implementations, first and second temporal sequences of high-elevation digital images capturing a geographic area may be obtained. These temporal sequences may have different spatial, temporal, and/or spectral resolutions (or frequencies). A mapping may be generated of the pixels of the high-elevation digital images of the second temporal sequence to respective sub-pixels of the first temporal sequence. A point in time for which a synthetic high-elevation digital image of the geographic area is to be generated may be selected. The synthetic high-elevation digital image may be generated for the point in time based on the mapping and other data described herein.
    Type: Grant
    Filed: January 8, 2019
    Date of Patent: January 12, 2021
    Assignee: X DEVELOPMENT LLC
    Inventors: Jie Yang, Cheng-en Guo, Zhiqiang Yuan, Elliott Grant, Hongxu Ma
  • Publication number: 20200401883
    Abstract: Implementations are described herein for training and applying machine learning models to digital images capturing plants, and to other data indicative of attributes of individual plants captured in the digital images, to recognize individual plants in distinction from other individual plants. In various implementations, a digital image that captures a first plant of a plurality of plants may be applied, along with additional data indicative of an additional attribute of the first plant observed when the digital image was taken, as input across a machine learning model to generate output. Based on the output, an association may be stored in memory, e.g., of a database, between the digital image that captures the first plant and one or more previously-captured digital images of the first plant.
    Type: Application
    Filed: June 24, 2019
    Publication date: December 24, 2020
    Inventors: Jie Yang, Zhiqiang Yuan, Hongxu Ma, Cheng-en Guo, Elliott Grant, Yueqi Li
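    Illustrative sketch: A hedged Python sketch of associating a new photo of a plant with previously captured photos of the same individual by combining an image embedding with an additional observed attribute (here, a surveyed position). The embedding values, distance measures, and thresholds stand in for the learned model.
      from math import dist

      database = [   # previously captured images: (image_id, embedding, surveyed position)
          ("img_001", [0.10, 0.90, 0.30], (12.0, 4.5)),
          ("img_002", [0.80, 0.20, 0.10], (30.5, 9.0)),
      ]

      def match(new_embedding, new_position, max_embed_dist=0.5, max_pos_dist=1.0):
          """Return ids of stored images judged to show the same individual plant."""
          return [
              image_id
              for image_id, embedding, position in database
              if dist(new_embedding, embedding) <= max_embed_dist
              and dist(new_position, position) <= max_pos_dist
          ]

      # New capture of what should be the same individual plant as img_001:
      print(match([0.15, 0.85, 0.35], (12.3, 4.4)))   # ['img_001']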
  • Publication number: 20200362910
    Abstract: An assembling structure for a ceiling fan has a hanging rod, a hanging ball, and a hanging bracket. A top end of the hanging rod forms a bending edge. The hanging ball has a first hole and a supporting wall. The hanging rod is mounted through the first hole. The supporting wall has a supporting platform abutting a bottom surface of the bending edge. The hanging bracket has a ball mounting segment and a ceiling mounting segment adapted to be mounted to a ceiling. The ball mounting segment has a hanging hole. A diameter of the hanging hole is smaller than a diameter of the hanging ball. The hanging ball abuts downward against a periphery of the hanging hole. The hanging rod is mounted through the hanging hole. With a large contact area between the hanging rod and the hanging ball, the structural strength is high.
    Type: Application
    Filed: November 6, 2019
    Publication date: November 19, 2020
    Inventors: Jiansheng Zhang, Zhiqiang Yuan, Ruhui Huang
  • Patent number: 10823375
    Abstract: A light cover assembling structure has a light housing, a mounting sleeve, a light-emitting element, a transparent light cover, and a supporting cover. The mounting sleeve has an inner end mounted in the light housing and an outer end extending out of the light housing. An inner surface of the light housing and an outer surface of the mounting sleeve form an assembling space. The transparent light cover is mounted on the outer end of the mounting sleeve. The supporting cover is mounted on the outer end of the mounting sleeve, and has a first supporting side wall formed circumferentially and abutting the transparent light cover. The first supporting side wall of the supporting cover supports the transparent light cover such that the transparent light cover is firmly mounted on the mounting sleeve.
    Type: Grant
    Filed: November 6, 2019
    Date of Patent: November 3, 2020
    Assignee: Foshan Carro Electrical Co., Ltd.
    Inventors: Jiansheng Zhang, Zhiqiang Yuan
  • Publication number: 20200208809
    Abstract: A light cover assembling structure has a light housing, a mounting sleeve, a light-emitting element, a transparent light cover, and a supporting cover. The mounting sleeve has an inner end mounted in the light housing and an outer end extending out of the light housing. An inner surface of the light housing and an outer surface of the mounting sleeve form an assembling space. The transparent light cover is mounted on the outer end of the mounting sleeve. The supporting cover is mounted on the outer end of the mounting sleeve, and has a first supporting side wall formed circumferentially and abutting the transparent light cover. The first supporting side wall of the supporting cover supports the transparent light cover such that the transparent light cover is firmly mounted on the mounting sleeve.
    Type: Application
    Filed: November 6, 2019
    Publication date: July 2, 2020
    Inventors: Jiansheng Zhang, Zhiqiang Yuan
  • Patent number: 10638667
    Abstract: Systems and Methods for Augmented-Human Field Inspection Tools for Automated Phenotyping Systems and Agronomy Tools. In one embodiment, a method for plant phenotyping includes: acquiring a first set of observations about plants in a field by a trainer. The trainer carries a sensor configured to collect observations about the plants, and the first set of observations includes ground truth data. The method also includes processing the first set of observations about the plants by a trait extraction model to generate instructions for a trainee; and acquiring a second set of observations about the plants by the trainee while the trainee follows the instructions.
    Type: Grant
    Filed: December 26, 2017
    Date of Patent: May 5, 2020
    Assignee: X Development LLC
    Inventors: William Regan, Matthew Bitterman, David Brown, Elliott Grant, Zhiqiang Yuan
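    Illustrative sketch: A simplified Python sketch of the trainer/trainee loop in the abstract: ground-truth observations from the trainer are distilled into instructions that guide the trainee's second pass. The threshold rule standing in for the trait extraction model is an assumption.
      trainer_observations = [   # (plot_id, measured_height_cm) ground truth from the trainer
          ("plot-1", 82.0),
          ("plot-2", 47.0),
          ("plot-3", 91.0),
      ]

      def extract_instructions(observations, tall_threshold_cm=80.0):
          """Toy trait-extraction rule: flag plots that merit a closer second look."""
          return [
              f"re-measure {plot_id} with the handheld sensor"
              for plot_id, height in observations
              if height >= tall_threshold_cm
          ]

      def trainee_pass(instructions):
          """The trainee follows each generated instruction and records a new observation."""
          return {line.split()[1]: "second observation recorded" for line in instructions}

      instructions = extract_instructions(trainer_observations)
      print(trainee_pass(instructions))   # plot-1 and plot-3 get a second pass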
  • Publication number: 20200125929
    Abstract: Implementations relate to crop yield prediction at the field- and pixel-level. In various implementations, a first temporal sequence of high-elevation digital images may be obtained that capture a first geographic area and are acquired over a first predetermined time interval while the first geographic area includes a particular crop. A first plurality of other data points may also be obtained that influence a ground truth crop yield of the first geographic area after the first predetermined time interval. The first plurality of other data points may be grouped into temporal chunks corresponding temporally with respective images of the first temporal sequence. The first temporal sequence and the temporal chunks of the first plurality of other data points may be applied, e.g., iteratively, as input across a machine learning model to estimate a crop yield of the first geographic area at the end of the first predetermined time interval.
    Type: Application
    Filed: December 18, 2018
    Publication date: April 23, 2020
    Inventors: Cheng-en Guo, Wilson Zhao, Jie Yang, Zhiqiang Yuan
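    Illustrative sketch: A toy Python sketch of grouping non-image data points into temporal chunks aligned with each high-elevation image and feeding the pairs iteratively to a model, as the abstract describes; the running accumulator stands in for the learned yield model.
      image_times = [10, 20, 30]                      # acquisition day of each high-elevation image
      image_greenness = {10: 0.2, 20: 0.5, 30: 0.7}   # toy per-image crop signal

      other_data = [(8, 4.0), (12, 0.0), (19, 10.0), (27, 6.0), (30, 2.0)]   # (day, rainfall_mm)

      def chunk_by_image_time(points, times):
          """Assign every non-image data point to the nearest image acquisition time."""
          chunks = {t: [] for t in times}
          for day, value in points:
              nearest = min(times, key=lambda t: abs(t - day))
              chunks[nearest].append(value)
          return chunks

      def estimate_yield(times, greenness, chunks):
          """Apply image + chunk pairs iteratively; the running state mimics a recurrent model."""
          state = 0.0
          for t in times:
              state = 0.7 * state + greenness[t] * 50 + sum(chunks[t]) * 0.5
          return state

      chunks = chunk_by_image_time(other_data, image_times)
      print(f"estimated yield index: {estimate_yield(image_times, image_greenness, chunks):.1f}")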
  • Publication number: 20200126232
    Abstract: Implementations relate to diagnosis of crop yield predictions and/or crop yields at the field- and pixel-level. In various implementations, a first temporal sequence of high-elevation digital images may be obtained that captures a geographic area over a given time interval through a crop cycle of a first type of crop. Ground truth operational data generated through the given time interval and that influences a final crop yield of the first geographic area after the crop cycle may also be obtained. Based on these data, a ground truth-based crop yield prediction may be generated for the first geographic area at the crop cycle's end. Recommended operational change(s) may be identified based on distinct hypothetical crop yield prediction(s) for the first geographic area. Each distinct hypothetical crop yield prediction may be generated based on hypothetical operational data that includes altered data point(s) of the ground truth operational data.
    Type: Application
    Filed: December 31, 2018
    Publication date: April 23, 2020
    Inventors: Cheng-en Guo, Wilson Zhao, Jie Yang, Zhiqiang Yuan, Elliott Grant
  • Publication number: 20200125822
    Abstract: Implementations relate to detecting/replacing transient obstructions from high-elevation digital images, and/or to fusing data from high-elevation digital images having different spatial, temporal, and/or spectral resolutions. In various implementations, first and second temporal sequences of high-elevation digital images capturing a geographic area may be obtained. These temporal sequences may have different spatial, temporal, and/or spectral resolutions (or frequencies). A mapping may be generated of the pixels of the high-elevation digital images of the second temporal sequence to respective sub-pixels of the first temporal sequence. A point in time for which a synthetic high-elevation digital image of the geographic area is to be generated may be selected. The synthetic high-elevation digital image may be generated for the point in time based on the mapping and other data described herein.
    Type: Application
    Filed: January 8, 2019
    Publication date: April 23, 2020
    Inventors: Jie Yang, Cheng-en Guo, Zhiqiang Yuan, Elliott Grant, Hongxu Ma