Patents by Inventor Zhiqiang YUAN
Zhiqiang YUAN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11268568
Abstract: An assembling structure for a ceiling fan has a hanging rod, a hanging ball, and a hanging bracket. A top end of the hanging rod forms a bending edge. The hanging ball has a first hole and a supporting wall. The hanging rod is mounted through the first hole. The supporting wall has a supporting platform abutting a bottom surface of the bending edge. The hanging bracket has a ball mounting segment and a ceiling mounting segment adapted to be mounted to a ceiling. The ball mounting segment has a hanging hole. A diameter of the hanging hole is smaller than a diameter of the hanging ball. The hanging ball abuts downward a periphery of the hanging hole. The hanging rod is mounted through the hanging hole. With a large contact area between the hanging rod and the hanging ball, the structural strength is high.
Type: Grant
Filed: November 6, 2019
Date of Patent: March 8, 2022
Assignee: Foshan Carro Electrical Co., Ltd.
Inventors: Jiansheng Zhang, Zhiqiang Yuan, Ruhui Huang
-
Publication number: 20220067451
Abstract: Implementations are described herein for automatically generating quasi-realistic synthetic training images that are usable as training data for training machine learning models to perceive various types of plant traits in digital images. In various implementations, multiple labeled simulated images may be generated, each depicting simulated and labeled instance(s) of a plant having a targeted plant trait. In some implementations, the generating may include stochastically selecting features of the simulated instances of plants from a collection of plant assets associated with the targeted plant trait. The collection of plant assets may be obtained from ground truth digital image(s). In some implementations, the ground truth digital image(s) may depict real-life instances of plants having the targeted plant trait.
Type: Application
Filed: August 26, 2020
Publication date: March 3, 2022
Inventors: Kangkang Wang, Bodi Yuan, Lianghao Li, Zhiqiang Yuan
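The stochastic asset-selection step described in this abstract can be sketched as follows. This is a minimal illustration, not the patented pipeline: the asset categories, field names, and ranges below are invented stand-ins for whatever the real system extracts from ground truth images.

```python
import random

def build_simulated_plant(asset_collection, rng=None):
    """Stochastically assemble one labeled simulated plant instance from a
    collection of plant assets associated with a targeted trait.
    All keys here are hypothetical examples of such assets."""
    rng = rng or random.Random()
    return {
        "leaf_texture": rng.choice(asset_collection["leaf_textures"]),
        "leaf_count": rng.randint(*asset_collection["leaf_count_range"]),
        "lesion_mask": rng.choice(asset_collection["lesion_masks"]),
        "label": asset_collection["trait"],  # the targeted plant trait
    }
```

Sampling features independently per simulated instance is what lets a small set of ground-truth assets yield many distinct labeled training images.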
-
Patent number: 11256915
Abstract: Implementations are described herein for utilizing various image processing techniques to facilitate tracking and/or counting of plant-parts-of-interest among crops. In various implementations, a sequence of digital images of a plant captured by a vision sensor while the vision sensor is moved relative to the plant may be obtained. A first digital image and a second digital image of the sequence may be analyzed to determine one or more constituent similarity scores between plant-parts-of-interest across the first and second digital images. The constituent similarity scores may be used, e.g., collectively as a composite similarity score, to determine whether a depiction of a plant-part-of-interest in the first digital image matches a depiction of a plant-part-of-interest in the second digital image.
Type: Grant
Filed: August 20, 2019
Date of Patent: February 22, 2022
Assignee: X Development LLC
Inventors: Yueqi Li, Hongxiao Liu, Zhiqiang Yuan
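The matching logic described here, combining several constituent similarity scores into one composite score, can be illustrated with a toy example. The features (bounding-box overlap plus a one-number appearance cue), the weights, and the threshold are all invented for illustration and are not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """A detected plant-part-of-interest in one frame (hypothetical form)."""
    box: tuple   # (x1, y1, x2, y2) bounding box in pixels
    hue: float   # a toy appearance feature in [0, 1]

def iou(a, b):
    """Intersection-over-union of two boxes: one possible constituent score."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def composite_similarity(d1, d2, w_pos=0.6, w_app=0.4):
    """Combine constituent scores (position overlap, appearance) into one."""
    appearance = 1.0 - abs(d1.hue - d2.hue)
    return w_pos * iou(d1.box, d2.box) + w_app * appearance

def is_same_part(d1, d2, threshold=0.5):
    """Decide whether two depictions show the same plant-part-of-interest."""
    return composite_similarity(d1, d2) >= threshold
```

Counting then reduces to linking matched depictions across the sequence so that each physical fruit or pod is counted once, however many frames it appears in.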
-
Publication number: 20220036070
Abstract: Techniques are described herein for using artificial intelligence to predict crop yields based on observational crop data. A method includes: obtaining a first digital image of at least one plant; segmenting the first digital image of the at least one plant to identify at least one seedpod in the first digital image; for each of the at least one seedpod in the first digital image: determining a color of the seedpod; determining a number of seeds in the seedpod; inferring, using one or more machine learning models, a moisture content of the seedpod based on the color of the seedpod; and estimating, based on the moisture content of the seedpod and the number of seeds in the seedpod, a weight of the seedpod; and predicting a crop yield based on the moisture content and the weight of each of the at least one seedpod.
Type: Application
Filed: July 30, 2020
Publication date: February 3, 2022
Inventors: Bodi Yuan, Zhiqiang Yuan, Ming Zheng
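The per-seedpod pipeline in this abstract (color → inferred moisture → estimated weight → summed yield) can be sketched as below. The linear moisture function stands in for the claimed machine learning model, and every coefficient is an invented placeholder.

```python
def infer_moisture(greenness):
    """Stand-in for the learned color-to-moisture model: greener pods are
    assumed wetter. Input in [0, 1]; returns a moisture fraction."""
    return 0.2 + 0.6 * greenness

def estimate_pod_weight(moisture, num_seeds, dry_seed_g=0.15):
    """Toy weight estimate: dry seed mass scaled up by moisture content."""
    dry_mass = num_seeds * dry_seed_g
    return dry_mass / (1.0 - moisture)

def predict_yield(pods):
    """pods: iterable of (greenness, num_seeds) per segmented seedpod.
    Returns the total estimated weight in grams."""
    total = 0.0
    for greenness, num_seeds in pods:
        moisture = infer_moisture(greenness)
        total += estimate_pod_weight(moisture, num_seeds)
    return total
```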
-
Publication number: 20210397836
Abstract: Implementations are described herein for automatically generating synthetic training images that are usable as training data for training machine learning models to detect, segment, and/or classify various types of plants in digital images. In various implementations, a digital image may be obtained that captures an area. The digital image may depict the area under a lighting condition that existed in the area when a camera captured the digital image. Based at least in part on an agricultural history of the area, a plurality of three-dimensional synthetic plants may be generated. The synthetic training image may then be generated to depict the plurality of three-dimensional synthetic plants in the area. In some implementations, the generating may include graphically incorporating the plurality of three-dimensional synthetic plants with the digital image based on the lighting condition.
Type: Application
Filed: August 31, 2021
Publication date: December 23, 2021
Inventors: Lianghao Li, Kangkang Wang, Zhiqiang Yuan
-
Publication number: 20210383535
Abstract: Implementations are described herein for automatically generating synthetic training images that are usable, for instance, as training data for training machine learning models to detect and/or classify various types of plant diseases at various stages in digital images. In various implementations, one or more environmental features associated with an agricultural area may be retrieved. One or more synthetic plant models may be generated to visually simulate one or more stages of a progressive plant disease, taking into account the one or more environmental features associated with the agricultural area. The one or more synthetic plant models may be graphically incorporated into a synthetic training image that depicts the agricultural area.
Type: Application
Filed: June 8, 2020
Publication date: December 9, 2021
Inventors: Lianghao Li, Kangkang Wang, Zhiqiang Yuan
-
Patent number: 11113525
Abstract: Implementations are described herein for automatically generating synthetic training images that are usable as training data for training machine learning models to detect, segment, and/or classify various types of plants in digital images. In various implementations, a digital image may be obtained that captures an area. The digital image may depict the area under a lighting condition that existed in the area when a camera captured the digital image. Based at least in part on an agricultural history of the area, a plurality of three-dimensional synthetic plants may be generated. The synthetic training image may then be generated to depict the plurality of three-dimensional synthetic plants in the area. In some implementations, the generating may include graphically incorporating the plurality of three-dimensional synthetic plants with the digital image based on the lighting condition.
Type: Grant
Filed: May 18, 2020
Date of Patent: September 7, 2021
Assignee: X Development LLC
Inventors: Lianghao Li, Kangkang Wang, Zhiqiang Yuan
-
Publication number: 20210256702
Abstract: Implementations relate to detecting/replacing transient obstructions from high-elevation digital images, and/or to fusing data from high-elevation digital images having different spatial, temporal, and/or spectral resolutions. In various implementations, first and second temporal sequences of high-elevation digital images capturing a geographic area may be obtained. These temporal sequences may have different spatial, temporal, and/or spectral resolutions (or frequencies). A mapping may be generated of the pixels of the high-elevation digital images of the second temporal sequence to respective sub-pixels of the first temporal sequence. A point in time at which a synthetic high-elevation digital image of the geographic area is to be generated may be selected. The synthetic high-elevation digital image may be generated for the selected point in time based on the mapping and other data described herein.
Type: Application
Filed: December 2, 2020
Publication date: August 19, 2021
Inventors: Jie Yang, Cheng-en Guo, Zhiqiang Yuan, Elliott Grant, Hongxu Ma
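The pixel-to-sub-pixel mapping at the core of this abstract can be illustrated with a toy aligned-grid case. This is a sketch under strong simplifying assumptions (integer scale factor, perfectly registered grids, nearest-neighbour fill instead of the learned fusion); the real claims cover far more general data.

```python
def map_subpixels(low_res_shape, scale):
    """Map each pixel of the coarser image to the block of finer sub-pixels
    it covers, assuming aligned grids and an integer scale factor."""
    mapping = {}
    rows, cols = low_res_shape
    for r in range(rows):
        for c in range(cols):
            mapping[(r, c)] = [(r * scale + dr, c * scale + dc)
                               for dr in range(scale) for dc in range(scale)]
    return mapping

def synthesize(low_res, scale):
    """Nearest-neighbour stand-in for the fusion step: paint each fine
    sub-pixel with the value of its coarse parent pixel."""
    rows, cols = len(low_res), len(low_res[0])
    out = [[0.0] * (cols * scale) for _ in range(rows * scale)]
    for (r, c), subs in map_subpixels((rows, cols), scale).items():
        for sr, sc in subs:
            out[sr][sc] = low_res[r][c]
    return out
```

In the described implementations, the mapping is what lets coarse but frequent imagery be combined with fine but infrequent imagery into one synthetic image at a chosen point in time.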
-
Publication number: 20210150717
Abstract: Implementations relate to diagnosis of crop yield predictions and/or crop yields at the field- and pixel-level. In various implementations, a first temporal sequence of high-elevation digital images may be obtained that captures a geographic area over a given time interval through a crop cycle of a first type of crop. Ground truth operational data generated through the given time interval and that influences a final crop yield of the first geographic area after the crop cycle may also be obtained. Based on these data, a ground truth-based crop yield prediction may be generated for the first geographic area at the crop cycle's end. Recommended operational change(s) may be identified based on distinct hypothetical crop yield prediction(s) for the first geographic area. Each distinct hypothetical crop yield prediction may be generated based on hypothetical operational data that includes altered data point(s) of the ground truth operational data.
Type: Application
Filed: January 28, 2021
Publication date: May 20, 2021
Inventors: Cheng-en Guo, Wilson Zhao, Jie Yang, Zhiqiang Yuan, Elliott Grant
-
Publication number: 20210092891
Abstract: Implementations are described herein for analyzing vision data depicting undesirable plants such as weeds to detect various attribute(s). The detected attribute(s) of a particular undesirable plant may then be used to select, from a plurality of available candidate remediation techniques, the most suitable remediation technique to eradicate or otherwise eliminate the undesirable plants.
Type: Application
Filed: October 1, 2019
Publication date: April 1, 2021
Inventors: Elliott Grant, Hongxiao Liu, Zhiqiang Yuan, Sergey Yaroshenko, Benoit Schillings, Matt VanCleave
-
Patent number: 10949972
Abstract: Implementations relate to diagnosis of crop yield predictions and/or crop yields at the field- and pixel-level. In various implementations, a first temporal sequence of high-elevation digital images may be obtained that captures a geographic area over a given time interval through a crop cycle of a first type of crop. Ground truth operational data generated through the given time interval and that influences a final crop yield of the first geographic area after the crop cycle may also be obtained. Based on these data, a ground truth-based crop yield prediction may be generated for the first geographic area at the crop cycle's end. Recommended operational change(s) may be identified based on distinct hypothetical crop yield prediction(s) for the first geographic area. Each distinct hypothetical crop yield prediction may be generated based on hypothetical operational data that includes altered data point(s) of the ground truth operational data.
Type: Grant
Filed: December 31, 2018
Date of Patent: March 16, 2021
Assignee: X Development LLC
Inventors: Cheng-en Guo, Wilson Zhao, Jie Yang, Zhiqiang Yuan, Elliott Grant
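The counterfactual step described in this abstract, re-running the yield predictor on ground truth operational data with individual data points altered, can be sketched as follows. The linear model and its coefficients are invented placeholders for the trained predictor.

```python
def yield_model(ops):
    """Stand-in for the trained yield predictor, mapping operational data to
    a yield figure. The coefficients are purely illustrative."""
    return 1.5 + 0.01 * ops["nitrogen_kg"] + 0.002 * ops["irrigation_mm"]

def recommend_changes(ground_truth_ops, candidate_edits):
    """For each hypothetical edit (field, new value), compute the yield delta
    against the ground-truth prediction and rank edits by predicted gain."""
    baseline = yield_model(ground_truth_ops)
    results = []
    for field, value in candidate_edits:
        hypothetical = dict(ground_truth_ops)  # altered copy, original intact
        hypothetical[field] = value
        results.append((field, value, yield_model(hypothetical) - baseline))
    return sorted(results, key=lambda r: r[2], reverse=True)
```

Each entry of the returned ranking corresponds to one "distinct hypothetical crop yield prediction" in the abstract's terms, making the recommended operational changes the edits with the largest positive deltas.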
-
Publication number: 20210053229
Abstract: Implementations are described herein for coordinating semi-autonomous robots to perform agricultural tasks on a plurality of plants with minimal human intervention. In various implementations, a plurality of robots may be deployed to perform a respective plurality of agricultural tasks. Each agricultural task may be associated with a respective plant of a plurality of plants, and each plant may have been previously designated as a target for one of the agricultural tasks. It may be determined that a given robot has reached an individual plant associated with the respective agricultural task that was assigned to the given robot. Based at least in part on that determination, a manual control interface may be provided at output component(s) of a computing device in network communication with the given robot. The manual control interface may be operable to manually control the given robot to perform the respective agricultural task.
Type: Application
Filed: August 20, 2019
Publication date: February 25, 2021
Inventors: Zhiqiang Yuan, Elliott Grant
-
Publication number: 20210056307
Abstract: Implementations are described herein for utilizing various image processing techniques to facilitate tracking and/or counting of plant-parts-of-interest among crops. In various implementations, a sequence of digital images of a plant captured by a vision sensor while the vision sensor is moved relative to the plant may be obtained. A first digital image and a second digital image of the sequence may be analyzed to determine one or more constituent similarity scores between plant-parts-of-interest across the first and second digital images. The constituent similarity scores may be used, e.g., collectively as a composite similarity score, to determine whether a depiction of a plant-part-of-interest in the first digital image matches a depiction of a plant-part-of-interest in the second digital image.
Type: Application
Filed: August 20, 2019
Publication date: February 25, 2021
Inventors: Yueqi Li, Hongxiao Liu, Zhiqiang Yuan
-
Publication number: 20210011694
Abstract: Techniques are described herein for translating source code in one programming language to source code in another programming language using machine learning. In various implementations, one or more components of one or more generative adversarial networks, such as a generator machine learning model, may be trained to generate "synthetically-naturalistic" source code that can be used as a translation of source code in an unfamiliar language. In some implementations, a discriminator machine learning model may be employed to aid in training the generator machine learning model, e.g., by being trained to discriminate between human-generated ("genuine") and machine-generated ("synthetic") source code.
Type: Application
Filed: July 9, 2019
Publication date: January 14, 2021
Inventors: Bin Ni, Zhiqiang Yuan, Qianyu Zhang
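The generator/discriminator dynamic this abstract relies on can be illustrated far from source code: a minimal one-dimensional GAN in which "genuine" samples come from a fixed Gaussian and the generator learns a single parameter by fooling a logistic discriminator. This is a textbook GAN sketch, not the patented architecture; real implementations operate on token sequences with neural models.

```python
import math
import random

random.seed(1)

real_mu = 4.0        # the "genuine" distribution is N(real_mu, 1)
g_mu = 0.0           # the generator's single learnable parameter
d_w, d_b = 0.0, 0.0  # logistic discriminator parameters

def d_prob(x):
    """Discriminator: probability that sample x is genuine."""
    return 1.0 / (1.0 + math.exp(-(d_w * x + d_b)))

lr = 0.05
for _ in range(2000):
    real = real_mu + random.gauss(0, 1)
    fake = g_mu + random.gauss(0, 1)

    # Discriminator ascent on log D(real) + log(1 - D(fake)):
    # label genuine samples 1 and synthetic samples 0.
    d_w += lr * ((1 - d_prob(real)) * real - d_prob(fake) * fake)
    d_b += lr * ((1 - d_prob(real)) - d_prob(fake))

    # Generator ascent on log D(fake): move g_mu so the
    # discriminator is more likely to call the fake genuine.
    g_mu += lr * (1 - d_prob(fake)) * d_w
```

The generator parameter drifts toward the genuine distribution precisely because the discriminator keeps penalizing whatever still distinguishes synthetic samples, which is the same pressure that pushes a code-translation generator toward "synthetically-naturalistic" output.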
-
Patent number: 10891735
Abstract: Implementations relate to detecting/replacing transient obstructions from high-elevation digital images, and/or to fusing data from high-elevation digital images having different spatial, temporal, and/or spectral resolutions. In various implementations, first and second temporal sequences of high-elevation digital images capturing a geographic area may be obtained. These temporal sequences may have different spatial, temporal, and/or spectral resolutions (or frequencies). A mapping may be generated of the pixels of the high-elevation digital images of the second temporal sequence to respective sub-pixels of the first temporal sequence. A point in time at which a synthetic high-elevation digital image of the geographic area is to be generated may be selected. The synthetic high-elevation digital image may be generated for the selected point in time based on the mapping and other data described herein.
Type: Grant
Filed: January 8, 2019
Date of Patent: January 12, 2021
Assignee: X Development LLC
Inventors: Jie Yang, Cheng-en Guo, Zhiqiang Yuan, Elliott Grant, Hongxu Ma
-
Publication number: 20200401883
Abstract: Implementations are described herein for training and applying machine learning models to digital images capturing plants, and to other data indicative of attributes of individual plants captured in the digital images, to recognize individual plants in distinction from other individual plants. In various implementations, a digital image that captures a first plant of a plurality of plants may be applied, along with additional data indicative of an additional attribute of the first plant observed when the digital image was taken, as input across a machine learning model to generate output. Based on the output, an association may be stored in memory, e.g., of a database, between the digital image that captures the first plant and one or more previously-captured digital images of the first plant.
Type: Application
Filed: June 24, 2019
Publication date: December 24, 2020
Inventors: Jie Yang, Zhiqiang Yuan, Hongxu Ma, Cheng-en Guo, Elliott Grant, Yueqi Li
-
Publication number: 20200362910
Abstract: An assembling structure for a ceiling fan has a hanging rod, a hanging ball, and a hanging bracket. A top end of the hanging rod forms a bending edge. The hanging ball has a first hole and a supporting wall. The hanging rod is mounted through the first hole. The supporting wall has a supporting platform abutting a bottom surface of the bending edge. The hanging bracket has a ball mounting segment and a ceiling mounting segment adapted to be mounted to a ceiling. The ball mounting segment has a hanging hole. A diameter of the hanging hole is smaller than a diameter of the hanging ball. The hanging ball abuts downward a periphery of the hanging hole. The hanging rod is mounted through the hanging hole. With a large contact area between the hanging rod and the hanging ball, the structural strength is high.
Type: Application
Filed: November 6, 2019
Publication date: November 19, 2020
Inventors: Jiansheng Zhang, Zhiqiang Yuan, Ruhui Huang
-
Patent number: 10823375
Abstract: A light cover assembling structure has a light housing, a mounting sleeve, a light-emitting element, a transparent light cover, and a supporting cover. The mounting sleeve has an inner end mounted in the light housing and an outer end extending out of the light housing. An inner surface of the light housing and an outer surface of the mounting sleeve form an assembling space. The transparent light cover is mounted on the outer end of the mounting sleeve. The supporting cover is mounted on the outer end of the mounting sleeve, and has a first supporting side wall formed circumferentially and abutting the transparent light cover. The first supporting side wall of the supporting cover supports the transparent light cover such that the transparent light cover is firmly mounted on the mounting sleeve.
Type: Grant
Filed: November 6, 2019
Date of Patent: November 3, 2020
Assignee: Foshan Carro Electrical Co., Ltd.
Inventors: Jiansheng Zhang, Zhiqiang Yuan
-
Publication number: 20200208809
Abstract: A light cover assembling structure has a light housing, a mounting sleeve, a light-emitting element, a transparent light cover, and a supporting cover. The mounting sleeve has an inner end mounted in the light housing and an outer end extending out of the light housing. An inner surface of the light housing and an outer surface of the mounting sleeve form an assembling space. The transparent light cover is mounted on the outer end of the mounting sleeve. The supporting cover is mounted on the outer end of the mounting sleeve, and has a first supporting side wall formed circumferentially and abutting the transparent light cover. The first supporting side wall of the supporting cover supports the transparent light cover such that the transparent light cover is firmly mounted on the mounting sleeve.
Type: Application
Filed: November 6, 2019
Publication date: July 2, 2020
Inventors: Jiansheng Zhang, Zhiqiang Yuan
-
Patent number: 10638667
Abstract: Systems and Methods for Augmented-Human Field Inspection Tools for Automated Phenotyping Systems and Agronomy Tools. In one embodiment, a method for plant phenotyping includes: acquiring a first set of observations about plants in a field by a trainer. The trainer carries a sensor configured to collect observations about the plant, and the first set of observations includes ground truth data. The method also includes processing the first set of observations about the plants by a trait extraction model to generate instructions for a trainee; and acquiring a second set of observations about the plants by the trainee while the trainee follows the instructions.
Type: Grant
Filed: December 26, 2017
Date of Patent: May 5, 2020
Assignee: X Development LLC
Inventors: William Regan, Matthew Bitterman, David Brown, Elliott Grant, Zhiqiang Yuan