Patents by Inventor Yueqi Li

Yueqi Li has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11703351
    Abstract: Implementations are directed to assigning corresponding semantic identifiers to a plurality of rows of an agricultural field, generating a local mapping of the agricultural field that includes the plurality of rows of the agricultural field, and subsequently utilizing the local mapping in performance of one or more agricultural operations. In some implementations, the local mapping can be generated based on overhead vision data that captures at least a portion of the agricultural field. In these implementations, the local mapping can be generated based on GPS data associated with the portion of the agricultural field captured in the overhead vision data. In other implementations, the local mapping can be generated based on driving data generated during an episode of locomotion of a vehicle through the agricultural field. In these implementations, the local mapping can be generated based on GPS data associated with the vehicle traversing through the agricultural field.
    Type: Grant
    Filed: December 22, 2020
    Date of Patent: July 18, 2023
    Assignee: MINERAL EARTH SCIENCES LLC
    Inventors: Alan Eneev, Jie Yang, Yueqi Li, Yujing Qian, Nanzhu Wang, Sicong Wang, Sergey Yaroshenko
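The abstract above describes assigning semantic identifiers to field rows detected in overhead vision data. As a rough illustration only (not the patented method), the sketch below assigns sequential identifiers to rows given hypothetical GPS endpoints, ordering rows by longitude so neighbors get adjacent names; the `CropRow` type and `build_local_mapping` function are invented for this example.

```python
from dataclasses import dataclass

@dataclass
class CropRow:
    semantic_id: str
    start: tuple  # (lat, lon) of one end of the row
    end: tuple    # (lat, lon) of the other end

def build_local_mapping(row_endpoints):
    # Order rows west-to-east by their smaller longitude so that
    # neighboring rows receive adjacent semantic identifiers.
    ordered = sorted(row_endpoints, key=lambda ep: min(ep[0][1], ep[1][1]))
    return [CropRow(f"row-{i + 1}", start, end)
            for i, (start, end) in enumerate(ordered)]

rows = build_local_mapping([
    ((41.0010, -93.0000), (41.0020, -93.0000)),
    ((41.0010, -93.0020), (41.0020, -93.0020)),
    ((41.0010, -93.0010), (41.0020, -93.0010)),
])
# The westernmost row (longitude -93.0020) becomes "row-1".
```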
  • Patent number: 11687960
    Abstract: Implementations are described herein for using machine learning to determine whether candidate crop fields are suitable for management by particular agricultural entities. In various implementations, a machine learning model may be applied to input data to generate output data. The input data may include a first plurality of data points corresponding to field-level agricultural management practices of an agricultural entity. The output data may be indicative of one or more predicted outcomes of the agricultural entity implementing the field-level agricultural management practices on one or more candidate crop fields not currently managed by the agricultural entity. Based on one or more of the predicted outcomes, one or more computing devices may be caused to provide a user associated with the agricultural entity with information about one or more of the candidate crop fields, and/or one or more parameter inputs of a graphical user interface may be prepopulated.
    Type: Grant
    Filed: March 8, 2022
    Date of Patent: June 27, 2023
    Assignee: MINERAL EARTH SCIENCES LLC
    Inventors: Nanzhu Wang, Chunfeng Wen, Yueqi Li
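The abstract above applies a model to management-practice data points to score candidate fields. A minimal sketch of that flow, with all names and the threshold invented for illustration and `sum` standing in for a trained model:

```python
def score_candidate_fields(model, practice_points, candidate_fields, threshold):
    # Combine the entity's management-practice data points with each
    # candidate field's own features, score with the model, and keep
    # only candidates whose predicted outcome clears the threshold.
    scored = []
    for field in candidate_fields:
        prediction = model(practice_points + field["features"])
        scored.append({"field_id": field["id"], "predicted": prediction})
    return [s for s in scored if s["predicted"] >= threshold]

# Stand-in "model": sum of features; a real system would use a trained
# regression or classification model over far richer inputs.
hits = score_candidate_fields(
    model=sum,
    practice_points=[50.0, 60.0],
    candidate_fields=[
        {"id": "field-A", "features": [45.0]},
        {"id": "field-B", "features": [30.0]},
    ],
    threshold=150.0,
)
```

The surviving entries are the candidates a UI might surface to the user or use to prepopulate parameter inputs.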
  • Publication number: 20230171303
    Abstract: Implementations are disclosed for dynamically allocating aspects of platform-independent machine-learning based agricultural state machines among edge and cloud computing resources. In various implementations, a GUI may include a working canvas on which graphical elements corresponding to platform-independent logical routines are manipulable to define a platform-independent agricultural state machine. Some of the platform-independent logical routines may include logical operations that process agricultural data using phenotyping machine learning model(s). Edge computing resource(s) available to a user for which the agricultural state machine is to be implemented may be identified. Constraint(s) imposed by the user on implementation of the agricultural state machine may be ascertained.
    Type: Application
    Filed: December 1, 2021
    Publication date: June 1, 2023
    Inventors: Yueqi Li, Alexander Ngai
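One simple way to picture the edge/cloud allocation described above (purely a hypothetical sketch, not the disclosed algorithm) is a greedy placement under a user-imposed edge-capacity constraint:

```python
def allocate_routines(routines, edge_capacity):
    # routines: (name, cost, latency_sensitive) triples. Latency-
    # sensitive routines go to the edge while capacity remains;
    # everything else falls back to cloud computing resources.
    placement, used = {}, 0
    for name, cost, latency_sensitive in routines:
        if latency_sensitive and used + cost <= edge_capacity:
            placement[name] = "edge"
            used += cost
        else:
            placement[name] = "cloud"
    return placement

placement = allocate_routines(
    [("detect_pests", 3, True),   # phenotyping inference, wants low latency
     ("train_model", 10, True),   # too heavy for this edge device
     ("weekly_report", 1, False)],
    edge_capacity=5,
)
```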
  • Publication number: 20230057168
    Abstract: Implementations are disclosed for facilitating visual programming of machine learning state machines. In various implementations, one or more graphical user interfaces (GUIs) may be rendered on one or more displays. Each GUI may include a working canvas on which a plurality of graphical elements corresponding to at least some of a plurality of available logical routines are manipulable to define a machine learning state machine. One or more of the available logical routines may include logical operations that process data using machine learning model(s). Two or more at least partially redundant logical routines that include overlapping logical operations may be identified, and overlapping logical operations of the two or more at least partially redundant logical routines may be merged into a consolidated logical routine. At least some of the logical operations that were previously downstream from the overlapping logical operations may be logically coupled with the consolidated logical routine.
    Type: Application
    Filed: August 23, 2021
    Publication date: February 23, 2023
    Inventor: Yueqi Li
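The merging of "at least partially redundant" routines described above can be sketched as follows, under the simplifying assumption that redundancy appears as a shared operation prefix; the function name and data shapes are invented for illustration:

```python
def merge_redundant_routines(routines):
    # routines: list of (name, [operation, ...]) pairs. Find the shared
    # operation prefix and split it out as one consolidated routine;
    # each routine's downstream remainder re-couples to that
    # consolidated routine instead of repeating the shared work.
    op_lists = [ops for _, ops in routines]
    shared = []
    for step in zip(*op_lists):
        if len(set(step)) == 1:
            shared.append(step[0])
        else:
            break
    remainders = {name: ops[len(shared):] for name, ops in routines}
    return shared, remainders

consolidated, downstream = merge_redundant_routines([
    ("count_fruit",   ["load_images", "detect_plants", "count"]),
    ("measure_fruit", ["load_images", "detect_plants", "measure"]),
])
```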
  • Publication number: 20230059741
Abstract: Implementations are disclosed for automated design and implementation of machine learning (ML) state machines that include at least some aspect of machine learning. In various implementations, unstructured input may be received from a user. The unstructured input may convey operational aspect(s) of a machine learning (ML) state machine desired by the user. The unstructured input may be semantically processed to determine an intent of the user. The intent may include the operational aspect(s) of the ML state machine desired by the user. Based on the intent of the user, a plurality of modular logical routines may be selected from an existing library of modular logical routines. At least one logical routine of the selected plurality of logical routines may include logical operations that process data using one or more machine learning models. The selected plurality of logical routines may be assembled into the desired ML state machine.
    Type: Application
    Filed: August 23, 2021
    Publication date: February 23, 2023
    Inventor: Yueqi Li
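As a toy stand-in for the semantic intent processing described above (real systems would use far richer NLU; every name here is hypothetical), keyword spotting can select modular routines from a library:

```python
def select_routines(unstructured_input, library):
    # Naive keyword spotting: pick every library routine whose trigger
    # words appear in the user's free-form request.
    text = unstructured_input.lower()
    return [name for name, keywords in library.items()
            if any(kw in text for kw in keywords)]

library = {
    "count_fruit":    ["count", "how many"],
    "estimate_yield": ["yield", "harvest"],
    "detect_disease": ["disease", "blight"],
}
selected = select_routines("Count the strawberries and predict yield", library)
```

The selected routines would then be assembled, in order, into the desired ML state machine.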
  • Publication number: 20230028706
    Abstract: Implementations are described herein for edge-based real time crop yield predictions made using sampled subsets of robotically-acquired vision data. In various implementations, one or more robots may be deployed amongst a plurality of plants in an area such as a field. Using one or more vision sensors of the one or more robots, a superset of high resolution images may be acquired that depict the plurality of plants. A subset of multiple high resolution images may then be sampled from the superset of high resolution images. Data indicative of the subset of high resolution images may be applied as input across a machine learning model, with or without additional data, to generate output indicative of a real time crop yield prediction.
    Type: Application
    Filed: October 5, 2022
    Publication date: January 26, 2023
    Inventors: Kathleen Watson, Jie Yang, Yueqi Li
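The sample-then-extrapolate idea in the abstract above can be sketched in a few lines; this is an illustration with invented names, using per-image fruit counts as the model output:

```python
import random

def predict_yield(images, count_model, k, seed=0):
    # Run the expensive model on only k sampled images so an edge
    # device can keep up in real time, then extrapolate the sampled
    # counts to the full superset of high-resolution images.
    idx = sorted(random.Random(seed).sample(range(len(images)), k))
    sampled_counts = [count_model(images[i]) for i in idx]
    return sum(sampled_counts) / k * len(images)

# Toy usage: "images" are stand-in per-image fruit counts and the
# "model" is the identity function.
estimate = predict_yield(list(range(10)), lambda img: img, k=10)
```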
  • Patent number: 11562497
Abstract: Implementations are described herein for analyzing a sequence of digital images captured by a mobile vision sensor (e.g., integral with a robot), in conjunction with information (e.g., ground truth) known about movement of the vision sensor, to determine spatial dimensions of object(s) and/or an area captured in a field of view of the mobile vision sensor. Techniques avoid the use of visual indicia of known dimensions and/or other conventional tools for determining spatial dimensions, such as checkerboards. Instead, techniques described herein allow spatial dimensions to be determined using fewer resources, and are more scalable than conventional techniques.
    Type: Grant
    Filed: September 22, 2021
    Date of Patent: January 24, 2023
    Assignee: X DEVELOPMENT LLC
    Inventor: Yueqi Li
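The core geometric idea above (using known camera motion instead of a checkerboard) can be illustrated with a simplified flat-scene sketch; the function and its parameters are hypothetical:

```python
def estimate_width_m(feature_shift_px, camera_travel_m, object_span_px):
    # A static ground feature that shifts feature_shift_px pixels while
    # the camera travels camera_travel_m meters gives the scene scale
    # in pixels per meter; dividing an object's pixel span by that
    # scale yields its width in meters, with no reference marker of
    # known dimensions required.
    px_per_m = feature_shift_px / camera_travel_m
    return object_span_px / px_per_m

width = estimate_width_m(feature_shift_px=200, camera_travel_m=0.5,
                         object_span_px=80)
```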
  • Patent number: 11508092
    Abstract: Implementations are described herein for edge-based real time crop yield predictions made using sampled subsets of robotically-acquired vision data. In various implementations, one or more robots may be deployed amongst a plurality of plants in an area such as a field. Using one or more vision sensors of the one or more robots, a superset of high resolution images may be acquired that depict the plurality of plants. A subset of multiple high resolution images may then be sampled from the superset of high resolution images. Data indicative of the subset of high resolution images may be applied as input across a machine learning model, with or without additional data, to generate output indicative of a real time crop yield prediction.
    Type: Grant
    Filed: December 16, 2019
    Date of Patent: November 22, 2022
    Assignee: X DEVELOPMENT LLC
    Inventors: Kathleen Watson, Jie Yang, Yueqi Li
  • Publication number: 20220196433
    Abstract: Implementations are directed to assigning corresponding semantic identifiers to a plurality of rows of an agricultural field, generating a local mapping of the agricultural field that includes the plurality of rows of the agricultural field, and subsequently utilizing the local mapping in performance of one or more agricultural operations. In some implementations, the local mapping can be generated based on overhead vision data that captures at least a portion of the agricultural field. In these implementations, the local mapping can be generated based on GPS data associated with the portion of the agricultural field captured in the overhead vision data. In other implementations, the local mapping can be generated based on driving data generated during an episode of locomotion of a vehicle through the agricultural field. In these implementations, the local mapping can be generated based on GPS data associated with the vehicle traversing through the agricultural field.
    Type: Application
    Filed: December 22, 2020
    Publication date: June 23, 2022
    Inventors: Alan Eneev, Jie Yang, Yueqi Li, Yujing Qian, Nanzhu Wang, Sicong Wang, Sergey Yaroshenko
  • Publication number: 20220188854
    Abstract: Implementations are described herein for using machine learning to determine whether candidate crop fields are suitable for management by particular agricultural entities. In various implementations, a machine learning model may be applied to input data to generate output data. The input data may include a first plurality of data points corresponding to field-level agricultural management practices of an agricultural entity. The output data may be indicative of one or more predicted outcomes of the agricultural entity implementing the field-level agricultural management practices on one or more candidate crop fields not currently managed by the agricultural entity. Based on one or more of the predicted outcomes, one or more computing devices may be caused to provide a user associated with the agricultural entity with information about one or more of the candidate crop fields, and/or one or more parameter inputs of a graphical user interface may be prepopulated.
    Type: Application
    Filed: March 8, 2022
    Publication date: June 16, 2022
    Inventors: Nanzhu Wang, Chunfeng Wen, Yueqi Li
  • Patent number: 11341656
Abstract: Implementations described herein are directed to reconciling disparate orientations of multiple vision sensors deployed on a mobile robot (or other mobile vehicle) by altering orientations of the vision sensors or digital images they generate. In various implementations, this reconciliation may be performed with little or no ground truth knowledge of movement of the robot. Techniques described herein also avoid the use of visual indicia of known dimensions and/or other conventional tools for determining vision sensor orientations. Instead, techniques described herein allow vision sensor orientations to be determined and/or reconciled using fewer resources, and are more scalable than conventional techniques.
    Type: Grant
    Filed: November 12, 2020
    Date of Patent: May 24, 2022
    Assignee: X DEVELOPMENT LLC
    Inventor: Yueqi Li
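A toy version of reconciling sensor orientations from image motion alone (a sketch under strong simplifying assumptions, not the disclosed method): compare each sensor's dominant optical-flow direction and rotate one to match the other.

```python
import math

def dominant_flow_angle(flow_vectors):
    # Direction of the summed optical-flow vectors for one sensor.
    sx = sum(v[0] for v in flow_vectors)
    sy = sum(v[1] for v in flow_vectors)
    return math.atan2(sy, sx)

def reconcile_orientation(flow_ref, flow_other):
    # Rotation (radians) to apply to the second sensor's images so its
    # apparent motion direction matches the reference sensor's, with no
    # ground-truth knowledge of the robot's actual movement.
    return dominant_flow_angle(flow_ref) - dominant_flow_angle(flow_other)

# Reference sensor sees motion along +x; the other sensor, mounted
# rotated 90 degrees, sees the same motion along +y.
angle = reconcile_orientation([(1.0, 0.0), (2.0, 0.0)],
                              [(0.0, 1.0), (0.0, 2.0)])
```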
  • Patent number: 11295331
    Abstract: Implementations are described herein for using machine learning to determine whether candidate crop fields are suitable for management by particular agricultural entities. In various implementations, a machine learning model may be applied to input data to generate output data. The input data may include a first plurality of data points corresponding to field-level agricultural management practices of an agricultural entity. The output data may be indicative of one or more predicted outcomes of the agricultural entity implementing the field-level agricultural management practices on one or more candidate crop fields not currently managed by the agricultural entity. Based on one or more of the predicted outcomes, one or more computing devices may be caused to provide a user associated with the agricultural entity with information about one or more of the candidate crop fields, and/or one or more parameter inputs of a graphical user interface may be prepopulated.
    Type: Grant
    Filed: July 1, 2020
    Date of Patent: April 5, 2022
    Assignee: X DEVELOPMENT LLC
    Inventors: Nanzhu Wang, Chunfeng Wen, Yueqi Li
  • Patent number: 11256915
Abstract: Implementations are described herein for utilizing various image processing techniques to facilitate tracking and/or counting of plant-parts-of-interest among crops. In various implementations, a sequence of digital images of a plant captured by a vision sensor while the vision sensor is moved relative to the plant may be obtained. A first digital image and a second digital image of the sequence may be analyzed to determine one or more constituent similarity scores between plant-parts-of-interest across the first and second digital images. The constituent similarity scores may be used, e.g., collectively as a composite similarity score, to determine whether a depiction of a plant-part-of-interest in the first digital image matches a depiction of a plant-part-of-interest in the second digital image.
    Type: Grant
    Filed: August 20, 2019
    Date of Patent: February 22, 2022
    Assignee: X DEVELOPMENT LLC
    Inventors: Yueqi Li, Hongxiao Liu, Zhiqiang Yuan
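The constituent-plus-composite scoring described above can be sketched with two invented constituent scores (location and size; a real system would also use appearance features):

```python
def location_sim(a, b, max_shift_px=100.0):
    # 1.0 for identical positions, falling linearly to 0.0 at max_shift_px.
    dx, dy = a["pos"][0] - b["pos"][0], a["pos"][1] - b["pos"][1]
    return max(0.0, 1.0 - (dx * dx + dy * dy) ** 0.5 / max_shift_px)

def size_sim(a, b):
    lo, hi = sorted([a["size"], b["size"]])
    return lo / hi

def composite_similarity(a, b, weights=(0.6, 0.4)):
    # Weighted combination of the constituent similarity scores.
    return weights[0] * location_sim(a, b) + weights[1] * size_sim(a, b)

berry_frame1 = {"pos": (0.0, 0.0), "size": 10.0}
berry_frame2 = {"pos": (30.0, 40.0), "size": 8.0}
score = composite_similarity(berry_frame1, berry_frame2)
# Declare a match when the composite score clears a tuned threshold.
same_berry = score >= 0.5
```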
  • Publication number: 20220005055
    Abstract: Implementations are described herein for using machine learning to determine whether candidate crop fields are suitable for management by particular agricultural entities. In various implementations, a machine learning model may be applied to input data to generate output data. The input data may include a first plurality of data points corresponding to field-level agricultural management practices of an agricultural entity. The output data may be indicative of one or more predicted outcomes of the agricultural entity implementing the field-level agricultural management practices on one or more candidate crop fields not currently managed by the agricultural entity. Based on one or more of the predicted outcomes, one or more computing devices may be caused to provide a user associated with the agricultural entity with information about one or more of the candidate crop fields, and/or one or more parameter inputs of a graphical user interface may be prepopulated.
    Type: Application
    Filed: July 1, 2020
    Publication date: January 6, 2022
    Inventors: Nanzhu Wang, Chunfeng Wen, Yueqi Li
  • Patent number: 11151737
Abstract: Implementations are described herein for analyzing a sequence of digital images captured by a mobile vision sensor (e.g., integral with a robot), in conjunction with information (e.g., ground truth) known about movement of the vision sensor, to determine spatial dimensions of object(s) and/or an area captured in a field of view of the mobile vision sensor. Techniques avoid the use of visual indicia of known dimensions and/or other conventional tools for determining spatial dimensions, such as checkerboards. Instead, techniques described herein allow spatial dimensions to be determined using fewer resources, and are more scalable than conventional techniques.
    Type: Grant
    Filed: December 20, 2018
    Date of Patent: October 19, 2021
    Assignee: X DEVELOPMENT LLC
    Inventor: Yueqi Li
  • Publication number: 20210183108
    Abstract: Implementations are described herein for edge-based real time crop yield predictions made using sampled subsets of robotically-acquired vision data. In various implementations, one or more robots may be deployed amongst a plurality of plants in an area such as a field. Using one or more vision sensors of the one or more robots, a superset of high resolution images may be acquired that depict the plurality of plants. A subset of multiple high resolution images may then be sampled from the superset of high resolution images. Data indicative of the subset of high resolution images may be applied as input across a machine learning model, with or without additional data, to generate output indicative of a real time crop yield prediction.
    Type: Application
    Filed: December 16, 2019
    Publication date: June 17, 2021
    Inventors: Kathleen Watson, Jie Yang, Yueqi Li
  • Publication number: 20210056307
Abstract: Implementations are described herein for utilizing various image processing techniques to facilitate tracking and/or counting of plant-parts-of-interest among crops. In various implementations, a sequence of digital images of a plant captured by a vision sensor while the vision sensor is moved relative to the plant may be obtained. A first digital image and a second digital image of the sequence may be analyzed to determine one or more constituent similarity scores between plant-parts-of-interest across the first and second digital images. The constituent similarity scores may be used, e.g., collectively as a composite similarity score, to determine whether a depiction of a plant-part-of-interest in the first digital image matches a depiction of a plant-part-of-interest in the second digital image.
    Type: Application
    Filed: August 20, 2019
    Publication date: February 25, 2021
    Inventors: Yueqi Li, Hongxiao Liu, Zhiqiang Yuan
  • Patent number: 10930065
    Abstract: Implementations are described herein for three-dimensional (“3D”) modeling of objects that target specific features of interest of the objects, and ignore other features of less interest. In various implementations, a plurality of two-dimensional (“2D”) images may be received from a 2D vision sensor. The plurality of 2D images may capture an object having multiple classes of features. Data corresponding to a first set of the multiple classes of features may be filtered from the plurality of 2D images to generate a plurality of filtered 2D images in which a second set of features of the multiple classes of features is captured. 2D-3D processing, such as structure from motion (“SFM”) processing, may be performed on the 2D filtered images to generate a 3D representation of the object that includes the second set of one or more features.
    Type: Grant
    Filed: March 8, 2019
    Date of Patent: February 23, 2021
    Assignee: X DEVELOPMENT LLC
    Inventors: Elliott Grant, Yueqi Li
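The filter-then-reconstruct step above can be illustrated on toy data: given a per-pixel class mask, suppress the classes of less interest before handing the images to 2D-to-3D processing. The function and data layout are invented for this sketch.

```python
def filter_feature_class(image, class_mask, keep):
    # Zero out pixels whose class label is not in `keep`, so downstream
    # 2D-to-3D processing (e.g., structure from motion) only keys on
    # the features of interest, such as fruit rather than foliage.
    return [[px if cls in keep else 0
             for px, cls in zip(img_row, mask_row)]
            for img_row, mask_row in zip(image, class_mask)]

image = [[10, 20],
         [30, 40]]
mask  = [["fruit", "leaf"],
         ["leaf", "fruit"]]
filtered = filter_feature_class(image, mask, keep={"fruit"})
```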
  • Publication number: 20200401883
    Abstract: Implementations are described herein for training and applying machine learning models to digital images capturing plants, and to other data indicative of attributes of individual plants captured in the digital images, to recognize individual plants in distinction from other individual plants. In various implementations, a digital image that captures a first plant of a plurality of plants may be applied, along with additional data indicative of an additional attribute of the first plant observed when the digital image was taken, as input across a machine learning model to generate output. Based on the output, an association may be stored in memory, e.g., of a database, between the digital image that captures the first plant and one or more previously-captured digital images of the first plant.
    Type: Application
    Filed: June 24, 2019
    Publication date: December 24, 2020
    Inventors: Jie Yang, Zhiqiang Yuan, Hongxu Ma, Cheng-en Guo, Elliott Grant, Yueqi Li
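The association step in the abstract above can be pictured as a nearest-neighbor lookup over combined image-plus-attribute vectors; this sketch, with invented names and toy numbers, omits the machine learning model that would produce the vectors:

```python
def reidentify(query_vec, gallery):
    # Nearest neighbor over combined (embedding + attribute) vectors of
    # previously captured plants; returns the matched plant's id so the
    # new image can be associated with its earlier images.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(gallery, key=lambda pid: dist(query_vec, gallery[pid]))

gallery = {
    "plant-17": [0.10, 0.90, 0.30],  # embedding dims + height attribute
    "plant-42": [0.80, 0.20, 0.55],
}
match = reidentify([0.78, 0.25, 0.52], gallery)
```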
  • Patent number: 10867396
Abstract: Implementations described herein are directed to reconciling disparate orientations of multiple vision sensors deployed on a mobile robot (or other mobile vehicle) by altering orientations of the vision sensors or digital images they generate. In various implementations, this reconciliation may be performed with little or no ground truth knowledge of movement of the robot. Techniques described herein also avoid the use of visual indicia of known dimensions and/or other conventional tools for determining vision sensor orientations. Instead, techniques described herein allow vision sensor orientations to be determined and/or reconciled using fewer resources, and are more scalable than conventional techniques.
    Type: Grant
    Filed: December 18, 2018
    Date of Patent: December 15, 2020
    Assignee: X DEVELOPMENT LLC
    Inventor: Yueqi Li