Patents by Inventor Lingling JIN

Lingling JIN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240128445
    Abstract: Embodiments of the present application provide an electrode piece, a battery cell and a battery, where the electrode piece includes a current collector, a first material layer, and an active material layer; the first material layer and the active material layer are provided on a surface of the current collector, and the first material layer and the active material layer extend along a length direction of the current collector and are alternately arranged in a width direction of the current collector; where the first material layer includes a first material, and the first material includes an amphiphilic polymer and a structural conductive polymer. The infiltration effect of the electrolyte can be improved, the aging time is shortened, and the injection amount of electrolyte is reduced.
    Type: Application
    Filed: December 5, 2023
    Publication date: April 18, 2024
    Inventors: Chengpeng LAI, Kaiming YU, Lingling JIN, Hongguang SHEN, Meili WANG
  • Patent number: 11579680
    Abstract: A method for power management based on synthetic machine learning benchmarks, including generating a record of synthetic machine learning benchmarks for synthetic machine learning models that are obtained by changing machine learning network topology parameters, receiving hardware information from a client device executing a machine learning program or preparing to execute a machine learning program, selecting a synthetic machine learning benchmark based on the correlation of the hardware information with the synthetic machine learning models, and determining work schedules based on the selected synthetic machine learning benchmark.
    Type: Grant
    Filed: February 1, 2019
    Date of Patent: February 14, 2023
    Assignee: Alibaba Group Holding Limited
    Inventors: Wei Wei, Lingjie Xu, Lingling Jin, Wei Zhang
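The abstract above describes correlating a client device's hardware information with stored synthetic models to pick a benchmark, then deriving a work schedule from it. The following is a minimal, hypothetical sketch of that selection step in Python; the hardware fields, the similarity score, and the duty-cycle schedule rule are illustrative assumptions, not details from the patent.

```python
# Illustrative sketch of benchmark selection by hardware correlation.
# Field names, the similarity metric, and the schedule rule are assumptions.

def select_benchmark(hardware_info, benchmarks):
    """Pick the synthetic benchmark whose recorded hardware profile best matches
    the client device's reported hardware information."""
    def similarity(profile):
        # Negative sum of relative differences over shared numeric fields.
        keys = set(profile) & set(hardware_info)
        return -sum(abs(profile[k] - hardware_info[k]) / max(profile[k], hardware_info[k], 1e-9)
                    for k in keys)
    return max(benchmarks, key=lambda b: similarity(b["hw_profile"]))

def work_schedule(benchmark, power_cap_watts):
    """Derive a crude duty cycle so the benchmarked power stays under the cap."""
    duty = min(1.0, power_cap_watts / benchmark["measured_watts"])
    return {"benchmark": benchmark["name"], "duty_cycle": round(duty, 2)}

benchmarks = [
    {"name": "synthetic_cnn_small", "hw_profile": {"cores": 8, "mem_gb": 16}, "measured_watts": 95.0},
    {"name": "synthetic_cnn_large", "hw_profile": {"cores": 64, "mem_gb": 256}, "measured_watts": 310.0},
]
client_hw = {"cores": 16, "mem_gb": 32}
print(work_schedule(select_benchmark(client_hw, benchmarks), power_cap_watts=150.0))
# -> {'benchmark': 'synthetic_cnn_small', 'duty_cycle': 1.0}
```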
  • Patent number: 11347679
    Abstract: Systems and methods for a hybrid system-on-chip usable for predicting performance and power requirements of a host server include a big cores module, including central processing units, for receiving and pre-processing performance and power metrics data of the host server and allocating computing resources, a small cores module, including massively parallel processing units, for mapping each instance associated with the host server in the performance and power metrics data to a corresponding massively parallel processing unit based on the allocated computing resources for a per-instance metrics calculation, and an artificial intelligence (AI) accelerator for calculating performance and power prediction results based on the per-instance calculations from the small cores module.
    Type: Grant
    Filed: February 8, 2018
    Date of Patent: May 31, 2022
    Assignee: Alibaba Group Holding Limited
    Inventors: Jun Song, Yi Liu, Lingling Jin, Guan Wang, Ying Wang, Hong Tang, Nan Zhang, Zhengxiong Tian, Yu Zhou, Chao Qian, Shuiwang Liu, Jun Ruan, Bo Yang, Lin Yu, Jiangwei Huang, Hong Zhou, Yijun Lu, Ling Xu, Shiwei Li, Xiaolin Meng
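The entry above sketches a three-stage pipeline: big cores pre-process the host server's metrics, small massively parallel cores compute per-instance metrics, and an AI accelerator produces the prediction. Below is a rough software analogue of that flow, assuming made-up metric fields and a trivial aggregate predictor standing in for the accelerator.

```python
# Rough software analogue of the three-stage flow in the abstract:
# pre-process (big cores) -> per-instance metrics (small cores) -> prediction (accelerator).
# The metric fields and the simple aggregate "predictor" are illustrative assumptions.

def preprocess(raw_samples):
    """Big-cores stage: drop incomplete samples before further processing."""
    return [s for s in raw_samples if "cpu_util" in s and "watts" in s]

def per_instance_metrics(sample):
    """Small-cores stage: one lightweight calculation per host instance."""
    return {"instance": sample["instance"],
            "perf_score": sample["cpu_util"] * 100,
            "power_share": sample["watts"]}

def predict(per_instance):
    """Accelerator stage: aggregate per-instance results into a host-level forecast."""
    total_power = sum(m["power_share"] for m in per_instance)
    avg_perf = sum(m["perf_score"] for m in per_instance) / len(per_instance)
    return {"predicted_watts": round(total_power * 1.1, 1),
            "predicted_perf": round(avg_perf, 1)}

raw = [{"instance": "vm-1", "cpu_util": 0.42, "watts": 55.0},
       {"instance": "vm-2", "cpu_util": 0.80, "watts": 91.0}]
print(predict([per_instance_metrics(s) for s in preprocess(raw)]))
# -> {'predicted_watts': 160.6, 'predicted_perf': 61.0}
```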
  • Publication number: 20220072002
    Abstract: A treatment use of a pyrrolopyrimidine compound, and a solid pharmaceutical composition of a pyrrolopyrimidine compound. In particular, the present invention relates to a pyrrolopyrimidine compound or a pharmaceutical composition thereof for treating myeloproliferative neoplasms, and a method therefor or a use thereof.
    Type: Application
    Filed: December 24, 2019
    Publication date: March 10, 2022
    Applicant: CHIA TAI TIANQING PHARMACEUTICAL GROUP CO., LTD.
    Inventors: Dong WANG, Qingxia LI, Jun DAI, Chen LI, Zhulian JIANG, Yanqing SUN, Jingjing CHEN, Lingling JIN, Jundong LIU, Qide LI
  • Publication number: 20210279372
    Abstract: The present disclosure discloses a fabric detecting and recording method and apparatus. The method includes: acquiring a fabric identification of a fabric to be detected; acquiring image data of a current detecting part of the fabric to be detected; detecting defects in the image data and, if there is a defect on the detecting part included in the image data, generating detection data corresponding to the detecting part; packaging the fabric identification and the detection data into a detection data packet, and sending the packet to the blockchain network for broadcasting. Blockchain technology is used in the solution of the embodiments of the present disclosure to broadcast the fabric detection data in real time through the blockchain network, without any manual uploading operation, thus reducing the risk of data being tampered with during the uploading stage.
    Type: Application
    Filed: May 26, 2021
    Publication date: September 9, 2021
    Inventors: LINGLING JIN, TENGFA LUO
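Per the abstract above, the flow is: acquire the image of the current detecting part, detect defects, then package the fabric identification and detection data into a packet broadcast over a blockchain network. The snippet below is a hedged sketch of the packaging and broadcast step only; the packet layout, the hash-based tamper evidence, and the broadcast stub are assumptions, and a real system would use an actual blockchain client rather than a print call.

```python
# Sketch of packaging fabric detection data for broadcast, loosely following the
# abstract. The packet fields and broadcast stub are illustrative assumptions.
import hashlib
import json
import time

def package_detection(fabric_id, detections):
    """Bundle the fabric identification and detection data into one packet."""
    packet = {"fabric_id": fabric_id,
              "detections": detections,            # e.g. defect type and position
              "timestamp": int(time.time())}
    payload = json.dumps(packet, sort_keys=True).encode()
    packet["digest"] = hashlib.sha256(payload).hexdigest()   # tamper evidence
    return packet

def broadcast(packet):
    """Stand-in for submitting the packet to a blockchain network for broadcasting."""
    print("broadcasting", packet["digest"][:12], "for fabric", packet["fabric_id"])

broadcast(package_detection("FAB-0042", [{"part": 17, "defect": "hole", "x": 120, "y": 45}]))
```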
  • Publication number: 20210264220
    Abstract: The present disclosure relates to a method for updating a machine learning model. The method includes selecting a first column to be removed from a first embedding table to obtain a first reduced number of columns for the first embedding table; obtaining a first accuracy result determined by applying a plurality of vectors into the machine learning model, the plurality of vectors including a first vector having a number of numeric values that are converted using the first embedding table with the first reduced number of columns; and determining whether to remove the first column from the first embedding table in accordance with an evaluation of the first accuracy result against a first predetermined criterion.
    Type: Application
    Filed: February 21, 2020
    Publication date: August 26, 2021
    Inventors: Wei WEI, Wei ZHANG, Lingjie XU, Lingling JIN
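The abstract above amounts to a trial-and-accept loop: remove a column from the embedding table, re-evaluate accuracy with vectors converted through the reduced table, and keep the removal only if accuracy stays within a criterion. Here is a minimal sketch of such a loop; the greedy column order, the tolerance, and the stubbed evaluate() standing in for running the real model are all assumptions.

```python
# Minimal sketch of the column-removal loop described in the abstract.
# evaluate() is a stub for running the model on vectors converted with the
# reduced embedding table; the greedy order and tolerance are assumptions.
import numpy as np

def prune_columns(table, evaluate, max_accuracy_drop=0.005):
    """Greedily drop embedding-table columns whose removal keeps accuracy within tolerance."""
    baseline = evaluate(table)
    col = table.shape[1] - 1
    while col >= 0 and table.shape[1] > 1:
        candidate = np.delete(table, col, axis=1)        # table with one fewer column
        if baseline - evaluate(candidate) <= max_accuracy_drop:
            table = candidate                            # removal accepted
        col -= 1
    return table

rng = np.random.default_rng(0)
embedding = rng.normal(size=(1000, 32))

def evaluate(tbl):
    # Toy proxy for model accuracy: wider tables score marginally higher.
    return 0.90 + 0.001 * tbl.shape[1]

print("columns kept:", prune_columns(embedding, evaluate).shape[1])   # -> columns kept: 27
```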
  • Patent number: 11093276
    Abstract: Embodiments of the present disclosure provide systems and methods for batch accessing. The system includes a plurality of buffers configured to store data; a plurality of processor cores that each have a corresponding buffer of the plurality of buffers; a buffer controller configured to generate instructions for performing a plurality of buffer transactions on at least some buffers of the plurality of buffers; and a plurality of data managers communicatively coupled to the buffer controller, each data manager being coupled to a corresponding buffer of the plurality of buffers and configured to execute a request for a buffer transaction at the corresponding buffer according to an instruction received from the buffer controller.
    Type: Grant
    Filed: January 14, 2019
    Date of Patent: August 17, 2021
    Assignee: ALIBABA GROUP HOLDING LIMITED
    Inventors: Qinggang Zhou, Lingling Jin
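The system above pairs each processor core with its own buffer and data manager, and a central buffer controller issues the instructions for a batch of buffer transactions. A toy software model of that control flow follows; the transaction tuple format and class names are illustrative, not the hardware design.

```python
# Toy software model of the batch-access flow in the abstract: a buffer controller
# fans one batch of transactions out to per-buffer data managers.
# The transaction format and class names are illustrative assumptions.

class DataManager:
    """Executes transactions on the single buffer it is coupled to."""
    def __init__(self, buffer):
        self.buffer = buffer

    def execute(self, op, addr, value=None):
        if op == "write":
            self.buffer[addr] = value
        return self.buffer[addr]

class BufferController:
    """Generates per-buffer instructions for a batch of buffer transactions."""
    def __init__(self, managers):
        self.managers = managers

    def run_batch(self, transactions):
        # Each transaction names the target buffer, the operation, the address, and a value.
        return [self.managers[buf_id].execute(op, addr, value)
                for buf_id, op, addr, value in transactions]

buffers = [dict() for _ in range(4)]                    # one buffer per core
controller = BufferController([DataManager(b) for b in buffers])
batch = [(0, "write", 0x10, 42), (1, "write", 0x20, 7), (0, "read", 0x10, None)]
print(controller.run_batch(batch))                      # -> [42, 7, 42]
```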
  • Patent number: 10877542
    Abstract: The present disclosure provides a system and a method for power management of accelerators using interconnect configuration. The method comprises receiving a power management command comprising a power budget and a designated accelerator, identifying a port associated with the designated accelerator from a plurality of ports of an interconnect, determining a target data transmission parameter for the designated accelerator according to the power budget, and controlling a data transmission parameter relative to the target data transmission parameter through the port associated with the designated accelerator. The present disclosure further provides a non-transitory computer-readable medium that stores a set of instructions that are executable by one or more processors of an apparatus to perform the method for power management of accelerators using interconnect configuration.
    Type: Grant
    Filed: March 7, 2019
    Date of Patent: December 29, 2020
    Assignee: ALIBABA GROUP HOLDING LIMITED
    Inventor: Lingling Jin
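The method above takes a power budget and a designated accelerator, finds the interconnect port serving that accelerator, and steers a data transmission parameter (for example, link width or speed) toward a target derived from the budget. Below is a hedged sketch of that logic; the port map, the lane-width power table, and the selection rule are invented for illustration and are not from the patent.

```python
# Hedged sketch of budget-driven link throttling following the abstract's steps:
# identify the port for the designated accelerator, derive a target transmission
# parameter from the power budget, and apply it. The port map and power table
# are illustrative assumptions.

PORT_MAP = {"accel0": 0, "accel1": 1}                        # accelerator -> interconnect port
LINK_SETTINGS = [(16, 20.0), (8, 11.0), (4, 6.0), (1, 2.0)]  # (lane width, estimated watts)

def manage_power(designated_accelerator, power_budget_watts):
    port = PORT_MAP[designated_accelerator]                  # identify the associated port
    # Target parameter: the widest link whose estimated power fits the budget.
    for width, watts in LINK_SETTINGS:
        if watts <= power_budget_watts:
            return {"port": port, "lane_width": width, "est_watts": watts}
    return {"port": port, "lane_width": 0, "est_watts": 0.0}  # link idle if nothing fits

print(manage_power("accel1", power_budget_watts=10.0))
# -> {'port': 1, 'lane_width': 4, 'est_watts': 6.0}
```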
  • Publication number: 20200401093
    Abstract: Systems and methods for a hybrid system-on-chip usable for predicting performance and power requirements of a host server include a big cores module, including central processing units, for receiving and pre-processing performance and power metrics data of the host server and allocating computing resources, a small cores module, including massively parallel processing units, for mapping each instance associated with the host server in the performance and power metrics data to a corresponding massively parallel processing unit based on the allocated computing resources for a per-instance metrics calculation, and an artificial intelligence (AI) accelerator for calculating performance and power prediction results based on the per-instance calculations from the small cores module.
    Type: Application
    Filed: February 8, 2018
    Publication date: December 24, 2020
    Inventors: Jun Song, Yi Liu, Lingling Jin, Guan Wang, Ying Wang, Hong Tang, Nan Zhang, Zhengxiong Tian, Yu Zhou, Chao Qian, Shuiwang Liu, Jun Ruan, Bo Yang, Lin Yu, Jiangwei Huang, Hong Zhou, Yijun Lu, Shao Xu, Shiwei Li, Xiaoli Meng
  • Patent number: 10848440
    Abstract: The present disclosure provides methods and systems directed to providing quality of service to a cluster of accelerators. The system can include a root connector; an interconnect switch communicatively coupled to the root connector over a plurality of lanes comprising a first set of lanes and a second set of lanes, wherein the first set of lanes are associated with a first virtual communication channel and a second set of lanes are associated with a second virtual communication channel; a first accelerator communicatively coupled to the interconnect switch and associated with a first traffic class identifier corresponding to first communication traffic communicated over the first set of lanes; and a plurality of accelerators communicatively coupled to the interconnect switch and associated with a second traffic class identifier that corresponds to second communication traffic having lower priority than the first communication traffic and that is communicated over the second set of lanes.
    Type: Grant
    Filed: March 20, 2019
    Date of Patent: November 24, 2020
    Assignee: ALIBABA GROUP HOLDING LIMITED
    Inventors: Li Zhao, Lingling Jin
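In the system above, one accelerator gets a high-priority traffic class on one set of lanes while the remaining accelerators share a lower-priority class on the other set. The sketch below shows one way such an assignment and a priority-respecting dispatch could look in software; the traffic-class names, lane ranges, and packet format are all invented for illustration.

```python
# Sketch of traffic-class assignment and priority dispatch, loosely following the
# abstract. Traffic-class ids, lane sets, and the packet format are assumptions.

LANES = {"TC1": range(0, 8), "TC0": range(8, 16)}    # high- / low-priority lane sets
ASSIGNMENT = {"accelA": "TC1",                       # first accelerator: high priority
              "accelB": "TC0", "accelC": "TC0"}      # remaining cluster: lower priority

def dispatch(packets):
    """Send high-priority (TC1) traffic before low-priority (TC0) traffic."""
    order = sorted(packets, key=lambda p: ASSIGNMENT[p["src"]] != "TC1")
    for p in order:
        lane = LANES[ASSIGNMENT[p["src"]]][0]        # first lane of that channel's set
        print(f"{p['src']:6s} class={ASSIGNMENT[p['src']]} lane={lane} payload={p['payload']}")

dispatch([{"src": "accelB", "payload": "grad_chunk"},
          {"src": "accelA", "payload": "latency_sensitive_req"},
          {"src": "accelC", "payload": "grad_chunk"}])
```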
  • Publication number: 20200304426
    Abstract: The present disclosure provides methods and systems directed to providing quality of service to a cluster of accelerators. The system can include a root connector; an interconnect switch communicatively coupled to the root connector over a plurality of lanes comprising a first set of lanes and a second set of lanes, wherein the first set of lanes are associated with a first virtual communication channel and a second set of lanes are associated with a second virtual communication channel; a first accelerator communicatively coupled to the interconnect switch and associated with a first traffic class identifier corresponding to first communication traffic communicated over the first set of lanes; and a plurality of accelerators communicatively coupled to the interconnect switch and associated with a second traffic class identifier that corresponds to second communication traffic having lower priority than the first communication traffic and that is communicated over the second set of lanes.
    Type: Application
    Filed: March 20, 2019
    Publication date: September 24, 2020
    Inventors: Li ZHAO, Lingling JIN
  • Publication number: 20200285467
    Abstract: The present disclosure provides a system and a method for power management of accelerators using interconnect configuration. The method comprises receiving a power management command comprising a power budget and a designated accelerator, identifying a port associated with the designated accelerator from a plurality of ports of an interconnect, determining a target data transmission parameter for the designated accelerator according to the power budget, and controlling a data transmission parameter relative to the target data transmission parameter through the port associated with the designated accelerator. The present disclosure further provides a non-transitory computer-readable medium that stores a set of instructions that are executable by one or more processors of an apparatus to perform the method for power management of accelerators using interconnect configuration.
    Type: Application
    Filed: March 7, 2019
    Publication date: September 10, 2020
    Inventor: Lingling JIN
  • Publication number: 20200272896
    Abstract: The present disclosure provides systems and methods for deep learning training using edge devices. The methods can include identifying one or more edge devices, determining characteristics of the identified edge devices, evaluating a deep learning workload to determine an amount of resources for processing, assigning the deep learning workload to one or more identified edge devices based on the characteristics of the one or more identified edge devices, and facilitating communication between the one or more identified edge devices for completing the deep learning workload.
    Type: Application
    Filed: April 30, 2019
    Publication date: August 27, 2020
    Inventors: Wei WEI, Lingjie XU, Lingling JIN, Wei ZHANG
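The method above enumerates edge devices, reads their characteristics, estimates what a deep learning workload needs, and assigns the workload across devices accordingly. A minimal greedy sketch of the assignment step follows; the device fields and the flat "compute units" measure of workload size are illustrative assumptions.

```python
# Minimal greedy sketch of assigning a deep-learning workload to edge devices,
# following the steps listed in the abstract. Device fields and the "compute
# units" resource measure are illustrative assumptions.

def assign_workload(devices, required_units):
    """Fill the most capable devices first until the workload is covered."""
    plan, remaining = [], required_units
    for dev in sorted(devices, key=lambda d: d["free_units"], reverse=True):
        if remaining <= 0:
            break
        share = min(dev["free_units"], remaining)
        plan.append({"device": dev["name"], "units": share})
        remaining -= share
    if remaining > 0:
        raise RuntimeError("not enough edge capacity for this workload")
    return plan

edge_devices = [{"name": "cam-gateway", "free_units": 4},
                {"name": "retail-box", "free_units": 10},
                {"name": "kiosk", "free_units": 6}]
print(assign_workload(edge_devices, required_units=12))
# -> [{'device': 'retail-box', 'units': 10}, {'device': 'kiosk', 'units': 2}]
```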
  • Publication number: 20200249740
    Abstract: A method for power management based on synthetic machine learning benchmarks, including generating a record of synthetic machine learning benchmarks for synthetic machine learning models that are obtained by changing machine learning network topology parameters, receiving hardware information from a client device executing a machine learning program or preparing to execute a machine learning program, selecting a synthetic machine learning benchmark based on the correlation of the hardware information with the synthetic machine learning models, and determining work schedules based on the selected synthetic machine learning benchmark.
    Type: Application
    Filed: February 1, 2019
    Publication date: August 6, 2020
    Inventors: Wei WEI, Lingjie XU, Lingling JIN, Wei ZHANG
  • Publication number: 20200218985
    Abstract: Embodiments described herein provide a system for facilitating efficient benchmarking of a piece of hardware configured to process artificial intelligence (AI) related operations. During operation, the system determines the workloads of a set of AI models based on layer information associated with a respective layer of a respective AI model. The set of AI models are representative of applications that run on the piece of hardware. The system forms a set of workload clusters from the workloads and determines a representative workload for a workload cluster. The system then determines, using a meta-heuristic, an input size that corresponds to the representative workload. The system determines, based on the set of workload clusters, a synthetic AI model configured to generate a workload that represents statistical properties of the workloads on the piece of hardware. The input size can generate the representative workload at a computational layer of the synthetic AI model.
    Type: Application
    Filed: January 3, 2019
    Publication date: July 9, 2020
    Applicant: Alibaba Group Holding Limited
    Inventors: Wei Wei, Lingjie Xu, Lingling Jin
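This entry and the related entry below describe clustering the per-layer workloads of representative AI models, taking a representative workload per cluster, and using a meta-heuristic to find an input size that reproduces that workload in a synthetic model's layer. The sketch below uses a tiny 1-D k-means for the clustering and a random search as a stand-in for the meta-heuristic; the dense-layer FLOP formula and every parameter are assumptions.

```python
# Compact sketch of workload clustering plus a random-search stand-in for the
# meta-heuristic that finds an input size matching the representative workload.
# The dense-layer FLOP formula and all parameters are illustrative assumptions.
import random

def cluster_workloads(workloads, n_clusters=2, iters=20):
    """Tiny 1-D k-means over per-layer workloads (FLOP counts)."""
    centers = random.sample(workloads, n_clusters)
    for _ in range(iters):
        groups = [[] for _ in centers]
        for w in workloads:
            groups[min(range(n_clusters), key=lambda i: abs(w - centers[i]))].append(w)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return centers                                   # centers act as representative workloads

def find_input_size(target_flops, hidden=512, trials=2000):
    """Random search for a batch size whose synthetic dense layer
    (2 * batch * hidden^2 FLOPs) comes closest to the target workload."""
    return min((random.randint(1, 4096) for _ in range(trials)),
               key=lambda b: abs(2 * b * hidden * hidden - target_flops))

random.seed(7)
layer_flops = [3.2e9, 3.0e9, 3.1e9, 0.4e9, 0.5e9, 0.45e9]   # toy per-layer workloads
for rep in cluster_workloads(layer_flops):
    print(f"representative workload {rep:.2e} FLOPs -> synthetic batch size {find_input_size(rep)}")
```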
  • Publication number: 20200042419
    Abstract: Embodiments described herein provide a system for facilitating efficient benchmarking of a piece of hardware for artificial intelligence (AI) models. During operation, the system determines a set of AI models that are representative of applications that run on the piece of hardware. The piece of hardware can be configured to process AI-related operations. The system can determine workloads of the set of AI models based on layer information associated with a respective layer of a respective AI model in the set of AI models and form a set of workload clusters from the determined workloads. The system then determines, based on the set of workload clusters, a synthetic AI model configured to generate a workload that represents statistical properties of the determined workload.
    Type: Application
    Filed: July 31, 2018
    Publication date: February 6, 2020
    Applicant: Alibaba Group Holding Limited
    Inventors: Wei Wei, Lingjie Xu, Lingling Jin
  • Publication number: 20190227838
    Abstract: Embodiments of the present disclosure provide systems and methods for batch accessing. The system includes a plurality of buffers configured to store data; a plurality of processor cores that each have a corresponding buffer of the plurality of buffers; a buffer controller configured to generate instructions for performing a plurality of buffer transactions on at least some buffers of the plurality of buffers; and a plurality of data managers communicatively coupled to the buffer controller, each data manager being coupled to a corresponding buffer of the plurality of buffers and configured to execute a request for a buffer transaction at the corresponding buffer according to an instruction received from the buffer controller.
    Type: Application
    Filed: January 14, 2019
    Publication date: July 25, 2019
    Inventors: Qinggang ZHOU, Lingling JIN
  • Publication number: 20190228308
    Abstract: The present disclosure relates to a machine learning accelerator system and methods of transporting data using the machine learning accelerator system. The machine learning accelerator system may include a switch network comprising an array of switch nodes, and an array of processing elements. Each processing element of the array of processing elements is connected to a switch node of the array of switch nodes and is configured to generate data that is transportable via the switch node. The method may include receiving input data using a switch node from a data source and generating output data based on the input data, using a processing element that is connected to the switch node. The method may include transporting the generated output data to a destination processing element using a switch node.
    Type: Application
    Filed: January 23, 2019
    Publication date: July 25, 2019
    Inventors: Qinggang ZHOU, Lingling JIN
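The final entry describes processing elements attached to switch nodes in an array, with output data forwarded through the switch network to a destination processing element. As a closing illustration, here is a toy sketch of hop-by-hop routing on a 2-D mesh of switch nodes; the mesh layout and the X-then-Y routing rule are assumptions made for the example, not the patent's design.

```python
# Toy sketch of switch-network transport: each processing element sits on a
# switch node of a 2-D mesh, and output data is forwarded hop by hop to the
# destination node. The X-then-Y routing rule and mesh shape are assumptions.

def route(src, dst):
    """Return the list of switch-node coordinates visited from src to dst."""
    (x, y), path = src, [src]
    while (x, y) != dst:
        if x != dst[0]:
            x += 1 if dst[0] > x else -1     # move along X first
        else:
            y += 1 if dst[1] > y else -1     # then along Y
        path.append((x, y))
    return path

def transport(data, src, dst):
    """Deliver a processing element's output data to the destination element."""
    path = route(src, dst)
    print(f"PE{src} -> PE{dst} via {path}: delivered {data!r}")

# The processing element at node (0, 0) sends a partial result to the element at (2, 3).
transport({"partial_sum": 1.25}, src=(0, 0), dst=(2, 3))
```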