Patents by Inventor Sanping Li

Sanping Li has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11599801
    Abstract: Embodiments of the present disclosure provide a method for solving a problem, a computing system and a program product. A method for solving a problem includes determining information related to a to-be-solved problem; acquiring, based on the information, knowledge elements that can be used for the to-be-solved problem from a knowledge repository, the knowledge repository storing: solved problems, at least one executable task related to the solved problems, at least one processing flow for implementing the at least one executable task, and a corresponding function module included in the at least one processing flow; and determining, based at least on the acquired knowledge elements, a solution to the to-be-solved problem. By such arrangements, automatic problem solving can be achieved in a faster, simpler way with a lower cost through division of the repository and the knowledge elements.
    Type: Grant
    Filed: March 20, 2020
    Date of Patent: March 7, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: YuHong Nie, WuiChak Wong, Sanping Li, Xuwei Tang
  • Patent number: 11507782
    Abstract: A method for determining a model compression rate comprises determining a near-zero importance value subset from an importance value set associated with a machine learning model, a corresponding importance value in the importance value set indicating an importance degree of a corresponding input of a processing layer of the machine learning model, importance values in the near-zero importance value subset being closer to zero than other importance values in the importance value set; determining a target importance value from the near-zero importance value subset, the target importance value corresponding to a turning point of a magnitude of the importance values in the near-zero importance value subset; determining a proportion of importance values less than the target importance value in the importance value set; and determining the compression rate for the machine learning model based on the determined proportion.
    Type: Grant
    Filed: March 20, 2020
    Date of Patent: November 22, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Wenbin Yang, Jinpeng Liu, WuiChak Wong, Sanping Li, Zhen Jia
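
The abstract above outlines a concrete procedure: isolate the near-zero importance values, locate a turning point in their sorted magnitudes, and use the proportion of values below that point as the compression rate. The sketch below is one plausible reading of that flow; the cutoff used for the "near-zero" subset and the turning-point heuristic (largest jump between consecutive sorted magnitudes) are assumptions, not the patented method.

```python
import numpy as np

def estimate_compression_rate(importance, near_zero_quantile=0.5):
    """Hedged sketch: derive a compression rate from per-input importance
    values of one processing layer.

    `near_zero_quantile` is an assumed cutoff for the "near-zero" subset;
    the abstract does not specify how that subset is chosen.
    """
    magnitudes = np.sort(np.abs(np.asarray(importance, dtype=float)))

    # Take the smallest magnitudes as the near-zero subset (assumption).
    near_zero = magnitudes[: max(2, int(len(magnitudes) * near_zero_quantile))]

    # Locate a "turning point": the largest jump between consecutive
    # sorted magnitudes inside the near-zero subset (assumption).
    jumps = np.diff(near_zero)
    target_value = near_zero[int(np.argmax(jumps)) + 1]

    # The proportion of all importance values below the target value
    # gives the compression rate for the layer.
    return float(np.mean(magnitudes < target_value))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Mix of many near-zero values and a few clearly non-zero ones.
    values = np.concatenate([rng.normal(0, 0.01, 80), rng.normal(0, 1.0, 20)])
    print(f"estimated compression rate: {estimate_compression_rate(values):.2f}")
```
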
  • Patent number: 11487589
    Abstract: Systems and methods are provided for implementing a self-adaptive batch dataset partitioning control process which is utilized in conjunction with a distributed deep learning model training process to optimize load balancing among a set of accelerator resources. An iterative batch size tuning process is configured to determine an optimal job partition ratio for partitioning mini-batch datasets into sub-batch datasets for processing by a set of hybrid accelerator resources, wherein the sub-batch datasets are partitioned into optimal batch sizes for processing by respective accelerator resources to minimize a time for completing the deep learning model training process.
    Type: Grant
    Filed: September 17, 2018
    Date of Patent: November 1, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Wei Cui, Sanping Li, Kun Wang
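
The abstract above describes tuning how a mini-batch is split across heterogeneous accelerators so that all of them finish at roughly the same time. The snippet below is a minimal illustration of that idea, assuming per-device processing times can be measured each iteration; the proportional re-partitioning rule is an assumption, not the claimed tuning process.

```python
def rebalance_partition(batch_size, measured_times, current_sizes):
    """Hedged sketch of one tuning step for splitting a mini-batch across
    hybrid accelerators.

    measured_times: seconds each device took for its current sub-batch.
    current_sizes:  sub-batch sizes used in that measurement.
    Returns new sub-batch sizes proportional to observed throughput
    (samples per second), so slower devices receive smaller shares.
    """
    throughputs = [s / t for s, t in zip(current_sizes, measured_times)]
    total = sum(throughputs)
    new_sizes = [int(round(batch_size * tp / total)) for tp in throughputs]

    # Fix rounding drift so the sub-batches still sum to the mini-batch size.
    new_sizes[-1] += batch_size - sum(new_sizes)
    return new_sizes

if __name__ == "__main__":
    # A fast accelerator and a slower one start with an even split of 128 samples.
    sizes = [64, 64]
    times = [0.20, 0.50]          # the second device is the straggler
    for step in range(3):
        sizes = rebalance_partition(128, times, sizes)
        # Assume per-sample cost stays constant for the illustration.
        times = [s * c for s, c in zip(sizes, [0.20 / 64, 0.50 / 64])]
        print(f"step {step}: sizes={sizes}, times={[round(t, 3) for t in times]}")
```
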
  • Publication number: 20220345450
    Abstract: Embodiments of the present disclosure provide a method, an electronic device, and a program product implemented at an edge switch for data encryption. For example, the present disclosure provides a data encryption method implemented at an edge switch. The method may include receiving encryption and decryption information for an encryption operation or a decryption operation from a source device. In addition, the method may include encrypting a data packet received from the source device based on encryption information in the encryption and decryption information to generate an encrypted data packet. The method may further include sending the encrypted data packet to a target device indicated by the data packet. The embodiments of the present disclosure can reduce the computing loads of Internet of Things (IoT) devices, clouds, and servers while ensuring encryption performance, and can also reduce the time delay caused by encryption and decryption operations.
    Type: Application
    Filed: May 17, 2021
    Publication date: October 27, 2022
    Inventors: Chenxi Hu, Sanping Li, Zhen Jia
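
As a rough illustration of the flow in the abstract above, the sketch below models an edge switch that first receives key material from a source device and then encrypts subsequent packets before forwarding them to the target. The use of Fernet symmetric encryption from the `cryptography` package and the in-process objects standing in for the network are illustrative assumptions only.

```python
from cryptography.fernet import Fernet

class EdgeSwitch:
    """Hedged sketch: offload packet encryption from an IoT device to a switch."""

    def __init__(self):
        self._keys = {}          # source device id -> encryption handle

    def register_key(self, source_id, key):
        # Step 1: the source device sends its encryption/decryption information.
        self._keys[source_id] = Fernet(key)

    def forward(self, source_id, payload, target):
        # Step 2: encrypt the packet on behalf of the source device,
        # then send the encrypted packet to the indicated target device.
        encrypted = self._keys[source_id].encrypt(payload)
        target.receive(encrypted)

class TargetDevice:
    def __init__(self):
        self.inbox = []

    def receive(self, packet):
        self.inbox.append(packet)

if __name__ == "__main__":
    key = Fernet.generate_key()
    switch, target = EdgeSwitch(), TargetDevice()
    switch.register_key("sensor-1", key)
    switch.forward("sensor-1", b"temperature=21.5", target)
    # The target (which also holds the key in this toy setup) can decrypt.
    print(Fernet(key).decrypt(target.inbox[0]))
```
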
  • Publication number: 20220327016
    Abstract: A method, an electronic device, and a program product for determining a score of a log file are provided. The method includes acquiring a log file related to a monitored system and source code corresponding to the log file. The method may further include determining a first score of the log file based on a first log rule subset in a log rule set, the log rule set being used to evaluate at least one of analyzability of the log file and supportability of the monitored system. The method may further include determining a second score of the source code based on a second log rule subset in the log rule set and determining a third score of the log file at least based on the first score and the second score.
    Type: Application
    Filed: August 24, 2021
    Publication date: October 13, 2022
    Inventors: Yongbing Xue, Min Liu, Weiyang Liu, Yudai Wang, Sanping Li
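
One way to picture the scoring scheme in the abstract above is as two rule subsets, one applied to the log text and one to the source code that emits it, with a final score combining both. The specific rules and the equal-weight combination below are invented for illustration and are not taken from the publication.

```python
import re

# Assumed rule subsets: each rule returns True if the artifact passes it.
LOG_RULES = [
    lambda line: bool(re.match(r"\d{4}-\d{2}-\d{2}", line)),          # has a timestamp
    lambda line: any(lvl in line for lvl in ("INFO", "WARN", "ERROR")),  # has a level
]
SOURCE_RULES = [
    lambda src: "logger" in src,        # uses a logger rather than print
    lambda src: "TODO" not in src,      # no unfinished logging statements
]

def score_rules(items, rules):
    """Fraction of (item, rule) checks that pass."""
    checks = [rule(item) for item in items for rule in rules]
    return sum(checks) / len(checks)

def score_log(log_lines, source_code):
    first = score_rules(log_lines, LOG_RULES)          # analyzability of the log file
    second = score_rules([source_code], SOURCE_RULES)  # quality of the logging code
    third = 0.5 * first + 0.5 * second                 # assumed combination
    return first, second, third

if __name__ == "__main__":
    log = ["2022-10-13 12:00:01 INFO service started", "something went wrong"]
    src = 'logger.error("disk %s full", disk_id)'
    print(score_log(log, src))
```
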
  • Publication number: 20220300505
    Abstract: A method, an electronic device, and a computer program product for obtaining a hierarchical data structure and processing a log entry are disclosed. The method for obtaining the hierarchical data structure includes: obtaining corresponding characteristic information included in each log entry of a set of log entries and determining multiple log entry patterns based on the corresponding characteristic information. The pattern characteristic information of each log entry pattern corresponds to the characteristic information of a subset of log entries in the set of log entries. The method also includes storing the set of log entries according to the hierarchical data structure so that each log entry is associated with at least one of multiple nodes of the hierarchical data structure. The multiple nodes respectively correspond to the multiple log entry patterns, and are hierarchically organized based on respective corresponding log entry patterns.
    Type: Application
    Filed: July 29, 2021
    Publication date: September 22, 2022
    Inventors: Yudai Wang, Min Liu, Sanping Li, Travis Liu, Yongbing Xue
  • Patent number: 11442779
    Abstract: Embodiments of the present disclosure relate to a method, device and computer program product for determining a resource amount of dedicated processing resources. The method comprises obtaining a structural representation of a neural network for deep learning processing, the structural representation indicating a layer attribute of the neural network that is associated with the dedicated processing resources; and determining the resource amount of the dedicated processing resources required for the deep learning processing based on the structural representation. In this manner, the resource amount of the dedicated processing resources required by the deep learning processing may be better estimated to improve the performance and resource utilization rate of the dedicated processing resource scheduling.
    Type: Grant
    Filed: January 4, 2019
    Date of Patent: September 13, 2022
    Assignee: Dell Products L.P.
    Inventors: Junping Zhao, Sanping Li
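
The abstract above estimates how much dedicated-processor memory a deep learning job needs by walking a structural description of the network. The sketch below assumes a deliberately simplified structural representation (a list of dense layers with input and output widths) and counts parameter plus activation memory; the actual layer attributes and accounting used in the patent are not specified here.

```python
def estimate_memory_bytes(layers, batch_size, bytes_per_value=4):
    """Hedged sketch: estimate accelerator memory for a feed-forward network.

    layers: list of dicts with assumed attributes "in" and "out"
            (input and output width of a dense layer).
    Counts weights plus biases (parameters) and per-sample activations.
    """
    total = 0
    for layer in layers:
        params = layer["in"] * layer["out"] + layer["out"]      # weights + bias
        activations = batch_size * layer["out"]                 # forward outputs
        total += (params + activations) * bytes_per_value
    return total

if __name__ == "__main__":
    net = [{"in": 784, "out": 512}, {"in": 512, "out": 256}, {"in": 256, "out": 10}]
    mib = estimate_memory_bytes(net, batch_size=64) / 2**20
    print(f"estimated dedicated memory: {mib:.1f} MiB")
```
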
  • Patent number: 11436050
    Abstract: Embodiments of the present disclosure provide a method, apparatus and computer program product for resource scheduling. The method comprises obtaining a processing requirement for a deep learning task, the processing requirement being specified by a user and at least including a requirement related to a completion time of the deep learning task. The method further comprises determining, based on the processing requirement, a resource required by the deep learning task such that processing of the deep learning task based on the resource satisfies the processing requirement. Through the embodiments of the present disclosure, the resources can be scheduled reasonably and flexibly to satisfy the user's processing requirement for a particular deep learning task without requiring the user to manually specify the requirement on the resources.
    Type: Grant
    Filed: April 10, 2019
    Date of Patent: September 6, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Layne Lin Peng, Kun Wang, Sanping Li
  • Patent number: 11190620
    Abstract: Embodiments of the present disclosure relate to methods and an electronic device for transmitting and receiving data. The data transmission method includes: determining a hash value of original data to be transmitted; determining whether the hash value exists in a predetermined set of hash values; in response to the hash value being present in the set of hash values, transmitting the hash value, rather than the original data, to a server; and in response to the hash value being absent from the set of hash values, transmitting the original data to the server and adding the hash value to the set of hash values. The embodiments of the present disclosure can avoid transmitting duplicated data between a client and a server, without requiring extra remote procedure call commands to be added between the client and the server.
    Type: Grant
    Filed: April 11, 2019
    Date of Patent: November 30, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Wei Cui, Sanping Li, Kun Wang
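
The transmission rule in the abstract above maps almost directly to code: hash the payload, send only the hash when the server is already known to hold the data, and otherwise send the data and remember the hash. The sketch below assumes SHA-256 and in-process client and server objects; the actual wire protocol is not given in the abstract.

```python
import hashlib

class DedupClient:
    """Hedged sketch of the client-side transmission rule."""

    def __init__(self, server):
        self.server = server
        self.known_hashes = set()      # hashes the server is known to hold

    def send(self, data: bytes):
        digest = hashlib.sha256(data).hexdigest()
        if digest in self.known_hashes:
            # The server already has the data: send only the hash.
            self.server.receive_hash(digest)
        else:
            # Otherwise send the original data, then remember its hash.
            self.server.receive_data(digest, data)
            self.known_hashes.add(digest)

class DedupServer:
    def __init__(self):
        self.store = {}

    def receive_data(self, digest, data):
        self.store[digest] = data

    def receive_hash(self, digest):
        assert digest in self.store, "hash refers to data the server already holds"

if __name__ == "__main__":
    server = DedupServer()
    client = DedupClient(server)
    client.send(b"payload")   # first time: the full data travels
    client.send(b"payload")   # second time: only the hash travels
    print(len(server.store), "object(s) stored on the server")
```
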
  • Publication number: 20210344571
    Abstract: Embodiments of the present disclosure relate to a method, a device, and a computer program product for processing data. The method includes: loading, at a switch and in response to receipt of a model loading request from a terminal device, a data processing model specified in the model loading request. The method further includes: acquiring model parameters of the data processing model from the terminal device. The method further includes: processing, in response to receipt of to-be-processed data from the terminal device, the data using the data processing model based on the model parameters. Through the method, data may be processed at a switch, which improves the efficiency of data processing and the utilization rate of computing resources, and reduces the delay of data processing.
    Type: Application
    Filed: May 26, 2020
    Publication date: November 4, 2021
    Inventors: Chenxi Hu, Sanping Li
  • Patent number: 11150975
    Abstract: Embodiments of the present disclosure provide a method and apparatus for determining a cause of performance degradation of a storage system. The method comprises: monitoring performance of the storage system according to a predetermined policy; generating a performance degradation event from a result of the monitoring based on a system performance baseline; in response to the performance degradation event, obtaining information about the performance degradation; and analyzing the information offline to determine the cause of the performance degradation. Compared with the prior art, embodiments of the present disclosure can manage system performance degradation effectively and continuously to minimize running costs, and can incorporate existing performance profiling tools in a pluggable manner.
    Type: Grant
    Filed: August 5, 2019
    Date of Patent: October 19, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Junping Frank Zhao, Yu Cao, Sanping Li
  • Publication number: 20210286654
    Abstract: A first set of requirements of a first set of computing tasks for computing resources in a computing system is acquired respectively. Based on a determination that the requirement of a computing task in the first set of computing tasks for a computing resource satisfies a resource threshold condition, the computing task is divided into a plurality of sub-tasks. The resource threshold condition describes the threshold of a computing resource provided by a computing device in a plurality of computing devices in the computing system. A merging task for merging a plurality of sub-results of the plurality of sub-tasks into a result of the computing task is generated. Based on the computing tasks in the first set other than the divided computing task, the plurality of sub-tasks, and the merging task, a second set of computing tasks of the computing job is determined so as to process the computing job.
    Type: Application
    Filed: April 10, 2020
    Publication date: September 16, 2021
    Inventors: Jinpeng Liu, Jin Li, Sanping Li, Zhen Jia
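
To make the partitioning step in the abstract above concrete, the sketch below splits any task whose resource requirement exceeds what a single device can provide into equal sub-tasks plus a merging task. The even split and the single-number requirement model are assumptions for illustration.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    requirement: float                  # e.g. GiB of memory the task needs
    depends_on: list = field(default_factory=list)

def plan_tasks(tasks, device_capacity):
    """Hedged sketch: rewrite a set of tasks so no task exceeds one device."""
    planned = []
    for task in tasks:
        if task.requirement <= device_capacity:
            planned.append(task)
            continue
        # Divide the oversized task into sub-tasks that each fit on a device.
        parts = math.ceil(task.requirement / device_capacity)
        subs = [Task(f"{task.name}.sub{i}", task.requirement / parts)
                for i in range(parts)]
        # Add a merging task that combines the sub-results into one result.
        merge = Task(f"{task.name}.merge", 0.0, depends_on=[s.name for s in subs])
        planned.extend(subs + [merge])
    return planned

if __name__ == "__main__":
    job = [Task("preprocess", 4.0), Task("train", 24.0)]
    for t in plan_tasks(job, device_capacity=8.0):
        print(t.name, t.requirement, t.depends_on)
```
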
  • Publication number: 20210271932
    Abstract: A method for determining a model compression rate comprises determining a near-zero importance value subset from an importance value set associated with a machine learning model, a corresponding importance value in the importance value set indicating an importance degree of a corresponding input of a processing layer of the machine learning model, importance values in the near-zero importance value subset being closer to zero than other importance values in the importance value set; determining a target importance value from the near-zero importance value subset, the target importance value corresponding to a turning point of a magnitude of the importance values in the near-zero importance value subset; determining a proportion of importance values less than the target importance value in the importance value set; and determining the compression rate for the machine learning model based on the determined proportion.
    Type: Application
    Filed: March 20, 2020
    Publication date: September 2, 2021
    Inventors: Wenbin Yang, Jinpeng Liu, WuiChak Wong, Sanping Li, Zhen Jia
  • Publication number: 20210271987
    Abstract: Embodiments of the present disclosure provide a method for solving a problem, a computing system and a program product. A method for solving a problem includes determining information related to a to-be-solved problem; acquiring, based on the information, knowledge elements that can be used for the to-be-solved problem from a knowledge repository, the knowledge repository storing: solved problems, at least one executable task related to the solved problems, at least one processing flow for implementing the at least one executable task, and a corresponding function module included in the at least one processing flow; and determining, based at least on the acquired knowledge elements, a solution to the to-be-solved problem. By such arrangements, automatic problem solving can be achieved in a faster, simpler way with a lower cost through division of the repository and the knowledge elements.
    Type: Application
    Filed: March 20, 2020
    Publication date: September 2, 2021
    Inventors: YuHong Nie, WuiChak Wong, Sanping Li, Xuwei Tang
  • Patent number: 11048550
    Abstract: Embodiments of the present disclosure provide methods, devices, and computer program products for processing a task. A method of processing a task comprises: receiving, at a network device and from a set of computing devices, a set of processing results derived from processing the task by the set of computing devices; in response to receiving the set of processing results, executing a reduction operation on the set of processing results; and transmitting a result of the reduction operation to the set of computing devices. In this way, embodiments of the present disclosure can significantly reduce an amount of data exchanged among a plurality of devices processing a task in parallel, and thus reduce network latency caused by data exchange.
    Type: Grant
    Filed: June 17, 2019
    Date of Patent: June 29, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Hu Chenxi, Kun Wang, Sanping Li, Junping Zhao
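
The in-network reduction in the abstract above can be pictured as a switch that collects one partial result from each worker, reduces them, and sends the reduced result back to every worker. The sketch below keeps everything in one process and uses summation as the reduction, which is an assumption.

```python
import numpy as np

class ReducingSwitch:
    """Hedged sketch: a network device that reduces partial results in-network."""

    def __init__(self, num_workers):
        self.num_workers = num_workers
        self.pending = []

    def submit(self, partial):
        # Collect one partial result per worker.
        self.pending.append(np.asarray(partial, dtype=float))
        if len(self.pending) < self.num_workers:
            return None
        # All partials received: execute the reduction (sum, assumed)
        # and return the result to be transmitted back to the workers.
        reduced = np.sum(self.pending, axis=0)
        self.pending = []
        return reduced

if __name__ == "__main__":
    switch = ReducingSwitch(num_workers=3)
    gradients = [np.array([0.1, 0.2]), np.array([0.3, 0.1]), np.array([0.2, 0.2])]
    result = None
    for g in gradients:
        out = switch.submit(g)
        if out is not None:
            result = out
    print("reduced result broadcast to workers:", result)
```
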
  • Patent number: 11023274
    Abstract: A method for processing data includes receiving an adjustment request for adjusting a number of consumer instances from a first number to a second number, and determining a migration overhead for adjusting a first distribution of states associated with the first number of consumer instances to a second distribution of the states associated with the second number of consumer instances, wherein the states are intermediate results of processing the data and the migration overhead includes a latency and a bandwidth shortage incurred for migrating the states. Based on the determined migration overhead, the states are migrated between the first number of consumer instances and the second number of consumer instances, and thereafter the data is processed based on the second distribution of the states at the second number of consumer instances.
    Type: Grant
    Filed: December 12, 2019
    Date of Patent: June 1, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Simon Tao, Yu Cao, Zhe Dong, Sanping Li
  • Publication number: 20210133588
    Abstract: A method for model adaptation, an electronic device, and a computer program product are disclosed. For example, the method comprises processing first input data by using a first machine learning model having first parameter set values, to obtain first feature information of the first input data, the first machine learning model having a capability of self-ordering and the first parameter set values being updated after the processing of the first input data; generating a first classification result for the first input data based on the first feature information by using a second machine learning model having second parameter set values; processing second input data by using the first machine learning model having the updated first parameter set values, to obtain second feature information of the second input data; and generating a second classification result for the second input data based on the second feature information by using the second machine learning model having the second parameter set values.
    Type: Application
    Filed: March 3, 2020
    Publication date: May 6, 2021
    Inventors: WuiChak Wong, Sanping Li, Jin Li
  • Publication number: 20210034922
    Abstract: A method comprises: generating, at a first computing device, a first set of gradient values associated with a data block processed by nodes of a machine learning model, the first set of gradient values being in a first data format; determining a first shared factor from the first set of gradient values, the first shared factor being in a second data format of a lower precision than that of the first data format; and scaling the first set of gradient values with the first shared factor, to obtain a second set of gradient values having the second data format. In addition, the method comprises sending the second set of gradient values and the first shared factor to a second computing device; and, in response to receiving a third set of gradient values and a second shared factor from the second computing device, adjusting parameters of the machine learning model.
    Type: Application
    Filed: November 6, 2019
    Publication date: February 4, 2021
    Inventors: Hu Chenxi, Sanping Li
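
The abstract above compresses gradients before exchange by scaling them with a single shared factor and representing them in a lower-precision format. The sketch below assumes float32 as the first format, float16 as the second, and a shared factor derived from the peak gradient magnitude; none of those specific choices are claimed in the publication.

```python
import numpy as np

def compress(gradients_fp32):
    """Scale float32 gradients into float16 plus one shared factor (sketch)."""
    peak = float(np.max(np.abs(gradients_fp32)))
    # Shared factor chosen so the largest magnitude maps to about 1024,
    # comfortably inside the float16 range; the floor avoids a zero factor.
    shared = np.float16(max(peak, 1e-3) / 1024.0)
    scaled = (gradients_fp32 / np.float32(shared)).astype(np.float16)
    return scaled, shared

def decompress(gradients_fp16, shared):
    """Recover approximate float32 gradients from the compressed form."""
    return gradients_fp16.astype(np.float32) * np.float32(shared)

if __name__ == "__main__":
    grads = np.random.default_rng(0).normal(0, 3.0, 5).astype(np.float32)
    sent, factor = compress(grads)        # what travels over the network
    received = decompress(sent, factor)   # what the peer reconstructs
    print("original :", grads)
    print("recovered:", received)
    print("max error:", np.max(np.abs(grads - received)))
```
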
  • Patent number: 10909415
    Abstract: Embodiments of the present disclosure relate to a method, a device and a computer readable medium for generating an image tag. According to the embodiments of the present disclosure, an index value of an image is determined based on contents of the image, similarities between a plurality of images are determined based on index values of the plurality of images, and tags are thereby generated for the images. According to the embodiments of the present disclosure, images are further grouped depending on the similarities between them.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: February 2, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Sanping Li, Junping Zhao
  • Patent number: 10860245
    Abstract: Embodiments of the present disclosure propose a method and apparatus for optimizing storage of application data. The method comprises obtaining description information for application data from an application; performing storage optimization based on the description information; and performing at least part of a storage function to be implemented by the back-end storage device, based on the description information, before transmitting the application data to the back-end storage device. With the method or apparatus according to the embodiments of the present disclosure, an efficient manner of integrating the application and the non-volatile storage device is provided to coordinate the application and storage, thereby improving efficiency and expanding capability.
    Type: Grant
    Filed: March 10, 2017
    Date of Patent: December 8, 2020
    Assignee: EMC IP Holding Company, LLC
    Inventors: Junping Frank Zhao, Kun Wang, Yu Cao, Zhe Dong, Sanping Li