Patents by Inventor Junping ZHAO

Junping ZHAO has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10409778
    Abstract: A method for processing a data request in a software defined storage system, wherein the software defined storage system comprises one or more nodes configured as a set of client modules operatively coupled to a set of server modules, comprises the following steps. A data request with a data set is received at one of the set of client modules. One or more data services (e.g., deduplication and/or data compression) are performed on the data set, wherein the performance of the one or more data services on the data set is dynamically shared between one or more of the set of client modules and one or more of the set of server modules.
    Type: Grant
    Filed: August 19, 2016
    Date of Patent: September 10, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Junping Zhao, Accela Zhao, Ricky Sun, Kun Wang, Sanping Li, Kenneth Durazzo
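    Illustrative sketch (not from the patent text): a minimal Python example of how performing deduplication and compression could be shifted between a client module and a server module based on client load; the ClientModule class, the load threshold, and the SHA-256/zlib choices are assumptions for illustration only.
      import hashlib
      import zlib

      def dedup_fingerprint(data: bytes) -> str:
          """Content fingerprint used to detect duplicate blocks."""
          return hashlib.sha256(data).hexdigest()

      class ClientModule:
          def __init__(self, load_threshold: float = 0.7):
              self.load_threshold = load_threshold

          def cpu_load(self) -> float:
              return 0.5  # placeholder: a real module would sample host CPU utilization

          def handle_write(self, data: bytes) -> dict:
              """Run dedup/compression locally when the client is lightly loaded;
              otherwise ship raw data and let a server module finish the services."""
              if self.cpu_load() < self.load_threshold:
                  return {"fingerprint": dedup_fingerprint(data),
                          "payload": zlib.compress(data),
                          "services_done": ["dedup", "compress"]}
              return {"payload": data, "services_done": []}  # server side completes the services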
  • Patent number: 10394664
    Abstract: An apparatus in one embodiment comprises a distributed processing system including a plurality of processing nodes. The processing nodes implement respective ones of a plurality of operators for processing a data stream in the distributed processing system. Responsive to a detected fault in a given one of the operators processing the data stream, other ones of the operators processing the data stream are partitioned into one or more upstream operators, one or more immediately downstream operators, and one or more further downstream operators, relative to the given faulted operator. The given faulted operator is recovered from a checkpoint of that operator. In parallel with recovering the given faulted operator, different sets of operations are performed for respective ones of the upstream operators, immediately downstream operators and further downstream operators. A given such set of operations may be performed utilizing window metadata maintained for respective buffers of the processing nodes.
    Type: Grant
    Filed: August 4, 2017
    Date of Patent: August 27, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Kevin Xu, Junping Zhao
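    Illustrative sketch (not from the patent text): a small Python example of partitioning a linear operator pipeline around a faulted operator; the pipeline list and the group names are assumptions, and real pipelines may be DAGs rather than lists.
      def partition_operators(pipeline, faulted):
          """Split operators into upstream, immediately-downstream and
          further-downstream groups relative to the faulted operator."""
          idx = pipeline.index(faulted)
          upstream = pipeline[:idx]
          immediately_downstream = pipeline[idx + 1:idx + 2]
          further_downstream = pipeline[idx + 2:]
          return upstream, immediately_downstream, further_downstream

      ops = ["source", "parse", "aggregate", "enrich", "sink"]
      up, near, far = partition_operators(ops, "aggregate")
      # up == ['source', 'parse'], near == ['enrich'], far == ['sink'];
      # while "aggregate" is restored from its checkpoint, each group can be
      # given its own recovery actions (e.g., buffering, replay, or no-op).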
  • Patent number: 10382751
    Abstract: Transparent, fine-grained, and adaptive data compression is described. A system determines a first data chunk and a second data chunk in a persistent storage. The system determines a first data read count and/or a first data write count for the first data chunk, and a second data read count and/or a second data write count for the second data chunk. The system then determines a first data compression status for the first data chunk and a second data compression status for the second data chunk. Based on the first data compression status and the second data compression status, the system stores data in the first data chunk and data in the second data chunk to the persistent storage.
    Type: Grant
    Filed: June 30, 2017
    Date of Patent: August 13, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Junping Zhao, Kenneth J. Taylor, Lin Peng, Kun Wang
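    Illustrative sketch (not from the patent text): a minimal Python example of choosing a per-chunk compression status from access counts; the Chunk structure and the write-count threshold are assumptions.
      import zlib
      from dataclasses import dataclass

      @dataclass
      class Chunk:
          data: bytes
          reads: int = 0
          writes: int = 0
          compressed: bool = False

      def choose_compression(chunk: Chunk, write_threshold: int = 10) -> bytes:
          """Leave hot (frequently rewritten) chunks uncompressed to avoid repeated
          recompression cost; compress colder chunks to save space."""
          if chunk.writes < write_threshold:
              chunk.compressed = True
              return zlib.compress(chunk.data)
          chunk.compressed = False
          return chunk.data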
  • Publication number: 20190233817
    Abstract: The present disclosure relates to compositions and methods for treating APOC3-related diseases such as: hypertriglyceridemia (e.g., Type V Hypertriglyceridemia), abnormal lipid metabolism, abnormal cholesterol metabolism, atherosclerosis, hyperlipidemia, diabetes, including Type 2 diabetes, obesity, cardiovascular disease, and coronary artery disease, among other disorders relating to abnormal metabolism or otherwise, using a therapeutically effective amount of an RNAi agent to APOC3.
    Type: Application
    Filed: February 6, 2019
    Publication date: August 1, 2019
    Inventors: Jan Weiler, William Chutkow, Jeremy Lee Baryza, Andrew Krueger, Junping Zhao
  • Patent number: 10359968
    Abstract: Virtual storage domains (VSDs) are each associated with a unique VSD domain ID that is associated with a first policy and tagged to a request to a storage system when an entity writes a data set to it. A first hash digest, based on the data set content, is calculated and combined with the first unique VSD domain ID into a second hash digest associated with the data set. When the first policy is changed to a second policy associated with a second VSD, a third hash digest of the first data set is calculated, the third hash digest based on the content of the first data set and on the second unique VSD domain ID. If the third hash digest does not exist in the second VSD, the data set is copied to the second VSD; otherwise, the reference count of the third hash digest, associated with the second VSD domain, is incremented, and the reference count of the second hash digest, associated with the first VSD domain, is decremented.
    Type: Grant
    Filed: January 31, 2018
    Date of Patent: July 23, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Xiangping Chen, Anton Kucherov, Junping Zhao
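    Illustrative sketch (not from the patent text): a Python example of domain-aware deduplication keyed on content plus VSD domain ID with reference counting; the in-memory dictionaries and the SHA-256 digests are assumptions.
      import hashlib

      refcounts = {}   # combined digest -> reference count
      store = {}       # combined digest -> data

      def domain_digest(data: bytes, vsd_id: str) -> str:
          content = hashlib.sha256(data).hexdigest()                        # content-based digest
          return hashlib.sha256((content + vsd_id).encode()).hexdigest()    # combined with domain ID

      def move_to_domain(data: bytes, old_vsd: str, new_vsd: str) -> None:
          """Re-home a data set when its policy changes from old_vsd to new_vsd."""
          new_digest = domain_digest(data, new_vsd)
          if new_digest not in store:
              store[new_digest] = data          # copy the data set into the new domain
              refcounts[new_digest] = 1
          else:
              refcounts[new_digest] += 1        # data already present there: just add a reference
          old_digest = domain_digest(data, old_vsd)
          if old_digest in refcounts:
              refcounts[old_digest] -= 1        # release the reference held by the old domain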
  • Publication number: 20190220308
    Abstract: Embodiments of the present disclosure relate to a method, device and a computer readable medium for processing a GPU task. The method implemented at a client side for processing the GPU task comprises: receiving a request for the GPU task from an application; determining whether the request relates to a query about an execution state of the GPU task; and in response to the request relating to the query, providing a positive acknowledgement for the query to the application, without forwarding the request to a machine that executes the GPU task.
    Type: Application
    Filed: January 9, 2019
    Publication date: July 18, 2019
    Inventors: Wei Cui, Kun Wang, Junping Zhao
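    Illustrative sketch (not from the publication text): a Python example of the client-side shortcut in which execution-state queries are acknowledged locally while other requests are forwarded; the request/response dictionaries and forward_to_gpu_server() are assumptions.
      def forward_to_gpu_server(request: dict) -> dict:
          # Placeholder for the real RPC to the remote machine executing the GPU task.
          return {"task_id": request.get("task_id"), "state": "forwarded"}

      def handle_request(request: dict) -> dict:
          if request.get("kind") == "query_state":
              # Answer the query locally instead of making a round trip to the GPU server.
              return {"task_id": request["task_id"], "state": "ok"}
          return forward_to_gpu_server(request)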
  • Publication number: 20190220316
    Abstract: Embodiments of the present disclosure relate to a method, device and computer program product for determining a resource amount of dedicated processing resources. The method comprises obtaining a structural representation of a neural network for deep learning processing, the structural representation indicating a layer attribute of the neural network that is associated with the dedicated processing resources; and determining the resource amount of the dedicated processing resources required for the deep learning processing based on the structural representation. In this manner, the resource amount of the dedicated processing resources required by the deep learning processing may be better estimated to improve the performance and resource utilization rate of the dedicated processing resource scheduling.
    Type: Application
    Filed: January 4, 2019
    Publication date: July 18, 2019
    Inventors: Junping Zhao, Sanping Li
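    Illustrative sketch (not from the publication text): a Python example of estimating GPU memory from a layer-wise description of a network; the layer dictionaries, the 4-bytes-per-value assumption, and the sample numbers are illustrative only.
      def estimate_gpu_memory(layers, batch_size: int, bytes_per_value: int = 4) -> int:
          total = 0
          for layer in layers:
              params = layer.get("parameters", 0)        # weights and biases
              activations = layer.get("output_size", 0)  # per-sample activation count
              total += params * bytes_per_value
              total += activations * batch_size * bytes_per_value
          return total

      net = [{"parameters": 23520, "output_size": 4704},   # e.g., a small convolutional layer
             {"parameters": 48120, "output_size": 120}]    # e.g., a fully connected layer
      print(estimate_gpu_memory(net, batch_size=32))       # rough byte estimate for scheduling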
  • Publication number: 20190220208
    Abstract: Embodiments of the present disclosure relate to a method, device and computer program product for storing data. The method comprises obtaining a first range of replica levels supported by a storage apparatus, wherein a replica level indicates the number of replicas of data. The method further comprises receiving a replica configuration requirement for an application, wherein the application supports a second range of replica levels. Moreover, the method further comprises determining a first replica level for the storage apparatus and a second replica level for the application based on the replica configuration requirement, the first range and the second range. By exposing the replica capability supported by the storage apparatus, embodiments of the present disclosure can configure the replica levels of the storage apparatus and the application globally to satisfy the user's replica requirement for the data service.
    Type: Application
    Filed: January 7, 2019
    Publication date: July 18, 2019
    Inventors: Yongjun Shi, Junping Zhao
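    Illustrative sketch (not from the publication text): a Python example of splitting a total replica requirement between a storage-level and an application-level replica count; the assumption that the two levels multiply, and the greedy storage-first search, are illustrative only.
      def split_replica_levels(required: int, storage_range: range, app_range: range):
          for storage_level in sorted(storage_range, reverse=True):
              for app_level in app_range:
                  if storage_level * app_level >= required:
                      return storage_level, app_level
          raise ValueError("requirement cannot be met by the two ranges")

      # e.g., six total copies; storage supports 1-3 replicas, the application supports 1-4
      print(split_replica_levels(6, range(1, 4), range(1, 5)))  # -> (3, 2)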
  • Publication number: 20190220311
    Abstract: Embodiments of the present disclosure relate to a method, apparatus and computer program product for scheduling dedicated processing resources. The method comprises: in response to receiving a scheduling request for a plurality of dedicated processing resources, obtaining a topology of the plurality of dedicated processing resources, the topology being determined based on connection attributes related to connections among the plurality of dedicated processing resources; and determining, based on the topology, a target dedicated processing resource satisfying the scheduling request from the plurality of dedicated processing resources. In this manner, the performance and the resource utilization rate of scheduling the dedicated processing resources are improved.
    Type: Application
    Filed: January 11, 2019
    Publication date: July 18, 2019
    Inventors: Junping Zhao, Layne Lin Peng, Zhi Ying
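    Illustrative sketch (not from the publication text): a Python example of topology-aware selection that prefers a GPU group whose weakest pairwise link has the highest bandwidth; the bandwidth table and link numbers are assumptions.
      from itertools import combinations

      def pick_gpus(bandwidth: dict, num_needed: int):
          """bandwidth maps frozenset({gpu_a, gpu_b}) to link bandwidth in GB/s."""
          gpus = {g for pair in bandwidth for g in pair}
          best, best_score = None, -1.0
          for group in combinations(sorted(gpus), num_needed):
              score = min(bandwidth.get(frozenset(p), 0.0) for p in combinations(group, 2))
              if score > best_score:
                  best, best_score = group, score
          return best

      links = {frozenset({"gpu0", "gpu1"}): 50.0,   # e.g., NVLink
               frozenset({"gpu0", "gpu2"}): 16.0,   # e.g., PCIe
               frozenset({"gpu1", "gpu2"}): 16.0}
      print(pick_gpus(links, 2))  # -> ('gpu0', 'gpu1')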
  • Publication number: 20190220384
    Abstract: Embodiments of the present disclosure relate to a method of tracing a computing system, a device for tracing a computing system, and a computer readable medium. According to some embodiments, tracing data is extracted from a request that requests a dedicated processing resource for a task, the request being initiated by an application executed on a client and the tracing data including a parameter for performing the task, an identifier of the application, and time elapsed from initiating the request. The tracing data is stored in a volatile memory to facilitate transmitting the tracing data to a database server. The request is caused to be processed by a computing server hosting the dedicated processing resource. In this way, the cloud computing system may be traced rather than tracing the stand-alone tasks only.
    Type: Application
    Filed: January 4, 2019
    Publication date: July 18, 2019
    Inventors: Zhi Ying, Junping Zhao
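    Illustrative sketch (not from the publication text): a Python example of extracting tracing fields from a resource request and buffering them in memory before they are shipped to a database server; the field names and the in-memory list are assumptions.
      import time

      trace_buffer = []            # volatile buffer, flushed to a database server later

      def trace_and_forward(request: dict, submit):
          record = {"app_id": request["app_id"],
                    "task_params": request["params"],
                    "elapsed": time.time() - request["initiated_at"]}
          trace_buffer.append(record)   # store the tracing data in memory
          return submit(request)        # let the computing server process the request

      result = trace_and_forward(
          {"app_id": "trainer-1", "params": {"batch": 32}, "initiated_at": time.time()},
          submit=lambda req: "scheduled")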
  • Publication number: 20190196875
    Abstract: The present disclosure relates to a method, system and computer program product for processing a computing task. There is provided a method for processing a computing task, comprising: establishing a connection with a client in response to receiving a processing request from the client, the processing request being for requesting an allocation of a set of computing resources for processing the computing task; receiving a set of resource calling instructions associated with the computing task from the client via the established connection; executing the set of resource calling instructions to obtain a processing result by using the set of computing resources; and returning the processing result to the client.
    Type: Application
    Filed: October 29, 2018
    Publication date: June 27, 2019
    Inventors: Junping Zhao, Zhi Ying
  • Publication number: 20190197655
    Abstract: A graphics processing unit (GPU) service platform includes a control server, and a cluster of GPU servers each having one or more GPU devices. The control server receives a service request from a client system for GPU processing services, allocates multiple GPU servers within the cluster to handle the GPU processing tasks specified by the service request by logically binding the allocated GPU servers and designating one of them as a master server, and sends connection information to the client system to enable the client system to connect to the master server. The master GPU server receives a block of GPU program code transmitted from the client system, which is associated with the GPU processing tasks specified by the service request, processes the block of GPU program code using the GPU devices of the logically bound GPU servers, and returns processing results to the client system.
    Type: Application
    Filed: February 27, 2019
    Publication date: June 27, 2019
    Inventors: Yifan Sun, Layne Peng, Robert A. Lincourt, JR., John Cardente, Junping Zhao
  • Patent number: 10325343
    Abstract: Techniques are provided for implementing a graphics processing unit (GPU) service platform that is configured to provide topology aware grouping and provisioning of GPU resources for GPU-as-a-Service. A GPU server node receives a service request from a client system for GPU processing services provided by the GPU server node, wherein the GPU server node comprises a plurality of GPU devices. The GPU server node accesses a performance metrics data structure which comprises performance metrics associated with an interconnect topology of the GPU devices and hardware components of the GPU server node. The GPU server node dynamically forms a group of GPU devices of the GPU server node based on the performance metrics of the accessed data structure, and provisions the dynamically formed group of GPU devices to the client system to handle the service request.
    Type: Grant
    Filed: August 4, 2017
    Date of Patent: June 18, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Junping Zhao, Zhi Ying, Kenneth Durazzo
  • Patent number: 10324656
    Abstract: A method of controlling one or more data services in a computing environment includes the following steps. A request to read data from, or write data to, one or more storage devices in the computing environment is obtained from an application executing on a host device in the computing environment. One or more application-aware parameters associated with the data of the request are obtained. Operation of the one or more data services is controlled based on the one or more application-aware parameters.
    Type: Grant
    Filed: July 29, 2016
    Date of Patent: June 18, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Junping Zhao, Accela Zhao, Ricky Sun, Kenneth Durazzo
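    Illustrative sketch (not from the patent text): a Python example of steering data services with application-supplied hints; the hint names and the service decisions are assumptions.
      def plan_data_services(io_request: dict) -> list:
          hints = io_request.get("app_hints", {})
          services = []
          if hints.get("compressible", True) and hints.get("type") != "media":
              services.append("compress")   # skip compression for already-compressed media
          if hints.get("dedup_friendly", False):
              services.append("dedup")      # e.g., VM images with many identical blocks
          return services

      print(plan_data_services({"op": "write",
                                "app_hints": {"type": "log", "compressible": True}}))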
  • Publication number: 20190173960
    Abstract: Embodiments of the present disclosure relate to a method, device and computer program product for protocol selection. According to embodiments of the present disclosure, a client may determine supported transmission protocols based on its own hardware information and transmit a connection request to a server using a protocol with a higher priority. The server may determine supported protocols based on its own hardware information and respond to the connection request according to the supported protocols. In this way, the establishment of the connection between the client and the server is transparent to users.
    Type: Application
    Filed: October 29, 2018
    Publication date: June 6, 2019
    Inventors: Junping Zhao, Zhi Ying
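    Illustrative sketch (not from the publication text): a Python example of priority-ordered protocol negotiation driven by each side's hardware; the protocol names and the detection helper are assumptions.
      PRIORITY = ["RDMA", "TCP"]           # try the faster transport first

      def supported_protocols(has_rdma_nic: bool) -> list:
          return ["RDMA", "TCP"] if has_rdma_nic else ["TCP"]

      def negotiate(client_protocols: list, server_protocols: list) -> str:
          for proto in PRIORITY:
              if proto in client_protocols and proto in server_protocols:
                  return proto
          raise RuntimeError("no common transport protocol")

      print(negotiate(supported_protocols(True), supported_protocols(False)))  # -> "TCP"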
  • Publication number: 20190171487
    Abstract: Embodiments of the present disclosure relate to a method, a device and a computer readable medium for managing a dedicated processing resource. According to the embodiments of the present disclosure, a server receives a request of a first application from a client and, based on an index of a resource subset included in the request, determines the dedicated processing resource corresponding to the resource subset for processing the request. According to the embodiments of the present disclosure, the dedicated processing resource is divided into a plurality of resource subsets, so that the utilization efficiency of the dedicated processing resource is improved.
    Type: Application
    Filed: October 29, 2018
    Publication date: June 6, 2019
    Inventors: Junping Zhao, Kun Wang, Fan Guo
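    Illustrative sketch (not from the publication text): a Python example of dividing dedicated processing resources into indexed subsets and resolving a request by its subset index; the subset table is an assumption.
      subsets = {
          0: {"device": "gpu0", "mem_mb": 4096, "streams": 2},
          1: {"device": "gpu0", "mem_mb": 4096, "streams": 2},
          2: {"device": "gpu1", "mem_mb": 8192, "streams": 4},
      }

      def resolve_request(request: dict) -> dict:
          """Map the subset index carried in the request to a concrete resource slice."""
          return subsets[request["subset_index"]]

      print(resolve_request({"app": "inference-svc", "subset_index": 2}))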
  • Publication number: 20190173739
    Abstract: Embodiments of the present disclosure relate to a method, a device and a computer program product for managing a distributed system. The method comprises sending heartbeat messages from a master node to a plurality of slave nodes, the master node and the plurality of slave nodes being included in a plurality of nodes in the distributed system, and the plurality of nodes being divided into one or more partitions. The method further comprises, in response to receiving a response to the heartbeat messages from a portion of the slave nodes in the plurality of slave nodes, determining respective states of the one or more partitions. In addition, the method further comprises determining a state of a first slave node in the plurality of slave nodes based at least on the respective states of the one or more partitions, the master node having failed to receive a response to the heartbeat messages from the first slave node.
    Type: Application
    Filed: October 29, 2018
    Publication date: June 6, 2019
    Inventors: Wei Cui, Junping Zhao, Huan Chen, Brown Zan Liu
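    Illustrative sketch (not from the publication text): a Python example of inferring a silent slave node's state from the health of its partition; the partition map and the "node-down vs. partition-suspect" rule are assumptions.
      def classify_silent_node(silent_node: str, partitions: dict, responders: set) -> str:
          """partitions maps a partition name to the set of its member nodes."""
          for members in partitions.values():
              if silent_node in members:
                  others = members - {silent_node}
                  if others and others <= responders:
                      # The rest of its partition answered, so the network path is
                      # healthy; the silent node itself is likely down.
                      return "node-down"
                  return "partition-suspect"   # the whole partition may be unreachable
          return "unknown"

      parts = {"rack1": {"n1", "n2", "n3"}, "rack2": {"n4", "n5"}}
      print(classify_silent_node("n2", parts, responders={"n1", "n3", "n4", "n5"}))  # node-down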
  • Publication number: 20190171907
    Abstract: Embodiments of the present disclosure relate to a method, a device and a computer readable medium for generating an image tag. According to the embodiments of the present disclosure, an index value of an image is determined based on the contents of the image, similarities between a plurality of images are determined based on the index values of the plurality of images, and tags are thereby generated for the images. According to the embodiments of the present disclosure, the images are further grouped according to the similarities between them.
    Type: Application
    Filed: October 26, 2018
    Publication date: June 6, 2019
    Inventors: Sanping Li, Junping Zhao
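    Illustrative sketch (not from the publication text): a Python example of computing a content-derived index value per image and grouping near-duplicates by similarity; the average-hash index and the 0.9 threshold are stand-ins for whatever index the publication computes.
      def average_hash(pixels):
          """pixels: flat list of grayscale values; returns a bit-string index value."""
          mean = sum(pixels) / len(pixels)
          return "".join("1" if p > mean else "0" for p in pixels)

      def similarity(idx_a: str, idx_b: str) -> float:
          same = sum(a == b for a, b in zip(idx_a, idx_b))
          return same / len(idx_a)

      img1 = [10, 200, 30, 220]
      img2 = [12, 198, 28, 225]
      if similarity(average_hash(img1), average_hash(img2)) > 0.9:
          shared_tag = "group-1"   # near-duplicate images receive the same tag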
  • Publication number: 20190171489
    Abstract: Embodiments of the present disclosure provide a method of managing dedicated processing resources, a server system and a computer program product. The method may include receiving a request for the dedicated processing resources from an application having an assigned priority. The method may also include determining a total amount of resources to be occupied by the application based on the request. The method may further include, in response to the total amount of resources approximating or exceeding a predetermined quota associated with the priority, assigning, from the dedicated processing resources, a first amount of dedicated processing resources to the application. In addition, the method may include, in response to the application completing an operation associated with the request using the assigned dedicated processing resources, causing the application to sleep for a time period.
    Type: Application
    Filed: October 22, 2018
    Publication date: June 6, 2019
    Inventors: Fan Guo, Kun Wang, Junping Zhao
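    Illustrative sketch (not from the publication text): a Python example of per-priority quotas with a short cool-down sleep after an operation completes; the quota table and sleep length are assumptions.
      import time

      QUOTAS = {"high": 8, "low": 2}        # maximum resource units per priority class

      class ResourceManager:
          def __init__(self):
              self.in_use = {"high": 0, "low": 0}

          def request(self, priority: str, amount: int) -> int:
              """Grant up to the remaining quota for this priority."""
              grant = min(amount, max(QUOTAS[priority] - self.in_use[priority], 0))
              self.in_use[priority] += grant
              return grant

          def complete(self, priority: str, amount: int) -> None:
              """Called when the application finishes the operation."""
              self.in_use[priority] -= amount
              time.sleep(0.01)              # brief sleep keeps a busy app from monopolizing resources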
  • Patent number: 10289555
    Abstract: Systems, methods, and articles of manufacture comprising processor-readable storage media are provided to implement read-ahead memory operations using learned memory access patterns for memory management systems. For example, a method for managing memory includes receiving a request from a requestor (e.g., an active process) to perform a memory access operation, which includes a requested memory address. A determination is made as to whether a data block (e.g., a page) associated with the requested memory address resides in a cache memory.
    Type: Grant
    Filed: April 14, 2017
    Date of Patent: May 14, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Adrian Michaud, Kenneth J. Taylor, Randall Shain, Stephen Wing-Kin Au, Junping Zhao
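    Illustrative sketch (not from the patent text): a Python example of pattern-learned read-ahead that remembers which page tends to follow which and prefetches the predicted successor; the successor map and the unbounded cache are simplifying assumptions.
      class ReadAheadCache:
          def __init__(self):
              self.cache = {}          # page -> data
              self.successor = {}      # learned access pattern: page -> most recent next page
              self.last_page = None

          def access(self, page: int, backing_store) -> bytes:
              if self.last_page is not None:
                  self.successor[self.last_page] = page     # learn the access pattern
              self.last_page = page
              if page not in self.cache:                    # cache miss: demand-load the page
                  self.cache[page] = backing_store[page]
              predicted = self.successor.get(page)
              if predicted is not None and predicted not in self.cache:
                  self.cache[predicted] = backing_store[predicted]   # read ahead
              return self.cache[page]

      store = {i: bytes([i]) for i in range(8)}
      cache = ReadAheadCache()
      for p in (1, 2, 3, 1, 2):
          cache.access(p, store)
      # The successor map has learned 1 -> 2, 2 -> 3, 3 -> 1, so later accesses to
      # page 1 also prefetch page 2 whenever it has been evicted from the cache.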