Patents by Inventor Yandong Wang

Yandong Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20180329861
    Abstract: A cache management system performs cache management in a Remote Direct Memory Access (RDMA) key value data store. The cache management system receives a request from at least one client configured to access a data item stored in a data location of a remote server, and determines a popularity of the data item based on a frequency at which the data location is accessed by the at least one client. The system is further configured to determine a lease period of the data item based on the frequency and to assign the lease period to the data location.
    Type: Application
    Filed: July 23, 2018
    Publication date: November 15, 2018
    Inventors: Michel H. Hack, Yufei Ren, Yandong Wang, Li Zhang
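The frequency-based lease mechanism described in the abstract above can be sketched in a few lines. This is an illustrative toy (class and method names are hypothetical), not the patented implementation:

```python
from collections import defaultdict

class LeaseManager:
    """Sketch of frequency-based lease assignment: locations that are
    read more often earn longer leases, capped at max_lease."""

    def __init__(self, base_lease=1.0, max_lease=60.0):
        self.base_lease = base_lease
        self.max_lease = max_lease
        self.access_counts = defaultdict(int)

    def record_access(self, location):
        # Each client read of a data location bumps its popularity.
        self.access_counts[location] += 1

    def lease_period(self, location):
        # Lease grows linearly with access frequency, up to a ceiling.
        return min(self.base_lease * self.access_counts[location],
                   self.max_lease)

mgr = LeaseManager()
for _ in range(5):
    mgr.record_access("item:42")
popular_lease = mgr.lease_period("item:42")   # 5.0
cold_lease = mgr.lease_period("item:99")      # 0.0 (never accessed)
```

Any policy mapping frequency to duration would fit the abstract's description; a linear-with-cap rule is simply the easiest to illustrate.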
  • Publication number: 20180322383
    Abstract: A storage controller of a machine receives training data associated with a neural network model. The neural network model includes a plurality of layers, and the machine further includes at least one graphics processing unit. The storage controller trains at least one layer of the plurality of layers of the neural network model using the training data to generate processed training data. A size of the processed training data is less than a size of the training data. Training of the at least one layer includes adjusting one or more weights of the at least one layer using the training data. The storage controller sends the processed training data to at least one graphics processing unit of the machine. The at least one graphics processing unit is configured to store the processed training data and train one or more remaining layers of the plurality of layers using the processed training data.
    Type: Application
    Filed: May 2, 2017
    Publication date: November 8, 2018
    Applicant: International Business Machines Corporation
    Inventors: Minwei Feng, Yufei Ren, Yandong Wang, Li Zhang, Wei Zhang
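A minimal sketch of the layer-split idea, assuming a single dense first layer owned by the storage side. It shows only the forward pass and the size reduction of the handed-off data (the weight-update step is omitted), and all names are hypothetical:

```python
import random

random.seed(0)

def linear_forward(weights, x):
    """One dense layer: len(weights) outputs from len(x) inputs."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

# Hypothetical split: the storage controller owns the first layer,
# which shrinks each 8-feature sample down to 3 features before the
# (simulated) GPU trains the remaining layers on the reduced data.
in_dim, out_dim = 8, 3
first_layer = [[random.uniform(-1, 1) for _ in range(in_dim)]
               for _ in range(out_dim)]

batch = [[random.uniform(-1, 1) for _ in range(in_dim)]
         for _ in range(4)]
processed = [linear_forward(first_layer, sample) for sample in batch]

# The processed data sent on to the GPUs is smaller than the raw batch.
raw_size = sum(len(s) for s in batch)            # 4 x 8 = 32 values
processed_size = sum(len(s) for s in processed)  # 4 x 3 = 12 values
```

The point of the sketch is the size inequality the abstract states: what crosses to the GPU is the reduced representation, not the raw training data.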
  • Publication number: 20180307651
    Abstract: A cache management system performs cache management in a Remote Direct Memory Access (RDMA) key value data store. The cache management system receives a request from at least one client configured to access a data item stored in a data location of a remote server, and determines a popularity of the data item based on a frequency at which the data location is accessed by the at least one client. The system is further configured to determine a lease period of the data item based on the frequency and to assign the lease period to the data location.
    Type: Application
    Filed: June 25, 2018
    Publication date: October 25, 2018
    Inventors: Michel H. Hack, Yufei Ren, Yandong Wang, Li Zhang
  • Publication number: 20180307972
    Abstract: A network interface controller of a machine receives a packet including at least one model parameter of a neural network model from a server. The packet includes a virtual address associated with the network interface controller, and the machine further includes a plurality of graphics processing units coupled to the network interface controller by a bus. The network interface controller translates the virtual address to a memory address associated with each of the plurality of graphics processing units. The network interface controller broadcasts the at least one model parameter to the memory address associated with each of the plurality of graphics processing units.
    Type: Application
    Filed: April 24, 2017
    Publication date: October 25, 2018
    Applicant: International Business Machines Corporation
    Inventors: Minwei Feng, Yufei Ren, Yandong Wang, Li Zhang, Wei Zhang
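The address-translation-and-broadcast step can be illustrated with a toy model in which each GPU is a flat array and the NIC holds a virtual-address table. Everything here is hypothetical stand-in code, not the patented NIC logic:

```python
class NICBroadcaster:
    """Toy NIC: a table maps one virtual address to a per-GPU memory
    offset, so a single incoming parameter can be written into every
    GPU's memory without an extra host-side copy per GPU."""

    def __init__(self, address_table):
        # address_table: virtual_addr -> {gpu_id: gpu_memory_offset}
        self.address_table = address_table

    def broadcast(self, virtual_addr, param, gpu_memory):
        # Translate once, then write the parameter to every GPU.
        for gpu_id, offset in self.address_table[virtual_addr].items():
            gpu_memory[gpu_id][offset] = param

# Three GPUs, each modelled as a small flat memory.
gpu_memory = {g: [None] * 4 for g in range(3)}
nic = NICBroadcaster({0x1000: {0: 2, 1: 2, 2: 2}})
nic.broadcast(0x1000, 0.5, gpu_memory)
```

After the call, the model parameter arriving at virtual address `0x1000` is visible at offset 2 of all three simulated GPU memories.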
  • Patent number: 10083193
    Abstract: A method to share remote DMA (RDMA) pointers to a key-value store among a plurality of clients. The method allocates a shared memory and accesses the key-value store with a key from a client and receives an information from the key-value store. The method further generates a RDMA pointer from the information, maps the key to a location in the shared memory, and generates a RDMA pointer record at the location. The method further stores the RDMA pointer and the key in the RDMA pointer record and shares the RDMA pointer record among the plurality of clients.
    Type: Grant
    Filed: January 9, 2015
    Date of Patent: September 25, 2018
    Assignee: International Business Machines Corporation
    Inventors: Shicong Meng, Xiaoqiao Meng, Jian Tan, Yandong Wang, Li Zhang
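A rough sketch of the shared-pointer-record idea, with a plain dict standing in for the shared memory region and made-up record fields:

```python
shared_records = {}  # key -> pointer record, visible to all clients

def make_rdma_pointer(info):
    # Stand-in for deriving an RDMA pointer (address, length) from
    # the key-value store's response for a key.
    return {"addr": info["addr"], "length": info["length"]}

def register_pointer(key, info):
    """Map the key to a slot in the shared area and store the record
    holding both the key and its RDMA pointer."""
    shared_records[key] = {"key": key,
                           "pointer": make_rdma_pointer(info)}

def lookup_pointer(key):
    # Any client can reuse the cached pointer without re-querying
    # the key-value store.
    record = shared_records.get(key)
    return record["pointer"] if record else None

register_pointer("user:7", {"addr": 0xBEEF, "length": 128})
ptr = lookup_pointer("user:7")
```

The benefit the abstract describes is exactly this second path: once one client has resolved a key, the others read the pointer record instead of repeating the lookup.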
  • Publication number: 20180253423
    Abstract: Computational storage techniques for distributed computing are disclosed. The computational storage server receives input from multiple clients, which is used by the server when executing one or more computation functions. The computational storage server can aggregate multiple client inputs before applying one or more computation functions. The computational storage server sets up: a first memory area for storing input received from multiple clients; a second memory area designated for storing the computation functions to be executed by the computational storage server using the input data received from the multiple clients; a client-specific memory management area for storing metadata related to computations performed by the computational storage server for specific clients; and a persistent storage area for storing checkpoints associated with aggregating computations performed by the computation functions.
    Type: Application
    Filed: March 2, 2017
    Publication date: September 6, 2018
    Inventors: Michel H. T. Hack, Yufei Ren, Wei Tan, Yandong Wang, Xingbo Wu, Li Zhang, Wei Zhang
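The four memory areas listed in the abstract can be mocked up as follows (attribute and method names are invented for illustration):

```python
class ComputationalStorageServer:
    """Sketch of the four areas from the abstract: client inputs,
    computation functions, per-client metadata, and checkpoints."""

    def __init__(self):
        self.input_area = {}       # first area: client_id -> inputs
        self.functions = {}        # second area: name -> function
        self.client_metadata = {}  # client-specific management area
        self.checkpoints = []      # persistent storage area

    def register_function(self, name, fn):
        self.functions[name] = fn

    def submit(self, client_id, value):
        self.input_area.setdefault(client_id, []).append(value)

    def aggregate_and_apply(self, name):
        # Aggregate inputs from all clients, then run the function,
        # checkpoint the result, and record per-client metadata.
        combined = [v for vals in self.input_area.values() for v in vals]
        result = self.functions[name](combined)
        self.checkpoints.append(result)
        for client_id in self.input_area:
            self.client_metadata[client_id] = {"last_result": result}
        return result

server = ComputationalStorageServer()
server.register_function("total", sum)
server.submit("client-a", 3)
server.submit("client-b", 4)
result = server.aggregate_and_apply("total")  # 7
```

The aggregation-before-apply step is the one the abstract singles out: both clients' inputs are combined before the function runs once.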
  • Publication number: 20180253646
    Abstract: A processing unit topology of a neural network including a plurality of processing units is determined. The neural network includes at least one machine in which each machine includes a plurality of nodes, and wherein each node includes at least one of the plurality of processing units. One or more of the processing units are grouped into a first group according to a first affinity. The first group is configured, using a processor and a memory, to use a first aggregation procedure for exchanging model parameters of a model of the neural network between the processing units of the first group. One or more of the processing units are grouped into a second group according to a second affinity. The second group is configured to use a second aggregation procedure for exchanging the model parameters between the processing units of the second group.
    Type: Application
    Filed: March 5, 2017
    Publication date: September 6, 2018
    Applicant: International Business Machines Corporation
    Inventors: Minwei Feng, Yufei Ren, Yandong Wang, Li Zhang, Wei Zhang
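The two-level grouping can be sketched assuming simple averaging as the aggregation procedure at both levels (the abstract leaves the procedures open, so averaging is purely an illustrative choice):

```python
def mean(values):
    return sum(values) / len(values)

def hierarchical_aggregate(node_groups):
    """Two-level parameter aggregation: first aggregate within each
    first-affinity group (e.g. processing units on one node), then
    aggregate the per-group results across the second-affinity group."""
    intra = [mean(group) for group in node_groups]  # first procedure
    return mean(intra)                              # second procedure

# Two nodes, each holding two local copies of one model parameter.
groups = [[1.0, 3.0], [5.0, 7.0]]
global_param = hierarchical_aggregate(groups)  # 4.0
```

Grouping by affinity matters because the intra-node step can use a fast local interconnect while only the small per-group results cross between nodes.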
  • Patent number: 10037302
    Abstract: A cache management system performs cache management in a Remote Direct Memory Access (RDMA) key value data store. The cache management system receives a request from at least one client configured to access a data item stored in a data location of a remote server, and determines a popularity of the data item based on a frequency at which the data location is accessed by the at least one client. The system is further configured to determine a lease period of the data item based on the frequency and to assign the lease period to the data location.
    Type: Grant
    Filed: November 30, 2015
    Date of Patent: July 31, 2018
    Assignee: International Business Machines Corporation
    Inventors: Michel H. Hack, Yufei Ren, Yandong Wang, Li Zhang
  • Patent number: 10031883
    Abstract: A cache management system performs cache management in a Remote Direct Memory Access (RDMA) key value data store. The cache management system receives a request from at least one client configured to access a data item stored in a data location of a remote server, and determines a popularity of the data item based on a frequency at which the data location is accessed by the at least one client. The system is further configured to determine a lease period of the data item based on the frequency and to assign the lease period to the data location.
    Type: Grant
    Filed: October 16, 2015
    Date of Patent: July 24, 2018
    Assignee: International Business Machines Corporation
    Inventors: Michel H. Hack, Yufei Ren, Yandong Wang, Li Zhang
  • Patent number: 9975737
    Abstract: The present invention relates to a horizontally movable vertical shaft rope guide and a regulating method thereof, which are suitable for guiding hoisting containers in vertical shafts. The vertical shaft rope guide comprises a hoisting rope and two hoisting containers suspended from the tail ends of the hoisting rope, wherein cage guide ropes are led through guide cage lugs arranged on the two sides respectively, a tensioner arranged on the ground at the shaft top is connected to the upper end of each cage guide rope, and a connector arranged under a steel slot at the shaft bottom is connected to the lower end of each cage guide rope; a hydraulic cylinder is connected at the other side of each tensioner and the corresponding connector, and the hydraulic cylinder is connected to the tensioner or connector.
    Type: Grant
    Filed: December 29, 2015
    Date of Patent: May 22, 2018
    Assignee: CHINA UNIVERSITY OF MINING AND TECHNOLOGY
    Inventors: Zhencai Zhu, Guohua Cao, Yandong Wang, Lu Yan, Weihong Peng, Yuxing Peng, Gongbo Zhou, Wei Li, Gang Shen
  • Publication number: 20180129969
    Abstract: A machine receives a first set of global parameters from a global parameter server. The first set of global parameters includes data that weights one or more operands used in an algorithm that models an entity type. Multiple learner processors in the machine execute the algorithm using the first set of global parameters and a mini-batch of data known to describe the entity type. The machine generates a consolidated set of gradients that describes a direction for the first set of global parameters in order to improve an accuracy of the algorithm in modeling the entity type when using the first set of global parameters and the mini-batch of data. The machine transmits the consolidated set of gradients to the global parameter server. The machine then receives a second set of global parameters from the global parameter server, where the second set of global parameters is a modification of the first set of global parameters based on the consolidated set of gradients.
    Type: Application
    Filed: November 10, 2016
    Publication date: May 10, 2018
    Inventors: Minwei Feng, Yufei Ren, Yandong Wang, Li Zhang, Wei Zhang
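The learner/parameter-server round trip can be sketched with a one-parameter least-squares model. The gradient consolidation and server update below are illustrative choices, not the claimed method:

```python
def gradient(params, batch):
    """Mini-batch gradient for a 1-D least-squares model y = w * x."""
    (w,) = params
    return [sum(2 * (w * x - y) * x for x, y in batch) / len(batch)]

def consolidate(gradients):
    # Average the gradients computed by each learner in the machine
    # into one consolidated set, as the abstract describes.
    return [sum(g[0] for g in gradients) / len(gradients)]

def server_update(params, grads, lr=0.1):
    # The (simulated) global parameter server applies the consolidated
    # gradient and returns the second set of global parameters.
    return [p - lr * g for p, g in zip(params, grads)]

params = [0.0]                     # first set of global parameters
batch = [(1.0, 2.0), (2.0, 4.0)]   # data generated by y = 2x
for _ in range(50):
    grads = consolidate([gradient(params, batch)])
    params = server_update(params, grads)
```

After a few dozen rounds of the receive-compute-consolidate-update loop, the global parameter converges to the true slope (2.0): each round ships one consolidated gradient, not every learner's raw gradients.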
  • Patent number: 9946684
    Abstract: Embodiments relate to methods, systems and computer program products for cache management in a Remote Direct Memory Access (RDMA) data store. Aspects include receiving a request from a remote computer to access a data item stored in the RDMA data store and creating a lease including a local expiration time for the data item. Aspects further include creating a remote pointer to the data item, wherein the remote pointer includes a remote expiration time, and transmitting the remote pointer to the remote computer, wherein the lease is an agreement that the remote computer can perform RDMA reads on the data item until the remote expiration time.
    Type: Grant
    Filed: June 17, 2015
    Date of Patent: April 17, 2018
    Assignee: International Business Machines Corporation
    Inventors: Xiaoqiao Meng, Jian Tan, Yandong Wang, Li Zhang
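A toy version of the local/remote expiration split, using an injectable clock so the behavior is deterministic. The safety margin and field names are assumptions for illustration:

```python
class LeasedStore:
    """Sketch: the server keeps a local expiration for each lease and
    hands the client a remote pointer carrying its own expiration, so
    the client knows how long RDMA reads on the item remain valid."""

    def __init__(self, clock):
        self.clock = clock  # injectable clock for deterministic tests
        self.items = {}

    def put(self, key, value):
        self.items[key] = value

    def create_lease(self, key, duration, margin=1.0):
        now = self.clock()
        # Assumed policy: the local expiration outlives the remote one
        # by a margin so the server never reclaims memory that a client
        # may still legitimately be reading.
        local_exp = now + duration + margin
        remote_ptr = {"key": key, "remote_expiration": now + duration}
        return local_exp, remote_ptr

    def can_read(self, remote_ptr):
        # The client-side check: reads are allowed only until the
        # remote expiration time in the pointer.
        return self.clock() < remote_ptr["remote_expiration"]

t = [0.0]
store = LeasedStore(clock=lambda: t[0])
store.put("x", 41)
local_exp, ptr = store.create_lease("x", duration=10.0)
ok_before = store.can_read(ptr)   # True at t=0
t[0] = 11.0
ok_after = store.can_read(ptr)    # False after the remote expiration
```

The key property is that the agreement is time-bounded: once the remote expiration passes, the client must stop issuing RDMA reads and renegotiate.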
  • Patent number: 9940301
    Abstract: Embodiments relate to methods, systems and computer program products for cache management in a Remote Direct Memory Access (RDMA) data store. Aspects include receiving a request from a remote computer to access a data item stored in the RDMA data store and creating a lease including a local expiration time for the data item. Aspects further include creating a remote pointer to the data item, wherein the remote pointer includes a remote expiration time, and transmitting the remote pointer to the remote computer, wherein the lease is an agreement that the remote computer can perform RDMA reads on the data item until the remote expiration time.
    Type: Grant
    Filed: January 9, 2015
    Date of Patent: April 10, 2018
    Assignee: International Business Machines Corporation
    Inventors: Xiaoqiao Meng, Jian Tan, Yandong Wang, Li Zhang
  • Publication number: 20180052627
    Abstract: Various embodiments manage dynamic memory allocation data. In one embodiment, a set of memory allocation metadata is extracted from a memory heap space. Process dependent information and process independent information is identified from the set of memory allocation metadata based on the set of memory allocation metadata being extracted. The process dependent information and the process independent information at least identify a set of virtual memory addresses available in the memory heap space and a set of virtual memory addresses allocated to a process associated with the memory heap space. A set of allocation data associated with the memory heap space is stored in a persistent storage based on the process dependent information and the process independent information having been identified. The set of allocation data includes the process independent allocation information and a starting address associated with the memory heap space.
    Type: Application
    Filed: October 27, 2017
    Publication date: February 22, 2018
    Applicant: International Business Machines Corporation
    Inventors: Michel Hack, Xiaoqiao Meng, Jian Tan, Yandong Wang, Li Zhang
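The split between process-independent and process-dependent allocation metadata can be illustrated by storing allocations as offsets from the heap's start address, so the independent part can be persisted and re-based later. All field names here are hypothetical:

```python
def split_allocation_metadata(heap_metadata):
    """Sketch: process-independent info (offsets and sizes) survives a
    restart; process-dependent info (the absolute start address) does
    not, because the heap may be mapped elsewhere next time."""
    base = heap_metadata["start_address"]
    independent = {
        "allocated": [(addr - base, size)           # store as offsets
                      for addr, size in heap_metadata["allocated"]],
        "free": [(addr - base, size)
                 for addr, size in heap_metadata["free"]],
    }
    dependent = {"start_address": base}
    return independent, dependent

def persist(independent, start_address):
    # What lands in persistent storage, per the abstract: the
    # process-independent info plus the heap's starting address.
    return {"start_address": start_address, **independent}

heap = {"start_address": 0x7000,
        "allocated": [(0x7010, 64)],
        "free": [(0x7060, 256)]}
independent, dependent = split_allocation_metadata(heap)
record = persist(independent, dependent["start_address"])
```

On restore, adding a new base address to each stored offset reconstructs valid pointers even if the heap lands at a different virtual address.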
  • Patent number: 9891959
    Abstract: A method, apparatus, and computer program product for configuring a computer cluster. Job information identifying a data processing job to be performed is received by a processor unit. The data processing job to be performed comprises a plurality of stages. Cluster information identifying a candidate computer cluster is also received by the processor unit. The processor unit identifies stage performance models for modeled stages that are similar to the plurality of stages. The processor unit predicts predicted stage performance times for performing the plurality of stages on the candidate computer cluster using the stage performance models and combines the predicted stage performance times for the plurality of stages to determine a predicted job performance time. The predicted job performance time may be used to configure the computer cluster.
    Type: Grant
    Filed: October 30, 2015
    Date of Patent: February 13, 2018
    Assignee: International Business Machines Corporation
    Inventors: Min Li, Valentina Salapura, Jian Tan, Yandong Wang, Li Zhang
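The combination step in the abstract is essentially a sum over per-stage predictions. The linear stage models below are an illustrative assumption, not the patent's models:

```python
def predict_job_time(job_stages, stage_models):
    """Sketch: match each stage to the model for a similar modeled
    stage (here, keyed by stage type) and sum the predicted stage
    times into a predicted job time for the candidate cluster."""
    total = 0.0
    for stage in job_stages:
        model = stage_models[stage["type"]]
        # Assumed linear model: fixed startup cost + per-record cost.
        total += model["startup"] + model["per_record"] * stage["records"]
    return total

models = {"map":    {"startup": 2.0, "per_record": 0.5},
          "reduce": {"startup": 5.0, "per_record": 0.25}}
job = [{"type": "map",    "records": 20},
       {"type": "reduce", "records": 8}]
predicted = predict_job_time(job, models)  # (2 + 10) + (5 + 2) = 19.0
```

In practice the predicted job time would then be compared across candidate cluster configurations to pick one meeting a deadline or budget.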
  • Patent number: 9880761
    Abstract: Various embodiments manage dynamic memory allocation data. In one embodiment, a set of memory allocation metadata is extracted from a memory heap space. Process dependent information and process independent information is identified from the set of memory allocation metadata based on the set of memory allocation metadata being extracted. The process dependent information and the process independent information at least identify a set of virtual memory addresses available in the memory heap space and a set of virtual memory addresses allocated to a process associated with the memory heap space. A set of allocation data associated with the memory heap space is stored in a persistent storage based on the process dependent information and the process independent information having been identified. The set of allocation data includes the process independent allocation information and a starting address associated with the memory heap space.
    Type: Grant
    Filed: December 28, 2015
    Date of Patent: January 30, 2018
    Assignee: International Business Machines Corporation
    Inventors: Michel Hack, Xiaoqiao Meng, Jian Tan, Yandong Wang, Li Zhang
  • Publication number: 20180007158
    Abstract: A caching management method includes embedding a notification request tag in a dummy file, uploading the dummy file to a cache server, recording a timestamp indicating a first point in time that the dummy file is uploaded to the cache server, receiving an eviction notification indicating a second point in time that the dummy file is evicted from the cache server, and calculating an eviction time indicating an amount of time taken for the dummy file to be evicted from the cache server. Transmission of the eviction notification is triggered in response to processing the notification request tag, and the dummy file is not retrieved from the cache server between the first point in time and the second point in time. The eviction time is equal to a difference between the first point in time and the second point in time.
    Type: Application
    Filed: June 29, 2016
    Publication date: January 4, 2018
    Inventors: Michel Hack, Yufei Ren, Yandong Wang, Li Zhang
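The eviction-time measurement reduces to a difference of two timestamps. The toy cache below fires a notification callback on eviction to stand in for the notification request tag (all names invented):

```python
class NotifyingCache:
    """Toy FIFO cache: evicts the oldest entry when full and fires a
    notification callback for tagged entries, standing in for the
    cache server plus the embedded notification request tag."""

    def __init__(self, capacity, clock):
        self.capacity = capacity
        self.clock = clock
        self.entries = []  # list of (name, notify_callback or None)

    def upload(self, name, notify=None):
        if len(self.entries) == self.capacity:
            evicted, cb = self.entries.pop(0)
            if cb:
                cb(evicted, self.clock())  # the eviction notification
        self.entries.append((name, notify))

t = [0.0]
evictions = {}
cache = NotifyingCache(capacity=2, clock=lambda: t[0])

upload_time = t[0]  # first point in time: dummy file uploaded
cache.upload("dummy",
             notify=lambda name, when: evictions.update({name: when}))
t[0] = 3.0
cache.upload("a")
t[0] = 7.0
cache.upload("b")   # cache full: "dummy" is evicted now, at t = 7

# Eviction time = second point in time minus first point in time.
eviction_time = evictions["dummy"] - upload_time  # 7.0
```

Because the dummy file is never read between upload and eviction, the measured interval reflects the cache's own eviction behavior rather than the probe's access pattern.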
  • Patent number: 9845605
    Abstract: A system and a method for automatically regulating the tensions of the guide ropes of a flexible cable suspension platform. The system includes a guide rope regulator mounted on a flexible cable suspension platform, a hydraulic pump station arranged on the flexible cable suspension platform, and a hydraulic system associated with the hydraulic pump station. The guide rope regulator automatically regulates the tensions of the guide ropes to enable the tensions of all the guide ropes to be consistent, so as to further ensure that the flexible cable suspension platform is in a level condition. The guide rope regulator also can measure the tension states of the guide ropes conveniently so as to ensure that the guide ropes have enough tensions to efficiently limit the swing amplitude of a lifting container. The system is simple and convenient to operate, and has a good automatic regulating effect.
    Type: Grant
    Filed: January 27, 2014
    Date of Patent: December 19, 2017
    Assignee: Science Academy of China University of Mining and Technology
    Inventors: Guohua Cao, Yandong Wang, Zhencai Zhu, Weihong Peng, Jinjie Wang, Zhi Liu, Shanzeng Liu, Gang Shen, Jishan Xia, Lei Zhang
  • Publication number: 20170344904
    Abstract: Version vector-based rules are used to facilitate asynchronous execution of machine learning algorithms. The method uses a version vector-based rule to generate aggregated parameters and to determine when to return the parameters. The method also includes coordinating the versions of aggregated parameter sets among all the parameter servers, which allows broadcasting to enforce version consistency and generating parameter sets on demand to facilitate version control. Furthermore, the method includes enhancing version consistency on the learner's side and resolving inconsistent versions when mismatching versions are detected.
    Type: Application
    Filed: May 31, 2016
    Publication date: November 30, 2017
    Inventors: Michel H.T. Hack, Yufei Ren, Yandong Wang, Li Zhang
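One plausible reading of the version-vector rule can be sketched as a dominance check plus a component-wise merge. This illustrates version vectors generally, not the patent's exact rule:

```python
def versions_consistent(learner_vv, server_vv):
    """Illustrative consistency rule: a learner may consume an
    aggregated parameter set only if the server's version vector
    dominates its own, i.e. the server has incorporated at least
    every update the learner already knows about."""
    return all(server_vv.get(k, 0) >= v for k, v in learner_vv.items())

def merge_versions(*vectors):
    # Component-wise max is the standard way to resolve mismatching
    # version vectors across servers.
    merged = {}
    for vv in vectors:
        for k, v in vv.items():
            merged[k] = max(merged.get(k, 0), v)
    return merged

learner = {"ps0": 3, "ps1": 5}
stale_server = {"ps0": 3, "ps1": 4}   # missing an update from ps1
ok = versions_consistent(learner, stale_server)      # False: stale
resolved = merge_versions(stale_server, {"ps1": 5})  # pull in the update
ok_after = versions_consistent(learner, resolved)    # True
```

Detecting the mismatch (the `False` case) is what lets the learner wait for, or request, an on-demand parameter set of the right version instead of training on stale parameters.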
  • Publication number: 20170344905
    Abstract: Version vector-based rules are used to facilitate asynchronous execution of machine learning algorithms. The method uses a version vector-based rule to generate aggregated parameters and to determine when to return the parameters. The method also includes coordinating the versions of aggregated parameter sets among all the parameter servers, which allows broadcasting to enforce version consistency and generating parameter sets on demand to facilitate version control. Furthermore, the method includes enhancing version consistency on the learner's side and resolving inconsistent versions when mismatching versions are detected.
    Type: Application
    Filed: May 31, 2016
    Publication date: November 30, 2017
    Inventors: Michel H.T. Hack, Yufei Ren, Yandong Wang, Li Zhang