Patents by Inventor Asim Kadav

Asim Kadav has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9778856
    Abstract: The subject disclosure is directed towards one or more parallel storage components for parallelizing block-level input/output associated with remote file data. Based upon a mapping scheme, the file data is partitioned into a plurality of blocks, each of which may be equal in size. A translator component of the parallel storage may determine a mapping between the plurality of blocks and a plurality of storage nodes such that at least a portion of the plurality of blocks is accessible in parallel. Such a mapping, for example, may place each block in a different storage node, allowing the plurality of blocks to be retrieved simultaneously and in their entirety.
    Type: Grant
    Filed: August 30, 2012
    Date of Patent: October 3, 2017
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Bin Fan, Asim Kadav, Edmund Bernard Nightingale, Jeremy E. Elson, Richard F. Rashid, James W. Mickens
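    Sketch: A minimal Python sketch of the partitioning and block-to-node mapping described in the abstract; the 4 KB block size, round-robin placement, and thread-pool fetch are illustrative assumptions, not details from the patent.

```python
from concurrent.futures import ThreadPoolExecutor

BLOCK_SIZE = 4096  # illustrative; the abstract only requires equally sized blocks

def partition(data: bytes, block_size: int = BLOCK_SIZE) -> list[bytes]:
    """Partition the file data into equally sized blocks (the last may be short)."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def map_blocks(num_blocks: int, nodes: list[str]) -> dict[int, str]:
    """Round-robin mapping: consecutive blocks land on different storage
    nodes, so at least a portion of the blocks is accessible in parallel."""
    return {b: nodes[b % len(nodes)] for b in range(num_blocks)}

def fetch_all(mapping: dict[int, str], fetch_one) -> bytes:
    """Retrieve every block concurrently and reassemble the file;
    fetch_one(node, block_id) is the caller-supplied I/O primitive."""
    with ThreadPoolExecutor() as pool:
        futures = {b: pool.submit(fetch_one, node, b) for b, node in mapping.items()}
        return b"".join(futures[b].result() for b in sorted(futures))
```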
  • Publication number: 20170116520
    Abstract: Methods and systems for training a neural network include sampling multiple local sub-networks from a global neural network. The local sub-networks include a subset of neurons from each layer of the global neural network. The plurality of local sub-networks are trained at respective local processing devices to produce trained local parameters. The trained local parameters from each local sub-network are averaged to produce trained global parameters.
    Type: Application
    Filed: September 21, 2016
    Publication date: April 27, 2017
    Inventors: Renqiang Min, Huahua Wang, Asim Kadav
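    Sketch: A toy NumPy sketch of the scheme above: sample sub-networks as random row subsets of each layer's weight matrix, train them on local devices, then average the trained parameters back into the global model. The row-subset representation, 50% sampling rate, and plain averaging are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_subnet(global_params: list, keep: float = 0.5):
    """Select a random subset of neurons (rows) from each layer's weights."""
    idx = [rng.choice(len(w), max(1, int(len(w) * keep)), replace=False)
           for w in global_params]
    return [w[i] for w, i in zip(global_params, idx)], idx

def average_back(global_params: list, trained_subnets: list) -> list:
    """Average trained local parameters (a list of (params, idx) pairs,
    one per local worker) back into the global parameters."""
    for layer, w in enumerate(global_params):
        acc, cnt = np.zeros_like(w), np.zeros(len(w))
        for params, idx in trained_subnets:
            acc[idx[layer]] += params[layer]
            cnt[idx[layer]] += 1
        seen = cnt > 0
        w[seen] = acc[seen] / cnt[seen][:, None]  # assumes 2-D weight matrices
    return global_params
```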
  • Publication number: 20170111234
    Abstract: A network device, system, and method are provided. The network device includes a processor. The processor is configured to store a local estimate and a dual variable maintaining an accumulated subgradient for the network device. The processor is further configured to collect values of the dual variable of neighboring network devices. The processor is also configured to form a convex combination with equal weight from the collected dual variables of the neighboring network devices. The processor is additionally configured to add a most recent local subgradient for the network device, scaled by a scaling factor, to the convex combination to obtain an updated dual variable. The processor is further configured to update the local estimate by projecting the updated dual variable to a primal space.
    Type: Application
    Filed: October 18, 2016
    Publication date: April 20, 2017
    Inventors: Asim Kadav, Renqiang Min, Erik Kruus, Cun Mu
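    Sketch: One round of the update described above, as a hedged sketch; the unit-ball projection and the step size are illustrative stand-ins for whatever primal space and scaling factor a deployment would actually use.

```python
import numpy as np

def dual_update(neighbor_duals, local_subgrad, alpha, project):
    """Equal-weight convex combination of neighbors' dual variables, plus the
    most recent local subgradient scaled by alpha, then a projection of the
    updated dual variable into the primal space to get the local estimate."""
    z = np.mean(neighbor_duals, axis=0)  # convex combination with equal weights
    z = z + alpha * local_subgrad        # add the scaled local subgradient
    return z, project(z)                 # (updated dual variable, local estimate)

# Illustrative use, projecting onto the unit ball:
project = lambda z: z / max(1.0, float(np.linalg.norm(z)))
z, x = dual_update([np.ones(3), np.zeros(3)], np.full(3, 0.5), alpha=0.1, project=project)
```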
  • Publication number: 20170091668
    Abstract: A machine learning method includes connecting machines in a data-center using a network-aware model consistency for stochastic applications; ensuring a communication graph of all machines in the data-center is connected; propagating all updates uniformly across the cluster without update; and preferring connections to a machine with first network throughput over machines with second network throughput smaller than the first network throughput.
    Type: Application
    Filed: July 27, 2016
    Publication date: March 30, 2017
    Applicant: NEC Laboratories America, Inc.
    Inventors: Asim Kadav, Erik Kruus
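    Sketch: One way to realize "keep the communication graph connected while preferring high-throughput links" is a maximum-throughput spanning tree; the application does not pin down a construction, so this Kruskal-style sketch is only an assumption.

```python
def build_overlay(nodes, throughput):
    """Greedy sketch: consider candidate links fastest-first and keep each one
    that joins two previously disconnected groups of machines, so the final
    graph is connected and higher-throughput connections are preferred."""
    parent = {n: n for n in nodes}

    def find(n):  # union-find with path compression
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n

    chosen = []
    for a, b in sorted(throughput, key=throughput.get, reverse=True):
        ra, rb = find(a), find(b)
        if ra != rb:           # this link connects two separate components
            parent[ra] = rb
            chosen.append((a, b))
    return chosen              # a maximum-throughput spanning tree

links = build_overlay("abcd", {("a", "b"): 10.0, ("a", "c"): 8.0,
                               ("c", "d"): 5.0, ("b", "c"): 2.0})
```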
  • Publication number: 20170039485
    Abstract: A machine learning method includes installing a plurality of model replicas for training on a plurality of computer learning nodes; receiving training data at each model replica and updating parameters for the model replica after training; sending the parameters to other model replicas with a communication batch size; evaluating received parameters from other model replicas; and dynamically adjusting the communication batch size to balance computation and communication overhead and to ensure convergence even with a mismatch in processing abilities on different computer learning nodes.
    Type: Application
    Filed: July 12, 2016
    Publication date: February 9, 2017
    Inventor: Asim Kadav
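    Sketch: A hedged sketch of the dynamic adjustment described above: grow the communication batch (send parameter updates less often) when communication dominates the measured time, shrink it when computation dominates. The target ratio, growth factor, and bounds are illustrative constants.

```python
def adjust_batch(comm_batch: int, compute_time: float, comm_time: float,
                 target: float = 1.0, step: float = 1.25,
                 lo: int = 1, hi: int = 1024) -> int:
    """Keep the communication-to-computation cost ratio near the target by
    resizing how many updates are batched into one communication round."""
    ratio = comm_time / max(compute_time, 1e-9)
    if ratio > target:            # communication is the bottleneck: batch more
        return min(hi, int(comm_batch * step) + 1)
    if ratio < target / step:     # computation is the bottleneck: batch less
        return max(lo, int(comm_batch / step))
    return comm_batch             # costs are roughly balanced

batch = adjust_batch(16, compute_time=0.8, comm_time=1.6)  # -> 21
```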
  • Publication number: 20160125316
    Abstract: Systems and methods are disclosed for parallel machine learning with a cluster of N parallel machine network nodes by determining k network nodes as a subset of the N network nodes to update learning parameters, wherein k is selected to disseminate the updates across all nodes directly or indirectly and to optimize predetermined goals including freshness and a balanced communication-to-computation ratio in the cluster; sending learning unit updates to fewer nodes to reduce communication costs while preserving learning convergence; and sending reduced learning updates and ensuring that the nodes send/receive learning updates in a uniform fashion.
    Type: Application
    Filed: October 6, 2015
    Publication date: May 5, 2016
    Applicant: NEC Laboratories America, Inc.
    Inventors: Asim Kadav, Cristian Ungureanu, Erik Kruus, Hao Li
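    Sketch: A sketch of choosing k peers per round so that updates spread across all N nodes over successive rounds and send/receive load stays uniform; the rotating-offset policy below is an illustrative choice, not the selection rule claimed in the application.

```python
def choose_peers(node_id: int, num_nodes: int, k: int, round_num: int) -> list:
    """Each node sends its learning updates to k peers this round. The offset
    rotates with the round number, so every node sends and receives the same
    number of updates, and updates reach all nodes directly or indirectly."""
    return [(node_id + round_num + i) % num_nodes
            for i in range(1, k + 1)
            if (node_id + round_num + i) % num_nodes != node_id]

# Round 0 in an 8-node cluster with k = 2: node 3 sends to nodes 4 and 5.
assert choose_peers(3, 8, 2, 0) == [4, 5]
```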
  • Publication number: 20160103901
    Abstract: Systems and methods are disclosed for providing distributed learning over a plurality of parallel machine network nodes by allocating a per-sender receive queue at every machine network node and performing distributed in-memory training; and training each unit replica and maintaining multiple copies of the unit replica being trained, wherein all unit replicas train, receive unit updates, and merge in parallel in a peer-to-peer fashion, wherein each receiving machine network node merges updates at a later point in time without interruption, and wherein the propagation and synchronization of unit replica updates are lockless and asynchronous.
    Type: Application
    Filed: October 1, 2015
    Publication date: April 14, 2016
    Applicant: NEC LABORATORIES AMERICA, INC.
    Inventors: Asim Kadav, Erik Kruus, Hao Li
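    Sketch: A sketch of the per-sender receive queues described above: each peer writes into its own queue, so senders never contend with one another or with the trainer, which merges whatever has arrived at a later point without taking a lock. The averaging merge rule is an assumption.

```python
import queue
import numpy as np

class ReplicaNode:
    """One unit replica with a dedicated receive queue per sending peer."""

    def __init__(self, peers: list, dim: int):
        self.params = np.zeros(dim)
        self.inbox = {p: queue.SimpleQueue() for p in peers}  # one queue per sender

    def receive(self, sender, update) -> None:
        self.inbox[sender].put(update)  # non-blocking enqueue, no shared lock

    def merge_pending(self) -> None:
        """Drain all queues and fold received replicas into the local one;
        training is never interrupted waiting for updates to arrive."""
        for q in self.inbox.values():
            while True:
                try:
                    remote = q.get_nowait()
                except queue.Empty:
                    break
                self.params = 0.5 * (self.params + remote)  # illustrative merge
```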
  • Publication number: 20150052392
    Abstract: While connected to cloud storage, a computing device writes data and metadata to the cloud storage, indicates success of the write to an application of the computing device, and, after indicating success to the application, writes the data and metadata to local storage of the computing device. The data and metadata may be written to different areas of the local storage. The computing device may also determine that it has recovered from a crash or has connected to the cloud storage after operating disconnected and reconcile the local storage with the cloud storage. The reconciliation may be based at least on a comparison of the metadata stored in the area of the local storage with metadata received from the cloud storage. The cloud storage may store each item of data contiguously with its metadata as an expanded block.
    Type: Application
    Filed: August 19, 2013
    Publication date: February 19, 2015
    Applicant: Microsoft Corporation
    Inventors: James W. Mickens, Jeremy E. Elson, Edmund B. Nightingale, Bin Fan, Asim Kadav, Osama Khan
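    Sketch: A sketch of the write ordering and reconciliation described above, with plain dicts standing in for the cloud and local storage backends and a per-write version stamp standing in for the metadata; both are assumptions for illustration.

```python
import time

def write(key, data, cloud: dict, local: dict, app_ack) -> None:
    """Commit to cloud storage first, acknowledge the application, and only
    then write the same data and metadata back to local storage."""
    meta = {"key": key, "version": time.time_ns()}
    cloud[key] = (data, meta)  # 1. write data + metadata to the cloud
    app_ack(key)               # 2. indicate success to the application
    local[key] = (data, meta)  # 3. write-back to local storage afterwards

def reconcile(cloud: dict, local: dict) -> None:
    """After a crash or reconnect, compare metadata and keep the newer copy."""
    for key in set(cloud) | set(local):
        c, l = cloud.get(key), local.get(key)
        if c is None or (l is not None and l[1]["version"] > c[1]["version"]):
            cloud[key] = l
        else:
            local[key] = c
```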
  • Publication number: 20140068224
    Abstract: The subject disclosure is directed towards one or more parallel storage components for parallelizing block-level input/output associated with remote file data. Based upon a mapping scheme, the file data is partitioned into a plurality of blocks, each of which may be equal in size. A translator component of the parallel storage may determine a mapping between the plurality of blocks and a plurality of storage nodes such that at least a portion of the plurality of blocks is accessible in parallel. Such a mapping, for example, may place each block in a different storage node, allowing the plurality of blocks to be retrieved simultaneously and in their entirety.
    Type: Application
    Filed: August 30, 2012
    Publication date: March 6, 2014
    Applicant: MICROSOFT CORPORATION
    Inventors: Bin Fan, Asim Kadav, Edmund Bernard Nightingale, Jeremy E. Elson, Richard F. Rashid, James W. Mickens