Patents by Inventor Pradeep Vincent

Pradeep Vincent has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9483213
    Abstract: A virtual tape system utilizes multiple virtual tape libraries. Some virtual elements of the virtual tape libraries are connected elements, such as virtual import/export slots, that logically connect two or more virtual tape libraries. Virtual media changers of the virtual tape libraries can be commanded, as if they were physical media changers, to virtually move virtual media, such as virtual tapes, within and among the virtual tape libraries. By moving a virtual medium to a connected element, the virtual medium can be virtually moved from one virtual tape library to another.
    Type: Grant
    Filed: April 29, 2013
    Date of Patent: November 1, 2016
    Assignee: Amazon Technologies, Inc.
    Inventors: Ian Wharton, Pradeep Vincent
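The connected-element mechanism in this abstract can be sketched in a few lines. This is an illustrative model only, not the patented implementation: the class names, slot identifiers, and the `export`/`import_` operations are invented stand-ins for commands issued to a virtual media changer.

```python
class ConnectedSlot:
    """An import/export slot logically shared by two or more libraries."""
    def __init__(self):
        self.medium = None

class VirtualTapeLibrary:
    def __init__(self, name, connected_slot):
        self.name = name
        self.slots = {}                # slot id -> virtual tape (or absent)
        self.connected_slot = connected_slot

    def load(self, slot_id, tape):
        self.slots[slot_id] = tape

    def move_medium(self, src_slot, dst_slot):
        """Emulates a media-changer move between two local slots."""
        self.slots[dst_slot] = self.slots.pop(src_slot)

    def export(self, slot_id):
        """Move a tape into the connected slot, making it visible to peers."""
        assert self.connected_slot.medium is None, "connected slot occupied"
        self.connected_slot.medium = self.slots.pop(slot_id)

    def import_(self, dst_slot):
        """Pull whatever tape sits in the connected slot into this library."""
        tape, self.connected_slot.medium = self.connected_slot.medium, None
        self.slots[dst_slot] = tape

# Moving a virtual tape from library A to library B via the connected element:
shared = ConnectedSlot()
lib_a = VirtualTapeLibrary("A", shared)
lib_b = VirtualTapeLibrary("B", shared)
lib_a.load("a1", "TAPE-0001")
lib_a.export("a1")
lib_b.import_("b1")
print(lib_b.slots["b1"])   # TAPE-0001
```

Because the connected slot is shared state between the two libraries, a single "move medium to connected element" command on one library is enough to make the tape importable by the other.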
  • Patent number: 9449008
    Abstract: In response to a rename request to change a file name at a storage service from a first name to a second name, a workflow comprising at least two atomic operations is initiated. In the first atomic operation, a lock is obtained on a first directory entry for the first name, and an intent record for the rename workflow is stored. In a second atomic operation, a pointer of a second directory entry for the second name is modified, and an indication of the pointer modification is stored. In a third set of operations, the intent record is deleted, the lock is released, and the first directory entry is deleted.
    Type: Grant
    Filed: March 31, 2014
    Date of Patent: September 20, 2016
    Assignee: Amazon Technologies, Inc.
    Inventors: Matti Juhani Oikarinen, Pradeep Vincent, Matteo Frigo
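The three-step rename workflow above can be sketched as follows. This is a hedged simplification: the directory-entry and intent-record structures are invented, and in the patent each numbered step is a single atomic operation at the storage service, whereas here they are ordinary method calls.

```python
class RenameWorkflow:
    def __init__(self, directory):
        self.directory = directory      # name -> inode pointer
        self.locks = set()
        self.intents = {}

    def step1_lock_and_record_intent(self, old, new):
        # Atomic op 1: lock the source entry and persist an intent record
        # so the workflow can be resumed or rolled back after a crash.
        assert old not in self.locks
        self.locks.add(old)
        self.intents[(old, new)] = self.directory[old]

    def step2_repoint_target(self, old, new):
        # Atomic op 2: make the target entry point at the file's inode,
        # recording that the pointer modification happened.
        self.directory[new] = self.directory[old]

    def step3_cleanup(self, old, new):
        # Final set of operations: delete the intent record, release the
        # lock, and remove the stale source entry.
        del self.intents[(old, new)]
        self.locks.discard(old)
        del self.directory[old]

d = {"report.txt": "inode-17"}
wf = RenameWorkflow(d)
wf.step1_lock_and_record_intent("report.txt", "final.txt")
wf.step2_repoint_target("report.txt", "final.txt")
wf.step3_cleanup("report.txt", "final.txt")
print(d)   # {'final.txt': 'inode-17'}
```

The intent record is the crash-recovery anchor: if the service fails between steps, a recovery process can find the record and either complete or roll back the rename.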
  • Publication number: 20160266816
    Abstract: A method and apparatus for staged execution pipelining and allocating resources to staged execution pipelines are provided. One or more execution pipelines are established, where each of the one or more execution pipelines includes one or more execution stages. Data is provided to the one or more execution pipelines for processing, and resources are allocated to the execution pipelines.
    Type: Application
    Filed: May 20, 2016
    Publication date: September 15, 2016
    Inventors: Nishanth Alapati, Pradeep Vincent, David Carl Salyers
  • Publication number: 20160246640
    Abstract: Virtual machines may migrate between heterogeneous sets of implementation resources in a manner that allows the virtual machines to efficiently and effectively adapt to new implementation resources. Furthermore, virtual machines may change types during migration without terminating the virtual machines. Migration templates may be established to manage migration of sets of virtual machines between sets of implementation resources and/or virtual machine types. Migration templates may be established based at least in part on information provided by migration agents added to the virtual machines under consideration for migration. The migration agents may detect and augment relevant virtual machine capabilities, as well as trigger reconfiguration of virtual machine components in accordance with migration templates.
    Type: Application
    Filed: April 28, 2016
    Publication date: August 25, 2016
    Inventor: Pradeep Vincent
  • Patent number: 9396010
    Abstract: Some embodiments facilitate high performance packet-processing by enabling one or more processors that perform packet-processing to determine whether to enter an idle state or similar state. As network packets usually arrive or are transmitted in batches, the processors of some embodiments determine that more packets may be coming down a multi-stage pipeline upon receiving a first packet for processing. As a result, the processors may stay awake for a duration of time in anticipation of an incoming packet. Some embodiments keep track of the last packet that entered the first stage of the pipeline and compare that with a packet that the processor just processed in a pipeline stage to determine whether there may be more packets coming that need processing. In some embodiments, a processor may also look at a queue length of a queue associated with an upstream stage to determine whether more packets may be coming.
    Type: Grant
    Filed: February 28, 2014
    Date of Patent: July 19, 2016
    Assignee: Amazon Technologies, Inc.
    Inventors: Pradeep Vincent, David D. Becker
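The stay-awake heuristic described above can be sketched directly: a stage's processor compares the sequence number of the packet it just finished against the last packet admitted to the pipeline's first stage, and also checks the length of the upstream stage's queue, before deciding to idle. The function name, sequence-number scheme, and data structures are illustrative assumptions.

```python
from collections import deque

class PipelineStage:
    def __init__(self):
        self.queue = deque()

def should_stay_awake(last_admitted_seq, just_processed_seq, upstream):
    """Return True if more packets are likely still in flight for this stage."""
    if just_processed_seq < last_admitted_seq:
        # Packets admitted after ours are still working their way down.
        return True
    if upstream is not None and len(upstream.queue) > 0:
        # The upstream stage has queued work that will reach us shortly.
        return True
    return False

upstream = PipelineStage()
upstream.queue.extend([101, 102])
print(should_stay_awake(last_admitted_seq=100, just_processed_seq=100,
                        upstream=upstream))   # True: upstream queue non-empty
upstream.queue.clear()
print(should_stay_awake(100, 100, upstream))  # False: nothing in flight
```

When the function returns False, the processor can safely enter an idle state; when True, it spins or stays awake for a bounded duration in anticipation of the next packet.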
  • Patent number: 9385912
    Abstract: A framework can be utilized with conventional networking components to enable those components to process packets of specific formats using conventional algorithms, such as algorithms for receive side coalescing (RSC) and TCP segmentation offloading (TSO). Format and flow information can be added to an opaque field or other portion of a packet, at an appropriate location or pre-configured offset. Placing information at a specific location or offset enables the networking hardware to quickly recognize a packet for processing. Packets can be segmented and coalesced using conventional algorithms on the networking hardware, enabling packets of various formats to be able to take advantage of various performance enhancements.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: July 5, 2016
    Assignee: Amazon Technologies, Inc.
    Inventor: Pradeep Vincent
  • Publication number: 20160170885
    Abstract: Methods and apparatus for supporting cached volumes at storage gateways are disclosed. A storage gateway appliance is configured to cache at least a portion of a storage object of a remote storage service at local storage devices. In response to a client's write request, directed to at least a portion of a data chunk of the storage object, the appliance stores a data modification indicated in the write request at a storage device, and asynchronously uploads the modification to the storage service. In response to a client's read request, directed to a different portion of the data chunk, the appliance downloads the requested data from the storage service to the storage device, and provides the requested data to the client.
    Type: Application
    Filed: February 22, 2016
    Publication date: June 16, 2016
    Applicant: Amazon Technologies, Inc.
    Inventors: David Carl Salyers, Pradeep Vincent, Ankur Khetrapal, Kestutis Patiejunas
  • Patent number: 9348602
    Abstract: A method and apparatus for staged execution pipelining and allocating resources to staged execution pipelines are provided. One or more execution pipelines are established, where each of the one or more execution pipelines includes one or more execution stages. Data is provided to the one or more execution pipelines for processing, and resources are allocated to the execution pipelines.
    Type: Grant
    Filed: September 3, 2013
    Date of Patent: May 24, 2016
    Assignee: Amazon Technologies, Inc.
    Inventors: Nishanth Alapati, Pradeep Vincent, David Carl Salyers
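A minimal sketch of establishing an execution pipeline of stages and allocating resources (modeled here as worker slots) to it, under assumed semantics. The stage functions and the even-split allocation policy are invented for illustration; the patent does not specify this policy.

```python
class Stage:
    def __init__(self, fn, workers=1):
        self.fn = fn
        self.workers = workers   # resource units allocated to this stage

class ExecutionPipeline:
    def __init__(self, stages):
        self.stages = stages

    def allocate(self, total_workers):
        # Naive policy: spread resource units evenly across the stages.
        per_stage = max(1, total_workers // len(self.stages))
        for s in self.stages:
            s.workers = per_stage

    def run(self, item):
        # Feed the item through each execution stage in order.
        for s in self.stages:
            item = s.fn(item)
        return item

pipeline = ExecutionPipeline([
    Stage(lambda b: b.decode()),    # decode stage
    Stage(lambda s: s.upper()),     # transform stage
    Stage(lambda s: s.encode()),    # re-encode stage
])
pipeline.allocate(total_workers=6)
print(pipeline.run(b"chunk"))       # b'CHUNK'
```

In a real system each stage would run concurrently with its own worker pool; the single-threaded `run` here only shows the staged data flow and the per-stage resource bookkeeping.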
  • Patent number: 9349010
    Abstract: Attempts to update configuration information or firmware for a hardware device can be monitored using a secure counter that is configured to monotonically adjust a current value of the secure counter for each update or update attempt. The value of the counter can be determined every time the validity of the firmware is confirmed, and this value can be stored to a secure location. At subsequent times, such as during a boot process, the actual value of the counter can be determined and compared with the expected value. If the values do not match, such that the firmware may be in an unexpected state, an action can be taken, such as to prevent access to, or isolate, the hardware until such time as the firmware can be validated or updated to an expected state.
    Type: Grant
    Filed: March 27, 2015
    Date of Patent: May 24, 2016
    Assignee: Amazon Technologies, Inc.
    Inventors: Michael David Marr, Pradeep Vincent, Matthew T. Corddry, James R. Hamilton
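The counter-comparison check above can be sketched as follows. This is an illustrative software model, not the patented hardware design: every validated update bumps a monotonic counter and records its value in a secure location, and a boot-time mismatch flags the device for isolation.

```python
class SecureCounter:
    def __init__(self):
        self._value = 0

    def increment(self):
        self._value += 1      # monotonic: never decremented or reset

    @property
    def value(self):
        return self._value

class Device:
    def __init__(self):
        self.counter = SecureCounter()
        self.expected = 0     # stored in a secure location after validation
        self.quarantined = False

    def validated_update(self):
        self.counter.increment()
        self.expected = self.counter.value   # record value post-validation

    def unauthorized_update(self):
        self.counter.increment()             # the attempt still bumps the counter

    def boot_check(self):
        if self.counter.value != self.expected:
            self.quarantined = True          # isolate until revalidated
        return not self.quarantined

dev = Device()
dev.validated_update()
print(dev.boot_check())     # True: counter matches the stored value
dev.unauthorized_update()
print(dev.boot_check())     # False: mismatch, device quarantined
```

The key property is that the counter only moves forward, so an update attempt that skipped validation cannot be hidden: the counter will have advanced past the last recorded value.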
  • Patent number: 9329886
    Abstract: Virtual machines may migrate between heterogeneous sets of implementation resources in a manner that allows the virtual machines to efficiently and effectively adapt to new implementation resources. Furthermore, virtual machines may change types during migration without terminating the virtual machines. Migration templates may be established to manage migration of sets of virtual machines between sets of implementation resources and/or virtual machine types. Migration templates may be established based at least in part on information provided by migration agents added to the virtual machines under consideration for migration. The migration agents may detect and augment relevant virtual machine capabilities, as well as trigger reconfiguration of virtual machine components in accordance with migration templates.
    Type: Grant
    Filed: December 10, 2010
    Date of Patent: May 3, 2016
    Assignee: Amazon Technologies, Inc.
    Inventor: Pradeep Vincent
  • Publication number: 20160110214
    Abstract: High-speed processing of packets to, and from, a virtualization environment can be provided while utilizing hardware-based segmentation offload and other such functionality. A hardware vendor such as a network interface card (NIC) manufacturer can enable the hardware to support open and proprietary stateless tunneling in conjunction with a protocol such as single root I/O virtualization (SR-IOV) in order to implement a virtualized overlay network. The hardware can utilize various rules, for example, that can be used by the NIC to perform certain actions, such as to encapsulate egress packets and decapsulate packets.
    Type: Application
    Filed: October 26, 2015
    Publication date: April 21, 2016
    Inventors: Pradeep Vincent, Matthew David Klein, Samuel James McKelvie
  • Patent number: 9313302
    Abstract: High-speed processing of packets to, and from, a virtualization environment can be provided while utilizing segmentation offload and other such functionality of commodity hardware. Virtualization information can be added to extension portions of protocol headers, for example, such that the payload portion is unchanged and, when physical address information is added to a frame, a frame can be processed using commodity hardware. In some embodiments, the virtualization information can be hashed and added to the payload or stream at, or relative to, various segmentation boundaries, such that the virtualization or additional header information will only be added to a subset of the packets once segmented, thereby reducing the necessary overhead. Further, the hashing of the information can allow for reconstruction of the virtualization information upon desegmentation even in the event of packet loss.
    Type: Grant
    Filed: January 26, 2015
    Date of Patent: April 12, 2016
    Assignee: Amazon Technologies, Inc.
    Inventors: Pradeep Vincent, Michael David Marr
  • Patent number: 9298504
    Abstract: In a system having multiple processors, idle processors are wakened in anticipation of tasks that may be subsequently queued. When interrupting a first processor to execute a particular task, a scheduler may also send interrupts to idle or otherwise available processors, instructing the idle processors to begin monitoring task queues and to find and execute compatible tasks that may be subsequently queued.
    Type: Grant
    Filed: June 11, 2012
    Date of Patent: March 29, 2016
    Assignee: Amazon Technologies, Inc.
    Inventor: Pradeep Vincent
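The anticipatory wake-up can be sketched like this: when the scheduler queues a task and interrupts one processor to run it, it also wakes the remaining idle processors so they begin polling the task queue for compatible work that may follow. The scheduler/processor model below is an invented simplification run synchronously.

```python
from collections import deque

class Processor:
    def __init__(self):
        self.idle = True
        self.executed = []

    def wake_and_poll(self, queue):
        self.idle = False
        while queue:                     # drain compatible tasks as they appear
            self.executed.append(queue.popleft())
        self.idle = True

class Scheduler:
    def __init__(self, processors):
        self.processors = processors
        self.queue = deque()

    def submit(self, *tasks):
        self.queue.extend(tasks)
        # Interrupt every idle processor, not just one: the extras start
        # monitoring the queue for tasks queued right after this batch.
        for cpu in self.processors:
            if cpu.idle:
                cpu.wake_and_poll(self.queue)

cpus = [Processor(), Processor()]
sched = Scheduler(cpus)
sched.submit("t1", "t2", "t3")
print(sorted(t for cpu in cpus for t in cpu.executed))  # ['t1', 't2', 't3']
```

In a real multiprocessor the extra wake-ups trade a little wasted polling for lower latency on the tasks that arrive next; here the synchronous simulation just shows that all woken processors drain the shared queue.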
  • Patent number: 9298723
    Abstract: A receiver-side deduplication architecture for data storage systems, for example remote data storage systems that use block-based data storage and that provide the data storage to client(s) via a network. The architecture may provide network deduplication by reducing bandwidth usage on communications channel(s) between the client(s) and the data storage systems. The architecture may leverage block storage technology of a data store provided to clients by a data store provider and caching technology to implement a deduplication data dictionary. The deduplication data dictionary includes deduplication data blocks stored in the data store and a mapping tier that leverages caching technology to store and maintain a store of key/value pairs that map data block fingerprints to deduplication data blocks in the data store.
    Type: Grant
    Filed: September 19, 2012
    Date of Patent: March 29, 2016
    Assignee: Amazon Technologies, Inc.
    Inventor: Pradeep Vincent
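The receiver-side deduplication flow can be sketched concretely: the sender transmits fingerprints first, the receiver requests only the blocks missing from its dictionary, and duplicate blocks never cross the wire. The SHA-256 fingerprint and the in-memory dictionary below are illustrative stand-ins for the patent's data store plus caching-based mapping tier.

```python
import hashlib

def fingerprint(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

class DedupReceiver:
    def __init__(self):
        self.dictionary = {}     # fingerprint -> deduplication data block

    def request_missing(self, fingerprints):
        """Return the fingerprints the sender must transmit in full."""
        return [fp for fp in fingerprints if fp not in self.dictionary]

    def store(self, block: bytes):
        self.dictionary[fingerprint(block)] = block

    def reassemble(self, fingerprints):
        return b"".join(self.dictionary[fp] for fp in fingerprints)

receiver = DedupReceiver()
blocks = [b"alpha", b"beta", b"alpha"]          # "alpha" repeats
fps = [fingerprint(b) for b in blocks]
missing = receiver.request_missing(fps)
for fp in set(missing):                         # send each unique block once
    block = next(b for b in blocks if fingerprint(b) == fp)
    receiver.store(block)
print(receiver.reassemble(fps))   # b'alphabetaalpha'
```

Only two blocks are transmitted for the three-block write, which is the bandwidth reduction the architecture targets on the client-to-service channel.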
  • Patent number: 9294558
    Abstract: At a particular node of a storage service to which connections have been established on behalf of one or more clients, respective workload indicators are collected from a set of peer nodes of the storage service. A determination is made at the particular node that (a) a local workload metric exceeds a connection rebalancing threshold, and (b) a peer capacity availability criterion has been met. The peer capacity availability criterion may be determined from the respective workload indicators. In response to the determination, a particular client connection is closed.
    Type: Grant
    Filed: March 31, 2014
    Date of Patent: March 22, 2016
    Assignee: Amazon Technologies, Inc.
    Inventors: Pradeep Vincent, Matti Juhani Oikarinen, Douglas Stewart Laurence, Matteo Frigo
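The rebalancing decision at a single node reduces to a two-part predicate: close a client connection only when the local workload exceeds the rebalancing threshold AND the peer workload indicators satisfy a capacity-availability criterion. The metric, thresholds, and the "at least one underloaded peer" criterion below are invented placeholders for whatever the service actually measures.

```python
def should_close_connection(local_load, peer_loads,
                            rebalance_threshold=0.8,
                            peer_capacity_threshold=0.6):
    """Decide whether this node should shed a client connection."""
    if local_load <= rebalance_threshold:
        return False                      # not overloaded locally
    # Peer capacity criterion: at least one peer with comfortable headroom,
    # so a reconnecting client has somewhere better to land.
    return any(load < peer_capacity_threshold for load in peer_loads)

# Overloaded node, but peers are also saturated: keep the connection.
print(should_close_connection(0.95, [0.9, 0.85]))   # False
# Overloaded node with an underloaded peer: close so the client reconnects.
print(should_close_connection(0.95, [0.9, 0.3]))    # True
```

Requiring both conditions prevents pointless connection churn: a node never sheds load that no peer can absorb.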
  • Patent number: 9292466
    Abstract: Information about the transmission of packets or other information can be inferred based at least in part upon the state of one or more queues used to transmit that information. In a networking example, a hook can be added to a free buffer API call from a queue of a NIC driver. When a packet is transmitted and a buffer freed, the hook can cause information for that packet to be transmitted to an appropriate location, such as a network traffic control component or control plane component, whereby that information can be compared with packet, source, and other such information to infer which packets have been transmitted, which packets are pending, and other such information. This information can be used for various purposes, such as to dynamically adjust the allocation of a resource (e.g., a NIC) to various sources based at least in part upon the monitored behavior.
    Type: Grant
    Filed: December 28, 2010
    Date of Patent: March 22, 2016
    Assignee: Amazon Technologies, Inc.
    Inventor: Pradeep Vincent
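The free-buffer hook idea can be sketched as a small monitor: each enqueued packet's buffer is tracked against its source, and when the driver frees that buffer the hook fires, letting the monitor infer the packet left the wire. The registration interface below is invented, not a real NIC driver API.

```python
class TxMonitor:
    def __init__(self):
        self.pending = {}      # buffer id -> source that queued the packet
        self.transmitted = {}  # source -> count of completed packets

    def on_enqueue(self, buf_id, source):
        self.pending[buf_id] = source

    def on_free_buffer(self, buf_id):
        # Hook called from the driver's free-buffer path: the buffer being
        # returned implies its packet has been transmitted.
        source = self.pending.pop(buf_id)
        self.transmitted[source] = self.transmitted.get(source, 0) + 1

mon = TxMonitor()
mon.on_enqueue(1, "vm-a")
mon.on_enqueue(2, "vm-b")
mon.on_free_buffer(1)
print(mon.transmitted)     # {'vm-a': 1}
print(list(mon.pending))   # [2]  -> vm-b's packet is still pending
```

A traffic-control component could feed `transmitted` and `pending` counts into its allocation policy, throttling or favoring sources based on this inferred behavior rather than on explicit completion notifications.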
  • Patent number: 9274956
    Abstract: Methods and apparatus for intelligent cache eviction at storage gateways are disclosed. A system comprises computing devices configured to determine whether the number of free chunks of storage at a storage appliance for caching portions of a storage object is below a threshold value. If the number is below the threshold, the computing devices identify an eviction set of chunks to be freed, and generate a respective new instance identifier for each chunk of the eviction set. The identifier of a given chunk may be used to determine the validity of a block of the chunk. The devices store, within metadata storage of the appliance, the new instance identifiers of the eviction set, and indicate that the chunks of the eviction set are available for caching data of the storage object.
    Type: Grant
    Filed: October 31, 2012
    Date of Patent: March 1, 2016
    Assignee: Amazon Technologies, Inc.
    Inventors: David Carl Salyers, Ankur Khetrapal, Pradeep Vincent, Kestutis Patiejunas
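An illustrative sketch of the eviction scheme: when free chunks fall below the threshold, an eviction set is chosen, each evicted chunk gets a fresh instance identifier recorded in metadata, and a block read is valid only if its recorded identifier still matches the chunk's current one. The names, the threshold handling, and the oldest-first victim choice are assumptions.

```python
import itertools

class ChunkCache:
    def __init__(self, capacity, low_water):
        self.capacity = capacity
        self.low_water = low_water
        self._ids = itertools.count(1)
        self.chunks = {}        # chunk key -> instance id at write time
        self.instance_id = {}   # chunk key -> current instance id

    def write(self, key):
        iid = self.instance_id.setdefault(key, next(self._ids))
        self.chunks[key] = iid

    def free_count(self):
        return self.capacity - len(self.chunks)

    def maybe_evict(self):
        if self.free_count() >= self.low_water:
            return []
        # Pick victims (here: oldest-inserted) and stamp new instance ids.
        victims = list(self.chunks)[: self.low_water]
        for key in victims:
            del self.chunks[key]
            # A new id invalidates any block still referencing the old one.
            self.instance_id[key] = next(self._ids)
        return victims

    def block_valid(self, key, recorded_iid):
        return self.instance_id.get(key) == recorded_iid

cache = ChunkCache(capacity=3, low_water=1)
for key in ("c0", "c1", "c2"):
    cache.write(key)
old_iid = cache.chunks["c0"]
evicted = cache.maybe_evict()            # cache is full, so a chunk is freed
print(cache.block_valid("c0", old_iid))  # False once c0 has been evicted
```

Bumping the instance identifier is what makes eviction cheap: nothing in the freed chunk needs to be scrubbed, because stale blocks fail the identifier comparison on their next access.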
  • Patent number: 9276864
    Abstract: Information about the transmission of packets or other information can be inferred based at least in part upon the state of one or more queues used to transmit that information. In a networking example, a hook can be added to a free buffer API call from a queue of a NIC driver. When a packet is transmitted and a buffer freed, the hook can cause information for that packet to be transmitted to an appropriate location, such as a network traffic control component or control plane component, whereby that information can be compared with packet, source, and other such information to infer which packets have been transmitted, which packets are pending, and other such information. This information can be used for various purposes, such as to dynamically adjust the allocation of a resource (e.g., a NIC) to various sources based at least in part upon the monitored behavior.
    Type: Grant
    Filed: May 14, 2013
    Date of Patent: March 1, 2016
    Assignee: Amazon Technologies, Inc.
    Inventor: Pradeep Vincent
  • Patent number: 9268652
    Abstract: Methods and apparatus for supporting cached volumes at storage gateways are disclosed. A storage gateway appliance is configured to cache at least a portion of a storage object of a remote storage service at local storage devices. In response to a client's write request, directed to at least a portion of a data chunk of the storage object, the appliance stores a data modification indicated in the write request at a storage device, and asynchronously uploads the modification to the storage service. In response to a client's read request, directed to a different portion of the data chunk, the appliance downloads the requested data from the storage service to the storage device, and provides the requested data to the client.
    Type: Grant
    Filed: October 31, 2012
    Date of Patent: February 23, 2016
    Assignee: Amazon Technologies, Inc.
    Inventors: David Carl Salyers, Pradeep Vincent, Ankur Khetrapal, Kestutis Patiejunas
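The gateway behavior in this abstract can be sketched as a write-back cache in front of a remote service: writes land in the local cache and are queued for asynchronous upload, while a read that misses locally downloads the data from the remote service first. The remote-service interface is a stand-in dictionary, and the "asynchronous" uploader is run synchronously for the example.

```python
class StorageGateway:
    def __init__(self, remote):
        self.remote = remote      # remote storage service: offset -> bytes
        self.cache = {}           # local storage device: offset -> bytes
        self.upload_queue = []    # modifications pending asynchronous upload

    def write(self, offset, data):
        self.cache[offset] = data              # acknowledge from local storage
        self.upload_queue.append((offset, data))

    def read(self, offset):
        if offset not in self.cache:           # miss: fetch from the service
            self.cache[offset] = self.remote[offset]
        return self.cache[offset]

    def flush_uploads(self):
        # The asynchronous uploader, invoked synchronously for this example.
        while self.upload_queue:
            offset, data = self.upload_queue.pop(0)
            self.remote[offset] = data

remote = {0: b"old", 512: b"other"}
gw = StorageGateway(remote)
gw.write(0, b"new")               # cached locally, upload deferred
print(remote[0])                  # b'old'   (not yet uploaded)
print(gw.read(512))               # b'other' (downloaded on miss)
gw.flush_uploads()
print(remote[0])                  # b'new'
```

Decoupling the acknowledgement (local) from the upload (asynchronous) is what gives clients low write latency while the remote service remains the durable copy.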
  • Patent number: 9268651
    Abstract: Methods and apparatus for efficient recovery of cached volumes at storage gateways are disclosed. To recover, after an unplanned shutdown, a storage gateway appliance configured to cache chunks of a storage object, chunk metadata corresponding to a particular chunk is read into an in-memory metadata region from a first metadata location. Based on analysis of the chunk metadata, a validation requirement indication for the particular chunk is stored, and the chunk is designated as being accessible for client I/O requests. In response to receiving a subsequent I/O request targeted to the particular chunk, the chunk metadata is validated using a different metadata location prior to performing the requested I/O operation.
    Type: Grant
    Filed: October 31, 2012
    Date of Patent: February 23, 2016
    Assignee: Amazon Technologies, Inc.
    Inventors: David Carl Salyers, Ankur Khetrapal, Pradeep Vincent, Kestutis Patiejunas
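The recovery flow in this last abstract can be sketched as lazy validation: at restart, chunk metadata is read from a first location, every chunk is flagged "validation required" but made immediately accessible, and reconciliation against a second metadata location is deferred until the first I/O actually touches the chunk. All names and the metadata layout are illustrative.

```python
class RecoveringGateway:
    def __init__(self, primary_meta, secondary_meta):
        self.in_memory = dict(primary_meta)        # fast, possibly stale copy
        self.secondary = secondary_meta            # second metadata location
        self.needs_validation = set(primary_meta)  # flag every chunk at boot

    def io(self, chunk, data_store):
        if chunk in self.needs_validation:
            # Lazy validation: reconcile with the second metadata location
            # only when the chunk is first touched after recovery.
            self.in_memory[chunk] = self.secondary[chunk]
            self.needs_validation.discard(chunk)
        return data_store[chunk] if self.in_memory[chunk]["valid"] else None

primary = {"c1": {"valid": True}, "c2": {"valid": True}}
secondary = {"c1": {"valid": True}, "c2": {"valid": False}}  # c2 was dirty
gw = RecoveringGateway(primary, secondary)
data = {"c1": b"aaa", "c2": b"bbb"}
print(gw.io("c1", data))   # b'aaa' (validated lazily, then served)
print(gw.io("c2", data))   # None  (second copy says the block is invalid)
```

Deferring validation to first touch is what keeps restart fast: the gateway comes back online after reading only the primary metadata, and pays the per-chunk validation cost incrementally.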