Patents by Inventor Thomas A. Volpe

Thomas A. Volpe has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10506044
    Abstract: Disclosed are techniques that can be used to efficiently collect statistical or other information from many registers in an integrated circuit without substantially burdening a central processing unit. The techniques can use logic that, without being directed by a central processing unit, can periodically collect information from a plurality of disparate registers of an integrated circuit and move the contents of the registers to memory (e.g., volatile memory accessible by the central processing unit) other than the registers.
    Type: Grant
    Filed: December 27, 2016
    Date of Patent: December 10, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Thomas A. Volpe, Mark Anthony Banse
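    Illustrative sketch: the collection logic above can be modeled in a few lines of software. Everything below (StatsCollector, the register map, the buffer layout) is invented for illustration and is not taken from the patent; it shows a periodic sweep that mirrors register values into CPU-visible memory so the CPU only ever reads the buffer.
      # Minimal software model of the autonomous statistics collector.
      import time

      class StatsCollector:
          def __init__(self, read_register, register_addresses, dest_buffer):
              self.read_register = read_register          # callable: address -> value
              self.register_addresses = register_addresses
              self.dest_buffer = dest_buffer              # stand-in for CPU-visible memory

          def collect_once(self):
              # Sweep every register and mirror its value into memory;
              # the CPU is never involved in reading the registers themselves.
              for slot, addr in enumerate(self.register_addresses):
                  self.dest_buffer[slot] = self.read_register(addr)

          def run(self, period_s, cycles):
              for _ in range(cycles):
                  self.collect_once()
                  time.sleep(period_s)

      regs = {0x100: 42, 0x104: 7, 0x108: 1999}           # fake statistics registers
      buf = [0] * len(regs)
      StatsCollector(regs.__getitem__, sorted(regs), buf).run(period_s=0.01, cycles=3)
      print(buf)                                          # -> [42, 7, 1999]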
  • Patent number: 10474359
    Abstract: Disclosed herein are techniques for reducing the number of write operations performed on a storage-class memory in a virtualized environment. In one embodiment, when a memory page is de-allocated from a virtual machine, the memory page and/or the subpages of the memory page are marked as “trimmed” in a control table such that any read to the memory page or subpages is denied, and no physical memory initialization is performed on the memory page or subpages. A de-allocated memory page or subpage is only initialized when it is reallocated and is to be written to by the virtual machine to which the memory page is reallocated.
    Type: Grant
    Filed: February 28, 2017
    Date of Patent: November 12, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Thomas A. Volpe, Nafea Bshara
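    Illustrative sketch: a toy model of the "trimmed" bookkeeping described above. The PageTable class and its fields are hypothetical; real hardware would track pages and subpages in a control table and enforce the read denial in the memory path.
      class PageTable:
          def __init__(self, num_pages):
              self.trimmed = [False] * num_pages
              self.storage = {}                   # physical writes actually performed

          def deallocate(self, page):
              # Mark trimmed instead of zeroing storage-class memory: no write cycles spent.
              self.trimmed[page] = True

          def read(self, page):
              if self.trimmed[page]:
                  raise PermissionError("read to a trimmed page is denied")
              return self.storage.get(page, 0)

          def write(self, page, value):
              # Initialization happens lazily, only when a reallocated page is written.
              self.trimmed[page] = False
              self.storage[page] = value

      pt = PageTable(4)
      pt.deallocate(2)
      try:
          pt.read(2)
      except PermissionError as err:
          print(err)
      pt.write(2, 123)                            # page touched again only on this write
      print(pt.read(2))                           # -> 123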
  • Patent number: 10459662
    Abstract: Failed write handling can be implemented at a memory controller for non-volatile memory. Failure of a write to a storage location in the non-volatile memory may be detected. An indication of the failure may be sent to a microcontroller for the non-volatile memory which may return an instruction to write to a different location in the non-volatile memory. Reads and writes to the storage location of the failed write may still be allowed, in some embodiments, by redirecting the reads and writes to a copy of data of the failed write stored in a copy buffer in the memory controller.
    Type: Grant
    Filed: September 27, 2017
    Date of Patent: October 29, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Thomas A. Volpe, Mark Anthony Banse, Steven Scott Larson
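    Illustrative sketch: a simplified model of the failed-write path. BAD_CELLS, Microcontroller.remap, and the dict-backed memory are stand-ins invented for this example; they only show data being parked in a copy buffer while a replacement location is chosen.
      BAD_CELLS = {0x20}                          # locations that will fail to program

      class Microcontroller:
          def remap(self, addr):
              return addr + 0x1000                # choose a spare location elsewhere

      class MemoryController:
          def __init__(self, microcontroller):
              self.memory = {}
              self.microcontroller = microcontroller
              self.copy_buffer = {}               # data of failed writes lives here

          def write(self, addr, value):
              if addr in self.copy_buffer:
                  self.copy_buffer[addr] = value  # later writes land in the copy buffer
                  return
              if addr in BAD_CELLS:               # write failure detected
                  self.copy_buffer[addr] = value
                  spare = self.microcontroller.remap(addr)
                  self.memory[spare] = value      # data also placed at the spare location
              else:
                  self.memory[addr] = value

          def read(self, addr):
              # Reads to the failed location are served from the copy buffer.
              return self.copy_buffer.get(addr, self.memory.get(addr))

      mc = MemoryController(Microcontroller())
      mc.write(0x20, 0xAB)                        # fails; parked in the copy buffer
      print(hex(mc.read(0x20)))                   # -> 0xab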
  • Patent number: 10454831
    Abstract: Forwarding of network packets generated by a networking device may be load-balanced. Network packets may be generated by a networking device, such as a packet processor, that also processes and forwards received network packets. Forwarding decisions for the generated network packets may be made according to a load balancing scheme among possible forwarding routes from the networking device. In at least some embodiments, a destination resolution pipeline for determining forwarding decisions for generated network packets may be implemented separate from a destination resolution pipeline for determining forwarding decisions for received network packets in order to determine different forwarding decisions for the generated network packets. The generated network packets may then be forwarded according to the determined forwarding decisions.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: October 22, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Thomas A. Volpe, Asif Khan, Nafea Bshara
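    Illustrative sketch: one common way to realize the load balancing described above is a flow-consistent hash over packet fields. The CRC32 hash and NEXT_HOPS list below are assumptions for illustration, not the patented destination resolution pipeline.
      import zlib

      NEXT_HOPS = ["port0", "port1", "port2", "port3"]

      def pick_next_hop(src, dst, proto, flow_id):
          # Spread device-generated packets over all usable routes while keeping
          # any one flow pinned to a single route.
          key = f"{src}|{dst}|{proto}|{flow_id}".encode()
          return NEXT_HOPS[zlib.crc32(key) % len(NEXT_HOPS)]

      for flow in range(4):
          print(flow, pick_next_hop("10.0.0.1", "10.0.0.2", "udp", flow))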
  • Patent number: 10445638
    Abstract: Disclosed herein are techniques for performing neural network computations. In one embodiment, an apparatus may include an array of processing elements, the array having a configurable first effective dimension and a configurable second effective dimension. The apparatus may also include a controller configured to determine at least one of: a first number of input data sets to be provided to the array at a first time or a second number of output data sets to be generated by the array at a second time, and to configure, based on at least one of the first number or the second number, at least one of the first effective dimension or the second effective dimension of the array.
    Type: Grant
    Filed: February 28, 2018
    Date of Patent: October 15, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Sundeep Amirineni, Ron Diamant, Randy Huang, Thomas A. Volpe
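    Illustrative sketch: the dimension-configuration step can be pictured as clamping the requested input/output set counts to the physical array. The sizes and the configure_array function below are invented; they are not the patented control logic.
      PHYSICAL_ROWS, PHYSICAL_COLS = 128, 64      # fixed physical array, illustrative

      def configure_array(num_input_sets, num_output_sets):
          # Unused rows/columns can be gated off (or given to another slice of work),
          # so the effective dimensions track the data actually flowing through.
          rows = min(num_input_sets, PHYSICAL_ROWS)
          cols = min(num_output_sets, PHYSICAL_COLS)
          return rows, cols

      print(configure_array(num_input_sets=32, num_output_sets=16))    # -> (32, 16)
      print(configure_array(num_input_sets=300, num_output_sets=80))   # -> (128, 64)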
  • Publication number: 20190294968
    Abstract: Disclosed herein are techniques for performing multi-layer neural network processing for multiple contexts. In one embodiment, a computing engine is set in a first configuration to implement a second layer of a neural network and to process first data related to a first context to generate first context second layer output. The computing engine can be switched from the first configuration to a second configuration to implement a first layer of the neural network. The computing engine can be used to process second data related to a second context to generate second context first layer output. The computing engine can be set to a third configuration to implement a third layer of the neural network to process the first context second layer output and the second context first layer output to generate a first processing result of the first context and a second processing result of the second context.
    Type: Application
    Filed: March 22, 2018
    Publication date: September 26, 2019
    Inventors: Dana Michelle Vantrease, Ron Diamant, Thomas A. Volpe, Randy Huang
  • Publication number: 20190294959
    Abstract: Disclosed herein are techniques for scheduling and executing multi-layer neural network computations for multiple contexts. In one embodiment, a method comprises determining a set of computation tasks to be executed, the set of computation tasks including a first computation task and a second computation task, as well as a third computation task and a fourth computation task to provide input data for the first and second computation tasks; determining a first execution batch comprising the first and second computation tasks; determining a second execution batch comprising at least the third computation task to be executed before the first execution batch; determining whether to include the fourth computation task in the second execution batch based on whether a memory device has sufficient capacity to hold input data and output data of both the third and fourth computation tasks; and executing the second execution batch followed by the first execution batch.
    Type: Application
    Filed: March 22, 2018
    Publication date: September 26, 2019
    Inventors: Dana Michelle Vantrease, Ron Diamant, Thomas A. Volpe, Randy Huang
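    Illustrative sketch: the batching choice reduces to a memory-capacity check. The task sizes and the single MEMORY_CAPACITY constant below are invented to show the shape of that decision.
      MEMORY_CAPACITY = 1024                      # on-chip buffer bytes, illustrative

      def footprint(task):
          return task["in_bytes"] + task["out_bytes"]

      def schedule(first, second, third, fourth):
          batch1 = [first, second]                # consumers
          batch2 = [third]                        # producer that must run first
          # Fold the fourth producer into batch 2 only if both producers' inputs
          # and outputs fit in memory at the same time.
          if footprint(third) + footprint(fourth) <= MEMORY_CAPACITY:
              batch2.append(fourth)
              return [batch2, batch1]
          return [batch2, [fourth], batch1]

      task = lambda name, i, o: {"name": name, "in_bytes": i, "out_bytes": o}
      plan = schedule(task("t1", 256, 64), task("t2", 256, 64),
                      task("t3", 300, 200), task("t4", 300, 200))
      print([[x["name"] for x in batch] for batch in plan])   # -> [['t3', 't4'], ['t1', 't2']]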
  • Patent number: 10404674
    Abstract: Efficient memory management can be provided in a multi-tenant virtualized environment by encrypting data to be written in memory by a virtual machine using a cryptographic key specific to the virtual machine. Encrypting data associated with multiple virtual machines using a cryptographic key unique to each virtual machine can minimize exposure of the data stored in the memory shared by the multiple virtual machines. Thus, some embodiments can eliminate write cycles to the memory that are generally used to initialize the memory before a virtual machine can write data to the memory if the memory was used previously by another virtual machine.
    Type: Grant
    Filed: February 28, 2017
    Date of Patent: September 3, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Nafea Bshara, Thomas A. Volpe, Adi Habusha, Yaniv Shapira
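    Illustrative sketch: a toy model of per-VM keying, in which each virtual machine's key scrambles its writes so stale data read under a different key is meaningless and no scrubbing writes are required. The hash-based keystream below is NOT a real cipher and exists only to make the example self-contained.
      import hashlib, secrets

      def keystream(key, addr, length):
          # Toy pad derived from the key and address (supports writes up to 32 bytes).
          return hashlib.sha256(key + addr.to_bytes(8, "little")).digest()[:length]

      def xor(data, pad):
          return bytes(a ^ b for a, b in zip(data, pad))

      class EncryptedMemory:
          def __init__(self, size):
              self.cells = bytearray(secrets.token_bytes(size))   # never zero-initialized

          def write(self, vm_key, addr, data):
              self.cells[addr:addr + len(data)] = xor(data, keystream(vm_key, addr, len(data)))

          def read(self, vm_key, addr, length):
              return xor(bytes(self.cells[addr:addr + length]), keystream(vm_key, addr, length))

      mem = EncryptedMemory(64)
      key_a, key_b = secrets.token_bytes(16), secrets.token_bytes(16)
      mem.write(key_a, 0, b"vm-a secret")
      print(mem.read(key_a, 0, 11))               # -> b'vm-a secret'
      print(mem.read(key_b, 0, 11))               # unreadable bytes: no scrubbing needed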
  • Patent number: 10397116
    Abstract: Disclosed are techniques that can be used within network devices to implement access control functionality. The techniques can include use of a content-addressable memory configured including an access control entry stored therein. Circuitry can be coupled to the content-addressable memory and configured to determine that a value is within a range of values. The circuitry can generate a compare key including a field that is set indicating that the value is within the range of values. The circuitry can provide, to the content-addressable memory, the compare key for locating a corresponding access control entry within the content-addressable memory. The circuitry can receive, from the content-addressable memory, an index of the access control entry stored within the content-addressable memory. The circuitry can select, based on the index of the access control entry, an action.
    Type: Grant
    Filed: May 5, 2017
    Date of Patent: August 27, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Thomas A. Volpe, Chin Cheah
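    Illustrative sketch: the range-check result can be folded into the compare key as extra bits before the TCAM lookup. The ranges, entries, and ternary matching below are invented and far simpler than a real content-addressable memory.
      RANGES = [(1024, 2048), (80, 80)]           # e.g. L4 port ranges checked outside the TCAM

      def build_compare_key(dst_port, proto):
          range_bits = tuple(lo <= dst_port <= hi for lo, hi in RANGES)
          return (proto,) + range_bits

      # Ternary entries: each field is a concrete value or None (don't care).
      ACL = [
          {"index": 0, "key": ("tcp", None, True), "action": "permit"},   # port == 80
          {"index": 1, "key": ("tcp", True, None), "action": "deny"},     # ports 1024-2048
          {"index": 2, "key": (None, None, None),  "action": "drop"},     # default entry
      ]

      def lookup(compare_key):
          for entry in ACL:                       # lowest index wins, like TCAM priority
              if all(e is None or e == c for e, c in zip(entry["key"], compare_key)):
                  return entry["index"], entry["action"]

      print(lookup(build_compare_key(80, "tcp")))     # -> (0, 'permit')
      print(lookup(build_compare_key(1500, "tcp")))   # -> (1, 'deny')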
  • Patent number: 10397362
    Abstract: A cache-and-overflow memory handles both cache and overflow data in a single hardware structure so as to increase speed and reduce the supporting hardware needed to implement an effective memory system. A single hash value can be used to access either a cache data value or an overflow data value stored in the combined cache-and-overflow memory. If there are a small number of overflow entries, the combined cache-and-overflow memory provides more availability for cache entries. However, overflow entries are given priority over cache entries. As a result, the combined cache-and-overflow memory dynamically reallocates its space to efficiently store as much cache data as possible until space is needed for overflow data. At that time, the cache data is evicted in a priority order to make space for the overflow data.
    Type: Grant
    Filed: June 24, 2015
    Date of Patent: August 27, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Thomas A. Volpe, Kari Ann O'Brien
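    Illustrative sketch: one hash-indexed store shared by cache and overflow entries, with overflow taking priority. The slot layout and eviction rule below are simplified assumptions, not the patented structure.
      NUM_SLOTS = 8

      class CacheAndOverflow:
          def __init__(self):
              self.slots = [None] * NUM_SLOTS     # each slot holds (kind, key, value)

          def _index(self, key):
              return hash(key) % NUM_SLOTS        # one hash serves both kinds of entry

          def insert(self, kind, key, value):
              occupant = self.slots[self._index(key)]
              if occupant is None or kind == "overflow" or occupant[0] == "cache":
                  # Overflow always wins; cache entries are evicted to make room.
                  self.slots[self._index(key)] = (kind, key, value)
                  return True
              return False                        # a cache insert loses to existing overflow

          def get(self, key):
              entry = self.slots[self._index(key)]
              return entry[2] if entry and entry[1] == key else None

      store = CacheAndOverflow()
      store.insert("cache", "flow-a", 1)
      store.insert("overflow", "flow-a", 2)       # evicts the cache copy in that slot
      print(store.get("flow-a"))                  # -> 2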
  • Patent number: 10389632
    Abstract: Disclosed herein is an apparatus for processing an Internet Protocol (IP) header and label switching (LS) headers of a packet in a pipeline. The apparatus includes an LS header processing circuit configured to select a first operation for the packet using an LS header from the packet, and an IP header processing circuit configured to perform an IP lookup to select a second operation for the packet. The apparatus further includes a tunnel initiation circuit configured to initiate an LS tunnel or IP tunnel. The LS header processing circuit, the IP header processing circuit, and the tunnel initiation circuit are operable to operate sequentially on a same packet and concurrently on different packets in a pipeline. Each of these circuits can be bypassed based on the outermost header in the packet or on the selected one of the first and second operations.
    Type: Grant
    Filed: July 13, 2018
    Date of Patent: August 20, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Kari Ann O'Brien, Thomas A. Volpe, Bijendra Singh
  • Patent number: 10353843
    Abstract: A device can include one or more configurable packet processing pipelines to process a plurality of packets. Each configurable packet processing pipeline can include a plurality of packet processing components, wherein each packet processing component is configured to perform one or more packet processing operations for the device. The plurality of packet processing components are coupled to a packet processing interconnect, wherein each packet processing component is configured to route the packets through the packet processing interconnect for the one or more configurable packet processing pipelines.
    Type: Grant
    Filed: April 5, 2018
    Date of Patent: July 16, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Mark Bradley Davis, Asif Khan, Thomas A. Volpe, Robert Michael Johnson
  • Patent number: 10346342
    Abstract: A plurality of system on chips (SoCs) in a server computer can be coupled to a plurality of memory agents (MAs) via respective Serializer/Deserializer (SerDes) interfaces. Each of the plurality of MAs can include one or more memory controllers to communicate with a memory coupled to the respective MA, and globally addressable by each of the SoCs. Each of the plurality of SoCs can access the memory coupled to any of the MAs in a uniform number of hops using the respective SerDes interfaces. Different types of memory (e.g., volatile memory, persistent memory) can be supported.
    Type: Grant
    Filed: March 7, 2017
    Date of Patent: July 9, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Mark Bradley Davis, Thomas A. Volpe, Nafea Bshara, Yaniv Shapira, Adi Habusha
  • Patent number: 10333813
    Abstract: A timer scheduler is used to track timeout values for network connections. A single hardware timer generates timeout values that can be tracked per connection in a linked list that is processed at set time intervals. All tracked connections can have a future timeout scheduled. Future timeout values can be stored in both a linked list and a connection state table that cross-reference each other. The linked list is traversed at predetermined intervals to determine which entries have timed out. For each entry that timed out, a second check is made against the timeout value in the connection state table. If the timeout value within the connection state table indicates that a timeout occurred, then the network connection is terminated.
    Type: Grant
    Filed: June 25, 2015
    Date of Patent: June 25, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Kari Ann O'Brien, Thomas A. Volpe
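    Illustrative sketch: the two-step check, linked list first and connection state table second. The deque, the per-connection dict, and the sweep cadence below are software stand-ins for the hardware structures.
      import time
      from collections import deque

      connection_state = {}      # conn_id -> authoritative absolute deadline
      timer_list = deque()       # (conn_id, deadline) queued when a timeout is scheduled

      def schedule_timeout(conn_id, seconds):
          deadline = time.monotonic() + seconds
          connection_state[conn_id] = deadline    # cross-referenced copy
          timer_list.append((conn_id, deadline))

      def sweep(terminate):
          now = time.monotonic()
          while timer_list and timer_list[0][1] <= now:
              conn_id, _ = timer_list.popleft()
              # Second check: the connection may have been rescheduled since this entry
              # was queued, in which case the state table holds a later deadline.
              if connection_state.get(conn_id, float("inf")) <= now:
                  terminate(conn_id)
                  connection_state.pop(conn_id, None)

      schedule_timeout("conn-1", 0.01)
      schedule_timeout("conn-2", 0.01)
      connection_state["conn-2"] = time.monotonic() + 60    # activity pushed its deadline out
      time.sleep(0.02)
      sweep(lambda c: print("terminated", c))               # -> terminated conn-1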
  • Patent number: 10333853
    Abstract: A system can provide a unified quality-of-service for multiprotocol label switching (MPLS) traffic. A single network device can be configured as an ingress label switch router, an egress label switch router, or a label switching router for different label switching paths. The network device can support the uniform, pipe, or short-pipe deployment model in any configuration. A configurable scheduler priority map can be selected from multiple scheduler priority maps to generate a scheduler priority class, which can be used to assign priority and resources for the packet.
    Type: Grant
    Filed: March 30, 2016
    Date of Patent: June 25, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Kiran Kalkunte Seshadri, Bijendra Singh, Thomas A. Volpe, Kari Ann O'Brien
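    Illustrative sketch: selecting a scheduler priority map by label-switching role. The map names, code points, and role-to-map binding below are invented for illustration.
      SCHEDULER_PRIORITY_MAPS = {
          # map id -> {packet code point -> scheduler priority class}
          "uniform":    {0: 0, 1: 1, 2: 2, 3: 3},
          "pipe":       {0: 0, 1: 0, 2: 2, 3: 2},
          "short_pipe": {0: 1, 1: 1, 2: 1, 3: 3},
      }
      ROLE_TO_MAP = {"ingress": "uniform", "transit": "pipe", "egress": "short_pipe"}

      def priority_class(packet, role):
          # The same device can play different roles on different label switching
          # paths; each role points at its own configurable map.
          return SCHEDULER_PRIORITY_MAPS[ROLE_TO_MAP[role]][packet["code_point"]]

      print(priority_class({"code_point": 2}, "transit"))   # -> 2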
  • Publication number: 20190149472
    Abstract: Packet processors or other devices with packet processing pipelines may implement pipelined evaluations of algorithmic forwarding route lookups. As network packets are received, a destination address for the network packets may be divided into different possible prefix lengths and corresponding entries in a routing table for the different possible prefix lengths may be determined according to a hash scheme for the routing table. The entry values may be read from the routing table and evaluated at subsequent stages to identify the entry with a longest prefix match with respect to the destination address for the network packet. The routing table may include entries for different types of network packets and may be configured to include virtual routing and forwarding for network packets.
    Type: Application
    Filed: November 16, 2018
    Publication date: May 16, 2019
    Applicant: Amazon Technologies, Inc.
    Inventors: Bijendra Singh, Thomas A. Volpe, Kari Ann O'Brien
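    Illustrative sketch: the hash-per-prefix-length lookup in software. The routing-table contents are made up, and the serial loop stands in for candidate keys the pipeline evaluates in parallel stages before keeping the longest match.
      import ipaddress

      ROUTES = {                 # (prefix, length) -> next hop, keyed as a hash table would be
          (int(ipaddress.ip_address("10.0.0.0")), 8):  "hop-a",
          (int(ipaddress.ip_address("10.1.0.0")), 16): "hop-b",
          (int(ipaddress.ip_address("10.1.2.0")), 24): "hop-c",
      }
      PREFIX_LENGTHS = (32, 24, 16, 8)

      def lookup(dst):
          addr = int(ipaddress.ip_address(dst))
          for length in PREFIX_LENGTHS:           # longest prefix first
              mask = ~((1 << (32 - length)) - 1) & 0xFFFFFFFF
              hit = ROUTES.get((addr & mask, length))
              if hit:
                  return hit, length
          return None, None

      print(lookup("10.1.2.9"))                   # -> ('hop-c', 24)
      print(lookup("10.9.9.9"))                   # -> ('hop-a', 8)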
  • Patent number: 10284460
    Abstract: Network packet tracing may be implemented on packet processors or other devices that perform packet processing. As network packets are received, a determination may be made as to whether tracing is enabled for the network packets. For those network packets with tracing enabled, trace information may be generated and the network packets modified to include the trace information such that forwarding decisions for the network packets ignore the trace information. Trace information may indicate a packet processor as a location in a route traversed by the network packets and may include ingress and egress timestamps. Forwarding decisions may then be made and the network packets sent according to the forwarding decisions. Tracing may be enabled or disabled by packet processors for individual network packets. Trace information may also be truncated at a packet processor.
    Type: Grant
    Filed: December 28, 2015
    Date of Patent: May 7, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Nafea Bshara, Leonard Thomas Tracy, Thomas A. Volpe, Mark Bradley Davis, Mark Noel Kelly, Stephen Callaghan, Justin Oliver Pietsch, Edward Crabbe
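    Illustrative sketch: trace information is attached in a way the forwarding decision never reads. The field names below (trace_enabled, ingress_ts, egress_ts) are invented for illustration.
      import time

      def add_trace(packet, node_id):
          if packet.get("trace_enabled"):
              packet.setdefault("trace", []).append(
                  {"node": node_id, "ingress_ts": time.time_ns()})
          return packet

      def forward(packet, routes):
          # The forwarding decision looks only at the destination, never at "trace".
          next_hop = routes[packet["dst"]]
          if packet.get("trace_enabled"):
              packet["trace"][-1]["egress_ts"] = time.time_ns()
          return next_hop, packet

      pkt = {"dst": "10.0.0.9", "trace_enabled": True}
      hop, pkt = forward(add_trace(pkt, node_id="sw1"), routes={"10.0.0.9": "port3"})
      print(hop, pkt["trace"])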
  • Patent number: 10257098
    Abstract: Provided are systems and methods for packet policing for controlling the rate of packet flows. In some implementations, an integrated circuit is provided. The integrated circuit may comprise a memory, a counter, and a pipeline. The integrated circuit may be operable to, upon receiving packet information describing a packet, determine, using the pipeline, a drop status for the packet. Determining the drop status may include determining a previous number of credits available, a number of new credits available, a current number of credits available, and a number of credits needed to transmit the packet. The drop status may be determined by comparing the number of credits needed to transmit the packet against the current number of credits available. The integrated circuit may further update the information stored for a policing context in the memory based on the drop status and the number of credits needed to transmit the packet.
    Type: Grant
    Filed: December 29, 2015
    Date of Patent: April 9, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Mark Anthony Banse, Thomas A. Volpe
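    Illustrative sketch: the credit mechanism is essentially a token/credit bucket. The one-credit-per-byte convention, rate, and bucket size below are assumptions chosen to show the accrue-compare-decrement flow.
      import time

      class Policer:
          def __init__(self, credits_per_second, max_credits):
              self.rate = credits_per_second
              self.max_credits = max_credits
              self.credits = max_credits          # previous number of credits available
              self.last_update = time.monotonic()

          def allow(self, packet_bytes):
              now = time.monotonic()
              # New credits accrue with elapsed time, capped at the bucket size.
              new_credits = (now - self.last_update) * self.rate
              self.credits = min(self.max_credits, self.credits + new_credits)
              self.last_update = now
              needed = packet_bytes               # one credit per byte in this sketch
              if needed <= self.credits:
                  self.credits -= needed
                  return True                     # forward
              return False                        # drop status: drop

      policer = Policer(credits_per_second=1_000_000, max_credits=10_000)
      print([policer.allow(1500) for _ in range(8)])   # trailing packets get dropped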
  • Patent number: 10237378
    Abstract: Provided are systems, methods, and integrated circuits for a low-latency, metadata-based packet rewriter. In various implementations, an integrated circuit may include a first pipeline stage operable to receive packet bytes for a packet and packet information. The first stage may further be operable to extract a first value from the packet bytes, and provide the packet bytes, packet information, and first value. The integrated circuit may further include a second stage, operable to receive the packet bytes and packet information. The second stage may further calculate a second value using a value from the packet information, and provide the packet bytes, packet information, and second value. The integrated circuit may further include a third stage, operable to receive the packet bytes, packet information, and a third value. The third stage may further be operable to insert the third value into the packet bytes, and provide the packet bytes and packet information.
    Type: Grant
    Filed: June 29, 2016
    Date of Patent: March 19, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Kiran Kalkunte Seshadri, Thomas A. Volpe
  • Patent number: 10228852
    Abstract: Disclosed herein are methods and apparatuses related to the use of counter tables. A counter table can comprise a plurality of lower-level counters and an upper-level counter. A range of values capable of being represented by a lower-level counter from the plurality of lower-level counters can be enlarged by associating the lower-level counter with the upper-level counter. A counter table can be associated with a network device.
    Type: Grant
    Filed: March 25, 2016
    Date of Patent: March 12, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Thomas A. Volpe, Mark Anthony Banse
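    Illustrative sketch: widening narrow counters with an upper-level extension. The 8-bit width is invented, and for simplicity each lower-level counter here carries into its own upper-level counter rather than one associated with several lower-level counters.
      LOWER_BITS = 8
      LOWER_MAX = (1 << LOWER_BITS) - 1

      class CounterTable:
          def __init__(self, num_counters):
              self.lower = [0] * num_counters     # narrow, frequently updated counters
              self.upper = [0] * num_counters     # wide extensions, touched only on carry

          def increment(self, i, amount=1):
              total = self.lower[i] + amount
              self.upper[i] += total >> LOWER_BITS            # carry the overflow upward
              self.lower[i] = total & LOWER_MAX

          def read(self, i):
              return (self.upper[i] << LOWER_BITS) | self.lower[i]

      table = CounterTable(4)
      for _ in range(1000):
          table.increment(2)
      print(table.read(2), table.lower[2], table.upper[2])    # -> 1000 232 3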