Patents by Inventor Erez Izenberg

Erez Izenberg has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10594476
    Abstract: A hardware cipher module to cipher a packet. The cipher module includes a key scheduling engine and a ciphering engine. The key scheduling engine is configured to receive a compact key and iteratively generate a set of round keys, including a first round key, based on the compact key and determine, based upon a cipher mode indication and a type of ciphering, whether to generate a key-scheduling-done indication after the first round key is generated and before all of the set of round keys are generated, or to generate the key-scheduling-done indication after all of the set of round keys are generated. The ciphering engine is configured to begin to cipher the packet with one of the set of round keys as a result of receiving the key-scheduling-done indication.
    Type: Grant
    Filed: April 30, 2018
    Date of Patent: March 17, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Ron Diamant, Nafea Bshara, Erez Izenberg
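
The early "done" behavior in patent 10594476 can be illustrated with a minimal Python sketch. The round-key derivation, the round count, and the `early_done` flag (standing in for the cipher-mode and ciphering-type decision) are assumptions for illustration, not the patent's actual key schedule.

```python
# Hypothetical software model of the key scheduling engine in patent
# 10594476. The round-key derivation below is a stand-in, not a real
# cipher key schedule.

def schedule_keys(compact_key: bytes, rounds: int, early_done: bool):
    """Yield (round_key, done_flag) pairs.

    If early_done is True (a cipher mode whose first round can start as
    soon as the first round key exists), the done indication is raised
    after the first round key; otherwise only after the last one.
    """
    round_key = compact_key
    for i in range(rounds):
        # Placeholder derivation: XOR each byte with the round index.
        round_key = bytes(b ^ (i + 1) for b in round_key)
        done = (i == 0) if early_done else (i == rounds - 1)
        yield round_key, done


if __name__ == "__main__":
    key = bytes(16)
    for rk, done in schedule_keys(key, rounds=10, early_done=True):
        if done:
            print("ciphering engine may start with round key:", rk.hex())
            break
```
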
  • Publication number: 20200057747
    Abstract: A technique for remote direct memory access (RDMA) may include receiving a packet that was sent over a network, and determining the packet has metadata indicative of acceleration. The technique may also include selecting a queue having minimal storage stages to process the packet, and writing the data of the packet to an application memory using the datapath associated with the queue. Amended metadata can be generated to indicate that the data has been written to the application memory, and the amended metadata can be stored in a software accessible buffer.
    Type: Application
    Filed: October 23, 2019
    Publication date: February 20, 2020
    Inventors: Erez Izenberg, Leah Shalev, Georgy Machulsky, Nafea Bshara
  • Patent number: 10509764
    Abstract: Apparatus and methods are disclosed herein for remote, direct memory access (RDMA) technology that enables direct memory access from one host computer memory to another host computer memory over a physical or virtual computer network according to a number of different RDMA protocols. In one example, a method includes receiving remote direct memory access (RDMA) packets via a network adapter, deriving a protocol index identifying an RDMA protocol used to encode data for an RDMA transaction associated with the RDMA packets, applying the protocol index to generate RDMA commands from header information in at least one of the received RDMA packets, and performing an RDMA operation using the RDMA commands.
    Type: Grant
    Filed: May 25, 2016
    Date of Patent: December 17, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Erez Izenberg, Leah Shalev, Nafea Bshara, Guy Nakibly, Georgy Machulsky
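
As a rough illustration of the protocol-index dispatch in patent 10509764, the following Python sketch maps header fields to a protocol index and picks a per-protocol decoder. The table entries, field names, and decoders are hypothetical.

```python
# Hypothetical model of protocol-index dispatch for RDMA packets
# (patent 10509764). Field values, table entries, and decoders are
# illustrative, not the patent's actual encoding.
from typing import Callable, Dict, Optional, Tuple

# (ether_type, udp_dest_port) -> protocol index
PROTOCOL_TABLE: Dict[Tuple[int, Optional[int]], int] = {
    (0x8915, None): 0,   # RDMA carried directly over Ethernet
    (0x0800, 4791): 1,   # RDMA carried over UDP/IP
}

def derive_protocol_index(ether_type: int, udp_port: Optional[int]) -> int:
    for (etype, port), index in PROTOCOL_TABLE.items():
        if etype == ether_type and (port is None or port == udp_port):
            return index
    raise ValueError("packet does not match a configured RDMA protocol")

def decode_proto0(header: bytes) -> dict:
    return {"protocol": 0, "opcode": header[0]}      # placeholder decode

def decode_proto1(header: bytes) -> dict:
    return {"protocol": 1, "opcode": header[0]}      # placeholder decode

DECODERS: Dict[int, Callable[[bytes], dict]] = {0: decode_proto0, 1: decode_proto1}

def rdma_command(ether_type: int, udp_port: Optional[int], header: bytes) -> dict:
    index = derive_protocol_index(ether_type, udp_port)   # pick the protocol
    return DECODERS[index](header)                        # decode per protocol

print(rdma_command(0x0800, 4791, bytes([0x0A, 0x00])))
```
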
  • Publication number: 20190364136
    Abstract: A system, comprising: a configurable parser that comprises one or more configurable parsing engines, wherein the configurable parser is arranged to receive a packet and to extract from the packet headers associated with a set of protocols that comprises at least one protocol; a packet type detection unit that is arranged to determine a type of the packet in response to the set of protocols; and a configurable data integrity unit that comprises a configuration unit and at least one configurable data integrity engine; wherein the configuration unit is arranged to configure the at least one configurable data integrity engine according to the set of protocols; and wherein the at least one configurable data integrity engine is arranged to perform data integrity processing of the packet to provide at least one data integrity result.
    Type: Application
    Filed: June 7, 2019
    Publication date: November 28, 2019
    Inventors: Ofer Naaman, Erez Izenberg, Nafea Bshara
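
A small Python sketch of the parse-then-verify flow described in publication 20190364136 (and in the granted patent 10320956 below). The protocol names and header lengths are made up, and CRC32 from the standard library stands in for whatever integrity function each protocol actually requires.

```python
# Illustrative-only model of a configurable parser feeding a configurable
# data integrity unit (publication 20190364136 / patent 10320956).
import zlib

# Per-protocol parser configuration: header length in bytes (assumed values).
PARSER_CONFIG = {"eth": 14, "ipv4": 20, "udp": 8}

def parse(packet: bytes, protocols: list[str]) -> tuple[dict, bytes]:
    """Extract one header per configured protocol; return headers and payload."""
    headers, offset = {}, 0
    for proto in protocols:
        length = PARSER_CONFIG[proto]
        headers[proto] = packet[offset:offset + length]
        offset += length
    return headers, packet[offset:]

def integrity_check(payload: bytes, protocols: list[str]) -> dict:
    """Configure the integrity engine from the detected protocol set.

    CRC32 stands in for whatever checksum each protocol actually requires.
    """
    return {proto: zlib.crc32(payload) for proto in protocols}

packet = bytes(64)
headers, payload = parse(packet, ["eth", "ipv4", "udp"])
print(integrity_check(payload, ["ipv4", "udp"]))
```
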
  • Patent number: 10459875
    Abstract: According to an embodiment of the invention there may be provided a method for hybrid remote direct memory access (RDMA), the method may include: (i) receiving, by a first computer, a packet that was sent over a network from a second computer; wherein the packet may include data and metadata; (ii) determining, in response to the metadata, whether the data should be (a) directly written to a first application memory of the first computer by a first hardware accelerator of the first computer; or (b) indirectly written to the first application memory; (iii) indirectly writing the data to the first application memory if it is determined that the data should be indirectly written to the first application memory; (iv) if it is determined that the data should be directly written to the first application memory then: (iv.a) directly writing, by the first hardware accelerator, the data to the first application memory without writing the data to any buffer of the operating system; and (iv.
    Type: Grant
    Filed: November 23, 2016
    Date of Patent: October 29, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Erez Izenberg, Leah Shalev, Georgy Machulsky, Nafea Bshara
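
A minimal Python sketch of the direct-versus-indirect placement decision in patent 10459875. The `direct_placement` metadata flag, the memory layout, and the fallback path are assumptions used only to show the control flow.

```python
# Hypothetical model of the direct-vs-indirect write decision from patent
# 10459875. Memory regions and the metadata flag are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Host:
    app_memory: bytearray = field(default_factory=lambda: bytearray(1024))
    os_buffer: list = field(default_factory=list)

def deliver(host: Host, data: bytes, metadata: dict, offset: int = 0) -> str:
    if metadata.get("direct_placement"):
        # Hardware accelerator path: place data straight into application
        # memory, bypassing operating-system buffers entirely.
        host.app_memory[offset:offset + len(data)] = data
        return "direct"
    # Fallback path: stage the data in an OS buffer; software copies it
    # into application memory later.
    host.os_buffer.append((offset, data))
    return "indirect"

host = Host()
print(deliver(host, b"payload", {"direct_placement": True}))   # direct
print(deliver(host, b"payload", {"direct_placement": False}))  # indirect
```
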
  • Publication number: 20190258597
    Abstract: The following description is directed to a configurable logic platform. In one example, a configurable logic platform includes host logic and a reconfigurable logic region. The reconfigurable logic region can include logic blocks that are configurable to implement application logic. The host logic can be used for encapsulating the reconfigurable logic region. The host logic can include a host interface for communicating with a processor. The host logic can include a management function accessible via the host interface. The management function can be adapted to cause the reconfigurable logic region to be configured with the application logic in response to an authorized request from the host interface. The host logic can include a data path function accessible via the host interface. The data path function can include a layer for formatting data transfers between the host interface and the application logic.
    Type: Application
    Filed: February 27, 2019
    Publication date: August 22, 2019
    Inventors: Islam Atta, Christopher Joseph Pettey, Asif Khan, Robert Michael Johnson, Mark Bradley Davis, Erez Izenberg, Nafea Bshara, Kypros Constantinides
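
The host-logic behavior described in publication 20190258597 (and in patent 10223317 below) can be sketched in Python as follows; the authorization tokens, the "configure" step, and the framing layer are placeholders rather than the actual host-logic design.

```python
# Toy model of the host-logic management and data path functions from
# publication 20190258597. Tokens, bitstreams, and framing are made up.

class ReconfigurableRegion:
    def __init__(self):
        self.application_logic = None

class HostLogic:
    def __init__(self, region: ReconfigurableRegion, authorized_tokens: set[str]):
        self.region = region
        self.authorized_tokens = authorized_tokens

    def configure(self, token: str, application_logic: bytes) -> bool:
        """Management function: load application logic only for an authorized request."""
        if token not in self.authorized_tokens:
            return False
        self.region.application_logic = application_logic
        return True

    def transfer(self, data: bytes) -> bytes:
        """Data path function: format transfers between host and application logic."""
        return len(data).to_bytes(4, "big") + data   # trivial framing layer

host = HostLogic(ReconfigurableRegion(), {"tenant-token"})
assert host.configure("tenant-token", b"bitstream") is True
assert host.configure("unknown-token", b"bitstream") is False
```
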
  • Publication number: 20190213155
    Abstract: The following description is directed to a configurable logic platform. In one example, a configurable logic platform includes host logic and a plurality of reconfigurable logic regions. Each reconfigurable region can include hardware that is configurable to implement an application logic design. The host logic can be used for separately encapsulating each of the reconfigurable logic regions. The host logic can include a plurality of data path functions where each data path function can include a layer for formatting data transfers between a host interface and the application logic of a corresponding reconfigurable logic region. The host interface can be configured to apportion bandwidth of the data transfers generated by the application logic of the respective reconfigurable logic regions.
    Type: Application
    Filed: March 21, 2019
    Publication date: July 11, 2019
    Applicant: Amazon Technologies, Inc.
    Inventors: Asif Khan, Islam Mohamed Hatem Abdulfattah Mohamed Atta, Robert Michael Johnson, Mark Bradley Davis, Christopher Joseph Pettey, Nafea Bshara, Erez Izenberg
  • Publication number: 20190215021
    Abstract: Systems and methods in accordance with various embodiments of the present disclosure provide approaches for mapping entries to a cache using a function, such as cyclic redundancy check (CRC). The function can calculate a colored cache index based on a main memory address. The function may cause consecutive address cache indexes to be spread throughout the cache according to the indexes calculated by the function. In some embodiments, each data context may be associated with a different function, enabling different types of packets to be processed while sharing the same cache, reducing evictions of other data contexts and improving performance. Various embodiments can identify a type of packet as the packet is received, and look up a mapping function based on the type of packet. The function can then be used to look up the corresponding data context for the packet from the cache, for processing the packet.
    Type: Application
    Filed: January 7, 2019
    Publication date: July 11, 2019
    Inventors: Ofer Frishman, Erez Izenberg, Guy Nakibly
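
A brief Python sketch of the CRC-based cache "coloring" from publication 20190215021 (granted as patent 10177795, below). `zlib.crc32` stands in for the hardware CRC, and the cache geometry and per-context seed are assumptions.

```python
# Illustrative model of CRC-based cache index coloring (publication
# 20190215021 / patent 10177795). zlib.crc32 stands in for the hardware
# CRC; the cache geometry is made up.
import zlib

NUM_CACHE_SETS = 256

def colored_index(address: int, context_seed: int) -> int:
    """Map a memory address to a cache set.

    A plain modulo of consecutive addresses lands neighbours in
    neighbouring sets; running the address through a CRC spreads them
    across the cache. A different seed per data context gives each packet
    type its own mapping, so contexts evict each other less often.
    """
    data = address.to_bytes(8, "little")
    return zlib.crc32(data, context_seed) % NUM_CACHE_SETS

# Consecutive addresses scatter across sets instead of clustering:
print([colored_index(base, context_seed=1) for base in range(0, 8 * 64, 64)])
```
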
  • Patent number: 10320956
    Abstract: A system, comprising: a configurable parser that comprises one or more configurable parsing engines, wherein the configurable parser is arranged to receive a packet and to extract from the packet headers associated with a set of protocols that comprises at least one protocol; a packet type detection unit that is arranged to determine a type of the packet in response to the set of protocols; and a configurable data integrity unit that comprises a configuration unit and at least one configurable data integrity engine; wherein the configuration unit is arranged to configure the at least one configurable data integrity engine according to the set of protocols; and wherein the at least one configurable data integrity engine is arranged to perform data integrity processing of the packet to provide at least one data integrity result.
    Type: Grant
    Filed: January 11, 2015
    Date of Patent: June 11, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Ofer Naaman, Erez Izenberg, Nafea Bshara
  • Patent number: 10298496
    Abstract: A data or packet processing device such as a network interface controller may include cache control logic that is configured to receive a first request for processing a first data packet associated with a queue identifier, and obtain a set of memory descriptors associated with the queue identifier from the memory. The set of descriptors can be stored in the cache. When a second request for processing a second data packet associated with the queue identifier is received, the cache control logic can determine that the cache is storing memory descriptors for processing the second data packet, and provide the memory descriptors used for processing the second packet.
    Type: Grant
    Filed: September 26, 2017
    Date of Patent: May 21, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Guy Nakibly, Benzi Denkberg, Erez Izenberg, Nafea Bshara, Uri Leder, Ofer Frishman
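
The descriptor caching in patent 10298496 can be sketched roughly as below; the descriptor format and the host-memory fetch are placeholders.

```python
# Hypothetical model of the descriptor cache in patent 10298496. The
# "host memory" fetch and the descriptor format are stand-ins.

descriptor_cache: dict[int, list] = {}

def fetch_descriptors_from_host(queue_id: int) -> list:
    # Placeholder for a DMA read of the queue's descriptor ring.
    return [f"desc-{queue_id}-{i}" for i in range(4)]

def descriptors_for(queue_id: int) -> list:
    """Return descriptors for a queue, fetching from host memory on a miss."""
    if queue_id not in descriptor_cache:          # first packet on this queue
        descriptor_cache[queue_id] = fetch_descriptors_from_host(queue_id)
    return descriptor_cache[queue_id]             # later packets hit the cache

descriptors_for(7)   # miss: fetched from host memory
descriptors_for(7)   # hit: served from the cache
```
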
  • Patent number: 10282330
    Abstract: The following description is directed to a configurable logic platform. In one example, a configurable logic platform includes host logic and a plurality of reconfigurable logic regions. Each reconfigurable region can include hardware that is configurable to implement an application logic design. The host logic can be used for separately encapsulating each of the reconfigurable logic regions. The host logic can include a plurality of data path functions where each data path function can include a layer for formatting data transfers between a host interface and the application logic of a corresponding reconfigurable logic region. The host interface can be configured to apportion bandwidth of the data transfers generated by the application logic of the respective reconfigurable logic regions.
    Type: Grant
    Filed: September 29, 2016
    Date of Patent: May 7, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Asif Khan, Islam Mohamed Hatem Abdulfattah Mohamed Atta, Robert Michael Johnson, Mark Bradley Davis, Christopher Joseph Pettey, Nafea Bshara, Erez Izenberg
  • Patent number: 10243865
    Abstract: A network device includes (i) a software forwarding engine, and (ii) a hardware forwarding engine, wherein the software forwarding engine is implemented using a processor executing machine readable instructions. The network device analyzes a header of a received packet to determine i) whether the received packet belongs to any flows of packets already known to the network device, and ii) a packet type of the received packet. The network device selects one of the software forwarding engine or the hardware forwarding engine to process the received packet based on i) whether the received packet belongs to any flows of packets already known to the network device, and ii) the determined packet type, including selecting the software forwarding engine when it is determined that the received packet does not belong to any flow of packets already known to the network device.
    Type: Grant
    Filed: March 9, 2017
    Date of Patent: March 26, 2019
    Assignee: Marvell Israel (M.I.S.L) Ltd.
    Inventors: Erez Izenberg, Alon Pais, Ruven Torok, Dimitry Melts, Yuval Caduri, Dmitri Epshtein
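
The engine-selection rule in patent 10243865 amounts to a small decision function; in the sketch below, the flow table, packet types, and hardware-capability set are illustrative assumptions.

```python
# Toy model of the forwarding-engine selection in patent 10243865. Flow
# keys, packet types, and the engines themselves are illustrative.

known_flows: set[tuple] = set()
HW_CAPABLE_TYPES = {"ipv4-unicast", "ipv6-unicast"}

def select_engine(flow_key: tuple, packet_type: str) -> str:
    # Unknown flows always go to the software engine, which can then
    # install the flow so later packets are forwarded in hardware.
    if flow_key not in known_flows:
        known_flows.add(flow_key)
        return "software"
    # Known flows are forwarded in hardware when the packet type allows it.
    return "hardware" if packet_type in HW_CAPABLE_TYPES else "software"

print(select_engine(("10.0.0.1", "10.0.0.2", 80), "ipv4-unicast"))  # software
print(select_engine(("10.0.0.1", "10.0.0.2", 80), "ipv4-unicast"))  # hardware
```
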
  • Patent number: 10228869
    Abstract: Techniques for controlling access to shared resources may include receiving multiple requests to access shared information associated with an identifier. For each of the requests, an entry in a linked list can be allocated to the request, and each entry can be associated with the identifier. The shared information associated with the identifier can be retrieved, and stored in each entry associated with the identifier. A conflict indicator is set in each entry to indicate whether the shared information is available for the request corresponding to the entry. The shared information stored in each entry is provided for each request after the conflict indicator in the corresponding entry indicates the shared information is available for the request.
    Type: Grant
    Filed: September 26, 2017
    Date of Patent: March 12, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Guy Nakibly, Benzi Denkberg, Ofer Frishman, Erez Izenberg, Uri Leder, Nafea Bshara
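
A toy Python model of the linked-list entries and conflict indicators described in patent 10228869; the entry fields and release ordering are assumptions for illustration.

```python
# Illustrative model of the shared-information access scheme in patent
# 10228869: each request gets a linked-list entry, and a conflict flag
# says whether the shared data is ready for that request yet.
from collections import deque

class Entry:
    def __init__(self, request_id: int):
        self.request_id = request_id
        self.shared_info = None
        self.conflict = True      # not yet safe to hand out

pending: dict[str, deque] = {}

def request_shared(identifier: str, request_id: int) -> Entry:
    entry = Entry(request_id)
    pending.setdefault(identifier, deque()).append(entry)
    return entry

def shared_info_arrived(identifier: str, info: bytes) -> None:
    # Store the fetched shared information in every waiting entry and
    # clear its conflict indicator, releasing the requests in order.
    for entry in pending.pop(identifier, deque()):
        entry.shared_info = info
        entry.conflict = False

e1 = request_shared("ctx-42", 1)
e2 = request_shared("ctx-42", 2)
shared_info_arrived("ctx-42", b"state")
assert not e1.conflict and e2.shared_info == b"state"
```
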
  • Patent number: 10223317
    Abstract: The following description is directed to a configurable logic platform. In one example, a configurable logic platform includes host logic and a reconfigurable logic region. The reconfigurable logic region can include logic blocks that are configurable to implement application logic. The host logic can be used for encapsulating the reconfigurable logic region. The host logic can include a host interface for communicating with a processor. The host logic can include a management function accessible via the host interface. The management function can be adapted to cause the reconfigurable logic region to be configured with the application logic in response to an authorized request from the host interface. The host logic can include a data path function accessible via the host interface. The data path function can include a layer for formatting data transfers between the host interface and the application logic.
    Type: Grant
    Filed: September 28, 2016
    Date of Patent: March 5, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Islam Atta, Christopher Joseph Pettey, Asif Khan, Robert Michael Johnson, Mark Bradley Davis, Erez Izenberg, Nafea Bshara, Kypros Constantinides
  • Patent number: 10218576
    Abstract: Technologies for performing controlled bandwidth expansion are described. For example, a storage server can receive a request from a client to read compressed data. The storage server can obtain individual storage units of the compressed data. The storage server can also obtain a compressed size and an uncompressed size for each of the storage units. The storage server can generate network packet content comprising the storage units and associated padding such that the size of the padding for a given storage unit is based on the uncompressed and compressed sizes of the given storage unit. The storage server can send the network packet content to the client in one or more network packets. The client can receive the network packets, discard the padding, and decompress the compressed data from the storage units.
    Type: Grant
    Filed: August 23, 2018
    Date of Patent: February 26, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Ron Diamant, Leah Shalev, Nafea Bshara, Erez Izenberg
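
The padding rule in patent 10218576 can be shown directly: each compressed storage unit is padded out toward its uncompressed size before being sent, so the client's receive bandwidth stays predictable. The unit sizes and padding byte below are made up.

```python
# Illustrative model of controlled bandwidth expansion (patent 10218576).

def build_packet_content(units: list[tuple[bytes, int]]) -> bytes:
    """units is a list of (compressed_bytes, uncompressed_size) pairs."""
    out = bytearray()
    for compressed, uncompressed_size in units:
        padding = b"\x00" * max(0, uncompressed_size - len(compressed))
        out += compressed + padding   # the client discards the padding later
    return bytes(out)

content = build_packet_content([(b"\x1f\x8b" + b"x" * 100, 4096)])
print(len(content))  # 4096: compressed unit plus padding
```
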
  • Patent number: 10212138
    Abstract: A hardware security accelerator includes a configurable parser that is configured to receive a packet and to extract from the packet headers associated with a set of protocols. The security accelerator also includes a packet type detection unit to determine a type of the packet in response to the set of protocols and to generate a packet type identifier indicative of the type of the packet. A configurable security unit includes a configuration unit and a configurable security engine. The configuration unit configures the configurable security engine according to the type of the packet and to content of at least one of the headers extracted from the packet. The configurable security engine performs security processing of the packet to provide at least one security result.
    Type: Grant
    Filed: December 28, 2015
    Date of Patent: February 19, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Ron Diamant, Nafea Bshara, Leah Shalev, Erez Izenberg
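
A hypothetical Python sketch of how the configurable security unit in patent 10212138 might pick an engine configuration from the packet type and header content; the packet types, engine names, and SPI-based key selection are assumptions.

```python
# Hypothetical model of the configurable security unit in patent 10212138:
# the packet type (plus header content) selects how the security engine is
# configured before it processes the packet. Types and modes are made up.

SECURITY_CONFIG = {
    "ipsec-esp": {"engine": "aes-gcm", "key_slot": 0},
    "macsec":    {"engine": "aes-gcm", "key_slot": 1},
    "tls":       {"engine": "aes-cbc-hmac", "key_slot": 2},
}

def configure_and_process(packet_type: str, headers: dict, payload: bytes) -> dict:
    cfg = dict(SECURITY_CONFIG[packet_type])
    # Header content can refine the configuration, e.g. pick the key slot.
    if "spi" in headers:
        cfg["key_slot"] = headers["spi"] % 4
    # Placeholder "security result": report what would have been run.
    return {"config": cfg, "bytes_processed": len(payload)}

print(configure_and_process("ipsec-esp", {"spi": 7}, b"ciphertext"))
```
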
  • Patent number: 10185678
    Abstract: Methods and apparatuses for offloading functionality in an integrated circuit are presented. Certain embodiments are described that disclose methods pertaining to implementation of a universal offload engine that can service several functional blocks, each configured to perform a different function. The offload engine can be iteratively implemented with a common interface to functional blocks. Work descriptors can be used between DMA engines and corresponding functional blocks to instruct the DMA engines how to transport data between memory locations and/or to reformat the data.
    Type: Grant
    Filed: November 30, 2015
    Date of Patent: January 22, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Gil Stoler, Erez Izenberg
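
One way to picture the common interface in patent 10185678 is a shared work-descriptor format submitted to per-block DMA engines, as in the sketch below; every field name is hypothetical.

```python
# Illustrative work-descriptor format for a shared offload engine
# (patent 10185678). Field names and opcodes are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class WorkDescriptor:
    src_addr: int                   # where the DMA engine reads from
    dst_addr: int                   # where it writes to
    length: int
    opcode: str                     # which offload function to run, e.g. "copy" or "crc"
    reformat: Optional[str] = None  # optional in-flight data transformation

def submit(queue: list, desc: WorkDescriptor) -> None:
    """Functional blocks share one descriptor format and one submission
    path, so the same offload engine can be instantiated per block."""
    queue.append(desc)

q: list = []
submit(q, WorkDescriptor(src_addr=0x1000, dst_addr=0x2000, length=512, opcode="copy"))
print(q[0])
```
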
  • Publication number: 20190020538
    Abstract: A resource manager of a virtualized computing service indicates to a client that FPGA-enabled compute instances are supported at the service. From a set of virtualization hosts of the service, a particular host from which an FPGA is accessible is selected for the client based on an indication of computation objectives of the client. Configuration operations are performed to prepare the host for the application, and an FPGA-enabled compute instance is launched at the host for the client.
    Type: Application
    Filed: August 31, 2018
    Publication date: January 17, 2019
    Applicant: Amazon Technologies, Inc.
    Inventors: Erez Izenberg, Nafea Bshara, Christopher Pettey, Curtis Karl Ohrt
  • Patent number: 10177795
    Abstract: Systems and methods in accordance with various embodiments of the present disclosure provide approaches for mapping entries to a cache using a function, such as cyclic redundancy check (CRC). The function can calculate a colored cache index based on a main memory address. The function may cause consecutive address cache indexes to be spread throughout the cache according to the indexes calculated by the function. In some embodiments, each data context may be associated with a different function, enabling different types of packets to be processed while sharing the same cache, reducing evictions of other data contexts and improving performance. Various embodiments can identify a type of packet as the packet is received, and look up a mapping function based on the type of packet. The function can then be used to look up the corresponding data context for the packet from the cache, for processing the packet.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: January 8, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Ofer Frishman, Erez Izenberg, Guy Nakibly
  • Patent number: 10103992
    Abstract: Disclosed herein are techniques for classifying input network packets evenly into a plurality of classes. An apparatus includes an input port configured to receive a plurality of network packets. The apparatus also includes processing logic configured to receive the plurality of network packets from the input port and classify each packet of the plurality of network packets. For each packet, whether a condition is met is determined, a most recently used hash operation is selected when the condition is not met or a new hash operation is selected when the condition is met; and the selected hash operation is performed on the packet using at least a portion of the packet as an input value to classify the packet. The most recently used hash operation and the new hash operation are configured to classify packets having the same input value into different classes.
    Type: Grant
    Filed: June 27, 2016
    Date of Patent: October 16, 2018
    Assignee: Amazon Technologies, Inc.
    Inventors: Nafea Bshara, Erez Izenberg, Said Bshara, Brian William Barrett
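
The rebalancing idea in patent 10103992 can be sketched as switching the active hash when a condition is met, so packets with the same input value can map to a different class afterward; the condition, the SHA-256-based hash, and the class count below are all placeholders.

```python
# Illustrative model of patent 10103992: when a rebalancing condition is
# met, switch to a new hash so packets with the same key can land in a
# different class. The condition and the hashes are stand-ins.
import hashlib

NUM_CLASSES = 8

def make_hash(salt: bytes):
    return lambda key: int.from_bytes(
        hashlib.sha256(salt + key).digest()[:4], "big") % NUM_CLASSES

current_hash = make_hash(b"seed-0")
generation = 0

def classify(key: bytes, condition_met: bool) -> int:
    global current_hash, generation
    if condition_met:                      # e.g. a class became overloaded
        generation += 1
        current_hash = make_hash(f"seed-{generation}".encode())
    return current_hash(key)

print(classify(b"flow-a", condition_met=False))
print(classify(b"flow-a", condition_met=True))   # likely a different class
```
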