Patents by Inventor Nafea Bshara

Nafea Bshara has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10521377
    Abstract: A first write transaction that includes a transaction identifier and a memory location identifier is received by a device. The memory location identifier identifies a register or a memory location of the device. A value is read from the register or memory location. A second write transaction is sent to a block of host memory. The second write transaction includes the value and the transaction identifier.
    Type: Grant
    Filed: November 20, 2018
    Date of Patent: December 31, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Georgy Machulsky, Nafea Bshara, Netanel Israel Belgazal, Said Bshara, Evgeny Schmeilin
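A minimal Python sketch of the flow described in the entry above: a device receives a first write carrying a transaction identifier and a memory-location identifier, reads the addressed register, and pushes the value plus the transaction identifier back to host memory. The Device class, register map, and host-memory layout are illustrative assumptions, not details from the patent.

```python
# Sketch of the read-over-write flow; all names are illustrative placeholders.

class Device:
    def __init__(self, registers, host_memory):
        self.registers = registers        # register/memory-location id -> value
        self.host_memory = host_memory    # block of host memory (dict as a stand-in)

    def handle_first_write(self, transaction_id, location_id):
        # The first write transaction carries a transaction identifier and a
        # memory location identifier.
        value = self.registers[location_id]      # read the register / memory location
        # Send a "second write transaction" containing the value and the
        # original transaction identifier to a block of host memory.
        self.host_memory[transaction_id] = value


if __name__ == "__main__":
    host_mem = {}
    dev = Device(registers={0x10: 0xDEADBEEF}, host_memory=host_mem)
    dev.handle_first_write(transaction_id=42, location_id=0x10)
    print(hex(host_mem[42]))   # 0xdeadbeef
```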
  • Publication number: 20190384710
    Abstract: A method for writing data, the method may include: receiving or generating, by an interfacing module, a data unit coherent write request for performing a coherent write operation of a data unit to a first address; receiving, by the interfacing module and from a circuit that comprises a cache and a cache controller, a cache coherency indicator that indicates that a most updated version of the content stored at the first address is stored in the cache; and instructing, by the interfacing module, the cache controller to invalidate the cache line that stores the most updated version of the content stored at the first address, without sending that content from the cache to a memory module that differs from the cache, if a length of the data unit equals a length of the cache line.
    Type: Application
    Filed: August 23, 2018
    Publication date: December 19, 2019
    Inventors: Adi Habusha, Gil Stoler, Said Bshara, Nafea Bshara
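The rule in the abstract above can be shown with a hedged sketch: when the cache holds the newest copy and the incoming data unit covers an entire cache line, the line is invalidated without a write-back, because the write overwrites every byte of it anyway. The line size and the action names are assumptions for illustration.

```python
CACHE_LINE_BYTES = 64  # assumed cache-line length

def handle_coherent_write(data_unit_len, line_is_cached_dirty):
    """Return which coherency action the interfacing module should request.

    Mirrors the rule in the abstract: when the most up-to-date copy is in the
    cache and the incoming data unit covers a whole cache line, the line is
    invalidated without being written back to memory.
    """
    if not line_is_cached_dirty:
        return "write-through"                 # nothing newer in the cache
    if data_unit_len == CACHE_LINE_BYTES:
        return "invalidate-without-writeback"  # full-line write makes the cached copy stale
    return "flush-then-write"                  # partial write still needs the old bytes

assert handle_coherent_write(64, True) == "invalidate-without-writeback"
assert handle_coherent_write(32, True) == "flush-then-write"
```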
  • Patent number: 10509758
    Abstract: Provided are systems and methods for hot-plugging emulated peripheral devices (e.g., endpoints) into host devices that either have a hypervisor that does not support virtualized peripheral device or that do not include a hypervisor. In various implementations, a configurable peripheral device can emulate a switch that includes upstream ports and downstream ports. When a new endpoint device is requested, the configurable peripheral device can, using an emulation configuration for the new endpoint device, generate an emulation for the new endpoint device. The configurable peripheral device can connect the endpoint device to a downstream port, and then trigger a hot-plug mechanism, through which the host device can add the new endpoint device to the known hardware of the host device.
    Type: Grant
    Filed: September 28, 2017
    Date of Patent: December 17, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Adi Habusha, Georgy Zorik Machulsky, Nafea Bshara, Tal Zilcer
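The hot-plug flow in the entry above can be sketched as follows: the configurable peripheral device emulates a switch, builds a new emulated endpoint from an emulation configuration, attaches it to a free downstream port, and then raises the hot-plug notification that makes the host enumerate the new device. Class and method names are placeholders, not the patent's.

```python
class EmulatedSwitch:
    def __init__(self, num_downstream_ports=4):
        self.downstream_ports = [None] * num_downstream_ports
        self.host_known_devices = []          # stands in for the host's device tree

    def add_endpoint(self, emulation_config):
        # Generate the emulation for the requested endpoint.
        endpoint = {"name": emulation_config["name"],
                    "class": emulation_config["device_class"]}
        port = self.downstream_ports.index(None)     # pick a free downstream port
        self.downstream_ports[port] = endpoint       # connect the endpoint
        self.trigger_hotplug(port)                   # notify the host
        return port

    def trigger_hotplug(self, port):
        # The host-side hot-plug handler would enumerate the new device here.
        self.host_known_devices.append(self.downstream_ports[port])

switch = EmulatedSwitch()
switch.add_endpoint({"name": "emulated-nvme-0", "device_class": "storage"})
print(switch.host_known_devices)
```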
  • Patent number: 10509764
    Abstract: Apparatus and methods are disclosed herein for remote, direct memory access (RDMA) technology that enables direct memory access from one host computer memory to another host computer memory over a physical or virtual computer network according to a number of different RDMA protocols. In one example, a method includes receiving remote direct memory access (RDMA) packets via a network adapter, deriving a protocol index identifying an RDMA protocol used to encode data for an RDMA transaction associated with the RDMA packets, applying the protocol index to generate RDMA commands from header information in at least one of the received RDMA packets, and performing an RDMA operation using the RDMA commands.
    Type: Grant
    Filed: May 25, 2016
    Date of Patent: December 17, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Erez Izenberg, Leah Shalev, Nafea Bshara, Guy Nakibly, Georgy Machulsky
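An illustrative sketch of the protocol-index dispatch described above: the adapter derives an index identifying which RDMA protocol encoded the packet, then uses that index to turn header fields into generic RDMA commands. The protocol decoders and header field names below are placeholders, not the patent's formats.

```python
def decode_example_protocol_a(header):
    return {"op": "RDMA_WRITE", "addr": header["raddr"], "len": header["dma_len"]}

def decode_example_protocol_b(header):
    return {"op": "RDMA_READ", "addr": header["remote_va"], "len": header["length"]}

# protocol index -> decoder that produces RDMA commands from header information
PROTOCOL_TABLE = {
    0: decode_example_protocol_a,
    1: decode_example_protocol_b,
}

def handle_rdma_packet(packet):
    protocol_index = packet["proto_idx"]          # derived from the received packet
    decoder = PROTOCOL_TABLE[protocol_index]      # apply the index to pick a decoder
    command = decoder(packet["header"])           # generate RDMA commands from the header
    return command                                # would then be executed as an RDMA op

print(handle_rdma_packet({"proto_idx": 0,
                          "header": {"raddr": 0x1000, "dma_len": 4096}}))
```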
  • Patent number: 10498654
    Abstract: Disclosed herein is a method including receiving, from a user application, data to be transmitted from a source address to a destination address using a single connection through a network; and splitting the data into a plurality of packets according to a communication protocol. For each packet of the plurality of packets, a respective flowlet for the packet to be transmitted in is determined from a plurality of flowlets; a field in the packet used by a network switch of the network to route the packet is set based on the determined flowlet for the packet; and the packet is sent via the determined flowlet for transmitting through the network.
    Type: Grant
    Filed: December 28, 2015
    Date of Patent: December 3, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Leah Shalev, Nafea Bshara, Georgy Machulsky, Brian William Barrett
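A short sketch of the flowlet idea in the abstract above: one logical connection is split across several flowlets, and a routing-relevant header field (here a hypothetical per-flowlet source port) is set per packet so that network switches hash different flowlets onto different paths. The field names, port range, and round-robin policy are assumptions for illustration.

```python
NUM_FLOWLETS = 4
BASE_SRC_PORT = 49152   # assumed per-flowlet port range

def split_into_packets(data, mtu=1500):
    return [data[i:i + mtu] for i in range(0, len(data), mtu)]

def pick_flowlet(seq_no):
    # Simplified policy: round-robin; a real implementation could weigh
    # congestion feedback or in-flight bytes per flowlet instead.
    return seq_no % NUM_FLOWLETS

def send(data, src, dst):
    packets = []
    for seq_no, payload in enumerate(split_into_packets(data)):
        flowlet = pick_flowlet(seq_no)
        packets.append({
            "src": src, "dst": dst, "seq": seq_no,
            "src_port": BASE_SRC_PORT + flowlet,   # field switches use to route
            "payload": payload,
        })
    return packets

pkts = send(b"x" * 4000, src="10.0.0.1", dst="10.0.0.2")
print([p["src_port"] for p in pkts])   # different flowlets -> different paths
```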
  • Publication number: 20190364136
    Abstract: A system, comprising: a configurable parser that comprises one or more configurable parsing engines, wherein the configurable parser is arranged to receive a packet and to extract from the packet headers associated with a set of protocols that comprises at least one protocol; a packet type detection unit that is arranged to determine a type of the packet in response to the set of protocols; and a configurable data integrity unit that comprises a configuration unit and at least one configurable data integrity engine; wherein the configuration unit is arranged to configure the at least one configurable data integrity engine according to the set of protocols; and wherein the at least one configurable data integrity engine is arranged to perform data integrity processing of the packet to provide at least one data integrity result.
    Type: Application
    Filed: June 7, 2019
    Publication date: November 28, 2019
    Inventors: Ofer Naaman, Erez Izenberg, Nafea Bshara
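A hedged sketch of the pipeline above: a configurable parser extracts the protocol set from a packet, the packet type is derived from that set, and a data-integrity engine is configured accordingly (for example, which checksum to run over which bytes). The protocol names, the one-byte "protocol stack" encoding, and the checksum choices are purely illustrative.

```python
import zlib

def parse_protocols(packet: bytes):
    # Stand-in for the configurable parsing engines: pretend the first byte
    # encodes the protocol stack of the packet.
    return {0: ["eth", "ipv4", "udp"], 1: ["eth", "ipv4", "tcp"]}[packet[0]]

def detect_packet_type(protocols):
    return "/".join(protocols)

def configure_integrity_engine(protocols):
    # Configuration unit: pick an integrity function per protocol set.
    if "udp" in protocols:
        return lambda pkt: zlib.crc32(pkt[1:])        # CRC over the payload
    return lambda pkt: sum(pkt[1:]) & 0xFFFF          # simpler additive checksum

def process(packet: bytes):
    protocols = parse_protocols(packet)
    packet_type = detect_packet_type(protocols)
    integrity = configure_integrity_engine(protocols)
    return packet_type, integrity(packet)             # at least one integrity result

print(process(bytes([0]) + b"payload"))
```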
  • Publication number: 20190363989
    Abstract: Disclosed herein is a method including receiving, from a user application, data to be transmitted from a source address to a destination address using a single connection through a network; and splitting the data into a plurality of packets according to a communication protocol. For each packet of the plurality of packets, a respective flowlet for the packet to be transmitted in is determined from a plurality of flowlets; a field in the packet used by a network switch of the network to route the packet is set based on the determined flowlet for the packet; and the packet is sent via the determined flowlet for transmitting through the network.
    Type: Application
    Filed: August 13, 2019
    Publication date: November 28, 2019
    Inventors: Leah Shalev, Nafea Bshara, Georgy Machulsky, Brian William Barrett
  • Patent number: 10489302
    Abstract: An emulated input/output memory management unit (IOMMU) includes a management processor to perform page table translation in software. The emulated IOMMU can also include a hardware input/output translation lookaside buffer (IOTLB) to store translations between virtual addresses and physical memory addresses. When a translation from a virtual address to a physical address is not found in the IOTLB for an I/O request, the translation can be generated by the management processor using page tables from a memory and can be stored in the IOTLB. Some embodiments can be used to emulate interrupt translation service for message based interrupts for an interrupt controller.
    Type: Grant
    Filed: April 27, 2018
    Date of Patent: November 26, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Adi Habusha, Leah Shalev, Nafea Bshara
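A minimal sketch of the emulated-IOMMU flow above: a hardware IOTLB caches virtual-to-physical translations, and on a miss the management processor fills the entry by walking page tables in software. The flat-dict page-table layout and the 4 KiB page size are assumptions for illustration.

```python
PAGE_SHIFT = 12
PAGE_MASK = (1 << PAGE_SHIFT) - 1

class EmulatedIommu:
    def __init__(self, page_tables):
        self.iotlb = {}                 # hardware IOTLB: virtual page -> physical page
        self.page_tables = page_tables  # in-memory page tables walked in software

    def translate(self, io_virtual_addr):
        vpage = io_virtual_addr >> PAGE_SHIFT
        if vpage not in self.iotlb:                     # IOTLB miss
            ppage = self.page_tables[vpage]             # software page-table walk
            self.iotlb[vpage] = ppage                   # install translation in IOTLB
        return (self.iotlb[vpage] << PAGE_SHIFT) | (io_virtual_addr & PAGE_MASK)

iommu = EmulatedIommu(page_tables={0x40: 0x90})
print(hex(iommu.translate(0x40123)))   # miss, filled by the "management processor"
print(hex(iommu.translate(0x40456)))   # hit in the IOTLB
```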
  • Patent number: 10474359
    Abstract: Disclosed herein are techniques for reducing the number of write operations performed to a storage-class memory in a virtualized environment. In one embodiment, when a memory page is de-allocated from a virtual machine, the memory page and/or the subpages of the memory page are marked as “trimmed” in a control table such that any read to the memory page or subpages is denied, and no physical memory initialization is performed to the memory page or subpages. A de-allocated memory page or subpage is only initialized when it is reallocated and is to be written to by the virtual machine to which the memory page is reallocated.
    Type: Grant
    Filed: February 28, 2017
    Date of Patent: November 12, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Thomas A. Volpe, Nafea Bshara
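The "trim" bookkeeping described above can be sketched as follows: de-allocated pages are only marked trimmed in a control table; reads of trimmed or not-yet-written pages are refused, and no physical initialization of the storage-class memory happens until the page is reallocated and actually written. The control-table states are an assumption for illustration.

```python
class ScmAllocator:
    def __init__(self):
        self.control = {}    # page -> "trimmed" | "needs-init" | "ready"
        self.scm = {}        # page -> contents (stand-in for storage-class memory)

    def deallocate(self, page):
        self.control[page] = "trimmed"        # bookkeeping only: no zeroing write

    def reallocate(self, page):
        self.control[page] = "needs-init"     # still no physical write

    def read(self, page):
        if self.control.get(page) != "ready":
            raise PermissionError("read of trimmed/uninitialized page denied")
        return self.scm[page]

    def write(self, page, data):
        if self.control.get(page) == "trimmed":
            raise PermissionError("page not allocated to this VM")
        self.scm[page] = data                 # first write also initializes the page
        self.control[page] = "ready"

alloc = ScmAllocator()
alloc.reallocate(7)                 # page handed to a VM; not yet initialized
try:
    alloc.read(7)
except PermissionError as e:
    print(e)                        # denied: nothing has been written yet
alloc.write(7, b"vm data")
print(alloc.read(7))                # b'vm data'
alloc.deallocate(7)                 # marked trimmed only; no zeroing pass
```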
  • Patent number: 10459875
    Abstract: According to an embodiment of the invention there may be provided a method for hybrid remote direct memory access (RDMA), the method may include: (i) receiving, by a first computer, a packet that was sent over a network from a second computer; wherein the packet may include data and metadata; (ii) determining, in response to the metadata, whether the data should be (a) directly written to a first application memory of the first computer by a first hardware accelerator of the first computer; or (b) indirectly written to the first application memory; (iii) indirectly writing the data to the first application memory if it is determined that the data should be indirectly written to the first application memory; (iv) if it is determined that the data should be directly written to the first application memory then: (iv.a) directly writing, by the first hardware accelerator, the data to the first application memory without writing the data to any buffer of the operating system; and (iv.
    Type: Grant
    Filed: November 23, 2016
    Date of Patent: October 29, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Erez Izenberg, Leah Shalev, Georgy Machulsky, Nafea Bshara
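A sketch of the hybrid-RDMA receive decision above: packet metadata tells the receiver whether the payload may be placed directly into application memory by the hardware accelerator, or must take the indirect path through an operating-system buffer first. The metadata flag and field names are assumptions.

```python
def receive(packet, app_memory, os_buffer):
    data, metadata = packet["data"], packet["metadata"]
    if metadata.get("direct_placement"):               # decided from the metadata
        offset = metadata["app_offset"]
        app_memory[offset:offset + len(data)] = data   # direct write, no OS buffer
        return "direct"
    os_buffer.append(data)                             # indirect: staged in an OS buffer,
    return "indirect"                                  # copied to the application later

app_mem = bytearray(16)
os_buf = []
print(receive({"data": b"abcd",
               "metadata": {"direct_placement": True, "app_offset": 4}},
              app_mem, os_buf))
print(app_mem)
```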
  • Patent number: 10454831
    Abstract: Forwarding of network packets generated by a networking device may be load-balanced. Network packets may be generated by a networking device, such as a packet processor, that also processes and forwards received network packets. Forwarding decisions for the generated network packets may be made according to a load balancing scheme among possible forwarding routes from the networking device. In at least some embodiments, a destination resolution pipeline for determining forwarding decisions for generated network packets may be implemented separate from a destination resolution pipeline for determining forwarding decisions for received network packets in order to determine different forwarding decisions for the generated network packets. The generated network packets may then be forwarded according to the determined forwarding decisions.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: October 22, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Thomas A. Volpe, Asif Khan, Nafea Bshara
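A small illustrative sketch of the entry above: forwarding decisions for packets the device itself generates are drawn from the candidate routes according to a load-balancing scheme. The round-robin policy and route names below are placeholders; the patent does not prescribe this particular scheme.

```python
import itertools

CANDIDATE_ROUTES = ["port-1", "port-2", "port-3"]
_rr = itertools.cycle(range(len(CANDIDATE_ROUTES)))

def forward_generated_packet(packet):
    # Separate decision path from received traffic: generated packets get their
    # own load-balanced route selection.
    route = CANDIDATE_ROUTES[next(_rr)]
    return {"packet": packet, "egress": route}

print([forward_generated_packet(f"keepalive-{i}")["egress"] for i in range(4)])
```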
  • Patent number: 10437748
    Abstract: Apparatus, methods, and computer-readable storage media are disclosed for core-to-core communication between physical and/or virtual processor cores. In some examples of the disclosed technology, application cores write notification data (e.g., via doorbell or PCI configuration memory space accesses through a memory interface) without synchronizing with the other application cores or the service cores. In one example of the disclosed technology, a message selection circuit is configured to serialize data from the plurality of user cores by: receiving data from a user core, selecting one of the service cores to send the data to based on a memory location addressed by the sending user core, and sending the received data to a respective message buffer dedicated to the selected service core.
    Type: Grant
    Filed: December 29, 2015
    Date of Patent: October 8, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Leah Shalev, Adi Habusha, Georgy Machulsky, Nafea Bshara, Eric Jason Brandwine
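A hedged sketch of the message-selection step described above: notification writes from application (user) cores are serialized and steered to a service core chosen from the doorbell address that was written, then appended to that service core's dedicated message buffer. The doorbell address map and stride are assumptions.

```python
from collections import deque

NUM_SERVICE_CORES = 2
DOORBELL_BASE = 0x1000_0000
DOORBELL_STRIDE = 0x1000          # one doorbell page per service core (assumed layout)

# one message buffer dedicated to each service core
message_buffers = [deque() for _ in range(NUM_SERVICE_CORES)]

def doorbell_write(user_core_id, address, data):
    # Select the service core from the memory location the user core addressed;
    # no synchronization among user cores is required for this step.
    service_core = (address - DOORBELL_BASE) // DOORBELL_STRIDE
    message_buffers[service_core].append((user_core_id, data))

doorbell_write(user_core_id=3, address=0x1000_0000, data="request-a")
doorbell_write(user_core_id=5, address=0x1000_1000, data="request-b")
print([list(buf) for buf in message_buffers])
```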
  • Patent number: 10430203
    Abstract: Disclosed are techniques regarding aspects of implementing client configurable logic within a computer system. The computer system can be a cloud infrastructure. The techniques can include providing an identifier in response to configuring client configurable logic within the computer system.
    Type: Grant
    Filed: August 4, 2017
    Date of Patent: October 1, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Robert Michael Johnson, Islam Mohamed Hatem Abdulfattah Mohamed Atta, Asif Khan, Nafea Bshara, Anthony Nicholas Liguori
  • Publication number: 20190294328
    Abstract: A peripheral device may implement storage virtualization for non-volatile storage devices connected to the peripheral device. A host system connected to the peripheral device may host one or multiple virtual machines. The peripheral device may implement different virtual interfaces for the virtual machines or the host system that present a storage partition at a non-volatile storage device to the virtual machine or host system for storage. Access requests from the virtual machines or host system are directed to the respective virtual interface at the peripheral device. The peripheral device may perform data encryption or decryption, or may perform throttling of access requests. The peripheral device may generate and send physical access requests to perform the access requests received via the virtual interfaces to the non-volatile storage devices. Completion of the access requests may be indicated to the virtual machines via the virtual interfaces.
    Type: Application
    Filed: June 7, 2019
    Publication date: September 26, 2019
    Applicant: Amazon Technologies, Inc.
    Inventors: Raviprasad Venkatesha Murthy Mummidi, Matthew Shawn Wilson, Anthony Nicholas Liguori, Nafea Bshara, Saar Gross, Jaspal Kohli
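A hedged sketch of the virtualized-storage data path above: each virtual machine gets a virtual interface that presents one partition of a physical non-volatile device; requests arriving on that interface are (optionally) encrypted and rewritten into physical requests against the backing partition, and a completion is signalled back on the same interface. The toy XOR "encryption" and all names are illustrative only.

```python
class VirtualInterface:
    def __init__(self, vm_id, device, partition_base, key):
        self.vm_id, self.device = vm_id, device
        self.partition_base, self.key = partition_base, key
        self.completions = []

    def write(self, lba, data):
        encrypted = bytes(b ^ self.key for b in data)        # toy per-VM encryption
        self.device[self.partition_base + lba] = encrypted   # physical access request
        self.completions.append(("write-done", lba))         # completion to the VM

    def read(self, lba):
        raw = self.device[self.partition_base + lba]
        self.completions.append(("read-done", lba))
        return bytes(b ^ self.key for b in raw)

physical_device = {}                                         # LBA -> block contents
vif_a = VirtualInterface("vm-a", physical_device, partition_base=0,    key=0x5A)
vif_b = VirtualInterface("vm-b", physical_device, partition_base=1000, key=0xC3)
vif_a.write(5, b"hello")
print(vif_a.read(5), vif_a.completions)
```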
  • Patent number: 10423438
    Abstract: In a multi-tenant environment, separate virtual machines can be used for configuring and operating different subsets of programmable integrated circuits, such as Field Programmable Gate Arrays (FPGAs). The programmable integrated circuits can communicate directly with each other within a subset, but cannot communicate between subsets. Generally, all of the subsets of programmable ICs are within a same host server computer within the multi-tenant environment, and are sandboxed or otherwise isolated from each other so that multiple customers can share the resources of the host server computer without knowledge of, or interference with, other customers.
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: September 24, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Islam Mohamed Hatem Abdulfattah Mohamed Atta, Mark Bradley Davis, Robert Michael Johnson, Christopher Joseph Pettey, Asif Khan, Nafea Bshara
  • Patent number: 10409744
    Abstract: A processor in a peripheral device can include a wait-for-event mechanism, through which the processor can enter low-power mode and be woken from low-power mode with an event. Using an event, rather than an interrupt, allows the processor to wake without the latency incurred by an interrupt handling routine. In various implementations, the processor may be configured to execute a sequence of instructions that include a wait-for-event instruction. The wait-for-event instruction can be called when the processor is idle. The wait-for-event instruction may initiate a low-power mode for the processor, wherein the processor suspends executing the sequence of instructions. The processor may further be configured to receive, at an event input, an event signal. The event signal may cause the processor to exit the low-power mode and to resume executing the sequence of instructions from the point at which the processor suspended executing the sequence of instructions.
    Type: Grant
    Filed: August 30, 2016
    Date of Patent: September 10, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Saar Gross, Said Bshara, Adi Habusha, Nafea Bshara, Ronen Shitrit
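A conceptual sketch of the wait-for-event flow above, modeled with a thread and an Event object standing in for the processor's event input: the "processor" suspends at the wait-for-event point when idle and resumes from that same spot when the event signal arrives, with no interrupt-handler detour. The work-queue model is an assumption for illustration.

```python
import threading

event_input = threading.Event()

def processor_loop(work_queue):
    while True:
        while work_queue:
            item = work_queue.pop(0)          # normal instruction stream
            if item == "stop":
                return
            print("processed", item)
        event_input.wait()                    # wait-for-event: enter low-power mode
        event_input.clear()                   # woken by an event; resume right here

work = ["pkt-1"]
t = threading.Thread(target=processor_loop, args=(work,))
t.start()
work.extend(["pkt-2", "stop"])                # produce more work...
event_input.set()                             # ...and signal the event input
t.join()
```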
  • Publication number: 20190273654
    Abstract: Techniques for reconfiguring a server to perform various hardware functions are disclosed herein. In one embodiment, a client device sends an instance request to a compute service system for launching an instance. The instance request indicates a resource requirement for the instance. In response to the instance request, the compute service system selects a server from among a plurality of servers in the compute service system based on determining that the server is configurable to at least partially meet the resource requirement. The compute service system then sends a provisioning request to the selected server. The provisioning request includes information for programming a reconfigurable resource of an adapter device in the selected server according to a particular hardware function.
    Type: Application
    Filed: May 21, 2019
    Publication date: September 5, 2019
    Inventors: Anthony Nicholas Liguori, Nafea Bshara
  • Patent number: 10404674
    Abstract: Efficient memory management can be provided in a multi-tenant virtualized environment by encrypting data to be written in memory by a virtual machine using a cryptographic key specific to the virtual machine. Encrypting data associated with multiple virtual machines using a cryptographic key unique to each virtual machine can minimize exposure of the data stored in the memory shared by the multiple virtual machines. Thus, some embodiments can eliminate write cycles to the memory that are generally used to initialize the memory before a virtual machine can write data to the memory if the memory was used previously by another virtual machine.
    Type: Grant
    Filed: February 28, 2017
    Date of Patent: September 3, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Nafea Bshara, Thomas A. Volpe, Adi Habusha, Yaniv Shapira
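A sketch of per-VM memory encryption as described above: each virtual machine's writes are encrypted with that VM's own key, so memory handed to a different VM yields only ciphertext it cannot interpret, and no zeroing pass is needed before reuse. The SHA-256-based XOR keystream below is a stand-in for illustration only, not the patent's (or any production) cipher.

```python
import hashlib, os

def keystream(key, length):
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "little")).digest()
        counter += 1
    return out[:length]

class EncryptedMemory:
    def __init__(self, size):
        self.cells = bytearray(os.urandom(size))     # whatever the last tenant left behind
        self.vm_keys = {}                            # VM id -> per-VM key

    def attach_vm(self, vm_id):
        self.vm_keys[vm_id] = os.urandom(32)

    def write(self, vm_id, offset, data):
        ks = keystream(self.vm_keys[vm_id], len(data))
        self.cells[offset:offset + len(data)] = bytes(a ^ b for a, b in zip(data, ks))

    def read(self, vm_id, offset, length):
        ks = keystream(self.vm_keys[vm_id], length)
        return bytes(a ^ b for a, b in zip(self.cells[offset:offset + length], ks))

mem = EncryptedMemory(64)
mem.attach_vm("vm-a"); mem.attach_vm("vm-b")
mem.write("vm-a", 0, b"secret")
print(mem.read("vm-a", 0, 6))   # b'secret'
print(mem.read("vm-b", 0, 6))   # garbage: vm-b's key cannot recover vm-a's data
```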
  • Publication number: 20190258597
    Abstract: The following description is directed to a configurable logic platform. In one example, a configurable logic platform includes host logic and a reconfigurable logic region. The reconfigurable logic region can include logic blocks that are configurable to implement application logic. The host logic can be used for encapsulating the reconfigurable logic region. The host logic can include a host interface for communicating with a processor. The host logic can include a management function accessible via the host interface. The management function can be adapted to cause the reconfigurable logic region to be configured with the application logic in response to an authorized request from the host interface. The host logic can include a data path function accessible via the host interface. The data path function can include a layer for formatting data transfers between the host interface and the application logic.
    Type: Application
    Filed: February 27, 2019
    Publication date: August 22, 2019
    Inventors: Islam Atta, Christopher Joseph Pettey, Asif Khan, Robert Michael Johnson, Mark Bradley Davis, Erez Izenberg, Nafea Bshara, Kypros Constantinides
  • Patent number: 10374885
    Abstract: Techniques for reconfiguring a server to perform various hardware functions are disclosed herein. In one embodiment, a server includes a reconfigurable adapter device, where the reconfigurable adapter device includes a reconfigurable resource that is reprogrammable to perform different hardware functions. The server can receive a provisioning request corresponding to a hardware function from a management service. The reconfigurable adapter device can configure the reconfigurable resource according to the hardware function and report the configured hardware function to the server. The reconfigurable resource can be reconfigured using firmware or emulation software.
    Type: Grant
    Filed: December 13, 2016
    Date of Patent: August 6, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Anthony Nicholas Liguori, Nafea Bshara