Patents by Inventor Shachar Raindel

Shachar Raindel has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240056361
    Abstract: This document relates to analyzing network stack functionality that is implemented in hardware, such as on a network adapter. The disclosed implementations employ a programmable network device, such as a switch, to inject events into traffic and mirror the traffic for subsequent analysis. The events can have user-specified event parameters to test different types of network stack behavior, such as how network adapters respond to corrupted packets, dropped packets, or explicit congestion notifications.
    Type: Application
    Filed: August 12, 2022
    Publication date: February 15, 2024
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Wei BAI, Jitendra PADHYE, Shachar RAINDEL, Zhuolong YU, Mahmoud ELHADDAD, Abdul KABBANI
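A minimal Python sketch of the event-injection idea described above; the function name, packet representation, and event vocabulary are hypothetical, not taken from the patent. A switch-like stage applies a user-specified event to a fraction of the traffic while mirroring the untouched stream for offline comparison against the NIC-under-test's behavior.

```python
import random

def inject_events(packets, event, rate, seed=0):
    """Apply a user-specified event ('drop', 'corrupt', or 'ecn') to a
    fraction of packets; return (mirrored, modified) streams so the
    hardware response can be compared against the original traffic."""
    rng = random.Random(seed)              # deterministic for repeatable tests
    mirrored = [dict(p) for p in packets]  # untouched copy for analysis
    modified = []
    for pkt in packets:
        if rng.random() < rate:
            if event == "drop":
                continue                   # simulate a lost packet
            if event == "corrupt":
                pkt = dict(pkt, payload=b"\x00" * len(pkt["payload"]))
            elif event == "ecn":
                pkt = dict(pkt, ecn=True)  # explicit congestion notification
        modified.append(pkt)
    return mirrored, modified
```

With rate=1.0 and event="drop", every packet vanishes from the modified stream while the mirror retains all of them for analysis.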
  • Publication number: 20240004683
    Abstract: Solutions for scheduling page migrations use the latency tolerance of coupled devices, such as external peripheral devices (e.g., network adapters), to prevent buffer overflows or other performance degradation. A latency tolerance of a device coupled to a virtual object, such as a virtual machine (VM), is determined. This may include the device exposing its latency tolerance using latency tolerance reporting (LTR). When a page migration for the virtual object is pending, a determination is made whether sufficient time exists to perform the page migration, based on at least the latency tolerance of the device. The page migration is performed if sufficient time exists; otherwise, it is delayed. In some examples, latency tolerances of multiple devices are considered. In some examples, multiple page migrations are performed contemporaneously, based on latency tolerances. Various options are disclosed, such as the page migration being performed by the virtual object software or the device.
    Type: Application
    Filed: June 29, 2022
    Publication date: January 4, 2024
    Inventors: Shachar RAINDEL, Daniel Sebastian BERGER
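The scheduling decision above can be sketched in a few lines; this is a toy model with hypothetical names, assuming each device's LTR value is available in microseconds. The least tolerant device sets the migration budget, and migrations that do not fit are deferred.

```python
def schedule_migrations(pending, device_tolerances_us):
    """Partition pending page migrations (name, estimated cost in us)
    into those performed now and those delayed, based on the tightest
    latency tolerance reported (e.g. via LTR) by coupled devices."""
    budget_us = min(device_tolerances_us)  # least tolerant device governs
    performed = [(n, c) for n, c in pending if c <= budget_us]
    delayed = [(n, c) for n, c in pending if c > budget_us]
    return performed, delayed
```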
  • Patent number: 11863457
    Abstract: Techniques of time-sensitive data delivery in distributed computing systems are disclosed herein. In one example, a server can disseminate the same information to multiple endpoints in a distributed computing system by transmitting multiple packets to the multiple endpoints hosted on additional servers in the distributed computing system. The multiple packets individually include a header field containing a delivery time before which the packets are not forwarded to corresponding final destinations and a payload containing data representing copies of information identical to one another destined to the multiple endpoints hosted on the additional servers.
    Type: Grant
    Filed: December 10, 2020
    Date of Patent: January 2, 2024
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventor: Shachar Raindel
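A toy model of the delivery-time header mechanism, with hypothetical names and a dict-based packet format: the server fans identical data out to many endpoints, and a forwarding stage holds each packet until its embedded delivery time arrives, so all endpoints receive the information simultaneously.

```python
def fan_out(payload, endpoints, deliver_at):
    """Build one packet per endpoint carrying identical data plus a
    delivery-time header field."""
    return [{"dst": e, "deliver_at": deliver_at, "payload": payload}
            for e in endpoints]

def release_ready(queue, now):
    """Split queued packets into those whose delivery time has arrived
    (forwarded to final destinations) and those still held."""
    ready = [p for p in queue if p["deliver_at"] <= now]
    held = [p for p in queue if p["deliver_at"] > now]
    return ready, held
```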
  • Publication number: 20230401160
    Abstract: In one example of the present technology, an input/output memory management unit (IOMMU) of a computing device is configured to: receive a prefetch message including a virtual address from a central processing unit (CPU) core of a processor of the computing device; perform a page walk on the virtual address through a page table stored in a main memory of the computing device to obtain a prefetched translation of the virtual address to a physical address; and store the prefetched translation of the virtual address to the physical address in a translation lookaside buffer (TLB) of the IOMMU.
    Type: Application
    Filed: June 9, 2022
    Publication date: December 14, 2023
    Inventors: Ramakrishna HUGGAHALLI, Shachar RAINDEL
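A simplified sketch of the prefetch flow, assuming a single-level page table held as a dict (the class and method names are illustrative only): a prefetch message triggers the page walk ahead of time, so a later device translation hits the IOMMU's TLB instead of stalling on a demand walk.

```python
PAGE_SIZE = 4096

class ToyIOMMU:
    """Toy model: the CPU core sends a prefetch message so the IOMMU can
    walk the page table ahead of a device access and warm its TLB."""
    def __init__(self, page_table):
        self.page_table = page_table  # virtual page number -> physical frame
        self.tlb = {}

    def prefetch(self, vaddr):
        vpn = vaddr // PAGE_SIZE
        self.tlb[vpn] = self.page_table[vpn]  # stand-in for the page walk

    def translate(self, vaddr):
        """Return (physical address, hit); hit is True when the
        translation was already cached in the TLB."""
        vpn, offset = divmod(vaddr, PAGE_SIZE)
        if vpn in self.tlb:
            return self.tlb[vpn] * PAGE_SIZE + offset, True
        self.tlb[vpn] = self.page_table[vpn]  # demand page walk
        return self.tlb[vpn] * PAGE_SIZE + offset, False
```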
  • Publication number: 20230336503
    Abstract: Embodiments of the present disclosure include techniques for receiving and processing packets. A program configures a network interface to store data from each received packet in one or more packet buffers. If data from a packet exceeds the capacity of the assigned packet buffers, remaining data from the packet may be stored in an overflow buffer. The packet may then be deleted efficiently without delays resulting from handling the remaining data.
    Type: Application
    Filed: April 15, 2022
    Publication date: October 19, 2023
    Inventor: Shachar RAINDEL
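The buffer-splitting idea can be sketched as follows (buffer size and function name are assumptions): data beyond the assigned buffers' capacity spills into a shared overflow buffer, so deleting the packet later means discarding only the assigned buffers, without touching the spilled remainder on the fast path.

```python
BUFFER_SIZE = 2048

def place_packet(data, assigned_buffers, overflow):
    """Store packet data in its assigned fixed-size buffers; any excess
    spills into the shared overflow list. Returns (stored, spilled) so a
    packet can be dropped cheaply by discarding only 'stored'."""
    capacity = assigned_buffers * BUFFER_SIZE
    stored = data[:capacity]
    spilled = data[capacity:]
    if spilled:
        overflow.append(spilled)  # remaining bytes parked out of the fast path
    return stored, bool(spilled)
```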
  • Publication number: 20230305739
    Abstract: Embodiments of the present disclosure include techniques for partial memory updates in a computer system. A data structure template is received. A first write data of a first write operation is received from a first data source, the first write operation performed in connection with provisioning of a first data payload to memory communicatively coupled with a processing unit. A first merge operation is performed involving the first write data and the first data structure template to obtain a first data structure update. The first data structure update is written to the memory, thereby improving efficiency of updating a first data structure associated with the first data payload.
    Type: Application
    Filed: March 28, 2022
    Publication date: September 28, 2023
    Inventors: Ramakrishna HUGGAHALLI, Shachar RAINDEL
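The merge operation reduces to overlaying the write data onto the template at the right offset, producing a complete structure that can be written to memory in one shot. A minimal byte-level sketch (the function name and byte-offset interface are assumptions):

```python
def merge_update(template, write_data, offset):
    """Overlay write data onto a data structure template at a byte
    offset, yielding the full structure update to write to memory."""
    update = bytearray(template)
    update[offset:offset + len(write_data)] = write_data
    return bytes(update)
```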
  • Publication number: 20230305968
    Abstract: Embodiments of the present disclosure include techniques for cache memory replacement in a processing unit. A first data production operation to store first data to a first cache line of the cache memory is detected at a first time. A retention status of the first cache line is updated to a first retention level as a result of the first data production operation. Protection against displacement of the first data in the first cache line is increased based on the first retention level. A first data consumption operation retrieving the first data from the first cache line is detected at a second time after the first time. The retention status of the first cache line is updated to a second retention level as a result of the first data consumption operation, the second retention level being a lower level of retention than the first retention level.
    Type: Application
    Filed: March 28, 2022
    Publication date: September 28, 2023
    Inventors: Ramakrishna HUGGAHALLI, Shachar RAINDEL
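A toy cache illustrating the produce/consume retention idea (class name, levels, and eviction policy are illustrative assumptions, not the patented mechanism): freshly produced lines get a high retention level that shields them from eviction, and the first consumption demotes them so they become preferred victims.

```python
PRODUCED, CONSUMED = 2, 1  # higher level = stronger eviction protection

class RetentionCache:
    """Toy cache where producing data raises a line's retention level so
    it survives until consumed; consumption lowers it again."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = {}  # address -> (data, retention level)

    def produce(self, addr, data):
        if addr not in self.lines and len(self.lines) >= self.capacity:
            self.evict()
        self.lines[addr] = (data, PRODUCED)

    def consume(self, addr):
        data, _ = self.lines[addr]
        self.lines[addr] = (data, CONSUMED)  # demote after the read
        return data

    def evict(self):
        victim = min(self.lines, key=lambda a: self.lines[a][1])
        del self.lines[victim]
        return victim
```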
  • Publication number: 20230299895
    Abstract: Techniques of packet level redundancy in distributed computing systems are disclosed herein. In one example, upon receiving an original packet to be transmitted from a source host to an application executing at a destination host, the source host generates a duplicated packet based on the received original packet. The source host can then encapsulate the original and duplicated packets with first and second outer headers having first and second header values, respectively, and transmit the original and duplicated packets from the source host to the destination host via a first network path and a second network path in the computer network, respectively. The transmitted original and duplicated packets can then be de-duplicated at the destination host before being provided to the application executing at the destination host.
    Type: Application
    Filed: March 15, 2022
    Publication date: September 21, 2023
    Inventors: Daehyeok Kim, Jitu Padhye, Shachar Raindel, Wei Bai
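The duplicate-and-dedup flow can be sketched as below; the dict packet format, path identifiers, and sequence-number dedup key are hypothetical stand-ins for the outer-header values described in the abstract. The sender emits two copies steered onto distinct paths; the receiver delivers only the first copy of each sequence number.

```python
def encapsulate_pair(inner, seq):
    """Duplicate a packet and wrap the copies in outer headers whose
    differing values steer them onto two distinct network paths."""
    return [{"outer": {"path": 1}, "seq": seq, "inner": inner},
            {"outer": {"path": 2}, "seq": seq, "inner": inner}]

class Deduplicator:
    """Destination-side filter: deliver the first copy of each sequence
    number to the application and drop the redundant one."""
    def __init__(self):
        self.seen = set()

    def accept(self, pkt):
        if pkt["seq"] in self.seen:
            return False
        self.seen.add(pkt["seq"])
        return True
```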
  • Publication number: 20230261960
    Abstract: Techniques are disclosed for identifying faulty links in a virtualized computing environment. Network path latency information is received for one or more network paths in the networked computing environment. Based on the network path latency information, a probable presence of a faulty component is determined. In response to the determination, physical links for a network path associated with the probable faulty component are identified. Information indicative of likely sources of the probable faulty component is received from multiple hosts of the networked computing environment. Based on the identified physical links and information, a faulty component is determined.
    Type: Application
    Filed: April 25, 2023
    Publication date: August 17, 2023
    Inventors: Shachar RAINDEL, Jitendra D. PADHYE, Avi William LEVY, Mahmoud S. EL HADDAD, Alireza KHOSGOFTAR MONAFARED, Brian D. ZILL, Behnaz ARZANI, Xinchen GUO
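One simple way to combine the per-host reports with the identified physical links is a vote count; this sketch is an assumption about how the pieces could fit together, not the claimed algorithm, and the link names are made up.

```python
from collections import Counter

def locate_faulty_link(path_links, host_reports):
    """Given the physical links on a suspect path and per-host reports
    of likely fault sources, return the most-implicated link (or None
    when no report matches the path)."""
    votes = Counter()
    for report in host_reports:
        for link in report:
            if link in path_links:
                votes[link] += 1
    return votes.most_common(1)[0][0] if votes else None
```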
  • Patent number: 11671342
    Abstract: Techniques are disclosed for identifying faulty links in a virtualized computing environment. Network path latency information is received for one or more network paths in the networked computing environment. Based on the network path latency information, a probable presence of a faulty component is determined. In response to the determination, physical links for a network path associated with the probable faulty component are identified. Information indicative of likely sources of the probable faulty component is received from multiple hosts of the networked computing environment. Based on the identified physical links and information, a faulty component is determined.
    Type: Grant
    Filed: May 21, 2021
    Date of Patent: June 6, 2023
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Shachar Raindel, Jitendra D. Padhye, Avi William Levy, Mahmoud S. El Haddad, Alireza Khosgoftar Monafared, Brian D. Zill, Behnaz Arzani, Xinchen Guo
  • Patent number: 11563662
    Abstract: Techniques for network latency estimation in a computer network are disclosed herein. One example technique includes instructing first and second nodes in the computer network to individually perform traceroute operations along a first round-trip route and a second round-trip route between the first and second nodes. The first round-trip route includes an inbound network path of an existing round-trip route between the first and second nodes and an outbound network path that is a reverse of the inbound network path. The second round-trip route has an outbound network path of the existing round-trip route and an inbound network path that is a reverse of the outbound network path. The example technique further includes, upon receiving traceroute information from the additional traceroute operations, determining a latency difference between the inbound and outbound network paths of the existing round-trip route based on the received additional traceroute information.
    Type: Grant
    Filed: July 14, 2022
    Date of Patent: January 24, 2023
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventor: Shachar Raindel
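The mirrored-route trick reduces to simple arithmetic: the first probe route traverses the existing inbound path in both directions and the second the outbound path in both directions, so halving each measured round-trip time recovers the one-way latencies and their difference. A sketch, with hypothetical names and millisecond units:

```python
def one_way_latency_skew(rtt_inbound_both_ways_ms, rtt_outbound_both_ways_ms):
    """Each probe route repeats one direction of the existing round trip
    in both directions, so half its RTT is that direction's one-way
    latency; the difference exposes inbound/outbound asymmetry."""
    inbound_ms = rtt_inbound_both_ways_ms / 2
    outbound_ms = rtt_outbound_both_ways_ms / 2
    return inbound_ms - outbound_ms
```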
  • Publication number: 20220360511
    Abstract: Techniques for network latency estimation in a computer network are disclosed herein. One example technique includes instructing first and second nodes in the computer network to individually perform traceroute operations along a first round-trip route and a second round-trip route between the first and second nodes. The first round-trip route includes an inbound network path of an existing round-trip route between the first and second nodes and an outbound network path that is a reverse of the inbound network path. The second round-trip route has an outbound network path of the existing round-trip route and an inbound network path that is a reverse of the outbound network path. The example technique further includes, upon receiving traceroute information from the additional traceroute operations, determining a latency difference between the inbound and outbound network paths of the existing round-trip route based on the received additional traceroute information.
    Type: Application
    Filed: July 14, 2022
    Publication date: November 10, 2022
    Inventor: Shachar RAINDEL
  • Patent number: 11431599
    Abstract: Techniques for network latency estimation in a computer network are disclosed herein. One example technique includes instructing first and second nodes in the computer network to individually perform traceroute operations along a first round-trip route and a second round-trip route between the first and second nodes. The first round-trip route includes an inbound network path of an existing round-trip route between the first and second nodes and an outbound network path that is a reverse of the inbound network path. The second round-trip route has an outbound network path of the existing round-trip route and an inbound network path that is a reverse of the outbound network path. The example technique further includes, upon receiving traceroute information from the additional traceroute operations, determining a latency difference between the inbound and outbound network paths of the existing round-trip route based on the received additional traceroute information.
    Type: Grant
    Filed: July 9, 2021
    Date of Patent: August 30, 2022
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventor: Shachar Raindel
  • Publication number: 20220210039
    Abstract: Techniques for network latency estimation in a computer network are disclosed herein. One example technique includes instructing first and second nodes in the computer network to individually perform traceroute operations along a first round-trip route and a second round-trip route between the first and second nodes. The first round-trip route includes an inbound network path of an existing round-trip route between the first and second nodes and an outbound network path that is a reverse of the inbound network path. The second round-trip route has an outbound network path of the existing round-trip route and an inbound network path that is a reverse of the outbound network path. The example technique further includes, upon receiving traceroute information from the additional traceroute operations, determining a latency difference between the inbound and outbound network paths of the existing round-trip route based on the received additional traceroute information.
    Type: Application
    Filed: July 9, 2021
    Publication date: June 30, 2022
    Inventor: Shachar Raindel
  • Publication number: 20220191148
    Abstract: Techniques of time-sensitive data delivery in distributed computing systems are disclosed herein. In one example, a server can disseminate the same information to multiple endpoints in a distributed computing system by transmitting multiple packets to the multiple endpoints hosted on additional servers in the distributed computing system. The multiple packets individually include a header field containing a delivery time before which the packets are not forwarded to corresponding final destinations and a payload containing data representing copies of information identical to one another destined to the multiple endpoints hosted on the additional servers.
    Type: Application
    Filed: December 10, 2020
    Publication date: June 16, 2022
    Inventor: Shachar Raindel
  • Patent number: 11218537
    Abstract: Techniques for facilitating load balancing in distributed computing systems are disclosed herein. In one embodiment, a method includes receiving, at a destination server, a request packet from a load balancer via the computer network requesting a remote direct memory access (“RDMA”) connection between an originating server and one or more other servers selectable by the load balancer. The method can also include configuring, at the destination server, a rule for processing additional packets transmittable to the originating server via the RDMA connection based on the received reply packet. The rule is configured to encapsulate an outgoing packet transmittable to the originating server with an outer header having a destination field containing a network address of the originating server and a source field containing another network address of the destination server.
    Type: Grant
    Filed: May 12, 2020
    Date of Patent: January 4, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Rohan Gandhi, Shachar Raindel, Daniel Firestone, Jitendra Padhye, Lihua Yuan
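The encapsulation rule can be sketched as a small closure; function names, the dict packet format, and the IP values are hypothetical. Return traffic carries an outer header addressed directly to the originating server, with the destination server as outer source, so replies bypass the load balancer.

```python
def make_encap_rule(originator_ip, self_ip):
    """Return a rule that wraps outgoing RDMA-connection packets in an
    outer header addressed directly to the originating server."""
    def encapsulate(inner_packet):
        return {"outer": {"dst": originator_ip, "src": self_ip},
                "inner": inner_packet}
    return encapsulate
```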
  • Publication number: 20210281505
    Abstract: Techniques are disclosed for identifying faulty links in a virtualized computing environment. Network path latency information is received for one or more network paths in the networked computing environment. Based on the network path latency information, a probable presence of a faulty component is determined. In response to the determination, physical links for a network path associated with the probable faulty component are identified. Information indicative of likely sources of the probable faulty component is received from multiple hosts of the networked computing environment. Based on the identified physical links and information, a faulty component is determined.
    Type: Application
    Filed: May 21, 2021
    Publication date: September 9, 2021
    Inventors: Shachar Raindel, Jitendra D. PADHYE, Avi William LEVY, Mahmoud S. EL HADDAD, Alireza KHOSGOFTAR MONAFARED, Brian D. ZILL, Behnaz ARZANI, Xinchen GUO
  • Patent number: 11050652
    Abstract: Techniques are disclosed for identifying faulty links in a virtualized computing environment. Network path latency information is received for one or more network paths in the networked computing environment. Based on the network path latency information, a probable presence of a faulty component is determined. In response to the determination, physical links for a network path associated with the probable faulty component are identified. Information indicative of likely sources of the probable faulty component is received from multiple hosts of the networked computing environment. Based on the identified physical links and information, a faulty component is determined.
    Type: Grant
    Filed: February 1, 2019
    Date of Patent: June 29, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Shachar Raindel, Jitendra D. Padhye, Avi William Levy, Mahmoud S. El Haddad, Alireza Khosgoftar Monafared, Brian D. Zill, Behnaz Arzani, Xinchen Guo
  • Patent number: 11042501
    Abstract: Distributed storage systems, devices, and associated methods of data replication are disclosed herein. In one embodiment, a server in a distributed storage system is configured to write, with an RDMA-enabled NIC, a block of data from a memory of the server to a memory at another server via an RDMA network. Upon completion of writing the block of data to the other server, the server can also send metadata representing a memory location and a data size of the written block of data in the memory of the other server via the RDMA network. The sent metadata is to be written into a memory location containing data representing a memory descriptor that is part of a data structure representing a pre-posted work request configured to write a copy of the block of data from the other server to an additional server via the RDMA network.
    Type: Grant
    Filed: April 2, 2020
    Date of Patent: June 22, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yibo Zhu, Jitendra D. Padhye, Hongqiang Liu, Shachar Raindel, Daehyeok Kim, Anirudh Badam
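The chained replication above can be modeled in a few lines; the class and method names are hypothetical, and Python method calls stand in for actual RDMA writes. Metadata landing at a node represents the pre-posted work request firing, which pushes the same block one hop further without involving that node's CPU in the data path.

```python
class ReplicaServer:
    """Toy chain-replication node for the RDMA scheme sketched above."""
    def __init__(self, next_server=None):
        self.memory = {}
        self.next = next_server

    def rdma_write(self, addr, data):
        self.memory[addr] = data  # peer wrote directly into our memory

    def rdma_write_metadata(self, addr, size):
        # Metadata arriving in the pre-posted descriptor triggers the
        # onward copy to the next server in the chain.
        if self.next is not None:
            self.next.rdma_write(addr, self.memory[addr][:size])
            self.next.rdma_write_metadata(addr, size)

def replicate(head, addr, data):
    head.rdma_write(addr, data)
    head.rdma_write_metadata(addr, len(data))
```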
  • Publication number: 20210126966
    Abstract: Techniques for facilitating load balancing in distributed computing systems are disclosed herein. In one embodiment, a method includes receiving, at a destination server, a request packet from a load balancer via the computer network requesting a remote direct memory access (“RDMA”) connection between an originating server and one or more other servers selectable by the load balancer. The method can also include configuring, at the destination server, a rule for processing additional packets transmittable to the originating server via the RDMA connection based on the received reply packet. The rule is configured to encapsulate an outgoing packet transmittable to the originating server with an outer header having a destination field containing a network address of the originating server and a source field containing another network address of the destination server.
    Type: Application
    Filed: May 12, 2020
    Publication date: April 29, 2021
    Inventors: Rohan Gandhi, Shachar Raindel, Daniel Firestone, Jitendra Padhye, Lihua Yuan