Patents by Inventor Shachar Raindel
Shachar Raindel has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240056361
Abstract: This document relates to analyzing network stack functionality that is implemented in hardware, such as on a network adapter. The disclosed implementations employ a programmable network device, such as a switch, to inject events into traffic and mirror the traffic for subsequent analysis. The events can have user-specified event parameters to test different types of network stack behavior, such as how the network adapters respond to corrupted packets, dropped packets, or explicit congestion notifications.
Type: Application
Filed: August 12, 2022
Publication date: February 15, 2024
Applicant: Microsoft Technology Licensing, LLC
Inventors: Wei BAI, Jitendra PADHYE, Shachar RAINDEL, Zhuolong YU, Mahmoud ELHADDAD, Abdul KABBANI
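The event-injection idea can be sketched in miniature. The function below is a hypothetical illustration of the abstract, not the patented implementation: the `inject_event` name, the event identifiers, and the byte offsets are invented for this example (the ECN bits do sit in the low two bits of the second byte of an IPv4 header).

```python
def inject_event(packet: bytes, event: str) -> tuple[bytes, bytes]:
    """Apply a user-specified event to a packet and return the modified
    packet plus a mirrored copy of the original for offline analysis."""
    mutated = bytearray(packet)
    if event == "corrupt":
        mutated[-1] ^= 0xFF          # flip bits in the last payload byte
    elif event == "ecn":
        mutated[1] |= 0x03           # set the two ECN bits in the IP TOS byte
    elif event == "drop":
        mutated = bytearray()        # drop: emit nothing downstream
    mirror = bytes(packet)           # mirror the untouched traffic
    return bytes(mutated), mirror
```

Mirroring the unmodified packet alongside the mutated one is what lets later analysis compare how the adapter under test reacted to each injected event.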
-
Publication number: 20240004683
Abstract: Solutions for scheduling page migrations use the latency tolerance of coupled devices, such as external peripheral devices (e.g., network adapters), to prevent buffer overflows or other performance degradation. A latency tolerance of a device coupled to a virtual object, such as a virtual machine (VM), is determined. This may include the device exposing its latency tolerance using latency tolerance reporting (LTR). When a page migration for the virtual object is pending, a determination is made whether sufficient time exists to perform the page migration, based on at least the latency tolerance of the device. The page migration is performed if sufficient time exists. Otherwise, the page migration is delayed. In some examples, latency tolerances of multiple devices are considered. In some examples, multiple page migrations are performed contemporaneously, based on latency tolerances. Various options are disclosed, such as the page migration being performed by the virtual object software or the device.
Type: Application
Filed: June 29, 2022
Publication date: January 4, 2024
Inventors: Shachar RAINDEL, Daniel Sebastian BERGER
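The core scheduling decision reduces to a budget check: a migration may proceed only if it completes within the tolerance of the least-tolerant coupled device. A minimal sketch, assuming microsecond-granularity LTR values and an invented safety margin:

```python
def can_migrate(latency_tolerances_us: list[float],
                migration_time_us: float,
                safety_margin_us: float = 10.0) -> bool:
    """Decide whether a pending page migration fits within the latency
    tolerance of every coupled device (e.g., as reported via LTR).
    The least-tolerant device sets the budget."""
    budget = min(latency_tolerances_us) - safety_margin_us
    return migration_time_us <= budget
```

If this returns `False`, the migration is delayed and retried later, matching the perform-or-delay behavior the abstract describes.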
-
Patent number: 11863457
Abstract: Techniques of time-sensitive data delivery in distributed computing systems are disclosed herein. In one example, a server can disseminate the same information to multiple endpoints in a distributed computing system by transmitting multiple packets to the multiple endpoints hosted on additional servers in the distributed computing system. The multiple packets individually include a header field containing a delivery time before which the packets are not forwarded to corresponding final destinations and a payload containing data representing copies of information identical to one another destined to the multiple endpoints hosted on the additional servers.
Type: Grant
Filed: December 10, 2020
Date of Patent: January 2, 2024
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventor: Shachar Raindel
-
Publication number: 20230401160
Abstract: In one example of the present technology, an input/output memory management unit (IOMMU) of a computing device is configured to: receive a prefetch message including a virtual address from a central processing unit (CPU) core of a processor of the computing device; perform a page walk on the virtual address through a page table stored in a main memory of the computing device to obtain a prefetched translation of the virtual address to a physical address; and store the prefetched translation of the virtual address to the physical address in a translation lookaside buffer (TLB) of the IOMMU.
Type: Application
Filed: June 9, 2022
Publication date: December 14, 2023
Inventors: Ramakrishna HUGGAHALLI, Shachar RAINDEL
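The prefetch flow can be modeled with a toy single-level page table. The class below is a deliberately simplified illustration (real IOMMUs walk multi-level tables and handle faults); all names are invented:

```python
class ToyIOMMU:
    """Minimal model of an IOMMU that prefetches translations into its TLB
    on request from a CPU core, ahead of the device DMA that will use them."""

    PAGE = 4096

    def __init__(self, page_table: dict[int, int]):
        self.page_table = page_table   # virtual page number -> physical page number
        self.tlb: dict[int, int] = {}  # translation lookaside buffer

    def prefetch(self, vaddr: int) -> None:
        """Walk the page table for vaddr and cache the result in the TLB."""
        vpn = vaddr // self.PAGE
        self.tlb[vpn] = self.page_table[vpn]   # the "page walk"

    def translate(self, vaddr: int) -> int:
        """Translate using the TLB; a prefetched address hits without a walk."""
        vpn, offset = divmod(vaddr, self.PAGE)
        return self.tlb[vpn] * self.PAGE + offset
```

The payoff is that by the time a device issues DMA to the virtual address, the translation is already a TLB hit instead of a slow walk through main memory.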
-
Publication number: 20230336503
Abstract: Embodiments of the present disclosure include techniques for receiving and processing packets. A program configures a network interface to store data from each received packet in one or more packet buffers. If data from a packet exceeds the capacity of the assigned packet buffers, remaining data from the packet may be stored in an overflow buffer. The packet may then be deleted efficiently without delays resulting from handling the remaining data.
Type: Application
Filed: April 15, 2022
Publication date: October 19, 2023
Inventor: Shachar RAINDEL
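The buffer-split behavior can be sketched directly. This is an illustration under assumed parameters (fixed buffer size, fixed buffer count), not the claimed NIC interface:

```python
def receive(packet: bytes, buf_size: int, num_bufs: int):
    """Split an incoming packet across fixed-size packet buffers; any
    remainder that does not fit goes into a single overflow buffer."""
    capacity = buf_size * num_bufs
    buffers = [packet[i:i + buf_size]
               for i in range(0, min(len(packet), capacity), buf_size)]
    overflow = packet[capacity:]     # empty when the packet fits
    return buffers, overflow
```

Because the spillover always lands in one known overflow buffer, discarding an unwanted packet is just releasing that buffer, with no per-fragment bookkeeping for the excess data.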
-
Publication number: 20230305739
Abstract: Embodiments of the present disclosure include techniques for partial memory updates in a computer system. A data structure template is received. A first write data of a first write operation is received from a first data source, the first write operation performed in connection with provisioning of a first data payload to memory communicatively coupled with a processing unit. A first merge operation is performed involving the first write data and the first data structure template to obtain a first data structure update. The first data structure update is written to the memory, thereby improving efficiency of updating a first data structure associated with the first data payload.
Type: Application
Filed: March 28, 2022
Publication date: September 28, 2023
Inventors: Ramakrishna HUGGAHALLI, Shachar RAINDEL
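One plausible reading of the merge operation is a masked byte-wise combine: bytes selected by a mask come from the incoming write data, and the rest come from the template. The abstract does not specify the merge mechanism, so the mask-based form below is an assumption made for illustration:

```python
def merge_update(template: bytes, mask: bytes, write_data: bytes) -> bytes:
    """Merge incoming write data into a data structure template: bytes
    selected by the mask (0xFF) come from the write data, the rest from
    the template. The merged result is the full structure update that
    gets written to memory in one operation."""
    assert len(template) == len(mask) == len(write_data)
    return bytes((w & m) | (t & ~m & 0xFF)
                 for t, m, w in zip(template, mask, write_data))
```

The efficiency argument in the abstract is that the writer supplies only the changed fields; the template carries the invariant parts, so the full structure never has to be read back before updating.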
-
Publication number: 20230305968
Abstract: Embodiments of the present disclosure include techniques for cache memory replacement in a processing unit. A first data production operation to store first data to a first cache line of the cache memory is detected at a first time. A retention status of the first cache line is updated to a first retention level as a result of the first data production operation. Protection against displacement of the first data in the first cache line is increased based on the first retention level. A first data consumption operation retrieving the first data from the first cache line is detected at a second time after the first time. The retention status of the first cache line is updated to a second retention level as a result of the first data consumption operation, the second retention level being a lower level of retention than the first retention level.
Type: Application
Filed: March 28, 2022
Publication date: September 28, 2023
Inventors: Ramakrishna HUGGAHALLI, Shachar RAINDEL
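The produce-then-consume retention policy can be shown with a toy cache. The two-level scheme and the class below are simplifications invented for this sketch; the patent describes retention levels abstractly:

```python
PROTECTED, NORMAL = 2, 1   # hypothetical retention levels

class RetentionCache:
    """Cache whose lines gain displacement protection when data is produced
    into them and lose it once that data has been consumed."""

    def __init__(self, size: int):
        self.size = size
        self.lines: dict[str, tuple[object, int]] = {}  # key -> (data, retention)

    def produce(self, key, data):
        if key not in self.lines and len(self.lines) >= self.size:
            self._evict()
        self.lines[key] = (data, PROTECTED)   # raise retention on production

    def consume(self, key):
        data, _ = self.lines[key]
        self.lines[key] = (data, NORMAL)      # lower retention after consumption
        return data

    def _evict(self):
        # Displace the line with the lowest retention level first.
        victim = min(self.lines, key=lambda k: self.lines[k][1])
        del self.lines[victim]
```

The intuition: data that has been written but not yet read is the most expensive to lose, so it is shielded from replacement until its consumer has fetched it.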
-
Publication number: 20230299895
Abstract: Techniques of packet-level redundancy in distributed computing systems are disclosed herein. In one example, upon receiving an original packet to be transmitted from a source host to an application executing at a destination host, the source host generates a duplicated packet based on the received original packet. The source host can then encapsulate the original and duplicated packets with first and second outer headers having first and second header values, respectively, and transmit the original and the duplicated packets from the source host to the destination host via a first network path and a second network path in the computer network, respectively. Then, the transmitted original and duplicated packets can be de-duplicated at the destination host before providing the de-duplicated packets to the application executing at the destination host.
Type: Application
Filed: March 15, 2022
Publication date: September 21, 2023
Inventors: Daehyeok Kim, Jitu Padhye, Shachar Raindel, Wei Bai
-
Publication number: 20230261960
Abstract: Techniques are disclosed for identifying faulty links in a virtualized computing environment. Network path latency information is received for one or more network paths in the networked computing environment. Based on the network path latency information, a probable presence of a faulty component is determined. In response to the determination, physical links for a network path associated with the probable faulty component are identified. Information indicative of likely sources of the probable faulty component is received from multiple hosts of the networked computing environment. Based on the identified physical links and information, a faulty component is determined.
Type: Application
Filed: April 25, 2023
Publication date: August 17, 2023
Inventors: Shachar RAINDEL, Jitendra D. PADHYE, Avi William LEVY, Mahmoud S. EL HADDAD, Alireza KHOSGOFTAR MONAFARED, Brian D. ZILL, Behnaz ARZANI, Xinchen GUO
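A common way to localize a faulty link from per-path latency data, and one consistent with the abstract's description, is to intersect the physical links of every anomalous path: a link shared by all slow paths is the likeliest culprit. The function below is an illustrative sketch of that intersection step, not the patented algorithm:

```python
def probable_faulty_links(paths: dict[str, list[str]],
                          latencies_ms: dict[str, float],
                          threshold_ms: float) -> set[str]:
    """Given per-path latency measurements from multiple hosts, intersect
    the physical links of every slow path; links common to all of them
    are the likeliest faulty components."""
    slow = [set(paths[p]) for p in paths if latencies_ms[p] > threshold_ms]
    if not slow:
        return set()
    return set.intersection(*slow)
```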
-
Patent number: 11671342
Abstract: Techniques are disclosed for identifying faulty links in a virtualized computing environment. Network path latency information is received for one or more network paths in the networked computing environment. Based on the network path latency information, a probable presence of a faulty component is determined. In response to the determination, physical links for a network path associated with the probable faulty component are identified. Information indicative of likely sources of the probable faulty component is received from multiple hosts of the networked computing environment. Based on the identified physical links and information, a faulty component is determined.
Type: Grant
Filed: May 21, 2021
Date of Patent: June 6, 2023
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Shachar Raindel, Jitendra D. Padhye, Avi William Levy, Mahmoud S. El Haddad, Alireza Khosgoftar Monafared, Brian D. Zill, Behnaz Arzani, Xinchen Guo
-
Patent number: 11563662
Abstract: Techniques for network latency estimation in a computer network are disclosed herein. One example technique includes instructing first and second nodes in the computer network to individually perform traceroute operations along a first round-trip route and a second round-trip route between the first and second nodes. The first round-trip route includes an inbound network path of an existing round-trip route between the first and second nodes and an outbound network path that is a reverse of the inbound network path. The second round-trip route has an outbound network path of the existing round-trip route and an inbound network path that is a reverse of the outbound network path. The example technique further includes, upon receiving traceroute information from the additional traceroute operations, determining a latency difference between the inbound and outbound network paths of the existing round-trip route based on the received additional traceroute information.
Type: Grant
Filed: July 14, 2022
Date of Patent: January 24, 2023
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventor: Shachar Raindel
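The arithmetic behind the mirrored routes is compact: the first round trip traverses the inbound path twice (once forward, once reversed), so its RTT is about twice the inbound one-way latency, and likewise for the second route and the outbound path. Halving each RTT and subtracting yields the asymmetry of the original route. A minimal sketch of that computation:

```python
def path_latency_difference(rtt_inbound_mirrored_ms: float,
                            rtt_outbound_mirrored_ms: float) -> float:
    """Each mirrored round trip traverses one direction of the original
    route twice, so halving each RTT gives that direction's one-way
    latency; subtracting gives the inbound-vs-outbound difference."""
    latency_in = rtt_inbound_mirrored_ms / 2
    latency_out = rtt_outbound_mirrored_ms / 2
    return latency_in - latency_out
```

A positive result means the inbound path of the existing round-trip route is the slower direction, which standard traceroute alone cannot reveal because it only measures combined round-trip times.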
-
Publication number: 20220360511
Abstract: Techniques for network latency estimation in a computer network are disclosed herein. One example technique includes instructing first and second nodes in the computer network to individually perform traceroute operations along a first round-trip route and a second round-trip route between the first and second nodes. The first round-trip route includes an inbound network path of an existing round-trip route between the first and second nodes and an outbound network path that is a reverse of the inbound network path. The second round-trip route has an outbound network path of the existing round-trip route and an inbound network path that is a reverse of the outbound network path. The example technique further includes, upon receiving traceroute information from the additional traceroute operations, determining a latency difference between the inbound and outbound network paths of the existing round-trip route based on the received additional traceroute information.
Type: Application
Filed: July 14, 2022
Publication date: November 10, 2022
Inventor: Shachar RAINDEL
-
Patent number: 11431599
Abstract: Techniques for network latency estimation in a computer network are disclosed herein. One example technique includes instructing first and second nodes in the computer network to individually perform traceroute operations along a first round-trip route and a second round-trip route between the first and second nodes. The first round-trip route includes an inbound network path of an existing round-trip route between the first and second nodes and an outbound network path that is a reverse of the inbound network path. The second round-trip route has an outbound network path of the existing round-trip route and an inbound network path that is a reverse of the outbound network path. The example technique further includes, upon receiving traceroute information from the additional traceroute operations, determining a latency difference between the inbound and outbound network paths of the existing round-trip route based on the received additional traceroute information.
Type: Grant
Filed: July 9, 2021
Date of Patent: August 30, 2022
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventor: Shachar Raindel
-
Publication number: 20220210039
Abstract: Techniques for network latency estimation in a computer network are disclosed herein. One example technique includes instructing first and second nodes in the computer network to individually perform traceroute operations along a first round-trip route and a second round-trip route between the first and second nodes. The first round-trip route includes an inbound network path of an existing round-trip route between the first and second nodes and an outbound network path that is a reverse of the inbound network path. The second round-trip route has an outbound network path of the existing round-trip route and an inbound network path that is a reverse of the outbound network path. The example technique further includes, upon receiving traceroute information from the additional traceroute operations, determining a latency difference between the inbound and outbound network paths of the existing round-trip route based on the received additional traceroute information.
Type: Application
Filed: July 9, 2021
Publication date: June 30, 2022
Inventor: Shachar Raindel
-
Publication number: 20220191148
Abstract: Techniques of time-sensitive data delivery in distributed computing systems are disclosed herein. In one example, a server can disseminate the same information to multiple endpoints in a distributed computing system by transmitting multiple packets to the multiple endpoints hosted on additional servers in the distributed computing system. The multiple packets individually include a header field containing a delivery time before which the packets are not forwarded to corresponding final destinations and a payload containing data representing copies of information identical to one another destined to the multiple endpoints hosted on the additional servers.
Type: Application
Filed: December 10, 2020
Publication date: June 16, 2022
Inventor: Shachar Raindel
-
Patent number: 11218537
Abstract: Techniques for facilitating load balancing in distributed computing systems are disclosed herein. In one embodiment, a method includes receiving, at a destination server, a request packet from a load balancer via the computer network requesting a remote direct memory access (“RDMA”) connection between an originating server and one or more other servers selectable by the load balancer. The method can also include configuring, at the destination server, a rule for processing additional packets transmittable to the originating server via the RDMA connection based on the received reply packet. The rule is configured to encapsulate an outgoing packet transmittable to the originating server with an outer header having a destination field containing a network address of the originating server and a source field containing another network address of the destination server.
Type: Grant
Filed: May 12, 2020
Date of Patent: January 4, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Rohan Gandhi, Shachar Raindel, Daniel Firestone, Jitendra Padhye, Lihua Yuan
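The encapsulation rule the abstract describes can be sketched as a small closure: once the connection is set up, every outgoing packet is wrapped in an outer header addressed directly to the originating server, so return traffic bypasses the load balancer. The two-field outer header below is a simplification invented for this example:

```python
import struct

OUTER = struct.Struct("!4s4s")  # destination, source IPv4 addresses

def make_rule(originating_ip: bytes, destination_ip: bytes):
    """Build the per-connection rule configured at the destination server:
    every outgoing RDMA packet gets an outer header whose destination is
    the originating server and whose source is this destination server."""
    def encapsulate(packet: bytes) -> bytes:
        return OUTER.pack(originating_ip, destination_ip) + packet
    return encapsulate
```

This is the essence of direct server return for RDMA: the load balancer only brokers the initial connection, and the data path then flows server-to-server.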
-
Publication number: 20210281505
Abstract: Techniques are disclosed for identifying faulty links in a virtualized computing environment. Network path latency information is received for one or more network paths in the networked computing environment. Based on the network path latency information, a probable presence of a faulty component is determined. In response to the determination, physical links for a network path associated with the probable faulty component are identified. Information indicative of likely sources of the probable faulty component is received from multiple hosts of the networked computing environment. Based on the identified physical links and information, a faulty component is determined.
Type: Application
Filed: May 21, 2021
Publication date: September 9, 2021
Inventors: Shachar Raindel, Jitendra D. PADHYE, Avi William LEVY, Mahmoud S. EL HADDAD, Alireza KHOSGOFTAR MONAFARED, Brian D. ZILL, Behnaz ARZANI, Xinchen GUO
-
Patent number: 11050652
Abstract: Techniques are disclosed for identifying faulty links in a virtualized computing environment. Network path latency information is received for one or more network paths in the networked computing environment. Based on the network path latency information, a probable presence of a faulty component is determined. In response to the determination, physical links for a network path associated with the probable faulty component are identified. Information indicative of likely sources of the probable faulty component is received from multiple hosts of the networked computing environment. Based on the identified physical links and information, a faulty component is determined.
Type: Grant
Filed: February 1, 2019
Date of Patent: June 29, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Shachar Raindel, Jitendra D. Padhye, Avi William Levy, Mahmoud S. El Haddad, Alireza Khosgoftar Monafared, Brian D. Zill, Behnaz Arzani, Xinchen Guo
-
Patent number: 11042501
Abstract: Distributed storage systems, devices, and associated methods of data replication are disclosed herein. In one embodiment, a server in a distributed storage system is configured to write, with an RDMA enabled NIC, a block of data from a memory of the server to a memory at another server via an RDMA network. Upon completion of writing the block of data to the another server, the server can also send metadata representing a memory location and a data size of the written block of data in the memory of the another server via the RDMA network. The sent metadata is to be written into a memory location containing data representing a memory descriptor that is a part of a data structure representing a pre-posted work request configured to write a copy of the block of data from the another server to an additional server via the RDMA network.
Type: Grant
Filed: April 2, 2020
Date of Patent: June 22, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Yibo Zhu, Jitendra D. Padhye, Hongqiang Liu, Shachar Raindel, Daehyeok Kim, Anirudh Badam
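The chaining trick here is that the metadata write itself fills in the memory descriptor of a work request the next server pre-posted, arming the next replication hop without CPU involvement. The toy model below illustrates that descriptor-patching idea with plain Python objects; the class, method names, and the triggering convention are all invented for this sketch and stand in for real RDMA verbs:

```python
class ReplicaServer:
    """Toy model of chained replication: after a block is written into this
    server's memory, the writer sends metadata that lands directly in the
    memory descriptor of a pre-posted work request, arming the next hop."""

    def __init__(self, next_hop=None):
        self.memory: dict[int, bytes] = {}
        self.next_hop = next_hop
        # Pre-posted work request with an unfilled memory descriptor.
        self.preposted = {"addr": None, "size": None}

    def rdma_write(self, addr: int, data: bytes):
        self.memory[addr] = data

    def write_metadata(self, addr: int, size: int):
        # The incoming metadata write *is* the descriptor update.
        self.preposted = {"addr": addr, "size": size}
        if self.next_hop is not None:
            self._execute_preposted()

    def _execute_preposted(self):
        addr = self.preposted["addr"]
        data = self.memory[addr][: self.preposted["size"]]
        self.next_hop.rdma_write(addr, data)
        self.next_hop.write_metadata(addr, len(data))
```

Each server in the chain forwards the block onward as soon as its descriptor is patched, so replication to N replicas proceeds as a pipeline of one-sided writes.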
-
Publication number: 20210126966
Abstract: Techniques for facilitating load balancing in distributed computing systems are disclosed herein. In one embodiment, a method includes receiving, at a destination server, a request packet from a load balancer via the computer network requesting a remote direct memory access (“RDMA”) connection between an originating server and one or more other servers selectable by the load balancer. The method can also include configuring, at the destination server, a rule for processing additional packets transmittable to the originating server via the RDMA connection based on the received reply packet. The rule is configured to encapsulate an outgoing packet transmittable to the originating server with an outer header having a destination field containing a network address of the originating server and a source field containing another network address of the destination server.
Type: Application
Filed: May 12, 2020
Publication date: April 29, 2021
Inventors: Rohan Gandhi, Shachar Raindel, Daniel Firestone, Jitendra Padhye, Lihua Yuan