Patents by Inventor Michael Konstantinos Papamichael
Michael Konstantinos Papamichael has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230379254
Abstract: Techniques and algorithms for monitoring network congestion and for triggering a flow to follow a new path through a network. The network is monitored, and network feedback data is acquired, where that data indicates whether the network is congested. If the network is congested, a feedback-driven algorithm can trigger a flow to follow a new path. By triggering the flow to follow the new path, congestion in the network is reduced. To identify congestion, the feedback data is analyzed to determine whether flows are colliding. The feedback-driven algorithm determines that a network remapping event is to occur in an attempt to alleviate the congestion. A flow is then selected to be remapped to alleviate the congestion.
Type: Application
Filed: May 18, 2022
Publication date: November 23, 2023
Inventors: Michael Konstantinos PAPAMICHAEL, Mohammad Saifee DOHADWALA, Adrian Michael CAULFIELD, Prashant RANJAN
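The feedback-driven remapping loop described in this abstract can be sketched roughly as follows. This is a minimal illustration, not the patented algorithm: the function name, the flow/feedback data shapes, and the congestion threshold are all assumptions made for the example.

```python
import random

def remap_congested_flow(flows, feedback, paths, seed=0):
    """Illustrative sketch: if network feedback reports congestion on a
    path where two or more flows collide, select one colliding flow and
    remap it to a different path. All names and thresholds here are
    hypothetical, not taken from the patent."""
    rng = random.Random(seed)
    # Group flows by their current path to detect collisions.
    by_path = {}
    for flow in flows:
        by_path.setdefault(flow["path"], []).append(flow)
    for path, colliding in by_path.items():
        # Congestion signal plus a collision triggers a remapping event.
        if len(colliding) > 1 and feedback.get(path, 0.0) > 0.8:
            victim = rng.choice(colliding)           # flow selected for remap
            alternatives = [p for p in paths if p != path]
            victim["path"] = rng.choice(alternatives)  # follow a new path
            return victim
    return None  # no congestion detected, nothing remapped
```

In a real deployment the feedback would come from in-network telemetry rather than a dictionary, but the control flow (detect collision, decide a remapping event is needed, pick a flow, move it) mirrors the abstract's description.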
-
Publication number: 20210392210
Abstract: A masked packet checksum is utilized to provide error detection and/or error correction for only discrete portions of a packet, to the exclusion of other portions, thereby avoiding retransmission if transmission errors appear only in portions excluded by the masked packet checksum. A bitmask identifies packet portions whose data is to be protected with error detection and/or error correction schemes, packet portions whose data is to be excluded from such error detection and/or error correction schemes, or combinations thereof. A bitmask can be a per-packet specification, incorporated into one or more fields of individual packets, or a single bitmask can apply equally to multiple packets, which can be delineated in numerous ways, and can be separately transmitted or derived. Bitmasks can be generated at higher layers with lower layer mechanisms deactivated, or can be generated at lower layers based upon data passed down.
Type: Application
Filed: August 30, 2021
Publication date: December 16, 2021
Inventors: Adrian Michael CAULFIELD, Michael Konstantinos PAPAMICHAEL
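The core idea of a masked checksum — protecting only the byte positions the bitmask selects, so corruption in excluded positions does not force a retransmission — can be illustrated with a toy additive checksum. The checksum function itself is an assumption for the example; the patent does not mandate any particular checksum algorithm.

```python
def masked_checksum(payload: bytes, bitmask: bytes) -> int:
    """Toy masked checksum: sum only the payload bytes whose position
    is selected by the bitmask (bit set = protected, bit clear =
    excluded). One mask bit covers one payload byte, MSB first."""
    total = 0
    for i, byte in enumerate(payload):
        mask_byte = bitmask[i // 8] if i // 8 < len(bitmask) else 0
        if mask_byte & (1 << (7 - i % 8)):
            total = (total + byte) & 0xFFFF  # fold into 16 bits
    return total
```

With a mask of `0xC0` (only the first two bytes protected), flipping bytes three and four leaves the checksum unchanged, so such a packet would not be discarded; flipping a protected byte does change it.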
-
Patent number: 11108894
Abstract: A masked packet checksum is utilized to provide error detection and/or error correction for only discrete portions of a packet, to the exclusion of other portions, thereby avoiding retransmission if transmission errors appear only in portions excluded by the masked packet checksum. A bitmask identifies packet portions whose data is to be protected with error detection and/or error correction schemes, packet portions whose data is to be excluded from such error detection and/or error correction schemes, or combinations thereof. A bitmask can be a per-packet specification, incorporated into one or more fields of individual packets, or a single bitmask can apply equally to multiple packets, which can be delineated in numerous ways, and can be separately transmitted or derived. Bitmasks can be generated at higher layers with lower layer mechanisms deactivated, or can be generated at lower layers based upon data passed down.
Type: Grant
Filed: August 9, 2019
Date of Patent: August 31, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Adrian Michael Caulfield, Michael Konstantinos Papamichael
-
Patent number: 11068412
Abstract: Techniques are disclosed for implementing direct memory access in a virtualized computing environment. A new mapping of interfaces between the RNIC Consumer and the RDMA Transport is defined, which enables efficient retry, a zombie detection mechanism, and identification and handling of invalid requests without bringing down the RDMA connection. Techniques are disclosed for out-of-order placement and delivery of ULP Requests without constraining the RNIC Consumer to ordered networking behavior if it is not required for the ULP (e.g., storage). This allows efficient deployment of an RDMA-accelerated storage workload in a lossy network configuration, and a reduction in latency jitter.
Type: Grant
Filed: February 22, 2019
Date of Patent: July 20, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Matthew Graham Humphrey, Vadim Makhervaks, Michael Konstantinos Papamichael
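Out-of-order placement means arriving fragments are written directly to their target offset, with the ULP request completing only once every byte of the expected range has been placed — there is no reordering queue and no head-of-line blocking. The class below is an illustrative sketch of that completion model; its name and interface are assumptions for the example, not the patent's actual interface.

```python
class OutOfOrderReceiver:
    """Sketch of out-of-order placement with completion-based delivery:
    fragments are placed at their offset in any arrival order, and the
    request is delivered to the consumer only when the full byte range
    has been received."""

    def __init__(self, total_len: int):
        self.buffer = bytearray(total_len)
        self.received = [False] * total_len  # per-byte arrival tracking

    def place(self, offset: int, data: bytes) -> bool:
        # Direct placement at the target offset, regardless of order.
        self.buffer[offset:offset + len(data)] = data
        for i in range(offset, offset + len(data)):
            self.received[i] = True
        return self.complete()  # True once the request can be delivered

    def complete(self) -> bool:
        return all(self.received)
```

In a lossy network this is what avoids latency jitter: a dropped or reordered fragment stalls only the bytes it carries, not every fragment behind it.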
-
Patent number: 11025564
Abstract: Techniques are disclosed for implementing direct memory access in a virtualized computing environment. A new mapping of interfaces between the RNIC Consumer and the RDMA Transport is defined, which enables efficient retry, a zombie detection mechanism, and identification and handling of invalid requests without bringing down the RDMA connection. Techniques are disclosed for out-of-order placement and delivery of ULP Requests without constraining the RNIC Consumer to ordered networking behavior if it is not required for the ULP (e.g., storage). This allows efficient deployment of an RDMA-accelerated storage workload in a lossy network configuration, and a reduction in latency jitter.
Type: Grant
Filed: February 22, 2019
Date of Patent: June 1, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Matthew Graham Humphrey, Vadim Makhervaks, Michael Konstantinos Papamichael
-
Patent number: 10958717
Abstract: A server system is provided that includes a plurality of servers, each server including at least one hardware acceleration device and at least one processor communicatively coupled to the hardware acceleration device by an internal data bus and executing a host server instance, the host server instances of the plurality of servers collectively providing a software plane, and the hardware acceleration devices of the plurality of servers collectively providing a hardware acceleration plane that implements a plurality of hardware accelerated services, wherein each hardware acceleration device maintains in memory a data structure that contains load data indicating a load of each of a plurality of target hardware acceleration devices, and wherein a requesting hardware acceleration device routes a request to a target hardware acceleration device that is indicated by the load data in the data structure to have a lower load than the other target hardware acceleration devices.
Type: Grant
Filed: August 30, 2019
Date of Patent: March 23, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Adrian Michael Caulfield, Eric S. Chung, Michael Konstantinos Papamichael, Douglas C. Burger, Shlomi Alkalay
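The routing decision in this abstract — each device keeps a local load table for candidate targets and sends the request to the least-loaded one — reduces to a minimum selection over that table. The sketch below illustrates only that decision; the table's contents, device names, and load units are assumptions for the example, and a real system would populate and refresh the table from exchanged load data.

```python
def route_request(load_table: dict) -> str:
    """Pick the target hardware acceleration device whose reported load
    is lowest in the locally maintained load table. Keys are device
    identifiers (illustrative), values are load estimates."""
    return min(load_table, key=load_table.get)
```

For example, given a table reporting loads for three candidate devices, the request is routed to the device with the smallest load value.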
-
Publication number: 20210044679
Abstract: A masked packet checksum is utilized to provide error detection and/or error correction for only discrete portions of a packet, to the exclusion of other portions, thereby avoiding retransmission if transmission errors appear only in portions excluded by the masked packet checksum. A bitmask identifies packet portions whose data is to be protected with error detection and/or error correction schemes, packet portions whose data is to be excluded from such error detection and/or error correction schemes, or combinations thereof. A bitmask can be a per-packet specification, incorporated into one or more fields of individual packets, or a single bitmask can apply equally to multiple packets, which can be delineated in numerous ways, and can be separately transmitted or derived. Bitmasks can be generated at higher layers with lower layer mechanisms deactivated, or can be generated at lower layers based upon data passed down.
Type: Application
Filed: August 9, 2019
Publication date: February 11, 2021
Inventors: Adrian Michael CAULFIELD, Michael Konstantinos PAPAMICHAEL
-
Patent number: 10812415
Abstract: Active intelligent message filtering can be utilized to provide error resiliency, thereby allowing messages to be received without traditional error detection and, in turn, avoiding the inefficiency of retransmitting network communications discarded due to network transmission errors detected by such traditional error detection mechanisms. Network transmission errors can result in the receiving application receiving messages that appear to comprise values that differ from the values originally transmitted by the transmitting application. Based on the inaccuracy tolerance applicable to the transmitting and receiving applications, rules can be applied to actively and intelligently filter the received messages, replacing the received values with replacement values according to the rules. In such a manner, the receiving application can continue to receive usable data from the transmitting application without any error detection at lower network communication levels.
Type: Grant
Filed: August 13, 2019
Date of Patent: October 20, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Adrian Michael Caulfield, Michael Konstantinos Papamichael
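The filtering idea can be sketched as a per-field rule pass: values within the application's inaccuracy tolerance pass through unchanged, while implausible values (likely corrupted in transit) are replaced per the rule instead of forcing a retransmission. The rule representation below (a tolerated range plus a replacement value) is an assumption made for the example, not the patent's rule format.

```python
def filter_message(values, rules):
    """Apply one filtering rule per received field. Each rule is an
    illustrative (lo, hi, replacement) triple: values inside [lo, hi]
    are kept as received; values outside it are assumed corrupted and
    replaced, so the message stays usable without retransmission."""
    out = []
    for value, (lo, hi, replacement) in zip(values, rules):
        out.append(value if lo <= value <= hi else replacement)
    return out
```

This trades exactness for latency: an application that tolerates occasional substituted values (e.g. sensor streams) keeps consuming data even when lower layers perform no error detection at all.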
-
Publication number: 20200272579
Abstract: Techniques are disclosed for implementing direct memory access in a virtualized computing environment. A new mapping of interfaces between the RNIC Consumer and the RDMA Transport is defined, which enables efficient retry, a zombie detection mechanism, and identification and handling of invalid requests without bringing down the RDMA connection. Techniques are disclosed for out-of-order placement and delivery of ULP Requests without constraining the RNIC Consumer to ordered networking behavior if it is not required for the ULP (e.g., storage). This allows efficient deployment of an RDMA-accelerated storage workload in a lossy network configuration, and a reduction in latency jitter.
Type: Application
Filed: February 22, 2019
Publication date: August 27, 2020
Inventors: Matthew Graham HUMPHREY, Vadim MAKHERVAKS, Michael Konstantinos PAPAMICHAEL
-
Publication number: 20200274832
Abstract: Techniques are disclosed for implementing direct memory access in a virtualized computing environment. A new mapping of interfaces between the RNIC Consumer and the RDMA Transport is defined, which enables efficient retry, a zombie detection mechanism, and identification and handling of invalid requests without bringing down the RDMA connection. Techniques are disclosed for out-of-order placement and delivery of ULP Requests without constraining the RNIC Consumer to ordered networking behavior if it is not required for the ULP (e.g., storage). This allows efficient deployment of an RDMA-accelerated storage workload in a lossy network configuration, and a reduction in latency jitter.
Type: Application
Filed: February 22, 2019
Publication date: August 27, 2020
Inventors: Matthew Graham HUMPHREY, Vadim MAKHERVAKS, Michael Konstantinos PAPAMICHAEL
-
Publication number: 20190394260
Abstract: A server system is provided that includes a plurality of servers, each server including at least one hardware acceleration device and at least one processor communicatively coupled to the hardware acceleration device by an internal data bus and executing a host server instance, the host server instances of the plurality of servers collectively providing a software plane, and the hardware acceleration devices of the plurality of servers collectively providing a hardware acceleration plane that implements a plurality of hardware accelerated services, wherein each hardware acceleration device maintains in memory a data structure that contains load data indicating a load of each of a plurality of target hardware acceleration devices, and wherein a requesting hardware acceleration device routes a request to a target hardware acceleration device that is indicated by the load data in the data structure to have a lower load than the other target hardware acceleration devices.
Type: Application
Filed: August 30, 2019
Publication date: December 26, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: Adrian Michael Caulfield, Eric S. Chung, Michael Konstantinos Papamichael, Douglas C. Burger, Shlomi Alkalay
-
Patent number: 10425472
Abstract: A server system is provided that includes a plurality of servers, each server including at least one hardware acceleration device and at least one processor communicatively coupled to the hardware acceleration device by an internal data bus and executing a host server instance, the host server instances of the plurality of servers collectively providing a software plane, and the hardware acceleration devices of the plurality of servers collectively providing a hardware acceleration plane that implements a plurality of hardware accelerated services, wherein each hardware acceleration device maintains in memory a data structure that contains load data indicating a load of each of a plurality of target hardware acceleration devices, and wherein a requesting hardware acceleration device routes a request to a target hardware acceleration device that is indicated by the load data in the data structure to have a lower load than the other target hardware acceleration devices.
Type: Grant
Filed: January 17, 2017
Date of Patent: September 24, 2019
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Adrian Michael Caulfield, Eric S. Chung, Michael Konstantinos Papamichael, Douglas C. Burger, Shlomi Alkalay
-
Publication number: 20180205785
Abstract: A server system is provided that includes a plurality of servers, each server including at least one hardware acceleration device and at least one processor communicatively coupled to the hardware acceleration device by an internal data bus and executing a host server instance, the host server instances of the plurality of servers collectively providing a software plane, and the hardware acceleration devices of the plurality of servers collectively providing a hardware acceleration plane that implements a plurality of hardware accelerated services, wherein each hardware acceleration device maintains in memory a data structure that contains load data indicating a load of each of a plurality of target hardware acceleration devices, and wherein a requesting hardware acceleration device routes a request to a target hardware acceleration device that is indicated by the load data in the data structure to have a lower load than the other target hardware acceleration devices.
Type: Application
Filed: January 17, 2017
Publication date: July 19, 2018
Applicant: Microsoft Technology Licensing, LLC
Inventors: Adrian Michael Caulfield, Eric S. Chung, Michael Konstantinos Papamichael, Douglas C. Burger, Shlomi Alkalay