Patents by Inventor Anil Vasudevan
Anil Vasudevan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12147832
Abstract: The handling of external calls from one or more services to one or more subservices is described. Upon detecting that a service has made an external call to a subservice, and prior to allowing the external call to be sent to the subservice, a system evaluates the external call against one or more pre-call thresholds to determine whether they are met. If a pre-call threshold is not met, the external call is failed without being sent to the subservice; failing the call may include notifying the calling service that the call has failed. Otherwise, the system sends the external call to the subservice. By applying these thresholds, the service is prevented from consuming excessive resources.
Type: Grant
Filed: December 2, 2021
Date of Patent: November 19, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Nishand Lalithambika Vasudevan, Akshay Navneetlal Mutha, Abhishek Anil Kakhandiki, Sathya Narayanan Ramamirtham
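The gating idea in this abstract can be modeled in a few lines: before a call leaves the service, it is checked against a budget, and if the budget is exhausted the call fails fast without reaching the subservice. This is a minimal illustrative sketch; the class and threshold names (`CallGate`, `max_outstanding`) are assumptions, not terms from the patent.

```python
# Minimal sketch of pre-call threshold gating (names are illustrative).
class CallGate:
    def __init__(self, max_outstanding):
        self.max_outstanding = max_outstanding
        self.outstanding = 0

    def call(self, subservice, payload):
        # Pre-call threshold: too many calls already in flight -> fail fast,
        # without the call ever being sent to the subservice.
        if self.outstanding >= self.max_outstanding:
            return ("failed", "pre-call threshold exceeded")
        self.outstanding += 1
        try:
            return ("ok", subservice(payload))
        finally:
            self.outstanding -= 1

gate = CallGate(max_outstanding=2)
ok_result = gate.call(lambda p: p.upper(), "hello")
gate.outstanding = 2          # simulate two calls already in flight
blocked = gate.call(lambda p: p, "again")
```

A real system would likely combine several such thresholds (rate, concurrency, cost) and report the failure back to the caller, as the abstract describes.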
-
Publication number: 20240267340
Abstract: Examples described herein relate to a network interface device that includes an interface to a port and circuitry. The circuitry can be configured to: receive a first packet that comprises a time stamp associated with a prior or originating transmission of the first packet by a transmitter network interface device; enqueue an entry for the first packet in a queue; and dequeue the entry based at least in part on the time stamp.
Type: Application
Filed: March 27, 2024
Publication date: August 8, 2024
Inventors: Anil VASUDEVAN, Roberto PENARANDA CEBRIAN, Md Ashiqur RAHMAN, Pedro YEBENES SEGURA, Allister ALEMANIA
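Dequeuing by the transmit timestamp rather than by arrival order can be sketched with a priority queue keyed on the timestamp carried in each packet. This is only a model of the idea described in the abstract; the field and class names are assumptions.

```python
# Illustrative model: entries are dequeued in transmit-timestamp order,
# not arrival order.
import heapq

class TimestampQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker for equal timestamps

    def enqueue(self, tx_timestamp, packet):
        heapq.heappush(self._heap, (tx_timestamp, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        # Entry with the earliest transmit timestamp comes out first.
        return heapq.heappop(self._heap)[2]

q = TimestampQueue()
q.enqueue(30, "c")   # arrives first, transmitted last
q.enqueue(10, "a")
q.enqueue(20, "b")
order = [q.dequeue() for _ in range(3)]
```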
-
Publication number: 20240264871
Abstract: The disclosure concerns at least one processor that can execute a polling group to poll for storage transactions associated with a first group of one or more particular queue identifiers. The at least one processor is configured to: execute a second polling group on a second processor, wherein the second polling group is to poll for storage transactions for a second group of one or more particular queue identifiers that are different from those of the first group, and wherein the second group of queue identifiers is associated with one or more queues that can be accessed using the second polling group and not the first polling group.
Type: Application
Filed: March 27, 2024
Publication date: August 8, 2024
Applicant: Intel Corporation
Inventors: Ziye YANG, James R. HARRIS, Kiran PATIL, Benjamin WALKER, Sudheer MOGILAPPAGARI, Yadong LI, Mark WUNDERLICH, Anil VASUDEVAN
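The key property here is that the polling groups own disjoint sets of queue identifiers, so a queue is only ever polled by the group it belongs to. A small sketch of that ownership model, with invented names:

```python
# Sketch: each polling group owns a disjoint set of queue IDs and polls
# only the queues it owns.
class PollingGroup:
    def __init__(self, queue_ids):
        self.queue_ids = set(queue_ids)
        self.queues = {qid: [] for qid in self.queue_ids}

    def submit(self, qid, transaction):
        if qid not in self.queue_ids:
            raise KeyError(f"queue {qid} not owned by this group")
        self.queues[qid].append(transaction)

    def poll(self):
        # Drain every owned queue; queues of other groups are untouchable.
        done = []
        for qid in self.queue_ids:
            done.extend(self.queues[qid])
            self.queues[qid].clear()
        return done

g1 = PollingGroup([0, 1])
g2 = PollingGroup([2, 3])   # disjoint from g1, as the abstract requires
g1.submit(0, "read-A")
g2.submit(2, "write-B")
polled1, polled2 = g1.poll(), g2.poll()
```

In a real deployment each group would run pinned to its own processor, which is what makes the disjoint ownership useful (no cross-core locking on the queues).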
-
Publication number: 20240126622
Abstract: A set of threads of an application are identified to be executed on a platform, where the platform comprises a multi-node architecture. A set of queues of an I/O device of the platform are reserved and associated with one of a plurality of nodes in the multi-node architecture. Data is received at the I/O device, where the I/O device is included in a particular one of the plurality of nodes. Response data is generated through execution of a thread in the set of threads using a processing core and memory of the particular node, and the response data is caused to be sent on the I/O device based on inclusion of the I/O device in the particular node.
Type: Application
Filed: December 27, 2023
Publication date: April 18, 2024
Inventors: Anil Vasudevan, Sridhar Samudrala, Tushar S. Gohad, Nash A. Kleppan, Stefan T. Peters
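The node-affine dispatch described above can be sketched as a lookup from a reserved I/O queue to its node, so the thread, memory, and response all stay node-local. The node/queue layout below is invented purely for illustration.

```python
# Sketch of node-affine dispatch: a request arriving on a NIC queue is
# handled on a core of the node that owns that queue.
NODES = {
    0: {"cores": [0, 1], "nic_queues": [0, 1]},
    1: {"cores": [2, 3], "nic_queues": [2, 3]},
}

def node_for_queue(qid):
    for node, res in NODES.items():
        if qid in res["nic_queues"]:
            return node
    raise ValueError(f"unreserved queue {qid}")

def handle(qid, request):
    node = node_for_queue(qid)
    core = NODES[node]["cores"][0]   # run on a core of the same node
    # Response is generated with node-local core and memory, then sent
    # back out on the same node's I/O device.
    return {"node": node, "core": core, "response": request[::-1]}

out = handle(2, "ping")
```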
-
Publication number: 20240089219
Abstract: Examples described herein relate to a switch. In some examples, the switch includes circuitry that is configured to: based on receipt of a packet and a level of a first queue, select among a first memory and a second memory device among multiple second memory devices to store the packet; based on selection of the first memory, store the packet in the first memory; and based on selection of a second memory device among the multiple second memory devices, store the packet into the selected second memory device. In some examples, the packet is associated with an ingress port and an egress port, and the selected second memory device is associated with a third port that is different than the ingress port or the egress port associated with the packet.
Type: Application
Filed: November 10, 2023
Publication date: March 14, 2024
Inventors: Md Ashiqur RAHMAN, Roberto PENARANDA CEBRIAN, Anil VASUDEVAN, Allister ALEMANIA, Pedro YEBENES SEGURA
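The selection logic amounts to: store in fast first-tier memory while the queue is shallow, and spill to one of several second-tier devices once it fills. A toy model, with an invented threshold and a least-loaded spill policy that is an assumption, not the patented selection rule:

```python
# Toy model of two-tier packet storage driven by queue fill level.
SPILL_THRESHOLD = 4  # illustrative value

def store_packet(queue_level, fast_mem, spill_devices, packet):
    if queue_level < SPILL_THRESHOLD:
        fast_mem.append(packet)
        return "fast"
    # Spill: pick the least-loaded second memory device (assumed policy).
    dev = min(spill_devices, key=len)
    dev.append(packet)
    return "spill"

fast, spill = [], [[], ["old"]]
placed_a = store_packet(queue_level=1, fast_mem=fast, spill_devices=spill, packet="A")
placed_b = store_packet(queue_level=9, fast_mem=fast, spill_devices=spill, packet="B")
```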
-
Publication number: 20230421512
Abstract: Generally, this disclosure provides devices, methods, and computer readable media for packet processing with reduced latency. The device may include a data queue to store data descriptors associated with data packets, the data packets to be transferred between a network and a driver circuit. The device may also include an interrupt generation circuit to generate an interrupt to the driver circuit. The interrupt may be generated in response to a combination of an expiration of a delay timer and a non-empty condition of the data queue. The device may further include an interrupt delay register to enable the driver circuit to reset the delay timer, the reset postponing the interrupt generation.
Type: Application
Filed: September 8, 2023
Publication date: December 28, 2023
Applicant: Intel Corporation
Inventors: Eliezer Tamir, Jesse C. Brandeburg, Anil Vasudevan
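The interrupt condition here is a conjunction: the delay timer must have expired AND the data queue must be non-empty, and the driver can push the deadline out via the delay register to keep polling instead. A software model of that state machine (time is a plain counter, not real hardware):

```python
# Model of delayed interrupt generation: fire only when the delay timer
# has expired AND the descriptor queue is non-empty.
class InterruptModel:
    def __init__(self, delay):
        self.delay = delay
        self.deadline = delay
        self.queue = []

    def reset_timer(self, now):
        # Driver writes the interrupt-delay register: postpone the interrupt.
        self.deadline = now + self.delay

    def should_interrupt(self, now):
        return now >= self.deadline and bool(self.queue)

m = InterruptModel(delay=10)
m.queue.append("descriptor")
fires_early = m.should_interrupt(now=5)     # timer not yet expired
fires_late = m.should_interrupt(now=10)     # expired and queue non-empty
m.reset_timer(now=10)                       # driver postpones the interrupt
fires_after_reset = m.should_interrupt(now=15)
```

This is the mechanism behind busy-polling drivers: while the driver is actively polling, it keeps resetting the timer so no interrupt fires; if it stops polling, the pending descriptors eventually raise one.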
-
Publication number: 20230403236
Abstract: Examples include techniques to shape network traffic for server-based computational storage. Examples include use of a class of service associated with a compute offload request that is to be sent to a computational storage server in a compute offload command. The class of service facilitates storage of the compute offload command in one or more queues of a network interface device at the computational storage server. The storage of the compute offload command to the one or more queues is associated with scheduling a block-based compute operation for execution by compute circuitry at the computational storage server to fulfill the compute offload request indicated in the compute offload command.
Type: Application
Filed: August 25, 2023
Publication date: December 14, 2023
Inventors: Michael MESNIER, Anil VASUDEVAN, Kelley MULLICK
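A class of service selecting which NIC queue an offload command lands in can be sketched as per-class queues drained in priority order. The class names and the strict-priority drain policy below are assumptions for illustration only.

```python
# Sketch: compute-offload commands are queued by class of service and
# drained highest class first (assumed policy).
from collections import deque

class OffloadScheduler:
    def __init__(self, classes):
        # Classes listed highest-priority first; dicts preserve order.
        self.queues = {c: deque() for c in classes}

    def submit(self, cos, command):
        self.queues[cos].append(command)

    def next_command(self):
        for cos, q in self.queues.items():
            if q:
                return q.popleft()
        return None

sched = OffloadScheduler(["gold", "silver"])
sched.submit("silver", "checksum block 7")
sched.submit("gold", "filter block 3")
first = sched.next_command()
```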
-
Patent number: 11843550
Abstract: Generally, this disclosure provides devices, methods, and computer readable media for packet processing with reduced latency. The device may include a data queue to store data descriptors associated with data packets, the data packets to be transferred between a network and a driver circuit. The device may also include an interrupt generation circuit to generate an interrupt to the driver circuit. The interrupt may be generated in response to a combination of an expiration of a delay timer and a non-empty condition of the data queue. The device may further include an interrupt delay register to enable the driver circuit to reset the delay timer, the reset postponing the interrupt generation.
Type: Grant
Filed: October 19, 2021
Date of Patent: December 12, 2023
Assignee: Intel Corporation
Inventors: Eliezer Tamir, Jesse C. Brandeburg, Anil Vasudevan
-
Patent number: 11816036
Abstract: A method and system for performing data movement operations are described herein. One embodiment of a method includes: storing data for a first memory address in a cache line of a memory of a first processing unit, the cache line associated with a coherency state indicating that the memory has sole ownership of the cache line; decoding an instruction for execution by a second processing unit, the instruction comprising a source data operand specifying the first memory address and a destination operand specifying a memory location in the second processing unit; and responsive to executing the decoded instruction, copying data from the cache line of the memory of the first processing unit, as identified by the first memory address, to the memory location of the second processing unit, wherein responsive to the copy, the cache line is to remain in the memory and the coherency state is to remain unchanged.
Type: Grant
Filed: May 6, 2022
Date of Patent: November 14, 2023
Assignee: Intel Corporation
Inventors: Anil Vasudevan, Venkata Krishnan, Andrew J. Herdrich, Ren Wang, Robert G. Blankenship, Vedaraman Geetha, Shrikant M. Shah, Marshall A. Millier, Raanan Sade, Binh Q. Pham, Olivier Serres, Chyi-Chang Miao, Christopher B. Wilkerson
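The distinctive point of this instruction is what it does not do: the copy pulls data out of another unit's cache line without evicting the line or downgrading its coherency state. A toy software model of that semantics (states and names are illustrative, not the actual protocol):

```python
# Toy model: copy from a remote cache line while leaving its coherency
# state (here "E" for Exclusive) untouched.
class CacheLine:
    def __init__(self, addr, data, state):
        self.addr, self.data, self.state = addr, data, state

def remote_copy(src_line, dst_memory, dst_addr):
    dst_memory[dst_addr] = src_line.data   # copy the payload
    # Crucially, the source line is neither evicted nor downgraded.
    return src_line.state

line = CacheLine(addr=0x1000, data=b"\xde\xad", state="E")
local = {}
state_after = remote_copy(line, local, dst_addr=0x2000)
```

A conventional read of a remotely owned line would typically force a coherency transition on the owner; avoiding that is the latency win the abstract describes.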
-
Patent number: 11797333
Abstract: Methods for performing efficient receive interrupt signaling and associated apparatus, computing platform, software, and firmware. Receive (RX) queues in which descriptors associated with packets are enqueued are implemented in host memory and logically partitioned into pools, with each RX queue pool associated with a respective interrupt vector. Receive event queues (REQs) associated with respective RX queue pools and interrupt vectors are also implemented in host memory. Event generation is selectively enabled for some RX queues, while event generation is masked for others. In response to event causes for RX queues that are event generation-enabled, associated events are generated and enqueued in the REQs and interrupts on associated interrupt vectors are asserted. The events are serviced by accessing the events in the REQs, which identify the RX queue for the event and a next activity location at which a next descriptor to be processed is located.
Type: Grant
Filed: December 11, 2019
Date of Patent: October 24, 2023
Assignee: Intel Corporation
Inventors: Linden Cornett, Anil Vasudevan, Parthasarathy Sarangam, Kiran Patil
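A minimal model of the pool/REQ structure: each pool of RX queues shares one receive event queue and one interrupt vector, and event generation can be masked per queue so only enabled queues post events. All of the structure below is a modeling assumption.

```python
# Model: an RX queue pool with a shared receive event queue (REQ) and
# per-queue event masking.
class RxPool:
    def __init__(self, rx_queues):
        self.enabled = {q: False for q in rx_queues}  # events masked by default
        self.req = []              # receive event queue for the whole pool
        self.interrupted = False   # models the pool's interrupt vector

    def enable_events(self, rx_queue):
        self.enabled[rx_queue] = True

    def on_receive(self, rx_queue, descriptor_index):
        if self.enabled[rx_queue]:
            # Event records the queue and the next descriptor to process.
            self.req.append((rx_queue, descriptor_index))
            self.interrupted = True

pool = RxPool(rx_queues=[0, 1])
pool.enable_events(0)
pool.on_receive(0, 42)   # enabled: posts an event and asserts the vector
pool.on_receive(1, 7)    # masked: no event is generated
```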
-
Publication number: 20230300078
Abstract: There is disclosed in one example a network interface card (NIC), comprising: an ingress interface to receive incoming traffic; a plurality of queues to queue incoming traffic; an egress interface to direct incoming traffic to a plurality of server applications; and a queuing engine, including logic to: uniquely associate a queue with a selected server application; receive an incoming network packet; determine that the selected server application may process the incoming network packet; and assign the incoming network packet to the queue.
Type: Application
Filed: May 23, 2023
Publication date: September 21, 2023
Applicant: Intel Corporation
Inventors: Anil Vasudevan, Kiran A. Patil, Arun Chekhov Ilango
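The core of this queuing engine is a one-to-one binding between a NIC queue and a server application, so that packets matching that application's filter land only in its queue. A sketch, where matching on destination port is an assumption made for illustration:

```python
# Sketch of an application-dedicated queue: bind a queue to one app and
# steer matching packets into it.
class QueuingEngine:
    def __init__(self):
        self.queue_for_port = {}
        self.queues = {}

    def bind(self, app_port, app_name):
        # Uniquely associate a queue with the selected server application.
        self.queue_for_port[app_port] = app_name
        self.queues[app_name] = []

    def assign(self, packet):
        app = self.queue_for_port.get(packet["dport"], "default")
        self.queues.setdefault(app, []).append(packet)
        return app

eng = QueuingEngine()
eng.bind(6379, "redis")
target = eng.assign({"dport": 6379, "payload": b"GET k"})
other = eng.assign({"dport": 80, "payload": b"GET /"})
```

Dedicating a queue per application avoids head-of-line blocking between applications sharing the NIC, which is the practical motivation for this design.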
-
Patent number: 11706151
Abstract: There is disclosed in one example a network interface card (NIC), comprising: an ingress interface to receive incoming traffic; a plurality of queues to queue incoming traffic; an egress interface to direct incoming traffic to a plurality of server applications; and a queuing engine, including logic to: uniquely associate a queue with a selected server application; receive an incoming network packet; determine that the selected server application may process the incoming network packet; and assign the incoming network packet to the queue.
Type: Grant
Filed: December 30, 2021
Date of Patent: July 18, 2023
Assignee: Intel Corporation
Inventors: Anil Vasudevan, Kiran A. Patil, Arun Chekhov Ilango
-
Patent number: 11657015
Abstract: A device is provided with two or more uplink ports to connect the device via two or more links to one or more sockets, where each of the sockets includes one or more processing cores, and each of the two or more links is compliant with a particular interconnect protocol. The device further includes I/O logic to identify data to be sent to the one or more processing cores for processing, determine an affinity attribute associated with the data, and determine which of the two or more links to use to send the data to the one or more processing cores based on the affinity attribute.
Type: Grant
Filed: January 20, 2021
Date of Patent: May 23, 2023
Assignee: Intel Corporation
Inventors: Debendra Das Sharma, Anil Vasudevan, David Harriman
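Link selection by affinity reduces to a lookup: the affinity attribute attached to the data names the socket whose cores should process it, and that determines which uplink carries it. The table and field names below are invented for illustration.

```python
# Sketch: a multi-uplink device picks the link to a socket based on an
# affinity attribute carried with the data.
LINK_FOR_SOCKET = {0: "uplink-A", 1: "uplink-B"}

def pick_link(data):
    # Affinity attribute (assumed field) names the target socket; default
    # to socket 0 when no affinity is given.
    socket = data.get("affinity_socket", 0)
    return LINK_FOR_SOCKET[socket]

link = pick_link({"affinity_socket": 1, "buf": b"\x00" * 64})
default_link = pick_link({"buf": b"\x00" * 64})
```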
-
Patent number: 11502952
Abstract: Devices and techniques for reorder resilient transport are described herein. A device may store data packets in sequential positions of a flow queue in an order in which the data packets were received. The device may retrieve a first data packet from a first sequential position and a second data packet from a second sequential position that is next in sequence to the first sequential position in the flow queue. The device may store the first data packet and the second data packet in a buffer and refrain from providing the first data packet and the second data packet to upper layer circuitry if the packet order information for the first data packet and the second data packet indicates that the first data packet and the second data packet were received out of order. Other embodiments are also described.
Type: Grant
Filed: May 2, 2018
Date of Patent: November 15, 2022
Assignee: Intel Corporation
Inventors: Anil Vasudevan, Parthasarathy Sarangam, Eric Mann, Daniel Cohn
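The behavior described, holding out-of-order packets back from the upper layer until the sequence is contiguous again, can be sketched with a small reorder buffer. Sequence numbers are assumed per-flow; the class name is illustrative.

```python
# Sketch of reorder-resilient receive: hold out-of-order packets and
# release the longest in-order prefix once gaps are filled.
class ReorderBuffer:
    def __init__(self):
        self.next_seq = 0
        self.held = {}

    def receive(self, seq, packet):
        delivered = []
        self.held[seq] = packet
        # Deliver to the upper layer only while the sequence is contiguous.
        while self.next_seq in self.held:
            delivered.append(self.held.pop(self.next_seq))
            self.next_seq += 1
        return delivered

rb = ReorderBuffer()
out1 = rb.receive(1, "pkt1")   # out of order: held, nothing delivered
out2 = rb.receive(0, "pkt0")   # gap filled: both released in order
```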
-
Publication number: 20220321478
Abstract: Examples described herein relate to a switch comprising: circuitry to detect congestion at a target port and re-direct one or more packets directed to the target port to one or more other ports for re-circulation via one or more uncongested ports based on congestion at the target port. In some examples, the circuitry is to identify the target port in the re-directed one or more packets. In some examples, the circuitry is to transmit a congestion level indicator to the one or more other ports based on a congestion level of the target port.
Type: Application
Filed: June 13, 2022
Publication date: October 6, 2022
Inventors: Anil VASUDEVAN, Grzegorz JERECZEK, Parthasarathy SARANGAM
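The abstract describes tagging a re-directed packet with its true target port and re-circulating it through an uncongested port instead of dropping it. A toy model of that decision, with port state as a simple dict and a lowest-numbered-free-port detour policy that is an assumption:

```python
# Toy model: congested target -> tag the packet with its real target and
# re-circulate via an uncongested port.
def route(packet, target_port, port_congested):
    if not port_congested[target_port]:
        return ("egress", target_port, packet)
    # Pick an uncongested detour port (assumed policy: lowest port number).
    detour = next(p for p, busy in sorted(port_congested.items()) if not busy)
    tagged = dict(packet, real_target=target_port)  # identify the target port
    return ("recirculate", detour, tagged)

state = {0: True, 1: False, 2: True}
verdict = route({"id": 99}, target_port=2, port_congested=state)
direct = route({"id": 100}, target_port=1, port_congested=state)
```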
-
Publication number: 20220261351
Abstract: A method and system for performing data movement operations are described herein. One embodiment of a method includes: storing data for a first memory address in a cache line of a memory of a first processing unit, the cache line associated with a coherency state indicating that the memory has sole ownership of the cache line; decoding an instruction for execution by a second processing unit, the instruction comprising a source data operand specifying the first memory address and a destination operand specifying a memory location in the second processing unit; and responsive to executing the decoded instruction, copying data from the cache line of the memory of the first processing unit, as identified by the first memory address, to the memory location of the second processing unit, wherein responsive to the copy, the cache line is to remain in the memory and the coherency state is to remain unchanged.
Type: Application
Filed: May 6, 2022
Publication date: August 18, 2022
Applicant: Intel Corporation
Inventors: Anil Vasudevan, Venkata Krishnan, Andrew J. Herdrich, Ren Wang, Robert G. Blankenship, Vedaraman Geetha, Shrikant M. Shah, Marshall A. Millier, Raanan Sade, Binh Q. Pham, Olivier Serres, Chyi-Chang Miao, Christopher B. Wilkerson
-
Publication number: 20220166698
Abstract: Examples described herein relate to a packet processing device that includes circuitry to: request network resource consumption data from one or more other packet processing devices by indication in a header of a reliable transport protocol, and transmit the request in a packet that includes the indication in the header. In some examples, the header includes an option field of a transmission control protocol (TCP) packet. In some examples, the network resource consumption data includes a largest network resource consumption data in a path from a sender to a receiver, and potentially one or more next largest network resource consumption data.
Type: Application
Filed: February 8, 2022
Publication date: May 26, 2022
Inventors: Junggun LEE, Grzegorz JERECZEK, Junho SUH, Anil VASUDEVAN
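Carrying the path's largest resource consumption in a header option amounts to a running maximum folded in hop by hop. A sketch with a modeled header; the field names are assumptions, not the actual TCP option layout.

```python
# Sketch: each hop folds its resource consumption into the header's
# running maximum when the request flag is set.
def traverse(header, hop_consumptions):
    for hop_consumption in hop_consumptions:
        if header.get("request_consumption"):
            header["max_consumption"] = max(
                header.get("max_consumption", 0), hop_consumption
            )
    return header

hdr = traverse({"request_consumption": True}, hop_consumptions=[3, 9, 5])
quiet = traverse({}, hop_consumptions=[3, 9, 5])  # no request: hops stay silent
```

At the receiver, `max_consumption` reports the most heavily loaded hop on the path, which a congestion-control algorithm could then react to.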
-
Patent number: 11327894
Abstract: A method and system for performing data movement operations are described herein. One embodiment of a method includes: storing data for a first memory address in a cache line of a memory of a first processing unit, the cache line associated with a coherency state indicating that the memory has sole ownership of the cache line; decoding an instruction for execution by a second processing unit, the instruction comprising a source data operand specifying the first memory address and a destination operand specifying a memory location in the second processing unit; and responsive to executing the decoded instruction, copying data from the cache line of the memory of the first processing unit, as identified by the first memory address, to the memory location of the second processing unit, wherein responsive to the copy, the cache line is to remain in the memory and the coherency state is to remain unchanged.
Type: Grant
Filed: March 30, 2020
Date of Patent: May 10, 2022
Assignee: Intel Corporation
Inventors: Anil Vasudevan, Venkata Krishnan, Andrew J. Herdrich, Ren Wang, Robert G. Blankenship, Vedaraman Geetha, Shrikant M. Shah, Marshall A. Millier, Raanan Sade, Binh Q. Pham, Olivier Serres, Chyi-Chang Miao, Christopher B. Wilkerson
-
Publication number: 20220124047
Abstract: There is disclosed in one example a network interface card (NIC), comprising: an ingress interface to receive incoming traffic; a plurality of queues to queue incoming traffic; an egress interface to direct incoming traffic to a plurality of server applications; and a queuing engine, including logic to: uniquely associate a queue with a selected server application; receive an incoming network packet; determine that the selected server application may process the incoming network packet; and assign the incoming network packet to the queue.
Type: Application
Filed: December 30, 2021
Publication date: April 21, 2022
Applicant: Intel Corporation
Inventors: Anil Vasudevan, Kiran A. Patil, Arun Chekhov Ilango
-
Publication number: 20220038395
Abstract: Generally, this disclosure provides devices, methods, and computer readable media for packet processing with reduced latency. The device may include a data queue to store data descriptors associated with data packets, the data packets to be transferred between a network and a driver circuit. The device may also include an interrupt generation circuit to generate an interrupt to the driver circuit. The interrupt may be generated in response to a combination of an expiration of a delay timer and a non-empty condition of the data queue. The device may further include an interrupt delay register to enable the driver circuit to reset the delay timer, the reset postponing the interrupt generation.
Type: Application
Filed: October 19, 2021
Publication date: February 3, 2022
Applicant: Intel Corporation
Inventors: Eliezer Tamir, Jesse C. Brandeburg, Anil Vasudevan