Patents by Inventor Ram Huggahalli
Ram Huggahalli has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11068399
Abstract: Technologies for enforcing coherence ordering in consumer polling interactions include a network interface controller (NIC) of a target computing device which is configured to receive a network packet, write the payload of the network packet to a data storage device of the target computing device, and obtain, subsequent to having transmitted a last write request to write the payload to the data storage device, ownership of a flag cache line of a cache of the target computing device. The NIC is additionally configured to receive a snoop request from a processor of the target computing device, identify whether the received snoop request corresponds to a read flag snoop request associated with an active request being processed by the NIC, and hold the received snoop request for delayed return in response to having identified the received snoop request as the read flag snoop request. Other embodiments are described herein.
Type: Grant
Filed: September 29, 2017
Date of Patent: July 20, 2021
Assignee: Intel Corporation
Inventors: Bin Li, Chunhui Zhang, Ren Wang, Ram Huggahalli
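The ordering guarantee in this abstract can be illustrated with a small Python model (a hypothetical sketch, not the patented implementation): the NIC stages its payload writes, takes ownership of the flag cache line after the last write request, and holds any read-flag snoop from the polling processor until the payload writes have drained, so the consumer can never observe the flag set while the payload is still stale.

```python
class Memory:
    def __init__(self):
        self.lines = {}

class NicModel:
    """Toy model of the ordering rule: payload writes are staged, the flag
    cache line is owned after the last write request, and read-flag snoops
    are held until the staged writes are visible."""
    def __init__(self, mem):
        self.mem = mem
        self.pending_writes = []   # payload writes not yet globally visible
        self.owns_flag = False

    def receive_packet(self, payload):
        for addr, val in payload:
            self.pending_writes.append((addr, val))
        # After transmitting the last payload write request,
        # obtain ownership of the flag cache line.
        self.owns_flag = True

    def snoop_flag(self):
        # A read-flag snoop from the polling processor is held for delayed
        # return: drain every pending payload write first.
        while self.pending_writes:
            addr, val = self.pending_writes.pop(0)
            self.mem.lines[addr] = val
        self.owns_flag = False
        return 1  # the flag now reads as "payload complete"

mem = Memory()
nic = NicModel(mem)
nic.receive_packet([(0x100, b"hello"), (0x140, b"world")])
flag = nic.snoop_flag()  # consumer polls the flag line
assert flag == 1 and mem.lines[0x100] == b"hello"
```

Because the snoop response is delayed behind the payload drain, a consumer that sees flag == 1 is guaranteed to read the completed payload.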
-
Publication number: 20190102301
Abstract: Technologies for enforcing coherence ordering in consumer polling interactions include a network interface controller (NIC) of a target computing device which is configured to receive a network packet, write the payload of the network packet to a data storage device of the target computing device, and obtain, subsequent to having transmitted a last write request to write the payload to the data storage device, ownership of a flag cache line of a cache of the target computing device. The NIC is additionally configured to receive a snoop request from a processor of the target computing device, identify whether the received snoop request corresponds to a read flag snoop request associated with an active request being processed by the NIC, and hold the received snoop request for delayed return in response to having identified the received snoop request as the read flag snoop request. Other embodiments are described herein.
Type: Application
Filed: September 29, 2017
Publication date: April 4, 2019
Inventors: Bin Li, Chunhui Zhang, Ren Wang, Ram Huggahalli
-
Patent number: 8751676
Abstract: A network protocol unit interface is described that uses a message engine to transfer contents of received network protocol units in message segments to a destination message engine. The network protocol unit interface uses a message engine to receive messages whose content is to be transmitted in network protocol units. A message engine transmits message segments to a destination message engine without the message engine transmitter and receiver sharing memory space. In addition, the transmitter message engine can transmit message segments to a receiver message engine by use of a virtual address associated with the receiver message and a queue identifier, as opposed to a memory address.
Type: Grant
Filed: October 29, 2012
Date of Patent: June 10, 2014
Assignee: Intel Corporation
Inventors: Steven King, Ram Huggahalli, Xia Zhu, Mazhar Memon, Frank Berry, Nitin Bhardwaj, Amit Kumar, Theodore Willke, II
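The addressing idea in this abstract can be sketched in Python (an illustrative model with invented names, not the patented design): the transmitter names its destination only by a virtual address and a queue identifier, never a physical memory address, and the two engines share no memory.

```python
class MessageEngine:
    """Receiver-side engine: segments arrive addressed by (virtual address,
    queue id); the engine's private mapping decides where they land."""
    def __init__(self):
        self.queues = {}  # queue id -> list of (vaddr, segment)

    def register_queue(self, qid):
        self.queues[qid] = []

    def deliver(self, qid, vaddr, segment):
        # The sender supplied only a virtual address and queue id;
        # the receiver resolves them privately.
        self.queues[qid].append((vaddr, segment))

    def reassemble(self, qid):
        # Order segments by the virtual address each one targeted.
        return b"".join(seg for _, seg in sorted(self.queues[qid]))

def transmit(dst, qid, base_vaddr, payload, seg_size=4):
    """Transmitter-side engine: split the message into segments and send
    each one addressed by virtual address + queue identifier."""
    for off in range(0, len(payload), seg_size):
        dst.deliver(qid, base_vaddr + off, payload[off:off + seg_size])

rx = MessageEngine()
rx.register_queue(7)
transmit(rx, qid=7, base_vaddr=0x5000, payload=b"segmented payload")
assert rx.reassemble(7) == b"segmented payload"
```

The key property mirrored here is that `transmit` never touches the receiver's storage directly; only the (vaddr, qid) pair crosses between the engines.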
-
Patent number: 8688868
Abstract: A computer system may comprise a second device operating as a producer that may steer data units to a first device operating as a consumer. A processing core of the first device may wake-up the second device after generating a first data unit. The second device may generate steering values after retrieving a first data unit directly from the cache of the first device. The second device may populate a flow table with a plurality of entries using the steering values. The second device may receive a packet over a network and store the packet directly into the cache of the first device using a first steering value. The second device may direct an interrupt signal to the processing core of the first device using a second steering value.
Type: Grant
Filed: September 2, 2011
Date of Patent: April 1, 2014
Assignee: Intel Corporation
Inventors: Anil Vasudevan, Partha Sarangam, Ram Huggahalli, Sujoy Sen
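A minimal Python sketch of the flow-table steering described above (field and class names are illustrative assumptions, not from the patent): the producer keeps a flow table whose entries carry two steering values, one selecting the consumer cache that receives the packet and one selecting the core that receives the interrupt.

```python
from dataclasses import dataclass

@dataclass
class Steering:
    cache_id: int  # first steering value: which cache receives the packet
    core_id: int   # second steering value: which core gets the interrupt

class ProducerDevice:
    """Toy producer (e.g. a NIC) that steers packets per flow-table entry."""
    def __init__(self):
        self.flow_table = {}  # flow key -> Steering

    def learn_flow(self, flow_key, steering):
        self.flow_table[flow_key] = steering

    def receive(self, flow_key, packet, caches, interrupts):
        s = self.flow_table[flow_key]
        caches[s.cache_id].append(packet)  # store directly into that cache
        interrupts.append(s.core_id)       # direct the interrupt to that core

caches = {0: [], 1: []}
interrupts = []
nic = ProducerDevice()
nic.learn_flow(("10.0.0.1", 80), Steering(cache_id=1, core_id=3))
nic.receive(("10.0.0.1", 80), b"GET /", caches, interrupts)
assert caches[1] == [b"GET /"] and interrupts == [3]
```

The two steering values let packet placement and interrupt delivery target the same consumer, which is the point of the producer/consumer arrangement the abstract describes.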
-
Patent number: 8645596
Abstract: Techniques are described that can be used by a message engine to notify a core or hardware thread of activity. For example, an inter-processor interrupt can be used to notify the core or hardware thread. The message engine may generate notifications in response to one or more messages received from a transmitting message engine. Message engines may communicate without sharing memory space.
Type: Grant
Filed: December 30, 2008
Date of Patent: February 4, 2014
Assignee: Intel Corporation
Inventors: Amit Kumar, Steven King, Ram Huggahalli, Xia Zhu, Mazhar Memon, Frank Berry, Nitin Bhardwaj, Theodore Willke, II
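The notification path can be modeled in a few lines of Python (a hedged sketch with invented names; the real mechanism is a hardware inter-processor interrupt): the receiving message engine stores each message in its own storage and raises an IPI toward the target core, with no memory shared with the transmitter.

```python
class Core:
    """Stand-in for a core or hardware thread that counts received IPIs."""
    def __init__(self):
        self.ipis = 0

    def raise_ipi(self):
        self.ipis += 1  # in hardware this would wake the core

class NotifyingEngine:
    """Message engine that notifies its core via an IPI per received message."""
    def __init__(self, core):
        self.core = core
        self.inbox = []  # engine-local storage, not shared with the sender

    def on_message(self, msg):
        self.inbox.append(msg)
        self.core.raise_ipi()  # notify the core of activity

core = Core()
eng = NotifyingEngine(core)
eng.on_message(b"a")
eng.on_message(b"b")
assert core.ipis == 2 and eng.inbox == [b"a", b"b"]
```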
-
Publication number: 20130055263
Abstract: A network protocol unit interface is described that uses a message engine to transfer contents of received network protocol units in message segments to a destination message engine. The network protocol unit interface uses a message engine to receive messages whose content is to be transmitted in network protocol units. A message engine transmits message segments to a destination message engine without the message engine transmitter and receiver sharing memory space. In addition, the transmitter message engine can transmit message segments to a receiver message engine by use of a virtual address associated with the receiver message and a queue identifier, as opposed to a memory address.
Type: Application
Filed: October 29, 2012
Publication date: February 28, 2013
Inventors: Steven King, Ram Huggahalli, Xia Zhu, Mazhar Memon, Frank Berry, Nitin Bhardwaj, Amit Kumar, Theodore Willke, II
-
Patent number: 8307105
Abstract: A network protocol unit interface is described that uses a message engine to transfer contents of received network protocol units in message segments to a destination message engine. The network protocol unit interface uses a message engine to receive messages whose content is to be transmitted in network protocol units. A message engine transmits message segments to a destination message engine without the message engine transmitter and receiver sharing memory space. In addition, the transmitter message engine can transmit message segments to a receiver message engine by use of a virtual address associated with the receiver message and a queue identifier, as opposed to a memory address.
Type: Grant
Filed: June 30, 2011
Date of Patent: November 6, 2012
Assignee: Intel Corporation
Inventors: Steven King, Ram Huggahalli, Xia Zhu, Mazhar Memon, Frank Berry, Nitin Bhardwaj, Amit Kumar, Theodore Willke, II
-
Publication number: 20120023272
Abstract: A computer system may comprise a second device operating as a producer that may steer data units to a first device operating as a consumer. A processing core of the first device may wake-up the second device after generating a first data unit. The second device may generate steering values after retrieving a first data unit directly from the cache of the first device. The second device may populate a flow table with a plurality of entries using the steering values. The second device may receive a packet over a network and store the packet directly into the cache of the first device using a first steering value. The second device may direct an interrupt signal to the processing core of the first device using a second steering value.
Type: Application
Filed: September 2, 2011
Publication date: January 26, 2012
Inventors: Anil Vasudevan, Partha Sarangam, Ram Huggahalli, Sujoy Sen
-
Publication number: 20110258283
Abstract: A network protocol unit interface is described that uses a message engine to transfer contents of received network protocol units in message segments to a destination message engine. The network protocol unit interface uses a message engine to receive messages whose content is to be transmitted in network protocol units. A message engine transmits message segments to a destination message engine without the message engine transmitter and receiver sharing memory space. In addition, the transmitter message engine can transmit message segments to a receiver message engine by use of a virtual address associated with the receiver message and a queue identifier, as opposed to a memory address.
Type: Application
Filed: June 30, 2011
Publication date: October 20, 2011
Inventors: Steven King, Ram Huggahalli, Xia Zhu, Mazhar Memon, Frank Berry, Nitin Bhardwaj, Amit Kumar, Theodore Willke, II
-
Patent number: 8041854
Abstract: A computer system may comprise a second device operating as a producer that may steer data units to a first device operating as a consumer. A processing core of the first device may wake-up the second device after generating a first data unit. The second device may generate steering values after retrieving a first data unit directly from the cache of the first device. The second device may populate a flow table with a plurality of entries using the steering values. The second device may receive a packet over a network and store the packet directly into the cache of the first device using a first steering value. The second device may direct an interrupt signal to the processing core of the first device using a second steering value.
Type: Grant
Filed: September 28, 2007
Date of Patent: October 18, 2011
Assignee: Intel Corporation
Inventors: Anil Vasudevan, Partha Sarangam, Ram Huggahalli, Sujoy Sen
-
Patent number: 7996548
Abstract: A network protocol unit interface is described that uses a message engine to transfer contents of received network protocol units in message segments to a destination message engine. The network protocol unit interface uses a message engine to receive messages whose content is to be transmitted in network protocol units. A message engine transmits message segments to a destination message engine without the message engine transmitter and receiver sharing memory space. In addition, the transmitter message engine can transmit message segments to a receiver message engine by use of a virtual address associated with the receiver message and a queue identifier, as opposed to a memory address.
Type: Grant
Filed: December 30, 2008
Date of Patent: August 9, 2011
Assignee: Intel Corporation
Inventors: Steven King, Ram Huggahalli, Xia Zhu, Mazhar Memon, Frank Berry, Nitin Bhardwaj, Amit Kumar, Theodore Willke, II
-
Publication number: 20100169501
Abstract: A network protocol unit interface is described that uses a message engine to transfer contents of received network protocol units in message segments to a destination message engine. The network protocol unit interface uses a message engine to receive messages whose content is to be transmitted in network protocol units. A message engine transmits message segments to a destination message engine without the message engine transmitter and receiver sharing memory space. In addition, the transmitter message engine can transmit message segments to a receiver message engine by use of a virtual address associated with the receiver message and a queue identifier, as opposed to a memory address.
Type: Application
Filed: December 30, 2008
Publication date: July 1, 2010
Inventors: Steven King, Ram Huggahalli, Xia Zhu, Mazhar Memon, Frank Berry, Nitin Bhardwaj, Amit Kumar, Theodore Willke, II
-
Publication number: 20100169528
Abstract: Techniques are described that can be used by a message engine to notify a core or hardware thread of activity. For example, an inter-processor interrupt can be used to notify the core or hardware thread. The message engine may generate notifications in response to one or more messages received from a transmitting message engine. Message engines may communicate without sharing memory space.
Type: Application
Filed: December 30, 2008
Publication date: July 1, 2010
Inventors: Amit Kumar, Steven King, Ram Huggahalli, Xia Zhu, Mazhar Memon, Frank Berry, Nitin Bhardwaj, Theodore Willke, II
-
Publication number: 20090089505
Abstract: A computer system may comprise a second device operating as a producer that may steer data units to a first device operating as a consumer. A processing core of the first device may wake-up the second device after generating a first data unit. The second device may generate steering values after retrieving a first data unit directly from the cache of the first device. The second device may populate a flow table with a plurality of entries using the steering values. The second device may receive a packet over a network and store the packet directly into the cache of the first device using a first steering value. The second device may direct an interrupt signal to the processing core of the first device using a second steering value.
Type: Application
Filed: September 28, 2007
Publication date: April 2, 2009
Inventors: Anil Vasudevan, Partha Sarangam, Ram Huggahalli, Sujoy Sen
-
Patent number: 7512750
Abstract: A memory controller is described that comprises a compression map cache. The compression map cache is to store information that identifies a cache line's worth of information that has been compressed with another cache line's worth of information. A processor and a memory controller integrated on the same semiconductor die are also described. The memory controller comprises a compression map cache. The compression map cache is to store information that identifies a cache line's worth of information that has been compressed with another cache line's worth of information.
Type: Grant
Filed: December 31, 2003
Date of Patent: March 31, 2009
Assignee: Intel Corporation
Inventors: Chris J. Newburn, Ram Huggahalli, Herbert H J Hum, Ali-Reza Adl-Tabatabai, Anwar M. Ghuloum
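The bookkeeping role of such a structure can be sketched in Python (a hypothetical model; the class and method names are illustrative, not from the patent): the compression map cache records, per cache line, which companion line it has been compressed together with, so the memory controller can tell whether one access yields two lines' worth of data.

```python
class CompressionMapCache:
    """Toy map: cache line address -> the companion line compressed with it."""
    def __init__(self):
        self.map = {}

    def record_pair(self, line_a, line_b):
        # Two cache lines' worth of information compressed into one slot:
        # remember the pairing in both directions.
        self.map[line_a] = line_b
        self.map[line_b] = line_a

    def companion(self, line):
        # Return the line compressed with `line`, or None if uncompressed.
        return self.map.get(line)

cmc = CompressionMapCache()
cmc.record_pair(0x1000, 0x1040)  # e.g. two adjacent lines packed together
assert cmc.companion(0x1000) == 0x1040
assert cmc.companion(0x2000) is None  # not compressed with anything
```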
-
Patent number: 7502877
Abstract: According to some embodiments, IO traffic is transferred directly into a target processor cache in accordance with routing information.
Type: Grant
Filed: May 16, 2007
Date of Patent: March 10, 2009
Assignee: Intel Corporation
Inventors: Ram Huggahalli, Raymond Tetrick
-
Publication number: 20090006668
Abstract: In one embodiment, the present invention includes a method for receiving data from a producer input/output device in a cache associated with a consumer without writing the data to a memory coupled to the consumer and storing the data in a cache buffer until ownership of the data is obtained, and then storing the data in a cache line of the cache. Other embodiments are described and claimed.
Type: Application
Filed: June 28, 2007
Publication date: January 1, 2009
Inventors: Anil Vasudevan, Sujoy Sen, Partha Sarangam, Ram Huggahalli
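The two-step placement this abstract describes can be modeled briefly in Python (an illustrative sketch under invented names; in hardware the ownership step is a coherence-protocol transaction): producer data first lands in a cache-side buffer, and only once ownership of the line is obtained does it move into a real cache line, with main memory never written on this path.

```python
class ConsumerCache:
    """Toy cache: IO data is staged in a buffer until line ownership arrives."""
    def __init__(self):
        self.buffer = {}        # staging buffer: addr -> data, pre-ownership
        self.lines = {}         # owned cache lines
        self.memory_writes = 0  # the memory behind the cache is never touched

    def receive_from_io(self, addr, data):
        # Data from the producer IO device: hold it until ownership arrives.
        self.buffer[addr] = data

    def grant_ownership(self, addr):
        # Ownership obtained: promote the staged data into a cache line.
        self.lines[addr] = self.buffer.pop(addr)

cache = ConsumerCache()
cache.receive_from_io(0x80, b"payload")
assert 0x80 not in cache.lines  # still staged, ownership not yet granted
cache.grant_ownership(0x80)
assert cache.lines[0x80] == b"payload" and cache.memory_writes == 0
```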
-
Publication number: 20070214307
Abstract: According to some embodiments, IO traffic is transferred directly into a target processor cache in accordance with routing information.
Type: Application
Filed: May 16, 2007
Publication date: September 13, 2007
Inventors: Ram Huggahalli, Raymond Tetrick
-
Patent number: 7257693
Abstract: Cache coherency rules for a multi-processor computing system that is capable of working with compressed cache lines' worth of information are described. A multi-processor computing system that is capable of working with compressed cache lines' worth of information is also described. The multi-processor computing system includes a plurality of hubs for communicating with various computing system components and for compressing/decompressing cache lines' worth of information. A processor that is capable of labeling cache lines' worth of information in accordance with the cache coherency rules is described. A processor that includes a hub as described above is also described.
Type: Grant
Filed: January 15, 2004
Date of Patent: August 14, 2007
Assignee: Intel Corporation
Inventors: Chris J. Newburn, Ram Huggahalli, Herbert H J Hum, Ali-Reza Adl-Tabatabai, Anwar M. Ghuloum
-
Patent number: 7231470
Abstract: According to some embodiments, IO traffic is transferred directly into a target processor cache in accordance with routing information. For example, it may be determined at a requesting agent processor that IO traffic is to be received at the target processor cache, wherein the target processor is different than the requesting agent processor. Moreover, routing information associated with the IO traffic may be received from the requesting agent processor. It may then be arranged for the IO traffic to be transferred directly into the target processor cache in accordance with the routing information.
Type: Grant
Filed: December 16, 2003
Date of Patent: June 12, 2007
Assignee: Intel Corporation
Inventors: Ram Huggahalli, Raymond Tetrick
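The routing flow in this abstract can be sketched with a short Python model (a hedged illustration; the dictionary keys and function name are assumptions, not patent terminology): a requesting agent processor supplies routing information naming a different target processor, and the IO traffic is placed directly into that target's cache rather than going through memory.

```python
def route_io(traffic, routing_info, caches):
    """Place IO traffic directly in the cache named by the routing info,
    which may belong to a processor other than the requesting agent."""
    target = routing_info["target_processor"]
    caches[target].extend(traffic)
    return target

caches = {"cpu0": [], "cpu1": []}
# The requesting agent is cpu0, but its routing info names cpu1's cache.
routing_info = {"requesting_agent": "cpu0", "target_processor": "cpu1"}
target = route_io([b"pkt0", b"pkt1"], routing_info, caches)
assert target == "cpu1"
assert caches["cpu1"] == [b"pkt0", b"pkt1"] and caches["cpu0"] == []
```

Landing the traffic in the target cache, rather than in memory for the target to fetch later, is the latency-saving idea behind transferring IO traffic "directly into a target processor cache."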