Patents by Inventor Helia A. Naeimi

Helia A. Naeimi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230393814
    Abstract: Examples described herein relate to an interface and circuitry coupled to the interface, the circuitry configured to execute instructions that cause the circuitry to perform floating point (FP) operations based on floating point data received in different packets. The order of the floating point operations can be based on a reorder of the data received in the different packets, where that reorder is different from the order in which the packets were received.
    Type: Application
    Filed: August 4, 2023
    Publication date: December 7, 2023
    Inventors: Helia A. NAEIMI, John Andrew FINGERHUT
  • Publication number: 20230388281
    Abstract: Examples described herein relate to an interface and circuitry coupled to the interface. The circuitry can provide an endpoint for a Transport Layer Security (TLS) over Remote Direct Memory Access (RDMA) connection with a first network interface device, provide an endpoint for a TLS over RDMA connection with a second network interface device, provide a transport layer endpoint for the packets received from the first network interface device, and provide a transport layer endpoint for the packets received from the second network interface device.
    Type: Application
    Filed: August 4, 2023
    Publication date: November 30, 2023
    Inventors: Helia A. NAEIMI, John Andrew FINGERHUT
  • Publication number: 20230379309
    Abstract: Examples described herein relate to an interface and circuitry coupled to the interface. The circuitry can provide an endpoint for a first PSP Security Protocol (PSP) connection with a first network interface device, provide an endpoint for a second PSP connection with a second network interface device, provide a transport layer endpoint for the packets received from the first network interface device, and provide a second transport layer endpoint for the packets received from the second network interface device.
    Type: Application
    Filed: August 4, 2023
    Publication date: November 23, 2023
    Applicant: Intel Corporation
    Inventors: Helia A. NAEIMI, John Andrew FINGERHUT
  • Publication number: 20230379154
    Abstract: Examples described herein relate to an interface and circuitry coupled to the interface. The circuitry can provide an endpoint for a Datagram Transport Layer Security (DTLS) connection with a first network interface device, provide an endpoint for a second DTLS connection with a second network interface device, provide a transport layer endpoint for the packets received from the first network interface device, and provide a second transport layer endpoint for the packets received from the second network interface device.
    Type: Application
    Filed: August 4, 2023
    Publication date: November 23, 2023
    Inventors: Helia A. NAEIMI, John Andrew FINGERHUT
  • Publication number: 20230300063
    Abstract: Examples described herein relate to a network interface device. The network interface device can include circuitry that is to: receive a first packet comprising a first packet header and a first packet payload; receive multiple subsequent packets comprising multiple packet headers for respective multiple subsequent packets; update at least one of the multiple packet headers; and construct egress packets. In some examples, the egress packets include respective one of the multiple packet headers and the first packet payload.
    Type: Application
    Filed: May 22, 2023
    Publication date: September 21, 2023
    Inventors: Helia A. NAEIMI, Amedeo SAPIO, John Andrew FINGERHUT, Yi LI, Yanfang LE
  • Publication number: 20230155988
    Abstract: Examples described herein relate to a network interface device that includes an interface and circuitry. In some examples, the circuitry coupled to the interface is to apply encryption for packets received from a first network interface device and tunnel the encrypted packets to a second network interface device. In some examples, forwarding operations by the first network interface device and forwarding operations in the second network interface device are based on different header fields.
    Type: Application
    Filed: January 20, 2023
    Publication date: May 18, 2023
    Inventors: Surekha PERI, Helia A. NAEIMI, Anurag AGRAWAL
  • Patent number: 11641326
    Abstract: Examples are described herein that relate to a mesh in a switch fabric. The mesh can include one or more buses that permit operations (e.g., read, write, or responses) to continue in the same direction, drop off to a memory, drop off a bus to permit another operation to use the bus, or receive operations that are changing direction. A latency estimate can be determined at least for operations that drop off from a bus to permit another operation to use the bus, or that receive and channel operations that are changing direction. An operation with the highest latency estimate (e.g., time to traverse the mesh) can be permitted to use the bus, even causing another operation that is not changing direction to drop off the bus and re-enter later.
    Type: Grant
    Filed: August 23, 2019
    Date of Patent: May 2, 2023
    Assignee: Intel Corporation
    Inventors: Karl S. Papadantonakis, Robert Southworth, Arvind Srinivasan, Helia A. Naeimi, James E. McCormick, Jr., Jonathan Dama, Ramakrishna Huggahalli, Roberto Penaranda Cebrian
  • Patent number: 11381515
    Abstract: Examples herein relate to allocation of an intermediate queue to a flow or traffic class (or other allocation) of packets prior to transmission to a network. Various types of intermediate queues are available for selection. An intermediate queue can be shallow and have an associated throughput that attempts to meet or exceed latency guarantees for a packet flow or traffic class. Another intermediate queue is larger in size and expandable and can be used for packets that are sensitive to egress port incast, such as latency-sensitive packets. Yet another intermediate queue is expandable but provides no guarantee on maximum end-to-end latency and can be used for packets where dropping is to be avoided. Intermediate queues can be deallocated after a flow or traffic class ends and related memory space can be used for another intermediate queue.
    Type: Grant
    Filed: August 23, 2019
    Date of Patent: July 5, 2022
    Assignee: Intel Corporation
    Inventors: Arvind Srinivasan, Robert Southworth, Helia A. Naeimi
  • Publication number: 20210211467
    Abstract: Examples described herein relate to a Transport Layer Security (TLS) offload engine to: based on detection of encrypted data unassociated with a previously detected data header: search for one or more data headers; identify at least two candidate data headers for validation; and based on receipt of an indication that the at least two candidate data headers are valid, perform decryption of received data in one or more packets. In some examples, the TLS offload engine is to: based on receipt of an indication that one or more of the at least two candidate data headers is not a valid header, search for two or more other candidate data headers.
    Type: Application
    Filed: March 1, 2021
    Publication date: July 8, 2021
    Inventors: Helia A. NAEIMI, Sivakumar MUNNANGI, Namrata LIMAYE, Arvind SRINIVASAN, Gargi SAHA, Hung NGUYEN, Daniel DALY
  • Publication number: 20200412666
    Abstract: Examples are described herein that relate to a mesh in a switch fabric. The mesh can include one or more buses that permit operations (e.g., read, write, or responses) to continue in the same direction, drop off to a memory, drop off a bus to permit another operation to use the bus, or receive operations that are changing direction. A latency estimate can be determined at least for operations that drop off from a bus to permit another operation to use the bus, or that receive and channel operations that are changing direction. An operation with the highest latency estimate (e.g., time to traverse the mesh) can be permitted to use the bus, even causing another operation that is not changing direction to drop off the bus and re-enter later.
    Type: Application
    Filed: August 23, 2019
    Publication date: December 31, 2020
    Inventors: Karl S. PAPADANTONAKIS, Robert SOUTHWORTH, Arvind SRINIVASAN, Helia A. NAEIMI, James E. McCORMICK, JR., Jonathan DAMA, Ramakrishna HUGGAHALLI, Roberto PENARANDA CEBRIAN
  • Patent number: 10713052
    Abstract: Disclosed embodiments relate to a prefetcher for delinquent irregular loads. In one example, a processor includes a cache memory, fetch and decode circuitry to fetch and decode instructions from a memory; and execution circuitry including a binary translator (BT) to respond to the decoded instructions by storing a plurality of decoded instructions in a BT cache, identifying a delinquent irregular load (DIRRL) among the plurality of decoded instructions, determining whether the DIRRL is prefetchable, and, if so, generating a custom prefetcher to cause the processor to prefetch a region of instructions leading up to the prefetchable DIRRL.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: July 14, 2020
    Assignee: Intel Corporation
    Inventors: Karthik Sankaranarayanan, Stephen J. Tarsa, Gautham N. Chinya, Helia Naeimi
  • Patent number: 10552257
    Abstract: Some embodiments include apparatuses and methods having an interface to receive information from memory cells, the memory cells configured to have a plurality of states to indicate values of information stored in the memory cells, and a control unit to monitor errors in information retrieved from the memory cells. Based on the errors in the information, the control unit generates control information to cause the memory cells to change from a state among the plurality of states to an additional state. The additional state is different from the plurality of states.
    Type: Grant
    Filed: June 2, 2016
    Date of Patent: February 4, 2020
    Assignee: Intel Corporation
    Inventors: Helia Naeimi, Wei Wu, Shigeki Tomishima, Shih-Lien Lu
  • Publication number: 20200004541
    Abstract: Disclosed embodiments relate to a prefetcher for delinquent irregular loads. In one example, a processor includes a cache memory, fetch and decode circuitry to fetch and decode instructions from a memory; and execution circuitry including a binary translator (BT) to respond to the decoded instructions by storing a plurality of decoded instructions in a BT cache, identifying a delinquent irregular load (DIRRL) among the plurality of decoded instructions, determining whether the DIRRL is prefetchable, and, if so, generating a custom prefetcher to cause the processor to prefetch a region of instructions leading up to the prefetchable DIRRL.
    Type: Application
    Filed: June 28, 2018
    Publication date: January 2, 2020
    Inventors: Karthik SANKARANARAYANAN, Stephen J. TARSA, Gautham N. CHINYA, Helia NAEIMI
  • Publication number: 20190379610
    Abstract: Examples herein relate to allocation of an intermediate queue to a flow or traffic class (or other allocation) of packets prior to transmission to a network. Various types of intermediate queues are available for selection. An intermediate queue can be shallow and have an associated throughput that attempts to meet or exceed latency guarantees for a packet flow or traffic class. Another intermediate queue is larger in size and expandable and can be used for packets that are sensitive to egress port incast, such as latency-sensitive packets. Yet another intermediate queue is expandable but provides no guarantee on maximum end-to-end latency and can be used for packets where dropping is to be avoided. Intermediate queues can be deallocated after a flow or traffic class ends and related memory space can be used for another intermediate queue.
    Type: Application
    Filed: August 23, 2019
    Publication date: December 12, 2019
    Inventors: Arvind SRINIVASAN, Robert SOUTHWORTH, Helia A. NAEIMI
  • Patent number: 10467137
    Abstract: Provided are an apparatus, system, integrated circuit die, and method for caching data in a hierarchy of caches. A first cache line in a first level cache having modified data for an address is processed. Each of the cache lines in the first level cache stores data for one of a plurality of addresses stored in multiple cache lines of a second level cache. A second cache line in the second level cache is selected and a determination is made of a number of corresponding bits in the first cache line and the second cache line that are different. Bits in the first cache line that are different from the corresponding bits in the second cache line are written to the corresponding bits in the second cache line in response to a determination that the number of corresponding bits that are different is less than a threshold.
    Type: Grant
    Filed: September 27, 2017
    Date of Patent: November 5, 2019
    Assignee: Intel Corporation
    Inventors: Helia Naeimi, Qi Zeng
  • Patent number: 10423540
    Abstract: Provided are an apparatus, system, and method to determine a cache line in a first memory device to be evicted for an incoming cache line in a second memory device. An incoming cache line is read from the second memory device. A plurality of cache lines in the first memory device are processed to determine an eviction cache line of the plurality of cache lines in the first memory device having a least number of bits that differ from corresponding bits in the incoming cache line. Bits from the incoming cache line that are different from the bits in the eviction cache line are written to the eviction cache line in the first memory device.
    Type: Grant
    Filed: September 27, 2017
    Date of Patent: September 24, 2019
    Assignee: Intel Corporation
    Inventors: Helia Naeimi, Qi Zeng
  • Patent number: 10304510
    Abstract: Systems, apparatuses and methods may provide for detecting a read-write condition in which a read operation from a location in magnetoresistive memory, such as spin transfer torque (STT) memory, is to be followed by a write operation to the location. Additionally, a current level associated with the read operation may be increased, and the read operation is conducted from the location at the increased current level. The increased current level may cause a reset of all bits in the location.
    Type: Grant
    Filed: February 9, 2018
    Date of Patent: May 28, 2019
    Assignee: Intel Corporation
    Inventor: Helia A. Naeimi
  • Patent number: 10297302
    Abstract: An apparatus is described that includes a semiconductor chip memory array having resistive storage cells. The apparatus also includes a comparator to compare a first word to be written into the array against a second word stored in the array at the location targeted by a write operation that will write the first word into the array. The apparatus also includes circuitry to iteratively write to one or more bit locations where a difference exists between the first word and the second word with increasing write current intensity with each successive iteration.
    Type: Grant
    Filed: December 6, 2016
    Date of Patent: May 21, 2019
    Assignee: Intel Corporation
    Inventors: Charles Augustine, Shigeki Tomishima, Wei Wu, Shih-Lien Lu, James W. Tschanz, Georgios Panagopoulos, Helia Naeimi
  • Publication number: 20190095349
    Abstract: Provided are an apparatus, system, and method to determine a cache line in a first memory device to be evicted for an incoming cache line in a second memory device. An incoming cache line is read from the second memory device. A plurality of cache lines in the first memory device are processed to determine an eviction cache line of the plurality of cache lines in the first memory device having a least number of bits that differ from corresponding bits in the incoming cache line. Bits from the incoming cache line that are different from the bits in the eviction cache line are written to the eviction cache line in the first memory device.
    Type: Application
    Filed: September 27, 2017
    Publication date: March 28, 2019
    Inventors: Helia NAEIMI, Qi ZENG
  • Publication number: 20190095328
    Abstract: Provided are an apparatus, system, integrated circuit die, and method for caching data in a hierarchy of caches. A first cache line in a first level cache having modified data for an address is processed. Each of the cache lines in the first level cache stores data for one of a plurality of addresses stored in multiple cache lines of a second level cache. A second cache line in the second level cache is selected and a determination is made of a number of corresponding bits in the first cache line and the second cache line that are different. Bits in the first cache line that are different from the corresponding bits in the second cache line are written to the corresponding bits in the second cache line in response to a determination that the number of corresponding bits that are different is less than a threshold.
    Type: Application
    Filed: September 27, 2017
    Publication date: March 28, 2019
    Inventors: Helia NAEIMI, Qi ZENG
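
Several of the cache entries above (patent 10423540 with application 20190095349, and patent 10467137 with application 20190095328) share one primitive: count the bits by which two cache lines differ, then write only those bits. A minimal Python sketch of that primitive; the function names and byte-string representation of cache lines are my own illustrative choices, not anything specified in the patents:

```python
def bit_difference(a: bytes, b: bytes) -> int:
    """Count how many bits differ between two equal-length cache lines."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def select_eviction_line(candidates: list[bytes], incoming: bytes) -> int:
    """Per 10423540: pick the candidate line whose contents differ from the
    incoming line by the fewest bits, so only those bits must be rewritten."""
    return min(range(len(candidates)),
               key=lambda i: bit_difference(candidates[i], incoming))

def partial_writeback_ok(first_line: bytes, second_line: bytes, threshold: int) -> bool:
    """Per 10467137: proceed with a bits-only writeback when the number of
    differing bits is below a threshold."""
    return bit_difference(first_line, second_line) < threshold
```

In both patents the payoff is the same: resistive and magnetoresistive memories pay a per-bit write cost, so minimizing flipped bits reduces write energy and wear.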
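Patent 10297302 describes comparing an incoming word against the stored word and iteratively rewriting only the still-differing bit positions, raising the write current on each pass. A toy simulation of that loop; the cell model, thresholds, and every name here are illustrative assumptions rather than details from the patent:

```python
def iterative_differing_bit_write(read, write_bit, target: int, width: int,
                                  currents=(1.0, 2.0, 3.0)) -> bool:
    """Each pass: re-read the word, find bits still differing from the
    target, and rewrite only those bits at a progressively higher current."""
    for current in currents:
        diff = read() ^ target
        if diff == 0:
            return True
        for bit in range(width):
            if (diff >> bit) & 1:
                write_bit(bit, (target >> bit) & 1, current)
    return read() == target

class StickyCellSim:
    """Toy resistive word: bit i flips only when driven at or above its
    per-bit threshold current, mimicking cell-to-cell variation."""
    def __init__(self, value: int, thresholds):
        self.value, self.thresholds = value, thresholds
    def read(self) -> int:
        return self.value
    def write_bit(self, bit: int, level: int, current: float):
        if current >= self.thresholds[bit]:
            mask = 1 << bit
            self.value = (self.value | mask) if level else (self.value & ~mask)
```

Because each pass re-reads the word, bits that succeeded at a low current are never re-driven at the higher currents, which is the energy-saving point of the iteration.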
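The packet-construction scheme in publication 20230300063 (one payload carried by the first packet, then header-only packets whose headers are updated and recombined with that payload) reduces to a small transform. A hedged sketch; the tuple representation and the update hook are my assumptions, not the publication's interfaces:

```python
def build_egress_packets(first_packet, subsequent_headers, update_header):
    """Pair the payload from the first packet with each updated subsequent
    header to construct the egress packets."""
    _first_header, payload = first_packet
    return [(update_header(hdr), payload) for hdr in subsequent_headers]
```

Sending the shared payload once and replicating it at the network interface device trades a little header-update work for a large reduction in host-to-device bandwidth when many egress packets carry identical payloads.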