Patents by Inventor Evan Gewirtz

Evan Gewirtz has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230188468
    Abstract: Systems and methods for flowlet switching using memory instructions. One embodiment is a method of distributing packets over multiple paths. The method includes determining an elapsed time between a packet and a previous packet. In response to determining that the elapsed time is less than an inter-packet gap threshold, the method retains a previously selected path value indicated in a flow record and provides that value to the processing thread so that the packet is transmitted over the previously selected path associated with the previous packet. In response to determining that the elapsed time is greater than the inter-packet gap threshold, the method updates the flow record by replacing the previously selected path value with the path value of the selected path of the memory instruction and provides that value to the processing thread so that the packet is transmitted over the selected path. (A minimal code sketch of this path selection appears after this listing.)
    Type: Application
    Filed: December 10, 2021
    Publication date: June 15, 2023
    Inventors: Brian Alleyne, Mimi Dannhardt, Evan Gewirtz, Hengwei Hsu, Alexander Shechter, Sakthi Subramanian, Mohamed Abdul Malick Mohamed Usman
  • Patent number: 8914581
    Abstract: A request for reading data from a memory location of a main memory is received, the memory location being identified by a physical memory address. In response to the request, a cache memory is accessed based on the physical memory address to determine whether the cache memory contains the data being requested. The data associated with the request is returned from the cache memory without accessing the memory location if there is a cache hit. The data associated with the request is returned from the main memory if there is a cache miss. In response to the cache miss, it is determined whether there have been a predetermined number of accesses to the memory location within a predetermined period of time. A cache entry is allocated from the cache memory to cache the data if there have been a predetermined number of accesses within the predetermined period of time. (A minimal code sketch of this allocation policy appears after this listing.)
    Type: Grant
    Filed: May 20, 2010
    Date of Patent: December 16, 2014
    Assignee: Telefonaktiebolaget L M Ericsson (Publ)
    Inventors: Robert Hathaway, Evan Gewirtz
  • Publication number: 20140181474
    Abstract: Methods and apparatus for performing an atomic hardware operation (HWOP) instruction. In a method performed by a computer processor coupled to a memory, the processor fetches, decodes, and executes the atomic HWOP instruction. The instruction includes a source operand indicating a source location and a destination operand indicating a destination location, wherein each of the source location and the destination location is either a register of the computer processor or an address of the memory. Executing the atomic HWOP instruction includes sending a message to an external agent to cause the external agent to atomically access a set of one or more memory locations of the memory based upon a value stored at the source location, and to return a result obtained from the atomic access of the set of memory locations to the destination location. The external agent is external to the computer processor. (A software model of this flow appears after this listing.)
    Type: Application
    Filed: December 26, 2012
    Publication date: June 26, 2014
    Applicant: Telefonaktiebolaget L M Ericsson (Publ)
    Inventors: Evan Gewirtz, Robert Hathaway, Edward Ho, Stephan Meier
  • Patent number: 8656139
    Abstract: A digital processor stores pointers of different sizes in memory. Specifically, the processor executes instructions to store a long or short pointer. Long pointers can reference any address in the memory's logical address space, while short pointers can reference only addresses in a subset of that space, but are stored in memory in a smaller size. Long pointers thus support a large address range, while short pointers use less memory. The processor also executes instructions to load a long or short pointer into the register file, and does so in a way that does not require the processor to distinguish between the two kinds of pointer when executing other instructions. Specifically, the processor converts long and short pointers into a common format when loading them into the register file, and converts pointers in the common format back into long or short pointers when storing them to memory. (A minimal code sketch of this conversion appears after this listing.)
    Type: Grant
    Filed: March 11, 2011
    Date of Patent: February 18, 2014
    Assignee: Telefonaktiebolaget L M Ericsson (publ)
    Inventors: Stephan Meier, John G. Favor, Evan Gewirtz, Robert Hathaway, Eric Trehus
  • Patent number: 8402248
    Abstract: A network element that includes multiple memory types and memory sizes translates a logical memory address into a physical memory address. A memory access request for a data structure is received with a logical memory address that includes a region identifier. The region identifier identifies a region that is mapped to one or more memories and is associated with a set of one or more region attributes, whose values are based on processing requirements provided by a software programmer and on the available memories of the network element. The network element accesses the region mapping table entry corresponding to the region identifier and, using the region attributes associated with the region, determines an access target for the request, determines a physical memory address offset within the access target, and generates a physical memory address. The access target includes a target class of memory, an instance within the class of memory, and a particular physical address space of the instance within the class of memory. (A minimal code sketch of this translation appears after this listing.)
    Type: Grant
    Filed: December 31, 2010
    Date of Patent: March 19, 2013
    Assignee: Telefonaktiebolaget L M Ericsson (Publ)
    Inventors: Stephan Meier, Robert Hathaway, Evan Gewirtz, Brian Alleyne, Edward Ho
  • Publication number: 20120233414
    Abstract: A digital processor stores pointers of different sizes in memory. Specifically, the processor executes instructions to store a long or short pointer. Long pointers can reference any address in the memory's logical address space, while short pointers can reference only addresses in a subset of that space, but are stored in memory in a smaller size. Long pointers thus support a large address range, while short pointers use less memory. The processor also executes instructions to load a long or short pointer into the register file, and does so in a way that does not require the processor to distinguish between the two kinds of pointer when executing other instructions. Specifically, the processor converts long and short pointers into a common format when loading them into the register file, and converts pointers in the common format back into long or short pointers when storing them to memory.
    Type: Application
    Filed: March 11, 2011
    Publication date: September 13, 2012
    Inventors: Stephan Meier, John G. Favor, Evan Gewirtz, Robert Hathaway, Eric Trehus
  • Publication number: 20120173841
    Abstract: A network element that includes multiple memory types and memory sizes translates a logical memory address into a physical memory address. A memory access request for a data structure is received with a logical memory address that includes a region identifier. The region identifier identifies a region that is mapped to one or more memories and is associated with a set of one or more region attributes, whose values are based on processing requirements provided by a software programmer and on the available memories of the network element. The network element accesses the region mapping table entry corresponding to the region identifier and, using the region attributes associated with the region, determines an access target for the request, determines a physical memory address offset within the access target, and generates a physical memory address. The access target includes a target class of memory, an instance within the class of memory, and a particular physical address space of the instance within the class of memory.
    Type: Application
    Filed: December 31, 2010
    Publication date: July 5, 2012
    Inventors: Stephan Meier, Robert Hathaway, Evan Gewirtz, Brian Alleyne, Edward Ho
  • Publication number: 20110289257
    Abstract: A request for reading data from a memory location of a main memory is received, the memory location being identified by a physical memory address. In response to the request, a cache memory is accessed based on the physical memory address to determine whether the cache memory contains the data being requested. The data associated with the request is returned from the cache memory without accessing the memory location if there is a cache hit. The data associated with the request is returned from the main memory if there is a cache miss. In response to the cache miss, it is determined whether there have been a predetermined number of accesses to the memory location within a predetermined period of time. A cache entry is allocated from the cache memory to cache the data if there have been a predetermined number of accesses within the predetermined period of time.
    Type: Application
    Filed: May 20, 2010
    Publication date: November 24, 2011
    Applicant: Telefonaktiebolaget L M Ericsson (Publ)
    Inventors: Robert Hathaway, Evan Gewirtz
  • Publication number: 20110276784
    Abstract: In one embodiment, a current candidate thread is selected from each of multiple first groups of threads using a low-granularity selection scheme, where each of the first groups includes multiple threads and the first groups are mutually exclusive. A second group of threads is formed comprising the current candidate thread selected from each of the first groups. A current winning thread is selected from the second group of threads using a high-granularity selection scheme. An instruction is fetched from a memory based on a fetch address for a next instruction of the current winning thread. The instruction is then dispatched to one of the execution units for execution, whereby execution stalls of the execution units are reduced by fetching instructions based on the low-granularity and high-granularity selection schemes. (A minimal code sketch of the two-level selection appears after this listing.)
    Type: Application
    Filed: May 10, 2010
    Publication date: November 10, 2011
    Applicant: Telefonaktiebolaget L M Ericsson (Publ)
    Inventors: Evan Gewirtz, Robert Hathaway, Stephan Meier, Edward Ho
  • Publication number: 20110276732
    Abstract: A command is received from a first agent via a first predetermined memory-mapped register, the first agent being one of multiple agents representing software processes, each executed by one of the processor cores of a network processor in a network element. A first queue associated with the command is identified based on the first predetermined memory-mapped register. A pointer is atomically read from a first hardware-based queue state register associated with the first queue. Data is atomically accessed at a memory location based on the pointer. The pointer stored in the first hardware-based queue state register is atomically updated, including incrementing the pointer, reading a queue size of the queue from a first hardware-based configuration register associated with the first queue, and wrapping the pointer around if it reaches the end of the first queue based on the queue size. (A minimal code sketch of this queue handling appears after this listing.)
    Type: Application
    Filed: May 10, 2010
    Publication date: November 10, 2011
    Applicant: Telefonaktiebolaget L M Ericsson (Publ)
    Inventors: Evan Gewirtz, Robert Hathaway, Stephan Meier
  • Patent number: 8051227
    Abstract: A command is received from a first agent via a first predetermined memory-mapped register, the first agent being one of multiple agents representing software processes, each executed by one of the processor cores of a network processor in a network element. A first queue associated with the command is identified based on the first predetermined memory-mapped register. A pointer is atomically read from a first hardware-based queue state register associated with the first queue. Data is atomically accessed at a memory location based on the pointer. The pointer stored in the first hardware-based queue state register is atomically updated, including incrementing the pointer, reading a queue size of the queue from a first hardware-based configuration register associated with the first queue, and wrapping the pointer around if it reaches the end of the first queue based on the queue size.
    Type: Grant
    Filed: May 10, 2010
    Date of Patent: November 1, 2011
    Assignee: Telefonaktiebolaget L M Ericsson (Publ)
    Inventors: Evan Gewirtz, Robert Hathaway, Stephan Meier
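
Illustrative code sketches

The sketches below are not taken from the patents; they are minimal C models of the mechanisms described in the abstracts above, and every constant, structure, and function name in them is hypothetical.

For the flowlet-switching method of publication 20230188468, a sketch of the path-selection decision, assuming a per-flow record that holds the timestamp of the last packet and the currently selected path value:

    #include <stdint.h>
    #include <stdio.h>

    #define GAP_THRESHOLD_NS 50000ULL   /* hypothetical inter-packet gap threshold */

    /* Hypothetical per-flow record: last packet timestamp and selected path. */
    struct flow_record {
        uint64_t last_ts_ns;
        uint32_t path;
    };

    /* Return the path for a packet arriving at now_ns. candidate_path models the
     * path value carried by the memory instruction for a new flowlet. */
    static uint32_t select_path(struct flow_record *fr, uint64_t now_ns,
                                uint32_t candidate_path)
    {
        uint64_t gap = now_ns - fr->last_ts_ns;
        fr->last_ts_ns = now_ns;
        if (gap < GAP_THRESHOLD_NS) {
            /* Gap below the threshold: keep the previously selected path. */
            return fr->path;
        }
        /* Gap above the threshold: start a new flowlet on the candidate path. */
        fr->path = candidate_path;
        return fr->path;
    }

    int main(void)
    {
        struct flow_record fr = { .last_ts_ns = 0, .path = 1 };
        printf("path=%u\n", select_path(&fr, 10000, 3));   /* small gap: stays on path 1 */
        printf("path=%u\n", select_path(&fr, 200000, 3));  /* large gap: switches to path 3 */
        return 0;
    }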
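
For patent 8914581 (and its publication 20110289257), a sketch of the allocate-only-after-repeated-misses policy, assuming a per-address tracker and made-up values for the access threshold and time window:

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical policy: allocate a cache entry only after ACCESS_THRESHOLD
     * misses to the same address within WINDOW_NS nanoseconds. */
    #define ACCESS_THRESHOLD 3
    #define WINDOW_NS 1000000ULL

    /* Hypothetical per-address miss tracker. */
    struct miss_tracker {
        uint64_t window_start_ns;
        uint32_t count;
    };

    /* Called on a cache miss; returns true if a cache entry should be allocated. */
    static bool should_allocate(struct miss_tracker *t, uint64_t now_ns)
    {
        if (now_ns - t->window_start_ns > WINDOW_NS) {
            /* Window expired: start a new window counting from this access. */
            t->window_start_ns = now_ns;
            t->count = 1;
            return false;
        }
        t->count++;
        return t->count >= ACCESS_THRESHOLD;
    }

    int main(void)
    {
        struct miss_tracker t = { 0, 0 };
        for (int i = 0; i < 4; i++)
            printf("miss %d -> allocate=%d\n", i, should_allocate(&t, (uint64_t)i * 100));
        return 0;
    }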
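
For publication 20140181474, a single-threaded software model of the atomic HWOP flow: the "processor" reads the source location, sends a message to an "external agent" that performs the memory access, and deposits the agent's result at the destination. The fetch-and-add operation is only an example of what the agent might perform, and the atomicity the hardware would guarantee is not modeled:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical message from the processor to the external agent. */
    struct hwop_msg {
        uint64_t src_value;   /* value read from the source location */
        uint64_t *mem_loc;    /* memory location the agent will access */
    };

    /* External agent: add src_value to the memory location and return the
     * previous contents (a fetch-and-add chosen only for illustration). */
    static uint64_t external_agent(struct hwop_msg msg)
    {
        uint64_t old = *msg.mem_loc;
        *msg.mem_loc = old + msg.src_value;
        return old;
    }

    /* Execute the HWOP instruction: read the source, message the agent,
     * deposit the agent's result at the destination. */
    static void exec_hwop(uint64_t *src, uint64_t *dst, uint64_t *mem_loc)
    {
        struct hwop_msg msg = { .src_value = *src, .mem_loc = mem_loc };
        *dst = external_agent(msg);
    }

    int main(void)
    {
        uint64_t counter = 40, src = 2, dst = 0;
        exec_hwop(&src, &dst, &counter);
        printf("old=%llu new=%llu\n", (unsigned long long)dst, (unsigned long long)counter);
        return 0;
    }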
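
For patent 8656139 (and its publication 20120233414), a sketch of converting long and short pointers to and from a common register format, assuming a made-up layout in which short pointers are 32-bit offsets from a fixed base of the logical address space:

    #include <stdint.h>
    #include <assert.h>
    #include <stdio.h>

    /* Hypothetical base of the window that short pointers can reference. */
    #define SHORT_BASE 0x100000000ULL

    /* Load: convert the in-memory representation to the common 64-bit format. */
    static uint64_t load_long(uint64_t stored)  { return stored; }
    static uint64_t load_short(uint32_t stored) { return SHORT_BASE + stored; }

    /* Store: convert the common register format back to the in-memory size. */
    static uint64_t store_long(uint64_t reg)    { return reg; }
    static uint32_t store_short(uint64_t reg)
    {
        assert(reg >= SHORT_BASE && reg - SHORT_BASE <= UINT32_MAX);
        return (uint32_t)(reg - SHORT_BASE);
    }

    int main(void)
    {
        uint32_t short_in_mem = 0x1234;            /* 4 bytes as stored in memory */
        uint64_t reg = load_short(short_in_mem);   /* common 64-bit register form */
        /* Other instructions can use reg without knowing which kind it came from. */
        printf("reg=0x%llx short=0x%x long=0x%llx\n",
               (unsigned long long)reg,
               store_short(reg),
               (unsigned long long)store_long(reg));
        return 0;
    }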
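
For patent 8402248 (and its publication 20120173841), a sketch of region-based address translation, assuming a made-up logical address layout and a small region mapping table:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical split: the top bits of a logical address hold the region
     * identifier, the rest is an offset within the region. */
    #define REGION_SHIFT 56
    #define OFFSET_MASK  ((1ULL << REGION_SHIFT) - 1)

    /* Hypothetical region attributes: target class of memory, instance within
     * the class, and the base of that instance's physical address space. */
    struct region_entry {
        uint8_t  mem_class;
        uint8_t  instance;
        uint64_t phys_base;
    };

    static const struct region_entry region_table[4] = {
        [0] = { .mem_class = 0, .instance = 0, .phys_base = 0x00000000ULL }, /* on-chip memory  */
        [1] = { .mem_class = 1, .instance = 0, .phys_base = 0x80000000ULL }, /* external DRAM 0 */
        [2] = { .mem_class = 1, .instance = 1, .phys_base = 0xC0000000ULL }, /* external DRAM 1 */
    };

    /* Translate a logical address into a physical address within its access target. */
    static uint64_t translate(uint64_t logical)
    {
        unsigned region = (unsigned)((logical >> REGION_SHIFT) & 0x3); /* masked to this 4-entry table */
        const struct region_entry *e = &region_table[region];
        uint64_t offset = logical & OFFSET_MASK;   /* offset within the access target */
        return e->phys_base + offset;              /* final physical address */
    }

    int main(void)
    {
        uint64_t logical = (1ULL << REGION_SHIFT) | 0x4000;  /* region 1, offset 0x4000 */
        printf("phys=0x%llx\n", (unsigned long long)translate(logical));
        return 0;
    }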
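
For publication 20110276784, a sketch of the two-level thread selection, assuming round-robin as the low-granularity scheme within each group and a priority comparison as the high-granularity scheme across the per-group candidates (both schemes are assumptions; the abstract does not name specific schemes):

    #include <stdio.h>

    #define NUM_GROUPS        4
    #define THREADS_PER_GROUP 8

    /* Hypothetical per-group round-robin state and per-thread priorities. */
    static unsigned rr_next[NUM_GROUPS];
    static unsigned priority[NUM_GROUPS][THREADS_PER_GROUP];

    /* Low granularity: cheap round-robin pick of a candidate inside one group. */
    static unsigned pick_candidate(unsigned group)
    {
        unsigned t = rr_next[group];
        rr_next[group] = (t + 1) % THREADS_PER_GROUP;
        return t;
    }

    /* High granularity: compare only the candidates (one per group) and pick
     * the highest-priority one; returns a global thread id. */
    static unsigned pick_winner(void)
    {
        unsigned best_group = 0, best_thread = pick_candidate(0);
        for (unsigned g = 1; g < NUM_GROUPS; g++) {
            unsigned t = pick_candidate(g);
            if (priority[g][t] > priority[best_group][best_thread]) {
                best_group = g;
                best_thread = t;
            }
        }
        return best_group * THREADS_PER_GROUP + best_thread;
    }

    int main(void)
    {
        priority[2][0] = 7;   /* make thread 0 of group 2 the most urgent */
        printf("winner=%u\n", pick_winner());   /* prints the winning thread id */
        return 0;
    }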
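
For patent 8051227 (and its publication 20110276732), a single-threaded model of the hardware-managed queue: the queue state register holds the pointer, the configuration register holds the queue size, and a dequeue command reads the pointer, accesses memory, increments the pointer, and wraps it at the end of the queue. The atomicity the hardware provides across these steps is not modeled here:

    #include <stdint.h>
    #include <stdio.h>

    /* Software stand-ins for the hardware registers and queue memory. */
    struct hw_queue {
        uint32_t  state_ptr;   /* queue state register: next slot to read */
        uint32_t  cfg_size;    /* configuration register: number of slots */
        uint64_t *base;        /* memory backing the queue */
    };

    /* Model of a dequeue command issued through a memory-mapped register. */
    static uint64_t hwq_dequeue(struct hw_queue *q)
    {
        uint32_t ptr = q->state_ptr;        /* read the pointer */
        uint64_t data = q->base[ptr];       /* access memory at that slot */
        ptr++;                              /* advance the pointer */
        if (ptr >= q->cfg_size)             /* wrap around at the end of the queue */
            ptr = 0;
        q->state_ptr = ptr;                 /* write the updated pointer back */
        return data;
    }

    int main(void)
    {
        uint64_t slots[3] = { 10, 20, 30 };
        struct hw_queue q = { .state_ptr = 0, .cfg_size = 3, .base = slots };
        for (int i = 0; i < 4; i++)
            printf("%llu\n", (unsigned long long)hwq_dequeue(&q));  /* 10 20 30 10 */
        return 0;
    }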