Patents by Inventor Eyal Nagar

Eyal Nagar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
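Short, illustrative C sketches of the mechanisms summarized in these abstracts appear after the listing.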

  • Publication number: 20230273736
    Abstract: A device for executing a software program by at least one computational device, comprising an interconnected computing grid, connected to the at least one computational device, comprising an interconnected memory grid comprising a plurality of memory units connected by a plurality of memory network nodes, each connected to at least one of the plurality of memory units; wherein configuring the interconnected memory comprises: identifying a bypassable memory unit; selecting a backup memory unit connected to a backup memory network node; configuring the respective memory network node connected to the bypassable memory unit to forward at least one memory access request, comprising an address in a first address range, to the backup memory network node; and configuring the backup memory network node to access the backup memory unit in response to the at least one memory access request, in addition to accessing the respective at least one memory unit connected thereto.
    Type: Application
    Filed: May 8, 2023
    Publication date: August 31, 2023
    Applicant: Next Silicon Ltd
    Inventors: Yoav LOSSIN, Ron SCHNEIDER, Elad RAZ, Ilan TAYARI, Eyal NAGAR
  • Patent number: 11644990
    Abstract: A device for executing a software program by at least one computational device, comprising an interconnected computing grid, connected to the at least one computational device, comprising an interconnected memory grid comprising a plurality of memory units connected by a plurality of memory network nodes, each connected to at least one of the plurality of memory units; wherein configuring the interconnected memory comprises: identifying a bypassable memory unit; selecting a backup memory unit connected to a backup memory network node; configuring the respective memory network node connected to the bypassable memory unit to forward at least one memory access request, comprising an address in a first address range, to the backup memory network node; and configuring the backup memory network node to access the backup memory unit in response to the at least one memory access request, in addition to accessing the respective at least one memory unit connected thereto.
    Type: Grant
    Filed: January 31, 2022
    Date of Patent: May 9, 2023
    Assignee: Next Silicon Ltd
    Inventors: Yoav Lossin, Ron Schneider, Elad Raz, Ilan Tayari, Eyal Nagar
  • Publication number: 20220155985
    Abstract: A device for executing a software program by at least one computational device, comprising an interconnected computing grid, connected to the at least one computational device, comprising an interconnected memory grid comprising a plurality of memory units connected by a plurality of memory network nodes, each connected to at least one of the plurality of memory units; wherein configuring the interconnected memory comprises: identifying a bypassable memory unit; selecting a backup memory unit connected to a backup memory network node; configuring the respective memory network node connected to the bypassable memory unit to forward at least one memory access request, comprising an address in a first address range, to the backup memory network node; and configuring the backup memory network node to access the backup memory unit in response to the at least one memory access request, in addition to accessing the respective at least one memory unit connected thereto.
    Type: Application
    Filed: January 31, 2022
    Publication date: May 19, 2022
    Applicant: Next Silicon Ltd
    Inventors: Yoav LOSSIN, Ron SCHNEIDER, Elad RAZ, Ilan TAYARI, Eyal NAGAR
  • Patent number: 11269526
    Abstract: A device for executing a software program by at least one computational device, comprising an interconnected computing grid, connected to the at least one computational device, comprising an interconnected memory grid comprising a plurality of memory units connected by a plurality of memory network nodes, each connected to at least one of the plurality of memory units; wherein configuring the interconnected memory comprises: identifying a bypassable memory unit; selecting a backup memory unit connected to a backup memory network node; configuring the respective memory network node connected to the bypassable memory unit to forward at least one memory access request, comprising an address in a first address range, to the backup memory network node; and configuring the backup memory network node to access the backup memory unit in response to the at least one memory access request, in addition to accessing the respective at least one memory unit connected thereto.
    Type: Grant
    Filed: April 23, 2020
    Date of Patent: March 8, 2022
    Assignee: Next Silicon Ltd
    Inventors: Yoav Lossin, Ron Schneider, Elad Raz, Ilan Tayari, Eyal Nagar
  • Publication number: 20210334023
    Abstract: A device for executing a software program by at least one computational device, comprising an interconnected computing grid, connected to the at least one computational device, comprising an interconnected memory grid comprising a plurality of memory units connected by a plurality of memory network nodes, each connected to at least one of the plurality of memory units; wherein configuring the interconnected memory comprises: identifying a bypassable memory unit; selecting a backup memory unit connected to a backup memory network node; configuring the respective memory network node connected to the bypassable memory unit to forward at least one memory access request, comprising an address in a first address range, to the backup memory network node; and configuring the backup memory network node to access the backup memory unit in response to the at least one memory access request, in addition to accessing the respective at least one memory unit connected thereto.
    Type: Application
    Filed: April 23, 2020
    Publication date: October 28, 2021
    Applicant: Next Silicon Ltd
    Inventors: Yoav LOSSIN, Ron SCHNEIDER, Elad RAZ, Ilan TAYARI, Eyal NAGAR
  • Publication number: 20040218592
    Abstract: A data structure depicting unicast queues comprises a Structure Pointer memory for storing pointers to a location in memory of a segment of a packet associated with a respective queue. A Structure Pointer points to a record in the Structure Pointer memory associated with a successive segment, and a packet indicator indicates whether the segment is a first and/or a last segment in the packet. A Head & Tail memory stores an address in the Structure Pointer memory of the first and last packets in the queue, and a free structure memory points to a next available memory location in the Structure Pointer memory. To support multicast queues, the data structure further comprises a multiplicity memory that stores the number of destinations to which a respective queue is to be routed. A scheduling method and system using such a data structure are also described.
    Type: Application
    Filed: June 23, 2003
    Publication date: November 4, 2004
    Inventors: Eyal Nagar, Amir Paran, Michael Bachar, Shimshon Jacobi
  • Publication number: 20040085897
    Abstract: A method for transmitting data between line cards in a distributed network switching system, through a switch fabric which is separated from said line cards, said method comprising receiving at said switch fabric information regarding queues of data packets that have arrived at said line cards and are waiting to be transmitted therefrom, recording said information at one or more databases located in said switch fabric, computing in said switch fabric a suitable array of connections between said line cards according to the information recorded at said one or more databases, providing instructions to said line cards regarding the data that is allowed to be transmitted therefrom, establishing a physical array of connections in said switch fabric to allow said transmission, and updating said one or more databases at said switch fabric accordingly, wherein said updating of the one or more databases occurs before said transmission of data from the input line cards takes place, such that the updated information stored…
    Type: Application
    Filed: November 14, 2002
    Publication date: May 6, 2004
    Inventors: Shimshon Jacobi, Yuval Yunger, Assi Mashala, Eyal Nagar, Ofer Beeri, Udi Barzilai, Tsvi Slonim
  • Publication number: 20040071144
    Abstract: A method and distributed scheduler for use therewith has at least two clusters of source port modules, each tracking all queues associated with a respective input-node and relating to a respective subset of available input-nodes. Each source port module receives available output-nodes, and generates a weight for each queue therein. Each source port module generates at least one request relating to the highest weight serviceable queue. The respective requests of each source port module are accumulated, and for each cluster of source port modules, the request is chosen for which no two requests relate to the same input-node, and for each output-node, the chosen requests have highest weight. The highest weight request from all clusters is determined in respect of each output node receiving requests from one or more input nodes. A grant is sent to the input-node having the highest weight request.
    Type: Application
    Filed: June 23, 2003
    Publication date: April 15, 2004
    Inventors: Ofer Beeri, Eyal Nagar, Shimshon Jacobi
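
The abstract shared by publications 20230273736, 20220155985 and 20210334023 and patents 11644990 and 11269526 describes reconfiguring an interconnected memory grid so that requests aimed at a bypassable memory unit are forwarded, by address range, to a backup memory network node that serves a backup memory unit alongside its own. The following C sketch only illustrates that forwarding idea under simplifying assumptions (single-byte reads, one backup unit per node); the structure and the names mem_unit, mem_node, handle_request and configure_bypass are hypothetical and are not taken from the patents.

/*
 * Illustrative sketch (not the patented implementation) of forwarding
 * memory requests for a bypassable unit to a backup node that also
 * serves a backup memory unit.  All names here are hypothetical.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define UNIT_WORDS 256

typedef struct {
    uint32_t base;              /* first address mapped to this unit   */
    uint8_t  data[UNIT_WORDS];  /* backing storage                     */
    bool     bypassed;          /* true once identified as bypassable  */
} mem_unit;

typedef struct mem_node {
    mem_unit        *own_unit;       /* unit directly attached to node   */
    mem_unit        *backup_unit;    /* extra unit this node also serves */
    struct mem_node *forward_to;     /* backup node for bypassed range   */
    uint32_t         fwd_lo, fwd_hi; /* "first address range" to forward */
} mem_node;

/* Serve a single-byte read; forward it if it falls in the bypassed range. */
static int handle_request(mem_node *n, uint32_t addr)
{
    if (n->forward_to && addr >= n->fwd_lo && addr <= n->fwd_hi)
        return handle_request(n->forward_to, addr);   /* bypass path */

    mem_unit *u = n->own_unit;
    if (n->backup_unit &&
        addr >= n->backup_unit->base &&
        addr <  n->backup_unit->base + UNIT_WORDS)
        u = n->backup_unit;                           /* backup unit */

    return u->data[addr - u->base];
}

/* Reconfigure the grid so that requests for a bypassable unit are served
 * by a backup unit attached to another node. */
static void configure_bypass(mem_node *bad_node, mem_node *backup_node,
                             mem_unit *backup_unit)
{
    bad_node->own_unit->bypassed = true;
    bad_node->forward_to = backup_node;
    bad_node->fwd_lo = bad_node->own_unit->base;
    bad_node->fwd_hi = bad_node->own_unit->base + UNIT_WORDS - 1;

    backup_unit->base = bad_node->own_unit->base;   /* take over the range */
    backup_node->backup_unit = backup_unit;
}

int main(void)
{
    mem_unit u0 = { .base = 0x000 }, u1 = { .base = 0x100 }, spare = { 0 };
    mem_node n0 = { .own_unit = &u0 }, n1 = { .own_unit = &u1 };

    configure_bypass(&n0, &n1, &spare);  /* u0 is bypassable, spare backs it */
    spare.data[0x010] = 42;              /* data now lives in the backup     */

    printf("read 0x010 -> %d\n", handle_request(&n0, 0x010));  /* prints 42 */
    return 0;
}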
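
Publication 20040218592 describes queues built from a Structure Pointer memory of per-segment records linked into chains, a Head & Tail memory, a free structure memory, and a multiplicity memory for multicast. The sketch below models that layout as a linked list threaded through a fixed array, assuming a single queue for brevity; names such as sp_record, enqueue_segment and dequeue_segment are illustrative, not the patent's.

/*
 * Illustrative model of the queue structure: a Structure Pointer memory of
 * per-segment records, a Head & Tail descriptor, a free list, and a
 * multiplicity count for multicast.  Names and sizes are assumptions.
 */
#include <stdint.h>
#include <stdio.h>

#define SP_ENTRIES 16
#define NIL 0xFFFF

typedef struct {
    uint32_t seg_addr;    /* location of the packet segment in buffer memory */
    uint16_t next;        /* index of the record for the successive segment  */
    uint8_t  first : 1;   /* packet indicator: first segment of a packet     */
    uint8_t  last  : 1;   /* packet indicator: last segment of a packet      */
} sp_record;              /* one entry of the Structure Pointer memory       */

typedef struct {
    uint16_t head, tail;    /* Head & Tail memory: first/last record in queue */
    uint16_t multiplicity;  /* multicast copies still to be sent              */
} queue_desc;

static sp_record sp_mem[SP_ENTRIES];
static uint16_t  free_head;             /* free structure memory              */

static void init_free_list(void)
{
    for (uint16_t i = 0; i < SP_ENTRIES; i++)
        sp_mem[i].next = (i + 1 < SP_ENTRIES) ? i + 1 : NIL;
    free_head = 0;
}

/* Append one segment of a packet to a queue. */
static void enqueue_segment(queue_desc *q, uint32_t seg_addr,
                            int first, int last)
{
    uint16_t idx = free_head;            /* take the next free record        */
    free_head = sp_mem[idx].next;

    sp_mem[idx] = (sp_record){ .seg_addr = seg_addr, .next = NIL,
                               .first = first, .last = last };

    if (q->head == NIL) q->head = idx;   /* queue was empty                  */
    else sp_mem[q->tail].next = idx;     /* link after the old tail          */
    q->tail = idx;
}

/* Remove and return the head segment's address, recycling its record. */
static uint32_t dequeue_segment(queue_desc *q)
{
    uint16_t idx  = q->head;
    uint32_t addr = sp_mem[idx].seg_addr;

    q->head = sp_mem[idx].next;
    if (q->head == NIL) q->tail = NIL;

    sp_mem[idx].next = free_head;        /* return record to the free list   */
    free_head = idx;
    return addr;
}

int main(void)
{
    init_free_list();
    queue_desc q = { .head = NIL, .tail = NIL, .multiplicity = 1 };

    enqueue_segment(&q, 0x1000, 1, 0);   /* first segment of a packet */
    enqueue_segment(&q, 0x1040, 0, 1);   /* last segment              */

    unsigned a = dequeue_segment(&q);
    unsigned b = dequeue_segment(&q);
    printf("dequeued 0x%x then 0x%x\n", a, b);
    return 0;
}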
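
Publication 20040085897 describes a switch fabric that records reported queue state in its own databases, computes a conflict-free array of line-card connections, instructs the cards, and updates the databases before the data actually moves. The sketch below walks through that sequence with a deliberately simple greedy matching; the matching itself and the names report_arrival and schedule are assumptions, not the patented method.

/*
 * Illustrative flow only: line cards report queue state to the fabric, the
 * fabric computes one conflict-free set of connections and updates its
 * database before transmission.  The greedy matching is a placeholder.
 */
#include <stdio.h>

#define CARDS 4

/* Fabric database: pending[i][o] = packets queued at card i for output o. */
static int pending[CARDS][CARDS];

/* A line card reports that 'count' packets arrived for output 'out'. */
static void report_arrival(int in, int out, int count)
{
    pending[in][out] += count;
}

/* Compute one conflict-free array of connections (each input and each
 * output used at most once) and update the database before transmission. */
static void schedule(int grant[CARDS])     /* grant[in] = out, or -1 */
{
    int out_busy[CARDS] = { 0 };

    for (int in = 0; in < CARDS; in++) {
        grant[in] = -1;
        for (int out = 0; out < CARDS; out++) {
            if (pending[in][out] > 0 && !out_busy[out]) {
                grant[in] = out;
                out_busy[out] = 1;
                pending[in][out]--;   /* database updated before data moves */
                break;
            }
        }
    }
}

int main(void)
{
    int grant[CARDS];

    report_arrival(0, 2, 3);   /* card 0 has 3 packets for card 2 */
    report_arrival(1, 2, 1);   /* card 1 also wants card 2        */
    report_arrival(1, 3, 2);

    schedule(grant);           /* fabric instructs the line cards */
    for (int in = 0; in < CARDS; in++)
        if (grant[in] >= 0)
            printf("card %d -> card %d\n", in, grant[in]);
    return 0;
}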
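
Publication 20040071144 describes a distributed scheduler in which clustered source port modules each request their highest-weight serviceable queue, the requests are accumulated per output node, and a grant goes to the input node holding the overall highest-weight request. The sketch below reproduces only that weight-comparison flow for a tiny fixed configuration; the data layout, loop structure and sizes are illustrative assumptions.

/*
 * Illustrative weighted arbitration: each source port module raises one
 * request for its highest-weight queue; per output node the highest-weight
 * request across all clusters wins and a grant is issued to that input.
 */
#include <stdio.h>

#define CLUSTERS 2
#define INPUTS_PER_CLUSTER 2
#define OUTPUTS 3

typedef struct { int input, output, weight; } request;

/* weight[cluster][input][output]: 0 means the queue is empty/unserviceable */
static int weight[CLUSTERS][INPUTS_PER_CLUSTER][OUTPUTS] = {
    { { 5, 0, 2 }, { 0, 7, 0 } },     /* cluster 0 */
    { { 0, 3, 9 }, { 4, 0, 0 } },     /* cluster 1 */
};

int main(void)
{
    request best[OUTPUTS];                     /* winning request per output */
    for (int o = 0; o < OUTPUTS; o++) best[o].weight = 0;

    for (int c = 0; c < CLUSTERS; c++) {
        for (int i = 0; i < INPUTS_PER_CLUSTER; i++) {
            /* Each source port module requests its highest-weight queue. */
            request r = { .input = c * INPUTS_PER_CLUSTER + i,
                          .output = 0, .weight = 0 };
            for (int o = 0; o < OUTPUTS; o++) {
                if (weight[c][i][o] > r.weight) {
                    r.weight = weight[c][i][o];
                    r.output = o;
                }
            }
            /* Accumulate: per output node keep the highest-weight request. */
            if (r.weight > best[r.output].weight)
                best[r.output] = r;
        }
    }

    /* Send a grant to the input node holding the winning request. */
    for (int o = 0; o < OUTPUTS; o++)
        if (best[o].weight > 0)
            printf("grant: output %d -> input %d (weight %d)\n",
                   o, best[o].input, best[o].weight);
    return 0;
}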