Patents by Inventor Elad RAZ
Elad RAZ has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250138787
Abstract: There is provided a method, comprising simultaneously presenting in a GUI, a source code and an interactive graph of nodes connected by edges representing the source code mapped to physical configurable elements of computational cluster(s) of a processor each configurable to execute mathematical operations, each node represents operation(s) mapped to physical configurable elements, and edges represent dependencies between the operations, mapped to physical dependency links between the configurable elements, receiving, via the GUI, a user selection of a portion of the source code, determining node(s) and/or edge(s) of the interactive graph corresponding to the portion, and updating the GUI for visually distinguishing the node(s) and/or edge(s), wherein the visually distinguished node(s) represents a mapping to certain physical configurable elements and the visually distinguished edge(s) represents certain dependency links between the certain physical configurable elements of the processor configured to execute
Type: Application
Filed: May 27, 2024
Publication date: May 1, 2025
Applicant: Next Silicon Ltd
Inventors: Oshri KDOSHIM, Elad RAZ
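The selection-to-highlight flow this abstract describes can be pictured with a small sketch: each graph node records the source span it was compiled from, so a source selection maps back to overlapping nodes. All names and offsets here are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch: map a source-code selection to the graph nodes
# whose recorded source spans overlap it. Node ids and spans are invented.

nodes = {
    # node id -> (source start offset, source end offset)
    "mul0": (0, 10),
    "add0": (11, 25),
    "add1": (26, 40),
}

def nodes_for_selection(sel_start: int, sel_end: int):
    """Return ids of nodes whose source span overlaps the selection."""
    return sorted(
        node_id
        for node_id, (start, end) in nodes.items()
        if start < sel_end and sel_start < end   # half-open span overlap test
    )

# Selecting characters 5..20 covers the first two operations.
print(nodes_for_selection(5, 20))  # ['add0', 'mul0']
```

A real tool would then visually distinguish these nodes (and the edges between them) in the rendered graph.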
-
Publication number: 20250130802
Abstract: An apparatus for executing a software program, comprising processing units and a hardware processor adapted for: in an intermediate representation of the software program, where the intermediate representation comprises blocks, each associated with an execution block of the software program and comprising intermediate instructions, identifying a calling block and a target block, where the calling block comprises a control-flow intermediate instruction to execute a target intermediate instruction of the target block; generating target instructions using the target block; generating calling instructions using the calling block and a computer control instruction for invoking the target instructions, when the calling instructions are executed by a calling processing unit and the target instructions are executed by a target processing unit; configuring the calling processing unit for executing the calling instructions; and configuring the target processing unit for executing the target instructions.
Type: Application
Filed: December 24, 2024
Publication date: April 24, 2025
Applicant: Next Silicon Ltd
Inventors: Elad RAZ, Ilan TAYARI
-
Publication number: 20250053511
Abstract: A method for caching memory comprising caching two data values, each of one of two ranges of application memory addresses, each associated with one of a set of threads, by: organizing a plurality of sequences of consecutive address sub-ranges in an interleaved sequence of address sub-ranges by alternately selecting, for each thread in an identified order of threads, a next sub-range in the respective sequence of sub-ranges associated therewith; generating a mapping of the interleaved sequence of sub-ranges to a range of physical memory addresses in order of the interleaved sequence of sub-ranges; and when a thread accesses an application memory address of the respective range of application addresses associated thereof: computing a target address according to the mapping using the application address; and storing the two data values in one cache-line of a plurality of cache-lines of a cache by accessing the physical memory area using the target address.
Type: Application
Filed: October 28, 2024
Publication date: February 13, 2025
Applicant: Next Silicon Ltd
Inventors: Dan SHECHTER, Elad RAZ
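The interleaving this abstract describes can be sketched roughly: per-thread sequences of consecutive address sub-ranges are laid out alternately in physical memory, so the values that different threads use together land in one cache line. This is an illustrative sketch under assumed sizes, not the patented implementation.

```python
# Rough sketch of interleaved address mapping: sub-range i of thread t
# is placed at interleaved slot (i * n_threads + t). Sizes are invented.

SUB_RANGE = 8  # bytes per sub-range (illustrative assumption)

def target_address(thread_id: int, app_addr: int, base: int, n_threads: int) -> int:
    """Map a thread's application address to the interleaved physical address."""
    sub_index = app_addr // SUB_RANGE   # position in the thread's sequence
    offset = app_addr % SUB_RANGE       # offset inside that sub-range
    # Sub-ranges alternate in physical order: t0_s0, t1_s0, t0_s1, t1_s1, ...
    return base + (sub_index * n_threads + thread_id) * SUB_RANGE + offset

# With two threads, sub-range 0 of each thread occupies adjacent slots,
# i.e. they share any cache line of 16 bytes or more.
assert target_address(1, 0, 0x1000, 2) - target_address(0, 0, 0x1000, 2) == SUB_RANGE
```

The design point is that corresponding sub-ranges of the two threads become physical neighbours, so one cache-line fill serves both threads.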
-
Patent number: 12197919
Abstract: A system for executing a software program comprising processing units and a hardware processor configured to: for at least one set of blocks, each set comprising a calling block and a target block of an intermediate representation of the software program, generate control-transfer information describing at least one value of the software program at an exit of the calling block (out-value) and at least one other value of the software program at an entry to the target block (in-value); select a set of blocks according to at least one statistical value collected while executing the software program; generate a target set of instructions using the target block and the control-transfer information; generate a calling set of instructions using the calling block and the control-transfer information; configure a calling processing unit to execute the calling set of instructions; and configure a target processing unit to execute the target set of instructions.
Type: Grant
Filed: June 17, 2024
Date of Patent: January 14, 2025
Assignee: Next Silicon Ltd
Inventors: Elad Raz, Ilan Tayari, Itay Bookstein, Jonathan Lavi
-
Publication number: 20250013466
Abstract: A system for processing a plurality of concurrent threads comprising: a reconfigurable processing grid, comprising logical elements and a context storage for storing thread contexts, each thread context for one of a plurality of concurrent threads, each implementing a dataflow graph comprising an identified operation; and a hardware processor configured for configuring the reconfigurable processing grid for: executing a first thread of the plurality of concurrent threads; and while executing the first thread: storing a runtime context value of the first thread in the context storage; while waiting for completion of the identified operation by identified logical elements, executing the identified operation of a second thread by the identified logical element; and when execution of the identified operation of the first thread completes: retrieving the runtime context value of the first thread from the context storage; and executing another operation of the first thread.
Type: Application
Filed: January 11, 2024
Publication date: January 9, 2025
Applicant: Next Silicon Ltd
Inventors: Elad RAZ, Ilan TAYARI
-
Patent number: 12189412
Abstract: An apparatus for executing a software program, comprising processing units and a hardware processor adapted for: in an intermediate representation of the software program, where the intermediate representation comprises blocks, each associated with an execution block of the software program and comprising intermediate instructions, identifying a calling block and a target block, where the calling block comprises a control-flow intermediate instruction to execute a target intermediate instruction of the target block; generating target instructions using the target block; generating calling instructions using the calling block and a computer control instruction for invoking the target instructions, when the calling instructions are executed by a calling processing unit and the target instructions are executed by a target processing unit; configuring the calling processing unit for executing the calling instructions; and configuring the target processing unit for executing the target instructions.
Type: Grant
Filed: March 29, 2023
Date of Patent: January 7, 2025
Assignee: Next Silicon Ltd
Inventors: Elad Raz, Ilan Tayari
-
Patent number: 12130736
Abstract: A method for caching memory comprising caching two data values, each of one of two ranges of application memory addresses, each associated with one of a set of threads, by: organizing a plurality of sequences of consecutive address sub-ranges in an interleaved sequence of address sub-ranges by alternately selecting, for each thread in an identified order of threads, a next sub-range in the respective sequence of sub-ranges associated therewith; generating a mapping of the interleaved sequence of sub-ranges to a range of physical memory addresses in order of the interleaved sequence of sub-ranges; and when a thread accesses an application memory address of the respective range of application addresses associated thereof: computing a target address according to the mapping using the application address; and storing the two data values in one cache-line of a plurality of cache-lines of a cache by accessing the physical memory area using the target address.
Type: Grant
Filed: August 7, 2023
Date of Patent: October 29, 2024
Assignee: Next Silicon Ltd
Inventors: Dan Shechter, Elad Raz
-
Publication number: 20240345881
Abstract: There is provided a method of allocation of memory, comprising: issuing an allocation operation for allocation of a region of a pool of a memory by a first process of a plurality of first processes executed in parallel on a first processor, sending a message to a second processor indicating the allocation of the region of the pool of the memory, issuing a free operation for release of the allocated region of the pool of the memory by a second process of a plurality of second processes executed in parallel on a second processor, and releasing, by the first processor, the allocated region of the pool of the memory as indicated in the free operation, wherein a same region of memory is allocated by the first process and released by the second process, wherein the first processes are concurrently attempting to issue the allocation operation and the second processes are concurrently attempting to issue the free operation.
Type: Application
Filed: June 24, 2024
Publication date: October 17, 2024
Applicant: Next Silicon Ltd
Inventors: Elad RAZ, Ilan TAYARI, Dan SHECHTER
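The allocate-on-one-processor, free-from-another pattern described above can be sketched as follows: the owning side performs both the allocation and the eventual release, while the other side only requests the free via a message queue. All class and variable names here are hypothetical; this is a single-threaded illustration of the message flow, not the patented mechanism.

```python
# Illustrative sketch: the first processor owns the pool and performs the
# actual release; the second processor only sends free requests as messages.

from collections import deque

class PoolOwner:
    """Runs on the first processor; owns the memory pool."""
    def __init__(self):
        self.allocated = set()
        self.free_requests = deque()   # messages arriving from the second processor

    def alloc(self, region):
        self.allocated.add(region)
        return region                  # sent as a message to the second processor

    def drain_free_requests(self):
        while self.free_requests:
            region = self.free_requests.popleft()
            self.allocated.discard(region)  # the actual release happens here

owner = PoolOwner()
region = owner.alloc("region-0")        # a first process allocates
owner.free_requests.append(region)      # a second process requests the free
owner.drain_free_requests()             # the first processor releases
assert not owner.allocated
```

Keeping the release on the owning processor avoids two processors mutating the pool's bookkeeping concurrently; the second side's frees become cheap message enqueues.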
-
Patent number: 12056376
Abstract: A device for executing a software program by at least one computational device, comprising an interconnected computing grid, connected to the at least one computational device, comprising an interconnected memory grid comprising a plurality of memory units connected by a plurality of memory network nodes, each connected to at least one of the plurality of memory units; wherein configuring the interconnected memory comprises: identifying a bypassable memory unit; selecting a backup memory unit connected to a backup memory network node; configuring the respective memory network node connected to the bypassable memory unit to forward at least one memory access request, comprising an address in a first address range, to the backup memory network node; and configuring the backup memory network node to access the backup memory unit in response to the at least one memory access request, in addition to accessing the respective at least one memory unit connected thereto.
Type: Grant
Filed: May 8, 2023
Date of Patent: August 6, 2024
Assignee: Next Silicon Ltd
Inventors: Yoav Lossin, Ron Schneider, Elad Raz, Ilan Tayari, Eyal Nagar
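The bypass scheme described above can be sketched roughly: the node attached to a bypassable unit forwards requests in a configured address range to a backup node, which serves the backup unit in addition to its own. Class and field names are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of forwarding memory requests around a bypassed unit.

class MemoryNode:
    def __init__(self, unit, backup_unit=None):
        self.unit = unit                 # memory unit attached to this node
        self.backup_unit = backup_unit   # extra unit this node also serves
        self.forward = None              # (lo, hi, backup_node) or None

    def access(self, addr):
        # Requests in the configured range are forwarded to the backup node.
        if self.forward and self.forward[0] <= addr < self.forward[1]:
            return self.forward[2].access_backup(addr)
        return (self.unit, addr)

    def access_backup(self, addr):
        # The backup node serves the backup unit in addition to its own unit.
        return (self.backup_unit, addr)

backup_node = MemoryNode("unit-B", backup_unit="unit-B2")
bypassed_node = MemoryNode("unit-A")
bypassed_node.forward = (0x000, 0x100, backup_node)  # unit-A is bypassed

assert bypassed_node.access(0x40) == ("unit-B2", 0x40)  # rerouted to backup
assert backup_node.access(0x40) == ("unit-B", 0x40)     # own unit still served
```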
-
Patent number: 12020069
Abstract: There is provided a computer implemented method of allocation of memory, comprising: issuing an allocation operation for allocation of a region of a pool of a memory by a first process executed on a first processor, sending a message to a second processor indicating the allocation of the region of the pool of the memory, wherein the first processor and the second processor access the region of the pool of the memory, issuing a free operation for release of the allocated region of the pool of the memory by a second process executed on a second processor, and releasing, by the first processor, the allocated region of the pool of the memory as indicated in the free operation, wherein the region of the pool of the memory allocated by the first process and released by the second process is a same region of memory.
Type: Grant
Filed: August 11, 2022
Date of Patent: June 25, 2024
Assignee: Next Silicon Ltd
Inventors: Elad Raz, Ilan Tayari, Dan Shechter, Yuval Asher Deutsher
-
Patent number: 11995419
Abstract: There is provided a method, comprising simultaneously presenting in a GUI, a source code and an interactive graph of nodes connected by edges representing the source code mapped to physical configurable elements of computational cluster(s) of a processor each configurable to execute mathematical operations, each node represents operation(s) mapped to physical configurable elements, and edges represent dependencies between the operations, mapped to physical dependency links between the configurable elements, receiving, via the GUI, a user selection of a portion of the source code, determining node(s) and/or edge(s) of the interactive graph corresponding to the portion, and updating the GUI for visually distinguishing the node(s) and/or edge(s), wherein the visually distinguished node(s) represents a mapping to certain physical configurable elements and the visually distinguished edge(s) represents certain dependency links between the certain physical configurable elements of the processor configured to execute
Type: Grant
Filed: October 25, 2023
Date of Patent: May 28, 2024
Assignee: Next Silicon Ltd
Inventors: Oshri Kdoshim, Elad Raz
-
Patent number: 11966619
Abstract: An apparatus for executing a software program, comprising at least one hardware processor configured for: identifying in a plurality of computer instructions at least one remote memory access instruction and a following instruction following the at least one remote memory access instruction; executing after the at least one remote memory access instruction a sequence of other instructions, where the sequence of other instructions comprises a return instruction to execute the following instruction; and executing the following instruction; wherein executing the sequence of other instructions comprises executing an updated plurality of computer instructions produced by at least one of: inserting into the plurality of computer instructions the sequence of other instructions or at least one flow-control instruction to execute the sequence of other instructions; and replacing the at least one remote memory access instruction with at least one non-blocking memory access instruction.
Type: Grant
Filed: September 17, 2021
Date of Patent: April 23, 2024
Assignee: Next Silicon Ltd
Inventors: Elad Raz, Yaron Dinkin
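The latency-hiding idea described above can be pictured with a small simulation: a blocking remote load is replaced by a non-blocking issue, other useful instructions run while the access is in flight, and a return step resumes at the instruction that follows the load. Everything below is a hypothetical single-threaded illustration, not the patented mechanism.

```python
# Illustrative sketch: issue a remote access without blocking, do other
# work, then "return" to the instruction that consumes the loaded value.

def remote_load_nonblocking(addr):
    # Stand-in for issuing a remote access; in reality the value would
    # arrive later, while other instructions execute.
    return {"addr": addr, "value": addr * 2}

def program():
    trace = []
    ticket = remote_load_nonblocking(21)   # non-blocking remote access
    trace.append("other-work-1")           # sequence of other instructions,
    trace.append("other-work-2")           # executed while the access is in flight
    value = ticket["value"]                # return to the following instruction
    trace.append(f"use-loaded-value:{value}")
    return trace

print(program())
```

The useful work between issue and consumption is what hides the remote access latency.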
-
Publication number: 20240054015
Abstract: There is provided a computer implemented method of allocation of memory, comprising: issuing an allocation operation for allocation of a region of a pool of a memory by a first process executed on a first processor, sending a message to a second processor indicating the allocation of the region of the pool of the memory, wherein the first processor and the second processor access the region of the pool of the memory, issuing a free operation for release of the allocated region of the pool of the memory by a second process executed on a second processor, and releasing, by the first processor, the allocated region of the pool of the memory as indicated in the free operation, wherein the region of the pool of the memory allocated by the first process and released by the second process is a same region of memory.
Type: Application
Filed: August 11, 2022
Publication date: February 15, 2024
Applicant: Next Silicon Ltd
Inventors: Elad RAZ, Ilan TAYARI, Dan SHECHTER, Yuval Asher DEUTSHER
-
Publication number: 20240028389
Abstract: A system for executing a plurality of software threads, comprising: a plurality of processing circuitries; a plurality of memory areas connected to the processing circuitries, each memory area associated with at least one of the processing circuitries; and at least one hardware processor, connected to the processing circuitries and configured for: in each of a plurality of iterations: while the processing circuitries execute the software threads, collecting for each thread a plurality of thread statistical values indicative of a plurality of memory accesses to at least some of the memory areas performed when executing the thread; for at least one thread, performing an analysis comprising the thread statistical values thereof to identify a preferred memory area of the plurality of memory areas; and configuring one of the at least one processing circuitry associated with the preferred memory area to execute the at least one thread.
Type: Application
Filed: July 25, 2022
Publication date: January 25, 2024
Applicant: Next Silicon Ltd
Inventors: Elad RAZ, Ilan TAYARI
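The placement decision described above can be sketched simply: count each thread's accesses per memory area, take the most-accessed area as its preferred area, and schedule the thread on a circuitry associated with that area. The area-to-circuitry mapping and all names below are illustrative assumptions.

```python
# Hypothetical sketch: pick a processing circuitry near the memory area
# a thread accesses most, based on collected per-thread access counters.

from collections import Counter

area_to_circuitry = {"mem0": "pc0", "mem1": "pc1"}  # assumed association

def preferred_area(access_counts: Counter) -> str:
    """The memory area this thread accessed most often."""
    return access_counts.most_common(1)[0][0]

def place_thread(access_counts: Counter) -> str:
    """Choose a circuitry associated with the thread's preferred area."""
    return area_to_circuitry[preferred_area(access_counts)]

stats = Counter({"mem0": 120, "mem1": 3400})  # collected while executing
print(place_thread(stats))  # 'pc1': run the thread near its hot memory area
```

Repeating this per iteration lets placement track phase changes in a thread's access pattern.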
-
Patent number: 11875153
Abstract: A system for processing a plurality of concurrent threads comprising: a reconfigurable processing grid, comprising logical elements and a context storage for storing thread contexts, each thread context for one of a plurality of concurrent threads, each implementing a dataflow graph comprising an identified operation; and a hardware processor configured for configuring the reconfigurable processing grid for: executing a first thread of the plurality of concurrent threads; and while executing the first thread: storing a runtime context value of the first thread in the context storage; while waiting for completion of the identified operation by identified logical elements, executing the identified operation of a second thread by the identified logical element; and when execution of the identified operation of the first thread completes: retrieving the runtime context value of the first thread from the context storage; and executing another operation of the first thread.
Type: Grant
Filed: July 5, 2023
Date of Patent: January 16, 2024
Assignee: Next Silicon Ltd
Inventors: Elad Raz, Ilan Tayari
-
Publication number: 20230393979
Abstract: A method for caching memory comprising caching two data values, each of one of two ranges of application memory addresses, each associated with one of a set of threads, by: organizing a plurality of sequences of consecutive address sub-ranges in an interleaved sequence of address sub-ranges by alternately selecting, for each thread in an identified order of threads, a next sub-range in the respective sequence of sub-ranges associated therewith; generating a mapping of the interleaved sequence of sub-ranges to a range of physical memory addresses in order of the interleaved sequence of sub-ranges; and when a thread accesses an application memory address of the respective range of application addresses associated thereof: computing a target address according to the mapping using the application address; and storing the two data values in one cache-line of a plurality of cache-lines of a cache by accessing the physical memory area using the target address.
Type: Application
Filed: August 7, 2023
Publication date: December 7, 2023
Applicant: Next Silicon Ltd
Inventors: Dan SHECHTER, Elad RAZ
-
Publication number: 20230376419
Abstract: A method for cache coherency in a reconfigurable cache architecture is provided. The method includes receiving a memory access command, wherein the memory access command includes at least an address of a memory to access; determining at least one access parameter based on the memory access command; and determining a target cache bin for serving the memory access command based in part on the at least one access parameter and the address.
Type: Application
Filed: August 4, 2023
Publication date: November 23, 2023
Applicant: Next Silicon Ltd
Inventor: Elad RAZ
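A minimal sketch of the bin-selection step described above: derive an access parameter from the command (here, simply the access type) and combine it with the address to pick a target bin. The hash-style combination is an assumption for illustration only; the patent does not specify this rule.

```python
# Hypothetical sketch: choose a target cache bin from the address plus an
# access parameter derived from the memory access command.

N_BINS = 8

def access_parameter(command: dict) -> int:
    # Illustrative parameter: steer reads and writes differently.
    return 0 if command["op"] == "read" else 1

def target_bin(command: dict) -> int:
    param = access_parameter(command)
    line = command["addr"] // 64        # work at cache-line granularity
    return (line ^ param) % N_BINS      # combine parameter and address

cmd = {"op": "read", "addr": 0x1040}
print(target_bin(cmd))
```

Because the bin depends deterministically on the address and parameter, every circuitry issuing the same command reaches the same bin, which is what makes the scheme coherent.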
-
Publication number: 20230342157
Abstract: An apparatus for computing, comprising a processing circuitry configured for computing an outcome of executing a set of computer instructions comprising a group of data variables, by: identifying an initial state of the processing circuitry; executing a set of anticipated computer instructions produced based on the set of computer instructions and a likely data value, where the likely data value is a value of one of the group of data variables anticipated to be computed by executing the set of computer instructions and computed using at least one program data value; and when identifying, while executing the set of anticipated computer instructions, a failed prediction where the data variable is not equal to the likely data value: restoring the initial state of the processing circuitry; and executing a set of alternative computer instructions, produced based on the set of computer instructions and the at least one likely data value.
Type: Application
Filed: January 11, 2022
Publication date: October 26, 2023
Applicant: Next Silicon Ltd
Inventors: Elad RAZ, Ilan TAYARI
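The value-speculation flow described above can be sketched as: snapshot the initial state, run code specialised for a likely value, and on a failed prediction restore the snapshot and run the non-specialised alternative. All names and the toy computation below are purely illustrative, not the patented design.

```python
# Hypothetical sketch of value speculation with state rollback.

def speculate(actual_value, likely_value, state):
    snapshot = dict(state)                   # identify the initial state
    state["result"] = likely_value * 10      # anticipated instructions,
                                             # specialised for likely_value
    if actual_value != likely_value:         # failed prediction detected
        state.clear()
        state.update(snapshot)               # restore the initial state
        state["result"] = actual_value * 10  # alternative instructions
        return "fallback", state
    return "speculated", state

print(speculate(7, 7, {}))   # prediction holds: speculative result kept
print(speculate(5, 7, {}))   # misprediction: state restored, recomputed
```

The payoff comes when predictions usually hold, so the specialised path runs most of the time and the rollback cost is rare.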
-
Publication number: 20230273736
Abstract: A device for executing a software program by at least one computational device, comprising an interconnected computing grid, connected to the at least one computational device, comprising an interconnected memory grid comprising a plurality of memory units connected by a plurality of memory network nodes, each connected to at least one of the plurality of memory units; wherein configuring the interconnected memory comprises: identifying a bypassable memory unit; selecting a backup memory unit connected to a backup memory network node; configuring the respective memory network node connected to the bypassable memory unit to forward at least one memory access request, comprising an address in a first address range, to the backup memory network node; and configuring the backup memory network node to access the backup memory unit in response to the at least one memory access request, in addition to accessing the respective at least one memory unit connected thereto.
Type: Application
Filed: May 8, 2023
Publication date: August 31, 2023
Applicant: Next Silicon Ltd
Inventors: Yoav LOSSIN, Ron SCHNEIDER, Elad RAZ, Ilan TAYARI, Eyal NAGAR
-
Patent number: 11720491
Abstract: A method for caching memory comprising caching two data values, each of one of two ranges of application memory addresses, each associated with one of a set of threads, by: organizing a plurality of sequences of consecutive address sub-ranges in an interleaved sequence of address sub-ranges by alternately selecting, for each thread in an identified order of threads, a next sub-range in the respective sequence of sub-ranges associated therewith; generating a mapping of the interleaved sequence of sub-ranges to a range of physical memory addresses in order of the interleaved sequence of sub-ranges; and when a thread accesses an application memory address of the respective range of application addresses associated thereof: computing a target address according to the mapping using the application address; and storing the two data values in one cache-line of a plurality of cache-lines of a cache by accessing the physical memory area using the target address.
Type: Grant
Filed: December 17, 2021
Date of Patent: August 8, 2023
Assignee: Next Silicon Ltd
Inventors: Dan Shechter, Elad Raz