Patents by Inventor Lior Khermosh

Lior Khermosh has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9781027
    Abstract: Various systems and methods to facilitate general communication, via a memory network, between compute elements and external destinations, while at the same time facilitating low latency communication between compute elements and memory modules storing data sets, without negatively impacting the latency of the communication between the compute elements and the memory modules. General communication messages between compute nodes and a gateway compute node are facilitated with a first communication protocol adapted for low latency transmissions. Such general communication messages are then transmitted to external destinations with a second communication protocol that is adapted for the general communication network and which may or may not be low latency, but such that the low latency between the compute elements and the memory modules is not negatively impacted. The memory modules may be based on RAM or DRAM or another structure allowing low latency access by the compute elements. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: April 2, 2015
    Date of Patent: October 3, 2017
    Assignee: Parallel Machines Ltd.
    Inventors: Avner Braverman, Ofir Shalvi, Lior Khermosh, Ofer Bar-Or, Eyal Benjamin Raz Oren, Gal Zuckerman
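    Illustrative sketch (not part of the patent): a minimal Python model of the gateway idea, in which compute nodes post general-communication messages on a fast in-memory queue (standing in for the first, low-latency protocol) and a background forwarder delivers them to external destinations through a separate, possibly slower channel (standing in for the second protocol), so the fast path between compute elements and memory is never blocked. The class names, the queue-based transport, and the callback interface are assumptions made for illustration.
      import queue
      import threading

      class Gateway:
          """Forwards messages from an internal low-latency queue to an external channel."""

          def __init__(self, external_send):
              self._inbox = queue.Queue()          # stands in for the first (low-latency) protocol
              self._external_send = external_send  # stands in for the second (general) protocol
              threading.Thread(target=self._forward, daemon=True).start()

          def post(self, destination, payload):
              # Compute nodes return immediately; the memory-access path is not slowed down.
              self._inbox.put((destination, payload))

          def _forward(self):
              while True:
                  destination, payload = self._inbox.get()
                  self._external_send(destination, payload)  # may be slow; runs off the fast path
                  self._inbox.task_done()

      if __name__ == "__main__":
          delivered = []
          gw = Gateway(external_send=lambda dst, msg: delivered.append((dst, msg)))
          gw.post("external-host-1", b"status report")
          gw._inbox.join()                         # wait until the forwarder drains the queue
          print(delivered)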
  • Patent number: 9781225
    Abstract: Various embodiments of systems and methods to efficiently use a compute element to process a plurality of values distributed over a plurality of servers using a plurality of keys. In various embodiments, a system is configured to identify (or “derive”) the various server locations of various data values, to send requests to the various servers for the needed data values, to receive the data values from the various servers, and to process the various data values received. In various embodiments, requests are sent and data values are received via a switching network. In various embodiments, the servers are organized in a key-value store, which may optionally be a shared memory pool. Some embodiments use a small number of compute elements and servers, while alternative embodiments may scale to hundreds or thousands of compute elements and servers. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: February 3, 2015
    Date of Patent: October 3, 2017
    Assignee: Parallel Machines Ltd.
    Inventors: Avner Braverman, Michael Adda, Lior Amar, Lior Khermosh, Gal Zuckerman
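    Illustrative sketch (not part of the patent): a toy Python model of deriving a value's server location from its key, requesting the needed values, and processing them. The hash-based placement, the class names, and the in-process "servers" are assumptions for illustration; a real deployment would send the requests over a switching network.
      import hashlib

      class KeyValueCluster:
          """Toy model of values spread over several servers, addressed by key."""

          def __init__(self, num_servers):
              self.servers = [dict() for _ in range(num_servers)]

          def locate(self, key):
              # "Derive" the server holding a key from the key itself (a directory service
              # or consistent hashing could be used instead; this is an assumption).
              digest = hashlib.sha256(key.encode()).hexdigest()
              return int(digest, 16) % len(self.servers)

          def put(self, key, value):
              self.servers[self.locate(key)][key] = value

          def get_many(self, keys):
              # Send one request per needed key to the server derived for it,
              # then gather the returned values for processing.
              return {key: self.servers[self.locate(key)][key] for key in keys}

      if __name__ == "__main__":
          cluster = KeyValueCluster(num_servers=4)
          for k, v in [("a", 1), ("b", 2), ("c", 3)]:
              cluster.put(k, v)
          values = cluster.get_many(["a", "c"])
          print(sum(values.values()))   # the compute element processes the gathered values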
  • Patent number: 9753873
    Abstract: Various embodiments of systems and methods to interleave high priority key-value transactions together with lower priority transactions, in which both types of transactions are communicated over a shared input-output medium. In various embodiments, a central-processing-unit (CPU) initiates high priority key-value transactions by communicating via the shared input-output medium to a key-value store. In various embodiments, a medium controller blocks or delays lower priority transactions so that the high priority transactions may proceed without interruption. In various embodiments, both types of transactions are packet-based, and the system interrupts a lower priority transaction at a particular packet, then completes the high priority transaction, then completes the lower priority transaction. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: February 3, 2015
    Date of Patent: September 5, 2017
    Assignee: Parallel Machines Ltd.
    Inventors: Lior Khermosh, Avner Braverman, Gal Zuckerman
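    Illustrative sketch (not part of the patent): a small Python model of a medium controller that shares one packetized I/O path between high priority key-value transactions and lower priority transactions, interrupting a lower priority transaction only at a packet boundary. The queue structure and packet names are assumptions for illustration.
      from collections import deque

      class MediumController:
          """Shares one I/O medium between packetized transactions of two priorities."""

          def __init__(self):
              self.high = deque()   # packets of high-priority key-value transactions
              self.low = deque()    # packets of lower-priority transactions
              self.log = []         # order in which packets actually go out on the medium

          def submit(self, packets, high_priority=False):
              (self.high if high_priority else self.low).extend(packets)

          def run(self):
              # A lower-priority transaction is interrupted only at a packet boundary:
              # after each packet we check whether high-priority packets are waiting.
              while self.high or self.low:
                  source = self.high if self.high else self.low
                  self.log.append(source.popleft())
              return self.log

      if __name__ == "__main__":
          mc = MediumController()
          mc.submit([f"low-{i}" for i in range(3)])
          mc.log.append(mc.low.popleft())           # first low-priority packet already went out
          mc.submit(["kv-get-1", "kv-get-2"], high_priority=True)  # high priority arrives mid-transaction
          print(mc.run())  # ['low-0', 'kv-get-1', 'kv-get-2', 'low-1', 'low-2']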
  • Patent number: 9733988
    Abstract: Various systems and methods to achieve load balancing among a plurality of compute elements accessing a shared memory pool. The shared memory pool is configured to store and serve a plurality of data sets associated with a task; a first data interface's internal registry is configured to keep track of which data sets have been extracted from the shared memory pool and served to the compute elements; the first data interface is configured to extract from the shared memory pool, and serve to the compute elements, data sets which have not yet been extracted and served; the rate at which data sets are extracted and served to each particular compute element is proportional to the rate at which that compute element requests data sets; and the system may continue to extract, serve, and process data sets until all of the data sets associated with the task have been processed once. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: February 27, 2015
    Date of Patent: August 15, 2017
    Assignee: Parallel Machines Ltd.
    Inventors: Michael Adda, Avner Braverman, Lior Khermosh, Gal Zuckerman
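    Illustrative sketch (not part of the patent): a Python model of the pull-based load balancing described above, in which a data interface's registry tracks unserved data sets and each request hands out one not-yet-served set, so a compute element that requests more often is served proportionally more. Names and the in-memory pool are assumptions for illustration.
      class DataInterface:
          """Serves each data set of a task exactly once, to whichever compute element asks next."""

          def __init__(self, shared_pool):
              self.pool = shared_pool                 # data_set_id -> data
              self.unserved = set(shared_pool)        # internal registry of what is still pending

          def request(self, compute_element_id):
              # Pull model: a compute element that requests more often is served more often,
              # so faster elements naturally receive proportionally more data sets.
              if not self.unserved:
                  return None                          # task finished: everything served once
              data_set_id = self.unserved.pop()
              return data_set_id, self.pool[data_set_id]

      if __name__ == "__main__":
          di = DataInterface({f"set{i}": list(range(i, i + 3)) for i in range(6)})
          served = {"fast": 0, "slow": 0}
          requests = ["fast", "fast", "slow", "fast", "fast", "slow"]  # "fast" asks twice as often
          for ce in requests:
              if di.request(ce) is not None:
                  served[ce] += 1
          print(served)   # {'fast': 4, 'slow': 2}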
  • Patent number: 9720826
    Abstract: Various embodiments of systems and methods to allow and control simultaneous access and processing by multiple compute elements of multiple data sets stored in multiple memory modules. The compute elements request data to be processed without specifying any particular data sets to be received. Data interfaces receive the data requests from the compute elements, determine which data sets have not yet been served to the compute elements, select data sets to be served from among those that have not yet been served, and fetch these data sets from the memory modules. The process of requesting additional data by the compute elements, selection by the data interfaces of data sets to be served among those that have not yet been served, and providing such data sets by the data interfaces to the compute elements, may continue until all of the data sets have been served to the compute elements.
    Type: Grant
    Filed: February 27, 2015
    Date of Patent: August 1, 2017
    Assignee: Parallel Machines Ltd.
    Inventors: Michael Adda, Avner Braverman, Lior Khermosh, Gal Zuckerman
  • Patent number: 9690713
    Abstract: Various systems and methods to use a plurality of linked lists for keeping track of changes to be made in data sets currently in a flash memory. To enhance efficiency of the system, the changes to be made in any particular data set are aggregated in a random access memory (“RAM”) until a sufficient volume of changes has been aggregated to justify a rewrite of the flash memory block in which the particular data set is stored. Since a flash memory may have millions of memory blocks and data sets, keeping track of all the changes could place tremendous demands on the memory resources of the RAM, but these demands are avoided through the use of linked lists, in which each list links all of the changes that have been aggregated in RAM and that apply to one specific data set. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: April 19, 2015
    Date of Patent: June 27, 2017
    Assignee: Parallel Machines Ltd.
    Inventors: Lior Khermosh, Ofer Bar-Or, Gal Zuckerman
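    Illustrative sketch (not part of the patent): a Python model of per-data-set linked lists that aggregate pending changes in RAM and trigger a flash block rewrite only once enough changes have accumulated. The flush threshold, the rewrite callback, and the class names are assumptions for illustration.
      class PendingChange:
          """Node in a singly linked list of changes aggregated in RAM for one data set."""

          def __init__(self, change, next_node=None):
              self.change = change
              self.next = next_node

      class ChangeAggregator:
          """Aggregates changes per data set and rewrites a flash block only when worthwhile."""

          def __init__(self, flush_threshold, rewrite_block):
              self.heads = {}                     # data_set_id -> head of its linked list
              self.counts = {}                    # data_set_id -> number of pending changes
              self.flush_threshold = flush_threshold
              self.rewrite_block = rewrite_block  # callback that rewrites the flash block

          def add_change(self, data_set_id, change):
              # Prepend to the data set's list; only a head pointer is kept per data set.
              self.heads[data_set_id] = PendingChange(change, self.heads.get(data_set_id))
              self.counts[data_set_id] = self.counts.get(data_set_id, 0) + 1
              if self.counts[data_set_id] >= self.flush_threshold:
                  self._flush(data_set_id)

          def _flush(self, data_set_id):
              changes, node = [], self.heads.pop(data_set_id)
              while node:
                  changes.append(node.change)
                  node = node.next
              self.counts.pop(data_set_id)
              self.rewrite_block(data_set_id, list(reversed(changes)))  # oldest change first

      if __name__ == "__main__":
          agg = ChangeAggregator(flush_threshold=3,
                                 rewrite_block=lambda ds, ch: print("rewriting", ds, "with", ch))
          for i in range(3):
              agg.add_change("set-42", f"update-{i}")   # the third change triggers the block rewrite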
  • Patent number: 9690705
    Abstract: Described herein are systems and methods to process efficiently, according to a certain order, a plurality of data sets arranged in data blocks. In one embodiment, a first compute element receives from another compute element a first set of instructions that determines an order in which a plurality of data sets are to be processed as part of a processing task. Relevant data sets are then streamed into a cache memory associated with the first compute element, but the order of streaming follows not the order of storage but rather the order conveyed in the first set of instructions. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: September 29, 2015
    Date of Patent: June 27, 2017
    Assignee: Parallel Machines Ltd.
    Inventors: Michael Adda, Lior Amar, Avner Braverman, Lior Khermosh, Gal Zuckerman
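    Illustrative sketch (not part of the patent): a Python model of streaming data blocks into a bounded cache in the order conveyed by received instructions rather than the order of storage. The FIFO-style eviction and the names are assumptions for illustration.
      class OrderedStreamer:
          """Streams data blocks into a small cache in the order dictated by instructions."""

          def __init__(self, storage, cache_capacity):
              self.storage = storage              # block_id -> data, in arbitrary storage order
              self.cache = {}
              self.capacity = cache_capacity

          def process(self, processing_order, handle):
              # The order comes from the received instructions, not from the storage layout.
              for block_id in processing_order:
                  if block_id not in self.cache:
                      if len(self.cache) >= self.capacity:
                          self.cache.pop(next(iter(self.cache)))     # evict the oldest cached block
                      self.cache[block_id] = self.storage[block_id]  # stream the block into cache
                  handle(block_id, self.cache[block_id])

      if __name__ == "__main__":
          storage = {"b0": [1, 2], "b1": [3, 4], "b2": [5, 6]}
          streamer = OrderedStreamer(storage, cache_capacity=2)
          streamer.process(["b2", "b0", "b1"], handle=lambda b, d: print(b, sum(d)))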
  • Patent number: 9639407
    Abstract: Various systems and methods to perform efficiently a first processing task in conjunction with a plurality of data sets. A first code sequence comprises a plurality of general commands, and a specific command including a description of a first data processing task to be performed in conjunction with the data sets. The general commands are received and processed in a standard manner. The specific command is identified automatically by its nature, and the description within the specific command is then converted into a first sequence of executable instructions executable by a plurality of compute elements holding the plurality of data sets. The ultimate result is an efficient implementation of the first processing task. In some embodiments, the implementation of the first processing task is assisted by a pre-defined procedure that allocates the data sets to the compute elements and shares instances of executable instructions with the compute elements.
    Type: Grant
    Filed: June 16, 2015
    Date of Patent: May 2, 2017
    Assignee: Parallel Machines Ltd.
    Inventors: Avner Braverman, Michael Adda, Lior Amar, Lior Khermosh, Eli Finer, Gal Zuckerman
  • Patent number: 9639473
    Abstract: Described herein are systems and methods to prevent a controller in a DDIO (data direct input output) system from shifting currently-required data out of a cache memory. In one embodiment, a compute element disables caching of some specific addresses in a non-cache memory, but still enables caching of other addresses in the non-cache memory, thereby practically disabling the DDIO system, so that data sets not currently needed are placed in the addresses in the non-cache memory which are not cached. As a result, currently-required data are not shifted out of cache memory. The compute element then determines that the data sets, which formerly avoided being cached, are now required. The system therefore copies the data sets that are now required from addresses in non-cache memory not accessible to cache memory, to addresses in non-cache memory accessible to cache memory, thereby allowing the caching and processing of such data sets.
    Type: Grant
    Filed: November 6, 2015
    Date of Patent: May 2, 2017
    Assignee: Parallel Machines Ltd.
    Inventors: Michael Adda, Avner Braverman, Lior Amar, Dan Aloni, Lior Khermosh, Gal Zuckerman
  • Patent number: 9632936
    Abstract: Described herein are systems and methods to redirect data read requests from the first tier to the second tier of a two-tier distributed memory. The first tier includes memory modules with data sets. Data interfaces associated with the first-tier memory modules receive, from a second tier comprising compute elements and associated cache memories, requests to fetch data from the first tier. If a data set has not recently been fetched by the second tier, then the data interface will send the data set from the first tier to the cache memory associated with the requesting compute element. If a data set has recently been fetched by the second tier, the data interface will redirect the requesting compute element to fetch the data set from the cache memory in which the data set is currently located. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: September 29, 2015
    Date of Patent: April 25, 2017
    Assignee: Parallel Machines Ltd.
    Inventors: Gal Zuckerman, Avner Braverman, Lior Khermosh
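    Illustrative sketch (not part of the patent): a Python model of a first-tier data interface that serves a data set on its first fetch and afterwards redirects requesters to the second-tier cache that already holds it. The tracking structure and names are assumptions for illustration.
      class TierOneInterface:
          """Serves data sets from first-tier memory, or redirects to a peer cache that has them."""

          def __init__(self, memory_module):
              self.memory = memory_module       # data_set_id -> data (first tier)
              self.recently_fetched = {}        # data_set_id -> compute element caching it

          def fetch(self, compute_element_id, data_set_id):
              holder = self.recently_fetched.get(data_set_id)
              if holder is not None and holder != compute_element_id:
                  # Redirect: the requester should read the set from that peer's cache memory.
                  return ("redirect", holder)
              self.recently_fetched[data_set_id] = compute_element_id
              return ("data", self.memory[data_set_id])

      if __name__ == "__main__":
          iface = TierOneInterface({"ds1": [10, 20, 30]})
          print(iface.fetch("ce-A", "ds1"))   # ('data', [10, 20, 30])  served from the first tier
          print(iface.fetch("ce-B", "ds1"))   # ('redirect', 'ce-A')    fetch from ce-A's cache instead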
  • Patent number: 9594696
    Abstract: Various systems and methods to generate automatically a procedure operative to distributively process a plurality of data sets stored on a plurality of memory modules. Under the instruction of the automatically generated procedure, compute elements request data sets relevant to a particular task, such data sets are fetched from memory modules by data interfaces which provide such data sets to the requesting compute elements, and the compute elements then process the received data sets until the task is completed. Relevant data sets are fetched and processed asynchronously, which means that the relevant data sets need not be fetched and processed in any particular order.
    Type: Grant
    Filed: June 16, 2015
    Date of Patent: March 14, 2017
    Assignee: Parallel Machines Ltd.
    Inventors: Avner Braverman, Michael Adda, Lior Amar, Lior Khermosh, Eli Finer, Gal Zuckerman
  • Patent number: 9594688
    Abstract: Described herein are systems and methods to execute efficiently a plurality of actions, in which multiple actions require the use of a single data set. The data set is fetched from a data source, across a switching network, to a memory associated with a first compute element. This is the only fetching of the data set from the data source, and the only fetching across a switching network, thereby minimizing fetching across the switching network, reducing the load on the switching network, decreasing the time needed to access the data set in second and subsequent processes, and enhancing the efficiency of the system. In some embodiments, processes are migrated from second and subsequent compute elements to the compute element in which the data set is stored. In some embodiments, second and subsequent compute elements access the data set stored in the memory associated with the first compute element.
    Type: Grant
    Filed: July 23, 2015
    Date of Patent: March 14, 2017
    Assignee: Parallel Machines Ltd.
    Inventors: Michael Adda, Avner Braverman, Lior Amar, Lior Khermosh, Gal Zuckerman
  • Patent number: 9547553
    Abstract: Various systems to achieve data resiliency in a shared memory pool are presented. Multiple memory modules are associated with multiple data interfaces, one or more erasure-coding interfaces are communicatively connected with the multiple data interfaces, and multiple compute elements are communicatively connected with the one or more erasure-coding interfaces. Data sets are erasure-coded, and the resulting fragments are stored in random access memory modules distributed throughout the system. Storage in RAM allows real-time fetching of fragments using random-access read cycles and streaming of fragments using random-access write cycles, in which read operations include reconstruction of data sets from fetched data fragments, and write operations convert data sets into fragments which are then streamed and distributively stored. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: March 8, 2015
    Date of Patent: January 17, 2017
    Assignee: Parallel Machines Ltd.
    Inventors: Lior Khermosh, Avner Braverman, Ofir Shalvi, Ofer Bar-Or, Gal Zuckerman
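    Illustrative sketch (not part of the patent): a Python model of erasure-coded fragments distributed over memory modules, using a single XOR parity fragment so the data can be reconstructed when any one fragment is unavailable. The patent does not specify this particular code; a production system would typically use a stronger scheme such as Reed-Solomon, and all names here are illustrative.
      from functools import reduce

      def xor_bytes(a: bytes, b: bytes) -> bytes:
          return bytes(x ^ y for x, y in zip(a, b))

      def encode(data: bytes, k: int):
          """Split data into k equal fragments plus one XOR parity fragment (tolerates one loss)."""
          size = -(-len(data) // k)                                   # fragment size, rounded up
          frags = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
          parity = reduce(xor_bytes, frags)
          return frags + [parity]

      def decode(fragments, original_length):
          """Reconstruct the data even if exactly one fragment is missing (None)."""
          missing = [i for i, f in enumerate(fragments) if f is None]
          if missing:
              i = missing[0]
              present = [f for f in fragments if f is not None]
              fragments = fragments[:i] + [reduce(xor_bytes, present)] + fragments[i + 1:]
          return b"".join(fragments[:-1])[:original_length]           # drop parity and padding

      if __name__ == "__main__":
          data = b"erasure-coded fragments held in RAM modules"
          frags = encode(data, k=4)          # 4 data fragments + 1 parity, spread over 5 modules
          frags[2] = None                    # one memory module is unavailable
          assert decode(frags, len(data)) == data
          print("reconstructed OK")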
  • Patent number: 9529622
    Abstract: Various systems and methods to generate automatically a procedure operative to divide a processing task between two or more compute elements. A first compute element converts a code sequence into a sequence of executable instructions, which direct a second compute element to perform a first processing sub-task on a data set, and which also direct a third compute element to perform a second processing sub-task on the data set modified by the first processing sub-task. A memory module storing the data set may be embedded in a server with at least one of the compute elements. In some of the embodiments, all of the compute elements are part of a single system, whereas in alternative embodiments, at least some of the compute elements are part of two or more sub-systems.
    Type: Grant
    Filed: June 16, 2015
    Date of Patent: December 27, 2016
    Assignee: Parallel Machines Ltd.
    Inventors: Michael Adda, Avner Braverman, Lior Amar, Lior Khermosh, Eli Finer, Gal Zuckerman
  • Patent number: 9477412
    Abstract: Described herein are various systems and methods to automatically decide to aggregate data write requests in a distributed data store. A system initiates outgoing data write requests in synchronization with incoming data store commands, thereby facilitating low-latency read-back of the data. In response to an absence of data read requests, the system automatically changes its behavior so that each outgoing request includes two or more data sets, thereby breaking synchronization but reducing traffic load on a switching network within the system. If the system later detects data read requests for previously stored data, the system automatically changes back to the original synchronized state, thereby decreasing the latency of accessing stored data. The system alternates between the two modes of operation to balance low latency of data access against reduced traffic load on the switching network. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: July 23, 2015
    Date of Patent: October 25, 2016
    Assignee: Parallel Machines Ltd.
    Inventors: Lior Amar, Gal Zuckerman, Avner Braverman, Lior Khermosh, Michael Adda
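    Illustrative sketch (not part of the patent): a Python model of switching between a synchronized mode (one write request per store command) and an aggregated mode (several data sets per request), driven by the presence or absence of read activity. The batch size, callback, and names are assumptions for illustration.
      class WriteAggregator:
          """Switches between per-command writes (low read latency) and batched writes (less traffic)."""

          def __init__(self, network_send, batch_size):
              self.network_send = network_send   # sends one write request over the switching network
              self.batch_size = batch_size
              self.batching = False              # start synchronized: one request per store command
              self.pending = []

          def store(self, data_set):
              if self.batching:
                  self.pending.append(data_set)
                  if len(self.pending) >= self.batch_size:
                      self.network_send(self.pending)        # one request carrying several data sets
                      self.pending = []
              else:
                  self.network_send([data_set])              # one request per incoming command

          def on_read_activity(self, reads_observed):
              # Reads present -> return to synchronized mode for low-latency read-back;
              # no reads -> aggregate to reduce load on the switching network.
              if reads_observed and self.batching and self.pending:
                  self.network_send(self.pending)            # flush so readers see fresh data
                  self.pending = []
              self.batching = not reads_observed

      if __name__ == "__main__":
          requests = []
          wa = WriteAggregator(network_send=requests.append, batch_size=3)
          wa.store("a")                     # sent immediately
          wa.on_read_activity(False)        # no readers: switch to aggregated mode
          for x in ("b", "c", "d"):
              wa.store(x)                   # sent as a single three-item request
          wa.on_read_activity(True)         # readers detected: back to synchronized mode
          wa.store("e")
          print(requests)                   # [['a'], ['b', 'c', 'd'], ['e']]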
  • Patent number: 8526431
    Abstract: A method for registration of multiple entities belonging to a specific optical network unit (ONU). In one embodiment, the multiple-entity registration method comprises checking by an optical line terminal (OLT) whether a registration request message received from the specific ONU belongs to a certain grant and, based on the check result, registering an entity as either a first or an additional entity of the specific ONU. In another embodiment, the method comprises checking by an OLT of a reserved value of a flags field inside a registration request message and, based on the check result, registering an entity as either a first or an additional entity of the specific ONU. The knowledge by the OLT that multiple entities belong to a specific ONU is used for grant optimization and packet data flow optimization. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: May 2, 2012
    Date of Patent: September 3, 2013
    Assignee: PMC-Sierra Israel Ltd.
    Inventors: Onn Haran, Ariel Maislos, Lior Khermosh
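    Illustrative sketch (not part of the patent): a Python model of an OLT registering a first or an additional entity for the same ONU based on a reserved bit in the registration request's flags field. The flag position, identifiers, and bookkeeping are assumptions for illustration and do not reflect the actual MPCP encoding.
      ADDITIONAL_ENTITY_FLAG = 0x80   # assumed position of the reserved flag bit (illustrative only)

      class OLT:
          """Registers one or more logical entities per physical ONU."""

          def __init__(self):
              self.registrations = {}          # onu_id -> list of registered entity ids
              self.next_entity_id = 1

          def on_register_request(self, onu_id, flags):
              entities = self.registrations.setdefault(onu_id, [])
              is_additional = bool(flags & ADDITIONAL_ENTITY_FLAG) and bool(entities)
              entity_id = self.next_entity_id
              self.next_entity_id += 1
              entities.append(entity_id)
              # Knowing that several entities share one ONU lets the OLT merge their
              # grants and optimize packet data flow on the shared upstream channel.
              return {"entity_id": entity_id,
                      "role": "additional" if is_additional else "first",
                      "onu_entities": list(entities)}

      if __name__ == "__main__":
          olt = OLT()
          print(olt.on_register_request("onu-7", flags=0x00))   # first entity of the ONU
          print(olt.on_register_request("onu-7", flags=0x80))   # additional entity of the same ONU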
  • Patent number: 8422887
    Abstract: A system for redundancy in Ethernet passive optical networks (EPONs) facilitates fast recovery from failure (less than 50 msec), path redundancy of the fiber optic network, and location redundancy of the OLTs. An optical network unit (ONU) in a normal state monitors input communications, and when the input communications are quiet for a predetermined minimum length of time, the ONU transitions to a lenient state in which it accepts both old and new security keys; upon receiving a packet, the ONU updates its timestamp based on the packet's timestamp and transitions back to the normal state of operation. While the ONU is in the lenient state, if a packet is not received for a predetermined length of time, the ONU transitions to a deregistered state. In this system, main and standby OLTs do not require synchronization of security parameters or synchronization for differences in fiber lengths. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: January 31, 2010
    Date of Patent: April 16, 2013
    Assignee: PMC Sierra Ltd
    Inventors: Zachy Haramaty, Yaniv Kopelman, Alon Meirson, Lior Khermosh
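    Illustrative sketch (not part of the patent): a Python state machine for the ONU's normal, lenient, and deregistered states, entering the lenient state after a quiet period, resynchronizing its timestamp from the first packet received while lenient, and deregistering after prolonged silence. The timeout values and names are assumptions for illustration.
      NORMAL, LENIENT, DEREGISTERED = "normal", "lenient", "deregistered"

      class ONUStateMachine:
          """Recovery-oriented ONU state handling, with illustrative timeout values."""

          QUIET_TO_LENIENT = 0.05      # seconds of silence before entering the lenient state (assumed)
          LENIENT_TIMEOUT = 1.0        # silence in the lenient state before deregistering (assumed)

          def __init__(self, now=0.0):
              self.state = NORMAL
              self.timestamp = 0
              self.last_rx = now

          def on_tick(self, now):
              quiet = now - self.last_rx
              if self.state == NORMAL and quiet >= self.QUIET_TO_LENIENT:
                  self.state = LENIENT                 # start accepting old and new security keys
              elif self.state == LENIENT and quiet >= self.LENIENT_TIMEOUT:
                  self.state = DEREGISTERED            # give up and wait for re-registration

          def on_packet(self, now, packet_timestamp):
              self.last_rx = now
              if self.state == LENIENT:
                  self.timestamp = packet_timestamp    # resynchronize to the (possibly new) OLT
                  self.state = NORMAL

      if __name__ == "__main__":
          onu = ONUStateMachine()
          onu.on_tick(0.06)
          print(onu.state)                             # lenient: the link went quiet
          onu.on_packet(0.07, packet_timestamp=1234)
          print(onu.state, onu.timestamp)              # normal 1234: recovered on the standby OLT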
  • Patent number: 8406620
    Abstract: An in-band OTDR uses a network's communication protocols to perform OTDR testing on a link. Because the OTDR signal (probe pulse) is handled like a data signal, the time required for OTDR testing is typically about the same as the time required for other global network events and is not considered an interruption of service to users. The network equipment includes an optical time domain reflectometry (OTDR) transmitter and receiver, each operationally connected to a link to transmit and receive, respectively, an OTDR signal. When an OTDR test is to be performed, a network device operationally connected to the link actuates the OTDR transmitter to transmit the OTDR signal on the link during a test time determined from the communications protocol of the link, during which data signals are not transmitted to the network equipment. A processing system processes the OTDR signal to provide OTDR test results.
    Type: Grant
    Filed: July 8, 2010
    Date of Patent: March 26, 2013
    Assignee: PMC Sierra Israel Ltd.
    Inventors: Lior Khermosh, Christopher Michael Look, Tiberiu Galambos
  • Patent number: 8397064
    Abstract: A method and system are provided for securing communication on an EPON. In particular, different types of encrypted messages, each with a respective short MAC SecTAG, may be sent from the OLT to an ONU and from an ONU to the OLT without the need for a full SecTAG with an explicit SCI. Discovery and control messages may be encrypted, and a security offset may be less than 30 bytes. A packet header, including its MAC address, may be encrypted.
    Type: Grant
    Filed: January 5, 2010
    Date of Patent: March 12, 2013
    Assignee: PMC Sierra Ltd.
    Inventors: Lior Khermosh, Zachy Haramaty, Jeff Mandin
  • Patent number: 8335439
    Abstract: A method of managing forward error correction (FEC) initialization and auto-negotiation in Ethernet passive optical networks includes an optical line terminal (OLT) receiving forward error corrected data from an optical network unit (ONU) and responding to the ONU with forward error corrected data. Upon receiving data that is not forward error corrected from an ONU, the OLT responds with data not coded for FEC. Similarly, upon receiving forward error corrected data from the OLT, the ONU responds with forward error corrected data, and upon receiving data that is not forward error corrected from the OLT, the ONU responds with data that is not forward error corrected. The communication quality from the ONU is monitored; if the quality is not sufficient, the OLT transmits forward error corrected data to the ONU; otherwise, the OLT transmits non-FEC data to the ONU. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: June 30, 2009
    Date of Patent: December 18, 2012
    Assignee: PMC-Sierra Israel Ltd.
    Inventor: Lior Khermosh
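    Illustrative sketch (not part of the patent): a Python model of the OLT side of FEC auto-negotiation, mirroring the FEC choice observed on received frames and forcing FEC on when the monitored link quality is insufficient. The bit-error-rate threshold and names are assumptions for illustration.
      class OLTLink:
          """Mirrors the ONU's FEC choice downstream and forces FEC when upstream quality drops."""

          def __init__(self, fec_threshold_ber=1e-4):
              self.fec_threshold_ber = fec_threshold_ber   # illustrative quality threshold (assumed)
              self.downstream_fec = False

          def on_upstream_frame(self, frame_uses_fec):
              # Auto-negotiation by mirroring: answer FEC-coded frames with FEC-coded frames,
              # and uncoded frames with uncoded frames.
              self.downstream_fec = frame_uses_fec

          def on_quality_report(self, bit_error_rate):
              # If the upstream quality is insufficient, transmit FEC-coded data regardless.
              if bit_error_rate > self.fec_threshold_ber:
                  self.downstream_fec = True

      if __name__ == "__main__":
          link = OLTLink()
          link.on_upstream_frame(frame_uses_fec=False)
          print(link.downstream_fec)                   # False: mirror the ONU's uncoded frames
          link.on_upstream_frame(frame_uses_fec=True)
          print(link.downstream_fec)                   # True: mirror the ONU's FEC-coded frames
          link.on_upstream_frame(frame_uses_fec=False)
          link.on_quality_report(bit_error_rate=5e-4)
          print(link.downstream_fec)                   # True: quality too low, FEC forced on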