Patents by Inventor Avner BRAVERMAN

Avner BRAVERMAN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10621001
    Abstract: System and methods for grouping tasks into groups, associating each of the groups with a respective isolated environment, pre-loading each of the isolated environments with stand-by task-specific information of all the tasks in the group, and, upon a request to activate one of the tasks, quickly activating the task in the respective isolated environment using the respective stand-by task-specific information already pre-loaded, while optionally clearing other stand-by task-specific information from the respective isolated environment, thereby efficiently executing the requested task while avoiding adverse interaction, inter-group and intra-group, between the tasks, and while also saving system resources by avoiding the allocation of a dedicated isolated environment for each of the tasks. Tasks may be grouped so as to reduce the likelihood of intra-group adverse interaction or to reduce the consequences of such adverse interaction.
    Type: Grant
    Filed: January 16, 2018
    Date of Patent: April 14, 2020
    Assignee: Binaris Inc
    Inventors: Avner Braverman, Michael Adda
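    The pre-warming idea in this abstract can be sketched in a few lines. This is an illustrative toy, not the patented method: all class, function, and task names below are hypothetical, and a real system would isolate at the container or VM level rather than with Python objects.

    ```python
    # Hypothetical sketch: a group of tasks shares one isolated environment
    # that is pre-loaded with stand-by state for every task in the group,
    # so activating any one task only selects data that is already resident.

    class IsolatedEnvironment:
        def __init__(self, group):
            # Pre-load stand-by task-specific information for the whole group.
            self.standby = {task: f"preloaded-state-of-{task}" for task in group}

        def activate(self, task):
            # The task's state is already resident, so activation is fast.
            state = self.standby[task]
            # Optionally clear the other tasks' stand-by information.
            self.standby = {task: state}
            return f"running {task} with {state}"

    groups = [["render", "resize"], ["bill", "audit"]]
    envs = {frozenset(g): IsolatedEnvironment(g) for g in groups}

    def run(task):
        for members, env in envs.items():
            if task in members:
                return env.activate(task)
        raise KeyError(task)

    print(run("resize"))
    ```

    Note that the two groups never share an environment, which stands in for the inter-group isolation the abstract describes.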
  • Patent number: 10567213
    Abstract: System and methods for receiving requests for executing specific tasks, analyzing current computational resources available for executing the tasks, and selecting code segments for executing the tasks, in which the selection of the code segments is done in a way that optimizes allocation of the various computational resources among the tasks, and such that said optimization is directed and facilitated by taking into consideration constraints and guidelines associated with the requests. Each of the tasks is associated with at least two code segments operative to execute the task, in which per a given task, different code segments operative to execute the task are associated with different computational resources needed for such execution. The selection of specific code segments in turn affects utilization of the computational resources.
    Type: Grant
    Filed: October 25, 2017
    Date of Patent: February 18, 2020
    Assignee: Binaris Inc
    Inventors: Avner Braverman, Michael Adda, Ariel Shaqed
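    The selection mechanism described above can be illustrated with a small sketch. The segment names, resource fields, and the "minimize one resource" guideline below are assumptions for illustration only; the patent does not prescribe this particular scoring.

    ```python
    # Illustrative sketch: each task maps to several candidate code segments
    # with different resource footprints; the selector picks, among segments
    # that fit within currently free resources, the one a request-supplied
    # guideline prefers (here: minimizing a chosen resource).

    SEGMENTS = {
        "thumbnail": [
            {"name": "gpu_fast", "cpu": 1, "mem": 512},
            {"name": "cpu_small", "cpu": 2, "mem": 128},
        ],
    }

    def select_segment(task, free_cpu, free_mem, prefer="mem"):
        candidates = [s for s in SEGMENTS[task]
                      if s["cpu"] <= free_cpu and s["mem"] <= free_mem]
        if not candidates:
            raise RuntimeError("no segment fits the available resources")
        return min(candidates, key=lambda s: s[prefer])["name"]

    # Under memory pressure, the small segment is chosen even though it
    # needs more CPU; with ample memory, the guideline can prefer CPU.
    print(select_segment("thumbnail", free_cpu=4, free_mem=256))
    print(select_segment("thumbnail", free_cpu=4, free_mem=1024, prefer="cpu"))
    ```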
  • Patent number: 10536391
    Abstract: System and methods for intelligently directing a service request to a preferred place of execution. The services, which are executed by a unified client-server system, may be microservices associated with a microservice architecture, or other services in which a first entity sends a request to another entity to execute a certain service needed by the first entity. The unified client-server system may decide which services to execute on which of a plurality of different kinds of devices located in a variety of places. The decision may affect service request latency, network bandwidth, power consumption, and may optimize the task of transporting associated data components between the different entities in the system. The unified client-server system may be abstracted via a certain interface, such that the actual execution place of each of the services is controlled by the system and not necessarily by the requesting entity.
    Type: Grant
    Filed: July 6, 2017
    Date of Patent: January 14, 2020
    Assignee: Binaris Inc
    Inventors: Avner Braverman, Michael Adda
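    The placement decision can be sketched as a scoring problem. The device profiles and the cost formula below are invented for illustration; the abstract only says the system weighs latency, bandwidth, power, and data transport.

    ```python
    # A minimal sketch, assuming hypothetical device profiles: the unified
    # system, not the caller, decides where a service runs by scoring each
    # feasible place on latency, data-transfer time, and power cost.

    DEVICES = {
        "phone": {"latency_ms": 1,  "bandwidth": 10,   "power_cost": 5},
        "edge":  {"latency_ms": 10, "bandwidth": 100,  "power_cost": 2},
        "cloud": {"latency_ms": 80, "bandwidth": 1000, "power_cost": 1},
    }

    def place_service(payload_mb, latency_budget_ms):
        def score(d):
            transfer_s = payload_mb / d["bandwidth"]   # time to move the data
            return d["latency_ms"] + 1000 * transfer_s + d["power_cost"]
        feasible = {n: d for n, d in DEVICES.items()
                    if d["latency_ms"] <= latency_budget_ms}
        return min(feasible, key=lambda n: score(feasible[n]))

    # A chatty, tiny request stays close; a bulky one moves to bigger pipes.
    print(place_service(payload_mb=0.01, latency_budget_ms=20))
    print(place_service(payload_mb=5, latency_budget_ms=100))
    ```

    The caller only names the service and its constraints; the execution place is an internal decision, which mirrors the abstraction the abstract describes.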
  • Publication number: 20190384633
    Abstract: System and methods for on-demand isolated execution of specific tasks. A system receives, via a communication interface, requests to execute tasks. The system reacts to each of the requests by allocating, on-demand and per the request received, a unique sub-set of physical computational resources, so as to result in several such unique sub-sets of resources. The system executes, per each of the tasks, the respective commands of the task, by converting the respective commands of the task into executable instructions and running the executable instructions. The respective commands of each of the tasks are converted so as to cause the resulting executable instruction to refrain from accessing other unique sub-sets that were not allocated to the task, thereby facilitating said on-demand isolated execution of each of the tasks.
    Type: Application
    Filed: September 3, 2019
    Publication date: December 19, 2019
    Applicant: Binaris Inc
    Inventors: Avner Braverman, Michael Adda
  • Patent number: 10467045
    Abstract: System and methods for on-demand isolated execution of specific tasks. A system receives, via a communication interface, requests to execute tasks. The system reacts to each of the requests by allocating, on-demand and per the request received, a unique sub-set of physical computational resources, so as to result in several such unique sub-sets of resources. The system executes, per each of the tasks, the respective commands of the task, by converting the respective commands of the task into executable instructions and running the executable instructions. The respective commands of each of the tasks are converted so as to cause the resulting executable instruction to refrain from accessing other unique sub-sets that were not allocated to the task, thereby facilitating said on-demand isolated execution of each of the tasks.
    Type: Grant
    Filed: July 6, 2017
    Date of Patent: November 5, 2019
    Assignee: Binaris Inc
    Inventors: Avner Braverman, Michael Adda
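    The command-conversion idea shared by this patent and the related application above can be sketched as bounds-checked address rewriting. Everything below (the allocator, the tuple instruction format) is a hypothetical stand-in for what would really be done at the instruction-generation level.

    ```python
    # Hedged sketch: each request is allocated its own sub-set of a resource
    # pool, and task-relative commands are converted into instructions that
    # cannot address anything outside that sub-set.

    class Allocator:
        def __init__(self, total_cells):
            self.memory = [0] * total_cells
            self.next_free = 0

        def allocate(self, n):
            base = self.next_free
            self.next_free += n
            return base, n

    def convert(commands, base, size):
        # Convert task-relative addresses into bounds-checked absolute ones;
        # any access outside the allocated sub-set is refused at conversion.
        def instr(op, addr, value=None):
            if not 0 <= addr < size:
                raise MemoryError("access outside the task's sub-set")
            return (op, base + addr, value)
        return [instr(*c) for c in commands]

    alloc = Allocator(16)
    base_a, size_a = alloc.allocate(4)
    prog = convert([("store", 0, 42), ("load", 0, None)], base_a, size_a)
    print(prog)
    ```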
  • Patent number: 10417043
    Abstract: System and methods for receiving requests for executing tasks, executing the tasks while observing and gathering related performance levels, and using the observations to adapt execution of tasks to follow. The system adapts environments in which tasks are executed, thereby improving the ability of these environments to execute the tasks efficiently. As more performance data becomes available per a certain type of task, or per a specific task, the system gets closer to optimization. Performance may be affected by various parameters, such as the particular execution environment used to execute the task, isolation techniques employed in keeping the tasks isolated from each other, actual code utilized for executing each of the tasks, and the usage of particular hardware components to facilitate related software. Environments, or combinations of various code and hardware components, that have proven inefficient in executing a certain task will be replaced before executing similar tasks to follow.
    Type: Grant
    Filed: October 25, 2017
    Date of Patent: September 17, 2019
    Assignee: Binaris Inc
    Inventors: Avner Braverman, Michael Adda, Ariel Shaqed
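    The feedback loop in this abstract can be sketched directly. The history table, threshold, and environment names are illustrative assumptions, not details from the patent.

    ```python
    # A minimal sketch under assumed names: observed run times are recorded
    # per (task type, environment); an environment that has proven slow for
    # a task type is passed over before similar tasks run again.

    from collections import defaultdict

    history = defaultdict(list)   # (task_type, env) -> observed durations

    def record(task_type, env, duration):
        history[(task_type, env)].append(duration)

    def pick_environment(task_type, envs, threshold=1.0):
        def avg(env):
            runs = history[(task_type, env)]
            return sum(runs) / len(runs) if runs else 0.0  # untried envs get a chance
        efficient = [e for e in envs if avg(e) <= threshold]
        return min(efficient or envs, key=avg)

    record("resize", "container", 2.5)   # observed: slow for this task type
    record("resize", "microvm", 0.4)     # observed: fast
    print(pick_environment("resize", ["container", "microvm"]))
    ```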
  • Patent number: 10120736
    Abstract: Two or more ports of a same type are identified in a computer. A separate device driver process is initiated for each of the identified ports. A one-to-one correspondence between each of the ports and each of the device driver processes is established.
    Type: Grant
    Filed: August 12, 2014
    Date of Patent: November 6, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael Adda, Dan Aloni, Avner Braverman
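    The one-driver-per-port structure can be sketched compactly. In this toy, thread objects stand in for the separate OS-level device driver processes the patent describes, and the port names are invented.

    ```python
    # Illustrative sketch: ports of the same type are identified, one driver
    # is started per port, and a registry keeps the one-to-one correspondence
    # between ports and drivers.

    import threading

    def driver_loop(port, log):
        # A real driver would service the hardware; here we just note the port.
        log.append(f"driver bound to {port}")

    ports = ["ttyUSB0", "ttyUSB1", "ttyUSB2"]
    log = []
    drivers = {}
    for port in ports:
        t = threading.Thread(target=driver_loop, args=(port, log))
        t.start()
        drivers[port] = t            # one driver per port, one port per driver

    for t in drivers.values():
        t.join()
    print(sorted(log))
    ```

    Isolating each port behind its own driver means a fault in one driver cannot take down the drivers serving the other ports, which is the usual motivation for this structure.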
  • Patent number: 9781027
    Abstract: Various systems and methods to facilitate general communication, via a memory network, between compute elements and external destinations, while at the same time facilitating low-latency communication between compute elements and memory modules storing data sets, without negatively impacting the latency of the communication between the compute elements and the memory modules. General communication messages between compute nodes and a gateway compute node are facilitated with a first communication protocol adapted for low-latency transmissions. Such general communication messages are then transmitted to external destinations with a second communication protocol that is adapted for the general communication network and which may or may not be low-latency, but such that the low latency between the compute elements and the memory modules is not negatively impacted. The memory modules may be based on RAM or DRAM or another structure allowing low-latency access by the compute elements.
    Type: Grant
    Filed: April 2, 2015
    Date of Patent: October 3, 2017
    Assignee: Parallel Machines Ltd.
    Inventors: Avner Braverman, Ofir Shalvi, Lior Khermosh, Ofer Bar-Or, Eyal Benjamin Raz Oren, Gal Zuckerman
  • Patent number: 9781225
    Abstract: Various embodiments of systems and methods to efficiently use a compute element to process a plurality of values distributed over a plurality of servers using a plurality of keys. In various embodiments, a system is configured to identify (or “derive”) the various server locations of various data values, to send requests to the various servers for the needed data values, to receive the data values from the various servers, and to process the various data values received. In various embodiments, requests are sent and data values are received via a switching network. In various embodiments, the servers are organized in a key value store, which may optionally be a shared memory pool. Various embodiments are systems and methods with a small number of compute elements and servers, but in alternative embodiments the elements may be expanded to hundreds or thousands of compute elements and servers.
    Type: Grant
    Filed: February 3, 2015
    Date of Patent: October 3, 2017
    Assignee: Parallel Machines Ltd.
    Inventors: Avner Braverman, Michael Adda, Lior Amar, Lior Khermosh, Gal Zuckerman
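    The derive-locate-fetch-process loop in this abstract maps onto a familiar sharding sketch. Hashing the key modulo the server count is one common way to "derive" a location; the patent does not specify this scheme, and the dicts below merely stand in for servers reached over a switching network.

    ```python
    # A sketch under assumed structure: the server holding a value is derived
    # from its key, requests go to the derived servers, and the returned
    # values are then processed by the compute element.

    NUM_SERVERS = 4
    servers = [dict() for _ in range(NUM_SERVERS)]  # each dict plays one server

    def locate(key):
        return hash(key) % NUM_SERVERS              # derive server from key

    def put(key, value):
        servers[locate(key)][key] = value

    def get(key):
        return servers[locate(key)][key]

    for k, v in [("a", 1), ("b", 2), ("c", 3)]:
        put(k, v)

    total = sum(get(k) for k in ["a", "b", "c"])    # fetch, then process
    print(total)
    ```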
  • Patent number: 9753873
    Abstract: Various embodiments of systems and methods to interleave high priority key-value transactions together with lower priority transactions, in which both types of transactions are communicated over a shared input-output medium. In various embodiments, a central-processing-unit (CPU) initiates high priority key-value transactions by communicating via the shared input-output medium to a key-value-store. In various embodiments, a medium controller blocks or delays lower priority transactions such that the high priority transactions may proceed without interruption. In various embodiments, both of the types of transactions are packet-based, and the system interrupts a lower priority transaction at a particular packet, then completes the high priority transaction, then completes the lower priority transaction.
    Type: Grant
    Filed: February 3, 2015
    Date of Patent: September 5, 2017
    Assignee: Parallel Machines Ltd.
    Inventors: Lior Khermosh, Avner Braverman, Gal Zuckerman
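    The packet-boundary interleaving described above can be sketched as a simple merge of two packet streams. The packet labels and the fixed interrupt point are illustrative; a real medium controller would react to the arrival of a high-priority transaction, not to a preset count.

    ```python
    # Hedged sketch: both transaction types are packet streams over one
    # shared medium; the low-priority stream is interrupted at a packet
    # boundary, the high-priority key-value transaction runs to completion,
    # and then the low-priority transaction resumes where it stopped.

    def share_medium(low_packets, high_packets, interrupt_after=1):
        wire = []
        low = iter(low_packets)
        # Low-priority packets flow until the high-priority burst arrives.
        for _ in range(interrupt_after):
            wire.append(next(low))
        wire.extend(high_packets)    # high priority completes uninterrupted
        wire.extend(low)             # low priority resumes at the next packet
        return wire

    print(share_medium(["L1", "L2", "L3"], ["H1", "H2"]))
    ```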
  • Patent number: 9733988
    Abstract: Various systems and methods to achieve load balancing among a plurality of compute elements accessing a shared memory pool. The shared memory pool is configured to store and serve a plurality of data sets associated with a task; a first data interface's internal registry keeps track of which data sets have been extracted from the shared memory pool and served to the compute elements; the first data interface extracts from the shared memory pool, and serves to the compute elements, data sets which have not yet been extracted and served; the rate at which data sets are extracted and served to each particular compute element is proportional to the rate at which that compute element requests data sets; and the system may continue to extract, serve, and process data sets until all of the data sets associated with the task have been processed once.
    Type: Grant
    Filed: February 27, 2015
    Date of Patent: August 15, 2017
    Assignee: Parallel Machines Ltd.
    Inventors: Michael Adda, Avner Braverman, Lior Khermosh, Gal Zuckerman
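    The pull-based balancing in this abstract can be sketched with a served-set registry. The data-set names and the alternating request pattern are invented for illustration.

    ```python
    # A minimal sketch, assuming names: the data interface's registry tracks
    # which data sets were already served; each compute element pulls at its
    # own rate, so a faster element naturally receives proportionally more.

    data_sets = {f"ds{i}": i for i in range(6)}
    served = set()                    # the interface's internal registry

    def serve_next():
        for name in data_sets:
            if name not in served:
                served.add(name)
                return name, data_sets[name]
        return None                   # every set has been processed once

    results = {"fast": [], "slow": []}
    # The fast element requests twice as often as the slow one.
    for i in range(6):
        element = "fast" if i % 3 != 2 else "slow"
        results[element].append(serve_next())

    print(len(results["fast"]), len(results["slow"]))
    ```

    No central scheduler assigns work here; the proportionality falls out of the pull protocol itself, which is the load-balancing property the abstract claims.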
  • Patent number: 9720826
    Abstract: Various embodiments of systems and methods to allow and control simultaneous access and processing by multiple compute elements of multiple data sets stored in multiple memory modules. The compute elements request data to be processed without specifying any particular data sets to be received. Data interfaces receive the data requests from the compute elements, determine which data sets have not yet been served to the compute elements, select data sets to be served from among those that have not yet been served, and fetch these data sets from the memory modules. The process of requesting additional data by the compute elements, selection by the data interfaces of data sets to be served among those that have not yet been served, and providing such data sets by the data interfaces to the compute elements, may continue until all of the data sets have been served to the compute elements.
    Type: Grant
    Filed: February 27, 2015
    Date of Patent: August 1, 2017
    Assignee: Parallel Machines Ltd.
    Inventors: Michael Adda, Avner Braverman, Lior Khermosh, Gal Zuckerman
  • Patent number: 9690705
    Abstract: Described herein are systems and methods to process efficiently, according to a certain order, a plurality of data sets arranged in data blocks. In one embodiment, a first compute element receives from another compute element a first set of instructions that determine an order in which a plurality of data sets are to be processed as part of a processing task. Relevant data sets are then streamed into a cache memory associated with the first compute element, but the order of streaming is not by order of storage but rather by the order conveyed in the first set of instructions.
    Type: Grant
    Filed: September 29, 2015
    Date of Patent: June 27, 2017
    Assignee: Parallel Machines Ltd.
    Inventors: Michael Adda, Lior Amar, Avner Braverman, Lior Khermosh, Gal Zuckerman
  • Patent number: 9639407
    Abstract: Various systems and methods to perform efficiently a first processing task in conjunction with a plurality of data sets. A first code sequence comprises a plurality of general commands, and a specific command including a description of a first data processing task to be performed in conjunction with the data sets. The general commands are received and processed in a standard manner. The specific command is identified automatically by its nature, and the description within the specific command is then converted into a first sequence of executable instructions executable by a plurality of compute elements holding the plurality of data sets. The ultimate result is an efficient implementation of the first processing task. In some embodiments, the implementation of the first processing task is assisted by a pre-defined procedure that allocates the data sets to the compute elements and shares instances of executable instructions with the compute elements.
    Type: Grant
    Filed: June 16, 2015
    Date of Patent: May 2, 2017
    Assignee: Parallel Machines Ltd.
    Inventors: Avner Braverman, Michael Adda, Lior Amar, Lior Khermosh, Eli Finer, Gal Zuckerman
  • Patent number: 9639473
    Abstract: Described herein are systems and methods to prevent a controller in a DDIO (data direct input output) system from shifting currently-required data out of a cache memory. In one embodiment, a compute element disables caching of some specific addresses in a non-cache memory, but still enables caching of other addresses in the non-cache memory, thereby practically disabling the DDIO system, so that data sets not currently needed are placed in the addresses in the non-cache memory which are not cached. As a result, currently-required data are not shifted out of cache memory. The compute element then determines that the data sets, which formerly avoided being cached, are now required. The system therefore copies the data sets that are now required from addresses in non-cache memory not accessible to cache memory, to addresses in non-cache memory accessible to cache memory, thereby allowing the caching and processing of such data sets.
    Type: Grant
    Filed: November 6, 2015
    Date of Patent: May 2, 2017
    Assignee: Parallel Machines Ltd.
    Inventors: Michael Adda, Avner Braverman, Lior Amar, Dan Aloni, Lior Khermosh, Gal Zuckerman
  • Patent number: 9632936
    Abstract: Described herein are systems and methods to redirect data read requests from the first tier to the second tier of a two-tier distributed memory. The first tier includes memory modules with data sets. Data interfaces associated with the first-tier memory modules receive, from a second tier comprising compute elements and associated cache memories, requests to fetch data from the first tier. If a data set has not recently been fetched by the second tier, then the data interface will send the data set from the first tier to the cache memory associated with the requesting compute element. If a data set has recently been fetched by the second tier, the data interface will redirect the requesting compute element to fetch the data set from the cache memory in which the data set is currently located.
    Type: Grant
    Filed: September 29, 2015
    Date of Patent: April 25, 2017
    Assignee: Parallel Machines Ltd.
    Inventors: Gal Zuckerman, Avner Braverman, Lior Khermosh
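    The two-branch decision in this abstract fits in a short sketch. The element and data-set names, and the simple "last holder" registry, are assumptions; a real interface would also age entries out as caches evict.

    ```python
    # Illustrative sketch: the first-tier data interface remembers which
    # compute element's cache recently fetched each data set, and redirects
    # later requests for that set to the peer cache instead of serving the
    # data again from first-tier memory.

    first_tier = {"ds1": "payload-1", "ds2": "payload-2"}
    recently_fetched = {}             # data set -> cache currently holding it
    caches = {"ce0": {}, "ce1": {}}

    def fetch(requester, name):
        holder = recently_fetched.get(name)
        if holder is not None and holder != requester:
            return ("redirected-to", holder)        # serve from peer cache
        caches[requester][name] = first_tier[name]
        recently_fetched[name] = requester
        return ("served-from-first-tier", requester)

    print(fetch("ce0", "ds1"))   # first fetch: served from the first tier
    print(fetch("ce1", "ds1"))   # second fetch: redirected to ce0's cache
    ```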
  • Patent number: 9619308
    Abstract: A method, including receiving, by a user space driver framework (UDF) library executing from a user space of a memory over a monolithic operating system kernel, a kernel application programming interface (API) call from a device driver executing from the user space. The UDF library then performs an operation corresponding to the kernel API call.
    Type: Grant
    Filed: May 23, 2016
    Date of Patent: April 11, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael Adda, Dan Aloni, Avner Braverman
  • Patent number: 9600294
    Abstract: A method includes tagging, by a processor executing a first operating system kernel, a region of a memory used by a first storage area network (SAN) adapter driver coupled to a SAN adapter, and decoupling the first SAN adapter driver from the SAN adapter. A boot of a second operating system kernel is then initiated while preserving in the tagged region of the memory contents stored therein. After the boot, a second SAN adapter driver is then coupled to the SAN adapter.
    Type: Grant
    Filed: September 2, 2014
    Date of Patent: March 21, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Dan Aloni, Kanael Arditti, Maor Ben-Dayan, Avner Braverman, Haim Helman, Ben Reuveni, Liran Zvibel
  • Patent number: 9594696
    Abstract: Various systems and methods to generate automatically a procedure operative to distributively process a plurality of data sets stored on a plurality of memory modules. Under the instruction of the automatically generated procedure, compute elements request data sets relevant to a particular task, such data sets are fetched from memory modules by data interfaces which provide such data sets to the requesting compute elements, and the compute elements then process the received data sets until the task is completed. Relevant data sets are fetched and processed asynchronously, which means that the relevant data sets need not be fetched and processed in any particular order.
    Type: Grant
    Filed: June 16, 2015
    Date of Patent: March 14, 2017
    Assignee: Parallel Machines Ltd.
    Inventors: Avner Braverman, Michael Adda, Lior Amar, Lior Khermosh, Eli Finer, Gal Zuckerman
  • Patent number: 9594688
    Abstract: Described herein are systems and methods to execute efficiently a plurality of actions, in which multiple actions require the use of a single data set. The data set is fetched from a data source, across a switching network, to a memory associated with a first compute element. This is the only fetching of the data set from the data source, and the only fetching across a switching network, thereby minimizing fetching across the switching network, reducing the load on the switching network, decreasing the time by which the data set will be accessed in second and subsequent processes, and enhancing the efficiency of the system. In some embodiments, processes are migrated from second and subsequent compute elements to the compute element in which the data set is stored. In some embodiments, second and subsequent compute elements access the data set stored in the memory associated with the first compute element.
    Type: Grant
    Filed: July 23, 2015
    Date of Patent: March 14, 2017
    Assignee: Parallel Machines Ltd.
    Inventors: Michael Adda, Avner Braverman, Lior Amar, Lior Khermosh, Gal Zuckerman
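    The fetch-once idea in this last abstract can be sketched by counting network crossings. The counter, the data set, and the actions below are invented for illustration; the point is only that later actions migrate to the element holding the data instead of re-fetching it.

    ```python
    # A sketch under stated assumptions: the data set crosses the switching
    # network exactly once, into the first element's memory; subsequent
    # actions run on that element rather than fetching the data again.

    network_fetches = 0

    def fetch_over_network(name):
        global network_fetches
        network_fetches += 1
        return list(range(5))         # the single copy of the data set

    class ComputeElement:
        def __init__(self):
            self.memory = {}
        def run(self, action, name):
            if name not in self.memory:
                self.memory[name] = fetch_over_network(name)
            return action(self.memory[name])

    ce0 = ComputeElement()
    # Two different actions on the same data set both migrate to ce0.
    total = ce0.run(sum, "ds")
    peak = ce0.run(max, "ds")
    print(total, peak, network_fetches)
    ```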