Patents by Inventor Evgeny Shumsky

Evgeny Shumsky has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9996468
    Abstract: In a method for managing memory space in a network device, two or more respective allocation requests from two or more processing cores among a plurality of processing cores sharing a memory space are received at a memory management device during a first single clock cycle of the memory management device, the two or more allocation requests requesting to allocate, to the two or more processing cores, respective buffers in the shared memory space. In response to receiving the two or more allocation requests, the memory management device allocates, to the two or more processing cores, respective two or more buffers in the shared memory space. Additionally, the memory management device, during a second single clock cycle of the memory management device, transmits respective allocation responses to each of the two or more processing cores, wherein each allocation response includes an indication of a respective allocated buffer.
    Type: Grant
    Filed: November 1, 2012
    Date of Patent: June 12, 2018
    Assignee: Marvell Israel (M.I.S.L) Ltd.
    Inventors: Evgeny Shumsky, Shira Turgeman, Gil Levy
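The same-cycle allocation scheme in the abstract above can be illustrated with a short Python sketch. This is a minimal software model of batching several allocation requests and answering each with a distinct buffer; the function and variable names are hypothetical, and the patented design is hardware operating within fixed clock cycles, which this sketch does not capture.

```python
def batch_allocate(requests, free_buffers):
    """Serve several same-cycle allocation requests at once, pairing each
    requesting core with a distinct free buffer from the shared pool.
    Sketch only: real hardware would do this within one clock cycle."""
    if len(requests) > len(free_buffers):
        raise RuntimeError("not enough free buffers this cycle")
    # One allocation response per requesting core, each naming its buffer.
    return {core: free_buffers.pop() for core in requests}
```

A usage sketch: two cores request in the same cycle, and each response carries a different buffer from the shared pool.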
  • Patent number: 9807027
    Abstract: A plurality of packets are received by a packet processing device, and the packets are distributed among two or more packet processing node elements for processing of the packets. The packets are assigned to respective packet classes, each class corresponding to a group of packets for which an order in which the packets were received is to be preserved. The packets are queued in respective queues corresponding to the assigned packet classes and according to an order in which the packets were received by the packet processing device. The packet processing node elements issue respective instructions indicative of processing actions to be performed with respect to the packets, and indications of at least some of the processing actions are stored. A processing action with respect to a packet is performed when the packet has reached a head of a queue corresponding to the class associated with the packet.
    Type: Grant
    Filed: February 29, 2016
    Date of Patent: October 31, 2017
    Assignee: Marvell Israel (M.I.S.L.) Ltd.
    Inventors: Evgeny Shumsky, Gil Levy, Adar Peery, Amir Roitshtein, Aron Wohlgemuth
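The per-class ordering mechanism in the abstract above can be sketched in Python: packets are queued per class in arrival order, completed processing actions are held, and an action fires only once its packet reaches the head of its class queue. This is a simplified software model with hypothetical names, not the patented device logic.

```python
from collections import deque

class ClassOrderer:
    """Defer each packet's processing action until that packet reaches
    the head of its class queue, preserving per-class arrival order."""
    def __init__(self):
        self.queues = {}     # class id -> deque of packet ids, arrival order
        self.pending = {}    # packet id -> stored processing action
        self.executed = []   # actions actually performed, in order

    def enqueue(self, pkt_id, pkt_class):
        # Record arrival order within the packet's class.
        self.queues.setdefault(pkt_class, deque()).append(pkt_id)

    def complete(self, pkt_id, pkt_class, action):
        # A processing element finished; hold the action until order allows.
        self.pending[pkt_id] = action
        q = self.queues[pkt_class]
        while q and q[0] in self.pending:
            pkt = q.popleft()
            self.executed.append(self.pending.pop(pkt)(pkt))
```

If packet 2 finishes processing before packet 1 of the same class, its action is stored and only performed after packet 1's, keeping the received order intact.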
  • Patent number: 9658951
    Abstract: In a method for storing packets in a network device, a processor and a plurality of memory banks for storing packet data during processing of packets by the processor are provided on an integrated circuit device. Each memory bank has a separate channel for transferring data. A plurality of buffers are defined such that each buffer in the plurality of buffers includes a respective memory space in more than one memory bank and less than all memory banks. A buffer of the plurality of buffers is allocated for storing a single packet or a portion of a single packet. The single packet or the portion of the single packet is stored in the allocated buffer.
    Type: Grant
    Filed: November 1, 2012
    Date of Patent: May 23, 2017
    Assignee: Marvell Israel (M.I.S.L) Ltd.
    Inventors: Evgeny Shumsky, Carmi Arad, Gil Levy, Ehud Sivan
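The buffer definition above (each buffer spanning more than one but fewer than all banks) can be sketched in Python. The bank-selection policy shown here, preferring the banks with the most free chunks, is an assumed illustration, not the policy claimed in the patent.

```python
def allocate_buffer(free_chunks, banks_per_buffer):
    """Build one buffer from a free chunk in each of `banks_per_buffer`
    distinct banks, so the buffer spans more than one bank but fewer
    than all banks (free_chunks: list of per-bank free chunk lists)."""
    num_banks = len(free_chunks)
    assert 1 < banks_per_buffer < num_banks
    # Assumed policy: take the banks with the most free chunks first.
    chosen = sorted(range(num_banks),
                    key=lambda b: -len(free_chunks[b]))[:banks_per_buffer]
    return [(b, free_chunks[b].pop()) for b in sorted(chosen)]
```

Because each bank has its own transfer channel, a buffer striped across several banks lets one packet's data move over several channels at once.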
  • Patent number: 9553820
    Abstract: A plurality of packets that belong to a data flow are received and are distributed to two or more packet processing elements, wherein a packet is sent to a first packet processing element. A first instance of the packet is queued at a first ordering unit according to an order of the packet within the data flow. The first instance of the packet is caused to be transmitted when processing of the first instance is completed and the first instance of the packet is at a head of a queue at the first ordering unit. A second instance of the packet is queued at a second ordering unit. The second instance of the packet is caused to be transmitted when processing of the second instance is completed and the second instance of the packet is at a head of a queue at the second ordering unit.
    Type: Grant
    Filed: March 13, 2014
    Date of Patent: January 24, 2017
    Assignee: Marvell Israel (M.I.S.L) Ltd.
    Inventors: Evgeny Shumsky, Gil Levy, Adar Peery, Amir Roitshtein
  • Patent number: 9459829
    Abstract: Systems and methods are provided for a first-in-first-out buffer. A buffer includes a first sub-buffer configured to store data received from a buffer input, and a second sub-buffer. The second sub-buffer is configured to store data received from either the buffer input or the first sub-buffer and to output data to a buffer output in a same order as that data is received at the buffer input. Buffer control logic is configured to selectively route data from the buffer input or the first sub-buffer to the second sub-buffer so that data received at the buffer input is available to be output from the second sub-buffer in a first-in-first-out manner.
    Type: Grant
    Filed: July 15, 2014
    Date of Patent: October 4, 2016
    Assignee: MARVELL ISRAEL (M.I.S.L) LTD.
    Inventors: Evgeny Shumsky, Jonathan Kushnir
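The two-sub-buffer FIFO described above can be modeled in a few lines of Python. This is a minimal software sketch under one assumed routing rule (new data bypasses the first sub-buffer only when that preserves order); class and field names are hypothetical, and the patented design is hardware buffer control logic.

```python
from collections import deque

class TwoStageFifo:
    """FIFO built from an overflow sub-buffer feeding a bounded output
    sub-buffer; input is routed to whichever stage preserves order."""
    def __init__(self, out_depth=2):
        self.overflow = deque()   # first sub-buffer
        self.out = deque()        # second sub-buffer, feeds the output
        self.out_depth = out_depth

    def push(self, item):
        # Bypass the first sub-buffer only if it is empty and the output
        # stage has room; otherwise order would be violated.
        if not self.overflow and len(self.out) < self.out_depth:
            self.out.append(item)
        else:
            self.overflow.append(item)

    def pop(self):
        item = self.out.popleft()
        # Refill the output stage from the overflow sub-buffer.
        if self.overflow:
            self.out.append(self.overflow.popleft())
        return item
```

With an output depth of 2, pushing 1, 2, 3, 4 places 3 and 4 in the overflow sub-buffer, yet successive pops still return 1, 2, 3, 4.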
  • Patent number: 9455907
    Abstract: One or more processing operations are performed on a packet at a first packet processing element of a plurality of packet processing elements of a network device. A processing state corresponding to processing of the packet is indicated in a packet processing context associated with the packet. An external processing engine is triggered for performing an additional processing operation on the packet, and processing of the packet is suspended by the first packet processing element. Subsequent to completion of the additional processing operation by the external processing engine, processing of the packet is resumed, based on the packet processing context, by a second packet processing element when the second packet processing element is available for processing of the packet and the first packet processing element is not available for processing of the packet.
    Type: Grant
    Filed: November 27, 2013
    Date of Patent: September 27, 2016
    Assignee: Marvell Israel (M.I.S.L) Ltd.
    Inventors: Evgeny Shumsky, Gil Levy
  • Publication number: 20160182392
    Abstract: A plurality of packets are received by a packet processing device, and the packets are distributed among two or more packet processing node elements for processing of the packets. The packets are assigned to respective packet classes, each class corresponding to a group of packets for which an order in which the packets were received is to be preserved. The packets are queued in respective queues corresponding to the assigned packet classes and according to an order in which the packets were received by the packet processing device. The packet processing node elements issue respective instructions indicative of processing actions to be performed with respect to the packets, and indications of at least some of the processing actions are stored. A processing action with respect to a packet is performed when the packet has reached a head of a queue corresponding to the class associated with the packet.
    Type: Application
    Filed: February 29, 2016
    Publication date: June 23, 2016
    Inventors: Evgeny SHUMSKY, Gil LEVY, Adar PEERY, Amir ROITSHTEIN, Aron WOHLGEMUTH
  • Patent number: 9276868
    Abstract: A plurality of packets are received by a packet processing device, and the packets are distributed among two or more packet processing node elements for processing of the packets. The packets are assigned to respective packet classes, each class corresponding to a group of packets for which an order in which the packets were received is to be preserved. The packets are queued in respective queues corresponding to the assigned packet classes and according to an order in which the packets were received by the packet processing device. The packet processing node elements issue respective instructions indicative of processing actions to be performed with respect to the packets, and indications of at least some of the processing actions are stored. A processing action with respect to a packet is performed when the packet has reached a head of a queue corresponding to the class associated with the packet.
    Type: Grant
    Filed: December 17, 2013
    Date of Patent: March 1, 2016
    Assignee: MARVELL ISRAEL (M.I.S.L) LTD.
    Inventors: Evgeny Shumsky, Gil Levy, Adar Peery, Amir Roitshtein, Aron Wohlgemuth
  • Publication number: 20150254191
    Abstract: An apparatus and method of bypassing server DRAM by redirecting internal data transactions to an embedded buffer provides an innovative implementation for intermediate storage for internal transactions, providing transparent functionality with improved performance as compared to conventional solutions. Transaction throughput is improved at least in part by avoiding using conventional DRAM, thus eliminating conventional bottlenecks in DRAM intermediate storage. The current embodiment is particularly useful in sending and receiving data blocks between disk storage and network connections.
    Type: Application
    Filed: March 10, 2014
    Publication date: September 10, 2015
    Applicant: Riverscale Ltd
    Inventors: Vitaly SUKONIK, Evgeny SHUMSKY
  • Publication number: 20150253837
    Abstract: Static and dynamic power is saved in systems on a chip (SoCs) with an array of multiple RISC cores by adjusting power consumption using a combination of architecture and algorithm. Elements can be turned on and off with a higher granularity as compared to conventional implementations. An event distributor/power manager matches input queue occupancy to how many elements need to be active continuously to process incoming events without delaying event processing. Both instantaneous and average power can be controlled, in particular reduced to lower levels than in conventional systems while maintaining continuous processing of a varying level (number) of received events. Resulting power consumption is optimally tuned to the instantaneous workload. As compared to conventional solutions, the current implementation is a complex system approach taking into consideration multiple factors, and the algorithm can be implemented autonomously for more dynamic system re-configuration (than conventional solutions).
    Type: Application
    Filed: March 10, 2014
    Publication date: September 10, 2015
    Applicant: Riverscale Ltd
    Inventors: Vitaly SUKONIK, Evgeny SHUMSKY
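The occupancy-to-active-elements matching described in the abstract above can be sketched as a simple sizing function. The policy below (one event-processing rate per element, keep one element powered even when idle) is an assumed illustration, not the published algorithm.

```python
import math

def elements_to_activate(queue_occupancy, events_per_element, max_elements):
    """Return how many processing elements to power on so the input
    queue drains without delaying events (assumed policy: linear rate
    per element, one element kept active as a floor)."""
    needed = math.ceil(queue_occupancy / events_per_element)
    return min(max(needed, 1), max_elements)
```

The distributor would re-evaluate this as occupancy changes, powering elements on under load and off as the queue empties.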
  • Publication number: 20150254099
    Abstract: A Software Enabled Network Storage Accelerator (SENSA) system includes a number of SENSA components. The components can be implemented individually or in combination for a variety of applications, in particular, data base acceleration, disk caching, and event stream processing applications. Hardware (HW) real time operating system (RTOS) optimization for network storage stack applications such as event processing avoids conventional CPU usage by processing the payload, or internal data, of a packet using an array of at least two event processing elements (EPEs), each EPE in the array configured for: receiving events, each of the events having a task corresponding to the event; and processing the task in run-to-completion manner by operating on some portions of the task and offloading other portions of the task.
    Type: Application
    Filed: March 10, 2014
    Publication date: September 10, 2015
    Applicant: Riverscale Ltd
    Inventors: Vitaly SUKONIK, Evgeny SHUMSKY
  • Publication number: 20150254196
    Abstract: A system and method for bypassing server CPU by redirecting data transactions between network and disk provides an innovative implementation for intercepting network to disk data traffic and performing transactions on this data using internal logic rather than a CPU, providing transparent functionality with improved performance as compared to conventional solutions. This is particularly useful in sending and receiving data blocks between network connections and disk storage, such as in distributed storage servers.
    Type: Application
    Filed: March 10, 2014
    Publication date: September 10, 2015
    Applicant: Riverscale Ltd
    Inventors: Vitaly SUKONIK, Evgeny SHUMSKY
  • Publication number: 20150254100
    Abstract: A storage virtualization offload engine (SVOE) optimizes network storage stack applications, providing an innovative implementation for network storage event processing. The current embodiment is particularly suited for distributed storage servers, offloading storage related functions from CPU to a co-processor. The SVOE improves system performance and power consumption by executing heavy operations (such as wide vector computations) by dedicated hardware engines. Thus, the SVOE avoids the significant overhead and overall task latency of a CPU using system calls in the middle of software thread to offload processing. A system includes two or more event processing elements (EPEs). Each EPE is configured for receiving events that include respective tasks and for processing only data access portions of the tasks.
    Type: Application
    Filed: March 10, 2014
    Publication date: September 10, 2015
    Applicant: Riverscale Ltd
    Inventors: Vitaly SUKONIK, Evgeny SHUMSKY
  • Publication number: 20150256645
    Abstract: A server receives requests as events from a client via a network. Each event includes a respective task that requires access to disk storage. The server includes one or more processors that process the tasks in a run-to-completion manner and two or more hardware engines to which the processor(s) offload(s) at least some of the processing of the tasks. The hardware engines perform computation-intensive operations such as table lookups and hashes. Preferably, if there are more than one processor, the processors are identical RISC-core event processing elements, all configured with identical instruction code for execution. Preferably, the server also includes a network interface card; the processor(s) and the hardware engines may be part of either the network interface card or a separate co-processor.
    Type: Application
    Filed: March 10, 2014
    Publication date: September 10, 2015
    Applicant: Riverscale Ltd
    Inventors: Vitaly SUKONIK, Evgeny SHUMSKY
  • Patent number: 9104531
    Abstract: Some of the embodiments of the present disclosure provide a multi-core switch device comprising a plurality of P processing cores for processing packets received from a computer network; a memory comprising a plurality of M memory banks, the plurality of processing cores and the plurality of memory banks being arranged such that the plurality of processing cores have access to multiple memory banks among the plurality of memory banks to perform corresponding memory operations; and a memory access controller coupling the plurality of processing cores to the plurality of memory banks, the memory access controller configured to selectively provide, to each of the plurality of processing cores, access to multiple memory banks among the plurality of memory banks over a number of N physical couplings such that N (i) is an integer and (ii) is less than P times M.
    Type: Grant
    Filed: November 12, 2012
    Date of Patent: August 11, 2015
    Assignee: Marvell Israel (M.I.S.L.) Ltd.
    Inventor: Evgeny Shumsky
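The constraint in the abstract above, P cores reaching M banks over only N physical couplings with N less than P times M, can be sketched as a per-cycle matching step. The greedy priority-order matching below is an assumed illustration of a memory access controller's selection, not the claimed circuit.

```python
def connect(requests, num_links):
    """Grant at most num_links (core, bank) pairs this cycle, with each
    core and each bank wired to at most one coupling (requests are
    given in priority order; assumed greedy matching)."""
    granted, used_cores, used_banks = [], set(), set()
    for core, bank in requests:
        if len(granted) == num_links:
            break  # all physical couplings in use this cycle
        if core not in used_cores and bank not in used_banks:
            granted.append((core, bank))
            used_cores.add(core)
            used_banks.add(bank)
    return granted
```

With 2 cores, 2 banks, and only 2 links (rather than 4), conflicting requests are deferred to a later cycle while non-conflicting ones proceed in parallel.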
  • Publication number: 20150006770
    Abstract: Systems and methods are provided for a first-in-first-out buffer. A buffer includes a first sub-buffer configured to store data received from a buffer input, and a second sub-buffer. The second sub-buffer is configured to store data received from either the buffer input or the first sub-buffer and to output data to a buffer output in a same order as that data is received at the buffer input. Buffer control logic is configured to selectively route data from the buffer input or the first sub-buffer to the second sub-buffer so that data received at the buffer input is available to be output from the second sub-buffer in a first-in-first-out manner.
    Type: Application
    Filed: July 15, 2014
    Publication date: January 1, 2015
    Inventors: Evgeny Shumsky, Jonathan Kushnir
  • Patent number: 8819312
    Abstract: Systems and methods are provided for a first-in-first-out buffer. A buffer includes a first sub-buffer configured to store data received from a buffer input, and a second sub-buffer. The second sub-buffer is configured to store data received from either the buffer input or the first sub-buffer and to output data to a buffer output in a same order as that data is received at the buffer input. Buffer control logic is configured to selectively route data from the buffer input or the first sub-buffer to the second sub-buffer so that data received at the buffer input is available to be output from the second sub-buffer in a first-in-first-out manner.
    Type: Grant
    Filed: August 12, 2011
    Date of Patent: August 26, 2014
    Assignee: Marvell Israel (M.I.S.L) Ltd.
    Inventors: Evgeny Shumsky, Jonathan Kushnir
  • Publication number: 20140192815
    Abstract: A plurality of packets that belong to a data flow are received and are distributed to two or more packet processing elements, wherein a packet is sent to a first packet processing element. A first instance of the packet is queued at a first ordering unit according to an order of the packet within the data flow. The first instance of the packet is caused to be transmitted when processing of the first instance is completed and the first instance of the packet is at a head of a queue at the first ordering unit. A second instance of the packet is queued at a second ordering unit. The second instance of the packet is caused to be transmitted when processing of the second instance is completed and the second instance of the packet is at a head of a queue at the second ordering unit.
    Type: Application
    Filed: March 13, 2014
    Publication date: July 10, 2014
    Applicant: Marvell Israel (M.I.S.L) Ltd.
    Inventors: Evgeny Shumsky, Gil Levy, Adar Peery, Amir Roitshtein
  • Publication number: 20140169378
    Abstract: A plurality of packets are received by a packet processing device, and the packets are distributed among two or more packet processing node elements for processing of the packets. The packets are assigned to respective packet classes, each class corresponding to a group of packets for which an order in which the packets were received is to be preserved. The packets are queued in respective queues corresponding to the assigned packet classes and according to an order in which the packets were received by the packet processing device. The packet processing node elements issue respective instructions indicative of processing actions to be performed with respect to the packets, and indications of at least some of the processing actions are stored. A processing action with respect to a packet is performed when the packet has reached a head of a queue corresponding to the class associated with the packet.
    Type: Application
    Filed: December 17, 2013
    Publication date: June 19, 2014
    Applicant: MARVELL ISRAEL (M.I.S.L) LTD.
    Inventors: Evgeny Shumsky, Gil Levy, Adar Peery, Amir Roitshtein, Aron Wohlgemuth
  • Patent number: 8432926
    Abstract: Aspects of the disclosure provide an arbitration system for scheduling access of a plurality of clients to a shared resource. The arbitration system includes a plurality of association circuits corresponding to a plurality of profiles, a plurality of trigger circuits respectively coupled to the plurality of association circuits, and a selection circuitry. Each association circuit is configured to associate clients with the corresponding profile based on client attributes. Each trigger circuit is configured to periodically generate triggers at a rate based on the corresponding profile of the coupled association circuit, and each trigger causes the associated clients of the corresponding profile to be placed on a list of eligible clients. The selection circuitry is configured to select, for a time slice in a Time Division Multiplexing (TDM) frame, a client from the list of eligible clients using an arbitration scheme for accessing the shared resource.
    Type: Grant
    Filed: February 25, 2011
    Date of Patent: April 30, 2013
    Assignee: Marvell Israel (M.I.S.L) Ltd.
    Inventors: Ehud Sivan, Evgeny Shumsky
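The profile-triggered arbitration in the abstract above can be sketched in Python: each profile periodically places its associated clients on an eligible list, and one eligible client is granted the shared resource per TDM time slice. The fixed trigger periods and head-of-list selection below are assumed simplifications of the trigger circuits and selection circuitry.

```python
class ProfileArbiter:
    """Profiles make their clients eligible at profile-specific rates;
    one eligible client is granted per TDM time slice (sketch)."""
    def __init__(self, profiles):
        # profiles: {name: (trigger_period_in_slots, [client, ...])}
        self.profiles = profiles
        self.eligible = []

    def tick(self, slot):
        # Trigger circuits: each profile adds its clients at its own rate.
        for period, clients in self.profiles.values():
            if slot % period == 0:
                self.eligible.extend(
                    c for c in clients if c not in self.eligible)
        # Selection circuitry: grant the head of the eligible list.
        return self.eligible.pop(0) if self.eligible else None
```

A client in a fast profile (period 1) is made eligible every slot, while a slow-profile client (period 2) is re-triggered every other slot, so grants interleave according to the profile rates.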