Patents by Inventor Rami Zemach
Rami Zemach has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9813336
Abstract: A first set of bits is extracted from a header of a first packet. A second set of bits is extracted from a header of a second packet. The first set of bits and the second set of bits are combined into a combined single data unit representing the first packet and the second packet. The combined single data unit is transferred to a packet processing device. The packet processing device decomposes the single data unit to extract the first set of bits corresponding to the first packet and the second set of bits corresponding to the second packet. A first reduced set of processing operations is performed to process the first packet using the first set of bits corresponding to the first packet. A second reduced set of processing operations is performed to process the second packet using the second set of bits corresponding to the second packet.
Type: Grant
Filed: December 17, 2014
Date of Patent: November 7, 2017
Assignee: Marvell Israel (M.I.S.L) Ltd.
Inventors: Gil Levy, Amir Roitshtein, Rami Zemach
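The combine/decompose flow in the abstract above can be sketched in a few lines. The length-prefixed framing below is an illustrative assumption, not the patent's actual encoding:

```python
import struct

def combine(bits_a: bytes, bits_b: bytes) -> bytes:
    # Prefix each extracted bit field with its length so the packet
    # processing device can split the single data unit back apart.
    return (struct.pack(">H", len(bits_a)) + bits_a +
            struct.pack(">H", len(bits_b)) + bits_b)

def decompose(unit: bytes) -> tuple[bytes, bytes]:
    # Recover the first packet's bits, then the second packet's bits.
    (len_a,) = struct.unpack_from(">H", unit, 0)
    bits_a = unit[2:2 + len_a]
    off = 2 + len_a
    (len_b,) = struct.unpack_from(">H", unit, off)
    bits_b = unit[off + 2:off + 2 + len_b]
    return bits_a, bits_b
```

Each reduced set of processing operations would then run on its own recovered bit field rather than on a full packet header.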
-
Patent number: 9705807
Abstract: Aspects of the disclosure provide a method for counting packets and bytes in a distributed packet-switched system. The method includes receiving a packet stream having at least one packet flow at a device of a packet-switched system having a plurality of distributed devices, statistically determining whether to update a designated device based on receipt of a packet belonging to the packet flow, and transmitting packet counting information to the designated device based on the statistical determination, where the designated device counts packets of the packet flow based on the packet counting information.
Type: Grant
Filed: March 4, 2015
Date of Patent: July 11, 2017
Assignee: Marvell Israel (M.I.S.L) Ltd.
Inventors: Amir Roitshtein, Carmi Arad, Gil Levy, Rami Zemach
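One way to read the "statistically determining" step is as probabilistic sampling: each device reports to the designated device only a fraction of the time, scaling the report so counts stay unbiased on average. This is a minimal sketch under that assumption; the names and the 1/p scaling are illustrative, not taken from the patent:

```python
import random

def maybe_report(flow_id, size, p, send_update, rng=random.random):
    # With probability p, forward counting info to the designated device,
    # scaled by 1/p so the expected count matches the true count.
    if rng() < p:
        send_update(flow_id, packets=1 / p, bytes_=size / p)

class DesignatedCounter:
    # Designated-device side: accumulate the (scaled) reports per flow.
    def __init__(self):
        self.packets = {}
        self.bytes_ = {}

    def update(self, flow_id, packets, bytes_):
        self.packets[flow_id] = self.packets.get(flow_id, 0) + packets
        self.bytes_[flow_id] = self.bytes_.get(flow_id, 0) + bytes_
```

Sampling trades a small variance in the counts for a large reduction in update traffic between the distributed devices and the designated device.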
-
Publication number: 20170133271
Abstract: Aspects of the disclosure provide a method for semiconductor wafer manufacturing. The method includes utilizing a subset of lower level masks in a mask set to form multiple modular units of lower level circuit structures on a semiconductor wafer. The mask set includes the subset of lower level masks and at least a first subset of upper level masks and a second subset of upper level masks. The first subset of upper level masks defines intra-unit interconnections. The second subset of upper level masks defines both intra-unit interconnections and inter-unit interconnections. The method further includes selecting one of at least the first subset of upper level masks and the second subset of upper level masks based on a composition request of a final integrated circuit (IC) product and utilizing the selected subset of upper level masks to form upper level structures on the semiconductor wafer.
Type: Application
Filed: November 7, 2016
Publication date: May 11, 2017
Applicant: Marvell Israel (M.I.S.L) Ltd.
Inventors: Eran Rotem, Rami Zemach, Itay Peled
-
Patent number: 9571380
Abstract: A packet is received at a packet processing element, among a plurality of like packet processing elements, of a network device, and a request specifying a processing operation to be performed with respect to the packet by an accelerator engine functionally different from the plurality of like packet processing elements is generated by the packet processing element. The request is transmitted to an interconnect network that includes a plurality of interconnect units arranged in stages. A path through the interconnect network is selected from among a plurality of candidate paths, wherein no path of the candidate paths includes multiple interconnect units within a same stage of the interconnect network. The request is then transmitted via the selected path to a particular accelerator engine among multiple candidate accelerator engines configured to perform the processing operation. The processing operation is then performed by the particular accelerator engine.
Type: Grant
Filed: September 10, 2014
Date of Patent: February 14, 2017
Assignee: Marvell World Trade Ltd.
Inventors: Aviran Kadosh, Rami Zemach
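Because every candidate path visits at most one interconnect unit per stage, path selection reduces to picking one unit from each stage. A minimal sketch, assuming a hash of the request key is used to spread requests across candidates (the hashing scheme is an assumption, not from the patent):

```python
import hashlib

def select_path(stages, request_key):
    # stages: list of stages, each a list of interconnect unit ids.
    # Picking exactly one unit per stage guarantees the selected path
    # never contains multiple units within the same stage.
    path = []
    for depth, units in enumerate(stages):
        digest = hashlib.sha256(f"{request_key}:{depth}".encode()).digest()
        path.append(units[digest[0] % len(units)])
    return path
```

Hashing on the request key keeps the choice deterministic per request while balancing load across the units in each stage.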
-
Publication number: 20160328158
Abstract: A method for data storage includes receiving one or more read commands and one or more write commands, for execution in a same clock cycle in a memory array that comprises multiple single-port memory banks divided into groups. The write commands provide data for storage but do not specify storage locations in which the data is to be stored. One or more of the groups, which are not accessed by the read commands in the same clock cycle, are selected. Available storage locations are chosen for the write commands in the single-port memory banks of the selected one or more groups. During the same clock cycle, the data provided in the write commands is stored in the chosen storage locations, and the data requested by the read commands is retrieved. Execution of the write commands is acknowledged by reporting the chosen storage locations.
Type: Application
Filed: April 17, 2016
Publication date: November 10, 2016
Inventors: Dror Bromberg, Roi Sherman, Rami Zemach
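The scheduling idea can be modeled in software: steer location-less writes into any group that no read touches this cycle, then report where each write landed. This is an illustrative behavioral model; the data structures are assumptions, not the patent's hardware:

```python
def schedule_cycle(groups, read_groups, writes):
    # groups: {group_id: {bank_id: set(free_slots)}}
    # read_groups: set of group ids accessed by this cycle's reads
    # writes: list of data items that carry no target address
    # Returns {(group, bank, slot): data}, i.e. the acknowledged locations.
    placements = {}
    candidates = [g for g in groups if g not in read_groups]
    for data in writes:
        for gid in candidates:
            placed = False
            for bank, free in groups[gid].items():
                if free:
                    slot = free.pop()
                    placements[(gid, bank, slot)] = data
                    placed = True
                    break
            if placed:
                break
    return placements
```

Because the selected groups are untouched by reads, single-port banks can serve the writes in the very same cycle as the reads.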
-
Publication number: 20160321184
Abstract: A memory supports a write or multiple read operations in any given clock cycle. In a first clock cycle, new content data is written to a particular content memory bank among a set of content memory banks. Also in the first clock cycle, current content data is read from corresponding locations in one or more other content memory banks among the set of content memory banks. New parity data is generated based on the new content data written to the particular content memory bank and the current content data read from the one or more other content memory banks. The new parity data is written to a cache memory associated with the one or more parity banks. In a second clock cycle subsequent to the first clock cycle, the new parity data is transferred from the cache memory to one of the one or more parity memory banks.
Type: Application
Filed: April 29, 2016
Publication date: November 3, 2016
Inventors: Dror Bromberg, Roi Sherman, Rami Zemach
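A common way to realize this is XOR parity: the new parity is the XOR of the newly written word with the other banks' current words, parked in a cache for one cycle, then flushed into the parity bank. A minimal sketch, assuming XOR parity and a single parity bank (both assumptions for illustration):

```python
def xor_words(*words):
    out = 0
    for w in words:
        out ^= w
    return out

class ParityMemory:
    # Two-phase parity update: cycle 1 computes parity from the newly
    # written word plus the other banks' current words and parks it in a
    # cache; a later cycle moves it into the parity bank.
    def __init__(self, n_banks):
        self.banks = [0] * n_banks
        self.parity = 0
        self.cache = None

    def write(self, bank, word):           # first clock cycle
        others = [w for i, w in enumerate(self.banks) if i != bank]
        self.banks[bank] = word
        self.cache = xor_words(word, *others)

    def flush(self):                       # subsequent clock cycle
        if self.cache is not None:
            self.parity, self.cache = self.cache, None
```

Deferring the parity write by a cycle avoids a second same-cycle access to the parity bank, which matters when the banks are single-ported.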
-
Patent number: 9467399
Abstract: One or more processing operations with respect to a packet are performed at a packet processing node of a network device, the packet processing node configured to perform multiple different processing operations with respect to the packet. A first accelerator engine is triggered for performing a first additional processing operation with respect to the packet. The first additional processing operation constitutes an operation that is different from the multiple different processing operations that the packet processing node is configured to perform. The first additional processing operation is performed by the first accelerator engine. Concurrently with performing the first additional processing operation at the first accelerator engine, at least a portion of a second additional processing operation with respect to the packet is performed by the packet processing node, the second additional processing operation not dependent on a result of the first additional processing operation.
Type: Grant
Filed: October 16, 2014
Date of Patent: October 11, 2016
Assignee: Marvell World Trade Ltd.
Inventors: Aron Wohlgemuth, Rami Zemach, Gil Levy
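The concurrency pattern here is familiar from software: kick off the accelerator operation, keep working on an independent operation, then join. A thread-based sketch of that overlap (threads stand in for the hardware accelerator; all names are illustrative):

```python
import threading

def process_packet(packet, accelerator_op, node_op):
    # Run the accelerator operation in the background while the packet
    # processing node performs an independent operation on the same packet.
    result = {}

    def run_accel():
        result["accel"] = accelerator_op(packet)

    t = threading.Thread(target=run_accel)
    t.start()
    result["node"] = node_op(packet)   # does not depend on accel result
    t.join()
    return result
```

The overlap only pays off because the second operation, by construction, does not depend on the accelerator's result.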
-
Patent number: 9461939
Abstract: A processing unit of a packet processing node initiates a transaction with an accelerator engine to trigger the accelerator engine for performing a processing operation with respect to a packet, and triggers the accelerator engine to perform the processing operation. The processing unit attempts to retrieve a result of the processing operation from a memory location to which the result is to be written. It is determined whether the result has been written to the memory location, and when it is determined that the result has not yet been written to the memory location, the processing unit is locked until at least a portion of the result is written to the memory location.
Type: Grant
Filed: October 17, 2014
Date of Patent: October 4, 2016
Assignee: Marvell World Trade Ltd.
Inventors: Aron Wohlgemuth, Rami Zemach, Gil Levy
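In software terms the memory location acts as a mailbox the reader blocks on until the writer fills it. A sketch of that lock-until-written behavior, using a `threading.Event` as an illustrative stand-in for the hardware lock:

```python
import threading

class ResultSlot:
    # The processing unit blocks in read() until the accelerator engine
    # has written the result via write().
    def __init__(self):
        self._written = threading.Event()
        self._value = None

    def write(self, value):        # accelerator side
        self._value = value
        self._written.set()

    def read(self, timeout=None):  # processing-unit side: locks until written
        if not self._written.wait(timeout):
            raise TimeoutError("result not yet written")
        return self._value
```

Blocking the reader avoids busy-polling the memory location while the accelerator is still working.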
-
Publication number: 20160261500
Abstract: A forwarding engine in a network device selects one or more groups of multiple egress interfaces of the network device for forwarding packets received by the network device. An egress interface selector in the network device selects individual egress interfaces within the one or more groups selected by the forwarding engine. The egress interface selector includes a table associated with a first group of multiple egress interfaces, wherein elements in the table include values that indicate individual egress interfaces in the first group. When the forwarding engine selects the first group, a table element selector selects an element in the table to identify the individual egress interface for forwarding the packet.
Type: Application
Filed: March 4, 2016
Publication date: September 8, 2016
Inventors: Yoram Revah, David Melman, Tal Mizrahi, Rami Zemach, Carmi Arad
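A common way such a table element selector works is to hash the packet's flow key into an index, so packets of the same flow always hit the same table element and thus the same egress interface. A sketch under that assumption (the hash-based indexing is illustrative, not stated in the abstract):

```python
import hashlib

def select_egress(group_table, packet_key):
    # group_table: the table for one group; each element names an egress
    # interface. Hashing the flow key keeps per-flow packet ordering.
    digest = hashlib.sha256(packet_key.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(group_table)
    return group_table[index]
```

Note the table may list an interface more than once, which lets the selector weight traffic unevenly across the group's members.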
-
Patent number: 9374303
Abstract: Packets received via ports coupled to network links are processed to determine target ports to which the packets are to be forwarded. Appropriate control paths in a network device are selected for processing multicast packets from among a plurality of different control paths having respective processing latencies, the different control paths providing alternative processing paths for processing control information for multicast packets. Multicast packets are further processed using the selected control paths.
Type: Grant
Filed: October 14, 2014
Date of Patent: June 21, 2016
Assignee: Marvell Israel (M.I.S.L) Ltd.
Inventors: Sharon Ulman, Gil Levy, Rami Zemach, Amir Roitshtein, Shira Turgeman
-
Publication number: 20160103777
Abstract: The invention relates to a memory aggregation device for storing a set of input data streams and retrieving data to a set of output data streams, the memory aggregation device comprising: a set of first-in first-out (FIFO) memories each comprising an input and an output; an input interconnector configured to interconnect each one of the set of input data streams to each input of the set of FIFO memories according to an input interconnection matrix; an output interconnector configured to interconnect each output of the set of FIFO memories to each one of the set of output data streams according to an output interconnection matrix; an input selector; an output selector; and a memory controller.
Type: Application
Filed: December 18, 2015
Publication date: April 14, 2016
Inventors: Yaron Shachar, Yoav Peleg, Alex Tal, Alex Umansky, Rami Zemach, Lixia Xiong, Yuchun Lu
-
Publication number: 20150256466
Abstract: Aspects of the disclosure provide a method for counting packets and bytes in a distributed packet-switched system. The method includes receiving a packet stream having at least one packet flow at a device of a packet-switched system having a plurality of distributed devices, statistically determining whether to update a designated device based on receipt of a packet belonging to the packet flow, and transmitting packet counting information to the designated device based on the statistical determination, where the designated device counts packets of the packet flow based on the packet counting information.
Type: Application
Filed: March 4, 2015
Publication date: September 10, 2015
Applicant: Marvell Israel (M.I.S.L) Ltd.
Inventors: Amir Roitshtein, Carmi Arad, Gil Levy, Rami Zemach
-
Publication number: 20150172188
Abstract: A first set of bits is extracted from a header of a first packet. A second set of bits is extracted from a header of a second packet. The first set of bits and the second set of bits are combined into a combined single data unit representing the first packet and the second packet. The combined single data unit is transferred to a packet processing device. The packet processing device decomposes the single data unit to extract the first set of bits corresponding to the first packet and the second set of bits corresponding to the second packet. A first reduced set of processing operations is performed to process the first packet using the first set of bits corresponding to the first packet. A second reduced set of processing operations is performed to process the second packet using the second set of bits corresponding to the second packet.
Type: Application
Filed: December 17, 2014
Publication date: June 18, 2015
Inventors: Gil Levy, Amir Roitshtein, Rami Zemach
-
Publication number: 20150172187
Abstract: In a method for processing packets in a network device, a first packet is received at a first port of the network device. A first set of bits, corresponding to a first set of bit locations in a header of the first packet, is extracted from the header of the first packet. A first set of processing operations is performed to process the first packet using the first set of bits. A second packet is received at a second port of the network device. A second set of bits, corresponding to a second set of bit locations in a header of the second packet, is extracted from the header of the second packet. A second set of processing operations is performed to process the second packet using the second set of bits.
Type: Application
Filed: December 17, 2014
Publication date: June 18, 2015
Inventors: Gil Levy, Amir Roitshtein, Rami Zemach
-
Publication number: 20150113190
Abstract: A processing unit of a packet processing node initiates a transaction with an accelerator engine to trigger the accelerator engine for performing a processing operation with respect to a packet, and triggers the accelerator engine to perform the processing operation. The processing unit attempts to retrieve a result of the processing operation from a memory location to which the result is to be written. It is determined whether the result has been written to the memory location, and when it is determined that the result has not yet been written to the memory location, the processing unit is locked until at least a portion of the result is written to the memory location.
Type: Application
Filed: October 17, 2014
Publication date: April 23, 2015
Inventors: Aron Wohlgemuth, Rami Zemach, Gil Levy
-
Publication number: 20150110114
Abstract: One or more processing operations with respect to a packet are performed at a packet processing node of a network device, the packet processing node configured to perform multiple different processing operations with respect to the packet. A first accelerator engine is triggered for performing a first additional processing operation with respect to the packet. The first additional processing operation constitutes an operation that is different from the multiple different processing operations that the packet processing node is configured to perform. The first additional processing operation is performed by the first accelerator engine. Concurrently with performing the first additional processing operation at the first accelerator engine, at least a portion of a second additional processing operation with respect to the packet is performed by the packet processing node, the second additional processing operation not dependent on a result of the first additional processing operation.
Type: Application
Filed: October 16, 2014
Publication date: April 23, 2015
Inventors: Aron Wohlgemuth, Rami Zemach, Gil Levy
-
Publication number: 20150071079
Abstract: A packet is received at a packet processing element, among a plurality of like packet processing elements, of a network device, and a request specifying a processing operation to be performed with respect to the packet by an accelerator engine functionally different from the plurality of like packet processing elements is generated by the packet processing element. The request is transmitted to an interconnect network that includes a plurality of interconnect units arranged in stages. A path through the interconnect network is selected from among a plurality of candidate paths, wherein no path of the candidate paths includes multiple interconnect units within a same stage of the interconnect network. The request is then transmitted via the selected path to a particular accelerator engine among multiple candidate accelerator engines configured to perform the processing operation. The processing operation is then performed by the particular accelerator engine.
Type: Application
Filed: September 10, 2014
Publication date: March 12, 2015
Inventors: Aviran Kadosh, Rami Zemach
-
Patent number: 8972828
Abstract: A method of error mitigation for transferring packets over a chip-to-chip data interconnect using a high speed interconnect protocol, the method including grouping a pre-selected number of high speed interconnect protocol words to form a protection frame, adding at least one additional error protection bit to each word in the group, adding a synchronization bit to each word, using the synchronization bit in a first word in each frame for synchronization of the protection frame, and detecting and correcting a single bit error in the protection frame using the additional error protection bits, thereby reducing packet drop when the frames are transferred over the high speed data interconnect.
Type: Grant
Filed: June 27, 2012
Date of Patent: March 3, 2015
Assignee: Compass Electro Optical Systems Ltd.
Inventors: Niv Margalit, Eyal Oren, Rami Zemach, Dan Zislis
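To see how per-word protection bits can correct a single bit error across a frame, consider a simplified 2D parity scheme: an even-parity bit per word plus a column-parity word for the frame. This is an illustrative stand-in for the patent's actual coding, chosen only because it makes the detect-and-correct step concrete:

```python
def parity(bits):
    return sum(bits) % 2

def build_frame(words):
    # Append an even-parity bit to each word (row parity) and compute one
    # column-parity value per bit position across the frame.
    protected = [w + [parity(w)] for w in words]
    cols = [parity([w[i] for w in words]) for i in range(len(words[0]))]
    return protected, cols

def correct_single_error(protected, cols):
    # A single flipped bit breaks exactly one row parity and one column
    # parity; their intersection locates the bit, which we flip back.
    bad_row = next((r for r, w in enumerate(protected) if parity(w) != 0), None)
    bad_col = next((c for c in range(len(cols))
                    if parity([w[c] for w in protected]) != cols[c]), None)
    if bad_row is not None and bad_col is not None:
        protected[bad_row][bad_col] ^= 1
    return protected
```

Correcting in place instead of requesting retransmission is what reduces packet drop over the interconnect.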
-
Patent number: 7609633
Abstract: A method for controlling data transmission includes setting a respective rate criterion for each of a plurality of interfaces of a network element. Upon conveying a first data packet of a first size via a given interface of the network element at a first time, a time-stamp value is computed based on the first time, the first size and the respective rate criterion that is set for the given interface. A disposition of a second packet for conveyance via the given interface at a second time, subsequent to the first time, is determined responsively to the time-stamp value.
Type: Grant
Filed: June 12, 2006
Date of Patent: October 27, 2009
Assignee: Cisco Technology, Inc.
Inventors: Doron Shoham, Rami Zemach
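The time-stamp computation resembles a virtual-time rate gate: after sending a packet of a given size, the next conforming send time advances by size divided by the interface's rate. A minimal sketch under that reading (the class and its fields are assumptions for illustration):

```python
class RateGate:
    # Per-interface time-stamp check: a packet arriving before the stored
    # time-stamp exceeds the interface's rate criterion.
    def __init__(self, rate_bytes_per_sec):
        self.rate = rate_bytes_per_sec
        self.next_ok = 0.0

    def on_send(self, size, now):
        # Advance the time-stamp by the packet's transmission budget.
        self.next_ok = max(self.next_ok, now) + size / self.rate

    def conforms(self, now):
        return now >= self.next_ok
```

The disposition of the second packet (send, delay, or drop) then falls out of a single comparison against the stored time-stamp.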
-
Patent number: 7606250
Abstract: Disclosed are, inter alia, methods, apparatus, data structures, computer-readable media, and mechanisms for matching items with resources, such as, but not limited to, packet processing contexts, output links, memory, storage, specialized hardware or software, compute cycles, or any other entity. One implementation includes means for maintaining distribution groups of items, means for maintaining differently aged resources queues, and means for matching resources identified as being at the head of the plurality of differently aged resources queues and as being primarily and secondarily associated with said distribution groups based on a set of predetermined criteria.
Type: Grant
Filed: April 5, 2005
Date of Patent: October 20, 2009
Assignee: Cisco Technology, Inc.
Inventors: Doron Shoham, Rami Zemach, Moshe Voloshin, Alon Ratinsky, Sarig Livne, John J. Williams, Jr.