Patents by Inventor Rami Zemach
Rami Zemach has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10491718
Abstract: A packet is received by a network device via a network. A first portion of the packet is stored in a packet memory, the first portion including at least a payload of the packet. The packet is processed based on information from a header of the packet. After the packet is processed, a second portion of the packet is stored in the packet memory, the second portion including at least a portion of the header of the packet. When the packet is to be transmitted, the first portion of the packet and the second portion of the packet are retrieved from the packet memory, and the first portion and the second portion are combined to generate a transmit packet.
Type: Grant
Filed: May 17, 2017
Date of Patent: November 26, 2019
Assignee: Marvell Israel (M.I.S.L) Ltd.
Inventors: Carmi Arad, Ilan Mayer-Wolf, Rami Zemach, David Melman, Ilan Yerushalmi, Tal Mizrahi, Lior Valency
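The two-portion storage scheme in this abstract can be modeled in a few lines. This is an illustrative software sketch, not the patented hardware design; all names here are hypothetical.

```python
# Sketch: a packet memory that stores the payload portion on receipt, stores
# the (possibly modified) header portion after processing, and recombines the
# two portions when the packet is transmitted.

class PacketMemory:
    def __init__(self):
        self._portions = {}  # packet_id -> [first_portion, second_portion]

    def store_first(self, packet_id, payload):
        # First portion (at least the payload) is stored before processing.
        self._portions[packet_id] = [payload, None]

    def store_second(self, packet_id, header):
        # Second portion (at least part of the header) is stored after
        # processing, so header edits never touch the buffered payload.
        self._portions[packet_id][1] = header

    def retrieve_transmit(self, packet_id):
        # Both portions are retrieved and combined into the transmit packet.
        payload, header = self._portions.pop(packet_id)
        return header + payload


mem = PacketMemory()
mem.store_first(7, b"payload-bytes")
mem.store_second(7, b"new-hdr|")     # header rewritten during processing
tx = mem.retrieve_transmit(7)        # b"new-hdr|payload-bytes"
```

Storing the header only after processing means header rewrites never require rewriting the already-buffered payload.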
-
Patent number: 10411983
Abstract: A network device comprises time measurement units configured to measure receipt times and transmit times of packets received/transmitted via network interfaces. One or more memories store configuration information that indicates certain network interface pairs and/or certain packet flows that are enabled for latency measurement. A packet processor includes a latency monitoring trigger unit configured to select, using the configuration information, packets that are forwarded between the certain network interface pairs and/or that belong to the certain packet flows for latency monitoring.
Type: Grant
Filed: May 17, 2017
Date of Patent: September 10, 2019
Assignee: Marvell Israel (M.I.S.L) Ltd.
Inventors: Tal Mizrahi, David Melman, Adar Peery, Rami Zemach
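The selection logic this abstract describes can be sketched as a simple filter. The interface names, flow names, and dictionary layout below are hypothetical, chosen only to illustrate the trigger behavior.

```python
# Sketch: only packets whose (ingress, egress) interface pair or flow is
# enabled for latency measurement produce a latency sample.

ENABLED_PAIRS = {("if0", "if3")}     # interface pairs enabled for measurement
ENABLED_FLOWS = {"flow-a"}           # packet flows enabled for measurement

def latency_sample(pkt):
    """Return tx_time - rx_time for monitored packets, else None."""
    if (pkt["ingress"], pkt["egress"]) in ENABLED_PAIRS or pkt["flow"] in ENABLED_FLOWS:
        return pkt["tx_time"] - pkt["rx_time"]
    return None

monitored = latency_sample({"ingress": "if0", "egress": "if3",
                            "flow": "flow-b", "rx_time": 100, "tx_time": 145})
ignored = latency_sample({"ingress": "if1", "egress": "if2",
                          "flow": "flow-b", "rx_time": 100, "tx_time": 145})
```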
-
Publication number: 20190268272
Abstract: In a network device, a flow classification hardware engine is configured to: store flow state information regarding known flows of packets in a flow information table in association with respective assigned flow identifiers (IDs). The assigned flow IDs are from an ordered set of M flow IDs, where M is a positive integer. In response to detecting new flows of packets, the flow classification hardware engine: i) assigns respective flow IDs, from the ordered set of M flow IDs, to the new flows, and ii) creates respective entries in the flow information table for the new flows. An embedded processor periodically, as part of a background process: i) identifies an oldest assigned flow ID, from the ordered set of M flow IDs, and ii) makes storage space in the flow information table corresponding to the oldest assigned flow ID available for a new flow.
Type: Application
Filed: January 29, 2019
Publication date: August 29, 2019
Inventors: Tal MIZRAHI, Rami ZEMACH, Carmi ARAD, David MELMAN, Yosef KATAN
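The flow-ID lifecycle above (assign from an ordered set of M IDs, recycle the oldest in the background) can be modeled in software. This is an assumption-laden sketch, not the hardware engine; the class and method names are invented for illustration.

```python
from collections import OrderedDict

# Sketch: flow IDs come from an ordered set of M IDs; a background step
# frees the storage of the oldest assigned flow ID for reuse.

class FlowTable:
    def __init__(self, m):
        self.free_ids = list(range(m))   # ordered set of M flow IDs
        self.table = OrderedDict()       # flow key -> flow ID, oldest first

    def classify(self, flow_key):
        if flow_key in self.table:       # known flow: reuse its entry
            return self.table[flow_key]
        flow_id = self.free_ids.pop(0)   # new flow: assign next free ID
        self.table[flow_key] = flow_id   # create its table entry
        return flow_id

    def age_out_oldest(self):
        # Background process: identify the oldest assigned flow ID and make
        # its table space available for a new flow.
        _, flow_id = self.table.popitem(last=False)
        self.free_ids.append(flow_id)

ft = FlowTable(m=2)
a = ft.classify("10.0.0.1->10.0.0.2")   # assigned ID 0
b = ft.classify("10.0.0.3->10.0.0.4")   # assigned ID 1
ft.age_out_oldest()                     # frees the oldest ID (0)
c = ft.classify("10.0.0.5->10.0.0.6")   # new flow reuses ID 0
```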
-
Patent number: 10387322
Abstract: A memory supports a write operation or multiple read operations in any given clock cycle. In a first clock cycle, new content data is written to a particular content memory bank among a set of content memory banks. Also in the first clock cycle, current content data is read from corresponding locations in one or more other content memory banks among the set of content memory banks. New parity data is generated based on the new content data written to the particular content memory bank and the current content data read from the one or more other content memory banks. The new parity data is written to a cache memory associated with one or more parity memory banks. In a second clock cycle subsequent to the first clock cycle, the new parity data is transferred from the cache memory to one of the one or more parity memory banks.
Type: Grant
Filed: April 29, 2016
Date of Patent: August 20, 2019
Assignee: Marvell Israel (M.I.S.L.) Ltd.
Inventors: Dror Bromberg, Roi Sherman, Rami Zemach
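The two-cycle parity update can be sketched with XOR parity (a common choice; the abstract does not name the parity function, so treating it as XOR is an assumption of this sketch).

```python
# Sketch: in cycle 1, write the new word to one bank, read the other banks,
# and compute new parity into a cache; in cycle 2, move the cached parity
# into the parity memory bank.

def new_parity(new_word, other_bank_words):
    parity = new_word
    for w in other_bank_words:
        parity ^= w          # XOR parity across all content banks
    return parity

banks = [0b1010, 0b0110, 0b0011]   # current content of three content banks
write_word = 0b1111                # new content for bank 0

# Cycle 1: write bank 0, read the other banks, compute parity into a cache.
parity_cache = new_parity(write_word, banks[1:])
banks[0] = write_word

# Cycle 2: transfer the cached parity to the parity memory bank.
parity_bank = parity_cache
```

Deferring the parity write by one cycle is what lets the write and the reads of the other banks share the first cycle.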
-
Publication number: 20190220425
Abstract: A network device includes a transfer buffer having a plurality of memory banks, and a transfer buffer controller configured to perform a first number of write operations to write processed packets into a memory bank of the transfer buffer, monitor occupancy of the transfer buffer, and when occupancy of the transfer buffer is at least equal to a threshold, perform a predetermined number of read operations during each memory cycle, and when occupancy of the transfer buffer is less than the threshold, perform a second number of read operations, greater than the predetermined number, during each memory cycle. The device concurrently performs multiple read operations and multiple write operations in a single cycle using a plurality of ports. The buffer controller distributes data among the memory banks by allocating write addresses to keep memory occupancy substantially uniform among the memory banks, thereby freeing ports to allow performance of read operations.
Type: Application
Filed: October 30, 2018
Publication date: July 18, 2019
Inventors: Rami Zemach, Yaron Kittner
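The occupancy-based read policy can be reduced to a one-line decision. The threshold and read counts below are hypothetical values for illustration only.

```python
# Sketch: at or above the occupancy threshold, perform the predetermined
# number of reads per memory cycle; below it, perform a larger number.

THRESHOLD = 8
BASE_READS = 2       # predetermined number of read operations per cycle
BOOSTED_READS = 4    # second, larger number of read operations per cycle

def reads_this_cycle(occupancy):
    return BASE_READS if occupancy >= THRESHOLD else BOOSTED_READS

high = reads_this_cycle(occupancy=10)   # at/above threshold -> base count
low = reads_this_cycle(occupancy=3)     # below threshold -> boosted count
```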
-
Publication number: 20190173809
Abstract: A first memory device stores (i) a head part of a FIFO queue structured as a linked list (LL) of LL elements arranged in an order in which the LL elements were added to the FIFO queue and (ii) a tail part of the FIFO queue. A second memory device stores a middle part of the FIFO queue, the middle part comprising LL elements following, in the order, the head part and preceding, in the order, the tail part. A queue controller retrieves LL elements in the head part from the first memory device, moves LL elements in the middle part from the second memory device to the head part in the first memory device prior to the head part becoming empty, and updates LL parameters corresponding to the moved LL elements to indicate storage of the moved LL elements changing from the second memory device to the first memory device.
Type: Application
Filed: February 4, 2019
Publication date: June 6, 2019
Inventors: Rami ZEMACH, Dror BROMBERG
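A software model of this split FIFO helps make the head/middle/tail roles concrete. Capacities, names, and the use of `deque` are illustrative assumptions; the patented design concerns distinct memory devices, not Python containers.

```python
from collections import deque

# Sketch: head and tail parts live in a fast first memory, the middle part
# in a second memory; the controller refills the head from the middle
# before the head becomes empty, preserving FIFO order.

class SplitFifo:
    HEAD_CAPACITY = 2

    def __init__(self):
        self.head = deque()     # first memory device
        self.middle = deque()   # second memory device
        self.tail = deque()     # first memory device

    def enqueue(self, element):
        self.tail.append(element)
        # Tail overflow migrates into the middle part in the second memory.
        while len(self.tail) > self.HEAD_CAPACITY:
            self.middle.append(self.tail.popleft())

    def dequeue(self):
        element = self.head.popleft() if self.head else (
            self.middle.popleft() if self.middle else self.tail.popleft())
        self._refill_head()
        return element

    def _refill_head(self):
        # Move middle elements to the head before the head becomes empty.
        while self.middle and len(self.head) < self.HEAD_CAPACITY:
            self.head.append(self.middle.popleft())

fifo = SplitFifo()
for i in range(6):
    fifo.enqueue(i)
out = [fifo.dequeue() for _ in range(6)]   # FIFO order preserved
```

Keeping the head and tail in the faster memory means enqueue and dequeue stay fast, while the bulk of a long queue sits in the cheaper second memory.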
-
Publication number: 20190173798
Abstract: A network device for a communications network includes a port configured to transmit data to the network at a maximum transmit data rate. The device also includes a transmit buffer configured to buffer data units that are ready for transmission to the network, and a packet buffer configured to buffer data units before the data units are ready for transmission. The packet buffer is configured to output data units at a maximum packet buffer transmission rate faster than the maximum transmit data rate. The device includes a rate controller configured to control a transmission rate of data from the packet buffer to the transmit buffer so that averaged over a period, the transmission rate from the packet buffer to the transmit buffer is at most equal to the maximum transmit data rate, while allowing the transmission rate, at one or more time intervals, to exceed the maximum transmit data rate.
Type: Application
Filed: December 3, 2018
Publication date: June 6, 2019
Inventors: Rami Zemach, Yaron Kittner
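A token bucket is one plausible way to get the "bursts allowed, average capped" behavior this abstract describes; the abstract does not mandate a token bucket, so this sketch is an assumption, with made-up rates.

```python
# Sketch: the packet buffer may momentarily exceed the port's maximum
# transmit rate, but the average over the period stays at or below it.

MAX_TX_RATE = 10     # data units per interval (port's maximum transmit rate)
BURST_LIMIT = 15     # short-term excess allowed in a single interval

def run(demand_per_interval):
    tokens, sent = 0, []
    for demand in demand_per_interval:
        tokens = min(tokens + MAX_TX_RATE, BURST_LIMIT)  # accrue credit
        tx = min(demand, tokens)   # may exceed MAX_TX_RATE in one interval
        tokens -= tx
        sent.append(tx)
    return sent

sent = run([0, 15, 12, 3])
average = sum(sent) / len(sent)    # average stays <= MAX_TX_RATE
```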
-
Publication number: 20190173769
Abstract: A network device includes a transmit buffer from which data is transmitted to a network, and a packet buffer from which data chunks are transmitted to the transmit buffer in response to read requests. The packet buffer has a maximum read latency from receipt of a read request to transmission of a responsive data chunk, and receives read requests including a read request for a first data chunk of a network packet and a plurality of additional read requests for additional data chunks of the network packet. A latency timer monitors elapsed time from receipt of the first read request, and outputs a latency signal when the elapsed time reaches the maximum read latency. Transmission logic waits until the elapsed time equals the maximum read latency, and then transmits the first data chunk from the transmit buffer, without regard to a fill level of the transmit buffer.
Type: Application
Filed: December 3, 2018
Publication date: June 6, 2019
Inventors: Rami Zemach, Yaron Kittner
-
Patent number: 10200313
Abstract: A first memory device stores (i) a head part of a FIFO queue structured as a linked list (LL) of LL elements arranged in an order in which the LL elements were added to the FIFO queue and (ii) a tail part of the FIFO queue. A second memory device stores a middle part of the FIFO queue, the middle part comprising LL elements following, in the order, the head part and preceding, in the order, the tail part. A queue controller retrieves LL elements in the head part from the first memory device, moves LL elements in the middle part from the second memory device to the head part in the first memory device prior to the head part becoming empty, and updates LL parameters corresponding to the moved LL elements to indicate storage of the moved LL elements changing from the second memory device to the first memory device.
Type: Grant
Filed: June 1, 2017
Date of Patent: February 5, 2019
Assignee: Marvell Israel (M.I.S.L) Ltd.
Inventors: Rami Zemach, Dror Bromberg
-
Patent number: 10146710
Abstract: An arbiter device, during a given clock cycle, determines an ordered set corresponding to a plurality of first interfaces. The ordered set indicates whether each first interface of the plurality of first interfaces is available for selection for a second interface of a plurality of second interfaces during the given clock cycle. The arbiter device, during the given clock cycle, selects a respective available first interface, from the ordered set corresponding to the plurality of first interfaces, for each of the plurality of second interfaces. Selecting an available first interface for a particular one of the second interfaces is performed in parallel with and independently from selecting available first interfaces for other ones of the second interfaces. The arbiter device, during the given clock cycle, generates an output that indicates the selections of the respective first interfaces for the second interfaces.
Type: Grant
Filed: August 15, 2016
Date of Patent: December 4, 2018
Assignee: Marvell Israel (M.I.S.L) Ltd.
Inventors: Rami Zemach, Dror Bromberg
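The single-cycle, per-interface-independent selection can be modeled as each second interface scanning its own ordered availability set. The interface names and data layout are hypothetical; in hardware the per-interface selections would run in parallel rather than in a Python loop.

```python
# Sketch: each second interface independently picks the first available
# first interface from its ordered availability set for this clock cycle.

def arbitrate(availability):
    """availability: {second_if: ordered [(first_if, is_available), ...]}."""
    return {
        second_if: next((f for f, ok in ordered_set if ok), None)
        for second_if, ordered_set in availability.items()
    }

grants = arbitrate({
    "egress0": [("ingress0", False), ("ingress1", True), ("ingress2", True)],
    "egress1": [("ingress0", True), ("ingress1", False), ("ingress2", True)],
})
# Each selection depends only on its own ordered set, so no selection
# has to wait on another -- the property that enables parallel hardware.
```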
-
Patent number: 10089018
Abstract: A method for data storage includes receiving one or more read commands and one or more write commands, for execution in a same clock cycle in a memory array that comprises multiple single-port memory banks divided into groups. The write commands provide data for storage but do not specify storage locations in which the data is to be stored. One or more of the groups, which are not accessed by the read commands in the same clock cycle, are selected. Available storage locations are chosen for the write commands in the single-port memory banks of the selected one or more groups. During the same clock cycle, the data provided in the write commands is stored in the chosen storage locations, and the data requested by the read commands is retrieved. Execution of the write commands is acknowledged by reporting the chosen storage locations.
Type: Grant
Filed: April 17, 2016
Date of Patent: October 2, 2018
Assignee: Marvell Israel (M.I.S.L) Ltd.
Inventors: Dror Bromberg, Roi Sherman, Rami Zemach
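The key idea above is that address-less writes are steered away from the bank groups that reads occupy in the same cycle. The group/bank/slot layout in this sketch is invented for illustration.

```python
# Sketch: reads target specific single-port banks, so writes (which carry no
# address) go to groups that no read touches this cycle; the chosen
# locations are reported back as the write acknowledgement.

def schedule_cycle(read_addrs, writes, groups):
    """read_addrs: [(group, bank, slot)]; writes: [data];
    groups: {group: free locations as [(bank, slot)]}."""
    busy_groups = {g for g, _, _ in read_addrs}
    idle_groups = [g for g in groups if g not in busy_groups]
    placements = []                    # (group, bank, slot) acknowledgements
    for data, g in zip(writes, idle_groups):
        bank, slot = groups[g].pop(0)  # choose an available storage location
        placements.append((g, bank, slot))
    return placements

placements = schedule_cycle(
    read_addrs=[("g0", 0, 5)],
    writes=["pkt-a", "pkt-b"],
    groups={"g0": [(1, 0)], "g1": [(0, 0)], "g2": [(0, 3)]},
)
# Writes land only in groups g1 and g2, which no read touches this cycle,
# so single-port banks never see a read/write conflict.
```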
-
Patent number: 9996489
Abstract: The invention relates to a memory aggregation device for storing a set of input data streams and retrieving data to a set of output data streams, the memory aggregation device comprising: a set of first-in first-out (FIFO) memories each comprising an input and an output; an input interconnector configured to interconnect each one of the set of input data streams to each input of the set of FIFO memories according to an input interconnection matrix; an output interconnector configured to interconnect each output of the set of FIFO memories to each one of the set of output data streams according to an output interconnection matrix; an input selector; an output selector; and a memory controller.
Type: Grant
Filed: December 18, 2015
Date of Patent: June 12, 2018
Assignee: Huawei Technologies Co., Ltd.
Inventors: Yaron Shachar, Yoav Peleg, Alex Tal, Alex Umansky, Rami Zemach, Lixia Xiong, Yuchun Lu
-
Patent number: 9923813
Abstract: In a method for processing packets in a network device, a first packet is received at a first port of the network device. A first set of bits, corresponding to a first set of bit locations in a header of the first packet, is extracted from the header of the first packet. A first set of processing operations is performed to process the first packet using the first set of bits. A second packet is received at a second port of the network device. A second set of bits, corresponding to a second set of bit locations in a header of the second packet, is extracted from the header of the second packet. A second set of processing operations is performed to process the second packet using the second set of bits.
Type: Grant
Filed: December 17, 2014
Date of Patent: March 20, 2018
Assignee: MARVELL WORLD TRADE LTD.
Inventors: Gil Levy, Amir Roitshtein, Rami Zemach
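Per-port bit-location extraction can be sketched directly. The header width, bit positions, and port names below are hypothetical; the point is only that different ports can pull different header bit locations to drive their processing.

```python
# Sketch: each port has its own set of header bit locations to extract
# (bit 0 = most significant bit of a 16-bit header in this example).

PORT_BIT_LOCATIONS = {
    "port1": [0, 1, 2, 3],      # e.g. a version nibble
    "port2": [8, 9, 10, 11],    # e.g. part of a traffic-class field
}

def extract_bits(header, port):
    """Extract the port-specific bit locations (MSB-first) from a header int."""
    width = 16                  # assumed header width for the sketch
    bits = 0
    for loc in PORT_BIT_LOCATIONS[port]:
        bits = (bits << 1) | ((header >> (width - 1 - loc)) & 1)
    return bits

header = 0b1010_0000_1100_0000
first_set = extract_bits(header, "port1")    # bit locations 0-3
second_set = extract_bits(header, "port2")   # bit locations 8-11
```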
-
Patent number: 9898431
Abstract: Aspects of the disclosure provide a circuit that includes a plurality of memory access circuits configured to access a memory to read or write data of a first width. The memory includes a plurality of memory banks that are organized in a hierarchy. Further, the circuit includes a plurality of interface circuits respectively associated with the plurality of memory access circuits. Each interface circuit is configured to receive memory access requests to first level memory banks from an associated memory access circuit, segment the memory access requests into sub-requests to corresponding second level memory banks, and buffer the sub-requests into buffers associated with the second level memory banks. In addition, the circuit includes arbitration circuitry configured to control multiplexing paths from the buffers to the second level memory banks to enable, in a same memory access clock cycle, memory accesses by the memory access circuits.
Type: Grant
Filed: March 3, 2016
Date of Patent: February 20, 2018
Assignee: MARVELL ISRAEL (M.I.S.L) LTD.
Inventors: Amir Bishara, Lior Valency, Rami Zemach
-
Patent number: 9876719
Abstract: A forwarding engine in a network device selects one or more groups of multiple egress interfaces of the network device for forwarding packets received by the network device. An egress interface selector in the network device selects individual egress interfaces within the one or more groups selected by the forwarding engine. The egress interface selector includes a table associated with a first group of multiple egress interfaces, wherein elements in the table include values that indicate individual egress interfaces in the first group. When the forwarding engine selects the first group, a table element selector selects an element in the table to identify the individual egress interface for forwarding the packet.
Type: Grant
Filed: March 4, 2016
Date of Patent: January 23, 2018
Assignee: Marvell World Trade Ltd.
Inventors: Yoram Revah, David Melman, Tal Mizrahi, Rami Zemach, Carmi Arad
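The two-stage selection (forwarding engine picks a group, a table element selector picks a member) can be sketched with a hash-based selector, a common choice for this kind of design; the hash, group names, and table contents are assumptions of this sketch, not details from the patent.

```python
# Sketch: the forwarding engine supplies a group; the table element selector
# then picks one member interface from that group's table. Using a
# deterministic function of flow fields keeps a given flow on one interface.

GROUP_TABLES = {
    "lag1": ["eth0", "eth1", "eth2"],   # group -> member egress interfaces
    "lag2": ["eth3", "eth4"],
}

def select_egress(group, flow_key):
    table = GROUP_TABLES[group]
    index = sum(flow_key.encode()) % len(table)  # illustrative selector
    return table[index]

first = select_egress("lag1", "10.0.0.1->10.0.0.2:443")
again = select_egress("lag1", "10.0.0.1->10.0.0.2:443")  # same flow, same pick
```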
-
Patent number: 9865503
Abstract: Aspects of the disclosure provide a method for semiconductor wafer manufacturing. The method includes utilizing a subset of lower level masks in a mask set to form multiple modular units of lower level circuit structures on a semiconductor wafer. The mask set includes the subset of lower level masks and at least a first subset of upper level masks and a second subset of upper level masks. The first subset of upper level masks defines intra-unit interconnections. The second subset of upper level masks defines both intra-unit interconnections and inter-unit interconnections. The method further includes selecting one of at least the first subset of upper level masks and the second subset of upper level masks based on a composition request of a final integrated circuit (IC) product and utilizing the selected subset of upper level masks to form upper level structures on the semiconductor wafer.
Type: Grant
Filed: November 7, 2016
Date of Patent: January 9, 2018
Assignee: MARVELL ISRAEL (M.I.S.L) LTD.
Inventors: Eran Rotem, Rami Zemach, Itay Peled
-
Publication number: 20170353403
Abstract: A first memory device stores (i) a head part of a FIFO queue structured as a linked list (LL) of LL elements arranged in an order in which the LL elements were added to the FIFO queue and (ii) a tail part of the FIFO queue. A second memory device stores a middle part of the FIFO queue, the middle part comprising LL elements following, in the order, the head part and preceding, in the order, the tail part. A queue controller retrieves LL elements in the head part from the first memory device, moves LL elements in the middle part from the second memory device to the head part in the first memory device prior to the head part becoming empty, and updates LL parameters corresponding to the moved LL elements to indicate storage of the moved LL elements changing from the second memory device to the first memory device.
Type: Application
Filed: June 1, 2017
Publication date: December 7, 2017
Inventors: Rami ZEMACH, Dror BROMBERG
-
Publication number: 20170339041
Abstract: A network device comprises time measurement units configured to measure receipt times and transmit times of packets received/transmitted via network interfaces. One or more memories store configuration information that indicates certain network interface pairs and/or certain packet flows that are enabled for latency measurement. A packet processor includes a latency monitoring trigger unit configured to select, using the configuration information, packets that are forwarded between the certain network interface pairs and/or that belong to the certain packet flows for latency monitoring.
Type: Application
Filed: May 17, 2017
Publication date: November 23, 2017
Inventors: Tal MIZRAHI, David MELMAN, Adar PEERY, Rami ZEMACH
-
Publication number: 20170339074
Abstract: A packet is received at a network device. The packet is processed by the network device to determine at least one egress port via which to transmit the packet, and to perform egress classification of the packet based at least in part on information determined for the packet during processing of the packet. Egress classification includes determining whether the packet should not be transmitted by the network device. When it is not determined that the packet should not be transmitted by the network device, a copy of the packet is generated for mirroring of the packet to a destination other than the determined at least one egress port, and the packet is enqueued in an egress queue corresponding to the determined at least one egress port. The packet is subsequently transferred to the determined at least one egress port for transmission of the packet.
Type: Application
Filed: May 18, 2017
Publication date: November 23, 2017
Inventors: David MELMAN, Ilan MAYER-WOLF, Carmi ARAD, Rami ZEMACH
-
Publication number: 20170339259
Abstract: A packet is received by a network device via a network. A first portion of the packet is stored in a packet memory, the first portion including at least a payload of the packet. The packet is processed based on information from a header of the packet. After the packet is processed, a second portion of the packet is stored in the packet memory, the second portion including at least a portion of the header of the packet. When the packet is to be transmitted, the first portion of the packet and the second portion of the packet are retrieved from the packet memory, and the first portion and the second portion are combined to generate a transmit packet.
Type: Application
Filed: May 17, 2017
Publication date: November 23, 2017
Inventors: Carmi ARAD, Ilan MAYER-WOLF, Rami ZEMACH, David MELMAN, Ilan YERUSHALMI, Tal MIZRAHI, Lior VALENCY