Patents by Inventor Pedro Yebenes

Pedro Yebenes has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240089219
    Abstract: Examples described herein relate to a switch. In some examples, the switch includes circuitry that is configured to: based on receipt of a packet and a level of a first queue, select among a first memory and a second memory device among multiple second memory devices to store the packet, based on selection of the first memory, store the packet in the first memory, and based on selection of the second memory device among multiple second memory devices, store the packet into the selected second memory device. In some examples, the packet is associated with an ingress port and an egress port, and the selected second memory device is associated with a third port that is different than the ingress port or the egress port associated with the packet.
    Type: Application
    Filed: November 10, 2023
    Publication date: March 14, 2024
    Inventors: Md Ashiqur RAHMAN, Roberto PENARANDA CEBRIAN, Anil VASUDEVAN, Allister ALEMANIA, Pedro YEBENES SEGURA
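    Illustrative sketch: a minimal Python rendering of the buffer-selection logic summarized in the abstract above, assuming a fill-level threshold on the first memory's queue and a least-loaded choice among eligible second memory devices; the class names, threshold value, and tie-breaking policy are assumptions for illustration, not details from the filing.

      from dataclasses import dataclass, field

      @dataclass
      class Packet:
          ingress_port: int
          egress_port: int
          payload: bytes = b""

      @dataclass
      class SwitchBuffers:
          secondary_queues: dict            # port -> packets held in that second memory device
          primary_threshold: int = 8        # queue level above which the first memory is bypassed
          primary_queue: list = field(default_factory=list)

          def store(self, pkt: Packet) -> str:
              # Prefer the first memory while its queue level stays under the threshold.
              if len(self.primary_queue) < self.primary_threshold:
                  self.primary_queue.append(pkt)
                  return "first-memory"
              # Otherwise spill to a second memory device attached to a third port that is
              # neither the packet's ingress nor egress port; pick the least-loaded one.
              candidates = [p for p in self.secondary_queues
                            if p not in (pkt.ingress_port, pkt.egress_port)]
              if not candidates:
                  raise RuntimeError("no eligible second memory device")
              port = min(candidates, key=lambda p: len(self.secondary_queues[p]))
              self.secondary_queues[port].append(pkt)
              return f"second-memory@port{port}"

      buffers = SwitchBuffers(secondary_queues={1: [], 2: [], 3: []})
      print(buffers.store(Packet(ingress_port=1, egress_port=2)))   # "first-memory"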
  • Publication number: 20230403233
    Abstract: Examples described herein relate to a network interface device. In some examples, the network interface device includes a host interface; direct memory access (DMA) circuitry; a network interface; and circuitry. The circuitry can be configured to select a next hop network interface device from among multiple network interface devices based on telemetry data received from at least one switch. In some examples, the telemetry data is based on congestion information of a first queue associated with a first traffic class, the telemetry data is based on per-network interface device hop-level congestion states from at least one network interface device, the first queue shares bandwidth of an egress port with a second queue, the first traffic class is associated with packet traffic subject to congestion control based on utilization of the first queue, and the utilization of the first queue is based on a drain rate of the first queue and a transmit rate from the egress port.
    Type: Application
    Filed: August 29, 2023
    Publication date: December 14, 2023
    Inventors: Md Ashiqur RAHMAN, Rong PAN, Roberto PENARANDA CEBRIAN, Allister ALEMANIA, Pedro YEBENES SEGURA
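    Illustrative sketch: a possible reading of the telemetry-driven next-hop choice in the abstract above, assuming each candidate next-hop device reports a scalar congestion state for the traffic-class queue; the helper names and the ratio used for queue utilization are invented for demonstration.

      def queue_utilization(drain_rate_bps: float, egress_tx_rate_bps: float) -> float:
          # The abstract ties queue utilization to the queue's drain rate and the egress
          # port's transmit rate; a plain ratio stands in for that relationship here.
          return drain_rate_bps / egress_tx_rate_bps if egress_tx_rate_bps else 0.0

      def select_next_hop(congestion_state: dict) -> str:
          # congestion_state maps each candidate next-hop NIC to a per-hop congestion
          # value for the first traffic-class queue (lower means less congested).
          return min(congestion_state, key=congestion_state.get)

      print(queue_utilization(40e9, 100e9))                                # 0.4
      print(select_next_hop({"nic-a": 0.7, "nic-b": 0.2, "nic-c": 0.5}))   # nic-b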
  • Publication number: 20220210075
    Abstract: Examples described herein relate to a switch, when operational, that is configured to receive in a packet an indicator of the number of remaining bytes in a flow and to selectively send a congestion message based on a fullness level of a buffer and an indication of the remainder of the flow. In some examples, the indicator is received in an Internet Protocol version 4 consistent Options header field or Internet Protocol version 6 consistent Flow label field. In some examples, the congestion message comprises one or more of: an Explicit Congestion Notification (ECN), priority-based flow control (PFC), and/or in-band telemetry (INT). In some examples, to selectively send a congestion message to a transmitter based on a fullness level of a buffer that stored the packet and the number of remaining bytes in the flow, the switch is to determine whether the buffer is large enough to store the remaining bytes in the flow.
    Type: Application
    Filed: October 29, 2021
    Publication date: June 30, 2022
    Inventors: Malek MUSLEH, Gene WU, Anupama KURPAD, Allister ALEMANIA, Roberto PENARANDA CEBRIAN, Robert SOUTHWORTH, Pedro YEBENES SEGURA, Curt E. BRUNS, Sujoy SEN
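    Illustrative sketch: the selective-signaling test described in the abstract above reduces to comparing the flow's advertised remaining bytes against the buffer's free space; the function name, constants, and example values below are assumptions, not figures from the filing.

      def should_signal_congestion(buffer_capacity: int,
                                   buffer_occupancy: int,
                                   remaining_flow_bytes: int) -> bool:
          # Signal (e.g. via ECN, PFC, or INT) only when the free buffer space cannot
          # absorb the remainder of the flow advertised in the packet header.
          free_space = buffer_capacity - buffer_occupancy
          return remaining_flow_bytes > free_space

      # 64 KB buffer, 48 KB occupied, sender advertises 32 KB still to come -> signal.
      print(should_signal_congestion(64 * 1024, 48 * 1024, 32 * 1024))   # True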
  • Publication number: 20220103479
    Abstract: Examples described herein relate to a sender network interface device transmitting one or more packet probes to a receiver device, when a link is underutilized, to request information concerning link or path utilization. Based on responses to the packet probes, the sender network interface device can determine a packet transmit rate of packets of one or more flows and adjust the packet transmit rate of packets of one or more flows to increase utilization of the link.
    Type: Application
    Filed: December 8, 2021
    Publication date: March 31, 2022
    Inventors: Pedro YEBENES SEGURA, Roberto PENARANDA CEBRIAN, Rong PAN, Robert SOUTHWORTH, Allister ALEMANIA, Malek MUSLEH
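    Illustrative sketch: one way the probe-driven rate adjustment in the abstract above could look, assuming the probe responses carry a path-utilization estimate and the sender steps its rate toward the unused capacity; the gain constant and parameter names are illustrative assumptions.

      def adjust_rate_from_probe(current_rate_bps: float,
                                 link_capacity_bps: float,
                                 reported_utilization: float,
                                 gain: float = 0.5) -> float:
          # Move the transmit rate part of the way toward the idle capacity reported
          # by the probes; the step size (gain) is an arbitrary choice here.
          headroom = max(0.0, link_capacity_bps * (1.0 - reported_utilization))
          return current_rate_bps + gain * headroom

      rate = 3e9                                                   # flow currently at 3 Gb/s
      rate = adjust_rate_from_probe(rate, 10e9, reported_utilization=0.4)
      print(f"{rate / 1e9:.1f} Gb/s")                              # raised toward idle capacity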
  • Publication number: 20220103484
    Abstract: Examples described herein relate to a network interface device that is to adjust a transmission rate of packets based on a number of flows contributing to congestion and/or based on whether latency is increasing or decreasing. In some examples, adjusting the transmission rate of packets based on a number of flows contributing to congestion comprises adjusting an additive increase (AI) parameter based on the number of flows contributing to congestion. In some examples, latency is based on a measured roundtrip time and a baseline roundtrip time.
    Type: Application
    Filed: December 8, 2021
    Publication date: March 31, 2022
    Inventors: Roberto PENARANDA CEBRIAN, Robert SOUTHWORTH, Pedro YEBENES SEGURA, Rong PAN, Allister ALEMANIA, Nayan Amrutlal SUTHAR, Malek MUSLEH
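    Illustrative sketch: a compact reading of the rate update in the abstract above, assuming the additive-increase step is divided by the number of flows contributing to congestion and the latency trend is inferred from measured versus baseline round-trip time; all constants are invented for the example.

      def update_rate(rate_bps: float,
                      num_congested_flows: int,
                      rtt_us: float,
                      base_rtt_us: float,
                      ai_bps: float = 50e6,
                      md_factor: float = 0.8) -> float:
          # Back off multiplicatively while latency is rising; otherwise apply an
          # additive increase shared across the flows contributing to congestion.
          if rtt_us > base_rtt_us:
              return rate_bps * md_factor
          return rate_bps + ai_bps / max(1, num_congested_flows)

      print(update_rate(5e9, num_congested_flows=4, rtt_us=12.0, base_rtt_us=10.0))  # decrease
      print(update_rate(5e9, num_congested_flows=4, rtt_us=9.5, base_rtt_us=10.0))   # increase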
  • Publication number: 20210359955
    Abstract: Examples described herein relate to a network interface device comprising: a host interface, a direct memory access (DMA) engine, and circuitry to allocate a region in a cache to store a context of a connection. In some examples, the circuitry is to allocate a region in a cache to store a context of a connection based on connection reliability and wherein connection reliability comprises use of a reliable transport protocol or non-use of a reliable transport protocol. In some examples, the circuitry is to allocate a region in a cache to store a context of a connection based on expected length of runtime of the connection and the expected length of runtime of the connection is based on a historic average amount of time the context for the connection was stored in the cache. In some examples, the circuitry is to allocate a region in a cache to store a context of a connection based on content transmitted and the content transmitted comprises congestion messaging payload or acknowledgement.
    Type: Application
    Filed: July 23, 2021
    Publication date: November 18, 2021
    Inventors: Malek MUSLEH, Tony HURSON, Pedro YEBENES SEGURA, Allister ALEMANIA, Roberto PENARANDA CEBRIAN, Ayan BANERJEE, Robert SOUTHWORTH, Sujoy SEN, Curt E. BRUNS
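    Illustrative sketch: a hypothetical cache-admission policy along the lines of the abstract above, keying on transport reliability, the historic time a connection's context stays cached, and whether the connection carries congestion or acknowledgement traffic; the attribute names and threshold are assumptions.

      from dataclasses import dataclass

      @dataclass
      class Connection:
          uses_reliable_transport: bool    # reliability: reliable transport protocol in use or not
          avg_cached_time_s: float         # historic average time this context stayed in the cache
          carries_congestion_or_ack: bool  # content: congestion messaging payload or acknowledgement

      def should_cache_context(conn: Connection, runtime_threshold_s: float = 1.0) -> bool:
          # Favor connections that use a reliable transport, historically keep their
          # context resident for a long time, or carry latency-sensitive control traffic.
          return (conn.uses_reliable_transport
                  or conn.avg_cached_time_s >= runtime_threshold_s
                  or conn.carries_congestion_or_ack)

      print(should_cache_context(Connection(True, 0.2, False)))    # True
      print(should_cache_context(Connection(False, 0.1, False)))   # False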
  • Publication number: 20210112002
    Abstract: Examples described herein relate to a network agent, when operational, to: receive a packet, determine transmit rate-related information for a sender network device based at least on operational and telemetry information accumulated in the received packet, and transmit the transmit rate-related information to the sender network device. In some examples, the network agent includes a network device coupled to a server, a server, or a network device. In some examples, the operational and telemetry information comprises: telemetry information generated by at least one network device in a path from the sender network device to the network agent.
    Type: Application
    Filed: December 22, 2020
    Publication date: April 15, 2021
    Inventors: Rong PAN, Pedro YEBENES SEGURA, Roberto PENARANDA CEBRIAN, Robert SOUTHWORTH, Malek MUSLEH, Jeongkeun LEE, Changhoon KIM
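    Illustrative sketch: a simplified view of the network-agent computation in the abstract above, assuming the packet accumulates per-hop link capacity and utilization and the agent returns the smallest unused capacity on the path as the rate advice; the field names and formula are assumptions for illustration.

      def recommend_rate(path_telemetry: list) -> float:
          # Each element describes one hop on the path from the sender to the agent,
          # e.g. {"link_bps": ..., "utilization": ...}; advise the bottleneck headroom.
          return min(hop["link_bps"] * (1.0 - hop["utilization"]) for hop in path_telemetry)

      telemetry = [
          {"link_bps": 100e9, "utilization": 0.60},
          {"link_bps": 100e9, "utilization": 0.85},   # bottleneck hop
          {"link_bps": 400e9, "utilization": 0.30},
      ]
      print(f"advise sender: {recommend_rate(telemetry) / 1e9:.0f} Gb/s")   # 15 Gb/s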
  • Publication number: 20210092069
    Abstract: Examples described herein relate to a network interface and at least one processor that is to indicate whether data is associated with a machine learning operation or non-machine learning operation to manage traversal of the data through one or more network elements to a destination network element and cause the network interface to include an indication in a packet of whether the packet includes machine learning data or non-machine learning data. In some examples, the indication in a packet of whether the packet includes machine learning data or non-machine learning data comprises a priority level and wherein one or more higher priority levels identify machine learning data. In some examples, for machine learning data, the priority level is based on whether the data is associated with inference, training, or re-training operations. In some examples, for machine learning data, the priority level is based on whether the data is associated with real-time or time insensitive inference operations.
    Type: Application
    Filed: December 10, 2020
    Publication date: March 25, 2021
    Inventors: Malek MUSLEH, Anupama KURPAD, Roberto PENARANDA CEBRIAN, Allister ALEMANIA, Pedro YEBENES SEGURA, Curt E. BRUNS, Robert SOUTHWORTH, Sujoy SEN
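    Illustrative sketch: one possible priority-marking policy matching the abstract above, where non-machine-learning traffic gets the lowest level and machine-learning traffic is ranked by operation and latency sensitivity; the numeric levels and phase names are assumptions, not values from the filing.

      def ml_priority(is_ml: bool, phase: str = "", realtime: bool = False) -> int:
          # Return the priority level to stamp into the packet so network elements can
          # manage traversal; higher levels identify machine-learning data.
          if not is_ml:
              return 0
          if phase == "inference":
              return 3 if realtime else 2   # real-time inference outranks time-insensitive inference
          return 1                          # training / re-training traffic

      print(ml_priority(False))                               # 0
      print(ml_priority(True, "inference", realtime=True))    # 3
      print(ml_priority(True, "training"))                    # 1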
  • Publication number: 20180026878
    Abstract: A communication apparatus includes an interface and a processor. The interface is configured for connecting to a communication network, including multiple network switches divided into groups. The processor is configured to predefine a strictly monotonic order among the groups, to receive an indication of a flow of packets to be routed from a source endpoint served by a source network switch belonging to a source group to a destination endpoint served by a destination network switch belonging to a destination group, to assign a first Virtual Lane (VL) to the packets in the flow if the destination group succeeds the source group in the predefined order, to assign to the packets in the flow a second VL if the destination group does not succeed the source group in the predefined order, and to configure the network switches to route the packets of the flow in accordance with the assigned VL.
    Type: Application
    Filed: July 24, 2016
    Publication date: January 25, 2018
    Inventors: Eitan Zahavi, German Maglione-Mathey, Pedro Yebenes, Jesus Escudero-Sahuquillo, Pedro Javier Garcia, Francisco Jose Quiles
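    Illustrative sketch: the virtual-lane choice in the abstract above, assuming groups are ranked by an integer index that realizes the strictly monotonic order; the VL numbers are arbitrary placeholders.

      def assign_vl(source_group: int, destination_group: int,
                    vl_first: int = 0, vl_second: int = 1) -> int:
          # Use the first VL when the destination group succeeds the source group in
          # the predefined order, and the second VL otherwise.
          return vl_first if destination_group > source_group else vl_second

      print(assign_vl(source_group=2, destination_group=5))   # 0: moving "up" the order
      print(assign_vl(source_group=5, destination_group=2))   # 1: moving "down" the order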