Patents by Inventor Andrea Francini
Andrea Francini has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

Publication number: 20240031302
Abstract: Various example embodiments for supporting guaranteed-latency networking are presented herein. Various example embodiments for supporting guaranteed-latency networking may be configured to support guaranteed-latency networking based on use of a time division multiplexing (TDM) frame, such as a periodic service sequence (PSS), to support transmission of packets of a set of flows. Various example embodiments for supporting guaranteed-latency networking based on use of a PSS may be configured to support guaranteed-latency networking based on use of various PSS computation enhancements. Various example embodiments for supporting guaranteed-latency networking based on use of a PSS may be configured to support guaranteed-latency networking based on use of flow bundling.
Type: Application
Filed: July 12, 2022
Publication date: January 25, 2024
Inventors: Andrea Francini, Raymond Miller
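The abstract above describes scheduling from a periodic service sequence, a TDM-style frame whose timeslots are assigned to flows. As a rough illustration (not the patented computation; the function name, even-spacing heuristic, and inputs are all hypothetical), a PSS can be built by spreading each flow's per-frame slot quota across the frame, which matters for latency because a flow's slots should not bunch up:

```python
def build_pss(frame_len, flow_slots):
    """Build a toy periodic service sequence (PSS).

    flow_slots: dict mapping flow id -> integer slots per frame.
    Returns a list of frame_len entries, each a flow id or None.
    """
    assert sum(flow_slots.values()) <= frame_len
    pss = [None] * frame_len
    for flow, n in flow_slots.items():
        for k in range(n):
            # Ideal evenly spaced position for this flow's k-th slot.
            pos = round(k * frame_len / n)
            # Probe forward (wrapping) for the first free slot.
            while pss[pos % frame_len] is not None:
                pos += 1
            pss[pos % frame_len] = flow
    return pss
```

With an 8-slot frame and flows A (4 slots) and B (2 slots), A lands on every other slot and B is interleaved between A's slots, leaving two slots idle.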

Patent number: 11848863
Abstract: A network node configured to transmit packets to a destination node in a packet network, includes at least one processor and at least one memory including computer program code. The at least one memory and the computer program code are configured to, with the at least one processor, cause the network node to: assemble at least a first packet including a plurality of data units, each of the plurality of data units being grouped into one of a connection group, a network function group or an application group; and transmit the first packet to the destination node.
Type: Grant
Filed: August 21, 2020
Date of Patent: December 19, 2023
Assignee: Nokia Solutions and Networks Oy
Inventors: Bilgehan Erman, Andrea Francini, Edward Grinshpun, Raymond Miller
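The claim above packs tagged data units into one packet. A minimal sketch of what such assembly could look like, assuming a toy two-byte framing header (group id plus payload length) that is purely illustrative and not the patent's wire format:

```python
from dataclasses import dataclass
from typing import List

GROUPS = ("connection", "network_function", "application")

@dataclass
class DataUnit:
    group: str       # one of GROUPS
    payload: bytes

def assemble_packet(units: List[DataUnit]) -> bytes:
    """Concatenate data units into one packet: per unit, a 1-byte
    group id and a 1-byte payload length precede the payload."""
    out = bytearray()
    for u in units:
        gid = GROUPS.index(u.group)   # raises ValueError if group unknown
        assert len(u.payload) < 256   # toy framing: 1-byte length field
        out += bytes([gid, len(u.payload)]) + u.payload
    return bytes(out)

def parse_packet(pkt: bytes) -> List[DataUnit]:
    """Inverse of assemble_packet: walk the headers and split units."""
    units, i = [], 0
    while i < len(pkt):
        gid, ln = pkt[i], pkt[i + 1]
        units.append(DataUnit(GROUPS[gid], pkt[i + 2:i + 2 + ln]))
        i += 2 + ln
    return units
```

A round trip through both functions returns the original unit list, which is the property any real framing of this kind must preserve.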

Publication number: 20230396516
Abstract: An application-specific interface (AA-NAPI), associated with an application for the purpose of controlling the networking aspects of the application session, is provided by a communication service provider (CSP). A cost model of the network service types associated with a specific application type is learned by using the AA-NAPI. For each service-point joining the session, network properties are queried by using the AA-NAPI. During continuous service quality monitoring, upon detection of quality degradation leading to the decision to upgrade session quality, the network service type associated with a specific service-point is changed by using the AA-NAPI. Network-specific actions are taken to reconfigure the network so that the requested service type is received by the specific service-point. New network capability is activated to improve the network service level by using the AA-NAPI.
Type: Application
Filed: May 16, 2023
Publication date: December 7, 2023
Inventors: Bilgehan Erman, Ejder Bastug, Bruce Cilli, Andrea Francini, Raymond Miller, Charles Payette, Sameerkumar Sharma

Publication number: 20230254264
Abstract: Various embodiments relate to a path computation element (PCE) configured to control a network having ingress edge nodes, interior nodes, and egress edge nodes, including: a network interface configured to communicate with the network; a memory; and a processor coupled to the memory and the network interface, wherein the processor is further configured to: receive a request for a first continuous guaranteed latency (CGL) flow to be carried by the network; make routing and admission control decisions for the requested first CGL flow without provisioning of the first CGL flow and without configuration of schedulers in the interior nodes of the network; and provide flow shaping parameters to a flow shaper at an ingress edge node of the first CGL flow.
Type: Application
Filed: February 10, 2022
Publication date: August 10, 2023
Inventors: Andrea Francini, Raymond Miller, Bruce Cilli, Charles Payette, Sameerkumar Sharma
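The key point in this abstract is that admission control happens centrally, without touching interior-node schedulers: the PCE only tracks reservations and hands shaping parameters to the ingress edge. A toy sketch of that control loop (class and method names, and the per-link capacity bookkeeping, are illustrative assumptions, not the patented logic):

```python
class PathComputationElement:
    """Toy PCE: admits a continuous-guaranteed-latency (CGL) flow if
    every link on its path has spare capacity, then returns the
    (rate, burst) parameters for the ingress flow shaper.
    Interior nodes are never configured."""

    def __init__(self, link_capacity):
        self.capacity = dict(link_capacity)           # link -> capacity
        self.reserved = {l: 0.0 for l in link_capacity}

    def admit(self, path, rate, burst):
        # Reject if any link on the path would be oversubscribed.
        if any(self.reserved[l] + rate > self.capacity[l] for l in path):
            return None
        for l in path:
            self.reserved[l] += rate
        return {"rate": rate, "burst": burst}  # shaping params for ingress
```

A rejected flow leaves the reservation state untouched, so a later, smaller request on the same links can still succeed.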

Patent number: 11677666
Abstract: Various example embodiments for supporting queue management in communication systems are presented. Various example embodiments for supporting queue management in communication systems may be configured to support application-based queue management. Various example embodiments for supporting application-based queue management may be configured to support application-based congestion control. Various example embodiments for supporting application-based congestion control may be configured to support application-based congestion control based on use of trigger templates.
Type: Grant
Filed: October 11, 2019
Date of Patent: June 13, 2023
Assignee: Nokia Solutions and Networks Oy
Inventors: Andrea Francini, Koen De Schepper, Olivier Tilmans, Sameerkumar Sharma

Patent number: 11463370
Abstract: Various example embodiments for supporting scalable deterministic services in packet networks are presented. Various example embodiments for supporting scalable deterministic services in packet networks may be configured to support delay guarantees (e.g., finite end-to-end delay bounds) for a class of traffic flows referred to as guaranteed-delay (GD) traffic flows. Various example embodiments for supporting scalable deterministic services in packet networks may be configured to support delay guarantees for GD traffic flows of a network based on a queuing arrangement that is based on network outputs of the network, a packet scheduling method that is configured to support scheduling of packets of the GD traffic flows, and a service rate allocation rule configured to support delay guarantees for the GD traffic flows.
Type: Grant
Filed: February 12, 2020
Date of Patent: October 4, 2022
Assignee: Nokia Solutions and Networks Oy
Inventors: Andrea Francini, Raymond Miller, Sameerkumar Sharma
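The abstract mentions a service rate allocation rule that yields finite delay bounds. The patent does not state the rule, but a standard result from network calculus gives the flavor: for a flow shaped to sustained rate r and burst b, a work-conserving server must allocate at least max(r, b/D) to keep the flow's queuing delay under a budget D. A sketch under that textbook assumption:

```python
def allocate_service_rate(flow_rate, burst, delay_bound):
    """Classic allocation rule (not necessarily the patented one):
    the allocated rate must cover the sustained rate AND be large
    enough to drain a full burst within the delay budget."""
    assert delay_bound > 0
    return max(flow_rate, burst / delay_bound)
```

For a low-rate flow with a large burst, the burst term dominates; for a high-rate flow with a small burst, the sustained rate does.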

Patent number: 11425592
Abstract: Systems, methods, apparatuses, and computer program products for packet latency reduction in mobile radio access networks. One method may include, when a buffer of a first sublayer of a wireless access link is empty and there is a new data unit in the first sublayer or when the first sublayer buffer is not empty and a data unit leaves a second sublayer buffer, comparing the number of data units currently stored in the second sublayer buffer with a queue length threshold that defines a total amount of space in the second sublayer buffer. When the number of data units currently stored in the second sublayer buffer is less than the queue length threshold, the method may also include transferring the data unit from the first sublayer to the second sublayer.
Type: Grant
Filed: September 12, 2017
Date of Patent: August 23, 2022
Assignee: Nokia Solutions and Networks Oy
Inventors: Andrea Francini, Rajeev Kumar, Sameerkumar Sharma
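The transfer rule in this abstract is concrete enough to sketch directly: a data unit moves from the upper sublayer buffer to the lower one only while the lower buffer holds fewer units than a threshold. Keeping the lower buffer short is what cuts the queuing latency. A minimal rendering (the function and buffer names are illustrative):

```python
from collections import deque

def maybe_transfer(upper: deque, lower: deque, threshold: int) -> bool:
    """Move one data unit from the upper sublayer buffer to the lower
    sublayer buffer, but only if the lower buffer's occupancy is
    below `threshold`. Returns True if a unit was moved."""
    if upper and len(lower) < threshold:
        lower.append(upper.popleft())
        return True
    return False
```

In the patent's setting this check would run on the two trigger events the abstract lists: a new unit arriving at an empty upper buffer, or a unit departing the lower buffer.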

Patent number: 11336582
Abstract: Various example embodiments for supporting packet scheduling in packet networks are presented. Various example embodiments for supporting packet scheduling in packet networks may be configured to support scheduling-as-a-service. Various example embodiments for supporting packet scheduling in packet networks based on scheduling-as-a-service may be configured to support a virtualized packet scheduler which may be provided as a service over a general-purpose hardware platform, may be instantiated in customer hardware, or the like, as well as various combinations thereof. Various example embodiments for supporting packet scheduling in packet networks may be configured to support scheduling of packets of packet queues based on association of transmission credits with timeslots of a periodic service sequence used to provide service to the packet queues.
Type: Grant
Filed: December 28, 2020
Date of Patent: May 17, 2022
Assignee: Nokia Solutions and Networks Oy
Inventors: Andrea Francini, Larry Hsiao Chang, Sameerkumar Sharma
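The last sentence of the abstract associates transmission credits with timeslots of a periodic service sequence. One way that association could work, sketched with made-up names and a fixed per-slot credit budget (the patent may compute credits differently): in its slot, a queue sends packets only while their sizes fit the remaining credit.

```python
def serve_frame(pss, queues, credits_per_slot):
    """One pass over a periodic service sequence. Each timeslot is
    owned by a queue id and carries a transmission-credit budget;
    the owning queue drains packets (represented by their sizes)
    that fit within the remaining credit."""
    sent = []
    for q in pss:
        credit = credits_per_slot
        while queues.get(q) and queues[q][0] <= credit:
            pkt = queues[q].pop(0)
            credit -= pkt
            sent.append((q, pkt))
    return sent
```

Note how a packet larger than a slot's credit budget (queue B below) simply waits; in a real scheduler credits would typically carry over or slots would be sized to the MTU.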

Patent number: 11240169
Abstract: Various example embodiments relate generally to supporting queuing of packets in a communication network. Various example embodiments for supporting queuing of packets in a communication network may be configured to support queueing of packets based on a packet queuing memory space including a hash entry space configured to maintain a set of H hash entries and a packet queue space configured to maintain a set of Q packet queues, wherein H is greater than Q. Various example embodiments for supporting queuing of packets in a communication network may be configured to support queueing of packets in a manner for handling packet events (e.g., packet arrival events, packet departure events, or the like) while preventing or mitigating queue collisions of hash entries (where a queue collision occurs when multiple hash entries, and the respective network flows of those hash entries, are associated with a single packet queue).
Type: Grant
Filed: July 31, 2018
Date of Patent: February 1, 2022
Assignee: Nokia Solutions and Networks Oy
Inventors: Andrea Francini, Sameerkumar Sharma
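The core idea here, H hash entries over a smaller pool of Q packet queues, can be illustrated by binding a queue to a hash entry on first arrival and releasing it when the entry drains. Everything below (class, toy hash, drop-on-exhaustion policy) is an illustrative assumption, not the patented mechanism:

```python
class HashedQueues:
    """H hash entries dynamically bound to a pool of Q packet queues
    (H > Q): a queue attaches to a hash entry on first packet arrival
    and is released once the entry drains, so collisions require two
    active flows to hash to the same one of the H entries."""

    def __init__(self, H, Q):
        self.H = H
        self.entry_to_queue = {}           # hash index -> queue index
        self.free = list(range(Q))         # currently unbound queues
        self.queues = [[] for _ in range(Q)]

    def _hash(self, flow_id):
        return sum(flow_id.encode()) % self.H   # toy hash, for clarity

    def enqueue(self, flow_id, pkt):
        h = self._hash(flow_id)
        if h not in self.entry_to_queue:
            if not self.free:
                return False               # all Q queues busy: drop
            self.entry_to_queue[h] = self.free.pop()
        self.queues[self.entry_to_queue[h]].append(pkt)
        return True

    def dequeue(self, flow_id):
        h = self._hash(flow_id)
        q = self.entry_to_queue.get(h)
        if q is None:
            return None
        pkt = self.queues[q].pop(0)
        if not self.queues[q]:             # entry drained:
            del self.entry_to_queue[h]     # release its queue
            self.free.append(q)
        return pkt
```

Releasing queues on drain is what lets a small Q serve a much larger H: only concurrently backlogged entries compete for queues.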

Patent number: 11212687
Abstract: The method includes transmitting request messages to at least one first network node, the request messages each including at least a sampling time-window and a network slice identifier, the sampling time-window defining a duration of time, the network slice identifier identifying a designated network slice within the communication network, receiving packet reports from the at least one first network node, the packet reports including latency information for packets that are processed by the at least one first network node during the sampling time-window for the designated network slice, and controlling the operation of the communication network based on the latency information.
Type: Grant
Filed: February 25, 2018
Date of Patent: December 28, 2021
Assignee: Nokia Solutions and Networks Oy
Inventors: Sameerkumar Sharma, Edward Grinshpun, Andrea Francini
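The request/report loop in this claim reduces to: send (window, slice id) to each node, gather per-packet latency reports, aggregate, act. A thin sketch of the aggregation step, with all names and the callable-per-node stand-in for the report interface being illustrative:

```python
def collect_slice_latency(nodes, slice_id, window):
    """Ask each node for the latencies (ms) of packets it processed
    for `slice_id` during the sampling window, then aggregate.
    `nodes` maps a node id to a callable standing in for that node's
    packet-report interface."""
    samples = []
    for node_id, report in nodes.items():
        samples += report(slice_id, window)
    if not samples:
        return None
    return {"max_ms": max(samples),
            "mean_ms": sum(samples) / len(samples)}
```

The controller would compare the aggregate against the slice's latency target and reconfigure the network when it is exceeded.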

Publication number: 20210250301
Abstract: Various example embodiments for supporting scalable deterministic services in packet networks are presented. Various example embodiments for supporting scalable deterministic services in packet networks may be configured to support delay guarantees (e.g., finite end-to-end delay bounds) for a class of traffic flows referred to as guaranteed-delay (GD) traffic flows. Various example embodiments for supporting scalable deterministic services in packet networks may be configured to support delay guarantees for GD traffic flows of a network based on a queuing arrangement that is based on network outputs of the network, a packet scheduling method that is configured to support scheduling of packets of the GD traffic flows, and a service rate allocation rule configured to support delay guarantees for the GD traffic flows.
Type: Application
Filed: February 12, 2020
Publication date: August 12, 2021
Inventors: Andrea Francini, Raymond Miller, Sameerkumar Sharma

Publication number: 20210135988
Abstract: A network node configured to transmit packets to a destination node in a packet network, includes at least one processor and at least one memory including computer program code. The at least one memory and the computer program code are configured to, with the at least one processor, cause the network node to: assemble at least a first packet including a plurality of data units, each of the plurality of data units being grouped into one of a connection group, a network function group or an application group; and transmit the first packet to the destination node.
Type: Application
Filed: August 21, 2020
Publication date: May 6, 2021
Applicant: Nokia Solutions and Networks Oy
Inventors: Bilgehan Erman, Andrea Francini, Edward Grinshpun, Raymond Miller

Publication number: 20210112006
Abstract: Various example embodiments for supporting queue management in communication systems are presented. Various example embodiments for supporting queue management in communication systems may be configured to support application-based queue management. Various example embodiments for supporting application-based queue management may be configured to support application-based congestion control. Various example embodiments for supporting application-based congestion control may be configured to support application-based congestion control based on use of trigger templates.
Type: Application
Filed: October 11, 2019
Publication date: April 15, 2021
Inventors: Andrea Francini, Koen De Schepper, Olivier Tilmans, Sameerkumar Sharma

Publication number: 20200389804
Abstract: The method includes transmitting request messages to at least one first network node, the request messages each including at least a sampling time-window and a network slice identifier, the sampling time-window defining a duration of time, the network slice identifier identifying a designated network slice within the communication network, receiving packet reports from the at least one first network node, the packet reports including latency information for packets that are processed by the at least one first network node during the sampling time-window for the designated network slice, and controlling the operation of the communication network based on the latency information.
Type: Application
Filed: February 25, 2018
Publication date: December 10, 2020
Applicant: Nokia Solutions and Networks Oy
Inventors: Sameerkumar Sharma, Edward Grinshpun, Andrea Francini

Publication number: 20200260317
Abstract: Systems, methods, apparatuses, and computer program products for packet latency reduction in mobile radio access networks. One method may include, when a buffer of a first sublayer of a wireless access link is empty and there is a new data unit in the first sublayer or when the first sublayer buffer is not empty and a data unit leaves a second sublayer buffer, comparing the number of data units currently stored in the second sublayer buffer with a queue length threshold that defines a total amount of space in the second sublayer buffer. When the number of data units currently stored in the second sublayer buffer is less than the queue length threshold, the method may also include transferring the data unit from the first sublayer to the second sublayer.
Type: Application
Filed: September 12, 2017
Publication date: August 13, 2020
Inventors: Andrea Francini, Rajeev Kumar, Sameerkumar Sharma

Publication number: 20200044980
Abstract: Various example embodiments relate generally to supporting queuing of packets in a communication network. Various example embodiments for supporting queuing of packets in a communication network may be configured to support queueing of packets based on a packet queuing memory space including a hash entry space configured to maintain a set of H hash entries and a packet queue space configured to maintain a set of Q packet queues, wherein H is greater than Q. Various example embodiments for supporting queuing of packets in a communication network may be configured to support queueing of packets in a manner for handling packet events (e.g., packet arrival events, packet departure events, or the like) while preventing or mitigating queue collisions of hash entries (where a queue collision occurs when multiple hash entries, and the respective network flows of those hash entries, are associated with a single packet queue).
Type: Application
Filed: July 31, 2018
Publication date: February 6, 2020
Inventors: Andrea Francini, Sameerkumar Sharma

Patent number: 10038639
Abstract: The present disclosure generally discloses a congestion control capability for use in communication systems (e.g., to provide congestion control over wireless links in wireless systems, over wireline links in wireline systems, and so forth). The congestion control capability may be configured to provide congestion control for a transport flow of a transport connection, sent from a transport flow sender to a transport flow receiver, based on flow control associated with the transport flow. The transport flow may traverse a flow queue of a link buffer of a link endpoint. The link endpoint may provide to the transport flow sender, via an off-band signaling channel, an indication of the saturation state of the flow queue of the transport flow. The transport flow sender may control transmission of packets of the transport flow based on the indication of the saturation state of the flow queue of the transport flow.
Type: Grant
Filed: September 16, 2016
Date of Patent: July 31, 2018
Assignees: Alcatel Lucent, Nokia of America Corporation
Inventors: Andrea Francini, Stepan Kucera, Sameerkumar Sharma, Joseph D. Beshay
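What distinguishes this scheme from loss- or delay-based congestion control is the explicit, off-band saturation signal from the link endpoint's flow queue. A minimal sender-side sketch under assumed names and a simple window cap (the patent does not specify this state machine):

```python
class SaturationPacedSender:
    """Toy transport sender that gates transmission on the saturation
    state of its flow queue at the link endpoint, as reported over an
    off-band channel, instead of inferring congestion from loss."""

    def __init__(self, window):
        self.window = window       # max packets in flight when unsaturated
        self.saturated = False
        self.in_flight = 0

    def on_signal(self, saturated: bool):
        # Off-band signal from the link endpoint's flow queue.
        self.saturated = saturated

    def can_send(self) -> bool:
        return not self.saturated and self.in_flight < self.window

    def send(self):
        assert self.can_send()
        self.in_flight += 1

    def on_ack(self):
        self.in_flight -= 1
```

The saturation bit acts as a hard gate on top of the window: even with window space available, a saturated flow queue silences the sender until the signal clears.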

Publication number: 20180191628
Abstract: The present disclosure generally discloses a scheduling granularity capability. The scheduling granularity capability is configured to improve scheduling granularity in a wireless communication system supporting transport of application flows via radio bearers. The scheduling granularity capability may be configured to support improved scheduling granularity by controlling scheduling at various levels of granularity, such as at the bearer level (e.g., for scheduling bearers with respect to each other), at the application flow level (e.g., for scheduling the application flow of a bearer when the bearer includes a single application flow, for scheduling application flows of a bearer with respect to each other when the bearer includes multiple application flows, or the like), or the like, as well as various combinations thereof.
Type: Application
Filed: December 31, 2016
Publication date: July 5, 2018
Applicants: Alcatel-Lucent USA Inc., Alcatel-Lucent Canada Inc.
Inventors: Andrea Francini, Charles R. Payette, Kamakshi Sridhar, Jonathan Segel, Edward Grinshpun

Publication number: 20180159965
Abstract: The present disclosure generally discloses a networked transport layer socket capability. The networked transport layer socket capability, for a transport layer connection of a communication device attached to a network access device, moves the transport layer connection endpoint of the transport layer connection of the communication device (which also may be referred to as a client transport layer socket of the transport layer connection of the communication device) from the communication device into the network access device.
Type: Application
Filed: December 2, 2016
Publication date: June 7, 2018
Applicant: Alcatel Lucent
Inventors: Andrea Francini, Joseph D. Beshay

Publication number: 20180083878
Abstract: The present disclosure generally discloses a congestion control capability for use in communication systems (e.g., to provide congestion control over wireless links in wireless systems, over wireline links in wireline systems, and so forth). The congestion control capability may be configured to provide congestion control for a transport flow of a transport connection, sent from a transport flow sender to a transport flow receiver, based on flow control associated with the transport flow. The transport flow may traverse a flow queue of a link buffer of a link endpoint. The link endpoint may provide to the transport flow sender, via an off-band signaling channel, an indication of the saturation state of the flow queue of the transport flow. The transport flow sender may control transmission of packets of the transport flow based on the indication of the saturation state of the flow queue of the transport flow.
Type: Application
Filed: September 16, 2016
Publication date: March 22, 2018
Applicants: Alcatel-Lucent USA Inc., Alcatel-Lucent Ireland Ltd.
Inventors: Andrea Francini, Stepan Kucera, Sameerkumar Sharma, Joseph D. Beshay