Patents by Inventor Omar Cardona

Omar Cardona has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240111573
    Abstract: Systems and methods for providing cross-partition preemption analysis and prevention. Computing devices typically include a main central processing unit (CPU) with multiple cores to execute instructions independently, cooperatively, or in other suitable manners. In some examples, one or more cores are partitioned and dedicated to a particular application, where exclusive access to the cores in the partition is intended for running processes of the application. In some examples, “noise” can be introduced into a partition, where preemptions associated with other processes interrupt execution of the particular application. A preemption diagnostics system and method identify sources of cross-partition preemption events and prevent them from running in a dedicated CPU partition. Thus, the particular application has dedicated use of the cores in the partition. As a result, the latency of the application is reduced, and bounded latency corresponding to a service level agreement can be achieved.
    Type: Application
    Filed: September 29, 2022
    Publication date: April 4, 2024
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Omar CARDONA, Matthew WOOLMAN, Giovanni PITTALIS, Dmitry MALLOY, Christopher Peter KLEYNHANS
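
As an illustration of the mechanism summarized in 20240111573, here is a minimal Python sketch that assumes a made-up scheduling-event record format and invented core and PID values; it is not the patented implementation, only the flag-and-re-affinitize idea.

```python
# Minimal sketch (hypothetical data model, not the patented implementation):
# scan scheduling events for a dedicated core partition and flag any process
# other than the partition owner as a cross-partition preemption source.

from collections import Counter

DEDICATED_CORES = {2, 3}          # cores partitioned for the latency-sensitive app
OWNER_PID = 4711                  # the application that owns the partition

def find_preemption_sources(sched_events):
    """sched_events: iterable of (core_id, pid, runtime_us) records."""
    offenders = Counter()
    for core, pid, runtime_us in sched_events:
        if core in DEDICATED_CORES and pid != OWNER_PID:
            offenders[pid] += runtime_us   # time stolen from the partition
    return offenders

def prevent(offenders, allowed_cores):
    """Re-affinitize offending processes off the dedicated partition."""
    for pid in offenders:
        # A real system would change the process's CPU affinity here;
        # printing stands in for that action in this sketch.
        print(f"pin pid {pid} to cores {sorted(allowed_cores)}")

events = [(2, 4711, 900), (2, 88, 40), (3, 91, 15), (0, 88, 500)]
noise = find_preemption_sources(events)
prevent(noise, allowed_cores={0, 1})
```
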
  • Publication number: 20240094992
    Abstract: Examples of the present disclosure describe systems and methods for the non-disruptive servicing of components of a user mode process. In examples, a user mode process comprises multiple components, each encapsulating a distinct piece of functionality. A replacement component is loaded and initialized. The replacement component is validated to ensure that the required dependencies of the replacement component are satisfied by the other components of the user mode process. The component to be serviced and the components having dependencies on the component to be serviced are suspended to enable a snapshot of the runtime state of the component to be serviced to be captured. The runtime state is copied to the replacement component and the components having dependencies on the component to be serviced are updated to reference the replacement component. The replacement component is executed and the suspended components are resumed. The component to be serviced is unloaded.
    Type: Application
    Filed: September 16, 2022
    Publication date: March 21, 2024
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Robert Tyler RETZLAFF, Omar CARDONA, Jie ZHOU, Dmitry MALLOY
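
The servicing sequence in 20240094992 (validate, suspend, snapshot, re-point, resume, unload) can be sketched with a toy in-process component registry; every class, field, and component name below is invented for illustration.

```python
# Minimal sketch of the servicing sequence described above, using a toy
# in-process component registry (all names here are illustrative).

class Component:
    def __init__(self, name, provides, requires):
        self.name, self.provides, self.requires = name, provides, requires
        self.state, self.suspended = {}, False

def dependents_of(target, components):
    return [c for c in components if target.provides & c.requires]

def service(target, replacement, components):
    # 1. Validate: the replacement must satisfy every dependency on the target.
    assert target.provides <= replacement.provides, "dependency check failed"
    # 2. Suspend the target and everything that depends on it.
    affected = [target] + dependents_of(target, components)
    for c in affected:
        c.suspended = True
    # 3. Snapshot runtime state and copy it into the replacement.
    replacement.state = dict(target.state)
    # 4. Re-point dependents at the replacement, resume, unload the old one.
    components[components.index(target)] = replacement
    for c in components:
        c.suspended = False
    return components

old = Component("net_v1", {"net"}, set())
old.state = {"conns": 12}
new = Component("net_v2", {"net"}, set())
app = Component("app", set(), {"net"})
print([c.name for c in service(old, new, [old, app])])  # ['net_v2', 'app']
```
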
  • Publication number: 20240086215
    Abstract: Examples of the present disclosure describe systems and methods for non-disruptively hibernating and resuming a guest environment using a network virtual service client. In examples, when a guest environment is hibernated, a network virtual service client provides an instruction to a virtual network interface card to set the device power state of the virtual network interface card to a low power state. The network virtual service client disables the communication channels used by the network virtual service client and saves the operating state of the virtual network interface card. When the guest environment is resumed, the network virtual service client provides an instruction to set the device power state of the virtual network interface card to a full power state. The network virtual service client reenables the communication channels used by the network virtual service client and restores the operating state of the virtual network interface card.
    Type: Application
    Filed: September 12, 2022
    Publication date: March 14, 2024
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Jie ZHOU, Dmitry MALLOY, Khoa A. TO, Omar CARDONA
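
A toy rendition of the hibernate/resume sequence in 20240086215 follows; the class and method names are hypothetical stand-ins, not actual Hyper-V or netvsc interfaces.

```python
# Toy sketch of the hibernate/resume sequence described above; the class and
# method names are invented for illustration, not real virtualization APIs.

class VirtualNic:
    def __init__(self):
        self.power = "D0"                    # full power
        self.channels_open = True
        self.operating_state = {"mac": "00:15:5d:01:02:03", "mtu": 1500}

class NetVsc:
    """Network virtual service client managing one virtual NIC."""
    def __init__(self, vnic):
        self.vnic, self.saved_state = vnic, None

    def hibernate(self):
        self.vnic.power = "D3"               # low-power device state
        self.vnic.channels_open = False      # disable the communication channels
        self.saved_state = dict(self.vnic.operating_state)

    def resume(self):
        self.vnic.power = "D0"               # back to full power
        self.vnic.channels_open = True       # re-enable the communication channels
        self.vnic.operating_state = dict(self.saved_state)

vsc = NetVsc(VirtualNic())
vsc.hibernate()
vsc.resume()
print(vsc.vnic.power, vsc.vnic.operating_state["mtu"])   # D0 1500
```
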
  • Publication number: 20240004679
    Abstract: Examples of the present disclosure describe systems and methods for multiplexing driver data paths. In examples, an application in a virtual machine provides data packets to a driver multiplexer implemented in user space of the virtual machine. The driver multiplexer determines whether a virtual function is available for transmitting the data packets. If the virtual function is available, the driver multiplexer provides the data packets to the virtual function in user space of the virtual machine. The virtual function provides the data packets to a physical network interface card of the device hosting the virtual machine. If the virtual function is unavailable, the driver multiplexer uses a raw socket driver to provide the data packets to a raw socket in kernel space of the virtual machine. The raw socket provides the data packets to a network virtual service client, which provides the data packets to the physical network interface card.
    Type: Application
    Filed: June 29, 2022
    Publication date: January 4, 2024
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Omar CARDONA, Narcisa Ana Maria VASILE
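
The fallback logic in 20240004679 amounts to choosing between an accelerated and a synthetic data path per send; a minimal sketch follows, with invented class names standing in for the real drivers.

```python
# Illustrative sketch of the data-path multiplexing described above: prefer a
# user-space virtual function (VF) path, fall back to a kernel raw socket.
# The class names are hypothetical stand-ins for the real drivers.

class VirtualFunctionPath:
    def __init__(self, available=True):
        self.available = available
    def send(self, pkt):
        return f"VF -> physical NIC: {len(pkt)} bytes"

class RawSocketPath:
    def send(self, pkt):
        return f"raw socket -> network virtual service client -> physical NIC: {len(pkt)} bytes"

class DriverMultiplexer:
    def __init__(self, vf, fallback):
        self.vf, self.fallback = vf, fallback
    def send(self, pkt):
        # Use the accelerated VF path when present (e.g., SR-IOV attached),
        # otherwise route through the synthetic kernel path.
        path = self.vf if self.vf and self.vf.available else self.fallback
        return path.send(pkt)

mux = DriverMultiplexer(VirtualFunctionPath(available=False), RawSocketPath())
print(mux.send(b"\x00" * 128))   # falls back to the raw socket path
```
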
  • Publication number: 20240007412
    Abstract: Examples of the present disclosure describe systems and methods for transmit side scaling. In examples, transmit side scaling configuration information is received by a host operating system from a guest operating system, where the configuration information specifies the manner in which data packets transmitted by the host operating system are to be distributed to the transmit queues of a network interface card of the host operating system. The transmit side scaling configuration information is implemented in an outbound transmission table of the host operating system. When a data packet is received by the host operating system, the host operating system evaluates the data packet using the outbound transmission table. Based on the evaluation, the data packet is transmitted using a specified transmit queue of the network interface card.
    Type: Application
    Filed: June 29, 2022
    Publication date: January 4, 2024
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Khoa A. TO, Omar CARDONA, Dmitry MALLOY
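
The outbound transmission table in 20240007412 can be pictured as an indirection table keyed by a flow hash. The sketch below uses CRC32 purely as a stand-in for whatever hash the real implementation uses, and the field names and table contents are illustrative.

```python
# Minimal sketch of an outbound transmission table: the guest supplies an
# indirection table, and the host hashes each outgoing packet's flow to pick
# a transmit queue. Field names and the hash choice are illustrative only.

import zlib

# Configuration received from the guest: hash bucket -> NIC transmit queue.
indirection_table = [0, 1, 2, 3, 0, 1, 2, 3]   # 8 buckets over 4 queues

def select_tx_queue(src_ip, dst_ip, src_port, dst_port):
    flow = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    bucket = zlib.crc32(flow) % len(indirection_table)   # stand-in for the real hash
    return indirection_table[bucket]

queue = select_tx_queue("10.0.0.5", "10.0.0.9", 49152, 443)
print(f"transmit on queue {queue}")
```
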
  • Publication number: 20240004772
    Abstract: Systems and methods for determining and reporting actual utilization of a core of a central processing unit (CPU) of a host. Prior to the aspects described in the present disclosure, a poll that queries the endpoints of a process for work appears to the host's operating system as busy work (e.g., taking full use of the core for the poll duration). However, only a percentage of the poll duration is used to process a task of the process; the remaining duration is spent querying the endpoints (idle time), during which the core is not performing a task. Accordingly, a core utilization reporting system and method automatically detects the processing time of the tasks of a process and determines the actual CPU utilization of the core based on the percentage of time the core spends busy polling (doing effectively no work) versus doing actual work (processing a task).
    Type: Application
    Filed: June 30, 2022
    Publication date: January 4, 2024
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Khoa A. TO, Omar CARDONA, Dmitry MALLOY, Narcisa Ana Maria VASILE, Robert Tyler RETZLAFF
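
The metric described in 20240004772 reduces to counting only task-processing time as busy time; a short sketch with made-up numbers follows.

```python
# Sketch of the reported metric: actual utilization counts only the time a
# polling core spends processing tasks, not the time spent spinning on empty
# endpoint queries. The numbers below are invented for illustration.

def actual_utilization(task_time_us, idle_poll_time_us):
    busy = task_time_us + idle_poll_time_us   # what a naive view reports as 100%
    return task_time_us / busy if busy else 0.0

# A poll loop that ran for 1 ms but only found work for 150 us of it:
print(f"{actual_utilization(150, 850):.0%} actual vs 100% apparent")   # 15% actual
```
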
  • Publication number: 20230409458
    Abstract: Techniques for aggregating execution metrics during virtualization are provided. In some embodiments, aggregated execution metrics (e.g., average execution time) are generated and stored for different types of supported virtualization service operations executed by a virtualization service provider (VSP) in a virtualization stack handling requests from a virtualization service client (VSC) running in a computer system emulator. For example, execution calls to the VSP are intercepted, and execution metrics for a triggered virtualization service operation are generated and aggregated into an aggregation entry that represents aggregated performance (e.g., average execution time) of all instances of the virtualization service operation that were completed during an interval (e.g., 1 hour). Aggregated execution metrics may be stored for any number of historical intervals.
    Type: Application
    Filed: June 16, 2022
    Publication date: December 21, 2023
    Inventors: Satish GOSWAMI, Harish SRINIVASAN, Omar CARDONA, Alexander MALYSH, Chenyan LIU, Tom XANTHOS
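
One way to picture the per-interval aggregation in 20230409458 is to accumulate a count and total time per operation type and report averages; the operation name in the sketch below is hypothetical.

```python
# Illustrative aggregation of virtualization service operation timings into
# per-interval entries (average execution time per operation type).

from collections import defaultdict

class IntervalAggregator:
    def __init__(self):
        self.totals = defaultdict(lambda: [0, 0.0])   # op -> [count, total_ms]

    def record(self, op_type, elapsed_ms):
        entry = self.totals[op_type]
        entry[0] += 1
        entry[1] += elapsed_ms

    def snapshot(self):
        return {op: {"count": c, "avg_ms": t / c} for op, (c, t) in self.totals.items()}

agg = IntervalAggregator()
for ms in (1.2, 0.8, 2.0):
    agg.record("MapGpaPages", ms)       # hypothetical operation name
print(agg.snapshot())                   # {'MapGpaPages': {'count': 3, 'avg_ms': ~1.33}}
```
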
  • Publication number: 20230409455
    Abstract: Techniques are provided for aggregating execution metrics for virtualization service operations executed by a virtualization service provider on a host computer while handling requests from a virtualization service client running in a computer system emulator on the host computer. A dual list structure may be used to aggregate execution metrics. A first list may be populated with entries that represent aggregated execution metrics, aggregated over a current interval, for different types of supported virtualization service operations. At the end of the current interval, the entries in the first list may be pushed into a second list of entries that represent historical aggregated execution metrics for historical intervals, a new interval may be initialized, and the first list may be populated with entries representing aggregated execution metrics for the new interval. Managing aggregated execution metrics using a dual list structure facilitates more efficient storage, retrieval, and aggregation.
    Type: Application
    Filed: June 16, 2022
    Publication date: December 21, 2023
    Inventors: Satish GOSWAMI, Harish SRINIVASAN, Omar CARDONA, Alexander MALYSH, Chenyan LIU, Tom XANTHOS
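
The dual-list structure in 20230409455 can be sketched as a current-interval map plus a bounded history; the interval handling, history cap, and operation name below are illustrative choices, not the patented design.

```python
# Sketch of the dual-list idea described above: one list of per-operation
# aggregates for the current interval, pushed into a bounded history list
# when the interval rolls over.

from collections import deque

class DualListMetrics:
    def __init__(self, history_cap=24):
        self.current = {}                         # op_type -> (count, total_ms)
        self.history = deque(maxlen=history_cap)  # finished historical intervals

    def record(self, op_type, elapsed_ms):
        count, total = self.current.get(op_type, (0, 0.0))
        self.current[op_type] = (count + 1, total + elapsed_ms)

    def roll_interval(self):
        self.history.append(self.current)         # push aggregates into history
        self.current = {}                         # start a fresh interval

m = DualListMetrics()
m.record("CreatePort", 0.4)   # hypothetical operation name
m.record("CreatePort", 0.6)
m.roll_interval()
print(len(m.history), m.history[0])   # 1 {'CreatePort': (2, 1.0)}
```
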
  • Publication number: 20230409361
    Abstract: Techniques for aggregating execution metrics for virtualization service operations are provided. In an example implementation, a command configuring a computer system emulator on a host computer triggers execution of a plurality of virtualization service operations by a virtualization service provider running in a virtualization stack on the host machine. In-memory processing is used to aggregate execution metrics for each type of supported virtualization service operation during a current interval, and at the end of each interval, the execution metrics are pushed to a structure in the memory storing historical aggregated execution metrics. Aggregating and storing execution metrics in-memory enables faster lookup, faster aggregation, and better CPU utilization. Since aggregated metrics are effectively compressed, diagnostic information about a variety of different types of virtualization service operations may be stored and used to diagnose and repair underperforming components.
    Type: Application
    Filed: June 16, 2022
    Publication date: December 21, 2023
    Inventors: Satish GOSWAMI, Harish SRINIVASAN, Omar CARDONA, Alexander MALYSH, Chenyan LIU, Tom XANTHOS
  • Patent number: 11794116
    Abstract: Session participation in online content streams or activities like multiplayer games is enhanced through management of session tracking and automated queuing of players via a central system between host/streamer client device and guest player client devices. Spectators viewing a content stream or waiting to join a multiplayer activity over a network via a game/streaming service request to be placed in a queue to participate in a session of the content stream or activity as guest players with the host/streaming user. Sessions are tracked to determine start and end events. Sets of prior guest players are removed from sessions when the sessions end, and sets of queued spectators are automatically added to the start of a new session of the content stream or activity as guest players. Queuing may be automatically prioritized for users based on user characteristics, and guest players removed at the end of sessions may be automatically re-queued.
    Type: Grant
    Filed: April 26, 2021
    Date of Patent: October 24, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ruben Omar Cardona Cruz, Keith R. Kline, Warren Alpert
  • Patent number: 11691085
    Abstract: Session participation in online content streams or activities like multiplayer games is enhanced through management of session tracking and automated queuing of players via a central system between host/streamer client device and guest player client devices. Spectators viewing a content stream or waiting to join a multiplayer activity over a network via a game/streaming service request to be placed in a queue to participate in a session of the content stream or activity as guest players with the host/streaming user. Sessions are tracked to determine start and end events. Sets of prior guest players are removed from sessions when the sessions end, and sets of queued spectators are automatically added to the start of a new session of the content stream or activity as guest players. Queuing may be automatically prioritized for users based on user characteristics, and guest players removed at the end of sessions may be automatically re-queued.
    Type: Grant
    Filed: April 26, 2021
    Date of Patent: July 4, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ruben Omar Cardona Cruz, Keith R. Kline, Warren Alpert
  • Publication number: 20220391348
    Abstract: A computer system that includes at least one host device comprising at least one processor. The at least one processor is configured to implement, in a host operating system (OS) space, a teamed network interface card (NIC) software program that provides a unified interface to host OS space upper layer protocols including at least a remote direct memory access (RDMA) protocol and an Ethernet protocol. The teamed NIC software program provides multiplexing for at least two data pathways. The at least two data pathways include an RDMA data pathway that transmits communications to and from an RDMA interface of a physical NIC, and an Ethernet data pathway that transmits communications to and from an Ethernet interface of the physical NIC through a virtual switch that is implemented in a host user space and a virtual NIC that is implemented in the host OS space.
    Type: Application
    Filed: June 4, 2021
    Publication date: December 8, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventor: Omar CARDONA
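
The teamed-NIC multiplexing in 20220391348 routes traffic by upper-layer protocol; a minimal sketch follows, with invented path callables standing in for the RDMA and Ethernet pathways.

```python
# Sketch of the two-path multiplexing described above: the teamed NIC exposes
# one interface and steers RDMA traffic to the physical NIC's RDMA interface
# while Ethernet traffic goes through the virtual switch and virtual NIC.
# All names are illustrative.

class TeamedNic:
    def __init__(self, rdma_path, ethernet_path):
        self.paths = {"rdma": rdma_path, "ethernet": ethernet_path}

    def send(self, payload, protocol):
        # Single upper-layer interface, per-protocol pathway underneath.
        return self.paths[protocol](payload)

rdma_path = lambda p: f"RDMA interface of physical NIC <- {len(p)} bytes (kernel bypass)"
eth_path = lambda p: f"vNIC -> virtual switch -> Ethernet interface <- {len(p)} bytes"

team = TeamedNic(rdma_path, eth_path)
print(team.send(b"x" * 4096, "rdma"))
print(team.send(b"y" * 1500, "ethernet"))
```
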
  • Publication number: 20220291875
    Abstract: Examples described herein generally relate to hosting virtual memory backed kernel isolated containers. A server includes at least one physical processor and at least one physical computer memory addressable via physical memory addresses. The at least one physical computer memory stores executable code configured to provide at least one host including a kernel and at least one kernel isolated container within the at least one host. The host allocates virtual memory having virtual memory addresses to a respective container of the at least one kernel isolated container. The host pins a subset of the virtual memory addresses to a subset of the physical memory addresses. The host performs a direct memory access operation or device memory-mapped input-output operation of the respective container on the subset of the physical memory addresses. At least part of the physical computer memory that is not pinned is oversubscribed.
    Type: Application
    Filed: August 25, 2020
    Publication date: September 15, 2022
    Inventors: Gerardo DIAZ-CUELLAR, Omar CARDONA, Jacob Kappeler OSHINS, John STARKS, Craig Daniel WILHITE
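
A toy illustration of the pinning idea in 20220291875: only the pages a container uses for DMA or device MMIO get fixed virtual-to-physical mappings, while the rest remain subject to oversubscription. The page numbers and API shape below are invented.

```python
# Toy illustration of pinning a subset of virtual memory for direct memory
# access; everything here is a simplified stand-in, not a hypervisor API.

class HostMemory:
    def __init__(self):
        self.pinned = {}                 # guest virtual page -> host physical page

    def pin(self, virt_page, phys_page):
        self.pinned[virt_page] = phys_page    # excluded from paging/oversubscription

    def dma(self, virt_page):
        if virt_page not in self.pinned:
            raise RuntimeError("DMA target must be pinned first")
        return f"device writes directly to physical page {hex(self.pinned[virt_page])}"

mem = HostMemory()
mem.pin(virt_page=0x42, phys_page=0x9A00)
print(mem.dma(0x42))
```
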
  • Patent number: 11438252
    Abstract: A packet monitoring application instantiated on a server hosting a virtualized network stack is utilized to track data packet propagations and drops at each component within the network stack to reduce the amount of time needed to identify a root cause for latency issues. The packet monitoring application can be selectively enabled or disabled by an administrator. Components within the virtualized network stack report packet drops and successful packet propagations to the packet monitoring application, which can filter the packets based on input parameters. Thus, a user can select at what level of granularity to filter packets within the virtualized network stack while being able to assess each packet's traversal through each component within the network stack. The packet monitoring application can also perform post-processing on the filtered data packets to determine latency among components or sections of the virtualized network stack.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: September 6, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Alexander Malysh, Thomas Edward Molenhouse, Omar Cardona, Kamran Reypour, Gregory Cusanza
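
The per-component tracking in 11438252 can be sketched as a stream of propagation/drop events that post-processing filters and times; the component names and timestamps below are fabricated for illustration.

```python
# Sketch of per-component packet tracking: components report propagation or
# drop events for each packet, and post-processing filters by packet and
# computes the latency between adjacent components. All data is invented.

events = [
    # (packet_id, component, verdict, timestamp_us)
    (7, "vmswitch",  "propagated", 100),
    (7, "firewall",  "propagated", 260),
    (7, "vf_driver", "dropped",    275),
    (8, "vmswitch",  "propagated", 300),
]

def trace(packet_id):
    hops = [e for e in events if e[0] == packet_id]
    for (_, comp, verdict, ts), nxt in zip(hops, hops[1:] + [None]):
        gap = f" (+{nxt[3] - ts} us)" if nxt else ""
        print(f"{comp}: {verdict}{gap}")

trace(7)
# vmswitch: propagated (+160 us)
# firewall: propagated (+15 us)
# vf_driver: dropped
```
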
  • Publication number: 20220276886
    Abstract: Examples described herein generally relate to a server for hosting process isolated containers within a virtual machine. The server includes at least one physical processor; at least one physical computer memory storing executable code for execution by the at least one physical processor, and a physical network interface controller, NIC. The executable code may be configured to provide a host virtual machine and at least one process isolated container within the host virtual machine. The physical NIC includes a physical NIC switch configured to distribute incoming data packets to a plurality of functions including a physical function and virtual functions. At least one of the virtual functions is assigned to an individual process isolated container within the virtual machine. The virtual function assigned to the individual process isolated container allows the physical NIC switch to distribute incoming data packets for the individual process isolated container at a hardware level.
    Type: Application
    Filed: August 25, 2020
    Publication date: September 1, 2022
    Inventors: Gerardo DIAZ-CUELLAR, Omar CARDONA, Dinesh Kumar GOVINDASAMY, Jason MESSER
  • Publication number: 20220272039
    Abstract: Examples described herein generally relate to hosting kernel isolated containers within a virtual machine. A server includes a physical processor and a physical computer memory storing executable code, the executable code providing a host virtual machine including a kernel and at least one kernel isolated container within the host virtual machine. The server includes a physical network interface controller, NIC, including a first physical NIC switch and a second physical NIC switch. The first physical NIC switch is configured to distribute incoming data packets to a first plurality of functions including a physical function and virtual functions. At least one of the virtual functions is assigned to the host virtual machine. The second physical NIC switch is configured to distribute the incoming data packets for the host virtual machine to a second plurality of virtual functions including a respective virtual function assigned to an individual kernel isolated container.
    Type: Application
    Filed: August 25, 2020
    Publication date: August 25, 2022
    Inventors: Omar CARDONA, Gerardo DIAZ-CUELLAR, Dinesh Kumar GOVINDASAMY
  • Patent number: 11283718
    Abstract: Embodiments of hybrid network processing load distribution in a computing device are disclosed herein. In one embodiment, a method includes receiving, at a main processor, an indication from the network interface controller to perform network processing operations for first and second packets in a queue of a virtual port of the network interface controller, and in response to receiving the indication, assigning multiple cores for performing the network processing operations for the first and second packets, respectively. The method also includes performing the network processing operations at the multiple cores to effect processing and transmission of the first and second packets to first and second applications, respectively, both the first and second applications executing in a virtual machine hosted on the computing device.
    Type: Grant
    Filed: December 17, 2019
    Date of Patent: March 22, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Dmitry Malloy, Alireza Dabagh, Gabriel Silva, Khoa To, Omar Cardona, Donald Stanwyck
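
The core-assignment idea in 11283718 can be pictured as spreading one virtual port's queue across several cores; the round-robin below is a simple stand-in for whatever assignment policy the patent actually claims.

```python
# Illustrative sketch of spreading network processing for one virtual port's
# queue across several cores instead of a single one; the core selection is
# a placeholder round-robin, and the core numbers are invented.

from itertools import cycle

class HybridDistributor:
    def __init__(self, cores):
        self.core_picker = cycle(cores)    # cores assigned for this queue

    def dispatch(self, packets):
        return [(pkt, next(self.core_picker)) for pkt in packets]

dist = HybridDistributor(cores=[4, 5])
for pkt, core in dist.dispatch(["pkt-1", "pkt-2", "pkt-3"]):
    print(f"{pkt}: processed on core {core}")
```
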
  • Patent number: 11121973
    Abstract: Various systems, processes, and products may be used to filter multicast messages in virtual environments. In one implementation, a multicast filtering address is received by a network adapter from at least one of a number of virtual machines. A priority of the multicast filtering address is determined and, based on the priority, the multicast filtering address is stored in either a multicast filtering store of the network adapter or a local filtering store of the at least one virtual machine.
    Type: Grant
    Filed: July 27, 2019
    Date of Patent: September 14, 2021
    Assignee: International Business Machines Corporation
    Inventors: Omar Cardona, James B. Cunningham, Baltazar De Leon, III, Matthew R. Ochs
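
The placement decision in 11121973 (adapter hardware store for high-priority multicast filter addresses, VM-local software store otherwise) can be sketched as below; the store capacity and priority values are invented for illustration.

```python
# Sketch of priority-based placement of multicast filtering addresses: the
# highest-priority addresses fill the adapter's limited hardware store, the
# rest fall back to the virtual machine's local filtering store.

HW_CAPACITY = 4   # illustrative adapter filter-store size

def place_filters(addresses_with_priority):
    hw_store, local_store = [], []
    for addr, priority in sorted(addresses_with_priority, key=lambda a: -a[1]):
        (hw_store if len(hw_store) < HW_CAPACITY else local_store).append(addr)
    return hw_store, local_store

filters = [("01:00:5e:00:00:01", 9), ("01:00:5e:00:00:02", 3),
           ("01:00:5e:00:00:03", 7), ("01:00:5e:00:00:04", 1),
           ("01:00:5e:00:00:05", 8)]
hw, local = place_filters(filters)
print("adapter store:", hw)
print("VM local store:", local)
```
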
  • Patent number: 11121972
    Abstract: Various systems, processes, and products may be used to filter multicast messages in virtual environments. In one implementation, a multicast filtering address is received by a network adapter. A frequency of use of the multicast filtering address is determined and, based on the frequency of use of the multicast filtering address, the multicast filtering address is stored in either a multicast filtering store of the network adapter or a local filtering store of a respective virtual machine.
    Type: Grant
    Filed: July 27, 2019
    Date of Patent: September 14, 2021
    Assignee: International Business Machines Corporation
    Inventors: Omar Cardona, James B. Cunningham, Baltazar De Leon, III, Matthew R. Ochs
  • Patent number: 11115332
    Abstract: Various systems, processes, and products may be used to filter multicast messages in virtual environments. In one implementation, a multicast filtering address is received by a network adapter from at least one of a number of virtual machines. An amount of filtering data is determined corresponding to the at least one virtual machine and, based on the amount of the filtering data corresponding to the at least one virtual machine, the multicast filtering address is stored in either a multicast filtering store of the network adapter or a local filtering store of the at least one virtual machine.
    Type: Grant
    Filed: July 27, 2019
    Date of Patent: September 7, 2021
    Assignee: International Business Machines Corporation
    Inventors: Omar Cardona, James B. Cunningham, Baltazar De Leon, III, Matthew R. Ochs