Patents Examined by Timothy A Mudrick
  • Patent number: 11880726
    Abstract: Fair queuing of request tasks spawned by requests to execute generative operations such as, for example, graph query language requests to execute a graph query language query, mutation, or subscription operations. Queuing techniques are used to prevent a heavy generative operation from dominating usage of computing resources of a host that executes many generative operations concurrently, including a mix of heavy and normal generative operations. Generative operations are analyzed and classified as heavy or normal as the request tasks they spawn are being executed. If a generative operation is classified as heavy, then subsequent request tasks spawned by the heavy generative operation are added to an overload queue, while request tasks spawned by concurrently executing normal generative operations are added to a main queue. For fairness, request tasks are polled from the main queue for execution at a greater frequency than request tasks in the overload queue.
    Type: Grant
    Filed: June 27, 2022
    Date of Patent: January 23, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Mehdi Ahmadizadeh, Richard Threlkeld, Nicholas Andrew Dejaco
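    Illustrative sketch: a minimal Python sketch of the two-queue idea in the abstract above, assuming a simple task-count threshold for classifying an operation as heavy and a fixed polling ratio; the names and thresholds are hypothetical, not the patented implementation.
        import collections

        class FairTaskScheduler:
            def __init__(self, main_polls_per_overload_poll=4, heavy_task_threshold=100):
                self.main = collections.deque()                  # tasks from normal operations
                self.overload = collections.deque()              # tasks from heavy operations
                self.task_counts = collections.defaultdict(int)  # request tasks spawned per operation
                self.heavy_ops = set()                           # operations classified as heavy
                self.threshold = heavy_task_threshold
                self.ratio = main_polls_per_overload_poll
                self._polls = 0

            def submit(self, op_id, task):
                # Classify an operation as heavy once it has spawned "too many" tasks;
                # its subsequent tasks are diverted to the overload queue.
                self.task_counts[op_id] += 1
                if self.task_counts[op_id] > self.threshold:
                    self.heavy_ops.add(op_id)
                target = self.overload if op_id in self.heavy_ops else self.main
                target.append((op_id, task))

            def poll(self):
                # The main queue is polled `ratio` times for every poll of the overload queue,
                # so normal operations keep getting serviced alongside a heavy one.
                self._polls += 1
                if self._polls % (self.ratio + 1) == 0:
                    first, second = self.overload, self.main
                else:
                    first, second = self.main, self.overload
                for q in (first, second):
                    if q:
                        return q.popleft()
                return None
    A worker loop would call submit() for each request task as it is spawned (tagged with the identifier of the operation that spawned it) and execute whatever poll() returns.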
  • Patent number: 11875194
    Abstract: A processor may receive user data associated with one or more locations of a user in an environment. The processor may receive edge computing data associated with utilization of edge computing resources by the user. The processor may analyze the edge computing data to associate a context with an edge computing resource need. The processor may analyze the user data to associate a context with a location of the user within the environment. The processor may determine a first location of the user in the environment at a first time. The processor may predict a first edge computing need of the user in the first location. The processor may determine an arrangement of one or more edge computing devices configured to meet the first edge computing need of the user at the first time.
    Type: Grant
    Filed: June 24, 2021
    Date of Patent: January 16, 2024
    Assignee: International Business Machines Corporation
    Inventors: Venkata Vara Prasad Karri, Sowjanya Rao, Sarbajit K. Rakshit
  • Patent number: 11861421
    Abstract: Techniques for a service provider network to communicatively couple services and/or applications in a serverless computing environment. A pipe component can configure a pipe to integrate two services by transmitting data between services and/or applications using the pipe. The pipe may also be configured to transform how a service processes an event, control timing of event transmissions using the pipe, define an event structure for an event, and/or batch events. Pipes enable an application or service to exchange data with a variety of services provided by the service provider network while controlling what type of data is generated, stored, or transmitted.
    Type: Grant
    Filed: June 30, 2022
    Date of Patent: January 2, 2024
    Inventors: Nikita Pinski, Mohamed Marzouk Adedoyin Mounirou, Nicholas Smit, Jakub Mateusz Narloch, Kunal Chopra
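    Illustrative sketch: a minimal Python sketch of a "pipe" as described in the abstract above, assuming a poll-based source and a batch-delivering target; the class and parameter names are hypothetical, not the provider's actual API.
        from dataclasses import dataclass
        from typing import Any, Callable, Dict, Iterable, List

        @dataclass
        class Pipe:
            source: Callable[[], Iterable[Dict[str, Any]]]         # polls events from the source service
            target: Callable[[List[Dict[str, Any]]], None]         # delivers events to the target service
            event_filter: Callable[[Dict[str, Any]], bool] = lambda e: True      # controls which events flow
            transform: Callable[[Dict[str, Any]], Dict[str, Any]] = lambda e: e  # reshapes the event structure
            batch_size: int = 10                                    # controls delivery timing/batching

            def run_once(self) -> None:
                batch: List[Dict[str, Any]] = []
                for event in self.source():
                    if not self.event_filter(event):
                        continue
                    batch.append(self.transform(event))
                    if len(batch) >= self.batch_size:
                        self.target(batch)
                        batch = []
                if batch:
                    self.target(batch)

        # Example: drain an in-memory "queue service" into a "logging service" in batches of two.
        events = [{"type": "order", "id": i} for i in range(3)]
        Pipe(source=lambda: events, target=print, batch_size=2).run_once()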
  • Patent number: 11853819
    Abstract: A storage product manufactured as a standalone computer component, having a bus connector to an external processor, a storage device, a random-access memory, a computational storage processor, and a processing device to identify, among storage access messages from a computer network, first messages, second messages, and third messages. The random-access memory hosts first queues shared between the processing device and the external processor, and second queues shared between the processing device and the computational storage processor. The processing device can place the first messages in the first queues for the external processor to generate fourth messages, place the second messages in the second queues for the computational storage processor to generate fifth messages, and provide the third messages to the storage device. The storage device can process the third messages, the fourth messages, and the fifth messages to implement requests in the storage access messages.
    Type: Grant
    Filed: July 15, 2022
    Date of Patent: December 26, 2023
    Assignee: Micron Technology, Inc.
    Inventor: Luca Bert
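    Illustrative sketch: a minimal Python sketch of the message routing in the abstract above, assuming a simple "kind" field decides the path; the patent describes a hardware storage product, so the classification rules and names here are hypothetical.
        import queue

        class StorageMessageRouter:
            def __init__(self):
                self.host_queues = queue.Queue()   # first queues: shared with the external processor
                self.csp_queues = queue.Queue()    # second queues: shared with the computational storage processor
                self.to_storage = []               # third messages: go straight to the storage device

            def route(self, message):
                kind = message.get("kind")
                if kind == "host":                 # needs host-side handling; the external processor
                    self.host_queues.put(message)  # later produces derived ("fourth") messages
                elif kind == "compute":            # needs near-data computation; the computational
                    self.csp_queues.put(message)   # storage processor produces derived ("fifth") messages
                else:                              # plain storage access
                    self.to_storage.append(message)

        router = StorageMessageRouter()
        router.route({"kind": "compute", "op": "filter-scan"})
        router.route({"kind": "read", "lba": 4096})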
  • Patent number: 11853777
    Abstract: Specifications are input, comprising: a plurality of lanes in an environment for a controlled system; a plurality of lane maneuvers associated with the plurality of lanes; a plurality of lane subconditions associated with the controlled system; and a rule set comprising a plurality of rules, wherein a rule in the rule set specifies a rule condition and a rule action to take when the rule condition is satisfied, wherein the rule condition comprises a corresponding set of lane subconditions, and wherein the rule action comprises a corresponding lane maneuver. The controlled system is automatically navigated dynamically, at least in part by: monitoring the plurality of lane subconditions; evaluating rule conditions associated with the plurality of rules in the rule set to determine one or more rules whose corresponding rule conditions have been met; and executing one or more lane maneuvers that correspond to the one or more determined rules.
    Type: Grant
    Filed: February 24, 2023
    Date of Patent: December 26, 2023
    Assignee: OptumSoft, Inc.
    Inventor: David R. Cheriton
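    Illustrative sketch: a minimal Python sketch of the rule set in the abstract above; the subcondition and maneuver names are hypothetical examples, not the patented rule language.
        from dataclasses import dataclass
        from typing import FrozenSet, List, Set

        @dataclass(frozen=True)
        class Rule:
            condition: FrozenSet[str]   # lane subconditions that must all hold
            maneuver: str               # lane maneuver to execute when they do

        RULES = [
            Rule(frozenset({"slow_vehicle_ahead", "left_lane_clear"}), "change_lane_left"),
            Rule(frozenset({"exit_in_1_km", "right_lane_clear"}), "change_lane_right"),
        ]

        def maneuvers_to_execute(rules: List[Rule], monitored: Set[str]) -> List[str]:
            # A rule fires when every subcondition in its rule condition is currently true.
            return [r.maneuver for r in rules if r.condition <= monitored]

        print(maneuvers_to_execute(RULES, {"slow_vehicle_ahead", "left_lane_clear", "dry_road"}))
        # -> ['change_lane_left']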
  • Patent number: 11847510
    Abstract: A method for implementing application self-optimization in serverless edge computing environments is presented. The method includes requesting deployment of an application pipeline on data received from a plurality of sensors, the application pipeline including a plurality of microservices, enabling communication between a plurality of pods and a plurality of analytics units (AUs), each pod of the plurality of pods including a sidecar, determining whether each of the plurality of AUs maintains any state to differentiate between stateful AUs and stateless AUs, scaling the stateful AUs and the stateless AUs, enabling communication directly between the sidecars of the plurality of pods, and reusing and resharing common AUs of the plurality of AUs across different applications.
    Type: Grant
    Filed: October 12, 2022
    Date of Patent: December 19, 2023
    Assignee: NEC Corporation
    Inventors: Giuseppe Coviello, Kunal Rao, Biplob Debnath, Srimat Chakradhar
  • Patent number: 11836550
    Abstract: Systems and methods for moving, reconciling, and aggregating data from mainframe computers to hybrid cloud are disclosed.
    Type: Grant
    Filed: April 1, 2022
    Date of Patent: December 5, 2023
    Assignee: JPMORGAN CHASE BANK, N.A.
    Inventors: Tayo Ibikunle, Vishnuvardhan Pondugula, Mizan Miah, Howard Spector, Ashok Reddy, Arun Subramanian, Raghu Vudathu, Anupam Arora
  • Patent number: 11829813
    Abstract: Metrics corresponding to services provided by a cloud service provider can be received via a first API responsive to queries specifying identifiers of the services. A configuration file can be maintained that includes mappings between the identifiers of the services and the metrics corresponding to the services. An identifier of a new service provided by the cloud service provider can be received via a second API. A mapping between the identifier of the new service and a metric corresponding to the new service can be added to the configuration file. The metric corresponding to the new service can be received via the first API responsive to a query specifying the identifier of the new service.
    Type: Grant
    Filed: May 30, 2022
    Date of Patent: November 28, 2023
    Assignee: VMware, Inc.
    Inventors: Shyam Kasi Venkatram, Madhan Sankar, Ayushi Ghatt, Amita Ranjan
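    Illustrative sketch: a minimal Python sketch of the configuration-file mapping in the abstract above, assuming a JSON file and simple register/lookup functions; the file layout and function names are hypothetical.
        import json
        from pathlib import Path

        CONFIG = Path("service_metrics.json")   # maintained mapping: service identifier -> metric

        def load_mappings() -> dict:
            return json.loads(CONFIG.read_text()) if CONFIG.exists() else {}

        def register_service(service_id: str, metric_name: str) -> None:
            # Second-API path: a newly offered service and its metric are added to the configuration file.
            mappings = load_mappings()
            mappings[service_id] = metric_name
            CONFIG.write_text(json.dumps(mappings, indent=2))

        def metric_for(service_id: str) -> str:
            # First-API path: look up which metric to query for a given service identifier.
            return load_mappings()[service_id]

        register_service("object-storage", "storage.bytes_used")
        print(metric_for("object-storage"))   # -> storage.bytes_used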
  • Patent number: 11822976
    Abstract: In one embodiment, a device presents information regarding an upstream machine learning workload and a downstream machine learning workload via a user interface. The device receives, via the user interface, a request to form a combined machine learning workload by connecting the upstream machine learning workload and the downstream machine learning workload. The device identifies, after receiving the request, a node associated with the upstream machine learning workload and a node associated with the downstream machine learning workload. The device forms the combined machine learning workload by configuring the node associated with the upstream machine learning workload to use one or more connector application programming interfaces to send data from the upstream machine learning workload to the node associated with the downstream machine learning workload for consumption.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: November 21, 2023
    Assignee: Cisco Technology, Inc.
    Inventors: Myungjin Lee, Harshit Daga, Ramana Rao V. R. Kompella
  • Patent number: 11810122
    Abstract: A method for robust communication between a client (1) and a server (2), for performing a transaction, comprises the steps of the client (1) initiating, through a transaction request (21), a transaction to be performed by the server computer (2), waiting (13) for a transaction confirmation request (22) from the server computer (2), and, upon receiving that request, sending a transaction confirmation response (23). After sending the transaction confirmation response (23), the client device (1) is not free to abort the transaction but is forced to wait for and accept a transaction result message (24) from the server computer (2), or, in the case of a server-side failure, a server-side transaction abort message (25).
    Type: Grant
    Filed: April 9, 2021
    Date of Patent: November 7, 2023
    Inventor: Guy Pardon
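    Illustrative sketch: a minimal client-side Python sketch of the exchange in the abstract above; the message names and the send/receive transport are hypothetical, and the numerals in comments refer to the reference signs in the abstract.
        def run_transaction(send, receive, payload):
            send({"type": "transaction_request", "payload": payload})   # transaction request (21)
            msg = receive()                                             # wait (13) for the server
            if msg["type"] != "confirmation_request":                   # confirmation request (22)
                raise RuntimeError("protocol violation")
            send({"type": "confirmation_response"})                     # confirmation response (23)
            # From this point the client may no longer abort: it must wait for either
            # a transaction result (24) or a server-side abort (25).
            outcome = receive()
            if outcome["type"] == "transaction_result":
                return outcome["result"]
            if outcome["type"] == "server_abort":
                raise RuntimeError("transaction aborted by server")
            raise RuntimeError("protocol violation")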
  • Patent number: 11809879
    Abstract: A container mode can be dynamically selected when an application is launched on an end user computing device. When an application is deployed to the end user computing device, a container configurator can collect information about the application and share it with a machine learning solution to receive an application score for the application. When the application is launched on the end user computing device, the container configurator can provide the application score, capabilities of the end user computing device, current resource utilization and admin preferences to the machine learning solution. The machine learning solution can then dynamically select a container mode based on this information and provide the selection to the container configurator. The container configurator can then cause the application to be launched within a container that matches the selected container mode.
    Type: Grant
    Filed: February 1, 2021
    Date of Patent: November 7, 2023
    Assignee: Dell Products L.P.
    Inventors: Gokul Thiruchengode Vajravel, Vivek Viswanathan Iyer
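    Illustrative sketch: a minimal Python sketch of the launch-time selection in the abstract above, with hard-coded rules standing in for the machine learning solution; the mode names, thresholds, and inputs are hypothetical.
        def select_container_mode(app_score: float, device: dict, utilization: dict, admin: dict) -> str:
            if admin.get("forced_mode"):                    # admin preference overrides everything
                return admin["forced_mode"]
            if app_score > 0.8 and device.get("supports_vm") and utilization["cpu"] < 0.6:
                return "hardware_isolated"                  # strongest isolation for high-risk apps
            if app_score > 0.4:
                return "process_isolated"
            return "unisolated"

        mode = select_container_mode(
            app_score=0.9,
            device={"supports_vm": True, "ram_gb": 16},
            utilization={"cpu": 0.35, "ram": 0.5},
            admin={},
        )
        print(mode)   # -> hardware_isolated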
  • Patent number: 11803770
    Abstract: A search device updates positions and momentums of a plurality of virtual particles, for each unit time from an initial time to an end time. The search device, for each unit time, calculates, for each of the particles, a position at a target time of a corresponding particle, calculates, for each of a plurality of nodes, a first accumulative value by cumulatively adding positions at the target time of two or more particles corresponding to outgoing two or more directed edges, calculates, for each of the nodes, a second accumulative value by cumulatively adding positions at the target time of two or more particles corresponding to incoming two or more directed edges, and calculates, for each of the particles, a momentum at the target time of a corresponding particle based on the first accumulative value and the second accumulative value.
    Type: Grant
    Filed: January 31, 2023
    Date of Patent: October 31, 2023
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Kosuke Tatsumura, Hayato Goto, Masaya Yamasaki, Ryo Hidaka, Yoshisato Sakai
  • Patent number: 11797323
    Abstract: A host computer for emulating a target system includes a host memory, a CPU, and a host GPU. The host memory is configured to store a library of graphics functions and a VM. The VM includes a section of emulated memory storing target code configured to execute on the target system. The CPU is configured to execute the VM to emulate the target system. The VM is configured to execute the target code and intercept a graphics function call in the target code. The VM is further configured to redirect the graphics function call to a corresponding graphics function in the library of graphics functions stored in the host memory. The host GPU is configured to execute the corresponding graphics function to determine at least one feature configured to be rendered on a display coupled to the host GPU.
    Type: Grant
    Filed: August 11, 2021
    Date of Patent: October 24, 2023
    Assignee: The Boeing Company
    Inventors: Timothy James Dale, Jonathan Nicholas Hotra, Glenn Alan Patterson, Craig H. Sowadski
  • Patent number: 11789753
    Abstract: Generally, the present disclosure is directed to user interface understanding. More particularly, the present disclosure relates to training and utilization of machine-learned models for user interface prediction and/or generation. A machine-learned interface prediction model can be pre-trained using a variety of pre-training tasks for eventual downstream task training and utilization (e.g., interface prediction, interface generation, etc.).
    Type: Grant
    Filed: June 1, 2021
    Date of Patent: October 17, 2023
    Assignee: GOOGLE LLC
    Inventors: Srinivas Kumar Sunkara, Xiaoxue Zang, Ying Xu, Lijuan Liu, Nevan Holt Wichers, Gabriel Overholt Schubiner, Jindong Chen, Abhinav Kumar Rastogi, Blaise Aguera-Arcas, Zecheng He
  • Patent number: 11790260
    Abstract: Quantum process termination is disclosed. A quantum computing system receives a request to terminate a quantum process. The quantum computing system determines that the quantum process utilizes a first qubit. The quantum computing system terminates the quantum process and modifies qubit metadata to indicate that the qubit is available for use.
    Type: Grant
    Filed: January 28, 2021
    Date of Patent: October 17, 2023
    Assignee: Red Hat, Inc.
    Inventors: Leigh Griffin, Stephen Coady
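    Illustrative sketch: a minimal Python sketch of the termination flow in the abstract above; the process table and qubit metadata structures are hypothetical.
        qubit_metadata = {0: {"in_use_by": "proc-42"}, 1: {"in_use_by": None}}
        processes = {"proc-42": {"qubits": [0], "state": "running"}}

        def terminate_quantum_process(process_id: str) -> None:
            proc = processes[process_id]
            proc["state"] = "terminated"
            for q in proc["qubits"]:                    # the qubit(s) the process utilized
                qubit_metadata[q]["in_use_by"] = None   # mark them available for other processes

        terminate_quantum_process("proc-42")
        print(qubit_metadata[0])   # -> {'in_use_by': None}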
  • Patent number: 11782772
    Abstract: A computer-implemented method for execution of a service in a distributed environment, the method comprising performing a speculative execution of a service and storing a related result, wherein a decision whether the speculative execution of the service is performed is dependent on a dynamically changing score value, and receiving a request for an execution of the service at a request proxy. Additionally, the method comprises, upon determining that a valid result of the execution of the service is available from an earlier speculative execution of a comparable service, returning the valid result by the request proxy, and upon determining that a valid result of the execution of the service is not available from an earlier speculative execution of a comparable service, executing the service in a non-speculative manner and returning the received non-speculative result by the request proxy.
    Type: Grant
    Filed: June 24, 2022
    Date of Patent: October 10, 2023
    Assignee: International Business Machines Corporation
    Inventors: Sugandha Agrawal, Timo Kussmaul, Harald Daur, Torsten Teich
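    Illustrative sketch: a minimal Python sketch of the request proxy in the abstract above, assuming a numeric score with a threshold and a time-based notion of a "valid" cached result; the names and policies are hypothetical.
        import time

        class SpeculativeProxy:
            def __init__(self, service, score_fn, score_threshold=0.5, ttl_seconds=30):
                self.service = service           # callable: args tuple -> result
                self.score_fn = score_fn         # dynamically changing score that gates speculation
                self.threshold = score_threshold
                self.ttl = ttl_seconds
                self.cache = {}                  # args -> (result, timestamp of speculative run)

            def maybe_speculate(self, args):
                # Execute speculatively only while the score is high enough, and store the result.
                if self.score_fn() >= self.threshold:
                    self.cache[args] = (self.service(*args), time.time())

            def request(self, args):
                hit = self.cache.get(args)
                if hit and time.time() - hit[1] < self.ttl:
                    return hit[0]                # valid result from an earlier speculative execution
                return self.service(*args)       # otherwise execute non-speculatively

        proxy = SpeculativeProxy(service=lambda x: x * x, score_fn=lambda: 0.9)
        proxy.maybe_speculate((7,))
        print(proxy.request((7,)))   # -> 49, served from the speculative result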
  • Patent number: 11782776
    Abstract: Systems and methods for providing referrer data to an application are provided. One method includes receiving a first set of data packets indicating a command to navigate from a first resource to a second resource. The first set of data packets identifies the first resource and secondary referrer data associated with the first resource or a first content item on the first resource. The method includes rendering the second resource and a second content item provided within the second resource. The method includes receiving a selection of the second content item. The method includes generating a second set of data packets including the secondary referrer data and primary referrer data associated with the second resource or the second content item. The method includes transmitting the second set of data packets to a server, receiving a deeplink generated by the server, and rendering a content interface using the deeplink.
    Type: Grant
    Filed: September 13, 2022
    Date of Patent: October 10, 2023
    Assignee: GOOGLE LLC
    Inventors: Justin Lewis, Scott Davies
  • Patent number: 11775309
    Abstract: The present disclosure provides an exception stack handling method, system, electronic device and storage medium, and relates to the field of mobile Internet. The method may include: at the level of any executor in a distributed stream-type processing system including at least two executors, performing the following processing: obtaining at least one exception stack from a message middleware each time the executor is in an idle state, the collected exception stacks generated by users being stored in the message middleware; for any exception stack, obtaining an anti-obfuscation map file corresponding to the exception stack, and performing anti-obfuscation processing for the exception stack by using the anti-obfuscation map file. The solution of the present disclosure may be applied to improve the processing speed.
    Type: Grant
    Filed: November 26, 2020
    Date of Patent: October 3, 2023
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Yang Peng, Hao Yang, Jing Zou, Lei Feng, Hongliang Sui
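    Illustrative sketch: a minimal Python sketch of an executor in the abstract above, assuming an in-memory queue stands in for the message middleware and a dictionary stands in for the anti-obfuscation map file; all names are hypothetical.
        import queue

        middleware = queue.Queue()   # holds collected exception stacks generated by users

        def anti_obfuscation_map(build_id: str) -> dict:
            # Hypothetical lookup of the map file produced at build time (obfuscated -> original symbol).
            return {"a.b": "com.example.MainActivity.onCreate"}

        def deobfuscate(stack: dict) -> list:
            mapping = anti_obfuscation_map(stack["build_id"])
            return [mapping.get(frame, frame) for frame in stack["frames"]]

        def drain_when_idle() -> None:
            # Called by an executor whenever it is idle: process whatever stacks are queued, then return.
            while not middleware.empty():
                print("\n".join(deobfuscate(middleware.get())))

        middleware.put({"build_id": "1.2.3", "frames": ["a.b", "c.d"]})
        drain_when_idle()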
  • Patent number: 11775359
    Abstract: Methods and apparatuses for cross-layer processing. In some embodiments, kernel processes are executed at a higher privilege and priority than user space processes; thus, cross-layer communication that spans both kernel and user space may introduce system vulnerabilities and/or consume limited resources in an undesirable manner. Unlike kernel space networking architectures that have to accommodate generic use cases, user space networking architectures are application specific, run in their own memory allocations, and can be terminated without affecting other user space applications and/or kernel space operation. Various aspects described herein provide application specific, non-generic functionality without kernel assistance. Exemplary embodiments for buffer cloning, packet aggregation, and "just in time" transformations are illustrative of the broader concepts enabled by the present disclosure.
    Type: Grant
    Filed: September 8, 2021
    Date of Patent: October 3, 2023
    Assignee: Apple Inc.
    Inventors: Cahya Adiansyah Masputra, Eric Tsz Leung Cheng, Wei Shen, Francesco Dimambro, Sandeep Nair
  • Patent number: 11775365
    Abstract: In an example embodiment, a cross-tenant service broker with a router microservice is introduced. The router microservice writes information into the service broker. A data receiver then reads the information from the service broker and stores it in tenant-specific storage. A distributor forwards data that belongs to other data centers. In each tenant, data center information is received as part of an application program interface (API). In order to address the fact that the tenancy model of a MAP and an MLAP may be different, a service registry (or service landscape registry, such as SLIS or LIS) kernel service is used to map the MLAP tenant(s) into the correct MAP tenant(s).
    Type: Grant
    Filed: June 13, 2022
    Date of Patent: October 3, 2023
    Assignee: SAP SE
    Inventor: Anbusivam S
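    Illustrative sketch: a minimal Python sketch of the tenant-mapping step in the abstract above, assuming a dictionary stands in for the service (landscape) registry; the tenant identifiers and data-center names are hypothetical.
        SERVICE_REGISTRY = {                  # MLAP tenant -> (MAP tenant, data center)
            "mlap-tenant-7": ("map-tenant-A", "eu10"),
            "mlap-tenant-9": ("map-tenant-B", "us20"),
        }
        LOCAL_DATA_CENTER = "eu10"
        tenant_storage = {}                   # MAP tenant -> records in tenant-specific storage

        def receive(mlap_tenant: str, record: dict) -> str:
            map_tenant, data_center = SERVICE_REGISTRY[mlap_tenant]
            if data_center != LOCAL_DATA_CENTER:
                return f"forwarded to {data_center}"        # distributor path: belongs to another data center
            tenant_storage.setdefault(map_tenant, []).append(record)
            return f"stored for {map_tenant}"

        print(receive("mlap-tenant-7", {"event": "login"}))   # -> stored for map-tenant-A
        print(receive("mlap-tenant-9", {"event": "login"}))   # -> forwarded to us20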