Priority Scheduling Patents (Class 718/103)
  • Patent number: 11973771
    Abstract: According to various embodiments, a method for detecting security vulnerabilities in at least one of cyber-physical systems (CPSs) and Internet of Things (IoT) devices is disclosed. The method includes constructing an attack directed acyclic graph (DAG) from a plurality of regular expressions, where each regular expression corresponds to control-data flow for a known CPS/IoT attack. The method further includes performing a linear search on the attack DAG to determine unexploited CPS/IoT attack vectors, where a path in the attack DAG that does not represent a known CPS/IoT attack vector represents an unexploited CPS/IoT attack vector. The method also includes applying a trained machine learning module to the attack DAG to predict new CPS/IoT vulnerability exploits. The method further includes constructing a defense DAG configured to protect against the known CPS/IoT attacks, the unexploited CPS/IoT attacks, and the new CPS/IoT vulnerability exploits.
    Type: Grant
    Filed: February 25, 2020
    Date of Patent: April 30, 2024
    Assignee: THE TRUSTEES OF PRINCETON UNIVERSITY
    Inventors: Tanujay Saha, Najwa Aaraj, Niraj K. Jha
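The entry above centers on enumerating paths in an attack DAG and flagging paths that do not correspond to known attacks as unexploited attack vectors. A minimal sketch of that idea, assuming the attack DAG is given as an adjacency list and known attacks as a set of node paths; the graph, node names, and `known_attacks` set are illustrative, a depth-first enumeration stands in for the patent's linear search, and the machine-learning prediction and defense-DAG steps are omitted:

```python
# Hypothetical attack DAG: each node maps to the nodes reachable from it.
attack_dag = {
    "entry": ["fw_bypass", "weak_auth"],
    "fw_bypass": ["plc_write"],
    "weak_auth": ["plc_write", "data_exfil"],
    "plc_write": [],
    "data_exfil": [],
}

# Paths already documented as known CPS/IoT attacks (illustrative).
known_attacks = {("entry", "fw_bypass", "plc_write"),
                 ("entry", "weak_auth", "data_exfil")}

def root_to_leaf_paths(dag, node, prefix=()):
    """Enumerate every root-to-leaf path in the DAG via depth-first search."""
    path = prefix + (node,)
    if not dag[node]:                      # leaf: a complete attack vector
        yield path
        return
    for nxt in dag[node]:
        yield from root_to_leaf_paths(dag, nxt, path)

# Any complete path that is not a known attack is an unexploited attack vector.
unexploited = [p for p in root_to_leaf_paths(attack_dag, "entry")
               if p not in known_attacks]
print(unexploited)   # e.g. [('entry', 'weak_auth', 'plc_write')]
```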
  • Patent number: 11973694
    Abstract: In one embodiment, an in-network compute resource assignment system includes a network device to receive a request to select resources to perform a processing job, wherein the request includes at least one resource requirement of the processing job, and end point devices assigned to perform the processing job, a memory to store a state of in-network compute-resources indicating resource usage of the in-network compute-resources by other processing jobs, and a processor to manage the stored state, and responsively to receiving the request, selecting ones of the in-network compute-resources to perform the processing job based on: (a) a network topology of a network including the in-network compute-resources; (b) the state of the in-network compute-resources; and (c) the at least one resource requirement of the processing job.
    Type: Grant
    Filed: March 30, 2023
    Date of Patent: April 30, 2024
    Assignee: MELLANOX TECHNOLOGIES, LTD.
    Inventors: Yishai Oltchik, Gil Bloch, Daniel Klein, Tamir Ronen
  • Patent number: 11969578
    Abstract: Methods, devices and systems are disclosed for inter-app communications between software applications on a mobile communications device. In one aspect, a computer-readable medium on a mobile computing device comprising an inter-application communication data structure to facilitate transitioning and distributing data between software applications in a shared app group for an operating system of the mobile computing device includes a scheme field of the data structure providing a scheme id associated with a target software app to transition to from a source software app, wherein the scheme id is listed on a scheme list stored with the source software app; and a payload field of the data structure providing data and/or an identification where to access data in a shared file system accessible to the software applications in the shared app group, wherein the payload field is encrypted.
    Type: Grant
    Filed: March 16, 2021
    Date of Patent: April 30, 2024
    Assignee: Dexcom, Inc.
    Inventors: Gary A. Morris, Scott M. Belliveau, Esteban Cabrera, Jr., Rian Draeger, Laura J. Dunn, Timothy Joseph Goldsmith, Hari Hampapuram, Christopher Robert Hannemann, Apurv Ullas Kamath, Katherine Yerre Koehler, Patrick Wile McBride, Michael Robert Mensinger, Francis William Pascual, Philip Mansiel Pellouchoud, Nicholas Polytaridis, Philip Thomas Pupa, Anna Leigh Davis, Kevin Shoemaker, Brian Christopher Smith, Benjamin Elrod West, Atiim Joseph Wiley
  • Patent number: 11966416
    Abstract: Techniques for triggering pipeline execution based on data change (transaction commit) are described. The pipelines can be used for data ingestion or other specified tasks. These tasks can be operational across account, organization, cloud region, and cloud provider boundaries. The tasks can be triggered by commit post-processing. Gates in the tasks can be set up to reference change data capture information. If the gate is satisfied, tasks can be executed to set up data pipelines.
    Type: Grant
    Filed: January 31, 2023
    Date of Patent: April 23, 2024
    Assignee: Snowflake Inc.
    Inventors: Tyler Arthur Akidau, Istvan Cseri, Tyler Jones, Dinesh Chandrakant Kulkarni, Daniel Mills, Daniel E. Sotolongo, Di Fei Zhang
  • Patent number: 11965973
    Abstract: Disclosed are techniques for wireless communication. In an aspect, a user equipment (UE) measures one or more positioning reference signal (PRS) resources of at least one PRS instance, and processes the one or more PRS resources of the at least one PRS instance during a PRS processing gap, wherein the PRS processing gap comprises a period of time during which the UE prioritizes PRS processing over reception, processing, or both of other downlink signals and channels.
    Type: Grant
    Filed: March 9, 2022
    Date of Patent: April 23, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Alexandros Manolakos, Weimin Duan, Sven Fischer, Krishna Kiran Mukkavilli
  • Patent number: 11966776
    Abstract: Tasks of directed acyclic graphs (DAGs) may be dynamically scheduled based on a plurality of constraints and conditions, task prioritization policies, task execution estimates, and configurations of a heterogenous system. A machine learning component may be initialized to dynamically schedule the tasks of the DAGs.
    Type: Grant
    Filed: July 14, 2021
    Date of Patent: April 23, 2024
    Assignee: International Business Machines Corporation
    Inventors: Aporva Amarnath, Augusto Vega, Alper Buyuktosunoglu, Hubertus Franke, John-David Wellman, Pradip Bose
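The abstract above describes dynamic scheduling of DAG tasks against prioritization policies. A minimal sketch of one such policy, assuming a static priority per task and ignoring the constraint, estimation, and machine-learning components; the task graph and priority values are illustrative:

```python
import heapq

# Hypothetical DAG: task -> set of tasks it depends on.
deps = {"load": set(), "clean": {"load"}, "train": {"clean"},
        "report": {"clean"}, "archive": {"load"}}
priority = {"load": 0, "clean": 0, "train": 1, "report": 2, "archive": 3}  # lower = sooner

def schedule(deps, priority):
    """Return an execution order: among ready tasks, pick the highest-priority one."""
    remaining = {t: set(d) for t, d in deps.items()}
    ready = [(priority[t], t) for t, d in remaining.items() if not d]
    heapq.heapify(ready)
    order = []
    while ready:
        _, task = heapq.heappop(ready)
        order.append(task)
        for t, d in remaining.items():
            if task in d:                  # this dependency is now satisfied
                d.remove(task)
                if not d:
                    heapq.heappush(ready, (priority[t], t))
    return order

print(schedule(deps, priority))  # ['load', 'clean', 'train', 'report', 'archive']
```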
  • Patent number: 11966825
    Abstract: There is disclosed a method and system for executing commands. The method comprises configuring an input event topic subscriber and a command orchestrator process. The input event topic subscriber is invoked. The input event topic subscriber receives an event. The event comprises an event context and associated data. The event is transformed into a command. The command orchestrator is invoked. The command is input to the command orchestrator. The command orchestrator adds contextual information to the command. The command orchestrator schedules execution of the command. The execution of the command is tracked. A returned data object corresponding to the command is received and output.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: April 23, 2024
    Assignee: SERVICENOW CANADA INC.
    Inventors: Jean-François Arcand, Gabriel Duford, Marc Boissonneault, Andre Milton, Gilbert Kowarzyk, Christian Hudon
  • Patent number: 11954044
    Abstract: A method includes executing, by a processor core, a first task; scheduling, by a scheduler, a second task to be executed by the processor core upon completion of executing the first task; responsive to scheduling the second task, providing, by the scheduler, a prewarming message to a memory management unit (MMU) coupled to the processor core; and responsive to receiving the prewarming message, fetching, by the MMU, a page table specified by a page table base of the prewarming message.
    Type: Grant
    Filed: October 12, 2020
    Date of Patent: April 9, 2024
    Assignee: Texas Instruments Incorporated
    Inventor: Daniel Brad Wu
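A minimal sketch of the prewarming flow in the abstract above, with the scheduler, MMU, and page-table fetch reduced to plain Python objects; the class names, the `page_table_base` field, and the print-based "fetch" are illustrative stand-ins, not the patent's interfaces:

```python
class MMU:
    """Toy MMU that remembers which page tables it has already fetched."""
    def __init__(self):
        self.cached_tables = set()

    def prewarm(self, page_table_base):
        # On a prewarming message, fetch the page table ahead of the task switch.
        if page_table_base not in self.cached_tables:
            print(f"fetching page table @ {page_table_base:#x}")
            self.cached_tables.add(page_table_base)

class Scheduler:
    def __init__(self, mmu):
        self.mmu = mmu
        self.queue = []

    def schedule(self, task_name, page_table_base):
        # Scheduling the next task sends the prewarming message immediately,
        # so the page-table fetch overlaps with the current task finishing.
        self.queue.append((task_name, page_table_base))
        self.mmu.prewarm(page_table_base)

    def run_next(self):
        task, base = self.queue.pop(0)
        assert base in self.mmu.cached_tables   # translation state is already warm
        print(f"running {task}")

mmu = MMU()
sched = Scheduler(mmu)
sched.schedule("task_B", 0x8000_1000)   # queued while task_A is still running
sched.run_next()
```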
  • Patent number: 11948434
    Abstract: A historical horse racing (HHR) gaming system and method are provided that facilitate gameplay on one or a plurality of gameplay stations connected to a gaming server. The gaming server is configured to identify, upon receiving a gameplay request, a first plurality of events to execute. The gaming server is configured to receive a wager from a player and to compare the wager against a scorecard for the plurality of events, with a payout determined from the correct and incorrect predictions. The first plurality of events is determined in a time-dependent manner so as to approximate the gameplay experience of live and/or in-person gaming experiences.
    Type: Grant
    Filed: January 10, 2022
    Date of Patent: April 2, 2024
    Assignee: CASTLE HILL HOLDING LLC
    Inventors: Daniel Fulton, Joshua Paul Larson, Alan Roireau
  • Patent number: 11936777
    Abstract: Disclosed is a secret-key provisioning (SKP) method and device based on an optical line terminal (OLT), which can generate an SKP queue according to key requests received; generate at least one secret-key according to the SKP queue; and store the at least one secret-key in key pools (KPs) of corresponding ONUs. A non-transitory computer-readable storage medium is also disclosed.
    Type: Grant
    Filed: January 10, 2020
    Date of Patent: March 19, 2024
    Assignee: Beijing University of Posts and Telecommunications
    Inventors: Yongli Zhao, Hua Wang, Xiaosong Yu, Xinyi He, Yajie Li, Jie Zhang
  • Patent number: 11934283
    Abstract: Data protection operations including replication operations are disclosed. Virtual machines, applications, and/or application data are replicated according to at least one strategy. The replication strategy can improve performance of the recovery operation.
    Type: Grant
    Filed: May 19, 2020
    Date of Patent: March 19, 2024
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Bing Liu, Jehuda Shemer, Kfir Wolfson, Jawad Said
  • Patent number: 11928512
    Abstract: A reconfigurable data processor comprises an array of configurable units configurable to allocate a plurality of sets of configurable units in the array to implement respective execution fragments of the data processing operation. Quiesce logic is coupled to configurable units in the array, configurable to respond to a quiesce control signal to quiesce the sets of configurable units in the array on quiesce boundaries of the respective execution fragments, and to forward quiesce ready signals for the respective execution fragments when the corresponding sets of processing units are ready. An array quiesce controller distributes the quiesce control signal to configurable units in the array, and receives quiesce ready signals for the respective execution fragments from the quiesce logic.
    Type: Grant
    Filed: May 17, 2021
    Date of Patent: March 12, 2024
    Assignee: SambaNova Systems, Inc.
    Inventors: Raghu Prabhakar, Manish K. Shah, Pramod Nataraja, David Brian Jackson, Kin Hing Leung, Ram Sivaramakrishnan, Sumti Jairath, Gregory Frederick Grohoski
  • Patent number: 11922180
    Abstract: A method for managing a client environment includes obtaining, by a client device upgrade manager, an upgrade estimation for a client device executing in the client environment, wherein the upgrade estimation corresponds to an application upgrade for an application, in response to the upgrade estimation: performing an optimal time slot analysis for the client device to identify a set of optimal time slots, presenting the set of optimal time slots to the client device, obtaining, by the client device, a requested time slot for the application, and in response to the requested time slot, initiating an installation of an application upgrade of the application.
    Type: Grant
    Filed: July 27, 2021
    Date of Patent: March 5, 2024
    Assignee: EMC IP Holding Company LLC
    Inventors: Parminder Singh Sethi, Lakshmi Nalam, Vasanth Ds, Shelesh Chopra
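A minimal sketch of an optimal-time-slot analysis in the spirit of the abstract above, assuming an hourly forecast of client-device activity is available; the activity numbers, slot length, and `top_n` cutoff are illustrative:

```python
# Hypothetical forecast of client-device activity (arbitrary units) per hour of day.
predicted_activity = dict(zip(range(24),
                              [80, 60, 30, 10, 5, 5, 10, 40, 70, 90, 95, 90,
                               85, 80, 75, 70, 65, 60, 55, 50, 45, 40, 35, 30]))

def optimal_slots(predicted_activity, upgrade_minutes=30, top_n=3):
    """Rank hourly slots by predicted activity and return the quietest few."""
    ranked = sorted(predicted_activity, key=predicted_activity.get)
    return [(hour, upgrade_minutes) for hour in ranked[:top_n]]

slots = optimal_slots(predicted_activity)
print(slots)   # [(4, 30), (5, 30), (3, 30)]: quiet overnight hours win
```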
  • Patent number: 11915043
    Abstract: In some examples, a data management and storage (DMS) system comprises peer DMS nodes in a node cluster, a distributed data store comprising local and cloud storage, and an IO request scheduler comprising at least one processor configured to perform operations in a method of scheduling IO requests. Example operations comprise implementing a kernel scheduler to schedule a flow of IO requests in the DMS system, and providing an adjustment layer to adjust the kernel scheduler based on an IO request prioritization. A flow of IO requests is identified and some examples implement an IO request prioritization based on the adjustments made by the adjustment layer.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: February 27, 2024
    Assignee: Rubrik, Inc.
    Inventors: Vivek Sanjay Jain, Aravind Menon, Junyong Lee, Connie Xiao Zeng
  • Patent number: 11900155
    Abstract: The present disclosure relates to a method, device and computer program product for processing a job. In a method, a first group of tasks in a first portion of the job are obtained, the first group of tasks being executable in parallel by a first group of processing devices. A plurality of priorities are set to a plurality of processing devices, respectively, based on a state of a processing resource of a processing device among the plurality of processing devices in a distributed processing system, the processing resource comprising at least one of a computing resource and a storage resource. The first group of processing devices are selected from the plurality of processing devices based on the plurality of priorities. The first group of tasks are allocated to the first group of processing devices, respectively, which process the first group of tasks for generating a first group of task results.
    Type: Grant
    Filed: February 26, 2020
    Date of Patent: February 13, 2024
    Assignee: EMC IP Holding Company LLC
    Inventors: YuHong Nie, Pengfei Wu, Jinpeng Liu, Zhen Jia
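A minimal sketch of the priority-based device selection described above, assuming priority is a simple weighted sum of free compute and free storage; the device names, weights, and resource figures are illustrative:

```python
# Hypothetical resource state per processing device (fractions of free capacity).
devices = {
    "dev-1": {"free_cpu": 0.20, "free_storage": 0.70},
    "dev-2": {"free_cpu": 0.80, "free_storage": 0.50},
    "dev-3": {"free_cpu": 0.60, "free_storage": 0.90},
}

def device_priority(state, cpu_weight=0.7, storage_weight=0.3):
    """Higher free resources -> higher priority (illustrative scoring)."""
    return cpu_weight * state["free_cpu"] + storage_weight * state["free_storage"]

def assign(tasks, devices):
    """Allocate the first group of tasks to the highest-priority devices, one each."""
    ranked = sorted(devices, key=lambda d: device_priority(devices[d]), reverse=True)
    return dict(zip(tasks, ranked))

print(assign(["task-a", "task-b"], devices))
# {'task-a': 'dev-2', 'task-b': 'dev-3'}
```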
  • Patent number: 11899939
    Abstract: A read/write request processing method and server are provided. In this method, each terminal is grouped, and different service durations are assigned for all terminal groups, so that a server can process, within any service duration, only a read/write request sent by a terminal in a terminal group corresponding to the service duration. According to the application, a cache area of a network interface card of the server is enabled to store only limited quantities of queue pairs (QPs) and work queue elements (WQEs), thereby preventing uneven resource distribution in the cache area of the network interface card.
    Type: Grant
    Filed: July 9, 2021
    Date of Patent: February 13, 2024
    Assignees: Huawei Technologies Co., Ltd., TSINGHUA UNIVERSITY
    Inventors: Jiwu Shu, Youmin Chen, Youyou Lu, Wenlin Cui
  • Patent number: 11895201
    Abstract: A multitenancy system that includes a host provider, a programmable device, and multiple tenants is provided. The host provider may publish a multitenancy mode sharing and allocation policy that includes a list of terms to which the programmable device and tenants can adhere. The programmable device may include a secure device manager configured to operate in a multitenancy mode to load a tenant persona into a given partial reconfiguration (PR) sandbox region on the programmable device. The secure device manager may be used to enforce spatial isolation between different PR sandbox regions and temporal isolation between successive tenants in one PR sandbox region.
    Type: Grant
    Filed: March 27, 2020
    Date of Patent: February 6, 2024
    Assignee: Intel Corporation
    Inventors: Steffen Schulz, Patrick Koeberl, Alpa Narendra Trivedi, Scott Weber
  • Patent number: 11886911
    Abstract: At least one processing device comprises a processor and a memory coupled to the processor. The at least one processing device is configured to associate different classes of service with respective threads of one or more applications executing on at least one of a plurality of processing cores of a storage system, to configure different sets of prioritized thread queues for respective ones of the different classes of service, to enqueue particular ones of the threads associated with particular ones of the classes of service in corresponding ones of the prioritized thread queues, and to implement different dequeuing policies for selecting particular ones of the enqueued threads from the different sets of prioritized thread queues based at least in part on the different classes of service. The at least one processing device illustratively comprises at least a subset of the plurality of processing cores of the storage system.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: January 30, 2024
    Assignee: EMC IP Holding Company LLC
    Inventors: Vladimir Shveidel, Lior Kamran
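A minimal sketch of per-class prioritized queues with a weighted dequeuing policy, in the spirit of the abstract above; the class names and weights are illustrative and threads are represented as plain strings:

```python
from collections import deque

# One queue per class of service; weights drive how often each class is served.
queues = {"latency-critical": deque(), "normal": deque(), "background": deque()}
weights = {"latency-critical": 4, "normal": 2, "background": 1}

def enqueue(thread, service_class):
    queues[service_class].append(thread)

def dequeue_round():
    """Weighted round-robin: a class with weight w may yield up to w threads per round."""
    picked = []
    for service_class, weight in weights.items():
        for _ in range(weight):
            if queues[service_class]:
                picked.append(queues[service_class].popleft())
    return picked

for i in range(3):
    enqueue(f"lc-{i}", "latency-critical")
    enqueue(f"bg-{i}", "background")
print(dequeue_round())   # ['lc-0', 'lc-1', 'lc-2', 'bg-0']
```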
  • Patent number: 11888700
    Abstract: A method for isolation in the CN domain of a network slice includes receiving a slice isolation policy and establishing a CN NSS isolation policy based on the slice isolation policy. When the CN NSS isolation policy includes a network resource isolation policy, the network resource isolation policy is mapped to a network resource allocation policy, a part of which relating to physical resources is sent to a network function management function (NFMF) and a part relating to virtual resources is sent to a network function virtualization management and orchestration function. When the NSS isolation policy includes an application level isolation policy, the application level isolation policy is mapped to an application level policy which is sent to the NFMF.
    Type: Grant
    Filed: August 3, 2020
    Date of Patent: January 30, 2024
    Assignee: Nokia Solutions and Networks Oy
    Inventors: Zhiyuan Hu, Jing Ping, Zhigang Luo, Wen Wei
  • Patent number: 11853771
    Abstract: A branded fleet server system includes a pre-assembled third-party computer system integrated into a chassis of the branded fleet server system. The pre-assembled third-party computer system is configured to execute proprietary software that is only licensed for use on branded hardware. A virtualization offloading component is included in the server chassis of the branded fleet server along with the pre-assembled third-party computer system. The virtualization offloading component acts as a bridge between the pre-assembled third-party computer system and a virtualized computing service. As such, the virtualization offloading component manages communications, security, metadata, etc. to allow the pre-assembled computer system to function as one of a fleet of virtualization hosts of the virtualized computing service.
    Type: Grant
    Filed: September 24, 2019
    Date of Patent: December 26, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Peter Zachary Bowen, Darin Lee Frink, Eric Robert Northup, David A Skirmont, Manish Singh Rathaur
  • Patent number: 11853791
    Abstract: Transaction scheduling is described for a user data cache by assessing update criteria. In one example, an event records memory stores a list of events, each corresponding to performance of a transaction at a remote resource for a user. The memory has criteria for each event and a criterion value for each criterion and event combination. An event manager assesses criteria for each event by performing an operation on the stored criterion value for each criterion and event combination, assigning a score for each criterion and event combination, and compiling the assigned scores to generate a composite score for each event. The events are ordered based on the respective composite scores and executed in the ordered sequence by performing a corresponding transaction at the remote resource. Updated criterion values are stored for executed events.
    Type: Grant
    Filed: February 8, 2021
    Date of Patent: December 26, 2023
    Assignee: BILLGO, INC.
    Inventors: Stephen Ryan Gordon, Terry Lentz, Jr., Kalyanaraman Ganesan, Richard Yiu-Sai Chung
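A minimal sketch of composite scoring and ordering of cached transaction events as outlined above; the criteria, per-criterion scoring operations, and event records are illustrative:

```python
# Hypothetical event records: criterion -> stored criterion value.
events = {
    "pay_utility":  {"days_since_update": 12, "failures": 0, "user_flagged": 1},
    "pay_card":     {"days_since_update": 3,  "failures": 2, "user_flagged": 0},
    "pay_mortgage": {"days_since_update": 30, "failures": 0, "user_flagged": 0},
}

# Per-criterion scoring operations (illustrative).
scorers = {
    "days_since_update": lambda v: min(v, 30) / 30,   # staler data scores higher
    "failures":          lambda v: 0.5 * v,           # past failures raise urgency
    "user_flagged":      lambda v: 2.0 * v,           # explicit user interest dominates
}

def composite_score(criteria):
    """Compile the per-criterion scores into one composite score for the event."""
    return sum(scorers[name](value) for name, value in criteria.items())

ordered = sorted(events, key=lambda e: composite_score(events[e]), reverse=True)
print(ordered)   # ['pay_utility', 'pay_card', 'pay_mortgage']
```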
  • Patent number: 11842212
    Abstract: The disclosure includes systems and methods for determining a change window for taking an application off-line. The systems and methods include mapping application programming interfaces (APIs) to one or more applications, and based on the API mapping, determining a priority level for each of the applications. A network traffic volume for each of the APIs mapped to the applications is predicted. Based on the predicted network traffic volume and the priority level of each of the applications, the systems and methods determine a change window for taking the applications off-line.
    Type: Grant
    Filed: February 7, 2020
    Date of Patent: December 12, 2023
    Assignee: Mastercard International Incorporated
    Inventor: Chandra Sekhar Duggirala
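A minimal sketch of deriving a change window from an API-to-application mapping, predicted per-API traffic, and application priority, following the abstract above; the mapping, forecasts, and priority weight are illustrative:

```python
# Hypothetical mapping of APIs to the application being taken off-line.
api_map = {"checkout-api": "payments", "refund-api": "payments"}
app_priority_weight = {"payments": 3.0}   # higher priority -> traffic hurts more

# Predicted calls per hour for each API (illustrative, 24 hours).
predicted = {
    "checkout-api": [200 if 8 <= h <= 22 else 20 for h in range(24)],
    "refund-api":   [50 if 9 <= h <= 18 else 5 for h in range(24)],
}

def change_window(app, window_hours=2):
    """Pick the start hour whose window has the lowest weighted predicted traffic."""
    weight = app_priority_weight[app]
    apis = [a for a, owner in api_map.items() if owner == app]
    def cost(start):
        hours = [(start + i) % 24 for i in range(window_hours)]
        return weight * sum(predicted[a][h] for a in apis for h in hours)
    return min(range(24), key=cost)

print(change_window("payments"))   # 0: a quiet overnight hour
```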
  • Patent number: 11829913
    Abstract: In one aspect of the present disclosure, activities of users within an IT service end user are recorded. A given record of a user activity may include execution information for a command executed by the user, such as input parameter(s) to the command, the output of the command, and/or any other type of execution information. In implementations, intelligence may be built into a proxy module corresponding to the command to track an execution of the command started by the user. The execution of the command is captured and stored in a buffer such that another user within the IT service end user can review the execution of the command. In another aspect, user interfaces are provided that facilitate review, by a user within an IT service end user, of activities performed by another user within the IT service end user in administering IT services.
    Type: Grant
    Filed: January 19, 2021
    Date of Patent: November 28, 2023
    Assignee: SkyKick, Inc.
    Inventors: Christopher Rayner, Evan Richman, Bradley Younge, Robert P. Karaban, John Dennis, Todd Schwartz, Darren D. Peterson, Peter Joseph Wilkins, Matthew Steven Hintzke, Sergii Semenov, Alex Zammitt, Philip Pittle
  • Patent number: 11824791
    Abstract: A switching system having input ports and output ports and comprising an input queued (IQ) switch with virtual channels. Typically, only one virtual channel can, at a given time, access a given output port. Typically, the IQ switch includes an arbiter apparatus that controls the input ports and output ports to ensure that an input port transmits at most one cell at a time, and/or that an output port receives a cell over only one virtual channel, and/or an output port receives at most one cell at a time.
    Type: Grant
    Filed: October 22, 2021
    Date of Patent: November 21, 2023
    Assignee: NVIDIA CORPORATION
    Inventors: Anil Mugu, Srijith Haridas
  • Patent number: 11822805
    Abstract: Embodiments of the present disclosure describe a memory reclaiming method and a terminal. As discussed with respect to the embodiments described herein, the method may include determining, by a terminal according to a preset rule, a target application program in application programs run on a background, where the target application program is an application program that needs to be cleaned. The method may also include freezing, by the terminal, the target application program, and reclaiming data generated during running of a process of the target application program in memory. The method may also include unfreezing, by the terminal when receiving an input triggering instruction for the target application program, the target application program, and running the target application program.
    Type: Grant
    Filed: March 28, 2019
    Date of Patent: November 21, 2023
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Qiulin Chen, Bailin Wen, Xiaojun Duan
  • Patent number: 11822675
    Abstract: A method and a corresponding system are provided for encrypting customer workload data through a trusted entity such as a self-boot engine (SBE). More specifically, the method and system securely extract customer-centric data in a manner that requires the customer payloads and/or workloads to register with the SBE and share the encryption key.
    Type: Grant
    Filed: June 24, 2021
    Date of Patent: November 21, 2023
    Assignee: International Business Machines Corporation
    Inventors: Raja Das, Sachin Gupta, Santosh Balasubramanian, Sandeep Korrapati
  • Patent number: 11822971
    Abstract: In a Boundaryless Control High Availability ("BCHA") system (e.g., industrial control system) comprising multiple computing resources (or computational engines) running on multiple machines, technology is disclosed for computing in real time the overall system availability based upon the capabilities/characteristics of the available computing resources, the applications to execute, and the distribution of the applications across those resources. In some embodiments, the disclosed technology can dynamically manage, coordinate, and recommend certain actions to system operators to maintain availability of the overall system at a desired level. High Availability features may be implemented across a variety of different computing resources distributed across various aspects of a BCHA system and/or computing resources. Two example implementations of BCHA systems are described, involving an M:N working configuration and an M:N+R working configuration.
    Type: Grant
    Filed: April 21, 2021
    Date of Patent: November 21, 2023
    Assignee: Schneider Electric Systems USA, Inc.
    Inventors: Raja Ramana Macha, Andrew Lee David Kling, Frans Middeldorp, Nestor Jesus Camino, Jr., James Gerard Luth, James P. McIntyre
  • Patent number: 11822959
    Abstract: Methods and systems for processing requests with load-dependent throttling. The system compares a count of active job requests being currently processed for a user associated with a new job request with an active job cap number for that user. When the count of active job requests being currently processed for that user does not exceed the active job cap number specific to that user, the job request is added to an active job queue for processing. However, when the count of active job requests being currently processed for that user exceeds the active job cap number, the job request is placed on a throttled queue to await later processing when an updated count of active job requests being currently processed for that user is below the active job cap number. Once the count is below the cap, the throttled request is moved to the active job queue for processing.
    Type: Grant
    Filed: February 18, 2022
    Date of Patent: November 21, 2023
    Assignee: Shopify Inc.
    Inventors: Robert Mic, Aline Fatima Manera, Timothy Willard, Nicole Simone, Scott Weber
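A minimal sketch of the per-user active-job cap with a throttled queue described above; the user names, cap values, and in-memory queues are illustrative:

```python
from collections import deque

active_cap = {"alice": 2, "bob": 5}      # per-user active job cap (illustrative)
active_count = {"alice": 0, "bob": 0}    # jobs currently being processed per user
active_queue, throttled_queue = deque(), deque()

def submit(user, job):
    """Queue a job directly if the user is under their cap, otherwise throttle it."""
    if active_count[user] < active_cap[user]:
        active_count[user] += 1
        active_queue.append((user, job))
    else:
        throttled_queue.append((user, job))

def complete(user):
    """On completion, promote a throttled job whose user is now under their cap."""
    active_count[user] -= 1
    for i, (u, job) in enumerate(throttled_queue):
        if active_count[u] < active_cap[u]:
            del throttled_queue[i]
            active_count[u] += 1
            active_queue.append((u, job))
            break

for j in range(3):
    submit("alice", f"job-{j}")
print(len(active_queue), len(throttled_queue))   # 2 1: third job was throttled
complete("alice")
print(len(active_queue), len(throttled_queue))   # 3 0: promoted after completion
```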
  • Patent number: 11816777
    Abstract: There is provided a data processing system comprising a host processor and a processing resource operable to perform processing operations for applications executing on the host processor by executing commands within an appropriate command stream. The host processor is configured to generate a command stream layout indicating a sequence of commands for the command stream that is then provided to the processing resource. Some commands require sensor data. The processing resource is configured to process the sensor data into command stream data for inclusion into the command stream in order to populate the command stream for execution.
    Type: Grant
    Filed: February 25, 2022
    Date of Patent: November 14, 2023
    Assignee: Arm Limited
    Inventors: Maochang Dang, Anton Berko, Espen Amodt
  • Patent number: 11816501
    Abstract: Systems and methods are described for managing high volumes of alerts to increase security, reduce noise, reduce duplication of work, and increase productivity of analysts dealing with and triaging alerts. A work unit queue may be configured to buffer or smooth workflows and decouple heavy processing which may improve performance and scalability to prevent duplicate assignments. Queueing services provide lag times to prevent over-assignment or double assignment of alerts to work units. System security may be improved by creating an authentication or verification step before allowing users to update alert statuses such that only users with work unit tokens that match alert tokens may update alert statuses.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: November 14, 2023
    Assignee: ZeroFOX, Inc.
    Inventors: Samuel Kevin Small, Steven Craig Hanna, Jr., Zachary Michael Allen
  • Patent number: 11803934
    Abstract: One embodiment provides an apparatus comprising an interconnect fabric comprising one or more fabric switches, a plurality of memory interfaces coupled to the interconnect fabric to provide access to a plurality of memory devices, an input/output (IO) interface coupled to the interconnect fabric to provide access to IO devices, an array of multiprocessors coupled to the interconnect fabric, scheduling circuitry to distribute a plurality of thread groups across the array of multiprocessors, each thread group comprising a plurality of threads and each thread comprising a plurality of instructions to be executed by at least one of the multiprocessors, and a first multiprocessor of the array of multiprocessors to be assigned to process a first thread group comprising a first plurality of threads, the first multiprocessor comprising a plurality of parallel execution circuits.
    Type: Grant
    Filed: February 2, 2022
    Date of Patent: October 31, 2023
    Assignee: Intel Corporation
    Inventors: Balaji Vembu, Altug Koker, Joydeep Ray
  • Patent number: 11797182
    Abstract: A first computing device is part of a distributed electronic storage system (DESS) that also comprises one or more second computing devices. The first computing device comprises client process circuitry and DESS interface circuitry. The DESS interface circuitry is operable to: receive, from client process circuitry of the first computing device, a first client file system request that requires accessing a storage resource on one or more of the second computing devices; determine resources required for servicing of the first client file system request; generate a plurality of DESS file system requests for the first file system request; and transmit the plurality of DESS file system requests onto the one or more network links. How many such DESS file system requests are generated is determined based on the resources required for servicing the first client file system request.
    Type: Grant
    Filed: May 6, 2021
    Date of Patent: October 24, 2023
    Inventors: Maor Ben Dayan, Omri Palmon, Liran Zvibel, Kanael Arditti, Tomer Filiba
  • Patent number: 11797342
    Abstract: A method and a supporting node (150) for supporting a process scheduling node (110) when scheduling a process to a first execution node (130) of a cluster (120) of execution nodes (130, 140, 150) are disclosed. The supporting node (150) receives (A140), from the first execution node (130) being selected by the process scheduling node (110) for execution of the process, a request for allocation of one or more HA devices (131, 141, 151). The supporting node (150) allocates at least one HA device (141), being associated with a second execution node (140) of the cluster (120), to the first execution node (130). The supporting node (150) reduces a value representing number of HA devices (131, 141, 151) available for allocation to the first execution node (130) while taking said at least one HA device (141) into account. The supporting node (150) sends the value to the first execution node (130).
    Type: Grant
    Filed: June 20, 2018
    Date of Patent: October 24, 2023
    Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
    Inventors: Chakri Padala, Nhi Vo, Mozhgan Mahloo, Joao Monteiro Soares
  • Patent number: 11789771
    Abstract: Aspects of the disclosure provide methods and an apparatus including processing circuitry configured to receive workflow information of a workflow. The processing circuitry generates, based on the workflow information, the workflow including a first buffering task and a plurality of processing tasks that includes a first processing task and a second processing task. The first processing task is caused to enter a running state in which a subset of input data is processed and output to the first buffering task as first processed subset data. The first processing task is caused to transition from the running state to a non-running state based on an amount of the first processed subset data in the first buffering task being equal to a first threshold. Subsequently, the second processing task is caused to enter a running state in which the first processed subset data in the first buffering task is processed.
    Type: Grant
    Filed: September 17, 2020
    Date of Patent: October 17, 2023
    Assignee: Tencent America LLC
    Inventor: Iraj Sodagar
  • Patent number: 11780603
    Abstract: A system and method for compiling and dynamically reconfiguring the management of functionalities carried among a set of multiple common compute nodes. When one node of the set of multiple common compute nodes is inoperative, higher-criticality functionalities can be reassigned to other common nodes to ensure continued operation of the higher-criticality functionalities.
    Type: Grant
    Filed: November 2, 2020
    Date of Patent: October 10, 2023
    Assignee: GE Aviation Systems LLC
    Inventor: Brent Dale Hooker
  • Patent number: 11775195
    Abstract: An apparatus to facilitate copying surface data is disclosed. The apparatus includes copy engine hardware to receive a command to copy surface data from a source location in memory to a destination location in the memory, divide the surface data into a plurality of surface data sub-blocks, process the surface data sub-blocks to calculate virtual addresses to which accesses to the memory are to be performed, and perform the memory accesses.
    Type: Grant
    Filed: April 5, 2022
    Date of Patent: October 3, 2023
    Assignee: Intel Corporation
    Inventors: Prasoonkumar Surti, Nilay Mistry
  • Patent number: 11768716
    Abstract: In example implementations, a method includes receiving a request for a lock in a Mellor-Crummey Scott (MCS) lock protocol from a guest user that is context free (e.g., a process that does not bring a queue node). The lock determines that it contains a null value. The lock is granted to the guest user. A pi value is received from the guest user to store in the lock. The pi value notifies subsequent users that the guest user has the lock.
    Type: Grant
    Filed: October 22, 2020
    Date of Patent: September 26, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Hideaki Kimura, Tianzheng Wang, Milind M. Chabbi
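A much-simplified sketch of the guest fast path described above: a context-free guest may take the lock only when its tail is null, and it leaves a sentinel behind so later arrivals can tell a guest holds it. Real MCS locks use hardware compare-and-swap and per-thread queue nodes; here a Python mutex stands in for the CAS and the node-carrying waiter path is omitted entirely:

```python
import threading

PI = object()   # sentinel value a context-free guest stores in the lock

class GuestPath:
    """Guest fast path only; queued (node-carrying) waiters are not modeled."""
    def __init__(self):
        self._tail = None                      # None means the lock is free
        self._cas = threading.Lock()           # stands in for an atomic CAS

    def _compare_and_swap(self, expected, new):
        with self._cas:
            if self._tail is expected:
                self._tail = new
                return True
            return False

    def try_acquire_as_guest(self):
        # A guest brings no queue node, so it may only take a free (null) lock,
        # and it publishes PI so subsequent users know a guest is the holder.
        return self._compare_and_swap(None, PI)

    def release_as_guest(self):
        return self._compare_and_swap(PI, None)

lock = GuestPath()
assert lock.try_acquire_as_guest()       # lock was null: the guest gets it
assert not lock.try_acquire_as_guest()   # a second guest sees PI and backs off
assert lock.release_as_guest()
```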
  • Patent number: 11769061
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for receiving a request from a client to process a computational graph; obtaining data representing the computational graph, the computational graph comprising a plurality of nodes and directed edges, wherein each node represents a respective operation, wherein each directed edge connects a respective first node to a respective second node that represents an operation that receives, as input, an output of an operation represented by the respective first node; identifying a plurality of available devices for performing the requested operation; partitioning the computational graph into a plurality of subgraphs, each subgraph comprising one or more nodes in the computational graph; and assigning, for each subgraph, the operations represented by the one or more nodes in the subgraph to a respective available device in the plurality of available devices for operation.
    Type: Grant
    Filed: June 11, 2020
    Date of Patent: September 26, 2023
    Assignee: Google LLC
    Inventors: Paul A. Tucker, Jeffrey Adgate Dean, Sanjay Ghemawat, Yuan Yu
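A minimal sketch of partitioning a computational graph into subgraphs and assigning them to available devices, in the spirit of the abstract above; the graph, the greedy contiguous chunking used for partitioning, and the device list are illustrative simplifications:

```python
# Hypothetical computational graph: node -> nodes that consume its output.
graph = {"read": ["embed"], "embed": ["matmul"], "matmul": ["softmax"],
         "softmax": ["loss"], "loss": []}
devices = ["gpu:0", "gpu:1"]

def topological_order(graph):
    """Order nodes so every node appears after the nodes that feed it."""
    indegree = {n: 0 for n in graph}
    for outs in graph.values():
        for n in outs:
            indegree[n] += 1
    order, ready = [], [n for n, d in indegree.items() if d == 0]
    while ready:
        n = ready.pop()
        order.append(n)
        for m in graph[n]:
            indegree[m] -= 1
            if indegree[m] == 0:
                ready.append(m)
    return order

def partition_and_assign(graph, devices):
    """Split the topological order into contiguous chunks, one per device."""
    order = topological_order(graph)
    chunk = -(-len(order) // len(devices))          # ceiling division
    subgraphs = [order[i:i + chunk] for i in range(0, len(order), chunk)]
    return {dev: sub for dev, sub in zip(devices, subgraphs)}

print(partition_and_assign(graph, devices))
# {'gpu:0': ['read', 'embed', 'matmul'], 'gpu:1': ['softmax', 'loss']}
```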
  • Patent number: 11768706
    Abstract: The aspects of the present disclosure provide a method and an apparatus for implementing hardware resource allocation. For example, the apparatus includes processing circuitry. The processing circuitry obtains a first value that is indicative of an allocable resource quantity of a hardware resource in a computing device. The processing circuitry also receives a second value that is indicative of a requested resource quantity of the hardware resource by a user, and then determines whether the second value is greater than the first value. When the second value is determined to be less than or equal to the first value, the processing circuitry requests the computing device to allocate the hardware resource of the requested resource quantity to the user, and subtracts the second value from the first value to update the allocable resource quantity of the hardware resource in the computing device.
    Type: Grant
    Filed: August 6, 2021
    Date of Patent: September 26, 2023
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Guwu Yi, Biao Xu, Fan Yang, Jue Wang, Rui Yang
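A minimal sketch of the allocable-quantity check described above (a request is granted only when the requested quantity does not exceed the allocable quantity, which is then decremented); the resource names and quantities are illustrative:

```python
class ResourceAllocator:
    """Tracks the allocable quantity of one hardware resource (illustrative)."""
    def __init__(self, allocable_quantity):
        self.allocable = allocable_quantity          # the "first value"

    def allocate(self, requested_quantity):          # the "second value"
        if requested_quantity > self.allocable:
            return False                             # request exceeds what is allocable
        # Ask the device to allocate, then update the remaining allocable quantity.
        self.allocable -= requested_quantity
        return True

gpu_memory_gb = ResourceAllocator(allocable_quantity=16)
print(gpu_memory_gb.allocate(12))   # True: 4 GB remains allocable
print(gpu_memory_gb.allocate(8))    # False: only 4 GB remains
```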
  • Patent number: 11768690
    Abstract: A system may include a plurality of processors and a coprocessor. A plurality of coprocessor context priority registers corresponding to a plurality of contexts supported by the coprocessor may be included. The plurality of processors may use the plurality of contexts, and may program the coprocessor context priority register corresponding to a context with a value specifying a priority of the context relative to other contexts. An arbiter may arbitrate among instructions issued by the plurality of processors based on the priorities in the plurality of coprocessor context priority registers. In one embodiment, real-time threads may be assigned higher priorities than bulk processing tasks, improving bandwidth allocated to the real-time threads as compared to the bulk tasks.
    Type: Grant
    Filed: November 22, 2021
    Date of Patent: September 26, 2023
    Assignee: Apple Inc.
    Inventors: Aditya Kesiraju, Andrew J. Beaumont-Smith, Brian P. Lilly, James Vash, Jason M. Kassoff, Krishna C. Potnuru, Rajdeep L. Bhuyar, Ran A. Chachick, Tyler J. Huberty, Derek R. Kumar
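A minimal sketch of arbitrating pending coprocessor instructions by per-context priority registers, following the abstract above; the contexts, register values, and instruction strings are illustrative:

```python
# Per-context priority registers (higher value = higher priority, illustrative).
context_priority = {"realtime-audio": 7, "bulk-compress": 1, "ui-effects": 4}

# Pending instructions issued by processors, tagged with their context.
pending = [("bulk-compress", "matmul #12"),
           ("realtime-audio", "fir #3"),
           ("ui-effects", "blur #9")]

def arbitrate(pending, context_priority):
    """Pick the pending instruction whose context register holds the highest priority."""
    return max(pending, key=lambda entry: context_priority[entry[0]])

winner = arbitrate(pending, context_priority)
print(winner)   # ('realtime-audio', 'fir #3'): real-time work wins over bulk tasks
```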
  • Patent number: 11762689
    Abstract: An apparatus including a processor to: output a first request message onto a group sub-queue shared by multiple task containers to request execution of a first task routine; within a task container, respond to the first request message, by outputting a first task in-progress message onto an individual sub-queue not shared with other task containers to accede to executing the first task routine, followed by a task completion message; and respond to the task completion message by allowing the task completion message to remain on the individual sub-queue to keep the task container from executing another task routine from another request message on the group sub-queue, outputting a second request message onto the individual sub-queue to cause execution of a second task routine within the same task container to perform a second task, and responding to the second task in-progress message by de-queuing the task completion message.
    Type: Grant
    Filed: December 30, 2022
    Date of Patent: September 19, 2023
    Assignee: SAS Institute Inc.
    Inventors: Henry Gabriel Victor Bequet, Ronald Earl Stogner, Eric Jian Yang, Chaowang “Ricky” Zhang
  • Patent number: 11762685
    Abstract: A method and apparatus for scaling resources of a GPU in a cloud computing system are provided. The method includes receiving requests for services from a client device, queuing the received requests in a message bus based on a preset prioritization scheme; and scaling the resources of the GPU for the requests queued in the message bus according to a preset prioritization loop.
    Type: Grant
    Filed: July 16, 2020
    Date of Patent: September 19, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Danilo P. Ocray, Jr., Mark Andrew D. Bautista, Alvin Lee C. Ong, Joseph Alan P. Baking, Jaesung An, Manuel F Abarquez, Jr., Sungrok Yoon, Youngjin Kim
  • Patent number: 11757788
    Abstract: In a signal transfer system including a signal transfer management apparatus and a plurality of signal transfer apparatuses connected in multiple stages and forming a network between a distribution station apparatus and a central station apparatus, a signal transfer apparatus on an upper side among the plurality of signal transfer apparatuses transmits a timing adjustment request to at least one of a plurality of signal transfer apparatuses on a lower side among the plurality of signal transfer apparatuses upon determining that there is a possibility that a microburst occurs according to mobile scheduling information received from the plurality of signal transfer apparatuses on the lower side, and the signal transfer apparatus on the lower side that has received the timing adjustment request from the signal transfer apparatus on the upper side adjusts opening and closing timings of a gate based on the timing adjustment request. This can prevent the occurrence of a microburst.
    Type: Grant
    Filed: March 9, 2020
    Date of Patent: September 12, 2023
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Hiroko Nomura, Naotaka Shibata, Keita Takahashi, Tomoya Hatano
  • Patent number: 11748168
    Abstract: Methods and apparatus for flexible batch job scheduling in virtualization environments are disclosed. A descriptor for a batch job requested by a client is received at a job scheduling service. The descriptor comprises an indication of a time range during which a job iteration may be performed. A target time for executing the iteration is determined based on an analysis of a plurality of received descriptors. An indication of the target time at which the iteration is to be scheduled is provided to a selected execution platform.
    Type: Grant
    Filed: August 23, 2019
    Date of Patent: September 5, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Marcin Piotr Kowalski, Wesley Gavin King
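A minimal sketch of picking a target time inside each descriptor's allowed range so that iterations spread out, loosely following the abstract above; the descriptor fields and the load-spreading heuristic are illustrative:

```python
from collections import Counter

# Hypothetical job descriptors: each allows any start hour in [earliest, latest].
descriptors = [
    {"job": "log-rollup",    "earliest": 1, "latest": 5},
    {"job": "billing-sync",  "earliest": 1, "latest": 3},
    {"job": "index-rebuild", "earliest": 2, "latest": 4},
]

def choose_target_times(descriptors):
    """Within each job's window, pick the hour with the fewest jobs already placed."""
    load = Counter()
    targets = {}
    for d in descriptors:
        window = range(d["earliest"], d["latest"] + 1)
        hour = min(window, key=lambda h: (load[h], h))
        targets[d["job"]] = hour
        load[hour] += 1
    return targets

print(choose_target_times(descriptors))
# {'log-rollup': 1, 'billing-sync': 2, 'index-rebuild': 3}
```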
  • Patent number: 11740827
    Abstract: The present disclosure relates to a method, an electronic device, and a computer program product for recovering data. For example, a method for recovering data is provided. The method may comprise acquiring metadata corresponding to to-be-recovered target data, the metadata comprising at least a first part of metadata corresponding to a first set of data blocks and a second part of metadata corresponding to a second set of data blocks. The method may further comprise acquiring, based on the first part of metadata, the first set of data blocks from a first backup storage device in a plurality of backup storage devices that store the target data. The method may further comprise acquiring, based on the second part of metadata, the second set of data blocks from a second backup storage device in the plurality of backup storage devices. In addition, the method may further comprise recovering the target data based on at least the first set of data blocks and the second set of data blocks.
    Type: Grant
    Filed: May 14, 2020
    Date of Patent: August 29, 2023
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Weiyang Liu, Ming Zhang, Qi Wang, Aaron Ren Wang, Yuanyi Liu
  • Patent number: 11734307
    Abstract: Caching systems and methods are described. In one implementation, a method identifies multiple files used to process a query and distributes each of the multiple files to a particular execution node to execute the query. Each execution node determines whether the distributed file is stored in the execution node's cache. If the execution node determines that the file is stored in the cache, it processes the query using the cached file. If the file is not stored in the cache, the execution node retrieves the file from a remote storage device, stores the file in the execution node's cache, and processes the query using the file.
    Type: Grant
    Filed: December 16, 2020
    Date of Patent: August 22, 2023
    Assignee: Snowflake Inc.
    Inventors: Benoit Dageville, Thierry Cruanes, Marcin Zukowski
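A minimal sketch of the per-execution-node file cache described above, with remote storage reduced to an in-memory dict; the file names, node class, and fetch path are illustrative:

```python
# Stand-in for the remote storage device holding files needed to process queries.
remote_storage = {"part-001.parquet": b"...columnar data...",
                  "part-002.parquet": b"...columnar data..."}

class ExecutionNode:
    def __init__(self, name):
        self.name = name
        self.cache = {}                      # local file cache

    def process(self, query, needed_file):
        if needed_file not in self.cache:    # miss: fetch from remote and cache it
            print(f"{self.name}: cache miss, fetching {needed_file}")
            self.cache[needed_file] = remote_storage[needed_file]
        else:
            print(f"{self.name}: cache hit for {needed_file}")
        return f"{self.name} processed {query!r} using {needed_file}"

node = ExecutionNode("exec-node-1")
node.process("SELECT count(*)", "part-001.parquet")   # miss, then cached
node.process("SELECT avg(x)", "part-001.parquet")     # hit on the second query
```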
  • Patent number: 11714668
    Abstract: An implementation of the disclosure provides for identifying an amount of a resource associated with a virtual machine (VM) hosted by a first host machine of a plurality of host machines that are coupled to and are managed by a host controller, wherein a part of a quality manager is executed at the first host machine and another part of the quality manager is executed in the host controller. A requirement of an additional amount of the resource by the VM is determined in view of an occurrence of an event associated with the VM. The VM may be migrated to a second host machine of the plurality of host machines for a duration of the event in view of the additional amount of the resource.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: August 1, 2023
    Assignee: Red Hat Israel, Ltd.
    Inventor: Yaniv Kaul
  • Patent number: 11714900
    Abstract: An embodiment of the present invention is directed to a Re-Run Dropped Detection Tool that provides various features and tools to prepare, execute, and monitor the status of a Re-Run process. An embodiment of the present invention is directed to automated dispatch/monitoring of alert jobs as well as monitoring of a Re-Run as a Service (RRAAS) solution.
    Type: Grant
    Filed: August 25, 2020
    Date of Patent: August 1, 2023
    Assignee: JPMORGAN CHASE BANK, N.A.
    Inventors: Eshan Dave, Yusuf N. Kapadia, Rony Roy, Benjamin D. Smith, Jasir Mohammed Kundum Kadavuthu, Cosmin-Stefan Marin, Narasimham Gudimella, Pedro Gomez Garcia
  • Patent number: 11714829
    Abstract: Disclosed herein are system, method, and computer program product embodiments for replicating data from table in a source database to a target database. In some embodiments, data replication includes access plan delimitation and access plan calculation steps and is performed on a table having multiple partitions. A table may be divided into one or more partitions and each partition may be further divided into one or more access plans. Access plan delimitation may involve calculating, in parallel, boundaries of access plans within partitions of the table. Access plan calculation may be initiated on the first partition that has completed the access plan delimitation steps, and may involve transferring data from each delimited partition from the table in the source database to the target database.
    Type: Grant
    Filed: December 14, 2020
    Date of Patent: August 1, 2023
    Assignee: SAP SE
    Inventors: Alexander Becker, Sebastian Haase
  • Patent number: 11709824
    Abstract: Methods, systems, and computer program products are provided for consolidating transaction log requests and transaction logs in a database transaction log service. A scalable log service may manage log requests and logs to reduce resource consumption, such as memory and I/O. Log requests may be managed by consolidating (e.g., organizing, merging and/or de-duplicating) the log requests. Transaction log requests may be mapped to read requests for transaction log storage devices in less than a one-to-one ratio. Transaction logs may be managed by using the consolidated log requests to consolidate (e.g., and prefetch) transaction logs from multiple cache and/or storage tiers to a log pool cache. Log requests may be served from the log pool cache.
    Type: Grant
    Filed: September 22, 2021
    Date of Patent: July 25, 2023
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Alexander Budovski, Eric Dean Lee, Ashmita Raju, Srikanth Sampath