Priority Scheduling Patents (Class 718/103)
  • Patent number: 11915043
    Abstract: In some examples, a data management and storage (DMS) system comprises peer DMS nodes in a node cluster, a distributed data store comprising local and cloud storage, and an IO request scheduler comprising at least one processor configured to perform operations in a method of scheduling IO requests. Example operations comprise implementing a kernel scheduler to schedule a flow of IO requests in the DMS system, and providing an adjustment layer to adjust the kernel scheduler based on an IO request prioritization. A flow of IO requests is identified and some examples implement an IO request prioritization based on the adjustments made by the adjustment layer.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: February 27, 2024
    Assignee: Rubrik, Inc.
    Inventors: Vivek Sanjay Jain, Aravind Menon, Junyong Lee, Connie Xiao Zeng
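A minimal sketch of the pattern this abstract describes, assuming a toy FIFO stand-in for the kernel scheduler and an adjustment layer that re-orders pending IO requests by a caller-supplied priority; all class and field names here are illustrative, not taken from the patent.

```python
import heapq
from dataclasses import dataclass, field
from itertools import count

@dataclass(order=True)
class IORequest:
    priority: int                  # lower value = dispatched sooner after adjustment
    seq: int                       # arrival order; breaks ties so equal priorities stay FIFO
    op: str = field(compare=False)
    block: int = field(compare=False)

class KernelScheduler:
    """Stand-in for the underlying kernel scheduler: plain FIFO."""
    def __init__(self):
        self._pending = []

    def submit(self, request):
        self._pending.append(request)

    def drain(self):
        while self._pending:
            yield self._pending.pop(0)

class AdjustmentLayer:
    """Adjusts the kernel scheduler's flow by re-ordering requests by priority."""
    def __init__(self, scheduler):
        self._scheduler = scheduler

    def dispatch(self):
        heap = list(self._scheduler.drain())
        heapq.heapify(heap)                    # order by (priority, arrival)
        while heap:
            yield heapq.heappop(heap)

if __name__ == "__main__":
    seq = count()
    sched = KernelScheduler()
    for prio, op, block in [(2, "read", 10), (0, "write", 3), (1, "read", 7)]:
        sched.submit(IORequest(prio, next(seq), op, block))
    for req in AdjustmentLayer(sched).dispatch():
        print(req.op, req.block, "priority", req.priority)
```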
  • Patent number: 11900155
    Abstract: The present disclosure relates to a method, device and computer program product for processing a job. In a method, a first group of tasks in a first portion of the job are obtained, the first group of tasks being executable in parallel by a first group of processing devices. A plurality of priorities are set to a plurality of processing devices, respectively, based on a state of a processing resource of a processing device among the plurality of processing devices in a distributed processing system, the processing resource comprising at least one of a computing resource and a storage resource. The first group of processing devices are selected from the plurality of processing devices based on the plurality of priorities. The first group of tasks are allocated to the first group of processing devices, respectively, which process the first group of tasks for generating a first group of task results.
    Type: Grant
    Filed: February 26, 2020
    Date of Patent: February 13, 2024
    Assignee: EMC IP Holding Company LLC
    Inventors: YuHong Nie, Pengfei Wu, Jinpeng Liu, Zhen Jia
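A rough sketch of the allocation step in the abstract above, assuming each processing device reports how much of its computing and storage resource is free and that a device's priority is simply the sum of the two; the scoring rule and all names are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    free_cpu: float      # fraction of the computing resource currently free
    free_storage: float  # fraction of the storage resource currently free

def prioritize(devices):
    # Assumed rule: more free capacity -> higher priority.
    return sorted(devices, key=lambda d: d.free_cpu + d.free_storage, reverse=True)

def allocate(tasks, devices):
    """Allocate each task in the first group to one of the highest-priority devices."""
    chosen = prioritize(devices)[:len(tasks)]          # the "first group" of devices
    return {task: dev.name for task, dev in zip(tasks, chosen)}

if __name__ == "__main__":
    devices = [Device("d0", 0.2, 0.9), Device("d1", 0.8, 0.7), Device("d2", 0.5, 0.1)]
    print(allocate(["t0", "t1"], devices))             # {'t0': 'd1', 't1': 'd0'}
```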
  • Patent number: 11899939
    Abstract: A read/write request processing method and server are provided. In this method, each terminal is grouped, and different service durations are assigned for all terminal groups, so that a server can process, within any service duration, only a read/write request sent by a terminal in a terminal group corresponding to the service duration. According to the application, a cache area of a network interface card of the server is enabled to store only limited quantities of queue pairs (QPs) and work queue elements (WQEs), thereby preventing uneven resource distribution in the cache area of the network interface card.
    Type: Grant
    Filed: July 9, 2021
    Date of Patent: February 13, 2024
    Assignees: Huawei Technologies Co., Ltd., TSINGHUA UNIVERSITY
    Inventors: Jiwu Shu, Youmin Chen, Youyou Lu, Wenlin Cui
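A toy illustration of the time slicing described above, assuming terminals are grouped round-robin and each group owns a fixed-length service duration in turn; only requests from the group that owns the current slot are admitted. The grouping rule, group count, and slot length are assumptions made for the example.

```python
def group_terminals(terminal_ids, group_count):
    """Round-robin grouping of terminals (an assumed rule, not the patent's)."""
    groups = [[] for _ in range(group_count)]
    for i, tid in enumerate(terminal_ids):
        groups[i % group_count].append(tid)
    return groups

def admitted(terminal_id, groups, now, slot_seconds=10):
    """True if the terminal's group owns the service duration covering time `now`."""
    active_group = int(now // slot_seconds) % len(groups)
    return terminal_id in groups[active_group]

if __name__ == "__main__":
    groups = group_terminals(range(8), group_count=4)   # [[0, 4], [1, 5], [2, 6], [3, 7]]
    print(admitted(5, groups, now=12))                   # group 1 owns seconds 10-20 -> True
    print(admitted(0, groups, now=12))                   # group 0 must wait its turn -> False
```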
  • Patent number: 11895201
    Abstract: A multitenancy system that includes a host provider, a programmable device, and multiple tenants is provided. The host provider may publish a multitenancy mode sharing and allocation policy that includes a list of terms to which the programmable device and tenants can adhere. The programmable device may include a secure device manager configured to operate in a multitenancy mode to load a tenant persona into a given partial reconfiguration (PR) sandbox region on the programmable device. The secure device manager may be used to enforce spatial isolation between different PR sandbox regions and temporal isolation between successive tenants in one PR sandbox region.
    Type: Grant
    Filed: March 27, 2020
    Date of Patent: February 6, 2024
    Assignee: Intel Corporation
    Inventors: Steffen Schulz, Patrick Koeberl, Alpa Narendra Trivedi, Scott Weber
  • Patent number: 11886911
    Abstract: At least one processing device comprises a processor and a memory coupled to the processor. The at least one processing device is configured to associate different classes of service with respective threads of one or more applications executing on at least one of a plurality of processing cores of a storage system, to configure different sets of prioritized thread queues for respective ones of the different classes of service, to enqueue particular ones of the threads associated with particular ones of the classes of service in corresponding ones of the prioritized thread queues, and to implement different dequeuing policies for selecting particular ones of the enqueued threads from the different sets of prioritized thread queues based at least in part on the different classes of service. The at least one processing device illustratively comprises at least a subset of the plurality of processing cores of the storage system.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: January 30, 2024
    Assignee: EMC IP Holding Company LLC
    Inventors: Vladimir Shveidel, Lior Kamran
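A compact sketch of per-class prioritized thread queues with a weighted dequeuing policy, assuming three classes of service served weighted round-robin; the class names and weights are invented for illustration and are not the dequeuing policies of the patent.

```python
from collections import deque
from itertools import cycle

class ClassOfServiceScheduler:
    """One prioritized queue per class of service; dequeuing favors higher classes."""
    def __init__(self, weights):
        self.queues = {cls: deque() for cls in weights}
        # Weighted round-robin: a class with weight w gets w slots per cycle.
        order = [cls for cls, w in weights.items() for _ in range(w)]
        self._slots = cycle(order)
        self._cycle_len = len(order)

    def enqueue(self, cls, thread):
        self.queues[cls].append(thread)

    def dequeue(self):
        # One full pass over the weighted cycle visits every class at least once.
        for _ in range(self._cycle_len):
            cls = next(self._slots)
            if self.queues[cls]:
                return cls, self.queues[cls].popleft()
        return None    # every queue is empty

if __name__ == "__main__":
    sched = ClassOfServiceScheduler({"realtime": 3, "normal": 2, "background": 1})
    for i in range(3):
        sched.enqueue("background", f"bg-{i}")
        sched.enqueue("realtime", f"rt-{i}")
    while (item := sched.dequeue()) is not None:
        print(item)    # real-time threads are dequeued ahead of background threads
```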
  • Patent number: 11888700
    Abstract: A method for isolation in the CN domain of a network slice includes receiving a slice isolation policy and establishing a CN NSS isolation policy based on the slice isolation policy. When the CN NSS isolation policy includes a network resource isolation policy, the network resource isolation policy is mapped to a network resource allocation policy, a part of which relating to physical resources is sent to a network function management function (NFMF) and a part relating to virtual resources is sent to a network function virtualization management and orchestration function. When the NSS isolation policy includes an application level isolation policy, the application level isolation policy is mapped to an application level policy which is sent to the NFMF.
    Type: Grant
    Filed: August 3, 2020
    Date of Patent: January 30, 2024
    Assignee: Nokia Solutions and Networks Oy
    Inventors: Zhiyuan Hu, Jing Ping, Zhigang Luo, Wen Wei
  • Patent number: 11853771
    Abstract: A branded fleet server system includes a pre-assembled third-party computer system integrated into a chassis of the branded fleet server system. The pre-assembled third-party computer system is configured to execute proprietary software that is only licensed for use on branded hardware. A virtualization offloading component is included in the server chassis of the branded fleet server along with the pre-assembled third-party computer system. The virtualization offloading component acts as a bridge between the pre-assembled third-party computer system and a virtualized computing service. As such, the virtualization offloading component manages communications, security, metadata, etc. to allow the pre-assembled computer system to function as one of a fleet of virtualization hosts of the virtualized computing service.
    Type: Grant
    Filed: September 24, 2019
    Date of Patent: December 26, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Peter Zachary Bowen, Darin Lee Frink, Eric Robert Northup, David A Skirmont, Manish Singh Rathaur
  • Patent number: 11853791
    Abstract: Transaction scheduling is described for a user data cache by assessing update criteria. In one example, an event records memory stores a list of events, each corresponding to performance of a transaction at a remote resource for a user. The memory has criteria for each event and a criterion value for each criterion and event combination. An event manager assesses criteria for each event by performing an operation on the stored criterion value for each criterion and event combination, assigning a score for each criterion and event combination, and compiling the assigned scores to generate a composite score for each event. The events are ordered based on the respective composite scores and executed in the ordered sequence by performing a corresponding transaction at the remote resource. Updated criterion values are stored for executed events.
    Type: Grant
    Filed: February 8, 2021
    Date of Patent: December 26, 2023
    Assignee: BILLGO, INC.
    Inventors: Stephen Ryan Gordon, Terry Lentz, Jr., Kalyanaraman Ganesan, Richard Yiu-Sai Chung
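A small sketch of the scoring-and-ordering step, assuming each event stores a value per criterion, each criterion has a weight, and the composite score is the weighted sum; the criteria names, weights, and the "higher score first" convention are placeholders.

```python
def composite_score(event, weights):
    """Assign a score per criterion/value pair, then compile them into one composite score."""
    return sum(weights[criterion] * value for criterion, value in event["criteria"].items())

def order_events(events, weights):
    # Assumed convention: higher composite score is executed first.
    return sorted(events, key=lambda e: composite_score(e, weights), reverse=True)

if __name__ == "__main__":
    weights = {"staleness": 2.0, "user_activity": 1.0}
    events = [
        {"id": "refresh-bills", "criteria": {"staleness": 0.9, "user_activity": 0.1}},
        {"id": "refresh-balance", "criteria": {"staleness": 0.2, "user_activity": 0.8}},
    ]
    for event in order_events(events, weights):
        print(event["id"], round(composite_score(event, weights), 2))
```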
  • Patent number: 11842212
    Abstract: The disclosure includes systems and methods for determining a change window for taking an application off-line. The systems and methods include mapping application programming interfaces (APIs) to one or more applications, and based on the API mapping, determining a priority level for each of the applications. A network traffic volume for each of the APIs mapped to the applications is predicted. Based on the predicted network traffic volume and the priority level of each of the applications, the systems and methods determine a change window for taking the applications off-line.
    Type: Grant
    Filed: February 7, 2020
    Date of Patent: December 12, 2023
    Assignee: Mastercard International Incorporated
    Inventor: Chandra Sekhar Duggirala
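A minimal sketch of choosing a change window, assuming hourly traffic predictions per API, a fixed API-to-application mapping, and a rule that weights each API's predicted volume by its application's priority and picks the hour with the lowest weighted total; every name and number here is illustrative.

```python
def weighted_hourly_load(predicted, api_to_app, app_priority):
    """predicted maps api -> 24 hourly volumes; returns priority-weighted load per hour."""
    hours = [0.0] * 24
    for api, volumes in predicted.items():
        weight = app_priority[api_to_app[api]]
        for hour, volume in enumerate(volumes):
            hours[hour] += weight * volume
    return hours

def change_window(predicted, api_to_app, app_priority):
    hours = weighted_hourly_load(predicted, api_to_app, app_priority)
    return min(range(24), key=hours.__getitem__)      # the quietest weighted hour wins

if __name__ == "__main__":
    predicted = {"auth": [50] * 24, "report": [5] * 6 + [80] * 18}
    predicted["auth"][3] = 5                           # auth traffic dips at 03:00
    api_to_app = {"auth": "payments", "report": "analytics"}
    app_priority = {"payments": 10, "analytics": 1}
    print("take the applications off-line at hour", change_window(predicted, api_to_app, app_priority))
```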
  • Patent number: 11829913
    Abstract: In one aspect of the present disclosure, activities of users within an IT service end user are recorded. A given record of a user activity may include the command executed by the user, input parameter(s) to the command, output of the command, and/or any other type of execution information. In implementations, intelligence may be built into a proxy module corresponding to the command to track an execution of the command started by the user. The execution of the command is captured and stored in a buffer such that another user within the IT service end user can review the execution of the command. In another aspect, user interfaces are provided that enable a user within an IT service end user to review activities performed by another user within the IT service end user in administering IT services.
    Type: Grant
    Filed: January 19, 2021
    Date of Patent: November 28, 2023
    Assignee: SkyKick, Inc.
    Inventors: Christopher Rayner, Evan Richman, Bradley Younge, Robert P. Karaban, John Dennis, Todd Schwartz, Darren D. Peterson, Peter Joseph Wilkins, Matthew Steven Hintzke, Sergii Semenov, Alex Zammitt, Philip Pittle
  • Patent number: 11824791
    Abstract: A switching system having input ports and output ports and comprising an input queued (IQ) switch with virtual channels. Typically, only one virtual channel can, at a given time, access a given output port. Typically, the IQ switch includes an arbiter apparatus that controls the input ports and output ports to ensure that an input port transmits at most one cell at a time, and/or that an output port receives a cell over only one virtual channel, and/or an output port receives at most one cell at a time.
    Type: Grant
    Filed: October 22, 2021
    Date of Patent: November 21, 2023
    Assignee: NVIDIA CORPORATION
    Inventors: Anil Mugu, Srijith Haridas
  • Patent number: 11822805
    Abstract: Embodiments of the present disclosure describe a memory reclaiming method and a terminal. As discussed with respect to the embodiments described herein, the method may include determining, by a terminal according to a preset rule, a target application program in application programs run on a background, where the target application program is an application program that needs to be cleaned. The method may also include freezing, by the terminal, the target application program, and reclaiming data generated during running of a process of the target application program in memory. The method may also include unfreezing, by the terminal when receiving an input triggering instruction for the target application program, the target application program, and running the target application program.
    Type: Grant
    Filed: March 28, 2019
    Date of Patent: November 21, 2023
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Qiulin Chen, Bailin Wen, Xiaojun Duan
  • Patent number: 11822675
    Abstract: A method and a corresponding system are provided for encrypting customer workload data through a trusted entity such as a self-boot engine (SBE). More specifically, the method and system securely extract customer-centric data in a manner that requires the customer payloads and/or workloads to register with the SBE and share the encryption key.
    Type: Grant
    Filed: June 24, 2021
    Date of Patent: November 21, 2023
    Assignee: International Business Machines Corporation
    Inventors: Raja Das, Sachin Gupta, Santosh Balasubramanian, Sandeep Korrapati
  • Patent number: 11822971
    Abstract: In a Boundaryless Control High Availability ("BCHA") system (e.g., industrial control system) comprising multiple computing resources (or computational engines) running on multiple machines, technology for computing in real time the overall system availability based upon the capabilities/characteristics of the available computing resources, applications to execute, and the distribution of the applications across those resources is disclosed. In some embodiments, the disclosed technology can dynamically manage, coordinate, and recommend certain actions to system operators to maintain availability of the overall system at a desired level. High Availability features may be implemented across a variety of different computing resources distributed across various aspects of a BCHA system and/or computing resources. Two example implementations of BCHA systems described involve an M:N working configuration and an M:N+R working configuration.
    Type: Grant
    Filed: April 21, 2021
    Date of Patent: November 21, 2023
    Assignee: Schneider Electric Systems USA, Inc.
    Inventors: Raja Ramana Macha, Andrew Lee David Kling, Frans Middeldorp, Nestor Jesus Camino, Jr., James Gerard Luth, James P. McIntyre
  • Patent number: 11822959
    Abstract: Methods and systems for processing requests with load-dependent throttling. The system compares a count of active job requests being currently processed for a user associated with a new job request with an active job cap number for that user. When the count of active job requests being currently processed for that user does not exceed the active job cap number specific to that user, the job request is added to an active job queue for processing. However, when the count of active job requests being currently processed for that user exceeds the active job cap number, the job request is placed on a throttled queue to await later processing when an updated count of active job requests being currently processed for that user is below the active job cap number. Once the count is below the cap, the throttled request is moved to the active job queue for processing.
    Type: Grant
    Filed: February 18, 2022
    Date of Patent: November 21, 2023
    Assignee: Shopify Inc.
    Inventors: Robert Mic, Aline Fatima Manera, Timothy Willard, Nicole Simone, Scott Weber
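A compact sketch of the per-user cap logic described above, assuming an in-memory count of active jobs, one throttled queue per user, and promotion of a throttled job whenever a completion drops the count below the cap; the names are illustrative.

```python
from collections import defaultdict, deque

class JobThrottler:
    def __init__(self, cap_per_user, default_cap=1):
        self.cap = cap_per_user                  # active job cap, specific to each user
        self.default_cap = default_cap
        self.active = defaultdict(int)           # user -> count of active job requests
        self.active_queue = deque()              # jobs admitted for processing
        self.throttled = defaultdict(deque)      # user -> jobs awaiting later processing

    def _cap(self, user):
        return self.cap.get(user, self.default_cap)

    def submit(self, user, job):
        if self.active[user] < self._cap(user):
            self.active[user] += 1
            self.active_queue.append((user, job))
        else:
            self.throttled[user].append(job)     # wait until the count is below the cap

    def complete(self, user):
        self.active[user] -= 1
        if self.throttled[user] and self.active[user] < self._cap(user):
            self.submit(user, self.throttled[user].popleft())   # promote a throttled job

if __name__ == "__main__":
    throttler = JobThrottler({"alice": 2})
    for job in ("export-1", "export-2", "export-3"):
        throttler.submit("alice", job)
    print(list(throttler.active_queue))          # export-3 is on the throttled queue
    throttler.complete("alice")
    print(list(throttler.active_queue))          # export-3 has been admitted
```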
  • Patent number: 11816777
    Abstract: There is provided a data processing system comprising a host processor and a processing resource operable to perform processing operations for applications executing on the host processor by executing commands within an appropriate command stream. The host processor is configured to generate a command stream layout indicating a sequence of commands for the command stream that is then provided to the processing resource. Some commands require sensor data. The processing resource is configured to process the sensor data into command stream data for inclusion into the command stream in order to populate the command stream for execution.
    Type: Grant
    Filed: February 25, 2022
    Date of Patent: November 14, 2023
    Assignee: Arm Limited
    Inventors: Maochang Dang, Anton Berko, Espen Amodt
  • Patent number: 11816501
    Abstract: Systems and methods are described for managing high volumes of alerts to increase security, reduce noise, reduce duplication of work, and increase productivity of analysts dealing with and triaging alerts. A work unit queue may be configured to buffer or smooth workflows and decouple heavy processing which may improve performance and scalability to prevent duplicate assignments. Queueing services provide lag times to prevent over-assignment or double assignment of alerts to work units. System security may be improved by creating an authentication or verification step before allowing users to update alert statuses such that only users with work unit tokens that match alert tokens may update alert statuses.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: November 14, 2023
    Assignee: ZeroFOX, Inc.
    Inventors: Samuel Kevin Small, Steven Craig Hanna, Jr., Zachary Michael Allen
  • Patent number: 11803934
    Abstract: One embodiment provides an apparatus comprising an interconnect fabric comprising one or more fabric switches, a plurality of memory interfaces coupled to the interconnect fabric to provide access to a plurality of memory devices, an input/output (IO) interface coupled to the interconnect fabric to provide access to IO devices, an array of multiprocessors coupled to the interconnect fabric, scheduling circuitry to distribute a plurality of thread groups across the array of multiprocessors, each thread group comprising a plurality of threads and each thread comprising a plurality of instructions to be executed by at least one of the multiprocessors, and a first multiprocessor of the array of multiprocessors to be assigned to process a first thread group comprising a first plurality of threads, the first multiprocessor comprising a plurality of parallel execution circuits.
    Type: Grant
    Filed: February 2, 2022
    Date of Patent: October 31, 2023
    Assignee: Intel Corporation
    Inventors: Balaji Vembu, Altug Koker, Joydeep Ray
  • Patent number: 11797182
    Abstract: A first computing device is part of a distributed electronic storage system (DESS) that also comprises one or more second computing devices. The first computing device comprises client process circuitry and DESS interface circuitry. The DESS interface circuitry is operable to: receive, from client process circuitry of the first computing device, a first client file system request that requires accessing a storage resource on one or more of the second computing devices; determine resources required for servicing of the first client file system request; generate a plurality of DESS file system requests for the first file system request; and transmit the plurality of DESS file system requests onto the one or more network links. How many such DESS file system requests are generated is determined based on the resources required for servicing the first client file system request.
    Type: Grant
    Filed: May 6, 2021
    Date of Patent: October 24, 2023
    Inventors: Maor Ben Dayan, Omri Palmon, Liran Zvibel, Kanael Arditti, Tomer Filiba
  • Patent number: 11797342
    Abstract: A method and a supporting node (150) for supporting a process scheduling node (110) when scheduling a process to a first execution node (130) of a cluster (120) of execution nodes (130, 140, 150) are disclosed. The supporting node (150) receives (A140), from the first execution node (130) being selected by the process scheduling node (110) for execution of the process, a request for allocation of one or more HA devices (131, 141, 151). The supporting node (150) allocates at least one HA device (141), being associated with a second execution node (140) of the cluster (120), to the first execution node (130). The supporting node (150) reduces a value representing the number of HA devices (131, 141, 151) available for allocation to the first execution node (130) while taking said at least one HA device (141) into account. The supporting node (150) sends the value to the first execution node (130).
    Type: Grant
    Filed: June 20, 2018
    Date of Patent: October 24, 2023
    Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
    Inventors: Chakri Padala, Nhi Vo, Mozhgan Mahloo, Joao Monteiro Soares
  • Patent number: 11789771
    Abstract: Aspects of the disclosure provide methods and an apparatus including processing circuitry configured to receive workflow information of a workflow. The processing circuitry generates, based on the workflow information, the workflow including a first buffering task and a plurality of processing tasks that includes a first processing task and a second processing task. The first processing task is caused to enter a running state in which a subset of input data is processed and output to the first buffering task as first processed subset data. The first processing task is caused to transition from the running state to a non-running state based on an amount of the first processed subset data in the first buffering task being equal to a first threshold. Subsequently, the second processing task is caused to enter a running state in which the first processed subset data in the first buffering task is processed.
    Type: Grant
    Filed: September 17, 2020
    Date of Patent: October 17, 2023
    Assignee: Tencent America LLC
    Inventor: Iraj Sodagar
  • Patent number: 11780603
    Abstract: A system and method for compiling and dynamically reconfiguring the management of functionalities carried among a set of multiple common compute nodes. When one node of the set of multiple common compute nodes becomes inoperative, higher-criticality functionalities can be reassigned to other common nodes to ensure maintained operation of the higher-criticality functionalities.
    Type: Grant
    Filed: November 2, 2020
    Date of Patent: October 10, 2023
    Assignee: GE Aviation Systems LLC
    Inventor: Brent Dale Hooker
  • Patent number: 11775195
    Abstract: An apparatus to facilitate copying surface data is disclosed. The apparatus includes copy engine hardware to receive a command to copy surface data from a source location in memory to a destination location in the memory, divide the surface data into a plurality of surface data sub-blocks, process the surface data sub-blocks to calculate virtual addresses to which accesses to the memory are to be performed, and perform the memory accesses.
    Type: Grant
    Filed: April 5, 2022
    Date of Patent: October 3, 2023
    Assignee: Intel Corporation
    Inventors: Prasoonkumar Surti, Nilay Mistry
  • Patent number: 11768716
    Abstract: In example implementations, a method includes receiving a request for a lock in a Mellor-Crummey Scott (MCS) lock protocol from a guest user that is context free (e.g., a process that does not bring a queue node). The lock determines that it contains a null value. The lock is granted to the guest user. A pi value is received from the guest user to store in the lock. The pi value notifies subsequent users that the guest user has the lock.
    Type: Grant
    Filed: October 22, 2020
    Date of Patent: September 26, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Hideaki Kimura, Tianzheng Wang, Milind M. Chabbi
  • Patent number: 11769061
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for receiving a request from a client to process a computational graph; obtaining data representing the computational graph, the computational graph comprising a plurality of nodes and directed edges, wherein each node represents a respective operation, wherein each directed edge connects a respective first node to a respective second node that represents an operation that receives, as input, an output of an operation represented by the respective first node; identifying a plurality of available devices for performing the requested operation; partitioning the computational graph into a plurality of subgraphs, each subgraph comprising one or more nodes in the computational graph; and assigning, for each subgraph, the operations represented by the one or more nodes in the subgraph to a respective available device in the plurality of available devices for operation.
    Type: Grant
    Filed: June 11, 2020
    Date of Patent: September 26, 2023
    Assignee: Google LLC
    Inventors: Paul A. Tucker, Jeffrey Adgate Dean, Sanjay Ghemawat, Yuan Yu
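A toy sketch of the partition-and-assign step, assuming the graph is given as a predecessor map and nodes are dealt round-robin, in topological order, to the available devices; real placement weighs costs and communication, so this is only the shape of the assignment, not the patent's method.

```python
from graphlib import TopologicalSorter

def partition(graph, devices):
    """graph: {node: set of predecessor nodes}; returns {device: [nodes of its subgraph]}."""
    order = list(TopologicalSorter(graph).static_order())    # respect directed edges
    subgraphs = {device: [] for device in devices}
    for i, node in enumerate(order):
        subgraphs[devices[i % len(devices)]].append(node)    # naive round-robin placement
    return subgraphs

if __name__ == "__main__":
    # matmul feeds relu; relu and bias feed loss.
    graph = {"matmul": set(), "bias": set(), "relu": {"matmul"}, "loss": {"relu", "bias"}}
    print(partition(graph, ["gpu:0", "gpu:1"]))
```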
  • Patent number: 11768706
    Abstract: The aspects of the present disclosure provide a method and an apparatus for implementing hardware resource allocation. For example, the apparatus includes processing circuitry. The processing circuitry obtains a first value that is indicative of an allocable resource quantity of a hardware resource in a computing device. The processing circuitry also receives a second value that is indicative of a requested resource quantity of the hardware resource by a user, and then determines whether the second value is greater than the first value. When the second value is determined to be less than or equal to the first value, the processing circuitry requests the computing device to allocate the hardware resource of the requested resource quantity to the user, and subtracts the second value from the first value to update the allocable resource quantity of the hardware resource in the computing device.
    Type: Grant
    Filed: August 6, 2021
    Date of Patent: September 26, 2023
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Guwu Yi, Biao Xu, Fan Yang, Jue Wang, Rui Yang
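The check in the abstract above reduces to a compare-and-subtract on the allocable quantity; a minimal sketch, assuming a single hardware resource tracked as one integer.

```python
class ResourcePool:
    """Tracks the allocable quantity (the 'first value') of one hardware resource."""
    def __init__(self, allocable):
        self.allocable = allocable

    def request(self, user, requested):
        # `requested` is the 'second value'; grant it only if it fits.
        if requested > self.allocable:
            print(f"denied {requested} units for {user}: only {self.allocable} allocable")
            return False
        self.allocable -= requested            # update the allocable resource quantity
        print(f"allocated {requested} units to {user}, {self.allocable} left")
        return True

if __name__ == "__main__":
    pool = ResourcePool(allocable=8)
    pool.request("user-a", 5)    # granted, 3 left
    pool.request("user-b", 4)    # denied: 4 > 3
```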
  • Patent number: 11768690
    Abstract: A system may include a plurality of processors and a coprocessor. A plurality of coprocessor context priority registers corresponding to a plurality of contexts supported by the coprocessor may be included. The plurality of processors may use the plurality of contexts, and may program the coprocessor context priority register corresponding to a context with a value specifying a priority of the context relative to other contexts. An arbiter may arbitrate among instructions issued by the plurality of processors based on the priorities in the plurality of coprocessor context priority registers. In one embodiment, real-time threads may be assigned higher priorities than bulk processing tasks, improving bandwidth allocated to the real-time threads as compared to the bulk tasks.
    Type: Grant
    Filed: November 22, 2021
    Date of Patent: September 26, 2023
    Assignee: Apple Inc.
    Inventors: Aditya Kesiraju, Andrew J. Beaumont-Smith, Brian P. Lilly, James Vash, Jason M. Kassoff, Krishna C. Potnuru, Rajdeep L. Bhuyar, Ran A. Chachick, Tyler J. Huberty, Derek R. Kumar
  • Patent number: 11762689
    Abstract: An apparatus including a processor to: output a first request message onto a group sub-queue shared by multiple task containers to request execution of a first task routine; within a task container, respond to the first request message, by outputting a first task in-progress message onto an individual sub-queue not shared with other task containers to accede to executing the first task routine, followed by a task completion message; and respond to the task completion message by allowing the task completion message to remain on the individual sub-queue to keep the task container from executing another task routine from another request message on the group sub-queue, outputting a second request message onto the individual sub-queue to cause execution of a second task routine within the same task container to perform a second task, and responding to the second task in-progress message by de-queuing the task completion message.
    Type: Grant
    Filed: December 30, 2022
    Date of Patent: September 19, 2023
    Assignee: SAS Institute Inc.
    Inventors: Henry Gabriel Victor Bequet, Ronald Earl Stogner, Eric Jian Yang, Chaowang “Ricky” Zhang
  • Patent number: 11762685
    Abstract: A method and apparatus for scaling resources of a GPU in a cloud computing system are provided. The method includes receiving requests for services from a client device, queuing the received requests in a message bus based on a preset prioritization scheme; and scaling the resources of the GPU for the requests queued in the message bus according to a preset prioritization loop.
    Type: Grant
    Filed: July 16, 2020
    Date of Patent: September 19, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Danilo P. Ocray, Jr., Mark Andrew D. Bautista, Alvin Lee C. Ong, Joseph Alan P. Baking, Jaesung An, Manuel F Abarquez, Jr., Sungrok Yoon, Youngjin Kim
  • Patent number: 11757788
    Abstract: In a signal transfer system including a signal transfer management apparatus and a plurality of signal transfer apparatuses connected in multiple stages and forming a network between a distribution station apparatus and a central station apparatus, a signal transfer apparatus on an upper side among the plurality of signal transfer apparatuses transmits a timing adjustment request to at least one of a plurality of signal transfer apparatuses on a lower side among the plurality of signal transfer apparatuses upon determining that there is a possibility that a microburst occurs according to mobile scheduling information received from the plurality of signal transfer apparatuses on the lower side. The signal transfer apparatus on the lower side that has received the timing adjustment request from the signal transfer apparatus on the upper side adjusts opening and closing timings of a gate based on the timing adjustment request. This can prevent the occurrence of a microburst.
    Type: Grant
    Filed: March 9, 2020
    Date of Patent: September 12, 2023
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Hiroko Nomura, Naotaka Shibata, Keita Takahashi, Tomoya Hatano
  • Patent number: 11748168
    Abstract: Methods and apparatus for flexible batch job scheduling in virtualization environments are disclosed. A descriptor for a batch job requested by a client is received at a job scheduling service. The descriptor comprises an indication of a time range during which a job iteration may be performed. A target time for executing the iteration is determined based on an analysis of a plurality of received descriptors. An indication of the target time at which the iteration is to be scheduled is provided to a selected execution platform.
    Type: Grant
    Filed: August 23, 2019
    Date of Patent: September 5, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Marcin Piotr Kowalski, Wesley Gavin King
  • Patent number: 11740827
    Abstract: The present disclosure relates to a method, an electronic device, and a computer program product for recovering data. For example, a method for recovering data is provided. The method may comprise acquiring metadata corresponding to to-be-recovered target data, the metadata comprising at least a first part of metadata corresponding to a first set of data blocks and a second part of metadata corresponding to a second set of data blocks. The method may further comprise acquiring, based on the first part of metadata, the first set of data blocks from a first backup storage device in a plurality of backup storage devices that store the target data. The method may further comprise acquiring, based on the second part of metadata, the second set of data blocks from a second backup storage device in the plurality of backup storage devices. In addition, the method may further comprise recovering the target data based on at least the first set of data blocks and the second set of data blocks.
    Type: Grant
    Filed: May 14, 2020
    Date of Patent: August 29, 2023
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Weiyang Liu, Ming Zhang, Qi Wang, Aaron Ren Wang, Yuanyi Liu
  • Patent number: 11734307
    Abstract: Caching systems and methods are described. In one implementation, a method identifies multiple files used to process a query and distributes each of the multiple files to a particular execution node to execute the query. Each execution node determines whether the distributed file is stored in the execution node's cache. If the execution node determines that the file is stored in the cache, it processes the query using the cached file. If the file is not stored in the cache, the execution node retrieves the file from a remote storage device, stores the file in the execution node's cache, and processes the query using the file.
    Type: Grant
    Filed: December 16, 2020
    Date of Patent: August 22, 2023
    Assignee: Snowflake Inc.
    Inventors: Benoit Dageville, Thierry Cruanes, Marcin Zukowski
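A minimal sketch of the per-execution-node cache check described above, assuming a dict-backed cache, a dict standing in for the remote storage device, and round-robin file distribution; names and the distribution rule are illustrative.

```python
class ExecutionNode:
    def __init__(self, name, remote_storage):
        self.name = name
        self.cache = {}                   # file name -> file contents (the node's cache)
        self.remote = remote_storage      # stand-in for the remote storage device

    def process(self, file_name, query):
        if file_name not in self.cache:                     # cache miss
            self.cache[file_name] = self.remote[file_name]  # retrieve, then store in cache
        rows = self.cache[file_name]
        return f"{self.name} ran {query!r} over {len(rows)} rows of {file_name}"

def distribute(files, nodes):
    """Deal each file used to process the query to a particular execution node."""
    return {f: nodes[i % len(nodes)] for i, f in enumerate(files)}

if __name__ == "__main__":
    remote = {"part-0": list(range(100)), "part-1": list(range(50))}
    nodes = [ExecutionNode("node-a", remote), ExecutionNode("node-b", remote)]
    for file_name, node in distribute(sorted(remote), nodes).items():
        print(node.process(file_name, "SELECT count(*)"))
```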
  • Patent number: 11714668
    Abstract: An implementation of the disclosure provides for identifying an amount of a resource associated with a virtual machine (VM) hosted by a first host machine of a plurality of host machines that are coupled to and are managed by a host controller, wherein a part of a quality manager is executed at the first host machine and another part of the quality manager is executed in the host controller. A requirement of an additional amount of resource by the VM is determined in view of an occurrence of an event associated with the VM. The VM may be migrated to a second host machine of the plurality of host machines for a duration of the event in view of the additional amount of the resource.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: August 1, 2023
    Assignee: Red Hat Israel, Ltd.
    Inventor: Yaniv Kaul
  • Patent number: 11714900
    Abstract: An embodiment of the present invention is directed to a Re-Run Dropped Detection Tool that provides various features and tools to prepare, execute, and monitor the status of a Re-Run process. An embodiment of the present invention is directed to an automated dispatch/monitoring of alert jobs as well as monitoring of a Re-Run as a Service (RRAAS) solution.
    Type: Grant
    Filed: August 25, 2020
    Date of Patent: August 1, 2023
    Assignee: JPMORGAN CHASE BANK, N.A.
    Inventors: Eshan Dave, Yusuf N. Kapadia, Rony Roy, Benjamin D. Smith, Jasir Mohammed Kundum Kadavuthu, Cosmin-Stefan Marin, Narasimham Gudimella, Pedro Gomez Garcia
  • Patent number: 11714829
    Abstract: Disclosed herein are system, method, and computer program product embodiments for replicating data from table in a source database to a target database. In some embodiments, data replication includes access plan delimitation and access plan calculation steps and is performed on a table having multiple partitions. A table may be divided into one or more partitions and each partition may be further divided into one or more access plans. Access plan delimitation may involve calculating, in parallel, boundaries of access plans within partitions of the table. Access plan calculation may be initiated on the first partition that has completed the access plan delimitation steps, and may involve transferring data from each delimited partition from the table in the source database to the target database.
    Type: Grant
    Filed: December 14, 2020
    Date of Patent: August 1, 2023
    Assignee: SAP SE
    Inventors: Alexander Becker, Sebastian Haase
  • Patent number: 11709824
    Abstract: Methods, systems, and computer program products are provided for consolidating transaction log requests and transaction logs in a database transaction log service. A scalable log service may manage log requests and logs to reduce resource consumption, such as memory and I/O. Log requests may be managed by consolidating (e.g., organizing, merging and/or de-duplicating) the log requests. Transaction log requests may be mapped to read requests for transaction log storage devices in less than a one-to-one ratio. Transaction logs may be managed by using the consolidated log requests to consolidate (e.g., and prefetch) transaction logs from multiple cache and/or storage tiers to a log pool cache. Log requests may be served from the log pool cache.
    Type: Grant
    Filed: September 22, 2021
    Date of Patent: July 25, 2023
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Alexander Budovski, Eric Dean Lee, Ashmita Raju, Srikanth Sampath
  • Patent number: 11704150
    Abstract: Disclosed herein are systems and methods for dynamic job performance in secure multiparty computation (SMPC). The method may comprise receiving an SMPC query that indicates a processing job to be performed on a data input. The method may split the data input to generate a plurality of partial data inputs, based on parameters and the query type of the SMPC query. The method may generate a plurality of jobs to perform on the plurality of partial data inputs and determine a combined result of the processing job. The method may adjust the number of worker processes in a worker pool based on at least one of: required computation, time of day, date, financial costs, power consumption, and available network bandwidth.
    Type: Grant
    Filed: May 27, 2020
    Date of Patent: July 18, 2023
    Assignee: Acronis International GmbH
    Inventors: Mark A. Will, Sanjeev Solanki, Kailash Sivanesan, Serguei Beloussov, Stanislav Protasov
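A stripped-down sketch of the split / process / combine flow with a capped worker pool, assuming the processing job is a plain sum and the pool size depends only on the number of partial inputs; the real system also weighs time of day, cost, power, and bandwidth, and the secret-sharing aspects of SMPC are omitted entirely.

```python
from multiprocessing import Pool

def split(data, parts):
    """Split the data input into roughly equal partial data inputs."""
    size = (len(data) + parts - 1) // parts
    return [data[i:i + size] for i in range(0, len(data), size)]

def partial_job(chunk):
    return sum(chunk)                        # placeholder per-chunk job

def run_query(data, parts=4, max_workers=2):
    chunks = split(data, parts)
    workers = min(max_workers, len(chunks))  # adjust the number of worker processes
    with Pool(workers) as pool:
        partial_results = pool.map(partial_job, chunks)
    return sum(partial_results)              # combined result of the processing job

if __name__ == "__main__":
    print(run_query(list(range(1000))))      # 499500
```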
  • Patent number: 11704058
    Abstract: A system and method for scheduling commands for processing by a storage device. A command is received from an application and stored in a first queue. Information is obtained on a first set of resources managed by the storage device. A second set of resources is synchronized based on the information on the first set of resources. The second set of resources is allocated into a first pool and a second pool. A condition of the second set of resources in the first pool is determined. One of the second set of resources in the first pool is allocated to the command based on a first determination of the condition, and one of the second set of resources in the second pool is allocated to the command based on a second determination of the condition.
    Type: Grant
    Filed: October 28, 2020
    Date of Patent: July 18, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yang Seok Ki, Ilgu Hong
  • Patent number: 11703425
    Abstract: An information processing apparatus configured to be connected to a sensor used to measure a state of a machine apparatus includes a processing portion configured to measure the state of the machine apparatus by using the sensor and executing a measurement task corresponding to an event condition that has been satisfied. The event condition is one of a plurality of event conditions associated with a plurality of measurement tasks. The processing portion is configured to execute a priority process in which, when two or more event conditions of the plurality of event conditions have been satisfied, two or more measurement tasks corresponding to the two or more event conditions are executed in order of priority.
    Type: Grant
    Filed: October 29, 2021
    Date of Patent: July 18, 2023
    Assignee: Canon Kabushiki Kaisha
    Inventor: Hiroki Kanai
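A small sketch of the priority process: each event condition maps to a measurement task with a priority, and when two or more conditions are satisfied at once, their tasks run in order of priority. The condition predicates, task names, and priority values are invented for the example.

```python
def priority_process(conditions, state):
    """conditions: list of (predicate, task_name, priority); run satisfied tasks by priority."""
    satisfied = [(priority, task) for predicate, task, priority in conditions if predicate(state)]
    for priority, task in sorted(satisfied):           # lower number = higher priority (assumed)
        print(f"executing {task} (priority {priority})")

if __name__ == "__main__":
    conditions = [
        (lambda s: s["vibration"] > 0.8, "measure_spindle_vibration", 0),
        (lambda s: s["temperature"] > 70, "measure_motor_temperature", 1),
        (lambda s: s["cycle_count"] % 1000 == 0, "measure_tool_wear", 2),
    ]
    priority_process(conditions, {"vibration": 0.9, "temperature": 75, "cycle_count": 42})
```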
  • Patent number: 11698816
    Abstract: Systems and methods are provided for lock-free thread scheduling. Threads may be placed in a ring buffer shared by all central processing units (CPUs), e.g., in a node. A thread assigned to a CPU may be placed in the CPU's local run queue. However, when a CPU's local run queue is cleared, that CPU checks the shared ring buffer to determine if any threads are waiting to run on that CPU, and if so, the CPU pulls a batch of threads related to that ready-to-run thread to execute. If not, an idle CPU randomly selects another CPU to steal threads from, and the idle CPU attempts to dequeue a thread batch associated with the CPU from the shared ring buffer. Polling may be handled through the use of a shared poller array to dynamically distribute polling across multiple CPUs.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: July 11, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Matthew S. Gates, Joel E. Lilienkamp, Alex Veprinsky, Susan Agten
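A simplified, single-process sketch of the scheduling order the abstract describes, assuming per-CPU local run queues plus one shared queue standing in for the ring buffer: a CPU drains its local queue first, then checks the shared queue, then steals from a randomly chosen peer. The lock-free, batching, and poller-array details are deliberately left out.

```python
import random
from collections import deque

class CPU:
    def __init__(self, name, shared):
        self.name = name
        self.local = deque()      # this CPU's local run queue
        self.shared = shared      # shared queue standing in for the ring buffer

    def next_thread(self, peers):
        if self.local:
            return self.local.popleft()
        if self.shared:
            return self.shared.popleft()                # check the shared buffer
        victims = [p for p in peers if p is not self and p.local]
        if victims:
            return random.choice(victims).local.pop()   # steal from a random peer
        return None                                     # nothing to run anywhere

if __name__ == "__main__":
    shared = deque(["t-shared-0", "t-shared-1"])
    cpus = [CPU(f"cpu{i}", shared) for i in range(2)]
    cpus[0].local.extend(["t-a", "t-b", "t-c"])
    for _ in range(4):
        for cpu in cpus:
            thread = cpu.next_thread(cpus)
            if thread:
                print(cpu.name, "runs", thread)
```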
  • Patent number: 11695840
    Abstract: Code may be dynamically routed to computing resources for execution. Code may be received for execution on behalf of a client. Execution criteria for the code may be determined and computing resources that satisfy the execution criteria may be identified. The identified computing resources may then be procured for executing the code and then the code may be routed to the procured computing resources for execution. Permissions or authorization to execute the code may be shared to ensure that computing resources executing the code have the same permissions or authorization when executing the code.
    Type: Grant
    Filed: March 14, 2022
    Date of Patent: July 4, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: George Steven McPherson, Mehul A. Shah, Supratik Chakraborty, Prajakta Datta Damle, Gopinath Duddi, Anurag Windlass Gupta
  • Patent number: 11687238
    Abstract: A device includes a data path, a first interface configured to receive a first memory access request from a first peripheral device, and a second interface configured to receive a second memory access request from a second peripheral device. The device further includes an arbiter circuit configured to, in a first clock cycle, select a pre-arbitration winner between a first memory access request and a second memory access request based on a first number of credits allocated to a first destination device and a second number of credits allocated to a second destination device. The arbiter circuit is further configured to, in a second clock cycle, select a final arbitration winner from among the pre-arbitration winner and a subsequent memory access request based on a comparison of a priority of the pre-arbitration winner and a priority of the subsequent memory access request.
    Type: Grant
    Filed: August 23, 2021
    Date of Patent: June 27, 2023
    Assignee: Texas Instruments Incorporated
    Inventors: Matthew David Pierson, Daniel Wu, Kai Chirca
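A rough two-stage model of the arbitration the abstract describes: stage one picks a pre-arbitration winner by comparing the credits allocated to the two destinations, stage two compares that winner's priority against a subsequent request. The field names, the "higher is better" conventions, and the tie-breaking are assumptions.

```python
from dataclasses import dataclass

@dataclass
class MemRequest:
    source: str
    destination: str
    priority: int    # higher value = higher priority (assumed convention)

def pre_arbitrate(req_a, req_b, credits):
    # First clock cycle: the request whose destination holds more credits wins.
    return req_a if credits[req_a.destination] >= credits[req_b.destination] else req_b

def final_arbitrate(pre_winner, subsequent):
    # Second clock cycle: compare priorities of the pre-arbitration winner and a late request.
    return pre_winner if pre_winner.priority >= subsequent.priority else subsequent

if __name__ == "__main__":
    credits = {"ddr": 4, "sram": 1}
    first = MemRequest("peripheral-0", "ddr", priority=1)
    second = MemRequest("peripheral-1", "sram", priority=1)
    late = MemRequest("peripheral-2", "ddr", priority=3)
    print(final_arbitrate(pre_arbitrate(first, second, credits), late))
```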
  • Patent number: 11682071
    Abstract: A computer-implemented data processing system comprises account management logic, workflow logic, and interface logic. The account management logic is configured to manage financial accounts associated with a plurality of users. The workflow logic is configured to identify workflow items to be acted upon by users in connection with financial transactions relating to the financial accounts. The interface logic cooperates with the workflow logic to generate a plurality of display screens to be displayed by wireless handheld mobile devices. The display screens comprise a home page screen that is provided to the user upon login and that includes a link to a workflow screen where the user may act upon one or more of the workflow items.
    Type: Grant
    Filed: May 26, 2021
    Date of Patent: June 20, 2023
    Assignee: Wells Fargo Bank, N.A.
    Inventor: Amy L. Johnson
  • Patent number: 11681983
    Abstract: Systems and methods for dynamically reprioritizing pick jobs in a fulfillment center are described herein. The example systems can be configured to periodically classify pick jobs in order to optimize throughput of the fulfillment center. The classifying can include determining an estimated completion time for pick jobs and identifying at-risk jobs that may complete after their associated due dates. The at-risk jobs can be assigned to autonomous vehicles based primarily on their associated due dates. Other pick jobs that are not at-risk can be assigned to autonomous vehicles based primarily on efficiency. The at-risk pick jobs can be assigned to autonomous vehicles before the other pick jobs.
    Type: Grant
    Filed: July 23, 2021
    Date of Patent: June 20, 2023
    Assignee: 6 RIVER SYSTEMS, LLC
    Inventors: Christopher Leonardo, Ellen Patridge, Félix Alberto Rodríguez Falcón
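A minimal sketch of the reprioritization pass: estimate each pick job's completion time, flag jobs that would finish after their due dates as at-risk, assign those first by due date, and order the remaining jobs by an efficiency score. The duration estimate and the efficiency metric are placeholder assumptions.

```python
from dataclasses import dataclass

@dataclass
class PickJob:
    job_id: str
    due: float             # due time, minutes from now
    est_duration: float    # estimated minutes to complete
    efficiency: float      # higher = better fit for the next available vehicle

def classify_and_order(jobs, now=0.0):
    at_risk = [j for j in jobs if now + j.est_duration > j.due]      # would finish late
    on_track = [j for j in jobs if now + j.est_duration <= j.due]
    # At-risk jobs first, earliest due date first; the rest primarily by efficiency.
    return sorted(at_risk, key=lambda j: j.due) + \
           sorted(on_track, key=lambda j: j.efficiency, reverse=True)

if __name__ == "__main__":
    jobs = [
        PickJob("A", due=30, est_duration=40, efficiency=0.9),    # at risk
        PickJob("B", due=120, est_duration=20, efficiency=0.7),
        PickJob("C", due=25, est_duration=35, efficiency=0.2),    # at risk
        PickJob("D", due=90, est_duration=10, efficiency=0.95),
    ]
    print([j.job_id for j in classify_and_order(jobs)])           # ['C', 'A', 'D', 'B']
```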
  • Patent number: 11675491
    Abstract: Systems and processes for user configurable task triggers are provided. In one example, at least one user input, including a selection of at least one condition of a plurality of conditions and a selection of at least one task of a plurality of tasks, is received. Stored context data corresponding to an electronic device is received. A determination is made as to whether the stored context data indicates an occurrence of the at least one selected condition. In response to determining that the stored context data indicates an occurrence of the at least one selected condition, the at least one selected task associated with the at least one selected condition is performed.
    Type: Grant
    Filed: March 11, 2022
    Date of Patent: June 13, 2023
    Assignee: Apple Inc.
    Inventors: Joseph E. Meyer, Kelan Champagne, Joao Pedro De Almeida Forjaz De Lacerda, Aleksandr Gusev, Conrad B. Kramer, Yuan Li, Ari Weinstein
  • Patent number: 11669524
    Abstract: Systems and methods are provided for receiving an input comprising one or more attributes, selecting a subset of query options from a list of query options relevant to the attributes of the input, and based on query optimization results from an audit of previous queries, determining a priority order to execute each query in the set of queries based on the query optimization results, and executing each query in the priority order to generate a candidate list. For each candidate in the list of candidates, systems and methods are provided for selecting a subset of available workflows based on relevance to the candidate and based on workflow optimization results, determining an order in which the selected subset of workflows is to be executed, and executing the selected subset of workflows in the determined order to generate a match score indicating the probability that the candidate matches the input.
    Type: Grant
    Filed: May 24, 2022
    Date of Patent: June 6, 2023
    Assignee: SAP SE
    Inventors: Quincy Milton, Henry Tsai, Uma Kale, Adam Horacek, Justin Dority, Phillip DuLion, Ian Kelley, Michael Lentz, Ryan Skorupski, Aditi Godbole, Haizhen Zhang
  • Patent number: 11662906
    Abstract: Embodiments of the present disclosure provide a method, an electronic device, and a computer program product for upgrading a storage system. The method includes: acquiring information about a set of candidate periods related to a workload of a storage system, the storage system having a workload lower than a first predetermined threshold during the set of candidate periods; determining, based on user information of the storage system, a target period for upgrade from the set of candidate periods; and performing an upgrade operation on at least a part of components among multiple components of the storage system during the target period. In this manner, the upgrade operation for the storage system can be improved.
    Type: Grant
    Filed: May 21, 2021
    Date of Patent: May 30, 2023
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Tao Chen, Bing Liu, Geng Han, Jian Gao
  • Patent number: 11663026
    Abstract: A resource use method, an electronic device, and a computer program product are provided in embodiments of the present disclosure. The method includes determining a plurality of jobs requesting to use accelerator resources to accelerate data processing. The plurality of jobs are initiated by at least one virtual machine. The method further includes allocating available accelerator resources to the plurality of jobs based on job types of the plurality of jobs. The method further includes causing the plurality of jobs to be executed using the allocated accelerator resources. With the embodiments of the present disclosure, accelerator resources can be dynamically allocated, thereby improving the overall performance of a system.
    Type: Grant
    Filed: June 30, 2020
    Date of Patent: May 30, 2023
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Jet Chen, Bing Liu
  • Patent number: 11663128
    Abstract: In at least one embodiment, processing can include acquiring a spinlock on a cached copy of a metadata (MD) page that includes a field stored in two cache lines; updating a register to include an updated value of the field; determining whether a first portion of the updated value of the register is non-zero, wherein two portions of the updated value of the field as stored in the register correspond to the two cache lines; and responsive to determining that the first portion of the updated value of the register is non-zero, performing processing including: storing the first portion of the updated value of the field from the register in the first cache line; and, subsequent to storing the first portion, storing the second portion of the updated value of the field as stored in the register in the second cache line.
    Type: Grant
    Filed: January 20, 2022
    Date of Patent: May 30, 2023
    Assignee: Dell Products L.P.
    Inventors: Vladimir Shveidel, Bar David, Michael Litvak