Patents Examined by Adam Lee
  • Patent number: 11487593
    Abstract: The embodiments describe a barrier synchronization system, a parallel information processing apparatus, and the like. In one example, a solution is provided that reduces latency and improves processing speed in barrier synchronization.
    Type: Grant
    Filed: September 8, 2020
    Date of Patent: November 1, 2022
    Assignee: FUJITSU LIMITED
    Inventors: Kanae Nakagawa, Masaki Arai, Yasumoto Tomita
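A minimal, generic sketch of barrier synchronization using Python's standard threading.Barrier; it illustrates the concept only and is not the patented low-latency mechanism. Worker threads finish one phase, wait at the barrier, and only then read each other's results.

```python
# Generic barrier synchronization demo; illustrative only, not the patented design.
import threading

NUM_WORKERS = 4
barrier = threading.Barrier(NUM_WORKERS)
results = [0] * NUM_WORKERS

def worker(idx: int) -> None:
    results[idx] = idx * idx      # phase 1: local computation
    barrier.wait()                # block until every worker has finished phase 1
    print(f"worker {idx} sees total {sum(results)}")  # phase 2: safe to read all results

threads = [threading.Thread(target=worker, args=(i,)) for i in range(NUM_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```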
  • Patent number: 11449339
    Abstract: A system includes a memory, at least one physical processor in communication with the memory, and a plurality of hardware threads executing on the at least one physical processor. A first thread of the plurality of hardware threads is configured to execute a plurality of instructions that includes a restartable sequence. Responsive to a different second thread in communication with the first thread being pre-empted while the first thread is executing the restartable sequence, the first thread is configured to restart the restartable sequence prior to reaching a memory barrier.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: September 20, 2022
    Assignee: Red Hat, Inc.
    Inventors: Michael Tsirkin, Andrea Arcangeli
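A toy, single-file simulation of the restart-before-barrier idea. The patent concerns hardware threads; here a flag merely stands in for a peer's pre-emption and no real pre-emption is detected. The point shown: the sequence re-runs from its start if that flag is observed before the commit point.

```python
# Toy simulation of a restartable sequence; names and mechanism are illustrative only.
import threading

peer_preempted = threading.Event()    # would be set when the peer thread is pre-empted
                                      # (no peer thread is actually started in this demo)

def restartable_sequence(shared: list) -> None:
    while True:
        peer_preempted.clear()        # begin (or restart) the sequence
        snapshot = list(shared)       # speculative work on a private copy
        snapshot.append(len(snapshot))
        if peer_preempted.is_set():   # pre-emption seen before the barrier: restart
            continue
        shared[:] = snapshot          # commit point ("memory barrier" in the abstract)
        return

data = [1, 2, 3]
restartable_sequence(data)
print(data)                           # [1, 2, 3, 3]
```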
  • Patent number: 11436524
    Abstract: Techniques for hosting machine learning models are described. In some instances, a method is performed that includes: receiving a request to perform an inference using a particular machine learning model; determining a group of hosts to route the request to, the group of hosts hosting a plurality of machine learning models including the particular machine learning model; determining a path to the determined group of hosts; determining, based on the determined path, a particular host of the group of hosts to perform an analysis of the request, the particular host having the particular machine learning model in memory; routing the request to the particular host of the group of hosts; performing inference on the request using the particular host; and providing a result of the inference to a requester.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: September 6, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Nikhil Kandoi, Ganesh Kumar Gella, Rama Krishna Sandeep Pokkunuri, Sudhakar Rao Puvvadi, Stefano Stefani, Kalpesh N. Sutaria, Enrico Sartorello, Tania Khattar
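A minimal sketch of the routing idea under assumed data structures: host groups, a model-to-group map, and per-host sets of loaded models, none of which come from the patent. The router picks the group that serves the model, then prefers a host that already has it in memory.

```python
# Illustrative model-hosting router; structures and names are assumptions.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    loaded_models: set = field(default_factory=set)

    def infer(self, model: str, payload: dict) -> dict:
        if model not in self.loaded_models:
            self.loaded_models.add(model)          # cold start: load the model first
        return {"host": self.name, "model": model, "result": "ok"}

GROUPS = {
    "group-a": [Host("a1", {"fraud-v2"}), Host("a2")],
    "group-b": [Host("b1", {"recsys-v1"})],
}
MODEL_TO_GROUP = {"fraud-v2": "group-a", "recsys-v1": "group-b"}

def route_inference(model: str, payload: dict) -> dict:
    group = GROUPS[MODEL_TO_GROUP[model]]          # path to the group hosting the model
    # Prefer a warm host (model already in memory); otherwise fall back to any host.
    host = next((h for h in group if model in h.loaded_models), group[0])
    return host.infer(model, payload)

print(route_inference("fraud-v2", {"features": [0.1, 0.7]}))
```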
  • Patent number: 11429448
    Abstract: The described technology relates to scheduling jobs of a plurality of types in an enterprise web application. A processing system configures a job database having a plurality of job entries, and concurrently executes a plurality of job schedulers independently of each other. Each job scheduler is configured to schedule for execution jobs in the job database that are of a type different from types of jobs others of the plurality of job schedulers are configured to schedule. The processing system also causes performance of jobs scheduled for execution by any of the plurality of schedulers. Method and computer readable medium embodiments are also provided.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: August 30, 2022
    Assignee: NASDAQ, INC.
    Inventor: Santhosh Philip George
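A minimal sketch of per-type schedulers over a shared job table; the table, lock, and type names are illustrative assumptions. Each scheduler claims only jobs of its own type and hands them to a common executor.

```python
# Illustrative per-type job schedulers; not the patented implementation.
import threading
from queue import Queue

jobs = [
    {"id": 1, "type": "report"},
    {"id": 2, "type": "cleanup"},
    {"id": 3, "type": "report"},
]
execution_queue: Queue = Queue()
table_lock = threading.Lock()

def scheduler(job_type: str) -> None:
    # Claim only jobs of this scheduler's type and queue them for execution.
    with table_lock:
        for job in jobs:
            if job["type"] == job_type and not job.get("scheduled"):
                job["scheduled"] = True
                execution_queue.put(job)

schedulers = [threading.Thread(target=scheduler, args=(t,)) for t in ("report", "cleanup")]
for s in schedulers:
    s.start()
for s in schedulers:
    s.join()

while not execution_queue.empty():
    print("executing", execution_queue.get())
```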
  • Patent number: 11422862
    Abstract: A service that provides serverless computation environments with persistent storage for web-based applications. Users of a web application are provided with persistent user-specific contexts including a file volume and application settings. Upon logging into the application via a web application interface, the service accesses the user's context and dynamically allocates compute instance(s) and installs execution environment(s) on the compute instance(s) according to the user's context to provide a network environment for the user. A network pipe may be established between the web application interface and the network environment. Interactions with the network environment are monitored, and changes to execution environments are recorded to the user's context. Compute instances may be deallocated by the service when not in use, with new compute instances allocated as needed.
    Type: Grant
    Filed: November 29, 2019
    Date of Patent: August 23, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Thomas Albert Faulhaber, Kevin McCormick
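A highly simplified sketch of the persistent-context flow, in which plain dictionaries stand in for file volumes, settings, and compute instances; nothing here is the actual AWS service. On login the user's context is loaded and attached to a newly allocated instance, and changes are recorded back to the context when the instance is released.

```python
# Illustrative per-user context and instance allocation; all names are assumptions.
contexts = {"alice": {"volume": "vol-alice", "settings": {"kernel": "python3"}}}
instances = {}   # user -> currently allocated compute instance

def login(user: str) -> dict:
    ctx = contexts.setdefault(user, {"volume": f"vol-{user}", "settings": {}})
    inst = instances.get(user)
    if inst is None:
        # Dynamically allocate an instance and configure it from the user's context.
        inst = {"id": f"i-{user}", "volume": ctx["volume"], "env": dict(ctx["settings"])}
        instances[user] = inst
    return inst

def logout(user: str, new_settings: dict) -> None:
    contexts[user]["settings"].update(new_settings)   # record changes to the context
    instances.pop(user, None)                         # deallocate the idle instance

print(login("alice"))
logout("alice", {"last_notebook": "demo.ipynb"})
print(contexts["alice"]["settings"])
```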
  • Patent number: 11416288
    Abstract: Embodiments of the present disclosure relate to a method, device and computer program product for managing a service. The method comprises in response to processor credits for the service reaching threshold credits at a first time instant (t1), determining a second time instant when a first operation for the service is to be performed. The method further comprises determining, based on a set of historical processor credits between the first time instant and the second time instant, first processor credits related to a second set of time periods between the first time instant and the second time instant.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: August 16, 2022
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Jian Wen, Yi Wang, Xing Min, Haitao Li, Lili Lin, Longcai Zou, Rong Qiao, Hao Yang
  • Patent number: 11410255
    Abstract: Transaction-enabled systems and methods for identifying and acquiring machine resources on a forward resource market are disclosed. An example system may include a controller having a resource requirement circuit to determine an amount of a resource required for a machine to service a task requirement, a forward resource market circuit to access a forward resource market, a resource market circuit to access a resource market, and a resource distribution circuit to execute a transaction of the resource on at least one of the resource market or the forward resource market in response to the determined amount of the resource required.
    Type: Grant
    Filed: November 18, 2019
    Date of Patent: August 9, 2022
    Assignee: Strong Force TX Portfolio 2018, LLC
    Inventor: Charles Howard Cella
  • Patent number: 11385972
    Abstract: Disclosed are various examples for virtual-machine-specific failover protection. In some examples, a power-on request is received for a protected virtual machine, and virtual-machine-specific failover protection is enabled for the protected virtual machine. The protected virtual machine is executed on a first host of a cluster, and a dynamic virtual machine slot for the protected virtual machine is created on a second host of the cluster. The dynamic virtual machine slot is created to match a hardware resource configuration of the protected virtual machine. An anti-affinity rule is maintained between the protected virtual machine and the dynamic virtual machine slot.
    Type: Grant
    Filed: June 26, 2019
    Date of Patent: July 12, 2022
    Assignee: VMware, Inc.
    Inventors: Charan Singh K, Fei Guo
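A small sketch of the failover-slot idea under assumed host and VM fields: when a protected VM powers on, a slot with the same CPU/memory footprint is reserved on a different host, which captures the anti-affinity constraint described in the abstract.

```python
# Illustrative failover-slot reservation; fields and sizes are assumptions.
from dataclasses import dataclass

@dataclass
class ClusterHost:
    name: str
    free_cpu: int
    free_mem_gb: int

def reserve_failover_slot(vm_cpu: int, vm_mem_gb: int, vm_host: str, cluster: list) -> str:
    # Anti-affinity: the slot must live on a host other than the one running the VM.
    for host in cluster:
        if host.name != vm_host and host.free_cpu >= vm_cpu and host.free_mem_gb >= vm_mem_gb:
            host.free_cpu -= vm_cpu          # hold back matching capacity for failover
            host.free_mem_gb -= vm_mem_gb
            return host.name
    raise RuntimeError("no host can hold a matching failover slot")

cluster = [ClusterHost("esx-1", 8, 32), ClusterHost("esx-2", 16, 64)]
print(reserve_failover_slot(vm_cpu=4, vm_mem_gb=16, vm_host="esx-1", cluster=cluster))  # esx-2
```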
  • Patent number: 11385926
    Abstract: An application and system fast launch may provide a virtual memory address area (VMA) container to manage the restore of a context of a process, i.e., process context, saved in response to a checkpoint, to enhance performance and to provide a resource-efficient fast launch. More particularly, the fast launch may provide a way to manage, limit and/or delay the restore of a process context saved in response to a checkpoint by generating a VMA container comprising VMA container pages, to restore physical memory pages following the checkpoint based on which pages are most frequently used or predicted to be used. The application and system fast launch with the VMA container may avoid unnecessary input/output (I/O) bandwidth consumption, page faults and/or memory copy operations that may otherwise result from restoring the entire context of a VMA container without regard to frequency of use.
    Type: Grant
    Filed: February 17, 2017
    Date of Patent: July 12, 2022
    Assignee: Intel Corporation
    Inventors: Chao Xie, Jia Bao, Mingwei Shi, Yifan Zhang, Qiming Shi, Beiyuan Hu, Tianyou Li, Xiaokang Qin
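A tiny sketch of the restore-ordering idea; the page IDs and use counts are made up. Only the hottest pages are restored within an I/O budget, leaving the rest to be faulted in later.

```python
# Illustrative selection of checkpoint pages to restore first; not Intel's implementation.
def pages_to_restore_first(page_use_counts: dict, budget: int) -> list:
    """page_use_counts maps page id -> observed or predicted use frequency."""
    ranked = sorted(page_use_counts, key=page_use_counts.get, reverse=True)
    return ranked[:budget]            # restore the hottest pages within the I/O budget

counts = {"0x1000": 120, "0x2000": 3, "0x3000": 57, "0x4000": 0}
print(pages_to_restore_first(counts, budget=2))   # ['0x1000', '0x3000']
```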
  • Patent number: 11372678
    Abstract: Embodiments of the present disclosure can provide distributed system resource allocation methods and apparatuses. The method comprises: receiving a resource preemption request sent by a resource scheduling server, the resource preemption request comprising job execution information corresponding to a first job management server; determining, according to the job execution information corresponding to the first job management server and comprised in the resource preemption request, resources to be returned by a second job management server and a resource return deadline; and returning, according to the resource return deadline and a current job execution progress of the second job management server, the resources to be returned to the resource scheduling server before expiration of the resource return deadline.
    Type: Grant
    Filed: February 24, 2020
    Date of Patent: June 28, 2022
    Assignee: Alibaba Group Holding Limited
    Inventors: Yang Zhang, Yihui Feng, Jin Ouyang, Qiaohuan Han, Fang Wang
  • Patent number: 11366670
    Abstract: A predictive queue control and allocation system includes a queue and a queue control server communicatively coupled to the queue. The queue includes a first and second allocation of queue locations. The queue stores a plurality of resources. The queue control server includes an interface and a queue control engine implemented by a processor. The interface monitors the plurality of resources before the plurality of resources are stored in the queue. The queue control engine predicts that one or more conditions indicate that a queue overflow will occur in the first allocation of queue locations. The queue control engine prioritizes the plurality of resources being received by the queue. The queue control engine may apply a machine learning technique to the plurality of resources. The queue control engine transfers the plurality of resources prioritized by the machine learning technique.
    Type: Grant
    Filed: February 28, 2020
    Date of Patent: June 21, 2022
    Assignee: Bank of America Corporation
    Inventors: Anuj Sharma, Gaurav Srivastava, Vishal D. Kelkar
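A minimal sketch of the pre-queue step; the capacity and the importance scores are assumptions, and a plain sort stands in for the machine-learning technique named in the abstract. If the incoming items would overflow the first allocation, they are reordered by predicted importance before entering the queue.

```python
# Illustrative overflow prediction and prioritization ahead of the queue.
PRIMARY_CAPACITY = 3   # size of the first allocation of queue locations (assumed)

def prioritize_if_overflow_predicted(incoming: list) -> list:
    """incoming: dicts with an 'importance' score (higher = more important)."""
    if len(incoming) <= PRIMARY_CAPACITY:
        return incoming                              # no overflow predicted
    # Overflow predicted: rank the resources so the most important are handled first.
    return sorted(incoming, key=lambda r: r["importance"], reverse=True)

items = [{"id": i, "importance": imp} for i, imp in enumerate([2, 9, 4, 7])]
for item in prioritize_if_overflow_predicted(items):
    print(item)
```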
  • Patent number: 11366695
    Abstract: A charging assistant system that assists charging for use of an accelerator unit, which is one or more accelerators, includes an operation amount obtaining unit, an acceleration rate estimation unit, and a use fee determination unit. For each of one or more commands input into the accelerator unit, the operation amount obtaining unit obtains the amount of operation related to execution of the command from a response output from the accelerator unit for the command. For the one or more commands input into the accelerator unit, the acceleration rate estimation unit estimates an acceleration rate on the basis of command execution time that is time required for processing of the one or more commands, and one or more amounts of operation obtained for the one or more commands respectively. The use fee determination unit determines a use fee of the accelerator unit on the basis of the estimated acceleration rate.
    Type: Grant
    Filed: September 7, 2018
    Date of Patent: June 21, 2022
    Assignee: HITACHI, LTD.
    Inventors: Yoshifumi Fujikawa, Kazuhisa Fujimoto, Toshiyuki Aritsuka, Kazushi Nakagawa
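An illustrative calculation only; the baseline throughput and pricing constants are assumptions, not Hitachi's method. The acceleration rate is estimated as baseline time over accelerator execution time for the measured amount of operations, and usage is priced in proportion to that rate.

```python
# Illustrative acceleration-rate estimate and use fee; all constants are assumed.
BASELINE_OPS_PER_SEC = 1e9      # assumed throughput of a non-accelerated baseline
RATE_PER_SPEEDUP = 0.02         # assumed fee per unit of acceleration rate

def estimate_acceleration_rate(total_ops: float, accel_exec_time_s: float) -> float:
    baseline_time_s = total_ops / BASELINE_OPS_PER_SEC   # time the baseline would need
    return baseline_time_s / accel_exec_time_s           # speed-up achieved by the accelerator

def use_fee(total_ops: float, accel_exec_time_s: float) -> float:
    return RATE_PER_SPEEDUP * estimate_acceleration_rate(total_ops, accel_exec_time_s)

print(round(estimate_acceleration_rate(total_ops=5e10, accel_exec_time_s=2.0), 2))  # 25.0
print(round(use_fee(total_ops=5e10, accel_exec_time_s=2.0), 2))                     # 0.5
```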
  • Patent number: 11347554
    Abstract: Systems and methods are provided for a context-aware scheduler. In one example embodiment, the context-aware scheduler accesses a stored application context to determine that the stored application context corresponds to a change in application context from a first application context according to which a queue of jobs for execution for an application is currently prioritized, to a second application context. The context-aware scheduler determines a list of attributions comprising assigned priority categories for the second application context and uses the list of attributions for the second application context to re-prioritize the plurality of jobs in the queue based on a job attribution tag for each job in the queue. The context-aware scheduler sets a first job in the re-prioritized queue as the next job for execution for the application.
    Type: Grant
    Filed: November 14, 2019
    Date of Patent: May 31, 2022
    Assignee: Snap Inc.
    Inventors: Irina Kotelnikova, David Liberman, Denis Ovod, Johan Lindell
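A minimal sketch of the re-prioritization step; the context names, attribution tags, and priority categories are invented for illustration. On a context change, each job's tag is looked up in the new context's attribution list and the queue is reordered.

```python
# Illustrative context-aware re-prioritization of a job queue.
CONTEXT_PRIORITIES = {
    "camera_open":    {"render": 0, "upload": 2, "analytics": 3},
    "app_background": {"upload": 0, "analytics": 1, "render": 3},
}

def reprioritize(queue: list, new_context: str) -> list:
    priorities = CONTEXT_PRIORITIES[new_context]
    # Unknown tags sink to the back of the queue.
    return sorted(queue, key=lambda job: priorities.get(job["tag"], 99))

jobs = [{"id": 1, "tag": "upload"}, {"id": 2, "tag": "render"}, {"id": 3, "tag": "analytics"}]
print(reprioritize(jobs, "camera_open")[0])      # the render job runs next
print(reprioritize(jobs, "app_background")[0])   # the upload job runs next
```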
  • Patent number: 11340886
    Abstract: Methods and systems for updating configurations. An application can be determined to be incorrectly configured. A request can be sent to update a configuration property. The configuration property can be updated in a test environment. The application can be tested with the updated configuration property in the test environment. The test of the application with the updated configuration property in the test environment can be identified as successful. An incorrect configuration of the application can be updated.
    Type: Grant
    Filed: August 12, 2019
    Date of Patent: May 24, 2022
    Assignee: Capital One Services, LLC
    Inventors: Lokesh Vijay Kumar, Poornima Bagare Raju
  • Patent number: 11334390
    Abstract: A resource reservation system includes a media module that includes a plurality of media devices and a media controller that is coupled to the plurality of media devices. The media controller retrieves media device attributes from each of the plurality of media devices that identify performance capabilities for each of the plurality of media devices and determines one or more media module partitions that are included in the media module. Each of the one or more media module partitions are provided by a subset of the plurality of media devices. The media controller then determines, for each of the media module partitions, a minimum partition performance for that media module partition based on the media device attributes for the subset of the one or more media devices that provide that partition and provides the minimum partition performance for each of the media module partitions to a resource reservation device.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: May 17, 2022
    Assignee: Dell Products L.P.
    Inventors: Yung-Chin Fang, Jingjuan Gong, Xiaoye Jiang
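A simple sketch of the reported figure; the device names and attributes are illustrative. A partition's minimum performance is bounded by the weakest device in the subset that backs it.

```python
# Illustrative minimum-partition-performance calculation from device attributes.
devices = {
    "nvme0": {"read_iops": 700_000, "write_iops": 150_000},
    "nvme1": {"read_iops": 500_000, "write_iops": 120_000},
    "nvme2": {"read_iops": 650_000, "write_iops": 200_000},
}

def minimum_partition_performance(partition_devices: list) -> dict:
    # The guaranteed figure for the partition is limited by its slowest member device.
    return {
        "read_iops":  min(devices[d]["read_iops"] for d in partition_devices),
        "write_iops": min(devices[d]["write_iops"] for d in partition_devices),
    }

# The figure that would be reported to the resource reservation device for this partition:
print(minimum_partition_performance(["nvme0", "nvme1"]))
```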
  • Patent number: 11327744
    Abstract: Disclosed herein is technology to compare different versions of a code object to determine if the versions include equivalent changes. An example method may include: determining a set of changes of a first version of a code object, wherein the code object comprises a plurality of versions; generating a first hash in view of the set of changes; accessing a second hash representing a set of changes of a second version of the code object; comparing, by a processing device, the first hash and the second hash; and indicating the set of changes of the first version and the set of changes of the second version are equivalent.
    Type: Grant
    Filed: May 29, 2019
    Date of Patent: May 10, 2022
    Assignee: Red Hat, Inc.
    Inventor: Cleber Rodrigues Rosa Junior
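A short sketch of the equivalence check; the normalization step shown is an assumption. Each version's change set is hashed in a canonical form, and equal digests are treated as equivalent changes.

```python
# Illustrative change-set hashing and comparison between two code-object versions.
import hashlib

def change_set_hash(changes: list) -> str:
    # Sort so that ordering differences don't affect the digest.
    normalized = "\n".join(sorted(changes)).encode("utf-8")
    return hashlib.sha256(normalized).hexdigest()

v1_changes = ["+ add retry loop", "- drop unused import"]
v2_changes = ["- drop unused import", "+ add retry loop"]   # same edits, different order

print(change_set_hash(v1_changes) == change_set_hash(v2_changes))  # True -> equivalent
```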
  • Patent number: 11327802
    Abstract: Systems and methods for exporting logical object metadata. In one example, the system includes an electronic processor configured to receive a first input from a user. The first input includes a logical object location and at least one metadata export option. The electronic processor is also configured to create an export job based upon the first input. The electronic processor is also configured to store the export job in a job queue, determine when a computing resource is available to execute the export job, and execute the export job when the computing resource is available. The electronic processor is also configured to store a job manifest in a memory location. In one example, the job manifest includes metadata for each logical object located in the logical object location.
    Type: Grant
    Filed: October 9, 2019
    Date of Patent: May 10, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Wan Chin Wu, Hani Gamal Loza, Joe Keng Yap, Wenyu Cai, David Charles Oliver, Simon Bourdages
  • Patent number: 11321123
    Abstract: Provided are a computer program product, system, and method for determining an optimum number of threads to make available per core in a multi-core processor complex to execute tasks. A determination is made of a first processing measurement based on threads executing on the cores of the processor chip, wherein each core includes circuitry to independently execute a plurality of threads. A determination is made of a number of threads to execute on the cores based on the first processing measurement. A determination is made of a second processing measurement based on the threads executing on the cores of the processor chip. A determination is made of an adjustment to the determined number of threads to execute based on the second processing measurement resulting in an adjusted number of threads. The adjusted number of threads on the cores is utilized to execute instructions.
    Type: Grant
    Filed: November 21, 2019
    Date of Patent: May 3, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Brian Anthony Rinaldi, Lokesh M. Gupta, Kevin J. Ash, Matthew J. Kalos, Trung N. Nguyen, Clint A. Hardy, Louis A. Rasor
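A hedged sketch of the two-measurement control loop; the utilization and queue-delay thresholds are invented. An initial thread count is chosen from a first measurement, then adjusted after a second measurement.

```python
# Illustrative feedback loop for the number of threads to run per processor chip.
def initial_thread_count(cores: int, threads_per_core: int, utilization: float) -> int:
    # First measurement: start with more threads when the cores are underutilized.
    per_core = threads_per_core if utilization < 0.5 else 1
    return cores * per_core

def adjust_thread_count(current: int, queue_delay_ms: float, cores: int) -> int:
    # Second measurement: back off if tasks are queueing, grow if there is headroom.
    if queue_delay_ms > 10.0:
        return max(cores, current - cores)
    if queue_delay_ms < 1.0:
        return current + cores
    return current

n = initial_thread_count(cores=8, threads_per_core=4, utilization=0.3)   # 32
print(adjust_thread_count(n, queue_delay_ms=15.0, cores=8))              # 24
```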
  • Patent number: 11321135
    Abstract: The embodiments disclosed herein relate to predictive rate limiting. A workload for completing a request is predicted based on, for example, characteristics of a ruleset to be applied and characteristics of a target set upon which the ruleset is to be applied. The workload is mapped to a set of tokens or credits. If a requestor has sufficient tokens to cover the workload for the request, the request is processed. The request may be processed in accordance with a set of processing queues. Each processing queue is associated with a maximum per-tenant workload. A request may be added to a processing queue as long as adding the request does not result in exceeding the maximum per-tenant workload. Requests within a processing queue may be processed in a First In First Out (FIFO) order.
    Type: Grant
    Filed: October 31, 2019
    Date of Patent: May 3, 2022
    Assignee: Oracle International Corporation
    Inventors: Amol Achyut Chiplunkar, Prasad Ravuri, Karl Dias, Gayatri Tripathi, Shriram Krishnan, Chaitra Jayaram
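A compact sketch combining the two mechanisms in the abstract; the token balances, the workload model, and the per-tenant cap are all assumptions. A predicted workload is converted to tokens and charged to the requester, and the request is enqueued FIFO only while the tenant stays under the queue's per-tenant workload cap.

```python
# Illustrative predictive rate limiting with tokens and a per-tenant queue cap.
from collections import deque

tokens = {"tenant-a": 100}           # token/credit balance per requester
queue = deque()                      # FIFO processing order
per_tenant_in_queue = {}             # tenant -> workload currently queued
MAX_PER_TENANT_WORKLOAD = 50

def predict_workload(rule_count: int, target_size: int) -> int:
    # Stand-in for the prediction based on ruleset and target-set characteristics.
    return rule_count * target_size

def submit(tenant: str, rule_count: int, target_size: int) -> bool:
    cost = predict_workload(rule_count, target_size)
    queued = per_tenant_in_queue.get(tenant, 0)
    if tokens.get(tenant, 0) < cost or queued + cost > MAX_PER_TENANT_WORKLOAD:
        return False                 # reject: not enough tokens, or tenant cap exceeded
    tokens[tenant] -= cost
    per_tenant_in_queue[tenant] = queued + cost
    queue.append((tenant, cost))
    return True

print(submit("tenant-a", rule_count=5, target_size=4))   # True  (cost 20)
print(submit("tenant-a", rule_count=10, target_size=4))  # False (would exceed tenant cap)
```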
  • Patent number: 11314558
    Abstract: Methods, non-transitory machine readable media, and computing devices that dynamically throttle non-priority workloads to satisfy minimum throughput service level objectives (SLOs) are disclosed. With this technology, a determination is made when a number of detection intervals with a violation within a detection window exceeds a threshold, when a current one of the detection intervals is outside an observation area. The detection intervals are identified as violated based on an average throughput for priority workloads within the detection intervals falling below a minimum throughput SLO. A throttle is then set to rate-limit non-priority workloads, when the number of violated detection intervals within the detection window exceeds the threshold.
    Type: Grant
    Filed: July 23, 2019
    Date of Patent: April 26, 2022
    Assignee: NETAPP, INC.
    Inventors: Ranjit Nandagopal, Yasutaka Hirasawa, Chandan Hoode
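A small sketch of the detection logic; the thresholds, window size, and units are assumptions. Recent detection intervals whose priority-workload throughput fell below the minimum SLO are counted, and the non-priority throttle is enabled once the count exceeds the threshold, provided the current interval is past the observation area.

```python
# Illustrative SLO-violation detection window for throttling non-priority workloads.
MIN_THROUGHPUT_SLO = 500.0      # ops/sec promised to priority workloads (assumed)
DETECTION_WINDOW = 10           # number of detection intervals examined
VIOLATION_THRESHOLD = 3
OBSERVATION_INTERVALS = 5       # ignore the first few intervals after startup

def should_throttle(priority_throughputs: list, interval_index: int) -> bool:
    if interval_index < OBSERVATION_INTERVALS:          # still inside the observation area
        return False
    window = priority_throughputs[-DETECTION_WINDOW:]
    violated = sum(1 for t in window if t < MIN_THROUGHPUT_SLO)
    return violated > VIOLATION_THRESHOLD               # set the non-priority throttle

samples = [520, 480, 510, 450, 490, 530, 470, 460, 440, 505]
print(should_throttle(samples, interval_index=12))      # True: 6 intervals below the SLO
```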