Patents Examined by Adam Lee
  • Patent number: 11561840
    Abstract: The present disclosure provides a system comprising: a first group of computing nodes and a second group of computing nodes, wherein the first and second groups are neighboring devices and each of the first and second groups comprises: a set of computing nodes A-D, and a set of intra-group interconnects, wherein the set of intra-group interconnects communicatively couple computing node A with computing nodes B and C and computing node D with computing nodes B and C; and a set of inter-group interconnects, wherein the set of inter-group interconnects communicatively couple computing node A of the first group with computing node A of the second group, computing node B of the first group with computing node B of the second group, computing node C of the first group with computing node C of the second group, and computing node D of the first group with computing node D of the second group.
    Type: Grant
    Filed: January 30, 2020
    Date of Patent: January 24, 2023
    Assignee: Alibaba Group Holding Limited
    Inventors: Liang Han, Yang Jiao
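The topology described in the abstract above can be sketched as an adjacency set: within each group, nodes A and D each link to B and C, and same-letter nodes of neighboring groups link to each other. This is an illustrative reconstruction only; the representation and function names are not from the patent.

```python
def build_topology(num_groups=2):
    """Return a set of frozenset({(group, node), (group, node)}) links."""
    letters = "ABCD"
    links = set()
    for g in range(num_groups):
        # Intra-group interconnects: A and D each couple to B and C.
        for a, b in [("A", "B"), ("A", "C"), ("D", "B"), ("D", "C")]:
            links.add(frozenset({(g, a), (g, b)}))
        # Inter-group interconnects: same-letter nodes of neighboring groups.
        if g + 1 < num_groups:
            for n in letters:
                links.add(frozenset({(g, n), (g + 1, n)}))
    return links

topology = build_topology()
```

With two groups this yields eight intra-group links and four inter-group links, giving every node exactly three neighbors.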
  • Patent number: 11550505
    Abstract: A data stream may include a plurality of records that are ordered, and the plurality of records may be assigned to a processing shard. A first set of virtual shards may be formed, the first set of virtual shards having a first quantity of virtual shards that perform parallel processing operations on behalf of the processing shard. First records of the plurality of records may be processed using the first set of virtual shards. The first quantity of virtual shards may be modified, based at least in part on an observed record age, to a second quantity of virtual shards that perform parallel processing operations on behalf of the processing shard. A second set of virtual shards may be formed having the second quantity of virtual shards. Second records of the plurality of records may be processed using the second set of virtual shards.
    Type: Grant
    Filed: September 1, 2020
    Date of Patent: January 10, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Dinesh Saheblal Gupta, Deepak Verma, Jiaxuan Lu
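The scaling decision described above (changing the number of virtual shards based on observed record age) might look like the following sketch. The doubling/halving policy, target age, and bounds are assumptions for illustration, not the patented algorithm.

```python
def next_shard_count(current, observed_record_age_s,
                     target_age_s=60.0, min_shards=1, max_shards=16):
    """Pick the next quantity of virtual shards for a processing shard.

    If records are observed to be older than the target age, processing is
    falling behind, so parallelism is increased; if they are much fresher,
    parallelism is reduced to free resources.
    """
    if observed_record_age_s > target_age_s:
        return min(current * 2, max_shards)   # falling behind: scale out
    if observed_record_age_s < target_age_s / 2:
        return max(current // 2, min_shards)  # well ahead: scale in
    return current                            # within band: keep quantity
```

Records would then be processed by a second set of virtual shards formed with the returned quantity.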
  • Patent number: 11550372
    Abstract: An information processing apparatus includes a fan that cools a first processor, a dust-proof bezel that prevents dust from entering a casing, a memory, and a second processor coupled to the memory. The second processor is configured to measure a temperature of the first processor and an air volume of an air flow which passes through the dust-proof bezel, compare a registered air volume to the measured air volume when the temperature matches a registered temperature included in comparison information stored in the memory. The registered air volume being included in the comparison information in association with the matched temperature and the comparison information including a registered temperature of the first processor and a registered air volume of an air flow generated by the fan in association with each other. The second processor determines an abnormality in the dust-proof bezel based on a comparison result.
    Type: Grant
    Filed: July 2, 2019
    Date of Patent: January 10, 2023
    Assignee: Fujitsu Limited
    Inventors: Masakazu Matsubara, Kohei Kida, Hiromichi Okabe, Minoru Hirano
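The comparison step in the abstract can be sketched as a table lookup: find the registered air volume for the measured processor temperature and flag the bezel as abnormal when measured airflow has dropped well below it (a clogged bezel passes less air at the same operating point). The table values and tolerance are invented for illustration.

```python
# Hypothetical comparison information: temperature (°C) -> registered
# air volume of the fan's air flow (arbitrary units).
REGISTERED = {40: 30.0, 50: 45.0, 60: 60.0}

def bezel_abnormal(temp_c, measured_volume, tolerance=0.15):
    """Return True/False for abnormality, or None if no registered entry."""
    registered = REGISTERED.get(temp_c)
    if registered is None:
        return None  # measured temperature matches no registered temperature
    return measured_volume < registered * (1.0 - tolerance)
```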
  • Patent number: 11544099
    Abstract: At an interface an analytic model for processing data is received. The analytic model is inspected to determine a language, an action, an input type, and an output type. A virtualized execution environment is generated for an analytic engine that includes executable code to implement the analytic model for processing an input data stream.
    Type: Grant
    Filed: October 19, 2020
    Date of Patent: January 3, 2023
    Assignee: ModelOp, Inc.
    Inventors: Stuart Bailey, Matthew Mahowald, Maksym Kharchenko
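The inspection step above (determining a language, action, input type, and output type from an analytic model) could be sketched as metadata extraction that then selects an engine image for the virtualized execution environment. All field names and the image-naming scheme here are assumptions, not ModelOp's format.

```python
def inspect_model(artifact):
    """Inspect a model artifact's metadata and pick an engine image."""
    lang = artifact.get("language", "python")
    return {
        "language": lang,
        "action": artifact.get("entry_point", "score"),
        "input_type": artifact.get("input_schema", "json"),
        "output_type": artifact.get("output_schema", "json"),
        # Hypothetical container image for the generated environment.
        "engine_image": f"analytic-engine-{lang}:latest",
    }

spec = inspect_model({"language": "r", "entry_point": "predict"})
```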
  • Patent number: 11513855
    Abstract: A method, computer program product, and computing system for allocating a first set of cores of a plurality of cores of a multicore central processing unit (CPU) for processing host input-output (IO) operations of a plurality of operations on a storage system. A second set of cores of the plurality of cores may be allocated for processing flush operations of the plurality of operations on the storage system. A third set of cores of the plurality of cores may be allocated for processing rebuild operations of the plurality of operations on the storage system. At least one of one or more host IO operations, one or more rebuild operations, and one or more flush operations may be processed, via the plurality of cores and based upon, at least in part, the allocation of the plurality of cores for processing the plurality of operations.
    Type: Grant
    Filed: April 7, 2020
    Date of Patent: November 29, 2022
    Assignee: EMC IP Holding Company, LLC
    Inventors: Jian Gao, Vamsi K. Vankamamidi, Hongpo Gao, Jamin Kang
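The three-way core allocation described above can be sketched as a static partition of core ids among host IO, flush, and rebuild work. The split ratios (one half / one quarter / one quarter) are assumptions for illustration, not the patent's policy.

```python
def allocate_cores(total_cores):
    """Partition a multicore CPU into disjoint sets per operation class."""
    host_io = list(range(0, total_cores // 2))               # host IO ops
    flush = list(range(total_cores // 2, 3 * total_cores // 4))  # flush ops
    rebuild = list(range(3 * total_cores // 4, total_cores))     # rebuilds
    return {"host_io": host_io, "flush": flush, "rebuild": rebuild}

alloc = allocate_cores(16)
```

Each operation is then processed only on cores from its own set, so rebuild traffic cannot starve host IO.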
  • Patent number: 11500626
    Abstract: Methods for intelligent automatic merging of source control queue items are performed by systems and apparatuses. Project changes are submitted in build requests to a gated check-in build queue requiring successful builds to commit changes to a code repository according to source control. Multiple pending build requests in the build queue are intelligently and automatically merged into a single, pending merged request based on risk factor values associated with the build requests. For merged requests successfully built, files in the build requests are committed and the build requests are removed from the queue. Merged requests unsuccessfully built are divided into equal subsets based on updated risk factor values using information from the unsuccessful build. Successful builds of subsets allow for committing of files and removal from the build queue, while unsuccessful builds are further divided and processed until single build requests are processed to identify root cause errors.
    Type: Grant
    Filed: May 8, 2020
    Date of Patent: November 15, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Elad Iwanir, Gal Tamir, Mario A. Rodriguez, Chen Lahav
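The divide-and-conquer step above (merged requests that fail are split into subsets until single requests isolate root causes) is essentially a recursive bisection over the build queue. This sketch assumes a `build_ok` predicate standing in for running the gated build; it is illustrative, not Microsoft's implementation.

```python
def find_breaking_requests(requests, build_ok):
    """Return the individual build requests whose changes break the build.

    build_ok(subset) -> True if building the subset's changes together
    succeeds. Passing subsets are committed and removed from the queue;
    failing subsets are divided and retried.
    """
    if not requests or build_ok(requests):
        return []                      # whole subset builds: nothing to blame
    if len(requests) == 1:
        return list(requests)          # single failing request: root cause
    mid = len(requests) // 2
    return (find_breaking_requests(requests[:mid], build_ok)
            + find_breaking_requests(requests[mid:], build_ok))

# Example: request "r3" breaks any merged build that includes it.
bad = find_breaking_requests(["r1", "r2", "r3", "r4"],
                             lambda s: "r3" not in s)
```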
  • Patent number: 11487593
    Abstract: A barrier synchronization system, a parallel information processing apparatus, and the like are described in the embodiments. In an example, a solution is provided that reduces latency and improves processing speed in barrier synchronization.
    Type: Grant
    Filed: September 8, 2020
    Date of Patent: November 1, 2022
    Assignee: Fujitsu Limited
    Inventors: Kanae Nakagawa, Masaki Arai, Yasumoto Tomita
  • Patent number: 11449339
    Abstract: A system includes a memory, at least one physical processor in communication with the memory, and a plurality of hardware threads executing on the at least one physical processor. A first thread of the plurality of hardware threads is configured to execute a plurality of instructions that includes a restartable sequence. Responsive to a different second thread in communication with the first thread being pre-empted while the first thread is executing the restartable sequence, the first thread is configured to restart the restartable sequence prior to reaching a memory barrier.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: September 20, 2022
    Assignee: Red Hat, Inc.
    Inventors: Michael Tsirkin, Andrea Arcangeli
  • Patent number: 11436524
    Abstract: Techniques for hosting machine learning models are described. In some instances, a method is performed that includes: receiving a request to perform an inference using a particular machine learning model; determining a group of hosts to route the request to, the group hosting a plurality of machine learning models including the particular machine learning model; determining a path to the determined group of hosts; determining, based on the determined path, a particular host of the group to perform an analysis of the request, the particular host having the particular machine learning model in memory; routing the request to the particular host; performing inference on the request using the particular host; and providing a result of the inference to a requester.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: September 6, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Nikhil Kandoi, Ganesh Kumar Gella, Rama Krishna Sandeep Pokkunuri, Sudhakar Rao Puvvadi, Stefano Stefani, Kalpesh N. Sutaria, Enrico Sartorello, Tania Khattar
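The routing described above can be sketched in two steps: map the model to a group of hosts, then prefer a host in that group that already holds the model in memory. The group-selection function (a digit-sum stand-in for a real hash ring) and the data layout are assumptions for illustration.

```python
# Hypothetical host groups; each host records which models it has warm.
GROUPS = {
    0: [{"host": "h0", "loaded": {"m1"}}, {"host": "h1", "loaded": {"m2"}}],
    1: [{"host": "h2", "loaded": {"m3"}}, {"host": "h3", "loaded": set()}],
}

def route(model_id):
    """Route an inference request for model_id to a host."""
    gid = sum(map(ord, model_id)) % len(GROUPS)  # stand-in for a hash ring
    group = GROUPS[gid]
    for host in group:                  # prefer a host with the model warm
        if model_id in host["loaded"]:
            return host["host"]
    return group[0]["host"]             # else pick a group member to load it
```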
  • Patent number: 11429448
    Abstract: The described technology relates to scheduling jobs of a plurality of types in an enterprise web application. A processing system configures a job database having a plurality of job entries, and concurrently executes a plurality of job schedulers independently of each other. Each job scheduler is configured to schedule for execution jobs in the jobs database that are of a type different from types of jobs others of the plurality of job schedulers are configured to schedule. The processing system also causes performance of jobs scheduled for execution by any of the plurality of schedulers. Method and computer readable medium embodiments are also provided.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: August 30, 2022
    Assignee: Nasdaq, Inc.
    Inventor: Santhosh Philip George
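The scheme above, in which concurrent schedulers each claim only jobs of their own type from a shared job database, can be sketched as follows. The in-memory job table and field names are illustrative assumptions.

```python
# Hypothetical job database entries.
JOBS = [
    {"id": 1, "type": "report", "state": "pending"},
    {"id": 2, "type": "email", "state": "pending"},
    {"id": 3, "type": "report", "state": "pending"},
]

def schedule(job_type):
    """One scheduler pass: claim pending jobs of a single type.

    Because each scheduler filters on a disjoint type, schedulers never
    contend for the same job entries and can run independently.
    """
    claimed = []
    for job in JOBS:
        if job["type"] == job_type and job["state"] == "pending":
            job["state"] = "scheduled"
            claimed.append(job["id"])
    return claimed
```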
  • Patent number: 11422862
    Abstract: A service that provides serverless computation environments with persistent storage for web-based applications. Users of a web application are provided with persistent user-specific contexts including a file volume and application settings. Upon logging into the application via a web application interface, the service accesses the user's context and dynamically allocates compute instance(s) and installs execution environment(s) on the compute instance(s) according to the user's context to provide a network environment for the user. A network pipe may be established between the web application interface and the network environment. Interactions with the network environment are monitored, and changes to execution environments are recorded to the user's context. Compute instances may be deallocated by the service when not in use, with new compute instances allocated as needed.
    Type: Grant
    Filed: November 29, 2019
    Date of Patent: August 23, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Thomas Albert Faulhaber, Kevin McCormick
  • Patent number: 11416288
    Abstract: Embodiments of the present disclosure relate to a method, device, and computer program product for managing a service. The method comprises, in response to processor credits for the service reaching threshold credits at a first time instant, determining a second time instant when a first operation for the service is to be performed. The method further comprises determining, based on a set of historical processor credits between the first time instant and the second time instant, first processor credits related to a set of time periods between the first time instant and the second time instant.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: August 16, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Jian Wen, Yi Wang, Xing Min, Haitao Li, Lili Lin, Longcai Zou, Rong Qiao, Hao Yang
  • Patent number: 11410255
    Abstract: Transaction-enabled systems and methods for identifying and acquiring machine resources on a forward resource market are disclosed. An example system may include a controller having a resource requirement circuit to determine an amount of a resource required for a machine to service a task requirement, a forward resource market circuit to access a forward resource market, a resource market circuit to access a resource market, and a resource distribution circuit to execute a transaction of the resource on at least one of the resource market or the forward resource market in response to the determined amount of the resource required.
    Type: Grant
    Filed: November 18, 2019
    Date of Patent: August 9, 2022
    Assignee: Strong Force TX Portfolio 2018, LLC
    Inventor: Charles Howard Cella
  • Patent number: 11385926
    Abstract: An application and system fast launch may provide a virtual memory address area (VMA) container to manage the restore of a context of a process, i.e., process context, saved in response to a checkpoint to enhance performance and to provide a resource efficient fast launch. More particularly, the fast launch may provide a way to manage, limit and/or delay the restore of a process context saved in response to a checkpoint, by generating a VMA container comprising VMA container pages, to restore physical memory pages following the checkpoint based on the most frequently used or predicted to be used. The application and system fast launch with the VMA container may avoid unnecessary input/output (I/O) bandwidth consumption, page faults and/or memory copy operations that may otherwise result from restoring the entire context of a VMA container without regard to frequency of use.
    Type: Grant
    Filed: February 17, 2017
    Date of Patent: July 12, 2022
    Assignee: Intel Corporation
    Inventors: Chao Xie, Jia Bao, Mingwei Shi, Yifan Zhang, Qiming Shi, Beiyuan Hu, Tianyou Li, Xiaokang Qin
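The restore policy described above (bring back the most frequently used pages eagerly, fault in the rest later) can be sketched as a ranking over per-page use counts. The budget and data layout are illustrative assumptions, not Intel's VMA container format.

```python
def restore_order(page_use_counts, eager_budget):
    """Split checkpointed pages into eager-restore and lazy-fault sets.

    page_use_counts: mapping of page id -> observed use frequency.
    Restoring only the hottest pages up front avoids the I/O bandwidth,
    page faults, and memory copies of restoring the whole context.
    """
    ranked = sorted(page_use_counts, key=page_use_counts.get, reverse=True)
    return ranked[:eager_budget], ranked[eager_budget:]

eager, lazy = restore_order({"p1": 3, "p2": 10, "p3": 1, "p4": 7}, 2)
```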
  • Patent number: 11385972
    Abstract: Disclosed are various examples for virtual-machine-specific failover protection. In some examples, a power-on request is received for a protected virtual machine. Virtual-machine-specific failover protection is enabled for the protected virtual machine. The protected virtual machine is executed on a first host of a cluster, and a dynamic virtual machine slot for the protected virtual machine is created on a second host of the cluster. The dynamic virtual machine slot is created to match a hardware resource configuration of the protected virtual machine. An anti-affinity rule is maintained between the protected virtual machine and the dynamic virtual machine slot.
    Type: Grant
    Filed: June 26, 2019
    Date of Patent: July 12, 2022
    Assignee: VMware, Inc.
    Inventors: Charan Singh K, Fei Guo
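The anti-affinity constraint above (the dynamic slot reserving the protected VM's hardware footprint must sit on a different host) can be sketched as a placement check. Host and VM records here are invented for illustration.

```python
def place_failover_slot(vm, hosts):
    """Pick a host for the VM's dynamic slot.

    The slot must match the VM's hardware resource configuration and,
    per the anti-affinity rule, may not land on the VM's own host.
    """
    for host in hosts:
        if host["name"] != vm["host"] and host["free_cpus"] >= vm["cpus"]:
            return host["name"]
    return None  # no host can guarantee failover capacity

slot_host = place_failover_slot(
    {"host": "esx1", "cpus": 4},
    [{"name": "esx1", "free_cpus": 8}, {"name": "esx2", "free_cpus": 6}])
```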
  • Patent number: 11372678
    Abstract: Embodiments of the present disclosure provide distributed system resource allocation methods and apparatuses. The method comprises: receiving a resource preemption request sent by a resource scheduling server, the resource preemption request comprising job execution information corresponding to a first job management server; determining, according to the job execution information corresponding to the first job management server and comprised in the resource preemption request, resources to be returned by a second job management server and a resource return deadline; and returning, according to the resource return deadline and a current job execution progress of the second job management server, the resources to the resource scheduling server before expiration of the resource return deadline.
    Type: Grant
    Filed: February 24, 2020
    Date of Patent: June 28, 2022
    Assignee: Alibaba Group Holding Limited
    Inventors: Yang Zhang, Yihui Feng, Jin Ouyang, Qiaohuan Han, Fang Wang
  • Patent number: 11366695
    Abstract: A charging assistant system that assists charging for use of an accelerator unit, which is one or more accelerators, includes an operation amount obtaining unit, an acceleration rate estimation unit, and a use fee determination unit. For each of one or more commands input into the accelerator unit, the operation amount obtaining unit obtains the amount of operation related to execution of the command from a response output from the accelerator unit for the command. For the one or more commands input into the accelerator unit, the acceleration rate estimation unit estimates an acceleration rate on the basis of the command execution time, that is, the time required to process the one or more commands, and the one or more amounts of operation obtained for those commands. The use fee determination unit determines a use fee of the accelerator unit on the basis of the estimated acceleration rate.
    Type: Grant
    Filed: September 7, 2018
    Date of Patent: June 21, 2022
    Assignee: Hitachi, Ltd.
    Inventors: Yoshifumi Fujikawa, Kazuhisa Fujimoto, Toshiyuki Aritsuka, Kazushi Nakagawa
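One plausible reading of the estimation above: infer the time the work would have taken without acceleration from the obtained operation amounts, divide by the measured accelerated execution time to get the acceleration rate, and price usage proportionally. The formulas and constants are assumptions, not the patent's.

```python
def acceleration_rate(op_counts, time_per_op_s, accel_exec_time_s):
    """Estimate speedup: assumed unaccelerated time / measured time."""
    est_cpu_time = sum(n * time_per_op_s for n in op_counts)
    return est_cpu_time / accel_exec_time_s

def use_fee(rate, base_fee=1.0):
    """Charge proportionally to the speedup the accelerator delivered."""
    return base_fee * rate

# Two commands with 1000 and 500 operations; 0.25 s per op unaccelerated;
# the accelerator finished the batch in 75 s.
rate = acceleration_rate([1000, 500], time_per_op_s=0.25,
                         accel_exec_time_s=75.0)
```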
  • Patent number: 11366670
    Abstract: A predictive queue control and allocation system includes a queue and a queue control server communicatively coupled to the queue. The queue includes a first and second allocation of queue locations. The queue stores a plurality of resources. The queue control server includes an interface and a queue control engine implemented by a processor. The interface monitors the plurality of resources before the plurality of resources are stored in the queue. The queue control engine predicts that one or more conditions indicate that a queue overflow will occur in the first allocation of queue locations. The queue control engine prioritizes the plurality of resources being received by the queue. The queue control engine may apply a machine learning technique to the plurality of resources. The queue control engine transfers the plurality of resources prioritized by the machine learning technique.
    Type: Grant
    Filed: February 28, 2020
    Date of Patent: June 21, 2022
    Assignee: Bank of America Corporation
    Inventors: Anuj Sharma, Gaurav Srivastava, Vishal D. Kelkar
  • Patent number: 11347554
    Abstract: Systems and methods are provided for a context-aware scheduler. In one example embodiment, the context-aware scheduler accesses a stored application context to determine that the stored application context corresponds to a change in application context from a first application context according to which a queue of jobs for execution for an application is currently prioritized, to a second application context. The context-aware scheduler determines a list of attributions comprising assigned priority categories for the second application context and uses the list of attributions for the second application context to re-prioritize the plurality of jobs in the queue based on a job attribution tag for each job in the queue. The context-aware scheduler sets a first job in the re-prioritized queue as the next job for execution for the application.
    Type: Grant
    Filed: November 14, 2019
    Date of Patent: May 31, 2022
    Assignee: Snap Inc.
    Inventors: Irina Kotelnikova, David Liberman, Denis Ovod, Johan Lindell
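The re-prioritization step above, where a context switch supplies a new mapping from job attribution tags to priority categories and the queue is re-sorted accordingly, can be sketched as follows. Tags and priority values are invented for illustration.

```python
def reprioritize(queue, context_priorities, default=99):
    """Re-sort queued jobs by the new context's tag -> priority mapping.

    Jobs whose tag has no assigned category fall to the default (lowest)
    priority; Python's sort is stable, so ties keep their queue order.
    """
    return sorted(queue,
                  key=lambda job: context_priorities.get(job["tag"], default))

queue = [{"id": 1, "tag": "prefetch"}, {"id": 2, "tag": "render"},
         {"id": 3, "tag": "upload"}]
# Application context changed: rendering work is now most urgent.
reordered = reprioritize(queue, {"render": 0, "upload": 1, "prefetch": 2})
```

The first job of `reordered` becomes the next job executed for the application.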
  • Patent number: 11340886
    Abstract: Methods and systems for updating configurations. An application can be determined to be incorrectly configured. A request can be sent to update a configuration property. The configuration property can be updated in a test environment. The application can be tested with the updated configuration property in the test environment. The test of the application with the updated configuration property in the test environment can be identified as successful. An incorrect configuration of the application can be updated.
    Type: Grant
    Filed: August 12, 2019
    Date of Patent: May 24, 2022
    Assignee: Capital One Services, LLC
    Inventors: Lokesh Vijay Kumar, Poornima Bagare Raju