Patents Examined by Sisley N Kim
  • Patent number: 10901778
    Abstract: One embodiment provides a method for optimizing data read-ahead for workflow and analytics applications including obtaining, by a processor, next file information from a workflow scheduler for next files for a next processing stage that are to be accessed by a process. Data for the next processing stage for at least one application and at least one system job is prefetched. The next files are prefetched as the prefetching data reaches an end of current inputs.
    Type: Grant
    Filed: September 27, 2018
    Date of Patent: January 26, 2021
    Assignee: International Business Machines Corporation
    Inventors: Wayne Sawdon, Deepavali Bhagwat
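For context, a minimal Python sketch of the read-ahead pattern the abstract describes: ask the workflow scheduler for the next stage's files and warm them into a cache as the current inputs run out. The `scheduler`/`cache` interfaces and the trigger point near the end of the inputs are assumptions for illustration, not the patented implementation.

```python
# Illustrative sketch only; the scheduler/cache interfaces are assumptions, not the patented design.
from concurrent.futures import ThreadPoolExecutor

def prefetch_next_stage(scheduler, cache):
    """Ask the workflow scheduler which files the next processing stage will
    access, then warm them into the cache in the background."""
    next_files = scheduler.next_stage_files()      # "next file information" from the scheduler
    pool = ThreadPoolExecutor(max_workers=4)
    for path in next_files:
        pool.submit(cache.load, path)              # read-ahead while the current stage finishes
    return pool

def run_stage(scheduler, cache, current_inputs, handle):
    """Process the current stage; trigger read-ahead near the end of its inputs."""
    prefetcher = None
    for i, item in enumerate(current_inputs):
        handle(item)
        if prefetcher is None and i >= len(current_inputs) - 2:   # nearing end of current inputs
            prefetcher = prefetch_next_stage(scheduler, cache)
    if prefetcher is not None:
        prefetcher.shutdown(wait=False)
```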
  • Patent number: 10901807
    Abstract: Threads running in a computer system are managed. Responsive to a thread for an application attempting to acquire a lock to a shared computing resource to perform a task for the application, a determination is made by the computer system as to whether the lock for the shared computing resource was acquired by the thread for the application. An unrelated task for the application is assigned by the computer system to the thread in the absence of a determination that the lock was acquired.
    Type: Grant
    Filed: January 2, 2019
    Date of Patent: January 26, 2021
    Assignee: International Business Machines Corporation
    Inventors: Sreepurna Jasti, Lakshmi Swetha Gopireddy, Gautam Mittal, Gireesh Punathil
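A minimal sketch of the general pattern described above, assuming a simple queue of "unrelated" application work: a thread that fails to acquire the lock picks up other work instead of blocking. The structure is illustrative, not the patented mechanism.

```python
# Sketch of the "do other work instead of blocking on a lock" pattern;
# the queue-based structure and names are assumptions, not the patented method.
import threading, queue

resource_lock = threading.Lock()
unrelated_tasks = queue.Queue()          # other application work the thread can pick up

def run_locked_task(locked_task):
    while True:
        if resource_lock.acquire(blocking=False):    # try to get the shared resource
            try:
                locked_task()                        # we own the lock: do the protected work
            finally:
                resource_lock.release()
            return
        try:
            other = unrelated_tasks.get_nowait()     # lock not acquired: do unrelated work instead
        except queue.Empty:
            continue                                 # nothing else to do; retry the lock (a real implementation would back off)
        other()
```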
  • Patent number: 10901805
    Abstract: Concepts and technologies directed to distributed load balancing for processing of high-volume data streams in datacenters are disclosed herein. In various aspects, a system can include a processor and memory storing instructions that, upon execution, cause performance of operations. The operations can include receiving raw data items in an incoming queue, and generating, within each of a plurality of worker processing threads, a load hash set that includes a load hash value for each of the raw data items in the incoming queue. The operations can include determining, within each worker processing thread, which of the raw data items to process from the incoming queue based on the load hash set, and processing, via one of the plurality of worker processing threads, each of the raw data items in the incoming queue based on the load hash value for each of the raw data items.
    Type: Grant
    Filed: December 18, 2018
    Date of Patent: January 26, 2021
    Assignee: AT&T Intellectual Property I, L.P.
    Inventor: Gregory E. Feldkamp
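A rough sketch of hash-based partitioning among worker threads, in the spirit of the abstract above. The modulo assignment rule and the SHA-1 hash are assumptions chosen for illustration, not the patented algorithm.

```python
# Minimal sketch of hash-based work partitioning among worker threads (illustrative only).
import hashlib, threading

def load_hash(item: bytes) -> int:
    return int.from_bytes(hashlib.sha1(item).digest()[:8], "big")

def worker(worker_id: int, num_workers: int, incoming: list, results: list):
    # Each worker computes the load hash for every item in the incoming queue,
    # then processes only the items that hash to it.
    for item in incoming:
        if load_hash(item) % num_workers == worker_id:
            results.append((worker_id, item))        # "process" the item

incoming = [f"record-{i}".encode() for i in range(10)]
results = []
threads = [threading.Thread(target=worker, args=(w, 3, incoming, results)) for w in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(results))   # every raw data item processed by exactly one worker
```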
  • Patent number: 10891165
    Abstract: Methods and systems for searching a frozen index are provided. Exemplary methods include: receiving an initial search and a subsequent search; loading the initial search and the subsequent search into a throttled thread pool; getting the initial search from the throttled thread pool; storing a first shard from a mass storage in a memory in response to the initial search; performing the initial search on the first shard; providing first top search result scores from the initial search; and removing the first shard from the memory when the initial search is completed.
    Type: Grant
    Filed: April 12, 2019
    Date of Patent: January 12, 2021
    Assignee: Elasticsearch B.V.
    Inventor: Simon Daniel Willnauer
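A toy sketch of the load-search-evict flow for a rarely searched shard, assuming a JSON file stands in for a shard on mass storage and a single-worker pool stands in for the throttled thread pool. None of this reflects Elasticsearch's actual code.

```python
# Sketch of the "load shard, search, evict" flow for a frozen index (illustrative only).
from concurrent.futures import ThreadPoolExecutor
import heapq, json, pathlib

throttled_pool = ThreadPoolExecutor(max_workers=1)   # throttled: one frozen search at a time

def search_frozen_shard(shard_path: str, query: str, top_k: int = 10):
    docs = json.loads(pathlib.Path(shard_path).read_text())    # load shard from mass storage into memory
    try:
        scored = ((doc.lower().count(query.lower()), doc) for doc in docs)
        return heapq.nlargest(top_k, scored)                   # top search result scores
    finally:
        del docs                                               # drop the shard from memory when the search completes

# queued searches wait in the throttled pool instead of holding many shards in memory at once
pathlib.Path("shard0.json").write_text(json.dumps(["disk error on node 3", "all good", "timeout error"]))
future = throttled_pool.submit(search_frozen_shard, "shard0.json", "error")
print(future.result())
```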
  • Patent number: 10884822
    Abstract: A method for deterministic locking in a parallel computing environment is provided. The method includes creating a data structure in memory of a computer for a shared resource. The data structure encapsulates a reference to an owner of a lock for the shared resource and a queue of threads able to seek exclusive access to the shared resource. The queue in turn includes different entries, each entry including an identifier for a corresponding one of the threads and a deterministic time computed for the corresponding one of the threads from a count of memory accesses occurring in the corresponding one of the threads. Consequently, a thread can be selected from the queue to receive ownership of the lock and exclusive access to the shared resource based upon a deterministic time for the selected thread as compared to other deterministic times for others of the threads in the queue, for example, a lowest deterministic time.
    Type: Grant
    Filed: March 4, 2019
    Date of Patent: January 5, 2021
    Assignee: International Business Machines Corporation
    Inventors: Tobias Achterberg, Daniel Junglas, Roland Wunderling
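A small sketch of granting a lock by lowest deterministic time. Here the memory-access count is passed in as a plain integer, and the `Waiter`/`DeterministicLock` names are invented for illustration; the abstract describes deriving the deterministic time from counts of memory accesses in each thread.

```python
# Sketch of selecting the next lock owner by lowest deterministic time (illustrative only).
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Waiter:
    deterministic_time: int                 # derived from the thread's memory-access count
    thread_id: int = field(compare=False)

@dataclass
class DeterministicLock:
    owner: int | None = None
    waiters: list = field(default_factory=list)

    def request(self, thread_id: int, mem_accesses: int) -> None:
        heapq.heappush(self.waiters, Waiter(mem_accesses, thread_id))

    def grant_next(self) -> int | None:
        # the waiter with the lowest deterministic time becomes the owner
        if self.waiters:
            self.owner = heapq.heappop(self.waiters).thread_id
        return self.owner

lock = DeterministicLock()
lock.request(thread_id=1, mem_accesses=1200)
lock.request(thread_id=2, mem_accesses=800)
assert lock.grant_next() == 2      # thread 2 had the lower deterministic time
```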
  • Patent number: 10884806
    Abstract: In an embodiment, a method is performed by an agent installed in a computing environment on a computer system. The method includes monitoring the computing environment for optimization triggers. The method also includes, responsive to detection of an optimization trigger, identifying an optimization profile of a plurality of optimization profiles that is applicable to the optimization trigger. In addition, the method includes temporarily modifying the computing environment in accordance with the optimization profile. Further, the method includes, responsive to the temporarily modifying, monitoring the computing environment for optimization exit triggers. Additionally, the method includes, responsive to detection of an optimization exit trigger, automatically reversing the temporarily modifying.
    Type: Grant
    Filed: September 11, 2019
    Date of Patent: January 5, 2021
    Assignee: ASURVIO, LP
    Inventors: Bogdan Odulinski, Brett Pany
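A hedged sketch of the trigger, profile, revert cycle described above. The profile contents (a process-priority tweak) and the trigger strings are invented; only the overall apply-then-reverse flow follows the abstract.

```python
# Sketch of the optimization trigger -> profile -> revert loop (illustrative only).
from dataclasses import dataclass
from typing import Callable

@dataclass
class OptimizationProfile:
    name: str
    matches: Callable[[str], bool]     # is this profile applicable to the trigger?
    apply: Callable[[dict], dict]      # modifies the environment, returns what it changed
    revert: Callable[[dict, dict], None]

def handle_trigger(trigger: str, env: dict, profiles: list):
    for profile in profiles:
        if profile.matches(trigger):
            saved = profile.apply(env)             # temporarily modify the environment
            return profile, saved
    return None, None

def handle_exit_trigger(profile, saved, env: dict):
    if profile is not None:
        profile.revert(env, saved)                 # automatically reverse the modification

def apply_gaming(env: dict) -> dict:
    saved = {"priority": env.get("priority", "normal")}
    env["priority"] = "high"                       # temporary modification
    return saved

gaming = OptimizationProfile(
    name="gaming",
    matches=lambda t: t == "game_launched",
    apply=apply_gaming,
    revert=lambda env, saved: env.update(saved),   # reverse the temporary change
)

env = {"priority": "normal"}
profile, saved = handle_trigger("game_launched", env, [gaming])   # env["priority"] is now "high"
handle_exit_trigger(profile, saved, env)                          # reversed: back to "normal"
```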
  • Patent number: 10884812
    Abstract: Systems and methods are described for providing performance-based hardware emulation in an on-demand network code execution system. A user may generate a task on the system by submitting code. The system may determine, based on the code or its execution, that the code executes more efficiently if certain functionality is available, such as an extension to a processor's instruction set. The system may further determine that it can provide the needed functionality using various computing resources, which may include physical hardware, emulated hardware (e.g., a virtual machine), or combinations thereof. The system may then determine and provide a set of computing resources to use when executing the user-submitted code, which may be based on factors such as availability, cost, estimated performance, desired performance, or other criteria. The system may also migrate code from one set of computing resources to another, and may analyze demand and project future computing resource needs.
    Type: Grant
    Filed: December 13, 2018
    Date of Patent: January 5, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Marc John Brooker, Philip Daniel Piwonka, Niall Mullen, Mikhail Danilov, Holly Mesrobian, Timothy Allen Wagner
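A simplified sketch of choosing a set of computing resources for user code based on functionality, performance, and cost, as the abstract outlines. The candidate resource sets, the instruction-set-extension flag, and the cheapest-first rule are assumptions for illustration, not the patented selection logic.

```python
# Sketch of resource-set selection for on-demand code execution (illustrative only).
from dataclasses import dataclass

@dataclass
class ResourceSet:
    name: str
    has_extension: bool       # physical support for the needed instruction-set extension
    emulated: bool            # functionality provided by emulated hardware instead
    cost: float
    est_runtime_s: float

def choose_resources(candidates: list, needs_extension: bool, max_runtime_s: float) -> ResourceSet:
    usable = [c for c in candidates
              if (not needs_extension or c.has_extension or c.emulated)
              and c.est_runtime_s <= max_runtime_s]
    # among sets that satisfy the functional and performance needs, prefer the cheapest
    return min(usable, key=lambda c: c.cost)

candidates = [
    ResourceSet("bare-metal-avx", has_extension=True,  emulated=False, cost=3.0, est_runtime_s=2.0),
    ResourceSet("vm-emulated",    has_extension=False, emulated=True,  cost=1.0, est_runtime_s=6.0),
]
print(choose_resources(candidates, needs_extension=True, max_runtime_s=10.0).name)   # vm-emulated
```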
  • Patent number: 10884785
    Abstract: Methods and apparatus for processor time accounting for a thread executing in a multi-threaded environment are disclosed. A thread executing in an operating system receives from the operating system an allotment of time for use of a processor, and performs timed computations using the processor. Iteratively or after completing the computations, the thread determines an amount of time used by the thread based on a thread utilization counter initialized by the operating system. The thread makes this determination through a user-level library function call rather than a call to the operating system. The thread obtains an amount of time remaining in the allotment of time by comparing the thread utilization counter to a current CPU time using a user-level library function call.
    Type: Grant
    Filed: November 29, 2018
    Date of Patent: January 5, 2021
    Assignee: International Business Machines Corporation
    Inventor: Kelvin D. Nilsen
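A small sketch of user-level CPU-time budgeting for a thread, using Python's `time.thread_time()` as a stand-in for the thread utilization counter. The allotment value and the work loop are invented; this is not the patented accounting method.

```python
# Sketch of a thread tracking its own CPU-time allotment (illustrative only).
import time, threading

def timed_worker(allotment_seconds: float):
    start = time.thread_time()                     # per-thread CPU time counter
    work_done = 0
    while True:
        work_done += sum(range(10_000))            # the timed computation
        used = time.thread_time() - start          # read the counter via a library call in the loop
        if allotment_seconds - used <= 0:          # remaining allotment exhausted
            break
    return work_done

t = threading.Thread(target=timed_worker, args=(0.05,))
t.start(); t.join()
```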
  • Patent number: 10877808
    Abstract: Mechanisms are provided for scheduling a task from a plurality of tasks to a processor core of a cluster of processor cores. The processor cores share caches. A method is performed by a controller. The method comprises determining group-wise task relationships between the plurality of tasks based on duration of cache misses resulting from running groups of the plurality of tasks on processor cores sharing the same cache. The method comprises scheduling the task to one of the processor cores based on the group-wise task relationships of the task.
    Type: Grant
    Filed: October 10, 2016
    Date of Patent: December 29, 2020
    Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
    Inventors: Patrik Åberg, Bengt Wikenfalk
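A toy sketch of scheduling a task to the core whose co-runners it interferes with least, based on a table of pairwise cache-miss costs. The cost values and the additive scoring are assumptions for illustration, not the patented method.

```python
# Sketch of cache-aware placement using group-wise task relationships (illustrative only).
# cache_miss_cost[(a, b)] = extra cache-miss time observed when tasks a and b share a cache
cache_miss_cost = {("decode", "encode"): 5.0, ("decode", "log"): 0.5, ("encode", "log"): 0.7}

def pair_cost(a: str, b: str) -> float:
    return cache_miss_cost.get((a, b)) or cache_miss_cost.get((b, a), 0.0)

def schedule(task: str, cores: dict) -> int:
    """Pick the core whose currently running tasks have the lowest combined
    cache-miss relationship with the new task, then place the task there."""
    best_core = min(cores, key=lambda c: sum(pair_cost(task, other) for other in cores[c]))
    cores[best_core].append(task)
    return best_core

cores = {0: ["decode"], 1: ["log"]}
print(schedule("encode", cores))    # prefers core 1: "encode" interferes far less with "log" than with "decode"
```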
  • Patent number: 10860383
    Abstract: A management controller of an information handling system may be configured to provide out-of-band management of the information handling system by receiving a first instruction from a first management console, the first instruction relating to a particular feature. The management controller may further be configured to receive a second instruction from a second management console, the second instruction also relating to the particular feature. In response to a determination that the first management console has a higher priority than the second management console, the management controller may execute the first instruction but not the second instruction.
    Type: Grant
    Filed: September 5, 2018
    Date of Patent: December 8, 2020
    Assignee: Dell Products L.P.
    Inventors: Smruti Ranjan Debata, K. N. Ravishankar
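A minimal sketch of priority arbitration between two management consoles targeting the same feature. The priority table and instruction shape are invented for illustration and do not reflect Dell's management controller.

```python
# Sketch of executing only the higher-priority console's instruction per feature (illustrative only).
console_priority = {"console-A": 10, "console-B": 5}     # higher number = higher priority

def arbitrate(instructions: list) -> dict:
    """Given instructions from different consoles relating to the same feature,
    keep only the instruction from the highest-priority console."""
    by_feature = {}
    for instr in instructions:
        current = by_feature.get(instr["feature"])
        if current is None or console_priority[instr["console"]] > console_priority[current["console"]]:
            by_feature[instr["feature"]] = instr
    return by_feature

print(arbitrate([
    {"console": "console-B", "feature": "power", "action": "off"},
    {"console": "console-A", "feature": "power", "action": "on"},
]))
# only console-A's instruction for the "power" feature is executed
```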
  • Patent number: 10838759
    Abstract: A method, a device, and a non-transitory storage medium are described in which an elastic platform virtualization service is provided in relation to a virtual device. The elastic platform virtualization service includes logic that provides for the management of a virtualized device during its life cycle. The creation or reconfiguration of the virtualized device is based on a tertiary choice between using dedicated hardware and dedicated kernel; common hardware and common kernel; or a combination of the dedicated hardware, dedicated kernel, common hardware, and common kernel.
    Type: Grant
    Filed: February 4, 2019
    Date of Patent: November 17, 2020
    Assignee: Verizon Patent and Licensing Inc.
    Inventor: Mehmet Toy
  • Patent number: 10831555
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed to improve workload domain management of virtualized server systems. An example apparatus includes a resource status analyzer to determine a health status of a first virtualized server of a workload domain, compare the health status to a decomposition threshold based on a policy, and transfer a workload of the first virtualized server to a second virtualized server of the workload domain when the health status satisfies the decomposition threshold. The example apparatus further includes a resource deallocator to deallocate the first virtualized server from the workload domain to a pool of virtualized servers to execute the workload using the second virtualized server.
    Type: Grant
    Filed: July 27, 2018
    Date of Patent: November 10, 2020
    Assignee: VMWARE, INC.
    Inventors: Santhana Krishnan, Thayumanavan Sridhar, Chitrank Seshadri
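A compact sketch of the decompose-on-unhealthy flow: compare a health score to a threshold, transfer the workload to the healthiest peer, and return the server to the free pool. The health scores, threshold, and `Server` type are illustrative stand-ins, not VMware's implementation.

```python
# Sketch of workload-domain decomposition driven by a health threshold (illustrative only).
from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    health: float                  # e.g. 0.0 (failed) .. 1.0 (healthy)
    workloads: list = field(default_factory=list)

def decompose_if_unhealthy(domain: list, free_pool: list, decomposition_threshold: float = 0.5) -> None:
    for server in list(domain):
        others = [s for s in domain if s is not server]
        if server.health < decomposition_threshold and others:    # health status violates the policy
            target = max(others, key=lambda s: s.health)
            target.workloads.extend(server.workloads)             # transfer the workload to a healthy peer
            server.workloads.clear()
            domain.remove(server)
            free_pool.append(server)                              # deallocate back to the pool

domain = [Server("esx-1", 0.2, ["db"]), Server("esx-2", 0.9, ["web"])]
pool = []
decompose_if_unhealthy(domain, pool)
print([s.name for s in domain], [s.name for s in pool])   # ['esx-2'] ['esx-1']
```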
  • Patent number: 10831556
    Abstract: Various systems and methods for virtual CPU consolidation to avoid physical CPU contention between virtual machines are described herein. A processor system that includes multiple physical processors (PCPUs) includes a first virtual machine (VM) that includes multiple first virtual processors (VCPUs); a second VM that includes multiple second VCPUs; and a virtual machine monitor (VMM) to map individual ones of the first VCPUs to run on at least one of individual PCPUs of a first subset of the PCPUs and individual PCPUs of a set of PCPUs that includes the first subset of the PCPUs and a second subset of the PCPUs, based at least in part upon compute capacity of the first subset of the PCPUs to run the first VCPUs, and to map individual ones of the second VCPUs to run on individual ones of the second subset of the PCPUs.
    Type: Grant
    Filed: December 23, 2015
    Date of Patent: November 10, 2020
    Assignee: Intel IP Corporation
    Inventors: Yuyang Du, Jian Sun, Yong Tong Chua, Mingqiu Sun, Sebastien Haezebrouck, Nicole Chalhoub, Premanand Sakarda, Richard Quinzio
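A toy sketch of capacity-based vCPU placement: consolidate the first VM onto its own subset of physical CPUs when that subset has enough capacity, otherwise let it spread over the whole set, while the second VM stays on the other subset. The one-vCPU-per-pCPU capacity model is a simplification for illustration only.

```python
# Sketch of vCPU-to-pCPU mapping to avoid contention between two VMs (illustrative only).
def map_vcpus(vm1_vcpus: int, vm2_vcpus: int, pcpus: list) -> dict:
    half = len(pcpus) // 2
    subset1, subset2 = pcpus[:half], pcpus[half:]
    # consolidate VM1 onto subset1 when it has enough compute capacity,
    # otherwise let VM1 spread over the whole set; VM2 stays on subset2
    vm1_pool = subset1 if vm1_vcpus <= len(subset1) else pcpus
    return {
        "vm1": [vm1_pool[i % len(vm1_pool)] for i in range(vm1_vcpus)],
        "vm2": [subset2[i % len(subset2)] for i in range(vm2_vcpus)],
    }

print(map_vcpus(vm1_vcpus=2, vm2_vcpus=2, pcpus=[0, 1, 2, 3]))
# {'vm1': [0, 1], 'vm2': [2, 3]}  -> VM1 consolidated on its own subset, avoiding contention with VM2
```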
  • Patent number: 10817400
    Abstract: A management apparatus is configured to acquire status information of a virtual infrastructure on which a first management service operates, perform mutual communication with a second management service that has a function identical to a function of the first management service and that operates on the virtual infrastructure, identify a status for an item used for identifying a problem area based on a communication status, the status information of the virtual infrastructure, and a communication status with the second management service, acquire, from the second management service, a status identified for the item based on status information of the virtual infrastructure and a communication status of the mutual communication, identify, based on the identified status and the acquired status for the item, whether the problem area is either the first management service or the second management service, and perform a restoration operation corresponding to the identified problem area.
    Type: Grant
    Filed: March 14, 2018
    Date of Patent: October 27, 2020
    Assignee: FUJITSU LIMITED
    Inventor: Akira Minegishi
  • Patent number: 10810037
    Abstract: The present invention relates to a hybrid memory system with live page migration for virtual machines, and the system comprises a physical machine installed with a virtual machine and being configured to: build a channel for a shared memory between the virtual machine and a hypervisor; make the hypervisor generate to-be-migrated cold/hot page information and write the to-be-migrated cold/hot page information into the shared memory; make the virtual machine read the to-be-migrated cold/hot page information from the shared memory; and make the virtual machine perform, according to the read to-be-migrated cold/hot page information, a page migration process across heterogeneous memories of the virtual machine without stopping the virtual machine.
    Type: Grant
    Filed: January 28, 2020
    Date of Patent: October 20, 2020
    Assignee: Huazhong University of Science and Technology
    Inventors: Haikun Liu, Xiaofei Liao, Hai Jin, Dang Yang
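A simplified sketch of the shared-memory channel idea: the hypervisor publishes a cold/hot page list, and the running guest migrates pages between memory tiers without stopping. A plain dict stands in for the shared memory, and the DRAM/NVM tier labels are illustrative.

```python
# Sketch of hypervisor-guided live page migration over a shared channel (illustrative only).
shared_channel = {}                 # stands in for the shared memory between hypervisor and guest

def hypervisor_publish(page_stats: dict, hot_threshold: int) -> None:
    # hypervisor classifies pages and writes the to-be-migrated cold/hot list into the channel
    shared_channel["to_migrate"] = {
        "to_fast": [p for p, hits in page_stats.items() if hits >= hot_threshold],
        "to_slow": [p for p, hits in page_stats.items() if hits < hot_threshold],
    }

def guest_migrate(memory: dict) -> None:
    # the running VM reads the list and moves pages between memory tiers without stopping
    plan = shared_channel.get("to_migrate", {})
    for page in plan.get("to_fast", []):
        memory[page] = "DRAM"
    for page in plan.get("to_slow", []):
        memory[page] = "NVM"

memory = {0: "NVM", 1: "DRAM", 2: "NVM"}
hypervisor_publish({0: 120, 1: 3, 2: 90}, hot_threshold=50)
guest_migrate(memory)
print(memory)     # {0: 'DRAM', 1: 'NVM', 2: 'DRAM'}
```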
  • Patent number: 10802879
    Abstract: A method for dynamically assigning a task is provided. The method includes: broadcasting work requirements corresponding to the task to a plurality of resource provisioning devices; determining whether an application request transmitted from one of the resource provisioning devices has been received; and assigning the task to a first resource provisioning device of the resource provisioning devices when receiving the application request transmitted from the first resource provisioning device.
    Type: Grant
    Filed: August 20, 2018
    Date of Patent: October 13, 2020
    Assignee: ALTOS COMPUTING INC.
    Inventors: Szu-Ting Chou, Lee-Yu Lu
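A small sketch of broadcast-then-assign, with queues modeling the network between the dispatcher and the resource provisioning devices; putting one copy of the requirements per device simulates the broadcast, and all names are illustrative.

```python
# Sketch of broadcasting work requirements and assigning the task to the first applicant (illustrative only).
import queue, threading

broadcast = queue.Queue()         # work requirements pushed to all devices
applications = queue.Queue()      # application requests coming back from devices

def provisioning_device(device_id: str, capacity_gb: int):
    requirements = broadcast.get()                      # receive the broadcast work requirements
    if capacity_gb >= requirements["min_gb"]:
        applications.put(device_id)                     # apply for the task

def dispatch_task(task: dict) -> str:
    for _ in range(2):
        broadcast.put(task)                             # "broadcast" to the two devices
    return applications.get(timeout=1.0)                # assign to the first application request received

devices = [threading.Thread(target=provisioning_device, args=(f"dev-{i}", 8 * (i + 1))) for i in range(2)]
for d in devices: d.start()
print("assigned to", dispatch_task({"min_gb": 10}))     # only dev-1 has enough capacity to apply
for d in devices: d.join()
```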
  • Patent number: 10802831
    Abstract: A computer system is associated with a number of computers including at least one central processing unit (CPU). Managing parallel processing on the computer system may comprise determining a scheduling limit to restrict a number of worker threads available for executing tasks on the computer system. The managing may further comprise executing a plurality of tasks on the computer system. An availability of a CPU associated with the computer system is determined based on whether a load of the CPU exceeds a first threshold. When the CPU is determined to be unavailable, the scheduling limit is reduced. A further task is scheduled for execution on one of the CPUs according to the reduced scheduling limit. The worker threads available to execute tasks on the computer system may be limited, such that the quantity of worker threads available for executing tasks does not exceed the scheduling limit.
    Type: Grant
    Filed: June 30, 2017
    Date of Patent: October 13, 2020
    Assignee: SAP SE
    Inventors: Viktor Povalyayev, David C. Hu, Marvin Baumgart, Michael Maris
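A rough sketch of shrinking the worker-thread budget when CPUs look overloaded, using `os.getloadavg()` as a convenient stand-in for a per-CPU load measurement (Unix-only). The thresholds and the decrement-by-one policy are assumptions, not SAP's implementation.

```python
# Sketch of reducing the scheduling limit when a CPU is deemed unavailable (illustrative only).
import os

def adjust_scheduling_limit(current_limit: int, load_threshold: float = 0.85, min_limit: int = 1) -> int:
    per_cpu_load = os.getloadavg()[0] / os.cpu_count()     # rough per-CPU utilization
    if per_cpu_load > load_threshold:                      # CPU considered unavailable
        return max(min_limit, current_limit - 1)           # shrink the scheduling limit
    return current_limit

def schedule_tasks(pending: list, running: list, limit: int) -> None:
    # never run more worker threads than the (possibly reduced) scheduling limit allows
    while pending and len(running) < limit:
        running.append(pending.pop(0))

limit = adjust_scheduling_limit(current_limit=8)
pending, running = ["t1", "t2", "t3"], []
schedule_tasks(pending, running, limit)
```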
  • Patent number: 10795674
    Abstract: A device may receive information identifying a set of tasks to be executed by a microservices application that includes a plurality of microservices. The device may determine an execution time of the set of tasks based on a set of parameters and a model. The set of parameters may include a first parameter that identifies a first number of instances of a first microservice of the plurality of microservices, and a second parameter that identifies a second number of instances of a second microservice of the plurality of microservices. The device may compare the execution time and a threshold. The threshold may be associated with a service level agreement. The device may selectively adjust the first number of instances or the second number of instances based on comparing the execution time and the threshold.
    Type: Grant
    Filed: February 7, 2019
    Date of Patent: October 6, 2020
    Assignee: Juniper Networks, Inc.
    Inventors: Jalandip Lepcha, Tong Jiang
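A toy sketch of model-based scaling: predict execution time from per-microservice instance counts, compare it to an SLA threshold, and grow whichever microservice dominates the prediction. The inverse-throughput model and its constants are invented for illustration, not the patented model.

```python
# Sketch of adjusting microservice instance counts until a predicted execution time meets an SLA (illustrative only).
def predicted_execution_time(num_tasks: int, svc_a_instances: int, svc_b_instances: int,
                             a_cost: float = 0.4, b_cost: float = 0.6) -> float:
    # each task passes through both microservices; instances of a service work in parallel
    return num_tasks * (a_cost / svc_a_instances + b_cost / svc_b_instances)

def scale_to_sla(num_tasks: int, sla_seconds: float, a: int = 1, b: int = 1, max_instances: int = 32):
    while predicted_execution_time(num_tasks, a, b) > sla_seconds and (a + b) < max_instances:
        # grow whichever microservice currently contributes more to the predicted time
        if 0.4 / a >= 0.6 / b:
            a += 1
        else:
            b += 1
    return a, b

print(scale_to_sla(num_tasks=100, sla_seconds=10.0))   # instance counts selected to meet the SLA
```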
  • Patent number: 10782995
    Abstract: Techniques and mechanisms provide a flexible mapping for physical functions and virtual functions in an environment including virtual machines.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: September 22, 2020
    Assignee: Altera Corporation
    Inventors: Jiefan Zhang, Abdel Hafiz Rabi, Allen Chen, Mark Jonathan Lewis
  • Patent number: 10776157
    Abstract: A system and method for providing quality of service during live migration includes determining one or more quality of service (QoS) specifications for one or more virtual machines (VMs) to be live migrated. Based on the one or more QoS specifications, a QoS is applied to a live migration of the one or more VMs by controlling resources including at least one of live migration network characteristics and VM execution parameters.
    Type: Grant
    Filed: March 2, 2017
    Date of Patent: September 15, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Bulent Abali, Canturk Isci, Jeffrey O. Kephart, Suzanne K. McIntosh, Dipankar Sarma
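A minimal sketch of turning a per-VM QoS specification into live-migration controls: cap migration bandwidth so guaranteed application traffic is preserved, and optionally throttle the guest so dirty pages converge. The knob names and the formula are illustrative choices, not IBM's parameters.

```python
# Sketch of deriving migration controls from a QoS specification (illustrative only).
def migration_controls(qos: dict) -> dict:
    """Translate a per-VM QoS specification into live-migration settings."""
    return {
        # migration network characteristics: cap migration traffic so it cannot
        # starve the VM's production traffic below its guaranteed bandwidth
        "migration_bandwidth_mbps": max(0, qos["link_mbps"] - qos["guaranteed_app_mbps"]),
        # VM execution parameters: slow the guest slightly so dirty pages converge
        "vm_cpu_throttle_pct": 20 if qos.get("allow_slowdown", False) else 0,
    }

print(migration_controls({"link_mbps": 10_000, "guaranteed_app_mbps": 4_000, "allow_slowdown": True}))
# {'migration_bandwidth_mbps': 6000, 'vm_cpu_throttle_pct': 20}
```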