Patents Examined by Abu Zar Ghaffari
  • Patent number: 11086672
    Abstract: A data processing system includes multiple processing units all having access to a shared memory. A processing unit includes a lower level cache memory and a processor core coupled to the lower level cache memory. The processor core includes an execution unit for executing instructions in a plurality of simultaneous hardware threads, an upper level cache memory, and a plurality of wait flags each associated with a respective one of the plurality of simultaneous hardware threads. The processor core is configured to set a wait flag among the plurality of wait flags to indicate the associated hardware thread is in a wait state in which the hardware thread suspends instruction execution and to exit the wait state based on the wait flag being reset.
    Type: Grant
    Filed: May 7, 2019
    Date of Patent: August 10, 2021
    Assignee: International Business Machines Corporation
    Inventors: Derek E. Williams, Hugh Shen, Guy L. Guthrie
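A minimal software analogue of the wait-flag mechanism described above, assuming one flag per simultaneous hardware thread; the `WaitFlags` class and method names are hypothetical, and a real implementation lives in the processor core rather than in a condition variable:

```python
import threading

class WaitFlags:
    """Software sketch of per-hardware-thread wait flags (names hypothetical)."""
    def __init__(self, num_threads):
        self._flags = [False] * num_threads   # one wait flag per hardware thread
        self._cond = threading.Condition()

    def enter_wait(self, tid):
        # Set the flag and suspend instruction execution while it stays set.
        with self._cond:
            self._flags[tid] = True
            while self._flags[tid]:
                self._cond.wait()

    def release(self, tid):
        # Resetting the flag is what lets the waiting thread exit the wait state.
        with self._cond:
            self._flags[tid] = False
            self._cond.notify_all()
```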
  • Patent number: 11048547
    Abstract: A distributed software system and a method for routing transactions for execution are disclosed. The distributed software system has a database sub-system partitioned into shards and a transaction routing sub-system for ordering transactions. The transaction routing sub-system has a plurality of coordinator ports and a plurality of mediator ports. The coordinator ports receive transactions to be executed by the shards and generate local per-shard orders for the received transactions. The local per-shard orders are received by the plurality of mediator ports which are pre-assigned with respective shards. The mediator ports generate centralized per-shard orders of execution based on the received per-shard orders. A given centralized per-shard order of execution is an order of execution of transactions received by a given mediator port and that are destined to be executed by a given shard that is pre-assigned to the given mediator port.
    Type: Grant
    Filed: May 2, 2019
    Date of Patent: June 29, 2021
    Assignee: YANDEX EUROPE AG
    Inventor: Denis Nikolaevich Podluzhny
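A rough sketch of the coordinator/mediator flow above; the class names and the deterministic merge rule (sort by coordinator name) are illustrative assumptions, not the patent's actual ordering protocol:

```python
class Coordinator:
    """Receives transactions and builds local per-shard orders (a sketch)."""
    def __init__(self, name):
        self.name = name
        self.local_orders = {}  # shard -> transactions, in arrival order

    def receive(self, txn, shard):
        self.local_orders.setdefault(shard, []).append(txn)

class Mediator:
    """Pre-assigned to one shard; merges local orders into a centralized order."""
    def __init__(self, shard):
        self.shard = shard

    def centralized_order(self, coordinators):
        # Merge the local per-shard orders deterministically; sorting by
        # coordinator name stands in for the system's real merge rule.
        merged = []
        for coord in sorted(coordinators, key=lambda c: c.name):
            merged.extend(coord.local_orders.get(self.shard, []))
        return merged
```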
  • Patent number: 11036890
    Abstract: Methods and computer systems execute biometric operations in parallel. The performance of a biometric operation includes receiving a job request to perform the biometric operation. The job request includes input data, identifies a database to be used in the performance of the biometric operation, and specifies a function to be performed. The biometric operation is restructured as one or more tasks. A number of entries in the database is assigned to each of the one or more tasks. An independent worker process is generated for each different core of the multi-core processor. Each task of the one or more tasks is assigned to one of the worker processes. Results produced by each worker process assigned one of the one or more tasks are collected. A result of the biometric operation based on the collected results is reported.
    Type: Grant
    Filed: June 6, 2019
    Date of Patent: June 15, 2021
    Assignee: AWARE, INC.
    Inventors: Steven Kolk, Louis Scott Hills
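The partition-and-collect flow can be sketched as follows; thread workers stand in here for the patent's independent worker processes (one per core), and `run_biometric_job` and its parameters are hypothetical names:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def run_biometric_job(database, match_fn, probe):
    """Sketch: restructure one biometric operation into per-core tasks,
    each assigned a share of the database entries, then collect results."""
    workers = os.cpu_count() or 1
    chunk = max(1, -(-len(database) // workers))   # ceil(entries / workers)
    tasks = [database[i:i + chunk] for i in range(0, len(database), chunk)]

    def task(entries):
        # Each task applies the requested function to its share of entries.
        return [match_fn(probe, e) for e in entries]

    # Threads for brevity; the patent generates an independent worker
    # process for each different core of the multi-core processor.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(task, tasks)
    # Collect per-worker results and report one combined result.
    return [r for part in partials for r in part]
```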
  • Patent number: 11036537
    Abstract: Techniques for on demand capacity management in a provider network are described. The provider network includes electronic devices that provide computing-related resources to customers. The unused capacity of these electronic devices—such as processor cores, memory, network bandwidth, etc.—can be used to satisfy a variety of computing needs. Services of the provider network allocate portions of the unused capacity based on customer requests for computing-related resources.
    Type: Grant
    Filed: March 26, 2019
    Date of Patent: June 15, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Michael Phillip Quinn, Nishant Mehta, Diwakar Gupta, Bradley Joseph Gussin
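A toy sketch of allocating portions of a device's unused capacity to customer requests; the `CapacityPool` class and its dimension names are hypothetical:

```python
class CapacityPool:
    """Tracks a device's unused capacity across several dimensions."""
    def __init__(self, **totals):                  # e.g. cores=16, memory_gib=64
        self.free = dict(totals)

    def allocate(self, **request):
        # Satisfy a request only if every dimension fits in the unused slack.
        if any(self.free.get(k, 0) < v for k, v in request.items()):
            return False
        for k, v in request.items():
            self.free[k] -= v
        return True
```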
  • Patent number: 11030010
    Abstract: An information processing apparatus includes a computer resource including a processor and a memory, a component to be controlled, a data control module configured to receive a data request and access the component, and a management module configured to receive a management request and manage the component. The management module is configured to share the computer resource with the data control module, receive the management request, and dynamically change the processing order of management requests based on the usage status of the computer resource.
    Type: Grant
    Filed: August 30, 2018
    Date of Patent: June 8, 2021
    Assignee: HITACHI, LTD.
    Inventors: Yuta Nakano, Yasuhiro Nakaaki
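One way the dynamic reordering could look in miniature, assuming a cost estimate per request and a single CPU-usage signal (both hypothetical stand-ins for the patent's usage status):

```python
import heapq

class ManagementQueue:
    """Sketch: reorder pending management requests by resource pressure."""
    def __init__(self):
        self._seq = 0
        self._heap = []

    def submit(self, request, cost):
        # cost: estimated share of the computer resource the request needs.
        self._seq += 1
        heapq.heappush(self._heap, (cost, self._seq, request))

    def next_request(self, cpu_usage):
        if not self._heap:
            return None
        if cpu_usage > 0.8:
            # Under pressure, run the cheapest pending request first.
            _, _, req = heapq.heappop(self._heap)
            return req
        # Otherwise keep plain arrival (FIFO) order.
        entry = min(self._heap, key=lambda e: e[1])
        self._heap.remove(entry)
        heapq.heapify(self._heap)
        return entry[2]
```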
  • Patent number: 11030014
    Abstract: Techniques are provided for dynamically self-balancing communication and computation. In an embodiment, each partition of application data is stored on a respective computer of a cluster. The application is divided into distributed jobs, each of which corresponds to a partition. Each distributed job is hosted on the computer that hosts the corresponding data partition. Each computer divides its distributed job into computation tasks. Each computer has a pool of threads that execute the computation tasks. During execution, one computer receives a data access request from another computer. The data access request is executed by a thread of the pool. Threads of the pool are bimodal and may be repurposed between communication and computation, depending on workload. Each computer individually detects completion of its computation tasks. Each computer informs a central computer that its distributed job has finished. The central computer detects when all distributed jobs of the application have terminated.
    Type: Grant
    Filed: February 7, 2019
    Date of Patent: June 8, 2021
    Assignee: Oracle International Corporation
    Inventors: Thomas Manhardt, Sungpack Hong, Siegfried Depner, Jinsu Lee, Nicholas Roth, Hassan Chafi
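The bimodal-thread idea can be sketched as a worker loop that prefers servicing remote data-access requests and otherwise runs computation tasks; queue names and the polling policy are illustrative assumptions:

```python
import queue
import threading

def bimodal_worker(comm_q, comp_q, results, stop):
    """Sketch of one bimodal pool thread: it is repurposed between
    communication and computation depending on workload."""
    while not stop.is_set():
        try:
            # Communication mode: answer a remote data-access request.
            req = comm_q.get_nowait()
            results.append(("comm", req))
            continue
        except queue.Empty:
            pass
        try:
            # Computation mode: run a local computation task.
            task = comp_q.get(timeout=0.05)
            results.append(("comp", task()))
        except queue.Empty:
            pass
```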
  • Patent number: 11023280
    Abstract: A system receives a time series of data values from instrumented software executing on an external system. Each data value corresponds to a metric of the external system. The system stores a level value representing a current estimate of the time series and a trend value representing a trend in the time series. The level and trend values are based on data in a window having a trailing value. In response to receiving a most recent value, the system updates the level value and the trend value to add an influence of the most recent value and remove an influence of the trailing value. The system forecasts based on the updated level and trend values, and in response to determining that the forecast indicates a potential resource shortage event, takes action.
    Type: Grant
    Filed: September 12, 2018
    Date of Patent: June 1, 2021
    Assignee: Splunk Inc.
    Inventor: Joseph Ari Ross
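A minimal sketch of the windowed level/trend update and forecast, assuming the level is a windowed mean and the trend an average per-step change; the class and its update rules are illustrative, not the patent's exact estimator:

```python
from collections import deque

class WindowForecaster:
    """Sketch: add the newest value's influence, remove the trailing
    value's, then forecast from the updated level and trend."""
    def __init__(self, window):
        self.window = window
        self.values = deque()
        self.level = 0.0          # current estimate of the time series
        self.trend = 0.0          # trend in the time series

    def update(self, x):
        self.values.append(x)
        n = len(self.values)
        if n > self.window:
            trailing = self.values.popleft()
            # O(1) update: newest in, trailing out.
            self.level += (x - trailing) / self.window
        else:
            self.level += (x - self.level) / n
        self.trend = (self.values[-1] - self.values[0]) / max(1, len(self.values) - 1)

    def forecast(self, steps):
        return self.level + self.trend * steps

def shortage_predicted(forecaster, capacity, horizon):
    # Take action when the forecast crosses capacity within the horizon.
    return forecaster.forecast(horizon) > capacity
```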
  • Patent number: 11003516
    Abstract: When a virtualized service platform encounters a catastrophic fault, an orchestrator may instantiate new virtual machine instances to deploy additional capacity in other cloud locations to handle failover storms. After the network fault is fixed and service returns to normal condition, these additional VM instances may be removed from the platform and the cloud resources released. The system may minimize resource over-provisioning while continuing to support geographical redundancy and dynamic scaling in a large-scale service network.
    Type: Grant
    Filed: July 24, 2017
    Date of Patent: May 11, 2021
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Chaoxin Qiu, Robert F. Dailey, Mark A. Ratcliffe, Jeffrey L. Scruggs
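A toy version of the scale-out/scale-in behavior described above; the load metric and thresholds are hypothetical, and a real orchestrator would instantiate VM instances in other cloud locations rather than bump a counter:

```python
class Orchestrator:
    """Sketch: burst extra VM instances during a failover storm and
    release them once service returns to normal."""
    def __init__(self, baseline):
        self.baseline = baseline
        self.instances = baseline

    def on_load(self, sessions_per_instance):
        if sessions_per_instance > 100:
            self.instances += 1       # failover storm: deploy added capacity
        elif sessions_per_instance < 40 and self.instances > self.baseline:
            self.instances -= 1       # fault fixed: release cloud resources
        return self.instances
```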
  • Patent number: 10997052
    Abstract: A system, method, and computer-readable medium are disclosed for optimizing performance of an information handling system comprising: identifying a statistical model for use when optimizing performance of the information handling system; sampling the performance of the information handling system, the sampling being performed iteratively; and adjusting the performance of the information handling system by applying optimized system configurations to the information handling system, the optimized system configurations being based upon the statistical model.
    Type: Grant
    Filed: May 1, 2017
    Date of Patent: May 4, 2021
    Assignee: Dell Products L.P.
    Inventors: Farzad Khosrowpour, Nikhil Vichare
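The iterative sample-and-adjust loop might look like the following sketch, where a simple argmax stands in for the statistical model and all names are hypothetical:

```python
def tune(sample_perf, configs, iterations=10):
    """Sketch: iteratively sample performance under candidate system
    configurations and keep the one the (stand-in) model scores best."""
    best, best_score = None, float("-inf")
    for _ in range(iterations):
        for cfg in configs:
            score = sample_perf(cfg)   # one performance sample
            if score > best_score:     # the "statistical model" is argmax here
                best, best_score = cfg, score
    return best
```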
  • Patent number: 10983840
    Abstract: A technique includes monitoring for a quiescent state by checking first quiescent state criteria that are indicative of a CPU having no task running inside an RCU read-side critical section that could be affected by destructive-to-reader actions. If the quiescent state has been reached, a check may be made for the existence of a condition that is indicative of a requirement to satisfy one or more additional quiescent state criteria before reporting the quiescent state on behalf of the CPU. If the condition is detected, reporting of the quiescent state may be deferred until the one or more additional quiescent state criteria are satisfied. The quiescent state may then be reported if it is useful and safe to do so.
    Type: Grant
    Filed: June 21, 2018
    Date of Patent: April 20, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Paul E. McKenney
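The decision flow above can be condensed into a sketch; the per-CPU field names are hypothetical, and the real logic lives inside the RCU implementation rather than a dictionary check:

```python
def maybe_report_quiescent_state(cpu):
    """Sketch of the deferral logic for reporting a quiescent state."""
    # First-level criteria: no task inside an RCU read-side critical
    # section that destructive-to-reader actions could affect.
    if cpu["in_rcu_read_section"]:
        return "not-quiescent"
    # A detected condition may require additional criteria to be satisfied
    # before the quiescent state is reported on behalf of this CPU.
    if cpu.get("needs_extra_criteria") and not cpu.get("extra_criteria_met"):
        return "deferred"
    return "reported"
```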
  • Patent number: 10977207
    Abstract: Components of a multi-tier application are dynamically associated with different layers of a corresponding multi-tier application infrastructure. This includes defining, in a memory of a host computing system, a pattern that has an inventory of components of a multi-tier application. Each of the components is associated with a corresponding tier label for an n-tier architecture, and the pattern is loaded into a pattern engine. The pattern engine deploys each component of the pattern to the layer of the n-tier architecture corresponding to the tier label associated with the component.
    Type: Grant
    Filed: October 6, 2018
    Date of Patent: April 13, 2021
    Assignee: International Business Machines Corporation
    Inventors: Ajay A. Apte, Roy F. Brabson, Orvalle T. Kirby, III, Jason R. McGee, Scott C. Moonen, Donald R. Woods
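The pattern-engine placement step reduces to a label-to-layer mapping; this sketch assumes a layer is simply named after its tier label, and all identifiers are hypothetical:

```python
def deploy(pattern, architecture):
    """Sketch of the pattern engine: place each component on the layer
    whose name matches the component's tier label."""
    placement = {layer: [] for layer in architecture}
    for component, tier in pattern.items():
        if tier not in placement:
            raise ValueError(f"unknown tier label: {tier}")
        placement[tier].append(component)
    return placement
```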
  • Patent number: 10956193
    Abstract: Moving scheduling of processor time for virtual processors (VPs) out of a virtualization hypervisor. A host operating system schedules VP processor time. The host operating system creates VP backing threads, one for each VP of each virtual machine, giving a one-to-one mapping between each VP thread in the host operating system and each VP in the hypervisor. When a VP thread is dispatched for a slice of processor time, the host operating system calls into the hypervisor to have the hypervisor start executing the VP, and the hypervisor may perform a processor context switch for the VP. Of note is the security separation between VP scheduling and VP context switching: the hypervisor manages VP context switching in kernel mode, while VP scheduling is performed in user mode, with a security/interface boundary between the unit that schedules VP processor time and the hypervisor.
    Type: Grant
    Filed: March 31, 2017
    Date of Patent: March 23, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Artem Oks, David Hepkin
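The one-to-one backing-thread mapping can be sketched with host threads that call into a stand-in "hypervisor" function for each dispatched slice; the function names and slice model are illustrative assumptions:

```python
import threading

executed = []

def hypervisor_run_slice(vp_id):
    # Stand-in for the hypervisor's kernel-mode VP context switch and
    # execution, reached across the security/interface boundary.
    executed.append(vp_id)

def vp_backing_thread(vp_id, run_slice, slices):
    """Sketch: one host user-mode thread backs one VP; when dispatched
    for a slice, it calls into the hypervisor to execute that VP."""
    for _ in range(slices):
        run_slice(vp_id)

# One backing thread per VP: user-mode scheduling, kernel-mode execution.
threads = [threading.Thread(target=vp_backing_thread,
                            args=(vp, hypervisor_run_slice, 2))
           for vp in ("vp0", "vp1")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```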
  • Patent number: 10956187
    Abstract: A method is provided to enhance a virtualized infrastructure at a customer's premise with a cloud analytics service. The method includes receiving a request for an expert use case on an expertise about an object in the virtualized infrastructure and performing an expertise cycle on the expert use case, which includes retrieving a manifest for the expert use case from a cloud analytics site remote from the customer's premise, collecting telemetry data from the virtualized infrastructure based on the manifest, uploading the collected telemetry data to the cloud analytics site, and retrieving an expertise result for the expert use case from the cloud analytics site. The method further includes communicating the expertise result about the object to the customer and changing a configuration of the object.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: March 23, 2021
    Assignee: VMWARE, INC.
    Inventors: Aleksandar Petkov, Teodor Parvanov, Anton Petrov, Tanya Hristova, Miroslav Shtarbev
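The expertise cycle amounts to fetch manifest, collect, upload, retrieve result; this sketch uses an in-memory stand-in for the remote cloud analytics site, and every name here is hypothetical:

```python
class ToyAnalyticsSite:
    """In-memory stand-in for the cloud analytics site."""
    def __init__(self, manifests, analyze):
        self.manifests, self.analyze, self.uploads = manifests, analyze, {}

    def get_manifest(self, use_case):
        return self.manifests[use_case]

    def upload(self, use_case, telemetry):
        self.uploads[use_case] = telemetry

    def get_result(self, use_case):
        return self.analyze(self.uploads[use_case])

def expertise_cycle(use_case, cloud, infrastructure):
    """Sketch of one expertise cycle at the customer's premise."""
    manifest = cloud.get_manifest(use_case)               # what to collect
    telemetry = {k: infrastructure[k] for k in manifest}  # collect per manifest
    cloud.upload(use_case, telemetry)
    return cloud.get_result(use_case)
```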
  • Patent number: 10942757
    Abstract: Systems and methods for embedding emulation support for a hardware feature into a virtual machine to enhance the security of the hypervisor and host system. An example method may comprise: receiving, by a processing device executing a hypervisor, a message indicating a hardware feature is unavailable; determining, by the hypervisor, whether a virtual machine is capable of emulating the hardware feature; and causing, by the hypervisor, the virtual machine to emulate the hardware feature in response to determining the virtual machine is capable of emulating the hardware feature.
    Type: Grant
    Filed: February 27, 2017
    Date of Patent: March 9, 2021
    Assignee: Red Hat, Inc.
    Inventors: Henri Han van Riel, Michael Tsirkin
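The hypervisor's decision reduces to a capability check followed by delegation; this sketch uses callables in place of real emulation paths, and all names are hypothetical:

```python
def handle_feature_unavailable(feature, vm_capabilities, emulate_in_vm, fail):
    """Sketch: on a 'hardware feature unavailable' message, push emulation
    into the guest when the VM is capable of it, shrinking the
    hypervisor's attack surface."""
    if feature in vm_capabilities:
        return emulate_in_vm(feature)
    return fail(feature)
```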
  • Patent number: 10922141
    Abstract: A multi-layer committed compute reservation stack may generate prescriptive reservation matrices for controlling static reservation for computing resources. A transformation layer of the committed compute reservation stack may generate a time-mapping based on historical utilization and tagging data. An iterative analysis layer may determine a consumption-constrained committed compute state of a distribution of static reservation and dynamic requisition that achieves one or more consumption efficiency goals. Once the consumption-constrained committed compute state is determined, the prescriptive engine layer may generate a reservation matrix that may be used to control computing resource static reservation prescriptively.
    Type: Grant
    Filed: March 15, 2018
    Date of Patent: February 16, 2021
    Assignee: Accenture Global Solutions Limited
    Inventors: Madhan Kumar Srinivasan, Arun Purushothaman, Manish Sharma Kolachalam, Michael S. Eisenstein
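One simplified reading of the iterative analysis layer: sweep candidate static-reservation levels against historical utilization and keep the cheapest mix of static reservation and dynamic requisition. The function, rates, and cost model are illustrative assumptions:

```python
def best_static_reservation(hourly_demand, reserved_rate, on_demand_rate):
    """Sketch: pick the static-reservation level that minimizes total cost,
    with overflow demand met by dynamic (on-demand) requisition."""
    best_level, best_cost = 0, float("inf")
    for level in range(max(hourly_demand) + 1):
        cost = sum(level * reserved_rate + max(0, d - level) * on_demand_rate
                   for d in hourly_demand)
        if cost < best_cost:
            best_level, best_cost = level, cost
    return best_level
```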
  • Patent number: 10922140
    Abstract: A physical Graphics Processing Unit (GPU) resource scheduling system and method between virtual machines are provided. An agent is inserted between a physical GPU instruction dispatch and a physical GPU interface through a hooking method. The agent delays sending instructions and data from the physical GPU instruction dispatch to the physical GPU interface, monitors a set of GPU conditions of a guest application executing in the virtual machine and the use of physical GPU hardware resources, and then provides feedback to a GPU resource scheduling algorithm based on time or a time sequence. With the agent, the method requires no modification to the guest application of the virtual machine, the host operating system, the virtual machine operating system, the GPU driver, or the virtual machine manager.
    Type: Grant
    Filed: June 19, 2013
    Date of Patent: February 16, 2021
    Assignee: SHANGHAI JIAOTONG UNIVERSITY
    Inventors: Miao Yu, Zhengwei Qi, Haibing Guan, Yin Wang
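A toy version of the hooked agent: it sits between dispatch and the GPU interface, tracks per-VM usage, and holds back submissions when a VM exceeds its quota. The class, quota policy, and cost model are hypothetical:

```python
class SchedulingAgent:
    """Sketch: agent hooked between GPU instruction dispatch and the
    physical GPU interface, delaying over-quota submissions."""
    def __init__(self, gpu_submit, quota_ms):
        self.gpu_submit, self.quota_ms = gpu_submit, quota_ms
        self.used_ms = {}      # monitored GPU time per VM
        self.delayed = []      # held-back (vm, instructions) pairs

    def dispatch(self, vm, instructions, cost_ms):
        self.used_ms[vm] = self.used_ms.get(vm, 0) + cost_ms
        if self.used_ms[vm] > self.quota_ms:
            self.delayed.append((vm, instructions))   # feedback to scheduler
            return "delayed"
        self.gpu_submit(instructions)
        return "submitted"
```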
  • Patent number: 10915365
    Abstract: A mapper node and a reducer node respectively run on different central processing units (CPUs) in a CPU pool, and a remote shared partition shared by the mapper node and the reducer node is delimited in a storage pool. The mapper node executes a map task to obtain a data segment, and stores the data segment into the remote shared partition. The reducer node directly obtains a to-be-processed data segment from the remote shared partition, and executes a reduce task on the to-be-processed data segment.
    Type: Grant
    Filed: June 12, 2018
    Date of Patent: February 9, 2021
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Jiyuan Tang, Wei Wang, Yi Cai
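The data path can be sketched with a plain list standing in for the remote shared partition: the mapper writes segments into it and the reducer reads them directly, with no shuffle transfer between the two. A word count serves as the map/reduce job here, and all names are illustrative:

```python
from collections import defaultdict

def mapper(records, shared_partition):
    """Map task: store (key, value) data segments into the shared partition."""
    for word in records:
        shared_partition.append((word, 1))

def reducer(shared_partition):
    """Reduce task: obtain segments directly from the shared partition."""
    counts = defaultdict(int)
    for key, value in shared_partition:
        counts[key] += value
    return dict(counts)
```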
  • Patent number: 10891171
    Abstract: A clock task processing method includes, before or when running a service process using at least one data core, disabling a clock interrupt of the at least one data core, and processing a clock task of the at least one data core using at least one control core of multiple control cores that cannot process service data.
    Type: Grant
    Filed: February 21, 2018
    Date of Patent: January 12, 2021
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Xuesong Pan, Jianfeng Xiu, Zichang Lin
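The core-role split can be sketched as a planning step that assigns each data core's clock task to a control core; the round-robin assignment and all names are illustrative assumptions:

```python
def plan_clock_handling(cores):
    """Sketch: data cores run the service with clock interrupts disabled,
    so each data core's clock task is assigned to a control core."""
    plan = {}
    control = [c for c, role in cores.items() if role == "control"]
    for core, role in cores.items():
        if role == "data":
            # Round-robin each data core's clock task onto the control cores.
            plan[core] = control[len(plan) % len(control)]
    return plan
```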
  • Patent number: 10860358
    Abstract: Methods and devices for determining settings for a virtual machine may include partitioning a physical network into a plurality of traffic classes. The methods and devices may include determining at least one virtual enhanced transmission selection (ETS) setting for one or more virtual machines, wherein the virtual ETS setting includes at least one virtual traffic class that corresponds to one of the plurality of traffic classes. The methods and devices may include transmitting a notification to the one or more virtual machines identifying the virtual ETS setting.
    Type: Grant
    Filed: September 21, 2017
    Date of Patent: December 8, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Khoa Anh To, Omar Cardona, Daniel Firestone, Alireza Dabagh
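A small sketch of building the per-VM notification: each virtual traffic class is matched to one of the physical network's partitioned classes. The function, the bandwidth-share values, and the same-name correspondence rule are hypothetical:

```python
def virtual_ets_settings(physical_classes, vm_classes):
    """Sketch: derive each VM's virtual ETS setting from the physical
    traffic-class partition and build the notification payload."""
    notifications = {}
    for vm, wanted in vm_classes.items():
        unknown = [c for c in wanted if c not in physical_classes]
        if unknown:
            raise ValueError(f"no physical class for: {unknown}")
        # Each virtual traffic class corresponds to one physical class.
        notifications[vm] = {c: physical_classes[c] for c in wanted}
    return notifications
```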
  • Patent number: 10846117
    Abstract: Secure communication is established between a hyper-process of the virtualization layer (e.g., host) and an agent process in the guest operating system (e.g., guest) using a virtual communication device which, in an embodiment, is implemented as shared memory having two memory buffers. A guest-to-host buffer is used as a first message box configured to provide unidirectional communication from the agent to the virtualization layer and a host-to-guest buffer is used as a second message box configured to provide unidirectional communication from the virtualization layer to the agent. The buffers cooperate to transform the virtual device into a low-latency, high-bandwidth communication interface configured for bi-directional transfer of information between the agent process and the hyper-process of the virtualization layer, wherein the communication interface also includes a signaling (doorbell) mechanism configured to notify the processes that information is available for transfer over the interface.
    Type: Grant
    Filed: August 15, 2016
    Date of Patent: November 24, 2020
    Assignee: FireEye, Inc.
    Inventor: Udo Steinberg
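The two unidirectional message boxes can be sketched with a pair of queues standing in for the shared-memory buffers; the blocking `get` plays the role of the doorbell signal, and the class and method names are hypothetical:

```python
import queue

class VirtualCommDevice:
    """Sketch of the virtual communication device: two unidirectional
    message boxes giving bi-directional host/guest transfer."""
    def __init__(self):
        self.guest_to_host = queue.Queue()   # first message box
        self.host_to_guest = queue.Queue()   # second message box

    def guest_send(self, msg):
        # Write the buffer; enqueueing doubles as ringing the doorbell.
        self.guest_to_host.put(msg)

    def host_recv(self):
        return self.guest_to_host.get(timeout=1)

    def host_send(self, msg):
        self.host_to_guest.put(msg)

    def guest_recv(self):
        return self.host_to_guest.get(timeout=1)
```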