Patents Examined by Bing Zhao
-
Patent number: 10296392
Abstract: A data processing system is described herein that includes two or more software-driven host components that collectively provide a software plane. The data processing system further includes two or more hardware acceleration components that collectively provide a hardware acceleration plane. The hardware acceleration plane implements one or more services, including at least one multi-component service. The multi-component service has plural parts, and is implemented on a collection of two or more hardware acceleration components, where each hardware acceleration component in the collection implements a corresponding part of the multi-component service. Each hardware acceleration component in the collection is configured to interact with other hardware acceleration components in the collection without involvement from any host component. A function parsing component is also described herein that determines a manner of parsing a function into the plural parts of the multi-component service.
Type: Grant
Filed: May 20, 2015
Date of Patent: May 21, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Stephen F. Heil, Adrian M. Caulfield, Douglas C. Burger, Andrew R. Putnam, Eric S. Chung
-
Patent number: 10296393
Abstract: A hardware thread scheduler (HTS) is provided for a multiprocessor system. The HTS is configured to schedule processing of multiple threads of execution by resolving data dependencies between producer modules and consumer modules for each thread. Pattern adaptors may be provided in the scheduler that allow mixing of multiple data patterns across blocks of data. Transaction aggregators may be provided that allow re-using the same image data by multiple threads of execution while the image data remains in a given data buffer. Bandwidth control may be provided using programmable delays on initiation of thread execution. Failure and hang detection may be provided using multiple watchdog timers.
Type: Grant
Filed: September 19, 2016
Date of Patent: May 21, 2019
Assignee: Texas Instruments Incorporated
Inventors: Niraj Nandan, Hetul Sanghvi, Mihir Narendra Mody
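The dependency resolution this abstract describes can be illustrated with a minimal sketch: a consumer thread becomes runnable only once every producer output it depends on is available. The thread names and dependency sets below are hypothetical, and real hardware schedulers track buffer-level events rather than coarse task names.

```python
def runnable_threads(threads, produced):
    """Return the threads whose producer dependencies are all satisfied.

    threads  -- mapping of thread name -> set of required producer outputs
    produced -- set of producer outputs available so far
    """
    return [name for name, deps in threads.items() if deps <= produced]

# Hypothetical image-pipeline threads: "scale" consumes the capture
# output, "blend" consumes both the capture and the scaled output.
threads = {"scale": {"capture"}, "blend": {"scale", "capture"}}
ready = runnable_threads(threads, produced={"capture"})   # only "scale"
```

Once "scale" publishes its output, a second call with `produced={"capture", "scale"}` would also report "blend" as runnable.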
-
Patent number: 10275278
Abstract: The technology disclosed provides a novel and innovative technique for compact deployment of application code to stream processing systems. In particular, the technology disclosed relates to obviating the need of accompanying application code with its dependencies during deployment (i.e., creating fat jars) by operating a stream processing system within a container defined over worker nodes of whole machines and initializing the worker nodes with precompiled dependency libraries having precompiled classes. Accordingly, the application code is deployed to the container without its dependencies, and, once deployed, the application code is linked with the locally stored precompiled dependencies at runtime. In implementations, the application code is deployed to the container running the stream processing system between 300 milliseconds and 6 seconds. This is drastically faster than existing deployment techniques that take anywhere between 5 to 15 minutes for deployment.
Type: Grant
Filed: September 14, 2016
Date of Patent: April 30, 2019
Assignee: salesforce.com, inc.
Inventors: Elden Gregory Bishop, Jeffrey Chao
-
Patent number: 10275154
Abstract: The disclosed embodiments provide a system that facilitates the execution of a software program. During operation, the system obtains a set of artifacts associated with executing a software program. Next, the system uses the set of artifacts to determine an inheritance hierarchy associated with an artifact from the set of artifacts. The system then uses the inheritance hierarchy and the set of artifacts to generate a memory layout of an object instance represented by the artifact, wherein the memory layout includes a set of fields associated with a set of levels of the inheritance hierarchy.
Type: Grant
Filed: November 5, 2014
Date of Patent: April 30, 2019
Assignee: ORACLE INTERNATIONAL CORPORATION
Inventors: Charles J. Hunt, Steven J. Drach, Jean-Francois Denise
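The idea of deriving a memory layout from an inheritance hierarchy can be sketched as a walk from the root class to the leaf, assigning each field an offset as it is encountered. The class names, field sizes, and flat list representation below are illustrative assumptions; a real layout engine would also handle alignment and padding.

```python
def layout_fields(hierarchy):
    """Assign byte offsets to fields, grouped by hierarchy level.

    hierarchy -- list of (class_name, [(field_name, size_bytes), ...]),
                 ordered root class first, leaf class last.
    Returns (layout, total_size) where layout is a list of
    (class_name, field_name, offset) tuples.
    """
    offset, layout = 0, []
    for cls, fields in hierarchy:
        for name, size in fields:
            layout.append((cls, name, offset))
            offset += size
    return layout, offset

# Hypothetical two-level hierarchy: Base declares an 8-byte id,
# Derived adds two 4-byte coordinates.
hierarchy = [
    ("Base",    [("id", 8)]),
    ("Derived", [("x", 4), ("y", 4)]),
]
layout, size = layout_fields(hierarchy)
```

Placing inherited fields before the subclass's own fields keeps a `Derived` instance layout-compatible with `Base`, which is why object layouts are typically built level by level.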
-
Patent number: 10275271
Abstract: According to one example, to access at least one computer device from a virtual machine, a control domain accesses a list of at least one device. For each device in the list of devices, a determination is made as to whether the device is to be exposed to a virtual machine, and a table of devices determined to be exposed to the virtual machine is created and provided to the virtual machine. Determining whether a device is to be exposed to a virtual machine is based on at least one device attribute.
Type: Grant
Filed: June 30, 2014
Date of Patent: April 30, 2019
Assignee: Hewlett-Packard Development Company, L.P.
Inventor: Richard A. Bramley, Jr.
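The per-device decision described here reduces to filtering a device list by attributes. A minimal sketch, assuming the attributes are a device class and a host-only flag (both hypothetical; the patent only requires "at least one device attribute"):

```python
def build_device_table(devices, vm_allowed_classes):
    """Return the table of device names the control domain will expose
    to the VM: the device's class must be allowed, and the device must
    not be reserved for the host."""
    return [d["name"] for d in devices
            if d["class"] in vm_allowed_classes and not d["host_only"]]

# Hypothetical host device list.
devices = [
    {"name": "nvme0", "class": "storage",  "host_only": False},
    {"name": "tpm0",  "class": "security", "host_only": True},
    {"name": "eth0",  "class": "network",  "host_only": False},
]
table = build_device_table(devices, {"storage", "network"})
```

The resulting table (here `nvme0` and `eth0`) is what would be handed to the virtual machine; the TPM stays hidden both because its class is not allowed and because it is marked host-only.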
-
Patent number: 10268499
Abstract: The amount of host real storage provided to a large guest storage buffer is controlled. This control is transparent to the guest that owns the buffer and is executing an asynchronous process to update the buffer. The control uses one or more indicators to determine when additional host real storage is to be provided.
Type: Grant
Filed: April 14, 2014
Date of Patent: April 23, 2019
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Damian L. Osisek, Donald W. Schmidt, Phil C. Yeh
-
Patent number: 10268609
Abstract: A resource management and task allocation controller for installation in a multicore processor having a plurality of interconnected processor elements providing resources for processing executable transactions, at least one of said elements being a master processing unit, the controller being adapted to communicate, when installed, with each of the processor elements including the master processing unit, and comprising control logic for allocating executable transactions within the multicore processor to particular processor elements in accordance with pre-defined allocation parameters.
Type: Grant
Filed: August 12, 2013
Date of Patent: April 23, 2019
Assignees: Synopsys, Inc., Fujitsu Semiconductor Limited
Inventor: Mark David Lippett
-
Patent number: 10261833
Abstract: The present invention provides a method which can be used to optimise the delivery of services over communications networks. Tasks which need to be executed within a short timescale and those which are not due to be executed for a long time are excluded from the optimisation process. A score is determined, using fuzzy logic, for each task and its related resources and for each resource and its related tasks. This score is then used to determine which tasks should be optimised.
Type: Grant
Filed: June 5, 2015
Date of Patent: April 16, 2019
Assignee: BRITISH TELECOMMUNICATIONS public limited company
Inventors: Sid Shakya, Anne Liret, Gilbert Owusu, Okung Ntofon, Ahmed Mohamed, Hani Hagras
-
Patent number: 10255093
Abstract: Various embodiments are generally directed to providing virtualization using relatively minimal processing and storage resources to enable concurrent isolated execution of multiple application routines in which one of the application routines is made visible at a time. An apparatus to virtualize an operating system includes a processor component, a visibility checker for execution by the processor component to make a visibility check call to a kernel routine to request an indication of whether an instance of a framework routine that comprises the visibility checker is visible, and resource access code of the instance for execution by the processor component to perform a resource access operation to access a hardware component based on the indication and on receipt of an application programming interface (API) call from an application routine that specifies an API function to access the hardware component. Other embodiments are described and claimed.
Type: Grant
Filed: December 17, 2013
Date of Patent: April 9, 2019
Assignee: INTEL CORPORATION
Inventor: Shoumeng Yan
-
Patent number: 10248471
Abstract: Systems for managing shared computing resources. In a multi-process computing environment a concurrency object data structure pertaining to a shared resource is made available to be accessed by two or more processing entities. The concurrency object comprises a consecutive read count that tracks the number of consecutive read requests that have been received for shared read access to the shared resource. A shared concurrency access state is entered based on comparison of the consecutive read count to a threshold value. Entering the shared concurrency access state begins a period during which grants of further shared access requests do not require semaphore operations or other atomic operations that pertain to the shared resource.
Type: Grant
Filed: September 15, 2016
Date of Patent: April 2, 2019
Assignee: Oracle International Corporation
Inventors: Raunak Rungta, Jonathan Giloni, Ravi Shankar Thammaiah, Sumanta Kumar Chatterjee, Juan Loaiza
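The state machine in this abstract can be sketched as a read-biased lock: reads go through the semaphore until a streak of consecutive reads crosses a threshold, after which further reads take a fast path with no semaphore operation. The class name, threshold, and path labels below are illustrative, and a production version would need careful memory-ordering guarantees that this toy omits.

```python
import threading

class ReadBiasedLock:
    """Sketch of a concurrency object with a consecutive-read count."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.consecutive_reads = 0
        self.shared_state = False        # the 'shared concurrency access state'
        self._sem = threading.Semaphore(1)

    def acquire_read(self):
        if self.shared_state:
            return "fast-path read"      # granted without any semaphore op
        with self._sem:                  # slow path: semaphore-protected
            self.consecutive_reads += 1
            if self.consecutive_reads >= self.threshold:
                self.shared_state = True
        return "slow-path read"

    def acquire_write(self):
        # Any write breaks the read streak and leaves the shared state.
        with self._sem:
            self.consecutive_reads = 0
            self.shared_state = False

lock = ReadBiasedLock(threshold=3)
paths = [lock.acquire_read() for _ in range(4)]
# First three reads pay the semaphore cost; the fourth does not.
```

The payoff is that a read-dominated workload stops paying for atomic operations once its access pattern has been observed, at the cost of a reset whenever a writer arrives.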
-
Patent number: 10235062
Abstract: Various systems and methods for selecting resources (such as of a distributed storage system) for performing file operations (such as backup operations) based on power-usage characteristics of these resources. For example, one method involves receiving an input, where the input identifies a process to be performed. The method also involves accessing power data, where the power data indicates power usage for the process as performed using one or more resources of a plurality of resources. The method also involves selecting, using one or more processors, a selected resource from the resources based, at least in part, on the power data.
Type: Grant
Filed: October 31, 2016
Date of Patent: March 19, 2019
Assignee: Veritas Technologies LLC
Inventor: Dhanashri Phadke
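One straightforward reading of this selection step is "pick the resource with the lowest recorded power usage for the requested process". The sketch below assumes power data keyed by process name with per-resource wattage figures; the node names and numbers are made up, and the patent allows the power data to be only one factor among several.

```python
def select_resource(power_data, process):
    """Select the resource whose recorded power usage for `process`
    is lowest (power data as the sole selection criterion here)."""
    candidates = power_data[process]     # resource -> average watts
    return min(candidates, key=candidates.get)

# Hypothetical measurements for a backup process on three storage nodes.
power_data = {"backup": {"node-a": 340.0, "node-b": 210.0, "node-c": 275.0}}
choice = select_resource(power_data, "backup")
```

Here the backup would be routed to `node-b`, the cheapest node in watts for that workload.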
-
Patent number: 10237136
Abstract: A method of allocating network bandwidth in a network that includes several tenant virtual machines (VMs). The method calculates a first bandwidth reservation for a flow between a source VM and a destination VM that are hosted on two different host machines. The source VM sends packets to a first set of VMs that includes the destination VM. The destination VM receives packets from a second set of VMs that includes the source VM. The method receives a second bandwidth reservation for the flow calculated at the destination. The method sets the bandwidth reservation for the flow as a minimum of the first and second bandwidth reservations.Type: Grant
Filed: February 17, 2017
Date of Patent: March 19, 2019
Assignee: NICIRA, INC.
Inventors: Hua Wang, Jianjun Shen, Donghai Han, Caixia Jiang
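The min-of-two-reservations rule is easy to make concrete. The sketch below assumes each end computes its estimate by splitting its link capacity evenly across its active peers; that even split is a simplifying assumption for illustration, not something the abstract specifies.

```python
def end_share(link_capacity_mbps, peer_count):
    # Each end splits its link capacity evenly across its active peers
    # (simplifying assumption; other weightings are possible).
    return link_capacity_mbps / peer_count

def flow_reservation(src_capacity, src_peers, dst_capacity, dst_peers):
    first = end_share(src_capacity, src_peers)    # computed at the source
    second = end_share(dst_capacity, dst_peers)   # computed at the destination
    return min(first, second)                     # neither end oversubscribed

# Source VM sends to 4 peers over a 10 Gbps link; the destination VM
# receives from 2 peers over a 1 Gbps link.
reservation = flow_reservation(10_000, 4, 1_000, 2)
```

Taking the minimum (here 500 Mbps, set by the destination's slower link) guarantees the flow's reservation is honorable at both the sending and receiving host.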
-
Patent number: 10228972
Abstract: In some embodiments, the present invention provides an exemplary computing device, including at least: a scheduler processor; a CPU; a GPU; where the scheduler processor configured to: obtain a computing task; divide the computing task into: a first set of subtasks and a second set of subtasks; submit the first set to the CPU; submit the second set to the GPU; determine, for a first subtask of the first set, a first execution time, a first execution speed, or both; determine, for a second subtask of the second set, a second execution time, a second execution speed, or both; dynamically rebalance an allocation of remaining non-executed subtasks of the computing task to be submitted to the CPU and the GPU, based, at least in part, on at least one of: a first comparison of the first execution time to the second execution time, and a second comparison of the first execution speed to the second execution speed.
Type: Grant
Filed: June 21, 2018
Date of Patent: March 12, 2019
Assignee: Banuba Limited
Inventor: Yury Hushchyn
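The rebalancing step can be sketched by splitting the remaining subtasks in inverse proportion to the measured per-subtask execution times, so the faster device receives more of the remaining work. This proportional rule is one plausible policy consistent with the abstract's "comparison of execution times", not the patent's specific formula.

```python
def rebalance(remaining, cpu_time, gpu_time):
    """Split `remaining` subtasks between CPU and GPU inversely to the
    measured execution times of their sample subtasks."""
    cpu_speed = 1.0 / cpu_time           # subtasks per second (measured)
    gpu_speed = 1.0 / gpu_time
    cpu_share = round(remaining * cpu_speed / (cpu_speed + gpu_speed))
    return cpu_share, remaining - cpu_share

# Sample subtask took 2.0 s on the CPU but 1.0 s on the GPU, so the GPU
# gets twice the remaining work.
cpu_tasks, gpu_tasks = rebalance(30, cpu_time=2.0, gpu_time=1.0)
```

With 30 subtasks left, the split comes out 10 for the CPU and 20 for the GPU; re-running the measurement periodically lets the split track changing device load.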
-
Patent number: 10223145
Abstract: A computing resources service provider may provide customers with access to virtual computing resources to execute various applications on behalf of the customer. There may be occasional impairment to the virtual computing resources. These impairments may be detected in log information obtained by an impairment detection service. Furthermore, the impairment detection service may obtain additional information associated with the virtual computing resources. The log information and additional information may be correlated to determine one or more relevant factors in the impairments.
Type: Grant
Filed: June 21, 2016
Date of Patent: March 5, 2019
Assignee: Amazon Technologies, Inc.
Inventors: Amit Neogy, Dennis Arthur Hills, Siavash Irani, Sota Baba, Cory Forsythe, Bryan Mareletto, Kenji Takehara
-
Patent number: 10198293
Abstract: According to a general aspect, a method may include receiving a computing task, wherein the computing task includes a plurality of operations. The method may include allocating the computing task to a data node, wherein the data node includes at least one host processor and an intelligent storage medium, wherein the intelligent storage medium comprises at least one controller processor, and a non-volatile memory, wherein each data node includes at least three processors between the at least one host processor and the at least one controller processor. The method may include dividing the computing task into at least a first chain of operations and a second chain of operations. The method may include assigning the first chain of operations to the intelligent storage medium of the data node. The method may further include assigning the second chain of operations to the central processor of the data node.
Type: Grant
Filed: March 17, 2017
Date of Patent: February 5, 2019
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Yang Seok Ki, Jaehwan Lee
-
Patent number: 10198294
Abstract: A service mapping component (SMC) is described herein for processing requests by instances of tenant functionality that execute on software-driven host components (or some other components) in a data processing system. The SMC is configured to apply at least one rule to determine whether a service requested by an instance of tenant functionality is to be satisfied by at least one of: a local host component, a local hardware acceleration component which is locally coupled to the local host component, and/or at least one remote hardware acceleration component that is indirectly accessible to the local host component via the local hardware acceleration component. In performing its analysis, the SMC can take into account various factors, such as whether or not the service corresponds to a line-rate service, latency-related considerations, security-related considerations, and so on.
Type: Grant
Filed: May 20, 2015
Date of Patent: February 5, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Derek T. Chiou, Sitaram V. Lanka, Douglas C. Burger
-
Patent number: 10180856
Abstract: A method for performing dynamic port remapping during instruction scheduling in an out-of-order microprocessor is disclosed. The method comprises selecting and dispatching a plurality of instructions from a plurality of select ports in a scheduler module in a first clock cycle. Next, it comprises determining if a first physical register file unit has capacity to support instructions dispatched in the first clock cycle. Further, it comprises supplying a response back to logic circuitry between the plurality of select ports and a plurality of execution ports, wherein the logic circuitry is operable to re-map select ports in the scheduler module to execution ports based on the response. Finally, responsive to a determination that the first physical register file unit is full, the method comprises re-mapping at least one select port connecting with an execution unit in the first physical register file unit to a second physical register file unit.
Type: Grant
Filed: July 25, 2016
Date of Patent: January 15, 2019
Assignee: INTEL CORPORATION
Inventor: Nelson N. Chan
-
Patent number: 10180854
Abstract: A processing system includes an execution unit, communicatively coupled to an architecturally-protected memory, the execution unit comprising a logic circuit to execute a virtual machine monitor (VMM) that supports a virtual machine (VM) comprising a guest operating system (OS) and to implement an architecturally-protected execution environment, wherein the logic circuit is to, responsive to executing a blocking instruction by the guest OS directed at a first page stored in the architecturally-protected memory during a first time period identified by a value stored in a first counter, copy the value from the first counter to a second counter, responsive to executing a first tracking instruction issued by the VMM, increment the value stored in the first counter, and set a flag to indicate successful execution of the second tracking instruction.
Type: Grant
Filed: September 28, 2016
Date of Patent: January 15, 2019
Assignee: Intel Corporation
Inventors: Rebekah M. Leslie-Hurd, Carlos V. Rozas, Dror Caspi
-
Patent number: 10169092
Abstract: A system for parallel processing tasks by allocating the use of exclusive locks to process critical sections of a task. The system includes storing update information that is updated in response to acquisition and release of an exclusive lock. When processing a task which includes a critical section containing code affecting execution of the other task, an exclusive execution unit acquires an exclusive lock prior to processing the critical section. When the section has been processed successfully, the lock is released and update information updated. Meanwhile a second task, whose critical section does not contain code affecting execution of the other task may run in parallel, without acquiring an exclusive lock, via a nonexclusive execution unit. The nonexclusive execution unit determines that the second critical section has successfully completed if the update information has not changed during processing of the second critical section.
Type: Grant
Filed: January 3, 2018
Date of Patent: January 1, 2019
Assignee: International Business Machines Corporation
Inventors: Maged M. Michael, Takuya Nakaike
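The two execution paths described here resemble an optimistic, seqlock-style scheme: exclusive tasks take the lock and bump a version counter, while nonexclusive tasks run lock-free and validate afterwards that the counter did not move. The class and function names below are hypothetical, and a real implementation would wrap the validation in a retry loop.

```python
import threading

class UpdateInfo:
    """The 'update information': a version bumped on each exclusive-lock
    release."""
    def __init__(self):
        self.version = 0
        self.lock = threading.Lock()

def run_exclusive(info, critical_section):
    # Task whose critical section affects other tasks: take the lock,
    # run, and publish the update on release.
    with info.lock:
        critical_section()
        info.version += 1

def run_nonexclusive(info, critical_section):
    # Task whose critical section affects no other task: run without the
    # lock, then succeed only if no exclusive update happened meanwhile.
    seen = info.version
    critical_section()
    return info.version == seen        # False would mean: retry

info = UpdateInfo()
ok = run_nonexclusive(info, lambda: None)   # no concurrent writer: succeeds
run_exclusive(info, lambda: None)           # bumps the version to 1
```

The benefit is that nonexclusive tasks pay no locking cost in the common case; they only re-execute when an exclusive task actually intervened.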
-
Patent number: 10157085
Abstract: Various embodiments are generally directed to decentralized load balancing in a host cluster utilized to coordinate performance of processing tasks in a workload, such as via service agents and/or host instances included in the host cluster, for instance. Some embodiments are particularly directed to a set of service agents on one or more host instances that utilize a shared cache to coordinate among themselves to automatically balance a workload without a centralized controller or a centralized load balancer. In one or more embodiments, a set of service agents may automatically and cooperatively balance a workload among themselves using the shared cache.
Type: Grant
Filed: December 22, 2017
Date of Patent: December 18, 2018
Assignee: SAS INSTITUTE INC.
Inventors: Qing Gong, Shianchin “Sam” Chen, Zhiyong Li
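Coordination through a shared cache, with no central balancer, can be sketched as agents claiming unowned tasks directly in the shared structure so their claims are visible to peers. A plain dict stands in for the shared cache here; a real system would need atomic compare-and-set claims and would not rely on Python dict semantics.

```python
def claim_tasks(shared_cache, agent_id, capacity):
    """A service agent scans the shared cache and claims unowned tasks
    up to its own capacity; no centralized controller is involved."""
    claimed = []
    for task, owner in shared_cache.items():
        if owner is None and len(claimed) < capacity:
            shared_cache[task] = agent_id    # claim visible to all peers
            claimed.append(task)
    return claimed

# Three pending tasks, no owners yet; two hypothetical agents with
# capacity for two tasks each balance the work between themselves.
cache = {"t1": None, "t2": None, "t3": None}
first = claim_tasks(cache, "agent-1", 2)
second = claim_tasks(cache, "agent-2", 2)
```

Agent 1 ends up with two tasks and agent 2 with the remaining one; because each agent consults the same cache, the workload spreads out without any node acting as the balancer.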