Patents Examined by Abu Ghaffari
-
Patent number: 10346189
Abstract: Co-locating containers based on source to improve compute density is disclosed. For example, a repository stores image files associated with metadata. A scheduler receives a request to launch a container using an image file having a source. The container is launched in a host with a first version of the first and second container components loaded to a host memory. A request to launch another container using another image file having the source is received. This container includes the first version of the first and third container components and is launched in the same host. The first version of the third container component is loaded to the host memory. A request to launch a third container using a third image file having a different source is received, and the third container is launched in a second host, including a second version of the first, second, and third container components, all loaded to a second host memory.
Type: Grant
Filed: December 5, 2016
Date of Patent: July 9, 2019
Assignee: Red Hat, Inc.
Inventors: Huamin Chen, Jay Vyas
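A minimal sketch of the co-location idea only, not the patented implementation: containers whose images share a source are placed on the same host so component versions already in that host's memory are reused. The class and field names (ImageFile, Host, Scheduler) are hypothetical.

```python
# Toy scheduler: co-locate containers by image source and load only the
# component versions a host has not already loaded.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ImageFile:
    name: str
    source: str                  # e.g. the registry/vendor the image came from
    components: frozenset        # (component_name, version) pairs

@dataclass
class Host:
    name: str
    memory: set = field(default_factory=set)   # component versions loaded here

class Scheduler:
    def __init__(self, hosts):
        self.hosts = list(hosts)
        self.source_to_host = {}   # remembers where each source was first placed

    def launch(self, image: ImageFile) -> Host:
        # Prefer the host already serving this source; otherwise pick the host
        # with the fewest loaded components as a stand-in for "least loaded".
        host = self.source_to_host.get(image.source) or min(
            self.hosts, key=lambda h: len(h.memory))
        self.source_to_host.setdefault(image.source, host)
        host.memory |= set(image.components) - host.memory   # load only what is missing
        return host

sched = Scheduler([Host("host-1"), Host("host-2")])
a = ImageFile("app-a", "vendor-x", frozenset({("libfoo", "v1"), ("libbar", "v1")}))
b = ImageFile("app-b", "vendor-x", frozenset({("libfoo", "v1"), ("libbaz", "v1")}))
c = ImageFile("app-c", "vendor-y", frozenset({("libfoo", "v2"), ("libbar", "v2")}))
print(sched.launch(a).name, sched.launch(b).name, sched.launch(c).name)  # host-1 host-1 host-2
```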
-
Patent number: 10331487
Abstract: Embodiments provide a resource management technology that may be applied to a host, where the host includes a CPU, an endpoint connected to the CPU, and an I/O device connected to the endpoint. A method includes: allocating, by the CPU, a target endpoint to a target process, where a virtual device is disposed on the target endpoint; obtaining, by the target endpoint, a performance specification of the target process, and adjusting a performance parameter of the virtual device according to the performance specification, where the adjusted virtual device satisfies a total requirement of performance specifications of all processes that use the target endpoint; and when the target process needs to access a resource, obtaining, from the I/O device, a resource that satisfies the performance specification of the target process, and providing the obtained resource to the target process for use.
Type: Grant
Filed: March 28, 2017
Date of Patent: June 25, 2019
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Baifeng Yu, Jiongjiong Gu, Muhui Lin, Zhou Yu, Lingzhi Mao
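A hypothetical sketch of the sizing rule described in the abstract: the virtual device on an endpoint is grown to cover the combined performance specifications of every process attached to that endpoint. The names (Endpoint, VirtualDevice, IOPS as the performance parameter) are illustrative and not taken from the patent.

```python
# Size a virtual device so it satisfies the total of all attached specs.
class VirtualDevice:
    def __init__(self):
        self.iops_limit = 0          # the adjustable performance parameter

class Endpoint:
    def __init__(self, name):
        self.name = name
        self.vdev = VirtualDevice()
        self.specs = {}              # process id -> required IOPS

    def attach(self, pid, required_iops):
        # Record the process's specification and grow the virtual device so it
        # satisfies the total requirement of all processes using this endpoint.
        self.specs[pid] = required_iops
        self.vdev.iops_limit = sum(self.specs.values())

    def acquire(self, pid):
        # Hand the process a resource share matching its own specification.
        return {"endpoint": self.name, "iops": self.specs[pid]}

ep = Endpoint("ep0")
ep.attach("proc-1", 500)
ep.attach("proc-2", 1500)
print(ep.vdev.iops_limit)        # 2000: covers both processes
print(ep.acquire("proc-2"))      # {'endpoint': 'ep0', 'iops': 1500}
```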
-
Patent number: 10324748
Abstract: Apparatuses, methods and storage medium associated with live migration of virtual machines (VMs) from/to host computers with graphics virtualization are disclosed herein. In embodiments, an apparatus may include a virtual machine monitor (VMM) having a memory manager to manage accesses of system memory of the apparatus, including tracking of modified memory pages of the system memory. Additionally, the VMM may include a graphics command parser to analyze graphics commands issued to a graphics processor (GPU) of the apparatus to detect writes to the system memory caused by the graphics commands, and augment the tracking of modified memory pages. Further, the VMM may include a live migration function to live migrate a VM to another apparatus, including provision of current memory content of the VM, utilizing modified memory pages tracked by the memory manager, as augmented by the graphics command parser.
Type: Grant
Filed: June 26, 2017
Date of Patent: June 18, 2019
Assignee: Intel Corporation
Inventors: Yao Zu Dong, Zhiyuan Lv
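A sketch of the augmentation idea only: the set of pages to resend in a migration round is the union of CPU-tracked dirty pages and pages a parsed graphics command stream is known to write. The (opcode, target_page) command format here is made up; a real parser would decode actual GPU command buffers.

```python
# Dirty-page tracking augmented by pages inferred from graphics commands.
PAGE_SIZE = 4096

class DirtyTracker:
    def __init__(self):
        self.dirty = set()           # page numbers modified since last sync

    def cpu_write(self, addr):
        self.dirty.add(addr // PAGE_SIZE)

class GraphicsCommandParser:
    def pages_written(self, commands):
        # Report pages the GPU will modify, which CPU write-protection misses.
        return {page for op, page in commands if op in ("BLIT", "CLEAR", "DMA_WRITE")}

def migration_round(tracker, parser, gpu_commands):
    # Pages to resend: CPU-tracked dirty pages plus GPU-written pages.
    pages = tracker.dirty | parser.pages_written(gpu_commands)
    tracker.dirty.clear()
    return sorted(pages)

t = DirtyTracker()
t.cpu_write(0x1000)
t.cpu_write(0x2010)
cmds = [("BLIT", 7), ("NOP", 0), ("DMA_WRITE", 9)]
print(migration_round(t, GraphicsCommandParser(), cmds))   # [1, 2, 7, 9]
```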
-
Patent number: 10303502
Abstract: The present disclosure relates to a method performed by an IP capable network node in a communication network. The method comprises receiving an IP message from an IP capable device, said IP message comprising an indication that the IP capable device requires a VM to be set up. The method also comprises sending an initiation message to a cloud service platform requesting the cloud service platform to set up the VM for the IP capable device. The method also comprises receiving a service information message from the cloud service platform comprising information about the VM set up by the cloud service platform. The method also comprises sending an IP message to the IP capable device comprising information about the VM set up for said device, allowing the device to send IP data to the VM.
Type: Grant
Filed: November 7, 2013
Date of Patent: May 28, 2019
Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
Inventors: Teemu Rinta-Aho, Heikki Mahkonen, Tero Kauppinen
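A hypothetical end-to-end sketch of the message flow: the device's IP message asks for a VM, the network node sends an initiation message to a cloud service platform, and the VM's address from the service information message is returned to the device. The dict-based messages and the CloudPlatform/NetworkNode classes are invented for illustration.

```python
# Simulated message exchange between device, network node, and cloud platform.
import itertools

class CloudPlatform:
    _ids = itertools.count(1)

    def set_up_vm(self, for_device):
        vm_id = next(self._ids)
        # Service information message: details of the VM just set up.
        return {"vm_id": vm_id, "vm_ip": f"10.0.0.{vm_id}", "owner": for_device}

class NetworkNode:
    def __init__(self, cloud):
        self.cloud = cloud

    def handle_ip_message(self, msg):
        if msg.get("needs_vm"):
            info = self.cloud.set_up_vm(msg["device_ip"])            # initiation message
            return {"to": msg["device_ip"], "vm_ip": info["vm_ip"]}  # reply to the device
        return {"to": msg["device_ip"], "vm_ip": None}

node = NetworkNode(CloudPlatform())
reply = node.handle_ip_message({"device_ip": "192.0.2.7", "needs_vm": True})
print(reply)   # the device can now send IP data to reply["vm_ip"]
```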
-
Patent number: 10296373
Abstract: A method of pausing a plurality of service-oriented application (SOA) instances may include receiving, from an instance of an SOA entering a pause state, an initiation message. The initiation message may include an exit criterion that identifies a business condition that must be satisfied before the instance of the SOA exits the pause state. The method may also include receiving a notification from an event producer, the notification comprising a status of a business event, and determining whether the status of the business event satisfies the business condition of the exit criterion. The method may additionally include sending, in response to a determination that the status of the business event satisfies the business condition of the exit criterion, an indication to the instance of the SOA that the business condition has been satisfied such that the instance of the SOA can exit the pause state.
Type: Grant
Filed: May 27, 2014
Date of Patent: May 21, 2019
Assignee: ORACLE INTERNATIONAL CORPORATION
Inventors: Raju Addala, Alok Singh, Scott Kozic, Sarita Sridharan, Sunita Datti
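An illustrative sketch, not Oracle's implementation: a small coordinator parks paused instances with their exit criteria and releases an instance when an event producer reports a matching business-event status. The criterion shape (event name plus status) is an assumption.

```python
# Park paused SOA instances until their business condition is satisfied.
class PauseCoordinator:
    def __init__(self):
        self.paused = []   # list of (instance_id, exit_criterion)

    def initiate_pause(self, instance_id, exit_criterion):
        # exit_criterion: the business condition that must hold before resuming,
        # e.g. {"event": "purchase_order", "status": "APPROVED"}
        self.paused.append((instance_id, exit_criterion))

    def on_event(self, notification):
        # Compare the reported status against each parked exit criterion and
        # return the instances that may now leave the pause state.
        released = [iid for iid, crit in self.paused
                    if crit["event"] == notification["event"]
                    and crit["status"] == notification["status"]]
        self.paused = [(iid, c) for iid, c in self.paused if iid not in released]
        return released

coord = PauseCoordinator()
coord.initiate_pause("order-flow-42", {"event": "purchase_order", "status": "APPROVED"})
print(coord.on_event({"event": "purchase_order", "status": "PENDING"}))   # []
print(coord.on_event({"event": "purchase_order", "status": "APPROVED"}))  # ['order-flow-42']
```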
-
Patent number: 10296379
Abstract: Scheduling threads in a system with many cores includes generating a thread map where a connection relationship between a plurality of threads is represented by a frequency of inter-process communication (IPC) between threads, generating a core map where a connection relationship between a plurality of cores is represented by a hop between cores, and respectively allocating the plurality of threads to the plurality of cores defined by the core map, based on a thread allocation policy defining a mapping rule between the thread map and the core map.
Type: Grant
Filed: March 17, 2017
Date of Patent: May 21, 2019
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Kang Ho Kim, Kwang Won Koh, Jin Mee Kim, Jeong Hwan Lee, Seung Hyub Jeon, Sung In Jung, Yeon Jeong Jeong, Seung Jun Cha
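A toy version of the idea, not the patented allocation policy: given a thread map (IPC frequency between threads) and a core map (hop distance between cores), a greedy rule places the most communicative unplaced thread on the free core nearest to the placed thread it talks to most. The matrices and the seeding choice are invented.

```python
# Greedy thread-to-core placement from a thread map and a core map.
def assign(thread_ipc, core_hops):
    # Assumes at least as many cores as threads.
    n_threads, n_cores = len(thread_ipc), len(core_hops)
    placement = {0: 0}                     # seed: thread 0 on core 0
    free = set(range(1, n_cores))
    while len(placement) < n_threads:
        # Pick the unplaced thread with the highest IPC to any placed thread.
        t, partner = max(
            ((u, p) for u in range(n_threads) if u not in placement for p in placement),
            key=lambda up: thread_ipc[up[0]][up[1]])
        # Put it on the free core with the fewest hops to its partner's core.
        core = min(free, key=lambda c: core_hops[placement[partner]][c])
        placement[t] = core
        free.discard(core)
    return placement

ipc  = [[0, 9, 1], [9, 0, 2], [1, 2, 0]]   # threads 0 and 1 communicate heavily
hops = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]   # a 3-core line topology
print(assign(ipc, hops))                   # {0: 0, 1: 1, 2: 2}
```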
-
Patent number: 10289452
Abstract: Methods and systems to assign threads in a multi-core processor are disclosed. A method to assign threads in a multi-core processor may include determining data relating to memory controllers fetching data in response to cache misses experienced by a first core and a second core. Threads may be assigned to cores based on the number of cache misses processed by respective memory controllers. Methods may further include determining that a thread is latency-bound or bandwidth-bound. Threads may be assigned to cores based on the determination of the thread as latency-bound or bandwidth-bound. In response to the assignment of the threads to the cores, data for the thread may be stored in the assigned cores.
Type: Grant
Filed: April 24, 2017
Date of Patent: May 14, 2019
Assignee: Empire Technology Development, LLC
Inventor: Yan Solihin
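A hedged sketch of the idea only: a thread is classified as latency-bound or bandwidth-bound from its cache-miss profile, then pinned near the memory controller that serves most of its misses (latency-bound) or spread toward the least-loaded controller (bandwidth-bound). The threshold, core layout, and classification rule are invented for the example.

```python
# Classify a thread and pick a core near the chosen memory controller.
CORES_BY_CONTROLLER = {0: [0, 1], 1: [2, 3]}   # cores adjacent to each controller

def classify(misses_per_controller, bandwidth_threshold=1000):
    # Many total misses -> limited by memory bandwidth; few -> limited by latency.
    total = sum(misses_per_controller.values())
    return "bandwidth" if total >= bandwidth_threshold else "latency"

def assign_thread(misses_per_controller, controller_load):
    kind = classify(misses_per_controller)
    if kind == "latency":
        # Sit next to the controller that fetched most of this thread's misses.
        target = max(misses_per_controller, key=misses_per_controller.get)
    else:
        # Spread bandwidth-bound threads toward the least-loaded controller.
        target = min(controller_load, key=controller_load.get)
    controller_load[target] += 1
    return kind, CORES_BY_CONTROLLER[target][0]

load = {0: 0, 1: 0}
print(assign_thread({0: 40, 1: 10}, load))       # ('latency', 0)
print(assign_thread({0: 900, 1: 600}, load))     # ('bandwidth', 2)
```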
-
Patent number: 10275280
Abstract: A plurality of cores are maintained in a processor complex. A core of the plurality of cores is reserved for execution of critical tasks, wherein it is preferable to prioritize execution of critical tasks over non-critical tasks. A scheduler receives a task for scheduling in the plurality of cores. In response to determining that the task is a critical task, the task is scheduled for execution in the reserved core.
Type: Grant
Filed: August 10, 2016
Date of Patent: April 30, 2019
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Matthew G. Borlick, Lokesh M. Gupta, Clint A. Hardy, Trung N. Nguyen
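A minimal sketch assuming a single reserved core: critical tasks go to the reserved core's queue, and everything else is spread round-robin over the remaining cores. The class name and task identifiers are illustrative.

```python
# Reserve one core for critical tasks; round-robin the rest.
from collections import deque
from itertools import cycle

class ProcessorComplexScheduler:
    def __init__(self, cores, reserved_core):
        self.queues = {c: deque() for c in cores}
        self.reserved = reserved_core
        self._general = cycle([c for c in cores if c != reserved_core])

    def schedule(self, task_name, critical=False):
        # Critical tasks are prioritized by giving them the reserved core,
        # which never runs non-critical work.
        core = self.reserved if critical else next(self._general)
        self.queues[core].append(task_name)
        return core

s = ProcessorComplexScheduler(cores=[0, 1, 2, 3], reserved_core=3)
print(s.schedule("flush-cache", critical=True))   # 3
print(s.schedule("background-scan"))              # 0
print(s.schedule("stats-rollup"))                 # 1
```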
-
Patent number: 10275277
Abstract: According to one aspect of the present disclosure, a technique for job distribution within a grid environment includes receiving jobs at a submission cluster for distribution of the jobs to at least one of a plurality of execution clusters where each execution cluster includes one or more execution hosts. Resource attributes are determined corresponding to each execution host of the execution clusters. For each execution cluster, execution hosts are grouped based on the resource attributes of the respective execution hosts. For each grouping of execution hosts, a mega-host is defined for the respective execution cluster where the mega-host for a respective execution cluster defines resource attributes based on the resource attributes of the respective grouped execution hosts. Resource requirements for the jobs are determined, and candidate mega-hosts are identified for the jobs based on the resource attributes of the respective mega-hosts and the resource requirements of the jobs.
Type: Grant
Filed: September 12, 2016
Date of Patent: April 30, 2019
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Chong Chen, Fang Liu, Qi Wang, Shutao Yuan
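A sketch under simplifying assumptions (exact-attribute grouping, two numeric resources): hosts in each execution cluster are grouped by identical resource attributes, each group is summarized as a mega-host, and a job's requirements are matched against mega-hosts instead of individual hosts. The attribute names are invented.

```python
# Build mega-hosts per cluster and find candidates for a job.
from collections import defaultdict

def build_mega_hosts(clusters):
    # clusters: {cluster_name: [{"type": "x86", "mem_gb": 64, "slots": 16}, ...]}
    mega = []
    for cname, hosts in clusters.items():
        groups = defaultdict(list)
        for h in hosts:
            groups[(h["type"], h["mem_gb"])].append(h)       # group by attributes
        for (htype, mem), members in groups.items():
            mega.append({"cluster": cname, "type": htype, "mem_gb": mem,
                         "slots": sum(m["slots"] for m in members)})
    return mega

def candidate_mega_hosts(job, mega_hosts):
    # A mega-host is a candidate if it meets the job's type, memory and slot needs.
    return [m for m in mega_hosts
            if m["type"] == job["type"] and m["mem_gb"] >= job["mem_gb"]
            and m["slots"] >= job["slots"]]

clusters = {"east": [{"type": "x86", "mem_gb": 64, "slots": 16},
                     {"type": "x86", "mem_gb": 64, "slots": 16},
                     {"type": "power", "mem_gb": 256, "slots": 8}]}
mega = build_mega_hosts(clusters)
print(candidate_mega_hosts({"type": "x86", "mem_gb": 32, "slots": 24}, mega))
```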
-
Patent number: 10275287
Abstract: Techniques are provided for dynamically self-balancing communication and computation. In an embodiment, each partition of application data is stored on a respective computer of a cluster. The application is divided into distributed jobs, each of which corresponds to a partition. Each distributed job is hosted on the computer that hosts the corresponding data partition. Each computer divides its distributed job into computation tasks. Each computer has a pool of threads that execute the computation tasks. During execution, one computer receives a data access request from another computer. The data access request is executed by a thread of the pool. Threads of the pool are bimodal and may be repurposed between communication and computation, depending on workload. Each computer individually detects completion of its computation tasks. Each computer informs a central computer that its distributed job has finished. The central computer detects when all distributed jobs of the application have terminated.
Type: Grant
Filed: June 7, 2016
Date of Patent: April 30, 2019
Assignee: Oracle International Corporation
Inventors: Thomas Manhardt, Sungpack Hong, Siegfried Depner, Jinsu Lee, Nicholas Roth, Hassan Chafi
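Illustrative only: a "bimodal" worker pool in which the same threads drain a mixed queue of local computation tasks and remote data-access requests, so a worker is repurposed between computation and communication purely by what it dequeues next. The task payloads are toy stand-ins.

```python
# One thread pool handling both computation tasks and data-access requests.
import queue
import threading

work = queue.Queue()
results = []
lock = threading.Lock()

def worker():
    while True:
        kind, payload = work.get()
        if kind == "stop":
            break
        if kind == "compute":
            out = ("computed", payload * payload)          # local partition work
        else:                                              # kind == "data_request"
            out = ("served", f"partition-row-{payload}")   # answer a remote peer
        with lock:
            results.append(out)

pool = [threading.Thread(target=worker) for _ in range(3)]
for t in pool:
    t.start()
for i in range(4):
    work.put(("compute", i))
work.put(("data_request", 7))          # arrives from another computer mid-run
for _ in pool:
    work.put(("stop", None))
for t in pool:
    t.join()
print(sorted(results))
```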
-
Patent number: 10268509
Abstract: According to one aspect of the present disclosure, a technique for job distribution within a grid environment includes receiving jobs at a submission cluster for distribution of the jobs to at least one of a plurality of execution clusters where each execution cluster includes one or more execution hosts. Resource attributes are determined corresponding to each execution host of the execution clusters. For each execution cluster, execution hosts are grouped based on the resource attributes of the respective execution hosts. For each grouping of execution hosts, a mega-host is defined for the respective execution cluster where the mega-host for a respective execution cluster defines resource attributes based on the resource attributes of the respective grouped execution hosts. Resource requirements for the jobs are determined, and candidate mega-hosts are identified for the jobs based on the resource attributes of the respective mega-hosts and the resource requirements of the jobs.
Type: Grant
Filed: September 12, 2016
Date of Patent: April 23, 2019
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Chong Chen, Fang Liu, Qi Wang, Shutao Yuan
-
Patent number: 10261834
Abstract: A method for selecting a media processing unit performed in a network node of a distributed cloud. The distributed cloud comprises two or more media processing units that handle media processing required by a media service. The method includes receiving, from a communication device, a request for the media service and obtaining, for each media processing unit, at least one configurable parameter value of a parameter relating to handling of the media service. The method also includes selecting, based on the at least one parameter value, a media processing unit for processing the requested media service for the communication device.
Type: Grant
Filed: December 18, 2013
Date of Patent: April 16, 2019
Assignee: Telefonaktiebolaget LM Ericsson (publ)
Inventors: Tomas Mecklin, Jouni Mäenpää, Miljenko Opsenica, Tommi Roth
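A minimal sketch with invented parameter names: the network node obtains one configurable parameter value per media processing unit (here, current load plus a capability flag) and selects the unit with the most favorable value for the requested service.

```python
# Pick a media processing unit from per-unit parameter values.
def select_media_processing_unit(units):
    # units: {"mpu-1": {"load": 0.7, "transcode_capable": True}, ...}
    capable = {name: p for name, p in units.items() if p["transcode_capable"]}
    return min(capable, key=lambda name: capable[name]["load"])

units = {"mpu-1": {"load": 0.7, "transcode_capable": True},
         "mpu-2": {"load": 0.2, "transcode_capable": True},
         "mpu-3": {"load": 0.1, "transcode_capable": False}}
print(select_media_processing_unit(units))    # mpu-2
```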
-
Patent number: 10241835
Abstract: A storage resource scheduling method and a storage and computing system, where the storage and computing system has a computing system and a storage system, the computing system has at least one computing unit, and the storage system has at least one storage unit. The method executed by the computing system includes: identifying a task type of a computing unit in the at least one computing unit; sending task type information to the storage system, where the task type information carries the task type; acquiring a scheduling policy of the task type according to the task type information; and scheduling, according to the scheduling policy, a storage unit corresponding to the computing unit. In the method, different tasks of a computing unit are identified, and resource scheduling is performed according to task type, thereby enabling scheduling and management of different tasks on a same storage unit.
Type: Grant
Filed: August 24, 2016
Date of Patent: March 26, 2019
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventor: Li Wang
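A hypothetical sketch of the flow: the computing side tags a computing unit's work with a task type, and the storage side looks up a per-type scheduling policy and applies it to the storage unit backing that computing unit. The policy contents (I/O priority, read-ahead) are invented for illustration.

```python
# Per-task-type storage scheduling applied to the backing storage unit.
SCHEDULING_POLICIES = {
    "sequential_scan": {"io_priority": "low",  "read_ahead_kb": 1024},
    "oltp":            {"io_priority": "high", "read_ahead_kb": 16},
}

class StorageSystem:
    def __init__(self, unit_of):
        self.unit_of = unit_of               # computing unit -> storage unit
        self.unit_settings = {}

    def handle_task_type_info(self, info):
        # info carries the task type identified by the computing system.
        policy = SCHEDULING_POLICIES[info["task_type"]]
        unit = self.unit_of[info["computing_unit"]]
        self.unit_settings[unit] = policy    # schedule the unit per the policy
        return unit, policy

class ComputingSystem:
    def __init__(self, storage):
        self.storage = storage

    def run(self, computing_unit, task_type):
        # Identify the task type, then send task type information to storage.
        return self.storage.handle_task_type_info(
            {"computing_unit": computing_unit, "task_type": task_type})

storage = StorageSystem(unit_of={"cu-1": "lun-7"})
print(ComputingSystem(storage).run("cu-1", "oltp"))
```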
-
Patent number: 10223139
Abstract: CRYSTAL ("Cognitive Radio You Share, Trust and Access Locally") is a virtualized cognitive access point that may provide for combining multiple wireless access applications on a single hardware platform. Radio technologies such as LTE, WiMax, GSM, and the like can be supported. CRYSTAL platforms can be aggregated and managed as a cloud, which provides a model for access point sharing, control, and management. CRYSTAL may be used for scenarios such as neighborhood spectrum management. CRYSTAL security features allow for home/residential as well as private infrastructure implementations.
Type: Grant
Filed: March 14, 2014
Date of Patent: March 5, 2019
Assignee: The Trustees of the University of Pennsylvania
Inventors: Jonathan M. Smith, Eric R. Keller, Thomas W. Rondeau, Kyle B. Super
-
Patent number: 10203985
Abstract: In an information processing apparatus, if the number of subtasks currently executing in the apparatus does not exceed a threshold, a second controller obtains a subtask from one of a plurality of queues and causes the obtained subtask to execute by newly creating a thread; if the number of subtasks currently executing exceeds the threshold, the second controller does not newly create a thread. However, if the number of currently executing subtasks among the subtasks registered in a first queue is less than the upper limit value defined for the first queue, the second controller obtains a subtask registered in the first queue and causes it to execute by newly creating a thread, regardless of whether or not the number of subtasks currently executing in the apparatus exceeds the threshold.
Type: Grant
Filed: October 26, 2016
Date of Patent: February 12, 2019
Assignee: Canon Kabushiki Kaisha
Inventor: Toshiyuki Nakazawa
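A sketch of the admission rule only, with invented names and limits: a dispatcher creates a new worker thread for a queued subtask while the global count of running subtasks is under a threshold, and the first queue may exceed that global threshold as long as it stays under its own per-queue upper limit.

```python
# Global threshold for thread creation, with a per-queue exemption for the first queue.
class SubtaskDispatcher:
    def __init__(self, global_threshold, first_queue_limit):
        self.global_threshold = global_threshold
        self.first_queue_limit = first_queue_limit
        self.running_total = 0
        self.running_first_queue = 0

    def try_start(self, from_first_queue):
        if from_first_queue and self.running_first_queue < self.first_queue_limit:
            # The first queue gets a new thread regardless of the global threshold.
            self.running_first_queue += 1
            self.running_total += 1
            return True
        if not from_first_queue and self.running_total < self.global_threshold:
            self.running_total += 1
            return True
        return False                 # do not create a new thread

d = SubtaskDispatcher(global_threshold=2, first_queue_limit=2)
print(d.try_start(False), d.try_start(False), d.try_start(False))  # True True False
print(d.try_start(True), d.try_start(True), d.try_start(True))     # True True False
```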
-
Patent number: 10198298
Abstract: The technology disclosed improves existing stream processing systems by allowing resources within the infrastructure of a stream processing system to be both scaled up and scaled down. In particular, the technology disclosed relates to a dispatch system for a stream processing system that adapts its behavior according to the computational capacity of the system based on a run-time evaluation. The technical solution includes, during run-time execution of a pipeline, comparing a count of available physical threads against a set number of logically parallel threads. When the count of available physical threads equals or exceeds the number of logically parallel threads, the solution includes concurrently processing the batches at the physical threads. When there are fewer available physical threads than the number of logically parallel threads, the solution includes multiplexing the batches sequentially over the available physical threads.
Type: Grant
Filed: December 31, 2015
Date of Patent: February 5, 2019
Assignee: salesforce.com, inc.
Inventors: Elden Gregory Bishop, Jeffrey Chao
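A toy sketch of the dispatch decision, not the product's code: if the count of available physical threads covers the requested logical parallelism, batches run concurrently across that many workers; otherwise they are multiplexed sequentially over the threads that are available. Batch "processing" here is just squaring numbers.

```python
# Choose between concurrent and multiplexed batch processing at run time.
from concurrent.futures import ThreadPoolExecutor

def process(batch):
    return [x * x for x in batch]

def dispatch(batches, logical_parallelism, available_physical_threads):
    if available_physical_threads >= logical_parallelism:
        # Enough physical threads: one batch per logically parallel thread.
        with ThreadPoolExecutor(max_workers=logical_parallelism) as pool:
            return list(pool.map(process, batches)), "concurrent"
    # Scarce physical threads: multiplex the batches over what is available.
    with ThreadPoolExecutor(max_workers=available_physical_threads) as pool:
        return list(pool.map(process, batches)), "multiplexed"

batches = [[1, 2], [3, 4], [5, 6], [7, 8]]
print(dispatch(batches, logical_parallelism=4, available_physical_threads=8)[1])  # concurrent
print(dispatch(batches, logical_parallelism=4, available_physical_threads=2)[1])  # multiplexed
```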
-
Patent number: 10198289
Abstract: A system for connecting user action flows is disclosed. The system determines when a first object is created on a first thread in response to a first user action. Additionally, the system stores a first relationship between the first thread and the first object based on the determination of when the first object is created. Moreover, the system determines when the first object is running on a second thread that differs from the first thread, and stores a second relationship between the second thread and the first object based on the determination of when the first object is running.
Type: Grant
Filed: April 29, 2014
Date of Patent: February 5, 2019
Assignee: ENTIT SOFTWARE LLC
Inventors: Moran Rehana, Michael Seldin, Michael Abramov
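A hedged sketch of the bookkeeping only: when an object is created in response to a user action, the creating thread is recorded; when the same object later runs on a different thread, that second relationship is recorded too, so the two segments of the user action flow can be stitched together. The FlowTracker class is invented.

```python
# Record creation-thread and execution-thread relationships for an object.
import threading

class FlowTracker:
    def __init__(self):
        self.created_on = {}       # object id -> creating thread name
        self.ran_on = {}           # object id -> other threads it ran on

    def on_create(self, obj):
        self.created_on[id(obj)] = threading.current_thread().name

    def on_run(self, obj):
        t = threading.current_thread().name
        if t != self.created_on.get(id(obj)):
            self.ran_on.setdefault(id(obj), []).append(t)

tracker = FlowTracker()

class Task:                         # stands in for "the first object"
    pass

task = Task()
tracker.on_create(task)             # created on the first (main) thread

worker = threading.Thread(target=tracker.on_run, args=(task,), name="worker-1")
worker.start()
worker.join()

print(tracker.created_on[id(task)], "->", tracker.ran_on[id(task)])  # MainThread -> ['worker-1']
```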
-
Patent number: 10185582
Abstract: A host controller receives a request to perform an action in a virtual computing system. The host controller creates a command to execute operations associated with the request and creates a job to monitor the progress of the operations of the command. As the command executes the operations, the host controller receives an indication of the progress of the command execution; the operations report progress to the job each time an operation is completed.
Type: Grant
Filed: November 28, 2012
Date of Patent: January 22, 2019
Assignee: Red Hat Israel, Ltd.
Inventors: Moti Asayag, Yair Zaslavsky
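An illustrative sketch, not Red Hat's API: a request is turned into a command made of several operations plus a job that tracks progress, and each operation reports to the job as it completes. The class and operation names are invented.

```python
# A command whose operations report progress to a monitoring job.
class Job:
    def __init__(self, total_operations):
        self.total = total_operations
        self.completed = 0

    def report(self, op_name):
        self.completed += 1
        print(f"{op_name} done: {self.completed}/{self.total} "
              f"({100 * self.completed // self.total}%)")

class Command:
    def __init__(self, operations, job):
        self.operations, self.job = operations, job

    def execute(self):
        for name, fn in self.operations:
            fn()                      # perform the operation
            self.job.report(name)     # report progress each time one finishes

def handle_request(action_name):
    ops = [("allocate-disk", lambda: None),
           ("define-vm", lambda: None),
           ("start-vm", lambda: None)]
    job = Job(total_operations=len(ops))   # created to monitor the command
    Command(ops, job).execute()
    return job

handle_request("create virtual machine")
```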
-
Patent number: 10185598
Abstract: In an industrial automation environment, a three-tier architecture is used to offload human-machine-interaction (HMI) automation tasks to local mobile devices and then to the cloud, to take advantage of distributed computing and processing resources and to add new features to the HMI panel system. A scheduling algorithm based on the characteristics of the HMI tasks distributes these tasks intelligently among the local HMI panel, mobile devices, and the cloud, to best utilize the merits of each tier.
Type: Grant
Filed: May 8, 2014
Date of Patent: January 22, 2019
Assignee: Siemens Aktiengesellschaft
Inventors: Lingyun Wang, Arquimedes Martinez Canedo, Holger Strobel
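A made-up routing rule, not the patented scheduling algorithm: each HMI task carries a few characteristics, and a simple rule keeps latency-critical work on the local panel, sends moderately light work to a paired mobile device, and pushes heavy, latency-tolerant work to the cloud. The task fields and thresholds are assumptions.

```python
# Route HMI tasks to panel, mobile device, or cloud by their characteristics.
def place_task(task):
    # task: {"name": ..., "latency_ms_budget": ..., "compute_units": ...}
    if task["latency_ms_budget"] <= 50:
        return "hmi_panel"          # tight interaction deadlines stay local
    if task["compute_units"] <= 10:
        return "mobile_device"      # light enough for the operator's device
    return "cloud"                  # heavy, latency-tolerant processing

tasks = [
    {"name": "alarm-banner-update", "latency_ms_budget": 20,    "compute_units": 1},
    {"name": "trend-chart-render",  "latency_ms_budget": 500,   "compute_units": 5},
    {"name": "batch-report",        "latency_ms_budget": 60000, "compute_units": 80},
]
for t in tasks:
    print(t["name"], "->", place_task(t))
```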
-
Use of concurrent time bucket generations for scalable scheduling of operations in a computer system
Patent number: 10169081
Abstract: Concurrent processing of objects is scheduled using time buckets of different time bucket generations. A time bucket generation includes a configuration for time buckets associated with that time bucket generation. The concurrent use of different time bucket generations includes the concurrent processing of objects referenced by time buckets of different time bucket generations.
Type: Grant
Filed: October 31, 2016
Date of Patent: January 1, 2019
Assignee: Oracle International Corporation
Inventors: Aditya Sawhney, Christopher Fagiani
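A hedged sketch of the bucketing idea only: objects due for processing are placed into time buckets, a generation fixes the bucket configuration (here just the bucket width), and buckets from an old and a new generation can be drained side by side during a changeover. The class shape and the width parameter are assumptions.

```python
# Two time bucket generations with different configurations, used concurrently.
import math

class TimeBucketGeneration:
    def __init__(self, gen_id, bucket_width_s):
        self.gen_id = gen_id
        self.bucket_width_s = bucket_width_s     # the generation's configuration
        self.buckets = {}                        # bucket index -> object ids

    def add(self, obj_id, due_time_s):
        idx = math.floor(due_time_s / self.bucket_width_s)
        self.buckets.setdefault(idx, []).append(obj_id)

    def due(self, now_s):
        # Objects in every bucket whose time window has fully elapsed.
        ready = [o for idx, objs in self.buckets.items()
                 if (idx + 1) * self.bucket_width_s <= now_s for o in objs]
        self.buckets = {i: o for i, o in self.buckets.items()
                        if (i + 1) * self.bucket_width_s > now_s}
        return ready

old_gen = TimeBucketGeneration("gen-1", bucket_width_s=60)   # coarse buckets
new_gen = TimeBucketGeneration("gen-2", bucket_width_s=10)   # finer buckets
old_gen.add("obj-A", due_time_s=30)
new_gen.add("obj-B", due_time_s=35)
# Both generations are drained concurrently during the changeover.
print(old_gen.due(now_s=70), new_gen.due(now_s=70))   # ['obj-A'] ['obj-B']
```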