Process Scheduling Patents (Class 718/102)
  • Patent number: 10462745
    Abstract: The disclosed technology includes techniques for preserving battery life of a mobile device by monitoring a mobile device to determine a state of inactivity. A state of inactivity may be determined if the screen of the mobile device is off and the mobile device remains stationary for a period of time. Battery life may be preserved by placing the mobile device and/or a mobile application of the mobile device into an idle state for successive idle periods separated by maintenance periods. When in an idle state, the mobile device and/or a mobile application of the mobile device may be prevented from utilizing various features or functions of the mobile device that may tend to drain the battery. The mobile device and/or mobile application may be granted temporary access to the various features and functions during the maintenance periods to temporarily allow the mobile device and/or mobile application to perform updates.
    Type: Grant
    Filed: January 15, 2016
    Date of Patent: October 29, 2019
    Assignee: Google LLC
    Inventors: Meghan Desai, Dianne Hackborn, Paul Eastham
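    Illustrative sketch (not part of the patent record): a minimal Python model of the idle/maintenance cycle the abstract describes, in which restrictions are lifted only during maintenance windows. The class name IdleManager, the thresholds, and the restricted flag are assumptions for illustration, not terminology from the patent.
      import time

      class IdleManager:
          """Toy model of successive idle periods separated by maintenance windows."""
          def __init__(self, idle_secs, maintenance_secs):
              self.idle_secs = idle_secs
              self.maintenance_secs = maintenance_secs
              self.restricted = False     # True while background work would be blocked

          def should_enter_idle(self, screen_on, stationary_secs, threshold=1800):
              # Enter idle only if the screen is off and the device has been still long enough.
              return (not screen_on) and stationary_secs >= threshold

          def run_cycle(self, app_sync):
              # One idle period followed by one maintenance window.
              self.restricted = True          # deny background network, jobs, alarms
              time.sleep(self.idle_secs)      # device sits in the idle state
              self.restricted = False         # maintenance window: temporarily allow work
              app_sync()                      # let apps run their deferred updates
              time.sleep(self.maintenance_secs)

      if __name__ == "__main__":
          mgr = IdleManager(idle_secs=0.1, maintenance_secs=0.05)
          if mgr.should_enter_idle(screen_on=False, stationary_secs=3600):
              mgr.run_cycle(lambda: print("apps syncing during maintenance window"))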
  • Patent number: 10460004
    Abstract: Time to live (“TTL”) values are determined based on one or more factors. The TTL values may be included in responses to requests for resources, thereby affecting the frequency of subsequent requests. This dynamic determination of TTL values may provide resilience to system load, for example by using longer TTL values when the system is under greater load in order to reduce the rate at which subsequent requests are received. A dynamic TTL service may calculate a TTL value based on one or more factors, such as overall system load, resource load, hardware load, and/or software load. In various embodiments, a dynamic TTL service may act natively within a service, within a system framework, as a proxy, as a cluster, and/or as a broker.
    Type: Grant
    Filed: June 24, 2011
    Date of Patent: October 29, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: David C. Yanacek, David A. Killian, Krishnan Narayanan, Matthew J. Wren, Samuel J. Young, Eric D. Crahen
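    Illustrative sketch (not part of the patent record): a small Python function showing how a TTL could grow with system load so that clients re-request less often under pressure, as the abstract describes. The linear scaling and the base/max values are assumptions, not the patent's method.
      def dynamic_ttl(system_load, base_ttl=60, max_ttl=600):
          """Return a TTL that grows with load, reducing the rate of subsequent requests."""
          load = max(0.0, min(system_load, 1.0))        # clamp utilization to [0, 1]
          return int(base_ttl + (max_ttl - base_ttl) * load)

      def respond(resource, system_load):
          # Include the computed TTL in the response to a resource request.
          return {"body": resource, "ttl": dynamic_ttl(system_load)}

      print(respond("catalog.json", system_load=0.2))   # short TTL when lightly loaded
      print(respond("catalog.json", system_load=0.9))   # long TTL when heavily loaded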
  • Patent number: 10452509
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for dynamic graph performance monitoring. One of the methods includes receiving input data by the data processing system, the input data provided by an application executing on the data processing system. The method includes determining a characteristic of the input data. The method includes identifying, by the application, a dynamic component from multiple available dynamic components based on the determined characteristic, the multiple available dynamic components being stored in a data storage system. The method includes processing the input data using the identified dynamic component. The method also includes determining one or more performance metrics associated with the processing.
    Type: Grant
    Filed: September 21, 2018
    Date of Patent: October 22, 2019
    Assignee: Ab Initio Technology LLC
    Inventors: Mark Buxbaum, Michael G. Mulligan, Tim Wakeling, Matthew Darcy Atterbury
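    Illustrative sketch (not part of the patent record): a Python toy showing the shape of the idea, picking a dynamic component from a registry based on a characteristic of the input data and recording performance metrics for the run. The registry, the characteristic test, and the metric names are invented for illustration.
      import time

      # Hypothetical registry of dynamic components keyed by an input-data characteristic.
      COMPONENTS = {
          "csv": lambda recs: [r.split(",") for r in recs],
          "kv":  lambda recs: [dict(p.split("=", 1) for p in r.split()) for r in recs],
      }

      def characterize(records):
          # Decide which component applies from the shape of the input.
          return "csv" if "," in records[0] else "kv"

      def process(records):
          kind = characterize(records)
          component = COMPONENTS[kind]            # identify the dynamic component
          start = time.perf_counter()
          result = component(records)
          elapsed = time.perf_counter() - start   # performance metric for this run
          return result, {"component": kind, "records": len(records), "seconds": elapsed}

      print(process(["a,b,c", "d,e,f"]))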
  • Patent number: 10452445
    Abstract: The techniques disclosed herein provide a dynamically configurable cluster of storage devices. In some configurations, the dynamically configurable cluster is associated with a fault domain. The cluster may include a plurality of computing devices that each include at least a storage device. The plurality of storage devices in the cluster may be configured to support a plurality of workloads coupled to the dynamically configurable cluster. The plurality of storage devices in the dynamically configurable cluster may be allocated to one or more of the plurality of workloads based on metadata-identified resiliency requirements, performance requirements, and/or cost factors linked to the one or more of the plurality of workloads.
    Type: Grant
    Filed: August 30, 2017
    Date of Patent: October 22, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Karan Mehra, Emanuel Paleologu, Vinod R. Shankar
  • Patent number: 10447806
    Abstract: Systems for distributed resource management. A method for deploying two or more computing workloads across heterogeneous computing systems commences upon establishing network communications between an on-premises computing system and a cloud computing system. Resource data corresponding to the on-premises resources and the cloud resources are collected continuously and saved as a time series history of observations. Upon a user request or other event, workload placement planning commences. A set of workload placement plans are generated, which workload placement plans are then evaluated in accordance with one or more quantitative objectives. Scheduling commands to carry out the workload placement plans are generated. A first portion of the scheduling commands is executed at the cloud computing system, and another portion of the scheduling commands is executed at the on-premises computing system.
    Type: Grant
    Filed: June 9, 2017
    Date of Patent: October 15, 2019
    Assignee: Nutanix, Inc.
    Inventors: Manjul Sahay, Ramesh U. Chandra
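    Illustrative sketch (not part of the patent record): a brute-force Python example of generating candidate workload placement plans across an on-premises site and a cloud site, filtering by capacity, and scoring against a quantitative objective (here, cost). The demand, capacity, and cost numbers are made up for illustration.
      from itertools import product

      workloads = {"db": 8, "web": 2, "batch": 4}          # demand in arbitrary units
      capacity  = {"on_prem": 10, "cloud": 100}            # observed free capacity
      unit_cost = {"on_prem": 1.0, "cloud": 3.0}           # relative cost per unit

      def plans():
          # Every assignment of each workload to one of the two sites is a candidate plan.
          for combo in product(capacity, repeat=len(workloads)):
              yield dict(zip(workloads, combo))

      def feasible(plan):
          used = {site: 0 for site in capacity}
          for wl, site in plan.items():
              used[site] += workloads[wl]
          return all(used[s] <= capacity[s] for s in capacity)

      def cost(plan):
          # Quantitative objective: total cost of the placement.
          return sum(workloads[wl] * unit_cost[site] for wl, site in plan.items())

      best = min((p for p in plans() if feasible(p)), key=cost)
      print("best placement:", best, "cost:", cost(best))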
  • Patent number: 10445234
    Abstract: Systems, methods, and apparatuses relating to a configurable spatial accelerator are described. In an embodiment, a processor includes a plurality of processing elements; and an interconnect network between the plurality of processing elements to receive an input of a dataflow graph comprising a plurality of nodes, wherein the dataflow graph is to be overlaid into the interconnect network and the plurality of processing elements with each node represented as a dataflow operator in the plurality of processing elements, and the plurality of processing elements are to perform an atomic operation when an incoming operand set arrives at the plurality of processing elements.
    Type: Grant
    Filed: July 1, 2017
    Date of Patent: October 15, 2019
    Assignee: Intel Corporation
    Inventors: Kermin Fleming, Kent D. Glossop, Simon C. Steely, Jr., Samantika S. Sury
  • Patent number: 10444812
    Abstract: Power consumption in a microprocessor platform is managed by setting a peak power level for power consumed by a multi-core microprocessor platform executing multi-threaded applications. The multi-core microprocessor platform contains a plurality of physical cores, and each physical core is configurable into a plurality of logical cores. A simultaneous multithreading level in at least one physical core is adjusted by changing the number of logical cores on that physical core in response to a power consumption level of the multi-core microprocessor platform exceeding the peak power level. Performance and power data based on simultaneous multi-threading levels are used in selecting the physical core to be adjusted.
    Type: Grant
    Filed: May 25, 2017
    Date of Patent: October 15, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Pradip Bose, Alper Buyuktosunoglu, Hubertus Franke, Priyanka Tembey, Dilma M. Da Silva
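    Illustrative sketch (not part of the patent record): a Python loop that steps down the SMT level of whichever physical core saves the most power whenever platform power exceeds the peak level, roughly in the spirit of the abstract. The SMT levels, power table, and selection heuristic are assumptions for illustration.
      SMT_LEVELS = (1, 2, 4, 8)   # logical cores that can be enabled per physical core

      def enforce_power_cap(cores, power_per_core, peak_power):
          """cores maps core id -> current SMT level; power_per_core maps (core, level) -> watts."""
          def total():
              return sum(power_per_core[(c, lvl)] for c, lvl in cores.items())
          while total() > peak_power:
              candidates = {c: lvl for c, lvl in cores.items() if lvl > SMT_LEVELS[0]}
              if not candidates:
                  break                      # nothing left to throttle
              def saving(c):
                  lower = SMT_LEVELS[SMT_LEVELS.index(cores[c]) - 1]
                  return power_per_core[(c, cores[c])] - power_per_core[(c, lower)]
              victim = max(candidates, key=saving)   # core whose step down saves the most
              cores[victim] = SMT_LEVELS[SMT_LEVELS.index(cores[victim]) - 1]
          return cores

      power = {(c, l): 10 + 3 * l for c in ("c0", "c1") for l in SMT_LEVELS}
      print(enforce_power_cap({"c0": 8, "c1": 4}, power, peak_power=45))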
  • Patent number: 10437644
    Abstract: Systems and methods provide an extensible, multi-stage, real-time, processing-load-adaptive manycore data processing architecture shared dynamically among instances of parallelized and pipelined application software programs, according to processing load variations of said programs and their tasks and instances, as well as contractual policies. The techniques simultaneously provide application software development productivity, by presenting to software a simple, static virtual view of the dynamically allocated and assigned processing hardware resources; high program runtime performance, through scalable pipelined and parallelized program execution with minimized overhead; and high resource efficiency, through adaptively optimized processing resource allocation.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: October 8, 2019
    Assignee: ThroughPuter, Inc.
    Inventor: Mark Henrik Sandstrom
  • Patent number: 10437589
    Abstract: The present invention provides a system capable of properly controlling the switching of the operation state of each of a plurality of arithmetic processing resources according to an increase or a decrease in an arithmetic processing load. A distributed processing control system 10 includes a load estimation unit 11 that estimates an estimation arithmetic processing load at a first point of time in a future from a reference point of time, and a state control unit 12 that starts the processing for switching the operation state of an arithmetic processing resource Sj so as to satisfy a first condition, in which the estimated arithmetic processing load is included in a predetermined range of an estimation processing capacity of the arithmetic processing resource Sj expected to be activated at the first point of time.
    Type: Grant
    Filed: May 18, 2017
    Date of Patent: October 8, 2019
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Go Nakamoto, Shuichiro Shinkai, Shuji Kikuchi
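    Illustrative sketch (not part of the patent record): a Python toy that extrapolates the processing load to a future point of time and decides how many processing resources to have activated by then. The linear extrapolation and the headroom factor are assumptions, not the patent's estimation method.
      def estimate_load(history, horizon):
          """Rough linear extrapolation of load `horizon` steps past the last observation."""
          if len(history) < 2:
              return history[-1]
          slope = history[-1] - history[-2]
          return history[-1] + slope * horizon

      def resources_to_activate(history, horizon, capacity_per_resource, headroom=0.2):
          predicted = estimate_load(history, horizon)
          # Activate enough resources that the prediction sits inside their capacity with headroom.
          needed = 0
          while needed * capacity_per_resource * (1 - headroom) < predicted:
              needed += 1
          return needed

      # Load has been rising; start switching on resources ahead of the first point of time.
      print(resources_to_activate([40, 55, 70], horizon=2, capacity_per_resource=50))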
  • Patent number: 10430322
    Abstract: Embodiments of the invention include systems for testing pre and post system call exits. Aspects include executing a first test case that comprises system calls, where the first test case initializes a common buffer and stores system call parameters for each of the system calls. A monitoring test case is executed comprising a pre-exit instruction that is inserted before each system call in the first test case and a post-exit instruction that is inserted after each of the system calls in the first test case. Execution of the pre-exit instruction is determined prior to an execution of each system call. A first bit location in the common buffer is set to one based on determining that the pre-exit instruction executes. The system call is executed and execution of the post-exit instruction is determined. A second bit location in the common buffer is set to one based on determining that the post-exit instruction executes.
    Type: Grant
    Filed: October 30, 2017
    Date of Patent: October 1, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Dominic DeMarco, Christopher Loers, Alexander Smith, Brad D. Stilwell
  • Patent number: 10423902
    Abstract: A processor extracts a first file name from a file path indicating a storage location of a first file used when launching a first job. The processor calculates, for each of a plurality of second jobs that have been executed, a similarity based on a comparison between partial character strings included in the first file name and partial character strings included in a second file name corresponding to a second file that was used when launching the second job. The processor acquires, from history information indicating an actual power consumption of each of the plurality of second jobs and in accordance with the calculated similarity, the actual power consumption of at least one second job and estimates power consumption of the first job based on the acquired actual power consumption.
    Type: Grant
    Filed: September 7, 2017
    Date of Patent: September 24, 2019
    Assignee: FUJITSU LIMITED
    Inventor: Jun Moroo
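    Illustrative sketch (not part of the patent record): a Python example that estimates a new job's power consumption as a similarity-weighted average over the most similar previously executed jobs, with similarity computed between launch-file names. The use of difflib.SequenceMatcher and the top-k weighting are assumptions standing in for the patent's partial-character-string comparison.
      from difflib import SequenceMatcher
      from os.path import basename

      # Hypothetical history: launch-file name -> measured power consumption (watt-hours).
      history = {"sim_weather_v1.sh": 120.0, "sim_weather_v2.sh": 130.0, "render_movie.sh": 40.0}

      def estimate_power(file_path, top_k=2):
          name = basename(file_path)                # first file name from the file path
          scored = sorted(history.items(),
                          key=lambda kv: SequenceMatcher(None, name, kv[0]).ratio(),
                          reverse=True)[:top_k]
          weights = [SequenceMatcher(None, name, n).ratio() for n, _ in scored]
          # Similarity-weighted average of the actual power of the most similar past jobs.
          return sum(w * p for w, (_, p) in zip(weights, scored)) / sum(weights)

      print(estimate_power("/jobs/sim_weather_v3.sh"))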
  • Patent number: 10416751
    Abstract: A power saving mode control method and device for multiple operating systems include: setting corresponding power saving modes for each of the multiple operating systems in advance; and determining an operating system of which a power saving mode is triggered, and causing the operating system of which the power saving mode is triggered to enter the corresponding power saving mode.
    Type: Grant
    Filed: February 28, 2015
    Date of Patent: September 17, 2019
    Assignee: Yulong Computer Telecommunication Scientific (Shenzhen) Co., Ltd.
    Inventors: Zhouhua Cui, Zhengbo Fang, Wei Li
  • Patent number: 10419532
    Abstract: In accordance with an embodiment, described herein is a system and method for providing an asynchronous architecture in a server with an existing synchronous architecture. The system can include a keep-alive subsystem and a user-level request context switching application programming interface (API). A plurality of connections can be received at the keep-alive subsystem, and each connection can be assigned a request context configured to be executed in the keep-alive subsystem. When a connection being executed by a thread is blocked for I/O, the request context assigned to the connection can be saved, and the request context assigned to another connection can be restored to be executed by the thread. Resources associated with an idle connection can be placed in a pool for reuse by other connections. The system can provide an asynchronous architecture in the server without changing existing code and functionalities of the existing synchronous architecture.
    Type: Grant
    Filed: March 16, 2017
    Date of Patent: September 17, 2019
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventor: Suresh Warrier
  • Patent number: 10416750
    Abstract: Disclosed is a method and apparatus for power-efficiently processing sensor data. In one embodiment, the operations implemented include: configuring a sensor fusion engine and a peripheral controller with a general purpose processor; placing the general purpose processor into a low-power sleep mode; reading data from a sensor and storing the data into a companion memory with the peripheral controller; processing the data in the companion memory with the sensor fusion engine; and awaking the general purpose processor from the low-power sleep mode.
    Type: Grant
    Filed: September 26, 2014
    Date of Patent: September 17, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Justin Black, Rashmi Kulkarni, Leonid Sheynblat
  • Patent number: 10409695
    Abstract: Recovery of a database system by taking the database system offline is initiated. Thereafter, recovery operations specified by a redo log of the database system are replayed. During such replay, updates to pages implicated by the recovery operations are blocked. In parallel to such blocking, modified pages are adaptively flushed to physical disk storage using a factor that is based on a number of pages written to the physical disk storage and a number of write I/O operations as part of the flushing of the modified pages. Subsequently, the database system is brought online after all of the recovery operations are replayed.
    Type: Grant
    Filed: April 25, 2017
    Date of Patent: September 10, 2019
    Assignee: SAP SE
    Inventor: Dirk Thomsen
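    Illustrative sketch (not part of the patent record): a Python fragment showing one possible reading of the adaptive-flush factor, computed from the number of pages written and the number of write I/O operations, and used to size the next batch of modified pages flushed during replay. The batch-sizing formula and limits are assumptions for illustration.
      def flush_factor(pages_written, write_ios):
          # Average pages per write I/O; a higher value suggests flushing can be more aggressive.
          return pages_written / write_ios if write_ios else 0.0

      def flush_modified_pages(dirty_pages, pages_written, write_ios, batch_min=8, batch_max=512):
          factor = flush_factor(pages_written, write_ios)
          batch = max(batch_min, min(batch_max, int(factor * batch_min)))
          flushed = dirty_pages[:batch]
          # In a real engine these pages would be written to physical disk storage here.
          return flushed, dirty_pages[batch:]

      dirty = list(range(1000))
      flushed, remaining = flush_modified_pages(dirty, pages_written=4096, write_ios=64)
      print(len(flushed), "pages flushed,", len(remaining), "still dirty")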
  • Patent number: 10404787
    Abstract: An apparatus in one embodiment comprises at least one processing device having a processor coupled to a memory. The processing device is configured to initiate distributed data streaming computations across data processing clusters associated with respective data zones, and in each of the data processing clusters, to separate a data stream provided by a data source of the corresponding data zone into a plurality of data batches and process the data batches to generate respective result batches. Multiple ones of the data batches across the data processing clusters are associated with a global data batch data structure, and multiple ones of the result batches across the data processing clusters are associated with a global result batch data structure based at least in part on the global data batch data structure. The result batches are processed in accordance with the global result batch data structure to generate one or more global result streams.
    Type: Grant
    Filed: August 22, 2017
    Date of Patent: September 3, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Patricia Gomes Soares Florissi, Ofri Masad
  • Patent number: 10402235
    Abstract: A computer-implemented method and computer processing system are provided. The method includes synchronizing, by a processor, respective ones of a plurality of data parallel workers with respect to an iterative distributed machine learning process. The synchronizing step includes individually continuing, by the respective ones of the plurality of data parallel workers, from a current iteration to a subsequent iteration of the iterative distributed machine learning process, responsive to a satisfaction of a predetermined condition thereby. The predetermined condition includes individually sending a per-receiver notification from each sending one of the plurality of data parallel workers to each receiving one of the plurality of data parallel workers, responsive to a sending of data there between. The predetermined condition further includes individually sending a per-receiver acknowledgement from the receiving one to the sending one, responsive to a consumption of the data thereby.
    Type: Grant
    Filed: May 15, 2018
    Date of Patent: September 3, 2019
    Assignee: NEC CORPORATION
    Inventors: Asim Kadav, Erik Kruus
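    Illustrative sketch (not part of the patent record): a small Python model of the per-receiver notification and per-receiver acknowledgement condition that gates each data parallel worker's move to the next iteration. The in-process method calls stand in for network messages, and the class and method names are invented.
      class Worker:
          def __init__(self, name, peers):
              self.name = name
              self.peers = peers              # names of every other data parallel worker
              self.notified = set()           # peers that told us they sent data this iteration
              self.acked = set()              # peers that acknowledged consuming our data

          def send_data(self, receiver, payload):
              receiver.on_data(self, payload) # per-receiver notification rides with the data

          def on_data(self, sender, payload):
              self.notified.add(sender.name)
              # ... consume payload ...
              sender.on_ack(self)             # per-receiver acknowledgement after consumption

          def on_ack(self, receiver):
              self.acked.add(receiver.name)

          def may_advance(self):
              # Continue to the next iteration only once all peers have notified and acked.
              return self.notified == self.peers and self.acked == self.peers

      a, b = Worker("a", {"b"}), Worker("b", {"a"})
      a.send_data(b, "gradients"); b.send_data(a, "gradients")
      print(a.may_advance(), b.may_advance())   # True True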
  • Patent number: 10404825
    Abstract: Refresh requests are received by a data source, each requesting a snapshot of the current members of one of a plurality of dynamically changing groups and the dynamically changing rules corresponding to such group. Thereafter, the data source queues the received refresh requests for selective execution or deletion in a new request queue. In addition, real-time execution of refresh jobs is initiated for all queued refresh requests if the number of refresh requests in both the new request queue and a waiting requests queue is below a pre-defined threshold. Alternatively, a job framework schedules execution of task jobs for a subset of the queued requests in the new request queue and the waiting requests queue if certain conditions are met.
    Type: Grant
    Filed: December 7, 2016
    Date of Patent: September 3, 2019
    Assignee: SAP SE
    Inventors: Jia Feng, Edward Lu, Jessica Yang, Zonghan Wu, Ruibin Zhang, Fangling Liu, Xuejian Qiao, Yan Fan
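    Illustrative sketch (not part of the patent record): a Python toy of the two-queue idea, running queued refreshes in real time while the combined queue depth stays below a threshold and otherwise deferring them to scheduled jobs. The threshold value and function names are assumptions.
      from collections import deque

      THRESHOLD = 4                       # below this, queued refreshes run immediately
      new_requests, waiting_requests = deque(), deque()

      def submit(refresh_request):
          # Queue the snapshot request for selective execution or deletion.
          new_requests.append(refresh_request)

      def process_queues():
          total = len(new_requests) + len(waiting_requests)
          if total < THRESHOLD:
              # Light load: run every queued refresh in real time.
              while new_requests or waiting_requests:
                  q = new_requests if new_requests else waiting_requests
                  print("refreshing group", q.popleft(), "in real time")
          else:
              # Heavy load: hand the work to the job framework as scheduled task jobs.
              while new_requests:
                  waiting_requests.append(new_requests.popleft())
              print(len(waiting_requests), "refreshes deferred to scheduled jobs")

      for group in ("sales", "hr", "eng", "ops", "legal"):
          submit(group)
      process_queues()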
  • Patent number: 10402424
    Abstract: Data can be processed in parallel across a cluster of nodes using a parallel processing framework. Using Web services calls between components allows the number of nodes to be scaled as necessary, and allows developers to build applications on the framework using a Web services interface. A job scheduler works together with a queuing service to distribute jobs to nodes as the nodes have capacity, such that jobs can be performed in parallel as quickly as the nodes are able to process the jobs. Data can be loaded efficiently across the cluster, and levels of nodes can be determined dynamically to process queries and other requests on the system.
    Type: Grant
    Filed: November 4, 2016
    Date of Patent: September 3, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Gavindaswamy Bacthavachalu, Peter Grant Gavares, Ahmed A. Badran, James E. Scharf, Jr.
  • Patent number: 10402234
    Abstract: A computer-implemented method and computer processing system are provided. The method includes synchronizing, by a processor, respective ones of a plurality of data parallel workers with respect to an iterative process. The synchronizing step includes individually continuing, by the respective ones of the plurality of data parallel workers, from a current iteration to a subsequent iteration of the iterative process, responsive to a satisfaction of a predetermined condition thereby. The predetermined condition includes individually sending a per-receiver notification from each sending one of the plurality of data parallel workers to each receiving one of the plurality of data parallel workers, responsive to a sending of data there between. The predetermined condition further includes individually sending a per-receiver acknowledgement from the receiving one to the sending one, responsive to a consumption of the data thereby.
    Type: Grant
    Filed: April 6, 2017
    Date of Patent: September 3, 2019
    Assignee: NEC CORPORATION
    Inventors: Asim Kadav, Erik Kruus
  • Patent number: 10402226
    Abstract: A system for processing media on a resource-restricted device includes a memory to store data representing media assets and associated descriptors, along with program instructions representing an application and a media processing system, and a processor to execute the program instructions. The program instructions cause the media processing system, in response to a call from an application defining a plurality of services to be performed on an asset, to determine a tiered schedule of processing operations to be performed upon the asset based on a processing budget associated therewith, and to iteratively execute the processing operations on a tier-by-tier basis, unless interrupted.
    Type: Grant
    Filed: June 3, 2016
    Date of Patent: September 3, 2019
    Assignee: Apple Inc.
    Inventors: Albert Keinath, Ke Zhang, Yunfei Zheng, Shujie Liu, Jiefu Zhai, Chris Y. Chung, Xiaosong Zhou, Hsi-Jung Wu
  • Patent number: 10394602
    Abstract: A method at a computing device having a plurality of concurrently operative operating systems, the method comprising: operating a proxy process within a target operating system on the computing device; receiving, from an originating operating system, a request for resources from a target process within the target operating system at the proxy process; requesting, from the proxy process, the resources of the target process; and returning a handle to the target process from the proxy process to the originating operating system.
    Type: Grant
    Filed: May 29, 2014
    Date of Patent: August 27, 2019
    Assignees: BlackBerry Limited, 2236008 Ontario Inc.
    Inventors: Ravi Singh, Daniel Jonas Major, Sivakumar Nagarajan, Kevin Goodman
  • Patent number: 10394621
    Abstract: A computer readable medium and method for providing checkpointing to Windows application groups. The checkpointing may be triggered asynchronously using Asynchronous Procedure Calls. The computer readable medium includes computer-executable instructions for execution by a processing system. The computer-executable instructions may be for reviewing one or more command line arguments to determine whether to start at least one of the application groups, and when determining to start the at least one of the application groups, creating a process table in a shared memory to store information about each process of the at least one of the application groups. Further, the instructions may be for registering with a kernel module to create an application group barrier, creating a named pipe for applications of the application group to register and unregister, triggering a checkpoint thread to initiate an application group checkpoint; and launching an initial application of the applications of the application group.
    Type: Grant
    Filed: January 10, 2017
    Date of Patent: August 27, 2019
    Assignee: OPEN INVENTION NETWORK LLC
    Inventors: Keith Richard Backensto, Allan Havemose
  • Patent number: 10394609
    Abstract: Methods are provided for data processing in a multi-threaded processing arrangement. The methods include receiving a data processing task to be executed on data including a plurality of data records, the data having an associated record description including information relating to parameters or attributes of the plurality of data records. Based on the received data processing task, the record description is analyzed to determine an indication of expected workload for the data records. Further, the data is divided into a plurality of data sets. Based on the determined indication of expected workload for the data records, the data sets are allocated processing threads for parallel processing by a multi-threaded processing arrangement.
    Type: Grant
    Filed: October 24, 2017
    Date of Patent: August 27, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Steven J. Horsman, Samuel J. Smith
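    Illustrative sketch (not part of the patent record): a Python example that derives an expected workload per record from a record description, divides the records into data sets with roughly balanced expected work, and hands the sets to processing threads. The cost model, greedy partitioning, and field names are assumptions for illustration.
      from concurrent.futures import ThreadPoolExecutor

      # Hypothetical record description: expected cost per record, derived from its attributes.
      def expected_cost(record):
          return len(record["payload"]) * (2 if record["compressed"] else 1)

      def partition(records, n_sets):
          # Greedy: put each record into the currently lightest data set.
          sets = [{"records": [], "cost": 0} for _ in range(n_sets)]
          for rec in sorted(records, key=expected_cost, reverse=True):
              lightest = min(sets, key=lambda s: s["cost"])
              lightest["records"].append(rec)
              lightest["cost"] += expected_cost(rec)
          return sets

      def process(data_set):
          return sum(expected_cost(r) for r in data_set["records"])

      records = [{"payload": "x" * n, "compressed": n % 2 == 0} for n in (5, 40, 12, 33, 7)]
      with ThreadPoolExecutor(max_workers=3) as pool:
          print(list(pool.map(process, partition(records, 3))))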
  • Patent number: 10394655
    Abstract: A method for detecting an abnormal application and a terminal are provided. The method includes: detecting a system event of a mobile terminal; reading process information if the system event is a triggering system event; determining whether a restarting process exists according to the process information; recording the number of restarts of the restarting process in a preset period; and determining that an application corresponding to the restarting process is abnormal if the number of restarts is greater than a preset threshold. The method has a wide application scope for process detection, and may reduce energy consumption and resource occupancy while improving efficiency of abnormal application detection.
    Type: Grant
    Filed: September 2, 2015
    Date of Patent: August 27, 2019
    Assignee: BEIJING KINGSOFT INTERNET SECURITY SOFTWARE CO., LTD.
    Inventors: Shanglun Ding, Kangzong Zhang, Chao Xiao, Yaxiong Zhang
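    Illustrative sketch (not part of the patent record): a short Python routine that counts restarts of a process within a sliding preset period and flags the application as abnormal once the count exceeds a threshold. The window length and threshold are example values, not those from the patent.
      import time
      from collections import defaultdict

      RESTART_THRESHOLD = 3
      WINDOW_SECS = 60.0

      restarts = defaultdict(list)      # process name -> restart timestamps

      def on_process_restart(name, now=None):
          now = time.time() if now is None else now
          stamps = restarts[name]
          stamps.append(now)
          # Keep only restarts inside the preset period.
          restarts[name] = [t for t in stamps if now - t <= WINDOW_SECS]
          return len(restarts[name]) > RESTART_THRESHOLD   # True -> application is abnormal

      base = 1000.0
      for i in range(5):
          abnormal = on_process_restart("com.example.app", now=base + i)
      print("abnormal:", abnormal)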
  • Patent number: 10387207
    Abstract: Methods are provided for data processing in a multi-threaded processing arrangement. The methods include receiving a data processing task to be executed on data including a plurality of data records, the data having an associated record description including information relating to parameters or attributes of the plurality of data records. Based on the received data processing task, the record description is analyzed to determine an indication of expected workload for the data records. Further, the data is divided into a plurality of data sets. Based on the determined indication of expected workload for the data records, the data sets are allocated processing threads for parallel processing by a multi-threaded processing arrangement.
    Type: Grant
    Filed: December 6, 2016
    Date of Patent: August 20, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Steven J. Horsman, Samuel J. Smith
  • Patent number: 10387076
    Abstract: The invention introduces a method for scheduling data-programming tasks, performed by a processing unit, including at least the following steps. At least one task of an (i+1)-th batch is performed between directing an engine to perform a task of an i-th batch and reception of an outcome of the task of the i-th batch.
    Type: Grant
    Filed: January 11, 2017
    Date of Patent: August 20, 2019
    Assignee: Silicon Motion, Inc.
    Inventor: Shen-Ting Chiu
  • Patent number: 10389598
    Abstract: In one embodiment, a system has host machines forming a cluster. Each host machine runs containers, where each container includes a segment of hardware resources associated with the host machine, a segment of an operating system utilized by the host machine, and at least one application. Host agents operate on the host machines. Each host agent collects operational parameters associated with the containers on each host machine. A management platform is operative to divide the cluster into container pools, where each container pool includes a sub-set of computation resources in the cluster and has associated container pool metrics including a priority level and computation resource limits. Operational parameters are collected from the host agents. The operational parameters are evaluated in accordance with the container pool metrics.
    Type: Grant
    Filed: October 27, 2016
    Date of Patent: August 20, 2019
    Assignee: Cisco Technology, Inc.
    Inventors: Pradeep Padala, Selvi Kadirvel, Himanshu Raj, Kiran Kamity
  • Patent number: 10387005
    Abstract: A monitoring apparatus in an electric power system includes a communication unit for performing communication with a data server included in the electric power system to receive real-time data, a user input unit for receiving user input for creating a monitoring screen with the real-time data by using a tabular function, and a display unit for displaying the monitoring screen. The apparatus further includes a control unit for creating the monitoring screen in response to the user input and displaying it on the display unit, for identifying, among a plurality of tasks for performing the tabular function, at least one task that takes more than a given time to process, and for processing the at least one task and the remaining tasks of the plurality of tasks simultaneously in parallel.
    Type: Grant
    Filed: November 15, 2016
    Date of Patent: August 20, 2019
    Assignee: LSIS CO., LTD.
    Inventors: Seok-Chan Lee, Myung-Hwan Lee, Seung-Ju Lee, Yeo-Chang Yoon
  • Patent number: 10387195
    Abstract: An apparatus, computer-readable medium, and computer-implemented method for performing a data exchange, including receiving tasks for execution, generating an execution plan for executing the tasks on a plurality of nodes, the execution plan comprising one or more data exchanges, each data exchange comprising at least one stream, and each stream identifying a producer task and a consumer task and being configured to transmit output of the producer task as input to the consumer task, executing one or more producer tasks on one or more first nodes in the plurality of nodes based at least in part on the execution plan, and transmitting an output of the one or more producer tasks from the one or more first nodes to one or more streams of the data exchange via a stream application programming interface (API).
    Type: Grant
    Filed: November 23, 2016
    Date of Patent: August 20, 2019
    Assignee: Informatica LLC
    Inventors: Salim Achouche, Udaya Bhaskar Yalamanchi, Nisheedh Raveendran
  • Patent number: 10387319
    Abstract: Systems, methods, and apparatuses relating to a configurable spatial accelerator are described. In one embodiment, a processor includes a plurality of processing elements; and an interconnect network between the plurality of processing elements to receive an input of a dataflow graph comprising a plurality of nodes, wherein the dataflow graph is to be overlaid into the interconnect network and the plurality of processing elements with each node represented as a dataflow operator in the plurality of processing elements, and the plurality of processing elements is to perform an operation when an incoming operand set arrives at the plurality of processing elements. The processor also includes a streamer element to prefetch the incoming operand set from two or more levels of a memory system.
    Type: Grant
    Filed: July 1, 2017
    Date of Patent: August 20, 2019
    Assignee: Intel Corporation
    Inventors: Michael C. Adler, Chiachen Chou, Neal C. Crago, Kermin Fleming, Kent D. Glossop, Aamer Jaleel, Pratik M. Marolia, Simon C. Steely, Jr., Samantika S. Sury
  • Patent number: 10379900
    Abstract: A method, and associated system and computer program product, for dispatching two or more jobs for execution in a computing system including processors configured to execute the jobs in parallel. Each processor is associated with a corresponding queue having a queue size equal to a maximum number of jobs that may be in the queue. A new job requested for execution is assigned to a current class. An indication is retrieved of a last processor of the processors of the current class to which a last job of jobs of the current class has been submitted for execution. An indication is retrieved of a delta number of the jobs submitted for execution to the last processor of the current class after the last job of the current class. The new job for execution is submitted to a last processor of the current class or a selected processor.
    Type: Grant
    Filed: July 29, 2016
    Date of Patent: August 13, 2019
    Assignee: International Business Machines Corporation
    Inventors: Giulia Carnevale, Marzia E. Castellani, Marco Gianfico, Roberto Ragusa
  • Patent number: 10382489
    Abstract: Technologies for privacy-safe security policy evaluation include a cloud analytics server, a trusted data access mediator (TDAM) device, and one or more client devices. The cloud analytics server curries a security policy function to generate a privacy-safe curried function set. The cloud analytics server requests parameter data from the TDAM device, which collects the parameter data, identifies sensitive parameter data, encrypts the sensitive parameter data, and transmits the encrypted sensitive parameter data to the cloud analytics server. The cloud analytics server evaluates one or more curried functions using non-sensitive parameters to generate one or more sensitive functions that each take a sensitive parameter. The cloud analytics server transmits the sensitive functions and the encrypted sensitive parameters to a client computing device, which decrypts the encrypted sensitive parameters and evaluates the sensitive functions with the sensitive parameters to return a security policy.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: August 13, 2019
    Assignee: McAfee, LLC
    Inventors: Sudeep Das, Rajesh Poornachandran, Ned M. Smith, Vincent J. Zimmer, Pramod Sharma, Arthur Zeigler, Sumant Vashisth, Simon Hunt
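    Illustrative sketch (not part of the patent record): a Python example of the core currying step, in which the server evaluates the non-sensitive parameters of a policy function and ships the remaining "sensitive function" to the client, which supplies the sensitive parameters. functools.partial stands in for currying, and the encryption and key handling described in the abstract are omitted.
      from functools import partial

      def policy(device_type, os_version, user_id, location):
          # Full security policy: all four parameters are needed for a decision.
          return os_version >= 12 and location in ("office", "home") and not user_id.startswith("guest")

      # The analytics server binds the non-sensitive parameters it is allowed to see...
      sensitive_function = partial(policy, "laptop", 14)   # device_type, os_version are non-sensitive

      # ...and the client evaluates the result with the sensitive parameters it holds.
      print(sensitive_function("alice", "office"))   # True
      print(sensitive_function("guest-1", "cafe"))   # False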
  • Patent number: 10372510
    Abstract: A technique for short-circuiting normal read-copy update (RCU) grace period computations in the presence of expedited RCU grace periods. Both normal and expedited RCU grace period processing may be periodically performed to respectively report normal and expedited quiescent states on behalf of CPUs in a set of CPUs until all of the CPUs have respectively reported normal or expedited quiescent states so that the normal and expedited grace periods may be respectively ended. The expedited grace periods are of shorter duration than the normal grace periods. Responsive to a condition indicating that the normal RCU grace period processing can be short-circuited by the expedited RCU grace period processing, the expedited RCU grace period processing may report both expedited quiescent states and normal quiescent states on behalf of the same CPUs in the set of CPUs.
    Type: Grant
    Filed: March 15, 2017
    Date of Patent: August 6, 2019
    Inventor: Paul E. McKenney
  • Patent number: 10372485
    Abstract: An information processing system that includes at least one information processing apparatus and executes programs, each of which performs a predetermined process, the information processing system including a memory unit configured to store, for each of applications performing a sequence of processes using electronic data, program identification information identifying at least one program performing each process of the sequence of processes, flow information defining an execution order of the at least one program, and app identification information identifying each of the applications, while associating the program identification information, the flow information, and the app identification information, a registering unit, and a process executing unit, in receipt of a request including information related to the electronic data and the app identification information from a second apparatus, configured to cause the program identified by the program identification information in accordance with the execution
    Type: Grant
    Filed: October 13, 2016
    Date of Patent: August 6, 2019
    Assignee: Ricoh Company, Ltd.
    Inventors: Kohsuke Namihira, Yuuichiroh Hayashi, Kazunori Sugimura, Hikaru Kominami, Zhi Min, Dongzhe Zhang, Ryutaro Sakanashi
  • Patent number: 10372501
    Abstract: Provisioning of computing resources for a workload in a networked computing environment. A method receives a set of provisioning requests in a computer data structure of a networked computing environment, wherein the provisioning requests relate to a workload request. The method further identifies a set of provisioning operations for computing resources in the networked computing environment to perform the provisioning requests. The method determines, for each provisioning operation of the provisioning operations, a respective provisioning time, the respective provisioning time being an amount of time for a particular computing resource to become prepared and equipped to perform a job. The determining the respective provisioning time for each provisioning operation provides a plurality of determined provisioning times.
    Type: Grant
    Filed: November 16, 2016
    Date of Patent: August 6, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael J. Shaffer, Kevin B. Smith, David R. Waddling
  • Patent number: 10372611
    Abstract: Modifying prefetch request processing. A prefetch request is received by a local computer from a remote computer. The local computer responds to a determination that execution of the prefetch request is predicted to cause an address conflict during an execution of a transaction of the local processor by comparing a priority of the prefetch request with a priority of the transaction. Based on a result of the comparison, the local computer modifies program instructions that govern execution of the program instructions included in the prefetch request to include program instructions to perform one or both of: (i) a quiesce of the prefetch request prior to execution of the prefetch request, and (ii) a delay in execution of the prefetch request for a predetermined delay period.
    Type: Grant
    Filed: October 30, 2018
    Date of Patent: August 6, 2019
    Assignee: International Business Machines Corporation
    Inventors: Michael Karl Gschwind, Valentina Salapura, Chung-Lung K. Shum
  • Patent number: 10374893
    Abstract: A device may receive information identifying a plurality of requests and identifying a plurality of targets for the plurality of requests. The device may generate respective routes for the plurality of targets, where a route, of the respective routes, for a target, of the plurality of targets, identifies a set of transformations to be applied to a corresponding request of the plurality of requests. The device may apply the respective routes to the plurality of requests to generate processed requests, and may communicate with at least one of the plurality of targets based on the processed requests. The device may receive results based on communicating with the at least one of the plurality of targets, wherein the results are based on the processed requests, and may provide information based on the results.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: August 6, 2019
    Assignee: Capital One Services, LLC
    Inventors: Gopi Kancharla, Nicky Joshi, Fredrick Crable
  • Patent number: 10373514
    Abstract: An apparatus displaying a job screen indicating a job procedure, has a management unit that manages a job and a plurality of processes, a monitor unit that monitors an operation of an operator, a recorder unit that records an address of a job, an instructing unit that retrieves an address in response to a notification, determines whether to update the retrieved address in response to a type of a notified operation, and stores the updated address when the retrieved address has been updated, a storage unit that stores the job screen, and a control unit that reads the job screen and controls the read job screen, wherein the management unit retrieves the updated address, and instructs the display control unit to display the job screen, and wherein the display control unit reads out the instructed job screen.
    Type: Grant
    Filed: January 6, 2012
    Date of Patent: August 6, 2019
    Assignee: FUJITSU LIMITED
    Inventors: Kazuki Kitamura, Mitsutaka Kaneki, Yuichiro Kitagawa, Yasuo Hatanaka, Hiroshi Kobayashi
  • Patent number: 10365947
    Abstract: A multi-core processor comprises a plurality of slave cores, the slave cores being without operating system kernel-related features, and the slave cores to execute respective instructions. A master core configures the operating system kernel-related features on behalf of the slave cores. The master core is to control usage of the operating system kernel-related features during execution of the instructions on the respective slave cores.
    Type: Grant
    Filed: July 28, 2014
    Date of Patent: July 30, 2019
    Assignee: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
    Inventor: Fred A. Sprague
  • Patent number: 10360070
    Abstract: An application-level thread dispatcher that operates in a main full-weight operating system-level thread allocated to an application initializes at least one application-level pseudo thread that operates as an application-controlled thread within the main full-weight operating system-level thread allocated to the application. The application-level thread dispatcher migrates work associated with the application between the at least one application-level pseudo thread and a separate operating system-level thread in accordance with evaluated changes in run-time performance of the application.
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: July 23, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Paul M. Cadarette, Robert D. Love, Austin J. Willoughby
  • Patent number: 10360138
    Abstract: Aspects of the present invention include a method, system and computer program product for automatically adjusting the workload of a test to match specific customer workload attributes in accordance with one or more embodiments of the present invention. The method includes a processor selecting one or more customer workload goals of a customer relating to a test of a software program; selecting one or more test workload goals of the test relating to the software program; selecting one or more test data points; determining one or more initial test workload activity levels; and performing a run of the test relating to the software program. The method further includes the processor comparing the selected one or more customer workload goals with the selected one or more test workload goals; and determining whether the selected one or more customer workload goals match with the selected one or more test workload goals.
    Type: Grant
    Filed: February 8, 2017
    Date of Patent: July 23, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Thomas W. Conti, Kyle R. Moser
  • Patent number: 10353730
    Abstract: The present invention relates to a method for running a virtual machine on a destination host node in a computer cluster, comprising the steps of requesting (S101) a set of target configuration parameters assigned to the virtual machine, wherein the target configuration parameters have prioritizations; requesting (S102) a set of actual configuration parameters of the destination host node for checking against the requested set of target configuration parameters; and running (S103) the virtual machine on the destination host node, if the set of actual configuration parameters of the destination host node falls within the set of target configuration parameters, wherein when selecting destination host node, the prioritization of the target configuration parameters is considered.
    Type: Grant
    Filed: February 12, 2015
    Date of Patent: July 16, 2019
    Assignee: Telefonaktiebolaget LM Ericsson (publ)
    Inventors: Manuel Buil, Claudia Mayntz, Matthias Schreiber, Marc Vorwerk
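    Illustrative sketch (not part of the patent record): a Python check of a destination host's actual configuration parameters against the VM's prioritized target parameters, rejecting the host only when a high-priority parameter is unsatisfied. The "must"/"nice" priority labels and the parameter names are invented for illustration.
      # Target parameters for the virtual machine, each with a priority.
      target = {
          "cpu_cores":  {"value": 8,    "priority": "must"},
          "numa_nodes": {"value": 2,    "priority": "nice"},
          "ssd":        {"value": True, "priority": "must"},
      }

      def host_matches(target, actual):
          for name, spec in target.items():
              ok = actual.get(name) == spec["value"] or (
                  isinstance(spec["value"], int) and actual.get(name, 0) >= spec["value"])
              if not ok and spec["priority"] == "must":
                  return False              # hard requirement violated: reject this host
          return True                       # all high-priority parameters satisfied

      host_a = {"cpu_cores": 16, "numa_nodes": 1, "ssd": True}
      host_b = {"cpu_cores": 4,  "numa_nodes": 2, "ssd": True}
      print(host_matches(target, host_a), host_matches(target, host_b))   # True False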
  • Patent number: 10353978
    Abstract: In one embodiment, a method includes receiving a plurality of uniform resource identifiers (URI's) associated with a particular domain. Each of the URI's identifies a content page comprising one or more signature elements. The method further includes, for each URI in the plurality of URI's, successively testing the URI to identify a core of the URI and any unnecessary elements of the URI. The core of the URI is sufficient to retrieve a version of the content page including all of its signature elements. The method additionally includes, for each URI in the plurality of URI's, updating a set of rules based on the identified core and the identified unnecessary elements. The set of rules establishes a normalized version of the URI.
    Type: Grant
    Filed: July 6, 2016
    Date of Patent: July 16, 2019
    Assignee: Facebook, Inc.
    Inventors: Gurpreetsingh Baljeetsingh Sachdev, Shashikant Khandelwal
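    Illustrative sketch (not part of the patent record): a Python routine that successively drops query parameters from a URI and keeps a parameter out only if the page still retrieves with its signature elements, yielding the URI core plus the list of unnecessary elements. The fetch_has_signature stub stands in for actually retrieving and checking the content page.
      from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

      def fetch_has_signature(uri):
          # Stand-in for retrieving the page and checking its signature elements.
          # Here the page only "works" if the article id parameter is present.
          return "id" in dict(parse_qsl(urlparse(uri).query))

      def find_core(uri):
          parsed = urlparse(uri)
          params = parse_qsl(parsed.query)
          core, unnecessary = list(params), []
          for param in params:
              trial = [p for p in core if p != param]
              candidate = urlunparse(parsed._replace(query=urlencode(trial)))
              if fetch_has_signature(candidate):      # page still intact without this parameter
                  core = trial
                  unnecessary.append(param[0])
          return urlunparse(parsed._replace(query=urlencode(core))), unnecessary

      print(find_core("https://example.com/article?id=42&utm_source=mail&session=abc"))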
  • Patent number: 10346206
    Abstract: A method, system, and computer program product, include determining a task resource consumption predicted for each of one or more tasks being executed on a node, wherein the task resource consumption is a function of time and predicting a node resource consumption of the node based at least on the predicted task resource consumption, wherein the node resource consumption is a function of time.
    Type: Grant
    Filed: August 27, 2016
    Date of Patent: July 9, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Liang Liu, Junmei Qu, ChaoQiang Zhu, Wei Zhuang
  • Patent number: 10346293
    Abstract: Embodiments of the invention include systems for testing pre and post system call exits. Aspects include executing a first test case that comprises system calls, where the first test case initializes a common buffer and stores system call parameters for each of the system calls. A monitoring test case is executed comprising a pre-exit instruction that is inserted before each system call in the first test case and a post-exit instruction that is inserted after each of the system calls in the first test case. Execution of the pre-exit instruction is determined prior to an execution of each system call. A first bit location in the common buffer is set to one based on determining that the pre-exit instruction executes. The system call is executed and execution of the post-exit instruction is determined. A second bit location in the common buffer is set to one based on determining that the post-exit instruction executes.
    Type: Grant
    Filed: October 4, 2017
    Date of Patent: July 9, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Dominic DeMarco, Christopher Loers, Alexander Smith, Brad D. Stilwell
  • Patent number: 10348621
    Abstract: Systems, methods, apparatus and computer-readable medium are described for improving efficiency and robustness for processing network packets at a network device, such as a customer premises equipment (CPE). The network device may include a plurality of physical network interfaces for receiving and transmitting network packets, and one or more processing entities. The one or more processing entities may provide a first router for providing routing functionality, wherein the first router is not virtualized, enable a virtual machine to execute a second router for providing routing functionality and forward a network packet using the first router or the second router from the device. The one or more processors may be configured to execute instructions associated with the first router from user space.
    Type: Grant
    Filed: October 28, 2015
    Date of Patent: July 9, 2019
    Assignee: AT&T Intellectual Property I, L.P.
    Inventor: Robert Bays
  • Patent number: 10348592
    Abstract: In accordance with embodiments of the present disclosure, a method may include, in response to an attempted execution of an executable endpoint, determining if the executable endpoint is unexpired, performing an endpoint operation of the executable endpoint if the endpoint is unexpired, after performance of the endpoint operation, determining if the executable endpoint has met a condition for expiration, and modifying metadata associated with the executable endpoint such that the executable endpoint is prevented from further attempted execution if the executable endpoint has met a condition for expiration.
    Type: Grant
    Filed: February 8, 2017
    Date of Patent: July 9, 2019
    Assignee: Dell Products L.P.
    Inventors: Prakash Nara, Sudhir Vittal Shetty
  • Patent number: 10338961
    Abstract: A method includes receiving, by a data processing apparatus, a plurality of file operation requests, each file operation request including a priority, a deadline, and an operation type and representing a request to perform an operation on at least one file maintained in a distributed file system; identifying, by the data processing apparatus, a group of file operation requests to be executed together from the plurality of file operation requests, the identification based at least in part on at least one of: the file operations in the group of file operations being directed to a same storage system, or file operations in the group of file operations sharing a common operation type; and sending a request to execute the group of file operation requests to a system configured to perform the group of file operation requests.
    Type: Grant
    Filed: September 15, 2016
    Date of Patent: July 2, 2019
    Assignee: Google LLC
    Inventors: Chi Ma, Kenneth J. Goldman, Yonggang Zhao, Stephen P. G. Gildea
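    Illustrative sketch (not part of the patent record): a Python example that groups file operation requests for joint execution and orders the groups by priority and deadline. As a simplification it groups by storage system and operation type together, whereas the abstract allows grouping on either criterion; the path-based storage_system helper is an assumption.
      from collections import defaultdict

      requests = [
          {"path": "/cell-a/logs/1", "op": "delete", "priority": 2, "deadline": 300},
          {"path": "/cell-a/logs/2", "op": "delete", "priority": 1, "deadline": 600},
          {"path": "/cell-b/img/7",  "op": "copy",   "priority": 3, "deadline": 120},
          {"path": "/cell-a/img/9",  "op": "copy",   "priority": 2, "deadline": 240},
      ]

      def storage_system(path):
          # Assume the first path component names the backing storage system.
          return path.split("/")[1]

      def group_requests(requests):
          groups = defaultdict(list)
          for req in requests:
              # Group operations directed to the same storage system that share an operation type.
              groups[(storage_system(req["path"]), req["op"])].append(req)
          # Execute the most urgent group first (highest priority, earliest deadline).
          return sorted(groups.values(),
                        key=lambda g: (-max(r["priority"] for r in g), min(r["deadline"] for r in g)))

      for group in group_requests(requests):
          print([r["path"] for r in group])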
  • Patent number: 10334036
    Abstract: Systems and methods are provided for managing server loads that accounts for various measures of risk associated with different workloads assigned to servers. The systems and methods may include a memory storing instructions for server load management operations, and a processor configured to execute the stored instructions. The processor may receive a workload, determine a value associated with the workload indicating a predetermined importance of the workload, receive information for a plurality of active servers in a server cluster associated with the processor, determine risk levels associated with the active servers based on the received information, and assign the received workload to one of the active servers based on the determined value and the determined risk levels.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: June 25, 2019
    Assignee: Capital One Services, LLC
    Inventors: Tao Tao, Santosh Bardwaj, Ii Sun Yoo, Yihui Tang, Jeremy Gerstle
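    Illustrative sketch (not part of the patent record): a Python toy that scores each active server's risk from received information and assigns a workload to a server based on the workload's importance value and those risk levels. The risk formula, the importance cutoff, and the server fields are assumptions for illustration.
      def risk_level(server):
          # Toy risk score from live server information (higher is riskier).
          return 0.6 * server["cpu_util"] + 0.4 * server["recent_faults"] / 10.0

      def assign(workload_value, servers):
          # High-value workloads go to the lowest-risk server; low-value ones tolerate more risk.
          ranked = sorted(servers, key=risk_level)
          index = 0 if workload_value >= 0.8 else len(ranked) // 2
          return ranked[index]["name"]

      servers = [
          {"name": "s1", "cpu_util": 0.9, "recent_faults": 4},
          {"name": "s2", "cpu_util": 0.3, "recent_faults": 0},
          {"name": "s3", "cpu_util": 0.5, "recent_faults": 1},
      ]
      print(assign(workload_value=0.95, servers=servers))   # most important -> least risky
      print(assign(workload_value=0.40, servers=servers))   # less important -> mid-risk server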