Patent Applications Published on November 17, 2016
-
Publication number: 20160335099
Abstract: A method for dynamically changing a fundamental attribute of an object in a game played using a system is provided. The system can include a system directory which includes a system file and a plurality of system layers including a transmit layer and a receive layer. Data signals are communicable from the transmit layer to the receive layer. The method can include identifying the object, the fundamental attribute of the object being associable with the system file when data signals communicated from the transmit layer are processed at the receive layer based on the system file. The method can further include inserting a configuration file into the system directory. The method can yet further include intercepting communication between the transmit layer and the receive layer so that the fundamental attribute of the identified object is modified based on the configuration file.
Type: Application
Filed: May 14, 2015
Publication date: November 17, 2016
Applicant: CREATIVE TECHNOLOGY LTD
Inventors: Wong Hoo SIM, Darran NATHAN
-
Publication number: 20160335100
Abstract: According to some aspects, a method of operating a data processing system is provided wherein at least one computer program is configured, the data processing system comprising at least a first control, a second control and a third control, the first, second and third controls comprising at least user interface portions and operational portions, the method comprising rendering a first user interface based on the user interface portion of the first control, receiving first user input through the first user interface, the first user input providing configuration information for the at least one program, identifying the second control based at least in part on the operational portion of the first control, rendering a second user interface based on the user interface portion of the identified second control, and receiving second user input through the second user interface, the second user input providing configuration information for the at least one program.
Type: Application
Filed: May 15, 2015
Publication date: November 17, 2016
Applicant: Ab Initio Software LLC
Inventor: Hugh F. Pyle
-
Publication number: 20160335101
Abstract: A method for configuring an interface unit of a computer system with a first processor and a second processor stored in the interface unit. A data link is set up between the first processor and the second processor. A peripheral of the computer system is configured to store input data in an input data channel and to read output data from an output data channel, and the second processor is configured to read the input data from the input data channel and to store output data in the output data channel. A sequence of processor commands for the second processor is created such that a number of subsequences is created.
Type: Application
Filed: May 11, 2016
Publication date: November 17, 2016
Applicant: dSPACE digital signal processing and control engineering GmbH
Inventors: Jochen SAUER, Robert LEINFELLNER, Matthias KLEMM, Thorsten BREHM, Robert POLNAU, Matthias SCHMITZ
-
Publication number: 20160335102
Abstract: A method performed in a processing unit for finding settings to be used in relation to a sensor unit connected to the processing unit is disclosed. The method comprises inter alia receiving, from the sensor unit, a first identifier identifying a type of the sensor unit, and a second identifier identifying a group of at least one related type of sensor unit. If no settings associated with the first identifier are stored in the processing unit, but settings associated with the second identifier are stored in the processing unit, the processing unit uses the settings associated with the second identifier in relation to the sensor unit.
Type: Application
Filed: July 25, 2016
Publication date: November 17, 2016
Applicant: AXIS AB
Inventors: Martin SANTESSON, Henrik FASTH, Magnus MÅRTENSSON, Joakim OLSSON
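The two-level lookup this abstract describes (exact sensor type first, then the related-type group) can be sketched as a simple fallback. The function name, identifiers, and settings values below are invented for illustration; they are not taken from the patent.

```python
# Hypothetical sketch: prefer settings stored for the exact sensor type,
# fall back to settings stored for its group, else report no settings.
def find_settings(stored, type_id, group_id):
    """Return settings for type_id if present, else for group_id, else None."""
    if type_id in stored:
        return stored[type_id]
    if group_id in stored:
        return stored[group_id]
    return None

# No settings for the exact type, so the group settings are used.
stored = {"group-thermal": {"fps": 9, "palette": "iron"}}
print(find_settings(stored, "sensor-T42", "group-thermal"))
```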
-
Publication number: 20160335103
Abstract: An embodiment of the invention provides a method for changing a multi-processor system from a performance mode to a safety mode while the system continues to run software. When an external event or exception occurs, context is switched from the performance mode to the safety mode. After context is switched, at least one pair of CPUs is synchronized to operate in the safety mode. In addition, a multi-processor system may be switched from the safety mode to the performance mode while the software continues to operate.
Type: Application
Filed: August 1, 2016
Publication date: November 17, 2016
Inventor: Alexandre Pierre Palus
-
Publication number: 20160335104
Abstract: Methods, systems, and computer program products are included to provide a universal database driver, into which one or more driver implementations may be loaded. The universal database driver communicates with one or more databases using the appropriate driver implementation for each database. A driver manager is provided that requests driver implementations corresponding to the databases, and loads the driver implementations into the universal database driver.
Type: Application
Filed: May 11, 2015
Publication date: November 17, 2016
Inventors: Filip Elias, Filip Nguyen
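The "universal driver" pattern described above can be sketched as a facade that routes each call to whichever implementation was loaded for that database. All class and method names below are invented stand-ins, not the patent's API or any real driver interface.

```python
# Loose sketch: implementations are loaded into one facade, which then
# dispatches each query to the implementation registered for that database.
class UniversalDriver:
    def __init__(self):
        self._impls = {}

    def load(self, db_name, impl):
        self._impls[db_name] = impl  # driver manager loads an implementation

    def query(self, db_name, sql):
        return self._impls[db_name].query(sql)  # route to the right impl

class FakePostgresImpl:
    def query(self, sql):
        return f"postgres:{sql}"

class FakeMysqlImpl:
    def query(self, sql):
        return f"mysql:{sql}"

driver = UniversalDriver()
driver.load("pg", FakePostgresImpl())
driver.load("my", FakeMysqlImpl())
print(driver.query("pg", "SELECT 1"))  # → postgres:SELECT 1
```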
-
Publication number: 20160335105
Abstract: Performing server virtual machine image migration and dependent server virtual machine image discovery in parallel is provided. Migration of a server virtual machine image that performs a workload is started to a client device via a network and, in parallel, the identity of a set of dependent server virtual machine images corresponding to the server virtual machine image being migrated is continuously discovered. In response to discovering the identity of the set of dependent server virtual machine images, a server migration pattern of the discovered set of dependent server virtual machine images is generated for the workload. A level of risk corresponding to migrating each dependent server virtual machine image of the discovered set of dependent server virtual machine images to the client device is calculated based on the server migration pattern of the discovered set of dependent server virtual machine images for the workload.
Type: Application
Filed: June 19, 2015
Publication date: November 17, 2016
Inventors: Nikolaos Anerousis, Kun Bai, Hubertus Franke, Jinho Hwang, Jose E. Moreira, Maja Vukovic
-
Publication number: 20160335106
Abstract: Exemplary embodiments provide techniques for managing VM migrations that use relatively simple and uncomplicated commands or APIs that can be executed through scripts or applications. Configuration and preparation for the conversion may be addressed by one set of command-lets or APIs, while the conversion itself is handled by a separate set of command-lets or APIs, which allows the conversion command-lets to be simple and to require little input. Moreover, the architecture-specific commands can be largely abstracted away, so that the configuration and conversion processes can be carried out through straightforward general commands, which automatically cause an interface (e.g., at the conversion server) to call upon any necessary architecture-specific functionality.
Type: Application
Filed: July 31, 2015
Publication date: November 17, 2016
Applicant: NETAPP, INC.
Inventors: Shweta Behere, Sung Ryu, Joshua Flank, Pradeep Thirunavukkarasu
-
Publication number: 20160335107
Abstract: Some embodiments provide a method for a first managed forwarding element (MFE). The method receives a data message that includes a logical context tag that identifies a logical port of a particular logical forwarding element. Based on the logical context tag, the method adds a local tag to the data message. The local tag is associated with the particular logical forwarding element, which is one of several logical forwarding elements to which one or more containers operating on a container virtual machine (VM) belong. The container VM connects to the first MFE. The method delivers the data message to the container VM without any logical context. A second MFE operating on the container VM uses the local tag to forward the data message to a correct container of several containers operating on the container VM.
Type: Application
Filed: August 28, 2015
Publication date: November 17, 2016
Inventors: Somik Behera, Donghai Han, Jianjun Shen, Justin Pettit
-
Publication number: 20160335108
Abstract: Exemplary embodiments described herein relate to a destination path for use with multiple different types of VMs, and techniques for using the destination path to convert, copy, or move data objects stored in one type of VM to another type of VM. The destination path represents a standardized (canonical) way to refer to VM objects from a proprietary VM. A destination location may be specified using the canonical destination path, and the location may be converted into a hypervisor-specific destination location. A source data object may be copied or moved to the destination location using a hypervisor-agnostic path.
Type: Application
Filed: September 30, 2015
Publication date: November 17, 2016
Applicant: NETAPP, INC.
Inventors: Sung Ryu, Shweta Behere, Jeffrey Teehan
-
Publication number: 20160335109
Abstract: The present application provides exemplary methods, mediums, and systems for converting a virtual machine from management by one type of hypervisor to management by a second, different type of hypervisor. The exemplary method involves: (1) discovering information about the source VM; (2) making a backup copy of the source VM data; (3) storing the information in the source VM; (4) copying the source VM data using cloning; (5) starting the destination VM with the cloned data by attaching the copied disks to the destination VM; (6) restoring the source VM to its original state; and (7) starting the destination VM and applying the saved system configuration to a destination guest OS. In some embodiments, the first type of hypervisor (the source hypervisor) may be a Hyper-V hypervisor, and the second type of hypervisor (the destination hypervisor) may be a VMware hypervisor.
Type: Application
Filed: October 30, 2015
Publication date: November 17, 2016
Applicant: NETAPP, INC.
Inventors: Sung Ryu, Shweta Behere
-
Publication number: 20160335110
Abstract: Selective virtualization of resources is provided, where requests for the resources may be intercepted and serviced, or intercepted and redirected. Virtualization logic monitors for a first plurality of requests that are initiated during processing of an object within the virtual machine. Each of the first plurality of requests, such as system calls for example, is associated with an activity to be performed in connection with one or more resources. The virtualization logic selectively virtualizes resources associated with a second plurality of requests that are initiated during the processing of the object within the virtual machine, where the second plurality of requests is lesser in number than the first plurality of requests.
Type: Application
Filed: March 25, 2016
Publication date: November 17, 2016
Applicant: FireEye, Inc.
Inventors: Sushant Paithane, Michael Vincent
-
Publication number: 20160335111
Abstract: A method of managing virtual network functions for a network, the method including providing a virtual network function (VNF) including a number of virtual network function components (VNFCs) of a number of different types, each VNFC comprising a virtual machine (VM) executing application software. The method further includes creating for up to all VNFC types a number of deactivated VMs having application software, monitoring at least one performance level of the VNF, and scaling-out the VNF by activating a number of deactivated VMs of a number of VNFC types when the at least one performance level reaches a scale-out threshold.
Type: Application
Filed: February 24, 2014
Publication date: November 17, 2016
Applicant: Hewlett-Packard Development Company, L.P.
Inventors: Peter Michael Bruun, Thomas Mortensen
-
Publication number: 20160335112
Abstract: Methods for generating a unique identifier of a distributed computing system are provided. One of the methods comprises: receiving, by the first virtual machine, a first index range allocated by the identifier allocation server; receiving, by the second virtual machine, a second index range allocated by the identifier allocation server, the second index range being different from the first index range; generating, by the first virtual machine, a first unique identifier using an index in the first index range without intervention of the identifier allocation server; and generating, by the second virtual machine, a second unique identifier using an index in the second index range without intervention of the identifier allocation server, wherein the first unique identifier and the second unique identifier are identifiers satisfying uniqueness for the whole distributed computing system.
Type: Application
Filed: May 6, 2016
Publication date: November 17, 2016
Applicant: SAMSUNG SDS CO., LTD.
Inventors: Ju Seok YUN, Young Gi KIM, Jae Hong KIM, Han Hwee JO
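The scheme above can be sketched as follows: an allocation server hands each VM a disjoint index range, and each VM then mints identifiers locally from its range without contacting the server again. The class names, block size, and ID format are illustrative assumptions, not details from the patent.

```python
# Sketch: uniqueness across the system comes from the ranges being disjoint,
# so VMs never coordinate after the initial allocation.
class RangeAllocator:
    def __init__(self, block_size=1000):
        self.block_size = block_size
        self.next_start = 0

    def allocate(self):
        start = self.next_start
        self.next_start += self.block_size
        return range(start, start + self.block_size)

class VmIdGenerator:
    def __init__(self, index_range, vm_name):
        self._indices = iter(index_range)
        self.vm_name = vm_name

    def next_id(self):
        # No call back to the allocator for each identifier.
        return f"{self.vm_name}-{next(self._indices)}"

server = RangeAllocator()
vm1 = VmIdGenerator(server.allocate(), "vm1")
vm2 = VmIdGenerator(server.allocate(), "vm2")
print(vm1.next_id(), vm2.next_id())  # → vm1-0 vm2-1000
```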
-
Publication number: 20160335113
Abstract: Methods, systems, and techniques for automated provisioning of virtual desktops are provided. Example embodiments provide an Automated Virtual Desktop Provisioning System (“AVDPS”), which enables users to perform self-service provisioning of virtual desktops with little knowledge other than a proper license. The AVDPS is able to accomplish this through the use of pre-configured Blueprints and Templates. The Blueprints fully specify how a particular resource, for example, an application, services, or virtual infrastructure like memory, CPUs, disk space, etc., is to be installed in a user's virtual desktop(s). The Templates provide master images for a virtual infrastructure image instance (e.g., a virtual machine instance). In an example AVDPS, a single virtual infrastructure image instance supports multiple users at one time, avoiding the need to supply each user with its own virtual machine image and corresponding resources just in order to have a virtual desktop to access resources, for example applications.
Type: Application
Filed: May 16, 2016
Publication date: November 17, 2016
Inventors: John Gorst, Will Horne
-
Publication number: 20160335114
Abstract: Embodiments disclosed facilitate obtaining a cloud agnostic representation of a first Virtual Machine Image (VMI) on a first cloud; and obtaining a second VMI for a second cloud different from the first cloud, wherein the second VMI is obtained based, at least in part, on the cloud agnostic representation of the first VMI.
Type: Application
Filed: July 22, 2016
Publication date: November 17, 2016
Inventors: Tianying Fu, Venkat Narayan Srinivasan
-
Publication number: 20160335115
Abstract: A system and method of multi-level scheduling analysis for a general processing module of a real-time operating system. The method includes identifying any processes within respective partitions of the general processing module and, for each identified process, determining if the process is local-time centric or global-time centric. The method converts each global-time centric process to a local-time centric process, applies a single-level scheduling analysis technique to the processes of respective partitions, and transforms local-time based response times to global-time based response times. The method performs scheduling and response time analyses on one or more of the identified processes of respective partitions. The method can be performed on a synchronous and/or asynchronous system, and on a hierarchical scheduling system that includes a top level scheduler having a static-cyclic schedule and/or a general static schedule. A system and non-transitory computer-readable medium are also disclosed.
Type: Application
Filed: May 14, 2015
Publication date: November 17, 2016
Inventors: Gregory Reed Sykes, Kevin Jones, Hongwei Liao, Panagiotis Manolios
-
Publication number: 20160335116
Abstract: A task generation method includes: receiving worker information from equipment of a worker over a network, the worker information including attribute information regarding a personal attribute of the worker; calculating degrees of association between each of pieces of analysis information resulting from analysis of pieces of data stored in a storage device connected to a computer and the worker information; extracting a piece of data to be subjected to task processing the worker is requested to perform from the pieces of data as specific data, based on the degrees of association; and generating a request task that is a task for making, to the equipment of the worker, a request for performing task processing for giving label information to the extracted specific data by using the equipment of the worker.
Type: Application
Filed: April 28, 2016
Publication date: November 17, 2016
Inventors: YASUNORI ISHII, SOTARO TSUKIZAWA, MASAKI TAKAHASHI, REIKO HAGAWA
-
Publication number: 20160335117
Abstract: An HTM-assisted Combining Framework (HCF) may enable multiple (combiner and non-combiner) threads to access a shared data structure concurrently using hardware transactional memory (HTM). As long as a combiner executes in a hardware transaction and ensures that the lock associated with the data structure is available, it may execute concurrently with other threads operating on the data structure. HCF may include attempting to apply operations to a concurrent data structure utilizing HTM and if the HTM attempt fails, utilizing flat combining within HTM transactions. Publication lists may be used to announce operations to be applied to a concurrent data structure. A combiner thread may select a subset of the operations in the publication list and attempt to apply the selected operations using HTM. If the thread fails in these HTM attempts, it may acquire a lock associated with the data structure and apply the selected operations without HTM.
Type: Application
Filed: May 13, 2016
Publication date: November 17, 2016
Inventors: Alex Kogan, Yosef Lev
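The fallback chain in this abstract (try HTM, then combine under a lock) can be illustrated in outline only, since Python has no hardware transactional memory. Below, `try_htm` is an explicit placeholder that always fails, forcing the lock-based combining path; the class, the batch size of 8, and the push-only operation set are all invented for the sketch.

```python
import threading

# Rough control-flow sketch of the described fallback: a combiner picks a
# subset of announced operations, attempts them "transactionally", and on
# failure applies them under the data structure's lock.
class CombiningStack:
    def __init__(self):
        self._items = []
        self._lock = threading.Lock()
        self.publication_list = []   # announced, not-yet-applied operations

    def try_htm(self, ops):
        return False  # placeholder: a real HTM attempt would go here

    def combine(self):
        ops = self.publication_list[:8]   # combiner selects a subset
        if not self.try_htm(ops):
            with self._lock:              # fallback: apply under the lock
                for op, arg in ops:
                    if op == "push":
                        self._items.append(arg)
        del self.publication_list[:len(ops)]

s = CombiningStack()
s.publication_list = [("push", i) for i in range(3)]
s.combine()
```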
-
Publication number: 20160335118
Abstract: Groups of a plurality of tenants are mapped to identity management classes corresponding to respective roles that grant respective permissions. The identity management classes are associated with hierarchical delegation information that specify delegation rights among the identity management classes, the delegation rights specifying rights of members of the respective identity management classes to perform delegation with respect to further members of the identity management classes. In response to a request by a first member of a first of the identity management classes to perform delegation with respect to a second member of one of the identity management classes, it is determined, based on the hierarchical delegation information, whether the first member is allowed to perform the delegation with respect to the second member.
Type: Application
Filed: January 20, 2014
Publication date: November 17, 2016
Applicant: Hewlett-Packard Development Company, L.P.
Inventors: Michael B Beiter, Randall E Grohs
-
Publication number: 20160335119
Abstract: A multi-processor system for batched pattern recognition may utilize a plurality of different types of neural network processors and may perform batched sets of pattern recognition jobs on a two-dimensional array of inner product units (IPUs) by iteratively applying layers of image data to the IPUs in one dimension, while streaming neural weights from an external memory to the IPUs in the other dimension. The system may also include a load scheduler, which may schedule batched jobs from multiple job dispatchers, via initiators, to one or more batched neural network processors for executing the neural network computations.
Type: Application
Filed: May 9, 2016
Publication date: November 17, 2016
Inventors: Theodore MERRILL, Tijmen TIELEMAN, Sumit SANYAL, Anil HEBBAR
-
Publication number: 20160335120
Abstract: A method for accelerating algorithms and applications on field-programmable gate arrays (FPGAs). The method includes: obtaining, from a host application, by a run-time configurable kernel, implemented on an FPGA, a first set of kernel input data; obtaining, from the host application, by the run-time configurable kernel, a first set of kernel operation parameters; parameterizing the run-time configurable kernel at run-time, using the first set of kernel operation parameters; and performing, by the parameterized run-time configurable kernel, a first kernel operation on the first set of kernel input data to obtain a first set of kernel output data.
Type: Application
Filed: May 10, 2016
Publication date: November 17, 2016
Inventors: Nagesh Chandrasekaran Gupta, Varun Santhaseelan
-
Publication number: 20160335121
Abstract: A system includes a scheduling unit for scheduling jobs to resources, and a library unit having a machine map of the system and a global status map of interconnections of resources. The library unit determines a free map of resources to execute the job to be scheduled, the free map indicating the interconnection of resources to which the job in a current scheduling cycle can be scheduled. A monitoring unit dispatches a job to the resources which match the resource mapping requirements of the job and fall within the free map.
Type: Application
Filed: July 25, 2016
Publication date: November 17, 2016
Inventor: Igor Shpigelman
-
Publication number: 20160335122
Abstract: The method may include collecting performance data relating to processing nodes of a computer system which provide services via one or more applications, analyzing the performance data to generate an operational profile characterizing resource usage of the processing nodes, receiving a set of attributes characterizing expected performance goals in which the services are expected to be provided, and generating at least one provisioning policy based on an analysis of the operational profile in conjunction with the set of attributes. The at least one provisioning policy may specify a condition for re-allocating resources associated with at least one processing node in a manner that satisfies the performance goals of the set of attributes. The method may further include re-allocating, during runtime, the resources associated with the at least one processing node when the condition of the at least one provisioning policy is determined as satisfied.
Type: Application
Filed: July 28, 2016
Publication date: November 17, 2016
Inventors: Yiping DING, Assaf MARRON, Fred JOHANNESSEN
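The core object here, a provisioning policy that pairs a condition with a re-allocation action, can be sketched minimally. The metric name, threshold, and additive re-allocation step are assumptions made for the example, not values from the patent.

```python
# Illustrative sketch: a policy holds a condition over observed metrics and,
# when the condition is satisfied at runtime, yields a new allocation.
class ProvisioningPolicy:
    def __init__(self, metric, threshold, delta):
        self.metric, self.threshold, self.delta = metric, threshold, delta

    def evaluate(self, observed, allocation):
        if observed.get(self.metric, 0) > self.threshold:
            return allocation + self.delta  # condition met: re-allocate
        return allocation                   # condition not met: no change

policy = ProvisioningPolicy("cpu_util", threshold=0.8, delta=2)
print(policy.evaluate({"cpu_util": 0.9}, 4))  # → 6
```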
-
Publication number: 20160335123
Abstract: Techniques are described for managing program execution capacity, such as for a group of computing nodes that are provided for executing one or more programs for a user. In some situations, dynamic program execution capacity modifications for a computing node group that is in use may be performed periodically or otherwise in a recurrent manner, such as to aggregate multiple modifications that are requested or otherwise determined to be made during a period of time, and with the aggregation of multiple determined modifications being able to be performed in various manners. Modifications may be requested or otherwise determined in various manners, including based on dynamic instructions specified by the user, and on satisfaction of triggers that are previously defined by the user. In some situations, the techniques are used in conjunction with a fee-based program execution service that executes multiple programs on behalf of multiple users of the service.
Type: Application
Filed: July 22, 2016
Publication date: November 17, 2016
Inventors: Alex Maclinovsky, Blake Meike, Chiranjeeb Buragohain, Christopher Reddy Kommareddy, Geoffrey Scott Pare, John W. Heitmann, Sumit Lohia, Liang Chen, Zachary S. Musgrave
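The recurrent aggregation described above can be sketched as requests accumulating during a period and being collapsed into a single change at the period boundary. Summing the deltas is only one of the "various manners" of aggregation the abstract alludes to; the class and method names are invented.

```python
# Sketch: capacity-modification requests are recorded, not applied
# immediately; at the end of each period they are aggregated into one change.
class CapacityManager:
    def __init__(self, nodes):
        self.nodes = nodes
        self._pending = []

    def request_change(self, delta):
        self._pending.append(delta)   # from user instructions or triggers

    def end_of_period(self):
        self.nodes = max(0, self.nodes + sum(self._pending))
        self._pending.clear()
        return self.nodes

m = CapacityManager(10)
m.request_change(+4)   # e.g. a previously defined trigger fired
m.request_change(-1)   # e.g. a dynamic user instruction
print(m.end_of_period())  # → 13
```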
-
Publication number: 20160335124
Abstract: Disclosed herein is a computer implemented method for scheduling a new task. The method comprises: receiving task data in respect of the new task, the task data comprising at least information enabling the new task to be uniquely identified and a target runtime for the new task; recording the received task data in a data structure and determining if a new job needs to be registered with an underlying job scheduler.
Type: Application
Filed: May 14, 2015
Publication date: November 17, 2016
Applicant: ATLASSIAN PTY LTD
Inventors: BRAD BAKER, MICHAEL RUFLIN, JOSHUA HANSEN, ADAM HYNES, CLEMENT CAPIAUX, EDWARD ZHANG
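A minimal sketch of this flow: record the incoming task data, then decide whether the underlying job scheduler needs a new job. The abstract does not say what that decision is based on; the assumption below (a job is reused when one already exists for the same target runtime) is invented for illustration.

```python
# Hypothetical sketch: tasks are recorded, and a new underlying job is
# registered only when no existing job covers the task's target runtime.
class TaskScheduler:
    def __init__(self):
        self.tasks = {}          # task_id -> target runtime
        self.registered = set()  # runtimes already backed by a scheduler job

    def schedule(self, task_id, runtime):
        self.tasks[task_id] = runtime        # record the received task data
        if runtime not in self.registered:
            self.registered.add(runtime)
            return True   # a new job was registered
        return False      # the existing job covers this runtime

sched = TaskScheduler()
print(sched.schedule("t1", "12:00"))  # → True (first job at 12:00)
print(sched.schedule("t2", "12:00"))  # → False (reuses the existing job)
```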
-
Publication number: 20160335125
Abstract: A processor and corresponding method are described including cores having a thread set allocated based on a pre-set implementation order, and a controller configured to receive scheduling information determined based on an implementation pattern regarding the allocated thread set from one of the cores and transmit the scheduling information to another of the cores. The one of the cores determines the scheduling information according to characteristics of an application when implementation of the thread set is completed. Each of the cores re-determines an implementation order regarding the allocated thread set based on the determined scheduling information.
Type: Application
Filed: May 4, 2016
Publication date: November 17, 2016
Applicants: Samsung Electronics Co., Ltd., Korea Advanced Institute of Science and Technology
Inventors: Minseok LEE, John Dongjun KIM, Woong SEO, Soojung RYU, Yeongon CHO
-
Publication number: 20160335126
Abstract: Methods, apparatus, and products for deterministic real time business application processing in a service-oriented architecture (‘SOA’), the SOA including SOA services, each SOA service carrying out a processing step of the business application, where each SOA service is a real time process executable on a real time operating system of a generally programmable computer. Deterministic real time business application processing according to embodiments of the present invention includes configuring the business application with real time processing information and executing the business application in the SOA in accordance with the real time processing information.
Type: Application
Filed: July 28, 2016
Publication date: November 17, 2016
Inventors: LANDON C. MILLER, SILJAN H. SIMPSON
-
Publication number: 20160335127
Abstract: Systems and methods for dynamic granularity control of parallelized work in a heterogeneous multi-processor portable computing device (PCD) are provided. During operation a first parallelized portion of an application executing on the PCD is identified, the first parallelized portion comprising a plurality of threads for parallel execution on the PCD. Performance information is obtained about a plurality of processors of the PCD, each of the plurality of processors corresponding to one of the plurality of threads. A number M of workload partition granularities for the plurality of threads is determined, and a total execution cost for each of the M workload partition granularities is determined. An optimal granularity comprising a one of the M workload partition granularities with a lowest total execution cost is determined, and the first parallelized portion is partitioned into a plurality of workloads having the optimal granularity.
Type: Application
Filed: May 11, 2015
Publication date: November 17, 2016
Inventors: JAMES MICHAEL ARTMEIER, SUMIT SUR, ROBERT SCOTT DREYER, MICHAEL DOUGLAS SHARP, JAMES LYALL ESLIGER
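The selection step here, evaluating a total execution cost for each of M candidate granularities and keeping the cheapest, reduces to an argmin. The cost model below (per-chunk scheduling overhead plus a load-imbalance proxy) is an invented stand-in; the patent does not specify one.

```python
# Sketch: cost of a granularity = number of chunks * per-chunk overhead,
# plus the granularity itself as a crude proxy for load imbalance.
def total_cost(work, granularity, overhead_per_chunk=1.0):
    chunks = -(-work // granularity)  # ceiling division
    return chunks * overhead_per_chunk + granularity

def optimal_granularity(work, candidates):
    # Keep the candidate with the lowest total execution cost.
    return min(candidates, key=lambda g: total_cost(work, g))

print(optimal_granularity(1024, [1, 8, 32, 128, 512]))  # → 32
```

Very fine granularities pay too much per-chunk overhead; very coarse ones leave processors idle, so the minimum falls in between.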
-
Publication number: 20160335128
Abstract: Systems, methods, and software described herein facilitate the allocation of large scale processing jobs to host computing systems. In one example, a method of allocating job processes to a plurality of host computing systems in a large scale processing environment includes identifying a job process for the large scale processing environment, and obtaining accommodation data for a plurality of host computing systems in the large scale processing environment. The method further provides identifying a host computing system in the plurality of host computing systems for the job process based on the accommodation data, and initiating a virtual node on the host computing system for the job process.
Type: Application
Filed: May 11, 2015
Publication date: November 17, 2016
Inventors: Thomas A. Phelan, Michael J. Moretti, Joel Baxter, Gunaseelan Lakshminarayanan, Kumar Sreekanti
-
Publication number: 20160335129
Abstract: Some embodiments provide a local network controller that manages a first managed forwarding element (MFE) operating to forward traffic on a host machine for several logical networks and configures the first MFE to forward traffic for a set of containers operating within a container virtual machine (VM) that connects to the first MFE. The local network controller receives, from a centralized network controller, logical network configuration information for a logical network to which the set of containers logically connect. The local network controller receives, from the container VM, a mapping of a tag value used by a second MFE operating on the container VM to a logical forwarding element of the logical network to which the set of containers connect. The local network controller configures the first MFE to apply the logical network configuration information to data messages received from the container VM that are tagged with the tag value.
Type: Application
Filed: August 28, 2015
Publication date: November 17, 2016
Inventors: Somik Behera, Donghai Han, Jianjun Shen, Justin Pettit
-
Publication number: 20160335130
Abstract: A global interconnect system. The global interconnect system includes a plurality of resources having data for supporting the execution of multiple code sequences and a plurality of engines for implementing the execution of the multiple code sequences. A plurality of resource consumers are within each of the plurality of engines. A global interconnect structure is coupled to the plurality of resource consumers and coupled to the plurality of resources to enable data access and execution of the multiple code sequences, wherein the resource consumers access the resources through a per cycle utilization of the global interconnect structure.
Type: Application
Filed: July 25, 2016
Publication date: November 17, 2016
Inventor: Mohammad A. ABDALLAH
-
Publication number: 20160335131
Abstract: Embodiments of the invention relate to a system and method for dynamically scheduling resources using policies to self-optimize resource workloads in a data center. The object of the invention is to allocate resources in the data center dynamically corresponding to a set of policies that are configured by an administrator. Operational parametrics that correlate to the cost of ownership of the data center are monitored and compared to the set of policies configured by the administrator. When the operational parametrics approach or exceed levels that correspond to the set of policies, workloads in the data center are adjusted with the goal of minimizing the cost of ownership of the data center. Such parametrics include, but are not limited to, those that relate to resiliency, power balancing, power consumption, power management, error rate, maintenance, and performance.
Type: Application
Filed: July 25, 2016
Publication date: November 17, 2016
Inventors: Eng Lim Goh, Christian Tanasescu, George L. Thomas, Charlton Port
-
Publication number: 20160335132
Abstract: Provided are a computer program product, system, and method for managing processor threads of a plurality of processors. In one embodiment, a parameter of performance of the computing system is measured, and the configurations of one or more processor nodes are dynamically adjusted as a function of the measured parameter of performance. In this manner, the number of processor threads being concurrently executed by the plurality of processor nodes of the computing system may be dynamically adjusted in real time as the system operates to improve the performance of the system as it operates under various operating conditions. It is appreciated that systems employing processor thread management in accordance with the present description may provide other features in addition to or instead of those described herein, depending upon the particular application.
Type: Application
Filed: May 12, 2015
Publication date: November 17, 2016
Inventors: Kevin J. Ash, Matthew G. Borlick, Lokesh M. Gupta, Trung N. Nguyen
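The feedback loop this abstract describes, adjusting thread count as a function of a measured performance parameter, can be sketched as a simple step rule. The choice of utilization as the parameter and the grow/shrink thresholds are assumptions for the example, not details from the patent.

```python
# Sketch: grow the thread count under high utilization, shrink it under low,
# clamped to configured bounds. A real system would measure continuously.
def adjust_threads(current, utilization, lo=0.3, hi=0.8, min_t=1, max_t=64):
    if utilization > hi:
        return min(max_t, current * 2)   # overloaded: add concurrency
    if utilization < lo:
        return max(min_t, current // 2)  # underused: shed threads
    return current                       # within band: leave unchanged

threads = 8
threads = adjust_threads(threads, 0.95)  # busy system → 16
threads = adjust_threads(threads, 0.10)  # idle system → 8
print(threads)  # → 8
```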
-
Publication number: 20160335133. Abstract: A TASKS_RCU grace period is detected whose quiescent states comprise a task undergoing a voluntary context switch, a task running in user mode, and a task running in idle mode. A list of all runnable tasks is built. The runnable task list is scanned in one or more scan passes. Each scan pass through the runnable task list searches for tasks that have passed through a quiescent state by performing a voluntary context switch, running in user mode, or running in idle mode. If found, such quiescent-state tasks are removed from the runnable task list. Searching performed during a scan pass includes identifying quiescent-state tickless user-mode tasks that have been running continuously in user mode on tickless CPUs that have not received a scheduling clock interrupt since commencement of the TASKS_RCU grace period. If the runnable task list is empty, the TASKS_RCU grace period is ended. Type: Application. Filed: August 21, 2015. Publication date: November 17, 2016. Inventor: Paul E. McKenney
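The scan-pass logic above can be modeled compactly. The following is an illustrative Python model, not the Linux kernel implementation; the task representation, the `state_of` callback, and the observation snapshots are invented for the example.

```python
# Illustrative model of a TASKS_RCU-style grace-period scan (not the
# Linux kernel code). Tasks seen in a quiescent state are dropped from
# the holdout list; the grace period ends once the list is empty.

QUIESCENT = ("voluntary_context_switch", "user_mode", "idle_mode")

def scan_pass(holdouts, state_of):
    """One pass over the runnable-task list: keep only tasks that have
    not yet been observed in a quiescent state."""
    return [t for t in holdouts if state_of(t) not in QUIESCENT]

def grace_period(tasks, observations):
    """Run scan passes over successive observation snapshots; return the
    number of passes needed to end the grace period, or None if holdout
    tasks remain after all snapshots."""
    holdouts = list(tasks)                  # list of all runnable tasks
    for passes, snapshot in enumerate(observations, start=1):
        holdouts = scan_pass(holdouts, lambda t: snapshot.get(t))
        if not holdouts:
            return passes                   # grace period ends here
    return None

obs = [{"A": "user_mode"},                  # pass 1: task A quiescent
       {"B": "running"},                    # pass 2: B still a holdout
       {"B": "voluntary_context_switch"}]   # pass 3: B quiescent
print(grace_period(["A", "B"], obs))        # 3
```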
-
Publication number: 20160335134. Abstract: Provided are a computer program product, system, and method for determining storage tiers for placement of data sets during execution of tasks in a workflow. A representation of a workflow execution pattern of tasks for a job indicates a dependency of the tasks and the data sets operated on by the tasks. A determination is made of an assignment of the data sets for the tasks to a plurality of the storage tiers based on the dependency of the tasks indicated in the workflow execution pattern. A move is scheduled of a subject data set, operated on by a subject task that is subject to an event, to the assigned storage tier indicated in the assignment for the subject task. The move of the data set is scheduled to be performed in response to the event with respect to the subject task. Type: Application. Filed: July 29, 2016. Publication date: November 17, 2016. Inventors: Aayush Gupta, Sangeetha Seshadri
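A dependency-driven tier assignment like the one described can be sketched as follows. This is a hypothetical illustration, not the patented method; the tier names and the "outputs with downstream consumers go to the fast tier" rule are assumptions.

```python
# Hypothetical sketch: assign data sets to storage tiers from a workflow
# dependency graph. Data sets still needed by downstream tasks go to a
# fast tier; terminal outputs go to a slow tier. Tier names are invented.

def assign_tiers(dependencies, fast_tier="ssd", slow_tier="hdd"):
    """dependencies maps task -> list of tasks consuming its output.
    Returns a task -> tier assignment for each task's output data set."""
    return {task: (fast_tier if consumers else slow_tier)
            for task, consumers in dependencies.items()}

workflow = {"extract": ["transform"], "transform": ["load"], "load": []}
print(assign_tiers(workflow))
# {'extract': 'ssd', 'transform': 'ssd', 'load': 'hdd'}
```

In the patent's terms, the actual move to the assigned tier would be scheduled in response to an event on the subject task (e.g., the task starting), rather than performed eagerly.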
-
Publication number: 20160335135. Abstract: A method for minimizing lock contention among threads in a multithreaded system is disclosed. The method includes the steps of: (a) a processor causing a control thread, if information on a task is acquired by the control thread, to acquire a lock to thereby put the information on the task into a specific task queue which satisfies a certain condition among multiple task queues; and (b) the processor causing a specified worker thread corresponding to the specific task queue among multiple worker threads, if the lock held by the control thread is released, to acquire a lock to thereby get a task stored in the specific task queue. Type: Application. Filed: October 29, 2015. Publication date: November 17, 2016. Inventors: Young Hwi Jang, Eui Geun Chung
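The control-thread/worker-thread split above can be sketched with per-queue workers. This is a minimal illustration of the idea, not the patented method; the "shortest queue" condition and all names are assumptions.

```python
import threading
from collections import deque

# Minimal sketch of per-queue task dispatch: a control thread holds the
# lock only long enough to place a task in the queue satisfying a
# condition (here, assumed to be the shortest queue), and each queue has
# a dedicated worker, so workers do not contend over a shared queue.

class TaskQueues:
    def __init__(self, n):
        self.queues = [deque() for _ in range(n)]
        self.lock = threading.Lock()

    def put(self, task):
        """Control thread: enqueue into the shortest queue; return its index."""
        with self.lock:
            idx = min(range(len(self.queues)),
                      key=lambda i: len(self.queues[i]))
            self.queues[idx].append(task)
            return idx

    def get(self, worker_id):
        """Worker thread: take a task from its own queue, or None if empty."""
        with self.lock:
            q = self.queues[worker_id]
            return q.popleft() if q else None

tq = TaskQueues(2)
print([tq.put(t) for t in "abc"])   # tasks spread across the two queues
```

A production version would use a condition variable per queue so workers block instead of polling; that detail is omitted here for brevity.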
-
Publication number: 20160335136. Abstract: A TASKS_RCU grace period is detected whose quiescent states comprise a task undergoing a voluntary context switch, a task running in user mode, and a task running in idle mode. A list of all runnable tasks is built. The runnable task list is scanned in one or more scan passes. Each scan pass through the runnable task list searches for tasks that have passed through a quiescent state by performing a voluntary context switch, running in user mode, or running in idle mode. If found, such quiescent-state tasks are removed from the runnable task list. Searching performed during a scan pass includes identifying quiescent-state tickless user-mode tasks that have been running continuously in user mode on tickless CPUs that have not received a scheduling clock interrupt since commencement of the TASKS_RCU grace period. If the runnable task list is empty, the TASKS_RCU grace period is ended. Type: Application. Filed: May 12, 2015. Publication date: November 17, 2016. Inventor: Paul E. McKenney
-
Publication number: 20160335137. Abstract: A grace period detection technique for a preemptible read-copy update (RCU) implementation that uses a combining tree for quiescent state tracking. When a leaf-level bitmask indicating online/offline CPUs is fully cleared due to all of its assigned CPUs going offline as a result of hotplugging operations, the bitmask state is not immediately propagated to the root level of the combining tree as in prior-art RCU implementations. Instead, propagation is deferred until all tasks are removed from an associated leaf-level task list tracking tasks that were preempted inside an RCU read-side critical section. Deferring bitmask propagation obviates the need to migrate the task list to the combining tree root level in order to prevent premature grace period termination. The task list can remain at the leaf level. In this way, CPU hotplugging is accommodated while avoiding excessive degradation of real-time latency stemming from the now-eliminated task list migration. Type: Application. Filed: August 21, 2015. Publication date: November 17, 2016. Inventor: Paul E. McKenney
-
Publication number: 20160335138. Abstract: A digital assistant includes an extensibility client that interfaces with application extensions built by third-party developers so that various aspects of application user experiences, content, or features may be integrated into the digital assistant and rendered as native digital assistant experiences. Application extensions can use a variety of services provided from cloud-based and/or local sources, such as language/vocabulary, user preferences, and context services, that add intelligence and contextual relevance while enabling the extensions to plug in and operate seamlessly within the digital assistant context. Application extensions may also access and utilize general digital assistant functions, data structures, and libraries exposed by the services, and implement application domain-specific context and behaviors using the programming features captured in the extension. Type: Application. Filed: May 14, 2015. Publication date: November 17, 2016. Inventors: Tanvi Surti, Michael Patten, Sean Lyndersay, Chee Chen Tong
-
Publication number: 20160335139. Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for action items, user-defined actions, and triggering activities. In one aspect, a method includes receiving, at a user device, input of a user-defined action, the user-defined action including a plurality of terms; receiving, by the user device, a selection of a user-defined trigger activity, the trigger activity indicating user performance of an activity that triggers the user-defined action to be presented; determining at least one environmental condition of an environment in which the user device is located; determining, based on user information and the at least one environmental condition, a user performance of the activity indicated by the trigger activity; and presenting, by the user device, a notification of the user-defined action to the user. Type: Application. Filed: May 11, 2015. Publication date: November 17, 2016. Inventors: Fergus Gerard Hurley, Robin Dua
-
Publication number: 20160335140. Abstract: Embodiments are provided for managing operation of an electronic device based on the connection(s) of hardware module(s) to the electronic device via a support housing. According to certain aspects, the electronic device may activate and identify a hardware module that is connected to a controlling position of the support housing. The electronic device may identify a function associated with the hardware module, where the function may be a built-in function of the hardware module itself or of the electronic device. The electronic device may accordingly activate the identified function. Type: Application. Filed: May 12, 2015. Publication date: November 17, 2016. Inventors: Eric Liu, Yoshimichi Matsuoka, Jason Chua
-
Publication number: 20160335141. Abstract: An apparatus, method, system, and program product are disclosed for command-based storage scenario prediction. A registration module registers a listener to receive notifications associated with a scenario, which comprises a predefined sequence of a plurality of commands. A command module determines an initial scenario sequence comprising a subset of the plurality of commands of the scenario. A monitor module detects execution of commands on a device. A notification module sends a notification to the listener in response to detecting execution of a sequence of commands comprising the initial scenario sequence. The notification includes a hint indicating to the listener to prepare for one or more remaining commands of the scenario. Type: Application. Filed: May 14, 2015. Publication date: November 17, 2016. Inventors: Joshua J. Crawford, Paul A. Jennas, II, Jason L. Peipelman, Matthew J. Ward
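The register/monitor/notify flow described above can be sketched in one small class. This is an illustrative reading, not the patented implementation; the command names and the fixed prefix length are assumptions.

```python
# Sketch of command-based scenario prediction: a listener registers for
# a scenario (a predefined command sequence); when the monitored command
# stream ends with the scenario's initial subsequence, the listener is
# notified with a hint naming the remaining commands. Names are invented.

class ScenarioPredictor:
    def __init__(self, scenario, prefix_len):
        self.scenario = scenario
        self.prefix = scenario[:prefix_len]     # initial scenario sequence
        self.listeners = []                     # registration module
        self.recent = []                        # monitor module state

    def register(self, listener):
        self.listeners.append(listener)

    def on_command(self, cmd):
        """Monitor module: record a command and notify on a prefix match."""
        self.recent.append(cmd)
        if self.recent[-len(self.prefix):] == self.prefix:
            hint = self.scenario[len(self.prefix):]   # remaining commands
            for notify in self.listeners:             # notification module
                notify(hint)

hints = []
p = ScenarioPredictor(["mkdir", "format", "mount", "write"], prefix_len=2)
p.register(hints.append)
for c in ["login", "mkdir", "format"]:
    p.on_command(c)
print(hints)   # [['mount', 'write']]
```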
-
Publication number: 20160335142. Abstract: The present disclosure discloses a notification service processing method for business process management and a business process management engine. The method includes parsing a definition of a business process when the business process starts running, and creating a business process instance for a business activity when execution reaches the business activity, where an event listener is configured for the business activity and at least one notification service is configured for the event listener; parsing, based on the created business process instance, the event listener configured for the business activity; and invoking the notification service configured for the event listener, when the event listener detects that a notification service trigger condition is met, to send a notification message to a corresponding party. In this way, the complexity of notification service processing in business process management is reduced. Type: Application. Filed: July 29, 2016. Publication date: November 17, 2016. Inventors: Junjie Zhou, Shijun Wang
-
Publication number: 20160335143. Abstract: Disclosed is a method of determining concurrency factors for an application running on a parallel processor. Also disclosed is a system for implementing the method. In an embodiment, the method includes running at least a portion of the kernel as sequences of mini-kernels, each mini-kernel including a number of concurrently executing workgroups. The number of concurrently executing workgroups is defined as the concurrency factor of the mini-kernel. A performance measure is determined for each sequence of mini-kernels. From the sequences, a particular sequence is chosen that achieves a desired performance of the kernel, based on the performance measures. The kernel is executed with the particular sequence. Type: Application. Filed: May 13, 2015. Publication date: November 17, 2016. Applicant: Advanced Micro Devices, Inc. Inventors: Rathijit Sen, Indrani Paul, Wei Huang
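The selection step above (measure each candidate sequence, pick the best) can be modeled with a toy cost function. This is an illustrative sketch, not AMD's method; the cost model and all names are invented.

```python
# Toy model of concurrency-factor selection: a kernel's workgroups run
# as sequences of mini-kernels, each with a concurrency factor (number
# of concurrently executing workgroups); the factor whose sequence has
# the best measured time is selected. The cost model below is invented.

def pick_best_sequence(total_workgroups, factors, run_time):
    """run_time(factor, groups) measures one mini-kernel; the sequence
    for a factor runs ceil(total/factor) mini-kernels of that size.
    Returns the factor with the lowest total measured time."""
    best = None
    for f in factors:
        minikernels = -(-total_workgroups // f)      # ceiling division
        t = sum(run_time(f, min(f, total_workgroups - i * f))
                for i in range(minikernels))
        if best is None or t < best[1]:
            best = (f, t)
    return best[0]

# Assumed cost: fixed launch overhead per mini-kernel plus per-group work,
# so larger concurrency factors amortize the overhead better.
cost = lambda f, g: 5 + g
print(pick_best_sequence(16, [1, 4, 8], cost))   # 8
```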
-
Publication number: 20160335144. Abstract: Memory systems may include a memory including a plurality of memory blocks, and a controller suitable for incrementing a first counter corresponding to a block of the plurality of memory blocks when the block is read, incrementing a second counter when the first counter reaches a predefined count number, determining an error count of the block when the second counter is incremented, and initiating a reclaim function when the error count exceeds an error threshold. Type: Application. Filed: May 13, 2016. Publication date: November 17, 2016. Inventors: Yu CAI, Fan ZHANG, June LEE, Haibo LI
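The two-level counting scheme above is simple enough to sketch directly. This is an illustrative model, not the patented controller; the count number, error threshold, and class names are assumptions.

```python
# Sketch of two-level read counting for block reclaim: a per-block read
# counter feeds a second counter, and only when the second counter
# advances is the block's error count checked against a threshold to
# trigger a reclaim. All names and thresholds are illustrative.

class BlockMonitor:
    def __init__(self, count_number, error_threshold, read_errors):
        self.count_number = count_number     # predefined count number
        self.error_threshold = error_threshold
        self.read_errors = read_errors       # func: block -> error count
        self.first = {}                      # per-block read counter
        self.second = {}                     # per-block check counter

    def on_read(self, block):
        self.first[block] = self.first.get(block, 0) + 1
        if self.first[block] >= self.count_number:
            self.first[block] = 0
            self.second[block] = self.second.get(block, 0) + 1
            if self.read_errors(block) > self.error_threshold:
                return "reclaim"             # migrate data, erase block
        return "ok"

m = BlockMonitor(count_number=3, error_threshold=10,
                 read_errors=lambda b: 12)
print([m.on_read("blk0") for _ in range(3)])   # ['ok', 'ok', 'reclaim']
```

Checking the error count only every `count_number` reads (rather than on every read) keeps the overhead of error sampling off the hot read path.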
-
Publication number: 20160335145. Abstract: The present invention aims to provide a programmable device with a configuration memory that can retain, even during power-off, the state of abnormal situations that are difficult to anticipate, such as a failure occurring in the programmable device due to terrestrial radiation striking the configuration memory, in order to improve reproducibility in device testing based on the retained error information. The programmable device with the configuration memory includes: an error detection section for detecting an error in the configuration memory and outputting the detected error, as well as the address at which the error occurred, as error information; and an error information holding section provided with a non-volatile memory to store the output error information. Type: Application. Filed: January 24, 2014. Publication date: November 17, 2016. Inventors: Tadanobu TOBA, Kenichi SHIMBO, Yusuke KANNO, Nobuyasu KANEKAWA, Kotara SHIMAMURA, Hiromichi YAMADA
-
Publication number: 20160335146. Abstract: A non-transitory computer-readable recording medium having stored therein a program for causing a computer to execute a process for detecting a sign, the process including: obtaining message information output from one or a plurality of information processing devices; obtaining configuration information on the one or the plurality of information processing devices; storing the obtained message information and the obtained configuration information in a common format; and outputting predetermined message information and predetermined configuration information according to a comparison of a predetermined pattern described in the common format with the message information and the configuration information stored in the common format. Type: Application. Filed: April 28, 2016. Publication date: November 17, 2016. Applicant: FUJITSU LIMITED. Inventors: Hiroshi Otsuka, Yukihiro Watanabe, Yasuhide Matsumoto
-
Publication number: 20160335147. Abstract: Certain aspects of the present disclosure relate to selecting a deferral period after detecting an error in a received packet by an apparatus for wireless communications. The apparatus generally includes an interface configured to obtain a frame received over a medium, and a processing system configured to detect an occurrence of an error when processing the frame, determine an intended recipient of the frame based on information included in the frame, and select a deferral period, after detecting the occurrence of the error, during which the apparatus refrains from transmitting on the medium, wherein the selection is based, at least in part, on the determination. Type: Application. Filed: May 4, 2016. Publication date: November 17, 2016. Inventor: Alfred ASTERJADHI
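The recipient-dependent deferral selection can be sketched as a small decision function. The abstract does not specify the rule's direction or durations; the policy below (defer longer when the corrupted frame was not addressed to this station, reminiscent of 802.11's EIFS after an erroneous frame) and the microsecond values are assumptions for illustration only.

```python
# Hypothetical deferral-period selection after a receive error. The
# direction of the rule and the durations are assumed, not taken from
# the patent: when the errored frame was not meant for this station,
# the true recipient may still respond (e.g., with an ACK), so a longer
# deferral avoids colliding with that response.

def select_deferral_us(frame_ok, addressed_to_me,
                       short_defer=34, long_defer=94):
    """Return how long (microseconds) to refrain from transmitting."""
    if frame_ok:
        return short_defer      # normal inter-frame spacing
    return short_defer if addressed_to_me else long_defer

print(select_deferral_us(frame_ok=False, addressed_to_me=False))   # 94
```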
-
Publication number: 20160335148. Abstract: A packet is identified at a port of a serial data link, and it is determined that the packet is associated with an error. Entry into an error recovery mode is initiated based on the determination that the packet is associated with the error. Entry into the error recovery mode can cause the serial data link to be forced down. In one aspect, forcing the data link down causes all subsequent inbound packets to be dropped and all pending outbound requests and completions to be aborted during the error recovery mode. Type: Application. Filed: February 12, 2016. Publication date: November 17, 2016. Applicant: Intel Corporation. Inventors: Prahladachar Jayaprakash Bharadwaj, Alexander Brown, Debendra Das Sharma, Junaid Thaliyil