Patent Applications Published on October 20, 2016
-
Publication number: 20160306649
Abstract: An “old” hypervisor is upgraded to or otherwise replaced by a “new” hypervisor without migrating virtual machines to a standby computer. The old hypervisor partitions the computer that it controls into a source partition and a target partition. The old hypervisor and its virtual machines initially run on the source partition, while a new hypervisor is installed on the target partition. The virtual machines are migrated to the new hypervisor without physically moving the in-memory virtual-machine data. Instead, the old hypervisor sends memory pointers, and the new hypervisor claims the respective memory locations storing the virtual-machine data. After all virtual machines are migrated, the old hypervisor bequeaths the hypervisor memory and the last processor that it requires to run. The new hypervisor claims the bequeathed processor and hypervisor memory after the old hypervisor terminates, completing the upgrade/exchange.
Type: Application
Filed: June 22, 2016
Publication date: October 20, 2016
Applicant: VMware, Inc.
Inventors: Mukund Gunti, Vishnu Sekhar, Rajesh Venkatasubramanian
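A minimal sketch of the pointer-handoff idea this abstract describes, not VMware's implementation: class and field names are invented, and real page ownership, processor handoff, and partition bookkeeping are reduced to plain Python objects.

```python
# Illustrative only: migrating VMs by transferring page references instead of copying page contents.
class Hypervisor:
    def __init__(self, name):
        self.name = name
        self.vms = {}                    # vm_id -> list of "memory page" references

    def migrate_vm_by_reference(self, vm_id, target):
        """Hand the target hypervisor references to the pages; nothing is physically moved."""
        pages = self.vms.pop(vm_id)      # old hypervisor releases its claim on the pages
        target.vms[vm_id] = pages        # new hypervisor claims the same memory locations

    def bequeath_and_terminate(self, target):
        """After all VMs are migrated, hand over the remaining hypervisor state."""
        assert not self.vms, "all virtual machines must be migrated first"
        target.claimed = {"hypervisor_memory": "old-hv-heap", "last_processor": "cpu0"}

old_hv, new_hv = Hypervisor("old"), Hypervisor("new")
old_hv.vms["vm1"] = [bytearray(4096) for _ in range(4)]   # stand-in for guest memory pages
old_hv.migrate_vm_by_reference("vm1", new_hv)
old_hv.bequeath_and_terminate(new_hv)
print(len(new_hv.vms["vm1"]), "pages now owned by", new_hv.name)
```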
-
Publication number: 20160306650
Abstract: Principles for enabling power management techniques for virtual machines. In a virtual machine environment, a physical computer system may maintain management facilities to direct and control one or more virtual machines executing thereon. In some techniques described herein, the management facilities may be adapted to place a virtual processor in an idle state in response to commands from a guest operating system. One or more signaling mechanisms may be supported such that the guest operating system will command the management facilities to place virtual processors in the idle state.
Type: Application
Filed: June 23, 2016
Publication date: October 20, 2016
Inventors: Haiyong Wang, Brandon S. Baker, Shuvabrata Ganguly, Nicholas Stephen Judge
-
Publication number: 20160306651
Abstract: Software, firmware, and systems repurpose existing virtual machines. After a virtual machine is created, the system stores data associated with the virtual machine to permit its later repurposing. Repurposing data includes data associated with the virtual machine when the virtual machine is in a generic state from which it may be configured for use by two or more users/applications. When the system receives a request to create a new virtual machine, rather than create a brand new virtual machine, the system repurposes an existing virtual machine. The system identifies a virtual machine to repurpose, deletes data associated with the identified virtual machine, and loads a saved copy of repurposing data. The system may then load user data or otherwise customize the database and virtual machine.
Type: Application
Filed: June 27, 2016
Publication date: October 20, 2016
Inventor: Sanjay Harakhchand Kripalani
-
Publication number: 20160306652
Abstract: Methods, systems, and computer program products for providing fair unidirectional multi-queue virtual machine migration are disclosed. A computer-implemented method may include maintaining a current scan identifier for each of a plurality of streams used to migrate a virtual machine from a first hypervisor to a second hypervisor, determining when a current scan identifier of a first stream and a current scan identifier of a second stream are associated with different memory states of the virtual machine, and adjusting processing of memory updates when the current scan identifiers are associated with different memory states of the virtual machine. The adjusting may be performed, for example, by pausing processing on each stream having a current scan identifier subsequent to the earliest current scan identifier determined for the streams, and processing memory updates on each stream having a current scan identifier matching the earliest current scan identifier.
Type: Application
Filed: June 30, 2016
Publication date: October 20, 2016
Inventors: Michael S. Tsirkin, Karen Noel
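A rough sketch of the gating rule the abstract spells out, with an invented stream representation: streams whose current scan identifier is ahead of the earliest one are paused until the lagging streams catch up.

```python
# Sketch of the scan-identifier gating rule; data layout and termination handling are simplified.
def process_migration_streams(streams):
    """streams: list of dicts with 'scan_id' and 'pending' (ordered memory updates)."""
    while any(s["pending"] for s in streams):
        earliest = min(s["scan_id"] for s in streams)
        for s in streams:
            if s["scan_id"] > earliest:
                continue                              # pause streams ahead of the earliest scan
            if s["pending"]:
                update = s["pending"].pop(0)
                print("applying", update, "from scan", s["scan_id"])   # stand-in for applying it
            else:
                s["scan_id"] += 1                     # stream advances to the next memory-state scan

streams = [{"scan_id": 1, "pending": ["pageA@scan1", "pageB@scan1"]},
           {"scan_id": 2, "pending": ["pageA@scan2"]}]
process_migration_streams(streams)   # scan-1 updates drain before the scan-2 stream resumes
```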
-
Publication number: 20160306653
Abstract: According to an example, configurable workload optimization may include selecting a performance optimized application workload from available performance optimized application workloads. A predetermined combination of removable workload optimized modules may be selected to implement the selected performance optimized application workload. Different combinations of the removable workload optimized modules may be usable to implement different ones of the available performance optimized application workloads. The predetermined combination of the removable workload optimized modules may be managed to implement the selected performance optimized application workload. Data flows directed to the predetermined combination of the removable workload optimized modules may be received.
Type: Application
Filed: June 29, 2016
Publication date: October 20, 2016
Applicant: Trend Micro Incorporated
Inventors: Stephen G. LOW, James ROLETTE, Edward A. WARTHA, Matthew LASWELL
-
Publication number: 20160306654
Abstract: The use of a skip element when redoing transactions, so as to avoid tracking dependencies between transactions assigned to different threads for parallel processing. When the second thread comes to a second task in the course of redoing a second transaction, if a first task that is mooted by the second task is not already performed, the second thread inserts a skip element associated with the object to be operated upon by the particular task, instead of actually performing the particular task upon the object. When the first thread later comes to the first task in the course of redoing a first transaction, the first thread encounters the skip element associated with the object. Accordingly, instead of performing the dependee task, the first thread skips the dependee task and perhaps removes the skip element. The result is the same regardless of whether the first or second task is redone first.
Type: Application
Filed: April 14, 2015
Publication date: October 20, 2016
Inventor: Cristian Diaconu
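A toy sketch of the skip-element idea, assuming a shared per-object skip set; the real redo machinery, thread scheduling, and locking are not described in the abstract and are omitted here.

```python
# Toy illustration of skip elements during redo; names and the skip-set structure are invented.
skip_elements = set()          # objects whose earlier (mooted) redo task should be skipped
state = {}                     # object -> value after redo

def redo_task(thread, obj, value, moots_earlier_task=False, earlier_done=False):
    if moots_earlier_task and not earlier_done:
        skip_elements.add(obj)             # mark: the earlier task on obj need not be replayed
    if obj in skip_elements and not moots_earlier_task:
        skip_elements.discard(obj)         # the dependee task: skip it instead of applying it
        print(f"{thread}: skipped redo of {obj}")
        return
    state[obj] = value
    print(f"{thread}: redid {obj} = {value}")

# The second thread reaches its task first; the first thread's older write is later skipped.
redo_task("thread-2", "row-42", "new value", moots_earlier_task=True)
redo_task("thread-1", "row-42", "old value")
print(state)   # {'row-42': 'new value'}
```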
-
Publication number: 20160306655
Abstract: Aspects of the present disclosure are directed towards managing computing resources. Managing computing resources can include initializing, in a computer system, an application that corresponds to one or more commit signatures, where each of the one or more commit signatures corresponds to a transaction within the application, and determining that a commit signature of the one or more commit signatures is saved in a commit block (COB). Managing computing resources can include retrieving from the COB, in response to determining that the commit signature is saved in the COB, a first set of resource data that corresponds to the commit signature, where the first set of resource data contains information for resource usage that corresponds to the application, and allocating resources accessible to the computer system based on the first set of resource data.
Type: Application
Filed: June 12, 2015
Publication date: October 20, 2016
Inventors: Rafal P. Konik, Roger A. Mittelstadt, Brian R. Muras, Chad A. Olstad
-
Publication number: 20160306656
Abstract: Intelligent application back stack management may include generating a first back stack for activities of an application that have been executed by a device that executes the application. The first back stack may include a back stack size limit. A further back stack may be generated for selected ones of the activities of the application if a total number of the activities of the application and further activities of the application exceeds the back stack size limit. The first back stack may be an in-memory back stack for the device that executes the application, and the further back stack may include an external on-device back stack for the device that executes the application and/or a Cloud storage based back stack. Intelligent application back stack management may further include regenerating an activity of the selected ones of the activities that is pulled from the further back stack.
Type: Application
Filed: June 29, 2016
Publication date: October 20, 2016
Applicant: Accenture Global Services Limited
Inventors: Senthil KUMARESAN, Sanjoy PAUL, Nataraj KUNTAGOD
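A minimal sketch of a size-limited back stack that spills older activities to a further (on-device or cloud) store and regenerates them when popped; the class and method names are illustrative, not the applicant's API.

```python
from collections import deque

class BackStack:
    """In-memory back stack with a size limit; overflow spills to a further back stack."""
    def __init__(self, size_limit):
        self.size_limit = size_limit
        self.in_memory = deque()       # most recent activities
        self.further = []              # stand-in for the on-device/cloud further back stack

    def push(self, activity):
        self.in_memory.append(activity)
        if len(self.in_memory) > self.size_limit:
            self.further.append(self.in_memory.popleft())   # spill the oldest activity

    def pop(self):
        if self.in_memory:
            return self.in_memory.pop()
        if self.further:
            return regenerate(self.further.pop())            # pulled from the further back stack
        return None

def regenerate(activity):
    return f"regenerated:{activity}"   # stand-in for re-creating the activity's state

stack = BackStack(size_limit=2)
for screen in ["home", "search", "details", "checkout"]:
    stack.push(screen)
print(stack.pop(), stack.pop(), stack.pop())   # checkout details regenerated:search
```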
-
Publication number: 20160306657
Abstract: The present disclosure relates to dynamic queue placement. In one embodiment, a method includes receiving a plurality of items for processing by a computing device, wherein each item received by the computing device is associated with a priority type. The method also includes determining a computed code for the plurality of items to assign a processing order in a queue for each of the plurality of items. The computed code is based on a timeout period for a lowest priority item of the plurality of items, and a safety margin interval of each of the plurality of items, the safety margin interval including a time period for processing an item. The method may also include placing the plurality of items into the queue based on the computed code of each item.
Type: Application
Filed: April 20, 2015
Publication date: October 20, 2016
Applicants: HISENSE USA CORP., Hisense Electric Co., Ltd., Hisense International Co., Ltd.
Inventor: Roger STRINGER
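The abstract does not give the formula for the computed code, so the sketch below assumes one plausible reading: the code is the lowest-priority item's timeout minus each item's safety margin, and items with smaller codes are dequeued first.

```python
import heapq

def build_queue(items):
    """items: dicts with 'name', 'priority' (higher number = higher priority),
    and 'timeout' / 'safety_margin' in seconds. The formula is an assumption, not from the filing."""
    lowest = min(items, key=lambda i: i["priority"])          # the lowest priority item
    queue = []
    for item in items:
        computed_code = lowest["timeout"] - item["safety_margin"]
        heapq.heappush(queue, (computed_code, item["name"]))
    return [heapq.heappop(queue) for _ in range(len(queue))]

items = [{"name": "telemetry", "priority": 1, "timeout": 30, "safety_margin": 2},
         {"name": "keypress",  "priority": 5, "timeout": 5,  "safety_margin": 10},
         {"name": "render",    "priority": 3, "timeout": 10, "safety_margin": 6}]
print(build_queue(items))   # keypress first: its large safety margin gives the smallest code
```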
-
Publication number: 20160306658
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for memory requests by a virtual machine. One of the methods includes initiating a migration process to move an application executing on a first device from the first device to a second device by copying pages of data, stored in a memory of the first device and used for the execution of the application, from the first device to the second device while continuing to execute the application on the first device, updating, by the first device, one or more bytes in at least one of the pages of data in response to executing the application on the first device during the migration process, stopping execution of the application on the first device, and copying the updated bytes from the first device to the second device to cause the second device to continue execution of the application.
Type: Application
Filed: May 28, 2015
Publication date: October 20, 2016
Inventor: Benjamin C. Serebrin
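A compact sketch of the pre-copy pattern described in the abstract: pages are copied while the application keeps running, the bytes it dirties are tracked, and only those dirtied bytes move during the final stop-and-copy phase. The function shape is invented for illustration.

```python
# Illustrative pre-copy migration of an application's pages; no real devices are involved.
def migrate(pages_on_first_device, run_app_step, second_device_pages):
    dirty = set()                                             # (page_index, byte_offset) touched mid-copy
    for index, page in enumerate(pages_on_first_device):
        second_device_pages[index] = bytearray(page)          # bulk copy while the app still runs
        dirty |= run_app_step(pages_on_first_device)          # app keeps executing and dirtying bytes
    # stop-and-copy phase: the application is paused, only the dirtied bytes are shipped
    for index, offset in dirty:
        second_device_pages[index][offset] = pages_on_first_device[index][offset]

def demo_app_step(pages):
    pages[0][0] = 0xFF                                        # the running app updates a byte
    return {(0, 0)}

src = [bytearray(8), bytearray(8)]
dst = {}
migrate(src, demo_app_step, dst)
assert dst[0][0] == 0xFF                                      # the second device sees the latest bytes
```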
-
Publication number: 20160306659
Abstract: The use of a data stream that has therein data items and a sequence of collection records, each comprising a collection definition that does not overlap with the collection definition in any other of the sequence of collection records. The collection definition defines which data items of the data stream are included within the collection. Each collection record also includes a data stream address range at least extending from the data stream address of the first data item of the collection to the data stream address of the last data item in the collection. In this context, the data stream may be efficiently processed by quickly reviewing the data stream to find each collection record. Once a collection record is found, the collection record is dispatched for processing to a worker thread for processing of the corresponding collection.
Type: Application
Filed: April 14, 2015
Publication date: October 20, 2016
Inventors: Cristian Diaconu, Daniel Vasquez Lopez, Raghavendra Thallam Kodandaramaih, Arkadi Brjazovski, Rogerio Ramos
-
Publication number: 20160306660
Abstract: Embodiments of the present disclosure disclose an apparatus and a method for implementing multiple content management service operations by sending a batch service request for a batch, wherein the batch comprises multiple content management service operations; and receiving a batch service response, wherein the batch service response indicates at least one of a state of the batch and a result from executing the batch.
Type: Application
Filed: April 13, 2016
Publication date: October 20, 2016
Inventors: Wei Ruan, William Wei Zhou, Jason Muhu Chen
-
Publication number: 20160306661
Abstract: Dynamic pool reallocation performed by the following steps: (i) defining a plurality of resource pools including a first pool and a second pool, where each resource pool has a plurality of assigned resources; (ii) receiving a plurality of jobs to be executed; (iii) for each job of the plurality of jobs, assigning a respective resource pool, of the plurality of resource pools, to be used in completing the job; (iv) determining a preliminary schedule for executing the jobs on their respective resource pools; (v) determining whether the preliminary schedule will cause any jobs to miss service level agreement (SLA) deadlines corresponding to the job; (vi) executing the plurality of jobs on their respectively assigned resource pools; and (vii) re-assigning a first resource from the second pool to the first pool during at least some of the time of the execution of a first job by the first resource pool.
Type: Application
Filed: June 29, 2016
Publication date: October 20, 2016
Inventors: Arcangelo Di Balsamo, Sandro Piccinini, Luigi Presti, Luigi Schiuma
-
Publication number: 20160306662
Abstract: Embodiments of the present invention provide systems and methods for allocating multiple resources. In one embodiment, a configured resource plan is used to construct a hierarchical tree. The system then identifies a set of unowned resources from the configured resource plan and sends the set of unowned resources to a share pool. The share pool is either a global or local pool and can be accessed by one or more consumers. In response to changes in workload demands, a set of unused resources are lent to a global or local pool.
Type: Application
Filed: April 20, 2015
Publication date: October 20, 2016
Inventors: Alicia E. Chin, Michael Feiman, Zhenhua Hu, Jason T. S. Lam, Zhimin Lin, Lei Su, Hao Zhou
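A small sketch of the lending idea: unowned resources from a configured plan start in a share pool, idle owned resources are lent to it, and consumers borrow from it as demand shifts. The abstract only hints at the hierarchical tree and policy details, so this uses a simplified flat model with invented names.

```python
class ResourcePlan:
    def __init__(self, owned, unowned):
        self.owned = dict(owned)       # consumer -> number of slots it owns
        self.share_pool = unowned      # unowned resources start in the share pool

    def lend_unused(self, consumer, in_use):
        """Lend any owned-but-idle slots of `consumer` to the share pool."""
        idle = self.owned[consumer] - in_use
        if idle > 0:
            self.owned[consumer] -= idle
            self.share_pool += idle

    def borrow(self, consumer, wanted):
        granted = min(wanted, self.share_pool)
        self.share_pool -= granted
        self.owned[consumer] += granted
        return granted

plan = ResourcePlan(owned={"batch": 8, "interactive": 4}, unowned=2)
plan.lend_unused("batch", in_use=3)                            # 5 idle batch slots move to the pool
print(plan.borrow("interactive", wanted=6), plan.share_pool)   # 6 granted, 1 left in the pool
```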
-
Publication number: 20160306663
Abstract: System, method and computer program product for allocating FPGA resources in a resource pool. In an embodiment, the technical solution includes: receiving a resource request for FPGA resources in the resource pool from a client; performing a resource allocation operation based on a resource pool state information record in response to the resource request, the resource pool state information record including utilization state information of the FPGA in the resource pool; and updating the resource pool state information record based on the result of the resource allocation operation. FPGA resource allocation can be implemented by adopting this technical solution.
Type: Application
Filed: June 28, 2016
Publication date: October 20, 2016
Inventors: Xiaotao Chang, Fei Chen, Kun Wang, Yu Zhang, Jia Zou
-
Publication number: 20160306664
Abstract: Utilizing computing resources under a disabled processor node, including: identifying a disabled processor node, the disabled processor node representing a computer processor that is not being utilized for general purpose computer program instruction execution; identifying one or more computing resources that can be accessed only by the disabled processor node; and enabling a portion of the disabled processor node required to access the one or more computing resources.
Type: Application
Filed: April 16, 2015
Publication date: October 20, 2016
Inventor: DOUGLAS W. OLIVER
-
Publication number: 20160306665
Abstract: Aspects of the present disclosure are directed towards managing computing resources. Managing computing resources can include initializing, in a computer system, an application that corresponds to one or more commit signatures, where each of the one or more commit signatures corresponds to a transaction within the application, and determining that a commit signature of the one or more commit signatures is saved in a commit block (COB). Managing computing resources can include retrieving from the COB, in response to determining that the commit signature is saved in the COB, a first set of resource data that corresponds to the commit signature, where the first set of resource data contains information for resource usage that corresponds to the application, and allocating resources accessible to the computer system based on the first set of resource data.
Type: Application
Filed: April 20, 2015
Publication date: October 20, 2016
Inventors: Rafal P. Konik, Roger A. Mittelstadt, Brian R. Muras, Chad A. Olstad
-
Publication number: 20160306666
Abstract: A central processing unit (CPU) forming part of a computing device, initiates execution of code associated with each of a plurality of objects used by a worker thread. The CPU has an associated cache that is split into a plurality of slices. It is determined, by a cache slice allocation algorithm for each object, whether any of the slices will be exclusive to or shared by the object. Thereafter, for each object, any slices determined to be exclusive to the object are activated such that the object exclusively uses such slices and any slices determined to be shared by the object are activated such that the object shares or is configured to share such slices.
Type: Application
Filed: April 20, 2015
Publication date: October 20, 2016
Inventor: Ivan Schreter
-
Publication number: 20160306667
Abstract: A data processing system is described herein that includes two or more software-driven host components. The two or more host components collectively provide a software plane. The data processing system also includes two or more hardware acceleration components (such as FPGA devices) that collectively provide a hardware acceleration plane. A common physical network allows the host components to communicate with each other, and also allows the hardware acceleration components to communicate with each other. Further, the hardware acceleration components in the hardware acceleration plane include functionality that enables them to communicate with each other in a transparent manner without assistance from the software plane.
Type: Application
Filed: May 20, 2015
Publication date: October 20, 2016
Inventors: Douglas C. Burger, Andrew R. Putnam, Stephen F. Heil
-
Publication number: 20160306668
Abstract: A data processing system is described herein that includes two or more software-driven host components that collectively provide a software plane. The data processing system further includes two or more hardware acceleration components that collectively provide a hardware acceleration plane. The hardware acceleration plane implements one or more services, including at least one multi-component service. The multi-component service has plural parts, and is implemented on a collection of two or more hardware acceleration components, where each hardware acceleration component in the collection implements a corresponding part of the multi-component service. Each hardware acceleration component in the collection is configured to interact with other hardware acceleration components in the collection without involvement from any host component. A function parsing component is also described herein that determines a manner of parsing a function into the plural parts of the multi-component service.
Type: Application
Filed: May 20, 2015
Publication date: October 20, 2016
Inventors: Stephen F. Heil, Adrian M. Caulfield, Douglas C. Burger, Andrew R. Putnam, Eric S. Chung
-
Publication number: 20160306669
Abstract: Systems, methods, and computer program products to perform an operation comprising collecting metric data for a first job upon determining that the first job: uses a first resource of a computing system at a level that exceeds a first threshold, wherein the metric data describes a usage level of the first resource by the first job, and has been executing for a duration of time that exceeds a time threshold.
Type: Application
Filed: April 15, 2015
Publication date: October 20, 2016
Inventor: Karla K. ARNDT
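A minimal sketch of the trigger condition described here: metric data is collected only for jobs that both exceed a resource-usage threshold and have been running longer than a time threshold. Field names are invented for illustration.

```python
import time

def collect_metrics_if_hog(job, usage_threshold, time_threshold_seconds, now=None):
    """job: dict with 'name', 'resource_usage' (e.g., CPU fraction), and 'start_time' (epoch seconds).
    Returns collected metric data, or None when the job does not meet both conditions."""
    now = time.time() if now is None else now
    running_for = now - job["start_time"]
    if job["resource_usage"] > usage_threshold and running_for > time_threshold_seconds:
        return {"job": job["name"], "usage": job["resource_usage"], "runtime_s": running_for}
    return None

job = {"name": "nightly-report", "resource_usage": 0.92, "start_time": 0}
print(collect_metrics_if_hog(job, usage_threshold=0.8, time_threshold_seconds=600, now=3600))
```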
-
Publication number: 20160306670
Abstract: A method to perform an operation comprising collecting metric data for a first job upon determining that the first job: uses a first resource of a computing system at a level that exceeds a first threshold, wherein the metric data describes a usage level of the first resource by the first job, and has been executing for a duration of time that exceeds a time threshold.
Type: Application
Filed: June 1, 2015
Publication date: October 20, 2016
Inventor: Karla K. ARNDT
-
Publication number: 20160306671
Abstract: A data model for application to a constraint programming solver is generated. The data model includes a set of data model elements. A particular data model element corresponds to a particular request. The particular data model element also corresponds to one or more resources that may be assigned to the request. The data model also includes a set of constraints. One or more sort/search algorithms may be applied with the data model to the constraint programming solver. The sort/search algorithms may direct the constraint programming solver to output certain preferred assignments of resources to requests.
Type: Application
Filed: July 24, 2015
Publication date: October 20, 2016
Inventors: Serdar Kadioglu, Michael Colena
-
Publication number: 20160306672
Abstract: Embodiments of the present invention provide systems and methods for allocating multiple resources. In one embodiment, a configured resource plan is used to construct a hierarchical tree. The system then identifies a set of unowned resources from the configured resource plan and sends the set of unowned resources to a share pool. The share pool is either a global or local pool and can be accessed by one or more consumers. In response to changes in workload demands, a set of unused resources are lent to a global or local pool.
Type: Application
Filed: September 21, 2015
Publication date: October 20, 2016
Inventors: Alicia E. Chin, Michael Feiman, Zhenhua Hu, Jason T. S. Lam, Zhimin Lin, Lei Su, Hao Zhou
-
Publication number: 20160306673
Abstract: A method of resource provisioning including obtaining component metric information of one or more processing nodes, where the one or more processing nodes form a pool of processing nodes managed by the provisioning apparatus. The method also includes obtaining task characteristics of a target task executing on one or more processing nodes of a first set, where the one or more processing nodes of the first set are selected from the pool of processing nodes. The method further includes determining one or more processing nodes of a second set from the pool of processing nodes based on the task characteristics and the component metric information, and deploying the target task to the one or more processing nodes in the second set.
Type: Application
Filed: April 15, 2016
Publication date: October 20, 2016
Inventor: Liang YOU
-
Publication number: 20160306674
Abstract: A service mapping component (SMC) is described herein for processing requests by instances of tenant functionality that execute on software-driven host components (or some other components) in a data processing system. The SMC is configured to apply at least one rule to determine whether a service requested by an instance of tenant functionality is to be satisfied by at least one of: a local host component, a local hardware acceleration component which is locally coupled to the local host component, and/or at least one remote hardware acceleration component that is indirectly accessible to the local host component via the local hardware acceleration component. In performing its analysis, the SMC can take into account various factors, such as whether or not the service corresponds to a line-rate service, latency-related considerations, security-related considerations, and so on.
Type: Application
Filed: May 20, 2015
Publication date: October 20, 2016
Inventors: Derek T. Chiou, Sitaram V. Lanka, Douglas C. Burger
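A rough sketch of the kind of rule evaluation the abstract lists, with made-up rule fields; the real SMC's rules, factors, and thresholds are only named, not specified, in the abstract.

```python
def map_service(request):
    """Decide where a requested service runs. Fields are illustrative assumptions:
    'line_rate' (must stay on the data path), 'sensitive' (security), 'latency_budget_us'."""
    if request.get("line_rate"):
        return "local hardware acceleration component"       # keep line-rate work on the local accelerator
    if request.get("sensitive"):
        return "local host component"                         # keep sensitive work off the network
    if request.get("latency_budget_us", float("inf")) < 50:
        return "local hardware acceleration component"
    return "remote hardware acceleration component"           # reachable via the local accelerator

print(map_service({"service": "crypto", "sensitive": True}))
print(map_service({"service": "ranking", "latency_budget_us": 2000}))
```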
-
Publication number: 20160306675
Abstract: A method of managing virtual resources executing on a hardware platform that employs sensors to monitor the health of hardware resources of the hardware platform, includes filtering sensor data from the hardware platform and combining the sensor data with a fault model for the hardware platform to generate a health score, receiving an inventory that maps the virtual resources to the hardware resources of the hardware platform, receiving resource usage data describing use of the hardware resources of the hardware platform by the virtual resources, and generating resource utilization metrics from the resource usage data. The method includes receiving policy data specifying rules applicable to the inventory, determining a set of recommendations for changes to the inventory based on the health score, the resource usage data, and the policy data, and executing at least one recommendation to implement the changes to the inventory.
Type: Application
Filed: June 26, 2015
Publication date: October 20, 2016
Inventors: Maarten WIGGERS, Manoj KRISHNAN, Anil KAPUR, Keith FARKAS, Anne HOLLER
-
Publication number: 20160306676
Abstract: According to one embodiment of the present invention, a method is provided. The method may include a computer registering a first instance of a logical partition on a source server with a logical unit and placing a first persistent reservation on the logical unit, wherein the first persistent reservation indicates that only the first instance of the logical partition can hold a reservation on the logical unit. The method may further include the computer registering a second instance of the logical partition on a destination server with the logical unit and downgrading the first persistent reservation, such that the first and second instances of the logical partition can hold persistent reservations on the logical unit. The method may further include the computer placing, by one or more computer processors, a second persistent reservation on the logical unit.
Type: Application
Filed: July 8, 2016
Publication date: October 20, 2016
Inventors: Kiran K. Anumalasetty, Venkata N. Anumula, Sudhir Maddali, Yadagiri Rajaboina
-
Publication number: 20160306677
Abstract: Mechanisms are provided, in a data processing system comprising a primary system-on-a-chip (SOC) and a pool of SOCs, for processing a workload. The data processing system receives a submitted cloud computing workload and allocates the cloud computing workload to the primary SOC. An analytics monitor of the data processing system monitors a bus of the data processing system for at least one first signal indicative of an overloaded condition of the primary SOC. A Power, Reset, and Clocking (PRC) hardware block powers-up one or more auxiliary SOCs in the pool of SOCs in response to the analytics monitor detecting the at least one first signal. The workload is then distributed across the primary SOC and the one or more auxiliary SOCs in response to powering-up the one or more auxiliary SOCs. The workload is then executed by the primary SOC and the one or more auxiliary SOCs.
Type: Application
Filed: April 14, 2015
Publication date: October 20, 2016
Inventors: Kalpesh Hira, Jeffrey R. Hoy, Ivan M. Milman
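A schematic sketch of the overload-driven scale-out described above; the bus monitoring and PRC hardware block are reduced to plain function logic, and the capacity numbers are invented.

```python
# Simplified model: when the primary SOC looks overloaded, power up auxiliaries and spread the work.
def run_workload(tasks, primary_capacity, auxiliary_pool):
    socs = ["primary"]
    if len(tasks) > primary_capacity:                 # stand-in for the overload signal on the bus
        overflow = len(tasks) - primary_capacity
        needed = -(-overflow // primary_capacity)     # ceiling division: auxiliaries to power up
        socs += auxiliary_pool[:needed]               # PRC block powers up this many auxiliary SOCs
    assignments = {soc: [] for soc in socs}
    for i, task in enumerate(tasks):
        assignments[socs[i % len(socs)]].append(task) # distribute the workload across all SOCs
    return assignments

print(run_workload([f"t{i}" for i in range(7)], primary_capacity=3,
                   auxiliary_pool=["aux-0", "aux-1", "aux-2"]))
```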
-
Publication number: 20160306678
Abstract: Mechanisms are provided, in a data processing system comprising a primary system-on-a-chip (SOC) and a pool of SOCs, for processing a workload. The data processing system receives a submitted cloud computing workload and allocates the cloud computing workload to the primary SOC. An analytics monitor of the data processing system monitors a bus of the data processing system for at least one first signal indicative of an overloaded condition of the primary SOC. A Power, Reset, and Clocking (PRC) hardware block powers-up one or more auxiliary SOCs in the pool of SOCs in response to the analytics monitor detecting the at least one first signal. The workload is then distributed across the primary SOC and the one or more auxiliary SOCs in response to powering-up the one or more auxiliary SOCs. The workload is then executed by the primary SOC and the one or more auxiliary SOCs.
Type: Application
Filed: June 3, 2015
Publication date: October 20, 2016
Inventors: Kalpesh Hira, Jeffrey R. Hoy, Ivan M. Milman
-
Publication number: 20160306679
Abstract: A system, method and program product for managing hydrocarbon energy production. A hydrocarbon field modeler models physical characteristics of a hydrocarbon energy field. A load predictor predicts processing workload in modeling the hydrocarbon energy field and identifies a balanced modeling unit distribution across multiple processors simulating field production. A load distribution unit distributes the modeling units across the processors for a balanced modeling unit distribution. The load predictor and load distribution unit proactively shift loads to keep the workload balanced throughout the simulation.
Type: Application
Filed: November 18, 2015
Publication date: October 20, 2016
Applicants: INTERNATIONAL BUSINESS MACHINES CORPORATION, REPSOL, S. A.
Inventors: Pablo Enrique Vargas Mendoza, Jose Maria Segura Serra, Nubia Aurora Gonzalez Molano, Lashmikantha Mookanahallipatna Ramasesha, Roberto Federico Ausas, Kamal C. Das, Freddy Ernesto Mackay Espindola, Ulisses Mello, Ankur Narang, Carmen Nilda Mena Paz, Eduardo Rocha Rodrigues, Paula Aida Sesini
-
Publication number: 20160306680
Abstract: The present invention discloses a thread creation method, a service request processing method, and a related device, where the method includes: acquiring a quantity of network interface card queues of a multi-queue network interface card of a server; creating processes whose quantity is equal to the quantity of network interface card queues; creating one listener thread and multiple worker threads in each process; and binding each created listener thread to a different network interface card queue. Solutions provided in embodiments of the present invention make the creation of processes and threads more appropriate and improve the efficiency with which a server processes parallel service requests.
Type: Application
Filed: June 24, 2016
Publication date: October 20, 2016
Applicant: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Qingni SHEN, Cai LI, Mingyu GUO, Tian ZHANG, Keping CHEN, Yi CAI
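A minimal sketch of the structure described in the abstract: one group per NIC queue, each with a single listener plus several workers. For brevity the "processes" are modeled as thread groups and the hardware NIC queues as in-memory queues; no real NIC binding is attempted.

```python
import queue, threading, time

def start_service(nic_queue_count, workers_per_process=2):
    """One group per NIC queue (real processes are modeled as thread groups here): a listener
    thread pulls requests from its bound queue and hands them to that group's worker threads."""
    nic_queues = [queue.Queue() for _ in range(nic_queue_count)]   # stand-ins for the HW queues
    for idx, nic_q in enumerate(nic_queues):
        work_q = queue.Queue()
        def listener(nic_q=nic_q, work_q=work_q):
            while True:
                work_q.put(nic_q.get())               # each listener is bound to exactly one NIC queue
        def worker(idx=idx, work_q=work_q):
            while True:
                print(f"group-{idx} handled {work_q.get()}")
        threading.Thread(target=listener, daemon=True).start()
        for _ in range(workers_per_process):
            threading.Thread(target=worker, daemon=True).start()
    return nic_queues

qs = start_service(nic_queue_count=2)
qs[0].put("GET /a"); qs[1].put("GET /b")
time.sleep(0.2)                                       # give the daemon threads a moment to drain
```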
-
Publication number: 20160306681
Abstract: Each of a plurality of accesses by a multithreaded program to shared data structures stored within a database is monitored. The accesses are implemented by varying application programming interface (API) methods. Thereafter, it is determined, based on pre-defined synchronization safeguards, whether each of the accesses is valid or invalid based on the corresponding API method used. Those accesses to the shared data structures that were determined to be valid are allowed to proceed while those accesses to the shared data structures that were determined to be invalid are prevented from proceeding.
Type: Application
Filed: April 20, 2015
Publication date: October 20, 2016
Inventor: Ivan Schreter
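A toy sketch of checking each access against pre-defined synchronization safeguards; the safeguard table and its fields are invented examples, since the abstract does not define them.

```python
# Invented safeguard table: which API methods are valid for a shared structure, per lock state.
SAFEGUARDS = {
    ("shared_index", "read"):  {"requires_lock": False},
    ("shared_index", "write"): {"requires_lock": True},
}

def check_access(structure, api_method, holds_lock):
    """Return True if the access may proceed, False if it must be prevented."""
    rule = SAFEGUARDS.get((structure, api_method))
    if rule is None:
        return False                                   # unknown access pattern: prevent it
    return holds_lock or not rule["requires_lock"]     # valid only if the safeguard is satisfied

print(check_access("shared_index", "write", holds_lock=False))  # False: access is prevented
print(check_access("shared_index", "read",  holds_lock=False))  # True: access may proceed
```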
-
Publication number: 20160306682
Abstract: An information processing apparatus is provided that includes a first operating system incapable of adding or deleting an application and a second operating system capable of adding and deleting an application. The apparatus determines whether a received command is directed to the first operating system or to the second operating system by referencing a table in which each command is associated with the operating system that processes it; retains the table; controls a memory so that the first operating system or the second operating system can start processing based on a result of the determination; and transfers the received command to the first operating system or the second operating system based on the result of the determination.
Type: Application
Filed: June 29, 2016
Publication date: October 20, 2016
Inventor: Yasuo Takeuchi
-
Publication number: 20160306683
Abstract: A platform architecture that is configurable to provide task specific application instances compatible with one or more hosts is provided with a method for using the architecture. In one example, the platform architecture provides a transducer functionality block, a conduit functionality block, an application functionality block, and an application programming interface (API) functionality block on which each task specific application instance is based.
Type: Application
Filed: June 27, 2016
Publication date: October 20, 2016
Inventors: Douglas A. Standley, Randall E. Bye, Bill C. Hardgrave, David M. Sherr
-
Publication number: 20160306684
Abstract: Software that preserves information provided by a user in a first application utilizing a first interaction mode for use by a second application utilizing a second interaction mode, by performing the following steps: (i) generating a natural language log describing an interaction between a user and a first application, where the user interacts with the first application utilizing a first interaction mode; (ii) analyzing the natural language log using natural language processing to determine first user data; and (iii) utilizing the first user data by a second application, where the user interacts with the second application utilizing a second interaction mode different from the first interaction mode, and where the second application does not receive the first user data from the user via the second interaction mode.
Type: Application
Filed: April 15, 2015
Publication date: October 20, 2016
Inventors: Corville O. Allen, Robert E. Loredo, Adrian X. Rodriguez, Eric Woods
-
Publication number: 20160306685
Abstract: Software that preserves information provided by a user in a first application utilizing a first interaction mode for use by a second application utilizing a second interaction mode, by performing the following steps: (i) generating a natural language log describing an interaction between a user and a first application, where the user interacts with the first application utilizing a first interaction mode; (ii) analyzing the natural language log using natural language processing to determine first user data; and (iii) utilizing the first user data by a second application, where the user interacts with the second application utilizing a second interaction mode different from the first interaction mode, and where the second application does not receive the first user data from the user via the second interaction mode.
Type: Application
Filed: September 30, 2015
Publication date: October 20, 2016
Inventors: Corville O. Allen, Robert E. Loredo, Adrian X. Rodriguez, Eric Woods
-
Publication number: 20160306686
Abstract: An information processing apparatus to verify an operation of an application program includes a processor configured to, upon receiving notification of having detected a connection request to external services from a connection unit contained in an execution environment for verifying the operation and establishing a connection to the external services of the application program based on connecting information being set, copy the execution environment a number of times matching the count of the external services that are operation-verification targets, to set connecting information for the corresponding external services in the respective connection units contained in the copied execution environments, and to continue verifying the operation per copied execution environment with respect to the corresponding external services connected by the respective connection units contained in the copied execution environments.
Type: Application
Filed: March 22, 2016
Publication date: October 20, 2016
Applicant: FUJITSU LIMITED
Inventors: Takayuki Maeda, Tomohiro OHTAKE, Toshihiro Kodaka
-
Publication number: 20160306687
Abstract: A method comprising identifying a source program to send a target program map interaction data set to a target program, identifying a target program map function set that includes at least one target program map function, identifying a source program map function set that includes at least one source program map function, determining a transfer map function set that is a set intersection of the target program map function set and the source program map function set, identifying a source program map interaction data set, determining a target program map interaction data set to include each source program map interaction data item from the source program map interaction data set that corresponds with at least one target program map function of the target program map function set, and sending the target program map interaction data set to the target program is disclosed.
Type: Application
Filed: April 16, 2015
Publication date: October 20, 2016
Inventors: Debmalya Biswas, Julian Nolan, Matthew John Lawrenson
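The set arithmetic in this abstract is straightforward; a short sketch with invented function and variable names is below.

```python
def build_transfer(source_functions, target_functions, source_interaction_data):
    """source_interaction_data: dict mapping a map function name to its interaction items."""
    transfer_functions = set(source_functions) & set(target_functions)    # set intersection
    return {fn: items for fn, items in source_interaction_data.items()
            if fn in transfer_functions}                                   # only transferable items

source_fns = {"pan", "zoom", "annotate"}
target_fns = {"pan", "zoom", "route"}
data = {"pan": ["drag to 48.85,2.35"], "zoom": ["level 12"], "annotate": ["note: cafe"]}
print(build_transfer(source_fns, target_fns, data))   # the 'annotate' items are not sent
```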
-
Publication number: 20160306688
Abstract: A remediation server receives a service request from a data processing device, the service request to diagnose a failure to load an operating system at the data processing device. A data storage device local to the data processing device is identified, the data storage device storing the operating system. A diagnostic process is provided at the remediation server, the diagnostic process to mount the data storage device. A diagnostic service is performed based on information stored at the data storage device.
Type: Application
Filed: April 15, 2015
Publication date: October 20, 2016
Inventors: Anantha Boyapalle, Yuan-Chang Lo, Todd E. Swierk, Carlton A. Andrews
-
Publication number: 20160306689
Abstract: A nexus of a software failure can be determined. A feature module can determine execution features based at least in part on particular execution-related data. An analysis module can determine particular nexus data based at least in part upon a stored computational model and the determined execution features. In some examples, a communications module receives the particular execution-related data and transmits the determined particular nexus data via the communications interface. In some examples, a modeling module determines the computational model based at least in part on training data including execution features of a plurality of execution-related data records and respective nexus data values. Some examples include executing a program module, transmitting execution-related data of the program module, receiving a nexus data value, and executing the program module again if the nexus is a condition external to the program module.
Type: Application
Filed: April 17, 2015
Publication date: October 20, 2016
Inventor: Navendu Jain
-
Publication number: 20160306690
Abstract: Features are disclosed for performing integrated test design, automation, and analysis. Such features could be used to provide efficient test planning, execution, and results analysis for multiple sites. The integrated testing interface may obtain test plan data, provide test configurations to hardware or software test runners, and process results from the testing. The integrated interface provides a full-circle testing platform from requirements collection to design to execution to analysis.
Type: Application
Filed: April 20, 2015
Publication date: October 20, 2016
Inventors: Mark Underseth, Ivailo Petrov
-
Publication number: 20160306691
Abstract: Methods and systems for a diagnostic service assistant for connected devices. The device service assistant enables users to diagnose and repair connected devices remotely and/or locally with the use of relevant information. The diagnostic service assistant includes a knowledge base with semantic models that manage heterogeneous sources of relevant information distributed over internal and external storage locations, typically accessible through the internet. The heterogeneous sources of information include 1) device profiles, device status, device histories, and/or aggregated information from similar devices from different users, 2) electronic technical manuals, and/or 3) user-generated content and 4) aggregated and analyzed knowledge from knowledge sources.
Type: Application
Filed: December 5, 2014
Publication date: October 20, 2016
Inventors: Mansimar Aneja, Stefan Jungmayr, Ji Eun Kim, Tassilo Barth
-
Publication number: 20160306692
Abstract: A CRC code calculation circuit including: an extraction circuit that extracts a calculation target packet that is a target of CRC calculation from a signal frame inputted as a parallel signal of a first bit length; a shift circuit that generates, when a bit length of the calculation target packet does not match an integral multiple of the first bit length, data A of a bit length that is the integral multiple of the first bit length by shifting the calculation target packet such that a last bit of the calculation target packet is positioned at a least significant bit, and adding “0” to a most significant bit side of a head bit of the shifted calculation target packet; and a calculation circuit that generates a CRC code by performing a CRC calculation on the data A based on an initial value “0” stored in a register.
Type: Application
Filed: March 16, 2016
Publication date: October 20, 2016
Inventor: Masaki Yamamoto
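A software analogue of the padding rule in the abstract: the packet is left-padded with zero bits until its length is a multiple of the bus width, and because the CRC register starts at an all-zero initial value, the leading zeros do not change the result. The polynomial and width below are the common CRC-32 values, chosen only for illustration; the filing does not specify them.

```python
def crc(bits, init=0, poly=0x04C11DB7, width=32):
    """Bitwise, non-reflected CRC with no final XOR; `bits` is an iterable of 0/1 values."""
    reg, top, mask = init, 1 << (width - 1), (1 << width) - 1
    for bit in bits:
        feedback = bool(reg & top) ^ bit      # MSB of the register XOR the incoming bit
        reg = (reg << 1) & mask
        if feedback:
            reg ^= poly
    return reg

def pad_to_bus_width(bits, bus_width):
    """Left-pad with 0s so the length is an integral multiple of the parallel bus width."""
    missing = (-len(bits)) % bus_width
    return [0] * missing + list(bits)

packet = [1, 0, 1, 1, 0, 1]                               # 6 bits, bus width 4 -> 2 leading zeros
assert crc(pad_to_bus_width(packet, 4)) == crc(packet)    # zero left-padding with init 0 is harmless
print(hex(crc(pad_to_bus_width(packet, 4))))
```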
-
Publication number: 20160306693
Abstract: A read voltage level estimating method, a memory storage device and a memory control circuit unit are provided. The method includes: reading a first region of a rewritable non-volatile memory module according to a first read voltage level to obtain a first encoding unit which belongs to a block code; performing a first decoding procedure on the first encoding unit and recording first decoding information; reading the first region according to a second read voltage level to obtain a second encoding unit which belongs to the block code; performing a second decoding procedure on the second encoding unit and recording second decoding information; and estimating and obtaining a third read voltage level according to the first decoding information and the second decoding information. Accordingly, a management ability of the rewritable non-volatile memory module adopting the block code may be improved.
Type: Application
Filed: June 22, 2015
Publication date: October 20, 2016
Inventors: Wei Lin, Tien-Ching Wang, Kuo-Hsin Lai
-
Publication number: 20160306694
Abstract: A method is provided that includes performing first decoding operations on data obtained from a plurality of units of memory using soft information values for the plurality of units of memory, where the plurality of units of memory includes an error correction stripe. The method further includes determining that two or more units of memory have uncorrectable errors. The method further includes updating a soft information value for a first unit of memory in accordance with a magnitude of a soft information value for a second unit and a direction based on parity of the error correction stripe excluding the first unit, where the first unit of memory and the second unit of memory are included in the two or more units of memory that have uncorrectable errors. The method further includes performing a second decoding operation on data obtained from the first unit using the updated soft information value.
Type: Application
Filed: December 17, 2015
Publication date: October 20, 2016
Inventors: Ying Yu Tai, Seungjune Jeon, Jiangli Zhu
-
Publication number: 20160306695
Abstract: An apparatus includes a semiconductor fuse array, disposed on a semiconductor die, into which is programmed configuration data. The semiconductor fuse array has a first plurality of semiconductor fuses and a second plurality of semiconductor fuses. The first plurality of semiconductor fuses is configured to store the configuration data in an encoded and compressed format. The second plurality of semiconductor fuses is configured to store first fuse correction data that indicates locations and values corresponding to a first one or more fuses within the first plurality of fuses whose states are to be changed from that which was previously stored.
Type: Application
Filed: June 27, 2016
Publication date: October 20, 2016
Inventors: G. GLENN HENRY, DINESH K. JAIN
-
Publication number: 20160306696
Abstract: A method and a memory controller for accessing a non-volatile memory are disclosed. The method includes reading a first memory region of the non-volatile memory and ascertaining whether the first memory region contains a predetermined data pattern, wherein the predetermined data pattern has no influence on resulting error correcting data determined for at least the first memory region. The method includes evaluating a data status for a second memory region of the non-volatile memory on the basis of a presence of the predetermined data pattern in the first memory region, wherein the data status indicates at least one of whether valid data is present within the second memory region and whether the second memory region is writable.
Type: Application
Filed: June 27, 2016
Publication date: October 20, 2016
Inventors: Thomas Rabenalt, Ulrich Backhausen, Thomas Kern, Michael Goessel
-
Publication number: 20160306697
Abstract: According to one embodiment, a controller in a magnetic disk device saves a write data sequence including a plurality of write data blocks not written to a disk, and first management information, for the plurality of write data blocks, from which logical block addresses are excluded, in a nonvolatile memory in response to a decrease in a voltage of a power supply for the magnetic disk device. Redundancy codes are respectively added to the plurality of write data blocks. The logical block addresses of the plurality of write data blocks are embedded in the respective redundancy codes.
Type: Application
Filed: April 15, 2015
Publication date: October 20, 2016
Inventors: Satoru Adachi, Masaaki Motoki, Mitsuaki Sudou
-
Publication number: 20160306698
Abstract: A dispersed storage (DS) processing module sends a plurality of undecodeable portions of a plurality of data files via a public wireless communication network to one or more targeted devices of a private wireless communication network. The DS processing module continues processing by sending data content indicators regarding the plurality of data files and, in response to a selection of a data file of the plurality of data files based on a corresponding one of the data content indicators, sending, via the private wireless communication network, one or more encoded data slices of each of one or more sets of encoded data slices of the data file such that, for each of the one or more sets of encoded data slices, the one or more targeted devices obtains at least a decode threshold number of encoded data slices to decode the data file.
Type: Application
Filed: June 23, 2016
Publication date: October 20, 2016
Inventors: Gary W. Grube, Timothy W. Markison