Patent Applications Published on December 6, 2018
  • Publication number: 20180349182
    Abstract: Systems and methods are disclosed for scheduling threads on a processor that has at least two different core types, such as an asymmetric multiprocessing system. Each core type can run at a plurality of selectable voltage and frequency scaling (DVFS) states. Threads from a plurality of processes can be grouped into thread groups. Execution metrics are accumulated for threads of a thread group and fed into a plurality of tunable controllers for the thread group. A closed loop performance control (CLPC) system determines a control effort for the thread group and maps the control effort to a recommended core type and DVFS state. A closed loop thermal and power management system can limit the control effort determined by the CLPC for a thread group, and limit the power, core type, and DVFS states for the system. Deferred interrupts can be used to increase performance.
    Type: Application
    Filed: January 12, 2018
    Publication date: December 6, 2018
    Inventors: Jeremy C. Andrus, John G. Dorsey, James M. Magee, Daniel A. Chimene, Cyril de la Cropte de Chanterac, Bryan R. Hinch, Aditya Venkataraman, Andrei Dorofeev, Nigel R. Gamble, Russell A. Blaine, Constantin Pistol
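The closed-loop control described in this abstract can be sketched as a small feedback controller that turns a thread group's utilization error into a control effort and maps that effort onto a recommended core type and DVFS state. The gains, thresholds, and mapping table below are illustrative assumptions, not the patented implementation.

```python
# Illustrative sketch of a closed-loop performance controller (CLPC):
# accumulate the error between measured and target utilization for a
# thread group, compute a clamped control effort, and map the effort to
# a recommended core type and DVFS state. All values are assumptions.

DVFS_TABLE = [
    # (control-effort ceiling, core type, DVFS state)
    (0.3, "efficiency", "low"),
    (0.6, "efficiency", "high"),
    (0.8, "performance", "low"),
    (1.0, "performance", "high"),
]

class ThreadGroupController:
    def __init__(self, target_util=0.7, kp=2.0, ki=0.1):
        self.target_util = target_util
        self.kp, self.ki = kp, ki   # assumed PI gains
        self.integral = 0.0

    def update(self, measured_util):
        """Feed one utilization sample; return (core_type, dvfs_state)."""
        error = measured_util - self.target_util
        self.integral = min(max(self.integral + error, 0.0), 1.0)
        effort = min(max(self.kp * error + self.ki * self.integral, 0.0), 1.0)
        for ceiling, core, state in DVFS_TABLE:
            if effort <= ceiling:
                return core, state

ctl = ThreadGroupController()
rec_light = ctl.update(0.2)    # light load -> ("efficiency", "low")
rec_heavy = ctl.update(0.95)   # heavier load -> ("efficiency", "high")
```

A thermal or power limiter, as the abstract notes, would simply clamp `effort` to a lower ceiling before the table lookup.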
  • Publication number: 20180349183
    Abstract: In one aspect, a method for scheduling jobs in a computational workflow includes identifying, from a computational workflow by a workflow execution engine executing on a processor, a plurality of jobs ready for execution. The method includes sorting, based on computational resource requirements associated with each identified job, the identified jobs into a prioritized queue. The method includes provisioning one or more computational instances based on the computational resource requirements of the identified jobs in the prioritized queue, wherein at least one computational instance is provisioned based on a highest priority job in the queue. The method includes submitting the prioritized jobs for execution to the one or more computational instances.
    Type: Application
    Filed: May 29, 2018
    Publication date: December 6, 2018
    Inventors: Milos Popovic, Goran Rakocevic, Mihailo Andrejevic, Aleksandar Minic
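The scheduling flow in this abstract (sort ready jobs by resource requirements, provision an instance for the head of the queue, submit jobs that fit) can be sketched as follows; the job fields and sizing policy are assumed for illustration.

```python
# Sketch of the described workflow scheduling: sort ready jobs into a
# prioritized queue by resource demand, provision one instance sized
# for the highest-priority job, then submit the jobs that fit on it.

def schedule(ready_jobs):
    # Highest resource demand first (cpus, then memory) -- assumed policy.
    queue = sorted(ready_jobs,
                   key=lambda j: (j["cpus"], j["mem_gb"]),
                   reverse=True)
    head = queue[0]
    # Provision a computational instance based on the highest-priority job.
    instance = {"cpus": head["cpus"], "mem_gb": head["mem_gb"], "jobs": []}
    for job in queue:
        if (job["cpus"] <= instance["cpus"] and
                job["mem_gb"] <= instance["mem_gb"]):
            instance["jobs"].append(job["name"])  # submit for execution
    return instance

inst = schedule([
    {"name": "align", "cpus": 8,  "mem_gb": 32},
    {"name": "sort",  "cpus": 4,  "mem_gb": 16},
    {"name": "call",  "cpus": 16, "mem_gb": 64},
])
# inst is sized for "call" and holds all three jobs in priority order.
```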
  • Publication number: 20180349184
    Abstract: The invention provides for a method for processing a plurality of data sets (105; 106; 108; 110-113; DB1; DB2) in a data repository (104) for storing at least unstructured data, the method comprising: —providing (302) a set of agents (150-168), each agent being operable to trigger the processing of one or more of the data sets, the execution of each of said agents being automatically triggered in case one or more conditions assigned to said agent are met, at least one of the conditions relating to the existence, structure, content and/or annotations of the data set whose processing can be triggered by said agent; —executing (304) a first one of the agents; —updating (306) the annotations (115) of the first data set by the first agent; and —executing (308) a second one of the agents, said execution being triggered by the updated annotations of the first data set meeting the conditions of the second agent, thereby triggering a further updating of the annotations of the first data set.
    Type: Application
    Filed: August 14, 2018
    Publication date: December 6, 2018
    Inventors: Albert Maier, Yannick Saillet, Harald C. Smith, Daniel C. Wolfson
  • Publication number: 20180349185
    Abstract: A method, and a system embodying the method, for programmable scheduling is disclosed, encompassing: enqueueing at least one command into one of a plurality of queues having a plurality of entries; determining a category of the command at the head entry of each of the plurality of queues; processing each determined non-job category command by a non-job command arbitrator; and processing each determined job category command by a job arbitrator and assignor.
    Type: Application
    Filed: June 5, 2017
    Publication date: December 6, 2018
    Applicant: Cavium, Inc.
    Inventors: Timothy Toshio Nakada, Jason Daniel Zebchuk, Gregg Alan Bouchard, Tejas Maheshbhai Bhatt, Hong Jik Kim, Ahmed Shahid, Mark Jon Kwong
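The two-path arbitration this abstract describes can be sketched as: inspect the head entry of each queue, classify it, and route job commands to the job arbitrator and everything else to the non-job arbitrator. The category names and command fields below are assumptions.

```python
# Sketch of head-of-queue command arbitration: pop the head entry of
# each queue, determine its category, and route it to the matching
# arbitrator path. Categories and fields are illustrative assumptions.
from collections import deque

JOB_CATEGORIES = {"job"}  # assumed category tag

def arbitrate(queues):
    """Route the head command of each queue to the matching arbitrator."""
    job_cmds, non_job_cmds = [], []
    for q in queues:
        if not q:
            continue
        cmd = q.popleft()                     # head entry of the queue
        if cmd["category"] in JOB_CATEGORIES:
            job_cmds.append(cmd["name"])      # -> job arbitrator and assignor
        else:
            non_job_cmds.append(cmd["name"])  # -> non-job command arbitrator
    return job_cmds, non_job_cmds

q0 = deque([{"name": "run-kernel", "category": "job"}])
q1 = deque([{"name": "read-reg", "category": "control"}])
jobs, non_jobs = arbitrate([q0, q1])
```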
  • Publication number: 20180349186
    Abstract: Systems and methods are disclosed for scheduling threads on a processor that has at least two different core types, such as an asymmetric multiprocessing system. Each core type can run at a plurality of selectable voltage and frequency scaling (DVFS) states. Threads from a plurality of processes can be grouped into thread groups. Execution metrics are accumulated for threads of a thread group and fed into a plurality of tunable controllers for the thread group. A closed loop performance control (CLPC) system determines a control effort for the thread group and maps the control effort to a recommended core type and DVFS state. A closed loop thermal and power management system can limit the control effort determined by the CLPC for a thread group, and limit the power, core type, and DVFS states for the system. Deferred interrupts can be used to increase performance.
    Type: Application
    Filed: January 12, 2018
    Publication date: December 6, 2018
    Inventors: Jeremy C. Andrus, John G. Dorsey, James M. Magee, Daniel A. Chimene, Cyril de la Cropte de Chanterac, Bryan R. Hinch, Aditya Venkataraman, Andrei Dorofeev, Nigel R. Gamble, Russell A. Blaine, Constantin Pistol
  • Publication number: 20180349187
    Abstract: The system and method generally relate to reducing heat dissipated within a data center, and more particularly, to a system and method for reducing heat dissipated within a data center through service level agreement analysis, and resultant reprioritization of jobs to maximize energy efficiency. A computer implemented method includes performing a service level agreement (SLA) analysis for one or more currently processing or scheduled processing jobs of a data center using a processor of a computer device. Additionally, the method includes identifying one or more candidate processing jobs for a schedule modification from amongst the one or more currently processing or scheduled processing jobs using the processor of the computer device. Further, the method includes performing the schedule modification for at least one of the one or more candidate processing jobs using the processor of the computer device.
    Type: Application
    Filed: August 6, 2018
    Publication date: December 6, 2018
    Inventors: Christopher J. DAWSON, Vincenzo V. DI LUOFFO, Rick A. HAMILTON, II, Michael D. KENDZIERSKI
  • Publication number: 20180349188
    Abstract: An information handling system includes a processor complex with a root complex that provides N serial data lanes, where N is an integer. The information handling system also includes boot process logic that determines that a device is coupled to X of the serial data lanes, where X is an integer less than N, determines that no device is coupled to Y of the serial data lanes, where Y is an integer less than or equal to N-X, and allocates a portion of bus resources of the root complex to the device, the portion being greater than (X+Y)/N.
    Type: Application
    Filed: May 31, 2017
    Publication date: December 6, 2018
    Inventors: John C. Beckett, Robert W. Hormuth
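The allocation rule in this abstract amounts to simple lane arithmetic; a minimal sketch, assuming the granted share is (X+Y)/N of the root complex's bus resources:

```python
# Worked example of the lane-based allocation rule: a device occupying
# X of N serial data lanes, with Y lanes unoccupied, is granted the
# bus-resource share (X + Y) / N rather than only X / N. The exact
# policy here is an assumption for illustration.

def allocate_share(n_lanes, device_lanes, empty_lanes):
    """Fraction of root-complex bus resources granted to the device."""
    assert device_lanes < n_lanes                 # X < N
    assert empty_lanes <= n_lanes - device_lanes  # Y <= N - X
    return (device_lanes + empty_lanes) / n_lanes

# An x8 device on a 16-lane root complex with 8 unused lanes can be
# granted the whole resource pool instead of only half of it.
share = allocate_share(16, 8, 8)   # -> 1.0
```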
  • Publication number: 20180349189
    Abstract: The subject technology provides for dynamic task allocation for neural network models. The subject technology determines an operation performed at a node of a neural network model. The subject technology assigns an annotation to indicate whether the operation is better performed on a CPU or a GPU based at least in part on hardware capabilities of a target platform. The subject technology determines whether the neural network model includes a second layer. The subject technology, in response to determining that the neural network model includes a second layer, for each node of the second layer of the neural network model, determines a second operation performed at the node. Further the subject technology assigns a second annotation to indicate whether the second operation is better performed on the CPU or the GPU based at least in part on the hardware capabilities of the target platform.
    Type: Application
    Filed: September 29, 2017
    Publication date: December 6, 2018
    Inventors: Francesco Rossi, Gaurav Kapoor, Michael R. Siracusa, William B. March
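The per-node annotation pass can be sketched as a walk over the model's layers that tags each operation for the CPU or GPU; the capability tables and operation names below are invented for illustration.

```python
# Sketch of the annotation pass described above: for each node in each
# layer of a neural network model, assign an annotation indicating
# whether the operation is better run on the CPU or GPU of the target
# platform. The capability sets are assumed, not real platform data.

GPU_FRIENDLY = {"conv2d", "matmul", "relu"}  # assumed GPU-capable ops
CPU_ONLY_ON_TARGET = {"custom_nms"}          # assumed gap in GPU support

def annotate(model_layers):
    annotated = []
    for layer in model_layers:
        tagged = []
        for op in layer:
            if op in CPU_ONLY_ON_TARGET:
                tagged.append((op, "cpu"))
            elif op in GPU_FRIENDLY:
                tagged.append((op, "gpu"))
            else:
                tagged.append((op, "cpu"))   # conservative default
        annotated.append(tagged)
    return annotated

plan = annotate([["conv2d", "relu"], ["custom_nms", "matmul"]])
```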
  • Publication number: 20180349190
    Abstract: An information processing device calculates an index value of a load for data received from a communication device belonging to a specific group out of communication devices. The information processing device determines whether the calculated index value of the load reaches a prescribed reference. If the index value of the load reaches the prescribed reference, the information processing device requests the communication device belonging to the specific group to execute a predetermined process in advance. The predetermined process is a process executable by each communication device belonging to the specific group, out of processes to be executed by an information processing device that is to be requested to process the data received from the communication device belonging to the specific group. Thereby, the load of the information processing device is decreased.
    Type: Application
    Filed: August 8, 2018
    Publication date: December 6, 2018
    Applicant: FUJITSU LIMITED
    Inventor: TOMOHIRO NAKAJIMA
  • Publication number: 20180349191
    Abstract: Systems and methods are disclosed for scheduling threads on an asymmetric multiprocessing system having multiple core types. Each core type can run at a plurality of selectable voltage and frequency scaling (DVFS) states. Threads from a plurality of processes can be grouped into thread groups. Execution metrics are accumulated for threads of a thread group and fed into a plurality of tunable controllers. A closed loop performance control (CLPC) system determines a control effort for the thread group and maps the control effort to a recommended core type and DVFS state. A closed loop thermal and power management system can limit the control effort determined by the CLPC for a thread group, and limit the power, core type, and DVFS states for the system. Metrics for workloads offloaded to co-processors can be tracked and integrated into metrics for the offloading thread group.
    Type: Application
    Filed: June 2, 2018
    Publication date: December 6, 2018
    Inventors: JOHN G. DORSEY, DANIEL A. CHIMENE, ANDREI DOROFEEV, BRYAN R. HINCH, EVAN M. HOKE, ADITYA VENKATARAMAN
  • Publication number: 20180349192
    Abstract: In some embodiments, the present invention provides for an exemplary inventive system that includes at least the following components: an electronic control unit having a service oriented architecture (SOA ECU), where the SOA ECU includes: at least one exemplary inventive SOA server; where the SOA ECU is located within a vehicle; where the at least one SOA server is configured to provide at least one service to at least one client ECU that is located within the vehicle; and where the at least one SOA server is configured to assign at least one dedicated processing resource and at least one dedicated memory resource to provide the at least one service.
    Type: Application
    Filed: August 10, 2018
    Publication date: December 6, 2018
    Inventors: Dionis Teshler, Moshe Shlisel, Idan Nadav
  • Publication number: 20180349193
    Abstract: A streaming program execution method of an intelligent terminal is provided. The intelligent terminal does not store a program package of a program before the program is executed. The program package of the program includes a code segment, a read-only data segment, an uninitialized data segment, and a readable/writable data segment and is stored and managed by a server. The intelligent terminal obtains a program execution instruction, downloads the uninitialized data segment, the readable/writable data segment and the code segment of the program package from the server, loads the same into a local storage space and starts the execution of the program. During the execution process, according to a call request of the program on data of the code segment and the read-only data segment, the intelligent terminal downloads the requested data from the server and loads the data into the local storage space for the call of the program.
    Type: Application
    Filed: March 29, 2016
    Publication date: December 6, 2018
    Applicant: CENTRAL SOUTH UNIVERSITY
    Inventors: Yaoxue ZHANG, Letian YI, Jianbin LI
  • Publication number: 20180349194
    Abstract: Systems, methods, and software described herein facilitate accelerated input and output operations with respect to virtualized environments. In an implementation, upon being notified of a guest read process initiated by a guest element running in a virtual machine to read data into a location in guest memory associated with the guest element, a computing system identifies a location in host memory associated with the location in the guest memory and initiates a host read process to read the data into the location in the host memory that corresponds to the location in the guest memory.
    Type: Application
    Filed: July 20, 2018
    Publication date: December 6, 2018
    Inventors: Thomas A. Phelan, Michael Moretti, Dragan Stancevic
  • Publication number: 20180349195
    Abstract: Arrangement (100) for causing a scaling of an application (200) having a set of one or more virtual machines (201-1 . . . 201-n), configured to adapt a threshold value (105-1, 105-2) for scaling the application (200) on the basis of an evaluation of a monitored system key performance indicator (202) and a monitored external key performance indicator (107-2).
    Type: Application
    Filed: December 4, 2015
    Publication date: December 6, 2018
    Inventors: Ibtissam EL KHAYAT, Joerg AELKEN
  • Publication number: 20180349196
    Abstract: A data processing system is described herein that includes two or more software-driven host components that collectively provide a software plane. The data processing system further includes two or more hardware acceleration components that collectively provide a hardware acceleration plane. The hardware acceleration plane implements one or more services, including at least one multi-component service. The multi-component service has plural parts, and is implemented on a collection of two or more hardware acceleration components, where each hardware acceleration component in the collection implements a corresponding part of the multi-component service. Each hardware acceleration component in the collection is configured to interact with other hardware acceleration components in the collection without involvement from any host component. A function parsing component is also described herein that determines a manner of parsing a function into the plural parts of the multi-component service.
    Type: Application
    Filed: August 9, 2018
    Publication date: December 6, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Stephen F. Heil, Adrian M. Caulfield, Douglas C. Burger, Andrew R. Putnam, Eric S. Chung
  • Publication number: 20180349197
    Abstract: A computer-implemented method according to one embodiment includes receiving a computation algorithm to be implemented by one of a plurality of nodes, determining one or more computation operations required by the computation algorithm, identifying virtualization unit metadata for each of the plurality of nodes, determining, from the plurality of nodes, an optimal node for implementing the computation algorithm based on the one or more computation operations and the virtualization unit metadata for each of the plurality of nodes, and returning an identification of the optimal node.
    Type: Application
    Filed: May 31, 2017
    Publication date: December 6, 2018
    Inventors: Sasikanth Eda, Deepak R. Ghuge, Kaustubh I. Katruwar, Sandeep R. Patil
  • Publication number: 20180349198
    Abstract: Shuffling of data into partitions is performed by first grouping input vertices into groups of a limited size. Each group of input vertices may then be simply shuffled into a corresponding group of intermediate vertices, such as by broadcasting. A second grouping occurs in which the intermediate vertices are grouped by partition. The intermediate vertices then shuffle into corresponding output vertices for the respective partitions of that group. If the intermediate vertices are still too large, then this shuffling may involve recursively performing the shuffling just described, until ultimately the number of intermediate vertices shuffling into the output vertices is likewise limited. Thus, the final shuffling into the output vertices might also be simply performed by broadcasting.
    Type: Application
    Filed: May 30, 2017
    Publication date: December 6, 2018
    Inventors: Jin SUN, Shi QIAO, Jaliya Nishantha EKANAYAKE, Marc Todd FRIEDMAN, Clemens Alden SZYPERSKI
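The recursive bounded-fan-in shuffle can be sketched as follows, modeling each vertex as a list of records and ignoring the per-partition grouping for brevity; the fan-in limit is an assumption.

```python
# Sketch of the recursive shuffle: if more vertices feed an output than
# the fan-in limit allows, group them, merge each group into an
# intermediate vertex, and recurse until the final merge has a bounded
# fan-in. Vertices are modeled as plain lists of records.

FAN_IN_LIMIT = 2  # illustrative limit on vertices shuffling into one vertex

def shuffle(vertices):
    # Base case: few enough vertices to merge directly into the output.
    if len(vertices) <= FAN_IN_LIMIT:
        return sorted(sum(vertices, []))
    # Group input vertices, merge each group into an intermediate
    # vertex (the "broadcast"), then recursively shuffle those.
    groups = [vertices[i:i + FAN_IN_LIMIT]
              for i in range(0, len(vertices), FAN_IN_LIMIT)]
    intermediates = [sorted(sum(g, [])) for g in groups]
    return shuffle(intermediates)

out = shuffle([[3], [1], [4], [1], [5], [9]])
# Six input vertices collapse through two intermediate levels so that
# no single merge ever exceeds the fan-in limit.
```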
  • Publication number: 20180349199
    Abstract: A system for container migration includes containers running instances of an application running on a cluster, an orchestrator with a controller, a memory, and a processor in communication with the memory. The processor executes to monitor a vitality metric of the application. The vitality metric indicates that the application is in either a live state or a dead state. Additionally, horizontal scaling for the application is disabled and the application is scaled-down until the vitality metric indicates that the application is in the dead state. Responsive to the vitality metric indicating that the application is in the dead state, the application is scaled-up until the vitality metric indicates that the application is in the live state. Also, responsive to the vitality metric indication transitioning from the dead state to the live state, the application is migrated to a different cluster while the horizontal scaling of the application is disabled.
    Type: Application
    Filed: May 30, 2017
    Publication date: December 6, 2018
    Inventors: Jay Vyas, Huamin Chen
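The probe-then-migrate protocol (scale down until dead, scale up until live, then migrate) can be sketched as below; the vitality rule, a minimum replica count, is an assumed stand-in for the patent's vitality metric.

```python
# Sketch of the container-migration protocol: with horizontal scaling
# disabled, scale the application down until the vitality metric says
# "dead", scale back up until it says "live", then migrate to the
# target cluster. The vitality rule here is an assumption.

MIN_LIVE_REPLICAS = 2  # assumed threshold for the vitality metric

def is_live(replicas):
    return replicas >= MIN_LIVE_REPLICAS

def migrate_with_probe(replicas, target_cluster):
    events = []
    while is_live(replicas):          # scale down into the dead state
        replicas -= 1
        events.append(("scale_down", replicas))
    while not is_live(replicas):      # scale up back into the live state
        replicas += 1
        events.append(("scale_up", replicas))
    events.append(("migrate", target_cluster))
    return replicas, events

replicas, log = migrate_with_probe(3, "cluster-b")
```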
  • Publication number: 20180349200
    Abstract: A hob device includes a power supply unit, a receiving unit configured to receive an item of information, and a control unit configured to control the power supply unit in an operating state and to access the receiving unit. The control unit is configured to deactivate the power supply unit in the operating state for an inactivity time interval and to access the receiving unit during the inactivity time interval.
    Type: Application
    Filed: November 24, 2016
    Publication date: December 6, 2018
    Applicant: BSH Hausgeräte GmbH
    Inventors: Nicolas Blasco Rueda, Sergio Llorente Gil, Daniel Palacios Tomas, David Valeau Martin
  • Publication number: 20180349201
    Abstract: Methods, systems, and media for a platform for collaborative processing of computing tasks. The method includes sending, to client devices, one or more client applications including program code associated with an interactive application and a machine learning application. When executed, the program code causes the client devices to generate a user interface for the interactive application; request, using the generated user interface, inputs from a user of the client devices; receive the requested inputs; process, using computing resources of the client devices, at least part of the machine learning application; and transmit data associated with results of the received inputs and the processing of at least part of the machine learning application. The method further includes receiving and processing the data associated with the results of the received inputs and the processing of at least part of the machine learning application to process the computing tasks.
    Type: Application
    Filed: June 5, 2018
    Publication date: December 6, 2018
    Inventor: Corey Clark
  • Publication number: 20180349202
    Abstract: Examples relate to allocating resources to virtual network functions (VNFs). Some examples include monitoring information associated with a set of VNFs that includes a set of VNF instances. A resource allocation event may be predicted for a VNF instance based on the monitored information and a resource flexing model that is developed using a capacity metric of the VNF instance. A resource flexing plan may be generated based on the resource allocation event and an order of the set of VNFs in a service function chain.
    Type: Application
    Filed: May 30, 2017
    Publication date: December 6, 2018
    Inventors: Puneet SHARMA, Lianjie CAO, Vinay SAXENA
  • Publication number: 20180349203
    Abstract: A communication system according to the present disclosure includes: a management apparatus (30) configured to manage positional information regarding a communication terminal (10); a server (50) configured to provide a communication service for the communication terminal (10), and a control apparatus (60) configured to control start or stop of a communication function included in a communication apparatus (40). The server (50) is arranged in the vicinity of a base station (20), the management apparatus (30) transmits the positional information regarding the communication terminal (10) to the control apparatus (60), the control apparatus (60) controls start or stop of the communication function that the communication apparatus (40) includes based on the positional information, and the control apparatus (60) notifies the communication terminal (10) of start or stop of the communication function that the communication apparatus (40) includes via the management apparatus (30).
    Type: Application
    Filed: November 28, 2016
    Publication date: December 6, 2018
    Applicant: NEC Corporation
    Inventors: Yoshinobu OHTA, Kazuhiro EGASHIRA
  • Publication number: 20180349204
    Abstract: Embodiments of the present application provide a method for implementing a virtual GPU. The method for implementing a virtual GPU includes: allocating to each of the virtual GPUs a running time slice corresponding to the resource requirement of the virtual GPU according to resource requirements of virtual GPUs running on the same physical GPU, wherein a sum of running time slices of all virtual GPUs configured on a physical GPU is less than or equal to a scheduling period; and allocating resources of the physical GPU to the virtual GPUs according to the running time slices allocated to the virtual GPUs.
    Type: Application
    Filed: June 1, 2018
    Publication date: December 6, 2018
    Inventors: Lingfei LIU, Shuangtai TIAN, Xin LONG
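The time-slice allocation can be sketched as a proportional split of the scheduling period, with a check that the slices do not oversubscribe the physical GPU; the period length and demand fractions are assumptions.

```python
# Sketch of virtual-GPU time slicing: each virtual GPU receives a
# running time slice proportional to its resource requirement, and the
# sum of all slices must not exceed the scheduling period. The period
# length and demand fractions are illustrative assumptions.

SCHEDULING_PERIOD_MS = 100  # assumed scheduling period

def allocate_time_slices(demands):
    """demands: {vgpu_name: fraction of the physical GPU requested}."""
    if sum(demands.values()) > 1.0:
        raise ValueError("virtual GPUs oversubscribe the physical GPU")
    return {name: frac * SCHEDULING_PERIOD_MS
            for name, frac in demands.items()}

slices = allocate_time_slices({"vgpu0": 0.5, "vgpu1": 0.25, "vgpu2": 0.25})
# vgpu0 runs 50 ms per period; the three slices exactly fill the period.
```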
  • Publication number: 20180349205
    Abstract: Systems and methods are described for distributing multi-processor workload based on sensor data. A method for managing processing workload within a vehicle includes receiving at least one measurement corresponding to a first processor of a plurality of processors installed in the vehicle; determining that the at least one measurement has satisfied a threshold; identifying a second processor of the plurality of processors installed in the vehicle, based on a type of the second processor or a measurement of an environment associated with the second processor; and distributing workload from the first processor to the second processor.
    Type: Application
    Filed: June 2, 2017
    Publication date: December 6, 2018
    Inventors: Sethu Hareesh Kolluru, Luke Michael Ekkizogloy, Yunwei Liu, Xiufeng Song
  • Publication number: 20180349206
    Abstract: A bot conflict-resolution service agent (BCRSA) for addressing conflicts between bots in a target domain is disclosed. The BCRSA is configured to receive data from a target domain that includes changes made to a content of the target domain, analyze the data to identify a first change made to the content by a first bot and a second change made to the content by a second bot, determine based on the analysis that the first and second changes conflict, determine that the first and second bots are in conflict, select an amelioration action to be executed to resolve the conflict between the first and second bots from a plurality of available amelioration actions, and resolve the conflict by executing the selected amelioration action.
    Type: Application
    Filed: June 1, 2017
    Publication date: December 6, 2018
    Inventors: Thomas D. Erickson, Clifford A. Pickover, Komminist Weldemariam
  • Publication number: 20180349207
    Abstract: A bot conflict-resolution service agent (BCRSA) for addressing conflicts between bots in a target domain is disclosed. The BCRSA is configured to receive data from a target domain that includes changes made to a content of the target domain, analyze the data to identify a first change made to the content by a first bot and a second change made to the content by a second bot, determine based on the analysis that the first and second changes conflict, determine that the first and second bots are in conflict, select an amelioration action to be executed to resolve the conflict between the first and second bots from a plurality of available amelioration actions, and resolve the conflict by executing the selected amelioration action.
    Type: Application
    Filed: November 21, 2017
    Publication date: December 6, 2018
    Inventors: Thomas D. Erickson, Clifford A. Pickover, Komminist Weldemariam
  • Publication number: 20180349208
    Abstract: A semiconductor device includes a central processing unit and a processor on one semiconductor substrate. The processor includes a buffer for storing a first register setting list and notifies the central processing unit of an access complete signal indicating completion of reading a second register setting list within a memory. The central processing unit changes the second register setting list within the memory based on the access complete signal and notifies the processor of an update request signal. The processor reads the second register setting list changed by the central processing unit into the buffer to update the first register setting list based on the update request signal.
    Type: Application
    Filed: August 10, 2018
    Publication date: December 6, 2018
    Inventors: Tetsuji TSUDA, Masaru HASE, Yuki INOUE, Naohiro NISHIKAWA
  • Publication number: 20180349209
    Abstract: Techniques are disclosed relating to efficiently handling execution of multiple threads to perform various actions. In some embodiments, an application instantiates a queue and a synchronization primitive. The queue maintains a set of work items to be operated on by a thread pool maintained by a kernel. The synchronization primitive controls access to the queue by a plurality of threads including threads of the thread pool. In such an embodiment, a first thread of the application enqueues a work item in the queue and issues a system call to the kernel to request that the kernel dispatch a thread of the thread pool to operate on the first work item. In various embodiments, the dispatched thread is executable to acquire the synchronization primitive, dequeue the work item, and operate on it.
    Type: Application
    Filed: December 8, 2017
    Publication date: December 6, 2018
    Inventors: Daniel A. Steffen, Pierre Habouzit, Daniel A. Chimene, Jeremy C. Andrus, James M. Magee, Puja Gupta
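The enqueue-then-dispatch pattern can be sketched with Python's standard threading primitives standing in for the kernel-managed thread pool and system call; this mirrors the shape of the mechanism, not Apple's kernel API.

```python
# Sketch of the dispatch pattern: a lock (the "synchronization
# primitive") guards a queue of work items; the application thread
# enqueues an item and asks for a worker to be dispatched, and the
# dispatched thread acquires the lock, dequeues, and operates on it.
import threading
from collections import deque

work_queue = deque()
queue_lock = threading.Lock()   # the synchronization primitive
results = []

def dispatch_worker():
    # Stand-in for the kernel dispatching a thread-pool thread.
    def worker():
        with queue_lock:        # acquire, dequeue, then operate
            item = work_queue.popleft()
        results.append(item())
    t = threading.Thread(target=worker)
    t.start()
    return t

with queue_lock:                # application thread enqueues a work item
    work_queue.append(lambda: "done")
t = dispatch_worker()           # stand-in for the system call
t.join()
```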
  • Publication number: 20180349210
    Abstract: Embodiments for detecting deadlock in a distributed computing environment. Potential deadlocks between resources of nodes in a computing cluster are detected by determining resource reverse pairs of the resources for each transaction from trace or log files using data analytics. The potential deadlocks are identified offline by matching a global or local resource between the nodes in sub-transactions of each transaction as recursively identified from a transaction resource chain.
    Type: Application
    Filed: June 5, 2017
    Publication date: December 6, 2018
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Shuo FENG, Zhi Hong MA, Zhiyong TIAN, Yan ZHANG, Jia Wei ZHOU
  • Publication number: 20180349211
    Abstract: NUMA-aware reader-writer locks may leverage lock cohorting techniques that introduce a synthetic level into the lock hierarchy (e.g., one whose nodes do not correspond to the system topology). The synthetic level may include a global reader lock and a global writer lock. A writer thread may acquire a node-level writer lock, then the global writer lock, and then the top-level lock, after which it may access a critical section protected by the lock. The writer may release the lock (if an upper bound on consecutive writers has been met), or may pass the lock to another writer (on the same node or a different node, according to a fairness policy). A reader may acquire the global reader lock (whether or not node-level reader locks are present), and then the top-level lock. However, readers may only hold these locks long enough to increment reader counts associated with them.
    Type: Application
    Filed: August 6, 2018
    Publication date: December 6, 2018
    Inventors: David Dice, Virendra J. Marathe
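The reader path, in which readers briefly hold locks only long enough to bump reader counts while writers wait for a zero count, can be sketched as below. This compresses the NUMA-aware lock hierarchy into a single counter and is a simplification, not the patented design.

```python
# Simplified sketch of the reader-writer interaction: readers hold a
# mutex only long enough to increment or decrement a reader count; a
# writer may enter the critical section only when the count is zero.
import threading

class TwoLevelReaderWriterLock:
    def __init__(self):
        self._mutex = threading.Lock()
        self._readers = 0          # stands in for global + top-level counts

    def reader_enter(self):
        with self._mutex:          # held only long enough to count
            self._readers += 1

    def reader_exit(self):
        with self._mutex:
            self._readers -= 1

    def writer_try_enter(self):
        # A writer is admitted only when no readers hold the lock.
        with self._mutex:
            return self._readers == 0

lock = TwoLevelReaderWriterLock()
lock.reader_enter()
blocked = not lock.writer_try_enter()   # writer blocked by active reader
lock.reader_exit()
admitted = lock.writer_try_enter()      # writer admitted once count is zero
```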
  • Publication number: 20180349212
    Abstract: Methods and systems for data communication in a distributed computing environment include: providing a first network node associated with a first data processing location, the first network node providing a network interface for a first distributed computing node at the first data processing location; and forwarding task data flow messages from the first distributed computing node to a second distributed computing node at a second data processing location via a second network node associated with the second data processing location.
    Type: Application
    Filed: June 6, 2017
    Publication date: December 6, 2018
    Inventors: Shuhao LIU, Li CHEN, Baochun LI, Jin CHEN, Chong CHEN
  • Publication number: 20180349213
    Abstract: The present disclosure relates to dynamically controlling log levels in a datacenter. An example machine-readable medium may store instructions executable by a processing resource to receive a stream of log data from a plurality of end devices via associated logging interfaces in the virtual datacenter. Further, the received stream of log data is dynamically analyzed. Furthermore, the log level of any one or more of the plurality of end devices is then controlled based on the analysis. The log data associated with the controlled log level of any of the one or more of the plurality of end devices is then received, which can then assist in debugging and troubleshooting.
    Type: Application
    Filed: June 1, 2017
    Publication date: December 6, 2018
    Inventors: Jinto ANTONY, Hariharan JEYARAMAN GANESAN, Madhusudhanan GANGADHARAN, Kalyan Venu Gopal ABBARAJU
  • Publication number: 20180349214
    Abstract: A system and method enable loosely-coupled lock-step computing and include sensors that detect or measure a physical property, as well as server groups. Each server group is serially linked to another server group and includes server instances operating in virtual synchrony. Virtual synchrony middleware receives outputs from multiple server instances and renders a single reply based on the outputs from the multiple server instances. The virtual synchrony middleware replicates and orders incoming requests to the server groups to ensure each of the server instances of that server group receives the same incoming requests in the same order.
    Type: Application
    Filed: May 31, 2017
    Publication date: December 6, 2018
    Inventors: Kerry Wayne Johnson, Christopher William Lewis Hobbs, Peter Shook
  • Publication number: 20180349215
    Abstract: Techniques for managing message transmission in a large networked computer system that includes multiple individual networked computing systems are disclosed. Message passing among the computing systems includes a sending computing device transmitting a message to a receiver computing device and a receiver computing device consuming that message. A build-up of data stored in a buffer at the receiver can reduce performance. In order to reduce the potential performance degradation associated with large amounts of “waiting” data in the buffer, a sending computer system first determines whether the receiver computer system is ready to receive a message and does not transmit the message if the receiver computer system is not ready. To determine whether the receiver computer system is ready to receive a message, the receiver computer system, at the request of the sending computer system, checks a counting filter that stores indications of whether particular messages are ready.
    Type: Application
    Filed: June 5, 2017
    Publication date: December 6, 2018
    Applicant: Advanced Micro Devices, Inc.
    Inventor: Shuai Che
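    A toy sketch of the readiness check described above: the receiver maintains a counting filter of message tags it is prepared to consume, and the sender queries it before transmitting. The counting filter here is a plain hash-based counter array for illustration; the patent targets hardware-level message passing, and the hash scheme below is an assumption:

    ```python
    # Sketch: sender-side readiness check against a receiver's counting filter.
    # A counting filter is like a Bloom filter whose bits are small counters,
    # so entries can be both added and removed.

    class CountingFilter:
        def __init__(self, size=64):
            self.counts = [0] * size

        def _slots(self, key):
            # two simple hash positions, for illustration only
            h = hash(key)
            return [h % len(self.counts), (h // 7) % len(self.counts)]

        def add(self, key):
            for s in self._slots(key):
                self.counts[s] += 1

        def remove(self, key):
            for s in self._slots(key):
                if self.counts[s] > 0:
                    self.counts[s] -= 1

        def maybe_contains(self, key):
            # may give false positives (hash collisions), never false negatives
            return all(self.counts[s] > 0 for s in self._slots(key))

    def try_send(filt, msg_tag, transmit):
        """Only transmit if the receiver has marked msg_tag as ready;
        otherwise back off instead of filling the receiver's buffer."""
        if filt.maybe_contains(msg_tag):
            transmit(msg_tag)
            return True
        return False
    ```

    The receiver would call `add` when it becomes ready for a message and `remove` after consuming it, so the sender's check tracks the receiver's actual buffer capacity.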
  • Publication number: 20180349216
    Abstract: Systems, methods, and devices for managing predetermined functions on a mobile device within a moving vehicle, the mobile device having an operating system (OS) that includes an event API installed therein that is configured for two-way communication with an external control device, the control device being installed within the vehicle and further configured to communicate with a software application installed and running in memory resident on the mobile device. In response to initiation of a predetermined function on the mobile device, a notification message is transmitted by the event API to the control device. The control device then communicates with the software application to determine a desired action for the mobile device to take with respect to the predetermined function. The control device then instructs the mobile device on the action to take on the predetermined function by transmitting an action message to the event API.
    Type: Application
    Filed: July 23, 2018
    Publication date: December 6, 2018
    Inventors: Joseph E. Breaux, Chad A. Kennedy, Michael W. Lynn
  • Publication number: 20180349217
    Abstract: Provided is a computer program product for managing bus interface errors in a storage system coupled to a host and storage. A determination is made as to whether a first number of correctable errors on a first bus interface, connecting a first processing unit to the storage, exceeds a second number of correctable errors on a second bus interface, connecting a second processing unit to the storage, by a difference threshold. The correctable errors in the first and second bus interfaces are detected and corrected in the first and second bus interfaces by first hardware and second hardware, respectively. In response to determining that the first number of correctable errors exceeds the second number of correctable errors by the difference threshold, at least a portion of Input/Output (I/O) requests are redirected to a second processing unit using the second bus interface to connect to the storage.
    Type: Application
    Filed: June 2, 2017
    Publication date: December 6, 2018
    Inventors: Matthew G. Borlick, Lokesh M. Gupta, Trung Nguyen
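    The decision rule in the abstract above reduces to comparing two correctable-error counters against a difference threshold. A hypothetical sketch (the function name and threshold value are illustrative, and whether "exceeds by the threshold" means strict or non-strict comparison is not specified; this sketch uses non-strict):

    ```python
    def should_redirect(errors_bus1, errors_bus2, difference_threshold):
        """Redirect I/O from the first processing unit to the second when
        the first bus interface has accumulated at least
        `difference_threshold` more correctable errors than the second."""
        return errors_bus1 - errors_bus2 >= difference_threshold

    # Example: bus 1 has 12 correctable errors, bus 2 has 3, threshold is 5,
    # so I/O would be redirected over the second bus interface.
    ```

    Counting only the *difference* rather than an absolute error count avoids redirecting when both interfaces degrade equally (e.g., a shared environmental cause).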
  • Publication number: 20180349218
    Abstract: Some embodiments of the invention provide a novel architecture for debugging devices. This architecture includes numerous devices that without user intervention automatically detect and report bug events to a set of servers that aggregate and process the bug events. When a device detects a potential bug event, the device in some embodiments generates a description of the potential bug event, and sends the generated description to the server set through a network. In addition to generating such a description, the device in some embodiments directs one or more of its modules to gather and store a collection of one or more data sets that are relevant to the potential bug event, in case the event has to be further analyzed by the server set. In the discussion below, the generated bug-event description is referred to as the event signature, while the gathered collection of data sets for an event is referred to as the event's data archive.
    Type: Application
    Filed: July 12, 2017
    Publication date: December 6, 2018
    Inventors: Henri S. Berger, Eisuke Arai, Amit K. Vyas, David S. Choi, Franco Travostino, Abhinav Pathak, Daniel Lertpratchya, Albert Liu, Anand Ramadurai, Olivier Mardinian, Vividh Siddha
  • Publication number: 20180349219
    Abstract: Some embodiments of the invention provide a novel architecture for debugging devices. This architecture includes numerous devices that without user intervention automatically detect and report bug events to a set of servers that aggregate and process the bug events. When a device detects a potential bug event, the device in some embodiments generates a description of the potential bug event, and sends the generated description to the server set through a network. In addition to generating such a description, the device in some embodiments directs one or more of its modules to gather and store a collection of one or more data sets that are relevant to the potential bug event, in case the event has to be further analyzed by the server set. In the discussion below, the generated bug-event description is referred to as the event signature, while the gathered collection of data sets for an event is referred to as the event's data archive.
    Type: Application
    Filed: July 12, 2017
    Publication date: December 6, 2018
    Inventors: Henri S. Berger, Eisuke Arai, Amit K. Vyas, David S. Choi, Franco Travostino, Abhinav Pathak, Daniel Lertpratchya, Albert Liu, Anand Ramadurai, Olivier Mardinian, Vividh Siddha
  • Publication number: 20180349220
    Abstract: Methods and systems for printing accurate three-dimensional structures include printing a three-dimensional structure according to an original three-dimensional model. The original three-dimensional model is adjusted to reduce measured differences between the printed three-dimensional structure and the original three-dimensional model. A three-dimensional structure is printed according to the adjusted three-dimensional model.
    Type: Application
    Filed: June 5, 2017
    Publication date: December 6, 2018
    Inventors: Benjamin D. Briggs, Lawrence A. Clevenger, Leigh Anne H. Clevenger, Christopher J. Penny, Michael Rizzolo, Aldis G. Sipolins
  • Publication number: 20180349221
    Abstract: Methods and systems are directed to detecting and classifying changes in a distributed computing system. Divergence values are computed from distributions of different types of event messages generated in time intervals of a sliding time window. Each divergence value is a measure of change in types of events generated in each time interval. When a divergence value, or a rate of change in divergence values, exceeds a threshold, the time interval associated with the threshold violation is used to determine a change point in the operation of the distributed computing system. Based on the change point, a start time of the change is determined. The change is classified based on various previously classified change points in the distributed computing system. A recommendation may be generated to address the change based on the classification of the change.
    Type: Application
    Filed: May 30, 2017
    Publication date: December 6, 2018
    Applicant: VMware, Inc.
    Inventors: Ashot Nshan Harutyunyan, Arnak Poghosyan, Naira Movses Grigoryan, Nicholas Kushmerick, Harutyun Beybutyan
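    The abstract above computes divergence values between event-type distributions in consecutive intervals of a sliding window and flags intervals where the divergence exceeds a threshold. A minimal sketch using Jensen–Shannon divergence (the abstract does not name a specific divergence measure, so JS is an assumption here):

    ```python
    import math

    def _kl(p, q):
        # Kullback-Leibler divergence in bits; terms with p_i = 0 contribute 0
        return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

    def js_divergence(p, q):
        """Jensen-Shannon divergence between two event-type distributions
        (lists of probabilities over the same set of event types)."""
        m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
        return (_kl(p, m) + _kl(q, m)) / 2

    def detect_change_points(windows, threshold):
        """windows: per-interval event-type distributions.
        Returns indices of intervals whose divergence from the previous
        interval exceeds `threshold` -- candidate change points."""
        return [i for i in range(1, len(windows))
                if js_divergence(windows[i - 1], windows[i]) > threshold]
    ```

    JS divergence is bounded (0 to 1 bit) and symmetric, which makes a fixed threshold meaningful across intervals; the patented method additionally thresholds the *rate of change* of divergence and classifies the detected change points.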
  • Publication number: 20180349222
    Abstract: A selection decoder controls levels of a plurality of selection signals based on an address bit having at least one or more bits. A memory module is selected when its corresponding selection signal is at an activated level, and data can be read and written therein. A failure determination unit determines whether or not the selection decoder is in a failed state based on the levels of the plurality of selection signals.
    Type: Application
    Filed: May 21, 2018
    Publication date: December 6, 2018
    Applicant: Renesas Electronics Corporation
    Inventors: Takeshi HASHIZUME, Naoya FUJITA, Shunya NAGATA, Yoshisato YOKOYAMA, Katsumi SHINBO, Kouji SATOU
  • Publication number: 20180349223
    Abstract: A method by a dispersed storage (DS) processing unit of a dispersed storage network (DSN) begins by sending a set of data access requests regarding a data access transaction to a set of storage units of the DSN. The method continues by receiving from each of at least some storage units, a storage-revision indicator which includes a content-revision field, a delete-counter field, and a contest-counter field. The method continues by generating an anticipated storage-revision indicator based on a current revision level of the set of encoded data slices and based on a data access type of the data access transaction. The method continues by comparing the anticipated storage-revision indicator with the storage-revision indicators. When a threshold number of the storage-revision indicators received from the at least some storage units substantially match the anticipated storage-revision indicator, the method continues by executing the data access transaction.
    Type: Application
    Filed: June 1, 2017
    Publication date: December 6, 2018
    Inventors: Greg R. Dhuse, Ravi V. Khadiwala
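    The consensus step in the abstract above, executing the transaction only when a threshold number of storage units report an indicator matching the anticipated one, can be sketched as a simple threshold count. Field names follow the abstract, but the matching rule is simplified to exact equality (the patent says "substantially match", which this sketch does not model):

    ```python
    from collections import namedtuple

    # Fields follow the abstract's storage-revision indicator.
    StorageRevision = namedtuple(
        "StorageRevision",
        ["content_revision", "delete_count", "contest_count"])

    def can_execute(anticipated, received_indicators, threshold):
        """Execute the data access transaction only when at least
        `threshold` storage units report an indicator equal to the
        anticipated one (simplified from 'substantially match')."""
        matches = sum(1 for ind in received_indicators if ind == anticipated)
        return matches >= threshold
    ```

    In a dispersed storage network the threshold would typically be the decode threshold of the erasure code, so a successful vote guarantees enough consistent slices to complete the access.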
  • Publication number: 20180349224
    Abstract: A method includes acquiring, by a managing unit of a dispersed storage network (DSN), storage unit status information and data object storage status information from a plurality of storage units of DSN memory of the DSN. The method further includes determining, by the managing unit, DSN status information of the DSN memory based on the storage unit status information and the data object storage status information. The method further includes identifying, by the managing unit, DSN memory issues within the DSN memory. The method further includes prioritizing, by the managing unit, corrective remedies for the DSN memory issues based on the status information of the DSN memory. The method further includes facilitating, by the managing unit, the execution of the prioritized corrective remedies to correct the DSN memory issues.
    Type: Application
    Filed: June 2, 2017
    Publication date: December 6, 2018
    Inventor: Asimuddin Kazi
  • Publication number: 20180349225
    Abstract: The present disclosure is drawn to, among other things, a method of managing a memory device. In some aspects, the method includes receiving data to be stored in a storage memory, wherein the storage memory is coupled to the memory device, wherein the memory device includes a first memory type and a second memory type different from the first memory type; storing a first copy of the received data in the first memory type; storing a second copy of the received data in the second memory type; receiving indication of a power loss to the memory device; in response to receiving indication of the power loss, copying the second copy from the second memory type to the storage memory; detecting for power restoration to the memory device after the power loss; and in response to detecting power restoration to the memory device, restoring data to the first memory type by copying data from the second memory type to the first memory type.
    Type: Application
    Filed: May 30, 2018
    Publication date: December 6, 2018
    Applicant: Everspin Technologies, Inc.
    Inventors: Pankaj BISHNOI, Trevor Sydney SMITH, James MACDONALD
  • Publication number: 20180349226
    Abstract: The described embodiments set forth techniques for preserving clone relationships between files at a computing device. In particular, the techniques involve identifying clone relationships between files in conjunction with performing operations on the files where it can be beneficial to preserve the clone relationships. The operations can include, for example, preserving clone relationships between files that are being copied from a source storage device (that supports file cloning) to a destination storage device that supports file cloning. Additionally, the operations can include preserving clone relationships when backing up and restoring files between a source storage device (that supports file cloning) and a destination storage device that does not support file cloning. In this manner, the various benefits afforded by the clone relationships between files can be retained even as the files are propagated to destination storage devices that may or may not support file cloning.
    Type: Application
    Filed: September 29, 2017
    Publication date: December 6, 2018
    Inventors: Pavel CISLER, Christopher A. WOLF, Loic E. VANDEREYKEN, Eric A. WEISS
  • Publication number: 20180349227
    Abstract: The embodiments set forth techniques for performing incremental backups of a source file system volume (FSV) managed by a source computing device. According to some embodiments, the source computing device can be configured to generate a current snapshot of the source FSV, where the current snapshot complements a previous snapshot of the source FSV (e.g., established during a previous backup). In some cases, to free up storage space, the data for files belonging to the source FSV can be stripped from the previous snapshot (where metadata for the files remains intact). Next, the source computing device can generate, within a destination storage device, a second snapshot of a destination FSV (that corresponds to the source FSV). In turn, the source computing device identifies changes made to the source FSV based on the current snapshot and the previous snapshot, and reflects the changes within the second snapshot of the destination FSV.
    Type: Application
    Filed: December 13, 2017
    Publication date: December 6, 2018
    Inventors: Pavel CISLER, Pavel SOKOLOV, Dominic B. GIAMPAOLO, Eric A. WEISS, Christopher A. WOLF
  • Publication number: 20180349228
    Abstract: A device for recovering data from a computer includes a data storage device that is capable of being coupled to a communication port of the computer. The data storage device stores a recovery program. The recovery program may automatically execute when the data storage device is coupled to the communication port of the computer and the computer is on. When executing, the recovery program may start up, or boot, a processor of the computer. The recovery program may be used on a computer with a defective operating system. Security (e.g., a password, etc.) may be automatically bypassed during execution of the recovery program. The recovery program enables a user to identify and copy files that have been stored on a hard drive associated with the old computer, and to copy selected files to another data storage device. The device may also present a user with an option to wipe the hard drive and to facilitate recycling of the old computer.
    Type: Application
    Filed: June 6, 2018
    Publication date: December 6, 2018
    Inventors: Daniel Goodman, Pavels Jonkins
  • Publication number: 20180349229
    Abstract: A data repository configured for storing original content and modified content which are addressable for point-in-time retrieval thereof. The data repository can be parsed to identify related data in another separate data source that may be affected by changes reflected in a versioned repository which is generated after an action is implemented on one or more digital files stored within the data repository.
    Type: Application
    Filed: August 10, 2018
    Publication date: December 6, 2018
    Applicant: Propylon, Inc.
    Inventor: Sean McGrath
  • Publication number: 20180349230
    Abstract: In one example in accordance with the present disclosure, a method for context aware data backup may include determining a first set of files that are altered during normal operation of a computer system and storing the first set of files at a destination location. The method may include determining a second set of files that are altered during normal operation of the computer system and determining a size difference between the first set of files and the second set of files. The method may also include determining a time difference between a first time taken to copy the first set of files and a second time taken to copy a previous set of files to the destination location. The method may include determining that the size difference and the time difference meet a threshold for backup and storing the second set of files at the destination location.
    Type: Application
    Filed: January 28, 2016
    Publication date: December 6, 2018
    Inventors: Vijay Gupta, Archana Bharathidasan, David Earl Wiser, Vsevolod Yakhontov, Aditya Shukla
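    The backup trigger in the abstract above combines a file-set size difference and a copy-time difference against a threshold. A hypothetical sketch (the abstract does not specify how the two differences combine, so this sketch assumes both must meet their respective limits; the function and parameter names are illustrative):

    ```python
    def should_backup(first_set_size, second_set_size,
                      first_copy_time, prev_copy_time,
                      size_limit, time_limit):
        """Context-aware trigger: store the second set of files at the
        destination only when both the change in data size and the change
        in copy time are large enough to justify a new backup pass."""
        size_diff = abs(second_set_size - first_set_size)
        time_diff = abs(first_copy_time - prev_copy_time)
        return size_diff >= size_limit and time_diff >= time_limit
    ```

    Requiring both conditions keeps small or cheap deltas from triggering a backup, which is the context-aware behavior the abstract describes; a real implementation would tune the limits per system.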
  • Publication number: 20180349231
    Abstract: A computing apparatus, including: a hardware platform including a processor and memory; and a system management interrupt (SMI) handler; first logic configured to provide a first container and a second container via the hardware platform; and second logic configured to: detect an uncorrectable error in the first container; responsive to the detecting, generate a degraded system state; provide a degraded state message to the SMI handler; instruct the second container to seek a recoverable state; determine that the second container has entered a recoverable state; and initiate a recovery operation.
    Type: Application
    Filed: May 31, 2017
    Publication date: December 6, 2018
    Applicant: Intel Corporation
    Inventors: Subhankar Panda, Sarathy Jayakumar, Gaurav Porwal, Theodros Yigzaw