Patent Applications Published on April 2, 2020
-
Publication number: 20200104167
Abstract: The disclosure provides a data processing device and method. The data processing device may include: a task configuration information storage unit and a task queue configuration unit. The task configuration information storage unit is configured to store configuration information of tasks. The task queue configuration unit is configured to configure a task queue according to the configuration information stored in the task configuration information storage unit. According to the disclosure, a task queue may be configured according to the configuration information.
Type: Application
Filed: November 28, 2019
Publication date: April 2, 2020
Inventors: Tianshi CHEN, Lei ZHANG, Shaoli LIU
-
Publication number: 20200104168
Abstract: Techniques for adapting behavioral pairing to runtime conditions in a task assignment system are disclosed. In one particular embodiment, the techniques may be realized as a method for adapting behavioral pairing to runtime conditions in a task assignment system comprising: determining, by at least one computer processor communicatively coupled to and configured to operate in the task assignment system, at least two pairing models for assigning tasks in the task assignment system; monitoring, by the at least one computer processor, at least one parameter of the task assignment system; and selecting, by the at least one computer processor, one of the at least two pairing models based on a value of the at least one parameter.
Type: Application
Filed: December 3, 2019
Publication date: April 2, 2020
Applicant: Afiniti, Ltd.
Inventors: Syed Meesum Raza RIZVI, Vikash KHATRI
-
Publication number: 20200104169
Abstract: Disclosed are risk visualisation methods and systems of a production environment tool. Tasks are delivered to a task board in an arrangement that indicates how long the tasks have been on the board, that is, the defined time period. For each task, a penetration value is calculated which is derived from the time elapsed since the initiation time of the task as a percentage of the defined time period. A criterion of a percentile value of the tasks establishes a subset of the tasks that have lower penetration values. Then by determining a specific task in the subset of the sequence of tasks that is at the high end of the percentile value, a visual indication of that specific task on the board provides notice to an observer of the board of whether there is a risk that the resource is falling behind in task completion.
Type: Application
Filed: November 18, 2016
Publication date: April 2, 2020
Inventor: James Richard Powell
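The penetration and percentile logic described in this abstract reduces to simple arithmetic and sorting. The sketch below is a toy illustration, not the claimed method; the function names and the 80th-percentile default are invented for illustration:

```python
def penetration(elapsed, defined_period):
    """Time elapsed since task initiation, as a percentage of the defined period."""
    return 100.0 * elapsed / defined_period

def highlight_task(tasks, percentile=80):
    """Return the task at the high end of the lower-penetration subset.

    `tasks` is a list of (task_id, elapsed, defined_period) tuples.
    """
    scored = sorted(
        (penetration(elapsed, period), task_id)
        for task_id, elapsed, period in tasks
    )
    # Keep the subset of tasks with the lowest penetration values.
    cutoff = max(1, len(scored) * percentile // 100)
    subset = scored[:cutoff]
    # The specific task at the high end of that subset is flagged on the board.
    return subset[-1][1]
```

Under these assumptions, the flagged task marks the boundary below which the resource appears to be keeping pace.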
-
Publication number: 20200104170
Abstract: In an embodiment, a method for scheduling tasks comprises, at a task scheduler of a processing node, the processing node being a part of a processing group of a plurality of processing groups: retrieving a first task descriptor from a local memory, the task descriptor corresponding to a task scheduled for execution at the current time and comprising at least a task execution time, a frequency for performing the task, and a task identifier; determining whether the task descriptor is assigned to the processing group associated with the task scheduler for execution; if it is determined that the task descriptor is assigned to the processing group associated with the task scheduler for execution: determining whether the task descriptor is assigned to the task scheduler for execution; if it is determined that the task descriptor is assigned to the task scheduler for execution: executing the task; updating the task execution time based on the current task execution time and the frequency for performing the task; and re-q
Type: Application
Filed: November 30, 2018
Publication date: April 2, 2020
Inventors: ALEXANDER ELSE, HAITAO LI
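The "updating the task execution time based on the current task execution time and the frequency" step is plain arithmetic on the descriptor. A minimal sketch with a hypothetical descriptor shape (the patent's descriptor also carries a task identifier and a group assignment; the loop that skips past missed slots is an assumption, since the abstract is truncated before the re-queuing step):

```python
from dataclasses import dataclass

@dataclass
class TaskDescriptor:
    task_id: str
    execution_time: float  # next scheduled execution time
    frequency: float       # interval between executions

def advance(desc: TaskDescriptor, now: float) -> TaskDescriptor:
    """After executing a periodic task, push its execution time past `now`
    in whole multiples of its frequency, ready to be re-queued."""
    while desc.execution_time <= now:
        desc.execution_time += desc.frequency
    return desc
```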
-
Publication number: 20200104171
Abstract: Methods, systems, and computer-readable media for orchestration of computations using a remote repository are disclosed. A representation of one or more inputs to a computation is stored in a repository. The computation is assigned to one or more hosts of a plurality of hosts. A representation of program code executable to perform the computation is stored in the repository. A local copy of the one or more inputs is stored on the one or more hosts. The computation is initiated on the one or more hosts using the program code and the local copy of the one or more inputs. The computation is initiated for a plurality of keys. The computation succeeds for one or more keys after the computation has failed for one or more other keys. A representation of one or more outputs of the computation is stored in the repository.
Type: Application
Filed: September 28, 2018
Publication date: April 2, 2020
Applicant: Amazon Technologies, Inc.
Inventors: Marvin Michael Theimer, Julien Jacques Ellie, Colin Watson, Ullas Sankhla, Swapandeep Singh, Kerry Hart, Paul Anderson, Brian Dahmen, Suchi Nandini, Yunhan Chen, Shu Liu, Arjun Raman, Yuxin Xie, Fengjia Xiong
-
Publication number: 20200104172
Abstract: An information processing device includes a data input unit that receives time-series data from an external device, a first processing unit that adds time information to data acquired from the data input unit and processes data to which the time information has been added, on a real-time operating system that performs a process within a specified time period, and a second processing unit that processes data to which the time information has been added, on a non-real-time operating system.
Type: Application
Filed: July 31, 2017
Publication date: April 2, 2020
Applicant: Mitsubishi Electric Corporation
Inventor: Osamu NASU
-
Publication number: 20200104173
Abstract: A connection request is received from an agent at a particular one of multiple communication processes of an automation engine, for the agent to connect to the automation engine through the particular communication process. The agent is to act as one of multiple agents configured to perform automation actions at the direction of the automation engine system, where communication between the automation engine and the agents is facilitated through the communication processes. A number of connections at each of the communication processes is determined and a threshold value is identified. The particular communication process determines that connecting the particular agent to the particular communication process would cause the number of connections at the particular communication process to violate the threshold. The particular communication process causes the particular agent to instead connect to another one of the communication processes of the automation engine based on the threshold.
Type: Application
Filed: October 18, 2018
Publication date: April 2, 2020
Applicant: CA Software Österreich GmbH
Inventors: Andreas Ronge, Johann Niederer
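The threshold check and redirect described in this abstract can be sketched in a few lines. The least-loaded fallback policy is an assumption (the abstract only says the agent is sent to another communication process), and all names are illustrative:

```python
def route_agent(connection_counts, threshold, preferred):
    """Decide which communication process an agent's connection lands on.

    `connection_counts` maps process id -> current number of connections.
    If accepting the agent at `preferred` would violate `threshold`,
    redirect to the least-loaded other process with capacity.
    """
    if connection_counts[preferred] + 1 <= threshold:
        return preferred
    candidates = [p for p in connection_counts
                  if p != preferred and connection_counts[p] + 1 <= threshold]
    if not candidates:
        raise RuntimeError("all communication processes at capacity")
    return min(candidates, key=connection_counts.get)
```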
-
Publication number: 20200104174
Abstract: Embodiments include receiving a workload object that comprises program identifiers representing programs to be invoked in a particular order by a computing system, extracting the program identifiers from the workload object, and translating the program identifiers into respective predictions related to consumption of a resource by the programs corresponding to the program identifiers. The translating includes providing the program identifiers in the particular order as inputs to an encoder of a trained prediction model and generating, by a decoder of the trained prediction model, the predictions based, at least in part, on the particular order of the program identifiers.
Type: Application
Filed: September 30, 2018
Publication date: April 2, 2020
Applicant: CA, Inc.
Inventors: Vitezslav Vit Vlcek, Premysl Zitka, Petr Vilcinsky, Maryna Pavlienova, Martin Strejc
-
Publication number: 20200104175
Abstract: Methods, systems, and computer-readable media for parameter variations for computations using a remote repository are disclosed. A first computation is assigned to a first set of one or more hosts. The first computation is associated with a first set of parameters including one or more inputs and program code. A second computation is assigned to a second set of one or more hosts and is associated with a second set of parameters. Execution of the first computation is initiated using the first set of hosts and the first set of parameters. Local copies of the input(s) and program code are obtained from a storage service using a credential supplied by a repository manager. Execution of the second computation is initiated using the second set of hosts and the second set of parameters as obtained using a credential supplied by the repository manager.
Type: Application
Filed: September 28, 2018
Publication date: April 2, 2020
Applicant: Amazon Technologies, Inc.
Inventors: Marvin Michael Theimer, Julien Jacques Ellie, Colin Watson, Ullas Sankhla, Swapandeep Singh, Kerry Hart, Paul Anderson, Brian Dahmen, Suchi Nandini, Yunhan Chen, Shu Liu, Arjun Raman, Yuxin Xie, Fengjia Xiong
-
Prevent Counter Wrap During Update-Side Grace-Period-Request Processing in Tree-SRCU Implementations
Publication number: 20200104176
Abstract: In an SRCU environment, per-processor data structures each maintain a list of SRCU callbacks enqueued by SRCU updaters. An SRCU management data structure maintains a current-grace-period counter that tracks a current SRCU grace period, and a future-grace-period counter that tracks a farthest-in-the-future SRCU grace period needed by the SRCU callbacks enqueued by the SRCU updaters. A combining tree is used to mediate a plurality of grace-period-start requests concurrently vying for an opportunity to update the future-grace-period counter on behalf of SRCU callbacks. The current-grace-period counter is prevented from wrapping during some or all of the grace-period-start request processing. In an embodiment, the counter wrapping is prevented by performing some or all of the grace-period-start request processing within an SRCU read-side critical section.
Type: Application
Filed: October 1, 2018
Publication date: April 2, 2020
Inventor: Paul E. McKenney
-
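Counter wrap is the classic hazard behind this entry: a free-running grace-period counter eventually overflows, so comparisons between grace-period numbers must remain correct across the wrap. Below is a toy 32-bit model of a wrap-safe comparison of the kind RCU-style code relies on; it illustrates the hazard only and is not the patented mechanism:

```python
MASK = (1 << 32) - 1  # model a 32-bit grace-period counter

def counter_geq(a, b):
    """Wrap-safe 'a >= b' for counters that wrap modulo 2**32.

    Compute the unsigned difference: if (a - b) mod 2**32 falls in the
    lower half of the range, a is at or ahead of b, even when the raw
    value of a has wrapped past zero.
    """
    return ((a - b) & MASK) < (1 << 31)
```

A naive `a >= b` would report that a freshly wrapped counter (e.g. 2) is behind an old value near the top of the range; the unsigned-difference form gets this right.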
Publication number: 20200104177
Abstract: A resource allocation system of the invention includes: a resource allocation unit 501 that allocates resources for executing services, allocating same to one or more functional units providing a predetermined function as a service; two or more resource provision units 502 that provide resources; a surplus resource amount acquisition unit 503 that acquires a surplus resource amount from a predetermined resource provision unit 502 among the two or more resource provision units 502; and a parameter determination unit 504 that, on the basis of the surplus resource amount, determines at least one among allocation timing, allocatable amount, and priority order at allocation, said allocation timing being timing at which resource allocation control is performed in the resource allocation unit 501 and said allocatable amount being a resource amount that can be allocated during one allocation control.
Type: Application
Filed: May 30, 2017
Publication date: April 2, 2020
Applicant: NEC CORPORATION
Inventor: Masaki INOKUCHI
-
Publication number: 20200104178
Abstract: Transaction-enabling systems and methods are disclosed. A system may include a fleet of machines each having a task resource requirement. A controller may include a resource requirement circuit to determine an amount of a resource required for each of the machines to service the task and a resource distribution circuit structured to adaptively improve a utilization of the resource for each of the fleet of machines.
Type: Application
Filed: November 22, 2019
Publication date: April 2, 2020
Inventor: Charles Howard Cella
-
Publication number: 20200104179
Abstract: A present invention embodiment manages resources of a distributed system to perform computational tasks within a specified time interval. A received object is classified into a type of computational processing, and a quantity of objects is maintained for each type. An execution time for processing a single object is estimated based on a corresponding computation resource template. A total execution time for the quantity of objects of a type of computational processing is determined based on the estimated execution time. In response to the total execution time exceeding a user-specified time interval, an amount of resources of the distributed system is determined to process the quantity of objects of the type within the user-specified time interval. Nodes of the distributed system with objects classified in the type use the determined amount of resources to process the quantity of objects for the type within the user-specified time interval.
Type: Application
Filed: December 2, 2019
Publication date: April 2, 2020
Inventors: Duane M. Baldwin, Sasikanth Eda, John T. Olson, Sandeep R. Patil
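The sizing arithmetic in this abstract (estimated per-object time multiplied by object count, compared against a user-specified interval) can be sketched directly. The near-linear scaling assumption and the function names are illustrative only, not the claimed resource model:

```python
import math

def resources_needed(quantity, est_time_per_object, base_resources, deadline):
    """Estimate resources to process `quantity` objects within `deadline`.

    With `base_resources` units, the total time is estimated as
    quantity * est_time_per_object. Assuming near-linear scaling,
    multiply the resources until the estimate fits the deadline.
    """
    total = quantity * est_time_per_object
    if total <= deadline:
        return base_resources
    scale = math.ceil(total / deadline)
    return base_resources * scale
```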
-
Publication number: 20200104180
Abstract: In general, embodiments are disclosed for tracking and allocating graphics processor hardware resources. More particularly, a graphics hardware resource allocation system is able to generate a priority list for a plurality of data masters for a graphics processor based on a comparison between current utilizations for the data masters and target utilizations for the data masters. The graphics hardware resource allocation system designates, based on the priority list, a first data master with a higher priority to submit work to the graphics processor compared to a second data master. The graphics hardware resource allocation system determines a stall counter value for the second data master and generates a notification to pause work for the second data master based on the stall counter value.
Type: Application
Filed: September 28, 2018
Publication date: April 2, 2020
Inventors: Kutty Banerjee, Benjamin Bowman, Terence M. Potter, Tatsuya Iwamoto, Gokhan Avkarogullari
-
Publication number: 20200104181
Abstract: Example object storage systems and methods provide priority metadata processing. Metadata operations are received in response to change events for at least one data object. The metadata operations may include system operations configured to manage changes to data objects and user-method operations configured to execute user-defined methods using the data objects. System operations are executed with a first priority in response to system operations with the first priority being available for processing. User-method operations are executed with a second priority in response to no metadata operations with the first priority being available for processing.
Type: Application
Filed: September 29, 2018
Publication date: April 2, 2020
Inventors: Ameet Pyati, Muhammad Tanweer Alam
-
Publication number: 20200104182
Abstract: Systems and methods for role-based access control to computing resources are presented. In an example embodiment, a request to perform a type of access of a computing resource is received via a communication network from a process executing on a client device. Using a data store storing process identifiers and associated access control information, access control information associated with the requesting process is identified based on a process identifier of the requesting process. Based on the access control information associated with the requesting process, a determination is made whether the requesting process is allowed to perform the requested type of access of the computing resource. The request is processed based on the requesting process being allowed to perform the requested type of access of the computing resource.
Type: Application
Filed: November 21, 2019
Publication date: April 2, 2020
Inventors: Ruchir Tewari, Vineet Banga, Atul Chandrakant Kshirsagar
-
Publication number: 20200104183
Abstract: The described technology relates to scheduling jobs of a plurality of types in an enterprise web application. A processing system configures a job database having a plurality of job entries, and concurrently executes a plurality of job schedulers independently of each other. Each job scheduler is configured to schedule for execution jobs in the job database that are of a type different from types of jobs others of the plurality of job schedulers are configured to schedule. The processing system also causes performance of jobs scheduled for execution by any of the plurality of schedulers. Method and computer readable medium embodiments are also provided.
Type: Application
Filed: December 3, 2019
Publication date: April 2, 2020
Inventor: Santhosh Philip GEORGE
-
Publication number: 20200104184
Abstract: Examples described herein can be used to determine and suggest a computing resource allocation for a workload request made from an edge gateway. The computing resource allocation can be suggested using computing resources provided by an edge server cluster. Telemetry data and performance indicators of the workload request can be tracked and used to determine the computing resource allocation. Artificial intelligence (AI) and machine learning (ML) techniques can be used in connection with a neural network to accelerate determinations of suggested computing resource allocations based on hundreds to thousands (or more) of telemetry data in order to suggest a computing resource allocation. Suggestions made can be accepted or rejected by a resource allocation manager for the edge gateway and the edge server cluster.
Type: Application
Filed: September 27, 2018
Publication date: April 2, 2020
Inventors: Rasika SUBRAMANIAN, Francesc GUIM BERNAT, David ZIMMERMAN
-
Publication number: 20200104185
Abstract: The present disclosure relates to systems and methods to implement efficient high-bandwidth shared memory systems particularly suited for parallelizing and operating large scale machine learning and AI computing systems necessary to efficiently process high volume data sets and streams.
Type: Application
Filed: October 2, 2019
Publication date: April 2, 2020
Inventors: Phillip Alvelda, VII, Markus Krause, Todd Allen Stiers
-
Publication number: 20200104186
Abstract: Among other things, a machine-based method comprises receiving an application specification comprising one or more algorithms. Each algorithm is not necessarily suitable for concurrent execution on multiple nodes in parallel. One or more different object classes are grouped into one or more groups, each being appropriate for executing the one or more algorithms of the application specification. The executing involves data that is available in objects of the object classes. A user is enabled to code an algorithm of the one or more algorithms for one group in a single threaded environment without regard to concurrent execution of the algorithm on multiple nodes in parallel. A copy of the coded algorithm is distributed to each of the multiple nodes, without needing additional coding. The coded algorithm is caused to be executed on each node in association with at least one instance of a group independently of and in parallel to executing the other copies of the coded algorithm on the other nodes.
Type: Application
Filed: December 2, 2019
Publication date: April 2, 2020
Applicant: Miosoft Corporation
Inventors: Ernst M. Siepmann, Albert B. Barabas, Mark D.A. Gulik
-
Publication number: 20200104187
Abstract: A technique relates to moving a target logical partition. A software application receives a trigger to automatically move the target logical partition from a first system to a second system. The logical partition memory of the target logical partition is transferred from the first system to a coupling facility. In response to completion of transferring the logical partition memory of the target logical partition to the coupling facility, the logical partition memory of the target logical partition is transferred from the coupling facility to the second system.
Type: Application
Filed: September 28, 2018
Publication date: April 2, 2020
Inventor: Timothy MORRELL
-
Publication number: 20200104188
Abstract: Resegmenting chunks of data for load balancing is disclosed. A plurality of first chunks of data is received. The plurality of first chunks of data includes one or more entries that include raw data produced by a component of an information technology environment and that reflects activity in the information technology environment. The plurality of first chunks of data is resegmented into a plurality of second chunks of data based on entry boundaries in at least some of the plurality of first chunks of data. A first subset of the plurality of second chunks of data is distributed to a first indexer of a set of indexers. An occurrence of a trigger event is determined, and in response to the trigger event, a second subset of the plurality of second chunks of data is distributed to a second indexer of the set of indexers.
Type: Application
Filed: December 4, 2019
Publication date: April 2, 2020
Inventors: Jag Kerai, Anish Shrigondekar, Mitchell Blank, Hasan Alayli
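Resegmenting on entry boundaries, as described above, amounts to carrying the bytes after the last boundary of each incoming chunk into the next outgoing chunk, so no entry is split across chunks. A minimal sketch assuming newline-delimited entries (the abstract does not specify the delimiter, and the names are invented):

```python
def resegment(chunks, delimiter=b"\n"):
    """Resegment raw chunks so each output chunk ends on an entry boundary.

    Bytes after the last delimiter in a chunk are carried into the next
    output chunk rather than splitting an entry across two chunks.
    """
    carry = b""
    out = []
    for chunk in chunks:
        data = carry + chunk
        cut = data.rfind(delimiter)
        if cut == -1:
            carry = data          # no boundary yet; keep accumulating
            continue
        out.append(data[:cut + 1])
        carry = data[cut + 1:]
    if carry:
        out.append(carry)         # trailing partial entry, flushed last
    return out
```

Note that the concatenation of the output chunks always equals the concatenation of the inputs; only the cut points move.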
-
Publication number: 20200104189
Abstract: Various examples are disclosed for workload placement using forecast data. Forecast data for workloads and providers during a predefined period of time in the future is considered when identifying stressed providers and the feasibility of a workload move. Workloads with demand spikes at different future times can be matched by stacking current demand and forecast demand by timestamps. The possibility of stress can be reduced by making moves preemptively and considering forecast demand when evaluating the feasibility of a workload move.
Type: Application
Filed: October 1, 2018
Publication date: April 2, 2020
Inventors: Parikshit Santhana Gopalan Gopalan, Sandy Lau, Wei Li, Leah Nutman, Paul Pedersen, Yu Sun
-
Publication number: 20200104190
Abstract: The disclosure generally describes methods, software, and systems for handling integration flows. A set of flows is initially deployed to a single worker set. A load balancing issue is identified that is associated with initial runtime interactions by workers with the single worker set. In response to identifying the load balancing issue, the load balancing issue is analyzed to determine whether to autoscale or generate a new worker set. Load balancing is performed to initiate at least one new worker set. At least one flow to be moved to the at least one new worker set is identified. Movement of the identified at least one flow from a current worker set to a new worker set is performed.
Type: Application
Filed: October 1, 2018
Publication date: April 2, 2020
Inventors: Ulf Fildebrandt, Madhav Bhargava, Sapreen Ahuja, Sripad J
-
Publication number: 20200104191
Abstract: Described herein is a computer implemented method comprising receiving a link to content served by a remote server, detecting activation of the link, and in response to detecting activation of the link attempting to load a passive mixed content item from a local web server. In response to determining the passive mixed content item successfully loaded, the method further comprises extracting installed application information from the passive mixed content item, determining whether or not the installed application information indicates a relevant installed desktop application, and if so accessing the content referenced by the link from the remote application server using the relevant installed desktop application.
Type: Application
Filed: September 28, 2018
Publication date: April 2, 2020
Inventors: Samuel Attard, Clifton Hensley, Issac Gerges
-
Publication number: 20200104192
Abstract: A method for automatically verifying a message using a remote system includes receiving, at a remote system, a request to launch an application from a current user to communicate, where the request includes a unique feature associated with a potential user's device that is required for registration. The method includes generating a selectable-link, and transmitting a first message that includes the selectable-link to the potential user's device. The first message is configured to cause the potential user's device to display the link, launch the application in response to receiving selection indication of the selectable-link, and transmit a verification code to the remote system. The method further includes registering the potential user's device in response to receiving the verification code.
Type: Application
Filed: October 1, 2018
Publication date: April 2, 2020
Applicant: Google LLC
Inventor: Sandeep Siddhartha
-
Publication number: 20200104193
Abstract: In an embodiment, an operating system provides a port group service that permits two or more ports to be bound together as a port group. A thread may listen for messages and/or events on the port group, and thus may receive a message/event from any of the ports in the port group and may process that message/event. Threads that send messages/events (“sending threads”) may send a message/event to a port in the port group, and the messages/events received on the various ports may be processed according to a queue policy for the ports in the port group. Messages/events may be transmitted from the ports to a listening thread (a “receiving thread”) using a receive policy that determines the priority at which the receiving thread is to execute to process the message/event.
Type: Application
Filed: September 9, 2019
Publication date: April 2, 2020
Inventors: Sunil Kittur, Dino R. Canton, Shawn R. Woodtke, Aleksandar Ristovski
-
Publication number: 20200104194
Abstract: An electronic device is in communication with one or more wearable audio output devices. The electronic device detects occurrence of an event and outputs, via the one or more wearable audio output devices, one or more audio notifications corresponding to the event. After beginning to output the one or more audio notifications, the electronic device detects an input directed to the one or more wearable audio output devices. In response, if the input is detected within a predefined time period with respect to the one or more audio notifications corresponding to the event, the electronic device performs a first operation associated with the one or more audio notifications corresponding to the event; and, if the input is detected after the predefined time period has elapsed, the electronic device performs a second operation not associated with the one or more audio notifications corresponding to the first event.
Type: Application
Filed: September 18, 2019
Publication date: April 2, 2020
Inventors: Devin W. Chalmers, Sean B. Kelly, Karlin Y. Bark
-
Publication number: 20200104195
Abstract: Methods and apparatus for correcting out-of-order data transactions over an inter-processor communication (IPC) link between two (or more) independently operable processors. In one embodiment, a peripheral-side processor receives data from an external device and stores it to memory. The host processor writes data structures (transfer descriptors) describing the received data, regardless of the order the data was received from the external device. The transfer descriptors are written to a memory structure (transfer descriptor ring) in memory shared between the host and peripheral processors. The peripheral reads the transfer descriptors and writes data structures (completion descriptors) to another memory structure (completion descriptor ring). The completion descriptors are written to enable the host processor to retrieve the stored data in the correct order. In optimized variants, a completion descriptor describes groups of transfer descriptors.
Type: Application
Filed: November 2, 2018
Publication date: April 2, 2020
Inventors: KARAN SANGHI, Saurabh Garg
-
Publication number: 20200104196
Abstract: A method and apparatus of a network device that allocates a shared memory buffer for an object is described. In an exemplary embodiment, the network device receives an allocation request for the shared memory buffer for the object. In addition, the network device allocates the shared memory buffer from shared memory of a network device, where the shared memory buffer is accessible by a writer and a plurality of readers. The network device further returns a writer pointer to the writer, where the writer pointer references a base address of the shared memory buffer. Furthermore, the network device stores the object in the shared memory buffer, wherein the writer accesses the shared memory using the writer pointer. The network device further shares the writer pointer with at least a first reader of the plurality of readers. The network device additionally translates the base address of the shared memory buffer to a reader pointer, where the reader pointer is expressed in a memory space of the first reader.
Type: Application
Filed: July 19, 2019
Publication date: April 2, 2020
Inventors: Stuart Ritchie, Sebastian Sapa, Christopher Neilson, Eric Secules, Peter Edwards
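The pointer translation in the last step is plain offset arithmetic: the shared buffer is mapped at a different base address in each process, so only the offset from the base is meaningful across address spaces. A minimal sketch with invented names, not the claimed implementation:

```python
def to_reader_pointer(writer_ptr, writer_base, reader_base):
    """Translate a writer-space address into the reader's memory space.

    Both processes map the same shared buffer at different base addresses,
    so the offset from the buffer base is preserved across the translation.
    """
    offset = writer_ptr - writer_base
    if offset < 0:
        raise ValueError("pointer below the shared buffer base")
    return reader_base + offset
```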
-
Publication number: 20200104197
Abstract: Communication between one system and another system using one communication mechanism has failed. The one communication mechanism includes an operating system service to transfer a message between the one system and the other system. Based on determining that the communication between the one system and the other system has failed, automatically switching from the one communication mechanism to another communication mechanism to communicate the message between the one system and the other system. The other communication mechanism is different from the operating system service and uses a coupling facility list structure.
Type: Application
Filed: September 27, 2018
Publication date: April 2, 2020
Inventors: Richard Schneider, Khiet Q. Nguyen
-
Publication number: 20200104198
Abstract: Systems and methods are described for maintaining state information during processing of data sets via execution of code on an on-demand code execution system. Rather than requiring that execution environments of such a system maintain state, an intermediary device is disclosed which retrieves calls to the system from a call queue and iteratively submits the calls to the system. Each call within the queue corresponds to a data item of the data set to be analyzed. As calls are submitted to the system, the intermediary device submits state information within the call reflecting a state of processing the data set. A response to the call includes state information updated based on processing of a data item in the call. Thus, state information is maintained for processing the data set, without requiring persistence of state information within individual execution environments.
Type: Application
Filed: September 27, 2018
Publication date: April 2, 2020
Inventors: Hans-Philipp Anton Hussels, Timothy Allen Wagner, Marc John Brooker
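The intermediary's loop (submit a call carrying the current state, receive updated state in the response, carry it into the next call) is essentially a fold over the data set. A minimal sketch in which `handler` stands in for a stateless on-demand function invocation; all names are illustrative:

```python
def process_data_set(items, handler, initial_state=None):
    """Iteratively submit each data item together with the current state,
    as the intermediary would, threading the returned state forward.

    `handler(item, state) -> (result, new_state)` models one stateless
    invocation whose response carries the updated state.
    """
    state = initial_state
    results = []
    for item in items:
        result, state = handler(item, state)
        results.append(result)
    return results, state
```

The execution environments themselves stay stateless; only the caller carries state between invocations.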
-
Publication number: 20200104199
Abstract: There is provided a method for recalling a message. The method comprises receiving a message from a publisher and sending the message to a durable subscriber for queuing pending consumption by a subscriber. When a message recall request identifying the message is received, the unconsumed message is deleted from the durable subscriber.
Type: Application
Filed: December 1, 2019
Publication date: April 2, 2020
Inventors: Sujeet Mishra, Ruchir Jha, Lohitashwa Thyagaraj
-
Publication number: 20200104200
Abstract: Techniques are described herein for predicting disk drive failure using a machine learning model. The framework involves receiving disk drive sensor attributes as training data, preprocessing the training data to select a set of enhanced feature sequences, and using the enhanced feature sequences to train a machine learning model to predict disk drive failures from disk drive sensor monitoring data. Prior to the training phase, the RNN LSTM model is tuned using a set of predefined hyper-parameters. The preprocessing, which is performed during the training and evaluation phase as well as later during the prediction phase, involves using predefined values for a set of parameters to generate the set of enhanced sequences from raw sensor readings. The enhanced feature sequences are generated to maintain a desired healthy/failed disk ratio, and only use samples leading up to a last-valid-time sample in order to honor a pre-specified heads-up-period alert requirement.
Type: Application
Filed: September 27, 2018
Publication date: April 2, 2020
Inventors: ONUR KOCBERBER, FELIX SCHMIDT, ARUN RAGHAVAN, NIPUN AGARWAL, SAM IDICULA, GUANG-TONG ZHOU, NITIN KUNAL
-
Publication number: 20200104201Abstract: A storage device includes a non-volatile memory including a plurality of memory groups; and a memory controller configured to determine a monitoring group from among the plurality of memory groups, determine a monitoring block from among a plurality of blocks included in the monitoring group, and determine whether the monitoring group is a fail group by monitoring the monitoring block using dummy data prior to failure of the monitoring group.Type: ApplicationFiled: May 16, 2019Publication date: April 2, 2020Applicant: Samsung Electronics Co., Ltd.Inventor: Nam-wook Kang
-
Publication number: 20200104202Abstract: Disclosed herein are systems and method for backing up data in a clustered environment. A clustered resource to be backed up is selected, wherein the clustered resource is stored on a common storage system and operated on by a cluster-aware application executing on two or more nodes of a computing cluster. A first backup agent executing on a first node of the computing cluster may determine a list of changes to the clustered resource and may receive at least one list of changes to the clustered resource that are tracked by peer backup agents executing on other nodes of the computing cluster. The first backup agent may merge the lists of changes to the clustered resource, and may generate a consistent incremental backup using data retrieved from the common storage system according to the merged lists of changes to the clustered resource.Type: ApplicationFiled: October 1, 2019Publication date: April 2, 2020Inventors: Anatoly Stupak, Dmitry Kogtev, Serguei Beloussov, Stanislav Protasov
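The merge step described above can be sketched simply: the first backup agent combines its own change list with the per-node lists from peer agents into one de-duplicated list, which then drives the incremental read from common storage. The block-offset representation here is an assumption for illustration.

```python
# Hedged sketch: merge per-node change lists (block offsets) into one
# sorted, de-duplicated list of blocks to read for the incremental backup.

def merge_change_lists(*change_lists):
    merged = set()
    for changes in change_lists:
        merged.update(changes)   # union removes duplicate block offsets
    return sorted(merged)

local_changes = [4, 8, 15]       # tracked by the first backup agent
peer_changes_a = [8, 16, 23]     # received from a peer agent
peer_changes_b = [4, 42]         # received from another peer agent
blocks_to_back_up = merge_change_lists(local_changes, peer_changes_a, peer_changes_b)
```

Blocks changed on more than one node appear once in the merged list, so the resulting incremental backup is consistent across the cluster.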
-
Publication number: 20200104203Abstract: Embodiments of the present invention provide a system for technology anomaly detection, triage, and response using solution data modeling. The system is configured for generating solution data models comprising a plurality of asset systems and a plurality of users, continuously monitoring the plurality of asset systems and detecting an anomaly associated with the one or more tasks associated with at least a first group of asset systems of the plurality of asset systems, extracting a first solution data model associated with the first group of asset systems, identifying one or more relationships associated with the first group of asset systems based on the extracted first solution data model, and identifying a point of failure associated with the anomaly and the first group of asset systems based on the one or more relationships.Type: ApplicationFiled: December 2, 2019Publication date: April 2, 2020Applicant: BANK OF AMERICA CORPORATIONInventors: Aaron Dion Kephart, Katy Leigh Huneycutt, Richard LeRoy Hayes
-
Publication number: 20200104204Abstract: A processing system, such as for an automobile, includes multiple processor cores, including an application core and a safety core, and a fault detection circuit in communication with the processor cores. The fault detection circuit includes a progress register for storing progress data of an application executed on the application core. The safety core, which executes a fault detection program, reads the progress data from the progress register, and generates an output based on the progress data and an expected behavior of the application. The safety core writes the output to a status register of the fault detection circuit. The fault detection circuit includes a controller that reads the status register and generates a fault signal when the output indicates there is a fault in the execution of the application. In response, the application core either recovers from the fault or runs in a safe mode.Type: ApplicationFiled: September 28, 2018Publication date: April 2, 2020Inventors: Hemant Nautiyal, Jan Chochola, Ashish Kumar Gupta, David Baca
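The safety-core check can be sketched as follows; register access is mocked with plain Python values, and the in-order-checkpoint notion of "expected behavior" is an assumption for illustration.

```python
# Simplified sketch: the safety core compares progress data against the
# application's expected behavior and writes a status; a fault signal is
# raised when the status indicates a fault.

EXPECTED_PROGRESS = [1, 2, 3, 4]   # checkpoints the app should hit in order

def check_progress(progress_register):
    """Safety-core logic: 'OK' if the recorded checkpoints appear in the
    expected order, 'FAULT' otherwise."""
    expected = [p for p in EXPECTED_PROGRESS if p in progress_register]
    return "OK" if progress_register == expected else "FAULT"

status_register = check_progress([1, 2, 3])   # app partway through, in order
fault_signal = status_register == "FAULT"     # controller reads the status
```

On a real part, the progress and status values would live in the fault detection circuit's registers, and the fault signal would trigger recovery or safe mode on the application core.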
-
Publication number: 20200104205Abstract: A memory device, such as a MRAM device, includes a plurality of memory macros, where each includes an array of memory cells and a first ECC circuit configured to detect data errors in the respective memory macro. A second ECC circuit that is remote from the plurality of memory macros is communicatively coupled to each of the plurality of memory macros. The second ECC circuit is configured to receive the detected data errors from the first ECC circuits of the plurality of memory macros and correct the data errors.Type: ApplicationFiled: August 8, 2019Publication date: April 2, 2020Inventors: Hiroki Noguchi, Yu-Der Chih, Hsueh-Chih Yang, Randy Osborne, Win San Khwa
-
Publication number: 20200104206Abstract: A method for compensating for a read error is disclosed, wherein each of n states is read from memory cells of a memory, the states being determined in a time domain. If the n states do not form a code word of a k-from-n code, a plurality of states from the n states, which were determined within a reading window, are provided with a first valid assignment and fed to an error processing stage. If the error processing does not indicate an error, the n states are further processed with the first valid assignment, and if the error processing indicates an error, the plurality of states that were determined within the reading window are provided with a second valid assignment and the n states are further processed with the second valid assignment. Accordingly, a device, a system and a computer program product are also disclosed.Type: ApplicationFiled: September 19, 2019Publication date: April 2, 2020Inventors: Thomas Kern, Michael Goessel
-
Publication number: 20200104207Abstract: The disclosure provides a data processing device and method. The data processing device may include: a task configuration information storage unit and a task queue configuration unit. The task configuration information storage unit is configured to store configuration information of tasks. The task queue configuration unit is configured to configure a task queue according to the configuration information stored in the task configuration information storage unit. According to the disclosure, a task queue may be configured according to the configuration information.Type: ApplicationFiled: November 28, 2019Publication date: April 2, 2020Inventors: Zai WANG, Xuda ZHOU, Zidong DU, Tianshi CHEN
-
Publication number: 20200104208Abstract: An SOC includes a security processor. The security processor includes an encryption/ECC encoding processor configured to perform an encryption operation on data using Metadata and to generate ECC data by performing ECC encoding processing on encrypted data and the Metadata, a decryption/ECC decoding processor configured to extract the encrypted data and the Metadata by performing ECC decoding processing using the ECC data and to recover the data by performing a decryption operation on the encrypted data using the Metadata, and an address controller configured to receive a first address related to storage of the data, to generate a second address based on the first address, and to perform an address generating operation identifying a same region in memory for storing the Metadata and the ECC data based on the second address.Type: ApplicationFiled: April 18, 2019Publication date: April 2, 2020Inventors: In-goo Heo, Yoon-bum SEO, Young-jin CHUNG, Jin-su HYUN
-
Publication number: 20200104209Abstract: An apparatus includes memory cells programmed to one of a plurality of data states, wherein the memory cells are configured such that the plurality of data states comprise an error-prone data state. Sense circuitry of the apparatus is configured to sense first memory cells programmed to the error-prone data state, determine a bit encoding for the first memory cells, sense other memory cells programmed to other data states, and determine a bit encoding for the other memory cells. A communication circuit of the apparatus is configured to communicate the bit encoding for the other memory cells, the bit encoding for the first memory cells, and an indication that the first memory cells are programmed to the error-prone data state, in response to a single read command from a controller.Type: ApplicationFiled: September 28, 2018Publication date: April 2, 2020Inventors: Mostafa EL GAMAL, Jim FITZPATRICK
-
Publication number: 20200104210Abstract: Disclosed are techniques for managing parity information for data stored on a storage device. A method can be implemented at a computing device communicably coupled to the storage device, and include (1) receiving a request to write data into a data band of the storage device, (2) writing the data into stripes of the data band, comprising, for each stripe of the data band: (i) calculating first parity information for the data written into the stripe, (ii) writing the first parity information into a volatile memory, and (iii) in response to determining that a threshold number of stripes have been written: converting the first parity information into smaller second parity information, and (3) in response to determining that the data band is read-verified: (i) converting the second parity information into smaller third parity information, and (ii) storing the smaller third parity information into a parity band of the storage device.Type: ApplicationFiled: April 11, 2019Publication date: April 2, 2020Inventors: Eran ROLL, Stas MOULER, Matthew J. BYOM, Andrew W. VOGAN, Muhammad N. ASHRAF, Elad HARUSH, Roman GUY
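The parity-folding progression above (first parity per stripe, then smaller second and third parity) can be illustrated with XOR parity; the abstract does not specify the parity scheme, so XOR and the function names are assumptions.

```python
# Illustrative sketch: per-stripe XOR parity is held in volatile memory,
# then folded (XORed together) into a single smaller parity word once a
# threshold number of stripes has been written.
from functools import reduce

def stripe_parity(stripe):
    """First parity: XOR of all data words in one stripe."""
    return reduce(lambda a, b: a ^ b, stripe)

def fold(parities):
    """Convert many per-stripe parity words into one smaller parity word."""
    return reduce(lambda a, b: a ^ b, parities)

stripes = [[0b1010, 0b0110], [0b1111, 0b0001]]
first = [stripe_parity(s) for s in stripes]  # one parity word per stripe
second = fold(first)                         # folded, smaller parity
```

Each fold trades recovery granularity for space: after the band is read-verified, only the smallest parity needs to persist in the parity band.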
-
Publication number: 20200104211Abstract: The present invention provides an information processing apparatus having a user interface, a non-volatile memory that stores a loading program, and another non-volatile memory that stores a boot program and a notifying program for notifying an error. The information processing apparatus executes the loading program at startup to verify the boot program and activates the notifying program based on a detection of an alteration of the boot program to notify an error via the user interface.Type: ApplicationFiled: September 13, 2019Publication date: April 2, 2020Inventor: Yosuke Obayashi
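The verify-then-notify flow can be sketched as below. The abstract does not name the verification mechanism, so the use of a SHA-256 digest, and the `loader`/`notify_error` names, are assumptions for illustration.

```python
# Minimal sketch: the loading program verifies the boot program against a
# known-good digest before running it, and activates the notifying program
# when an alteration is detected.
import hashlib

KNOWN_GOOD_DIGEST = hashlib.sha256(b"boot-program-image").hexdigest()

def notify_error():
    """Stand-in for the notifying program driving the user interface."""
    return "ALTERATION DETECTED"

def loader(boot_image):
    """Loading program: verify the boot program, then boot or notify."""
    if hashlib.sha256(boot_image).hexdigest() != KNOWN_GOOD_DIGEST:
        return notify_error()
    return "BOOTED"

status = loader(b"boot-program-image")   # the unmodified image boots
```

Keeping the loader and the notifier in separate non-volatile memories, as the abstract describes, means an altered boot program cannot suppress its own error notification.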
-
Publication number: 20200104212Abstract: Featured are a method and apparatus for managing various point-in-time copies of workloads or applications using a software system called a workload manager. An aspect of the invention is to receive point-in-time backup images of a workload from a backup client and realize the corresponding virtual resources from each backup image on the cloud platform that is part of the workload manager appliance. The workload manager maintains a catalog of point-in-time copies of workloads. Each item in the catalog refers to resource entries on the cloud platform. When a user wishes to instantiate a point-in-time copy, the workload manager instantiates all the resources identified in the catalog entry. A user can also restore a particular point-in-time workload to a production system or migrate a particular point-in-time workload to a remote application.Type: ApplicationFiled: October 3, 2019Publication date: April 2, 2020Inventors: Muralidhara R. Balcha, Giridhar Basava, Sanjay Baronia
-
Publication number: 20200104213Abstract: A storage system according to certain embodiments includes a client-side signature repository that includes information representative of a set of data blocks stored in primary storage. During storage operations of a client, the system can generate signatures corresponding to data blocks that are being stored in primary storage. The system can store the generated signatures in the client-side signature repository along with information regarding the location of the corresponding data block within primary storage. As additional instances of the data block are stored in primary storage, the system can store the location of the additional instances in the client-side signature repository.Type: ApplicationFiled: September 9, 2019Publication date: April 2, 2020Inventors: Marcus S. Muller, David Ngo
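The client-side signature repository can be sketched as a mapping from block signature to every primary-storage location holding an instance of that block; the class shape and SHA-256 signature choice are assumptions for illustration.

```python
# Hedged sketch: record a signature for each stored block along with its
# location; additional instances of the same block add locations under
# the same signature.
import hashlib
from collections import defaultdict

class SignatureRepository:
    def __init__(self):
        self._locations = defaultdict(list)  # signature -> [locations]

    def record(self, block, location):
        sig = hashlib.sha256(block).hexdigest()
        self._locations[sig].append(location)
        return sig

    def instances(self, signature):
        return list(self._locations[signature])

repo = SignatureRepository()
sig = repo.record(b"block-A", "/vol1/offset/100")
repo.record(b"block-A", "/vol2/offset/7")  # second instance, same signature
```

Because the repository lives client-side, lookups during storage operations avoid a round trip to secondary storage.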
-
Publication number: 20200104214Abstract: Rather than relying on pre-defined scheduling of secondary copy operations such as backup jobs, the illustrative opportunistic approach initiates secondary copy operations based on changing operational conditions in a storage management system. An adaptive backup readiness score is based on a number of backup-readiness operational factors. An illustrative enhanced data agent which is associated with the target database application (or other executable component) may monitor the operational factors and determine the backup readiness score based on weights assigned to the respective operational factors. The enhanced data agent may evaluate recent backup jobs to determine which of the operational factors that contributed to the backup readiness score may have been most relevant.Type: ApplicationFiled: October 4, 2019Publication date: April 2, 2020Inventors: Jun H. AHN, Waqas ASHRAF, Anup KUMAR, Brahmaiah VALLABHANENI
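The adaptive score can be illustrated as a weighted sum over normalized operational factors; the factor names, weights, and threshold below are invented for illustration and are not from the patent.

```python
# Illustrative sketch: a backup readiness score computed as a weighted
# sum of operational factors, each normalized to [0, 1].

WEIGHTS = {"cpu_idle": 0.5, "io_idle": 0.3, "change_rate": 0.2}

def readiness_score(factors):
    """Higher score means the system is more ready for a backup job."""
    return sum(WEIGHTS[name] * value for name, value in factors.items())

score = readiness_score({"cpu_idle": 0.8, "io_idle": 1.0, "change_rate": 0.5})
ready = score >= 0.7   # example threshold for initiating a backup job
```

Evaluating which factors contributed most to the scores of recent successful jobs, as the abstract describes, would correspond to adjusting these weights over time.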
-
Publication number: 20200104215Abstract: Embodiments of the present disclosure relate to methods, systems, and computer program products for managing a distributed system. In one embodiment, a computer-implemented method is disclosed. In the method, packets that are to be transmitted among a group of nodes in a distributed system may be collected into a queue of packets, where a packet in the queue is associated with a source node and a destination node in the group of nodes. A snapshot in the group of snapshots may be obtained from a node in the group of nodes, therefore a group of snapshots may be obtained from the group of nodes. A snapshot of the distributed system may be generated based on the queue of packets and the group of snapshots. In other embodiments, a computer-implemented system and a computer program product for managing a distributed system are disclosed.Type: ApplicationFiled: September 27, 2018Publication date: April 2, 2020Inventors: Jiang Xuan, Xin Peng Liu, Peng Hui Jiang, Hongmei Zhao
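The composition step at the end (system snapshot = per-node snapshots plus the queue of in-flight packets) can be sketched as below; the dictionary structure and field names are assumptions for illustration.

```python
# Hedged sketch: assemble a system-wide snapshot from per-node snapshots
# and the queue of in-flight packets, each tagged with its source and
# destination node.

def system_snapshot(node_snapshots, packet_queue):
    """Combine per-node state with in-flight packets into one snapshot."""
    return {
        "nodes": dict(node_snapshots),
        "in_flight": list(packet_queue),
    }

nodes = {"n1": {"balance": 10}, "n2": {"balance": 5}}
queue = [{"src": "n1", "dst": "n2", "amount": 3}]
snap = system_snapshot(nodes, queue)
```

Capturing the in-flight packets alongside node state is what makes the combined snapshot consistent: a message sent but not yet received is neither lost nor double-counted.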
-
Publication number: 20200104216Abstract: A data management and storage (DMS) cluster of peer DMS nodes manages data of a compute infrastructure by generating snapshots of partitions of a fileset of the compute infrastructure and providing a passthrough for storing the snapshots in a data storage separate from the DMS cluster, such as a cloud computing system. In one approach, the DMS nodes determine partitions of a fileset using fileset metadata, generate snapshots of the partitions, and store the snapshots in the data storage. Each DMS node may include a local storage which is used to facilitate creation of a snapshot of a partition. The snapshot may be removed from the local storage after being sent to the data storage. Rather than storing the snapshot, the DMS cluster stores fileset metadata that is referenced to retrieve the snapshot from the data storage. The snapshot is deployed to retrieve a file stored in the partition.Type: ApplicationFiled: October 1, 2018Publication date: April 2, 2020Inventors: Zhicong Wang, Looi Chow Lee, Andrew Kwangyum Park, JR., Karthikeyan Srinivasan