Patent Applications Published on February 6, 2020
  • Publication number: 20200042339
    Abstract: Systems and methods for batching operations in a virtualization environment. A method embodiment operates over a plurality of virtual machines in the virtualization environment. A user interface is used to select two or more virtual machines that are to be subjected to the same batch actions. A method step then generates at least one batch request to be performed over the two or more selected virtual machines. In forming the batch request, the states of the individual virtual machines are analyzed to determine one or more entity-specific operations that apply to the virtual machines and/or to constituent entities of the virtual machines. Once the state-specific and entity-specific operations have been determined, an entity management protocol initiates execution of the one or more entity-specific operations over the individual ones of the two or more selected virtual machines.
    Type: Application
    Filed: July 31, 2018
    Publication date: February 6, 2020
    Applicant: Nutanix, Inc.
    Inventors: Anjana SHANKAR, Saurabh Kumar SINGH, Gourab BAKSI, Niramayee Shrikant SARPOTDAR, Sai Sruthi SAGI
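As a rough illustration of the batching flow in this abstract, the sketch below maps hypothetical VM states to the operations they permit; the state names and the state-to-operation table are assumptions, not taken from the filing.

```python
# A minimal sketch of state-driven batch-request formation.
# The states and their permitted operations are illustrative.
STATE_OPERATIONS = {
    "running": {"snapshot", "power_off"},
    "stopped": {"power_on", "delete"},
    "suspended": {"resume"},
}

def build_batch_request(selected_vms, action):
    """selected_vms: {vm_name: state}. Analyze each VM's state and keep
    only the VMs whose state actually permits the requested action."""
    request = []
    for name, state in selected_vms.items():
        if action in STATE_OPERATIONS.get(state, set()):
            request.append({"vm": name, "operation": action})
    return request
```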
  • Publication number: 20200042340
    Abstract: The present disclosure describes a technique for honoring virtual machine placement constraints established on a first host implemented on a virtualized computing environment by receiving a request to migrate one or more virtual machines from the first host to a second host and without violating the virtual machine placement constraints, identifying an architecture of the first host, provisioning a second host with an architecture compatible with that of the first host, adding the second host to the cluster of hosts, and migrating the one or more virtual machines from the first host to the second host.
    Type: Application
    Filed: June 20, 2019
    Publication date: February 6, 2020
    Inventors: Maarten WIGGERS, Gabriel TARASUK-LEVIN, Manoj KRISHNAN
  • Publication number: 20200042341
    Abstract: In various examples, access to VM memory by virtualization software is secured using a trusted firmware of a host controller to validate one or more of a command to read a VM's memory and/or the data read from VM memory in order to protect against improper access to data in VM memory. If validation fails, the firmware may refrain from reading the data and/or from providing the virtualization software with access to the data. The data may include a request command from a VM regarding establishing or modifying a connection using the host controller to another entity, such as another device within or outside of the virtualization environment. The virtualization software may use the request command to facilitate the connection. The host controller may provide an eXtensible Host Controller Interface (xHCI) or a different type of interface for the connection.
    Type: Application
    Filed: August 2, 2019
    Publication date: February 6, 2020
    Inventors: Ajay Kumar Gupta, Venkat Tammineedi, David Lim, Ashutosh Jha
  • Publication number: 20200042342
    Abstract: An improved architecture is provided which enables significant convergence of the components of a system to implement virtualization. The infrastructure is VM-aware, and permits scaled out converged storage provisioning to allow storage on a per-VM basis, while identifying I/O coming from each VM. The current approach can scale out from a few nodes to a large number of nodes. In addition, the inventive approach has ground-up integration with all types of storage, including solid-state drives. The architecture of the invention provides high availability against any type of failure, including disk or node failures. In addition, the invention provides high performance by making I/O access local, leveraging solid-state drives and employing a series of patent-pending performance optimizations.
    Type: Application
    Filed: August 8, 2019
    Publication date: February 6, 2020
    Applicant: Nutanix, Inc.
    Inventors: Mohit ARON, Dheeraj PANDEY, Ajeet SINGH, Rishi BHARDWAJ, Brent CHUN
  • Publication number: 20200042343
    Abstract: Examples herein relate to checkpoint replication and copying of updated checkpoint data. For example, a memory controller coupled to a memory can receive a write request with an associated address to write or update checkpoint data and track updates to checkpoint data based on at least two levels of memory region sizes. A first level is associated with a smaller memory region size than a memory region size associated with the second level. In some examples, the first level is a cache-line memory region size and the second level is a page memory region size. Updates to the checkpoint data can be tracked at the second level unless an update was previously tracked at the first level. Reduced amounts of updated checkpoint data can be transmitted during a checkpoint replication by using multiple region size trackers.
    Type: Application
    Filed: September 27, 2019
    Publication date: February 6, 2020
    Inventors: Zhe WANG, Andrew V. ANDERSON, Alaa R. ALAMELDEEN, Andrew M. RUDOFF
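The two-granularity tracking described above can be sketched roughly as follows; the region sizes, the promotion rule, and the byte accounting are illustrative assumptions, not the patent's exact scheme.

```python
LINE = 64    # fine-grained region size (bytes), assumed cache-line sized
PAGE = 4096  # coarse-grained region size (bytes), assumed page sized

class CheckpointTracker:
    def __init__(self):
        self.dirty_lines = set()   # fine-grained tracking
        self.dirty_pages = set()   # coarse-grained tracking

    def record_write(self, addr, fine=False):
        """Track at page granularity unless the write is flagged for
        fine-grained tracking (or its line was already being tracked)."""
        line = addr // LINE
        if fine or line in self.dirty_lines:
            self.dirty_lines.add(line)
        else:
            self.dirty_pages.add(addr // PAGE)

    def bytes_to_replicate(self):
        # Lines whose enclosing page is already dirty add nothing extra.
        extra_lines = {l for l in self.dirty_lines
                       if (l * LINE) // PAGE not in self.dirty_pages}
        return len(self.dirty_pages) * PAGE + len(extra_lines) * LINE
```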
  • Publication number: 20200042344
    Abstract: A cloud management platform, and a virtual machine management method and system, where the virtual machine management method includes: obtaining, by a first cloud management platform, configuration information of an inventory virtual machine from a second cloud management platform; locally creating, by the first cloud management platform, a proxy virtual machine according to the configuration information of the inventory virtual machine; generating a proxy virtual machine identification code according to the configuration information of the inventory virtual machine; sending, by the first cloud management platform, the proxy virtual machine identification code to the second cloud management platform; and updating, by the second cloud management platform to the proxy virtual machine identification code, an inventory virtual machine identification code recorded by the second cloud management platform.
    Type: Application
    Filed: October 14, 2019
    Publication date: February 6, 2020
    Inventors: Hongwei Ao, Yaobin Wang
  • Publication number: 20200042345
    Abstract: Embodiments for volume management in a data storage environment. A network sniffing operation between virtual machines is performed to detect relationships between the virtual machines and thereby identify candidates for subsequent storage volume affiliation operations. The network sniffing operation detects the relationships based on network traffic or alternative similarity attributes of an existing placement of the virtual machines thereby deducing affiliations of storage volumes between the virtual machines such that, during the storage volume affiliation operations, the existing placement of the virtual machines is not modified. The identified candidates to be added to a new or existing storage volume affiliation operation are recommended to a user via a prompt.
    Type: Application
    Filed: October 14, 2019
    Publication date: February 6, 2020
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ohad ATIA, Amalia AVRAHAM, Ran HAREL, Rivka M. MATOSEVICH
  • Publication number: 20200042346
    Abstract: There is provided a system for migrating virtual machines. The system is capable of managing virtual machines in a source service for hosting one or more source virtual machines and a destination service for hosting one or more destination virtual machines, and memory for storing program code and at least one processing core capable of executing the program code to cause generating, by the source service, a temporary virtual machine in the source service, attaching, by the source service, at least one storage drive of at least one source virtual machine, to the temporary virtual machine, preparing, by the temporary virtual machine, a disk image of the attached at least one storage drive in a format supported by the destination service, and writing, by the temporary virtual machine, the disk image to a storage drive of the destination virtual machine.
    Type: Application
    Filed: October 10, 2019
    Publication date: February 6, 2020
    Inventors: Janne KOSKINEN, Antti RISTOLAINEN
  • Publication number: 20200042347
    Abstract: Methods, non-transitory machine-readable media, and computing devices for transitioning tasks and interrupt service routines are provided. An example method includes processing, by a plurality of processor cores of a storage controller, tasks and interrupt service routines. A performance statistic is determined corresponding to the plurality of processor cores. Based on detecting that the performance statistic passes a threshold, a number of the plurality of processor cores that are assigned to the tasks and the interrupt service routines are reduced.
    Type: Application
    Filed: October 15, 2019
    Publication date: February 6, 2020
    Inventors: Kent Prosch, Matthew Weber, Arindam Banerjee, Ben McDavitt
  • Publication number: 20200042348
    Abstract: Systems, apparatuses, and methods for abstracting tasks in virtual memory identifier (VMID) containers are disclosed. A processor coupled to a memory executes a plurality of concurrent tasks including a first task. Responsive to detecting one or more instructions of the first task which correspond to a first operation, the processor retrieves a first identifier (ID) which is used to uniquely identify the first task, wherein the first ID is transparent to the first task. Then, the processor maps the first ID to a second ID and/or a third ID. The processor completes the first operation by using the second ID and/or the third ID to identify the first task to at least a first data structure. In one implementation, the first operation is a memory access operation and the first data structure is a set of page tables. Also, in one implementation, the second ID identifies a first application of the first task and the third ID identifies a first operating system (OS) of the first task.
    Type: Application
    Filed: July 31, 2018
    Publication date: February 6, 2020
    Inventors: Anirudh R. Acharya, Michael J. Mantor, Rex Eldon McCrary, Anthony Asaro, Jeffrey Gongxian Cheng, Mark Fowler
  • Publication number: 20200042349
    Abstract: Systems and methods for scheduling job requests in a virtualization system. A method embodiment commences upon initialization of a pair of multi-level queues comprising a high priority job queue and a low priority job queue. A plurality of virtual machines issue job requests. Queue management logic receives incoming job requests from the virtual machines and locates or creates a job request group corresponding to the virtual machine of the incoming job request. The incoming job request is positioned into the job request group and the job request group is positioned into a queue. When a job executor is ready for a next job, a job for execution can be identified by locating the next job in the next job request group at the front of either the high priority queue or the low priority queue. When a job finishes, the queues are reorganized.
    Type: Application
    Filed: July 31, 2018
    Publication date: February 6, 2020
    Inventors: Kshitiz JAIN, Prateek KAJARIA
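A minimal sketch of the two-level queue with per-VM job request groups; the grouping and reorganization details here are assumptions made for illustration.

```python
from collections import OrderedDict, deque

class Scheduler:
    def __init__(self):
        # Each queue maps vm_id -> deque of pending jobs, ordered by
        # when the group was first positioned into the queue.
        self.queues = {"high": OrderedDict(), "low": OrderedDict()}

    def submit(self, vm_id, job, priority="low"):
        groups = self.queues[priority]
        groups.setdefault(vm_id, deque()).append(job)

    def next_job(self):
        # The high priority queue is consulted before the low one.
        for groups in (self.queues["high"], self.queues["low"]):
            if groups:
                vm_id, jobs = next(iter(groups.items()))
                job = jobs.popleft()
                if not jobs:            # reorganize: drop empty groups
                    del groups[vm_id]
                return vm_id, job
        return None
```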
  • Publication number: 20200042350
    Abstract: Tenant support is provided in a multi-tenant configuration in a data center by a Physical Function driver communicating a virtual User Priority to virtual traffic class mapping to a Virtual Function driver. The Physical Function driver configures the Network Interface Controller to map virtual User Priorities to Physical User Priorities and to enforce the Virtual Function's limited access to Traffic Classes. Data Center Bridging features assigned to the physical network interface controller are hidden by virtualizing user priorities and traffic classes. A virtual Data Center Bridging configuration is enabled for a Virtual Function to provide access to user priorities and traffic classes that the Virtual Function may need but that are not otherwise visible to it.
    Type: Application
    Filed: October 7, 2019
    Publication date: February 6, 2020
    Inventors: Manasi DEVAL, Neerav PARIKH, Robert O. SHARP, Gregory J. BOWERS, Ryan E. HALL, Chinh T. CAO
  • Publication number: 20200042351
    Abstract: An event processing system for processing events in an event stream is disclosed. The system is configured for configuring a stream processor to micro-batch incoming events from a stream source. The system is also configured for generating a single timestamp for a micro-batch of the incoming events and/or receiving the micro-batch of the incoming events from the stream source. The system can also be configured for assigning the single timestamp to each event of the micro-batch and/or generating separate timestamp values for each respective event of the micro-batch. In some examples, the system can also be configured for assigning, for each respective event of the micro-batch, an individual one of the separate timestamp values.
    Type: Application
    Filed: October 15, 2019
    Publication date: February 6, 2020
    Applicant: Oracle International Corporation
    Inventors: Hoyong Park, Sandeep Bishnoi, Prabhu Thukkaram
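The two timestamping modes in this abstract, a single shared timestamp per micro-batch versus separate per-event values, might look like the sketch below; the field names are illustrative assumptions.

```python
def stamp_batch(events, batch_time, per_event=False):
    """Assign timestamps to a micro-batch of incoming events."""
    if per_event:
        # Separate, strictly increasing timestamp values per event.
        return [{"event": e, "ts": batch_time + i}
                for i, e in enumerate(events)]
    # One single timestamp shared by every event of the micro-batch.
    return [{"event": e, "ts": batch_time} for e in events]
```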
  • Publication number: 20200042352
    Abstract: A heterogeneous resource reservation (HRR) manager configured to classify historical application requests from a past time interval for a first workload to generate labeled historical application requests. The HRR manager further configured to generate a forecast based on the labeled historical application requests and for predicting future application requests for the first workload for a future time interval and calculate a joint plan based on the forecast. The joint plan including a set of virtual resources, a set of billing contracts, and a set of load balancer weights. The HRR manager further configured to implement the joint plan for a distributed computing workload during the future time interval.
    Type: Application
    Filed: August 1, 2018
    Publication date: February 6, 2020
    Inventors: David Breitgand, Michael Masin, Ofer Biran, Dean H. Lorenz, Eran Raichstein, Avi Weit, Ilyas Mohamed Iyoob
  • Publication number: 20200042353
    Abstract: Methods, apparatus, and processor-readable storage media for management of unit-based virtual accelerator resources are provided herein. An example computer-implemented method includes obtaining multiple accelerator device resource consumption measurements, wherein the measurements represent multiple accelerator device resource types consumed by one or more accelerator devices over a defined temporal interval; computing a composite unit of measurement of accelerator device resource consumption, attributable to the one or more accelerator devices over the defined temporal interval, by normalizing the multiple accelerator device resource consumption measurements using a scaling factor that is based at least in part on one or more static aspects of the one or more accelerator devices; and outputting the composite unit of measurement to at least one user.
    Type: Application
    Filed: August 3, 2018
    Publication date: February 6, 2020
    Inventors: John Yani Arrasjid, Derek Anderson, Eloy F. Macha
  • Publication number: 20200042354
    Abstract: [Problem] To provide an analysis node capable of appropriately managing computational resources in relation to load fluctuation, and continuously performing analysis processing with high throughput. [Solution] An analysis node includes an analysis execution means 1, a content variation observation means 2, and a resource allocation means 3. The analysis execution means 1 performs analysis processing that includes a plurality of steps including at least a pre-stage step and a post-stage step, by computational resources allocated to each of the steps. The content variation observation means 2 observes, as content variation observation information, content change of processing-target data at the pre-stage step. The resource allocation means 3 predicts fluctuation in a processing load at the post-stage step, based on the content variation observation information, and changes the computational resources allocated to the post-stage step.
    Type: Application
    Filed: October 15, 2019
    Publication date: February 6, 2020
    Applicant: NEC Corporation
    Inventors: Takeshi ARIKUMA, Takatoshi KITANO
  • Publication number: 20200042355
    Abstract: Techniques are disclosed for reallocating host resources in a virtualized computing environment when certain criteria have been met. In some embodiments, a system identifies a host disabling event. In view of the disabling event, the system identifies a resource for reallocation from a first host to a second host. Based on the identification, the computer system disassociates the identified resource's virtual identifier from the first host device and associates the virtual identifier with the second host device. Thus, the techniques disclosed significantly reduce a system's planned and unplanned downtime.
    Type: Application
    Filed: June 18, 2019
    Publication date: February 6, 2020
    Inventors: Manoj Krishnan, Maarten Wiggers
  • Publication number: 20200042356
    Abstract: Methods and systems of managing a resource in a distributed resource management system can include: receiving a resource request by at least one processor in the distributed resource management system, the resource request identifying a requested resource type corresponding to at least one of: a class identifier identifying a resource class assigned to a composite resource, and a class identifier identifying at least one additional resource associated with the composite resource; determining availability of the requested resource type; and scheduling a workload associated with the resource request for execution based on the determination.
    Type: Application
    Filed: October 10, 2019
    Publication date: February 6, 2020
    Inventors: Lei Guo, Chong Chen, Jason Lam
  • Publication number: 20200042357
    Abstract: Techniques for implementing OS/hypervisor-based persistent memory are provided. In one embodiment, an OS or hypervisor running on a computer system can allocate a portion of the volatile memory of the computer system as a persistent memory allocation. The OS/hypervisor can further receive a signal from the computer system's BIOS indicating an AC power loss or cycle event and, in response to the signal, can save data in the persistent memory allocation to a nonvolatile backing store. Then, upon restoration of AC power to the computer system, the OS/hypervisor can restore the saved data from the nonvolatile backing store to the persistent memory allocation.
    Type: Application
    Filed: September 26, 2019
    Publication date: February 6, 2020
    Inventors: Venkata Subhash Reddy Peddamallu, Kiran Tati, Rajesh Venkatasubramanian, Pratap Subrahmanyam
  • Publication number: 20200042358
    Abstract: A memory allocation method and a server, wherein the method includes: identifying, by a server, a node topology table; generating fetch hop tables of the NUMA nodes based on the node topology table; calculating fetch priorities of the NUMA nodes based on the fetch hop tables of the NUMA nodes, and using an NC hop count as an important parameter for fetch priority calculation; and when a NUMA node applies for memory, allocating memory based on the fetch priority table, and for a higher priority, more preferentially allocating memory from a NUMA node corresponding to the priority.
    Type: Application
    Filed: October 8, 2019
    Publication date: February 6, 2020
    Applicant: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Beilei SUN, Shengyu SHEN, Jianrong XU
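A toy version of the hop-count-based priority calculation above; the NC (node controller) hop weight is an assumed tuning constant, chosen only so that NC hops dominate the cost, and the allocation fallback order is likewise an assumption.

```python
NC_WEIGHT = 10  # assumed: an NC hop is far costlier than a direct hop

def fetch_priorities(hop_table):
    """hop_table[node] = (direct_hops, nc_hops) as seen from the
    requesting NUMA node. Returns {node: rank}, where rank 0 is best."""
    cost = {n: d + NC_WEIGHT * nc for n, (d, nc) in hop_table.items()}
    return {n: rank for rank, n in enumerate(sorted(cost, key=cost.get))}

def allocate(hop_table, free_mem, size):
    """Preferentially allocate from the highest-priority node that
    still has enough free memory; fall back down the priority order."""
    prios = fetch_priorities(hop_table)
    for node in sorted(prios, key=prios.get):
        if free_mem.get(node, 0) >= size:
            free_mem[node] -= size
            return node
    return None
```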
  • Publication number: 20200042359
    Abstract: A method may include receiving requests for performing the medical applications from at least one client terminal via a user interface. Each of the medical applications may have a priority. The method may also include identifying, from the medical applications, a first medical application whose priority satisfies a condition and determining a first criterion associated with the first medical application indicating an estimated computing resource that the first medical application demands. The method may also include determining a second criterion associated with each of the plurality of computing devices indicating a characteristic of each of the plurality of computing devices. The method may further include identifying a first computing device from the plurality of computing devices based on the first criterion and the second criterion, and allocating the first computing device to execute the first medical application.
    Type: Application
    Filed: December 30, 2018
    Publication date: February 6, 2020
    Applicant: SHANGHAI UNITED IMAGING HEALTHCARE CO., LTD.
    Inventors: Zhiguo ZHANG, Chunshan YANG
  • Publication number: 20200042360
    Abstract: Implementations include actions of receiving, by an intelligent quality assurance (iQA) platform, a desired state (DS) file including data indicative of a desired state of a cloud computing environment, triggering, by the iQA platform, an auto-discovery process to provide an actual state of the cloud computing environment based on cloud resources instantiated within the cloud environment, and application resources executing within the cloud environment, the auto-discovery process including retrieving first credentials to enable automated access to the cloud computing environment, determining, by the iQA platform, a delta between the actual state, and the desired state, and providing, by the iQA platform, a report including the delta.
    Type: Application
    Filed: July 31, 2019
    Publication date: February 6, 2020
    Inventors: Jayanti Vemulapati, Murtuza Chitalwala
  • Publication number: 20200042361
    Abstract: Methods, systems, and computer readable media for distributing tasks using a blockchain network. A method includes generating a task for completion via an interactive application and distributing, using the blockchain network, the task via a block in a blockchain associated with the blockchain network. The blockchain network includes a plurality of nodes and is accessible by a plurality of client devices associated with the interactive application. The method further includes receiving, from one or more of the client devices, data associated with results of processing the task via the interactive application and validating completion of the task based on the received data. Validating completion of the task may include receiving a set of user inputs from a set of the client devices, respectively, as to whether the task was completed and making a consensus determination as to whether the task was completed based on the received set of user inputs.
    Type: Application
    Filed: August 1, 2019
    Publication date: February 6, 2020
    Inventor: Corey Clark
  • Publication number: 20200042362
    Abstract: Systems and methods are provided for implementing a self-adaptive batch dataset partitioning control process which is utilized in conjunction with a distributed deep learning model training process to optimize load balancing among a set of accelerator resources. An iterative batch size tuning process is configured to determine an optimal job partition ratio for partitioning mini-batch datasets into sub-batch datasets for processing by a set of hybrid accelerator resources, wherein the sub-batch datasets are partitioned into optimal batch sizes for processing by respective accelerator resources to minimize a time for completing the deep learning model training process.
    Type: Application
    Filed: September 17, 2018
    Publication date: February 6, 2020
    Inventors: Wei Cui, Sanping Li, Kun Wang
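A one-shot sketch of the partition-ratio idea above: split a mini-batch across accelerators in inverse proportion to their measured per-sample processing time, so all finish at roughly the same moment. The patent describes an iterative tuning process; this closed-form split, and its inputs, are simplifying assumptions.

```python
def partition(batch_size, per_sample_time):
    """per_sample_time: {accelerator: seconds per sample}. Returns the
    sub-batch size assigned to each accelerator."""
    speed = {a: 1.0 / t for a, t in per_sample_time.items()}
    total = sum(speed.values())
    shares = {a: round(batch_size * s / total) for a, s in speed.items()}
    # Rounding can leave a few samples over or under; charge the
    # difference to the fastest accelerator.
    fastest = max(speed, key=speed.get)
    shares[fastest] += batch_size - sum(shares.values())
    return shares
```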
  • Publication number: 20200042363
    Abstract: Disclosed are various aspects of replication management. A first set of resources is identified based on resources used by a virtual machine group executed by a first workload domain comprising at least one host within a rack. The first set of resources comprises a rack network resource of a rack. A property graph is generated to include configuration data for the first set of resources utilized by the virtual machine group. The configuration data includes settings for the rack network resource. A second set of resources of a second workload domain is configured using the property graph for the virtual machine group. The second set of resources is configured to include the settings for the rack network resource.
    Type: Application
    Filed: October 9, 2019
    Publication date: February 6, 2020
    Inventor: Karthick Selvaraj
  • Publication number: 20200042364
    Abstract: In an example, it is determined if a first cluster of computing nodes can host an additional service. If the first cluster of computing nodes can host the additional service, a first service hosted in a second cluster of computing nodes is identified as the additional service. Subsequently, an internet protocol (IP) address for the first service is reallocated from the second cluster of computing nodes to the first cluster of computing nodes.
    Type: Application
    Filed: July 31, 2018
    Publication date: February 6, 2020
    Inventors: Praveen Kumar Shimoga Manjunatha, Ravikumar Vallabhu
  • Publication number: 20200042365
    Abstract: Systems, methods, and computer software are disclosed for providing a Service Bus for telecommunications infrastructure. The service bus provides a communications system between mutually interacting software applications, including a plurality of microservices, each microservice comprising: an internal bus; a data store in communication with the internal bus; a data access object in communication with the internal bus; a message exchange object in communication with the internal bus; a MapReduce engine in communication with the internal bus; and a RESTful Application Programming Interface (API) bus in communication with the data access object, the message exchange object, and the MapReduce engine.
    Type: Application
    Filed: July 31, 2019
    Publication date: February 6, 2020
    Inventors: Poojan Tanna, Michael C. Silva
  • Publication number: 20200042366
    Abstract: This disclosure relates to an electronic device including a memory and at least one processor coupled to the memory. The at least one processor is configured to identify a device change event in a host operating system, wherein the host operating system includes a host namespace, switch from the host namespace to a container namespace of a container, and update the container with information based on the device change event.
    Type: Application
    Filed: July 29, 2019
    Publication date: February 6, 2020
    Inventors: Guruprasad Ganesh, Ahmed M. Azab, Rohan Bhutkar, Haining Chen, Ruowen Wang, Xun Chen, Donguk Seo, Kyoung-Joong Shin
  • Publication number: 20200042367
    Abstract: Provided are a computer program product, system, and method for determining when to send message to a computing node to process items using a machine learning module. A send message threshold indicates a send message parameter value for a send message parameter indicating when to send a message to the computing node with at least one requested item to process. Information related to sending of messages to the computing node to process requested items is provided to a machine learning module to produce a new send message parameter value for the send message parameter indicating when to send the message, which is set to the send message parameter value. A message is sent to the computing node to process at least one item in response to the current value satisfying the condition with respect to the send message parameter value.
    Type: Application
    Filed: July 31, 2018
    Publication date: February 6, 2020
    Inventors: Lokesh M. Gupta, Kevin J. Ash, Matthew G. Borlick, Kyler A. Anderson
  • Publication number: 20200042368
    Abstract: Provided are a computer program product, system, and method for determining when to send message to a computing node to process items by training a machine learning module. A machine learning module receives as input information related to sending of messages to the computing node to process items and outputs a send message parameter value for a send message parameter indicating when to send a message to the computing node. The send message parameter value is adjusted based on a performance condition and a performance condition threshold to produce an adjusted send message parameter value. The machine learning module is retrained with the input information related to the sending of messages to produce the adjusted send message parameter value. The retrained machine learning module is used to produce a new send message parameter value used to determine when to send a message.
    Type: Application
    Filed: July 31, 2018
    Publication date: February 6, 2020
    Inventors: Lokesh M. Gupta, Kevin J. Ash, Matthew G. Borlick, Kyler A. Anderson
  • Publication number: 20200042369
    Abstract: A method is used in monitoring an application in a computing environment. The method represents execution of the application on a system as a finite state machine. The finite state machine depicts at least one state of the application, where the state indicates at least one of successful application execution and unsuccessful application execution. The method identifies an error state within the finite state machine, where the error state indicates the unsuccessful application execution. The method identifies, by analyzing the finite state machine, a non-error state as a cause of the unsuccessful application execution, where the unsuccessful application execution is represented as a path comprising a plurality of states, where the path comprises the non-error state. The method maps the non-error state to a location in the application to identify the cause of the unsuccessful application execution.
    Type: Application
    Filed: July 31, 2018
    Publication date: February 6, 2020
    Applicant: EMC IP Holding Company LLC
    Inventors: Karun THANKACHAN, Prajnan GOSWAMI, Mohammad RAFEY
  • Publication number: 20200042370
    Abstract: A management entity receives device fingerprints representing corresponding devices connected to one or more networks. Each device fingerprint includes a multi-bit word indicating hardware, software, network configuration, and failure features for a corresponding one of the devices. The management entity processes the device fingerprints using different methods including statistical risk of failure scoring methods and machine learning risk of failure scoring methods, to produce from each of the methods a respective risk of failure for each device. The management entity combines the respective risk of failures for each device into a composite risk of failure for each device, ranks the devices based on the composite risk of failures for the devices, to produce a risk ranking of the devices, and outputs the risk ranking.
    Type: Application
    Filed: July 31, 2018
    Publication date: February 6, 2020
    Inventors: Nidhi Kao, Ulf Vinneras, John W. Garrett, JR.
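The score combination and ranking in this abstract could be sketched as below; the two toy scorers stand in for the statistical and machine-learning risk-of-failure methods and are pure assumptions.

```python
def composite_risk(fingerprints, scorers):
    """fingerprints: {device: multi-bit feature word};
    scorers: functions mapping a fingerprint to a risk in [0, 1].
    Combines the per-method risks by simple averaging (an assumption)."""
    return {dev: sum(s(fp) for s in scorers) / len(scorers)
            for dev, fp in fingerprints.items()}

def rank_devices(scores):
    # Highest composite risk of failure first.
    return sorted(scores, key=scores.get, reverse=True)
```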
  • Publication number: 20200042371
    Abstract: Various embodiments relate to a method for detecting a memory leak and an electronic device thereof, the electronic device including a processor, and a memory operatively connected to the processor, wherein the memory stores instructions which, when executed by the processor, control the electronic device to: acquire usage information for the memory of a process executed by the processor based on a collection period determined based at least partially on a characteristic of the process; identify a change pattern of a usage amount for the memory of the process based on the usage information; and determine whether a memory leak occurs based on the change pattern of the usage amount.
    Type: Application
    Filed: July 30, 2019
    Publication date: February 6, 2020
    Inventors: Sangjun PARK, Sungdo MOON, Mooyoung KIM
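The change-pattern analysis above can be approximated with a trend test over sampled usage. A simplified sketch, where the slope threshold and sample windows are arbitrary illustrative choices rather than anything the patent specifies:

```python
# Hedged sketch: sample a process's memory usage at a collection period
# and flag a possible leak when the usage pattern is persistently rising.
# The slope threshold and the sample data are illustrative.

def usage_trend(samples):
    """Least-squares slope of usage over sample index."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def looks_like_leak(samples, slope_threshold=1.0):
    """A steadily rising usage pattern suggests a leak."""
    return usage_trend(samples) > slope_threshold

steady = [100, 102, 99, 101, 100, 103]      # noisy but flat
rising = [100, 110, 121, 133, 140, 152]     # monotonic growth
```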
  • Publication number: 20200042372
    Abstract: In one embodiment, a device writes messages and corresponding trace-on-failure flags to log files when failure conditions are detected. The device propagates the trace-on-failure flags to headers of the log files. The device forms a file index of the log files that have trace-on-failure flags set in their headers. The device performs, using the file index, a lookup of messages in the log files associated with a particular error context. The device sends data from the lookup to an electronic display.
    Type: Application
    Filed: August 1, 2018
    Publication date: February 6, 2020
    Inventors: Clinton John Grant, Avinash Ashok Kumar Chiganmi, Calvin Michael Hareng, Winifred Yah Lee, Suman Sarkar
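The flag-propagation and indexing flow above can be mocked with in-memory structures. A sketch under the assumption that a log file holds (context, message) records and a single header flag; the file layout and class names are invented:

```python
# Illustrative sketch: write log records with a trace-on-failure flag,
# propagate the flag to a per-file header, index only flagged files, and
# look up messages for a particular error context.

class LogFile:
    def __init__(self, name):
        self.name = name
        self.header_flag = False
        self.records = []              # (context, message) tuples

    def write(self, context, message, trace_on_failure=False):
        self.records.append((context, message))
        if trace_on_failure:
            self.header_flag = True    # propagate the flag to the header

def build_index(log_files):
    """Index only files whose header flag is set."""
    return [f for f in log_files if f.header_flag]

def lookup(index, context):
    """Collect messages for an error context from indexed files only."""
    return [msg for f in index
            for ctx, msg in f.records if ctx == context]

a, b = LogFile("a.log"), LogFile("b.log")
a.write("net", "link flap detected", trace_on_failure=True)
a.write("disk", "scrub ok")
b.write("net", "routine heartbeat")    # no failure, so never indexed
index = build_index([a, b])
msgs = lookup(index, "net")
```

Skipping unflagged files at index time is what keeps the later lookup cheap.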
  • Publication number: 20200042373
    Abstract: A device operation anomaly identification and reporting system includes a device that generates an operating metric data stream. A management system is coupled to the device and receives and analyzes the operating metric data stream. The management system identifies peaks present in the operating metric data stream, and determines a peak height and a peak area for each of the peaks. The management system then clusters the peaks into height clusters based on their heights, and clusters the peaks into area clusters based on their areas. The management system then defines an operating periodicity for the device based on the height clusters and area clusters, and when the management system detects an operating anomaly in the device using the operating periodicity defined for the device, it generates and transmits an operating anomaly alert that reports the operating anomaly in the device.

    Type: Application
    Filed: August 2, 2018
    Publication date: February 6, 2020
    Inventor: Piotr Przestrzelski
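The peak-clustering and periodicity steps above can be sketched numerically. A simplified illustration, where the one-dimensional clustering rule, the median-based period, and the anomaly slack factor are all invented stand-ins for the patent's method:

```python
# Simplified sketch of the idea: find peaks in a metric stream, cluster
# them by height, derive an operating period from peak spacing, and flag
# a gap much longer than that period as an anomaly.

def find_peaks(stream):
    """Indices of samples strictly greater than both neighbours."""
    return [i for i in range(1, len(stream) - 1)
            if stream[i] > stream[i - 1] and stream[i] > stream[i + 1]]

def cluster_by_height(stream, peaks, tolerance=1.0):
    """Greedy 1-D clustering: a peak joins the first cluster whose
    initial member is within `tolerance` of its height."""
    clusters = []
    for i in peaks:
        for c in clusters:
            if abs(stream[c[0]] - stream[i]) <= tolerance:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters

def period_and_anomalies(peaks, slack=1.5):
    """Median spacing as the period; gaps > slack * period are anomalies."""
    gaps = [b - a for a, b in zip(peaks, peaks[1:])]
    period = sorted(gaps)[len(gaps) // 2]
    anomalies = [peaks[i] for i, g in enumerate(gaps) if g > slack * period]
    return period, anomalies

stream = [0, 5, 0, 0, 5, 0, 0, 5, 0, 0, 0, 0, 5, 0]   # a peak is skipped
peaks = find_peaks(stream)
clusters = cluster_by_height(stream, peaks)
period, anomalies = period_and_anomalies(peaks)
```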
  • Publication number: 20200042374
    Abstract: An apparatus, method, and computer program product are provided to detect error conditions and otherwise monitor the status of request data object and network response assets and related systems to allow for the efficient movement of network resources and other resources in high-volume network environments. In some example implementations, otherwise unrelated request data objects and their related parameters, along with otherwise unrelated network response asset systems are depicted on a single interface such that pairings between request data objects and network response assets, and other status information can be readily viewed. Some example implementations contemplate the use of location data in connection with error detection and remediation. Some example implementations also contemplate the establishment and use of a communication channel between an interface system and a system associated with a request data object and/or a network response asset upon the detection of an error condition.
    Type: Application
    Filed: March 7, 2019
    Publication date: February 6, 2020
    Inventors: Kyle Fritz, Paul Barry, Jamie Gaskins
  • Publication number: 20200042375
    Abstract: A method to detect hardware and software errors in an embedded system is disclosed. The method includes: detecting or measuring, by a plurality of sensors, an operating state of the embedded system; operating a plurality of replicated computation engines in group synchrony, wherein the plurality of replicated computation engines are replicated instances of a single computation engine and wherein the plurality of replicated computation engines are grouped into one or more groups such that, for each group, each member of the group starts in a same processing logic state and processes same events in the same order; intercepting output of the plurality of sensors and transmitting the output to each replicated computation engine of a group in a defined order; and actuating selected computation engines of the plurality of replicated computation engines and arbitrating between outputs of the selected computation engines.
    Type: Application
    Filed: September 13, 2019
    Publication date: February 6, 2020
    Applicant: 2236008 Ontario Inc.
    Inventors: Christopher William Lewis HOBBS, Kerry Wayne JOHNSON
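The group-synchrony and arbitration idea above can be shown with replicated functions and a majority vote. A minimal sketch in which the "engines" are plain callables and the injected fault is invented; real arbitration would sit in front of actuators:

```python
# Minimal sketch: feed replicated instances of one computation engine the
# same events in the same order, then arbitrate among their outputs by
# majority vote. The engines and the injected fault are illustrative.

from collections import Counter

def run_group(engines, events):
    """Every engine processes the same events in the same order."""
    return [[engine(e) for e in events] for engine in engines]

def arbitrate(outputs_per_engine):
    """Majority vote per event across the engine outputs."""
    voted = []
    for per_event in zip(*outputs_per_engine):
        voted.append(Counter(per_event).most_common(1)[0][0])
    return voted

healthy = lambda x: x * 2
faulty = lambda x: x * 2 + (1 if x == 3 else 0)   # one corrupted output

events = [1, 2, 3, 4]
outputs = run_group([healthy, healthy, faulty], events)
result = arbitrate(outputs)                        # the fault is outvoted
```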
  • Publication number: 20200042376
    Abstract: A computing device including: more than two Universal Serial Bus (USB) ports configured to be connected respectively to more than two mobile devices simultaneously; at least one processor coupled to the USB ports; and a memory storing instructions configured to instruct the at least one processor to reprogram, through the more than two USB ports, the more than two mobile devices simultaneously.
    Type: Application
    Filed: October 8, 2019
    Publication date: February 6, 2020
    Inventor: George Huang
  • Publication number: 20200042377
    Abstract: Systems and methods are provided for agentless error management by an agentless system. The agentless system can include a management processor and a memory that stores agentless management firmware. Execution of the firmware causes the management processor to obtain first graphic data corresponding to actual output graphics that are displayed via a display device. An error is detected in the actual output graphics. The error can indicate one or more differences between the actual output graphics and intended output graphics. The detected error can then be addressed, such that it is remedied, or remediation is attempted, by eliminating the differences and/or extraneous graphical content from the actual output graphics.
    Type: Application
    Filed: July 31, 2018
    Publication date: February 6, 2020
    Inventor: Andrew Brown
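The comparison step above amounts to diffing captured output against intended output. A toy sketch in which frames are small 2-D lists rather than real framebuffer captures:

```python
# Minimal sketch of the comparison step: diff the actual output graphics
# against the intended graphics element by element and report the
# differing positions, which a manager could then attempt to remedy.

def diff_frames(actual, intended):
    """Positions where the actual frame differs from the intended one."""
    return [(r, c)
            for r, row in enumerate(actual)
            for c, px in enumerate(row)
            if px != intended[r][c]]

intended = [[0, 0, 0],
            [0, 1, 0]]
actual   = [[0, 0, 0],
            [0, 1, 9]]                 # one corrupted pixel
errors = diff_frames(actual, intended)
```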
  • Publication number: 20200042378
    Abstract: Various method and apparatus embodiments for data dependent error correction code (ECC) encoding are disclosed. In one embodiment, a data object may include multiple portions, with each portion having different characteristics. An ECC encoder may allocate error correction resources (e.g., parity bits) to the different portions at respectively different data rates (e.g., more error correction resources to some portions relative to other portions). Upon completion of the allocation, the data object and the associated error correction resources are forwarded to a storage medium for storage therein.
    Type: Application
    Filed: July 31, 2018
    Publication date: February 6, 2020
    Inventors: Ofir Pele, Ariel Navon, Alex Bazarsky
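The unequal-allocation idea above can be shown with a per-portion rate table. A deliberately simplified sketch: the "parity" here is just a bit count proportional to a rate, standing in for a real ECC code, and the portion labels and rates are invented:

```python
# Hedged sketch: allocate error correction resources (parity bits) to
# portions of a data object at different rates, giving more parity to
# portions whose characteristics warrant heavier protection.

def allocate_parity(portions, rates):
    """Return (portion, parity_bits) pairs; parity grows with the rate."""
    allocated = []
    for data, kind in portions:
        rate = rates[kind]             # parity bits per data byte
        allocated.append((data, len(data) * rate))
    return allocated

portions = [
    (b"headerdata", "critical"),       # 10 bytes, protect heavily
    (b"bulkpayloadbytes", "bulk"),     # 16 bytes, protect lightly
]
rates = {"critical": 4, "bulk": 1}
allocated = allocate_parity(portions, rates)
```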
  • Publication number: 20200042379
    Abstract: A data processing apparatus includes a processor; and a direct memory access (DMA) controller coupled to the processor, the DMA controller including a control circuit that controls a DMA transfer of data, an error detection circuit that performs an error detection on the data based on a character assigned in association with the data to output a result of the error detection to the control circuit, and a diagnosis circuit that disconnects the control circuit from the error detection circuit to diagnose an operation of the error detection circuit and provide a diagnosis result to the processor.
    Type: Application
    Filed: July 26, 2019
    Publication date: February 6, 2020
    Applicant: FUJITSU LIMITED
    Inventor: Shinya Miyata
  • Publication number: 20200042380
    Abstract: Example peer storage systems, storage devices, and methods provide data scrub using a peer communication channel. Peer storage devices establish peer communication channels that communicate data among the peer storage devices. A storage device may identify data segments from its storage medium for a data scrub process. A peer storage device may be identified that contains corresponding data segments to the data segment being scrubbed. A corresponding lock command may be sent over the peer communication channel to lock the corresponding data segments during the data scrub process. A data scrub error report may be generated from the data scrub process. If an error is discovered during the data scrub process, the storage device may use the peer communication channel to retrieve recovery data from peer storage devices to rebuild the data segment with the error.
    Type: Application
    Filed: August 3, 2018
    Publication date: February 6, 2020
    Inventor: Adam Roberts
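The scrub-lock-rebuild flow above can be mocked end to end. A sketch in which the peer channel is simulated by direct method calls and segments are byte strings; CRC-32 stands in for whatever integrity check a real device uses:

```python
# Illustrative sketch: scrub a local data segment against its checksum,
# lock the peer's corresponding segment during the scrub, and rebuild
# from the peer's copy on a mismatch. The peer "channel" is just method
# calls here; real devices would use a peer communication channel.

import zlib

class Peer:
    def __init__(self, segments):
        self.segments = segments       # segment id -> bytes
        self.locked = set()

    def lock(self, seg_id):
        self.locked.add(seg_id)

    def unlock(self, seg_id):
        self.locked.discard(seg_id)

    def recovery_data(self, seg_id):
        return self.segments[seg_id]

def scrub_segment(local, checksums, seg_id, peer):
    """Return (had_error, segment_bytes_after_scrub)."""
    peer.lock(seg_id)                  # hold the peer copy stable
    try:
        data = local[seg_id]
        if zlib.crc32(data) == checksums[seg_id]:
            return False, data
        repaired = peer.recovery_data(seg_id)   # rebuild from the peer
        local[seg_id] = repaired
        return True, repaired
    finally:
        peer.unlock(seg_id)

good = b"intact segment"
corrupt = b"bit-rotted bytes"
local = {0: good, 1: corrupt}
checksums = {0: zlib.crc32(good), 1: zlib.crc32(b"original bytes!!")}
peer = Peer({0: good, 1: b"original bytes!!"})
err0, _ = scrub_segment(local, checksums, 0, peer)
err1, fixed = scrub_segment(local, checksums, 1, peer)
```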
  • Publication number: 20200042381
    Abstract: A method for responding to a read request from a user for a set of encoded data slices (EDSs) in a distributed storage network begins with a processing module determining that a threshold number of encoded data slices is not available and continues with the processing module determining that one or more copies are available for the set of EDSs. The method continues with the processing module determining whether a combination of the one or more additional EDSs within the copy of the set of EDSs and the available EDSs from the set of EDSs is at least a read threshold number of EDSs, and when a read threshold is available based on the combination, the processing module responds to the request using the combination.
    Type: Application
    Filed: August 1, 2018
    Publication date: February 6, 2020
    Inventors: Harsha Hegde, Venkata G. Badanahatti
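The threshold check above is essentially a set-union count. A minimal sketch, with slice identifiers as plain integers and an invented read threshold:

```python
# Minimal sketch of the threshold check: when the primary set of encoded
# data slices (EDSs) falls below the read threshold, add distinct slices
# from stored copies of the set and respond only if the combined total
# reaches the threshold.

def can_respond(available, copies, read_threshold):
    """Combine primary slices with slices from copies of the set."""
    combined = set(available)
    for copy_slices in copies:
        combined |= copy_slices
        if len(combined) >= read_threshold:
            return True, combined
    return len(combined) >= read_threshold, combined

primary = {1, 2, 4}                    # slices 3 and 5 unavailable
copy_a = {2, 3}                        # a copy contributes slice 3
ok, combined = can_respond(primary, [copy_a], read_threshold=4)
```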
  • Publication number: 20200042382
    Abstract: A storage unit (SU) in a dispersed storage network (DSN) coordinates with affiliated dispersed storage units (SUs) to designate a leader SU among the plurality of SUs and, when the SU is designated the leader, receives management information that is associated with the affiliated SUs from at least some of the affiliated SUs. The SU processes the management information from the at least some of the affiliated SUs to determine whether at least one of the affiliated SUs is offline and, based on a determination that at least one of the affiliated SUs is offline, transmits the management information for the affiliated SUs to one or more administrators associated with the DSN.
    Type: Application
    Filed: August 1, 2018
    Publication date: February 6, 2020
    Inventors: Bart R. Cilfone, Alan M. Frazier, Patrick A. Tamborski, Sanjaya Kumar, Manish Motwani
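The leader-and-offline-detection flow above can be sketched with heartbeats. A hedged illustration: election by lowest unit id and a fixed staleness timeout are deliberate simplifications, and all unit names and timestamps are invented:

```python
# Hedged sketch: pick a leader among affiliated storage units, have the
# leader collect last-heartbeat timestamps from its peers, and report
# units whose heartbeat is stale as offline to the administrators.

def elect_leader(unit_ids):
    """Deterministic election: every unit agrees on the lowest id."""
    return min(unit_ids)

def offline_units(heartbeats, now, timeout):
    """Units whose last heartbeat is older than the timeout."""
    return sorted(u for u, last in heartbeats.items()
                  if now - last > timeout)

units = ["su-2", "su-5", "su-9"]
leader = elect_leader(units)
heartbeats = {"su-2": 100.0, "su-5": 61.0, "su-9": 97.5}
stale = offline_units(heartbeats, now=100.0, timeout=30.0)
```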
  • Publication number: 20200042383
    Abstract: A memory system may include: a memory device configured to perform one or more of data write, read and erase operations; and a controller configured to execute an error management command and control the operation of the memory device, wherein the error management command is configured to determine first data which is highly likely to cause a read fail, among data stored in the memory device, determine one or more second data which is used to generate predicted error parity, and generate the predicted error parity based on the determined first and second data, and wherein the memory device performs the write operation to store indexes of the first and second data and the predicted error parity, under control of the controller.
    Type: Application
    Filed: April 15, 2019
    Publication date: February 6, 2020
    Inventor: Su Jin LIM
  • Publication number: 20200042384
    Abstract: A storage system includes memory cells arranged in an array and a memory controller coupled to the memory cells for controlling operations of the memory cells. The memory controller is configured to perform a read operation in response to a read command from a host, perform a first soft decoding of data from the read operation using existing LLR (log likelihood ratio) values stored in the memory controller, update existing LLR values using LLR values from neighboring memory cells and existing weight coefficients that account for influence from the neighboring memory cells. The memory controller is also configured to perform a second soft decoding using the updated LLR values. If the second soft decoding is successful, the memory controller performs a recursive update of weight coefficients to reflect updated influence from neighboring memory cells and stores the updated weight coefficient in the memory controller for use in further decoding.
    Type: Application
    Filed: May 23, 2019
    Publication date: February 6, 2020
    Inventors: Naveen Kumar, Aman Bhatia, Yu Cai, Fan Zhang
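The neighbour-weighted LLR update above can be shown numerically. A simplified one-dimensional sketch: the weights and LLR values are invented, and a real controller would learn the weight coefficients from successful decodes rather than fix them:

```python
# Simplified numerical sketch: adjust a cell's LLR (log likelihood
# ratio) using the LLRs of neighbouring cells weighted by coefficients
# that model inter-cell influence, before attempting a second decode.

def update_llrs(llrs, weights):
    """New LLR = own LLR plus weighted contributions from neighbours."""
    updated = []
    for i, llr in enumerate(llrs):
        left = llrs[i - 1] if i > 0 else 0.0
        right = llrs[i + 1] if i < len(llrs) - 1 else 0.0
        updated.append(llr + weights[0] * left + weights[1] * right)
    return updated

llrs = [2.0, -0.5, 3.0]                # the middle cell is uncertain
updated = update_llrs(llrs, weights=(0.2, 0.2))
```

Here confident neighbours pull the uncertain middle cell toward their sign, which is the intuition behind retrying the soft decode with updated LLRs.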
  • Publication number: 20200042385
    Abstract: An error correcting circuit receives a codeword including user data and a parity code, and performs an error correction operation on the user data. The circuit includes a first buffer, a decoder, a second buffer and a processor. The first buffer stores the codeword and sequentially outputs pieces of subgroup data obtained by dividing the codeword. The decoder generates pieces of integrity data for each of the pieces of subgroup data received from the first buffer, and performs the error correction operation on the user data using the parity code. The second buffer sequentially stores the pieces of integrity data for each of the pieces of subgroup data. The processor determines whether an error is present in the codeword based on the pieces of integrity data stored in the second buffer when at least one of the pieces of integrity data is updated in the second buffer.
    Type: Application
    Filed: June 18, 2019
    Publication date: February 6, 2020
    Inventors: YOUNG-JUN HWANG, MYUNG-KYU LEE, HONG-RAK SON, GEUN-YEONG YU, Ki-JUN LEE
  • Publication number: 20200042386
    Abstract: A storage controller for a storage system is provided. The storage system includes a host interface, a storage interface, a buffer coupled with the host interface and the storage interface, a storage encoder coupled with the buffer, and a storage decoder coupled with the buffer. The storage encoder and storage decoder are configured to use scatter-gather lists in reading data streams from the buffer, and storing data streams to the buffer. They are also configured to provide error correction coding and decoding, with the ability to regenerate missing data blocks.
    Type: Application
    Filed: August 2, 2019
    Publication date: February 6, 2020
    Applicant: Burlywood, Inc.
    Inventor: David Christopher Pruett
  • Publication number: 20200042387
    Abstract: An apparatus comprises at least one processing device comprising a processor coupled to a memory that is configured to initiate a read data request utilizing a logical address of a content addressable storage system that maps to a physical address comprising an offset on a storage device that internally maps the offset to a first sector. The processing device is also configured to determine a health of the first sector responsive to the read data request failing, to recover data stored in the first sector responsive to the first sector being a bad sector, and to overwrite the recovered data to the logical address while maintaining the mapping to the physical address by directing a write of the recovered data to the offset to update the internal mapping of the offset in the storage device to a new physical location corresponding to a second sector different than the first sector.
    Type: Application
    Filed: July 31, 2018
    Publication date: February 6, 2020
    Inventors: Nimrod Shani, Anton Kucherov, Lior Kamran, Leron Fliess
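The key point of the abstract above, that the logical-to-physical mapping survives recovery because only the drive's internal offset-to-sector map moves, can be mocked with toy structures. Everything here (class names, the recovery callback) is invented for illustration:

```python
# Sketch of the remapping idea: reads go logical offset -> internal
# sector; on a bad sector the recovered data is rewritten to the *same*
# offset, and only the drive's internal offset-to-sector map moves to a
# healthy sector, so upper-layer mappings stay valid.

class Drive:
    def __init__(self):
        self.offset_to_sector = {}
        self.sectors = {}              # sector id -> bytes
        self.bad = set()
        self._next = 0

    def write(self, offset, data):
        sector = self._next            # always place data on a new sector
        self._next += 1
        self.offset_to_sector[offset] = sector   # internal remap
        self.sectors[sector] = data

    def read(self, offset):
        sector = self.offset_to_sector[offset]
        if sector in self.bad:
            raise IOError(f"bad sector {sector}")
        return self.sectors[sector]

def read_with_recovery(drive, offset, recover):
    """On failure, recover the data and rewrite it to the same offset."""
    try:
        return drive.read(offset)
    except IOError:
        data = recover()               # e.g. rebuilt from replicas
        drive.write(offset, data)      # same offset, new internal sector
        return data

drive = Drive()
drive.write(0x40, b"payload")
first_sector = drive.offset_to_sector[0x40]
drive.bad.add(first_sector)            # simulate sector failure
data = read_with_recovery(drive, 0x40, recover=lambda: b"payload")
```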
  • Publication number: 20200042388
    Abstract: Example redundant array of independent disks (RAID) storage systems and methods provide rebuild of logical data groups. Storage devices are configured as a storage array for storing logical data groups distributed among the storage devices. The logical data groups are written in a configuration of RAID stripes in the storage devices. A failed storage device may be rebuilt using the RAID stripes and completed rebuilds of logical blocks may be tracked during the device rebuild process. A logical group rebuild status may be determined by comparing the completed rebuilds of logical blocks to a logical group map. The logical group rebuild status for each logical data group may be provided as complete in response to all logical blocks in the logical data group having been rebuilt. In the event the array rebuild fails, the logical groups that did complete rebuild may be brought online as a partially completed rebuild to prevent the loss of the entire array.
    Type: Application
    Filed: August 3, 2018
    Publication date: February 6, 2020
    Inventor: Adam Roberts
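The per-group rebuild tracking above is a subset comparison against a logical group map. A minimal sketch with invented group names and block numbers:

```python
# Illustrative sketch of the tracking step: record which logical blocks
# have been rebuilt, compare against a map of blocks per logical data
# group, and report a group complete only when all of its blocks are
# rebuilt. Completed groups can be brought online even if the full
# array rebuild later fails.

def group_status(rebuilt_blocks, group_map):
    """Per-group status: 'complete' if every block was rebuilt."""
    status = {}
    for group, blocks in group_map.items():
        done = blocks <= rebuilt_blocks          # subset test
        status[group] = "complete" if done else "partial"
    return status

group_map = {
    "vol-a": {0, 1, 2},
    "vol-b": {3, 4},
}
rebuilt = {0, 1, 2, 3}                 # rebuild failed before block 4
status = group_status(rebuilt, group_map)
online = sorted(g for g, s in status.items() if s == "complete")
```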