Patent Applications Published on September 20, 2018
  • Publication number: 20180267818
    Abstract: Example methods are provided for locating an operating system (OS) data structure on a host according to a hypervisor-assisted approach. The method may comprise a virtualized computing instance identifying a guest virtual memory address range in which the OS data structure is stored; and configuring a hypervisor to generate notification data associated with the guest virtual memory address range. The method may further comprise the virtualized computing instance manipulating the OS data structure; obtaining notification data generated by the hypervisor in response to the manipulation; and determining a location associated with the OS data structure based on the notification data.
    Type: Application
    Filed: June 8, 2017
    Publication date: September 20, 2018
    Inventors: PRASAD DABAK, GORESH MUSALAY
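An illustrative sketch of the notification-driven approach in 20180267818, written in Python rather than real hypervisor machinery: a toy hypervisor watches a guest-virtual address range, the "guest" manipulates the OS data structure, and the structure's location is inferred from the generated notifications. All names here are hypothetical.

```python
# Illustrative sketch (not the patented implementation): a guest asks a
# hypervisor stand-in to watch a guest-virtual address range, manipulates the
# OS data structure, and infers its location from the access notifications.

class ToyHypervisor:
    def __init__(self):
        self.watched = None          # (start, end) range to monitor
        self.notifications = []      # addresses touched inside the range

    def watch(self, start, end):
        self.watched = (start, end)

    def on_guest_write(self, address):
        start, end = self.watched
        if start <= address < end:
            self.notifications.append(address)


def locate_structure(hypervisor, manipulate):
    """Run the guest-side manipulation and derive the structure's location."""
    manipulate()                              # guest touches the OS structure
    if not hypervisor.notifications:
        return None
    return min(hypervisor.notifications)      # lowest touched address ~ base


if __name__ == "__main__":
    hv = ToyHypervisor()
    hv.watch(0x1000, 0x2000)                  # candidate guest-virtual range

    def manipulate():
        # Simulated guest writes that land inside the watched range.
        for offset in (0x20, 0x24, 0x28):
            hv.on_guest_write(0x1500 + offset)

    print(hex(locate_structure(hv, manipulate)))   # -> 0x1520
```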
  • Publication number: 20180267819
    Abstract: Example methods are provided for locating an operating system (OS) data structure on a host according to a hypervisor-assisted approach. The method may comprise a virtualized computing instance identifying a guest virtual memory address range in which the OS data structure is stored; and configuring the hypervisor to perform a safe read on the guest virtual memory address range to access data stored within the guest virtual memory address range. The method may further comprise the virtualized computing instance performing attribute matching by comparing the data stored within the guest virtual memory address range with attribute data associated with the OS data structure; and determining a location associated with the OS data structure based on the attribute matching.
    Type: Application
    Filed: June 8, 2017
    Publication date: September 20, 2018
    Inventors: PRASAD DABAK, GORESH MUSALAY
  • Publication number: 20180267820
    Abstract: Disclosed are a method, apparatus, and system for selectively providing a virtual machine through actual measurement of power-usage efficiency. When a user terminal requests a virtual machine, candidate virtual machines are activated on multiple virtual machine servers. Input data provided by the user terminal is replicated and delivered through network virtualization to each of the candidate virtual machines, so that identical candidate virtual machines run on the multiple virtual machine servers. While the candidate virtual machines run, one of them is finally selected as the virtual machine to be provided to the user terminal, based on its measured power-usage efficiency.
    Type: Application
    Filed: August 16, 2017
    Publication date: September 20, 2018
    Inventor: Su-Min JANG
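A minimal sketch of the selection step in 20180267820, with assumed metrics (work done and energy consumed during a trial run) standing in for actual power-usage measurements:

```python
# Illustrative sketch (assumed metrics, not the patented method): identical
# candidate VMs process the same replicated input, and the one with the best
# measured power-usage efficiency is selected to serve the user terminal.

from dataclasses import dataclass

@dataclass
class CandidateVM:
    name: str
    work_done: float            # e.g. requests processed during the trial run
    energy_used_joules: float

    @property
    def efficiency(self) -> float:
        return self.work_done / self.energy_used_joules


def select_vm(candidates):
    return max(candidates, key=lambda vm: vm.efficiency)


if __name__ == "__main__":
    candidates = [
        CandidateVM("vm-on-server-a", work_done=1000, energy_used_joules=420.0),
        CandidateVM("vm-on-server-b", work_done=1000, energy_used_joules=365.0),
        CandidateVM("vm-on-server-c", work_done=1000, energy_used_joules=510.0),
    ]
    print(select_vm(candidates).name)   # vm-on-server-b
```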
  • Publication number: 20180267821
    Abstract: Techniques for enabling communication between a virtual machine and the host of the virtual machine are disclosed. An example computing device includes a host operating system and a virtual machine running on the host operating system. The computing device also includes a split driver. The split driver includes a frontend driver residing on the virtual machine and a backend driver residing on the host. The split driver processes messages received from the virtual machine and passes the messages from the frontend driver to the backend driver.
    Type: Application
    Filed: March 29, 2016
    Publication date: September 20, 2018
    Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
    Inventors: Sudheer Kurichiyath, Joel E. Lilienkamp
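A rough Python model of the split-driver arrangement in 20180267821, with a queue standing in for the guest-to-host transport; the class and method names are illustrative, not HPE's API:

```python
# Illustrative sketch (hypothetical names): a "split driver" modeled as a
# frontend in the guest that forwards messages through a shared queue to a
# backend on the host, which actually services them.

import queue

class BackendDriver:
    def handle(self, message):
        print(f"host backend handling: {message}")

class FrontendDriver:
    def __init__(self, channel: queue.Queue):
        self.channel = channel

    def send(self, message):
        self.channel.put(message)       # guest side: pass message to the host

class SplitDriver:
    def __init__(self):
        self.channel = queue.Queue()
        self.frontend = FrontendDriver(self.channel)
        self.backend = BackendDriver()

    def pump(self):
        while not self.channel.empty():
            self.backend.handle(self.channel.get())

if __name__ == "__main__":
    driver = SplitDriver()
    driver.frontend.send({"op": "read", "block": 42})
    driver.frontend.send({"op": "write", "block": 43})
    driver.pump()
```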
  • Publication number: 20180267822
    Abstract: A computer system with a hypervisor detects the local physical connection of a portable storage device with an operating system image thereon. The computer system installs an instance of the operating system on the hypervisor, and the hypervisor assigns a set of resources to the instance to generate a virtual machine. As further portable storage devices with operating systems thereon are locally, physically connected to the computer system, additional virtual machines are generated, each with a different operating system instance, which may be the same or different from the other operating system instances. The resources of the computer system are assigned and reassigned among the virtual machines as the portable storage devices are connected and disconnected.
    Type: Application
    Filed: December 18, 2017
    Publication date: September 20, 2018
    Inventors: Kevin G. Carr, Thomas D. Fitzsimmons, Johnathon J. Hoste, Angel A. Merchan
  • Publication number: 20180267823
    Abstract: A computer system with a hypervisor detects the local physical connection of a portable storage device with an operating system image thereon. The computer system installs an instance of the operating system on the hypervisor, and the hypervisor assigns a set of resources to the instance to generate a virtual machine. As further portable storage devices with operating systems thereon are locally, physically connected to the computer system, additional virtual machines are generated, each with a different operating system instance, which may be the same or different from the other operating system instances. The resources of the computer system are assigned and reassigned among the virtual machines as the portable storage devices are connected and disconnected.
    Type: Application
    Filed: February 13, 2018
    Publication date: September 20, 2018
    Inventors: Kevin G. Carr, Thomas D. Fitzsimmons, Johnathon J. Hoste, Angel A. Merchan
  • Publication number: 20180267824
    Abstract: Performance of parallel portions of a streaming application implemented in multiple virtual machines (VMs) is monitored and logged. When there is a need to replicate operators in an existing virtual machine to a new virtual machine, the logged performance data is used to determine a desired configuration for the new virtual machine based on a past implementation of the existing virtual machine.
    Type: Application
    Filed: May 15, 2018
    Publication date: September 20, 2018
    Inventors: Lance Bragstad, Michael J. Branson, Bin Cao, James E. Carey, Mathew R. Odden
  • Publication number: 20180267825
    Abstract: Responsive to receiving a first request from an application to create a thread for the application, a guest operating system sends a first notification to a hypervisor to create a dedicated virtual processor for the thread. Responsive to receiving an identifier associated with the dedicated virtual processor from the hypervisor, the guest operating system starts the thread using the dedicated virtual processor, and pins the thread to the dedicated virtual processor.
    Type: Application
    Filed: May 21, 2018
    Publication date: September 20, 2018
    Inventors: Michael Tsirkin, Amnon Ilan
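A simulation-only sketch of 20180267825: a guest-OS stand-in asks a toy hypervisor for a dedicated virtual processor when an application thread is created and records the pinning. The hypervisor API shown is hypothetical.

```python
# Illustrative sketch (simulation only, hypothetical API): the guest OS asks a
# hypervisor stand-in for a dedicated virtual processor when an application
# creates a thread, then starts the thread "pinned" to that vCPU identifier.

import itertools
import threading

class ToyHypervisor:
    _ids = itertools.count()

    def create_dedicated_vcpu(self) -> int:
        return next(self._ids)          # identifier of the new dedicated vCPU

class GuestOS:
    def __init__(self, hypervisor):
        self.hypervisor = hypervisor
        self.pinning = {}               # thread name -> dedicated vCPU id

    def create_thread(self, target, name):
        vcpu = self.hypervisor.create_dedicated_vcpu()   # notify hypervisor
        thread = threading.Thread(target=target, name=name)
        self.pinning[name] = vcpu       # record the pin before starting
        thread.start()
        thread.join()
        return vcpu

if __name__ == "__main__":
    guest = GuestOS(ToyHypervisor())
    vcpu = guest.create_thread(lambda: print("app thread running"), "worker-0")
    print(f"worker-0 pinned to dedicated vCPU {vcpu}")
```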
  • Publication number: 20180267826
    Abstract: Examples are disclosed for composing memory resources across devices. In some examples, memory resources associated with executing one or more applications by circuitry at two separate devices may be composed across the two devices. The circuitry may be capable of executing the one or more applications using a two-level memory (2LM) architecture including a near memory and a far memory. In some examples, the near memory may include near memories separately located at the two devices and a far memory located at one of the two devices. The far memory may be used to migrate one or more copies of memory content between the separately located near memories in a manner transparent to an operating system for the first device or the second device. Other examples are described and claimed.
    Type: Application
    Filed: October 23, 2017
    Publication date: September 20, 2018
    Applicant: INTEL CORPORATION
    Inventors: Neven M. ABOU GAZALA, Paul S. DIEFENBAUGH, Nithyananda S. JEGANATHAN, Eugene GORBATOV
  • Publication number: 20180267827
    Abstract: An information processing device connectable to a plurality of storage devices includes a power source circuit configured to supply power from a backup power source to each of the plurality of storage devices in response to a power loss event, and a processor. The processor is configured to transmit, to each of the storage devices, a first instruction to save user data that have been transmitted to the storage device and have not been written in a non-volatile manner, in response to the power loss event, and transmit, to at least one of the storage devices, a second instruction to save updated address translation information that corresponds to the user data and has not been reflected in an address translation table, upon receiving a response indicating completion of saving the user data from each of the storage devices.
    Type: Application
    Filed: August 23, 2017
    Publication date: September 20, 2018
    Inventor: Shinichi KANNO
  • Publication number: 20180267828
    Abstract: A method of ordering multiple resources in a transaction includes receiving a transaction for a plurality of resources and determining, for each resource, the work embodied by the transaction. The work includes at least one identified parameter relating to an operation for the resource. The method may further include specifying an order of the resources according to the determination of the work, committing the transaction, and invoking the resources in the selected order. Specifying the order of the resources may include specifying the resource to be invoked last. Alternatively, or additionally, specifying the order of the resources may also include specifying that each resource carrying out read-only work be invoked first.
    Type: Application
    Filed: May 22, 2018
    Publication date: September 20, 2018
    Inventors: Andrew Wilkinson, David John Vines
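A small sketch of the ordering rule described in 20180267828, under the assumption that each resource is tagged as read-only or updating and that one resource may be designated to be invoked last:

```python
# Illustrative sketch (assumed data model): order the resources touched by a
# transaction so that read-only resources are invoked first and one designated
# resource is invoked last, as the abstract describes.

def order_resources(resources, last=None):
    """resources: list of (name, is_read_only) tuples."""
    read_only = [r for r in resources if r[1] and r[0] != last]
    updating = [r for r in resources if not r[1] and r[0] != last]
    ordered = read_only + updating
    if last is not None:
        ordered.append(next(r for r in resources if r[0] == last))
    return [name for name, _ in ordered]

if __name__ == "__main__":
    resources = [("db", False), ("audit-log", True), ("queue", False)]
    print(order_resources(resources, last="db"))
    # ['audit-log', 'queue', 'db']
```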
  • Publication number: 20180267829
    Abstract: A method for configuring an IT system having at least one computing core for executing instruction threads, in which each computing core is capable of executing at least two instruction threads at a time in an interleaved manner, and an operating system, executed on the IT system, capable of providing instruction threads to each computing core. The method includes a step of configuring the operating system to execute in a mode in which it provides each computing core with a maximum of one instruction thread at a time.
    Type: Application
    Filed: May 24, 2018
    Publication date: September 20, 2018
    Inventors: Xavier BRU, Philippe GARRIGUES, Benoît WELTERLEN
  • Publication number: 20180267830
    Abstract: A policy-driven method of migrating a virtual computing resource that is executing an application workload includes the steps of determining that at least one of multiple policies of the application has been violated by the virtual computing resource while executing the workload in a first virtual data center, and responsive to said determining, programmatically performing: (1) searching for a second virtual data center to which the virtual computing resource can be migrated, (2) determining that the virtual computing resource will be able to comply with all of the policies of the application while executing the workload if the virtual computing resource is migrated to the second virtual data center, and (3) based on determining the ability to comply, migrating the virtual computing resource across clouds, namely from the first virtual data center to the second virtual data center.
    Type: Application
    Filed: March 17, 2017
    Publication date: September 20, 2018
    Inventors: Rawlinson RIVERA, Chen WEI, Caixue LIN, Ping CHEN
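An illustrative sketch of the decision logic in 20180267830, with application policies modeled as predicates over assumed data-center attributes; the actual migration mechanics are omitted:

```python
# Illustrative sketch (hypothetical policy model): when any application policy
# is violated in the current virtual data center, search the other data centers
# for one where every policy would be satisfied, then migrate there.

def violated(policies, datacenter):
    return [p for p in policies if not p(datacenter)]

def choose_target(policies, current_dc, candidate_dcs):
    if not violated(policies, current_dc):
        return current_dc                       # nothing to do
    for dc in candidate_dcs:
        if dc is not current_dc and not violated(policies, dc):
            return dc                           # complies with all policies
    return None                                 # no compliant placement found

if __name__ == "__main__":
    # Each data center is described by a few assumed attributes.
    dc1 = {"name": "dc1", "latency_ms": 40, "cost": 1.0}
    dc2 = {"name": "dc2", "latency_ms": 15, "cost": 1.2}
    policies = [lambda dc: dc["latency_ms"] <= 20, lambda dc: dc["cost"] <= 1.5]
    target = choose_target(policies, dc1, [dc1, dc2])
    print("migrate to", target["name"] if target else None)   # migrate to dc2
```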
  • Publication number: 20180267831
    Abstract: An information processing apparatus includes a processor that performs a scheduling process for scheduling a job on nodes, the process including: calculating, when one node executes a first job, a job execution end time at which execution of the first job will be completed, by referring to an execution history in which execution times of jobs are recorded; acquiring, from a load management node that manages a load of a metadata-process execution node, which performs metadata processing to access metadata of a file among the nodes, the load of the metadata-process execution node at the job execution end time; and generating, when the load is equal to or more than a threshold, schedule data that causes a staging execution node to perform, at the job execution end time, the metadata processing produced by staging to a file holding an execution result of the first job.
    Type: Application
    Filed: February 16, 2018
    Publication date: September 20, 2018
    Applicant: FUJITSU LIMITED
    Inventors: ATSUSHI NUKARIYA, Tsuyoshi HASHIMOTO
  • Publication number: 20180267832
    Abstract: A self-adjusting resource-provisioning system that infers the existence of extrinsic events by monitoring external information sources. When an external source satisfies a threshold condition, the system, as a function of historical records, correlates the inferred event with a likelihood that a Web site or other computerized entity's resource-utilization will reach a certain level at a future time. The system adjusts the available amount of resources to handle the predicted utilization level. If the system fails to accurately predict the actual utilization level, the system adjusts the condition to more accurately predict utilization in the future. If no threshold condition predicts an unexpected change in resource utilization, the system adjusts parameters of an existing condition or creates a new condition to better correlate utilization with future extrinsic events.
    Type: Application
    Filed: March 20, 2017
    Publication date: September 20, 2018
    Inventors: Adam S. Biener, Andrea C. Martinez
  • Publication number: 20180267833
    Abstract: A management method of cloud resources is provided for use in a hybrid cloud system with first and second cloud systems, wherein the first cloud system includes first servers operating first virtual machines (VMs) and the second cloud system includes second servers operating second VMs, the method including the steps of: collecting, by a resource monitor, performance monitoring data of the first VMs within the first servers; analyzing, by an analysis and determination device, the performance monitoring data collected to automatically send a trigger signal in response to determining that a predetermined trigger condition is met, wherein the trigger signal indicates a deployment target and a deployment type; and automatically performing, by a resource deployment device, an operation corresponding to the deployment type on the deployment target in the second cloud system in response to the trigger signal.
    Type: Application
    Filed: July 17, 2017
    Publication date: September 20, 2018
    Inventors: Wen-Kuang CHEN, Chun-Hung CHEN, Chien-Kuo HUNG, Chen-Chung LEE
  • Publication number: 20180267834
    Abstract: Method by which a plurality of processes are assigned to a plurality of computational resources, each computational resource providing resource capacities in a plurality of processing dimensions. Processing loads are associated in each processing dimension with each process. A loading metric is associated with each process based on the processing loads in each processing dimension. One or more undesignated computational resources are designated from the plurality of computational resources to host unassigned processes from the plurality of processes. In descending order of the loading metric one unassigned process is assigned from the plurality of processes to each one of the one or more designated computational resources. In ascending order of the loading metric any remaining unassigned processes are assigned from the plurality of processes to the one or more designated computational resources whilst there remains sufficient resource capacity in each of the plurality of processing dimensions.
    Type: Application
    Filed: October 21, 2015
    Publication date: September 20, 2018
    Inventor: Chris Tofts
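A simplified sketch of the assignment scheme in 20180267834, reduced to two processing dimensions (CPU and memory) and using the worst-case dimension as an assumed loading metric:

```python
# Illustrative sketch (simplified to two processing dimensions): seed each
# designated computational resource with one process in descending order of the
# loading metric, then place the remaining processes in ascending order while
# capacity remains in every dimension.

def loading_metric(load):
    return max(load)                  # assumed metric: worst-case dimension

def assign(processes, resources):
    """processes: {name: (cpu, mem)}; resources: {name: [cpu_cap, mem_cap]}."""
    placement, remaining = {}, dict(resources)

    def fits(load, cap):
        return all(l <= c for l, c in zip(load, cap))

    def place(name, load, res):
        placement[name] = res
        remaining[res] = [c - l for c, l in zip(remaining[res], load)]

    ordered = sorted(processes, key=lambda p: loading_metric(processes[p]), reverse=True)
    seeds, rest = ordered[:len(resources)], ordered[len(resources):]
    for proc, res in zip(seeds, remaining):      # heaviest first, one per resource
        place(proc, processes[proc], res)
    for proc in sorted(rest, key=lambda p: loading_metric(processes[p])):
        for res in remaining:                    # lightest first, wherever it fits
            if fits(processes[proc], remaining[res]):
                place(proc, processes[proc], res)
                break
    return placement

if __name__ == "__main__":
    procs = {"p1": (4, 8), "p2": (2, 2), "p3": (1, 1), "p4": (3, 4)}
    print(assign(procs, {"r1": [8, 16], "r2": [8, 16]}))
```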
  • Publication number: 20180267835
    Abstract: Resource provisioning to a process in a distributed computing system, such as a cloud computing system. An instruction to provision a resource is received. Portions of the resource are provisioned to the process as they become available, and prior to all portions becoming available, based on determining that the provisioning speed is greater than or equal to the use speed for the resource. If the use speed is faster, it may be actively slowed down.
    Type: Application
    Filed: May 11, 2018
    Publication date: September 20, 2018
    Inventors: Corville O. Allen, Andrew R. Freed
  • Publication number: 20180267836
    Abstract: According to one general aspect, a system may include a non-volatile memory (NVM), a resource arbitration circuit, and a shared resource. The non-volatile memory may be configured to store data and manage the execution of a task. The non-volatile memory may include a network interface configured to receive data and the task, an NVM processor configured to determine whether the processor will execute the task or whether the task will be assigned to a shared resource within the system, and a local communication interface configured to communicate with at least one other device within the system. The resource arbitration circuit may be configured to receive a request to assign the task to the shared resource, and manage the execution of the task by the shared resource. The shared resource may be configured to execute the task.
    Type: Application
    Filed: May 23, 2017
    Publication date: September 20, 2018
    Inventors: Sompong Paul OLARIG, David SCHWADERER
  • Publication number: 20180267837
    Abstract: A method for allocating a central processing unit resource to a virtual machine, including determining, according to a change in the number of virtual machines in an advanced resource pool, the number of allocated physical cores in the advanced resource pool; and adjusting, according to the number of the allocated physical cores in the advanced resource pool, the number of allocated physical cores in a default resource pool, where the advanced resource pool and the default resource pool are resource pools that are obtained by dividing physical cores of a central processing unit according to service levels of the resource pools.
    Type: Application
    Filed: May 24, 2018
    Publication date: September 20, 2018
    Applicant: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Weihua Shan, Jintao Liu, Houqing Li
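A toy sketch of the pool-rebalancing idea in 20180267837; the two-cores-per-VM sizing rule is an assumption, not taken from the application:

```python
# Illustrative sketch (assumed sizing rule): the advanced resource pool is sized
# from the number of VMs it hosts, and the default pool receives whatever
# physical cores of the CPU remain.

def rebalance_pools(total_cores, advanced_vm_count, cores_per_advanced_vm=2):
    advanced_cores = min(total_cores, advanced_vm_count * cores_per_advanced_vm)
    default_cores = total_cores - advanced_cores
    return {"advanced": advanced_cores, "default": default_cores}

if __name__ == "__main__":
    print(rebalance_pools(total_cores=32, advanced_vm_count=5))
    # {'advanced': 10, 'default': 22}
    print(rebalance_pools(total_cores=32, advanced_vm_count=9))
    # {'advanced': 18, 'default': 14}
```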
  • Publication number: 20180267838
    Abstract: The systems described herein are configured to reduce the number of resource policies created and stored on a cluster for provisioning and/or managing the utilization of one or more resources by virtual computing instances (VCIs). An open-ended VCI policy, which includes at least one open-ended rule having an undefined value, and a resource are selected. A set of valid values compatible with the selected resource is presented to the user for each open-ended rule. The user selects a valid value for each open-ended rule from the set of valid values via a user interface (UI). The selected valid values are assigned to each open-ended rule to create a complete open-ended VCI policy. The complete open-ended VCI policy is applied to provision one or more VCIs. The same open-ended VCI policy may be applied to provision different VCIs by assigning a different set of user-selected valid values.
    Type: Application
    Filed: March 17, 2017
    Publication date: September 20, 2018
    Inventors: Georgi Kapitanski, Stoyan Hristov, Mincho Tonev, Johnny Papadami
  • Publication number: 20180267839
    Abstract: When an activity agent desires to perform a particular task on a device, the activity agent communicates a request to a resource control system of the device. The request has an associated amount of energy that is expected to be used by the activity agent to perform the task. The resource control system receives the request, determines whether to grant the request based on the amount of energy expected to be used by the activity agent to carry out the task and various additional factors, and returns an indication to the activity agent that the request is granted or denied. If denied, the activity agent delays performing its desired task. If granted, the activity agent proceeds to perform its desired task. The resource control system also continues to monitor the system state of the device, and may revoke the grant depending on changes in the system state of the device.
    Type: Application
    Filed: March 20, 2017
    Publication date: September 20, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Paresh Maisuria, James Anthony Schwartz, JR., M. Nashaat Soliman, Candy Chiang, Aniruddha Jayant Jahagirdar, Matthew Todd Hoehnen, Matthew Holle
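A hypothetical sketch of the grant/deny and revocation flow in 20180267839; the battery-based energy budget and thresholds are invented for illustration:

```python
# Illustrative sketch (hypothetical thresholds): an activity agent asks a
# resource control stand-in for permission to spend an estimated amount of
# energy; the grant depends on system state and may later be revoked.

class ResourceControl:
    def __init__(self, battery_pct, on_ac_power):
        self.battery_pct = battery_pct
        self.on_ac_power = on_ac_power
        self.grants = {}

    def request(self, agent, expected_mwh):
        budget = float("inf") if self.on_ac_power else self.battery_pct * 10
        granted = expected_mwh <= budget
        self.grants[agent] = granted
        return granted

    def on_state_change(self, battery_pct, on_ac_power):
        self.battery_pct, self.on_ac_power = battery_pct, on_ac_power
        # Revoke grants that the new system state can no longer support.
        for agent in list(self.grants):
            if self.grants[agent] and not on_ac_power and battery_pct < 20:
                self.grants[agent] = False

if __name__ == "__main__":
    rc = ResourceControl(battery_pct=55, on_ac_power=False)
    print(rc.request("indexer", expected_mwh=300))   # True: within budget
    rc.on_state_change(battery_pct=15, on_ac_power=False)
    print(rc.grants["indexer"])                      # False: grant revoked
```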
  • Publication number: 20180267840
    Abstract: A technique for short-circuiting normal read-copy update (RCU) grace period computations in the presence of expedited RCU grace periods. Both normal and expedited RCU grace period processing may be periodically performed to respectively report normal and expedited quiescent states on behalf of CPUs in a set of CPUs until all of the CPUs have respectively reported normal or expedited quiescent states so that the normal and expedited grace periods may be respectively ended. The expedited grace periods are of shorter duration than the normal grace periods. Responsive to a condition indicating that the normal RCU grace period processing can be short-circuited by the expedited RCU grace period processing, the expedited RCU grace period processing may report both expedited quiescent states and normal quiescent states on behalf of the same CPUs in the set of CPUs.
    Type: Application
    Filed: March 15, 2017
    Publication date: September 20, 2018
    Inventor: Paul E. McKenney
  • Publication number: 20180267841
    Abstract: Disclosed aspects relate to speculative execution management in a coherent accelerator architecture. A first access request from a first component may be detected with respect to a set of memory spaces of a single shared memory in the coherent accelerator architecture. A second access request from a second component may be detected with respect to the set of memory spaces of the single shared memory in the coherent accelerator architecture. The first and second access requests may be processed by a speculative execution management engine using a speculative execution technique with respect to the set of memory spaces of the single shared memory in the coherent accelerator architecture.
    Type: Application
    Filed: March 16, 2017
    Publication date: September 20, 2018
    Inventors: Pengfei GOU, Yang LIU, Yangfan LIU, Zhenpeng ZUO
  • Publication number: 20180267842
    Abstract: Disclosed aspects relate to speculative execution management in a coherent accelerator architecture. A first access request from a first component may be detected with respect to a set of memory spaces of a single shared memory in the coherent accelerator architecture. A second access request from a second component may be detected with respect to the set of memory spaces of the single shared memory in the coherent accelerator architecture. The first and second access requests may be processed by a speculative execution management engine using a speculative execution technique with respect to the set of memory spaces of the single shared memory in the coherent accelerator architecture.
    Type: Application
    Filed: December 18, 2017
    Publication date: September 20, 2018
    Inventors: Pengfei Gou, Yang Liu, Yangfan Liu, Zhenpeng Zuo
  • Publication number: 20180267843
    Abstract: Methods and systems are provided for Remote Application Programming Interface (RAPI) communications between server and client devices. In an embodiment, server and client devices comprise memories and hardware processors coupled to the memories. The hardware processors execute instructions to perform operations that instantiate access point instances on both server side and client side. The instructions are generated from compiling API interface classes with remote communication classes, wherein the compiling includes a procedure of creating new classes through double inheritance. By receiving an API connection message from a client device, the server device clones a default relayer access point instance and assigns the cloned relayer access point instance to process API requests received thereafter from the client device.
    Type: Application
    Filed: May 21, 2018
    Publication date: September 20, 2018
    Inventor: Wenheng Zhao
  • Publication number: 20180267844
    Abstract: Generally, this disclosure provides systems, devices, methods and computer readable media for implementing function callback requests between a first processor (e.g., a GPU) and a second processor (e.g., a CPU). The system may include a shared virtual memory (SVM) coupled to the first and second processors, the SVM configured to store at least one double-ended queue (Deque). An execution unit (EU) of the first processor may be associated with a first of the Deques and configured to push the callback requests to that first Deque. A request handler thread executing on the second processor may be configured to: pop one of the callback requests from the first Deque; execute a function specified by the popped callback request; and generate a completion signal to the EU in response to completion of the function.
    Type: Application
    Filed: November 24, 2015
    Publication date: September 20, 2018
    Applicant: Intel Corporation
    Inventors: BRIAN T. LEWIS, RAJKISHORE BARIK, TATIANA SHPEISMAN
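A CPU-only analogue of the callback mechanism in 20180267844, using Python threads in place of a GPU execution unit and a CPU request-handler thread, and collections.deque in place of the SVM-resident Deque:

```python
# Illustrative sketch (threads stand in for the GPU execution unit and the CPU
# request handler; not the patented implementation): callback requests are
# pushed onto a shared deque, popped by a handler thread, executed, and
# acknowledged with a completion event.

import collections
import threading

requests = collections.deque()              # stands in for the SVM-resident Deque
work_available = threading.Semaphore(0)

def execution_unit(n_requests):
    """Producer: pushes callback requests and waits for their completion."""
    for i in range(n_requests):
        done = threading.Event()
        requests.append((lambda i=i: print(f"callback {i} ran on the 'CPU'"), done))
        work_available.release()
        done.wait()                         # completion signal from the handler

def request_handler():
    """Consumer: pops requests, runs the requested function, signals completion."""
    while True:
        work_available.acquire()
        item = requests.popleft()
        if item is None:
            break                           # shutdown sentinel
        func, done = item
        func()
        done.set()

if __name__ == "__main__":
    handler = threading.Thread(target=request_handler)
    handler.start()
    execution_unit(3)
    requests.append(None)
    work_available.release()
    handler.join()
```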
  • Publication number: 20180267845
    Abstract: Systems and methods capable of increasing reliability of received commands across a variety of different kinds of input devices and modalities are provided. The provided systems and methods easily expand to support additional input devices, and easily adapt to a wide variety of command destinations, such as subsystems and components. The provided systems and methods employ command specific verification strategies before transmitting the command to its destination. The provided systems and methods also concurrently support a wide variety of command destinations, such as subsystems and components.
    Type: Application
    Filed: March 16, 2017
    Publication date: September 20, 2018
    Applicant: HONEYWELL INTERNATIONAL INC.
    Inventors: Jan Bilek, Rick J. Born, Martin Dostal, Pavel Kolcarek
  • Publication number: 20180267846
    Abstract: Embodiments of a multi-processor array are disclosed that may include a plurality of processors and configurable communication elements coupled together in an interspersed arrangement. Each configurable communication element may include a local memory and a plurality of routing engines. The local memory may be coupled to a subset of the plurality of processors. Each routing engine may be configured to receive one or more messages from a plurality of sources, assign each received message to a given destination of a plurality of destinations dependent upon configuration information, and forward each message to its assigned destination. The plurality of destinations may include the local memory, and routing engines included in a subset of the plurality of configurable communication elements.
    Type: Application
    Filed: May 22, 2018
    Publication date: September 20, 2018
    Inventors: Carl S. Dobbs, Michael R. Trocino, Michael B. Solka
  • Publication number: 20180267847
    Abstract: Systems, methods, and computer-readable storage devices to enable secured data access from a mobile device executing a native mobile application and a headless browser are disclosed.
    Type: Application
    Filed: May 23, 2018
    Publication date: September 20, 2018
    Inventors: Charles Eric Smith, Sergio Gustavo Ayestaran
  • Publication number: 20180267848
    Abstract: A system may include a data acquisition hardware device (DAQ) for acquiring sample data and/or generating control signals, and a host system with memory that may store data samples and information associated with the DAQ and host system operations. The DAQ may push hardware status information to host memory, triggered by predetermined events taking place in the DAQ, e.g. timing events or interrupts. The DAQ may update dedicated buffers in host memory with status data for any of these events. The pushed status information may be read in a manner that allows detection of race conditions, and may be used to handle data acquisition, output control signaling, and interrupts as required without the host system having to query the DAQ. The DAQ may also detect data timing errors and report those data timing errors back to the host system, and also provide improved output operations using counters.
    Type: Application
    Filed: May 16, 2018
    Publication date: September 20, 2018
    Inventors: Rafael Castro Scorsi, Hector M. Rubio, Gerardo Daniel Domene-Ramirez
  • Publication number: 20180267849
    Abstract: Embodiments include method, systems and computer program products for an interactive, multi-level failsafe capability. In some embodiments, a failed count indicative of a number of failed attempts to launch an application may be received. A failsafe mode level corresponding to the failed count may be determined. The failsafe mode level may be initialized in response to determining the failsafe mode level corresponding to the failed count. The failsafe mode level may determine the functionality that may be enabled. Users may perform interactive debugging by editing configuration settings and manually enabling functionality.
    Type: Application
    Filed: May 24, 2018
    Publication date: September 20, 2018
    Inventors: Jon K. FRANKS, Maria E. SMITH
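A sketch of the failed-count-to-failsafe-level mapping described in 20180267849; the level boundaries and the functionality enabled at each level are assumptions:

```python
# Illustrative sketch (level boundaries are assumptions): map the number of
# failed launch attempts to a failsafe mode level, each level enabling less
# functionality and more interactive debugging.

FAILSAFE_LEVELS = [
    (0, "normal: all functionality enabled"),
    (1, "level 1: skip optional plugins"),
    (3, "level 2: load minimal configuration, enable config editor"),
    (5, "level 3: diagnostics shell only"),
]

def failsafe_mode(failed_count: int) -> str:
    mode = FAILSAFE_LEVELS[0][1]
    for threshold, description in FAILSAFE_LEVELS:
        if failed_count >= threshold:
            mode = description
    return mode

if __name__ == "__main__":
    for count in (0, 2, 4, 7):
        print(count, "->", failsafe_mode(count))
```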
  • Publication number: 20180267850
    Abstract: There is disclosed in an example an interconnect apparatus having: a root circuit; and a downstream circuit comprising at least one receiver; wherein the root circuit is operable to provide a margin test directive to the downstream circuit during a normal operating state; and the downstream circuit is operable to perform a margin test and provide a result report of the margin test to the root circuit. This may be performed in-band, for example in the L0 state. There is also disclosed a system comprising such an interconnect, and a method of performing margin testing.
    Type: Application
    Filed: September 26, 2015
    Publication date: September 20, 2018
    Inventors: Daniel S. Froelich, Debendra Das Sharma, Fulvio Spagna, Per E. Fornberg, David Edward Bradley
  • Publication number: 20180267851
    Abstract: Apparatuses and methods for performing an error correction code (ECC) operation are provided. One example method can include performing a first error code correction (ECC) operation on a portion of data, performing a second ECC operation on the portion of data in response to the first ECC operation failing, and performing a third ECC operation on the portion of data in response to the second ECC operation failing.
    Type: Application
    Filed: March 17, 2017
    Publication date: September 20, 2018
    Inventors: Mustafa N. Kaynak, Patrick R. Khayat, Sivagnanam Parthasarathy
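A minimal sketch of the staged fallback in 20180267851; the three ECC stages are placeholder functions rather than real decoders:

```python
# Illustrative sketch (the three ECC stages are placeholders): try a fast ECC
# operation first and fall back to progressively stronger ones only when the
# previous stage fails, mirroring the staged flow in the abstract.

def tiered_ecc_decode(data, stages):
    """stages: ordered list of functions returning corrected data or None."""
    for stage in stages:
        corrected = stage(data)
        if corrected is not None:
            return corrected
    raise ValueError("uncorrectable error: all ECC stages failed")

if __name__ == "__main__":
    fast_ecc = lambda d: None                          # pretend the quick decode fails
    medium_ecc = lambda d: None                        # pretend the retry also fails
    strong_ecc = lambda d: d.replace(b"\x00", b"\x01") # last-resort decode
    print(tiered_ecc_decode(b"\x00\x02\x03", [fast_ecc, medium_ecc, strong_ecc]))
```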
  • Publication number: 20180267852
    Abstract: A semiconductor device includes an error count signal generation circuit and a row error control circuit. The error count signal generation circuit generates an error count signal which is enabled if the number of erroneous data of cells selected to perform an error scrub operation is equal to a predetermined number. The row error control circuit stores information concerning the number of the erroneous data in response to the error count signal if the number of the erroneous data is greater than or equal to the predetermined number or stores information concerning the number of row paths exhibiting the erroneous data in response to the error count signal after more erroneous data than the predetermined number is detected.
    Type: Application
    Filed: August 28, 2017
    Publication date: September 20, 2018
    Applicant: SK hynix Inc.
    Inventors: Kihun KWON, Yong Mi KIM, Jaeil KIM
  • Publication number: 20180267853
    Abstract: A memory system has a non-volatile memory, an error corrector, an error information storage, and an access controller. The non-volatile memory comprises a plurality of memory cells. The error corrector corrects an error included in data read from the non-volatile memory. The error information storage stores, based on an error rate obtained when a predetermined number or more of data is written in the non-volatile memory and read therefrom, first information on whether there is an error in the written data, whether there is an error correctable by the error corrector in the written data, and whether there is an error uncorrectable by the error corrector in the written data. The access controller controls, based on the first information, at least one of reading from or writing to the non-volatile memory.
    Type: Application
    Filed: September 8, 2017
    Publication date: September 20, 2018
    Applicant: TOSHIBA MEMORY CORPORATION
    Inventors: Daisuke SAIDA, Hiroki NOGUCHI, Keiko ABE, Shinobu FUJITA
  • Publication number: 20180267854
    Abstract: A storage device includes: a plurality of memory devices configured as a virtual device utilizing stateless data protection; and a virtual device layer configured to manage the virtual device to store objects by applying erasure coding to some of the objects and replication to other ones of the objects depending on respective sizes of the objects.
    Type: Application
    Filed: January 19, 2018
    Publication date: September 20, 2018
    Inventor: Yang Seok Ki
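An illustrative sketch of the size-based choice in 20180267854 between replication and erasure coding; the threshold and the toy XOR-parity layout are assumptions:

```python
# Illustrative sketch (the size threshold and shard layout are assumptions): a
# virtual-device-layer stand-in protects small objects by replication and large
# objects by erasure coding, chosen purely from object size.

REPLICATION_THRESHOLD = 64 * 1024      # bytes; assumed cut-over point

def protect(obj: bytes):
    if len(obj) < REPLICATION_THRESHOLD:
        return {"scheme": "replication", "copies": [obj] * 3}
    # Toy stand-in for a real erasure code: 4 data shards plus two copies of an
    # XOR parity shard (real 4+2 codes use two independent parities).
    shard_len = -(-len(obj) // 4)
    shards = [obj[i * shard_len:(i + 1) * shard_len].ljust(shard_len, b"\0")
              for i in range(4)]
    parity = bytes(a ^ b ^ c ^ d for a, b, c, d in zip(*shards))
    return {"scheme": "erasure", "shards": shards + [parity, parity]}

if __name__ == "__main__":
    print(protect(b"small object")["scheme"])       # replication
    print(protect(bytes(256 * 1024))["scheme"])     # erasure
```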
  • Publication number: 20180267855
    Abstract: A method is provided for execution by one or more processing modules of a dispersed storage network (DSN). The method begins by the DSN receiving a request to update one or more data segments of a data object and continues with the DSN determining whether one or more encoded data slices (EDSs) of a plurality of EDSs associated with the one or more data segments of the data object are eligible for partial updating. The DSN then executes a partial updating process for the eligible EDS while excluding any EDSs eligible for the partial updating that would be unaffected during the partial updating process.
    Type: Application
    Filed: March 15, 2017
    Publication date: September 20, 2018
    Inventors: Adam M. Gray, Wesley B. Leggette
  • Publication number: 20180267856
    Abstract: A distributed storage system includes a plurality of storage nodes including: a storage device for storing data in such a way that the data can be written thereto and read therefrom; a memory in which a software program is recorded; and a CPU for executing the software program. The memory stores group management information in which a group configured with a plurality of storage nodes and the storage nodes that configure the group are associated with each other and recorded. The CPU converts data into a plurality of data blocks so that the data is redundant at a predetermined data protection level, and stores the data blocks into each of a plurality of storage nodes belonging to the same group based on the group management information.
    Type: Application
    Filed: January 7, 2016
    Publication date: September 20, 2018
    Applicant: HITACHI, LTD.
    Inventors: Mitsuo HAYASAKA, Keiichi MATSUZAWA
  • Publication number: 20180267857
    Abstract: A method for memory page erasure reconstruction in a storage array includes dividing data into multiple stripes for storage in a storage array including multiple storage devices with a topology of a hypercube of a dimension t≥3. The storage devices in the same hypercubes of dimension t−1 included in the hypercube of dimension t have even parity. An intersection of two non-parallel planes in the hypercube topology is a line, and each point along a line is a storage device in the storage array. A reconstructor processor reconstructs erased data for erased memory pages from non-erased data in the storage array by using parities in at least three dimensions based on the hypercube topology of the storage devices.
    Type: Application
    Filed: May 17, 2018
    Publication date: September 20, 2018
    Inventors: Mario Blaum, Aayush Gupta, James Hafner, Steven R. Hetzler
  • Publication number: 20180267858
    Abstract: Examples disclosed herein relate to a baseboard management controller (BMC) capable of execution while a computing device is powered to an auxiliary state. The BMC is to process an error log according to a deep learning model to determine one of multiple field replaceable units to deconfigure in response to the error condition. The BMC is to deconfigure the field replaceable unit. The computing device is rebooted. In response to the reboot of the computing device the BMC is to determine whether the error condition persists.
    Type: Application
    Filed: March 20, 2017
    Publication date: September 20, 2018
    Inventors: Anys Bacha, Doddyanto Hamid Umar
  • Publication number: 20180267859
    Abstract: A facility for event failure management is provided, which includes providing a failed event database containing failed event information relating to failed events and one or more components associated with each of the failed events. Upon modification to a component associated with a failed event, the failed event is retried. Based on a result of retrying the failed event, failed event information of the failed event database is updated. The failed event database may therefore be dynamically and/or automatically updated so that it contains up-to-date and appropriate information for predicting and/or managing event failures.
    Type: Application
    Filed: March 17, 2017
    Publication date: September 20, 2018
    Inventors: Mark ALLMAN, Andrew S. EDWARDS, Philip JONES, Doina L. KLINGER, Martin A. ROSS, Paul S. THORPE
  • Publication number: 20180267860
    Abstract: A system for achieving memory persistence includes a volatile memory, a non-volatile memory, and a processor. The processor may indicate a volatile memory range for the processor to backup, and open a memory window for the processor to access. The system further includes a power supply. The power supply may provide power for the processor to backup the memory range of the volatile memory. The processor may, responsive to an occurrence of a backup event, initiate a memory transfer using the opened memory window. The memory transfer uses the processor to move the memory range of the volatile memory to a memory region of the non-volatile memory.
    Type: Application
    Filed: September 18, 2015
    Publication date: September 20, 2018
    Inventors: Joseph E FOSTER, Thierry FEVRIER, James Alexander FUXA
  • Publication number: 20180267861
    Abstract: Systems and methods for performing application aware backups and/or generating other application aware secondary copies of virtual machines are described. For example, the systems and methods described herein may access a virtual machine, automatically discover various databases and/or applications (e.g., SQL, Exchange, Sharepoint, Oracle, and so on) running on the virtual machine, and perform data storage operations that generate a backup, or other secondary copy, of the virtual machine, as well as backups, or other secondary copies, of each of the discovered applications.
    Type: Application
    Filed: March 15, 2018
    Publication date: September 20, 2018
    Inventors: Sudha Krishnan Iyer, Rahul S. Pawar
  • Publication number: 20180267862
    Abstract: A system and method is provided for data classification to control file backup operations. An exemplary method includes analyzing electronic data to identify properties and parameters of the electronic data and comparing the properties and parameters with predetermined rules that indicate storage levels based on properties and parameters. Furthermore, the method includes identifying one of the storage levels based on the comparison of the properties and parameters of the electronic data with the plurality of rules, and performing a data backup of the electronic data based on the identified storage level.
    Type: Application
    Filed: March 15, 2018
    Publication date: September 20, 2018
    Inventors: Eugene Aseev, Stanislav S. Protasov, Serguei M. Beloussov, Sanjeev Solanki
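A hypothetical sketch of the rule-driven classification in 20180267862; the rules, file properties, and storage levels shown are invented for illustration:

```python
# Illustrative sketch (rules and levels are hypothetical): inspect a file's
# properties, match them against predetermined rules, and pick the storage
# level that drives the backup operation.

RULES = [
    (lambda f: f["sensitive"],               "encrypted-offsite"),
    (lambda f: f["size_mb"] > 1024,          "cold-archive"),
    (lambda f: f["modified_days_ago"] < 7,   "hot-tier"),
]
DEFAULT_LEVEL = "standard"

def classify(file_properties: dict) -> str:
    for predicate, level in RULES:
        if predicate(file_properties):
            return level
    return DEFAULT_LEVEL

if __name__ == "__main__":
    doc = {"sensitive": False, "size_mb": 12, "modified_days_ago": 2}
    print(classify(doc))        # hot-tier -> back the file up at that level
```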
  • Publication number: 20180267863
    Abstract: A method for distributing data among storage devices. The method comprising one or more processors receiving a first graph workload that executes within a networked computing environment. The method further includes identifying data from the first graph workload that is utilized during the execution of the first graph workload that includes a plurality of data packets. The method further includes creating a first graph workload model representative of the graph structure of the first graph workload and determining two or more partitions that are representative of a distribution of the identified data utilized by the first graph workload based, at least in part, on the first graph workload model. The method further includes allocating a plurality of network accessible storage devices among the two or more partitions and copying a first set of data packets of the plurality of data packets to a network accessible storage device.
    Type: Application
    Filed: May 14, 2018
    Publication date: September 20, 2018
    Inventors: John J. Auvenshine, Sunhwan Lee, James E. Olson, Mu Qiao, Ramani R. Routray, Stanley C. Wood
  • Publication number: 20180267864
    Abstract: An image forming apparatus capable of automatically rolling back the system to an appropriate state. The image forming apparatus updates system data set therein. History information of the system data is managed, and execution of rollback processing for replacing the system data with system data that was set previously is controlled based on the history information of the system data.
    Type: Application
    Filed: March 8, 2018
    Publication date: September 20, 2018
    Inventor: Takumi Michishita
  • Publication number: 20180267865
    Abstract: A system and method to create a clone of a source computing system, the method including the steps of selecting a memory space coupled to the source computing system, retrieving uncoded data from the selected memory space, encoding the uncoded data by use of a bit-marker-based encoding process executing on a backup server, storing encoded data in a protected memory coupled to the backup server, wherein the protected memory is protected from a power interruption, retrieving the encoded data from the protected memory; and decoding the encoded data onto a target computing system, wherein the target computing system is separate from the source computing system.
    Type: Application
    Filed: April 19, 2018
    Publication date: September 20, 2018
    Inventors: Brian M. Ignomirello, Suihong Liang
  • Publication number: 20180267866
    Abstract: An apparatus comprises at least three processing circuits to perform redundant processing of common program instructions. Error detection circuitry coupled to a plurality of signal nodes of each of said at least three processing circuits comprises comparison circuitry to detect a mismatch between signals on corresponding signal nodes in said at least three processing circuits, the plurality of signal nodes forming a first group of signal nodes and a second group of signal nodes. In response to the mismatch being detected in relation to corresponding signal nodes within the first group, the error detection circuitry is configured to generate a first trigger for a full recovery process for resolving an error detected for an erroneous processing circuit using state information derived from at least two other processing circuits.
    Type: Application
    Filed: March 20, 2017
    Publication date: September 20, 2018
    Inventors: Balaji VENU, Xabier ITURBE, Emre ÖZER
  • Publication number: 20180267867
    Abstract: A computer-implemented method is provided that is performed in a computer having a processor and multiple co-processors. The method includes launching a same set of operations in each of an original co-processor and a redundant co-processor, from among the multiple co-processors, to obtain respective execution signatures from the original co-processor and the redundant co-processor. The method further includes detecting an error in an execution of the set of operations by the original co-processor, by comparing the respective execution signatures. The method also includes designating the execution of the set of operations by the original co-processor as error-free and committing a result of the execution, responsive to identifying a match between the respective execution signatures.
    Type: Application
    Filed: March 15, 2017
    Publication date: September 20, 2018
    Inventors: Pradip Bose, Alper Buyuktosunoglu, Jingwen Leng, Ramon Bertran Monfort
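A software analogue of the signature-comparison scheme in 20180267867, with SHA-256 hashes standing in for hardware execution signatures and Python functions standing in for the original and redundant co-processors:

```python
# Illustrative sketch (hashes stand in for hardware execution signatures): run
# the same operations on an "original" and a "redundant" co-processor, compare
# the resulting signatures, and commit the result only when they match.

import hashlib

def run_with_signature(operations, data, fault=False):
    result = data
    for op in operations:
        result = op(result)
    if fault:
        result += 1                         # injected error for demonstration
    signature = hashlib.sha256(repr(result).encode()).hexdigest()
    return result, signature

def execute_redundantly(operations, data, fault_in_original=False):
    result, sig_original = run_with_signature(operations, data, fault_in_original)
    _, sig_redundant = run_with_signature(operations, data)
    if sig_original != sig_redundant:
        raise RuntimeError("signature mismatch: error detected, result not committed")
    return result                           # error-free: commit the result

if __name__ == "__main__":
    ops = [lambda x: x * 3, lambda x: x + 7]
    print(execute_redundantly(ops, 5))               # 22, committed
    try:
        execute_redundantly(ops, 5, fault_in_original=True)
    except RuntimeError as err:
        print(err)
```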