Patent Applications Published on February 6, 2020
-
Publication number: 20200042439
Abstract: A data processing system includes a host configured to handle data in response to an input received by the host, and a plurality of memory systems engaged with the host and configured to store or output the data in response to a request generated by the host. A first memory system among the plurality of memory systems can perform generation, erasure, or updating of metadata for the plurality of memory systems.
Type: Application
Filed: July 31, 2019
Publication date: February 6, 2020
Applicant: SK hynix Inc.
Inventor: Ik-Sung OH
-
Publication number: 20200042440
Abstract: A memory management method is provided. The method includes, in response to completion of a garbage collection operation, identifying one or more recycled block stripes subjected to the garbage collection operation among a plurality of block stripes of a rewritable non-volatile memory module; updating a garbage collection information table in a buffer memory according to the one or more recycled block stripes; and writing the garbage collection information table into the rewritable non-volatile memory module.
Type: Application
Filed: September 20, 2018
Publication date: February 6, 2020
Applicant: Shenzhen EpoStar Electronics Limited CO.
Inventors: Yu-Hua Hsiao, Hung-Chih Hsieh
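The claimed flow can be illustrated with a short sketch. All names here (`GCInfoTable`, `finish_gc`, `nvm_write`) are hypothetical, not from the patent; this is only a minimal model of the three steps: identify recycled stripes, update a RAM-resident table, persist it to the non-volatile module.

```python
class GCInfoTable:
    def __init__(self):
        self.recycled = []  # recycled block-stripe IDs, held in buffer memory

    def update(self, recycled_stripes):
        # Update the table for stripes recycled by this GC pass.
        self.recycled.extend(recycled_stripes)

def finish_gc(table, recycled_stripes, nvm_write):
    # Identify recycled stripes, update the RAM table, then persist it.
    table.update(recycled_stripes)
    nvm_write(list(table.recycled))  # write the table into the rewritable NVM module

flash = {}  # stand-in for the non-volatile memory module
t = GCInfoTable()
finish_gc(t, [4, 7], lambda snapshot: flash.update({"gc_table": snapshot}))
finish_gc(t, [9], lambda snapshot: flash.update({"gc_table": snapshot}))
print(flash["gc_table"])  # [4, 7, 9]
```

Persisting the table after each pass means the recycled-stripe history survives power loss, which is presumably the motivation for writing it back to the NVM module.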
-
Publication number: 20200042441
Abstract: A memory management method is provided. The method includes performing a garbage collection command; generating a garbage collection information table having a predetermined size according to one or more recycled block stripes, and writing the garbage collection information table into a target block stripe, wherein the garbage collection information table includes an identification tag, a local recycled block stripe list and first padding data; reading valid data in the one or more recycled block stripes, and writing the valid data into the target block stripe, wherein the written valid data is behind and immediately adjacent to the garbage collection information table being written to; and closing the target block stripe, and adding the local recycled block stripe list into a global recycled block stripe list in a buffer memory, so as to complete the garbage collection command.
Type: Application
Filed: September 20, 2018
Publication date: February 6, 2020
Applicant: Shenzhen EpoStar Electronics Limited CO.
Inventors: Yu-Hua Hsiao, Hung-Chih Hsieh
-
Publication number: 20200042442
Abstract: An apparatus including (i) a processor including a plurality of main buffer on board (BOB) memory controllers (MCs) and a secure engine, (ii) a plurality of simple BOB MCs, (iii) a secure delegator, and (iv) a plurality of memory modules. The secure delegator coupled to a first main BOB MC and a first simple BOB MC creates a secure channel. A second main BOB MC coupled to a second simple BOB MC creates a non-secure channel. The plurality of main BOB MCs, the secure engine and the secure delegator are provided within a trusted computing base (TCB) of the apparatus, and the plurality of simple BOB MCs and the plurality of memory modules are provided outside the TCB. The secure delegator is configured to: (i) secure communication between the first main BOB MC and the secure delegator, and (ii) perform Path ORAM accesses to the plurality of memory modules.
Type: Application
Filed: July 30, 2019
Publication date: February 6, 2020
Applicant: University of Pittsburgh-Of the Commonwealth System of Higher Education
Inventors: Rujia Wang, Jun Yang, YouTao Zhang
-
Publication number: 20200042443
Abstract: A storage access request to access a solid state drive (SSD) is received. A storage access timer is set with a time duration, where the time duration is based on a desired performance of the SSD. A non-volatile memory command associated with the storage access request is sent to non-volatile memory. The storage access timer is started. A determination is made whether the non-volatile memory completed execution of the non-volatile memory command after the storage access timer indicates that the time duration elapsed. An indication that the storage access request is complete is sent to a host if the non-volatile memory completed execution of the non-volatile memory command. Alternatively, the storage access timer is reset with the time duration if the non-volatile memory has not completed execution of the non-volatile memory command.
Type: Application
Filed: July 29, 2019
Publication date: February 6, 2020
Applicant: Marvell World Trade Ltd.
Inventors: Ka-Ming Keung, Dung Viet Nguyen
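The timer loop described above can be sketched as follows. This is a simplified software model under stated assumptions (`nvm_poll`, `notify_host`, and the polling structure are hypothetical; a real controller would implement this in hardware or firmware, not a busy-wait):

```python
import time

def execute_with_timer(nvm_poll, notify_host, duration=0.001, max_resets=100):
    """Arm a timer sized to the desired SSD performance; when it expires,
    check whether the NVM command finished. Complete the request to the
    host if so, otherwise re-arm (reset) the timer and wait again."""
    for _ in range(max_resets):
        deadline = time.monotonic() + duration   # set/reset the storage access timer
        while time.monotonic() < deadline:
            pass                                  # timer running
        if nvm_poll():                            # did the NVM command complete?
            notify_host("complete")               # indicate completion to the host
            return True
    return False                                  # gave up after max_resets periods

polls = iter([False, False, True])  # command completes on the third check
msgs = []
done = execute_with_timer(lambda: next(polls), msgs.append)
print(msgs)  # ['complete']
```

Checking completion only on timer expiry, rather than continuously, lets the controller bound how often it samples NVM status to match a performance target.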
-
Publication number: 20200042444
Abstract: Disclosed herein is an apparatus and method for a distributed memory object system. In one embodiment, a method includes forming a system cluster comprising a plurality of nodes, wherein each node includes a memory, a processor and a network interface to send and receive messages and data; creating a plurality of sharable memory spaces having partitioned data, wherein each space is a distributed memory object having a compute node, wherein the sharable memory spaces are at least one of persistent memory or DRAM cache; at a client, establishing an inter process communication between the client and a distributed memory object service; receiving a meta chunk including attributes about a file and a chunk map from a distributed memory object service, wherein the meta chunk includes chunk information including identity and location of a data chunk; and the client mapping the data chunk into virtual memory address space and accessing it directly.
Type: Application
Filed: April 1, 2019
Publication date: February 6, 2020
Inventors: Wei Kang, Yue Zhao, Ning Xu, Yue Li, Jie Yu, Robert W. Beauchamp
-
Publication number: 20200042445
Abstract: An aspect of cache recovery includes transmitting entries of a write cache (WC) journal ("entries") to all nodes and, for each node, recovering the entries, detecting entries with a logical address owned by the node, and performing a recovery operation. The operation includes, for each entry, and upon determining the node owns the A2N slice: if the A2N slice has been continuously owned (CO) by the node, and the entry is not owned by the node, marking the entry as WC remote and entry updates are requested from a remote WC owner; if the A2N slice has not been CO by the node, and the entry is not owned by the node, maintaining the entry and continuing write flow operations, marking the entry as WC remote and all entry updates are requested from the remote WC owner and inserting the entry to a recovery list.
Type: Application
Filed: July 31, 2018
Publication date: February 6, 2020
Applicant: EMC IP Holding Company LLC
Inventors: Vladimir Shveidel, Lior Kamran
-
Publication number: 20200042446
Abstract: Circuits and methods for combined precise and imprecise snoop filtering. A memory and a plurality of processors are coupled to the interconnect circuitry. A plurality of cache circuits are coupled to the plurality of processor circuits, respectively. A first snoop filter is coupled to the interconnect and is configured to filter snoop requests by individual cache lines of a first subset of addresses of the memory. A second snoop filter is coupled to the interconnect and is configured to filter snoop requests by groups of cache lines of a second subset of addresses of the memory. Each group encompasses a plurality of cache lines.
Type: Application
Filed: August 2, 2018
Publication date: February 6, 2020
Applicant: Xilinx, Inc.
Inventors: Millind Mittal, Jaideep Dastidar
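The precise/imprecise split can be modeled in a few lines. The address split, line size, and group size below are assumptions for illustration only; the patent does not specify them. The key property is that the coarse filter may over-report (forcing a harmless extra snoop) but never under-report:

```python
LINE = 64          # bytes per cache line (assumed)
GROUP_LINES = 8    # lines per imprecise-filter group (assumed)

class CombinedSnoopFilter:
    """Addresses below SPLIT are tracked per cache line (precise filter);
    addresses at or above SPLIT are tracked per group of lines (imprecise)."""
    SPLIT = 1 << 20  # assumed boundary between the two address subsets

    def __init__(self):
        self.precise = set()  # line-granular presence bits
        self.coarse = set()   # group-granular presence bits

    def track(self, addr):
        if addr < self.SPLIT:
            self.precise.add(addr // LINE)
        else:
            self.coarse.add(addr // (LINE * GROUP_LINES))

    def must_snoop(self, addr):
        # Forward a snoop only if the filter says the line (or its whole
        # group, in the imprecise region) may be cached somewhere.
        if addr < self.SPLIT:
            return addr // LINE in self.precise
        return addr // (LINE * GROUP_LINES) in self.coarse

f = CombinedSnoopFilter()
f.track(0x100)     # precise region
f.track(0x200000)  # imprecise region
print(f.must_snoop(0x100), f.must_snoop(0x140))        # True False
print(f.must_snoop(0x200000), f.must_snoop(0x200040))  # True True (same group)
```

The imprecise filter trades extra snoop traffic for an 8x smaller directory in this sketch, which is the usual motivation for group-granular tracking.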
-
Publication number: 20200042447
Abstract: A host server in a server cluster has a memory allocator that creates a dedicated host application data cache in storage class memory. A background routine destages host application data from the dedicated cache in accordance with a destaging plan. For example, a newly written extent may be destaged based on aging. All extents may be flushed from the dedicated cache following host server reboot. All extents associated with a particular production volume may be flushed from the dedicated cache in response to a sync message from a storage array.
Type: Application
Filed: October 10, 2019
Publication date: February 6, 2020
Applicant: EMC IP Holding Company LLC
Inventors: Arieh Don, Sahin Adnan, Martin Owen, Peter Blok, Philip Derbeko
-
Publication number: 20200042448
Abstract: A host server in a server cluster has a memory allocator that creates a dedicated host application data cache in storage class memory. A background routine destages host application data from the dedicated cache in accordance with a destaging plan. For example, a newly written extent may be destaged based on aging. All extents may be flushed from the dedicated cache following host server reboot. All extents associated with a particular production volume may be flushed from the dedicated cache in response to a sync message from a storage array.
Type: Application
Filed: October 10, 2019
Publication date: February 6, 2020
Applicant: EMC IP Holding Company LLC
Inventors: Arieh Don, Adnan Sahin, Owen Martin, Peter Blok, Philip Derbeko
-
Publication number: 20200042449
Abstract: Data processing in a data processing system including a plurality of processing nodes coupled to an interconnect includes receiving, by a fabric controller, a first command from a remote processing node via the interconnect. The fabric controller determines that the command includes a replay indication, the replay indication indicative of a replay event at one or more processing nodes of the plurality of processing nodes. The first command is dropped from a deskew buffer of the fabric controller responsive to the determining that the command includes the replay indication.
Type: Application
Filed: July 31, 2018
Publication date: February 6, 2020
Applicant: International Business Machines Corporation
Inventors: Charles F. Marino, William J. Starke, David J. Krolak, Paul A. Ganfield, Jeffrey A. Stuecheli
-
Publication number: 20200042450
Abstract: A memory device for storing data comprises a memory bank comprising a plurality of addressable memory cells, wherein the memory bank is divided into a plurality of segments. The memory device also comprises a cache memory operable for storing a second plurality of data words, wherein each data word of the second plurality of data words is either awaiting write verification associated with the memory bank or is to be re-written into the memory bank. Also, the cache memory is divided into a plurality of segments, wherein each segment of the cache memory is direct mapped to a corresponding segment of the memory bank, wherein an address of each of the second plurality of data words is mapped to a corresponding segment in the cache memory, and wherein data words from a particular segment of the memory bank only get stored in a corresponding direct mapped segment of the cache memory.
Type: Application
Filed: October 10, 2019
Publication date: February 6, 2020
Inventors: Benjamin LOUIE, Neal BERGER, Lester CRUDELE
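The direct mapping between bank segments and cache segments reduces to a simple address calculation. The sizes below are illustrative assumptions (the patent gives no dimensions); the invariant shown is the one claimed: a word from a given bank segment can only ever land in its paired cache segment.

```python
NUM_SEGMENTS = 4
BANK_SIZE = 1024                      # addressable words in the bank (assumed)
SEG_SIZE = BANK_SIZE // NUM_SEGMENTS  # words per bank segment

def cache_segment_for(address):
    """A word's bank address selects exactly one cache segment (direct map)."""
    assert 0 <= address < BANK_SIZE
    return address // SEG_SIZE

# Words awaiting write verification (or pending rewrite) are staged in the
# cache segment that is direct-mapped to their bank segment.
cache = {seg: [] for seg in range(NUM_SEGMENTS)}
for addr in (10, 300, 700, 1000):
    cache[cache_segment_for(addr)].append(addr)
print(cache)  # {0: [10], 1: [300], 2: [700], 3: [1000]}
```

Because each bank segment owns exactly one cache segment, lookups during write verification never need to search the whole cache.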
-
Publication number: 20200042451
Abstract: A method of writing data into a memory device comprising utilizing a pipeline to process write operations of a first plurality of data words addressed to a plurality of memory banks, wherein each of the plurality of memory banks is associated with a counter. The method also comprises writing a second plurality of data words and associated memory addresses into an error buffer, wherein the error buffer is associated with the plurality of memory banks and wherein further each data word of the second plurality of data words is either awaiting write verification associated with a bank from the plurality of memory banks or is to be re-written into a bank from the plurality of memory banks. Further, the method comprises maintaining a count in each of the plurality of counters for a respective number of entries in the error buffer corresponding to a respective memory bank.
Type: Application
Filed: October 10, 2019
Publication date: February 6, 2020
Inventors: Susmita KARMAKAR, Neal BERGER
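A minimal sketch of the shared error buffer with per-bank counters follows. The class and method names are hypothetical; the point is the claimed invariant that each bank's counter always equals that bank's entry count in the buffer.

```python
from collections import Counter

class ErrorBuffer:
    """Shared buffer of (bank, address, word) entries that are either awaiting
    write verification or pending rewrite, plus one counter per bank."""
    def __init__(self):
        self.entries = []
        self.counts = Counter()

    def push(self, bank, addr, word):
        self.entries.append((bank, addr, word))
        self.counts[bank] += 1              # maintain the per-bank count

    def pop_for_bank(self, bank):
        # Retire one entry for the given bank (e.g., its write verified).
        for i, entry in enumerate(self.entries):
            if entry[0] == bank:
                self.counts[bank] -= 1
                return self.entries.pop(i)
        return None

buf = ErrorBuffer()
buf.push(0, 0x10, "A"); buf.push(1, 0x20, "B"); buf.push(0, 0x30, "C")
print(buf.counts[0], buf.counts[1])  # 2 1
buf.pop_for_bank(0)
print(buf.counts[0])                 # 1
```

Keeping a counter per bank lets the pipeline check buffer occupancy for a bank in O(1) instead of scanning the shared buffer.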
-
Publication number: 20200042452
Abstract: Techniques are disclosed herein for providing accelerated recovery techniques of a memory device. Such techniques can allow for recovery of the memory device, such as, but not limited to, a flash memory device, following an unexpected reset event.
Type: Application
Filed: August 3, 2018
Publication date: February 6, 2020
Inventor: David Aaron Palmer
-
Publication number: 20200042453
Abstract: Provided are a computer program product, system, and method for managing access requests from a host to tracks in storage. A cursor is set to point to a track in a range of tracks established for sequential accesses. Cache resources are accessed for the cache for tracks in the range of tracks in advance of processing access requests to the range of tracks. Indication is received of a subset of tracks in the range of tracks for subsequent access transactions and a determination is made whether the cursor points to a track in the subset of tracks. The cursor is set to point to a track in the subset of tracks and cache resources are accessed for tracks in the subset of tracks in anticipation of access transactions to tracks in the subset of tracks.
Type: Application
Filed: October 15, 2019
Publication date: February 6, 2020
Inventors: Ronald E. Bretschneider, Susan K. Candelaria, Beth A. Peterson, Dale F. Riedy, Peter G. Sutton, Harry M. Yudenfriend
-
Publication number: 20200042454
Abstract: Embodiments described herein provide a system for facilitating cluster-level cache and memory in a cluster. During operation, the system presents a cluster cache and a cluster memory to a first application running on a first compute node in the cluster. The system maintains a first mapping between a first virtual address of the cluster cache and a first physical address of a first persistent storage of the first compute node. The system maintains a second mapping between a second virtual address of the cluster memory and a second physical address of a second persistent storage of a first storage node of the cluster. Upon receiving a first memory allocation request for cache memory from the first application, the system allocates a first memory location corresponding to the first physical address. The first application can be configured to access the first memory location based on the first virtual address.
Type: Application
Filed: November 5, 2018
Publication date: February 6, 2020
Inventor: Shu Li
-
Publication number: 20200042455
Abstract: A data storage apparatus includes a nonvolatile memory device, a processor configured to control operation of the nonvolatile memory device, and a memory loaded with a flash translation layer (FTL) including modules, the memory including a map cache buffer configured to cache at least one map segment. The modules include a map module configured to manage a map cache data structure related to the map cache buffer and a map cache allocation module configured to receive, from a module other than the map module, an allocation request for an allocating region having a required size in the map cache buffer; provide the allocation request to the map module; receive allocable size information from the map module; and provide the allocable size information to the module.
Type: Application
Filed: December 10, 2018
Publication date: February 6, 2020
Inventor: Young Ick CHO
-
Publication number: 20200042456
Abstract: A memory module comprises a volatile memory subsystem configured to couple to a memory channel in a computer system and capable of serving as main memory for the computer system, a non-volatile memory subsystem providing storage for the computer system, and a module controller coupled to the volatile memory subsystem, the non-volatile memory subsystem, and the C/A bus. The module controller is configured to control intra-module data transfers between the volatile memory subsystem and the non-volatile memory subsystem. The module controller is further configured to monitor C/A signals on the C/A bus and schedule the intra-module data transfers in accordance with the C/A signals so that the intra-module data transfers do not conflict with accesses to the volatile memory subsystem by the memory controller.
Type: Application
Filed: August 13, 2019
Publication date: February 6, 2020
Inventors: Hyun Lee, Jayesh R. Bhakta, Chi She Chen, Jeffery C. Solomon, Mario Jesus Martinez, Hao Le, Soon J. Choi
-
Publication number: 20200042457
Abstract: A dynamic premigration protocol is implemented in response to a secondary tier returning to an operational state and an amount of data associated with a premigration queue of a primary tier exceeding a first threshold. The dynamic premigration protocol can comprise at least a temporary premigration throttling level. An original premigration protocol is implemented in response to an amount of data associated with the premigration queue decreasing below the first threshold.
Type: Application
Filed: October 10, 2019
Publication date: February 6, 2020
Inventors: Koichi Masuda, Katja I. Denefleh, Joseph M. Swingler
-
Publication number: 20200042458
Abstract: Logical to physical tables each including logical to physical address translations for first logical addresses can be stored. Logical to physical table fragments each including logical to physical address translations for second logical addresses can be stored. A first level index can be stored. The first level index can include a physical table address of a respective one of the logical to physical tables for each of the first logical addresses and a respective pointer to a second level index for each of the second logical addresses. The second level index can be stored and can include a physical fragment address of a respective logical to physical table fragment for each of the second logical addresses.
Type: Application
Filed: August 2, 2018
Publication date: February 6, 2020
Inventors: Daniele Balluchi, Dionisio Minopoli
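The two-level index structure above can be sketched directly. The class and method names are hypothetical: first logical addresses resolve in one step to a full logical-to-physical table, while second logical addresses go through a pointer into a second-level index that holds the address of a smaller table fragment.

```python
class TwoLevelL2P:
    """First-level index: logical address -> ('table', physical table address)
    for first logical addresses, or ('ptr', key) for second logical addresses,
    where key selects an entry in the second-level index holding the physical
    address of a logical-to-physical table fragment."""
    def __init__(self):
        self.first_level = {}
        self.second_level = {}

    def map_table(self, laddr, table_addr):
        self.first_level[laddr] = ("table", table_addr)

    def map_fragment(self, laddr, key, fragment_addr):
        self.first_level[laddr] = ("ptr", key)
        self.second_level[key] = fragment_addr

    def resolve(self, laddr):
        kind, val = self.first_level[laddr]
        if kind == "table":
            return val                 # full table: single lookup
        return self.second_level[val]  # fragment: indirect via second level

idx = TwoLevelL2P()
idx.map_table(0x00, 0xA000)
idx.map_fragment(0x40, key=7, fragment_addr=0xB200)
print(hex(idx.resolve(0x00)), hex(idx.resolve(0x40)))  # 0xa000 0xb200
```

Splitting the mapping this way presumably lets rarely used regions be covered by small fragments instead of full tables, saving mapping-structure memory.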
-
Publication number: 20200042459
Abstract: An electronic system includes a host device and a storage device including a first memory device of a volatile type and a second memory device of a nonvolatile type. The first memory device is accessed by the host device through a memory-mapped input-output interface and the second memory device is accessed by the host device through a block accessible interface. The storage device provides a virtual memory region to the host device such that a host-dedicated memory region having a first size included in the first memory device is mapped to the virtual memory region having a second size larger than the first size.
Type: Application
Filed: March 4, 2019
Publication date: February 6, 2020
Inventors: DUCK-HO BAE, DONG-UK KIM, HYUNG-WOO RYU, KWANG-HYUN LA, JOO-YOUNG HWANG, YOU-RA CHOI
-
Publication number: 20200042460
Abstract: A system is used in a data processing system comprising at least one memory system which is operatively engaged with and disengaged from a host or from another memory system, and the host transmitting commands into the at least one memory system. The system includes a metadata generator configured to generate a map table for an available address range and a reallocation table for indicating an allocable address range in the map table; and a metadata controller configured to allocate the allocable address range to the at least one memory system when the at least one memory system is operatively engaged to the host or to another memory system, or release an allocated range for the at least one memory system such that the allocated range becomes the allocable address range when the at least one memory system is operatively disengaged from the host or the another memory system.
Type: Application
Filed: July 30, 2019
Publication date: February 6, 2020
Inventor: Ik-Sung OH
-
Publication number: 20200042461
Abstract: A system and method relate to detecting a hardware event, determining a first virtual memory address associated with the hardware event, wherein the first virtual memory address is associated with a first processing thread, identifying, using the first virtual memory address, an entry of a logical address table, the entry comprising a file descriptor and a file offset associated with a file, identifying a memory address table associated with the file descriptor, translating, using the memory address table, the file offset into a second virtual memory address associated with a second processing thread, and transmitting, to the second processing thread, a notification comprising the second virtual memory address.
Type: Application
Filed: October 7, 2019
Publication date: February 6, 2020
Inventors: Michael Tsirkin, Andrea Lee Arcangeli, David Alan Gilbert
-
Publication number: 20200042462
Abstract: Initializing a data structure for use in predicting table of contents (TOC) pointer values. A request to load a module is obtained. Based on the loaded module, a pointer value for a reference data structure is determined. The pointer value is stored in a reference data structure tracking structure, and used to access a variable value for a variable of the module.
Type: Application
Filed: October 11, 2019
Publication date: February 6, 2020
Inventors: Michael K. Gschwind, Valentina Salapura
-
Publication number: 20200042463
Abstract: An apparatus includes a first device configured to generate a transaction request targeted to a first address, a switch coupled to the first device and configured to route the transaction request, a port coupled to the peripheral switch and the data processing network, and a system memory management unit coupled to the port. The system memory management unit is configured for receiving an address query for the first address from the peripheral port, translating the first address to a second address, accessing attributes of a device associated with the second address, and responding to the query. Access validation for the transaction request is confirmed or denied dependent upon the second address and the attributes of the device associated with the second address. The first device may be a peripheral device, the switch may be a peripheral switch and the port may be a peripheral port.
Type: Application
Filed: August 3, 2018
Publication date: February 6, 2020
Applicant: Arm Limited
Inventors: Tessil THOMAS, Jamshed JALAL, Andrea PELLEGRINI, Anitha KONA
-
Publication number: 20200042464
Abstract: An apparatus and method are provided for comparing regions associated with first and second bounded pointers to determine whether the region defined for the second bounded pointer is a subset of the region defined for the first bounded pointer. Each bounded pointer has a pointer value and associated upper and lower limits identifying the memory region for that bounded pointer. The apparatus stores first and second bounded pointer representations, each representation comprising a pointer value having p bits, and identifying the upper and lower limits in a compressed form by identifying a lower limit mantissa of q bits, an upper limit mantissa of q bits and an exponent value e. The most significant p-q-e bits of the lower limit and the upper limit are derivable from the most significant p-q-e bits of the pointer value.
Type: Application
Filed: August 6, 2018
Publication date: February 6, 2020
Inventors: Daniel ARULRAJ, Lee Evan EISEN, Graeme Peter BARNES
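A simplified reconstruction of limit decompression follows. This is only an illustration of the compressed form described in the abstract, not the patent's exact rules: each limit is rebuilt as the pointer's top p-q-e bits, concatenated with the q-bit mantissa, followed by e zero bits.

```python
def decompress_limits(pointer, lo_mantissa, hi_mantissa, e, p=16, q=4):
    """Rebuild lower/upper limits from a p-bit pointer, two q-bit mantissas,
    and exponent e. Simplified sketch: top p-q-e bits come from the pointer."""
    top_bits = pointer >> (q + e)  # shared most-significant p-q-e bits
    lower = ((top_bits << q) | lo_mantissa) << e
    upper = ((top_bits << q) | hi_mantissa) << e
    return lower, upper

lo, hi = decompress_limits(pointer=0b1010_0110_0101_0011,
                           lo_mantissa=0b0010, hi_mantissa=0b1100,
                           e=4, p=16, q=4)
print(bin(lo), bin(hi))  # 0b1010011000100000 0b1010011011000000
```

Storing only two q-bit mantissas and an exponent instead of two full p-bit limits is what makes the bounded-pointer encoding compact; the cost is that limits are representable only at granularity 2^e.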
-
Publication number: 20200042465
Abstract: Various examples are directed to systems and methods for programming memory. A programming appliance may receive a command file comprising a first pre-generated digital signature. The first pre-generated digital signature may be associated with a memory system, with a first command and with a first memory system counter value. The programming appliance may send to a memory system a first command message. The first command message may comprise the first command and the first pre-generated digital signature.
Type: Application
Filed: August 1, 2018
Publication date: February 6, 2020
Inventor: Olivier Duval
-
Publication number: 20200042466
Abstract: A method of operating a data storage system is provided. The method includes establishing a user region on a non-volatile storage media of the data storage system configured to store user data, and establishing a recovery region on the non-volatile storage media of the data storage system configured to store recovery information pertaining to at least the user region. The method also includes updating the recovery information in the recovery region responsive to at least changes to the user region, and responsive to at least a power interruption of the data storage system, rebuilding at least a portion of the user region using the recovery information retrieved from the recovery region.
Type: Application
Filed: August 2, 2019
Publication date: February 6, 2020
Applicant: Burlywood, Inc.
Inventors: Amy Lee Wohlschlegel, Kevin Darveau Landin, Nathan Koch, John William Slattery, Erik Habbinga
-
Publication number: 20200042467
Abstract: Disclosed is a method, apparatus, and/or computer program product for reducing latency in a processor with regard to the execution of noncacheable operations that includes receiving noncacheable operations from one or both of the level 2 cache and a level 3 cache, sending the noncacheable operations to a noncacheable unit (NCU) associated with a core of the processor, executing the noncacheable operations by the NCU, and sending results of the executed noncacheable operations to a host bridge for output to an input/output device. The noncacheable operations bypass the core of the processor.
Type: Application
Filed: October 9, 2019
Publication date: February 6, 2020
Inventor: Shakti Kapoor
-
Publication number: 20200042468
Abstract: Embodiments using a distributed bus arbiter for one cycle channel selection with inter-channel ordering constraints. A distributed bus arbiter that orders one or more memory bus transactions originating from a plurality of master bus components to a plurality of shared remote slaves over shared serial channels attached to differing interconnect instances may be implemented.
Type: Application
Filed: August 3, 2018
Publication date: February 6, 2020
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Dimitrios SYRIVELIS, Andrea REALE, Kostas KATRINIS
-
Publication number: 20200042469
Abstract: A system and method for efficiently scheduling requests. In various embodiments, a processor sends commands such as read requests and write requests to an arbiter. The arbiter reduces latencies between commands being sent to a communication fabric and corresponding data being sent to the fabric. When the arbiter selects a given request, the arbiter identifies a first subset of stored requests affected by the given request being selected. The arbiter adjusts one or more attributes of the first subset of requests based on the selection of the given request. In one example, the arbiter replaces a weight attribute with a value, such as a zero value, indicating the first subset of requests should not be selected. Therefore, during the next selection by the arbiter, only the requests in a second subset different from the first subset are candidates for selection.
Type: Application
Filed: August 6, 2018
Publication date: February 6, 2020
Inventors: Shawn Munetoshi Fukami, Jaideep Dastidar, Yiu Chun Tse
-
Publication number: 20200042470
Abstract: A data processing system includes a memory system including a memory device storing data and a controller performing a data program operation or a data read operation with the memory device, and a host suitable for requesting the data program operation or the data read operation from the memory system. The controller can perform a serial communication to control a memory which is arranged outside the memory system and engaged with the host.
Type: Application
Filed: March 28, 2019
Publication date: February 6, 2020
Inventor: Jong-Min LEE
-
Publication number: 20200042471
Abstract: A system for serial communication includes a controller, a semiconductor package comprising a plurality of semiconductor die, and a serial interface configured to connect the plurality of semiconductor die to the controller. The serial interface includes a controller-to-package connection and a package-to-controller connection, and the serial interface is configured to employ a signaling protocol using differential data signaling with no separate clock signals.
Type: Application
Filed: August 3, 2018
Publication date: February 6, 2020
Inventors: Benjamin James Kerr, Philip Rose, Robert Reed
-
Publication number: 20200042472
Abstract: An apparatus to facilitate source synchronous signaling is disclosed. The apparatus includes transfer protocol logic to provide for source synchronous transfer of data within an interconnect fabric, including one or more synchronizers having logic to transmit a data signal and a source clock (clk) signal during the transfer of data.
Type: Application
Filed: August 14, 2019
Publication date: February 6, 2020
Applicant: Intel Corporation
Inventors: Altug Koker, Joydeep Ray, Vasanth Ranganathan, Abhishek R. Appu
-
Publication number: 20200042473
Abstract: Implementations are provided herein for systems, methods, and a non-transitory computer product configured to analyze an input/output (IO) pattern for a data storage system, to identify an application type based on the IO pattern, and to select optimal deduplication and compression configurations based on the application type. The teachings herein facilitate machine learning of various metrics and the interrelations between these metrics, such as past IO patterns, application types, deduplication configurations, compression configurations, and overall system performance. These metrics and interrelations can be stored in a data lake. In some embodiments, data objects can be segmented in order to optimize configurations with more granularity. In additional embodiments, predictive techniques are used to select deduplication and compression configurations when certainty regarding an application type is lacking.
Type: Application
Filed: February 13, 2019
Publication date: February 6, 2020
Inventors: Nickolay Dalmatov, Kirill Bezugly
-
Publication number: 20200042474
Abstract: An electronic device and method for communicating with an external electronic device that is connected via a connector of the electronic device are provided. The electronic device includes a connector including a first pin and a second pin, a communication interface connected with the connector, and at least one processor electrically connected with the communication interface, wherein the at least one processor may be configured to apply a first current to the first pin, determine whether liquid is introduced into the connector using the second pin, and if the liquid is introduced into the connector, apply a second current smaller than the first current to the first pin.
Type: Application
Filed: October 11, 2019
Publication date: February 6, 2020
Inventor: Yeon-Rae JO
-
Publication number: 20200042475
Abstract: Systems and methods for demand-based remote direct memory access buffer management. A method embodiment commences upon initially partitioning a memory pool at a computer that is to receive memory contents from a sender. The memory pool is partitioned into memory areas that comprise a plurality of different sized buffers that serve as target buffers for one or more direct memory access data transfer operations from the data sources. An initial first set of buffer apportionments are associated with each one of the one or more data sources and those initial sets are advertised to the corresponding data sources. Over time, based on messages that have been loaded into the receiver's memory, the payload sizes of the messages are observed. Based on the observed demand for buffers that are used for the message payload, the constituency of the advertised buffers can grow or shrink elastically as compared to previous advertisements.
Type: Application
Filed: July 31, 2018
Publication date: February 6, 2020
Applicant: Nutanix, Inc.
Inventors: Hema VENKATARAMANI, Peter Scott WYCKOFF
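The demand-driven re-advertisement step can be sketched as a small sizing function. Everything here is an assumption for illustration (the buffer size classes, the proportional policy, and the floor of `per_size_target`); the patent only requires that the advertised buffer mix track observed payload sizes.

```python
from collections import Counter

def next_advertisement(observed_payload_sizes,
                       buffer_sizes=(4096, 65536, 1048576),
                       per_size_target=4):
    """Bucket recently observed payloads into the smallest buffer class that
    fits, then advertise buffers roughly in proportion to observed demand,
    with a minimum of per_size_target for any class actually in use."""
    demand = Counter()
    for payload in observed_payload_sizes:
        fitting = [b for b in buffer_sizes if payload <= b]
        if fitting:
            demand[min(fitting)] += 1  # smallest class that holds the payload
    total = sum(demand.values()) or 1
    budget = per_size_target * len(buffer_sizes)  # total buffers to advertise
    return {b: max(per_size_target, round(budget * demand[b] / total))
            for b in buffer_sizes if demand[b]}

ads = next_advertisement([1000, 2000, 30000, 1500, 500000])
print(ads)  # small payloads dominate, so the 4096-byte class grows
```

Re-running this on each observation window is one way the advertised buffer constituency could "grow or shrink elastically" relative to the previous advertisement.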
-
Publication number: 20200042476Abstract: A problem addressed by the present invention is to provide a transfer control device and the like capable of reducing the number of sending and receiving processes that a management device carries out when information is transferred among recording units. To solve the problem, provided is a transfer control device comprising: a transfer processing unit which, using each of a plurality of instances of management information, carries out an information transfer from a first recording unit to a second recording unit; and an assessment unit which determines whether or not to carry out an update by assessing completion of partial transfers, each of which corresponds to the information transfer associated with one of the plurality of instances of the management information, on the basis of contracted information which represents a completion status of the partial transfers.Type: ApplicationFiled: January 15, 2018Publication date: February 6, 2020Applicant: NEC Platforms, Ltd.Inventor: Takahito YAMAMOTO
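The assessment step reduces to a single aggregate check: update the management device only once every partial transfer reports complete. A toy rendering, with the flag representation invented:

```python
def should_update(partial_transfer_status):
    """partial_transfer_status: one completion flag per management-information
    instance (standing in for the 'contracted information' of the abstract).
    Returns True only when every partial transfer has completed, so the
    management device sees one update instead of one per transfer."""
    return all(partial_transfer_status)
```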
-
Publication number: 20200042477Abstract: An apparatus may include a heterogeneous computing environment that may be controlled, at least in part, by a task scheduler in which the heterogeneous computing environment may include a processing unit having fixed logical circuits configured to execute instructions; a reprogrammable processing unit having reprogrammable logical circuits configured to execute instructions that include instructions to control processing-in-memory functionality; and a stack of high-bandwidth memory dies, each of which may be configured to store data and to provide processing-in-memory functionality controllable by the reprogrammable processing unit such that the reprogrammable processing unit is at least partially stacked with the high-bandwidth memory dies. The task scheduler may be configured to schedule computational tasks between the processing unit and the reprogrammable processing unit.Type: ApplicationFiled: October 7, 2019Publication date: February 6, 2020Inventors: Krishna T. MALLADI, Hongzhong ZHENG
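One plausible scheduling policy for such an environment is to route memory-bound tasks to the reprogrammable processing-in-memory (PIM) unit stacked with the HBM dies and compute-bound tasks to the fixed-logic unit. The arithmetic-intensity metric and cutoff below are assumptions for illustration; the publication does not specify the scheduling criterion.

```python
def schedule(task):
    """task: dict with 'flops' and 'bytes_accessed'."""
    # Arithmetic intensity: FLOPs per byte moved. Low intensity means the
    # task is dominated by memory traffic, so run it near the memory.
    intensity = task["flops"] / task["bytes_accessed"]
    return "pim_unit" if intensity < 1.0 else "fixed_unit"
```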
-
Publication number: 20200042478Abstract: An aspect of performance improvement for an active-active distributed non-ALUA (asymmetric logical unit access) system with address ownerships includes receiving, by a routing module of a content-addressable storage system, an input/output (IO) request; and determining, by the routing module from a table that provides a listing of addresses and compute nodes having ownership of the address, a target location of the IO request. The target location specifies an address. An aspect also includes determining, by the routing module, from a mapping between each of the compute modules and a physical path to corresponding storage controllers, an address owner of a storage controller port of a storage controller that owns the address of the IO; selecting a physical path associated with the address owner; and transmitting, by the routing module, the IO request to the storage controller port via a direct call.Type: ApplicationFiled: October 15, 2019Publication date: February 6, 2020Applicant: EMC IP HOLDING COMPANY LLCInventors: Amitai Alkalay, Zvi Schneider, Assaf Natanzon
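The two-table lookup described above can be sketched as follows: find the node that owns the IO's address, then pick the physical path to that node's storage controller port. All table contents and names are hypothetical.

```python
ADDRESS_OWNERS = {            # address range start -> owning compute node
    0x0000: "node_a",
    0x8000: "node_b",
}
PHYSICAL_PATHS = {            # owning node -> storage controller port
    "node_a": "ctrl0_port1",
    "node_b": "ctrl1_port0",
}

def route_io(address):
    # Owner of the largest range start at or below the address.
    start = max(s for s in ADDRESS_OWNERS if s <= address)
    owner = ADDRESS_OWNERS[start]
    return owner, PHYSICAL_PATHS[owner]   # target for the direct call
```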
-
Publication number: 20200042479Abstract: Apparatus and methods implementing a hardware queue management device for reducing inter-core data transfer overhead by offloading request management and data coherency tasks from the CPU cores. The apparatus includes multi-core processors, a shared L3 or last-level cache (“LLC”), and a hardware queue management device to receive, store, and process inter-core data transfer requests. The hardware queue management device further comprises a resource management system to control the rate at which the cores may submit requests to reduce core stalls and dropped requests. Additionally, software instructions are introduced to optimize communication between the cores and the queue management device.Type: ApplicationFiled: October 14, 2019Publication date: February 6, 2020Applicant: Intel CorporationInventors: Ren Wang, Yipeng Wang, Andrew Herdrich, Jr-Shian Tsai, Tsung-Yuan C. Tai, Niall D. McDonnell, Hugh Wilkinson, Bradley A. Burres, Bruce Richardson, Namakkal N. Venkatesan, Debra Bernstein, Edwin Verplanke, Stephen R. Van Doren, An Yan, Andrew Cunningham, David Sonnier, Gage Eads, James T. Clee, Jamison D. Whitesell, Jerry Pirog, Jonathan Kenny, Joseph R. Hasting, Narender Vangati, Stephen Miller, Te K. Ma, William Burroughs
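A software model of the rate-control idea: each core holds credits and may only submit a request while it has one, so requests are refused at submission time rather than dropped inside the device. The credit scheme and counts are an illustrative assumption, not the publication's mechanism.

```python
class QueueManagementDevice:
    def __init__(self, credits_per_core=4):
        self.credits = {}
        self.default = credits_per_core
        self.queue = []

    def submit(self, core_id, request):
        remaining = self.credits.setdefault(core_id, self.default)
        if remaining == 0:
            return False          # core must back off instead of stalling
        self.credits[core_id] = remaining - 1
        self.queue.append((core_id, request))
        return True

    def complete_one(self):
        core_id, request = self.queue.pop(0)
        self.credits[core_id] += 1   # credit returned on completion
        return request
```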
-
Publication number: 20200042480Abstract: Embodiments for managing High-Definition Multimedia Interface (HDMI) data. HDMI data received by at least one of a second HDMI connector of an HDMI device and the processor of the HDMI device is transmitted to a first HDMI connector of the HDMI device according to each of a plurality of modes of operation. A switching operation between the plurality of modes of operation is automatically performed based on a time schedule programmed by a user, notwithstanding a priority signal embedded within the HDMI data received at the second HDMI connector, or the processor is configured to override the time schedule to initiate the switching.Type: ApplicationFiled: October 14, 2019Publication date: February 6, 2020Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATIONInventors: David B. LECTION, Sarbajit K. RAKSHIT, Mark B. STEVENS, John D. WILSON
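A toy model of the schedule-driven switching: pick the mode of operation from a user-programmed time schedule, ignoring any priority signal in the incoming HDMI data unless an override flag is set. The schedule format and mode names are invented.

```python
def select_mode(hour, schedule, priority_mode=None, override=False):
    """schedule: list of (start_hour, mode) pairs, sorted by start_hour."""
    if override and priority_mode is not None:
        return priority_mode      # processor overrides the time schedule
    chosen = schedule[0][1]
    for start_hour, mode in schedule:
        if hour >= start_hour:
            chosen = mode         # latest schedule entry already started
    return chosen

SCHEDULE = [(0, "passthrough"), (9, "processed"), (22, "passthrough")]
```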
-
Publication number: 20200042481Abstract: Moving from a back-to-back topology to a switched topology in an InfiniBand network includes, prior to connecting a switch for a first storage controller in the network and during reboot of the first storage controller, waiting for a second storage controller in the network to become master, and upon the second storage controller becoming master, changing cache files for local ports on the first storage controller regarding adjacent ports' LID assignments. An aspect further includes restarting a system manager for the first storage controller, connecting the first storage controller to the system with new LID assignments provided by changed files on the first storage controller, and upon the first storage controller becoming active, rebooting the second storage controller, changing the LID assignments in the active storage controller, and adding new switches to the system.Type: ApplicationFiled: August 1, 2018Publication date: February 6, 2020Applicant: EMC IP Holding Company LLCInventors: Ahia Lieber, Liran Loya, Alex Kulakovsky
-
Publication number: 20200042482Abstract: Described are embodiments of methods, apparatuses, and systems for PCIe tunneling across a multi-protocol I/O interconnect of a computer apparatus. A method for PCIe tunneling across the multi-protocol I/O interconnect may include establishing a first communication path between ports of a switching fabric of a multi-protocol I/O interconnect of a computer apparatus in response to a peripheral component interconnect express (PCIe) device being connected to the computer apparatus, and establishing a second communication path between the switching fabric and a PCIe controller. The method may further include routing, by the multi-protocol I/O interconnect, PCIe protocol packets of the PCIe device from the PCIe device to the PCIe controller over the first and second communication paths. Other embodiments may be described and claimed.Type: ApplicationFiled: August 19, 2019Publication date: February 6, 2020Inventors: David J. Harriman, Maxim Dan
-
Publication number: 20200042483Abstract: Aspects of the disclosure relate to computer applications for collating scattered signals in a computer system. The computer system may include a processor, memory, display, and a plurality of applications. The plurality of applications may include a central application and a plurality of peripheral applications. The peripheral applications may generate signals. The central application may access the signals generated by the peripheral applications. The central application may collate the signals and store the collated signals. The central application may present the collated signals on the display. The collated signals may be actionable in the central application. Actions performed in response to the collated signals in the central application may be conveyed to the peripheral application from which the collated signal originated.Type: ApplicationFiled: August 2, 2018Publication date: February 6, 2020Inventors: Robert S. Mumma, John E. Scully, Patrick E. Burgess, JR.
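The collation flow can be sketched as follows: peripheral applications emit signals, the central application gathers and stores them, and an action taken on a collated signal is routed back to its originating application. All class and method names are illustrative.

```python
class CentralApp:
    def __init__(self):
        self.collated = []

    def collect(self, peripheral_name, signal):
        # Store each signal together with its origin so actions can be
        # conveyed back to the right peripheral application later.
        self.collated.append({"origin": peripheral_name, "signal": signal})

    def act_on(self, index, action):
        """Return (origin, action) so the caller can convey the action back."""
        entry = self.collated[index]
        return entry["origin"], action

central = CentralApp()
central.collect("mail_app", "new message")
central.collect("calendar_app", "meeting in 10 min")
origin, action = central.act_on(1, "snooze")
```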
-
Publication number: 20200042484Abstract: A system comprising: a first host and a second host; and an integrated circuit comprising: a first bus and a second bus physically separate and isolated from the first bus; a first host interface to connect the first host to the first bus and a second host interface to connect the second host to the second bus; and a hot plug control channel including first and second hot plug control registers, wherein each of the hot plug control registers is connectable to a hot pluggable device; wherein the hot plug control channel is to connect the first bus to the first and second hot plug control register to thereby connect the first host to the first and second hot plug control register, and is to connect the second bus to the first and second hot plug control register to thereby connect the second host to the first and second hot plug control register.Type: ApplicationFiled: July 31, 2018Publication date: February 6, 2020Inventors: Walter Nixon, William Price, Chengjun Zhu
-
Publication number: 20200042485Abstract: Systems and methods for asynchronous mapping of a hot-plugged I/O device associated with a virtual machine. An example method comprises: executing, by a host computer system, a virtual machine managed by a hypervisor, wherein the virtual machine is associated with a hot-pluggable input/output (I/O) device; responsive to detecting removal of the I/O device, unpinning a memory buffer associated with the I/O device; and responsive to receiving a signal indicating completion of unpinning the memory buffer, releasing the I/O device from the hypervisor.Type: ApplicationFiled: October 15, 2019Publication date: February 6, 2020Inventors: Alex Williamson, Michael Tsirkin
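The teardown ordering above can be illustrated with a completion callback: on hot-unplug, first unpin the device's memory buffer, and only release the device from the hypervisor after the unpin-complete signal arrives. Names are invented; the unpin is synchronous here purely to keep the sketch runnable.

```python
events = []

def on_device_removed(device):
    # Release only runs from the unpin-completion callback, never before.
    unpin_buffer(device, on_done=lambda: release_device(device))

def unpin_buffer(device, on_done):
    events.append(("unpin", device))
    on_done()                     # completion signal (synchronous stand-in)

def release_device(device):
    events.append(("release", device))

on_device_removed("vfio-dev0")
```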
-
Publication number: 20200042486Abstract: A method for managing communications involving a lockstep processing system comprising at least a first processor and a second processor can include receiving, at a data synchronizer, a first signal from a first device. The method can also include receiving, at the data synchronizer, a second signal from a second device. In addition, the method can include determining, by the data synchronizer, whether the first signal is equal to the second signal. When the first signal is equal to the second signal, the method can include transmitting, by the data synchronizer, the first signal to the first processor and the second signal to the second processor. Specifically, in example embodiments, transmitting the first signal to the first processor can occur synchronously with transmitting the second signal to the second processor.Type: ApplicationFiled: October 11, 2019Publication date: February 6, 2020Inventors: Melanie Sue-Hanson Graffy, Jon Marc Diekema
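The synchronizer's compare-and-forward behavior reduces to a simple function: deliver both copies together only when the two inputs match. The return convention (withholding delivery on mismatch) is an assumption; the abstract does not specify the mismatch path.

```python
def synchronize(first_signal, second_signal):
    if first_signal != second_signal:
        return None               # mismatch: withhold delivery
    # Delivered "synchronously": both lockstep processors get their copy
    # in the same step.
    return {"processor_1": first_signal, "processor_2": second_signal}
```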
-
Publication number: 20200042487Abstract: A serial communication protocol for daisy-chained slave devices does away with the requirement for an entire byte of dummy clocks to be cycled between a slave's input and output, instead requiring a shorter set of dummy clock cycles, which improves the efficiency of a serial communication system. According to a specification of a serial communications protocol, data is exchanged between master and slave devices in communication frames. Each communication frame has a command portion and a data portion, and each respective portion may comprise packages of one or more bytes.Type: ApplicationFiled: September 12, 2019Publication date: February 6, 2020Inventors: Shaoxuan Wang, Yuchuan Shi, Ze Han, Lingxin Kong, Nailong Wang
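The framing and the efficiency gain can be illustrated numerically: a frame carries a command portion and a data portion, and each daisy-chained slave inserts a short run of dummy clock cycles (fewer than the eight of a full dummy byte) between its input and output. The 2-cycle figure is an assumption for illustration only.

```python
DUMMY_CYCLES_PER_SLAVE = 2     # versus 8 for a full dummy byte

def build_frame(command_bytes, data_bytes):
    """A communication frame with a command portion and a data portion."""
    return {"command": bytes(command_bytes), "data": bytes(data_bytes)}

def chain_latency_cycles(frame, num_slaves, dummy_cycles=DUMMY_CYCLES_PER_SLAVE):
    """Clock cycles to shift a frame through the whole daisy chain: one bit
    per cycle for the frame, plus the per-slave dummy cycles."""
    frame_bits = 8 * (len(frame["command"]) + len(frame["data"]))
    return frame_bits + num_slaves * dummy_cycles
```

With four slaves, shrinking the dummy gap from 8 to 2 cycles saves 24 cycles per frame in this model.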
-
Publication number: 20200042488Abstract: At least some aspects of the present disclosure provide for a method. In some examples, the method includes receiving, at a circuit, data via a differential input signal, detecting a rising edge in the data received via the differential input signal, and precharging a common mode voltage (Vcm) node of the differential input signal responsive to detecting the rising edge in the data received via the differential input signal, wherein the Vcm node is a floating node.Type: ApplicationFiled: May 6, 2019Publication date: February 6, 2020Inventors: Win Naing MAUNG, Saurabh GOYAL, Bhupendra SHARMA, Huanzhang HUANG, Douglas Edward WENTE, Suzanne Mary VINING, Mustafa Ulvi ERDOGAN
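A behavioral model of the edge-triggered precharge: watch successive samples of the differential data and assert a precharge of the floating Vcm node on each rising edge. The sampled-data representation is an abstraction of what is, in the publication, an analog circuit.

```python
def precharge_events(samples):
    """samples: sequence of 0/1 data values. Returns the indices at which a
    Vcm precharge is asserted (one per detected rising edge)."""
    return [i for i in range(1, len(samples))
            if samples[i - 1] == 0 and samples[i] == 1]
```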