Shared Memory Partitioning Patents (Class 711/153)
-
Patent number: 11829625
Abstract: Embodiments of the present disclosure relate to managing communications between slices on a storage device engine. Shared slice memory of a storage device engine is provisioned for use by each slice of the storage device engine. The shared slice memory is a portion of total storage device engine memory. Each slice's access to the shared memory portion is controlled.
Type: Grant
Filed: April 27, 2020
Date of Patent: November 28, 2023
Assignee: EMC IP Holding Company LLC
Inventors: Rong Yu, Jingtong Liu, Peng Wu
-
Patent number: 11748215
Abstract: In a log management method performed by a server, the server receives a transaction and generates a command log of the transaction. When detecting that the transaction is a multi-partition transaction or a non-deterministic transaction, the server generates a data log of the transaction. When the server is faulty, the server recovers data according to the command log or the data log.
Type: Grant
Filed: June 4, 2020
Date of Patent: September 5, 2023
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Xiaohao Liang, Huimin Zhang, Weisong Wang, Tieying Wang
-
Patent number: 11632301
Abstract: Techniques, methods, and systems for managing a set of data network nodes in a Network Management System (NMS). In some examples, a method may include receiving, at the network orchestrator, a service invocation for a service transaction associated with a transaction object; storing, by the network orchestrator, service metadata as part of the transaction object; determining whether there is a service metadata conflict associated with the transaction object; and in response to determining that there is the service metadata conflict associated with the transaction object, retrying the service transaction; or in response to determining that there is no service metadata conflict associated with the transaction object, applying the service metadata to one or more nodes of the set of data nodes.
Type: Grant
Filed: May 24, 2022
Date of Patent: April 18, 2023
Assignee: Cisco Technology, Inc.
Inventors: Viktoria Fordos, Claes Daniel Nasten
-
Patent number: 11567796
Abstract: As part of a container initialization procedure, a maximum number of hardware threads per processor core in a set of cores of a computer system are enabled, the container initialization procedure configuring an operating system executing on the computer system for container execution and configuring a first container for execution on the operating system. From a set of available cores in the set of cores, an execution core is selected. In the selected execution core, a number of threads per core to be used during execution of the first container is configured, the number of threads per core specified for the container initialization procedure by a first simultaneous multithreading (SMT) parameter. Using the configured execution core, the first container is executed, the executing virtualizing the operating system.
Type: Grant
Filed: October 22, 2020
Date of Patent: January 31, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Jeffrey W. Tenner, Joseph W. Cropper
-
Patent number: 11531386
Abstract: A multi-element device includes a plurality of memory elements, each of which includes a memory array, access circuitry to control access to the memory array, and power control circuitry. The power control circuitry, which includes one or more control registers for storing first and second control values, controls distribution of power to the access circuitry in accordance with the first control value, and controls distribution of power to the memory array in accordance with the second control value. Each memory element also includes sideband circuitry for enabling a host system to set at least the first control value and the second control value in the one or more control registers.
Type: Grant
Filed: June 29, 2020
Date of Patent: December 20, 2022
Assignee: RAMBUS INC.
Inventors: Deborah Lindsey Dressler, Julia Kelly Cline, Wayne Frederick Ellis
-
System and method for speed up data rebuild in a distributed storage system with local deduplication
Patent number: 11474724
Abstract: A method includes obtaining a plurality of representations corresponding respectively to a plurality of blocks of data stored on a source node. A plurality of data pairs are sent to a destination node, where each data pair includes a logical address associated with a block of data from the plurality of blocks of data and the corresponding representation of the block of data. A determination is made whether the blocks of data associated with the respective logical addresses are duplicates of data stored on the destination node. In accordance with an affirmative determination, a reference to a physical address of the block of data stored on the destination node is stored. In accordance with a negative determination, an indication that the data corresponding to the respective logical address is not a duplicate is stored. The data indicated as not being a duplicate is written to the destination node.
Type: Grant
Filed: January 25, 2018
Date of Patent: October 18, 2022
Assignee: VMware, Inc.
Inventors: Wenguang Wang, Christos Karamanolis, Srinath Premachandran
-
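A minimal sketch of the dedup-aware rebuild flow this abstract describes: the source sends (logical address, representation) pairs, and the destination either records a reference to data it already holds or marks the block for transfer. The SHA-256 fingerprint and the dict-based destination index are illustrative assumptions, not details from the patent:

```python
import hashlib

def rebuild(source_blocks, dest_store):
    """source_blocks: logical address -> block bytes on the source node.
    dest_store: fingerprint -> physical address of data already on the destination."""
    refs, to_write = {}, {}
    for lba, data in source_blocks.items():
        fp = hashlib.sha256(data).hexdigest()  # the block's "representation"
        if fp in dest_store:                   # duplicate: store only a reference
            refs[lba] = dest_store[fp]
        else:                                  # not a duplicate: data must be written
            to_write[lba] = data
    return refs, to_write
```

Only the non-duplicate blocks travel over the wire, which is what makes the rebuild fast when the destination already deduplicates locally.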
Patent number: 11461139
Abstract: An apparatus includes processing cores, memory blocks, a connection between each processing core and each memory block, a chip selection circuit, and chip selection circuit busses between the chip selection circuit and each of the memory blocks. Each memory block includes a data port and a memory check port. The chip selection circuit is configured to enable writing data from a highest priority core through respective data ports of the memory blocks. The chip selection circuit is further configured to enable writing data from other cores through respective memory check ports of the memory blocks.
Type: Grant
Filed: April 8, 2020
Date of Patent: October 4, 2022
Assignee: Microchip Technology Incorporated
Inventors: Michael Simmons, Anjana Priya Sistla, Priyank Gupta
-
Patent number: 11449251
Abstract: A storage control device operable to be one of a plurality of storage control devices included in a storage device, the storage control device includes: a memory; and a processor coupled to the memory, the processor being configured to perform processing, the processing including: executing a determination processing that includes determining whether activation of the storage control device is caused by activation of the entire storage device or by activation of the storage control device alone; and executing a region setting processing that includes setting a control information storage region that stores control information used to enable a function of the storage device according to a determination result by the determination processing.
Type: Grant
Filed: October 23, 2020
Date of Patent: September 20, 2022
Assignee: FUJITSU LIMITED
Inventors: Tomohiko Muroyama, Shinichi Nishizono, Shoji Oshima
-
Patent number: 11429545
Abstract: The invention relates to methods and an apparatus for data reads in a host performance acceleration (HPA) mode. One method is performed in a host side to include: obtaining a value of an extended device-specific data (Ext_CSD) register in a flash controller from the flash controller, where the host side and the flash controller communicate with each other in an embedded multi-media card (eMMC) protocol; and allocating space in a system memory as an HPA buffer, and storing a plurality of first logical-block-address to physical-block-address (L2P) mapping entries obtained from the flash controller when the value of the Ext_CSD register comprises information indicating that an HPA function is supported, where each L2P mapping entry stores information indicating which physical address that user data of a corresponding logical address is physically stored in.
Type: Grant
Filed: May 19, 2021
Date of Patent: August 30, 2022
Assignee: SILICON MOTION, INC.
Inventor: Po-Yi Shih
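A minimal sketch of the host-side L2P buffer the abstract describes: the host caches logical-to-physical mapping entries fetched from the flash controller, so subsequent reads can target a physical address directly. The class and method names are illustrative, and the miss path (deferring back to the controller) is modeled simply as `None`:

```python
class HpaBuffer:
    """Host-side cache of L2P (logical-block-address -> physical-block-address) entries."""

    def __init__(self, entries):
        # 'entries' stands in for mapping entries obtained from the flash controller.
        self.l2p = dict(entries)

    def translate(self, lba):
        # Hit: the host issues the read against the physical address directly.
        # Miss: in a real system the host would fall back to the controller.
        return self.l2p.get(lba)
```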
-
Patent number: 11422826
Abstract: Methods, systems, and devices for operational code storage for an on-die microprocessor are described. A microprocessor may be formed on-die with a memory array. Operating code for the microprocessor may be stored in the memory array, possibly along with other data (e.g., tracking or statistical data) used or generated by the on-die microprocessor. A wear leveling algorithm may result in some number of rows within the memory array not being used to store user data at any given time, and these rows may be used to store the operating code and possibly other data for the on-die microprocessor. The on-die microprocessor may boot and run based on the operating code stored in the memory array.
Type: Grant
Filed: May 19, 2020
Date of Patent: August 23, 2022
Assignee: Micron Technology, Inc.
Inventors: Troy A. Manning, Jonathan D. Harms, Troy D. Larsen, Glen E. Hush, Timothy P. Finkbeiner
-
Patent number: 11411874
Abstract: A network switch includes a first port configured for communication with a first electric device and a second port configured for communication with a second electric device in a deterministic network. The network switch includes one or more processors configured to receive at the first port a communication packet associated with the first electric device and the second electric device, determine if the communication packet satisfies a plurality of protocol constraints, and in response to the communication packet satisfying the plurality of protocol constraints, input one or more message characteristics from the communication packet into a model associated with a first industrial process. The model is configured to output a process behavioral classification based on the one or more message characteristics. The one or more processors receive a process behavioral classification for the communication packet, and selectively generate a control action for the ICS based on the process behavioral classification.
Type: Grant
Filed: January 4, 2019
Date of Patent: August 9, 2022
Assignee: GE AVIATION SYSTEMS LIMITED
Inventor: Stefan Alexander Schwindt
-
Patent number: 11340955
Abstract: Execution of varying tasks for heterogeneous applications running in a single runtime environment is managed. The runtime environment is capable of managing thread pools for any of the plurality of applications and receives a request to manage a thread pool for one of the applications. The request includes size thresholds for the pool, a first function to be invoked for creation of threads, and a second function to be invoked for termination of the threads. Responsive to detecting that a first size threshold is not satisfied, the runtime environment invokes the first function to cause the application to create an additional thread. Responsive to detecting that a second size threshold is not satisfied, the runtime environment places an artificial task that incorporates the second function into a work queue for the thread pool, whereby a thread executes the artificial task to invoke the second function and thereby terminates.
Type: Grant
Filed: January 2, 2020
Date of Patent: May 24, 2022
Assignee: International Business Machines Corporation
Inventors: Suman Mitra, Gireesh Punathil, Vipin M V
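A minimal sketch of the resizing logic this abstract describes: grow the pool by invoking the application's creation callback, and shrink it by enqueueing an "artificial task" that terminates whichever worker thread executes it. All names are illustrative, and the thresholds are modeled as simple min/max sizes:

```python
def manage_pool(current_size, min_size, max_size, create_fn, terminate_fn, work_queue):
    """Sketch: create_fn / terminate_fn stand in for the two application-supplied
    functions from the request; work_queue is the thread pool's task queue."""
    if current_size < min_size:
        create_fn()                       # first threshold unsatisfied: grow
    elif current_size > max_size:
        work_queue.append(terminate_fn)   # artificial task: the worker that runs it exits
```

The shrink path never kills a thread from outside; the terminating function runs inside a worker as an ordinary task, which is the mechanism the abstract highlights.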
-
Patent number: 11294573
Abstract: Provided are a computer program product, system, and method for generating node access information for a transaction accessing nodes of a data set index. Pages in the memory are allocated to internal nodes and leaf nodes of a tree data structure representing all or a portion of a data set index for the data set. A transaction is processed with respect to the data set that involves accessing the internal and leaf nodes in the tree data structure, wherein the transaction comprises a read or write operation. Node access information is generated in transaction information, for accessed nodes comprising nodes in the tree data structure accessed as part of processing the transaction. The node access information includes a pointer to the page allocated to the accessed node prior to the transaction in response to the node being modified during the transaction.
Type: Grant
Filed: June 3, 2019
Date of Patent: April 5, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Derek L. Erdmann, David C. Reed, Thomas C. Reed, Max D. Smith
-
Patent number: 11249804
Abstract: A computer-implemented method and system for affinity based optimization of persistent memory volumes. Responsive to receiving a request for a parent virtual PMEM device, a total memory capacity is apportioned amongst virtual persistent memory (PMEM) resources and physical memory resources. In accordance with a target affinity characteristic, a set of virtual central processor unit (CPU) sockets are assigned. Each virtual CPU socket is configured based on at least one physical central processor unit (CPU) core in conjunction with a subset of the virtual PMEM and physical memory resources. Child virtual PMEM devices are created for respective ones of the virtual CPU sockets, each of the child virtual PMEM devices being dedicated to the parent virtual PMEM device.
Type: Grant
Filed: October 7, 2019
Date of Patent: February 15, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: David Anthony Larson Stanton, Stuart Zachary Jacobs, Troy David Armstrong, Peter J. Heyrman
-
Patent number: 11243891
Abstract: Methods, devices, and systems for virtual address translation. A memory management unit (MMU) receives a request to translate a virtual memory address to a physical memory address and searches a translation lookaside buffer (TLB) for a translation to the physical memory address based on the virtual memory address. If the translation is not found in the TLB, the MMU searches an external memory translation lookaside buffer (EMTLB) for the physical memory address and performs a page table walk, using a page table walker (PTW), to retrieve the translation. If the translation is found in the EMTLB, the MMU aborts the page table walk and returns the physical memory address. If the translation is not found in the TLB and not found in the EMTLB, the MMU returns the physical memory address based on the page table walk.
Type: Grant
Filed: September 25, 2018
Date of Patent: February 8, 2022
Assignee: ATI Technologies ULC
Inventors: Nippon Harshadk Raval, Philip Ng
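A minimal sketch of the three-tier lookup order the abstract describes: TLB first, then the external-memory TLB, then a full page-table walk. In the hardware, the EMTLB probe and the page-table walk race and the walk is aborted on an EMTLB hit; the sketch below models that race sequentially, with dicts standing in for each structure:

```python
def translate(vaddr, tlb, emtlb, page_table):
    """Illustrative lookup hierarchy for virtual -> physical translation."""
    if vaddr in tlb:              # fast path: on-chip TLB hit
        return tlb[vaddr]
    if vaddr in emtlb:            # EMTLB hit: the concurrent page-table walk is aborted
        return emtlb[vaddr]
    return page_table[vaddr]      # both miss: result comes from the page-table walk
```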
-
Patent number: 11226860
Abstract: A method includes receiving a set of difference lists from a set of storage units of the DSN, where the set of storage units store a plurality of sets of encoded data slices, wherein a first difference list identifies first encoded data slices that have first indicators that are different than corresponding first indicators of the first encoded data slices included in a shared common list. The method continues by determining, for a set of encoded data slices of the plurality of sets of encoded data slices, whether a storage inconsistency exists based on one or more indicators associated with the encoded data slice included in the set of difference lists. When the storage inconsistency exists, the method continues by flagging for rebuilding encoded data slices of the set of encoded data slices associated with the storage inconsistency.
Type: Grant
Filed: July 22, 2019
Date of Patent: January 18, 2022
Assignee: PURE STORAGE, INC.
Inventors: Andrew D. Baptist, Ravi V. Khadiwala, Jason K. Resch
-
Patent number: 11216371
Abstract: In a cache memory, a main unit stores memory address information which is associated with part of data stored in a memory space to be accessed, on a cache line-by-cache line basis. The memory space is divided into a plurality of memory regions. The address generation unit generates a cache memory address from a memory address specified by an external access request, based on a memory region among the plurality of memory regions which is associated with the memory address specified by the access request. The main unit is searched according to the cache memory address, thereby searching and replacing different ranges of cache lines for different memory regions.
Type: Grant
Filed: March 27, 2017
Date of Patent: January 4, 2022
Assignee: MITSUBISHI ELECTRIC CORPORATION
Inventor: Keita Yamaguchi
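A minimal sketch of the region-based indexing the abstract describes: each memory region is mapped onto its own dedicated range of cache lines, so accesses in different regions replace different lines. The tuple layout for a region and the modulo index function are illustrative assumptions:

```python
def cache_index(mem_addr, regions):
    """regions: list of (start, end, first_line, line_count) tuples, one per
    memory region; each region owns cache lines [first_line, first_line + line_count)."""
    for start, end, first_line, line_count in regions:
        if start <= mem_addr < end:
            # Index within the region's private line range only.
            return first_line + (mem_addr % line_count)
    raise ValueError("address outside all configured regions")
```

Because each region's index stays inside its own line range, a streaming workload in one region cannot evict another region's working set.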
-
Patent number: 11210153
Abstract: An information handling system includes interleaved dual in-line memory modules (DIMMs) that are partitioned into logical partitions, wherein each logical partition is associated with a namespace. A DIMM controller sets a custom DIMM-level namespace-based threshold to detect a DIMM error and to identify one of the logical partitions of the DIMM error using the namespace associated with the logical partition. The detected DIMM error is repaired if it exceeds an error correcting code (ECC) threshold.
Type: Grant
Filed: June 19, 2020
Date of Patent: December 28, 2021
Assignee: Dell Products L.P.
Inventors: Vijay B. Nijhawan, Chandrashekar Nelogal, Syama S. Poluri, Vadhiraj Sankaranarayanan
-
Patent number: 11159605
Abstract: Selective resource migration is disclosed. A computer system includes physical memory and a plurality of physical processors. Each of the processors has one or more cores and each core instantiates one or more virtual processors that execute program code. Each core is configured to invoke a hyper-kernel on its hosting physical processor when the core cannot access a portion of the physical memory needed by the core. The hyper-kernel selectively moves the needed memory closer to a location accessible by the physical processor or remaps the virtual processor to another core.
Type: Grant
Filed: February 19, 2020
Date of Patent: October 26, 2021
Assignee: TidalScale, Inc.
Inventor: Isaac R. Nassi
-
Patent number: 11151013
Abstract: The present disclosure provides systems and methods for performance evaluation of Input/Output (I/O) intensive enterprise applications. Representative workloads may be generated for enterprise applications using synthetic benchmarks that can be used across multiple platforms with different storage systems. I/O traces are captured for an application of interest at low concurrencies and features that affect performance significantly are extracted, fed to a synthetic benchmark and replayed on a target system, thereby accurately recreating the same behavior of the application. Statistical methods are used to extrapolate the extracted features to predict performance at higher concurrency levels without generating traces at those concurrency levels. The method does not require deploying the application or database on the target system, since performance of the system is dependent on access patterns instead of actual data.
Type: Grant
Filed: January 29, 2018
Date of Patent: October 19, 2021
Assignee: Tata Consultancy Services Limited
Inventors: Dheeraj Chahal, Manoj Karunakaran Nambiar
-
Patent number: 11144432
Abstract: A computer program product for testing a server code in a server concurrently handling multiple client requests includes creating a job-specific breakpoint in the server code using a library application programming interface, the job-specific breakpoint in the server code is enabled or disabled based on a job identifier, the library application programming interface controls the job-specific breakpoint in the server code and includes an application programming interface for establishing a new server connection with the server and retrieving the job identifier from the server associated with the established new server connection, pausing execution of a client job based on enabling the job-specific breakpoint in the server code using the library application programming interface, and resuming execution of the client job based on disabling the job-specific breakpoint in the server code using the library application programming interface.
Type: Grant
Filed: January 3, 2020
Date of Patent: October 12, 2021
Assignee: International Business Machines Corporation
Inventors: Amit P. Joglekar, Praveen Mohandas
-
Patent number: 11119820
Abstract: One embodiment provides for a general-purpose graphics processing unit comprising a set of processing elements to execute one or more thread groups of a second kernel to be executed by the general-purpose graphics processor, an on-chip memory coupled to the set of processing elements, and a scheduler coupled with the set of processing elements, the scheduler to schedule the thread groups of the kernel to the set of processing elements, wherein the scheduler is to schedule a thread group of the second kernel to execute subsequent to a thread group of a first kernel, the thread group of the second kernel configured to access a region of the on-chip memory that contains data written by the thread group of the first kernel in response to a determination that the second kernel is dependent upon the first kernel.
Type: Grant
Filed: March 15, 2019
Date of Patent: September 14, 2021
Assignee: Intel Corporation
Inventors: Valentin Andrei, Aravindh Anantaraman, Abhishek R. Appu, Nicolas C. Galoppo von Borries, Altug Koker, SungYe Kim, Elmoustapha Ould-Ahmed-Vall, Mike Macpherson, Subramaniam Maiyuran, Vasanth Ranganathan, Joydeep Ray, Varghese George
-
Patent number: 11115423
Abstract: Techniques described herein provide multi-factor authentication based on positioning data. Generally described, configurations disclosed herein enable a system to authorize a particular action using positioning data, and possibly other data, associated with an identity. For example, when a user wishes to change a password or access a secured account, the system can authenticate a user if a device associated with the user is located in the secure area. The system can authenticate a user if a requested operation and/or a predetermined pattern of movement associated with the user is detected. For instance, the system allows the user to change the password when the user's computer has followed a predetermined pattern of movement, and when one or more verification procedures meets one or more criteria while the location of the computing device is within the predetermined area.
Type: Grant
Filed: July 10, 2019
Date of Patent: September 7, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Stephen P. DiAcetis, David Mahlon Hoover, Cristina del Amo Casado, Lanny D. Natucci, Jr., Janet Lynn Schneider, Sanjib Saha, Fernando Gonzalez, Jonathan Matthew Kay
-
Patent number: 11099753
Abstract: A method for processing I/O requests that are received at a distributed storage system including a plurality of receiver nodes, a plurality of first nodes, and a plurality of second nodes, the method comprising: receiving, at a receiver node, an I/O request and executing the I/O request by using at least one of the first nodes and at least one of the second nodes; receiving, by the receiver node, one or more latency metrics from each of the first nodes and second nodes that are used to execute the I/O request; and reconfiguring the storage system, by the receiver node, based on any of the received latency metrics.
Type: Grant
Filed: July 27, 2018
Date of Patent: August 24, 2021
Assignee: EMC IP Holding Company LLC
Inventor: Vladimir Shveidel
-
Patent number: 11093522
Abstract: A database replication method and apparatus for a distributed system are provided and relate to the database field. The method includes: receiving, by a coordination server, a timestamp of a multi-partition transaction newly added to a first partition of a secondary cluster; determining, by the coordination server, a target timestamp for the first partition based on the received timestamp of the newly added multi-partition transaction and a stored multi-partition transaction timestamp of each partition of the secondary cluster; and sending, by the coordination server, the target timestamp to the first partition, so that the first partition executes a replication log in the first partition based on the target timestamp. In this way, the corresponding partition can execute, without waiting, a multi-partition transaction that is present in all the partitions but has not been executed, thereby avoiding data inconsistency and increasing replication efficiency.
Type: Grant
Filed: October 19, 2018
Date of Patent: August 17, 2021
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Guoping Wang, Junhua Zhu
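One plausible reading of the target-timestamp rule, sketched minimally: a partition may safely replay a multi-partition transaction only once every partition has logged it, so the coordination server can take the minimum of the latest multi-partition timestamps reported per partition. This interpretation is an assumption for illustration; the patent does not spell out the exact formula in this abstract:

```python
def target_timestamp(partition_latest):
    """partition_latest: partition id -> latest multi-partition transaction
    timestamp stored for that partition. Any transaction at or below the
    returned value is present in every partition's log and safe to replay."""
    return min(partition_latest.values())
```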
-
Patent number: 11086520
Abstract: Provided are a method, system, computer readable storage medium, and switch for configuring a switch to assign partitions in storage devices to compute nodes. A management controller configures the switch to dynamically allocate partitions of at least one of the storage devices to the compute nodes based on a workload at the compute node.
Type: Grant
Filed: July 12, 2019
Date of Patent: August 10, 2021
Assignee: Intel Corporation
Inventors: Mark A. Schmisseur, Mohan J. Kumar, Balint Fleischer, Debendra Das Sharma, Raj K. Ramanujan
-
Patent number: 11082523
Abstract: A virtual memory management method, system, and computer program product include: at a first machine, receiving a request to access memory associated with a virtual address; at the first machine, initiating a translation of the virtual address to a logical address; during the translation of the virtual address to the logical address, determining that a machine identifier corresponds to a second machine; communicating the request to access the memory to the second machine; and, at the second machine, fulfilling the memory access request.
Type: Grant
Filed: February 9, 2017
Date of Patent: August 3, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: Kirk J. Krauss
-
Patent number: 11056206
Abstract: A non-volatile storage apparatus includes a set of non-volatile memory cells and one or more control circuits in communication with the set of non-volatile memory cells. The one or more control circuits are configured to group physical addresses of the set of non-volatile memory cells into groups of configurable sizes and to individually apply wear leveling schemes to non-volatile memory cells of a group.
Type: Grant
Filed: December 21, 2017
Date of Patent: July 6, 2021
Assignee: Western Digital Technologies, Inc.
Inventors: Amir Gholamipour, Chandan Mishra
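A minimal sketch of the grouping step the abstract describes: physical addresses are partitioned into consecutive groups of configurable sizes, and a wear-leveling scheme is then applied per group. The consecutive-range layout and the function name are illustrative assumptions:

```python
def group_of(phys_addr, group_sizes):
    """Map a physical address to the index of its configurable-size group.
    group_sizes: e.g. [4, 8] means group 0 holds addresses 0-3, group 1 holds 4-11."""
    base = 0
    for idx, size in enumerate(group_sizes):
        if phys_addr < base + size:
            return idx
        base += size
    raise ValueError("address beyond configured groups")
```

A controller could then run an independent wear-leveling rotation within each group, keyed by the returned index.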
-
Patent number: 11029848
Abstract: A file management method, a distributed storage system, and a management node are disclosed. In the distributed storage system, after receiving a file creation request sent by a host for requesting to create a file in a distributed storage system, a management node allocates, to the file, first virtual space from global virtual address space of the distributed storage system, where local virtual address space of each storage node in the distributed storage system is corresponding to a part of the global virtual address space. Then, the management node records metadata of the file, where the metadata of the file includes information about the first virtual space, and the information about the first virtual space is used to point to local virtual address space of a storage node that is used to store the file. Further, the management node sends the information about the first virtual space to the host.
Type: Grant
Filed: November 1, 2018
Date of Patent: June 8, 2021
Assignee: Huawei Technologies Co., Ltd.
Inventors: Jun Xu, Junfeng Zhao, Yuangang Wang
-
Patent number: 11003591
Abstract: An arithmetic processor, having: an arithmetic logical operation unit configured to execute an instruction; and a cache unit including a cache memory configured to store a part of data in a first main memory and a part of data in a second main memory which has a wider band than the first main memory when at least a predetermined capacity of data having consecutive addresses is accessed, and a cache control unit configured to read data in the cache memory responding to a memory request issued by the arithmetic logical operation unit and respond to the memory request source, wherein a ratio of capacity of the data in the second main memory with respect to the data in the first main memory stored in the cache memory is limited to a predetermined ratio or less.
Type: Grant
Filed: July 19, 2019
Date of Patent: May 11, 2021
Assignee: FUJITSU LIMITED
Inventor: Naoto Fukumoto
-
Patent number: 11003578
Abstract: Parallel mark processing is disclosed, including traversing first objects in a virtual machine heap based on correspondences between memory blocks in the virtual machine heap and N marking threads, pushing a first pointer of a first object into a private stack of a marking thread corresponding to a memory block, the first object being located in the memory block, performing first mark processing of the first object based on a push-in condition of the first pointer, and after traversal of the first objects has been completed, launching the N marking threads to cause the N marking threads to synchronously perform mark processing used in garbage collection based on push-in conditions of first pointers in respective private stacks of the first pointers.
Type: Grant
Filed: September 27, 2018
Date of Patent: May 11, 2021
Assignee: BANMA ZHIXING NETWORK (HONGKONG) CO., LIMITED
Inventors: Zhefeng Wu, Jianghua Yang
-
Patent number: 10969981
Abstract: An information processing device includes a memory, and a processor configured to perform a first process configured to generate control data used in communication and storing the generated control data in a locked state in the memory while performing start processing of the first process, release the locked state of the control data in response to completion of the start processing or suspension of the start processing, and communicate with a communication device in response to a communication request, and perform a second process configured to determine, based on the control data, whether connection with the first process is established, when it is determined that the connection with the first process is not established, select processing for connecting with the first process in accordance with whether the control data in the memory is locked, and transmit the communication request to the first process while connecting with the first process.
Type: Grant
Filed: October 16, 2019
Date of Patent: April 6, 2021
Assignee: FUJITSU LIMITED
Inventor: Yuki Ikeda
-
Patent number: 10956287
Abstract: Provided are techniques for implementing shared Ethernet adapter (SEA) failover, including receiving a first ARP packet at a first SEA coupled to a first switch; parsing, by the first SEA, a first MAC address and VLAN ID (VID) corresponding to the first ARP packet; transmitting the first MAC address and VID to a second SEA coupled to a second switch; detecting the first SEA has transitioned from a primary configuration to an inactive configuration and the second SEA has transitioned from a backup configuration to the primary configuration; and responsive to the detecting, transmitting a reverse ARP packet to the second switch notifying the second switch that the first SEA has transitioned to an inactive configuration and that the second SEA has transitioned to an active configuration; and configuring the first switch to forward any subsequent packets to the second switch rather than the first SEA.
Type: Grant
Filed: September 18, 2018
Date of Patent: March 23, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: Juliet M. Kim
-
Patent number: 10949297
Abstract: Devices and techniques for NAND device mixed parity management are described herein. A first portion of data that corresponds to a first data segment and a second portion of data that corresponds to a second data segment, respectively defined with respect to a structure of a NAND device, are received. A parity value using the first portion of data and the second portion of data is computed and then stored for error correction operations.
Type: Grant
Filed: December 5, 2018
Date of Patent: March 16, 2021
Assignee: Micron Technology, Inc.
Inventor: Giuseppe Cariello
-
Patent number: 10929342
Abstract: Techniques for limiting storage consumed by a file system without shrinking a volume upon which the file system is deployed. The techniques are employed in a clustered environment including multiple NAS nodes, each having access to block storage including multiple storage devices. By deploying the file system on a volume of a NAS node within the clustered environment, setting the value of the FS user size to be equal to the FS volume size, and, if at a later time it is desired to reduce the file system size, setting the value of the FS user size to a lesser value than the FS volume size, IO requests received at the NAS node can be satisfied within the logical limit of the lesser value of the FS user size without shrinking the local volume, allowing the file system size to be reduced without requiring close coordination with the block storage.
Type: Grant
Filed: July 30, 2018
Date of Patent: February 23, 2021
Assignee: EMC IP Holding Company LLC
Inventors: Ahsan Rashid, Walter C. Forrester, Marc De Souter, Morgan A. Clark
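A minimal sketch of the admission check implied by the abstract: the (possibly reduced) FS user size is enforced as a purely logical limit on new allocations, while the underlying volume keeps its original size. The function name and parameters are illustrative:

```python
def admit_write(current_usage, write_size, fs_user_size):
    """Return True if a new allocation fits within the logical FS user size.
    The FS volume size never changes; only this logical limit is checked."""
    return current_usage + write_size <= fs_user_size
```

Shrinking the file system then amounts to lowering `fs_user_size`, with no volume resize and no coordination with the block layer.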
-
Patent number: 10922013
Abstract: The disclosure relates in some aspects to suspending a read for a non-volatile memory (NVM) device. For example, a lower priority read may be suspended to enable a higher priority read to occur. Once the higher priority read completes, the lower priority read is resumed. To improve the efficiency of the read suspension, the lower priority read may be suspended once data sensing at a current level of the NVM device completes. The data for each level that has already been sensed is then stored so that this data does not need to be sensed again. Once the lower priority read is resumed, the data sensing starts at the next level of the NVM device. The data output for the lower priority read thus includes the stored data for any levels read before the read is suspended, along with the data from the levels read after the read is resumed.
Type: Grant
Filed: April 9, 2018
Date of Patent: February 16, 2021
Assignee: Western Digital Technologies, Inc.
Inventors: Revanasiddaiah Prabhuswamy Mathada, Saugata Das Purkayastha, Anantharaj Thalaimalaivanaraj, Nisha Padattil Kuliyampattil
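The suspend/resume behavior described above can be modeled as a small state machine. This is a hedged sketch of the idea only; the class and method names are invented and the real mechanism operates on NAND sense amplifiers, not Python lists.

```python
# Hypothetical model of a suspendable multi-level read: sensing
# proceeds level by level; on suspend, already-sensed levels are
# kept so a resumed read starts at the next unsensed level.

class SuspendableRead:
    def __init__(self, levels):
        self.levels = levels   # data to be sensed, one item per level
        self.sensed = []       # cached results, never re-sensed
        self.next_level = 0

    def sense_one(self) -> None:
        """Sense a single level and cache its result."""
        self.sensed.append(self.levels[self.next_level])
        self.next_level += 1

    def run(self, should_suspend=lambda: False):
        """Sense until complete, or yield when should_suspend() fires."""
        while self.next_level < len(self.levels):
            self.sense_one()
            if should_suspend():
                return None    # suspended; sensed data is preserved
        return self.sensed     # output = pre-suspend + post-resume data
```

A suspended `run` returns `None` but keeps its state, so a later call continues at `next_level` rather than re-sensing from level zero, which is the efficiency gain the abstract claims.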
-
Patent number: 10866895
Abstract: A method of managing memory access includes receiving, at an input output memory management unit, a memory access request from a device. The memory access request includes a virtual steering tag associated with a virtual machine. The method further includes translating the virtual steering tag to a physical steering tag directing memory access of a cache memory associated with a processor core of a plurality of processor cores. The virtual machine is implemented on the processor core. The method also includes accessing the cache memory to implement the memory access request.
Type: Grant
Filed: December 18, 2018
Date of Patent: December 15, 2020
Assignees: ADVANCED MICRO DEVICES, INC., ATI TECHNOLOGIES ULC
Inventors: Philip Ng, Nippon Harshadk Raval, Francisco L. Duran
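The translation step above is essentially a per-VM lookup table. The sketch below is an invented simplification (the IDs, class, and method names are not from the patent); in hardware this would be an IOMMU table walk, not a dictionary.

```python
# Minimal sketch of steering-tag translation: a per-VM table maps a
# virtual steering tag to the physical tag of the cache belonging
# to the core on which that VM runs.

class SteeringTagTranslator:
    def __init__(self):
        self._table = {}  # (vm_id, virtual_tag) -> physical_tag

    def map_tag(self, vm_id: int, virtual_tag: int, physical_tag: int):
        self._table[(vm_id, virtual_tag)] = physical_tag

    def translate(self, vm_id: int, virtual_tag: int) -> int:
        try:
            return self._table[(vm_id, virtual_tag)]
        except KeyError:
            raise LookupError(f"no mapping for VM {vm_id}, tag {virtual_tag}")
```

Keying the table by VM lets the same virtual tag value mean different physical caches for different virtual machines, which is what makes the tag "virtual" in the first place.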
-
Patent number: 10866740
Abstract: Systems and methods for managing performance and quality of service (QoS) with multiple namespace resource allocation. NVM Express (NVMe) implements a paired submission queue and completion queue mechanism, with host software on the host device placing commands into the submission queue. The memory device processes the commands through various phases including fetching, processing, posting a completion message, and sending an interrupt to the host. NVMe may support the use of namespaces. Namespace configuration may be modified to include performance criteria specific to each namespace. The memory device may then receive commands directed to specific namespaces and apply memory device resources to commands in each namespace queue, so that QoS controls command execution and commands in each namespace receive resources based on host-selected performance parameters for that namespace.
Type: Grant
Filed: October 1, 2018
Date of Patent: December 15, 2020
Assignee: Western Digital Technologies, Inc.
Inventors: Shay Benisty, Ariel Navon, Alex Bazarsky
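One common way to apply per-namespace performance parameters is weighted scheduling across namespace queues. The sketch below shows that general idea under stated assumptions; the function, weight scheme, and rounding policy are invented for illustration and are not taken from the patent or the NVMe specification.

```python
# Illustrative sketch: per-namespace weights decide how many
# commands each namespace queue may submit in a scheduling round,
# so higher-weighted namespaces get more device resources.

def schedule_round(queues: dict, weights: dict, budget: int) -> list:
    """Pop up to `budget` commands, shared in proportion to weights."""
    total = sum(weights[ns] for ns in queues if queues[ns])
    order = []
    for ns, q in queues.items():
        if not q or total == 0:
            continue
        share = max(1, budget * weights[ns] // total)  # at least one
        take = min(share, len(q))
        order.extend(q[:take])
        del q[:take]
    return order
```

Giving every non-empty queue at least one slot per round is a simple way to keep low-weight namespaces from starving while still biasing throughput toward the higher-weight ones.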
-
Patent number: 10783096
Abstract: A storage system provides a logical volume to a computer, manages the logical volume and a port receiving an I/O request for the logical volume in correspondence with each other, and holds assigned processor management information for managing correspondence between a processor for executing I/O processing based on an I/O request accumulated in a queue and an assigned port being a port corresponding to the queue. The processor identifies an assigned port on the basis of the assigned processor management information, executes I/O processing for the logical volume corresponding to the assigned port, and executes I/O processing on the basis of an I/O request received via the assigned port corresponding to another operation core.
Type: Grant
Filed: August 30, 2018
Date of Patent: September 22, 2020
Assignee: HITACHI, LTD.
Inventors: Takashi Nagao, Tomohiro Yoshihara, Kohei Tatara, Miho Kobayashi
-
Patent number: 10776256
Abstract: A method is provided for sharing a global memory by a plurality of threads in a memory management system. The method includes allocating, by a controller of the system, thread-local memory areas in the global memory to a given thread and other threads, from among the plurality of threads. The method further includes gathering, by the controller, fragments of the thread-local memory areas previously allocated to the other threads, responsive to the fragments being scanned. The method also includes allocating, by the controller to the given thread, a requested memory size of the fragments of the thread-local memory areas previously allocated to the other threads, responsive to the fragments not being collectively smaller than the requested memory size. The method additionally includes allocating, by the controller to the given thread, a new memory area from the global memory, responsive to the fragments being collectively smaller than the requested memory size.
Type: Grant
Filed: May 16, 2018
Date of Patent: September 15, 2020
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Takeshi Yoshimura, Michihiro Horie
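The allocation policy the abstract walks through, prefer gathered fragments when they cover the request, otherwise carve a new area from global memory, can be sketched directly. The data structures here are invented simplifications (sizes only, no addresses) and this is not the patented controller.

```python
# A sketch of the described policy: satisfy a request from gathered
# fragments of other threads' areas when they are collectively large
# enough, otherwise allocate a new area from the global pool.

class GlobalAllocator:
    def __init__(self, global_free: int):
        self.global_free = global_free
        self.fragments = []  # sizes reclaimed from other threads' areas

    def reclaim_fragment(self, size: int) -> None:
        self.fragments.append(size)

    def allocate(self, requested: int) -> str:
        if sum(self.fragments) >= requested:
            acc = 0
            while acc < requested:   # consume fragments until covered
                acc += self.fragments.pop()
            return "from-fragments"
        if self.global_free >= requested:
            self.global_free -= requested
            return "new-area"
        raise MemoryError("global memory exhausted")
```

Reusing fragments first keeps the global pool from shrinking on every request, which matters when many threads allocate and free small areas concurrently.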
-
Patent number: 10747440
Abstract: Provided is a storage system comprising at least one controller and a storage device. The at least one controller verifies, for each predetermined storage area within a logical volume provided to the host computer, whether data of the each predetermined storage area is duplicated to another storage area. The storage device holds unshared data associated only with the storage area and shared data associated with the storage area and the another storage area in the case where the data of the each predetermined storage area is identical to the data of the another storage area. The at least one controller reads the unshared data in the case where a request to read the data is received under a state in which the unshared data and the shared data are held, and releases an area in which the unshared data is stored at predetermined timing.
Type: Grant
Filed: September 24, 2014
Date of Patent: August 18, 2020
Assignee: HITACHI, LTD.
Inventors: Kazuei Hironaka, Akira Yamamoto, Yoshihiro Yoshii, Mitsuo Hayasaka
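The read path described above, prefer the unshared copy while it exists, fall back to the shared copy once it is released, is easy to model. The class below is an invented simplification of that behavior, not the patented storage controller.

```python
# Hypothetical sketch of the described read path: a deduplicated
# block may have both an unshared (private) copy and a shared copy;
# reads prefer the unshared copy until it is released.

class DedupStore:
    def __init__(self):
        self.shared = {}    # block_id -> data shared across areas
        self.unshared = {}  # block_id -> private copy for one area

    def read(self, block_id):
        if block_id in self.unshared:
            return self.unshared[block_id]  # prefer the unshared copy
        return self.shared[block_id]

    def release_unshared(self, block_id) -> None:
        # Reclaim the private copy; later reads use the shared data.
        self.unshared.pop(block_id, None)
```

Keeping a temporary unshared copy lets reads avoid the indirection of the shared structure during the transition, and releasing it "at predetermined timing" reclaims the duplicated space.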
-
Patent number: 10727200
Abstract: A memory device includes a buffer die including a first bump array and a second bump array spaced apart from each other in a first direction parallel to a lower surface of the buffer die; a first memory die stacked on the buffer die through a plurality of first through silicon vias and including banks; and a second memory die stacked on the first memory die by a plurality of second through silicon vias and including banks, wherein the first bump array is provided for a first channel to communicate between the first and second memory dies and a first processor, wherein the second bump array is provided for a second channel to communicate between the first and second memory dies and a second processor, and wherein the first channel and the second channel are independent of each other such that banks allocated to the first channel are accessed only by the first processor not the second processor through the first channel and banks allocated to the second channel are accessed only by the second processor not the
Type: Grant
Filed: August 29, 2018
Date of Patent: July 28, 2020
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Chul-Hwan Choo, Woong-Jae Song
-
Patent number: 10698739
Abstract: Routers and host machines can host desktops for two or more enterprises. A virtual local area network is established for each enterprise. Each virtual local area network is connected to a plurality of host machines for the enterprise, with each host machine supporting desktops for use by the enterprise. The desktops access computer resources on the enterprise network of the enterprise to which they are connected. Resources within a host machine are shared by having a virtual switch for each enterprise the host machine supports. The virtual switch for an enterprise is connected to the virtual local area network of the enterprise. Desktops in the host machine that are allocated to the enterprise are given network addresses that include the tag for that enterprise. Virtual desktops for different enterprises can be hosted on different partitions of the same host machine.
Type: Grant
Filed: September 19, 2016
Date of Patent: June 30, 2020
Assignee: VMware, Inc.
Inventors: Kenneth Ringdahl, Charles Davies, Andre Biryukov
-
Patent number: 10684968
Abstract: A processor implemented method for spreading data traffic across memory controllers with respect to conditions is provided. The processor implemented method includes determining whether the memory controllers are balanced. The processor implemented method includes executing a conditional spreading with respect to the conditions when the memory controllers are determined as unbalanced. The processor implemented method includes executing an equal spreading when the memory controllers are determined as balanced.
Type: Grant
Filed: June 15, 2017
Date of Patent: June 16, 2020
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: David D. Cadigan, Thomas J. Dewkett, Glenn D. Gilda, Patrick J. Meaney, Craig R. Walters
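The balanced/unbalanced branch described above can be sketched as a small allocation function. The balance test, tolerance value, and inverse-load weighting below are invented assumptions to make the idea concrete; the patent does not specify them.

```python
# Sketch under stated assumptions: spread requests equally when
# controller loads are balanced (within a tolerance), and bias
# toward less-loaded controllers otherwise.

def spread(request_count: int, loads: list, tolerance: float = 0.1) -> list:
    """Return per-controller request counts summing to request_count."""
    n = len(loads)
    balanced = max(loads) - min(loads) <= tolerance * max(max(loads), 1)
    if balanced:
        base, extra = divmod(request_count, n)  # equal spreading
        return [base + (1 if i < extra else 0) for i in range(n)]
    # Conditional spreading: weight inversely to current load.
    inv = [1.0 / (load + 1) for load in loads]
    total = sum(inv)
    alloc = [int(request_count * w / total) for w in inv]
    alloc[inv.index(max(inv))] += request_count - sum(alloc)  # fix rounding
    return alloc
```

The rounding remainder is handed to the least-loaded controller so the allocation always sums exactly to the request count.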
-
Patent number: 10671525
Abstract: A computer program product, according to one embodiment, includes a computer readable storage medium having program instructions embodied therewith. The computer readable storage medium is not a transitory signal per se. The program instructions are readable and/or executable by a processor to cause the processor to perform a method which includes: receiving a request to delete a volume stored in one or more regions in physical space of a storage system; determining whether at least one of the regions having at least a portion of the volume includes reclaimable space; deleting the portion of the volume from the at least one region having the reclaimable space in response to determining that at least one of the regions having at least a portion of the volume includes reclaimable space; and failing the received request to delete the volume in response to determining that none of the regions include reclaimable space.
Type: Grant
Filed: June 21, 2017
Date of Patent: June 2, 2020
Assignee: International Business Machines Corporation
Inventors: Jonathan Fischer-Toubol, Asaf Porat-Stoler, Yosef Shatsky
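The delete logic above reduces to: act on regions with reclaimable space, fail if there are none. The region model in this sketch (a set of volume IDs plus a flag) is invented for illustration.

```python
# Minimal sketch of the described behavior: delete only the portions
# of a volume living in regions with reclaimable space, and fail the
# request when no region holding the volume qualifies.

def delete_volume(regions: list, volume_id: str) -> bool:
    """regions: dicts with a 'volumes' set and a 'reclaimable' flag."""
    targets = [r for r in regions
               if volume_id in r["volumes"] and r["reclaimable"]]
    if not targets:
        return False  # fail the request: nothing is reclaimable
    for r in targets:
        r["volumes"].discard(volume_id)
    return True
```

Note that a partial delete is possible: portions in non-reclaimable regions are left in place, matching the abstract's per-region phrasing.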
-
Patent number: 10665548
Abstract: An integrated circuit device or devices are presented that include internal connection ports to transmit data to or receive data from a first portion of the integrated circuit device. The integrated circuit device(s) also include external connection ports to transmit data to or receive data from outside the integrated circuit device, such as between integrated circuit devices. The integrated circuit device also includes remapping circuitry that remaps from a first connection between a first internal connection port of the internal connection ports and a first external connection port of the external connection ports to a second connection between a second internal connection port of the internal connection ports and a second external connection port of the external connection ports.
Type: Grant
Filed: September 17, 2018
Date of Patent: May 26, 2020
Assignee: Intel Corporation
Inventor: Chee Hak Teh
-
Patent number: 10642748
Abstract: This disclosure provides techniques for hierarchical address virtualization within a memory controller and configurable block device allocation. By performing address translation only at select hierarchical levels, a memory controller can be designed to have predictable I/O latency, with brief or otherwise negligible logical-to-physical address translation time. In one embodiment, address translation may be implemented entirely with logical gates and look-up tables of a memory controller integrated circuit, without requiring processor cycles. The disclosed virtualization scheme also provides for flexibility in customizing the configuration of virtual storage devices, to present nearly any desired configuration to a host or client.
Type: Grant
Filed: August 29, 2017
Date of Patent: May 5, 2020
Assignee: Radian Memory Systems, Inc.
Inventors: Robert Lercari, Alan Chen, Mike Jadon, Craig Robertson, Andrey V. Kuzmin
-
Patent number: 10642640
Abstract: Concepts and technologies disclosed herein are directed to data-driven feedback control systems for an acceptable level of real-time application transaction completion rate in virtualized networks, while maximizing virtualized server utilization. According to one aspect disclosed herein, a network virtualization platform ("NVP") includes a plurality of hardware resources, a virtual machine ("VM"), and a virtual machine monitor ("VMM"). The VMM can track an execution state of each of a plurality of applications associated with the VM. The VMM can measure a real-time application transaction completion rate of the VM. The VMM can determine whether a trigger condition exists for priority scheduling of real-time applications based upon the real-time application transaction completion rate and a pre-set threshold value.
Type: Grant
Filed: February 19, 2018
Date of Patent: May 5, 2020
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Tsong-Ho Wu, Wen-Jui Li
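The trigger condition above is a simple threshold comparison on a measured rate. The function below is an invented illustration of that check; the threshold value and the completed/submitted measurement model are assumptions, not details from the patent.

```python
# Illustrative sketch: flag a trigger condition for priority
# scheduling when a VM's measured real-time transaction completion
# rate drops below a pre-set threshold.

def needs_priority_scheduling(completed: int, submitted: int,
                              threshold: float = 0.95) -> bool:
    """True when the completion rate falls below the threshold."""
    if submitted == 0:
        return False  # nothing measured yet; no trigger
    return (completed / submitted) < threshold
```

In a feedback loop, the VMM would evaluate this each measurement interval and switch the scheduler into priority mode only while the condition holds, releasing resources back once the rate recovers.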
-
Patent number: 10635494
Abstract: An apparatus includes processing cores, memory blocks, a connection between each processing core and memory block, a chip selection circuit, and chip selection circuit busses between the chip selection circuit and each of the memory blocks. Each memory block includes a data port and a memory check port. The chip selection circuit is configured to enable writing data from a highest priority core through the respective data ports of the memory blocks. The chip selection circuit is further configured to enable writing data from other cores through the respective memory check ports of the memory blocks.
Type: Grant
Filed: May 8, 2018
Date of Patent: April 28, 2020
Assignee: MICROCHIP TECHNOLOGY INCORPORATED
Inventors: Michael Simmons, Anjana Priya Sistla, Priyank Gupta
-
Patent number: 10585702
Abstract: In some embodiments, the invention involves partitioning resources of a manycore platform for simultaneous use by multiple clients, or adding/reducing capacity for a single client. Cores and resources are activated and assigned to a client environment by reprogramming the cores' route tables and source address decoders. Memory and I/O devices are partitioned and securely assigned to a core and/or a client environment. Instructions regarding allocation or reallocation of resources are received by an out-of-band processor having privileges to reprogram the chipsets and cores. Other embodiments are described and claimed.
Type: Grant
Filed: February 3, 2014
Date of Patent: March 10, 2020
Assignee: Intel Corporation
Inventors: Vincent J. Zimmer, Michael A. Rothman, Mark Doran