Shared Memory Area Patents (Class 711/147)
  • Patent number: 10776229
    Abstract: Database processing engines of a single cluster are configured such that each engine is a primary engine and a dedicated fallback engine to one other engine of the cluster. In an embodiment, the cluster includes more than two processing engines.
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: September 15, 2020
    Assignee: Teradata US, Inc.
    Inventors: Tirunagari Venkata Rama Krishna, Srinath Boppudi, Frederick Stuart Kaufmann, Donald Raymond Pederson
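The pairing described in this abstract can be modeled in a few lines of Python. This is a minimal sketch, not Teradata's implementation; it assumes a ring assignment, which is one way to make every engine both a primary and the dedicated fallback for exactly one other engine in a cluster of more than two engines.

```python
def assign_fallbacks(engine_ids):
    """Make each engine the dedicated fallback for exactly one other engine.

    A ring assignment is used here as an illustrative policy; the abstract
    only requires a one-to-one primary/fallback pairing across the cluster.
    """
    n = len(engine_ids)
    if n < 2:
        raise ValueError("a cluster needs at least two engines")
    return {engine_ids[i]: engine_ids[(i + 1) % n] for i in range(n)}


# A four-engine cluster: every engine is primary and backs up one peer.
print(assign_fallbacks(["engine0", "engine1", "engine2", "engine3"]))
# {'engine0': 'engine1', 'engine1': 'engine2', 'engine2': 'engine3', 'engine3': 'engine0'}
```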
  • Patent number: 10776292
    Abstract: An integrated circuit has a master processing core with a central processing unit coupled with a non-volatile memory and a slave processing core operating independently from the master processing core and having a central processing unit coupled with volatile program memory, wherein the master central processing unit is configured to transfer program instructions into the volatile program memory of the slave processing core and wherein a transfer of the program instructions is performed by executing a dedicated instruction within the central processing unit of the master processing core.
    Type: Grant
    Filed: January 17, 2019
    Date of Patent: September 15, 2020
    Assignee: MICROCHIP TECHNOLOGY INCORPORATED
    Inventors: Michael Catherwood, David Mickey, Bryan Kris, Calum Wilkie, Jason Sachs, Andreas Reiter
  • Patent number: 10769020
    Abstract: Techniques for sharing private space among storage system components. The techniques include determining an amount of private space for each of a rebuild component, an FSCK component, and a deduplication component, reserving private space equal to the sum of (i) the amount determined for the rebuild component and (ii) the maximum of the amounts determined for the FSCK and deduplication components, and allocating the remaining amount of storage space as user space. If a storage device fails, then the rebuild component rebuilds the failed drive data on a hot spare drive in the private space reserved for the rebuild component. If data files become corrupted, then the FSCK component performs offline recovery operations using the private space for the hot spare drive. If such private space for the hot spare drive is unavailable, then the FSCK component performs offline recovery operations using the private space reserved for the deduplication component.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: September 8, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Vamsi K. Vankamamidi, Philippe Armangau, Steven A. Morley, Shuyu Lee, Daniel E. Cummins
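The reservation arithmetic in this abstract reduces to a small calculation: reserve the rebuild amount plus the larger of the FSCK and deduplication amounts, and leave the rest as user space. A minimal sketch follows; the pool size and units are hypothetical.

```python
def plan_private_space(total, rebuild, fsck, dedup):
    """Split a storage pool per the scheme in the abstract: private space is
    the rebuild amount plus max(FSCK, dedup); the remainder is user space."""
    reserved = rebuild + max(fsck, dedup)
    if reserved > total:
        raise ValueError("reservation exceeds pool capacity")
    return {"reserved_private": reserved, "user_space": total - reserved}


# Hypothetical sizes in GiB.
print(plan_private_space(total=1000, rebuild=100, fsck=60, dedup=40))
# {'reserved_private': 160, 'user_space': 840}
```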
  • Patent number: 10761872
    Abstract: A method for zeroing guest memory of a VM during boot-up includes the guest OS attempting to set a page to zero. A page fault is generated and is handled by the hypervisor. The page is mapped by the hypervisor to a page in host memory and is given to the guest. The guest OS then attempts to set the next page to zero. Another page fault is generated; the hypervisor unmaps the host memory page, maps the second guest page to the same host page, which contains all zeros, and gives it to the guest. The process is repeated for the remaining pages of the guest memory.
    Type: Grant
    Filed: December 13, 2017
    Date of Patent: September 1, 2020
    Assignee: Virtuozzo International GmbH
    Inventors: Denis Lunev, Alexey Kobets
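A toy model of the zeroing loop described above: each page fault during the guest's zeroing pass is satisfied by remapping the same already-zeroed host page, so only one host page is ever populated. This is a sketch of the idea only; real page-table and fault-handling mechanics are not modeled.

```python
class ZeroPageHypervisor:
    """Model the boot-time zeroing scheme: one zero-filled host page is
    remapped, fault by fault, to each guest page being zeroed."""

    PAGE_SIZE = 4096

    def __init__(self):
        self.zero_host_page = bytes(self.PAGE_SIZE)  # single all-zero host page
        self.guest_to_host = {}                      # guest page no. -> host page

    def handle_zero_fault(self, guest_page):
        # Unmap the zero page from whichever guest page held it previously,
        # then map it to the faulting guest page and hand it to the guest.
        self.guest_to_host = {gp: hp for gp, hp in self.guest_to_host.items()
                              if hp is not self.zero_host_page}
        self.guest_to_host[guest_page] = self.zero_host_page
        return self.zero_host_page


hv = ZeroPageHypervisor()
for page in range(4):                  # guest OS zeroes its memory page by page
    assert hv.handle_zero_fault(page) == bytes(4096)
print(len(hv.guest_to_host))           # 1 -- only the last guest page stays mapped
```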
  • Patent number: 10761957
    Abstract: A method and system for collecting statistics associated with multiple memory nodes to determine if a read-only page is read accessed in aggregate by multiple processing devices of the multiple memory nodes at or above a first threshold value. If so, the read-only page may be replicated to an additional memory node. If a determination is made that the read-only page is read accessed in aggregate by the multiple processing devices below the first threshold value, the read-only page may be de-replicated upon receipt of a write request associated with the read-only page.
    Type: Grant
    Filed: August 31, 2017
    Date of Patent: September 1, 2020
    Assignee: Red Hat Israel, Ltd.
    Inventor: Avi Kivity
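The thresholding logic in this abstract is a small decision function: replicate the read-only page when aggregate read accesses across the nodes reach the threshold, and de-replicate it on a write request when they fall below. A minimal sketch with hypothetical per-node counts:

```python
def replication_decision(read_counts, threshold, write_requested=False):
    """Decide replication of a read-only page from per-node read statistics,
    following the first-threshold logic in the abstract."""
    aggregate = sum(read_counts.values())
    if aggregate >= threshold:
        return "replicate to an additional memory node"
    if write_requested:
        return "de-replicate"
    return "no change"


print(replication_decision({"node0": 40, "node1": 70}, threshold=100))
# replicate to an additional memory node
print(replication_decision({"node0": 10, "node1": 5}, threshold=100,
                           write_requested=True))
# de-replicate
```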
  • Patent number: 10747673
    Abstract: Embodiments described herein provide a system for facilitating cluster-level cache and memory in a cluster. During operation, the system presents a cluster cache and a cluster memory to a first application running on a first compute node in the cluster. The system maintains a first mapping between a first virtual address of the cluster cache and a first physical address of a first persistent storage of the first compute node. The system maintains a second mapping between a second virtual address of the cluster memory and a second physical address of a second persistent storage of a first storage node of the cluster. Upon receiving a first memory allocation request for cache memory from the first application, the system allocates a first memory location corresponding to the first physical address. The first application can be configured to access the first memory location based on the first virtual address.
    Type: Grant
    Filed: November 5, 2018
    Date of Patent: August 18, 2020
    Assignee: Alibaba Group Holding Limited
    Inventor: Shu Li
  • Patent number: 10740813
    Abstract: In one aspect, this application describes a method for determining a version of a software application targeted for a computing device. The method includes receiving, at an application marketplace system and from a user associated with a computing device that operates remotely from the application marketplace system, a request that corresponds to a software application distributed by the application marketplace system, the software application having multiple versions on the application marketplace system. The method also includes determining one or more device attributes that are associated with the computing device, and identifying a particular version of the software application, from among the multiple versions on the application marketplace system, that is targeted for the computing device based on the device attributes. The method also includes providing, for display to the user and in response to the request, information related to the particular version of the software application.
    Type: Grant
    Filed: May 2, 2019
    Date of Patent: August 11, 2020
    Assignee: Google LLC
    Inventors: Ilya Firman, Jasper S. Lin, Mark D. Womack, Yu-Kuan Lin, Sheng-chi Hsieh, Juliana Tsang
  • Patent number: 10732905
    Abstract: A method of selecting among a plurality of I/O streams through which data is to be written to a multi-streaming flash storage device is presented. According to an example embodiment, the method comprises: assigning write sequences of similar length to the same I/O streams; receiving instructions for a write operation, the instructions including a starting logical block address (LBA) and a number of blocks of data to be written; determining whether the write operation is part of an existing write sequence; identifying an I/O stream associated with an existing write sequence; and providing a stream ID of the identified I/O stream to the multi-streaming flash storage device.
    Type: Grant
    Filed: April 4, 2019
    Date of Patent: August 4, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Sina Hassani, Anahita Shayesteh, Vijay Balakrishnan
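A simplified model of the stream-selection step described above: a write whose starting LBA continues a known sequence reuses that sequence's stream ID; otherwise a new sequence is opened. The round-robin choice of stream for new sequences is an assumption made for illustration, and the abstract's grouping of sequences by similar length is not modeled here.

```python
class StreamSelector:
    """Pick an I/O stream for each write based on write-sequence continuity."""

    def __init__(self, num_streams):
        self.num_streams = num_streams
        self.open_sequences = {}   # expected next LBA -> stream ID
        self.next_stream = 0

    def select(self, start_lba, num_blocks):
        if start_lba in self.open_sequences:
            # The write continues an existing sequence: reuse its stream.
            stream = self.open_sequences.pop(start_lba)
        else:
            # New sequence: assign the next stream (illustrative policy only).
            stream = self.next_stream
            self.next_stream = (self.next_stream + 1) % self.num_streams
        self.open_sequences[start_lba + num_blocks] = stream
        return stream


s = StreamSelector(num_streams=4)
print(s.select(0, 8))        # 0 -- new sequence
print(s.select(8, 8))        # 0 -- continues the sequence starting at LBA 0
print(s.select(1000, 16))    # 1 -- unrelated write, new sequence and stream
```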
  • Patent number: 10728171
    Abstract: Disclosed herein are a system, non-transitory computer readable medium, and method for governing communications of a bare metal guest in a cloud network. A network interface handles packets of data in accordance with commands by a control agent.
    Type: Grant
    Filed: April 30, 2013
    Date of Patent: July 28, 2020
    Assignee: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
    Inventors: Jeffrey Clifford Mogul, Jose Renato G. Santos, Yoshio Turner, Kevin T. Lim
  • Patent number: 10725709
    Abstract: Systems and methods for offloading processing from a host to one or more storage processing units using an interconnect network are provided. One such method includes receiving a processing task from the host at a first storage processing unit (SPU) of a plurality of SPUs via a host interface, performing, at the first SPU, the processing task, and transferring data from the first SPU to a second SPU via an interconnection network, where each of the plurality of SPUs includes a non-volatile memory (NVM) and a processing circuitry configured to perform the processing task.
    Type: Grant
    Filed: October 1, 2018
    Date of Patent: July 28, 2020
    Assignee: WESTERN DIGITAL TECHNOLOGIES, INC.
    Inventors: Arup De, Kiran Kumar Gunnam
  • Patent number: 10691590
    Abstract: Affinity domain based garbage collection is facilitated for a non-uniform memory access (NUMA) computing environment. The affinity domain to which a memory region is allocated is determined. Based on a processor of the NUMA computing environment having an affinity domain matching that of the memory region, the processor performs garbage collection processing on the memory region. The processor is one processor of a plurality of processors of the NUMA computing environment. In a global garbage collection work queue embodiment, the processor initially determines that it has an affinity domain matching that of the memory region to be processed. In a multiple garbage collection work queue implementation, memory regions are enqueued on a designated work queue for the affinity domain to which the memory region is allocated. The locality domain-based garbage collection processing presented may be implemented at one or more system architectural levels.
    Type: Grant
    Filed: November 9, 2017
    Date of Patent: June 23, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael K. Gschwind, Zhong L. Wang
  • Patent number: 10649827
    Abstract: Digital objects are stored and accessed within a fixed content storage cluster by using a page mapping table and a pages index. A stream is read from the cluster by using a portion of its unique identifier as a key into the page mapping table. The page mapping table indicates a node holding a pages index indicating where the stream is stored. A stream is written by storing the stream on any suitable node and then updating a pages index stored within the cluster responsible for knowing the location of digital objects having unique identifiers that fall within a particular address range. The cluster recovers from a node failure by first replicating streams from the failed node and reallocating a page mapping table to create a new pages index. The remaining nodes send records of the unique identifiers corresponding to objects they hold to the new pages index.
    Type: Grant
    Filed: March 23, 2018
    Date of Patent: May 12, 2020
    Assignee: CARINGO INC.
    Inventors: Paul R. M. Carpentier, Russell Turpin
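The read path in this abstract is a two-step lookup: a portion of the object's unique identifier keys into the page mapping table, which names the node holding the relevant pages index; that index then gives the location of the stream. A sketch, assuming a 4-hex-character prefix as the "portion" of the identifier:

```python
def locate_stream(object_id, page_mapping_table, pages_indexes):
    """Resolve where a stream is stored via page mapping table + pages index."""
    page_key = object_id[:4]                      # portion of the unique identifier
    index_node = page_mapping_table[page_key]     # node holding the pages index
    return pages_indexes[index_node][object_id]   # node(s) storing the stream


pmt = {"a1b2": "node-3"}
indexes = {"node-3": {"a1b2c3d4e5f6": ["node-7", "node-9"]}}
print(locate_stream("a1b2c3d4e5f6", pmt, indexes))   # ['node-7', 'node-9']
```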
  • Patent number: 10642501
    Abstract: Aspects relate to Input/Output (IO) Memory Management Units (MMUs) that include hardware structures for implementing virtualization. Some implementations allow guests to setup and maintain device IO tables within memory regions to which those guests have been given permissions by a hypervisor. Some implementations provide hardware page table walking capability within the IOMMU, while other implementations provide static tables. Such static tables may be maintained by a hypervisor on behalf of guests. Some implementations reduce a frequency of interrupts or invocation of hypervisor by allowing transactions to be setup by guests without hypervisor involvement within their assigned device IO regions. Devices may communicate with IOMMU to setup the requested memory transaction, and completion thereof may be signaled to the guest without hypervisor involvement. Various other aspects will be evident from the disclosure.
    Type: Grant
    Filed: January 5, 2015
    Date of Patent: May 5, 2020
    Assignee: MIPS Tech, LLC
    Inventors: Sanjay Patel, Ranjit J Rozario
  • Patent number: 10635589
    Abstract: A method for writing data, the method may include: receiving or generating, by an interfacing module, a data unit coherent write request for performing a coherent write operation of a data unit to a first address; receiving, by the interfacing module and from a circuit that comprises a cache and a cache controller, a cache coherency indicator that indicates that a most updated version of the content stored at the first address is stored in the cache; and instructing, by the interfacing module, the cache controller to invalidate a cache line of the cache that stored the most updated version of the first address without sending the most updated version of the content stored at the first address from the cache to a memory module that differs from the cache if a length of the data unit equals a length of the cache line.
    Type: Grant
    Filed: August 23, 2018
    Date of Patent: April 28, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Adi Habusha, Gil Stoler, Said Bshara, Nafea Bshara
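The decision at the heart of this abstract reduces to one condition: if the cache holds the most recent copy and the incoming coherent write covers an entire cache line, the line can be invalidated without flushing it to memory, since every byte is about to be overwritten. A minimal sketch:

```python
def coherent_write_action(data_len, cache_line_len, cache_has_latest):
    """Choose how to handle a coherent write, per the abstract's condition."""
    if not cache_has_latest:
        return "write data to memory"
    if data_len == cache_line_len:
        return "invalidate cache line without write-back, then write"
    return "flush cache line to memory, then write"


print(coherent_write_action(64, 64, cache_has_latest=True))
# invalidate cache line without write-back, then write
print(coherent_write_action(32, 64, cache_has_latest=True))
# flush cache line to memory, then write
```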
  • Patent number: 10635827
    Abstract: Embodiments disclosed herein describe systems and methods for isolating data communicated to and from peripherals coupled to an LPC bus or similar shared bus. In embodiments, the isolated data may be communicated only to a targeted peripheral while other peripherals receive masked data.
    Type: Grant
    Filed: November 10, 2017
    Date of Patent: April 28, 2020
    Inventor: Timothy Raymond Pearson
  • Patent number: 10628354
    Abstract: Systems and techniques are described herein for a translation device that is configured to enable communication between a host device and a memory technology using different communication protocols (e.g., a communication protocol that is not preconfigured in the host device). The translation device may be configured to receive signals from the host device using a first communication protocol and transmit signals to the memory device using a second communication protocol, or vice versa. When converting signals between different communication protocols, the translation device may be configured to convert commands, map memory addresses to new addresses, map between channels having different characteristics, encode data using different modulation schemes, or a combination thereof.
    Type: Grant
    Filed: August 8, 2018
    Date of Patent: April 21, 2020
    Assignee: Micron Technology, Inc.
    Inventors: Brent Keeth, Richard C. Murphy, Elliott C. Cooper-Balis
  • Patent number: 10613992
    Abstract: Systems and methods are provided for performing a remote procedure call. One method may comprise, at a client device, generating a request including setting a status field in a request header to indicate to a server processor that the request is ready, writing the request to a server memory via a RDMA write operation and fetching a response generated by the server processor from the server memory via a RDMA read operation. The method may further comprise, at a server device, checking a mode flag to determine that an operation mode is set to repeated remote fetching, retrieving the request from a server memory, processing the request to generate a response and writing the response to the server memory for the response to be fetched by a client device. The response includes a response header that comprises a status field for the status of the response and a response time.
    Type: Grant
    Filed: March 13, 2018
    Date of Patent: April 7, 2020
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Kang Chen, Yongwei Wu, Weimin Zheng, Maomeng Su, Teng Ma, Mingxing Zhang
  • Patent number: 10616333
    Abstract: A system to manage out-of-order traffic in an interconnect network has initiators that provide requests through the interconnect network to memory resource targets, which provide responses back through the interconnect network. The system includes components upstream of the interconnect network to perform response re-ordering, which include memory to store responses from the interconnect network and a memory map controller that stores the responses in a set of logical circular buffers. Each logical circular buffer corresponds to an initiator. The memory map controller computes an offset address for each buffer and stores the offset address of a given request received on the request path. From that offset address, the controller computes the absolute memory address at which the response corresponding to the given request is written.
    Type: Grant
    Filed: March 16, 2015
    Date of Patent: April 7, 2020
    Assignee: STMICROELECTRONICS S.R.L.
    Inventors: Mirko Dondini, Daniele Mangano
  • Patent number: 10599335
    Abstract: Embodiments of this disclosure provide a hierarchical structure of ordering points. In some embodiments, the hierarchical structure includes a single primary ordering point (POP) and one or more auxiliary ordering points (AOPs) of a processing device. In one implementation, the processing device includes one or more cores and a coherency circuit operatively coupled to the cores. The processing device is to receive a plurality of memory access requests to be ordered by a first ordering point of the processing device. The processing device determines whether to stop the first ordering point based on a system event. Responsive to determining that the first ordering point is stopped, a second ordering point of the processing device is identified. Thereupon, a memory access request of the plurality of memory access requests is provided to the second ordering point.
    Type: Grant
    Filed: June 12, 2018
    Date of Patent: March 24, 2020
    Assignee: Intel Corporation
    Inventors: Erik Hallnor, Matthew Erler
  • Patent number: 10587615
    Abstract: Systems and methods for using micro-accelerations as a biometric factor for multi-factor authentication, the method including receiving and filtering micro-acceleration data representative of the user, determining an identifying pattern from that data, storing the identifying pattern for later use in authenticating the identity of the user, and using the identifying pattern as one factor in multi-factor authentication.
    Type: Grant
    Filed: April 22, 2019
    Date of Patent: March 10, 2020
    Assignee: Capital One Services, LLC
    Inventor: David Wurmfeld
  • Patent number: 10579487
    Abstract: A semiconductor device (1) includes a first processing unit (10-1), a second processing unit (10-2), a writing unit (12), a storage unit (14), and a processing control unit (20). The writing unit (12) writes first information related to processing of each of the first processing unit (10-1) and the second processing unit (10-2) into the storage unit (14). The processing control unit (20) controls the operations of the first processing unit (10-1) and the second processing unit (10-2). The processing control unit (20) performs control to stop the first processing unit (10-1) when an error occurs in the first processing unit (10-1). When it is determined that the second processing unit (10-2) where an error has not occurred is able to maintain execution of the first processing by using first information stored in the storage unit (14), the second processing unit (10-2) maintains execution of the first processing.
    Type: Grant
    Filed: June 2, 2017
    Date of Patent: March 3, 2020
    Assignee: RENESAS ELECTRONICS CORPORATION
    Inventors: Shunsuke Nakano, Yoshitaka Taki
  • Patent number: 10572288
    Abstract: An apparatus and method are described for efficient inter-virtual machine (VM) communication. For example, an apparatus comprises inter-VM communication logic to map a first specified set of device virtual memory addresses of a first VM to a first set of physical memory addresses in a shared system memory and to further map a second specified set of device virtual memory addresses of a second VM to the first set of physical memory addresses in the shared system memory.
    Type: Grant
    Filed: June 26, 2015
    Date of Patent: February 25, 2020
    Assignee: Intel Corporation
    Inventors: Kun Tian, Yao Zu Dong
  • Patent number: 10558510
    Abstract: Testing a data coherency algorithm of a multi-processor environment. The testing includes implementing a global time incremented every processor cycle and used for timestamping; implementing a transactional execution flag representing a processor core guaranteeing the atomicity and coherency of the currently executed instructions; implementing a transactional footprint, which keeps the address of each cache line that was used by the processor core; implementing a reference model, which operates on every cache line and keeps a set of timestamps for every cache line; implementing a core observed timestamp representing a global timestamp, which is the oldest construction date of data used before; implementing interface events; and reporting an error whenever a transaction end event is detected and any cache line is found in the transactional footprint with an expiration date that is older than or equal to the core observed time.
    Type: Grant
    Filed: January 15, 2018
    Date of Patent: February 11, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Christian Habermann, Gerrit Koch, Martin Recktenwald, Ralf Winkelmann
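The reporting rule at the end of this abstract can be expressed directly: at a transaction-end event, any cache line in the transactional footprint whose expiration timestamp is older than or equal to the core-observed timestamp is an error. A minimal sketch with hypothetical timestamps:

```python
def check_transaction_end(footprint_expirations, core_observed_time):
    """Return the footprint cache lines that violate the coherency rule:
    expiration date older than or equal to the core observed time."""
    return [line for line, expires in footprint_expirations.items()
            if expires <= core_observed_time]


footprint = {"line@0x1000": 42, "line@0x2000": 57, "line@0x3000": 55}
print(check_transaction_end(footprint, core_observed_time=55))
# ['line@0x1000', 'line@0x3000'] -- report an error for these lines
```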
  • Patent number: 10551996
    Abstract: A method and a device for starting an application under a screen-locked state, applied in an electronic device, the method comprising: detecting whether the electronic device is under the screen-locked state; receiving a sliding operation instruction with respect to an icon of the application to be started; providing memory resources for starting the application if the sliding operation with respect to the icon satisfies a first predetermined condition; and performing a screen unlocking operation and starting the application if the sliding operation satisfies a second predetermined condition. With embodiments of the present disclosure, it is convenient for a user to start an application under the screen-locked state, the efficiency of starting the application may be enhanced, and the user experience may be improved.
    Type: Grant
    Filed: August 15, 2014
    Date of Patent: February 4, 2020
    Assignee: CHEETAH MOBILE INC.
    Inventors: Yong Chen, Shengsheng Huang, Mengxue Zhan, Yandan He
  • Patent number: 10545909
    Abstract: A system management command is stored in a management partition of a global memory by a first node of a multi-node computing system. The global memory is shared by each node of the multi-node computing system. In response to an indication to access the management partition, the system management command is accessed from the management partition by a second node of the multi-node computing system. The system management command is executed by the second node. Executing the system management command includes managing the second node.
    Type: Grant
    Filed: April 29, 2014
    Date of Patent: January 28, 2020
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Yuan Chen, Daniel Juergen Gmach, Dejan S. Milojicic, Vanish Talwar, Zhikui Wang
  • Patent number: 10547668
    Abstract: A communications system and a communication method are disclosed. The communication method includes: sending, by a first computing node, a communication manner parsing request to a management node, where the communication manner parsing request includes an identifier of the first computing node and an identifier of a second computing node; determining, by the management node, information about a physical communication manner between the first computing node and the second computing node according to the communication manner parsing request and communication manner reference information, where the communication manner reference information includes system topology information and a system physical resource allocation result; sending, by the management node, the information about the physical communication manner to the first computing node; and communicating, by the first computing node, with the second computing node based on the information about the physical communication manner.
    Type: Grant
    Filed: January 27, 2017
    Date of Patent: January 28, 2020
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Jiyuan Tang, Bin Huang, Wei Wang
  • Patent number: 10540290
    Abstract: Methods and apparatus obtain one or more system page table entries that represent virtual system (e.g., memory) page to physical system page translations. A number of the obtained system page table entries that can be encoded in each of a plurality of translation lookaside buffer (TLB) entry encoding formats are determined. The method and apparatus may select one of the TLB entry encoding formats that encode a number of the obtained system page table entries. The method and apparatus may encode a number of obtained system page table entries in the TLB entry encoding format selected into a compressed encoding format TLB entry. The method and apparatus may associate the compressed encoding format TLB entry with an encoding format indication of the encoding format selected. The method and apparatus may decode a compressed encoding format TLB entry based on a determined TLB entry encoding format.
    Type: Grant
    Filed: April 27, 2016
    Date of Patent: January 21, 2020
    Assignees: ATI Technologies ULC, Advanced Micro Devices, Inc.
    Inventors: Gabriel H Loh, Jimshed Mirza
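A sketch of the selection step described above: given a batch of obtained page table entries, count how many of them each candidate TLB encoding format can pack, and pick the format that packs the most. The contiguous-frame condition and the per-format capacities below are assumptions made for illustration, not the formats defined in the patent.

```python
def encodable_run(frames, capacity):
    """How many leading PTEs fit one compressed TLB entry, assuming a format
    that packs up to `capacity` entries whose physical frames are contiguous."""
    run = 1
    while run < capacity and run < len(frames) and frames[run] == frames[run - 1] + 1:
        run += 1
    return run


def choose_format(frames, format_capacities):
    """Pick the encoding format that packs the most of the obtained PTEs."""
    best = max(format_capacities,
               key=lambda fmt: encodable_run(frames, format_capacities[fmt]))
    return best, encodable_run(frames, format_capacities[best])


# Physical frame numbers for six consecutive virtual pages (hypothetical).
capacities = {"fmt-1x": 1, "fmt-4x": 4, "fmt-8x": 8}
print(choose_format([100, 101, 102, 103, 200, 201], capacities))
# ('fmt-4x', 4) -- the 4-entry format already packs the whole contiguous run
```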
  • Patent number: 10540286
    Abstract: Systems and methods for dynamically modifying coherence domains are discussed herein. In various embodiments, a hardware controller may be provided that is configured to automatically recognize application behavior and dynamically reconfigure coherence domains in hardware and software to tradeoff performance for reliability and scalability. Modifying the coherence domains may comprise repartitioning the system based on cache coherence independently of one or more software layers of the system. Memory-driven algorithms may be invoked to determine one or more dynamic coherence domain operations to implement. In some embodiments, declarative policy statements may be received from a user via one or more interfaces associated with the controller. The controller may be configured to dynamically adjust cache coherence policy based on the declarative policy statements received from the user.
    Type: Grant
    Filed: April 30, 2018
    Date of Patent: January 21, 2020
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Dejan S Milojicic, Keith Packard, Michael S. Woodacre, Andrew R Wheeler
  • Patent number: 10535368
    Abstract: A data object has a lock and a condition indicator associated with it. Based at least partly on detecting a first setting of the condition indicator, a reader stores an indication that the reader has obtained read access to the data object in an element of a readers structure and reads the data object without acquiring the lock. A writer detects the first setting and replaces it with a second setting, indicating that the lock is to be acquired by readers before reading the data object. Prior to performing a write on the data object, the writer verifies that one or more elements of the readers structure have been cleared.
    Type: Grant
    Filed: March 1, 2019
    Date of Patent: January 14, 2020
    Assignee: Oracle International Corporation
    Inventors: David Dice, Alex Kogan
  • Patent number: 10534644
    Abstract: Described herein are systems and methods for implementing a processor-local (e.g., a CPU-local) storage mechanism. An exemplary system includes a plurality of processors executing an operating system, the operating system including a processor local storage mechanism, wherein each processor accesses data unique to the processor based on the processor local storage mechanism. Each of the plurality of processors of the system may have controlled access to the resource and each of the processors is dedicated to one of a plurality of tasks of an application. The application including the plurality of tasks may be replicated using the processor local storage mechanism, wherein each of the tasks of the replicated application includes an affinity to one of the plurality of processors.
    Type: Grant
    Filed: June 25, 2009
    Date of Patent: January 14, 2020
    Assignee: Wind River Systems, Inc.
    Inventors: Andrew Gaiarsa, Maarten Koning
  • Patent number: 10521262
    Abstract: A computer-implemented method includes identifying two or more memory locations and referencing, by a memory access request, the two or more memory locations. The memory access request is a single action pursuant to a memory protocol. The computer-implemented method further includes sending the memory access request from one or more processors to a node and fetching, by the node, data content from each of the two or more memory locations. The computer-implemented method further includes packaging, by the node, the data content from each of the two or more memory locations into a memory package, and returning the memory package from the node to the one or more processors. A corresponding computer program product and computer system are also disclosed.
    Type: Grant
    Filed: September 14, 2016
    Date of Patent: December 31, 2019
    Assignee: International Business Machines Corporation
    Inventors: Fadi Y. Busaba, Harold W. Cain, III, Michael Karl Gschwind, Valentina Salapura, Timothy J. Slegel
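The request/response shape described above can be illustrated in a few lines: one request names several memory locations, the node gathers the content of each, and the whole package comes back in a single response. A minimal sketch, with an in-memory dict standing in for the node's memory:

```python
def handle_memory_access_request(node_memory, locations):
    """Fetch each requested location and package the results into one reply,
    mirroring the single-action multi-location request in the abstract."""
    return {addr: node_memory[addr] for addr in locations}


node_memory = {0x10: b"alpha", 0x20: b"beta", 0x30: b"gamma"}
package = handle_memory_access_request(node_memory, [0x10, 0x30])
print(package)   # {16: b'alpha', 48: b'gamma'} -- returned to the processor at once
```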
  • Patent number: 10521159
    Abstract: Presented herein are a system and method for providing a non-disruptive mechanism for splitting a parent volume located on a first aggregate into a new volume, the method comprising: splitting the parent volume, by the network storage system, into a new volume, wherein the new volume comprises an application; and providing a snapshot of the parent volume at the new volume.
    Type: Grant
    Filed: October 18, 2017
    Date of Patent: December 31, 2019
    Assignee: NETAPP, INC.
    Inventors: Nikul Patel, Prathamesh Deshpande, Rupa Natarajan, Anureita Rao, Vikhyath Rao
  • Patent number: 10515006
    Abstract: A pseudo main memory system. The system includes a memory adapter circuit for performing memory augmentation using compression, deduplication, and/or error correction. The memory adapter circuit is connected to a memory, and employs the memory augmentation methods to increase the effective storage capacity of the memory. The memory adapter circuit is also connected to a memory bus and implements an NVDIMM-F or modified NVDIMM-F interface for connecting to the memory bus.
    Type: Grant
    Filed: July 28, 2017
    Date of Patent: December 24, 2019
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Krishna T. Malladi, Jongmin Gim, Hongzhong Zheng
  • Patent number: 10515066
    Abstract: Described embodiments include an apparatus that includes circuitry, configured to facilitate writing to a shared memory, and a processor. The processor is configured to compute a local current-version number by incrementing a shared current-version number that is stored in the shared memory. The processor is further configured to, subsequently to computing the local current-version number, using the circuitry, atomically write at least part of the local current-version number to a portion of the shared memory that is referenced by the local current-version number. The processor is further configured to, subsequently to atomically writing the at least part of the local current-version number, store data in the shared memory in association with the at least part of the local current-version number, and subsequently to storing the data, atomically overwrite the shared current-version number with the local current-version number. Other embodiments are also described.
    Type: Grant
    Filed: September 27, 2017
    Date of Patent: December 24, 2019
    Assignee: MELLANOX TECHNOLOGIES, LTD.
    Inventors: Guy Shattah, Ariel Almog
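The write sequence in this abstract maps onto four steps: bump a local copy of the shared version number, tag the memory portion that the new version references, store the data there, and only then publish the new version. The sketch below is a single-process toy; the atomic and hardware-assisted aspects of the real mechanism are not modeled, and the version-to-slot mapping is an assumption.

```python
class VersionedSharedBuffer:
    """Toy single-writer model of the version-numbered write protocol."""

    def __init__(self, num_slots=4):
        self.shared_version = 0                 # shared current-version number
        self.slots = [None] * num_slots         # stands in for the shared memory

    def write(self, payload):
        local_version = self.shared_version + 1          # 1. local increment
        slot = local_version % len(self.slots)            # portion it references
        self.slots[slot] = {"version": local_version}     # 2. write version tag
        self.slots[slot]["data"] = payload                # 3. store the data
        self.shared_version = local_version               # 4. publish the version


buf = VersionedSharedBuffer()
buf.write("payload-A")
buf.write("payload-B")
print(buf.shared_version, buf.slots[2])
# 2 {'version': 2, 'data': 'payload-B'}
```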
  • Patent number: 10514971
    Abstract: A computing device includes an interface configured to interface and communicate with a dispersed storage network (DSN), a memory that stores operational instructions, and processing circuitry operably coupled to the interface and to the memory. The processing circuitry is configured to execute the operational instructions to perform various operations and functions. The computing device obtains directory metrics associated with a directory structure, which is associated with a directory file that is segmented into a plurality of data segments. Based on a determination, made from the directory metrics, to reconfigure the directory structure, the computing device determines a number of layers for a reconfigured directory structure, a number of spans per layer of the number of layers, and directory entry reassignments. The computing device reconfigures the directory structure based on the number of layers, the spans per layer, and the directory entry reassignments.
    Type: Grant
    Filed: November 16, 2018
    Date of Patent: December 24, 2019
    Assignee: PURE STORAGE, INC.
    Inventors: Jason K. Resch, Wesley B. Leggette, Andrew D. Baptist, Ilya Volvovski, Greg R. Dhuse
  • Patent number: 10505937
    Abstract: The unauthorized access of database nodes by application nodes within an electronic computing and communications system can be prevented using an access table that stores access table records indicating that at least some of the application nodes are authorized to access at least some of the database nodes. The access table records can be generated by identifying connections between application nodes and database nodes within a configuration management database. Responsive to receiving a request to access a database node sent from a first application node, the access table can be queried to determine whether an access table record indicating that the first application node is authorized to access the database node is stored in the access table. If that access table record is not stored in the access table, the request is denied. Otherwise, the request is allowed.
    Type: Grant
    Filed: February 1, 2017
    Date of Patent: December 10, 2019
    Assignee: ServiceNow, Inc.
    Inventors: Jeremy Norris, Antony Chan, Siddharth Shah
  • Patent number: 10503413
    Abstract: Methods and apparatus for a system including a storage array with solid state drive (SSD) storage and a controller coupled to the SSD storage. The controller may include a data system to perform input/output operations to the SSD storage, a control system coupled to the data system to control an address to hash value mapping, a routing system coupled to the control system to process commands from remote hosts, segment data into data blocks, and generate the hash values for the data blocks, and a data server associated with the routing system to receive read and write commands from a data client running on a remote host, wherein the storage array contributes a portion of the SSD storage to storage pools of a distributed elastic storage system.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: December 10, 2019
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Yochai Gal, Niko Farhi, Nir Sela, Yaniv Kaul
  • Patent number: 10496322
    Abstract: Techniques of backing up data stored on host computing devices involve selecting a backup server from among multiple servers on which to back up host data based on a measure of commonality between the host data and data stored in the backup servers. Prior to sending data for backup, a host sends a set of host data representations to a backup system. Each host data representation is based on a respective hash value computed from a respective block of the host data. The backup system compares the set of host data representations with server data representations for each backup server and computes a commonality score for each backup server. The backup system then selects a backup server on which to place the host data based at least in part on the commonality scores. Host data are then directed to the selected backup server for backup.
    Type: Grant
    Filed: March 29, 2016
    Date of Patent: December 3, 2019
    Assignee: EMC IP Holding Company LLC
    Inventor: Nickolay Alexandrovich Dalmatov
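The placement decision above reduces to scoring each backup server by how many of the host's block representations it already holds and picking the highest score. A minimal sketch with hypothetical hash-based representations:

```python
def choose_backup_server(host_representations, server_representations):
    """Score each backup server by commonality with the host data and pick
    the best one, per the selection step described in the abstract."""
    host = set(host_representations)
    scores = {server: len(host & set(reprs))
              for server, reprs in server_representations.items()}
    return max(scores, key=scores.get), scores


servers = {
    "backup-1": ["h1", "h2", "h9"],
    "backup-2": ["h1", "h2", "h3", "h4"],
}
print(choose_backup_server(["h1", "h2", "h3", "h5"], servers))
# ('backup-2', {'backup-1': 2, 'backup-2': 3})
```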
  • Patent number: 10481924
    Abstract: One or more examples provide techniques to dynamically manage serial port interface(s) of virtualization software executing in a host device. In an example, a method of managing a serial port interface of virtualization software executing on a host device includes initializing a serial port interface of the host device and examining a headless flag to determine if the host device is headless. If the headless flag is set, the method includes setting one or more serial port options to a default value, where a first serial port option connects a direct console user interface (DCUI) service to the serial port interface.
    Type: Grant
    Filed: June 26, 2017
    Date of Patent: November 19, 2019
    Assignee: VMware, Inc.
    Inventor: Nagib Gulam
  • Patent number: 10482249
    Abstract: Some embodiments provide a method for an end machine, that implements a distributed application, to redirect new network connection requests to other end machines that also implement the distributed application. The method receives a set of measurement data from a set of resources of the end machine and determines whether a measurement data received from a particular resource has exceeded a threshold. When the measurement data has exceeded the threshold, the method notifies a load balancer that balances new requests for connection to the distributed application between the end machines. The notification causes the load balancer not to send any new connection request to the end machine and redirect them to other end machines.
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: November 19, 2019
    Assignee: NICIRA, INC.
    Inventors: Amit Vasant Patil, Vasantha Kumar
  • Patent number: 10484763
    Abstract: An optical inter-switch link cluster includes an array of first switch trays and an array of second switch trays. The array of first switch trays is arranged in a first orientation. Each of the first switch trays includes a plurality of first switch chips connected to each other, and a plurality of first optical connectors connected to the plurality of first switch chips. The array of second switch trays is arranged in a second orientation. Each of the second switch trays includes a plurality of second switch chips disposed thereon and connected to each other, and a plurality of second optical connectors connected to the plurality of second switch chips. Each of the plurality of first optical connectors connected to each of the first switch trays is connected to one of the plurality of second optical connectors of a different one of the plurality of second switch trays.
    Type: Grant
    Filed: July 20, 2018
    Date of Patent: November 19, 2019
    Assignee: Hewlett Packard Enterprise Development LP
    Inventor: Kevin B. Leigh
  • Patent number: 10474365
    Abstract: An example system and method includes an electronic memory configured to store electronic data. The system further includes a controller coupled to an electronic storage device including electronic data storage locations arranged in a consecutive sequence on a storage medium and configured to store electronic data corresponding to electronic files in the electronic storage locations and access the electronic storage locations serially according to the consecutive sequence. The controller may be configured to cause the electronic storage device to serially access and transmit to the electronic memory, according to the consecutive sequence, at least some electronic data, cause the electronic memory to store the electronic data as received so that the electronic data of the file forms a complete file, and cause a processor to access the files from the electronic memory upon all electronic data associated with ones of the files having been stored in the electronic memory.
    Type: Grant
    Filed: January 27, 2014
    Date of Patent: November 12, 2019
    Assignees: STROZ FRIEDBERG, LLC, CYBER TECHNOLOGY SERVICES, INC.
    Inventors: Jon Stewart, Geoffrey Black, Joel Uckelman
  • Patent number: 10474376
    Abstract: An operating method of a memory controller may include determining a physical page to be accessed in a plurality of memory devices by mapping a logical address to a physical address; and determining a distribution pattern in which data of the physical page are distributed to the plurality of memory devices using the logical address.
    Type: Grant
    Filed: May 18, 2017
    Date of Patent: November 12, 2019
    Assignee: SK hynix Inc.
    Inventors: Jing-Zhe Xu, Jung-Hyun Kwon, Sung-Eun Lee, Jae-Sun Lee, Sang-Gu Jo
  • Patent number: 10466674
    Abstract: A programmable logic controller of a programmable logic controller system includes: a communication section that enables communication with an external instrument; a control processing section that executes a control program and executes a process associated with a request received by the communication section; and a data storage section that stores computation data that are handled in a process associated with execution of the control program. A peripheral instrument of the programmable logic controller system includes a setting processing section that defines, in the data storage section, a first area in which writing associated with the request is enabled and a second area in which writing associated with the request is prohibited based on an input operation including variables that are targets of control by the control program.
    Type: Grant
    Filed: April 24, 2017
    Date of Patent: November 5, 2019
    Assignee: MITSUBISHI ELECTRIC CORPORATION
    Inventor: Yu Ogikubo
  • Patent number: 10459861
    Abstract: A unified cache subsystem includes a data memory configured as both a shared memory and a local cache memory. The unified cache subsystem processes different types of memory transactions using different data pathways. To process memory transactions that target shared memory, the unified cache subsystem includes a direct pathway to the data memory. To process memory transactions that do not target shared memory, the unified cache subsystem includes a tag processing pipeline configured to identify cache hits and cache misses. When the tag processing pipeline identifies a cache hit for a given memory transaction, the transaction is rerouted to the direct pathway to data memory. When the tag processing pipeline identifies a cache miss for a given memory transaction, the transaction is pushed into a first-in first-out (FIFO) until miss data is returned from external memory. The tag processing pipeline is also configured to process texture-oriented memory transactions.
    Type: Grant
    Filed: September 26, 2017
    Date of Patent: October 29, 2019
    Assignee: NVIDIA CORPORATION
    Inventors: Xiaogang Qiu, Ronny Krashinsky, Steven Heinrich, Shirish Gadre, John Edmondson, Jack Choquette, Mark Gebhart, Ramesh Jandhyala, Poornachandra Rao, Omkar Paranjape, Michael Siu
  • Patent number: 10462182
    Abstract: Exemplary methods, apparatuses, and systems perform secure socket layer (SSL) protocol initialization and maintenance on behalf of a virtual machine (VM). When a secure virtual machine (SVM) receives a data packet sent by an application running on a VM, the SVM transmits a request message to the VM to enable the VM to perform a handshake with a destination computer to initiate an encrypted session between the VM and the computer. Once the encrypted session is active, the SVM encrypts the data packet and transmits the encrypted packet back to the VM, which sends it on to the destination computer.
    Type: Grant
    Filed: July 6, 2017
    Date of Patent: October 29, 2019
    Assignee: VMware, Inc.
    Inventors: Vasantha Kumar, Leena Soman, Hrishikesh Ghatnekar
  • Patent number: 10452538
    Abstract: Disclosed are systems and methods for determining task scores reflective of memory access statistics in NUMA systems. An example method may comprise: determining, by a processing device, a first memory access score of a task with respect to a first node of a Non-Uniform Memory Access (NUMA) system; adjusting the first memory access score using memory access scores of the task with respect to one or more nodes of the NUMA system; and migrating, in view of the adjusting, at least one of: the task or a memory page associated with the task.
    Type: Grant
    Filed: January 21, 2015
    Date of Patent: October 22, 2019
    Assignee: Red Hat, Inc.
    Inventors: Henri Han van Riel, Vivek Goyal
  • Patent number: 10445009
    Abstract: Systems and methods that manage memory usage by a virtual machine are provided. These systems and methods compact the virtual machine's memory footprint, thereby promoting efficient use of memory and gaining performance benefits of increased data locality. In some embodiments, a guest operating system running within the virtual machine is enhanced to allocate its VM memory in a compact manner. The guest operating system includes a memory manager that is configured to reference an artificial access cost when identifying memory areas to allocate for use by applications. These access costs are described as being artificial because they are not representative of actual, hardware based access costs, but instead are fictitious costs that increase as the addresses of the memory areas increase. Because of these increasing artificial access costs, the memory manager identifies memory areas with lower addresses for allocation and use prior to memory areas with higher addresses.
    Type: Grant
    Filed: June 30, 2017
    Date of Patent: October 15, 2019
    Assignee: INTEL CORPORATION
    Inventors: Graham Whaley, Adriaan van de Ven, Manohar R. Castelino, Jose C. Venegas Munoz, Samuel Ortiz
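The allocator behavior described above can be sketched as a cost function that grows with the address of each free memory area, so the lowest-addressed area always wins and the VM footprint stays compact. The linear cost below is an assumption; the abstract only requires the fictitious cost to increase with the address.

```python
def pick_memory_area(free_areas):
    """Choose the free area with the lowest artificial access cost, where the
    cost is a monotonically increasing function of the area's address."""
    def artificial_cost(area):
        base_address, _size = area
        return base_address            # fictitious, not a hardware access cost

    return min(free_areas, key=artificial_cost)


free = [(0x4000_0000, 2 << 20), (0x0010_0000, 2 << 20), (0x8000_0000, 2 << 20)]
print(hex(pick_memory_area(free)[0]))    # 0x100000 -- lowest address chosen first
```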
  • Patent number: 10437728
    Abstract: Circular buffers containing instructions that enable the execution of operations on logical elements are described where data in the circular buffers is swapped to storage. The instructions comprise a branchless instruction set. Data stored in circular buffers is paged in and out to a second level memory. State information for each logical element is also saved and restored using paging memory. Instructions are provided to logical elements, such as processing elements, via circular buffers. The instructions enable a group of processing elements to perform operations implementing a desired functionality. That functionality is changed by updating the circular buffers with new instructions that are transferred from paging memory. The previous instructions can be saved off in paging memory before the new instructions are copied over to the circular buffers. This enables the hardware to be rapidly reconfigured amongst multiple functions.
    Type: Grant
    Filed: September 10, 2018
    Date of Patent: October 8, 2019
    Assignee: Wave Computing, Inc.
    Inventor: Christopher John Nicol
  • Patent number: 10423415
    Abstract: Disclosed herein is an apparatus which comprises a plurality of execution units and a first general register file (GRF) communicatively coupled to the plurality of execution units, wherein the first GRF is shared by the plurality of execution units.
    Type: Grant
    Filed: April 1, 2017
    Date of Patent: September 24, 2019
    Assignee: INTEL CORPORATION
    Inventors: Abhishek R. Appu, Altug Koker, Joydeep Ray, Kamal Sinha, Kiran C. Veernapu, Subramaniam Maiyuran, Prasoonkumar Surti, Guei-Yuan Lueh, David Puffer, Supratim Pal, Eric J. Hoekstra, Travis T. Schluessler, Linda L. Hurd