Shared Memory Area Patents (Class 711/147)
-
Patent number: 12236096
Abstract: A method, computer program product, and computer system for receiving, by a computing device, a plurality of IO requests. A portion of the plurality of IO requests may be aggregated based upon a block size. The portion of the plurality of IO requests may be committed to persistent storage in a batch based upon, at least in part, aggregating the portion of the plurality of IO requests based upon the block size.
Type: Grant
Filed: October 22, 2021
Date of Patent: February 25, 2025
Assignee: EMC IP Holding Company, LLC
Inventors: Oran Baruch, Vamsi K. Vankamamidi, Ronen Gazit
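The aggregate-then-batch idea in the abstract above can be sketched as follows; the request fields, the block sizes, and the list standing in for persistent storage are all invented for illustration:

```python
# Minimal sketch: group incoming IO requests by block size, then commit
# each same-size group to storage as a single batch.
from collections import defaultdict

def aggregate_by_block_size(requests):
    """Group incoming IO requests by their block size."""
    groups = defaultdict(list)
    for req in requests:
        groups[req["block_size"]].append(req)
    return groups

def commit_batches(requests, storage):
    """Commit each same-block-size group to storage as one batch."""
    for block_size, batch in aggregate_by_block_size(requests).items():
        # One persistent-storage commit per aggregated batch.
        storage.append({"block_size": block_size, "count": len(batch)})
    return storage

storage = commit_batches(
    [{"block_size": 4096, "data": b"a"},
     {"block_size": 4096, "data": b"b"},
     {"block_size": 8192, "data": b"c"}],
    [],
)
print(storage)  # two batches: one of two 4096-byte requests, one of one 8192-byte request
```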
-
Patent number: 12235837
Abstract: Systems and methods are provided for managing read requests in a database system. The same read request is communicated to multiple nodes to reduce long tail latency. If the read request is communicated to two nodes and the first node is experiencing a communication failure, the read request is serviced by the second node. Once a response is received from the second node, the read request to the first node can be canceled.
Type: Grant
Filed: June 8, 2021
Date of Patent: February 25, 2025
Assignee: MongoDB, Inc.
Inventors: Therese Avitabile, Misha Tyulenev, Jason Carey, Andrew Michalski Schwerin, Ben Caimano, Amirsaman Memaripour, Cheahuychou Mao, Jeff Yemin, Garaudy Etienne
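The duplicate-read pattern above (often called a hedged read) can be sketched with two threads: the same read goes to both nodes, the first good response wins, and the outstanding duplicate is cancelled. The node behaviour, latencies, and return values below are simulated, not from the patent:

```python
# Minimal sketch of a hedged read: issue the same read to two nodes and
# serve the first successful response; cancel or ignore the other.
import concurrent.futures
import time

def read_from_node(node_id, latency, fail=False):
    time.sleep(latency)
    if fail:
        raise ConnectionError(f"node {node_id} unreachable")
    return f"value-from-node-{node_id}"

def hedged_read():
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        futures = [
            pool.submit(read_from_node, 1, 0.05, fail=True),  # failing node
            pool.submit(read_from_node, 2, 0.01),             # healthy node
        ]
        for fut in concurrent.futures.as_completed(futures):
            try:
                result = fut.result()
            except ConnectionError:
                continue  # fall through to the other replica
            for other in futures:
                other.cancel()  # cancel the duplicate, if still pending
            return result
    raise RuntimeError("all replicas failed")

print(hedged_read())  # value-from-node-2
```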
-
Patent number: 12236239
Abstract: According to one embodiment, a memory module includes: a memory die including dynamic random access memory (DRAM) banks, each including: an array of DRAM cells arranged in pages; a row buffer to store values of one of the pages; an input/output (IO) module; and an in-memory compute (IMC) module including: an arithmetic logic unit (ALU) to receive operands from the row buffer or the IO module and to compute an output based on the operands and one of a plurality of ALU operations; and a result register to store the output of the ALU; and a controller to: receive, from a host processor, operands and an instruction; determine, based on the instruction, a data layout; supply the operands to the DRAM banks in accordance with the data layout; and control an IMC module to perform one of the ALU operations on the operands in accordance with the instruction.
Type: Grant
Filed: September 14, 2023
Date of Patent: February 25, 2025
Assignee: Samsung Electronics Co., Ltd.
Inventors: Krishna T. Malladi, Wenqin Huangfu
-
Patent number: 12210764
Abstract: Replication of data from a primary computing system to a secondary computing system. The replication is single-threaded or multi-threaded depending on one or more characteristics of the data to be replicated. As an example, the characteristics could include the type of data being replicated and/or the variability of that data. Also, the multi-threading capabilities of the primary and secondary computing systems are determined. Then, based on the identified one or more characteristics of the data, the primary computing system decides whether to perform multi-threaded replication and the multi-threading parameters of the replication based on the one or more characteristics of that data, as well as on the multi-threading capabilities of the primary and secondary computing systems.
Type: Grant
Filed: February 27, 2024
Date of Patent: January 28, 2025
Assignee: Microsoft Technology Licensing, LLC
Inventors: Deepak Verma, Kesavan Shanmugam, Michael Gregory Montwill
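The decision step the abstract describes (pick single- vs multi-threaded replication from data characteristics and both systems' capabilities) could be sketched like this; the thresholds, field names, and the cap of 8 threads are all invented for illustration:

```python
# Minimal sketch: choose a replication thread count from characteristics
# of the data and the multi-threading capabilities of both ends.
def plan_replication(data, primary_threads, secondary_threads):
    # Highly variable or small data replicates single-threaded; otherwise
    # fan out, but never beyond what either system supports.
    if data["variability"] > 0.5 or data["size_mb"] < 64:
        return 1
    return min(primary_threads, secondary_threads, 8)

print(plan_replication({"variability": 0.1, "size_mb": 512}, 16, 4))  # 4
```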
-
Patent number: 12210465
Abstract: An electronic device includes a processor that executes one or more guest operating systems and an input-output memory management unit (IOMMU). The IOMMU accesses, for/on behalf of each guest operating system among the one or more guest operating systems, IOMMU memory-mapped input-output (MMIO) registers in a separate copy of a set of IOMMU MMIO registers for that guest operating system.
Type: Grant
Filed: January 11, 2021
Date of Patent: January 28, 2025
Assignees: ADVANCED MICRO DEVICES, INC., ATI Technologies ULC
Inventors: Maggie Chan, Philip Ng, Paul Blinzer
-
Patent number: 12204455
Abstract: A method includes synthesizing hardware description language (HDL) code into a netlist comprising first, second, and third components. The method further includes allocating addresses to each component of the netlist. The addresses allocated to each component include assigned addresses and unassigned addresses. An internal address space for a chip is formed based on the allocated addresses. The internal address space includes the assigned addresses followed by unassigned addresses for the first component, concatenated to the assigned addresses followed by unassigned addresses for the second component, concatenated to the assigned addresses followed by unassigned addresses for the third component. An external address space for components outside of the chip is generated that includes only the assigned addresses of the first component concatenated to the assigned addresses of the second component concatenated to the assigned addresses of the third component. Internal addresses are translated to external addresses and vice versa.
Type: Grant
Filed: February 22, 2023
Date of Patent: January 21, 2025
Assignee: Marvell Asia Pte Ltd
Inventors: Saurabh Shrivastava, Shrikant Sundaram, Guy T. Hutchison
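The two-way translation the abstract ends with can be sketched as a walk over per-component ranges: internally each component owns its assigned plus unassigned addresses back to back, externally only the assigned ranges are concatenated. The component sizes below are made up:

```python
# Minimal sketch: translate between an internal address space (assigned +
# unassigned ranges per component) and an external one (assigned only).
components = [
    {"assigned": 8, "unassigned": 4},   # first component
    {"assigned": 16, "unassigned": 8},  # second component
    {"assigned": 4, "unassigned": 2},   # third component
]

def internal_to_external(addr):
    int_base = ext_base = 0
    for c in components:
        if int_base <= addr < int_base + c["assigned"]:
            return ext_base + (addr - int_base)
        int_base += c["assigned"] + c["unassigned"]
        ext_base += c["assigned"]
    return None  # address falls in an unassigned hole

def external_to_internal(addr):
    int_base = ext_base = 0
    for c in components:
        if ext_base <= addr < ext_base + c["assigned"]:
            return int_base + (addr - ext_base)
        int_base += c["assigned"] + c["unassigned"]
        ext_base += c["assigned"]
    return None

print(internal_to_external(12))  # 8: first assigned address of component 2
```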
-
Patent number: 12190109
Abstract: A method of storing data in general purpose registers (GPRs) includes packing a tile of data items into GPRs, where the tile includes multiple channels. The tile of data items is read from memory. At least two channels of the data are stored in a first GPR, and at least two additional channels are stored in a second GPR. Auxiliary data is loaded into a third GPR. The auxiliary data and the tile data can be used together for performing convolution operations.
Type: Grant
Filed: September 27, 2021
Date of Patent: January 7, 2025
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Lin Chen, Zhou Hong, Yufei Zhang
-
Patent number: 12182899
Abstract: An apparatus and method for scheduling workloads across virtualized graphics processors. For example, one embodiment of a graphics processing apparatus comprises first graphics processing resources to process graphics commands and execute graphics data; workload scheduling circuitry to schedule workloads for execution on the first graphics processing resources; and workload queuing circuitry to implement a local queue to store local workload entries, each local workload entry associated with a locally-submitted workload, and an external workload queue to store external workload entries, each external workload entry associated with an externally-submitted workload submitted for execution by an external graphics processing apparatus. In one embodiment, the workload scheduling circuitry schedules the locally-submitted workloads identified in the local queue and externally-submitted workloads identified in the external workload queue for processing by specified portions of the first graphics processing resources.
Type: Grant
Filed: June 24, 2019
Date of Patent: December 31, 2024
Assignee: Intel Corporation
Inventors: Weinan Li, Yan Zhao, Zhi Wang, Wei Zhang
-
Patent number: 12182411
Abstract: A semiconductor storage device includes a plurality of semiconductor memory chips and a bridge chip. The bridge chip includes a first interface connectable to an external memory controller that is external to the semiconductor storage device, a plurality of second interfaces connected to the semiconductor memory chips, and a controller. The controller is configured to, upon receiving, by the first interface, a first command sequence that includes a data transfer command to perform data transfer with one of the semiconductor chips and size information indicating a size of data to be transferred, start an operation to perform the data transfer, and end the operation, upon an amount of data that has been received by the first interface during the data transfer reaching the size indicated by the size information.
Type: Grant
Filed: February 28, 2023
Date of Patent: December 31, 2024
Assignee: Kioxia Corporation
Inventor: Goichi Ootomo
-
Patent number: 12174741
Abstract: A neural processing device is provided. The neural processing device comprises: a processing unit configured to perform calculations, an L0 memory configured to receive data from the processing unit and provide data to the processing unit, and an LSU (Load/Store Unit) configured to perform load and store operations of the data, wherein the LSU comprises: a neural core load unit configured to issue a load instruction of the data, a neural core store unit configured to issue a store instruction for transmitting and storing the data, and a sync ID logic configured to provide a sync ID to the neural core load unit and the neural core store unit to thereby cause a synchronization signal to be generated for each sync ID.
Type: Grant
Filed: August 10, 2023
Date of Patent: December 24, 2024
Assignee: Rebellions Inc.
Inventors: Jinseok Kim, Jinwook Oh, Donghan Kim
-
Method and system for improving responsiveness in exchanging frames in a wireless local area network
Patent number: 12177716
Abstract: A method and system for improving responsiveness in exchanging management and control frames in a wireless local area network are disclosed. An initiator sends a frame (action frame, management frame, CSI frame, control frame, or data frame) to a responder. Upon correctly receiving the frame, the responder sends a response frame to the initiator instead of directly sending an acknowledgement (ACK) packet. The responder preferably accesses the wireless medium to send the response frame in a short inter-frame spacing (SIFS). With this scheme, a long delay associated with having to contend for the wireless medium to send the response frame is avoided and therefore, the responsiveness and timeliness of the feedback mechanism is significantly enhanced. The response frame may be piggybacked on or aggregated with another packet.
Type: Grant
Filed: February 22, 2022
Date of Patent: December 24, 2024
Assignee: InterDigital Technology Corporation
Inventors: Arty Chandra, Eldad M. Zeira, Mohammed Sammour, Sudheer A. Grandhi
-
Patent number: 12175300
Abstract: Disclosed embodiments relate to software control of graphics hardware that supports logical slots. In some embodiments, a GPU includes circuitry that implements a plurality of logical slots and a set of graphics processor sub-units that each implement multiple distributed hardware slots. Control circuitry may determine mappings between logical slots and distributed hardware slots for different sets of graphics work. Various mapping aspects may be software-controlled. For example, software may specify one or more of the following: priority information for a set of graphics work, to retain the mapping after completion of the work, a distribution rule, a target group of sub-units, a sub-unit mask, a scheduling policy, to reclaim hardware slots from another logical slot, etc. Software may also query status of the work.
Type: Grant
Filed: August 11, 2021
Date of Patent: December 24, 2024
Assignee: Apple Inc.
Inventors: Andrew M. Havlir, Steven Fishwick, Melissa L. Velez
-
Patent number: 12169741
Abstract: A method and apparatus of a network device that allocates a shared memory buffer for an object is described. In an exemplary embodiment, the network device receives an allocation request for the shared memory buffer for the object. In addition, the network device allocates the shared memory buffer from shared memory of a network device, where the shared memory buffer is accessible by a writer and a plurality of readers. The network device further returns a writer pointer to the writer, where the writer pointer references a base address of the shared memory buffer. Furthermore, the network device stores the object in the shared memory buffer, wherein the writer accesses the shared memory using the writer pointer. The network device further shares the writer pointer with at least a first reader of the plurality of readers. The network device additionally translates the base address of the shared memory buffer to a reader pointer, where the reader pointer is expressed in a memory space of the first reader.
Type: Grant
Filed: August 22, 2023
Date of Patent: December 17, 2024
Assignee: ARISTA NETWORKS, INC.
Inventors: Stuart Ritchie, Sebastian Sapa, Christopher Neilson, Eric Secules, Peter Edwards
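The final translation step the abstract describes arises because a shared segment is typically mapped at a different base address in each process, so a writer-space pointer must be re-based as an offset into each reader's mapping. A sketch of that arithmetic, with addresses as plain integers and all base addresses invented:

```python
# Minimal sketch: translate a writer-space pointer into each reader's
# memory space via its offset within the shared segment.
SEGMENT_SIZE = 4096

writer_map_base = 0x7f00_0000  # where the writer mapped the segment
reader_map_bases = {"reader-a": 0x5500_0000, "reader-b": 0x6600_0000}

def writer_to_reader_ptr(writer_ptr, reader):
    offset = writer_ptr - writer_map_base
    assert 0 <= offset < SEGMENT_SIZE, "pointer outside shared segment"
    return reader_map_bases[reader] + offset

buffer_ptr = writer_map_base + 0x80  # base address of an allocated buffer
print(hex(writer_to_reader_ptr(buffer_ptr, "reader-a")))  # 0x55000080
```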
-
Patent number: 12147346
Abstract: Embodiments described herein provide a scalable coherency tracking implementation that utilizes shared virtual memory to manage data coherency. In one embodiment, coherency tracking granularity is reduced relative to existing coherency tracking solutions, with coherency tracking storage moved to system memory as page table metadata. For example and in one embodiment, storage for coherency state is moved from dedicated hardware blocks to system memory, effectively providing a directory structure that is limitless in size.
Type: Grant
Filed: December 6, 2023
Date of Patent: November 19, 2024
Assignee: Intel Corporation
Inventor: Altug Koker
-
Patent number: 12141076
Abstract: Disclosed herein is a virtual cache and method in a processor for supporting multiple threads on the same cache line. The processor is configured to support virtual memory and multiple threads. The virtual cache directory includes a plurality of directory entries, each entry associated with a cache line. Each cache line has a corresponding tag. The tag includes a logical address, an address space identifier, a real address bit indicator, and a per-thread validity bit for each thread that accesses the cache line. When a subsequent thread determines that the cache line is valid for that thread, the validity bit for that thread is set, while not invalidating any validity bits for other threads.
Type: Grant
Filed: August 18, 2023
Date of Patent: November 12, 2024
Assignee: International Business Machines Corporation
Inventors: Markus Helms, Christian Jacobi, Ulrich Mayer, Martin Recktenwald, Johannes C. Reichart, Anthony Saporito, Aaron Tsai
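The per-thread validity bits can be sketched as a small bitmask in the tag: setting a thread's bit never clears any other thread's bit, so one thread validating a line does not invalidate it for the others. The class and field names below are illustrative only:

```python
# Minimal sketch: a cache-line tag with one validity bit per thread.
class CacheLineTag:
    def __init__(self, logical_address, num_threads):
        self.logical_address = logical_address
        self.num_threads = num_threads
        self.valid_bits = 0  # bit i set => line valid for thread i

    def validate_for(self, thread_id):
        assert 0 <= thread_id < self.num_threads
        self.valid_bits |= 1 << thread_id  # set; never clears other bits

    def is_valid_for(self, thread_id):
        return bool(self.valid_bits & (1 << thread_id))

tag = CacheLineTag(0x1000, num_threads=4)
tag.validate_for(0)
tag.validate_for(2)  # thread 0's bit stays set
print(tag.is_valid_for(0), tag.is_valid_for(1), tag.is_valid_for(2))
# True False True
```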
-
Patent number: 12130740
Abstract: Embodiments of a processor architecture are disclosed. In an embodiment, a processor includes a decoder, an execution unit, a coherent cache, and an interconnect. The decoder is to decode an instruction to zero a cache line. The execution unit is to issue a write command to initiate a cache line sized write of zeros. The coherent cache is to receive the write command, to determine whether there is a hit in the coherent cache and whether a cache coherency protocol state of the hit cache line is a modified state or an exclusive state, to configure a cache line to indicate all zeros, and to issue the write command toward the interconnect. The interconnect is to, responsive to receipt of the write command, issue a snoop to each of a plurality of other coherent caches for which it must be determined if there is a hit.
Type: Grant
Filed: April 4, 2022
Date of Patent: October 29, 2024
Assignee: Intel Corporation
Inventors: Jason W. Brandt, Robert S. Chappell, Jesus Corbal, Edward T. Grochowski, Stephen H. Gunther, Buford M. Guy, Thomas R. Huff, Christopher J. Hughes, Elmoustapha Ould-Ahmed-Vall, Ronak Singhal, Seyed Yahya Sotoudeh, Bret L. Toll, Lihu Rappoport, David B. Papworth, James D. Allen
-
Patent number: 12118394
Abstract: An apparatus for integrated memory management in a cluster system including a plurality of physical nodes connected to each other by a network determines one of the plurality of physical nodes as the node on which to place a new virtual machine, allocates to the new virtual machine as much of the first type of memory on that physical node as the new virtual machine requires, and distributes the second type of memory to a plurality of virtual machines running on the plurality of physical nodes by integrating and managing the second type of memory allocated to each of the plurality of physical nodes. The access speed of the second type of memory is faster than that of the first type of memory.
Type: Grant
Filed: November 2, 2021
Date of Patent: October 15, 2024
Assignee: Electronics and Telecommunications Research Institute
Inventors: Changdae Kim, Kwang-Won Koh, Kang Ho Kim, Taehoon Kim
-
Patent number: 12102012
Abstract: According to one embodiment, a magnetoresistance memory device includes: a first conductor; a variable resistance material on a top surface of the first conductor; a second conductor on a top surface of the variable resistance material; a first insulator other than nitride on a top surface of the second conductor; a magnetoresistance effect element on a top surface of the first insulator; and a third conductor located on a side surface of the first insulator and extending on a side surface of the second conductor and a side surface of the magnetoresistance effect element.
Type: Grant
Filed: September 10, 2021
Date of Patent: September 24, 2024
Assignee: Kioxia Corporation
Inventors: Naoki Akiyama, Kenichi Yoshino
-
Patent number: 12086654
Abstract: Virtualization techniques can include determining virtual function routing tables for the virtual parallel processing units (PPUs) from a logical topology of a virtual function. A first mapping of the virtual PPUs to a first set of a plurality of physical PPUs can be generated. Virtualization can also include generating a first set of physical function routing tables for the first set of physical PPUs based on the virtual function routing tables and the first virtual-PPU-to-physical-PPU mapping. An application can be migrated from the first set of physical PPUs to a second set of PPUs by generating a second mapping of the virtual PPUs to a second set of a plurality of physical PPUs. A second set of physical function routing tables for the second set of physical PPUs can also be generated based on the virtual function routing tables and the second virtual-PPU-to-physical-PPU mapping.
Type: Grant
Filed: September 16, 2021
Date of Patent: September 10, 2024
Assignee: T-Head (Shanghai) Semiconductor Co., Ltd.
Inventors: Liang Han, Guoyu Zhu, ChengYuan Wu, Rong Zhong
-
Patent number: 12074966
Abstract: Methods, systems, and computer-readable media facilitating encrypted information retrieval. Methods can include receiving a batch of queries that includes queries to special buckets in each database shard. Query results responsive to the batch of queries are transmitted to the client device. The query results include server-encrypted secret shares obtained from the special buckets. Client-encrypted versions of the secret shares are received. A full set of server-encrypted secret shares is transmitted to the client device, which is encrypted by the client device to create a full set of client-server-encrypted secret shares. The client device is classified based on how many of the secret shares are included in both of the client-encrypted secret shares received from the client device and the full set of client-server-encrypted secret shares received from the client device.
Type: Grant
Filed: July 1, 2022
Date of Patent: August 27, 2024
Assignee: Google LLC
Inventors: Eli Simon Fox-Epstein, Kevin Wei Li Yeo
-
Patent number: 12073871
Abstract: Provided is a method of performing an internal processing operation of a memory device in a system including a host device and the memory device. The memory device includes a memory cell array and a processor-in-memory (PIM) performing an internal processing operation. In an internal processing mode, the memory device performs, by the PIM, the internal processing operation based on internal processing information stored in the memory cell array. When the internal processing information is an internal processing operation command indicating a type of the internal processing operation, the memory device outputs the internal processing operation command including an internal processing read command and an internal processing write command to the host device. The host device issues to the memory device a priority command determined from among a data transaction command and the internal processing operation command.
Type: Grant
Filed: July 18, 2023
Date of Patent: August 27, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Pavan Kumar Kasibhatla, Seong-il O, Hak-soo Yu
-
Patent number: 12066973
Abstract: A computer system that includes at least one host device comprising at least one processor. The at least one processor is configured to implement, in a host operating system (OS) space, a teamed network interface card (NIC) software program that provides a unified interface to host OS space upper layer protocols including at least a remote direct memory access (RDMA) protocol and an Ethernet protocol. The teamed NIC software program provides multiplexing for at least two data pathways. The at least two data pathways include an RDMA data pathway that transmits communications to and from an RDMA interface of a physical NIC, and an Ethernet data pathway that transmits communications to and from an Ethernet interface of the physical NIC through a virtual switch that is implemented in a host user space and a virtual NIC that is implemented in the host OS space.
Type: Grant
Filed: June 4, 2021
Date of Patent: August 20, 2024
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventor: Omar Cardona
-
Patent number: 12061895
Abstract: Example embodiments facilitate prioritizing the recycling of computing resources, e.g., server-side computing systems and accompanying resources (e.g., non-volatile memory, accompanying firmware, data, etc.) leased by customers in a cloud-based computing environment, whereby computing resources (e.g., non-volatile memory) to be forensically analyzed/inspected, sanitized, and/or updated are prioritized for recycling based on estimates of when the computing resources are most likely to require recycling, e.g., via background sanitizing and updating. Computing resources that are likely to be recycled first are given priority over computing resources that are more likely to be recycled later. By prioritizing the recycling of computing resources according to embodiments discussed herein, other cloud-based computing resources that are used to implement computing resource recycling can be efficiently allocated and preserved.
Type: Grant
Filed: July 31, 2023
Date of Patent: August 13, 2024
Assignee: Oracle International Corporation
Inventors: Tyler Vrooman, Graham Schwinn, Greg Edvenson
-
Patent number: 12056366
Abstract: A volume to be accessed by a host is provided. A reliability policy related to data reliability and a performance policy related to response performance to an access to the volume are set in the volume. A node that processes redundant data of data for a node that processes the data related to the volume is determined based on the reliability policy. The determined node returns a result of an access to the volume from the host according to the performance policy.
Type: Grant
Filed: September 8, 2022
Date of Patent: August 6, 2024
Assignee: HITACHI, LTD.
Inventors: Taisuke Ono, Hideo Saito, Takaki Nakamura, Takahiro Yamamoto
-
Patent number: 12050798
Abstract: A destination host includes a processor core, a system fabric, a memory system, and a link controller communicatively coupled to the system fabric and configured to be communicatively coupled, via a communication link, to a source host with which the destination host is non-coherent. The destination host migrates, via the communication link, a state of a logical partition from the source host to the destination host and page table entries for translating addresses of a dataset of the logical partition from the source host to the destination host. After migrating the state and page table entries, the destination host initiates execution of the logical partition on the processor core while at least a portion of the dataset of the logical partition resides in the memory system of the source host and migrates, via the communication link, the dataset of the logical partition to the memory system of the destination host.
Type: Grant
Filed: July 29, 2021
Date of Patent: July 30, 2024
Assignee: International Business Machines Corporation
Inventors: Derek E. Williams, Guy L. Guthrie, William J. Starke, Jeffrey A. Stuecheli
-
Patent number: 12045198
Abstract: Embodiments are described for a multi-domain and multi-tier architecture for clustered network file systems. This system allows a user to create sub-clusters of physical nodes, called domains, and file system resources for the data placed in a domain are allocated only from the nodes in the domain. It limits the impact of system failures to the files within a domain. A file system redirection service manages a global namespace spanning the domains and redirects file accesses to the appropriate domain where they are stored. In each domain, there are different classes of storage, called tiers, with different cost and performance characteristics. Files can be placed on a set of tiers depending on a storage level agreement (SLA) specified for a file. Tier examples include a higher performance tier consisting of SSDs and a lower performance tier of HDDs.
Type: Grant
Filed: July 29, 2022
Date of Patent: July 23, 2024
Assignee: Dell Products L.P.
Inventors: George Mathew, Chegu Vinod, Abhinav Duggal, Philip Shilane
-
Patent number: 12008016
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer-storage media, for action ordering in a multi-agent system such as a distributed system of agent nodes. In some implementations, an exemplary method includes accessing local queues including one or more actions; applying an aggregation function to the local queues; based on the application of the aggregation function, determining an ordering of actions from the actions of each local queue; generating a shared queue based on the ordering of the actions including a first action at a first position of the shared queue and a second action at a second, sequential position of the shared queue; and synchronizing the actions of the distributed system in response to processing the actions of the shared queue according to the ordering of the actions.
Type: Grant
Filed: December 29, 2021
Date of Patent: June 11, 2024
Assignee: Onomy LLC
Inventor: Charles Dusek
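One way to picture the aggregation step above is a deterministic merge of the local queues into a single shared queue that every node would process in the same order. The sort key (timestamp with agent id as tie-break) and the action field names below are invented for illustration:

```python
# Minimal sketch: merge per-agent local queues into one totally ordered
# shared queue via an aggregation function (sort by timestamp, then agent).
def aggregate(local_queues):
    merged = [action for queue in local_queues for action in queue]
    merged.sort(key=lambda a: (a["ts"], a["agent"]))
    return merged

shared_queue = aggregate([
    [{"ts": 2, "agent": "a", "op": "write"},
     {"ts": 5, "agent": "a", "op": "read"}],
    [{"ts": 1, "agent": "b", "op": "write"},
     {"ts": 2, "agent": "b", "op": "sync"}],
])
print([a["op"] for a in shared_queue])  # ['write', 'write', 'sync', 'read']
```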
-
Patent number: 12001329
Abstract: A memory device defines portions of the storage space as memory mode memory or storage mode memory. Memory mode memory is represented as a portion of a system physical address space of an information handling system, and storage mode memory is represented as a storage device in the information handling system. An operating system instantiates a paged virtual memory architecture on the information handling system. The information handling system determines a page miss rate for pages stored in the first portion of the storage space, receives a request to increase a first size of the first portion of storage space in response to determining the page miss rate, and increases the first size of the first portion of storage space to a second size in response to the request.
Type: Grant
Filed: January 5, 2021
Date of Patent: June 4, 2024
Assignee: Dell Products L.P.
Inventors: Parmeshwr Prasad, Anusha Bhaskar
-
Patent number: 11983182
Abstract: An information handling system includes a hardware device having a query processing engine to provide queries into source data and to provide responses to the queries. A processor stores a query to a query address in the memory device, issues a command to the hardware device, the command including the query address and a response address in the memory device, and retrieves a response to the query from the response address. The hardware device retrieves the query from the query address in response to the command, provides the query to the query processing engine, and stores a response to the query from the query processing engine to the response address.
Type: Grant
Filed: October 27, 2020
Date of Patent: May 14, 2024
Assignee: Dell Products L.P.
Inventors: Shyamkumar Iyer, Krishna Ramaswamy, Gaurav Chawla
-
Patent number: 11983405
Abstract: A management device that may communicate with at least one device is disclosed. The management device may include communication logic to communicate with the devices over communication channels about data associated with the devices. The management device may also include reception logic that may receive a query from a host. The query may request information from the management device about the devices. The management device may also include transmission logic to send the data about the devices to the host. The host may be configured to send a message to the devices.
Type: Grant
Filed: November 16, 2020
Date of Patent: May 14, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Sompong Paul Olarig, Son T. Pham
-
Patent number: 11966737
Abstract: Systems and methods for an efficient and robust multiprocessor-coprocessor interface that may be used between a streaming multiprocessor and an acceleration coprocessor in a GPU are provided. According to an example implementation, in order to perform an acceleration of a particular operation using the coprocessor, the multiprocessor: issues a series of write instructions to write input data for the operation into coprocessor-accessible storage locations, issues an operation instruction to cause the coprocessor to execute the particular operation; and then issues a series of read instructions to read result data of the operation from coprocessor-accessible storage locations to multiprocessor-accessible storage locations.
Type: Grant
Filed: September 2, 2021
Date of Patent: April 23, 2024
Assignee: NVIDIA CORPORATION
Inventors: Ronald Charles Babich, Jr., John Burgess, Jack Choquette, Tero Karras, Samuli Laine, Ignacio Llamas, Gregory Muthler, William Parsons Newhall, Jr.
-
Patent number: 11960750
Abstract: Replication of data from a primary computing system to a secondary computing system. The replication is single-threaded or multi-threaded depending on one or more characteristics of the data to be replicated. As an example, the characteristics could include the type of data being replicated and/or the variability of that data. Also, the multi-threading capabilities of the primary and secondary computing systems are determined. Then, based on the identified one or more characteristics of the data, the primary computing system decides whether to perform multi-threaded replication and the multi-threading parameters of the replication based on the one or more characteristics of that data, as well as on the multi-threading capabilities of the primary and secondary computing systems.
Type: Grant
Filed: December 8, 2022
Date of Patent: April 16, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Deepak Verma, Kesavan Shanmugam, Michael Gregory Montwill
-
Patent number: 11960391
Abstract: A processing device comprises processors, a first memory shared by the processors, and a cache comprising a second memory comprising a plurality of memory units, each of the plurality of memory units in the second memory being associated with a respective one of a plurality of request identifiers. The cache receives a memory read request including a request identifier and a memory address from at least one of the processors, identifies an allocated memory address identifier for the memory address, accesses the first memory to read data of the memory address, obtains one or more request identifiers which requested data of the memory address from the second memory based on the allocated memory address identifier, and transmits the data of the memory address to one or more processors which requested data of the memory address based on the one or more request identifiers.
Type: Grant
Filed: August 7, 2023
Date of Patent: April 16, 2024
Assignee: Rebellions Inc.
Inventors: Sungpill Choi, Jae-Sung Yoon
-
Patent number: 11960922
Abstract: In an embodiment, a processor comprises: an execution circuit to execute instructions; at least one cache memory coupled to the execution circuit; and a table storage element coupled to the at least one cache memory, the table storage element to store a plurality of entries each to store object metadata of an object used in a code sequence. The processor is to use the object metadata to provide user space multi-object transactional atomic operation of the code sequence. Other embodiments are described and claimed.
Type: Grant
Filed: September 24, 2020
Date of Patent: April 16, 2024
Assignee: Intel Corporation
Inventors: Joshua B. Fryman, Jason M. Howard, Ibrahim Hur, Robert Pawlowski
-
Patent number: 11960771
Abstract: A first controller manages first mapping information for accessing data stored in a storage area, management of which is assigned to the first controller, and second mapping information for accessing data stored in a predetermined storage area, management of which is assigned to a second controller. The second controller, when having executed garbage collection on the predetermined storage area, changes mapping information to post-migration mapping information for accessing data after being migrated by the garbage collection.
Type: Grant
Filed: September 1, 2022
Date of Patent: April 16, 2024
Assignee: HITACHI, LTD.
Inventors: Shugo Ogawa, Ryosuke Tatsumi, Yoshinori Ohira, Hiroto Ebara, Junji Ogawa
-
Patent number: 11947801
Abstract: An apparatus to facilitate in-place memory copy during remote data transfer in a heterogeneous compute environment is disclosed. The apparatus includes a processor to receive data via a network interface card (NIC) of a hardware accelerator device; identify a destination address of memory of the hardware accelerator device to write the data; determine that access control bits of the destination address in page tables maintained by a memory management unit (MMU) indicate that memory pages of the destination address are both registered and free; write the data to the memory pages of the destination address; and update the access control bits for memory pages of the destination address to indicate that the memory pages are restricted, wherein setting the access control bits to restricted prevents the NIC and a compute kernel of the hardware accelerator device from accessing the memory pages.
Type: Grant
Filed: July 29, 2022
Date of Patent: April 2, 2024
Assignee: INTEL CORPORATION
Inventors: Reshma Lal, Sarbartha Banerjee
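The access-control-bit state machine in this abstract is easy to model: a page may receive remote data only while it is both registered and free, and the write flips it to restricted. The bit values and dictionary-based page table below are illustrative assumptions, not the patented layout.

```python
# Illustrative access-control bits: a page may be REGISTERED and FREE;
# after remote data lands in it, it becomes RESTRICTED so that neither
# the NIC nor a compute kernel can touch it.
REGISTERED, FREE, RESTRICTED = 0b001, 0b010, 0b100

def write_remote_data(page_table, page, data):
    """Hypothetical sketch of the MMU check described in the abstract."""
    bits = page_table[page]["bits"]
    # Only pages that are both registered and free may receive data.
    if not (bits & REGISTERED and bits & FREE):
        raise PermissionError("page is not both registered and free")
    page_table[page]["data"] = data
    # Flip the page to restricted, revoking NIC/kernel access.
    page_table[page]["bits"] = RESTRICTED
```

A second write to the same page fails until software clears the restricted state, which is the property the abstract relies on.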
-
Patent number: 11940994
Abstract: Techniques are disclosed that relate to manipulating a chain of database objects without locking the chain. A computer system may maintain a chain that orders a set of database objects stored in a cache of the computer system. The computer system may receive a set of requests to perform database transactions. Based on the received set of requests, the computer system may determine to perform a plurality of chain operations that involve modifying the chain. The computer system may perform two or more of the plurality of chain operations at least partially in parallel using a set of atomic operations without acquiring a lock on the chain.
Type: Grant
Filed: October 29, 2021
Date of Patent: March 26, 2024
Assignee: Salesforce, Inc.
Inventors: Rui Zhang, Prateek Swamy, Yi Xia, Punit B. Shah, Rama K. Korlapati
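The general lock-free pattern this abstract refers to is the optimistic compare-and-swap retry loop. Below is a generic illustration of that pattern, not the patented system: Python has no hardware CAS, so a tiny internal lock simulates the single atomic step; the retry loop around it is the point.

```python
import threading

class Node:
    def __init__(self, value, nxt=None):
        self.value, self.next = value, nxt

class LockFreeChain:
    """Sketch of chain manipulation without a chain-wide lock:
    pushes retry a compare-and-swap on the head pointer."""

    def __init__(self):
        self.head = None
        self._cas_lock = threading.Lock()   # stands in for an atomic CAS

    def _cas_head(self, expected, new):
        # Atomically: if head is still `expected`, swing it to `new`.
        with self._cas_lock:
            if self.head is expected:
                self.head = new
                return True
            return False

    def push(self, value):
        while True:                          # optimistic retry loop
            snapshot = self.head
            if self._cas_head(snapshot, Node(value, snapshot)):
                return                       # CAS won; no chain lock taken

    def values(self):
        out, node = [], self.head
        while node:
            out.append(node.value)
            node = node.next
        return out
```

Because each push only retries its own CAS, concurrent pushes can proceed at least partially in parallel, matching the behavior the abstract describes.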
-
Patent number: 11934827
Abstract: An apparatus that manages multi-process execution in a processing-in-memory ("PIM") device includes a gatekeeper configured to: receive an identification of one or more registered PIM processes; receive, from a process, a memory request that includes a PIM command; if the requesting process is a registered PIM process and another registered PIM process is active on the PIM device, perform a context switch of PIM state between the registered PIM processes; and issue the PIM command of the requesting process to the PIM device.
Type: Grant
Filed: December 20, 2021
Date of Patent: March 19, 2024
Assignee: ADVANCED MICRO DEVICES, INC.
Inventors: Sooraj Puthoor, Muhammad Amber Hassaan, Ashwin Aji, Michael L. Chu, Nuwan Jayasena
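The gatekeeper's control flow can be sketched as a small class: commands from a registered process are issued directly, but if a different registered process is currently active on the PIM device, its state is context-switched out first. The class and method names are illustrative assumptions; `pim_device` stands in for whatever hardware interface performs the switch and issue.

```python
class PimGatekeeper:
    """Toy sketch of the gatekeeper logic described in the abstract."""

    def __init__(self, pim_device):
        self.pim = pim_device
        self.registered = set()   # registered PIM process IDs
        self.active = None        # process whose PIM state is loaded

    def register(self, pid):
        self.registered.add(pid)

    def submit(self, pid, pim_command):
        if pid not in self.registered:
            raise PermissionError("process not registered for PIM")
        if self.active is not None and self.active != pid:
            # Save the active process's PIM state, load the requester's.
            self.pim.context_switch(self.active, pid)
        self.active = pid
        self.pim.issue(pim_command)
```

The invariant is that at most one registered process's PIM state is live at a time, and switches happen only on demand.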
-
Patent number: 11921635
Abstract: Embodiments described herein provide a scalable coherency tracking implementation that utilizes shared virtual memory to manage data coherency. In one embodiment, coherency tracking granularity is reduced relative to existing coherency tracking solutions, with coherency tracking storage moved to system memory as page table metadata. For example and in one embodiment, storage for coherency state is moved from dedicated hardware blocks to system memory, effectively providing a directory structure that is limitless in size.
Type: Grant
Filed: March 3, 2021
Date of Patent: March 5, 2024
Assignee: Intel Corporation
Inventor: Altug Koker
-
Patent number: 11914546
Abstract: An information handling system includes a memory and a baseboard management controller. The memory stores one or more device update packages, and each of the device update packages includes an inter-integrated circuit payload. The baseboard management controller receives a first device update package, and stores the first device update package in the memory. In response to the first device update package being stored in the memory, the baseboard management controller launches a handler. The baseboard management controller retrieves a bus number and an address for a target device identified in the first device update package. The baseboard management controller parses data in a body of the inter-integrated circuit payload of the first device update package, and executes inter-integrated circuit commands in the body to provide a firmware image update to the target device.
Type: Grant
Filed: October 4, 2021
Date of Patent: February 27, 2024
Assignee: Dell Products L.P.
Inventors: Yogesh P. Kulkarni, Chandrasekhar Mugunda, Rui An, Akshata Sheshagiri Naik
-
Patent number: 11914544
Abstract: According to one embodiment, a memory system includes a board, a memory controller, and a semiconductor memory. When a signal input to a third port or a command received from an outside of the memory system satisfies a first condition, the memory controller is configured to use a first port as a first signal port and to use a second port as a second signal port. When the signal input to the third port or the command received from the outside of the memory system satisfies a second condition, the memory controller is configured to use the first port as the second signal port and to use the second port as the first signal port.
Type: Grant
Filed: June 15, 2022
Date of Patent: February 27, 2024
Assignee: Kioxia Corporation
Inventors: Nana Kawamoto, Naoki Kimura
-
Patent number: 11907176
Abstract: Methods, computer program products, and systems are presented. The methods include, for instance: receiving a request for a lock on a page from a virtual database amongst two or more virtual databases, the virtual database including a number of containers respectively corresponding to the same number of database components of the virtual database. A copy of the page is refreshed with the latest copy of the page in an overall cache prior to granting the lock, based on ascertaining that the page is not locked by any other virtual database. The virtual database is granted the lock and has exclusive access to the page.
Type: Grant
Filed: May 10, 2019
Date of Patent: February 20, 2024
Assignee: International Business Machines Corporation
Inventors: Xin Peng Liu, ShengYan Sun, Shuo Li, Xiaobo Wang
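The lock-grant flow in this abstract has two steps that a short sketch makes concrete: check that no other virtual database holds the page, then refresh the requester's copy from the overall cache before granting exclusivity. The class below is an illustrative model only; all names are assumptions.

```python
class PageLockManager:
    """Toy sketch of the page-locking flow described in the abstract."""

    def __init__(self, overall_cache):
        self.overall_cache = overall_cache   # page -> latest contents
        self.locks = {}                      # page -> virtual DB holding the lock

    def acquire(self, vdb, page, local_copy):
        holder = self.locks.get(page)
        if holder is not None and holder != vdb:
            return None                      # locked by another virtual database
        # Refresh the requester's copy before granting the exclusive lock.
        local_copy[page] = self.overall_cache[page]
        self.locks[page] = vdb
        return local_copy[page]
```

Refreshing before the grant ensures the lock holder never works from a stale private copy.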
-
Patent number: 11907101
Abstract: Disclosed herein are systems and methods for selective patching processes. In one exemplary aspect, the method includes: identifying, via a user space patching service, a patch that modifies at least one function included in a process, wherein the process is executed on a computing device; generating a list of target pages in virtual memory of the computing device, wherein the list of target pages includes code associated with the at least one function; marking the target pages as non-executable based on file identification; intercepting, using an amended page-fault event handler, an attempt to execute the code associated with the at least one function by the process; and applying the patch to modify the at least one function.
Type: Grant
Filed: February 22, 2022
Date of Patent: February 20, 2024
Assignee: Cloud Linux Software, Inc.
Inventors: Igor Seletskiy, Pavel Boldin
-
Patent number: 11907577
Abstract: A plurality of commands is received from at least one application. A command of the plurality of commands is to be performed by a Data Storage Device (DSD) after one or more conditions have been satisfied by the DSD. The plurality of commands is enqueued and the command is enqueued with the one or more conditions for performing the command. It is determined whether the one or more conditions have been satisfied by the DSD, and in response to determining that the one or more conditions have been satisfied by the DSD, the command is sent to the DSD for performance of the command.
Type: Grant
Filed: December 6, 2021
Date of Patent: February 20, 2024
Assignee: Western Digital Technologies, Inc.
Inventors: Tomer Spector, Doron Ganon, Eran Arad
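The conditional queue described above can be sketched as a dispatch pass over pending entries: each entry pairs a command with the conditions the DSD must satisfy first, and only satisfied commands are forwarded. Modeling conditions as predicates over a DSD state snapshot is an assumption made for illustration.

```python
def dispatch_ready(queue, dsd_state):
    """Toy sketch of the conditional command queue from the abstract.

    `queue` is a list of (command, [condition predicates]) pairs;
    `dsd_state` is a snapshot of the Data Storage Device's state.
    Returns (commands sent to the DSD, entries still waiting).
    """
    sent, still_queued = [], []
    for command, conditions in queue:
        if all(cond(dsd_state) for cond in conditions):
            sent.append(command)             # conditions met: forward to DSD
        else:
            still_queued.append((command, conditions))
    return sent, still_queued
```

Re-running the pass whenever the DSD state changes eventually drains the queue, which mirrors the "determine whether the conditions have been satisfied" step.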
-
Patent number: 11899933
Abstract: A management device that may communicate with at least one device is disclosed. The management device may include communication logic to communicate with the devices over a communication channel about data associated with the devices. The management device may also include reception logic that may receive a query from a host. The query may request information from the management device about the devices. The management device may also include transmission logic to send the data about the devices to the host. The host may be configured to send a message to the devices.
Type: Grant
Filed: November 16, 2020
Date of Patent: February 13, 2024
Inventors: Sompong Paul Olarig, Son T. Pham
-
Patent number: 11886987
Abstract: A multiply-accumulate method and architecture are disclosed. The architecture includes a plurality of networks of non-volatile memory elements arranged in tiled columns. Logic digitally modulates the equivalent conductance of individual networks among the plurality of networks to map the equivalent conductance of each individual network to a single weight within the neural network. A first partial selection of weights within the neural network is mapped into the equivalent conductances of the networks in the columns to enable the computation of multiply-and-accumulate operations by mixed-signal computation. The logic updates the mappings to select a second partial selection of weights to compute additional multiply-and-accumulate operations and repeats the mapping and computation operations until all computations for the neural network are completed.
Type: Grant
Filed: June 25, 2019
Date of Patent: January 30, 2024
Assignee: Arm Limited
Inventors: Shidhartha Das, Matthew Mattina, Glen Arnold Rosendale, Fernando Garcia Redondo
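The map-compute-remap loop in this abstract amounts to computing a dot product one tile of weights at a time, because only a partial selection of weights fits in the memory networks' conductances at once. The pure-Python function below is a numerical stand-in for the mixed-signal computation; the tile size and names are illustrative assumptions.

```python
def mac_in_tiles(weights, activations, tile_size):
    """Accumulate a dot product tile by tile, as a stand-in for the
    map/compute/remap loop the abstract describes: each iteration
    "maps" one partial selection of weights, computes its
    multiply-and-accumulate contribution, then moves to the next."""
    total = 0.0
    for start in range(0, len(weights), tile_size):
        # "Map" this slice of weights onto the available columns...
        mapped = weights[start:start + tile_size]
        acts = activations[start:start + tile_size]
        # ...and accumulate its multiply-and-accumulate result.
        total += sum(w * a for w, a in zip(mapped, acts))
    return total
```

Tiling changes only the schedule, not the result: the sum over all tiles equals the full dot product.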
-
Patent number: 11861168
Abstract: A management device that may communicate with at least one device is disclosed. The management device may include communication logic to communicate with the devices over a communication channel about data associated with the devices. The management device may also include reception logic that may receive a query from a host. The query may request information from the management device about the devices. The management device may also include transmission logic to send the data about the devices to the host. The host may be configured to send a message to the devices.
Type: Grant
Filed: November 16, 2020
Date of Patent: January 2, 2024
Inventors: Sompong Paul Olarig, Son T. Pham
-
Patent number: 11853209
Abstract: Shared memory workloads using existing network fabrics, including: presenting, by a Memory Mapped Input/Output (MMIO) translator, memory of the MMIO translator as a portion of a memory space of a host; receiving, by the MMIO translator, a first interrupt from an input/output (I/O) adapter; and storing, by the MMIO translator, without sending the first interrupt to an operating system, data associated with the first interrupt from the I/O adapter into the memory of the MMIO translator.
Type: Grant
Filed: June 30, 2020
Date of Patent: December 26, 2023
Assignee: LENOVO ENTERPRISE SOLUTIONS (SINGAPORE) PTE. LTD.
Inventors: Connor B. Reed, Jeffrey R. Hamilton, Clifton E. Kerr
-
Patent number: 11790981
Abstract: Provided is a method of performing an internal processing operation of a memory device in a system including a host device and the memory device. The memory device includes a memory cell array and a processor-in-memory (PIM) performing an internal processing operation. In an internal processing mode, the memory device, via the PIM, performs the internal processing operation based on internal processing information stored in the memory cell array. When the internal processing information is an internal processing operation command indicating a type of the internal processing operation, the memory device outputs the internal processing operation command, including an internal processing read command and an internal processing write command, to the host device. The host device issues to the memory device a priority command determined from among a data transaction command and the internal processing operation command.
Type: Grant
Filed: August 8, 2022
Date of Patent: October 17, 2023
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Pavan Kumar Kasibhatla, Seong-il O, Hak-soo Yu
-
Patent number: 11775437
Abstract: A neural processing device is provided. The neural processing device comprises: a processing unit configured to perform calculations, an L0 memory configured to receive data from the processing unit and provide data to the processing unit, and an LSU (Load/Store Unit) configured to perform load and store operations of the data, wherein the LSU comprises: a neural core load unit configured to issue a load instruction of the data, a neural core store unit configured to issue a store instruction for transmitting and storing the data, and a sync ID logic configured to provide a sync ID to the neural core load unit and the neural core store unit to thereby cause a synchronization signal to be generated for each sync ID.
Type: Grant
Filed: November 18, 2022
Date of Patent: October 3, 2023
Assignee: Rebellions Inc.
Inventors: Jinseok Kim, Jinwook Oh, Donghan Kim