Patents Examined by Christopher B. Shin
-
Patent number: 12032824
Abstract: An event log management technique may include determining that a new event associated with a storage device has occurred, determining whether a new event log can be stored in an event log chunk stored in an event log buffer, and deleting a number of old event logs, starting from the oldest event log of the event log chunk stored in the event log buffer, if the new event log can be stored in the event log chunk. The number of old event logs deleted corresponds to the size of the new event log associated with the new event. The technique may also include storing the new event log starting at the start position of the oldest event log.
Type: Grant
Filed: June 9, 2022
Date of Patent: July 9, 2024
Assignee: SK hynix Inc.
Inventors: Do Geon Park, Soong Sun Shin
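As a rough illustration of the overwrite-the-oldest behavior this abstract describes, the following Python sketch models a fixed-size log chunk; the class name, sizes, and log representation are illustrative assumptions, not the patented implementation:

```python
class EventLogChunk:
    """Fixed-capacity log chunk: new logs evict the oldest entries."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.logs = []   # list of (event_id, size), oldest first
        self.used = 0

    def store(self, event_id: str, size: int) -> None:
        # Delete just enough old logs, oldest first, to fit the new log;
        # the new log then occupies the freed (oldest) start position.
        while self.logs and self.used + size > self.capacity:
            _, old_size = self.logs.pop(0)
            self.used -= old_size
        self.logs.append((event_id, size))
        self.used += size
```

Here the number of deleted logs falls out of the loop condition: eviction continues until the space freed covers the size of the new log, matching the abstract's "number of old event logs being deleted corresponds to a size of a new event log."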
-
Patent number: 12019909
Abstract: Disclosed is an IO request pipeline processing device. The device mainly includes an IO state buffer and a pipeline controller. The IO state buffer includes multiple elements for storing context information, including a module calling sequence generated by a CPU; the pipeline controller is configured to perform pipeline control on an IO request according to the context information. By managing the IO processing state in pipeline fashion with dedicated hardware modules, the device offloads the heavy workload of the original CPU-software control process and reduces the demands on CPU design. At the same time, the processing logic of the pipeline controller is triggered by the module calling sequence recorded in the IO state buffer, which may reduce power consumption and improve efficiency.
Type: Grant
Filed: September 28, 2021
Date of Patent: June 25, 2024
Assignee: INSPUR SUZHOU INTELLIGENT TECHNOLOGY CO., LTD.
Inventor: Bo Zhang
-
Patent number: 12001370
Abstract: A device in an interconnect network is provided. The device comprises an end point processor comprising end point memory and an interconnect network link in communication with an interconnect network switch. The device is configured to issue, by the end point processor, a request to send data from the end point memory to other end point memory of another end point processor of another device in the interconnect network, and to provide, to the interconnect network switch, the request using memory addresses from a global memory address map which comprises a first global memory address range for the end point processor and a second global memory address range for the other end point processor.
Type: Grant
Filed: December 30, 2021
Date of Patent: June 4, 2024
Assignee: ADVANCED MICRO DEVICES, INC.
Inventor: Brock A. Taylor
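A global memory address map of the kind this abstract describes can be sketched as a table of non-overlapping per-endpoint ranges; a switch can then route any request by a simple range lookup. The endpoint names, range sizes, and method names below are assumptions for illustration:

```python
class GlobalAddressMap:
    """One global address range per end point; routing is a range lookup."""

    def __init__(self, ranges):
        # ranges: {endpoint: (base, size)}, assumed non-overlapping
        self.ranges = ranges

    def to_global(self, endpoint, offset):
        # Translate an endpoint-local offset into the global address space.
        base, size = self.ranges[endpoint]
        assert 0 <= offset < size, "offset outside endpoint's range"
        return base + offset

    def owner(self, global_addr):
        # The switch finds the endpoint whose range contains the address.
        for ep, (base, size) in self.ranges.items():
            if base <= global_addr < base + size:
                return ep
        raise ValueError("unmapped global address")
```

The design choice mirrored here is that no per-request routing table is needed: ownership is implicit in the address itself.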
-
Patent number: 12001722
Abstract: There is provided an apparatus, method, and computer-readable medium. The apparatus comprises interconnect circuitry to couple a device to one or more processing elements and to one or more storage structures. The apparatus also comprises stashing circuitry configured to receive stashing transactions from the device, each stashing transaction comprising payload data and control data. The stashing circuitry is responsive to a given stashing transaction, whose control data identifies a plurality of portions of the payload data, to perform a plurality of independent stashing decision operations, each corresponding to a respective portion of the payload data and comprising determining, with reference to the control data, whether to direct the respective portion to one of the one or more storage structures or to forward the respective portion to memory.
Type: Grant
Filed: August 18, 2022
Date of Patent: June 4, 2024
Assignee: Arm Limited
Inventors: Pavel Shamis, Honnappa Nagarahalli, Jamshed Jalal
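The per-portion decision the abstract describes can be modeled very simply: each payload portion is routed independently, either to a named storage structure or to memory as the fallback. The control-data encoding below (an index-to-target mapping) is our assumption, not Arm's actual format:

```python
def stash_transaction(portions, control):
    """Independently decide, for each payload portion, stash vs. memory.

    portions: list of payload portions
    control:  {portion_index: storage_structure_name}; portions absent
              from the mapping are forwarded to memory.
    """
    placed = {}
    for i, data in enumerate(portions):
        # Each portion gets its own stashing decision, per the control data.
        target = control.get(i, "memory")
        placed.setdefault(target, []).append(data)
    return placed
```

A usage example: stashing only a packet header into a cache-like structure while the body and checksum go to memory.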
-
Patent number: 11989143
Abstract: Described herein are systems, methods, and products utilizing a cache coherent switch on chip. The cache coherent switch on chip may utilize the Compute Express Link (CXL) interconnect open standard and allow for multi-host access and the sharing of resources. The cache coherent switch on chip provides for resource sharing between components independent of a system processor, removing the system processor as a bottleneck. The cache coherent switch on chip may further allow for cache coherency between various different components. Thus, for example, memories, accelerators, and/or other components within the disclosed systems may each maintain caches, and the systems and techniques described herein allow for cache coherency between the different components of the system with minimal latency.
Type: Grant
Filed: June 28, 2022
Date of Patent: May 21, 2024
Assignee: Avago Technologies International Sales Pte. Limited
Inventors: Shreyas Shah, George Apostol, Jr., Nagarajan Subramaniyan, Jack Regula, Jeffrey S. Earl
-
Patent number: 11989416
Abstract: A computing device includes a system-on-a-chip. The computing device comprises a network interface controller (NIC) that hosts a plurality of virtual functions and physical functions. Two or more compute nodes are coupled to the NIC. Each compute node is configured to operate a plurality of Virtual Machines (VMs), and each VM is configured to operate in conjunction with a virtual function via a virtual function driver. A dedicated VM operates in conjunction with a virtual NIC using a physical function hosted by the NIC, via a physical function driver hosted by the compute node. The computing device further comprises a fabric manager configured to own a physical function of the NIC, to bind virtual functions hosted by the NIC to individual compute nodes, and to pool I/O devices across the two or more compute nodes.
Type: Grant
Filed: October 24, 2022
Date of Patent: May 21, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Siamak Tavallaei, Ishwar Agarwal
-
Patent number: 11983443
Abstract: An apparatus includes a plurality of data buffer and multiplexer devices that communicate data signals with the host memory controller at twice the clock rate at which the data buffer and multiplexer devices communicate first and second data signals with first and second memory modules. The apparatus further includes a registered clock driver that communicates host command and address signals at twice the clock rate at which the registered clock driver communicates first and second command and address signals with the first and second memory modules. The second data signals and second command and address signals may be directed to a data conversion module that converts the signals to be communicated over a serial computer expansion bus with the second memory module.
Type: Grant
Filed: September 30, 2022
Date of Patent: May 14, 2024
Inventor: Jonathan Hinkle
-
Patent number: 11983423
Abstract: Methods, systems, and devices for host recovery from a stuck condition of a memory system are described. The host system may transmit a first command for the memory system to transition from a first power mode to a second power mode (e.g., a low-power mode). In some cases, the host system may transmit a second command for the memory system to exit the second power mode shortly after transmitting the first command. The host system may activate a timer associated with a time-out condition for exiting the second power mode and may determine that the duration indicated by the timer has expired. In some examples, the host system may transmit a third command for the memory system to perform a hardware reset operation based on determining that the duration of the timer has expired.
Type: Grant
Filed: January 19, 2022
Date of Patent: May 14, 2024
Assignee: Micron Technology, Inc.
Inventors: Deping He, Jonathan S. Parry
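The host-side time-out flow in this abstract (poll for the mode exit, reset on expiry) can be sketched with injected callables; every name and parameter here is an assumption made for testability, not a real host-controller API:

```python
def await_power_mode_exit(poll_ready, timeout_s, now, reset):
    """After commanding the memory system to exit low-power mode, poll
    until it reports ready or until the timer expires; on expiry, issue
    the hardware-reset command.

    poll_ready: callable returning True once the memory system has exited
    now:        callable returning the current time in seconds
    reset:      callable issuing the hardware reset (the "third command")
    """
    deadline = now() + timeout_s
    while now() < deadline:
        if poll_ready():
            return "exited"
    reset()  # time-out condition met: recover via hardware reset
    return "reset"
```

The test below drives the sketch with a fake clock that advances on every poll, so the time-out path is exercised deterministically.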
-
Patent number: 11983110
Abstract: A storage circuit, a chip, a data processing method, and an electronic device are disclosed. The storage circuit includes an input control circuit and a memory. The input control circuit is configured to: receive n input data and an input control signal; perform first data processing on the n input data based on the input control signal to obtain n intermediate data in one-to-one correspondence with the n input data; and write the n intermediate data, together with a sign signal corresponding to the n input data, into the memory. The memory is configured to store the n intermediate data and the sign signal. Different values of the sign signal represent different processing processes of the first data processing, and n is a positive integer.
Type: Grant
Filed: June 27, 2022
Date of Patent: May 14, 2024
Assignee: Lemon Inc.
Inventors: Junmou Zhang, Dongrong Zhang, Shan Lu, Jian Wang
-
Patent number: 11977756
Abstract: A computer device, a setting method for a memory module, and a mainboard are provided. The computer device includes a memory module, a processor, and the mainboard. A basic input output system (BIOS) of the mainboard stores a custom extreme memory profile (XMP). When the processor executes the BIOS so that the computer device displays a user interface (UI), the BIOS displays, through the UI, the multiple default XMPs stored in the memory module together with the custom XMP. The BIOS then stores either one of the default XMPs or the custom XMP to the memory module, according to which of them is selected on the UI.
Type: Grant
Filed: March 16, 2022
Date of Patent: May 7, 2024
Assignee: GIGA-BYTE TECHNOLOGY CO., LTD.
Inventors: Chia-Chih Chien, Sheng-Liang Kao, Chen-Shun Chen, Chieh-Fu Chung, Hua-Yi Wu
-
Patent number: 11966330
Abstract: Examples described herein relate to processor circuitry that issues a cache coherence message to a central processing unit (CPU) cluster by selecting a target cluster and issuing the request to that target cluster, wherein the target cluster is the cluster itself or is directly connected to the cluster. In some examples, the selected target cluster is associated with a minimum number of die boundary traversals. In some examples, the processor circuitry reads an address range for the cluster to identify the target cluster using a single range check over memory regions including local and remote clusters. In some examples, issuing the cache coherence message to a cluster causes the message to traverse one or more die interconnections to reach the target cluster.
Type: Grant
Filed: June 5, 2020
Date of Patent: April 23, 2024
Assignee: Intel Corporation
Inventors: Vinit Mathew Abraham, Jeffrey D. Chamberlain, Yen-Cheng Liu, Eswaramoorthi Nallusamy, Soumya S. Eachempati
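The selection rule described here (send to the destination cluster or to a cluster directly connected to it, preferring the fewest die-boundary crossings) can be sketched as follows; the adjacency encoding, the die layout, and the 0/1 crossing cost are all illustrative assumptions:

```python
def pick_target_cluster(dest, adjacency, die_of, source_die):
    """Choose where to issue a coherence message destined for `dest`.

    Candidates are the destination cluster itself plus any cluster
    directly connected to it; pick the candidate whose die matches the
    source die (zero boundary crossings) when one exists.
    """
    candidates = [dest] + adjacency.get(dest, [])

    def crossings(cluster):
        # Simplified cost: 0 if on the same die as the sender, else 1.
        return 0 if die_of[cluster] == source_die else 1

    return min(candidates, key=crossings)
```

In the usage below, a sender on die 0 targeting cluster `c1` (die 1) routes via `c2` on its own die, while a sender already on die 1 sends to `c1` directly.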
-
Patent number: 11960942
Abstract: A method, computer program product, and computing system for receiving a plurality of lock sequences associated with a plurality of objects of the computing device. A plurality of matrices may be generated for each lock sequence of the plurality of lock sequences, thus defining a plurality of lock sequence matrix towers. The plurality of lock sequence matrix towers may be combined, thus defining a combined lock sequence matrix tower. One or more lock sequence conflicts may be identified within the plurality of lock sequences based upon, at least in part, the combined lock sequence matrix tower.
Type: Grant
Filed: April 12, 2021
Date of Patent: April 16, 2024
Assignee: EMC IP Holding Company, LLC
Inventors: Ming Zhang, Lei Gao, Wai Chuen Yim
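The core idea, combining per-sequence ordering information to find locks acquired in contradictory orders, can be sketched without the matrix-tower machinery; a set of ordered pairs stands in for the combined matrix tower, and that simplification is ours, not the patent's:

```python
def lock_order_conflicts(sequences):
    """Report lock pairs that some sequence orders one way and another
    sequence orders the opposite way (a classic deadlock precondition).

    sequences: list of lock-acquisition orderings, e.g. [["A", "B"], ...]
    """
    before = set()  # (a, b) means lock a was acquired before lock b
    for seq in sequences:
        for i, a in enumerate(seq):
            for b in seq[i + 1:]:
                before.add((a, b))
    # A conflict is any pair present in both directions in the combined relation.
    return sorted({tuple(sorted(p)) for p in before if (p[1], p[0]) in before})
```

For example, sequences `A→B` and `B→A` conflict, while `A→B` and `A→C` do not.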
-
Patent number: 11947469
Abstract: Embodiments herein describe partitioning an acceleration device based on the needs of each user application executing in a host. In one embodiment, a flexible queue provisioning method allows the acceleration device to be dynamically partitioned by management software running in a trusted zone pushing the configuration to the device through a control command queue. The new configuration is parsed and verified by trusted firmware, which then creates isolated IO command queues on the acceleration device. These IO command queues can be directly mapped to a user application, VM, or other PCIe devices. In one embodiment, each IO command queue exposes only the compute resources assigned by the trusted firmware in the acceleration device.
Type: Grant
Filed: February 18, 2022
Date of Patent: April 2, 2024
Assignee: XILINX, INC.
Inventors: Cheng Zhen, Sonal Santan, Min Ma, Chien-Wei Lan
-
Patent number: 11947995
Abstract: A multilevel memory system includes a nonvolatile memory (NVM) device with NVM media whose media write unit differs in size from the host write unit of the host controller of the system containing the multilevel memory system. The memory device includes a media controller that controls writes to the NVM media. The host controller sends a write transaction to the media controller. The write transaction can include the write data in host write units, while the media controller commits data to the NVM media in media write units. The media controller can send a transaction message to indicate whether the write data for the write transaction was successfully committed to the NVM media.
Type: Grant
Filed: May 19, 2020
Date of Patent: April 2, 2024
Assignee: Intel Corporation
Inventors: Kuan Hua Tan, Sahar Khalili, Eng Hun Ooi, Shrinivas Venkatraman, Dimpesh Patel
-
Patent number: 11934330
Abstract: Examples described herein relate to an offload processor that receives data for transmission using a network interface, or data received in a packet by a network interface. In some examples, the offload processor can include a packet storage controller that determines whether to store data in a buffer of the offload processor or in system memory after processing by the offload processor. In some examples, the determination of whether to store data in a buffer of the offload processor or in system memory is based on one or more of: available buffer space, a latency limit associated with the data, a priority associated with the data, or available bandwidth through an interface between the buffer and the system memory. In some examples, the offload processor receives a descriptor and specifies the storage location of the data in the descriptor, wherein the storage location is within the buffer or the system memory.
Type: Grant
Filed: May 13, 2020
Date of Patent: March 19, 2024
Assignee: Intel Corporation
Inventors: Patrick G. Kutch, Andrey Chilikin
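A placement decision of the kind listed in this abstract can be sketched as a small policy function over the stated inputs; the thresholds, parameter names, and decision order are assumptions for illustration, not Intel's actual policy:

```python
def choose_store(size, free_buffer, latency_limit_us,
                 buffer_latency_us, memory_latency_us):
    """Decide whether processed packet data stays in the offload
    processor's buffer or is written out to system memory.
    """
    if size <= free_buffer and memory_latency_us > latency_limit_us:
        return "buffer"        # a trip to system memory would miss the budget
    if size <= free_buffer and buffer_latency_us <= memory_latency_us:
        return "buffer"        # buffer has room and is the cheaper option
    return "system_memory"     # no room in the buffer: spill to memory
```

The storage location returned here would then be recorded in the packet's descriptor, as the abstract describes.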
-
Patent number: 11928053
Abstract: A system controller determines a to-be-collected first logical chunk group. The first logical chunk group includes a first data logical chunk located in a first solid state disk of the plurality of solid state disks. Valid data is stored at a first logical address in the first logical chunk group, and there is a correspondence between the first logical address and the actual address at which the valid data is stored. The system controller creates a second logical chunk group. At least one second data logical chunk in the second logical chunk group is distributed in the solid state disk in which the first data logical chunk storing the valid data is located, in order to ensure that the valid data is migrated from the first logical chunk group to the second logical chunk group while the actual address of the valid data remains unchanged.
Type: Grant
Filed: September 15, 2020
Date of Patent: March 12, 2024
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Guiyou Pu, Yang Liu, Qiang Xue
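The point of this scheme, garbage collection that remaps valid data to a new logical chunk group without physically moving it, can be sketched as a pure bookkeeping operation on a logical-to-actual mapping; the data layout and key names below are illustrative assumptions:

```python
def collect_chunk_group(mapping, old_group, new_group):
    """Migrate valid data logically from old_group to new_group.

    mapping: {group_name: {logical_addr: actual_addr}}
    The actual address of each entry is carried over untouched, so no
    flash data is rewritten; only the mapping changes.
    """
    for logical_addr, actual_addr in list(mapping[old_group].items()):
        mapping[new_group][logical_addr] = actual_addr  # same actual address
    mapping[old_group].clear()  # the old group can now be reclaimed
    return mapping
```

This captures why placing the new group's chunks on the same disks matters: the actual addresses stay valid, avoiding the write amplification of a copying collector.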
-
Patent number: 11921647
Abstract: A system can include a plurality of sequencers, each configured to provide a number of sequenced output signals responsive to assertion of a respective sequencer enable signal provided thereto. The system can include chaining circuitry coupled to the plurality of sequencers. The chaining circuitry can comprise logic to: responsive to assertion of a primary enable signal received thereby, assert the respective sequencer enable signals provided to the plurality of sequencers in accordance with a first sequence; and responsive to deassertion of the primary enable signal, assert the respective sequencer enable signals provided to the plurality of sequencers in accordance with a second sequence.
Type: Grant
Filed: January 3, 2023
Date of Patent: March 5, 2024
Assignee: Micron Technology, Inc.
Inventors: Keith A. Benjamin, Thomas Dougherty
-
Patent number: 11919162
Abstract: An identification (ID) number setting method for a modular device that comprises a master building element and a plurality of slave building elements connected to the master building element includes: disconnecting the slave building elements from the master building element; setting the ID numbers of all of the slave building elements to a preset ID number; and assigning new ID numbers to slave building elements of N tiers connected to one output interface of the master building element, in order from the first tier to the Nth tier, wherein the slave building elements of the first tier are those directly connected to the output interface, the slave building elements of the Nth tier are those indirectly connected to the output interface through slave building elements of the (N-1)th tier, and N is a natural number greater than 1.
Type: Grant
Filed: December 24, 2020
Date of Patent: March 5, 2024
Assignee: UBTECH ROBOTICS CORP LTD
Inventors: Wei He, Youjun Xiong
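The tier-by-tier assignment described here is essentially a breadth-first numbering of the element tree behind the master's output interface. The sketch below assumes the topology is known as a children mapping, which is our encoding, not the patent's discovery protocol:

```python
def assign_ids(children, root, preset_id=0):
    """Assign new IDs tier by tier, starting from preset_id + 1.

    children: {element: [elements connected directly behind it]}
    Tier 1 is everything directly on the master's output interface;
    tier N is reached only through tier N-1, matching the abstract.
    """
    ids = {}
    next_id = preset_id + 1
    tier = list(children.get(root, []))  # tier 1: directly connected
    while tier:
        nxt = []
        for elem in tier:
            ids[elem] = next_id
            next_id += 1
            nxt.extend(children.get(elem, []))  # collect the next tier
        tier = nxt
    return ids
```

In the usage below, two first-tier elements get IDs 1 and 2, and the element behind the first of them gets ID 3.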
-
Patent number: 11914551
Abstract: The present application discloses a pre-reading method and system for a kernel client, and a computer-readable storage medium. The method includes: receiving a reading request for a file and determining whether the reading of the file is continuous; if the reading of the file is discontinuous, generating a head node for the file's inode and constructing a linked list embedded in the head node; determining whether the file includes a reading rule and, if it does, acquiring, based on the reading rule, the number of reading requests for the file and the reading offset corresponding to each request, generating a map route based on the number of reading requests and the corresponding reading offsets, and storing the map route in the linked list; and executing pre-reading based on the linked list.
Type: Grant
Filed: November 30, 2021
Date of Patent: February 27, 2024
Assignee: INSPUR SUZHOU INTELLIGENT TECHNOLOGY CO., LTD.
Inventor: Yamao Xue
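The bookkeeping this method describes, detecting discontinuous reads and recording a replayable "map route" of offsets, can be sketched as follows; the deque standing in for the embedded linked list, and the structure names, are illustrative assumptions:

```python
from collections import deque

def build_map_route(read_offsets, block_size):
    """Return None for continuous reads (ordinary readahead suffices);
    otherwise return the request count and a route of offsets that a
    later pre-reading pass can prefetch along.
    """
    sequential = all(b - a == block_size
                     for a, b in zip(read_offsets, read_offsets[1:]))
    if sequential:
        return None  # continuous reading: no map route needed
    return {"count": len(read_offsets), "route": deque(read_offsets)}
```

The design intuition is that a workload with a stable but non-sequential access pattern can still be prefetched effectively once its route is recorded.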
-
Patent number: 11907814
Abstract: A system and method for machine learning. The system includes a GPU with a GPU memory, and a key value storage device connected to the GPU memory. The method includes writing, by the GPU, a key value request to a key value request queue in an input-output region of the GPU memory, the key value request including a key. The method further includes reading, by the key value storage device, the key value request from the key value request queue, and writing, by the key value storage device, in response to the key value request, a value to the input-output region of the GPU memory, the value corresponding to the key of the key value request.
Type: Grant
Filed: November 22, 2021
Date of Patent: February 20, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Joo Hwan Lee, Yang Seok Ki
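The request/response flow in this abstract can be modeled in a few lines: the "GPU" enqueues keys into a shared region, and the "storage device" drains the queue and writes the matching values back into that region. A dict plays the key-value store and a deque plays the request queue; all names here are assumptions, not Samsung's interface:

```python
from collections import deque

class KeyValueQueue:
    """Toy model of a key-value request queue in a shared IO region."""

    def __init__(self, store):
        self.store = store        # the key-value storage device's contents
        self.requests = deque()   # request queue in the IO region
        self.io_region = {}       # where the device writes values back

    def gpu_request(self, key):
        # GPU side: write a key-value request into the request queue.
        self.requests.append(key)

    def device_service(self):
        # Device side: read each request and write the value to the IO region.
        while self.requests:
            key = self.requests.popleft()
            self.io_region[key] = self.store.get(key)
```

The design point this models is that the GPU and storage device exchange data through GPU memory directly, without a round trip through the host CPU.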