Patents Examined by Idriss N Alrobaye
  • Patent number: 11876702
    Abstract: A network interface controller (NIC) capable of facilitating efficient memory address translation is provided. The NIC can be equipped with a host interface, a cache, and an address translation unit (ATU). During operation, the ATU can determine an operating mode. The operating mode can indicate whether the ATU is to perform a memory address translation at the NIC. The ATU can then determine whether a memory address indicated in a memory access request is available in the cache. If the memory address is not available in the cache, the ATU can perform an operation on the memory address based on the operating mode.
    Type: Grant
    Filed: March 23, 2020
    Date of Patent: January 16, 2024
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Abdulla M. Bataineh, Thomas L. Court, Hess M. Hodge
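    Illustrative sketch: the ATU flow described in the abstract above (look the address up in the cache, and on a miss act according to the operating mode) can be pictured with a short C sketch. All names here (atu_mode_t, atu_cache_lookup, the XOR stand-in for a local page-table walk) are invented for illustration and are not the patented implementation.
      /* Hypothetical sketch of an ATU cache lookup (not HPE's implementation). */
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      typedef enum { ATU_TRANSLATE_AT_NIC, ATU_FORWARD_TO_HOST } atu_mode_t;

      #define CACHE_ENTRIES 8

      typedef struct { uint64_t va, pa; bool valid; } atu_entry_t;

      static atu_entry_t cache[CACHE_ENTRIES];

      /* Look up a virtual address in the small translation cache. */
      static bool atu_cache_lookup(uint64_t va, uint64_t *pa)
      {
          for (int i = 0; i < CACHE_ENTRIES; i++)
              if (cache[i].valid && cache[i].va == va) { *pa = cache[i].pa; return true; }
          return false;
      }

      /* Handle a memory access request: hit -> use cached translation;
       * miss -> act according to the operating mode. */
      static uint64_t atu_handle_request(uint64_t va, atu_mode_t mode)
      {
          uint64_t pa;
          if (atu_cache_lookup(va, &pa))
              return pa;                          /* cache hit */
          if (mode == ATU_TRANSLATE_AT_NIC)
              return va ^ 0xFFF0000000000000ULL;  /* stand-in for a local page-table walk */
          return va;                              /* forward untranslated; host translates */
      }

      int main(void)
      {
          cache[0] = (atu_entry_t){ .va = 0x1000, .pa = 0xA000, .valid = true };
          printf("hit:  %#llx\n", (unsigned long long)atu_handle_request(0x1000, ATU_TRANSLATE_AT_NIC));
          printf("miss: %#llx\n", (unsigned long long)atu_handle_request(0x2000, ATU_FORWARD_TO_HOST));
          return 0;
      }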
  • Patent number: 11874784
    Abstract: A memory device of a memory module includes a command/address (CA) buffer that receives a CA signal through a bus shared with a different memory device of the memory module, and a calibration logic circuit that identifies location information of the memory device on the bus. The memory device recognizes its own location on the bus in the memory module and performs self-calibration accordingly, so it operates correctly even under operating conditions that vary with its location in the memory module.
    Type: Grant
    Filed: December 27, 2022
    Date of Patent: January 16, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Heon Su Jeong, Hangi Jung, Wangsoo Kim, Hae Young Chung
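    Illustrative sketch: the location-aware self-calibration idea can be pictured as a device reading its position identifier and choosing a position-dependent timing trim. The rank_id parameter and the delay table below are assumptions made for illustration, not Samsung's calibration logic.
      /* Hypothetical sketch: a device derives its position on a shared CA bus
       * and picks a timing offset accordingly. */
      #include <stdint.h>
      #include <stdio.h>

      /* Per-position delay trim (arbitrary steps): devices farther from the driver
       * see more flight time and compensate with less added delay. */
      static const uint8_t delay_trim_steps[4] = { 6, 4, 2, 0 };

      static uint8_t self_calibrate(uint8_t rank_id)
      {
          return delay_trim_steps[rank_id & 0x3];   /* location-dependent trim */
      }

      int main(void)
      {
          for (uint8_t id = 0; id < 4; id++)
              printf("device %u on shared CA bus -> delay trim %u steps\n",
                     (unsigned)id, (unsigned)self_calibrate(id));
          return 0;
      }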
  • Patent number: 11874779
    Abstract: A data bus is determined to be in a write mode. It is then determined whether a number of memory queues that identify at least one write operation satisfies a threshold criterion. The memory queues include identifiers of one or more write operations and identifiers of one or more read operations. Responsive to determining that the number of memory queues satisfies the threshold criterion, a write operation from the memory queues is transmitted over the data bus.
    Type: Grant
    Filed: December 4, 2020
    Date of Patent: January 16, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Wei Wang, Jiangli Zhu, Ying Yu Tai, Samir Mittal
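    Illustrative sketch: the threshold decision in the abstract can be modeled as counting how many queues currently hold at least one write and comparing that count against a threshold while the bus is in write mode. The structures and names below are invented; this is not Micron's scheduler.
      /* Hypothetical sketch of the write-mode threshold check. */
      #include <stdbool.h>
      #include <stddef.h>
      #include <stdio.h>

      typedef struct { int pending_writes; int pending_reads; } mem_queue_t;

      /* Count queues that identify at least one write operation. */
      static size_t queues_with_writes(const mem_queue_t *q, size_t n)
      {
          size_t count = 0;
          for (size_t i = 0; i < n; i++)
              if (q[i].pending_writes > 0) count++;
          return count;
      }

      /* In write mode, transmit a write only when the count meets the threshold. */
      static bool should_issue_write(const mem_queue_t *q, size_t n, size_t threshold)
      {
          return queues_with_writes(q, n) >= threshold;
      }

      int main(void)
      {
          mem_queue_t queues[4] = { {2, 0}, {0, 3}, {1, 1}, {0, 0} };
          printf("issue write? %s\n", should_issue_write(queues, 4, 2) ? "yes" : "no");
          return 0;
      }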
  • Patent number: 11868297
    Abstract: A far-end data migration device and method based on an FPGA cloud platform are provided. The device includes a server, a switch, and a plurality of FPGA acceleration cards. The server transmits data to be accelerated to the FPGA acceleration cards by means of the switch. The FPGA acceleration cards are configured to perform a primary and/or secondary acceleration on the data, and are configured to migrate the accelerated data. The method includes: transmitting data to be accelerated to an FPGA acceleration card from a server by means of a switch; performing, by the FPGA acceleration card, a primary and/or secondary acceleration on the data to be accelerated; and migrating, by the FPGA acceleration card, the accelerated data.
    Type: Grant
    Filed: August 25, 2020
    Date of Patent: January 9, 2024
    Assignee: INSPUR SUZHOU INTELLIGENT TECHNOLOGY CO., LTD.
    Inventors: Jiangwei Wang, Rui Hao, Hongwei Kan
  • Patent number: 11869113
    Abstract: Apparatuses including general-purpose graphics processing units and graphics multiprocessors that exploit queues or transitional buffers for improved low-latency high-bandwidth on-die data retrieval are disclosed. In one embodiment, a graphics multiprocessor includes at least one compute engine to provide a request, a queue or transitional buffer, and logic coupled to the queue or transitional buffer. The logic is configured to cause a request to be transferred to a queue or transitional buffer for temporary storage without processing the request and to determine whether the queue or transitional buffer has a predetermined amount of storage capacity.
    Type: Grant
    Filed: December 7, 2021
    Date of Patent: January 9, 2024
    Assignee: Intel Corporation
    Inventors: Aravindh Anantaraman, Altug Koker, Varghese George, Subramaniam Maiyuran, SungYe Kim, Valentin Andrei
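    Illustrative sketch: the queue/transitional-buffer behavior can be pictured as parking requests unprocessed in a small fixed-size buffer and checking whether the buffer has reached a predetermined fill level. The buffer size and the DRAIN_THRESHOLD constant below are assumptions for illustration, not Intel's GPU logic.
      /* Hypothetical sketch of parking requests in a transitional buffer until it
       * reaches a predetermined fill level. */
      #include <stdbool.h>
      #include <stdio.h>

      #define BUF_CAPACITY 8
      #define DRAIN_THRESHOLD 4   /* "predetermined amount of storage capacity" */

      typedef struct { int requests[BUF_CAPACITY]; int count; } transit_buf_t;

      /* Store a request without processing it; report whether it fit. */
      static bool buf_push(transit_buf_t *b, int req)
      {
          if (b->count >= BUF_CAPACITY) return false;
          b->requests[b->count++] = req;
          return true;
      }

      /* Drain-eligibility check: has the buffer reached the threshold? */
      static bool buf_ready_to_drain(const transit_buf_t *b)
      {
          return b->count >= DRAIN_THRESHOLD;
      }

      int main(void)
      {
          transit_buf_t b = { .count = 0 };
          for (int req = 1; req <= 5; req++) {
              buf_push(&b, req);
              printf("after request %d: ready to drain? %s\n",
                     req, buf_ready_to_drain(&b) ? "yes" : "no");
          }
          return 0;
      }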
  • Patent number: 11868301
    Abstract: A computer system includes symmetrical sets of motherboard serial channels which couple processor devices on a motherboard with a common serial link interface. The common serial link interface can be coupled with an endpoint device to establish symmetrical serial links between the endpoint device and the processor devices. The computer system can include a riser card which can be coupled with the serial link interface. The riser card can include an endpoint device interface and serial channels which can couple the processor devices with the endpoint device via symmetrical limited selections of the motherboard serial channels. The riser can include additional interfaces which can couple the processor devices with additional expansion devices.
    Type: Grant
    Filed: March 25, 2015
    Date of Patent: January 9, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Darin Lee Frink, Michael Jon Moen, Christopher Nathan Watson
  • Patent number: 11868654
    Abstract: A semiconductor device includes: a nonvolatile memory including first memory cells and second memory cells; a bit latch; and a saved register. In a first writing operation, first writing data are stored in the bit latch and the saved register, and writing to the first memory cells is executed based on the first writing data. During the first writing operation, the first writing operation is interrupted based on a suspension command, and a second writing operation is executed. In the second writing operation, second writing data are stored in the bit latch, and writing to the second memory cells is executed based on the second writing data. After the second writing operation is ended, the first writing data are restored to the bit latch based on a resume command, and the interrupted first writing operation is restarted based on the first writing data restored to the bit latch.
    Type: Grant
    Filed: May 12, 2021
    Date of Patent: January 9, 2024
    Assignee: RENESAS ELECTRONICS CORPORATION
    Inventors: Takanori Moriyasu, Kazuo Yoshihara, Takayuki Nishiyama
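    Illustrative sketch: the suspend/resume sequence can be pictured with a bit latch that is reused by the second write and a saved register that preserves the first write data so the bit latch can be reloaded on resume. Variable names and the print statements are illustrative assumptions, not the Renesas design.
      /* Hypothetical sketch of the suspend/resume write flow. */
      #include <stdint.h>
      #include <stdio.h>

      static uint8_t bit_latch;       /* data currently being programmed */
      static uint8_t saved_register;  /* copy that survives a suspension */

      static void start_first_write(uint8_t data)
      {
          bit_latch = data;
          saved_register = data;      /* keep a copy for the resume path */
          printf("first write started with 0x%02X\n", bit_latch);
      }

      static void suspend_and_run_second_write(uint8_t data)
      {
          bit_latch = data;           /* bit latch is reused for the second write */
          printf("first write suspended; second write programs 0x%02X\n", bit_latch);
      }

      static void resume_first_write(void)
      {
          bit_latch = saved_register; /* restore first write data to the bit latch */
          printf("first write resumed with 0x%02X\n", bit_latch);
      }

      int main(void)
      {
          start_first_write(0xA5);
          suspend_and_run_second_write(0x3C);
          resume_first_write();
          return 0;
      }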
  • Patent number: 11861188
    Abstract: A storage system, blades, removable modules, and method of configuring a storage system are described. The storage system has blades with computing resources and storage resources. At least one of the blades has, or has added, one or more removable modules.
    Type: Grant
    Filed: December 30, 2020
    Date of Patent: January 2, 2024
    Assignee: PURE STORAGE, INC.
    Inventors: Hari Kannan, Yuhong Mao, Mark Heuchert
  • Patent number: 11860809
    Abstract: A computing device includes: a housing defining an exterior of the computing device; a controller supported within the housing; a first communication port disposed on the exterior; a second communication port disposed on the exterior; a port-sharing subsystem supported within the housing, having (i) a first state to connect the controller with the first communication port, exclusive of the second communication port, and (ii) a second state to connect the controller with the first communication port and the second communication port; the controller configured to: detect engagement of an external device with the first communication port; obtain connection parameters from the external device; based on the connection parameters, set the port-sharing subsystem in either the first state or the second state; and establish a connection to the external device via the port-sharing subsystem and the first communication port.
    Type: Grant
    Filed: December 3, 2021
    Date of Patent: January 2, 2024
    Assignee: Zebra Technologies Corporation
    Inventor: Michael Robustelli
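    Illustrative sketch: the state selection can be pictured as reading connection parameters from the newly attached device and choosing between a first-port-only state and a both-ports state. The conn_params_t struct and its wants_second_port field are invented for illustration and are not Zebra's interface.
      /* Hypothetical sketch of choosing the port-sharing state from connection parameters. */
      #include <stdbool.h>
      #include <stdio.h>

      typedef enum { STATE_FIRST_PORT_ONLY, STATE_BOTH_PORTS } share_state_t;

      typedef struct {
          bool wants_second_port;   /* e.g. device exposes a pass-through function */
      } conn_params_t;

      /* Pick the port-sharing state based on what the attached device reports. */
      static share_state_t select_state(conn_params_t p)
      {
          return p.wants_second_port ? STATE_BOTH_PORTS : STATE_FIRST_PORT_ONLY;
      }

      int main(void)
      {
          conn_params_t simple  = { .wants_second_port = false };
          conn_params_t chained = { .wants_second_port = true };
          printf("simple device  -> %s\n",
                 select_state(simple) == STATE_BOTH_PORTS ? "both ports" : "first port only");
          printf("chained device -> %s\n",
                 select_state(chained) == STATE_BOTH_PORTS ? "both ports" : "first port only");
          return 0;
      }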
  • Patent number: 11860802
    Abstract: In accordance with some aspects of the present disclosure, a non-transitory computer readable medium is disclosed. In some embodiments, the non-transitory computer readable medium includes instructions that, when executed by a processor, cause the processor to receive, from a workload hosted on a host of a cluster, first I/O traffic programmed according to a first I/O traffic protocol supported by a cluster-wide storage fabric exposed to the workload as being hosted on the same host. In some embodiments, the workload is recovered by a hypervisor hosted on the same host. In some embodiments, the non-transitory computer readable medium includes the instructions that, when executed by the processor, cause the processor to adapt the first I/O traffic to generate second I/O traffic programmed according to a second I/O traffic protocol supported by a repository external to the storage fabric and forward the second I/O traffic to the repository.
    Type: Grant
    Filed: February 18, 2022
    Date of Patent: January 2, 2024
    Assignee: Nutanix, Inc.
    Inventors: Dezhou Jiang, Kiran Tatiparthi, Monil Devang Shah, Mukul Sharma, Prakash Narayanasamy, Praveen Kumar Padia, Sagi Sai Sruthi, Deepak Narayan
  • Patent number: 11861227
    Abstract: A method of operating a storage device including a non-volatile memory and a multi-core processor with at least two cores includes the following steps: receiving, by a host interface of the storage device, a first command from a host requesting the non-volatile memory to perform a predetermined memory operation; generating, by a task scheduler of the storage device, first and second tasks from the first command; selecting, by the task scheduler, a first core from among the at least two cores based on execution times of the at least two cores; assigning, by the task scheduler, the first and second tasks to the first core; and requesting, by the first core, a subsequent task from the task scheduler while the first core processes the first task and loads code for processing the second task.
    Type: Grant
    Filed: September 17, 2021
    Date of Patent: January 2, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Wan-Soo Choi, Young Wook Kim, Do Hyeon Park
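    Illustrative sketch: the scheduling step can be pictured as splitting one host command into two tasks and assigning both to the core with the smallest accumulated execution time. The cost figures and structures below are assumptions for illustration, not Samsung's controller firmware.
      /* Hypothetical sketch of least-loaded core selection for two tasks from one command. */
      #include <stddef.h>
      #include <stdio.h>

      #define NUM_CORES 2

      static unsigned exec_time_us[NUM_CORES];  /* accumulated execution time per core */

      /* Select the core whose queued execution time is currently the smallest. */
      static size_t pick_core(void)
      {
          size_t best = 0;
          for (size_t c = 1; c < NUM_CORES; c++)
              if (exec_time_us[c] < exec_time_us[best]) best = c;
          return best;
      }

      static void assign_task(size_t core, const char *name, unsigned cost_us)
      {
          exec_time_us[core] += cost_us;
          printf("assigned %-12s to core %zu (load now %u us)\n", name, core, exec_time_us[core]);
      }

      int main(void)
      {
          exec_time_us[0] = 120;          /* core 0 already busy */
          exec_time_us[1] = 40;
          size_t core = pick_core();      /* both tasks from one command go together */
          assign_task(core, "first task", 50);
          assign_task(core, "second task", 30);
          return 0;
      }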
  • Patent number: 11853603
    Abstract: The present disclosure generally relates to host memory buffer (HMB) cache management. The HMB is transient memory and may not always be available. For example, when the link between the data storage device and the host device is not active, the data storage device cannot access the HMB. Placing an HMB log in the HMB controller that is disposed in the data storage device provides access to data that would otherwise be inaccessible in the HMB. The HMB log contains any deltas that have occurred since the last copy to an HMB cache in the memory device or since the link became inactive. The HMB cache mirrors the HMB. In so doing, the data of the HMB is available to the data storage device not only when the link is active, but also when the link is not active.
    Type: Grant
    Filed: November 15, 2021
    Date of Patent: December 26, 2023
    Assignee: Western Digital Technologies, Inc.
    Inventors: Judah Gamliel Hahn, Shay Benisty, Ariel Navon
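    Illustrative sketch: the HMB log idea can be pictured as a small journal of index/value deltas recorded while the link is inactive and later replayed into the device-side HMB cache so the cache keeps mirroring the HMB. The journal layout below is a generic assumption, not Western Digital's controller code.
      /* Hypothetical sketch of a delta log replayed into an HMB cache. */
      #include <stddef.h>
      #include <stdint.h>
      #include <stdio.h>

      #define HMB_WORDS 8
      #define LOG_SLOTS 4

      typedef struct { size_t index; uint32_t value; } delta_t;

      static uint32_t hmb_cache[HMB_WORDS];  /* device-side mirror of the HMB */
      static delta_t  hmb_log[LOG_SLOTS];    /* deltas recorded since the last copy */
      static size_t   log_len;

      /* Record an update that could not be written to the HMB (link inactive). */
      static void log_delta(size_t index, uint32_t value)
      {
          if (log_len < LOG_SLOTS)
              hmb_log[log_len++] = (delta_t){ index, value };
      }

      /* Apply the accumulated deltas to the HMB cache and clear the log. */
      static void replay_log(void)
      {
          for (size_t i = 0; i < log_len; i++)
              hmb_cache[hmb_log[i].index] = hmb_log[i].value;
          log_len = 0;
      }

      int main(void)
      {
          log_delta(2, 0xDEAD);
          log_delta(5, 0xBEEF);
          replay_log();
          printf("hmb_cache[2]=0x%X hmb_cache[5]=0x%X\n",
                 (unsigned)hmb_cache[2], (unsigned)hmb_cache[5]);
          return 0;
      }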
  • Patent number: 11853245
    Abstract: A computing system framework and method for configuration thereof are provided. A plurality of processing modules are accessed. Each processing module includes a plurality of processing nodes, and each processing node is associated with an intra-module port and an inter-module port. A plurality of intra-module networks are formed. Each intra-module network includes connections between at least a portion of the processing nodes in one of the processing modules via the associated intra-module ports. An enclosed shape of the processing modules is formed by connecting one inter-module port on each processing module to one inter-module port on an adjacent processing module. A cable links one of the inter-module ports of one processing module of the enclosed shape to an inter-module port of another processing module in a different group of interconnected processing modules.
    Type: Grant
    Filed: June 8, 2020
    Date of Patent: December 26, 2023
    Assignee: XEROX CORPORATION
    Inventor: Daniel Davies
  • Patent number: 11853233
    Abstract: Described are an information handling system, peripheral devices, and methods to connect the information handling system, or host, to the peripheral devices. Physical connections connect the host with the one or more peripheral devices and the peripheral devices with one another. Electrical and communication connections connect the host with the one or more peripheral devices and the peripheral devices with one another. Power and power management are provided by the host through the electrical connection. An input from the host to the peripheral devices is used to establish communication flow between the host and the peripheral devices.
    Type: Grant
    Filed: October 7, 2021
    Date of Patent: December 26, 2023
    Assignee: Dell Products L.P.
    Inventors: Steven Michael Christensen, Yimin Xiao
  • Patent number: 11853236
    Abstract: A device includes a memory, a plurality of registers, a multiplexer/demultiplexer circuit, and a controller circuit. The memory stores a plurality of pages of pointers and a table of commands. The plurality of registers store information about a plurality of target devices. The multiplexer/demultiplexer circuit selects (i) information from a register of the plurality of registers based on a request received from a target device of the plurality of target devices, (ii) a page from the plurality of pages based on the selected information, and (iii) a pointer from the selected page based on the selected information. The controller circuit executes a command from the table of commands based on the selected pointer.
    Type: Grant
    Filed: October 28, 2021
    Date of Patent: December 26, 2023
    Assignee: Synopsys, Inc.
    Inventors: Suresh Venkatachalam, Pratap Neelashetty
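    Illustrative sketch: the selection chain in the abstract (a request selects register information, which selects a page of pointers, which selects a pointer into a command table) can be pictured with a few lookup tables. All table sizes and contents below are invented for illustration, not Synopsys IP.
      /* Hypothetical sketch of the register -> page -> pointer -> command chain. */
      #include <stdint.h>
      #include <stdio.h>

      #define NUM_TARGETS 2
      #define PAGES       2
      #define PAGE_LEN    4
      #define NUM_CMDS    4

      typedef struct { uint8_t page; uint8_t offset; } target_reg_t;

      static const target_reg_t regs[NUM_TARGETS] = { {0, 1}, {1, 3} };
      static const uint8_t pointer_pages[PAGES][PAGE_LEN] = {
          { 0, 2, 1, 3 },
          { 3, 3, 0, 1 },
      };
      static const char *command_table[NUM_CMDS] = { "NOP", "READ", "WRITE", "STATUS" };

      /* Resolve a request from a given target device to a command. */
      static const char *resolve_command(uint8_t target_id)
      {
          target_reg_t r = regs[target_id];             /* (i)   select register info */
          const uint8_t *page = pointer_pages[r.page];  /* (ii)  select pointer page  */
          uint8_t pointer = page[r.offset];             /* (iii) select pointer       */
          return command_table[pointer];                /* execute selected command   */
      }

      int main(void)
      {
          printf("target 0 -> %s\n", resolve_command(0));
          printf("target 1 -> %s\n", resolve_command(1));
          return 0;
      }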
  • Patent number: 11853608
    Abstract: An information writing method is applied to a non-volatile dual in-line memory module (NVDIMM). The NVDIMM includes an NVDIMM controller and a non-volatile memory (NVM). The method includes receiving, by the NVDIMM controller, a sanitize command from a host, where the sanitize command instructs the NVDIMM controller to sanitize data in the NVM using a first write pattern, and the first write pattern is one of at least two patterns of writing information into the NVM, and writing, by the NVDIMM controller, information into the NVM according to the sanitize command.
    Type: Grant
    Filed: December 22, 2021
    Date of Patent: December 26, 2023
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Florian Longnos, Feng Yang, Wei Yang
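    Illustrative sketch: the sanitize flow can be pictured as a command that names one of several write patterns, which the controller then writes across the NVM. The pattern values and command struct below are assumptions for illustration, not Huawei's NVDIMM controller interface.
      /* Hypothetical sketch of a sanitize command with a selectable write pattern. */
      #include <stddef.h>
      #include <stdint.h>
      #include <stdio.h>

      #define NVM_BYTES 16

      typedef enum { PATTERN_ZEROES, PATTERN_ONES, PATTERN_ALTERNATING } write_pattern_t;

      typedef struct { write_pattern_t pattern; } sanitize_cmd_t;

      static uint8_t nvm[NVM_BYTES];

      static uint8_t pattern_byte(write_pattern_t p)
      {
          switch (p) {
          case PATTERN_ONES:        return 0xFF;
          case PATTERN_ALTERNATING: return 0xAA;
          default:                  return 0x00;
          }
      }

      /* Controller-side handler: write the selected pattern over the whole NVM. */
      static void handle_sanitize(sanitize_cmd_t cmd)
      {
          uint8_t b = pattern_byte(cmd.pattern);
          for (size_t i = 0; i < NVM_BYTES; i++)
              nvm[i] = b;
      }

      int main(void)
      {
          handle_sanitize((sanitize_cmd_t){ .pattern = PATTERN_ALTERNATING });
          printf("nvm[0]=0x%02X nvm[%d]=0x%02X\n",
                 (unsigned)nvm[0], NVM_BYTES - 1, (unsigned)nvm[NVM_BYTES - 1]);
          return 0;
      }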
  • Patent number: 11853105
    Abstract: A system is disclosed. An upstream interface enables communication with a processor; a downstream interface enables communication with a storage device. The system may also include an acceleration module implemented using hardware to execute an acceleration instruction. The storage device may include an endpoint of the storage device for communicating with the acceleration module, a controller to manage operations of the storage device, storage for data, and a storage device acceleration module to assist the acceleration module in executing the acceleration instruction.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: December 26, 2023
    Inventors: Ramdas P. Kachare, Fred Worley, Harry Rogers, Wentao Wu, Nagarajan Subramaniyan
  • Patent number: 11847088
    Abstract: The present disclosure provides a data transmission method and device. The data transmission method is used for transmitting data between an advanced reduced instruction set computing machine (ARM) and a field-programmable gate array (FPGA) via an Inter-Integrated Circuit (IIC) bus, and comprises the following steps: receiving, by the FPGA, communication data transmitted by the ARM via the IIC bus, wherein the communication data comprises first address data, first content data, and N second content data, N being an integer greater than 0, the first content data and the N second content data being arranged in sequence, and the first address data being address data corresponding to the first content data; and generating, by the FPGA, second address data corresponding to each of the second content data according to the sequence of the N second content data and the first address data.
    Type: Grant
    Filed: December 21, 2021
    Date of Patent: December 19, 2023
    Assignee: BOE Technology Group Co., Ltd.
    Inventor: Tianmin Rao
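    Illustrative sketch: the address-generation step can be pictured as deriving an address for each second content word by offsetting from the first address according to its position in the sequence. The frame layout and increment-by-one rule below are assumptions for illustration, not BOE's IIC protocol.
      /* Hypothetical sketch: one explicit address plus N content words; each
       * subsequent word's address is derived from the first address. */
      #include <stddef.h>
      #include <stdint.h>
      #include <stdio.h>

      /* Derive per-word addresses from the first address and the word's position. */
      static uint16_t derive_address(uint16_t first_address, size_t position)
      {
          return (uint16_t)(first_address + position);  /* position 0 is the first word */
      }

      int main(void)
      {
          uint16_t first_address = 0x0100;
          uint8_t  content[4] = { 0x11, 0x22, 0x33, 0x44 };  /* first word + 3 second words */

          for (size_t i = 0; i < 4; i++)
              printf("write 0x%02X at address 0x%04X\n",
                     (unsigned)content[i], (unsigned)derive_address(first_address, i));
          return 0;
      }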
  • Patent number: 11843376
    Abstract: A system containing a host and a device having a field-programmable gate array (“FPGA”) is disclosed. The system includes a set of configurable logic blocks (“LBs”), a bus, and a Universal Serial Bus (“USB”) interface. The configurable LBs, in one aspect, are able to be selectively programmed to perform one or more logic functions. The bus contains a P-channel and an N-channel operable to transmit signals in accordance with a high-speed USB protocol. The USB interface is configured to include a first differential comparator operable to identify a logic zero state at the P-channel and a second differential comparator operable to identify a logic zero state at the N-channel.
    Type: Grant
    Filed: May 12, 2021
    Date of Patent: December 12, 2023
    Assignee: Gowin Semiconductor Corporation
    Inventor: Grant Thomas Jennings
  • Patent number: 11843691
    Abstract: Technologies for processing network packets by a host interface of a network interface controller (NIC) of a compute device. The host interface is configured to retrieve, by a symmetric multi-purpose (SMP) array of the host interface, a message from a message queue of the host interface and process, by a processor core of a plurality of processor cores of the SMP array, the message to identify a long-latency operation to be performed on at least a portion of a network packet associated with the message. The host interface is further configured to generate another message which includes an indication of the identified long-latency operation and a next step to be performed upon completion. Additionally, the host interface is configured to transmit the other message to a corresponding hardware unit scheduler as a function of the subsequent long-latency operation to be performed. Other embodiments are described herein.
    Type: Grant
    Filed: June 10, 2021
    Date of Patent: December 12, 2023
    Assignee: Intel Corporation
    Inventors: Thomas E. Willis, Brad Burres, Amit Kumar
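    Illustrative sketch: the message hand-off can be pictured as a core pulling a message, classifying the long-latency operation it implies, and emitting a follow-up message that names the operation and the next step, routed to the matching hardware unit scheduler. Message fields and the two operation types below are assumptions for illustration, not Intel's host interface.
      /* Hypothetical sketch of classifying a long-latency operation and routing a follow-up message. */
      #include <stdio.h>

      typedef enum { OP_DMA_READ, OP_CRYPTO, OP_NONE } long_latency_op_t;

      typedef struct { int packet_id; int needs_payload; int needs_decrypt; } message_t;
      typedef struct { int packet_id; long_latency_op_t op; const char *next_step; } followup_t;

      /* Classify which long-latency operation the message requires. */
      static long_latency_op_t classify(const message_t *m)
      {
          if (m->needs_payload) return OP_DMA_READ;
          if (m->needs_decrypt) return OP_CRYPTO;
          return OP_NONE;
      }

      /* Build the follow-up message and route it to the matching hardware scheduler. */
      static void dispatch(const message_t *m)
      {
          long_latency_op_t op = classify(m);
          if (op == OP_NONE) { printf("packet %d: no long-latency work\n", m->packet_id); return; }
          followup_t f = { m->packet_id, op,
                           op == OP_DMA_READ ? "checksum after DMA completes" : "forward after decrypt" };
          const char *unit = (op == OP_DMA_READ) ? "DMA unit scheduler" : "crypto unit scheduler";
          printf("packet %d -> %s (next step: %s)\n", f.packet_id, unit, f.next_step);
      }

      int main(void)
      {
          message_t m1 = { 1, 1, 0 };
          message_t m2 = { 2, 0, 1 };
          dispatch(&m1);
          dispatch(&m2);
          return 0;
      }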