Patents Examined by Tim T. Vo
  • Patent number: 12288075
    Abstract: A cache hit-miss prediction is determined for a memory access instruction using a predictor. The predictor includes a tracker for the memory access instruction, which is used to ascertain a prediction confidence level for the cache hit-miss prediction. Based on the prediction confidence level indicating that the cache hit-miss prediction is to be used, the prediction is provided for use in instruction execution scheduling.
    Type: Grant
    Filed: February 23, 2024
    Date of Patent: April 29, 2025
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Dominic DiTomaso, David Trilla Rodriguez, Alper Buyuktosunoglu, Craig R Walters, Ram Sai Manoj Bamdhamravuri
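    A minimal sketch of the kind of confidence-tracked predictor the abstract describes, assuming a per-instruction tracker built from a last-outcome bit and a saturating confidence counter; the names and thresholds are illustrative, not taken from the patent.
    ```python
    class HitMissPredictor:
        """Per-instruction tracker: last prediction plus a saturating confidence counter."""

        def __init__(self, threshold=2, max_conf=3):
            self.trackers = {}          # instruction address -> (predicted_hit, confidence)
            self.threshold = threshold  # minimum confidence before the prediction is used
            self.max_conf = max_conf

        def predict(self, pc):
            """Return True (hit) or False (miss) only when confidence is high enough, else None."""
            predicted_hit, conf = self.trackers.get(pc, (True, 0))
            return predicted_hit if conf >= self.threshold else None

        def update(self, pc, was_hit):
            """Train the tracker with the actual cache outcome after execution."""
            predicted_hit, conf = self.trackers.get(pc, (was_hit, 0))
            if was_hit == predicted_hit:
                conf = min(conf + 1, self.max_conf)
            elif conf > 0:
                conf -= 1
            else:
                predicted_hit = was_hit   # flip the prediction once confidence is exhausted
            self.trackers[pc] = (predicted_hit, conf)
    ```
    An instruction scheduler would call predict(pc) and fall back to its default policy whenever the tracker returns None.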
  • Patent number: 12260120
    Abstract: An electronic device includes a processor that executes a guest operating system; a memory having a guest portion that is reserved for storing data and information to be accessed by the guest operating system; and an input-output memory management unit (IOMMU). The IOMMU writes, in the guest portion, information into guest buffers and/or logs used for communicating information from the IOMMU to the guest operating system. The IOMMU also reads, from the guest portion, information in guest buffers and/or logs used for communicating information from the guest operating system to the IOMMU.
    Type: Grant
    Filed: June 10, 2019
    Date of Patent: March 25, 2025
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Maggie Chan, Philip Ng, Paul Blinzer
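    One way to picture the guest-visible logging the abstract describes is a ring buffer placed in the guest portion of memory, written by the IOMMU and drained by the guest operating system. This is a simplified software model under that assumption, not the actual IOMMU data structures.
    ```python
    class GuestEventLog:
        """Ring buffer in the guest portion: the IOMMU appends entries, the guest OS consumes them."""

        def __init__(self, num_entries=16):
            self.entries = [None] * num_entries  # stands in for a buffer in guest memory
            self.head = 0                        # producer index, advanced by the IOMMU
            self.tail = 0                        # consumer index, advanced by the guest OS

        def iommu_write(self, event):
            nxt = (self.head + 1) % len(self.entries)
            if nxt == self.tail:
                raise RuntimeError("guest event log overflow")
            self.entries[self.head] = event
            self.head = nxt

        def guest_read(self):
            while self.tail != self.head:
                yield self.entries[self.tail]
                self.tail = (self.tail + 1) % len(self.entries)
    ```
    The reverse direction (guest-to-IOMMU command buffers) would be the same structure with the producer and consumer roles swapped.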
  • Patent number: 12124736
    Abstract: The present application relates to an in-memory computing module and method, and an in-memory computing network and a construction method therefor. The in-memory computing module comprises at least two computing submodules, and low latency can be achieved when computing units in the computing submodules access memory units. The multiple computing submodules are arranged in a symmetric layer design, and such a symmetric layer structure facilitates the construction of a topology network so as to achieve large-scale or ultra-large-scale computation. The memory capacity of the memory units in each computing submodule can be customized and designed flexibly. These computing submodules are connected by bonding, and the data bit width after the bonding connection may be a positive integer multiple of the data bit width of the computing units, so that a high data bandwidth is achieved.
    Type: Grant
    Filed: January 26, 2021
    Date of Patent: October 22, 2024
    Assignee: XI'AN UNIIC SEMICONDUCTORS CO., LTD.
    Inventors: Xiping Jiang, Xiaofeng Zhou, Fengguo Zuo
  • Patent number: 11797304
    Abstract: A microprocessor system comprises a vector computational unit and a control unit. The vector computational unit includes a plurality of processing elements. The control unit is configured to provide at least a single processor instruction to the vector computational unit. The single processor instruction specifies a plurality of component instructions to be executed by the vector computational unit in response to the single processor instruction and each of the plurality of processing elements of the vector computational unit is configured to process different data elements in parallel with other processing elements in response to the single processor instruction.
    Type: Grant
    Filed: January 19, 2023
    Date of Patent: October 24, 2023
    Assignee: Tesla, Inc.
    Inventors: Debjit Das Sarma, Emil Talpes, Peter Joseph Bannon
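    A rough functional model of the behavior described above, assuming a single macro instruction that names a sequence of component operations which every processing element applies to its own data element in parallel; the operation names are placeholders.
    ```python
    import operator

    # Hypothetical component operations that a single processor instruction might bundle.
    COMPONENT_OPS = {"mul": operator.mul, "add": operator.add}

    def execute_macro_instruction(component_instrs, lanes_a, lanes_b):
        """Every lane (processing element) runs the same component sequence on its own elements."""
        results = []
        for a, b in zip(lanes_a, lanes_b):   # each iteration models one processing element
            acc = a
            for op_name in component_instrs:
                acc = COMPONENT_OPS[op_name](acc, b)
            results.append(acc)
        return results

    # One macro instruction expanding into "multiply, then add" across all lanes.
    print(execute_macro_instruction(["mul", "add"], [1, 2, 3], [10, 20, 30]))
    # [20, 60, 120]
    ```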
  • Patent number: 11791838
    Abstract: An accelerator is disclosed. The accelerator may include a memory that may store a dictionary table. An address generator may be configured to generate an address in the dictionary table based on an encoded value, which may have an encoded width. An output filter may be configured to filter a decoded value from the dictionary table based on the encoded value, the encoded width, and a decoded width of the decoded data. The accelerator may be configured to support at least two different encoded widths.
    Type: Grant
    Filed: June 24, 2021
    Date of Patent: October 17, 2023
    Inventors: Sahand Salamat, Joo Hwan Lee, Armin Haj Aboutalebi, Praveen Krishnamoorthy, Xiaodong Zhao, Hui Zhang, Yang Seok Ki
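    A minimal software analogue of such a decoder, assuming little-endian bit packing: codes of a configurable encoded width are extracted from a packed input, used as addresses into the dictionary table, and the corresponding decoded values are emitted. The packing convention is an assumption.
    ```python
    def dictionary_decode(packed: int, num_values: int, encoded_width: int, dictionary: list):
        """Unpack num_values codes of encoded_width bits each and look them up in the dictionary."""
        mask = (1 << encoded_width) - 1
        out = []
        for i in range(num_values):
            code = (packed >> (i * encoded_width)) & mask  # address generation from the encoded value
            out.append(dictionary[code])                   # decoded value read from the dictionary table
        return out

    table = ["red", "green", "blue", "alpha"]
    # Two codes packed at a 2-bit encoded width: 0b10_01 decodes to entries 1 and 2.
    print(dictionary_decode(0b1001, num_values=2, encoded_width=2, dictionary=table))
    # ['green', 'blue']
    ```
    Supporting a different encoded width only changes the mask and shift, which is what lets one engine handle at least two widths.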
  • Patent number: 11782718
    Abstract: Techniques related to executing a plurality of instructions by a processor comprising receiving a first instruction configured to cause the processor to output a first data value to a first address in a first data cache, outputting, by the processor, the first data value to a second address in a second data cache, receiving a second instruction configured to cause a streaming engine associated with the processor to prefetch data from the first data cache, determining that the first data value has not been outputted from the second data cache to the first data cache, stalling execution of the second instruction, receiving an indication, from the second data cache, that the first data value has been output from the second data cache to the first data cache, and resuming execution of the second instruction based on the received indication.
    Type: Grant
    Filed: March 30, 2021
    Date of Patent: October 10, 2023
    Assignee: Texas Instruments Incorporated
    Inventors: Naveen Bhoria, Kai Chirca, Timothy D. Anderson, Duc Bui, Abhijeet A. Chachad, Son Hung Tran
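    The hazard handling can be pictured as the loop below, which stalls the streaming-engine prefetch until the second (outer) cache reports that the pending store has drained to the first cache. The names and one-drain-per-iteration model are illustrative.
    ```python
    def run_prefetch_with_stall(target_addr, l2_pending_writebacks, prefetch):
        """Stall the prefetch while its target line is still waiting in the second data cache."""
        while target_addr in l2_pending_writebacks:        # stall condition
            drained = l2_pending_writebacks.pop()          # model the second cache draining one write-back
            print(f"write-back of {drained:#x} reached the first data cache")
        return prefetch(target_addr)                       # indication received: resume execution

    pending = {0x1000, 0x2000}
    print(run_prefetch_with_stall(0x1000, pending, prefetch=lambda a: f"prefetched line {a:#x}"))
    ```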
  • Patent number: 11755204
    Abstract: Provided is a data management system which includes a data acquisition unit that acquires measurement data obtained by measuring a fluid flowing in a flow path from each of a plurality of sensors, a data recording unit that records the acquired measurement data, and a data volume reduction unit that reduces the data volume to be recorded for a target sensor based on the measurement data acquired from another sensor among the plurality of sensors installed either upstream or downstream of the target sensor in the flow path.
    Type: Grant
    Filed: June 16, 2021
    Date of Patent: September 12, 2023
    Assignee: Yokogawa Electric Corporation
    Inventors: Nobuaki Ema, Yoshitaka Yoshida
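    One plausible reading of the reduction, sketched below: a target sensor's sample is recorded in full only when it deviates from what the neighboring upstream sensor predicts; otherwise only a marker is kept. The prediction model and tolerance are assumptions, not the claimed method.
    ```python
    def reduce_recorded_volume(target_samples, upstream_samples, tolerance=0.05):
        """Keep a target measurement only when it disagrees with the upstream-based estimate."""
        records = []
        for target, upstream in zip(target_samples, upstream_samples):
            estimate = upstream                   # simplest model: flow unchanged between sensors
            if abs(target - estimate) > tolerance * abs(estimate):
                records.append(("full", target))  # record the real measurement
            else:
                records.append(("ref", None))     # derivable from the other sensor, so not stored
        return records

    print(reduce_recorded_volume([10.0, 10.1, 13.0], [10.0, 10.0, 10.0]))
    # [('ref', None), ('ref', None), ('full', 13.0)]
    ```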
  • Patent number: 11748277
    Abstract: Method and apparatus for enhancing performance of a storage device, such as a solid-state drive (SSD). In some embodiments, the storage device monitors a rate at which client I/O access commands are received from a client to transfer data with a non-volatile memory (NVM) of the storage device. A ratio of background access commands to the client I/O access commands is adjusted to maintain completion rates of the client I/O access commands at a predetermined level. The background access commands transfer data internally with the NVM to prepare the storage device to service the client I/O access commands, and can include internal reads and writes to carry out garbage collection and metadata map updates. The ratio may be adjusted by identifying a workload type subjected to the storage device by the client.
    Type: Grant
    Filed: March 5, 2020
    Date of Patent: September 5, 2023
    Assignee: Seagate Technology, LLC
    Inventors: Ryan James Goss, David W. Claude, Graham David Ferris, Daniel John Benjamin, Ryan Charles Weidemann
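    A toy version of the adjustment loop: when client command completions fall below the target rate, fewer background commands are interleaved per client command, and vice versa. The thresholds and step size are invented for the example.
    ```python
    def adjust_background_ratio(ratio, measured_client_iops, target_iops,
                                step=0.1, min_ratio=0.1, max_ratio=4.0):
        """Return a new background:client command ratio that keeps client completions near target."""
        if measured_client_iops < target_iops:
            return max(min_ratio, ratio - step)   # throttle background work to protect client I/O
        return min(max_ratio, ratio + step)       # headroom available: allow more garbage collection

    ratio = 1.0
    for iops in (95_000, 90_000, 110_000):        # measured client IOPS against a 100k target
        ratio = adjust_background_ratio(ratio, iops, target_iops=100_000)
        print(f"measured {iops} IOPS -> background/client ratio {ratio:.1f}")
    ```
    A workload-aware version would pick the target and step from the detected workload type (for example, sequential versus random) rather than using constants.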
  • Patent number: 11709610
    Abstract: A memory system, a memory controller and an operating method are disclosed. A first area, a second area included in the first area, and a third area are set. An area to which target data is to be written is determined to be either the first area or the third area. When the target data is written to the first area, the target data is preferentially written to the second area. The number of data bits stored per memory cell in the first area is less than the number of data bits stored per memory cell in the third area. As a consequence, it is possible to secure the storage capacity of the memory system to at least a set reference while securing the data write performance of the memory system, as recognized by a host, to at least a set reference.
    Type: Grant
    Filed: April 8, 2020
    Date of Patent: July 25, 2023
    Assignee: SK hynix Inc.
    Inventor: Kwang Su Kim
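    The area selection might look like the routine below, assuming the second area acts as a low-bits-per-cell buffer carved out of the first area; the capacity checks and names are illustrative, not the patent's terms.
    ```python
    def choose_write_area(target_area, second_area_free, first_area_free):
        """Route a write: data bound for the first area lands in the second (fewer bits/cell) area first."""
        if target_area == "third":         # high-density bulk area, written directly
            return "third"
        if second_area_free > 0:           # preferred: fast low-density region inside the first area
            return "second"
        if first_area_free > 0:            # buffer full: fall back to the rest of the first area
            return "first"
        raise RuntimeError("first area exhausted; migrate second-area data before writing")

    print(choose_write_area("first", second_area_free=8, first_area_free=100))   # -> second
    ```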
  • Patent number: 11687244
    Abstract: A processing device, operatively coupled with the memory device, is configured to provide a plurality of functions for accessing the memory device, a function of the plurality of functions receives input/output (I/O) operations from a host computing system. The processing device further selects a first function of the plurality of functions to service and assigns a first operation weight to a first I/O operation type of I/O operations received at the first function and a second operation weight to a second I/O operation type of I/O operations received at the first function. The processing device also selects, for execution, a first number of operations of the first I/O operation type of the I/O operations received at the first function according to the first operation weight and a second number of operations of the second I/O operation type of the I/O operations received at the first function according to the second operation weight.
    Type: Grant
    Filed: October 24, 2019
    Date of Patent: June 27, 2023
    Assignee: Micron Technology, Inc.
    Inventor: Luca Bert
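    A small weighted-selection sketch for the scheduling described above: after a function is chosen for service, the number of commands taken from each I/O type queue is proportional to that type's operation weight. The queue shapes and weights are assumptions.
    ```python
    def select_ops_for_function(queues, weights, budget):
        """Pick operations per I/O type (e.g. read vs. write) in proportion to their weights."""
        total_weight = sum(weights[op_type] for op_type in queues)
        selected = []
        for op_type, pending in queues.items():
            share = round(budget * weights[op_type] / total_weight)   # this type's slice of the budget
            selected.extend(pending[:share])
        return selected

    queues = {"read": [f"R{i}" for i in range(10)], "write": [f"W{i}" for i in range(10)]}
    print(select_ops_for_function(queues, weights={"read": 3, "write": 1}, budget=8))
    # ['R0', 'R1', 'R2', 'R3', 'R4', 'R5', 'W0', 'W1']
    ```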
  • Patent number: 11650929
    Abstract: A memory system includes: a memory device including a plurality of memory dies, each including a plurality of planes; and a controller configured to store data in a plurality of stripes each including physical pages of different planes and a plurality of unit regions, the controller comprising: a processor configured to queue write commands in a write queue, and select, among the plurality of stripes, a stripe in which data chunks corresponding to the write commands are to be stored; and a striping engine configured to receive the queued orders of the write commands, and output, by referring to a lookup table, addresses of unit regions, in which the data chunks are to be arranged, to the processor, wherein the processor is configured to control the memory device to store the data chunks in the unit regions corresponding to the outputted addresses of the selected stripe.
    Type: Grant
    Filed: April 30, 2021
    Date of Patent: May 16, 2023
    Assignee: SK hynix Inc.
    Inventor: Ju Hyun Kim
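    The striping step can be pictured as below: the engine takes each write command's queued order and returns, from a lookup table, the unit-region address inside the selected stripe where that data chunk is to be placed. The table layout is invented for illustration.
    ```python
    def place_chunks(write_queue, stripe, lookup_table):
        """Map each queued data chunk to a unit region of the selected stripe via the lookup table."""
        placements = {}
        for queued_order, chunk in enumerate(write_queue):
            unit_region = lookup_table[queued_order % len(lookup_table)]   # address within the stripe
            placements[(stripe, unit_region)] = chunk
        return placements

    # Hypothetical table spreading consecutive chunks across planes 0..3 of a stripe.
    table = ["plane0:page0", "plane1:page0", "plane2:page0", "plane3:page0"]
    print(place_chunks(["A", "B", "C", "D"], stripe=7, lookup_table=table))
    ```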
  • Patent number: 11625168
    Abstract: The storage device includes a first memory, a process device that stores data in the first memory and reads the data from the first memory, and an accelerator that includes a second memory different from the first memory. The accelerator stores compressed data stored in one or more storage drives storing data, in the second memory, decompresses the compressed data stored in the second memory to generate plaintext data, extracts data designated in the process device from the plaintext data, and transmits the extracted designated data to the first memory.
    Type: Grant
    Filed: May 25, 2022
    Date of Patent: April 11, 2023
    Assignee: HITACHI, LTD.
    Inventors: Masahiro Tsuruya, Nagamasa Mizushima, Tomohiro Yoshihara, Kentaro Shimada
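    In software terms the accelerator's data path resembles the sketch below: compressed drive data is staged in the accelerator's own (second) memory, decompressed there, and only the designated fields are forwarded to the first memory. zlib stands in for whatever compression format is actually used.
    ```python
    import zlib

    def accelerator_filter(compressed_rows: bytes, wanted_column: int) -> list:
        """Decompress in accelerator memory, then forward only the designated column."""
        plaintext = zlib.decompress(compressed_rows).decode()   # generate plaintext in second memory
        extracted = [line.split(",")[wanted_column]             # extract only the designated data
                     for line in plaintext.splitlines()]
        return extracted                                        # only this reaches the first memory

    rows = zlib.compress(b"1,alice,42\n2,bob,37\n")
    print(accelerator_filter(rows, wanted_column=1))            # ['alice', 'bob']
    ```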
  • Patent number: 11550504
    Abstract: A system includes an application processor configured to generate a read request and including a data memory; a host processor configured to generate a read command corresponding to the read request; and a data storage device including a data storage memory, wherein the data storage device transmits read data output from the data storage device according to the read command to the data memory of the application processor without passing through the host processor.
    Type: Grant
    Filed: August 14, 2020
    Date of Patent: January 10, 2023
    Assignees: SK hynix Inc., Sogang University Research and Business Development Foundation
    Inventors: Changgyu Lee, Youngjae Kim, Donggyu Park, Mingyo Jung, Sungyong Park, Jung Ki Noh, Woo Suk Chung, Kyoung Park
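    The direct path above can be modeled with three actors: the application processor originates the read request, the host processor only translates it into a command, and the storage device writes the result straight into the application processor's data memory. The object names are illustrative.
    ```python
    class DataStorageDevice:
        def __init__(self, blocks):
            self.blocks = blocks

        def execute(self, command, destination_memory):
            # Read data goes directly into the application processor's data memory,
            # without being buffered by the host processor on the way.
            destination_memory[command["lba"]] = self.blocks[command["lba"]]

    def host_processor_issue(read_request):
        """The host only turns the request into a command; the data never passes through it."""
        return {"opcode": "READ", "lba": read_request["lba"]}

    app_data_memory = {}                                  # data memory in the application processor
    device = DataStorageDevice(blocks={7: b"frame"})
    command = host_processor_issue({"lba": 7})            # request generated by the application processor
    device.execute(command, app_data_memory)              # device -> application memory, host bypassed
    print(app_data_memory)                                # {7: b'frame'}
    ```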
  • Patent number: 11429517
    Abstract: A storage system in one embodiment comprises multiple storage nodes each comprising at least one storage device. Each of the storage nodes further comprises a set of processing modules configured to communicate over one or more networks with corresponding sets of processing modules on other ones of the storage nodes. The sets of processing modules of the storage nodes each comprise at least one control module. The storage system is configured to assign portions of a logical address space of the storage system to respective ones of the control modules, to receive a plurality of tracks of data records in a count-key-data format, and to store the tracks in respective ones of the portions of the logical address space assigned to respective ones of the control modules. Each of the tracks is stored in its entirety in the portion of the logical address space assigned to a corresponding one of the control modules.
    Type: Grant
    Filed: June 20, 2018
    Date of Patent: August 30, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: David Meiri, Anton Kucherov
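    A simplified model of the placement rule: the logical address space is split into per-control-module portions, and every count-key-data track is written in its entirety into the portion owned by the module that its track number maps to. The modulo mapping is an assumption.
    ```python
    def assign_track(track_number, num_control_modules, portion_size):
        """Return (control module, logical offset) so the whole track lands in one module's portion."""
        module = track_number % num_control_modules                       # owning control module
        offset = module * portion_size + (track_number // num_control_modules)
        return module, offset

    for trk in range(6):
        module, offset = assign_track(trk, num_control_modules=4, portion_size=1_000_000)
        print(f"track {trk} -> control module {module}, logical offset {offset}")
    ```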
  • Patent number: 11372772
    Abstract: A storage system in one embodiment comprises a plurality of storage devices and a storage controller. The storage system is configured by the storage controller to receive a plurality of data records in a count-key-data format, to separate count and key portions of the data records from remaining portions of the data records, to store the count and key portions of the data records in at least one designated page of a set of pages of a logical storage volume of the storage system, and to store the remaining portions of the data records in one or more other pages of the set of pages of the logical storage volume of the storage system. The designated page of the set of pages of the logical storage volume may comprise a first page of the set of pages, and the one or more other pages of the set of pages may comprise respective ones of a sequence of consecutive pages following the first page.
    Type: Grant
    Filed: June 20, 2018
    Date of Patent: June 28, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: David Meiri, Anton Kucherov
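    The page layout described above might be modeled as follows: the count and key fields of every record in a track go into the designated first page of the set, and the data portions fill the consecutive pages that follow. Page capacity and record shapes are illustrative.
    ```python
    def layout_ckd_track(records, records_per_data_page=2):
        """records: list of (count, key, data). Returns [header_page, data_page, data_page, ...]."""
        header_page = [(count, key) for count, key, _ in records]   # designated page: count + key
        data_portions = [data for _, _, data in records]
        data_pages = [data_portions[i:i + records_per_data_page]    # remaining portions, consecutive
                      for i in range(0, len(data_portions), records_per_data_page)]
        return [header_page] + data_pages

    track = [(0, "K0", "payload0"), (1, "K1", "payload1"), (2, "K2", "payload2")]
    for page_no, page in enumerate(layout_ckd_track(track)):
        print(f"page {page_no}: {page}")
    ```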
  • Patent number: 11237989
    Abstract: An apparatus includes a processor and a machine-readable medium coupled to the processor and comprising instructions. The instructions, when loaded into the processor and executed, configure the processor to identify that a USB element has attached to a USB hub at a port, classify the USB element according to the power operations of the USB element, and assign an upstream or downstream setting of the port based upon that classification. The instructions may further configure the processor to classify the USB element as only a producer of power, evaluate whether an enumeration process is initiated within a timeout period, and if so, assign the USB element as a USB host.
    Type: Grant
    Filed: July 2, 2020
    Date of Patent: February 1, 2022
    Assignee: Microchip Technology Incorporated
    Inventors: Atish Ghosh, Mark Gordon, Ken Nagai, Larisa Troyegubova
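    The port-role decision reads like a small classification routine: the attached element is classified by its power behavior, and a producer-only element is treated as a USB host only if it starts enumeration within the timeout. The category names and return values below are placeholders, not the claimed implementation.
    ```python
    def assign_port_role(provides_power, consumes_power, enumeration_started, timeout_expired):
        """Decide the upstream/downstream setting of the port from the element's power operations."""
        if consumes_power and not provides_power:
            return "downstream"                 # pure consumer: an ordinary device we host
        if provides_power and not consumes_power:
            # Producer only: it is a USB host only if it actually begins enumeration in time.
            if enumeration_started and not timeout_expired:
                return "upstream"               # element acts as the USB host
            return "downstream"                 # never enumerated: treat it as just a power source
        return "dual-role"

    print(assign_port_role(provides_power=True, consumes_power=False,
                           enumeration_started=True, timeout_expired=False))   # upstream
    ```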
  • Patent number: 11232059
    Abstract: In example implementations, an apparatus is provided. The apparatus includes a first interface, an upstream device detector, a second interface, and a processor. The first interface receives a multi-channel connection. The upstream device detector is to detect a connection to an external graphics processing unit (eGPU) via the first interface. The second interface is to connect a peripheral device that transmits data over the multi-channel connection via the first interface, through the eGPU, and to a host computer. The processor disables a portion of the multi-channel connection on the first interface when the upstream device detector detects the connection to the eGPU.
    Type: Grant
    Filed: July 16, 2018
    Date of Patent: January 25, 2022
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Roger D. Benson, Ho-sup Chung
  • Patent number: 11232057
    Abstract: The present application is directed to a television device and a control method therefor. The television device comprises an SOC chip, a DFP interface thereof being connected to a switch module via a USB D+/D− differential pair, and the USB D+/D− differential pair between the DFP interface and the switch module being a first channel; a USB Type-C interface main control module provided with a UFP interface, the UFP interface being connected to the switch module via a USB D+/D− differential pair, and the USB D+/D− differential pair between the UFP interface and the switch module being a second channel; and a USB Type-C interface connected to the switch module via a USB D+/D− differential pair. The USB Type-C interface main control module is also connected to the switch module via a control signal line.
    Type: Grant
    Filed: January 20, 2020
    Date of Patent: January 25, 2022
    Assignee: Hisense Visual Technology Co., Ltd.
    Inventor: Xuebin Sun
  • Patent number: 11232053
    Abstract: A direct memory access (DMA) system can include a memory configured to store a plurality of host profiles, a plurality of interfaces, wherein two or more of the plurality of interfaces correspond to different ones of a plurality of host processors, and a plurality of data engines coupled to the plurality of interfaces. The plurality of data engines are independently configurable to access different ones of the plurality of interfaces for different flows of a DMA operation based on the plurality of host profiles.
    Type: Grant
    Filed: June 9, 2020
    Date of Patent: January 25, 2022
    Assignee: Xilinx, Inc.
    Inventors: Chandrasekhar S. Thyamagondlu, Darren Jue, Ravi Sunkavalli, Akhil Krishnan, Tao Yu, Kushagra Sharma
  • Patent number: 11232060
    Abstract: In one embodiment, an apparatus includes an input/output (I/O) circuit to communicate information at a selected voltage via an interconnect to which a plurality of devices may be coupled, and a host controller to couple to the interconnect. The host controller may include a supply voltage policy control circuit to initiate a supply voltage policy exchange with a first device to obtain a first supply voltage capability of the first device and to cause the I/O circuit and the first device to be configured to communicate via the interconnect at a first supply voltage based on the first supply voltage capability. Other embodiments are described and claimed.
    Type: Grant
    Filed: September 18, 2017
    Date of Patent: January 25, 2022
    Assignee: Intel Corporation
    Inventors: Amit Kumar Srivastava, Kenneth P. Foust
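    The negotiation in the abstract can be sketched as a short exchange: the host controller queries the device's supply-voltage capability and configures both ends of the interconnect to a voltage they both support. The selection policy and capability lists below are invented for the example.
    ```python
    def negotiate_supply_voltage(host_capabilities, device_capabilities):
        """Pick a common I/O supply voltage from the two capability sets (lowest common wins here)."""
        common = sorted(set(host_capabilities) & set(device_capabilities))
        if not common:
            raise ValueError("no common supply voltage; keep the default")
        return common[0]            # illustrative policy: lowest shared voltage to save power

    host_caps = [1.2, 1.8, 3.3]     # volts the host's I/O circuit can drive
    device_caps = [1.8, 3.3]        # volts reported by the first device during the policy exchange
    print(f"configure the interconnect at {negotiate_supply_voltage(host_caps, device_caps)} V")  # 1.8 V
    ```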