Patents Examined by Hyun Nam
  • Patent number: 11687338
    Abstract: The technology disclosed herein provides a method including determining one or more dedicated computational storage programs (CSPs) used in a target market for a computational storage device, storing the dedicated CSPs in one or more pre-programmed computing instruction set (CIS) slots in the computational storage device, translating one or more instructions of the dedicated CSPs for processing using a native processor, loading one or more instructions of programmable CSPs to a CSP processor implemented within an application specific integrated circuit (ASIC) of the computational storage device, and processing the one or more instructions of the programmable CSPs using the CSP processor.
    Type: Grant
    Filed: April 30, 2021
    Date of Patent: June 27, 2023
    Assignee: SEAGATE TECHNOLOGY LLC
    Inventor: Marc Tim Jones
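A rough way to picture the CIS-slot arrangement in patent 11687338 above: dedicated CSPs sit in pre-programmed slots and are translated for the native processor, while programmable CSPs are loaded onto the CSP processor in the ASIC. The C sketch below is a loose software model only; the slot count, structure fields, and dispatch functions are invented for illustration and do not come from the patent.
```c
#include <stddef.h>
#include <stdio.h>

#define CIS_SLOT_COUNT 8                 /* assumed number of pre-programmed slots */

typedef enum { CSP_DEDICATED, CSP_PROGRAMMABLE } csp_kind;

typedef struct {
    csp_kind kind;
    const char *name;                    /* e.g. a market-specific filter program */
    const unsigned char *image;          /* program instructions */
    size_t length;
} csp_program;

/* CIS slots pre-programmed with the dedicated CSPs for the target market. */
static csp_program cis_slots[CIS_SLOT_COUNT];

/* Hypothetical back ends: translate dedicated-CSP instructions for the native
 * processor, or load programmable-CSP instructions onto the CSP processor
 * implemented in the device ASIC. */
static void translate_for_native(const csp_program *p) {
    printf("translating %s for the native processor\n", p->name);
}
static void load_on_csp_processor(const csp_program *p) {
    printf("loading %s onto the CSP processor\n", p->name);
}

static void dispatch_csp(const csp_program *p) {
    if (p->kind == CSP_DEDICATED)
        translate_for_native(p);
    else
        load_on_csp_processor(p);
}

int main(void) {
    cis_slots[0] = (csp_program){ CSP_DEDICATED, "filter", NULL, 0 };
    dispatch_csp(&cis_slots[0]);
    dispatch_csp(&(csp_program){ CSP_PROGRAMMABLE, "user-kernel", NULL, 0 });
}
```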
  • Patent number: 11687471
    Abstract: A method is described. The method includes executing solid state drive program code from system memory of a computing system to perform any or all of garbage collection, wear leveling, and logical block address to physical block address translation routines for a solid state drive that is coupled to the computing system of which the system memory is a component.
    Type: Grant
    Filed: March 27, 2020
    Date of Patent: June 27, 2023
    Assignee: SK Hynix NAND Product Solutions Corp.
    Inventors: Joseph D. Tarango, Randal Eike, Michael Allison, Eric Hoffman
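Patent 11687471 above moves SSD management routines into host system memory. As one illustration, a minimal logical-to-physical (L2P) translation table kept in host DRAM is sketched below; the table size, names, and update rule are assumptions, not the patent's implementation.
```c
#include <stdint.h>
#include <stdlib.h>

#define LBA_COUNT (1u << 20)            /* assumed drive size in logical blocks */
#define PBA_INVALID UINT32_MAX

/* L2P table held in host system memory rather than in drive-side DRAM. */
static uint32_t *l2p_table;

static int l2p_init(void) {
    l2p_table = malloc(LBA_COUNT * sizeof *l2p_table);
    if (!l2p_table)
        return -1;
    for (uint32_t i = 0; i < LBA_COUNT; i++)
        l2p_table[i] = PBA_INVALID;     /* unmapped until first write */
    return 0;
}

/* Host-resident translation routine: map a logical block to a physical one. */
static uint32_t l2p_lookup(uint32_t lba) {
    return (lba < LBA_COUNT) ? l2p_table[lba] : PBA_INVALID;
}

/* On a write, the host-side code picks a fresh physical block (wear leveling
 * and garbage collection would also run here) and updates the mapping. */
static void l2p_update(uint32_t lba, uint32_t pba) {
    if (lba < LBA_COUNT)
        l2p_table[lba] = pba;
}
```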
  • Patent number: 11675590
    Abstract: Disclosed embodiments relate to systems and methods for performing instructions to transform matrices into a row-interleaved format. In one example, a processor includes fetch and decode circuitry to fetch and decode an instruction having fields to specify an opcode and locations of source and destination matrices, wherein the opcode indicates that the processor is to transform the specified source matrix into the specified destination matrix having the row-interleaved format; and execution circuitry to respond to the decoded instruction by transforming the specified source matrix into the specified RowInt-formatted destination matrix by interleaving J elements of each J-element sub-column of the specified source matrix in either row-major or column-major order into a K-wide submatrix of the specified destination matrix, the K-wide submatrix having K columns and enough rows to hold the J elements.
    Type: Grant
    Filed: July 15, 2022
    Date of Patent: June 13, 2023
    Assignee: Intel Corporation
    Inventors: Raanan Sade, Robert Valentine, Bret Toll, Christopher J. Hughes, Alexander F. Heinecke, Elmoustapha Ould-Ahmed-Vall, Mark J. Charney
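One plausible reading of the row-interleave transform in patent 11675590: each J-element sub-column of the source matrix is packed into J adjacent elements of a destination row, the layout commonly used for tile multiply-accumulate. The C sketch below illustrates that reading with J fixed at 4; the exact element ordering chosen by the instruction may differ, so treat this as an assumption.
```c
#include <stdio.h>

#define J 4   /* elements taken from each sub-column; assumed value */

/* Transform an R x C source into a row-interleaved destination with R/J rows
 * and C*J columns: dst[r][c*J + k] = src[r*J + k][c].  Each J-element
 * sub-column of the source becomes J adjacent elements of one dst row. */
static void row_interleave(int rows, int cols,
                           const short *src, short *dst) {
    for (int r = 0; r < rows / J; r++)
        for (int c = 0; c < cols; c++)
            for (int k = 0; k < J; k++)
                dst[r * (cols * J) + c * J + k] = src[(r * J + k) * cols + c];
}

int main(void) {
    short src[8 * 2], dst[2 * 8];
    for (int i = 0; i < 16; i++) src[i] = (short)i;
    row_interleave(8, 2, src, dst);
    for (int i = 0; i < 16; i++) printf("%d ", dst[i]);
    printf("\n");
}
```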
  • Patent number: 11675599
    Abstract: An information handling system may include a processor, one or more accelerators communicatively coupled to the processor, and a management controller communicatively coupled to the processor and the one or more accelerators and configured for out-of-band management of the information handling system, the management controller further configured to receive information regarding the one or more accelerators, determine a criticality factor for each of the one or more accelerators based on the information, determine an accelerator health status for each of the one or more accelerators, and determine an overall system health of the information handling system based on the criticality factors and the accelerator health statuses.
    Type: Grant
    Filed: August 4, 2020
    Date of Patent: June 13, 2023
    Assignee: Dell Products L.P.
    Inventors: Chitrak Gupta, Rama Rao Bisa, John R. Palmer
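The roll-up in patent 11675599, criticality factors combined with per-accelerator health into an overall system health, can be approximated by a criticality-weighted average. The scale and field names in the sketch below are invented for illustration only.
```c
#include <stdio.h>

typedef struct {
    const char *name;
    double criticality;    /* weight derived from the accelerator information */
    double health;         /* 1.0 = healthy, 0.0 = failed */
} accel_status;

/* Criticality-weighted average as one plausible overall-health definition. */
static double overall_health(const accel_status *a, int n) {
    double weighted = 0.0, total = 0.0;
    for (int i = 0; i < n; i++) {
        weighted += a[i].criticality * a[i].health;
        total    += a[i].criticality;
    }
    return total > 0.0 ? weighted / total : 1.0;
}

int main(void) {
    accel_status accels[] = {
        { "gpu0",  0.7, 1.0 },
        { "fpga0", 0.3, 0.5 },   /* degraded, but less critical */
    };
    printf("system health: %.2f\n", overall_health(accels, 2));
}
```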
  • Patent number: 11652718
    Abstract: A semiconductor device and an operating method thereof are provided. An operating method of a semiconductor device, includes monitoring a plurality of request packets and a plurality of response packets that are being transmitted between a master device and a slave device; detecting a target request packet that matches desired identification (ID) information from among the plurality of request packets; counting the number of events of a transaction including the target request packet by using an event counter; counting the number of request packets whose corresponding response packets are yet to be detected, from among the plurality of request packets by using a Multiple Outstanding (MO) counter; determining whether an MO count value of the MO counter is valid; and if the MO count value is invalid, resetting the event counter.
    Type: Grant
    Filed: May 13, 2022
    Date of Patent: May 16, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jae Geun Yun, Seong Min Jo, Yun Kyo Cho, Byeong Jin Kim, Dong Soo Kang, Nak Hee Seong
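The monitoring flow of patent 11652718 amounts to counter bookkeeping: count events for transactions whose request packet matches the desired ID, track outstanding requests with a Multiple Outstanding (MO) counter, and reset the event counter when the MO count turns out invalid. The validity rule and structure below are illustrative assumptions.
```c
#include <stdint.h>

typedef struct {
    uint32_t target_id;      /* ID information the monitor matches against  */
    uint32_t event_count;    /* events of transactions containing a match   */
    int32_t  mo_count;       /* requests whose responses are still pending  */
} bus_monitor;

static void on_request(bus_monitor *m, uint32_t pkt_id) {
    m->mo_count++;                       /* one more outstanding request */
    if (pkt_id == m->target_id)
        m->event_count++;                /* target request packet detected */
}

static void on_response(bus_monitor *m) {
    m->mo_count--;                       /* matching response observed */
}

/* A negative outstanding count cannot occur in a well-formed trace, so it is
 * treated here as the "invalid" case that resets the event counter. */
static void check_mo_valid(bus_monitor *m) {
    if (m->mo_count < 0) {
        m->event_count = 0;
        m->mo_count = 0;
    }
}
```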
  • Patent number: 11650818
    Abstract: A processor includes an execution unit and a processing logic operatively coupled to the execution unit, the processing logic to: enter a first execution state and transition to a second execution state responsive to executing a control transfer instruction. Responsive to executing a target instruction of the control transfer instruction, the processing logic further transitions to the first execution state responsive to the target instruction being a control transfer termination instruction of a mode identical to a mode of the processing logic following the execution of the control transfer instruction; and raises an execution exception responsive to the target instruction being a control transfer termination instruction of a mode different than the mode of the processing logic following the execution of the control transfer instruction.
    Type: Grant
    Filed: August 17, 2021
    Date of Patent: May 16, 2023
    Assignee: Intel Corporation
    Inventors: Vedvyas Shanbhogue, Jason W. Brandt, Ravi L. Sahita, Xiaoning Li
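The two execution states of patent 11650818 behave like a small tracker: a control transfer instruction enters a waiting state, and the instruction at the branch target either returns the tracker to the first state (a termination instruction of matching mode) or raises an execution exception (a termination instruction of a different mode). The C state machine below is a rough model with placeholder modes.
```c
#include <stdbool.h>
#include <stdio.h>

typedef enum { STATE_IDLE, STATE_WAIT_FOR_TERMINATOR } tracker_state;
typedef enum { MODE_A, MODE_B } exec_mode;           /* placeholder modes */

typedef struct {
    tracker_state state;
    exec_mode mode;          /* mode of the logic after the control transfer */
} tracker;

/* Control transfer instruction: transition to the second execution state. */
static void on_control_transfer(tracker *t, exec_mode mode_after) {
    t->state = STATE_WAIT_FOR_TERMINATOR;
    t->mode = mode_after;
}

/* Target instruction: a termination instruction of the same mode returns the
 * tracker to the first state; a mismatched mode raises an exception. */
static bool on_target_instruction(tracker *t, bool is_terminator,
                                  exec_mode term_mode) {
    if (t->state != STATE_WAIT_FOR_TERMINATOR || !is_terminator)
        return true;                      /* cases the abstract does not cover */
    if (term_mode == t->mode) {
        t->state = STATE_IDLE;            /* matching mode: back to first state */
        return true;
    }
    fprintf(stderr, "execution exception: mismatched termination mode\n");
    return false;
}
```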
  • Patent number: 11640319
    Abstract: A task processing method, an electronic device and a storage medium, which relate to the field of artificial intelligence, such as intelligent voices, artificial intelligence chips, or the like, are disclosed. The method may include: for to-be-executed tasks, in at least one round of processing, performing the following operations: in response to determining that one or more high-priority tasks exist in the to-be-executed tasks, calling the one or more high-priority tasks to process audio data cached in a memory; and after execution of the one or more high-priority tasks is completed, and in response to determining that one or more low-priority tasks exist in the to-be-executed tasks, calling the one or more low-priority tasks to process the audio data.
    Type: Grant
    Filed: September 15, 2022
    Date of Patent: May 2, 2023
    Assignee: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Gang Ji, Chao Tian, Lei Jia
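The per-round scheduling of patent 11640319, calling any high-priority tasks on the cached audio first and the low-priority tasks only after those complete, fits a simple two-pass loop. The task representation below is a hypothetical stand-in rather than the patent's API.
```c
#include <stddef.h>

typedef enum { PRIO_HIGH, PRIO_LOW } task_priority;

typedef struct {
    task_priority priority;
    void (*run)(const short *audio, size_t samples);  /* processes cached audio */
} audio_task;

/* One round of processing: call the high-priority tasks on the cached audio,
 * and only after they have all completed call the low-priority tasks. */
static void run_round(audio_task *tasks, int n,
                      const short *cached_audio, size_t samples) {
    for (int i = 0; i < n; i++)
        if (tasks[i].priority == PRIO_HIGH)
            tasks[i].run(cached_audio, samples);
    for (int i = 0; i < n; i++)
        if (tasks[i].priority == PRIO_LOW)
            tasks[i].run(cached_audio, samples);
}
```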
  • Patent number: 11635967
    Abstract: An array processor includes processor element arrays distributed in rows and columns. The processor element arrays perform operations on parameter values. The array processor also includes memory interfaces that broadcast sets of the parameter values to mutually exclusive subsets of the rows and columns of the processor element arrays. In some cases, the array processor includes single-instruction-multiple-data (SIMD) units including subsets of the processor element arrays in corresponding rows, workgroup processors (WGPs) including subsets of the SIMD units, and a memory fabric configured to interconnect with an external memory that stores the parameter values. The memory interfaces broadcast the parameter values to the SIMD units that include the processor element arrays in rows associated with the memory interfaces and columns of processor element arrays that are implemented across the SIMD units in the WGPs. The memory interfaces access the parameter values from the external memory via the memory fabric.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: April 25, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Sateesh Lagudu, Allen H. Rush, Michael Mantor, Arun Vaidyanathan Ananthanarayan, Prasad Nagabhushanamgari, Maxim V. Kazakov
  • Patent number: 11630668
    Abstract: A processor including a pointer storage that stores pointer descriptors each including addressing information, an arithmetic logic unit (ALU) configured to execute an instruction which includes operand indexes each identifying a corresponding pointer descriptor, multiple address generation units (AGUs), each configured to translate addressing information from a corresponding pointer descriptor into memory addresses for accessing corresponding operands stored in a memory, and a smart cache. The smart cache includes a cache storage, and uses the memory addresses from the AGUs to retrieve and store operands from the memory into the cache storage, and to provide the stored operands to the ALU when executing the instruction. The smart cache replaces a register file used by a conventional processor for retrieving and storing operand information. The pointer operands include post-update capability that reduces instruction fetches. Wasted memory cycles associated with cache speculation are avoided.
    Type: Grant
    Filed: November 18, 2021
    Date of Patent: April 18, 2023
    Assignee: NXP B.V.
    Inventors: Kevin Bruce Traylor, Jayakrishnan Cheriyath Mundarath, Michael Andrew Fischer
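The operand path in patent 11630668 replaces the register file with pointer descriptors whose addressing information the AGUs turn into memory addresses, with post-update so repeated use of a descriptor walks through memory without extra address arithmetic. A simplified software model of one descriptor is sketched below; the field names and update rule are assumptions.
```c
#include <stdint.h>

/* Software model of a pointer descriptor: base address, element stride,
 * and a current offset that is post-updated after each access. */
typedef struct {
    uint8_t *base;
    int32_t  stride;     /* bytes to advance after every access */
    int32_t  offset;
} pointer_descriptor;

/* AGU-style address generation: return the operand address for this use of
 * the descriptor and post-update it, so the next access needs no separate
 * address-computation instruction. */
static uint8_t *agu_next_address(pointer_descriptor *d) {
    uint8_t *addr = d->base + d->offset;
    d->offset += d->stride;              /* post-update capability */
    return addr;
}
```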
  • Patent number: 11630790
    Abstract: An integrated circuit is provided, which includes: a processor, a general interrupt controller, and a bus master. The bus master includes: a bus-control circuit and a polling circuit, which is configured to detect whether an interrupt signal of a sensing device is asserted. In response to the polling circuit detecting that the interrupt signal is asserted, the bus-control circuit fetches each task stored in a task queue of a memory in sequence, and performs one or more data-transfer operations corresponding to each task to obtain sensor data from the sensing device. In response to a task-completion signal generated by the bus-control circuit for the tasks, the general interrupt controller generates an interrupt request signal. In response to the interrupt request signal, the processor reports a sensor event using the sensor data obtained by the data-transfer operations corresponding to each task.
    Type: Grant
    Filed: February 8, 2022
    Date of Patent: April 18, 2023
    Assignee: MEDIATEK SINGAPORE PTE. LTD.
    Inventors: Hui Zhang, Peng Zhou, Shi Ma, Fei Yin, Wulin Li
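The flow of patent 11630790, in which a polling circuit notices the asserted interrupt, the bus-control circuit walks the task queue and performs each task's data transfers, and the processor finally reports a sensor event, can be modeled as a short loop. Every structure and stub below is an illustrative assumption rather than a real device API.
```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint8_t reg;       /* sensor register this task reads          */
    uint8_t *dst;      /* buffer that receives the sensor data     */
    size_t  len;       /* bytes the data-transfer operation moves  */
} xfer_task;

/* Stubs standing in for the polling circuit, the bus-control circuit's
 * transfer, and the processor's event report. */
static bool sensor_irq_asserted(void) { return true; }
static void bus_read(uint8_t reg, uint8_t *dst, size_t len) {
    for (size_t i = 0; i < len; i++) dst[i] = reg;
}
static void report_sensor_event(int tasks_done) {
    printf("sensor event reported after %d tasks\n", tasks_done);
}

/* Once the interrupt signal is seen asserted, fetch each task from the queue
 * in sequence, perform its transfer, then report the sensor event. */
static void poll_and_collect(xfer_task *queue, int count) {
    if (!sensor_irq_asserted())
        return;
    for (int i = 0; i < count; i++)
        bus_read(queue[i].reg, queue[i].dst, queue[i].len);
    report_sensor_event(count);
}
```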
  • Patent number: 11630691
    Abstract: Disclosed embodiments relate to an improved memory system architecture for multi-threaded processors. In one example, a system includes a multi-threaded processor core (MTPC), the MTPC comprising: P pipelines, each to concurrently process T threads; a crossbar to communicatively couple the P pipelines; a memory for use by the P pipelines; a scheduler to optimize reduction operations by assigning multiple threads to generate results of commutative arithmetic operations and then accumulate the generated results; and a memory controller (MC) to connect with external storage and other MTPCs, the MC further comprising at least one optimization selected from: an instruction set architecture including a dual-memory operation; a direct memory access (DMA) engine; a buffer to store multiple pending instruction cache requests; multiple channels across which to stripe memory requests; and a shadow-tag coherency management unit.
    Type: Grant
    Filed: August 24, 2021
    Date of Patent: April 18, 2023
    Assignee: Intel Corporation
    Inventors: Robert Pawlowski, Ankit More, Jason M. Howard, Joshua B. Fryman, Tina C. Zhong, Shaden Smith, Sowmya Pitchaimoorthy, Samkit Jain, Vincent Cave, Sriram Aananthakrishnan, Bharadwaj Krishnamurthy
  • Patent number: 11625344
    Abstract: A data transmission system has a master circuit, a slave circuit, and a transmission control circuit. The slave circuit stores a plurality of data in a first format. The master circuit processes data in a second format to perform a corresponding function. The transmission control circuit is coupled to the master circuit and the slave circuit. The transmission control circuit accesses a first datum from the slave circuit according to a first access command from the master circuit, converts the first datum in the first format into a first application datum in the second format, and transmits the first application datum to the master circuit.
    Type: Grant
    Filed: June 4, 2021
    Date of Patent: April 11, 2023
    Assignee: Realtek Semiconductor Corp.
    Inventors: Kai-Ting Shr, Kuan-Hsing Lu
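The transmission control circuit of patent 11625344 sits between a master that consumes one data format and a slave that stores another, converting on each access. The sketch below uses a made-up pair of formats (packed 8-bit values in the slave, sign-extended 32-bit values for the master) purely to illustrate the access-and-convert step.
```c
#include <stddef.h>
#include <stdint.h>

/* Slave-side storage: data kept in a packed 8-bit "first format". */
static uint8_t slave_store[256];

/* For an access command covering `count` data starting at `index`, read each
 * datum from the slave, convert it into the "second format" the master works
 * with (here: sign-extended 32-bit values), and hand the converted
 * application data back to the master. */
static void access_and_convert(size_t index, size_t count, int32_t *to_master) {
    for (size_t i = 0; i < count && index + i < sizeof slave_store; i++)
        to_master[i] = (int8_t)slave_store[index + i];   /* format conversion */
}
```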
  • Patent number: 11614949
    Abstract: An integrated circuit comprises a processing unit configured to boot up with a set of boot instructions, then to determine the size of the instructions of an application program and, potentially, to reboot on its own initiative in a reconfigured state so that it can execute the instructions of the application program. As a consequence, only one boot memory is needed.
    Type: Grant
    Filed: June 11, 2020
    Date of Patent: March 28, 2023
    Assignees: STMicroelectronics (Grenoble 2) SAS, STMicroelectronics (Grand Ouest) SAS
    Inventors: Loic Pallardy, Ignazio Antonino Urzi, Jean-Francis Duret
  • Patent number: 11604755
    Abstract: Presented herein are improvements to computer system architecture. In one embodiment, a method includes reconfiguring system interconnect links disposed between a first central processing unit socket and a second central processing unit socket, disposed together on a single motherboard, as peripheral bus links; and transmitting electrical signals, via the peripheral bus links, and via a printed circuit board that bridges the second central processing unit socket, to at least one input/output functional block that is disposed on the single motherboard and that is selectively connectable to the second central processing unit socket.
    Type: Grant
    Filed: March 9, 2021
    Date of Patent: March 14, 2023
    Assignee: CISCO TECHNOLOGY, INC.
    Inventors: Jayaprakash Balachandran, Bidyut Kanti Sen, Kenny Lieu, Dattatri N. Mattur
  • Patent number: 11599490
    Abstract: A packet header is received from a host and written to a header queue. A direct memory access (DMA) descriptor is received from the host and written to a packet descriptor queue. The DMA descriptor points to packet data in a host memory. The packet data is fetched from host memory and the packet header and the packet data are provided to a network interface.
    Type: Grant
    Filed: March 6, 2018
    Date of Patent: March 7, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Georgy Machulsky, Nafea Bshara, Netanel Israel Belgazal, Evgeny Schmeilin, Said Bshara
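The egress path of patent 11599490 keeps two queues: packet headers written by the host, and DMA descriptors pointing at payload still resident in host memory. A rough model of pairing one entry from each queue, fetching the payload, and handing both to the network interface follows; all structures and helpers are assumptions.
```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef struct {
    uint8_t bytes[64];        /* header written by the host into the header queue */
    size_t  len;
} pkt_header;

typedef struct {
    uint64_t host_addr;       /* where the packet data lives in host memory */
    size_t   len;
} dma_descriptor;

/* Stand-ins for the DMA fetch from host memory and the hand-off to the
 * network interface; neither is a real device API. */
static void dma_fetch(uint64_t host_addr, size_t len, uint8_t *out) {
    (void)host_addr;
    memset(out, 0, len);      /* stub: a real engine would copy from the host */
}
static void send_to_network_interface(const pkt_header *h,
                                      const uint8_t *data, size_t len) {
    (void)h; (void)data; (void)len;
}

/* Pair one header-queue entry with one descriptor-queue entry, fetch the
 * payload the descriptor points at, and provide both to the interface. */
static void transmit_one(const pkt_header *h, const dma_descriptor *d) {
    uint8_t payload[2048];
    size_t n = d->len < sizeof payload ? d->len : sizeof payload;
    dma_fetch(d->host_addr, n, payload);
    send_to_network_interface(h, payload, n);
}
```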
  • Patent number: 11593278
    Abstract: Some embodiments provide a method of providing distributed storage services to a host computer from a network interface card (NIC) of the host computer. At the NIC, the method accesses a set of one or more external storages operating outside of the host computer through a shared port of the NIC that is used not only to access the set of external storages but also to forward packets not related to an external storage. In some embodiments, the method accesses the external storage set by using a network fabric storage driver that employs a network fabric storage protocol to access the external storage set. The method presents the external storage as a local storage of the host computer to a set of programs executing on the host computer. In some embodiments, the method presents the local storage by using a storage emulation layer on the NIC to create a local storage construct that presents the set of external storages as a local storage of the host computer.
    Type: Grant
    Filed: January 9, 2021
    Date of Patent: February 28, 2023
    Assignee: VMWARE, INC.
    Inventors: Jinpyo Kim, Claudio Fleiner, Marc Fleischmann, Anjaneya P. Gondi, Yongqi Hu
  • Patent number: 11580371
    Abstract: A method, apparatus, and system are discussed to efficiently process and execute Artificial Intelligence operations. An integrated circuit has a tailored architecture to process and execute Artificial Intelligence operations, including computations for a neural network having weights with a sparse value. The integrated circuit contains at least a scheduler, one or more arithmetic logic units, and one or more random access memories configured to cooperate with each other to process and execute these computations for the neural network having weights with the sparse value.
    Type: Grant
    Filed: March 12, 2020
    Date of Patent: February 14, 2023
    Assignee: Roviero, Inc.
    Inventor: Deepak Mital
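Patent 11580371 targets neural networks whose weights are largely zero. The essential arithmetic saving, skipping multiply-accumulates for zero-valued weights, is shown below in scalar C as a loose illustration of the idea, not of the circuit's actual scheduling.
```c
#include <stddef.h>

/* Dot product that skips zero weights: the saving a sparse-aware
 * accelerator exploits at the hardware level. */
static float sparse_dot(const float *weights, const float *activations,
                        size_t n) {
    float acc = 0.0f;
    for (size_t i = 0; i < n; i++) {
        if (weights[i] == 0.0f)
            continue;                 /* no MAC issued for a zero weight */
        acc += weights[i] * activations[i];
    }
    return acc;
}
```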
  • Patent number: 11580052
    Abstract: The present disclosure relates to a communication method by I2C bus between an emitting device and a receiving device, in which: a rising edge of a clock signal of the I2C bus, directly following a start condition of an I2C communication, is recorded; and when an interrupt is generated within the receiving device, the receiving device verifies whether the rising edge was recorded.
    Type: Grant
    Filed: August 26, 2020
    Date of Patent: February 14, 2023
    Assignee: STMicroelectronics (Grand Ouest) SAS
    Inventor: Yves Magnaud
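Patent 11580052 records whether a rising clock edge directly followed an I2C START condition, then checks that record when an interrupt fires in the receiving device. The flag-based model below uses placeholder handler names.
```c
#include <stdbool.h>

/* State for the receiving device's record of the bus activity. */
static volatile bool just_saw_start;
static volatile bool rising_edge_recorded;

static void on_start_condition(void) {
    just_saw_start = true;             /* next SCL rising edge is the one to record */
    rising_edge_recorded = false;
}

static void on_scl_rising_edge(void) {
    if (just_saw_start) {
        rising_edge_recorded = true;   /* edge directly following the START */
        just_saw_start = false;
    }
}

/* Interrupt within the receiving device: verify whether the rising edge was
 * recorded before treating this as a genuine I2C communication. */
static bool on_receiver_interrupt(void) {
    return rising_edge_recorded;
}
```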
  • Patent number: 11573797
    Abstract: Disclosed are various embodiments for computing 2-body statistics on graphics processing units (GPUs). Various types of two-body statistics (2-BS) are regarded as essential components of data analysis in many scientific and computing domains. However, the quadratic complexity of these computations hinders timely processing of data. Accordingly, various embodiments of the present disclosure involve parallel algorithms for 2-BS computation on Graphics Processing Units (GPUs). Although typical 2-BS problems can be summarized into a straightforward parallel computing pattern, traditional wisdom from (general) parallel computing often falls short in delivering the best possible performance. Therefore, various embodiments of the present disclosure involve techniques to decompose 2-BS problems and methods for effective use of computing resources on GPUs. Analytical models are also developed to guide users towards the appropriate parameters of a GPU program.
    Type: Grant
    Filed: September 14, 2021
    Date of Patent: February 7, 2023
    Assignee: UNIVERSITY OF SOUTH FLORIDA
    Inventors: Yicheng Tu, Napath Pitaksirianan
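A canonical 2-body statistic is the pairwise-distance histogram over a point set, quadratic in the number of points, which is exactly the cost the GPU decomposition of patent 11573797 targets. The plain C baseline below shows the statistic itself, not the GPU decomposition; the bin settings are invented.
```c
#include <math.h>
#include <stddef.h>

#define BINS 64
#define BIN_WIDTH 0.1      /* assumed histogram resolution */

typedef struct { double x, y, z; } point3;

/* O(n^2) pairwise-distance histogram: the 2-BS whose quadratic cost the
 * GPU algorithms in the patent are designed to tame. */
static void pairwise_histogram(const point3 *p, size_t n, long hist[BINS]) {
    for (size_t i = 0; i < n; i++)
        for (size_t j = i + 1; j < n; j++) {
            double dx = p[i].x - p[j].x;
            double dy = p[i].y - p[j].y;
            double dz = p[i].z - p[j].z;
            double d = sqrt(dx * dx + dy * dy + dz * dz);
            size_t bin = (size_t)(d / BIN_WIDTH);
            if (bin < BINS)
                hist[bin]++;
        }
}
```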
  • Patent number: 11573795
    Abstract: In various examples, a VPU and associated components may be optimized to improve VPU performance and throughput. For example, the VPU may include a min/max collector, automatic store predication functionality, a SIMD data path organization that allows for inter-lane sharing, a transposed load/store with stride parameter functionality, a load with permute and zero insertion functionality, hardware, logic, and memory layout functionality to allow for two-point and two-by-two-point lookups, and per memory bank load caching capabilities. In addition, decoupled accelerators may be used to offload VPU processing tasks to increase throughput and performance, and a hardware sequencer may be included in a DMA system to reduce programming complexity of the VPU and the DMA system. The DMA and VPU may execute a VPU configuration mode that allows the VPU and DMA to operate without a processing controller for performing dynamic region based data movement operations.
    Type: Grant
    Filed: August 2, 2021
    Date of Patent: February 7, 2023
    Assignee: NVIDIA Corporation
    Inventors: Ahmad Itani, Yen-Te Shih, Jagadeesh Sankaran, Ravi P Singh, Ching-Yu Hung