Patents Examined by Zachary K Huson
  • Patent number: 11977497
    Abstract: There is provided an I/O (input/output) scheduling method, the method comprising: assigning a system call identifier to each of a plurality of I/O requests derived from at least one system call requested by at least one application; sorting the plurality of I/O requests in order of the system call identifier; and transferring the sorted plurality of I/O requests to a computer-readable storage medium. Accordingly, in a mobile or desktop environment in which an application that frequently interacts with the user is executed, it is possible to minimize the increase in read latency caused by file fragmentation and thereby improve the user experience (UX).
    Type: Grant
    Filed: May 11, 2022
    Date of Patent: May 7, 2024
    Assignee: Research & Business Foundation Sungkyunkwan University
    Inventors: Young Ik Eom, Jong Gyu Park
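    Illustrative sketch: a minimal Python model of the scheduling idea described in the abstract above, not the patented implementation; the IORequest fields and the issue_syscall/schedule helpers are invented for illustration.
      from dataclasses import dataclass
      from itertools import count

      _syscall_ids = count()              # monotonically increasing system call identifier

      @dataclass
      class IORequest:
          offset: int                     # block offset on the storage medium (illustrative)
          length: int
          syscall_id: int = -1

      def issue_syscall(requests):
          """Tag every I/O request derived from one system call with the same identifier."""
          sid = next(_syscall_ids)
          for r in requests:
              r.syscall_id = sid
          return requests

      def schedule(pending):
          """Sort pending I/O requests by system call identifier before transfer to storage."""
          return sorted(pending, key=lambda r: r.syscall_id)

      a = issue_syscall([IORequest(0, 4), IORequest(96, 4)])     # fragmented foreground read
      b = issue_syscall([IORequest(512, 64)])                    # later background I/O
      print([r.syscall_id for r in schedule(b + a)])             # [0, 0, 1]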
  • Patent number: 11977902
    Abstract: An automaton is implemented in a state machine engine. The automaton is configured to observe data from the beginning of an input data stream until an end-of-data (EOD) signal is seen. Additionally, the automaton is configured to report an event only when one and only one occurrence of a target symbol is seen in the input data stream.
    Type: Grant
    Filed: May 28, 2020
    Date of Patent: May 7, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Harold B Noyes, Michael C. Leventhal, Jeffery M. Tanner, Inderjit Singh Bains
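    Illustrative sketch: a behavioral Python analogue (not the state machine engine hardware) of reporting only when exactly one occurrence of a target symbol is seen before the end-of-data signal; function and argument names are invented.
      def reports_event(stream, target, eod):
          """Consume symbols until `eod`; report only if `target` occurred exactly once."""
          seen = 0
          for symbol in stream:
              if symbol == eod:
                  break
              if symbol == target:
                  seen += 1
          return seen == 1

      print(reports_event("bcax", target="a", eod="x"))    # True  (exactly one 'a')
      print(reports_event("abcax", target="a", eod="x"))   # False (two occurrences)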
  • Patent number: 11972230
    Abstract: Embodiments for a matrix transpose and multiply operation are disclosed. In an embodiment, a processor includes a decoder and execution circuitry. The decoder is to decode an instruction having a format including an opcode field to specify an opcode, a first destination operand field to specify a destination matrix location, a first source operand field to specify a first source matrix location, and a second source operand field to specify a second source matrix location. The execution circuitry is to, in response to the decoded instruction, transpose the first source matrix to generate a transposed first source matrix, perform a matrix multiplication using the transposed first source matrix and the second source matrix to generate a result, and store the result in a destination matrix location.
    Type: Grant
    Filed: June 27, 2020
    Date of Patent: April 30, 2024
    Assignee: Intel Corporation
    Inventors: Menachem Adelman, Robert Valentine, Barukh Ziv, Amit Gradstein, Simon Rubanovich, Zeev Sperber, Mark J. Charney, Christopher J. Hughes, Alexander F. Heinecke, Evangelos Georganas, Binh Pham
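    Illustrative sketch: the arithmetic the decoded instruction performs, expressed with NumPy rather than the processor's tile registers; the ttmm helper name is invented.
      import numpy as np

      def ttmm(dst, src1, src2):
          """dst <- transpose(src1) @ src2, mirroring the single-instruction semantics."""
          dst[...] = src1.T @ src2

      A = np.arange(6, dtype=np.float32).reshape(2, 3)   # first source matrix (2x3)
      B = np.arange(8, dtype=np.float32).reshape(2, 4)   # second source matrix (2x4)
      C = np.empty((3, 4), dtype=np.float32)             # destination matrix location
      ttmm(C, A, B)
      assert np.allclose(C, A.T @ B)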
  • Patent number: 11972835
    Abstract: A latch circuit device includes: a latch circuit configured to latch an input signal to a microcomputer; a detection circuit configured to detect that the input signal is input to the latch circuit during a sleep period in which the microcomputer is in a sleep state; a wake-up circuit configured to transmit a wake-up signal to the microcomputer when an input of the input signal is detected during the sleep period; a sampling circuit configured to read the input signal from the latch circuit; a transmission circuit configured to transmit the input signal read by the sampling circuit to the microcomputer returned from the sleep state based on the wake-up signal; and a release circuit configured to release a latch state of the latch circuit after the input signal is read.
    Type: Grant
    Filed: January 24, 2022
    Date of Patent: April 30, 2024
    Assignee: DENSO TEN Limited
    Inventor: Kazuo Horiuchi
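    Illustrative sketch: a simplified software model of the latch/wake-up flow; the class, callback, and method names are invented, and the real behavior is implemented in hardware.
      class LatchCircuit:
          def __init__(self, wake_microcomputer):
              self._latched = None
              self._wake = wake_microcomputer      # stands in for the wake-up signal

          def on_input(self, signal, mcu_sleeping):
              self._latched = signal               # latch the input signal
              if mcu_sleeping:
                  self._wake()                     # detection during sleep triggers wake-up

          def sample_and_release(self):
              """Sampling circuit reads the latched value, then the latch state is released."""
              value, self._latched = self._latched, None
              return value

      events = []
      latch = LatchCircuit(wake_microcomputer=lambda: events.append("wake"))
      latch.on_input(signal=1, mcu_sleeping=True)
      events.append(("sampled", latch.sample_and_release()))
      print(events)                                # ['wake', ('sampled', 1)]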
  • Patent number: 11960891
    Abstract: A digital data processor includes an instruction memory storing instructions each specifying a data processing operation and at least one data operand field, an instruction decoder coupled to the instruction memory for sequentially recalling instructions from the instruction memory and determining the data processing operation and the at least one data operand, and at least one operational unit coupled to a data register file and to the instruction decoder to perform a data processing operation upon at least one operand corresponding to an instruction decoded by the instruction decoder and storing results of the data processing operation. The at least one operational unit is configured to perform a table write in response to a look up table write instruction by writing at least one data element from a source data register to a specified location in a specified number of at least one table.
    Type: Grant
    Filed: March 4, 2022
    Date of Patent: April 16, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: Naveen Bhoria, Duc Bui, Dheera Balasubramanian Samudrala
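    Illustrative sketch: a functional Python view of the look-up-table write instruction's effect; the table layout and the lut_write signature are assumptions, not the instruction encoding.
      def lut_write(tables, table_indices, location, src_register):
          """Write the data elements of src_register to `location` in each selected table."""
          for t in table_indices:
              tables[t][location] = list(src_register)

      tables = [[None] * 8 for _ in range(4)]            # four tables of eight entries each
      lut_write(tables, table_indices=[0, 2], location=5, src_register=[7, 7, 7, 7])
      print(tables[0][5], tables[2][5])                  # [7, 7, 7, 7] [7, 7, 7, 7]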
  • Patent number: 11954489
    Abstract: Disclosed embodiments relate to systems for performing instructions to quickly convert and use matrices (tiles) as one-dimensional vectors. In one example, a processor includes fetch circuitry to fetch an instruction having fields to specify an opcode, locations of a two-dimensional (2D) matrix and a one-dimensional (1D) vector, and a group of elements comprising one of a row, part of a row, multiple rows, a column, part of a column, multiple columns, and a rectangular sub-tile of the specified 2D matrix, and wherein the opcode is to indicate a move of the specified group between the 2D matrix and the 1D vector, decode circuitry to decode the fetched instruction; and execution circuitry, responsive to the decoded instruction, when the opcode specifies a move from 1D, to move contents of the specified 1D vector to the specified group of elements.
    Type: Grant
    Filed: December 13, 2021
    Date of Patent: April 9, 2024
    Assignee: Intel Corporation
    Inventors: Bret Toll, Christopher J. Hughes, Dan Baum, Elmoustapha Ould-Ahmed-Vall, Raanan Sade, Robert Valentine, Mark J. Charney, Alexander F. Heinecke
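    Illustrative sketch: a NumPy stand-in for the move between a 2D matrix (tile) and a 1D vector, shown for the single-row case; the move helper and its direction flag are invented.
      import numpy as np

      def move(tile, vec, row, to_vector):
          if to_vector:
              vec[:tile.shape[1]] = tile[row]            # 2D -> 1D
          else:
              tile[row] = vec[:tile.shape[1]]            # 1D -> 2D

      tile = np.zeros((4, 4), dtype=np.int32)
      vec = np.arange(4, dtype=np.int32)
      move(tile, vec, row=2, to_vector=False)            # opcode specifies "move from 1D"
      print(tile[2])                                     # [0 1 2 3]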
  • Patent number: 11954008
    Abstract: Task automation is enabled by recording, over a period of time, inputs of a computing device user to generate a log of the user's inputs in connection with one or more task applications. The user inputs are stored along with information pertaining to the one or more task applications. The log is processed to identify the one or more task applications and to generate a user task file. The log is further processed to identify the fields in the task applications into which the user entered inputs, and the identified fields are stored to the task file. The task file is processed to identify one or more tasks performed by the user. An automated software robot, encoded with instructions to perform one or more of those tasks automatically when invoked, may then be generated automatically.
    Type: Grant
    Filed: October 17, 2022
    Date of Patent: April 9, 2024
    Assignee: Automation Anywhere, Inc.
    Inventor: Abhijit Kakhandiki
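    Illustrative sketch: a toy Python pipeline from recorded inputs to a task file and a replayable step list; the event schema and function names are hypothetical, not Automation Anywhere's format.
      from collections import defaultdict

      def build_task_file(input_log):
          """Group recorded user inputs by task application and by the field they targeted."""
          task_file = defaultdict(lambda: defaultdict(list))
          for event in input_log:                        # e.g. {"app": ..., "field": ..., "value": ...}
              task_file[event["app"]][event["field"]].append(event["value"])
          return task_file

      def generate_bot(task_file):
          """Emit a simple list of replay steps a software robot could execute."""
          return [(app, field, values[-1])
                  for app, fields in task_file.items()
                  for field, values in fields.items()]

      log = [{"app": "invoicing", "field": "customer", "value": "ACME"},
             {"app": "invoicing", "field": "amount", "value": "120.00"}]
      print(generate_bot(build_task_file(log)))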
  • Patent number: 11954491
    Abstract: A multithread processor includes a time counter and a register scoreboard and provides a method for statically dispatching instructions with preset execution times based on a write time of a register in the register scoreboard and the time counter provided to an execution pipeline.
    Type: Grant
    Filed: March 17, 2022
    Date of Patent: April 9, 2024
    Assignee: Simplex Micro, Inc.
    Inventor: Thang Minh Tran
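    Illustrative sketch: a toy model of dispatching against a register scoreboard and a time counter, where each instruction's start time is preset from the write times of its source registers; the single fixed latency is an assumption.
      def dispatch(instructions, latency):
          scoreboard = {}                    # register -> cycle at which its value is written
          time = 0                           # the time counter
          schedule = []
          for dst, srcs in instructions:     # e.g. ("r3", ["r1", "r2"])
              ready = max([scoreboard.get(s, 0) for s in srcs], default=0)
              start = max(time, ready)       # preset execution time handed to the pipeline
              scoreboard[dst] = start + latency
              schedule.append((dst, start))
              time += 1                      # one instruction dispatched per cycle
          return schedule

      print(dispatch([("r1", []), ("r2", ["r1"]), ("r3", ["r1", "r2"])], latency=3))
      # [('r1', 0), ('r2', 3), ('r3', 6)]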
  • Patent number: 11928341
    Abstract: The disclosure relates to a sleep control method and a sleep control circuit. A data transmission circuit includes at least two data transmission structures, each including a storage transmission end, a bus transmission end, and an interactive transmission end; the storage transmission end is connected to a storage area, the bus transmission end is connected to a data bus, and the interactive transmission end is connected to another data transmission structure. The method includes: in a sleep stage, transmitting sleep data to the data bus; the bus transmission end and the storage transmission end are turned on, a sending terminal of the interactive transmission end is turned on, and a receiving terminal of the interactive transmission end is turned off, so that data input from the bus transmission end is output through the storage transmission end and the interactive transmission end.
    Type: Grant
    Filed: June 27, 2022
    Date of Patent: March 12, 2024
    Assignee: CHANGXIN MEMORY TECHNOLOGIES, INC.
    Inventor: Yufeng Tao
  • Patent number: 11928468
    Abstract: Various embodiments of a system and associated method for generating a valid mapping for a computational loop on a CGRA (coarse-grained reconfigurable architecture) are disclosed herein. In particular, the method includes generating randomized schedules within particular constraints to explore greater mapping spaces than previous approaches. Further, the system and related method employ a feasibility test to check the validity of each schedule, so that mappings are generated only from valid schedules.
    Type: Grant
    Filed: November 23, 2021
    Date of Patent: March 12, 2024
    Assignee: ARIZONA BOARD OF REGENTS ON BEHALF OF ARIZONA STATE UNIVERSITY
    Inventors: Mahesh Balasubramanian, Aviral Shrivastava
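    Illustrative sketch: the explore-and-filter loop in abstract form (randomized schedules checked by a feasibility test); the dependence-only feasibility check and slot bound are placeholders for the CGRA-specific constraints.
      import random

      def random_schedule(ops, max_slot):
          """Assign each operation a random time slot within the allowed window."""
          return {op: random.randint(0, max_slot) for op in ops}

      def feasible(schedule, deps):
          """Here, a schedule is valid only if every producer is placed before its consumer."""
          return all(schedule[src] < schedule[dst] for src, dst in deps)

      def explore(ops, deps, max_slot, trials=1000):
          valid = []
          for _ in range(trials):
              s = random_schedule(ops, max_slot)
              if feasible(s, deps):
                  valid.append(s)            # mappings are generated only from valid schedules
          return valid

      schedules = explore(["load", "mul", "store"],
                          deps=[("load", "mul"), ("mul", "store")], max_slot=4)
      print(len(schedules), "valid schedules found")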
  • Patent number: 11915046
    Abstract: A method for image processing is provided. The method may include: obtaining a plurality of frames, each of the plurality of frames comprising a plurality of pixels; determining, based on the plurality of frames, whether a current frame of the plurality of frames comprises a moving object; in response to determining that the current frame includes no moving object, obtaining a first count of frames, and generating a target image by superimposing the first count of frames; in response to determining that the current frame includes a moving object, obtaining a second count of frames, and generating the target image by superimposing the second count of frames.
    Type: Grant
    Filed: August 29, 2022
    Date of Patent: February 27, 2024
    Assignee: SHANGHAI UNITED IMAGING HEALTHCARE CO., LTD.
    Inventors: Wanli Teng, Shurui Zhao, Juan Feng, Yecheng Han, Chunhua Jiang, Yong E
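    Illustrative sketch: a NumPy rendering of the idea that more frames are superimposed when no moving object is detected and fewer when one is; the motion metric, threshold, and frame counts are invented.
      import numpy as np

      def has_motion(prev, curr, threshold=10.0):
          return np.mean(np.abs(curr.astype(np.float32) - prev.astype(np.float32))) > threshold

      def target_image(frames, static_count=8, moving_count=2):
          moving = has_motion(frames[-2], frames[-1])
          count = moving_count if moving else static_count
          stack = np.stack(frames[-count:]).astype(np.float32)
          return stack.mean(axis=0)          # superimpose the selected number of frames

      frames = [np.full((4, 4), i, dtype=np.uint8) for i in range(10)]
      print(target_image(frames).mean())     # 5.5 (static scene: 8 frames averaged)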
  • Patent number: 11914899
    Abstract: A system (e.g., NVMe controller) for managing access to a memory resource by multiple users may include memory storing function queue categorizations for function queues associated with each user, and circuitry to store and execute a multi-user arbitration algorithm that arbitrates access to the memory resource by the multiple users. The function queue categorizations assign a function category to each function queue associated with each user.
    Type: Grant
    Filed: May 13, 2022
    Date of Patent: February 27, 2024
    Assignee: Microchip Technology Incorporated
    Inventors: Kwok Kong, William Brent Wilson, Ihab Jaser, Donia Sebastian, Dan McLeran
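    Illustrative sketch: a simplified arbitration pass in Python, in which each user's function queues are grouped by category and a round-robin over users picks the next command to reach the shared memory resource; the category names and their ordering are assumptions.
      from collections import deque

      def arbitrate(users):
          """users: {user: {category: deque of commands}} -> dispatch order."""
          order = []
          active = deque(users.keys())
          while active:
              user = active.popleft()
              for category in ("admin", "io"):               # assumed category priority
                  q = users[user].get(category)
                  if q:
                      order.append((user, category, q.popleft()))
                      break
              if any(users[user].values()):
                  active.append(user)                        # user still has pending work
          return order

      users = {"vm0": {"admin": deque(["a0"]), "io": deque(["i0", "i1"])},
               "vm1": {"admin": deque(), "io": deque(["j0"])}}
      print(arbitrate(users))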
  • Patent number: 11914997
    Abstract: A method for executing new instructions is provided. The method is used in a processor and includes: receiving an instruction; when the received instruction is an unknown instruction, executing a conversion program by an operating system, wherein the conversion program executes the following steps: determining whether the received instruction is a new instruction; converting the received instruction into at least one old instruction when the received instruction is a new instruction; and executing the at least one old instruction.
    Type: Grant
    Filed: September 10, 2021
    Date of Patent: February 27, 2024
    Assignee: SHANGHAI ZHAOXIN SEMICONDUCTOR CO., LTD.
    Inventors: Weilin Wang, Mengchen Yang, Yingbing Guan
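    Illustrative sketch: the trap-and-translate flow in Python, where an unknown instruction is handed to a conversion routine that rewrites a "new" instruction as old instructions; the instruction strings and lowering table are hypothetical.
      NEW_TO_OLD = {
          # hypothetical lowering: one new fused instruction becomes two old instructions
          "fmadd r1, r1, r2, r3": ["mul t0, r2, r3", "add r1, r1, t0"],
      }

      def execute(instr, known_old, run_old):
          if instr in known_old:
              return [run_old(instr)]
          old_seq = NEW_TO_OLD.get(instr)              # the conversion program's table
          if old_seq is None:
              raise ValueError(f"unsupported instruction: {instr}")
          return [run_old(op) for op in old_seq]       # execute the converted old instructions

      print(execute("fmadd r1, r1, r2, r3", known_old=set(),
                    run_old=lambda op: f"ran {op}"))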
  • Patent number: 11914999
    Abstract: This disclosure presents a new loop fusion framework called DNNFusion. The key advantages of DNNFusion include: 1) a new high-level abstraction comprising mapping type of operators and their combinations and the Extended Computational Graph, and analyses on these abstractions, 2) a novel mathematical-property-based graph rewriting, and 3) an integrated fusion plan generation. DNNFusion is extensively evaluated on 15 diverse DNN models on multiple mobile devices, and evaluation results show that it outperforms four state-of-the-art DNN execution frameworks by up to 8.8× speedup, and for the first time allows many cutting-edge DNN models not supported by prior end-to-end frameworks to execute efficiently on mobile devices (even in real time). In addition, DNNFusion improves both cache performance and device utilization, enabling execution on devices with more restricted resources. It also reduces performance tuning time during compilation.
    Type: Grant
    Filed: June 16, 2022
    Date of Patent: February 27, 2024
    Assignee: William and Mary University
    Inventors: Bin Ren, Wei Niu
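    Illustrative sketch: a tiny example of why operator fusion helps (two elementwise operators collapsed into one pass, removing the intermediate tensor); this is generic fusion, not DNNFusion's graph rewriting or plan generation.
      import numpy as np

      def unfused(x, scale, bias):
          t = x * scale                                # intermediate tensor materialized
          return np.maximum(t + bias, 0.0)

      def fused(x, scale, bias):
          return np.maximum(x * scale + bias, 0.0)     # single fused elementwise kernel

      x = np.random.rand(1024).astype(np.float32)
      assert np.allclose(unfused(x, 2.0, -0.5), fused(x, 2.0, -0.5))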
  • Patent number: 11907159
    Abstract: A method of representing a distributed computing system is provided, the distributed computing system comprising a plurality of processing devices connected together according to a predefined topology. The method comprises receiving at least one piece of data from an activity log file relating to at least one processing device among the plurality of processing devices, receiving at least one metric relating to at least one processing device among the plurality of processing devices, receiving at least the predefined topology of the distributed computing system, constructing a graph representative of a distributed computing system operation, the graph comprising the data item extracted from the received log file, the received metric, and the received topology, and embedding at least one part of the graph to obtain at least one state vector representing the at least one embedded part of the graph.
    Type: Grant
    Filed: August 18, 2022
    Date of Patent: February 20, 2024
    Assignees: BULL SAS, LE COMMISSARIAT À L'ÉNERGIE ATOMIQUE ET AUX ÉNERGIES ALTERNATIVES
    Inventors: Emeric Dynomant, Pierre Seroul
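    Illustrative sketch: assembling a per-node graph from log entries, metrics, and the known topology, then reducing part of it to a state value; plain dictionaries and a naive average stand in for the graph structure and the learned embedding.
      def build_graph(log_entries, metrics, topology):
          graph = {node: {"neighbors": list(nbrs), "features": []}
                   for node, nbrs in topology.items()}
          for node, value in metrics.items():
              graph[node]["features"].append(float(value))
          for node, entry in log_entries:
              graph[node]["features"].append(float(len(entry)))   # toy feature from a log line
          return graph

      def embed(graph, nodes):
          feats = [f for n in nodes for f in graph[n]["features"]]
          return sum(feats) / len(feats)       # crude scalar "state vector" for the subgraph

      topo = {"node0": ["node1"], "node1": ["node0"]}
      g = build_graph([("node0", "job started")], {"node0": 0.7, "node1": 0.3}, topo)
      print(embed(g, ["node0"]))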
  • Patent number: 11900116
    Abstract: A system may determine that two instructions may be combined based on a processing power of the processor and a size of the instructions, fuse the two instructions into a pair, map the two instructions to a single register tag, write the register tag into a mapper with bits indicating that the register tag is for a first instruction of the two instructions, write the register tag into the mapper with bits indicating that the register tag is for a second instruction of the two instructions, write the fused instruction pair into an issue queue, issue the fused instruction pair to a vector-scalar transformation unit (VSU), and execute the two instructions.
    Type: Grant
    Filed: September 29, 2021
    Date of Patent: February 13, 2024
    Assignee: International Business Machines Corporation
    Inventors: Dung Q. Nguyen, Brian W. Thompto, Jose E. Moreira, Jessica Hui-Chun Tseng, Pratap C. Pattnaik, Kattamuri Ekanadham, Manoj Kumar
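    Illustrative sketch: pairing two instructions under one register tag before they enter the issue queue; the size check, tag format, and mapper layout are invented stand-ins for the hardware structures.
      def try_fuse(i1, i2, max_fused_size):
          if len(i1["encoding"]) + len(i2["encoding"]) > max_fused_size:
              return None                                 # instructions cannot be combined
          tag = f"tag:{i1['dest']}"                       # single register tag for the pair
          mapper = [(tag, "first", i1["dest"]),           # bits marking the first instruction
                    (tag, "second", i2["dest"])]          # bits marking the second instruction
          issue_queue_entry = {"tag": tag, "pair": (i1, i2)}
          return mapper, issue_queue_entry

      i1 = {"dest": "v1", "encoding": b"\x01\x02"}
      i2 = {"dest": "v2", "encoding": b"\x03\x04"}
      print(try_fuse(i1, i2, max_fused_size=8))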
  • Patent number: 11901037
    Abstract: Apparatuses and methods for writing and storing parameter codes for operating parameters, and selecting between the parameter codes to set an operating condition for a memory are disclosed. An example apparatus includes a first mode register and a second mode register. The first mode register is configured to store first and second parameter codes for a same operating parameter. The second mode register is configured to store a parameter code for a control parameter to select between the first and second parameter codes to set a current operating condition for the operating parameter. An example method includes storing in a first register a first parameter code for an operating parameter used to set a first memory operating condition, and further includes storing in a second register a second parameter code for the operating parameter used to set a second memory operating condition.
    Type: Grant
    Filed: January 23, 2023
    Date of Patent: February 13, 2024
    Inventors: Dean D. Gans, Daniel C. Skinner
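    Illustrative sketch: two parameter codes stored for one operating parameter, with a control parameter selecting which one sets the current operating condition; the register layout and values are invented.
      class ModeRegisters:
          def __init__(self, code_a, code_b):
              self.mr_param = {"A": code_a, "B": code_b}   # first mode register: two codes
              self.mr_ctrl = "A"                           # second mode register: selector

          def select(self, which):
              self.mr_ctrl = which                         # switch the operating condition

          def current_operating_code(self):
              return self.mr_param[self.mr_ctrl]

      regs = ModeRegisters(code_a=0b0011, code_b=0b0110)   # e.g. two latency settings
      regs.select("B")
      print(bin(regs.current_operating_code()))            # 0b110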
  • Patent number: 11880757
    Abstract: Embodiments relate to a neural engine circuit that includes an input buffer circuit, a kernel extract circuit, and a multiply-accumulator (MAC) circuit. The MAC circuit receives input data from the input buffer circuit and a kernel coefficient from the kernel extract circuit. The MAC circuit contains several multiply-add (MAD) circuits and accumulators used to perform neural networking operations on the received input data and kernel coefficients. MAD circuits are configured to support fixed-point precision (e.g., INT8) and floating-point precision (FP16) of operands. In floating-point mode, each MAD circuit multiplies the integer bits of input data and kernel coefficients and adds their exponent bits to determine a binary point for alignment. In fixed-point mode, input data and kernel coefficients are multiplied. In both operation modes, the output data is stored in an accumulator, and may be sent back as accumulated values for further multiply-add operations in subsequent processing cycles.
    Type: Grant
    Filed: January 11, 2023
    Date of Patent: January 23, 2024
    Assignee: APPLE INC.
    Inventor: Christopher L Mills
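    Illustrative sketch: the two multiply-add modes in toy form; the floating-point path multiplies mantissas and adds exponents before accumulating, which simplifies the FP16 datapath, and the (mantissa, exponent) encoding is an assumption.
      def mac_fixed(acc, data, coeff):
          return acc + data * coeff                        # INT8-style multiply-add

      def mac_float(acc, data, coeff):
          (dm, de), (cm, ce) = data, coeff                 # (mantissa, exponent) pairs
          mant, exp = dm * cm, de + ce                     # multiply mantissas, add exponents
          return acc + mant * (2.0 ** exp)                 # align and accumulate

      acc = 0
      for d, k in [(3, 2), (-1, 4)]:
          acc = mac_fixed(acc, d, k)
      print(acc)                                           # 2

      facc = mac_float(0.0, (1.5, 1), (1.25, 0))
      print(facc)                                          # 3.75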
  • Patent number: 11868771
    Abstract: A system and method which allows the basic checkpoint-reverse-mode AD strategy (of recursively decomposing the computation to reduce storage requirements of reverse-mode AD) to be applied to arbitrary programs: not just programs consisting of loops, but programs with arbitrarily complex control flow. The method comprises (a) transforming the program into a formalism that allows convenient manipulation by formal tools, and (b) introducing a set of operators to allow computations to be decomposed by running them for a given period of time then pausing them, while treating the paused program as a value subject to manipulation.
    Type: Grant
    Filed: August 9, 2022
    Date of Patent: January 9, 2024
    Assignee: Purdue Research Foundation
    Inventors: Jeffrey Mark Siskind, Barak Avrum Pearlmutter
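    Illustrative sketch: checkpoint-reverse-mode AD on a straight-line chain of steps, recursively recomputing the forward pass from split points instead of storing every intermediate; the paper's pause/resume operators for arbitrary control flow are not modeled here.
      def grad_checkpointed(steps, x, seed=1.0):
          """Return d(output)/dx for y = steps[n-1].f(...steps[0].f(x)...), steps = [(f, vjp)]."""
          def run(x0, lo, hi):                     # recompute forward from a checkpoint
              for f, _ in steps[lo:hi]:
                  x0 = f(x0)
              return x0

          def backprop(x0, lo, hi, cotangent):
              if hi - lo == 1:
                  _, vjp = steps[lo]
                  return vjp(x0, cotangent)
              mid = (lo + hi) // 2
              x_mid = run(x0, lo, mid)             # checkpoint at the split point
              cotangent = backprop(x_mid, mid, hi, cotangent)
              return backprop(x0, lo, mid, cotangent)

          return backprop(x, 0, len(steps), seed)

      # y = ((x**2) * 3) + 1  ->  dy/dx = 6x
      steps = [(lambda v: v * v, lambda v, g: g * 2 * v),
               (lambda v: v * 3, lambda v, g: g * 3),
               (lambda v: v + 1, lambda v, g: g)]
      print(grad_checkpointed(steps, x=2.0))       # 12.0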
  • Patent number: 11861369
    Abstract: A PIM device writes elements of a first matrix to a first memory bank and writes elements of a second matrix to a second memory bank. The PIM device simultaneously reads same-ordered elements of the first and second matrices by simultaneously accessing the first and second memory banks. A MAC operator generates arithmetic data by performing a calculation on the data read from the first and second memory banks, and writes the arithmetic data to a third memory bank.
    Type: Grant
    Filed: January 14, 2021
    Date of Patent: January 2, 2024
    Assignee: SK hynix Inc.
    Inventor: Choung Ki Song
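    Illustrative sketch: the bank layout in plain Python, with same-order elements of the two matrices read together, combined by a MAC operation, and the result written to a third bank (shown here as a per-row dot product; the actual PIM datapath is hardware).
      def pim_mac(bank0, bank1, bank2):
          for row, (a_row, b_row) in enumerate(zip(bank0, bank1)):
              acc = 0
              for a, b in zip(a_row, b_row):       # simultaneous same-order reads
                  acc += a * b                     # MAC operator
              bank2[row] = acc                     # write arithmetic data to the third bank

      bank0 = [[1, 2, 3], [4, 5, 6]]               # elements of the first matrix
      bank1 = [[7, 8, 9], [1, 0, 1]]               # elements of the second matrix
      bank2 = [0, 0]
      pim_mac(bank0, bank1, bank2)
      print(bank2)                                 # [50, 10]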