Patents Examined by Chun-Kuan Lee
  • Patent number: 11580386
    Abstract: Disclosed herein are a convolutional layer acceleration unit, an embedded system having the convolutional layer acceleration unit, and a method for operating the embedded system. The method operates an embedded system that provides accelerated processing programmed using a Lightweight Intelligent Software Framework (LISF), and includes initializing and configuring, by a parallelization managing function entity (FE), entities present in resources for performing mathematical operations in parallel, and processing the mathematical operations in parallel, by an acceleration managing FE, using the configured entities.
    Type: Grant
    Filed: December 27, 2019
    Date of Patent: February 14, 2023
    Assignee: Electronics and Telecommunications Research Institute
    Inventor: Seung-Tae Hong
  • Patent number: 11573799
    Abstract: An apparatus and method for performing dual concurrent multiplications of packed data elements.
    Type: Grant
    Filed: April 9, 2021
    Date of Patent: February 7, 2023
    Assignee: Intel Corporation
    Inventors: Venkateswara Madduri, Elmoustapha Ould-Ahmed-Vall, Mark Charney, Robert Valentine, Jesus Corbal, Binwei Yang
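    The packed-data idea above can be illustrated with a short sketch (this is not Intel's circuit or instruction encoding; plain Python lists stand in for SIMD registers, and the adjacent-pair grouping is an assumption for illustration):

```python
def dual_packed_multiply(src1, src2):
    """Multiply corresponding packed elements from two source 'registers',
    issuing two independent multiplications per element pair (here, per
    loop iteration) to model dual concurrent multiplies."""
    assert len(src1) == len(src2) and len(src1) % 2 == 0
    result = []
    for i in range(0, len(src1), 2):
        # Two multiplications per pair, independent of each other.
        result.append(src1[i] * src2[i])
        result.append(src1[i + 1] * src2[i + 1])
    return result

print(dual_packed_multiply([1, 2, 3, 4], [5, 6, 7, 8]))  # [5, 12, 21, 32]
```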
  • Patent number: 11561922
    Abstract: Communication is performed more reliably. A CCI (I3C DDR) processing section determines the status of an index when an I3C master requests access for a read operation. An error handling section then controls an I3C slave to detect the occurrence of an error based on the status of the index and to ignore all communication until DDR mode is stopped or restarted by the I3C master, the I3C slave further being controlled to send a NACK response when performing acknowledge processing on a signal sent from the I3C master. This technology can be applied to the I3C bus, for example.
    Type: Grant
    Filed: July 20, 2018
    Date of Patent: January 24, 2023
    Assignee: Sony Semiconductor Solutions Corporation
    Inventors: Hiroo Takahashi, Naohiro Koshisaka
  • Patent number: 11556850
    Abstract: The present disclosure relates to a system, a method, and a product for optimizing hyper-parameters for generation and execution of a machine-learning model under constraints. The system includes a memory storing instructions and a processor in communication with the memory. When executed by the processor, the instructions cause the processor to obtain input data and an initial hyper-parameter set; then, for each iteration, to build a machine learning model based on the hyper-parameter set, evaluate the machine learning model against the target data to obtain a performance metrics set, and determine whether the performance metrics set satisfies the stopping criteria set. If so, the instructions cause the processor to perform an exploitation process to obtain an optimal hyper-parameter set and exit the iteration; if not, to perform an exploration process to obtain a next hyper-parameter set and perform a next iteration using the next hyper-parameter set as the hyper-parameter set.
    Type: Grant
    Filed: January 22, 2020
    Date of Patent: January 17, 2023
    Assignee: Accenture Global Solutions Limited
    Inventors: Andrew Nam, Yao Yang, Teresa Sheausan Tung, Mohamad Mehdi Nasr-Azadani, Zaid Tashman, Ruiwen Li
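    The build/evaluate/explore-or-exploit loop described above can be sketched as follows (a minimal illustration; every function name and the toy objective are hypothetical stand-ins, not the patented system):

```python
def optimize_hyperparameters(build, evaluate, initial_hp, satisfies_stop,
                             explore, exploit, max_iters=50):
    """Iterate: build a model, evaluate it, and either exploit (done)
    or explore (propose the next hyper-parameter set)."""
    hp = initial_hp
    for _ in range(max_iters):
        model = build(hp)              # build model from current set
        metrics = evaluate(model)      # evaluate against target data
        if satisfies_stop(metrics):    # stopping criteria met:
            return exploit(hp)         # exploitation -> optimal set
        hp = explore(hp)               # exploration -> next set
    return hp                          # budget exhausted

# Toy usage: a single numeric hyper-parameter with a peak at 3.0.
best = optimize_hyperparameters(
    build=lambda hp: hp,
    evaluate=lambda m: -(m - 3.0) ** 2,
    initial_hp=0.0,
    satisfies_stop=lambda metric: metric > -0.25,
    explore=lambda hp: hp + 0.5,
    exploit=lambda hp: hp,
)
print(best)  # 3.0
```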
  • Patent number: 11550742
    Abstract: The present disclosure includes apparatuses and methods for in data path compute operations. An example apparatus includes an array of memory cells. Sensing circuitry is selectably coupled to the array. A plurality of shared input/output (I/O) lines provides a data path. The plurality of shared I/O lines selectably couples a first subrow of a row of the array via the sensing circuitry to a first compute component in the data path to move a first data value from the first subrow to the first compute component and a second subrow of the respective row via the sensing circuitry to a second compute component to move a second data value from the second subrow to the second compute component. An operation is performed on the first data value from the first subrow using the first compute component substantially simultaneously with movement of the second data value from the second subrow to the second compute component.
    Type: Grant
    Filed: June 28, 2021
    Date of Patent: January 10, 2023
    Assignee: Micron Technology, Inc.
    Inventor: Perry V. Lea
  • Patent number: 11543842
    Abstract: An integrated circuit includes a clock control circuit coupled to a reference clock signal node and a plurality of circuits including a voltage regulator, a digital circuit, and an analog circuit. The voltage regulator, in operation, supplies a regulated voltage. The clock control circuit, in operation, generates a system clock. Input/output interface circuitry is coupled to the plurality of circuits and a common input/output node. The input/output interface circuitry, in operation, selectively couples one of the plurality of circuits to the common input/output node.
    Type: Grant
    Filed: February 27, 2020
    Date of Patent: January 3, 2023
    Assignee: STMICROELECTRONICS S.r.l.
    Inventors: Mirko Dondini, Daniele Mangano, Riccardo Condorelli
  • Patent number: 11546189
    Abstract: An access node that can be configured and optimized to perform input and output (I/O) tasks, such as storage and retrieval of data to and from network devices (such as solid state drives), networking, data processing, and the like. For example, the access node may be configured to receive data to be processed, wherein the access node includes a plurality of processing cores, a data network fabric, and a control network fabric; receive, over the control network fabric, a work unit message indicating a processing task to be performed by a processing core; and process the work unit message, wherein processing the work unit message includes retrieving data associated with the work unit message over the data network fabric.
    Type: Grant
    Filed: May 18, 2020
    Date of Patent: January 3, 2023
    Assignee: Fungible, Inc.
    Inventors: Pradeep Sindhu, Jean-Marc Frailong, Bertrand Serlet, Wael Noureddine, Felix A. Marti, Deepak Goel, Paul Kim, Rajan Goyal, Aibing Zhou
  • Patent number: 11544069
    Abstract: A system, method and apparatus to facilitate data exchange via pointers. For example, in a computing system having a first processor and a second processor that is separate and independent from the first processor, the first processor can run a program configured to use a pointer identifying a virtual memory address having an ID of an object and an offset within the object. The first processor can use the virtual memory address to store data at a memory location in the computing system and/or identify a routine at the memory location for execution by the second processor. After the pointer is communicated from the first processor to the second processor, the second processor can access the same memory location identified by the virtual memory address. The second processor may operate on the data stored at the memory location or load the routine from the memory location for execution.
    Type: Grant
    Filed: October 25, 2018
    Date of Patent: January 3, 2023
    Assignee: Micron Technology, Inc.
    Inventor: Steven Jeffrey Wallach
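    The object-ID-plus-offset pointer described above can be sketched as a simple bit layout (an assumption for illustration, not Micron's actual encoding or bit widths):

```python
# A 64-bit virtual address whose high bits carry an object ID and whose
# low bits carry an offset within that object. Both processors decode
# the same pointer to the same (object, offset) pair, so a shared memory
# location is named identically on each side.
OFFSET_BITS = 32

def make_pointer(object_id, offset):
    assert 0 <= offset < (1 << OFFSET_BITS)
    return (object_id << OFFSET_BITS) | offset

def split_pointer(ptr):
    return ptr >> OFFSET_BITS, ptr & ((1 << OFFSET_BITS) - 1)

p = make_pointer(7, 0x100)
print(split_pointer(p))  # (7, 256)
```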
  • Patent number: 11537864
    Abstract: Embodiments relate to a neural processor that includes one or more neural engine circuits and planar engine circuits. The neural engine circuits can perform convolution operations of input data with one or more kernels to generate outputs. The planar engine circuit is coupled to the plurality of neural engine circuits. A planar engine circuit can be configured to operate in multiple modes. In a reduction mode, the planar engine circuit may process values arranged in one or more dimensions of the input to generate a reduced value. The reduced values across multiple input data may be accumulated. The planar engine circuit may program a filter circuit as a reduction tree to gradually reduce the data into a reduced value. The reduction operation reduces the size of one or more dimensions of a tensor.
    Type: Grant
    Filed: November 26, 2019
    Date of Patent: December 27, 2022
    Assignee: Apple Inc.
    Inventors: Christopher L. Mills, Kenneth W. Waters, Youchang Kim
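    The reduction-tree idea above can be sketched in software (an illustrative pairwise tree; the operator and tree shape are assumptions, not Apple's filter-circuit design):

```python
def reduce_tree(values, op=lambda a, b: a + b):
    """Gradually reduce a vector to a single value by combining
    adjacent pairs level by level, as a reduction tree would."""
    level = list(values)
    while len(level) > 1:
        nxt = [op(level[i], level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:
            nxt.append(level[-1])   # odd leftover passes up unchanged
        level = nxt
    return level[0]

print(reduce_tree([1, 2, 3, 4, 5]))  # 15
```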
  • Patent number: 11526453
    Abstract: Methods, apparatuses, and systems related to an apparatus are described. The apparatus may include (1) a read state circuit configured to control the schedule/timing associated with parallel pipelines, and (2) a timing control circuit configured to coordinate output of data from the parallel pipelines.
    Type: Grant
    Filed: August 13, 2021
    Date of Patent: December 13, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Kallol Mazumder, Navya Sri Sreeram, Ryo Fujimaki
  • Patent number: 11520713
    Abstract: Embodiments use a distributed bus arbiter for one-cycle channel selection with inter-channel ordering constraints. The distributed bus arbiter orders one or more memory bus transactions originating from a plurality of master bus components to a plurality of shared remote slaves over shared serial channels attached to differing interconnect instances.
    Type: Grant
    Filed: August 3, 2018
    Date of Patent: December 6, 2022
    Assignee: International Business Machines Corporation
    Inventors: Dimitrios Syrivelis, Andrea Reale, Kostas Katrinis
  • Patent number: 11511882
    Abstract: A method for identifying aircraft faults, comprising: receiving a dataset comprising a plurality of low priority messages and a plurality of high priority messages, each low priority message identifying a minor aircraft fault and each high priority message identifying a major aircraft fault; for each low priority message, generating an embedding vector which maps the low priority message in an embedding space; for each high priority message, generating an embedding vector which maps the high priority message in the embedding space; providing, to a machine learning unit, the embedding vector for each low priority message of the plurality of low priority messages and the embedding vector for each high priority message of the plurality of high priority messages; and obtaining, from the machine learning unit, a probability of a target high priority message occurring based on each low priority message of the plurality of low priority messages.
    Type: Grant
    Filed: June 11, 2020
    Date of Patent: November 29, 2022
    Assignees: Qatar Foundation for Education, Science and Community Development, The Boeing Company
    Inventors: Mohamed M. Elshrif, Sanjay Chawla, Franz D. Betz, Dragos D. Margineantu
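    The embed-then-predict pipeline above can be sketched with a toy stand-in (assumptions throughout: hash-based embeddings and a similarity score squashed into a probability; a real system would learn both the embeddings and the predictor from data):

```python
import math

def embed(message, dims=8):
    """Map a message to a small vector by hashing its tokens."""
    vec = [0.0] * dims
    for token in message.split():
        vec[hash(token) % dims] += 1.0
    return vec

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def probability_of_target(low_priority_msgs, target_high_priority_msg):
    """Score each low-priority message against the target high-priority
    message in embedding space, then squash the best score to (0, 1)."""
    target = embed(target_high_priority_msg)
    sims = [cosine(embed(m), target) for m in low_priority_msgs]
    score = max(sims) if sims else 0.0
    return 1.0 / (1.0 + math.exp(-4.0 * score))
```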
  • Patent number: 11507421
    Abstract: Information handling systems (IHSs) and methods are provided herein to allocate Peripheral Component Interconnect Express (PCIe) bus resources to a plurality of PCIe slots according to various PCIe bus resource allocation option settings. At least one host processor is included within the IHS for executing program instructions to detect a PCIe bus allocation option setting selected from a plurality of options provided in a boot firmware setup menu; determine if the PCIe bus allocation option setting has changed since the IHS was last booted; and allocate PCIe bus resources to the plurality of PCIe slots according to the detected PCIe bus allocation option setting. The plurality of options provided in the boot firmware setup menu include at least an auto detect option, which when selected, enables the at least one host processor to automatically detect unused PCIe slots and reallocate PCIe bus resources to used PCIe slots.
    Type: Grant
    Filed: June 11, 2019
    Date of Patent: November 22, 2022
    Assignee: Dell Products L.P.
    Inventors: Chih-Yu Chan, Terry Matula
  • Patent number: 11500795
    Abstract: A storage circuit includes a buffer coupled between the storage controller and the nonvolatile memory devices. The circuit includes one or more groups of nonvolatile memory (NVM) devices, a storage controller to control access to the NVM device, and the buffer. The buffer is coupled between the storage controller and the NVM devices. The buffer is to re-drive signals on a bus between the NVM devices and the storage controller, including synchronizing the signals to a clock signal for the signals. The circuit can include a data buffer, a command buffer, or both.
    Type: Grant
    Filed: October 25, 2019
    Date of Patent: November 15, 2022
    Assignee: Intel Corporation
    Inventors: Emily P. Chung, Frank T. Hady, George Vergis
  • Patent number: 11494322
    Abstract: A method of operation of a computing system includes: providing a first cluster having a first kernel unit for managing a first reconfigurable hardware device; analyzing an application descriptor associated with an application; generating a first bitstream based on the application descriptor for loading the first reconfigurable hardware device, the first bitstream for implementing at least a first portion of the application; and implementing a first fragment with the first bitstream in the first cluster.
    Type: Grant
    Filed: March 8, 2019
    Date of Patent: November 8, 2022
    Assignee: Xcelemor, Inc.
    Inventor: Peter J. Zievers
  • Patent number: 11487542
    Abstract: Instruction cache behavior and branch prediction are used to improve the functionality of a computing device by profiling branching instructions in an instruction cache to identify likelihoods of proceeding to a plurality of targets from the branching instructions; identifying a hot path in the instruction cache based on the identified likelihoods; and rearranging the plurality of targets relative to one another and associated branching instructions so that a first branching instruction that has a higher likelihood of proceeding to a first hot target than to a first cold target and that previously flowed to the first cold target and jumped to the first hot target instead flows to the first hot target and jumps to the first cold target.
    Type: Grant
    Filed: July 20, 2020
    Date of Patent: November 1, 2022
    Assignee: International Business Machines Corporation
    Inventors: Yang Liu, Ting Wang, Qi Li, Qing Zhang, Gui Haochen, Xiao Ping Guo, Xiao Hua Zeng, Yangming Wang, Yi Li, Hua Qing Li, Fei Fei
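    The layout decision described above can be sketched as follows (an illustrative sketch of the flow-vs-jump choice only; the data shapes and the 0.5 threshold are assumptions, not IBM's method):

```python
def layout_targets(branches):
    """branches: list of (branch_id, taken_target, fallthrough_target,
    p_taken) tuples from profiling. Returns {branch_id: (fallthrough,
    jump)} with the likelier ('hot') target placed on the fall-through
    path, so execution flows to it and only jumps to the cold target."""
    layout = {}
    for bid, taken, fall, p_taken in branches:
        if p_taken > 0.5:                 # taken target is hot:
            layout[bid] = (taken, fall)   # flow to hot, jump to cold
        else:
            layout[bid] = (fall, taken)
    return layout

print(layout_targets([("b1", "hot", "cold", 0.9)]))
# {'b1': ('hot', 'cold')}
```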
  • Patent number: 11487681
    Abstract: Enhanced techniques for communicating with an integrated circuit chip card are disclosed. An integrated circuit chip card may include a processor, a memory storing a plurality of applications executable by the processor, an input/output (I/O) interface, and a network interface coupled to the I/O interface. The network interface may implement a plurality of logical ports, and the network interface can be configurable to select between multiple communication protocols to communicate with an external device in a socket communication mode. The network interface can be configured to establish a plurality of communication channels between the external device and the integrated circuit chip card using the plurality of logical ports, and each of the communication channels may support communication with one of the plurality of applications.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: November 1, 2022
    Assignee: Visa International Service Association
    Inventor: Kiushan Pirzadeh
  • Patent number: 11487673
    Abstract: A system for managing virtual memory. The system includes a first processing unit configured to execute a first operation that references a first virtual memory address. The system also includes a first memory management unit (MMU) associated with the first processing unit and configured to generate a first page fault upon determining that a first page table that is stored in a first memory unit associated with the first processing unit does not include a mapping corresponding to the first virtual memory address. The system further includes a first copy engine associated with the first processing unit. The first copy engine is configured to read a first command queue to determine a first mapping that corresponds to the first virtual memory address and is included in a first page state directory. The first copy engine is also configured to update the first page table to include the first mapping.
    Type: Grant
    Filed: October 16, 2013
    Date of Patent: November 1, 2022
    Assignee: NVIDIA Corporation
    Inventors: Jerome F. Duluk, Jr., Cameron Buschardt, Sherry Cheung, James Leroy Deming, Samuel H. Duncan, Lucien Dunning, Robert George, Arvind Gopalakrishnan, Mark Hairgrove, Chenghuan Jia, John Mashey
  • Patent number: 11474824
    Abstract: Systems and methods for performance-benchmarking-based selection of a processor for generating graphic primitives. An example method comprises: initializing, by a computer system comprising a plurality of processors of a plurality of processor types, a current value of a graphic primitive parameter; for each processor type of the plurality of processor types, computing a corresponding value of a performance metric by generating, using at least one processor of a currently selected processor type, a corresponding graphic primitive of a specified graphic primitive type, wherein the graphic primitive is characterized by the current value of the graphic primitive parameter; and estimating, based on the computed performance metric values, a threshold value of the graphic primitive parameter.
    Type: Grant
    Filed: May 8, 2020
    Date of Patent: October 18, 2022
    Assignee: Corel Corporation
    Inventors: Christopher Tremblay, John Jason Kurczak
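    The threshold estimation described above can be sketched like this (an illustrative crossover search; the cost functions stand in for real per-processor-type render-time measurements and are assumptions):

```python
def find_threshold(cost_a, cost_b, param_values):
    """Sweep a graphic-primitive parameter and return the first value
    at which processor type B becomes cheaper than type A, else None."""
    for p in param_values:
        if cost_b(p) < cost_a(p):
            return p
    return None

# e.g. type A has low fixed overhead; type B amortizes a fixed dispatch
# cost over large primitives, so it wins past some threshold size.
threshold = find_threshold(
    cost_a=lambda n: 0.01 * n,        # per-element software cost
    cost_b=lambda n: 5 + 0.001 * n,   # fixed dispatch + cheap per element
    param_values=range(100, 2000, 100),
)
print(threshold)  # 600
```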
  • Patent number: 11475314
    Abstract: A learning device includes a data storage unit configured to store learning data for learning a decision tree; a learning unit configured to determine whether to cause learning data stored in the data storage unit to branch to one node or to the other node of the lower nodes of a node, based on a branch condition for that node of the decision tree; and a first buffer unit and a second buffer unit configured to buffer learning data determined to branch to the one node and the other node, respectively, by the learning unit, up to a capacity determined in advance. The first buffer unit and the second buffer unit are configured to, in response to buffering learning data up to the capacity determined in advance, write the learning data into continuous addresses of the data storage unit for each predetermined block.
    Type: Grant
    Filed: March 12, 2020
    Date of Patent: October 18, 2022
    Assignee: Ricoh Company, Ltd.
    Inventors: Ryosuke Kasahara, Takuya Tanaka
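    The two-buffer scheme above can be sketched in software (an illustrative model only; the class, block size, and list-based "storage" are assumptions, not Ricoh's hardware design):

```python
BLOCK = 4  # per-buffer capacity determined in advance

class BranchBuffers:
    """Samples routed left or right by a node's branch condition
    accumulate in per-side buffers; a full buffer is flushed to its
    side's contiguous storage region as one block write."""
    def __init__(self):
        self.buffers = {"left": [], "right": []}
        self.storage = {"left": [], "right": []}  # contiguous regions

    def route(self, sample, goes_left):
        side = "left" if goes_left else "right"
        buf = self.buffers[side]
        buf.append(sample)
        if len(buf) == BLOCK:              # buffer full:
            self.storage[side].extend(buf)  # one contiguous block write
            buf.clear()

    def flush(self):
        for side, buf in self.buffers.items():
            self.storage[side].extend(buf)
            buf.clear()

bb = BranchBuffers()
for x in range(10):
    bb.route(x, x % 2 == 0)   # branch condition: even samples go left
bb.flush()
print(bb.storage["left"])   # [0, 2, 4, 6, 8]
```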