Patents Examined by Farley Abad
  • Patent number: 11494609
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for a neural network that is configured to receive a network input and to generate a network output for the network input. The neural network comprises a plurality of layers arranged in a sequence, including a plurality of capsule layers. Each particular capsule in a particular capsule layer is configured to receive respective inputs including: (i) outputs generated by capsules of a previous capsule layer that is before the particular capsule layer in the sequence, and (ii) final routing factors between capsules of the previous capsule layer and the particular capsule, wherein the final routing factors are generated by a routing subsystem. Each particular capsule in the particular capsule layer is configured to determine a particular capsule output based on the received inputs, wherein the particular capsule output is of dimension greater than one.
    Type: Grant
    Filed: December 15, 2017
    Date of Patent: November 8, 2022
    Assignee: Google LLC
    Inventors: Geoffrey E. Hinton, Nicholas Myles Wisener Frosst, Sara Sabour Rouh Aghdam
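    Sketch: Once the routing factors are final, the capsule-output step described above reduces to a routing-weighted sum of transformed previous-layer outputs. The NumPy lines below are a minimal sketch under assumptions not in the abstract: learned per-pair transform weights and the squash nonlinearity familiar from the capsule-network literature.
      import numpy as np

      def squash(v, axis=-1, eps=1e-9):
          # Scale a vector's length into [0, 1) while preserving direction.
          n2 = np.sum(v * v, axis=axis, keepdims=True)
          return (n2 / (1.0 + n2)) * v / np.sqrt(n2 + eps)

      def capsule_outputs(prev_outputs, routing, weights):
          # prev_outputs: (n_prev, d_in)  outputs of the previous capsule layer
          # routing:      (n_prev, n_cur) final routing factors from the router
          # weights:      (n_prev, n_cur, d_in, d_out) assumed learned transforms
          votes = np.einsum('pi,pcij->pcj', prev_outputs, weights)
          s = np.einsum('pc,pcj->cj', routing, votes)   # routing-weighted sum
          return squash(s)                              # output dimension > 1

      rng = np.random.default_rng(0)
      out = capsule_outputs(rng.normal(size=(8, 4)),
                            rng.dirichlet(np.ones(3), size=8),
                            rng.normal(size=(8, 3, 4, 6)) * 0.1)
      print(out.shape)  # (3, 6): 3 capsules, each with a 6-dimensional output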
  • Patent number: 11483010
    Abstract: An output control circuit, a method for transmitting data, and an electronic device are disclosed. The output control circuit includes: a serial-to-parallel conversion circuit configured to obtain at least one group of parallel data through a serial-to-parallel conversion; an intermediate-stage cache circuit configured to divide the at least one group of parallel data into at least two categories of subgroup parallel data according to the sequence of the serial-to-parallel conversion; a latch output circuit including a plurality of latch arrays, each of which receives one category of subgroup parallel data and latches and outputs each subgroup of parallel data in that category; and a selection control circuit configured to, within the effective pulse duration of a given subgroup of parallel data, control the latch array for that subgroup among the plurality of latch arrays to latch and output it.
    Type: Grant
    Filed: November 11, 2020
    Date of Patent: October 25, 2022
    Assignee: BEIJING BOE TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Junrui Zhang, Xuehui Zhu, Ronghua Lan, Xin Xiang, Xiaoqiao Liu, Xizhu Peng, He Tang
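    Sketch: A minimal software analogy of the data path above, assuming an 8-bit group width and a round-robin two-category split (neither is specified in the abstract); latch arrays are modeled as plain lists.
      def serial_to_parallel(bits, width=8):
          # Convert a serial bit list into parallel words of `width` bits each.
          return [bits[i:i + width] for i in range(0, len(bits), width)]

      def split_into_categories(words, n_categories=2):
          # Assign words to categories in conversion order (round-robin).
          return [words[i::n_categories] for i in range(n_categories)]

      bits = [1, 0, 1, 1, 0, 0, 1, 0] * 4          # 32 serial bits
      words = serial_to_parallel(bits)              # 4 parallel words
      categories = split_into_categories(words)     # 2 categories of 2 words
      latches = [list(cat) for cat in categories]   # latch arrays hold a category each
      for i, latch in enumerate(latches):
          print(f"latch array {i} outputs: {latch}")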
  • Patent number: 11461100
    Abstract: Process address space identifier virtualization uses a hardware paging hint.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: October 4, 2022
    Assignee: Intel Corporation
    Inventors: Kun Tian, Sanjay Kumar, Ashok Raj, Yi Liu, Rajesh M. Sankaran, Philip R. Lantz
  • Patent number: 11449347
    Abstract: Time-multiplexing implementation of hardware accelerated functions includes associating each function of a plurality of functions from program code with an accelerator binary image specifying a hardware accelerated version of the associated function, and determining which accelerator binary images are data independent. Using computer hardware, the accelerator binary images can be scheduled for implementation in a programmable integrated circuit within each of a plurality of partial reconfiguration regions based on data independence.
    Type: Grant
    Filed: May 23, 2019
    Date of Patent: September 20, 2022
    Inventors: Raymond Kong, Brian S. Martin, Hao Yu, Jun Liu, Ashish Sirasao
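    Sketch: The scheduling idea can be made concrete with a toy dataflow model, assuming each accelerator image carries read/write sets (an illustration, not the patent's mechanism): images that touch disjoint data are data independent and may share a time slot across partial reconfiguration (PR) regions.
      def data_independent(a, b):
          # Two images conflict if either writes data the other touches.
          return not (a["writes"] & (b["reads"] | b["writes"]) or
                      b["writes"] & (a["reads"] | a["writes"]))

      def schedule(images, n_regions):
          # Greedily pack mutually independent images into PR-region time slots.
          pending, slots = list(images), []
          while pending:
              slot = []
              for img in list(pending):
                  if len(slot) < n_regions and all(data_independent(img, s)
                                                   for s in slot):
                      slot.append(img)
                      pending.remove(img)
              slots.append(slot)
          return slots

      images = [
          {"name": "fft",   "reads": {"x"}, "writes": {"X"}},
          {"name": "fir",   "reads": {"y"}, "writes": {"Y"}},
          {"name": "scale", "reads": {"X"}, "writes": {"Xs"}},  # depends on fft
      ]
      for t, slot in enumerate(schedule(images, n_regions=2)):
          print(f"time slot {t}: {[img['name'] for img in slot]}")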
  • Patent number: 11442729
    Abstract: A method and system for processing a bit-packed array using one or more processors, including determining a data element size of the bit-packed array, determining a lane configuration of a single-instruction multiple-data (SIMD) unit for processing the bit-packed array based at least in part on the determined data element size, the lane configuration being determined from among a plurality of candidate lane configurations, each candidate lane configuration having a different number of vector register lanes and a corresponding bit capacity per vector register lane, configuring the SIMD unit according to the determined lane configuration, and loading one or more data elements into each vector register lane of the SIMD unit. SIMD instructions may be executed on the loaded one or more data elements of each vector register lane in parallel, and a result of the SIMD instruction may be stored in memory.
    Type: Grant
    Filed: October 26, 2020
    Date of Patent: September 13, 2022
    Assignee: Google LLC
    Inventors: Junwhan Ahn, Jichuan Chang, Andrew McCormick, Yuanwei Fang, Yixin Luo
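    Sketch: The lane-selection step lends itself to a few lines of code. Assuming a 512-bit register and a candidate table of 8/16/32/64-bit lanes (both assumptions), the narrowest lane that fits the element size maximizes lanes per instruction; a bit-packed lane may even hold several elements.
      REGISTER_BITS = 512
      CANDIDATE_LANES = [8, 16, 32, 64]   # bits per lane -> 64/32/16/8 lanes

      def choose_lane_config(element_bits):
          # Return (lane_bits, n_lanes, elements_per_lane) for an element size.
          for lane_bits in CANDIDATE_LANES:
              if element_bits <= lane_bits:
                  return lane_bits, REGISTER_BITS // lane_bits, lane_bits // element_bits
          raise ValueError("element too wide for any candidate lane")

      for element_bits in (3, 7, 11, 20):
          lane, n, per = choose_lane_config(element_bits)
          print(f"{element_bits:2d}-bit elements -> {lane}-bit lanes x{n}, "
                f"{per} element(s) per lane")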
  • Patent number: 11438093
    Abstract: A host/device interface is coupled between a host and a storage device and includes a data interface and a Quality of Service (QoS) interface configured to communicate a QoS signal with the host. The QoS interface cooperates with the data interface to selectively manage a storage QoS on the storage device. A method is provided for storing data on a data medium, including receiving a Quality of Service (QoS) command; selecting a portion of the data medium on which to store a data stream; forming a stream chunk from a portion of the data stream; configuring a transducer to store the stream chunk on the data medium in response to the QoS command; and storing the data on the data medium, such that the storing conforms to a QoS command value.
    Type: Grant
    Filed: July 21, 2020
    Date of Patent: September 6, 2022
    Inventors: Rod Brittner, Ron Benson
  • Patent number: 11436040
    Abstract: A new approach to systems and methods that support a hierarchical interrupt propagation scheme for efficient interrupt propagation and handling is proposed. The hierarchical interrupt propagation scheme organizes a plurality of slave interrupt handlers associated with functional blocks in a chip in a hierarchy. When an exception or error condition occurs in a functional block, a slave interrupt handler associated with the functional block creates an interrupt packet as an interrupt notification and utilizes the pre-existing input and output interfaces already used for accessing registers of the functional block to transmit the created interrupt packet to a central interrupt handler through the hierarchy, without running dedicated interconnect wires out of the functional block.
    Type: Grant
    Filed: July 31, 2020
    Date of Patent: September 6, 2022
    Assignee: Marvell Asia Pte Ltd
    Inventors: Saurabh Shrivastava, Guy T. Hutchison
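    Sketch: A toy model of the hierarchy described above: a slave handler wraps an error in an interrupt packet and forwards it through its parent's existing register-access path, hop by hop, to the central handler. Packet fields and names are assumptions.
      class Handler:
          def __init__(self, name, parent=None):
              self.name, self.parent = name, parent

          def raise_interrupt(self, code):
              packet = {"source": self.name, "code": code, "hops": []}
              self.forward(packet)

          def forward(self, packet):
              packet["hops"].append(self.name)
              if self.parent is None:              # central interrupt handler
                  print(f"central handler got {packet['code']} "
                        f"from {packet['source']} via {packet['hops']}")
              else:                                # reuse register-access path
                  self.parent.forward(packet)

      central = Handler("central")
      cluster = Handler("cluster0", parent=central)
      block = Handler("dma_block", parent=cluster)
      block.raise_interrupt("ECC_ERROR")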
  • Patent number: 11429388
    Abstract: Aspects of the current subject matter are directed to an approach in which a parallel load operation of file ID mapping containers is accomplished at start and/or restart of a database system. Parallel load operation of file ID mapping and/or large binary object (LOB) file ID mapping is done among a plurality of scanning engines into a plurality of data buffers that are associated with each of the plurality of scanning engines. Each scanning engine operates on a certain path of a page chain of a page structure including the mapping, causing the page chain to be split among scanning engines to process maps. Contents of the data buffers are pushed to mapping engines via a queue. The mapping engines load the file ID mapping and the LOB file ID mapping into maps for in-system access.
    Type: Grant
    Filed: May 26, 2020
    Date of Patent: August 30, 2022
    Assignee: SAP SE
    Inventors: Dirk Thomsen, Thorsten Glebe, Tobias Scheuer, Werner Thesing, Johannes Gloeckle
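    Sketch: A condensed model of the parallel load using threads, assuming fabricated page contents and a two-way split of the page chain: scanner threads fill buffers and push them onto a queue, and a mapping engine drains the queue into the in-memory map.
      import queue, threading

      page_chain = [[(i, f"blob{i}")] for i in range(8)]   # 8 one-entry pages
      q, file_id_map = queue.Queue(), {}

      def scanner(pages):
          buffer = []
          for page in pages:              # scan this engine's part of the chain
              buffer.extend(page)
          q.put(buffer)                   # hand the buffer to a mapping engine

      def mapping_engine(n_buffers):
          for _ in range(n_buffers):
              for file_id, loc in q.get():   # load entries into the shared map
                  file_id_map[file_id] = loc

      scanners = [threading.Thread(target=scanner, args=(page_chain[i::2],))
                  for i in range(2)]      # split the chain between 2 engines
      consumer = threading.Thread(target=mapping_engine, args=(2,))
      for t in scanners + [consumer]:
          t.start()
      for t in scanners + [consumer]:
          t.join()
      print(sorted(file_id_map.items()))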
  • Patent number: 11422813
    Abstract: The invention introduces an apparatus for segmenting a data stream, installed in a physical layer, that includes a host interface, a data register and a boundary detector. The data register is arranged to operably store data received from the host side through the host interface. The boundary detector is arranged to operably detect the content of the data register. When the data register includes a boundary-lock pattern or a special symbol, the boundary detector outputs the starting address at which the boundary-lock pattern or the special symbol is stored in the data register to an offset register, updating the value stored in the offset register and thereby enabling a stream splitter to divide the data bits of the data register according to the updated value of the offset register.
    Type: Grant
    Filed: June 4, 2020
    Date of Patent: August 23, 2022
    Assignee: SILICON MOTION, INC.
    Inventor: Han-Cheng Huang
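    Sketch: The segmentation mechanism in byte-level form, with an assumed two-byte boundary-lock pattern: find the pattern's starting address, load it into the offset register, and split the register contents there.
      BOUNDARY_LOCK = b"\x5c\x5c"            # assumed boundary-lock pattern

      def detect_boundary(data_register):
          # Return the starting address of the pattern, or None if absent.
          addr = data_register.find(BOUNDARY_LOCK)
          return None if addr < 0 else addr

      def split_stream(data_register, offset_register):
          # Divide the register contents at the locked boundary.
          return data_register[:offset_register], data_register[offset_register:]

      data_register = b"\x10\x20\x5c\x5c\x30\x40"
      offset_register = detect_boundary(data_register)   # -> 2
      head, tail = split_stream(data_register, offset_register)
      print(offset_register, head, tail)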
  • Patent number: 11416281
    Abstract: Embodiments of systems, methods, and apparatuses for heterogeneous computing are described. In some embodiments, a hardware heterogeneous scheduler dispatches instructions for execution on one or more of a plurality of heterogeneous processing elements, the instructions corresponding to a code fragment to be processed by the one or more of the plurality of heterogeneous processing elements, wherein the instructions are native instructions to at least one of the one or more of the plurality of heterogeneous processing elements.
    Type: Grant
    Filed: December 31, 2016
    Date of Patent: August 16, 2022
    Assignee: Intel Corporation
    Inventors: Rajesh M. Sankaran, Gilbert Neiger, Narayan Ranganathan, Stephen R. Van Doren, Joseph Nuzman, Niall D. McDonnell, Michael A. O'Hanlon, Lokpraveen B. Mosur, Tracy Garrett Drysdale, Eriko Nurvitadhi, Asit K. Mishra, Ganesh Venkatesh, Deborah T. Marr, Nicholas P. Carter, Jonathan D. Pearce, Edward T. Grochowski, Richard J. Greco, Robert Valentine, Jesus Corbal, Thomas D. Fletcher, Dennis R. Bradford, Dwight P. Manley, Mark J. Charney, Jeffrey J. Cook, Paul Caprioli, Koichi Yamada, Kent D. Glossop, David B. Sheffield
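    Sketch: A high-level guess at the dispatch policy, with invented element types and fragment tags: each processing element advertises which instruction kinds are native to it, and the scheduler routes a code fragment to an element that executes it natively.
      PROCESSING_ELEMENTS = [
          {"name": "latency_core",    "native": {"scalar", "branchy"}},
          {"name": "throughput_core", "native": {"simd", "dataparallel"}},
          {"name": "accelerator",     "native": {"dense_matmul"}},
      ]

      def dispatch(fragment):
          # Pick the first element for which the fragment is native code.
          for pe in PROCESSING_ELEMENTS:
              if fragment["kind"] in pe["native"]:
                  return pe["name"]
          return "latency_core"        # fallback for non-native fragments

      for frag in ({"kind": "branchy"}, {"kind": "simd"}, {"kind": "dense_matmul"}):
          print(frag["kind"], "->", dispatch(frag))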
  • Patent number: 11416778
    Abstract: A feature extractor for a convolutional neural network (CNN) is disclosed, wherein the feature extractor is deployed on a member of the group consisting of (1) a reconfigurable logic device, (2) a graphics processing unit (GPU), and (3) a chip multi-processor (CMP). A processing pipeline can be implemented on the member, where the processing pipeline implements a plurality of convolutional layers for the CNN, wherein each of a plurality of the convolutional layers comprises (1) a convolution stage that convolves first data with second data if activated and (2) a sub-sampling stage that performs a member of the group consisting of (i) a max pooling operation, (ii) an averaging operation, and (iii) a sampling operation on data received thereby if activated. The processing pipeline can be controllable with respect to which of the convolution stages are activated/deactivated and which of the sub-sampling stages are activated/deactivated when processing streaming data through the processing pipeline.
    Type: Grant
    Filed: November 23, 2020
    Date of Patent: August 16, 2022
    Assignee: IP RESERVOIR, LLC
    Inventors: Roger D. Chamberlain, Ronald S. Indeck
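    Sketch: The controllable pipeline in miniature, using 1-D data and toy kernels (both assumptions): each layer holds a convolution stage and a sub-sampling stage, either of which can be switched off, and streaming data passes through whatever is active.
      import numpy as np

      class Layer:
          def __init__(self, kernel, pool="max", conv_on=True, pool_on=True):
              self.kernel, self.pool = np.asarray(kernel, float), pool
              self.conv_on, self.pool_on = conv_on, pool_on

          def __call__(self, x):
              if self.conv_on:                           # convolution stage
                  x = np.convolve(x, self.kernel, mode="valid")
              if self.pool_on:                           # sub-sampling stage
                  pairs = x[: len(x) // 2 * 2].reshape(-1, 2)
                  x = {"max": pairs.max(1), "avg": pairs.mean(1),
                       "sample": pairs[:, 0]}[self.pool]
              return x

      pipeline = [Layer([1, 0, -1], pool="max"),
                  Layer([0.5, 0.5], pool="avg", pool_on=False)]  # pooling off
      x = np.arange(16, dtype=float)
      for layer in pipeline:
          x = layer(x)
      print(x)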
  • Patent number: 11416394
    Abstract: A memory management method, apparatus, and system are provided. The memory management method is performed by a memory management hardware accelerator, and the memory management hardware accelerator is coupled to an application subsystem and a communications subsystem. The application subsystem is configured to run a main operating system, and the communications subsystem is configured to run a communications operating system. The method includes: obtaining a set of memory addresses corresponding to dynamic memory space allocated by the main operating system to the communications subsystem, where the set of memory addresses includes one or more memory addresses; and sending some memory addresses in the set of memory addresses to a component of the communications subsystem.
    Type: Grant
    Filed: December 10, 2020
    Date of Patent: August 16, 2022
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Yuelong Wang, Xinzhu Wang, Zhiguo Tu, Shaohua Wang
  • Patent number: 11416113
    Abstract: According to one embodiment, a method for remotely controlling peripheral devices in a mobile communication terminal includes acquiring a profile for a controlled peripheral device, configuring a control application for the controlled peripheral device based on the acquired profile, and controlling the controlled peripheral device using the configured control application.
    Type: Grant
    Filed: March 13, 2020
    Date of Patent: August 16, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Du-Seok Kim, Hyun-Cheol Park, Giu-Yeol Kim, Jun-Mo Yang, Dong-Yun Shin, Hyo-Yong Jeong
  • Patent number: 11412302
    Abstract: A detection circuit and a wake-up method are provided. The detection circuit is adapted to a high definition multimedia interface (HDMI) receiver that enters a power-saving mode in a fixed rate link (FRL) mode, to detect whether or not an HDMI transmitter starts to transmit video packets through the FRL. The detection circuit includes a signal detection circuit detecting whether or not a signal exists on the FRL and an FRL packet determination circuit determining whether or not the FRL packets are video packets according to a variable value characteristic of the video packets and/or a fixed value characteristic of gap packets. An existence of a signal on the FRL indicates an existence of FRL packets on the FRL. When the FRL packets are video packets, the FRL packet determination circuit wakes the HDMI receiver from the power-saving mode to resolve the video packets and display video.
    Type: Grant
    Filed: July 5, 2021
    Date of Patent: August 9, 2022
    Assignee: REALTEK SEMICONDUCTOR CORP.
    Inventors: Chun-Chieh Chan, Ming-An Wu, Chia-Hao Chang, Chien-Hsun Lu
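    Sketch: A simplified model of the wake-up test, assuming a single payload field per packet and a fixed gap value of zero (both invented for illustration): payloads that vary from packet to packet indicate video, which wakes the receiver.
      GAP_VALUE = 0x00                       # assumed fixed gap-packet value

      def is_video(packets):
          # Video payloads vary; gap payloads sit at the fixed value.
          payloads = [p["payload"] for p in packets]
          return any(v != GAP_VALUE for v in payloads) and len(set(payloads)) > 1

      def maybe_wake(sleeping, packets):
          if sleeping and is_video(packets):
              print("waking HDMI receiver to decode video")
              return False                   # receiver is now awake
          return sleeping

      gaps  = [{"payload": GAP_VALUE}] * 4
      video = [{"payload": v} for v in (0x3A, 0x91, 0x4C, 0x77)]
      state = maybe_wake(True, gaps)         # stays asleep: only gap packets
      state = maybe_wake(state, video)       # video detected -> wake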
  • Patent number: 11410036
    Abstract: An arithmetic processing apparatus includes: a memory that stores, when training of a given machine learning model is repeatedly performed over a plurality of iterations, an error of the decimal point position of each of a plurality of fixed-point number data, one obtained in each of the plurality of iterations, the error being obtained based on statistical information related to a distribution of leftmost set bit positions for positive numbers and leftmost unset bit positions for negative numbers, or a distribution of rightmost set bit positions, of the plurality of fixed-point number data; and a processor coupled to the memory, the processor being configured to determine, based on a tendency of the error in each of the plurality of iterations, an offset amount for correcting the decimal point position of fixed-point number data used in the training.
    Type: Grant
    Filed: June 11, 2020
    Date of Patent: August 9, 2022
    Assignee: FUJITSU LIMITED
    Inventor: Makiko Ito
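    Sketch: A numeric imitation of the correction loop. The 16-bit word, Q-format, overflow criterion, and trend rule below are all assumptions; the point is the pipeline: summarize each iteration's data by leftmost set-bit positions, derive a decimal-point-position error, and turn the error's tendency into an offset.
      import math

      def leftmost_bit(x):
          return -math.inf if x == 0 else math.floor(math.log2(abs(x)))

      def position_error(values, q_frac_bits, word_bits=16):
          # Positive error: values need more integer bits than the format has.
          top = max(leftmost_bit(v) for v in values)    # statistics summary
          int_bits = word_bits - 1 - q_frac_bits        # minus sign, fraction
          return (top + 1) - int_bits

      def offset_from_tendency(errors):
          # Assumed trend rule: correct by the worst recent error.
          return max(errors[-3:]) if len(errors) >= 3 else 0

      errors = []
      for scale in (1.0, 2.0, 4.0, 8.0):                # activations growing
          values = [scale * v for v in (0.7, 1.3, 2.9)]
          errors.append(position_error(values, q_frac_bits=13))
      print(errors, "-> offset", offset_from_tendency(errors))  # offset 3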
  • Patent number: 11409534
    Abstract: According to one embodiment, a system receives, at a host system, a public attestation key (PK_ATT) or a signed PK_ATT from a data processing (DP) accelerator over a bus. The system verifies the PK_ATT using a public root key (PK_RK) associated with the DP accelerator. In response to successfully verifying the PK_ATT, the system transmits a kernel identifier (ID) to the DP accelerator to request attestation of a kernel object stored in the DP accelerator. In response to receiving a kernel digest or a signed kernel digest corresponding to the kernel object from the DP accelerator, the system verifies the kernel digest using the PK_ATT. The system sends the verification results to the DP accelerator, and the DP accelerator grants access to the kernel object based on the verification results.
    Type: Grant
    Filed: January 4, 2019
    Date of Patent: August 9, 2022
    Assignees: BAIDU USA LLC, BAIDU.COM TIMES TECHNOLOGY (BEIJING) CO., LTD., KUNLUNXIN TECHNOLOGY (BEIJING) COMPANY LIMITED
    Inventors: Yueqiang Cheng, Yong Liu, Tao Wei, Jian Ouyang
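    Sketch: The protocol flow only, not the cryptography: HMAC stands in for the signature scheme so the exchange stays runnable, and all key material is fake. The steps mirror the abstract: verify PK_ATT against PK_RK, request a digest by kernel ID, verify the digest, and gate kernel access on the result.
      import hashlib, hmac

      PK_RK = b"root-key"                            # stand-in for the root key
      accelerator = {
          "pk_att": b"attestation-key",
          "pk_att_sig": hmac.digest(b"root-key", b"attestation-key", "sha256"),
          "kernels": {"matmul": b"\x00\x01\x02\x03"},   # kernel object store
      }

      def attest_kernel(kernel_id):
          # 1. Verify the signed PK_ATT against the root key PK_RK.
          expected = hmac.digest(PK_RK, accelerator["pk_att"], "sha256")
          if not hmac.compare_digest(expected, accelerator["pk_att_sig"]):
              return False
          # 2. Send the kernel ID; the accelerator answers with a signed digest.
          kernel = accelerator["kernels"][kernel_id]
          digest = hashlib.sha256(kernel).digest()
          signed = hmac.digest(accelerator["pk_att"], digest, "sha256")
          # 3. Verify the digest with PK_ATT; the accelerator gates kernel
          #    access on the verification result the host sends back.
          return hmac.compare_digest(
              hmac.digest(accelerator["pk_att"], digest, "sha256"), signed)

      print(attest_kernel("matmul"))                 # True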
  • Patent number: 11403239
    Abstract: A processing in memory (PIM) device includes a memory configured to receive data through a first path from a host processor provided outside the PIM device, and an information gatherer configured to receive the data through a second path connected to the first path when the data is transferred to the memory via the first path, and to generate information by processing the data received through the second path.
    Type: Grant
    Filed: May 8, 2020
    Date of Patent: August 2, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Shinhaeng Kang, Sukhan Lee
  • Patent number: 11403096
    Abstract: Systems, apparatuses, and methods related to acceleration circuitry for posit operations are described. Signaling indicative of performance of an operation to write a first bit string to a first buffer resident on acceleration circuitry and a second bit string to a second buffer resident on the acceleration circuitry can be received at a DMA controller couplable to the acceleration circuitry. The acceleration circuitry can be configured to perform arithmetic operations, logical operations, or both on bit strings formatted in a unum or posit format. Signaling indicative of an arithmetic operation, a logical operation, or both, to be performed using the first and second bit strings can be transmitted to the acceleration circuitry. The arithmetic operation, the logical operation, or both can be performed via the acceleration circuitry and according to the signaling. Signaling indicative of a result of the arithmetic operation, the logical operation, or both can be transmitted to the DMA controller.
    Type: Grant
    Filed: May 11, 2020
    Date of Patent: August 2, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Vijay S. Ramesh, Phillip G. Hays, Craig M. Cutler, Andrew J. Rees
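    Sketch: Since the circuitry operates on posit-format bit strings, the format itself is the piece worth unpacking. Below is a from-scratch decoder for the posit layout (sign, regime run, exponent, fraction); the width and es values are parameters chosen for the demo, not taken from the patent.
      def decode_posit(bits, n=8, es=2):
          # Decode an n-bit posit with es exponent bits to a Python float.
          if bits == 0:
              return 0.0
          if bits == 1 << (n - 1):
              return float("nan")                    # NaR (not a real)
          sign = bits >> (n - 1)
          if sign:
              bits = (-bits) & ((1 << n) - 1)        # two's complement
          rest = [(bits >> i) & 1 for i in range(n - 2, -1, -1)]
          run = 1                                    # regime: run of equal bits
          while run < len(rest) and rest[run] == rest[0]:
              run += 1
          k = run - 1 if rest[0] == 1 else -run
          tail = rest[run + 1:]                      # skip regime terminator
          e_bits = tail[:es]                         # missing low bits are 0
          e = int("".join(map(str, e_bits)) or "0", 2) << (es - len(e_bits))
          f_bits = tail[es:]
          f = int("".join(map(str, f_bits)) or "0", 2) / (1 << len(f_bits))
          value = (1.0 + f) * 2.0 ** (k * (1 << es) + e)
          return -value if sign else value

      for b in (0x40, 0x50, 0x60, 0xC0):
          print(hex(b), decode_posit(b, es=0))       # 1.0, 1.5, 2.0, -1.0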
  • Patent number: 11403071
    Abstract: Disclosed embodiments relate to systems and methods for performing instructions to transpose rectangular tiles. In one example, a processor includes fetch circuitry to fetch an instruction having fields to specify an opcode and locations of first destination, second destination, first source, and second source matrices, the specified opcode to cause the processor to process each of the specified source and destination matrices as a rectangular matrix, decode circuitry to decode the fetched rectangular matrix transpose instruction, and execution circuitry to respond to the decoded rectangular matrix transpose instruction by transposing each row of elements of the specified first source matrix into a corresponding column of the specified first destination matrix and transposing each row of elements of the specified second source matrix into a corresponding column of the specified second destination matrix.
    Type: Grant
    Filed: December 14, 2020
    Date of Patent: August 2, 2022
    Assignee: Intel Corporation
    Inventors: Raanan Sade, Robert Valentine, Mark J. Charney, Simon Rubanovich, Amit Gradstein, Zeev Sperber, Bret Toll, Jesus Corbal, Christopher J. Hughes, Alexander F. Heinecke, Elmoustapha Ould-Ahmed-Vall
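    Sketch: The instruction's effect restated in array code: each row of each of two source matrices becomes the corresponding column of its destination. Shapes are arbitrary examples.
      import numpy as np

      def rect_transpose2(src1, src2):
          # Model of the two-source rectangular transpose: rows -> columns.
          return src1.T.copy(), src2.T.copy()

      a = np.arange(6).reshape(2, 3)        # 2x3 rectangular tile
      b = np.arange(12).reshape(3, 4)       # 3x4 rectangular tile
      ta, tb = rect_transpose2(a, b)
      print(ta.shape, tb.shape)             # (3, 2) (4, 3)
      assert (a[0] == ta[:, 0]).all()       # row 0 became column 0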
  • Patent number: 11392379
    Abstract: Disclosed embodiments relate to executing a vector multiplication instruction. In one example, a processor includes fetch circuitry to fetch the vector multiplication instruction having fields for an opcode, first and second source identifiers, and a destination identifier, decode circuitry to decode the fetched instruction, execution circuitry to, on each of a plurality of corresponding pairs of fixed-sized elements of the identified first and second sources, execute the decoded instruction to generate a double-sized product of each pair of fixed-sized elements, the double-sized product being represented by at least twice a number of bits of the fixed size, and generate a signed fixed-sized result by rounding the most significant fixed-sized portion of the double-sized product to fit into the identified destination.
    Type: Grant
    Filed: September 27, 2017
    Date of Patent: July 19, 2022
    Assignee: Intel Corporation
    Inventors: Venkateswara R. Madduri, Carl Murray, Elmoustapha Ould-Ahmed-Vall, Mark J. Charney, Robert Valentine, Jesus Corbal
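    Sketch: A scalar model of the per-lane operation, assuming 16-bit elements and round-to-nearest with saturation at the signed boundary (the rounding and clamping details are assumptions): multiply into a double-sized product, then keep the rounded most significant half.
      def mul_round_high(a, b, bits=16):
          # Signed NxN -> N multiply keeping the rounded high half.
          product = a * b                               # double-sized product
          rounded = (product + (1 << (bits - 1))) >> bits   # round high half
          top = (1 << (bits - 1)) - 1
          return max(-(1 << (bits - 1)), min(top, rounded))  # clamp to N bits

      def vec_mul_round_high(xs, ys):
          # One result per corresponding pair of fixed-size elements.
          return [mul_round_high(a, b) for a, b in zip(xs, ys)]

      print(vec_mul_round_high([30000, -20000, 123], [30000, 20000, -456]))
      # [13733, -6104, -1]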