Patents Examined by Farley Abad
  • Patent number: 11944917
    Abstract: A computer system and method for synchronizing actions associated with media between a media/network device and peripherals. In an example implementation, a system includes one or more processors configured to receive, by a communication module from a media/network device based on peripheral addressing information, a peripheral payload including a first set of actions and timing information related to media. The one or more processors perform the first set of actions based on the peripheral payload, generate response data for the first set of actions, and transmit the response data to the media/network device via a wireless network.
    Type: Grant
    Filed: November 17, 2021
    Date of Patent: April 2, 2024
    Assignee: OPENTV, INC.
    Inventors: Claes Georg Andersson, John Michael Teixeira, Nicholas Daniel Doerring, Nicholas Fishwick, Colin Reed Miller
  • Patent number: 11940939
    Abstract: Data may be communicated from a sender device to a receiver device over enabled or selected byte positions or other data bit groups of a data bus. The sender device may determine data values to be sent over the data bus and may determine which byte positions are enabled or selected and which are not selected. The sender device may also determine a code. The code may be a value that is not included in the data values to be sent over the data bus. The sender device may then send the selected data values in selected byte positions of the data bus and send the code in non-selected byte positions of the data bus. The sender device may also send the code to the receiver device separately from the data bit lanes of the data bus.
    Type: Grant
    Filed: March 7, 2022
    Date of Patent: March 26, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Philippe Boucard, Christophe Layer, Luc Montperrus
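
The escape-code scheme this abstract describes can be sketched as follows. This is a minimal illustrative model, not the patented implementation: the function names, 4-lane bus width, and byte-granular code search are all assumptions.

```python
def encode_bus_word(values, enabled, lane_count=4):
    """Pick a byte value ("code") absent from the payload, place real data
    in enabled lanes, and drive the code on non-enabled lanes."""
    code = next(c for c in range(256) if c not in values)  # value not in the data
    lanes, it = [], iter(values)
    for lane in range(lane_count):
        lanes.append(next(it) if lane in enabled else code)
    return code, lanes

def decode_bus_word(code, lanes):
    # The receiver, told the code separately, drops every lane carrying it.
    return [b for b in lanes if b != code]

code, lanes = encode_bus_word([5, 9], enabled={0, 2})
assert decode_bus_word(code, lanes) == [5, 9]
```

Because the code is guaranteed not to collide with any data value, the receiver can distinguish selected from non-selected lanes without a separate per-lane enable signal.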
  • Patent number: 11934694
    Abstract: A method of a memory device, a storage system, and a memory device are provided. The method includes receiving a set of entries, where the set of entries includes a first entry from a source queue and addressed to a first destination and a second entry addressed to a second destination, determining to add a third entry associated with the first entry and addressed to the first destination to the set of entries, selecting one of the first entry and the third entry as a restock entry and the other of the first entry and the third entry as a pass-through entry, sending the restock entry to the source queue, and sending the second entry and the pass-through entry to a serial link connected to the first destination and the second destination.
    Type: Grant
    Filed: May 7, 2021
    Date of Patent: March 19, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Chun-Chu Chen-Jhy Archie Wu, Joseph Michael Findley
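
The restock/pass-through selection in this abstract can be sketched roughly as below. The selection policy shown (restock the derived entry, pass the original through) is one illustrative choice; the claim covers either assignment, and all names here are hypothetical.

```python
from collections import deque

def arbitrate(first, second, source_queue):
    """Given a first entry (destination A) and second entry (destination B),
    generate a third entry associated with the first, send one of the pair
    back to the source queue, and forward the rest to the serial link."""
    third = {"id": "e3", "dest": first["dest"], "derived_from": first["id"]}
    restock, pass_through = third, first   # illustrative policy; could be swapped
    source_queue.append(restock)           # restock entry returns to the source queue
    return [second, pass_through]          # entries sent on the serial link

q = deque()
sent = arbitrate({"id": "e1", "dest": "A"}, {"id": "e2", "dest": "B"}, q)
assert [e["id"] for e in sent] == ["e2", "e1"] and q[0]["id"] == "e3"
```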
  • Patent number: 11934343
    Abstract: Disclosed is a data processing system to receive a processing graph of an application. A compile time logic is configured to modify the processing graph and generate a modified processing graph. The modified processing graph is configured to apply a post-padding tiling after applying a cumulative input padding that confines padding to an input. The cumulative input padding pads the input into a padded input. The post-padding tiling tiles the padded input into a set of pre-padded input tiles with a same tile size, tiles intermediate representation of the input into a set of intermediate tiles with a same tile size, and tiles output representation of the input into a set of non-overlapping output tiles with a same tile size. Runtime logic is configured with the compile time logic to execute the modified processing graph to execute the application.
    Type: Grant
    Filed: July 23, 2021
    Date of Patent: March 19, 2024
    Assignee: SambaNova Systems, Inc.
    Inventors: Tejas Nagendra Babu Nama, Ruddhi Chaphekar, Ram Sivaramakrishnan, Raghu Prabhakar, Sumti Jairath, Junjue Wang, Kaizhao Liang, Adi Fuchs, Matheen Musaddiq, Arvind Krishna Sujeeth
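
The padding-then-tiling order the abstract describes can be illustrated in one dimension: the input is padded once up front (cumulative input padding), then cut into equal-size pre-padded tiles whose interiors correspond to non-overlapping output tiles. The 1-D shape, halo parameter, and zero padding are simplifying assumptions.

```python
def tile_with_input_padding(x, tile, halo, pad):
    """Pad the whole input once, then cut overlapping pre-padded tiles of
    equal size; output tiles advance by `tile` and so do not overlap."""
    padded = [0] * pad + x + [0] * pad
    tiles = [padded[start : start + tile + 2 * halo]
             for start in range(0, len(x), tile)]
    return padded, tiles

padded, tiles = tile_with_input_padding(list(range(8)), tile=4, halo=1, pad=1)
assert len(padded) == 10
assert all(len(t) == 6 for t in tiles) and len(tiles) == 2
```

Confining padding to the input like this keeps every tile the same size, which is what lets the intermediate and output tilings also be uniform.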
  • Patent number: 11934827
    Abstract: An apparatus that manages multi-process execution in a processing-in-memory (“PIM”) device includes a gatekeeper configured to: receive an identification of one or more registered PIM processes; receive, from a process, a memory request that includes a PIM command; if the requesting process is a registered PIM process and another registered PIM process is active on the PIM device, perform a context switch of PIM state between the registered PIM processes; and issue the PIM command of the requesting process to the PIM device.
    Type: Grant
    Filed: December 20, 2021
    Date of Patent: March 19, 2024
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Sooraj Puthoor, Muhammad Amber Hassaan, Ashwin Aji, Michael L. Chu, Nuwan Jayasena
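
The gatekeeper behavior in this abstract can be sketched as a small class. The save/restore of PIM state is simulated with dictionaries; in hardware this would involve the PIM device's registers, and every name below is illustrative.

```python
class Gatekeeper:
    """Track registered PIM processes; on a request from a different
    registered process, context-switch the PIM state before issuing."""
    def __init__(self):
        self.registered, self.active = set(), None
        self.saved_state, self.pim_state = {}, {}

    def register(self, pid):
        self.registered.add(pid)

    def issue(self, pid, command):
        if pid not in self.registered:
            raise PermissionError("not a registered PIM process")
        if self.active is not None and self.active != pid:
            self.saved_state[self.active] = dict(self.pim_state)  # save outgoing state
            self.pim_state = self.saved_state.get(pid, {})        # restore incoming state
        self.active = pid
        self.pim_state[command] = True   # stand-in for issuing to the PIM device

gk = Gatekeeper()
gk.register("p1"); gk.register("p2")
gk.issue("p1", "pim_add")
gk.issue("p2", "pim_mul")                # triggers the context switch
assert "pim_add" not in gk.pim_state
assert gk.saved_state["p1"] == {"pim_add": True}
```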
  • Patent number: 11928443
    Abstract: A circuit system includes a memory block and first and second processing circuits. The first and second processing circuits store a matrix in the memory block by concurrently writing elements in first and second rows or columns of the matrix to first and second regions of storage in the memory block, respectively. The first and second processing circuits transpose the matrix to generate a transposed matrix by concurrently reading elements in first and second rows or columns of the transposed matrix from third and fourth regions of storage in the memory block, respectively.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: March 12, 2024
    Assignee: Intel Corporation
    Inventor: Hong Shan Neoh
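
The write-rows/read-columns transpose this abstract describes can be modeled with a flat memory. Concurrency of the two processing circuits is elided; the point of the sketch is only the addressing pattern.

```python
def store_and_transpose(matrix):
    """Write rows into row-major storage regions, then produce the
    transpose by reading the same storage back in column-major order."""
    rows, cols = len(matrix), len(matrix[0])
    memory = [None] * (rows * cols)
    for r, row in enumerate(matrix):          # each circuit writes a row region
        for c, v in enumerate(row):
            memory[r * cols + c] = v
    # Transposed read: row r of the transpose gathers column r of storage.
    return [[memory[r * cols + c] for r in range(rows)] for c in range(cols)]

assert store_and_transpose([[1, 2], [3, 4], [5, 6]]) == [[1, 3, 5], [2, 4, 6]]
```

Splitting the writes and reads across two circuits, as the patent describes, halves the number of sequential memory accesses per pass.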
  • Patent number: 11928066
    Abstract: The present invention relates to a bridge device operable between a master device and a slave device of a communication system, said master device and said slave device arranged for communicating with each other via a parent I2C bus and a child I2C bus and using the I2C protocol, said bridge device comprising: a parent module arranged for connecting said parent I2C bus and comprising a parent I2C transmitter/receiver device and a parent module state machine; a child module arranged for connecting said child I2C bus and comprising a child I2C transmitter/receiver device and a child module state machine; whereby said parent module and said child module each comprise an internal bridge interface to exchange messages between said parent module and said child module, said messages being generated by said parent module state machine or said child module state machine in response to a change of state caused by an event on their respective I2C buses, whereby said parent module and said child module are each arranged
    Type: Grant
    Filed: December 15, 2017
    Date of Patent: March 12, 2024
    Assignee: IRISTICK NV
    Inventors: Jasper Van Bourgognie, Vianney Le Clément de Saint-Marcq, Riemer Grootjans, Peter Verstraeten
  • Patent number: 11928466
    Abstract: Techniques for generating distributed representations of computing processes and events are provided. According to one set of embodiments, a computer system can receive occurrence data pertaining to a plurality of computing processes and a plurality of events associated with the plurality of computing processes. The computer system can then generate, based on the occurrence data, (1) a set of distributed process representations that includes, for each computing process, a representation that encodes a sequence of events associated with the computing process in the occurrence data, and (2) a set of distributed event representations that includes, for each event, a representation that encodes one or more event properties associated with the event and one or more events that occur within a window of the event in the occurrence data.
    Type: Grant
    Filed: July 14, 2021
    Date of Patent: March 12, 2024
    Assignee: VMware LLC
    Inventors: Mahmood Sharif, Vijay Ganti
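
The occurrence-data preparation implied by this abstract can be sketched as below: grouping events by process (the input to process representations) and collecting each event's window neighbors (the input to event representations). Training the embeddings themselves is out of scope, and the data shapes are assumptions.

```python
def build_contexts(occurrences, window=1):
    """occurrences: list of (process, event) pairs in time order."""
    by_process = {}
    for proc, event in occurrences:
        by_process.setdefault(proc, []).append(event)
    contexts = {}
    for events in by_process.values():
        for i, e in enumerate(events):
            nbrs = events[max(0, i - window):i] + events[i + 1:i + 1 + window]
            contexts.setdefault(e, set()).update(nbrs)
    return by_process, contexts

procs, ctx = build_contexts([("p1", "open"), ("p1", "read"), ("p1", "close")])
assert procs["p1"] == ["open", "read", "close"]
assert ctx["read"] == {"open", "close"}
```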
  • Patent number: 11928067
    Abstract: Embodiments provide a read operation circuit, a semiconductor memory, and a read operation method. The read operation circuit includes: a data determination module configured to read read data from a memory bank, and determine whether to invert the read data according to the number of bits of low data in the read data to output global bus data for transmission through a global bus and inversion flag data for transmission through an inversion flag signal line; a data receiving module configured to determine whether to invert the global bus data according to the inversion flag data to output cache data; a parallel-to-serial conversion circuit configured to perform parallel-to-serial conversion on the cache data to generate output data of the DQ port; and a precharge module configured to set an initial state of the global bus to High.
    Type: Grant
    Filed: April 26, 2021
    Date of Patent: March 12, 2024
    Assignee: CHANGXIN MEMORY TECHNOLOGIES, INC.
    Inventor: Liang Zhang
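
The inversion rule in this abstract is a form of data bus inversion, and can be sketched per byte: since the global bus precharges High, data with a majority of low bits is inverted before transmission and the inversion flag is raised, reducing bus transitions. The 8-bit lane width and majority threshold are illustrative assumptions.

```python
def dbi_encode(byte):
    """Return (bus data, inversion flag): invert when more than half
    of the 8 bits are low (0)."""
    low_bits = 8 - bin(byte).count("1")
    if low_bits > 4:                       # majority low: invert before the bus
        return (~byte) & 0xFF, 1
    return byte, 0

def dbi_decode(bus_data, flag):
    # Receiver re-inverts when the inversion flag line is set.
    return (~bus_data) & 0xFF if flag else bus_data

data, flag = dbi_encode(0x01)              # seven low bits, so inverted
assert flag == 1 and data == 0xFE
assert dbi_decode(data, flag) == 0x01
```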
  • Patent number: 11922178
    Abstract: Methods, apparatus, systems, and articles of manufacture to load data into an accelerator are disclosed. An example apparatus includes data provider circuitry to load a first section and an additional amount of compressed machine learning parameter data into a processor engine. Processor engine circuitry executes a machine learning operation using the first section of compressed machine learning parameter data. A compressed local data re-user circuitry determines if a second section is present in the additional amount of compressed machine learning parameter data. The processor engine circuitry executes a machine learning operation using the second section when the second section is present in the additional amount of compressed machine learning parameter data.
    Type: Grant
    Filed: June 25, 2021
    Date of Patent: March 5, 2024
    Assignee: Intel Corporation
    Inventors: Arnab Raha, Deepak Mathaikutty, Debabrata Mohapatra, Sang Kyun Kim, Gautham Chinya, Cormac Brick
  • Patent number: 11921668
    Abstract: The present disclosure provides a processor array and a multiple-core processor. The processor array includes a plurality of processing elements arranged in a two-dimensional array, a plurality of first load units correspondingly arranged and connected to the processing elements of the first edge row, respectively, a plurality of second load units correspondingly arranged and connected to the processing elements of the first edge column, respectively, a plurality of first store units correspondingly arranged and connected to the processing elements of the second edge column, respectively, a plurality of second store units correspondingly arranged and connected to the processing elements of the second edge row, respectively.
    Type: Grant
    Filed: July 15, 2021
    Date of Patent: March 5, 2024
    Assignee: BEIJING TSINGMICRO INTELLIGENT TECHNOLOGY CO., LTD.
    Inventors: Peng Ouyang, Guozhi Song
  • Patent number: 11921649
    Abstract: Various implementations described herein relate to systems and methods for a solid state drive (SSD) that includes a first controller and a NAND package. The NAND package includes a plurality of dies grouped into a plurality of subsets. The NAND package includes a second controller operatively coupled to each of the plurality of subsets via a corresponding one of a plurality of parallel mode channels. The first controller is operatively coupled to the NAND package via a serial link.
    Type: Grant
    Filed: September 12, 2019
    Date of Patent: March 5, 2024
    Assignee: KIOXIA CORPORATION
    Inventors: Tiruvur Radhakrishna Ramesh, Avadhani Shridhar, Senthilkumar Diraviam, Gary Lin
  • Patent number: 11922173
    Abstract: An information handling system may include a processor, a display device communicatively coupled to the processor, and a basic input/output system (BIOS) communicatively coupled to the processor and configured to cause the processor to, during a pre-boot environment of the information handling system, collect contextual information regarding the information handling system, based on the contextual information, determine whether to enable soft keyboard functionality, and responsive to a determination to enable soft keyboard functionality, cause display of soft keyboard functionality to the display device.
    Type: Grant
    Filed: May 14, 2021
    Date of Patent: March 5, 2024
    Assignee: Dell Products L.P.
    Inventors: Ibrahim Sayyed, Adolfo Montero, Jagadish Babu Jonnada
  • Patent number: 11921643
    Abstract: A processor is provided that includes a first multiplication unit in a first data path of the processor, the first multiplication unit configured to perform single issue multiply instructions, and a second multiplication unit in the first data path, the second multiplication unit configured to perform single issue multiply instructions, wherein the first multiplication unit and the second multiplication unit are configured to execute respective single issue multiply instructions in parallel.
    Type: Grant
    Filed: March 9, 2022
    Date of Patent: March 5, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: Mujibur Rahman, Timothy David Anderson, Soujanya Narnur
  • Patent number: 11916552
    Abstract: Techniques and apparatus for dynamically modifying a kernel (and associated user-specified circuitry) for a dynamic region of a programmable integrated circuit (IC) without affecting (e.g., while allowing) operation of other kernels (and other associated user-specified circuitry) in the programmable IC. Dynamically modifying a kernel may include, for example, unloading an existing kernel, loading a new kernel, or replacing a first kernel with a second kernel. In the case of networking (e.g., in a data center application) where the programmable IC may be part of a hardware acceleration card (e.g., a network interface card (NIC)), the kernel may be user code referred to as a “plugin.”
    Type: Grant
    Filed: March 9, 2022
    Date of Patent: February 27, 2024
    Assignee: XILINX, INC.
    Inventors: Ellery Cochell, Ripduman Singh Sohan, Kieran Mansley
  • Patent number: 11907715
    Abstract: Techniques are provided to implement hardware accelerated application of preconditioners to solve linear equations. For example, a system includes a processor, and a resistive processing unit coupled to the processor. The resistive processing unit includes an array of cells which include respective resistive devices, wherein at least a portion of the resistive devices are tunable to encode entries of a preconditioning matrix which is storable in the array of cells. When the preconditioning matrix is stored in the array of cells, the processor is configured to apply the preconditioning matrix to a plurality of residual vectors by executing a process which includes performing analog matrix-vector multiplication operations on the preconditioning matrix and respective ones of the plurality of residual vectors to generate a plurality of output vectors used in one or more subsequent operations.
    Type: Grant
    Filed: December 20, 2021
    Date of Patent: February 20, 2024
    Assignee: International Business Machines Corporation
    Inventors: Vasileios Kalantzis, Lior Horesh, Shashanka Ubaru
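
The accelerated step this abstract describes (applying a stored preconditioning matrix to a batch of residual vectors via analog matrix-vector products) can be sketched as below. The analog multiply is modeled exactly here; real resistive hardware would add noise and quantization, and the Jacobi-style example matrix is just one possible preconditioner.

```python
def apply_preconditioner(M, residuals):
    """M is held in the crossbar; each residual vector is applied as one
    matrix-vector multiplication, yielding one output vector per residual."""
    def matvec(v):
        return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]
    return [matvec(r) for r in residuals]

# Example: an inverse-diagonal (Jacobi) preconditioner encoded in the array.
M = [[0.5, 0.0], [0.0, 0.25]]
out = apply_preconditioner(M, [[2.0, 4.0], [8.0, 0.0]])
assert out == [[1.0, 1.0], [4.0, 0.0]]
```

Keeping M resident in the array means each preconditioning step costs only the vector I/O, which is the motivation for the in-memory formulation.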
  • Patent number: 11904918
    Abstract: A computer interlocking system includes: a first sub-system and a second sub-system that have a same structure and function, where the first sub-system and the second sub-system form a double 2-vote-2 architecture, respectively including a main control layer, a network layer, and a communication and execution layer; the network layer being configured to construct a communication network of a sub-system in which the network layer is located; the main control layer and the communication and execution layer in the first sub-system being respectively connected to a communication network of the first sub-system; and the main control layer and the communication and execution layer in the second sub-system being respectively connected to a communication network of the second sub-system.
    Type: Grant
    Filed: April 23, 2019
    Date of Patent: February 20, 2024
    Assignee: BYD COMPANY LIMITED
    Inventors: Yejun Qin, Tao Yang, Faping Wang
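
The voting logic behind a double 2-vote-2 architecture can be sketched as follows. This models only the safety semantics (agreement within a sub-system, fail-over between sub-systems, fail-safe otherwise); the actual system's network and execution layers are out of scope, and the fail-safe value `None` is an assumption.

```python
def vote_2oo2(channel_a, channel_b):
    """One 2-vote-2 lane: both channels must agree, else fail safe."""
    return channel_a if channel_a == channel_b else None

def double_2oo2(sub1, sub2):
    """Two 2-vote-2 sub-systems: the first that yields a valid agreed
    output wins; if neither agrees, the output is fail-safe."""
    for a, b in (sub1, sub2):
        out = vote_2oo2(a, b)
        if out is not None:
            return out
    return None

assert double_2oo2(("proceed", "stop"), ("stop", "stop")) == "stop"
assert double_2oo2(("go", "stop"), ("go", "stop")) is None
```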
  • Patent number: 11900112
    Abstract: A method to reverse source data in a processor in response to a vector reverse instruction includes specifying, in respective fields of the vector reverse instruction, a source register containing the source data and a destination register. The source register includes a plurality of lanes and each lane contains a data element, and the destination register includes a plurality of lanes corresponding to the lanes of the source register. The method further includes executing the vector reverse instruction by creating reversed source data by reversing the order of the data elements, and storing the reversed source data in the destination register.
    Type: Grant
    Filed: March 28, 2022
    Date of Patent: February 13, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: Timothy D. Anderson, Duc Bui
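
The lane-reversal semantics of such a vector reverse instruction can be sketched on a packed integer register. The 16-bit lane width and lane count below are illustrative, not taken from the patent.

```python
def vrev(src_reg, lane_bits=16, lanes=8):
    """Unpack equal-width lanes from the source register and repack them
    into the destination in reverse order."""
    mask = (1 << lane_bits) - 1
    elems = [(src_reg >> (i * lane_bits)) & mask for i in range(lanes)]
    dst = 0
    for i, e in enumerate(reversed(elems)):
        dst |= e << (i * lane_bits)
    return dst

# Lanes 1,2,3,4 (low to high) come out as 4,3,2,1.
assert vrev(0x0004_0003_0002_0001, lane_bits=16, lanes=4) == 0x0001_0002_0003_0004
```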
  • Patent number: 11899616
    Abstract: The present disclosure provides a systolic array-based data processing method that includes determining an input splice quantity for the systolic array based on a target input depth and a standard input depth, and determining an output splice quantity for the systolic array based on a target output depth and a standard output depth; inputting the input data matching the input splice quantity to an input buffer of the systolic array in batches, without overlaps in the input data, and processing, by the systolic array, the input data in the input buffer to generate output data corresponding to each piece of input data; and in accordance with a determination that a quantity of output data received by an output buffer of the systolic array from the systolic array matches the output splice quantity, outputting, in the output buffer, output data having a quantity matching the output splice quantity in batches.
    Type: Grant
    Filed: November 14, 2022
    Date of Patent: February 13, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xiaoyu Yu, Dewei Chen, Heng Zhang
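
The splice-quantity computation this abstract opens with can be sketched directly. Ceiling division is an assumption consistent with covering the full target depth in non-overlapping standard-depth batches; the function and parameter names are illustrative.

```python
import math

def splice_quantities(target_in, standard_in, target_out, standard_out):
    """How many standard-depth input and output slices cover the
    target input and output depths."""
    in_splice = math.ceil(target_in / standard_in)
    out_splice = math.ceil(target_out / standard_out)
    return in_splice, out_splice

# A 96-deep input on a 32-deep array needs 3 input batches;
# a 48-deep output at standard depth 16 needs 3 output batches.
assert splice_quantities(96, 32, 48, 16) == (3, 3)
```

The output buffer then releases results only once it has accumulated a full output-splice batch, matching the "quantity matching the output splice quantity" condition in the abstract.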
  • Patent number: 11893391
    Abstract: The example embodiments provide a method, a system, a mobile device, and an acceleration device for processing computing jobs. The method includes: obtaining, by a mobile device, a computing job, wherein a first interface of the mobile device is connected to a second interface, the second interface included in an acceleration device; transmitting, by the mobile device, the computing job from the first interface to the second interface via a write command; receiving, by the acceleration device, the computing job at the second interface; processing, by the acceleration device, the computing job and transmitting a processing result from the second interface to the first interface; and obtaining, by the mobile device, the processing result from the first interface via a read command.
    Type: Grant
    Filed: April 26, 2020
    Date of Patent: February 6, 2024
    Assignee: ALIBABA GROUP HOLDING LIMITED
    Inventors: Wente Wang, Jiejing Zhang