Patents Examined by Ryan C Vaughn
  • Patent number: 11961003
    Abstract: A device, system, and method are provided for training a new neural network to mimic a target neural network without access to the target neural network or its original training dataset. The target neural network and the new neural network may be probed with input data to generate corresponding target and new output data. Input data may be detected that generate a maximum or above-threshold difference between the corresponding target and new output data. A divergent probe training dataset may be generated comprising the input data that generate the maximum or above-threshold difference and the corresponding target output data. The new neural network may be trained using the divergent probe training dataset to generate the target output data. The new neural network may be iteratively trained using an updated divergent probe training dataset dynamically adjusted as the new neural network changes during training.
    Type: Grant
    Filed: July 8, 2020
    Date of Patent: April 16, 2024
    Assignee: NANO DIMENSION TECHNOLOGIES, LTD.
    Inventors: Eli David, Eri Rubin
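The probe-and-retrain loop this abstract describes can be sketched in a few lines. Everything below is an invented stand-in for illustration, not the patented implementation: the "target" is a fixed black box we can only query, and the "new" network is a same-shape model trained on the most divergent probes.

```python
import numpy as np

rng = np.random.default_rng(0)
W_target = rng.normal(size=(4, 3))      # hidden from the student in spirit

def target_net(x):                      # probe-only interface
    return np.tanh(x @ W_target)

W_student = np.zeros((4, 3))

def student_net(x):
    return np.tanh(x @ W_student)

def divergent_probes(n=512, keep=64):
    """Probe both nets with random inputs and keep those with the largest
    output gap, paired with the target's outputs (the divergent dataset)."""
    x = rng.normal(size=(n, 4))
    gap = np.linalg.norm(target_net(x) - student_net(x), axis=1)
    top = np.argsort(gap)[-keep:]       # maximal / above-threshold gap
    return x[top], target_net(x[top])

x_test = rng.normal(size=(128, 4))
err0 = np.abs(target_net(x_test) - student_net(x_test)).mean()

# The probe set is regenerated every round: as the student changes, the
# most divergent inputs shift -- the "dynamically adjusted" part.
for _ in range(200):
    xs, ys = divergent_probes()
    pred = student_net(xs)
    grad = xs.T @ ((pred - ys) * (1 - pred ** 2)) / len(xs)
    W_student -= 0.2 * grad

err1 = np.abs(target_net(x_test) - student_net(x_test)).mean()
print(err0, err1)
```

The point of concentrating training on the divergent probes is that they are exactly where the student still disagrees with the target, so each round spends its gradient budget on the remaining mismatch.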
  • Patent number: 11954564
    Abstract: A computer-implemented method and a computer system are provided for dynamically and automatically altering a user profile based on machine learning and cognitive analysis to improve user performance. The user profile is dynamically altered based upon live data from multiple external data sources using machine learning and cognitive application programming interfaces (APIs), without explicit input from the user. The altered user profile is automatically stored for the user. The stored user profile is deployed for multiple selected user applications, enabling enhanced performance for the user.
    Type: Grant
    Filed: September 21, 2017
    Date of Patent: April 9, 2024
    Assignee: International Business Machines Corporation
    Inventors: Thomas N. Adams, Sarah W. Huber, Meghna Paruthi, Maria R. Ward
  • Patent number: 11954585
    Abstract: The present disclosure relates to the technical field of semiconductor integrated circuits and discloses a multi-mode array structure for in-memory computing, and a chip, including: an array of memory cells, function lines corresponding to all the memory cells along the rows of the array, and complementary function lines and bit lines BL corresponding to all the memory cells along the columns of the array. According to the present disclosure, the TCAM function and CNN and SNN operations are enabled; by integrating multiple modes of storage and computation, the multi-mode in-memory computing array goes beyond the limits of the von Neumann architecture and achieves efficient operation and computation; in addition to addressing the computing-power problem, a new array mode is provided to promote the development of highly integrated circuits.
    Type: Grant
    Filed: May 29, 2023
    Date of Patent: April 9, 2024
    Assignee: ZJU-Hangzhou Global Scientific and Technological Innovation Center
    Inventors: Yishu Zhang, Hua Wang, Xuemeng Fan
  • Patent number: 11941514
    Abstract: The present disclosure discloses a method for execution of a computational graph in a neural network model and an apparatus thereof, including: creating task execution bodies on a native machine according to a physical computational graph compiled and generated by a deep learning framework, and designing a solution for allocating a plurality of idle memory blocks to each task execution body, so that the entire computational graph participates in deep learning training tasks on different batches of data in a pipelined, parallel manner.
    Type: Grant
    Filed: March 29, 2022
    Date of Patent: March 26, 2024
    Assignee: ZHEJIANG LAB
    Inventors: Hongsheng Wang, Hujun Bao, Guang Chen, Lingfang Zeng, Hongcai Cheng, Yong Li, Jian Zhu, Huanbo Zheng
  • Patent number: 11941528
    Abstract: Methods and systems for performing a training operation of a neural network are provided. In one example, a method comprises: performing backward propagation computations for a second layer of a neural network to generate second weight gradients; splitting the second weight gradients into portions; causing a hardware interface to exchange a first portion of the second weight gradients with the second computer system; performing backward propagation computations for a first layer of the neural network to generate first weight gradients when the exchange of the first portion of the second weight gradients is underway, the first layer being a lower layer than the second layer in the neural network; causing the hardware interface to transmit the first weight gradients to the second computer system; and causing the hardware interface to transmit the remaining portions of the second weight gradients to the second computer system.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: March 26, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Vignesh Vivekraja, Thiam Khean Hah, Randy Renfu Huang, Ron Diamant, Richard John Heaton
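The ordering claimed in this abstract (split the higher layer's gradients, start exchanging a portion, backpropagate the lower layer while that exchange is underway, then send the lower layer's gradients before the remaining portions) can be mirrored as a simple schedule. The sketch below replaces the real hardware interface and second computer system with a log, so the overlap is only simulated:

```python
import numpy as np

log = []

def exchange(name):                    # stand-in for the hardware interface
    log.append(f"send {name}")

grads_l2 = np.ones(8)                  # backprop result for layer 2 (higher)
portions = np.array_split(grads_l2, 4) # split into portions

exchange("l2 portion 0")               # start sending the first portion...
grads_l1 = 2 * np.ones(4)              # ...while layer-1 backprop runs
log.append("compute l1 grads")
exchange("l1 grads")                   # lower layer goes out next: its
                                       # updated weights are needed first
for i in range(1, len(portions)):      # then the remaining l2 portions
    exchange(f"l2 portion {i}")

print(log)
```

Prioritizing the lower layer's gradients makes sense because the next forward pass consumes the lowest layers first, so their synchronized weights are on the critical path.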
  • Patent number: 11941532
    Abstract: Disclosed is a method for adapting a deep learning framework to a hardware device based on a unified backend engine, which comprises the following steps: S1, adding the unified backend engine to the deep learning framework; S2, adding the unified backend engine to the hardware device; S3, converting a computational graph, wherein the computational graph compiled and generated by the deep learning framework is converted into an intermediate representation of the unified backend engine; S4, compiling the intermediate representation, wherein the unified backend engine compiles the intermediate representation on the hardware device to generate an executable object; S5, running the executable object, wherein the deep learning framework runs the executable object on the hardware device; S6, managing memory of the unified backend engine.
    Type: Grant
    Filed: April 22, 2022
    Date of Patent: March 26, 2024
    Assignee: ZHEJIANG LAB
    Inventors: Hongsheng Wang, Wei Hua, Hujun Bao, Fei Yang
  • Patent number: 11941491
    Abstract: In some embodiments, a non-transitory processor-readable medium stores code representing instructions to be executed by a processor. The code includes code to cause the processor to receive a structured file for which a machine learning model has made a malicious content classification. The code further includes code to remove a portion of the structured file to define a modified structured file that follows a format associated with a type of the structured file. The code further includes code to extract a set of features from the modified structured file. The code further includes code to provide the set of features as an input to the machine learning model to produce an output. The code further includes code to identify an impact of the portion of the structured file on the malicious content classification of the structured file based on the output.
    Type: Grant
    Filed: January 31, 2018
    Date of Patent: March 26, 2024
    Assignee: Sophos Limited
    Inventors: Richard Harang, Joshua Daniel Saxe
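The ablation idea in this abstract (remove a portion of the file, re-extract features, and measure how the model's output moves) can be shown with toy stand-ins. The "model" below is a hand-written linear scorer and the "structured file" a dict of sections; every name and weight is invented for the example:

```python
def extract_features(structured_file):
    """Toy feature extractor over a dict of named sections."""
    return {
        "n_sections": len(structured_file),
        "has_macros": int("macros" in structured_file),
        "total_size": sum(len(v) for v in structured_file.values()),
    }

WEIGHTS = {"n_sections": 0.1, "has_macros": 2.0, "total_size": 0.001}

def model_score(features):             # higher = more likely malicious
    return sum(WEIGHTS[k] * v for k, v in features.items())

def portion_impact(structured_file, portion):
    """Score drop when `portion` is removed: that portion's contribution
    to the malicious-content classification."""
    base = model_score(extract_features(structured_file))
    modified = {k: v for k, v in structured_file.items() if k != portion}
    return base - model_score(extract_features(modified))

doc = {"header": b"\x00" * 64, "macros": b"AutoOpen" * 8, "body": b"text" * 100}
print(portion_impact(doc, "macros"), portion_impact(doc, "body"))
```

Removing the macros section moves the score far more than removing the body, which is the kind of per-portion attribution the method surfaces.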
  • Patent number: 11928760
    Abstract: Techniques are described for automatically detecting and accommodating state changes in a computer-generated forecast. In one or more embodiments, a representation of a time-series signal is generated within volatile and/or non-volatile storage of a computing device. The representation may be generated in such a way as to approximate the behavior of the time-series signal across one or more seasonal periods. Once generated, a set of one or more state changes within the representation of the time-series signal is identified. Based at least in part on at least one state change in the set of one or more state changes, a subset of values from the sequence of values is selected to train a model. An analytical output is then generated, within volatile and/or non-volatile storage of the computing device, using the trained model.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: March 12, 2024
    Assignee: Oracle International Corporation
    Inventors: Dustin Garvey, Uri Shaft, Sampanna Shahaji Salunke, Lik Wong
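The core move in this abstract is to detect a state change in the series and then train the model only on values after it. A minimal sketch, using a deliberately simple two-sample z-score scan as the change detector (not Oracle's method; all numbers invented):

```python
import numpy as np

def detect_change(values, margin=8):
    """Return the split index with the strongest mean shift."""
    best_i, best_z = 0, 0.0
    for i in range(margin, len(values) - margin):
        left, right = values[:i], values[i:]
        se = np.sqrt(left.var() / len(left) + right.var() / len(right))
        z = abs(right.mean() - left.mean()) / se if se > 0 else 0.0
        if z > best_z:
            best_i, best_z = i, z
    return best_i

rng = np.random.default_rng(1)
series = np.concatenate([rng.normal(10, 1, 60),   # old regime
                         rng.normal(25, 1, 40)])  # new regime
cut = detect_change(series)
train = series[cut:]                  # subset selected to train the model
forecast = train.mean()               # trivial stand-in for "the model"
print(cut, forecast)
```

Training on the full history would drag the forecast toward the stale pre-change level; restricting to the post-change subset keeps the model on the current regime.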
  • Patent number: 11900243
    Abstract: A computing core circuit, including: an encoding module, a route sending module, and a control module, wherein the control module is configured to control the encoding module to perform encoding processing on a pulse sequence determined by pulses of at least one neuron in a current computing core to be transmitted, so as to obtain an encoded pulse sequence, and control the route sending module to determine a corresponding route packet according to the encoded pulse sequence, so as to send the route packet. The present disclosure further provides a data processing method, a chip, a board, an electronic device, and a computer-readable storage medium.
    Type: Grant
    Filed: April 22, 2021
    Date of Patent: February 13, 2024
    Assignee: LYNXI TECHNOLOGIES CO., LTD.
    Inventors: Zhenzhi Wu, Yaolong Zhu, Luojun Jin, Wei He, Qikun Zhang
  • Patent number: 11900239
    Abstract: Systems and methods for dynamically executing sparse neural networks are provided. In one implementation, a system for providing dynamic sparsity in a neural network may include at least one memory storing instructions and at least one processor configured to execute the instructions to: reduce an input vector and a set of weights of the neural network; execute an input layer of the neural network using the reduced input vector and set of weights to generate a reduced output vector; expand the reduced output vector to a full output vector using first predictable output neurons (PONs); using a PON map, reduce a dimension of the full output vector; execute subsequent layers of the neural network using the reduced full output vector to produce a second reduced output vector; and expand the second reduced output vector to a second full output vector using second PONs.
    Type: Grant
    Filed: September 5, 2019
    Date of Patent: February 13, 2024
    Assignee: Alibaba Group Holding Limited
    Inventors: Zhenyu Gu, Liu Liu, Shuangchen Li, Yuan Xie
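The reduce/execute/expand cycle around a PON map can be sketched with invented shapes. Here "predictable output neurons" are simply positions whose value we assume known in advance (zero, in this toy), so the layer is executed only for the unpredictable positions:

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(6, 6))                 # one dense layer's weights
x = rng.normal(size=6)                      # input vector

pon_map = np.array([False, True, False, True, True, False])  # True = PON
pon_values = np.zeros(int(pon_map.sum()))   # assumed-predictable outputs

# Reduced execution: compute only the non-PON rows of the layer.
reduced_out = W[~pon_map] @ x

# Expand back to the full output vector using the PON predictions.
full_out = np.empty(6)
full_out[~pon_map] = reduced_out
full_out[pon_map] = pon_values

dense_out = W @ x                           # reference: full execution
print(np.allclose(full_out[~pon_map], dense_out[~pon_map]))
```

The saving is that only three of six rows of the matrix-vector product are computed; the expansion step restores the full dimension so the next layer can apply its own PON map and repeat the cycle.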
  • Patent number: 11880761
    Abstract: Systems and methods for adding a new domain to a natural language understanding system to form an updated language understanding system with multiple domain experts are provided. More specifically, the systems and methods are able to add a new domain utilizing data from one or more of the domains already present in the natural language understanding system while keeping the new domain and the already present domains separate from each other.
    Type: Grant
    Filed: July 28, 2017
    Date of Patent: January 23, 2024
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Imed Zitouni, Dongchan Kim, Young-Bum Kim
  • Patent number: 11861486
    Abstract: A neural processing unit of a binarized neural network (BNN) as a hardware accelerator is provided, for the purpose of reducing hardware resource demand and power consumption while maintaining acceptable output precision. The neural processing unit may include: a first block configured to perform convolution by using a binarized feature map with a binarized weight; and a second block configured to perform batch-normalization on an output of the first block. A register having a particular size may be disposed between the first block and the second block. Each of the first block and the second block may include one or more processing engines. The one or more processing engines may be connected in the form of a pipeline.
    Type: Grant
    Filed: November 11, 2022
    Date of Patent: January 2, 2024
    Assignee: DEEPX CO., LTD.
    Inventors: Lok Won Kim, Quang Hieu Vo
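The two pipeline stages this abstract names, binarized convolution followed by batch-normalization, can be illustrated with the standard XNOR/popcount trick. This is a generic BNN sketch, not DEEPX's circuit; all values are invented:

```python
import numpy as np

def bin_dot(a_bits, w_bits):
    """Binarized dot product: with values in {-1,+1} encoded as {0,1},
    a*w = +1 exactly when the bits match, so the result is
    matches - mismatches = 2*popcount(XNOR) - n."""
    n = len(a_bits)
    xnor = ~(a_bits ^ w_bits) & 1
    return 2 * int(xnor.sum()) - n

def batch_norm(x, mean, var, gamma=1.0, beta=0.0, eps=1e-5):
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

a = np.array([1, 0, 1, 1, 0, 1], dtype=np.uint8)   # +1,-1,+1,+1,-1,+1
w = np.array([1, 1, 1, 0, 0, 1], dtype=np.uint8)   # binarized weights
acc = bin_dot(a, w)                     # first block: binary convolution
out = batch_norm(acc, mean=0.0, var=4.0)  # second block: normalization
print(acc, out)
```

The XNOR/popcount form is why binarization saves hardware: a multiply-accumulate collapses into bitwise logic plus a population count, which is the kind of processing engine that pipelines cheaply.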
  • Patent number: 11853862
    Abstract: A method of performing unsupervised detection of repeating patterns in a series (TS) of events (E21, E12, E5, . . . ), comprising the steps of: a) Providing a plurality of neurons (NR1-NRP), each neuron being representative of W event types; b) Acquiring an input packet (IV) comprising N successive events of the series; c) Attributing to at least some neurons a potential value (PT1-PTP), representative of the number of common events between the input packet and the neuron; d) Modifying the event types of neurons having a potential value exceeding a first threshold TL; and e) Generating a first output signal (OS1-OSP) for all neurons having a potential value exceeding a second threshold TF, and a second output signal, different from the first one, for all other neurons. A digital electronic circuit and system configured for carrying out the above method.
    Type: Grant
    Filed: November 20, 2017
    Date of Patent: December 26, 2023
    Assignee: BrainChip, Inc.
    Inventors: Simon Thorpe, Timothée Masquelier, Jacob Martin, Amir Reza Yousefzadeh, Bernabe Linares-Barranco
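Steps a) through e) of this abstract can be rendered as a small simulation. Event types are small integers and all sizes and thresholds below are invented for the example; the learning rule in step d) is reduced to swapping one non-matching event type for one packet type:

```python
import random

random.seed(3)
EVENT_TYPES = range(20)
W, N, TL, TF = 5, 8, 2, 4                       # neuron size, packet size,
                                                # learn & fire thresholds
pattern = set(range(N))                         # a repeating input packet
neurons = [{0, 1, 2, 15, 16}] + \
          [set(random.sample(EVENT_TYPES, W)) for _ in range(9)]

def process_packet(packet):
    fired = []
    for nr in neurons:
        potential = len(nr & packet)            # c) common-event count
        if potential > TL:                      # d) adapt the neuron:
            stale = nr - packet                 #    drop a non-matching
            fresh = packet - nr                 #    type, take a packet one
            if stale and fresh:
                nr.remove(next(iter(stale)))
                nr.add(next(iter(fresh)))
        fired.append(potential > TF)            # e) first/second output
    return fired

for _ in range(50):                             # present the pattern often
    process_packet(pattern)
fired = process_packet(pattern)
print(sum(fired))
```

Neurons that start with enough overlap keep trading irrelevant event types for pattern types until they sit inside the repeating pattern and cross the firing threshold; neurons below the learning threshold never adapt, which is what makes the detection unsupervised and selective.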
  • Patent number: 11853852
    Abstract: Systems and methods for preventing machine learning models from negatively affecting mobile devices are provided. For example, a mobile device including a camera, memory devices, and one or more processors is provided. In some embodiments, the processors may be configured to provide images captured by the camera to a machine learning model at a first rate. The processors may also be configured to determine whether one or more of the images includes an object. If one or more of the images includes the object, the processors may be further configured to adjust the first rate of providing the images to the machine learning model to a second rate, and in some embodiments, determine whether to adjust the second rate of providing the images to the machine learning model to a third rate based on output received from the machine learning model.
    Type: Grant
    Filed: October 24, 2019
    Date of Patent: December 26, 2023
    Assignee: Capital One Services, LLC
    Inventors: Micah Price, Sunil Subrahmanyam Vasisht, Stephen Michael Wylie, Geoffrey Dagley, Qiaochu Tang, Jason Richard Hoover
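The rate-adaptation policy sketched in this abstract boils down to picking a frame-feed rate from what was detected and what the model reports. The function and rates below are invented stand-ins, not the patented logic:

```python
def next_rate(current_fps, object_detected, model_busy):
    """Pick how often camera frames are fed to the model: idle at a slow
    scan rate when nothing is found, back off when the model reports it
    is falling behind, and speed up when an object is in view."""
    if not object_detected:
        return 2            # slow scan: save battery and compute
    if model_busy:
        return 5            # object present, but don't flood the model
    return 15               # object present and model keeping up

print(next_rate(15, False, False))
```

Throttling the feed rate rather than the model itself is what keeps the model from "negatively affecting" the device: the expensive inference path simply runs less often when it has nothing useful to do.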
  • Patent number: 11853877
    Abstract: Whether to train a new neural network model can be determined based on similarity estimates between a sample data set and a plurality of source data sets associated with a plurality of prior-trained neural network models. A cluster among the plurality of prior-trained neural network models can be determined. A set of training data based on the cluster can be determined. The new neural network model can be trained based on the set of training data.
    Type: Grant
    Filed: April 2, 2019
    Date of Patent: December 26, 2023
    Assignee: International Business Machines Corporation
    Inventors: Patrick Watson, Bishwaranjan Bhattacharjee, Siyu Huo, Noel Christopher Codella, Brian Michael Belgodere, Parijat Dube, Michael Robert Glass, John Ronald Kender, Matthew Leon Hill
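The decision flow here, estimate similarity between the sample data and each prior model's source data, cluster the similar ones, and only train from scratch when nothing is close, can be sketched with an invented similarity measure (distance between feature means; every name below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
source_means = {                          # one summary per prior model
    "model_a": np.array([0.0, 0.0]),
    "model_b": np.array([5.0, 5.0]),
    "model_c": np.array([0.3, -0.2]),
}

sample = rng.normal(loc=[0.1, 0.0], scale=0.2, size=(50, 2))
sample_mean = sample.mean(axis=0)

dists = {k: np.linalg.norm(v - sample_mean) for k, v in source_means.items()}
cluster = [k for k, d in dists.items() if d < 1.0]   # similar prior models
train_new = len(cluster) == 0             # train from scratch only if no
                                          # prior model is close enough
print(sorted(cluster), train_new)
```

When a cluster of similar prior models exists, their source data (or the models themselves) seed the training set instead of starting over, which is the cost the similarity estimate is there to avoid.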
  • Patent number: 11853896
    Abstract: The present disclosure belongs to the technical field of machine learning. Specifically provided is a neural network model, including at least one intermediate layer that contains different types of neurons corresponding to different types of neural networks. The neural network model is obtained based on an initial neural network and a multi-valued mask during a training process, and the multi-valued mask is obtained by performing multi-value processing on a continuous mask. Further provided are a method for training the neural network model, a time-sequence data processing method, an electronic device, and a readable medium.
    Type: Grant
    Filed: December 28, 2021
    Date of Patent: December 26, 2023
    Assignee: LYNXI TECHNOLOGIES CO., LTD.
    Inventor: Yaolong Zhu
  • Patent number: 11715020
    Abstract: A device for operating a machine learning system. The machine learning system is assigned a predefinable rollout, which characterizes a sequence in which each of the layers ascertains an intermediate variable. When assigning the rollout, each connection or each layer is assigned a control variable, which characterizes whether the intermediate variable of each of the subsequent connected layers is ascertained according to the sequence or regardless of the sequence. A calculation of an output variable of the machine learning system as a function of an input variable of the machine learning system is controlled as a function of the predefinable rollout. Also described is a method for operating the machine learning system.
    Type: Grant
    Filed: May 24, 2019
    Date of Patent: August 1, 2023
    Assignee: ROBERT BOSCH GMBH
    Inventor: Volker Fischer
  • Patent number: 11699095
    Abstract: A training apparatus includes an acquiring unit that acquires a first model including an input layer to which input information is input; a plurality of intermediate layers that executes a calculation based on a feature of the input information that has been input; and an output layer that outputs output information that corresponds to output of the intermediate layer. The training apparatus includes a training unit that trains the first model such that, when predetermined input information is input to the first model, the first model outputs predetermined output information that corresponds to the predetermined input information and intermediate information output from a predetermined intermediate layer among the intermediate layers becomes close to feature information that corresponds to a feature of correspondence information that corresponds to the predetermined input information.
    Type: Grant
    Filed: January 17, 2019
    Date of Patent: July 11, 2023
    Assignee: YAHOO JAPAN CORPORATION
    Inventors: Heishiro Kanagawa, Hayato Kobayashi, Nobuyuki Shimizu, Yukihiro Tagami
  • Patent number: 11694065
    Abstract: Devices and methods related to spiking neural units in memory. One device includes a memory array and a complementary metal-oxide semiconductor (CMOS) coupled to the memory array and located under the memory array, wherein the CMOS includes a spiking neural unit comprising logic configured to receive an input to increase a weight stored in a memory cell of the memory array, collect the weight from the memory cell of the memory array, accumulate the weight with an increase based on the input, compare the accumulated weight to a threshold weight, and provide an output in response to the accumulated weight being greater than the threshold weight.
    Type: Grant
    Filed: August 28, 2019
    Date of Patent: July 4, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Richard C. Murphy, Glen E. Hush, Honglin Sun
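The logic this abstract lists for a spiking neural unit (receive input, increase the stored weight, collect and accumulate it, compare to a threshold, and emit an output when it is exceeded) maps onto a few lines of integrate-and-fire code. The "memory cell" is just a variable here; this is a behavioral sketch, not the CMOS-under-array design:

```python
class SpikingUnit:
    def __init__(self, threshold):
        self.threshold = threshold
        self.stored_weight = 0      # "memory cell" contents
        self.accumulated = 0

    def receive(self, delta):
        self.stored_weight += delta              # increase weight in the cell
        self.accumulated += self.stored_weight   # collect + accumulate
        fired = self.accumulated > self.threshold
        if fired:
            self.accumulated = 0                 # reset after the spike
        return fired

unit = SpikingUnit(threshold=10)
spikes = [unit.receive(2) for _ in range(4)]
print(spikes)
```

Placing this logic under the memory array means the weight never has to travel over a bus to be accumulated, which is the in-memory-computing payoff the patent targets.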
  • Patent number: 11687759
    Abstract: A neural network implementation is disclosed. The implementation allows the computations for the neural network to be performed on either an accelerator or a processor. The accelerator and the processor share a memory and communicate over a bus to perform the computations and to share data. The implementation uses weight compression and pruning, as well as parallel processing, to reduce computing, storage, and power requirements.
    Type: Grant
    Filed: April 16, 2019
    Date of Patent: June 27, 2023
    Assignee: SEMICONDUCTOR COMPONENTS INDUSTRIES, LLC
    Inventors: Ivo Leonardus Coenen, Dennis Wayne Mitchler
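The weight pruning and compression this abstract mentions can be shown with a generic sparse-storage sketch (magnitude pruning plus a values-and-indices encoding; the threshold and shapes are invented, and this is not the disclosed accelerator format):

```python
import numpy as np

rng = np.random.default_rng(5)
W = rng.normal(size=(8, 8))                  # dense layer weights

pruned = np.where(np.abs(W) > 0.8, W, 0.0)   # prune small magnitudes

# Compressed representation: values + flat indices of surviving weights.
idx = np.flatnonzero(pruned)
values = pruned.ravel()[idx]

def decompress(values, idx, shape):
    out = np.zeros(np.prod(shape))
    out[idx] = values
    return out.reshape(shape)

restored = decompress(values, idx, W.shape)
ratio = len(values) / W.size                 # fraction of weights kept
print(ratio, np.allclose(restored, pruned))
```

Storing only the non-zero values and their indices is what cuts both the memory footprint and, on an accelerator that can skip zeros, the compute and power spent per inference.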