Patents Issued on November 15, 2022
-
Patent number: 11501112
Abstract: A computerized method of diagnosing a mislabeling of a source type of a received event. The method comprises operations of receiving an event by a source type analysis logic with a data index and query system, wherein the event includes a portion of raw machine data and is associated with a specific point in time, and obtaining an original source type assigned to the event and one or more predicted source types. The one or more predicted source types are determined by analysis of a data representation of the event in view of training data, and the training data includes a plurality of data representations corresponding to known source types. Additionally, the computerized method includes operations of determining whether the event has been mislabeled and, in response to determining the event has been mislabeled, diagnosing a source of the mislabeling.
Type: Grant
Filed: April 30, 2018
Date of Patent: November 15, 2022
Assignee: SPLUNK Inc.
Inventors: Adam Oliner, Kristal Curtis, Nghi Huu Nguyen, Alexander Johnson
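As a rough illustration of the kind of check this abstract describes, here is a minimal Python sketch. The majority-vote rule, the threshold, and the source-type names are assumptions for the example, not the patented diagnosis logic.

```python
from collections import Counter

def diagnose_mislabel(original_source_type, predicted_source_types, threshold=0.5):
    """Compare the source type assigned to an event against source types
    predicted from training-data representations; flag a mislabeling when the
    predictions disagree strongly enough. Rule and threshold are assumptions."""
    votes = Counter(predicted_source_types)
    predicted, count = votes.most_common(1)[0]
    agreement = count / len(predicted_source_types)
    if predicted != original_source_type and agreement > threshold:
        return {"mislabeled": True, "original": original_source_type,
                "predicted": predicted, "agreement": agreement}
    return {"mislabeled": False, "original": original_source_type}

# Event labeled "apache:access" whose nearest training representations look like syslog.
print(diagnose_mislabel("apache:access",
                        ["syslog", "syslog", "syslog", "apache:access", "syslog"]))
```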
-
Patent number: 11501113
Abstract: Identifying insect species integrates image processing, feature selection, unsupervised clustering, and a support vector machine (SVM) learning algorithm for classification. Results with a total of 101 mosquito specimens spread across nine different vector carrying species demonstrate high accuracy in species identification. When implemented as a smart-phone application, the latency and energy consumption were minimal. The currently manual process of species identification and recording can be sped up, while also minimizing the ensuing cognitive workload of personnel. Citizens at large can use the system in their own homes for self-awareness and share insect identification data with public health agencies.
Type: Grant
Filed: February 5, 2021
Date of Patent: November 15, 2022
Assignee: University of South Florida
Inventors: Sriram Chellappan, Pratool Bharti, Mona Minakshi, Willie McClinton, Jamshidbek Mirzakhalov
-
Patent number: 11501114
Abstract: The generating of actionable recommendations for tuning model metrics of an Artificial Intelligence (AI) system includes partitioning a key performance indicator (KPI) range associated with a target system into a plurality of buckets. Log data including at least one KPI of the target system and one or more AI model metrics is partitioned and distributed across the plurality of buckets. For each bucket, an aggregate value of the one or more AI model metrics across the log data is computed and weighted according to the volume of log data in that bucket. A correlation factor between the aggregate value and a representative KPI value for each bucket is determined. A model tuning recommendation to increase ranking of the AI model metrics according to the determined correlation factor is provided to an output device and/or to the AI system for updating the one or more AI models.
Type: Grant
Filed: December 2, 2019
Date of Patent: November 15, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Matthew Arnold, Evelyn Duesterwald, Darrell Reimer, Michael Desmond, Harold Leon Ossher, Robert Yates
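One way to read the bucketing-and-correlation procedure in this abstract is sketched below in Python. The bucket count, the volume-weighting choice, and the synthetic data are assumptions for the example, not the patented method.

```python
import numpy as np

def bucket_correlation(kpi_values, metric_values, n_buckets=5):
    """Partition the KPI range into buckets, compute a volume-weighted
    aggregate of an AI model metric per bucket, and correlate those
    aggregates with each bucket's representative KPI value."""
    kpi_values = np.asarray(kpi_values, dtype=float)
    metric_values = np.asarray(metric_values, dtype=float)
    edges = np.linspace(kpi_values.min(), kpi_values.max(), n_buckets + 1)
    bucket_ids = np.clip(np.digitize(kpi_values, edges) - 1, 0, n_buckets - 1)

    reps, aggs = [], []
    total = len(kpi_values)
    for b in range(n_buckets):
        mask = bucket_ids == b
        if not mask.any():
            continue
        volume_weight = mask.sum() / total            # weight by log volume in the bucket
        aggs.append(metric_values[mask].mean() * volume_weight)
        reps.append(kpi_values[mask].mean())          # representative KPI value
    # Correlation factor between weighted aggregates and representative KPI values.
    return np.corrcoef(reps, aggs)[0, 1]

# Synthetic logs: a latency KPI versus a model-confidence metric.
rng = np.random.default_rng(0)
kpi = rng.uniform(10, 200, 500)
metric = 1.0 - kpi / 250 + rng.normal(0, 0.05, 500)
print(round(bucket_correlation(kpi, metric), 3))
```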
-
Patent number: 11501115
Abstract: Methods, systems, and computer program products for explaining cross domain model predictions are provided herein. A computer-implemented method includes providing a test data point to a domain adaptation model to obtain a prediction, wherein the domain adaptation model is trained on a set of labeled data points and a set of unlabeled data points. The method includes generating a task specific explanation for the prediction that includes one or more data points from among the sets that satisfy a threshold score for influencing the prediction. Additionally, the method includes generating a domain invariant explanation for the prediction. The domain invariant explanation is generated by ranking pairs of data points based on a statistical similarity to the test data point, wherein each pair includes a data point from the set of labeled data points and a data point from the set of unlabeled data points.
Type: Grant
Filed: February 14, 2020
Date of Patent: November 15, 2022
Assignee: International Business Machines Corporation
Inventors: Saswati Dana, Dinesh Garg, Saneem Chemmengath, Sreyash Kenkre, L. Venkata Subramaniam
-
Patent number: 11501116
Abstract: A computing device accesses a machine learning model trained on training data of first bonding operations (e.g., a ball and/or stitch bond). The first bonding operations comprise operations to bond a first set of multiple wires to a first set of surfaces. The machine learning model is trained by supervised learning. The device receives input data indicating process data generated from measurements of second bonding operations. The second bonding operations comprise operations to bond a second set of multiple wires to a second set of surfaces. The device weights the input data according to the machine learning model. The device generates an anomaly predictor indicating a risk for an anomaly occurrence in the second bonding operations based on weighting the input data according to the machine learning model. The device outputs the anomaly predictor to control the second bonding operations.
Type: Grant
Filed: January 21, 2022
Date of Patent: November 15, 2022
Assignee: SAS Institute Inc.
Inventors: Deovrat Vijay Kakde, Haoyu Wang, Anya Mary McGuirk
-
Patent number: 11501117
Abstract: In order to classify a current waveform of current estimated to be supplied to the same electric instrument, even when an operation mode of an operating electric instrument is unknown, a classification computer includes: a first classification unit to perform first classification of each piece of set information by information being included in each piece of the set information being a combination of waveform information and on/off information, and representing a similarity degree of the waveform information; a second classification unit to perform second classification of each piece of the set information by information being included in each piece of the set information and representing a similarity degree of the on/off information; and a third classification unit to classify the set information by a classification result related to the first classification and the second classification.
Type: Grant
Filed: January 9, 2020
Date of Patent: November 15, 2022
Assignee: NEC CORPORATION
Inventor: Takeo Hidaka
-
Patent number: 11501118
Abstract: A digital model repair method includes: providing a point cloud digital model of a target object as input to a generative network of a trained generative adversarial network ‘GAN’, the input point cloud comprising a plurality of points erroneously perturbed by one or more causes, and generating, by the generative network of the GAN, an output point cloud in which the erroneous perturbation of some or all of the plurality of points has been reduced; where the generative network of the GAN was trained using input point clouds comprising a plurality of points erroneously perturbed by said one or more causes, and a discriminator of the GAN was trained to distinguish point clouds comprising a plurality of points erroneously perturbed by said one or more causes and point clouds substantially without such perturbations.
Type: Grant
Filed: May 26, 2020
Date of Patent: November 15, 2022
Assignee: Sony Interactive Entertainment Inc.
Inventors: Nigel John Williams, Fabio Cappello
-
Patent number: 11501119
Abstract: An apparatus for identifying a warship receives a real warship image, estimates a photographing angle and a photographing altitude of the warship from the real warship image, generates virtual warship images from stored virtual warship models based on the estimated photographing angle and photographing altitude, generates virtual warship part images in which main parts are classified and displayed for each of the virtual warship images, generates a part segmentation image in which main parts are classified and displayed for the real warship image, and outputs a type and class identification result of the warship by calculating similarity between each virtual warship part image and the part segmentation image.
Type: Grant
Filed: February 2, 2021
Date of Patent: November 15, 2022
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jiwon Lee, Do-Won Nam, Sung-Won Moon, Ah Reum Oh, Jung Soo Lee
-
Patent number: 11501120
Abstract: An artifact is received and features are extracted therefrom to form a feature vector. Thereafter, a determination is made to alter a malware processing workflow based on a distance of one or more features in the feature vector relative to one or more indicator centroids. Each indicator centroid specifies a threshold distance to trigger an action. Based on such a determination, the malware processing workflow is altered.
Type: Grant
Filed: February 20, 2020
Date of Patent: November 15, 2022
Assignee: Cylance Inc.
Inventors: Eric Glen Petersen, Michael Alan Hohimer, Jian Luan, Matthew Wolff, Brian Michael Wallace
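The centroid-distance routing idea in this abstract can be pictured with a small Python sketch. The centroid values, thresholds, and action names below are invented for the example and are not taken from the patent.

```python
import numpy as np

def route_sample(feature_vector, indicator_centroids):
    """Each indicator centroid carries a threshold distance and an action;
    the workflow is altered when the feature vector falls within that
    distance of the centroid. Euclidean distance is an assumption here."""
    actions = []
    for centroid in indicator_centroids:
        dist = np.linalg.norm(feature_vector - centroid["center"])
        if dist <= centroid["threshold"]:
            actions.append(centroid["action"])
    return actions or ["default_workflow"]

centroids = [
    {"center": np.array([0.9, 0.1, 0.8]), "threshold": 0.3, "action": "deep_scan"},
    {"center": np.array([0.1, 0.9, 0.2]), "threshold": 0.2, "action": "skip_sandbox"},
]
print(route_sample(np.array([0.85, 0.15, 0.75]), centroids))
```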
-
Patent number: 11501121
Abstract: A method for automatically classifying emission tomographic images includes receiving original images and a plurality of class labels designating each original image as belonging to one of a plurality of possible classifications and utilizing a data generator to create generated images based on the original images. The data generator shuffles the original images. The number of generated images is greater than the number of original images. One or more geometric transformations are performed on the generated images. A binomial sub-sampling operation is applied to the transformed images to yield a plurality of sub-sampled images for each original image. A multi-layer convolutional neural network (CNN) is trained using the sub-sampled images and the class labels to classify input images as corresponding to one of the possible classifications. A plurality of weights corresponding to the trained CNN are identified and those weights are used to create a deployable version of the CNN.
Type: Grant
Filed: January 7, 2020
Date of Patent: November 15, 2022
Assignee: Siemens Medical Solutions USA, Inc.
Inventors: Shuchen Zhang, Xinhong Ding
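The augmentation pipeline this abstract outlines (shuffle, geometric transform, binomial sub-sampling) can be sketched in a few lines of Python. The rotation transform, thinning probability, and copies-per-image count are assumptions for the example, not the patented parameters.

```python
import numpy as np

def augment_images(original_images, copies_per_image=4, subsample_p=0.8, seed=0):
    """Shuffle the originals, generate more images than originals, apply a
    simple geometric transform, then binomially sub-sample the counts."""
    rng = np.random.default_rng(seed)
    generated = []
    order = rng.permutation(len(original_images))       # shuffle the original images
    for idx in order:
        img = original_images[idx]
        for k in range(copies_per_image):               # more generated than original
            transformed = np.rot90(img, k=k % 4)        # geometric transformation
            # Binomial sub-sampling: each count is thinned with probability p.
            subsampled = rng.binomial(transformed.astype(int), subsample_p)
            generated.append(subsampled)
    return generated

images = [np.full((4, 4), 100, dtype=np.uint8), np.full((4, 4), 50, dtype=np.uint8)]
print(len(augment_images(images)), "sub-sampled images from", len(images), "originals")
```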
-
Patent number: 11501122
Abstract: An automotive sensor integration module including a plurality of sensors which differ in at least one of a sensing period or an output data format, an interface unit configured to receive pieces of detection data outputted from the plurality of sensors and convert the received detection data into a predetermined data format, and a signal processing unit configured to simultaneously output pieces of converted detection data from the interface unit on the basis of the sensing period of one among the plurality of sensors.
Type: Grant
Filed: December 24, 2019
Date of Patent: November 15, 2022
Assignee: Hyundai Mobis Co., Ltd.
Inventors: Sang Hun Lee, Seung Bum Lee
-
Patent number: 11501123
Abstract: A method and an apparatus for asynchronous data fusion, a storage medium and an electronic device are provided. The method includes: obtaining current frame LiDAR data, and determining current frame LiDAR three-dimensional embeddings; determining a previous frame fused hidden state, and performing a temporal fusion process based on the previous frame fused hidden state and the current frame LiDAR three-dimensional embeddings to generate a current frame temporary hidden state and a current frame output result; and obtaining current frame camera data, determining current frame camera three-dimensional embeddings, and generating a current frame fused hidden state based on the current frame camera three-dimensional embeddings and the current frame temporary hidden state. Asynchronous fusion is performed on the current frame LiDAR data and previous frame camera data, which leads to a low processing latency.
Type: Grant
Filed: September 25, 2020
Date of Patent: November 15, 2022
Assignee: BEIJING QINGZHOUZHIHANG INTELLIGENT TECHNOLOGY CO., LTD
Inventor: Yu Zhang
-
Patent number: 11501124
Abstract: A measuring device includes a first roller pair, a colorimetric unit, a second roller pair, a third roller pair, a motor, and a torque limiter. The third roller pair is driven at a first speed before a trailing end of the sheet with respect to a sheet feeding direction passes through the first roller pair and is driven at a second speed higher than the first speed after the trailing end of the sheet passes through the first roller pair. The first speed is set so as to be equal to the peripheral speed of the first roller pair. The second speed is set so as to be equal to a peripheral speed of the second roller pair in a state in which a slip of the torque limiter is not caused.
Type: Grant
Filed: May 14, 2021
Date of Patent: November 15, 2022
Assignee: Canon Kabushiki Kaisha
Inventors: Eikou Mori, Naoto Tokuma, Akito Sekigawa
-
Patent number: 11501125
Abstract: Surgical articles, such as sponges, are provided in a pack which contains individual surgical articles having UHF RFID or other electronic labels which provide both (a) unique identification information for each article as well as (b) unique identification information for all articles in the pack. Prior to the surgical procedure, the labels are scanned and the identification information for all articles in the pack uploaded to a processor to create a list of available individual surgical articles. At the end of the surgical procedure, the surgical articles are collected and the electronic labels read and compared to the initial list to determine if there are any unaccounted individual surgical articles. The UHF RFID tag may then be used to determine a location of the unaccounted article outside the patient's body.
Type: Grant
Filed: June 17, 2020
Date of Patent: November 15, 2022
Assignee: Innovo Surgical, Inc.
Inventors: Brian E. Stewart, William Stewart, Jan Svoboda
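The reconciliation step this abstract describes amounts to comparing the pre-procedure scan list with the end-of-procedure scan. A minimal Python sketch follows; the tag ID format is invented for the example.

```python
def reconcile_surgical_articles(pre_op_scan, post_op_scan):
    """Build the list of available articles from the pre-procedure scan,
    read the labels again at the end, and report unaccounted articles."""
    expected = set(pre_op_scan)
    recovered = set(post_op_scan)
    return sorted(expected - recovered)

pre_op = ["PACK7-SPONGE-01", "PACK7-SPONGE-02", "PACK7-SPONGE-03"]
post_op = ["PACK7-SPONGE-01", "PACK7-SPONGE-03"]
print("Unaccounted articles:", reconcile_surgical_articles(pre_op, post_op))
```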
-
Patent number: 11501126
Abstract: Technology for generating, reading, and using machine-readable codes is disclosed. There is a method, performed by an image capture device, for reading and using the codes. The method includes obtaining an image and identifying an area in the image having a machine-readable code. The method also includes, within the image area, finding a predefined start marker defining a start point and a predefined stop marker defining a stop point, an axis being defined there between. A plurality of axis points can be defined along the axis. For each axis point, a first distance within the image area to a mark is determined. The distance can be measured from the axis point in a first direction which is orthogonal to the axis. The first distances can be converted to a binary code using Gray code such that each first distance encodes at least one bit of data in the code.
Type: Grant
Filed: January 20, 2021
Date of Patent: November 15, 2022
Assignee: Spotify AB
Inventors: Keenan Cassidy, Damian Ferrai, Ludvig Strigeus, Mattias Svala, Nicklas Söderlind, Jimmy Wahlberg
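The distance-to-bits step this abstract mentions can be illustrated with a short Python sketch: quantize each measured orthogonal distance, treat the quantized level as a Gray code word, and decode it to binary. The quantization granularity is an assumption for the example, not the patented encoding.

```python
def gray_to_binary(g):
    """Standard Gray-code-to-binary conversion."""
    b = g
    while g:
        g >>= 1
        b ^= g
    return b

def decode_distances(distances, levels=4, max_distance=1.0):
    """Quantize each axis-point distance into one of `levels` values and
    Gray-decode it, so each distance encodes at least one bit of data."""
    bits_per_point = (levels - 1).bit_length()
    decoded = []
    for d in distances:
        level = min(int(d / max_distance * levels), levels - 1)  # quantize the distance
        decoded.append(format(gray_to_binary(level), f"0{bits_per_point}b"))
    return "".join(decoded)

# Four axis points, each distance encoding two bits.
print(decode_distances([0.10, 0.40, 0.60, 0.90]))
```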
-
Patent number: 11501127
Abstract: A method for producing a metal insert for a radio-frequency chip card includes the steps of forming or providing an assembly comprising an insulating substrate bearing: at least one antenna coil resting on the substrate, comprising a connection interface to a radio-frequency module, a metal plate comprising radio-frequency permittivity perforations and a cavity for receiving a radio-frequency chip module, respectively arranged facing the antenna coil and its connection interface. The perforations comprise at least two longitudinal slots extending along and facing a portion of the antenna coil, each slot also opening onto the edge of the plate via a passage arranged on the edge. The invention also relates to a corresponding card produced by the method.
Type: Grant
Filed: December 6, 2019
Date of Patent: November 15, 2022
Assignee: THALES DIS FRANCE SAS
Inventors: Jean-Luc Meridiano, Arek Buyukkalender, Claude Colombard, Frédérick Seban, Lucile Mendez
-
Patent number: 11501128
Abstract: A transaction card is described. The transaction card includes a non-plastic layer, one or more embedded electronics, a fill layer, and one or more additional layers. The non-plastic layer has first and second faces and a thickness therebetween, and at least a first opening in the first face. The one or more embedded electronic components are disposed in or adjacent the first opening. The fill layer is in contact with the embedded electronic components, disposed in portions of the first opening not occupied by the embedded electronics. The one or more additional layers are disposed over the fill layer.
Type: Grant
Filed: November 23, 2020
Date of Patent: November 15, 2022
Assignee: Composecure, LLC
Inventors: Adam Lowe, Syeda Hussain
-
Patent number: 11501129
Abstract: An antenna structure comprises an impedance matching part, a first conductive structure and a second conductive structure. The first conductive structure with a first length along a first direction is coupled to a first side of the impedance matching part and has a plurality of first polygon conductive structures, each of which is coupled to each other through a first conductive element. The second conductive structure with a second length along the first direction is coupled to a second side of the impedance matching part and has a plurality of second polygon conductive structures, each of which is coupled to each other through a second conductive element, wherein the second length is larger than the first length. The first and second polygon conductive structures protrude toward the second direction. In one embodiment, the antenna structure can be applied on an object having a metal housing or liquid contained therein.
Type: Grant
Filed: October 8, 2019
Date of Patent: November 15, 2022
Assignee: Securitag Assembly Group Co., Ltd
Inventor: Kai-Jun Liang
-
Patent number: 11501130
Abstract: A memory-centric neural network system and operating method thereof includes: a processing unit; semiconductor memory devices coupled to the processing unit, the semiconductor memory devices containing instructions executed by the processing unit; a weight matrix constructed with rows and columns of memory cells, inputs of the memory cells of a same row being connected to one of axons, outputs of the memory cells of a same column being connected to one of neurons; timestamp registers registering timestamps of the axons and the neurons; and a lookup table containing adjusting values indexed in accordance with the timestamps, wherein the processing unit updates the weight matrix in accordance with the adjusting values.
Type: Grant
Filed: August 11, 2017
Date of Patent: November 15, 2022
Assignee: SK hynix Inc.
Inventors: Kenneth C. Ma, Dongwook Suh
-
Patent number: 11501131
Abstract: A memory-centric neural network system and operating method thereof includes: a processing unit; semiconductor memory devices coupled to the processing unit, the semiconductor memory devices containing instructions executed by the processing unit; a weight matrix constructed with rows and columns of memory cells, inputs of the memory cells of a same row being connected to one of axons, outputs of the memory cells of a same column being connected to one of neurons; timestamp registers registering timestamps of the axons and the neurons; and a lookup table containing adjusting values indexed in accordance with the timestamps, wherein the processing unit updates the weight matrix in accordance with the adjusting values.
Type: Grant
Filed: August 11, 2017
Date of Patent: November 15, 2022
Assignee: SK hynix Inc.
Inventors: Kenneth C. Ma, Dongwook Suh
-
Patent number: 11501132
Abstract: In example implementations described herein, there are systems and methods for processing sensor data from an equipment over a period of time to generate sensor time series data; processing the sensor time series data in a kernel weight layer configured to generate weights to weigh the sensor time series data; providing the weighted sensor time series data to fully connected layers configured to conduct a correlation on the weighted sensor time series data with predictive maintenance labels to generate an intermediate predictive maintenance label; and providing the intermediate predictive maintenance label to an inversed kernel weight layer configured to inverse the weights generated by the kernel weight layer, to generate a predictive maintenance label for the equipment.
Type: Grant
Filed: February 7, 2020
Date of Patent: November 15, 2022
Assignee: Hitachi, Ltd.
Inventors: Qiyao Wang, Haiyan Wang, Chetan Gupta, Hamed Khorasgani, Huijuan Shao, Aniruddha Rajendra Rao
-
Patent number: 11501133
Abstract: A method of training and using a machine learning model that controls for consideration of undesired factors which might otherwise be considered by the trained model during its subsequent analysis of new data. For example, the model may be a neural network trained on a set of training images to evaluate an insurance applicant based upon an image or audio data of the insurance applicant as part of an underwriting process to determine an appropriate life or health insurance premium. The model is trained to probabilistically correlate an aspect of the applicant's appearance with a personal and/or health-related characteristic. Any undesired factors, such as age, sex, ethnicity, and/or race, are identified for exclusion. The trained model receives the image (e.g., a “selfie”) of the insurance applicant, analyzes the image without considering the identified undesired factors, and suggests the appropriate insurance premium based only on the remaining desired factors.
Type: Grant
Filed: June 4, 2020
Date of Patent: November 15, 2022
Assignee: STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY
Inventors: Jeffrey S. Myers, Kenneth J. Sanchez, Michael L. Bernico
-
Patent number: 11501134
Abstract: Disclosed is a convolution operator system for performing a convolution operation concurrently on an image. An input router receives image data. A controller allocates image data to a set of computing blocks based on the size of the image data and the number of available computing blocks. Each computing block produces a convolution output corresponding to each row of the image. The controller allocates a plurality of groups, each having one or more computing blocks, to generate a set of convolution outputs. Further, a pipeline adder aggregates the set of convolution outputs to produce an aggregated convolution output. An output router transmits either the convolution output or the aggregated convolution output for performing a subsequent convolution operation to generate a convolution result for the image data.
Type: Grant
Filed: December 19, 2019
Date of Patent: November 15, 2022
Assignee: HCL TECHNOLOGIES LIMITED
Inventors: Prasanna Venkatesh Balasubramaniyan, Sainarayanan Gopalakrishnan, Gunamani Rajagopal
-
Patent number: 11501135
Abstract: There is provided a smart engine including a profile collector and a main processing module. The profile collector is configured to store a plurality of profiles, one or more suitable profiles being dynamically selected according to an instruction from a user or an automatic selector. The main processing module is connected to the profile collector and directly or indirectly connected to a sensor, and configured to perform a detailed analysis to determine detailed properties of features, objects, or scenes based on suitable sensor data from the sensor.
Type: Grant
Filed: May 9, 2019
Date of Patent: November 15, 2022
Assignee: BRITISH CAYMAN ISLANDS INTELLIGO TECHNOLOGY INC.
Inventors: Meng-Hsun Wen, Cheng-Chih Tsai, Jen-Feng Li, Hong-Ching Chen, Chen-Chu Hsu, Tsung-Liang Chen
-
Patent number: 11501136
Abstract: Systems, methods, and computer program products for determining an attack on a neural network. A data sample is received at a first classifier neural network and at a watermark classifier neural network, wherein the first classifier neural network is trained using a first dataset and a watermark dataset. The first classifier neural network determines a classification label for the data sample. The watermark classifier neural network determines a watermark classification label for the data sample. The data sample is determined to be an adversarial data sample based on the classification label for the data sample and the watermark classification label for the data sample.
Type: Grant
Filed: May 29, 2020
Date of Patent: November 15, 2022
Assignee: PayPal, Inc.
Inventor: Jiyi Zhang
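A small Python sketch of the kind of two-classifier decision this abstract suggests is shown below. The specific decision rule, the label names, and the toy classifiers are assumptions for the example, not the patented detection criterion.

```python
def is_adversarial(sample, main_classifier, watermark_classifier,
                   watermark_label="WATERMARK"):
    """Flag a sample as adversarial when the watermark classifier assigns it
    to the watermark class while the main classifier (trained on regular
    plus watermark data) gives it an ordinary label."""
    main_label = main_classifier(sample)
    wm_label = watermark_classifier(sample)
    return wm_label == watermark_label and main_label != watermark_label

# Toy classifiers standing in for the trained networks.
main = lambda x: "cat" if x > 0 else "dog"
watermark = lambda x: "WATERMARK" if abs(x) > 10 else "clean"
print(is_adversarial(0.5, main, watermark))   # False: ordinary sample
print(is_adversarial(42.0, main, watermark))  # True: watermark-like perturbation
```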
-
Patent number: 11501137
Abstract: A transitive closure data structure is constructed for a pair of features represented in a vector space corresponding to an input dataset. The data structure includes a set of entries corresponding to a set of all possible paths between a first feature in the pair and a second feature in the pair in a graph of the vector space. The data structure is reduced by removing a subset of the set of entries such that only a single entry corresponding to a single path remains in the transitive closure data structure. A feature cross is formed from a cluster of features remaining in a reduced ontology graph resulting from the reducing of the transitive closure data structure. A layer is configured in a neural network to represent the feature cross, which causes the neural network to produce a prediction that is within a defined accuracy relative to the dataset.
Type: Grant
Filed: June 28, 2019
Date of Patent: November 15, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Craig M. Trim, Mary E. Rudden, Aaron K. Baughman, Stefan A. G. Van Der Stockt, Bernard Freund, Augustina Monica Ragwitz
-
Patent number: 11501138
Abstract: Some embodiments provide a neural network inference circuit (NNIC) for executing a neural network that includes multiple computation nodes at multiple layers. The NNIC includes a set of clusters of core computation circuits and a channel, connecting the core computation circuits, that includes separate segments corresponding to each of the clusters. The NNIC includes a fabric controller circuit, a cluster controller circuit for each of the clusters, and a core controller circuit for each of the core computation circuits. The fabric controller circuit receives high-level neural network instructions from a microprocessor and parses the high-level neural network instructions.
Type: Grant
Filed: December 6, 2018
Date of Patent: November 15, 2022
Assignee: PERCEIVE CORPORATION
Inventors: Kenneth Duong, Jung Ko, Steven L. Teig
-
Patent number: 11501139
Abstract: One embodiment provides, for a machine-learning accelerator device, a multiprocessor to execute parallel threads of an instruction stream, the multiprocessor including a compute unit, the compute unit including a set of functional units, each functional unit to execute at least one of the parallel threads of the instruction stream. The compute unit includes compute logic configured to execute a single instruction to scale an input tensor associated with a layer of a neural network according to a scale factor, the input tensor stored in a floating-point data type, the compute logic to scale the input tensor to enable a data distribution of data of the input tensor to be represented by a 16-bit floating point data type.
Type: Grant
Filed: January 12, 2018
Date of Patent: November 15, 2022
Assignee: Intel Corporation
Inventors: Naveen Mellempudi, Dipankar Das
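The tensor-scaling idea in this abstract can be illustrated with a short numpy sketch. Mapping the largest magnitude to 1.0 before casting to float16 is an assumption for the example, not the semantics of the patented instruction.

```python
import numpy as np

def scale_for_half_precision(input_tensor):
    """Choose a scale factor from the tensor's data distribution so the
    scaled values fit a 16-bit floating point type, then cast; keep the
    scale factor so the result can be un-scaled later."""
    tensor = np.asarray(input_tensor, dtype=np.float32)
    max_abs = float(np.abs(tensor).max())
    scale = 1.0 / max_abs if max_abs > 0 else 1.0
    scaled = (tensor * scale).astype(np.float16)   # now within float16 range
    return scaled, scale

x = np.array([7.0e4, -3.5e4, 1.2e4], dtype=np.float32)   # would overflow float16 directly
scaled, s = scale_for_half_precision(x)
print(scaled, s)
```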
-
Patent number: 11501140
Abstract: Hardware neural network processors are provided. A neural core includes a weight memory, an activation memory, a vector-matrix multiplier, and a vector processor. The vector-matrix multiplier is adapted to receive a weight matrix from the weight memory, receive an activation vector from the activation memory, and compute a vector-matrix multiplication of the weight matrix and the activation vector. The vector processor is adapted to receive one or more input vectors from one or more vector sources and perform one or more vector functions on the one or more input vectors to yield an output vector. In some embodiments a programmable controller is adapted to configure and operate the neural core.
Type: Grant
Filed: June 19, 2018
Date of Patent: November 15, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Andrew S. Cassidy, Rathinakumar Appuswamy, John V. Arthur, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Dharmendra S. Modha, Hartmut Penner, Jun Sawada, Brian Taba
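The dataflow this abstract describes (weight matrix times activation vector, followed by a vector function) is sketched below; the choice of tanh as the vector function is an assumption for the example.

```python
import numpy as np

def neural_core_step(weight_matrix, activation_vector, vector_fn=np.tanh):
    """Vector-matrix multiplier stage followed by the vector-processor stage."""
    partial_sums = weight_matrix @ activation_vector   # vector-matrix multiplication
    return vector_fn(partial_sums)                      # vector function on the result

W = np.array([[0.2, -0.5, 0.1],
              [0.7,  0.3, -0.2]])
a = np.array([1.0, 0.5, -1.0])
print(neural_core_step(W, a))
```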
-
Patent number: 11501141
Abstract: Enhanced techniques and circuitry are presented herein for artificial neural networks. These artificial neural networks are formed from artificial synapses, which in the implementations herein comprise memory arrays having non-volatile memory elements. In one implementation, an apparatus comprises a plurality of non-volatile memory arrays configured to store weight values for an artificial neural network. Each of the plurality of non-volatile memory arrays can be configured to receive data from a unified buffer shared among the plurality of non-volatile memory arrays, operate on the data, and shift at least portions of the data to another of the plurality of non-volatile memory arrays.
Type: Grant
Filed: March 15, 2019
Date of Patent: November 15, 2022
Assignee: Western Digital Technologies, Inc.
Inventors: Pi-Feng Chiu, Won Ho Choi, Wen Ma, Martin Lueker-Boden
-
Patent number: 11501142
Abstract: A download dispatch circuit initiates download of an input tile of an input feature map in response to a source buffer of two or more source buffers being available for the input tile, and indicates that the input tile is available in response to completion of the download. An operation dispatch circuit initiates a neural network operation on the input tile in response to the input tile being available and a first destination buffer of two or more destination buffers being available for an output tile of an output feature map, and indicates that the output tile is available in response to completion of the neural network operation. An upload dispatch circuit initiates upload of the output tile to the output feature map in response to the output tile being available, and indicates that the first destination buffer is available in response to completion of the upload.
Type: Grant
Filed: April 3, 2019
Date of Patent: November 15, 2022
Assignee: XILINX, INC.
Inventors: Victor J. Wu, Poching Sun, Thomas A. Branca, Justin Thant Hsin Oo
-
Patent number: 11501143
Abstract: A reconfigurable neural circuit includes an array of processing nodes. Each processing node includes a single physical neuron circuit having only one input and an output, a single physical synapse circuit having a presynaptic input, and a single physical output coupled to the input of the neuron circuit, a weight memory for storing N synaptic conductance values or weights having an output coupled to the single physical synapse circuit, a single physical spike timing dependent plasticity (STDP) circuit having an output coupled to the weight memory, a first input coupled to the output of the neuron circuit, and a second input coupled to the presynaptic input, and interconnect circuitry connected to the presynaptic input and connected to the output of the single physical neuron circuit. The synapse circuit and the STDP circuit are each time multiplexed circuits. The interconnect circuitry in each respective processing node is coupled to the interconnect circuitry in each other processing node.
Type: Grant
Filed: June 20, 2019
Date of Patent: November 15, 2022
Assignee: HRL LABORATORIES, LLC
Inventors: Jose Cruz-Albrecht, Timothy Derosier, Narayan Srinivasa
-
Patent number: 11501144
Abstract: One embodiment of an accelerator includes a computing unit; a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations, the second memory bank configured to store a sufficient amount of the neural network parameters on the computing unit to allow for latency below a specified level with throughput above a specified level. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs computations associated with at least one element of a data array, the one or more computations performed by the MAC operator.
Type: Grant
Filed: September 12, 2019
Date of Patent: November 15, 2022
Assignee: Google LLC
Inventors: Olivier Temam, Harshit Khaitan, Ravi Narayanaswami, Dong Hyuk Woo
-
Patent number: 11501145
Abstract: In one example, a neural network accelerator executes instructions to: load a first weight data element of an array of weight data elements from a memory into a systolic array; extract, from the instructions, information indicating a first number of input data elements to be obtained from a first address of the memory and a second number of input data elements to be skipped between adjacent input data elements to be obtained, the first address being based on first coordinates of the first weight data element, and the first and second numbers being based on a stride of a convolution operation; based on the information, obtain first input data elements from the first address of the memory; and control the systolic array to perform first computations based on the first weight data element and the first input data elements to generate first output data elements of an output data array.
Type: Grant
Filed: September 17, 2019
Date of Patent: November 15, 2022
Assignee: Amazon Technologies, Inc.
Inventors: Jeffrey T. Huynh, Ron Diamant
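The strided access pattern this abstract describes, obtaining a number of input elements from a start address while skipping elements between them, can be pictured with a small Python sketch. Flat one-dimensional addressing and the specific counts are simplifications for the example.

```python
def gather_inputs(input_row, start, count, skip):
    """Starting from an address derived from the weight element's coordinates,
    obtain `count` input elements, skipping `skip` elements between adjacent
    obtained elements (the skip corresponding to the convolution stride)."""
    step = skip + 1
    return [input_row[start + i * step] for i in range(count)]

# Example: stride-2 convolution row; weight element at column offset 1.
inputs = list(range(10))            # 0..9 stand in for one row of the input feature map
print(gather_inputs(inputs, start=1, count=4, skip=1))   # -> [1, 3, 5, 7]
```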
-
Patent number: 11501146
Abstract: An image recognition system includes a first convolution layer, a pooling layer, a second convolution layer, a crossbar circuit having a plurality of input lines, at least one output line intersecting with the input lines, and a plurality of weight elements that are provided at intersection points between the input lines and the output line and weight each input value input to the input lines to output to the output line, and a control portion that selects, from convolution operation results of the first convolution layer, an input value needed to acquire each pooling operation result needed to perform second filter convolution operation at each shift position in the second convolution layer, and inputs the selected input value to the input lines.
Type: Grant
Filed: December 31, 2019
Date of Patent: November 15, 2022
Assignee: DENSO CORPORATION
Inventors: Irina Kataeva, Shigeki Otsuka
-
Patent number: 11501147
Abstract: A disclosed computer-implemented method may include maintaining, within a local memory device (LMD) included in a hardware accelerator, (1) a filter matrix corresponding to a filter location included in each of a set of filters of a convolutional layer of an artificial neural network (ANN), and (2) a set of activation vectors corresponding to an active region of an activation volume input into the convolutional layer. The method may also include determining that the active region of the activation volume is contiguous with a padding region associated with at least a portion of the activation volume. The method may further include directing a matrix multiplication unit (MMU) included in the hardware accelerator to execute a matrix multiplication operation (MMO) using the filter matrix and an activation matrix that may include (1) the set of activation vectors, and (2) at least one padding vector corresponding to the padding region.
Type: Grant
Filed: January 30, 2020
Date of Patent: November 15, 2022
Assignee: Meta Platforms, Inc.
Inventors: Krishnakumar Narayanan Nair, Ehsan Khish Ardestani, Martin Schatz, Yuchen Hao, Abdulkadir Utku Diril, Rakesh Komuravelli
-
Patent number: 11501148
Abstract: A device configured to implement an artificial intelligence deep neural network includes a first matrix and a second matrix. The first matrix resistive processing unit (“RPU”) array receives a first input vector along the rows of the first matrix RPU. A second matrix RPU array receives a second input vector along the rows of the second matrix RPU. A reference matrix RPU array receives an inverse of the first input vector along the rows of the reference matrix RPU and an inverse of the second input vector along the rows of the reference matrix RPU. A plurality of analog to digital converters are coupled to respective outputs of a plurality of summing junctions that receive respective column outputs of the first matrix RPU array, the second matrix RPU array, and the reference RPU array and provides a digital value of the output of the plurality of summing junctions.
Type: Grant
Filed: March 4, 2020
Date of Patent: November 15, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Tayfun Gokmen, Seyoung Kim, Murat Onen
-
Patent number: 11501149
Abstract: A memory device comprising: N cell array regions, a computation processing block suitable for generating computation-completion data by performing a network-level operation on input data, the network-level operation indicating an operation of repeating a layer-level operation M times in a loop, the layer-level operation indicating an operation of performing N neural network computations in parallel, a data operation block suitable for storing the input data and (M*N) pieces of neural network processing information in the N cell array regions, and outputting the computation-completion data through the data transfer buffer, and an operation control block suitable for controlling the computation processing block and the data operation block.
Type: Grant
Filed: May 6, 2020
Date of Patent: November 15, 2022
Assignee: SK hynix Inc.
Inventor: Chang-Min Kwak
-
Patent number: 11501150
Abstract: Various implementations are related to an apparatus with memory cells arranged in columns and rows, and the memory cells are accessible with a column control voltage for accessing the memory cells via the columns and a row control voltage for accessing the memory cells via the rows. The apparatus may include neural network circuitry having neuronal junctions that are configured to receive, record, and provide information related to incoming voltage spikes associated with input signals based on resistance through the neuronal junctions. The apparatus may include stochastic re-programmer circuitry that receives the incoming voltage spikes, receives the information provided by the neuronal junctions, and reconfigures the information recorded in the neuronal junctions based on the incoming voltage spikes associated with the input signals along with a programming control signal provided by the memory circuitry.
Type: Grant
Filed: May 20, 2020
Date of Patent: November 15, 2022
Assignee: Arm Limited
Inventors: Mbou Eyole, Shidhartha Das, Fernando Garcia Redondo
-
Patent number: 11501151
Abstract: The present disclosure advantageously provides a pipelined accumulator that includes a data selector configured to receive a sequence of operands to be summed, an input register coupled to the data selector, an output register, coupled to the data selector, configured to store a sequence of partial sums and output a final sum, and a multi-stage add module coupled to the input register and the output register. The multi-stage add module is configured to store a sequence of partial sums and a final sum in a redundant format, and perform back-to-back accumulation into the output register.
Type: Grant
Filed: May 28, 2020
Date of Patent: November 15, 2022
Assignee: Arm Limited
Inventors: Paul Nicholas Whatmough, Zhi-Gang Liu, Matthew Mattina
-
Patent number: 11501152
Abstract: A mechanism is described for facilitating learning and application of neural network topologies in machine learning at autonomous machines. A method of embodiments, as described herein, includes monitoring and detecting structure learning of neural networks relating to machine learning operations at a computing device having a processor, and generating a recursive generative model based on one or more topologies of one or more of the neural networks. The method may further include converting the generative model into a discriminative model.
Type: Grant
Filed: July 26, 2017
Date of Patent: November 15, 2022
Assignee: INTEL CORPORATION
Inventors: Raanan Yonatan Yehezkel Rohekar, Guy Koren, Shami Nisimov, Gal Novik
-
Patent number: 11501153
Abstract: Methods, apparatus, systems, and articles of manufacture for training a neural network are disclosed. An example apparatus includes a training data segmenter to generate a partial set of labeled training data from a set of labeled training data. A matrix constructor is to create a design of experiments matrix identifying permutations of hyperparameters to be tested. A training controller is to cause a neural network trainer to train a neural network using a plurality of the permutations of hyperparameters in the design of experiments matrix and the partial set of labeled training data, and access results of the training corresponding to each of the permutations of hyperparameters. A result comparator is to select a permutation of hyperparameters based on the results, the training controller to instruct the neural network trainer to train the neural network using the selected permutation of hyperparameters and the labeled training data.
Type: Grant
Filed: December 28, 2017
Date of Patent: November 15, 2022
Assignee: Intel Corporation
Inventors: LayWai Kong, Takeshi Nakazawa, Anne Hansen-Musakwa
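The screening flow this abstract outlines, a design-of-experiments matrix of hyperparameter permutations evaluated on a partial labeled set, can be sketched briefly in Python. The full-factorial construction, the toy scorer, and the grid values are assumptions for the example, not the patented apparatus.

```python
from itertools import product
from random import Random

def build_doe_matrix(hyperparameter_grid):
    """Full-factorial design-of-experiments matrix: one row per permutation."""
    keys = list(hyperparameter_grid)
    return [dict(zip(keys, values))
            for values in product(*hyperparameter_grid.values())]

def screen_hyperparameters(train_fn, doe_matrix, partial_data):
    """Train on the partial labeled set for every permutation in the matrix,
    then return the best-scoring permutation for full training.
    `train_fn` is a hypothetical callable returning a validation score."""
    results = [(train_fn(params, partial_data), params) for params in doe_matrix]
    return max(results, key=lambda r: r[0])[1]

grid = {"learning_rate": [1e-3, 1e-2], "batch_size": [32, 64]}
doe = build_doe_matrix(grid)

# Toy scorer standing in for the neural network trainer on partial data.
rng = Random(0)
toy_score = lambda params, data: rng.random() + (0.1 if params["batch_size"] == 64 else 0.0)
print(screen_hyperparameters(toy_score, doe, partial_data=None))
```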
-
Patent number: 11501154
Abstract: A sensor transformation attention network (STAN) model including sensors, attention modules, a merge module and a task-specific module is provided. The attention modules calculate attention scores of feature vectors corresponding to the input signals collected by the sensors. The merge module calculates attention values of the attention scores, and generates a merged transformation vector based on the attention values and the feature vectors. The task-specific module classifies the merged transformation vector.
Type: Grant
Filed: March 5, 2018
Date of Patent: November 15, 2022
Assignees: SAMSUNG ELECTRONICS CO., LTD., UNIVERSITAET ZUERICH
Inventors: Stefan Braun, Daniel Neil, Enea Ceolini, Jithendar Anumula, Shih-Chii Liu
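The merge step this abstract describes, per-sensor attention scores normalized into attention values and used to weight the feature vectors, is sketched below in numpy. The dot-product-with-mean scoring rule is an assumption for the example, not the learned attention modules of the patent.

```python
import numpy as np

def stan_merge(feature_vectors):
    """Score each sensor's feature vector, softmax the scores into attention
    values, and return the attention-weighted merged transformation vector."""
    F = np.stack(feature_vectors)                        # [num_sensors, feature_dim]
    scores = F @ F.mean(axis=0)                          # one attention score per sensor
    attention = np.exp(scores) / np.exp(scores).sum()    # normalized attention values
    return attention @ F                                 # merged transformation vector

sensor_features = [np.array([0.9, 0.1, 0.0]),            # informative sensor
                   np.array([0.1, 0.1, 0.1])]            # noisy / uninformative sensor
print(stan_merge(sensor_features))
```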
-
Patent number: 11501155
Abstract: Methods, apparatus, and processor-readable storage media for learning machine behavior related to install base information and determining event sequences based thereon are provided herein. An example computer-implemented method includes parsing data storage information based at least in part on parameters related to install base information comprising temporal parameters and event-related parameters; formatting the parsed set of data storage information into a parsed set of sequential data storage information compatible with a neural network model; training the neural network model using the parsed set of sequential data storage information and additional training parameters; predicting, by applying the trained neural network model to the parsed set of sequential data storage information, a future data unavailability event and/or a future data loss event; and outputting an alert based at least in part on the predicted future data unavailability event and/or predicted future data loss event.
Type: Grant
Filed: April 30, 2018
Date of Patent: November 15, 2022
Assignee: EMC IP Holding Company LLC
Inventors: Diwahar Sivaraman, Rashmi Sudhakar, Kartikeya Putturaya, Abhishek Gupta, Venkata Chandra Sekar Rao
-
Patent number: 11501156
Abstract: Decoy data is generated from regular data. A deep neural network, which has been trained with the regular data, is trained with the decoy data. The trained deep neural network, responsive to a client request comprising input data, is operated on the input data. Post-processing is performed using at least an output of the operated trained deep neural network to determine whether the input data is regular data or decoy data. One or more actions are performed based on a result of the performed post-processing.
Type: Grant
Filed: June 28, 2018
Date of Patent: November 15, 2022
Assignee: International Business Machines Corporation
Inventors: Jialong Zhang, Frederico Araujo, Teryl Taylor, Marc Philippe Stoecklin
-
Patent number: 11501157
Abstract: A method is provided for reinforcement learning. The method includes obtaining, by a processor device, a first set and a second set of state-action tuples. Each of the state-action tuples in the first set represents a respective good demonstration. Each of the state-action tuples in the second set represents a respective bad demonstration. The method further includes training, by the processor device using supervised learning with the first set and the second set, a neural network which takes as input a state to provide an output. The output is parameterized to obtain each of a plurality of real-valued constraint functions used for evaluation of each of a plurality of action constraints. The method also includes training, by the processor device, a policy using reinforcement learning by restricting actions predicted by the policy according to each of the plurality of action constraints with each of the plurality of real-valued constraint functions.
Type: Grant
Filed: July 30, 2018
Date of Patent: November 15, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Tu-Hoa Pham, Don Joven Ravoy Agravante, Giovanni De Magistris, Ryuki Tachibana
-
Patent number: 11501158
Abstract: Aspects for vector operations in neural networks are described herein. The aspects may include a controller unit configured to receive an instruction to generate a random vector that includes one or more elements. The instruction may include a predetermined distribution, a count of the elements, and an address of the random vector. The aspects may further include a computation module configured to generate the one or more elements, wherein the one or more elements are subject to the predetermined distribution.
Type: Grant
Filed: October 25, 2018
Date of Patent: November 15, 2022
Assignee: CAMBRICON (XI'AN) SEMICONDUCTOR CO., LTD.
Inventors: Daofu Liu, Xiao Zhang, Shaoli Liu, Tianshi Chen, Yunji Chen
-
Patent number: 11501159
Abstract: A method for text sequence style transfer by two encoder-decoders, including generating, by a first encoder-decoder network model, an output sequence based on a first input sequence and an input sequence style, wherein the output sequence is associated with a second sequence, generating, by a second encoder-decoder network model, a prediction of the first input sequence based on the first input sequence, the output sequence, and a first input sequence style associated with the first input sequence, generating, by a classifier, a prediction of the first input sequence style based on the prediction of the first input sequence, and updating the neural network model based on comparisons between the output sequence and the second sequence, between the prediction of the first input sequence and the first input sequence, and between the prediction of the first input sequence style and the first input sequence style.
Type: Grant
Filed: March 26, 2019
Date of Patent: November 15, 2022
Assignee: Alibaba Group Holding Limited
Inventors: Shanchan Wu, Qiong Zhang, Luo Si
-
Patent number: 11501160
Abstract: In deep learning, and in particular, for data compression for allreduce in deep learning, a gradient may be compressed for synchronization in a data parallel deep neural network training for allreduce by sharing a consensus vector between each node in a plurality of nodes to ensure identical indexing in each of the plurality of nodes prior to performing sparse encoding.
Type: Grant
Filed: March 28, 2019
Date of Patent: November 15, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Minsik Cho, Wei Zhang, Ulrich Finkler
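The consensus-indexing idea in this abstract can be pictured with a small Python sketch: because every node selects gradient values at the same shared index set, the sparse encodings line up and can be allreduced directly. The top-k construction of the consensus vector and the toy gradients are assumptions for the example.

```python
import numpy as np

def build_consensus(global_magnitude_estimate, k):
    """Shared index set, e.g. the k largest entries of a magnitude estimate."""
    return np.argsort(-np.abs(global_magnitude_estimate))[:k]

def compress_with_consensus(local_gradient, consensus_vector):
    """Sparse-encode the local gradient at the shared consensus indices,
    guaranteeing identical indexing on every node before allreduce."""
    values = local_gradient[consensus_vector]
    return consensus_vector, values

grad_node0 = np.array([0.01, 0.90, -0.02, 0.40, 0.00])
grad_node1 = np.array([0.02, 0.80, -0.01, 0.50, 0.03])
consensus = build_consensus(grad_node0 + grad_node1, k=2)
idx0, vals0 = compress_with_consensus(grad_node0, consensus)
idx1, vals1 = compress_with_consensus(grad_node1, consensus)
print(idx0, vals0 + vals1)   # allreduce over identically-indexed sparse values
```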
-
Patent number: 11501161
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for providing factors that explain the generated results of a deep neural network (DNN). In embodiments, multiple machine learning models and a DNN are trained on a training dataset. A preliminary set of trained machine learning models with similar results to the trained DNN are selected for further evaluation. The preliminary set of machine learning models may be evaluated using a distribution analysis to select a reduced set of machine learning models. Results produced by the reduced set of machine learning models are compared, point-by-point, to the results produced by the DNN. The best performing machine learning model, with generated results that perform closest to the DNN generated results, may be selected. One or more factors used by the selected machine learning model are determined.
Type: Grant
Filed: April 4, 2019
Date of Patent: November 15, 2022
Assignee: ADOBE INC.
Inventors: Vaidyanathan Venkatraman, Rajan Madhavan, Omar Rahman, Niranjan Shivanand Kumbi, Brajendra Kumar Bhujabal, Ajay Awatramani