Multilayer Feedforward Patents (Class 706/31)
  • Patent number: 11948063
    Abstract: Computer systems and computer-implemented methods improve a base neural network. In an initial training, preliminary activation values computed for base network nodes for data in the training data set are stored in memory. After the initial training, a new node set is merged into the base neural network to form an expanded neural network, including directly connecting each of the nodes of the new node set to one or more base network nodes. Then the expanded neural network is trained on the training data set using a network error loss function for the expanded neural network.
    Type: Grant
    Filed: June 1, 2023
    Date of Patent: April 2, 2024
    Assignee: D5AI LLC
    Inventors: James K. Baker, Bradley J. Baker
  • Patent number: 11934791
    Abstract: The present disclosure provides projection neural networks and example applications thereof. In particular, the present disclosure provides a number of different architectures for projection neural networks, including two example architectures which can be referred to as: Self-Governing Neural Networks (SGNNs) and Projection Sequence Networks (ProSeqoNets). Each projection neural network can include one or more projection layers that project an input into a different space. For example, each projection layer can use a set of projection functions to project the input into a bit-space, thereby greatly reducing the dimensionality of the input and enabling computation with lower resource usage. As such, the projection neural networks provided herein are highly useful for on-device inference in resource-constrained devices. For example, the provided SGNN and ProSeqoNet architectures are particularly beneficial for on-device inference such as, for example, solving natural language understanding tasks on-device.
    Type: Grant
    Filed: August 1, 2022
    Date of Patent: March 19, 2024
    Assignee: GOOGLE LLC
    Inventors: Sujith Ravi, Zornitsa Kozareva
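The bit-space projection these networks rely on can be sketched with random-hyperplane LSH, a standard scheme. This is an illustrative stand-in, not the patented SGNN/ProSeqoNet implementation, and `num_bits` is an assumed parameter:

```python
import numpy as np

def lsh_bit_projection(x, num_bits, seed=0):
    """Project a float vector into a compact bit vector: each bit is the
    sign of the input's dot product with a random hyperplane."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((num_bits, x.shape[0]))
    return (planes @ x > 0).astype(np.uint8)

x = np.random.default_rng(1).standard_normal(128)
bits = lsh_bit_projection(x, num_bits=16)  # 128 floats -> 16 bits
```

Because the projection is recomputed from a fixed seed at inference time, no embedding matrix needs to be stored, which is the resource saving the abstract points to.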
  • Patent number: 11915146
    Abstract: There is provided an information processing device which efficiently executes machine learning. The information processing device according to one embodiment includes: an obtaining unit which obtains a source code including a code which defines Forward processing of each layer constituting a neural network; a storage unit which stores an association relationship between each Forward processing and Backward processing associated with each Forward processing; and an executing unit which successively executes each code included in the source code, and which calculates an output value of the Forward processing defined by the code based on an input value at a time of execution of each code, and generates a reference structure for Backward processing in a layer associated with the code based on the association relationship stored in the storage unit.
    Type: Grant
    Filed: November 11, 2022
    Date of Patent: February 27, 2024
    Assignee: PREFERRED NETWORKS, INC.
    Inventors: Seiya Tokui, Yuya Unno, Kenta Oono, Ryosuke Okuta
  • Patent number: 11915119
    Abstract: A convolutional neural network (CNN) processing method includes selecting a survival network in a precision convolutional network based on a result of performing a high speed convolution operation between an input and a kernel using a high speed convolutional network, and performing a precision convolution operation between the input and the kernel using the survival network.
    Type: Grant
    Filed: December 20, 2017
    Date of Patent: February 27, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Changyong Son, Jinwoo Son, Chang Kyu Choi, Jaejoon Han
  • Patent number: 11881012
    Abstract: In one aspect, an example method includes (i) extracting a sequence of audio features from a portion of a sequence of media content; (ii) extracting a sequence of video features from the portion of the sequence of media content; (iii) providing the sequence of audio features and the sequence of video features as an input to a transition detector neural network that is configured to classify whether or not a given input includes a transition between different content segments; (iv) obtaining from the transition detector neural network classification data corresponding to the input; (v) determining that the classification data is indicative of a transition between different content segments; and (vi) based on determining that the classification data is indicative of a transition between different content segments, outputting transition data indicating that the portion of the sequence of media content includes a transition between different content segments.
    Type: Grant
    Filed: April 9, 2021
    Date of Patent: January 23, 2024
    Assignee: Gracenote, Inc.
    Inventors: Joseph Renner, Aneesh Vartakavi, Robert Coover
  • Patent number: 11868878
    Abstract: Disclosed herein are techniques for implementing a large fully-connected layer in an artificial neural network. The large fully-connected layer is grouped into multiple fully-connected subnetworks. Each fully-connected subnetwork is configured to classify an object into an unknown class or a class in a subset of target classes. If the object is classified as the unknown class by a fully-connected subnetwork, a next fully-connected subnetwork may be used to further classify the object. In some embodiments, the fully-connected layer is grouped based on a ranking of target classes.
    Type: Grant
    Filed: March 23, 2018
    Date of Patent: January 9, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Randy Huang, Ron Diamant
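The cascade the abstract describes — each subnetwork either commits to a class in its subset or answers "unknown" and defers to the next — can be sketched in a few lines. The subnetworks below are hypothetical stand-ins, not the patented hardware mapping:

```python
def cascade_classify(x, subnetworks):
    """Try each fully-connected subnetwork in rank order; a return of
    None means "unknown class", so the next subnetwork is consulted."""
    for subnet in subnetworks:
        label = subnet(x)
        if label is not None:
            return label
    return None

# Hypothetical stand-ins for subnetworks over ranked class subsets.
first = lambda x: "cat" if x > 0.8 else None
second = lambda x: "dog" if x > 0.3 else None
catch_all = lambda x: "other"

result = cascade_classify(0.5, [first, second, catch_all])  # -> "dog"
```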
  • Patent number: 11860608
    Abstract: The present invention discloses an industrial equipment operation, maintenance and optimization method and system based on a complex network model. The method includes the following steps: obtaining data of all sensors of industrial equipment, and calculating a Spearman correlation coefficient between data of every two of the sensors within the same time period; using each sensor as a node, and using the Spearman correlation coefficient as a weight of a network edge, to construct a fully connected weighted network; and obtaining, when an adjustment instruction for a target feature is received, a currently optimal parameter adjustment path of the target feature based on the fully connected weighted network. In the present invention, production equipment in reality is digitized to construct a complex network oriented to industrial big data. An optimal path for equipment parameter tuning may be found by using the network, thereby reducing dependence of an enterprise on a domain expert.
    Type: Grant
    Filed: January 3, 2020
    Date of Patent: January 2, 2024
    Assignee: QILU UNIVERSITY OF TECHNOLOGY
    Inventors: Xuesong Jiang, Chao Meng, Xiumei Wei, Qingcun Zhu, Dapeng Hu
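The construction in this abstract — a Spearman coefficient between every pair of sensor series as the edge weight of a fully connected graph — can be sketched as follows (no tie handling; a simplification of a full Spearman computation):

```python
import numpy as np

def spearman(a, b):
    """Spearman correlation = Pearson correlation of the rank-transformed
    series (ties ignored for this sketch)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

def sensor_network(readings):
    """One node per sensor; edge weight = Spearman correlation of the
    two sensors' series over the same time period."""
    n = len(readings)
    w = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            w[i, j] = w[j, i] = spearman(readings[i], readings[j])
    return w
```

Path search for parameter tuning would then run over this weighted adjacency matrix.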
  • Patent number: 11836449
    Abstract: An information processing device includes processing circuitry to acquire object spatiotemporal information including spatiotemporal information indicating coordinates of objects in time and space and a name of each of the objects and to generate morphological analysis-undergone object spatiotemporal information by executing a morphological analysis as a process of analyzing the name of each of the objects included in the object spatiotemporal information into one or more words; to acquire morphological analysis-undergone names of vicinal objects, as objects existing in a vicinity of each of the objects in time and space, from the morphological analysis-undergone object spatiotemporal information; to calculate a distribution of vicinal object name words, as words included in the names of the vicinal objects of each of the objects, from the morphological analysis-undergone names; and to convert the distribution of the vicinal object name words to a spatiotemporal information-considered distributed representation.
    Type: Grant
    Filed: April 6, 2021
    Date of Patent: December 5, 2023
    Assignee: MITSUBISHI ELECTRIC CORPORATION
    Inventors: Hideaki Joko, Takahiro Otsuka
  • Patent number: 11830056
    Abstract: The present disclosure provides a method and apparatus for determining a food item from a photograph and a corresponding restaurant serving the food item. An image is received from a user, the image being associated with a consumable item. One or more ingredients of the consumable item in the image are identified, along with a location of the user, and a neural network is used to determine one or more similar images from a database. A restaurant associated with each of the one or more similar images is determined along with a similarity score indicating a similarity between the restaurant and the identified content of the image. The one or more restaurants and/or associated similar food items are ranked based on the similarity score and a list of ranked restaurants is provided to the user.
    Type: Grant
    Filed: November 23, 2020
    Date of Patent: November 28, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Julia X. Gong, Jyotkumar Patel, Yale Song, Xuetao Yin, Xiujia Guo, Rajiv S. Binwade, Houdong Hu
  • Patent number: 11816558
    Abstract: An integrated circuit device for reservoir computing can include a weighted input layer, an unweighted, asynchronous, internal recurrent neural network made up of nodes having binary weighting, and a weighted output layer. Weighting of output signals can be performed using predetermined weighted sums stored in memory. Application specific integrated circuit (ASIC) embodiments may include programmable nodes. Characteristics of the reservoir of the device can be tunable to perform rapid processing and pattern recognition of signals at relatively large rates.
    Type: Grant
    Filed: May 16, 2018
    Date of Patent: November 14, 2023
    Assignee: UNIVERSITY OF MARYLAND, COLLEGE PARK
    Inventors: Daniel Lathrop, Itamar Shani, Peter Megson, Alessandro Restelli, Anthony Robert Mautino
  • Patent number: 11812589
    Abstract: Systems and methods for cooling a datacenter are disclosed. In at least one embodiment, a refrigerant distribution unit (RDU) distributes first refrigerant from a refrigerant reservoir to one or more cold plates to extract heat from at least one computing device and also interfaces between a first refrigerant cooling loop having a first refrigerant and a second refrigerant cooling loop, so that a second refrigerant cooling loop uses second refrigerant to dissipate at least part of such heat through a second condenser unit to an ambient environment.
    Type: Grant
    Filed: May 12, 2021
    Date of Patent: November 7, 2023
    Assignee: Nvidia Corporation
    Inventor: Ali Heydari
  • Patent number: 11808230
    Abstract: A method for estimating pressure in an intake manifold of an indirect injection combustion engine. A pressure sensor measures pressure in the intake manifold, the intake manifold being in fluidic communication with a combustion cylinder, a piston guided in translation in the combustion cylinder and connected to a rotating crankshaft. The method includes: measuring, with the pressure sensor, a maximum pressure corresponding substantially to a maximum pressure in the intake manifold during a preceding cycle of the engine; measuring, with the pressure sensor, a minimum pressure corresponding substantially to a minimum pressure in the intake manifold during the preceding cycle of the engine; determining a pre-calculated average pressure correction factor from a crankshaft angular position and from an engine speed; and estimating the pressure in the intake manifold for the crankshaft angular position of the current engine cycle from the average correction factor and from the minimum and maximum pressures.
    Type: Grant
    Filed: September 14, 2021
    Date of Patent: November 7, 2023
    Inventor: Xavier Moine
  • Patent number: 11790234
    Abstract: In implementations of resource-aware training for neural network, one or more computing devices of a system implement an architecture optimization module for monitoring parameter utilization while training a neural network. Dead neurons of the neural network are identified as having activation scales less than a threshold. Neurons with activation scales greater than or equal to the threshold are identified as survived neurons. The dead neurons are converted to reborn neurons by adding the dead neurons to layers of the neural network having the survived neurons. The reborn neurons are prevented from connecting to the survived neurons for training the reborn neurons.
    Type: Grant
    Filed: December 9, 2022
    Date of Patent: October 17, 2023
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Siyuan Qiao, Jianming Zhang
  • Patent number: 11774553
    Abstract: In an embodiment, a method includes: transmitting a plurality of radar signals using a millimeter-wave radar sensor towards a target; receiving a plurality of reflected radar signals that correspond to the plurality of transmitted radar signals using the millimeter-wave radar; mixing a replica of the plurality of transmitted radar signals with the plurality of received reflected radar signals to generate an intermediate frequency signal; generating raw digital data based on the intermediate frequency signal using an analog-to-digital converter; processing the raw digital data using a constrained L dimensional convolutional layer of a neural network to generate intermediate digital data, where L is a positive integer greater than or equal to 2, and where the neural network includes a plurality of additional layers; and processing the intermediate digital data using the plurality of additional layers to generate information about the target.
    Type: Grant
    Filed: June 18, 2020
    Date of Patent: October 3, 2023
    Assignee: Infineon Technologies AG
    Inventors: Avik Santra, Thomas Reinhold Stadelmayer
  • Patent number: 11741370
    Abstract: Methods, computer program products, and systems are presented. The methods include, for instance: generating a plurality of deep transfer learning networks including a source deep transfer learning network for a source domain and a target deep transfer learning network for a target domain. Transfer layers of the source deep transfer learning network are encoded to a chromosome, diversified, and integrated with the target deep transfer learning network and the target deep transfer learning network passing a predefined fitness threshold condition is produced.
    Type: Grant
    Filed: August 28, 2019
    Date of Patent: August 29, 2023
    Assignee: International Business Machines Corporation
    Inventors: Craig M. Trim, Aaron K. Baughman, Garfield W. Vaughn, Micah Forster
  • Patent number: 11741362
    Abstract: A system for training a neural network receives training data and performs lower precision format training calculations using lower precision format data at one or more training phases. One or more results from the lower precision format training calculations are converted to higher precision format data, and higher precision format training calculations are performed using the higher precision format data at one or more additional training phases. The neural network is modified using the results from the one or more additional training phases. The mixed precision format training calculations train the neural network more efficiently, while maintaining an overall accuracy.
    Type: Grant
    Filed: May 8, 2018
    Date of Patent: August 29, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Daniel Lo, Eric Sen Chung, Bita Darvish Rouhani
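The mixed-precision scheme described here — low-precision forward and backward passes feeding a high-precision master-weight update — can be sketched for a scalar linear model. The model, learning rate, and float16/float32 pairing are assumptions for illustration:

```python
import numpy as np

def mixed_precision_step(w32, x, y, lr=0.1):
    """One training step for y ~ w*x: the forward/backward pass runs in
    float16 (lower precision), then the result is cast up and the master
    weight is updated in float32 (higher precision)."""
    w16 = w32.astype(np.float16)
    x16, y16 = x.astype(np.float16), y.astype(np.float16)
    pred = w16 * x16                                 # low-precision forward
    grad = ((pred - y16) * x16).astype(np.float32)   # convert results up
    return w32 - lr * grad.mean()                    # high-precision update

w = np.float32(0.0)
for _ in range(60):
    w = mixed_precision_step(w, np.ones(4), np.ones(4))
# w converges toward the true weight 1.0
```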
  • Patent number: 11732664
    Abstract: A control device 60 of a vehicle drive device comprises a processing part 81 configured to use a trained model using a neural network to calculate an output parameter of a vehicle, and a control part 82 configured to control the vehicle drive device based on the output parameter. The neural network includes a first input layer to which input parameters of the vehicle other than a design value are input, a second input layer to which the design values are input, a first hidden layer to which outputs of the first input layer are input, a second hidden layer to which outputs of the second input layer are input, and an output layer outputting the output parameter, and is configured so that the second hidden layer becomes closer to the output layer than the first hidden layer.
    Type: Grant
    Filed: September 12, 2019
    Date of Patent: August 22, 2023
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventor: Toshihiro Nakamura
  • Patent number: 11734214
    Abstract: The present disclosure relates to devices for using a configurable stacked architecture for a fixed function datapath with an accelerator for accelerating an operation or a layer of a deep neural network (DNN). The stacked architecture may have a fixed function datapath that includes one or more configurable micro-execution units that execute a series of vector, scalar, reduction, broadcasting, and normalization operations for a DNN layer operation. The fixed function datapath may be customizable based on the DNN or the operation.
    Type: Grant
    Filed: March 25, 2021
    Date of Patent: August 22, 2023
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Stephen Sangho Youn, Steven Karl Reinhardt, Jeremy Halden Fowers, Lok Chand Koppaka, Kalin Ovtcharov
  • Patent number: 11727172
    Abstract: A tuning method for a clutch temperature estimation model may include generating n tuning genes, calculating a tuning value corresponding to a tuning variable by using information of each of the n tuning genes, calculating a temperature estimation accuracy by applying the calculated tuning value to the clutch temperature estimation model, extracting n tuning genes of highest calculated accuracies, and regenerating m tuning genes through recombination of the extracted n tuning genes.
    Type: Grant
    Filed: November 12, 2020
    Date of Patent: August 15, 2023
    Assignees: HYUNDAI MOTOR COMPANY, KIA MOTORS CORPORATION
    Inventors: Geontae Lee, Min Ki Kim, Kyoung Song
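The loop in the abstract — generate genes, score them, keep the n fittest, and recombine them into m new genes — is a standard genetic-algorithm skeleton. A toy sketch with a one-dimensional "gene" and an assumed fitness function, not the clutch-model specifics:

```python
import random

def tune(fitness, n=8, m=16, generations=20, seed=0):
    """Keep the n fittest tuning genes each generation and regenerate m
    candidates by recombining (here: averaging) random parents plus
    small mutation noise."""
    rng = random.Random(seed)
    pop = [rng.uniform(-10, 10) for _ in range(m)]
    for _ in range(generations):
        best = sorted(pop, key=fitness, reverse=True)[:n]
        pop = [(rng.choice(best) + rng.choice(best)) / 2 + rng.gauss(0, 0.1)
               for _ in range(m)]
    return max(pop, key=fitness)

# Hypothetical fitness: maximize -(x - 3)^2, optimum near x = 3.
best = tune(lambda x: -(x - 3) ** 2)
```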
  • Patent number: 11715011
    Abstract: A neural network recognition method includes obtaining a first neural network that includes layers and a second neural network that includes a layer connected to the first neural network, actuating a processor to compute a first feature map from input data based on a layer of the first neural network, compute a second feature map from the input data based on the layer connected to the first neural network in the second neural network, and generate a recognition result based on the first neural network from an intermediate feature map computed by applying an element-wise operation to the first feature map and the second feature map.
    Type: Grant
    Filed: September 11, 2019
    Date of Patent: August 1, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Byungin Yoo, Youngsung Kim, Youngjun Kwak, Chang Kyu Choi
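The element-wise fusion of the two networks' feature maps can be sketched as follows; the tiny ReLU layers and the weights are hypothetical stand-ins for the patented networks:

```python
import numpy as np

def fused_recognition(x, w1, w2, w_out):
    """Compute a feature map from each network over the same input, fuse
    them with an element-wise sum, and classify the fused map."""
    f1 = np.maximum(w1 @ x, 0)   # feature map from the first network's layer
    f2 = np.maximum(w2 @ x, 0)   # feature map from the connected second-network layer
    fused = f1 + f2              # the element-wise operation
    return int(np.argmax(w_out @ fused))
```

Other element-wise operations (e.g. product or maximum) would slot into the same place as the sum.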
  • Patent number: 11699069
    Abstract: Systems and methods are provided for performing predictive assignments pertaining to genetic information. One embodiment is a system that includes a genetic prediction server. The genetic prediction server includes an interface that acquires records that each indicate one or more genetic variants determined to exist within an individual, and a controller. The controller selects one or more machine learning models that utilize the genetic variants as input, and loads the machine learning models. For each individual in the records: the controller predictively assigns at least one characteristic to that individual by operating the machine learning models based on at least one genetic variant indicated in the records for that individual. The controller also generates a report indicating at least one predictively assigned characteristic for at least one individual, and transmits a command via the interface for presenting the report at a display.
    Type: Grant
    Filed: July 13, 2017
    Date of Patent: July 11, 2023
    Assignee: Helix, Inc.
    Inventors: Ryan P. Trunck, Christopher M. Glode, Rani K. Powers, Jennifer L. Lescallett
  • Patent number: 11663468
    Abstract: A method for training a neural network, includes: training a super network to obtain a network parameter of the super network, wherein each network layer of the super network includes multiple candidate network sub-structures in parallel; for each network layer of the super network, selecting, from the multiple candidate network sub-structures, a candidate network sub-structure to be a target network sub-structure; constructing a sub-network based on target network sub-structures each selected in a respective network layer of the super network; and training the sub-network, by taking the network parameter inherited from the super network as an initial parameter of the sub-network, to obtain a network parameter of the sub-network.
    Type: Grant
    Filed: January 16, 2020
    Date of Patent: May 30, 2023
    Assignee: Beijing Xiaomi Intelligent Technology Co., Ltd.
    Inventors: Xiangxiang Chu, Ruijun Xu, Bo Zhang, Jixiang Li, Qingyuan Li, Bin Wang
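The selection-and-inheritance step the abstract describes — pick one candidate sub-structure per layer of the trained super network, then reuse its parameters as the sub-network's initial parameters — can be sketched with candidates represented as (name, params) pairs. A toy stand-in, not the patented search strategy:

```python
import random

def build_subnetwork(super_net, seed=0):
    """Select one candidate sub-structure per layer; the chosen pair
    carries its trained parameters with it, so the sub-network starts
    from the parameters inherited from the super network."""
    rng = random.Random(seed)
    return [rng.choice(layer_candidates) for layer_candidates in super_net]

# Each layer of the (trained) super network holds parallel candidates.
super_net = [[("conv3", 1), ("conv5", 2)], [("identity", 3), ("conv3", 4)]]
sub_net = build_subnetwork(super_net)
```

In a real search, the per-layer choice would be guided by an evaluation metric rather than uniform sampling.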
  • Patent number: 11663478
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for characterizing activity in a recurrent artificial neural network. In one aspect, a method for identifying decision moments in a recurrent artificial neural network includes determining a complexity of patterns of activity in the recurrent artificial neural network, wherein the activity is responsive to input into the recurrent artificial neural network, determining a timing of activity having a complexity that is distinguishable from other activity that is responsive to the input, and identifying the decision moment based on the timing of the activity that has the distinguishable complexity.
    Type: Grant
    Filed: June 11, 2018
    Date of Patent: May 30, 2023
    Assignee: INAIT SA
    Inventors: Henry Markram, Ran Levi, Kathryn Pamela Hess Bellwald
  • Patent number: 11645835
    Abstract: A method and system for creating hypercomplex representations of data includes, in one exemplary embodiment, at least one set of training data with associated labels or desired response values, transforming the data and labels into hypercomplex values, methods for defining hypercomplex graphs of functions, training algorithms to minimize the cost of an error function over the parameters in the graph, and methods for reading hierarchical data representations from the resulting graph. Another exemplary embodiment learns hierarchical representations from unlabeled data. The method and system, in another exemplary embodiment, may be employed for biometric identity verification by combining multimodal data collected using many sensors, including, data, for example, such as anatomical characteristics, behavioral characteristics, demographic indicators, artificial characteristics.
    Type: Grant
    Filed: August 30, 2018
    Date of Patent: May 9, 2023
    Assignee: Board of Regents, The University of Texas System
    Inventors: Aaron Benjamin Greenblatt, Sos S. Agaian
  • Patent number: 11636318
    Abstract: Techniques and mechanisms for servicing a search query using a spiking neural network. In an embodiment, a spiking neural network receives an indication of a first context of the search query, wherein a set of nodes of the spiking neural network each correspond to a respective entry of a repository. One or more nodes of the set of nodes are each excited to provide a respective cyclical response based on the first context, wherein a first cyclical response is by a first node. Due at least in part to a coupling of the excited nodes, a perturbance signal, based on a second context of the search query, results in a change of the first resonance response relative to one or more other resonance responses. In another embodiment, data corresponding to the first node is selected, based on the change, as an at least partial result of the search query.
    Type: Grant
    Filed: December 15, 2017
    Date of Patent: April 25, 2023
    Assignee: Intel Corporation
    Inventors: Arnab Paul, Narayan Srinivasa
  • Patent number: 11604973
    Abstract: Some embodiments provide a method for training parameters of a machine-trained (MT) network. The method receives an MT network with multiple layers of nodes, each of which computes an output value based on a set of input values and a set of trained weight values. Each layer has a set of allowed weight values. For a first layer with a first set of allowed weight values, the method defines a second layer with nodes corresponding to each of the nodes of the first layer, each second-layer node receiving the same input values as the corresponding first-layer node. The second layer has a second, different set of allowed weight values, with the output values of the nodes of the first layer added with the output values of the corresponding nodes of the second layer to compute output values that are passed to a subsequent layer. The method trains the weight values.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: March 14, 2023
    Assignee: PERCEIVE CORPORATION
    Inventors: Eric A. Sather, Steven L. Teig
  • Patent number: 11604987
    Abstract: Various embodiments include methods and neural network computing devices implementing the methods, for generating an approximation neural network. Various embodiments may include performing approximation operations on a weights tensor associated with a layer of a neural network to generate an approximation weights tensor, determining an expected output error of the layer in the neural network due to the approximation weights tensor, subtracting the expected output error from a bias parameter of the layer to determine an adjusted bias parameter and substituting the adjusted bias parameter for the bias parameter in the layer. Such operations may be performed for one or more layers in a neural network to produce an approximation version of the neural network for execution on a resource limited processor.
    Type: Grant
    Filed: March 23, 2020
    Date of Patent: March 14, 2023
    Assignee: Qualcomm Incorporated
    Inventors: Marinus Willem Van Baalen, Tijmen Pieter Frederik Blankevoort, Markus Nagel
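The bias-correction step here — compute the expected output error introduced by the approximated weights over some data and subtract it from the layer's bias — can be sketched directly. The round-to-grid approximation and the calibration set are assumptions for illustration:

```python
import numpy as np

def quantize(w, step=0.25):
    """Stand-in approximation: round weights to a coarse grid."""
    return np.round(w / step) * step

def bias_corrected(w, b, calib_x):
    """Approximate the weights tensor, then absorb the expected output
    error E[(w_q - w) @ x] into an adjusted bias, as the abstract
    describes."""
    w_q = quantize(w)
    err = (calib_x @ (w_q - w).T).mean(axis=0)  # expected output error
    return w_q, b - err                         # substitute adjusted bias
```

By construction, the approximated layer's mean output over the calibration data matches the original layer's mean output exactly.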
  • Patent number: 11601146
    Abstract: Examples described herein include methods, devices, and systems which may compensate input data for nonlinear power amplifier noise to generate compensated input data. In compensating the noise, during an uplink transmission time interval (TTI), a switch path is activated to provide amplified input data to a receiver stage including a recurrent neural network (RNN). The RNN may calculate an error representative of the noise based partly on the input signal to be transmitted and a feedback signal to generate filter coefficient data associated with the power amplifier noise. The feedback signal is provided, after processing through the receiver, to the RNN. During an uplink TTI, the amplified input data may also be transmitted as the RF wireless transmission via an RF antenna. During a downlink TTI, the switch path may be deactivated and the receiver stage may receive an additional RF wireless transmission to be processed in the receiver stage.
    Type: Grant
    Filed: March 24, 2021
    Date of Patent: March 7, 2023
    Assignee: MICRON TECHNOLOGY, INC.
    Inventor: Fa-Long Luo
  • Patent number: 11593664
    Abstract: A method can be performed prior to implementation of a neural network by a processing unit. The neural network comprises a succession of layers and at least one operator applied between at least one pair of successive layers. A computational tool generates an executable code intended to be executed by the processing unit in order to implement the neural network. The computational tool generates at least one transfer function between the at least one pair of layers taking the form of a set of pre-computed values.
    Type: Grant
    Filed: June 30, 2020
    Date of Patent: February 28, 2023
    Assignees: STMicroelectronics (Rousset) SAS, STMicroelectronics S.r.l.
    Inventors: Laurent Folliot, Pierre Demaj, Emanuele Plebani
  • Patent number: 11551093
    Abstract: In implementations of resource-aware training for neural network, one or more computing devices of a system implement an architecture optimization module for monitoring parameter utilization while training a neural network. Dead neurons of the neural network are identified as having activation scales less than a threshold. Neurons with activation scales greater than or equal to the threshold are identified as survived neurons. The dead neurons are converted to reborn neurons by adding the dead neurons to layers of the neural network having the survived neurons. The reborn neurons are prevented from connecting to the survived neurons for training the reborn neurons.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: January 10, 2023
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Siyuan Qiao, Jianming Zhang
  • Patent number: 11544535
    Abstract: Various embodiments describe techniques for making inferences from graph-structured data using graph convolutional networks (GCNs). The GCNs use various pre-defined motifs to filter and select adjacent nodes for graph convolution at individual nodes, rather than merely using edge-defined immediate-neighbor adjacency for information integration at each node. In certain embodiments, the graph convolutional networks use attention mechanisms to select a motif from multiple motifs and select a step size for each respective node in a graph, in order to capture information from the most relevant neighborhood of the respective node.
    Type: Grant
    Filed: March 8, 2019
    Date of Patent: January 3, 2023
    Assignee: ADOBE INC.
    Inventors: John Boaz Tsang Lee, Ryan Rossi, Sungchul Kim, Eunyee Koh, Anup Rao
  • Patent number: 11544525
    Abstract: An artificial intelligence (AI) system is disclosed. The AI system provides an AI system lane processing chain, at least one AI processing block, a local memory, a hardware sequencer, and a lane composer. Each of the at least one AI processing block, the local memory coupled to the AI system lane processing chain, the hardware sequencer coupled to the AI system lane processing chain, and the lane composer is coupled to the AI system lane processing chain. The AI system lane processing chain is dynamically created by the lane composer.
    Type: Grant
    Filed: July 31, 2019
    Date of Patent: January 3, 2023
    Inventor: Sateesh Kumar Addepalli
  • Patent number: 11537869
    Abstract: Systems and methods provide a learned difference metric that operates in a wide artifact space. An example method includes initializing a committee of deep neural networks with labeled distortion pairs, iteratively actively learning a difference metric using the committee and psychophysics tasks for informative distortion pairs, and using the difference metric as an objective function in a machine-learned digital file processing task. Iteratively actively learning the difference metric can include providing an unlabeled distortion pair as input to each of the deep neural networks in the committee, a distortion pair being a base image and a distorted image resulting from application of an artifact applied to the base image, obtaining a plurality of difference metric scores for the unlabeled distortion pair from the deep neural networks, and identifying the unlabeled distortion pair as an informative distortion pair when the difference metric scores satisfy a diversity metric.
    Type: Grant
    Filed: December 27, 2017
    Date of Patent: December 27, 2022
    Assignee: Twitter, Inc.
    Inventors: Ferenc Huszar, Lucas Theis, Pietro Berkes
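The query-by-committee step in this abstract — flagging a distortion pair as informative when the committee's scores disagree — can be sketched as follows. The spread measure (standard deviation) and threshold are illustrative assumptions, not the patent's specific diversity metric.

```python
import numpy as np

def informative_pairs(committee_scores, diversity_threshold):
    """committee_scores: (n_pairs, n_models) difference-metric scores,
    one column per committee member. A pair is 'informative' when the
    committee disagrees, i.e. the score spread exceeds the threshold."""
    spread = committee_scores.std(axis=1)
    return np.flatnonzero(spread > diversity_threshold)

scores = np.array([[0.10, 0.11, 0.09],   # committee agrees -> skip
                   [0.10, 0.80, 0.40],   # committee disagrees -> query label
                   [0.50, 0.52, 0.49]])
idx = informative_pairs(scores, diversity_threshold=0.05)
```

Only the disagreed-upon pair would be sent to the psychophysics task for labeling, which is what makes the active-learning loop label-efficient.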
  • Patent number: 11526989
    Abstract: In brain analysis, anatomical standardization is performed when analyzing a region of interest (ROI). There are individual differences in the shape and size of the brain and by converting the brain into a standard brain, these differences can be compared with each other and subjected to statistical analysis. When generating a standard brain analysis, a large number of pieces of image data are classified into a plurality of groups based on their anatomical features. An intermediate template that is an intermediate conversion image and a conversion map is calculated for each group, and the calculation of the intermediate template and the generation of the intermediate conversion image are repeated while gradually reducing the number of classifications, so that a final standard image is generated. Using the standard image and the intermediate template calculated during the generation of the standard image, spatial standardization of the measured image is performed.
    Type: Grant
    Filed: June 2, 2020
    Date of Patent: December 13, 2022
    Assignee: FUJIFILM HEALTHCARE CORPORATION
    Inventors: Toru Shirai, Ryota Satoh, Yasuo Kawata, Tomoki Amemiya, Yoshitaka Bito, Hisaaki Ochi
  • Patent number: 11526680
    Abstract: Systems and methods are provided to pre-train projection networks for use as transferable natural language representation generators. In particular, example pre-training schemes described herein enable learning of transferable deep neural projection representations over randomized locality sensitive hashing (LSH) projections, thereby surmounting the need to store any embedding matrices because the projections can be dynamically computed at inference time.
    Type: Grant
    Filed: February 14, 2020
    Date of Patent: December 13, 2022
    Assignee: GOOGLE LLC
    Inventors: Sujith Ravi, Zornitsa Kozareva, Chinnadhurai Sankar
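The key point of the abstract — LSH projections can be recomputed at inference time from a seed, so no embedding matrix is stored — can be sketched with random-hyperplane hashing. The seed value and function shape are illustrative assumptions.

```python
import numpy as np

def lsh_projection(features, n_bits, seed=13):
    """Project a feature vector into bit-space via random hyperplanes.
    The projection matrix is regenerated from the seed on the fly, so
    no embedding matrix needs to be stored."""
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(n_bits, len(features)))
    return (planes @ features > 0).astype(np.int8)

x = np.array([0.2, -1.3, 0.7, 0.05])
bits = lsh_projection(x, n_bits=16)
```

Because the generator is seeded, the same input always maps to the same bits, which is what makes the representation usable as a deterministic, storage-free feature extractor.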
  • Patent number: 11518637
    Abstract: Medicine packaging apparatuses and methods for accurately determining a remaining sheet amount of a medicine packaging sheet are described. The apparatus includes: a roll support section to which a core tube of a medicine packaging sheet roll is attached; a sensor disposed in the roll support section for outputting a count value according to a rotation amount; a wireless reader-writer unit for writing information to a core tube IC tag and reading said information; an information generation section for generating information to be written to the core tube IC tag; a remaining sheet amount estimation section for estimating a current amount of remaining sheet based on the information and dimensional information of the core tube; and a controller which selectively performs an operation if a reference time-point count value is not yet written to the core tube IC tag and another operation if the count value is already written thereto.
    Type: Grant
    Filed: October 21, 2020
    Date of Patent: December 6, 2022
    Assignee: YUYAMA MFG. CO., LTD.
    Inventors: Katsunori Yoshina, Tomohiro Sugimoto, Noriyoshi Fujii
  • Patent number: 11500767
    Abstract: In accordance with an embodiment, a method for determining an overall memory size of a global memory area configured to store input data and output data of each layer of a neural network includes: for each current layer of the neural network after a first layer, determining a pair of elementary memory areas based on each preceding elementary memory area associated with a preceding layer, wherein: the two elementary memory areas of the pair of elementary memory areas respectively have two elementary memory sizes, each of the two elementary memory areas are configured to store input data and output data of the current layer of the neural network, the output data is respectively stored in two different locations, and the overall memory size of the global memory area corresponds to a smallest elementary memory size at an output of the last layer of the neural network.
    Type: Grant
    Filed: March 5, 2020
    Date of Patent: November 15, 2022
    Assignee: STMicroelectronics (Rousset) SAS
    Inventors: Laurent Folliot, Pierre Demaj
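A much-simplified version of the sizing problem this abstract addresses: if one global memory area must hold each layer's input and output simultaneously while layers run in sequence, its size is governed by the largest adjacent pair of activation buffers. This ping-pong sketch is an illustration, not the patent's pairwise elementary-area algorithm.

```python
def pingpong_memory(layer_sizes):
    """Simplified sizing: a single global area holding a layer's input
    and output at once must cover the largest adjacent pair."""
    return max(a + b for a, b in zip(layer_sizes, layer_sizes[1:]))

# Activation sizes (bytes) at the output of each layer, input first.
total = pingpong_memory([3072, 8192, 4096, 1024, 10])  # 8192 + 4096
```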
  • Patent number: 11477046
    Abstract: A method and device for aggregating connected objects of a communications network. The connected objects have at least one basic feature. The method includes the following steps implemented on an aggregation device, in order to obtain a group avatar suitable for representing the connected objects: obtaining at least one basic feature; obtaining at least one feature of the group object, linked to a basic feature; and creating the group avatar including: a structure having a basic feature; a structure having a group feature; a structure for linking the group feature to at least one basic feature; and a group proxy structure having an association between an address of the group avatar and an address of the connected objects.
    Type: Grant
    Filed: May 17, 2019
    Date of Patent: October 18, 2022
    Assignee: ORANGE
    Inventors: Stéphane Petit, Olivier Berteche
  • Patent number: 11475273
    Abstract: Systems and methods are provided for automatically scoring a constructed response. The constructed response is processed to generate a plurality of numerical vectors that is representative of the constructed response. A model is applied to the plurality of numerical vectors. The model includes an input layer configured to receive the plurality of numerical vectors, the input layer being connected to a following layer of the model via a first plurality of connections. Each of the connections has a first weight. An intermediate layer of nodes is configured to receive inputs from an immediately-preceding layer of the model via a second plurality of connections, each of the connections having a second weight. An output layer is connected to the intermediate layer via a third plurality of connections, each of the connections having a third weight. The output layer is configured to generate a score for the constructed response.
    Type: Grant
    Filed: March 24, 2020
    Date of Patent: October 18, 2022
    Assignee: Educational Testing Service
    Inventors: Derrick Higgins, Lei Chen, Michael Heilman, Klaus Zechner, Nitin Madnani
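The three-weight-set model the abstract describes (input layer, intermediate layer, output layer, each connected by weighted connections) is a standard feedforward pass. This sketch assumes mean-pooling of the numerical vectors and tanh activations, neither of which is specified by the abstract.

```python
import numpy as np

def score_response(vectors, W1, W2, W3):
    """Forward pass: pool the response's numerical vectors, pass them
    through an intermediate layer, and emit a single score."""
    x = vectors.mean(axis=0)        # pool the plurality of vectors
    h1 = np.tanh(W1 @ x)            # input -> following layer (weights W1)
    h2 = np.tanh(W2 @ h1)           # -> intermediate layer (weights W2)
    return float(W3 @ h2)           # -> output score (weights W3)

rng = np.random.default_rng(1)
vecs = rng.normal(size=(12, 50))    # e.g. one vector per response token
W1, W2, W3 = (rng.normal(size=s) for s in [(32, 50), (16, 32), (1, 16)])
score = score_response(vecs, W1, W2, W3)
```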
  • Patent number: 11475274
    Abstract: A computer-implemented method optimizes a neural network. One or more processors define layers in a neural network based on neuron locations relative to incoming initial inputs and original outgoing final outputs of the neural network, where a first defined layer is closer to the incoming initial inputs than a second defined layer, and where the second defined layer is closer to the original outgoing final outputs than the first defined layer. The processor(s) define parameter criticalities for parameter weights stored in a memory used by the neural network, and associate defined layers in the neural network with different memory banks based on the parameter criticalities for the parameter weights. The processor(s) store parameter weights used by neurons in the first defined layer in the first memory bank and parameter weights used by neurons in the second defined layer in the second memory bank.
    Type: Grant
    Filed: April 21, 2017
    Date of Patent: October 18, 2022
    Assignee: International Business Machines Corporation
    Inventors: Pradip Bose, Alper Buyuktosunoglu, Augusto J. Vega
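The layer-to-memory-bank mapping the abstract describes can be sketched as ranking layers by parameter criticality and splitting the ranking across banks. The convention that bank 0 is the more reliable bank is an assumption of this sketch.

```python
def assign_banks(layer_criticality, n_banks=2):
    """Map each layer to a memory bank by criticality: the most critical
    parameter weights go to bank 0 (the protected bank, by convention
    here), the least critical to the highest-numbered bank."""
    ranked = sorted(range(len(layer_criticality)),
                    key=lambda i: -layer_criticality[i])
    banks = [0] * len(layer_criticality)
    n = len(ranked)
    for rank, layer in enumerate(ranked):
        banks[layer] = min(rank * n_banks // n, n_banks - 1)
    return banks

# Criticality per layer; layers nearer the input are more critical here.
banks = assign_banks([0.9, 0.7, 0.3, 0.1])
```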
  • Patent number: 11461414
    Abstract: A searchable database of software features for software projects can be automatically built in some examples. One such example can involve analyzing descriptive information about a software project to determine software features of the software project. Then a feature vector for the software project can be generated based on the software features of the software project. The feature vector can be stored in a database having multiple feature vectors for multiple software projects. The multiple feature vectors can be easily and quickly searched in response to search queries.
    Type: Grant
    Filed: August 20, 2019
    Date of Patent: October 4, 2022
    Assignee: RED HAT, INC.
    Inventors: Fridolin Pokorny, Sanjay Arora, Christoph Goern
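The search step over stored feature vectors can be illustrated with cosine-similarity ranking. The similarity measure and the toy three-dimensional feature space are assumptions; the abstract does not specify how the vectors are compared.

```python
import numpy as np

def search(db_vectors, query, top_k=2):
    """Rank stored software-project feature vectors by cosine similarity
    to a query vector; return indices of the best matches."""
    db = db_vectors / np.linalg.norm(db_vectors, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    return np.argsort(db @ q)[::-1][:top_k]

# Toy database: each row is one project's feature vector.
db = np.array([[1.0, 0.0, 0.0],    # project 0
               [0.9, 0.1, 0.0],    # project 1, similar to project 0
               [0.0, 0.0, 1.0]])   # project 2, unrelated
hits = search(db, np.array([1.0, 0.05, 0.0]))
```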
  • Patent number: 11454183
    Abstract: A method of generating an ROI profile for a fuel injector using machine learning and a constrained/limited training data set is disclosed. The method includes receiving a first plurality of measurement sets for a fuel injector when operating at a first target set point. Preferably, at least two measurement sets of the first plurality of measurement sets are selected to generate a first averaged ROI profile for the first target set point. The at least two selected measurement sets are then used to train a machine learning model that can output a predicted ROI profile for a fuel injector based on a desired pressure value and/or desired mass flow rate value. Training of the machine learning model preferably includes a predetermined number of iterations that induces overfitting within the model/neural network.
    Type: Grant
    Filed: December 8, 2021
    Date of Patent: September 27, 2022
    Assignee: SOUTHWEST RESEARCH INSTITUTE
    Inventors: Khanh D. Cung, Zachary L. Williams, Ahmed A. Moiz, Daniel C. Bitsis, Jr.
  • Patent number: 11443238
    Abstract: A computer system has access to a database storing learning data for generating a prediction model, the learning data including input data and teacher data. The computer system: performs first learning to set an extraction criterion for extracting the learning data whose input data is similar to prediction target data when the prediction target data is input; extracts learning data from the database based on the extraction criterion to generate a dataset; performs second learning to generate a prediction model using the dataset; generates a decision logic showing the prediction logic of the prediction model; and outputs information presenting the decision logic.
    Type: Grant
    Filed: December 10, 2019
    Date of Patent: September 13, 2022
    Assignee: HITACHI, LTD.
    Inventor: Wataru Takeuchi
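The extraction stage — pulling only the learning data whose inputs resemble the prediction target before fitting the second model — can be sketched with a nearest-neighbor criterion. Euclidean distance and the fixed k are assumptions of this sketch, not the patented extraction criterion.

```python
import numpy as np

def extract_dataset(X_train, y_train, target, k=3):
    """Extract the k training examples whose input data is most similar
    (Euclidean) to the prediction target; the second-learning stage
    would then fit a prediction model on just this dataset."""
    d = np.linalg.norm(X_train - target, axis=1)
    idx = np.argsort(d)[:k]
    return X_train[idx], y_train[idx]

X = np.array([[0.0], [1.0], [2.0], [10.0], [11.0]])
y = np.array([0, 1, 2, 10, 11])
Xs, ys = extract_dataset(X, y, target=np.array([1.2]), k=3)
```

A model trained on the extracted local neighborhood is also easier to explain, which is the point of the decision-logic output in the abstract.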
  • Patent number: 11442779
    Abstract: Embodiments of the present disclosure relate to a method, device and computer program product for determining a resource amount of dedicated processing resources. The method comprises obtaining a structural representation of a neural network for deep learning processing, the structural representation indicating a layer attribute of the neural network that is associated with the dedicated processing resources; and determining the resource amount of the dedicated processing resources required for the deep learning processing based on the structural representation. In this manner, the resource amount of the dedicated processing resources required by the deep learning processing may be better estimated to improve the performance and resource utilization rate of the dedicated processing resource scheduling.
    Type: Grant
    Filed: January 4, 2019
    Date of Patent: September 13, 2022
    Assignee: Dell Products L.P.
    Inventors: Junping Zhao, Sanping Li
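Estimating resource demand from a structural representation can be sketched by summing parameter and activation storage per layer. The dictionary schema for layer attributes here is a hypothetical illustration, not the patent's representation format.

```python
def estimate_memory(layers, bytes_per_value=4):
    """Estimate the dedicated-memory demand of a network from its
    structural representation: parameters plus activations per layer,
    at a given numeric precision."""
    total = 0
    for layer in layers:
        params = layer.get("weights", 0) + layer.get("biases", 0)
        total += (params + layer["activations"]) * bytes_per_value
    return total

# Hypothetical two-layer fully-connected structure (784 -> 128 -> 10).
structure = [
    {"weights": 784 * 128, "biases": 128, "activations": 128},
    {"weights": 128 * 10,  "biases": 10,  "activations": 10},
]
mem = estimate_memory(structure)  # bytes at float32 precision
```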
  • Patent number: 11423233
    Abstract: The present disclosure provides projection neural networks and example applications thereof. In particular, the present disclosure provides a number of different architectures for projection neural networks, including two example architectures which can be referred to as: Self-Governing Neural Networks (SGNNs) and Projection Sequence Networks (ProSeqoNets). Each projection neural network can include one or more projection layers that project an input into a different space. For example, each projection layer can use a set of projection functions to project the input into a bit-space, thereby greatly reducing the dimensionality of the input and enabling computation with lower resource usage. As such, the projection neural networks provided herein are highly useful for on-device inference in resource-constrained devices. For example, the provided SGNN and ProSeqoNet architectures are particularly beneficial for on-device inference such as, for example, solving natural language understanding tasks on-device.
    Type: Grant
    Filed: January 5, 2021
    Date of Patent: August 23, 2022
    Assignee: GOOGLE LLC
    Inventors: Sujith Ravi, Zornitsa Kozareva
  • Patent number: 11301776
    Abstract: A method for machine learning model training in a mixed CPU/GPU environment is provided, in which the amount of general processing unit memory is larger than the amount of special processing unit memory. The method includes loading a complete training data set into the memory of the general processing unit, determining importance values for the training data vectors in the provided training data set, dynamically transferring training data vectors of the training data set from the general processing unit memory to the special processing unit memory using the importance value of each training data vector as the decision criterion, wherein the importance value used is taken from an earlier training round of the machine learning model, and executing a training algorithm on the special processing unit with the training data vectors having the highest available importance values from one of the earlier training rounds.
    Type: Grant
    Filed: April 14, 2018
    Date of Patent: April 12, 2022
    Assignee: International Business Machines Corporation
    Inventors: Celestine Duenner, Thomas P. Parnell, Charalampos Pozidis
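The selection step — moving only the highest-importance training vectors into the smaller special-processing-unit memory — can be sketched as a top-k pick over importance values carried over from an earlier round. The capacity being expressed as a vector count is a simplifying assumption.

```python
import numpy as np

def select_for_gpu(importance, gpu_capacity):
    """Pick indices of the training vectors with the highest importance
    values (from an earlier training round) that fit into the smaller
    special-processing-unit memory."""
    order = np.argsort(importance)[::-1]
    return np.sort(order[:gpu_capacity])

# Importance of 8 training vectors from the previous round; GPU holds 3.
imp = np.array([0.1, 0.9, 0.3, 0.8, 0.05, 0.7, 0.2, 0.4])
chosen = select_for_gpu(imp, gpu_capacity=3)
```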
  • Patent number: 11295174
    Abstract: A computer system and method for extending parallelized asynchronous reinforcement learning to include agent modeling for training a neural network is described. Coordinated operation of plurality of hardware processors or threads is utilized such that each functions as a worker process that is configured to simultaneously interact with a target computing environment for local gradient computation based on a loss determination mechanism and to update global network parameters. The loss determination mechanism includes at least a policy loss term (actor), a value loss term (critic), and a supervised cross entropy loss. Variations are described further where the neural network is adapted to include a latent space to track agent policy features.
    Type: Grant
    Filed: November 5, 2019
    Date of Patent: April 5, 2022
    Assignee: ROYAL BANK OF CANADA
    Inventors: Pablo Francisco Hernandez Leal, Bilal Kartal, Matthew Edmund Taylor
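The three-term loss the abstract names — a policy (actor) term, a value (critic) term, and a supervised cross-entropy term for modeling the other agent — can be sketched per worker step as below. The weighting coefficients `beta` and `lam` are illustrative assumptions.

```python
import numpy as np

def worker_loss(log_prob, advantage, value, value_target,
                agent_logits, agent_action, beta=0.5, lam=0.1):
    """Combined per-step loss: actor term + weighted critic term +
    weighted cross-entropy on the modeled agent's observed action."""
    policy_loss = -log_prob * advantage                    # actor
    value_loss = (value - value_target) ** 2               # critic
    probs = np.exp(agent_logits) / np.exp(agent_logits).sum()
    ce_loss = -np.log(probs[agent_action])                 # agent modeling
    return policy_loss + beta * value_loss + lam * ce_loss

total = worker_loss(log_prob=-0.5, advantage=1.0,
                    value=0.2, value_target=0.5,
                    agent_logits=np.array([0.0, 0.0]), agent_action=0)
```

Each asynchronous worker would compute gradients of this loss locally and push them to the shared global network parameters.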
  • Patent number: 11238337
    Abstract: A method is described for designing systems that provide efficient implementations of feed-forward, recurrent, and deep networks that process dynamic signals using temporal filters and static or time-varying nonlinearities. A system design methodology is described that provides an engineered architecture. This architecture defines a core set of network components and operations for efficient computation of dynamic signals using temporal filters and static or time-varying nonlinearities. These methods apply to a wide variety of connected nonlinearities that include temporal filters in the connections. Here we apply the methods to synaptic models coupled with spiking and/or non-spiking neurons whose connection parameters are determined using a variety of methods of optimization.
    Type: Grant
    Filed: August 22, 2016
    Date of Patent: February 1, 2022
    Assignee: Applied Brain Research Inc.
    Inventors: Aaron Russell Voelker, Christopher David Eliasmith
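One concrete instance of "temporal filters in the connections" is the standard first-order lowpass (exponential) synapse model; a discrete-time sketch follows. The time constants chosen are illustrative.

```python
import numpy as np

def lowpass_synapse(inputs, tau, dt=0.001):
    """Discrete first-order lowpass filter, a common synapse model:
    y[t] = a*y[t-1] + (1-a)*x[t], with a = exp(-dt/tau)."""
    a = np.exp(-dt / tau)
    y = np.zeros_like(inputs, dtype=float)
    acc = 0.0
    for t, x in enumerate(inputs):
        acc = a * acc + (1.0 - a) * x
        y[t] = acc
    return y

# A single input spike decays exponentially through the synapse.
out = lowpass_synapse(np.array([1.0, 0.0, 0.0, 0.0, 0.0]), tau=0.005)
```

In the architecture described, such filters sit on the connections between (spiking or non-spiking) neurons, and the connection parameters are then optimized.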
  • Patent number: 11138505
    Abstract: A method of generating a neural network may be provided. A method may include applying non-linear quantization to a plurality of synaptic weights of a neural network model. The method may further include training the neural network model. Further, the method may include generating a neural network output from the trained neural network model based on one or more inputs received by the trained neural network model.
    Type: Grant
    Filed: December 21, 2017
    Date of Patent: October 5, 2021
    Assignee: FUJITSU LIMITED
    Inventors: Masaya Kibune, Xuan Tan
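One common form of non-linear weight quantization is logarithmic: weights are snapped to signed powers of two, which concentrates quantization levels near zero where synaptic weights typically cluster. This is a sketch of that general technique, not necessarily the quantizer claimed in the patent.

```python
import numpy as np

def log_quantize(weights, n_levels=8):
    """Non-linear (logarithmic) quantization: snap each weight to the
    nearest signed power of two within n_levels exponent steps of 1."""
    sign = np.sign(weights)
    mag = np.clip(np.abs(weights), 1e-12, None)   # avoid log(0)
    exp = np.clip(np.round(np.log2(mag)), -n_levels + 1, 0)
    return sign * 2.0 ** exp

w = np.array([0.9, -0.4, 0.07, 0.0012, -1.0])
q = log_quantize(w)  # -> [1.0, -0.5, 0.0625, 0.0078125, -1.0]
```

Power-of-two weights also make the eventual multiply a bit-shift, which is the usual hardware motivation for this scheme.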
  • Patent number: 11016649
    Abstract: Various systems, methods, and media allow for graphical display of multivariate data in parallel coordinate plots and similar plots for visualizing data for a plurality of variables simultaneously. These systems, methods, and media can aggregate individual data points into curves between axes, significantly improving functioning of computer systems by decreasing the rendering time for such plots. Certain implementations can allow a user to examine the relationship between two or more variables, by displaying the data on non-parallel or other transformed axes.
    Type: Grant
    Filed: August 29, 2019
    Date of Patent: May 25, 2021
    Assignee: Palantir Technologies Inc.
    Inventors: Albert Slawinski, Andreas Sjoberg