Multilayer Feedforward Patents (Class 706/31)
- Patent number: 12248861
  Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing inputs using antisymmetric neural networks.
  Type: Grant
  Filed: September 3, 2020
  Date of Patent: March 11, 2025
  Assignee: DeepMind Technologies Limited
  Inventors: David Benjamin Pfau, James Spencer, Alexander Graeme de Garis Matthews
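The abstract does not say how antisymmetry is enforced; a common construction for antisymmetric networks builds the output as a determinant over per-input features, which flips sign whenever two input rows are exchanged. The sketch below illustrates only that general idea; the feature map, shapes, and names are assumptions, not taken from the patent.

```python
import numpy as np

def phi(x, W, b):
    """Per-particle feature map: maps each input row to n features (illustrative single layer)."""
    return np.tanh(x @ W + b)            # shape: (n_particles, n_features)

def antisymmetric_output(X, W, b):
    """Antisymmetric in the rows of X: swapping two rows flips the sign of the output.

    X: (n, d) array of n particle/input vectors.
    W: (d, n) weight matrix so that phi(X) is square (n x n) and a determinant exists.
    """
    Phi = phi(X, W, b)                   # (n, n) matrix of per-particle features
    return np.linalg.det(Phi)            # determinant changes sign under row exchange

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 4, 3
    X = rng.normal(size=(n, d))
    W = rng.normal(size=(d, n))
    b = rng.normal(size=(n,))
    out = antisymmetric_output(X, W, b)
    X_swapped = X[[1, 0, 2, 3]]          # exchange inputs 0 and 1
    out_swapped = antisymmetric_output(X_swapped, W, b)
    print(out, out_swapped)              # out_swapped is approximately -out
```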
- Patent number: 12236352
  Abstract: Methods, computer program products, and systems are presented. The methods can include, for instance: generating a plurality of deep transfer learning networks. Further, the methods can include, for instance: encoding one or more transfer layers.
  Type: Grant
  Filed: July 10, 2023
  Date of Patent: February 25, 2025
  Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
  Inventors: Craig M. Trim, Aaron K. Baughman, Garfield W. Vaughn, Micah Forster
- Patent number: 12216229
  Abstract: In an embodiment, a method includes: transmitting a plurality of radar signals using a millimeter-wave radar sensor towards a target; receiving a plurality of reflected radar signals that correspond to the plurality of transmitted radar signals using the millimeter-wave radar; mixing a replica of the plurality of transmitted radar signals with the plurality of received reflected radar signals to generate an intermediate frequency signal; generating raw digital data based on the intermediate frequency signal using an analog-to-digital converter; processing the raw digital data using a constrained L dimensional convolutional layer of a neural network to generate intermediate digital data, where L is a positive integer greater than or equal to 2, and where the neural network includes a plurality of additional layers; and processing the intermediate digital data using the plurality of additional layers to generate information about the target.
  Type: Grant
  Filed: August 29, 2023
  Date of Patent: February 4, 2025
  Assignee: Infineon Technologies AG
  Inventors: Avik Santra, Thomas Reinhold Stadelmayer
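The mixing step the abstract describes is the standard FMCW dechirp: multiplying the received echo by a replica of the transmitted chirp produces a low-frequency intermediate-frequency (beat) signal whose frequency encodes range. A minimal numpy sketch, assuming an idealized single stationary target and omitting the ADC and the constrained convolutional layers, which the abstract does not detail:

```python
import numpy as np

# Illustrative FMCW parameters (not from the patent)
fs = 2e6            # sample rate [Hz]
T_chirp = 100e-6    # chirp duration [s]
bandwidth = 1e9     # chirp bandwidth [Hz]
c = 3e8             # speed of light [m/s]
slope = bandwidth / T_chirp

t = np.arange(0, T_chirp, 1 / fs)
tx = np.exp(1j * np.pi * slope * t**2)           # transmitted chirp (complex baseband)

target_range = 12.0                              # [m], assumed single target
tau = 2 * target_range / c                       # round-trip delay
rx = np.exp(1j * np.pi * slope * (t - tau)**2)   # delayed echo (unit amplitude)

# Mixing a replica of the transmitted chirp with the received echo yields the
# intermediate-frequency (beat) signal (one complex-baseband convention).
if_signal = tx * np.conj(rx)

# The beat frequency is proportional to range: f_b = slope * tau.
spectrum = np.abs(np.fft.fft(if_signal))
beat_bin = int(np.argmax(spectrum[: len(spectrum) // 2]))
estimated_range = beat_bin * (fs / len(t)) * c / (2 * slope)
print(f"estimated range ~ {estimated_range:.1f} m")
```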
- Patent number: 12205007
  Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed that optimize layers of a machine learning model for a target hardware platform. An example apparatus includes a communication processor to obtain information specific to the target hardware platform (THP) on which to execute the machine learning model; a layer generation controller to generate layers of the machine learning model based on the information specific to the THP; and a deployment controller to, in response to the machine learning model satisfying a threshold error metric, deploy the machine learning model to the THP.
  Type: Grant
  Filed: June 26, 2020
  Date of Patent: January 21, 2025
  Assignee: Intel Corporation
  Inventor: Amit Bleiweiss
- Patent number: 12190870
  Abstract: A learning device includes a memory, and processing circuitry coupled to the memory and configured to receive an input of a plurality of series for learning having known accuracy, and learn a model represented by a neural network, the model being capable of determining accuracy levels of two series when given feature amounts of the two series among the plurality of series.
  Type: Grant
  Filed: February 1, 2019
  Date of Patent: January 7, 2025
  Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
  Inventors: Atsunori Ogawa, Marc Delcroix, Shigeki Karita, Tomohiro Nakatani
- Patent number: 12182692
  Abstract: An operation method based on a chip having an operation array is provided, including: obtaining a neural network model including D neural network layers, each neural network layer being used for operating M*N neurons; determining, from M*N*D neurons of the D neural network layers, K neurons to be operated corresponding to each operation clock cycle, and inputting the K neurons to K rows of operation units of an operation array; and performing an operation on the inputted K neurons in the each operation clock cycle by using the operation array. The M*N*D neurons are mapped to K dimensions and then allocated to the K rows of operation units. The operation array is used for operating the neural network model in a full-load operation mode.
  Type: Grant
  Filed: May 28, 2021
  Date of Patent: December 31, 2024
  Assignee: Tencent Technology (Shenzhen) Company Limited
  Inventor: Jiaxin Li
- Patent number: 12165069
  Abstract: Some embodiments provide a compiler for optimizing the implementation of a machine-trained network (e.g., a neural network) on an integrated circuit (IC). The compiler of some embodiments receives a specification of a machine-trained network including multiple layers of computation nodes and generates a graph representing options for implementing the machine-trained network in the IC. In some embodiments, the graph includes nodes representing options for implementing each layer of the machine-trained network and edges between nodes for different layers representing different implementations that are compatible. In some embodiments, the graph is populated according to rules relating to memory use and the numbers of cores necessary to implement a particular layer of the machine-trained network such that nodes for a particular layer, in some embodiments, represent fewer than all the possible groupings of sets of clusters.
  Type: Grant
  Filed: July 29, 2019
  Date of Patent: December 10, 2024
  Assignee: Amazon Technologies, Inc.
  Inventors: Brian Thomas, Steven L. Teig
- Patent number: 12130167
  Abstract: A remedy judging system includes a weight checking device and a judgment unit. The weight checking device checks the weight of weighed product discharged from a weighing device. The judgment unit makes a judgment relating to remedies for the weighing device on the basis of "correct weight," "overweight," and "underweight" checking results, relative to a predetermined weight serving as a norm, obtained by the weight checking device. The judgment unit makes the judgment on the basis of determination patterns having an "underweight" checking result and at least one checking result that consecutively follows the "underweight" checking result.
  Type: Grant
  Filed: November 12, 2020
  Date of Patent: October 29, 2024
  Assignee: Ishida Co., Ltd.
  Inventors: Ryoichi Sato, Mikio Kishikawa
- Patent number: 12124391
  Abstract: The present disclosure relates to devices for using a configurable stacked architecture for a fixed function datapath with an accelerator for accelerating an operation or a layer of a deep neural network (DNN). The stacked architecture may have a fixed function datapath that includes one or more configurable micro-execution units that execute a series of vector, scalar, reduction, broadcasting, and normalization operations for a DNN layer operation. The fixed function datapath may be customizable based on the DNN or the operation.
  Type: Grant
  Filed: July 5, 2023
  Date of Patent: October 22, 2024
  Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
  Inventors: Stephen Sangho Youn, Steven Karl Reinhardt, Jeremy Halden Fowers, Lok Chand Koppaka, Kalin Ovtcharov
- Patent number: 12106200
  Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for detecting intermediate reinforcement learning goals. One of the methods includes obtaining a plurality of demonstration sequences, each of the demonstration sequences being a sequence of images of an environment while a respective instance of a reinforcement learning task is being performed; for each demonstration sequence, processing each image in the demonstration sequence through an image processing neural network to determine feature values for a respective set of features for the image; determining, from the demonstration sequences, a partitioning of the reinforcement learning task into a plurality of subtasks, wherein each image in each demonstration sequence is assigned to a respective subtask of the plurality of subtasks; and determining, from the feature values for the images in the demonstration sequences, a respective set of discriminative features for each of the plurality of subtasks.
  Type: Grant
  Filed: February 13, 2023
  Date of Patent: October 1, 2024
  Assignee: Google LLC
  Inventor: Pierre Sermanet
- Patent number: 12099934
  Abstract: User-driven exploration functionality, referred to herein as a Scratchpad, is a post-learning extension for machine learning systems. For example, in ESP, consisting of the Predictor (a surrogate model of the domain) and Prescriptor (a solution generator model), the Scratchpad allows the user to modify the suggestions of the Prescriptor, and evaluate each such modification interactively with the Predictor. Thus, the Scratchpad makes it possible for the human expert and the AI to work together in designing better solutions. This interactive exploration also allows the user to conclude that the solutions derived in this process are the best found, making the process trustworthy and transparent to the user.
  Type: Grant
  Filed: March 23, 2021
  Date of Patent: September 24, 2024
  Assignee: Cognizant Technology Solutions U.S. Corporation
  Inventors: Olivier Francon, Babak Hodjat, Risto Miikkulainen
- Patent number: 12073310
  Abstract: Deep neural network accelerators (DNNs) with independent datapaths for simultaneous processing of different classes of operations and related methods are described. An example DNN accelerator includes an instruction dispatcher for receiving chains of instructions having both instructions for performing a first class of operations and a second class of operations corresponding to a neural network model. The DNN accelerator further includes a first datapath and a second datapath, where each is configured to execute at least one instruction chain locally before outputting any results. The instruction dispatcher is configured to forward instructions for performing the first class of operations to the first datapath and forward instructions for performing the second class of operations to the second datapath to overlap in time a performance of at least a subset of the first class of operations with a performance of at least a subset of the second class of operations.
  Type: Grant
  Filed: April 1, 2020
  Date of Patent: August 27, 2024
  Assignee: Microsoft Technology Licensing, LLC
  Inventors: Stephen Sangho Youn, Lok Chand Koppaka, Steven Karl Reinhardt
- Patent number: 12065323
  Abstract: Medicine packaging apparatuses and methods for accurately determining a remaining sheet amount of a medicine packaging sheet are described. The apparatus includes: a roll support section to which a core tube of a medicine packaging sheet roll is attached; a sensor disposed in the roll support section for outputting a count value according to a rotation amount; a wireless reader-writer unit for writing information to a core tube IC tag and reading said information; an information generation section for generating information to be written to the core tube IC tag; a remaining sheet amount estimation section for estimating a current amount of remaining sheet based on the information and dimensional information of the core tube; and a controller which selectively performs an operation if a reference time-point count value is not yet written to the core tube IC tag and another operation if the count value is already written thereto.
  Type: Grant
  Filed: October 31, 2022
  Date of Patent: August 20, 2024
  Assignee: YUYAMA MFG. CO., LTD.
  Inventors: Katsunori Yoshina, Tomohiro Sugimoto, Noriyoshi Fujii
- Patent number: 12068767
  Abstract: A parameter determination apparatus adds a third layer between first and second layers of the neural network. The third layer includes a third node not including a non-linear activation function. Outputs of first nodes of the first layer are inputted to the third node. The number of third nodes of the third layer is smaller than the number of second nodes of the second layer. The parameter determination apparatus further learns a weight between the third and second layers as a part of the parameters and selects, as a part of the parameters, one valid path used as a valid connecting path in the neural network for each second node from connecting paths that connect the third node and the second nodes on the basis of the learned weight.
  Type: Grant
  Filed: September 3, 2021
  Date of Patent: August 20, 2024
  Assignee: NEC CORPORATION
  Inventor: Masaaki Tanio
- Patent number: 12066570
  Abstract: A method for classifying objects based on measured data recorded by at least one radar sensor. In the method, a frequency spectrum of time-dependent measured data of the radar sensor is provided; from this frequency spectrum, locations from which reflected radar radiation has reached the radar sensor are ascertained; at least one group of such locations belonging to one and the same object is ascertained; for each location in this group, a portion of the frequency spectrum that corresponds to the radar radiation reflected from this location is ascertained; all these portions for the object are aggregated and are fed to a classifier; the object is assigned by the classifier to one or multiple classes of a predefined classification.
  Type: Grant
  Filed: July 2, 2021
  Date of Patent: August 20, 2024
  Assignee: ROBERT BOSCH GMBH
  Inventors: Kilian Rambach, Lisa-Kristina Morgan, Adriana-Eliza Cozma
- Patent number: 12026620
  Abstract: There is provided an information processing device which efficiently executes machine learning. The information processing device according to one embodiment includes: an obtaining unit which obtains a source code including a code which defines Forward processing of each layer constituting a neural network; a storage unit which stores an association relationship between each Forward processing and Backward processing associated with each Forward processing; and an executing unit which successively executes each code included in the source code, and which calculates an output value of the Forward processing defined by the code based on an input value at a time of execution of each code, and generates a reference structure for Backward processing in a layer associated with the code based on the association relationship stored in the storage unit.
  Type: Grant
  Filed: August 26, 2019
  Date of Patent: July 2, 2024
  Assignee: Preferred Networks, Inc.
  Inventors: Seiya Tokui, Yuya Unno, Kenta Oono, Ryosuke Okuta
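The abstract describes a define-by-run style: ordinary forward code is executed line by line, and, using a stored association between each Forward operation and its Backward counterpart, a reference structure for the backward pass is built as a side effect. A toy tape-based sketch of that idea (the names, the registry, and the two example operations are illustrative, not the patented implementation):

```python
import numpy as np

BACKWARD_OF = {}   # stored association: forward operation name -> backward function

def register(forward_name):
    def deco(backward_fn):
        BACKWARD_OF[forward_name] = backward_fn
        return backward_fn
    return deco

class Variable:
    def __init__(self, data):
        self.data = data
        self.creator = None          # (forward_name, inputs): reference structure for Backward
        self.grad = None

def forward(name, fn, *inputs):
    """Execute a piece of Forward code and record how to run Backward for it."""
    out = Variable(fn(*[v.data for v in inputs]))
    out.creator = (name, inputs)
    return out

@register("linear")
def linear_backward(grad_out, x, W):
    return grad_out @ W.data.T, x.data.T @ grad_out

@register("relu")
def relu_backward(grad_out, x):
    return (grad_out * (x.data > 0),)

def backward(v, grad_out):
    """Walk the recorded reference structure, using the stored associations."""
    v.grad = grad_out
    if v.creator is None:
        return
    name, inputs = v.creator
    for inp, g in zip(inputs, BACKWARD_OF[name](grad_out, *inputs)):
        backward(inp, g)

# Usage: the forward code itself defines the graph as it runs.
x = Variable(np.random.randn(2, 3))
W = Variable(np.random.randn(3, 4))
h = forward("linear", lambda a, b: a @ b, x, W)
y = forward("relu", lambda a: np.maximum(a, 0), h)
backward(y, np.ones_like(y.data))
print(x.grad.shape, W.grad.shape)     # (2, 3) (3, 4)
```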
- Patent number: 12014272
  Abstract: A circuit for performing neural network computations for a neural network comprising a plurality of layers, the circuit comprising: activation circuitry configured to receive a vector of accumulated values and configured to apply a function to each accumulated value to generate a vector of activation values; and normalization circuitry coupled to the activation circuitry and configured to generate a respective normalized value from each activation value.
  Type: Grant
  Filed: March 1, 2023
  Date of Patent: June 18, 2024
  Assignee: Google LLC
  Inventors: Gregory Michael Thorson, Christopher Aaron Clark, Dan Luu
- Patent number: 11948063
  Abstract: Computer systems and computer-implemented methods improve a base neural network. In an initial training, preliminary activation values computed for base network nodes for data in the training data set are stored in memory. After the initial training, a new node set is merged into the base neural network to form an expanded neural network, including directly connecting each of the nodes of the new node set to one or more base network nodes. Then the expanded neural network is trained on the training data set using a network error loss function for the expanded neural network.
  Type: Grant
  Filed: June 1, 2023
  Date of Patent: April 2, 2024
  Assignee: D5AI LLC
  Inventors: James K. Baker, Bradley J. Baker
- Patent number: 11934791
  Abstract: The present disclosure provides projection neural networks and example applications thereof. In particular, the present disclosure provides a number of different architectures for projection neural networks, including two example architectures which can be referred to as: Self-Governing Neural Networks (SGNNs) and Projection Sequence Networks (ProSeqoNets). Each projection neural network can include one or more projection layers that project an input into a different space. For example, each projection layer can use a set of projection functions to project the input into a bit-space, thereby greatly reducing the dimensionality of the input and enabling computation with lower resource usage. As such, the projection neural networks provided herein are highly useful for on-device inference in resource-constrained devices. For example, the provided SGNN and ProSeqoNet architectures are particularly beneficial for on-device inference such as, for example, solving natural language understanding tasks on-device.
  Type: Grant
  Filed: August 1, 2022
  Date of Patent: March 19, 2024
  Assignee: GOOGLE LLC
  Inventors: Sujith Ravi, Zornitsa Kozareva
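The projection layers are described as mapping an input into a much smaller bit-space via a set of projection functions. One standard way to realize such functions, used here purely as an illustration rather than as the patented construction, is random signed projections: each output bit is the sign of the input's dot product with a fixed random vector.

```python
import numpy as np

def make_projection_functions(input_dim, n_bits, seed=0):
    """Fixed random hyperplanes; each one defines a projection function (one output bit)."""
    rng = np.random.default_rng(seed)
    return rng.normal(size=(n_bits, input_dim))

def project_to_bits(x, planes):
    """Project a dense input vector into bit-space: bit i = 1 if <x, plane_i> >= 0."""
    return (planes @ x >= 0).astype(np.uint8)

planes = make_projection_functions(input_dim=300, n_bits=64)
x = np.random.default_rng(1).normal(size=300)      # e.g. a dense text feature vector
bits = project_to_bits(x, planes)
print(bits.shape, bits[:8])                        # 64 bits replace 300 floats
```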
- Patent number: 11915119
  Abstract: A convolutional neural network (CNN) processing method includes selecting a survival network in a precision convolutional network based on a result of performing a high speed convolution operation between an input and a kernel using a high speed convolutional network, and performing a precision convolution operation between the input and the kernel using the survival network.
  Type: Grant
  Filed: December 20, 2017
  Date of Patent: February 27, 2024
  Assignee: Samsung Electronics Co., Ltd.
  Inventors: Changyong Son, Jinwoo Son, Chang Kyu Choi, Jaejoon Han
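The abstract leaves "survival network" abstract, so the sketch below is only one plausible reading: a cheap low-precision convolution pass flags the output positions that would survive a ReLU, and the precise convolution is evaluated only there. Everything about the quantization and masking is an assumption made for illustration.

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive valid 2-D cross-correlation, for illustration only."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def two_stage_conv(x, k, bits=4):
    """Low-precision pass selects 'surviving' output positions (those not zeroed
    by ReLU); the precision convolution then runs only at those positions."""
    scale = (2 ** (bits - 1) - 1) / max(np.abs(x).max(), np.abs(k).max(), 1e-12)
    rough = conv2d_valid(np.round(x * scale), np.round(k * scale))   # high-speed pass
    survive = rough > 0                                              # survival mask
    precise = np.zeros_like(rough)
    kh, kw = k.shape
    for i, j in np.argwhere(survive):                                # precision pass
        precise[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return np.maximum(precise, 0.0)

x = np.random.default_rng(0).normal(size=(8, 8))
k = np.random.default_rng(1).normal(size=(3, 3))
print(two_stage_conv(x, k))
```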
- Patent number: 11915146
  Abstract: There is provided an information processing device which efficiently executes machine learning. The information processing device according to one embodiment includes: an obtaining unit which obtains a source code including a code which defines Forward processing of each layer constituting a neural network; a storage unit which stores an association relationship between each Forward processing and Backward processing associated with each Forward processing; and an executing unit which successively executes each code included in the source code, and which calculates an output value of the Forward processing defined by the code based on an input value at a time of execution of each code, and generates a reference structure for Backward processing in a layer associated with the code based on the association relationship stored in the storage unit.
  Type: Grant
  Filed: November 11, 2022
  Date of Patent: February 27, 2024
  Assignee: PREFERRED NETWORKS, INC.
  Inventors: Seiya Tokui, Yuya Unno, Kenta Oono, Ryosuke Okuta
- Patent number: 11881012
  Abstract: In one aspect, an example method includes (i) extracting a sequence of audio features from a portion of a sequence of media content; (ii) extracting a sequence of video features from the portion of the sequence of media content; (iii) providing the sequence of audio features and the sequence of video features as an input to a transition detector neural network that is configured to classify whether or not a given input includes a transition between different content segments; (iv) obtaining from the transition detector neural network classification data corresponding to the input; (v) determining that the classification data is indicative of a transition between different content segments; and (vi) based on determining that the classification data is indicative of a transition between different content segments, outputting transition data indicating that the portion of the sequence of media content includes a transition between different content segments.
  Type: Grant
  Filed: April 9, 2021
  Date of Patent: January 23, 2024
  Assignee: Gracenote, Inc.
  Inventors: Joseph Renner, Aneesh Vartakavi, Robert Coover
- Patent number: 11868878
  Abstract: Disclosed herein are techniques for implementing a large fully-connected layer in an artificial neural network. The large fully-connected layer is grouped into multiple fully-connected subnetworks. Each fully-connected subnetwork is configured to classify an object into an unknown class or a class in a subset of target classes. If the object is classified as the unknown class by a fully-connected subnetwork, a next fully-connected subnetwork may be used to further classify the object. In some embodiments, the fully-connected layer is grouped based on a ranking of target classes.
  Type: Grant
  Filed: March 23, 2018
  Date of Patent: January 9, 2024
  Assignee: Amazon Technologies, Inc.
  Inventors: Randy Huang, Ron Diamant
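The abstract spells out the cascade: each fully-connected subnetwork scores a subset of target classes plus an "unknown" output, and an object that lands on "unknown" falls through to the next subnetwork. A minimal sketch of that control flow, with arbitrary weights and an even split of classes standing in for the ranking-based grouping the patent mentions:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cascade_classify(x, subnetworks):
    """Each subnetwork is (W, b, class_ids) scoring its subset of target classes
    plus one extra 'unknown' logit (the last one). If 'unknown' wins, fall through
    to the next subnetwork. Shapes and ordering are illustrative assumptions."""
    for W, b, class_ids in subnetworks:
        probs = softmax(W @ x + b)            # len(class_ids) + 1 outputs
        winner = int(np.argmax(probs))
        if winner != len(class_ids):          # not the 'unknown' slot
            return class_ids[winner]
    return None                               # no subnetwork claimed the object

# Example: 10 target classes split into two groups of 5.
rng = np.random.default_rng(0)
d = 16
groups = [list(range(0, 5)), list(range(5, 10))]
subnets = [(rng.normal(size=(len(g) + 1, d)), rng.normal(size=len(g) + 1), g) for g in groups]
print(cascade_classify(rng.normal(size=d), subnets))
```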
- Patent number: 11860608
  Abstract: The present invention discloses an industrial equipment operation, maintenance and optimization method and system based on a complex network model. The method includes the following steps: obtaining data of all sensors of industrial equipment, and calculating a Spearman correlation coefficient between data of every two of the sensors within the same time period; using each sensor as a node, and using the Spearman correlation coefficient as a weight of a network edge, to construct a fully connected weighted network; and obtaining, when an adjustment instruction for a target feature is received, a currently optimal parameter adjustment path of the target feature based on the fully connected weighted network. In the present invention, production equipment in reality is digitized to construct a complex network oriented to industrial big data. An optimal path for equipment parameter tuning may be found by using the network, thereby reducing dependence of an enterprise on a domain expert.
  Type: Grant
  Filed: January 3, 2020
  Date of Patent: January 2, 2024
  Assignee: QILU UNIVERSITY OF TECHNOLOGY
  Inventors: Xuesong Jiang, Chao Meng, Xiumei Wei, Qingcun Zhu, Dapeng Hu
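The graph-construction step is concrete enough to sketch: every sensor becomes a node and the pairwise Spearman correlation over a common time window becomes the edge weight. How the "currently optimal parameter adjustment path" is scored is not given in the abstract, so the sketch treats 1 − |ρ| as a distance and returns a shortest path as one plausible reading; scipy and networkx supply the correlation and the path search.

```python
import numpy as np
import networkx as nx
from scipy.stats import spearmanr

def build_sensor_network(sensor_data):
    """sensor_data: dict of sensor name -> 1-D array of readings over the same period.
    Edge weight = Spearman correlation between the two sensors' series."""
    G = nx.Graph()
    names = list(sensor_data)
    G.add_nodes_from(names)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            rho, _ = spearmanr(sensor_data[a], sensor_data[b])
            G.add_edge(a, b, rho=rho, dist=1.0 - abs(rho))   # fully connected, weighted
    return G

def adjustment_path(G, source_param, target_feature):
    """One plausible reading of 'optimal parameter adjustment path' (an assumption):
    the path whose links carry the strongest correlations, i.e. shortest 1-|rho| distance."""
    return nx.shortest_path(G, source_param, target_feature, weight="dist")

rng = np.random.default_rng(0)
data = {f"sensor_{i}": rng.normal(size=200).cumsum() for i in range(5)}
G = build_sensor_network(data)
print(adjustment_path(G, "sensor_0", "sensor_4"))
```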
- Patent number: 11836449
  Abstract: An information processing device includes processing circuitry to acquire object spatiotemporal information including spatiotemporal information indicating coordinates of objects in time and space and a name of each of the objects and to generate morphological analysis-undergone object spatiotemporal information by executing a morphological analysis as a process of analyzing the name of each of the objects included in the object spatiotemporal information into one or more words; to acquire morphological analysis-undergone names of vicinal objects, as objects existing in a vicinity of each of the objects in time and space, from the morphological analysis-undergone object spatiotemporal information; to calculate a distribution of vicinal object name words, as words included in the names of the vicinal objects of each of the objects, from the morphological analysis-undergone names; and to convert the distribution of the vicinal object name words to a spatiotemporal information-considered distributed representation.
  Type: Grant
  Filed: April 6, 2021
  Date of Patent: December 5, 2023
  Assignee: MITSUBISHI ELECTRIC CORPORATION
  Inventors: Hideaki Joko, Takahiro Otsuka
- Patent number: 11830056
  Abstract: The present disclosure provides a method and apparatus for determining a food item from a photograph and a corresponding restaurant serving the food item. An image is received from a user, the image being associated with a consumable item. One or more ingredients of the consumable item in the image are identified along with a location of the user, and a neural network is used to determine one or more similar images from a database. A restaurant associated with each of the one or more similar images is determined along with a similarity score indicating a similarity between the restaurant and the identified content of the image. The one or more restaurants and/or associated similar food items are ranked based on the similarity score and a list of ranked restaurants is provided to the user.
  Type: Grant
  Filed: November 23, 2020
  Date of Patent: November 28, 2023
  Assignee: Microsoft Technology Licensing, LLC
  Inventors: Julia X. Gong, Jyotkumar Patel, Yale Song, Xuetao Yin, Xiujia Guo, Rajiv S. Binwade, Houdong Hu
- Patent number: 11816558
  Abstract: An integrated circuit device for reservoir computing can include a weighted input layer, an unweighted, asynchronous, internal recurrent neural network made up of nodes having binary weighting, and a weighted output layer. Weighting of output signals can be performed using predetermined weighted sums stored in memory. Application specific integrated circuit (ASIC) embodiments may include programmable nodes. Characteristics of the reservoir of the device can be tunable to perform rapid processing and pattern recognition of signals at relatively large rates.
  Type: Grant
  Filed: May 16, 2018
  Date of Patent: November 14, 2023
  Assignee: UNIVERSITY OF MARYLAND, COLLEGE PARK
  Inventors: Daniel Lathrop, Itamar Shani, Peter Megson, Alessandro Restelli, Anthony Robert Mautino
- Patent number: 11808230
  Abstract: A method for estimating pressure in an intake manifold of an indirect injection combustion engine. A pressure sensor measures pressure in the intake manifold, the intake manifold being in fluidic communication with a combustion cylinder, a piston guided in translation in the combustion cylinder and connected to a rotating crankshaft. The method includes: measuring, with the pressure sensor, a maximum pressure corresponding substantially to a maximum pressure in the intake manifold during a preceding cycle of the engine; measuring, with the pressure sensor, a minimum pressure corresponding substantially to a minimum pressure in the intake manifold during the preceding cycle of the engine; determining a pre-calculated average pressure correction factor from a crankshaft angular position and from an engine speed; and estimating the pressure in the intake manifold for the crankshaft angular position of the current engine cycle from the average correction factor and from the minimum and maximum pressures.
  Type: Grant
  Filed: September 14, 2021
  Date of Patent: November 7, 2023
  Inventor: Xavier Moine
- Patent number: 11812589
  Abstract: Systems and methods for cooling a datacenter are disclosed. In at least one embodiment, a refrigerant distribution unit (RDU) distributes first refrigerant from a refrigerant reservoir to one or more cold plates to extract heat from at least one computing device and also interfaces between a first refrigerant cooling loop having a first refrigerant and a second refrigerant cooling loop, so that a second refrigerant cooling loop uses second refrigerant to dissipate at least part of such heat through a second condenser unit to an ambient environment.
  Type: Grant
  Filed: May 12, 2021
  Date of Patent: November 7, 2023
  Assignee: Nvidia Corporation
  Inventor: Ali Heydari
- Patent number: 11790234
  Abstract: In implementations of resource-aware training for neural network, one or more computing devices of a system implement an architecture optimization module for monitoring parameter utilization while training a neural network. Dead neurons of the neural network are identified as having activation scales less than a threshold. Neurons with activation scales greater than or equal to the threshold are identified as survived neurons. The dead neurons are converted to reborn neurons by adding the dead neurons to layers of the neural network having the survived neurons. The reborn neurons are prevented from connecting to the survived neurons for training the reborn neurons.
  Type: Grant
  Filed: December 9, 2022
  Date of Patent: October 17, 2023
  Assignee: Adobe Inc.
  Inventors: Zhe Lin, Siyuan Qiao, Jianming Zhang
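The dead/survived split is defined directly by a threshold on per-neuron activation scales. A small sketch of that bookkeeping, assuming the "activation scale" is simply a monitored per-neuron statistic and expressing the rule that reborn neurons must not connect to survived neurons as a connection mask (both simplifications of whatever the full method does):

```python
import numpy as np

def split_dead_and_survived(activation_scales, threshold=1e-2):
    """Neurons whose activation scale falls below the threshold are 'dead';
    the rest are 'survived'. The scale stands in for whatever per-neuron
    utilization statistic is monitored during training."""
    scales = np.asarray(activation_scales)
    dead = np.flatnonzero(scales < threshold)
    survived = np.flatnonzero(scales >= threshold)
    return dead, survived

def reborn_connection_mask(n_neurons, dead, survived):
    """Mask over neuron-to-neuron connections within the layer group: reborn
    (former dead) neurons are kept from connecting to survived neurons while
    they are retrained. 1 = connection allowed, 0 = blocked."""
    mask = np.ones((n_neurons, n_neurons), dtype=np.uint8)
    mask[np.ix_(dead, survived)] = 0
    mask[np.ix_(survived, dead)] = 0
    return mask

scales = [0.9, 0.001, 0.4, 0.0003, 0.7]
dead, survived = split_dead_and_survived(scales)
print(dead, survived)
print(reborn_connection_mask(len(scales), dead, survived))
```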
- Patent number: 11774553
  Abstract: In an embodiment, a method includes: transmitting a plurality of radar signals using a millimeter-wave radar sensor towards a target; receiving a plurality of reflected radar signals that correspond to the plurality of transmitted radar signals using the millimeter-wave radar; mixing a replica of the plurality of transmitted radar signals with the plurality of received reflected radar signals to generate an intermediate frequency signal; generating raw digital data based on the intermediate frequency signal using an analog-to-digital converter; processing the raw digital data using a constrained L dimensional convolutional layer of a neural network to generate intermediate digital data, where L is a positive integer greater than or equal to 2, and where the neural network includes a plurality of additional layers; and processing the intermediate digital data using the plurality of additional layers to generate information about the target.
  Type: Grant
  Filed: June 18, 2020
  Date of Patent: October 3, 2023
  Assignee: Infineon Technologies AG
  Inventors: Avik Santra, Thomas Reinhold Stadelmayer
- Patent number: 11741370
  Abstract: Methods, computer program products, and systems are presented. The methods include, for instance: generating a plurality of deep transfer learning networks including a source deep transfer learning network for a source domain and a target deep transfer learning network for a target domain. Transfer layers of the source deep transfer learning network are encoded to a chromosome, diversified, and integrated with the target deep transfer learning network and the target deep transfer learning network passing a predefined fitness threshold condition is produced.
  Type: Grant
  Filed: August 28, 2019
  Date of Patent: August 29, 2023
  Assignee: International Business Machines Corporation
  Inventors: Craig M. Trim, Aaron K. Baughman, Garfield W. Vaughn, Micah Forster
- Patent number: 11741362
  Abstract: A system for training a neural network receives training data and performs lower precision format training calculations using lower precision format data at one or more training phases. One or more results from the lower precision format training calculations are converted to higher precision format data, and higher precision format training calculations are performed using the higher precision format data at one or more additional training phases. The neural network is modified using the results from the one or more additional training phases. The mixed precision format training calculations train the neural network more efficiently, while maintaining an overall accuracy.
  Type: Grant
  Filed: May 8, 2018
  Date of Patent: August 29, 2023
  Assignee: Microsoft Technology Licensing, LLC
  Inventors: Daniel Lo, Eric Sen Chung, Bita Darvish Rouhani
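The pattern the abstract describes matches the usual mixed-precision recipe: run the heavy training math in a lower-precision format, convert the results back up, and apply the update in higher precision. A toy linear-regression step in that spirit, with the exact split of phases assumed rather than taken from the patent:

```python
import numpy as np

def mixed_precision_step(w32, x, y, lr=0.1):
    """One training step of a linear model in the spirit of mixed-precision training:
    the forward/gradient math runs in float16, results are converted back to float32,
    and the float32 'master' weights are updated."""
    w16, x16, y16 = w32.astype(np.float16), x.astype(np.float16), y.astype(np.float16)

    pred16 = x16 @ w16                          # lower-precision forward phase
    err16 = pred16 - y16
    grad16 = x16.T @ err16 / len(x16)           # lower-precision gradient

    grad32 = grad16.astype(np.float32)          # convert results to higher precision
    return w32 - lr * grad32                    # higher-precision update phase

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 8)).astype(np.float32)
true_w = rng.normal(size=8).astype(np.float32)
y = x @ true_w
w = np.zeros(8, dtype=np.float32)
for _ in range(100):
    w = mixed_precision_step(w, x, y)
print(np.round(w - true_w, 2))                  # residuals should be close to zero
```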
- Patent number: 11732664
  Abstract: A control device 60 of a vehicle drive device comprises a processing part 81 configured to use a trained model using a neural network to calculate an output parameter of a vehicle, and a control part 82 configured to control the vehicle drive device based on the output parameter. The neural network includes a first input layer to which input parameters of the vehicle other than a design value are input, a second input layer to which the design values are input, a first hidden layer to which outputs of the first input layer are input, a second hidden layer to which outputs of the second input layer are input, and an output layer outputting the output parameter, and is configured so that the second hidden layer becomes closer to the output layer than the first hidden layer.
  Type: Grant
  Filed: September 12, 2019
  Date of Patent: August 22, 2023
  Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
  Inventor: Toshihiro Nakamura
- Patent number: 11734214
  Abstract: The present disclosure relates to devices for using a configurable stacked architecture for a fixed function datapath with an accelerator for accelerating an operation or a layer of a deep neural network (DNN). The stacked architecture may have a fixed function datapath that includes one or more configurable micro-execution units that execute a series of vector, scalar, reduction, broadcasting, and normalization operations for a DNN layer operation. The fixed function datapath may be customizable based on the DNN or the operation.
  Type: Grant
  Filed: March 25, 2021
  Date of Patent: August 22, 2023
  Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
  Inventors: Stephen Sangho Youn, Steven Karl Reinhardt, Jeremy Halden Fowers, Lok Chand Koppaka, Kalin Ovtcharov
- Patent number: 11727172
  Abstract: A tuning method for a clutch temperature estimation model may include generating n tuning genes, calculating a tuning value corresponding to a tuning variable by using information of each of the n tuning genes, calculating a temperature estimation accuracy by applying the calculated tuning value to the clutch temperature estimation model, extracting n tuning genes of highest calculated accuracies, and regenerating m tuning genes through recombination of the extracted n tuning genes.
  Type: Grant
  Filed: November 12, 2020
  Date of Patent: August 15, 2023
  Assignees: HYUNDAI MOTOR COMPANY, KIA MOTORS CORPORATION
  Inventors: Geontae Lee, Min Ki Kim, Kyoung Song
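The abstract reads as a compact genetic-algorithm loop: score n tuning genes through the temperature-estimation model, keep the n most accurate, and recombine them into m new genes. The sketch below implements that loop with a stand-in fitness function, since the clutch temperature estimation model itself is not described here:

```python
import random

def tuning_value(gene):
    """Map a gene's information to a tuning-variable value (illustrative: mean of the gene)."""
    return sum(gene) / len(gene)

def estimation_accuracy(tuning_val, target=0.37):
    """Stand-in for 'apply the tuning value to the clutch temperature estimation model
    and measure accuracy'; the real model and data are not given in the abstract."""
    return -abs(tuning_val - target)

def recombine(parents, m):
    """Regenerate m genes by recombining randomly chosen parent pairs (uniform crossover)."""
    children = []
    for _ in range(m):
        a, b = random.sample(parents, 2)
        children.append([random.choice(pair) for pair in zip(a, b)])
    return children

def tune(n=8, m=16, gene_len=6, generations=30):
    genes = [[random.random() for _ in range(gene_len)] for _ in range(n + m)]
    for _ in range(generations):
        scored = sorted(genes, key=lambda g: estimation_accuracy(tuning_value(g)), reverse=True)
        best_n = scored[:n]                       # extract the n most accurate genes
        genes = best_n + recombine(best_n, m)     # regenerate m genes by recombination
    return tuning_value(genes[0])

random.seed(0)
print(round(tune(), 3))                           # drifts toward the target tuning value
```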
- Patent number: 11715011
  Abstract: A neural network recognition method includes obtaining a first neural network that includes layers and a second neural network that includes a layer connected to the first neural network, actuating a processor to compute a first feature map from input data based on a layer of the first neural network, compute a second feature map from the input data based on the layer connected to the first neural network in the second neural network, and generate a recognition result based on the first neural network from an intermediate feature map computed by applying an element-wise operation to the first feature map and the second feature map.
  Type: Grant
  Filed: September 11, 2019
  Date of Patent: August 1, 2023
  Assignee: Samsung Electronics Co., Ltd.
  Inventors: Byungin Yoo, Youngsung Kim, Youngjun Kwak, Chang Kyu Choi
- Patent number: 11699069
  Abstract: Systems and methods are provided for performing predictive assignments pertaining to genetic information. One embodiment is a system that includes a genetic prediction server. The genetic prediction server includes an interface that acquires records that each indicate one or more genetic variants determined to exist within an individual, and a controller. The controller selects one or more machine learning models that utilize the genetic variants as input, and loads the machine learning models. For each individual in the records: the controller predictively assigns at least one characteristic to that individual by operating the machine learning models based on at least one genetic variant indicated in the records for that individual. The controller also generates a report indicating at least one predictively assigned characteristic for at least one individual, and transmits a command via the interface for presenting the report at a display.
  Type: Grant
  Filed: July 13, 2017
  Date of Patent: July 11, 2023
  Assignee: Helix, Inc.
  Inventors: Ryan P. Trunck, Christopher M. Glode, Rani K. Powers, Jennifer L. Lescallett
- Patent number: 11663468
  Abstract: A method for training a neural network includes: training a super network to obtain a network parameter of the super network, wherein each network layer of the super network includes multiple candidate network sub-structures in parallel; for each network layer of the super network, selecting, from the multiple candidate network sub-structures, a candidate network sub-structure to be a target network sub-structure; constructing a sub-network based on target network sub-structures each selected in a respective network layer of the super network; and training the sub-network, by taking the network parameter inherited from the super network as an initial parameter of the sub-network, to obtain a network parameter of the sub-network.
  Type: Grant
  Filed: January 16, 2020
  Date of Patent: May 30, 2023
  Assignee: Beijing Xiaomi Intelligent Technology Co., Ltd.
  Inventors: Xiangxiang Chu, Ruijun Xu, Bo Zhang, Jixiang Li, Qingyuan Li, Bin Wang
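A condensed sketch of the supernet workflow the abstract outlines: every layer holds several parallel candidate sub-structures, one is selected per layer as the target, and the resulting sub-network inherits the supernet's parameters as its initialization. How the target is selected is not stated in the abstract, so a random pick stands in, and each "sub-structure" is reduced to a weight/bias pair for brevity:

```python
import copy
import random
import numpy as np

def make_supernet(layer_dims, n_candidates=3, seed=0):
    """Each network layer holds several parallel candidate sub-structures. A candidate
    is just a (W, b) pair here; in practice candidates would differ in structure
    (kernel size, width, etc.) -- an illustrative simplification."""
    rng = np.random.default_rng(seed)
    supernet = []
    for d_in, d_out in zip(layer_dims[:-1], layer_dims[1:]):
        candidates = [{"W": rng.normal(size=(d_in, d_out)) * 0.1,
                       "b": np.zeros(d_out)} for _ in range(n_candidates)]
        supernet.append(candidates)
    return supernet

def sample_subnetwork(supernet, seed=None):
    """Select one target sub-structure per layer and build a sub-network whose
    initial parameters are inherited (copied) from the trained super network."""
    rnd = random.Random(seed)
    choices = [rnd.randrange(len(layer)) for layer in supernet]
    subnet = [copy.deepcopy(supernet[i][c]) for i, c in enumerate(choices)]
    return choices, subnet

def forward(subnet, x):
    for layer in subnet:
        x = np.maximum(x @ layer["W"] + layer["b"], 0.0)   # ReLU MLP for illustration
    return x

supernet = make_supernet([16, 32, 8])
choices, subnet = sample_subnetwork(supernet, seed=1)
print(choices, forward(subnet, np.ones(16)).shape)
```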
- Patent number: 11663478
  Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for characterizing activity in a recurrent artificial neural network. In one aspect, a method for identifying decision moments in a recurrent artificial neural network includes determining a complexity of patterns of activity in the recurrent artificial neural network, wherein the activity is responsive to input into the recurrent artificial neural network, determining a timing of activity having a complexity that is distinguishable from other activity that is responsive to the input, and identifying the decision moment based on the timing of the activity that has the distinguishable complexity.
  Type: Grant
  Filed: June 11, 2018
  Date of Patent: May 30, 2023
  Assignee: INAIT SA
  Inventors: Henry Markram, Ran Levi, Kathryn Pamela Hess Bellwald
- Patent number: 11645835
  Abstract: A method and system for creating hypercomplex representations of data includes, in one exemplary embodiment, at least one set of training data with associated labels or desired response values, transforming the data and labels into hypercomplex values, methods for defining hypercomplex graphs of functions, training algorithms to minimize the cost of an error function over the parameters in the graph, and methods for reading hierarchical data representations from the resulting graph. Another exemplary embodiment learns hierarchical representations from unlabeled data. The method and system, in another exemplary embodiment, may be employed for biometric identity verification by combining multimodal data collected using many sensors, including data such as, for example, anatomical characteristics, behavioral characteristics, demographic indicators, and artificial characteristics.
  Type: Grant
  Filed: August 30, 2018
  Date of Patent: May 9, 2023
  Assignee: Board of Regents, The University of Texas System
  Inventors: Aaron Benjamin Greenblatt, Sos S. Agaian
- Patent number: 11636318
  Abstract: Techniques and mechanisms for servicing a search query using a spiking neural network. In an embodiment, a spiking neural network receives an indication of a first context of the search query, wherein a set of nodes of the spiking neural network each correspond to a respective entry of a repository. One or more nodes of the set of nodes are each excited to provide a respective cyclical response based on the first context, wherein a first cyclical response is by a first node. Due at least in part to a coupling of the excited nodes, a perturbance signal, based on a second context of the search query, results in a change of the first resonance response relative to one or more other resonance responses. In another embodiment, data corresponding to the first node is selected, based on the change, as an at least partial result of the search query.
  Type: Grant
  Filed: December 15, 2017
  Date of Patent: April 25, 2023
  Assignee: Intel Corporation
  Inventors: Arnab Paul, Narayan Srinivasa
- Patent number: 11604973
  Abstract: Some embodiments provide a method for training parameters of a machine-trained (MT) network. The method receives an MT network with multiple layers of nodes, each of which computes an output value based on a set of input values and a set of trained weight values. Each layer has a set of allowed weight values. For a first layer with a first set of allowed weight values, the method defines a second layer with nodes corresponding to each of the nodes of the first layer, each second-layer node receiving the same input values as the corresponding first-layer node. The second layer has a second, different set of allowed weight values, with the output values of the nodes of the first layer added with the output values of the corresponding nodes of the second layer to compute output values that are passed to a subsequent layer. The method trains the weight values.
  Type: Grant
  Filed: November 27, 2019
  Date of Patent: March 14, 2023
  Assignee: PERCEIVE CORPORATION
  Inventors: Eric A. Sather, Steven L. Teig
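The mechanism is a pair of layers over the same inputs, each restricted to its own set of allowed weight values, with corresponding outputs summed before the next layer. The sketch below shows only the forward combination, snapping weights to illustrative allowed sets; the training procedure and the actual value sets are not taken from the patent:

```python
import numpy as np

def project_to_allowed(W, allowed):
    """Snap each weight to the nearest value in the layer's allowed set."""
    allowed = np.asarray(allowed)
    idx = np.abs(W[..., None] - allowed).argmin(axis=-1)
    return allowed[idx]

def paired_layer_forward(x, W1, W2, allowed1=(-1.0, 0.0, 1.0), allowed2=(-0.5, 0.0, 0.5)):
    """First layer uses one set of allowed weight values; the parallel second layer
    (one node per first-layer node, same inputs) uses a different set; corresponding
    outputs are added before being passed on. The allowed sets are illustrative."""
    y1 = x @ project_to_allowed(W1, allowed1)
    y2 = x @ project_to_allowed(W2, allowed2)
    return y1 + y2        # combined output forwarded to the subsequent layer

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 10))
W1 = rng.normal(size=(10, 6))
W2 = rng.normal(size=(10, 6))
print(paired_layer_forward(x, W1, W2).shape)     # (4, 6)
```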
- Patent number: 11604987
  Abstract: Various embodiments include methods and neural network computing devices implementing the methods, for generating an approximation neural network. Various embodiments may include performing approximation operations on a weights tensor associated with a layer of a neural network to generate an approximation weights tensor, determining an expected output error of the layer in the neural network due to the approximation weights tensor, subtracting the expected output error from a bias parameter of the layer to determine an adjusted bias parameter and substituting the adjusted bias parameter for the bias parameter in the layer. Such operations may be performed for one or more layers in a neural network to produce an approximation version of the neural network for execution on a resource limited processor.
  Type: Grant
  Filed: March 23, 2020
  Date of Patent: March 14, 2023
  Assignee: Qualcomm Incorporated
  Inventors: Marinus Willem Van Baalen, Tijmen Pieter Frederik Blankevoort, Markus Nagel
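The bias-correction idea is concrete: approximate the weights, estimate the expected output error that the approximation introduces, and fold the negative of that error into the bias. A sketch with a plain uniform quantizer standing in for the "approximation operations" and a supplied expected-input vector standing in for calibration statistics:

```python
import numpy as np

def quantize_weights(W, n_bits=4):
    """A simple symmetric uniform quantizer standing in for the approximation operations."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(W).max() / qmax
    return np.round(W / scale) * scale

def bias_corrected_layer(W, b, expected_input, n_bits=4):
    """Approximate the weights, estimate the expected output error the approximation
    introduces (using the expected input activation), and subtract that error from
    the bias. E[x] would normally come from calibration data or batch-norm statistics;
    here it is just passed in."""
    Wq = quantize_weights(W, n_bits)
    expected_error = (Wq - W).T @ expected_input    # E[(Wq - W) x] for a fixed E[x]
    b_adjusted = b - expected_error
    return Wq, b_adjusted

rng = np.random.default_rng(0)
W = rng.normal(size=(32, 16))          # weights: 32 inputs -> 16 outputs (y = x W + b)
b = rng.normal(size=16)
E_x = rng.normal(size=32)              # expected input activation
Wq, b_adj = bias_corrected_layer(W, b, E_x)

# With the corrected bias, the expected output matches the original layer exactly:
print(np.allclose(E_x @ W + b, E_x @ Wq + b_adj))   # True
```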
- Patent number: 11601146
  Abstract: Examples described herein include methods, devices, and systems which may compensate input data for nonlinear power amplifier noise to generate compensated input data. In compensating the noise, during an uplink transmission time interval (TTI), a switch path is activated to provide amplified input data to a receiver stage including a recurrent neural network (RNN). The RNN may calculate an error representative of the noise based partly on the input signal to be transmitted and a feedback signal to generate filter coefficient data associated with the power amplifier noise. The feedback signal is provided, after processing through the receiver, to the RNN. During an uplink TTI, the amplified input data may also be transmitted as the RF wireless transmission via an RF antenna. During a downlink TTI, the switch path may be deactivated and the receiver stage may receive an additional RF wireless transmission to be processed in the receiver stage.
  Type: Grant
  Filed: March 24, 2021
  Date of Patent: March 7, 2023
  Assignee: MICRON TECHNOLOGY, INC.
  Inventor: Fa-Long Luo
- Patent number: 11593664
  Abstract: A method can be performed prior to implementation of a neural network by a processing unit. The neural network comprises a succession of layers and at least one operator applied between at least one pair of successive layers. A computational tool generates an executable code intended to be executed by the processing unit in order to implement the neural network. The computational tool generates at least one transfer function between the at least one pair of layers taking the form of a set of pre-computed values.
  Type: Grant
  Filed: June 30, 2020
  Date of Patent: February 28, 2023
  Assignees: STMicroelectronics (Rousset) SAS, STMicroelectronics S.r.l.
  Inventors: Laurent Folliot, Pierre Demaj, Emanuele Plebani
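One natural realization of a transfer function "taking the form of a set of pre-computed values" is a lookup table: evaluate the inter-layer operator once, offline, for every possible quantized input code, and let the generated code index the table at run time. That reading is an assumption; the sketch below shows it for an int8 sigmoid with illustrative scales.

```python
import numpy as np

def build_transfer_table(op, in_scale, out_scale):
    """Pre-compute the operator applied between two layers for every possible
    quantized input code (int8 here), so the generated code can use a table
    lookup instead of evaluating the operator at run time."""
    codes = np.arange(-128, 128, dtype=np.int32)
    real_vals = codes * in_scale                      # dequantize input codes
    real_out = op(real_vals)                          # apply the operator once, offline
    return np.clip(np.round(real_out / out_scale), -128, 127).astype(np.int8)  # 256 values

def apply_transfer(x_q, table):
    """Run-time replacement for the operator: index the pre-computed table."""
    return table[x_q.astype(np.int32) + 128]

sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
table = build_transfer_table(sigmoid, in_scale=0.05, out_scale=1.0 / 127)
x_q = np.array([-100, -10, 0, 10, 100], dtype=np.int8)
print(apply_transfer(x_q, table))
```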
- Patent number: 11551093
  Abstract: In implementations of resource-aware training for neural network, one or more computing devices of a system implement an architecture optimization module for monitoring parameter utilization while training a neural network. Dead neurons of the neural network are identified as having activation scales less than a threshold. Neurons with activation scales greater than or equal to the threshold are identified as survived neurons. The dead neurons are converted to reborn neurons by adding the dead neurons to layers of the neural network having the survived neurons. The reborn neurons are prevented from connecting to the survived neurons for training the reborn neurons.
  Type: Grant
  Filed: January 22, 2019
  Date of Patent: January 10, 2023
  Assignee: Adobe Inc.
  Inventors: Zhe Lin, Siyuan Qiao, Jianming Zhang
- Patent number: 11544525
  Abstract: An artificial intelligence (AI) system is disclosed. The AI system provides an AI system lane processing chain, at least one AI processing block, a local memory, a hardware sequencer, and a lane composer. Each of the at least one AI processing block, the local memory, the hardware sequencer, and the lane composer is coupled to the AI system lane processing chain. The AI system lane processing chain is dynamically created by the lane composer.
  Type: Grant
  Filed: July 31, 2019
  Date of Patent: January 3, 2023
  Inventor: Sateesh Kumar Addepalli
- Patent number: 11544535
  Abstract: Various embodiments describe techniques for making inferences from graph-structured data using graph convolutional networks (GCNs). The GCNs use various pre-defined motifs to filter and select adjacent nodes for graph convolution at individual nodes, rather than merely using edge-defined immediate-neighbor adjacency for information integration at each node. In certain embodiments, the graph convolutional networks use attention mechanisms to select a motif from multiple motifs and select a step size for each respective node in a graph, in order to capture information from the most relevant neighborhood of the respective node.
  Type: Grant
  Filed: March 8, 2019
  Date of Patent: January 3, 2023
  Assignee: ADOBE INC.
  Inventors: John Boaz Tsang Lee, Ryan Rossi, Sungchul Kim, Eunyee Koh, Anup Rao
- Patent number: 11537869
  Abstract: Systems and methods provide a learned difference metric that operates in a wide artifact space. An example method includes initializing a committee of deep neural networks with labeled distortion pairs, iteratively actively learning a difference metric using the committee and psychophysics tasks for informative distortion pairs, and using the difference metric as an objective function in a machine-learned digital file processing task. Iteratively actively learning the difference metric can include providing an unlabeled distortion pair as input to each of the deep neural networks in the committee, a distortion pair being a base image and a distorted image resulting from application of an artifact applied to the base image, obtaining a plurality of difference metric scores for the unlabeled distortion pair from the deep neural networks, and identifying the unlabeled distortion pair as an informative distortion pair when the difference metric scores satisfy a diversity metric.
  Type: Grant
  Filed: December 27, 2017
  Date of Patent: December 27, 2022
  Assignee: Twitter, Inc.
  Inventors: Ferenc Huszar, Lucas Theis, Pietro Berkes