Patents Examined by Li B. Zhen
-
Patent number: 12373715
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for radio frequency band segmentation, signal detection and labelling using machine learning. In some implementations, a sample of electromagnetic energy processed by one or more radio frequency (RF) communication receivers is received from the one or more receivers. The sample of electromagnetic energy is examined to detect one or more RF signals present in the sample. In response to detecting one or more RF signals present in the sample, the one or more RF signals are extracted from the sample, and time and frequency bounds are estimated for each of the one or more RF signals. For each of the one or more RF signals, at least one of a type of a signal present, or a likelihood of a signal being present, in the sample is classified.
Type: Grant
Filed: November 6, 2019
Date of Patent: July 29, 2025
Assignee: DeepSig Inc.
Inventors: Nathan West, Tamoghna Roy, Timothy James O'Shea, Ben Hilburn
-
Patent number: 12361278
Abstract: A system for obtaining an optimized regular expression may convert received input data into a plurality of embeddings. The system may receive a generated regular expression corresponding to the plurality of embeddings, wherein the generated regular expression is at least one of an existing regular expression from a database and a newly generated regular expression. The system may parse the generated regular expression into a plurality of sub-blocks. The system may classify the plurality of sub-blocks to obtain a plurality of classified sub-blocks. The system may evaluate a quantifier class for each classified sub-block to identify a corresponding computationally expensive class. The system may perform an iterative analysis to obtain a plurality of optimized sub-blocks associated with a minimum computation time. The system may combine the plurality of optimized sub-blocks to obtain the optimized regular expression.
Type: Grant
Filed: March 26, 2021
Date of Patent: July 15, 2025
Assignee: ACCENTURE GLOBAL SOLUTIONS LIMITED
Inventors: Vinu Varghese, Nirav Jagdish Sampat, Balaji Janarthanam, Anil Kumar, Shikhar Srivastava, Kunal Jaiwant Kharsadia, Saran Prasad
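The parse-and-classify step described in this abstract can be sketched as follows. The sub-block splitting rule, the class names, and the example pattern are illustrative assumptions, not the patented pipeline; only the idea of tagging quantifier classes that tend to be computationally expensive comes from the abstract.

```python
import re

# Hypothetical sketch: split a regular expression into sub-blocks and tag
# each sub-block's quantifier class, flagging unbounded greedy quantifiers
# (prone to heavy backtracking) as computationally expensive.
EXPENSIVE = {"*", "+"}   # unbounded quantifiers
CHEAP = {"?"}            # bounded quantifier

def classify_sub_blocks(pattern):
    """Split on parenthesized-group boundaries and classify each sub-block."""
    sub_blocks = [b for b in re.split(r"(\([^()]*\)[*+?]?)", pattern) if b]
    classified = []
    for block in sub_blocks:
        quantifier = block[-1] if block[-1] in "*+?" else None
        cls = ("expensive" if quantifier in EXPENSIVE
               else "cheap" if quantifier in CHEAP
               else "plain")
        classified.append((block, cls))
    return classified

blocks = classify_sub_blocks(r"(ab)+(cd)?ef")
```

An iterative optimizer could then rewrite only the sub-blocks tagged `expensive` and re-measure computation time, as the abstract outlines.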
-
Patent number: 12346192
Abstract: Methods and systems pertaining to performing a diagnostic flowchart are described. A method includes determining a first input indicative of a vehicle problem, and transmitting a request indicative of the problem to a server. The method includes receiving a diagnostic flowchart associated with the vehicle problem. Each diagnostic flowchart includes one or more path elements. Each path element is ordered within the diagnostic flowchart and performable. Each path element leads to one or more ordered decision elements. Each path element comprises one or more diagnostic steps. The method includes outputting a GUI including at least a portion of the diagnostic flowchart, and determining a second input indicative of an instruction to perform a first path element of the diagnostic flowchart. The method further includes performing the first path element, determining first feedback data associated with performing the first path element, and outputting the first feedback data onto a display.
Type: Grant
Filed: August 3, 2021
Date of Patent: July 1, 2025
Assignee: Snap-on Incorporated
Inventor: Patrick S. Merg
-
Patent number: 12314831
Abstract: Disclosed is an electronic device that may include a memory storing a neural network including a plurality of layers, each of the plurality of layers comprising a plurality of kernels, and at least one processor, wherein the at least one processor is configured to: arrange the neural network; and perform neural network processing on input data based on the arranged neural network, and wherein the arranging the neural network includes: with respect to each of the plurality of layers of the neural network, identifying a number of first weights of each of a plurality of kernels included in a layer; identifying a turn that each of the plurality of kernels included in the layer has in an operation sequence based on the identified number of first weights; and updating the turn that each of the plurality of kernels has in the operation sequence based on the identified turn for each of the plurality of kernels.
Type: Grant
Filed: October 28, 2020
Date of Patent: May 27, 2025
Assignee: Samsung Electronics Co., Ltd
Inventors: Dongyul Lee, Sunghyun Kim, Minjung Kim
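The turn-assignment step can be sketched as below, under the assumed reading that "first weights" means weights of a particular value (here, zero-valued weights); that interpretation, the function name, and the toy kernels are illustrative assumptions, not the patented arrangement.

```python
# Minimal sketch: assign each kernel in a layer an operation turn by
# counting its zero-valued weights and sorting kernels so those with the
# most such weights run first.
def order_kernels(layer_kernels):
    """Return kernel indices sorted by descending zero-weight count."""
    zero_counts = [sum(1 for w in kernel if w == 0.0) for kernel in layer_kernels]
    return sorted(range(len(layer_kernels)), key=lambda i: -zero_counts[i])

kernels = [[0.5, 0.0, 0.1],   # one zero weight
           [0.0, 0.0, 0.2],   # two zero weights
           [0.3, 0.4, 0.7]]   # no zero weights
order = order_kernels(kernels)
```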
-
Patent number: 12314867
Abstract: Methods, computer program products, and systems are presented. The methods, computer program products, and systems can include, for instance: determining an insertion interval of a row for insertion into a decision table; and guiding insertion of the row for insertion into the decision table based on a result of the determining.
Type: Grant
Filed: November 27, 2019
Date of Patent: May 27, 2025
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: Pierre C. Berlandier
-
Patent number: 12314861
Abstract: Embodiments described herein provide an approach (referred to as the "Co-training" mechanism throughout this disclosure) that jointly learns two representations of the training data: their class probabilities and low-dimensional embeddings. Specifically, two representations of each image sample are generated: a class probability produced by the classification head and a low-dimensional embedding produced by the projection head. The classification head is trained using memory-smoothed pseudo-labels, where pseudo-labels are smoothed by aggregating information from nearby samples in the embedding space. The projection head is trained using contrastive learning on a pseudo-label graph, where samples with similar pseudo-labels are encouraged to have similar embeddings.
Type: Grant
Filed: January 28, 2021
Date of Patent: May 27, 2025
Assignee: Salesforce, Inc.
Inventors: Junnan Li, Chu Hong Hoi
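The memory-smoothed pseudo-label step can be sketched as follows. The neighbor count, the mixing weight `alpha`, and the toy data are assumptions for illustration; only the idea of refining a sample's soft label with the class probabilities of its nearest neighbors in embedding space comes from the abstract.

```python
import numpy as np

# Minimal sketch: smooth each sample's pseudo-label by averaging it with
# the class probabilities of its k nearest neighbors in embedding space.
def smooth_pseudo_labels(probs, embeds, k=1, alpha=0.5):
    """probs: (n, c) class probabilities; embeds: (n, d) embeddings."""
    # Pairwise squared distances between embeddings; exclude self-matches.
    d2 = ((embeds[:, None, :] - embeds[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nn = np.argsort(d2, axis=1)[:, :k]         # k nearest neighbors per sample
    neighbor_mean = probs[nn].mean(axis=1)     # aggregate neighbor labels
    return alpha * probs + (1 - alpha) * neighbor_mean

probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.8], [0.1, 0.9]])
embeds = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
smoothed = smooth_pseudo_labels(probs, embeds, k=1, alpha=0.5)
```

Because the smoothing is a convex combination of valid distributions, each smoothed row remains a valid probability distribution.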
-
Patent number: 12293294
Abstract: A method is disclosed. The method may include receiving an image or video; extracting a plurality of features from the image or video; executing a neural network using the plurality of features to obtain a performance score for the image or video, the neural network comprising an input layer, a plurality of intermediate layers subsequent to the input layer, and a regression layer or a classification layer; extracting values from one or more signals between an intermediate layer and the regression layer or the classification layer; for each of the plurality of features, calculating, based on at least one of the one or more values, an impact score indicating an impact the feature had on the performance score; and generating, based on one or more impact scores for the plurality of features, indications indicating an impact different features of the image or video had on the performance score.
Type: Grant
Filed: May 10, 2021
Date of Patent: May 6, 2025
Assignee: VIZIT LABS, INC.
Inventors: Elham Saraee, Jehan Hamedi, Zachary Halloran, Arsenii Mustafin
-
Patent number: 12293284
Abstract: Generative adversarial models have several benefits; however, due to mode collapse, these generators face a quality-diversity trade-off (i.e., the generator models sacrifice generation diversity for increased generation quality). Presented herein are embodiments that improve the performance of adversarial content generation by decelerating mode collapse. In one or more embodiments, a cooperative training paradigm is employed where a second model is cooperatively trained with the generator and helps efficiently shape the data distribution of the generator against mode collapse. Moreover, embodiments of a meta learning mechanism may be used, where the cooperative update to the generator serves as a high-level meta task and helps ensure that the generator parameters stay resistant against mode collapse after the adversarial update. In experiments, the tested embodiments demonstrated efficient slowdown of mode collapse for adversarial text generators.
Type: Grant
Filed: December 29, 2020
Date of Patent: May 6, 2025
Assignee: Baidu USA, LLC
Inventors: Dingcheng Li, Haiyan Yin, Xu Li, Ping Li
-
Patent number: 12277480
Abstract: Techniques for in-flight scaling of machine learning training jobs are described. A request to execute a machine learning (ML) training job is received within a provider network, and the ML training job is executed using a first one or more compute instances. Upon a determination that a performance characteristic of the ML training job satisfies a scaling condition, a second one or more compute instances are added to the ML training job while the first one or more compute instances continue to execute portions of the ML training job.
Type: Grant
Filed: March 23, 2018
Date of Patent: April 15, 2025
Assignee: Amazon Technologies, Inc.
Inventors: Edo Liberty, Thomas Albert Faulhaber, Jr., Zohar Karnin, Gowda Dayananda Anjaneyapura Range, Amir Sadoughi, Swaminathan Sivasubramanian, Alexander Johannes Smola, Stefano Stefani, Craig Wiley
-
Patent number: 12261947
Abstract: A learning system according to an embodiment includes a model generation device and n calculation devices. The model generation device includes a splitting unit, a secret sharing unit, and a share transmission unit. The splitting unit splits m×n pieces of training data into n groups each including m training data pieces, the n groups corresponding to the respective n calculation devices on a one-to-one basis. The secret sharing unit generates m distribution training data pieces for each of the n groups by distributing using a secret sharing scheme, and generates distribution training data for each of the m training data pieces in an i-th group among the n groups, using an i-th element Pi among n elements P1, P2, . . . , Pi, . . . , Pn, by distributing using the secret sharing scheme. The share transmission unit transmits the corresponding m distribution training data pieces to each of the n calculation devices.
Type: Grant
Filed: February 19, 2021
Date of Patent: March 25, 2025
Assignee: Kabushiki Kaisha Toshiba
Inventors: Mari Matsumoto, Masanori Furuta
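The distribute-and-reconstruct idea behind such a scheme can be sketched with plain additive secret sharing over a prime field. The modulus choice is an illustrative assumption, and the patent's elements P_i and its specific scheme are not reproduced here; this only shows how a training value can be split into n shares such that any n−1 shares alone reveal nothing.

```python
import random

# Minimal sketch of additive secret sharing: a value is split into n
# random shares that sum to the value modulo a prime P.
P = 2**61 - 1  # Mersenne prime modulus (illustrative choice)

def share(value, n):
    """Split value into n additive shares modulo P."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % P)  # last share makes the sum work out
    return parts

def reconstruct(parts):
    """Recombine all n shares to recover the value."""
    return sum(parts) % P

shares = share(42, 3)
```

Each calculation device would receive its own share of every training piece, so no single device sees the underlying data.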
-
Patent number: 12260338
Abstract: A transformer-based neural network includes at least one mask attention network (MAN). The MAN computes an original attention data structure that expresses influence between pairs of data items in a sequence of data items. The MAN then modifies the original data structure by mask values in a mask data structure, to produce a modified attention data structure. Compared to the original attention data structure, the modified attention data structure better accounts for the influence of neighboring data items in the sequence of data items, given a particular data item under consideration. The mask data structure used by the MAN can have static and/or machine-trained mask values. In one implementation, the transformer-based neural network includes at least one MAN in combination with at least one other attention network that does not use a mask data structure, and at least one feed-forward neural network.
Type: Grant
Filed: August 27, 2020
Date of Patent: March 25, 2025
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jian Jiao, Yeyun Gong, Nan Duan, Ruofei Zhang, Ming Zhou
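The modify-scores-by-mask step can be sketched as below. The element-wise modulation and the static band mask are assumed illustrative choices, not the patented mask data structure; only the idea of reshaping attention scores with mask values that favor neighboring positions comes from the abstract.

```python
import numpy as np

# Minimal sketch of a mask attention step: scaled dot-product attention
# whose score matrix is modulated element-wise by a mask before softmax.
def mask_attention(Q, K, V, mask):
    """Q, K, V: (seq, d) arrays; mask: (seq, seq) values in [0, 1]."""
    d = Q.shape[-1]
    scores = (Q @ K.T) / np.sqrt(d)       # original attention scores
    scores = scores * mask                # apply mask values to the scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V

seq, d = 4, 8
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((seq, d)) for _ in range(3))
# Static band mask: each position keeps full scores only for itself and
# its immediate neighbors, illustrating the neighbor emphasis.
idx = np.arange(seq)
mask = (np.abs(idx[:, None] - idx[None, :]) <= 1).astype(float)
out = mask_attention(Q, K, V, mask)
```

A machine-trained variant would simply make `mask` a learnable parameter instead of a fixed band.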
-
Patent number: 12254410
Abstract: A method and system are disclosed for training a model that implements a machine-learning algorithm. The technique utilizes latent descriptor vectors to change a multiple-valued output problem into a single-valued output problem and includes the steps of receiving a set of training data, processing, by a model, the set of training data to generate a set of output vectors, and adjusting a set of model parameters and component values for at least one latent descriptor vector in the plurality of latent descriptor vectors based on the set of output vectors. The set of training data includes a plurality of input vectors and a plurality of desired output vectors, and each input vector in the plurality of input vectors is associated with a particular latent descriptor vector in a plurality of latent descriptor vectors. Each latent descriptor vector comprises a plurality of scalar values that are initialized prior to training the model.
Type: Grant
Filed: May 5, 2023
Date of Patent: March 18, 2025
Assignee: NVIDIA Corporation
Inventors: Tero Tapani Karras, Timo Oskari Aila, Samuli Matias Laine
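A toy version of this joint update can be sketched as follows. The linear model, learning rate, and data are illustrative assumptions; the point is only that giving each input its own trainable latent descriptor lets two identical inputs with different targets (a multiple-valued mapping) become separable, since their descriptors diverge during training.

```python
import numpy as np

# Minimal sketch: jointly optimize shared model parameters W and one
# latent descriptor z_i per training input, on a tiny linear model.
rng = np.random.default_rng(0)
X = np.array([[1., 0., 0.],
              [1., 0., 0.],      # duplicate input with a different target
              [0., 1., 0.],
              [0., 0., 1.]])
Y = np.array([1., -1., 0.5, 2.0])
n, dx, dz = X.shape[0], X.shape[1], 2
W = rng.standard_normal(dx + dz) * 0.1   # shared model parameters
Z = rng.standard_normal((n, dz)) * 0.1   # one latent descriptor per input

def loss():
    return float(((np.hstack([X, Z]) @ W - Y) ** 2).mean())

initial_loss = loss()
lr = 0.05
for _ in range(2000):
    inputs = np.hstack([X, Z])           # concatenate input and descriptor
    err = inputs @ W - Y
    gW = 2 * inputs.T @ err / n          # gradient w.r.t. model parameters
    gZ = 2 * np.outer(err, W[dx:]) / n   # gradient w.r.t. descriptors
    W -= lr * gW
    Z -= lr * gZ
final_loss = loss()
```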
-
Patent number: 12249154
Abstract: A method of operating a neural network device including a plurality of layers includes receiving sensing data from at least one sensor, determining environmental information based on the received sensing data, determining multiple layers corresponding to the determined environmental information, and dynamically reconstructing the neural network device by changing at least two layers, among the plurality of layers, to the determined multiple layers.
Type: Grant
Filed: March 9, 2021
Date of Patent: March 11, 2025
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Byeoungsu Kim, Sangsoo Ko, Kyoungyoung Kim, Sanghyuck Ha
-
Patent number: 12248367
Abstract: Novel and useful systems and methods of several functional safety mechanisms for use in an artificial neural network (ANN) processor. The mechanisms can be deployed individually or in combination to provide a desired level of safety in neural networks. Multiple strategies are applied involving redundancy by design, redundancy through spatial mapping, as well as self-tuning procedures that modify static behavior (weights) and monitor dynamic behavior (activations). The various mechanisms of the present invention address ANN system-level safety in situ, as a system-level strategy that is tightly coupled with the processor architecture. The NN processor incorporates several functional safety concepts which reduce the risk that failures occurring during operation go unnoticed. The mechanisms function to detect and promptly flag and report the occurrence of an error, with some mechanisms capable of correction as well.
Type: Grant
Filed: September 29, 2020
Date of Patent: March 11, 2025
Inventors: Avi Baum, Daniel Chibotero, Roi Seznayov, Or Danon, Ori Katz, Guy Kaminitz
-
Patent number: 12236328
Abstract: Methods, computer program products, and systems are presented. The methods include, for instance: obtaining communication data streams, extracting data relevant to a point of view of a user, and generating a point of view record in a knowledge base that may be utilized by another user communicating with the user.
Type: Grant
Filed: November 22, 2017
Date of Patent: February 25, 2025
Assignee: Kyndryl, Inc.
Inventors: James E. Bostick, Danny Y. Chen, Sarbajit K. Rakshit, Keith R. Walker
-
Patent number: 12229651
Abstract: A block-based inference method for a memory-efficient convolutional neural network implementation is performed to process an input image. A block-based inference step is performed to execute a multi-layer convolution operation on each of a plurality of input block data to generate an output block data, and includes selecting a plurality of ith layer recomputing features according to a position of the output block data along a scanning line feed direction, then selecting an ith layer recomputing input feature block data according to the position of the output block data and the ith layer recomputing features, selecting a plurality of ith layer reusing features according to the ith layer recomputing input feature block data along a block scanning direction, and then combining the ith layer recomputing input feature block data with the ith layer reusing features to generate an ith layer reusing input feature block data.
Type: Grant
Filed: October 6, 2020
Date of Patent: February 18, 2025
Assignee: NATIONAL TSING HUA UNIVERSITY
Inventor: Chao-Tsung Huang
-
Patent number: 12229668
Abstract: An operation method and apparatus for a network layer in a Deep Neural Network are provided. The method includes: acquiring a weighted tensor of the network layer in the Deep Neural Network, the weighted tensor comprising a plurality of filters; converting each filter into a linear combination of a plurality of fixed-point convolution kernels by splitting the filter, wherein a weight value of each of the fixed-point convolution kernels is a fixed-point quantized value having a specified bit-width; for each filter, performing a convolution operation on input data of the network layer and each of the fixed-point convolution kernels, respectively, to obtain a plurality of convolution results, and calculating a weighted sum of the obtained convolution results based on the linear combination of the plurality of fixed-point convolution kernels of the filter to obtain an operation result of the filter; and determining output data of the network layer.
Type: Grant
Filed: June 24, 2019
Date of Patent: February 18, 2025
Assignee: HANGZHOU HIKVISION DIGITAL TECHNOLOGY CO., LTD.
Inventors: Yuan Zhang, Di Xie, Shiliang Pu
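The filter-splitting step can be sketched with a greedy residual decomposition. The greedy scheme, bit-width, and scaling rule are illustrative assumptions, not the patented conversion; only the idea of expressing a float filter as w ≈ Σ_k α_k · q_k with low-bit integer kernels q_k, so that convolution distributes over the sum by linearity, comes from the abstract.

```python
import numpy as np

# Minimal sketch: greedily split a float filter into a linear combination
# of fixed-point (low-bit integer) kernels, refining the residual with
# each additional kernel.
def split_filter(w, num_kernels=2, bits=4):
    qmax = 2 ** (bits - 1) - 1
    residual = np.asarray(w, dtype=float).copy()
    alphas, kernels = [], []
    for _ in range(num_kernels):
        alpha = np.abs(residual).max() / qmax   # scale for this kernel
        if alpha == 0.0:
            break
        q = np.clip(np.round(residual / alpha), -qmax, qmax)
        alphas.append(alpha)
        kernels.append(q.astype(int))
        residual = residual - alpha * q          # what the next kernel must fix
    return alphas, kernels

w = np.array([0.52, -0.31, 0.10, 0.07])
alphas, kernels = split_filter(w)
approx = sum(a * k for a, k in zip(alphas, kernels))
```

By linearity, conv(x, w) ≈ Σ_k α_k · conv(x, q_k), so each convolution runs on a fixed-point kernel and the results are recombined with the weights α_k, as the abstract describes.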
-
Patent number: 12217158
Abstract: An apparatus includes circuitry for a neural network that is configured to perform forward propagation neural network operations on floating point numbers having a first n-bit floating point format. The first n-bit floating point format has a configuration consisting of a sign bit, m exponent bits and p mantissa bits where m is greater than p. The circuitry is further configured to perform backward propagation neural network operations on floating point numbers having a second n-bit floating point format that is different than the first n-bit floating point format. The second n-bit floating point format has a configuration consisting of a sign bit, q exponent bits and r mantissa bits where q is greater than m and r is less than p.
Type: Grant
Filed: September 3, 2019
Date of Patent: February 4, 2025
Assignee: International Business Machines Corporation
Inventors: Xiao Sun, Jungwook Choi, Naigang Wang, Chia-Yu Chen, Kailash Gopalakrishnan
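The effect of the two formats can be illustrated with a hypothetical rounding helper. The function below and the specific bit splits (a 1-4-3 forward format and a 1-5-2 backward format, which satisfy the abstract's constraints m > p, q > m, r < p for n = 8) are assumptions for illustration, not the patented circuitry; subnormals and overflow handling are omitted.

```python
import math

# Hypothetical helper: round a positive or negative float to a custom
# format with exp_bits exponent bits and man_bits mantissa bits.
def quantize(x, exp_bits, man_bits):
    if x == 0.0:
        return 0.0
    sign = math.copysign(1.0, x)
    e = math.floor(math.log2(abs(x)))           # unbiased exponent
    bias = 2 ** (exp_bits - 1) - 1
    e = max(min(e, bias), 1 - bias)             # clamp to representable range
    frac = abs(x) / 2.0 ** e                    # in [1, 2) for normal values
    frac = round(frac * 2 ** man_bits) / 2 ** man_bits   # round the mantissa
    return sign * frac * 2.0 ** e

fwd = quantize(0.27, exp_bits=4, man_bits=3)   # 1-4-3 forward-style format
bwd = quantize(0.27, exp_bits=5, man_bits=2)   # 1-5-2 backward-style format
```

The backward format trades mantissa precision for exponent range, matching the intuition that gradients span more orders of magnitude than activations.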
-
Patent number: 12182697
Abstract: A computing device includes one or more processors, a first random access memory (RAM) comprising magnetic random access memory (MRAM), a second random access memory of a type distinct from MRAM, and a non-transitory computer-readable storage medium storing instructions for execution by the one or more processors. The computing device receives first data on which to train an artificial neural network (ANN) and trains the ANN by, using the first RAM comprising the MRAM, performing a first set of training iterations to train the ANN using the first data, and, after performing the first set of training iterations, using the second RAM of the type distinct from MRAM, performing a second set of training iterations to train the ANN using the first data. The computing device stores values for the trained ANN. The trained ANN is configured to classify second data based on the stored values.
Type: Grant
Filed: December 17, 2018
Date of Patent: December 31, 2024
Assignee: Integrated Silicon Solution, (Cayman) Inc.
Inventors: Michail Tzoufras, Marcin Gajek
-
Patent number: 12169786
Abstract: Described herein is a neural network accelerator (NNA) with reconfigurable memory resources for forming a set of local memory buffers comprising at least one activation buffer, at least one weight buffer, and at least one output buffer. The NNA supports a plurality of predefined memory configurations that are optimized for maximizing throughput and reducing overall power consumption in different types of neural networks. The memory configurations differ with respect to at least one of a total amount of activation, weight, or output buffer memory, or a total number of activation, weight, or output buffers. Depending on which type of neural network is being executed and the memory behavior of the specific neural network, a memory configuration can be selected accordingly.
Type: Grant
Filed: June 27, 2019
Date of Patent: December 17, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Tariq Afzal, Arvind Mandhani, Shiva Navab