Patent Applications Published on September 12, 2024
  • Publication number: 20240303458
    Abstract: The present invention generally relates to articles (e.g., tubes, storage containers, rings, RNS etc.) with embedded radio frequency identification (RFID) labels and methods of manufacturing the same, and more particularly relates to articles/storage containers comprising a special design of RFID labels facilitating tracking of a storage container throughout its lifecycle, and facilitating reading/writing of the RFID label from one and/or all directions with a single IC (Integrated Circuit). The radio frequency identification (RFID) labelled article comprises (a) an open front end, (b) a closed rear end, and (c) at least one sidewall between the open front end and the closed rear end to enclose the article, wherein the at least one sidewall comprises an embedded RFID label, and wherein the RFID label is readable in one and/or all directions and also capable of being read in bulk.
    Type: Application
    Filed: January 10, 2022
    Publication date: September 12, 2024
    Inventors: Puneet KAPOOR, Alok KAPOOR
  • Publication number: 20240303459
    Abstract: A noncontact communication medium includes a resonance circuit configured to resonate, using a current induced in the antenna coil, to generate an alternating current voltage, and a control circuit. The resonance circuit includes a variable condenser connected in parallel to the antenna coil, and a variable resistor connected in parallel to the antenna coil. The control circuit adjusts the capacitance of the variable condenser to cause the resonance circuit to resonate at a predetermined resonance frequency, and adjusts the resistance value of the variable resistor to adjust a Q value. The noncontact communication medium operates using power based on a direct current voltage obtained by rectifying the alternating current voltage, and communicates using a signal corresponding to the waveform of the alternating current voltage.
    Type: Application
    Filed: February 27, 2024
    Publication date: September 12, 2024
    Inventor: Kenji NISHIDA
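The tuning described in the abstract above follows from the standard relations for a parallel RLC tank: resonance at f0 = 1 / (2π√(LC)) and quality factor Q = R·√(C/L). A minimal sketch of the two adjustment steps, assuming a parallel RLC model and illustrative component values (the 2 µH coil and NFC carrier frequency are examples, not from the patent):

```python
import math

def tune_capacitance(coil_inductance_h: float, target_freq_hz: float) -> float:
    """Capacitance that makes an LC tank resonate at f0 = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (coil_inductance_h * (2 * math.pi * target_freq_hz) ** 2)

def tune_resistance(coil_inductance_h: float, capacitance_f: float, target_q: float) -> float:
    """Parallel resistance giving quality factor Q = R * sqrt(C/L) for a parallel RLC tank."""
    return target_q / math.sqrt(capacitance_f / coil_inductance_h)

# Example: a 2 uH antenna coil tuned to the 13.56 MHz NFC carrier with Q = 30.
c = tune_capacitance(2e-6, 13.56e6)
r = tune_resistance(2e-6, c, target_q=30)
```

Adjusting C first fixes the resonance frequency; adjusting R then sets Q (and hence bandwidth) without moving the resonance.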
  • Publication number: 20240303460
    Abstract: Wearable ring using near field communications (NFC) to store, receive, and transmit information. A ring can be made from silver or gold materials, the ring band having a first unfinished section with a recess for retaining an NFC chip and antenna and a cut or broken section creating a thin vertical line inside the ring from which a signal can escape. The antenna is a copper wire wrapped around the ring, in the recess of the unfinished band, before being covered with a second layer of metal having a non-metallic section covering the cut or broken section allowing the NFC signal to escape from the inner confines/recess of the ring. The antenna is turned around the ring and soldered to a near field communications (NFC) chip. The ring uses the power from the NFC chip, which communicates via magnetic induction providing a reading distance up to 5 cm.
    Type: Application
    Filed: March 10, 2024
    Publication date: September 12, 2024
    Inventor: Davit Kvitsiani
  • Publication number: 20240303461
    Abstract: Disclosed are various embodiments of a radio-frequency identification (RFID) watch band. Various embodiments include a strap body, a first radio frequency identifier (RFID) chip embedded within the strap body, and a second RFID chip embedded within the strap body. Various embodiments can also include a third RFID chip embedded within the strap body. In various embodiments, the watch band can include a first connector attached at a first end of the strap body. Various embodiments can further include a second connector attached at a second end of the strap body.
    Type: Application
    Filed: May 20, 2024
    Publication date: September 12, 2024
    Inventor: Tobias J. Bushway
  • Publication number: 20240303462
    Abstract: In various examples, a machine learning model is converted for execution by a computing device. For example, a computing graph is generated based on the machine learning model, and sub-graphs within the computing graph that match known sub-structures are detected and combined into a vertex to generate an optimized computing graph. A net-list object and weight object are then generated based on the optimized computing graph and provided to the computing device to enable inferencing operations.
    Type: Application
    Filed: March 9, 2023
    Publication date: September 12, 2024
    Inventors: Zichuan LIU, Xin LU, Wentian ZHAO
  • Publication number: 20240303463
    Abstract: Methods, systems, and devices for wireless communication are described. A first device may select a partition layer for partitioning a neural network model between the first device and a second device. The first device may implement a first sub-neural network model that includes the partition layer and the second device may implement a second sub-neural network model that includes a layer adjacent to the partition layer.
    Type: Application
    Filed: March 7, 2023
    Publication date: September 12, 2024
    Inventors: Kyle Chi GUAN, Hong Cheng, Qing Li, Kapil Gulati, Himaja Kesavareddigari, Mahmoud Ashour
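The partitioning idea above can be sketched as a simple split of a layer list: the first device runs every layer up to and including the partition layer, the second device runs the rest. The toy lambda "layers" and the helper names are illustrative, not from the patent:

```python
# Hypothetical sketch of split inference: cut the model at a partition layer,
# run the head on the first device and the tail on the second.
def partition_model(layers, partition_index):
    """Split a layer list so the head ends with the partition layer."""
    head = layers[: partition_index + 1]   # runs on the first device
    tail = layers[partition_index + 1 :]   # runs on the second device
    return head, tail

def run(layers, x):
    for layer in layers:
        x = layer(x)
    return x

# Toy layers: plain functions standing in for neural network layers.
layers = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
head, tail = partition_model(layers, partition_index=1)
intermediate = run(head, 5)       # computed on device 1, sent over the link
result = run(tail, intermediate)  # finished on device 2
```

The tail's first layer is adjacent to the partition layer, matching the abstract's description of the second sub-neural network.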
  • Publication number: 20240303464
    Abstract: A method includes providing a first set of data objects to a first skip router of a neural network (NN). The NN includes a first NN layer and a second NN layer. The first set of data objects is subdivided into a first set of skip objects and a first set of non-skip objects based on a first skip logic implemented by the first skip router and a first context of each data object in the first set of data objects. A first set of processed objects is generated based on the first set of non-skip objects and a first layer logic implemented by the first NN layer. Predictions are generated based on a second set of data objects and a second layer logic implemented by the second NN layer. The second set of data objects includes the first set of processed objects and the first set of skip objects.
    Type: Application
    Filed: March 7, 2024
    Publication date: September 12, 2024
    Inventors: Nan Du, Tao Wang, Yanqi Zhou, Tao Lei, Yuanzhong Xu, Andrew Mingbo Dai, Zhifeng Chen, Dewen Zeng, Yingwei Cui
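The skip-router mechanism above can be illustrated with a mask-based sketch: the router partitions a batch into skip and non-skip objects, only non-skip objects pass through the layer, and the two groups are merged before the next layer. The norm-based skip rule here is an assumption standing in for the patent's "skip logic":

```python
import numpy as np

def skip_layer(x, skip_mask, layer_fn):
    """Apply layer_fn only to non-skip rows; skip rows pass through unchanged."""
    out = x.copy()
    out[~skip_mask] = layer_fn(x[~skip_mask])  # process only non-skip objects
    return out                                 # merged set feeds the next layer

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))
mask = np.linalg.norm(x, axis=1) < 1.5         # toy "context"-based skip rule
y = skip_layer(x, mask, lambda h: np.tanh(h))
```

The merged output plays the role of the "second set of data objects" fed to the next layer.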
  • Publication number: 20240303465
    Abstract: A method for training a ranking model for intelligent recommendation, and an intelligent recommendation method are provided, which relate to fields of data processing and machine learning technologies. The method includes: acquiring first user data and first resource data of a target domain, and acquiring second user data and second resource data of a source domain; determining an implicit feature based on the first user data, the first resource data, the second user data and the second resource data; and training the ranking model based on the implicit feature, wherein the ranking model is configured to recommend a resource to a user of the target domain.
    Type: Application
    Filed: June 1, 2022
    Publication date: September 12, 2024
    Applicant: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Xuechao WU, Qian CAO, Xiahui HE, Yunlong BAI
  • Publication number: 20240303466
    Abstract: Methods and systems are presented for improving the accuracy performance and utilization rates of a cascade machine learning model system. The cascade machine learning model system includes multiple machine learning models configured to process transactions according to a cascade operation scheme. Hyperparameter values usable to configure the multiple machine learning models are determined collectively such that the hyperparameter values are selected to optimize the performance of the multiple machine learning models when the models operate according to the cascade operation scheme. Furthermore, an efficacy determination model is used to determine an efficacy of the cascade machine learning model in processing a given transaction. Based on an output of the efficacy determination model, one or more characteristics of the cascade machine learning model are modified for processing the transaction.
    Type: Application
    Filed: March 10, 2023
    Publication date: September 12, 2024
    Inventors: Nitin Satyanarayan Sharma, Sanae Amani Geshnigani
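The cascade operation scheme above can be sketched as a two-stage decision: a cheap first-stage model scores a transaction, and only low-confidence cases escalate to a heavier second stage. The confidence threshold stands in for one of the jointly tuned hyperparameters; the toy models and threshold value are illustrative:

```python
def cascade_score(tx, stage1, stage2, confidence_threshold=0.8):
    """Score a transaction, escalating to stage2 only when stage1 is unsure."""
    score = stage1(tx)
    confidence = max(score, 1.0 - score)   # distance from the decision boundary
    if confidence >= confidence_threshold:
        return score                       # first stage is confident enough
    return stage2(tx)                      # escalate to the heavier model

# Toy stand-ins for trained models.
stage1 = lambda tx: 0.95 if tx["amount"] < 100 else 0.55
stage2 = lambda tx: 0.10 if tx["verified"] else 0.90
low = cascade_score({"amount": 50, "verified": True}, stage1, stage2)
high = cascade_score({"amount": 500, "verified": False}, stage1, stage2)
```

Tuning the threshold and both models jointly, rather than one model at a time, is what the abstract means by selecting hyperparameter values collectively for the cascade.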
  • Publication number: 20240303467
    Abstract: Machine learning and testing devices are provided. The device applies a neural network operation of a first student neural network to a training image to generate first prediction information, applies a neural network operation of a second student neural network to generate second prediction information, applies an error identification operation to a first integrated image to generate first error identification prediction information, applies the error identification operation to a second integrated image to generate second error identification prediction information, applies a network operation of a first teacher neural network to the training image to generate first pseudo label information, applies a neural network operation of a second teacher neural network to the training image to generate second pseudo label information, back-propagates a loss and updates parameters of the first and second student neural networks.
    Type: Application
    Filed: November 13, 2023
    Publication date: September 12, 2024
    Inventors: Jae Hoon Cho, Jae Hyeon Park, Hyun Kook Park
  • Publication number: 20240303468
    Abstract: A process for handling interleaved sequences using RNNs includes receiving data of a first transaction, retrieving a first state (e.g., a default or a saved RNN state for an entity associated with the first transaction), and determining a new second state and a prediction result using the first state and input data based on the first transaction. The process includes updating the saved RNN state for the entity to be the second state. The process includes receiving data of a second transaction, where the second transaction is associated with the same entity as the first transaction. The process unloops an RNN associated with the saved RNN state including by: retrieving the second state, determining a new third state and a prediction result using the second state and input data based on the second transaction, and updating the saved RNN state for the entity to be the third state.
    Type: Application
    Filed: April 16, 2024
    Publication date: September 12, 2024
    Inventors: Bernardo José Amaral Nunes de Almeida Branco, Pedro Caldeira Abreu, Ana Sofia Leal Gomes, Mariana S.C. Almeida, João Tiago Barriga Negra Ascensão, Pedro Gustavo Santos Rodrigues Bizarro
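The per-entity state handling above can be sketched with a dictionary keyed by entity: each transaction retrieves that entity's saved state, advances it one step, and writes it back, so interleaved entities never mix states. The toy tanh recurrence is an assumption, not the patent's actual RNN cell:

```python
import numpy as np

class EntityRNN:
    """Sketch of "unlooping" an RNN across interleaved per-entity transactions."""

    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.w_h = rng.normal(scale=0.1, size=(dim, dim))
        self.w_x = rng.normal(scale=0.1, size=(dim, dim))
        self.states = {}                  # entity id -> saved RNN state
        self.default = np.zeros(dim)      # first state for unseen entities

    def step(self, entity, x):
        h = self.states.get(entity, self.default)     # retrieve saved state
        h_new = np.tanh(h @ self.w_h + x @ self.w_x)  # one recurrence step
        self.states[entity] = h_new                   # update saved state
        return h_new                                  # basis for the prediction

rnn = EntityRNN(dim=3)
rnn.step("alice", np.ones(3))       # first transaction: starts from default state
rnn.step("bob", np.ones(3))         # different entity, independent state
h2 = rnn.step("alice", np.ones(3))  # continues from alice's saved state
```

Because state is looked up per entity, transactions from many entities can arrive in any interleaved order.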
  • Publication number: 20240303469
    Abstract: Implementations for training a denoising stacked autoencoder (DAE) using a noisy training dataset comprising a noisy sub-set and a non-noisy sub-set, providing an artificial neural network (ANN) including multiple hidden layers, at least one hidden layer including at least a portion of an encoder of the DAE, the at least a portion of the encoder comprising parameters determined during training of the DAE, training the ANN using a training dataset, and providing a version of the ANN for inference.
    Type: Application
    Filed: June 14, 2023
    Publication date: September 12, 2024
    Inventors: Georgios Passalis, Neha Chopra, Sneha Chate, Boobesh Rajendran, Shiva Tyagi, Rajan Gaba
  • Publication number: 20240303470
    Abstract: This application discloses a construction method and apparatus for a bipartite graph, and a display method and apparatus for a bipartite graph. The construction method includes: searching a computational graph for at least one cross-communication edge corresponding to a first communication node, where the first communication node is one of M communication nodes included in the computational graph, the first communication node corresponds to P predecessor nodes and Q successor nodes, each of the at least one cross-communication edge indicates a communication path between one of the P predecessor nodes and one of the Q successor nodes, and no cross-communication edge passes through the M communication nodes; and cutting cross-communication edges respectively corresponding to the M communication nodes, and performing an aggregation operation to obtain the bipartite graph, where any two of the M communication nodes are connected without an edge in the bipartite graph.
    Type: Application
    Filed: May 17, 2024
    Publication date: September 12, 2024
    Inventors: Zhongwei WANG, Rongchen ZHU, Han GAO
  • Publication number: 20240303471
    Abstract: Implementations herein disclose an activation function for homomorphically-encrypted neural networks. A data-agnostic activation technique is provided that collects information about the distribution of the most-dominant activated locations in the feature maps of the trained model and maintains a map of those locations. This map, along with a defined percent of random locations, decides which neurons in the model are activated using an activation function. Advantages of implementations herein include allowing for efficient activation function computations in encrypted computations of neural networks, yet no data-dependent computation is done during inference time (e.g., data-agnostic). Implementations utilize negligible overhead in model storage, preserve the same accuracy as with general activation functions, and run orders of magnitude faster than approximation-based activation functions.
    Type: Application
    Filed: March 6, 2023
    Publication date: September 12, 2024
    Applicant: Intel Corporation
    Inventors: Raizy Kellerman, Alex Nayshtut, Omer Ben-Shalom
  • Publication number: 20240303472
    Abstract: An example apparatus includes processor circuitry to: access first input data from meters, the meters to monitor media devices associated with a plurality of panelists, the first input data including media source data and panel data; reduce a dimensionality of the first input data to generate second input data of reduced dimensionality relative to the first input data, the dimensionality of the first input data to be reduced based on a prior probability of an audience rating associated with the plurality of panelists and an approximation of a dependency of the audience rating on at least one of the media source data and the panel data; and decode the second input data of reduced dimensionality to output a probability model parameter for a multivariate probability model, the multivariate probability model having dimensions corresponding to the first input data, the multivariate probability model to label census data.
    Type: Application
    Filed: May 15, 2024
    Publication date: September 12, 2024
    Inventors: Joshua Ivan Friedman, Tara Zeynep Baris, Neel Parekh
  • Publication number: 20240303473
    Abstract: Embodiments provide a generative AI creation framework for building a customized generative AI stack using a foundational model (such as GPT) based on user-defined prompts, a natural language description of the task to be accomplished, and domain adaptation. In one embodiment, organization-specific knowledge may be injected into the prompt and/or the foundational model. In this way, the customized generative AI stack supports a full spectrum of domain-adaptive prompts to enable a full spectrum of personalized and adaptive AI chat applications.
    Type: Application
    Filed: October 27, 2023
    Publication date: September 12, 2024
    Inventors: Na (Claire) Cheng, Jayesh Govindarajan, Zachary Alexander, Shashank Harinath, Atul Kshirsagar, Fermin Ordaz
  • Publication number: 20240303474
    Abstract: A method and a server for fine-tuning a generative machine-learning model (GMLM) are provided. The method comprises: receiving a given textual description of a testing object and a testing image thereof, the given textual description being indicative of what is to be depicted in the testing image in a natural language; receiving keywords associated with the given textual description, a given keyword being indicative of a rendering instruction for rendering the testing object in the testing image; generating, based on the keywords, augmented textual descriptions of the image; feeding, to the GMLM, each one of the augmented textual descriptions to generate image candidates of the object; transmitting the image candidates to a plurality of human assessors for pairwise comparison thereof; based on the pairwise comparison, determining, for the given image candidate, a respective degree of visual appeal; and using the respective degree of visual appeal for fine-tuning the GMLM.
    Type: Application
    Filed: February 23, 2024
    Publication date: September 12, 2024
    Inventors: Nikita PAVLICHENKO, Dmitrii USTALOV
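One simple way to turn the pairwise assessor judgments above into a per-candidate "degree of visual appeal" is to score each candidate by its win fraction across all comparisons it appeared in. This win-fraction rule is an assumption for illustration (real systems often fit a Bradley-Terry model instead):

```python
from collections import Counter

def appeal_scores(comparisons):
    """comparisons: list of (winner, loser) pairs from human assessors."""
    wins, seen = Counter(), Counter()
    for winner, loser in comparisons:
        wins[winner] += 1
        seen[winner] += 1
        seen[loser] += 1
    # Degree of visual appeal = fraction of comparisons the candidate won.
    return {c: wins[c] / seen[c] for c in seen}

scores = appeal_scores([("a", "b"), ("a", "c"), ("b", "c"), ("a", "b")])
```

The resulting scores can then rank image candidates for the fine-tuning step.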
  • Publication number: 20240303475
    Abstract: A method of privacy protection includes receiving, from a service customer, a service request requesting a privacy protection service. A generative model is used to generate synthetic data based on the service request, and the synthetic data is provided to a discriminator. The discriminator performs a comparison between data from the service customer and the received synthetic data, and provides a result of the comparison to the generator, where privacy of the service customer is included in or inferred from the data from the service customer. Based on the result of the comparison from the discriminator, the generator updates the generative model until the generated synthetic data meets a preconfigured requirement. Each time the generative model is updated, newly generated synthetic data is provided to the discriminator. Once the preconfigured requirement is met, the latest synthetic data or the latest generative model is provided to a data consumer.
    Type: Application
    Filed: May 17, 2024
    Publication date: September 12, 2024
    Applicant: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Chenchen YANG, Hang ZHANG, Xu LI, Bidi YING
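The generate-compare-update loop above can be sketched in a highly simplified form: here the "generator" is just a parametric sampler, the "discriminator" compares summary statistics, and the loop repeats until a preconfigured requirement is met. Real implementations would use trained neural networks on both sides; everything below is illustrative:

```python
import random

def discriminator(real, synthetic):
    """Comparison result: gap between real and synthetic sample means."""
    return sum(real) / len(real) - sum(synthetic) / len(synthetic)

def generate(mean, n, rng):
    """Toy 'generative model': a Gaussian sampler with one parameter."""
    return [rng.gauss(mean, 1.0) for _ in range(n)]

rng = random.Random(0)
real = [rng.gauss(5.0, 1.0) for _ in range(500)]  # customer data, kept private
mean = 0.0                                        # generator parameter
for _ in range(200):
    synthetic = generate(mean, 500, rng)
    gap = discriminator(real, synthetic)          # comparison by discriminator
    if abs(gap) < 0.05:                           # preconfigured requirement
        break
    mean += 0.5 * gap                             # update the generative model
```

Once the requirement is met, the synthetic data (or the fitted generator) can be handed to a data consumer without exposing the customer's raw records.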
  • Publication number: 20240303476
    Abstract: A neural network system includes a neural network circuit including first memory cells arranged in an array; and a self-referencing circuit electrically connected to a row line or a column line of the neural network circuit and configured to apply current to the connected row line or column line so that a plurality of target memory cells have preset target weights, wherein the target memory cells include all memory cells positioned on the row line or the column line to which the self-referencing circuit is connected.
    Type: Application
    Filed: March 6, 2024
    Publication date: September 12, 2024
    Applicant: PEBBLE SQUARE, INC.
    Inventor: Choong Hyun LEE
  • Publication number: 20240303477
    Abstract: Embodiments include methods, and processing devices for implementing the methods. Various embodiments may include calculating a batch softmax normalization factor using a plurality of logit values from a plurality of logits of a layer of a neural network, normalizing the plurality of logit values using the batch softmax normalization factor, and mapping each of the normalized plurality of logit values to one of a plurality of manifolds in a coordinate space. In some embodiments, each of the plurality of manifolds represents a number of labels to which a logit can be classified. In some embodiments, at least one of the plurality of manifolds represents a number of labels other than one label.
    Type: Application
    Filed: November 16, 2020
    Publication date: September 12, 2024
    Inventors: Shuai LIAO, Efstratios GAVVES, Cornelis Gerardus Maria SNOEK
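The normalization step above can be sketched directly: a single softmax normalization factor is computed over all logit values in the batch (rather than per row), and every logit is normalized with it. The manifold-mapping step is omitted, and the max-shift for numerical stability is a standard assumption, not from the abstract:

```python
import numpy as np

def batch_softmax(logits):
    """Normalize all logit values with one batch-wide softmax factor."""
    shifted = logits - logits.max()   # stabilize the exponentials
    z = np.exp(shifted).sum()         # single batch softmax normalization factor
    return np.exp(shifted) / z

logits = np.array([[2.0, 1.0, 0.5],
                   [0.1, 3.0, 1.2]])
p = batch_softmax(logits)
```

Because the factor is shared across the batch, the outputs sum to one over the whole batch rather than per example.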
  • Publication number: 20240303478
    Abstract: Cooperative training migration is performed by training, cooperatively with a computational device through a network, the neural network model, creating, during the iterations of training, a data checkpoint, the data checkpoint including the gradient values and the weight values of the server partition, the loss value, and an optimizer state, receiving, during the iterations of training, a migration notice, the migration notice including an identifier of a second edge server, and transferring, during the iterations of training, the data checkpoint to the second edge server.
    Type: Application
    Filed: March 23, 2022
    Publication date: September 12, 2024
    Inventors: Rehmat ULLAH, Di WU, Paul HARVEY, Peter KILPATRICK, Ivor SPENCE, Blesson VARGHESE
  • Publication number: 20240303479
    Abstract: Disclosed are a model training method, a performance prediction method, an apparatus, a device and a medium, which relate to the technical field of display. The model training method includes: acquiring a training sample set, wherein the training sample set includes training design data and test data of a sample display device; inputting the training design data into a model to be trained, and training the model to be trained according to an output of the model to be trained and the training test data to obtain an initial prediction model; and, when the initial prediction model satisfies a pre-set condition, determining the initial prediction model as a performance prediction model, the performance prediction model being used to predict performance data of a target display device.
    Type: Application
    Filed: March 30, 2022
    Publication date: September 12, 2024
    Applicant: BOE Technology Group Co., Ltd.
    Inventors: Quanguo Zhou, Jie Wang, Zhidong Wang, Cheng Zeng, Lirong Xu, Qing Zhang, Hao Tang, Lijia Zhou
  • Publication number: 20240303480
    Abstract: A Vector Processing Unit (VPU) comprises a plurality of core clusters comprising multiple Single Instruction Multiple DataStream (SIMD) cores, wherein each SIMD core comprises at least one Optical Arithmetic Unit (OAU). The VPU may be configured to receive instructions as digital input vectors, determine information related to light intensities and arithmetic operations, and transmit the digital input vectors and the determined information to at least one of the plurality of core clusters. The plurality of core clusters is configured to receive the digital input vectors and related information. The data is then converted to analog form which is again converted to light beams. The light beams are multiplexed to generate an operand that is processed by the OAU to generate a result. The result is again converted to digital signal and sent to the VPU.
    Type: Application
    Filed: March 9, 2023
    Publication date: September 12, 2024
    Inventor: SATHVIK REDROUTHU
  • Publication number: 20240303481
    Abstract: A method and system for predicting a fuel consumption of an aircraft is disclosed. First, a centralized machine learning model is built and trained using available public data. A version of the centralized model is then sent to one or more edge computing devices controlled by entities that own aircraft, and they input their proprietary data into the version of the centralized model they receive. The received version of the centralized model is trained using the proprietary data and a neural network gain for the centralized machine learning model is determined based on a weighted average of edge neural network gains from the edge computing devices. This neural network gain is used to further train the centralized model to give a more accurate prediction of the fuel consumption without the centralized machine learning model actually ever receiving and being trained with the proprietary data of the airlines and other entities.
    Type: Application
    Filed: March 9, 2023
    Publication date: September 12, 2024
    Inventors: Chaitanya Pavan Kumar Aripirala, Veeresh Kumar Masaru Narasimhulu, Satyendra Yadav
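The aggregation step above can be sketched as a sample-size-weighted average: each edge device reports a locally computed neural-network gain, and the central model combines them without ever seeing the proprietary data. The function name and the airline example are illustrative:

```python
def aggregate_gains(edge_gains, sample_counts):
    """Weighted average of edge gains, weights proportional to local data size."""
    total = sum(sample_counts)
    return sum(g * n for g, n in zip(edge_gains, sample_counts)) / total

# Three airlines report gains computed on 1000, 3000 and 500 local flights.
gain = aggregate_gains([0.10, 0.40, 0.20], [1000, 3000, 500])
```

This is the classic federated-averaging pattern: only the gains cross the network, so the proprietary flight data never leaves the edge devices.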
  • Publication number: 20240303482
    Abstract: A method for prioritizing training examples in a training data set for a classifier designed to map measurement data to classification scores with respect to classes of a predetermined classification. The method includes: the classifier is trained with the training examples from the training data set; modifications are generated for at least one training example; classification scores are respectively determined from the modifications by means of the classifier; a priority of the training example to which the modifications belong is determined from the distribution of these classification scores.
    Type: Application
    Filed: March 10, 2023
    Publication date: September 12, 2024
    Inventors: Laura Beggel, William Harris Beluch
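The prioritization idea above can be sketched as: perturb a training example several times, classify every perturbation, and use the spread of the resulting scores as the example's priority (examples whose scores are unstable under modification score higher). The Gaussian perturbation and variance-based priority are assumptions about how "modifications" and "distribution" are realized:

```python
import numpy as np

def priority(example, classify, n_mods=20, noise=0.1, seed=0):
    """Priority = spread of classification scores over random modifications."""
    rng = np.random.default_rng(seed)
    mods = example + rng.normal(scale=noise, size=(n_mods,) + example.shape)
    scores = np.array([classify(m) for m in mods])
    return scores.std()               # wide score spread -> high priority

classify = lambda x: 1.0 / (1.0 + np.exp(-x.sum()))  # toy classifier
stable = priority(np.full(4, 3.0), classify)         # far from the boundary
unstable = priority(np.zeros(4), classify)           # right on the boundary
```

Examples near the decision boundary get high priority, which is the usual motivation for this style of active prioritization.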
  • Publication number: 20240303483
    Abstract: A partitioning method includes the steps of: acquiring an observation matrix including time series (x1, x2, . . . , xp); for each time series, calculating a distance matrix comprising distance values between the elements of the time series, then generating a primary image on the basis of the distance matrix; implementing a learning algorithm for segmenting the primary image so as to obtain a segmented image; defining, on the basis of the segmented image, a primary boundary signal representative of the boundaries; and merging the primary boundary signals in order to obtain a global boundary signal, and defining classes on the basis of the global boundary signal.
    Type: Application
    Filed: December 9, 2021
    Publication date: September 12, 2024
    Inventors: Badr MANSOURI, Alexandre EID, Guy CLERC
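The first step above, turning a time series into a "primary image", can be sketched as a pairwise distance matrix D[i, j] = |x_i - x_j|, which can then be treated as a grayscale image for the segmentation network. The absolute-difference metric is one common choice, assumed here for illustration:

```python
import numpy as np

def distance_matrix(series):
    """Primary image: pairwise distances between elements of one time series."""
    x = np.asarray(series, dtype=float)
    return np.abs(x[:, None] - x[None, :])

# Two regimes (values near 0, then values near 2) produce a visible
# block structure in the matrix, which segmentation can pick up.
d = distance_matrix([0.0, 0.1, 2.0, 2.1])
```

Segmenting this block structure yields the per-series boundary signal that the method later merges across series.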
  • Publication number: 20240303484
    Abstract: A neuromorphic circuit implementing a spiking neural network and including bidirectional synapses made by a set of memristors arranged in an array, neurons firing spikes at a variable rate and connected to neurons via a synapse, and a neural network training module including, for at least one bidirectional synapse, an estimation unit obtaining an estimation of the time derivative of the spike rate of each neuron, an interconnection having at least two positions between the synapse and each neuron, and a controller sending a control signal to the interconnection after a spike, the signal changing the position of the interconnection, so as to connect the estimation unit and the synapse.
    Type: Application
    Filed: February 8, 2022
    Publication date: September 12, 2024
    Inventors: Julie GROLLIER, Erwann MARTIN, Damien QUERLIOZ, Teodora PETRISOR
  • Publication number: 20240303485
    Abstract: The disclosure provides an apparatus, method, device, and medium for loss balancing in MTL. The apparatus includes interface circuitry and processor circuitry. The processor circuitry is configured to initialize parameters of shared layers of a deep neural network for MTL using a pre-trained neural network; determine a custom interval consisting of a designated number of mini-batch training steps and a designated window of N custom intervals (N>2); for each task, calculate a loss change rate between each pair of N−1 pairs of neighboring custom intervals within a designated window prior to a present custom interval and a gradient magnitude with respect to selected shared weights within the designated window prior to the present custom interval, and adjust a weight of the task based on the calculated loss change rate and gradient magnitude with respect to selected shared weights.
    Type: Application
    Filed: December 2, 2021
    Publication date: September 12, 2024
    Inventors: Wenjing KANG, Xiaochuan LUO, Xianchao XU
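The loss-change-rate computation above can be sketched numerically: for each task, compare mean losses of neighboring custom intervals inside the window, then shift weight toward tasks whose loss is falling slowly. The exact re-weighting rule and the gradient-magnitude term are omitted; the relative-improvement formula below is an assumption:

```python
import numpy as np

def loss_change_rates(interval_losses):
    """interval_losses: per-task mean loss for each of N custom intervals."""
    l = np.asarray(interval_losses, dtype=float)   # shape (tasks, N)
    return (l[:, :-1] - l[:, 1:]) / l[:, :-1]      # N-1 neighboring pairs

def adjust_weights(weights, rates, lr=0.5):
    """Boost tasks improving slower than average; renormalize to sum to one."""
    mean_rate = rates.mean(axis=1)
    w = np.asarray(weights) * (1.0 + lr * (mean_rate.mean() - mean_rate))
    return w / w.sum()

rates = loss_change_rates([[1.0, 0.5, 0.25],    # task 0: improving fast
                           [1.0, 0.95, 0.90]])  # task 1: improving slowly
w = adjust_weights([0.5, 0.5], rates)
```

The slowly improving task ends up with the larger weight, which is the balancing behavior the abstract describes.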
  • Publication number: 20240303486
    Abstract: An apparatus may be configured to: process at least one input with an efficient neural network; determine at least one performance criteria for the efficient neural network; and activate online learning for the efficient neural network based, at least partially, on the at least one performance criteria. An apparatus may be configured to: receive, from an efficient neural network, at least one video frame or at least one feature; determine at least one inference result based, at least partially, on the at least one video frame or the at least one feature; and transmit, to the efficient neural network, the at least one inference result.
    Type: Application
    Filed: February 20, 2024
    Publication date: September 12, 2024
    Inventors: Hamed Rezazadegan Tavakoli, Amirhossein Hassankhani, Esa Rahtu
  • Publication number: 20240303487
    Abstract: Multimodal training data comprising samples of a prediction target is received. Each sample includes at least a subset of the full set of a plurality of modalities, and the samples collectively include instances of each modality. An attention-based encoder receives sets of training vectors for the samples in fixed-dimensional input vector format, and generates a fixed-dimensional vector representation template for the prediction target. The number of dimensions in the template is constant and is independent of the number of modalities represented by the training vectors. The attention-based encoder uses the samples and the fixed-dimensional vector representation template to generate, from the training vectors for the samples, a latent distribution. The samples in fixed-dimensional input vector format and the latent distribution are used as input to a second attention-based neural network to generate an attention-based decoder that can predict from samples with missing modalities.
    Type: Application
    Filed: March 6, 2024
    Publication date: September 12, 2024
    Inventors: Zhitian ZHANG, Wenjie ZI, Yunduz RAKHMANGULOVA, Saghar IRANDOUST, Hossein HAJIMIRSADEGHI, Thibaut DURAND
  • Publication number: 20240303488
    Abstract: A system and method of designing a T-cell receptor (TCR) assay includes the use of processor-based predictive modeling of an HLA binding classifier, T-cell response, sequencing T-cells, and TCR classifier/regression. Particularly, embodiments include feeding a representation of various peptides into a trained HLA binding classifier model configured to determine average binding predictions of overlapping peptides at each position of the viral or cancer protein. Based upon the average binding predictions, one or more peptide pools can be selected and fed into the T-cell response model, along with representative blood samples associated with a patient/patient population. Further, a sequenced resultant T-cell response can be used to detect T-cell response patterns. These detected patterns can be used to train the TCR classifier/regression model to predict or estimate a patient state.
    Type: Application
    Filed: March 11, 2024
    Publication date: September 12, 2024
    Applicant: ImmunityBio, Inc.
    Inventors: Kamil Wnuk, Jeremi Sudol
  • Publication number: 20240303489
    Abstract: Systems and methods are provided for training an artificial intelligence system and generating audible content for output. The method utilizes a system including at least an application plane layer; a control plane layer including a cognitive computing unit, the cognitive computing unit using at least machine learning for training of the cognitive computing unit; a training input to the system including an input for receiving content for training during the machine learning; and a data plane layer, the data plane layer including an input interface to receive and store data input content from one or more data sources other than the control plane layer, the data input content being subject to transformation into audible content for output. Data input content information is used in synthesizing audible output content at least in part by transforming the data input content into the audible output content.
    Type: Application
    Filed: May 16, 2024
    Publication date: September 12, 2024
    Applicant: MILESTONE ENTERTAINMENT, LLC
    Inventors: Randall M. Katz, Robert Tercek
  • Publication number: 20240303490
    Abstract: Techniques and apparatuses are described for deep neural network (DNN) processing for a user equipment-coordination set (UECS). A network entity selects (910) an end-to-end (E2E) machine-learning (ML) configuration that forms an E2E DNN for processing UECS communications. The network entity directs (915) each device of multiple devices participating in an UECS to form, using at least a portion of the E2E ML configuration, a respective sub-DNN of the E2E DNN that transfers the UECS communications through the E2E communication, where the multiple devices include at least one base station, a coordinating user equipment (UE), and at least one additional UE. The network entity receives (940) feedback associated with the UECS communications and identifies (945) an adjustment to the E2E ML configuration. The network entity then directs at least some of the multiple devices participating in an UECS to update the respective sub-DNN of the E2E DNN based on the adjustment.
    Type: Application
    Filed: May 16, 2024
    Publication date: September 12, 2024
    Inventors: Jibing Wang, Erik Richard Stauffer
  • Publication number: 20240303491
    Abstract: Searching for a model is disclosed. Source nodes are configured to generate pruned candidate models starting from a distribution of models. A central node receives the pruned candidate models and their associated loss values. The central node causes the pruned candidate models to be tested in a distributed manner at generalization nodes. Loss values returned to the central node are associated with the pruned candidate models. The pruned candidate model with a lowest loss score, based on the distributed generalization testing, is selected as a winning candidate model and deployed to target nodes.
    Type: Application
    Filed: March 7, 2023
    Publication date: September 12, 2024
    Inventors: Vinicius Michel Gottin, Paulo Abelha Ferreira, Pablo Nascimento da Silva
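    The selection step the abstract describes can be illustrated with a minimal sketch (not the patented implementation): a central node scores each pruned candidate with a generalization test and picks the one with the lowest loss. The function names and toy "models" are illustrative stand-ins.

    ```python
    # Sketch: a central node collects pruned candidate models from source
    # nodes, has each tested for generalization, and selects the winner.

    def select_winning_candidate(candidates, generalization_test):
        """candidates: list of (model_id, pruned_model) pairs.
        generalization_test: callable returning a loss for a model,
        standing in for the distributed testing at generalization nodes."""
        scored = [(generalization_test(model), model_id, model)
                  for model_id, model in candidates]
        # The candidate with the lowest generalization loss wins.
        loss, model_id, model = min(scored, key=lambda t: t[0])
        return model_id, loss

    # Toy stand-ins: "models" are dicts carrying a fixed test loss.
    candidates = [("a", {"loss": 0.9}), ("b", {"loss": 0.4}), ("c", {"loss": 0.7})]
    winner_id, winner_loss = select_winning_candidate(
        candidates, generalization_test=lambda m: m["loss"])
    ```

    In the filing, scoring happens at distributed generalization nodes; here a single callable stands in for that round trip.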
  • Publication number: 20240303492
    Abstract: Computer-implemented methods for generating neural networks, and methods for classifying physiological data and patients based on the generated networks, are provided. The methods comprise determining values of a plurality of hyperparameters based on one or more properties of the received input data, which may be physiological data. A neural network comprising a plurality of layers is generated based on the hyperparameters, and is trained using the input data. If a first predetermined condition is not met, the values of one or more of the hyperparameters are updated. The steps of generating and training a neural network are repeated until the first predetermined condition is met. When the first predetermined condition is met, one of the trained neural networks is selected and is output.
    Type: Application
    Filed: March 4, 2022
    Publication date: September 12, 2024
    Inventors: Yanting SHEN, Robert CLARKE, Tingting ZHU, David CLIFTON
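    The generate-train-repeat loop above can be sketched as follows. The hyperparameter heuristic, the loss model, and the stopping condition are invented stand-ins, not details from the filing.

    ```python
    # Sketch: derive initial hyperparameters from a property of the input
    # data, then generate and train networks until a predetermined
    # condition is met, finally selecting the best trained network.

    def search_network(input_length, max_rounds=5, target_loss=0.25):
        # Hyperparameters determined from a property of the input data.
        hyperparams = {"layers": max(1, input_length // 100)}
        trained = []
        for _ in range(max_rounds):
            # "Generate" and "train" a network; here loss just shrinks
            # with depth as a toy stand-in for real training.
            loss = 1.0 / (1 + hyperparams["layers"])
            trained.append((dict(hyperparams), loss))
            if loss <= target_loss:          # first predetermined condition
                break
            hyperparams["layers"] += 1       # update hyperparameters, repeat
        # Select and output one of the trained networks.
        return min(trained, key=lambda t: t[1])

    best_hp, best_loss = search_network(input_length=200)
    ```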
  • Publication number: 20240303493
    Abstract: One embodiment includes a method for predicting the progression of a current state. The method obtains input information concerning time-series forecasts of a state of an entity. The input information includes baseline information known about the state of the entity at a start time; and context information that includes a vector of time-independent background variables related to the entity. The method determines a first forecast for the entity at a first timestep that is separated from the start time by a time gap. The first forecast is determined, by a point prediction model, based on the baseline information and the context information. The method derives, from an autoregressive function, a mean parameter for a probabilistic function. The mean parameter is derived based on: the first forecast; and a learnable function trained based on the time gap and context information. The method parameterizes the probabilistic function based on the mean parameter.
    Type: Application
    Filed: May 13, 2024
    Publication date: September 12, 2024
    Applicant: Unlearn.AI, Inc.
    Inventors: Aaron Michael Smith, Charles Kenneth Fisher
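    The mean-parameter derivation described above can be sketched under loose assumptions: a point-prediction model produces the first forecast, and a learnable function of the time gap and context contributes an additive adjustment. The names `point_prediction` and `learnable_adjustment` are illustrative, not from the filing.

    ```python
    # Sketch: derive the mean parameter for the probabilistic function
    # from the first forecast plus a learned term depending on the time
    # gap and context information.

    def forecast_mean(baseline, context, time_gap,
                      point_prediction, learnable_adjustment):
        # First forecast from the point prediction model, using baseline
        # and context information.
        first_forecast = point_prediction(baseline, context)
        # Mean parameter: first forecast adjusted by a trained function
        # of the time gap and context.
        return first_forecast + learnable_adjustment(time_gap, context)

    mu = forecast_mean(
        baseline=10.0, context=[0.5], time_gap=2.0,
        point_prediction=lambda b, c: b + sum(c),      # toy point model
        learnable_adjustment=lambda g, c: 0.1 * g)     # toy learned function
    ```

    The resulting `mu` would then parameterize the probabilistic function (for example, as the mean of a Gaussian over the next timestep).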
  • Publication number: 20240303494
    Abstract: A few-shot, unsupervised image-to-image translation (“FUNIT”) algorithm is disclosed that accepts as input images of previously-unseen target classes. These target classes are specified at inference time by only a few images, such as a single image or a pair of images, of an object of the target type. A FUNIT network can be trained using a data set containing images of many different object classes, in order to translate images from one class to another class by leveraging few input images of the target class. By learning to extract appearance patterns from the few input images for the translation task, the network learns a generalizable appearance pattern extractor that can be applied to images of unseen classes at translation time for a few-shot image-to-image translation task.
    Type: Application
    Filed: May 16, 2024
    Publication date: September 12, 2024
    Inventors: Ming-Yu LIU, Xun HUANG, Tero Tapani KARRAS, Timo AILA, Jaakko LEHTINEN
  • Publication number: 20240303495
    Abstract: System and methods for machine learning are described. A first input value is obtained. A second input value is also obtained. A decision to use for generating a cycle output is selected based on a randomness factor. The decision is at least one of a random decision or a best decision from a previous cycle. A cycle output for the first and second inputs is generated using the selected decision. The selected decision and the resulting cycle output are stored.
    Type: Application
    Filed: May 20, 2024
    Publication date: September 12, 2024
    Inventor: Rix Ryskamp
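    The decision-selection step above resembles an epsilon-greedy scheme, and can be sketched as follows. The two candidate decisions and the randomness threshold are illustrative assumptions.

    ```python
    import random

    # Sketch: with probability given by a randomness factor, pick a random
    # decision; otherwise reuse the best decision from the previous cycle.
    # The selected decision and cycle output are returned for storage.

    def run_cycle(first_input, second_input, best_previous, randomness, rng):
        if rng.random() < randomness:
            decision = rng.choice(["add", "multiply"])   # random decision
        else:
            decision = best_previous                      # best from last cycle
        output = (first_input + second_input if decision == "add"
                  else first_input * second_input)
        return decision, output   # stored with the decision for later cycles

    rng = random.Random(0)
    decision, output = run_cycle(3, 4, best_previous="add",
                                 randomness=0.0, rng=rng)
    ```

    With `randomness=0.0` the previous best decision is always reused; raising the factor trades exploitation for exploration.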
  • Publication number: 20240303496
    Abstract: A method, apparatus, non-transitory computer readable medium, and system of training a domain-specific language model are described. One or more aspects of the method, apparatus, non-transitory computer readable medium, and system include obtaining domain-specific training data including a plurality of domain-specific documents having a document structure corresponding to a domain, and obtaining domain-agnostic training data including a plurality of documents outside of the domain. The domain-specific training data and the domain-agnostic training data are used to train a language model to perform a domain-specific task based on the domain-specific training data and to perform a domain-agnostic task based on the domain-agnostic training data.
    Type: Application
    Filed: March 9, 2023
    Publication date: September 12, 2024
    Inventors: Inderjeet Jayakumar Nair, Natwar Modani
  • Publication number: 20240303497
    Abstract: A processor-implemented method for adapting an artificial neural network (ANN) at test-time includes receiving by a first ANN model and a second ANN model, a test data set. The test data set includes unlabeled data samples. The first ANN model is pretrained using a training data set and the test data set. The first ANN model generates first estimated labels for the test data set. The second ANN model generates second estimated labels for the test data set. Samples of the test data set are selected based on a confidence difference between the first estimated labels and the second estimated labels. The second ANN model is retrained based on the selected samples.
    Type: Application
    Filed: July 27, 2023
    Publication date: September 12, 2024
    Inventors: Jungsoo LEE, Debasmit DAS, Sungha CHOI
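    The sample-selection step above can be sketched as picking test samples on which the two models' label confidences differ most. The threshold rule and data layout are illustrative assumptions, not the patented criterion.

    ```python
    # Sketch: select unlabeled test samples by the confidence difference
    # between the first (pretrained) and second model's estimated labels;
    # the selected samples drive retraining of the second model.

    def select_samples(first_confidences, second_confidences, threshold):
        """Each argument maps sample id -> confidence of the estimated
        label. Samples whose confidence difference exceeds the threshold
        are chosen for retraining."""
        return [sid for sid in first_confidences
                if abs(first_confidences[sid] - second_confidences[sid]) > threshold]

    selected = select_samples(
        first_confidences={"x1": 0.95, "x2": 0.60, "x3": 0.80},
        second_confidences={"x1": 0.50, "x2": 0.58, "x3": 0.30},
        threshold=0.2)
    ```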
  • Publication number: 20240303498
    Abstract: Systems and methods are provided for training Reinforcement Learning (RL) policies for a number of agents distributed throughout a network. According to one implementation, a method associated with an individual agent includes participating in a training process involving each of the multiple agents, the training process including multiple rounds of training allowing each agent to perform a local improvement procedure using RL. During each round of training, the method also includes performing the local improvement procedure using training data associated with one or more other agents having a relatively high level of affiliation, among different levels of affiliation, with the individual agent, together with additional training data associated with the individual agent itself. According to additional embodiments, a controller may coordinate the local improvement procedures. During inference, the RL policies can be used without the help of the controller.
    Type: Application
    Filed: March 9, 2023
    Publication date: September 12, 2024
    Inventors: Emil Janulewicz, Sergio Slobodrian
  • Publication number: 20240303499
    Abstract: A method for optimizing a workflow-based neural network including an attention layer is provided. The method comprises: training the workflow-based neural network to predict a result from input elements under a prediction model with the attention layer assigning attention placements and weights, based on an original attention function, to the input elements; obtaining an original attention mask pattern and a proposed attention mask pattern; creating an attention mask updating function based on the original attention mask pattern and the proposed attention mask pattern; and combining the attention mask updating function with the original attention function to form an updated attention function.
    Type: Application
    Filed: March 7, 2023
    Publication date: September 12, 2024
    Inventors: Wai Kai Arvin TANG, Kai Kin CHAN
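    The combination of a mask updating function with the original attention function can be sketched as below. The combination rule (a position survives only if both the original and proposed masks keep it) is an assumption for illustration; the filing does not specify it here.

    ```python
    import math

    # Sketch: form an attention-mask updating function from an original
    # and a proposed mask pattern, then combine it with the original
    # attention function (softmax over scores) to get updated weights.

    def softmax(xs):
        m = max(xs)
        exps = [math.exp(x - m) for x in xs]
        s = sum(exps)
        return [e / s for e in exps]

    def updated_attention(scores, original_mask, proposed_mask):
        # Mask updating function: keep a position only if both the
        # original and proposed patterns keep it.
        mask = [o and p for o, p in zip(original_mask, proposed_mask)]
        masked = [s if keep else float("-inf") for s, keep in zip(scores, mask)]
        return softmax(masked)   # updated attention function

    weights = updated_attention(
        scores=[1.0, 2.0, 3.0],
        original_mask=[True, True, True],
        proposed_mask=[True, False, True])
    ```

    Masked-out positions receive zero weight, while the remaining scores are renormalized by the softmax.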
  • Publication number: 20240303500
    Abstract: There is provided mechanisms for configuring agent entities with a reporting schedule for reporting computational results during an iterative learning process. A method is performed by a server entity. The method comprises configuring the agent entities with a computational task and a reporting schedule. The reporting schedule defines an order according to which the agent entities are to report computational results of the computational task. The agent entities are configured to, per each iteration of the learning process, base their computation of the computational task on any computational result of the computational task received from any other of the agent entities prior to when the agent entities themselves are scheduled to report their own computational results for that iteration. The method comprises performing the iterative learning process with the agent entities according to the reporting schedule and until a termination criterion is met.
    Type: Application
    Filed: July 6, 2021
    Publication date: September 12, 2024
    Inventors: Erik G. Larsson, Reza Moosavi, Henrik Rydén
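    One iteration of the scheduled reporting can be sketched as follows: agents report in the configured order, and each agent's computation folds in the results already reported earlier in the same iteration. The "computation" here is a toy running average standing in for the real computational task.

    ```python
    # Sketch: agents report in a fixed schedule; each bases its
    # computation on results from agents scheduled before it, plus its
    # own local data.

    def run_iteration(schedule, local_values):
        reported = []   # results reported so far this iteration, in order
        results = {}
        for agent in schedule:
            # Base the computation on earlier agents' reported results
            # and the agent's own value.
            inputs = reported + [local_values[agent]]
            results[agent] = sum(inputs) / len(inputs)
            reported.append(results[agent])
        return results

    results = run_iteration(schedule=["a", "b", "c"],
                            local_values={"a": 2.0, "b": 4.0, "c": 9.0})
    ```

    A server entity would repeat such iterations until a termination criterion is met.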
  • Publication number: 20240303501
    Abstract: Imitation and reinforcement learning for multi-agent simulation includes performing operations. The operations include obtaining a first real-world scenario of agents moving according to first trajectories and simulating the first real-world scenario in a virtual world to generate first simulated states. The simulating includes processing, by an agent model, the first simulated states for the agents to obtain second trajectories. For each of at least a subset of the agents, a difference between a first corresponding trajectory of the agent and a second corresponding trajectory of the agent is calculated, and an imitation loss is determined based on the difference. The operations further include evaluating the second trajectories according to a reward function to generate a reinforcement learning loss, calculating a total loss as a combination of the imitation loss and the reinforcement learning loss, and updating the agent model using the total loss.
    Type: Application
    Filed: March 7, 2024
    Publication date: September 12, 2024
    Applicant: Waabi Innovation Inc.
    Inventors: Chris ZHANG, James TU, Lunjun ZHANG, Kelvin WONG, Simon SUO, Raquel URTASUN
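    The combined loss described above can be sketched as an imitation term from trajectory differences plus a reinforcement-learning term from a reward function. The distance metric, the negative-mean-reward RL loss, and the weighting are illustrative assumptions.

    ```python
    # Sketch: total loss = imitation loss (trajectory differences) +
    # weighted RL loss (negative mean reward of simulated trajectories).

    def total_loss(real_trajs, simulated_trajs, reward_fn, rl_weight=1.0):
        # Imitation loss: mean distance between corresponding real and
        # simulated trajectories (scalars here, as a toy stand-in).
        diffs = [abs(r - s) for r, s in zip(real_trajs, simulated_trajs)]
        imitation = sum(diffs) / len(diffs)
        # RL loss: negative mean reward of the simulated trajectories.
        rl = -sum(reward_fn(s) for s in simulated_trajs) / len(simulated_trajs)
        return imitation + rl_weight * rl

    loss = total_loss(real_trajs=[1.0, 2.0, 3.0],
                      simulated_trajs=[1.5, 2.0, 2.5],
                      reward_fn=lambda s: 0.1 * s)
    ```

    The agent model would then be updated by gradient descent on this total loss.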
  • Publication number: 20240303502
    Abstract: Methods and apparatus for learning a target quantum state. In one aspect, a method for training a quantum generative adversarial network (QGAN) to learn a target quantum state includes iteratively adjusting parameters of the QGAN until a value of a QGAN loss function converges, wherein each iteration comprises: performing an entangling operation on a discriminator network input of a discriminator network in the QGAN to measure a fidelity of the discriminator network input, wherein the discriminator network input comprises the target quantum state and a first quantum state output from a generator network in the QGAN, wherein the first quantum state approximates the target quantum state; and performing a minimax optimization of the QGAN loss function to update the QGAN parameters, wherein the QGAN loss function is dependent on the measured fidelity of the discriminator network input.
    Type: Application
    Filed: March 10, 2022
    Publication date: September 12, 2024
    Inventors: Yuezhen Niu, Hartmut Neven, Vadim Smelyanskiy, Sergio Boixo Castrillo
  • Publication number: 20240303503
    Abstract: Provided are systems, methods, and computer program products including at least one processor programmed or configured to perturb at least one training dataset based on mutual information extracted from an ensemble machine learning model to provide at least one adversarial training dataset, execute at least two machine learning models of an ensemble machine learning model, train at least two machine learning models with the at least one training dataset by feeding an input or output of one of the at least two machine learning models to the other of the at least two machine learning models, train the ensemble machine learning model with the at least one adversarial training dataset, receive a runtime input from a client device, and provide the runtime input to the trained ensemble machine learning model to generate a signal output indicating that the runtime input includes an out-of-distribution sample.
    Type: Application
    Filed: October 12, 2023
    Publication date: September 12, 2024
    Applicant: Booz Allen Hamilton Inc.
    Inventors: Derek Scott Everett, Andre Tai Nguyen, Edward Simon Pastor Raff
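    The out-of-distribution signal from the trained ensemble can be sketched with a simple disagreement test: the runtime input is flagged when ensemble members' confidences spread beyond a threshold. This disagreement measure is a stand-in for the mutual-information machinery in the abstract.

    ```python
    # Sketch: flag a runtime input as out-of-distribution when the
    # ensemble members disagree beyond a threshold.

    def ood_signal(member_scores, threshold):
        """member_scores: each ensemble member's confidence for the
        runtime input. Returns True when the input looks like an
        out-of-distribution sample."""
        spread = max(member_scores) - min(member_scores)
        return spread > threshold

    flag = ood_signal(member_scores=[0.9, 0.2, 0.7], threshold=0.5)
    ```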
  • Publication number: 20240303504
    Abstract: Apparatuses, systems, and techniques to train/use one or more neural networks. In at least one embodiment, a processor comprises one or more circuits to cause neural network training information to be aggregated based, at least in part, on contribution of the neural network training data and one or more performance metrics of the neural network.
    Type: Application
    Filed: March 22, 2023
    Publication date: September 12, 2024
    Inventors: Ziyue Xu, Holger Reinhard Roth, Meirui Jiang, Wenqi Li, Dong Yang, Can Zhao, Vishwesh Nath, Daguang Xu
  • Publication number: 20240303505
    Abstract: Disclosed are a federated learning method and device using device clustering. The federated learning method includes obtaining an arbitrary client group including some clients as a result of performing clustering on a plurality of clients; determining one of the some clients as a leader client based on a centroid associated with the clustering, wherein the leader client receives data associated with at least one parameter of a pre-trained model from each of the some clients; determining at least one client among the some clients as a target client based on an amount of computing resources of the pre-trained model and a training loss of the pre-trained model; and receiving some data associated with at least one parameter of the model of the target client from the leader client, wherein the some data is included in the data.
    Type: Application
    Filed: March 4, 2024
    Publication date: September 12, 2024
    Applicant: AJOU UNIVERSITY INDUSTRY-ACADEMIC COOPERATION FOUNDATION
    Inventors: Young-Bae Ko, June-Pyo Jung
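    The leader-selection step above can be sketched as picking, within a cluster, the client closest to the cluster centroid. The Euclidean distance and 2-D feature layout are illustrative assumptions.

    ```python
    # Sketch: determine the leader client as the one nearest the
    # centroid produced by clustering the clients.

    def pick_leader(client_positions, centroid):
        """client_positions: dict of client id -> (x, y) feature point."""
        def dist2(p):
            # Squared Euclidean distance to the cluster centroid.
            return (p[0] - centroid[0]) ** 2 + (p[1] - centroid[1]) ** 2
        return min(client_positions, key=lambda c: dist2(client_positions[c]))

    leader = pick_leader(
        client_positions={"c1": (0.0, 0.0), "c2": (1.0, 1.0), "c3": (0.4, 0.6)},
        centroid=(0.5, 0.5))
    ```

    In the filing, the leader then gathers parameter data from the other clients in its group.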
  • Publication number: 20240303506
    Abstract: One embodiment of the present invention sets forth a technique for training a machine learning model to perform feature extraction. The technique includes executing a student version of the machine learning model to generate a first set of features from a first set of image crops and executing a teacher version of the machine learning model to generate a second set of features from a second set of image crops. The technique also includes training the student version of the machine learning model based on one or more losses computed between the first and second sets of features. The technique further includes transmitting the trained student version of the machine learning model to a server, wherein the trained student version can be aggregated by the server with additional trained student versions of the machine learning model to generate a global version of the machine learning model.
    Type: Application
    Filed: March 7, 2024
    Publication date: September 12, 2024
    Inventors: Ziheng WANG, Conor PERREAULT, Xi LIU, Anthony M. JARC
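    The student-teacher training signal and the server-side aggregation above can be sketched as follows. The squared-error feature loss and plain weight averaging are illustrative stand-ins for the losses and aggregation scheme in the filing.

    ```python
    # Sketch: loss between student features (from one set of crops) and
    # teacher features (from another), plus server-side averaging of
    # trained student weights into a global model.

    def feature_loss(student_features, teacher_features):
        pairs = zip(student_features, teacher_features)
        return sum((s - t) ** 2 for s, t in pairs) / len(student_features)

    def aggregate_students(student_weight_lists):
        """Server side: average corresponding weights of trained student
        versions into a global version of the model."""
        n = len(student_weight_lists)
        return [sum(ws) / n for ws in zip(*student_weight_lists)]

    loss = feature_loss([1.0, 2.0], [1.0, 3.0])
    global_weights = aggregate_students([[0.0, 2.0], [2.0, 4.0]])
    ```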
  • Publication number: 20240303507
    Abstract: Provided are a method and device for recommending goods, a method and device for training a goods knowledge graph, and a method and device for training a model. The method for training a goods knowledge graph includes: constructing an initial goods knowledge graph based on a first type of triples and a second type of triples, where a format of the first type of triples is head entity-relation-tail entity, and a format of the second type of triples is entity-attribute-attribute value (S101); and training the initial goods knowledge graph based on a graph embedding model to obtain embedding vectors of entities in the trained goods knowledge graph (S102).
    Type: Application
    Filed: March 30, 2022
    Publication date: September 12, 2024
    Inventors: Boran JIANG, Ge OU, Chao JI, Chuqian ZHONG, Shuqi WEI, Pengfei ZHANG