Patent Applications Published on December 26, 2024
-
Publication number: 20240428056
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing tasks. One of the methods includes obtaining a sequence of input tokens, where each token is selected from a vocabulary of tokens that includes text tokens and audio tokens, and wherein the sequence of input tokens includes tokens that describe a task to be performed and data for performing the task; generating a sequence of embeddings by embedding each token in the sequence of input tokens in an embedding space; and processing the sequence of embeddings using a language model neural network to generate a sequence of output tokens for the task, where each token is selected from the vocabulary.
Type: Application
Filed: June 21, 2024
Publication date: December 26, 2024
Inventors: Paul Kishan Rubenstein, Matthew Sharifi, Alexandru Tudor, Chulayuth Asawaroengchai, Duc Dung Nguyen, Marco Tagliasacchi, Neil Zeghidour, Zalán Borsos, Christian Frank, Dalia Salem Hassan Fahmy Elbadawy, Hannah Raphaelle Muckenhirn, Dirk Ryan Padfield, Damien Vincent, Evgeny Kharitonov, Michelle Dana Tadmor, Mihajlo Velimirovic, Feifan Chen, Victoria Zayats
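As a loose illustration of the shared text-and-audio vocabulary described above (token names, vocabulary contents, and the embedding dimension here are invented, not taken from the application), every token, whether text or audio, maps to one row of a single embedding table:

```python
import random

# Hypothetical joint vocabulary: text tokens and discrete audio tokens side by side.
VOCAB = ["<task:asr>", "hello", "world", "<audio:17>", "<audio:42>"]
TOKEN_TO_ID = {tok: i for i, tok in enumerate(VOCAB)}

random.seed(0)
EMBED_DIM = 4
# One embedding vector per vocabulary entry, text and audio tokens alike.
EMBEDDINGS = [[random.gauss(0, 1) for _ in range(EMBED_DIM)] for _ in VOCAB]

def embed_sequence(tokens):
    """Map each token (text or audio) to its vector in the shared embedding space."""
    return [EMBEDDINGS[TOKEN_TO_ID[t]] for t in tokens]

# A task-description token followed by audio data tokens, as in the abstract.
seq = ["<task:asr>", "<audio:17>", "<audio:42>"]
vectors = embed_sequence(seq)
```

The resulting list of vectors is what a language model neural network would then process to emit output tokens drawn from the same vocabulary.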
-
Publication number: 20240428057
Abstract: An information processing apparatus includes: an initialization unit configured to initialize a plurality of weights based on a distribution with a positive average; a monotonic neural network to which the plurality of weights is applied; and a first calculation unit configured to calculate a cumulative intensity function based on an output from the monotonic neural network.
Type: Application
Filed: November 26, 2021
Publication date: December 26, 2024
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Yoshiaki TAKIMOTO, Maya OKAWA, Tomoharu IWATA, Yusuke TANAKA
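A minimal sketch of why a positive-average weight initialization suits a monotonic network (the softplus activation and the exponential initializer are assumptions for illustration, not details from the application): with all weights positive and a non-decreasing activation, the network output is monotone in its input, as a cumulative intensity function must be.

```python
import math
import random

def init_positive(n, mean=0.5):
    """Draw n weights from a distribution with a positive average
    (exponential here, as a stand-in for the abstract's initializer)."""
    return [random.expovariate(1.0 / mean) for _ in range(n)]

def monotonic_mlp(x, weights1, bias1, weights2, bias2):
    """One-hidden-layer network: positive weights + softplus (non-decreasing)
    guarantee the output is monotone non-decreasing in x."""
    hidden = [math.log1p(math.exp(w * x + b)) for w, b in zip(weights1, bias1)]
    return sum(w * h for w, h in zip(weights2, hidden)) + bias2

random.seed(0)
w1, w2 = init_positive(4), init_positive(4)
b1 = [0.0] * 4
# Evaluating at increasing inputs yields non-decreasing outputs.
ys = [monotonic_mlp(x, w1, b1, w2, 0.0) for x in (-2.0, 0.0, 2.0)]
```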
-
Publication number: 20240428058
Abstract: A hardware based neural network may include a plurality of layers of artificial neurons with electronically adjusted activation function thresholds and a plurality of memristors providing weighted connections between the plurality of layers. The activation function thresholds and the weighted connections may be adjusted during a training of the hardware based neural network.
Type: Application
Filed: June 23, 2023
Publication date: December 26, 2024
Applicant: Cyberswarm, Inc.
Inventors: Andrei ILIESCU, Elena-Adelina DUCA, Viorel-Georgel DUMITRU
-
Publication number: 20240428059
Abstract: An electronic device includes: a neural processing unit (NPU) configured to process an activation function; and an accelerator in the NPU, wherein the accelerator includes: a function processing block including at least one sub-operation block, and a final output block connected to the function processing block, wherein the at least one sub-operation block includes: a first sub-operation block configured to calculate an approximation output value for the activation function by processing the activation function based on a first point number and a first bit resolution, and a second sub-operation block configured to calculate a detailed output value for the activation function by processing the activation function based on a second point number and a second bit resolution, and wherein the final output block is configured to calculate a final output value corresponding to the activation function based on the approximation output value and the detailed output value.
Type: Application
Filed: April 26, 2024
Publication date: December 26, 2024
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Jonghun Lee, Chulsoo Park, Cheolgyu Jin
-
Publication number: 20240428060
Abstract: A neural network computation circuit holds a plurality of connection weight coefficients in one-to-one correspondence with a plurality of input data items, and outputs output data according to a result of a multiply-accumulate operation on the plurality of input data items and the plurality of connection weight coefficients in one-to-one correspondence, and includes at least two bits of semiconductor storage elements provided for each of the plurality of connection weight coefficients, the at least two bits of semiconductor storage elements including a first semiconductor storage element and a second semiconductor storage element that are provided for storing the connection weight coefficient. Each of the plurality of connection weight coefficients corresponds to a total current value that is a sum of a current value of current flowing through the first semiconductor storage element and a current value of current flowing through the second semiconductor storage element.
Type: Application
Filed: September 4, 2024
Publication date: December 26, 2024
Inventors: Reiji MOCHIDA, Takashi ONO, Kazuyuki KOUNO, Masayoshi NAKAYAMA, Hitoshi SUWA, Junichi KATO
-
Publication number: 20240428061
Abstract: In a neural network computation circuit that outputs output data according to a result of a multiply-accumulate operation on input data and connection weight coefficients, a computation circuit unit that expresses one connection weight coefficient includes a plurality of selection transistors and a plurality of nonvolatile variable resistance elements. The nonvolatile variable resistance elements each express a weight coefficient with a different weight. Each of the nonvolatile variable resistance elements holds information of an upper digit of an absolute value of a positive weight coefficient, information of a lower digit of the absolute value of the positive weight coefficient, information of an upper digit of an absolute value of a negative weight coefficient, or information of a lower digit of the absolute value of the negative weight coefficient.
Type: Application
Filed: September 4, 2024
Publication date: December 26, 2024
Inventors: Satoshi AWAMURA, Masayoshi NAKAYAMA
-
Publication number: 20240428062
Abstract: The neuron Logic Gate Metal-Oxide-Semiconductor (νLGMOS) circuits, which mimic neurons' "integrate-and-fire" behaviors in biological neural network systems, can be fabricated with industry Complementary Metal-Oxide Semiconductor (CMOS) logic process technology, with which digital computational circuits are fabricated. A processing system having analog νLGMOS circuits, conversion circuitry, and digital circuits optimized for power and cost for a variety of applications can then be fabricated with the same CMOS logic process technology for IC chips. Meanwhile, analog νLGMOS circuits inspired by biological neural network systems can be simulated, designed, and fabricated for IC chips for applications in biomedical fields.
Type: Application
Filed: June 26, 2023
Publication date: December 26, 2024
Inventors: Lee WANG, Jeffrey WANG
-
Publication number: 20240428063
Abstract: A neuromorphic optical computing architecture system includes: a multi-channel representation module, configured to encode, via a multi-spectral laser, an originally inputted target light field signal into coherent light having different wavelengths; an attention-aware optical neural network module including a bottom-up (BU) optical attention module and a top-down (TD) optical attention module, in which the coherent light having different wavelengths is input to the BU optical attention module and network training is performed on an attention-aware optical neural network, and the TD optical attention module performs, based on the trained attention-aware optical neural network, spectral and spatial transmittance modulation of multi-dimensional sparse features extracted by the BU optical attention module to obtain a final spatial light output; and an output module configured to detect and identify the final spatial light output on an output plane to obtain a location of an object in a light field and an identif
Type: Application
Filed: June 20, 2024
Publication date: December 26, 2024
Inventors: Lu FANG, Yuan CHENG
-
Publication number: 20240428064
Abstract: The present disclosure relates to a system and an apparatus for an intelligent photonic computing lifelong learning architecture. The system includes: a multi-spectrum representation layer configured to transfer originally input electronic signals including multiple tasks into coherent light with different wavelengths by multi-spectrum representations; a lifelong learning optical neural network layer including cascaded sparse optical convolutional layers in a Fourier plane of an optical system, in which final spatial optical signals are output through the lifelong learning optical neural network layer by performing multi-task step-by-step training of the lifelong learning optical neural network layer on the coherent light with different wavelengths input into the cascaded sparse optical convolutional layers; and an electronic network read-out layer configured to recognize final optical output data obtained by detecting the final spatial optical signals, to obtain multi-task recognition results.
Type: Application
Filed: June 20, 2024
Publication date: December 26, 2024
Inventors: Lu FANG, Yuan CHENG
-
Publication number: 20240428065
Abstract: One embodiment of the invention provides a computer-implemented method for training an autoencoder to learn one or more chemical properties. The method comprises providing, as input, to an encoder of the autoencoder, a molecular graph representing a molecular structure. The method further comprises receiving, as output, from a decoder of the autoencoder, a production rule sequence for producing a molecule description of the molecular structure. The method further comprises optimizing the autoencoder using a loss function and the production rule sequence.
Type: Application
Filed: June 22, 2023
Publication date: December 26, 2024
Inventors: AKIHIRO KISHIMOTO, Hiroshi Kajino
-
Publication number: 20240428066
Abstract: A data set is received for training a machine learning model to perform a recognition task. Optimization is performed during training of the machine learning model. The optimization includes at least searching for a minimum value of a loss function, responsive to finding a local minimum, adding an additional term to the loss function, continuing to find another local minimum until a criterion is met, and identifying a global minimum having the lowest minimum value among the found local minima. The machine learning model can be updated with parameters identified at the global minimum.
Type: Application
Filed: June 26, 2023
Publication date: December 26, 2024
Inventors: MALGORZATA JADWIGA ZIMON, Fausto Martelli
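The search loop described above can be sketched as follows (the Gaussian "bump" penalty and the fixed restart schedule are illustrative choices, not the patent's actual terms): each local minimum found has a penalty term added on top of it, steering later descents toward other basins, and the minimum with the lowest base-loss value is kept as the global minimum.

```python
import math

def gradient_descent(f, x0, lr=0.01, steps=2000):
    """Plain gradient descent with a central-difference numerical derivative."""
    x, h = x0, 1e-5
    for _ in range(steps):
        grad = (f(x + h) - f(x - h)) / (2 * h)
        x -= lr * grad
    return x

def search_global_minimum(base_loss, starts, bump_height=2.0, bump_width=0.5):
    """Each time a local minimum is found, add a Gaussian penalty 'bump' there
    so later searches are pushed away from it; keep the lowest minimum seen."""
    bumps = []  # centers of penalty terms added so far

    def loss(x):
        penalty = sum(bump_height * math.exp(-((x - c) / bump_width) ** 2)
                      for c in bumps)
        return base_loss(x) + penalty

    found = []
    for x0 in starts:
        x = gradient_descent(loss, x0)
        found.append((base_loss(x), x))  # record (value, location)
        bumps.append(x)                  # deter revisiting this basin
    return min(found)  # lowest base-loss value among found local minima

# Double-well example: local minimum near x ≈ 1.13, global minimum near x ≈ -1.30.
best_val, best_x = search_global_minimum(lambda x: x**4 - 3*x**2 + x, [1.0, -1.0])
```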
-
Publication number: 20240428067
Abstract: A method for solving a stochastic differential equation includes receiving by a classical computer a partial differential equation describing dynamics of a quantile function QF associated with a stochastic differential equation defining a stochastic process as a function of time and variable(s) and the QF defining a modelled distribution of the stochastic process; executing by the classical computer a first training process for training neural network(s) to model an initial quantile function, the neural network(s) being trained by a special purpose processor based on measurements of the stochastic process; executing by the classical computer a second training process wherein the neural network(s) are further trained based on the QFP equation for time interval(s) to model the time evolution of the initial quantile function; and, executing by the classical computer a sampling process including generating samples of the stochastic process using the quantile function, the generated samples representing solutions of th
Type: Application
Filed: August 8, 2022
Publication date: December 26, 2024
Inventors: Vincent Emanuel Elfving, Annie Emma Paine, Oleksandr Kyriienko
-
Publication number: 20240428068
Abstract: In view of the need for a conversational recommender system (CRS) in guiding purchasing processes of complex items, embodiments described herein provide a CRS system that creates a realistic purchase scenario and agent evaluation for fulfilling the recommendation objective. Specifically, the CRS system utilizes existing buying guides as a knowledge source for the recommendation model.
Type: Application
Filed: December 21, 2023
Publication date: December 26, 2024
Inventors: Lidiya Murakhovs'ka, Philippe Laban, Tian Xie, Chien-Sheng (Jason) Wu
-
Publication number: 20240428069
Abstract: Disclosed herein are techniques for training code language models. Techniques include making a plurality of programming code segments available to a code language processing model; providing an output of the code language processing model to one or more regression layers; determining, based on the one or more regression layers, a degree of functional similarity between two portions of the output; providing the degree of functional similarity to the code language processing model; and updating, based on the degree of functional similarity, the code language processing model.
Type: Application
Filed: June 20, 2024
Publication date: December 26, 2024
Applicant: Aurora Labs Ltd.
Inventors: Carmit Sahar, Daniel Yellin, Stojancho Ganchev, Zohar Fox
-
Publication number: 20240428070
Abstract: A method of model training is disclosed. The method includes: obtaining a second embedding vector input to a decoder in a pre-trained language model, where the second embedding vector corresponds to a second data sequence. The second data sequence includes first sub-data, a masked to-be-predicted data unit, and second sub-data. The first sub-data is located before the masked to-be-predicted data unit in the second data sequence, and the second sub-data is located after the masked to-be-predicted data unit in the second data sequence. The method further includes: obtaining a hidden state based on a first embedding vector by using an encoder in the pre-trained language model (PLM); and predicting the masked to-be-predicted data unit based on the first sub-data, the second sub-data, and the hidden state by using the decoder in the PLM and an output layer of the decoder.
Type: Application
Filed: August 20, 2024
Publication date: December 26, 2024
Inventors: Pengfei LI, Liangyou LI, Meng ZHANG
-
Publication number: 20240428071
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing a machine learning task on a network input to generate a network output. One of the systems includes an attention neural network configured to perform the machine learning task. The attention neural network includes one or more attention layers that each include a squared ReLU activation layer, a depth-wise convolution layer, or both.
Type: Application
Filed: September 3, 2024
Publication date: December 26, 2024
Inventors: David Richard So, Quoc V. Le, Hanxiao Liu, Wojciech Andrzej Manke, Zihang Dai, Noam M. Shazeer
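For reference, a squared-ReLU activation is simply the ReLU output squared; a tiny elementwise version (any resemblance to the patented layer's exact implementation is coincidental):

```python
def squared_relu(x):
    """Squared ReLU: max(v, 0) ** 2 applied elementwise to a list of floats."""
    return [max(v, 0.0) ** 2 for v in x]

# Negative inputs are zeroed; positive inputs are squared.
out = squared_relu([-1.0, 0.5, 2.0])  # [0.0, 0.25, 4.0]
```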
-
Publication number: 20240428072
Abstract: Described are a system, method, and computer program product for multivariate event prediction using multi-stream recurrent neural networks. The method includes receiving event data from a sample time period and generating feature vectors for each subperiod of each day. The method also includes providing the feature vectors as inputs to a set of first recurrent neural network (RNN) models and generating first outputs for each RNN node. The method further includes merging the first outputs for each same subperiod to form aggregated time-series layers. The method further includes providing the aggregated time-series layers as an input to a second RNN model and generating final outputs for each RNN node of the second RNN model.
Type: Application
Filed: September 4, 2024
Publication date: December 26, 2024
Inventors: Zhongfang Zhuang, Michael Yeh, Liang Wang, Wei Zhang, Junpeng Wang
-
Publication number: 20240428073
Abstract: A method and system for compressing a neural network model (NNM) are disclosed. The method includes determining filter contribution information and position wise contribution information of each of the plurality of layers based on a total number of the plurality of layers in the NNM, a total number of the plurality of filters in the NNM, and a number of filters in each of the plurality of layers. A layer score is determined based on a type of layer for each of the plurality of layers and a predefined scoring criteria. A pruning control parameter is determined for each of the plurality of layers based on the layer score, the filter contribution information and the position wise contribution information of the corresponding layers. A layer-wise pruning rate is determined for each of the plurality of layers based on the pruning control parameter and the pre-defined pruning ratio.
Type: Application
Filed: September 1, 2023
Publication date: December 26, 2024
Inventors: SURESH GUNASEKARAN, VIKRAM SUBRAMANI, SUDHIR BHADAURIA, SANTHIYA RAJAN
-
Publication number: 20240428074
Abstract: An optimizing method of semi-supervised learning and a computing apparatus are provided. In the method, a first predicted result of a labeled data set and a second predicted result of an unlabeled data set are respectively determined through a machine learning model. A pseudo-label threshold is determined according to a first confidence score of the first predicted result of a first sample of the labeled data set. The machine learning model is updated according to a compared result of the second predicted result of a second sample of the unlabeled data set and the pseudo-label threshold.
Type: Application
Filed: August 9, 2023
Publication date: December 26, 2024
Applicant: Wistron Corporation
Inventors: Jiun-In Guo, Wei-Ting Hung
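A toy sketch of the idea (the percentile rule below is a made-up stand-in for the patent's actual thresholding): derive the threshold from the model's confidence on labeled data, then keep only those unlabeled predictions that clear it as pseudo-labels.

```python
def pseudo_label_threshold(labeled_confidences, percentile=0.5):
    """Pick a threshold from confidence scores the model assigns to *labeled*
    samples (hypothetical rule: the given percentile of the sorted scores)."""
    scores = sorted(labeled_confidences)
    return scores[int(percentile * (len(scores) - 1))]

def select_pseudo_labels(unlabeled_predictions, threshold):
    """Keep (sample, predicted_class) pairs whose confidence clears the
    threshold; only these are used to update the model."""
    return [(sample, cls) for sample, cls, conf in unlabeled_predictions
            if conf >= threshold]

thr = pseudo_label_threshold([0.95, 0.80, 0.99, 0.70, 0.90])  # median -> 0.90
kept = select_pseudo_labels([("a", "cat", 0.95), ("b", "dog", 0.85)], thr)
```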
-
Publication number: 20240428075
Abstract: A computer-implemented method includes receiving training data that includes groups of items and a respective user associated with each group, where each group includes a first item selected by the associated user and one or more second items rejected by the associated user from a user interface in which the first item and the one or more second items are presented together in ranked order. The method includes, for each group in the group of items: generating feature embeddings, calculating a pointwise loss for each item in the group based on the feature embeddings, calculating a comparator loss for a set that includes the first item and at least one of the one or more second items, and adjusting one or more parameters of the machine learning model based on the pointwise loss and the comparator loss. The method further includes obtaining a trained machine learning model.
Type: Application
Filed: June 23, 2023
Publication date: December 26, 2024
Applicant: Roblox Corporation
Inventors: Xiaohong GONG, Frank ONG, Zhen ZHANG
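One plausible reading of the two losses (the abstract gives no formulas, so these are standard stand-ins, not the patented definitions): a per-item binary cross-entropy as the pointwise loss, plus a pairwise logistic term that pushes the selected item's score above each rejected item's score.

```python
import math

def pointwise_loss(score, label):
    """Binary cross-entropy on a sigmoid of the item's score (label 1 = selected)."""
    p = 1.0 / (1.0 + math.exp(-score))
    return -(label * math.log(p) + (1 - label) * math.log(1.0 - p))

def comparator_loss(selected_score, rejected_scores):
    """Pairwise logistic loss: the selected item should outscore each
    rejected item shown alongside it."""
    return sum(math.log(1.0 + math.exp(r - selected_score))
               for r in rejected_scores)

def group_loss(scores, selected_index, alpha=1.0):
    """Hypothetical combination: pointwise terms plus a weighted comparator term."""
    pw = sum(pointwise_loss(s, 1 if i == selected_index else 0)
             for i, s in enumerate(scores))
    rejected = [s for i, s in enumerate(scores) if i != selected_index]
    return pw + alpha * comparator_loss(scores[selected_index], rejected)
```

Scoring the selected item above the rejected ones yields a smaller total loss than the reverse ordering, which is the gradient signal the model trains on.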
-
Publication number: 20240428076
Abstract: Methods and systems are disclosed that allow users to define, train, and deploy deep equilibrium models. Decoupled and structured interfaces allow users to easily customize deep equilibrium models. Disclosed systems support a number of different forward and backward solvers, normalization, and regularization approaches.
Type: Application
Filed: June 23, 2023
Publication date: December 26, 2024
Inventors: Zhengyang Geng, Jeremy Kolter, Ivan Batalov, Joao Semedo
-
Publication number: 20240428077
Abstract: A method, computer system, and a computer program product for digital twin usage are provided. A first digital twin and performance data of the first digital twin are input into a first machine learning model to produce a second digital twin. The first machine learning model performs neural network-based data clustering. The first and second digital twins digitally represent a first physical entity. The second digital twin includes one or more changes from the first digital twin. Performance data of the second digital twin is analyzed.
Type: Application
Filed: June 20, 2023
Publication date: December 26, 2024
Inventors: Peng Hui Jiang, Jun Su, Dong Hui Liu, Jia Yu, Hua Wang, QING XIE
-
Publication number: 20240428078
Abstract: A computing platform may train, using unsupervised learning techniques, a synthetic identity detection model to detect attempts to generate synthetic identities. The computing platform may receive identity information corresponding to an identity generation request. The computing platform may use the synthetic identity detection model to: 1) generate information clusters corresponding to the identity information, 2) compare a difference between actual and expected information clusters to an anomaly detection threshold, 3) based on identifying that the number of information clusters meets or exceeds the anomaly detection threshold, generate a threat score corresponding to the identity information, 4) compare the threat score to a synthetic identity detection threshold, and 5) based on identifying that the threat score meets or exceeds the synthetic identity detection threshold, identify a synthetic identity generation attempt.
Type: Application
Filed: June 20, 2023
Publication date: December 26, 2024
Applicant: Bank of America Corporation
Inventors: Vijaya L. Vemireddy, Marcus Matos, Daniel Joseph Serna, Kevin Delson
-
Publication number: 20240428079
Abstract: Embodiments described herein provide a system for training a neural network model using a teacher-student framework. The system includes a communication interface configured to communicate with a teacher model; a memory storing a student model and a plurality of processor-executable instructions; and a processor executing the processor-executable instructions to perform operations. The operations include: generating, by the student model, a first task output in response to a task input; obtaining, from an evaluation environment, a feedback relating to an accuracy of the first task output; obtaining a refinement output generated by the teacher model based on an input of the first task output and the feedback; and training the student model based on a training input of the first task output and the feedback and a training label of the refinement output.
Type: Application
Filed: October 31, 2023
Publication date: December 26, 2024
Inventors: Hailin Chen, Amrita Saha, Chu Hong (Steven) Hoi, Shafiq Rayhan Joty
-
Publication number: 20240428080
Abstract: According to one embodiment, an information processing device includes a target model learning unit, a change unit, a selection unit, and a student model learning unit. The target model learning unit learns a target model to be subjected to size reduction. The change unit changes the target model into a student model with a size smaller than a size of the target model. The selection unit selects, as a teacher model, one of a plurality of models including the target model and one or more intermediate models with a size smaller than the size of the target model in accordance with a comparison result between the size of the target model and the size of the student model. The student model learning unit learns the student model by distillation using the selected teacher model.
Type: Application
Filed: February 27, 2024
Publication date: December 26, 2024
Applicant: KABUSHIKI KAISHA TOSHIBA
Inventor: Yusuke NATSUI
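A sketch of the size-based teacher selection (the size-ratio cutoff of 4 is an invented heuristic, not the patent's rule): when the student is much smaller than the target model, an intermediate model bridges the gap; otherwise the target model itself acts as teacher.

```python
def select_teacher(target_size, student_size, intermediate_sizes, gap=4.0):
    """Compare target and student sizes (e.g. parameter counts). If the ratio
    exceeds `gap`, pick the smallest intermediate model still larger than the
    student; otherwise distill directly from the target model."""
    if target_size / student_size <= gap:
        return target_size  # close enough: the target model is the teacher
    candidates = [s for s in intermediate_sizes if student_size < s < target_size]
    return min(candidates) if candidates else target_size

# A 1000-parameter target with a 100-parameter student: ratio 10 > 4,
# so the 400-parameter intermediate model is chosen as teacher.
teacher = select_teacher(1000, 100, [400, 800])
```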
-
Publication number: 20240428081
Abstract: According to an aspect of an embodiment, a method for performing a generative artificial intelligence model analytics operation may include obtaining an artificial intelligence (AI) model. The method may further include performing analysis of the AI model using one or more scoring agents. The method may further include generating a report including results of the analysis and providing the report on a user interface.
Type: Application
Filed: June 24, 2024
Publication date: December 26, 2024
Inventors: Richard Knuszka, Seth Dobrin, Matthew Barker, Avinash Saxena
-
Publication number: 20240428082
Abstract: A placement plan for training state checkpoints of a machine learning model is generated based at least in part on a number of training servers of a distributed training environment. The plan indicates, with respect to an individual server, one or more other servers at which replicas of training state checkpoints of the individual server are to be stored. During selected periods of one or more training iterations of the model, respective portions of a replica of a training state checkpoint of a first server are transmitted to a second server selected based on the placement plan. After an event causes disruption of the training iterations, one of the checkpoints generated at the first server is retrieved from the second server and used to resume the training iterations.
Type: Application
Filed: October 20, 2023
Publication date: December 26, 2024
Applicant: Amazon Technologies, Inc.
Inventors: Zhuang Wang, Zhen Jia, Shuai Zheng, Zhen Zhang, Xinwei Fu, Yida Wang
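A ring-style layout is one simple way to realize the described server-to-replica mapping (the application's actual plan generation may differ): server i stores its checkpoint replicas on the next few servers in the ring, and after a failure those same servers are the recovery sources.

```python
def make_placement_plan(num_servers, num_replicas=2):
    """Ring placement: server i replicates its training state checkpoints
    to the next `num_replicas` servers (a common scheme, assumed here)."""
    return {i: [(i + k) % num_servers for k in range(1, num_replicas + 1)]
            for i in range(num_servers)}

def recovery_sources(plan, failed_server):
    """Servers holding replicas of the failed server's latest checkpoint."""
    return plan[failed_server]

# Four training servers, two replicas each: server 3 replicates to 0 and 1.
plan = make_placement_plan(4)
```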
-
Publication number: 20240428083
Abstract: A machine-learning system includes worker nodes communicating with a single server node. Worker nodes are independent neural networks initialized locally on separate data silos. The server node receives the last layer output (“smashed data”) from each worker node during training, aggregates the result, and feeds into its own server neural network. The server then calculates an error and instructs the worker nodes to update their model parameters using gradients to reduce the observed error. A parameterized level of noise is applied to the worker nodes between each training iteration for differential privacy. Each worker node separately parameterizes the amount of noise applied to its local neural network module in accordance with its independent privacy requirements.
Type: Application
Filed: November 2, 2022
Publication date: December 26, 2024
Inventors: Grzegorz Gawron, Philip Stubbings, Chi Lang Ngo
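A bare-bones sketch of the data flow (the linear "layer", Gaussian noise, and mean aggregation are simplifications, not the patent's components): each worker sends noisy smashed data at its own privacy-driven noise scale, and the server aggregates before feeding its own network.

```python
import random

def worker_forward(features, weights, noise_scale, rng):
    """A worker's local layer output ('smashed data'), with Gaussian noise
    added at a per-worker scale for differential privacy."""
    smashed = sum(f * w for f, w in zip(features, weights))
    return smashed + rng.gauss(0.0, noise_scale)

def server_aggregate(smashed_values):
    """The server aggregates smashed data from all workers (mean here)."""
    return sum(smashed_values) / len(smashed_values)

rng = random.Random(0)
# Two workers with different privacy requirements -> different noise scales.
outputs = [worker_forward([1.0, 2.0], [0.5, 0.25], scale, rng)
           for scale in (0.1, 1.0)]
agg = server_aggregate(outputs)
```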
-
Publication number: 20240428084
Abstract: According to a present invention embodiment, a system for training a reinforcement learning agent comprises one or more memories and at least one processor coupled to the one or more memories. The system trains a machine learning model based on training data to generate a set of hyperparameters for training the reinforcement learning agent. The training data includes encoded information from hyperparameter tuning sessions for a plurality of different reinforcement learning environments and reinforcement learning agents. The machine learning model determines the set of hyperparameters for training the reinforcement learning agent, and the reinforcement learning agent is trained according to the set of hyperparameters. The machine learning model adjusts the set of hyperparameters based on information from testing of the reinforcement learning agent.
Type: Application
Filed: June 23, 2023
Publication date: December 26, 2024
Inventors: Elita Astrid Angelina Lobo, Nhan Huu Pham, Dharmashankar Subramanian, Tejaswini Pedapati
-
Publication number: 20240428085
Abstract: A method and system for generating a prediction in a low resource device using a decision tree based machine learning model includes receiving input for a prediction request, selecting a first tree from the machine learning model, selecting and loading a first node from the first tree into working memory, accumulating a result from the first node, releasing the first node from working memory, and selecting and loading a second node from the first tree into working memory.
Type: Application
Filed: October 7, 2021
Publication date: December 26, 2024
Inventors: Prasanna CHALAPATHY, Ashwin Kumar MURUGANANDAM, Anila JOSHI
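The load-accumulate-release loop for one tree can be sketched as follows (the node layout and fetch function are hypothetical): only a single node is resident in working memory at any time, which is what makes the approach viable on a low-resource device.

```python
def predict_low_memory(storage, fetch, features):
    """Evaluate one decision tree while holding only a single node in
    working memory. `fetch(storage, i)` reads node i from slow storage;
    each node is a dict holding either a leaf 'value' or a split."""
    node_id = 0  # start at the root
    while True:
        node = fetch(storage, node_id)          # load one node
        if "value" in node:                     # leaf: this is the result
            return node["value"]
        if features[node["feature"]] <= node["threshold"]:
            node_id = node["left"]
        else:
            node_id = node["right"]
        # `node` is rebound next iteration, releasing it from working memory

def fetch_node(storage, i):
    """Stand-in for reading a single node from flash or disk."""
    return storage[i]

# Tiny tree: predict 0.0 if feature 0 <= 0.5, else 1.0.
TREE = {
    0: {"feature": 0, "threshold": 0.5, "left": 1, "right": 2},
    1: {"value": 0.0},
    2: {"value": 1.0},
}
```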
-
Publication number: 20240428086
Abstract: Systems and methods are directed to incorporating approximate nearest neighbor search as implicit edges in a knowledge graph. The system generates an approximate nearest neighbor (ANN) index that indexes entities by their embeddings. The system models a knowledge graph by including the embeddings as nodes in the knowledge graph. Based on a search query, the system performs a search of the knowledge graph to obtain results, whereby performing the search includes traversing one or more implicit edges from a node of an embedding in the knowledge graph to one or more related nodes in semantic vector space based on the ANN index. The results are then presented on the device of the user.
Type: Application
Filed: June 22, 2023
Publication date: December 26, 2024
Inventors: Jan-Ove Almli KARLBERG, Jeffrey L. Wight, Tor Kreutzer, Øystein Fledsberg, Ronny Jensen, Anders Tungeland Gjerdrum, Theodoros Gkountouvas
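The "implicit edge" idea can be illustrated with brute-force cosine similarity standing in for the ANN index (a real system would use an approximate index precisely to avoid scanning all embeddings; the node names below are invented):

```python
def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def implicit_neighbors(node_id, embeddings, k=2):
    """Traverse 'implicit edges' from a node: its k nearest neighbors in
    semantic vector space, computed here by exhaustive comparison."""
    query = embeddings[node_id]
    others = [(other, cosine(query, emb))
              for other, emb in embeddings.items() if other != node_id]
    others.sort(key=lambda pair: pair[1], reverse=True)
    return [other for other, _ in others[:k]]

# Toy embeddings: "python" sits closer to "java" than to "snake" in this space.
EMB = {
    "python": [1.0, 0.1],
    "java":   [0.9, 0.2],
    "snake":  [0.2, 1.0],
}
```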
-
Publication number: 20240428087
Abstract: Arrangements for enhanced system and graphical user interface customization based on machine-learned context are provided. In some aspects, historical data may be received from a plurality of data sources and used to train a machine learning model to generate recommended modifications to systems or user interfaces based on user specific data. User specific data may be received from a plurality of data sources. The user specific data may be used as inputs to the machine learning model and, upon execution of the model, a recommendation for one or more modifications to at least one of a system or a user interface may be output. The recommendation may be provided to the user and, if accepted, an instruction causing the recommended modification may be generated and transmitted to one or more computing devices. Additional user specific data may be subsequently received and analyzed to identify additional modifications for recommendation and/or execution.
Type: Application
Filed: June 23, 2023
Publication date: December 26, 2024
Inventors: Shola L. Oni, Jo-Ann Taylor, Vijaya L. Vemireddy, Jinna Kim
-
MACHINE LEARNING USING MAP REPRESENTATIONS OF CATEGORICAL DATA TO PROVIDE CLASSIFICATION PREDICTIONS
Publication number: 20240428088
Abstract: Various embodiments of the present disclosure provide machine learning using map representations of categorical data to provide classification predictions. In one example, an embodiment provides for generating a first map representation of a first categorical input feature set for categorical data based on a first coding standard. A second map representation of a second categorical input feature set for the categorical data may also be generated based on a second coding standard. Additionally, at least one machine learning model may be applied to the first map representation and the second map representation to generate the prediction output. Based on the prediction output one or more prediction-based actions may also be performed.
Type: Application
Filed: June 20, 2023
Publication date: December 26, 2024
Inventors: Ahmed Selim, Paul J. Godden, Melanie McCarney, Gregory J. Boss, Erin A. Satterwhite, Nancy J. Mendelsohn, Michael Bridges
-
Publication number: 20240428089
Abstract: Approaches are described for generating suggestions for new nodes or new relationships in a knowledge graph based on content of data assets represented by existing nodes in the knowledge graph. The knowledge graph is defined by nodes connected by edges. A method includes determining that a data asset represented by a root node of a knowledge graph has been changed, where the changed data asset is represented by a version node connected to the root node. The changed data asset is processed, including: identifying one or more candidate terms in the changed data asset, and comparing each candidate term with each of one or more existing terms from data assets of the knowledge graph other than the changed data asset to obtain (i) one or more of the candidate terms that do not correspond to any existing term or (ii) one or more candidate terms that each corresponds to a respective existing term that is not related to the version node representing the changed data asset.
Type: Application
Filed: March 4, 2024
Publication date: December 26, 2024
Inventors: Kyl Wellman, Jon Green, Tyler Warden, James Maniscalco, Rex Ahlstrom
-
Publication number: 20240428090
Abstract: The information output method is an information output method executed by a computer, the information output method including: obtaining first information on at least one of a state of a mobile object or an environment surrounding the mobile object, the mobile object moving through at least one of autonomous movement or a remote operation by an operator; predicting whether an emergency situation that makes the autonomous movement or the remote operation of the mobile object difficult will occur, based on the first information obtained; and outputting a prediction result of whether the emergency situation will occur.
Type: Application
Filed: September 5, 2024
Publication date: December 26, 2024
Inventors: Takashi HASHIMOTO, Shunsuke KUHARA, Toshiya ARAI
-
Publication number: 20240428091
Abstract: A method and related system operations include determining a predicted category by providing a prediction model with a set of input feature values and generating a plurality of conditionals based on the set of input feature values for a set of features and the predicted category. The method also includes filtering the plurality of conditionals based on a knowledge base to obtain a selected conditional by generating a set of sub-conditional paths by providing a candidate conditional of the plurality of conditionals as an input to a prompt generator model, and selecting the candidate conditional as the selected conditional based on a determination that the set of sub-conditional paths satisfies a set of criteria associated with a set of sequences of the knowledge base. The method further includes storing the selected conditional in a data structure in association with the set of input feature values.
Type: Application
Filed: June 20, 2023
Publication date: December 26, 2024
Applicant: Capital One Services, LLC
Inventors: Samuel SHARPE, Christopher Bayan BRUSS, Brian BARR
-
Publication number: 20240428092
Abstract: A computing device obtains a plurality of input objects. The computing device determines settable attributes of each input object of the plurality of input objects and creates a subset of the settable attributes based on an input filter. The computing device inserts the subset of the settable attributes into a rules engine, the rules engine comprising a set of rules evaluated with an input and producing an output during an execution of the rules engine. The computing device determines, during an execution of the rules engine, a plurality of output objects created during the execution of the rules engine and gettable attributes of each output object of the plurality of output objects and creates a subset of the gettable attributes based on an output filter. The computing device stores rules and corresponding gettable attributes and values of the gettable attributes based on the subset of the gettable attributes in memory.
Type: Application
Filed: June 21, 2023
Publication date: December 26, 2024
Inventors: Robert Geada, Rui Vieira
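A hypothetical sketch of the filtering pattern described above: attributes are filtered before the rules run (input filter) and the rule outputs are captured afterwards. The `Order` class, the filter sets, and the tiny rules "engine" are invented for illustration, not the patented design:

```python
class Order:
    """Toy input object with settable attributes."""
    def __init__(self, amount, region, internal_id):
        self.amount = amount
        self.region = region
        self.internal_id = internal_id

def filtered_attributes(obj, allow):
    """Keep only attributes named in the filter (input or output filter)."""
    return {name: getattr(obj, name) for name in vars(obj) if name in allow}

def rules_engine(facts):
    """Toy rule set: evaluate each rule against the filtered input facts."""
    outputs = {}
    if facts.get("amount", 0) > 100:
        outputs["discount"] = 0.1
    if facts.get("region") == "EU":
        outputs["vat"] = True
    return outputs

order = Order(amount=250, region="EU", internal_id="x-99")
facts = filtered_attributes(order, allow={"amount", "region"})  # input filter drops internal_id
result = rules_engine(facts)  # gettable outputs, ready to store alongside the facts
```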
-
Publication number: 20240428093
Abstract: A three-dimensional (3D) logics visualization system. The system includes: an immersive headset configured to visualize a 3D environment; and a system for rendering and processing logic solutions within the 3D environment, the system including: an editor for creating a 3D hypergraph that represents a logic solution to a reasoning problem, wherein the 3D hypergraph includes nodes connected by arcs arranged in x, y, and z dimensions; and an interface manager for viewing the 3D hypergraph from different perspectives.
Type: Application
Filed: June 24, 2024
Publication date: December 26, 2024
Inventors: Selmer Bringsjord, Alexander Bringsjord, Naveen Sundar Govindarajulu
-
Publication number: 20240428094
Abstract: This application discloses a model accuracy determining method and apparatus, and a network-side device. A model accuracy determining method in an embodiment of this application includes: performing, by a first network element, inference for a task based on a first model; determining, by the first network element, first accuracy corresponding to the first model, where the first accuracy is used to indicate accuracy of an inference result of the task obtained by the first model; and in a case that the first accuracy meets a preset condition, sending, by the first network element, first information to a second network element, where the first information is used to indicate that accuracy of the first model does not meet an accuracy requirement or has decreased; where the second network element is a network element that triggers the task.
Type: Application
Filed: September 4, 2024
Publication date: December 26, 2024
Inventors: Weiwei CHONG, Sihan CHENG, Xiaobo WU
-
Publication number: 20240428095
Abstract: A system and method are disclosed to identify one or more price-demand elasticity causal factors and to forecast demand using the one or more price-demand elasticity causal factors. Embodiments include a computer comprising a processor and memory. Embodiments train a machine learning model to identify one or more external causal factors that influence demand for one or more products. Embodiments train the machine learning model to generate one or more price-demand elasticity causal factors to predict a target outcome for a given product demand. Embodiments predict, with the machine learning model, a demand for the one or more products based, at least in part, on the identified one or more external causal factors and the generated one or more price-demand elasticity causal factors.
Type: Application
Filed: September 4, 2024
Publication date: December 26, 2024
Inventors: Felix Christopher Wick, Shyam Narasimhan
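For intuition only, a textbook constant-elasticity demand curve shows how a fitted elasticity factor and an external causal factor could combine into a forecast; the functional form and every number here are invented, not taken from the patent:

```python
def forecast_demand(base_demand, base_price, price, elasticity, external=1.0):
    """Constant-elasticity sketch: demand = base * (p / p0) ** e * external.

    `elasticity` plays the role of a price-demand elasticity causal factor;
    `external` stands in for an external causal factor (e.g. seasonality).
    """
    return base_demand * (price / base_price) ** elasticity * external

# A 10% price cut with elasticity -2 lifts predicted demand by roughly 23%.
d = forecast_demand(base_demand=100.0, base_price=10.0, price=9.0, elasticity=-2.0)
```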
-
Publication number: 20240428096
Abstract: A system and method incorporating Robotic Processing Automation (RPA) and machine learning to find telecom expense management information accessed through a site or portal, such that RPA bots are able to learn the most effective way to access the information using the minimum amount of computing resources, and allowing the RPA bot to self-modify to optimize and adjust to changing environments on the site or portal with minimal or even no manual intervention.
Type: Application
Filed: September 5, 2024
Publication date: December 26, 2024
Applicant: Tangoe US, Inc.
Inventor: Zachary Goldberg
-
Publication number: 20240428097
Abstract: A method for determining model accuracy includes: determining, by a first network element, first accuracy of a first model, where the first accuracy indicates an accuracy of a result of inference performed on a task by using the first model; and sending first information to a second network element in a case that the first network element determines that the first accuracy meets a preset condition, where the first information indicates that accuracy of the first model degrades. The first network element is a network element providing the first model, and the second network element is a network element that performs inference on the task.
Type: Application
Filed: September 6, 2024
Publication date: December 26, 2024
Inventors: Sihan Cheng, Weiwei Chong
-
Publication number: 20240428098
Abstract: A method for processing data in a communication network includes: determining, by a first network element, a first accuracy of a first model, where the first accuracy is used for indicating accuracy of the first model in practical inference; and re-training the first model or re-selecting a second model, by the first network element, in a case that the first accuracy meets a preset condition.
Type: Application
Filed: September 6, 2024
Publication date: December 26, 2024
Inventors: Sihan Cheng, Weiwei Chong, Xiaobo Wu
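The accuracy-monitoring pattern shared by the three network-element entries above (20240428094, 20240428097, 20240428098) can be sketched in a few lines. The threshold, function names, and "action" strings are illustrative stand-ins, not anything defined in the patents:

```python
ACCURACY_THRESHOLD = 0.9  # hypothetical preset condition

def first_accuracy(predictions, ground_truth):
    """Accuracy of the first model's inference results on the task."""
    correct = sum(p == t for p, t in zip(predictions, ground_truth))
    return correct / len(ground_truth)

def monitor(predictions, ground_truth):
    """Return the action the monitoring network element would take."""
    acc = first_accuracy(predictions, ground_truth)
    if acc < ACCURACY_THRESHOLD:
        # accuracy degraded: report it (send "first information") and retrain
        # or re-select a model, depending on the embodiment
        return "notify-and-retrain"
    return "keep-model"

action = monitor([1, 0, 1, 1], [1, 1, 1, 1])  # 3 of 4 correct -> accuracy 0.75
```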
-
Publication number: 20240428099
Abstract: An apparatus, computer-readable medium, and computer-implemented method for postal address identification, including receiving one or more sequences of one or more tokens corresponding to one or more candidate postal address data objects, computing at least one candidate vector in a vector space, the at least one candidate vector corresponding to at least one candidate postal address data object in the one or more candidate postal address data objects, the vector space describing a universe of postal addresses and being clustered into a plurality of clusters, and determining whether the at least one candidate postal address data object corresponds to a postal address based at least in part on applying one or more outlier detection methods to the at least one candidate vector and one or more clusters in the plurality of clusters.
Type: Application
Filed: September 10, 2024
Publication date: December 26, 2024
Inventors: Igor Balabine, Dina Laevsky
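A toy sketch of the clustered-vector-space idea: embed a candidate's tokens, find the nearest cluster centroid, and accept the candidate as an address only if it is not an outlier. The embedding, centroids, and distance cutoff are all invented placeholders, and distance-to-centroid is just one simple outlier test among the many the claim covers:

```python
import math

CENTROIDS = [(0.0, 0.0), (10.0, 10.0)]  # clusters of known postal addresses
OUTLIER_DISTANCE = 5.0                   # hypothetical outlier cutoff

def embed(tokens):
    """Stand-in embedding: (average token length, token count)."""
    return (sum(len(t) for t in tokens) / len(tokens), float(len(tokens)))

def is_postal_address(tokens):
    """Outlier test: candidate is valid if close enough to some cluster."""
    vec = embed(tokens)
    nearest = min(math.dist(vec, c) for c in CENTROIDS)
    return nearest <= OUTLIER_DISTANCE

candidate = ["12", "Main", "St"]   # embeds near the first cluster
valid = is_postal_address(candidate)
```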
-
Publication number: 20240428100
Abstract: A regression model is trained, and sensor data of a vehicle is processed. Sensor data of an environment is obtained, and a segmentation model is applied to the sensor data to obtain segmented sensor data of an object in the environment. Further, a regression model is applied to the segmented sensor data to determine a steering angle for the vehicle.
Type: Application
Filed: June 25, 2024
Publication date: December 26, 2024
Applicant: Elektrobit Automotive GmbH
Inventors: Sandy Rodrigues, Pavithree Shetty, Akshatha Balakrishna, Anoop George, Seyed Hami Nourbakhsh, Thomas Kleinhenz
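The two-stage pipeline above can be mocked up in miniature: a segmentation step isolates the object (here, "lane" pixels in one sensor row) and a regression step maps the segmented data to a steering angle. Both stages are toy stand-ins for trained models, not the patented implementation:

```python
def segment(sensor_row):
    """Toy segmentation model: return indices of 'lane' pixels (value == 1)."""
    return [i for i, px in enumerate(sensor_row) if px == 1]

def regress_steering(lane_indices, width):
    """Toy regression model: steer toward the lane centroid, scaled to [-1, 1]."""
    if not lane_indices:
        return 0.0
    centroid = sum(lane_indices) / len(lane_indices)
    return (centroid - width / 2) / (width / 2)

row = [0, 0, 1, 1, 0, 0, 0, 0]                          # lane is left of centre
angle = regress_steering(segment(row), width=len(row))   # negative -> steer left
```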
-
Publication number: 20240428101
Abstract: Embodiments are directed to methods and systems for authenticating a user and interpolating user preference embeddings. The systems generate, using a neural network trained to generate features based on training data comprising human voices spoken by a plurality of historical speakers inside a vehicle, input features based on a human voice of a current speaker inside the vehicle, and calculate similarities between an input vector of the input features and historical vectors in voiceprints of one or more enrolled users. After determining a similarity between the input vector and at least one historical vector in a voiceprint of an identified user is less than a threshold similarity, the systems authenticate the current speaker as the identified user, calculate a probabilistic notion based on the similarity, and apply the probabilistic notion to interpolate between downstream user preference embeddings associated with the identified user.
Type: Application
Filed: June 22, 2023
Publication date: December 26, 2024
Applicant: Toyota Connected North America, Inc.
Inventors: Taylor Smith, King Chun Ma, Benjamin R. Resnick, Haris Siddiqui
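A conventional voiceprint comparison sketch: score an input feature vector against an enrolled user's historical vector with cosine similarity, then authenticate when the score clears a threshold. The vectors and threshold are invented, and the direction of the comparison follows the conventional formulation rather than any particular embodiment in the claim:

```python
import math

THRESHOLD = 0.8  # hypothetical similarity threshold

def cosine_similarity(a, b):
    """Cosine similarity between two 2-D feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def authenticate(input_vector, voiceprint_vector):
    """Accept the current speaker if similarity clears the threshold."""
    return cosine_similarity(input_vector, voiceprint_vector) >= THRESHOLD

enrolled = (1.0, 0.2)    # historical vector from the enrolled user's voiceprint
speaker = (0.9, 0.25)    # input features from the current speaker
ok = authenticate(speaker, enrolled)
```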
-
Publication number: 20240428102
Abstract: A non-transitory computer-readable recording medium storing a training data generation program for causing a computer to execute processing that includes: calculating an occurrence probability of a value of a second attribute other than a first attribute of a plurality of attributes, for each of a first value and a second value of the first attribute of the plurality of attributes included in data; selecting a single or a plurality of the second attributes from among the plurality of attributes, based on a loss function that includes a parameter that indicates a difference between the occurrence probabilities respectively for the first value and the second value; and generating training data, based on the single or the plurality of attributes.
Type: Application
Filed: September 3, 2024
Publication date: December 26, 2024
Applicant: Fujitsu Limited
Inventor: Ryosuke SONODA
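The first step described above can be sketched directly: for each value of a first attribute, compute the occurrence probability of each value of a second attribute, so that the difference between the probabilities can feed a loss term. The attribute names and data rows are invented for illustration:

```python
from collections import Counter

data = [
    {"gender": "A", "label": "hire"},
    {"gender": "A", "label": "hire"},
    {"gender": "A", "label": "reject"},
    {"gender": "B", "label": "hire"},
    {"gender": "B", "label": "reject"},
]

def occurrence_probability(rows, first_attr, first_value, second_attr):
    """P(second_attr = v | first_attr = first_value) for each value v."""
    subset = [r[second_attr] for r in rows if r[first_attr] == first_value]
    counts = Counter(subset)
    return {v: c / len(subset) for v, c in counts.items()}

p_a = occurrence_probability(data, "gender", "A", "label")  # probabilities given value A
p_b = occurrence_probability(data, "gender", "B", "label")  # probabilities given value B
gap = abs(p_a["hire"] - p_b["hire"])  # the difference a loss function could penalise
```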
-
Publication number: 20240428103
Abstract: A method for obtaining a plurality of entangled qubits represented by a lattice structure that includes a plurality of contiguous lattice cells. A respective edge of a respective lattice cell corresponds to one or more edge qubits, and a respective face of the respective lattice cell corresponds to one or more face qubits. Each face qubit is entangled with adjacent edge qubits. A first face of the respective lattice cell corresponds to two or more face qubits, and/or a first edge corresponds to two or more edge qubits. A device for obtaining the plurality of entangled qubits represented by the above-described lattice structure is also described.
Type: Application
Filed: August 2, 2024
Publication date: December 26, 2024
Inventors: Naomi Nickerson, Terence Rudolph
-
Publication number: 20240428104
Abstract: A method, system, and computer program product for qubit sharing across simultaneous quantum job and/or model execution. Qubit groups within quantum jobs and/or trained models that match with respect to a starting state and a gate structure are identified. Furthermore, qubit groups that are considered for dynamic quantum job and/or model reset and reuse for another computation during a simultaneous quantum job and/or model execution are identified. Based on such identified qubit groups, a record of potential quantum job and/or model minimizations is created. A potential quantum job and/or model minimization is removed one at a time from the record until the quantum jobs and/or models can be positioned on the coupling map. Once that occurs, single compressed quantum jobs and/or models are generated that each use two or more quantum jobs and/or models that can share qubits based on the current record of potential quantum job and/or model minimizations.
Type: Application
Filed: June 22, 2023
Publication date: December 26, 2024
Inventors: John S. Werner, Vladimir Rastunkov, Frederik Frank Flöther
-
Publication number: 20240428105
Abstract: One or more systems, computer program products and/or computer-implemented methods of use provided herein relate to a process to generate an ansatz-hardware pairing. A system can comprise a memory that stores computer executable components, and a processor that executes the computer executable components stored in the memory, wherein the computer executable components can comprise a machine learning model that compares inputs to a database of stored ansatz-hardware pairings and that generates the ansatz-hardware pairing based on the comparing, wherein the inputs comprise desired ansatz metrics defining a variational quantum algorithm, and hardware metrics of quantum hardware available to operate a quantum circuit defined by the ansatz, and a generating component that determines a prediction comprising the ansatz-hardware pairing, wherein the prediction comprises a predicted accuracy of an output of the quantum circuit to be performed on the quantum hardware of the ansatz-hardware pairing.
Type: Application
Filed: June 26, 2023
Publication date: December 26, 2024
Inventors: Anupama RAY, Kalyan DASGUPTA, SheshaShayee K RAGHUNATHAN, Dhinakaran VINAYAGAMURTHY, Dhiraj MADAN