Patent Applications Published on May 2, 2024
  • Publication number: 20240143960
    Abstract: The present disclosure provides a method for recognizing 2D code information.
    Type: Application
    Filed: October 24, 2023
    Publication date: May 2, 2024
    Inventor: Thorsten KORPITSCH
  • Publication number: 20240143961
    Abstract: An information processing apparatus including an acquisition unit that acquires two-dimensional code information included in an acquired image, a control unit that, in a case where the acquired information satisfies a predetermined condition, executes processing corresponding to the two-dimensional code, and a setting unit that, in a case where the acquired information does not satisfy the predetermined condition, sets a corresponding two-dimensional code as a non-processing target code.
    Type: Application
    Filed: November 1, 2023
    Publication date: May 2, 2024
    Inventor: YASUYUKI TAGAMI
  • Publication number: 20240143962
    Abstract: An item of sports memorabilia is provided having visual images of highlights and important moments of an athlete's career and/or experience applied to equipment used in the sport played by the athlete. Visual images applied to the sports equipment may be photographs, drawings, paintings, renderings, and the like without straying from the scope of the invention. In a particular embodiment, the present disclosure may involve a method of producing and applying the visual images to the equipment via a vinyl or other polymer wrap.
    Type: Application
    Filed: October 26, 2023
    Publication date: May 2, 2024
    Inventor: Daniel J. Flynn
  • Publication number: 20240143963
    Abstract: An information processing apparatus comprises: an acquisition unit configured to acquire information of an image forming apparatus; a display control unit configured to, based on the information of the image forming apparatus acquired by the acquisition unit, display a setting screen for setting a schedule by which to execute a color verification, wherein the color verification includes a plurality of steps, and the display control unit, based on the information of the image forming apparatus, specifies which steps can be executed without a user operation from the start of the color verification among the plurality of steps, and based on the specified steps, displays on the setting screen a selection candidate for a step to be executed when a scheduled time arrives.
    Type: Application
    Filed: October 25, 2023
    Publication date: May 2, 2024
    Inventor: TORU SHINNAE
  • Publication number: 20240143964
    Abstract: In a printing apparatus, when an image printed based on image data of an n-th print job as an n-th print target among designated two or more print jobs is an n-th image, and an image printed based on image data of a (n+1)-th print job as a (n+1)-th print target among the designated two or more print jobs is a (n+1)-th image, n-th post-processing setting information and (n+1)-th post-processing setting information are compared to determine whether or not setting of the post-processing needs to be changed, and when the setting of the post-processing does not need to be changed, print setting is changed so as not to print information based on post-processing setting information set to be printed at a position between the n-th image and the (n+1)-th image on the medium.
    Type: Application
    Filed: October 25, 2023
    Publication date: May 2, 2024
    Inventor: Yoichi MITSUI
  • Publication number: 20240143965
    Abstract: An image forming apparatus includes a sheet feeding unit and forms an image on a sheet fed from the sheet feeding unit. An electronic data item is acquired from a cloud service that manages a plurality of electronic data items. A sheet size is extracted from the electronic data item. An image of the electronic data item is formed on a sheet having the extracted sheet size. In a case where predetermined attribute information indicating a job using the cloud service is included in the electronic data item, the image of the electronic data item, appropriate to a sheet fed from the sheet feeding unit, is formed on the fed sheet.
    Type: Application
    Filed: October 19, 2023
    Publication date: May 2, 2024
    Inventor: SHUICHI TAKENAKA
  • Publication number: 20240143966
    Abstract: The technology described herein can generate a unique identifier for a visual media that comprises pre-printed visual indications on the visual media and a user's handwritten signature. The location of the signature on the visual media can be determined by including pre-printed fiducial markers on the visual media. The fiducial markers act as landmarks that allow the size and location of the signature to be determined in absolute terms. The unique identifier is then stored in computer memory on a user-experience server. The user-experience server can associate the unique identifier with a digital asset, such as an image or video, designated by the user. When the unique identifier is provided to the user-experience server a second time, the digital asset can be retrieved and output to the computing device that provided the unique identifier.
    Type: Application
    Filed: January 5, 2024
    Publication date: May 2, 2024
    Inventors: Scott A. Schimke, Jennifer R. Garbos, David Niel Johnson
  • Publication number: 20240143967
    Abstract: An anti-counterfeiting electromagnetic induction sealing liner includes, in top-to-bottom order, a composite function layer, an electromagnetic induction heating layer, and an adhesion layer combined together. The composite function layer includes, in top-to-bottom order, an information layer and a support layer combined together. The information layer has therein an information chip and an antenna circuit electrically connected to the information chip. The composite function layer has a structurally weak region. The structurally weak region crosses the antenna circuit. The anti-counterfeiting electromagnetic induction sealing liner seals a container mouth and has an information read/write function. Therefore, the information layer is destroyed the first time the container mouth is opened, thereby improving the anti-counterfeiting function.
    Type: Application
    Filed: October 28, 2022
    Publication date: May 2, 2024
    Inventor: YEN-WU YANG
  • Publication number: 20240143968
    Abstract: A system, method, and computer-readable medium are disclosed for performing a data center asset management and monitoring operation. The data center asset management and monitoring operation includes: receiving data center asset health information from respective data center assets from a plurality of data center assets; generating a neural network graph using the data center asset health information from the plurality of respective data center assets, the neural network graph comprising a plurality of nodes; calculating node edge weights based upon how similar certain data center assets are to other data center assets; and, calculating a data center asset health score using the neural network graph.
    Type: Application
    Filed: October 27, 2022
    Publication date: May 2, 2024
    Applicant: Dell Products L.P.
    Inventors: Saurabh Kishore, Sudhir Vittal Shetty
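The health-scoring idea in 20240143968 can be illustrated with a minimal sketch: assets become graph nodes, edge weights come from telemetry similarity, and each asset's score blends its own health with that of similar neighbors. The cosine-similarity weighting and the 50/50 blend below are assumptions for illustration, not details taken from the filing.

```python
# Hypothetical sketch of similarity-weighted asset health scoring.
import math

def cosine_similarity(a, b):
    """Edge weight between two assets based on telemetry similarity."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def health_scores(telemetry, local_health):
    """Blend each asset's local health with that of similar assets.

    telemetry: {asset_id: [feature, ...]}
    local_health: {asset_id: float in [0, 1]}
    """
    scores = {}
    for a, feats in telemetry.items():
        weights = {b: cosine_similarity(feats, other)
                   for b, other in telemetry.items() if b != a}
        total = sum(weights.values())
        neighbor = (sum(w * local_health[b] for b, w in weights.items()) / total
                    if total else local_health[a])
        # Equal blend of local and neighborhood evidence (an assumed choice).
        scores[a] = 0.5 * local_health[a] + 0.5 * neighbor
    return scores
```

In the filing the edge weights feed a neural network graph rather than this simple weighted average; the sketch only shows how similarity-based edges propagate health evidence between assets.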
  • Publication number: 20240143969
    Abstract: A formulation graph convolution network (F-GCN) with multiple GCNs assembled in parallel and connected to filters and an external learning architecture is able to predict the effectiveness of a formulation. Input into the multiple GCNs are molecular structures of formulants, which are processed as molecular graphs and output as molecular descriptors. The molecular descriptors are filtered by normalized ratios or fractions of the ingredient molecules in a formulation, such as a battery electrolyte or solvent. A formulation descriptor combines the filtered molecular descriptors to arrive at a predicted performance for the formulation, such as the battery capacity for an electrolyte formulation, by an external learning architecture. F-GCN may use a pre-trained GCN with physico-chemical properties of known molecular structures.
    Type: Application
    Filed: October 31, 2022
    Publication date: May 2, 2024
    Inventors: Vidushi Sharma, Maxwell Giammona, Dmitry Zubarev
  • Publication number: 20240143970
    Abstract: Some embodiments provide a method of predicting a state of a system that is represented by a partial differential equation. The method comprises training a neural network for an initial state of said system to obtain a set of neural network parameters to provide a spatial representation of said system at an initial time. The method further comprises modifying said parameters for intermediate times between said initial time and a prediction time such that each modified set of parameters is used to provide a respective spatial representation of said system at each corresponding intermediate time using said neural network. The method further comprises modifying said set of parameters to provide a prediction set of parameters that is used to provide a predicted spatial representation of said system at said prediction time using said neural network.
    Type: Application
    Filed: March 8, 2022
    Publication date: May 2, 2024
    Applicant: The Johns Hopkins University
    Inventors: Tamer ZAKI, Yifan DU
  • Publication number: 20240143974
    Abstract: A system and method for constructing a probability model and automatically responding to process anomalies identified by the probability model are disclosed. Data is received for current and prior states of a process, comprising variables in at least two dimensions, and the at least two dimensions being not independently and identically distributed. A segment of a fixed number of prior states is selected and fed into a neural network to output a probability vector for each of the two or more dimensions. The Cartesian product of these probability vectors is calculated to obtain a tensor, wherein each value in the tensor represents a probability that the prior states would be followed by a given state. If the probability in the tensor associated with the present state is less than a predetermined threshold, an electronic communication is automatically generated and transmitted to a client computing device.
    Type: Application
    Filed: October 31, 2022
    Publication date: May 2, 2024
    Applicant: MORGAN STANLEY SERVICES GROUP INC.
    Inventors: Sarthak Banerjee, Mario Jayakumar
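The tensor-lookup step described in 20240143974 is concrete enough to sketch in a few lines: the per-dimension probability vectors (produced by the neural network, stubbed out here) are combined by an outer product, and the entry for the observed state is compared against a threshold. Function names are illustrative.

```python
# Minimal sketch of the probability-tensor anomaly check (two dimensions).
def outer(p, q):
    """Cartesian product of two probability vectors -> joint probability grid."""
    return [[pi * qj for qj in q] for pi in p]

def is_anomalous(p_dim1, p_dim2, state, threshold):
    """state = (i, j): indices of the observed value in each dimension."""
    tensor = outer(p_dim1, p_dim2)
    i, j = state
    # A present state is anomalous if the model assigned it low probability.
    return tensor[i][j] < threshold
```

For more than two dimensions the same construction extends by repeatedly taking outer products, giving one tensor axis per dimension.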
  • Publication number: 20240143975
    Abstract: Systems and methods of optimizing a charging of a vehicle battery are disclosed. Using one or more electronic battery sensors, observable battery state data is determined regarding the charging of the battery. A neural network feature extractor extracts features from preceding vehicle battery state information. A reinforcement learning model, such as an actor-critic model, includes an actor model configured to produce an output associated with a charge command to charge the battery, and a critic model configured to output a predicted reward. The reinforcement learning model is trained based on the vehicle battery state information and the extracted features. This includes updating weights of the actor model to maximize the predicted reward output by the critic model, and updating weights of the feature extractor and weights of the critic model to minimize a difference between the predicted reward and health-based rewards received from charging the battery.
    Type: Application
    Filed: November 2, 2022
    Publication date: May 2, 2024
    Inventors: Christoph KROENER, Jared EVANS
  • Publication number: 20240143976
    Abstract: A method and device for labeling are provided. A labeling method includes: determining inference performance features of respective neural network models included in an ensemble model, wherein the inference performance features correspond to performance of the neural network models with respect to inferring classes of the ensemble model; based on the inference performance features, determining weights for each of the classes for each of the neural network models, wherein the weights are not weights of nodes of the neural network models; generating classification result data by performing a classification inference operation on labeling target inputs by the neural network models; determining score data representing confidences for each of the classes for the labeling target inputs by applying weights of the weight data to the classification result data; and measuring classification accuracy of the classification operation for the labeling target inputs based on the score data.
    Type: Application
    Filed: March 31, 2023
    Publication date: May 2, 2024
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Huijin LEE, Wissam BADDAR, Saehyun AHN, Seungju HAN
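The per-class weighting in 20240143976 can be sketched as follows: each model's class probabilities are scaled by that model's per-class weights (derived from its measured inference performance) before the ensemble scores are combined. The summation-and-normalization form below is an assumed concrete choice.

```python
# Hypothetical sketch of confidence scoring with per-model, per-class weights.
def ensemble_scores(model_outputs, class_weights):
    """model_outputs: list of per-class probability lists, one per model.
    class_weights: matching list of per-class weight lists, e.g. each
    model's per-class validation accuracy (an assumption here)."""
    n_classes = len(model_outputs[0])
    scores = [0.0] * n_classes
    for probs, weights in zip(model_outputs, class_weights):
        for c in range(n_classes):
            scores[c] += weights[c] * probs[c]
    total = sum(scores)
    # Normalize so the scores read as per-class confidences.
    return [s / total for s in scores] if total else scores
```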
  • Publication number: 20240143977
    Abstract: This application discloses a model training method, which may be applied to the field of artificial intelligence. The method includes: when training a first neural network model based on a training sample, determining N parameters from M parameters of the first neural network model based on a capability of affecting data processing precision by each parameter; and updating the N parameters. In this application, on a premise that it is ensured that the data processing precision of the model meets a precision requirement, because only N parameters in M parameters in an updated first neural network model are updated, an amount of data transmitted from a training device to a terminal device can be reduced.
    Type: Application
    Filed: October 27, 2023
    Publication date: May 2, 2024
    Inventors: Zhiyuan ZHANG, Xuancheng REN, Xu SUN, Bin HE, Li QIAN
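The bandwidth saving in 20240143977 comes from updating, and therefore transmitting, only the N most precision-affecting of the model's M parameters. The filing does not specify the sensitivity measure; the sketch below uses gradient magnitude as a stand-in.

```python
# Hypothetical sketch: update only the n most sensitive parameters,
# ranked here by |gradient| (the filing's actual measure is unspecified).
def sparse_update(params, grads, n, lr=0.1):
    """Return the new parameter list plus the small {index: value}
    delta that would be transmitted to the terminal device."""
    ranked = sorted(range(len(params)), key=lambda i: abs(grads[i]), reverse=True)
    chosen = set(ranked[:n])
    new_params = [p - lr * g if i in chosen else p
                  for i, (p, g) in enumerate(zip(params, grads))]
    delta = {i: new_params[i] for i in chosen}
    return new_params, delta
```

With n much smaller than the total parameter count, the transmitted delta is correspondingly small, which is the stated point of the method.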
  • Publication number: 20240143979
    Abstract: Synthetic data is annotated information that computer simulations or algorithms generate as an alternative to real-world data; it is created in digital worlds rather than collected from or measured in the real world. Embodiments herein provide a method and system for generating synthetic data with domain-adaptable features using a neural network. The system is configured to receive seed data from a source domain as input data. The seed data is considered the normal state of a machine. The normal state, which is the initial stage of the source domain, consists of a set of features with a certain range of values. Further, a neural network based model is used to generate high-quality data with adaptation of the domain-specific features, so that large amounts of data can be obtained for training robust deep learning models that adapt across domains by selectively emphasizing, or giving higher importance to, sets of features.
    Type: Application
    Filed: September 11, 2023
    Publication date: May 2, 2024
    Applicant: Tata Consultancy Services Limited
    Inventors: SOMA BANDYOPADHYAY, ANISH DATTA, CHIRABRATA BHAUMIK, TAPAS CHAKRAVARTY, ARPAN PAL, RIDDHI PANSE, MUDASSIR ALI SABIR
  • Publication number: 20240143980
    Abstract: Conventional transport mode detection relies either on GPS data or uses supervised learning for transport mode detection, requiring labelled data with hand crafted features. Embodiments of the present disclosure provide a method and system for identification of transport modes of commuters via unsupervised learning implemented using a multistage learner. Unlabeled time series data received from accelerometer of commuters mobiles from a diversified population is processed using a unique journey segment detection technique to eliminate redundant data corresponding to stationary segments of commuter or user. The non-stationary journey segments are represented using domain generalizable Invariant Auto-Encoded Compact Sequence (I-AECS), which is a learned compact representation encompassing the encoded best diversity and commonality of latent feature representation across diverse users and cities.
    Type: Application
    Filed: September 25, 2023
    Publication date: May 2, 2024
    Applicant: Tata Consultancy Services Limited
    Inventors: SOMA BANDYOPADHYAY, ARPAN PAL, RAMESH KUMAR RAMAKRISHNAN, ANISH DATTA
  • Publication number: 20240143981
    Abstract: A recording medium stores a program for causing a computer to execute a process including: classifying data into classes based on a density of the data; performing data augmentation on first data that is positioned in a region where data which is positioned in a region of a first class and which belongs to the first class exists at a higher density than a predetermined density and on second data that is positioned in a region where the data which is positioned in the region of the first class and which belongs to the first class exists at a lower density than the predetermined density; and setting, when the first data after the data augmentation and the second data after the data augmentation overlap each other, a label that corresponds to the first class to first augmentation data, the second data, or second augmentation data.
    Type: Application
    Filed: July 13, 2023
    Publication date: May 2, 2024
    Applicant: Fujitsu Limited
    Inventor: Hiroaki KINGETSU
  • Publication number: 20240143982
    Abstract: Fused channel and/or fused filter convolutions for fast deep neural network execution are provided. In one aspect, a system includes: a processor, connected to a memory, configured to: implement an approximated datapath in a deep neural network having a sequence of adders and multipliers for adding up operands to provide accumulated sums for two or more groups of neurons in the deep neural network, and multiplying the accumulated sums to obtain a product; and make an inference using the deep neural network based on the product from the approximated datapath. A method for approximation in a deep neural network is also provided.
    Type: Application
    Filed: October 26, 2022
    Publication date: May 2, 2024
    Inventors: Swagath Venkataramani, Sarada Krithivasan, Vijayalakshmi Srinivasan
  • Publication number: 20240143983
    Abstract: An SNN circuit structure based on thin-film transistors includes: a synaptic pass-transistor unit in which a synaptic transistor circuit unit in which N thin-film transistors are connected in parallel is connected to a common pass-transistor load to maintain an output voltage range of the synaptic transistor circuit unit constant; an output circuit unit configured to output a fire signal according to an output signal output from the synaptic pass-transistor unit; and a backpropagation generating transistor unit configured to provide a backpropagation signal transmitted to the synaptic transistor circuit unit with the fire signal output from the output circuit unit as an input.
    Type: Application
    Filed: October 26, 2023
    Publication date: May 2, 2024
    Applicant: Pusan National University Industry-University Cooperation Foundation
    Inventors: Sungsik LEE, Yeonsu KANG, Jeongkyun ROH, Danyoung CHA, Moonsuk YI
  • Publication number: 20240143984
    Abstract: A system is provided including a data pipeline and a model pipeline. A data pipeline includes: an input that receives a first dataset representing categorical features and a second dataset representing numerical features; a feature ingestion block that generates an output corresponding to a sum of the first dataset with the second dataset; an output that provides training labels based on a processing of the summed datasets to predict a temporally isolated and discrete event; and a label creation block that receives the output and generates labels for date features in the first dataset. A model pipeline includes a neural network(s) that: receives a first input corresponding to a summation of non-learned date embedding with learned feature embedding; and contextualizes the summation by date embedding historical patient data into the summation. The model pipeline includes a prediction block that receives the contextualized summation and predicts one or more outcomes.
    Type: Application
    Filed: October 20, 2023
    Publication date: May 2, 2024
    Inventors: David Rimshnick, Grigoriy Koytiger, Joshua Hug
  • Publication number: 20240143985
    Abstract: One or more quantisation parameters are identified for transforming values to be processed by a Neural Network (NN) implemented in hardware. An output of a model of the NN is determined in response to training data, the model comprising quantisation blocks, each of which is configured to transform sets of values input to a layer of the NN to a respective fixed point number format defined by quantisation parameters prior to the model processing the sets of values in accordance with the layer. A cost metric of the NN is determined that is a combination of an error metric and an implementation metric representative of an implementation cost of the NN based on the quantisation parameters. The implementation metric is dependent on a first contribution representative of an implementation cost of an output from a layer, and a second contribution representative of an implementation cost of an output from a preceding layer.
    Type: Application
    Filed: June 29, 2023
    Publication date: May 2, 2024
    Inventor: Szabolcs Csefalvay
  • Publication number: 20240143986
    Abstract: Methods of dividing a neural network into chunks of operations executable in a hardware pass of hardware to execute a neural network. The layers of the neural network are divisible into layer groups that comprise a sequence of layers executable in the same hardware pass of the hardware. Each layer group is divisible into chunks of operations executable in a hardware pass of the hardware. The chunks for a layer group are defined by split parameters. A layer group loss function is obtained that represents a performance metric associated with executing a layer group on the hardware as a function of the split parameters and neural network architecture parameters for the layer group.
    Type: Application
    Filed: June 29, 2023
    Publication date: May 2, 2024
    Inventors: Aria Ahmadi, Cagatay Dikici, Clement Charnay, Jason Rogers
  • Publication number: 20240143987
    Abstract: An integrated circuit includes a computer unit configured to execute the neural network. Parameters of the neural network are stored in a first memory. Data supplied at the input of the neural network or generated by the neural network are stored in a second memory. A first barrel shifter circuit transmits data from the second memory to the computer unit. A second barrel shifter circuit delivers data generated during the execution of the neural network by the computer unit to the second memory. A control unit is configured to control the computer unit, the first and second barrel shifter circuits, and accesses to the first memory and to the second memory.
    Type: Application
    Filed: October 23, 2023
    Publication date: May 2, 2024
    Inventors: Vincent HEINRICH, Pascal URARD, Bruno PAILLE
  • Publication number: 20240143988
    Abstract: Dynamic data quantization may be applied to minimize the power consumption of a system that implements a convolutional neural network (CNN). Under such a quantization scheme, a quantized representation of a 3×3 array of m-bit activation values may include 9 n-bit mantissa values and one exponent shared between the n-bit mantissa values (n<m); and a quantized representation of a 3×3 kernel with p-bit parameter values may include 9 q-bit mantissa values and one exponent shared between the q-bit mantissa values (q<p). Convolution of the kernel with the activation data may include computing a dot product of the 9 n-bit mantissa values with the 9 q-bit mantissa values, and summing the shared exponents. In a CNN with multiple kernels, multiple computing units (each corresponding to one of the kernels) may receive the quantized representation of the 3×3 array of m-bit activation values from the same quantization-alignment module.
    Type: Application
    Filed: January 11, 2024
    Publication date: May 2, 2024
    Inventors: Jian hui Huang, James Michael Bodwin, Pradeep R. Joginipally, Shabarivas Abhiram, Gary S. Goldman, Martin Stefan Patz, Eugene M. Feinberg, Berend Ozceri
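The shared-exponent arithmetic in 20240143988 is mechanical enough to sketch: each 3×3 block keeps small integer mantissas plus one shared exponent, the dot product runs on mantissas alone, and the two exponents are summed and applied once at the end. Bit widths and the exponent-selection rule below are illustrative choices, not taken from the filing.

```python
# Pure-Python sketch of block quantization with a shared exponent.
import math

def quantize_block(values, mantissa_bits):
    """Return (mantissas, shared_exp) so that value ~= mantissa * 2**shared_exp."""
    peak = max(abs(v) for v in values)
    if peak == 0:
        return [0] * len(values), 0
    limit = 2 ** (mantissa_bits - 1) - 1   # largest signed mantissa
    # Smallest exponent that keeps every mantissa within the signed range.
    exp = math.ceil(math.log2(peak / limit))
    mantissas = [round(v / 2 ** exp) for v in values]
    return mantissas, exp

def block_dot(acts, kernel, n_bits=8, q_bits=8):
    """Dot product done on integer mantissas; exponents are simply summed."""
    am, ae = quantize_block(acts, n_bits)
    km, ke = quantize_block(kernel, q_bits)
    acc = sum(a * k for a, k in zip(am, km))   # integer multiply-accumulate
    return acc * 2 ** (ae + ke)                # rescale once at the end
```

The power saving comes from the inner loop: all nine multiplies are narrow integer operations, with a single exponent adjustment per block rather than per element.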
  • Publication number: 20240143989
    Abstract: A reservoir calculation device according to an embodiment includes a reservoir circuit and an output circuit. The reservoir circuit receives input data and outputs intermediate signals, each undergoing a transient change when the input data changes. The output circuit outputs an output signal obtained by combining the intermediate signals. The reservoir circuit includes intermediate circuits, each including a neuron circuit and an intermediate output circuit. The neuron circuit generates an intermediate voltage undergoing a transient change corresponding to weight data and the input data when the input data changes. The intermediate output circuit outputs an intermediate signal representing a level of the intermediate voltage from the neuron circuit. The neuron circuit includes a time constant circuit capable of changing a time constant. The time constant circuit is connected between a reference potential and an intermediate terminal outputting the intermediate voltage.
    Type: Application
    Filed: August 27, 2023
    Publication date: May 2, 2024
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Takao MARUKAME, Kumiko NOMURA, Koichi MIZUSHIMA, Yoshifumi NISHI
  • Publication number: 20240143990
    Abstract: A device implementing polariton optical neural network, comprising: Input layer (100) consisting of optical elements ensuring linear pre-processing and distribution of input optical signals (102), middle layer (104), consisting of nonlinear optical elements, realizing nonlinear transformation of the pre-processed input signals (106), output layer (108) consisting of linear optical elements, realizing linear transformation of the signals that had been transformed nonlinearly (110) and generating output signals, passive optical systems (114) optically coupling the input layer (100), the middle layer (104), the output layer (108) characterized in that the nonlinear transformation in the middle layer occurs as a result of interaction of pre-processed input signals (106) with a nonlinear element or elements in the form of an optical microcavity or multiple optical microcavities hosting exciton-polaritons.
    Type: Application
    Filed: June 10, 2022
    Publication date: May 2, 2024
    Applicants: Uniwersytet Warszawski, Instytut Fizyki Polskiej Akademii Nauk
    Inventors: Michal MATUSZEWSKI, Barbara PIETKA, Andrzej OPALA, Jacek SZCZYTKO, Rafal MIREK, Krzysztof TYSZKA, Mateusz KRÓL, Magdalena FURMAN, Jan SUFFCZYNSKI, Wojciech PACUSKI, Bartlomiej SEREDYNSKI
  • Publication number: 20240143992
    Abstract: An information handling system may include at least one processor and a non-transitory memory coupled to the at least one processor. The information handling system may be configured to: receive information regarding a set of variables relating to a machine learning task for analyzing a target variable; perform principal component analysis (PCA) on the set of variables to determine a reduced set of variables; in response to a change in the plurality of variables, dynamically update the reduced set of variables; and determine at least one hyperparameter for the machine learning task based on the dynamically updated reduced set of variables.
    Type: Application
    Filed: October 27, 2022
    Publication date: May 2, 2024
    Applicant: Dell Products L.P.
    Inventors: Bing YUAN, Peter P. O'BRIEN, Ally Junio Oliveira BARRA
  • Publication number: 20240143993
    Abstract: A computer trains, based on many timeseries, many anomaly detectors. Each anomaly detector is configured with a respective distinct contamination factor. Each timeseries is a temporal sequence of datapoints that characterize a device. Each datapoint in the many timeseries has a respective label that indicates whether the device failed when the datapoint occurred. Each anomaly detector detects: a set of anomalous datapoints, the size of which is proportional to the contamination factor of the anomaly detector, a healthy count of anomalous datapoints in timeseries of devices not failed, and an unhealthy count of anomalous datapoints in timeseries of failed devices. For a particular anomaly detector, the computer detects that the magnitude of the difference between the respective healthy count and the respective unhealthy count is less than a predefined threshold. Based on the contamination factor of the particular anomaly detector, anomalous datapoints are oversampled.
    Type: Application
    Filed: October 28, 2022
    Publication date: May 2, 2024
    Inventors: Arno Schneuwly, Suwen Yang
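The contamination-factor sweep in 20240143993 can be illustrated with a toy detector: flag the top fraction of points as anomalous, count anomalies among healthy versus failed devices, and pick the factor where the two counts nearly agree. The top-k scoring detector below is a stand-in for the real trained anomaly detectors.

```python
# Runnable sketch of the contamination sweep with a toy score-threshold detector.
def detect(scores, contamination):
    """Flag the top `contamination` fraction of points as anomalous."""
    k = max(1, round(contamination * len(scores)))
    cutoff = sorted(scores, reverse=True)[k - 1]
    return [s >= cutoff for s in scores]

def balance_gap(scores, failed, contamination):
    """|healthy anomalies - unhealthy anomalies| for one contamination factor."""
    flags = detect(scores, contamination)
    healthy = sum(1 for f, bad in zip(flags, failed) if f and not bad)
    unhealthy = sum(1 for f, bad in zip(flags, failed) if f and bad)
    return abs(healthy - unhealthy)

def pick_contamination(scores, failed, candidates, threshold):
    """Return the first factor whose healthy/unhealthy counts nearly agree."""
    return next((c for c in candidates
                 if balance_gap(scores, failed, c) < threshold), None)
```

In the filing the chosen contamination factor then drives oversampling of anomalous datapoints; that resampling step is omitted here.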
  • Publication number: 20240143994
    Abstract: Machine learning is used to predict a pleasantness of a sound emitted from a device. A plurality of pleasantness ratings from human jurors are received, each pleasantness rating corresponding to a respective one of a plurality of sounds emitted by one or more devices. A microphone system detects a plurality of measurable sound qualities (e.g., loudness, tonality, sharpness, etc.) of these rated sounds. A regression prediction model is trained based on the jury pleasantness ratings and the corresponding measurable sound qualities. Then, the microphone system detects measurable sound qualities of an unrated sound that has not been rated by the jury. The trained regression prediction model is executed on the measurable sound quality of the unrated sound to yield a predicted pleasantness of the unrated sound.
    Type: Application
    Filed: October 31, 2022
    Publication date: May 2, 2024
    Inventors: Felix SCHORN, Florian LANG, Thomas ALBER, Michael KUKA, Carine AU, Filipe J. CABRITA CONDESSA, Rizal Zaini Ahmad FATHONY
  • Publication number: 20240143995
    Abstract: Systems and techniques are provided for determining a distribution for a neural network parameter in designing a neural network architecture of an autonomous vehicle (AV). An example method can include determining an exploration distribution of a neural network parameter for one or more neural networks of one or more AVs; determining, for a target context, a target distribution of the neural network parameter from the exploration distribution, the target context comprising at least one of a driving environment associated with a location, a hardware configuration of one or more AVs, a software configuration of the one or more AVs, and a task of the one or more AVs; and providing, to a computer of an AV, the target distribution for implementing one or more neural network parameter values in the target distribution to adjust a neural network of the computer of the AV for operation in the target context.
    Type: Application
    Filed: October 31, 2022
    Publication date: May 2, 2024
    Inventor: Burkay Donderici
  • Publication number: 20240143996
    Abstract: Systems and methods for training machine learning models are disclosed. An example method includes receiving a semi-labeled set of training samples including a first set of training samples, where each training sample in the first set is assigned a known label, and a second set of training samples, where each training sample in the second set has an unknown label, determining a first loss component, the first loss component providing a loss associated with the first set, determining a second loss component, the second loss component having a value which increases based on a difference between a distribution of individually predicted values of at least the second set and an expected overall distribution of at least the second set, and training the machine learning model, based on the first loss component and the second loss component, to predict labels for unlabeled input data.
    Type: Application
    Filed: October 31, 2022
    Publication date: May 2, 2024
    Applicant: Intuit Inc.
    Inventor: Itay MARGOLIN
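The two-component objective described in the abstract above can be sketched as follows. This is a hypothetical illustration, not the patented implementation: the function name, the choice of KL divergence for the distribution-matching term, and all variable names are assumptions.

```python
import numpy as np

def semi_supervised_loss(probs_labeled, labels, probs_unlabeled, expected_dist):
    """Sketch of a two-component loss: cross-entropy on the labeled set plus
    a penalty that grows with the gap between the average predicted
    distribution over the unlabeled set and an expected overall distribution."""
    # First component: mean cross-entropy over the labeled samples.
    n = len(labels)
    ce = -np.mean(np.log(probs_labeled[np.arange(n), labels] + 1e-12))
    # Second component: KL divergence between the average prediction on the
    # unlabeled set and the expected overall label distribution.
    avg_pred = probs_unlabeled.mean(axis=0)
    kl = np.sum(expected_dist * np.log((expected_dist + 1e-12) / (avg_pred + 1e-12)))
    return ce + kl
```

When labeled predictions are correct and the unlabeled predictions already match the expected distribution, both components vanish; any mismatch in either set raises the loss.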
  • Publication number: 20240143999
    Abstract: The present invention provides a multi-modal data prediction method based on a causal Markov model, belonging to the field of intelligent traffic technology. The method of the present invention includes: collecting regional data and multi-modal traffic data of a research region; taking the time position, the regional point of interest and the weather information as conditional feature variables; taking the regional attraction factor, the bicycle demand factor, the taxi demand factor, the bus demand factor and the traffic speed factor as physical concept variables; taking the bicycle traffic flow, the taxi traffic flow, the bus traffic flow and the regional speed as multi-modal traffic data observation variables, and describing the generation process of the multi-modal traffic flow by using a causal Markov process; solving the causal Markov process by using a neural network; and training the built neural network on the multi-modal traffic data observations.
    Type: Application
    Filed: May 19, 2023
    Publication date: May 2, 2024
    Inventors: Pan DENG, Yu ZHAO, Lin ZHANG, Xiaofeng JIA, Yan LIU, Junting LIU, Mulan WANG
  • Publication number: 20240144000
    Abstract: A neural network model is trained for fairness and accuracy using both real and synthesized training data, such as images. During training a first sampling ratio between the real and synthesized training data is optimized. The first sampling ratio may comprise a value for each group (or attribute), where each value is optimized. A second sampling ratio defines relative amounts of training data that are used for each one of the groups. Furthermore, a neural network model accuracy and a fairness metric are both used for updating the first and second sampling ratios during training iterations. The neural network model may be trained using different classes of training data. The second sampling ratio may vary for each class.
    Type: Application
    Filed: April 26, 2023
    Publication date: May 2, 2024
    Inventors: Yuji Roh, Weili Nie, De-An Huang, Arash Vahdat, Animashree Anandkumar
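One way a per-group sampling ratio could be steered by an accuracy/fairness signal, as the abstract above describes, is a multiplicative-weights update that gives lagging groups more training data. This is a minimal sketch under assumed details; the update rule, learning rate, and names are not taken from the patent.

```python
import numpy as np

def update_sampling_ratios(group_ratios, group_acc, lr=0.1):
    """Sketch: raise the sampling weight of under-performing groups so the
    next training iterations draw more of their data.
    group_ratios: probability of drawing a sample from each group.
    group_acc: current model accuracy per group."""
    gap = group_acc.mean() - group_acc      # positive for lagging groups
    new = group_ratios * np.exp(lr * gap)   # multiplicative-weights step
    return new / new.sum()                  # renormalize to a distribution
```

Iterating this alongside training nudges the sampler toward groups where the accuracy (and hence the fairness metric) is worst.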
  • Publication number: 20240144001
    Abstract: A method and system are disclosed for training a model that implements a machine-learning algorithm. The technique utilizes latent descriptor vectors to change a multiple-valued output problem into a single-valued output problem and includes the steps of receiving a set of training data, processing, by a model, the set of training data to generate a set of output vectors, and adjusting a set of model parameters and component values for at least one latent descriptor vector in the plurality of latent descriptor vectors based on the set of output vectors. The set of training data includes a plurality of input vectors and a plurality of desired output vectors, and each input vector in the plurality of input vectors is associated with a particular latent descriptor vector in a plurality of latent descriptor vectors. Each latent descriptor vector comprises a plurality of scalar values that are initialized prior to training the model.
    Type: Application
    Filed: May 5, 2023
    Publication date: May 2, 2024
    Inventors: Tero Tapani Karras, Timo Oskari Aila, Samuli Matias Laine
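The core idea above, turning a multiple-valued mapping into a single-valued one via per-sample trainable latent descriptors, can be shown with a toy linear model. Two samples share the same input but have conflicting targets; a jointly trained latent scalar per sample resolves the conflict. All specifics (model form, learning rate) are illustrative assumptions.

```python
import numpy as np

# Two training samples share the same input x but have different targets:
# a multiple-valued mapping that no single-valued function of x can fit.
x = np.array([1.0, 1.0])          # identical inputs
y = np.array([-1.0, 1.0])         # conflicting targets
w = 0.0                           # shared model parameter
z = np.array([0.1, -0.1])         # per-sample latent descriptors (initialized)
lr = 0.1
for _ in range(500):
    pred = w * x + z              # toy model: f(x, z) = w*x + z
    grad = pred - y               # gradient of 0.5*(pred - y)^2 w.r.t. pred
    w -= lr * np.sum(grad * x)    # update the shared parameter
    z -= lr * grad                # update the per-sample latent descriptors
```

After training, the latent descriptors absorb the ambiguity (here converging toward -1 and +1), so the combined input (x, z) maps single-valuedly onto the targets.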
  • Publication number: 20240144002
    Abstract: A system that includes a machine learning model that is configured to receive an input layout file that includes a portion of an integrated circuit layout that has a previously identified wafer hotspot, match the previously identified wafer hotspot to one of a plurality of categories of wafer hotspot types, and output a proposed layout modification associated with the matching category of wafer hotspot types.
    Type: Application
    Filed: July 19, 2023
    Publication date: May 2, 2024
    Applicant: SanDisk Technologies LLC
    Inventors: Chen-Che Huang, Lauren Matsumoto, Chunming Wang
  • Publication number: 20240144004
    Abstract: Embodiments generate machine learning predictions to discover target device energy usage. One or more trained machine learning models configured to discover target device energy usage from source location energy usage can be stored. Multiple instances of source location energy usage over a period of time can be received for a given source location. Using the trained machine learning model, multiple discovery predictions for the received instances of source location energy usage can be generated, the discovery predictions comprising a prediction about a presence of target device energy usage within the instances of source location energy usage. And based on the multiple discovery predictions, an overall prediction about a presence of target device energy usage within the given source location's energy usage over the period of time can be generated.
    Type: Application
    Filed: December 20, 2023
    Publication date: May 2, 2024
    Inventors: Selim MIMAROGLU, Oren BENJAMIN, Arhan GUNEL, Anqi SHEN, Ziran FENG
  • Publication number: 20240144005
    Abstract: A method of interpreting tabular data includes receiving, at a deep tabular data learning network (TabNet) executing on data processing hardware, a set of features. For each of multiple sequential processing steps, the method also includes: selecting, using a sparse mask of the TabNet, a subset of relevant features of the set of features; processing using a feature transformer of the TabNet, the subset of relevant features to generate a decision step output and information for a next processing step in the multiple sequential processing steps; and providing the information to the next processing step. The method also includes determining a final decision output by aggregating the decision step outputs generated for the multiple sequential processing steps.
    Type: Application
    Filed: January 4, 2024
    Publication date: May 2, 2024
    Applicant: Google LLC
    Inventors: Sercan Omer Arik, Tomas Jon Pfister
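The sequential select-transform-aggregate loop described above can be sketched in a few lines. Note the hedges: a hard top-k mask stands in for TabNet's learned sparse mask (sparsemax), and the "feature transformer" is reduced to a toy ReLU-of-sum; names and the prior-decay constant are assumptions.

```python
import numpy as np

def tabnet_like_forward(features, step_weights, k=2):
    """Sketch of TabNet-style sequential decision steps: each step selects a
    sparse subset of features, transforms it into a decision-step output, and
    the final output aggregates the per-step outputs."""
    prior = np.ones_like(features)            # discourages reusing features
    outputs = []
    for w in step_weights:                    # one importance vector per step
        scores = w * prior                    # attentive importance scores
        mask = np.zeros_like(features)
        mask[np.argsort(scores)[-k:]] = 1.0   # hard top-k stand-in for sparsemax
        selected = features * mask            # sparse feature selection
        outputs.append(np.maximum(selected @ np.ones(len(features)), 0.0))
        prior = prior * (1.0 - 0.5 * mask)    # penalize already-used features
    return sum(outputs)                       # aggregate decision-step outputs
```

The decaying prior encourages later steps to attend to features earlier steps have not yet used, mirroring the "information for the next processing step" in the abstract.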
  • Publication number: 20240144006
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating an output sequence from an input sequence. In one aspect, one of the systems includes an encoder neural network configured to receive the input sequence and generate encoded representations of the network inputs, the encoder neural network comprising a sequence of one or more encoder subnetworks, each encoder subnetwork configured to receive a respective encoder subnetwork input for each of the input positions and to generate a respective subnetwork output for each of the input positions, and each encoder subnetwork comprising: an encoder self-attention sub-layer that is configured to receive the subnetwork input for each of the input positions and, for each particular input position in the input order: apply an attention mechanism over the encoder subnetwork inputs using one or more queries derived from the encoder subnetwork input at the particular input position.
    Type: Application
    Filed: January 8, 2024
    Publication date: May 2, 2024
    Inventors: Noam M. Shazeer, Aidan Nicholas Gomez, Lukasz Mieczyslaw Kaiser, Jakob D. Uszkoreit, Llion Owen Jones, Niki J. Parmar, Illia Polosukhin, Ashish Teku Vaswani
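The encoder self-attention sub-layer described above, in which each input position applies an attention mechanism over all positions using queries derived from its own representation, is scaled dot-product attention. A minimal single-head numpy sketch (weight matrices and names are illustrative):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Minimal sketch of an encoder self-attention sub-layer: queries, keys,
    and values are derived from the same input sequence X (positions x dims)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])        # scaled dot products
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over positions
    return weights @ V                             # weighted sum of values
```

Each output row is a convex combination of the value vectors, with weights determined by how strongly that position's query matches every position's key.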
  • Publication number: 20240144007
    Abstract: A method of contrastive learning comprises: determining, based on a model construction criterion, a first encoder for a first modality and a second encoder for a second modality; constructing a first contrastive learning model, the first contrastive learning model comprising the first encoder and a third encoder for the second modality, and a model capacity of the third encoder being greater than a model capacity of the second encoder; performing pre-training of the first contrastive learning model based on a first training dataset for the first modality and the second modality; and providing the pre-trained first encoder in the pre-trained first contrastive learning model for a downstream task. Because only the model capacity of one encoder is increased in the pre-training stage, model performance may be improved without increasing model training overhead during downstream task fine-tuning and model running overhead during model application.
    Type: Application
    Filed: September 22, 2023
    Publication date: May 2, 2024
    Inventors: Hao Wu, Boyan Zhou, Quan Cui, Cheng Yang
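The pre-training objective underlying the two-modality contrastive model above is typically an InfoNCE-style loss; a sketch follows. The patented asymmetry (a larger third encoder for the second modality, used only during pre-training) would change which network produces the second set of embeddings, not the loss itself. Function and variable names are assumptions.

```python
import numpy as np

def info_nce(emb_a, emb_b, temperature=0.1):
    """Sketch of a two-modality contrastive loss: matched pairs (row i of
    each modality) are pulled together, mismatched pairs pushed apart."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature                 # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))            # matched pairs on diagonal
```

The loss is near zero when matched embeddings are far more similar than mismatched ones, and large when the pairing is scrambled.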
  • Publication number: 20240144008
    Abstract: A learning apparatus comprises one or more hardware processors, and one or more memories storing one or more programs configured to be executed by the one or more hardware processors, the one or more programs including instructions for, performing learning of a second learning model having an arrangement that is at least partially the same as an arrangement of a first learning model by distillation learning using an output of the first learning model, and dynamically changing, during the learning of the second learning model, at least one of a parameter of the first learning model, the arrangement of the first learning model, a parameter of the second learning model, and the arrangement of the second learning model.
    Type: Application
    Filed: October 13, 2023
    Publication date: May 2, 2024
    Inventor: Koichi TANJI
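The distillation-learning objective referenced above, training the second model to match the first model's output, is commonly a temperature-scaled cross-entropy between softened output distributions. A hedged sketch (the temperature value and names are assumptions; the patent's dynamic changes to parameters and arrangement are not modeled here):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax (higher T gives a softer distribution)."""
    e = np.exp((z - z.max()) / T)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Sketch of the distillation objective: the student is trained to match
    the teacher's softened output distribution."""
    p_t = softmax(teacher_logits, T)   # teacher targets (no gradient in practice)
    p_s = softmax(student_logits, T)
    return -np.sum(p_t * np.log(p_s + 1e-12)) * T * T  # scaled cross-entropy
```

The loss is minimized exactly when the student reproduces the teacher's soft distribution; the T² factor keeps gradient magnitudes comparable across temperatures.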
  • Publication number: 20240144009
    Abstract: A terminal apparatus comprising: capturing data; transmitting information indicative of computational resources available at the apparatus for neural network training; receiving an encoder, defining one or more layers of artificial neurons, to be used as an input portion of a neural network; receiving a predictor, defining one or more layers of artificial neurons, to be used as an output portion of the neural network; training the predictor, not the encoder, using at least some of the captured data; and performing inference on captured data using the neural network formed from the encoder and the predictor.
    Type: Application
    Filed: October 18, 2023
    Publication date: May 2, 2024
    Inventors: Fan MO, Soumyajit CHATTERJEE, Mohammad MALEKZADEH, Akhil MATHUR
  • Publication number: 20240144010
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices for detecting objects are provided. For example, the disclosed technology can obtain a representation of sensor data associated with an environment surrounding a vehicle. Further, the sensor data can include sensor data points. A point classification and point property estimation can be determined for each of the sensor data points and a portion of the sensor data points can be clustered into an object instance based on the point classification and point property estimation for each of the sensor data points. A collection of point classifications and point property estimations can be determined for the portion of the sensor data points clustered into the object instance. Furthermore, object instance property estimations for the object instance can be determined based on the collection of point classifications and point property estimations for the portion of the sensor data points clustered into the object instance.
    Type: Application
    Filed: October 30, 2023
    Publication date: May 2, 2024
    Inventors: Eric Randall Kee, Carlos Vallespi-Gonzalez, Gregory P. Meyer, Ankit Laddha
  • Publication number: 20240144011
    Abstract: A VCN process may receive information associated with a value chain network. A VCN process may provide the information to a set of Artificial Intelligence (AI)-based learning models, wherein at least one member of the set of AI-based learning models is trained to classify at least one of: an operating state, a fault condition, an operating flow, or a behavior of the value chain network and at least one member of the set of AI-based learning models is trained on the training data set to determine, upon receiving the classification of the at least one of: the operating state, the fault condition, the operating flow, or the behavior, a task to be completed for the value chain network. A VCN process may provide a computer code instruction set to a machine to execute the task to facilitate an improvement in the operation of the value chain network.
    Type: Application
    Filed: November 30, 2023
    Publication date: May 2, 2024
    Inventors: Charles H. Cella, Andrew Cardno, Jenna Parenti, Andrew S. Locke, Brad Kell, Teymour S. El-Tahry, Leon Fortin, JR., Andrew Bunin, Kunal Sharma, Taylor Charon, Hristo Malchev, Eric P. Vetter, David Stein, Benjamin D. Goodman
  • Publication number: 20240144012
    Abstract: Provided are a method and apparatus for compressing a neural network model by using hardware characteristics. The method includes: obtaining a target number of output channels of a target layer to be compressed, from among layers included in the neural network model that is executed by hardware; adjusting the target number of output channels to meet a certain purpose, based on at least one of a first latency characteristic for output channels of the target layer and a second latency characteristic for input channels of a next layer, according to the hardware characteristics of the hardware; and compressing the neural network model such that the number of output channels of the target layer is equal to the adjusted target number of output channels.
    Type: Application
    Filed: November 18, 2022
    Publication date: May 2, 2024
    Applicant: Nota, Inc.
    Inventors: Shin Kook CHOI, Jun Kyeong CHOI
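The hardware-aware adjustment above exploits the fact that inference latency is often a step function of the channel count (jumping at multiples of the accelerator's tile or SIMD width). A hypothetical sketch of snapping a target channel count to the latency-optimal value; the table-driven approach and names are assumptions, not the patent's method:

```python
import numpy as np

def adjust_channels(target, latency_table):
    """Sketch: keep adding output channels while the measured latency bucket
    does not increase, so the compressed layer gets the most capacity that
    the hardware serves at the same cost.
    latency_table: measured latency indexed by channel count."""
    bucket = latency_table[target]
    c = target
    while c + 1 < len(latency_table) and latency_table[c + 1] <= bucket:
        c += 1                    # more channels at no extra latency cost
    return c
```

For example, with latency jumping every 4 channels, a target of 5 channels snaps up to 7, since channels 5 through 7 all cost the same.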
  • Publication number: 20240144015
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a reinforcement learning system. The method includes: training an action selection policy neural network, and during the training of the action selection neural network, training one or more auxiliary control neural networks and a reward prediction neural network. Each of the auxiliary control neural networks is configured to receive a respective intermediate output generated by the action selection policy neural network and generate a policy output for a corresponding auxiliary control task. The reward prediction neural network is configured to receive one or more intermediate outputs generated by the action selection policy neural network and generate a corresponding predicted reward.
    Type: Application
    Filed: November 3, 2023
    Publication date: May 2, 2024
    Inventors: Volodymyr Mnih, Wojciech Czarnecki, Maxwell Elliot Jaderberg, Tom Schaul, David Silver, Koray Kavukcuoglu
  • Publication number: 20240144016
    Abstract: Provided herein are exemplary systems and methods for using artificial intelligence for vehicle performance and tracking. The system and method comprise a plurality of sensory input devices, such as ultra-wideband (UWB) sensors, positioned throughout the course, the sensors detecting vehicle movement and relaying data regarding vehicle movement to an onboard user device. The onboard user device may push such data to a central processing hub, which may then push such data to a cloud storage network. Additional users may access the data by way of the central processing hub or cloud storage network. Further embodiments may include a vehicular electronic control unit relaying internal vehicle data to the system, and the use of large language models and/or neural networks.
    Type: Application
    Filed: October 30, 2023
    Publication date: May 2, 2024
    Inventors: Taso Zografos, Dakota Sinclair, Jia Yi Sinclair
  • Publication number: 20240144017
    Abstract: Certain aspects of the present disclosure provide techniques for efficient quantized learning. A tensor is received at a layer of a neural network, and a current tensor is generated at a first bitwidth based on the received tensor. One or more quantization parameter values are determined based on the current tensor. The current tensor is quantized to a lower bitwidth based on one or more quantization parameter values determined based on a previous tensor generated during the training of a neural network.
    Type: Application
    Filed: April 18, 2022
    Publication date: May 2, 2024
    Inventors: Marios FOURNARAKIS, Markus NAGEL
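The delayed-parameter idea above, quantizing the current tensor with parameters determined from a previous tensor, avoids a second pass over the current tensor's statistics. A hedged sketch with assumed symmetric uniform quantization; the scale rule and names are illustrative, not the patented scheme:

```python
import numpy as np

def quantize(x, scale, bits=8):
    """Uniform symmetric quantization to the given bitwidth."""
    qmax = 2 ** (bits - 1) - 1
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale              # dequantized (fake-quantized) values

# The scale used for the current tensor comes from the *previous* step's
# tensor statistics, so quantization needs only one pass over the data.
prev_scale = None
for step in range(3):
    tensor = np.random.default_rng(step).normal(size=1024)
    if prev_scale is not None:
        q_tensor = quantize(tensor, prev_scale)  # use previous step's scale
    scale_now = np.abs(tensor).max() / 127.0     # stats from current tensor
    prev_scale = scale_now                       # becomes next step's scale
```

This works because tensor statistics typically drift slowly across training iterations, so last step's scale remains a good estimate.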
  • Publication number: 20240144018
    Abstract: Methods and systems that provide one or more recommended configurations to planners using large data sets in an efficient manner. These methods and systems optimize one or more objectives using a genetic algorithm that can deliver parameter recommendations in an efficient and timely manner. The methods and systems disclosed herein are flexible enough to satisfy diverse use cases.
    Type: Application
    Filed: January 5, 2024
    Publication date: May 2, 2024
    Inventors: Sebastien OUELLET, Phillip WILLIAMS, Nathaniel STANLEY, Jeffery DOWNING, Liam HEBERT
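The genetic-algorithm loop behind recommendations like those above follows a standard select-crossover-mutate cycle. A minimal sketch under assumed details (population size, blend crossover, Gaussian mutation, and all names are illustrative choices, not the patented configuration):

```python
import numpy as np

def genetic_optimize(objective, bounds, pop=20, gens=40, seed=0):
    """Sketch of a genetic algorithm: evolve a population of parameter
    configurations toward lower objective values."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(pop, len(lo)))          # initial population
    for _ in range(gens):
        fit = np.array([objective(x) for x in X])
        parents = X[np.argsort(fit)[: pop // 2]]          # selection: best half
        pairs = rng.integers(0, len(parents), (pop, 2))
        alpha = rng.random((pop, len(lo)))
        X = (alpha * parents[pairs[:, 0]]                 # blend crossover
             + (1 - alpha) * parents[pairs[:, 1]])
        X += rng.normal(0.0, 0.1, X.shape)                # Gaussian mutation
        X = np.clip(X, lo, hi)                            # stay within bounds
    fit = np.array([objective(x) for x in X])
    return X[np.argmin(fit)]                              # best configuration

# Usage: recommend 2 parameters minimizing a toy objective with optimum at 3.
best = genetic_optimize(lambda x: np.sum((x - 3.0) ** 2),
                        (np.zeros(2), np.full(2, 10.0)))
```

Because each generation only needs objective values (no gradients), the same loop applies to black-box planning objectives over large data sets.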
  • Publication number: 20240144019
    Abstract: A training system includes: a model; and a training module configured to: construct a first pair of images of at least a first portion of a first human captured at different times; construct a second pair of images of at least a second portion of a second human captured at the same time from different points of view; and input the first and second pairs of images to the model; the model configured to generate first and second reconstructed images of the at least the first portion of the first human based on the first and second pairs, respectively; and the training module configured to selectively adjust one or more parameters of the model based on the first reconstructed image and the second reconstructed image.
    Type: Application
    Filed: August 29, 2023
    Publication date: May 2, 2024
    Applicants: NAVER CORPORATION, NAVER LABS CORPORATION
    Inventors: Philippe WEINZAEPFEL, Vincent Leroy, Romain Brégier, Yohann Cabon, Thomas Lucas, Leonid Antsfield, Boris Chidlovskii, Gabriela Csurka Khedari, Jérôme Revaud, Matthieu Armando, Fabien Baradel, Salma Galaaoui, Gregory Rogez