Patent Applications Published on April 9, 2020
-
Publication number: 20200110985
Abstract: An artificial neural network circuit includes a crossbar circuit, and a processing circuit. The crossbar circuit transmits a signal between layered neurons of an artificial neural network. The crossbar circuit includes input bars, output bars arranged intersecting the input bars, and memristors. The processing circuit calculates a sum of signals flowing into each of the output bars. The processing circuit calculates, as the sum of the signals, a sum of signals flowing into a plurality of separate output bars, and conductance values of the corresponding memristors are set so as to cooperate to give a desired weight to the signal to be transmitted.
Type: Application
Filed: October 1, 2019
Publication date: April 9, 2020
Inventors: Irina KATAEVA, Shigeki OTSUKA
-
Publication number: 20200110986
Abstract: Disclosed herein are apparatus, method, and computer-readable storage device embodiments for implementing deconvolution via a set of convolutions. An embodiment includes a convolution processor that includes hardware implementing logic to perform at least one algorithm comprising a convolution algorithm. The at least one convolution processor may be further configured to perform operations including performing a first convolution and outputting a first deconvolution segment as a result of the performing the first convolution. The at least one convolution processor may be further configured to perform a second convolution and output a second deconvolution segment as a result of the performing the second convolution.
Type: Application
Filed: October 3, 2019
Publication date: April 9, 2020
Applicant: Synopsys, Inc.
Inventors: Tom MICHIELS, Thomas Julian PENNELLO
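The abstract does not disclose the exact decomposition, but the general idea of computing a deconvolution (transposed convolution) as several ordinary convolutions whose outputs are interleaved as "deconvolution segments" can be sketched as follows. The 1-D case, stride of 2, and NumPy helpers below are illustrative assumptions, not the claimed hardware implementation.

```python
import numpy as np

def transposed_conv1d_direct(x, w, stride=2):
    """Reference: scatter-add definition of a 1-D transposed convolution."""
    n, k = len(x), len(w)
    y = np.zeros(stride * (n - 1) + k)
    for i, xi in enumerate(x):
        y[i * stride:i * stride + k] += xi * w
    return y

def transposed_conv1d_as_convs(x, w, stride=2):
    """Same result computed as `stride` ordinary convolutions whose
    outputs are interleaved ("deconvolution segments")."""
    n, k = len(x), len(w)
    y = np.zeros(stride * (n - 1) + k)
    for r in range(stride):
        sub_kernel = w[r::stride]            # phase-r taps of the kernel
        segment = np.convolve(x, sub_kernel)  # one ordinary convolution
        y[r::stride] = segment                # interleave this segment
    return y

x = np.random.randn(8)
w = np.random.randn(5)
assert np.allclose(transposed_conv1d_direct(x, w),
                   transposed_conv1d_as_convs(x, w))
```

Each phase of the output depends only on one sub-sampled copy of the kernel, which is why the two ordinary convolutions reproduce the transposed convolution exactly once their outputs are interleaved.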
-
Publication number: 20200110987
Abstract: A network switch matrix that is reconfigurable, for example with time, coupling switch charge circuits representing multiply and add circuits (MACs) and neurons (MACs with activations) capable of accepting and outputting pulses proportional to charge through crossbars within said network, said crossbars controlled by local controllers and higher-level controllers to set up said crossbar communications.
Type: Application
Filed: October 9, 2019
Publication date: April 9, 2020
Applicant: AISTORM INC.
Inventors: David SCHIE, Peter DRABOS, Andreas SIBRAI, Erik SIBRAI
-
Publication number: 20200110988
Abstract: A computing device, comprising: a computing module, comprising one or more computing units; and a control module, comprising a computing control unit, and used for controlling shutdown of the computing unit of the computing module according to a determining condition. Also provided is a computing method. The computing device and method have the advantages of low power consumption and high flexibility, and can be combined with the upgrading mode of software, thereby further increasing the computing speed, reducing the computing amount, and reducing the computing power consumption of an accelerator.
Type: Application
Filed: November 28, 2019
Publication date: April 9, 2020
Inventors: Zai WANG, Shengyuan ZHOU, Zidong DU, Tianshi CHEN
-
Publication number: 20200110989
Abstract: A signal processing method and apparatus, where the apparatus includes an input interface configured to receive an input signal matrix and a weight matrix, a processor configured to interleave the input signal matrix to obtain an interleaved signal matrix, partition the interleaved signal matrix, interleave the weight matrix to obtain an interleaved weight matrix, process the interleaved weight matrix to obtain a plurality of sparsified partitioned weight matrices, perform matrix multiplication on the sparsified partitioned weight matrices and a plurality of partitioned signal matrices to obtain a plurality of matrix multiplication results, and an output interface configured to output a signal processing result.
Type: Application
Filed: December 6, 2019
Publication date: April 9, 2020
Inventor: Ruosheng Xu
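A minimal NumPy sketch of the interleave / partition / sparsify / multiply flow described above is shown below. The permutation used for interleaving, the block size, and the magnitude-based sparsification threshold are illustrative assumptions; only the overall flow follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(6, 8))        # weight matrix
X = rng.normal(size=(8, 4))        # input signal matrix

# 1) interleave: permute the shared dimension of both matrices consistently
perm = rng.permutation(W.shape[1])
W_i, X_i = W[:, perm], X[perm, :]

# 2) partition the interleaved matrices into blocks along the shared dimension
block = 2
W_blocks = [W_i[:, j:j + block] for j in range(0, W_i.shape[1], block)]
X_blocks = [X_i[j:j + block, :] for j in range(0, X_i.shape[0], block)]

# 3) sparsify each partitioned weight block (zero out the small entries)
def sparsify(Wb, keep=0.5):
    thresh = np.quantile(np.abs(Wb), 1.0 - keep)
    return np.where(np.abs(Wb) >= thresh, Wb, 0.0)

W_sparse = [sparsify(Wb) for Wb in W_blocks]

# 4) blocked matrix multiplication: sum of the partial block products
Y_exact = sum(Wb @ Xb for Wb, Xb in zip(W_blocks, X_blocks))
Y_sparse = sum(Wb @ Xb for Wb, Xb in zip(W_sparse, X_blocks))

assert np.allclose(Y_exact, W @ X)      # interleave + partition is exact
print(np.abs(Y_sparse - W @ X).mean())  # sparsification trades accuracy for work
```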
-
Publication number: 20200110990
Abstract: A novel connection between neurons of a neural network is provided. A perceptron included in the neural network includes a plurality of neurons; the neuron includes a synapse circuit and an activation function circuit; and the synapse circuit includes a plurality of memory cells. A bit line selected by address information for selecting a memory cell is shared by a plurality of perceptrons. The memory cell is supplied with a weight coefficient of an analog signal, and the synapse circuit is supplied with an input signal. The memory cell multiplies the input signal by the weight coefficient and converts the multiplied result into a first current. The synapse circuit generates a second current by adding a plurality of first currents and converts the second current into a first potential.
Type: Application
Filed: April 2, 2018
Publication date: April 9, 2020
Inventors: Shintaro HARADA, Hiroki INOUE, Takeshi AOKI
-
Publication number: 20200110991
Abstract: A method for adjusting output level of a neuron in a multilayer neural network is provided. The multilayer neural network includes a memristor and an analog processing circuit, causing transmission of the signals between the neurons and the signal processing in the neurons to be performed in an analog region. The method includes an adjustment step that adjusts an output level of the neurons of each of the layers, causing the output value to become lower than a write threshold voltage of the memristor and to fall within a maximum output range set for the analog processing circuit executing the generation of the output value in accordance with the activation function when each of the output values of the neurons of each of the layers becomes highest.
Type: Application
Filed: December 11, 2019
Publication date: April 9, 2020
Inventors: Irina KATAEVA, Shigeki OTSUKA
-
Publication number: 20200110992
Abstract: A system includes a first unit configured to generate a plurality of modulator control signals, and a processor unit. The processor unit includes: a light source or port configured to provide a plurality of light outputs, and a first set of optical modulators coupled to the light source or port and the first unit. The optical modulators in the first set are configured to generate an optical input vector by modulating the plurality of light outputs provided by the light source or port based on digital input values corresponding to a first set of modulator control signals in the plurality of modulator control signals, the optical input vector comprising a plurality of optical signals. The processor unit also includes a matrix multiplication unit that includes a second set of optical modulators.
Type: Application
Filed: December 4, 2019
Publication date: April 9, 2020
Inventors: Arash Hosseinzadeh, Yelong Xu, Yanfei Bai, Huaiyu Meng, Ronald Gagnon, Cheng-Kuan Lu, Jonathan Terry, Jingdong Deng, Maurice Steinman, Yichen Shen
-
Publication number: 20200110993
Abstract: Authenticity of Artificial Intelligence (AI) results may be verified by creating, for an AI system, from a plurality of original inputs to form a plurality of original inference results, a plurality of original signatures of representative elements of an internal state of the AI system constructed from each individual original inference result of the plurality of original inference results. During deployment of the AI system, a matching of a plurality of deployment time inference results with a plurality of deployment time signatures, to the plurality of original signatures and the plurality of original inference results, may be verified.
Type: Application
Filed: October 3, 2018
Publication date: April 9, 2020
Inventors: Frank Liu, Bishop Brock, Thomas S. Hubregtsen
-
Publication number: 20200110994
Abstract: Methods and systems are provided for training a neural network with augmented data. A dataset comprising a plurality of classes is obtained for training a neural network. Prior to initiation of training, the dataset may be augmented by performing affine transformations of the data in the dataset, wherein the amount of augmentation is determined by a data augmentation variable. The neural network is trained with the augmented dataset. A training loss and a difference of class accuracy for each class is determined. The data augmentation variable is updated based on the total loss and class accuracy for each class. The dataset is augmented by performing affine transformations of the data in the dataset according to the updated data augmentation variable, and the neural network is trained with the augmented dataset.
Type: Application
Filed: October 4, 2018
Publication date: April 9, 2020
Inventors: Takuya Goto, Masaharu Sakamoto, Hiroki Nakano
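A minimal sketch of the feedback loop around the data augmentation variable is below. The update rule, thresholds, and the stand-in affine transforms and training results are assumptions for illustration; the abstract only states that the variable is updated from the loss and per-class accuracies.

```python
import numpy as np

rng = np.random.default_rng(0)

def affine_augment(X, alpha, rng):
    """Generate `alpha` affinely transformed copies of the data (random
    per-feature scale and shift stand in for general affine transforms)."""
    copies = []
    for _ in range(int(round(alpha))):
        scale = 1.0 + rng.uniform(-0.1, 0.1, size=(1, X.shape[1]))
        shift = rng.uniform(-0.05, 0.05, size=(1, X.shape[1]))
        copies.append(X * scale + shift)
    return np.vstack([X] + copies) if copies else X

def update_augmentation_variable(alpha, total_loss, class_acc,
                                 loss_target=0.5, acc_spread_target=0.1):
    """Heuristic update: augment more when the loss is high or the per-class
    accuracies are far apart (a hypothetical rule, not the patent's formula)."""
    acc_spread = max(class_acc.values()) - min(class_acc.values())
    if total_loss > loss_target or acc_spread > acc_spread_target:
        return alpha + 1
    return max(alpha - 1, 1)

# toy usage with stubbed training results
alpha = 1
X = rng.normal(size=(32, 4))
for epoch in range(3):
    X_aug = affine_augment(X, alpha, rng)
    total_loss = 1.0 / (epoch + 1)               # stand-in for the training loss
    class_acc = {0: 0.7 + 0.05 * epoch, 1: 0.9}  # stand-in per-class accuracies
    alpha = update_augmentation_variable(alpha, total_loss, class_acc)
    print(epoch, len(X_aug), alpha)
```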
-
Publication number: 20200110995
Abstract: A method of reducing kernel computations; the method comprising ordering a plurality of kernel channels. A first of the ordered kernel channels is then convolved with input data to produce a convolution output, and it is determined whether to convolve one or more subsequent kernel channels of the ordered kernel channels. Determining whether to convolve subsequent kernel channels comprises considering a potential contribution of at least one of the one or more subsequent kernel channels in combination with the convolution output.
Type: Application
Filed: October 4, 2018
Publication date: April 9, 2020
Inventors: Daren CROXFORD, Jayavarapu SRINIVASA RAO
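One way to read this is as an early-exit rule: process the kernel channels in a chosen order and stop once the channels not yet processed can no longer change the result, for example when a following ReLU will output zero regardless. The sketch below assumes that reading, a largest-kernel-first ordering, and a per-position dot product in place of a full convolution; none of these specifics come from the abstract.

```python
import numpy as np

def relu_output_with_early_exit(x_channels, k_channels):
    """x_channels, k_channels: (C, L) arrays; each channel's contribution is a
    dot product (one spatial position of a convolution). Channels are ordered
    largest-magnitude-kernel first, and remaining channels are skipped once
    they cannot flip the sign of the pre-activation feeding a ReLU."""
    order = np.argsort(-np.abs(k_channels).sum(axis=1))  # big kernels first
    acc = 0.0
    x_max = np.abs(x_channels).max()
    for idx, c in enumerate(order):
        acc += float(x_channels[c] @ k_channels[c])
        # upper bound on what the not-yet-computed channels could still add
        remaining = np.abs(k_channels[order[idx + 1:]]).sum() * x_max
        if acc + remaining < 0.0:      # ReLU outputs 0 no matter what follows
            return 0.0, idx + 1        # value, channels actually computed
    return max(acc, 0.0), len(order)

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, size=(64, 9))    # non-negative, like post-ReLU inputs
k = rng.normal(size=(64, 9)) - 1.0         # biased negative so the exit triggers
print(relu_output_with_early_exit(x, k))
```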
-
Publication number: 20200110996
Abstract: A method, computer program product, and a system where a processor(s), obtains content from a meeting hosting system during a pre-defined interval. The processor(s) parses the textual content to identify potential keywords. The processor(s) iteratively cognitively analyzes the potential keywords to determine which potential keywords comprise seed keywords, where the seed keywords meet a maturity threshold for inclusion in a data structure, where the iterative cognitive analysis of each potential keyword of the potential keywords is repeated a pre-defined number of times, and where the iteratively cognitively analyzing includes generating and updating the data structure. The processor(s) outputs, based on completing the pre-defined number of times, the data structure comprising the seed keywords.
Type: Application
Filed: October 5, 2018
Publication date: April 9, 2020
Inventors: Gopal Bhageria, Siddharth Saraya, Rajesh Kumar Saxena, Anindya Ghosh
-
Publication number: 20200110997
Abstract: An artificial neural network with a context pathway and a method of identifying a classification of information using an artificial neural network with a context pathway. An artificial neural network comprises up-stream layers and down-stream layers. An output of the up-stream layers is provided as input to the down-stream layers. A first input to the artificial neural network to the up-stream layers is configured to receive input data. A second input to the artificial neural network to the down-stream layers is configured to receive context data. The context data identifies a characteristic of information in the input data. The artificial neural network is configured to identify a classification of the information in the input data at an output of the down-stream layers using the context data.
Type: Application
Filed: October 5, 2018
Publication date: April 9, 2020
Inventors: William Mark Severa, James Bradley Aimone
-
Publication number: 20200110998
Abstract: A server accesses a plurality of users' sessions with the web server. Each user session indicating a page flow of a corresponding user session for a plurality of web pages provided by the web server. The server generates a learning model using a neural network based on the plurality of users' sessions. The learning model is configured to predict a next user activity based on a current page flow of a current user session. The next user activity indicating one of continuing the current user session by visiting another web page provided by the web server and ending the current user session. The server dynamically adjusts a content of a web page based on the predicted next user activity.
Type: Application
Filed: January 21, 2019
Publication date: April 9, 2020
Inventors: Keyu Nie, Yang Zhou, Zezhong Zhang, Tao Yuan, Qian Wang, Giorgio Ballardin, Liren Sun
-
Publication number: 20200110999
Abstract: A thermodynamic RAM technology stack, two or more memristors or pairs of memristors comprising AHaH (Anti-Hebbian and Hebbian) computing components, and one or more AHaH nodes composed of such memristor pairs that form at least a portion of the thermodynamic RAM technology stack. The levels of the thermodynamic-RAM technology stack include the memristor, a Knowm synapse, an AHaH Node, a kT-RAM, kT-RAM instruction set, a sparse spike encoding, a kT-RAM emulator, and a SENSE Server.
Type: Application
Filed: May 3, 2019
Publication date: April 9, 2020
Inventors: Alex Nugent, Timothy Molter
-
Publication number: 20200111000
Abstract: Systems and methods for training a neural network or an ensemble of neural networks are described. A hyper-parameter that controls the variance of the ensemble predictors is used to address overfitting. For larger values of the hyper-parameter, the predictions from the ensemble have more variance, so there is less overfitting. This technique can be applied to ensemble learning with various cost functions, structures and parameter sharing. A cost function is provided and a set of techniques for learning are described.
Type: Application
Filed: August 15, 2019
Publication date: April 9, 2020
Inventors: Hui Yuan XIONG, Andrew DELONG, Brendan FREY
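The abstract does not give the cost function, so the sketch below uses a generic negative-correlation-learning style objective only to illustrate how a single hyper-parameter can trade a data-fit term against the spread of the ensemble members: larger values of the hyper-parameter reward disagreement, producing a higher-variance ensemble.

```python
import numpy as np

def ensemble_cost(y_true, member_preds, beta):
    """member_preds: (M, N) predictions from M ensemble members. Squared-error
    data term minus `beta` times the average spread across members; `beta` is
    the variance-controlling hyper-parameter (an illustrative choice, not the
    patent's cost function)."""
    data_term = np.mean((member_preds - y_true) ** 2)       # avg member loss
    variance_term = np.mean(np.var(member_preds, axis=0))   # spread across members
    return data_term - beta * variance_term

rng = np.random.default_rng(2)
y = rng.normal(size=50)
preds = y + 0.3 * rng.normal(size=(5, 50))                  # 5 ensemble members
for beta in (0.0, 0.5, 1.0):
    print(beta, round(ensemble_cost(y, preds, beta), 4))
```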
-
Publication number: 20200111001
Abstract: Aspects of the present disclosure relate to a computer-implemented method of processing a data portion. The method comprises processing a first data portion in a convolutional neural network to generate a first input to an activation function in the convolutional neural network; providing a first output by applying the activation function to the first input; and storing an indicator, representative of the first input to the activation function, for the first data portion. The method further comprises determining whether to provide a second output by applying the activation function to a second input, generated from a second data portion, based at least in part on an evaluation of the indicator for the first data portion.
Type: Application
Filed: September 3, 2019
Publication date: April 9, 2020
Inventors: Daren CROXFORD, Sharjeel SAEED
-
Publication number: 20200111003
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for computing a layer output for a convolutional neural network layer, the method comprising: receiving the layer input, the layer input comprising a plurality of activation inputs, the plurality of activation inputs represented as a multi-dimensional matrix comprising a plurality of depth levels, each depth level being a respective matrix of distinct activation inputs from the plurality of activation inputs; sending each respective kernel matrix structure to a distinct cell along a first dimension of the systolic array; for each depth level, sending the respective matrix of distinct activation inputs to a distinct cell along a second dimension of the systolic array; causing the systolic array to generate an accumulated output from the respective matrices sent to the cells; and generating the layer output from the accumulated output.
Type: Application
Filed: October 4, 2019
Publication date: April 9, 2020
Inventors: Jonathan Ross, Andrew Everett Phelps
-
Publication number: 20200111004
Abstract: Methods and systems are provided for deblurring images. A neural network is trained where the training includes selecting a central training image from a sequence of blurred images. An earlier training image and a later training image are selected based on the earlier training image preceding the central training image in the sequence and the later training image following the central training image in the sequence and based on proximity of the images to the central training image in the sequence. A training output image is generated by the neural network from the central training image, the earlier training image, and the later training image. Similarity is evaluated between the training output image and a reference image. The neural network is modified based on the evaluated similarity. The trained neural network is used to generate a deblurred output image from a blurry input image.
Type: Application
Filed: December 4, 2019
Publication date: April 9, 2020
Inventors: OLIVER WANG, JUE WANG, SHUOCHEN SU
-
Publication number: 20200111005
Abstract: In general, the disclosure describes techniques for facilitating trust in neural networks using a trusted neural network system. For example, described herein are multi-headed, trusted neural network systems that can be trained to satisfy one or more constraints as part of the training process, where such constraints may take the form of one or more logical rules and cause the objective function of at least one of the heads of the trusted neural network system to steer, during machine learning model training, the overall objective function for the system toward an optimal solution that satisfies the constraints. The constraints may be non-temporal, temporal, or a combination of non-temporal and temporal. The constraints may be directly compiled to a neural network or otherwise used to train the machine learning model.
Type: Application
Filed: December 19, 2018
Publication date: April 9, 2020
Inventors: Shalini Ghosh, Patrick Lincoln, Ashish Tiwari, Susmit Jha
-
Publication number: 20200111006
Abstract: Provided is a system, method, and computer program product for perforated backpropagation. The method includes segmenting a plurality of nodes into at least two sets including a set of first nodes and a set of second nodes, determining an error term for each node of the set of first nodes, the first set of nodes comprising a first and second subset of nodes, backpropagating the error terms for each node throughout the set of first nodes, determining an error term for each node of the first subset of nodes of the set of first nodes based on direct connections between the first subset of nodes and the second subset of nodes independent of error terms of the set of second nodes, determining an error term for each node of the set of second nodes, and updating weights of each node of the plurality of nodes based on the error term.
Type: Application
Filed: October 4, 2019
Publication date: April 9, 2020
Inventor: Rorry Brenner
-
Publication number: 20200111007
Abstract: Aspects for backpropagation of a convolutional neural network are described herein. The aspects may include a direct memory access unit configured to receive input data from a storage device and a master computation module configured to select one or more portions of the input data based on a predetermined convolution window. Further, the aspects may include one or more slave computation modules respectively configured to convolute one of the one or more portions of the input data with one of one or more previously calculated first data gradients to generate a kernel gradient, wherein the master computation module is further configured to update a prestored convolution kernel based on the kernel gradient.
Type: Application
Filed: December 11, 2019
Publication date: April 9, 2020
Inventors: Yunji CHEN, Tian ZHI, Shaoli LIU, Qi GUO, Tianshi CHEN
-
Publication number: 20200111008
Abstract: A method for training an artificial neural network circuit is provided. The artificial neural network circuit includes a crossbar circuit that has a plurality of input bars, a plurality of output bars crossing the plurality of input bars, and memristors each of which includes a variable conductance element provided at a corresponding one of the intersections of the input bars and the output bars.
Type: Application
Filed: December 11, 2019
Publication date: April 9, 2020
Inventor: Irina KATAEVA
-
Publication number: 20200111009
Abstract: Advanced analytics refers to theories, technologies, tools, and processes that enable an in-depth understanding and discovery of actionable insights in big data, wherein conventional systems and methods may be prone to errors leading to inaccuracies.
Type: Application
Filed: March 12, 2019
Publication date: April 9, 2020
Applicant: Tata Consultancy Services Limited
Inventors: Tanushyam CHATTOPADHYAY, Satanik PANDA, Prateep MISRA, Arpan PAL, Indrajit BHATTACHYARYA, Puneet AGARWAL, Soma BANDYOPADHYAY, Arijit UKIL, Snehasis BANERJEE, Abhisek DAS
-
Publication number: 20200111010
Abstract: A system includes a processor configured to receive data indicating a detected vehicle-event, event location, and vehicle traits. The processor is also configured to add the data to a combined set of events occurring within a predefined proximity of the location. The processor is further configured to determine, from the set, a common vehicle trait, responsive to occurrence of the events in the set exceeding a threshold, for the location and designate the location as likely to cause the event for vehicles having the trait.
Type: Application
Filed: October 4, 2018
Publication date: April 9, 2020
Inventor: Abraham MEZAAEL
-
Publication number: 20200111011
Abstract: An approach is provided for providing predictive classification of sensor error. The approach involves, for example, receiving sensor data from at least one sensor, the sensor data collected at a geographic location. The approach also involves extracting a set of input features from the sensor data, map data representing the geographic location, or combination thereof. The approach further involves processing the set of input features using a machine learning model to calculate a predicted sensor error of a target sensor operating at the geographic location. The machine learning model, for instance, has been trained on ground truth sensor error data to use the set of input features to calculate the predicted sensor error.
Type: Application
Filed: October 4, 2018
Publication date: April 9, 2020
Inventor: Anirudh VISWANATHAN
-
Publication number: 20200111012
Abstract: The present disclosure provides a method for computer simulation of human brain learning knowledge, a reasoning apparatus, and a brain-like artificial intelligence service platform. The method includes: establishing a brain-like knowledge library, including a word library, a class library, a resource library, and an intelligent information management library; processing, by a semantic analyzer, a natural language single sentence to generate class basic elements and semantic properties in a manner of creating classes, and storing the class basic elements and the semantic properties in the class library; generating, by a semantic analyzer, the intelligent application program satisfying an intelligent application requirement based on the intelligent knowledge elements, and storing the intelligent application program in the intelligent information management library.
Type: Application
Filed: October 17, 2019
Publication date: April 9, 2020
Inventor: Jihua WAN
-
Publication number: 20200111013
Abstract: A system and method are disclosed for collecting and analyzing data in a cognitive fabric. The system can include a network of intelligent nodes, each node being configured for sharing or receiving data as a function of analytic processing to be performed at the node.
Type: Application
Filed: October 8, 2019
Publication date: April 9, 2020
Applicant: Booz Allen Hamilton Inc.
Inventors: Ki Hyun LEE, John David PISANO, Saurin Pankaj SHAH, Andre Tai NGUYEN, Yuxun LEI, Christopher BROWN, Michael BECKER
-
Publication number: 20200111014
Abstract: An efficient fact checking system analyzes and determines the factual accuracy of information and/or characterizes the information by comparing the information with source information. The efficient fact checking system automatically monitors information, processes the information, fact checks the information efficiently and/or provides a status of the information.
Type: Application
Filed: December 6, 2019
Publication date: April 9, 2020
Inventor: Lucas J. Myslinski
-
Publication number: 20200111015
Abstract: An enhanced prediction model utilizing product life cycle segmentation and historic prediction data for generating more accurate future failure rates. Input data is segmented into groups based on failure modes of a corresponding life cycle. A prediction model such as Weibull analysis is implemented for each segmented group. Historical prediction data is also segmented into groups. Prediction parameters for each group of segmented historical prediction data are compared with one another and the comparisons are then used to adjust the prediction parameters generated from the segmented groups of input data. Updated parameters for the input data are then output thereby generating a new future failure rate.Type: Application
Filed: October 9, 2018
Publication date: April 9, 2020
Inventors: Lu Liu, Sia Kai Julian Tan, Kevin A. Dore, II, Steven Hurley, JR.
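A small sketch of the per-segment Weibull idea follows. The segment names, the (shape, scale) values, and the 50/50 blend used to adjust the input-data parameters toward the historical ones are all assumptions; the abstract only says the parameters are compared and adjusted.

```python
def weibull_failure_rate(t, shape, scale):
    """Weibull hazard (instantaneous failure rate) at time t."""
    return (shape / scale) * (t / scale) ** (shape - 1)

# (shape, scale) per life-cycle segment fitted to the current input data,
# e.g. early-life, useful-life and wear-out failure modes (assumed values)
input_params = {"early": (0.7, 900.0), "useful": (1.0, 2000.0), "wearout": (3.0, 1500.0)}

# (shape, scale) per segment fitted to the historical prediction data (assumed)
hist_params = {"early": (0.8, 1000.0), "useful": (1.0, 1800.0), "wearout": (2.6, 1400.0)}

# adjust the input-data parameters using the historical segments (simple blend)
alpha = 0.5
adjusted = {seg: tuple(alpha * h + (1 - alpha) * c
                       for h, c in zip(hist_params[seg], input_params[seg]))
            for seg in input_params}

# updated future failure rate per segment, evaluated at t = 500 hours
for seg, (shape, scale) in adjusted.items():
    print(seg, round(weibull_failure_rate(500.0, shape, scale), 6))
```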
-
Publication number: 20200111016
Abstract: Approaches useful to operation of scalable processors with ever larger numbers of logic devices (e.g., qubits) advantageously take advantage of QFPs, for example to implement shift registers, multiplexers (i.e., MUXs), de-multiplexers (i.e., DEMUXs), and permanent magnetic memories (i.e., PMMs), and the like, and/or employ XY or XYZ addressing schemes, and/or employ control lines that extend in a "braided" pattern across an array of devices. Many of these described approaches are particularly suited for implementing input to and/or output from such processors. Superconducting quantum processors comprising superconducting digital-analog converters (DACs) are provided. The DACs may use kinetic inductance to store energy via thin-film superconducting materials and/or series of Josephson junctions, and may use single-loop or multi-loop designs. Particular constructions of energy storage elements are disclosed, including meandering structures.
Type: Application
Filed: November 25, 2019
Publication date: April 9, 2020
Inventor: Kelly T. R. Boothby
-
Publication number: 20200111017
Abstract: Technologies and implementations for training a predictive intelligence associated with electronic discovery (e-discovery) are generally disclosed.
Type: Application
Filed: May 11, 2019
Publication date: April 9, 2020
Inventors: Tarun Chanchalani, Bala Manikandan Gopalakrishnan, Ramya Ramasamy, Pallav Tadon, Aniesh Udayakumar, Scott Giordano, Dan Burke, Manish Bafna, Shashidhar Angadi, Prabhu Palaniswamy, Bobby Balachandran, Karthik Palani, Ajith Samuel
-
Publication number: 20200111018
Abstract: A computer-implemented method is provided for optimization of parameters of a system, product, or process. The method includes establishing an optimization procedure for a system, product, or process. The system, product, or process has an evaluable performance that is dependent on values of one or more adjustable parameters. The method includes receiving one or more prior evaluations of performance of the system, product, or process. The one or more prior evaluations are respectively associated with one or more prior variants of the system, product, or process. The one or more prior variants are each defined by a set of values for the one or more adjustable parameters. The method includes utilizing an optimization algorithm to generate a suggested variant based at least in part on the one or more prior evaluations of performance and the associated set of values.
Type: Application
Filed: June 2, 2017
Publication date: April 9, 2020
Inventors: Daniel Reuben Golovin, Benjamin Solnik, Subhodeep Moitra, David W. Sculley, II
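The suggest-evaluate loop described above can be sketched with a deliberately simple suggestion rule (perturb the best variant seen so far, or sample at random when no evaluations exist). The abstract does not name the optimization algorithm, which in practice might be Bayesian optimization or another black-box method; the rule below is only a stand-in.

```python
import random

def suggest_variant(prior_evaluations, bounds, rng):
    """Suggest a new set of parameter values from prior (params, score)
    evaluations: perturb the best variant seen so far, or sample uniformly
    if nothing has been evaluated yet (a stand-in suggestion rule)."""
    if not prior_evaluations:
        return {k: rng.uniform(lo, hi) for k, (lo, hi) in bounds.items()}
    best_params, _ = max(prior_evaluations, key=lambda e: e[1])
    return {k: min(max(v + rng.gauss(0, 0.1 * (bounds[k][1] - bounds[k][0])),
                       bounds[k][0]), bounds[k][1])
            for k, v in best_params.items()}

def evaluate(params):
    # stand-in for evaluating the real system, product, or process
    return -(params["x"] - 0.3) ** 2 - (params["y"] - 0.7) ** 2

rng = random.Random(0)
bounds = {"x": (0.0, 1.0), "y": (0.0, 1.0)}
evaluations = []
for _ in range(30):
    variant = suggest_variant(evaluations, bounds, rng)   # suggested variant
    evaluations.append((variant, evaluate(variant)))      # prior evaluation
print(max(evaluations, key=lambda e: e[1]))
```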
-
Publication number: 20200111019
Abstract: An exemplary system, method, and computer-accessible medium can include, for example, (a) receiving a dataset(s), (b) determining if a misclassification(s) is generated during a training of a model(s) on the dataset(s), (c) generating a synthetic dataset(s) based on the misclassification(s), and (d) determining if the misclassification(s) is generated during the training of the model(s) on the synthetic dataset(s). The dataset(s) can include a plurality of data types. The misclassification(s) can be determined by determining if one of the data types is misclassified. The dataset(s) can include an identification of each of the data types in the dataset(s).
Type: Application
Filed: August 12, 2019
Publication date: April 9, 2020
Inventors: Jeremy GOODSITT, Anh TRUONG, Reza FARIVAR, Fardin Abdi Taghi ABAD, Mark WATSON, Vincent PHAM, Austin WALTERS
-
Publication number: 20200111020
Abstract: A method, system, and computer program product for resource management are described. The method includes selecting trouble regions within the service area, generating clustered regions, and training a trouble forecast model for the trouble regions for each type of damage, the training for each trouble region using training data from every trouble region within the clustered region associated with the trouble region. The method also includes applying the trouble forecast model for each trouble region within the service area for each type of damage, determining a trouble forecast for the service area for each type of damage based on the trouble forecast for each of the trouble regions within the service area, and determining a job forecast for the service area based on the trouble forecast for the service area, wherein the managing resources is based on the job forecast for the service area.
Type: Application
Filed: August 20, 2019
Publication date: April 9, 2020
Inventors: Fook-Luen Heng, Zhiguo Li, Stuart A. Siegel, Amith Singhee, Haijing Wang
-
Publication number: 20200111021
Abstract: Systems and methods for generating strings based on a seed string are disclosed. Machine learning models are trained using domain-specific training data. Random walk models are derived from the trained machine learning models. A seed string is input into each of the random walk models, and each of the random walk models iteratively generate one or more next characters for the seed string to generate at least one term from each of the random walk models. A predicted class for the at least one term generated by each of the random walk models can be determined, and a ranked order for the at least one term generated by each of the random walk models with the predicted classes can be output to a graphical user interface.
Type: Application
Filed: October 3, 2019
Publication date: April 9, 2020
Inventors: Peter Keyngnaert, Jan Waerniers, Ann Smet, Akanksha Mishra
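The "random walk model that iteratively generates next characters for a seed" can be pictured as a character-level Markov chain, sketched below. In the patent the walk is derived from a trained machine learning model; here the transition counts come straight from training strings, the end marker and order-2 context are assumptions, and the class prediction and ranking steps are omitted.

```python
import random
from collections import Counter, defaultdict

def build_random_walk_model(training_terms, order=2):
    """Character-level transition counts from domain-specific training data
    (a Markov-chain stand-in for a walk derived from a trained ML model)."""
    transitions = defaultdict(Counter)
    for term in training_terms:
        padded = term + "$"                       # "$" marks the end of a term
        for i in range(order, len(padded)):
            transitions[padded[i - order:i]][padded[i]] += 1
    return transitions

def generate_from_seed(seed, transitions, rng, order=2, max_len=20):
    """Iteratively sample next characters for the seed until the end marker
    or the length limit is reached."""
    out = seed
    while len(out) < max_len:
        counts = transitions.get(out[-order:])
        if not counts:
            break
        chars, weights = zip(*counts.items())
        nxt = rng.choices(chars, weights=weights)[0]
        if nxt == "$":
            break
        out += nxt
    return out

rng = random.Random(42)
training = ["neuralnet", "neurology", "network", "neutron", "nexus"]
model = build_random_walk_model(training)
print([generate_from_seed("ne", model, rng) for _ in range(3)])
```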
-
Publication number: 20200111022
Abstract: Provided is a process that includes sharing information among two or more parties or systems for modeling and decision-making purposes, while limiting the exposure of details either too sensitive to share, or whose sharing is controlled by laws, regulations, or business needs.
Type: Application
Filed: October 3, 2019
Publication date: April 9, 2020
Inventors: Gabriel Mauricio Silberman, Alain Charles Briancon, Lee David Harper, Luke Philip Reding, David Alexander Curry, Jean Joseph Belanger, Michael Thomas Wegan, Thejas Narayana Prasad
-
Publication number: 20200111023
Abstract: An Artificial Intelligence (AI)-based regulatory data processing system accesses a regulatory text corpus for training machine learning (ML) models including a topic extraction model, a feature selection model, an entity identification model and a section classification model. The regulatory text corpus includes documents pertaining to a specific domain corresponding to a received domain-specific regulatory text document. Various trained machine learning (ML) models are used to extract topics, identify entities from the new regulatory document and to classify portions of the domain-specific regulatory text document into one of a plurality of predetermined sections. The information in the new regulatory document is therefore converted into machine consumable form which can facilitate automatic execution of downstream processes such as identification of actions needed to implement the regulations and robotic process automation (RPA).
Type: Application
Filed: October 3, 2019
Publication date: April 9, 2020
Applicant: ACCENTURE GLOBAL SOLUTIONS LIMITED
Inventors: Ramkumar PONDICHERRY MURUGAPPAN, Ashwinee GODBOLE
-
Publication number: 20200111024
Abstract: Performance in a multi-classification system having multiple component classifiers can be based on a combination of the true positive rate (TPR) and false positive rate (FPR) of the component classifiers. Each component classifier can be configured with a decision threshold, and its TPR and FPR determined from a training set presented to the component classifier so configured. A system TPR and system FPR can be determined from the component TPRs and FPRs. A set of system TPRs and FPRs can be determined from additional sets of decision thresholds.
Type: Application
Filed: October 3, 2019
Publication date: April 9, 2020
Inventors: Leon Bergen, Kenneth Ko, Pelu S Tran
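As a concrete illustration of deriving system TPR/FPR from component rates, the sketch below assumes an OR-combination of independent component classifiers (the system fires if any component fires) and sweeps a few candidate thresholds per component to produce a set of system operating points. The combination rule and independence assumption are illustrative; the abstract does not fix them.

```python
from itertools import product

def system_rates(component_rates):
    """System (TPR, FPR) for an OR-combination of independent components:
    the system misses only if every component misses, and false-alarms
    unless every component stays quiet."""
    miss, quiet = 1.0, 1.0
    for c_tpr, c_fpr in component_rates:
        miss *= (1.0 - c_tpr)
        quiet *= (1.0 - c_fpr)
    return 1.0 - miss, 1.0 - quiet

# component (TPR, FPR) pairs at a few candidate decision thresholds (assumed)
comp_a = [(0.60, 0.01), (0.80, 0.05), (0.95, 0.20)]
comp_b = [(0.50, 0.02), (0.75, 0.08), (0.90, 0.25)]

# sweep all threshold combinations to get a set of system operating points
for rates in product(comp_a, comp_b):
    sys_tpr, sys_fpr = system_rates(rates)
    print(rates, "->", round(sys_tpr, 3), round(sys_fpr, 3))
```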
-
Publication number: 20200111025
Abstract: An electronic apparatus and a control method thereof are provided. The control method of the electronic apparatus includes receiving, from a first external electronic apparatus and a second external electronic apparatus, a first artificial intelligence model and a second artificial intelligence model used by the first and second external electronic apparatuses, respectively, and a plurality of learning data stored in the first and second external electronic apparatuses, identifying first learning data, which corresponds to second learning data received from the second external electronic apparatus, among learning data received from the first external electronic apparatus, training the second artificial intelligence model used by the second external electronic apparatus based on the first learning data, and transmitting the trained second artificial intelligence model to the second external electronic apparatus.
Type: Application
Filed: October 4, 2019
Publication date: April 9, 2020
Inventors: Youngho HAN, Kwangyoun KIM, Sangha KIM, Sungchan KIM, Sungsoo KIM, Kyungmin LEE, Yongchan LEE, Jaewon LEE
-
Publication number: 20200111026
Abstract: A method for computing a probability that an object comprises a target includes: performing a scan of an area comprising the object, generating points; creating a segment corresponding to the object using the points as segment points, the segment extending from a first segment point to a last segment point, the segment comprising a plurality of the segment points; and applying a metric, computing the probability that the object comprises the target.
Type: Application
Filed: November 10, 2019
Publication date: April 9, 2020
Applicant: Fetch Robotics, Inc.
Inventors: Alex Henning, Michael Ferguson, Melonee Wise
-
Publication number: 20200111027
Abstract: Systems and methods for providing recommendations based on seeded supervised learning are disclosed. The method may include acquiring, through a communication network, similarity data associated with a first entity, a second entity, and a third entity, and acquiring, through the communication network, external data associated with the first entity and the second entity. The method may further include training a classification model based on the external data and the similarity data. The method may also include determining an expectation score of the third entity based on the classification model, and providing, through the communication network, a recommendation based on the expectation score to the third entity.
Type: Application
Filed: December 5, 2019
Publication date: April 9, 2020
Applicant: BEIJING DIDI INFINITY TECHNOLOGY AND DEVELOPMENT CO., LTD.
Inventors: Zhiwei QIN, Chengxiang ZHUO, Wei TAN, Jun XIE
-
Publication number: 20200111028
Abstract: In one embodiment, techniques are shown and described relating to traffic-based inference of influence domains in a network by using learning machines. In particular, in one embodiment, a management device computes a time-based traffic matrix indicating traffic between pairs of transmitter and receiver nodes in a computer network, and also determines a time-based quality parameter for a particular node in the computer network. By correlating the time-based traffic matrix and time-based quality parameter for the particular node, the device may then determine an influence of particular traffic of the traffic matrix on the particular node.
Type: Application
Filed: December 10, 2019
Publication date: April 9, 2020
Inventors: Grégory Mermoud, Jean-Philippe Vasseur, Sukrit Dasgupta
-
Publication number: 20200111029
Abstract: A multiple regression analysis apparatus capable of accurately performing a multiple regression analysis is provided. A multiple regression analysis apparatus includes a determination unit, a division unit, an analysis unit, and a regression equation acquisition unit. The determination unit determines one of a plurality of explanatory variables that is effective as a parameter when stratification of a plurality of data sets is performed to be a stratification explanatory variable. The division unit divides the plurality of data sets for each layer using the stratification explanatory variable. The analysis unit performs a multiple regression analysis on each of groups of the plurality of data sets that have been divided. The regression equation acquisition unit acquires an integrated multiple regression equation in which results of the multiple regression analysis are integrated.
Type: Application
Filed: August 9, 2019
Publication date: April 9, 2020
Inventor: Takahiro TSUBOUCHI
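A short sketch of the stratify / regress-per-layer / integrate flow is below. Ordinary least squares stands in for the multiple regression analysis, and the integration step is a sample-weighted average of the per-layer coefficients, which is an assumed rule; the abstract does not specify how the layer results are integrated.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300
strat = rng.integers(0, 2, size=n)               # stratification explanatory variable
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = np.where(strat == 0, 1.0 + 2.0 * x1 - x2, -1.0 + 0.5 * x1 + 3.0 * x2)
y = y + 0.1 * rng.normal(size=n)

coefs, weights = [], []
for layer in np.unique(strat):
    mask = strat == layer
    X = np.column_stack([np.ones(mask.sum()), x1[mask], x2[mask]])
    beta, *_ = np.linalg.lstsq(X, y[mask], rcond=None)   # per-layer regression
    coefs.append(beta)
    weights.append(mask.sum())

# integrate the per-layer regression equations (sample-weighted average here)
integrated = np.average(np.vstack(coefs), axis=0, weights=weights)
print({"layer_coefs": [c.round(2) for c in coefs],
       "integrated": integrated.round(2)})
```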
-
Publication number: 20200111030
Abstract: In one embodiment, a device distributes sets of training records from a training dataset for a random forest-based classifier among a plurality of workers of a computing cluster. Each worker determines whether it can perform a node split operation locally on the random forest by comparing a number of training records at the worker to a predefined threshold. The device determines, for each of the split operations, a data size and entropy measure of the training records to be used for the split operation. The device applies a machine learning-based predictor to the determined data size and entropy measure of the training records to be used for the split operation, to predict its completion time. The device coordinates the workers of the computing cluster to perform the node split operations in parallel such that the node split operations in a given batch are grouped based on their predicted completion times.
Type: Application
Filed: October 5, 2018
Publication date: April 9, 2020
Inventors: Radek Starosta, Jan Brabec, Lukas Machlica
-
Publication number: 20200111031
Abstract: A computer-implemented method allows a wait time to be determined automatically for a queue area. The queue is part of an environment and includes defined entrance and exit areas. A series of images showing the environment are received over time. A wait time associated with the queue area is determined by detecting a location of an object corresponding to a person in a first one of the images; associating the object with an identifier uniquely identifying the object in the first one of the images; matching objects in later images; and determining the wait time based on times associated with an image in which an object associated with the identifier enters the queue area through the defined entrance area and a later one of the images in which an object associated with the identifier exits the queue area through the defined exit area. An indication of the wait time is output.
Type: Application
Filed: October 3, 2018
Publication date: April 9, 2020
Applicant: The Toronto-Dominion Bank
Inventors: Matta WAKIM, Dexter Lamont FICHUK, Sophia DHROLIA, Christopher Michael DULHANTY
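Setting the detection and tracking aside, the bookkeeping that turns entrance and exit sightings into a per-person wait time can be sketched as follows; the event-tuple format and the example timestamps are purely illustrative.

```python
from datetime import datetime

def wait_times(events):
    """events: (timestamp, object_id, area) tuples produced upstream by the
    detector and tracker, where area is 'entrance' or 'exit'. Returns seconds
    waited per tracked object (a sketch of the timing logic only)."""
    entered, waits = {}, {}
    for ts, obj_id, area in sorted(events):
        if area == "entrance":
            entered.setdefault(obj_id, ts)            # first entrance sighting
        elif area == "exit" and obj_id in entered:
            waits[obj_id] = (ts - entered.pop(obj_id)).total_seconds()
    return waits

events = [
    (datetime(2020, 4, 9, 10, 0, 0), "p1", "entrance"),
    (datetime(2020, 4, 9, 10, 1, 30), "p2", "entrance"),
    (datetime(2020, 4, 9, 10, 6, 0), "p1", "exit"),
    (datetime(2020, 4, 9, 10, 9, 45), "p2", "exit"),
]
print(wait_times(events))   # {'p1': 360.0, 'p2': 495.0}
```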
-
Publication number: 20200111032
Abstract: According to an embodiment, a time setting device may be installed on an electronic device including a communication unit, a memory, a touchscreen, a controller, and a key input device. The time setting device may be configured to display time sections partitioned per hour or minute on a clock-shaped background, enable a reservation to be set by selecting at least one of the time sections, and display preset reservation information on at least one of the time sections.
Type: Application
Filed: December 19, 2018
Publication date: April 9, 2020
Inventor: JE YOOK SON
-
Publication number: 20200111033
Abstract: A lead time architecture includes a lead time aggregator that calculates a lead time estimate to move products between a first node in a supply chain and a second node in the supply chain. The lead time aggregator can additionally, or alternatively, calculate lead time estimate for the sourcing of a product. This lead time estimate reflects the time between receipt of a purchase order by a vendor and the time of delivery of the ordered product to a node within the supply chain. The lead time aggregator is in communication with resources which provide the data upon which the lead time aggregator bases its calculations. The calculations can take into account expected times for route travel, warehouse, and supplier sourcing to be completed.
Type: Application
Filed: October 3, 2018
Publication date: April 9, 2020
Inventors: GAGAN MAHAJAN, PRAVEEN KUMAR KUMARESEN, ABHILASH KONERI, KRAIG NARR
-
Publication number: 20200111034
Abstract: A platform provides recommendations for points of interest in a venue to venue attendees. Different points of interest are recommended in different amounts in order to prevent congestion in the venue in the form of extremely long queues or extremely large crowds. To achieve this, the platform divides a large group of venue attendees into multiple sub-groups, with each sub-group being recommended a different point of interest, and the size of each sub-group based on a difference between an optimal queue or crowd size and an actual queue or crowd size of a queue or crowd associated with that point of interest.
Type: Application
Filed: October 8, 2019
Publication date: April 9, 2020
Inventor: Scott Sahadi
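The sub-group sizing can be illustrated with a simple proportional rule: size each sub-group in proportion to how far each point of interest sits below its optimal queue size. This rule is consistent with, but not necessarily identical to, the sizing described in the abstract, and the example numbers are invented.

```python
def split_into_subgroups(num_attendees, points_of_interest):
    """Size each sub-group in proportion to the gap between the optimal and
    actual queue size of its point of interest (assumed proportional rule)."""
    capacity_gap = {poi: max(opt - actual, 0)
                    for poi, (opt, actual) in points_of_interest.items()}
    total_gap = sum(capacity_gap.values()) or 1
    sizes = {poi: (gap * num_attendees) // total_gap
             for poi, gap in capacity_gap.items()}
    # hand any rounding remainder to the least-congested point of interest
    remainder = num_attendees - sum(sizes.values())
    if remainder:
        sizes[max(capacity_gap, key=capacity_gap.get)] += remainder
    return sizes

# (optimal queue size, actual queue size) per point of interest
pois = {"ride_a": (40, 10), "ride_b": (40, 35), "show_c": (100, 60)}
print(split_into_subgroups(120, pois))   # e.g. {'ride_a': 48, 'ride_b': 8, 'show_c': 64}
```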
-
Publication number: 20200111035
Abstract: An information processing method includes calculating, for each reference target among reference targets to which a measure has been applied, a difference between a value of an index of the reference target linked to predetermined variables, and a value of the index of the reference target obtained under a virtual scenario in which the measure is not applied to the reference target, as a first index value difference, calculating a relation expression that links the first index value difference to the predetermined variables, and calculating, by using the predetermined variables of a target and the relation expression, a difference between a value of the index of the target obtained under a virtual scenario in which the measure is applied to the target and a value of the index of the target obtained when the measure is not applied to the target, as a second index value difference.
Type: Application
Filed: December 5, 2019
Publication date: April 9, 2020
Applicant: FUJITSU LIMITED
Inventors: Katsuhito Nakazawa, Tetsuyoshi Shiota, Takahiro HOSHINO, Yuki SAITO, Takayuki TODA, Yuya MATSUMURA
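The three calculations in this abstract (first index value difference per reference target, a relation expression over the predetermined variables, and a second index value difference for a new target) can be sketched numerically as below. A linear relation expression and synthetic index values for the virtual scenario are assumed; the patent does not specify either.

```python
import numpy as np

rng = np.random.default_rng(5)

# reference targets to which the measure has been applied
n_ref = 200
vars_ref = rng.normal(size=(n_ref, 2))                # predetermined variables
index_with = 5.0 + 1.5 * vars_ref[:, 0] + rng.normal(scale=0.2, size=n_ref)
index_without = 5.0 + 0.5 * vars_ref[:, 0] - 0.3 * vars_ref[:, 1]  # virtual scenario

# first index value difference per reference target
first_diff = index_with - index_without

# relation expression linking the difference to the predetermined variables
# (a linear model is assumed here)
X = np.column_stack([np.ones(n_ref), vars_ref])
coef, *_ = np.linalg.lstsq(X, first_diff, rcond=None)

# second index value difference: predicted effect of applying the measure
# to a new target, from its own predetermined variables and the relation
target_vars = np.array([0.8, -0.4])
second_diff = float(np.concatenate(([1.0], target_vars)) @ coef)
print(coef.round(2), round(second_diff, 2))
```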