Patents by Inventor Patrice Simard
Patrice Simard has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230325674
Abstract: A differential recurrent neural network (RNN) is described that handles dependencies that go arbitrarily far back in time by allowing the network to store states using recurrent loops without adversely affecting training. The differential RNN includes a state component for storing states, and a trainable transition and differential non-linearity component which includes a neural network. The trainable transition and differential non-linearity component takes as input an output of the previously stored states from the state component along with an input vector, and produces positive and negative contribution vectors which are used to produce a state contribution vector. The state contribution vector is fed into the state component to create a set of current states. In one implementation, the current states are simply output.
Type: Application
Filed: June 14, 2023
Publication date: October 12, 2023
Applicant: Microsoft Technology Licensing, LLC
Inventor: Patrice Simard
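The abstract's data flow can be sketched in a few lines of numpy. Everything below is illustrative, not the patented design: the one-hidden-layer transition network, the weight names, and the assumption that the positive and negative contribution vectors combine by subtraction and accumulate additively into the state are all choices the abstract leaves open.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, INPUT_DIM, HIDDEN = 4, 3, 8

# Hypothetical weights for the trainable transition and differential
# non-linearity component (the abstract does not fix an architecture).
W_h = rng.normal(scale=0.1, size=(HIDDEN, STATE_DIM + INPUT_DIM))
W_pos = rng.normal(scale=0.1, size=(STATE_DIM, HIDDEN))
W_neg = rng.normal(scale=0.1, size=(STATE_DIM, HIDDEN))

def step(state, x):
    """One time step: previous state + input -> contribution -> new state."""
    h = np.tanh(W_h @ np.concatenate([state, x]))
    pos = np.maximum(W_pos @ h, 0.0)   # positive contribution vector
    neg = np.maximum(W_neg @ h, 0.0)   # negative contribution vector
    contribution = pos - neg           # assumed combination rule
    return state + contribution       # state component stores the running state

state = np.zeros(STATE_DIM)
for _ in range(5):
    state = step(state, rng.normal(size=INPUT_DIM))
print(state.shape)
```

The additive update is what lets stored information persist across arbitrarily many steps: the state is only changed by explicit contributions, rather than being rewritten by a squashing non-linearity at every step.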
-
Patent number: 11720797
Abstract: A differential recurrent neural network (RNN) is described that handles dependencies that go arbitrarily far back in time by allowing the network to store states using recurrent loops without adversely affecting training. The differential RNN includes a state component for storing states, and a trainable transition and differential non-linearity component which includes a neural network. The trainable transition and differential non-linearity component takes as input an output of the previously stored states from the state component along with an input vector, and produces positive and negative contribution vectors which are used to produce a state contribution vector. The state contribution vector is fed into the state component to create a set of current states. In one implementation, the current states are simply output.
Type: Grant
Filed: April 28, 2020
Date of Patent: August 8, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventor: Patrice Simard
-
Publication number: 20230106295
Abstract: A processor-implemented method is provided for deriving at least one performance metric of an artificial intelligence (AI) model that is trained on a sample set of examples (E), by estimating the relative size of a first partition of the sample set. The method includes populating a binary decision tree by adding at least one unlabeled example from the sample set at the root node of the tree, partitioning the sample set into the first partition, which includes a subset of the examples, propagating the at least one unlabeled example from the root node to a first leaf node of the tree, and automatically estimating the relative size of the first partition that corresponds to the first leaf node, thereby deriving the at least one performance metric of the AI model.
Type: Application
Filed: November 29, 2022
Publication date: April 6, 2023
Applicant: Intelus Inc.
Inventor: Patrice Simard
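The core estimation step — measure a leaf partition's relative size as the fraction of unlabeled examples that reach that leaf — can be sketched as follows. The single-feature split, the threshold, and the synthetic data are all hypothetical stand-ins; the patent's tree and examples would come from the trained model and its domain.

```python
import random

random.seed(0)

# Synthetic unlabeled examples: single-feature values in [0, 1].
examples = [[random.random()] for _ in range(1000)]

def route(example, threshold=0.5):
    """Hypothetical root-node split of a binary decision tree."""
    return "left" if example[0] <= threshold else "right"

def estimate_partition_size(examples, leaf="left"):
    """Relative size of a leaf's partition = fraction of examples reaching it."""
    hits = sum(1 for e in examples if route(e) == leaf)
    return hits / len(examples)

print(round(estimate_partition_size(examples), 2))
```

With a uniform sample and a 0.5 threshold, the estimate converges to 0.5 as the number of propagated examples grows; in general, the accuracy of the size estimate scales with the number of unlabeled examples pushed through the tree.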
-
Publication number: 20220391756
Abstract: A processor-implemented method includes (i) defining a region of interest ranging between a first and a second boundary location for each label in the M documents that comprise N labels, (ii) summarizing information in a selected document from a first content location to the first boundary location of the region of interest, to obtain a first summary representing context information between those two locations, (iii) summarizing information in the selected document from a second content location to the second boundary location, to obtain a second summary representing context information between those two locations, (iv) training the AI model, including restricting training data from the M documents based on the region of interest, and (v) extracting the target data from the M documents using the trained AI model.
Type: Application
Filed: January 24, 2022
Publication date: December 8, 2022
Applicant: Intelus Inc.
Inventors: Patrice Simard, Riham Mansour
-
Publication number: 20220391643
Abstract: A processor-implemented method includes (i) selecting initial features using a machine learning algorithm with training data, (ii) automatically generating selected candidate features for an artificial intelligence (AI) model from the initial features, wherein the selected candidate features are generated from the training data or selected from a repository of curated features, (iii) automatically selecting a subset of the selected candidate features and augmenting them, based on an external knowledge source, to obtain suggested features, (iv) presenting the suggested features to a user based on the improvement in the objective function of the AI model caused by adding the suggested features to the AI model, (v) enabling the user to validate the suggested features, wherein the suggested features are validated by the user to improve the generalization of the AI model, and (vi) adding the validated suggested features to the AI model to improve its generalization.
Type: Application
Filed: August 24, 2021
Publication date: December 8, 2022
Applicant: Katam.ai Inc.
Inventors: Riham Mansour, Patrice Simard
-
Publication number: 20220391719
Abstract: A processor-implemented method includes (i) obtaining raw data and the value of a parameter in a column of tabular data, (ii) defining, based on user input, a smart column with a tabular data prediction generated from the raw data, (iii) validating, based on user input, a first label and a second label corresponding respectively to a first and a second predefined category, to obtain a first and a second user-validated label, (iv) detecting an error in the training set of the predictive AI model when there is a mismatch between a value from the predictive AI model and a user-validated label, (v) automatically generating a formula for the tabular data prediction to fix the error in the training set, (vi) validating the formula based on user input to obtain a user-validated formula, and (vii) automatically generating a first tabular data prediction in the smart column by applying the user-validated formula to some of the raw data.
Type: Application
Filed: June 3, 2021
Publication date: December 8, 2022
Applicant: Katam.ai Inc.
Inventors: Riham Mansour, Amit Mital, Patrice Simard
-
Publication number: 20200327395
Abstract: A differential recurrent neural network (RNN) is described that handles dependencies that go arbitrarily far back in time by allowing the network to store states using recurrent loops without adversely affecting training. The differential RNN includes a state component for storing states, and a trainable transition and differential non-linearity component which includes a neural network. The trainable transition and differential non-linearity component takes as input an output of the previously stored states from the state component along with an input vector, and produces positive and negative contribution vectors which are used to produce a state contribution vector. The state contribution vector is fed into the state component to create a set of current states. In one implementation, the current states are simply output.
Type: Application
Filed: April 28, 2020
Publication date: October 15, 2020
Applicant: Microsoft Technology Licensing, LLC
Inventor: Patrice Simard
-
Patent number: 10685285
Abstract: The mirror deep neural networks (DNNs) described herein recognize patterns in an input signal. Mirror DNNs regularize to a linear function and train very quickly. They employ a neural network pattern recognizer that receives a set of features extracted from an input signal and feeds the set of features into a multi-layer neural network. The multi-layer neural network has an input layer that receives the set of features, a plurality of intermediate layers, and an output layer that generates a set of output values indicative of a recognized pattern exhibited in the input signal. A first and second non-linear equation pair are chosen and applied to the intermediate layers of the neural network so as to make the output values linear.
Type: Grant
Filed: November 23, 2016
Date of Patent: June 16, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventor: Patrice Simard
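One plausible reading of a "non-linear equation pair" that can make a layer's output exactly linear is a pair whose difference reconstructs the identity, such as relu(x) and relu(-x). The sketch below illustrates only that algebraic property; the branch structure, tied weights, and function names are assumptions, not the patented construction.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# The pair relu(x) and relu(-x) satisfies relu(x) - relu(-x) == x for all x,
# so two mirrored non-linear branches can jointly represent an exact identity.
x = np.linspace(-2.0, 2.0, 9)
print(np.allclose(relu(x) - relu(-x), x))  # True

def mirror_layer(v, W):
    """Hypothetical layer: two branches apply the paired non-linearities to
    the same pre-activation; their difference reduces to the linear map W @ v."""
    z = W @ v
    return relu(z) - relu(-z)

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
v = rng.normal(size=4)
print(np.allclose(mirror_layer(v, W), W @ v))  # True
```

A network built from such layers has an exactly linear function inside its hypothesis space, which is one way to make sense of the claim that mirror DNNs "regularize to a linear function."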
-
Patent number: 10671908
Abstract: A differential recurrent neural network (RNN) is described that handles dependencies that go arbitrarily far back in time by allowing the network to store states using recurrent loops without adversely affecting training. The differential RNN includes a state component for storing states, and a trainable transition and differential non-linearity component which includes a neural network. The trainable transition and differential non-linearity component takes as input an output of the previously stored states from the state component along with an input vector, and produces positive and negative contribution vectors which are used to produce a state contribution vector. The state contribution vector is fed into the state component to create a set of current states. In one implementation, the current states are simply output.
Type: Grant
Filed: April 14, 2017
Date of Patent: June 2, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventor: Patrice Simard
-
Publication number: 20180144245
Abstract: A differential recurrent neural network (RNN) is described that handles dependencies that go arbitrarily far back in time by allowing the network to store states using recurrent loops without adversely affecting training. The differential RNN includes a state component for storing states, and a trainable transition and differential non-linearity component which includes a neural network. The trainable transition and differential non-linearity component takes as input an output of the previously stored states from the state component along with an input vector, and produces positive and negative contribution vectors which are used to produce a state contribution vector. The state contribution vector is fed into the state component to create a set of current states. In one implementation, the current states are simply output.
Type: Application
Filed: April 14, 2017
Publication date: May 24, 2018
Inventor: Patrice Simard
-
Publication number: 20180144242
Abstract: The mirror deep neural networks (DNNs) described herein recognize patterns in an input signal. Mirror DNNs regularize to a linear function and train very quickly. They employ a neural network pattern recognizer that receives a set of features extracted from an input signal and feeds the set of features into a multi-layer neural network. The multi-layer neural network has an input layer that receives the set of features, a plurality of intermediate layers, and an output layer that generates a set of output values indicative of a recognized pattern exhibited in the input signal. A first and second non-linear equation pair are chosen and applied to the intermediate layers of the neural network so as to make the output values linear.
Type: Application
Filed: November 23, 2016
Publication date: May 24, 2018
Applicant: Microsoft Technology Licensing, LLC
Inventor: Patrice Simard
-
Patent number: 8606608
Abstract: Counterfactual analysis can be performed "offline", or "after the fact", based on data collected during a trial in which random variations are applied to the output of the system whose parameters are to be the subject of the counterfactual analysis. A weighting factor can be derived and applied to the data collected during the trial to emphasize the data obtained when the random variations most closely resembled the output that would be expected if the counterfactual parameters were utilized to generate the output. If the counterfactual parameters being considered differ too much from the parameters under which the trial was conducted, the offline counterfactual analysis can estimate the direction and magnitude of the change in system performance, as opposed to deriving a specific expected system performance value. In economic transactions, the random variations can be treated as variations in the price paid by another party, thereby enabling derivation of that party's marginal cost.
Type: Grant
Filed: December 17, 2010
Date of Patent: December 10, 2013
Assignee: Microsoft Corporation
Inventors: Leon Bottou, Denis Charles, David Maxwell Chickering, Patrice Simard
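The reweighting idea in the abstract can be illustrated with a standard importance-weighting (inverse-propensity) estimator. The pricing scenario below is hypothetical: the candidate prices, the simulated demand curve, and the counterfactual "always charge 1.5" policy are invented for the example, and the simple ratio weight is one common instantiation of the abstract's "weighting factor."

```python
import random

random.seed(0)

prices = [1.0, 1.5, 2.0]        # randomized trial: prices drawn uniformly
log_p = 1.0 / len(prices)       # propensity of each price under the logging policy

def revenue(price):
    """Simulated buyer response; unknown to the estimator."""
    buys = random.random() < 1.2 - 0.5 * price
    return price if buys else 0.0

trial_log = []
for _ in range(10_000):
    p = random.choice(prices)
    trial_log.append((p, revenue(p)))

def counterfactual_prob(price):
    """Counterfactual policy to evaluate offline: always charge 1.5."""
    return 1.0 if price == 1.5 else 0.0

# Importance-weighted estimate: each logged outcome is weighted by how likely
# its price would have been under the counterfactual policy vs. the trial.
estimate = sum(r * counterfactual_prob(p) / log_p
               for p, r in trial_log) / len(trial_log)
print(round(estimate, 3))
```

Only the trial records where the random variation happened to match the counterfactual price contribute, which is exactly the "emphasize the matching data" behavior the abstract describes; when the counterfactual policy rarely overlaps the trial's randomization, the weights blow up and only a direction-and-magnitude estimate remains trustworthy.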
-
Publication number: 20120158488
Abstract: Counterfactual analysis can be performed "offline", or "after the fact", based on data collected during a trial in which random variations are applied to the output of the system whose parameters are to be the subject of the counterfactual analysis. A weighting factor can be derived and applied to the data collected during the trial to emphasize the data obtained when the random variations most closely resembled the output that would be expected if the counterfactual parameters were utilized to generate the output. If the counterfactual parameters being considered differ too much from the parameters under which the trial was conducted, the offline counterfactual analysis can estimate the direction and magnitude of the change in system performance, as opposed to deriving a specific expected system performance value. In economic transactions, the random variations can be treated as variations in the price paid by another party, thereby enabling derivation of that party's marginal cost.
Type: Application
Filed: December 17, 2010
Publication date: June 21, 2012
Applicant: Microsoft Corporation
Inventors: Leon Bottou, Denis Charles, David Maxwell Chickering, Patrice Simard
-
Publication number: 20090119604
Abstract: The claimed subject matter provides a system and/or a method that facilitates communicating data utilizing holographic representations. An interface component can receive a portion of data related to a virtual meeting. A holographic component can generate at least one holographic image within a virtual meeting space, wherein the holographic image can virtually represent the portion of data related to the virtual meeting and/or a user associated with the virtual meeting. Moreover, a share component can employ a public view or a private view for the holographic image within the virtual meeting space.
Type: Application
Filed: November 6, 2007
Publication date: May 7, 2009
Applicant: Microsoft Corporation
Inventors: Patrice Simard, Ajitesh Kishore
-
Publication number: 20070292028
Abstract: A system and method facilitating activity detection (e.g., dithering/halftoning and/or noise) is provided. The invention includes an activity detection system having a connected-component analyzer and an activity detector. The system determines the quantity of connected components in and/or intersecting a region surrounding a pixel. The activity detector produces an activity map based, at least in part, upon that quantity. The invention further provides for an optional image processor. In one example, if the quantity exceeds a first threshold, dithering/halftoning is detected and appropriate action can be taken; if the quantity is less than a second threshold, noise is detected and appropriate action can be taken.
Type: Application
Filed: August 27, 2007
Publication date: December 20, 2007
Applicant: Microsoft Corporation
Inventor: Patrice Simard
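The two-threshold decision at the end of the abstract is simple enough to sketch directly. The threshold values and the category names are hypothetical; in the described system, the count would come from the connected-component analyzer for the region around each pixel.

```python
def classify_region(component_count, dither_threshold=20, noise_threshold=3):
    """Sketch of the abstract's thresholding rule (thresholds are hypothetical).

    Many small connected components in a region suggest a dither/halftone
    texture; very few suggest isolated noise; anything in between is treated
    as ordinary content.
    """
    if component_count > dither_threshold:
        return "dithering/halftoning"
    if component_count < noise_threshold:
        return "noise"
    return "content"

print(classify_region(35))  # dithering/halftoning
print(classify_region(1))   # noise
print(classify_region(10))  # content
```

Applying this rule at every pixel's surrounding region yields the activity map the abstract mentions, which downstream processing (e.g., descreening or denoising) can then act on.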
-
Publication number: 20070242888
Abstract: A system and method facilitating compression of bi-level images with explicit representation of ink clusters is provided. The present invention includes a cluster shape estimator that analyzes connected-component information, extracts clusters, and stores each cluster in a global dictionary, a page dictionary, or a store of unclustered shapes. A bitmap-estimation-from-clusters component determines dictionary positions for clusters stored in the global dictionary, which are then encoded. A cluster position estimator determines page positions of clusters in the global dictionary and/or the page dictionary, which are then encoded. Further, the global dictionary, the page dictionary, and the store of unclustered shapes are also encoded.
Type: Application
Filed: April 12, 2007
Publication date: October 18, 2007
Applicant: Microsoft Corporation
Inventors: Erin Renshaw, Patrice Simard, Henrique Malvar
-
Patent number: 7274821
Abstract: A system and method are disclosed that facilitate pattern recognition, or matching between patterns, in a way that is substantially invariant to small transformations. A substantially smooth deformation field is applied to a derivative of a first pattern, and the resulting deformation component is added to the first pattern to derive a first deformed pattern. An indication of similarity between the first pattern and a second pattern may be determined by minimizing the distance between the first deformed pattern and the second pattern with respect to the deformation coefficients associated with each deformed pattern. The foregoing minimization yields a system (e.g., a linear one) that may be solved with standard methods.
Type: Grant
Filed: December 13, 2005
Date of Patent: September 25, 2007
Assignee: Microsoft Corporation
Inventors: Nebojsa Jojic, Patrice Simard
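The "deform along the derivative, then minimize over the coefficients" step can be shown in one dimension with a single deformation coefficient. The signals, the translation amount, and the use of plain least squares are illustrative assumptions; the patent's formulation covers general smooth deformation fields and coefficients on both patterns.

```python
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 200)
p = np.sin(x)            # first pattern
q = np.sin(x - 0.1)      # second pattern: a slightly translated copy

dp = np.gradient(p, x)   # derivative of the first pattern

# One deformation coefficient a: minimize ||(p - a*dp) - q||^2.
# Since q ≈ p - 0.1*dp for a small shift, this is linear least squares
# with the closed-form solution below, and a should come out near 0.1.
a = np.dot(dp, p - q) / np.dot(dp, dp)
deformed = p - a * dp

print(np.linalg.norm(deformed - q) < np.linalg.norm(p - q))  # True
```

The deformed first pattern matches the translated second pattern far better than the raw first pattern does, which is how the method achieves invariance to small transformations: the transformation is absorbed into the coefficients before distances are compared.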
-
Publication number: 20070211064
Abstract: A system and method for processing machine learning techniques (such as neural networks) and other non-graphics applications using a graphics processing unit (GPU) to accelerate and optimize the processing. The system and method transfer an architecture that can be used for a wide variety of machine learning techniques from the CPU to the GPU. The transfer of processing to the GPU is accomplished using several novel techniques that overcome the limitations of, and work well within the framework of, the GPU architecture. With these limitations overcome, machine learning techniques are particularly well suited for processing on the GPU because the GPU is typically much more powerful than the typical CPU. Moreover, as with graphics processing, processing machine learning techniques involves solving non-trivial problems over large amounts of data.
Type: Application
Filed: May 14, 2007
Publication date: September 13, 2007
Applicant: Microsoft Corporation
Inventors: Ian Buck, Patrice Simard, David Steinkraus
-
Publication number: 20070192687
Abstract: A system that can convert the content and structure of a document from an original format into a target format, irrespective of the functional specifics of the original format. The system can automatically infer the content and structure of a document from a rendered format, thereby restoring the programmatic functionality of the original file (or generating programmatic functionality in a desired target format) through the novel conversion/import process. The system can extract the document structure (e.g., layout) together with the content in order to effectuate the conversion. Heuristics (e.g., logic and/or reasoning) can be employed to make decisions with respect to importing the document into a target format or formats.
Type: Application
Filed: February 14, 2006
Publication date: August 16, 2007
Inventors: Patrice Simard, Radoslav Nickolov
-
Publication number: 20070177183
Abstract: A system for generating soft-copy (digital) versions of hard-copy documents uses images of the hard-copy documents. The images may be captured using a device suitable for capturing images, such as a camera phone. Once available, the images may be processed to improve their suitability for document generation. The images may then be processed to recognize and generate soft-copy versions of the documents represented by the images.
Type: Application
Filed: February 2, 2006
Publication date: August 2, 2007
Applicant: Microsoft Corporation
Inventors: Merle Robinson, Matthieu Uyttendaele, Zhengyou Zhang, Patrice Simard