Patents by Inventor Tarek Aziz Lahlou
Tarek Aziz Lahlou has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11694341
Abstract: A CNN operates on the disparity or motion outputs of a block matching hardware module, such as a DMPAC module, to produce refined disparity or motion streams that improve results in images with ambiguous regions. Because the block matching hardware module provides most of the processing, the CNN can be small and thus able to operate in real time, in contrast to CNNs that perform all of the processing. In one example, the CNN operation is performed only if the block matching hardware module's output confidence level is below a predetermined amount. The CNN can have a number of different configurations and still be sufficiently small to operate in real time on conventional platforms.
Type: Grant
Filed: December 23, 2019
Date of Patent: July 4, 2023
Assignee: Texas Instruments Incorporated
Inventors: Jing Li, Do-Kyoung Kwon, Tarek Aziz Lahlou
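The gating idea in this abstract — run the refinement network only where the block matcher is unsure — can be sketched in a few lines. Everything here is illustrative: the threshold, the function names, and especially the box blur standing in for the small CNN are assumptions, not the patented implementation.

```python
import numpy as np

def gated_refine(disparity, confidence, refine_fn, threshold=0.7):
    """Apply a refinement step only at pixels where the block matcher's
    confidence falls below a threshold; confident pixels pass through."""
    mask = confidence < threshold               # ambiguous regions only
    refined = disparity.copy()
    refined[mask] = refine_fn(disparity)[mask]  # a small CNN would go here
    return refined

def box_blur(d):
    """Stand-in for the small refinement CNN: a 3x3 box filter."""
    padded = np.pad(d, 1, mode="edge")
    return sum(padded[i:i + d.shape[0], j:j + d.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

disp = np.array([[1.0, 1.0, 9.0],
                 [1.0, 1.0, 1.0],
                 [1.0, 1.0, 1.0]])
conf = np.ones_like(disp)
conf[0, 2] = 0.2                 # one low-confidence (ambiguous) pixel
out = gated_refine(disp, conf, box_blur)
```

Only the single low-confidence pixel is touched; the rest of the disparity map is returned unchanged, which is what keeps the refinement cheap.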
-
Publication number: 20230108193
Abstract: Aspects discussed herein may relate to methods and techniques for embedding constrained and unconstrained optimization programs as layers in a neural network architecture. Systems are provided that implement a method of solving a particular optimization problem by a neural network architecture. Prior systems required external software to pre-solve optimization programs so that previously determined parameters could be used as fixed input in the neural network architecture. Aspects described herein may transform the structure of common optimization problems/programs into forms suitable for use in a neural network. This transformation may be invertible, allowing the system to learn the solution to the optimization program using gradient descent techniques via backpropagation of errors through the neural network architecture. Thus, these optimization layers may be solved via operation of the neural network itself.
Type: Application
Filed: December 12, 2022
Publication date: April 6, 2023
Inventors: Tarek Aziz Lahlou, Christopher Larson, Oluwatobi Olabiyi
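One common way to realize "an optimization program as a layer" is to unroll an iterative solver whose every step is differentiable, so backpropagation can flow through it. The sketch below does this for a toy constrained problem; the solver choice, step count, and names are assumptions for illustration, not the patent's construction.

```python
import numpy as np

def nonneg_projection_layer(y, steps=200, lr=0.1):
    """Unrolled projected gradient descent solving
        min_x ||x - y||^2  subject to  x >= 0.
    Each iteration uses only differentiable primitives, so the whole
    loop could sit inside a network and be trained end to end."""
    x = np.zeros_like(y)
    for _ in range(steps):
        grad = 2.0 * (x - y)                  # gradient of the objective
        x = np.maximum(x - lr * grad, 0.0)    # project onto x >= 0
    return x

y = np.array([-1.5, 0.25, 3.0])
x_star = nonneg_projection_layer(y)   # closed form here is max(y, 0)
```

For this particular objective the layer converges to the elementwise ReLU of its input, which makes it easy to check the unrolled solver against the known answer.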
-
Patent number: 11544572
Abstract: Aspects discussed herein may relate to methods and techniques for embedding constrained and unconstrained optimization programs as layers in a neural network architecture. Systems are provided that implement a method of solving a particular optimization problem by a neural network architecture. Prior systems required external software to pre-solve optimization programs so that previously determined parameters could be used as fixed input in the neural network architecture. Aspects described herein may transform the structure of common optimization problems/programs into forms suitable for use in a neural network. This transformation may be invertible, allowing the system to learn the solution to the optimization program using gradient descent techniques via backpropagation of errors through the neural network architecture. Thus, these optimization layers may be solved via operation of the neural network itself.
Type: Grant
Filed: February 14, 2020
Date of Patent: January 3, 2023
Assignee: Capital One Services, LLC
Inventors: Tarek Aziz Lahlou, Christopher Larson, Oluwatobi Olabiyi
-
Publication number: 20220398377
Abstract: Systems, apparatuses, and methods are described for inverting neural embeddings. One or more forward neural embeddings associated with meanings, features, and/or characteristics of data samples may be generated for one or more data samples. One or more inverse neural embeddings associated with the one or more forward neural embeddings may be determined. One or more inverse feature sets for the one or more inverse neural embeddings may be generated.
Type: Application
Filed: June 14, 2021
Publication date: December 15, 2022
Inventors: Tarek Aziz Lahlou, Nathan Wolfe, Christopher Larson
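A minimal way to picture a forward embedding and its inverse is a linear map and its least-squares inverse: the inverse recovers a feature set consistent with a given embedding. This is only a toy reading of the abstract, assuming a linear embedding; the dimensions and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forward embedding: a random linear map from a
# 6-dimensional feature set down to a 4-dimensional embedding.
W = rng.standard_normal((4, 6))

def forward_embed(features):
    return W @ features

def inverse_embed(embedding):
    """Least-squares inverse of the forward map: returns the
    minimum-norm feature set consistent with the embedding."""
    return np.linalg.pinv(W) @ embedding

f = rng.standard_normal(6)
e = forward_embed(f)
f_hat = inverse_embed(e)
```

Because the embedding is lossy (6 features into 4 dimensions), `f_hat` is generally not `f`, but re-embedding it reproduces `e` exactly — the inverse lands on a feature set the forward map cannot distinguish from the original.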
-
Publication number: 20220335311
Abstract: Systems, apparatuses, and methods are described for data labeling for training artificial intelligence systems. A candidate dataset comprising data samples and corresponding labels may be used to update an incumbent dataset comprising data samples and corresponding labels. The integrity of a data sample-label pair in the candidate dataset may be determined before the data sample-label pair is added to the incumbent dataset. For determining labeling integrity, a plurality of machine classifiers may be trained based on the incumbent dataset and portions of the candidate dataset. The plurality of machine classifiers as trained may be used to generate predicted labels for data samples in the candidate dataset. The integrity of the data sample-label pair in the candidate dataset may be measured based on the predicted labels for the data sample.
Type: Application
Filed: April 14, 2021
Publication date: October 20, 2022
Inventors: Tarek Aziz Lahlou, Megan Lynn DeLaunay, Corey Jonathan Fyock, Erin Babinsky
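The integrity check described here — train several classifiers on the incumbent data plus portions of the candidate data, then score each candidate pair by how often its label is reproduced — can be sketched with a tiny stand-in classifier. The nearest-centroid model, the 50% sampling, and the vote-fraction score are all assumptions made to keep the example self-contained.

```python
import numpy as np

def centroid_classifier(X, y):
    """Tiny stand-in for a 'machine classifier': nearest class centroid."""
    classes = np.unique(y)
    cents = np.array([X[y == c].mean(axis=0) for c in classes])
    return lambda q: classes[np.argmin(((cents - q) ** 2).sum(axis=1))]

def label_integrity(inc_X, inc_y, cand_X, cand_y, n_models=3, seed=0):
    """Train several classifiers on the incumbent data plus random
    portions of the candidate data; score each candidate pair by the
    fraction of models that reproduce its label."""
    rng = np.random.default_rng(seed)
    votes = np.zeros(len(cand_y))
    for _ in range(n_models):
        take = rng.random(len(cand_y)) < 0.5        # a random portion
        clf = centroid_classifier(
            np.vstack([inc_X, cand_X[take]]),
            np.concatenate([inc_y, cand_y[take]]))
        votes += [clf(x) == t for x, t in zip(cand_X, cand_y)]
    return votes / n_models                          # agreement in [0, 1]

inc_X = np.array([[0.0], [0.2], [1.0], [1.2]])
inc_y = np.array([0, 0, 1, 1])
cand_X = np.array([[0.1], [1.1], [0.05]])
cand_y = np.array([0, 1, 1])        # the last pair is mislabeled
scores = label_integrity(inc_X, inc_y, cand_X, cand_y)
```

The mislabeled pair earns a low agreement score, flagging it for exclusion before the candidate data is merged into the incumbent dataset.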
-
Patent number: 11417317
Abstract: Aspects described herein may relate to the determination of data that is indicative of a greater range of speech properties than input text data. The determined data may be used as input to one or more speech processing tasks, such as model training, model validation, model testing, or classification. For example, after a model is trained based on the determined data, the model's performance may exhibit more resilience to a wider range of speech properties. The determined data may include one or more modified versions of the input text data. The one or more modified versions may be associated with one or more speakers or accents and/or may be associated with one or more levels of semantic similarity in relation to the input text data. The one or more modified versions may be determined based on one or more machine learning algorithms.
Type: Grant
Filed: February 20, 2020
Date of Patent: August 16, 2022
Assignee: Capital One Services, LLC
Inventors: Christopher Larson, Tarek Aziz Lahlou, Diana Mingels, Zachary Kulis, Erik T. Mueller
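The core loop here — generate modified versions of the input text, then keep only those above a semantic-similarity floor — can be mimicked with a toy substitution table and a token-overlap score. The lexicon, the Jaccard stand-in for a learned similarity model, and the threshold are all illustrative assumptions, not the patented machine learning algorithms.

```python
import itertools

# Hypothetical variant lexicon: alternate surface forms a learned
# substitution model might propose for speaker/accent variation.
VARIANTS = {"hello": ["hey", "hiya"], "car": ["automobile", "motor"]}

def jaccard(a, b):
    """Token-overlap stand-in for a learned semantic-similarity score."""
    a, b = set(a.split()), set(b.split())
    return len(a & b) / len(a | b)

def augment(text, min_similarity=0.3):
    """Produce modified versions of the input text, keeping those that
    stay above a semantic-similarity floor relative to the original."""
    tokens = text.split()
    options = [[t] + VARIANTS.get(t, []) for t in tokens]
    out = []
    for combo in itertools.product(*options):
        cand = " ".join(combo)
        if cand != text and jaccard(text, cand) >= min_similarity:
            out.append(cand)
    return out

variants = augment("hello my car")
```

Single-word substitutions survive the similarity filter while double substitutions drift too far and are dropped, yielding a controlled widening of the training data's range.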
-
Publication number: 20210192752
Abstract: A CNN operates on the disparity or motion outputs of a block matching hardware module, such as a DMPAC module, to produce refined disparity or motion streams that improve results in images with ambiguous regions. Because the block matching hardware module provides most of the processing, the CNN can be small and thus able to operate in real time, in contrast to CNNs that perform all of the processing. In one example, the CNN operation is performed only if the block matching hardware module's output confidence level is below a predetermined amount. The CNN can have a number of different configurations and still be sufficiently small to operate in real time on conventional platforms.
Type: Application
Filed: December 23, 2019
Publication date: June 24, 2021
Inventors: Jing Li, Do-Kyoung Kwon, Tarek Aziz Lahlou
-
Patent number: 10809978
Abstract: A merge sort accelerator (MSA) includes a pre-processing stage configured to receive an input vector and generate a pre-processing output vector based on a pre-processing instruction and the input vector. The MSA also includes a merge sort network having multiple sorting stages configured to be selectively enabled. The merge sort network is configured to receive the pre-processing output vector and generate a sorted output vector based on a sorting instruction and the pre-processing output vector. The MSA includes an accumulator stage configured to receive the sorted output vector and update an accumulator vector based on an accumulator instruction and the sorted output vector. The MSA also includes a post-processing stage configured to receive the accumulator vector and generate a post-processing output vector based on a post-processing instruction and the accumulator vector.
Type: Grant
Filed: June 1, 2018
Date of Patent: October 20, 2020
Assignee: Texas Instruments Incorporated
Inventors: Arthur John Redfern, Asheesh Bhardwaj, Tarek Aziz Lahlou, William Franklin Leven
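The four-stage dataflow in this abstract (pre-process, merge sort network, accumulate, post-process) is easy to model in software. The sketch below represents each per-stage "instruction" as a callable; a real accelerator would select among fixed hardware behaviors instead, so treat this purely as a behavioral model.

```python
import numpy as np

def msa_pipeline(vec, pre=lambda v: v, post=lambda v: v,
                 accumulator=None):
    """Behavioral model of the MSA's four stages:
    pre-process -> merge sort network -> accumulator -> post-process."""
    x = pre(np.asarray(vec))        # pre-processing stage
    x = np.sort(x)                  # merge sort network (all stages enabled)
    if accumulator is None:
        accumulator = np.zeros_like(x)
    acc = accumulator + x           # accumulator stage: running update
    return post(acc), acc           # post-processing stage + new accumulator

out, acc = msa_pipeline([3, 1, 2],
                        pre=lambda v: np.abs(v),   # example pre-instruction
                        post=lambda v: v * 2)      # example post-instruction
```

Returning the updated accumulator alongside the post-processed output lets a caller chain invocations, mirroring how the accumulator vector persists across operations in the hardware description.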
-
Patent number: 10810281
Abstract: An outer product multiplier (OPM) system/method that integrates compute gating and input/output circular column rotation functions to balance time spent in compute and data transfer operations while limiting overall dynamic power dissipation is disclosed. Matrix compute gating (MCG) based on a computation decision matrix (CDM) limits the number of computations required on a per-cycle basis to reduce overall matrix compute cycle power dissipation. A circular column rotation vector (CRV) automates input/output data formatting to reduce the number of data transfer operations required to achieve a given matrix computation result. Matrix function operators (MFO) utilizing these features are disclosed and include: matrix-matrix multiplication; matrix-matrix and vector-vector point-wise multiplication, addition, and assignment; matrix-vector multiplication; vector-vector inner product; matrix transpose; matrix row permute; and vector-column permute.
Type: Grant
Filed: August 7, 2018
Date of Patent: October 20, 2020
Assignee: Texas Instruments Incorporated
Inventors: Arthur John Redfern, Donald Edward Steiss, Mihir Narendra Mody, Tarek Aziz Lahlou
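An outer product multiplier builds C = A·B one rank-1 outer product per cycle, and the computation decision matrix (CDM) gates which output entries are worth computing. The sketch below models the CDM as a boolean mask over outputs; note that a real OPM would skip the gated-off multiplies to save power, whereas this software model computes everything and zeroes the gated entries afterwards.

```python
import numpy as np

def gated_outer_product_matmul(A, B, cdm):
    """C = A @ B accumulated as a sum of rank-1 outer products, with a
    computation-decision mask applied to the output. In hardware the
    mask would suppress the multiplies themselves; here it only zeroes
    the corresponding results."""
    m, k = A.shape
    _, n = B.shape
    C = np.zeros((m, n))
    for t in range(k):                       # one outer product per cycle
        C += np.outer(A[:, t], B[t, :])
    return np.where(cdm, C, 0.0)             # compute gating on outputs

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
cdm = np.array([[True, False], [True, True]])  # skip output (0, 1)
C = gated_outer_product_matmul(A, B, cdm)
```

The ungated entries match the ordinary matrix product, while the gated entry stays zero, which is the power-saving behavior the abstract describes.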
-
Publication number: 20200320982
Abstract: Aspects described herein may relate to the determination of data that is indicative of a greater range of speech properties than input text data. The determined data may be used as input to one or more speech processing tasks, such as model training, model validation, model testing, or classification. For example, after a model is trained based on the determined data, the model's performance may exhibit more resilience to a wider range of speech properties. The determined data may include one or more modified versions of the input text data. The one or more modified versions may be associated with one or more speakers or accents and/or may be associated with one or more levels of semantic similarity in relation to the input text data. The one or more modified versions may be determined based on one or more machine learning algorithms.
Type: Application
Filed: February 20, 2020
Publication date: October 8, 2020
Inventors: Christopher Larson, Tarek Aziz Lahlou, Diana Mingels, Zachary Kulis, Erik T. Mueller
-
Publication number: 20200265321
Abstract: Aspects discussed herein may relate to methods and techniques for embedding constrained and unconstrained optimization programs as layers in a neural network architecture. Systems are provided that implement a method of solving a particular optimization problem by a neural network architecture. Prior systems required external software to pre-solve optimization programs so that previously determined parameters could be used as fixed input in the neural network architecture. Aspects described herein may transform the structure of common optimization problems/programs into forms suitable for use in a neural network. This transformation may be invertible, allowing the system to learn the solution to the optimization program using gradient descent techniques via backpropagation of errors through the neural network architecture. Thus, these optimization layers may be solved via operation of the neural network itself.
Type: Application
Filed: February 14, 2020
Publication date: August 20, 2020
Inventors: Tarek Aziz Lahlou, Christopher Larson, Oluwatobi Olabiyi
-
Patent number: 10607598
Abstract: Aspects described herein may relate to the determination of data that is indicative of a greater range of speech properties than input text data. The determined data may be used as input to one or more speech processing tasks, such as model training, model validation, model testing, or classification. For example, after a model is trained based on the determined data, the model's performance may exhibit more resilience to a wider range of speech properties. The determined data may include one or more modified versions of the input text data. The one or more modified versions may be associated with one or more speakers or accents and/or may be associated with one or more levels of semantic similarity in relation to the input text data. The one or more modified versions may be determined based on one or more machine learning algorithms.
Type: Grant
Filed: July 25, 2019
Date of Patent: March 31, 2020
Assignee: Capital One Services, LLC
Inventors: Christopher Larson, Tarek Aziz Lahlou, Diana Mingels, Zachary Kulis, Erik T. Mueller
-
Patent number: 10452960
Abstract: An image classification system includes a convolutional neural network, a confidence predictor, and a fusion classifier. The convolutional neural network is configured to assign a plurality of probability values to each pixel of a first image of a scene and a second image of the scene. Each of the probability values corresponds to a different feature that the convolutional neural network is trained to identify. The confidence predictor is configured to assign a confidence value to each pixel of the first image and to each pixel of the second image. The confidence values correspond to a greatest of the probability values generated by the convolutional neural network for each pixel. The fusion classifier is configured to assign, to each pixel of the first image, a feature that corresponds to a higher of the confidence values assigned to the pixel of the first image and the second image.
Type: Grant
Filed: October 1, 2018
Date of Patent: October 22, 2019
Assignee: Texas Instruments Incorporated
Inventors: Yingmao Li, Vikram VijayanBabu Appia, Ziguo Zhong, Tarek Aziz Lahlou
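The fusion rule in this abstract reduces to a per-pixel comparison: confidence is the largest class probability at each pixel, and the fused label comes from whichever image is more confident there. The sketch below assumes per-pixel probability maps of shape (H, W, n_features); array names and the tie-breaking toward the first image are illustrative choices.

```python
import numpy as np

def fuse(probs_a, probs_b):
    """Per-pixel fusion of two classified images of the same scene.
    probs_* hold one probability per feature per pixel, (H, W, F)."""
    conf_a = probs_a.max(axis=-1)        # confidence predictor, image A
    conf_b = probs_b.max(axis=-1)        # confidence predictor, image B
    labels_a = probs_a.argmax(axis=-1)   # most likely feature, image A
    labels_b = probs_b.argmax(axis=-1)   # most likely feature, image B
    # Fusion classifier: take the label from the more confident image.
    return np.where(conf_a >= conf_b, labels_a, labels_b)

pa = np.array([[[0.9, 0.1], [0.4, 0.6]]])   # 1 x 2 pixels, 2 features
pb = np.array([[[0.6, 0.4], [0.1, 0.9]]])
fused = fuse(pa, pb)
```

The first pixel takes image A's label (0.9 beats 0.6) and the second takes image B's (0.9 beats 0.6 the other way), so each pixel ends up classified by the view that saw it most clearly.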
-
Publication number: 20180373678
Abstract: An outer product multiplier (OPM) system/method that integrates compute gating and input/output circular column rotation functions to balance time spent in compute and data transfer operations while limiting overall dynamic power dissipation is disclosed. Matrix compute gating (MCG) based on a computation decision matrix (CDM) limits the number of computations required on a per-cycle basis to reduce overall matrix compute cycle power dissipation. A circular column rotation vector (CRV) automates input/output data formatting to reduce the number of data transfer operations required to achieve a given matrix computation result. Matrix function operators (MFO) utilizing these features are disclosed and include: matrix-matrix multiplication; matrix-matrix and vector-vector point-wise multiplication, addition, and assignment; matrix-vector multiplication; vector-vector inner product; matrix transpose; matrix row permute; and vector-column permute.
Type: Application
Filed: August 7, 2018
Publication date: December 27, 2018
Inventors: Arthur John Redfern, Donald Edward Steiss, Mihir Narendra Mody, Tarek Aziz Lahlou
-
Publication number: 20180349096
Abstract: A merge sort accelerator (MSA) includes a pre-processing stage configured to receive an input vector and generate a pre-processing output vector based on a pre-processing instruction and the input vector. The MSA also includes a merge sort network having multiple sorting stages configured to be selectively enabled. The merge sort network is configured to receive the pre-processing output vector and generate a sorted output vector based on a sorting instruction and the pre-processing output vector. The MSA includes an accumulator stage configured to receive the sorted output vector and update an accumulator vector based on an accumulator instruction and the sorted output vector. The MSA also includes a post-processing stage configured to receive the accumulator vector and generate a post-processing output vector based on a post-processing instruction and the accumulator vector.
Type: Application
Filed: June 1, 2018
Publication date: December 6, 2018
Inventors: Arthur John Redfern, Asheesh Bhardwaj, Tarek Aziz Lahlou, William Franklin Leven