Patents by Inventor Taniya Mishra
Taniya Mishra has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11769056
Abstract: Machine learning is performed using synthetic data for neural network training using vectors. Facial images are obtained for a neural network training dataset. Facial elements from the facial images are encoded into vector representations of the facial elements. A generative adversarial network (GAN) generator is trained to provide one or more synthetic vectors based on the one or more vector representations, wherein the one or more synthetic vectors enable avoidance of discriminator detection in the GAN. The training of the GAN further comprises determining a generator accuracy using the discriminator. The generator accuracy can enable a classifier, where the classifier comprises a multi-layer perceptron. Additional synthetic vectors are generated in the GAN, wherein the additional synthetic vectors avoid discriminator detection. A machine learning neural network is trained using the additional synthetic vectors.
Type: Grant
Filed: December 29, 2020
Date of Patent: September 26, 2023
Assignee: Affectiva, Inc.
Inventors: Sandipan Banerjee, Rana el Kaliouby, Ajjen Das Joshi, Survi Kyal, Taniya Mishra
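The abstract's core idea — keeping only generator outputs that evade the discriminator — can be illustrated with a minimal toy sketch. Everything here (the centroid-distance "discriminator", the uniform sampling "generator", all numbers) is hypothetical and greatly simplified; it is not the patented implementation.

```python
import random

def real_like(vec, center, radius):
    """Toy discriminator: accepts a vector as 'real' if it lies within
    a fixed Euclidean distance of the real-data centroid."""
    dist = sum((v - c) ** 2 for v, c in zip(vec, center)) ** 0.5
    return dist <= radius

def generate_synthetic(center, radius, n, rng):
    """Toy generator: proposes random vectors and keeps only those that
    the discriminator cannot distinguish from real ones."""
    kept = []
    while len(kept) < n:
        candidate = [c + rng.uniform(-radius, radius) for c in center]
        if real_like(candidate, center, radius):  # avoids discriminator detection
            kept.append(candidate)
    return kept

rng = random.Random(0)
center = [0.2, 0.5, 0.8]  # hypothetical centroid of encoded facial-element vectors
synthetic = generate_synthetic(center, radius=0.3, n=5, rng=rng)
```

The accepted vectors could then augment a training set, which is the role the additional synthetic vectors play in the abstract.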
-
Patent number: 11704574
Abstract: Techniques for machine-trained analysis for multimodal machine learning vehicle manipulation are described. A computing device captures a plurality of information channels, wherein the plurality of information channels includes contemporaneous audio information and video information from an individual. A multilayered convolutional computing system learns trained weights using the audio information and the video information from the plurality of information channels. The trained weights cover both the audio information and the video information and are trained simultaneously. The learning facilitates cognitive state analysis of the audio information and the video information. A computing device within a vehicle captures further information and analyzes the further information using the trained weights. The further information that is analyzed enables vehicle manipulation. The further information can include only video data or only audio data. The further information can include a cognitive state metric.
Type: Grant
Filed: April 20, 2020
Date of Patent: July 18, 2023
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Seyedmohammad Mavadati, Taniya Mishra, Timothy Peacock, Panu James Turcot
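The key claim — one set of weights covering both audio and video, trained simultaneously — can be sketched with a toy perceptron over a concatenated audio+video feature vector. The feature names, samples, and labels below are invented for illustration; the patent describes a multilayered convolutional system, not this single-layer stand-in.

```python
def predict(weights, features):
    """Linear score over the concatenated audio+video feature vector."""
    s = sum(w * f for w, f in zip(weights, features))
    return 1 if s > 0 else 0

def train(samples, epochs=20, lr=0.1):
    """Perceptron-style training in which the audio and video portions of
    the weight vector are updated simultaneously from each example."""
    n = len(samples[0][0])
    weights = [0.0] * n
    for _ in range(epochs):
        for features, label in samples:
            err = label - predict(weights, features)
            weights = [w + lr * err * f for w, f in zip(weights, features)]
    return weights

# Hypothetical samples: [audio_energy, audio_pitch, video_smile, video_brow] -> engaged?
samples = [
    ([0.9, 0.8, 0.9, 0.1], 1),
    ([0.1, 0.2, 0.1, 0.9], 0),
    ([0.8, 0.7, 0.8, 0.2], 1),
    ([0.2, 0.1, 0.2, 0.8], 0),
]
weights = train(samples)
```

Because the audio and video dimensions share one weight vector, every update touches both modalities at once, mirroring the "trained simultaneously" language of the abstract.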
-
Publication number: 20230142720
Abstract: Methods, apparatuses and media for providing content upon request are provided. A search request for content is received from a user. A first filter is applied to the search request to modify the search request before a search algorithm searches for the content to return in response to the search request. Items of content are determined based on the search request to which the first filter is applied. A second filter is applied to the items of content to determine search results. The search results are provided to the user.
Type: Application
Filed: January 5, 2023
Publication date: May 11, 2023
Applicant: AT&T Intellectual Property I, L.P.
Inventors: Taniya Mishra, Dimitrios Dimitriadis, Diane Kearns
-
Patent number: 11594225
Abstract: Methods, apparatuses and media for providing content upon request are provided. A search request for content is received from a user. A first filter is applied to the search request to modify the search request before a search algorithm searches for the content to return in response to the search request. Items of content are determined based on the search request to which the first filter is applied. A second filter is applied to the items of content to determine search results. The search results are provided to the user.
Type: Grant
Filed: August 21, 2018
Date of Patent: February 28, 2023
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Taniya Mishra, Dimitrios Dimitriadis, Diane Kearns
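The two-filter pipeline in the abstract (modify the query first, then filter the matched items) can be sketched in a few lines. The catalog, filter functions, and matching rule below are hypothetical stand-ins, not the claimed system.

```python
def search(query, catalog, query_filter, result_filter):
    """Apply a first filter to the query, run the search, then apply a
    second filter to the matched items before returning results."""
    modified = query_filter(query)                            # first filter
    matches = [item for item in catalog if modified in item.lower()]
    return [item for item in matches if result_filter(item)]  # second filter

catalog = ["Jazz Concert (HD)", "Jazz Documentary", "Rock Concert (HD)"]
normalize = lambda q: q.strip().lower()   # first filter: clean the query
hd_only = lambda item: "(HD)" in item     # second filter: restrict the results

results = search("  Jazz  ", catalog, normalize, hd_only)
```

The point of the ordering is that the first filter shapes what the search algorithm sees, while the second prunes what the user sees.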
-
Publication number: 20230033776
Abstract: Techniques for cognitive analysis for directed control transfer with autonomous vehicles are described. In-vehicle sensors are used to collect cognitive state data for an individual within a vehicle which has an autonomous mode of operation. The cognitive state data includes infrared, facial, audio, or biosensor data. One or more processors analyze the cognitive state data collected from the individual to produce cognitive state information. The cognitive state information includes a subset or summary of cognitive state data, or an analysis of the cognitive state data. The individual is scored based on the cognitive state information to produce a cognitive scoring metric. A state of operation is determined for the vehicle. A condition of the individual is evaluated based on the cognitive scoring metric. Control is transferred between the vehicle and the individual based on the state of operation of the vehicle and the condition of the individual.
Type: Application
Filed: October 10, 2022
Publication date: February 2, 2023
Applicant: Affectiva, Inc.
Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Taniya Mishra, Andrew Todd Zeilman, Gabriele Zijderveld
-
Patent number: 11465640
Abstract: Techniques are described for cognitive analysis for directed control transfer for autonomous vehicles. In-vehicle sensors are used to collect cognitive state data for an individual within a vehicle which has an autonomous mode of operation. The cognitive state data includes infrared, facial, audio, or biosensor data. One or more processors analyze the cognitive state data collected from the individual to produce cognitive state information. The cognitive state information includes a subset or summary of cognitive state data, or an analysis of the cognitive state data. The individual is scored based on the cognitive state information to produce a cognitive scoring metric. A state of operation is determined for the vehicle. A condition of the individual is evaluated based on the cognitive scoring metric. Control is transferred between the vehicle and the individual based on the state of operation of the vehicle and the condition of the individual.
Type: Grant
Filed: December 28, 2018
Date of Patent: October 11, 2022
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Taniya Mishra, Andrew Todd Zeilman, Gabriele Zijderveld
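The decision structure of the abstract (score the individual, check the vehicle's state of operation, then decide the transfer) can be shown as a toy rule. The scoring function, threshold, and state names are invented for illustration only.

```python
def cognitive_score(state):
    """Toy cognitive scoring metric: average of normalized readings."""
    return sum(state.values()) / len(state)

def transfer_control(vehicle_mode, state, threshold=0.6):
    """Decide who should hold control, based on the vehicle's state of
    operation and the occupant's cognitive scoring metric."""
    score = cognitive_score(state)
    if vehicle_mode == "autonomous" and score >= threshold:
        return "individual"   # occupant is fit to take over
    if vehicle_mode == "manual" and score < threshold:
        return "vehicle"      # hand control back to autonomy
    return "no change"

alert = {"attention": 0.9, "calm": 0.8}    # hypothetical sensor summary
drowsy = {"attention": 0.2, "calm": 0.5}
```

Control moves in either direction: an alert occupant may take over from autonomy, and a drowsy driver may be handed back to it.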
-
Patent number: 11410438
Abstract: Analysis for convolutional processing is performed using logic encoded in a semiconductor processor. The semiconductor chip evaluates pixels within an image of a person in a vehicle, where the analysis identifies a facial portion of the person. The facial portion of the person can include facial landmarks or regions. The semiconductor chip identifies one or more facial expressions based on the facial portion. The facial expressions can include a smile, frown, smirk, or grimace. The semiconductor chip classifies the one or more facial expressions for cognitive response content. The semiconductor chip evaluates the cognitive response content to produce cognitive state information for the person. The semiconductor chip enables manipulation of the vehicle based on communication of the cognitive state information to a component of the vehicle.
Type: Grant
Filed: November 8, 2019
Date of Patent: August 9, 2022
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Taniya Mishra, Boisy G. Pitre, Panu James Turcot, Andrew Todd Zeilman
-
Publication number: 20220114912
Abstract: Disclosed techniques describe a digital platform for proxy mentor/mentee communication. A digital platform that facilitates communication between a mentee and a plurality of mentors is provided. The needs of the mentee are determined. The mentee needs are used to define the plurality of mentors for the mentee. A query is obtained from a mentee and communicated to one or more of the mentors. A response is received from one or more mentors. Machine learning is performed by the digital platform using the response from the one or more mentors and the query from the mentee. A further query is received from the mentee. The further query is based on the set of determined needs. A specific mentor is queried with the further query, based on the machine learning. Bespoke information about the mentee is provided, based on the machine learning by the digital platform.
Type: Application
Filed: October 13, 2021
Publication date: April 14, 2022
Applicant: MySureStart, Inc.
Inventor: Taniya Mishra
-
Patent number: 11292477
Abstract: Vehicle manipulation uses cognitive state engineering. Images of a vehicle occupant are obtained using imaging devices within a vehicle. The one or more images include facial data of the vehicle occupant. A computing device is used to analyze the images to determine a cognitive state. Audio information from the occupant is obtained and the analyzing is augmented based on the audio information. The cognitive state is mapped to a loading curve, where the loading curve represents a continuous spectrum of cognitive state loading variation. The vehicle is manipulated, based on the mapping to the loading curve, where the manipulating uses cognitive state alteration engineering. The manipulating includes changing vehicle occupant sensory stimulation. Additional images of additional occupants of the vehicle are obtained and analyzed to determine additional cognitive states. Additional cognitive states are used to adjust the mapping. A cognitive load is estimated based on eye gaze tracking.
Type: Grant
Filed: June 2, 2019
Date of Patent: April 5, 2022
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Panu James Turcot, Andrew Todd Zeilman, Taniya Mishra
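The loading-curve idea — a continuous mapping from cognitive state to load, which then drives a stimulation change — can be sketched with a toy curve. The quadratic curve and the two thresholds below are arbitrary illustrative choices, not the patented mapping.

```python
def loading_curve(cognitive_state):
    """Map a cognitive-state value in [0, 1] onto a continuous loading
    curve; here the spectrum is a simple quadratic."""
    x = min(max(cognitive_state, 0.0), 1.0)
    return x * x

def manipulate(load):
    """Pick a sensory-stimulation change for the occupant from the load."""
    if load < 0.3:
        return "increase stimulation"  # under-loaded, risk of disengagement
    if load > 0.7:
        return "reduce stimulation"    # overloaded occupant
    return "maintain"
```

Because the curve is continuous, small changes in the estimated cognitive state produce small changes in load, and the abstract's adjustment from additional occupants amounts to reshaping this mapping.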
-
Publication number: 20220101146
Abstract: Techniques for machine learning based on neural network training with bias mitigation are disclosed. Facial images for a neural network configuration and a neural network training dataset are obtained. The training dataset is associated with the neural network configuration. The facial images are partitioned into multiple subgroups, wherein the subgroups represent demographics with potential for biased training. A multifactor key performance indicator (KPI) is calculated per image. The calculating is based on analyzing performance of two or more image classifier models. The neural network configuration and the training dataset are promoted to a production neural network, wherein the promoting is based on the KPI. The KPI identifies bias in the training dataset. Promotion of the neural network configuration and the neural network training dataset is based on identified bias.
Type: Application
Filed: September 23, 2021
Publication date: March 31, 2022
Applicant: Affectiva, Inc.
Inventors: Rana el Kaliouby, Sneha Bhattacharya, Taniya Mishra, Shruti Ranjalkar
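A toy version of the gating step can make the flow concrete: measure classifier performance per demographic subgroup, compress it into a bias KPI, and promote only when the KPI is acceptable. The KPI definition here (worst accuracy gap between subgroups across models), the classifiers, and the data are all hypothetical simplifications of the claimed per-image multifactor KPI.

```python
def subgroup_accuracy(model, samples):
    """Fraction of a subgroup's samples the model labels correctly."""
    return sum(model(x) == y for x, y in samples) / len(samples)

def bias_kpi(models, subgroups):
    """Toy KPI: worst accuracy gap between any two demographic
    subgroups, taken over all candidate classifier models."""
    gaps = []
    for model in models:
        accs = [subgroup_accuracy(model, s) for s in subgroups.values()]
        gaps.append(max(accs) - min(accs))
    return max(gaps)

def promote(kpi, max_gap=0.1):
    """Promote the configuration to production only if bias stays small."""
    return kpi <= max_gap

# Hypothetical threshold classifiers over (feature, label) pairs
model_a = lambda x: x >= 0.5
model_b = lambda x: x >= 0.4
subgroups = {
    "group_1": [(0.6, True), (0.3, False), (0.7, True), (0.2, False)],
    "group_2": [(0.45, True), (0.3, False), (0.8, True), (0.1, False)],
}
kpi = bias_kpi([model_a, model_b], subgroups)
```

Here model_a misclassifies one group_2 sample, so the KPI exposes a 0.25 accuracy gap and promotion is refused.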
-
Publication number: 20220067519
Abstract: Disclosed techniques include neural network architecture using encoder-decoder models. A facial image is obtained for processing on a neural network. The facial image includes unpaired facial image attributes. The facial image is processed through a first encoder-decoder pair and a second encoder-decoder pair. The first encoder-decoder pair decomposes a first image attribute subspace. The second encoder-decoder pair decomposes a second image attribute subspace. The first encoder-decoder pair outputs a transformation mask based on the first image attribute subspace. The second encoder-decoder pair outputs a second image transformation mask based on the second image attribute subspace. The first image transformation mask and the second image transformation mask are concatenated to enable downstream processing. The concatenated transformation masks are processed on a third encoder-decoder pair and a resulting image is output. The resulting image eliminates a paired training data requirement.
Type: Application
Filed: August 27, 2021
Publication date: March 3, 2022
Applicant: Affectiva, Inc.
Inventors: Taniya Mishra, Sandipan Banerjee, Ajjen Das Joshi
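The data flow — two per-attribute masks, concatenated, consumed by a third stage that emits the final image — can be traced with a toy numeric sketch. The "encoder-decoder pairs" here are trivial per-pixel functions standing in for real networks; the image, offsets, and fusion rule are invented for illustration.

```python
def attribute_mask(image, transform):
    """Stand-in for an encoder-decoder pair: decompose one attribute
    subspace and emit a per-pixel transformation mask."""
    return [transform(p) for p in image]

def fuse(masks, image):
    """Stand-in for the third encoder-decoder pair: consume the
    concatenated masks and output the resulting image."""
    n = len(image)
    combined = [sum(masks[i + k * n] for k in range(len(masks) // n))
                for i in range(n)]
    return [p + m for p, m in zip(image, combined)]

image = [0.2, 0.5, 0.8]                          # toy 3-pixel facial image
mask_a = attribute_mask(image, lambda p: 0.1)    # e.g. expression subspace
mask_b = attribute_mask(image, lambda p: -0.05)  # e.g. illumination subspace
concatenated = mask_a + mask_b                   # concatenation step
result = fuse(concatenated, image)
```

Each attribute contributes an additive mask, so the two subspaces can be learned from unpaired examples and only combined at the fusion stage.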
-
Patent number: 11073899
Abstract: Techniques for multidevice, multimodal emotion services monitoring are disclosed. An expression to be detected is determined. The expression relates to a cognitive state of an individual. Input on the cognitive state of the individual is obtained using a device local to the individual. Monitoring for the expression is performed. The monitoring uses a background process on a device remote from the individual. An occurrence of the expression is identified. The identification is performed by the background process. Notification that the expression was identified is provided. The notification is provided from the background process to a device distinct from the device running the background process. The expression is defined as a multimodal expression. The multimodal expression includes image data and audio data from the individual. The notification enables emotion services to be provided. The emotion services augment messaging, social media, and automated help applications.
Type: Grant
Filed: September 30, 2019
Date of Patent: July 27, 2021
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Seyedmohammad Mavadati, Taniya Mishra, Timothy Peacock, Gregory Poulin, Panu James Turcot
-
Publication number: 20210201003
Abstract: Machine learning is performed using synthetic data for neural network training using vectors. Facial images are obtained for a neural network training dataset. Facial elements from the facial images are encoded into vector representations of the facial elements. A generative adversarial network (GAN) generator is trained to provide one or more synthetic vectors based on the one or more vector representations, wherein the one or more synthetic vectors enable avoidance of discriminator detection in the GAN. The training of the GAN further comprises determining a generator accuracy using the discriminator. The generator accuracy can enable a classifier, where the classifier comprises a multi-layer perceptron. Additional synthetic vectors are generated in the GAN, wherein the additional synthetic vectors avoid discriminator detection. A machine learning neural network is trained using the additional synthetic vectors.
Type: Application
Filed: December 29, 2020
Publication date: July 1, 2021
Applicant: Affectiva, Inc.
Inventors: Sandipan Banerjee, Rana el Kaliouby, Ajjen Das Joshi, Survi Kyal, Taniya Mishra
-
Publication number: 20210201911
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for performing speaker verification. A system configured to practice the method receives a request to verify a speaker, generates a text challenge that is unique to the request, and, in response to the request, prompts the speaker to utter the text challenge. Then the system records a dynamic image feature of the speaker as the speaker utters the text challenge, and performs speaker verification based on the dynamic image feature and the text challenge. Recording the dynamic image feature of the speaker can include recording video of the speaker while speaking the text challenge. The dynamic feature can include a movement pattern of head, lips, mouth, eyes, and/or eyebrows of the speaker. The dynamic image feature can relate to phonetic content of the speaker speaking the challenge, speech prosody, and the speaker's facial expression responding to content of the challenge.
Type: Application
Filed: March 15, 2021
Publication date: July 1, 2021
Inventors: Ann K. Syrdal, Sumit Chopra, Patrick Haffner, Taniya Mishra, Ilija Zeljkovic, Eric Zavesky
-
Patent number: 10950237
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for performing speaker verification. A system configured to practice the method receives a request to verify a speaker, generates a text challenge that is unique to the request, and, in response to the request, prompts the speaker to utter the text challenge. Then the system records a dynamic image feature of the speaker as the speaker utters the text challenge, and performs speaker verification based on the dynamic image feature and the text challenge. Recording the dynamic image feature of the speaker can include recording video of the speaker while speaking the text challenge. The dynamic feature can include a movement pattern of head, lips, mouth, eyes, and/or eyebrows of the speaker. The dynamic image feature can relate to phonetic content of the speaker speaking the challenge, speech prosody, and the speaker's facial expression responding to content of the challenge.
Type: Grant
Filed: November 30, 2015
Date of Patent: March 16, 2021
Assignee: Nuance Communications, Inc.
Inventors: Ann K. Syrdal, Sumit Chopra, Patrick Haffner, Taniya Mishra, Ilija Zeljkovic, Eric Zavesky
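The two moving parts of the method — a per-request unique text challenge, and verification of a dynamic image feature captured while the challenge is spoken — can be sketched as follows. The phrase list, the hash-based challenge selection, and the lip-movement vectors are hypothetical stand-ins for the real challenge generation and video feature extraction.

```python
import hashlib

def text_challenge(request_id, phrases):
    """Pick a challenge that is deterministic for, and unique to,
    this verification request."""
    idx = int(hashlib.sha256(request_id.encode()).hexdigest(), 16) % len(phrases)
    return phrases[idx]

def verify(enrolled, observed, tolerance=0.1):
    """Compare the dynamic image feature (e.g. a lip-movement pattern)
    recorded while the speaker uttered the challenge against the
    enrolled profile."""
    return all(abs(a - b) <= tolerance for a, b in zip(enrolled, observed))

phrases = ["blue river stone", "seven quiet lamps", "open garden door"]
challenge = text_challenge("req-42", phrases)
enrolled_lips = [0.3, 0.7, 0.5]  # hypothetical enrolled movement pattern
genuine = verify(enrolled_lips, [0.32, 0.68, 0.49])
imposter = verify(enrolled_lips, [0.9, 0.1, 0.5])
```

Tying the feature to a fresh challenge is what resists replay: a recording of an earlier session would not match the new challenge's phonetic content.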
-
Patent number: 10853420
Abstract: Quantitative attributes and qualitative attributes collected for users having user profiles are extracted from user activity data. The quantitative attributes and the qualitative attributes are extracted during a specified time period determined before the user activity data is collected. Values for the quantitative attributes and the qualitative attributes are plotted, and subsets of the user profiles are clustered into separate groups of users based on the plotted values. Product-related content is delivered to the groups of users based on the clustering.
Type: Grant
Filed: August 21, 2017
Date of Patent: December 1, 2020
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Srinivas Bangalore, Junlan Feng, Michael J. Johnston, Taniya Mishra
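The clustering step can be shown with a toy nearest-centroid grouping over one plotted attribute value per user. The attribute (viewing hours), centroids, and users are invented for illustration; the patent does not specify this particular clustering algorithm.

```python
def cluster(points, centroids):
    """Assign each user's plotted attribute value to its nearest centroid,
    forming separate groups of users."""
    groups = {c: [] for c in centroids}
    for user, value in points:
        nearest = min(centroids, key=lambda c: abs(c - value))
        groups[nearest].append(user)
    return groups

# Hypothetical (user, avg daily viewing hours) pairs from extracted attributes
points = [("u1", 0.5), ("u2", 3.8), ("u3", 0.9), ("u4", 4.2)]
groups = cluster(points, centroids=(1.0, 4.0))
```

Each resulting group could then receive product-related content tailored to its cluster, per the final step of the abstract.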
-
Publication number: 20200327880
Abstract: Systems, methods, and computer-readable storage devices for receiving an utterance from a user and analyzing the utterance to identify the demographics of the user. The system then analyzes the utterance to determine the prosody of the utterance, and retrieves from the Internet data associated with the determined demographics. Using the retrieved data, the system retrieves, also from the Internet, recorded speech matching the identified prosody. The recorded speech, which is based on the demographic data of the utterance and has a prosody matching the utterance, is then saved to a database for future use in generating speech specific to the user.
Type: Application
Filed: June 24, 2020
Publication date: October 15, 2020
Inventors: Srinivas Bangalore, Taniya Mishra
-
Publication number: 20200242383
Abstract: Techniques for machine-trained analysis for multimodal machine learning vehicle manipulation are described. A computing device captures a plurality of information channels, wherein the plurality of information channels includes contemporaneous audio information and video information from an individual. A multilayered convolutional computing system learns trained weights using the audio information and the video information from the plurality of information channels. The trained weights cover both the audio information and the video information and are trained simultaneously. The learning facilitates cognitive state analysis of the audio information and the video information. A computing device within a vehicle captures further information and analyzes the further information using the trained weights. The further information that is analyzed enables vehicle manipulation. The further information can include only video data or only audio data. The further information can include a cognitive state metric.
Type: Application
Filed: April 20, 2020
Publication date: July 30, 2020
Applicant: Affectiva, Inc.
Inventors: Rana el Kaliouby, Seyedmohammad Mavadati, Taniya Mishra, Timothy Peacock, Panu James Turcot
-
Patent number: 10720147
Abstract: Systems, methods, and computer-readable storage devices for receiving an utterance from a user and analyzing the utterance to identify the demographics of the user. The system then analyzes the utterance to determine the prosody of the utterance, and retrieves from the Internet data associated with the determined demographics. Using the retrieved data, the system retrieves, also from the Internet, recorded speech matching the identified prosody. The recorded speech, which is based on the demographic data of the utterance and has a prosody matching the utterance, is then saved to a database for future use in generating speech specific to the user.
Type: Grant
Filed: August 1, 2019
Date of Patent: July 21, 2020
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Srinivas Bangalore, Taniya Mishra
-
Patent number: 10665226
Abstract: Systems, methods, and computer-readable storage devices for generating speech using a presentation style specific to a user, and in particular the user's social group. Systems configured according to this disclosure can then use the resulting, personalized, text and/or speech in a spoken dialogue or presentation system to communicate with the user. For example, a system practicing the disclosed method can receive speech from a user, identify the user, and respond to the received speech by applying a personalized natural language generation model. The personalized natural language generation model provides communications which can be specific to the identified user.
Type: Grant
Filed: June 4, 2019
Date of Patent: May 26, 2020
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Taniya Mishra, Alistair D. Conkie, Svetlana Stoyanchev
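The identify-then-personalize loop in the example can be sketched with a toy speaker lookup and style-specific templates. The voiceprints, group labels, and response templates are all hypothetical; a real personalized NLG model would be learned rather than templated.

```python
def identify(speech_feature, voiceprints):
    """Toy speaker identification: nearest enrolled voiceprint."""
    return min(voiceprints, key=lambda u: abs(voiceprints[u] - speech_feature))

def respond(user, intent, styles, user_groups):
    """Toy personalized NLG: render the same intent in the presentation
    style associated with the identified user's social group."""
    template = styles[user_groups[user]]
    return template.format(intent=intent)

voiceprints = {"alice": 0.2, "bob": 0.8}   # hypothetical 1-D voiceprints
user_groups = {"alice": "formal", "bob": "casual"}
styles = {
    "formal": "Certainly. I will {intent} right away.",
    "casual": "Sure thing, {intent} coming up!",
}
user = identify(0.25, voiceprints)
reply = respond(user, "check the weather", styles, user_groups)
```

The same intent yields different surface text per group, which is the "presentation style specific to the user's social group" the abstract describes.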