Patents by Inventor Jill Fain Lehman
Jill Fain Lehman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20220366210
Abstract: A system for simulating human-like affect-driven behavior includes a hardware processor and a system memory storing software code providing a virtual agent. The hardware processor executes the software code to identify a character assumed by the virtual agent and having a personality, a target state of motivational fulfillment, a baseline mood, and emotions. The software code identifies current physical and motivational fulfillment states, and currently active emotions of the character, and determines a current mood of the character based on the baseline mood and the currently active emotions. The software code further detects an experience by the character and plans multiple behaviors including a first behavior based on the experience and the current physical state, a second behavior based on the experience, the personality, the current mood, and the currently active emotions, and a third behavior based on the target and current states of motivational fulfillment.
Type: Application
Filed: July 7, 2022
Publication date: November 17, 2022
Inventors: Jakob Buhmann, Douglas A. Fidaleo, Maayan Shvo, Mubbasir Kapadia, Jill Fain Lehman, Sarah K. Wulfeck, Rylan Gibbens, Steven Poulakos, John Wiseman
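The mood-update step this abstract describes (deriving a current mood from a baseline mood plus currently active emotions) can be sketched in Python. This is a hypothetical illustration only: the `Emotion` fields, the intensity-weighted blend, and the weight cap are assumptions, not the patented method.

```python
from dataclasses import dataclass

@dataclass
class Emotion:
    valence: float    # -1.0 (negative) .. 1.0 (positive); assumed representation
    intensity: float  # 0.0 .. 1.0

def current_mood(baseline_mood: float, active_emotions: list) -> float:
    """Blend a baseline mood with currently active emotions.

    Assumed scheme: the mood drifts from the baseline toward the
    intensity-weighted average valence of the active emotions, with
    the emotional pull's weight growing with total intensity.
    """
    if not active_emotions:
        return baseline_mood
    total = sum(e.intensity for e in active_emotions)
    emotional_pull = sum(e.valence * e.intensity for e in active_emotions) / total
    w = min(total, 1.0)  # cap the emotions' influence at 1.0
    return (1 - w) * baseline_mood + w * emotional_pull
```

With no active emotions the character simply sits at its baseline mood; a single strong positive emotion pulls the mood toward its valence in proportion to its intensity.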
-
Patent number: 11416732
Abstract: A system for simulating human-like affect-driven behavior includes a hardware processor and a system memory storing software code providing a virtual agent. The hardware processor executes the software code to identify a character assumed by the virtual agent and having a personality, a target state of motivational fulfillment, a baseline mood, and emotions. The software code identifies current physical and motivational fulfillment states, and currently active emotions of the character, and determines a current mood of the character based on the baseline mood and the currently active emotions. The software code further detects an experience by the character and plans multiple behaviors including a first behavior based on the experience and the current physical state, a second behavior based on the experience, the personality, the current mood, and the currently active emotions, and a third behavior based on the target and current states of motivational fulfillment.
Type: Grant
Filed: December 5, 2018
Date of Patent: August 16, 2022
Assignee: Disney Enterprises, Inc.
Inventors: Jakob Buhmann, Douglas A. Fidaleo, Maayan Shvo, Mubbasir Kapadia, Jill Fain Lehman, Sarah K. Wulfeck, Rylan Gibbens, Steven Poulakos, John Wiseman
-
Patent number: 11403556
Abstract: A system providing an interactive social agent can include a computing platform having a hardware processor and a memory storing a training content standardization software code configured to receive content depicting human expressions and including annotation data describing the human expressions from multiple content annotation sources, generate a corresponding content descriptor for each content annotation source to translate the annotation data into a standardized data format, and transform the annotation data into the standardized data format using the corresponding content descriptor. The content and the annotation data in the standardized format are stored as training data for use in training expressions for the interactive social agent.
Type: Grant
Filed: April 3, 2019
Date of Patent: August 2, 2022
Assignee: Disney Enterprises, Inc.
Inventors: James R. Kennedy, Jill Fain Lehman, Kevin El Haddad, Douglas A. Fidaleo
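A per-source "content descriptor" that translates annotation data into a standardized format, as this abstract describes, might look like the following Python sketch. All field names, label vocabularies, and the descriptor structure here are invented for illustration; the patent does not specify them.

```python
# Hypothetical content descriptors: one per annotation source, mapping that
# source's field names and label vocabulary onto a standardized schema.
DESCRIPTORS = {
    "source_a": {
        "field_map": {"emo": "expression", "t0": "start_ms", "t1": "end_ms"},
        "label_map": {"hap": "happiness", "sad": "sadness"},
    },
}

def standardize(source: str, annotation: dict) -> dict:
    """Transform one source-specific annotation record into the
    standardized format using that source's content descriptor."""
    desc = DESCRIPTORS[source]
    out = {}
    for src_field, std_field in desc["field_map"].items():
        value = annotation[src_field]
        if std_field == "expression":
            # Translate the source's label vocabulary as well.
            value = desc["label_map"].get(value, value)
        out[std_field] = value
    return out
```

The point of the design is that adding a new annotation source only requires writing a new descriptor, not new transformation code.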
-
Publication number: 20200320427
Abstract: A system providing an interactive social agent can include a computing platform having a hardware processor and a memory storing a training content standardization software code configured to receive content depicting human expressions and including annotation data describing the human expressions from multiple content annotation sources, generate a corresponding content descriptor for each content annotation source to translate the annotation data into a standardized data format, and transform the annotation data into the standardized data format using the corresponding content descriptor. The content and the annotation data in the standardized format are stored as training data for use in training expressions for the interactive social agent.
Type: Application
Filed: April 3, 2019
Publication date: October 8, 2020
Inventors: James R. Kennedy, Jill Fain Lehman, Kevin El Haddad, Douglas A. Fidaleo
-
Publication number: 20200184306
Abstract: A system for simulating human-like affect-driven behavior includes a hardware processor and a system memory storing software code providing a virtual agent. The hardware processor executes the software code to identify a character assumed by the virtual agent and having a personality, a target state of motivational fulfillment, a baseline mood, and emotions. The software code identifies current physical and motivational fulfillment states, and currently active emotions of the character, and determines a current mood of the character based on the baseline mood and the currently active emotions. The software code further detects an experience by the character and plans multiple behaviors including a first behavior based on the experience and the current physical state, a second behavior based on the experience, the personality, the current mood, and the currently active emotions, and a third behavior based on the target and current states of motivational fulfillment.
Type: Application
Filed: December 5, 2018
Publication date: June 11, 2020
Inventors: Jakob Buhmann, Douglas A. Fidaleo, Maayan Shvo, Mubbasir Kapadia, Jill Fain Lehman, Sarah K. Wulfeck, Rylan Gibbens, Steven Poulakos, John Wiseman
-
Patent number: 10311863
Abstract: There is provided a system including a microphone configured to receive an input speech, an analog to digital (A/D) converter configured to convert the input speech to a digital form and generate a digitized speech including a plurality of segments having acoustic features, a memory storing an executable code, and a processor executing the executable code to extract a plurality of acoustic feature vectors from a first segment of the digitized speech, determine, based on the plurality of acoustic feature vectors, a plurality of probability distribution vectors corresponding to the probabilities that the first segment includes each of a first keyword, a second keyword, both the first keyword and the second keyword, a background, and a social speech, and assign a first classification label to the first segment based on an analysis of the plurality of probability distribution vectors of one or more segments preceding the first segment and the probability distribution vectors of the first segment.
Type: Grant
Filed: September 2, 2016
Date of Patent: June 4, 2019
Assignee: Disney Enterprises, Inc.
Inventors: Jill Fain Lehman, Nikolas Wolfe, Andre Pereira
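The final classification step here (labeling a segment from its own class-probability vector plus those of preceding segments) can be sketched as follows. The five-way class set comes from the abstract; the "analysis" shown, simple averaging over the context window, is an assumed stand-in for whatever the patent actually claims.

```python
# Class set taken from the abstract: two keywords, their co-occurrence,
# background, and social speech.
CLASSES = ["keyword_1", "keyword_2", "both_keywords", "background", "social_speech"]

def classify_segment(history: list, current: list) -> str:
    """Assign a classification label to the current segment.

    `history` holds the probability-distribution vectors of one or more
    preceding segments; `current` is the current segment's vector.
    Assumed analysis: average the vectors and take the argmax class.
    """
    vectors = history + [current]
    avg = [sum(v[i] for v in vectors) / len(vectors) for i in range(len(CLASSES))]
    return CLASSES[max(range(len(CLASSES)), key=avg.__getitem__)]
```

Averaging over preceding segments smooths out a single noisy frame: a brief spike toward "social_speech" does not flip the label if the surrounding context still points to "background".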
-
Patent number: 10269356
Abstract: There is provided a system comprising a microphone configured to receive an input speech from an individual, an analog-to-digital (A/D) converter to convert the input speech to digital form and generate a digitized speech, a memory storing an executable code and an age estimation database, a hardware processor executing the executable code to receive the digitized speech, identify a plurality of boundaries in the digitized speech delineating a plurality of phonemes in the digitized speech, extract a plurality of formant-based feature vectors from each phoneme in the digitized speech based on at least one of a formant position, a formant bandwidth, and a formant dispersion, compare the plurality of formant-based feature vectors with age determinant formant-based feature vectors of the age estimation database, determine the age of the individual when the comparison finds a match in the age estimation database, and communicate an age-appropriate response to the individual.
Type: Grant
Filed: August 22, 2016
Date of Patent: April 23, 2019
Assignee: Disney Enterprises, Inc.
Inventors: Rita Singh, Jill Fain Lehman
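The database-comparison step (matching extracted formant-based feature vectors against age-determinant reference vectors) might look like this sketch. The database contents, feature layout, distance metric, and match threshold are all illustrative assumptions.

```python
import math

# Hypothetical age-estimation database: age label -> reference formant-based
# feature vector (e.g. a formant position, bandwidth, and dispersion value).
AGE_DB = {
    "child": [650.0, 1900.0, 120.0],
    "adult": [500.0, 1500.0, 90.0],
}

def estimate_age(feature_vector: list, max_distance: float = 200.0):
    """Return the best-matching age label, or None when no database entry
    is close enough to count as a match (assumed Euclidean distance)."""
    best_label, best_dist = None, float("inf")
    for label, ref in AGE_DB.items():
        dist = math.dist(feature_vector, ref)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist <= max_distance else None
```

Returning `None` on a failed match matters for the claimed behavior: the age-appropriate response is only communicated "when the comparison finds a match".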
-
Patent number: 10162815
Abstract: A dialog knowledge acquisition system includes a hardware processor, a memory, and hardware processor controlled input and output modules. The memory stores a dialog manager configured to instantiate a persistent interactive personality (PIP), and a dialog graph having linked dialog state nodes. The dialog manager receives dialog initiation data, identifies a first state node on the dialog graph corresponding to the dialog initiation data, determines a dialog interaction by the PIP based on the dialog initiation data and the first state node, and renders the dialog interaction. The dialog manager also receives feedback data corresponding to the dialog interaction, identifies a second state node based on the dialog initiation data, the dialog interaction, and the feedback data, and utilizes the dialog initiation data, the first state node, the dialog interaction, the feedback data, and the second state node to train the dialog graph.
Type: Grant
Filed: January 3, 2017
Date of Patent: December 25, 2018
Assignee: Disney Enterprises, Inc.
Inventors: Jill Fain Lehman, Boyang Albert Li, Andre Tiago Abelho Pereira, Iolanda M. Dos Santos Carvalho Leite, Ming Sun, Eli Pincus
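The dialog-graph traversal and training-data collection described above can be sketched as a small Python class. The node/edge representation and the logged tuple are assumptions; the patent only says the listed items are used to train the graph.

```python
class DialogGraph:
    """Minimal sketch of a dialog graph with linked state nodes.

    Assumed representation: each node holds the utterance the PIP renders,
    and edges are keyed by (current node, feedback received).
    """
    def __init__(self):
        self.nodes = {}         # node_id -> utterance rendered by the PIP
        self.edges = {}         # (node_id, feedback) -> next node_id
        self.training_log = []  # tuples later used to train the graph

    def add_node(self, node_id, utterance):
        self.nodes[node_id] = utterance

    def link(self, src, feedback, dst):
        self.edges[(src, feedback)] = dst

    def step(self, state, feedback):
        """Move from the current state node to the next one based on
        feedback, logging the transition as training data."""
        nxt = self.edges[(state, feedback)]
        self.training_log.append((state, self.nodes[state], feedback, nxt))
        return nxt, self.nodes[nxt]
```

Each `step` both advances the conversation and records the (first node, interaction, feedback, second node) tuple, mirroring how the abstract reuses the same data for training.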
-
Patent number: 10019992
Abstract: A device includes a plurality of components, a memory having a keyword recognition module and a context recognition module, a microphone configured to receive an input speech spoken by a user, an analog-to-digital converter configured to convert the input speech from an analog form to a digital form and generate a digitized speech, and a processor. The processor is configured to detect, using the keyword recognition module, a keyword in the digitized speech, initiate, in response to detecting the keyword by the keyword recognition module, an action to be taken by one of the plurality of components, wherein the keyword is associated with the action, determine, using the context recognition module, a context for the keyword, and execute the action if the context determined by the context recognition module indicates that the keyword is a command.
Type: Grant
Filed: June 29, 2015
Date of Patent: July 10, 2018
Assignee: Disney Enterprises, Inc.
Inventors: Jill Fain Lehman, Samer Al Moubayed
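The two-stage logic (detect a keyword, then only execute its action if the context says it was a command) can be illustrated with a toy sketch. The context rule shown, treating quoted or negated keywords as conversational, is a made-up heuristic standing in for the patented context recognition module.

```python
def handle_speech(utterance: str, keyword_actions: dict):
    """Detect a keyword, then use the surrounding words as context to
    decide whether it was meant as a command (hypothetical heuristic)."""
    words = utterance.lower().split()
    for keyword, action in keyword_actions.items():
        if keyword in words:
            # Assumed context rule: a keyword that is negated or merely
            # reported ("don't", "said") is conversation, not a command.
            context = set(words) - {keyword}
            if context & {"don't", "dont", "said", "says"}:
                return None   # keyword present, but not a command
            return action     # keyword used as a command: execute action
    return None
```

This captures the abstract's key distinction: keyword detection alone initiates an action, but context gating decides whether it actually executes.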
-
Publication number: 20180068656
Abstract: There is provided a system including a microphone configured to receive an input speech, an analog to digital (A/D) converter configured to convert the input speech to a digital form and generate a digitized speech including a plurality of segments having acoustic features, a memory storing an executable code, and a processor executing the executable code to extract a plurality of acoustic feature vectors from a first segment of the digitized speech, determine, based on the plurality of acoustic feature vectors, a plurality of probability distribution vectors corresponding to the probabilities that the first segment includes each of a first keyword, a second keyword, both the first keyword and the second keyword, a background, and a social speech, and assign a first classification label to the first segment based on an analysis of the plurality of probability distribution vectors of one or more segments preceding the first segment and the probability distribution vectors of the first segment.
Type: Application
Filed: September 2, 2016
Publication date: March 8, 2018
Inventors: Jill Fain Lehman, Nikolas Wolfe, Andre Pereira
-
Publication number: 20180068658
Abstract: A dialog knowledge acquisition system includes a hardware processor, a memory, and hardware processor controlled input and output modules. The memory stores a dialog manager configured to instantiate a persistent interactive personality (PIP), and a dialog graph having linked dialog state nodes. The dialog manager receives dialog initiation data, identifies a first state node on the dialog graph corresponding to the dialog initiation data, determines a dialog interaction by the PIP based on the dialog initiation data and the first state node, and renders the dialog interaction. The dialog manager also receives feedback data corresponding to the dialog interaction, identifies a second state node based on the dialog initiation data, the dialog interaction, and the feedback data, and utilizes the dialog initiation data, the first state node, the dialog interaction, the feedback data, and the second state node to train the dialog graph.
Type: Application
Filed: January 3, 2017
Publication date: March 8, 2018
Inventors: Jill Fain Lehman, Boyang Albert Li, Andre Tiago Abelho Pereira, Iolanda M. Dos Santos Carvalho Leite, Ming Sun, Eli Pincus
-
Publication number: 20180053514
Abstract: There is provided a system comprising a microphone configured to receive an input speech from an individual, an analog-to-digital (A/D) converter to convert the input speech to digital form and generate a digitized speech, a memory storing an executable code and an age estimation database, a hardware processor executing the executable code to receive the digitized speech, identify a plurality of boundaries in the digitized speech delineating a plurality of phonemes in the digitized speech, extract a plurality of formant-based feature vectors from each phoneme in the digitized speech based on at least one of a formant position, a formant bandwidth, and a formant dispersion, compare the plurality of formant-based feature vectors with age determinant formant-based feature vectors of the age estimation database, determine the age of the individual when the comparison finds a match in the age estimation database, and communicate an age-appropriate response to the individual.
Type: Application
Filed: August 22, 2016
Publication date: February 22, 2018
Inventors: Rita Singh, Jill Fain Lehman
-
Patent number: 9588588
Abstract: There is provided a system and method for producing a haptic effect. In one implementation, such a system includes a system processor, a system memory, and a haptic engine stored in the system memory. The system processor is configured to execute the haptic engine to receive a media content, to map an event contained in the media content to a predetermined haptic effect, and to display an interface enabling a system user to produce a customized haptic effect based on the predetermined haptic effect. The system processor is further configured to generate an output data for causing one of the predetermined haptic effect and the customized haptic effect to occur.
Type: Grant
Filed: January 6, 2015
Date of Patent: March 7, 2017
Assignee: Disney Enterprises, Inc.
Inventors: Ali Israr, Roberta Klatzky, Siyan Zhao, Jill Fain Lehman, Oliver Schneider
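The event-to-effect mapping with user customization might be modeled as below. The event names, effect parameters, and override mechanism are illustrative assumptions, not the patented interface.

```python
# Hypothetical mapping from media events to predetermined haptic effects.
PREDETERMINED = {
    "explosion": {"intensity": 1.0, "duration_ms": 400,  "pattern": "burst"},
    "rainfall":  {"intensity": 0.3, "duration_ms": 2000, "pattern": "patter"},
}

def customize_effect(event: str, **overrides) -> dict:
    """Start from the predetermined effect mapped to a media event and
    apply the user's interface edits as parameter overrides."""
    effect = dict(PREDETERMINED[event])  # copy so the predetermined effect survives
    effect.update(overrides)
    return effect
```

Either the untouched predetermined effect or the customized copy can then be emitted as the output data that causes the effect to occur.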
-
Publication number: 20170061959
Abstract: There is provided a system for keyword recognition comprising a memory storing a keyword recognition application, a processor executing the keyword recognition application to receive a digitized speech from an analog-to-digital (A/D) converter, divide the digitized speech into a plurality of speech segments having a first speech segment, calculate a first probability of distribution of a first keyword in the first speech segment, determine that a first fraction of the first speech segment includes the first keyword, in response to comparing the first probability of distribution with a first threshold associated with the first keyword, calculate a second probability of distribution of a second keyword in the first speech segment, and determine that a second fraction of the first speech segment includes the second keyword, in response to comparing the second probability of distribution with a second threshold associated with the second keyword.
Type: Application
Filed: September 1, 2015
Publication date: March 2, 2017
Inventors: Jill Fain Lehman, Rita Singh
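The per-keyword thresholding over fractions of a speech segment can be sketched as follows; the data layout (per-fraction probabilities) and the comparison rule are assumptions made for illustration.

```python
def locate_keywords(segment_probs: dict, thresholds: dict) -> dict:
    """Decide which fractions of a speech segment include each keyword.

    `segment_probs` maps keyword -> list of per-fraction probabilities
    across one segment; `thresholds` maps keyword -> that keyword's own
    threshold. Returns keyword -> indices of fractions that clear it.
    """
    hits = {}
    for keyword, probs in segment_probs.items():
        t = thresholds[keyword]
        hits[keyword] = [i for i, p in enumerate(probs) if p >= t]
    return hits
```

Giving each keyword its own threshold, as the abstract does, lets a rare or acoustically distinctive keyword be accepted at a different confidence level than a common one.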
-
Publication number: 20160379633
Abstract: A device includes a plurality of components, a memory having a keyword recognition module and a context recognition module, a microphone configured to receive an input speech spoken by a user, an analog-to-digital converter configured to convert the input speech from an analog form to a digital form and generate a digitized speech, and a processor. The processor is configured to detect, using the keyword recognition module, a keyword in the digitized speech, initiate, in response to detecting the keyword by the keyword recognition module, an action to be taken by one of the plurality of components, wherein the keyword is associated with the action, determine, using the context recognition module, a context for the keyword, and execute the action if the context determined by the context recognition module indicates that the keyword is a command.
Type: Application
Filed: June 29, 2015
Publication date: December 29, 2016
Inventors: Jill Fain Lehman, Samer Al Moubayed
-
Publication number: 20160085303
Abstract: There is provided a system and method for producing a haptic effect. In one implementation, such a system includes a system processor, a system memory, and a haptic engine stored in the system memory. The system processor is configured to execute the haptic engine to receive a media content, to map an event contained in the media content to a predetermined haptic effect, and to display an interface enabling a system user to produce a customized haptic effect based on the predetermined haptic effect. The system processor is further configured to generate an output data for causing one of the predetermined haptic effect and the customized haptic effect to occur.
Type: Application
Filed: January 6, 2015
Publication date: March 24, 2016
Inventors: Ali Israr, Roberta Klatzky, Siyan Zhao, Jill Fain Lehman, Oliver Schneider
-
Publication number: 20130231933
Abstract: A method and system for addressee identification of speech includes defining several time intervals and utilizing one or more function evaluations to classify each of the several participants as addressing speech to an automated character or not addressing speech to the automated character during each of the several time intervals. A first function evaluation includes computing values for a predetermined set of features for each of the participants during a particular time interval and assigning a first addressing status to each of the several participants in the particular time interval, based on the values of each of the predetermined sets of features determined during the particular time interval. A second function evaluation may assign a second addressing status to each of the several participants in the particular time interval utilizing results of the first function evaluation for the particular time interval and for one or more additional contiguous time intervals.
Type: Application
Filed: March 2, 2012
Publication date: September 5, 2013
Applicant: DISNEY ENTERPRISES, INC.
Inventors: Hannaneh Hajishirzi, Jill Fain Lehman
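The two function evaluations, a per-interval decision followed by a smoothing pass over contiguous intervals, can be sketched like this. The feature scoring, threshold, and majority-vote smoothing are illustrative choices, not the claimed functions.

```python
def first_pass(features_by_participant: dict, score_fn, threshold: float = 0.5) -> dict:
    """First function evaluation: assign each participant an addressing
    status for one interval from that interval's feature values alone."""
    return {p: score_fn(f) >= threshold for p, f in features_by_participant.items()}

def second_pass(statuses_over_time: list, t: int) -> dict:
    """Second function evaluation: reassign interval t's statuses using
    contiguous neighboring intervals (assumed majority vote)."""
    window = statuses_over_time[max(0, t - 1): t + 2]
    result = {}
    for p in statuses_over_time[t]:
        votes = [s[p] for s in window]
        result[p] = sum(votes) > len(votes) / 2
    return result
```

The second pass is what makes the classification robust: a single interval where a participant glances away does not flip their status if the adjacent intervals still indicate they are addressing the automated character.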