Patents by Inventor Philip R. Cohen
Philip R. Cohen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12118321
Abstract: Methods and systems for multimodal collaborative conversational dialogue are disclosed. The multimodal collaborative conversational dialogue system includes a multimodal avatar interface and one or more sensors, which obtain one or more multimodal inputs. A multimodal semantic parser generates one or more logical form representations based on the one or more multimodal inputs. A collaborative dialogue manager infers a goal of the user from the one or more logical form representations and develops a plan including communicative actions and non-communicative actions with regard to the goal. The multimodal avatar interface outputs one or more multimodal collaborative plan-based dialogue system-generated communications with respect to execution of at least one communicative action. The collaborative dialogue manager maintains a collaborative dialogue with the user until the goal is attained.
Type: Grant
Filed: December 27, 2023
Date of Patent: October 15, 2024
Assignee: Openstream Inc.
Inventors: Philip R. Cohen, Lucian Galescu, Rajasekhar Tumuluri
-
Publication number: 20240242035
Abstract: Methods and systems for multimodal collaborative conversational dialogue are disclosed. The multimodal collaborative conversational dialogue system includes a multimodal avatar interface and one or more sensors, which obtain one or more multimodal inputs. A multimodal semantic parser generates one or more logical form representations based on the one or more multimodal inputs. A collaborative dialogue manager infers a goal of the user from the one or more logical form representations and develops a plan including communicative actions and non-communicative actions with regard to the goal. The multimodal avatar interface outputs one or more multimodal collaborative plan-based dialogue system-generated communications with respect to execution of at least one communicative action. The collaborative dialogue manager maintains a collaborative dialogue with the user until the goal is attained.
Type: Application
Filed: December 27, 2023
Publication date: July 18, 2024
Applicant: Openstream Inc.
Inventors: Philip R. Cohen, Lucian Galescu, Rajasekhar Tumuluri
-
Publication number: 20240221758
Abstract: Methods and systems for multimodal collaborative plan-based dialogue. The multimodal collaborative plan-based dialogue system includes multiple sensors to detect multimodal inputs from a user. The multimodal collaborative plan-based dialogue system includes a multimodal semantic parser that generates logical forms based on the multimodal inputs. The multimodal collaborative plan-based dialogue system includes a dialogue manager that infers a goal of the user from the logical forms and develops a plan that includes communicative actions with regard to the goal.
Type: Application
Filed: March 8, 2024
Publication date: July 4, 2024
Applicant: Openstream Inc.
Inventors: Philip R. Cohen, Rajasekhar Tumuluri
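The abstracts above describe a common pipeline: sensor inputs are parsed into logical forms, a dialogue manager infers the user's goal from them, and a plan of communicative and non-communicative actions is built toward that goal. The following is a minimal illustrative sketch of that flow; all names, intents, and the canned-recipe planner are hypothetical and not taken from the patented implementation.

```python
from dataclasses import dataclass, field


@dataclass
class Action:
    name: str
    communicative: bool  # True: the system says something; False: it acts silently


@dataclass
class DialogueManager:
    history: list = field(default_factory=list)

    def infer_goal(self, logical_forms):
        # Toy goal inference: treat the intent of the last logical form as the goal.
        self.history.extend(logical_forms)
        return logical_forms[-1]["intent"]

    def plan(self, goal):
        # Toy planner: look up a canned recipe mixing communicative and
        # non-communicative actions; fall back to asking for clarification.
        recipes = {
            "book_flight": [
                Action("ask_travel_dates", communicative=True),
                Action("query_flight_api", communicative=False),
                Action("present_options", communicative=True),
            ]
        }
        return recipes.get(goal, [Action("clarify_request", communicative=True)])


# Example: one parsed multimodal utterance ("book me a flight to PDX").
lfs = [{"intent": "book_flight", "args": {"dest": "PDX"}}]
dm = DialogueManager()
goal = dm.infer_goal(lfs)
steps = dm.plan(goal)
print(goal, [a.name for a in steps if a.communicative])
```

A real planner would reason over preconditions and effects rather than canned recipes; the sketch only shows how goal inference separates from plan construction.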
-
Patent number: 10431215
Abstract: A system and method are provided for adjusting natural language conversations between a human user and a computer based on the human user's cognitive state and/or situational state, particularly when the user is operating a vehicle. The system may disengage from conversation with the user (e.g., the driver) or take other actions based on various situational and/or user states. For example, the system may disengage conversation when the system detects that the driving situation is complex (e.g., a car merging onto a highway, turning right with multiple pedestrians trying to cross, etc.). The system may (in addition or instead) sense the user's cognitive load and disengage conversation based on the cognitive load. The system may alter its personality (e.g., by engaging in mentally non-taxing conversations such as telling jokes) based on situational and/or user states.
Type: Grant
Filed: December 6, 2016
Date of Patent: October 1, 2019
Assignee: Voicebox Technologies Corporation
Inventor: Philip R. Cohen
-
Publication number: 20170162197
Abstract: A system and method are provided for adjusting natural language conversations between a human user and a computer based on the human user's cognitive state and/or situational state, particularly when the user is operating a vehicle. The system may disengage from conversation with the user (e.g., the driver) or take other actions based on various situational and/or user states. For example, the system may disengage conversation when the system detects that the driving situation is complex (e.g., a car merging onto a highway, turning right with multiple pedestrians trying to cross, etc.). The system may (in addition or instead) sense the user's cognitive load and disengage conversation based on the cognitive load. The system may alter its personality (e.g., by engaging in mentally non-taxing conversations such as telling jokes) based on situational and/or user states.
Type: Application
Filed: December 6, 2016
Publication date: June 8, 2017
Applicant: VoiceBox Technologies Corporation
Inventor: Philip R. Cohen
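The two entries above describe a policy that gates conversation on driving complexity and sensed cognitive load. A hedged sketch of such a policy, with made-up signal scales and thresholds chosen purely for illustration:

```python
def conversation_mode(driving_complexity: float, cognitive_load: float,
                      threshold: float = 0.7) -> str:
    """Pick a conversation mode from two sensed signals in [0, 1].

    Returns 'disengage' (stop talking), 'light' (non-taxing talk such as
    jokes), or 'normal'. The 0-1 scales and cutoffs are illustrative
    assumptions, not values from the patent.
    """
    if driving_complexity >= threshold or cognitive_load >= threshold:
        return "disengage"  # e.g., car merging onto a highway
    if max(driving_complexity, cognitive_load) >= threshold / 2:
        return "light"      # moderate load: keep it mentally non-taxing
    return "normal"


print(conversation_mode(0.9, 0.2))  # complex driving situation
print(conversation_mode(0.4, 0.3))  # moderate load
print(conversation_mode(0.1, 0.1))  # easy conditions
```

In practice the inputs would come from vehicle sensors and driver-state estimation rather than being handed in directly.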
-
Patent number: 8473866
Abstract: A decision assistance device provides a user the ability to access and obtain actionable intelligence or other information to make tactical and strategic decisions. The decision assistance device allows the user to obtain information for future, near-future, and/or real-time scenarios. The device advantageously provides the information to the user in a manner that minimizes the amount of attention and interaction required by the user while still permitting the user to rapidly manipulate the device while moving down a desired decision path. The decision assistance device may take a physical or virtual form.
Type: Grant
Filed: October 15, 2007
Date of Patent: June 25, 2013
Inventors: Philip R. Cohen, Scott Lind, David McGee
-
Patent number: 8438489
Abstract: The system and method described herein can be advantageously used in a plurality of scenarios, two of which are field markup and data collection, and collaborative review. The system and method handle the allocation of the digital paper pattern background and the creation of the required page definition files embedded into digital-paper-enabled PDFs. Optionally, action palettes can be automatically overlaid on the drawings as legend boxes to enable field personnel to select the operations they want to perform on the digital paper as they would on a computer interface, for instance letting users select the types of callouts and clouds to add to their markup. These drawings can be printed or plotted onto paper and sent to a work site for markup.
Type: Grant
Filed: January 26, 2009
Date of Patent: May 7, 2013
Inventors: Paulo Barthelmess, David McGee, Philip R. Cohen, Edward C. Kaiser
-
Publication number: 20120066578
Abstract: In many environments, such as municipal, military, and construction settings, the use of digital pen and paper systems permits end users to create or modify features on a digital document, attributes associated with those features, or attribute values associated with those features. The attribute value management system includes a digital pen, at least one digital document, one or more computing devices, and a number of software programs for creating data relationships between the digital documents (e.g., features on maps and their underlying attribute values), interpreting voice or handwritten data, validating the interpreted data, and uploading the validated data to a geo-database. The attribute value management system functions to create, update, or otherwise change the attribute values associated with the features by a temporal association method, a linked identification method, or a direct handwriting method.
Type: Application
Filed: August 9, 2011
Publication date: March 15, 2012
Applicant: ADAPX, INC.
Inventors: Michael Robin, Philip R. Cohen, Melissa Trapp Petty
-
Publication number: 20120054601
Abstract: Methods, techniques, and systems for automated icon generation and placement are provided. Some embodiments provide an icon generation and placement system configured to ingest one or more icon templates, wherein the icon templates comprise an icon symbol, an icon label, and an icon dimensionality; populate source data for the one or more icon templates, wherein the source data is accessible by at least one of a speech recognition subsystem, a handwriting recognition subsystem, and a sketch recognition subsystem; generate an icon attribute table for the one or more icon templates; store the ingested one or more icon templates in an icon database; and place an icon from the icon database into a selected location within a digital document in response to one or more multimodal inputs.
Type: Application
Filed: May 31, 2011
Publication date: March 1, 2012
Applicant: ADAPX, INC.
Inventors: Philip R. Cohen, R. Matthews Wesson, Michael Robin, David R. McGee
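The abstract above enumerates a pipeline: ingest icon templates, build an attribute table, store the templates in a database, and place an icon at a document location in response to a multimodal input. A minimal sketch of that pipeline follows; the class and field names are hypothetical, and the recognized label is assumed to arrive pre-resolved rather than from an actual speech/handwriting/sketch recognizer.

```python
from dataclasses import dataclass


@dataclass
class IconTemplate:
    symbol: str
    label: str
    dimensionality: str  # e.g., "2D" or "3D"


class IconSystem:
    def __init__(self):
        self.db = {}           # icon database, keyed by label
        self.attributes = {}   # per-template icon attribute table
        self.placed = []       # (label, location) placements in the document

    def ingest(self, template: IconTemplate):
        # Store the template and generate its attribute-table row.
        self.db[template.label] = template
        self.attributes[template.label] = {
            "symbol": template.symbol,
            "dimensionality": template.dimensionality,
        }

    def place(self, label: str, location: tuple):
        # In the described system, `label` would be resolved by a speech,
        # handwriting, or sketch recognition subsystem.
        icon = self.db[label]
        self.placed.append((icon.label, location))
        return self.placed[-1]


system = IconSystem()
system.ingest(IconTemplate(symbol="▲", label="checkpoint", dimensionality="2D"))
print(system.place("checkpoint", (120, 45)))
```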
-
Publication number: 20090193342
Abstract: The system and method described herein can be advantageously used in a plurality of scenarios, two of which are field markup and data collection, and collaborative review. The system and method handle the allocation of the digital paper pattern background and the creation of the required page definition files embedded into digital-paper-enabled PDFs. Optionally, action palettes can be automatically overlaid on the drawings as legend boxes to enable field personnel to select the operations they want to perform on the digital paper as they would on a computer interface, for instance letting users select the types of callouts and clouds to add to their markup. These drawings can be printed or plotted onto paper and sent to a work site for markup.
Type: Application
Filed: January 26, 2009
Publication date: July 30, 2009
Inventors: Paulo Barthelmess, David McGee, Philip R. Cohen, Edward C. Kaiser
-
Publication number: 20080186255
Abstract: Systems, devices, and methods provide tools that enhance the tactical or strategic situation awareness of on-scene and remotely located personnel involved with the surveillance of a region-of-interest using field-of-view sensory augmentation tools. The sensory augmentation tools provide updated visual, text, audio, and graphic information associated with the region-of-interest, adjusted for the positional frame of reference of the on-scene or remote personnel viewing the region-of-interest, map, document, or other surface. Annotations and augmented reality graphics are projected onto, and positionally registered with, objects or regions-of-interest visible within the field of view of a user looking through a see-through monitor; the viewer may select the projected graphics for editing and manipulation via sensory feedback.
Type: Application
Filed: December 7, 2007
Publication date: August 7, 2008
Inventors: Philip R. Cohen, David McGee
-
Publication number: 20080184149
Abstract: A decision assistance device provides a user the ability to access and obtain actionable intelligence or other information to make tactical and strategic decisions. The decision assistance device allows the user to obtain information for future, near-future, and/or real-time scenarios. The device advantageously provides the information to the user in a manner that minimizes the amount of attention and interaction required by the user while still permitting the user to rapidly manipulate the device while moving down a desired decision path. The decision assistance device may take a physical or virtual form.
Type: Application
Filed: October 15, 2007
Publication date: July 31, 2008
Inventors: Philip R. Cohen, Scott Lind, David McGee
-
Patent number: 6674426
Abstract: A method and system designed to augment, rather than replace, the work habits of its users. These work habits include practices such as drawing on Post-it™ notes using a symbolic language. The system observes and understands the language used on the Post-it notes, and the system assigns meaning simultaneously to objects in both the physical and virtual worlds. Since the data is preserved in physical form, the physical documents serve as a back-up in the case of electronic system failure. With the present invention, users can roll out a primary paper document such as a map, register it, and place secondary physical documents on the map. The secondary physical documents can be moved from one place to another on the primary document. Once an object is augmented, users can modify the meaning represented by it, ask questions about that representation, view it in virtual reality, or give directions to it, all with speech and gestures.
Type: Grant
Filed: March 8, 2001
Date of Patent: January 6, 2004
Assignee: Oregon Health & Science University
Inventors: David R. McGee, Philip R. Cohen, Lizhong Wu
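The last abstract hinges on "registering" a physical map so that positions of physical notes placed on it acquire meaning in world coordinates. A toy sketch of one way registration could work, under the simplifying assumptions (mine, not the patent's) of an axis-aligned map and two known control points:

```python
def make_registration(p1_paper, p1_world, p2_paper, p2_world):
    """Build a paper-to-world coordinate mapping from two control points.

    Assumes the map is axis-aligned (no rotation), so each axis can be
    mapped independently by linear interpolation between the two points.
    """
    def to_world(pt):
        return tuple(
            w1 + (pt[i] - a1) * (w2 - w1) / (a2 - a1)
            for i, (a1, w1, a2, w2) in enumerate(
                zip(p1_paper, p1_world, p2_paper, p2_world)
            )
        )
    return to_world


# Register a 100x100 paper map against two known lon/lat corners.
to_world = make_registration((0, 0), (-122.5, 45.0), (100, 100), (-122.0, 45.5))
print(to_world((50, 50)))  # position of a note placed at the paper's center
```

A full system would use three or more control points and an affine or projective transform to handle rotation and skew.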