Patents by Inventor Rachel K.E. Bellamy
Rachel K.E. Bellamy has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11263188
Abstract: A method for automatically generating documentation for an artificial intelligence model includes receiving, by a computing device, an artificial intelligence model. The computing device accesses a model facts policy that indicates data to be collected for artificial intelligence models. The computing device collects artificial intelligence model facts regarding the artificial intelligence model according to the model facts policy. The computing device accesses a factsheet template. The factsheet template provides a schema for an artificial intelligence model factsheet for the artificial intelligence model. The computing device populates the artificial intelligence model factsheet using the factsheet template with the artificial intelligence model facts related to the artificial intelligence model.
Type: Grant
Filed: November 1, 2019
Date of Patent: March 1, 2022
Assignee: International Business Machines Corporation
Inventors: Matthew R. Arnold, Rachel K. E. Bellamy, Kaoutar El Maghraoui, Michael Hind, Stephanie Houde, Kalapriya Kannan, Sameep Mehta, Aleksandra Mojsilovic, Ramya Raghavendra, Darrell C. Reimer, John T. Richards, David J. Piorkowski, Jason Tsay, Kush R. Varshney, Manish Kesarwani
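The abstract above describes a pipeline: receive a model, apply a facts policy to collect facts, then populate a factsheet from a template schema. A minimal sketch of that flow, with a hypothetical policy and template structure (the patent does not specify these data formats):

```python
def collect_model_facts(model_metadata, facts_policy):
    """Collect only the facts the policy asks for (hypothetical structures)."""
    return {field: model_metadata.get(field, "unknown") for field in facts_policy}

def populate_factsheet(template, facts):
    """Fill a factsheet schema (section -> list of field names) with facts."""
    return {section: {f: facts.get(f, "unknown") for f in fields}
            for section, fields in template.items()}

# Illustrative model metadata, policy, and template (not from the patent).
metadata = {"name": "credit-risk-v2", "training_data": "loans-2018",
            "accuracy": 0.91, "intended_use": "loan screening"}
policy = ["name", "training_data", "accuracy", "intended_use"]
template = {"Overview": ["name", "intended_use"],
            "Performance": ["accuracy"],
            "Provenance": ["training_data"]}

factsheet = populate_factsheet(template, collect_model_facts(metadata, policy))
```

The policy and template are deliberately decoupled here: the policy governs what is collected, while the template governs how it is presented.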
-
Patent number: 11182600
Abstract: A processor may record a first location at an event with at least one person. The processor may monitor a plurality of actions of the at least one person at the first location. The processor may interpret at least one action of the at least one person that indicates a change of interest to a second location at the event. Based on the at least one action, the processor may determine the second location at the event. The processor may record the second location at the event.
Type: Grant
Filed: September 24, 2015
Date of Patent: November 23, 2021
Assignee: International Business Machines Corporation
Inventors: Rachel K. E. Bellamy, Jonathan H. Connell, II, Robert G. Farrell, Brian P. Gaucher, Jonathan Lenchner, David O. S. Melville, Valentina Salapura
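The core loop in this abstract is: record a location, watch for an action that signals a change of interest, then record the new location. A toy sketch, assuming a hypothetical stream of (action, target) pairs rather than real sensor input:

```python
# Hypothetical action names that signal a change of interest (illustrative).
INTEREST_ACTIONS = {"walks_toward", "points_at", "gazes_at"}

def track_locations(first_location, actions):
    """Record locations of interest, starting from the first location."""
    recorded = [first_location]
    for action, target in actions:
        # An interest-indicating action toward a new spot triggers a new record.
        if action in INTEREST_ACTIONS and target != recorded[-1]:
            recorded.append(target)
    return recorded

locations = track_locations("stage", [("claps", "stage"),
                                      ("gazes_at", "booth_3"),
                                      ("points_at", "booth_3"),
                                      ("walks_toward", "exit")])
```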
-
Patent number: 11100407
Abstract: Embodiments for building domain models from dialog interactions by a processor. Domain knowledge may be elicited from one or more dialog interactions with one or more users according to one or more dialog strategies. One or more domain models may be built and/or enhanced according to the domain knowledge.
Type: Grant
Filed: October 10, 2018
Date of Patent: August 24, 2021
Assignee: International Business Machines Corporation
Inventors: Oznur Alkan, Rachel K. E. Bellamy, Elizabeth Daly, Matthew Davis, Vera Liao, Biplav Srivastava
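To make "eliciting domain knowledge from dialog" concrete, here is a minimal sketch: a toy elicitation strategy pulls simple "X is a Y" facts out of dialog turns and accumulates them into an entity-to-types model that later dialogs can enhance. The pattern matching is purely illustrative, not the patented strategy:

```python
import re

def elicit_facts(utterance):
    """Pull simple 'X is a Y' facts from one dialog turn (toy strategy)."""
    return re.findall(r"(\w+) is an? (\w+)", utterance.lower())

def build_domain_model(dialog_turns, model=None):
    """Build, or enhance, an entity -> types domain model from dialog turns."""
    model = dict(model or {})
    for turn in dialog_turns:
        for entity, kind in elicit_facts(turn):
            model.setdefault(entity, set()).add(kind)
    return model

model = build_domain_model(["A latte is a drink.", "An espresso is a drink."])
model = build_domain_model(["A bagel is a snack."], model)  # enhance existing model
```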
-
Publication number: 20210133162
Abstract: A method for automatically generating documentation for an artificial intelligence model includes receiving, by a computing device, an artificial intelligence model. The computing device accesses a model facts policy that indicates data to be collected for artificial intelligence models. The computing device collects artificial intelligence model facts regarding the artificial intelligence model according to the model facts policy. The computing device accesses a factsheet template. The factsheet template provides a schema for an artificial intelligence model factsheet for the artificial intelligence model. The computing device populates the artificial intelligence model factsheet using the factsheet template with the artificial intelligence model facts related to the artificial intelligence model.
Type: Application
Filed: November 1, 2019
Publication date: May 6, 2021
Inventors: Matthew R. Arnold, Rachel K. E. Bellamy, Kaoutar El Maghraoui, Michael Hind, Stephanie Houde, Kalapriya Kannan, Sameep Mehta, Aleksandra Mojsilovic, Ramya Raghavendra, Darrell C. Reimer, John T. Richards, David J. Piorkowski, Jason Tsay, Kush R. Varshney, Manish Kesarwani
-
Patent number: 10956831
Abstract: In one embodiment, in accordance with the present invention, a method, computer program product, and system for performing actions based on captured interpersonal interactions during a meeting is provided. One or more computer processors capture the interpersonal interactions between people in a physical space during a period of time, using machine learning algorithms to detect the interpersonal interactions and a state of each person based on vision and audio sensors in the physical space. The one or more computer processors analyze and categorize the interactions and state of each person, and tag representations of each person with the respectively analyzed and categorized interactions and states of the respective person over the period of time. The one or more computer processors then take an action based on the analysis.
Type: Grant
Filed: November 13, 2017
Date of Patent: March 23, 2021
Assignee: International Business Machines Corporation
Inventors: Rachel K. E. Bellamy, Jonathan H. Connell, II, Robert G. Farrell, Brian P. Gaucher, Jonathan Lenchner, David O. S. Melville, Valentina Salapura
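The tag-then-act structure of this abstract can be sketched without the ML detection stage: assume a hypothetical stream of already-categorized (person, state) observations from the vision/audio pipeline, tag each person with their accumulated states, and derive a simple follow-up action. The observation labels and the chosen action are illustrative only:

```python
from collections import defaultdict

# Hypothetical categorized observations a vision/audio pipeline might emit.
observations = [
    ("alice", "speaking"), ("bob", "nodding"),
    ("alice", "speaking"), ("carol", "distracted"),
]

def tag_people(obs):
    """Tag each person with the categorized states observed over the meeting."""
    tags = defaultdict(list)
    for person, state in obs:
        tags[person].append(state)
    return tags

def choose_action(tags):
    """Toy follow-up action: prompt anyone tagged as distracted to contribute."""
    return ["prompt " + p for p, states in sorted(tags.items())
            if "distracted" in states]

actions = choose_action(tag_people(observations))
```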
-
Publication number: 20200118008
Abstract: Embodiments for building domain models from dialog interactions by a processor. Domain knowledge may be elicited from one or more dialog interactions with one or more users according to one or more dialog strategies. One or more domain models may be built and/or enhanced according to the domain knowledge.
Type: Application
Filed: October 10, 2018
Publication date: April 16, 2020
Applicant: International Business Machines Corporation
Inventors: Oznur Alkan, Rachel K. E. Bellamy, Elizabeth Daly, Matthew Davis, Vera Liao, Biplav Srivastava
-
Publication number: 20190147367
Abstract: In one embodiment, in accordance with the present invention, a method, computer program product, and system for performing actions based on captured interpersonal interactions during a meeting is provided. One or more computer processors capture the interpersonal interactions between people in a physical space during a period of time, using machine learning algorithms to detect the interpersonal interactions and a state of each person based on vision and audio sensors in the physical space. The one or more computer processors analyze and categorize the interactions and state of each person, and tag representations of each person with the respectively analyzed and categorized interactions and states of the respective person over the period of time. The one or more computer processors then take an action based on the analysis.
Type: Application
Filed: November 13, 2017
Publication date: May 16, 2019
Inventors: Rachel K. E. Bellamy, Jonathan H. Connell, II, Robert G. Farrell, Brian P. Gaucher, Jonathan Lenchner, David O. S. Melville, Valentina Salapura
-
Patent number: 10244013
Abstract: A computer-implemented method manages remote electronic drop-ins on local conversations. A local audio sensor transmits a captured conversation from a local cluster of persons to a remote communication device where members of the local cluster of persons are within a predefined distance of one another, and where the remote communication device is at a location that is beyond a human hearing range from the local audio sensor. One or more processors determine that the captured conversation is about a particular topic. A request from a remote user is received from the remote communication device to electronically drop in on a particular remote cluster of persons who are having a conversation about the particular topic. In response to receiving the request from the remote user, one or more processors selectively connect a local communication device proximate to the cluster of persons to the remote communication device.
Type: Grant
Filed: December 8, 2017
Date of Patent: March 26, 2019
Assignee: International Business Machines Corporation
Inventors: Rachel K. E. Bellamy, Jonathan H. Connell, II, Robert G. Farrell, Brian P. Gaucher, Jonathan Lenchner, David O. S. Melville, Valentina Salapura
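The routing step in this abstract, matching a remote user's requested topic against what each local cluster is discussing, can be sketched with a toy keyword-count topic detector. The keyword lists and transcripts are hypothetical; the patent does not specify how the topic is determined:

```python
def conversation_topic(transcript, topic_keywords):
    """Classify a captured conversation by keyword counts (toy topic detector)."""
    words = transcript.lower().split()
    scores = {t: sum(words.count(k) for k in kws)
              for t, kws in topic_keywords.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

def route_drop_in(clusters, requested_topic, topic_keywords):
    """Pick the local cluster whose conversation matches the requested topic."""
    for cluster_id, transcript in clusters.items():
        if conversation_topic(transcript, topic_keywords) == requested_topic:
            return cluster_id  # connect the local device near this cluster
    return None

keywords = {"budget": ["cost", "budget"], "hiring": ["candidate", "interview"]}
clusters = {"kitchen": "the interview with that candidate went well",
            "lobby": "our budget covers the cost"}
match = route_drop_in(clusters, "budget", keywords)
```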
-
Patent number: 10013160
Abstract: Detecting user input based on multiple gestures is provided. One or more interactions are received from a user via a user interface. An inferred interaction is determined based, at least in part, on a geometric operation, wherein the geometric operation is based on the one or more interactions. The inferred interaction is presented via the user interface. Whether a confirmation has been received for the inferred interaction is determined.
Type: Grant
Filed: May 29, 2014
Date of Patent: July 3, 2018
Assignee: International Business Machines Corporation
Inventors: Rachel K. E. Bellamy, Bonnie E. John, Peter K. Malkin, John T. Richards, Calvin B. Swart, John C. Thomas, Jr., Sharon M. Trewin
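One possible "geometric operation" of the kind this abstract describes is snapping roughly-collinear touch samples to a straight-line stroke that the interface then presents for confirmation. This sketch is one illustrative choice of operation, not the patented method:

```python
def infer_line(points, tolerance=5.0):
    """Infer a straight-line gesture if sampled touch points are near-collinear."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    length = (dx * dx + dy * dy) ** 0.5
    if length == 0:
        return None
    # Perpendicular distance of each point from the endpoint-to-endpoint chord.
    for x, y in points[1:-1]:
        dist = abs(dy * (x - x0) - dx * (y - y0)) / length
        if dist > tolerance:
            return None  # too wobbly: no inferred line; await more interactions
    return ((x0, y0), (x1, y1))  # present this inferred stroke for confirmation

inferred = infer_line([(0, 0), (10, 11), (20, 19), (30, 30)])
```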
-
Publication number: 20180109571
Abstract: A computer-implemented method manages remote electronic drop-ins on local conversations. A local audio sensor transmits a captured conversation from a local cluster of persons to a remote communication device where members of the local cluster of persons are within a predefined distance of one another, and where the remote communication device is at a location that is beyond a human hearing range from the local audio sensor. One or more processors determine that the captured conversation is about a particular topic. A request from a remote user is received from the remote communication device to electronically drop in on a particular remote cluster of persons who are having a conversation about the particular topic. In response to receiving the request from the remote user, one or more processors selectively connect a local communication device proximate to the cluster of persons to the remote communication device.
Type: Application
Filed: December 8, 2017
Publication date: April 19, 2018
Inventors: Rachel K. E. Bellamy, Jonathan H. Connell, II, Robert G. Farrell, Brian P. Gaucher, Jonathan Lenchner, David O. S. Melville, Valentina Salapura
-
Patent number: 9923938
Abstract: A computer-implemented method manages drop-ins on conversations near a focal point of proximal activity in a gathering place. One or more processors receive a first set of sensor data from one or more sensors in a gathering place, and then identify a focal point of proximal activity based on the first set of received sensor data received from the one or more sensors. One or more processors characterize a conversation near the focal point based on a second set of received sensor data from the one or more sensors, and then present a characterization of the conversation to an electronic device. One or more processors enable the electronic device to allow a user to drop-in on the conversation.
Type: Grant
Filed: July 13, 2015
Date of Patent: March 20, 2018
Assignee: International Business Machines Corporation
Inventors: Rachel K. E. Bellamy, Jonathan H. Connell, II, Robert G. Farrell, Brian P. Gaucher, Jonathan Lenchner, David O. S. Melville, Valentina Salapura
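"Identifying a focal point of proximal activity" from sensor data could be as simple as an activity-weighted centroid over sensor positions. A minimal sketch under that assumption (the sensor reading format is hypothetical):

```python
def focal_point(sensor_readings):
    """Identify the focal point as the activity-weighted centroid of readings."""
    total = sum(level for _, _, level in sensor_readings)
    if total == 0:
        return None  # no activity anywhere: no focal point to report
    x = sum(px * level for px, _, level in sensor_readings) / total
    y = sum(py * level for _, py, level in sensor_readings) / total
    return (x, y)

# Hypothetical (x, y, activity_level) readings from sensors in a gathering place.
readings = [(0, 0, 1), (10, 0, 1), (10, 10, 8)]
focus = focal_point(readings)
```

With most of the activity at sensor (10, 10), the centroid lands close to that corner.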
-
Patent number: 9881171
Abstract: Information regarding one or more sensing devices in an environment is broadcast. The broadcast information is received by a user application running on a user device in the environment. The broadcast information comprises information regarding presence of the one or more sensing devices in the environment and at least one of a capacity profile and an activity profile of the one or more sensing devices.
Type: Grant
Filed: November 16, 2015
Date of Patent: January 30, 2018
Assignee: International Business Machines Corporation
Inventors: Rachel K.E. Bellamy, Thomas D. Erickson
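The broadcast described here carries a device's presence plus its capability and activity profiles. A minimal sketch of such a payload and its decoding on the user-application side, using JSON over a hypothetical schema (the patent does not specify a wire format):

```python
import json

def make_beacon(device_id, capability_profile, activity_profile):
    """Serialize a sensing device's presence announcement (hypothetical schema)."""
    return json.dumps({"device": device_id,
                       "capabilities": capability_profile,
                       "activity": activity_profile})

def parse_beacon(payload):
    """User-application side: decode a received broadcast payload."""
    return json.loads(payload)

beacon = make_beacon("cam-07", {"video": True, "audio": False},
                     {"recording": "on"})
info = parse_beacon(beacon)
```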
-
Patent number: 9782069
Abstract: Systems and methods are provided for post-hoc correction of calibration errors in eye tracking data, which take into consideration calibration errors that result from changes in user position during a user session in which the user's fixations on a display screen are captured and recorded by an eye tracking system, and which take into consideration errors that occur when the user looks away from a displayed target item before selecting the target item.
Type: Grant
Filed: November 6, 2014
Date of Patent: October 10, 2017
Assignee: International Business Machines Corporation
Inventors: Rachel K. E. Bellamy, Bonnie E. John, John T. Richards, Calvin B. Swart, John C. Thomas, Jr., Sharon M. Trewin
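The basic shape of a post-hoc calibration correction is: estimate the systematic drift between recorded fixations and known targets the user actually selected, then apply that offset to the whole session. A simplified sketch assuming a constant offset (the patent addresses more nuanced cases, such as position changes and looking away before selection):

```python
def mean_offset(fixations, selections):
    """Estimate calibration drift as the mean fixation-to-target offset."""
    n = len(fixations)
    dx = sum(sx - fx for (fx, _), (sx, _) in zip(fixations, selections)) / n
    dy = sum(sy - fy for (_, fy), (_, sy) in zip(fixations, selections)) / n
    return dx, dy

def correct(fixations, offset):
    """Apply the estimated offset to every recorded fixation post hoc."""
    dx, dy = offset
    return [(x + dx, y + dy) for x, y in fixations]

# Hypothetical session: fixations drifted about (5, -3) px from selected targets.
fixes = [(100, 100), (200, 150)]
targets = [(105, 97), (205, 147)]
corrected = correct(fixes, mean_offset(fixes, targets))
```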
-
Patent number: 9740398
Abstract: Detecting user input based on multiple gestures is provided. One or more interactions are received from a user via a user interface. An inferred interaction is determined based, at least in part, on a geometric operation, wherein the geometric operation is based on the one or more interactions. The inferred interaction is presented via the user interface. Whether a confirmation has been received for the inferred interaction is determined.
Type: Grant
Filed: November 2, 2016
Date of Patent: August 22, 2017
Assignee: International Business Machines Corporation
Inventors: Rachel K. E. Bellamy, Bonnie E. John, Peter K. Malkin, John T. Richards, Calvin B. Swart, John C. Thomas, Jr., Sharon M. Trewin
-
Publication number: 20170140164
Abstract: Information regarding one or more sensing devices in an environment is broadcast. The broadcast information is received by a user application running on a user device in the environment. The broadcast information comprises information regarding presence of the one or more sensing devices in the environment and at least one of a capacity profile and an activity profile of the one or more sensing devices.
Type: Application
Filed: November 16, 2015
Publication date: May 18, 2017
Inventors: Rachel K.E. Bellamy, Thomas D. Erickson
-
Publication number: 20170094179
Abstract: A processor may record a first location at an event with at least one person. The processor may monitor a plurality of actions of the at least one person at the first location. The processor may interpret at least one action of the at least one person that indicates a change of interest to a second location at the event. Based on the at least one action, the processor may determine the second location at the event. The processor may record the second location at the event.
Type: Application
Filed: September 24, 2015
Publication date: March 30, 2017
Inventors: Rachel K. E. Bellamy, Jonathan H. Connell, II, Robert G. Farrell, Brian P. Gaucher, Jonathan Lenchner, David O. S. Melville, Valentina Salapura
-
Publication number: 20170046064
Abstract: Detecting user input based on multiple gestures is provided. One or more interactions are received from a user via a user interface. An inferred interaction is determined based, at least in part, on a geometric operation, wherein the geometric operation is based on the one or more interactions. The inferred interaction is presented via the user interface. Whether a confirmation has been received for the inferred interaction is determined.
Type: Application
Filed: November 2, 2016
Publication date: February 16, 2017
Inventors: Rachel K. E. Bellamy, Bonnie E. John, Peter K. Malkin, John T. Richards, Calvin B. Swart, John C. Thomas, Jr., Sharon M. Trewin
-
Patent number: 9563354
Abstract: Detecting user input based on multiple gestures is provided. One or more interactions are received from a user via a user interface. An inferred interaction is determined based, at least in part, on a geometric operation, wherein the geometric operation is based on the one or more interactions. The inferred interaction is presented via the user interface. Whether a confirmation has been received for the inferred interaction is determined.
Type: Grant
Filed: July 26, 2016
Date of Patent: February 7, 2017
Assignee: International Business Machines Corporation
Inventors: Rachel K. E. Bellamy, Bonnie E. John, Peter K. Malkin, John T. Richards, Calvin B. Swart, John C. Thomas, Jr., Sharon M. Trewin
-
Publication number: 20170017640
Abstract: A computer-implemented method manages drop-ins on conversations near a focal point of proximal activity in a gathering place. One or more processors receive a first set of sensor data from one or more sensors in a gathering place, and then identify a focal point of proximal activity based on the first set of received sensor data received from the one or more sensors. One or more processors characterize a conversation near the focal point based on a second set of received sensor data from the one or more sensors, and then present a characterization of the conversation to an electronic device. One or more processors enable the electronic device to allow a user to drop-in on the conversation.
Type: Application
Filed: July 13, 2015
Publication date: January 19, 2017
Inventors: Rachel K. E. Bellamy, Jonathan H. Connell, II, Robert G. Farrell, Brian P. Gaucher, Jonathan Lenchner, David O. S. Melville, Valentina Salapura
-
Patent number: 9495098
Abstract: Detecting user input based on multiple gestures is provided. One or more interactions are received from a user via a user interface. An inferred interaction is determined based, at least in part, on a geometric operation, wherein the geometric operation is based on the one or more interactions. The inferred interaction is presented via the user interface. Whether a confirmation has been received for the inferred interaction is determined.
Type: Grant
Filed: April 4, 2016
Date of Patent: November 15, 2016
Assignee: International Business Machines Corporation
Inventors: Rachel K. E. Bellamy, Bonnie E. John, Peter K. Malkin, John T. Richards, Calvin B. Swart, John C. Thomas, Jr., Sharon M. Trewin