Patents by Inventor Jonathan H. Connell, II

Jonathan H. Connell, II has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20170286632
    Abstract: Embodiments include methods, systems, and computer program products for providing medication-related feedback. Aspects include receiving medication information for a patient. Aspects also include receiving a biological, behavioral, or environmental output from a sensor. Aspects also include determining, based upon the biological, behavioral, or environmental output and the medication information for the patient, whether a medication dose is needed. Aspects also include, based on a determination that the medication dose is needed, generating an alert.
    Type: Application
    Filed: March 29, 2016
    Publication date: October 5, 2017
    Inventors: Maryam Ashoori, Benjamin D. Briggs, Lawrence A. Clevenger, Leigh Anne H. Clevenger, Jonathan H. Connell, II, Nalini K. Ratha, Michael Rizzolo
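The decision flow this abstract describes (sensor output plus medication information in, alert out when a dose appears needed) can be sketched as follows. This is an illustrative toy only; the threshold check and every field name are assumptions, not details from the patent.

```python
# Illustrative sketch of the medication-feedback flow from the abstract.
# The trigger-level check and data fields are assumed for illustration.

def dose_needed(sensor_reading, med_info):
    """Decide whether a dose is needed from a biological sensor output."""
    # e.g. blood glucose above the patient's configured trigger level,
    # and enough time elapsed since the last dose
    return (sensor_reading > med_info["trigger_level"]
            and med_info["hours_since_last_dose"] >= med_info["min_interval_h"])

def check_patient(sensor_reading, med_info):
    """Generate an alert string when a medication dose is needed."""
    if dose_needed(sensor_reading, med_info):
        return "ALERT: dose of %s may be needed" % med_info["name"]
    return None

info = {"name": "insulin", "trigger_level": 180.0,
        "hours_since_last_dose": 5, "min_interval_h": 4}
print(check_patient(195.0, info))  # alert fires
print(check_patient(120.0, info))  # no alert (prints None)
```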
  • Patent number: 9758095
    Abstract: Techniques are provided for alerting drivers of hazardous driving conditions using the sensing capabilities of wearable mobile technology. In one aspect, a method for alerting drivers of hazardous driving conditions includes the steps of: collecting real-time data from a driver of a vehicle, wherein the data is collected via a mobile device worn by the driver; determining whether the real-time data indicates that a hazardous driving condition exists; providing feedback to the driver if the real-time data indicates that a hazardous driving condition exists; and continuing to collect data from the driver in real-time if the real-time data indicates that a hazardous driving condition does not exist. The real-time data may also be collected and used to learn characteristics of the driver. These characteristics can be compared with the data being collected to help determine, in real-time, whether the driving behavior is normal and whether a hazardous driving condition exists.
    Type: Grant
    Filed: January 25, 2016
    Date of Patent: September 12, 2017
    Assignee: International Business Machines Corporation
    Inventors: Benjamin D. Briggs, Lawrence A. Clevenger, Leigh Anne H. Clevenger, Jonathan H. Connell, II, Nalini K. Ratha, Michael Rizzolo
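The collect/compare/alert loop in this claim, including the comparison against learned driver characteristics, might be sketched as below. The baseline model (a simple z-score against prior samples) and the threshold are illustrative assumptions, not the patent's method.

```python
# Minimal sketch of the alerting loop: collect real-time data from a worn
# device, compare it with learned driver characteristics, and give
# feedback when a hazardous condition is indicated. The z-score baseline
# and threshold are assumptions for illustration.

from statistics import mean, stdev

class DriverMonitor:
    def __init__(self, baseline_samples, z_threshold=3.0):
        self.mu = mean(baseline_samples)      # learned "normal" behavior
        self.sigma = stdev(baseline_samples)
        self.z_threshold = z_threshold

    def is_hazardous(self, reading):
        """Flag readings that deviate strongly from the learned baseline."""
        z = abs(reading - self.mu) / self.sigma
        return z > self.z_threshold

    def step(self, reading):
        """One loop iteration: alert if hazardous, otherwise keep collecting."""
        return "ALERT" if self.is_hazardous(reading) else "COLLECT"

monitor = DriverMonitor([70, 72, 68, 71, 69])  # e.g. heart-rate samples
print(monitor.step(70))    # within baseline -> COLLECT
print(monitor.step(120))   # far outside baseline -> ALERT
```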
  • Patent number: 9744671
    Abstract: Mechanisms are provided for classifying an obstacle as an asset type. The mechanisms receive a digital image of an obstacle from an image capture device of an automated robot. The mechanisms perform a classification operation on the digital image of the obstacle to identify a proposed asset type classification for the obstacle. The mechanisms determine a final asset type for the obstacle based on the proposed asset type classification for the obstacle. The mechanisms update a map data structure for a physical premises in which the obstacle is present based on the final asset type.
    Type: Grant
    Filed: March 18, 2016
    Date of Patent: August 29, 2017
    Assignee: International Business Machines Corporation
    Inventors: Jonathan H. Connell, II, Jeffrey O. Kephart, Jonathan Lenchner
  • Publication number: 20170210288
    Abstract: Techniques are provided for alerting drivers of hazardous driving conditions using the sensing capabilities of wearable mobile technology. In one aspect, a method for alerting drivers of hazardous driving conditions includes the steps of: collecting real-time data from a driver of a vehicle, wherein the data is collected via a mobile device worn by the driver; determining whether the real-time data indicates that a hazardous driving condition exists; providing feedback to the driver if the real-time data indicates that a hazardous driving condition exists; and continuing to collect data from the driver in real-time if the real-time data indicates that a hazardous driving condition does not exist. The real-time data may also be collected and used to learn characteristics of the driver. These characteristics can be compared with the data being collected to help determine, in real-time, whether the driving behavior is normal and whether a hazardous driving condition exists.
    Type: Application
    Filed: January 25, 2016
    Publication date: July 27, 2017
    Inventors: Benjamin D. Briggs, Lawrence A. Clevenger, Leigh Anne H. Clevenger, Jonathan H. Connell, II, Nalini K. Ratha, Michael Rizzolo
  • Publication number: 20170213470
    Abstract: Techniques for motivating a user during a workout using different coaching styles are provided. In one aspect, a method for motivational coaching of a user during workout sessions includes the steps of: selecting a coaching style for the user based on input from the user and from coaching styles used for at least one other user; determining, during a workout session, whether the coaching style should be changed to enhance performance of the user based on data obtained from the user via a mobile device worn by the user; changing the coaching style if it is determined that the coaching style should be changed to enhance performance of the user; continuing with a current coaching style if it is determined that the coaching style should not be changed; and providing feedback to the user during the workout session based on the coaching style.
    Type: Application
    Filed: January 27, 2016
    Publication date: July 27, 2017
    Inventors: Benjamin D. Briggs, Lawrence A. Clevenger, Leigh Anne H. Clevenger, Jonathan H. Connell, II, Nalini K. Ratha, Michael Rizzolo
  • Publication number: 20170169303
    Abstract: An embodiment of the invention provides a method of analyzing an image of a user to determine whether the image is authentic, where a first image of a user's face is received with a camera. Four or more two-dimensional feature points can be located that do not lie on the same two-dimensional plane. Additional images of the user's face can be received; and, the at least four two-dimensional feature points can be located on each additional image with the image processor. The image processor can identify displacements between the two-dimensional feature points on the additional image and the two-dimensional feature points on the first image for each additional image. A processor can determine whether the displacements conform to a three-dimensional surface model. The processor can determine whether to authenticate the user based on the determination of whether the displacements conform to the three-dimensional surface model.
    Type: Application
    Filed: December 10, 2015
    Publication date: June 15, 2017
    Applicant: International Business Machines Corporation
    Inventors: Jonathan H. Connell, II, Nalini K. Ratha
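The core idea in this abstract, checking whether feature-point displacements conform to a 3-D surface model rather than a flat plane, can be illustrated with a toy parallax test: a flat photograph shifts all points by roughly the same amount, while a real face shows depth-dependent displacement. The point sets and tolerance below are invented for illustration, not the patent's actual model.

```python
# Toy liveness sketch: uniform displacements suggest a flat photo;
# point-dependent displacements (parallax) suggest a 3-D surface.
# Tolerance and coordinates are illustrative assumptions.

def displacements(first_points, second_points):
    """Per-point (dx, dy) between the first image and a later image."""
    return [(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(first_points, second_points)]

def conforms_to_3d_model(disps, tol=1.0):
    """True when displacements vary between points (depth parallax)."""
    dxs = [dx for dx, _ in disps]
    dys = [dy for _, dy in disps]
    spread = (max(dxs) - min(dxs)) + (max(dys) - min(dys))
    return spread > tol  # near-uniform shift -> likely a flat photo

face_a     = [(0, 0), (10, 0), (0, 10), (5, 5)]   # (5, 5) is the nearer nose tip
photo_move = [(2, 1), (12, 1), (2, 11), (7, 6)]   # uniform (2, 1) shift
real_move  = [(2, 1), (12, 1), (2, 11), (9, 7)]   # nose tip moved more

print(conforms_to_3d_model(displacements(face_a, photo_move)))  # False
print(conforms_to_3d_model(displacements(face_a, real_move)))   # True
```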
  • Publication number: 20170169727
    Abstract: Techniques for leveraging the capabilities of wearable mobile technology to collect data and to provide real-time feedback to an orator about his/her performance and/or audience interaction are provided. In one aspect, a method for providing real-time feedback to a speaker making a presentation to an audience includes the steps of: collecting real-time data from the speaker during the presentation, wherein the data is collected via a mobile device worn by the speaker; analyzing the real-time data collected from the speaker to determine whether corrective action is needed to improve performance; and generating a real-time alert to the speaker suggesting the corrective action if the real-time data indicates that corrective action is needed to improve performance, otherwise continuing to collect data from the speaker in real-time. Real-time data may also be collected from members of the audience and/or from other speakers (if present) via wearable mobile devices.
    Type: Application
    Filed: December 10, 2015
    Publication date: June 15, 2017
    Inventors: Benjamin D. Briggs, Lawrence A. Clevenger, Leigh Anne H. Clevenger, Jonathan H. Connell, II, Nalini K. Ratha, Michael Rizzolo
  • Patent number: 9678600
    Abstract: A display device comprises a plurality of light emitting elements in a layer on a substrate, a plurality of microprisms positioned over the layer, a plurality of light detectors on the substrate, each light detector respectively corresponding to a light emitting element of the plurality of light emitting elements, and a display screen, wherein the light detectors are used to sense at least one property of an item in contact with the display screen.
    Type: Grant
    Filed: April 4, 2014
    Date of Patent: June 13, 2017
    Assignee: International Business Machines Corporation
    Inventors: Carl E. Abrams, Jonathan H. Connell, II, Nalini K. Ratha
  • Publication number: 20170142101
    Abstract: An embodiment of the invention provides a method for secure biometrics matching with a split-phase client-server matching protocol, wherein a first biometric input is received in an electronic device. The first biometric input is stored in the electronic device as a biometric profile; and, the biometric profile is sent to a server. An additional biometric input is received from a user in the electronic device; and, the additional biometric input is compared to the biometric profile stored in the electronic device to generate a local matching score. The additional biometric input is sent to the server. The local matching score and a remote matching score generated by the server are compared; and, it is determined whether to authenticate the user based on the comparison of the local matching score and the remote matching score.
    Type: Application
    Filed: November 16, 2015
    Publication date: May 18, 2017
    Applicant: International Business Machines Corporation
    Inventors: Jonathan H. Connell, II, Jae-Eun Park, Nalini K. Ratha
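The split-phase comparison this abstract describes, where a device-side score and a server-side score must agree before the user is authenticated, can be sketched as follows. The similarity measure, agreement tolerance, and acceptance threshold are all illustrative assumptions.

```python
# Sketch of split-phase matching: the device computes a local score against
# its stored profile, the server computes a remote score against its copy,
# and authentication requires the two scores to agree and both be strong.
# Scoring function and thresholds are assumptions for illustration.

def match_score(profile, sample):
    """Toy similarity: 1.0 for identical feature vectors, lower otherwise."""
    diffs = sum(abs(a - b) for a, b in zip(profile, sample))
    return max(0.0, 1.0 - diffs / len(profile))

def authenticate(local_score, remote_score, agree_tol=0.1, accept=0.8):
    """Accept only if both phases agree and the match is strong enough."""
    return (abs(local_score - remote_score) <= agree_tol
            and min(local_score, remote_score) >= accept)

profile = [0.2, 0.4, 0.6]
sample  = [0.2, 0.4, 0.6]
local  = match_score(profile, sample)    # computed on the device
remote = match_score(profile, sample)    # computed on the server
print(authenticate(local, remote))       # True

tampered_remote = 0.3                    # inconsistent score -> reject
print(authenticate(local, tampered_remote))  # False
```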
  • Publication number: 20170140629
    Abstract: The present invention provides techniques for leveraging the sensing capabilities of wearable mobile technology, such as a smartwatch, to provide real-time harm prevention. In one aspect of the invention, a method for harm prevention is provided. The method includes the steps of: collecting real-time data from at least one user, wherein the data is collected via a mobile device worn by the user (e.g., a smartwatch); analyzing the real-time data collected from the user to determine whether the real-time data indicates an emergency situation exists; and undertaking an appropriate action if the real-time data indicates that an emergency situation exists, otherwise continuing to collect data from the user in real-time. Third party data relating to potential sources of harm to the user may also be obtained (e.g., from a weather service, newsfeeds, etc.).
    Type: Application
    Filed: November 18, 2015
    Publication date: May 18, 2017
    Inventors: Benjamin D. Briggs, Lawrence A. Clevenger, Leigh Anne H. Clevenger, Jonathan H. Connell, II, Nalini K. Ratha, Michael Rizzolo
  • Publication number: 20170134832
    Abstract: Techniques for modifying user behavior and screening for impairment using a mobile feedback controller, such as a smartwatch, are provided. In one aspect, a method for monitoring a user includes the steps of: collecting real-time data from the user, wherein the data is collected via a mobile feedback controller worn by the user; determining whether the data collected from the user indicates impairment; determining appropriate corrective actions to be taken if the data collected from the user indicates impairment, otherwise continuing to collect data from the user in real-time; determining whether any action is needed; and undertaking the appropriate corrective actions if action is needed, otherwise continuing to collect data from the user in real-time.
    Type: Application
    Filed: November 6, 2015
    Publication date: May 11, 2017
    Inventors: Benjamin D. Briggs, Lawrence A. Clevenger, Leigh Anne H. Clevenger, Jonathan H. Connell, II, Nalini K. Ratha, Michael Rizzolo
  • Publication number: 20170133016
    Abstract: A method includes the following steps. A speech input is received. At least two speech recognition candidates are generated from the speech input. A scene related to the speech input is observed using one or more non-acoustic sensors. The observed scene is segmented into one or more regions. One or more properties for the one or more regions are computed. One of the speech recognition candidates is selected based on the one or more computed properties of the one or more regions.
    Type: Application
    Filed: January 13, 2017
    Publication date: May 11, 2017
    Inventors: Jonathan H. Connell, II, Etienne Marcheret
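The selection step shared by this publication and the two related grants below (picking among acoustically similar recognition candidates using properties computed from segmented scene regions) might look like the toy below. The candidates, regions, and property names are made up for illustration; a real system would use recognizer scores and vision-derived features.

```python
# Illustrative sketch: prefer the speech-recognition candidate whose words
# best match properties computed for regions segmented from a camera view.
# All inputs here are invented examples.

def select_candidate(candidates, region_properties):
    """Pick the candidate whose words best overlap observed scene properties."""
    observed = set()
    for props in region_properties:
        observed.update(props)

    def score(candidate):
        return sum(1 for word in candidate.split() if word in observed)

    return max(candidates, key=score)

# Two candidates the recognizer cannot separate acoustically:
candidates = ["grab the red block", "grab the bread block"]
# Properties computed from segmented camera regions:
regions = [{"red", "block", "small"}, {"table", "wood"}]
print(select_candidate(candidates, regions))  # "grab the red block"
```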
  • Patent number: 9632589
    Abstract: A method includes the following steps. A speech input is received. At least two speech recognition candidates are generated from the speech input. A scene related to the speech input is observed using one or more non-acoustic sensors. The observed scene is segmented into one or more regions. One or more properties for the one or more regions are computed. One of the speech recognition candidates is selected based on the one or more computed properties of the one or more regions.
    Type: Grant
    Filed: July 1, 2015
    Date of Patent: April 25, 2017
    Assignee: International Business Machines Corporation
    Inventors: Jonathan H. Connell, II, Etienne Marcheret
  • Patent number: 9626001
    Abstract: A method includes the following steps. A speech input is received. At least two speech recognition candidates are generated from the speech input. A scene related to the speech input is observed using one or more non-acoustic sensors. The observed scene is segmented into one or more regions. One or more properties for the one or more regions are computed. One of the speech recognition candidates is selected based on the one or more computed properties of the one or more regions.
    Type: Grant
    Filed: November 13, 2014
    Date of Patent: April 18, 2017
    Assignee: International Business Machines Corporation
    Inventors: Jonathan H. Connell, II, Etienne Marcheret
  • Publication number: 20170094179
    Abstract: A processor may record a first location at an event with at least one person. The processor may monitor a plurality of actions of that at least one person at the first location. The processor may interpret at least one action of the at least one person that indicates a change of interest to a second location at the event. Based on the at least one action, the processor may determine the second location at the event. The processor may record the second location at the event.
    Type: Application
    Filed: September 24, 2015
    Publication date: March 30, 2017
    Inventors: Rachel K. E. Bellamy, Jonathan H. Connell, II, Robert G. Farrell, Brian P. Gaucher, Jonathan Lenchner, David O. S. Melville, Valentina Salapura
  • Publication number: 20170017843
    Abstract: A system and method for generating compact iris representations based on a database of iris images includes providing full-length iris codes for iris images in a database, where the full-length iris code includes a plurality of portions corresponding to circumferential rings in an associated iris image. Genuine and imposter score distributions are computed for the full-length iris codes, and code portions are identified that have a contribution that provides separation between imposter and genuine distributions relative to a threshold. A correlation between remaining code portions is measured. A subset of code portions having low correlations within the subset is generated to produce a compact iris representation.
    Type: Application
    Filed: September 27, 2016
    Publication date: January 19, 2017
    Inventors: Jonathan H. Connell, II, James E. Gentile, Nalini K. Ratha
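The two-stage selection this abstract outlines (keep code portions whose genuine-vs-imposter separation clears a threshold, then retain only a low-correlation subset of those) can be sketched greedily as below. The separation scores, correlation matrix, and thresholds are made-up inputs, not values computed from real iris data.

```python
# Sketch of the selection pipeline: threshold portions on genuine/imposter
# separation, then greedily drop portions highly correlated with one
# already kept. Inputs and thresholds are illustrative assumptions.

def compact_portions(separation, correlation, sep_threshold=0.5, corr_max=0.8):
    """Return indices of portions forming a compact, low-redundancy subset."""
    # Step 1: portions with enough genuine-vs-imposter separation
    candidates = [i for i, s in enumerate(separation) if s >= sep_threshold]
    # Step 2: greedily keep portions not too correlated with the kept set
    kept = []
    for i in candidates:
        if all(correlation[i][j] < corr_max for j in kept):
            kept.append(i)
    return kept

separation = [0.9, 0.2, 0.7, 0.6]              # per-portion separation scores
correlation = [[1.0, 0.1, 0.9, 0.2],           # pairwise portion correlations
               [0.1, 1.0, 0.3, 0.4],
               [0.9, 0.3, 1.0, 0.2],
               [0.2, 0.4, 0.2, 1.0]]
print(compact_portions(separation, correlation))  # [0, 3]
```

Portion 1 is dropped for weak separation, and portion 2, though well-separated, is dropped because it is nearly redundant with portion 0.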
  • Publication number: 20170017640
    Abstract: A computer-implemented method manages drop-ins on conversations near a focal point of proximal activity in a gathering place. One or more processors receive a first set of sensor data from one or more sensors in a gathering place, and then identify a focal point of proximal activity based on the first set of received sensor data received from the one or more sensors. One or more processors characterize a conversation near the focal point based on a second set of received sensor data from the one or more sensors, and then present a characterization of the conversation to an electronic device. One or more processors enable the electronic device to allow a user to drop in on the conversation.
    Type: Application
    Filed: July 13, 2015
    Publication date: January 19, 2017
    Inventors: Rachel K. E. Bellamy, Jonathan H. Connell, II, Robert G. Farrell, Brian P. Gaucher, Jonathan Lenchner, David O. S. Melville, Valentina Salapura
  • Publication number: 20160358609
    Abstract: A method includes the following steps. An acoustic input is obtained from a user, including issuing a verbal prompt to the user and receiving the acoustic input from the user in response to the verbal prompt. One or more acoustic representations are obtained, wherein the one or more acoustic representations are generated from a list of expected responses to the issued verbal prompt. The acoustic input from the user is compared to the one or more acoustic representations. One or more speech recognition parameters are adjusted based on the comparison.
    Type: Application
    Filed: June 2, 2015
    Publication date: December 8, 2016
    Inventors: Jonathan H. Connell, II, Etienne Marcheret
  • Publication number: 20160358601
    Abstract: A method includes the following steps. An acoustic input is obtained from a user, including issuing a verbal prompt to the user and receiving the acoustic input from the user in response to the verbal prompt. One or more acoustic representations are obtained, wherein the one or more acoustic representations are generated from a list of expected responses to the issued verbal prompt. The acoustic input from the user is compared to the one or more acoustic representations. One or more speech recognition parameters are adjusted based on the comparison.
    Type: Application
    Filed: June 30, 2015
    Publication date: December 8, 2016
    Inventors: Jonathan H. Connell, II, Etienne Marcheret
  • Patent number: 9514354
    Abstract: One or more processors generate a set of facial appearance parameters that are derived from a first facial image. One or more processors generate a graphics control vector based, at least in part, on the set of facial appearance parameters. One or more processors render a second facial image based on the graphics control vector. One or more processors compare the second facial image to the first facial image. One or more processors generate an adjusted vector by adjusting one or more parameters of the graphics control vector such that a degree of similarity between the second facial image and the first facial image is increased. The adjusted vector includes a biometric portion. One or more processors generate a first face representation based, at least in part, on the biometric portion of the adjusted vector.
    Type: Grant
    Filed: December 5, 2014
    Date of Patent: December 6, 2016
    Assignee: International Business Machines Corporation
    Inventors: Jonathan H. Connell, II, Sharathchandra U. Pankanti, Nalini K. Ratha
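The analysis-by-synthesis loop this abstract describes (render an image from a control vector, compare it to the input image, and adjust the vector until similarity increases, then keep the biometric portion) can be illustrated with a toy fitter. The "renderer" below is a stand-in linear function and the biometric split is assumed; a real pipeline would use actual graphics rendering.

```python
# Toy analysis-by-synthesis sketch: nudge a control vector so a stand-in
# "rendered" image better matches the target image, then keep the
# (assumed) biometric portion of the adjusted vector.

def render(vector):
    """Stand-in renderer: image pixels as a simple function of the vector."""
    return [2.0 * v for v in vector]

def similarity(img_a, img_b):
    """Negative squared error: larger means more similar."""
    return -sum((a - b) ** 2 for a, b in zip(img_a, img_b))

def fit_vector(target_image, vector, steps=200, lr=0.05):
    """Nudge each parameter in whichever direction increases similarity."""
    for _ in range(steps):
        for i in range(len(vector)):
            for delta in (lr, -lr):
                trial = list(vector)
                trial[i] += delta
                if similarity(render(trial), target_image) > \
                   similarity(render(vector), target_image):
                    vector = trial
    return vector

target = render([0.5, 1.0, -0.25])        # image of the "first" face
adjusted = fit_vector(target, [0.0, 0.0, 0.0])
biometric = adjusted[:2]                  # assume first 2 entries are biometric
print([round(v, 2) for v in adjusted])    # converges near [0.5, 1.0, -0.25]
```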