Patents by Inventor Jeffrey P. Bigham
Jeffrey P. Bigham has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11789528
Abstract: Calibration of eye tracking is improved by collecting additional calibration pairs while a user is using apps with eye tracking. A user input component is presented on a display of an electronic device, a dwelling action is detected for the user input component, and in response to detecting the dwelling action, a calibration pair is obtained comprising an uncalibrated gaze point and a screen location of the user input component, wherein the uncalibrated gaze point is determined based on an eye pose during the dwelling action. A screen gaze estimation is determined based on the uncalibrated gaze point, and in response to determining that the calibration pair is a valid calibration pair, a calibration model is trained using the calibration pair.
Type: Grant
Filed: August 30, 2021
Date of Patent: October 17, 2023
Assignee: Apple Inc.
Inventors: Jeffrey P. Bigham, Mingzhe Li, Samuel C. White, Xiaoyi Zhang, Qi Shan, Carlos E. Guestrin
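The abstract above describes pairing a raw gaze point with a known on-screen target whenever the user dwells on a UI component, then validating the pair before training. A minimal Python sketch of that flow, assuming a simple gaze-dispersion test for validity (the function names, threshold, and data shapes are illustrative, not taken from the patent):

```python
from dataclasses import dataclass

@dataclass
class CalibrationPair:
    uncalibrated_gaze: tuple  # centroid of raw gaze samples during the dwell
    target_location: tuple    # known screen location of the UI component

def collect_calibration_pair(gaze_samples, component_xy, max_spread_px=45):
    """Pair the centroid of dwell-time gaze samples with the component's
    screen location, rejecting dispersed (invalid) dwells."""
    n = len(gaze_samples)
    cx = sum(x for x, _ in gaze_samples) / n
    cy = sum(y for _, y in gaze_samples) / n
    spread = max(abs(x - cx) + abs(y - cy) for x, y in gaze_samples)
    if spread > max_spread_px:  # validity check before any training happens
        return None
    return CalibrationPair((cx, cy), component_xy)
```

Valid pairs would then be fed to whatever calibration model the system uses; the patent does not specify the model.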
-
Publication number: 20210349587
Abstract: Representative embodiments set forth techniques for optimizing user interfaces on a client device. A method may include receiving a spatial difficulty map associated with the user interface. The method also includes identifying one or more user interface elements using an element detection model and generating a user interface layout based on at least the spatial difficulty map. The method also includes generating an updated user interface by editing the one or more user interface elements using the user interface layout and rendering, on a display of the client device, the updated user interface.
Type: Application
Filed: October 9, 2020
Publication date: November 11, 2021
Inventors: Jeffrey P. Bigham, Colin S. Lea, Jason Wu, Xiaoyi Zhang
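The abstract above describes generating a layout from a spatial difficulty map plus detected elements. One hedged way to sketch that in Python is greedy easiest-first placement over a difficulty grid (the grid representation and placement rule are assumptions for illustration; the publication does not specify them):

```python
def generate_layout(difficulty_map, elements):
    """Assign each detected UI element to the grid cell with the lowest
    remaining spatial difficulty (easiest cells get elements first)."""
    cells = sorted(
        ((d, (row, col))
         for row, cols in enumerate(difficulty_map)
         for col, d in enumerate(cols)),
        key=lambda pair: pair[0])
    return {element: cell for element, (_, cell) in zip(elements, cells)}
```

A renderer would then edit each element to occupy its assigned cell and redraw the interface.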
-
Patent number: 11106280
Abstract: Calibration of eye tracking is improved by collecting additional calibration pairs while a user is using apps with eye tracking. A user input component is presented on a display of an electronic device, a dwelling action is detected for the user input component, and in response to detecting the dwelling action, a calibration pair is obtained comprising an uncalibrated gaze point and a screen location of the user input component, wherein the uncalibrated gaze point is determined based on an eye pose during the dwelling action. A screen gaze estimation is determined based on the uncalibrated gaze point, and in response to determining that the calibration pair is a valid calibration pair, a calibration model is trained using the calibration pair.
Type: Grant
Filed: September 21, 2020
Date of Patent: August 31, 2021
Assignee: Apple Inc.
Inventors: Jeffrey P. Bigham, Mingzhe Li, Samuel C. White, Xiaoyi Zhang, Qi Shan, Carlos E. Guestrin
-
Patent number: 10740540
Abstract: Techniques for programmatically magnifying one or more visible content elements of at least one markup language document, so as to increase the display size of those visible content elements. A magnification facility may be configured to apply multiple different zoom techniques. The magnification facility may be configured to evaluate the markup language document(s) at a time that the document(s) are being processed for display to select which of the multiple different zoom techniques may be applied at a time to increase a display size of visible content elements relative to a default display size for those elements. The magnification facility may be incorporated within the markup language document(s) and executed by a viewing application that processes markup language documents. For example, the markup language document(s) may form a web page and the magnification facility may be implemented as scripting language code incorporated into the document(s) of the web page.
Type: Grant
Filed: December 12, 2014
Date of Patent: August 11, 2020
Assignee: Freedom Scientific, Inc.
Inventors: Aaron M. Leventhal, Jeffrey P. Bigham
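The core idea above is a facility that inspects the document at display time and picks one of several zoom techniques. A hedged Python sketch of that selection pattern (the precondition functions, the document dictionary, and the two example techniques are illustrative assumptions, not the patent's actual criteria):

```python
def select_zoom_technique(doc, techniques):
    """Evaluate the document at display time and return the first zoom
    technique whose precondition applies, falling back to the last one."""
    for applies, apply_zoom in techniques:
        if applies(doc):
            return apply_zoom
    return techniques[-1][1]

def full_page_zoom(doc, factor):
    return {**doc, "scale": factor}          # scale everything uniformly

def text_only_zoom(doc, factor):
    return {**doc, "font_px": doc["font_px"] * factor}  # enlarge text only

TECHNIQUES = [
    (lambda d: d["layout"] == "fluid", full_page_zoom),  # safe to scale all
    (lambda d: True, text_only_zoom),                    # conservative default
]
```

In the patented design this logic would live in scripting code embedded in the page itself rather than in a separate program.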
-
Patent number: 10657385
Abstract: The disclosure describes a sensor system that provides end users with intelligent sensing capabilities, and embodies both crowdsourcing and machine learning together. Further, a sporadic crowd assessment is used to ensure continued sensor accuracy when the system is relying on machine learning analysis. This sensor approach requires minimal and non-permanent sensor installation by utilizing any device with a camera as a sensor host, and provides human-centered and actionable sensor output.
Type: Grant
Filed: March 25, 2016
Date of Patent: May 19, 2020
Assignees: Carnegie Mellon University, a Pennsylvania Non-Profit Corporation; University of Rochester
Inventors: Gierad Laput, Christopher Harrison, Jeffrey P. Bigham, Walter S. Lasecki, Bo Robert Xiao, Jason Wiese
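The "sporadic crowd assessment" above means the learned model answers most queries, but a fraction are re-checked against crowd labels to detect accuracy drift. A minimal Python sketch of that loop (class name, audit rate, and retraining threshold are illustrative assumptions):

```python
import random

class HybridSensor:
    """Answer sensor queries with a learned model, but sporadically audit
    the model against crowd labels so accuracy drift is caught."""
    def __init__(self, model, crowd, audit_rate=0.1):
        self.model, self.crowd, self.audit_rate = model, crowd, audit_rate
        self.audits = []  # True where model and crowd agreed

    def classify(self, frame):
        label = self.model(frame)
        if random.random() < self.audit_rate:  # sporadic crowd assessment
            self.audits.append(label == self.crowd(frame))
        return label

    def needs_retraining(self, min_accuracy=0.8, window=20):
        recent = self.audits[-window:]
        return bool(recent) and sum(recent) / len(recent) < min_accuracy
```

When `needs_retraining()` fires, the crowd answers collected so far could serve as fresh training labels, which is the crowd-plus-ML combination the abstract describes.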
-
Patent number: 10366147
Abstract: Techniques for programmatically magnifying one or more visible content elements of at least one markup language document, so as to increase the display size of those visible content elements. A magnification facility may be configured to apply multiple different zoom techniques. The magnification facility may be configured to evaluate the markup language document(s) at a time that the document(s) are being processed for display to select which of the multiple different zoom techniques may be applied at a time to increase a display size of visible content elements relative to a default display size for those elements. The magnification facility may be incorporated within the markup language document(s) and executed by a viewing application that processes markup language documents. For example, the markup language document(s) may form a web page and the magnification facility may be implemented as scripting language code incorporated into the document(s) of the web page.
Type: Grant
Filed: December 12, 2014
Date of Patent: July 30, 2019
Assignee: Freedom Scientific, Inc.
Inventors: Aaron M. Leventhal, Jeffrey P. Bigham, Brian Watson
-
Publication number: 20180107879
Abstract: The disclosure describes a sensor system that provides end users with intelligent sensing capabilities, and embodies both crowdsourcing and machine learning together. Further, a sporadic crowd assessment is used to ensure continued sensor accuracy when the system is relying on machine learning analysis. This sensor approach requires minimal and non-permanent sensor installation by utilizing any device with a camera as a sensor host, and provides human-centered and actionable sensor output.
Type: Application
Filed: March 25, 2016
Publication date: April 19, 2018
Applicant: Carnegie Mellon University, a Pennsylvania Non-Profit Corporation
Inventors: Gierad Laput, Christopher Harrison, Jeffrey P. Bigham, Walter S. Lasecki, Bo Robert Xiao, Jason Wiese
-
Patent number: 9336753
Abstract: In a computer system receiving input from a user via at least a keyboard and a pointing device, in which input via the pointing device causes corresponding movement of a pointing image on a display screen of the computer system, user input may be received via the pointing device to point the pointing image at an onscreen object on the display screen. In response to the user activating a key on the keyboard while the pointing image is pointing at the onscreen object, a secondary action with respect to the onscreen object may be executed.
Type: Grant
Filed: September 4, 2013
Date of Patent: May 10, 2016
Assignee: AI Squared
Inventors: Aaron M. Leventhal, Jeffrey P. Bigham
-
Patent number: 9196227
Abstract: In a computer system having a display screen configured to display visual content, a plurality of techniques may be identified to be considered for enhancing visual accessibility of a particular collection of visual content to be displayed to an end user on the display screen. For each technique, an algorithm may be applied to compute one or more measures of health of the display of the particular collection of visual content resulting from applying the respective technique to enhance the visual accessibility of the particular collection of visual content. Based at least in part on the computed measures of health, one or more best techniques may be selected and applied to enhance the visual accessibility of the particular collection of visual content. The enhanced particular collection of visual content may be displayed to the end user on the display screen.
Type: Grant
Filed: September 4, 2013
Date of Patent: November 24, 2015
Assignee: AI Squared
Inventors: Aaron M. Leventhal, Jeffrey P. Bigham
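The selection process above — score every candidate enhancement by "health" measures of the resulting display, then keep the best — can be sketched in a few lines of Python. The two example techniques and the two toy metrics are illustrative assumptions; the patent does not define concrete health measures:

```python
def select_best_techniques(content, techniques, health_metrics, k=1):
    """Score each enhancement technique by the summed 'health' measures of
    the display it would produce, and keep the top k techniques."""
    def health(technique):
        enhanced = technique(dict(content))  # copy so scoring has no side effects
        return sum(metric(enhanced) for metric in health_metrics)
    return sorted(techniques, key=health, reverse=True)[:k]

def enlarge(c):
    c["font_px"] *= 2
    return c

def boost_contrast(c):
    c["contrast"] = min(1.0, c["contrast"] + 0.4)
    return c
```

The winning techniques would then actually be applied before the content is displayed to the end user.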
-
Publication number: 20150169520
Abstract: Techniques for programmatically magnifying one or more visible content elements of at least one markup language document, so as to increase the display size of those visible content elements. A magnification facility may be configured to apply multiple different zoom techniques. The magnification facility may be configured to evaluate the markup language document(s) at a time that the document(s) are being processed for display to select which of the multiple different zoom techniques may be applied at a time to increase a display size of visible content elements relative to a default display size for those elements. The magnification facility may be incorporated within the markup language document(s) and executed by a viewing application that processes markup language documents. For example, the markup language document(s) may form a web page and the magnification facility may be implemented as scripting language code incorporated into the document(s) of the web page.
Type: Application
Filed: December 12, 2014
Publication date: June 18, 2015
Applicant: AI Squared
Inventors: Aaron M. Leventhal, Jeffrey P. Bigham, Brian Watson
-
Publication number: 20150169521
Abstract: Techniques for programmatically magnifying one or more visible content elements of at least one markup language document, so as to increase the display size of those visible content elements. A magnification facility may be configured to apply multiple different zoom techniques. The magnification facility may be configured to evaluate the markup language document(s) at a time that the document(s) are being processed for display to select which of the multiple different zoom techniques may be applied at a time to increase a display size of visible content elements relative to a default display size for those elements. The magnification facility may be incorporated within the markup language document(s) and executed by a viewing application that processes markup language documents. For example, the markup language document(s) may form a web page and the magnification facility may be implemented as scripting language code incorporated into the document(s) of the web page.
Type: Application
Filed: December 12, 2014
Publication date: June 18, 2015
Applicant: AI Squared
Inventors: Aaron M. Leventhal, Jeffrey P. Bigham
-
Publication number: 20140063071
Abstract: In a computer system having originating software configured to provide visual content for display on a display screen and enhancement software configured to apply enhancements to the visual content for display on the display screen, the visual content may be magnified, via the enhancement software, to a magnification level different from a size at which the visual content is provided by the originating software. In response to an instruction to change the magnification level, the magnification level to which the visual content is magnified via the enhancement software may be changed, and one or more enhancements, other than magnification, applied to the visual content via the enhancement software may also be changed.
Type: Application
Filed: September 4, 2013
Publication date: March 6, 2014
Applicant: AI Squared
Inventors: Aaron M. Leventhal, Jeffrey P. Bigham
-
Publication number: 20140068525
Abstract: In a computer system receiving input from a user via at least a keyboard and a pointing device, in which input via the pointing device causes corresponding movement of a pointing image on a display screen of the computer system, user input may be received via the pointing device to point the pointing image at an onscreen object on the display screen. In response to the user activating a key on the keyboard while the pointing image is pointing at the onscreen object, a secondary action with respect to the onscreen object may be executed.
Type: Application
Filed: September 4, 2013
Publication date: March 6, 2014
Applicant: AI Squared
Inventors: Aaron M. Leventhal, Jeffrey P. Bigham
-
Publication number: 20140063070
Abstract: In a computer system having a display screen configured to display visual content, a plurality of techniques may be identified to be considered for enhancing visual accessibility of a particular collection of visual content to be displayed to an end user on the display screen. For each technique, an algorithm may be applied to compute one or more measures of health of the display of the particular collection of visual content resulting from applying the respective technique to enhance the visual accessibility of the particular collection of visual content. Based at least in part on the computed measures of health, one or more best techniques may be selected and applied to enhance the visual accessibility of the particular collection of visual content. The enhanced particular collection of visual content may be displayed to the end user on the display screen.
Type: Application
Filed: September 4, 2013
Publication date: March 6, 2014
Applicant: AI Squared
Inventors: Aaron M. Leventhal, Jeffrey P. Bigham
-
Publication number: 20130317818
Abstract: Methods and systems for captioning speech in real time are provided. Embodiments utilize captionists, who may be non-expert captionists, to transcribe a speech using a worker interface. Each worker is provided with the speech or portions of the speech, and is asked to transcribe all or portions of what they receive. The transcriptions received from each worker are aligned and combined to create a resulting caption. Automated speech recognition systems may be integrated by serving in the role of one or more workers, or integrated in other ways. Workers may work locally (able to hear the speech) and/or remotely, the speech being provided to them as an audio stream. Worker performance may be measured and used to provide feedback into the system such that overall performance is improved.
Type: Application
Filed: May 24, 2013
Publication date: November 28, 2013
Inventors: Jeffrey P. Bigham, Walter Lasecki
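The key step above is aligning and combining partial transcriptions from several workers into one caption. A deliberately simplified Python sketch, assuming each worker emits timestamped words and that disagreements at the same time slot are resolved by majority vote (the real alignment in such systems is more sophisticated than keying on exact timestamps):

```python
from collections import Counter, defaultdict

def merge_captions(worker_outputs):
    """Align worker transcriptions by audio timestamp and resolve
    disagreements at each time slot by majority vote."""
    slots = defaultdict(list)
    for words in worker_outputs:
        for t, word in words:
            slots[t].append(word.lower())
    return [Counter(words).most_common(1)[0][0]
            for _, words in sorted(slots.items())]
```

An automated speech recognizer can participate simply by being one more entry in `worker_outputs`, which mirrors the abstract's point about ASR serving in the role of a worker.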
-
Publication number: 20120245952
Abstract: Methods of crowdsourcing medical expert information are described. One such method includes receiving, from a first user during a first session over a network, a first query that includes a request for information regarding a patient condition, a therapy, and/or a medical test, and transmitting the query to a plurality of medical personnel. The first query further includes first session parameters selected by the first user from categories displayed by a processor to the first user. The method also includes receiving responses to the query from the plurality of medical personnel, and changing, based on the responses, an assigned probability of presenting to an index user, in response to a second query and during another session, at least one of a suggested diagnosis, a suggested therapy, a suggested inquiry to aid in establishing a diagnosis, a suggested medical test, a recommended source for further information, or a recommended medical professional.
Type: Application
Filed: March 23, 2012
Publication date: September 27, 2012
Applicant: University of Rochester
Inventors: Marc W. Halterman, Jeffrey P. Bigham, Henry Kautz, James W. Hill
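The mechanism above — adjusting the probability of presenting a suggestion based on responses from medical personnel — can be illustrated with a simple vote-weighted update. This is a guess at one plausible update rule, not the publication's actual method; the function name and weighting scheme are invented for illustration:

```python
from collections import Counter

def update_presentation_probabilities(probabilities, responses):
    """Raise each suggestion's presentation probability in proportion to
    how many responding medical personnel endorsed it, then renormalize."""
    votes = Counter(responses)
    weighted = {s: p * (1 + votes[s]) for s, p in probabilities.items()}
    total = sum(weighted.values())
    return {s: w / total for s, w in weighted.items()}
```

On a later session, suggestions would be presented to the index user in order of their updated probabilities.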
-
Publication number: 20110196853
Abstract: A computer-implemented method for automatically generating a script for a target web interface instance. Embodiments include receiving a task description of a task to be completed on a target web interface instance, then repeating steps until the task is completed. The repeated steps include determining, from the target web interface instance, a plurality of actions that may be performed on it and, using the task description, predicting which of those actions is most likely to be selected. The repeated steps also include performing the action most likely to be selected, thus proceeding to a first web interface instance, and setting the first web interface instance as the target web interface instance.
Type: Application
Filed: February 8, 2010
Publication date: August 11, 2011
Applicant: International Business Machines Corporation
Inventors: Jeffrey P. Bigham, Clemens Drews, Tessa A. Lau, Ian A. R. Li, Jeffrey W. Nichols
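The predict-and-perform loop above can be sketched compactly in Python. The word-overlap scorer stands in for whatever predictor the embodiments actually use (a trained model is far more likely in practice), and the function names and step budget are illustrative:

```python
def predict_next_action(task_description, candidate_actions):
    """Rank the actions available on the current web interface instance
    by word overlap with the task description and pick the best."""
    task_words = set(task_description.lower().split())
    return max(candidate_actions,
               key=lambda a: len(task_words & set(a.lower().split())))

def run_task(task_description, get_actions, perform, max_steps=10):
    """Repeat predict-and-perform until no actions remain or the step
    budget is exhausted, mirroring the abstract's loop."""
    for _ in range(max_steps):
        actions = get_actions()
        if not actions:
            break
        perform(predict_next_action(task_description, actions))
```

Recording the sequence of performed actions yields the generated script for replaying the task later.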