Patents by Inventor Timothy Thomas Gray

Timothy Thomas Gray has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9721031
Abstract: Devices, systems and methods are disclosed for anchoring bookmarks to individual words for precise positioning within electronic documents. The bookmarks may be anchored based on user input selecting particular words, based on gaze tracking identifying most recently read words, or based on estimated reading speed. The bookmarks may be a link used to navigate within the document, may be used as an anchor for a new layout after content reflow or may be automatically saved when the e-reader turns off the display to provide the user with a most recently read passage. If a bookmark is not anchored to specific words by the user, the device may anchor the bookmark to the beginning of a sentence or a paragraph including the recently read words determined using gaze tracking or estimated reading speed.
    Type: Grant
    Filed: February 25, 2015
    Date of Patent: August 1, 2017
    Assignee: Amazon Technologies, Inc.
    Inventors: Stanton Todd Marcum, Michael Patrick Bacus, Timothy Thomas Gray
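The reading-speed fallback described in the abstract can be sketched as follows. This is an illustrative sketch only, not code from the patent; the function names and the words-per-minute figure are assumptions.

```python
def estimate_last_read_word(start_word, words_per_minute, elapsed_seconds):
    """Estimate the index of the most recently read word from reading speed."""
    words_read = int(words_per_minute / 60.0 * elapsed_seconds)
    return start_word + words_read

def anchor_to_sentence_start(word_index, sentence_starts):
    """Snap a bookmark to the start of the sentence containing word_index,
    as the abstract describes for bookmarks not anchored by the user."""
    anchor = sentence_starts[0]
    for start in sentence_starts:
        if start <= word_index:
            anchor = start
        else:
            break
    return anchor
```

For example, a reader at 240 words per minute who began at word 0 and read for 30 seconds would be estimated at word 120, and the bookmark would snap back to the nearest preceding sentence boundary.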
  • Patent number: 9671941
    Abstract: A computing device can utilize a recognition mode wherein an interface utilizes graphical elements, such as virtual fireflies, to indicate recognized or identified objects. Fireflies can be displayed near an input element to indicate that a recognition mode is available. When a user selects the input element, the fireflies can appear to emanate from the input element and disperse across the display screen. As objects are recognized, fireflies can create bounding boxes around those objects, or otherwise appear proximate those objects, to indicate recognition. The fireflies can again disperse as the objects fall out of view, and can begin moving towards new objects as features of those objects are identified as potential object features. A subsequent selection of the input element to exit recognition mode can cause the fireflies to appear to retreat to their original location in, or near, the input element.
    Type: Grant
    Filed: December 20, 2013
    Date of Patent: June 6, 2017
    Assignee: Amazon Technologies, Inc.
    Inventors: Timothy Thomas Gray, Charles Eugene Cummins, Russell Edward Glaser, Tito Pagan, Steven Michael Sommer, Brian Peter Kralyevich, Angela Kathleen Warren, Marc Anthony Salazar, Suzan Marashi
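The firefly motion the abstract describes, elements drifting toward newly recognized objects and retreating to the input element on exit, reduces to simple per-frame interpolation. A minimal sketch, with all names and the step fraction assumed for illustration:

```python
def step_toward(firefly, target, fraction=0.2):
    """Move a firefly a fraction of the remaining distance toward a target
    point each frame, so it appears to drift toward a recognized object."""
    fx, fy = firefly
    tx, ty = target
    return (fx + (tx - fx) * fraction, fy + (ty - fy) * fraction)

def retreat_all(fireflies, input_element_pos):
    """On exiting recognition mode, send every firefly back to the input
    element it originally emanated from."""
    return [input_element_pos for _ in fireflies]
```

Repeated calls to `step_toward` converge each firefly on its object's bounding box; swapping the target communicates that recognition has moved to a new object.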
  • Patent number: 9600720
    Abstract: Processes such as image matching, computer vision, and object recognition can utilize additional data, such as spatial data, to attempt to improve the accuracy of the results of those processes. For example, a computing device acquiring scene data including a representation of an object can also determine spatial data (e.g., location and orientation data). By determining the spatial data, a set of potential matches can be found which can help to more quickly and accurately identify the object based on one or more objects known to be at a corresponding position. The data acquired by the computing device can also be used to update matching data stored for that location, which can assist with subsequent processing.
    Type: Grant
    Filed: March 18, 2014
    Date of Patent: March 21, 2017
    Assignee: Amazon Technologies, Inc.
    Inventor: Timothy Thomas Gray
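The core idea, pruning the image-match candidate set to objects known to be near the device's position, can be sketched as a spatial pre-filter. A hypothetical illustration, not the patent's method; the equirectangular distance approximation assumes short distances:

```python
import math

EARTH_RADIUS_M = 6_371_000

def nearby_candidates(candidates, device_pos, radius_m):
    """Restrict image-match candidates to objects whose known locations fall
    within radius_m of the device, shrinking the set to be matched."""
    lat0, lon0 = device_pos
    result = []
    for name, (lat, lon) in candidates.items():
        # rough planar distance in metres; adequate at city scale
        dx = math.radians(lon - lon0) * math.cos(math.radians(lat0)) * EARTH_RADIUS_M
        dy = math.radians(lat - lat0) * EARTH_RADIUS_M
        if math.hypot(dx, dy) <= radius_m:
            result.append(name)
    return result
```

Running full object recognition only against the filtered set is what yields the speed and accuracy gains the abstract claims.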
  • Patent number: 9471154
    Abstract: A computing device can obtain information about how the device is held, moved, and/or used by a hand of a user holding the device. The information can be obtained utilizing one or more sensors of the device independently or working in conjunction. For example, an orientation sensor can determine whether a left hand or a right hand is likely rotating, tilting, and/or moving, and thus holding, the device. In another example, a camera and/or a hover sensor can obtain information about a finger position of the user's hand to determine whether the hand is likely a left hand or a right hand. In a further example, a touch sensor can determine a shape of an imprint of a portion of the user's hand to determine which hand is likely holding the device. Based on which hand is holding the device, the device can improve one or more computing tasks.
    Type: Grant
    Filed: September 25, 2015
    Date of Patent: October 18, 2016
    Assignee: Amazon Technologies, Inc.
    Inventors: Timothy Thomas Gray, Dong Zhou, Kenneth Mark Karakotsios, Jennifer Silva
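The abstract describes several sensors (orientation, camera/hover, touch imprint) each independently suggesting a holding hand. One simple way to combine such cues, purely illustrative and not taken from the patent, is a majority vote that stays undecided on ties:

```python
def infer_holding_hand(tilt_cue, finger_cue, imprint_cue):
    """Combine independent sensor cues into a left/right-hand estimate.
    Each cue is 'left', 'right', or None when that sensor is inconclusive."""
    votes = [v for v in (tilt_cue, finger_cue, imprint_cue) if v is not None]
    if not votes:
        return None
    left, right = votes.count("left"), votes.count("right")
    if left == right:
        return None  # still ambiguous; keep the default UI layout
    return "left" if left > right else "right"
```

A device might then, for example, mirror on-screen controls toward the inferred thumb side.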
  • Patent number: 9462230
    Abstract: A system determines if someone watching a live video feed looks or moves away from a display screen, and when their attention is back on the display, provides an accelerated recap of the content that they missed. The video component of the feed may be shown as a series of selected still images or clips from the original feed, while audio and/or text captioning is output at an accelerated rate. The rate may be adaptively adjusted to maintain a consistent speed, and superfluous content may be omitted. When the recap catches up to the live feed, output returns to regular speed.
    Type: Grant
    Filed: March 31, 2014
    Date of Patent: October 4, 2016
    Assignee: Amazon Technologies, Inc.
    Inventors: Amit Kumar Agrawal, Timothy Thomas Gray, Ambrish Tyagi
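The catch-up arithmetic behind an accelerated recap is worth making explicit: while the recap plays for T seconds at rate r, it consumes r·T of backlog while T seconds of new live content accrue, so catching up requires r = missed/T + 1. A minimal sketch (the cap on playback rate is an assumption, not from the patent):

```python
def recap_rate(missed_seconds, catchup_seconds, max_rate=3.0):
    """Playback rate needed to replay missed content within catchup_seconds
    of wall-clock time while the live feed keeps advancing.
    Solving rate * T = missed + T for rate gives missed/T + 1."""
    rate = missed_seconds / catchup_seconds + 1.0
    return min(rate, max_rate)
```

So a viewer who looked away for a minute and should rejoin the live feed within 30 seconds needs 3x playback; omitting superfluous content, as the abstract notes, reduces the effective backlog and thus the required rate.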
  • Patent number: 9411422
Abstract: Systems and methods are provided for enabling user interaction with content markers of a content item. Content markers can generally correspond to any point of interest in the content item. In one embodiment, a scrub bar is provided enabling user navigation to locations within the content item. As a user utilizes the scrub bar to select a location corresponding to a content marker, a haptic notification is provided to the user indicative of a corresponding point of interest. Thereafter, the user may halt interaction with the scrub bar to begin playback of the content item at the point of interest. In another embodiment, a user is enabled to provide input perpendicular to a displayed scrub bar to alternate between multiple available scrub bars and/or points of interest. For example, multiple scrub bars may be provided, each associated with a given type of point of interest.
    Type: Grant
    Filed: December 13, 2013
    Date of Patent: August 9, 2016
    Assignee: Audible, Inc.
    Inventors: Phillip Scott McClendon, Ajay Arora, Timothy Thomas Gray, Douglas Vincent O'Dell, III
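The haptic-marker interaction reduces to a proximity test between the scrub position and the nearest content marker. A hypothetical sketch, with positions normalized to the 0..1 length of the scrub bar and the threshold chosen for illustration:

```python
def marker_near(position, markers, threshold=0.01):
    """Return the nearest content marker within threshold of the scrub
    position, or None if no point of interest is under the thumb."""
    best = None
    for m in markers:
        d = abs(m - position)
        if d <= threshold and (best is None or d < abs(best - position)):
            best = m
    return best

def on_scrub(position, markers, fire_haptic):
    """Fire a haptic pulse when the scrub position lands on a marker."""
    m = marker_near(position, markers)
    if m is not None:
        fire_haptic()
    return m
```

Releasing the scrub bar while `marker_near` returns a marker would then begin playback at that point of interest.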
  • Patent number: 9146631
    Abstract: A computing device can obtain information about how the device is held, moved, and/or used by a hand of a user holding the device. The information can be obtained utilizing one or more sensors of the device independently or working in conjunction. For example, an orientation sensor can determine whether a left hand or a right hand is likely rotating, tilting, and/or moving, and thus holding, the device. In another example, a camera and/or a hover sensor can obtain information about a finger position of the user's hand to determine whether the hand is likely a left hand or a right hand. In a further example, a touch sensor can determine a shape of an imprint of a portion of the user's hand to determine which hand is likely holding the device. Based on which hand is holding the device, the device can improve one or more computing tasks.
    Type: Grant
    Filed: February 11, 2013
    Date of Patent: September 29, 2015
    Assignee: Amazon Technologies, Inc.
    Inventors: Timothy Thomas Gray, Dong Zhou, Kenneth Mark Karakotsios, Jennifer Silva
  • Publication number: 20150082145
    Abstract: Approaches enable three-dimensional (3D) display and interaction with interfaces (such as a webpage, an application, etc.) when the device is operating in a 3D view mode. For example, interface elements can be highlighted, emphasized, animated, or otherwise altered in appearance, and/or arrangement in the renderings of those interfaces based at least in part on an orientation of the device or a position of a user using the device. Further, the 3D view mode can provide for an animated 3D departure and appearance of elements as the device navigates from a current page to a new page. Further still, approaches provide for the ability to specify 3D attributes (such as the appearance, action, etc.) of the interface elements. In this way, a developer of such interfaces can use information (e.g., tags, CSS, JavaScript, etc.) to specify a 3D appearance change to be applied to at least one interface element when the 3D view mode is activated.
    Type: Application
    Filed: September 17, 2013
    Publication date: March 19, 2015
    Applicant: Amazon Technologies, Inc.
    Inventors: Charley Ames, Dennis Pilarinos, Peter Frank Hill, Sasha Mikhael Perez, Timothy Thomas Gray
  • Publication number: 20150082180
    Abstract: Approaches enable three-dimensional (3D) display and interaction with interfaces (such as a webpage, an application, etc.) when the device is operating in a 3D view mode. For example, interface elements can be highlighted, emphasized, animated, or otherwise altered in appearance, and/or arrangement in the renderings of those interfaces based at least in part on an orientation of the device or a position of a user using the device. Further, the 3D view mode can provide for an animated 3D departure and appearance of elements as the device navigates from a current page to a new page. Further still, approaches provide for the ability to specify 3D attributes (such as the appearance, action, etc.) of the interface elements. In this way, a developer of such interfaces can use information (e.g., tags, CSS, JavaScript, etc.) to specify a 3D appearance change to be applied to at least one interface element when the 3D view mode is activated.
    Type: Application
    Filed: September 17, 2013
    Publication date: March 19, 2015
    Applicant: Amazon Technologies, Inc.
    Inventors: Charley Ames, Dennis Pilarinos, Peter Frank Hill, Sasha Mikhael Perez, Timothy Thomas Gray
  • Publication number: 20150082181
    Abstract: Approaches enable three-dimensional (3D) display and interaction with interfaces (such as a webpage, an application, etc.) when the device is operating in a 3D view mode. For example, interface elements can be highlighted, emphasized, animated, or otherwise altered in appearance, and/or arrangement in the renderings of those interfaces based at least in part on an orientation of the device or a position of a user using the device. Further, the 3D view mode can provide for an animated 3D departure and appearance of elements as the device navigates from a current page to a new page. Further still, approaches provide for the ability to specify 3D attributes (such as the appearance, action, etc.) of the interface elements. In this way, a developer of such interfaces can use information (e.g., tags, CSS, JavaScript, etc.) to specify a 3D appearance change to be applied to at least one interface element when the 3D view mode is activated.
    Type: Application
    Filed: September 17, 2013
    Publication date: March 19, 2015
    Applicant: Amazon Technologies, Inc.
    Inventors: Charley Ames, Dennis Pilarinos, Peter Frank Hill, Sasha Mikhael Perez, Timothy Thomas Gray
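The orientation-driven rendering these three related applications describe can be illustrated with a parallax calculation: elements a developer tags with greater depth shift further as the device tilts, producing the 3D impression. This is a sketch under assumed names and constants, not the applications' actual rendering pipeline:

```python
import math

def tilt_shift_px(element_depth, device_tilt_deg, max_shift_px=20):
    """Screen-space parallax shift for an element at a developer-specified
    depth (0 = flat, 1 = deepest) given the device's tilt angle."""
    shift = max_shift_px * element_depth * math.sin(math.radians(device_tilt_deg))
    return round(shift, 2)
```

A developer-facing tag or CSS-like attribute would supply `element_depth` per interface element; the renderer recomputes the shift as orientation or head-position updates arrive.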
  • Publication number: 20140337800
Abstract: A computing device can utilize a recognition mode wherein an interface utilizes graphical elements, such as virtual fireflies or other such elements, to indicate objects that are recognized or identified. As objects are recognized, fireflies perform one or more specified actions to indicate recognition. A ribbon or other user-selectable icon is displayed that indicates a specific action the device can perform with respect to the respective object. As additional objects are recognized, additional ribbons are created and older ribbons can be moved off screen and stored for subsequent retrieval or search. The fireflies disperse when the objects are no longer represented in captured sensor data, and can be animated to move towards representations of new objects as features of those objects are identified as potential object features, in order to communicate a level of recognition for a current scene or environment.
    Type: Application
    Filed: December 20, 2013
    Publication date: November 13, 2014
    Applicant: Amazon Technologies, Inc.
    Inventors: Timothy Thomas Gray, Marc Anthony Salazar, Steven Michael Sommer, Charles Eugene Cummins, Sean Anthony Rooney, Bryan Todd Agnetta, Jae Pum Park, Richard Leigh Mains, Suzan Marashi
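The ribbon behavior in this application, newest ribbons visible on screen, older ones pushed off but retrievable by search, is naturally a bounded queue backed by an archive. A minimal sketch with assumed names and limits:

```python
from collections import deque

class RibbonHistory:
    """Keep the most recent action ribbons on screen; ribbons pushed off
    screen are archived and remain retrievable by search."""

    def __init__(self, visible_limit=3):
        self.visible = deque()
        self.archived = []
        self.visible_limit = visible_limit

    def add(self, ribbon):
        self.visible.append(ribbon)
        while len(self.visible) > self.visible_limit:
            # oldest ribbon moves off screen into the archive
            self.archived.append(self.visible.popleft())

    def search(self, term):
        return [r for r in self.archived if term in r]
```

Each ribbon here is just a label; in practice it would carry the recognized object and its associated action.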