Patents by Inventor Joseph Wheeler

Joseph Wheeler has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20150301693
    Abstract: Methods, systems, and media for presenting related content are provided. In some embodiments, the method comprises: causing a first media interface to be presented, wherein the first media interface represents a first plurality of media content items and wherein first metadata is associated with the first media interface; determining, using a hardware processor, that at least one media interface representing related content should be presented; in response to determining that at least one media interface representing related content should be presented, generating a plurality of media interfaces, wherein each of the plurality of media interfaces is associated with metadata related to the first metadata; and causing at least one of the plurality of media interfaces to be presented concurrently with the first media interface.
    Type: Application
    Filed: April 17, 2014
    Publication date: October 22, 2015
    Inventors: Aaron Joseph Wheeler, Sarah Hatem Ali
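    Illustrative sketch (not part of the patent record): a minimal Python rendering of the flow the abstract above describes, where related media interfaces are generated from metadata that overlaps the first interface's metadata. The function names, data shapes, and tag values are assumptions for illustration only.
      # Hypothetical sketch: generate media interfaces whose metadata is related to the
      # metadata of an already-presented interface, then present them alongside it.
      def related_interfaces(first_interface, catalog, min_shared_tags=1):
          """Return catalog items whose metadata tags overlap the first interface's tags."""
          first_tags = set(first_interface["metadata"]["tags"])
          related = []
          for item in catalog:
              shared = first_tags & set(item["metadata"]["tags"])
              if item is not first_interface and len(shared) >= min_shared_tags:
                  related.append(item)
          return related

      def present_concurrently(first_interface, catalog):
          """Present the first interface together with at least one related interface."""
          related = related_interfaces(first_interface, catalog)
          return [first_interface] + related[:3]   # cap how many appear at once

      # Example usage with toy data.
      catalog = [
          {"title": "Trail running basics", "metadata": {"tags": ["running", "outdoors"]}},
          {"title": "City cycling",         "metadata": {"tags": ["cycling", "outdoors"]}},
          {"title": "Sourdough starter",    "metadata": {"tags": ["baking"]}},
      ]
      print([i["title"] for i in present_concurrently(catalog[0], catalog)])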
  • Publication number: 20150301699
    Abstract: Methods, systems, and media for media guidance are provided. In some embodiments, the method comprises: receiving a request to browse through a plurality of media content items; causing a plurality of media interfaces to be presented in response to receiving the request, wherein each of the plurality of media interfaces is a selectable object and includes information associated with a media content item placed within the interface; determining that a media interface from the plurality of media interfaces has been selected; causing a media content item corresponding to the selected media interface to be played back in a media player window in response to determining that the media interface has been selected; and concurrently with causing the media content item to be played back in the media player window, causing the selected media interface to be presented for a predetermined period of time, wherein the selected media interface identifies the media content item.
    Type: Application
    Filed: February 2, 2015
    Publication date: October 22, 2015
    Inventors: Aaron Joseph Wheeler, Jennifer Arden
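    Illustrative sketch (not part of the patent record): a minimal Python example of the selection and playback flow in the abstract above, where selecting a media interface starts playback and keeps the selected interface visible for a fixed period. Names and the overlay duration are assumptions for illustration only.
      # Hypothetical sketch: play the selected item while keeping its interface
      # on screen for a predetermined period of time.
      import time

      OVERLAY_SECONDS = 5  # "predetermined period of time" (value is illustrative)

      def on_interface_selected(selected, player):
          player.play(selected["content_id"])                        # play in the media player window
          selected["visible_until"] = time.time() + OVERLAY_SECONDS  # keep the interface shown

      def still_visible(interface):
          return time.time() < interface.get("visible_until", 0)

      class FakePlayer:
          def play(self, content_id):
              print(f"playing {content_id}")

      tile = {"content_id": "video-42", "title": "Sample video"}
      on_interface_selected(tile, FakePlayer())
      print("overlay visible:", still_visible(tile))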
  • Publication number: 20150293681
    Abstract: Methods, systems, and media for providing a media interface with multiple control interfaces are provided.
    Type: Application
    Filed: February 2, 2015
    Publication date: October 15, 2015
    Inventors: Aaron Joseph Wheeler, Sarah Hatem Ali
  • Patent number: 9153043
    Abstract: A non-transitory computer-readable medium includes instructions stored thereon for causing a display device to display a field of view of a media item and a user interface over the media item. The field of view defines a reference point and the user interface defines a perimeter. The medium also includes instructions for processing input data for controlling relative movement of one or more of the field of view, the media item, and the user interface. In addition, the medium includes instructions for causing the display device, responsive to the input data, to move the user interface relative to the field of view and the media item, provided that the reference point is within the perimeter. The medium also includes instructions for causing the display device, responsive to the input data, to move the field of view relative to the media item, provided that the reference point is outside the perimeter.
    Type: Grant
    Filed: February 16, 2012
    Date of Patent: October 6, 2015
    Assignee: Google Inc.
    Inventors: Luis Ricardo Prada Gomez, Aaron Joseph Wheeler
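    Illustrative sketch (not part of the patent record): a minimal Python version of the movement rule in the abstract above, routing input to the user interface while the field of view's reference point is inside the perimeter and to the field of view otherwise. Names and coordinates are assumptions for illustration only.
      # Hypothetical sketch: move the UI while the reference point is inside the
      # perimeter; otherwise pan the field of view over the media item.
      from dataclasses import dataclass

      @dataclass
      class Rect:
          x: float; y: float; w: float; h: float
          def contains(self, px, py):
              return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

      def apply_input(dx, dy, fov_center, ui_perimeter, fov_offset, ui_offset):
          """Route a movement input (dx, dy) to either the UI or the field of view."""
          if ui_perimeter.contains(*fov_center):
              ui_offset = (ui_offset[0] + dx, ui_offset[1] + dy)     # move the UI
          else:
              fov_offset = (fov_offset[0] + dx, fov_offset[1] + dy)  # pan the field of view
          return fov_offset, ui_offset

      fov_offset, ui_offset = apply_input(5, 0, fov_center=(50, 50),
                                          ui_perimeter=Rect(0, 0, 100, 100),
                                          fov_offset=(0, 0), ui_offset=(0, 0))
      print(fov_offset, ui_offset)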
  • Patent number: 9134881
    Abstract: Systems and methods for facilitating character input using a graphical input display having a carousel of characters are provided. In an aspect, a system includes an interface component configured to generate a carousel graphical input display, the carousel graphical input display comprising a plurality of characters arranged in a fixed line, wherein a cursor is configured to move over the characters along the line and wraps from one end of the line to the other upon reaching either end. The system further includes an input component configured to receive a command to move the cursor over the characters to focus on respective ones of the characters.
    Type: Grant
    Filed: March 4, 2013
    Date of Patent: September 15, 2015
    Assignee: Google Inc.
    Inventors: Aaron Joseph Wheeler, Luke Bayes, Marc Layne Hemeon, Matias Cudich, Allan Stephan Mills, Tyler Wesley Breisch
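    Illustrative sketch (not part of the patent record): a minimal Python example of the carousel cursor behavior described above, where the cursor wraps from one end of the fixed line of characters to the other. The character set and step values are assumptions for illustration only.
      # Hypothetical sketch: move a cursor along a fixed line of characters,
      # wrapping around when it runs past either end.
      CHARACTERS = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ")

      def move_cursor(index, step):
          """Move the cursor by `step` positions, wrapping at both ends of the line."""
          return (index + step) % len(CHARACTERS)

      cursor = 0
      for step in (1, 1, -3):          # right, right, then three to the left
          cursor = move_cursor(cursor, step)
          print("focused character:", CHARACTERS[cursor])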
  • Publication number: 20150234545
    Abstract: Multitasking and full screen menu contexts are described. In one or more implementations, an input is received to cause output of a menu in a user interface of a computing device. Responsive to this receipt, a determination is made as to which of a plurality of portions displayed simultaneously in the user interface in a multitasking mode has focus, each of the plurality of portions corresponding to an output of a respective one of a plurality of applications. Responsive to the determination, the menu is output as associated with the focused portion of the user interface and having a representation of at least one function based on the focused portion, the representation selectable to cause performance of the function.
    Type: Application
    Filed: February 17, 2014
    Publication date: August 20, 2015
    Applicant: Microsoft Corporation
    Inventors: John E. Churchill, Joseph Wheeler, Jerome Jean-Louis Vasseur, Thomas Fuller, Jason Dean Giles
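    Illustrative sketch (not part of the patent record): a minimal Python example of the menu routing described in the abstract above, where a requested menu is built from the functions of whichever simultaneously displayed portion currently has focus. The application names and function lists are assumptions for illustration only.
      # Hypothetical sketch: when a menu is requested in multitasking mode,
      # populate it from the focused portion's functions.
      portions = [
          {"app": "video player", "has_focus": False, "functions": ["pause", "captions"]},
          {"app": "browser",      "has_focus": True,  "functions": ["new tab", "bookmark"]},
      ]

      def open_menu(portions):
          focused = next(p for p in portions if p["has_focus"])
          # The menu entries represent functions based on the focused portion;
          # selecting one would invoke that function on the focused application.
          return {"for": focused["app"], "entries": focused["functions"]}

      print(open_menu(portions))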
  • Publication number: 20150199085
    Abstract: Functionality is described herein for presenting representations of the z most recently presented items. The functionality also presents indicators which convey the presentation modes that were last used to present the z items. When the user selects one of the z items, the functionality presents it, as a default, using the last-used presentation mode, as conveyed by the indicator associated with this item. In one particular case, the last-used presentation mode corresponds to a full mode or a snap mode.
    Type: Application
    Filed: January 13, 2014
    Publication date: July 16, 2015
    Applicant: Microsoft Corporation
    Inventors: John E. Churchill, Joseph Wheeler, Jérôme Jean-Louis Vasseur, Thomas R. Fuller, Jason D. Giles
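    Illustrative sketch (not part of the patent record): a minimal Python example of the recently-used list described above, where the z most recent items are tracked with the mode last used for each, and reselecting an item defaults to that mode. The value of z, the item names, and the mode labels are assumptions for illustration only.
      # Hypothetical sketch: track the z most recently presented items and the
      # presentation mode ("full" or "snap") last used for each.
      from collections import OrderedDict

      Z = 3  # how many recent items to surface (value is illustrative)
      recent = OrderedDict()  # item -> last-used presentation mode

      def record_presentation(item, mode):
          recent[item] = mode
          recent.move_to_end(item)
          while len(recent) > Z:
              recent.popitem(last=False)   # drop the oldest entry

      def reopen(item):
          """Reopen an item using its last-used presentation mode as the default."""
          return f"presenting {item} in {recent[item]} mode"

      record_presentation("news app", "full")
      record_presentation("music app", "snap")
      print(list(recent.items()))
      print(reopen("music app"))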
  • Publication number: 20150199081
    Abstract: Methods and devices for providing a user-interface are disclosed. In one embodiment, the method comprises receiving data corresponding to a first position of a wearable computing device and responsively causing the wearable computing device to provide a user-interface. The user-interface comprises a view region and a menu, where the view region substantially fills a field of view of the wearable computing device and the menu is not fully visible in the view region. The method further comprises receiving movement data corresponding to a triggerable movement of the wearable computing device and responsively causing the wearable computing device to move the menu such that the menu becomes more visible in the view region. When the wearable computing device moves to a second position, the wearable computing device moves the menu and the view region substantially together to follow the movement of the wearable computing device.
    Type: Application
    Filed: November 8, 2011
    Publication date: July 16, 2015
    Applicant: Google Inc.
    Inventor: Aaron Joseph Wheeler
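    Illustrative sketch (not part of the patent record): a minimal Python example of the head-movement behavior in the abstract above, where a triggering movement scrolls the menu into the view region and ordinary movements move the menu and view region together. The trigger threshold and all names are assumptions for illustration only.
      # Hypothetical sketch: a look-up gesture reveals the menu; other movements
      # move the view region (and menu) together with the head.
      LOOK_UP_TRIGGER_DEG = 20.0

      def handle_movement(pitch_deg, yaw_deg, menu_offset, view_heading):
          """Update menu visibility and view heading from head-movement data."""
          if pitch_deg > LOOK_UP_TRIGGER_DEG:
              menu_offset = max(0.0, menu_offset - 10.0)   # slide the menu into the view region
          else:
              view_heading += yaw_deg                      # menu and view region follow the head together
          return menu_offset, view_heading

      menu_offset, view_heading = 100.0, 0.0
      for pitch, yaw in [(5, 15), (25, 0), (25, 0), (0, -10)]:
          menu_offset, view_heading = handle_movement(pitch, yaw, menu_offset, view_heading)
      print(menu_offset, view_heading)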
  • Publication number: 20150199086
    Abstract: Functionality is described for activating a service which presents a collection of items that are capable of being presented in a particular presentation mode. Upon a user's selection of one of the items, the functionality presents it in the particular mode. In one implementation, the particular presentation mode corresponds to a snap mode, in which the selected item is presented in a side display portion of a split-screen output presentation. The service itself constitutes an application which provides an output in the snap mode.
    Type: Application
    Filed: January 13, 2014
    Publication date: July 16, 2015
    Applicant: Microsoft Corporation
    Inventors: John E. Churchill, Joseph Wheeler, Jérôme Jean-Louis Vasseur, Thomas R. Fuller, Jason D. Giles
  • Publication number: 20150193098
    Abstract: Methods and systems disclosed herein relate to an action that could proceed or be dismissed in response to an affirmative or negative input, respectively. An example method could include displaying, using a head-mountable device, a graphical interface that presents a graphical representation of an action. The action could relate to at least one of a contact, a contact's avatar, a media file, a digital file, a notification, and an incoming communication. The example method could further include receiving a binary selection from among an affirmative input and a negative input. The example method may additionally include proceeding with the action in response to the binary selection being the affirmative input and dismissing the action in response to the binary selection being the negative input.
    Type: Application
    Filed: March 23, 2012
    Publication date: July 9, 2015
    Applicant: Google Inc.
    Inventors: Alejandro Kauffmann, Hayes Solos Raffle, Aaron Joseph Wheeler, Luis Ricardo Prada Gomez, Steven John Lee
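    Illustrative sketch (not part of the patent record): a minimal Python example of the binary selection described above, where a single affirmative or negative input either proceeds with the presented action or dismisses it. The action strings and input labels are assumptions for illustration only.
      # Hypothetical sketch: proceed with or dismiss an action based on a
      # binary affirmative/negative selection.
      def handle_binary_selection(action, selection):
          """Proceed with `action` on an affirmative input; dismiss it on a negative input."""
          if selection == "affirmative":
              return f"proceeding with: {action}"
          elif selection == "negative":
              return f"dismissed: {action}"
          raise ValueError("selection must be 'affirmative' or 'negative'")

      print(handle_binary_selection("accept incoming call", "affirmative"))
      print(handle_binary_selection("share photo with contact", "negative"))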
  • Publication number: 20150169054
    Abstract: A wearable computing device or a head-mounted display (HMD) may be configured to track the gaze axis of an eye of the wearer. In particular, the device may be configured to observe movement of a wearer's pupil and, based on the movement, determine inputs to a user interface. For example, using eye gaze detection, the HMD may change a tracking rate of a displayed virtual image based on where the user is looking. Gazing at the center of the HMD field of view may, for instance, allow for fine movements of the virtual display. Gazing near an edge of the HMD field of view may provide coarser movements.
    Type: Application
    Filed: January 26, 2015
    Publication date: June 18, 2015
    Inventors: Aaron Joseph Wheeler, Hayes Solos Raffle
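    Illustrative sketch (not part of the patent record): a minimal Python example of the gaze-dependent tracking rate in the abstract above, where gazing near the center of the field of view yields fine movement and gazing near the edge yields coarser movement. The rate constants and coordinate convention are assumptions for illustration only.
      # Hypothetical sketch: scale the display tracking rate by how far the
      # gaze point sits from the center of the field of view.
      import math

      def tracking_rate(gaze_x, gaze_y, fov_radius=1.0, fine=0.2, coarse=2.0):
          """Interpolate between a fine and a coarse rate based on gaze eccentricity."""
          distance = math.hypot(gaze_x, gaze_y) / fov_radius   # 0 at the center, ~1 at the edge
          distance = min(distance, 1.0)
          return fine + (coarse - fine) * distance

      print(tracking_rate(0.05, 0.0))   # near the center -> fine movement
      print(tracking_rate(0.95, 0.0))   # near the edge   -> coarser movement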
  • Publication number: 20150172634
    Abstract: Systems and techniques are disclosed for visually rendering a requested scene based on a virtual camera perspective request as well as a projection of two or more video streams. The video streams can be captured using two-dimensional cameras or three-dimensional depth cameras and may capture different perspectives. The projection may be an internal projection that maps out the scene in three dimensions based on the two or more video streams. An object internal or external to the scene may be identified and the scene may be visually rendered based on a property of the object. For example, a scene may be visually rendered based on where a mobile object is located within the scene.
    Type: Application
    Filed: June 11, 2013
    Publication date: June 18, 2015
    Inventors: Aaron Joseph Wheeler, Christian Plagemann, Hendrik Dahlkamp, Liang-Yu Chi, Yong Zhao, Varun Ganapathi, Alejandro Jose Kauffmann
  • Publication number: 20150153822
    Abstract: Aspects of the present disclosure relate to controlling the functions of various devices based on spatial relationships. In one example, a system may include a depth and visual camera and a computer (networked or local) for processing data from the camera. The computer may be connected (wired or wirelessly) to any number of devices that can be controlled by the system. A user may use a mobile device to define a volume of space relative to the camera. The volume of space may then be associated with a controlled device as well as one or more control commands. When the volume of space is subsequently occupied, the one or more control commands may be used to control the controlled device. In this regard, a user may switch a device on or off, increase volume or speed, etc. simply by occupying the volume of space.
    Type: Application
    Filed: August 10, 2012
    Publication date: June 4, 2015
    Applicant: Google Inc.
    Inventors: Alejandro Kauffmann, Aaron Joseph Wheeler, Liang-Yu Chi, Hendrik Dahlkamp, Varun Ganapathi, Yong Zhao, Christian Plagemann
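    Illustrative sketch (not part of the patent record): a minimal Python example of the spatial-control idea in the abstract above, where a user-defined volume of space is bound to a device and a command, and the command fires when the depth camera reports the volume as occupied. The volumes, devices, and commands are assumptions for illustration only.
      # Hypothetical sketch: trigger a bound control command when a tracked
      # 3-D point enters a user-defined volume of space.
      from dataclasses import dataclass

      @dataclass
      class Volume:
          x0: float; y0: float; z0: float; x1: float; y1: float; z1: float
          def contains(self, p):
              return (self.x0 <= p[0] <= self.x1 and
                      self.y0 <= p[1] <= self.y1 and
                      self.z0 <= p[2] <= self.z1)

      bindings = [  # (volume, device, command) triples defined by the user
          (Volume(0, 0, 1.0, 0.5, 0.5, 1.5), "lamp", "toggle_power"),
          (Volume(1, 0, 1.0, 1.5, 0.5, 1.5), "speaker", "volume_up"),
      ]

      def on_depth_frame(points):
          """Check each tracked 3-D point against the bound volumes and emit commands."""
          for volume, device, command in bindings:
              if any(volume.contains(p) for p in points):
                  print(f"{device}: {command}")

      on_depth_frame([(0.2, 0.1, 1.2)])   # a hand inside the first volume toggles the lamp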
  • Publication number: 20150153910
    Abstract: A video playlist associated with a set of videos is distinguished in a graphical user interface using a dynamic thumbnail to represent the playlist. The dynamic thumbnail comprises a static portion comprising a first image associated with the set of videos and a dynamic portion comprising one or more second images associated with the set of videos. An image provided in the dynamic portion is configured to change, while the first image remains the same, in response to a shift in the graphical user interface that results in a change in position of the thumbnail within the graphical user interface.
    Type: Application
    Filed: December 3, 2013
    Publication date: June 4, 2015
    Applicant: Google Inc.
    Inventors: Aaron Joseph Wheeler, Chris Lauritzen
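    Illustrative sketch (not part of the patent record): a minimal Python example of the dynamic playlist thumbnail described above, where the static portion always shows the same image while the dynamic portion cycles through other images as the thumbnail's on-screen position changes. The image names are assumptions for illustration only.
      # Hypothetical sketch: pair a fixed "static" image with a "dynamic" image
      # that changes as the thumbnail shifts position in the interface.
      playlist_images = ["cover.jpg", "clip1.jpg", "clip2.jpg", "clip3.jpg"]

      def thumbnail_for_position(position_index):
          """Return (static_image, dynamic_image) for the thumbnail at a given scroll position."""
          static_image = playlist_images[0]   # stays the same
          dynamic_image = playlist_images[1 + position_index % (len(playlist_images) - 1)]
          return static_image, dynamic_image

      for pos in range(4):   # the UI shifts and the thumbnail changes position
          print(thumbnail_for_position(pos))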
  • Publication number: 20150137937
    Abstract: Embodiments are disclosed that relate to persistently identifying a user interacting with a computing device. For example, one disclosed embodiment provides a method comprising receiving biometric data regarding the user, determining a determined identity of the user based on the biometric data, outputting a notification of the determined identity of the user, and providing a mechanism to receive feedback from the user regarding the correctness of the determined identity.
    Type: Application
    Filed: November 18, 2013
    Publication date: May 21, 2015
    Applicant: Microsoft Corporation
    Inventors: Robert M. Smith, Joseph Wheeler, Victoria N. Podmajersky
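    Illustrative sketch (not part of the patent record): a minimal Python example of the identification loop in the abstract above, where an identity is determined from biometric data, the user is notified, and feedback about correctness is accepted. The matching rule, profile data, and feedback values are assumptions for illustration only.
      # Hypothetical sketch: determine an identity, notify the user, and accept
      # feedback on whether the determination was correct.
      def determine_identity(biometric_sample, profiles):
          """Pick the enrolled profile whose stored signature is closest to the sample."""
          return min(profiles, key=lambda p: abs(p["signature"] - biometric_sample))

      profiles = [{"name": "Alice", "signature": 0.30}, {"name": "Bob", "signature": 0.80}]
      identity = determine_identity(0.33, profiles)
      print(f"Recognized as {identity['name']}. Is this you?")   # notification of the determined identity

      feedback = "no"   # feedback mechanism: the user can reject an incorrect match
      if feedback == "no":
          print("Identity rejected; prompting the user to select or enroll the correct profile.")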
  • Patent number: 9035878
    Abstract: Methods and systems involving a graphic display in a head mounted display (HMD) are disclosed herein. An exemplary system may be configured to: (1) display a pointer and a graphic object in a graphic display; (2) receive body movement data; (3) use the body movement data as a basis to move the pointer in the graphic display; (4) define an active region in an area of the graphic display, where the graphic object is activated when the pointer is located within the active region; (5) define an expanded active region that encompasses and is larger than the active region; and (6) make the graphic object active in response to the pointer being moved into the active region and keep the graphic object active until the pointer is moved outside of the expanded active region.
    Type: Grant
    Filed: February 29, 2012
    Date of Patent: May 19, 2015
    Assignee: Google Inc.
    Inventor: Aaron Joseph Wheeler
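    Illustrative sketch (not part of the patent record): a minimal Python example of the hysteresis described above, where the graphic object activates when the pointer enters the active region and stays active until the pointer leaves the larger expanded active region. The region coordinates are assumptions for illustration only.
      # Hypothetical sketch: activate on entering the active region, deactivate
      # only after leaving the larger expanded active region.
      def inside(region, x, y):
          x0, y0, x1, y1 = region
          return x0 <= x <= x1 and y0 <= y <= y1

      ACTIVE_REGION   = (40, 40, 60, 60)   # area that activates the object
      EXPANDED_REGION = (30, 30, 70, 70)   # larger area that keeps it active

      def update(active, pointer):
          if not active and inside(ACTIVE_REGION, *pointer):
              return True                      # pointer entered the active region
          if active and not inside(EXPANDED_REGION, *pointer):
              return False                     # pointer left the expanded region
          return active                        # otherwise keep the current state

      state = False
      for pointer in [(50, 50), (65, 65), (80, 80)]:
          state = update(state, pointer)
          print(pointer, "active" if state else "inactive")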
  • Publication number: 20150135308
    Abstract: Aspects of the subject disclosure are directed towards providing feedback to users of a multi-user system that has biometric recognition capabilities, so that a user knows whether the system has correctly associated the user with his or her identity. The feedback may include a display of a current camera view, along with visible identity information that is associated with each user in the view. The feedback may include per-user icons (e.g., tiles, thumbnail images and so on) by which a user visually confirms that he or she is correctly recognized. Any misrecognition may be detected via the feedback and corrected. Feedback may convey other information, such as the current interaction state/capabilities of a user.
    Type: Application
    Filed: May 16, 2014
    Publication date: May 14, 2015
    Applicant: Microsoft Corporation
    Inventors: Robert Mitchell Smith, Emily M. Yang, Joseph Wheeler, Sergio Paolantonio, Xiaoji Chen, Eric C. Sanderson, Calvin Kent Carter, Christian Klein, Mark D. Schwesinger, Rita A. Yu
  • Publication number: 20150128042
    Abstract: A user interface (“UI”) includes a personalized home screen that can be brought up at any time from any experience provided by applications, games, movies, television, and other content that is available on a computing platform such as a multimedia console using a single button press on a controller, using a “home” gesture, or using a “home” voice command. The home screen features a number of dynamically maintained visual objects called tiles that represent the experiences available on the console. An application can be “snapped” to the application that fills the PIP so that the snapped application renders into a separate window that is placed next to the UI for the filled application. The user interface is further adapted so that the user can quickly and easily switch focus between the tiles in the home screen and resume an experience in full screen.
    Type: Application
    Filed: November 4, 2013
    Publication date: May 7, 2015
    Applicant: Microsoft Corporation
    Inventors: John E. Churchill, Joseph Wheeler, Jérôme Vasseur, Thomas Fuller
  • Patent number: 8971570
    Abstract: A wearable computing system may include an eye-tracking system configured to track the position of an eye of a wearer of the wearable computing system. In particular, an infrared light source illuminating the eye of a wearer at a relatively high intensity may generate specular reflections off the wearer's cornea, also called ‘glints’. The glints can be imaged with an infrared camera. When the infrared light sources are illuminated at a relatively lower intensity, determination of the pupil location is possible. Glints, in combination with the pupil location, may be used to accurately determine the gaze direction and eye rotation. The determined gaze direction could be used in various eye-tracking applications. By controlling the light sources to change intensity levels and by combining multiple images of the eye to incorporate multiple glint locations with the pupil location, eye tracking can be performed with better accuracy and with fewer light sources.
    Type: Grant
    Filed: November 4, 2011
    Date of Patent: March 3, 2015
    Assignee: Google Inc.
    Inventors: Hayes Solos Raffle, Aaron Joseph Wheeler
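    Illustrative sketch (not part of the patent record): a minimal Python example of the two-intensity capture loop in the abstract above, where bright infrared frames reveal corneal glints, a dimmer frame reveals the pupil, and the two are combined into a gaze estimate. The coordinate math is a toy stand-in, not a real eye model; all names and values are assumptions for illustration only.
      # Hypothetical sketch: combine glint positions (high-intensity frame) with
      # the pupil position (low-intensity frame) into a simple gaze estimate.
      def gaze_direction(glints, pupil):
          """Estimate gaze as the offset of the pupil center from the glint centroid."""
          cx = sum(g[0] for g in glints) / len(glints)
          cy = sum(g[1] for g in glints) / len(glints)
          return (pupil[0] - cx, pupil[1] - cy)

      frames = [
          {"intensity": "high", "glints": [(10, 10), (20, 10)]},  # glints imaged at high intensity
          {"intensity": "low",  "pupil": (17, 14)},               # pupil located at lower intensity
      ]
      glints = next(f["glints"] for f in frames if f["intensity"] == "high")
      pupil  = next(f["pupil"]  for f in frames if f["intensity"] == "low")
      print("gaze offset:", gaze_direction(glints, pupil))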
  • Patent number: 8970452
    Abstract: A wearable computing device or a head-mounted display (HMD) may be configured to track the gaze axis of an eye of the wearer. In particular, the device may be configured to observe movement of a wearer's pupil and, based on the movement, determine inputs to a user interface. For example, using eye gaze detection, the HMD may change a tracking rate of a displayed virtual image based on where the user is looking. Gazing at the center of the HMD field of view may, for instance, allow for fine movements of the virtual display. Gazing near an edge of the HMD field of view may provide coarser movements.
    Type: Grant
    Filed: November 2, 2011
    Date of Patent: March 3, 2015
    Assignee: Google Inc.
    Inventors: Aaron Joseph Wheeler, Hayes Solos Raffle