Patents by Inventor Ralph Brunner

Ralph Brunner has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11816269
    Abstract: Systems, methods, devices and non-transitory, computer-readable storage mediums are disclosed for gesture recognition for a wearable multimedia device using real-time data streams. In an embodiment, a method comprises: detecting a trigger event from one or more real-time data streams running on a wearable multimedia device; taking one or more data snapshots of the one or more real-time data streams; inferring user intent from the one or more data snapshots; and selecting a service or preparing content for the user based on the inferred user intent. In an embodiment, a hand and finger pointing direction is determined from a depth image, a 2D bounding box for the hand/finger is projected into a 2D image space and compared to bounding boxes for identified/labeled objects in the 2D image space to identify an object that the hand is holding or the finger is pointing toward.
    Type: Grant
    Filed: May 28, 2021
    Date of Patent: November 14, 2023
    Assignee: Humane, Inc.
    Inventors: Imran A. Chaudhri, Bethany Bongiorno, Patrick Gates, Wangju Tsai, Monique Relova, Nathan Lord, Yanir Nulman, Ralph Brunner, Lilynaz Hashemi, Britt Nelson
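
A minimal, hypothetical sketch of the pointing-target idea described in the entry above: project a hand/finger bounding box from the depth image into 2D image space and pick the labeled object whose box overlaps it most. The pinhole projection, the IoU test, and all names are illustrative assumptions, not the patented method.

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x0, y0, x1, y1)."""
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x1 - x0) * max(0, y1 - y0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def project_box(points_3d, fx, fy, cx, cy):
    """Project 3D hand/finger points (camera frame) to a 2D pixel bounding box."""
    us = [fx * x / z + cx for x, y, z in points_3d]
    vs = [fy * y / z + cy for x, y, z in points_3d]
    return (min(us), min(vs), max(us), max(vs))

def pointed_object(hand_points_3d, labeled_boxes, intrinsics):
    """Return the label of the object whose 2D box best overlaps the hand box."""
    if not labeled_boxes:
        return None
    hand_box = project_box(hand_points_3d, *intrinsics)
    label, box = max(labeled_boxes.items(), key=lambda kv: iou(hand_box, kv[1]))
    return label if iou(hand_box, box) > 0.1 else None
```
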
  • Patent number: 11523055
    Abstract: Techniques are disclosed for creating scaled images with super resolution using neighborhood patches of pixels to provide higher resolution than traditional interpolation techniques. Also disclosed are techniques for creating super field-of-view (FOV) images of a scene created from previously captured and stored images of the scene that are stitched together with a current image of the scene, to generate an image of the scene that extends beyond the fixed FOV of the camera.
    Type: Grant
    Filed: May 28, 2021
    Date of Patent: December 6, 2022
    Assignee: Humane, Inc.
    Inventors: Imran A. Chaudhri, Ralph Brunner, Monique Relova
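
A rough sketch of the super field-of-view idea in the preceding entry, under the assumption that previously captured frames have already been aligned into a wider canvas; the current fixed-FOV frame is then composited on top so the output extends beyond the camera's field of view. This is illustrative only and omits the stitching and super-resolution scaling described in the abstract.

```python
import numpy as np

def super_fov(canvas, current_frame, offset):
    """Paste the live frame into the stored wide-FOV mosaic at (row, col) offset."""
    out = canvas.copy()
    r, c = offset
    h, w = current_frame.shape[:2]
    out[r:r + h, c:c + w] = current_frame  # newest pixels win inside the live FOV
    return out

canvas = np.zeros((1080, 2400, 3), dtype=np.uint8)      # previously stored, aligned scene
frame = np.full((1080, 1920, 3), 128, dtype=np.uint8)   # current capture (fixed FOV)
wide = super_fov(canvas, frame, (0, 240))               # image wider than the sensor's FOV
```
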
  • Patent number: 11222456
    Abstract: A graphics animation and compositing operations framework has a layer tree for interfacing with the application and a render tree for interfacing with a render engine. Layers in the layer tree can be content, windows, views, video, images, etc., for an application's user interface. The application commits state changes of the layers of the layer tree. The application does not need to include explicit code for animating the changes to the layers. Instead, after a synchronization threshold has been met, an animation is determined for animating the change in state by the framework, which can define a set of predetermined animations based on motion, visibility, and transition. The determined animation is explicitly applied to the affected layers in the render tree. A render engine renders from the render tree into a frame buffer, synchronized with the display. Portions of the render tree changing relative to prior versions can be tracked.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: January 11, 2022
    Assignee: Apple Inc.
    Inventors: Ralph Brunner, John Harper, Peter Graffagnino
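
A toy sketch of the implicit-animation commit described in the entry above: the application only mutates layers in a layer tree and commits; the framework diffs the committed state against the render tree and attaches an animation for each changed property. All names are hypothetical and this is not Apple's actual framework API.

```python
class Layer:
    def __init__(self, name, **props):
        self.name = name
        self.props = dict(props)

def commit(layer_tree, render_tree, duration=0.25):
    """Diff the layer tree against the render tree; return implicit animations."""
    animations = []
    for name, layer in layer_tree.items():
        rendered = render_tree.setdefault(name, Layer(name, **layer.props))
        for key, new_value in layer.props.items():
            old_value = rendered.props.get(key)
            if old_value != new_value:
                animations.append((name, key, old_value, new_value, duration))
                rendered.props[key] = new_value   # render tree now holds the target state
    return animations

layer_tree = {"button": Layer("button", opacity=1.0, x=10)}
render_tree = {"button": Layer("button", opacity=1.0, x=10)}
layer_tree["button"].props["opacity"] = 0.0        # app changes state only
print(commit(layer_tree, render_tree))             # framework infers the fade animation
```
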
  • Patent number: 10877353
    Abstract: At least certain embodiments described herein provide a continuous autofocus mechanism for an image capturing device. The continuous autofocus mechanism can perform an autofocus scan for a lens of the image capturing device and obtain focus scores associated with the autofocus scan. The continuous autofocus mechanism can determine an acceptable band of focus scores based on the obtained focus scores. Next, the continuous autofocus mechanism can determine whether a current focus score is within the acceptable band of focus scores. A refocus scan may be performed if the current focus score is outside of the acceptable band of focus scores.
    Type: Grant
    Filed: July 28, 2017
    Date of Patent: December 29, 2020
    Assignee: Apple Inc.
    Inventors: Ralph Brunner, David Hayward
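
A minimal sketch of the acceptable-band check from the autofocus abstract above. Defining the band as a fixed fraction around the peak focus score of the last scan is an illustrative assumption, not the claimed method.

```python
def acceptable_band(scan_scores, tolerance=0.85):
    """Derive a (low, high) band of focus scores from an autofocus scan."""
    peak = max(scan_scores)
    return tolerance * peak, peak / tolerance

def needs_refocus(current_score, band):
    """Trigger a refocus scan when the live focus score leaves the band."""
    low, high = band
    return not (low <= current_score <= high)

band = acceptable_band([12.0, 31.5, 48.2, 44.1, 20.3])
print(needs_refocus(36.0, band))  # True -> schedule a refocus scan
```
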
  • Publication number: 20200126285
    Abstract: A graphics animation and compositing operations framework has a layer tree for interfacing with the application and a render tree for interfacing with a render engine. Layers in the layer tree can be content, windows, views, video, images, etc., for an application's user interface. The application commits state changes of the layers of the layer tree. The application does not need to include explicit code for animating the changes to the layers. Instead, after a synchronization threshold has been met, an animation is determined for animating the change in state by the framework, which can define a set of predetermined animations based on motion, visibility, and transition. The determined animation is explicitly applied to the affected layers in the render tree. A render engine renders from the render tree into a frame buffer, synchronized with the display. Portions of the render tree changing relative to prior versions can be tracked.
    Type: Application
    Filed: December 20, 2019
    Publication date: April 23, 2020
    Inventors: Ralph Brunner, John Harper, Peter Graffagnino
  • Publication number: 20200097135
    Abstract: A user interface can have one or more spaces presented therein. A space is a grouping of one or more program windows in relation to windows of other application programs, such that the programs of only a single space are visible when the space is active. A view can be generated of all spaces and their contents.
    Type: Application
    Filed: November 26, 2019
    Publication date: March 26, 2020
    Inventors: Assana Fard, John O. Louch, Ralph Brunner, Haroon Sheikh, Eric Steven Peyton, Christopher Hynes
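
A toy sketch of the "spaces" grouping described in the entry above, with hypothetical names: each space groups program windows, only the active space's windows are visible, and an overview can show every space with its contents.

```python
spaces = {
    "work":  ["Mail", "Calendar"],
    "dev":   ["Terminal", "Editor"],
    "media": ["Player"],
}
active = "dev"

def visible_windows():
    """Only windows of the single active space are shown."""
    return spaces[active]

def overview():
    """A view of all spaces and their contents."""
    return {name: list(windows) for name, windows in spaces.items()}
```
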
  • Patent number: 10521949
    Abstract: A graphics animation and compositing operations framework has a layer tree for interfacing with the application and a render tree for interfacing with a render engine. Layers in the layer tree can be content, windows, views, video, images, text, media or other type of objects for an application's user interface. The application commits state changes of the layers of the layer tree. The application does not need to include explicit code for animating the changes to the layers. Instead, after a synchronization threshold has been met, an animation is determined for animating the change in state by the framework, which can define a set of predetermined animations based on motion, visibility, and transition. The determined animation is explicitly applied to the affected layers in the render tree. A render engine renders from the render tree into a frame buffer, synchronized with the display. Portions of the render tree changing relative to prior versions can be tracked to improve resource management.
    Type: Grant
    Filed: November 13, 2017
    Date of Patent: December 31, 2019
    Assignee: Apple Inc.
    Inventors: Ralph Brunner, John Harper, Peter Graffagnino
  • Patent number: 10513202
    Abstract: A vehicle seat depth adjuster (1) includes a base plate (2) connectable to a seat structure (17); a detent unit (8); a carrier plate (3) moveably arranged above the base plate; a locking unit (4) detachably engaging the detent unit; and a drive unit (11) moving the carrier plate. The carrier plate is fixed in relation to the base plate in an engaged position of the locking unit and is moveable, between a retracted position and an extended position in a released position of the locking unit by the drive unit. The detent unit is arranged on the upper side (2.5) of the base plate, which faces the carrier plate, and includes receiving elements (8.1) staggered in the direction (L) of movement. The locking unit includes an unlocking lever (5) with at least one locking tooth (5.1) that engages in one of the detent receiving elements in the engaged position.
    Type: Grant
    Filed: November 25, 2016
    Date of Patent: December 24, 2019
    Assignees: Adient Luxembourg Holding S.à r.l., Müller-Technik GmbH
    Inventors: Markus Gumbrich, Christian Klostermann, Joerg Madsen, René Grahl, Ralph Brunner, André Osterhues
  • Patent number: 10511772
    Abstract: Methods, devices, and systems for continuous image capturing are described herein. In one embodiment, a method includes continuously capturing a sequence of images with an image capturing device. The method may further include storing a predetermined number of the sequence of images in a buffer. The method may further include receiving a user request to capture an image. In response to the user request, the method may further include automatically selecting one of the buffered images based on an exposure time of one of the buffered images. The sequence of images is captured prior to or concurrently with receiving the user request.
    Type: Grant
    Filed: August 3, 2018
    Date of Patent: December 17, 2019
    Assignee: Apple Inc.
    Inventors: Ralph Brunner, Nikhil Bhogal, James David Batson
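
A rough sketch of the buffered-capture selection described in the entry above: keep the last N frames in a ring buffer and, when the user presses the shutter, return a buffered frame chosen by exposure time. Selecting the frame with the shortest exposure (least motion blur) is an assumption for illustration; the abstract only states that the choice is based on exposure time.

```python
from collections import deque

BUFFER_SIZE = 8
ring = deque(maxlen=BUFFER_SIZE)   # continuously filled by the capture pipeline

def on_frame(frame, exposure_ms):
    """Called for every continuously captured frame."""
    ring.append((exposure_ms, frame))

def on_shutter_request():
    """Return the buffered frame chosen by exposure time (shortest here)."""
    if not ring:
        return None
    exposure_ms, frame = min(ring, key=lambda item: item[0])
    return frame
```
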
  • Patent number: 10503342
    Abstract: A user interface can have one or more spaces presented therein. A space is a grouping of one or more program windows in relation to windows of other application programs, such that the programs of only a single space are visible when the space is active. A view can be generated of all spaces and their contents.
    Type: Grant
    Filed: August 4, 2006
    Date of Patent: December 10, 2019
    Assignee: Apple Inc.
    Inventors: Assana Fard, John O. Louch, Ralph Brunner, Haroon Sheikh, Eric Peyton, Christopher Hynes
  • Patent number: 10437342
    Abstract: Various of the disclosed embodiments provide Human Computer Interfaces (HCI) that incorporate depth sensors at multiple positions and orientations. The depth sensors may be used in conjunction with a display screen to permit users to interact dynamically with the system, e.g., via gestures. Calibration methods for orienting depth values between sensors are also presented. The calibration methods may generate both rotation and translation transformations that can be used to determine the location of a depth value acquired in one sensor from the perspective of another sensor. The calibration process may itself include visual feedback to direct a user assisting with the calibration. In some embodiments, floor estimation techniques may be used alone or in conjunction with the calibration process to facilitate data processing and gesture identification.
    Type: Grant
    Filed: December 5, 2016
    Date of Patent: October 8, 2019
    Assignee: YouSpace, Inc.
    Inventor: Ralph Brunner
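
A sketch of what the calibration output described above amounts to: a rotation R and translation t that express where a depth point seen by one sensor lies in another sensor's coordinate frame. Estimating R and t with a classic Kabsch/Procrustes fit over corresponding points is an illustrative choice, not necessarily the patented procedure.

```python
import numpy as np

def fit_rigid_transform(points_a, points_b):
    """Least-squares R, t such that R @ a + t ~= b for paired 3D points."""
    a, b = np.asarray(points_a, float), np.asarray(points_b, float)
    ca, cb = a.mean(axis=0), b.mean(axis=0)
    H = (a - ca).T @ (b - cb)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t

def to_sensor_b(point_a, R, t):
    """Express a depth value from sensor A in sensor B's frame."""
    return R @ np.asarray(point_a, float) + t
```
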
  • Patent number: 10402934
    Abstract: Disclosed is a system for producing images including techniques for reducing the memory and processing power required for such operations. The system provides techniques for programmatically representing a graphics problem. The system further provides techniques for reducing and optimizing graphics problems for rendering with consideration of the system resources, such as the availability of a compatible GPU.
    Type: Grant
    Filed: June 21, 2017
    Date of Patent: September 3, 2019
    Assignee: Apple Inc.
    Inventors: John Harper, Ralph Brunner, Peter Graffagnino, Mark Zimmer
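
A hypothetical sketch of the ideas in the entry above: represent the graphics problem programmatically as a list of operations, reduce it before rendering (here, adjacent color-matrix nodes fold into one matrix multiply), and choose a renderer based on whether a compatible GPU is available. The node names and folding rule are assumptions, not the patented system.

```python
import numpy as np

def reduce_pipeline(ops):
    """Fold consecutive ('matrix', M) ops into a single matrix op."""
    reduced = []
    for kind, payload in ops:
        if reduced and kind == "matrix" and reduced[-1][0] == "matrix":
            reduced[-1] = ("matrix", payload @ reduced[-1][1])   # compose the two matrices
        else:
            reduced.append((kind, payload))
    return reduced

def choose_renderer(gpu_available):
    """Pick the execution target with the available resources in mind."""
    return "gpu" if gpu_available else "cpu_fallback"

sepia = np.eye(3) * [1.0, 0.9, 0.7]
darken = np.eye(3) * 0.8
pipeline = [("matrix", sepia), ("matrix", darken), ("blur", 2.0)]
print(len(reduce_pipeline(pipeline)), choose_renderer(gpu_available=False))  # 2 cpu_fallback
```
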
  • Patent number: 10366470
    Abstract: Various of the disclosed embodiments present systems and methods for distinguishing portions of a virtual model associated with a clothing article from portions of the virtual model not associated with the clothing article. Some embodiments facilitate quick and effective separation by employing a feature vector structure conducive to separation by a linear classifier. Such efficient separation may be especially beneficial in applications requiring the rapid scanning of large quantities of clothing while retaining high-fidelity representations of the clothing's geometry. Some embodiments further accommodate artist participation in the filtering process as well as scanning of articles from a variety of orientations and with a variety of supporting structures.
    Type: Grant
    Filed: December 11, 2017
    Date of Patent: July 30, 2019
    Assignee: YouSpace, Inc.
    Inventor: Ralph Brunner
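
A loose sketch of the separation idea described above: build a feature vector per vertex of the virtual model and let a linear classifier decide whether the vertex belongs to the clothing article or not. The feature contents and the perceptron-style training shown here are assumptions for illustration.

```python
import numpy as np

def train_linear(features, labels, epochs=50, lr=0.1):
    """Train w, b so that sign(w @ x + b) predicts clothing (+1) vs not (-1)."""
    X = np.asarray(features, float)
    y = np.asarray(labels, float)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (w @ xi + b) <= 0:       # misclassified -> nudge the hyperplane
                w += lr * yi * xi
                b += lr * yi
    return w, b

def keep_clothing(vertices, features, w, b):
    """Filter the virtual model, keeping vertices classified as clothing."""
    X = np.asarray(features, float)
    mask = X @ w + b > 0
    return [v for v, keep in zip(vertices, mask) if keep]
```
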
  • Patent number: 10331287
    Abstract: A user interface can have one or more spaces presented therein. A space is a grouping of one or more program windows in relation to windows of other application programs, such that the programs of only a single space are visible when the space is active. A view can be generated of all spaces and their contents.
    Type: Grant
    Filed: September 25, 2015
    Date of Patent: June 25, 2019
    Assignee: Apple Inc.
    Inventors: Assana Fard, John O. Louch, Ralph Brunner, Haroon Sheikh, Eric Steven Peyton, Christopher Hynes
  • Patent number: 10325184
    Abstract: Human Computer Interfaces (HCI) may allow a user to interact with a computer via a variety of mechanisms, such as hand, head, and body gestures. Various of the disclosed embodiments allow information captured from a depth camera on an HCI system to be used to recognize such gestures. Particularly, by training a classifier using vectors having both base and extended components, more accurate classification results may be subsequently obtained. The base vector may include a leaf-based assessment of the classification results from a forest for a given depth value candidate pixel. The extended vector may include additional information, such as the leaf-based assessment of the classification results for one or more pixels related to the candidate pixel. Various embodiments employ this improved structure with various optimization methods and structures to provide more efficient in-situ operation.
    Type: Grant
    Filed: April 12, 2017
    Date of Patent: June 18, 2019
    Assignee: YouSpace, Inc.
    Inventors: Ralph Brunner, Yichen Pan
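
A very rough sketch of the base-plus-extended vector idea from the entry above, with a hypothetical structure rather than the patented encoding: the base part holds the forest's leaf-based class histogram for the candidate depth pixel, and the extended part appends the histograms of a few related pixels so a downstream classifier sees more context than the candidate pixel alone.

```python
import numpy as np

def leaf_histogram(forest, pixel_features):
    """Average the per-class distribution of the leaves each tree lands in."""
    hists = [tree(pixel_features) for tree in forest]   # each tree returns a class histogram
    return np.mean(hists, axis=0)

def build_vector(forest, candidate, neighbors):
    """Concatenate the candidate's (base) and related pixels' (extended) histograms."""
    base = leaf_histogram(forest, candidate)
    extended = [leaf_histogram(forest, n) for n in neighbors]
    return np.concatenate([base, *extended])

# Toy "forest": two stumps over a 2-value depth feature, three classes.
forest = [
    lambda f: np.array([0.7, 0.2, 0.1]) if f[0] < 0.5 else np.array([0.1, 0.6, 0.3]),
    lambda f: np.array([0.6, 0.3, 0.1]) if f[1] < 0.5 else np.array([0.2, 0.2, 0.6]),
]
vec = build_vector(forest, [0.2, 0.8], neighbors=[[0.3, 0.7], [0.1, 0.9]])
```
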
  • Publication number: 20190180410
    Abstract: Various of the disclosed embodiments present systems and methods for distinguishing portions of a virtual model associated with a clothing article from portions of the virtual model not associated with the clothing article. Some embodiments facilitate quick and effective separation by employing a feature vector structure conducive to separation by a linear classifier. Such efficient separation may be especially beneficial in applications requiring the rapid scanning of large quantities of clothing while retaining high-fidelity representations of the clothing's geometry. Some embodiments further accommodate artist participation in the filtering process as well as scanning of articles from a variety of orientations and with a variety of supporting structures.
    Type: Application
    Filed: December 11, 2017
    Publication date: June 13, 2019
    Inventor: Ralph Brunner
  • Patent number: 10303417
    Abstract: Various of the disclosed embodiments present depth-based user interaction systems. By anticipating various installation constraints and factors into the interface's design, various of the disclosed embodiments facilitate benefits such as complementary depth fields of view and display orientation flexibility. Some embodiments include a frame housing for the depth sensors. Within the housing, depth sensors may be affixed to a mount, such that each sensor's field of view is at a disparate angle. These disparate angles may facilitate gesture recognitions that might otherwise be difficult or impossible to achieve. When mounted in connection with a modular unit, the housing may provide a versatile means for integrating multiple module units into a composite interface.
    Type: Grant
    Filed: April 3, 2017
    Date of Patent: May 28, 2019
    Assignee: YouSpace, Inc.
    Inventors: Ralph Brunner, Hsin-Yi Chien, John Philip Stoddard, Po-Jui Chen, Vivi Brunner
  • Patent number: 10304002
    Abstract: Human Computer Interfaces (HCI) may allow a user to interact with a computer via a variety of mechanisms, such as hand, head, and body gestures. Various of the disclosed embodiments allow information captured from a depth camera on an HCI system to be used to recognize such gestures. Particularly, the HCI system's depth sensor may capture depth frames of the user's movements over time. To discern gestures from these movements, the system may group portions of the user's anatomy represented by the depth data into classes. “Features” which reflect distinguishing features of the user's anatomy may be used to accomplish this classification. Some embodiments provide improved systems and methods for generating and/or selecting these features. Features prepared by various of the disclosed embodiments may be less susceptible to overfitting training data and may more quickly distinguish portions of the user's anatomy.
    Type: Grant
    Filed: February 8, 2016
    Date of Patent: May 28, 2019
    Assignee: YouSpace, Inc.
    Inventor: Ralph Brunner
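
A loose sketch of a depth-image "feature" in the spirit of the entry above (not the patented construction): compare the depth at a candidate pixel with the depth at an offset scaled by the candidate's own depth, so the response is roughly invariant to how far the user stands from the camera.

```python
def depth_feature(depth, u, v, offset, background=10.0):
    """Depth-normalized offset comparison at pixel (u, v) of a 2D depth map."""
    d = depth[v][u]
    du, dv = int(round(offset[0] / d)), int(round(offset[1] / d))
    nu, nv = u + du, v + dv
    if 0 <= nv < len(depth) and 0 <= nu < len(depth[0]):
        neighbor = depth[nv][nu]
    else:
        neighbor = background      # off-image probes read as far background
    return neighbor - d
```
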
  • Patent number: 10303259
    Abstract: Various of the disclosed embodiments present depth-based user interaction systems facilitating natural and immersive user interactions. Particularly, various embodiments integrate immersive visual presentations with natural and fluid gesture motions. This integration facilitates more rapid user adoption and more precise user interactions. Some embodiments may take advantage of the particular form factors disclosed herein to accommodate user interactions. For example, dual depth sensor arrangements in a housing atop the interface's display may facilitate depth fields of view accommodating more natural gesture recognition than may be otherwise possible. In some embodiments, these gestures may be organized into a framework for universal control of the interface by the user and for application-specific control of the interface by the user.
    Type: Grant
    Filed: April 3, 2017
    Date of Patent: May 28, 2019
    Assignee: YouSpace, Inc.
    Inventors: Ralph Brunner, Andrew Emmett Seligman, Hsin-Yi Chien, John Philip Stoddard, Po-Jui Chen, Elon Sharton-Bierig, Sierra Justine Gaston, Vivi Brunner
  • Publication number: 20190037140
    Abstract: Methods, devices, and systems for continuous image capturing are described herein. In one embodiment, a method includes continuously capturing a sequence of images with an image capturing device. The method may further include storing a predetermined number of the sequence of images in a buffer. The method may further include receiving a user request to capture an image. In response to the user request, the method may further include automatically selecting one of the buffered images based on an exposure time of one of the buffered images. The sequence of images is captured prior to or concurrently with receiving the user request.
    Type: Application
    Filed: August 3, 2018
    Publication date: January 31, 2019
    Inventors: Ralph Brunner, Nikhil Bhogal, James David Batson