Patents by Inventor Ralph Brunner

Ralph Brunner has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10402934
    Abstract: Disclosed is a system for producing images including techniques for reducing the memory and processing power required for such operations. The system provides techniques for programmatically representing a graphics problem. The system further provides techniques for reducing and optimizing graphics problems for rendering with consideration of the system resources, such as the availability of a compatible GPU.
    Type: Grant
    Filed: June 21, 2017
    Date of Patent: September 3, 2019
    Assignee: Apple Inc.
    Inventors: John Harper, Ralph Brunner, Peter Graffagnino, Mark Zimmer
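The abstract above describes representing a graphics problem programmatically and reducing it before rendering. As a loose illustration only (the class and operation names below are invented, not from the patent), a chain of image operations can be held as a small expression tree and identity operations collapsed before choosing a render path:

```python
# Hypothetical sketch (names invented): represent a chain of image
# operations as a small expression tree, then collapse no-op nodes
# before deciding how to render the result.

class Node:
    def __init__(self, op, *children, **params):
        self.op, self.children, self.params = op, list(children), params

def simplify(node):
    """Recursively remove operations that cannot change the output."""
    node.children = [simplify(c) for c in node.children]
    # A blur with radius 0 or a scale by 1.0 is an identity operation.
    if node.op == "blur" and node.params.get("radius") == 0:
        return node.children[0]
    if node.op == "scale" and node.params.get("factor") == 1.0:
        return node.children[0]
    return node

tree = Node("blur", Node("scale", Node("source"), factor=1.0), radius=0)
print(simplify(tree).op)  # the identity ops collapse to the source node
```

Reductions like this shrink the work handed to the GPU, which matters most when a compatible GPU is scarce or absent, as the abstract notes.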
  • Patent number: 10366470
    Abstract: Various of the disclosed embodiments present systems and methods for distinguishing portions of a virtual model associated with a clothing article from portions of the virtual model not associated with the clothing article. Some embodiments facilitate quick and effective separation by employing a feature vector structure conducive to separation by a linear classifier. Such efficient separation may be especially beneficial in applications requiring the rapid scanning of large quantities of clothing while retaining high-fidelity representations of the clothing's geometry. Some embodiments further accommodate artist participation in the filtering process as well as scanning of articles from a variety of orientations and with a variety of supporting structures.
    Type: Grant
    Filed: December 11, 2017
    Date of Patent: July 30, 2019
    Assignee: YouSpace, Inc.
    Inventor: Ralph Brunner
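The key idea in this abstract is a feature representation that a linear classifier can separate cheaply. As a minimal sketch (the weights and the three example features are invented for illustration; the patent does not disclose this code), each point of the scanned model gets a feature vector and a single linear decision rule labels it:

```python
# Illustrative sketch only: label each point of a scanned model as
# "clothing" or "support" with a linear classifier over per-point features.

def classify(point_features, weights, bias):
    """Linear decision rule: clothing if w . x + b > 0."""
    score = sum(w * x for w, x in zip(weights, point_features)) + bias
    return "clothing" if score > 0 else "support"

# Hypothetical 3-feature vectors, e.g. (height, curvature, color distance).
weights, bias = [0.5, 2.0, -1.0], -0.25
print(classify([0.8, 0.6, 0.1], weights, bias))   # clothing
print(classify([0.1, 0.0, 1.0], weights, bias))   # support
```

A rule this cheap is what makes rapid scanning of large quantities of clothing feasible, as the abstract emphasizes.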
  • Patent number: 10331287
    Abstract: A user interface can have one or more spaces presented therein. A space is a grouping of one or more program windows in relation to windows of other application programs, such that only the programs of a single space are visible when that space is active. A view of all spaces and their contents can also be generated.
    Type: Grant
    Filed: September 25, 2015
    Date of Patent: June 25, 2019
    Assignee: Apple Inc.
    Inventors: Assana Fard, John O. Louch, Ralph Brunner, Haroon Sheikh, Eric Steven Peyton, Christopher Hynes
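The grouping the abstract describes can be sketched as a simple data structure (class and method names here are invented for illustration): windows are grouped into named spaces, only the active space's windows are visible, and an overview shows all spaces at once:

```python
# Minimal sketch (invented names): model "spaces" as named groups of
# windows where only the active space's windows are visible.

class Desktop:
    def __init__(self):
        self.spaces = {}          # space name -> list of window ids
        self.active = None

    def add_window(self, space, window):
        self.spaces.setdefault(space, []).append(window)
        self.active = self.active or space

    def visible_windows(self):
        # Only windows grouped into the active space are shown.
        return self.spaces.get(self.active, [])

    def overview(self):
        # A view of all spaces and their contents at once.
        return dict(self.spaces)

d = Desktop()
d.add_window("Work", "editor")
d.add_window("Play", "game")
print(d.visible_windows())  # ['editor']
```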
  • Patent number: 10325184
    Abstract: Human Computer Interfaces (HCI) may allow a user to interact with a computer via a variety of mechanisms, such as hand, head, and body gestures. Various of the disclosed embodiments allow information captured from a depth camera on an HCI system to be used to recognize such gestures. Particularly, by training a classifier using vectors having both base and extended components, more accurate classification results may be subsequently obtained. The base vector may include a leaf-based assessment of the classification results from a forest for a given depth value candidate pixel. The extended vector may include additional information, such as the leaf-based assessment of the classification results for one or more pixels related to the candidate pixel. Various embodiments employ this improved structure with various optimization methods and structures to provide more efficient in-situ operation.
    Type: Grant
    Filed: April 12, 2017
    Date of Patent: June 18, 2019
    Assignee: YouSpace, Inc.
    Inventors: Ralph Brunner, Yichen Pan
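The base-plus-extended vector construction can be illustrated roughly as follows (all names are invented, and the stand-in `leaf_votes` function merely mimics what a trained decision forest would return): a candidate pixel's forest leaf "votes" are concatenated with the votes of related pixels before the final classification:

```python
# Rough sketch under invented names: augment a pixel's random-forest leaf
# votes (base vector) with the votes of neighbouring pixels (extended
# vector) before a final classification step.

def leaf_votes(depth_image, x, y):
    # Stand-in for a decision forest: a real system would route the pixel
    # through each tree to a leaf holding per-class probabilities.
    d = depth_image[y][x]
    return [1.0, 0.0] if d < 1.5 else [0.0, 1.0]  # e.g. [hand, background]

def feature_vector(depth_image, x, y, neighbours):
    base = leaf_votes(depth_image, x, y)
    extended = []
    for dx, dy in neighbours:            # votes of related pixels
        extended += leaf_votes(depth_image, x + dx, y + dy)
    return base + extended

img = [[1.0, 1.2, 2.0],
       [1.1, 1.3, 2.1]]
vec = feature_vector(img, 1, 0, [(-1, 0), (1, 0)])
print(vec)  # base votes followed by the two neighbours' votes
```

The intuition is that a pixel whose neighbours vote the same way is classified more reliably than one judged in isolation.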
  • Publication number: 20190180410
    Abstract: Various of the disclosed embodiments present systems and methods for distinguishing portions of a virtual model associated with a clothing article from portions of the virtual model not associated with the clothing article. Some embodiments facilitate quick and effective separation by employing a feature vector structure conducive to separation by a linear classifier. Such efficient separation may be especially beneficial in applications requiring the rapid scanning of large quantities of clothing while retaining high-fidelity representations of the clothing's geometry. Some embodiments further accommodate artist participation in the filtering process as well as scanning of articles from a variety of orientations and with a variety of supporting structures.
    Type: Application
    Filed: December 11, 2017
    Publication date: June 13, 2019
    Inventor: Ralph Brunner
  • Patent number: 10303259
    Abstract: Various of the disclosed embodiments present depth-based user interaction systems facilitating natural and immersive user interactions. Particularly, various embodiments integrate immersive visual presentations with natural and fluid gesture motions. This integration facilitates more rapid user adoption and more precise user interactions. Some embodiments may take advantage of the particular form factors disclosed herein to accommodate user interactions. For example, dual depth sensor arrangements in a housing atop the interface's display may facilitate depth fields of view accommodating more natural gesture recognition than may be otherwise possible. In some embodiments, these gestures may be organized into a framework for universal control of the interface by the user and for application-specific control of the interface by the user.
    Type: Grant
    Filed: April 3, 2017
    Date of Patent: May 28, 2019
    Assignee: YouSpace, Inc.
    Inventors: Ralph Brunner, Andrew Emmett Seligman, Hsin-Yi Chien, John Philip Stoddard, Po-Jui Chen, Elon Sharton-Bierig, Sierra Justine Gaston, Vivi Brunner
  • Patent number: 10304002
    Abstract: Human Computer Interfaces (HCI) may allow a user to interact with a computer via a variety of mechanisms, such as hand, head, and body gestures. Various of the disclosed embodiments allow information captured from a depth camera on an HCI system to be used to recognize such gestures. Particularly, the HCI system's depth sensor may capture depth frames of the user's movements over time. To discern gestures from these movements, the system may group portions of the user's anatomy represented by the depth data into classes. "Features" that reflect distinguishing characteristics of the user's anatomy may be used to accomplish this classification. Some embodiments provide improved systems and methods for generating and/or selecting these features. Features prepared by various of the disclosed embodiments may be less susceptible to overfitting training data and may more quickly distinguish portions of the user's anatomy.
    Type: Grant
    Filed: February 8, 2016
    Date of Patent: May 28, 2019
    Assignee: YouSpace, Inc.
    Inventor: Ralph Brunner
  • Patent number: 10303417
    Abstract: Various of the disclosed embodiments present depth-based user interaction systems. By anticipating various installation constraints and factors into the interface's design, various of the disclosed embodiments facilitate benefits such as complementary depth fields of view and display orientation flexibility. Some embodiments include a frame housing for the depth sensors. Within the housing, depth sensors may be affixed to a mount, such that each sensor's field of view is at a disparate angle. These disparate angles may facilitate gesture recognitions that might otherwise be difficult or impossible to achieve. When mounted in connection with a modular unit, the housing may provide a versatile means for integrating multiple modular units into a composite interface.
    Type: Grant
    Filed: April 3, 2017
    Date of Patent: May 28, 2019
    Assignee: YouSpace, Inc.
    Inventors: Ralph Brunner, Hsin-Yi Chien, John Philip Stoddard, Po-Jui Chen, Vivi Brunner
  • Publication number: 20190037140
    Abstract: Methods, devices, and systems for continuous image capturing are described herein. In one embodiment, a method includes continuously capturing a sequence of images with an image capturing device. The method may further include storing a predetermined number of the sequence of images in a buffer. The method may further include receiving a user request to capture an image. In response to the user request, the method may further include automatically selecting one of the buffered images based on an exposure time of one of the buffered images. The sequence of images is captured prior to or concurrently with receiving the user request.
    Type: Application
    Filed: August 3, 2018
    Publication date: January 31, 2019
    Inventors: Ralph Brunner, Nikhil Bhogal, James David Batson
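The mechanism in this abstract is concrete enough to sketch (parameter names and the exposure-based selection rule shown are illustrative assumptions, not the claimed implementation): frames are captured continuously into a bounded rolling buffer, and the shutter press selects among frames already captured:

```python
# Sketch of the idea (names invented): keep a rolling buffer of the most
# recent frames; when the user presses the shutter, pick the buffered
# frame with the shortest exposure time (e.g. least motion blur).

from collections import deque

class ContinuousCapture:
    def __init__(self, buffer_size):
        self.buffer = deque(maxlen=buffer_size)  # old frames fall out

    def on_frame(self, frame_id, exposure_ms):
        self.buffer.append((frame_id, exposure_ms))

    def on_shutter(self):
        # Select among frames captured before/while the request arrived.
        return min(self.buffer, key=lambda f: f[1])

cam = ContinuousCapture(buffer_size=3)
for frame in [("f1", 33), ("f2", 8), ("f3", 16), ("f4", 25)]:
    cam.on_frame(*frame)
print(cam.on_shutter())  # ('f2', 8)
```

Because the buffer is filled before the request arrives, the selected image can predate the shutter press, which is the point of the claim.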
  • Publication number: 20180304776
    Abstract: A vehicle seat depth adjuster (1) includes a base plate (2) connectable to a seat structure (17); a detent unit (8); a carrier plate (3) moveably arranged above the base plate; a locking unit (4) detachably engaging the detent unit; and a drive unit (11) moving the carrier plate. The carrier plate is fixed in relation to the base plate in an engaged position of the locking unit and is moveable, between a retracted position and an extended position in a released position of the locking unit by the drive unit. The detent unit is arranged on the upper side (2.5) of the base plate, which faces the carrier plate, and includes receiving elements (8.1) staggered in the direction (L) of movement. The locking unit includes an unlocking lever (5) with at least one locking tooth (5.1) that engages in one of the detent receiving elements in the engaged position.
    Type: Application
    Filed: November 25, 2016
    Publication date: October 25, 2018
    Inventors: Markus GUMBRICH, Christian KLOSTERMANN, Joerg MADSEN, René GRAHL, Ralph BRUNNER, André OSTERHUES
  • Publication number: 20180300591
    Abstract: Human Computer Interfaces (HCI) may allow a user to interact with a computer via a variety of mechanisms, such as hand, head, and body gestures. Various of the disclosed embodiments allow information captured from a depth camera on an HCI system to be used to recognize such gestures. Particularly, by training a classifier using vectors having both base and extended components, more accurate classification results may be subsequently obtained. The base vector may include a leaf-based assessment of the classification results from a forest for a given depth value candidate pixel. The extended vector may include additional information, such as the leaf-based assessment of the classification results for one or more pixels related to the candidate pixel. Various embodiments employ this improved structure with various optimization methods and structures to provide more efficient in-situ operation.
    Type: Application
    Filed: April 12, 2017
    Publication date: October 18, 2018
    Inventors: Ralph Brunner, Yichen Pan
  • Publication number: 20180288894
    Abstract: Various of the disclosed embodiments present depth-based user interaction systems. By anticipating various installation constraints and factors into the interface's design, various of the disclosed embodiments facilitate benefits such as complementary depth fields of view and display orientation flexibility. Some embodiments include a frame housing for the depth sensors. Within the housing, depth sensors may be affixed to a mount, such that each sensor's field of view is at a disparate angle. These disparate angles may facilitate gesture recognitions that might otherwise be difficult or impossible to achieve. When mounted in connection with a modular unit, the housing may provide a versatile means for integrating multiple modular units into a composite interface.
    Type: Application
    Filed: April 3, 2017
    Publication date: October 4, 2018
    Inventors: Ralph Brunner, Hsin-Yi Chien, John Philip Stoddard, Po-Jui Chen, Vivi Brunner
  • Publication number: 20180284901
    Abstract: Various of the disclosed embodiments present depth-based user interaction systems facilitating natural and immersive user interactions. Particularly, various embodiments integrate immersive visual presentations with natural and fluid gesture motions. This integration facilitates more rapid user adoption and more precise user interactions. Some embodiments may take advantage of the particular form factors disclosed herein to accommodate user interactions. For example, dual depth sensor arrangements in a housing atop the interface's display may facilitate depth fields of view accommodating more natural gesture recognition than may be otherwise possible. In some embodiments, these gestures may be organized into a framework for universal control of the interface by the user and for application-specific control of the interface by the user.
    Type: Application
    Filed: April 3, 2017
    Publication date: October 4, 2018
    Inventors: Ralph Brunner, Andrew Emmett Seligman, Hsin-Yi Chien, John Philip Stoddard, Po-Jui Chen, Elon Sharton-Bierig, Sierra Justine Gaston, Vivi Brunner
  • Patent number: 10063778
    Abstract: Methods, devices, and systems for continuous image capturing are described herein. In one embodiment, a method includes continuously capturing a sequence of images with an image capturing device. The method may further include storing a predetermined number of the sequence of images in a buffer. The method may further include receiving a user request to capture an image. In response to the user request, the method may further include automatically selecting one of the buffered images based on an exposure time of one of the buffered images. The sequence of images is captured prior to or concurrently with receiving the user request.
    Type: Grant
    Filed: November 21, 2016
    Date of Patent: August 28, 2018
    Assignee: Apple Inc.
    Inventors: Ralph Brunner, Nikhil Bhogal, James David Batson
  • Patent number: 10030968
    Abstract: Human Computer Interfaces (HCI) may allow a user to interact with a computer via a variety of mechanisms, such as hand, head, and body gestures. Various of the disclosed embodiments allow information captured from a depth camera on an HCI system to be used to recognize such gestures. Particularly, the HCI system's depth sensor may capture depth frames of the user's movements over time. To discern gestures from these movements, the system may group portions of the user's anatomy represented by the depth data into classes. This grouping may require that the relevant depth data be extracted from the depth frame. Such extraction may itself require that appropriate clipping planes be determined. Various of the disclosed embodiments better establish floor planes from which such clipping planes may be derived.
    Type: Grant
    Filed: February 8, 2016
    Date of Patent: July 24, 2018
    Assignee: YouSpace, Inc.
    Inventor: Ralph Brunner
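The floor-plane step above can be illustrated loosely (this is not the patented method; the axis-aligned-floor assumption and the band/margin values are invented for the sketch): estimate the floor height from the lowest band of depth points, then clip away everything at or below it so only the user's body remains for classification:

```python
# Loose illustration: estimate a floor plane from a depth point cloud by
# fitting to the lowest band of points, then clip at the derived plane.

def estimate_floor_height(points, band=0.05):
    """Assume an axis-aligned floor: average y of the lowest points."""
    lowest = min(p[1] for p in points)
    floor_pts = [p for p in points if p[1] <= lowest + band]
    return sum(p[1] for p in floor_pts) / len(floor_pts)

def clip_above_floor(points, margin=0.02):
    floor_y = estimate_floor_height(points)
    return [p for p in points if p[1] > floor_y + margin]

cloud = [(0.0, 0.00, 2.0), (0.5, 0.01, 2.1),   # floor samples
         (0.2, 0.90, 1.8), (0.3, 1.20, 1.7)]   # user's body
print(clip_above_floor(cloud))  # only the body points remain
```

In practice the floor is rarely axis-aligned in sensor coordinates, which is why robustly establishing the floor plane, as the abstract claims, is the hard part.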
  • Publication number: 20180157328
    Abstract: Various of the disclosed embodiments provide Human Computer Interfaces (HCI) that incorporate depth sensors at multiple positions and orientations. The depth sensors may be used in conjunction with a display screen to permit users to interact dynamically with the system, e.g., via gestures. Calibration methods for orienting depth values between sensors are also presented. The calibration methods may generate both rotation and translation transformations that can be used to determine the location of a depth value acquired in one sensor from the perspective of another sensor. The calibration process may itself include visual feedback to direct a user assisting with the calibration. In some embodiments, floor estimation techniques may be used alone or in conjunction with the calibration process to facilitate data processing and gesture identification.
    Type: Application
    Filed: December 5, 2016
    Publication date: June 7, 2018
    Inventor: Ralph Brunner
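The rotation-and-translation transform the abstract describes can be shown in miniature (2-D for brevity; the angle and offset values are invented): calibration yields an R and t that map a point measured by one sensor into another sensor's coordinate frame:

```python
# Sketch of the calibration transform (2-D for brevity, values invented):
# p_b = R * p_a + t maps a point from sensor A's frame into sensor B's.

import math

def to_sensor_b(point_a, angle_rad, translation):
    """Apply a planar rotation by angle_rad, then translate."""
    x, y = point_a
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    tx, ty = translation
    return (c * x - s * y + tx, s * x + c * y + ty)

# Suppose sensor B is rotated 90 degrees and offset 1 m along x from A.
p_b = to_sensor_b((1.0, 0.0), math.pi / 2, (1.0, 0.0))
print(p_b)  # roughly (1.0, 1.0)
```

The real system works in 3-D with a full rotation matrix, but the structure of the mapping is the same.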
  • Publication number: 20180122126
    Abstract: A graphics animation and compositing operations framework has a layer tree for interfacing with the application and a render tree for interfacing with a render engine. Layers in the layer tree can be content, windows, views, video, images, text, media or other type of objects for an application's user interface. The application commits state changes of the layers of the layer tree. The application does not need to include explicit code for animating the changes to the layers. Instead, after a synchronization threshold has been met, an animation is determined for animating the change in state by the framework which can define a set of predetermined animations based on motion, visibility and transition. The determined animation is explicitly applied to the affected layers in the render tree. A render engine renders from the render tree into a frame buffer, synchronized with the display. Portions of the render tree changing relative to prior versions can be tracked to improve resource management.
    Type: Application
    Filed: November 13, 2017
    Publication date: May 3, 2018
    Inventors: Ralph Brunner, John Harper, Peter Graffagnino
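The "no explicit animation code" idea in this abstract can be sketched as follows (the API shown is invented for illustration; it is a simplification, not the framework's real interface): the application only commits a new layer state, and the framework itself generates the in-between values the render tree will display:

```python
# Simplified sketch (invented API): the app commits a new layer state;
# the framework generates the intermediate frames for the render tree.

def implicit_animation(old_value, new_value, frames):
    """Linear interpolation the framework applies on the render side."""
    step = (new_value - old_value) / frames
    return [old_value + step * i for i in range(1, frames + 1)]

class Layer:
    def __init__(self, opacity):
        self.opacity = opacity      # app-visible "layer tree" state

    def commit(self, opacity, frames=4):
        # Returns the per-frame values the render tree would display.
        values = implicit_animation(self.opacity, opacity, frames)
        self.opacity = opacity
        return values

layer = Layer(opacity=0.0)
print(layer.commit(1.0))  # [0.25, 0.5, 0.75, 1.0]
```

Splitting the layer tree (what the app sees) from the render tree (what is drawn) is what lets the framework insert these frames without the application's involvement.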
  • Publication number: 20180059511
    Abstract: At least certain embodiments described herein provide a continuous autofocus mechanism for an image capturing device. The continuous autofocus mechanism can perform an autofocus scan for a lens of the image capturing device and obtain focus scores associated with the autofocus scan. The continuous autofocus mechanism can determine an acceptable band of focus scores based on the obtained focus scores. Next, the continuous autofocus mechanism can determine whether a current focus score is within the acceptable band of focus scores. A refocus scan may be performed if the current focus score is outside of the acceptable band of focus scores.
    Type: Application
    Filed: July 28, 2017
    Publication date: March 1, 2018
    Inventors: Ralph Brunner, David Hayward
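The acceptable-band logic above is simple enough to sketch directly (the band rule and tolerance value here are assumptions for illustration, not the patented formula): derive a band from the focus scores of the last scan, then trigger a refocus scan only when the live score drifts out of it:

```python
# Hedged sketch (threshold invented): derive an "acceptable band" from
# the peak focus score of a scan; refocus only when the live focus
# score falls outside that band.

def acceptable_band(scan_scores, tolerance=0.85):
    """Band = [tolerance * best_score, best_score] from the last scan."""
    best = max(scan_scores)
    return (tolerance * best, best)

def needs_refocus(current_score, band):
    low, high = band
    return not (low <= current_score <= high)

band = acceptable_band([10.0, 42.0, 61.0, 38.0])   # peak score 61.0
print(needs_refocus(58.0, band))   # False: still sharp enough
print(needs_refocus(30.0, band))   # True: scene changed, rescan
```

The band keeps the lens from hunting on every small score fluctuation, which is the "continuous" part of the mechanism.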
  • Patent number: 9852535
    Abstract: A graphics animation and compositing operations framework has a layer tree for interfacing with the application and a render tree for interfacing with a render engine. Layers in the layer tree can be content, windows, views, video, images, text, media or other type of objects for an application's user interface. The application commits state changes of the layers of the layer tree. The application does not need to include explicit code for animating the changes to the layers. Instead, after a synchronization threshold has been met, an animation is determined for animating the change in state by the framework which can define a set of predetermined animations based on motion, visibility and transition. The determined animation is explicitly applied to the affected layers in the render tree. A render engine renders from the render tree into a frame buffer, synchronized with the display. Portions of the render tree changing relative to prior versions can be tracked to improve resource management.
    Type: Grant
    Filed: February 9, 2017
    Date of Patent: December 26, 2017
    Assignee: Apple Inc.
    Inventors: Ralph Brunner, John Harper, Peter Graffagnino
  • Publication number: 20170345123
    Abstract: Disclosed is a system for producing images including techniques for reducing the memory and processing power required for such operations. The system provides techniques for programmatically representing a graphics problem. The system further provides techniques for reducing and optimizing graphics problems for rendering with consideration of the system resources, such as the availability of a compatible GPU.
    Type: Application
    Filed: June 21, 2017
    Publication date: November 30, 2017
    Inventors: John Harper, Ralph Brunner, Peter Graffagnino, Mark Zimmer