Patents Assigned to DYNAVOX SYSTEMS, LLC
  • Patent number: 10031576
    Abstract: A speech generation device is disclosed. The speech generation device may include a head mounted display unit having a variety of different components that enhance the functionality of the speech generation device. The speech generation device may further include a computer-readable medium storing instructions that, when executed by a processor, cause the speech generation device to perform desired functions.
    Type: Grant
    Filed: June 2, 2011
    Date of Patent: July 24, 2018
    Assignee: Dynavox Systems LLC
    Inventor: Bob Cunningham
  • Patent number: 9760123
    Abstract: In several embodiments, a speech generation device is disclosed. The speech generation device may generally include a projector configured to project images in the form of a projected display onto a projection surface, an optical input device configured to detect an input directed towards the projected display and a speaker configured to generate an audio output. In addition, the speech generation device may include a processing unit communicatively coupled to the projector, the optical input device and the speaker. The processing unit may include a processor and related computer readable medium configured to store instructions executable by the processor, wherein the instructions stored on the computer readable medium configure the speech generation device to generate text-to-speech output.
    Type: Grant
    Filed: August 3, 2011
    Date of Patent: September 12, 2017
    Assignee: Dynavox Systems LLC
    Inventor: Bob Cunningham
  • Publication number: 20140334666
    Abstract: Eye tracking systems and methods include such exemplary features as a display device, at least one image capture device and a processing device. The display device displays a user interface including one or more interface elements to a user. The at least one image capture device detects a user's gaze location relative to the display device. The processing device electronically analyzes the location of user elements within the user interface relative to the user's gaze location and dynamically determines whether to initiate the display of a zoom window. The dynamic determination of whether to initiate display of the zoom window may further include analysis of the number, size and density of user elements within the user interface relative to the user's gaze location, the application type associated with the user interface or at the user's gaze location, and/or the structure of eye movements relative to the user interface.
    Type: Application
    Filed: October 10, 2011
    Publication date: November 13, 2014
    Applicant: DYNAVOX SYSTEMS LLC
    Inventors: Chris Lankford, Timothy Mulholland, II, Charles McKinley
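The zoom-window decision described in the abstract above (analyzing the number, size, and density of interface elements near the gaze point) might be sketched as follows. This is purely illustrative; the function name, thresholds, and density heuristic are assumptions, not details taken from the patent.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Element:
    x: float       # element center, x (pixels)
    y: float       # element center, y (pixels)
    width: float
    height: float

def should_zoom(elements, gaze_x, gaze_y, radius=100.0,
                min_count=3, max_avg_size=48.0):
    """Decide whether to open a zoom window: several small targets
    clustered near the gaze point suggest magnification would help."""
    near = [e for e in elements
            if hypot(e.x - gaze_x, e.y - gaze_y) <= radius]
    if len(near) < min_count:
        return False
    avg_size = sum(min(e.width, e.height) for e in near) / len(near)
    return avg_size <= max_avg_size
```

A real system would also weigh the application type and the recent structure of eye movements, as the abstract notes; this sketch covers only the count/size/density portion.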
  • Publication number: 20130300636
    Abstract: A speech generation device is disclosed. The speech generation device may include a head mounted display unit having a variety of different components that enhance the functionality of the speech generation device. The speech generation device may further include a computer-readable medium storing instructions that, when executed by a processor, cause the speech generation device to perform desired functions.
    Type: Application
    Filed: June 2, 2011
    Publication date: November 14, 2013
    Applicant: DYNAVOX SYSTEMS LLC
    Inventors: Bob Cunningham, Riad Hammoud
  • Publication number: 20120137254
    Abstract: Systems and methods of providing electronic features for creating context-aware vocabulary suggestions for an electronic device include providing a graphical user interface design area having a plurality of display elements. An electronic device user may be provided automated context-aware analysis of information from plural sources, including GPS, compass, speaker identification (i.e., voice recognition), facial identification, speech content determination, user specifications, speech output monitoring, and software navigation monitoring, to provide a selectable display of suggested vocabulary, previously stored words and phrases, or a keyboard as input options to create messages for text display and/or speech generation. The user may, optionally, manually specify a context.
    Type: Application
    Filed: November 23, 2011
    Publication date: May 31, 2012
    Applicant: DYNAVOX SYSTEMS LLC
    Inventors: Bob Cunningham, David Edward Lee
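The context-aware suggestion idea above (fusing signals such as location and recognized speaker into a ranked vocabulary display) could be sketched as a simple tag-overlap ranking. All names and the scoring scheme here are assumptions for illustration, not the patented method.

```python
def suggest_vocabulary(context, phrase_store):
    """Rank stored phrases by overlap with the active context tags.
    `context` is a set of tags derived from sources such as GPS
    location, recognized speaker, or time of day; `phrase_store`
    maps each phrase to the tags under which it was saved."""
    scored = []
    for phrase, tags in phrase_store.items():
        overlap = len(context & tags)
        if overlap:
            scored.append((overlap, phrase))
    # Highest overlap first; alphabetical order as a tiebreaker.
    scored.sort(key=lambda t: (-t[0], t[1]))
    return [phrase for _, phrase in scored]
```

For example, with the context `{"restaurant", "lunch"}`, phrases saved under both tags would rank above phrases saved under only one, and unrelated phrases would be omitted.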
  • Publication number: 20120105486
    Abstract: Eye tracking systems and methods include such exemplary features as a display device, at least one image capture device and a processing device. The display device displays a user interface including one or more interface elements to a user. The at least one image capture device detects a user's gaze location relative to the display device. The processing device electronically analyzes the location of user elements within the user interface relative to the user's gaze location and dynamically determines whether to initiate the display of a zoom window. The dynamic determination of whether to initiate display of the zoom window may further include analysis of the number, size and density of user elements within the user interface relative to the user's gaze location, the application type associated with the user interface or at the user's gaze location, and/or the structure of eye movements relative to the user interface.
    Type: Application
    Filed: April 9, 2010
    Publication date: May 3, 2012
    Applicant: DYNAVOX SYSTEMS LLC
    Inventors: Chris Lankford, Timothy Mulholland, II, Charles McKinley
  • Publication number: 20120035934
    Abstract: In several embodiments, a speech generation device is disclosed. The speech generation device may generally include a projector configured to project images in the form of a projected display onto a projection surface, an optical input device configured to detect an input directed towards the projected display and a speaker configured to generate an audio output. In addition, the speech generation device may include a processing unit communicatively coupled to the projector, the optical input device and the speaker. The processing unit may include a processor and related computer readable medium configured to store instructions executable by the processor, wherein the instructions stored on the computer readable medium configure the speech generation device to generate text-to-speech output.
    Type: Application
    Filed: August 3, 2011
    Publication date: February 9, 2012
    Applicant: DYNAVOX SYSTEMS LLC
    Inventor: Bob Cunningham
  • Publication number: 20110202842
    Abstract: Systems and methods of providing electronic features for creating a customized media player interface for an electronic device include providing a graphical user interface design area having a plurality of display elements. Electronic input signals then may define for association with selected display elements one or more electronic actions relative to the initiation and control of media files accessible by the electronic device (e.g., playing, pausing, stopping, adjusting play speed, adjusting volume, adjusting current file position, toggling modes such as repeat or shuffle, and/or establishing, viewing and/or clearing a playlist). Additional electronic input signals may define for association with selected display elements labels such as action identification labels or media status labels.
    Type: Application
    Filed: February 12, 2010
    Publication date: August 18, 2011
    Applicant: DYNAVOX SYSTEMS, LLC
    Inventors: Brent Michael Weatherly, Pierrie Jean Musick
  • Publication number: 20110197156
    Abstract: Systems and methods for generating an interactive zoom interface for an electronic device include electronically displaying a first graphical user interface area to a user. Input is then received from a user indicating a desire to initiate a magnified or zoomed display state. Upon receipt of such electronic input, a second user interface area is displayed to a user. The second user interface area (i.e., a zoom frame) corresponds to a magnified view of some or all of the first user interface area, which may be displayed in place of or overlaid on some or all of the first user interface area. The second user interface area may include at least one zoom toolbar portion including selectable controls such as but not limited to one or more of a zoom in, zoom out, zoom amount, pan directions, scroll directions, cancel/dismiss, contrast and display options, zoom frame toolbar position options, etc.
    Type: Application
    Filed: February 9, 2010
    Publication date: August 11, 2011
    Applicant: DYNAVOX SYSTEMS, LLC
    Inventors: John Strait, Dan Sweeney, Jason McCullough
  • Publication number: 20110191699
    Abstract: Systems and methods for interfacing interactive content items and shared data variables include electronically generating a first program interface to provide a module for creating one or more interactive content items (e.g., new graphical user interfaces or other activities). A second program interface is also electronically generated to provide a module for creating one or more shared data variables (e.g., data tables or the like) and for entering data into such shared data variables. Features are also provided to generate a third program interface for defining instructions to reference one or more shared data variables from an interactive content item. The instructions created using the third program interface are electronically executed to populate one or more elements in the interactive content item with data from one or more of the shared data variables.
    Type: Application
    Filed: February 2, 2010
    Publication date: August 4, 2011
    Applicant: DYNAVOX SYSTEMS, LLC
    Inventors: Bob Cunningham, Greg Brown, Mike Salandro
  • Publication number: 20110161073
    Abstract: Systems and methods for automatically selecting dictionary definitions for one or more target words include receiving electronic signals from an input device indicating one or more target words for which a dictionary definition is desired. The target word(s) and selected surrounding words defining an observation sequence are subjected to a part of speech tagging algorithm to electronically determine one or more most likely part of speech tags for the target word(s). Potential relations are examined between the target word(s) and selected surrounding keywords. The target word(s), the part of speech tag(s) and the discovered keyword relations are then used to map the target word(s) to one or more specific dictionary definitions. The dictionary definitions are then provided as electronic output, such as by audio and/or visual display, to a user.
    Type: Application
    Filed: December 29, 2009
    Publication date: June 30, 2011
    Applicant: DYNAVOX SYSTEMS, LLC
    Inventors: Greg Lesher, Bob Cunningham
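The definition-selection flow above (tag the target word's part of speech from its observation sequence, then map word plus tag to a specific dictionary sense) can be illustrated with a toy lexicon. The lexicon contents, tag names, and function are hypothetical; a real system would use an actual part-of-speech tagging algorithm rather than pre-tagged input.

```python
# Hypothetical mini-lexicon: each word maps part-of-speech tags to senses.
LEXICON = {
    "bank": {
        "NOUN": "a financial institution; also, the slope beside a river",
        "VERB": "to tilt an aircraft, or to deposit money",
    },
}

def define(word, tagged_sentence):
    """Select a dictionary definition for `word` using the part-of-speech
    tag assigned to it within the surrounding sentence (the observation
    sequence). `tagged_sentence` is a list of (token, tag) pairs standing
    in for the output of a real POS tagger."""
    senses = LEXICON.get(word, {})
    for token, tag in tagged_sentence:
        if token == word and tag in senses:
            return senses[tag]
    return None
```

Keyword relations with surrounding words (e.g., "river" nearby favoring the riverbank sense) would further disambiguate among senses that share a part of speech; this sketch stops at the tag-based step.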
  • Publication number: 20110161068
    Abstract: Systems and methods for automatically discovering and assigning symbols for identified text in a software application include receiving electronic signals from an input device indicating identified text for which symbol assignment is desired. Additional information such as part of speech, additional words, context of use, etc. may also be provided. The identified text and optional additional information is analyzed to establish a mapping of the identified text to one or more identified word senses from a word sense model database. An electronic determination of whether any of the identified word senses has an associated symbol is conducted. Related word senses may also be analyzed to determine if any related word senses have symbols. One of the determined symbols may then be associated with the identified text such that the symbol is thereafter displayed in conjunction with or instead of the text in the application.
    Type: Application
    Filed: December 29, 2009
    Publication date: June 30, 2011
    Applicant: DYNAVOX SYSTEMS, LLC
    Inventors: Greg Lesher, Bob Cunningham
  • Publication number: 20110161067
    Abstract: Systems and methods for automatically discovering and assigning symbols for identified text in a software application include identifying text for which symbol assignment is desired. The words within the identified text and selected surrounding words defining an observation sequence are subjected to a part of speech tagging algorithm to electronically determine one or more most likely part of speech tags for the identified text. Context relations between the identified text and selected surrounding keywords may also be identified. The identified text, part of speech tag(s) and/or determined relations are then analyzed to map the identified text to one or more identified word senses. Related word senses may also be analyzed to determine if any related word senses have symbols. One of the determined symbols may then be associated with the identified text such that the symbol is thereafter displayed in conjunction with or instead of the text in the application.
    Type: Application
    Filed: December 29, 2009
    Publication date: June 30, 2011
    Applicant: DYNAVOX SYSTEMS, LLC
    Inventors: Greg Lesher, Bob Cunningham
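The symbol-assignment entries above share a common core: map text to a word sense, check whether that sense has an associated symbol, and fall back to related word senses when it does not. A minimal sketch of that lookup, with entirely hypothetical sense identifiers and data, might look like:

```python
# Hypothetical word-sense model: sense -> symbol file, and
# sense -> related senses (e.g., hypernyms) to try as fallbacks.
SYMBOLS = {"dog#animal": "dog.png"}
RELATED = {"puppy#animal": ["dog#animal"]}

def find_symbol(sense):
    """Return a symbol for a word sense, falling back to related
    senses when the sense itself has no symbol of its own."""
    if sense in SYMBOLS:
        return SYMBOLS[sense]
    for rel in RELATED.get(sense, []):
        sym = find_symbol(rel)
        if sym:
            return sym
    return None
```

In this sketch, "puppy" has no symbol of its own but inherits the dog symbol through its related sense, mirroring the fallback-to-related-senses step the abstracts describe.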