Audio Input For On-Screen Manipulation (e.g., Voice Controlled GUI) Patents (Class 715/728)
  • Publication number: 20150067516
    Abstract: A wearable electronic device including a wireless communication unit configured to be wirelessly connected to a projector for projecting a stored presentation onto a screen of an external device; a main body configured to be worn by a user; a microphone integrally connected to the main body; a display unit configured to be attached to the main body; and a controller configured to match voice information input through the microphone with corresponding contents of the stored presentation, and display at least a following portion of content that follows the corresponding contents on the display unit.
    Type: Application
    Filed: February 21, 2014
    Publication date: March 5, 2015
    Applicant: LG ELECTRONICS INC.
    Inventors: Jongseok PARK, Sanghyuck LEE
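A minimal sketch of the flow described in publication 20150067516, assuming a plain word-overlap matcher between the transcribed speech and the stored slides; the function names, matching rule, and example deck are assumptions for illustration, not LG's implementation.

```python
def match_spoken_text_to_slide(spoken_text: str, slides: list[str]) -> int:
    """Return the index of the slide whose text best overlaps the spoken words."""
    spoken_words = set(spoken_text.lower().split())
    scores = [len(spoken_words & set(slide.lower().split())) for slide in slides]
    return max(range(len(slides)), key=scores.__getitem__)

def following_portion(spoken_text: str, slides: list[str]) -> str:
    """Pick the content that follows the slide the presenter is currently speaking about."""
    current = match_spoken_text_to_slide(spoken_text, slides)
    return slides[current + 1] if current + 1 < len(slides) else ""

if __name__ == "__main__":
    deck = ["intro and agenda", "market overview and trends", "product roadmap"]
    # Presenter is on the market slide; the wearable display would show the roadmap next.
    print(following_portion("let's look at the market overview", deck))
```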
  • Publication number: 20150067517
    Abstract: An electronic device and a method for controlling the electronic device are provided. The method includes receiving at least one input sound from a user, determining one of a plurality of reference sounds included in a guide track as a device playing sound corresponding to the at least one input sound, and playing the device playing sound.
    Type: Application
    Filed: August 25, 2014
    Publication date: March 5, 2015
    Inventors: Hae-Seok OH, Jeong-Yeon KIM, Dae-Beom PARK, Lae-Hyuk BANG, Chul-Hyung YANG, Ji-Woong OH, Gyu-Cheol CHOI
  • Publication number: 20150067515
    Abstract: An electronic device, a controlling method of a screen, and a program storage medium thereof are provided. The screen includes a display panel and a touch-sensitive panel. The display panel shows a root window on which all display contents are shown. The controlling method comprises the following steps. A command signal is received. The coordinate system of the screen is transformed with a transformation according to the command signal.
    Type: Application
    Filed: August 27, 2013
    Publication date: March 5, 2015
    Applicant: Industrial Technology Research Institute
    Inventor: Chia-Ming CHANG
  • Patent number: 8966365
    Abstract: An apparatus, system, and method are disclosed for an information processing apparatus capable of allowing a user to select appropriate processing beforehand when an application program outputs sound in a state in which an audio device is silenced. The apparatus in one embodiment includes a silencing module for silencing audio information output from an audio device, a detection module for detecting a sound playback request from an application program while silencing is set, a display module for displaying a select screen for allowing a user to select processing when the sound playback request from the application program is detected by the detection module, and a processing module for executing the processing selected by the user on the select screen.
    Type: Grant
    Filed: May 12, 2011
    Date of Patent: February 24, 2015
    Assignee: Lenovo (Singapore) Pte. Ltd.
    Inventors: Koutaroh Maki, Mikio Hagiwara
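A minimal sketch of the control flow in patent 8966365, assuming a callback-based audio layer; the class, method names, and option labels are hypothetical, not Lenovo's API.

```python
class SilencedAudioGuard:
    def __init__(self):
        self.muted = False

    def set_mute(self, muted: bool) -> None:
        self.muted = muted

    def on_playback_request(self, app_name: str, choose) -> str:
        """If silencing is set, ask the user how to proceed; otherwise just play."""
        if not self.muted:
            return "play"
        # Detection module: a playback request arrived while silencing is set,
        # so the display module shows a select screen (modelled by the `choose` callback).
        return choose(f"{app_name} wants to play sound while muted",
                      ["unmute and play", "play silently", "cancel"])

if __name__ == "__main__":
    guard = SilencedAudioGuard()
    guard.set_mute(True)
    # Simulate the select screen with a callback that always picks the first option.
    print(guard.on_playback_request("MediaPlayer", lambda prompt, options: options[0]))
```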
  • Publication number: 20150046825
    Abstract: The present disclosure involves a method of improving one-handed operation of a mobile computing device. A first visual content is displayed on a screen of the mobile computing device. The first visual content occupies a substantial entirety of a viewable area of the screen. While the first visual content is being displayed, an action performed by a user to the mobile computing device is detected. The first visual content is scaled down in response to the detected action and displayed on the screen. The scaled-down first visual content occupies a fraction of the viewable area of the screen. A user interaction with the scaled-down first visual content is then detected. In response to the user interaction, a second visual content is displayed on the screen. The second visual content is different from the first visual content and occupies a substantial entirety of the viewable area of the screen.
    Type: Application
    Filed: August 8, 2013
    Publication date: February 12, 2015
    Inventor: Eric Qing Li
  • Publication number: 20150040012
    Abstract: Techniques described herein provide a computing device configured to provide an indication that the computing device has recognized a voice-initiated action. In one example, a method is provided for outputting, by a computing device and for display, a speech recognition graphical user interface (GUI) having at least one element in a first visual format. The method further includes receiving, by the computing device, audio data and determining, by the computing device, a voice-initiated action based on the audio data. The method also includes outputting, while receiving additional audio data and prior to executing a voice-initiated action based on the audio data, and for display, an updated speech recognition GUI in which the at least one element is displayed in a second visual format, different from the first visual format, to indicate that the voice-initiated action has been identified.
    Type: Application
    Filed: December 17, 2013
    Publication date: February 5, 2015
    Applicant: Google Inc.
    Inventors: Alexander Faaborg, Peter Ng
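A minimal sketch of the visual-format switch described in publication 20150040012, using a toy text "display" and a keyword rule to stand in for voice-action detection; all of this is assumed for illustration and is not Google's implementation.

```python
def render_speech_gui(element_text: str, action_identified: bool) -> str:
    """First visual format while listening; second format once an action is recognized."""
    if action_identified:
        return f"[{element_text.upper()}]"   # second visual format: highlighted
    return element_text                       # first visual format: plain

def handle_audio_chunk(transcript_so_far: str) -> str:
    # Hypothetical rule: a known verb at the start of the transcript counts as a
    # voice-initiated action, identified before the command is actually executed.
    known_actions = {"open", "call", "navigate", "play"}
    words = transcript_so_far.strip().split()
    first_word = words[0].lower() if words else ""
    return render_speech_gui("listening...", first_word in known_actions)

if __name__ == "__main__":
    print(handle_audio_chunk("uh"))                 # still the first format
    print(handle_audio_chunk("navigate to work"))   # switches to the second format
```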
  • Publication number: 20150033129
    Abstract: A mobile terminal including a camera; a display unit configured to display an image input through the camera; and a controller configured to display at least one user-defined icon corresponding to linked image-setting information, receive a touch signal indicating a touch is applied to a corresponding user-defined icon, and control the camera to capture the image based on image-setting information linked to the corresponding user-defined icon in response to the received touch signal.
    Type: Application
    Filed: June 9, 2014
    Publication date: January 29, 2015
    Inventors: Kyungmin CHO, Jeonghyun LEE, Minah SONG
  • Publication number: 20150033128
    Abstract: A multi-dimensional surgical safety countermeasure system and method for using automated checklists to provide information to surgical staff in a surgical procedure. The system and method involve using checklists and receiving commands through the prompts of the checklists to update the information displayed on the display to guide the performance of a medical procedure.
    Type: Application
    Filed: July 24, 2013
    Publication date: January 29, 2015
    Inventors: Steve Curd, Mark Heinemeyer, Victor Culafic
  • Publication number: 20150033130
    Abstract: A computing device detects a user viewing the computing device and outputs a cue if the user is detected to view the computing device. The computing device receives an audio input from the user if the user continues to view the computing device for a predetermined amount of time.
    Type: Application
    Filed: April 27, 2012
    Publication date: January 29, 2015
    Applicant: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.
    Inventor: Evan Scheessele
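A minimal sketch of the gaze-then-listen behaviour in publication 20150033130, assuming a polling loop and stubbed gaze detection; the dwell time, callbacks, and polling interval are hypothetical.

```python
import time

DWELL_SECONDS = 2.0  # assumed "predetermined amount of time"

def run_gaze_listen_loop(is_user_viewing, emit_cue, start_listening, poll=0.1):
    """Emit a cue when the user looks at the device; accept audio after a dwell period."""
    gaze_started = None
    while True:
        if is_user_viewing():
            if gaze_started is None:
                gaze_started = time.monotonic()
                emit_cue()  # device signals that it noticed the user
            elif time.monotonic() - gaze_started >= DWELL_SECONDS:
                start_listening()  # user kept viewing long enough: accept audio input
                return
        else:
            gaze_started = None
        time.sleep(poll)

if __name__ == "__main__":
    t0 = time.monotonic()
    run_gaze_listen_loop(lambda: time.monotonic() - t0 > 0.3,
                         lambda: print("cue: chime"),
                         lambda: print("listening for voice input"))
```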
  • Patent number: 8943411
    Abstract: A system, method, and computer program product are provided for displaying controls to a user. In use, input is received from a user. Additionally, a location of the user is determined with respect to a display, utilizing the input. Further, one or more controls are positioned on the display, based on the location of the user.
    Type: Grant
    Filed: March 6, 2012
    Date of Patent: January 27, 2015
    Assignee: Amdocs Software Systems Limited
    Inventor: Matthew Davis Hill
  • Publication number: 20150026580
    Abstract: A system of communicating between first and second electronic devices, comprises, in a first device, receiving from a second device, voice representative information acquired by the second device, and connection information indicating characteristics of communication to be used in establishing a communication link with the second device. The system compares the voice representative information with predetermined reference voice representative information and in response to the comparison, establishes a communication link with the second device by using the connection information received from the second device.
    Type: Application
    Filed: July 1, 2014
    Publication date: January 22, 2015
    Inventors: Hyuk KANG, Kyung-tae KIM, Seong-min JE
  • Publication number: 20150026579
    Abstract: The disclosed embodiments illustrate methods and systems for processing one or more crowdsourced tasks. The method comprises converting an audio input received from a crowdworker to one or more phrases by one or more processors in at least one computing device. The audio input is at least a response to a crowdsourced task. A mode of the audio input is selected based on one or more parameters associated with the crowdworker. Thereafter, the one or more phrases are presented on a display of the at least one computing device by the one or more processors. Finally, one of the one or more phrases is selected by the crowdworker as a correct response to the crowdsourced task.
    Type: Application
    Filed: July 16, 2013
    Publication date: January 22, 2015
    Inventor: Shailesh Vaya
  • Patent number: 8938676
    Abstract: A method of enabling a user to adjust at least first and second control parameters for controlling an electronic system includes displaying a coordinate system on a display screen, where a first coordinate represents a range of values of the first control parameter, and a second coordinate represents a range of values of the second control parameter. The method further includes visually indicating a position in the coordinate system corresponding to a currently selected combination of values of the first and second control parameters, and enabling the user to select a new combination of values of the first and second control parameters by indicating a position within the coordinate system.
    Type: Grant
    Filed: March 12, 2004
    Date of Patent: January 20, 2015
    Assignee: Koninklijke Philips N.V.
    Inventors: Leon Maria Van De Kerkhof, Mykola Ostrovskyy, Arnoldus Werner Johannes Oomen
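A minimal sketch of the two-parameter coordinate control in patent 8938676: a point selected inside a rectangular surface is mapped linearly onto the value ranges of two control parameters. The parameter names and ranges (volume, filter cutoff) are invented for illustration.

```python
def position_to_parameters(x_px, y_px, width_px, height_px,
                           x_range=(0.0, 100.0), y_range=(20.0, 20000.0)):
    """Map a selected screen position to (first_param, second_param) values."""
    fx = min(max(x_px / width_px, 0.0), 1.0)   # fraction along the first coordinate
    fy = min(max(y_px / height_px, 0.0), 1.0)  # fraction along the second coordinate
    first = x_range[0] + fx * (x_range[1] - x_range[0])    # e.g. volume 0..100
    second = y_range[0] + fy * (y_range[1] - y_range[0])   # e.g. cutoff 20 Hz..20 kHz
    return first, second

if __name__ == "__main__":
    # User taps the centre of a 400x300 control surface.
    print(position_to_parameters(200, 150, 400, 300))
```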
  • Publication number: 20150019974
    Abstract: There is provided an information processing device including a processor configured to realize an address term definition function of defining an address term for at least a partial region of an image to be displayed on a display, a display control function of displaying the image on the display and temporarily displaying the address term on the display in association with the region, a voice input acquisition function of acquiring a voice input for the image, and a command issuing function of issuing a command relevant to the region when the address term is included in the voice input.
    Type: Application
    Filed: May 9, 2014
    Publication date: January 15, 2015
    Applicant: Sony Corporation
    Inventors: Shouichi DOI, Yoshiki TAKEOKA, Masayuki TAKADA
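A minimal sketch of the address-term idea in publication 20150019974: regions of a displayed image get temporary spoken labels, and a voice input containing one of those labels triggers a command for that region. The labels, the example "zoom" command, and the matching rule are assumptions.

```python
def define_address_terms(regions):
    """Assign a short spoken label ("A", "B", ...) to each region of the image."""
    return {chr(ord("A") + i): region for i, region in enumerate(regions)}

def issue_command(address_terms, voice_input):
    """Issue a command relevant to a region when its address term appears in the voice input."""
    for term, region in address_terms.items():
        if f" {term.lower()} " in f" {voice_input.lower()} ":
            return {"command": "zoom", "region": region}   # example command only
    return None

if __name__ == "__main__":
    terms = define_address_terms([(0, 0, 100, 100), (100, 0, 200, 100)])
    print(issue_command(terms, "zoom into b please"))
```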
  • Publication number: 20150019975
    Abstract: Described herein are technologies pertaining to transmitting electronic contact data from a first application to a second application by way of an operating system without generating a centralized contact store or providing the second application with programmatic access to all electronic contact data retained by the first application.
    Type: Application
    Filed: July 28, 2014
    Publication date: January 15, 2015
    Inventors: John Morrow, Neil Pankey, Michael Farnsworth, Ashish Bangale
  • Patent number: 8935166
    Abstract: Some embodiments disclosed herein store a target application and a dictation application. The target application may be configured to receive input from a user. The dictation application interface may include a full overlay mode option, where in response to selection of the full overlay mode option, the dictation application interface is automatically sized and positioned over the target application interface to fully cover a text area of the target application interface to appear as if the dictation application interface is part of the target application interface. The dictation application may be further configured to receive an audio dictation from the user, convert the audio dictation into text, provide the text in the dictation application interface and, in response to receiving a first user command to complete the dictation, automatically copy the text from the dictation application interface and insert it into the target application interface.
    Type: Grant
    Filed: October 16, 2013
    Date of Patent: January 13, 2015
    Assignee: Dolbey & Company, Inc.
    Inventors: Curtis A. Weeks, Aaron G. Weeks, Stephen E. Barton
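A minimal sketch of the overlay dictation flow in patent 8935166, with the speech-to-text step stubbed out and the overlay geometry omitted; the class and method names are hypothetical, not Dolbey's API.

```python
class TargetApplication:
    def __init__(self):
        self.text_area = ""

    def insert_text(self, text: str) -> None:
        self.text_area += text

class DictationOverlay:
    """Conceptually sized and positioned over the target's text area (geometry omitted)."""
    def __init__(self, target: TargetApplication):
        self.target = target
        self.buffer = ""

    def on_audio(self, audio_chunk: bytes) -> None:
        self.buffer += transcribe(audio_chunk)  # speech-to-text conversion

    def complete_dictation(self) -> None:
        # First user command: copy the converted text into the target and clear the overlay.
        self.target.insert_text(self.buffer)
        self.buffer = ""

def transcribe(audio_chunk: bytes) -> str:
    return "Patient presents with mild symptoms. "  # stand-in for a real recognizer

if __name__ == "__main__":
    target = TargetApplication()
    overlay = DictationOverlay(target)
    overlay.on_audio(b"...")
    overlay.complete_dictation()
    print(target.text_area)
```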
  • Publication number: 20150012829
    Abstract: A computer implemented method and an apparatus for facilitating voice user interface (VUI) design are provided. The method comprises identifying a plurality of user intentions from user interaction data. The method further comprises associating each user intention with at least one feature from among a plurality of features. One or more features from among the plurality of features are extracted from natural language utterances associated with the user interaction data. Further, the method comprises computing a plurality of distance metrics corresponding to pairs of user intentions from among the plurality of user intentions. A distance metric is computed for each pair of user intentions from among the pairs of user intentions. Furthermore, the method comprises generating a plurality of clusters based on the plurality of distance metrics. Each cluster comprises a set of user intentions. The method further comprises provisioning a VUI design recommendation based on the plurality of clusters.
    Type: Application
    Filed: June 30, 2014
    Publication date: January 8, 2015
    Inventors: Kathy L. BROWN, Vaibhav SRIVASTAVA
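A minimal sketch of the clustering step in publication 20150012829, assuming bag-of-words features and greedy single-linkage merging; the filing does not disclose its feature extraction or distance metric, so Jaccard distance and the threshold are stand-ins.

```python
from itertools import combinations

def jaccard_distance(a: set, b: set) -> float:
    return 1.0 - len(a & b) / len(a | b) if (a | b) else 0.0

def cluster_intentions(intent_features: dict, threshold: float = 0.6):
    """Greedy single-linkage clustering over pairwise distances between user intentions."""
    clusters = [{name} for name in intent_features]
    distances = {(i, j): jaccard_distance(intent_features[i], intent_features[j])
                 for i, j in combinations(intent_features, 2)}
    merged = True
    while merged:
        merged = False
        for (i, j), d in distances.items():
            if d <= threshold:
                ci = next(c for c in clusters if i in c)
                cj = next(c for c in clusters if j in c)
                if ci is not cj:
                    clusters.remove(cj)
                    ci |= cj
                    merged = True
    return clusters

if __name__ == "__main__":
    features = {"check_balance": {"check", "balance", "account"},
                "account_summary": {"account", "summary", "balance"},
                "report_fraud": {"report", "fraud", "card"}}
    print(cluster_intentions(features))  # two close intentions merge; fraud stays separate
```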
  • Patent number: 8930342
    Abstract: Multidimensional search capabilities are enabled on a non-PC (personal computer) device being utilized by a user. An original query submitted by the user via the non-PC device is received. A structured data repository is accessed to extract structured data that is available for the original query, where the extracted structured data represents attributes of the original query. The extracted structured data is provided to the user in the form of a hierarchical menu which allows the user to interactively modify the original query, such modification resulting in a revised query.
    Type: Grant
    Filed: April 14, 2014
    Date of Patent: January 6, 2015
    Assignee: Microsoft Corporation
    Inventors: Johnson Apacible, Mark Encarnacion, Aleksey Sinyagin
  • Patent number: 8924856
    Abstract: Provided is a method of providing a slide show. The method includes determining whether a first image to be displayed is an image photographed in a continuous photographing mode, when the first image is an image photographed in the continuous photographing mode, displaying the first image for a first time interval, and when the first image is not an image photographed in the continuous photographing mode, displaying the first image for a second time interval, wherein the first time interval and the second time interval are different from each other. The first time interval may be shorter than the second time interval.
    Type: Grant
    Filed: January 7, 2010
    Date of Patent: December 30, 2014
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Jae-myung Lee
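A minimal sketch of the interval rule in patent 8924856; the interval values and the metadata flag name are hypothetical.

```python
BURST_INTERVAL_S = 0.5    # shorter interval for images taken in continuous photographing mode
NORMAL_INTERVAL_S = 3.0   # longer interval for all other images

def display_interval(image_metadata: dict) -> float:
    return BURST_INTERVAL_S if image_metadata.get("continuous_mode") else NORMAL_INTERVAL_S

if __name__ == "__main__":
    print(display_interval({"continuous_mode": True}))   # 0.5
    print(display_interval({"continuous_mode": False}))  # 3.0
```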
  • Publication number: 20140380170
    Abstract: A method for receiving processed information at a remote device is described. The method includes transmitting from the remote device a verbal request to a first information provider and receiving a digital message from the first information provider in response to the transmitted verbal request. The digital message includes a symbolic representation indicator associated with a symbolic representation of the verbal request and data used to control an application. The method also includes transmitting, using the application, the symbolic representation indicator to a second information provider for generating results to be displayed on the remote device.
    Type: Application
    Filed: September 5, 2014
    Publication date: December 25, 2014
    Inventors: Gudmundur Hafsteinsson, Michael J. LeBeau, Natalia Marmasse, Sumit Agarwal, Dipchand Nishar
  • Publication number: 20140380169
    Abstract: Disclosed are methods for disambiguating an input phrase or group of words. An implementation may include receiving a phrase as an input to a processor. The received phrase may be presented on a display device. The received phrase may be determined to be ambiguous based on a threshold uncertainty in either a definition or a pronunciation related to the phrase. An indication may be provided that a word in the phrase is the cause of the ambiguity. A menu of words with each word incorporating at least one diacritic mark to a word in the received phrase to disambiguate the received phrase may be presented. A word from the menu of words may be selected and presented on the display device.
    Type: Application
    Filed: June 20, 2013
    Publication date: December 25, 2014
    Inventor: Mohamed S. Eldawy
  • Publication number: 20140372892
    Abstract: Embodiments of the present invention automatically register user interfaces with a voice control system. Registering the interface allows interactive elements within the interface to be controlled by a user's voice. A voice control system analyzes audio including voice commands spoken by a user and manipulates the user interface in response. The automatic registration of a user interface with a voice control system allows a user interface to be voice controlled without the developer of the application associated with the interface having to do anything. Embodiments of the invention allow an application's interface to be voice controlled without the application needing to account for states of the voice control system.
    Type: Application
    Filed: June 18, 2013
    Publication date: December 18, 2014
    Inventors: GERSHOM LOUIS PAYZER, NICHOLAS DORIAN RAPP, NALIN SINGAL, LAWRENCE WAYNE OLSON
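A minimal sketch of automatic interface registration as described in publication 20140372892, with a toy widget tree; the classes and matching rule are invented for illustration and do not reflect the actual implementation.

```python
class Button:
    def __init__(self, label, on_activate):
        self.label = label
        self.on_activate = on_activate

def register_interface(widgets):
    """Walk the interface and map each interactive element's label to its action."""
    return {w.label.lower(): w.on_activate for w in widgets}

def handle_voice_command(registry, transcript: str) -> bool:
    """Activate the registered element whose label appears in the spoken transcript."""
    for label, action in registry.items():
        if label in transcript.lower():
            action()
            return True
    return False

if __name__ == "__main__":
    ui = [Button("Play", lambda: print("playing")),
          Button("Settings", lambda: print("opening settings"))]
    commands = register_interface(ui)   # registration requires no work from the application
    handle_voice_command(commands, "please open settings")
```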
  • Patent number: 8913189
    Abstract: Audio data and video data are processed to determine one or more audible events and visual events, respectively. Contemporaneous presentation of the video data with audio data may be synchronized based at least in part on the audible events and the visual events. Audio processing functions, such as filtering, may be initiated for audio data based at least in part on the visual events.
    Type: Grant
    Filed: March 8, 2013
    Date of Patent: December 16, 2014
    Assignee: Amazon Technologies, Inc.
    Inventors: Richard William Mincher, Todd Christopher Mason
  • Publication number: 20140365896
    Abstract: Described herein are frameworks, devices and methods configured for enabling display of facility information and content, in some cases via touch/gesture controlled interfaces. Embodiments of the invention have been particularly developed for allowing an operator to conveniently access a wide range of information relating to a facility via, for example, one or more wall mounted displays. While some embodiments will be described herein with particular reference to that application, it will be appreciated that the invention is not limited to such a field of use, and is applicable in broader contexts.
    Type: Application
    Filed: June 9, 2014
    Publication date: December 11, 2014
    Inventors: John D. Morrison, Peter C. Davis, Graeme Laycock
  • Patent number: 8908050
    Abstract: An imaging apparatus includes an imaging unit, a field angle change unit, and a movement detection unit. The imaging unit includes a lens that forms an image of a subject and acquires a picture image by taking the image formed by the lens. The field angle change unit changes a field angle of the picture image acquired by the imaging unit. The movement detection unit detects a movement of the imaging apparatus. The field angle change unit changes the field angle of the picture image in accordance with a moving direction of the imaging apparatus when the movement detection unit detects the movement of the imaging apparatus.
    Type: Grant
    Filed: November 23, 2010
    Date of Patent: December 9, 2014
    Assignee: Olympus Imaging Corp.
    Inventors: Tatsuya Kino, Yoshinori Matsuzawa, Osamu Nonaka
  • Publication number: 20140351700
    Abstract: An apparatus may comprise at least one processor-readable non-transitory storage medium and at least one processor in communication with the at least one storage medium. The at least one medium may comprise at least one set of instructions for changing an audio-visual effect of a user interface on the apparatus. The at least one processor may be configured to execute the at least one set of instructions to obtain operating data associated with at least one of an acceleration input and acoustic input from a sensor of the terminal device; determine whether the operating data meet a preset condition; and replace a current audio-visual effect of a user interface (UI) with a selected audio-visual effect when the operating data meet the preset conditions.
    Type: Application
    Filed: August 7, 2014
    Publication date: November 27, 2014
    Applicant: Tencent Technology (Shenzhen) Company Limited
    Inventors: Cheng FENG, Bo HU, Xi WANG, Ruiyi ZHOU, Zhipei WANG, Kai ZHANG, Xin QING, Huijiao YANG, Ying HUANG, Yulei LIU, Wei LI, Zhengkai XIE
  • Publication number: 20140344701
    Abstract: A system and method for image-based report correction for medical image software, which incorporates such report correction as part of the report generation process. The system and method feature a report generator, a report correction functionality, and medical image software providing medical image processing capabilities, allowing the doctor or other medical personnel to generate the report and, as part of the report generation process, have it checked by the report correction functionality.
    Type: Application
    Filed: May 15, 2014
    Publication date: November 20, 2014
    Inventor: Reuven R. Shreiber
  • Publication number: 20140337740
    Abstract: Provided herein is a method for selecting an object. The method for selecting an object according to an exemplary embodiment includes displaying a plurality of objects on a screen, recognizing a voice uttered by a user and tracking an eye of the user with respect to the screen, and selecting at least one object from among the plurality of objects on the screen based on the recognized user's voice and the tracked eye.
    Type: Application
    Filed: May 7, 2014
    Publication date: November 13, 2014
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Sung-hyuk KWON, Jae-yeop KIM, Jin-ha LEE, Christophe NAOURJEAN
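A minimal sketch of combining a tracked gaze point with a recognized word to pick an on-screen object, in the spirit of publication 20140337740; the scoring rule (label match first, gaze distance as tie-breaker) is an assumption, not Samsung's method.

```python
import math

def select_object(objects, gaze_xy, spoken_word):
    """Prefer objects whose label matches the voice; break ties by distance to the gaze point."""
    def score(obj):
        name_match = 0 if spoken_word.lower() in obj["label"].lower() else 1
        dist = math.dist(gaze_xy, obj["center"])
        return (name_match, dist)
    return min(objects, key=score)

if __name__ == "__main__":
    objs = [{"label": "Play button", "center": (100, 200)},
            {"label": "Pause button", "center": (300, 200)},
            {"label": "Playlist", "center": (120, 400)}]
    # Both "Play button" and "Playlist" match the utterance; gaze resolves the ambiguity.
    print(select_object(objs, gaze_xy=(110, 210), spoken_word="play")["label"])
```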
  • Publication number: 20140337741
    Abstract: A method includes determining, using signals captured from two or more microphones (1A) configured to detect an acoustic signal from one or more sound sources, one or more prominent sound sources based on the one or more sound sources (1C-1D). The method further includes determining one or more directions relative to a position of one or more of the two or more microphones for the one or more prominent sound sources (1B-1D). The method includes modifying one or more user interface elements displayed on a user interface of a display to provide an indication at least in part of the one or more directions, relative to position of at least one microphone, of the one or more prominent sound sources (1G).
    Type: Application
    Filed: November 21, 2012
    Publication date: November 13, 2014
    Applicant: Nokia Corporation
    Inventors: Erika Reponen, Ravi Shenoy, Mikko Tammi, Sampo Vesa
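A minimal sketch of a two-microphone direction estimate in the spirit of publication 20140337741, using cross-correlation to find the inter-microphone delay; the filing does not disclose a specific algorithm, so this is only illustrative, and the UI update is reduced to a print statement.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def estimate_direction(left, right, sample_rate, mic_spacing_m):
    """Return the bearing (degrees, 0 = straight ahead) of the dominant sound source."""
    max_lag = int(mic_spacing_m / SPEED_OF_SOUND * sample_rate) + 1
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = sum(left[i] * right[i - lag]
                    for i in range(max(lag, 0), min(len(left), len(right) + lag)))
        if score > best_score:
            best_lag, best_score = lag, score
    delay_s = best_lag / sample_rate
    sin_theta = max(-1.0, min(1.0, delay_s * SPEED_OF_SOUND / mic_spacing_m))
    return math.degrees(math.asin(sin_theta))

if __name__ == "__main__":
    # Fabricated test signal: the right channel lags the left by two samples.
    left = [0, 0, 1, 2, 1, 0, 0, 0, 0, 0]
    right = [0, 0, 0, 0, 1, 2, 1, 0, 0, 0]
    angle = estimate_direction(left, right, sample_rate=8000, mic_spacing_m=0.15)
    print(f"UI arrow would point roughly {angle:.0f} degrees off-axis")
```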
  • Patent number: 8887051
    Abstract: A method, system, and computer-readable product for positioning a virtual sound capturing device in a graphical user interface (GUI) are disclosed. The method includes displaying a virtual sound capturing device in relation to a virtual sound producing device in a three dimensional interface and in a two dimensional graphical map. Additionally, the method includes adjusting the display of the virtual sound capturing device in relation to the virtual sound producing device in both the three dimensional interface and the two dimensional graphical map in response to commands received from an input device.
    Type: Grant
    Filed: December 3, 2012
    Date of Patent: November 11, 2014
    Assignee: Apple Inc.
    Inventors: Markus Sapp, Kerstin Heitmann, Thorsten Quandt, Manfred Knauff, Marko Junghanns
  • Publication number: 20140325360
    Abstract: A display apparatus capable of performing an initial setting, and a control method thereof, are provided. The display apparatus includes an output unit configured to output a user interface (UI) which is controllable by a plurality of input modes, and a controller configured to set an input mode according to a user feedback type regarding the UI, and configured to output another UI which corresponds to the set input mode.
    Type: Application
    Filed: April 24, 2014
    Publication date: October 30, 2014
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Yeon-hee JUNG, Do-hyoung KIM, Se-jun PARK, Yoon-woo JUN, Joo-yeon CHO, Sun CHOI
  • Patent number: 8875026
    Abstract: The present invention is directed to directed communication in a virtual environment. A method for providing directed communication between avatars in a virtual environment in accordance with an embodiment of the present invention includes: determining a relative location of a first avatar and a second avatar in a virtual environment; and adjusting an aspect of a communication between the first avatar and the second avatar based on the relative location.
    Type: Grant
    Filed: May 1, 2008
    Date of Patent: October 28, 2014
    Assignee: International Business Machines Corporation
    Inventors: Rick A. Hamilton, II, John P. Karidis, Brian M. O'Connell, Clifford A. Pickover, Keith R. Walker
  • Patent number: 8869032
    Abstract: A method of defining a voice browser for browsing a plurality of voice sites, at least some of the voice sites having different telephone numbers, the voice sites being configured to be accessed by telephone, is provided including storing information relating to voice sites visited by a voice user; and providing forward and back functions, comprising transferring a user from one voice site to another, in response to commands by the user. Computer program code and systems are also provided.
    Type: Grant
    Filed: March 13, 2008
    Date of Patent: October 21, 2014
    Assignee: International Business Machines Corporation
    Inventors: Sheetal K. Agarwal, Dipanjan Chakraborty, Arun Kumar, Amit Anil Nanavati, Nitendra Rajput
  • Patent number: 8862985
    Abstract: A screen reader application for visually impaired users suppresses unwanted content that is output by Braille or text-to-speech. The invention accesses, but does not modify, the document object model of the web page and enumerates web page elements for the end user to either hide or skip to. The end user selections are saved as rules which may be applied according to various levels of scope, including web page specific, site specific, or web-wide. A screen magnification application for visually impaired users automatically sets the visual focus and magnification level on a web page element according to end-user selection.
    Type: Grant
    Filed: June 8, 2012
    Date of Patent: October 14, 2014
    Assignee: Freedom Scientific, Inc.
    Inventors: Robert Gallo, Glen Gordon
  • Patent number: 8862475
    Abstract: Speech-enabled content navigation and control of a distributed multimodal browser is disclosed, the browser providing an execution environment for a multimodal application, the browser including a graphical user agent (‘GUA’) and a voice user agent (‘VUA’), the GUA operating on a multimodal device, the VUA operating on a voice server, that includes: transmitting, by the GUA, a link message to the VUA, the link message specifying voice commands that control the browser and an event corresponding to each voice command; receiving, by the GUA, a voice utterance from a user, the voice utterance specifying a particular voice command; transmitting, by the GUA, the voice utterance to the VUA for speech recognition by the VUA; receiving, by the GUA, an event message from the VUA, the event message specifying a particular event corresponding to the particular voice command; and controlling, by the GUA, the browser in dependence upon the particular event.
    Type: Grant
    Filed: April 12, 2007
    Date of Patent: October 14, 2014
    Assignee: Nuance Communications, Inc.
    Inventors: Soonthorn Ativanichayaphong, Charles W. Cross, Jr., Gerald M. McCobb
  • Publication number: 20140304605
    Abstract: A system that acquires captured voice data corresponding to a spoken command; sequentially analyzes the captured voice data; causes a display to display a visual indication corresponding to the sequentially analyzed captured voice data; and performs a predetermined operation corresponding to the spoken command when it is determined that the sequential analysis of the captured voice data is complete.
    Type: Application
    Filed: March 11, 2014
    Publication date: October 9, 2014
    Applicant: SONY CORPORATION
    Inventors: Junki OHMURA, Michinari Kohno, Kenichi Okada
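A minimal sketch of the incremental-recognition feedback loop in publication 20140304605; the recognizer is stubbed as a word stream and the command table is invented for illustration.

```python
COMMANDS = {"turn up the volume": "volume_up", "next channel": "channel_next"}

def process_voice_stream(word_stream):
    """Display the partial analysis as it grows; act once a full command is recognized."""
    partial = []
    for word in word_stream:            # sequentially analyze the captured voice data
        partial.append(word)
        phrase = " ".join(partial)
        print(f"display: '{phrase}'")   # visual indication of the analysis so far
        if phrase in COMMANDS:          # analysis complete: a known command matched
            print(f"execute: {COMMANDS[phrase]}")
            return COMMANDS[phrase]
    return None

if __name__ == "__main__":
    process_voice_stream(["turn", "up", "the", "volume"])
```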
  • Publication number: 20140304606
    Abstract: An information processing device includes circuitry configured to cause first display information to be displayed in a first format. The circuitry also changes the first display information to be displayed in a second format in response to a voice being recognized. The information processing may also be accomplished with a method and via a non-transient computer readable storage device.
    Type: Application
    Filed: March 12, 2014
    Publication date: October 9, 2014
    Applicant: SONY CORPORATION
    Inventors: Junki OHMURA, Michinari Kohno, Kenichi Okada
  • Publication number: 20140298177
    Abstract: Example embodiments relate to processing user interactions with a computing device, comprising receiving a user-initiated action performed on a character button, the character button representing a character; determining whether the user-initiated action is performed in a normal or an abnormal operating manner. When a normal operating manner is determined, the character is displayed on a graphical display.
    Type: Application
    Filed: March 28, 2013
    Publication date: October 2, 2014
    Inventor: Vasan Sun
  • Publication number: 20140289633
    Abstract: The present disclosure provides a method and an electronic device for information processing. The electronic device comprises a sensing unit and a display unit having a display area. The display unit displays a graphical interface. The display area displays a first part of the graphical interface. The method comprises: detecting a first operation by the sensing unit when the display unit displays the first part of the graphical interface; displaying the second part of the graphical interface on the display unit in response to the first operation; detecting a second operation; determining whether a preset condition is satisfied during the detecting of the second operation to obtain first decision information; and displaying a speech control on the display unit when the first decision information indicates that the preset condition is satisfied during the detecting of the second operation.
    Type: Application
    Filed: March 20, 2014
    Publication date: September 25, 2014
    Applicants: LENOVO (BEIJING) LIMITED, BEIJING LENOVO SOFTWARE LTD.
    Inventors: Xu JIA, Xinru HOU, Qianying WANG, Gaoge WANG, Shifeng PENG, Yuanyi ZHANG, Jun CHEN
  • Publication number: 20140289632
    Abstract: According to an embodiment, a picture drawing support apparatus includes following components. The feature extractor extracts a feature amount from a picture drawn by a user. The speech recognition unit performs speech recognition on speech input by the user. The keyword extractor extracts at least one keyword from a result of the speech recognition. The image search unit retrieves one or more images corresponding to the at least one keyword from a plurality of images prepared in advance. The image selector selects an image which matches the picture, from the one or more images based on the feature amount. The image deformation unit deforms the image based on the feature amount to generate an output image. The presentation unit presents the output image.
    Type: Application
    Filed: March 4, 2014
    Publication date: September 25, 2014
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Masaru Suzuki, Masayuki Okamoto, Kenta Cho, Kosei Fume
  • Publication number: 20140282008
    Abstract: An interactive holographic display system includes a holographic generation module configured to display a holographically rendered anatomical image. A localization system is configured to define a monitored space on or around the holographically rendered anatomical image. One or more monitored objects have their position and orientation monitored by the localization system such that coincidence of spatial points between the monitored space and the one or more monitored objects triggers a response in the holographically rendered anatomical image.
    Type: Application
    Filed: October 15, 2012
    Publication date: September 18, 2014
    Inventors: Laurent Verard, Raymond Chan, Daniel Simon Anna Ruijters, Sander Hans Denissen, Sander Slegt
  • Publication number: 20140282007
    Abstract: Methods and systems are provided for diagnosing inadvertent activation of user interface settings on an electronic device. The electronic device receives a user input indicating that the user is having difficulty operating the electronic device. The device then determines whether a setting was changed on the device within a predetermined time period prior to receiving the user input. When a first setting was changed within the predetermined time period prior to receiving the user input, the device restores the changed setting to a prior setting.
    Type: Application
    Filed: March 3, 2014
    Publication date: September 18, 2014
    Applicant: APPLE INC.
    Inventor: Christopher B. FLEIZACH
  • Publication number: 20140282006
    Abstract: A method comprising generating, by a computer, a model of a website using user interaction primitives to represent hierarchical and hypertextual structures of the website; generating, by the computer, a linear aural flow of content of the website based upon the model and a set of user constraints; audibly presenting, by the computer, the linear aural flow of the content such that the linear aural flow of content is controlled through the use of user supplied primitives, wherein, the linear aural flow can be turned into a dynamic aural flow based upon the user supplied primitives.
    Type: Application
    Filed: September 11, 2013
    Publication date: September 18, 2014
    Inventor: Davide Bolchini
  • Publication number: 20140282005
    Abstract: Incoming messages, like incoming wounded on the battlefield, can be initially sorted into groups e.g. a) those which can be or should be treated immediately, b) those which can be treated later, and c) those which should not be treated. Like in a triage unit on a battlefield, it is useful to reduce the amount of effort and increase the speed at which this sort takes place. The present invention allows the user's effort to sort to be reduced to a minimum, with a consequent increase in speed.
    Type: Application
    Filed: March 15, 2013
    Publication date: September 18, 2014
    Inventor: Howard Gutowitz
  • Publication number: 20140281995
    Abstract: A method of controlling a mobile terminal, and which includes displaying an application screen of an executing application and a corresponding keypad on a display of the mobile terminal; modifying, via a controller of the mobile terminal, the keypad into a new keypad arrangement; displaying, via the controller, a display window in a vacant space created by the modification of the keypad, wherein a function of the display window is automatically selected based on a type of the executing application; and inputting text on the displayed application screen through the display window and modified keypad.
    Type: Application
    Filed: January 29, 2014
    Publication date: September 18, 2014
    Applicant: LG ELECTRONICS INC.
    Inventors: Mina KIM, Byoungjoo KWAK, Hosung SONG, Keansub LEE
  • Publication number: 20140282259
    Abstract: Navigating through objects or items in a display device using a first input device to detect pointing of a user's finger to an object or item and using a second input device to receive the user's indication on the selection of the object or the item. An image of the hand is captured by the first input device and is processed to determine a location on the display device corresponding to the location of the fingertip of the pointing finger. The object or the item corresponding to the location of the fingertip is selected after the second input device receives predetermined user input from the second input device.
    Type: Application
    Filed: February 27, 2014
    Publication date: September 18, 2014
    Applicant: Honda Motor Co., Ltd.
    Inventors: Kikuo Fujimura, Victor Ng-Thow-Hing, Behzad Dariush
  • Patent number: 8832596
    Abstract: Input sources to a display, such as a television, are depicted with a first indicator if active, such as active with a signal having visual information, and are depicted with a second indicator if inactive, such as lacking an active signal. A selection module allows a user of the display to select an active source but precludes selection of an inactive source. For instance, a user is allowed to select highlighted input sources that are active but not allowed to select grayed out input sources that are inactive. An override allows the user to present all input sources with the first indicator, whether or not the input sources are active.
    Type: Grant
    Filed: May 23, 2005
    Date of Patent: September 9, 2014
    Assignee: Dell Products L.P.
    Inventors: Jeffrey Stephens, Andrew G. Habas
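A minimal sketch of the source-selection rule in patent 8832596; the input names, indicator strings, and the override flag are illustrative only.

```python
def render_source_menu(sources, show_all_as_active=False):
    """Active sources are selectable; inactive ones are greyed out unless overridden."""
    menu = []
    for name, is_active in sources.items():
        selectable = is_active or show_all_as_active
        indicator = "active" if selectable else "greyed-out"
        menu.append((name, indicator, selectable))
    return menu

def select_source(menu, choice):
    for name, _, selectable in menu:
        if name == choice:
            return name if selectable else None   # selection of inactive sources is precluded
    return None

if __name__ == "__main__":
    sources = {"HDMI 1": True, "HDMI 2": False, "VGA": False}
    print(select_source(render_source_menu(sources), "HDMI 2"))                           # None
    print(select_source(render_source_menu(sources, show_all_as_active=True), "HDMI 2"))  # override
```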
  • Patent number: 8831955
    Abstract: Methods and arrangements for facilitating tangible interactions in voice applications. At least two tangible objects are provided, along with a measurement interface. The at least two tangible objects are disposed to each be displaceable with respect to one another and with respect to the measurement interface. The measurement interface is communicatively connected with a voice application. At least one of the two tangible objects is displaced with respect to the measurement interface, and the displacement of at least one of the at least two tangible objects is converted to input for the voice application.
    Type: Grant
    Filed: August 31, 2011
    Date of Patent: September 9, 2014
    Assignee: International Business Machines Corporation
    Inventors: Nitendra Rajput, Shrey Sahay, Saurabh Srivastava, Kundan Shrivastava
  • Patent number: 8826133
    Abstract: A system and method are disclosed for providing improved 3D sound experience to a user. The sound generation layer is customizable to allow the user and/or application provider to modify the internal rules the sound generation layer uses to render sounds, to amplify sounds that fall below a pre-set or user-set volume level, and to specifically amplify/soften certain sounds (such as game specific sounds like gunfire or footsteps), or specific frequencies of sounds. A graphical user interface can communicate with the sound generation layer to handle any or all of the above, so that a lay user can easily adjust these settings without having to understand the underlying algorithms.
    Type: Grant
    Filed: March 6, 2006
    Date of Patent: September 2, 2014
    Assignee: Razer (Asia-Pacific) Pte. Ltd.
    Inventors: Chern Ann Ng, Min-Liang Tan
  • Patent number: 8826137
    Abstract: A screen reader software product for low-vision users, the software having a reader module collecting textual and non-textual display information generated by a web browser or word processor. Font styling, interface layout information and the like are communicated to the end user by sounds broadcast simultaneously rather than serially with the synthesized speech to improve the speed and efficiency in which information may be digested by the end user.
    Type: Grant
    Filed: August 12, 2004
    Date of Patent: September 2, 2014
    Assignee: Freedom Scientific, Inc.
    Inventors: Christian D. Hofstader, Glen Gordon, Eric Damery, Ralph Ocampo, David Baker, Joseph K. Stephen