Audio Input For On-screen Manipulation (e.g., Voice Controlled GUI) Patents (Class 715/728)
-
Publication number: 20150067516
Abstract: A wearable electronic device including a wireless communication unit configured to be wirelessly connected to a projector for projecting a stored presentation onto a screen of an external device; a main body configured to be worn by a user; a microphone integrally connected to the main body; a display unit configured to be attached to the main body; and a controller configured to match voice information input through the microphone with corresponding contents of the stored presentation, and display at least a following portion of content that follows the corresponding contents on the display unit.
Type: Application
Filed: February 21, 2014
Publication date: March 5, 2015
Applicant: LG ELECTRONICS INC.
Inventors: Jongseok PARK, Sanghyuck LEE
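As an illustration of the matching step this abstract describes, the sketch below pairs a transcribed utterance with the closest stored presentation portion and returns the portion that follows it. The difflib similarity heuristic, the data layout, and the function name are assumptions for illustration, not the patent's method.

```python
# A minimal sketch, assuming the presentation is stored as a list of text
# portions and the spoken audio has already been transcribed.
from difflib import SequenceMatcher

def portion_following_speech(recognized_text: str, presentation_parts: list[str]) -> str | None:
    """Return the presentation portion that follows the best-matching portion."""
    scores = [
        SequenceMatcher(None, recognized_text.lower(), part.lower()).ratio()
        for part in presentation_parts
    ]
    best = max(range(len(scores)), key=scores.__getitem__)
    # Show the next portion on the wearable's display, if one exists.
    return presentation_parts[best + 1] if best + 1 < len(presentation_parts) else None

parts = ["Welcome and agenda", "Q3 revenue summary", "Roadmap for next year"]
print(portion_following_speech("now let's look at the q3 revenue summary", parts))
```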
-
ELECTRONIC DEVICE SUPPORTING MUSIC PLAYING FUNCTION AND METHOD FOR CONTROLLING THE ELECTRONIC DEVICE
Publication number: 20150067517
Abstract: An electronic device and a method for controlling the electronic device are provided. The method includes receiving at least one input sound from a user, determining one of a plurality of reference sounds included in a guide track as a device playing sound corresponding to the at least one input sound, and playing the device playing sound.
Type: Application
Filed: August 25, 2014
Publication date: March 5, 2015
Inventors: Hae-Seok OH, Jeong-Yeon KIM, Dae-Beom PARK, Lae-Hyuk BANG, Chul-Hyung YANG, Ji-Woong OH, Gyu-Cheol CHOI
-
Publication number: 20150067515
Abstract: An electronic device, a controlling method of a screen, and a program storage medium thereof are provided. The screen includes a display panel and a touch-sensitive panel. The display panel shows a root window on which all display contents are shown. The controlling method comprises the following steps. A command signal is received. The coordinate system of the screen is transformed with a transformation according to the command signal.
Type: Application
Filed: August 27, 2013
Publication date: March 5, 2015
Applicant: Industrial Technology Research Institute
Inventor: Chia-Ming CHANG
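The abstract leaves the transformation unspecified; the sketch below assumes a rotation about the screen centre purely to illustrate remapping touch coordinates under a command-driven transform of the coordinate system.

```python
# A minimal sketch: the "command signal" is modeled as a rotation angle.
import math

def transform_touch_point(x: float, y: float, width: float, height: float,
                          angle_deg: float) -> tuple[float, float]:
    """Map a touch coordinate into a coordinate system rotated about the centre."""
    cx, cy = width / 2.0, height / 2.0
    theta = math.radians(angle_deg)
    dx, dy = x - cx, y - cy
    rx = dx * math.cos(theta) - dy * math.sin(theta)
    ry = dx * math.sin(theta) + dy * math.cos(theta)
    return rx + cx, ry + cy

# A tap at (100, 20) on a 400x300 screen after a 90-degree "rotate" command.
print(transform_touch_point(100, 20, 400, 300, 90))
```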
-
Patent number: 8966365
Abstract: An apparatus, system, and method are disclosed for an information processing apparatus capable of allowing a user to select appropriate processing beforehand when an application program outputs sound in a state in which an audio device is silenced. The apparatus in one embodiment includes a silencing module for silencing audio information output from an audio device, a detection module for detecting a sound playback request from an application program while silencing is set, a display module for displaying a select screen for allowing a user to select processing when the sound playback request from the application program is detected by the detection module, and a processing module for executing the processing selected by the user on the select screen.
Type: Grant
Filed: May 12, 2011
Date of Patent: February 24, 2015
Assignee: Lenovo (Singapore) Pte. Ltd.
Inventors: Koutaroh Maki, Mikio Hagiwara
-
Publication number: 20150046825
Abstract: The present disclosure involves a method of improving one-handed operation of a mobile computing device. A first visual content is displayed on a screen of the mobile computing device. The first visual content occupies a substantial entirety of a viewable area of the screen. While the first visual content is being displayed, an action performed by a user to the mobile computing device is detected. The first visual content is scaled down in response to the detected action and displayed on the screen. The scaled-down first visual content occupies a fraction of the viewable area of the screen. A user interaction with the scaled-down first visual content is then detected. In response to the user interaction, a second visual content is displayed on the screen. The second visual content is different from the first visual content and occupies a substantial entirety of the viewable area of the screen.
Type: Application
Filed: August 8, 2013
Publication date: February 12, 2015
Inventor: Eric Qing Li
-
Publication number: 20150040012
Abstract: Techniques described herein provide a computing device configured to provide an indication that the computing device has recognized a voice-initiated action. In one example, a method is provided for outputting, by a computing device and for display, a speech recognition graphical user interface (GUI) having at least one element in a first visual format. The method further includes receiving, by the computing device, audio data and determining, by the computing device, a voice-initiated action based on the audio data. The method also includes outputting, while receiving additional audio data and prior to executing a voice-initiated action based on the audio data, and for display, an updated speech recognition GUI in which the at least one element is displayed in a second visual format, different from the first visual format, to indicate that the voice-initiated action has been identified.
Type: Application
Filed: December 17, 2013
Publication date: February 5, 2015
Applicant: Google Inc.
Inventors: Alexander Faaborg, Peter Ng
-
Publication number: 20150033129
Abstract: A mobile terminal including a camera; a display unit configured to display an image input through the camera; and a controller configured to display at least one user-defined icon corresponding to linked image-setting information, receive a touch signal indicating a touch is applied to a corresponding user-defined icon, and control the camera to capture the image based on image-setting information linked to the corresponding user-defined icon in response to the received touch signal.
Type: Application
Filed: June 9, 2014
Publication date: January 29, 2015
Inventors: Kyungmin CHO, Jeonghyun LEE, Minah SONG
-
Publication number: 20150033128
Abstract: A multi-dimensional surgical safety countermeasure system and method for using automated checklists to provide information to surgical staff in a surgical procedure. The system and method involve using checklists and receiving commands through the prompts of the checklists to update the information displayed on the display to guide the performance of a medical procedure.
Type: Application
Filed: July 24, 2013
Publication date: January 29, 2015
Inventors: Steve Curd, Mark Heinemeyer, Victor Culafic
-
Publication number: 20150033130
Abstract: A computing device detects a user viewing the computing device and outputs a cue if the user is detected to view the computing device. The computing device receives an audio input from the user if the user continues to view the computing device for a predetermined amount of time.
Type: Application
Filed: April 27, 2012
Publication date: January 29, 2015
Applicant: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.
Inventor: Evan Scheessele
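A minimal sketch of the gaze-dwell gating described above, assuming a polling loop, a two-second dwell, and stubbed callbacks for the cue and audio capture; none of these specifics come from the publication.

```python
import time

DWELL_SECONDS = 2.0  # the "predetermined amount of time"

def run_gaze_gate(is_user_looking, emit_cue, start_audio_capture, poll_interval=0.1):
    """Emit a cue when the user looks, then open audio capture after the dwell time."""
    gaze_started = None
    cued = False
    while True:
        if is_user_looking():
            if gaze_started is None:
                gaze_started = time.monotonic()
            if not cued:
                emit_cue()  # e.g. light an LED or play a chime
                cued = True
            if time.monotonic() - gaze_started >= DWELL_SECONDS:
                start_audio_capture()
                return
        else:
            gaze_started, cued = None, False  # gaze broken: reset the dwell timer
        time.sleep(poll_interval)

# Stubbed example: the "camera" always sees the user, so capture starts after 2 s.
run_gaze_gate(lambda: True, lambda: print("cue"), lambda: print("listening"))
```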
-
Patent number: 8943411
Abstract: A system, method, and computer program product are provided for displaying controls to a user. In use, input is received from a user. Additionally, a location of the user is determined with respect to a display, utilizing the input. Further, one or more controls are positioned on the display, based on the location of the user.
Type: Grant
Filed: March 6, 2012
Date of Patent: January 27, 2015
Assignee: Amdocs Software Systems Limited
Inventor: Matthew Davis Hill
-
Publication number: 20150026580
Abstract: A system of communicating between first and second electronic devices comprises, in a first device, receiving, from a second device, voice representative information acquired by the second device, and connection information indicating characteristics of communication to be used in establishing a communication link with the second device. The system compares the voice representative information with predetermined reference voice representative information and, in response to the comparison, establishes a communication link with the second device by using the connection information received from the second device.
Type: Application
Filed: July 1, 2014
Publication date: January 22, 2015
Inventors: Hyuk KANG, Kyung-tae KIM, Seong-min JE
-
Publication number: 20150026579
Abstract: The disclosed embodiments illustrate methods and systems for processing one or more crowdsourced tasks. The method comprises converting an audio input received from a crowdworker to one or more phrases by one or more processors in at least one computing device. The audio input is at least a response to a crowdsourced task. A mode of the audio input is selected based on one or more parameters associated with the crowdworker. Thereafter, the one or more phrases are presented on a display of the at least one computing device by the one or more processors. Finally, one of the one or more phrases is selected by the crowdworker as a correct response to the crowdsourced task.
Type: Application
Filed: July 16, 2013
Publication date: January 22, 2015
Inventor: Shailesh Vaya
-
Patent number: 8938676
Abstract: A method of enabling a user to adjust at least first and second control parameters for controlling an electronic system includes displaying a coordinate system on a display screen, where a first coordinate represents a range of values of the first control parameter, and a second coordinate represents a range of values of the second control parameter. The method further includes visually indicating a position in the coordinate system corresponding to a currently selected combination of values of the first and second control parameters, and enabling the user to select a new combination of values of the first and second control parameters by indicating a position within the coordinate system.
Type: Grant
Filed: March 12, 2004
Date of Patent: January 20, 2015
Assignee: Koninklijke Philips N.V.
Inventors: Leon Maria Van De Kerkhof, Mykola Ostrovskyy, Arnoldus Werner Johannes Oomen
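A minimal sketch of mapping a picked position in the on-screen coordinate system to a pair of control parameter values; the parameter names, ranges, and screen-pixel convention are illustrative assumptions, not taken from the patent.

```python
# One axis per control parameter; a position inside the control area selects both values.
def position_to_parameters(px, py, area_w, area_h,
                           x_range=(-12.0, 12.0), y_range=(-12.0, 12.0)):
    """Map a pixel position inside the control area to (param_x, param_y)."""
    fx = min(max(px / area_w, 0.0), 1.0)
    fy = min(max(py / area_h, 0.0), 1.0)
    param_x = x_range[0] + fx * (x_range[1] - x_range[0])
    param_y = y_range[0] + (1.0 - fy) * (y_range[1] - y_range[0])  # y grows downwards on screens
    return param_x, param_y

print(position_to_parameters(150, 50, 200, 200))  # e.g. a bass/treble pair in dB
```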
-
Publication number: 20150019974
Abstract: There is provided an information processing device including a processor configured to realize an address term definition function of defining an address term for at least a partial region of an image to be displayed on a display, a display control function of displaying the image on the display and temporarily displaying the address term on the display in association with the region, a voice input acquisition function of acquiring a voice input for the image, and a command issuing function of issuing a command relevant to the region when the address term is included in the voice input.
Type: Application
Filed: May 9, 2014
Publication date: January 15, 2015
Applicant: Sony Corporation
Inventors: Shouichi DOI, Yoshiki TAKEOKA, Masayuki TAKADA
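A minimal sketch of the address-term flow, assuming a simple dictionary from terms to image regions and substring matching on the voice input; the zoom command and all names are placeholders rather than the publication's implementation.

```python
regions = {}  # address term -> (x, y, w, h) region of the displayed image

def define_address_term(term, region):
    regions[term.lower()] = region
    print(f'Showing label "{term}" over {region}')  # temporary on-screen label

def issue_region_command(term, region):
    print(f'Zooming into region {region} addressed as "{term}"')

def handle_voice_input(utterance):
    for term, region in regions.items():
        if term in utterance.lower():
            issue_region_command(term, region)

define_address_term("left goal", (0, 120, 80, 60))
handle_voice_input("replay what happened at the left goal")
```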
-
Publication number: 20150019975
Abstract: Described herein are technologies pertaining to transmitting electronic contact data from a first application to a second application by way of an operating system without generating a centralized contact store or providing the second application with programmatic access to all electronic contact data retained by the first application.
Type: Application
Filed: July 28, 2014
Publication date: January 15, 2015
Inventors: John Morrow, Neil Pankey, Michael Farnsworth, Ashish Bangale
-
Patent number: 8935166
Abstract: Some embodiments disclosed herein store a target application and a dictation application. The target application may be configured to receive input from a user. The dictation application interface may include a full overlay mode option, where in response to selection of the full overlay mode option, the dictation application interface is automatically sized and positioned over the target application interface to fully cover a text area of the target application interface to appear as if the dictation application interface is part of the target application interface. The dictation application may be further configured to receive an audio dictation from the user, convert the audio dictation into text, provide the text in the dictation application interface, and, in response to receiving a first user command to complete the dictation, automatically copy the text from the dictation application interface and insert the text into the target application interface.
Type: Grant
Filed: October 16, 2013
Date of Patent: January 13, 2015
Assignee: Dolbey & Company, Inc.
Inventors: Curtis A. Weeks, Aaron G. Weeks, Stephen E. Barton
-
Publication number: 20150012829
Abstract: A computer implemented method and an apparatus for facilitating voice user interface (VUI) design are provided. The method comprises identifying a plurality of user intentions from user interaction data. The method further comprises associating each user intention with at least one feature from among a plurality of features. One or more features from among the plurality of features are extracted from natural language utterances associated with the user interaction data. Further, the method comprises computing a plurality of distance metrics corresponding to pairs of user intentions from among the plurality of user intentions. A distance metric is computed for each pair of user intentions from among the pairs of user intentions. Furthermore, the method comprises generating a plurality of clusters based on the plurality of distance metrics. Each cluster comprises a set of user intentions. The method further comprises provisioning a VUI design recommendation based on the plurality of clusters.
Type: Application
Filed: June 30, 2014
Publication date: January 8, 2015
Inventors: Kathy L. BROWN, Vaibhav SRIVASTAVA
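A minimal sketch of the pipeline this abstract outlines, assuming intentions are represented as feature sets, Jaccard distance stands in for the unspecified distance metric, and greedy single-link grouping stands in for the unspecified clustering step.

```python
from itertools import combinations

# Each user intention is associated with features extracted from utterances.
intentions = {
    "check balance": {"account", "balance"},
    "account summary": {"account", "balance", "summary"},
    "reset password": {"password", "reset"},
}

def jaccard_distance(a, b):
    return 1.0 - len(a & b) / len(a | b)

# One distance per pair of intentions.
distances = {
    (i, j): jaccard_distance(intentions[i], intentions[j])
    for i, j in combinations(intentions, 2)
}

# Greedy single-link clustering: merge intentions whose distance is below a threshold.
clusters = [{name} for name in intentions]
for (i, j), d in distances.items():
    if d < 0.5:
        ci = next(c for c in clusters if i in c)
        cj = next(c for c in clusters if j in c)
        if ci is not cj:
            ci |= cj
            clusters.remove(cj)

print(clusters)  # each cluster could back one VUI design recommendation
```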
-
Patent number: 8930342
Abstract: Multidimensional search capabilities are enabled on a non-PC (personal computer) device being utilized by a user. An original query submitted by the user via the non-PC device is received. A structured data repository is accessed to extract structured data that is available for the original query, where the extracted structured data represents attributes of the original query. The extracted structured data is provided to the user in the form of a hierarchical menu which allows the user to interactively modify the original query, such modification resulting in a revised query.
Type: Grant
Filed: April 14, 2014
Date of Patent: January 6, 2015
Assignee: Microsoft Corporation
Inventors: Johnson Apacible, Mark Encarnacion, Aleksey Sinyagin
-
Patent number: 8924856
Abstract: Provided is a method of providing a slide show. The method includes determining whether a first image to be displayed is an image photographed in a continuous photographing mode, when the first image is an image photographed in the continuous photographing mode, displaying the first image for a first time interval, and when the first image is not an image photographed in the continuous photographing mode, displaying the first image for a second time interval, wherein the first time interval and the second time interval are different from each other. The first time interval may be shorter than the second time interval.
Type: Grant
Filed: January 7, 2010
Date of Patent: December 30, 2014
Assignee: Samsung Electronics Co., Ltd.
Inventor: Jae-myung Lee
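A minimal sketch of the interval rule, assuming burst-mode membership is carried as a metadata flag and using arbitrary interval values; both are illustrative assumptions.

```python
import time

BURST_INTERVAL = 0.3   # first time interval (seconds), for continuous-mode images
NORMAL_INTERVAL = 3.0  # second, longer time interval

def play_slide_show(images, display):
    for image in images:
        display(image["name"])
        interval = BURST_INTERVAL if image.get("continuous_mode") else NORMAL_INTERVAL
        time.sleep(interval)

play_slide_show(
    [{"name": "IMG_001", "continuous_mode": True}, {"name": "IMG_002"}],
    display=print,
)
```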
-
Publication number: 20140380170
Abstract: A method for receiving processed information at a remote device is described. The method includes transmitting from the remote device a verbal request to a first information provider and receiving a digital message from the first information provider in response to the transmitted verbal request. The digital message includes a symbolic representation indicator associated with a symbolic representation of the verbal request and data used to control an application. The method also includes transmitting, using the application, the symbolic representation indicator to a second information provider for generating results to be displayed on the remote device.
Type: Application
Filed: September 5, 2014
Publication date: December 25, 2014
Inventors: Gudmundur Hafsteinsson, Michael J. LeBeau, Natalia Marmasse, Sumit Agarwal, Dipchand Nishar
-
Publication number: 20140380169
Abstract: Disclosed are methods for disambiguating an input phrase or group of words. An implementation may include receiving a phrase as an input to a processor. The received phrase may be presented on a display device. The received phrase may be determined to be ambiguous based on a threshold uncertainty in either a definition or a pronunciation related to the phrase. An indication may be provided that a word in the phrase is the cause of the ambiguity. A menu of words may be presented, each word incorporating at least one diacritic mark into a word of the received phrase to disambiguate the received phrase. A word from the menu of words may be selected and presented on the display device.
Type: Application
Filed: June 20, 2013
Publication date: December 25, 2014
Inventor: Mohamed S. Eldawy
-
Publication number: 20140372892
Abstract: Embodiments of the present invention automatically register user interfaces with a voice control system. Registering the interface allows interactive elements within the interface to be controlled by a user's voice. A voice control system analyzes audio including voice commands spoken by a user and manipulates the user interface in response. The automatic registration of a user interface with a voice control system allows a user interface to be voice controlled without the developer of the application associated with the interface having to do anything. Embodiments of the invention allow an application's interface to be voice controlled without the application needing to account for states of the voice control system.
Type: Application
Filed: June 18, 2013
Publication date: December 18, 2014
Inventors: GERSHOM LOUIS PAYZER, NICHOLAS DORIAN RAPP, NALIN SINGAL, LAWRENCE WAYNE OLSON
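A minimal sketch of automatic registration, assuming the application's interface is available as a traversable element tree; the dictionary-based element shape and the recognizer hook are illustrative, not the publication's mechanism.

```python
def register_interface(root, command_map):
    """Recursively register every interactive element's label as a voice command."""
    if root.get("interactive") and root.get("label"):
        command_map[root["label"].lower()] = root["activate"]
    for child in root.get("children", []):
        register_interface(child, command_map)

def on_utterance(text, command_map):
    """Trigger the registered element, if any, matching the recognized utterance."""
    action = command_map.get(text.lower())
    if action:
        action()

ui = {"children": [
    {"label": "Play", "interactive": True, "activate": lambda: print("playing")},
    {"label": "Pause", "interactive": True, "activate": lambda: print("paused")},
]}
commands = {}
register_interface(ui, commands)   # the application itself does nothing here
on_utterance("play", commands)
```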
-
Patent number: 8913189
Abstract: Audio data and video data are processed to determine one or more audible events and visual events, respectively. Contemporaneous presentation of the video data with audio data may be synchronized based at least in part on the audible events and the visual events. Audio processing functions, such as filtering, may be initiated for audio data based at least in part on the visual events.
Type: Grant
Filed: March 8, 2013
Date of Patent: December 16, 2014
Assignee: Amazon Technologies, Inc.
Inventors: Richard William Mincher, Todd Christopher Mason
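The abstract does not say how the events are aligned; the sketch below assumes already-detected, matched event lists and uses a median offset as one plausible way to estimate the audio/video shift.

```python
from statistics import median

def estimate_av_offset(audible_event_times, visual_event_times):
    """Estimate how far audio lags (positive) or leads (negative) video, in seconds."""
    pairs = zip(sorted(audible_event_times), sorted(visual_event_times))
    return median(a - v for a, v in pairs)

audio_events = [1.32, 4.05, 7.81]   # e.g. detected claps
video_events = [1.10, 3.85, 7.60]   # e.g. detected hand-contact frames
print(f"shift audio by {-estimate_av_offset(audio_events, video_events):+.2f}s")
```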
-
Publication number: 20140365896
Abstract: Described herein are frameworks, devices and methods configured for enabling display for facility information and content, in some cases via touch/gesture controlled interfaces. Embodiments of the invention have been particularly developed for allowing an operator to conveniently access a wide range of information relating to a facility via, for example, one or more wall mounted displays. While some embodiments will be described herein with particular reference to that application, it will be appreciated that the invention is not limited to such a field of use, and is applicable in broader contexts.
Type: Application
Filed: June 9, 2014
Publication date: December 11, 2014
Inventors: John D. Morrison, Peter C. Davis, Graeme Laycock
-
Patent number: 8908050
Abstract: An imaging apparatus includes an imaging unit, a field angle change unit, and a movement detection unit. The imaging unit includes a lens that forms an image of a subject and acquires a picture image by taking the image formed by the lens. The field angle change unit changes a field angle of the picture image acquired by the imaging unit. The movement detection unit detects a movement of the imaging apparatus. The field angle change unit changes the field angle of the picture image in accordance with a moving direction of the imaging apparatus when the movement detection unit detects the movement of the imaging apparatus.
Type: Grant
Filed: November 23, 2010
Date of Patent: December 9, 2014
Assignee: Olympus Imaging Corp.
Inventors: Tatsuya Kino, Yoshinori Matsuzawa, Osamu Nonaka
-
Publication number: 20140351700
Abstract: An apparatus may comprise at least one processor-readable non-statutory storage medium and at least one processor in communication with the at least one storage medium. The at least one medium may comprise at least one set of instructions for changing an audio-visual effect of a user interface on the apparatus. The at least one processor may be configured to execute the at least one set of instructions to obtain operating data associated with at least one of an acceleration input and acoustic input from a sensor of the terminal device; determine whether the operating data meet a preset condition; and replace a current audio-visual effect of a user interface (UI) with a selected audio-visual effect when the operating data meet the preset conditions.
Type: Application
Filed: August 7, 2014
Publication date: November 27, 2014
Applicant: Tencent Technology (Shenzhen) Company Limited
Inventors: Cheng FENG, Bo HU, Xi WANG, Ruiyi ZHOU, Zhipei WANG, Kai ZHANG, Xin QING, Huijiao YANG, Ying HUANG, Yulei LIU, Wei LI, Zhengkai XIE
-
Publication number: 20140344701
Abstract: A system and method for image based report correction for medical image software, which incorporates such report correction as part of the report generation process. Such a system and method feature a report generator, a report correction functionality, and some type of medical image software for providing medical image processing capabilities, which allows the doctor or other medical personnel to generate the report and, as part of the report generation process, to have it checked by the report correction functionality.
Type: Application
Filed: May 15, 2014
Publication date: November 20, 2014
Inventor: Reuven R. Shreiber
-
Publication number: 20140337740
Abstract: Provided herein is a method for selecting an object. The method for selecting an object according to an exemplary embodiment includes displaying a plurality of objects on a screen, recognizing a voice uttered by a user and tracking an eye of the user with respect to the screen, and selecting at least one object from among the plurality of objects on the screen based on the recognized user's voice and the tracked eye.
Type: Application
Filed: May 7, 2014
Publication date: November 13, 2014
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Sung-hyuk KWON, Jae-yeop KIM, Jin-ha LEE, Christophe NAOURJEAN
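A minimal sketch of combining the two inputs, assuming the recognized word narrows the candidates and the tracked gaze point breaks ties by proximity; the scoring rule and object layout are illustrative choices, not the publication's method.

```python
import math

objects = [
    {"name": "play", "center": (120, 400)},
    {"name": "pause", "center": (300, 400)},
    {"name": "play", "center": (520, 80)},   # two objects deliberately share a label
]

def select_object(recognized_word, gaze_point):
    """Pick the object matching the spoken word that lies closest to the gaze point."""
    candidates = [o for o in objects if o["name"] == recognized_word.lower()] or objects
    return min(candidates, key=lambda o: math.dist(o["center"], gaze_point))

print(select_object("Play", (500, 100))["center"])   # gaze disambiguates the two "play" objects
```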
-
Publication number: 20140337741
Abstract: A method includes determining, using signals captured from two or more microphones (1A) configured to detect an acoustic signal from one or more sound sources, one or more prominent sound sources based on the one or more sound sources (1C-1D). The method further includes determining one or more directions relative to a position of one or more of the two or more microphones for the one or more prominent sound sources (1B-1D). The method includes modifying one or more user interface elements displayed on a user interface of a display to provide an indication at least in part of the one or more directions, relative to position of at least one microphone, of the one or more prominent sound sources (1G).
Type: Application
Filed: November 21, 2012
Publication date: November 13, 2014
Applicant: Nokia Corporation
Inventors: Erika Reponen, Ravi Shenoy, Mikko Tammi, Sampo Vesa
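A minimal sketch of estimating a direction from two microphone signals using a time-difference-of-arrival cross-correlation, one common approach; the sample rate, microphone spacing, and far-field angle formula are assumptions, and the publication does not specify this method.

```python
import numpy as np

SAMPLE_RATE = 48_000    # Hz
MIC_SPACING = 0.1       # metres between the two microphones
SPEED_OF_SOUND = 343.0  # m/s

def direction_of_arrival(left: np.ndarray, right: np.ndarray) -> float:
    """Return the source angle in degrees (0 = broadside, sign toward either mic)."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)          # samples by which the left signal lags
    delay = lag / SAMPLE_RATE
    sin_theta = np.clip(delay * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))

# Synthetic test: the right channel is the left channel delayed by 5 samples.
t = np.arange(1024)
left = np.sin(2 * np.pi * 440 * t / SAMPLE_RATE)
right = np.roll(left, 5)
print(round(direction_of_arrival(left, right), 1))    # the UI element would point this way
```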
-
Patent number: 8887051
Abstract: A method, system, and computer-readable product for positioning a virtual sound capturing device in a graphical user interface (GUI) are disclosed. The method includes displaying a virtual sound capturing device in relation to a virtual sound producing device in a three dimensional interface and in a two dimensional graphical map. Additionally, the method includes adjusting the display of the virtual sound capturing device in relation to the virtual sound producing device in both the three dimensional interface and the two dimensional graphical map in response to commands received from an input device.
Type: Grant
Filed: December 3, 2012
Date of Patent: November 11, 2014
Assignee: Apple Inc.
Inventors: Markus Sapp, Kerstin Heitmann, Thorsten Quandt, Manfred Knauff, Marko Junghanns
-
Publication number: 20140325360
Abstract: A display apparatus which is capable of performing an initial setting, and a control method thereof, are provided. The display apparatus includes an output unit configured to output a user interface (UI) which is controllable by a plurality of input modes, and a controller configured to set an input mode according to a user feedback type regarding the UI, and configured to output another UI which corresponds to the set input mode.
Type: Application
Filed: April 24, 2014
Publication date: October 30, 2014
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Yeon-hee JUNG, Do-hyoung KIM, Se-jun PARK, Yoon-woo JUN, Joo-yeon CHO, Sun CHOI
-
Patent number: 8875026
Abstract: The present invention is directed to directed communication in a virtual environment. A method for providing directed communication between avatars in a virtual environment in accordance with an embodiment of the present invention includes: determining a relative location of a first avatar and a second avatar in a virtual environment; and adjusting an aspect of a communication between the first avatar and the second avatar based on the relative location.
Type: Grant
Filed: May 1, 2008
Date of Patent: October 28, 2014
Assignee: International Business Machines Corporation
Inventors: Rick A. Hamilton, II, John P. Karidis, Brian M. O'Connell, Clifford A. Pickover, Keith R. Walker
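The "aspect" being adjusted is left open by the abstract; the sketch below assumes it is received volume and stereo pan, derived from the distance and bearing between the two avatars. The falloff curve and panning rule are illustrative, not the patent's.

```python
import math

def directed_audio(listener_pos, listener_facing_deg, speaker_pos, max_range=30.0):
    """Return (volume, pan) for a communication based on the avatars' relative location."""
    dx = speaker_pos[0] - listener_pos[0]
    dy = speaker_pos[1] - listener_pos[1]
    distance = math.hypot(dx, dy)
    volume = max(0.0, 1.0 - distance / max_range)                 # linear falloff with distance
    bearing = math.degrees(math.atan2(dy, dx))                    # absolute direction to speaker
    pan = math.sin(math.radians(listener_facing_deg - bearing))   # -1 = left, +1 = right
    return volume, pan

# Listener at the origin facing +y, speaker ahead and to the right.
print(directed_audio((0, 0), 90.0, (5, 5)))
```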
-
Patent number: 8869032
Abstract: A method of defining a voice browser for browsing a plurality of voice sites, at least some of the voice sites having different telephone numbers, the voice sites being configured to be accessed by telephone, is provided including storing information relating to voice sites visited by a voice user; and providing forward and back functions, comprising transferring a user from one voice site to another, in response to commands by the user. Computer program code and systems are also provided.
Type: Grant
Filed: March 13, 2008
Date of Patent: October 21, 2014
Assignee: International Business Machines Corporation
Inventors: Sheetal K. Agarwal, Dipanjan Chakraborty, Arun Kumar, Amit Anil Nanavati, Nitendra Rajput
-
Patent number: 8862985
Abstract: A screen reader application for visually impaired users suppresses unwanted content that is output by Braille or text-to-speech. The invention accesses, but does not modify, the document object model of the web page and enumerates web page elements for the end user to either hide or skip to. The end user selections are saved as rules which may be applied according to various levels of scope, including web-page-specific, site-specific, or web-wide. A screen magnification application for visually impaired users automatically sets the visual focus and magnification level on a web page element according to end-user selection.
Type: Grant
Filed: June 8, 2012
Date of Patent: October 14, 2014
Assignee: Freedom Scientific, Inc.
Inventors: Robert Gallo, Glen Gordon
-
Patent number: 8862475
Abstract: Speech-enabled content navigation and control of a distributed multimodal browser is disclosed, the browser providing an execution environment for a multimodal application, the browser including a graphical user agent (‘GUA’) and a voice user agent (‘VUA’), the GUA operating on a multimodal device, the VUA operating on a voice server, that includes: transmitting, by the GUA, a link message to the VUA, the link message specifying voice commands that control the browser and an event corresponding to each voice command; receiving, by the GUA, a voice utterance from a user, the voice utterance specifying a particular voice command; transmitting, by the GUA, the voice utterance to the VUA for speech recognition by the VUA; receiving, by the GUA, an event message from the VUA, the event message specifying a particular event corresponding to the particular voice command; and controlling, by the GUA, the browser in dependence upon the particular event.
Type: Grant
Filed: April 12, 2007
Date of Patent: October 14, 2014
Assignee: Nuance Communications, Inc.
Inventors: Soonthorn Ativanichayaphong, Charles W. Cross, Jr., Gerald M. McCobb
-
Publication number: 20140304605
Abstract: A system that acquires captured voice data corresponding to a spoken command; sequentially analyzes the captured voice data; causes a display to display a visual indication corresponding to the sequentially analyzed captured voice data; and performs a predetermined operation corresponding to the spoken command when it is determined that the sequential analysis of the captured voice data is complete.
Type: Application
Filed: March 11, 2014
Publication date: October 9, 2014
Applicant: SONY CORPORATION
Inventors: Junki OHMURA, Michinari Kohno, Kenich Okada
-
Publication number: 20140304606
Abstract: An information processing device includes circuitry configured to cause first display information to be displayed in a first format. The circuitry also changes the first display information to be displayed in a second format in response to a voice being recognized. The information processing may also be accomplished with a method and via a non-transient computer readable storage device.
Type: Application
Filed: March 12, 2014
Publication date: October 9, 2014
Applicant: SONY CORPORATION
Inventors: Junki OHMURA, Michinari Kohno, Kenichi Okada
-
Publication number: 20140298177
Abstract: Example embodiments relate to processing user interactions with a computing device, comprising receiving a user-initiated action performed on a character button, the character button representing a character; determining whether the user-initiated action is performed in a normal or an abnormal operating manner; and, when a normal operating manner is determined, displaying the character on a graphical display.
Type: Application
Filed: March 28, 2013
Publication date: October 2, 2014
Inventor: Vasan Sun
-
Publication number: 20140289633
Abstract: The present disclosure provides a method and an electronic device for information processing. The electronic device comprises a sensing unit and a display unit having a display area. The display unit displays a graphical interface. The display area displays a first part of the graphical interface. The method comprises: detecting a first operation by the sensing unit when the display unit displays the first part of the graphical interface; displaying a second part of the graphical interface on the display unit in response to the first operation; detecting a second operation; determining whether a preset condition is satisfied during the detecting of the second operation to obtain first decision information; and displaying a speech control on the display unit when the first decision information indicates that the preset condition is satisfied during the detecting of the second operation.
Type: Application
Filed: March 20, 2014
Publication date: September 25, 2014
Applicants: LENOVO (BEIJING) LIMITED, BEIJING LENOVO SOFTWARE LTD.
Inventors: Xu JIA, Xinru HOU, Qianying WANG, Gaoge WANG, Shifeng PENG, Yuanyi ZHANG, Jun CHEN
-
Publication number: 20140289632
Abstract: According to an embodiment, a picture drawing support apparatus includes the following components. The feature extractor extracts a feature amount from a picture drawn by a user. The speech recognition unit performs speech recognition on speech input by the user. The keyword extractor extracts at least one keyword from a result of the speech recognition. The image search unit retrieves one or more images corresponding to the at least one keyword from a plurality of images prepared in advance. The image selector selects an image which matches the picture, from the one or more images based on the feature amount. The image deformation unit deforms the image based on the feature amount to generate an output image. The presentation unit presents the output image.
Type: Application
Filed: March 4, 2014
Publication date: September 25, 2014
Applicant: KABUSHIKI KAISHA TOSHIBA
Inventors: Masaru Suzuki, Masayuki Okamoto, Kenta Cho, Kosei Fume
-
Publication number: 20140282008
Abstract: An interactive holographic display system includes a holographic generation module configured to display a holographically rendered anatomical image. A localization system is configured to define a monitored space on or around the holographically rendered anatomical image. One or more monitored objects have their position and orientation monitored by the localization system such that coincidence of spatial points between the monitored space and the one or more monitored objects triggers a response in the holographically rendered anatomical image.
Type: Application
Filed: October 15, 2012
Publication date: September 18, 2014
Inventors: Laurent Verard, Raymond Chan, Daniel Simon Anna Ruijters, Sander Hans Denissen, Sander Slegt
-
Publication number: 20140282007
Abstract: Methods and systems are provided for diagnosing inadvertent activation of user interface settings on an electronic device. The electronic device receives a user input indicating that the user is having difficulty operating the electronic device. The device then determines whether a setting was changed on the device within a predetermined time period prior to receiving the user input. When a first setting was changed within the predetermined time period prior to receiving the user input, the device restores the changed setting to a prior setting.
Type: Application
Filed: March 3, 2014
Publication date: September 18, 2014
Applicant: APPLE INC.
Inventor: Christopher B. FLEIZACH
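A minimal sketch of the diagnostic flow, assuming a ten-minute window and a simple change log of prior values; both are placeholders for the unspecified "predetermined time period" and settings model.

```python
import time

RECENT_WINDOW_SECONDS = 600  # stand-in for the "predetermined time period"

settings = {"zoom_level": 4.0, "voice_over": True}
change_log = []  # (timestamp, key, prior_value)

def change_setting(key, value):
    change_log.append((time.time(), key, settings[key]))
    settings[key] = value

def user_reports_difficulty():
    """Restore the most recently changed setting if the change was recent enough."""
    now = time.time()
    for timestamp, key, prior in reversed(change_log):
        if now - timestamp <= RECENT_WINDOW_SECONDS:
            settings[key] = prior
            print(f"Restored {key} to {prior}")
            return
    print("No recent setting change found")

change_setting("zoom_level", 12.0)   # possibly inadvertent
user_reports_difficulty()
```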
-
Publication number: 20140282006
Abstract: A method comprising generating, by a computer, a model of a website using user interaction primitives to represent hierarchical and hypertextual structures of the website; generating, by the computer, a linear aural flow of content of the website based upon the model and a set of user constraints; and audibly presenting, by the computer, the linear aural flow of the content such that the linear aural flow of content is controlled through the use of user supplied primitives, wherein the linear aural flow can be turned into a dynamic aural flow based upon the user supplied primitives.
Type: Application
Filed: September 11, 2013
Publication date: September 18, 2014
Inventor: Davide Bolchini
-
Publication number: 20140282005
Abstract: Incoming messages, like incoming wounded on the battlefield, can be initially sorted into groups, e.g., a) those which can be or should be treated immediately, b) those which can be treated later, and c) those which should not be treated. As in a triage unit on a battlefield, it is useful to reduce the amount of effort and increase the speed at which this sort takes place. The present invention allows the user's effort to sort to be reduced to a minimum, with a consequent increase in speed.
Type: Application
Filed: March 15, 2013
Publication date: September 18, 2014
Inventor: Howard Gutowitz
-
Publication number: 20140281995
Abstract: A method of controlling a mobile terminal, which includes displaying an application screen of an executing application and a corresponding keypad on a display of the mobile terminal; modifying, via a controller of the mobile terminal, the keypad into a new keypad arrangement; displaying, via the controller, a display window in a vacant space created by the modification of the keypad, wherein a function of the display window is automatically selected based on a type of the executing application; and inputting text on the displayed application screen through the display window and modified keypad.
Type: Application
Filed: January 29, 2014
Publication date: September 18, 2014
Applicant: LG ELECTRONICS INC.
Inventors: Mina KIM, Byoungjoo KWAK, Hosung SONG, Keansub LEE
-
Publication number: 20140282259
Abstract: Navigating through objects or items in a display device using a first input device to detect pointing of a user's finger to an object or item and using a second input device to receive the user's indication on the selection of the object or the item. An image of the hand is captured by the first input device and is processed to determine a location on the display device corresponding to the location of the fingertip of the pointing finger. The object or the item corresponding to the location of the fingertip is selected after predetermined user input is received from the second input device.
Type: Application
Filed: February 27, 2014
Publication date: September 18, 2014
Applicant: Honda Motor Co., Ltd.
Inventors: Kikuo Fujimura, Victor Ng-Thow-Hing, Behzad Dariush
-
Patent number: 8832596
Abstract: Input sources to a display, such as a television, are depicted with a first indicator if active, such as active with a signal having visual information, and are depicted with a second indicator if inactive, such as lacking an active signal. A selection module allows a user of the display to select an active source but precludes selection of an inactive source. For instance, a user is allowed to select highlighted input sources that are active but not allowed to select grayed out input sources that are inactive. An override allows the user to present all input sources with the first indicator, whether or not the input sources are active.
Type: Grant
Filed: May 23, 2005
Date of Patent: September 9, 2014
Assignee: Dell Products L.P.
Inventors: Jeffrey Stephens, Andrew G. Habas
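A minimal sketch of the selection rule, assuming the active/inactive state of each source is already known; the source names are placeholders, and the override here affects only how sources are listed, since the abstract ties it to presentation.

```python
sources = {"HDMI 1": True, "HDMI 2": False, "VGA": False}  # name -> has active signal
show_all_override = False

def render_source_list():
    """List sources; inactive ones get the second indicator unless the override is on."""
    for name, active in sources.items():
        marker = "ACTIVE" if (active or show_all_override) else "no signal"
        print(f"  {name:<8} [{marker}]")

def select_source(name):
    """Allow selection of active sources only."""
    if sources[name]:
        print(f"Switched to {name}")
    else:
        print(f"{name} has no active signal and cannot be selected")

render_source_list()
select_source("HDMI 2")
```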
-
Patent number: 8831955
Abstract: Methods and arrangements for facilitating tangible interactions in voice applications. At least two tangible objects are provided, along with a measurement interface. The at least two tangible objects are disposed to each be displaceable with respect to one another and with respect to the measurement interface. The measurement interface is communicatively connected with a voice application. At least one of the two tangible objects is displaced with respect to the measurement interface, and the displacement of at least one of the at least two tangible objects is converted to input for the voice application.
Type: Grant
Filed: August 31, 2011
Date of Patent: September 9, 2014
Assignee: International Business Machines Corporation
Inventors: Nitendra Rajput, Shrey Sahay, Saurabh Srivastava, Kundan Shrivastava
-
Patent number: 8826133
Abstract: A system and method are disclosed for providing an improved 3D sound experience to a user. The sound generation layer is customizable to allow the user and/or application provider to modify the internal rules the sound generation layer uses to render sounds, to amplify sounds that fall below a pre-set or user-set volume level, and to specifically amplify/soften certain sounds (such as game specific sounds like gunfire or footsteps), or specific frequencies of sounds. A graphical user interface can communicate with the sound generation layer to handle any or all of the above, so that a lay user can easily adjust these settings without having to understand the underlying algorithms.
Type: Grant
Filed: March 6, 2006
Date of Patent: September 2, 2014
Assignee: Razer (Asia-Pacific) Pte. Ltd.
Inventors: Chern Ann Ng, Min-Liang Tan
-
Patent number: 8826137
Abstract: A screen reader software product for low-vision users, the software having a reader module collecting textual and non-textual display information generated by a web browser or word processor. Font styling, interface layout information and the like are communicated to the end user by sounds broadcast simultaneously rather than serially with the synthesized speech to improve the speed and efficiency in which information may be digested by the end user.
Type: Grant
Filed: August 12, 2004
Date of Patent: September 2, 2014
Assignee: Freedom Scientific, Inc.
Inventors: Christian D. Hofstader, Glen Gordon, Eric Damery, Ralph Ocampo, David Baker, Joseph K. Stephen