Audio Input for On-Screen Manipulation (e.g., Voice-Controlled GUI) Patents (Class 715/728)
  • Patent number: 9426443
    Abstract: An image processing system according to an embodiment has a terminal device including a display unit that displays a medical image. The image processing system includes an acquiring unit and a display controller. The acquiring unit acquires a position of the terminal device relative to a predetermined target. The display controller causes the display unit to display a medical image in accordance with that relative position as acquired by the acquiring unit.
    Type: Grant
    Filed: June 27, 2012
    Date of Patent: August 23, 2016
    Assignee: Toshiba Medical Systems Corporation
    Inventors: Kazumasa Arakita, Shinsuke Tsukagoshi, Tatsuo Maeda, Takumi Hara, Kenta Moriyasu
  • Patent number: 9214135
    Abstract: A computer-implemented method monitors a video-based graphic. The method includes displaying a video-based graphic. A position of a pointer is monitored. The method includes determining a transparency of the video-based graphic at a location of the pointer. An action is performed based on the determined transparency of the video-based graphic at the pointer's location.
    Type: Grant
    Filed: July 18, 2011
    Date of Patent: December 15, 2015
    Assignee: YAHOO! INC.
    Inventor: Lawrence Anthony Deguzman
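The transparency test this abstract describes can be sketched as an alpha-channel hit test: the pointer's coordinates index into the graphic's pixels, and the event is handled by the graphic only where it is opaque. All names, the threshold, and the pixel layout below are illustrative assumptions, not the patented implementation.

```python
# Hypothetical sketch: pass pointer events through wherever the
# video-based graphic is effectively transparent at that pixel.

ALPHA_THRESHOLD = 10  # assumed cutoff; alpha 0 = fully transparent

def hit_test(rgba_pixels, width, x, y):
    """Return True if the pointer hits opaque content of the graphic."""
    r, g, b, a = rgba_pixels[y * width + x]
    return a > ALPHA_THRESHOLD

def route_event(rgba_pixels, width, x, y):
    # Perform an action on the graphic only where it is opaque;
    # otherwise forward the event to whatever lies underneath.
    if hit_test(rgba_pixels, width, x, y):
        return "handle-on-graphic"
    return "pass-through"

pixels = [(0, 0, 0, 0), (255, 0, 0, 255),
          (0, 0, 0, 0), (0, 255, 0, 128)]  # 2x2 RGBA image, row-major
print(route_event(pixels, 2, 1, 0))  # opaque red pixel -> handle-on-graphic
print(route_event(pixels, 2, 0, 0))  # transparent pixel -> pass-through
```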
  • Patent number: 9208397
    Abstract: A service provider receives, from a user, picture information captured by a user device from a picture mark associated with a product or service of a merchant. It determines a matching picture image by comparing the picture information with picture images in a server, previously registered by the merchant. It also determines, out of attributes previously registered by the merchant, a matching attribute set uniquely associated with the matching picture image. The attributes may be web links, mobile APPs, or any media files that the merchant desires to communicate to users about its products or services. The service provider then communicates to the user the matching attribute set to be loaded on the user device and directs the user to the web links, mobile APPs, or media files that the merchant predetermined.
    Type: Grant
    Filed: August 27, 2012
    Date of Patent: December 8, 2015
    Assignee: PAYPAL, INC.
    Inventor: German Scipioni
  • Patent number: 9195385
    Abstract: A physiological monitor touchscreen interface which presents interface constructs on a touchscreen display that are particularly adapted to finger gestures. The finger gestures operate to change at least one of a physiological monitor operating characteristic and a physiological touchscreen display characteristic. The physiological monitor touchscreen interface includes a first interface construct operable to select a menu item from a touchscreen display and a second interface construct operable to define values for the selected menu item. The first interface construct can include a first scroller that presents a rotating set of menu items in a touchscreen display area and a second scroller that presents a rotating set of thumbnails in a display well. The second interface construct can operate to define values for a selected menu item.
    Type: Grant
    Filed: March 25, 2013
    Date of Patent: November 24, 2015
    Assignee: Masimo Corporation
    Inventors: Ammar Al-Ali, Bilal Muhsin, Keith Indorf
  • Patent number: 9075780
    Abstract: A document comparison system compares revisions of a document by comparing content of a first revision of an object with content of a second revision of the object to produce a comparison object in a comparison document. The system then determines a footprint of the second revision of the object, where the footprint is a dimensional length and width of the object and its position relative to a page boundary of the document. Next, the system determines whether the entire compared content fits within the footprint of the second revision of the object. If the system determines that the entire compared content fits within the footprint of the second revision of the object, the system displays the comparison object having the same footprint as the second revision of the object. If the system determines that the entire compared content does not fit within the footprint of the second revision of the object, the system then displays the comparison object in a second manner.
    Type: Grant
    Filed: May 22, 2014
    Date of Patent: July 7, 2015
    Assignee: Workiva Inc.
    Inventors: John Arthur Bonk, Anthony Ryan Oskvarek, Scott Johns Bacon, Christopher James Lo Coco, Bert Jeffrey Lutzenberger
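The footprint check in the abstract above reduces to measuring the compared content against the second revision's fixed dimensions. This is a hedged sketch under an assumed fixed-width layout model; the function names, the characters-per-line constant, and the two mode labels are illustrative.

```python
# Hedged sketch of the footprint test: reuse revision 2's footprint
# if the compared content fits, otherwise fall back to a second manner.

def measure(content, chars_per_line):
    """Rough line count for content laid out at a fixed width (assumed model)."""
    lines = 0
    for paragraph in content.split("\n"):
        lines += max(1, -(-len(paragraph) // chars_per_line))  # ceiling division
    return lines

def display_mode(compared_content, footprint_lines, chars_per_line=40):
    needed = measure(compared_content, chars_per_line)
    if needed <= footprint_lines:
        return "same-footprint"  # content fits revision 2's footprint
    return "second-manner"       # content overflows; alternate display

print(display_mode("short edit", footprint_lines=3))  # same-footprint
```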
  • Publication number: 20150149907
    Abstract: A portable electronic apparatus and an interface display method thereof are disclosed. The method includes the following steps of: executing an application; capturing and analyzing an environmental sound around the portable electronic apparatus to obtain at least one sound character; determining a state of motion of the portable electronic apparatus; comparing the at least one sound character and the state of motion with statistical data of the application to determine an interface display mode of the application; and locking the interface display mode as a predetermined interface display mode for displaying a display interface of the application when the comparing step yields a result.
    Type: Application
    Filed: August 4, 2014
    Publication date: May 28, 2015
    Inventor: Jung-Yu Wu
  • Patent number: 9043703
    Abstract: In one embodiment, a method includes accessing a social graph that includes a plurality of nodes and edges, receiving from a first user a voice message comprising one or more commands, receiving location information associated with the first user, identifying edges and nodes in the social graph based on the location information, where each of the identified edges and nodes corresponds to at least one of the commands of the voice message, and generating new nodes or edges in the social graph based on the identified nodes or identified edges.
    Type: Grant
    Filed: October 16, 2012
    Date of Patent: May 26, 2015
    Assignee: Facebook, Inc.
    Inventors: Jenny Yuen, David Harry Garcia
  • Publication number: 20150143241
    Abstract: A system and method are disclosed for navigation on the World Wide Web using voice commands. The name of a website may be called out by users several different ways. A user may speak the entire URL, a portion of the URL, or a name of the website which may bear little resemblance to the URL. The present technology uses rules and heuristics embodied in various software engines to determine the best candidate website based on the received voice command, and then navigates to that website.
    Type: Application
    Filed: November 19, 2013
    Publication date: May 21, 2015
    Applicant: Microsoft Corporation
    Inventors: Andrew S. Zeigler, Michael Han-Young Kim, Rodger William Benson, Raman Kumar Sarin
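The candidate-selection idea described above can be sketched as scoring a spoken phrase against each known site's URL and its spoken-name variants. The sites, names, and scoring weights below are purely illustrative assumptions, not the rules or heuristics the application claims.

```python
# Illustrative sketch: rank candidate websites for a spoken phrase that
# may be a full URL, a URL fragment, or a friendly name.

SITES = {
    "https://www.bing.com": ["bing", "bing dot com", "bing.com"],
    "https://www.microsoft.com": ["microsoft", "microsoft dot com"],
}

def best_candidate(spoken):
    spoken = spoken.lower().strip()
    best, best_score = None, 0.0
    for url, names in SITES.items():
        score = 0.0
        if spoken in url:
            score = max(score, 0.5)      # partial-URL match
        for name in names:
            if spoken == name:
                score = max(score, 1.0)  # exact friendly-name match
            elif spoken in name or name in spoken:
                score = max(score, 0.7)  # fuzzy containment
        if score > best_score:
            best, best_score = url, score
    return best

print(best_candidate("bing dot com"))  # https://www.bing.com
```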
  • Publication number: 20150143242
    Abstract: A method for providing a user interface of a communication apparatus comprises switching from a low power mode to a working mode upon receiving a stream of audio data; and upon switching from the low power mode to the working mode: extracting at least one audio feature from said stream of audio data, and modifying the appearance of at least one user interface component configured for invoking a function of the communication apparatus, in accordance with said extracted audio feature.
    Type: Application
    Filed: December 5, 2014
    Publication date: May 21, 2015
    Inventor: Jarmo KAUKO
  • Patent number: 9037973
    Abstract: An interactive presentation environment for eMeetings or the like that provides participants with more control over what they see and hear. The interactive presentation environment may comprise a meeting recorder adapted to create a recording of a live meeting and a navigation control for selecting a portion of the recording to view during the live meeting. The interactive presentation environment may further comprise a timeline control containing a first graphical indicator associated with a live position and a second graphical indicator associated with a current position, a bookmark control adapted to mark a portion of the recording for archiving, and a display operatively connected to the meeting recorder and the navigation control.
    Type: Grant
    Filed: June 2, 2014
    Date of Patent: May 19, 2015
    Assignee: Google Inc.
    Inventors: Gregory Richard Hintermeister, Michael D. Rahn
  • Publication number: 20150135080
    Abstract: A mobile terminal including a wireless communication unit configured to provide wireless communication; a touch screen; and a controller configured to receive a plurality of taps applied to the touch screen, and display at least one function executable by the mobile terminal on the touch screen based on the received plurality of taps and based on at least one of an operating state and an ambient environmental state of the mobile terminal.
    Type: Application
    Filed: October 30, 2014
    Publication date: May 14, 2015
    Applicant: LG ELECTRONICS INC.
    Inventors: Jaeyoung JI, Byoungzoo JEONG, Jinhae CHOI, Soyeon YIM, Kyungjin YOU, Soohyun LEE, Younghoon LEE, Nayeoung KIM
  • Publication number: 20150128049
    Abstract: An advanced user interface includes a display device and a processing unit. The processing unit causes the display device to display a dynamic user interface containing a plurality of input areas in an adaptive graphical arrangement, detect user inputs on the dynamic user interface, and record the user inputs in a memory unit in association with a context of information inputted by the user. The graphical arrangement of input areas includes at least one primary input area, each of which is respectively associated with different information. The processing unit detects a user input for one of the input areas, compares the detected user input with prior user inputs recorded in the memory unit, and predicts a first next user input based on the comparison and the context of information associated with the detected user input.
    Type: Application
    Filed: July 8, 2013
    Publication date: May 7, 2015
    Inventors: Robert S. BLOCK, Alexander A. WENGER, Paul SIDLO
  • Patent number: 9026915
    Abstract: The invention provides a system, method, and computer-readable medium storing instructions related to controlling a presentation in a multimodal system. The method embodiment of the invention retrieves information on the basis of its content for incorporation into an electronic presentation. The method comprises receiving from a user a content-based request for at least one segment from a first plurality of segments within a media presentation preprocessed to enable natural language content searchability; in response to the request, presenting a subset of the first plurality of segments to the user; receiving a selection indication from the user associated with at least one segment of the subset; and adding the selected at least one segment to a deck for use in a presentation.
    Type: Grant
    Filed: October 31, 2005
    Date of Patent: May 5, 2015
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Patrick Ehlen, David Crawford Gibbon, Mazin Gilbert, Michael Johnston, Zhu Liu, Behzad Shahraray
  • Publication number: 20150121229
    Abstract: A method for processing information applied in an electronic apparatus is provided. The method includes: acquiring a first operation to trigger the multi-window manager; displaying the multi-window management interface corresponding to the multi-window manager in the touch-control display unit based on the first operation; displaying the at least one object identifier corresponding to the at least one application in the multi-window management interface, and displaying running status information corresponding to the at least one application. With this technique, the user can conveniently and quickly see, via the multi-window management interface, which applications may be displayed in the form of a small window and their current running status, thereby improving the user experience.
    Type: Application
    Filed: March 31, 2014
    Publication date: April 30, 2015
    Applicant: Lenovo (Beijing) Co., Ltd.
    Inventors: Guizhen Wang, Lijun Lin, Jing Wang, Leilei Zhao
  • Publication number: 20150121230
    Abstract: In an audio setup comprising at least one audio headset configurable to process audio for a user (e.g., when participating in an online multiplayer game), input audio and/or output audio in the audio headset may be monitored, and when the audio matches triggering criteria, one or more update messages may be triggered via a social networking service. The triggering criteria may comprise (or be set based on) identity of the speaker, content of the audio, and/or conditions associated with the audio. Different triggering criteria may be associated with different applications (e.g., different video games). The update messages may be made available to one or more other users, who may be selected based on matching particular user selection criteria and/or based on successful user validation. The user selection criteria may comprise participation in the same online multiplayer game.
    Type: Application
    Filed: September 4, 2014
    Publication date: April 30, 2015
    Inventors: Richard Kulavik, Michael Jessup, Kevin Arthur Robertson
  • Publication number: 20150113410
    Abstract: Audio files representing files intended primarily for viewing (e.g., by sighted users) are created and organized into hierarchies that mimic those of the original files as instantiated at original websites incorporating such files. Thus, visually impaired users are provided access to and navigation of the audio files in a way that mimics the original website.
    Type: Application
    Filed: December 31, 2014
    Publication date: April 23, 2015
    Inventors: Nathaniel T. Bradley, William C. O'Conor, David Ide
  • Publication number: 20150113409
    Abstract: A computer system may include logic configured to enable voice-enabled web pages. The logic may be configured to receive a request for a web page that includes Hypertext Markup Language (HTML) content and voice browser content from an HTML browser running on a user device; generate a co-browsing session identifier based on the received request; provide a response to the HTML browser, wherein the response includes the HTML content, the generated co-browsing session identifier, and an instruction to establish a Web Real-Time Communication (WebRTC) connection with an interactive voice response (IVR) system associated with the voice browser content; receive an indication from the IVR system that the WebRTC connection has been established for the co-browsing session identifier; and provide the voice browser content to a voice browser in the IVR system, in response to receiving the indication that the WebRTC connection has been established for the co-browsing session identifier.
    Type: Application
    Filed: December 29, 2014
    Publication date: April 23, 2015
    Inventors: Brian S. Badger, David E. Phelps
  • Patent number: 9009592
    Abstract: Automatic capture and population of task and list items in an electronic task or list surface via voice or audio input through an audio recording-capable mobile computing device is provided. A voice or audio task or list item may be captured for entry into a task application interface or into a list authoring surface interface for subsequent use as task items, reminders, “to do” items, list items, agenda items, work organization outlines, and the like. Captured voice or audio content may be transcribed locally or remotely, and transcribed content may be populated into a task or list authoring surface user interface that may be displayed on the capturing device (e.g., mobile telephone), or that may be stored remotely and subsequently displayed in association with a number of applications on a number of different computing devices.
    Type: Grant
    Filed: October 12, 2011
    Date of Patent: April 14, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ned B. Friend, Kanav Arora, Marta Rey-Babarro, David De La Brena Valderrama, Erez Kikin-Gil, Matthew J. Kotler, Charles W. Parker, Maya Rodrig, Igor Zaika
  • Patent number: 8996997
    Abstract: Embodiments relate to systems and methods providing a flip-through format for viewing notification of messages and related items on devices, for example personal mobile devices such as smart phones. According to an embodiment, an unread item most recently received is shown in full screen on the mobile device. While the user is viewing this item, the device will automatically retrieve the next most recently received item and load it into a cache memory. When the user is done viewing the item most recently received, the user can swipe a finger across the touch screen to trigger a page flipping animation and display of the next most recently received item. Embodiments avoid the user having to click back and forth between a list of notifications/links and corresponding notification items.
    Type: Grant
    Filed: April 18, 2012
    Date of Patent: March 31, 2015
    Assignee: SAP SE
    Inventor: Jian Xu
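The prefetch-and-flip behavior described above can be sketched as a viewer that fetches the next item into a cache while the current one is on screen. The class and method names, and the `fetch_item` callback, are illustrative assumptions rather than SAP's implementation.

```python
# Illustrative sketch: show the newest unread item full-screen and
# prefetch the next one so a swipe can display it without a round trip.

class FlipViewer:
    def __init__(self, item_ids, fetch_item):
        self.item_ids = item_ids  # newest first
        self.fetch_item = fetch_item
        self.cache = {}
        self.index = 0

    def _get(self, i):
        if i not in self.cache:
            self.cache[i] = self.fetch_item(self.item_ids[i])
        return self.cache[i]

    def show_current(self):
        item = self._get(self.index)
        # Prefetch the next item while the user reads the current one.
        if self.index + 1 < len(self.item_ids):
            self._get(self.index + 1)
        return item

    def swipe(self):
        # A swipe advances to the next most recently received item,
        # which is normally already cached.
        if self.index + 1 < len(self.item_ids):
            self.index += 1
        return self.show_current()

viewer = FlipViewer(["msg3", "msg2", "msg1"], fetch_item=lambda i: f"body of {i}")
print(viewer.show_current())  # "body of msg3"; msg2 is now cached
print(viewer.swipe())         # "body of msg2", served from cache
```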
  • Patent number: 8996998
    Abstract: Systems and processes that incorporate teachings of the present disclosure may include, for example, transmitting a client program having a graphical user interface to a media device accessible via an interactive television network. Temporal actions of users are collected while the client program presents a media program. A symbolic overlay of the client program is generated, including a linear presentation of the timeline corresponding to the temporal progression of the media program's presentation and an iconic symbol corresponding to the temporal action; the symbolic overlay is superimposed onto the media content. The iconic symbol enables association of comments with the media content. The comments are presented by at least one symbol situated relative to the linear presentation of the timeline corresponding to the temporal progression of the media. Other embodiments are disclosed.
    Type: Grant
    Filed: October 23, 2012
    Date of Patent: March 31, 2015
    Assignee: AT&T Intellectual Property I, LP
    Inventors: Linda Roberts, E-Lee Chang, Ja-Young Sung, Natasha Barrett Schultz, Robert Arthur King
  • Publication number: 20150089373
    Abstract: A system and method for facilitating user access to software functionality, such as enterprise-related software applications and associated data. An example method includes receiving language input responsive to one or more prompts; determining, based on the language input, a subject category associated with a computing object, such as a Customer Relationship Management (CRM) opportunity object; identifying an action category pertaining to a software action to be performed pertaining to the computing object; employing identification of the software action to obtain action context information pertaining to the action category; and implementing a software action in accordance with the action context information. Context information pertaining to a software flow and a particular computing object may guide efficient implementation of voice-guided software tasks corresponding to the software flows.
    Type: Application
    Filed: September 2, 2014
    Publication date: March 26, 2015
    Inventors: Vinay Dwivedi, Seth Stafford, Daniel Valdivia Milanes, Fernando Jimenez Lopez, Brent-Kaan William White
  • Publication number: 20150088499
    Abstract: Embodiments provide user access to software functionality such as enterprise-related software applications and accompanying actions and data. An example method includes receiving natural language input; analyzing the natural language input and selecting one or more keywords from it; employing the one or more keywords to select software functionality; and presenting one or more user interface controls in combination with a representation of the natural language input, wherein the one or more user interface controls are adapted to facilitate user access to the software functionality. In a more specific embodiment, the natural language input is functionally augmented via in-line tagging of keywords or phrases, wherein the tags act as user interface controls for accessing selected software functionality.
    Type: Application
    Filed: September 20, 2013
    Publication date: March 26, 2015
    Applicant: Oracle International Corporation
    Inventors: Brent-Kaan William White, Burkay Gur
  • Publication number: 20150082175
    Abstract: An apparatus includes a receiver, a shared information unit, a transmitter, a voice recognition unit, and an application execution unit. The receiver is configured to receive a voice signal and information from a second apparatus. The shared information unit is configured to create shared information shared by both the apparatus and the second apparatus based on the information received from the second apparatus. The transmitter is configured to transmit the shared information to the second apparatus. The voice recognition unit is configured to analyze the voice signal. The application execution unit is configured to execute an application based on a result generated by the voice recognition unit.
    Type: Application
    Filed: March 15, 2013
    Publication date: March 19, 2015
    Applicant: SONY CORPORATION
    Inventors: Takashi Onohara, Roka Ueda, Keishi Daini, Taichi Yoshio, Yuji Kawabe, Seizi Iwayagano, Takuma Higo, Eri Sakai
  • Patent number: 8977962
    Abstract: A method is provided for displaying reference waveforms to facilitate visual identification of points of interest, such as maximum and minimum points, of an audio clip. The reference waveform includes points that correspond to points on the original audio waveform, except that some or all points on the reference waveform are accentuated to make the positions of the corresponding points on the audio waveform easy to identify. The reference waveforms are especially useful when an audio waveform (or at least a portion of the clip) has low volume, which makes visual identification of the waveform's maximums and minimums difficult. Displaying the reference waveform, which accentuates the peaks and valleys of the original waveform, facilitates identification of these maximums and minimums.
    Type: Grant
    Filed: September 6, 2011
    Date of Patent: March 10, 2015
    Assignee: Apple Inc.
    Inventor: Aaron M. Eppolito
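One simple way to build such an accentuated reference waveform is to rescale a quiet clip so its local peaks reach full amplitude, making maxima and minima easy to spot. This is a plausible reading of the abstract, not Apple's actual algorithm; the function name and sample values are assumed.

```python
# Hedged sketch: derive a reference waveform whose peaks and valleys
# are accentuated by normalizing the clip to full amplitude.

def reference_waveform(samples):
    """Return a copy of the samples scaled so the largest peak hits +/-1.0."""
    peak = max(abs(s) for s in samples) or 1.0
    return [s / peak for s in samples]

quiet = [0.01, 0.05, -0.02, -0.05, 0.03]  # low-volume clip
print(reference_waveform(quiet))  # peaks now reach +/-1.0
```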
  • Publication number: 20150067517
    Abstract: An electronic device and a method for controlling the electronic device are provided. The method includes receiving at least one input sound from a user, determining one of a plurality of reference sounds included in a guide track as a device playing sound corresponding to the at least one input sound, and playing the device playing sound.
    Type: Application
    Filed: August 25, 2014
    Publication date: March 5, 2015
    Inventors: Hae-Seok OH, Jeong-Yeon KIM, Dae-Beom PARK, Lae-Hyuk BANG, Chul-Hyung YANG, Ji-Woong OH, Gyu-Cheol CHOI
  • Publication number: 20150067515
    Abstract: An electronic device, a controlling method of a screen, and a program storage medium thereof are provided. The screen includes a display panel and a touch-sensitive panel. The display panel shows a root window on which all display contents are shown. The controlling method comprises the following steps. A command signal is received. The coordinate system of the screen is transformed according to the command signal.
    Type: Application
    Filed: August 27, 2013
    Publication date: March 5, 2015
    Applicant: Industrial Technology Research Institute
    Inventor: Chia-Ming CHANG
  • Publication number: 20150067516
    Abstract: A wearable electronic device including a wireless communication unit configured to be wirelessly connected to a projector for projecting a stored presentation onto a screen of an external device; a main body configured to be worn by a user; a microphone integrally connected to the main body; a display unit configured to be attached to the main body; and a controller configured to match voice information input through the microphone with corresponding contents of the stored presentation, and display on the display unit at least a following portion of content that follows the corresponding contents.
    Type: Application
    Filed: February 21, 2014
    Publication date: March 5, 2015
    Applicant: LG ELECTRONICS INC.
    Inventors: Jongseok PARK, Sanghyuck LEE
  • Patent number: 8966365
    Abstract: An apparatus, system, and method are disclosed for an information processing apparatus that allows a user to select appropriate processing beforehand when an application program outputs sound while an audio device is silenced. The apparatus in one embodiment includes a silencing module for silencing audio information output from an audio device, a detection module for detecting a sound playback request from an application program while silencing is set, a display module for displaying a select screen that allows the user to choose processing when the detection module detects the sound playback request, and a processing module for executing the processing selected by the user on the select screen.
    Type: Grant
    Filed: May 12, 2011
    Date of Patent: February 24, 2015
    Assignee: Lenovo (Singapore) Pte. Ltd.
    Inventors: Koutaroh Maki, Mikio Hagiwara
  • Publication number: 20150046825
    Abstract: The present disclosure involves a method of improving one-handed operation of a mobile computing device. A first visual content is displayed on a screen of the mobile computing device. The first visual content occupies a substantial entirety of a viewable area of the screen. While the first visual content is being displayed, an action performed by a user to the mobile computing device is detected. The first visual content is scaled down in response to the detected action and displayed on the screen. The scaled-down first visual content occupies a fraction of the viewable area of the screen. A user interaction with the scaled-down first visual content is then detected. In response to the user interaction, a second visual content is displayed on the screen. The second visual content is different from the first visual content and occupies a substantial entirety of the viewable area of the screen.
    Type: Application
    Filed: August 8, 2013
    Publication date: February 12, 2015
    Inventor: Eric Qing Li
  • Publication number: 20150040012
    Abstract: Techniques described herein provide a computing device configured to provide an indication that the computing device has recognized a voice-initiated action. In one example, a method is provided for outputting, by a computing device and for display, a speech recognition graphical user interface (GUI) having at least one element in a first visual format. The method further includes receiving, by the computing device, audio data and determining, by the computing device, a voice-initiated action based on the audio data. The method also includes outputting, while receiving additional audio data and prior to executing a voice-initiated action based on the audio data, and for display, an updated speech recognition GUI in which the at least one element is displayed in a second visual format, different from the first visual format, to indicate that the voice-initiated action has been identified.
    Type: Application
    Filed: December 17, 2013
    Publication date: February 5, 2015
    Applicant: Google Inc.
    Inventors: Alexander Faaborg, Peter Ng
  • Publication number: 20150033129
    Abstract: A mobile terminal including a camera; a display unit configured to display an image input through the camera; and a controller configured to display at least one user-defined icon corresponding to linked image-setting information, receive a touch signal indicating a touch is applied to a corresponding user-defined icon, and control the camera to capture the image based on image-setting information linked to the corresponding user-defined icon in response to the received touch signal.
    Type: Application
    Filed: June 9, 2014
    Publication date: January 29, 2015
    Inventors: Kyungmin CHO, Jeonghyun LEE, Minah SONG
  • Publication number: 20150033130
    Abstract: A computing device detects a user viewing the computing device and outputs a cue if the user is detected to view the computing device. The computing device receives an audio input from the user if the user continues to view the computing device for a predetermined amount of time.
    Type: Application
    Filed: April 27, 2012
    Publication date: January 29, 2015
    Applicant: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.
    Inventor: Evan Scheessele
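The gaze-then-listen flow described above can be sketched as a small state machine: a cue is output when the user is first detected viewing the device, and audio input is accepted only after the gaze persists for a dwell threshold. The class name, method signature, and dwell time are illustrative assumptions.

```python
# Illustrative sketch: cue on first detected gaze, accept audio input
# only after the user keeps viewing the device for a dwell period.

DWELL_SECONDS = 2.0  # assumed "predetermined amount of time"

class GazeListener:
    def __init__(self):
        self.gaze_start = None
        self.cued = False

    def update(self, viewing, now):
        """Feed periodic gaze-detection results; returns the current action."""
        if not viewing:
            self.gaze_start, self.cued = None, False
            return "idle"
        if self.gaze_start is None:
            self.gaze_start = now
        if not self.cued:
            self.cued = True
            return "output-cue"
        if now - self.gaze_start >= DWELL_SECONDS:
            return "accept-audio-input"
        return "waiting"

g = GazeListener()
print(g.update(True, 0.0))   # output-cue
print(g.update(True, 1.0))   # waiting
print(g.update(True, 2.5))   # accept-audio-input
```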
  • Publication number: 20150033128
    Abstract: A multi-dimensional surgical safety countermeasure system and method for using automated checklists to provide information to surgical staff in a surgical procedure. The system and method use checklists, and receive commands through the checklists' prompts, to update the displayed information and guide the performance of a medical procedure.
    Type: Application
    Filed: July 24, 2013
    Publication date: January 29, 2015
    Inventors: Steve Curd, Mark Heinemeyer, Victor Culafic
  • Patent number: 8943411
    Abstract: A system, method, and computer program product are provided for displaying controls to a user. In use, input is received from a user. Additionally, a location of the user is determined with respect to a display, utilizing the input. Further, one or more controls are positioned on the display, based on the location of the user.
    Type: Grant
    Filed: March 6, 2012
    Date of Patent: January 27, 2015
    Assignee: Amdocs Software Systems Limited
    Inventor: Matthew Davis Hill
  • Publication number: 20150026579
    Abstract: The disclosed embodiments illustrate methods and systems for processing one or more crowdsourced tasks. The method comprises converting an audio input received from a crowdworker to one or more phrases by one or more processors in at least one computing device. The audio input is at least a response to a crowdsourced task. A mode of the audio input is selected based on one or more parameters associated with the crowdworker. Thereafter, the one or more phrases are presented on a display of the at least one computing device by the one or more processors. Finally, one of the one or more phrases is selected by the crowdworker as a correct response to the crowdsourced task.
    Type: Application
    Filed: July 16, 2013
    Publication date: January 22, 2015
    Inventor: Shailesh Vaya
  • Publication number: 20150026580
    Abstract: A system of communicating between first and second electronic devices comprises, in a first device, receiving from a second device voice-representative information acquired by the second device, and connection information indicating characteristics of communication to be used in establishing a communication link with the second device. The system compares the voice-representative information with predetermined reference voice-representative information and, in response to the comparison, establishes a communication link with the second device by using the connection information received from the second device.
    Type: Application
    Filed: July 1, 2014
    Publication date: January 22, 2015
    Inventors: Hyuk KANG, Kyung-tae KIM, Seong-min JE
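As an illustration only (not the patented implementation), the flow in this abstract can be sketched as: compare a received voice fingerprint against a stored reference, and connect using the advertised connection information only when they match. The fingerprint format, the distance measure, and the tolerance below are all assumptions.

```python
# Hedged sketch: voice-gated device pairing. Fingerprints are assumed to be
# small numeric vectors; the tolerance value is illustrative.

def maybe_connect(received_fingerprint, reference_fingerprint,
                  connection_info, tolerance=0.1):
    """Establish a link only if the voice fingerprints match closely enough."""
    distance = sum(abs(a - b) for a, b in
                   zip(received_fingerprint, reference_fingerprint))
    if distance <= tolerance:
        return f"connect://{connection_info['address']}:{connection_info['port']}"
    return None  # comparison failed; no link is established

info = {"address": "10.0.0.7", "port": 5355}
print(maybe_connect([0.2, 0.9], [0.22, 0.88], info))
# connect://10.0.0.7:5355
```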
  • Patent number: 8938676
    Abstract: A method of enabling a user to adjust at least first and second control parameters for controlling an electronic system includes displaying a coordinate system on a display screen, where a first coordinate represents a range of values of the first control parameter, and a second coordinate represents a range of values of the second control parameter. The method further includes visually indicating a position in the coordinate system corresponding to a currently selected combination of values of the first and second control parameters, and enabling the user to select a new combination of values of the first and second control parameters by indicating a position within the coordinate system.
    Type: Grant
    Filed: March 12, 2004
    Date of Patent: January 20, 2015
    Assignee: Koninklijke Philips N.V.
    Inventors: Leon Maria Van De Kerkhof, Mykola Ostrovskyy, Arnoldus Werner Johannes Oomen
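For illustration only (not the patented method), the coordinate-to-parameter mapping this abstract describes can be sketched as below. The widget geometry and the two parameter ranges are assumptions.

```python
# Minimal sketch: map a pointer position inside a width x height widget to a
# (param1, param2) pair, one control parameter per axis.

def position_to_params(x, y, width, height,
                       p1_range=(0.0, 1.0), p2_range=(20.0, 20000.0)):
    fx = min(max(x / width, 0.0), 1.0)   # clamp to the widget bounds
    fy = min(max(y / height, 0.0), 1.0)
    p1 = p1_range[0] + fx * (p1_range[1] - p1_range[0])
    p2 = p2_range[0] + fy * (p2_range[1] - p2_range[0])
    return p1, p2

# Selecting the centre of a 200x100 widget yields the midpoint of both ranges.
print(position_to_params(100, 50, 200, 100))  # (0.5, 10010.0)
```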
  • Publication number: 20150019974
    Abstract: There is provided an information processing device including a processor configured to realize an address term definition function of defining an address term for at least a partial region of an image to be displayed on a display, a display control function of displaying the image on the display and temporarily displaying the address term on the display in association with the region, a voice input acquisition function of acquiring a voice input for the image, and a command issuing function of issuing a command relevant to the region when the address term is included in the voice input.
    Type: Application
    Filed: May 9, 2014
    Publication date: January 15, 2015
    Applicant: Sony Corporation
    Inventors: Shouichi DOI, Yoshiki TAKEOKA, Masayuki TAKADA
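As a rough illustration only (not the patented device), the address-term lookup this abstract describes might look like the sketch below. The region names, coordinates, and the "zoom" command payload are assumptions.

```python
# Hedged sketch: match address terms against a voice input and issue a
# command for each named image region that was mentioned.

def issue_region_commands(voice_input, address_terms):
    """Return the commands whose address term appears in the voice input."""
    spoken = voice_input.lower()
    issued = []
    for term, region in address_terms.items():
        if term in spoken:
            issued.append(("zoom", region))  # command relevant to the region
    return issued

regions = {"left face": (0, 0, 120, 160), "right face": (200, 0, 120, 160)}
print(issue_region_commands("Zoom in on the left face", regions))
# [('zoom', (0, 0, 120, 160))]
```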
  • Publication number: 20150019975
    Abstract: Described herein are technologies pertaining to transmitting electronic contact data from a first application to a second application by way of an operating system without generating a centralized contact store or providing the second application with programmatic access to all electronic contact data retained by first application.
    Type: Application
    Filed: July 28, 2014
    Publication date: January 15, 2015
    Inventors: John Morrow, Neil Pankey, Michael Farnsworth, Ashish Bangale
  • Patent number: 8935166
    Abstract: Some embodiments disclosed herein store a target application and a dictation application. The target application may be configured to receive input from a user. The dictation application interface may include a full overlay mode option, where in response to selection of the full overlay mode option, the dictation application interface is automatically sized and positioned over the target application interface to fully cover a text area of the target application interface to appear as if the dictation application interface is part of the target application interface. The dictation application may be further configured to receive an audio dictation from the user, convert the audio dictation into text, provide the text in the dictation application interface, and, in response to receiving a first user command to complete the dictation, automatically copy the text from the dictation application interface and insert it into the target application interface.
    Type: Grant
    Filed: October 16, 2013
    Date of Patent: January 13, 2015
    Assignee: Dolbey & Company, Inc.
    Inventors: Curtis A. Weeks, Aaron G. Weeks, Stephen E. Barton
  • Publication number: 20150012829
    Abstract: A computer implemented method and an apparatus for facilitating voice user interface (VUI) design are provided. The method comprises identifying a plurality of user intentions from user interaction data. The method further comprises associating each user intention with at least one feature from among a plurality of features. One or more features from among the plurality of features are extracted from natural language utterances associated with the user interaction data. Further, the method comprises computing a plurality of distance metrics corresponding to pairs of user intentions from among the plurality of user intentions. A distance metric is computed for each pair of user intentions from among the pairs of user intentions. Furthermore, the method comprises generating a plurality of clusters based on the plurality of distance metrics. Each cluster comprises a set of user intentions. The method further comprises provisioning a VUI design recommendation based on the plurality of clusters.
    Type: Application
    Filed: June 30, 2014
    Publication date: January 8, 2015
    Inventors: Kathy L. BROWN, Vaibhav SRIVASTAVA
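Purely as an illustration of the pipeline this abstract outlines (not the patented method), pairwise distances between intentions' feature sets can drive a simple clustering step. The intentions, their features, the Jaccard distance, and the merge threshold below are all assumptions.

```python
# Hedged sketch: Jaccard distances between feature sets, then single-link
# merging of intention clusters whose members fall under a distance threshold.
from itertools import combinations

def jaccard_distance(a, b):
    return 1.0 - len(a & b) / len(a | b)

def cluster_intentions(features_by_intention, threshold=0.7):
    clusters = [{name} for name in features_by_intention]
    merged = True
    while merged:
        merged = False
        for i, j in combinations(range(len(clusters)), 2):
            if any(jaccard_distance(features_by_intention[a],
                                    features_by_intention[b]) < threshold
                   for a in clusters[i] for b in clusters[j]):
                clusters[i] |= clusters.pop(j)  # merge the close pair
                merged = True
                break
    return clusters

intents = {
    "check_balance": {"account", "balance"},
    "view_statement": {"account", "statement"},
    "reset_password": {"password", "reset"},
}
print(cluster_intentions(intents))
```

A VUI design recommendation could then group prompts or menus by cluster, though the abstract does not specify how.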
  • Patent number: 8930342
    Abstract: Multidimensional search capabilities are enabled on a non-PC (personal computer) device being utilized by a user. An original query submitted by the user via the non-PC device is received. A structured data repository is accessed to extract structured data that is available for the original query, where the extracted structured data represents attributes of the original query. The extracted structured data is provided to the user in the form of a hierarchical menu which allows the user to interactively modify the original query, such modification resulting in a revised query.
    Type: Grant
    Filed: April 14, 2014
    Date of Patent: January 6, 2015
    Assignee: Microsoft Corporation
    Inventors: Johnson Apacible, Mark Encarnacion, Aleksey Sinyagin
  • Patent number: 8924856
    Abstract: Provided is a method of providing a slide show. The method includes determining whether a first image to be displayed is an image photographed in a continuous photographing mode, when the first image is an image photographed in the continuous photographing mode, displaying the first image for a first time interval, and when the first image is not an image photographed in the continuous photographing mode, displaying the first image for a second time interval, wherein the first time interval and the second time interval are different from each other. The first time interval may be shorter than the second time interval.
    Type: Grant
    Filed: January 7, 2010
    Date of Patent: December 30, 2014
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Jae-myung Lee
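For illustration only (not the patented method), the interval rule this abstract describes reduces to a single conditional. The interval lengths and the per-image continuous-mode flag are assumptions.

```python
# Minimal sketch: burst (continuous-mode) shots get the shorter first
# interval; all other images get the longer second interval.
BURST_INTERVAL = 0.5   # seconds, assumed first time interval
NORMAL_INTERVAL = 3.0  # seconds, assumed second time interval

def display_time(image):
    return BURST_INTERVAL if image.get("continuous_mode") else NORMAL_INTERVAL

slides = [{"name": "burst_01", "continuous_mode": True},
          {"name": "portrait", "continuous_mode": False}]
print([display_time(s) for s in slides])  # [0.5, 3.0]
```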
  • Publication number: 20140380170
    Abstract: A method for receiving processed information at a remote device is described. The method includes transmitting from the remote device a verbal request to a first information provider and receiving a digital message from the first information provider in response to the transmitted verbal request. The digital message includes a symbolic representation indicator associated with a symbolic representation of the verbal request and data used to control an application. The method also includes transmitting, using the application, the symbolic representation indicator to a second information provider for generating results to be displayed on the remote device.
    Type: Application
    Filed: September 5, 2014
    Publication date: December 25, 2014
    Inventors: Gudmundur Hafsteinsson, Michael J. LeBeau, Natalia Marmasse, Sumit Agarwal, Dipchand Nishar
  • Publication number: 20140380169
    Abstract: Disclosed are methods for disambiguating an input phrase or group of words. An implementation may include receiving a phrase as an input to a processor. The received phrase may be presented on a display device. The received phrase may be determined to be ambiguous based on a threshold uncertainty in either a definition or a pronunciation related to the phrase. An indication may be provided that a word in the phrase is the cause of the ambiguity. A menu of words may be presented, each word adding at least one diacritic mark to a word in the received phrase in order to disambiguate it. A word from the menu of words may be selected and presented on the display device.
    Type: Application
    Filed: June 20, 2013
    Publication date: December 25, 2014
    Inventor: Mohamed S. Eldawy
  • Publication number: 20140372892
    Abstract: Embodiments of the present invention automatically register user interfaces with a voice control system. Registering the interface allows interactive elements within the interface to be controlled by a user's voice. A voice control system analyzes audio including voice commands spoken by a user and manipulates the user interface in response. The automatic registration of a user interface with a voice control system allows a user interface to be voice controlled without the developer of the application associated with the interface having to do anything. Embodiments of the invention allow an application's interface to be voice controlled without the application needing to account for states of the voice control system.
    Type: Application
    Filed: June 18, 2013
    Publication date: December 18, 2014
    Inventors: GERSHOM LOUIS PAYZER, NICHOLAS DORIAN RAPP, NALIN SINGAL, LAWRENCE WAYNE OLSON
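As a rough illustration only (not the patented system), automatic registration can be pictured as walking an interface's elements, registering each interactive element's label as a voice command, and dispatching utterances to the matching element. The element model and labels below are assumptions.

```python
# Hedged sketch: register interactive UI elements with a voice controller
# without the application author doing anything.
class VoiceControl:
    def __init__(self):
        self.commands = {}

    def register_interface(self, elements):
        for el in elements:
            if el.get("interactive"):
                self.commands[el["label"].lower()] = el

    def handle_utterance(self, text):
        el = self.commands.get(text.strip().lower())
        if el is not None:
            el["activated"] = True  # stand-in for invoking the element
            return el["label"]
        return None  # no registered element matches the utterance

ui = [{"label": "Play", "interactive": True},
      {"label": "Banner", "interactive": False}]
vc = VoiceControl()
vc.register_interface(ui)
print(vc.handle_utterance("play"))  # Play
```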
  • Patent number: 8913189
    Abstract: Audio data and video data are processed to determine one or more audible events and visual events, respectively. Contemporaneous presentation of the video data with audio data may be synchronized based at least in part on the audible events and the visual events. Audio processing functions, such as filtering, may be initiated for audio data based at least in part on the visual events.
    Type: Grant
    Filed: March 8, 2013
    Date of Patent: December 16, 2014
    Assignee: Amazon Technologies, Inc.
    Inventors: Richard William Mincher, Todd Christopher Mason
  • Publication number: 20140365896
    Abstract: Described herein are frameworks, devices and methods configured for enabling display for facility information and content, in some cases via touch/gesture controlled interfaces. Embodiments of the invention have been particularly developed for allowing an operator to conveniently access a wide range of information relating to a facility via, for example, one or more wall mounted displays. While some embodiments will be described herein with particular reference to that application, it will be appreciated that the invention is not limited to such a field of use, and is applicable in broader contexts.
    Type: Application
    Filed: June 9, 2014
    Publication date: December 11, 2014
    Inventors: John D. Morrison, Peter C. Davis, Graeme Laycock
  • Patent number: 8908050
    Abstract: An imaging apparatus includes an imaging unit, a field angle change unit, and a movement detection unit. The imaging unit includes a lens that forms an image of a subject and acquires a picture image by taking the image formed by the lens. The field angle change unit changes a field angle of the picture image acquired by the imaging unit. The movement detection unit detects a movement of the imaging apparatus. The field angle change unit changes the field angle of the picture image in accordance with a moving direction of the imaging apparatus when the movement detection unit detects the movement of the imaging apparatus.
    Type: Grant
    Filed: November 23, 2010
    Date of Patent: December 9, 2014
    Assignee: Olympus Imaging Corp.
    Inventors: Tatsuya Kino, Yoshinori Matsuzawa, Osamu Nonaka
  • Publication number: 20140351700
    Abstract: An apparatus may comprise at least one processor-readable non-transitory storage medium and at least one processor in communication with the at least one storage medium. The at least one medium may comprise at least one set of instructions for changing an audio-visual effect of a user interface on the apparatus. The at least one processor may be configured to execute the at least one set of instructions to obtain operating data associated with at least one of an acceleration input and acoustic input from a sensor of the terminal device; determine whether the operating data meet a preset condition; and replace a current audio-visual effect of a user interface (UI) with a selected audio-visual effect when the operating data meet the preset condition.
    Type: Application
    Filed: August 7, 2014
    Publication date: November 27, 2014
    Applicant: Tencent Technology (Shenzhen) Company Limited
    Inventors: Cheng FENG, Bo HU, Xi WANG, Ruiyi ZHOU, Zhipei WANG, Kai ZHANG, Xin QING, Huijiao YANG, Ying HUANG, Yulei LIU, Wei LI, Zhengkai XIE
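For illustration only (not the patented apparatus), the preset-condition check this abstract describes reduces to a threshold test on sensor data. The shake threshold and the effect names below are assumptions.

```python
# Minimal sketch: swap the UI's audio-visual effect when an acceleration
# reading meets an assumed preset condition.
SHAKE_THRESHOLD = 12.0  # m/s^2, assumed preset condition

def update_effect(current_effect, acceleration, selected_effect="sparkle"):
    if acceleration >= SHAKE_THRESHOLD:
        return selected_effect  # operating data met the condition
    return current_effect       # keep the current effect

print(update_effect("default", 15.3))  # sparkle
print(update_effect("default", 3.1))   # default
```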