Patents Examined by Jeffrey Gaffin
  • Patent number: 10078487
    Abstract: A list of notification items is received, the list including a plurality of notification items, wherein each respective one of the plurality of notification items is associated with a respective urgency value. An information item is detected. In some implementations, the information item is a communication (e.g., an email). In some implementations, the information item is a change in context of a user. Upon determining that the information item is relevant to the urgency value of a first notification item of the plurality of notification items, the urgency value of the first notification item is adjusted. Upon determining that the adjusted urgency value satisfies a predetermined threshold, a first audio prompt is provided to a user.
    Type: Grant
    Filed: March 14, 2014
    Date of Patent: September 18, 2018
    Assignee: Apple Inc.
    Inventors: Thomas R. Gruber, Donald W. Pitschel
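The flow in the abstract above (patent 10078487) boils down to adjusting a stored urgency value when a relevant information item arrives and prompting once a threshold is crossed. The following is a minimal Python sketch of that flow, not the patented implementation; the names (Notification, URGENCY_THRESHOLD, is_relevant) and the keyword-overlap relevance test are illustrative assumptions.

```python
from dataclasses import dataclass

URGENCY_THRESHOLD = 0.8  # hypothetical cutoff for playing the audio prompt

@dataclass
class Notification:
    message: str
    urgency: float  # 0.0 (low) to 1.0 (high)

def is_relevant(info_item, notification):
    # Placeholder relevance test: any shared keyword between the new item and the notification.
    return bool(set(info_item.lower().split()) & set(notification.message.lower().split()))

def process_information_item(info_item, notifications):
    for note in notifications:
        if is_relevant(info_item, note):
            note.urgency = min(1.0, note.urgency + 0.3)   # adjust the urgency value
            if note.urgency >= URGENCY_THRESHOLD:
                print(f"AUDIO PROMPT: {note.message}")    # stand-in for the first audio prompt

process_information_item("gate change for boston flight",
                         [Notification("flight to boston boarding soon", 0.6)])
```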
  • Patent number: 10079892
    Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for suggesting and inserting automated assistants in a graphical user interface for managing communication sessions. A system for suggesting an automated assistant generates a first vector describing a current context of a current communication session, and generates a comparison of the first vector and a second vector associated with a past context of an automated assistant in a past communication session. Then, if the comparison exceeds a similarity threshold, the system suggests the automated assistant to at least one user in the current communication session. Optionally, the system can predictively insert the automated assistant in a communication session if the comparison exceeds a similarity threshold. The graphical user interface for managing communication sessions displays automated assistants in a same manner as human participants.
    Type: Grant
    Filed: December 27, 2010
    Date of Patent: September 18, 2018
    Assignee: Avaya Inc.
    Inventors: Trung Dinh-Trong, Birgit Geppert, Frank Roessler
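Patent 10079892's comparison of a current-context vector against a past-context vector, gated by a similarity threshold, can be illustrated with a plain cosine-similarity check. The sketch below is a toy under stated assumptions, not the claimed method; the threshold value, helper names, and vectors are made up.

```python
import math

SIMILARITY_THRESHOLD = 0.75  # hypothetical value

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def maybe_suggest_assistant(current_context_vec, past_context_vec, assistant_name):
    score = cosine_similarity(current_context_vec, past_context_vec)
    if score > SIMILARITY_THRESHOLD:
        print(f"Suggesting assistant '{assistant_name}' (similarity={score:.2f})")

# Toy context vectors, e.g. topic weights for the current and a past communication session.
maybe_suggest_assistant([0.9, 0.1, 0.3], [0.8, 0.2, 0.4], "billing-bot")
```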
  • Patent number: 10073708
    Abstract: A system and method of providing visual indicators to manage peripheral devices is disclosed. In some embodiments, a graphical user interface environment is provided in which icons represent peripheral device objects. The appearance of the icons is modified based on attribute values associated with the device objects.
    Type: Grant
    Filed: March 14, 2008
    Date of Patent: September 11, 2018
    Assignee: S-PRINTING SOLUTION CO., LTD
    Inventor: Constantinos Kardamilas
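A rough illustration of the idea in patent 10073708 above: an icon's appearance derived from a peripheral device object's attribute values. The attribute names and icon labels below are hypothetical.

```python
def icon_appearance(device):
    # Choose how to draw the peripheral's icon from its attribute values.
    if device.get("error"):
        return "printer-icon-error-badge"
    if device.get("paper_level", 100) < 10:
        return "printer-icon-low-paper"
    return "printer-icon-normal"

print(icon_appearance({"error": False, "paper_level": 5}))
```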
  • Patent number: 10073607
    Abstract: A method of processing audio may include receiving, by a computing device, a plurality of real-time audio signals outputted by a plurality of microphones communicatively coupled to the computing device. The computing device may output to a display a graphical user interface (GUI) that presents audio information associated with the received audio signals. The one or more received audio signals may be processed based on a user input associated with the audio information presented via the GUI to generate one or more processed audio signals. The one or more processed audio signals may be output to, for example, one or more output devices such as speakers, headsets, and the like.
    Type: Grant
    Filed: July 1, 2015
    Date of Patent: September 11, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Lae-Hoon Kim, Erik Visser, Raghuveer Peri, Phuong Lam Ton, Jeremy Patrick Toman, Troy Schultz, Jimeng Zheng
  • Patent number: 10073532
    Abstract: A system for a spatial-gesture user interface employing grammatical rules at various levels. Various distinct subsets of gestemes can be concatenated in space and time to construct distinct gestures. Real-time spatial-gesture information measured by a spatial-gesture user interface is processed to recognize a sequence of specific gestemes and to determine that the user's execution of a gesture has been completed. The specific gesture rendered by the user is recognized according to the sequence of gestemes. Many additional features are then provided from this foundation, including gesture grammars, a structured-meaning gesture lexicon, imposed interpretations, context, and the use of gesture-rendering prosody. The invention can be used to provide a very general spatial-gesture grammar user interface for touchscreens, high-dimensional touch pads (HDTP), free-space cameras, and other user interfaces.
    Type: Grant
    Filed: September 9, 2016
    Date of Patent: September 11, 2018
    Assignee: NRI R&D PATENT LICENSING, LLC
    Inventor: Lester F. Ludwig
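The core recognition step described in patent 10073532 above, mapping a completed sequence of gestemes to a gesture, can be sketched as a simple lexicon lookup. This ignores the grammar, prosody, and context features the abstract mentions; all gesteme and gesture names are invented for illustration.

```python
# Map recognized gesteme sequences to named gestures (a tiny gesture "lexicon").
GESTURE_LEXICON = {
    ("down", "right"): "check-mark",
    ("left", "right", "left"): "shake",
    ("circle_cw",): "rotate",
}

def recognize_gesture(gesteme_sequence):
    """Return the gesture whose gesteme sequence matches the completed user input."""
    return GESTURE_LEXICON.get(tuple(gesteme_sequence), "unrecognized")

print(recognize_gesture(["down", "right"]))          # check-mark
print(recognize_gesture(["left", "right", "left"]))  # shake
```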
  • Patent number: 10067656
    Abstract: A method and apparatus are provided for application selection in virtual reality mode. The method includes receiving a selection of an application. The method also includes determining whether a display of a user equipment is in the virtual reality mode. The method also includes, responsive to the user equipment being in the virtual reality mode, determining whether the selected application is included in a grouping of applications. The grouping includes one or more applications related to the selected application. The method also includes, responsive to the selected application being included in a grouping of applications, executing the grouping of applications. The method also includes providing the executed grouping of applications to the display.
    Type: Grant
    Filed: July 1, 2015
    Date of Patent: September 4, 2018
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Sridhar Kocharlakota
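One way to picture the conditional behavior in patent 10067656 above is a launcher that expands a selected application into its group only when the device is in virtual reality mode. The group contents and application names below are assumptions.

```python
# Hypothetical grouping of related applications, keyed by the selected application.
APP_GROUPS = {
    "vr_video_player": ["vr_video_player", "subtitle_overlay", "spatial_audio"],
}

def launch(app, vr_mode):
    if vr_mode and app in APP_GROUPS:
        for member in APP_GROUPS[app]:          # execute the whole grouping of applications
            print(f"executing {member} for the VR display")
    else:
        print(f"executing {app} normally")

launch("vr_video_player", vr_mode=True)
```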
  • Patent number: 10061498
    Abstract: A graph display device includes an expression input unit, a graph display control unit, first and second display control units and a coefficient changed graph display control unit. The expression input unit recognizes a functional expression. The graph display control unit displays a graph corresponding to the recognized functional expression in a display part. The first display control unit displays a first user interface object for changing a numeric value set to a variable when a coefficient in the functional expression is the variable. The second display control unit displays a second user interface object for changing a numeric value as a constant when a coefficient in the functional expression is the constant. The coefficient changed graph display control unit displays a graph corresponding to the functional expression in which a numeric value changed by the first or second user interface object is set as a coefficient value.
    Type: Grant
    Filed: April 16, 2014
    Date of Patent: August 28, 2018
    Assignee: CASIO COMPUTER CO., LTD.
    Inventor: Kota Endo
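The effect described in patent 10061498 above, re-drawing a graph when a coefficient is changed through a UI control, reduces to re-evaluating the functional expression with the new coefficient value. The sketch below tabulates y = a*x + b instead of plotting it; the expression and values are arbitrary examples.

```python
def tabulate(a, b, xs=range(-2, 3)):
    """Re-tabulate y = a*x + b for a new coefficient value (stand-in for re-drawing the graph)."""
    return [(x, a * x + b) for x in xs]

print(tabulate(a=1, b=0))    # initial graph
print(tabulate(a=2.5, b=0))  # graph after the coefficient "a" is changed via a UI object
```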
  • Patent number: 10063408
    Abstract: Systems and methods are provided herein for enabling a first user to set up an alert that will notify the first user when the first user has caught up to a second user's progress in consuming media. These systems and methods ensure that the first user is informed, while consuming media, that they have caught up to the progress of the second user. By providing an alert while the first user is viewing media, the first user does not have to remember the second user's progress while viewing, and need not worry about passing the second user's progress without realizing it.
    Type: Grant
    Filed: December 22, 2015
    Date of Patent: August 28, 2018
    Assignee: Rovi Guides, Inc.
    Inventor: Ajit Shanware
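The alert condition in patent 10063408 above is essentially a comparison of two playback positions. A minimal sketch, assuming positions are measured in seconds with a small tolerance window, neither of which is specified by the abstract:

```python
def check_catch_up(first_user_position, second_user_position, tolerance=5.0):
    """Alert the first user when their playback position (seconds) reaches the second user's."""
    if first_user_position >= second_user_position - tolerance:
        print("Alert: you have caught up to your friend's progress.")

check_catch_up(first_user_position=1795.0, second_user_position=1800.0)
```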
  • Patent number: 10055118
    Abstract: An information processing method and an electronic device are provided. The method is applicable in the electronic device having a display unit. The electronic device is capable of running a voice application. Once the voice application is started, prompt information is displayed in a first display region of the display unit, for directing a user to operate the electronic device in a voice input mode by following the prompt information to implement an operation function corresponding to the prompt information. The method includes: detecting a first operation performed by the user on the first display region in a first operation mode different from the voice input mode; and in response to the first operation, triggering a first operation instruction and executing the operation function corresponding to the prompt information. Hence, the user may use the functions of the electronic device conveniently, improving the user experience.
    Type: Grant
    Filed: September 17, 2014
    Date of Patent: August 21, 2018
    Assignees: Beijing Lenovo Software Ltd., Lenovo (Beijing) Co., Ltd.
    Inventors: Guangjie Zhang, Haisheng Dai
  • Patent number: 10048831
    Abstract: Methods, systems, and computer readable media can be operable to facilitate the generation of a user interface displaying the devices associated with a local network. A client device may retrieve information associated with one or more devices associated with a common central device, local network, and/or subscriber. The client device may generate a user interface including one or more device objects organized along an ellipsoidal wireframe, wherein each device object represents an identified device. The user interface may include device identification and/or status information associated with each displayed device. Devices displayed within the user interface may be filtered based upon one or more parameters selected by a user. The client device may update and rearrange the displayed device objects based upon navigation commands received from a user via a control device.
    Type: Grant
    Filed: February 1, 2016
    Date of Patent: August 14, 2018
    Assignee: ARRIS Enterprises LLC
    Inventors: Marek Bugajski, Marcin Morgos
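The ellipsoidal arrangement mentioned in patent 10048831 above can be approximated by spacing device icons evenly around an ellipse. The sketch below only computes 2-D positions; the ellipse dimensions and device names are placeholders.

```python
import math

def ellipse_positions(n_devices, width=400.0, height=200.0):
    """Place n device icons evenly around an ellipse centred at the origin."""
    positions = []
    for i in range(n_devices):
        angle = 2 * math.pi * i / n_devices
        positions.append((width / 2 * math.cos(angle), height / 2 * math.sin(angle)))
    return positions

for device, (x, y) in zip(["router", "set-top box", "phone"], ellipse_positions(3)):
    print(f"{device}: ({x:.0f}, {y:.0f})")
```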
  • Patent number: 10048832
    Abstract: The present invention relates to a user interface in a device, in which first and second selection areas are displayed on a display. The first selection area can be an area for selecting an object. In response to user input being received that selects the first selection area and the second selection area, a first predetermined action is performed for the selected object in response to the user input selecting the first selection area before the second selection area, and a second predetermined action is performed for the selected object in response to the user input selecting the second selection area before the first selection area. The user input can be received by various methods, including a touch and/or drag event received through a touch screen.
    Type: Grant
    Filed: August 29, 2013
    Date of Patent: August 14, 2018
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: German Jose D' Jesus Bencci
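Patent 10048832 above keys the action on the order in which the two selection areas are chosen. A toy dispatcher makes the idea concrete; the specific actions ("copy", "delete") are invented, not taken from the patent.

```python
def handle_selection(order):
    """Dispatch on the order in which the two selection areas were touched."""
    if order == ("first", "second"):
        return "copy object"    # hypothetical first predetermined action
    if order == ("second", "first"):
        return "delete object"  # hypothetical second predetermined action
    return "no action"

print(handle_selection(("first", "second")))
print(handle_selection(("second", "first")))
```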
  • Patent number: 10050921
    Abstract: An emotion function chain may be generated from an email thread and displayed to a user in one or more windows on a computer display of a computer system. An email content analyzer may extract emotion indicators from the body of an email within the email thread. Using the emotion indicators, an attitude factor for an email may be determined. The attitude factors determined from each email within the email thread may be pictorially depicted in the emotion function chain. In response to a user interaction with a pictorial depiction of an attitude factor within the emotion function chain, a second window may be generated to display the body of the email used to calculate the attitude factor that was interacted with.
    Type: Grant
    Filed: March 27, 2015
    Date of Patent: August 14, 2018
    Assignee: International Business Machines Corporation
    Inventors: Song Bai, Ming Qun Chi, Hui Huang, Hui Liu, Xiang Xing Shi, Ang Yi
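The emotion-function-chain computation in patent 10050921 above can be caricatured as scoring each email in a thread against a small lexicon of emotion indicators. The lexicon, weights, and averaging rule below are assumptions for illustration only.

```python
import re

# Tiny lexicon of emotion indicators and hypothetical weights.
EMOTION_WEIGHTS = {"thanks": 1.0, "great": 1.0, "sorry": -0.5, "unacceptable": -2.0}

def attitude_factor(email_body):
    """Average the weights of the emotion indicators found in one email body."""
    words = re.findall(r"[a-z]+", email_body.lower())
    hits = [EMOTION_WEIGHTS[w] for w in words if w in EMOTION_WEIGHTS]
    return sum(hits) / len(hits) if hits else 0.0

thread = ["Thanks, this looks great!", "Sorry, the delay is unacceptable."]
print([attitude_factor(body) for body in thread])  # one attitude factor per email in the chain
```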
  • Patent number: 10048854
    Abstract: A user of a web application can perform a drag and drop operation from a first component of the web application to a second component of the web application. The drag and drop operation can include three actions. The first action can be initializing a drag of an object within a first component of a web application. The second action can be dragging the object from within the first component over a drop target located within a second component of the web application. The third action can be dropping the object onto the drop target located within the second component of the web application. One of the first and second components can be a web component, and the other component can be a visualization component. The first and second components of the web application can communicate with each other using a communication component of the web application.
    Type: Grant
    Filed: January 31, 2011
    Date of Patent: August 14, 2018
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventors: Hugh Zhang, Teck Hua Lee, Kevin Chow, Diar Ahmed, Prashant Singh
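The communication component described in patent 10048854 above acts as a relay between the two components of the web application. A bare-bones publish/subscribe sketch, with hypothetical event names and payloads, conveys the shape of that interaction:

```python
class CommunicationComponent:
    """Relays drag-and-drop events between the two components of a web application."""
    def __init__(self):
        self.listeners = []

    def publish(self, event, payload):
        for listener in self.listeners:
            listener(event, payload)

bus = CommunicationComponent()
bus.listeners.append(lambda event, payload: print(f"visualization component got {event}: {payload}"))
bus.publish("dragstart", {"object": "row-42"})                        # drag begins in the web component
bus.publish("drop", {"object": "row-42", "target": "chart-area"})     # drop lands in the visualization
```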
  • Patent number: 10037215
    Abstract: A method of interworking an application and a browser in a terminal includes receiving a user input through the browser, determining whether an application related to the user input is registered in an external device, determining, if the application is registered in the external device, whether the registered application is installed in the terminal, and, when the registered application is installed, running the installed application.
    Type: Grant
    Filed: March 7, 2012
    Date of Patent: July 31, 2018
    Assignee: Samsung Electronics Co., Ltd
    Inventor: Hyung-jin Seo
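Patent 10037215 above is a chain of checks: is a related application registered in the external device, is it installed on the terminal, and if so run it. A compact sketch, with made-up registry entries and application identifiers:

```python
REGISTERED_APPS = {"maps.example.com": "example.maps"}   # external registry: URL -> application id
INSTALLED_APPS = {"example.maps"}                        # applications installed on the terminal

def handle_browser_input(url):
    app_id = REGISTERED_APPS.get(url)                    # related application registered externally?
    if app_id and app_id in INSTALLED_APPS:              # and installed on this terminal?
        return f"running installed application {app_id}"
    return "staying in the browser"

print(handle_browser_input("maps.example.com"))
```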
  • Patent number: 10025486
    Abstract: Methods, apparatuses, computer program products, devices and systems are described that carry out detecting a first action of a user at a location in a real world field of view of an augmented reality device; displaying an augmented reality representation in response to at least one of a user input or detecting a first action of a user at a location in a real world field of view of an augmented reality device; moving the displayed augmented reality representation on a display of the augmented reality device according to at least one detected second action of the user; and registering the displayed augmented reality representation at a location in the display of the augmented reality device in response to at least one of a user input or moving a displayed augmented reality representation on a display of the augmented reality device according to at least one detected second action of the user.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: July 17, 2018
    Assignee: ELWHA LLC
    Inventors: Gene Fein, Royce A. Levien, Richard T. Lord, Robert W. Lord, Mark A. Malamud, John D. Rinaldo, Jr., Clarence T. Tegreene
  • Patent number: 10025378
    Abstract: Embodiments are disclosed herein that relate to selecting user interface elements via a periodically updated position signal. For example, one disclosed embodiment provides a method comprising displaying on a graphical user interface a representation of a user interface element and a representation of an interactive target. The method further comprises receiving an input of coordinates of the periodically updated position signal, and determining a selection of the user interface element if a motion interaction of the periodically updated position signal with the interactive target meets a predetermined motion condition.
    Type: Grant
    Filed: June 25, 2013
    Date of Patent: July 17, 2018
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Morgan Kolya Venable, Bernard James Kerr, Vaibhav Thukral, David Nister
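One common "predetermined motion condition" for a periodically updated position signal, in the spirit of patent 10025378 above, is dwelling inside the interactive target for enough consecutive samples. The dwell rule below is an assumption, not the claimed condition:

```python
def selects(samples, target_rect, min_dwell_samples=30):
    """Return True if the periodically updated position signal stays inside the target long enough."""
    x0, y0, x1, y1 = target_rect
    consecutive = 0
    for x, y in samples:                          # one (x, y) sample per update period
        if x0 <= x <= x1 and y0 <= y <= y1:
            consecutive += 1
            if consecutive >= min_dwell_samples:  # hypothetical "predetermined motion condition"
                return True
        else:
            consecutive = 0
    return False

print(selects([(50, 50)] * 40, target_rect=(40, 40, 60, 60)))
```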
  • Patent number: 10025871
    Abstract: A method of providing content includes detecting execution of a mark-all-read command associated with a specified stream. The specified stream includes two or more content feeds, wherein each content feed includes a set of content items published by a respective publication source. The method also includes recording a time of execution of the mark-all-read command and displaying content items associated with the specified stream. The displayed content items have associated timestamps, and content items having associated timestamps dated prior to the recorded time of execution are displayed in a visually distinctive format from content items having associated timestamps dated after the recorded time of execution.
    Type: Grant
    Filed: May 18, 2016
    Date of Patent: July 17, 2018
    Assignee: GOOGLE LLC
    Inventors: Benjamin G. Darnell, Justin Christopher Haugh
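The display rule in patent 10025871 above compares each content item's timestamp with the recorded execution time of the mark-all-read command. A short sketch, using a dimmed-versus-normal style as a stand-in for the "visually distinctive format":

```python
from datetime import datetime

mark_all_read_time = datetime(2018, 1, 1, 12, 0)   # recorded when the command executed

items = [
    {"title": "older post", "timestamp": datetime(2018, 1, 1, 9, 0)},
    {"title": "newer post", "timestamp": datetime(2018, 1, 1, 15, 0)},
]

for item in items:
    # Items dated before the recorded execution time render in a visually distinct (dimmed) style.
    style = "dimmed/read" if item["timestamp"] < mark_all_read_time else "normal/unread"
    print(f"{item['title']}: {style}")
```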
  • Patent number: 10019134
    Abstract: An edit processing apparatus for enhancing user operability. The edit processing apparatus has an output device for displaying a target to be edited; and a CPU that displays on the output device edit menus for the target in accordance with a relative positional relationship between a position of the target and a cursor position on the output device and a frequency of selection made at the cursor position in the past.
    Type: Grant
    Filed: July 2, 2014
    Date of Patent: July 10, 2018
    Assignee: TEAC CORPORATION
    Inventor: Kaname Hayasaka
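Patent 10019134 above chooses edit menus from the cursor's position relative to the target and how often each menu was selected there in the past. The policy below (rank by past selections, trim the list when the cursor is far away) is one hypothetical way to combine those two signals, not the patented one:

```python
import math

def choose_menus(menus, target_pos, cursor_pos, near_threshold=50.0):
    """Pick edit menus for a target from cursor distance and past selection counts."""
    distance = math.dist(target_pos, cursor_pos)
    ranked = sorted(menus, key=lambda m: m["past_selections"], reverse=True)
    # Hypothetical policy: show only the most frequently chosen menu when the cursor is far away.
    return ranked if distance <= near_threshold else ranked[:1]

menus = [{"name": "Cut", "past_selections": 2}, {"name": "Trim", "past_selections": 9}]
print([m["name"] for m in choose_menus(menus, target_pos=(100, 100), cursor_pos=(400, 300))])
```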
  • Patent number: 10013138
    Abstract: Method and apparatus for secure data entry. In the method a virtual data entry interface is generated, and is outputted so as to be readable only by the user. The user then enters data using the interface. The apparatus includes at least one display, or optionally a pair of displays that output a 3D stereo image. It also includes a data processor, and at least one sensor, or optionally a pair of sensors that capture 3D stereo data. The data processor generates a virtual data entry interface, and communicates it to the display or displays. The displays output the virtual interface such that it is only readable by the user. The sensor or sensors receives data entered by the user's actions, and send signals representing those actions to the processor. The processor then detects the data from the signals.
    Type: Grant
    Filed: October 22, 2012
    Date of Patent: July 3, 2018
    Assignee: Atheer, Inc.
    Inventor: Sleiman Itani
  • Patent number: 10013159
    Abstract: A driving support information display device includes a status display area controller, a menu display area controller, and a status information controller. The status display area controller manages a status display area having a plurality of status display sections arranged vertically in one side area of a display screen. The menu display area controller manages an upper menu display area having a plurality of selection button sections arranged side by side in an upper area of the display screen and a lower menu display area having a plurality of selection button sections arranged side by side in a lower area of the display screen. The status information controller displays status information in the status display section by providing to the status display area controller status information relating to a functional module assigned to the selection button section that has received a touch input.
    Type: Grant
    Filed: September 28, 2015
    Date of Patent: July 3, 2018
    Assignee: KUBOTA CORPORATION
    Inventors: Keiji Takahashi, Susumu Umemoto, Eiji Nishi, Yoshihiro Kushita