Abstract: An interface display method for a hiboard having a top display area and a dynamic message display area can include: displaying state information of a terminal on the dynamic message display area; displaying associated information of the state information on the top display area according to the state information; updating the associated information when the state information is updated; and displaying preset information on the top display area when the state information is not updated within a preset time.
Type:
Grant
Filed:
November 30, 2019
Date of Patent:
June 15, 2021
Assignee:
BEIJING XIAOMI MOBILE SOFTWARE CO., LTD.
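The fallback behavior described in the abstract above can be sketched as a simple timeout check. The function name, parameters, and the 30-second default are all hypothetical illustrations, not taken from the patent:

```python
def top_area_content(state_updated_at, now, associated_info,
                     preset_info, timeout_seconds=30.0):
    """Choose what the top display area shows.

    While the state information keeps updating, its associated
    information is shown; once no update arrives within the preset
    time, the display falls back to the preset information.
    """
    if now - state_updated_at <= timeout_seconds:
        return associated_info
    return preset_info
```

In a real implementation the timestamps would come from a system clock and the check would run on each refresh of the display area.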
Abstract: Apparatus, system and computer-implemented method for arranging, organising and mapping ideas or planning processes in a graphical workspace for idea management. This includes a graphical user interface (GUI) with an idea map window and a word processor document window. The idea map represents ideas as a hierarchical network of nodes and links. Selection of a subset of the ideas to be shown as a separate idea map layer, wherein the ideas of the subset have different positions, different links and relationships to the ideas displayed in the main idea map, thus creating a different layout and mapping representation; and display a separate word processor document associated with only the ideas of the subset and the relationships inside the separate idea map layer.
Abstract: Systems and methods for preloading an amount of content based on user scrolling are disclosed. A body of content may be presented that takes up a certain amount of display space within a graphical user interface. Scroll information characterizing user scrolling within the graphical user interface may be obtained. A portion of the body of content outside a field of view of the graphical user interface may be determined based on the scroll information.
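One plausible reading of the scroll-based preloading above is to predict how far the user will scroll and fetch that much off-screen content ahead of time. The function below is a minimal sketch under that assumption; all names and the lookahead window are hypothetical:

```python
def preload_amount(scroll_top, viewport_height, content_height,
                   scroll_velocity, lookahead_seconds=2.0):
    """Estimate how many pixels of off-screen content to preload.

    A faster scroll implies the user will reach off-screen content
    sooner, so more of it is fetched ahead of time.
    """
    # Content below the current field of view.
    below_view = max(0.0, content_height - (scroll_top + viewport_height))
    # Distance the user is predicted to cover in the lookahead window.
    predicted = abs(scroll_velocity) * lookahead_seconds
    # Never preload more than actually exists off-screen.
    return min(below_view, predicted)
```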
Abstract: Implementations generally relate to a method of adding asynchronous validation and conversion capabilities to web components by adding an attribute to an existing web component accessed through, for example, a GUI. The attribute specifies what type of validation or formatting conversion is expected. The web component may provide feedback on errors or changes in an interactive fashion for the user based on web service data. Feedback may be provided to the user asynchronously, without waiting for all user inputs to be validated or converted before displaying the errors or changes to the user. The web component remains completely interactive to the user even while validation or conversion is in progress on the server. Furthermore, the web component handles multiple user inputs and displays only the validation or conversion results from the last user input.
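The "only display results from the last user input" behavior above is commonly implemented with a monotonically increasing sequence token so that late-arriving results for superseded inputs are discarded. The class below is a hedged sketch of that idea; the names and API are illustrative, not from the patent:

```python
class LastInputValidator:
    """Surface only the validation result for the most recent input.

    Results for superseded inputs are silently discarded, so
    out-of-order async completions never overwrite the latest feedback.
    """

    def __init__(self):
        self._seq = 0
        self.displayed = None  # feedback currently shown to the user

    def submit(self, value):
        """Record a new user input; returns a token for its result."""
        self._seq += 1
        return self._seq, value

    def complete(self, token, feedback):
        """Deliver an async validation result; stale tokens are ignored."""
        if token == self._seq:
            self.displayed = feedback
            return True
        return False
```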
Abstract: A system for providing information technology (IT) assistance packages enables a graphical user interface that can be launched from a single application icon and that can be customised on an individual user basis to provide a locally-branded system for servicing the user's IT needs.
Abstract: Provided is an information processing apparatus including: a first information acquisition unit configured to acquire first information indicating behavior of at least one user; a second information acquisition unit configured to acquire second information on the at least one user, the second information being different from the first information; and a display control unit configured to display, in a display unit, a user object which is configured based on the first information and represents the corresponding at least one user and a virtual space which is configured based on the second information and in which the user object is arranged.
Abstract: In an example embodiment, a request that includes a first image is received. A second image and a description are accessed from an item listing. An item identifier that corresponds to the second image is parsed from the description. A first edge in the first image and a second edge in a second image are detected. A match between the first image and the second image is determined based on the detection. The first image is associated with the item identifier. Item information corresponding to the item identifier is accessed from web pages. The item information is then transmitted.
Type:
Grant
Filed:
October 28, 2016
Date of Patent:
March 23, 2021
Assignee:
eBay Inc.
Inventors:
John Tapley, Eric J. Farraro, Raghav Gupta, Roopnath Grandhi
Abstract: A virtual reality network provides access to a number of virtual reality representations, each virtual reality representation representing a location in a virtual universe and defined by VR data stored on the network. The VR data can be in a simplified data format and include data from an intelligent personal assistant and knowledge navigator (IPAKN). The IPAKN receives queries about a VR representation of a location and generates a new VR data set based on information downloaded from web sources about the location. A database server that provides access to the VR data is updated with the new VR data set.
Type:
Grant
Filed:
August 14, 2014
Date of Patent:
March 16, 2021
Assignee:
Sony Interactive Entertainment America LLC
Abstract: A method may include receiving, via a processor, image data associated with a user's surroundings and generating, via the processor, a visualization that may include a virtual industrial automation device. The virtual industrial automation device may depict a virtual object within image data, and the virtual object may correspond to a physical industrial automation device. The method may include displaying, via the processor, the visualization via an electronic display and detecting, via the processor, a gesture in image data that may include the user's surroundings and the visualization. The gesture may be indicative of a request to move the virtual industrial automation device. The method may include tracking, via the processor, the user's movement, generating, via the processor, a visualization that may include an animation of the virtual industrial automation device moving based on the user's movement, and displaying, via the processor, the visualization via the electronic display.
Type:
Grant
Filed:
September 26, 2018
Date of Patent:
March 9, 2021
Assignee:
Rockwell Automation Technologies, Inc.
Inventors:
Thong T. Nguyen, Paul D. Schmirler, Timothy T. Duffy
Abstract: The invention provides a graphical user interface implemented on a computer including an information area for displaying to a user at the computer inspection status information in connection with one or more components of a linear asset infrastructure. The graphical user interface also includes a control component operable by the user at the computer to cause the graphical user interface to display additional information on the one or more components of the linear asset infrastructure.
Type:
Grant
Filed:
November 8, 2019
Date of Patent:
March 9, 2021
Assignee:
CANADIAN NATIONAL RAILWAY COMPANY
Inventors:
Dwight Tays, David Lilley, Brian Abbott
Abstract: Methods and devices for determining a head movement are provided. A method comprises: acquiring, in response to a head movement performed by a user, a piece of electrooculographic information of the user; and determining information related to the head movement according to the piece of electrooculographic information and at least one piece of reference information. The head movement can be identified according to electrooculographic information. For some devices integrated with electrooculographic sensors, the electrooculographic sensor can be reused to collect the electrooculographic information, thereby reducing implementation costs.
Abstract: A screen capture method includes the following steps: obtaining a screenshot instruction for a target page; obtaining, according to the screenshot instruction, a screenshot of an area currently displayed on the target page; covering the area currently displayed on the target page with the screenshot for display; changing the area covered by the screenshot on the target page to a designated area on the target page; obtaining a screenshot of the designated area; and restoring the target page to the area displayed before the target page was changed to the designated area.
Type:
Grant
Filed:
May 13, 2019
Date of Patent:
February 23, 2021
Assignee:
TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
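The stepwise capture-and-restore flow in the abstract above can be sketched as scrolling through designated areas, capturing each, and restoring the original position. The `page` object and its methods (`height`, `scroll_top`, `scroll_to`, `screenshot`) are hypothetical stand-ins for whatever screen-capture API a real implementation would use:

```python
def capture_full_page(page, viewport_height):
    """Capture a page taller than the viewport, region by region.

    Scrolls through successive designated areas, captures each one,
    and restores the page to its original position afterward.
    """
    original_top = page.scroll_top
    shots = []
    y = 0
    while y < page.height:
        page.scroll_to(y)                # change the displayed area
        shots.append(page.screenshot())  # capture the designated area
        y += viewport_height
    page.scroll_to(original_top)         # restore the original position
    return shots
```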
Abstract: An electronic apparatus, computer-readable recording medium, and method of providing a plurality of work group objects are provided. The method includes providing a plurality of work group objects respectively representing a plurality of work groups, providing a plurality of window objects in response to a selection of a work group object and representing a plurality of windows, providing a window in response to a selection of a window object, and providing an object representing windows corresponding to all of the window objects associated with one of the work groups.
Abstract: A method and system provide personalized search results to users of a data management system. The method and system receive a search query from a user and generate initial search results including a plurality of assistance documents relevant to the query data. The method and system utilize natural language analysis and machine learning processes to analyze the query data, user attributes data, and the assistance documents in order to generate personalized previews of the assistance documents for the user. The method and system output personalized search results to the user including the personalized previews of the assistance documents.
Type:
Grant
Filed:
April 19, 2018
Date of Patent:
February 16, 2021
Assignee:
Intuit Inc.
Inventors:
Igor A. Podgorny, Benjamin Indyk, Ling Feng Wei
Abstract: The present invention is directed to a system and method for providing information during content breakpoints in a virtual universe. The system comprises a placement engine configured to detect a content breakpoint within a virtual universe, which is defined as at least one of a login process, a logoff process, a teleportation, a wait state, and during any point where a user changes information streams in the virtual universe. The system also comprises an insertion resolution engine configured to create a list of prioritized information to present to a user within the virtual universe and an information definition engine configured to present the prioritized information to the user of the virtual universe during the content breakpoint.
Type:
Grant
Filed:
November 28, 2018
Date of Patent:
February 2, 2021
Assignee:
INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors:
Sheila E. Allen, Christopher J. Dawson, Rick A. Hamilton, II, Clifford A. Pickover
Abstract: A computer is programmed to receive data from an occupant wearable device and detect a movement of the wearable device relative to the vehicle steering wheel based on a movement classifier. The movement classifier is created based on ground truth data indicating the movement of the wearable device while an occupant's hand contacts the vehicle steering wheel. The computer is further programmed to cause an action in the vehicle according to the detected movement.
Type:
Grant
Filed:
October 24, 2016
Date of Patent:
January 12, 2021
Assignee:
FORD GLOBAL TECHNOLOGIES, LLC
Inventors:
Yifan Chen, Abhishek Sharma, Qianyi Wang
Abstract: An estimation results display system is provided that displays an estimation result so that a person can intuitively recognize at a glance which learning model was selected when deriving the estimation result. An input unit receives input of information associating an estimation result with information indicating the learning model used when deriving the estimation result. A display unit displays a graph that represents the estimation result by a symbol, in which the type of the symbol changes depending on the learning model corresponding to the estimation result.
Abstract: Techniques in this disclosure may provide a user interface that concurrently displays multiple panels which provide visualization of emergency call data of a law enforcement agency. The user interface can provide a high-level overview of emergency calls in a geographical area. Each panel in the user interface can provide visualization of the emergency calls and/or statistics relating to the calls. A user can customize which panels to include in the user interface and/or customize settings for each panel. The user may apply various types of filters to the data displayed in the user interface, and the panels can update the visualizations according to the filters. The user interface can also provide the ability to show data at various levels of detail within the same user interface or panel. The techniques in the disclosure can provide a convenient, digestible overview of tactical and/or strategic data in a single user interface.
Abstract: A plurality of sharing groups are available to a user, each mapping to a corresponding set of users. While a first sharing group is the currently-selected sharing group, a first one or more activations of a capture control is detected. Based on the first sharing group being the currently-selected sharing group when the first one or more activations occurred, one or more first digital media items captured in response are automatically shared with a first set of users corresponding to the first sharing group. While a second sharing group is the currently-selected sharing group, a second one or more activations of the capture control is detected. Based on the second sharing group being the currently-selected sharing group when the second one or more activations occurred, one or more second digital media items captured in response are automatically shared with a second set of users corresponding to the second sharing group.
Type:
Grant
Filed:
April 22, 2019
Date of Patent:
December 8, 2020
Assignee:
Western Digital Technologies, Inc.
Inventors:
Roger Bodamer, Chris Bourdon, Laurent Baumann, Daniel Feldman
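The group-routing logic in the abstract above reduces to looking up the currently selected sharing group at capture time and sharing with its mapped set of users. The class below is an illustrative sketch; all names are hypothetical:

```python
class CaptureSharing:
    """Route captured media to the users of the currently selected group."""

    def __init__(self, groups):
        self.groups = groups   # sharing group name -> set of users
        self.selected = None   # currently-selected sharing group
        self.shared = []       # log of (item, recipients) pairs

    def select_group(self, name):
        self.selected = name

    def capture(self, item):
        """Activating the capture control automatically shares the item
        with the set of users mapped to the selected sharing group."""
        recipients = self.groups[self.selected]
        self.shared.append((item, recipients))
        return recipients
```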
Abstract: A mobile terminal tap event recognition method and apparatus are disclosed. The method includes: recording a plurality of touch events detected by a touch screen of a mobile terminal by recording an event type and a timestamp of each touch event; forming one or more completed groups of touch events based on the event types and the timestamps of the plurality of touch events; determining, in response to a tap event and according to the plurality of touch events, whether the tap event takes place in a page scrolling process; and cancelling, in response to a determination that the tap event takes place in the page scrolling process, the tap event.
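The grouping-and-cancellation flow in the abstract above can be sketched in two steps: partition the (type, timestamp) stream into completed down-to-up groups, then cancel a tap that lands close to a group containing move events (a scroll gesture). The functions, event names, and the 0.3-second window are illustrative assumptions, not from the patent:

```python
def group_touch_events(events):
    """Split a stream of (type, timestamp) touch events into completed
    groups, each running from a 'down' through its matching 'up'."""
    groups, current = [], []
    for ev_type, ts in events:
        if ev_type == "down":
            current = [(ev_type, ts)]
        elif current:
            current.append((ev_type, ts))
            if ev_type == "up":
                groups.append(current)
                current = []
    return groups

def tap_during_scroll(groups, tap_ts, scroll_window=0.3):
    """Cancel a tap that lands within `scroll_window` seconds of a
    completed group containing 'move' events (a scroll gesture)."""
    for group in groups:
        types = [t for t, _ in group]
        if "move" in types:
            last_ts = group[-1][1]
            if abs(tap_ts - last_ts) <= scroll_window:
                return True  # tap occurred during scrolling: cancel it
    return False
```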