Patents Examined by Yi Wang
  • Patent number: 11107270
    Abstract: A method of deriving one or more medical scene model characteristics for use by one or more software applications is disclosed. The method includes receiving one or more sensor data streams. Each sensor data stream of the one or more sensor data streams includes position information relating to a medical scene. A medical scene model including a three-dimensional representation of a state of the medical scene is dynamically updated based on the one or more sensor data streams. Based on the medical scene model, the one or more medical scene model characteristics are derived.
    Type: Grant
    Filed: November 7, 2018
    Date of Patent: August 31, 2021
    Assignee: Siemens Healthcare GmbH
    Inventors: Klaus J. Kirchberg, Vivek Kumar Singh, Terrence Chen
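The abstract above describes a pipeline in which position-bearing sensor streams continuously update a 3D scene model, from which characteristics are then derived. A minimal sketch of that flow, with every class, field, and value invented for illustration (the patent does not specify an API):

```python
import math

class MedicalSceneModel:
    """Toy 3D scene model: tracks the latest position of each entity."""

    def __init__(self):
        self.positions = {}  # entity id -> (x, y, z)

    def update(self, sensor_frame):
        # Each frame of a sensor stream carries position information for entities.
        for entity_id, xyz in sensor_frame.items():
            self.positions[entity_id] = xyz

    def derive_characteristic(self, a, b):
        # Example derived characteristic: distance between two tracked entities.
        return math.dist(self.positions[a], self.positions[b])

model = MedicalSceneModel()
for frame in [{"c_arm": (0.0, 0.0, 1.5), "table": (0.0, 0.0, 0.0)},
              {"c_arm": (0.2, 0.0, 1.4)}]:          # streamed sensor data
    model.update(frame)

print(model.derive_characteristic("c_arm", "table"))  # e.g. clearance in metres
```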
  • Patent number: 11107186
    Abstract: A method is implemented at an electronic device for displaying output from an application. The electronic device includes a display module and an application. The application sends to the display module a request to display output on the fixed orientation display. The display module determines whether the application is able to scale the output from the application to fit the fixed orientation display. In accordance with a determination that the application is able to scale the output, the electronic device causes the application to receive information concerning the fixed orientation display from the display module and scale the output for display on the fixed orientation display according to the information. In accordance with a determination that the application is not able to scale the output, the display module scales the output received from the application, thereby enabling the output of the application to be displayed on the fixed orientation display.
    Type: Grant
    Filed: September 4, 2020
    Date of Patent: August 31, 2021
    Assignee: Google LLC
    Inventors: Patrick Brady, Dianne Hackborn, Jason Bayer
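The display-module abstract above hinges on one branch: if the application can scale its own output for the fixed-orientation display, it is given the display information and scales itself; otherwise the display module scales the output for it. A rough, self-contained sketch of that branch with hypothetical class and method names:

```python
class FixedOrientationDisplay:
    def __init__(self, width, height):
        self.info = {"width": width, "height": height}

    def scale(self, output, info):
        # Display module falls back to scaling the app's unscaled output itself.
        return f"{output} scaled by display to {info['width']}x{info['height']}"

class App:
    def __init__(self, can_scale):
        self._can_scale = can_scale

    def can_scale(self):
        return self._can_scale

    def render(self, display_info=None):
        if display_info:
            return f"output scaled by app to {display_info['width']}x{display_info['height']}"
        return "unscaled output"

def display_output(app, display):
    """Branch described in the abstract: who performs the scaling?"""
    if app.can_scale():
        return app.render(display.info)                    # app scales using display info
    return display.scale(app.render(), display.info)       # display module scales instead

display = FixedOrientationDisplay(1280, 720)
print(display_output(App(can_scale=True), display))
print(display_output(App(can_scale=False), display))
```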
  • Patent number: 11100901
    Abstract: Provided are a method and apparatus for controlling rendering of layers, and a terminal. The method includes the following. Layer attribute information of a current layer rendered by an application is obtained, where the current layer has a specified type. A target frame rate of rendering is determined according to the layer attribute information of the current layer. The application is controlled to render, according to the target frame rate of rendering, a layer to-be-rendered of the specified type.
    Type: Grant
    Filed: September 10, 2019
    Date of Patent: August 24, 2021
    Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
    Inventors: Deliang Peng, Yongpeng Yi, Shengjun Gou, Xiaori Yuan, Gaoting Gan, Zhiyong Zheng, Hai Yang
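The frame-rate control described above reduces to a lookup from layer attribute information to a target rendering rate for layers of the specified type. A small illustrative mapping; the attribute names and rates are assumptions, not values from the patent:

```python
# Hypothetical layer-attribute -> target frame rate policy.
TARGET_FPS = {
    ("video", "fullscreen"): 30,   # full-screen video does not need UI rates
    ("video", "windowed"): 30,
    ("animation", "fullscreen"): 60,
    ("animation", "windowed"): 60,
    ("static", "fullscreen"): 10,  # mostly static layers can render slowly
    ("static", "windowed"): 10,
}

def target_frame_rate(layer_attributes, default_fps=60):
    """Pick the rate at which the application should render this layer type."""
    key = (layer_attributes.get("content"), layer_attributes.get("mode"))
    return TARGET_FPS.get(key, default_fps)

print(target_frame_rate({"content": "video", "mode": "fullscreen"}))  # 30
print(target_frame_rate({"content": "game", "mode": "fullscreen"}))   # 60 (default)
```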
  • Patent number: 11080787
    Abstract: A method may include receiving image data representative of cash acquired via one or more image sensors, identifying a first currency depicted in the cash based on a plurality of images associated with a plurality of currencies, and determining a currency conversion rate between the first currency and a second currency. The method may also include generating a visualization representative of a currency value of the cash in the second currency based on the currency conversion rate and overlaying the visualization on the image data.
    Type: Grant
    Filed: May 11, 2020
    Date of Patent: August 3, 2021
    Assignee: United Services Automobile Association (USAA)
    Inventors: Carlos JP Chavez, David Jason Anderson James, Rachel Elizabeth Csabi, Quian Jones, Andrea Marie Richardson
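The currency-overlay method above combines three steps: recognize the currency shown in the image, convert its value at the current rate, and overlay the converted amount on the frame. A toy version of the conversion and label step; the detection and rates are stubbed and nothing below comes from the patent itself:

```python
# Stub rates; a real system would fetch live conversion rates.
CONVERSION_RATES = {("USD", "EUR"): 0.92, ("EUR", "USD"): 1.09}

def identify_currency(image_data):
    # Placeholder for the image-based classifier described in the abstract.
    return "USD", 40.00   # pretend the frame shows two $20 bills

def build_overlay(image_data, target_currency):
    source_currency, amount = identify_currency(image_data)
    rate = CONVERSION_RATES[(source_currency, target_currency)]
    converted = amount * rate
    # The "visualization" is reduced here to a text label to overlay on the frame.
    return f"{amount:.2f} {source_currency} = {converted:.2f} {target_currency}"

print(build_overlay(image_data=b"...", target_currency="EUR"))
```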
  • Patent number: 11069091
    Abstract: Communications devices and methods perform spatial, visual content and a separate preview of other content apart from the performed content. Content may include 3-D performances or AR content. Immersive visual content may be received by the communications device and simplified into transcript cells and/or performed render nodes based on metadata, visual attributes, and/or capabilities of the communications device for performance. Render nodes may preview other content, which is performable and selectable with ease from the communications device. Devices may perform both a piece of content and display, in context, render nodes for other visual content, as well as buffer and prepare unseen other content such that a user may seamlessly preview, select, and perform other visual content. Example GUIs may arrange nodes at a distance or arrayed along a selection line in the same coordinates as performed visual content. Users may input commands to move between or modify the nodes.
    Type: Grant
    Filed: August 19, 2019
    Date of Patent: July 20, 2021
    Inventor: Patrick S. Piemonte
  • Patent number: 11069100
    Abstract: The present disclosure discloses an intelligent interactive interface, comprising: an interface underlayer drawn from a trajectory formed by measurement; and a plurality of identifications disposed on the interface underlayer, where each of the identifications corresponds to an external device, information of the external device is uploaded in real time, displayed on the interface underlayer, and can be stored on a server, and a mapping relationship is established between the information of each external device and its corresponding identification. A terminal apparatus is connected to the external devices, displays the interface underlayer and the identifications, and controls and/or exchanges information with the external devices, wherein the information of the external devices is displayed in real time on the terminal apparatus through the identifications, and the identifications can be added or deleted in real time.
    Type: Grant
    Filed: January 8, 2016
    Date of Patent: July 20, 2021
    Assignee: NORTHWEST INSTRUMENT INC.
    Inventors: Yufei Wei, Xin Shi, David Xing
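The core of the abstract above is a live mapping between on-screen identifications and external devices, with identifications addable and removable at run time. A bare-bones sketch of that mapping; all names are invented for illustration:

```python
class InteractiveInterface:
    """Toy model: identifications on an underlayer mapped to external devices."""

    def __init__(self, underlayer):
        self.underlayer = underlayer      # e.g. a site plan drawn from measurement
        self.identifications = {}         # identification id -> latest device info

    def add_identification(self, ident_id, device_info):
        self.identifications[ident_id] = device_info

    def remove_identification(self, ident_id):
        self.identifications.pop(ident_id, None)

    def update_device(self, ident_id, device_info):
        # Information uploaded by the device in real time refreshes its identification.
        self.identifications[ident_id] = device_info

ui = InteractiveInterface(underlayer="site plan")
ui.add_identification("pump-1", {"status": "idle"})
ui.update_device("pump-1", {"status": "running", "rpm": 1450})
print(ui.identifications)
ui.remove_identification("pump-1")
```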
  • Patent number: 11062677
    Abstract: Devices and methods for automatic application of mapping functions to video signals based on inferred parameters are provided. In one example, a method, including initiating display of content based on a video signal being processed by a device, is provided. The method may further include in response to at least a first change in an intensity of ambient light or a second change in a color of the ambient light subsequent to the initiating of the display of the content based on the video signal, selecting a first mapping function applicable to pixels corresponding to frames of the video signal based at least on a first inferred parameter from a selected machine learning model. The method may further include automatically applying the first mapping function to a first plurality of pixels corresponding to a first set of frames of the video signal.
    Type: Grant
    Filed: May 12, 2020
    Date of Patent: July 13, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Abo Talib Mafoodh, Mehmet Kucukgoz, Holly H. Pollock
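The display-mapping method above reacts to changes in ambient light intensity or color by selecting a mapping function from an inferred parameter and applying it to the frame's pixels. A simplified sketch using a gamma curve as a stand-in mapping function; the actual functions and the machine learning model are not specified at this level in the abstract:

```python
def infer_gamma(ambient_lux):
    # Stand-in for the machine-learning model: brighter rooms get a lower gamma.
    return 2.2 if ambient_lux < 200 else 1.8

def apply_mapping(frame, gamma):
    # frame: list of pixel intensities in [0, 1]; apply the selected mapping function.
    return [round(p ** (1.0 / gamma), 3) for p in frame]

frame = [0.0, 0.25, 0.5, 0.75, 1.0]
print(apply_mapping(frame, infer_gamma(ambient_lux=50)))    # dim room
print(apply_mapping(frame, infer_gamma(ambient_lux=800)))   # bright room
```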
  • Patent number: 11056076
    Abstract: A method for controlling a screen backlight and a terminal are provided. The method includes the following. When a system wake-up event is detected via system wake-up meta service of the terminal, whether a TouchDown event exists is detected via FingerService of the terminal, where the TouchDown event is generated when a touch operation on a fingerprint sensor is detected. When the TouchDown event is detected via the FingerService, whether the screen backlight is lit up is detected upon elapse of a preset period, where the preset period is longer than duration of fingerprint unlocking processing performed in response to the TouchDown event by the terminal. When it is detected that the screen backlight is lit up, the system wake-up event is discarded.
    Type: Grant
    Filed: September 10, 2019
    Date of Patent: July 6, 2021
    Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
    Inventors: Jian Wang, Zongjun Li
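The backlight-control abstract above describes a small state machine: on a system wake-up event, check whether a fingerprint TouchDown event exists, wait a preset period longer than the fingerprint-unlock time, and discard the wake-up event if the backlight is already lit. A schematic version in which the timing and decision values are simplified placeholders:

```python
import time

PRESET_PERIOD_S = 0.3   # assumed; must exceed the fingerprint-unlock duration

def handle_wake_up_event(touch_down_detected, backlight_is_lit):
    """Decide whether the wake-up event should light the screen or be discarded.

    touch_down_detected: did the fingerprint service see a touch on the sensor?
    backlight_is_lit: callable polled after the preset period elapses.
    """
    if not touch_down_detected:
        return "light screen for wake-up event"
    time.sleep(PRESET_PERIOD_S)          # give fingerprint unlock time to finish
    if backlight_is_lit():
        return "discard wake-up event"   # unlock already lit the screen
    return "light screen for wake-up event"

print(handle_wake_up_event(True, backlight_is_lit=lambda: True))
print(handle_wake_up_event(False, backlight_is_lit=lambda: False))
```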
  • Patent number: 11042749
    Abstract: The disclosed computer-implemented method may include receiving, from devices in an environment, real-time data associated with the environment and determining, from the real-time data, current object data for the environment. The current object data may include both state data and relationship data for objects in the environment. The method may also include determining object deltas between the current object data and prior object data from an event graph. The prior object data may include prior state data and prior relationship data for the objects. The method may include detecting an unknown state for one of the objects, inferring a state for the object based on the event graph, and updating the event graph based on the object deltas and the inferred state. The method may further include sending updated event graph data to the devices. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: March 18, 2020
    Date of Patent: June 22, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Richard Andrew Newcombe, Jakob Julian Engel, Julian Straub, Thomas John Whelan, Steven John Lovegrove, Yuheng Ren
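The event-graph method above compares current object states and relationships against a prior snapshot, infers any unknown states from the graph, and applies the resulting deltas. A stripped-down version of the delta computation; the real event graph is far richer and every structure here is an assumption:

```python
def compute_deltas(prior, current):
    """Return the per-object changes between two snapshots of {obj: state}."""
    deltas = {}
    for obj, state in current.items():
        if prior.get(obj) != state:
            deltas[obj] = {"old": prior.get(obj), "new": state}
    return deltas

def infer_unknown(prior, current):
    # If a device stops reporting an object's state, fall back to the prior state.
    return {obj: (state if state is not None else prior.get(obj))
            for obj, state in current.items()}

prior   = {"door": "closed", "lamp": "off"}
current = {"door": "open",   "lamp": None}      # lamp state unknown this tick
current = infer_unknown(prior, current)
print(compute_deltas(prior, current))           # {'door': {'old': 'closed', 'new': 'open'}}
```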
  • Patent number: 11030818
    Abstract: Systems and methods for presenting virtual-reality information in a vehicular environment are disclosed herein. One embodiment receives, at a first vehicle, a set of presentation attributes for a second vehicle that is in an external environment of the first vehicle, the set of presentation attributes for the second vehicle corresponding to a virtual vehicle that is different from the second vehicle and within a same vehicle category as the second vehicle; and presents to an occupant of the first vehicle, in a virtual-reality space, the second vehicle in accordance with the received set of presentation attributes for the second vehicle while the second vehicle is visible from the first vehicle in the external environment of the first vehicle.
    Type: Grant
    Filed: November 19, 2019
    Date of Patent: June 8, 2021
    Assignee: Toyota Motor Engineering & Manufacturing North America, Inc.
    Inventors: Timothy Wang, Roger Akira Kyle, Bryan E. Yamasaki
  • Patent number: 11000250
    Abstract: The present invention relates to visualizing vasculature structure.
    Type: Grant
    Filed: January 3, 2018
    Date of Patent: May 11, 2021
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Vincent Maurice Andre Auvray, Pierre Henri Lelong, Raul Florent
  • Patent number: 11003166
    Abstract: In an example, a method includes receiving a first data model of an object to be generated in additive manufacturing, at a processor. Using the processor, a second data model may be determined. Determining the second data model may include generating, for each of a plurality of contiguous, non-overlapping sub-volumes of a volume containing the object, a sub-volume octree characterising the sub-volume and having a root node. Determining the second data model may further include generating a volume octree characterising the volume containing the object, the volume octree characterising in its lowest nodes the root nodes of the sub-volume octrees.
    Type: Grant
    Filed: October 12, 2016
    Date of Patent: May 11, 2021
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Alex Carruesco Llorens, Alvar Vinacua, Pere Brunet, Sergio Gonzalez, Jordi Gonzalez Rogel, Marc Comino, Josep Giralt Adroher, Lluis Abello Rosello, Sebastia Cortes I Herms
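The Hewlett-Packard abstract above describes a two-level structure: one octree per contiguous sub-volume, plus a coarser volume octree whose lowest nodes are the roots of those sub-volume octrees. A compact sketch of assembling such a structure over a flat occupancy list; the octree depth, voxel layout, and node payloads are illustrative assumptions:

```python
class Node:
    def __init__(self, children=None, payload=None):
        self.children = children or []   # up to 8 child nodes
        self.payload = payload

def build_sub_volume_octree(voxels, depth=1):
    """Build a trivial octree over one sub-volume (payload = occupied voxel count)."""
    root = Node(payload=sum(voxels))
    if depth > 0:
        eighth = max(1, len(voxels) // 8)
        root.children = [Node(payload=sum(voxels[i * eighth:(i + 1) * eighth]))
                         for i in range(8)]
    return root

def build_volume_octree(sub_volume_voxel_lists):
    """Volume octree whose lowest nodes are the sub-volume octree roots."""
    sub_roots = [build_sub_volume_octree(v) for v in sub_volume_voxel_lists]
    return Node(children=sub_roots, payload="object volume")

# Eight non-overlapping sub-volumes, each described by a flat occupancy list.
sub_volumes = [[1, 0, 1, 1, 0, 0, 0, 1] for _ in range(8)]
volume = build_volume_octree(sub_volumes)
print(len(volume.children), volume.children[0].payload)   # 8 sub-volume roots, 4 voxels set
```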
  • Patent number: 10997760
    Abstract: Embodiments of the present disclosure relate generally to systems for enhancing a first media item through the addition of a supplemental second media item. A user may provide a request to enhance a selected media item, and in response, an enhancement system retrieves and presents to the user a curated collection of supplemental content to be added to the media. The user may review the curated collection of supplemental content, for example by providing a tactile input to scroll through the curated collection of content.
    Type: Grant
    Filed: August 31, 2018
    Date of Patent: May 4, 2021
    Assignee: Snap Inc.
    Inventors: Itamar Berger, Piers Cowburn, Avihay Assouline
  • Patent number: 10928202
    Abstract: A computer implemented method for three-dimensional volumetric indoor location geocoding relative to a geographic location is provided. The method includes: creating a three-dimensional representation of the geographic location; notionally subdividing the three-dimensional representation into an array of discrete elements; receiving an address and converting the address into geographic coordinates; querying the array of discrete elements representing the geographic location; determining a list of all discrete elements with at least one of a matching address and sub-address element attribute; generating a notional minimum bounding three-dimensional polygon containing the matched discrete elements with the at least one matched address and matched sub-address element attribute; determining a list of geodetic coordinates defining the minimum bounding three-dimensional polygon; and presenting the list of geodetic coordinates defining the minimum bounding three-dimensional polygon.
    Type: Grant
    Filed: December 28, 2017
    Date of Patent: February 23, 2021
    Assignee: Geo-Comm Inc.
    Inventors: John T. Brosowsky, Avery Penniston, Steven Henningsgard
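The indoor-geocoding method above matches address and sub-address attributes against discrete elements of a 3D model and then wraps the matches in a minimum bounding 3D polygon. A simplified version that uses an axis-aligned bounding box as the bounding shape; the element layout and attribute names are invented:

```python
def matching_elements(elements, address, sub_address=None):
    """Elements whose attributes match the address (and sub-address, if given)."""
    return [e for e in elements
            if e["address"] == address
            and (sub_address is None or e.get("sub_address") == sub_address)]

def bounding_box(elements):
    """Axis-aligned stand-in for the minimum bounding 3D polygon of the matches."""
    xs, ys, zs = zip(*(e["xyz"] for e in elements))
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

elements = [
    {"address": "100 Main St", "sub_address": "Suite 210", "xyz": (2.0, 5.0, 6.0)},
    {"address": "100 Main St", "sub_address": "Suite 210", "xyz": (4.0, 8.0, 6.0)},
    {"address": "100 Main St", "sub_address": "Suite 305", "xyz": (4.0, 8.0, 9.0)},
]
matches = matching_elements(elements, "100 Main St", "Suite 210")
print(bounding_box(matches))   # ((2.0, 5.0, 6.0), (4.0, 8.0, 6.0))
```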
  • Patent number: 10930077
    Abstract: The disclosed computer-implemented method may include determining a local position and a local orientation of a local device in an environment and receiving, by the local device and from a mapping system, object data for objects within the environment. The object data may include position data and orientation data for the objects and relationship data between the objects. The method may also include deriving, based on the object data received from the mapping system, and the local position and orientation of the local device, a contextual rendering of the objects that provides contextual data that modifies a user's view of the environment. The method may include displaying, using the local device, the contextual rendering of at least one of the plurality of objects to modify the user's view of the environment. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: September 14, 2018
    Date of Patent: February 23, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Richard Andrew Newcombe, Jakob Julian Engel, Julian Straub, Thomas John Whelan, Steven John Lovegrove, Yuheng Ren
  • Patent number: 10904171
    Abstract: A method, computer program product, and computer system for defining, at a first computing device, at least a portion of a display area associated with the first computing device. A specialized communication from a second computing device is received at the first computing device. The specialized communication is rendered at the first computing device in at least the portion of the display area. Use of an application within at least the portion of the display is prevented at least while the specialized communication is accessed.
    Type: Grant
    Filed: March 27, 2019
    Date of Patent: January 26, 2021
    Assignee: International Business Machines Corporation
    Inventors: Corville O. Allen, Faheem Altaf, Robert E. Loredo, Henri F. Meli
  • Patent number: 10887543
    Abstract: Methods, systems and computer program products for automatic adjustment of video orientation are provided. A computer-implemented method may include receiving a video comprising a plurality of image frames, receiving, via a user interface, a user request to initiate an automatic correction of the video that was recorded by the video recording device of the mobile device that was shaken during the recording, performing the automatic correction of the video, comprising automatically adjusting one or more of the plurality of image frames in the video to correct shaking, presenting the user interface providing a playback of a preview of the adjusted video, and presenting, in the user interface, alongside the playback of the preview of the adjusted video, a concurrent playback of the video originally recorded by the video recording device of the mobile device.
    Type: Grant
    Filed: November 4, 2019
    Date of Patent: January 5, 2021
    Assignee: GOOGLE LLC
    Inventors: Maciek S. Nowakowski, Balazs Szabo
  • Patent number: 10878638
    Abstract: In one embodiment, a computing system accesses a tracking record of a real-world object during a first movement session. The first tracking record comprises a plurality of locations of the real-world object relative to a first user. The system determines a display position of a virtual object representing the real-world object on a display screen of the second user based on the tracking record of the real-world object and the current location of the second user. Based on the determined position of the virtual object on the display screen, the system displays the virtual object on the display screen.
    Type: Grant
    Filed: January 15, 2020
    Date of Patent: December 29, 2020
    Assignee: Facebook, Inc.
    Inventor: David Michael Viner
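The abstract above turns a tracking record of a real-world object's positions relative to a first user into a screen position for a virtual object shown to a second user. A toy projection of that idea; the coordinate handling and the world-to-screen mapping are simplifications, not the patent's method:

```python
def display_position(tracking_record, second_user_location, screen_size=(1920, 1080)):
    """Map the most recent relative position in the record to screen coordinates."""
    rel_x, rel_y = tracking_record[-1]            # offset relative to the first user
    world_x = second_user_location[0] + rel_x     # replay the offset at the second user
    world_y = second_user_location[1] + rel_y
    # Crude world -> screen mapping: the centre of the screen is the second user.
    px = int(screen_size[0] / 2 + world_x * 50)
    py = int(screen_size[1] / 2 - world_y * 50)
    return px, py

record = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0)]     # object's path relative to user one
print(display_position(record, second_user_location=(0.0, 0.0)))   # (1060, 490)
```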
  • Patent number: 10867451
    Abstract: An example device may include an electronic display configured to generate an augmented reality image element and an optical combiner configured to receive the augmented reality image element along with ambient light from outside the device. The optical combiner may be configured to provide an augmented reality image having the augmented reality image element located within a portion of an ambient image formed from the ambient light. The device may also include a dimmer element configured to selectively dim the portion of the ambient image in which the augmented reality image element is located.
    Type: Grant
    Filed: July 3, 2019
    Date of Patent: December 15, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Jasmine Soria Sears, Alireza Moheghi, Oleg Yaroshchuk, Douglas Robert Lanman, Andrew Maimone, Kavitha Ratnam, Nathan Matsuda
  • Patent number: 10867446
    Abstract: An exemplary method includes a virtual world creation system detecting a request from a user of a user computing device to experience a three-dimensional (3D) virtual world, dynamically generating, in response to the request, a 3D mesh that defines a structure of a custom 3D virtual world to be experienced by the user, and providing, by the virtual world creation system, the custom 3D virtual world for experiencing by the user. The generating of the custom 3D virtual world includes selecting, based on profile information for the user and a set of virtual world building rules, a custom set of modules for inclusion in the custom 3D virtual world, and using the selected custom set of modules to generate the 3D mesh based on the set of virtual world building rules.
    Type: Grant
    Filed: September 25, 2018
    Date of Patent: December 15, 2020
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Mohammad Raheel Khalid, Craig Elliott Brown, Joseph M. Knight
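The final abstract selects a custom set of modules based on the user's profile information and a set of virtual world building rules, then uses them to generate the 3D mesh. A skeletal version of the selection step; the profile fields, rules, and module names are all placeholders:

```python
MODULES = {
    "gallery":   {"tags": {"art"},     "max_per_world": 2},
    "arcade":    {"tags": {"games"},   "max_per_world": 1},
    "library":   {"tags": {"reading"}, "max_per_world": 1},
    "courtyard": {"tags": set(),       "max_per_world": 1},   # always eligible
}

def select_modules(profile_interests, rules):
    """Pick modules whose tags overlap the profile, within rule-imposed limits."""
    chosen = []
    for name, spec in MODULES.items():
        if not spec["tags"] or spec["tags"] & profile_interests:
            chosen.extend([name] * min(spec["max_per_world"], rules["max_copies"]))
    return chosen[: rules["max_modules"]]

rules = {"max_modules": 3, "max_copies": 1}
print(select_modules({"art", "reading"}, rules))   # ['gallery', 'library', 'courtyard']
```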