Patents Examined by Andrew Shin
  • Patent number: 11051753
    Abstract: Disclosed is an information processing method executed by a computer. The information processing method includes identifying a sleep onset time and an awakening time based on time series data relating to an activity status of a user; calculating a first midpoint at which an interval between the sleep onset time and the awakening time is evenly divided into two; and outputting information indicating a time at the calculated first midpoint.
    Type: Grant
    Filed: July 19, 2019
    Date of Patent: July 6, 2021
    Assignee: FUJITSU LIMITED
    Inventor: Takayuki Yamaji
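The midpoint calculation in the abstract above reduces to simple interval arithmetic once sleep onset and awakening have been identified from the activity time series. A minimal sketch, assuming those two timestamps are already available; the function name is illustrative, not taken from the patent:

```python
from datetime import datetime

def sleep_midpoint(sleep_onset: datetime, awakening: datetime) -> datetime:
    """Return the first midpoint, i.e. the instant that divides the interval
    between sleep onset and awakening into two equal parts."""
    if awakening <= sleep_onset:
        raise ValueError("awakening must come after sleep onset")
    return sleep_onset + (awakening - sleep_onset) / 2

# Example: sleep from 23:30 to 07:00 the next day -> midpoint at 03:15.
onset = datetime(2021, 7, 5, 23, 30)
wake = datetime(2021, 7, 6, 7, 0)
print(sleep_midpoint(onset, wake))   # 2021-07-06 03:15:00
```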
  • Patent number: 11036287
    Abstract: An electronic device according to the present invention includes at least one processor to perform the operations of: a display control unit configured to control a display device to display a part of an image; and a detection unit configured to detect an orientation change of the display device, wherein the display control unit performs first processing relating to changing a position of a part of the image displayed according to the orientation change if the detected orientation change is a pitch or a yaw, and performs second processing relating to reproduction of the image, the second processing being different from the first processing, if the detected orientation change is a roll.
    Type: Grant
    Filed: March 26, 2019
    Date of Patent: June 15, 2021
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Genjiro Shibagami
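The branching behaviour described above (pitch or yaw pans the displayed part of the image, roll triggers different processing related to reproduction) can be sketched as a small dispatcher. This is a rough illustration assuming per-axis orientation deltas are reported by the device; the handler logic and threshold are assumptions, not the patented implementation:

```python
from dataclasses import dataclass

@dataclass
class OrientationChange:
    pitch: float  # degrees
    yaw: float
    roll: float

class ViewerController:
    def __init__(self):
        self.view_x, self.view_y, self.frame = 0.0, 0.0, 0

    def on_orientation_change(self, change: OrientationChange, threshold: float = 1.0):
        # First processing: pitch or yaw pans the displayed part of the image.
        if abs(change.pitch) >= threshold or abs(change.yaw) >= threshold:
            self.view_x += change.yaw
            self.view_y += change.pitch
        # Second (different) processing: roll drives reproduction, e.g. frame stepping.
        elif abs(change.roll) >= threshold:
            self.frame += 1 if change.roll > 0 else -1

ctrl = ViewerController()
ctrl.on_orientation_change(OrientationChange(pitch=0.0, yaw=5.0, roll=0.0))
ctrl.on_orientation_change(OrientationChange(pitch=0.0, yaw=0.0, roll=3.0))
print(ctrl.view_x, ctrl.frame)  # 5.0 1
```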
  • Patent number: 11032662
    Abstract: An augmented reality (AR) device includes a memory configured to store instructions of an augmented reality (AR) application. The AR device further includes a processor configured to initiate a first image capture operation to generate first image data and to determine a three-dimensional (3D) map based on the first image data. The 3D map represents a set of locations including a first location. The processor is further configured to initiate a second image capture operation to generate second image data and to execute the instructions to identify, based on the second image data, a second location of the set of locations. The processor is further configured to modify an audio signal to synthesize one or more acoustic characteristics associated with audio sent from the first location to the second location.
    Type: Grant
    Filed: May 30, 2018
    Date of Patent: June 8, 2021
    Assignee: Qualcomm Incorporated
    Inventor: Saurjya Sarkar
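The final step in the abstract above, modifying an audio signal so it sounds as if emitted at one mapped location and heard at another, is sketched below in a heavily simplified form: only inverse-distance attenuation and straight-line propagation delay between the two 3D-map locations are modelled, which is far cruder than the acoustic synthesis the patent covers.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def apply_propagation(audio: np.ndarray, sample_rate: int,
                      source_xyz, listener_xyz) -> np.ndarray:
    """Attenuate and delay `audio` according to the distance between two
    3D-map locations (1/r gain, straight-line propagation delay)."""
    distance = float(np.linalg.norm(np.asarray(listener_xyz, float) -
                                    np.asarray(source_xyz, float)))
    gain = 1.0 / max(distance, 1.0)
    delay_samples = int(round(distance / SPEED_OF_SOUND * sample_rate))
    return np.concatenate([np.zeros(delay_samples), audio * gain])

tone = np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)   # 1 s, 440 Hz
out = apply_propagation(tone, 48000, source_xyz=(0, 0, 0), listener_xyz=(5, 0, 0))
print(len(out) - len(tone))   # number of delay samples introduced
```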
  • Patent number: 11030879
    Abstract: Methods and systems for monitoring an immersive system include detecting a first event that occurs in an environment of a user of the immersive system that is external to an immersive headset of the immersive system while the user is operating the immersive system, initiating monitoring of the environment of the user of the immersive system responsive to detecting the first event, detecting a second event that occurs in the environment of the user of the immersive system that is external to the immersive headset while monitoring the environment of the user of the immersive system, and, responsive to determining that the second event matches a defined pattern, providing an indication to the user of the immersive system so that the user is alerted.
    Type: Grant
    Filed: November 22, 2016
    Date of Patent: June 8, 2021
    Assignee: Sony Corporation
    Inventor: Anders Gullander
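The monitoring flow above is essentially a two-stage state machine: a first external event starts monitoring of the environment, and a second event that matches a defined pattern triggers an alert to the headset wearer. A minimal sketch with illustrative event names:

```python
class EnvironmentMonitor:
    def __init__(self, trigger_event: str, alert_patterns: set):
        self.trigger_event = trigger_event
        self.alert_patterns = alert_patterns
        self.monitoring = False

    def on_event(self, event: str) -> bool:
        """Return True when the immersive-system user should be alerted."""
        if not self.monitoring:
            if event == self.trigger_event:
                self.monitoring = True           # first event: start monitoring
            return False
        return event in self.alert_patterns      # second event: match defined pattern

monitor = EnvironmentMonitor(trigger_event="doorbell",
                             alert_patterns={"person_entered", "smoke_alarm"})
for e in ["footsteps", "doorbell", "person_entered"]:
    if monitor.on_event(e):
        print(f"Alert user: {e}")
```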
  • Patent number: 11010945
    Abstract: The present invention provides an image display method applied to an augmented reality (AR) system which positions a virtual object by a marker image. The image display method includes that: a reality image is acquired, wherein the reality image includes a first image where the marker image is positioned and a second image, and the marker image includes a known pattern; the first image in the reality image is replaced with an extending image of the second image on the basis of a relationship between the known pattern and the first image to generate a display image; and the display image is displayed. This avoids the situation in which the AR image includes the marker image and the whole image therefore looks unnatural. In addition, an AR system using the foregoing method is also provided.
    Type: Grant
    Filed: October 19, 2018
    Date of Patent: May 18, 2021
    Assignees: LITE-ON ELECTRONICS (GUANGZHOU) LIMITED, Lite-On Technology Corporation
    Inventors: Shou-Te Wei, Wei-Chih Chen
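One way to approximate the replacement step described above is image inpainting: the region occupied by the marker image is masked and filled from the surrounding (second) image before the display image is shown. The patent extends the surrounding image based on the relationship between the known pattern and the first image; the OpenCV-based fill below is a simpler stand-in for illustration only.

```python
import numpy as np
import cv2  # OpenCV

def hide_marker(reality_image: np.ndarray, marker_mask: np.ndarray) -> np.ndarray:
    """Fill the marker region (mask > 0) from the surrounding image so the
    displayed image no longer shows the known pattern."""
    return cv2.inpaint(reality_image, marker_mask, inpaintRadius=5,
                       flags=cv2.INPAINT_TELEA)

# Synthetic example: a grey frame with a black "marker" square in the middle.
frame = np.full((240, 320, 3), 128, dtype=np.uint8)
frame[100:140, 140:180] = 0
mask = np.zeros((240, 320), dtype=np.uint8)
mask[100:140, 140:180] = 255
display_image = hide_marker(frame, mask)
print(display_image[120, 160])   # filled from the surrounding grey, not black
```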
  • Patent number: 10983668
    Abstract: Embodiments of the present invention provide a component display processing method and apparatus. The method includes: receiving indication information indicating that a component is in a waiting-to-be-processed state; and according to the indication information, reducing a displayed region of a container that is displayed on a display screen, so that a hidden region of the container is displayed on the display screen.
    Type: Grant
    Filed: June 18, 2020
    Date of Patent: April 20, 2021
    Assignee: Huawei Device Co., Ltd.
    Inventor: Yuzhuo Peng
  • Patent number: 10970920
    Abstract: Rendering shadows of transparent objects using ray tracing in real-time is disclosed. For each pixel in an image, a ray is launched towards the light source. If the ray intersects a transparent object, lighting information (e.g., color, brightness) is accumulated for the pixel. A new ray is launched from the point of intersection, either towards the light source or in a direction based on reflection/refraction from the surface. Ray tracing continues recursively, accumulating lighting information at each transparent object intersection. Ray tracing terminates when a ray intersects an opaque object, indicating a dark shadow. Ray tracing also terminates when a ray exits the scene without intersecting an object, where the accumulated lighting information is used to render a shadow for the pixel location. Soft shadows can be rendered using the disclosed technique by launching a plurality of rays in different directions based on a size of the light source.
    Type: Grant
    Filed: July 23, 2020
    Date of Patent: April 6, 2021
    Assignee: Electronic Arts Inc.
    Inventor: Karl Henrik Halén
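A toy version of the shadow-ray loop summarized above, assuming a scene of analytic spheres with a transparency flag and tint. Real implementations trace per pixel on ray-tracing hardware, and the reflection/refraction branching mentioned in the abstract is omitted here in favour of always continuing straight toward the light:

```python
import numpy as np

class Sphere:
    def __init__(self, center, radius, transparent=False, tint=(1.0, 1.0, 1.0)):
        self.center = np.asarray(center, float)
        self.radius = radius
        self.transparent = transparent
        self.tint = np.asarray(tint, float)

    def intersect(self, origin, direction):
        """Return the nearest positive hit distance along the ray, or None."""
        oc = origin - self.center
        b = np.dot(oc, direction)
        c = np.dot(oc, oc) - self.radius ** 2
        disc = b * b - c
        if disc < 0:
            return None
        t = -b - np.sqrt(disc)
        return t if t > 1e-4 else None

def shadow_factor(point, light_pos, scene, max_depth=8):
    """Accumulate lighting along shadow rays toward the light.
    Returns the per-channel light reaching `point` (all zeros = dark shadow)."""
    accumulated = np.ones(3)
    origin = np.asarray(point, float)
    direction = np.asarray(light_pos, float) - origin
    direction /= np.linalg.norm(direction)
    for _ in range(max_depth):
        hits = [(s.intersect(origin, direction), s) for s in scene]
        hits = [(t, s) for t, s in hits if t is not None]
        if not hits:
            return accumulated                 # ray left the scene: use accumulated light
        t, sphere = min(hits, key=lambda h: h[0])
        if not sphere.transparent:
            return np.zeros(3)                 # opaque blocker: dark shadow
        accumulated *= sphere.tint             # transparent: accumulate its colour
        origin = origin + (t + 1e-4) * direction   # continue from the intersection point
    return accumulated

scene = [Sphere((0, 2, 0), 0.5, transparent=True, tint=(0.9, 0.2, 0.2)),
         Sphere((0, 4, 0), 0.5, transparent=False)]
print(shadow_factor((0, 0, 0), (0, 10, 0), scene))   # [0. 0. 0.] -> opaque blocker
```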
  • Patent number: 10942028
    Abstract: Techniques are disclosed for systems and methods for video based sensor fusion with respect to mobile structures. A mobile structure may include at least one imaging module and multiple navigational sensors and/or receive navigational data from various sources. A navigational database may be generated that includes data from the imaging module, navigational sensors, and/or other sources. Aspects of the navigational database may then be used to generate an integrated model, forecast weather conditions, warn of dangers, identify hard to spot items, and generally aid in the navigation of the mobile structure.
    Type: Grant
    Filed: August 6, 2019
    Date of Patent: March 9, 2021
    Assignee: FLIR Belgium BVBA
    Inventors: Mark Johnson, Richard Jales, Gordon Pope, Christopher Gatland, Paul Stokes, Aaron Ridout, Chris Jones, Jay E. Robinson, Neil R. Owens, Peter Long
  • Patent number: 10942027
    Abstract: Techniques are disclosed for systems and methods for video based sensor fusion with respect to mobile structures. A mobile structure may include at least one imaging module and multiple navigational sensors and/or receive navigational data from various sources. A navigational database may be generated that includes data from the imaging module, navigational sensors, and/or other sources. Aspects of the navigational database may then be used to generate an integrated model, forecast weather conditions, warn of dangers, identify hard to spot items, and generally aid in the navigation of the mobile structure.
    Type: Grant
    Filed: May 11, 2018
    Date of Patent: March 9, 2021
    Assignee: FLIR Belgium BVBA
    Inventors: Mark Johnson, Richard Jales, Gordon Pope, Christopher Gatland, Paul Stokes, Aaron Ridout, Chris Jones, Jay E. Robinson, Neil R. Owens, Peter Long
  • Patent number: 10937213
    Abstract: Example implementations described herein are directed to a graphical user interface (GUI) tool that provides representations of generated charts on a map, wherein distances between representations are provided based on similarity between charts. Similarity is determined through machine learning techniques that are applied on a vectorized form of charts. Example implementations described herein encode charts into vectors using deep learning techniques, which facilitates machine learning techniques such as nearest neighbor to be utilized to determine similarity between charts based on their corresponding vectors.
    Type: Grant
    Filed: June 27, 2019
    Date of Patent: March 2, 2021
    Assignee: FUJI XEROX CO., LTD.
    Inventor: Jian Zhao
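The retrieval side of the tool described above can be sketched once each chart has been encoded into a fixed-length vector by some learned encoder (the encoder itself is not reproduced here); nearest-neighbour search over those vectors then yields the similar charts whose representations are placed close together on the map. Random vectors stand in for real chart encodings:

```python
import numpy as np

def nearest_charts(query_vec: np.ndarray, chart_vecs: np.ndarray, k: int = 3):
    """Return indices of the k charts whose vectors are closest to the query
    (cosine similarity), i.e. the charts to draw nearest to it on the map."""
    q = query_vec / np.linalg.norm(query_vec)
    m = chart_vecs / np.linalg.norm(chart_vecs, axis=1, keepdims=True)
    similarity = m @ q
    return np.argsort(-similarity)[:k]

rng = np.random.default_rng(0)
chart_vectors = rng.normal(size=(100, 64))   # stand-in for learned chart encodings
query = chart_vectors[17] + 0.01 * rng.normal(size=64)
print(nearest_charts(query, chart_vectors))  # chart 17 should rank first
```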
  • Patent number: 10930074
    Abstract: Embodiments of the present application provide a method for real-time control of a three-dimensional model, addressing the technical issue that, in a mobile internet environment with limited resources, real-time feedback from an actual object cannot otherwise be used to control an action of the three-dimensional model to form a live video. The method includes: capturing a real-time video of an actual object; marking an action of the actual object in an image of the real-time video; and forming an action control instruction of a corresponding 3D model according to a change of the action that is marked.
    Type: Grant
    Filed: January 29, 2019
    Date of Patent: February 23, 2021
    Assignee: APPMAGICS TECH (BEIJING) LIMITED
    Inventors: Yingna Fu, Yulin Jin
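The control loop above reduces to: mark a feature of the actual object in each video frame, measure how the mark changes between frames, and turn that change into an action control instruction for the 3D model. A toy sketch, assuming a pose detector already supplies a tracked head keypoint per frame; the dead zone and instruction format are illustrative assumptions:

```python
def control_instructions(head_positions, dead_zone=2.0):
    """Yield (frame_index, instruction) pairs from per-frame head keypoints
    (x, y in pixels). Horizontal motion of the marked point becomes a yaw
    command for the 3D model; jitter inside the dead zone is ignored."""
    prev = head_positions[0]
    for i, (x, y) in enumerate(head_positions[1:], start=1):
        dx = x - prev[0]
        if abs(dx) > dead_zone:
            yield i, {"action": "rotate_head", "yaw_degrees": dx * 0.5}
        prev = (x, y)

frames = [(100, 80), (101, 80), (110, 81), (95, 79)]   # marked keypoint per frame
for frame_idx, cmd in control_instructions(frames):
    print(frame_idx, cmd)
```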
  • Patent number: 10909751
    Abstract: Methods and apparatus to transition between 2D and 3D renderings of augmented reality content are disclosed. An example apparatus includes a user input analyzer to determine an intended movement of an AR object relative to a first zone of a real world environment and a second zone of the real world environment. The apparatus also includes an AR content generator to, in response to user input: render an appearance of movement of the AR object in the first zone based upon a first set of rules; and render the AR object in the second zone, with movement of the AR object in the second zone based on a second set of rules different from the first set of rules.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: February 2, 2021
    Assignee: INTEL CORPORATION
    Inventors: Pete Denman, John Sherry, Glen J. Anderson, Benjamin Bair, Rebecca Chierichetti, Ankur Agrawal, Meng Shi
  • Patent number: 10908714
    Abstract: There is provided a portable information code display apparatus. The apparatus includes a display unit capable of displaying an image, a direction detecting unit, and a display control unit. The direction detecting unit is capable of detecting that the display unit is oriented in a predetermined orientation. The display control unit controls the display performed by the display unit. Specifically, when the direction detecting unit detects that the display unit is oriented in the predetermined orientation, the display control unit displays an information code on the display unit in a predetermined readable state in which the information code is readable, and maintains the display of the information code in the readable state.
    Type: Grant
    Filed: December 7, 2015
    Date of Patent: February 2, 2021
    Assignee: DENSO WAVE INCORPORATED
    Inventor: Takao Ushijima
  • Patent number: 10896524
    Abstract: There is provided a system and method for color representation generation. In an aspect, the method includes: receiving three base colors; receiving a patchwork parameter; and generating a color representation having each of the three base colors at a vertex of a triangular face, the triangular face having a color distribution therein, the color distribution discretized into discrete portions, the amount of discretization based on the patchwork parameter, each discrete portion having an interpolated color determined to be a combination of the base colors at respective coordinates of such discrete portion. In further aspects, one or more color representations are generated based on an input image and can be used to modify colors of a reconstructed image.
    Type: Grant
    Filed: May 17, 2019
    Date of Patent: January 19, 2021
    Assignee: THE GOVERNING COUNCIL OF THE UNIVERSITY OF TORONTO
    Inventors: Maria Shugrina, Amlan Kar, Sanja Fidler, Karan Singh
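The generation step above amounts to barycentric interpolation over a discretized triangular face: the patchwork parameter controls how many discrete portions the face is split into, and each portion's colour is a weighted combination of the three base colours at the vertices. A small sketch, with the subdivision scheme chosen for simplicity rather than taken from the patent:

```python
import numpy as np

def patchwork_triangle(base_colors, patchwork: int):
    """Return (barycentric_coords, interpolated_color) for each discrete portion
    of a triangular face subdivided into `patchwork` rows of portions."""
    base = np.asarray(base_colors, dtype=float)     # 3 x 3: RGB at each vertex
    portions = []
    n = patchwork
    for i in range(n):
        for j in range(n - i):
            k = n - 1 - i - j
            # Barycentric coordinates of this portion's representative point.
            w = np.array([i, j, k], dtype=float) + 1.0 / 3.0
            w /= w.sum()
            portions.append((tuple(w), tuple(w @ base)))
    return portions

# Red, green and blue base colours at the vertices; a coarse patchwork of 4 rows.
cells = patchwork_triangle([(255, 0, 0), (0, 255, 0), (0, 0, 255)], patchwork=4)
print(len(cells))        # 10 discrete portions
print(cells[0][1])       # interpolated colour of the first portion
```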
  • Patent number: 10885697
    Abstract: One embodiment of the present disclosure presents a technique for generating an augmented reality effect. The technique includes receiving first input data including an image of a face. The technique further includes, based on the first input data, generating a first intermediate texture corresponding to an eyelid of the face with make-up. The technique further includes, based on the first input data, generating a second intermediate texture corresponding to the eyelid of the face without make-up. The technique also includes generating an output texture based on the first intermediate texture and the second intermediate texture. The technique further includes generating an effect by applying the output texture to second input data corresponding to a second image of a second face.
    Type: Grant
    Filed: December 12, 2018
    Date of Patent: January 5, 2021
    Assignee: Facebook, Inc.
    Inventors: Srinidhi Viswanathan, Ian Erik Smith Heisters, Bruno Pereira Evangelista, Jennifer Dolson, Alexandra Louise Krakaris
  • Patent number: 10878177
    Abstract: Improved techniques are presented for generating stereoscopic images of 2D web pages. In accordance with an exemplary embodiment, a stereo-enhancing annotation tool is provided and used to generate intermediate HTML source code. The intermediate HTML source code, together with the normal HTML code that is served when a user's browser makes a URL call, is used by a computer processing unit to generate stereoscopic images. Algorithms optimize the look and feel of stereoscopically imaged web-page content using a number of known presentation-optimization parameters that are automatically determined based on a priori assumptions about depth cues.
    Type: Grant
    Filed: January 21, 2019
    Date of Patent: December 29, 2020
    Assignee: EQULDO LIMITED
    Inventors: Dimitrios Andriotis, Ioannis Paliokas, Athanasios Tsakiris
  • Patent number: 10825422
    Abstract: A display is described in which water is applied onto the screen of a visual display monitor and graphics that coincide with the applied water are presented on the display. The timing of the water being applied to the screen and the playing of the graphics may be orchestrated such that the applied water seems to create the presented graphics. A controller may be used to control the water and the display.
    Type: Grant
    Filed: September 20, 2018
    Date of Patent: November 3, 2020
    Assignee: WET
    Inventors: Mark W. Fuller, James W. Doyle, Gautam Rangan, John Canavan
  • Patent number: 10803626
    Abstract: Systems and techniques for a large scale online lossless animated GIF processor are described herein. In an example, a lossless animated GIF processor is adapted to receive an animated GIF image and decode a first and second frame of the animated GIF image, wherein the decoding identifies a disposal method for each frame. The lossless animated GIF processor may determine an optimized disposal method for the second frame based on transparency pixels in the second frame and an overlap estimation between the second frame and the first frame. The lossless animated GIF processor may encode the second frame with the optimized disposal method. The lossless animated GIF processor may be further adapted to identify pixels in an area of interest, designate pixels outside the area of interest as transparent, and encode the area of interest and the pixels designated as transparent for the second frame.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: October 13, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yurong Jiang, Ivaylo G. Dimitrov, Sining Ma, Si Lao
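A rough sketch of the per-frame decision described above: pixels identical to the previous frame are designated transparent (so only the changed area of interest needs encoding), and the estimated overlap with the previous frame drives the choice of disposal method. The threshold and the mapping to GIF disposal codes below are illustrative assumptions, not the patent's:

```python
import numpy as np

DISPOSAL_NONE = 1          # do not dispose: leave the previous frame in place
DISPOSAL_RESTORE_BG = 2    # restore to background before the next frame

def optimize_frame(prev: np.ndarray, curr: np.ndarray, overlap_threshold=0.5):
    """Return (frame with unchanged pixels made transparent, transparency mask,
    chosen disposal method) for the second of two decoded frames."""
    unchanged = np.all(prev == curr, axis=-1)            # overlap with previous frame
    overlap_ratio = unchanged.mean()
    optimized = curr.copy()
    optimized[unchanged] = 0                             # transparent placeholder value
    disposal = DISPOSAL_NONE if overlap_ratio >= overlap_threshold else DISPOSAL_RESTORE_BG
    return optimized, unchanged, disposal

prev = np.zeros((64, 64, 3), dtype=np.uint8)
curr = prev.copy()
curr[10:20, 10:20] = 255                                 # small area of interest changed
_, mask, disposal = optimize_frame(prev, curr)
print(mask.mean(), disposal)                             # mostly unchanged -> DISPOSAL_NONE
```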
  • Patent number: 10762703
    Abstract: A method for visualizing 3D image data of a 3D sensor (10) with a plurality of 3D points which form a lateral 2D arrangement with a respective depth value, wherein connected segments (32) are formed from connected 3D points and the segments (32) are displayed, and wherein two respective 3D points are connected in the same segment (32) if they are laterally adjacent and also differ in their depth value by at most a depth threshold (z).
    Type: Grant
    Filed: May 21, 2019
    Date of Patent: September 1, 2020
    Assignee: SICK AG
    Inventors: Davide Impera, Thomas Neumann
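The segmentation rule in the abstract above can be sketched as a 4-neighbour flood fill over a depth image (one depth value per lateral grid position): adjacent pixels join the same segment when their depth values differ by at most the depth threshold. A compact illustration:

```python
import numpy as np
from collections import deque

def segment_depth(depth: np.ndarray, z_threshold: float) -> np.ndarray:
    """Label connected segments: two laterally adjacent points belong to the
    same segment if their depth values differ by at most z_threshold."""
    labels = np.full(depth.shape, -1, dtype=int)
    next_label = 0
    for start in np.ndindex(depth.shape):
        if labels[start] >= 0:
            continue
        queue = deque([start])
        labels[start] = next_label
        while queue:
            r, c = queue.popleft()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < depth.shape[0] and 0 <= nc < depth.shape[1]
                        and labels[nr, nc] < 0
                        and abs(depth[nr, nc] - depth[r, c]) <= z_threshold):
                    labels[nr, nc] = next_label
                    queue.append((nr, nc))
        next_label += 1
    return labels

depth = np.array([[1.0, 1.1, 5.0],
                  [1.2, 1.1, 5.1],
                  [1.1, 4.9, 5.0]])
print(segment_depth(depth, z_threshold=0.3))   # two segments: near plane and far plane
```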
  • Patent number: 10762695
    Abstract: Rendering shadows of transparent objects using ray tracing in real-time is disclosed. For each pixel in an image, a ray is launched towards the light source. If the ray intersects a transparent object, lighting information (e.g., color, brightness) is accumulated for the pixel. A new ray is launched from the point of intersection, either towards the light source or in a direction based on reflection/refraction from the surface. Ray tracing continues recursively, accumulating lighting information at each transparent object intersection. Ray tracing terminates when a ray intersects an opaque object, indicating a dark shadow. Ray tracing also terminates when a ray exits the scene without intersecting an object, where the accumulated lighting information is used to render a shadow for the pixel location. Soft shadows can be rendered using the disclosed technique by launching a plurality of rays in different directions based on a size of the light source.
    Type: Grant
    Filed: February 21, 2019
    Date of Patent: September 1, 2020
    Assignee: Electronic Arts Inc.
    Inventor: Karl Henrik Halén