Patents Examined by Grace Q Li
  • Patent number: 11170700
    Abstract: An artificial window is provided. In one example, the artificial window includes a transparent light emitting diode (LED) display panel, and a directional backlight module located at a back side of the transparent LED display panel. The transparent LED display panel includes multiple first LEDs to display at least one image frame, and the directional backlight module includes multiple second LEDs forming a directional LED array to generate a collimated directional light toward the transparent LED display panel. In another example, the artificial window includes a display panel to display at least one image frame; and a backlight module located at a back side of the display panel. The backlight module includes a first light source providing backlight for the display panel, and a second light source to generate a collimated directional light.
    Type: Grant
    Filed: May 20, 2020
    Date of Patent: November 9, 2021
    Assignee: A.U. VISTA, INC.
    Inventor: David Slobodin
  • Patent number: 11164356
    Abstract: Systems, devices, and methods provide an augmented reality visualization of a real world accident scene. The system may comprise an augmented reality visualization device in communication with a display interface. The display interface may be configured to present a graphical user interface including an accident scene that corresponds to a real world location. The augmented reality visualization device (and/or system) may comprise one or more data stores configured to store accident scene information corresponding to the real world location, such as vehicles, motorcycles, trees, road objects, etc. Additionally, the one or more data stores may also store participant information corresponding to details about multiple other participants, such as witnesses, other drivers, and/or police officers.
    Type: Grant
    Filed: May 27, 2020
    Date of Patent: November 2, 2021
    Assignee: Allstate Insurance Company
    Inventors: Chase Davis, Jeraldine Dahlman
  • Patent number: 11158277
    Abstract: A display device includes: a plurality of sub-pixels each including a memory block that includes a plurality of memories each of which is configured to store sub-pixel data; a plurality of memory selection line groups provided to respective rows and each including a plurality of memory selection lines electrically coupled to the corresponding memory blocks in the sub-pixels that belong to a corresponding row; a memory selection circuit configured to simultaneously output a memory selection signal to the memory selection line groups, the memory selection signal being a signal for selecting one from the plurality of memories in each of the memory blocks. In accordance with the memory selection lines supplied with the memory selection signal, the sub-pixels display an image based on the sub-pixel data stored in memories in the respective sub-pixels, the memories each being one of the plurality of memories in the corresponding sub-pixel.
    Type: Grant
    Filed: May 12, 2020
    Date of Patent: October 26, 2021
    Assignee: Japan Display Inc.
    Inventors: Yutaka Mitsuzawa, Takayuki Nakao, Masaya Tamaki, Yutaka Ozawa
  • Patent number: 11157725
    Abstract: Embodiments are directed to a near eye display (NED) system for displaying artificial reality content to a user and manipulating displayed content items based upon gestures performed by users of the NED system. A user of the NED system may perform a gesture simulating the throwing of an object to “cast” a content item to a target location in an artificial reality (AR) environment displayed by the NED system. The gesture may comprise a first portion in which the user's hand “grabs” or “pinches” a virtual object corresponding to the content item and moves backwards relative to their body, and a second portion in which the user's hand moves forwards relative to their body and releases the virtual object. The target location may be identified based upon a trajectory associated with the backwards motion of the first portion of the gesture.
    Type: Grant
    Filed: March 16, 2020
    Date of Patent: October 26, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Daniel Andersen, Albert Peter Hwang, Kenrick Cheng-Kuo Kin
  • Patent number: 11150739
    Abstract: A Chinese character Pinyin input method and apparatus are disclosed. The method includes displaying on a human-machine interaction interface (HMI) initial keys representing all initials and simple final keys representing all simple finals. The method further includes, in response to an operation with respect to a simple final, displaying on the HMI auxiliary keys corresponding to the simple final, wherein various combinations of the simple final and symbols represented by the auxiliary keys respectively form compound finals starting with the simple final. The disclosed method and apparatus are especially applicable to inputting Chinese characters on a smart device touchscreen.
    Type: Grant
    Filed: April 4, 2018
    Date of Patent: October 19, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Wei Bin Gao, Hong Bo Peng, Cheng Xu, Quan Wen Zhang
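    Illustrative sketch: a minimal Python example of the key-expansion idea in the abstract above, in which tapping a simple-final key reveals auxiliary keys whose symbols, appended to the simple final, form compound finals. The mapping table and function names are illustrative assumptions, not the patent's actual layout.
        # Partial, illustrative table: each simple final -> compound finals that start with it.
        # "v" stands in for the final u-umlaut (ü), as on many Pinyin keyboards.
        COMPOUND_FINALS = {
            "a": ["ai", "ao", "an", "ang"],
            "e": ["ei", "en", "eng", "er"],
            "o": ["ou", "ong"],
            "i": ["ia", "ie", "iao", "iu", "ian", "in", "ing", "iang", "iong"],
            "u": ["ua", "uo", "uai", "ui", "uan", "un", "uang"],
            "v": ["ve", "van", "vn"],
        }

        def auxiliary_keys(simple_final: str) -> list:
            """Symbols for the auxiliary keys shown after a simple-final key is tapped.

            Appending any returned symbol to the simple final yields a compound final."""
            return [cf[len(simple_final):] for cf in COMPOUND_FINALS.get(simple_final, [])]

        if __name__ == "__main__":
            # Tapping "a" surfaces auxiliary keys i, o, n, ng (forming ai, ao, an, ang).
            print(auxiliary_keys("a"))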
  • Patent number: 11145138
    Abstract: A computing system and method to generate an avatar wearing multiple layers of clothing. For each clothing model acquired for the avatar, the system generates a customized clothing model, by deforming and physically simulating the original clothing model to fit the avatar, and a reduced clothing model, by collapsing the customized clothing model onto the body of the avatar so that applying the reduced clothing model is simplified to painting the texture of the reduced clothing model onto the avatar model. Wearing of the inner layers of clothing by the avatar is computed by applying the texture of the corresponding reduced clothing model to the body of the avatar in sequence from the inside layers to the outside layers. The customized clothing model of the outermost layer is combined with the avatar wearing the inner layers to generate the avatar wearing the multiple layers of clothing.
    Type: Grant
    Filed: April 28, 2017
    Date of Patent: October 12, 2021
    Assignee: Linden Research, Inc.
    Inventors: Jeremiah Arthur Grant, Avery Lauren Orman, David Parks, Richard Benjamin Trent Nelson
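    Illustrative sketch: the inner-to-outer layering described in the abstract above, reduced to a Python/NumPy example in which each reduced clothing model is treated as an RGBA texture painted onto the avatar's texture. The alpha-over compositing and all names are assumptions made for illustration; the outermost customized garment would remain a separate mesh combined with the result.
        import numpy as np

        def alpha_over(base: np.ndarray, layer: np.ndarray) -> np.ndarray:
            """Paint an RGBA texture over a base RGBA texture (straight alpha)."""
            a = layer[..., 3:4]
            out = base.copy()
            out[..., :3] = layer[..., :3] * a + base[..., :3] * (1.0 - a)
            out[..., 3:4] = a + base[..., 3:4] * (1.0 - a)
            return out

        def dress_avatar(avatar_texture: np.ndarray, reduced_layers: list) -> np.ndarray:
            """Apply reduced (collapsed-to-body) clothing textures from innermost to outermost."""
            tex = avatar_texture
            for layer in reduced_layers:      # inner layers first, as in the abstract
                tex = alpha_over(tex, layer)
            return tex

        if __name__ == "__main__":
            skin = np.zeros((4, 4, 4)); skin[..., 3] = 1.0
            shirt = np.zeros((4, 4, 4)); shirt[..., 0] = 1.0; shirt[..., 3] = 0.8
            print(dress_avatar(skin, [shirt])[0, 0])    # red shirt blended over the base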
  • Patent number: 11100603
    Abstract: A controller is attached to a back face of a display device. The controller includes: a housing; a control unit removably accommodated in a slot formed in the housing and configured to control the display device; a backboard provided inside the housing on the interior side with respect to the direction of insertion of the control unit and connected to the control unit; and a second connector connected to a first connector provided for the display device to establish connection between the backboard and the display device.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: August 24, 2021
    Assignee: FANUC CORPORATION
    Inventors: Kouhei Yoshida, Hideo Kobayashi, Hiroyuki Suwa, Junichi Sakamoto
  • Patent number: 11087505
    Abstract: In implementations of weighted color palette generation, one or more computing devices implement a generation system which receives input data including an input color palette. A first machine learning model receives the input color palette and generates an unweighted color palette based on the input color palette. A second machine learning model receives the generated unweighted color palette and generates a weighted color palette based on the generated unweighted color palette. The generation system renders the weighted color palette in a user interface.
    Type: Grant
    Filed: November 15, 2019
    Date of Patent: August 10, 2021
    Assignee: Adobe Inc.
    Inventors: Ankit Phogat, Vineet Batra, Sayan Ghosh
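    Illustrative sketch: the two-stage pipeline in the abstract above, with trivial stand-in functions in place of the patent's trained machine learning models (the placeholder logic and names are assumptions for illustration only).
        Palette = list

        def model_one(input_palette: Palette) -> Palette:
            """Stand-in for the first model: input palette -> unweighted color palette."""
            return sorted(set(input_palette))                  # placeholder logic

        def model_two(unweighted: Palette) -> list:
            """Stand-in for the second model: unweighted palette -> (color, weight) pairs."""
            n = len(unweighted)
            return [(color, 1.0 / n) for color in unweighted]  # placeholder: uniform weights

        def generate_weighted_palette(input_palette: Palette) -> list:
            unweighted = model_one(input_palette)
            weighted = model_two(unweighted)
            return weighted    # a renderer would then draw swatches sized by weight

        if __name__ == "__main__":
            print(generate_weighted_palette([(255, 0, 0), (0, 0, 255), (255, 0, 0)]))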
  • Patent number: 11080817
    Abstract: Generating a synthesized image of a person wearing clothing is described. A two-dimensional reference image depicting a person wearing an article of clothing and a two-dimensional image of target clothing in which the person is to be depicted as wearing are received. To generate the synthesized image, a warped image of the target clothing is generated via a geometric matching module, which implements a machine learning model trained to recognize similarities between warped and non-warped clothing images using multi-scale patch adversarial loss. The multi-scale patch adversarial loss is determined by sampling patches of different sizes from corresponding locations of warped and non-warped clothing images. The synthesized image is generated on a per-person basis, such that the target clothing fits the particular body shape, pose, and unique characteristics of the person.
    Type: Grant
    Filed: November 4, 2019
    Date of Patent: August 3, 2021
    Assignee: Adobe Inc.
    Inventors: Kumar Ayush, Surgan Jandial, Mayur Hemani, Balaji Krishnamurthy, Ayush Chopra
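    Illustrative sketch: the multi-scale patch sampling mentioned in the abstract above (patches of several sizes taken from corresponding locations of the warped and non-warped clothing images) as a NumPy example. The adversarial loss itself, which requires a trained discriminator, is omitted; the sizes and names are assumptions.
        import numpy as np

        def corresponding_patches(img_a: np.ndarray, img_b: np.ndarray,
                                  sizes=(16, 32, 64), patches_per_size=4, seed=0):
            """Sample square patches of several sizes from the same locations in two images.

            img_a, img_b: HxWxC arrays of equal shape (e.g. warped and non-warped clothing)."""
            assert img_a.shape == img_b.shape
            rng = np.random.default_rng(seed)
            h, w = img_a.shape[:2]
            pairs = []
            for s in sizes:
                for _ in range(patches_per_size):
                    y = int(rng.integers(0, h - s + 1))
                    x = int(rng.integers(0, w - s + 1))
                    pairs.append((img_a[y:y + s, x:x + s], img_b[y:y + s, x:x + s]))
            return pairs    # each pair would be scored by a patch discriminator

        if __name__ == "__main__":
            a = np.zeros((128, 128, 3)); b = np.ones((128, 128, 3))
            print(len(corresponding_patches(a, b)))    # 3 sizes x 4 patches = 12 pairs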
  • Patent number: 11080934
    Abstract: The present invention relates to a mixed reality system integrated with a surgical navigation system including a group of moveable position markers configured on a surgical instrument; a position sensor sensing the group of moveable position markers to acquire an instrument coordinate for the surgical instrument; a registered positioning marker configured in proximity to a surgical area to acquire a surgical area coordinate for the surgical area; a plurality of mixed reality sensors detecting the registered positioning marker and a plurality of mixed reality information; a computing unit module configured to receive the instrument coordinate, the surgical area coordinate, the plurality of mixed reality information, and a digital model of the surgical area, to render the digital model corresponding to the surgical area, and to add a digital instrument object into the digital model in accordance with the instrument coordinate; and a mixed reality display providing for a user to view and showing the digital model
    Type: Grant
    Filed: November 21, 2019
    Date of Patent: August 3, 2021
    Assignee: National Central University
    Inventors: Ching Shiow Tseng, Te-Hsuan Feng
  • Patent number: 11055894
    Abstract: A platform for visualization of traffic information at an observed roadway or traffic intersection converts data collected from sensors for rendering as dynamic animations on a virtual map of the observed roadway or traffic intersection. The platform parses and curates incoming sensor data from either a single or multiple sensors representing one or more objects at the observed roadway or traffic intersection, and translates at least location data of each object for correlation of the object's movement relative to the observed roadway or traffic intersection. The platform then generates dynamic animations of the movement of each object and displays the animations as an overlay on the virtual map.
    Type: Grant
    Filed: March 29, 2021
    Date of Patent: July 6, 2021
    Assignee: ITERIS, INC.
    Inventors: Todd W. Kreter, Michael T. Whiting, Peter Chen
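    Illustrative sketch: the coordinate-translation step (correlating each sensed object's location with the virtual map) as a small Python example. The linear bounding-box projection, tuple layouts, and field names are illustrative assumptions, not the platform's actual transform.
        def to_map_pixels(lat: float, lon: float, bounds: tuple, map_size: tuple) -> tuple:
            """Translate a sensed object's latitude/longitude into virtual-map pixel coordinates.

            bounds: (min_lat, max_lat, min_lon, max_lon) of the rendered intersection.
            map_size: (width_px, height_px) of the virtual map image."""
            min_lat, max_lat, min_lon, max_lon = bounds
            width, height = map_size
            x = (lon - min_lon) / (max_lon - min_lon) * (width - 1)
            y = (max_lat - lat) / (max_lat - min_lat) * (height - 1)   # image y grows downward
            return round(x), round(y)

        def track_to_animation(track: list, bounds: tuple, map_size: tuple) -> list:
            """Convert one object's sequence of sensor fixes into per-frame overlay positions."""
            return [to_map_pixels(p["lat"], p["lon"], bounds, map_size) for p in track]

        if __name__ == "__main__":
            bounds = (40.7300, 40.7310, -73.9910, -73.9900)
            print(track_to_animation([{"lat": 40.7305, "lon": -73.9905}], bounds, (800, 800)))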
  • Patent number: 11042746
    Abstract: In an approach for presenting information on an object to be observed, a processor obtains information from a sensor about an object. A processor predicts a time period in which the object is expected to be at a location and in a state, based on historical data, obtained from the sensor, associated with the object. A processor determines whether the object is at the location and in the state within the time period. A processor presents information to a user, wherein the information is about the object.
    Type: Grant
    Filed: June 5, 2019
    Date of Patent: June 22, 2021
    Assignee: International Business Machines Corporation
    Inventor: Daisuke Maruyama
  • Patent number: 11037373
    Abstract: A method for generating a 3D digital model used for creating a hairpiece is disclosed. The method comprises: obtaining a 3D model of a head, the 3D model containing a 3D surface mesh having one single boundary and color information associated with the 3D mesh; mapping the 3D model into a 2D image in such a manner that any continuously connected line on the 3D model is mapped into a continuously connected line in the 2D image, the 2D image containing a 2D mesh with color information applied thereto; displaying the 2D image; identifying a feature in the 2D image based on the color information; and mapping the identified feature back onto the 3D model. A system for generating a 3D digital model used for creating a hairpiece is also provided in another aspect of the present disclosure.
    Type: Grant
    Filed: August 13, 2018
    Date of Patent: June 15, 2021
    Assignee: TRUE HAIR LLC
    Inventors: Shing-Tung Yau, Eugene M. Yau, Dale Owen Royer
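    Illustrative sketch: the "identify a feature in 2D, then map it back onto the 3D model" step as a NumPy example, assuming the flattening step already produced a per-vertex (u, v) coordinate and the feature was found by color thresholding. The correspondence-based lookup below is an assumption for illustration, not the patented mapping.
        import numpy as np

        def feature_vertices(uv_coords: np.ndarray, feature_mask: np.ndarray) -> list:
            """Map a feature found in the 2D image back onto 3D mesh vertices.

            uv_coords: (N, 2) per-vertex coordinates in [0, 1]^2 produced by the flattening.
            feature_mask: (H, W) boolean mask from color-thresholding the 2D image (e.g. a hairline)."""
            h, w = feature_mask.shape
            xs = np.clip((uv_coords[:, 0] * (w - 1)).astype(int), 0, w - 1)
            ys = np.clip((uv_coords[:, 1] * (h - 1)).astype(int), 0, h - 1)
            return [i for i, (x, y) in enumerate(zip(xs, ys)) if feature_mask[y, x]]

        if __name__ == "__main__":
            uv = np.array([[0.1, 0.1], [0.9, 0.9]])
            mask = np.zeros((100, 100), dtype=bool); mask[85:, 85:] = True
            print(feature_vertices(uv, mask))    # only the second vertex lies inside the feature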
  • Patent number: 11030813
    Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program, and a method for rendering a three-dimensional virtual object in a video clip. The method and system include capturing, using a camera-enabled device, video content of a real-world scene and movement information collected by the camera-enabled device during capture of the video content. The captured video and movement information are stored. The stored captured video content is processed to identify a real-world object in the scene. An interactive augmented reality display is generated that: adds a virtual object to the stored video content to create augmented video content comprising the real-world scene and the virtual object; and adjusts, during playback of the augmented video content, an on-screen position of the virtual object within the augmented video content based at least in part on the stored movement information.
    Type: Grant
    Filed: January 8, 2019
    Date of Patent: June 8, 2021
    Assignee: Snap Inc.
    Inventors: Samuel Edward Hare, Andrew James McPhee, Tony Mathew
  • Patent number: 11017235
    Abstract: An augmented reality displaying system for displaying a virtual object through compositing on an image taken of the real world, comprising: a camera for capturing an image of the real world; a location information acquiring portion for acquiring, as location information, the coordinates and orientation at the instant of imaging by the camera; an image analyzing portion for analyzing, as depth information, the relative distances to imaging subjects for individual pixels that structure the real world image that has been captured; a virtual display data generating portion for generating virtual display data, on real map information that includes geographical information in the real world, based on the location information acquired by the location information acquiring portion; and a compositing processing portion for displaying the virtual display data, generated by the virtual display data generating portion, superimposed on an image captured by the camera in accordance with the depth information.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: May 25, 2021
    Inventors: Kouichi Nakamura, Kazuya Asano
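    Illustrative sketch: the depth-aware superimposition as a NumPy example in which virtual pixels are drawn over the camera image only where the virtual content is nearer than the estimated per-pixel scene depth. The per-pixel depth maps and names are assumed inputs; this is not the patented algorithm.
        import numpy as np

        def composite_with_depth(camera_rgb: np.ndarray, scene_depth: np.ndarray,
                                 virtual_rgb: np.ndarray, virtual_depth: np.ndarray) -> np.ndarray:
            """Overlay virtual pixels only where they are closer than the real scene.

            scene_depth / virtual_depth: per-pixel relative distances (smaller = nearer)."""
            visible = virtual_depth < scene_depth      # per-pixel occlusion test
            out = camera_rgb.copy()
            out[visible] = virtual_rgb[visible]
            return out

        if __name__ == "__main__":
            cam = np.zeros((2, 2, 3)); scene = np.array([[1.0, 1.0], [5.0, 5.0]])
            virt = np.ones((2, 2, 3)); vdepth = np.full((2, 2), 3.0)
            # The virtual content shows only in the bottom row, where the scene is farther away.
            print(composite_with_depth(cam, scene, virt, vdepth)[..., 0])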
  • Patent number: 11002999
    Abstract: Described herein is a mechanism for automatically correcting perspective of displayed information based on the orientation of the display device. A user can identify a screen orientation where the displayed information does not need perspective correction. The system can monitor changes to the display orientation, and when the orientation changes, the system can measure the distance to a user's eye or calculate the distance based on measured data. Based on the distance, a correction factor can be calculated. An area in the display where corrected information will be displayed is identified. The currently displayed information is corrected based on the correction factor and displayed in the identified display area.
    Type: Grant
    Filed: July 1, 2019
    Date of Patent: May 11, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: William Murdock Wellen
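    Illustrative sketch: one simplified way to derive a correction factor from the change in display orientation. A screen tilted away from the viewer appears foreshortened by roughly the cosine of the tilt angle, so content can be pre-stretched by its reciprocal; the distance-dependent handling described in the abstract is omitted here, and this is not the claimed method.
        import math

        def correction_factor(tilt_degrees: float) -> float:
            """Foreshortening compensation for a screen tilted away from the viewer."""
            return 1.0 / math.cos(math.radians(tilt_degrees))

        def corrected_extent(content_height_px: int, tilt_degrees: float) -> int:
            """Height the corrected content should occupy in the identified display area."""
            return round(content_height_px * correction_factor(tilt_degrees))

        if __name__ == "__main__":
            # At a 30-degree tilt the content is stretched by about 15% along the tilt axis.
            print(round(correction_factor(30.0), 3), corrected_extent(600, 30.0))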
  • Patent number: 10991127
    Abstract: Methods and devices for index buffer block compression in a computer system include a compressor in communication with a graphical processing unit (GPU). The methods and devices include selecting one or more primitives of at least a portion of a mesh formed by a total number of primitives for inclusion within a compressed index buffer block, the one or more primitives each associated with a number of indices each corresponding to a vertex within the mesh. The methods and devices may identify at least one redundant index in the number of indices associated with the one or more primitives of the compressed index buffer block. The methods and devices may remove the at least one redundant index from the number of indices associated with the one or more primitives of the compressed index buffer block to form the compressed index buffer block as a set of one or more unique indices.
    Type: Grant
    Filed: September 16, 2019
    Date of Patent: April 27, 2021
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Ivan Nevraev, Jason M. Gould
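    Illustrative sketch: the redundant-index removal described above, expressed as a small Python example in which a block of triangle indices is replaced by a set of unique indices plus per-primitive references into that set. The names and block layout are illustrative, not the patented encoding.
        def compress_index_block(primitives: list) -> tuple:
            """Collapse a block of triangle indices into unique indices plus local references.

            primitives: list of (i0, i1, i2) vertex-index triples for the block."""
            unique = []
            position = {}
            local = []
            for tri in primitives:
                remapped = []
                for idx in tri:
                    if idx not in position:          # first occurrence: keep it
                        position[idx] = len(unique)
                        unique.append(idx)
                    remapped.append(position[idx])   # later occurrences: reuse the stored slot
                local.append(tuple(remapped))
            return unique, local

        if __name__ == "__main__":
            # Two triangles sharing an edge (vertices 1 and 2) need only four unique indices.
            print(compress_index_block([(0, 1, 2), (2, 1, 3)]))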
  • Patent number: 10991065
    Abstract: Method, system, and computer readable medium for processing graphics using an OpenGL Embedded Systems Application Programming Interface (Open GLES API) include: decoding a source graphic to generate a graphic object, where the graphic object includes a set of index values and a color palette; providing the graphic object to a Graphical Processing Unit (GPU) through the Open GLES API, including providing the set of index values in a first acceptable graphic format of the Open GLES API to the GPU, and providing the color palette in a second acceptable graphic format of the Open GLES API to the GPU; and triggering the GPU to render the source graphic according to the set of index values received in the first acceptable graphic format of Open GLES API and the palette received in the second acceptable graphic format of Open GLES API.
    Type: Grant
    Filed: October 10, 2019
    Date of Patent: April 27, 2021
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventor: Xiaodong Jin
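    Illustrative sketch: the data split described above, with the decoded indices and the palette prepared as two separate textures in formats OpenGL ES accepts, and a fragment shader resolving each index against the palette. The Python only builds the two arrays; the GLSL string and texture formats are illustrative assumptions, not taken from the patent.
        import numpy as np

        # Illustrative GLSL ES fragment shader: look up each pixel's index in a 256x1 palette texture.
        PALETTE_LOOKUP_SHADER = """
        precision mediump float;
        varying vec2 v_uv;
        uniform sampler2D u_indices;   // single-channel texture holding normalized 8-bit indices
        uniform sampler2D u_palette;   // 256x1 RGBA texture holding the color palette
        void main() {
            float index = texture2D(u_indices, v_uv).r;           // 0.0 .. 1.0
            gl_FragColor = texture2D(u_palette, vec2(index, 0.5));
        }
        """

        def split_indexed_image(indices: np.ndarray, palette: np.ndarray) -> tuple:
            """Prepare the two GPU uploads: an index texture and a 256x1 RGBA palette texture."""
            index_tex = indices.astype(np.uint8)                   # e.g. a luminance-format upload
            palette_tex = np.zeros((1, 256, 4), dtype=np.uint8)    # e.g. an RGBA-format upload
            palette_tex[0, :len(palette)] = palette
            return index_tex, palette_tex

        if __name__ == "__main__":
            idx = np.array([[0, 1], [1, 0]], dtype=np.uint8)
            pal = np.array([[255, 0, 0, 255], [0, 0, 255, 255]], dtype=np.uint8)
            tex, ptex = split_indexed_image(idx, pal)
            print(tex.shape, ptex.shape)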
  • Patent number: 10991151
    Abstract: A game rendering method and a terminal are provided. The method includes the following. A rendering instruction is stored when a JS engine of the terminal receives the rendering instruction, where the rendering instruction carries a plurality of data identifiers of to-be-rendered data, a plurality of time interval identifiers corresponding to the data identifiers, and a plurality of rendering parameter identifiers corresponding to the time interval identifiers. The rendering instruction is sent to a target rendering system. A target time interval identifier corresponding to the current time is determined, and a target data identifier and a target rendering parameter identifier corresponding to the target time interval identifier are determined. To-be-rendered data corresponding to the target data identifier and a target rendering parameter corresponding to the target rendering parameter identifier are determined.
    Type: Grant
    Filed: September 11, 2019
    Date of Patent: April 27, 2021
    Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
    Inventor: Senlin Li
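    Illustrative sketch: the time-interval selection in the last step of the abstract, reduced to a minimal Python example. The tuple layout for the identifiers carried by the rendering instruction is an assumption made for illustration.
        def select_for_current_time(current_time_ms: int, intervals: list):
            """Pick the interval covering the current time and return the identifiers it maps to.

            intervals: (start_ms, end_ms, data_id, rendering_param_id) tuples from the stored instruction."""
            for start_ms, end_ms, data_id, param_id in intervals:
                if start_ms <= current_time_ms < end_ms:
                    return data_id, param_id      # the target rendering system renders these
            return None

        if __name__ == "__main__":
            plan = [(0, 500, "sprite_intro", "fade_in"), (500, 1000, "sprite_loop", "steady")]
            print(select_for_current_time(700, plan))    # -> ('sprite_loop', 'steady')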
  • Patent number: 10988144
    Abstract: A driver assistance system for a motor vehicle having a first assistance subsystem for capturing a first sub-area of an environment of the motor vehicle and having a second assistance subsystem for capturing a second sub-area of the environment. The first assistance subsystem is configured to generate a first image of the first sub-area and the second assistance subsystem is configured to generate a second image of the second sub-area. The driver assistance system has a display device for displaying at least the first image. The display device is configured to display a first augmented reality, the first augmented reality being generated by superimposition of the first image with first assistance information of the first assistance subsystem as the first augmented reality content and by superimposition with second assistance information of the second assistance subsystem as the second augmented reality content.
    Type: Grant
    Filed: March 12, 2019
    Date of Patent: April 27, 2021
    Assignee: AUDI AG
    Inventor: Michael Lübcke