Patents Examined by Steven Z Elbinger
  • Patent number: 10650563
    Abstract: A method is provided that includes receiving, from a camera, a plurality of images representing a portion of a face containing a mouth. One or more images of the plurality of images depict a tongue extended out of the mouth. The method also includes determining, based on the plurality of images, an amount of time for which the tongue has been extended out of the mouth. The method additionally includes determining, based on the amount of time, a tongue length for a digital representation of the tongue. The digital representation of the tongue forms part of a digital representation of the face. The method further includes adjusting the digital representation of the face to have the digital representation of the tongue extend out of the mouth with the determined tongue length. The method yet further includes providing instructions to display the adjusted digital representation of the face.
    Type: Grant
    Filed: July 26, 2018
    Date of Patent: May 12, 2020
    Assignee: BinaryVR, Inc.
    Inventors: Jihun Yu, Jungwoon Park, Junggun Lim, Taeyoon Lee
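The time-to-length mapping described in the abstract above could be sketched as follows. This is a minimal illustrative sketch, not the patented method: the constants, function name, and the linear-growth-with-clamp mapping are all assumptions.

```python
# Hypothetical sketch: the longer the tongue has been extended out of
# the mouth, the longer the digital tongue grows, up to a maximum.
MIN_LENGTH = 0.2   # normalized length when extension is first detected
MAX_LENGTH = 1.0   # fully extended digital tongue
GROWTH_RATE = 0.4  # normalized length gained per second of extension

def tongue_length(extended_seconds: float) -> float:
    """Map the time the tongue has been out to a digital tongue length."""
    length = MIN_LENGTH + GROWTH_RATE * extended_seconds
    return min(length, MAX_LENGTH)
```

The determined length would then drive the tongue of the digital face representation.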
  • Patent number: 10643383
    Abstract: In an embodiment, a 3D facial modeling system includes a plurality of cameras configured to capture images from different viewpoints, a processor, and a memory containing a 3D facial modeling application and parameters defining a face detector, wherein the 3D facial modeling application directs the processor to obtain a plurality of images of a face captured from different viewpoints using the plurality of cameras, locate a face within each of the plurality of images using the face detector, wherein the face detector labels key feature points on the located face within each of the plurality of images, determine disparity between corresponding key feature points of located faces within the plurality of images, and generate a 3D model of the face using the depth of the key feature points.
    Type: Grant
    Filed: November 27, 2017
    Date of Patent: May 5, 2020
    Assignee: FotoNation Limited
    Inventor: Kartik Venkataraman
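The disparity-to-depth step that the abstract describes can be illustrated for a rectified stereo pair. The focal length, baseline, and sample feature-point coordinates below are illustrative assumptions, not values from the patent.

```python
# Triangulate depth from horizontal disparity between two rectified views.
FOCAL_PX = 1000.0   # focal length in pixels (assumed)
BASELINE_M = 0.06   # camera baseline in meters (assumed)

def keypoint_depth(x_left: float, x_right: float) -> float:
    """Depth of one key feature point matched across two views: Z = f * B / d."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    return FOCAL_PX * BASELINE_M / disparity

# depths for matched key feature points (e.g. eye corners, nose tip)
depths = [keypoint_depth(xl, xr) for xl, xr in [(420.0, 320.0), (500.0, 450.0)]]
```

A 3D face model would then be built from the recovered depths of the labeled key feature points.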
  • Patent number: 10636190
    Abstract: A method for motion estimation in an augmented reality (AR) system includes receiving inertial sensor data and image data during movement of the AR system; generating, with a convolutional neural network encoder/decoder, a probability map based on the inertial sensor data and the image data, the probability map corresponding to one frame in the image data and including probability values indicating whether each pixel in the one frame is in an inertial coordinate frame or a local coordinate frame; identifying visual observations of at least one landmark in the local coordinate frame based on the image data and the probability map; and generating an estimate of secondary motion in the local coordinate frame based on the visual observations of the at least one landmark and a first prior state in a hidden Markov model (HMM) corresponding to the local coordinate frame.
    Type: Grant
    Filed: October 12, 2018
    Date of Patent: April 28, 2020
    Assignee: Robert Bosch GmbH
    Inventors: Benzun Pious Wisely Babu, Zhixin Yan, Mao Ye, Liu Ren
  • Patent number: 10636182
    Abstract: Data visualization features are described that provide synchronized displaying of interactive visualizations for high parameter data. The visualization features include graphically representing multiple parameters simultaneously with the associated statistical data for each parameter in an interactive way that maintains the contextual relationships between parameters and the related cell population. The visualization features may be used for displaying high parameter multi-color flow cytometry or genomic data sets.
    Type: Grant
    Filed: July 2, 2018
    Date of Patent: April 28, 2020
    Assignee: Becton, Dickinson and Company
    Inventors: Oliver Crespo-Diaz, Alexander Fainshtein, Mengxiang Tang
  • Patent number: 10628997
    Abstract: A method for three-dimensional modeling through constrained sketches for obtaining potentially infinite variations of any one model is provided. The method generates three-dimensional modeling through constrained sketches and its related operations with the objective of obtaining potentially infinite variations of any one model by performing the following steps: acquiring the seed two-dimensional geometry of the model; acquiring a set of identifiers attaching semantic meaning to geometric portions; acquiring a set of instructions, where each instruction possesses inputs and outputs and performs a specific task.
    Type: Grant
    Filed: August 24, 2018
    Date of Patent: April 21, 2020
    Inventor: Emilio Santos
  • Patent number: 10629166
    Abstract: Video is described with selectable tag overlay auxiliary pictures. In one example, video content is prepared by identifying an object in a sequence of video frames, generating a tag overlay video frame having a visible representation of a tag in a position which is related to the position of the identified object, generating an overlay label frame to indicate pixel positions corresponding to the tag of the tag overlay frame, and encoding the video frame, the tag overlay video frame and the overlay label frame in an encoded video sequence.
    Type: Grant
    Filed: April 1, 2016
    Date of Patent: April 21, 2020
    Assignee: Intel Corporation
    Inventor: Jill MacDonald Boyce
  • Patent number: 10621762
    Abstract: Data visualization processes can utilize machine learning algorithms applied to visualization data structures to determine visualization parameters that most effectively provide insight into the data, and to suggest meaningful correlations for further investigation by users. In numerous embodiments, data visualization processes can automatically generate parameters that can be used to display the data in ways that will provide enhanced value. For example, dimensions can be chosen to be associated with specific visualization parameters that are easily digestible based on their importance, e.g. with higher value dimensions placed on more easily understood visualization aspects (color, coordinate, size, etc.). In a variety of embodiments, data visualization processes can automatically describe the graph using natural language by identifying regions of interest in the visualization, and generating text using natural language generation processes.
    Type: Grant
    Filed: September 17, 2018
    Date of Patent: April 14, 2020
    Assignee: Virtualitics, Inc.
    Inventors: Ciro Donalek, Michael Amori, Justin Gantenberg, Sarthak Sahu, Aakash Indurkhya
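The dimension-to-aspect assignment idea in the abstract above (higher-value dimensions placed on more easily understood visualization aspects) can be sketched briefly. The aspect ordering, importance scores, and function name are assumptions for illustration.

```python
# Map dimensions ranked by importance onto visualization aspects ordered
# from most to least easily digestible (assumed ordering).
ASPECTS = ["x-coordinate", "y-coordinate", "color", "size"]

def assign_aspects(importance):
    """Assign each dimension an aspect, highest importance first."""
    ranked = sorted(importance, key=importance.get, reverse=True)
    return dict(zip(ranked, ASPECTS))

mapping = assign_aspects({"revenue": 0.9, "region": 0.4, "age": 0.7, "churn": 0.2})
```

Here the most important dimension lands on a spatial coordinate, while the least important is relegated to marker size.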
  • Patent number: 10607400
    Abstract: A graphics processing pipeline comprises vertex shading circuitry that operates to vertex shade position attributes of vertices of a set of vertices to be processed by the graphics processing pipeline, to generate, inter alia, a separate vertex shaded position attribute value for each view of the plural different views. Tiling circuitry then determines for the vertices that have been subjected to the first vertex shading operation, whether the vertices should be processed further. Vertex shading circuitry then performs a second vertex shading operation on the vertices that it has been determined should be processed further, to vertex shade the remaining vertex attributes for each vertex that it has been determined should be processed further, to generate, inter alia, a single vertex shaded attribute value for the set of plural views.
    Type: Grant
    Filed: May 15, 2017
    Date of Patent: March 31, 2020
    Assignee: Arm Limited
    Inventors: Sandeep Kakarlapudi, Jorn Nystad, Andreas Due-Engh Halstvedt
  • Patent number: 10601950
    Abstract: Data representative of a physical feature of a morphologic subject is received in connection with a procedure to be carried out with respect to the morphologic subject. A view of the morphologic subject overlaid by a virtual image of the physical feature is rendered for a practitioner of the procedure, including generating the virtual image of the physical feature based on the representative data, and rendering the virtual image of the physical feature within the view in accordance with one or more reference points on the morphologic subject such that the virtual image enables in-situ visualization of the physical feature with respect to the morphologic subject.
    Type: Grant
    Filed: March 1, 2016
    Date of Patent: March 24, 2020
    Assignee: ARIS MD, Inc.
    Inventors: Chandra Devam, Zaki Adnan Taher, William Scott Edgar
  • Patent number: 10593163
    Abstract: Method, computer program product, and system to provide an extended vision within an environment having a plurality of items, where the extended vision is based on a field of view of a person determined using a first visual sensor, and is further based on at least a second visual sensor disposed within the environment. Image information from the first and second visual sensors is associated to produce combined image information. Selected portions of the combined image information are displayed based on input provided through a user interface.
    Type: Grant
    Filed: December 11, 2017
    Date of Patent: March 17, 2020
    Assignee: Toshiba Global Commerce Solutions Holdings Corporation
    Inventors: Monsak Jason Chirakansakcharoen, Dean Frederick Herring, Ankit Singh, David John Steiner
  • Patent number: 10593085
    Abstract: In some embodiments, a source image depicting a face can be accessed. A portion of the source image that depicts the face can be determined. A search query can be acquired based on user input. A set of one or more target images associated with the search query can be identified. A respective location, within each target image from the set of one or more target images, where the portion of the source image is to be rendered can be identified. For each target image from the set of one or more target images, the portion of the source image can be rendered at the respective location within each target image to produce a set of one or more combined images. Each combined image in the set of combined images can include the portion of the source image rendered at the respective location within each target image.
    Type: Grant
    Filed: July 6, 2017
    Date of Patent: March 17, 2020
    Assignee: Facebook, Inc.
    Inventor: Irena Kemelmaher
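The compositing step in the abstract above (rendering the face portion of the source image at a location within each target image) can be illustrated with a toy paste operation. Images here are plain 2-D lists of pixel values, and the patch size and paste location are assumptions.

```python
# Render a cropped source-image portion into a target image at (top, left).
def paste(target, patch, top, left):
    """Return a combined image with `patch` rendered into `target`."""
    out = [row[:] for row in target]  # copy so the target is untouched
    for r, prow in enumerate(patch):
        for c, v in enumerate(prow):
            out[top + r][left + c] = v
    return out

face = [[1, 1], [1, 1]]                 # portion of the source image
target = [[0] * 4 for _ in range(4)]    # one target image from the search results
combined = paste(target, face, 1, 1)
```

Repeating this over the whole set of target images would yield the set of combined images the abstract describes.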
  • Patent number: 10586306
    Abstract: The present disclosure relates to an immersive display apparatus and method for creation of a peripheral view image corresponding to an input video, the method comprising a pre-processing step of obtaining scene-space information at a main-view video signal corresponding to a first area, a pre-warping step of performing first warping to at least one neighborhood frame corresponding to a target frame included in the pre-processed video signal and determining an outlier from the result of the first warping, a sampling step of sampling at least one neighborhood frame to be used for extrapolation from the result of the first warping, a warping step of performing second warping to the sampled frame except for the outlier to generate a peripheral view image signal corresponding to a second area around the first area, and a blending step of blending the peripheral view image signal to the main-view video signal.
    Type: Grant
    Filed: July 12, 2017
    Date of Patent: March 10, 2020
    Assignee: KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY
    Inventors: Junyong Noh, Sangwoo Lee, Jungjin Lee, Kyehyun Kim, Bumki Kim
  • Patent number: 10586392
    Abstract: Information representing a position and orientation of a captured image or information for deriving the position and orientation is acquired from the captured image as extraction information, and a reduced-information-amount image is generated by reducing an amount of information in the captured image in accordance with a distance from a position that a user is gazing at in the captured image. The reduced-information-amount image and the extraction information are outputted to an external device, and a composite image that has been generated based on the reduced-information-amount image and an image of a virtual space generated by the external device based on the extraction information is received.
    Type: Grant
    Filed: November 21, 2017
    Date of Patent: March 10, 2020
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Kazutaka Inoguchi, Atsushi Hamaguchi
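The gaze-contingent information reduction in the abstract above can be sketched as quantizing pixels more coarsely the farther they sit from the gaze position, which shrinks the amount of information sent to the external device. The radii and quantization steps below are illustrative assumptions.

```python
# Quantize one 8-bit pixel value; coarser steps further from the gaze point.
def reduce_pixel(value: int, dist: float) -> int:
    if dist < 50:
        step = 1      # full fidelity inside the region the user is gazing at
    elif dist < 150:
        step = 8
    else:
        step = 32     # heavy quantization in the periphery
    return (value // step) * step
```

Applying this per pixel yields the reduced-information-amount image that is transmitted alongside the extraction information.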
  • Patent number: 10586276
    Abstract: A method and an electronic device for composing an image are provided. An electronic device includes a display configured to display an image of a user photographed by a camera; an input component configured to receive a user input; a communicator configured to facilitate a communication with an external server; and a processor configured to display the image of the user on the display, set an avatar region within the image of the user based on the received user input, generate an avatar to be displayed on the avatar region, and control the display to combine the avatar with the image of the user and to display the resulting composite image.
    Type: Grant
    Filed: March 31, 2017
    Date of Patent: March 10, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Sun Choi, Sun-hwa Kim, Jae-young Lee, Gi-ppeum Choi
  • Patent number: 10574962
    Abstract: Methods and apparatus for receiving content including images of surfaces of an environment visible from a default viewing position and images of surfaces not visible from the default viewing position, e.g., occluded surfaces, are described. Occluded and non-occluded image portions are received in content streams that can be in a variety of stream formats. In one stream format non-occluded image content is packed into a frame with occluded image content with the occluded image content normally occupying a small portion of the frame. In other embodiments occluded image portions are received in an auxiliary data stream which is multiplexed with a data stream providing frames of non-occluded image content. UV maps which are used to map received image content to segments of an environmental model are also supplied with the UV maps corresponding to the format of the frames which are used to provide the images that serve as textures.
    Type: Grant
    Filed: March 1, 2016
    Date of Patent: February 25, 2020
    Assignee: NextVR Inc.
    Inventors: David Cole, Alan McKay Moss
  • Patent number: 10559127
    Abstract: A method for forming a reconstructed 3D mesh includes receiving a set of captured depth maps associated with a scene, performing an initial camera pose alignment associated with the set of captured depth maps, and overlaying the set of captured depth maps in a reference frame. The method also includes detecting one or more shapes in the overlaid set of captured depth maps and updating the initial camera pose alignment to provide a shape-aware camera pose alignment. The method further includes performing shape-aware volumetric fusion and forming the reconstructed 3D mesh associated with the scene.
    Type: Grant
    Filed: September 23, 2016
    Date of Patent: February 11, 2020
    Assignee: Magic Leap, Inc.
    Inventors: Xiaolin Wei, Yifu Zhang
  • Patent number: 10529124
    Abstract: Systems, methods, and computer-readable storage media can be used to perform alpha-projection. One method may include receiving an image from a system storing one or more images. The method may further include alpha-projecting the received image to assign alpha channel values to the received image by projecting one or more pixels of the received image from an original color to a second color and setting alpha channel values for the one or more pixels by determining the alpha channel value that causes each second color alpha blended with a projection origin color to be the original color. The method may further include displaying the alpha-projected image as a foreground image over a background image.
    Type: Grant
    Filed: September 8, 2017
    Date of Patent: January 7, 2020
    Assignee: Google LLC
    Inventor: Emilio Antunez
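The alpha-blending inversion described in the abstract above can be sketched concretely: given an original pixel color and a projection origin color, find a second color and an alpha value such that alpha-blending the second color over the origin reproduces the original. The white origin, 8-bit channels, and minimal-alpha choice are assumptions.

```python
ORIGIN = (255, 255, 255)  # projection origin color (assumed white)

def alpha_project(color):
    """Return (second_color, alpha) such that, per channel,
    alpha * second + (1 - alpha) * origin == color."""
    if color == ORIGIN:
        return ORIGIN, 0.0  # pixel equals the origin: fully transparent
    # smallest alpha that keeps every projected channel inside [0, 255]
    alpha = max((o - c) / o for c, o in zip(color, ORIGIN))
    second = tuple(o + (c - o) / alpha for c, o in zip(color, ORIGIN))
    return second, alpha
```

Blending the resulting foreground over the origin color recovers the original pixel exactly, which is the property the method relies on when displaying the alpha-projected image over a background.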
  • Patent number: 10520889
    Abstract: Method for computing the code for the reconstruction of three-dimensional scenes which include objects which partly absorb light or sound. The method can be implemented in a computing unit. In order to reconstruct a three-dimensional scene as realistically as possible, the diffraction patterns are computed separately at their point of origin considering the instances of absorption in the scene. The method can be used for the representation of three-dimensional scenes in a holographic display or volumetric display. Further, it can be carried out to achieve a reconstruction of sound fields in an array of sound sources.
    Type: Grant
    Filed: August 19, 2016
    Date of Patent: December 31, 2019
    Assignee: SEEREAL TECHNOLOGIES S.A.
    Inventors: Enrico Zschau, Nils Pfeifer
  • Patent number: 10521937
    Abstract: Vector-format computer graphics tools have become very powerful, allowing artists, designers, etc. to mimic many artistic styles, exploit automated techniques, and work across different simulated physical media and digital media. However, hand-drawing and sketching in vector-format graphics is unnatural, and a user's strokes as rendered by software generally appear artificial. In contrast to today's hand-drawing and sketching, which requires significant training of, and understanding by, the user of complex vector graphics methods, embodiments of the invention lower the barrier to accessing computer graphics applications by making hand-drawing and sketching easier to perform. Accordingly, the inventors have established a direct vector-based hand-drawing/sketching entry format supporting any input methodology.
    Type: Grant
    Filed: February 28, 2018
    Date of Patent: December 31, 2019
    Assignee: Corel Corporation
    Inventors: Tony Severenuk, Paul Legomski, Tekin Ozbek, Thomas Jackson, Boban Bogdanic, Andrew Stacey
  • Patent number: 10504417
    Abstract: A novel display system includes a host and a display. In a particular embodiment the host includes a data scaler and a dual frame buffer. Frames of image data are down-scaled before being transferred to the display, and up-scaled while being loaded into the frame buffer of the display. The down-scaled frames of image data include less data than the frames of image data. In another embodiment, the process of loading the image data into the display begins before an entire frame of data is loaded into the frame buffer. Increasingly sized portions of the image data, each corresponding to a different color field, are asserted on the display and displayed one at a time. The portions of the frame that were not previously displayed are displayed along with the initial portions of a subsequent frame.
    Type: Grant
    Filed: April 1, 2016
    Date of Patent: December 10, 2019
    Assignee: OmniVision Technologies, Inc.
    Inventor: Sunny Yat-san Ng
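The bandwidth-saving scheme in the abstract above (down-scale on the host before transfer, up-scale while loading into the display-side buffer) can be sketched with a toy example. Nearest-neighbor scaling and a 1-D "frame" of samples keep the illustration short; both are assumptions.

```python
# Host side: keep every factor-th sample before sending over the link.
def downscale(frame, factor):
    return frame[::factor]

# Display side: repeat each sample while loading into the frame buffer.
def upscale(frame, factor):
    return [v for v in frame for _ in range(factor)]

sent = downscale([10, 10, 20, 20, 30, 30, 40, 40], 2)  # half the data on the wire
restored = upscale(sent, 2)
```

The transferred frame carries half the data, and the display reconstructs a full-size frame while loading it into the buffer.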