Patents Examined by Phi Hoang
  • Patent number: 11189105
    Abstract: There is provided an image processing device including: a data storage unit storing feature data indicating a feature of appearance of one or more physical objects; an environment map building unit for building an environment map based on an input image obtained by imaging a real space and the feature data, the environment map representing a position of a physical object present in the real space; a control unit for acquiring procedure data for a set of procedures of operation to be performed in the real space, the procedure data defining a correspondence between a direction for each procedure and position information designating a position at which the direction is to be displayed; and a superimposing unit for generating an output image by superimposing the direction for each procedure at a position in the input image determined based on the environment map and the position information, using the procedure data.
    Type: Grant
    Filed: October 13, 2020
    Date of Patent: November 30, 2021
    Assignee: SONY CORPORATION
    Inventors: Yasuhiro Suto, Masaki Fukuchi, Kenichirou Ooi, Jingjing Guo, Kouichi Matsuda
  • Patent number: 11189085
    Abstract: Technologies for generating and using 3D models are described. In some embodiments, the technologies employ a content creation device to produce a 3D model of an environment based at least in part on depth data and color data, which may be provided by one or more cameras. Contextual information such as location information, orientation information, etc., may also be collected or otherwise determined, and associated with points of the 3D model. Access points to the imaged environments may be identified and labeled as anchor points within the 3D model. Multiple 3D models may then be combined into an aggregate model, wherein anchor points of constituent 3D models in the aggregate model are substantially aligned. Devices, systems, and computer readable media utilizing such technologies are also described.
    Type: Grant
    Filed: May 27, 2016
    Date of Patent: November 30, 2021
    Assignee: INTEL CORPORATION
    Inventors: Shivakumar Doddamani, Jim S. Baca
  • Patent number: 11176631
    Abstract: Disclosed are various embodiments for GPU-based parallel indexing for concurrent spatial queries. A number of nodes in a tree to be partitioned is determined. The tree is then iteratively partitioned with the GPU. Nodes are created with the GPU. Finally, a point insertion is performed using the GPU.
    Type: Grant
    Filed: June 25, 2019
    Date of Patent: November 16, 2021
    Assignee: UNIVERSITY OF SOUTH FLORIDA
    Inventors: Yicheng Tu, Zhila Nouri Lewis
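The abstract describes iteratively partitioning a spatial tree, with node creation and point insertion performed on the GPU. A sequential Python sketch of the level-by-level idea follows: each pass splits every overfull node at the current depth, mirroring how a GPU could process all nodes of one level in parallel. Names and the quadtree choice are illustrative assumptions, not the patent's actual kernels.

```python
def build_quadtree(points, bounds, capacity=4, max_depth=8):
    """Level-by-level quadtree build over 2D points.
    Each outer iteration partitions every overfull node at the
    current depth, the step a GPU could run as one parallel pass."""
    root = {"bounds": bounds, "points": list(points), "children": []}
    frontier = [root]
    for _ in range(max_depth):
        next_frontier = []
        for node in frontier:
            if len(node["points"]) <= capacity:
                continue
            x0, y0, x1, y1 = node["bounds"]
            mx, my = (x0 + x1) / 2, (y0 + y1) / 2
            quads = [(x0, y0, mx, my), (mx, y0, x1, my),
                     (x0, my, mx, y1), (mx, my, x1, y1)]
            node["children"] = [{"bounds": qb, "points": [], "children": []}
                                for qb in quads]
            # point insertion: route each point to its quadrant
            for p in node["points"]:
                qx = 1 if p[0] >= mx else 0
                qy = 2 if p[1] >= my else 0
                node["children"][qx + qy]["points"].append(p)
            node["points"] = []  # interior nodes hold no points
            next_frontier.extend(node["children"])
        if not next_frontier:
            break
        frontier = next_frontier
    return root
```

Concurrent spatial queries would then traverse this tree; the sketch only covers the build phase the abstract outlines.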
  • Patent number: 11170533
    Abstract: An image dataset is compressed by combining depth values from pixel depth arrays, where the combining criteria are based on object data and/or on variations of the depth values in a first pixel image value array. A modified image dataset is generated in which the first pixel image value array, represented in the received image dataset by a first number of image value array samples, is instead represented by a second number of compressed image value array samples, the second number being less than or equal to the first number.
    Type: Grant
    Filed: January 4, 2021
    Date of Patent: November 9, 2021
    Assignee: Weta Digital Limited
    Inventor: Peter M. Hillman
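The abstract combines a pixel's depth samples when their depth variation permits, yielding at most as many output samples as input samples. A minimal sketch of that merge step for one pixel's deep samples; the tolerance parameter and merge-by-averaging policy are assumptions for illustration:

```python
def compress_deep_pixel(samples, depth_tolerance=0.05):
    """Merge a pixel's (depth, value) samples: consecutive samples
    whose depths lie within depth_tolerance of the run's first depth
    are averaged into one sample, so the output count (the abstract's
    second number) is <= the input count (the first number)."""
    if not samples:
        return []
    samples = sorted(samples)  # front-to-back by depth
    out, run = [], [samples[0]]
    def merge(run):
        return (sum(s[0] for s in run) / len(run),
                sum(s[1] for s in run) / len(run))
    for d, v in samples[1:]:
        if d - run[0][0] <= depth_tolerance:
            run.append((d, v))
        else:
            out.append(merge(run))
            run = [(d, v)]
    out.append(merge(run))
    return out
```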
  • Patent number: 11164386
    Abstract: A computing device 2, such as a general-purpose smartphone or general-purpose tablet computing device, comprises one or more inertial sensors and an image sensor. The device 2 produces stereoscopic images of a virtual environment on the display during a virtual reality (VR) session controlled by a user of the computing device. The device conducts visual odometry using at least image data received from the image sensor, and selectively activates and deactivates the visual odometry according to activity of the user during the virtual reality session. When the visual odometry is activated, the device controls the virtual reality session by at least position information from the visual odometry. When the visual odometry is deactivated, the device controls the virtual reality session by at least orientation information from the one or more inertial sensors.
    Type: Grant
    Filed: March 25, 2020
    Date of Patent: November 2, 2021
    Assignee: Arm Limited
    Inventors: Roberto Lopez Mendez, Daren Croxford
  • Patent number: 11164370
    Abstract: An information processing apparatus includes: a memory; and a processor configured to: store data of images captured of a structure at positions changed relative to the structure and data of a three-dimensional model indicating a three-dimensional shape of the structure; select an image containing the largest image of a damaged portion in the structure as a first accumulated image in each of which the damaged portion is imaged among the images; and perform a selection process of selecting second accumulated images except the first accumulated image such that regarding an imaging range that is not covered by the first accumulated image in the three-dimensional model, an imaging range which overlaps between the second accumulated images is reduced, and a coverage ratio of the imaging range in the three-dimensional model with the selected second accumulated images and the first accumulated image is equal to or more than a predetermined value.
    Type: Grant
    Filed: March 9, 2020
    Date of Patent: November 2, 2021
    Assignee: FUJITSU LIMITED
    Inventor: Eiji Hasegawa
  • Patent number: 11164002
    Abstract: Disclosed herein are a method for human-machine interaction and an apparatus for the same. The method includes receiving object identification input for identifying an object related to the task to be dictated to a machine through the I/O interface of a user device that displays a 3D space; displaying an object identification visual interface, corresponding to the object identified within the space recognized by the machine, on the user device in an augmented-reality manner; receiving position identification input for identifying a position in the 3D space related to the task; displaying a position identification visual interface, corresponding to the position identified within the space recognized by the machine, on the user device in an augmented-reality manner; and receiving information related to the result of the task performed through the machine.
    Type: Grant
    Filed: December 27, 2019
    Date of Patent: November 2, 2021
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventor: Joo-Haeng Lee
  • Patent number: 11158101
    Abstract: A position data acquiring unit 200 acquires position data indicating a position of a real object. A delay deriving unit 206 derives delay time in an information processing system 1. A motion predicting unit 300 predicts a movement of the object, and identifies an object position at a future time that is later by a predetermined length of time, on the basis of the position data acquired by the position data acquiring unit 200, the predetermined length of time being equal to or longer than the delay time. An image generating unit 302 generates a prediction image in which the object position at the future time is reflected. A display image providing unit 204 acquires the generated prediction image, and provides the prediction image to an HMD.
    Type: Grant
    Filed: June 7, 2017
    Date of Patent: October 26, 2021
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventors: Tatsuki Amimoto, Akira Nishiyama
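The motion predicting unit above extrapolates an object's position to a future time at least as far ahead as the system delay. The abstract does not specify the predictor model; a constant-velocity extrapolation is the simplest assumption and is sketched here:

```python
def predict_position(history, lookahead):
    """Constant-velocity prediction: estimate velocity from the two
    most recent (t, x, y) samples and extrapolate lookahead seconds
    ahead, where lookahead >= the derived system delay time."""
    (t0, x0, y0), (t1, x1, y1) = history[-2], history[-1]
    dt = t1 - t0
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return (x1 + vx * lookahead, y1 + vy * lookahead)
```

The prediction image would then be rendered at the returned position, so the object appears where it will be when the frame actually reaches the HMD.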
  • Patent number: 11158116
    Abstract: A method, computer program, and computer system for point cloud coding is provided. Data corresponding to a point cloud is received, and one or more geometric features are detected from among the data corresponding to the point cloud. A representation is determined for one or more of the detected geometric features, and the received data is encoded or decoded based on the determined representations whereby the point cloud is reconstructed based on the decoded data.
    Type: Grant
    Filed: July 10, 2020
    Date of Patent: October 26, 2021
    Assignee: TENCENT AMERICA LLC
    Inventors: Xiang Zhang, Wen Gao, Shan Liu
  • Patent number: 11151699
    Abstract: A virtual, augmented, or mixed reality display system includes a display configured to display virtual, augmented, or mixed reality image data, the display including one or more optical components which introduce optical distortions or aberrations to the image data. The system also includes a display controller configured to provide the image data to the display. The display controller includes memory for storing optical distortion correction information, and one or more processing elements to at least partially correct the image data for the optical distortions or aberrations using the optical distortion correction information.
    Type: Grant
    Filed: July 10, 2020
    Date of Patent: October 19, 2021
    Assignee: Magic Leap, Inc.
    Inventors: Jose Felix Rodriguez, Ricardo Martinez Perez
  • Patent number: 11140939
    Abstract: Sensor assisted head mounted displays for welding are disclosed. Disclosed example head mounted devices include an optical sensor, an augmented reality controller, a graphics processing unit, and a semi-transparent display. The optical sensor collects an image of a weld environment. The augmented reality controller determines a simulated object to be presented in a field of view, a position in the field of view, and a perspective of the simulated object in the field of view. The graphics processing unit renders the simulated object based on the perspective to represent the simulated object being present in the field of view and in the weld environment. The display presents the rendered simulated object within the field of view based on the position. At least a portion of the weld environment is observable through the display and the lens when the display is presenting the rendered simulated object.
    Type: Grant
    Filed: July 13, 2020
    Date of Patent: October 12, 2021
    Assignee: Illinois Tool Works Inc.
    Inventor: Christopher Hsu
  • Patent number: 11144120
    Abstract: A computer system is provided. The computer system includes a memory and at least one processor coupled to the memory and configured to detect open eyes in an image received from a camera of the computer; recognize properties of the open eyes including orientation, designation as a left or right eye, and relative position of the eyes; group the detected open eyes into pairs of eyes based on the recognized properties; measure pupillary distance (e.g. represented in image pixels) of each of the pairs of eyes; identify the pair of eyes associated with the largest pupillary distance as the eyes closest to the camera; calculate a relative distance from the camera to the closest pair of eyes, the relative distance calculated as a ratio of the camera focal length to the largest pupillary distance; and reduce brightness of the computer screen if the relative distance is less than a threshold ratio.
    Type: Grant
    Filed: February 1, 2021
    Date of Patent: October 12, 2021
    Assignee: Citrix Systems, Inc.
    Inventors: Daowen Wei, Jian Ding, Hengbo Wang
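The abstract states the arithmetic directly: the relative distance is the ratio of the camera focal length to the largest pupillary distance in pixels, and brightness is reduced when that ratio falls below a threshold. A compact sketch of just that computation (eye detection and grouping are assumed done upstream):

```python
import math

def eye_distance_dimming(eye_pairs, focal_length_px, threshold_ratio):
    """eye_pairs: list of ((x, y), (x, y)) left/right pupil centers in
    image pixels. The pair with the largest pupillary distance is the
    closest to the camera; returns (relative_distance, dim_screen)."""
    pupillary = [math.dist(left, right) for left, right in eye_pairs]
    largest = max(pupillary)
    relative_distance = focal_length_px / largest
    return relative_distance, relative_distance < threshold_ratio
```

For example, with a 500 px focal length, a pair of eyes 100 px apart gives a relative distance of 5.0; with a threshold ratio of 6, the screen would be dimmed.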
  • Patent number: 11138791
    Abstract: A computer-implemented method that allows users to upload a set of two-dimensional images for evaluation using different virtual reality methods. The disclosed method allows the user to select one aspect of an image, see the corresponding image highlighted, and manipulate the corresponding image.
    Type: Grant
    Filed: March 9, 2020
    Date of Patent: October 5, 2021
    Assignee: Intuitive Research and Technology Corporation
    Inventors: Chanler Crowe Cantor, Michael Jones, Kyle Russell, Michael Yohe
  • Patent number: 11127193
    Abstract: A system for determining an approximate transformation between a first coordinate system and a second coordinate system, the system comprising a processing resource configured to: obtain an image captured by an image acquisition device of a mobile device; identify one or more synchronization objects within the image; determine first spatial dispositions of the synchronization objects with respect to a first coordinate-system origin of the first coordinate-system, the first coordinate system being a coordinate system of the mobile device; obtain information of second spatial dispositions of the synchronization objects with respect to a second coordinate-system origin of the second coordinate system; and determine, by employing an optimization scheme, utilizing the first spatial dispositions and the second spatial dispositions, the approximate transformation between the first coordinate-system and the second coordinate-system, the approximate transformation being usable by the mobile device for placing virtual
    Type: Grant
    Filed: May 7, 2019
    Date of Patent: September 21, 2021
    Assignee: RESIGHT LTD.
    Inventors: Eran Segal, Ari Zigler, Omri Yaakov Stein
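The abstract estimates a transformation between the mobile device's coordinate system and a second coordinate system from the spatial dispositions of synchronization objects seen in both. The patent's optimization scheme is unspecified; the standard closed-form least-squares rigid alignment (here reduced to 2D, in pure Python) is one way to sketch the idea:

```python
import math

def fit_rigid_2d(src, dst):
    """Least-squares rotation + translation mapping src points onto
    corresponding dst points (the same synchronization objects seen
    in two coordinate systems). Returns (theta, tx, ty)."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    # accumulate dot/cross sums of the centered point sets
    a = b = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        sx -= csx; sy -= csy; dx -= cdx; dy -= cdy
        a += sx * dx + sy * dy
        b += sx * dy - sy * dx
    theta = math.atan2(b, a)            # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)      # translation after rotation
    ty = cdy - (s * csx + c * csy)
    return theta, tx, ty
```

With the transform in hand, the device can place virtual content expressed in the second coordinate system correctly in its own frame.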
  • Patent number: 11113888
    Abstract: Dynamic virtual content(s) to be superimposed on a representation of a real 3D scene complies with a scenario defined before runtime and involving real-world constraints (23). Real-world information (22) is captured in the real 3D scene, and the scenario is executed at runtime (14) in the presence of the real-world constraints. When the real-world constraints are not identified (12) from the real-world information, a transformation of the representation of the real 3D scene to a virtually adapted 3D scene is carried out (13) before executing the scenario, so that the virtually adapted 3D scene fulfills those constraints, and the scenario is executed in the virtually adapted 3D scene instead of the real 3D scene. Application to mixed reality.
    Type: Grant
    Filed: December 19, 2017
    Date of Patent: September 7, 2021
    Assignee: INTERDIGITAL CE PATENT HOLDINGS, SAS
    Inventors: Anthony Laurent, Matthieu Fradet, Caroline Baillard
  • Patent number: 11113870
    Abstract: A method and an apparatus for transmitting and receiving video content including 3D data are provided. According to an embodiment, a method for transmitting data related to content including an omnidirectional image and a point cloud object is provided. The method includes generating media data and metadata for the content including the omnidirectional image and the point cloud object; and transmitting the generated media data and the generated metadata, wherein the metadata comprises information for specifying sub-spaces of a bounding space related to the point cloud object.
    Type: Grant
    Filed: March 13, 2020
    Date of Patent: September 7, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Eric Yip, Jaehyeon Bae, Hyunkoo Yang
  • Patent number: 11080819
    Abstract: An image processing system receives an input depth image with a surface that is not developable and generates an output depth image with a piecewise developable surface that approximates the input depth image. Height values for the output depth image are determined using an optimization problem that balances data fidelity and developability. Data fidelity is based on minimizing differences in height values of pixels in the output depth image and height values of pixels in the input depth image. Developability is based on rank minimization of Hessians computed for pixels in the output depth image. In some configurations, the optimization problem is formulated as a semi-definite programming problem and solved using a tailor-made alternating direction method of multipliers algorithm.
    Type: Grant
    Filed: July 8, 2020
    Date of Patent: August 3, 2021
    Assignee: ADOBE INC.
    Inventors: Noam Aigerman, Alec Jacobson, Silvia Gonzalez Sellan
  • Patent number: 11080814
    Abstract: A method including rendering graphics for an application using graphics processing units (GPUs). Responsibility for rendering of geometry is divided between GPUs based on screen regions, each GPU having a corresponding division of the responsibility which is known. First pieces of geometry are rendered at the GPUs during a rendering phase of a previous image frame. Statistics are generated for the rendering of the previous image frame. Second pieces of geometry of a current image frame are assigned based on the statistics to the GPUs for geometry testing. Geometry testing at a current image frame on the second pieces of geometry is performed to generate information regarding each piece of geometry and its relation to each screen region, the geometry testing performed at each of the GPUs based on the assigning. The information generated for the second pieces of geometry is used when rendering the geometry at the GPUs.
    Type: Grant
    Filed: February 3, 2020
    Date of Patent: August 3, 2021
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Mark E. Cerny, Florian Strauss, Tobias Berghoff
  • Patent number: 11074703
    Abstract: A device is provided for encrypting and/or decrypting a point cloud having a plurality of data points that collectively produce a three-dimensional (ā€œ3Dā€) image. Each data point may have a set of elements with values that define a position of the data point in 3D space and visual characteristics of the data point. Encrypting the point cloud may include deterministically selecting a set of data points to encrypt, and deterministically changing the data point element values of the selected data points so that the 3D image produced by the encrypted data points is different than the 3D image produced from the unencrypted data points. Decrypting the resulting encrypted point cloud may include deterministically reselecting the encrypted data points using an encryption key, and deterministically reversing the changes made to the data point element values of the selected data points based on transformations that are specified as part of the encryption key.
    Type: Grant
    Filed: March 25, 2021
    Date of Patent: July 27, 2021
    Assignee: Illuscio, Inc.
    Inventors: Robert Monaghan, Venkatarao Maruvada, Joseph Bogacz
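The abstract's scheme has two deterministic steps keyed by the encryption key: select a subset of points, then reversibly transform their element values. A toy sketch with a key-seeded PRNG; the additive-offset transform and the half-the-points selection are illustrative assumptions, not the patent's actual transformations:

```python
import random

def crypt_point_cloud(points, key, encrypt=True):
    """Deterministically select a subset of (x, y, z) points with a
    key-seeded PRNG and offset their values; decryption reseeds the
    same PRNG, reselects the same points, and reverses the offsets."""
    rng = random.Random(key)
    n = len(points)
    selected = rng.sample(range(n), k=max(1, n // 2))
    offsets = {i: (rng.uniform(-5, 5), rng.uniform(-5, 5), rng.uniform(-5, 5))
               for i in selected}
    sign = 1 if encrypt else -1         # decryption reverses the change
    out = list(points)
    for i, (ox, oy, oz) in offsets.items():
        x, y, z = out[i]
        out[i] = (x + sign * ox, y + sign * oy, z + sign * oz)
    return out
```

Because both selection and offsets are derived from the key alone, decryption with the same key restores the original 3D image, while a wrong key scrambles a different subset and the image stays distorted.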
  • Patent number: 11069027
    Abstract: In implementations of precise glyph transformations as editable text, a computing device implements a transformation system to generate bounding boxes for a first glyph and a second glyph of multiple glyphs. The bounding boxes are concatenated as a multiple glyph bounding box for the multiple glyphs. The transformation system receives a user input defining a transformation of the multiple glyph bounding box relative to an object, and the system maps the transformation of the multiple glyph bounding box to the bounding boxes for the first glyph and the second glyph. The multiple glyphs are rendered in a user interface as the editable text having the transformation based on the mapping.
    Type: Grant
    Filed: January 22, 2020
    Date of Patent: July 20, 2021
    Assignee: Adobe Inc.
    Inventors: Arushi Jain, Praveen Kumar Dhanuka, Ashish Jain